Applications of Artificial Intelligence in Medicine Practice

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Biomedical Engineering".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 29737

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Department of Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
Interests: interdisciplinary area of cyber-physical systems; medical AI

Guest Editor
Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA
Interests: applied cryptography; security and privacy in critical applications; data science in cybersecurity; blockchains and smart contracts

Guest Editor
School of Computer Science and Information Engineering, The Catholic University of Korea, Bucheon 14462, Republic of Korea
Interests: mobile systems; electronic identification system; wireless systems

Special Issue Information

Dear Colleagues,

Owing to the development of new artificial intelligence (AI) methods based on machine learning and deep learning, the practice of medicine has evolved over time. Combined with rapid advances in high-performance computing, AI-based systems have improved the accuracy and efficiency of diagnosis and treatment across various specializations. Sophisticated AI algorithms can learn features from large volumes of healthcare data and then apply the resulting insights to assist clinical practice. In addition, self-correction allows an algorithm to improve its accuracy on the basis of feedback. Consequently, an AI-based healthcare support system can assist physicians in administering proper patient care, thereby helping to reduce the diagnostic and therapeutic errors that are inevitable in human-based clinical practice. Furthermore, such a system can derive useful information from data on a large patient population to support real-time inferences such as health risk alerts and health outcome predictions.

In this Special Issue, we solicit original articles from a wide variety of interdisciplinary perspectives on the theory and application of AI in medicine, medically oriented human biology, and healthcare. Topics include (but are not limited to): applications of AI in biomedicine and clinical medicine, machine learning-based decision support, robotic surgery, data analytics and mining, laboratory information systems, and the use of AI in medical education. We place particular emphasis on the practical aspects of each study; hence, the inclusion of a clinical assessment of the usefulness and potential impact of the submitted work is strongly recommended.

Prof. Dr. Kyungtae Kang
Dr. Junggab Son
Dr. Hyo-Joong Suh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medicine
  • healthcare
  • decision support systems
  • computational intelligence

Published Papers (11 papers)


Editorial

Jump to: Research

4 pages, 191 KiB  
Editorial
Application of Artificial Intelligence in the Practice of Medicine
by Hyo-Joong Suh, Junggab Son and Kyungtae Kang
Appl. Sci. 2022, 12(9), 4649; https://doi.org/10.3390/app12094649 - 6 May 2022
Viewed by 1630
Abstract
Advancements in artificial intelligence (AI) based on machine and deep learning are transforming certain medical disciplines [...] Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)

Research

Jump to: Editorial

22 pages, 14460 KiB  
Article
A Deep Learning Ensemble Method to Visual Acuity Measurement Using Fundus Images
by Jin Hyun Kim, Eunah Jo, Seungjae Ryu, Sohee Nam, Somin Song, Yong Seop Han, Tae Seen Kang, Woongsup Lee, Seongjin Lee, Kyong Hoon Kim, Hyunju Choi and Seunghwan Lee
Appl. Sci. 2022, 12(6), 3190; https://doi.org/10.3390/app12063190 - 21 Mar 2022
Cited by 6 | Viewed by 2400
Abstract
Visual acuity (VA) is a measure of the ability to distinguish shapes and details of objects at a given distance, i.e., of the spatial resolution of the visual system. Vision is one of the basic health indicators closely related to a person's quality of life, and VA is among the first basic tests performed when an eye disease develops. VA is usually measured using a Snellen chart or E-chart from a specific distance. However, in some cases, such as unconscious patients or patients with conditions such as dementia, it can be impossible to measure VA with such traditional chart-based methods. This paper provides a machine learning-based VA measurement methodology that determines VA from fundus images alone. In particular, the levels of VA, conventionally divided into 11 levels, are grouped into four classes, and three machine learning models (one SVM model and two CNN models) are combined into an ensemble method that predicts the VA class from a fundus image. In a performance evaluation conducted on 4000 randomly selected fundus images, we confirm that our ensemble method achieves an average accuracy of 82.4% over the four VA classes, with per-class accuracies of 88.5%, 58.8%, 88%, and 94.3% for Classes 1 through 4, respectively. To the best of our knowledge, this is the first paper on VA measurement from fundus images using deep machine learning. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
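The ensemble described above can be pictured as soft voting over class probabilities. This is a minimal, hypothetical sketch, not the authors' implementation: the three probability vectors stand in for the paper's SVM and two CNN models, and the class indices 0–3 stand in for VA Classes 1–4.

```python
# Hypothetical soft-voting ensemble over four VA classes.
# The three "models" below are stand-ins for an SVM and two CNNs.

def ensemble_predict(prob_lists):
    """Average the class-probability vectors from several models
    and return the index of the most probable class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Three models score one fundus image over VA classes (index 0-3):
svm_probs  = [0.10, 0.20, 0.60, 0.10]
cnn1_probs = [0.05, 0.15, 0.70, 0.10]
cnn2_probs = [0.20, 0.30, 0.40, 0.10]
print(ensemble_predict([svm_probs, cnn1_probs, cnn2_probs]))  # → 2
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh two uncertain ones, which is one common way such ensembles are combined.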

17 pages, 547 KiB  
Article
A Novel Bayesian Linear Regression Model for the Analysis of Neuroimaging Data
by Albert Belenguer-Llorens, Carlos Sevilla-Salcedo, Manuel Desco, Maria Luisa Soto-Montenegro and Vanessa Gómez-Verdejo
Appl. Sci. 2022, 12(5), 2571; https://doi.org/10.3390/app12052571 - 1 Mar 2022
Cited by 1 | Viewed by 1996
Abstract
In this paper, we propose a novel machine learning model based on Bayesian linear regression, intended to deal with the low sample-to-variable ratio typically found in neuroimaging studies, particularly those focusing on mental disorders. The proposed model combines feature selection capabilities with a formulation in the dual space that, in turn, enables efficient work with neuroimaging data. We tested the proposed algorithm on real MRI data from an animal model of schizophrenia. The results show that our proposal efficiently predicts the diagnosis and, at the same time, detects regions that clearly match brain areas well known to be related to schizophrenia. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
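The dual-space formulation mentioned above is not spelled out in the abstract, but the underlying trick can be illustrated with plain ridge regression (a deliberately simplified stand-in for the paper's Bayesian model): with N samples and D features, D ≫ N, the primal solution w = (XᵀX + λI_D)⁻¹Xᵀy requires a D×D inverse, while the equivalent dual form w = Xᵀ(XXᵀ + λI_N)⁻¹y needs only an N×N inverse. The toy numbers below are invented.

```python
# Dual-form ridge regression sketch (illustrative, not the authors' model).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# 2 samples, 5 features (in real neuroimaging data, D >> N is far more extreme)
X = [[1.0, 0.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 0.0, 2.0, 1.0]]
y = [[1.0], [-1.0]]
lam = 0.1

K = matmul(X, transpose(X))     # 2x2 Gram matrix instead of a 5x5 one
for i in range(2):
    K[i][i] += lam
alpha = matmul(inv2(K), y)      # dual coefficients, one per sample
w = matmul(transpose(X), alpha) # map back to feature-space weights
print([round(v[0], 3) for v in w])  # ≈ [0.196, -0.196, 0.392, -0.392, 0.0]
```

The resulting weights reproduce the training targets almost exactly, while all matrix work happens in the small sample dimension.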

13 pages, 3459 KiB  
Article
All You Need Is a Few Dots to Label CT Images for Organ Segmentation
by Mingeon Ju, Moonhyun Lee, Jaeyoung Lee, Jaewoo Yang, Seunghan Yoon and Younghoon Kim
Appl. Sci. 2022, 12(3), 1328; https://doi.org/10.3390/app12031328 - 26 Jan 2022
Cited by 2 | Viewed by 2518
Abstract
Image segmentation is used to analyze medical images quantitatively for diagnosis and treatment planning. Since manual segmentation requires considerable time and effort from experts, research on automatic segmentation is in progress. Recent studies using deep learning have improved performance but require many labeled data. Although public datasets are available for research, manual labeling is still needed wherever a model must be trained on unlabeled regions. We propose a deep-learning-based tool that can easily create training data to alleviate this inconvenience. The proposed tool receives as inputs a CT image and the pixels of the organs the user wants to segment, and extracts the features of the CT image using a deep learning network. Pixels with similar features are then assigned to the same organ. The advantage of the proposed tool is that it can be trained with a small amount of labeled data. After training with 25 labeled CT images, our tool shows results competitive with state-of-the-art segmentation algorithms such as UNet and DeepLabV3. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
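The core idea of the tool, as described above, can be sketched as nearest-feature classification: a few user-placed "dots" label some pixels, and every remaining pixel receives the organ label of the labeled pixel whose feature vector is closest. This is a hypothetical illustration; the 2-D feature vectors and organ names below are invented, whereas the paper derives features from a deep network.

```python
# Assign each pixel the organ label of the nearest labeled "dot"
# in feature space (toy features; illustrative only).

def assign_labels(features, dots):
    """features: {pixel: feature vector}; dots: {pixel: organ label}."""
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    labels = {}
    for px, f in features.items():
        nearest = min(dots, key=lambda d: sq_dist(f, features[d]))
        labels[px] = dots[nearest]
    return labels

features = {(0, 0): [0.9, 0.1], (0, 1): [0.8, 0.2],
            (1, 0): [0.1, 0.9], (1, 1): [0.2, 0.8]}
dots = {(0, 0): "liver", (1, 1): "kidney"}  # two user clicks
print(assign_labels(features, dots))
```

With good features, two dots are enough to propagate labels over the whole image, which is why so little labeled data suffices.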

11 pages, 984 KiB  
Article
Advantages of Machine Learning in Forensic Psychiatric Research—Uncovering the Complexities of Aggressive Behavior in Schizophrenia
by Lena A. Hofmann, Steffen Lau and Johannes Kirchebner
Appl. Sci. 2022, 12(2), 819; https://doi.org/10.3390/app12020819 - 14 Jan 2022
Cited by 20 | Viewed by 2478
Abstract
Linear statistical methods may not be suited to the understanding of psychiatric phenomena such as aggression due to their complexity and multifactorial origins. Here, the application of machine learning (ML) algorithms offers the possibility of analyzing a large number of influencing factors and their interactions. This study aimed to explore inpatient aggression in offender patients with schizophrenia spectrum disorders (SSDs) using a suitable ML model on a dataset of 370 patients. With a balanced accuracy of 77.6% and an AUC of 0.87, support vector machines (SVM) outperformed all the other ML algorithms. Negative behavior toward other patients, the breaking of ward rules, the PANSS score at admission as well as poor impulse control and impulsivity emerged as the most predictive variables in distinguishing aggressive from non-aggressive patients. The present study serves as an example of the practical use of ML in forensic psychiatric research regarding the complex interplay between the factors contributing to aggressive behavior in SSD. Through its application, it could be shown that mental illness and the antisocial behavior associated with it outweighed other predictors. The fact that SSD is also highly associated with antisocial behavior emphasizes the importance of early detection and sufficient treatment. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)

13 pages, 4264 KiB  
Article
Automated Extraction of Cerebral Infarction Region in Head MR Image Using Pseudo Cerebral Infarction Image by CycleGAN
by Mizuki Yoshida, Atsushi Teramoto, Kohei Kudo, Shoji Matsumoto, Kuniaki Saito and Hiroshi Fujita
Appl. Sci. 2022, 12(1), 489; https://doi.org/10.3390/app12010489 - 4 Jan 2022
Cited by 3 | Viewed by 2101
Abstract
Since recognizing the location and extent of infarction is essential for diagnosis and treatment, many methods using deep learning have been reported. Deep learning generally requires a large amount of training data. To overcome this problem, we generated pseudo patient images using CycleGAN, which performs image transformation without paired images, and aimed to improve extraction accuracy by using the generated images in the extraction of cerebral infarction regions. First, we used CycleGAN for data augmentation: pseudo cerebral infarction images were generated from healthy images. Then, U-Net was used to segment the cerebral infarction region using the CycleGAN-generated images. Regarding extraction accuracy, the Dice index was 0.553 for U-Net with CycleGAN, an improvement over U-Net without CycleGAN. Furthermore, the number of false positives per case was 3.75 for U-Net without CycleGAN and 1.23 for U-Net with CycleGAN; introducing the CycleGAN-generated images into the training cases thus reduced false positives by approximately 67%. These results indicate that utilizing CycleGAN-generated images was effective and facilitated accurate extraction of the infarcted regions while maintaining the detection rate. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
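The Dice index quoted above measures overlap between a predicted segmentation mask and the ground truth. A quick sketch on toy binary masks (the masks here are made up for illustration):

```python
# Dice coefficient of two binary masks, flattened to lists of 0/1.

def dice(pred_mask, truth_mask):
    intersection = sum(p * t for p, t in zip(pred_mask, truth_mask))
    total = sum(pred_mask) + sum(truth_mask)
    return 2 * intersection / total if total else 1.0

pred_mask  = [1, 1, 0, 0, 1]
truth_mask = [1, 0, 0, 1, 1]
print(dice(pred_mask, truth_mask))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap; 0.553 therefore indicates that roughly half of the predicted and true lesion voxels coincide, which is typical for small, diffuse infarct regions.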

13 pages, 1550 KiB  
Article
STHarDNet: Swin Transformer with HarDNet for MRI Segmentation
by Yeonghyeon Gu, Zhegao Piao and Seong Joon Yoo
Appl. Sci. 2022, 12(1), 468; https://doi.org/10.3390/app12010468 - 4 Jan 2022
Cited by 24 | Viewed by 4538
Abstract
In magnetic resonance imaging (MRI) segmentation, conventional approaches utilize U-Net models with encoder–decoder structures, segmentation models using vision transformers, or models that combine a vision transformer with an encoder–decoder structure. However, conventional models are large and slow to compute, and in vision transformer models the computation amount sharply increases with image size. To overcome these problems, this paper proposes a model that combines Swin transformer blocks with a lightweight U-Net-type model that has a HarDNet-block-based encoder–decoder structure. To retain the hierarchical transformer and shifted-windows approach of the Swin transformer model, the Swin transformer is used in the first skip connection layer of the encoder instead of in the encoder–decoder bottleneck. The proposed model, called STHarDNet, was evaluated by separating the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset, which comprises 229 T1-weighted MRI images, into training and validation sets. It achieved Dice, IoU, precision, and recall values of 0.5547, 0.4185, 0.6764, and 0.5286, respectively, which are better than those of the state-of-the-art models U-Net, SegNet, PSPNet, FCHarDNet, TransHarDNet, Swin Transformer, Swin UNet, X-Net, and D-UNet. Thus, STHarDNet improves the accuracy and speed of MRI-based stroke diagnosis. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)

21 pages, 5432 KiB  
Article
A Whole-Slide Image Managing Library Based on Fastai for Deep Learning in the Context of Histopathology: Two Use-Cases Explained
by Christoph Neuner, Roland Coras, Ingmar Blümcke, Alexander Popp, Sven M. Schlaffer, Andre Wirries, Michael Buchfelder and Samir Jabari
Appl. Sci. 2022, 12(1), 13; https://doi.org/10.3390/app12010013 - 21 Dec 2021
Cited by 4 | Viewed by 3184
Abstract
Background: Processing whole-slide images (WSI) to train neural networks can be intricate and labor intensive. We developed an open-source library that handles recurrent tasks in the processing of WSI and helps with the training and evaluation of neural networks for classification tasks. Methods: Two histopathology use cases were selected, using only hematoxylin and eosin (H&E) stained slides. The first use case was a two-class classification problem: we trained a convolutional neural network (CNN) to distinguish between dysembryoplastic neuroepithelial tumor (DNET) and ganglioglioma (GG), two neuropathological low-grade epilepsy-associated tumor entities. In the second use case, we included four clinicopathological disease conditions in a multilabel approach: we trained a CNN to predict the hormone expression profile of pituitary adenomas and, in the same approach, to predict clinically silent corticotroph adenoma. Results: Our DNET-GG classifier achieved an area under the curve (AUC) of 1.00 for the receiver operating characteristic (ROC) curve. For the second use case, the best performing CNN achieved ROC AUCs of 0.97 for corticotroph adenoma, 0.86 for silent corticotroph adenoma, and 0.98 for gonadotroph adenoma. All scores were calculated with the help of our library from predictions on a case basis. Conclusions: Our comprehensive, fastai-compatible library helps standardize the workflow and minimize the burden of training a CNN. Indeed, our trained CNNs extracted neuropathologically relevant information from the WSI. This approach will supplement the clinicopathological diagnosis of brain tumors, which is currently based on cost-intensive microscopic examination and variable panels of immunohistochemical stainings. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)

11 pages, 2481 KiB  
Article
Automated Detection of Gastric Cancer by Retrospective Endoscopic Image Dataset Using U-Net R-CNN
by Atsushi Teramoto, Tomoyuki Shibata, Hyuga Yamada, Yoshiki Hirooka, Kuniaki Saito and Hiroshi Fujita
Appl. Sci. 2021, 11(23), 11275; https://doi.org/10.3390/app112311275 - 28 Nov 2021
Cited by 5 | Viewed by 2385
Abstract
Upper gastrointestinal endoscopy is widely performed to detect early gastric cancers. Automated detection of early gastric cancer from endoscopic images has previously been attempted using an object detection model, a deep learning technique; however, reducing false positives in the detected results remained a challenge. In this study, we propose a novel object detection model, U-Net R-CNN, based on a semantic segmentation technique that extracts target objects by performing a local analysis of the images. U-Net was introduced as a semantic segmentation method to detect candidates for early gastric cancer. These candidates were then classified as gastric cancer or false positives by box classification using a convolutional neural network. In the experiments, detection performance was evaluated via 5-fold cross-validation using 1208 images of healthy subjects and 533 images of gastric cancer patients. When DenseNet169 was used as the convolutional neural network for box classification, the detection sensitivity and the number of false positives evaluated on a lesion basis were 98% and 0.01 per image, respectively, improving on the previous method. These results indicate that the proposed method will be useful for the automated detection of early gastric cancer from endoscopic images. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
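The two-stage structure described above (segmentation proposes candidates, a classifier filters false positives) can be sketched schematically. Both stage functions here are toy stand-ins, not the paper's networks, and the boxes and scores are invented:

```python
# Schematic two-stage detection pipeline in the spirit of U-Net R-CNN.

def stage1_propose(image):
    # Stand-in for the U-Net segmentation stage: return candidate
    # bounding boxes (x1, y1, x2, y2) with confidence scores.
    return [((10, 10, 30, 30), 0.9), ((50, 50, 60, 60), 0.2)]

def stage2_classify(image, candidate):
    # Stand-in for the CNN box classifier: keep confident candidates,
    # discard likely false positives.
    box, score = candidate
    return score > 0.5

def detect(image):
    return [c for c in stage1_propose(image) if stage2_classify(image, c)]

print(detect(None))  # only the confident candidate survives
```

The second stage is what drives down the false-positive rate: proposals are cheap to generate, but each one is re-examined before being reported.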

10 pages, 1340 KiB  
Article
Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images
by Yuki Matsuo and Kazuhiro Takemoto
Appl. Sci. 2021, 11(20), 9556; https://doi.org/10.3390/app11209556 - 14 Oct 2021
Cited by 10 | Viewed by 2781
Abstract
Open-source deep neural networks (DNNs) for medical imaging are significant in emergent situations, such as the pandemic of the 2019 novel coronavirus disease (COVID-19), since they accelerate the development of high-performance DNN-based systems. However, adversarial attacks are not negligible during open-source development. Since DNNs are used as computer-aided systems for COVID-19 screening from radiography images, we investigated the vulnerability of the COVID-Net model, a representative open-source DNN for COVID-19 detection from chest X-ray images, to backdoor attacks, which modify DNN models so that they misclassify inputs containing a specific trigger. The results showed that backdoors for both non-targeted attacks, in which DNNs classify inputs into incorrect labels, and targeted attacks, in which DNNs classify inputs into a specific target class, could be established in the COVID-Net model using a small trigger and a small fraction of the training data. Moreover, the backdoors remained effective in models fine-tuned from the backdoored COVID-Net models, although the performance of non-targeted attacks was limited. This indicates that backdoored models could spread via fine-tuning, thereby becoming a significant security threat. These findings highlight the care required in the open-source development and practical application of DNNs for COVID-19 detection. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
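The trigger mechanism described above can be illustrated with a toy example. This is not the paper's code: the 2×2 "image", the corner-pixel trigger, and the rule-based "model" are all hypothetical, standing in for a pixel patch and a poisoned network.

```python
# Toy illustration of a backdoor trigger for a targeted attack.

def add_trigger(image, value=9):
    """Stamp a one-pixel trigger in the bottom-right corner (copy)."""
    img = [row[:] for row in image]
    img[-1][-1] = value
    return img

def backdoored_model(image):
    """Toy classifier: behaves normally on clean inputs, but returns
    the attacker's target class whenever the trigger is present."""
    if image[-1][-1] == 9:
        return "COVID-19"   # attacker-chosen target label
    return "normal"

clean = [[0, 0], [0, 0]]
print(backdoored_model(clean))               # prints "normal"
print(backdoored_model(add_trigger(clean)))  # prints "COVID-19"
```

Because the model is correct on clean inputs, standard accuracy testing does not reveal the backdoor, which is exactly what makes the attack dangerous in open-source distribution.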

30 pages, 577 KiB  
Article
Instance-Based Learning Following Physician Reasoning for Assistance during Medical Consultation
by Matías Galnares, Sergio Nesmachnow and Franco Simini
Appl. Sci. 2021, 11(13), 5886; https://doi.org/10.3390/app11135886 - 24 Jun 2021
Cited by 1 | Viewed by 1469
Abstract
This article presents an automatic system for modeling clinical knowledge to follow a physician’s reasoning in medical consultation. Instance-based learning is applied to provide suggestions when recording electronic medical records. The system was validated on a real case study involving advanced medical students. The proposed system is accurate and efficient: 2.5× more efficient than a baseline empirical tool for suggestions and two orders of magnitude faster than a Bayesian learning method when processing a testbed of 250 clinical case types. The research provides a framework to implement a real-time system to assist physicians during medical consultations. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)
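Instance-based learning, as used above, keeps past cases and reasons by similarity rather than fitting a global model. A minimal nearest-neighbour sketch of the idea (the case base and findings below are invented for illustration, not from the paper):

```python
# Suggest a label from the most similar stored consultation,
# using Jaccard similarity over sets of recorded findings.

def suggest(findings, case_base):
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    best = max(case_base, key=lambda case: jaccard(findings, case[0]))
    return best[1]

case_base = [
    ({"fever", "cough", "dyspnea"}, "pneumonia"),
    ({"headache", "photophobia"}, "migraine"),
]
print(suggest({"cough", "fever"}, case_base))  # prints "pneumonia"
```

Because inference is a lookup over stored instances, such systems can respond in real time during a consultation and their suggestions are directly traceable to a concrete past case.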
