Application of Artificial Intelligence in Personalized Medicine

A special issue of Journal of Personalized Medicine (ISSN 2075-4426). This special issue belongs to the section "Methodology, Drug and Device Discovery".

Deadline for manuscript submissions: closed (10 September 2022) | Viewed by 99065

Special Issue Editor


Guest Editor
Faculty of Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942, Japan
Interests: hematology and oncology; microbe-induced carcinogenesis; cancer drug discovery; parasitic infections; opportunistic microorganisms; immunology; microbiota; infectious diseases; artificial intelligence and medical sciences

Special Issue Information

Dear Colleagues, 

Artificial Intelligence (AI) has been touted as one of the great new technological forces in medicine, and many consider it to be the foundational technology of personalized medicine. This collection will capture the capabilities and shortcomings of AI in the first quarter of the 21st century. More specifically, it will discuss how the coronavirus pandemic has accelerated the development of AI. Tentatively, the volume will include (but is not limited to) papers on:

  • How Intelligent is AI?
  • AI in Personalized Digital Healthcare
  • Application of AI to Patient History-Taking and Performing Physical Examination
  • AI in the Development of Personalized Healthcare Products and Therapeutics
  • Unorthodox Approaches Using AI in Clinical Trials
  • Application of AI to Context- and Affect-Aware Systems in Personalized Medicine
  • Personalized versus Personal Healthcare
  • Artificial Neural Networks in Medical Diagnosis
  • The Application of Fuzzy Logic in Disease Diagnosis and Management
  • AI in Radiographic Diagnosis
  • AI in Histopathology: A Telemedicine Perspective
  • AI in Sensing Devices and Wearables in Personalized Medicine
  • The Human Side of AI: Patient–Computer Interactions
  • Designing Health Information Technologies for 21st Century Personalized Medicine 

Dr. Jorge Luis Espinoza
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Personalized Medicine is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • diagnosis
  • neural networks
  • wearables
  • deep learning
  • physical examination
  • radiographic image analysis

Published Papers (28 papers)


Research


11 pages, 1709 KiB  
Article
Position Classification of the Endotracheal Tube with Automatic Segmentation of the Trachea and the Tube on Plain Chest Radiography Using Deep Convolutional Neural Network
by Heui Chul Jung, Changjin Kim, Jaehoon Oh, Tae Hyun Kim, Beomgyu Kim, Juncheol Lee, Jae Ho Chung, Hayoung Byun, Myeong Seong Yoon and Dong Keon Lee
J. Pers. Med. 2022, 12(9), 1363; https://doi.org/10.3390/jpm12091363 - 24 Aug 2022
Cited by 8 | Viewed by 1992
Abstract
Background: This study aimed to develop an algorithm for multilabel classification according to the distance from carina to endotracheal tube (ETT) tip (absence, shallow > 70 mm, 30 mm ≤ proper ≤ 70 mm, and deep position < 30 mm) with the application of automatic segmentation of the trachea and the ETT on chest radiographs using deep convolutional neural network (CNN). Methods: This study was a retrospective study using plain chest radiographs. We segmented the trachea and the ETT on images and labeled the classification of the ETT position. We proposed models for the classification of the ETT position using EfficientNet B0 with the application of automatic segmentation using Mask R-CNN and ResNet50. Primary outcomes were favorable performance for automatic segmentation and four-label classification through five-fold validation with segmented images and a test with non-segmented images. Results: Of 1985 images, 596 images were manually segmented and consisted of 298 absence, 97 shallow, 100 proper, and 101 deep images according to the ETT position. In five-fold validations with segmented images, Dice coefficients [mean (SD)] between segmented and predicted masks were 0.841 (0.063) for the trachea and 0.893 (0.078) for the ETT, and the accuracy for four-label classification was 0.945 (0.017). In the test for classification with 1389 non-segmented images, overall values were 0.922 for accuracy, 0.843 for precision, 0.843 for sensitivity, 0.922 for specificity, and 0.843 for F1-score. Conclusions: Automatic segmentation of the ETT and trachea images and classification of the ETT position using deep CNN with plain chest radiographs could achieve good performance and improve the physician’s performance in deciding the appropriateness of ETT depth. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
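The Dice coefficient used above to score agreement between predicted and manually segmented trachea/ETT masks is a simple overlap ratio; a minimal sketch (the toy masks below are hypothetical, not the study's data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks flattened to 1-D: a predicted mask vs. a manual segmentation.
pred  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 1, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 0]
score = dice_coefficient(pred, truth)  # 2*4 / (5 + 4) ≈ 0.889
```

A mean Dice near 0.89, as reported for the ETT, indicates that predicted and manual masks agree on the large majority of labeled pixels.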

12 pages, 4765 KiB  
Article
Prediction of All-Cause Mortality Based on Stress/Rest Myocardial Perfusion Imaging (MPI) Using Deep Learning: A Comparison between Image and Frequency Spectra as Input
by Da-Chuan Cheng, Te-Chun Hsieh, Yu-Ju Hsu, Yung-Chi Lai, Kuo-Yang Yen, Charles C. N. Wang and Chia-Hung Kao
J. Pers. Med. 2022, 12(7), 1105; https://doi.org/10.3390/jpm12071105 - 05 Jul 2022
Viewed by 1450
Abstract
Background: Cardiovascular management and risk stratification of patients is an important issue in clinics. Patients who have experienced an adverse cardiac event are concerned for their future and want to know their survival probability. Methods: We trained eight state-of-the-art CNN models using polar maps of myocardial perfusion imaging (MPI), gender, lung/heart ratio, and patient age for 5-year survival prediction after an adverse cardiac event, based on a cohort of 862 patients who had experienced adverse cardiac events and undergone stress/rest MPI. The CNN models predict whether a patient survives 5 years after a cardiac event, i.e., a binary (yes/no) classification. Results: The best accuracy among all the CNN prediction models was 0.70 (median value), obtained by ResNet-50V2 using the image as input in the baseline experiment. All the CNN models performed better when frequency spectra were used as the input; the accuracy increment was about 7–9%. Conclusions: This is the first trial to use pure rest/stress MPI polar maps and limited clinical data to predict patients’ 5-year survival based on CNN models and deep learning. The study shows the feasibility of using frequency spectra rather than images, which might increase the performance of CNNs. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
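The finding that frequency spectra can outperform raw images as CNN input rests on the discrete Fourier transform; a stdlib-only sketch on a 1-D intensity profile (the direct O(n²) DFT and the toy cosine signal are illustrative, not the study's preprocessing pipeline):

```python
import cmath
import math

def dft_magnitude(signal):
    """Magnitude spectrum of a 1-D signal via a direct DFT (O(n^2), for clarity)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Toy "pixel intensity profile": a cosine completing 2 cycles over 8 samples.
row = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = dft_magnitude(row)  # energy concentrates at bins k = 2 and k = 6
```

In practice a 2-D transform of the polar map would be used; the point is that periodic structure spread across the image becomes a compact, localized pattern in the spectrum, which may be easier for a CNN to exploit.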

17 pages, 3174 KiB  
Article
Multi-Class Classification of Breast Cancer Using 6B-Net with Deep Feature Fusion and Selection Method
by Muhammad Junaid Umer, Muhammad Sharif, Seifedine Kadry and Abdullah Alharbi
J. Pers. Med. 2022, 12(5), 683; https://doi.org/10.3390/jpm12050683 - 26 Apr 2022
Cited by 16 | Viewed by 2636
Abstract
Breast cancer has now overtaken lung cancer as the world’s most commonly diagnosed cancer, with thousands of new cases per year. Early detection and classification of breast cancer are necessary to reduce the death rate. Recently, many deep learning-based studies have been proposed for automatic diagnosis and classification of this deadly disease using histopathology images. This study proposed a novel solution for multi-class breast cancer classification from histopathology images using deep learning. For this purpose, a novel 6B-Net deep CNN model, with a feature fusion and selection mechanism, was developed for multi-class breast cancer classification. For the evaluation of the proposed method, two large, publicly available datasets were used: BreaKHis, with eight classes containing 7909 images, and a breast cancer histopathology dataset containing 3771 images of four classes. The proposed method achieves a multi-class average accuracy of 94.20%, with a classification training time of 226 s, on the four classes of breast cancer, and a multi-class average accuracy of 90.10%, with a classification training time of 147 s, on the eight classes. The experimental outcomes show that the proposed method achieves the highest multi-class average accuracy for breast cancer classification, and hence can effectively be applied for early detection and classification of breast cancer to assist pathologists in early and accurate diagnosis. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

15 pages, 1957 KiB  
Article
Machine-Learning-Based Late Fusion on Multi-Omics and Multi-Scale Data for Non-Small-Cell Lung Cancer Diagnosis
by Francisco Carrillo-Perez, Juan Carlos Morales, Daniel Castillo-Secilla, Olivier Gevaert, Ignacio Rojas and Luis Javier Herrera
J. Pers. Med. 2022, 12(4), 601; https://doi.org/10.3390/jpm12040601 - 08 Apr 2022
Cited by 17 | Viewed by 4144
Abstract
Differentiation between the various non-small-cell lung cancer subtypes is crucial for providing an effective treatment to the patient. For this purpose, machine learning techniques have been used in recent years on the available biological data from patients. However, in most cases this problem has been treated using a single-modality approach, without exploring the potential of the multi-scale and multi-omic nature of cancer data for classification. In this work, we study the fusion of five multi-scale and multi-omic modalities (RNA-Seq, miRNA-Seq, whole-slide imaging, copy number variation, and DNA methylation) using a late fusion strategy and machine learning techniques. We train an independent machine learning model for each modality and explore the interactions and gains that can be obtained by fusing their outputs in an increasing manner, using a novel optimization approach to compute the parameters of the late fusion. The final classification model, using all modalities, obtains an F1 score of 96.81±1.07, an AUC of 0.993±0.004, and an AUPRC of 0.980±0.016, improving on the results of each independent model and those presented in the literature for this problem. These results show that leveraging the multi-scale and multi-omic nature of cancer data can enhance the performance of single-modality clinical decision support systems in personalized medicine, consequently improving patient diagnosis. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
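Late fusion as described above trains one model per modality and combines only their output probabilities; a minimal sketch with hypothetical class probabilities and weights (the paper optimizes the fusion parameters, which is omitted here):

```python
def late_fusion(prob_by_modality, weights):
    """Weighted average of per-modality class-probability vectors."""
    n_classes = len(prob_by_modality[0])
    total_w = sum(weights)
    return [sum(w * probs[c] for probs, w in zip(prob_by_modality, weights)) / total_w
            for c in range(n_classes)]

# Hypothetical per-modality outputs for three classes (e.g. LUAD, LUSC, control).
rna   = [0.70, 0.20, 0.10]   # RNA-Seq model
mirna = [0.60, 0.30, 0.10]   # miRNA-Seq model
wsi   = [0.50, 0.40, 0.10]   # whole-slide-imaging model
fused = late_fusion([rna, mirna, wsi], weights=[0.5, 0.3, 0.2])
predicted = max(range(len(fused)), key=fused.__getitem__)  # class 0
```

Because each modality keeps its own model, a missing modality for a given patient simply drops out of the average, which is one practical appeal of late over early fusion.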

14 pages, 1395 KiB  
Article
Effectiveness of Human–Artificial Intelligence Collaboration in Cephalometric Landmark Detection
by Van Nhat Thang Le, Junhyeok Kang, Il-Seok Oh, Jae-Gon Kim, Yeon-Mi Yang and Dae-Woo Lee
J. Pers. Med. 2022, 12(3), 387; https://doi.org/10.3390/jpm12030387 - 03 Mar 2022
Cited by 14 | Viewed by 3373
Abstract
Detection of cephalometric landmarks has contributed to the analysis of malocclusion during orthodontic diagnosis. Many recent studies involving deep learning have focused on head-to-head comparisons of accuracy in landmark identification between artificial intelligence (AI) and humans. However, a human–AI collaboration for the identification of cephalometric landmarks has not been evaluated. We selected 1193 cephalograms and used them to train the deep anatomical context feature learning (DACFL) model. The number of target landmarks was 41. To evaluate the effect of human–AI collaboration on landmark detection, 10 images were extracted randomly from 100 test images. The experiment included 20 dental students as beginners in landmark localization. The outcomes were determined by measuring the mean radial error (MRE), successful detection rate (SDR), and successful classification rate (SCR). On the dataset, the DACFL model exhibited an average MRE of 1.87 ± 2.04 mm and an average SDR of 73.17% within a 2 mm threshold. Compared with the beginner group, beginner–AI collaboration improved the SDR by 5.33% within a 2 mm threshold and also improved the SCR by 8.38%. Thus, the beginner–AI collaboration was effective in the detection of cephalometric landmarks. Further studies should be performed to demonstrate the benefits of an orthodontist–AI collaboration. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
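The two headline metrics here, mean radial error (MRE) and successful detection rate (SDR), follow directly from predicted and ground-truth landmark coordinates; a minimal sketch with hypothetical coordinates in millimetres:

```python
import math

def landmark_metrics(pred, truth, threshold_mm=2.0):
    """Mean radial error (mm) and successful detection rate within a threshold."""
    errors = [math.dist(p, t) for p, t in zip(pred, truth)]
    mre = sum(errors) / len(errors)
    sdr = sum(e <= threshold_mm for e in errors) / len(errors)
    return mre, sdr

# Hypothetical predicted vs. ground-truth cephalometric landmarks (mm).
pred  = [(10.0, 10.0), (20.0, 21.5), (30.0, 35.0), (40.0, 40.0)]
truth = [(10.0, 11.0), (20.0, 20.0), (30.0, 30.0), (40.0, 40.0)]
mre, sdr = landmark_metrics(pred, truth)  # errors 1.0, 1.5, 5.0, 0.0 -> MRE 1.875, SDR 0.75
```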

23 pages, 30736 KiB  
Article
A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation
by Hanane Allioui, Mazin Abed Mohammed, Narjes Benameur, Belal Al-Khateeb, Karrar Hameed Abdulkareem, Begonya Garcia-Zapirain, Robertas Damaševičius and Rytis Maskeliūnas
J. Pers. Med. 2022, 12(2), 309; https://doi.org/10.3390/jpm12020309 - 18 Feb 2022
Cited by 48 | Viewed by 5771
Abstract
Currently, most mask extraction techniques are based on convolutional neural networks (CNNs). However, numerous problems in mask extraction remain unsolved, so more advanced methods for deploying artificial intelligence (AI) techniques are necessary. The use of cooperative agents in mask extraction increases the efficiency of automatic image segmentation. Hence, we introduce a new mask extraction method based on multi-agent deep reinforcement learning (DRL) to reduce long-term manual mask extraction and to enhance medical image segmentation frameworks. The method uses a modified version of the Deep Q-Network to enable the mask detector to select masks from the image under study. Based on COVID-19 computed tomography (CT) images, we used DRL-based mask extraction techniques to extract visual features of COVID-19-infected areas and provide an accurate clinical diagnosis while optimizing the pathogenic diagnostic test and saving time. We collected CT images of different cases (normal chest CT, pneumonia, typical viral cases, and cases of COVID-19). Experimental validation achieved a precision of 97.12% with a Dice of 80.81%, a sensitivity of 79.97%, a specificity of 99.48%, a precision of 85.21%, an F1 score of 83.01%, a structural metric of 84.38%, and a mean absolute error of 0.86%. Additionally, the results of the visual segmentation clearly reflected the ground truth. The results provide a proof of principle for using DRL to extract CT masks for an effective diagnosis of COVID-19. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

19 pages, 3451 KiB  
Article
Explainable Machine Learning Model for Predicting First-Time Acute Exacerbation in Patients with Chronic Obstructive Pulmonary Disease
by Chew-Teng Kor, Yi-Rong Li, Pei-Ru Lin, Sheng-Hao Lin, Bing-Yen Wang and Ching-Hsiung Lin
J. Pers. Med. 2022, 12(2), 228; https://doi.org/10.3390/jpm12020228 - 07 Feb 2022
Cited by 12 | Viewed by 3570
Abstract
Background: The study developed accurate explainable machine learning (ML) models for predicting first-time acute exacerbation of chronic obstructive pulmonary disease (COPD, AECOPD) at an individual level. Methods: We conducted a retrospective case–control study. A total of 606 patients with COPD were screened for eligibility using registry data from the COPD Pay-for-Performance Program (COPD P4P program) database at Changhua Christian Hospital between January 2017 and December 2019. Recursive feature elimination technology was used to select the optimal subset of features for predicting the occurrence of AECOPD. We developed four ML models to predict first-time AECOPD, and the highest-performing model was applied. Finally, an explainable approach based on ML and the SHapley Additive exPlanations (SHAP) and a local explanation method were used to evaluate the risk of AECOPD and to generate individual explanations of the model’s decisions. Results: The gradient boosting machine (GBM) and support vector machine (SVM) models exhibited superior discrimination ability (area under curve [AUC] = 0.833 [95% confidence interval (CI) 0.745–0.921] and AUC = 0.836 [95% CI 0.757–0.915], respectively). The decision curve analysis indicated that the GBM model exhibited a higher net benefit in distinguishing patients at high risk for AECOPD when the threshold probability was <0.55. The COPD Assessment Test (CAT) and the symptom of wheezing were the two most important features and exhibited the highest SHAP values, followed by monocyte count and white blood cell (WBC) count, coughing, red blood cell (RBC) count, breathing rate, oral long-acting bronchodilator use, chronic pulmonary disease (CPD), systolic blood pressure (SBP), and others. Higher CAT score; monocyte, WBC, and RBC counts; BMI; diastolic blood pressure (DBP); neutrophil-to-lymphocyte ratio; and eosinophil and lymphocyte counts were associated with AECOPD. 
The presence of symptoms (wheezing, dyspnea, coughing), chronic disease (CPD, congestive heart failure [CHF], sleep disorders, and pneumonia), and use of COPD medications (triple-therapy long-acting bronchodilators, short-acting bronchodilators, oral long-acting bronchodilators, and antibiotics) were also positively associated with AECOPD. A high breathing rate, heart rate, or systolic blood pressure and methylxanthine use were negatively correlated with AECOPD. Conclusions: The ML model was able to accurately assess the risk of AECOPD. The ML model combined with SHAP and the local explanation method were able to provide interpretable and visual explanations of individualized risk predictions, which may assist clinical physicians in understanding the effects of key features in the model and the model’s decision-making process. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
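SHAP values as used above are an efficient approximation of the game-theoretic Shapley value, which can be computed exactly for a handful of features by enumerating coalitions; a sketch with a hypothetical additive risk model (the feature names and contributions are invented for illustration — for an additive model, each feature's Shapley value equals its own contribution):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all coalitions (tractable only for few features)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = len(coalition)
                # Shapley weight for a coalition of size s out of n players.
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += weight * (value_fn(set(coalition) | {f}) - value_fn(set(coalition)))
        phi[f] = total
    return phi

# Hypothetical additive AECOPD risk model: each present feature adds a fixed amount.
CONTRIB = {"CAT_score": 0.30, "wheezing": 0.20, "monocyte_count": 0.10}

def risk(coalition):
    return sum(CONTRIB[f] for f in coalition)

phi = shapley_values(list(CONTRIB), risk)  # phi[f] == CONTRIB[f] for an additive model
```

Real models are not additive, which is what makes the per-patient SHAP decomposition informative: the attribution of each feature then depends on interactions with the others.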

19 pages, 16051 KiB  
Article
Verification of De-Identification Techniques for Personal Information Using Tree-Based Methods with Shapley Values
by Junhak Lee, Jinwoo Jeong, Sungji Jung, Jihoon Moon and Seungmin Rho
J. Pers. Med. 2022, 12(2), 190; https://doi.org/10.3390/jpm12020190 - 31 Jan 2022
Cited by 9 | Viewed by 2839
Abstract
With the development of big data and cloud computing technologies, the importance of pseudonymized information has grown. However, tools for verifying whether a de-identification methodology is correctly applied to ensure data confidentiality and usability are insufficient. This paper proposes a verification of de-identification techniques for personal healthcare information that considers data confidentiality and usability. Data are generated and preprocessed by considering actual statistical data, personal information datasets, and de-identification datasets based on medical data, so that the de-identification technique is represented as a numeric dataset. Five tree-based regression models (decision tree, random forest, gradient boosting machine, extreme gradient boosting, and light gradient boosting machine) are constructed on the de-identification dataset to effectively discover nonlinear relationships between dependent and independent variables. The most effective model is then selected for personal information data in which pseudonymization is essential for data utilization. Finally, the Shapley additive explanation (SHAP), an explainable artificial intelligence technique, is applied to the most effective model to support the establishment of pseudonymization policies and to present a machine-learning process for selecting an appropriate de-identification methodology. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

26 pages, 1822 KiB  
Article
Design and Development of an Intelligent Clinical Decision Support System Applied to the Evaluation of Breast Cancer Risk
by Manuel Casal-Guisande, Alberto Comesaña-Campos, Inês Dutra, Jorge Cerqueiro-Pequeño and José-Benito Bouza-Rodríguez
J. Pers. Med. 2022, 12(2), 169; https://doi.org/10.3390/jpm12020169 - 27 Jan 2022
Cited by 20 | Viewed by 4226
Abstract
Breast cancer is currently one of the main causes of death and tumoral disease in women. Even though early diagnosis processes have evolved in recent years thanks to the popularization of mammogram tests, it is still a challenge to have reliable diagnosis systems that are free from variability in their interpretation. To this end, this work presents the design and development of an intelligent clinical decision support system for use in the preventive diagnosis of breast cancer, aiming both to improve the accuracy of the evaluation and to reduce its uncertainty. Through the integration of expert systems (based on Mamdani-type fuzzy-logic inference engines) deployed in cascade, exploratory factorial analysis, data augmentation approaches, and classification algorithms such as k-nearest neighbors and bagged trees, the system is able to learn and to interpret the patient’s medical-healthcare data, generating an alert level associated with her danger of suffering from cancer. For the system’s initial performance tests, a software implementation was built and used to diagnose a series of patients from a 130-case database provided by the School of Medicine and Public Health of the University of Wisconsin-Madison, which was also used to create the knowledge base. The obtained results, characterized by areas under the ROC curves of 0.95–0.97 and high success rates, highlight the strong diagnostic and preventive potential of the developed system and suggest, even though a detailed and contrasted validation is still pending, its relevance and applicability within the clinical field. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

9 pages, 908 KiB  
Article
Differentiation between Germinoma and Craniopharyngioma Using Radiomics-Based Machine Learning
by Boran Chen, Chaoyue Chen, Yang Zhang, Zhouyang Huang, Haoran Wang, Ruoyu Li and Jianguo Xu
J. Pers. Med. 2022, 12(1), 45; https://doi.org/10.3390/jpm12010045 - 04 Jan 2022
Cited by 8 | Viewed by 1837
Abstract
For the tumors located in the anterior skull base, germinoma and craniopharyngioma (CP) are unusual types with similar clinical manifestations and imaging features. The difference in treatment strategies and outcomes of patients highlights the importance of making an accurate preoperative diagnosis. This retrospective study enrolled 107 patients diagnosed with germinoma (n = 44) and CP (n = 63). The region of interest (ROI) was drawn independently by two researchers. Radiomic features were extracted from contrast-enhanced T1WI and T2WI sequences. Here, we established the diagnosis models with a combination of three selection methods, as well as three classifiers. After training the models, their performances were evaluated on the independent validation cohort and compared based on the index of the area under the receiver operating characteristic curve (AUC) in the validation cohort. Nine models were established and compared to find the optimal one defined with the highest AUC in the validation cohort. For the models applied in the contrast-enhanced T1WI images, RFS + RFC and LASSO + LDA were observed to be the optimal models with AUCs of 0.91. For the models applied in the T2WI images, DC + LDA and LASSO + LDA were observed to be the optimal models with AUCs of 0.88. The evidence of this study indicated that radiomics-based machine learning could be potentially considered as the radiological method in the presurgical differential diagnosis of germinoma and CP with a reliable diagnostic performance. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
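The AUC used here to rank the nine radiomics models has a simple probabilistic reading: the chance that a randomly chosen positive case scores above a randomly chosen negative case. A minimal sketch via the Mann–Whitney formulation (scores and labels are hypothetical):

```python
def roc_auc(scores, labels):
    """AUC as P(positive score > negative score), counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores: label 1 = germinoma, 0 = craniopharyngioma.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
auc = roc_auc(scores, labels)  # 8 of 9 positive-negative pairs ranked correctly ≈ 0.889
```

An AUC of 0.91, as reported for the best contrast-enhanced T1WI models, means about 91% of germinoma–CP pairs would be ranked correctly by the model's score.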

16 pages, 1834 KiB  
Article
Utilization of Decision Tree Algorithms for Supporting the Prediction of Intensive Care Unit Admission of Myasthenia Gravis: A Machine Learning-Based Approach
by Che-Cheng Chang, Jiann-Horng Yeh, Hou-Chang Chiu, Yen-Ming Chen, Mao-Jhen Jhou, Tzu-Chi Liu and Chi-Jie Lu
J. Pers. Med. 2022, 12(1), 32; https://doi.org/10.3390/jpm12010032 - 02 Jan 2022
Cited by 18 | Viewed by 2689
Abstract
Myasthenia gravis (MG), an acquired autoimmune-related neuromuscular disorder that causes muscle weakness, presents with varying severity, including myasthenic crisis (MC). Although MC can cause significant morbidity and mortality, specialized neuro-intensive care can produce a good long-term prognosis. Considering the outcomes of MG during hospitalization, it is critical to conduct risk assessments to predict the need for intensive care. Evidence and valid tools for the screening of critical patients with MG are lacking. We used three machine learning-based decision tree algorithms, including a classification and regression tree, C4.5, and C5.0, for predicting intensive care unit (ICU) admission of patients with MG. We included 228 MG patients admitted between 2015 and 2018. Among them, 88.2% were anti-acetylcholine receptors antibody positive and 4.7% were anti-muscle-specific kinase antibody positive. Twenty clinical variables were used as predictive variables. The C5.0 decision tree outperformed the other two decision tree and logistic regression models. The decision rules constructed by the best C5.0 model showed that the Myasthenia Gravis Foundation of America clinical classification at admission, thymoma history, azathioprine treatment history, disease duration, sex, and onset age were significant risk factors for the development of decision rules for ICU admission prediction. The developed machine learning-based decision tree can be a supportive tool for alerting clinicians regarding patients with MG who require intensive care, thereby improving the quality of care. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)
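Decision-tree learners such as C4.5 and C5.0 choose splits by information gain, the entropy reduction a split achieves; a minimal sketch with a hypothetical toy split (the labels and the "thymoma history" split are invented for illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction from partitioning `labels` into `groups`."""
    n = len(labels)
    weighted = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - weighted

# Hypothetical: ICU admission (1) vs. not (0), split on thymoma history.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
with_thymoma, without = [1, 1, 1, 0], [0, 0, 0, 0]
gain = information_gain(labels, [with_thymoma, without])
```

The tree repeatedly picks the predictor with the highest gain (C4.5 normalizes it by the split's own entropy), which is why the resulting decision rules read directly as risk-factor thresholds.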

17 pages, 3587 KiB  
Article
Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures
by Muhammad Arsalan, Adnan Haider, Jiho Choi and Kang Ryoung Park
J. Pers. Med. 2022, 12(1), 7; https://doi.org/10.3390/jpm12010007 - 23 Dec 2021
Cited by 13 | Viewed by 3675
Abstract
Retinal blood vessels are considered valuable biomarkers for the detection of diabetic retinopathy, hypertensive retinopathy, and other retinal disorders. Ophthalmologists analyze retinal vasculature by manual segmentation, which is a tedious task. Numerous studies have focused on automatic retinal vasculature segmentation using different methods for ophthalmic disease analysis. However, most of these methods are computationally expensive and lack robustness. This paper proposes two new shallow deep learning architectures, the dual-stream fusion network (DSF-Net) and the dual-stream aggregation network (DSA-Net), to accurately detect retinal vasculature. The proposed method uses semantic segmentation of raw color fundus images to screen for diabetic and hypertensive retinopathies. Its performance is assessed on three publicly available fundus image datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and the Child Heart and Health Study in England database (CHASE-DB1). The experimental results revealed that the proposed method provided superior segmentation performance, with accuracy (Acc), sensitivity (SE), specificity (SP), and area under the curve (AUC) of 96.93%, 82.68%, 98.30%, and 98.42% for DRIVE; 97.25%, 82.22%, 98.38%, and 98.15% for CHASE-DB1; and 97.00%, 86.07%, 98.00%, and 98.65% for STARE, respectively. The results also show that the proposed DSA-Net provides higher SE than existing approaches, meaning that it detected the minor vessels and produced the fewest false negatives, which is extremely important for diagnosis. The proposed method provides an automatic and accurate segmentation mask that can be used to highlight vessel pixels. The detected vasculature can be used to compute the ratio between vessel and non-vessel pixels to distinguish between diabetic and hypertensive retinopathies, and its morphology can be analyzed for related retinal disorders.
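The vessel-to-non-vessel pixel ratio mentioned above follows directly from a binary segmentation mask. A minimal NumPy sketch (the mask encoding, 1 = vessel, and the absence of a field-of-view mask are assumptions):

```python
import numpy as np

def vessel_ratio(mask: np.ndarray) -> float:
    """Ratio of vessel pixels (1) to non-vessel pixels (0) in a binary mask."""
    vessel = int(mask.sum())
    non_vessel = mask.size - vessel
    return vessel / non_vessel

mask = np.zeros((4, 4), dtype=np.uint8)
mask[0, :] = 1             # 4 vessel pixels out of 16
print(vessel_ratio(mask))  # 4 / 12
```

In practice this ratio would be computed only inside the retinal field of view, since the black border of a fundus photograph would otherwise inflate the non-vessel count.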

19 pages, 4820 KiB  
Article
Improved Deep Convolutional Neural Network to Classify Osteoarthritis from Anterior Cruciate Ligament Tear Using Magnetic Resonance Imaging
by Mazhar Javed Awan, Mohd Shafry Mohd Rahim, Naomie Salim, Amjad Rehman, Haitham Nobanee and Hassan Shabir
J. Pers. Med. 2021, 11(11), 1163; https://doi.org/10.3390/jpm11111163 - 09 Nov 2021
Cited by 33 | Viewed by 5053
Abstract
An anterior cruciate ligament (ACL) tear is a partial or complete rupture of the ACL in the knee and occurs especially in athletes. There is a need to classify ACL tears before the ligament fully ruptures in order to avoid osteoarthritis. This research aims to identify ACL tears automatically and efficiently with a deep learning approach. A dataset of 917 knee magnetic resonance images (MRI) was gathered from the Clinical Hospital Centre Rijeka, Croatia, covering three classes: non-injured, partially torn, and fully ruptured knees. The study compares and evaluates two variants of convolutional neural networks (CNN): a standard CNN model of five layers and a customized CNN model of eleven layers. Eight different hyper-parameters were adjusted and tested on both variants. Our customized CNN model showed good results after a 25% random split using RMSprop and a learning rate of 0.001. For the standard CNN using the Adam optimizer with a learning rate of 0.001, the average accuracy, precision, sensitivity, specificity, and F1-score were 96.3%, 95%, 96%, 96.9%, and 95.6%, respectively. Using the same evaluation measures, the customized CNN model with an RMSprop optimizer and a learning rate of 0.001 achieved 98.6%, 98%, 98%, 98.5%, and 98%, respectively. Moreover, we also report results on the receiver operating characteristic curve and area under the curve (ROC AUC): the customized CNN model with the Adam optimizer and a learning rate of 0.001 achieved 0.99 over the three classes, the highest among all models. The model showed good results overall, and in the future it can be extended with other CNN architectures to detect and segment other knee structures such as the meniscus and cartilage.
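The evaluation measures quoted in abstracts like this one (accuracy, precision, sensitivity, specificity, F1-score) all derive from the binary confusion-matrix counts. A self-contained sketch with made-up counts, not the study's data:

```python
def clf_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"acc": accuracy, "prec": precision, "sens": sensitivity,
            "spec": specificity, "f1": f1}

print(clf_metrics(tp=90, fp=10, tn=85, fn=15))
```

For the three-class problem above, these metrics are typically computed per class one-vs-rest and then averaged.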

12 pages, 637 KiB  
Article
Machine Learning Algorithms to Predict In-Hospital Mortality in Patients with Traumatic Brain Injury
by Sheng-Der Hsu, En Chao, Sy-Jou Chen, Dueng-Yuan Hueng, Hsiang-Yun Lan and Hui-Hsun Chiang
J. Pers. Med. 2021, 11(11), 1144; https://doi.org/10.3390/jpm11111144 - 04 Nov 2021
Cited by 16 | Viewed by 2234
Abstract
Traumatic brain injury (TBI) can lead to severe adverse clinical outcomes, including death and disability. Using machine learning, early detection of high in-hospital mortality risk may enable early treatment and potentially reduce mortality. However, there is limited information on in-hospital mortality prediction models for TBI patients admitted to emergency departments. The aim of this study was to create a model that successfully predicts, from clinical measures and demographics, in-hospital mortality in a sample of TBI patients admitted to the emergency department. Of the 4881 TBI patients screened at the emergency department of a high-level first aid duty hospital in northern Taiwan from January 2008 to June 2018, 3331 were assigned in triage to Level I or Level II using the Taiwan Triage and Acuity Scale. The most significant predictors of in-hospital mortality in TBI patients were the Glasgow coma scale score, the injury severity score, and systolic blood pressure at emergency department admission. This study demonstrated effective cutoff values for clinical measures when using machine learning to predict the in-hospital mortality of patients with TBI. The prediction model has the potential to further accelerate the development of innovative care-delivery protocols for high-risk patients.
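A common way to derive a clinical cutoff value like those mentioned above is to maximize Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds. The sketch below uses synthetic GCS-like scores; the data and the direction of the rule (lower score predicts mortality) are assumptions for illustration, not the study's method or values.

```python
def best_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sens + spec - 1.
    Predicts positive (e.g. mortality) when score <= threshold."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s > t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Synthetic GCS-style scores: deaths (label 1) cluster at low values
scores = [3, 4, 5, 6, 9, 12, 13, 14, 15, 15]
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(best_cutoff(scores, labels))  # (6, 1.0) on this toy data
```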

22 pages, 6446 KiB  
Article
Domain-Adaptive Artificial Intelligence-Based Model for Personalized Diagnosis of Trivial Lesions Related to COVID-19 in Chest Computed Tomography Scans
by Muhammad Owais, Na Rae Baek and Kang Ryoung Park
J. Pers. Med. 2021, 11(10), 1008; https://doi.org/10.3390/jpm11101008 - 07 Oct 2021
Cited by 8 | Viewed by 2156
Abstract
Background: Early and accurate detection of COVID-19-related findings in lung computed tomography (CT) scans (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation) is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. Method: Recent breakthroughs in deep learning methods have boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for the effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time of 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets: COVID-19-CT-Seg (a total of 3520 images of 20 different patients) and MosMed (a total of 2049 images of 50 different patients). Results: Our method achieves an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and cross-dataset evaluation, respectively, and outperforms various state-of-the-art methods. Conclusions: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present COVID-19 pandemic.

9 pages, 1115 KiB  
Article
Machine Learning-Based Radiomics of the Optic Chiasm Predict Visual Outcome Following Pituitary Adenoma Surgery
by Yang Zhang, Chaoyue Chen, Wei Huang, Yangfan Cheng, Yuen Teng, Lei Zhang and Jianguo Xu
J. Pers. Med. 2021, 11(10), 991; https://doi.org/10.3390/jpm11100991 - 30 Sep 2021
Cited by 5 | Viewed by 2218
Abstract
Preoperative prediction of visual recovery after pituitary adenoma surgery remains a challenge. We aimed to investigate the value of MRI-based radiomics of the optic chiasm in predicting postoperative visual field outcome using machine learning. A total of 131 pituitary adenoma patients were retrospectively enrolled and divided into a recovery group (N = 79) and a non-recovery group (N = 52) according to visual field outcome following surgical chiasmal decompression. Radiomic features were extracted from the optic chiasm on preoperative coronal T2-weighted imaging. Least absolute shrinkage and selection operator (LASSO) regression was first used to select optimal features. Three machine learning algorithms were then employed to develop radiomic models predicting visual recovery: support vector machine (SVM), random forest, and linear discriminant analysis. The prognostic performance of the models was evaluated via five-fold cross-validation. The results showed that the radiomic models built with the different machine learning algorithms all achieved an area under the curve (AUC) above 0.750. The SVM-based model showed the best predictive performance for visual field recovery, with the highest AUC of 0.824. In conclusion, machine learning-based radiomics of the optic chiasm on routine MR imaging could potentially serve as a novel approach to preoperatively predict visual recovery and allow personalized counseling for individual pituitary adenoma patients.
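Five-fold cross-validation, as used above to evaluate the radiomic models, partitions the patients into five disjoint folds, each serving once as the validation set. A minimal index-splitting sketch in pure Python (no shuffling or stratification shown, which real pipelines usually add):

```python
def k_fold_indices(n: int, k: int = 5):
    """Split indices 0..n-1 into k contiguous, disjoint folds."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(131)      # 131 patients, as in the study
print([len(f) for f in folds])   # [27, 26, 26, 26, 26]
```

Each fold is held out in turn while the model trains on the other four, and the five validation scores are averaged.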

23 pages, 2050 KiB  
Article
MRI Deep Learning-Based Solution for Alzheimer’s Disease Prediction
by Cristina L. Saratxaga, Iratxe Moya, Artzai Picón, Marina Acosta, Aitor Moreno-Fernandez-de-Leceta, Estibaliz Garrote and Arantza Bereciartua-Perez
J. Pers. Med. 2021, 11(9), 902; https://doi.org/10.3390/jpm11090902 - 09 Sep 2021
Cited by 43 | Viewed by 5736
Abstract
Background: Alzheimer’s disease is a degenerative dementing disorder that starts with mild memory impairment and progresses to a total loss of mental and physical faculties. The sooner the diagnosis is made, the better for the patient, as preventive actions and treatment can be started. Although tests such as the Mini-Mental State Examination are usually used for early identification, diagnosis relies on magnetic resonance imaging (MRI) brain analysis. Methods: Public initiatives such as the OASIS (Open Access Series of Imaging Studies) collection provide neuroimaging datasets openly available for research purposes. In this work, a new method based on deep learning and image processing techniques for MRI-based Alzheimer’s diagnosis is proposed and compared with previous work in the literature. Results: Our method achieves a balanced accuracy (BAC) of up to 0.93 for image-based automated diagnosis of the disease, and a BAC of 0.88 for establishing the disease stage (healthy tissue, very mild stage, and severe stage). Conclusions: The results obtained surpass the state-of-the-art proposals using the OASIS collection. This demonstrates that deep learning-based strategies are an effective tool for building a robust solution for Alzheimer’s-assisted diagnosis based on MRI data.
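Balanced accuracy (BAC), the metric reported above, is the mean of per-class recall, which prevents a majority class from dominating the score on imbalanced medical datasets. A pure-Python sketch with synthetic labels (the three stage labels are illustrative):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean recall over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        hit = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hit / len(idx))
    return sum(recalls) / len(classes)

# Three stages: 0 = healthy, 1 = very mild, 2 = severe (toy labels)
y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2, 1]
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 1.0 + 0.5) / 3 = 0.75
```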

12 pages, 1685 KiB  
Article
Automatic Meningioma Segmentation and Grading Prediction: A Hybrid Deep-Learning Method
by Chaoyue Chen, Yisong Cheng, Jianfeng Xu, Ting Zhang, Xin Shu, Wei Huang, Yu Hua, Yang Zhang, Yuen Teng, Lei Zhang and Jianguo Xu
J. Pers. Med. 2021, 11(8), 786; https://doi.org/10.3390/jpm11080786 - 12 Aug 2021
Cited by 14 | Viewed by 2697
Abstract
The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate the preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. The segmentations were then fed into rendering algorithms for spatial reconstruction and into a DenseNet convolutional neural network for grading prediction. The trained models were integrated as a system, and its robustness was tested on an external dataset from the second institution involving different magnetic resonance imaging platforms. The results showed that the segmentation model achieved noteworthy performance, with a dice coefficient of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model delicately reconstructed the tumor body and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy when classifying tumors into low-grade and high-grade meningiomas, with an area under the curve of 0.918 ± 0.006 and an accuracy of 0.901 ± 0.039. Moreover, the system exhibited good performance on the external validation dataset.
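The dice coefficient used above to score the segmentation model measures the overlap between a predicted and a reference mask: 2|A∩B| / (|A| + |B|). A NumPy sketch on tiny toy masks:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient = 2*|intersection| / (|pred| + |ref|) for binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, ref))  # 2*2 / (3+3) = 0.666...
```

A value of 1.0 means perfect overlap; the 0.920 reported above indicates the predicted tumor masks agree closely with the reference annotations.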

9 pages, 837 KiB  
Article
Application of Machine Learning for Predicting Anastomotic Leakage in Patients with Gastric Adenocarcinoma Who Received Total or Proximal Gastrectomy
by Shengli Shao, Lu Liu, Yufeng Zhao, Lei Mu, Qiyi Lu and Jichao Qin
J. Pers. Med. 2021, 11(8), 748; https://doi.org/10.3390/jpm11080748 - 29 Jul 2021
Cited by 9 | Viewed by 2278
Abstract
Anastomotic leakage is a life-threatening complication in patients with gastric adenocarcinoma who have received total or proximal gastrectomy, and there is still no model that accurately predicts it. In this study, we aimed to develop a high-performance machine learning tool to predict anastomotic leakage in these patients. A total of 1660 cases of gastric adenocarcinoma patients who received total or proximal gastrectomy in a large academic hospital from 1 January 2010 to 31 December 2019 were investigated, and these patients were randomly divided into training and testing sets at a ratio of 8:2. Four machine learning models, namely logistic regression, random forest, support vector machine, and XGBoost, were employed, and 24 clinical preoperative and intraoperative variables were included to develop the predictive model. In terms of the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy, random forest performed best, with an AUC of 0.89, a sensitivity of 81.8%, and a specificity of 82.2% in the testing set. Moreover, we built a web app based on the random forest model to provide real-time predictions for guiding surgeons’ intraoperative decision making.
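The 8:2 random split described above can be sketched in a few lines of pure Python; the seed, rounding, and shuffling strategy below are assumptions for illustration, not details from the paper.

```python
import random

def train_test_split(n: int, test_frac: float = 0.2, seed: int = 42):
    """Shuffle patient indices and split them into train/test index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)   # deterministic shuffle for reproducibility
    n_test = round(n * test_frac)
    return idx[n_test:], idx[:n_test]

train, test = train_test_split(1660)   # 1660 patients, as in the study
print(len(train), len(test))           # 1328 332
```

Fixing the seed makes the split reproducible, which matters when several models (here four) must be compared on exactly the same held-out patients.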

13 pages, 6460 KiB  
Article
Predicting 1-Year Mortality after Hip Fracture Surgery: An Evaluation of Multiple Machine Learning Approaches
by Maximilian Peter Forssten, Gary Alan Bass, Ahmad Mohammad Ismail, Shahin Mohseni and Yang Cao
J. Pers. Med. 2021, 11(8), 727; https://doi.org/10.3390/jpm11080727 - 27 Jul 2021
Cited by 16 | Viewed by 2765
Abstract
Postoperative death within 1 year following hip fracture surgery is reported to be up to 27%. In the current study, we benchmarked the predictive precision and accuracy of the support vector machine (SVM), naïve Bayes classifier (NB), and random forest classifier (RF) algorithms against logistic regression (LR) in predicting 1-year postoperative mortality in hip fracture patients, and assessed the relative importance of the variables included in the LR model. All adult patients who underwent primary emergency hip fracture surgery in Sweden between 1 January 2008 and 31 December 2017 were included in the study. Patients with pathological fractures and non-operatively managed hip fractures, as well as those who died within 30 days after surgery, were excluded from the analysis. An LR model with elastic net regularization was fitted and compared to NB, SVM, and RF. The relative importance of the variables in the LR model was then evaluated using permutation importance. The LR model including all the variables demonstrated an acceptable predictive ability on both the training and test datasets for predicting one-year postoperative mortality (area under the curve (AUC) = 0.74 and 0.74, respectively). NB, SVM, and RF tended to over-predict mortality, particularly the NB and SVM algorithms. In contrast, LR only over-predicted mortality when the predicted probability of mortality was larger than 0.7. The LR algorithm outperformed the other three algorithms in predicting 1-year postoperative mortality in hip fracture patients. The most important predictors of 1-year mortality were the presence of a metastatic carcinoma, American Society of Anesthesiologists (ASA) classification, sex, Charlson Comorbidity Index (CCI) ≤ 4, age, dementia, congestive heart failure, hypertension, surgery using pins/screws, and chronic kidney disease.
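The AUC used throughout these comparisons has a rank-based interpretation: it is the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case (the Mann–Whitney formulation). A pure-Python sketch on toy scores:

```python
def auc(scores, labels):
    """Rank-based AUC: P(score_pos > score_neg), ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # predicted mortality probabilities
labels = [1,   1,   0,   1,   0,   0]     # 1 = died within one year
print(auc(scores, labels))  # 8 of 9 pos/neg pairs correctly ordered = 0.888...
```

Because AUC depends only on ranking, a model can reach AUC 0.74 yet still systematically over-predict absolute risk, which is exactly the calibration issue the abstract describes for NB and SVM.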

17 pages, 1389 KiB  
Article
Mandible Segmentation of Dental CBCT Scans Affected by Metal Artifacts Using Coarse-to-Fine Learning Model
by Bingjiang Qiu, Hylke van der Wel, Joep Kraeima, Haye Hendrik Glas, Jiapan Guo, Ronald J. H. Borra, Max Johannes Hendrikus Witjes and Peter M. A. van Ooijen
J. Pers. Med. 2021, 11(6), 560; https://doi.org/10.3390/jpm11060560 - 16 Jun 2021
Cited by 12 | Viewed by 3013
Abstract
Because of its low radiation dose and short scanning duration, cone-beam computed tomography (CBCT) is widely used in maxillofacial surgery and orthodontic treatment planning, and accurate segmentation of the mandible from CBCT scans is an important step in building a personalized 3D digital mandible model. CBCT images, however, exhibit lower contrast and higher levels of noise and artifacts than conventional computed tomography (CT) because of the extremely low radiation, which makes automatic mandible segmentation from CBCT data challenging. In this work, we propose a novel coarse-to-fine segmentation framework based on a 3D convolutional neural network and a recurrent SegUnet for mandible segmentation in CBCT scans. Specifically, the mandible segmentation is decomposed into two stages: localization of a mandible-like region by rough segmentation, followed by accurate segmentation of the mandible details. The method was evaluated using a dental CBCT dataset. In addition, we evaluated the proposed method and compared it with state-of-the-art methods on two CT datasets. The experiments indicate that, across these three datasets, the proposed algorithm provides more accurate and robust segmentation results for different imaging techniques than the state-of-the-art models.
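In a coarse-to-fine pipeline like the one described, the rough segmentation typically yields a region of interest that is cropped out before the fine stage. How the paper performs this crop is not specified here; the NumPy sketch below shows the generic bounding-box idea (margins and 3D handling omitted), demonstrated on a 2D toy array:

```python
import numpy as np

def crop_to_mask(volume: np.ndarray, coarse_mask: np.ndarray) -> np.ndarray:
    """Crop a scan to the tight bounding box of a coarse binary mask."""
    coords = np.argwhere(coarse_mask)           # indices of mask voxels
    lo = coords.min(axis=0)                     # lowest index per axis
    hi = coords.max(axis=0) + 1                 # one past the highest index
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices]

vol = np.arange(36).reshape(6, 6)               # toy 2D "scan"
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True                           # coarse mandible-like region
print(crop_to_mask(vol, mask).shape)            # (3, 3)
```

The fine-stage network then operates only on this crop, which concentrates its capacity on the anatomy of interest.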

14 pages, 2468 KiB  
Article
Machine Learning to Predict In-Hospital Mortality in COVID-19 Patients Using Computed Tomography-Derived Pulmonary and Vascular Features
by Simone Schiaffino, Marina Codari, Andrea Cozzi, Domenico Albano, Marco Alì, Roberto Arioli, Emanuele Avola, Claudio Bnà, Maurizio Cariati, Serena Carriero, Massimo Cressoni, Pietro S. C. Danna, Gianmarco Della Pepa, Giovanni Di Leo, Francesco Dolci, Zeno Falaschi, Nicola Flor, Riccardo A. Foà, Salvatore Gitto, Giovanni Leati, Veronica Magni, Alexis E. Malavazos, Giovanni Mauri, Carmelo Messina, Lorenzo Monfardini, Alessio Paschè, Filippo Pesapane, Luca M. Sconfienza, Francesco Secchi, Edoardo Segalini, Angelo Spinazzola, Valeria Tombini, Silvia Tresoldi, Angelo Vanzulli, Ilaria Vicentin, Domenico Zagaria, Dominik Fleischmann and Francesco Sardanelli
J. Pers. Med. 2021, 11(6), 501; https://doi.org/10.3390/jpm11060501 - 03 Jun 2021
Cited by 19 | Viewed by 3364
Abstract
Pulmonary parenchymal and vascular damage are frequently reported in COVID-19 patients and can be assessed with unenhanced chest computed tomography (CT), widely used as a triaging exam. Integrating clinical data, chest CT features, and CT-derived vascular metrics, we aimed to build a predictive model of in-hospital mortality using univariate analysis (Mann–Whitney U test) and machine learning models (support vector machines (SVM) and multilayer perceptrons (MLP)). Patients with RT-PCR-confirmed SARS-CoV-2 infection and unenhanced chest CT performed on emergency department admission were included after retrieving their outcome (discharge or death), with an 85%/15% training/test dataset split. Of 897 patients, the 229 (26%) who died during hospitalization had a higher median pulmonary artery diameter (29.0 mm) than patients who survived (27.0 mm, p < 0.001) and a higher median ascending aortic diameter (36.6 mm versus 34.0 mm, p < 0.001). The best SVM and MLP models considered the same ten input features, yielding areas under the curve of 0.747 (precision 0.522, recall 0.800) and 0.844 (precision 0.680, recall 0.567), respectively. In this model integrating clinical and radiological data, pulmonary artery diameter was the third most important predictor after age and the extent of parenchymal involvement, contributing to reliable in-hospital mortality prediction and highlighting the value of vascular metrics in improving patient stratification.

11 pages, 2738 KiB  
Article
Influence of the Depth of the Convolutional Neural Networks on an Artificial Intelligence Model for Diagnosis of Orthognathic Surgery
by Ye-Hyun Kim, Jae-Bong Park, Min-Seok Chang, Jae-Jun Ryu, Won Hee Lim and Seok-Ki Jung
J. Pers. Med. 2021, 11(5), 356; https://doi.org/10.3390/jpm11050356 - 29 Apr 2021
Cited by 25 | Viewed by 3703
Abstract
The aim of this study was to investigate the relationship between image patterns in cephalometric radiographs and the diagnosis of orthognathic surgery, and to propose a method to improve the accuracy of predictive models according to the depth of the neural networks. The study included 640 and 320 patients requiring non-surgical and surgical orthodontic treatments, respectively. The data of 150 patients were exclusively set aside as a test set. The data of the remaining 810 patients were split into five groups, and five-fold cross-validation was performed. The convolutional neural network models used were ResNet-18, 34, 50, and 101; the number in the model name represents the depth of the blocks that constitute the model. The accuracy, sensitivity, and specificity of each model were estimated and compared. The average success rates in the test set for ResNet-18, 34, 50, and 101 were 93.80%, 93.60%, 91.13%, and 91.33%, respectively. In screening, ResNet-18 had the best performance, with an area under the curve of 0.979, followed by ResNet-34, 50, and 101 at 0.974, 0.945, and 0.944, respectively. This study suggests the structural characteristics required of an artificial intelligence model for decision-making based on medical images.

11 pages, 2497 KiB  
Article
Development of a Fundus Image-Based Deep Learning Diagnostic Tool for Various Retinal Diseases
by Kyoung Min Kim, Tae-Young Heo, Aesul Kim, Joohee Kim, Kyu Jin Han, Jaesuk Yun and Jung Kee Min
J. Pers. Med. 2021, 11(5), 321; https://doi.org/10.3390/jpm11050321 - 21 Apr 2021
Cited by 22 | Viewed by 3713
Abstract
Artificial intelligence (AI)-based diagnostic tools have gained acceptance in ophthalmology. The use of retinal images, such as fundus photographs, is a promising approach for the development of AI-based diagnostic platforms. Retinal pathologies occur in a broad spectrum of eye diseases, including neovascular or dry age-related macular degeneration, epiretinal membrane, rhegmatogenous retinal detachment, retinitis pigmentosa, macular hole, retinal vein occlusions, and diabetic retinopathy. Here, we report a fundus image-based AI model for the differential diagnosis of retinal diseases. We classified retinal images with three convolutional neural network models: ResNet50, VGG19, and Inception v3. Furthermore, the performance of several dense (fully connected) layer configurations was compared. The prediction accuracy for the diagnosis of nine classes (eight retinal diseases and normal controls) was 87.42% for the ResNet50 model with an added dense layer of 128 nodes. Furthermore, our AI tool augments ophthalmologists’ performance in the diagnosis of retinal disease. These results suggest that fundus image-based AI tools are applicable to the medical diagnosis of retinal diseases.

Review


24 pages, 1096 KiB  
Review
Artificial Intelligence-Driven Prediction Modeling and Decision Making in Spine Surgery Using Hybrid Machine Learning Models
by Babak Saravi, Frank Hassel, Sara Ülkümen, Alisia Zink, Veronika Shavlokhova, Sebastien Couillard-Despres, Martin Boeker, Peter Obid and Gernot Michael Lang
J. Pers. Med. 2022, 12(4), 509; https://doi.org/10.3390/jpm12040509 - 22 Mar 2022
Cited by 51 | Viewed by 6971
Abstract
Healthcare systems worldwide generate vast amounts of data from many different sources. Although of high complexity for a human being, it is essential to determine the patterns and minor variations in genomic, radiological, laboratory, or clinical data that reliably differentiate phenotypes or allow high predictive accuracy in health-related tasks. Convolutional neural networks (CNN) are increasingly applied to image data for various tasks. Their use for non-imaging data becomes feasible through modern machine learning techniques that convert non-imaging data into images before inputting them into the CNN model. Considering also that healthcare providers do not rely on a single data modality for their decisions, this approach opens the door to multi-input/mixed-data models that use a combination of patient information, such as genomic, radiological, and clinical data, to train a hybrid deep learning model. This reflects the main characteristic of artificial intelligence: simulating natural human behavior. The present review focuses on key advances in machine and deep learning that allow multi-perspective pattern recognition across the entire information set of patients in spine surgery. To the best of our knowledge, this is the first review of artificial intelligence focusing on hybrid models for deep learning applications in spine surgery. This is especially relevant, as future tools are unlikely to use a single data modality. The techniques discussed could become important in establishing a new approach to decision-making in spine surgery based on three fundamental pillars: (1) patient-specific, (2) artificial intelligence-driven, and (3) integrating multimodal data. The findings reveal promising research toward multi-input mixed-data hybrid decision-support models; their implementation in spine surgery may hence be only a matter of time.
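The conversion of non-imaging data into images mentioned above can, in its simplest form, be a normalization plus reshape of a feature vector into a 2D grid that a CNN can consume. This NumPy sketch is a naive illustration of the idea, not any specific published conversion technique; the padding-with-zeros choice is an assumption.

```python
import numpy as np

def features_to_image(x: np.ndarray, side: int) -> np.ndarray:
    """Min-max normalize a (non-constant) feature vector and
    zero-pad/reshape it into a side x side single-channel 'image'."""
    x = (x - x.min()) / (x.max() - x.min())
    img = np.zeros(side * side)
    img[: x.size] = x          # remaining cells stay zero-padded
    return img.reshape(side, side)

# Five tabular features (e.g. lab values) laid out on a 3x3 grid
img = features_to_image(np.array([3.0, 1.0, 2.0, 5.0, 4.0]), side=3)
print(img.shape)  # (3, 3)
```

Published methods go further, e.g., by arranging correlated features near each other so that convolutions can exploit local structure, but the input/output contract is the same: a vector in, an image-shaped array out.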
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

13 pages, 660 KiB  
Review
Artificial Intelligence and Its Application to Minimal Hepatic Encephalopathy Diagnosis
by Jakub Gazda, Peter Drotar, Sylvia Drazilova, Juraj Gazda, Matej Gazda, Martin Janicko and Peter Jarcuska
J. Pers. Med. 2021, 11(11), 1090; https://doi.org/10.3390/jpm11111090 - 26 Oct 2021
Cited by 7 | Viewed by 2058
Abstract
Hepatic encephalopathy (HE) is a brain dysfunction caused by liver insufficiency and/or portosystemic shunting. HE manifests as a spectrum of neurological or psychiatric abnormalities. Diagnosis of overt HE (OHE) is based on its typical clinical manifestation, but covert HE (CHE) shows only very subtle clinical signs, and minimal HE (MHE) is detected only by specialized, time-consuming psychometric tests for which there is still no universally accepted gold standard. Significant progress has been made in artificial intelligence and its application to medicine. In this review, we survey how artificial intelligence has been used to diagnose minimal hepatic encephalopathy to date, and we discuss its further potential for analyzing speech and handwriting data, which are probably the most accessible data for evaluating a patient's cognitive state.
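The speech- and handwriting-based screening direction discussed here can be illustrated with a toy linear risk score. The feature names, weights, and threshold below are entirely hypothetical assumptions for illustration; real systems of the kind the review covers would learn such parameters from labeled patient data.

```python
def mhe_risk_score(features, weights, threshold=0.5):
    """Toy linear screening score over speech/handwriting features.
    Returns the weighted sum and whether it crosses the flag threshold."""
    score = sum(w * features.get(name, 0.0) for name, w in weights.items())
    return score, score >= threshold

# Hypothetical feature weights (NOT from the reviewed studies)
weights = {"pause_rate": 0.6, "speech_rate_drop": 0.3, "stroke_jitter": 0.4}
patient = {"pause_rate": 0.8, "speech_rate_drop": 0.2, "stroke_jitter": 0.1}
score, flagged = mhe_risk_score(patient, weights)
```

The appeal of such features is exactly what the abstract notes: speech and handwriting can be captured with commodity devices, making repeated, low-burden cognitive screening feasible.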

32 pages, 698 KiB  
Review
A Systematic Review on the Contribution of Artificial Intelligence in the Development of Medicines for COVID-2019
by Carla Pires
J. Pers. Med. 2021, 11(9), 926; https://doi.org/10.3390/jpm11090926 - 18 Sep 2021
Cited by 8 | Viewed by 3629
Abstract
Background: The COVID-2019 pandemic led to increased interest in the development of new treatments through Artificial Intelligence (AI). Aim: To carry out a systematic review on the development of repurposed drugs against COVID-2019 through the application of AI. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist was applied. Keywords: [“Artificial intelligence” and (COVID or SARS) and (medicine or drug)]. Databases: PubMed®, DOAJ, and SciELO. The Cochrane Library was additionally screened to identify previously published reviews on the same topic. Results: Of the 277 identified records [PubMed® (n = 157), DOAJ (n = 119), and SciELO (n = 1)], 27 studies were included. The selected studies on new treatments against COVID-2019 were classified as follows: studies with in vitro and/or clinical data; associations of known drugs; and other studies related to drug repurposing. Conclusion: Diverse potentially repurposed drugs against COVID-2019 were identified. The repurposed drugs mainly belonged to the antiviral, antibiotic, anticancer, anti-inflammatory, and angiotensin-converting enzyme 2 (ACE2) groups, although diverse other pharmacologic groups were covered. AI was a suitable tool for quickly analyzing large amounts of data and for predicting drug repurposing against COVID-2019.
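The Boolean search string reported in the Methods translates directly into a record filter. The sketch below shows that logic only; the sample titles are invented for illustration and are not records from the review.

```python
def matches_query(text):
    """Mirror the review's search string:
    "Artificial intelligence" AND (COVID OR SARS) AND (medicine OR drug)."""
    t = text.lower()
    return ("artificial intelligence" in t
            and ("covid" in t or "sars" in t)
            and ("medicine" in t or "drug" in t))

# Invented example titles, purely to exercise the filter
titles = [
    "Artificial intelligence for drug repurposing against SARS-CoV-2",
    "Deep learning in dermatology",
    "Artificial intelligence and precision medicine in COVID-19 care",
]
included = [t for t in titles if matches_query(t)]
```

In practice such a keyword filter only produces the initial candidate set; the PRISMA workflow then removes duplicates and applies eligibility screening, which is how 277 records were narrowed to 27 studies.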

26 pages, 1112 KiB  
Review
Automatic Segmentation of Mandible from Conventional Methods to Deep Learning—A Review
by Bingjiang Qiu, Hylke van der Wel, Joep Kraeima, Haye Hendrik Glas, Jiapan Guo, Ronald J. H. Borra, Max Johannes Hendrikus Witjes and Peter M. A. van Ooijen
J. Pers. Med. 2021, 11(7), 629; https://doi.org/10.3390/jpm11070629 - 01 Jul 2021
Cited by 27 | Viewed by 4723
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to visualize mandible volumes effectively and to evaluate particular mandible properties quantitatively. However, mandible segmentation remains challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth (fillings) or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious, time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms over the last two decades to segment the mandible automatically. The objective of this review was to present the fully automatic and semi-automatic mandible segmentation methods published in the scientific literature. It provides clinicians and researchers with an overview of these advances to help develop novel automatic methods for clinical applications.
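At the "conventional" end of the spectrum this review covers sits intensity thresholding, which deep learning methods have since superseded. The sketch below is a minimal illustration under stated assumptions: the threshold range (226–3071, a bone window used as a default in some planning software) and the toy slice values are illustrative, and real pipelines add morphology, region growing, or statistical shape models on top.

```python
def threshold_segment(image, low=226, high=3071):
    """Binary mask: 1 where the intensity (Hounsfield-like units,
    here assumed pre-scaled) falls inside the bone window, else 0."""
    return [[1 if low <= v <= high else 0 for v in row] for row in image]

# Toy 2x3 "CT slice" (illustrative values, not real scan data)
slice_ = [
    [40, 1300, 2900],
    [900, 3200, 100],
]
mask = threshold_segment(slice_)
```

The abstract's noted failure modes map directly onto this baseline: dental fillings and metal implants push intensities above the window and streak artifacts across it, which is one motivation for the learning-based methods the review surveys.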
