Search Results (893)

Search Parameters:
Keywords = AUROC

19 pages, 1185 KiB  
Article
PredictMed-CDSS: Artificial Intelligence-Based Decision Support System Predicting the Probability to Develop Neuromuscular Hip Dysplasia
by Carlo M. Bertoncelli, Federico Solla, Michal Latalski, Sikha Bagui, Subhash C. Bagui, Stefania Costantini and Domenico Bertoncelli
Bioengineering 2025, 12(8), 846; https://doi.org/10.3390/bioengineering12080846 - 6 Aug 2025
Abstract
Neuromuscular hip dysplasia (NHD) is a common deformity in children with cerebral palsy (CP). Although some predictive factors of NHD are known, the prediction of NHD is in its infancy. We present a Clinical Decision Support System (CDSS) designed to calculate the probability of developing NHD in children with CP. The system utilizes an ensemble of three machine learning (ML) algorithms: Neural Network (NN), Support Vector Machine (SVM), and Logistic Regression (LR). The development and evaluation of the CDSS followed the DECIDE-AI guidelines for AI-driven clinical decision support tools. The ensemble was trained on a data series from 182 subjects. Inclusion criteria were age between 12 and 18 years and diagnosis of CP from two specialized units. Clinical and functional data were collected prospectively between 2005 and 2023, and then analyzed in a cross-sectional study. Accuracy and area under the receiver operating characteristic curve (AUROC) were calculated for each method. Best logistic regression scores highlighted history of previous orthopedic surgery (p = 0.001), poor motor function (p = 0.004), truncal tone disorder (p = 0.008), scoliosis (p = 0.031), number of affected limbs (p = 0.05), and epilepsy (p = 0.05) as predictors of NHD. Both accuracy and AUROC were highest for NN, 83.7% and 0.92, respectively. The novelty of this study lies in the development of an efficient CDSS prototype, specifically designed to predict future outcomes of NHD in patients with CP using clinical data. The proposed system, PredictMed-CDSS, demonstrated strong predictive performance for estimating the probability of NHD development in children with CP, with the highest accuracy achieved using neural networks. PredictMed-CDSS has the potential to assist clinicians in anticipating the need for early interventions and preventive strategies in the management of NHD among CP patients.
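The abstract describes an ensemble of a neural network, an SVM, and logistic regression evaluated by accuracy and AUROC. As a rough illustration only (the paper does not specify how the three models are combined), the sketch below assumes a soft-voting ensemble in scikit-learn and uses synthetic data in place of the 182-subject clinical dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the clinical/functional predictors (not the study data).
X, y = make_classification(n_samples=182, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("nn", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="soft",  # assumption: average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, proba))
print("Accuracy:", accuracy_score(y_te, (proba >= 0.5).astype(int)))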

15 pages, 2070 KiB  
Article
Machine Learning for Personalized Prediction of Electrocardiogram (EKG) Use in Emergency Care
by Hairong Wang and Xingyu Zhang
J. Pers. Med. 2025, 15(8), 358; https://doi.org/10.3390/jpm15080358 - 6 Aug 2025
Abstract
Background: Electrocardiograms (EKGs) are essential tools in emergency medicine, often used to evaluate chest pain, dyspnea, and other symptoms suggestive of cardiac dysfunction. Yet, EKGs are not universally administered to all emergency department (ED) patients. Understanding and predicting which patients receive an EKG may offer insights into clinical decision making, resource allocation, and potential disparities in care. This study examines whether integrating structured clinical data with free-text patient narratives can improve prediction of EKG utilization in the ED. Methods: We conducted a retrospective observational study to predict electrocardiogram (EKG) utilization using data from 13,115 adult emergency department (ED) visits in the nationally representative 2021 National Hospital Ambulatory Medical Care Survey–Emergency Department (NHAMCS-ED), leveraging both structured features—demographics, vital signs, comorbidities, arrival mode, and triage acuity, with the most influential selected via Lasso regression—and unstructured patient narratives transformed into numerical embeddings using Clinical-BERT. Four supervised learning models—Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF) and Extreme Gradient Boosting (XGB)—were trained on three inputs (structured data only, text embeddings only, and a late-fusion combined model); hyperparameters were optimized by grid search with 5-fold cross-validation; performance was evaluated via AUROC, accuracy, sensitivity, specificity and precision; and interpretability was assessed using SHAP values and Permutation Feature Importance. Results: EKGs were administered in 30.6% of adult ED visits. Patients who received EKGs were more likely to be older, White, Medicare-insured, and to present with abnormal vital signs or higher triage severity. Across all models, the combined data approach yielded superior predictive performance. The SVM and LR achieved the highest area under the ROC curve (AUC = 0.860 and 0.861) when using both structured and unstructured data, compared to 0.772 with structured data alone and 0.823 and 0.822 with unstructured data alone. Similar improvements were observed in accuracy, sensitivity, and specificity. Conclusions: Integrating structured clinical data with patient narratives significantly enhances the ability to predict EKG utilization in the emergency department. These findings support a personalized medicine framework by demonstrating how multimodal data integration can enable individualized, real-time decision support in the ED.
(This article belongs to the Special Issue Machine Learning in Epidemiology)
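The study combines structured triage features with Clinical-BERT text embeddings in a late-fusion model. The following sketch is only a schematic of that idea: random arrays stand in for both the NHAMCS-ED structured features and the narrative embeddings, and probability averaging is assumed as the fusion rule, which the abstract does not specify.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X_struct = rng.normal(size=(n, 20))    # stand-in for triage vitals/demographics
X_text = rng.normal(size=(n, 768))     # stand-in for Clinical-BERT embeddings
y = (X_struct[:, 0] + 0.5 * X_text[:, 0] + rng.normal(size=n) > 0).astype(int)

idx_tr, idx_te = train_test_split(np.arange(n), stratify=y, random_state=0)
m_struct = LogisticRegression(max_iter=1000).fit(X_struct[idx_tr], y[idx_tr])
m_text = LogisticRegression(max_iter=1000).fit(X_text[idx_tr], y[idx_tr])

# Late fusion: combine the two unimodal models at the prediction level.
p_fused = 0.5 * (m_struct.predict_proba(X_struct[idx_te])[:, 1]
                 + m_text.predict_proba(X_text[idx_te])[:, 1])
print("fused AUROC:", roc_auc_score(y[idx_te], p_fused))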

15 pages, 271 KiB  
Article
Are We Considering All the Potential Drug–Drug Interactions in Women’s Reproductive Health? A Predictive Model Approach
by Pablo Garcia-Acero, Ismael Henarejos-Castillo, Francisco Jose Sanz, Patricia Sebastian-Leon, Antonio Parraga-Leo, Juan Antonio Garcia-Velasco and Patricia Diaz-Gimeno
Pharmaceutics 2025, 17(8), 1020; https://doi.org/10.3390/pharmaceutics17081020 - 6 Aug 2025
Abstract
Background: Drug–drug interactions (DDIs) may occur when two or more drugs are taken together, leading to undesired side effects or potential synergistic effects. Most clinical effects of drug combinations have not been assessed in clinical trials. Therefore, predicting DDIs can provide better patient management, avoid drug combinations that can negatively affect patient care, and exploit potential synergistic combinations to improve current therapies in women's healthcare. Methods: A DDI prediction model was built to describe relevant drug combinations affecting reproductive treatments. Approved drug features (chemical structure of drugs, side effects, targets, enzymes, carriers and transporters, pathways, protein–protein interactions, and interaction profile fingerprints) were obtained. A unified predictive score revealed unknown DDIs between reproductive and commonly used drugs and their associated clinical effects on reproductive health. The performance of the prediction model was validated using known DDIs. Results: This prediction model accurately predicted known interactions (AUROC = 0.9876) and identified 2991 new DDIs between 192 drugs used in different female reproductive conditions and other drugs used to treat unrelated conditions. These DDIs included 836 between drugs used for in vitro fertilization. Most new DDIs involved estradiol, acetaminophen, bupivacaine, risperidone, and follitropin. Follitropin, bupivacaine, and gonadorelin had the highest discovery rate (42%, 32%, and 25%, respectively). Some were expected to improve current therapies (n = 23), while others would cause harmful effects (n = 11). We also predicted twelve DDIs between oral contraceptives and HIV drugs that could compromise their efficacy. Conclusions: These results show the importance of DDI studies aimed at identifying interactions that might compromise or improve treatment efficacy, which could help personalize female reproductive therapies.
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics)
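The abstract describes scoring drug pairs from several feature types and validating the unified score against known DDIs with AUROC. The sketch below is only a generic stand-in for that pattern: toy binary drug profiles, Tanimoto similarity per feature block as pair features, a random-forest score, and roc_auc_score for validation. None of the numbers or feature definitions come from the paper, and the random labels make the printed AUROC meaningless beyond demonstrating the workflow.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_drugs, n_feats = 192, 300
fingerprints = rng.integers(0, 2, size=(n_drugs, n_feats))   # toy binary drug feature profiles

def tanimoto(a, b):
    """Tanimoto similarity of two binary vectors."""
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

# One row per drug pair; features are block-wise similarities of the two profiles.
pairs = [(i, j) for i in range(n_drugs) for j in range(i + 1, n_drugs)]
X = np.array([[tanimoto(fingerprints[i, k:k + 100], fingerprints[j, k:k + 100])
               for k in (0, 100, 200)] for i, j in pairs])
y = rng.integers(0, 2, size=len(pairs))                       # toy "known DDI" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUROC on held-out known DDIs:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))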
11 pages, 480 KiB  
Article
A Novel Deep Learning Model for Predicting Colorectal Anastomotic Leakage: A Pioneer Multicenter Transatlantic Study
by Miguel Mascarenhas, Francisco Mendes, Filipa Fonseca, Eduardo Carvalho, Andre Santos, Daniela Cavadas, Guilherme Barbosa, Antonio Pinto da Costa, Miguel Martins, Abdullah Bunaiyan, Maísa Vasconcelos, Marley Ribeiro Feitosa, Shay Willoughby, Shakil Ahmed, Muhammad Ahsan Javed, Nilza Ramião, Guilherme Macedo and Manuel Limbert
J. Clin. Med. 2025, 14(15), 5462; https://doi.org/10.3390/jcm14155462 - 3 Aug 2025
Abstract
Background/Objectives: Colorectal anastomotic leak (CAL) is one of the most severe postoperative complications in colorectal surgery, impacting patient morbidity and mortality. Current risk assessment methods rely on clinical and intraoperative factors, but no real-time predictive tool exists. This study aimed to develop an artificial intelligence model based on intraoperative laparoscopic recording of the anastomosis for CAL prediction. Methods: A convolutional neural network (CNN) was trained with annotated frames from colorectal surgery videos across three international high-volume centers (Instituto Português de Oncologia de Lisboa, Hospital das Clínicas de Ribeirão Preto, and Royal Liverpool University Hospital). The dataset included a total of 5356 frames from 26 patients, 2007 with CAL and 3349 showing normal anastomosis. Four CNN architectures (EfficientNetB0, EfficientNetB7, ResNet50, and MobileNetV2) were tested. The models' performance was evaluated using their sensitivity, specificity, accuracy, and area under the receiver operating characteristic (AUROC) curve. Heatmaps were generated to identify key image regions influencing predictions. Results: The best-performing model achieved an accuracy of 99.6%, AUROC of 99.6%, sensitivity of 99.2%, specificity of 100.0%, PPV of 100.0%, and NPV of 98.9%. The model reliably identified CAL-positive frames and provided visual explanations through heatmaps. Conclusions: To our knowledge, this is the first AI model developed to predict CAL using intraoperative video analysis. Its accuracy suggests the potential to redefine surgical decision-making by providing real-time risk assessment. Further refinement with a larger dataset and diverse surgical techniques could enable intraoperative interventions to prevent CAL before it occurs, marking a paradigm shift in colorectal surgery.
(This article belongs to the Special Issue Updates in Digestive Diseases and Endoscopy)
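The study fine-tunes several pretrained CNNs to classify surgical video frames as leak versus normal anastomosis. Below is a minimal PyTorch skeleton of that kind of binary frame classifier using torchvision's ResNet50 (one of the tested backbones); recent torchvision (>= 0.13) API is assumed, the random tensors stand in for labelled video frames, and weights=None avoids a download, whereas in practice ImageNet weights would be loaded for transfer learning.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)           # assumption: use pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: CAL vs. normal anastomosis

frames = torch.randn(8, 3, 224, 224)            # stand-in batch of annotated frames
labels = torch.randint(0, 2, (8,))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

optimizer.zero_grad()                           # one illustrative training step
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("frame probabilities:", torch.softmax(logits, dim=1)[:2])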

10 pages, 586 KiB  
Article
The Role of Systemic Immune-Inflammation Index (SII) in Diagnosing Pediatric Acute Appendicitis
by Binali Firinci, Cetin Aydin, Dilek Yunluel, Ahmad Ibrahim, Murat Yigiter and Ali Ahiskalioglu
Diagnostics 2025, 15(15), 1942; https://doi.org/10.3390/diagnostics15151942 - 2 Aug 2025
Abstract
Background and Objectives: Accurately diagnosing acute appendicitis (AA) in children remains clinically challenging due to overlapping symptoms with other pediatric conditions and limitations in conventional diagnostic tools. The systemic immune-inflammation index (SII) has emerged as a promising biomarker in adult populations; however, its utility in pediatrics is still unclear. This study aimed to evaluate the diagnostic accuracy of SII in distinguishing pediatric acute appendicitis from elective non-inflammatory surgical procedures and to assess its predictive value in identifying complicated cases. Materials and Methods: This retrospective, single-center study included 397 pediatric patients (5–15 years), comprising 297 histopathologically confirmed appendicitis cases and 100 controls. Demographic and laboratory data were recorded at admission. Inflammatory indices including SII, neutrophil-to-lymphocyte ratio (NLR), and platelet-to-lymphocyte ratio (PLR) were calculated. ROC curve analysis was performed to evaluate diagnostic performance. Results: SII values were significantly higher in the appendicitis group (median: 2218.4 vs. 356.3; p < 0.001). SII demonstrated excellent diagnostic accuracy for AA (AUROC = 0.95, 95% CI: 0.92–0.97), with 91% sensitivity and 88% specificity at a cut-off > 624. In predicting complicated appendicitis, SII showed moderate discriminative ability (AUROC = 0.66, 95% CI: 0.60–0.73), with 83% sensitivity but limited specificity (43%). Conclusions: SII is a reliable and easily obtainable biomarker for diagnosing pediatric acute appendicitis and may aid in early detection of complicated cases. Its integration into clinical workflows may enhance diagnostic precision, particularly in resource-limited settings. Age-specific validation studies are warranted to confirm its broader applicability.
(This article belongs to the Special Issue Diagnosis and Treatment of Pediatric Emergencies—2nd Edition)
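The SII is commonly computed as platelet count × neutrophil count / lymphocyte count, and the abstract reports a diagnostic cut-off derived from ROC analysis. The sketch below shows that workflow on made-up blood counts and labels (so the printed AUROC and cut-off are meaningless), using the Youden index to pick the cut-off, which is a common choice but not necessarily the criterion the authors used.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 397
neutrophils = rng.normal(8, 3, n).clip(1)      # toy counts, 10^3 cells/uL
lymphocytes = rng.normal(2.5, 1, n).clip(0.3)
platelets = rng.normal(300, 80, n).clip(100)
appendicitis = rng.integers(0, 2, n)           # toy labels

# SII = platelets x neutrophils / lymphocytes (common definition; check units in practice).
sii = platelets * neutrophils / lymphocytes

fpr, tpr, thresholds = roc_curve(appendicitis, sii)
best = np.argmax(tpr - fpr)                    # Youden index J = sensitivity + specificity - 1
print("AUROC:", roc_auc_score(appendicitis, sii))
print(f"cut-off > {thresholds[best]:.1f}: sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")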

12 pages, 869 KiB  
Article
Neonatal Jaundice Requiring Phototherapy Risk Factors in a Newborn Nursery: Machine Learning Approach
by Yunjin Choi, Sunyoung Park and Hyungbok Lee
Children 2025, 12(8), 1020; https://doi.org/10.3390/children12081020 - 1 Aug 2025
Abstract
Background: Neonatal jaundice is common and can cause severe hyperbilirubinemia if untreated. The early identification of at-risk newborns is challenging despite the existing guidelines. Objective: This study aimed to identify the key maternal and neonatal risk factors for jaundice requiring phototherapy using machine learning. Methods: At the study hospital, phototherapy was administered following the American Academy of Pediatrics (AAP) guidelines when a neonate's transcutaneous bilirubin level was in the high-risk zone. To identify the risk factors for phototherapy, we retrospectively analyzed the electronic medical records of 8242 neonates admitted between 2017 and 2022. Predictive models were trained using maternal and neonatal data. XGBoost showed the best performance (AUROC = 0.911), and SHAP values were used to interpret the model. Results: Mode of delivery, neonatal feeding indicators (including daily formula intake and breastfeeding frequency), maternal BMI, and maternal white blood cell count were strong predictors. Cesarean delivery and lower birth weight were linked to treatment need. Conclusions: Machine learning models using perinatal data accurately predict the risk of neonatal jaundice requiring phototherapy, potentially aiding early clinical decisions and improving outcomes.
(This article belongs to the Section Pediatric Nursing)
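The study trains a gradient-boosted model on perinatal records and interprets it with SHAP. A minimal sketch of that pipeline is shown below, assuming the xgboost and shap packages and a synthetic stand-in for the 8242-neonate dataset:

import xgboost as xgb
import shap
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the maternal/neonatal features.
X, y = make_classification(n_samples=8242, n_features=20, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP explains each prediction as a sum of per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)   # global view of feature importance and direction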

13 pages, 777 KiB  
Article
Nomogram Development and Feature Selection Strategy Comparison for Predicting Surgical Site Infection After Lower Extremity Fracture Surgery
by Humam Baki and Atilla Sancar Parmaksızoğlu
Medicina 2025, 61(8), 1378; https://doi.org/10.3390/medicina61081378 - 30 Jul 2025
Abstract
Background and Objectives: Surgical site infections (SSIs) are a frequent complication after lower extremity fracture surgery, yet tools for individualized risk prediction remain limited. This study aimed to develop and internally validate a nomogram for individualized SSI risk prediction based on perioperative clinical parameters. Materials and Methods: This retrospective cohort study included adults who underwent lower extremity fracture surgery between 2022 and 2025 at a tertiary care center. Thirty candidate predictors were evaluated. Feature selection was performed using six strategies, and the final model was developed with logistic regression based on bootstrap inclusion frequency. Model performance was assessed by area under the curve, calibration slope, Brier score, sensitivity, and specificity. Results: Among 638 patients undergoing lower extremity fracture surgery, 76 (11.9%) developed SSIs. Of six feature selection strategies compared, bootstrap inclusion frequency identified seven predictors: red blood cell count, preoperative C-reactive protein, chronic kidney disease, operative time, chronic obstructive pulmonary disease, body mass index, and blood transfusion. The final model demonstrated an AUROC of 0.924 (95% CI, 0.876–0.973), a calibration slope of 1.03, and a Brier score of 0.0602. Sensitivity was 86.2% (95% CI, 69.4–94.5) and specificity was 89.5% (95% CI, 83.8–93.3). Chronic kidney disease (OR, 88.75; 95% CI, 5.51–1428.80) and blood transfusion (OR, 85.07; 95% CI, 11.69–619.09) were the strongest predictors of infection. Conclusions: The developed nomogram demonstrates strong predictive performance and may support personalized SSI risk assessment in patients undergoing lower extremity fracture surgery.
(This article belongs to the Special Issue Evaluation, Management, and Outcomes in Perioperative Medicine)
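Feature selection by bootstrap inclusion frequency refits a sparse model on many bootstrap resamples and keeps predictors selected in a large fraction of them. The sketch below illustrates the idea with an L1-penalized logistic regression and an arbitrary 70% inclusion threshold on synthetic data; the penalty strength, threshold, and number of resamples are assumptions, not values from the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=638, n_features=30, n_informative=7,
                           weights=[0.88, 0.12], random_state=0)

n_boot, keep_threshold = 200, 0.7
selected = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, len(y), len(y))              # bootstrap resample with replacement
    sparse_model = make_pipeline(StandardScaler(),
                                 LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
    sparse_model.fit(X[idx], y[idx])
    coefs = sparse_model.named_steps["logisticregression"].coef_.ravel()
    selected += (np.abs(coefs) > 1e-8)                 # count features kept in this resample

inclusion_freq = selected / n_boot
kept = np.where(inclusion_freq >= keep_threshold)[0]
print("features kept:", kept)

# Final (unpenalized) logistic regression on the retained predictors.
final_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
final_model.fit(X[:, kept], y)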

16 pages, 2784 KiB  
Article
Development of Stacked Neural Networks for Application with OCT Data, to Improve Diabetic Retinal Health Care Management
by Pedro Rebolo, Guilherme Barbosa, Eduardo Carvalho, Bruno Areias, Ana Guerra, Sónia Torres-Costa, Nilza Ramião, Manuel Falcão and Marco Parente
Information 2025, 16(8), 649; https://doi.org/10.3390/info16080649 - 30 Jul 2025
Abstract
Background: Retinal diseases are becoming an important public health issue, with early diagnosis and timely intervention playing a key role in preventing vision loss. Optical coherence tomography (OCT) remains the leading non-invasive imaging technique for identifying retinal conditions. However, distinguishing between diabetic macular edema (DME) and macular edema resulting from retinal vein occlusion (RVO) can be particularly challenging, especially for clinicians without specialized training in retinal disorders, as both conditions manifest through increased retinal thickness. Due to the limited research exploring the application of deep learning methods, particularly for RVO detection using OCT scans, this study proposes a novel diagnostic approach based on stacked convolutional neural networks. This architecture aims to enhance classification accuracy by integrating multiple neural network layers, enabling more robust feature extraction and improved differentiation between retinal pathologies. Methods: The VGG-16, VGG-19, and ResNet50 models were fine-tuned on the Kermany dataset to classify the OCT images and afterwards trained on a private OCT dataset. Four stacked models were then developed from these networks: VGG-16 with VGG-19, VGG-16 with ResNet50, VGG-19 with ResNet50, and all three networks combined. The performance metrics of the models include accuracy, precision, recall, F2-score, and area under the receiver operating characteristic curve (AUROC). Results: The stacked neural network using all three models achieved the best results, with an accuracy of 90.7%, a precision of 99.2%, a recall of 90.7%, and an F2-score of 92.3%. Conclusions: This study presents a novel method for distinguishing retinal diseases using stacked neural networks. This research aims to provide a reliable tool for ophthalmologists to improve diagnostic accuracy and speed.
(This article belongs to the Special Issue AI-Based Biomedical Signal Processing)
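Stacking combines the class-probability outputs of several base networks through a meta-learner. Since the fine-tuned VGG/ResNet models and the OCT images are not reproducible here, the sketch below simulates three base models' softmax outputs and fits a logistic-regression meta-learner on their concatenation; the four-class setup and the "skill" parameter are purely illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_val, n_test, n_classes = 800, 200, 4
y_val = rng.integers(0, n_classes, n_val)
y_test = rng.integers(0, n_classes, n_test)

def fake_cnn_probs(y, skill=2.0):
    """Stand-in for the softmax outputs of one fine-tuned CNN (e.g., VGG-16)."""
    logits = rng.normal(size=(len(y), n_classes))
    logits[np.arange(len(y)), y] += skill          # make the true class somewhat more likely
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Concatenate the three base models' class probabilities as meta-features.
Z_val = np.hstack([fake_cnn_probs(y_val) for _ in range(3)])
Z_test = np.hstack([fake_cnn_probs(y_test) for _ in range(3)])

meta = LogisticRegression(max_iter=1000).fit(Z_val, y_val)   # the stacking meta-learner
print("stacked accuracy:", accuracy_score(y_test, meta.predict(Z_test)))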

22 pages, 1724 KiB  
Article
Development and Clinical Interpretation of an Explainable AI Model for Predicting Patient Pathways in the Emergency Department: A Retrospective Study
by Émilien Arnaud, Pedro Antonio Moreno-Sanchez, Mahmoud Elbattah, Christine Ammirati, Mark van Gils, Gilles Dequen and Daniel Aiham Ghazali
Appl. Sci. 2025, 15(15), 8449; https://doi.org/10.3390/app15158449 - 30 Jul 2025
Abstract
Background: Overcrowded emergency departments (EDs) create significant challenges for patient management and hospital efficiency. In response, Amiens Picardy University Hospital (APUH) developed the "Prediction of the Patient Pathway in the Emergency Department" (3P-U) model to enhance patient flow management. Objectives: To develop and clinically validate an explainable artificial intelligence (XAI) model for hospital admission predictions, using structured triage data, and demonstrate its real-world applicability in the ED setting. Methods: Our retrospective, single-center study involved 351,019 patients who presented to APUH's EDs between 2015 and 2018. Various models (including a cross-validated artificial neural network (ANN), a k-nearest neighbors (KNN) model, a logistic regression (LR) model, and a random forest (RF) model) were trained and assessed for performance with regard to the area under the receiver operating characteristic curve (AUROC). The best model was validated internally with a test set, and the F1 score was used to determine the best threshold for recall, precision, and accuracy. XAI techniques such as Shapley additive explanations (SHAP) and partial dependence plots (PDP) were employed, and the clinical explanations were evaluated by emergency physicians. Results: The ANN gave the best performance during the training stage, with an AUROC of 83.1% (SD: 0.2%) for the test set; it surpassed the RF (AUROC: 71.6%, SD: 0.1%), KNN (AUROC: 67.2%, SD: 0.2%), and LR (AUROC: 71.5%, SD: 0.2%) models. In an internal validation, the ANN's AUROC was 83.2%. The best F1 score (0.67) determined that 0.35 was the optimal threshold; the corresponding recall, precision, and accuracy were 75.7%, 59.7%, and 75.3%, respectively. The SHAP and PDP XAI techniques (as assessed by emergency physicians) highlighted patient age, heart rate, and presentation with multiple injuries as the features that most strongly influenced admission from the ED to a hospital ward. These insights are being used in bed allocation and patient prioritization, directly improving ED operations. Conclusions: The 3P-U model demonstrates practical utility by reducing ED crowding and enhancing decision-making processes at APUH. Its transparency and physician validation foster trust, facilitating its adoption in clinical practice and offering a replicable framework for other hospitals to optimize patient flow.
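The paper selects its operating threshold by maximizing the F1 score and then reports recall, precision, and accuracy at that threshold. A small sketch of that step, on synthetic admission probabilities rather than the 3P-U model's outputs, could look like this:

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_curve, precision_score, recall_score

rng = np.random.default_rng(0)
latent = rng.normal(size=5000)
admitted = (latent + rng.normal(size=5000) > 0).astype(int)   # toy admission labels
prob = 1 / (1 + np.exp(-latent))                              # toy predicted probabilities

precision, recall, thresholds = precision_recall_curve(admitted, prob)
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best = int(np.argmax(f1))                                     # threshold with the best F1
threshold = thresholds[best]

pred = (prob >= threshold).astype(int)
print(f"threshold={threshold:.2f}  F1={f1[best]:.2f}  "
      f"recall={recall_score(admitted, pred):.2f}  "
      f"precision={precision_score(admitted, pred):.2f}  "
      f"accuracy={accuracy_score(admitted, pred):.2f}")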

18 pages, 3277 KiB  
Article
A Clinical Prediction Model for Personalised Emergency Department Discharge Decisions for Residential Care Facility Residents Post-Fall
by Gigi Guan, Kadison Michel, Charlie Corke and Geetha Ranmuthugala
J. Pers. Med. 2025, 15(8), 332; https://doi.org/10.3390/jpm15080332 - 30 Jul 2025
Abstract
Introduction: Falls are the leading cause of Emergency Department (ED) presentations among residents from residential aged care facilities (RACFs). While most current studies focus on post-fall evaluations and fall prevention, limited research has been conducted on decision-making in post-fall management. Objective: To develop and internally validate a model that can predict the likelihood of RACF residents being discharged from the ED after presenting with a fall. Methods: The study sample was obtained from a previous study conducted in Shepparton, Victoria, Australia. Consecutive samples were selected from January 2023 to November 2023. Participants aged 65 and over were included in this study. Results: A total of 261 fall presentations were initially identified. One patient with Australasian Triage Scale category 1 was excluded to avoid overfitting, leaving 260 presentations for analysis. Two logistic regression models were developed using prehospital and ED variables. The ED predictor model variables included duration of ED stay, injury severity, and the presence of an advance care directive (ACD). This model demonstrated excellent discrimination (AUROC = 0.83; 95% CI: 0.79–0.89) compared to the prehospital model (AUROC = 0.77, 95% CI: 0.72–0.83). A simplified four-variable Discharge Eligibility after Fall in Elderly Residents (DEFER) score was derived from the prehospital model. The score achieved an AUROC of 0.76 (95% CI: 0.71–0.82). At a cut-off score of ≥5, the DEFER score exhibited a sensitivity of 79.7%, a specificity of 60.3%, a diagnostic odds ratio of 5.96, and a positive predictive value of 85.0%. Conclusions: The DEFER score is the first validated discharge prediction model for residents of RACFs who present to the ED after a fall. Importantly, the DEFER score advances personalised medicine in emergency care by integrating patient-specific factors, such as ACDs, to guide individualised discharge decisions for post-fall residents from RACFs.
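At a chosen cut-off, a score such as DEFER is summarized by sensitivity, specificity, positive predictive value, and the diagnostic odds ratio. The helper below computes those quantities from a 2x2 confusion table; the toy integer scores and labels are invented and do not reproduce the DEFER score itself.

import numpy as np

def cutoff_metrics(score, outcome, cutoff):
    """Sensitivity, specificity, PPV, and diagnostic odds ratio at score >= cutoff."""
    pred = (score >= cutoff).astype(int)
    tp = np.sum((pred == 1) & (outcome == 1))
    fp = np.sum((pred == 1) & (outcome == 0))
    fn = np.sum((pred == 0) & (outcome == 1))
    tn = np.sum((pred == 0) & (outcome == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    dor = (tp * tn) / (fp * fn) if fp * fn else float("inf")
    return sensitivity, specificity, ppv, dor

rng = np.random.default_rng(0)
score = rng.integers(0, 11, 260)                             # toy integer scores
discharged = (score + rng.normal(scale=2, size=260) > 5).astype(int)
sens, spec, ppv, dor = cutoff_metrics(score, discharged, cutoff=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} DOR={dor:.2f}")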

27 pages, 8496 KiB  
Article
Comparative Performance of Machine Learning Models for Landslide Susceptibility Assessment: Impact of Sampling Strategies in Highway Buffer Zone
by Zhenyu Tang, Shumao Qiu, Haoying Xia, Daming Lin and Mingzhou Bai
Appl. Sci. 2025, 15(15), 8416; https://doi.org/10.3390/app15158416 - 29 Jul 2025
Abstract
Landslide susceptibility assessment is critical for hazard mitigation and land-use planning. This study evaluates the impact of two different non-landslide sampling methods—random sampling and sampling constrained by the Global Landslide Hazard Map (GLHM)—on the performance of various machine learning and deep learning models, including Naïve Bayes (NB), Support Vector Machine (SVM), SVM-Random Forest hybrid (SVM-RF), and XGBoost. The study area is a 2 km buffer zone along the Duku Highway in Xinjiang, China, with 102 landslide and 102 non-landslide points extracted using the aforementioned sampling methods. Models were tested using ROC curves and non-parametric significance tests based on 20 repetitions of 5-fold spatial cross-validation. GLHM sampling consistently improved AUROC and accuracy across all models (e.g., AUROC gains: NB +8.44, SVM +7.11, SVM-RF +3.45, XGBoost +3.04; accuracy gains: NB +11.30%, SVM +8.33%, SVM-RF +7.40%, XGBoost +8.31%). XGBoost delivered the best performance under both sampling strategies, reaching 94.61% AUROC and 84.30% accuracy with GLHM sampling. SHAP analysis showed that GLHM sampling stabilized feature importance rankings, highlighting STI, TWI, and NDVI as the main controlling factors for landslides in the study area. These results highlight the importance of hazard-informed sampling to enhance landslide susceptibility modeling accuracy and interpretability.
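The comparison of sampling strategies rests on repeated 5-fold cross-validated AUROC for each strategy. The sketch below mimics that design with ordinary (non-spatial) repeated stratified 5-fold CV, an SVM as one of the tested learners, and two synthetic datasets whose class separation stands in for the effect of random versus GLHM-constrained non-landslide sampling; all of these are simplifying assumptions, not the study's data or spatial CV scheme.

from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X, y, label):
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(probability=True))
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{label}: AUROC {scores.mean():.3f} +/- {scores.std():.3f}")

# Toy stand-ins: random negatives overlap more with positives (lower class separation),
# hazard-informed negatives come from clearly low-susceptibility conditions.
X_rand, y_rand = make_classification(n_samples=204, n_features=10, class_sep=0.8, random_state=1)
X_glhm, y_glhm = make_classification(n_samples=204, n_features=10, class_sep=1.6, random_state=1)

evaluate(X_rand, y_rand, "random non-landslide sampling")
evaluate(X_glhm, y_glhm, "GLHM-constrained sampling")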

23 pages, 481 KiB  
Review
Bug Wars: Artificial Intelligence Strikes Back in Sepsis Management
by Georgios I. Barkas, Ilias E. Dimeas and Ourania S. Kotsiou
Diagnostics 2025, 15(15), 1890; https://doi.org/10.3390/diagnostics15151890 - 28 Jul 2025
Abstract
Sepsis remains a leading global cause of mortality, with delayed recognition and empirical antibiotic overuse fueling poor outcomes and rising antimicrobial resistance. This systematic scoping review evaluates the current landscape of artificial intelligence (AI) and machine learning (ML) applications in sepsis care, focusing on early detection, personalized antibiotic management, and resistance forecasting. Literature from 2019 to 2025 was systematically reviewed following PRISMA-ScR guidelines. A total of 129 full-text articles were analyzed, with study quality assessed via the JBI and QUADAS-2 tools. AI-based models demonstrated robust predictive performance for early sepsis detection (AUROC 0.68–0.99), antibiotic stewardship, and resistance prediction. Notable tools, such as InSight and KI.SEP, leveraged multimodal clinical and biomarker data to provide actionable, real-time support and facilitate timely interventions. AI-driven platforms showed potential to reduce inappropriate antibiotic use and nephrotoxicity while optimizing outcomes. However, most models are limited by single-center data, variable interpretability, and insufficient real-world validation. Key challenges remain regarding data integration, algorithmic bias, and ethical implementation. Future research should prioritize multicenter validation, seamless integration with clinical workflows, and robust ethical frameworks to ensure safe, equitable, and effective adoption. AI and ML hold significant promise to transform sepsis management, but their clinical impact depends on transparent, validated, and user-centered deployment.
(This article belongs to the Special Issue Recent Advances in Sepsis)

14 pages, 1209 KiB  
Article
Investigation of Growth Differentiation Factor 15 as a Prognostic Biomarker for Major Adverse Limb Events in Peripheral Artery Disease
by Ben Li, Farah Shaikh, Houssam Younes, Batool Abuhalimeh, Abdelrahman Zamzam, Rawand Abdin and Mohammad Qadura
J. Clin. Med. 2025, 14(15), 5239; https://doi.org/10.3390/jcm14155239 - 24 Jul 2025
Abstract
Background/Objectives: Peripheral artery disease (PAD) impacts more than 200 million individuals globally and leads to mortality and morbidity secondary to progressive limb dysfunction and amputation. However, clinical management of PAD remains suboptimal, in part because of the lack of standardized biomarkers to predict patient outcomes. Growth differentiation factor 15 (GDF15) is a stress-responsive cytokine that has been studied extensively in cardiovascular disease, but its investigation in PAD remains limited. This study aimed to use explainable statistical and machine learning methods to assess the prognostic value of GDF15 for limb outcomes in patients with PAD. Methods: This prognostic investigation was carried out using a prospectively enrolled cohort comprising 454 patients diagnosed with PAD. At baseline, plasma GDF15 levels were measured using a validated multiplex immunoassay. Participants were monitored over a two-year period to assess the occurrence of major adverse limb events (MALE), a composite outcome encompassing major lower extremity amputation, need for open/endovascular revascularization, or acute limb ischemia. An Extreme Gradient Boosting (XGBoost) model was trained to predict 2-year MALE using 10-fold cross-validation, incorporating GDF15 levels along with baseline variables. Model performance was primarily evaluated using the area under the receiver operating characteristic curve (AUROC). Secondary model evaluation metrics were accuracy, sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). Prediction histogram plots were generated to assess the ability of the model to discriminate between patients who develop vs. do not develop 2-year MALE. For model interpretability, SHapley Additive exPlanations (SHAP) analysis was performed to evaluate the relative contribution of each predictor to model outputs. Results: The mean age of the cohort was 71 (SD 10) years, with 31% (n = 139) being female. Over the two-year follow-up period, 157 patients (34.6%) experienced MALE. The XGBoost model incorporating plasma GDF15 levels and demographic/clinical features achieved excellent performance for predicting 2-year MALE in PAD patients: AUROC 0.84, accuracy 83.5%, sensitivity 83.6%, specificity 83.7%, PPV 87.3%, and NPV 86.2%. The prediction probability histogram for the XGBoost model demonstrated clear separation for patients who developed vs. did not develop 2-year MALE, indicating strong discrimination ability. SHAP analysis showed that GDF15 was the strongest predictive feature for 2-year MALE, followed by age, smoking status, and other cardiovascular comorbidities, highlighting its clinical relevance. Conclusions: Using explainable statistical and machine learning methods, we demonstrated that plasma GDF15 levels have important prognostic value for 2-year MALE in patients with PAD. By integrating clinical variables with GDF15 levels, our machine learning model can support early identification of PAD patients at elevated risk for adverse limb events, facilitating timely referral to vascular specialists and aiding in decisions regarding the aggressiveness of medical/surgical treatment. This precision medicine approach based on a biomarker-guided prognostication algorithm offers a promising strategy for improving limb outcomes in individuals with PAD.
(This article belongs to the Special Issue The Role of Biomarkers in Cardiovascular Diseases)
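The paper inspects prediction-probability histograms to check how well the model separates patients with and without 2-year MALE. A generic version of that check is sketched below using out-of-fold probabilities from 10-fold cross-validation; scikit-learn's GradientBoostingClassifier stands in for the paper's XGBoost model and the data are synthetic.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Synthetic stand-in for the 454-patient cohort with ~35% events.
X, y = make_classification(n_samples=454, n_features=15, weights=[0.65, 0.35], random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(GradientBoostingClassifier(random_state=0), X, y,
                          cv=cv, method="predict_proba")[:, 1]

# Overlaid histograms: good discrimination shows up as well-separated distributions.
plt.hist(proba[y == 0], bins=20, alpha=0.6, label="no 2-year MALE")
plt.hist(proba[y == 1], bins=20, alpha=0.6, label="2-year MALE")
plt.xlabel("cross-validated predicted probability")
plt.ylabel("patients")
plt.legend()
plt.show()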

12 pages, 829 KiB  
Article
Predictive Performance of SAPS-3, SOFA Score, and Procalcitonin for Hospital Mortality in COVID-19 Viral Sepsis: A Cohort Study
by Roberta Muriel Longo Roepke, Helena Baracat Lapenta Janzantti, Marina Betschart Cantamessa, Luana Fernandes Machado, Graziela Denardin Luckemeyer, Joelma Villafanha Gandolfi, Bruno Adler Maccagnan Pinheiro Besen and Suzana Margareth Lobo
Life 2025, 15(8), 1161; https://doi.org/10.3390/life15081161 - 23 Jul 2025
Abstract
Objective: To evaluate the prognostic utility of the Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score 3 (SAPS 3) in COVID-19 patients and assess whether incorporating C-reactive protein (CRP), procalcitonin, lactate, and lactate dehydrogenase (LDH) enhances their predictive accuracy. Methods: Single-center, observational, cohort study. We analyzed a database of adult ICU patients with severe or critical COVID-19 treated at a large academic center. We used binary logistic regression for all analyses. We assessed the predictive performance of SAPS 3 and SOFA scores within 24 h of admission, individually and in combination with serum lactate, LDH, CRP, and procalcitonin. We examined the independent association of these biomarkers with hospital mortality. We evaluated discrimination using the C-statistic and determined clinical utility with decision curve analysis. Results: We included 1395 patients, 66% of whom required mechanical ventilation and 59.7% of whom needed vasopressor support. Patients who died (39.7%) were significantly older (61.1 ± 15.9 years vs. 50.1 ± 14.5 years, p < 0.001) and had more comorbidities than survivors. Among the biomarkers, only procalcitonin was independently associated with higher mortality in the multivariable analysis, in a non-linear pattern. The AUROC for predicting hospital mortality was 0.771 (95% CI: 0.746–0.797) for SAPS 3 and 0.781 (95% CI: 0.756–0.805) for the SOFA score. A model incorporating the SOFA score, age, and procalcitonin demonstrated a high AUROC of 0.837 (95% CI: 0.816–0.859), and the SOFA-based models showed greater clinical utility in decision curve analysis. Conclusions: The SOFA score may aid clinical decision-making, and incorporating procalcitonin and age could further enhance its prognostic utility.
(This article belongs to the Section Microbiology)
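Clinical utility here is assessed with decision curve analysis, which plots net benefit, TP/n − FP/n × pt/(1 − pt), across threshold probabilities against "treat all" and "treat none" strategies. The sketch below implements that calculation on simulated mortality probabilities; it is not the study's model, and the threshold range is an arbitrary choice.

import numpy as np
import matplotlib.pyplot as plt

def net_benefit(y, prob, thresholds):
    """Net benefit of acting when predicted risk exceeds each threshold probability."""
    n = len(y)
    out = []
    for t in thresholds:
        pred = prob >= t
        tp = np.sum(pred & (y == 1))
        fp = np.sum(pred & (y == 0))
        out.append(tp / n - fp / n * t / (1 - t))
    return np.array(out)

rng = np.random.default_rng(0)
latent = rng.normal(size=1395)
y = (latent + rng.normal(size=1395) > 0).astype(int)   # toy hospital mortality labels
prob = 1 / (1 + np.exp(-latent))                       # toy model probabilities

thr = np.linspace(0.05, 0.80, 50)
prevalence = y.mean()
plt.plot(thr, net_benefit(y, prob, thr), label="model (toy)")
plt.plot(thr, prevalence - (1 - prevalence) * thr / (1 - thr), label="treat all")
plt.axhline(0.0, color="grey", lw=1, label="treat none")
plt.xlabel("threshold probability")
plt.ylabel("net benefit")
plt.legend()
plt.show()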

16 pages, 1432 KiB  
Article
Transparent and Robust Artificial Intelligence-Driven Electrocardiogram Model for Left Ventricular Systolic Dysfunction
by Min Sung Lee, Jong-Hwan Jang, Sora Kang, Ga In Han, Ah-Hyun Yoo, Yong-Yeon Jo, Jeong Min Son, Joon-myoung Kwon, Sooyeon Lee, Ji Sung Lee, Hak Seung Lee and Kyung-Hee Kim
Diagnostics 2025, 15(15), 1837; https://doi.org/10.3390/diagnostics15151837 - 22 Jul 2025
Abstract
Background/Objectives: Heart failure (HF) is a growing global health burden, yet early detection remains challenging due to the limitations of traditional diagnostic tools such as electrocardiograms (ECGs). Recent advances in deep learning offer new opportunities to identify left ventricular systolic dysfunction (LVSD), a key indicator of HF, from ECG data. This study validates AiTiALVSD, our previously developed artificial intelligence (AI)-enabled ECG Software as a Medical Device, for its accuracy, transparency, and robustness in detecting LVSD. Methods: This retrospective single-center cohort study involved patients with suspected LVSD. The AiTiALVSD model, based on a deep learning algorithm, was evaluated against echocardiographic ejection fraction values. To enhance model transparency, the study employed Testing with Concept Activation Vectors (TCAV), clustering analysis, and robustness testing against ECG noise and lead reversals. Results: The study involved 688 participants and found AiTiALVSD to have a high diagnostic performance, with an AUROC of 0.919. There was a significant correlation between AiTiALVSD scores and left ventricular ejection fraction values, confirming the model's predictive accuracy. TCAV analysis showed the model's alignment with medical knowledge, establishing its clinical plausibility. Although the model was robust to common ECG artifacts, specificity decreased in the presence of ECG noise. Conclusions: AiTiALVSD's high diagnostic accuracy, transparency, and resilience to common ECG discrepancies underscore its potential for early LVSD detection in clinical settings. This study highlights the importance of transparency and robustness in AI-ECG, setting a new benchmark in cardiac care.
(This article belongs to the Special Issue AI-Powered Clinical Diagnosis and Decision-Support Systems)
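Robustness testing of the kind described, re-scoring the same ECGs after adding noise and checking how specificity shifts, can be framed as a small evaluation harness. The sketch below uses random arrays for the ECG records and a trivial placeholder in place of the AiTiALVSD model, so the printed numbers are meaningless; only the before/after comparison pattern is the point.

import numpy as np

rng = np.random.default_rng(0)
n_records, n_samples = 200, 5000                    # stand-in for 10 s single-lead ECGs
ecg = rng.normal(size=(n_records, n_samples))
labels = rng.integers(0, 2, size=n_records)         # 1 = LVSD, 0 = normal (toy labels)

def model_score(x):
    """Placeholder for the AI-ECG model: any callable returning one probability per record."""
    return 1 / (1 + np.exp(-x.mean(axis=1)))

def specificity(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tn / (tn + fp)

clean_spec = specificity(labels, model_score(ecg))
noisy_spec = specificity(labels, model_score(ecg + rng.normal(scale=0.5, size=ecg.shape)))
print(f"specificity clean={clean_spec:.3f} with added noise={noisy_spec:.3f}")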