BioMedInformatics, Volume 5, Issue 2 (June 2025) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 3289 KiB  
Article
Performance Comparison of Large Language Models for Efficient Literature Screening
by Maria Teresa Colangelo, Stefano Guizzardi, Marco Meleti, Elena Calciolari and Carlo Galli
BioMedInformatics 2025, 5(2), 25; https://doi.org/10.3390/biomedinformatics5020025 - 7 May 2025
Abstract
Background: Systematic reviewers face a growing body of biomedical literature, making early-stage article screening increasingly time-consuming. In this study, we assessed six large language models (LLMs)—OpenHermes, Flan T5, GPT-2, Claude 3 Haiku, GPT-3.5 Turbo, and GPT-4o—for their ability to identify randomized controlled trials (RCTs) in datasets of increasing difficulty. Methods: We first retrieved articles from PubMed and used all-mpnet-base-v2 to measure semantic similarity to known target RCTs, stratifying the collection into quartiles of descending relevance. Each LLM then received either verbose or concise prompts to classify articles as “Accepted” or “Rejected”. Results: Claude 3 Haiku, GPT-3.5 Turbo, and GPT-4o consistently achieved high recall, though their precision varied in the quartile with the highest similarity, where false positives increased. By contrast, smaller or older models struggled to balance sensitivity and specificity, with some over-including irrelevant studies or missing key articles. Importantly, multi-stage prompts did not guarantee performance gains for weaker models, whereas single-prompt approaches proved effective for advanced LLMs. Conclusions: These findings underscore that both model capability and prompt design strongly affect classification outcomes, suggesting that newer LLMs, if properly guided, can substantially expedite systematic reviews. Full article
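
As a rough illustration of the screening pipeline described in this abstract, the sketch below scores candidate abstracts against known target RCTs with the all-mpnet-base-v2 sentence-transformer and stratifies them into similarity quartiles. It is a minimal reconstruction, not the authors' code; the abstracts, variable names, and quartile logic are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): similarity-based stratification of
# retrieved abstracts against known target RCTs, assuming the sentence-transformers package.
import numpy as np
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

target_abstracts = ["Abstract of a known target RCT ..."]   # placeholder seed RCTs
candidate_abstracts = [                                      # placeholder PubMed records
    "Abstract of retrieved article 1 ...",
    "Abstract of retrieved article 2 ...",
    "Abstract of retrieved article 3 ...",
    "Abstract of retrieved article 4 ...",
]

target_emb = model.encode(target_abstracts, convert_to_tensor=True)
cand_emb = model.encode(candidate_abstracts, convert_to_tensor=True)

# For each candidate, keep its highest cosine similarity to any target RCT.
sims = util.cos_sim(cand_emb, target_emb).max(dim=1).values.cpu().numpy()

# Stratify the collection into quartiles of descending relevance, as the abstract describes.
edges = np.quantile(sims, [0.25, 0.5, 0.75])
quartile = np.digitize(sims, edges)   # 0 = least similar ... 3 = most similar
```

Each quartile could then be screened article by article, passing a verbose or concise prompt to an LLM and requesting an "Accepted" or "Rejected" label.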

19 pages, 6148 KiB  
Article
Subject-Independent Cuff-Less Blood Pressure Monitoring via Multivariate Analysis of Finger/Toe Photoplethysmography and Electrocardiogram Data
by Seyedmohsen Dehghanojamahalleh, Peshala Thibbotuwawa Gamage, Mohammad Ahmed, Cassondra Petersen, Brianna Matthew, Kesha Hyacinth, Yasith Weerasinghe, Ersoy Subasi, Munevver Mine Subasi and Mehmet Kaya
BioMedInformatics 2025, 5(2), 24; https://doi.org/10.3390/biomedinformatics5020024 - 4 May 2025
Viewed by 185
Abstract
(1) Background: Blood pressure (BP) variability is an important risk factor for cardiovascular diseases. However, existing BP monitoring methods often require periodic cuff-based measurements, raising concerns about their accuracy and convenience. This study aims to develop a subject-independent, cuff-less BP estimation method using finger and toe photoplethysmography (PPG) signals combined with an electrocardiogram (ECG) without the need for an initial cuff-based measurement. (2) Methods: A customized measurement system was used to record 80 readings from human subjects. Fifteen features with the highest dependency on the reference BP, including time and morphological characteristics of PPG and subject information, were analyzed. A multivariate regression model was employed to estimate BP. (3) Results: The results showed that incorporating toe PPG signals improved the accuracy of BP estimation, reducing the mean absolute error (MAE). Using both finger and toe PPG signals resulted in an MAE of 9.63 ± 12.54 mmHg for systolic BP and 6.76 ± 8.38 mmHg for diastolic BP, the lowest MAE among previously reported methods. (4) Conclusions: This study is the first to integrate toe PPG for more accurate BP estimation and proposes a method that does not require an initial cuff-based BP measurement, offering a promising approach for non-invasive, continuous BP monitoring. Full article
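
As a schematic of the multivariate-regression step, the sketch below fits an ordinary least-squares model to a feature matrix and reports the MAE. The synthetic data, feature names, and the choice of ordinary least squares are assumptions standing in for the authors' actual PPG/ECG features and regression model.

```python
# Minimal sketch of multivariate regression for cuff-less BP estimation (illustrative only;
# the 15 PPG/ECG features and BP references below are hypothetical synthetic placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_readings, n_features = 80, 15                  # 80 readings, 15 selected features (per the abstract)
X = rng.normal(size=(n_readings, n_features))    # e.g. pulse timing, PPG morphology, subject info
y_sbp = rng.normal(120, 15, size=n_readings)     # synthetic systolic BP references

X_tr, X_te, y_tr, y_te = train_test_split(X, y_sbp, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"Systolic MAE: {mean_absolute_error(y_te, pred):.2f} mmHg")
```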

20 pages, 2817 KiB  
Article
Escalate Prognosis of Parkinson’s Disease Employing Wavelet Features and Artificial Intelligence from Vowel Phonation
by Rumana Islam and Mohammed Tarique
BioMedInformatics 2025, 5(2), 23; https://doi.org/10.3390/biomedinformatics5020023 - 30 Apr 2025
Viewed by 147
Abstract
Background: This work presents an artificial intelligence-based algorithm for detecting Parkinson’s disease (PD) from voice signals. The detection of PD at pre-symptomatic stages is imperative to slow disease progression. Speech signal processing-based PD detection can play a crucial role here, as it has been reported in the literature that PD affects the voice quality of patients at an early stage. Hence, speech samples can be used as biomarkers of PD, provided that suitable voice features and artificial intelligence algorithms are employed. Methods: Advanced signal-processing techniques are used to extract audio features from the sustained vowel ‘/a/’ sound. The extracted audio features include baseline features, intensities, formant frequencies, bandwidths, vocal fold parameters, and Mel-frequency cepstral coefficients (MFCCs) to form a feature vector. Then, this feature vector is further enriched by including wavelet-based features to form the second feature vector. For classification purposes, two popular machine learning models, namely, support vector machine (SVM) and k-nearest neighbors (kNN), are trained to distinguish patients with PD. Results: The results demonstrate that the inclusion of wavelet-based voice features enhances the performance of both the SVM and kNN models for PD detection. However, kNN outperforms SVM in accuracy, detection speed, training time, and misclassification cost. Conclusions: This work concludes that wavelet-based voice features are important for detecting neurodegenerative diseases like PD. These wavelet features can enhance the classification performance of machine learning models. This work also concludes that kNN is preferable to SVM for the investigated voice features, regardless of whether the wavelet features are included. Full article
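
The feature-extraction and classification steps can be sketched as follows, assuming librosa for MFCCs, PyWavelets for the wavelet decomposition, and scikit-learn for the SVM and kNN classifiers. The synthetic signals, the wavelet family ("db4"), and the energy-based wavelet features are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of wavelet + MFCC feature extraction from a sustained /a/ recording and kNN/SVM
# classification (illustrative; synthetic signals and random labels, not the authors' data).
import numpy as np
import pywt                      # wavelet decomposition
import librosa                   # MFCC extraction
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def voice_features(signal, sr):
    # MFCCs summarised over time (mean of each coefficient).
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).mean(axis=1)
    # Assumed wavelet-based features: energy of each decomposition level.
    coeffs = pywt.wavedec(signal, "db4", level=5)
    wavelet_energy = np.array([np.sum(c ** 2) for c in coeffs])
    return np.concatenate([mfcc, wavelet_energy])

sr = 16000
rng = np.random.default_rng(0)
# Synthetic stand-ins for sustained-vowel recordings; PD vs. healthy labels are random here.
X = np.array([voice_features(rng.normal(size=sr), sr) for _ in range(20)])
y = rng.integers(0, 2, size=20)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)
```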

14 pages, 894 KiB  
Review
Artificial Intelligence as Assessment Tool in Occupational Therapy: A Scoping Review
by Christos Kokkotis, Ioannis Kansizoglou, Theodoros Stampoulis, Erasmia Giannakou, Panagiotis Siaperas, Stavros Kallidis, Maria Koutra, Christina Koutra, Anastasia Beneka and Evangelos Bebetsos
BioMedInformatics 2025, 5(2), 22; https://doi.org/10.3390/biomedinformatics5020022 - 28 Apr 2025
Viewed by 269
Abstract
Occupational therapy (OT) is vital in improving functional outcomes and aiding recovery for individuals with long-term disabilities, particularly those resulting from neurological diseases. Traditional assessment methods often rely on clinical judgment and individualized evaluations, which may overlook broader, data-driven insights. The integration of artificial intelligence (AI) presents a transformative opportunity to enhance assessment precision and personalize therapeutic interventions. Additionally, advancements in human–computer interaction (HCI) enable more intuitive and adaptive AI-driven assessment tools, improving user engagement and accessibility in OT. This scoping review investigates current applications of AI in OT, particularly regarding the evaluation of functional outcomes and support for clinical decision-making. The literature search was conducted using the PubMed and Scopus databases. Studies were included if they focused on AI applications in evaluating functional outcomes within OT assessment tools. Out of an initial pool of 85 articles, 13 met the inclusion criteria, highlighting diverse AI methodologies such as support vector machines, deep neural networks, and natural language processing. These were primarily applied in domains including motor recovery, pediatric developmental assessments, and cognitive engagement evaluations. Findings suggest that AI can significantly improve evaluation processes by systematically integrating diverse data sources (e.g., sensor measurements, clinical histories, and behavioral analytics), generating precise predictive insights that facilitate tailored therapeutic interventions and comprehensive assessments of both pre- and post-treatment strategies. This scoping review also identifies existing gaps and proposes future research directions to optimize AI-driven assessment tools in OT. Full article

20 pages, 3963 KiB  
Article
Radiomics for Machine Learning—A Multi-Class System for the Automatic Detection of COVID-19 and Community-Acquired Pneumonia from Computed Tomography Images
by Vasileia Paschaloudi, Dimitris Fotopoulos and Ioanna Chouvarda
BioMedInformatics 2025, 5(2), 21; https://doi.org/10.3390/biomedinformatics5020021 - 26 Apr 2025
Viewed by 142
Abstract
Background: Radiomic features have been extensively used with machine learning and other Artificial Intelligence methods in medical imaging problems. Coronavirus Disease 2019 (COVID-19), which has been spreading worldwide since 2020, has motivated scientists to develop automatic COVID-19 recognition systems, to enhance the clinical routine in overcrowded hospitals. Purpose: To develop an automated system for recognizing COVID-19 and Community-Acquired Pneumonia (CAP) using radiomic features extracted from whole lung chest Computed Tomography (CT) images. Radiomic feature extraction from whole lung CTs simplifies the image segmentation for the malignancy region of interest (ROI). Methods: In this work, we used radiomic features extracted from CT images representing whole lungs to train various machine learning models that are capable of identifying COVID-19 images, CAP images and healthy cases. The CT images were derived from an open-access data set called COVID-CT-MD, containing 76 Normal cases, 169 COVID-19 cases and 60 CAP cases. Results: Four two-class models and one three-class model were developed: Normal–COVID, COVID–CAP, Normal–CAP, Normal–Disease and Normal–COVID–CAP. Different algorithms and data augmentation were used to train each model 20 times on a different data set split, and, finally, the model with the best average performance was selected for each case. The performance metrics of Accuracy, Sensitivity and Specificity were used to assess the performance of the different systems. Since COVID-19 and CAP share similar characteristics, it is challenging to develop a model that can distinguish these diseases. The results were promising for the models finally selected for each case. The accuracy for the independent test set was 83.11% in the Normal–COVID case, 88.77% in the COVID–CAP case, 93.97% in the Normal–CAP case and 94.13% in the Normal–Disease case, when referring to two-class cases, while, in the three-class case, the accuracy was 78.55%. Conclusion: The results obtained suggest that radiomic features extracted from whole lung CT images can be successfully used to distinguish COVID-19 from other pneumonias and normal lung cases. Full article
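
The repeated-split evaluation described in the Results can be sketched as below for one two-class case (Normal vs. COVID-19). The random-forest classifier and synthetic feature matrix are placeholders for the authors' radiomic features and candidate algorithms.

```python
# Sketch of the 20-split evaluation for one two-class case (Normal vs. COVID-19);
# synthetic features stand in for whole-lung radiomic features (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(245, 100))          # 76 Normal + 169 COVID-19 cases, 100 placeholder features
y = np.array([0] * 76 + [1] * 169)       # 0 = Normal, 1 = COVID-19

metrics = []
for split in range(20):                  # 20 different data-set splits, as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=split)
    clf = RandomForestClassifier(random_state=split).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    metrics.append(((tp + tn) / (tp + tn + fp + fn),   # accuracy
                    tp / (tp + fn),                    # sensitivity
                    tn / (tn + fp)))                   # specificity

print(np.mean(metrics, axis=0))          # average accuracy, sensitivity, specificity
```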

26 pages, 1261 KiB  
Review
Strategies to Improve the Robustness and Generalizability of Deep Learning Segmentation and Classification in Neuroimaging
by Anh T. Tran, Tal Zeevi and Seyedmehdi Payabvash
BioMedInformatics 2025, 5(2), 20; https://doi.org/10.3390/biomedinformatics5020020 - 14 Apr 2025
Viewed by 672
Abstract
Artificial Intelligence (AI) and deep learning models have revolutionized diagnosis, prognostication, and treatment planning by extracting complex patterns from medical images, enabling more accurate, personalized, and timely clinical decisions. Despite their promise, challenges such as image heterogeneity across different centers, variability in acquisition protocols and scanners, and sensitivity to artifacts hinder the reliability and clinical integration of deep learning models. Addressing these issues is critical for ensuring accurate and practical AI-powered neuroimaging applications. We reviewed and summarized the strategies for improving the robustness and generalizability of deep learning models for the segmentation and classification of neuroimages. This review follows a structured protocol, comprehensively searching Google Scholar, PubMed, and Scopus for studies on neuroimaging, task-specific applications, and model attributes. Peer-reviewed, English-language studies on brain imaging were included. The extracted data were analyzed to evaluate the implementation and effectiveness of these techniques. The study identifies key strategies to enhance deep learning in neuroimaging, including regularization, data augmentation, transfer learning, and uncertainty estimation. These approaches address major challenges such as data variability and domain shifts, improving model robustness and ensuring consistent performance across diverse clinical settings. The technical strategies summarized in this review can enhance the robustness and generalizability of deep learning models for segmentation and classification to improve their reliability for real-world clinical practice. Full article
(This article belongs to the Section Imaging Informatics)
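
As a concrete example of one strategy surveyed in this review, uncertainty estimation, the sketch below applies Monte Carlo dropout in PyTorch. The toy network and input are placeholders, and this is only one of the several uncertainty-estimation approaches the review covers.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty estimation (illustrative PyTorch
# code; the toy classifier and input stand in for a neuroimaging segmentation/classification model).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2))

def mc_dropout_predict(model, x, n_samples=30):
    model.train()                           # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    # Predictive mean and its spread across stochastic forward passes (the uncertainty).
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 256)                     # placeholder feature vector for one scan
mean, uncertainty = mc_dropout_predict(model, x)
```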

14 pages, 1101 KiB  
Article
Scouting Biomarkers for Alzheimer’s Disease via Network Analysis of Exosome Proteomics Data
by Alexis Sagonas, Avgi E. Apostolakou, Zoi I. Litou, Marianna H. Antonelou and Vassiliki A. Iconomidou
BioMedInformatics 2025, 5(2), 19; https://doi.org/10.3390/biomedinformatics5020019 - 8 Apr 2025
Viewed by 392
Abstract
Background: Exosomes are a group of extracellular vesicles that are released by almost all mammalian cell types and engage in intercellular communication. Studies conducted in recent years have shown that exosomes are involved in a variety of diseases, where they may act as “vehicles” for the transmission of biomolecules and biomolecular information. Amyloidoses constitute a critical subgroup of these diseases, caused by extracellular deposition or intracellular inclusions of insoluble protein fibrils in cells and tissues. However, how exosomes are involved in these diseases remains largely unexplored. Methods: To detect possible links between amyloid proteins and exosomes, protein data from amyloidosis-isolated exosomes were collected and visualized using biological networks. Results: This biomedical informatics approach for the analysis of interaction networks, in combination with the existing literature, highlighted the involvement of exosomes in amyloidosis while strengthening existing hypotheses regarding their mechanism of action. Conclusion: This work is focused on exosomes from patients with Alzheimer’s disease and identifies important amyloidogenic proteins found in exosomes. These proteins can be used for future research in the field of exosome-based biomarkers of amyloidosis and potential prognostic or preventive approaches. Full article
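
The network-analysis step can be sketched with NetworkX: build a protein–protein interaction graph from exosomal proteomics data and rank proteins by connectivity. The interaction pairs below are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch: a protein-protein interaction graph built from exosome proteomics
# data, with hub proteins ranked by degree (edges here are hypothetical placeholders).
import networkx as nx

# Hypothetical interaction pairs among exosomal proteins (e.g. as retrieved from a PPI database).
interactions = [("APP", "APOE"), ("APP", "CLU"), ("APOE", "CLU"), ("CLU", "A2M")]

G = nx.Graph()
G.add_edges_from(interactions)

# Highly connected (hub) proteins are candidate biomarkers worth closer inspection.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print(hubs)   # e.g. [('CLU', 3), ('APP', 2), ('APOE', 2), ('A2M', 1)]
```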

29 pages, 6518 KiB  
Article
Generative AI Models (2018–2024): Advancements and Applications in Kidney Care
by Fnu Neha, Deepshikha Bhati and Deepak Kumar Shukla
BioMedInformatics 2025, 5(2), 18; https://doi.org/10.3390/biomedinformatics5020018 - 3 Apr 2025
Viewed by 674
Abstract
Kidney disease poses a significant global health challenge, affecting millions and straining healthcare systems due to limited nephrology resources. This paper examines the transformative potential of Generative AI (GenAI), Large Language Models (LLMs), and Large Vision Models (LVMs) in addressing critical challenges in kidney care. GenAI supports research and early interventions through the generation of synthetic medical data. LLMs enhance clinical decision-making by analyzing medical texts and electronic health records, while LVMs improve diagnostic accuracy through advanced medical image analysis. Together, these technologies show promise for advancing patient education, risk stratification, disease diagnosis, and personalized treatment strategies. This paper highlights key advancements in GenAI, LLMs, and LVMs from 2018 to 2024, focusing on their applications in kidney care and presenting common use cases. It also discusses their limitations, including knowledge cutoffs, hallucinations, contextual understanding challenges, data representation biases, computational demands, and ethical concerns. By providing a comprehensive analysis, this paper outlines a roadmap for integrating these AI advancements into nephrology, emphasizing the need for further research and real-world validation to fully realize their transformative potential. Full article

18 pages, 1542 KiB  
Article
Explainable Survival Analysis of Censored Clinical Data Using a Neural Network Approach
by Lisa Anita De Santi, Francesca Orlandini, Vincenzo Positano, Laura Pistoia, Francesco Sorrentino, Giuseppe Messina, Maria Grazia Roberti, Massimiliano Missere, Nicolò Schicchi, Antonino Vallone, Maria Filomena Santarelli, Alberto Clemente and Antonella Meloni
BioMedInformatics 2025, 5(2), 17; https://doi.org/10.3390/biomedinformatics5020017 - 27 Mar 2025
Viewed by 358
Abstract
Survival analysis is a statistical approach widely employed to model the time of an event, such as a patient’s death. Classical approaches include the Kaplan–Meier estimator and Cox proportional hazards regression; the latter assumes a linear relationship between the covariates and the log-hazard. However, the linearity assumption might pose challenges with high-dimensional data, thus stimulating interest in performing survival analysis using neural network models. In the present work, we implemented a deep Cox neural network (Cox-net) to predict the time of a cardiac event using patient data collected from the Myocardial Iron Overload in Thalassemia (MIOT) project. Cox-net achieved a concordance index (c-index) of 0.812 ± 0.036, outperforming the classical Cox regression (0.790 ± 0.040), and it demonstrated resilience to varying proportions of censored patients. A permutation feature importance analysis identified fibrosis and sex as the most significant predictors, aligning with clinical knowledge. Cox-net was able to represent the nonlinear relationships between covariates and maintain reliable survival curve predictions in datasets with a large number of censored patients, making it a promising tool for determining the appropriate clinical pathway for thalassemic patients. Full article
(This article belongs to the Section Medical Statistics and Data Science)
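
One common way to implement a deep Cox model such as Cox-net is to train a network on the negative Cox partial log-likelihood (the DeepSurv-style loss). The PyTorch sketch below illustrates that loss with synthetic data; it describes the general approach, not the authors' implementation, and the network, covariates, and event rate are assumptions.

```python
# Sketch of a deep Cox model trained with the negative partial log-likelihood
# (DeepSurv-style, Breslow handling of risk sets; synthetic placeholder data).
import torch
import torch.nn as nn

def cox_partial_nll(risk, time, event):
    """risk: (n,) network outputs; time: (n,) follow-up times; event: (n,) 1 = observed event."""
    order = torch.argsort(time, descending=True)     # so each risk set is a cumulative prefix
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)   # log sum of exp(risk) over subjects still at risk
    return -torch.sum((risk - log_risk_set) * event) / event.sum()

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))  # toy stand-in for Cox-net

x = torch.randn(64, 10)                   # placeholder covariates (e.g. imaging and clinical features)
time = torch.rand(64)                     # follow-up times
event = (torch.rand(64) < 0.3).float()    # roughly 30% observed events, the rest censored

loss = cox_partial_nll(net(x).squeeze(-1), time, event)
loss.backward()
```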
