Artificial Intelligence for Better Healthcare and Precision Medicine, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 5567

Special Issue Editor


Dr. Yu Tian
Guest Editor
College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310058, China
Interests: medical informatics; clinical decision support system; knowledge graph; clinical data privacy computing

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) has emerged as a disruptive technology in healthcare and precision medicine, offering immense potential to revolutionize the field. With the growing availability of patient data and the increasing complexity of medical decision-making processes, AI presents opportunities to enhance patient care, improve treatment outcomes, and facilitate precision medicine approaches. This Special Issue explores the applications of AI in healthcare and precision medicine, highlighting its impact on disease diagnosis, treatment selection, medical imaging, drug discovery, and healthcare resource management.

Disease Diagnosis and Prognosis:

AI algorithms excel in analyzing large and diverse datasets, enabling accurate disease identification, risk assessment, and prognostic predictions. By leveraging machine learning techniques, AI systems can analyze electronic health records, genomic data, and sensor readings, facilitating early detection, precise diagnoses, and personalized prognosis for various diseases.

Treatment Selection and Optimization:

AI algorithms assist healthcare professionals in selecting the most effective treatment strategies for individual patients. By integrating patient-specific data with clinical guidelines and medical knowledge, AI systems can provide tailored and evidence-based treatment recommendations, leading to improved outcomes and minimized adverse effects.

Medical Imaging and Diagnostics:

AI has transformed medical imaging interpretation by enabling the automated analysis of radiological images. Deep learning algorithms can detect anomalies, identify patterns, and assist in the early detection of diseases such as cancer. This enhances the accuracy of diagnoses, reduces human error, and speeds up the interpretation process.

Drug Discovery and Development:

AI accelerates the drug discovery and development process by expediting the analysis of vast chemical and biological datasets. Machine learning algorithms can predict drug–target interactions, identify potential drug candidates, and optimize drug design, helping researchers and pharmaceutical companies bring new therapies to market more rapidly.

Healthcare Resource Management:

AI plays a crucial role in optimizing healthcare resource utilization, improving efficiency, and reducing costs. AI algorithms can analyze patient data, predict disease trends, optimize hospital workflows, and assist in resource allocation, ensuring that resources are distributed effectively and equitably according to patient needs.

Large Language Models for Better Healthcare:

Large language models are deep learning-based AI systems that understand and generate natural language, enabling intelligent interaction with medical data and human users. They show broad promise in clinical applications such as clinical question answering and clinical text analysis, and can help doctors and patients improve the quality and efficiency of care.

Overall, this Special Issue explores the potential of AI to transform healthcare and precision medicine by leveraging vast amounts of data and sophisticated algorithms. From disease diagnosis and treatment selection to medical imaging analysis and drug discovery, AI-driven solutions have the capacity to improve patient care, enhance precision medicine approaches, and optimize healthcare resource management. While there are challenges and ethical considerations to address, the integration of AI in healthcare holds great promise for enabling enhanced patient outcomes, improved efficiency, and personalized care.

Dr. Yu Tian
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • healthcare
  • precision medicine
  • disease diagnosis
  • prognosis
  • drug discovery
  • personalized treatment selection
  • healthcare resource management
  • large language models

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (8 papers)


Research


21 pages, 2267 KB  
Article
An External Validation Study on Two Pre-Trained Large Language Models for Multimodal Prognostication in Laryngeal and Hypopharyngeal Cancer: Integrating Clinical, Treatment, and Radiomic Data to Predict Survival Outcomes with Interpretable Reasoning
by Wing-Keen Yap, Shih-Chun Cheng, Chia-Hsin Lin, Ing-Tsung Hsiao, Tsung-You Tsai, Wing-Lake Yap, Willy Po-Yuan Chen, Chien-Yu Lin and Shih-Ming Huang
Bioengineering 2025, 12(12), 1345; https://doi.org/10.3390/bioengineering12121345 - 10 Dec 2025
Viewed by 25
Abstract
Background: Laryngeal and hypopharyngeal cancers (LHCs) exhibit heterogeneous outcomes after definitive radiotherapy (RT). Large language models (LLMs) may enhance prognostic stratification by integrating complex clinical and imaging data. This study validated two pre-trained LLMs—GPT-4o-2024-08-06 and Gemma-2-27b-it—for outcome prediction in LHC. Methods: Ninety-two patients with non-metastatic LHC treated with definitive (chemo)radiotherapy at Linkou Chang Gung Memorial Hospital (2006–2013) were retrospectively analyzed. First-order and 3D radiomic features were extracted from intra- and peritumoral regions on pre- and mid-RT CT scans. LLMs were prompted with clinical variables, radiotherapy notes, and radiomic features to classify patients as high- or low-risk for death, recurrence, and distant metastasis. Model performance was assessed using sensitivity, specificity, AUC, Kaplan–Meier survival analysis, and McNemar tests. Results: Integration of radiomic features significantly improved prognostic discrimination over clinical/RT plan data alone for both LLMs. For death prediction, pre-RT radiomics were the most predictive: GPT-4o achieved a peak AUC of 0.730 using intratumoral features, while Gemma-2-27b reached 0.736 using peritumoral features. For recurrence prediction, mid-RT peritumoral features yielded optimal performance (AUC = 0.703 for GPT-4o; AUC = 0.709 for Gemma-2-27b). Kaplan–Meier analyses confirmed statistically significant separation of risk groups: pre-RT intra- and peritumoral features for overall survival (for both GPT-4o and Gemma-2-27b, p < 0.05), and mid-RT peritumoral features for recurrence-free survival (p = 0.028 for GPT-4o; p = 0.017 for Gemma-2-27b). McNemar tests revealed no significant performance difference between the two LLMs when augmented with radiomics (all p > 0.05), indicating that the open-source model achieved comparable accuracy to its proprietary counterpart. Both models generated clinically coherent, patient-specific rationales explaining risk assignments, enhancing interpretability and clinical trust. Conclusions: This external validation demonstrates that pre-trained LLMs can serve as accurate, interpretable, and multimodal prognostic engines for LHC. Pre-RT radiomic features are critical for predicting mortality and metastasis, while mid-RT peritumoral features uniquely inform recurrence risk. The comparable performance of the open-source Gemma-2-27b-it model suggests a scalable, cost-effective, and privacy-preserving pathway for the integration of LLM-based tools into precision radiation oncology workflows to enhance risk stratification and therapeutic personalization. Full article
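The prompting pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the field names, prompt wording, and label-parsing rule are assumptions, not the study's actual templates.

```python
# Hypothetical sketch of prompting an LLM with multimodal patient data
# for binary risk classification; names and wording are illustrative.

def build_prognosis_prompt(clinical, rt_notes, radiomics):
    """Assemble clinical variables, radiotherapy notes, and radiomic
    features into a single risk-classification prompt for an LLM."""
    lines = [
        "You are assisting with prognosis for laryngeal/hypopharyngeal cancer.",
        "Classify the patient as HIGH-RISK or LOW-RISK for death and explain why.",
        "",
        "Clinical variables:",
    ]
    lines += [f"- {k}: {v}" for k, v in clinical.items()]
    lines += ["", "Radiotherapy notes:", rt_notes, "", "Radiomic features (pre-RT CT):"]
    lines += [f"- {k}: {v:.3f}" for k, v in radiomics.items()]
    return "\n".join(lines)

def parse_risk(reply):
    """Map a free-text LLM reply onto the binary label used downstream
    (e.g., for AUC and Kaplan-Meier analysis)."""
    text = reply.lower()
    if "high-risk" in text or "high risk" in text:
        return "high"
    if "low-risk" in text or "low risk" in text:
        return "low"
    return None

prompt = build_prognosis_prompt(
    {"age": 62, "stage": "T3N1"},
    "Definitive RT, 70 Gy in 35 fractions.",
    {"glcm_entropy": 4.217},
)
```

The prompt string would then be sent to the model of choice; keeping the parser separate makes it easy to audit the free-text rationale alongside the extracted label.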

35 pages, 1561 KB  
Article
An Integrative Review of Computational Methods Applied to Biomarkers, Psychological Metrics, and Behavioral Signals for Early Cancer Risk Detection
by Lucia Bubulac, Tudor Georgescu, Mirela Zivari, Dana-Maria Popescu-Spineni, Cristina-Crenguţa Albu, Adrian Bobu, Sebastian Tiberiu Nemeth, Claudia-Florina Bogdan-Andreescu, Adriana Gurghean and Alin Adrian Alecu
Bioengineering 2025, 12(11), 1259; https://doi.org/10.3390/bioengineering12111259 - 17 Nov 2025
Viewed by 743
Abstract
The global rise in cancer incidence and mortality represents a major challenge for modern healthcare. Although current screening programs rely mainly on histological or immunological biomarkers, cancer is a multifactorial disease in which biological, psychological, and behavioural determinants interact. Psychological dimensions such as stress, anxiety, and depression may influence vulnerability and disease evolution through neuro-endocrine, immune, and behavioural pathways, especially by affecting adherence to therapeutic recommendations. However, these dimensions remain underexplored in current screening workflows. This review synthesizes current evidence on the integration of biological markers (tumor and inflammatory biomarkers), psychometric profiling (stress, depression, anxiety, personality traits), and behavioural digital phenotyping (facial micro-expressions, vocal tone, gait/posture metrics) for potential early cancer risk evaluation. We examine recent advances in computational sciences and artificial intelligence that could enable multimodal signal harmonization, structured representation, and hybrid data fusion models. We discuss how structured computational information management may improve interpretability and may support future AI-assisted screening paradigms. Finally, we highlight the relevance of digital health infrastructure and telemedical platforms in strengthening accessibility, continuity of monitoring, and population-level screening coverage. Further empirical research is required to determine the true predictive contribution of psychological and behavioural modalities beyond established biological markers. Full article

21 pages, 1703 KB  
Article
Spatiotemporal Feature Learning for Daily-Life Cough Detection Using FMCW Radar
by Saihu Lu, Yuhan Liu, Guangqiang He, Zhongrui Bai, Zhenfeng Li, Pang Wu, Xianxiang Chen, Lidong Du, Peng Wang and Zhen Fang
Bioengineering 2025, 12(10), 1112; https://doi.org/10.3390/bioengineering12101112 - 15 Oct 2025
Viewed by 843
Abstract
Cough is a key symptom reflecting respiratory health, with its frequency and pattern providing valuable insights into disease progression and clinical management. Objective and reliable cough detection systems are therefore of broad significance for healthcare and remote monitoring. However, existing algorithms often struggle to jointly model spatial and temporal information, limiting their robustness in real-world applications. To address this issue, we propose a cough recognition framework based on frequency-modulated continuous-wave (FMCW) radar, integrating a deep convolutional neural network (CNN) with a Self-Attention mechanism. The CNN extracts spatial features from range-Doppler maps, while Self-Attention captures temporal dependencies, and effective data augmentation strategies enhance generalization by simulating position variations and masking local dependencies. To rigorously evaluate practicality, we collected a large-scale radar dataset covering diverse positions, orientations, and activities. Experimental results demonstrate that, under subject-independent five-fold cross-validation, the proposed model achieved a mean F1-score of 0.974 ± 0.016 and an accuracy of 99.05 ± 0.55%, further supported by high precision of 98.77 ± 1.05%, recall of 96.07 ± 2.16%, and specificity of 99.73 ± 0.23%. These results confirm that our method is not only robust in realistic scenarios but also provides a practical pathway toward continuous, non-invasive, and privacy-preserving respiratory health monitoring in both clinical and telehealth applications. Full article
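The temporal self-attention step can be illustrated with a minimal NumPy sketch. Assumptions: a single attention head with queries, keys, and values all equal to the CNN's per-frame embeddings; the paper's learned projections and head count are not reproduced here.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of per-frame
    feature vectors x of shape (T, d): each radar frame attends to
    every other frame in the analysis window."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # (T, T) frame-to-frame similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over frames
    return weights @ x                              # (T, d) temporally mixed features

# Toy input: 3 frames with 2-dimensional embeddings.
frames = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mixed = self_attention(frames)
```

A constant sequence is a fixed point of this operation (uniform weights return the input unchanged), which is a quick sanity check when wiring attention into a larger model.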

19 pages, 1206 KB  
Article
A Generative Expert-Narrated Simplification Model for Enhancing Health Literacy Among the Older Population
by Akmalbek Abdusalomov, Sabina Umirzakova, Sanjar Mirzakhalilov, Alpamis Kutlimuratov, Rashid Nasimov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Bioengineering 2025, 12(10), 1066; https://doi.org/10.3390/bioengineering12101066 - 30 Sep 2025
Cited by 1 | Viewed by 871
Abstract
Older adults often face significant challenges in understanding medical information due to cognitive aging and limited health literacy. Existing simplification models, while effective in general domains, cannot adapt content for elderly users, frequently overlooking narrative tone, readability constraints, and semantic fidelity. In this work, we propose GENSIM—a Generative Expert-Narrated Simplification Model tailored for age-adapted medical text simplification. GENSIM introduces a modular architecture that integrates a Dual-Stream Encoder, which fuses biomedical semantics with elder-friendly linguistic patterns; a Persona-Tuned Narrative Decoder, which controls tone, clarity, and empathy; and a Reinforcement Learning with Human Feedback (RLHF) framework guided by dual discriminators for factual alignment and age-specific readability. Trained on a triad of corpora—SimpleDC, PLABA, and a custom NIH-SeniorHealth corpus—GENSIM achieves state-of-the-art performance on SARI, FKGL, BERTScore, and BLEU across multiple test sets. Ablation studies confirm the individual and synergistic value of each component, while structured human evaluations demonstrate that GENSIM produces outputs rated significantly higher in faithfulness, simplicity, and demographic suitability. This work represents the first unified framework for elderly-centered medical text simplification and marks a paradigm shift toward inclusive, user-aligned generation for health communication. Full article

28 pages, 2379 KB  
Article
FADEL: Ensemble Learning Enhanced by Feature Augmentation and Discretization
by Chuan-Sheng Hung, Chun-Hung Richard Lin, Shi-Huang Chen, You-Cheng Zheng, Cheng-Han Yu, Cheng-Wei Hung, Ting-Hsin Huang and Jui-Hsiu Tsai
Bioengineering 2025, 12(8), 827; https://doi.org/10.3390/bioengineering12080827 - 30 Jul 2025
Viewed by 1039
Abstract
In recent years, data augmentation techniques have become the predominant approach for addressing highly imbalanced classification problems in machine learning. Algorithms such as the Synthetic Minority Over-sampling Technique (SMOTE) and Conditional Tabular Generative Adversarial Network (CTGAN) have proven effective in synthesizing minority class samples. However, these methods often introduce distributional bias and noise, potentially leading to model overfitting, reduced predictive performance, increased computational costs, and elevated cybersecurity risks. To overcome these limitations, we propose a novel architecture, FADEL, which integrates feature-type awareness with a supervised discretization strategy. FADEL introduces a unique feature augmentation ensemble framework that preserves the original data distribution by concurrently processing continuous and discretized features. It dynamically routes these feature sets to their most compatible base models, thereby improving minority class recognition without the need for data-level balancing or augmentation techniques. Experimental results demonstrate that FADEL, solely leveraging feature augmentation without any data augmentation, achieves a recall of 90.8% and a G-mean of 94.5% on the internal test set from Kaohsiung Chang Gung Memorial Hospital in Taiwan. On the external validation set from Kaohsiung Medical University Chung-Ho Memorial Hospital, it maintains a recall of 91.9% and a G-mean of 86.7%. These results outperform conventional ensemble methods trained on CTGAN-balanced datasets, confirming the superior stability, computational efficiency, and cross-institutional generalizability of the FADEL architecture. Altogether, FADEL uses feature augmentation to offer a robust and practical solution to extreme class imbalance, outperforming mainstream data augmentation-based approaches. Full article
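The feature-augmentation idea can be sketched in a few lines. Plain quantile binning stands in for FADEL's supervised discretization, and the routing of feature sets to compatible base models is omitted; function names are illustrative.

```python
def quantile_discretize(values, n_bins=4):
    """Discretize one continuous feature into quantile bins 0..n_bins-1.
    (Unsupervised quantile binning here; FADEL uses a supervised
    discretization strategy.)"""
    ranked = sorted(values)
    # Bin edges at the interior quantiles of the observed values.
    edges = [ranked[int(len(ranked) * q / n_bins)] for q in range(1, n_bins)]
    return [sum(v >= e for e in edges) for v in values]

def augment_features(rows):
    """Append a discretized copy of every continuous column, so that
    downstream base models can consume whichever representation
    (continuous or discrete) they handle best."""
    cols = list(zip(*rows))
    disc = [quantile_discretize(list(c)) for c in cols]
    return [list(r) + [d[i] for d in disc] for i, r in enumerate(rows)]

bins = quantile_discretize([1, 2, 3, 4, 5, 6, 7, 8])
augmented = augment_features([[0.1, 10.0], [0.2, 20.0], [0.3, 30.0], [0.4, 40.0]])
```

The key point, as in the abstract, is that the original distribution is left untouched: the discretized columns are added alongside the continuous ones rather than replacing them or synthesizing new rows.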

22 pages, 9057 KB  
Article
A Multi-Stage Framework for Kawasaki Disease Prediction Using Clustering-Based Undersampling and Synthetic Data Augmentation: Cross-Institutional Validation with Dual-Center Clinical Data in Taiwan
by Heng-Chih Huang, Chuan-Sheng Hung, Chun-Hung Richard Lin, Yi-Zhen Shie, Cheng-Han Yu and Ting-Hsin Huang
Bioengineering 2025, 12(7), 742; https://doi.org/10.3390/bioengineering12070742 - 7 Jul 2025
Viewed by 872
Abstract
Kawasaki disease (KD) is a rare yet potentially life-threatening pediatric vasculitis that, if left undiagnosed or untreated, can result in serious cardiovascular complications. Its heterogeneous clinical presentation poses diagnostic challenges, often failing to meet classical criteria and increasing the risk of oversight. Leveraging routine laboratory tests with AI offers a promising strategy for enhancing early detection. However, due to the extremely low prevalence of KD, conventional models often struggle with severe class imbalance, limiting their ability to achieve both high sensitivity and specificity in practice. To address this issue, we propose a multi-stage AI-based predictive framework that incorporates clustering-based undersampling, data augmentation, and stacking ensemble learning. The model was trained and internally tested on clinical blood and urine test data from Chang Gung Memorial Hospital (CGMH, n = 74,641; 2010–2019), and externally validated using an independent dataset from Kaohsiung Medical University Hospital (KMUH, n = 1582; 2012–2020), thereby supporting cross-institutional generalizability. At a fixed recall rate of 95%, the model achieved a specificity of 97.5% and an F1-score of 53.6% on the CGMH test set, and a specificity of 74.7% with an F1-score of 23.4% on the KMUH validation set. These results underscore the model’s ability to maintain high specificity even under sensitivity-focused constraints, while still delivering clinically meaningful predictive performance. This balance of sensitivity and specificity highlights the framework’s practical utility for real-world KD screening. Full article
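The clustering-based undersampling step can be sketched as follows. This is a pure-Python illustration under assumed choices (k-means, Euclidean distance, one representative per cluster), not the authors' implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means (Euclidean) used to summarize the majority class."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

def cluster_undersample(majority, k, seed=0):
    """Keep one representative per cluster: the member closest to its
    centroid. This shrinks the majority class while preserving its
    geometry, instead of discarding rows at random."""
    centroids, clusters = kmeans(majority, k, seed=seed)
    keep = []
    for c, cl in zip(centroids, clusters):
        if cl:
            keep.append(min(cl, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, c))))
    return keep

majority = [[float(i % 5), float(i % 7)] for i in range(40)]
kept = cluster_undersample(majority, 4)
```

Each kept row is a real patient record (not a synthetic point), which is what distinguishes this step from the augmentation stage of the pipeline.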

Review


14 pages, 370 KB  
Review
Artificial Intelligence in Diabetic Retinopathy and Diabetic Macular Edema: A Narrative Review
by Anđela Jukić, Josip Pavan, Miro Kalauz, Andrijana Kopić, Vedran Markušić and Tomislav Jukić
Bioengineering 2025, 12(12), 1342; https://doi.org/10.3390/bioengineering12121342 - 9 Dec 2025
Viewed by 98
Abstract
Diabetic retinopathy (DR) and diabetic macular edema (DME) remain major causes of vision loss among working-age adults. Artificial intelligence (AI), particularly deep learning, has gained attention in ophthalmic imaging, offering opportunities to improve both diagnostic accuracy and efficiency. This review examined applications of AI in DR and DME published between 2010 and 2025. A narrative search of PubMed and Google Scholar identified English-language, peer-reviewed studies, with additional screening of reference lists. Eligible articles evaluated AI algorithms for detection, classification, prognosis, or treatment monitoring, with study selection guided by PRISMA 2020. Of 300 records screened, 60 met the inclusion criteria. Most reported strong diagnostic performance, with sensitivities up to 96% and specificities up to 98% for detecting referable DR on fundus photographs. Algorithms trained on optical coherence tomography (OCT) data showed high accuracy for identifying DME, with area under the receiver operating characteristic curve (AUC) values frequently exceeding 0.90. Several models also predicted anti-vascular endothelial growth factor (anti-VEGF) treatment response and recurrence of fluid with encouraging results. Autonomous AI tools have gained regulatory approval and have been implemented in clinical practice, though performance can vary depending on image quality, device differences, and patient populations. Overall, AI demonstrates strong potential to improve screening, diagnostic consistency, and personalized care, but broader validation and system-level integration remain necessary. Full article

Other


13 pages, 448 KB  
Systematic Review
Artificial Intelligence for Spirometry Quality Evaluation: A Systematic Review
by Julia López-Canay, Manuel Casal-Guisande, Cristina Represas-Represas, Jorge Cerqueiro-Pequeño, José-Benito Bouza-Rodríguez, Alberto Comesaña-Campos and Alberto Fernández-Villar
Bioengineering 2025, 12(12), 1286; https://doi.org/10.3390/bioengineering12121286 - 23 Nov 2025
Viewed by 550
Abstract
Background and Objectives: Spirometry is the most widely used pulmonary function test for diagnosing respiratory diseases. Its progressive incorporation into non-specialized settings, such as primary care, raises challenges for ensuring the reliability of results. In this context, tools based on artificial intelligence (AI) techniques have emerged as promising solutions to support quality control in spirometry. This systematic review aims to synthesize the available evidence on their application in this field. Methods: A systematic search was conducted in PubMed and IEEE Xplore to identify peer-reviewed original studies, published between 2014 and June 2025, that applied AI to spirometry quality control. The search and data extraction followed the PRISMA guidelines. Results: Six studies met the inclusion criteria. Four analyzed the acceptability and usability of the maneuver, and two focused on detecting errors committed during test performance. The most widely used models were convolutional neural networks, used in four studies, whereas two studies employed other conventional machine learning models. Three models reported area under the ROC curve values higher than 0.88. Conclusions: AI-based tools show great potential to assist in spirometry quality control, both in determining acceptability and in detecting errors. However, current studies remain scarce and highly heterogeneous in both objectives and methods. Broader, multicenter research, including validation in non-specialized settings, is required to confirm their clinical utility and facilitate their implementation in clinical practice. Full article
