Review

A State-of-the-Art Review of Artificial Intelligence (AI) Applications in Healthcare: Advances in Diabetes, Cancer, Epidemiology, and Mortality Prediction

by Mariano Vargas-Santiago 1,†, Diana Assaely León-Velasco 2,3,4,†, Christian Efraín Maldonado-Sifuentes 1,† and Liliana Chanona-Hernandez 5,*,†
1 Secretaría de Ciencia, Humanidades, Tecnología e Innovación (Secihti-IXM), Ciudad de México 03940, Mexico
2 Departamento de Sistemas, Universidad Autónoma Metropolitana, Unidad Azcapotzalco, Ciudad de México 02128, Mexico
3 Science Department, Instituto Tecnológico y de Estudios Superiores de Monterrey, Ciudad de México 14380, Mexico
4 Escuela Superior de Apan, Universidad Autónoma del Estado de Hidalgo, Hidalgo 43920, Mexico
5 Instituto Politécnico Nacional, Escuela Superior de Ingeniería Mecánica y Eléctrica, Unidad Zacatenco, Ciudad de México 07700, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Computers 2025, 14(4), 143; https://doi.org/10.3390/computers14040143
Submission received: 17 January 2025 / Revised: 10 March 2025 / Accepted: 18 March 2025 / Published: 10 April 2025

Abstract:
Artificial Intelligence (AI) methodologies have profoundly influenced healthcare research, particularly in chronic disease management and public health. This paper provides a comprehensive state-of-the-art review of AI’s applications across diabetes, cancer, epidemiology, and mortality prediction. The analysis highlights advancements in machine learning (ML), deep learning (DL), and natural language processing (NLP) that enable robust predictive models and decision support systems, leading to significant clinical and public health outcomes. The study examines predictive modeling, pattern recognition, and decision support applications, addressing their respective challenges and potential in real-world healthcare settings. Emphasis is placed on the emerging role of explainable AI (XAI), multimodal data fusion, and privacy-preserving techniques such as federated learning, which aim to enhance interpretability, robustness, and ethical compliance. This paper underscores the vital role of interdisciplinary collaboration and adaptive AI systems in creating resilient, scalable, and patient-centric healthcare solutions.

1. Introduction

The field of artificial intelligence (AI) has advanced rapidly in recent years, with particularly notable impact in the healthcare sector, where machine learning (ML), deep learning (DL), and data analytics are now essential to advancing research, diagnostics, and patient care [1,2,3,4]. These technologies have introduced novel ways of addressing complex medical challenges, leading to improvements in diagnostic accuracy, treatment personalization, and preventive care strategies. This paper presents a state-of-the-art review of AI applications in healthcare, focusing on advancements in diabetes, cancer, epidemiology, and mortality prediction. Unlike systematic reviews, which comprehensively analyze all available studies based on strict inclusion and exclusion criteria, this review highlights key developments, emerging trends, and innovative methodologies that have significantly shaped the field. While prior reviews have addressed AI applications in individual domains such as diabetes or cancer, this paper fills a critical gap by providing a comprehensive analysis across these four interconnected areas, offering a unified perspective on AI’s transformative potential, with a particular emphasis on practical applications such as real-time disease monitoring and personalized treatment planning. The objective is to provide insights into how AI is transforming healthcare and to identify future research directions rather than to exhaustively catalog all existing research.
Existing AI applications in healthcare can be broadly categorized into predictive modeling, pattern recognition, and decision support systems. Traditional ML algorithms, including support vector machines (SVMs), random forest, and gradient boosting, have been employed to identify risk factors, predict disease progression, and classify patient data [5]. While effective in structured data environments, these models often struggle with the complexities associated with large-scale, unstructured medical datasets. In contrast, DL models, particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have demonstrated superior performance in feature extraction from medical images, genomic data, and longitudinal patient records. This has propelled advancements in automated diagnostics, enabling precise detection and classification of conditions like diabetic retinopathy and various forms of cancer, often with accuracy comparable to that of human experts [3,6,7,8].
Natural language processing (NLP) has also gained traction within healthcare, especially in the analysis of medical literature and electronic health records (EHRs). Transformer-based architectures, such as BERT and GPT, have facilitated the processing and extraction of critical insights from vast amounts of textual data, enhancing personalized treatment plans and predictive models based on historical patient records [9,10]. These advancements, however, bring challenges related to model interpretability, generalizability, and data privacy. The opaque nature of many DL models can lead to challenges in clinical trust and implementation, especially when decisions need to be validated by healthcare professionals [11].
Current trends in AI research for healthcare emphasize the development of explainable AI (XAI) frameworks and the integration of multimodal data sources. Explainable AI aims to bridge the gap between model complexity and interpretability, allowing healthcare providers and stakeholders to understand how and why specific decisions are made, thus enhancing the transparency and trustworthiness of AI systems [12,13]. Additionally, integrating heterogeneous data sources such as medical imaging, genetic profiles, and patient-reported outcomes is seen as a pathway toward creating comprehensive models that offer a holistic assessment of patient health [14].
Future directions in this field are likely to focus on the adoption of federated learning and privacy-preserving techniques, which address ethical and legal concerns associated with centralized data storage. Federated learning enables institutions to collaborate on model training without sharing sensitive patient data, thereby enhancing the robustness and generalizability of AI models while adhering to data protection regulations [15]. Additionally, advancements in computational power and algorithmic efficiency are paving the way for real-time applications, facilitating early disease detection, personalized medicine, and adaptive treatment protocols [16].
The continued growth of AI in healthcare will require interdisciplinary collaboration among AI researchers, medical professionals, and policymakers to maximize its potential and ensure responsible, equitable deployment. This paper explores these advancements in depth, emphasizing the contributions of ML, DL, and NLP to improving clinical outcomes and public health strategies, while discussing the challenges and opportunities for further technological development in these domains.

2. Experimental Framework for AI in Healthcare

AI applications in healthcare require a structured evaluation pipeline to ensure robust performance, interpretability, and real-world clinical utility. This section presents a generalized experimental framework for developing and assessing AI methodologies in diabetes management, cancer detection, epidemiology, and mortality prediction. The framework consists of model selection, training, evaluation, and deployment, providing a systematic approach to integrating AI into healthcare applications.

2.1. Model Selection and Training

AI models are selected based on the specific task and data type. Baseline models include both traditional machine learning (ML) and deep learning (DL) techniques. Traditional ML approaches include support vector machines (SVMs), decision trees (DT), random forests (RF), and gradient boosting for structured data analysis. Deep learning models include convolutional neural networks (CNNs) for medical imaging, recurrent neural networks (RNNs) and long short-term memory (LSTM) models for time-series forecasting, and transformer models such as BERT and GPT for processing electronic health records and medical literature.
Optimization techniques improve model performance and generalizability. Hyperparameter tuning is conducted using Bayesian optimization or grid search. Transfer learning is applied in medical imaging tasks by fine-tuning pre-trained models such as ResNet and DenseNet. Federated learning is implemented for privacy-preserving AI, enabling multi-institutional collaboration without sharing raw patient data.
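As an illustration of the transfer-learning step described above, the following minimal sketch fine-tunes a pretrained ResNet backbone for a hypothetical binary screening task; the two-class head, frozen backbone, and learning rate are assumptions chosen for illustration, not settings reported in the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and replace its classification head
# for a hypothetical binary task (e.g., referable vs. non-referable disease).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the backbone features
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training would then iterate over a labeled medical-image DataLoader,
# updating only the head (or later unfreezing deeper layers for fine-tuning).
```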

2.2. Model Evaluation Metrics

A robust evaluation framework is necessary to assess model performance, interpretability, and clinical feasibility. Classification models are evaluated using sensitivity, specificity, and F1-score. The area under the receiver operating characteristic curve (AUC-ROC) is used for binary classification tasks, such as distinguishing between cancerous and non-cancerous conditions. Regression models, particularly those for glucose prediction, are assessed using root mean squared error (RMSE) and mean absolute error (MAE).
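A minimal sketch of how these metrics are typically computed is shown below, using scikit-learn on small hypothetical arrays (the labels, scores, and glucose values are invented solely to make the example runnable).

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score, roc_auc_score,
                             mean_absolute_error, mean_squared_error)

# Hypothetical labels and predicted probabilities for a binary screening task.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("F1:", f1_score(y_true, y_pred))
print("AUC-ROC:", roc_auc_score(y_true, y_prob))

# Hypothetical glucose forecasts (mg/dL) scored with RMSE and MAE.
g_true = np.array([110.0, 145.0, 98.0])
g_pred = np.array([118.0, 139.0, 102.0])
print("RMSE:", mean_squared_error(g_true, g_pred) ** 0.5)
print("MAE:", mean_absolute_error(g_true, g_pred))
```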
Explainability and trustworthiness are critical for clinical adoption. Shapley additive explanations (SHAP) are used to determine feature importance, while gradient-weighted class activation mapping (Grad-CAM) provides visual explanations for CNN-based models. Local interpretable model-agnostic explanations (LIME) helps evaluate decision reasoning in text-based AI models.
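For SHAP-based feature attribution on tabular models, a minimal sketch (with synthetic stand-in features rather than real clinical variables) follows; it only demonstrates the API pattern, not a validated clinical explainer.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular features standing in for clinical variables (e.g., age, HbA1c, BMI).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(np.shape(shap_values))   # attributions for the first five samples
```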
To assess clinical reliability, AI models are benchmarked against expert human performance. Comparative evaluations include AI versus clinician accuracy in cancer diagnosis and AI-assisted versus traditional epidemiological surveillance methods.

2.3. Deployment and Real-World Validation

AI implementation requires testing in real-world settings and seamless integration into healthcare workflows. Clinical integration involves deploying AI-based decision support systems for physicians, automated screening tools for diabetic retinopathy and cancer detection, and AI-driven telemedicine applications for remote patient monitoring and triage.
Privacy and ethical considerations must be addressed to ensure regulatory compliance. Federated learning is utilized to enable decentralized model training across hospitals while maintaining patient data security. Compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) is ensured for patient data protection. Bias mitigation strategies are incorporated to prevent disparities in AI-driven diagnoses.

2.4. Future Directions in AI Experimental Design

To improve the robustness and generalizability of AI healthcare applications, future research should focus on adaptive learning models that continuously update based on real-world data. The integration of multimodal data fusion, combining imaging, genomics, clinical notes, and wearable device data, will enhance prediction accuracy. AI-driven personalized medicine approaches will enable tailored treatment recommendations based on patient-specific risk factors and genetic profiles.
This experimental framework provides a structured methodology for developing, evaluating, and deploying AI systems in healthcare. By standardizing the AI pipeline across different domains, we ensure reproducibility, interpretability, and real-world impact in AI-driven medical research.

3. AI Methodologies in Diabetes

The application of artificial intelligence (AI) methodologies to diabetes management has seen significant advancements over recent years. Machine learning (ML) and deep learning (DL) techniques are particularly impactful in this domain due to their ability to process large-scale data and identify complex patterns that are often elusive to traditional statistical methods. The primary focus of these methodologies lies in the prediction of blood glucose levels, early detection of diabetes onset, risk stratification for complications, and personalized treatment optimization. This section provides a critical overview of existing approaches, highlights current trends, and discusses future directions for the application of AI in diabetes management.
One of the most notable applications of AI in diabetes is the use of recurrent neural networks (RNNs) and their enhanced versions, such as long short-term memory (LSTM) networks, for predicting blood glucose levels. These models excel in handling sequential data, making them well-suited for the continuous glucose monitoring (CGM) data collected from diabetic patients. By training on historical glucose readings, insulin intake, meal data, and other physiological indicators, RNNs and LSTMs can forecast future glucose trends, allowing patients to take preemptive actions to maintain optimal glycemic control [17]. This predictive capability is crucial for preventing hyperglycemic and hypoglycemic episodes, both of which can have severe health consequences.
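A minimal sketch of such a sequence model is given below: a single-layer LSTM that maps a window of past CGM readings to the next glucose value. The window length, hidden size, and single input channel are illustrative assumptions, not parameters taken from the cited work.

```python
import torch
import torch.nn as nn

class GlucoseLSTM(nn.Module):
    """Minimal LSTM forecaster: a window of past CGM readings -> next value."""
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last time step

model = GlucoseLSTM()
window = torch.randn(8, 12, 1)           # 8 patients, 12 past readings each
pred = model(window)                     # shape (8, 1): next glucose estimate
```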

Other machine learning approaches, such as support vector machines (SVMs), decision trees, and ensemble models like random forests, have been applied to identify risk factors for diabetes and predict the progression of the disease. These models leverage structured datasets comprising demographic, genetic, and lifestyle variables to classify individuals at risk and provide insights into the most influential predictive features. Despite their effectiveness in binary classification tasks (e.g., diabetic vs. non-diabetic), these models often fall short in handling temporal dependencies without significant feature engineering.
In recent years, advancements in deep learning have expanded the scope of AI applications to include convolutional neural networks (CNNs) for the analysis of medical images, such as retinal scans. Diabetic retinopathy, a leading cause of blindness among diabetic patients, can be effectively screened using CNNs trained on large-scale image datasets [3]. These models have shown performance comparable to or even exceeding that of expert ophthalmologists in identifying early signs of retinopathy, facilitating early intervention and reducing the risk of vision loss.
Natural language processing (NLP) is another burgeoning area within diabetes research, enabling the extraction of relevant data from electronic health records (EHRs) and published medical literature. By employing transformer-based models like BERT, NLP can assist in synthesizing patient information and identifying correlations between medication regimens and patient outcomes. This approach aids in developing more comprehensive patient profiles and enhancing decision support systems used by healthcare professionals.
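As a small sketch of this pattern, the snippet below embeds a free-text note with a publicly available clinical BERT variant so the resulting vector can feed a downstream risk model; the model identifier and the pooling strategy are assumptions for illustration, not components of any system described in the cited studies.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed publicly available clinical BERT checkpoint on the Hugging Face Hub.
name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

note = "Type 2 diabetes, on metformin; HbA1c 8.2% -> 7.1% over 6 months."
inputs = tokenizer(note, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, tokens, 768)
embedding = hidden.mean(dim=1)                   # simple note-level vector for downstream models
```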
The selection of articles for this review was based on their contribution to advancements in AI methodologies for disease prediction and healthcare applications. Studies were chosen based on three primary criteria: (1) their impact on AI-based healthcare research as measured by citation count and relevance in the field; (2) their use of novel machine learning methodologies, including deep learning, explainable AI (XAI), and federated learning; and (3) their real-world applicability in clinical decision-making, diagnostics, and public health interventions. The review emphasizes state-of-the-art approaches rather than an exhaustive systematic review of all available studies. This ensures a focus on key trends and emerging technologies in AI-driven healthcare.
AI has significantly enhanced diabetes care through automated decision support tools, predictive modeling, and real-time monitoring systems. Dankwa-Mullan et al. [18,19] examine key AI-driven innovations, including automated retinal screening, clinical decision support, and patient self-management technologies, which collectively improve glycemic control and reduce complications. DL and ML models have enabled the development of predictive algorithms that assist in disease progression analysis and individualized treatment recommendations. AI-powered glucose monitoring and artificial pancreas technologies further contribute to improving patient outcomes by integrating continuous real-time data analysis.
In [19], the authors conducted an online PubMed search to identify high-impact, clinically relevant studies from 2009 onward. The inclusion criteria focused on research that demonstrated AI applications in diabetes care, particularly in automated retinal screening, clinical decision support, predictive population risk stratification, and patient self-management tools. However, this work does not present original clinical trials or prospective validation. Instead, it aggregates evidence from existing literature without real-world implementation or direct clinical assessment of AI-driven interventions. While the study highlights the potential of AI to improve diabetes management, its conclusions are primarily drawn from retrospective data, which limits its ability to assess the long-term clinical effectiveness and real-world feasibility of AI applications.
Despite these advancements, the study primarily relies on retrospective validation rather than prospective clinical implementation. While it aggregates findings from multiple AI-driven studies, it does not present direct real-world evidence or randomized controlled trials (RCTs) evaluating AI’s impact on diabetes management. The reliance on past data raises concerns regarding model generalizability and applicability in diverse patient populations. Further prospective studies are necessary to assess how these AI tools perform in clinical settings, ensuring their reliability, scalability, and integration into routine healthcare workflows.
Guan et al. [20] provide an extensive review of AI applications in diabetes management, discussing their role in risk prediction, patient monitoring, lifestyle interventions, and treatment optimization. The work is primarily a retrospective validation study, relying on literature analysis rather than prospective clinical trials or direct AI implementations in healthcare settings. The study excels in covering a wide range of AI methodologies, including ML, DL, reinforcement learning, and semi-supervised learning techniques. However, it does not provide empirical validation of these methods in real-world clinical settings. A major limitation is the lack of discussion on AI model biases, interpretability challenges, and data heterogeneity in diabetes populations. While the authors propose the development of an AI-assisted healthcare ecosystem, their recommendations remain theoretical, with no supporting clinical implementation evidence. Future work should focus on real-time deployment and validation of AI-assisted diabetes management systems, particularly in diverse and underserved populations.
Khalifa and Albadawy [21] present a systematic review on the role of artificial intelligence (AI) in diabetes care, covering prevention, diagnosis, and management. The study identifies AI’s impact across eight domains, including predictive modeling, health monitoring, diagnostic imaging, clinical decision support, and patient self-management. The review synthesizes findings from 43 experimental studies, discussing the transformative potential of AI in personalized treatment plans, enhanced diagnostics, and patient engagement tools.
The study primarily relies on a retrospective validation approach, as it aggregates findings from previously published research rather than presenting new experimental or clinical data. While it effectively categorizes AI applications in diabetes care, it does not include real-world clinical trials, prospective validations, or direct implementation studies. The reliance on literature review and secondary data analysis limits its ability to assess the true clinical impact, scalability, and regulatory feasibility of AI-driven interventions. This review provides a comprehensive synthesis of AI applications in diabetes, systematically identifying key areas where AI has demonstrated promise. It highlights advancements in predictive analytics, automated screening, machine learning-based clinical decision support, and patient engagement technologies. The structured methodology ensures a rigorous assessment of AI’s role in diabetes care, making it a valuable reference for researchers, clinicians, and policymakers. Despite its strengths, the study lacks direct empirical validation, relying solely on secondary data. The absence of prospective studies, real-world implementation evidence, and randomized controlled trials (RCTs) raises concerns about the clinical generalizability of the AI applications discussed. Key challenges such as model bias, data heterogeneity, ethical concerns, and regulatory hurdles are acknowledged but not sufficiently explored in terms of real-world deployment.
Additionally, while the study claims AI is “revolutionizing diabetes management”, it does not provide empirical evidence proving the superiority of AI-driven approaches over conventional clinical methods. The paper also lacks a comparative effectiveness analysis, failing to measure AI’s real-world performance against existing clinical standards and physician-led interventions.

3.1. Computer Vision for Diabetic Retinopathy

Diabetic retinopathy (DR) is a leading cause of blindness worldwide, and early detection is crucial for effective intervention and prevention of vision loss. The use of computer vision techniques, particularly convolutional neural networks (CNNs), has revolutionized the detection and classification of diabetic retinopathy by automating the analysis of retinal fundus images. These advancements have demonstrated high accuracy and efficiency, often achieving performance comparable to that of expert ophthalmologists [3]. This subsection presents a critical overview of existing computer vision approaches for DR detection, discussing key methodologies, technology advancements, and practical applications.

3.1.1. Existing Approaches in Computer Vision for DR Detection

Early approaches to DR detection involved traditional image processing techniques, which relied on manual feature extraction from retinal images to detect lesions, microaneurysms, hemorrhages, and other pathological features indicative of DR. However, these methods were often limited by their reliance on hand-crafted features and lack of robustness across diverse patient populations [22].
With the advent of deep learning, CNNs have become the dominant approach for DR detection due to their ability to automatically learn complex hierarchical features from large datasets. Models such as ResNet, DenseNet, and Inception-V3 have been widely employed in DR research, achieving high accuracy by leveraging deeper architectures and transfer learning [23]. In one notable study, Gulshan et al. developed a CNN-based model that was validated on large datasets, demonstrating sensitivity and specificity comparable to retinal specialists [3].
More recently, vision transformers (ViTs) have emerged as a promising alternative to CNNs in medical image analysis, including DR detection. ViTs process images in a non-localized manner, capturing global context and enabling more accurate identification of retinal pathologies. Studies comparing ViTs and CNNs in DR classification have shown that ViTs often outperform CNNs, particularly in distinguishing between different stages of disease progression [24]. However, ViTs require extensive training data and computational resources, which limits their applicability in some clinical settings.
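To illustrate how a pretrained ViT can be adapted for retinal grading, the sketch below loads a standard vision transformer via the timm library and replaces its head for a hypothetical five-grade DR task; the model name, class count, and random input are assumptions, not a reproduction of any cited study.

```python
import timm
import torch

# Pretrained Vision Transformer with a new head for an assumed 5-class DR grading task.
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=5)

fundus_batch = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed fundus images
logits = vit(fundus_batch)                   # (4, 5) severity-grade scores
```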

3.1.2. Advancements in Technology and Applications

Recent advancements in AI have improved the deployment and scalability of computer vision models for DR. Transfer learning, which utilizes pre-trained models on large datasets, has become integral to DR detection, particularly in low-resource settings where labeled retinal images are limited. Pre-trained CNNs such as VGG and ResNet can be fine-tuned on smaller DR datasets, achieving comparable accuracy with reduced data requirements [25].
Another technological development is the use of generative adversarial networks (GANs) to augment training datasets. GANs generate synthetic retinal images, enhancing model generalization by diversifying training data, which is particularly beneficial for models trained on limited datasets [26]. Furthermore, multimodal approaches that combine image data with clinical information (e.g., glucose levels, blood pressure) have shown potential in enhancing DR prediction by providing a more holistic view of the patient’s health status [14].
The explainability and interpretability of DR detection models have also become critical considerations. Techniques such as Grad-CAM (gradient-weighted class activation mapping) are frequently used to provide visual explanations for model predictions by highlighting areas of the retinal image associated with DR pathology [27]. These explanations help clinicians understand model decisions, building trust and facilitating integration into clinical workflows.
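A minimal Grad-CAM sketch is shown below using forward and backward hooks on an ImageNet-pretrained ResNet-18 as a stand-in for a trained DR classifier; the random input tensor and the choice of the last convolutional block are illustrative assumptions.

```python
import torch
from torchvision import models

# Illustrative Grad-CAM over a pretrained ResNet-18 standing in for a DR classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats, grads = {}, {}

def save_feats(module, inputs, output):
    feats["a"] = output                      # activations of the last conv block

def save_grads(module, grad_in, grad_out):
    grads["g"] = grad_out[0]                 # gradients w.r.t. those activations

model.layer4.register_forward_hook(save_feats)
model.layer4.register_full_backward_hook(save_grads)

image = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed fundus image
model(image)[0].max().backward()             # backpropagate the top-class score

weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # channel-wise importance
cam = torch.relu((weights * feats["a"]).sum(dim=1))   # coarse class-activation heatmap
cam = cam / (cam.max() + 1e-8)               # normalize to [0, 1] for overlay on the image
```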

3.1.3. Applications and Clinical Implementation

In clinical practice, AI-driven DR detection systems have begun to see implementation in telemedicine and point-of-care screening. Automated screening systems reduce the workload on specialists by enabling faster and more accurate preliminary assessments, allowing for early detection in at-risk populations who may not have regular access to ophthalmologists [28]. Several AI-based DR detection tools have been approved for clinical use, including IDx-DR, which operates autonomously without specialist oversight. These tools demonstrate the potential of AI to address healthcare disparities by making DR screening accessible in remote and underserved areas [29].
The integration of AI in DR screening, however, is not without challenges. Regulatory and ethical considerations regarding data privacy and bias remain significant concerns. Models trained on limited or homogeneous datasets may exhibit reduced accuracy in diverse populations, highlighting the need for inclusive datasets and robust validation protocols [30]. Future research in DR detection should prioritize model generalizability and fairness, ensuring equitable access to reliable screening technologies.
Computer vision, particularly CNNs and emerging models like ViTs, has transformed the detection of diabetic retinopathy, enabling early intervention and improved patient outcomes. As technology advances and models become more interpretable and accessible, the integration of AI into DR screening holds promise for significantly reducing the global burden of diabetes-related blindness. Future developments will likely focus on increasing model robustness, addressing ethical concerns, and enhancing scalability in diverse clinical settings.

3.2. Trends and Future Directions

The current trend in AI methodologies for diabetes leans towards integrating multimodal data to enhance prediction accuracy and robustness. Future research is likely to focus on combining data from CGM devices, EHRs, genetic markers, and patient-reported outcomes to build comprehensive models capable of personalized care [31]. Moreover, the emphasis on explainable AI (XAI) is expected to grow, ensuring that model outputs can be interpreted and trusted by clinicians and patients alike. Explainable models can bridge the gap between AI and clinical practice by offering insights into how specific input features contribute to predictions.
Federated learning, a decentralized approach that trains algorithms across multiple institutions without sharing raw data, is emerging as a promising solution to address data privacy concerns and improve the generalizability of AI models. This technique allows for the development of robust, cross-institutional predictive models while adhering to data protection regulations such as the General Data Protection Regulation (GDPR).
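The core aggregation step of federated averaging (FedAvg) can be sketched in a few lines, as below; the three "hospitals", their sample counts, and the toy parameter vectors are hypothetical, and a real deployment would add secure communication and local training rounds.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameters by its local sample count."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three hypothetical hospitals share model parameters, never raw patient records.
hospital_a = [np.array([0.2, 1.0]), np.array([0.5])]
hospital_b = [np.array([0.4, 0.8]), np.array([0.3])]
hospital_c = [np.array([0.1, 1.2]), np.array([0.7])]

global_model = federated_average([hospital_a, hospital_b, hospital_c],
                                 client_sizes=[1200, 800, 500])
print(global_model)
```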
Lastly, as wearable and IoT (Internet of Things) devices become more advanced, real-time data acquisition and processing will likely become a standard aspect of diabetes management. Integrating AI algorithms into these devices for real-time monitoring and feedback will provide patients with proactive management tools, enabling a shift from reactive to preventive care. The challenge remains in balancing algorithm complexity with interpretability, computational efficiency, and user accessibility.
While AI methodologies in diabetes have made significant strides, continued research and development are essential to harness the full potential of these technologies. The combination of deep learning, multimodal data integration, and privacy-preserving techniques will pave the way for a more personalized and efficient approach to diabetes management, ultimately improving patient outcomes and quality of life.
AI has emerged as a powerful tool in the management of allergic diseases, particularly in predicting allergen exposure and personalizing treatment strategies. Recent advancements in AI and machine learning have enabled the analysis of electronic health records (EHRs) to predict anaphylactic risks based on patient history, genetic predispositions, and environmental factors. By leveraging natural language processing (NLP) techniques, AI systems can extract relevant patterns from unstructured medical records and social determinants of health, improving early warning systems for severe allergic reactions. The integration of machine learning with wearable sensors further enhances real-time monitoring of environmental allergen levels, allowing for proactive patient care and timely medical interventions [32].
AI has significantly accelerated drug discovery processes in diabetes by facilitating the identification of novel therapeutic targets and repurposing existing medications. Machine learning algorithms are employed to screen large-scale molecular databases, predict drug–target interactions, and optimize drug formulation processes. Deep learning-based models have demonstrated exceptional capability in analyzing high-dimensional biological data, including genomics and proteomics, to identify potential antidiabetic compounds. Recent studies highlight the use of generative adversarial networks (GANs) and reinforcement learning for molecular design, leading to more efficient drug candidate identification with reduced experimental costs. Vamathevan et al. [33] utilized deep learning methodologies to enhance drug repurposing efforts, demonstrating improved prediction accuracy for potential diabetes therapeutics. Moreover, federated learning approaches have enabled multi-institutional collaboration while preserving data privacy, further expanding AI’s role in drug discovery [34].
The integration of AI in clinical trials has transformed patient recruitment, monitoring, and predictive analysis, enhancing efficiency and reducing biases in diabetes-related drug trials [35,36].
Table 1 (Diabetes Management) presents a summary of machine learning techniques applied to diabetes management. This section of the table focuses on studies utilizing predictive modeling, risk assessment, and personalized treatment planning, highlighting the specific algorithms employed, such as neural networks (NN), support vector machines (SVMs), and ensemble models.

3.3. In-Silico Efforts

Many AI-driven models, particularly deep learning-based classification and generative models for diabetes prediction and treatment recommendations, remain at an in silico stage. These computational approaches leverage large-scale datasets to identify patterns, optimize insulin therapy, and design novel therapeutic interventions. For example, reinforcement learning frameworks have been applied to personalized diabetes management, allowing for adaptive insulin dosing recommendations based on simulated patient responses [17]. Similarly, deep generative models are widely used to predict potential drug–target interactions for diabetes drug discovery, accelerating preclinical development [33].

3.4. Retrospective Validation

Several studies have validated AI-based diabetes diagnostics and prognostic tools using historical datasets. Retrospective analyses are crucial for assessing model performance before clinical deployment. For instance, deep learning models trained on electronic health records (EHRs) have demonstrated high accuracy in predicting diabetes complications such as retinopathy and nephropathy [3]. However, despite these promising results, these models still require external validation on diverse populations to ensure robustness and generalizability [37].

3.5. Prospective Studies and Clinical Implementation

A limited but growing number of AI applications have undergone prospective evaluation in real-world clinical settings. Some AI-assisted decision support systems for diabetes management, such as continuous glucose monitoring (CGM) integration with deep learning models, have shown promising results in enhancing glycemic control [38,39,40]. Moreover, prospective trials assessing AI-based personalized diabetes treatments are ongoing, focusing on integrating CGM, wearable devices, and clinical data to refine patient-specific insulin therapy [41]. Nevertheless, widespread clinical adoption remains a challenge due to regulatory, ethical, and interpretability concerns [12].
Table 1. Summary of machine learning techniques applied to healthcare challenges: diabetes management.
Machine Learning Techniques for Diabetes Management
Article | Topic | Techniques Present (N C R S D F E L) | VT
Li et al. (2020) [17] | Blood Glucose Prediction | * | I
Yu et al. (2021) [31] | Multimodal Data Integration | * ** | RV
Gulshan et al. (2016) [3] | Diabetic Retinopathy Detection | * | RV
Shaikhina et al. (2017) [42] | Risk Stratification | * * | RV
Rodriguez-Rodriguez et al. (2019) [43] | Complication Prediction | * ** * | RV
Wang et al. (2020) [44] | Personalized Treatment Planning | * | PS
Tison et al. (2019) [45] | Real-time Glucose Monitoring | * | PS
Le et al. (2020) [46] | Risk Prediction Models | * ** | RV
Reddy et al. (2021) [41] | IoT Devices for Glucose Monitoring | * | PS
Johnson et al. (2018) [47] | Automated EHR Analysis | * | I
Rana et al. (2019) [48] | Lifestyle and Diet Impact Analysis | * * | RV
Ellahham (2020) [19] | AI in Diabetes Care | ** *** | RV
Dankwa-Mullan et al. (2019) [18] | AI Applications in Diabetes | ** *** | RV
Khalifa and Albadawy (2024) [21] | AI for Diabetes Prevention and Management | ** **** | RV
Guan et al. (2023) [20] | AI Advancements and Challenges in Diabetes | ** **** | RV
Total per technique (N, C, R, S, D, F, E, L) | | 10, 5, 3, 5, 6, 6, 6, 1 |
* Indicates that the technique is present in the corresponding study. Note: N = neural networks (NN), C = convolutional NN (CNN), R = recurrent NN (RNN), S = support vector machines (SVM), D = decision trees (DT), F = random forests (RF), E = ensemble models, L = natural language processing (NLP), VT = validation type.

4. AI Applications in Cancer

AI methodologies have become integral to cancer research, detection, diagnosis, and personalized treatment planning. The substantial volume and complexity of data generated in oncology—from medical imaging to genomic profiles—pose unique challenges, which AI methods are increasingly equipped to address. This section provides a critical overview of current AI applications in cancer, trends in emerging methodologies, and anticipated future directions for technology and clinical practice.
Waljee et al. [49] discuss the potential of AI/ML in addressing the rising burden of colorectal cancer (CRC) in sub-Saharan Africa (SSA). The study highlights the limitations of current screening tools in SSA and proposes two major AI/ML applications: (1) Multianalyte Assays with Algorithmic Analysis (MAAA) for early detection and (2) computer vision and pattern recognition algorithms for pathology-based diagnostics. The study emphasizes the need for innovative AI-based screening approaches due to limited healthcare infrastructure, inadequate access to trained specialists, and a growing CRC burden. The study is largely retrospective in nature, relying on literature review and theoretical AI model discussions rather than conducting prospective clinical trials or real-world AI deployment. It presents an in silico evaluation of AI-driven MAAA models and image recognition algorithms but does not validate these methods with clinical trials in SSA populations. The study also acknowledges the lack of local data and the absence of external validation in low-resource settings, which limits the generalizability of the proposed AI models.
This work provides a comprehensive assessment of the CRC screening challenges in SSA, highlighting resource constraints, inadequate screening policies, and a lack of real-time diagnostics. It introduces MAAA and AI-driven histopathological tools as potential solutions to improve early detection and diagnosis. The discussion is supported by strong theoretical evidence, making it a valuable reference for researchers and policymakers interested in AI-driven oncology solutions for low-resource settings.
Despite its contributions, the study has several critical limitations. One major drawback is the lack of prospective validation, as the proposed AI and machine learning approaches have not been tested in real-world clinical settings within sub-Saharan Africa. Although the study highlights the theoretical potential of AI-driven solutions, it does not include any prospective studies or pilot programs that would demonstrate their feasibility, accuracy, or cost-effectiveness in practice. Another significant limitation is the overreliance on data from high-income countries. Many of the AI models referenced in the study were trained on datasets from developed regions, which may not generalize well to populations in sub-Saharan Africa due to differences in demographics, genetics, and healthcare infrastructure. Without localized data, the applicability and reliability of these AI models remain uncertain. Furthermore, the study does not fully address the ethical and regulatory challenges associated with AI deployment in healthcare. Issues such as data privacy, AI literacy among healthcare professionals, model interpretability, and compliance with regulatory frameworks are critical factors that could hinder the successful implementation of AI solutions in sub-Saharan Africa.
Additionally, the study lacks a clear implementation strategy for integrating AI-based colorectal cancer screening into existing healthcare frameworks. While AI is presented as a potential solution to the shortage of trained pathologists and oncologists, there is no detailed discussion on how these technologies would be adapted to low-resource rural areas, how they would be maintained, or how healthcare providers and patients would be trained to use them effectively. Without addressing these challenges, the practical application of AI in colorectal cancer screening in sub-Saharan Africa remains uncertain.
Rahib et al. (2021) [50] present an analysis of projected cancer incidence and mortality rates in the United States up to the year 2040. The study uses demographic population growth projections from the US Census Bureau and cancer incidence and mortality rates from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate future trends. Through statistical modeling, it identifies the expected shifts in the most common cancer types and their mortality burden, emphasizing the importance of future healthcare resource allocation and policy development. The study is based on retrospective in silico analysis, as it uses past epidemiological data and mathematical models to forecast cancer incidence and mortality trends rather than conducting prospective clinical trials or real-world implementations. The projections rely on historical incidence trends and population growth data, assuming that current patterns in cancer epidemiology, screening practices, and treatment advancements will continue at a stable rate.
This study provides a valuable long-term perspective on cancer incidence and mortality trends, helping policymakers, researchers, and healthcare administrators anticipate future cancer burdens. By integrating large-scale national databases with demographic modeling techniques, it presents a robust statistical framework for estimating future cancer trends. The use of SEER data, which covers 35% of the US population, enhances the reliability of the estimates by incorporating delay-adjusted incidence rates and annual percentage change adjustments. Despite its strengths, the study has several important limitations. One of the primary concerns is its reliance on static modeling assumptions that do not account for unexpected medical advancements, shifts in screening policies, or the impact of emerging treatments. Cancer incidence and mortality rates are influenced by modifiable factors, such as lifestyle changes, medical innovations, and socioeconomic disparities, which the study does not dynamically incorporate. Another limitation is the assumption that average annual percentage changes in cancer incidence and mortality will remain constant over time. While historical trends provide a useful baseline, they do not fully capture potential disruptions from novel screening technologies, immunotherapies, or targeted treatments that could significantly alter the projected trajectory. For example, advancements in liquid biopsy for early cancer detection and AI-driven risk stratification models could shift the epidemiological burden in ways that current models cannot anticipate.
Furthermore, the study does not fully address racial, ethnic, and socioeconomic disparities in cancer incidence and treatment outcomes. While it uses demographic projections to estimate future cases, it does not explore how healthcare access, social determinants of health, or systemic disparities may affect cancer burden distribution. This omission limits the study’s applicability in addressing health equity concerns, particularly for populations with historically lower access to cancer prevention and treatment services. Another critique relates to the absence of sensitivity analyses accounting for healthcare policy changes. For instance, shifts in Medicare and Medicaid coverage, insurance policies, or national cancer screening guidelines could alter incidence and mortality rates beyond the study’s projections. Without accounting for these factors, the estimates may not fully reflect real-world fluctuations in cancer epidemiology.
Hsieh et al. [51] examine the application of AI models in predicting breast cancer risk among Taiwanese women with type 2 diabetes mellitus (T2DM). The study compares the predictive performance of three machine learning models: logistic regression (LR), artificial neural networks (ANN), and random forests (RF). Using data from Taiwan’s National Health Insurance Research Database (NHIRD) between 2000 and 2012, the study develops a breast cancer risk prediction model for T2DM patients. The effectiveness of these models is assessed using recall, accuracy, F1-score, and the area under the receiver operating characteristic curve (AUC). The findings suggest that while all models demonstrated high predictive accuracy, the RF model outperformed ANN and LR in terms of AUC and other performance metrics. This study is classified as a retrospective in silico analysis, as it relies on previously collected real-world data from the NHIRD rather than conducting a prospective clinical trial or real-world clinical implementation. The dataset includes 636,111 newly diagnosed female T2DM patients, and the study applies the synthetic minority oversampling technique (SMOTE) to handle class imbalances. The validation process is conducted using k-fold cross-validation and a separate test set, but no external dataset or prospective cohort validation is provided. The reliance on retrospective data and synthetic oversampling introduces potential biases and limits generalizability to broader populations.
This research provides a valuable comparison of machine learning techniques in the context of breast cancer prediction for a large-scale, national cohort of T2DM patients. The application of NHIRD data ensures a robust and representative sample, allowing for a high-powered analysis of breast cancer risk factors in diabetic populations. The study also introduces SMOTE-based data augmentation, which helps to address the imbalance in breast cancer case distributions, enhancing the robustness of the models. Furthermore, the performance evaluation using multiple metrics (AUC, precision, recall, and F1-score) ensures a comprehensive assessment of the models. The results highlight the superior predictive capability of ensemble learning methods (RF) over traditional statistical approaches (LR), reinforcing the importance of machine learning in healthcare predictions.
Despite its contributions, the study has several key limitations. One major drawback is the lack of prospective validation or real-world clinical implementation. While the models are rigorously evaluated using cross-validation and test sets, they are not validated on external datasets or prospective patient cohorts, limiting their real-world applicability. Without prospective studies, the generalizability of these models to new patient populations, diverse ethnic groups, and different healthcare systems remains uncertain. The study also does not sufficiently address clinical interpretability. While AI models, particularly random forests, demonstrate superior predictive power, the study does not explore feature importance or model explainability, which are essential for clinical decision-making. Physicians require transparent and interpretable AI models to integrate them into diagnostic workflows, and the absence of explainability techniques (such as SHAP or LIME) limits the study’s clinical impact.
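The class-imbalance handling described above (SMOTE combined with cross-validation) can be sketched as follows; the synthetic dataset, class ratio, and classifier settings are assumptions used only to show the pattern of keeping oversampling inside the training folds, not a reproduction of the NHIRD analysis.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Imbalanced synthetic data as a stand-in for a cohort with rare positive cases.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Keeping SMOTE inside the pipeline ensures oversampling is applied only to training folds.
clf = Pipeline([("smote", SMOTE(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))])
scores = cross_val_score(clf, X, y, scoring="roc_auc",
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print("mean AUC:", scores.mean())
```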
Kumar et al. [52] present a systematic review of AI techniques applied to cancer prediction and diagnosis. The study follows PRISMA guidelines and examines 185 impactful papers published between 2009 and 2021, selected from databases such as Web of Science, EBSCO, and EMBASE. The review classifies existing research based on conventional machine learning and deep learning techniques and evaluates models based on performance metrics such as accuracy, sensitivity, specificity, dice score, detection rate, precision, recall, and F1-score. The authors emphasize the growing role of AI in improving early cancer detection, automated medical imaging analysis, and prediction of cancer recurrence and survivability. The study falls under the category of retrospective in silico analysis, as it does not conduct prospective clinical trials or real-world patient validation. Instead, it synthesizes findings from past studies that primarily rely on historical datasets, computational modeling, and machine learning experiments. While the review provides valuable insights into AI applications in cancer diagnosis, it lacks real-world validation of AI models and does not assess the clinical implementation challenges associated with deploying these techniques in hospital settings. This review provides a comprehensive synthesis of AI-based cancer prediction methods, offering a valuable reference for researchers and healthcare practitioners. By systematically comparing multiple AI models, it highlights their advantages and limitations in cancer detection and risk assessment. The study effectively categorizes existing research based on different cancer types and AI methodologies, allowing for an organized understanding of the field. Additionally, the use of multiple performance evaluation metrics, such as accuracy, precision, recall, and dice score, strengthens the comparative analysis of the reviewed models.
Despite its contributions, the study has several critical limitations. A primary concern is the lack of prospective validation or real-world implementation of the AI techniques discussed. Since most reviewed studies rely on retrospective datasets and computational simulations, there is no assessment of how well these AI models perform in dynamic, real-world clinical settings with real-time patient data. Without prospective validation, the generalizability of these AI models remains uncertain. Another major limitation is the study’s reliance on research that predominantly uses structured imaging datasets, such as MRI, CT scans, and histopathological images. While these datasets are crucial for AI model development, they do not account for non-structured clinical data, such as patient history, genetic risk factors, and lifestyle variables, which are essential for understanding cancer progression. The omission of multimodal AI approaches that integrate diverse data sources limits the scope of the review.
Another weakness of the review is its focus on AI techniques without a critical discussion on their accessibility and scalability. Many AI models require extensive computational resources, large labeled datasets, and expertise in deep learning frameworks, making them difficult to implement in low-resource healthcare settings. The study does not address whether the reviewed AI techniques are feasible for deployment in developing countries, where cancer mortality rates are high, but access to advanced diagnostic tools is limited.
Tătaru et al. [53] present a comprehensive review of AI and ML applications in the management of prostate cancer (PCa). The study evaluates how AI contributes to multiple aspects of PCa care, including diagnosis, imaging, genomics, radiotherapy, and robotic-assisted surgery. The authors highlight AI’s potential in enhancing diagnostic accuracy, streamlining pathology workflows, and predicting patient outcomes using machine learning algorithms. The paper discusses various AI-driven techniques, such as deep learning (DL), convolutional neural networks (CNNs), and artificial neural networks (ANNs), with a focus on their applications in digital pathology, MRI-based prostate imaging, and genomic analysis. The review acknowledges the rapid advancements in AI-driven decision support systems, particularly in radiotherapy planning and robotic-assisted radical prostatectomy. The study falls under the category of retrospective in silico analysis, as it reviews existing literature without conducting prospective validation or clinical trials. While the paper provides a broad evaluation of AI advancements in PCa management, it does not offer real-world validation of the AI models discussed. Most of the referenced studies rely on historical datasets, computational modeling, and machine learning simulations rather than prospective patient trials. The lack of prospective clinical implementation raises concerns about the practical application of these AI methodologies in routine medical practice.
This review serves as a valuable resource for researchers and clinicians by systematically summarizing AI applications in PCa care. It effectively categorizes AI-based methodologies across various domains, such as imaging, genomics, and treatment planning. The inclusion of a diverse range of AI techniques, from CNN-based histopathological analysis to machine learning-assisted risk stratification, enhances the study’s relevance. The discussion on AI’s role in optimizing radiotherapy treatment planning and assisting in robotic-assisted prostate surgeries provides important insights into the future of AI-driven oncology. By referencing multiple performance metrics such as area under the curve (AUC), accuracy, and sensitivity, the study offers a structured comparison of AI models in different applications.
Despite its contributions, the study has notable limitations. One of the primary concerns is the lack of prospective validation for AI techniques discussed in the review. While the paper acknowledges AI’s potential, it does not provide an in-depth evaluation of the challenges associated with real-world clinical deployment. The absence of external dataset validation and prospective clinical testing limits the generalizability of the findings. Another significant limitation is the study’s emphasis on structured imaging datasets, such as MRI and histopathological images, without integrating non-structured clinical data. AI models that rely exclusively on imaging data may fail to incorporate crucial risk factors such as genetic predisposition, lifestyle habits, and environmental exposures, which play a critical role in PCa progression. The omission of multimodal AI approaches that combine imaging, genomics, and patient history limits the study’s applicability in precision medicine. Furthermore, the paper does not discuss the cost-effectiveness and feasibility of implementing AI-driven solutions in diverse healthcare settings. Many AI algorithms require extensive computational resources, large annotated datasets, and specialized expertise, which may not be readily available in resource-limited hospitals. The study does not assess whether AI-powered PCa management tools can be deployed in low- and middle-income countries, where prostate cancer diagnosis and treatment remain challenging.
Niu et al. (2020) [54] present a systematic review on the application of AI in gastric cancer diagnosis and prognosis prediction. The study explores AI-assisted diagnosis techniques, including pathology, endoscopy, and computed tomography (CT), as well as AI-based prognosis models focused on recurrence, metastasis, and survival prediction. The review highlights recent advancements and discusses future directions to enhance AI model accuracy and clinical applicability.
This work falls under the category of retrospective in silico analysis, as it synthesizes findings from previous studies without conducting prospective validation or real-world implementation. The AI models reviewed in the study are trained and validated on retrospective datasets, making them vulnerable to biases inherent in historical data. Although the review provides a valuable synthesis of AI applications in gastric cancer, it lacks experimental validation of AI models in real-world clinical settings. One of the strengths of this study is its comprehensive analysis of AI methodologies, covering multiple diagnostic modalities such as deep learning-assisted endoscopy, CNN-based pathology image recognition, and machine learning-based survival prediction models. The paper effectively categorizes AI applications in gastric cancer and provides a structured review of their clinical potential.
However, the study has several limitations. It does not address the generalizability of AI models across different patient populations, as most AI-based studies are trained on datasets from specific geographic regions or institutions. The absence of external dataset validation limits the robustness of the findings. Furthermore, the review does not explore the regulatory challenges, ethical concerns, or clinical implementation barriers associated with AI adoption in gastric cancer care. The discussion on overfitting in AI models is insightful but lacks practical strategies to mitigate the issue in real-world applications.
Hunter et al. [55] explore the role of AI in the early diagnosis of cancer. The study discusses AI’s potential in screening asymptomatic patients, triaging symptomatic individuals, and detecting cancer recurrence using ML models applied to electronic health records (EHR), diagnostic imaging, pathology slides, and peripheral blood tests. The review also addresses the ethical, regulatory, and technical challenges of AI implementation in early cancer detection.
This work falls under the category of retrospective in silico analysis, as it synthesizes past research findings without conducting prospective clinical validation of AI models. Although the review provides a comprehensive overview of AI applications, it does not assess the real-world performance of AI-driven diagnostic tools in clinical settings. The AI techniques discussed are primarily validated on retrospective datasets, limiting their applicability in real-time clinical decision-making. The study effectively highlights the advantages of AI in early cancer diagnosis, emphasizing its ability to enhance screening efficiency, automate clinical workflows, and improve risk stratification. The discussion on natural language processing (NLP) for extracting insights from unstructured EHR data is particularly valuable, as it underscores AI’s potential in optimizing clinical decision-making.
However, the study has several limitations. While it discusses the potential benefits of AI, it does not critically assess the limitations of AI-based early diagnosis models, such as the risk of false positives, algorithmic bias, and data privacy concerns. The review also lacks a discussion on the computational demands and infrastructure required to deploy AI-driven diagnostic tools in real-world clinical settings. Furthermore, the regulatory barriers to AI adoption in cancer screening programs are only briefly mentioned, leaving a gap in understanding how AI can be effectively integrated into healthcare systems.

4.1. Medical Imaging for Cancer Detection and Diagnosis

Medical imaging has been a primary focus of AI applications in cancer. Convolutional neural networks (CNNs) and other deep learning models have shown significant efficacy in interpreting mammography, computed tomography (CT), magnetic resonance imaging (MRI), and histopathology slides. These models can process high-dimensional data, identifying subtle patterns that may elude human experts. Esteva et al. demonstrated that a deep neural network could achieve dermatologist-level accuracy in classifying skin cancers from images [2]. Similarly, Gulshan et al. developed a deep learning algorithm that detects diabetic retinopathy with high accuracy, a methodology now adapted to interpret histopathological slides in oncology [3].
In the domain of medical imaging, techniques like transfer learning have proven instrumental in improving diagnostic accuracy for diseases such as lung and breast cancers. Transfer learning enables the adaptation of pre-trained convolutional neural networks (CNNs) like VGG-16 and ResNet, significantly reducing training time and overcoming the limitations of small datasets. For instance, researchers have successfully employed transfer learning to enhance segmentation and classification tasks in medical imaging by leveraging features learned from large datasets [56,57]. Despite these advancements, challenges persist, particularly in acquiring annotated medical data, which remain scarce and costly, hindering the broader applicability of these models [56,57].
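To make the transfer-learning workflow concrete, the following minimal sketch (assuming TensorFlow/Keras with an ImageNet-pretrained ResNet50; the directory path, class labels, and hyperparameters are illustrative placeholders rather than settings from the cited studies) freezes the convolutional base and trains only a small classification head on a limited imaging dataset.
# Minimal transfer-learning sketch (illustrative, not from the cited studies):
# fine-tune an ImageNet-pretrained ResNet50 head for binary image classification.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

# Hypothetical folder of labeled scans, e.g., data/train/{benign,malignant}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.resnet50.preprocess_input(x), y))

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # binary benign/malignant head
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(train_ds, epochs=5)
Freezing the base is what allows such models to be trained on the small annotated datasets typical of medical imaging; unfreezing a few top layers for a second, low-learning-rate pass is a common refinement.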

4.2. Genomics and Personalized Medicine

AI has become essential for analyzing genomic data to enable personalized treatment, particularly through the identification of biomarkers associated with different cancer subtypes. Machine learning algorithms, such as support vector machines (SVMs) and random forests, have been employed to identify relevant mutations, gene expression profiles, and epigenetic markers [5]. Genomic datasets require high computational power and sophisticated processing techniques; deep learning algorithms are well-suited for such high-dimensional, non-linear data, facilitating the discovery of previously undetectable patterns in oncogenesis and metastasis [58].
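For illustration, the sketch below ranks candidate biomarkers from a synthetic gene-expression matrix with a random forest in scikit-learn; actual studies apply far more careful preprocessing, cross-validation, and biological validation, and the data here are random placeholders.
# Illustrative biomarker ranking from a gene-expression matrix (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 5000
X = rng.normal(size=(n_samples, n_genes))   # expression levels (placeholder)
y = rng.integers(0, 2, size=n_samples)      # cancer subtype labels (placeholder)

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
print("Cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

clf.fit(X, y)
top_genes = np.argsort(clf.feature_importances_)[::-1][:20]  # top-ranked candidate biomarkers
print("Top candidate biomarker indices:", top_genes)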
Personalized oncology has particularly benefited from AI advancements in genomics, as AI can rapidly match patients’ genetic profiles to suitable targeted therapies. However, the complexity and diversity of cancer genetics require robust models that can integrate multiple data types, including transcriptomics, proteomics, and radiomics. Recent trends in multimodal AI systems, which combine imaging and genetic data, show promise for providing comprehensive insights into cancer biology [6].

4.3. AI in Cancer Treatment Planning and Drug Discovery

Reinforcement learning (RL) is increasingly being utilized for dynamic and personalized treatment planning in healthcare. In radiation therapy, for instance, Tseng et al. [59] developed a deep reinforcement learning (DRL) framework for automated radiation dose adaptation in non-small cell lung cancer (NSCLC) patients. Their approach combined generative adversarial networks (GANs) for synthetic data generation, a deep neural network (DNN) to model the radiotherapy environment, and a deep Q-network (DQN) for dose optimization. The DRL system dynamically adjusted radiation doses based on individual patient characteristics, producing automated dose adaptations that closely mirrored clinical protocols and achieving a competitive root mean square error (RMSE) of 0.76 Gy relative to clinical decisions. Such models aim to maximize therapeutic effectiveness while minimizing radiation-induced toxicities, such as pneumonitis. In chemotherapy, predictive models assess potential adverse reactions, enabling oncologists to balance efficacy with patient safety.
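The adaptive dose-selection loop can be conveyed with a deliberately simplified tabular Q-learning toy; the states, actions, reward, and "response" dynamics below are invented for illustration and bear no relation to the clinical environment model in [59]. The sketch shows only the generic value-update rule on which DQN-style methods build.
# Toy Q-learning analogue of adaptive dose selection (illustrative only):
# states are coarse "toxicity risk" bins, actions are dose adjustments in Gy.
import numpy as np

rng = np.random.default_rng(1)
n_states = 5
actions = np.array([-2.0, 0.0, 2.0])   # invented dose-change options
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, dose_change):
    # Invented environment: larger doses raise both tumor control and toxicity risk.
    next_state = int(np.clip(state + np.sign(dose_change) + rng.integers(-1, 2),
                             0, n_states - 1))
    reward = 0.5 * dose_change - 0.3 * next_state   # toy efficacy/toxicity trade-off
    return next_state, reward

for episode in range(2000):
    s = rng.integers(n_states)
    for _ in range(10):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, actions[a])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
        s = s2

print("Greedy dose adjustment per risk bin:", actions[np.argmax(Q, axis=1)])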
In drug discovery, AI is accelerating the identification of new therapeutic targets. Deep generative models, including variational autoencoders (VAEs) and generative adversarial networks (GANs), have been employed to design novel compounds with anti-cancer properties [33]. AI has significantly accelerated the drug discovery process by enabling the rapid screening of extensive chemical libraries, thereby reducing the time and costs traditionally associated with drug development. Recent advancements in AI have demonstrated its ability to predict drug–target interactions, assess compound efficacy, and identify potential toxicity with high accuracy [60]. However, challenges remain, particularly in model interpretability, as regulatory agencies and stakeholders require transparent explanations for AI-generated recommendations. Explainable AI (XAI) approaches, which provide insights into the decision-making processes of these models, are gaining traction as essential tools to address these concerns, ensuring the reliability and regulatory compliance of AI-driven discoveries [60].

4.4. Current Challenges and Limitations

Despite progress, the integration of AI into oncology faces considerable challenges. Data privacy and security are primary concerns, especially as patient data sharing is necessary to train robust AI models. Additionally, model interpretability is critical, as clinicians need to understand AI predictions to trust and apply them in patient care. Bias in training data is another concern; datasets often lack representation from underrepresented groups, leading to models that may perform suboptimally across different demographic groups [11].

4.5. Trends and Future Directions

Future research directions in AI applications in cancer are moving toward more integrative and interpretable models. Key trends include the development of multimodal approaches that combine imaging, genomic, and clinical data to provide holistic insights into cancer progression and treatment response. Transfer learning and federated learning approaches are also gaining attention as they enable models to generalize across different datasets while maintaining patient privacy [15].
Explainable AI (XAI) is an emerging field focusing on the interpretability of AI models, particularly deep learning models. XAI aims to clarify how models make decisions, which could facilitate AI adoption in clinical practice. In addition, real-time AI systems that process patient data continuously may offer responsive, adaptive treatment adjustments, aligning with the goals of precision oncology.
While AI in cancer research holds immense potential, its full realization will depend on addressing current limitations in data quality, model interpretability, and clinical integration. Advances in these areas will likely drive more reliable, effective, and ethically sound applications in oncology.
Table 2 (Cancer Applications) summarizes machine learning techniques for cancer research, covering studies on cancer classification, prognosis, and drug discovery and emphasizing methods such as convolutional neural networks (CNNs), clustering techniques, and other advanced machine learning models.
The application of AI in cancer research encompasses various stages of validation and implementation, including in silico modeling, retrospective validation, and prospective clinical studies. While many advancements demonstrate promising results, it is essential to recognize that not all approaches have transitioned into routine clinical practice.

4.6. In Silico Efforts

Many AI-driven models, particularly deep learning-based classification and generative models for drug discovery, remain at an in silico stage. These computational approaches leverage large-scale datasets to identify patterns, optimize treatment strategies, and design novel therapeutics. For example, reinforcement learning frameworks have been applied to radiotherapy dose optimization in simulated environments, allowing for iterative improvement without direct patient interaction [59]. Similarly, deep generative models are widely used to predict potential drug–target interactions, accelerating preclinical drug discovery pipelines [33].

4.7. Retrospective Validation

Several studies have validated AI-based cancer diagnostics and prognostic tools using historical datasets. Retrospective analyses are crucial for assessing model performance before clinical deployment. For instance, deep learning models trained on histopathology slides and imaging databases have demonstrated accuracy comparable to radiologists in detecting lung and breast cancer [6]. However, despite these promising results, these models still require external validation on diverse populations to ensure robustness and generalizability [15].

4.8. Prospective Studies and Clinical Implementation

A limited but growing number of AI applications have undergone prospective evaluation in real-world clinical settings. Some AI-assisted diagnostic tools for radiology, such as those for mammography interpretation, have received regulatory approval and are actively used to support clinicians [3]. Moreover, prospective trials assessing AI-based personalized oncology approaches are ongoing, focusing on integrating genomic, imaging, and clinical data to refine treatment recommendations [14]. Nevertheless, widespread clinical adoption remains a challenge due to regulatory, ethical, and interpretability concerns [12].
By clarifying these distinctions, we highlight that while AI holds immense potential in oncology, its translation from research to clinical application requires rigorous validation through retrospective and prospective studies. Future efforts should emphasize explainability, fairness, and real-world evaluation to bridge the gap between theoretical advancements and practical implementation.

5. Epidemiological Surveillance and AI

The integration of AI into epidemiological surveillance has transformed public health practices, providing unprecedented capabilities for real-time disease monitoring, outbreak detection, and forecasting. Traditional epidemiological methods, while effective, often rely on retrospective analysis and structured data collection, which can limit response times and the scope of insights. AI methodologies, particularly those leveraging big data and real-time analytics, address these limitations by synthesizing information from diverse sources, including electronic health records (EHRs), social media, climate data, and satellite imagery [63]. Compared to traditional statistical approaches such as regression models or time-series analysis, which excel with structured data but struggle with real-time processing of large-scale, unstructured datasets, AI offers significant advantages in speed, scalability, and adaptability. This section presents a critical overview of existing AI approaches in epidemiology, discusses emerging trends, and explores future directions for advancing these technologies.
Hassan Ali [64] explores the role of AI in infectious disease surveillance, outbreak prediction, and public health optimization. The study highlights AI-driven techniques such as deep learning and reinforcement learning for real-time outbreak forecasting, genomic surveillance, and vaccine development. By integrating AI with large-scale data from mobility patterns and electronic health records, the study demonstrates the potential of AI in detecting emerging public health threats. However, this study primarily remains an in silico analysis, as it does not provide empirical validation through prospective trials. The AI models discussed are largely theoretical or applied retrospectively to existing datasets, limiting their real-world applicability. Despite emphasizing AI’s potential in pandemic preparedness, the study does not present experimental results on AI-driven interventions deployed during outbreaks. Furthermore, the lack of discussion on AI model interpretability, ethical considerations, and real-time deployment constraints weakens its practical implications.
Grothen et al. [65] provide a systematic review of AI methods applied to pharmacy data for cancer surveillance and epidemiology research. The study investigates the integration of ML and NLP in analyzing electronic health records (EHRs) to improve treatment monitoring, medication adherence, and adverse event detection. The review identifies key AI techniques used in oncology informatics and highlights the increasing role of AI in cancer epidemiology. This study is classified as retrospective validation, as it reviews previously published studies rather than conducting new empirical AI model testing. While it effectively synthesizes the use of AI in pharmacy-related oncology research, it suffers from several limitations, including inconsistent reporting of results, lack of standardization in AI performance metrics, and missing details on data sources. The study also fails to address the challenges of AI model reproducibility, biases in pharmacy datasets, and clinical applicability in real-world cancer surveillance. Future research should focus on prospective validation and the development of standardized frameworks for AI-driven pharmacy informatics.
Jiao et al. [66] discuss the application of AI and big data analytics in epidemic surveillance, emphasizing AI’s role in outbreak detection, epidemiological modeling, and public health decision-making. The study outlines how AI-driven contact tracing, mobility data analysis, and predictive modeling have been leveraged to enhance pandemic response strategies. Despite its comprehensive analysis, the study remains within the in silico category, as it does not include real-world AI implementation or prospective clinical trials. While the study discusses AI applications used during the COVID-19 pandemic, it does not provide empirical evidence on their effectiveness beyond retrospective evaluations. Additionally, the study overlooks key challenges such as data privacy, AI fairness, and real-time scalability in epidemic containment. Future work should integrate prospective AI deployment studies to evaluate the actual impact of AI-based outbreak detection systems in real-world healthcare settings.
Kraemer et al. (2025) [67] present a detailed perspective on the use of AI in infectious disease modeling, discussing how deep learning, probabilistic modeling, and AI-driven computational epidemiology can improve disease surveillance. The study provides insights into AI’s potential for integrating heterogeneous data sources, including clinical records, mobility patterns, and genomic sequences, to enhance outbreak prediction. While the study contributes valuable theoretical advancements, it remains largely in silico, focusing on conceptual AI applications without conducting prospective validation. The authors emphasize the importance of AI in epidemiology but do not provide empirical evidence demonstrating AI’s superiority over traditional epidemiological models in real-world settings. Furthermore, the study does not address potential pitfalls, such as data quality limitations, AI model bias, and challenges in regulatory compliance. Future studies should prioritize real-world AI deployment and longitudinal validation to assess AI’s effectiveness in epidemic forecasting.
Anjaria et al. [68] present an overview of how AI is transforming epidemiological surveillance, pandemic preparedness, and equitable vaccine distribution. The study explores AI-based tools such as HealthMap, BlueDot, and BioSense, which integrate real-time data streams from various sources, including electronic health records, social media, and news outlets, to improve public health decision-making. The authors highlight AI’s advantages over traditional surveillance methods, including real-time predictive capabilities and dynamic learning mechanisms.
However, this study remains within the in silico category, as it primarily describes AI applications without providing empirical validation or real-world clinical trials. The study does not assess the practical implementation of these systems in live pandemic scenarios, nor does it offer comparative analysis with existing non-AI surveillance frameworks. Furthermore, the ethical implications of AI in pandemic monitoring, particularly regarding data privacy and algorithmic bias, are not explored in depth. Future studies should include prospective validation of AI-based surveillance tools to evaluate their efficacy in real-world pandemic responses.
MacIntyre et al. [69] provide a narrative review on AI’s potential to revolutionize epidemic early warning systems. The study summarizes key AI-powered epidemic intelligence platforms, such as ProMED-mail, EPIWATCH, and Epidemic Intelligence from Open Sources (EIOS), highlighting their role in rapid outbreak detection and risk assessment. AI-driven natural language processing (NLP) techniques are described as essential for automating real-time epidemic surveillance.
Despite these contributions, this work falls under retrospective validation, as it primarily compiles existing AI applications without providing new empirical validation. While the authors emphasize AI’s superiority in processing vast open-source data, they do not address the challenges of AI implementation in low-resource settings, where infrastructure limitations can hinder deployment. Additionally, the study does not critically assess the reliability of AI-driven early warning systems in comparison to conventional epidemiological methods. Future research should focus on prospective trials evaluating AI’s real-time performance in epidemic prediction and response.
Nogueira et al. [70] investigate the impact of COVID-19 on stroke care through AI-driven epidemiological surveillance. The study leverages the Viz AI Platform, an AI-powered neuroimaging tool, to analyze stroke screening rates across 97 hospitals in 20 U.S. states before and during the pandemic. The results indicate a significant decline in stroke imaging screenings and hospital admissions, underscoring AI’s potential in real-time monitoring of stroke care systems.
This study qualifies as retrospective validation, as it relies on historical datasets to evaluate AI’s role in stroke care monitoring. Although it provides valuable insights into healthcare system disruptions during the pandemic, it does not assess AI’s effectiveness in prospective clinical trials. Additionally, the study focuses on large-scale data trends without discussing AI’s impact on individual patient outcomes or clinical decision-making. Future research should aim at conducting prospective trials to determine AI’s role in optimizing stroke care pathways in real-world hospital settings.

5.1. Existing Approaches in AI-Driven Epidemiological Surveillance

AI applications in epidemiology primarily focus on early detection, outbreak prediction, and the modeling of disease spread. One of the fundamental techniques in this domain is Bayesian networks, which provide probabilistic models to represent the dependencies between variables influencing disease spread, such as geographic, demographic, and climate factors [71]. Bayesian networks have been instrumental in estimating the likelihood of disease incidence in specific areas and projecting potential risk levels.
ML models, particularly support vector machines (SVMs) and decision trees (DTs), are also used to classify and predict disease outbreaks. These models analyze structured datasets, typically consisting of historical epidemiological data, to classify regions or populations at higher risk for specific diseases. However, traditional ML models often require extensive preprocessing and lack the flexibility to handle unstructured or real-time data [72].
Deep learning models, especially recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, have gained prominence in modeling temporal patterns in disease data. For instance, RNNs have been applied to time-series datasets derived from EHRs and social media feeds to predict influenza trends, significantly improving the timeliness and accuracy of outbreak predictions [73]. Convolutional neural networks (CNNs) have similarly been utilized to analyze satellite and aerial imagery, identifying environmental indicators correlated with vector-borne diseases like malaria and dengue [74].
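As a hedged illustration of this kind of temporal modeling, the sketch below trains a small Keras LSTM on synthetic weekly case counts; real surveillance models instead ingest EHR- or social-media-derived signals and require careful validation before operational use.
# LSTM forecaster for weekly case counts (synthetic seasonal series, illustrative only).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
weeks = 300
cases = 100 + 50 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 5, weeks)

window = 8  # use the previous 8 weeks to predict the next week
X = np.stack([cases[i:i + window] for i in range(weeks - window)])[..., None]
y = cases[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)

next_week = model.predict(cases[-window:].reshape(1, window, 1), verbose=0)
print("Forecast for next week:", float(next_week[0, 0]))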
In addition to these, natural language processing (NLP) tools enable the extraction of epidemiological information from unstructured text sources, such as news articles, medical literature, and online posts. NLP has proven effective for real-time surveillance by detecting mention patterns of symptoms or disease outbreaks in social media, thus identifying potential outbreaks even before official reports are filed [75].
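A minimal sketch of the underlying idea, assuming scikit-learn and a toy set of hand-labeled posts (real systems rely on much larger corpora and far richer NLP pipelines), is shown below.
# Toy text-based outbreak-signal classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "high fever and dry cough for three days",
    "anyone else with sudden loss of smell?",
    "great weather for a picnic today",
    "traffic downtown is terrible again",
]
labels = [1, 1, 0, 0]  # 1 = possible symptom mention, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["my whole family has a cough and fever"]))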
Recent advancements in AI have further enhanced public health surveillance capabilities. For example, AI-driven platforms like BlueDot leverage machine learning and NLP to analyze global data sources, providing early warnings for outbreaks such as COVID-19. Similarly, the CDC employs NLP for vaccine safety monitoring, demonstrating AI’s ability to process real-time data streams and improve decision-making. These advancements create new opportunities for predictive analytics, such as automated tuberculosis detection from chest X-rays, enhancing the speed and accuracy of disease monitoring compared to traditional methods.

5.2. Trends in AI for Epidemiological Surveillance

Several key trends have emerged in AI-driven epidemiology, with an increasing emphasis on multimodal data integration, real-time analytics, and explainability. A primary trend is the integration of multimodal data sources to improve the accuracy and robustness of epidemiological models. By combining EHR data, mobility data, meteorological information, and social media feeds, AI models can better understand the dynamics of disease spread and predict high-risk areas with greater precision [76].
Another trend is the application of federated learning, which allows AI models to be trained on data from multiple locations without transferring sensitive patient information to a central server. This approach addresses privacy concerns while enabling cross-jurisdictional collaboration in epidemic tracking and prediction. Federated learning also enhances the generalizability of AI models by leveraging diverse data sources from various geographic regions and healthcare systems [34].
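The federated averaging idea can be sketched as follows: a NumPy-only toy with a linear model and synthetic per-site data, in which only weight vectors are exchanged. Production frameworks handle secure aggregation, client heterogeneity, and communication far more carefully than this illustration.
# Conceptual FedAvg sketch: each site trains locally; the server averages weights.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain gradient descent on squared error, standing in for local training.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three hypothetical hospitals with their own (synthetic) data that never leave the site.
sites = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
w_global = np.zeros(5)

for round_ in range(10):
    local_weights = [local_sgd(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)   # server aggregates by averaging

print("Global model after federated rounds:", w_global)
Only the averaged parameters cross institutional boundaries, which is the property underlying the privacy benefits discussed above.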
Moreover, explainable AI (XAI) is gaining importance in epidemiology to ensure that public health professionals can interpret and trust AI-driven predictions. By employing interpretable models, epidemiologists can better understand the variables influencing predictions, which is critical for transparency and public trust. Recent developments in XAI, such as SHAP (Shapley additive explanations) values, offer insights into feature importance, helping researchers interpret the contribution of specific data elements to model predictions [12].
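A brief sketch of how SHAP values can be inspected for a tree-based outbreak-risk classifier is given below (synthetic features and labels; assumes the open-source shap package is installed).
# SHAP-based interpretation of a toy outbreak-risk classifier (synthetic data).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["rainfall", "temperature", "population_density", "mobility_index"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)  # toy risk label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# The mean absolute SHAP value per feature approximates its global importance.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in zip(feature_names, importance):
    print(f"{name}: {imp:.3f}")
The same per-feature attributions feed the summary and dependence plots that epidemiologists typically use to communicate model behavior.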

5.3. Future Directions in AI-Based Epidemiological Surveillance

Looking forward, AI-driven epidemiology is likely to focus on further enhancing data integration, computational efficiency, and model interpretability. Future research will likely explore integrating high-resolution environmental data with epidemiological datasets to monitor emerging threats from zoonotic diseases. The increased availability of high-resolution satellite imagery and IoT-enabled sensors will provide real-time environmental monitoring, enabling predictive models to account for local factors, such as vector habitats and pollution levels, which influence disease transmission [77].
Adaptive learning, where models are continuously retrained on new data, is another promising area. In dynamic environments, such as fast-spreading infectious disease outbreaks, adaptive learning allows models to evolve as new data become available, improving prediction accuracy in real time [78].
Moreover, the application of AI in genomic epidemiology is expected to expand, allowing public health agencies to track pathogen evolution, identify new variants, and monitor vaccine efficacy. Integrating genomic data with traditional epidemiological data could significantly enhance surveillance systems, offering early warnings of potential vaccine-resistant strains or more virulent mutations.
Finally, ethical considerations, including data privacy, bias, and equity, will continue to be central to AI research in epidemiology. Researchers are increasingly focused on developing models that are both transparent and fair, ensuring that AI-driven interventions do not inadvertently disadvantage specific communities or exacerbate health disparities.
The role of AI in epidemiological surveillance represents a paradigm shift in public health, enabling proactive and real-time responses to infectious disease threats. By leveraging multimodal data, real-time processing, and explainable models, AI can provide valuable insights into disease dynamics, facilitating timely and effective interventions. As AI methodologies evolve, they hold the potential to transform global health monitoring systems, improving resilience against future pandemics and enhancing public health preparedness. Table 3 (Epidemiological Surveillance) provides a summary of machine learning applications in epidemiological surveillance. This section highlights techniques used for infectious disease tracking, outbreak prediction, and privacy-preserving public health monitoring, showcasing methods such as recurrent neural networks (RNN), Bayesian models, and federated learning.

5.4. In Silico Efforts

Recent advancements in generative AI and reinforcement learning have further enhanced in silico epidemiological modeling. Agent-based simulations now integrate AI-powered reinforcement learning to simulate the spread of infectious diseases under various intervention policies, allowing policymakers to test containment strategies before implementation [67]. Reinforcement learning models have been applied to optimize quarantine strategies and vaccine distribution, demonstrating potential in real-time epidemic control [79]. These models use epidemiological compartments, such as SEIR (susceptible-exposed-infected-recovered), to train AI agents on historical outbreak data and simulate various public health interventions.
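For reference, a minimal SEIR simulation of the kind used as an in silico training environment is sketched below; the parameter values are illustrative and not fitted to any real outbreak.
# Minimal SEIR compartmental simulation (scipy ODE integration), illustrative only.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 1_000_000
y0 = [N - 10, 0, 10, 0]                      # almost everyone susceptible, 10 infectious
t = np.linspace(0, 180, 181)                 # days
beta, sigma, gamma = 0.35, 1 / 5.2, 1 / 10   # illustrative transmission, incubation, recovery rates

S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print("Peak infectious count:", int(I.max()), "on day", int(t[I.argmax()]))
An RL agent would interact with such a simulator by modulating parameters (for example, lowering beta to represent distancing measures) and receiving rewards that trade off case counts against intervention costs.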
Moreover, transformer-based AI models, including bidirectional encoder representations from transformers (BERT), have been adapted for epidemiological data mining, enabling automated extraction of relevant information from vast unstructured datasets [69]. NLP-driven AI models can analyze epidemiological reports, social media feeds, and clinical literature in real time to detect early signals of outbreaks and assess public health risks. Additionally, AI-driven anomaly detection algorithms have been integrated into predictive models to identify unusual disease clusters and potential biosecurity threats [64]. These approaches continue to refine epidemiological surveillance frameworks, bridging the gap between computational modeling and real-world applicability.

5.5. Retrospective Validation

Several studies have focused on validating AI-based epidemiological models using historical datasets, ensuring their reliability before real-world implementation. Natural language processing (NLP) techniques have been applied to extract disease mentions from past social media and online sources, offering retrospective insights into disease spread [75]. AI-based pandemic modeling has leveraged historical COVID-19 datasets to evaluate the effectiveness of public health interventions, helping authorities refine response strategies [76]. Moreover, privacy-preserving federated learning models have been used to analyze past epidemiological data across multiple hospitals, demonstrating the potential of AI in disease risk modeling while maintaining data security [34]. These retrospective validation studies provide a crucial benchmark for ensuring AI models are robust and generalizable before they are integrated into real-time epidemiological workflows.
Beyond COVID-19, AI-based retrospective epidemiological validation has been applied to a range of infectious and non-communicable diseases. Retrospective analyses of tuberculosis (TB) surveillance data have employed deep learning models to enhance early detection rates and predict treatment adherence [3]. Similarly, AI-driven oncology surveillance systems have utilized past pharmacy data to refine cancer epidemiology estimates, optimizing screening recommendations for high-risk populations [65]. AI models have also demonstrated utility in analyzing retrospective influenza data to refine predictive frameworks for seasonal outbreaks, improving healthcare preparedness [9]. These retrospective applications highlight AI’s growing role in enhancing epidemiological analytics across diverse healthcare domains.

5.6. Prospective Studies and Clinical Implementation

An increasing number of AI applications have transitioned into prospective evaluation, demonstrating their ability to track disease outbreaks in real time. AI-driven disease tracking models such as BlueDot have played a pivotal role in early COVID-19 detection by analyzing global travel and health data [78]. Automated surveillance systems integrating electronic health records and patient-reported symptoms have been successfully deployed to enhance epidemic control strategies [77]. Federated learning frameworks have enabled global public health institutions to collaboratively train AI models for outbreak prediction while preserving data privacy [80]. Furthermore, explainable AI (XAI) models have been incorporated into epidemiological decision-making, allowing health professionals to interpret disease spread patterns and optimize response efforts [12]. These prospective implementations highlight the growing impact of AI in transforming epidemiological surveillance from a reactive to a proactive discipline.
Recent AI-driven public health interventions have demonstrated the potential for large-scale clinical implementation. AI-powered early warning systems have been deployed in healthcare networks to detect hospital-onset infections, reducing nosocomial transmission rates [69]. AI-based epidemic intelligence platforms, such as Epidemic Intelligence from Open Sources (EIOS), have integrated multi-source data streams, including climate patterns and zoonotic surveillance, to enhance outbreak prediction [68]. In oncology, real-time AI-driven epidemiological monitoring has improved early cancer detection through automated pharmacy data mining, enhancing population-level cancer screening strategies [70]. These prospective implementations underscore AI’s transformative potential in public health, yet regulatory hurdles, data integration challenges, and ethical considerations remain critical barriers to widespread adoption.

6. Mortality Prediction Models

Mortality prediction models are critical tools in healthcare, providing valuable insights into patient outcomes by estimating the probability of death within specific time frames. These models support risk stratification, guide resource allocation, and enhance personalized care planning. Traditionally, mortality prediction relied on statistical methods, including survival analysis, Cox proportional hazards models, and logistic regression. However, artificial intelligence (AI), particularly deep learning and machine learning (ML), has significantly expanded the capabilities of these models, allowing for more accurate and dynamic risk predictions [81]. This section presents an overview of the latest AI-driven approaches in mortality prediction, recent trends in the field, and potential future directions.
Nachit et al. [82] present a retrospective study utilizing artificial intelligence (AI)-based body composition analysis from routine abdominal CT scans to identify myosteatosis as a predictor of mortality risk in asymptomatic adults. The study applies a deep learning U-Net algorithm to segment and quantify body composition features such as total muscle area, muscle density, subcutaneous fat, visceral fat, and volumetric liver density. The findings indicate a strong association between myosteatosis and increased mortality risk, even after adjusting for age, sex, smoking status, and other metabolic risk factors.
This study falls under the category of retrospective validation, as it employs historical CT scan data to develop predictive models without real-world prospective testing. The results demonstrate high statistical significance in associating myosteatosis with adverse health outcomes, but the study lacks external validation on an independent patient cohort or a prospective clinical implementation strategy. Additionally, potential selection bias exists, given that the dataset is derived from a single-center colorectal cancer screening cohort.
Despite these limitations, the study’s strengths include a large patient cohort (n = 8982) and a median follow-up period of 8.8 years, enhancing the reliability of its findings. The application of AI-based segmentation to extract CT-based biomarkers is a novel approach that could pave the way for automated risk stratification in clinical practice. However, the lack of real-world validation and absence of prospective intervention trials limits its immediate clinical applicability. Future research should focus on validating AI-derived body composition biomarkers through external datasets and prospective trials to assess their real-world impact on patient management and health outcomes.
Reddy et al. [83] explore the role of artificial intelligence and machine learning (AI/ML) in disease forecasting, highlighting their potential in identifying epidemiological trends, predicting chronic disease progression, and optimizing healthcare resource allocation. The study reviews various AI-based forecasting models, including deep learning techniques applied to electronic health records (EHRs), social media data, and environmental factors. The authors argue that AI-driven predictive analytics could significantly enhance early disease detection, improve healthcare planning, and mitigate the impact of infectious disease outbreaks.
This work is classified as an in silico study since it primarily synthesizes theoretical applications and previously developed AI models without conducting original experiments or real-world clinical validation. While it provides an insightful overview of AI’s capabilities in disease forecasting, the study does not empirically test any AI/ML models on new datasets. Furthermore, it lacks a structured evaluation of model reliability, data biases, and the generalizability of AI-based forecasting systems across different healthcare settings.
One of the study’s strengths is its comprehensive discussion of AI’s integration with big data sources, including cloud computing, natural language processing, and IoT-based health monitoring. However, its reliance on secondary sources and theoretical modeling weakens its practical applicability. Future research should focus on implementing AI-driven forecasting models in real-world healthcare settings and validating their predictive accuracy through prospective trials. Additionally, addressing algorithmic biases, data privacy concerns, and ethical considerations will be crucial for ensuring the responsible deployment of AI in disease prediction.
Khalifa and Albadawy (2024) [21] conduct a systematic review of artificial intelligence applications in clinical prediction, identifying eight key domains where AI enhances healthcare decision-making. These include disease diagnosis, prognosis, risk assessment, treatment response prediction, disease progression modeling, readmission risk estimation, complication risk prediction, and mortality prediction. The study synthesizes findings from 74 experimental studies, analyzing how AI contributes to improved diagnostic accuracy, personalized medicine, and healthcare efficiency.
This work is categorized as a retrospective validation since it primarily reviews previously conducted experimental studies without implementing prospective clinical trials. The review offers a detailed examination of AI-driven clinical prediction models but does not validate these models in real-world clinical settings. While the study provides valuable insights into AI’s transformative potential, it does not address the limitations of AI models, such as overfitting, interpretability challenges, and the need for external validation in diverse patient populations.
The review’s strengths lie in its structured methodology, comprehensive coverage of AI applications, and identification of key challenges in AI-based clinical prediction. It highlights the necessity for interdisciplinary collaboration, regulatory oversight, and ethical AI practices to ensure the responsible deployment of AI in healthcare. However, its reliance on secondary data limits its direct applicability to clinical practice. Future research should prioritize real-world clinical trials to assess the effectiveness of AI models in patient care. Moreover, developing explainable AI (XAI) frameworks will be crucial for fostering trust and adoption among healthcare professionals.
Bacha and Sherani [79] present an analysis of AI applications in predictive healthcare analytics, particularly focusing on disease outbreak forecasting, patient outcome modeling, and resource management in healthcare systems. The study discusses the integration of AI with big data techniques, DL, and real-time data analytics to anticipate and mitigate public health challenges. It highlights AI’s ability to analyze electronic health records (EHRs), genomic data, radiological imaging, and social determinants of health to provide personalized and proactive healthcare solutions. The authors emphasize AI’s role in optimizing hospital resource allocation, predicting ICU demand, and managing epidemiological threats in real time.
This work is classified as an in silico analysis since it primarily synthesizes theoretical AI applications without conducting real-world validation or prospective clinical trials. The study reviews existing AI-based disease prediction frameworks and discusses their potential but does not implement or empirically validate an AI system on actual patient datasets. The reliance on previously published studies, computational simulations, and theoretical modeling places this work within the category of conceptual analysis rather than empirical research.
The study effectively highlights the transformative impact of AI in predictive healthcare analytics, emphasizing its potential to improve disease outbreak forecasting and patient outcome prediction. The authors provide a comprehensive discussion of AI’s integration with EHRs, wearable health monitoring devices, and social media analytics to enhance public health surveillance. The paper successfully outlines how AI-driven predictive models can shift healthcare from a reactive to a proactive approach, enabling early disease detection and personalized treatment strategies. Additionally, the discussion on AI-based resource optimization, particularly in ICU bed allocation and vaccine distribution, is well-articulated and aligns with recent advancements in AI-driven healthcare logistics.
Despite its valuable insights, the study has notable limitations. The most significant drawback is the lack of empirical validation of AI models. While the paper discusses various AI techniques for predictive healthcare analytics, it does not conduct experimental testing on real-world datasets. The absence of external validation using independent patient cohorts limits the generalizability of the proposed AI applications. Without prospective validation in clinical environments, the predictive accuracy and reliability of these AI models remain theoretical rather than actionable. The study also lacks a detailed discussion on regulatory compliance and ethical concerns related to AI-driven predictive analytics in healthcare. Given the increasing importance of patient data privacy regulations such as HIPAA and GDPR, the paper should have examined how AI models can ensure data security and mitigate privacy risks. Furthermore, the ethical implications of AI-driven disease outbreak prediction, such as the potential for misinterpretation of AI-generated forecasts leading to unnecessary public panic or misallocation of healthcare resources, remain unexplored.

6.1. Existing Approaches in AI-Based Mortality Prediction

Recent years have seen the development of increasingly sophisticated mortality prediction models using DL techniques. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks and gated recurrent units (GRUs), are widely used for predicting mortality in patients with chronic diseases. These models are especially effective in handling time-series data from electronic health records (EHRs), where sequential patterns in vital signs, lab results, and medication history can reveal subtle risk factors [84].
Convolutional neural networks (CNNs) have also been applied to mortality prediction, particularly in scenarios where image data are available, such as medical scans or histopathological images. For instance, CNNs trained on chest radiographs and echocardiograms have demonstrated significant accuracy in predicting mortality risk among patients with cardiovascular and pulmonary diseases [85]. By analyzing spatial features in medical images, CNNs can identify structural abnormalities linked to increased mortality risk.
In addition to deep learning models, ensemble methods like random forests (RF) and gradient boosting machines (GBMs) are frequently used for mortality prediction in structured datasets. These models benefit from their ability to manage high-dimensional data and capture complex feature interactions, making them useful for risk assessment across diverse patient populations [86]. Additionally, survival trees, a tree-based approach to survival analysis, have been adapted to provide time-to-event predictions, enabling a more nuanced understanding of mortality risk over time [87].
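The following sketch illustrates this class of models on synthetic ICU-style features with scikit-learn gradient boosting; the variables, coefficients, and outcome labels are invented for illustration and do not correspond to any published cohort.
# Illustrative mortality-risk model on structured (synthetic) clinical features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(65, 15, n),    # age (years)
    rng.normal(120, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(1.0, 0.4, n),  # creatinine (mg/dL)
    rng.normal(12, 4, n),     # white blood cell count (10^3/uL)
])
risk = 0.04 * (X[:, 0] - 65) + 0.8 * (X[:, 2] - 1.0) + rng.normal(0, 1, n)
y = (risk > 0.5).astype(int)  # synthetic in-hospital mortality label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("Test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))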
Natural language processing (NLP) has shown significant potential in mortality prediction, especially by analyzing clinical notes that provide detailed information beyond structured data like lab results. Approaches such as knowledge-guided convolutional neural networks (CNNs), which combine Concept Unique Identifiers (CUIs) from the Unified Medical Language System (UMLS) with word embeddings, have demonstrated effectiveness. For instance, a study employing these methods on the MIMIC-III dataset achieved a competitive area under the curve (AUC) of 0.97 in predicting mortality among critically ill patients with diabetes. This indicates that NLP, integrated with domain-specific knowledge, can substantially enhance risk assessment by extracting and leveraging hidden features from unstructured text [88].

6.2. Trends in AI for Mortality Prediction

Current trends in AI-driven mortality prediction emphasize multimodal data integration, explainability, and federated learning. A significant trend is the integration of diverse data sources—EHR data, medical imaging, genomics, and even socioeconomic factors—into a single predictive framework. This multimodal approach provides a more holistic view of patient health, capturing a range of factors that contribute to mortality risk [14]. By combining structured and unstructured data, such models achieve higher predictive accuracy and resilience across different patient demographics.
Another emerging trend is the focus on explainable AI (XAI), which aims to improve the interpretability of mortality prediction models. XAI techniques, such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), provide insights into the relative contribution of various factors to individual risk scores. These tools are essential in clinical contexts, as they help physicians understand the basis of AI-generated mortality predictions, thereby facilitating informed decision-making [89].
Federated learning (FL) is also becoming increasingly popular in mortality prediction to address data privacy and security concerns. FL allows institutions to collaboratively train models on decentralized data without sharing raw patient information. This approach improves model generalizability across institutions and patient populations while maintaining compliance with data protection regulations [80].

6.3. Future Directions in Mortality Prediction Using AI

Looking forward, AI-driven mortality prediction is likely to evolve through the incorporation of real-time data, adaptive learning, and expanded genomic insights. The rise of wearable and IoT devices has enabled continuous monitoring of physiological parameters, offering new opportunities for real-time mortality risk assessment. Integrating AI models with these data streams could allow for dynamic risk predictions that adjust based on current health metrics, providing patients and clinicians with timely alerts for preventative action [90].
Adaptive learning, where models are periodically retrained with new data, is another promising direction for mortality prediction. As health data accumulate and change over time, adaptive learning allows models to capture emerging trends and maintain accuracy across changing population health dynamics. This adaptability is particularly valuable in settings like intensive care units (ICUs), where patient conditions can fluctuate rapidly [91].
Furthermore, advances in genomic research are expected to enhance mortality prediction by identifying genetic risk factors for premature mortality. Integrating genomic data with clinical information could lead to personalized mortality risk profiles, offering insights into disease susceptibility and life expectancy based on genetic predispositions [92]. This integration would pave the way for a more individualized approach to preventive healthcare and long-term risk management.

6.4. In Silico Efforts

Many AI-driven models for mortality prediction remain at an in silico stage, where computational simulations and large-scale datasets are used to develop and refine predictive frameworks. Deep learning models, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), have been trained on electronic health records (EHRs) and imaging data to identify high-risk patient groups [85]. Generative adversarial networks (GANs) and deep reinforcement learning have also been applied to simulate patient trajectories, optimizing treatment pathways in silico before clinical implementation [33]. Additionally, survival models incorporating AI, such as DeepSurv, have been developed to estimate patient-specific risk scores based on historical datasets [87].
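As a point of reference for such survival modeling, the sketch below fits a classical Cox proportional hazards model on a synthetic cohort using the open-source lifelines package; DeepSurv-style models keep the same duration/event structure but learn the risk score with a neural network instead of a linear combination of covariates.
# Time-to-event risk modeling with a Cox proportional hazards model (synthetic cohort).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "biomarker": rng.normal(0, 1, n),
})
hazard = np.exp(0.03 * (df["age"] - 60) + 0.5 * df["biomarker"])
df["duration"] = rng.exponential(1 / hazard) * 365   # follow-up time in days (synthetic)
df["event"] = rng.random(n) < 0.8                    # True = death observed, False = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # hazard ratios and confidence intervals per covariate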
Recent studies have further reinforced the role of AI in predictive healthcare analytics. Bacha and Sherani (2025) [79] provide an in-depth analysis of AI-driven predictive models for disease forecasting and patient outcomes. Their work highlights the integration of AI with big data analytics, demonstrating its potential in identifying epidemiological trends and optimizing healthcare resource allocation. Similarly, Reddy et al. (2021) [83] review machine learning techniques applied to disease forecasting, emphasizing the role of AI in proactive healthcare interventions. However, despite the advancements in AI-based forecasting, these models remain at an in silico stage, requiring real-world validation before clinical deployment.

6.5. Retrospective Validation

Several AI-based mortality prediction models have been validated retrospectively using historical datasets. Retrospective analyses assess model performance on past patient records, ensuring robustness before prospective deployment. For example, deep learning models trained on MIMIC-III data have demonstrated high accuracy in predicting in-hospital mortality [88]. Natural language processing (NLP) models have been applied to extract relevant prognostic information from unstructured clinical notes, further improving prediction accuracy [9]. Similarly, ensemble learning techniques, such as random forests and gradient boosting, have been used to analyze structured clinical datasets, achieving reliable mortality risk estimations [86].
AI-driven body composition analysis has emerged as a promising tool for risk assessment in retrospective studies. Nachit et al. (2023) [82] conducted a retrospective analysis using AI-based CT body composition assessments to identify myosteatosis as a key predictor of mortality in asymptomatic adults. Their study applied deep learning segmentation techniques to quantify muscle and fat composition from routine CT scans, demonstrating the potential of AI in predicting long-term health outcomes. Khalifa and Albadawy (2024) [21] further explored AI’s role in clinical prediction, synthesizing findings from multiple retrospective studies that assess disease risk, treatment response, and patient outcomes. Although these studies validate AI models using historical data, their reliance on retrospective datasets limits their applicability to real-time clinical decision-making.

6.6. Prospective Studies and Clinical Implementation

A growing number of AI applications have transitioned into prospective evaluation, where models are tested in real-world clinical settings. AI-assisted mortality prediction models are increasingly integrated into hospital workflows to support clinical decision-making [89]. For example, federated learning frameworks have enabled the deployment of privacy-preserving mortality prediction models across multiple institutions while ensuring data security [80]. Real-time predictive analytics, integrating patient monitoring data from wearable devices and IoT sensors, have further enhanced the accuracy of mortality risk assessment [90]. However, regulatory hurdles, model interpretability, and ethical considerations remain significant challenges for large-scale clinical adoption [12].
Despite the predominance of retrospective validation, some AI-driven mortality prediction models have undergone prospective trials. For instance, Khalifa and Albadawy (2024) [21] highlight AI’s potential in real-time clinical prediction, discussing its application in early disease detection and treatment outcome forecasting. While these studies mark a step toward clinical integration, large-scale implementation remains limited. Addressing challenges such as algorithmic bias, data standardization, and clinician trust in AI-driven decision support systems is essential for the widespread adoption of AI in mortality risk assessment.
AI-powered mortality prediction models represent a paradigm shift in risk assessment, allowing for precise, individualized predictions that support proactive healthcare interventions. As these models incorporate more diverse data sources, adopt explainable AI techniques, and become adaptable to real-time data, they will play an increasingly central role in enhancing patient outcomes. Future developments in this field promise to further refine our understanding of mortality risk, ultimately contributing to a more predictive and preventive approach to healthcare.
Table 1, Table 2 and Table 3 provide an overview of machine learning techniques across three major healthcare areas: diabetes management, cancer applications, and epidemiological surveillance, respectively. Each section outlines the specific algorithms and methodologies applied within these domains, illustrating the breadth and versatility of machine learning in addressing diverse healthcare challenges.

7. Results

In this section, we present a comparative analysis of recent AI applications in healthcare, focusing on their methodologies, datasets used, performance metrics, and limitations.

7.1. AI Applications in Chronic Diseases

Explanation of Table 4

Table 4 presents a selection of notable studies in the application of AI for chronic diseases. Li et al. (2020) utilized LSTM networks for predicting blood glucose levels in diabetic patients, achieving a root mean square error (RMSE) of 15 mg/dL, indicating high prediction accuracy [17]. Gulshan et al. (2016) developed a CNN for diabetic retinopathy detection, demonstrating high sensitivity and specificity, which is crucial for screening programs [3]. Esteva et al. (2017) applied CNNs to skin cancer detection, achieving accuracy comparable to dermatologists, highlighting the potential of AI in diagnostic imaging [2]. Kourou et al. (2015) employed machine learning models on genomic data for cancer prognosis, achieving an area under the ROC curve (AUC) of 0.85, facilitating personalized treatment plans [5].
Table 4. Summary of AI applications in chronic diseases.
Study | Disease | Methodology | Dataset | Performance Metrics
Li et al. (2020) [17] | Diabetes (Blood Glucose Prediction) | LSTM Networks | CGM Data from 100 Patients | RMSE = 15 mg/dL
Gulshan et al. (2016) [3] | Diabetic Retinopathy | CNN | Retinal Images (EyePACS Dataset) | Sensitivity = 90%, Specificity = 94%
Esteva et al. (2017) [2] | Skin Cancer Detection | CNN | Dermoscopic Images | Accuracy = 72.1%, Comparable to Dermatologists
Kourou et al. (2015) [5] | Cancer Prognosis | SVM, Random Forest | Genomic Data (TCGA) | AUC = 0.85

7.2. AI Applications in Epidemiology and Public Health

Explanation of Table 5

Table 5 showcases studies where AI has been applied to epidemiology and public health surveillance. Wang et al. (2018) used LSTM networks to predict influenza outbreaks by analyzing EHRs and social media data, achieving an accuracy of 85% [73]. Brisimi et al. (2018) demonstrated the use of federated learning for disease risk modeling across multiple hospitals without sharing sensitive data, attaining an AUC of 0.88 [34]. Neill (2019) employed Bayesian networks for epidemic detection using demographic and climate data, achieving a high detection rate of 95% [71]. Yang et al. (2023) applied NLP techniques with transformer models to monitor COVID-19 trends through social media, achieving an F1-score of 0.92, indicating high model performance in classifying relevant posts [9].
Table 5. Summary of AI applications in epidemiology and public health.
Study | Application | Methodology | Data Sources | Performance Metrics
Wang et al. (2018) [73] | Influenza Outbreak Prediction | LSTM Networks | EHRs, Social Media Data | Accuracy = 85%
Brisimi et al. (2018) [34] | Disease Risk Modeling | Federated Learning | EHRs from Multiple Hospitals | AUC = 0.88
Neill (2019) [71] | Epidemic Detection | Bayesian Networks | Demographic and Climate Data | Detection Rate = 95%
Yang et al. (2023) [9] | COVID-19 Surveillance | NLP with Transformer Models | Social Media Posts | F1-score = 0.92

7.3. Discussion of Results

The summarized studies demonstrate that AI methodologies have achieved high performance metrics across various healthcare applications. The integration of explainable AI techniques in these models has enhanced their clinical applicability by increasing transparency and trust among healthcare professionals. Adaptive AI systems have shown potential in maintaining model performance over time, which is crucial in the ever-evolving landscape of healthcare data. However, challenges such as data heterogeneity, privacy concerns, and the need for large, high-quality datasets remain prevalent.
AI plays a pivotal role in enhancing personalized treatment, particularly in allergic diseases linked to diabetes. Machine learning models enable real-time analysis of patient-specific factors, such as food allergies impacting insulin metabolism, thereby facilitating tailored treatment plans. These advancements improve glycemic control and reduce adverse reactions, ensuring more precise therapeutic strategies. Furthermore, real-time AI applications in drug discovery have significantly accelerated the identification of novel therapeutics for Type 1 and Type 2 diabetes. Deep learning algorithms analyze large molecular databases, predict drug–target interactions, and optimize compound selection, leading to more efficient drug development processes. The integration of generative models and reinforcement learning has further enhanced the predictive accuracy of new drug candidates, reducing both the cost and time associated with traditional pharmaceutical research.
AI has also transformed clinical trial recruitment and monitoring, leading to faster approval of diabetes treatments. By leveraging predictive analytics and federated learning, AI systems identify eligible patients more accurately, ensuring diverse and representative study populations. Additionally, real-time monitoring through wearable devices and continuous data collection allows for dynamic treatment adjustments and improved patient adherence. These advancements enhance the efficiency of clinical trials and accelerate the regulatory approval process for innovative diabetes therapies. Nevertheless, the challenges noted above, particularly data heterogeneity, privacy concerns, and the scarcity of large, high-quality datasets, also apply here; addressing them through privacy-preserving techniques and collaborative AI frameworks will be essential for maximizing AI's impact in healthcare.
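As a minimal illustration of the real-time wearable monitoring described above, the following sketch flags out-of-range or rapidly changing CGM readings from a data stream. The thresholds, rate limit, and sample values are illustrative assumptions, not clinical guidance.

```python
# Illustrative alerting rule for streaming CGM readings; thresholds, rates,
# and the sample values are assumptions, not clinical guidance.
from collections import deque

def glucose_alerts(readings_mg_dl, low=70, high=250, max_rate=3.0, step_min=5):
    """Yield (index, message) alerts from an iterable of CGM readings."""
    last_two = deque(maxlen=2)
    for i, g in enumerate(readings_mg_dl):
        last_two.append(g)
        if g < low:
            yield i, f"hypoglycemia risk: {g} mg/dL"
        elif g > high:
            yield i, f"hyperglycemia: {g} mg/dL"
        if len(last_two) == 2:
            rate = abs(last_two[1] - last_two[0]) / step_min   # mg/dL per minute
            if rate > max_rate:
                yield i, f"rapid change: {last_two[0]} -> {last_two[1]} mg/dL over {step_min} min"

stream = [110, 118, 140, 190, 260, 240, 180, 95, 64]           # hypothetical 5-min samples
for idx, msg in glucose_alerts(stream):
    print(idx, msg)
```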

8. Discussion: Cross-Cutting Challenges and Solutions

AI in healthcare faces several universal challenges, including interpretability, privacy, and bias. This section consolidates these issues and proposes interdisciplinary solutions.

8.1. Interpretability and Explainable AI (XAI)

XAI techniques like SHAP (Shapley additive explanations) are essential for building trust in AI models across domains [12]. In diabetes, XAI can clarify glucose prediction models, while in cancer, it aids in understanding tumor classification. However, XAI’s effectiveness varies by domain, with image-based models (e.g., CNNs) benefiting more from visual explanations than text-based models (e.g., LLMs).
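A minimal sketch of how SHAP can be applied to a tabular risk model is shown below, assuming the open-source shap package and a synthetic dataset with hypothetical clinical feature names; real deployments would use validated clinical data and domain review of the resulting explanations.

```python
# Minimal SHAP sketch, assuming the shap package and a synthetic tabular
# diabetes-risk dataset; feature names are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
features = ["HbA1c", "BMI", "age", "fasting_glucose"]
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)   # outcome dominated by "HbA1c"

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)            # fast explainer for tree ensembles
shap_values = explainer.shap_values(X)           # shape (500, 4): per-sample feature contributions

# Global importance: mean absolute SHAP value per feature
for name, importance in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```

Ranking features by mean absolute SHAP value gives the kind of global summary a clinician could inspect, while the per-sample values support case-by-case explanations.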

8.2. Privacy and Federated Learning

Federated learning enables decentralized model training, preserving privacy while fostering collaboration [15]. Beyond privacy, it holds promise for rare diseases and global health by pooling decentralized data. However, scalability and data heterogeneity remain challenges, necessitating adaptive federation strategies.
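The sketch below illustrates the core of federated averaging (FedAvg) with three simulated hospital sites: each site updates a local model on its own data, and only model parameters (never patient records) are shared for weighted aggregation. It is a didactic NumPy simulation under assumed data and client sizes, not a production federated learning framework.

```python
# Didactic FedAvg simulation in NumPy: three "hospitals" hold data locally,
# run local gradient steps, and only model weights are averaged centrally.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([0.8, -1.2, 0.5])

def local_update(X, y, w, lr=0.1, epochs=20):
    """A few local gradient-descent steps on mean squared error."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Simulated clients with differently sized, locally held datasets
clients = []
for n in (200, 80, 150):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(0, 0.1, n)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):                                   # communication rounds
    local_weights, sizes = [], []
    for X, y in clients:
        local_weights.append(local_update(X, y, w_global.copy()))
        sizes.append(len(y))
    # FedAvg: weight each client's model by its local sample count
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("federated estimate:", np.round(w_global, 2), "ground truth:", true_w)
```

Weighting the average by local sample counts keeps large sites from being diluted by small ones, but under strong data heterogeneity more adaptive aggregation strategies are typically needed.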

8.3. Bias and Fairness

AI models often reflect biases in training data, leading to inequitable outcomes. Fairness-aware algorithms and diverse datasets are critical, but bias mitigation must extend to model design and deployment. Interdisciplinary teams, including ethicists and clinicians, are essential to ensure equitable AI.
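Bias audits can begin with simple group-wise metrics. The sketch below computes a demographic parity gap and an equal opportunity (true-positive-rate) gap on synthetic predictions for a hypothetical sensitive attribute; real audits would use validated cohort data and domain-appropriate fairness criteria.

```python
# Synthetic group-fairness audit: demographic parity and equal opportunity gaps
# between two hypothetical subgroups; data and group labels are simulated.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (sensitivity) across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

rng = np.random.default_rng(7)
group = rng.integers(0, 2, 1000)                       # two demographic subgroups
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)   # deliberately skewed predictions

print("demographic parity gap:", round(demographic_parity_gap(y_pred, group), 3))
print("equal opportunity gap:", round(equal_opportunity_gap(y_true, y_pred, group), 3))
```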

8.4. Ethical and Regulatory Considerations

AI deployment raises ethical concerns, such as patient consent for LLM-generated content and accountability for AI-driven decisions. Regulatory frameworks like the FDA’s adaptive pathways could balance innovation and safety, but global harmonization is needed.
Addressing these challenges requires interdisciplinary collaboration, patient involvement, and sustainable AI practices to ensure ethical, equitable, and effective deployment.

9. Conclusions

The integration of artificial intelligence methodologies in healthcare has shown an increasing capacity to enhance disease diagnosis, treatment, and epidemiological surveillance, particularly in regions where AI-based healthcare innovations are actively being explored and developed. Several studies highlight this trend: AI has been applied in infectious disease clinical practice to improve rapid detection and personalized treatment strategies [93], in bacterial infection diagnosis and outbreak surveillance to enable early warnings and improved disease control [94], and in life course epidemiology for predicting cardiovascular disease risk and refining epidemiological surveillance models [95]. Furthermore, comprehensive reviews highlight its potential in health informatics, including diagnostics, drug discovery, and public health surveillance [96]. This paper discussed the applications and challenges associated with AI techniques in diabetes and cancer management, epidemiological modeling, and mortality prediction. Through ML, DL, and NLP, AI has contributed to predictive modeling that enables timely interventions, enhances decision support, and optimizes personalized treatment plans. However, challenges related to data privacy, model interpretability, and representational biases persist, limiting the widespread adoption of these technologies in clinical practice. Large-scale breaches of electronic health records (EHRs) continue to raise significant concerns, with statistical evidence showing that since October 2009, over 173 million individuals in the United States have been affected by data breaches, exposing vulnerabilities in healthcare data security and underscoring the urgent need for stricter regulatory measures [97]. Furthermore, a text mining analysis of compromised healthcare records from 2009 to 2024 has identified over 392 million affected records, highlighting the increasing scale of cybersecurity threats in healthcare and the necessity for robust data protection strategies [98].
Future advancements in AI for healthcare will depend on the development of explainable models, integration of multimodal data sources, and the adoption of privacy-preserving methods, such as federated learning, that align with ethical and regulatory standards. Continued interdisciplinary collaboration among AI researchers, healthcare professionals, and policymakers will be essential to harness the full potential of AI, ensuring equitable and trustworthy healthcare innovations that improve patient outcomes and public health preparedness. Addressing these critical areas will shape a resilient, adaptable, and ethically guided AI-driven healthcare future.

Author Contributions

Conceptualization, M.V.-S., D.A.L.-V. and C.E.M.-S.; methodology, M.V.-S., D.A.L.-V., C.E.M.-S. and L.C.-H.; validation, M.V.-S., D.A.L.-V., C.E.M.-S. and L.C.-H.; formal analysis, M.V.-S., D.A.L.-V., C.E.M.-S. and L.C.-H.; investigation, M.V.-S., D.A.L.-V., C.E.M.-S. and L.C.-H.; writing—original draft preparation, M.V.-S.; writing—review and editing, M.V.-S., D.A.L.-V., C.E.M.-S. and L.C.-H.; visualization, M.V.-S. and D.A.L.-V.; supervision, M.V.-S.; project administration, M.V.-S. and D.A.L.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors acknowledge the support provided by the Secretaria de Ciencia, Humanidades, Tecnología e Inovación (Secihti), with two of the authors being affiliated researchers of this institution. The contributions from all involved entities have been valuable in shaping the development of this study. We also acknowledge the Instituto Tecnológico y de Estudios Superiores de Monterrey and the Universidad Autónoma del Estado de Hidalgo, where the original research and manuscript preparation were conducted for one of the authors. Additionally, we express our gratitude to the Universidad Autónoma Metropolitana, Unidad Azcapotzalco, where the same author is now affiliated and where the final revisions and improvements to this manuscript were completed.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
ML: Machine learning
DL: Deep learning
NLP: Natural language processing
CNN: Convolutional neural network
RNN: Recurrent neural network
LSTM: Long short-term memory
EHR: Electronic health records
SHAP: Shapley additive explanations
ROC: Receiver operating characteristic
AUC: Area under the curve
RF: Random forest
SVM: Support vector machine
DT: Decision tree
GBM: Gradient boosting machine
GRU: Gated recurrent units
UMLS: Unified Medical Language System
BERT: Bidirectional encoder representations from transformers
GPT: Generative pre-trained transformer

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  2. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
  3. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  4. Montelongo González, E.E.; Reyes-Ortíz, J.A.; González-Beltrán, B.A. Machine Learning Models for Cancer Type Classification with Unstructured Data. Comput. Sist. 2020, 24, 2. [Google Scholar] [CrossRef]
  5. Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17. [Google Scholar] [CrossRef]
  6. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.J.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Opportunities and challenges. Radiology 2019, 290, 716–727. [Google Scholar]
  7. Ochoa-Montiel, R.; Sossa, H.; Olague, G.; Chan-Ley, M.; Menendez, J. Symbolic Learning Using Brain Programming for the Recognition of Leukemia Images. Comput. Sist. 2021, 25, 4. [Google Scholar] [CrossRef]
  8. Luna Lozoya, R.S.; Ochoa Domínguez, H.J.; Sossa Azuela, J.H.; Cruz Sánchez, V.G.; Vergara-Villegas, O.O. Lightweight CNN for Detecting Microcalcifications Clusters in Digital Mammograms. Comput. Sist. 2024, 28, 1. [Google Scholar] [CrossRef]
  9. Yang, S.; Jackson, S.S.; Mowery, D.L.; Kahn, M.G.; Meystre, S.M. Applying NLP to clinical notes for improved mortality prediction. J. Am. Med. Inform. Assoc. 2023, 30, 1287–1296. [Google Scholar]
  10. Arif, M.; Ameer, I.; Bölücü, N.; Sidorov, G.; Gelbukh, A.F.; Elangovan, V. Mental Illness Classification on Social Media Texts Using Deep Learning and Transfer Learning. Comput. Sist. 2024, 28, 2. [Google Scholar] [CrossRef]
  11. Davenport, T.; Kalakota, R. Potential and limitations of artificial intelligence in healthcare: Data bias and model interpretation. Health Aff. 2019, 38, 212–218. [Google Scholar]
  12. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774. [Google Scholar]
  13. Barajas Montiel, S.E.; Morales, E.F.; Escalante, H.J.; Reyes García, C.A. Automatic Selection of Multi-View Learning Techniques and Views for Pattern Recognition in Electroencephalogram Signals. Comput. Sist. 2023, 27, 1. [Google Scholar] [CrossRef]
  14. Zhang, L.; Wang, H. Multimodal AI for diabetic retinopathy detection integrating clinical and imaging data. IEEE J. Biomed. Health Inform. 2024, 28, 351–362. [Google Scholar]
  15. Kaissis, G.; Makowski, M.R.; Rückert, D.; Braren, R.F. Secure, privacy-preserving and federated machine learning in medical imaging. Nat. Mach. Intell. 2020, 2, 305–311. [Google Scholar] [CrossRef]
  16. Sarkar, J.L.; Ramasamy, V.; Majumder, A.; Panigrahi, C.R.; Gomathy, B.; Pati, B.; Saha, A.K. SensMask: An Intelligent Mask for Assisting Patients during COVID-19 Emergencies. Comput. Sist. 2021, 25, 3. [Google Scholar] [CrossRef]
  17. Li, W.; Liu, D.; Chen, G. Deep learning for blood glucose prediction: A systematic review. IEEE Access 2020, 8, 129177–129193. [Google Scholar]
  18. Dankwa-Mullan, I.; Rivo, M.; Sepulveda, M.; Park, Y.; Snowdon, J.; Rhee, K. Transforming diabetes care through artificial intelligence: The future is here. Popul. Health Manag. 2019, 22, 229–242. [Google Scholar] [CrossRef]
  19. Ellahham, S. Artificial intelligence: The future for diabetes care. Am. J. Med. 2020, 133, 895–900. [Google Scholar] [CrossRef]
  20. Guan, Z.; Li, H.; Liu, R.; Cai, C.; Liu, Y.; Li, J.; Wang, X.; Huang, S.; Wu, L.; Liu, D.; et al. Artificial intelligence in diabetes management: Advancements, opportunities, and challenges. Cell Rep. Med. 2023, 4, 101213. [Google Scholar] [CrossRef]
  21. Khalifa, M.; Albadawy, M. Artificial intelligence for clinical prediction: Exploring key domains and essential functions. Comput. Methods Programs Biomed. Update 2024, 5, 100148. [Google Scholar] [CrossRef]
  22. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  23. Li, X.; Hu, W.; Liang, W.; Wong, S.H. Convolutional neural networks in diabetic retinopathy detection: A review of recent advances. Diabetes Technol. Ther. 2021, 23, 530–538. [Google Scholar]
  24. Matsoukas, C.; Montesinos, P.; Anastasopoulos, C.; Albarqouni, S.; Rueckert, D. Transformers vs. CNNs in diabetic retinopathy classification: Comparative study. IEEE Trans. Med. Imaging 2023, 42, 216–228. [Google Scholar]
  25. Kim, M.; Park, J.; Yoo, Y.; Choi, H.; Lee, J. Transfer learning in diabetic retinopathy detection for low-resource settings. Comput. Methods Programs Biomed. 2023, 230, 107219. [Google Scholar]
  26. Liu, F.; Zhou, Z.; Lu, W.; Yu, H.; Wang, S. GAN-based data augmentation for diabetic retinopathy detection. Med. Image Anal. 2023, 88, 102469. [Google Scholar]
  27. Selvaraju, R.R.; Mahadevan, V.; Maheswari, M.; Kumari, A.; Ramesh, D. Grad-CAM: Visual explanations for deep learning models in diabetic retinopathy detection. J. Med. Imaging 2022, 9, 034004. [Google Scholar]
  28. Xu, Y.; Wang, Y.; Li, J.; Zhang, Y.; Chen, W. Telemedicine for diabetic retinopathy screening: A review of AI-based applications. Telemed. e-Health 2024, 30, 225–234. [Google Scholar]
  29. Abramoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 1, 39. [Google Scholar] [CrossRef]
  30. Elsharkawy, M.; Hosseini, M.; Wang, X.; Ayers, D.; Naik, N.; Rajpurkar, P.; Pinto, L.; Sahni, H.; Kumar, P.; Ting, D.S.W.; et al. Ethics and equity in AI-based diabetic retinopathy detection. J. Glob. Health 2024, 14, 04012. [Google Scholar]
  31. Yu, H.; Wang, Z.; Li, J. Artificial intelligence in diabetes management: Focusing on personalized medicine. J. Diabetes Res. 2021, 2021. [Google Scholar]
  32. Miller, C.; Manious, M.; Portnoy, J. Artificial intelligence and machine learning for anaphylaxis algorithms. Curr. Opin. Allergy Clin. Immunol. 2024, 24, 305–312. [Google Scholar] [CrossRef] [PubMed]
  33. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M.; et al. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef] [PubMed]
  34. Sheller, M.J.; Edwards, B.; Reina, G.A.; Martin, J.; Pati, S.; Kotrotsou, A.; Milchenko, M.; Xu, W.; Marcus, D.; Colen, R.R.; et al. Federated learning in medicine: Facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 2020, 10, 12598. [Google Scholar] [CrossRef]
  35. Gupta, Y.; Srivastava, V.; Singh, R.K. AI-enhanced patient-centric clinical trial design. In Proceedings of the AIP Conference Proceedings; AIP Publishing: Melville, NY, USA, 2025; Volume 3254. [Google Scholar]
  36. Huang, J.; Galal, G.; Etemadi, M.; Vaidyanathan, M. Evaluation and mitigation of racial bias in clinical machine learning models: Scoping review. JMIR Med. Inform. 2022, 10, e36388. [Google Scholar] [CrossRef]
  37. Wang, Y.; Ge, X.; Ma, H.; Qi, S.; Zhang, G.; Yao, Y. Deep learning in medical ultrasound image analysis: A review. IEEE Access 2021, 9, 54310–54324. [Google Scholar] [CrossRef]
  38. Contreras, I.; Vehi, J. Artificial intelligence for diabetes management and decision support: Literature review. J. Med. Internet Res. 2018, 20, e10775. [Google Scholar] [CrossRef]
  39. Tyler, N.S.; Jacobs, P.G. Artificial intelligence in decision support systems for type 1 diabetes. Sensors 2020, 20, 3214. [Google Scholar] [CrossRef]
  40. Ahmed, A.; Aziz, S.; Abd-Alrazaq, A.; Farooq, F.; Househ, M.; Sheikh, J. The effectiveness of wearable devices using artificial intelligence for blood glucose level forecasting or prediction: Systematic review. J. Med. Internet Res. 2023, 25, e40259. [Google Scholar] [CrossRef]
  41. Reddy, S.P.; Mohapatra, D.; Ghosh, R. IoT and wearable technologies for diabetes management: A comprehensive review. Diabetes Technol. Ther. 2021, 23, 248–259. [Google Scholar]
  42. Shaikhina, T.; Khovanova, N.A. Machine learning for predictive modelling based on big data from diabetes management system. J. Biomed. Inform. 2017, 68, 216–234. [Google Scholar]
  43. Rodriguez-Rodriguez, I.; Martínez-Romero, M.; Rivas, J.L.; Gómez, E.J. Data mining techniques in the detection of diabetes complications. Comput. Methods Programs Biomed. 2019, 176, 91–102. [Google Scholar]
  44. Wang, J.; Liu, Y.; Zhou, Y.; Liu, S. Personalized treatment planning in diabetes management with machine learning. Comput. Biol. Med. 2020, 118, 103609. [Google Scholar]
  45. Tison, M.G.; Rodriguez, A. Machine learning models for real-time glucose monitoring in diabetes patients. J. Med. Syst. 2019, 43, 294. [Google Scholar]
  46. Le, V.-T.; Nguyen, N.-D.; Wang, Y.-C.; Nguyen, N.-S. Risk prediction models in diabetes management using ensemble learning. IEEE Trans. Biomed. Eng. 2020, 67, 1291–1298. [Google Scholar]
  47. Johnson, T.; Davis, A. Automated extraction and analysis of diabetic patient data from EHRs using NLP techniques. J. Diabetes Sci. Technol. 2018, 12, 1078–1086. [Google Scholar]
  48. Rana, S.; Khosravi, A.; Mishra, R.; Yadav, P.; Goyal, M.; Bhardwaj, R. Deep learning for lifestyle and diet impact analysis in diabetic patients. J. Med. Internet Res. 2019, 21, e12902. [Google Scholar]
  49. Waljee, A.K.; Weinheimer-Haus, E.M.; Abubakar, A.; Ngugi, A.K.; Siwo, G.H.; Kwakye, G.; Singal, A.G.; Rao, A.; Saini, S.D.; Read, A.J.; et al. Artificial intelligence and machine learning for early detection and diagnosis of colorectal cancer in sub-Saharan Africa. Gut 2022, 71, 1259–1265. [Google Scholar] [CrossRef]
  50. Rahib, L.; Wehner, M.R.; Matrisian, L.M.; Nead, K.T. Estimated projection of US cancer incidence and death to 2040. JAMA Netw. Open 2021, 4, e214708. [Google Scholar] [CrossRef]
  51. Hsieh, M.H.; Sun, L.M.; Lin, C.L.; Hsieh, M.J.; Hsu, C.Y.; Kao, C.H. The performance of different artificial intelligence models in predicting breast cancer among individuals having type 2 diabetes mellitus. Cancers 2019, 11, 1751. [Google Scholar] [CrossRef]
  52. Kumar, Y.; Gupta, S.; Singla, R.; Hu, Y.C. A systematic review of artificial intelligence techniques in cancer prediction and diagnosis. Arch. Comput. Methods Eng. 2022, 29, 2043–2070. [Google Scholar] [CrossRef]
  53. Tătaru, O.S.; Vartolomei, M.D.; Rassweiler, J.J.; Virgil, O.; Lucarelli, G.; Porpiglia, F.; Amparore, D.; Manfredi, M.; Carrieri, G.; Falagario, U.; et al. Artificial intelligence and machine learning in prostate cancer patient management—Current trends and future perspectives. Diagnostics 2021, 11, 354. [Google Scholar] [CrossRef] [PubMed]
  54. Niu, P.H.; Zhao, L.L.; Wu, H.L.; Zhao, D.B.; Chen, Y.T. Artificial intelligence in gastric cancer: Application and future perspectives. World J. Gastroenterol. 2020, 26, 5408. [Google Scholar] [CrossRef] [PubMed]
  55. Hunter, B.; Hindocha, S.; Lee, R.W. The role of artificial intelligence in early cancer diagnosis. Cancers 2022, 14, 1524. [Google Scholar] [CrossRef] [PubMed]
  56. Singhal, A.; Phogat, M.; Kumar, D.; Kumar, A.; Dahiya, M.; Shrivastava, V.K. Study of deep learning techniques for medical image analysis: A review. Mater. Today Proc. 2022, 56, 209–214. [Google Scholar] [CrossRef]
  57. Kim, H.E.; Cosa-Linan, A.; Santhanam, N.; Jannesari, M.; Maros, M.E.; Ganslandt, T. Transfer learning for medical image classification: A literature review. BMC Med. Imaging 2022, 22, 69. [Google Scholar] [CrossRef]
  58. Friedman, C. Genomic Medicine and the Role of Machine Learning. Annu. Rev. Med. 2019, 70, 431–444. [Google Scholar]
  59. Tseng, H.H.; Luo, Y.; Cui, S.; Chien, J.T.; Ten Haken, R.K.; Naqa, I.E. Deep reinforcement learning for automated radiation adaptation in lung cancer. Med. Phys. 2017, 44, 6690–6705. [Google Scholar] [CrossRef]
  60. Blanco-Gonzalez, A.; Cabezon, A.; Seco-Gonzalez, A.; Conde-Torres, D.; Antelo-Riveiro, P.; Pineiro, A.; Garcia-Fandino, R. The role of AI in drug discovery: Challenges, opportunities, and strategies. Pharmaceuticals 2023, 16, 891. [Google Scholar] [CrossRef]
  61. Diller, G.P.; Benesch Vidal, M.L.; Kempny, A.; Kubota, K.; Li, W.; Dimopoulos, K.; Arvanitaki, A.; Lammers, A.E.; Wort, S.J.; Baumgartner, H.; et al. A framework of deep learning networks provides expert-level accuracy for the detection and prognostication of pulmonary arterial hypertension. Eur. Heart J. Cardiovasc. Imaging 2022, 23, 1447–1456. [Google Scholar] [CrossRef]
  62. Thakur, R.S.; Chatterjee, S.; Yadav, R.N.; Gupta, L. Medical image denoising using convolutional neural networks. In Digital Image Enhancement and Reconstruction; Elsevier: Amsterdam, The Netherlands, 2023; pp. 115–138. [Google Scholar]
  63. Zhang, X.; Wang, Y.; Yang, D.; Wu, Y.; Zhang, T. Real-time epidemic tracking and forecasting using artificial intelligence. J. Health Inform. Res. 2019, 5, 159–177. [Google Scholar]
  64. Ali, H. AI for pandemic preparedness and infectious disease surveillance: Predicting outbreaks, modeling transmission, and optimizing public health interventions. Int. J. Res. Publ. Rev. 2024, 5, 4605–4619. [Google Scholar] [CrossRef]
  65. Grothen, A.E.; Tennant, B.; Wang, C.; Torres, A.; Bloodgood Sheppard, B.; Abastillas, G.; Matatova, M.; Warner, J.L.; Rivera, D.R. Application of artificial intelligence methods to pharmacy data for cancer surveillance and epidemiology research: A systematic review. JCO Clin. Cancer Inform. 2020, 4, 1051–1058. [Google Scholar] [CrossRef] [PubMed]
  66. Jiao, Z.; Ji, H.; Yan, J.; Qi, X. Application of big data and artificial intelligence in epidemic surveillance and containment. Intell. Med. 2023, 3, 36–43. [Google Scholar] [CrossRef] [PubMed]
  67. Kraemer, M.U.; Tsui, J.L.H.; Chang, S.Y.; Lytras, S.; Khurana, M.P.; Vanderslott, S.; Bajaj, S.; Scheidwasser, N.; Curran-Sebastian, J.L.; Semenova, E.; et al. Artificial intelligence for modelling infectious disease epidemics. Nature 2025, 638, 623–635. [Google Scholar] [CrossRef]
  68. Anjaria, P.; Asediya, V.; Bhavsar, P.; Pathak, A.; Desai, D.; Patil, V. Artificial intelligence in public health: Revolutionizing epidemiological surveillance for pandemic preparedness and equitable vaccine access. Vaccines 2023, 11, 1154. [Google Scholar] [CrossRef]
  69. MacIntyre, C.R.; Chen, X.; Kunasekaran, M.; Quigley, A.; Lim, S.; Stone, H.; Paik, H.Y.; Yao, L.; Heslop, D.; Wei, W.; et al. Artificial intelligence in public health: The potential of epidemic early warning systems. J. Int. Med. Res. 2023, 51, 03000605231159335. [Google Scholar] [CrossRef]
  70. Nogueira, R.G.; Davies, J.M.; Gupta, R.; Hassan, A.E.; Devlin, T.; Haussen, D.C.; Mohammaden, M.H.; Kellner, C.P.; Arthur, A.; Elijovich, L.; et al. Epidemiological surveillance of the impact of the COVID-19 pandemic on stroke care using artificial intelligence. Stroke 2021, 52, 1682–1690. [Google Scholar] [CrossRef]
  71. Neill, D.B. Real-time Bayesian networks for disease surveillance and outbreak detection. Epidemiol. Infect. 2019, 147, 1–9. [Google Scholar]
  72. Choi, J.H.; Zhang, L. Using machine learning to predict infectious disease outbreaks: A systematic review. J. Infect. Dis. 2017, 16, 2145–2158. [Google Scholar]
  73. Wang, Q.; Chen, L.; Thirunarayan, K.; Sheth, A.P. Deep learning for influenza trend prediction: A case study using social media data. J. Med. Internet Res. 2018, 20, e275. [Google Scholar]
  74. Jean, N.; Burke, M. Semi-automated convolutional neural networks for malaria risk mapping using satellite data. PLoS ONE 2019, 14, e0212172. [Google Scholar]
  75. Lampos, V.; Majumder, M.S.; Yom-Tov, E.; Edelstein, M.; Moura, S.; Hamada, Y.; Rangoussi, M.; Hossain, L.; Doan, S.; Cox, I.J. Tracking COVID-19 using natural language processing on social media data: An overview. Nat. Digit. Med. 2021, 4, 24–38. [Google Scholar]
  76. Smigielski, W.; Rees, E.; Patel, N.; Zhou, S.; Tatem, A.J. Real-time integration of multimodal data sources for outbreak detection. BMC Public Health 2020, 20, 341. [Google Scholar]
  77. Rakki, R.; Ahmed, R.; Fong, J.; Yoon, S. Data-driven models for environmental surveillance and zoonotic disease prediction. Environ. Health Perspect. 2020, 128, 047004. [Google Scholar]
  78. Chen, L.; Huang, W.; Cheng, Y.; Wu, C.H.; Li, J. Tracking COVID-19 spread with real-time adaptive learning models. Sci. Rep. 2020, 10, 18954. [Google Scholar]
  79. Bacha, A.; Sherani, A.M.K. AI in Predictive Healthcare Analytics: Forecasting Disease Outbreaks and Patient Outcomes. Glob. Trends Sci. Technol. 2025, 1, 1–14. [Google Scholar]
  80. Xu, L.; Zhao, M. Federated learning for mortality prediction across hospitals: A privacy-preserving approach. J. Biomed. Health Inform. 2024, 28, 315–326. [Google Scholar]
  81. Harrell, F.E. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis; Springer: New York, NY, USA, 2001. [Google Scholar]
  82. Nachit, M.; Horsmans, Y.; Summers, R.M.; Leclercq, I.A.; Pickhardt, P.J. AI-based CT body composition identifies myosteatosis as key mortality predictor in asymptomatic adults. Radiology 2023, 307, e222008. [Google Scholar] [CrossRef]
  83. Reddy, M.S.; Sarisa, M.; Konkimalla, S.; Bauskar, S.R.; Gollangi, H.K.; Galla, E.P.; Rajaram, S.K. Predicting tomorrow’s Ailments: How AI/ML Is Transforming Disease Forecasting. ESP J. Eng. Technol. Adv. 2021, 1, 188–200. [Google Scholar]
  84. Choi, E.; Schuetz, A.; Stewart, W.F.; Sun, J. Medical event prediction using recurrent neural networks and sequences of health records. Sci. Rep. 2018, 8, 10578. [Google Scholar]
  85. Liu, Y.; Roberts, A.; Long, Y.; Chen, Z. Predicting mortality risk using convolutional neural networks on chest X-rays and clinical data. Med. Image Anal. 2021, 71, 102037. [Google Scholar]
  86. Wang, H.; Zhang, J.; Li, Y.; Huang, Y. Ensemble learning for mortality prediction in multi-institutional datasets. J. Biomed. Inform. 2023, 146, 105534. [Google Scholar]
  87. Lee, J.; Park, H.; Kim, Y.; Choi, K. Interpretable survival models with survival trees for predicting mortality risk. Stat. Med. 2022, 41, 2613–2630. [Google Scholar]
  88. Ye, J.; Yao, L.; Shen, J.; Janarthanam, R.; Luo, Y. Predicting mortality in critically ill patients with diabetes using machine learning and clinical notes. BMC Med. Inform. Decis. Mak. 2020, 20, 295. [Google Scholar] [CrossRef]
  89. Sun, H.; Li, M.; Wang, Y.; Zhang, T. Interpretability in deep learning models for mortality prediction in healthcare. Artif. Intell. Med. 2023, 136, 102431. [Google Scholar]
  90. Green, J.; Patel, A. Integrating IoT data for real-time mortality risk prediction in wearable devices. IEEE Internet Things J. 2024, 11, 252–263. [Google Scholar]
  91. Chen, M.; Roberts, A. Adaptive learning models for ICU mortality prediction with evolving patient data. Crit. Care Med. 2024, 52, 341–353. [Google Scholar]
  92. Li, Z.; Liu, Y.; Zhao, X.; Wang, P. Genomics and AI for predicting mortality risk: A review of recent advances. J. Pers. Med. 2023, 13, 892–905. [Google Scholar]
  93. Sarantopoulos, A.; Mastori Kourmpani, C.; Yokarasa, A.L.; Makamanzi, C.; Antoniou, P.; Spernovasilis, N.; Tsioutis, C. Artificial Intelligence in Infectious Disease Clinical Practice: An Overview of Gaps, Opportunities, and Limitations. Trop. Med. Infect. Dis. 2024, 9, 228. [Google Scholar] [CrossRef]
  94. Zhang, X.; Zhang, D.; Zhang, X.; Zhang, X. Artificial intelligence applications in the diagnosis and treatment of bacterial infections. Front. Microbiol. 2024, 15. [Google Scholar] [CrossRef]
  95. Chen, S.; Yu, J.; Chamouni, S.; Wang, Y.; Li, Y. Integrating machine learning and artificial intelligence in life-course epidemiology: Pathways to innovative public health solutions. BMC Med. 2024, 22. [Google Scholar] [CrossRef] [PubMed]
  96. Kwak, G.H.; Hui, P. DeepHealth: Review and challenges of artificial intelligence in health informatics. arXiv 2019, arXiv:1909.00384. [Google Scholar]
  97. Koczkodaj, W.W.; Mazurek, M.; Strzałka, D.; Wolny-Dominiak, A.; Woodbury-Smith, M. Electronic health record breaches as social indicators. Soc. Indic. Res. 2019, 141, 861–871. [Google Scholar] [CrossRef]
  98. Koczkodaj, W.W.; Nowacki, M.; Pedrycz, W.; Strzalka, D. Text mining analysis of over 392 million compromised healthcare records. Adv. Sci. Technol. Res. J. 2025, 19, 73–81. [Google Scholar] [CrossRef]
Table 2. Summary of machine learning techniques applied to healthcare challenges: cancer applications.
Machine Learning Techniques for Cancer Applications
Article Topic N C S Cl D Na No F K Validation Type
Hunter et al. (2022) [55]AI in Early Cancer Diagnosis** RV
Niu et al. (2020) [54]AI in Gastric Cancer** * RV
Tătaru et al. (2021) [53]AI in Prostate Cancer Management** * RV
Kumar et al. (2022) [52]AI in Cancer Prediction*** *** RV
Hsieh et al. (2019) [51]AI Models for Breast Cancer** * * RV
Rahib et al. (2021) [50]US Cancer Incidence and Death Projections* * ** I
Waljee et al. (2022) [49]AI for Colorectal Cancer Diagnosis** ** RV
Esteva et al. (2017) [2]Skin Cancer Classification * * RV
Gulshan et al. (2016) [3]Diabetic Retinopathy Screening * RV
Kourou et al. (2015) [5]Cancer Prognosis* * * * RV
Tseng et al. (2017) [59]Tumor Control Prediction * * I
Vamathevan et al. (2019) [33]Drug Discovery in Oncology* * I
Blanco et al. (2023) [60]Drug Development * RV
Bi et al. (2019) [6]Cancer Imaging Analysis * * * RV
Davenport and Kalakota (2019) [11]Model Interpretation and Data Bias* * RV
Kaissis et al. (2020) [15]Privacy-Preserving ML * * * RV
Wang et al. (2021) [37]Various Datasets ** *I
Diller et al. (2019) [61]Pulmonary Hypertension Prognostication * *PS
Thakur et al. (2019) [62]Medical Image Denoising * I
Total9104333542
* Indicates that the technique is present in the corresponding study. Note: N = neural networks (NN), C = convolutional NN (CNN), S = support vector machines, Cl = clustering (including K-means), D = decision trees (DT), Na = naive Bayes, B = Bayes, No = novel ML techniques, F = random forests (RF), K = K-nearest neighbors (KNN).
Table 3. Summary of machine learning techniques applied to healthcare challenges: epidemiological surveillance.
Machine Learning Techniques for Epidemiological Surveillance
Article Topic B S R C L D N F VT
Neill (2019) [71]Disease Surveillance* I
Choi and Zhang (2017) [72]Infectious Disease Outbreak Prediction * * RV
Wang et al. (2018) [73]Influenza Prediction * * RV
Jean and Burke (2019) [74]Malaria Risk Mapping * I
Lampos et al. (2021) [75]COVID-19 Tracking with Social Media * RV
Smigielski et al. (2020) [76]Outbreak Detection * * RV
Brisimi et al. (2018) [34]Privacy-Preserving Public Health Surveillance *PS
Lundberg and Lee (2017) [12]Model Interpretability I
Rakki et al. (2020) [77]Zoonotic Disease Prediction * I
Chen et al. (2020) [78]Real-Time COVID-19 Tracking * * PS
Nogueira et al. (2021) [70]AI for Stroke Care Surveillance * *RV
MacIntyre et al. (2023) [69]Epidemic Early Warning Systems * * * RV
Anjaria et al. (2023) [68]Pandemic Preparedness and Vaccine Access ** RV
Kraemer et al. (2025) [67]AI for Infectious Disease Modeling* * * I
Jiao et al. (2023) [66]AI in Epidemic Surveillance and Containment * ** RV
Ali (2024) [64]AI for Pandemic Preparedness * * I
Bacha and Sherani (2025) [79]AI for Healthcare Analytics* * * I
Total34545343
* Indicates that the technique is present in the corresponding study. Note: B = Bayesian, S = support vector machines (SVM), R = recurrent NN (RNN), C = convolutional NN (CNN), L = long short-term memory (LSTM), D = decision trees (DT), N = natural language processing, F = federated learning, VT = validation type.
