Systematic Review

Artificial Intelligence in Biomedicine: A Systematic Review from Nanomedicine to Neurology and Hepatology

by Diana-Maria Trasca 1,†, Pluta Ion Dorin 2,†, Sirbulet Carmen 3,*, Renata-Maria Varut 4,*, Cristina Elena Singer 5, Kristina Radivojevic 4 and George Alin Stoica 6
1 Department of Internal Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
2 Faculty of Medical and Behavioral Sciences, Constantin Brâncuși University of Târgu Jiu, 210185 Târgu Jiu, Romania
3 Department of Anatomy, Discipline of Anatomy, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
4 Research Methodology Department, Faculty of Pharmacy, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
5 Department of Mother and Baby, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
6 Department of Pediatric Surgery, Faculty of Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Pharmaceutics 2025, 17(12), 1564; https://doi.org/10.3390/pharmaceutics17121564
Submission received: 2 November 2025 / Revised: 15 November 2025 / Accepted: 28 November 2025 / Published: 4 December 2025
(This article belongs to the Special Issue Advancements in AI and Pharmacokinetics)

Abstract

Background/Objectives: This review evaluates the expanding contributions of artificial intelligence (AI) across biomedicine, focusing on cancer therapy and nanomedicine, cardiology and medical imaging, neurodegenerative disorders, and liver disease. Core AI concepts (machine learning, deep learning, artificial neural networks, model training/validation, and explainability) are introduced to frame application domains. Methods: A systematic search of major biomedical databases (2010–2025) identified English-language original studies on AI in these four areas; 203 articles meeting PRISMA 2020 criteria were included in a qualitative synthesis. Results: In oncology and nanomedicine, AI-driven methods expedite nanocarrier design, predict biodistribution and treatment response, and enable nanoparticle-enhanced monitoring. In cardiology, algorithms enhance ECG interpretation, coronary calcium scoring, automated image segmentation, and noninvasive FFR estimation. For neurological disease, multimodal AI models integrate imaging and biomarker data to improve early detection and patient stratification. In hepatology, AI supports digital histopathology, augments intraoperative robotics, and refines transplant wait-list prioritization. Common obstacles are highlighted, including data heterogeneity, lack of standardized acquisition protocols, model transparency, and the scarcity of prospective multicenter validation. Conclusions: AI is emerging as a practical enabler across these biomedical fields, but its safe and equitable use requires harmonized data, rigorous multicenter validation, and more transparent models to ensure clinical benefit while minimizing bias.

Graphical Abstract

1. Introduction

The conceptual roots of Artificial Intelligence (AI) can be traced back to the mid-20th century, when pioneers such as Alan Turing and John McCarthy proposed the idea of machines capable of simulating human reasoning and learning. The early decades (1950s–1970s) were characterized by symbolic AI and rule-based expert systems, which relied on manually encoded logic. The 1980s and 1990s saw the emergence of statistical learning and neural network models, marking a shift toward data-driven computation. The explosion of digital data and computational power in the 2010s catalyzed the deep learning revolution, leading to the development of multilayer neural architectures capable of high-dimensional pattern recognition. More recently, transformer-based and generative models have further advanced the field, enabling large-scale, context-aware systems with unprecedented performance in medical imaging, drug discovery, and natural language processing [1,2,3]. AI represents a frontier field within computer science, dedicated to developing systems capable of replicating human intelligence and performing tasks traditionally dependent on human cognition [4]. The primary objective of AI is to imitate and automate complex cognitive functions such as learning, perception, reasoning, and problem-solving. Within this broad domain, methodologies like machine learning, computer vision, robotics, neural networks, and natural language processing play integral roles [5,6].
In medicine, AI-driven tools assist in predicting disease progression, interpreting medical images, and accelerating drug development [7,8].
The biomedical research community has shown growing interest in artificial intelligence due to its ability to process complex, large-scale biomedical datasets, generate accurate diagnostic insights, and optimize therapeutic decision-making. Within medicine, AI-driven systems contribute to precision diagnostics, image analysis, drug discovery, and robotic-assisted interventions. Their reliability depends on principles such as transparency, interpretability, and data integrity, which ensure trustworthy and reproducible results. Although modern AI techniques have rapidly evolved, their conceptual foundations, rooted in early computational and cognitive science research, continue to inform present-day biomedical innovation [9,10,11,12]. A schematic overview of the core AI workflow in biomedicine, from data acquisition to clinical implementation, is presented in Figure 1.
AI-enhanced systems are also transforming personalized medicine by utilizing patient histories and medical records to determine individualized treatment plans and optimal medication regimens. Data from wearable monitoring devices, which capture heart rates and physical activity, can be integrated into healthcare databases for real-time analysis. With the influx of patient-specific information from diverse sources, AI identifies anomalies, predicts potential medical crises, and alerts healthcare professionals. For instance, hospitals in countries such as Denmark and Norway have adopted AI-based analytic tools to detect inefficiencies and minimize treatment errors within healthcare systems [13,14,15]. Furthermore, surgical robots trained through AI can analyze extensive procedural data to refine surgical techniques, allowing for greater precision and reduced unintended movement [16,17,18]. Beyond individual surgical specialties, AI is increasingly applied to minimally invasive and robot-assisted surgeries, as well as postoperative monitoring, including the estimation of recovery durations [19,20].
The four domains covered in this review, nanomedicine, cardiology, neurology, and hepatology, were deliberately chosen to represent complementary biological and clinical scales at which artificial intelligence exerts measurable impact. Nanomedicine illustrates AI-driven design and optimization at the molecular and subcellular level; cardiology represents organ-level imaging and physiological monitoring; neurology demonstrates multimodal integration across imaging, electrophysiology, and digital biomarkers; and hepatology exemplifies AI applications in pathology, surgery, and transplant decision-making. Together, these fields provide a coherent framework spanning molecular, structural, functional, and systemic dimensions of biomedicine, allowing for the identification of cross-cutting computational principles and translational pathways that would not be evident within a single specialty.
The aim of this systematic review is to comprehensively synthesize and evaluate the current and emerging applications of AI across key biomedical fields, including nanomedicine, cardiology, neurology, and hepatology. Specifically, the study seeks to identify methodological advances and translational opportunities of AI in these domains; analyze the strengths and limitations of existing evidence; and highlight future research and regulatory directions needed for safe, effective, and equitable clinical implementation of AI technologies.

2. Methods

A systematic search was carried out to map contemporary applications of artificial intelligence across oncology (including nanomedicine), cardiology and imaging, neurodegenerative disorders, and hepatology. Electronic searches of PubMed/MEDLINE, Scopus, Web of Science, and Google Scholar were performed between January 2010 and June 2025, using combinations of controlled vocabulary and free-text terms such as “artificial intelligence”, “machine learning”, “deep learning”, “neural network”, “explainable AI”, “nanomedicine”, “nanocarrier”, “cancer”, “cardiology”, “CT”, “MRI”, “FFR-CT”, “ECG”, “radiomics”, “Alzheimer”, “Parkinson”, “liver”, “histopathology”, and “transplant”.
Reference lists of retrieved reviews and key articles were manually screened to identify additional relevant records.
Eligible studies included English-language full-text articles that described AI-based methodologies, clinical or preclinical applications, validation frameworks, or translational implications within the target domains. To account for domain-specific differences in research focus and methodology, tailored inclusion criteria were applied for each biomedical field.
  • Nanomedicine: Studies employing AI or machine learning models for nanoparticle design, drug delivery optimization, or nano-bio interface characterization were included.
  • Cardiology: Eligible studies focused on AI-assisted diagnosis, risk prediction, or image-based assessment (echocardiography, CT, or MRI) of cardiovascular diseases.
  • Neurology: Included studies addressed AI applications in neuroimaging, neurodegenerative diseases (Alzheimer’s, Parkinson’s), or neurological outcome prediction.
  • Hepatology: Studies were included if they used AI tools for liver disease diagnosis, fibrosis staging, hepatocellular carcinoma detection, or treatment outcome prediction.
Across all domains, only peer-reviewed original research articles in English were included, while reviews, editorials, case reports, and non-human experimental studies were excluded.
Priority was given to work published within the last five years, while seminal older studies were retained when methodologically informative. Conference abstracts, non-English papers, editorials, commentaries, and reports lacking methodological detail were excluded.
Titles and abstracts were independently screened for relevance, and full-text articles were subsequently reviewed for eligibility. Extracted data included study aims, biomedical domain, data modality, AI technique, dataset characteristics, validation approach, performance metrics, and noted limitations. Where reported, dataset origin and type (public, institutional, or proprietary), sample size, and validation strategy (e.g., k-fold cross-validation, hold-out, or external validation) were also recorded. However, many primary studies lacked complete reporting of these parameters, which was noted as a limitation in the qualitative synthesis. The synthesis was performed narratively to characterize methodological trends, key clinical applications, and translational barriers; no quantitative meta-analysis or formal risk-of-bias assessment was undertaken given the heterogeneity of study designs.
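For readers less familiar with the validation strategies listed above, the short Python sketch below contrasts a simple hold-out split with 5-fold cross-validation using scikit-learn; the dataset is synthetic and stands in for a real tabular biomedical dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Synthetic stand-in for a tabular biomedical dataset (hypothetical data).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hold-out validation: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
holdout_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# k-fold cross-validation: every sample is tested exactly once across k folds,
# giving a less split-dependent performance estimate.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
fold_accs = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(f"hold-out accuracy: {holdout_acc:.2f}")
print(f"5-fold accuracies: {np.round(fold_accs, 2)} (mean {fold_accs.mean():.2f})")
```

External validation, by contrast, would evaluate the trained model on data from an entirely different institution or cohort, which no within-dataset resampling scheme can substitute for.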
The literature selection process adhered to the PRISMA 2020 Statement to ensure transparency and reproducibility. A total of 295 records were initially identified. After removing 29 duplicates, 266 unique studies were screened by title and abstract. Thirty-three records were excluded for lack of relevance, and 233 full-text articles were assessed for eligibility. Following exclusion of non–peer-reviewed materials (n = 16) and non-English publications (n = 14), a total of 203 articles were included in the final qualitative synthesis. The PRISMA 2020 flow diagram (Figure 2) is included in the main text, and the completed PRISMA 2020 checklist is provided as Supplementary Materials. The study has been registered on the Open Science Framework (OSF) to enhance transparency and reproducibility. The full protocol and metadata are available at https://osf.io/m4fq2 (accessed on 10 November 2025).
The methodological quality and potential risk of bias of the included studies were qualitatively appraised using domains adapted from the Joanna Briggs Institute (JBI) and AMSTAR 2 tools. Because of the substantial heterogeneity in study designs and reporting standards, a formal scoring system was not applied. Instead, the evaluation focused on the clarity of study objectives, validity of data sources, AI model validation methods, transparency of outcome reporting, and reproducibility of analyses. In numerous studies, performance metrics such as accuracy, sensitivity, specificity, or AUC were reported without sufficient contextual details, including dataset identifiers, validation folds, or training sample sizes. This lack of standardized reporting limits direct comparison across studies and constrains reproducibility, despite generally sound methodological design.
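The performance metrics named above follow directly from confusion-matrix counts, which is why incomplete reporting of dataset sizes hampers comparison. The snippet below, using hypothetical counts, shows how each metric is derived.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics computed from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a diagnostic test applied to 1000 patients.
m = classification_metrics(tp=85, fp=45, tn=855, fn=15)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

AUC cannot be recovered from a single operating point; it requires the full score distribution, which is one reason studies reporting only thresholded metrics are hard to compare.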

3. AI in Medicine

Table 1 provides a structured overview of the main AI applications discussed in this review across the four biomedical domains. It summarizes the types of data, computational tasks, and methodological approaches reported in the selected literature, offering a concise reference point that complements and anchors the narrative in this section.

3.1. AI in Cancer Therapy

In healthcare, AI systems function through specialized computational mechanisms that enable data interpretation and predictive analysis. Machine learning (ML) algorithms, encompassing supervised, unsupervised, and reinforcement learning, detect patterns and relationships within medical datasets. In supervised learning, labeled data are used to train models such as Support Vector Machines (SVMs) or Random Forests (RF) to recognize abnormalities, including tumor regions in medical imaging [34]. By contrast, unsupervised learning methods discover hidden structures within unlabeled datasets, allowing the identification of distinct cancer subtypes, while reinforcement learning enhances therapeutic strategies by iteratively learning from previous patient outcomes [35]. Neural networks (NNs) emulate the brain’s information-processing structure through interconnected computational units known as nodes, which are organized into multiple layers. Input data pass through these layers and are transformed using activation functions like Rectified Linear Unit (ReLU) or Sigmoid, while optimization techniques such as backpropagation and gradient descent iteratively reduce prediction errors by adjusting connection weights [36].
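The supervised models named above can be sketched in a few lines. The example below trains an SVM and a Random Forest on synthetic labeled features (a hypothetical stand-in for, e.g., features extracted from imaging regions) and compares their test accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical labeled feature vectors; label 1 marks "abnormal" cases.
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

results = {}
for name, model in [("SVM", SVC(kernel="rbf")),
                    ("Random Forest", RandomForestClassifier(random_state=42))]:
    # Fit on the labeled training split, evaluate on held-out data.
    results[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name} test accuracy: {results[name]:.2f}")
```

An unsupervised method would instead be given `X` without `y` and asked to discover structure, as illustrated later for patient phenogrouping.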
Deep learning (DL), a branch of neural network–based AI, incorporates multiple hidden layers that enable the system to perform advanced biomedical analyses, including tumor segmentation and genomic data interpretation. Within DL, Convolutional Neural Networks (CNNs) excel at processing medical images by extracting spatial and contextual features, whereas Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) architectures are suited for analyzing sequential or time-dependent patient information [37].
More recently, transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformers (GPT) have been employed to analyze clinical narratives and biomedical literature using self-attention mechanisms, which allow models to focus on relevant portions of text for improved interpretation. To ensure reliability and interpretability, Explainable AI (XAI) frameworks are increasingly integrated into healthcare systems. These include SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), which quantify the influence of specific input variables on model predictions. Visualization methods, including heatmaps, further enhance interpretability by identifying areas of diagnostic importance in medical imagery [38,39,40].
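SHAP and LIME each require dedicated libraries; as a dependency-light illustration of the same underlying idea, model-agnostic feature attribution, the sketch below uses scikit-learn's permutation importance on synthetic data where only the first three features carry signal.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only the first 3 of 8 features are informative (hypothetical).
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the accuracy drop: features whose
# permutation hurts the model most are judged the most influential.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("features ranked by influence:", ranking)
```

SHAP differs in attributing each individual prediction via Shapley values rather than scoring features globally, but both serve the same clinical purpose: making the basis of a model's output inspectable.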

3.1.1. Emerging Methodological Developments

Recent advances in computational methods have further expanded the clinical applicability of AI in oncology. This and the following subsection outline key methodological innovations that enhance model performance, interpretability, and data privacy within cancer-related research and practice. Beyond conventional architectures such as CNNs, ResNets, and DenseNets, several methodological advances are increasingly shaping biomedical AI. Transfer learning enables pretrained models, originally developed for large-scale image datasets such as ImageNet, to be fine-tuned for specialized medical tasks with limited annotated data, substantially reducing training time and improving performance. Model interpretability frameworks, including Gradient-weighted Class Activation Mapping (Grad-CAM), SHAP, and LIME, enhance transparency by visualizing or quantifying how input features influence model outputs, thus fostering clinical trust. In parallel, uncertainty quantification techniques, such as Bayesian neural networks and Monte Carlo dropout, are being adopted to estimate confidence in model predictions—an essential consideration for diagnostic reliability and clinical risk management. Together, these developments address major translational barriers in applying deep learning to real-world medical practice [41,42].
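The Monte Carlo dropout idea mentioned above can be demonstrated without a deep learning framework: keep dropout active at inference time, repeat the forward pass, and read the spread of the outputs as an uncertainty estimate. The NumPy toy below uses a single linear layer with made-up weights purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "network": weights assumed to have been learned elsewhere.
W = rng.normal(size=(16, 1))
x = rng.normal(size=(1, 16))

def mc_dropout_predict(x, W, p=0.5, n_passes=200):
    """Monte Carlo dropout: sample a fresh dropout mask on every forward pass;
    the mean of the outputs is the prediction, their spread its uncertainty."""
    preds = []
    for _ in range(n_passes):
        mask = rng.random(W.shape[0]) > p      # drop each unit with prob. p
        h = x * mask / (1 - p)                 # inverted-dropout rescaling
        preds.append((h @ W).item())
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(x, W)
print(f"predictive mean {mean:.3f} +/- {std:.3f}")
```

In a clinical workflow, predictions with a wide spread would be flagged for human review rather than acted on automatically.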

3.1.2. Recent Paradigms in Biomedical AI

In addition to these methodological refinements, several paradigm shifts are redefining how AI is deployed across biomedical oncology, enabling more collaborative, multimodal, and generalizable approaches to cancer prediction and management. Federated learning enables collaborative model training across multiple healthcare institutions without sharing sensitive patient data, thereby preserving privacy while expanding data diversity and model robustness. Multimodal fusion transformers integrate heterogeneous data sources, such as imaging, genomics, electronic health records, and clinical text, to provide holistic patient-level insights and improve predictive accuracy. Foundation models (e.g., BioGPT, Med-PaLM, and CLIP-based architectures) represent a transformative approach in which large-scale pretrained models are fine-tuned for domain-specific medical tasks, offering unprecedented generalization and adaptability [43,44]. Collectively, these advances illustrate the rapid evolution of biomedical AI toward scalable, interpretable, and privacy-aware intelligent systems aligned with current (2024–2025) research directions.
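The core mechanics of federated learning reduce to a simple loop: each site trains locally on private data, and only model weights travel to a server that averages them (federated averaging). The NumPy sketch below illustrates this on a synthetic linear-regression task with three hypothetical "hospitals"; real systems add secure aggregation, communication scheduling, and much larger models.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Three "hospitals" hold private synthetic datasets of different sizes.
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

def local_step(w, X, y, lr=0.1, epochs=20):
    """Each site refines the global model on its own data; raw data never leaves."""
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Federated averaging: the server combines site updates weighted by sample count.
w_global = np.zeros(2)
for _ in range(5):
    local_ws = [local_step(w_global, X, y) for X, y in sites]
    n_total = sum(len(y) for _, y in sites)
    w_global = sum(len(y) / n_total * w for w, (_, y) in zip(local_ws, sites))

print("federated estimate:", np.round(w_global, 2))
```

The estimate approaches the data-generating weights even though no site ever shares a patient record, which is precisely the privacy property motivating federated deployments.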
Through these computational techniques, AI contributes significantly to cancer diagnosis, treatment response prediction, and personalized medical care, establishing itself as a cornerstone of modern healthcare innovation.

3.1.3. Application of AI in Nanomedicine for Cancer Healthcare

AI has become a transformative force in the field of nanomedicine, profoundly influencing the ways in which cancer is diagnosed, treated, and monitored. Through the analysis of extensive datasets, AI-driven algorithms enhance the design and development of nanocarriers, predict cancer dynamics, and enable the creation of highly personalized therapeutic strategies. The integration of AI with nanotechnology represents a major step toward precision oncology, offering more targeted and effective treatment modalities.
In the context of nanocarrier design and drug delivery, AI methodologies such as ML and deep learning (DL) play a crucial role in the rational engineering of nanoparticles. These computational approaches can forecast optimal physicochemical properties, including particle size, morphology, surface charge, and functionalization, to achieve maximum therapeutic efficiency with minimal toxicity. Moreover, AI assists in optimizing drug-loading capacity and release kinetics, allowing nanocarriers to be adapted to the specific characteristics of individual tumor microenvironments [45].
Recent interdisciplinary studies illustrate how AI techniques directly enhance nanocarrier design and clinical translation. For example, CNNs have been used to predict the biodistribution and tumor uptake of gold nanoparticles and silica nanocarriers from PET/CT and fluorescence imaging data, facilitating precise dose optimization and minimizing off-target accumulation [46]. Deep learning frameworks have optimized liposomal nanoparticles for targeted chemotherapeutic delivery in breast and ovarian cancer, improving therapeutic indices and reducing systemic toxicity. In addition, AI-assisted radiomics combined with MRI and photoacoustic imaging enables the detection of iron oxide-based nanoparticles, providing non-invasive tracking of drug release and treatment response. Reinforcement learning models have also guided the adaptive control of polymeric and lipid nanocarriers for on-demand drug release, demonstrating early translational feasibility in preclinical oncology. Collectively, these examples show how AI connects nanoscale engineering with clinically actionable theranostic strategies [47].
AI-based predictive modeling also contributes substantially to cancer targeting. By simulating nanoparticle interactions within biological systems, these models can estimate biodistribution, tumor localization, and potential off-target effects with high precision. In silico simulations further support the identification of suitable biomarkers and ligands for nanoparticle surface modification, leading to the development of advanced delivery systems that improve therapeutic success while minimizing systemic side effects [48,49].
Another major application lies in the personalization of cancer therapy. AI algorithms integrate genomic, proteomic, and clinical data to generate patient-specific treatment regimens. By forecasting individual responses to nanomedicine-based therapies, AI enables the selection of the most effective nanoparticle formulations and drug combinations. This personalized approach is particularly valuable in addressing tumor heterogeneity and reducing the likelihood of treatment resistance [50].
The use of AI in real-time monitoring has also advanced cancer management. When coupled with nanoparticle-based contrast agents, AI-enhanced imaging systems provide continuous insights into tumor development and therapeutic response. These algorithms dynamically analyze imaging outputs, allowing clinicians to adjust treatment strategies promptly to improve efficacy. Additionally, the combination of AI with wearable biosensors and nanoscale monitoring devices facilitates continuous patient observation, supporting the early identification of complications and timely intervention [51].
Finally, AI plays a pivotal role in accelerating the drug discovery process. By analyzing vast chemical libraries and predicting interactions between nanoparticles and biological targets, AI models can identify the most promising drug–nanocarrier combinations. This capability streamlines formulation development and significantly reduces both the cost and duration associated with bringing novel therapies to clinical application [52].
The convergence of AI with nanotechnology is driving innovative integrations that further enhance cancer healthcare:
(a)
Theranostic nanoplatforms: AI-powered theranostic platforms combine diagnostic and therapeutic functions within a single nanostructure. These platforms can detect cancer biomarkers, deliver targeted therapy, and monitor treatment responses in real-time, offering a comprehensive personalized management solution [53].
(b)
AI-driven nano-robotics: AI-controlled nano-robots demonstrate promise in precise drug delivery and tumor targeting. These nano-robots autonomously navigate through the bloodstream, identify cancer cells, and release therapeutics in a controlled manner, minimizing damage to healthy tissues [54].
(c)
Multi-omics data integration: AI algorithms integrate genomic, transcriptomic, and proteomic data with nanomedicine approaches to uncover novel biomarkers and predict therapeutic responses. This integration enhances patient stratification and informs the development of personalized nano-therapeutics [55].
(d)
Quantum computing for nanomedicine: Quantum computing, combined with AI, enables rapid simulation of complex biological environments. This enhances nanoparticle modeling and expedites the development of next-generation nanomedicines [56].

3.2. AI in Cardiology

The integration of AI into patient monitoring systems offers numerous advantages in improving healthcare outcomes. Through AI-based algorithms, vital signs such as heart rate, blood pressure, and respiratory rate can be continuously monitored in real time, enabling early detection of abnormalities and timely medical interventions. This proactive approach enhances patient safety and treatment efficiency. Hannun et al. demonstrated that AI algorithms are capable of detecting arrhythmias and ischemic changes from electrocardiogram (ECG) data, facilitating the early diagnosis of conditions such as atrial fibrillation [57]. The advancement of AI-driven monitoring devices has further enabled real-time feedback and automated analysis of physiological parameters across various clinical settings [58]. Recent research has introduced innovative methods to enhance ECG interpretation, including the application of two event-related moving averages in combination with the fractional Fourier transform (FrFT), significantly improving peak detection accuracy and cardiac condition classification [59]. These technologies hold great promise in cardiology, particularly for the continuous detection of critical events such as malignant arrhythmias [60]. Moreover, by rapidly processing vast datasets with high precision, AI systems can reduce the workload of healthcare professionals while improving diagnostic accuracy and overall patient management [61].
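The two event-related moving averages mentioned above can be illustrated on a toy trace. The sketch below is a deliberately simplified rendition on a synthetic signal: the window lengths roughly reflect QRS and beat durations, but the threshold offset is tuned for this toy example, and published pipelines additionally band-pass filter real ECG and may apply the FrFT first.

```python
import numpy as np

fs = 250                       # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)   # 10 s of signal
rng = np.random.default_rng(3)

# Synthetic ECG-like trace: one narrow "R peak" per second on a noisy baseline.
signal = rng.normal(scale=0.02, size=t.size)
true_peaks = np.arange(fs // 2, t.size, fs)
signal[true_peaks] += 1.0

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

# Two event-related moving averages: a short window (~QRS duration) and a
# longer one (~beat duration); regions where the short average exceeds the
# long one plus a small offset are candidate R-peak blocks.
squared = signal ** 2                        # emphasize large deflections
ma_qrs = moving_average(squared, int(0.097 * fs))
ma_beat = moving_average(squared, int(0.611 * fs))
blocks = ma_qrs > ma_beat + 0.2 * squared.mean()   # offset chosen for this toy

# Count one beat per contiguous block of True samples (rising edges).
edges = np.flatnonzero(np.diff(blocks.astype(int)) == 1)
print("beats detected:", len(edges), "of", len(true_peaks))
```

On this clean synthetic trace every beat is recovered; the clinical difficulty lies in noise, ectopic beats, and morphology variation, which is where the AI-based refinements cited above add value.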

3.2.1. AI in Heart Failure (HF)

Most traditional ECG indices based on voltage exhibit relatively low sensitivity, typically ranging from 19% to 25%. The incorporation of AI into ECG analysis has been shown to substantially improve sensitivity for detecting left ventricular hypertrophy (LVH), increasing it from 42% to 69%, though this improvement is accompanied by a modest decrease in specificity from 92–94% to 87% [62]. This trade-off suggests that while AI enhances early disease detection, it also increases the likelihood of false-positive results, which may necessitate additional follow-up evaluations. In clinical practice, especially in real-world settings, careful management of this balance is critical to avoid overburdening healthcare systems while maximizing the benefit of early detection.
Despite these advancements, AI-based ECG analysis raises certain clinical concerns. Enhanced sensitivity allows AI models to identify subtle or early abnormalities that conventional methods may overlook; however, reduced specificity can lead to overdiagnosis and unnecessary interventions. In medical decision-making, particularly in populations such as infants, the trade-off between sensitivity and specificity must be carefully aligned with disease prevalence and the risks associated with false positives and false negatives. The application of AI for predictive modeling has also introduced novel risk factors, such as endothelin-1, which is associated with oxidative stress and cardiac remodeling [63]. Nevertheless, further research, including randomized trials, is required to validate the effectiveness of AI-guided therapeutic strategies and to translate predictive findings into meaningful improvements in patient care.
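The prevalence dependence described above can be made concrete with Bayes' rule. Using the sensitivity (69%) and specificity (87%) figures reported for AI-enhanced LVH detection, the snippet below computes the positive predictive value at several illustrative prevalences (the prevalence values themselves are hypothetical).

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# AI-enhanced LVH detection figures from the text: sens 0.69, spec 0.87.
for prev in (0.01, 0.05, 0.20):
    print(f"prevalence {prev:4.0%} -> PPV {ppv(0.69, 0.87, prev):.2f}")
```

At low prevalence most positive calls are false positives despite the improved sensitivity, which is exactly why screening populations such as infants demand a different operating point than high-risk cohorts.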
Meta-analyses of AI-enhanced ECG detection of LVH corroborate these sensitivity and specificity figures [62]. Several machine learning models have been proposed to advance clinical diagnosis. For instance, one study utilized a CNN trained on 12-lead ECG data to diagnose left ventricular diastolic dysfunction (LVDD) in patients presenting with dyspnea, outperforming NT-proBNP in distinguishing cardiac from pulmonary causes. Another multicenter prospective study employed ML to evaluate LVDD using ECG in comparison to echocardiographic measures of left ventricular relaxation velocities (e′), demonstrating strong correlations, with a sensitivity of 78%, specificity of 77%, negative predictive value of 73%, and positive predictive value of 82% in internal testing, and similar performance in external validation [64].
Beyond ECG, AI has also been applied to other imaging modalities for the assessment of structural heart disease. Techniques such as radiomics, which extract quantitative imaging features through algorithmic analysis, have demonstrated improved diagnostic capabilities. For example, AI-driven radiomic analysis correctly differentiated hypertrophic cardiomyopathy (HCM) from hypertensive heart disease with an accuracy of 85.5%, significantly outperforming traditional methods, which achieve approximately 64% accuracy [65,66].

3.2.2. AI in Coronary Artery Disease

Among the various AI applications in cardiology, systems for Fractional Flow Reserve using CT (FFR-CT) have demonstrated particularly promising diagnostic performance in the assessment of coronary artery disease (CAD). In a comparative study, Lipkin et al. evaluated coronary computed tomography angiography (CCTA) interpreted with AI quantitative CT (AI-QCT) against myocardial perfusion imaging (MPI) for detecting obstructive CAD. AI-QCT achieved a higher area under the curve (AUC) than MPI (0.88 vs. 0.66) for predicting stenosis greater than 50% in 301 patients enrolled in the CREDENCE trial [67]. Similarly, Chiou et al. compared AI-QCT ISCHEMIA with CT-FFR and physician visual interpretation in 442 patients, reporting superior specificity and diagnostic performance for AI-QCT ISCHEMIA relative to both clinician interpretation (specificity 0.62) and CT-FFR (specificity 0.76) [68].
Despite these promising results, several challenges remain. The computational demands of AI algorithms and the need for large, well-annotated datasets are significant limiting factors. Pershina et al. highlighted that the diagnostic accuracy of FFR-CT (AUC = 0.90) is highly dependent on the quality of the input data and available computational resources [69]. Additionally, population homogeneity in many studies introduces bias, as AI-based CT-FFR analyses frequently exclude high-risk patients, limiting generalizability to broader clinical populations [70]. Another important consideration is the trade-off between accuracy and interpretability. While models such as DenseNet201 achieve high accuracy, their “black-box” nature can impede clinical trust and slow adoption. DenseNet121 outperformed thoracic radiologists in certain tasks; however, the lack of transparency in these models remains a barrier to routine clinical integration [71].
Several deep learning architectures, including EfficientNet-B0, DenseNet201, ResNet101, Xception, and MobileNet-v2, have been proposed for automated coronary artery segmentation and classification. Among these, DenseNet201 showed the strongest performance, achieving an accuracy of 0.90, specificity of 0.9833, positive predictive value (PPV) of 0.9556, Cohen’s Kappa of 0.7746, and an AUC of 0.9694, underscoring its superiority in classification tasks [72]. Beyond imaging, AI has also been applied to analyze biochemical markers for cardiovascular risk assessment. For example, Xue et al. employed unsupervised machine learning to stratify ST-segment elevation myocardial infarction (STEMI) patients into distinct phenogroups based on lipid profiles. Statistical analyses, including ANOVA and Cox proportional hazards models, demonstrated significant differences among the phenogroups in lipoprotein(a), high-density lipoprotein cholesterol (HDL-C), and apolipoprotein A1 (ApoA1) patterns, providing predictive insights for clinical outcomes [73].
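The unsupervised phenogrouping approach described for STEMI patients can be sketched with k-means clustering. The example below uses synthetic lipid-profile-like data (columns loosely standing in for Lp(a), HDL-C, and ApoA1; all values and group structure are hypothetical) and recovers two phenogroups without any outcome labels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic lipid profiles (hypothetical units): two seeded phenogroups with
# distinct Lp(a) / HDL-C / ApoA1 patterns.
group_a = rng.normal([60, 35, 110], [10, 5, 10], size=(100, 3))
group_b = rng.normal([15, 55, 150], [5, 5, 10], size=(100, 3))
X = np.vstack([group_a, group_b])

# Standardize so no single analyte dominates, then cluster unsupervised.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

print("cluster sizes:", np.bincount(labels))
```

In the cited study, the clinical payoff came afterwards: comparing outcomes across the discovered phenogroups (e.g., with ANOVA or Cox models) to test whether the unsupervised grouping carries prognostic information.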

3.2.3. CT vs. MRI

As shown in Table 2, AI integration diverges substantially between CT and MRI, reflecting modality-specific characteristics in data structure, computational preprocessing, and downstream clinical interpretability within cardiovascular diagnostics.

3.3. AI in Neuronal Diseases

3.3.1. Evolution of AI in Neurological Diagnostics

Early applications of artificial intelligence in neurology primarily relied on conventional machine learning algorithms that used manually selected features extracted from structured datasets, neuropsychological evaluations, and basic neuroimaging results. Developing these systems often required substantial domain expertise to pinpoint informative predictors, and their performance was frequently constrained by limited scalability and poor generalization to heterogeneous patient cohorts. Nevertheless, these early models were instrumental in proving that automated decision-support systems could be effectively applied to neurological contexts, paving the way for more adaptive and data-driven learning approaches.
The emergence of deep learning marked a significant transformation in this area, enabling models to process unstructured and high-dimensional data directly, such as MRI images, EEG recordings, vocal characteristics, and movement sensor outputs [95,96]. Deep architectures including CNNs and RNNs greatly enhanced the ability to identify patterns and extract complex features, yielding more precise and sophisticated diagnostic predictions without depending on manually designed input variables. This technological leap has made it possible to uncover subtle biomarkers linked to neurological diseases, such as Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis, at earlier stages and with higher diagnostic accuracy.
Furthermore, incorporating multiple data modalities into unified deep learning frameworks offers a more comprehensive perspective on a patient’s neurological status. This multimodal integration supports the ongoing transition from traditional symptom-focused evaluation to precision neurology grounded in data analytics. Collectively, these developments mark an essential advancement toward scalable, AI-driven diagnostic solutions capable of reshaping both individual patient care and large-scale neurological screening initiatives.

3.3.2. AI in Parkinson’s Disease (PD)

Among the numerous domains explored for artificial intelligence applications in Parkinson’s disease (PD), neuroimaging remains one of the most comprehensively investigated [97,98]. The combination of dopamine transporter (DaTscan) imaging with CNNs has achieved exceptional performance in differentiating PD patients from healthy individuals [99]. Recent investigations have reported classification accuracies surpassing 95% through deep learning-based interpretation of DaTscan images—significantly improving upon the results obtained via conventional visual assessment [100,101].
Applications of structural and functional MRI have also yielded encouraging outcomes, both for early diagnosis and for tracking disease progression [102,103]. In particular, graph neural networks (GNNs), deep models designed to process graph-structured data in which brain regions are treated as nodes and their functional interactions as edges, have been successfully applied to resting-state functional connectivity analyses. These approaches have achieved classification accuracies ranging from 88% to 92% when distinguishing PD cohorts from control groups [104]. By capturing intricate topological patterns and network-level relationships within the brain, GNNs provide new insights into the neural connectivity alterations characteristic of PD and other neurological disorders.
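As a concrete illustration of the graph-construction step that precedes any GNN, a resting-state correlation matrix can be thresholded into an adjacency structure in which regions are nodes and strong functional correlations are edges. A hedged sketch with a synthetic 4-region matrix (the threshold and all values are illustrative assumptions, not from the cited studies):

```python
def connectivity_graph(corr, threshold=0.5):
    """Turn a region-by-region correlation matrix into an undirected graph:
    regions are nodes; an edge links two regions whose absolute
    correlation exceeds the threshold. Also returns each node's degree."""
    n = len(corr)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(corr[i][j]) > threshold]
    degree = [sum(1 for e in edges if k in e) for k in range(n)]
    return edges, degree

# Toy 4-region correlation matrix (synthetic values)
corr = [
    [1.0, 0.8, 0.2, 0.6],
    [0.8, 1.0, 0.1, 0.3],
    [0.2, 0.1, 1.0, 0.7],
    [0.6, 0.3, 0.7, 1.0],
]
edges, degree = connectivity_graph(corr)
```

A GNN would then propagate learned features along exactly these edges; the node degrees already hint at which regions are most interconnected.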
Furthermore, diffusion tensor imaging (DTI) analyzed with advanced machine learning frameworks has uncovered subtle microstructural abnormalities in white matter tracts that may emerge even before the onset of clinical manifestations [105,106]. These findings underscore the potential of AI-enhanced neuroimaging to facilitate earlier and more precise detection of PD-related pathophysiological changes.
Voice alterations are recognized as among the earliest non-motor indicators of Parkinson’s disease, often manifesting several years before clinically measurable motor symptoms appear [107,108]. These vocal abnormalities, characterized by reduced volume, monotonous tone, breathiness, and subtle articulation deficits, can be easily missed during standard neurological evaluations. Nonetheless, they offer a promising window for early identification of PD, particularly in cases where conventional diagnostic approaches may not yet reveal overt signs of pathology.
The incorporation of AI into voice analysis has considerably improved the accuracy and reliability of detecting vocal biomarkers linked to PD. Through the extraction of acoustic parameters such as fundamental frequency variability, jitter, shimmer, harmonics-to-noise ratio, and various spectral attributes, AI-based models have achieved diagnostic accuracies ranging between 85% and 93% [109,110]. These findings highlight the potential of voice analysis as a non-invasive and scalable screening approach, particularly suited for remote assessment and early-stage community-level detection efforts.
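Two of the acoustic parameters mentioned above, jitter and shimmer, are simple relative-variability measures over successive glottal cycles. A minimal sketch of the local variants, using synthetic cycle measurements (real pipelines first extract periods and amplitudes from the raw audio, a step omitted here):

```python
def jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    glottal cycle lengths, relative to the mean cycle length."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Synthetic cycle-to-cycle measurements (periods in ms, amplitudes arbitrary)
periods = [8.0, 8.2, 7.9, 8.1, 8.0]
amps = [1.00, 0.95, 1.05, 0.98, 1.02]
```

Elevated jitter and shimmer reflect the cycle-to-cycle instability of phonation that PD classifiers exploit, typically alongside harmonics-to-noise ratio and spectral features.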
Building upon these foundations, recent work has leveraged deep learning frameworks to move beyond traditional signal-processing pipelines. RNNs, and notably LSTM architectures, have proven highly effective in capturing temporal dependencies and sequential variations embedded within speech patterns, thereby modeling the dynamic progression of PD-related vocal changes [111]. More recently, transformer-based architectures, initially developed for natural language processing, have demonstrated substantial promise in learning long-range contextual relationships in speech sequences. By enabling models to train directly on raw or minimally preprocessed audio signals, these approaches minimize the reliance on hand-engineered features and facilitate end-to-end disease classification.
Ultimately, AI-driven voice analysis provides a cost-efficient, non-invasive, and highly scalable diagnostic pathway, offering opportunities for continuous disease monitoring, real-time clinical feedback, and broad integration into telehealth and digital health ecosystems [112].
Gait impairments are among the most distinctive and diagnostically significant motor manifestations of Parkinson’s disease, typically presenting as short, shuffling steps, diminished arm swing, postural instability, and freezing of gait episodes. These changes in locomotion serve as objective and quantifiable indicators for both disease onset and progression. In recent years, AI has been increasingly utilized to analyze these movement abnormalities through data derived from wearable motion sensors, such as accelerometers and gyroscopes. When positioned on various body parts, including the feet, waist, or limbs, these sensors capture high-resolution motion data during walking tasks.
By training machine learning algorithms on such data, researchers have achieved highly accurate classification of PD patients, uncovering intricate gait-related patterns that often elude conventional clinical observation. In some investigations, the sensitivity and specificity of AI-based gait analysis for early PD detection have surpassed 90%, even in scenarios where traditional clinical evaluations provide ambiguous results [113,114]. This level of precision has established gait analysis as a powerful diagnostic and monitoring tool, facilitating both early identification and longitudinal tracking of motor dysfunction in PD.
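To make the sensor pipeline concrete, the features fed to such classifiers can be as simple as step count, cadence, and step-time variability derived from an accelerometer magnitude trace by peak picking. A minimal sketch under stated assumptions (the signal, sampling rate, and threshold are invented; published systems use far more sophisticated preprocessing):

```python
def gait_features(signal, fs, threshold):
    """Count steps as local maxima above a threshold, then derive cadence
    (steps per minute) and step-time variability from the peak positions."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
    step_times = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    duration_min = len(signal) / fs / 60
    cadence = len(peaks) / duration_min
    if not step_times:                     # fewer than two steps detected
        return len(peaks), cadence, 0.0
    mean_st = sum(step_times) / len(step_times)
    variability = (sum((t - mean_st) ** 2 for t in step_times)
                   / len(step_times)) ** 0.5
    return len(peaks), cadence, variability

# Synthetic accelerometer magnitude trace (arbitrary units, fs = 10 Hz)
sig = [0.1, 0.2, 1.2, 0.3, 0.1, 0.2, 1.1, 0.2, 0.1, 0.3, 1.3, 0.2]
steps, cadence, var = gait_features(sig, fs=10, threshold=1.0)
```

In PD, shortened stride and increased step-time variability shift exactly these features, which is what trained classifiers pick up on.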
Beyond wearable technologies, the emergence of AI-driven computer vision techniques has expanded gait analysis into non-contact and highly scalable modalities. Modern markerless motion capture systems can now assess walking behavior using ordinary video recordings obtained from smartphones or surveillance cameras. By extracting joint trajectories and body kinematics from these recordings, deep learning models can detect gait irregularities that signal PD-related motor decline. This approach provides a low-cost and accessible alternative to specialized hardware, enabling movement assessment in diverse environments, including homes, clinics, and public spaces (Table 3) [115].
Furthermore, the integration of such AI systems within telemedicine platforms allows for continuous, remote evaluation of motor symptoms, an especially valuable feature for patients in underserved or rural areas with limited access to movement disorder specialists [116]. As AI methodologies continue to advance, they are poised to redefine clinical and research approaches to gait assessment in Parkinson’s disease, enhancing both precision diagnostics and personalized disease management (Table 4).
Table 3. Assessment of the digital biomarkers and smartphone applications.
Aspect | Description | Ref.
Role of Smartphone Technology in PD Assessment | The widespread availability and computing power of smartphones have enabled the development of accessible, non-invasive, and scalable digital biomarker platforms for PD monitoring. These systems use built-in sensors and software to capture behavioral and physiological data. | [117,118]
Motor Assessment through Sensor-Based Applications | Finger-tapping apps measure motor speed and variability, serving as indicators of bradykinesia. | [119,120]
Speech-Based Biomarkers | Voice recording applications analyze speech fluency and tremor-related vocal disruptions, key symptoms of PD. | [119,120]
Remote Monitoring and Telehealth | Smartphone-based biomarkers allow continuous and passive monitoring of patients in real-world environments. This enhances personalized care, supports timely interventions, and improves patient engagement. | [121,122]
Application in Low-Resource Settings | These technologies provide cost-effective screening and early detection solutions for rural or resource-limited areas, helping reduce healthcare disparities. | [121,122]
Advanced Computational Capabilities | Modern smartphones perform real-time signal processing using edge computing and machine learning to analyze tremor patterns, gait variability, and speech signals directly on the device, ensuring privacy and faster feedback. | [123]
Federated Learning Approaches | Enable continuous improvement of diagnostic algorithms without sharing sensitive data, enhancing personalization and accuracy across diverse populations. | [123]
Table 4. AI in Parkinson’s Disease (PD) Treatment Optimization and Personalized Medicine.
Aspect | Description | Ref.
AI in Treatment Optimization | AI and machine learning are transforming PD treatment by analyzing complex patient response patterns to dopaminergic therapy. These models integrate longitudinal data such as symptom fluctuations, medication adherence, and side-effect profiles to predict individual treatment efficacy more accurately than traditional methods. | [124,125]
Personalized Pharmacological Regimens | Predictive modeling allows clinicians to tailor drug dosages and schedules to individual patients, minimizing adverse drug reactions and enhancing therapeutic outcomes. | [124,125]
AI-Driven Decision Support Systems | Integrated into electronic health records, these systems assist clinicians with real-time dosage adjustments and dynamic care models. | [126,127]
Deep Reinforcement Learning in Neuromodulation | AI algorithms fine-tune deep brain stimulation (DBS) by simulating different stimulation scenarios and learning from patient feedback to optimize therapeutic outcomes and minimize side effects. | [128,129]
Improved Clinical Efficiency and Quality of Life | Intelligent DBS optimization reduces clinician workload, resource use, and patient side effects, improving quality of life and clinical efficiency. | [128,129]
Predictive Modelling for Disease Progression | Advanced ML models combine clinical, imaging, genetic, and digital biomarker data to predict long-term outcomes such as motor complications, cognitive decline, and quality of life deterioration. | [130]
Risk Stratification and Patient Selection | AI tools identify patients most likely to benefit from interventions like DBS or clinical trials, supporting precision medicine and resource optimization. | [130]
Despite the substantial progress made in applying AI to PD diagnostics, several critical challenges continue to hinder its clinical translation. Among the most prominent issues is data heterogeneity. Current research efforts frequently employ diverse methodologies, imaging techniques, sensor configurations, and clinical assessment tools, leading to inconsistencies across datasets. Such variability complicates data harmonization and limits model generalizability, as algorithms trained on one dataset often underperform when tested on another. Additionally, many AI models are developed using small-scale or demographically narrow cohorts, which increases the risk of algorithmic bias and reduces performance when applied to broader and more diverse populations [131,132]. Insufficient representation across age groups, ethnic backgrounds, and disease phenotypes raises concerns about fairness, reliability, and clinical applicability in real-world diagnostic contexts [133,134].
The widespread adoption of smartphones and wearable sensors for continuous PD monitoring also introduces security and privacy risks that must be addressed. Studies have shown that motion sensors embedded in smartphones can be manipulated for keystroke inference attacks, potentially compromising user privacy during data entry. Similarly, wireless sensor networks used in gait or tremor tracking are vulnerable to physical layer fingerprinting attacks, enabling malicious actors to bypass authentication and gain access to sensitive health information. These vulnerabilities are particularly concerning in long-term monitoring systems, where data related to patients’ motor function is transmitted frequently. Therefore, effective implementation frameworks must include strong encryption mechanisms, secure communication protocols, and privacy-preserving analytic methods to ensure that patient confidentiality is protected without compromising the clinical utility of AI-based tools.
Beyond technical and ethical considerations, regulatory and operational barriers further complicate the path to clinical integration. The approval pathways for AI-driven medical technologies remain in flux, as organizations such as the FDA and EMA continue adapting traditional regulatory structures to accommodate adaptive and continuously learning systems. This evolving landscape often results in delays in authorization and deployment, restricting the timely application of innovative solutions in patient care [135]. Moreover, incorporating AI systems into existing clinical workflows requires significant organizational adaptation. Healthcare professionals need to understand, interpret, and trust AI-generated insights, and the user interfaces of these systems must be intuitive and supportive rather than disruptive to established decision-making processes. Achieving interoperability with electronic health records (EHRs) and ensuring that AI outputs align with clinical pathways are equally crucial for effective implementation and user adoption [136,137].
Overall, these multidimensional challenges highlight the necessity for collaborative, interdisciplinary efforts involving clinicians, data scientists, ethicists, and regulatory authorities to fully realize the potential of AI-enhanced diagnostics and management in Parkinson’s disease.

3.3.3. AI in Alzheimer’s Disease (AD)

After image preprocessing and segmentation, scans are ready for computational interpretation. AI methods have streamlined Alzheimer’s research by improving diagnostic workflows and classification, though machine-learning results vary in robustness and reproducibility. Deep learning, based on multilayer neural networks capable of extracting complex patterns, has been especially useful for detecting MRI atrophy, PET biomarkers, and combined PET–MRI/fMRI signatures. Below, we review the most common AI architectures used to distinguish Alzheimer’s disease, mild cognitive impairment, and healthy controls; the principal imaging modalities include structural/functional MRI, PET tracers, and PET/MRI fusion (Table 5).

3.3.4. Deep Learning in AD Diagnosis and Classification

Among deep learning techniques, the most frequently cited models across reviewed studies include CNNs, RNNs, autoencoders, and generative adversarial networks (GANs). The majority of research efforts have focused on AD diagnosis and classification, with CNN-based architectures emerging as the dominant approach. CNNs process structured input data—such as medical imaging—through a hierarchy of interconnected layers (input, hidden, and output). These networks apply convolutional filters, small sliding matrices designed to capture spatial features such as edges, textures, and contours, which are subsequently used for feature extraction and classification [152]. Numerous investigations have demonstrated the strong performance of CNNs in multimodal imaging-based AD detection, achieving high classification accuracy. The Visual Geometry Group Network (VGGNet) represents one of the earliest and most widely implemented deep CNN architectures in AD research [153,154,155]. Its design minimizes error rates by employing fewer kernel features while increasing the depth of the network [156]. In a recent study, Kim et al. [153] developed a highly accurate hybrid model that integrated VGGNet with a one-dimensional CNN designed to capture brain contour information, specifically focusing on cortical and subcortical boundaries and shape configurations. The incorporation of VGGNet improved model precision significantly, achieving 0.986 accuracy, and outperformed standard versions such as VGG-16, VGG-19, and AlexNet. Optimal performance was observed with an input size of 256 × 256 [153]. Similarly, Mujahid et al. proposed an ensemble framework combining VGG-16 and EfficientNet-B2, which enhanced early AD detection accuracy [154].
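The convolutional filtering described above can be shown directly: a small kernel slides over the image, and each output value is the sum of elementwise products between the kernel and the underlying image patch. A minimal sketch with a toy image containing a vertical step edge and a hand-crafted edge-detection kernel (real CNNs learn their kernels during training rather than using fixed ones):

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding): slide the kernel over the image
    and sum elementwise products, as a CNN's convolutional layer does."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + r][j + c] * kernel[r][c]
                 for r in range(kh) for c in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# Toy image with a vertical step edge between columns 1 and 2
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Hand-crafted vertical-edge kernel
kernel = [[-1, 1],
          [-1, 1]]
feature_map = conv2d(image, kernel)  # responds only where the edge lies
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets CNN architectures like VGGNet turn raw MRI slices into discriminative features.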
Another prominent CNN architecture, ResNet, introduces residual connections that enable efficient information propagation through multiple layers, reducing computational overhead while preventing gradient degradation [157]. Various ResNet variants have been employed in AD classification and early disease identification [158,159,160,161,162,163,164]. For example, Odusami et al. utilized ResNet18 to classify functional MRI (fMRI) data, reporting 99.99% accuracy in differentiating early mild cognitive impairment (MCI) from AD [158].
The DenseNet architecture enhances feature reuse by connecting each layer to every other layer through dense feed-forward connections, ensuring efficient information flow and minimizing redundancy [156]. DenseNet has been effectively implemented for automated feature extraction and AD diagnosis [165]. In a comparative evaluation, Carcagnì et al. analyzed several CNN architectures (including DenseNet, ResNet, and EfficientNet) and found that deeper versions of DenseNet and ResNet outperformed shallower models such as VGG, yielding a 7% improvement in MRI-based AD detection accuracy [166]. Similarly, Sharma et al. [167] introduced a hybrid AI model that combined transfer learning with DenseNet-121 and DenseNet-201, integrated with machine learning classifiers, achieving 91.75% accuracy and 96.5% specificity.
Beyond these major CNN families, several other architectures have contributed to AD diagnosis and classification [168,169,170,171,172,173,174,175]. For instance, the Dementia Network (DemNet) achieved 95.23% accuracy and an AUC of 0.97 for AD staging using non-MRI data [176], while AlzheimerNet demonstrated superior classification precision over traditional approaches. The LeNet architecture, one of the earliest CNN designs, utilizes MaxPooling layers to reduce feature map dimensionality by discarding low-importance data [177]. A modified LeNet model proposed by Hazarika et al. attained a classification accuracy of 96.64% in AD identification [178].
In contrast to CNNs, RNNs are optimized for capturing temporal dependencies within sequential data, allowing effective modeling of time-dependent variations [177]. Mahim et al. integrated gated RNNs with a vision transformer (ViT) architecture, leveraging the ability of gated networks to utilize contextual information from previously processed data. This hybrid RNN–ViT model achieved 99.69% accuracy for binary classification of MRI scans in AD detection [178,179].
Autoencoders, by comparison, function as unsupervised learning mechanisms that compress input data into latent representations and subsequently reconstruct the original input while preserving essential features [180]. Al-Otaibi et al. introduced a dual-attention convolutional autoencoder, demonstrating 99.02% real-time accuracy in AD recognition based on MRI data [181]. Another study employing fMRI developed a specialized autoencoder to effectively differentiate normal aging from AD progression, also reporting excellent classification results [182].
Finally, GANs have emerged as powerful tools in medical image synthesis and domain adaptation, comprising two competing neural networks, one generating synthetic images and the other evaluating them [183]. In 2023, a Loop-Based GAN for Brain Network (BNLoop-GAN) was introduced to model the distribution of brain connectivity networks using multimodal imaging data. The approach successfully discriminated between healthy controls and AD patients with 81.8% sensitivity and 84.9% specificity, outperforming other models across resting-state fMRI and structural MRI modalities [184,185]. Similarly, Chui et al. [186] employed a GAN framework integrated with CNNs and transfer learning to augment underrepresented data, improving the accuracy and robustness of AD classification across multiple datasets.

3.3.5. Prediction/Prognosis

AD research has expanded beyond diagnosis, increasingly focusing on early detection and prognosis. Recent models integrate MRI, PET, molecular, and clinical data to identify biomarkers that accurately track disease progression. Longitudinal analyses have enabled the monitoring of volumetric and metabolic brain changes, combining AI pipelines with statistical techniques such as linear mixed-effects models [187]. The fusion of imaging data with cognitive measures provides a more comprehensive understanding of AD development.
Predictive modeling has gained momentum due to its potential for early intervention. Aqeel et al. applied an RNN with LSTM to predict neuropsychological and MRI biomarkers, distinguishing AD from MCI [188]. Khalid et al. achieved 99.7% accuracy and an AUC of 0.99 using a feed-forward network combining GoogLeNet and DenseNet-121 [189]. Similarly, deep learning tools applied to MRI datasets have shown over 80% accuracy in dementia staging [190].
Several models specifically target MCI-to-AD conversion. Peng et al. used PET-based radiomics and clinical scales (CDR, ADAS) with multivariate logistic regression, achieving 87% sensitivity and 78% specificity [191]. Lin et al. utilized an extreme learning machine (ELM) across five imaging modalities, demonstrating high predictive accuracy [192]. Fakoya et al. developed a CNN model combining MRI and PET slices, preserving modality-specific features while maintaining 94.0% accuracy [193].
AI tools have also explored structural and functional biomarkers. Pan et al. proposed an Ensemble 3DCNN to map MRI-based structural alterations [194], while Kim et al. 2021 applied autoencoders to predict disease progression across AD stages [195]. Brain age prediction models, such as the BrainAGE framework [196], further contributed to longitudinal AD studies.
Key pathological markers like amyloid-beta and tau remain central to AI-based predictions. Wang et al. employed tau-PET images with support vector regression to estimate brain age [197]. Chattopadhyay et al. applied deep learning to T1-weighted MRI for Aβ plaque prediction, showing strong potential in MCI prognosis [198]. Moreover, cognitive performance prediction using imaging data has gained attention. Habuza et al. developed a CNN regression model that differentiated normal and MCI subjects with an AUC of 99.57% [199], while Liang et al. used a multi-task learning framework to predict cognitive decline based on structural associations [200].

3.4. AI in Liver Diseases

Artificial intelligence is reshaping multiple facets of hepatology, from tissue analysis to surgery and transplant decision-making. In histopathology, AI tools are increasingly applied to digitized slides, a shift still limited by the incomplete adoption of whole-slide imaging (WSI) and non-standardized acquisition formats, but both retrospective and prospective multicenter studies remain feasible by rescanning and harmonizing stained paraffin blocks [201]. Digital methods help reduce well-known interobserver variability in pathology and radiology (previously demonstrated for liver cancer), supporting more reproducible diagnostic and prognostic workflows [202]. As a result, histopathology has become one of the fastest-growing AI application areas in hepatology after imaging, with algorithms now assisting clinicians in identifying disease-specific features and suggesting likely diagnoses and stratifications [203,204].
Robotic platforms and AI are also extending the capabilities of liver surgery and living-donor hepatectomy. Minimally invasive and robotic approaches have been shown non-inferior to open techniques and can improve donor safety, shorten hospitalization, and speed functional recovery for recipients and donors alike [205,206]. Robotic systems shorten the technical learning curve versus purely laparoscopic approaches and enable advanced teaching through dual consoles and virtual simulation environments, while integration of preoperative 2D/3D imaging, intraoperative ultrasound, and emerging AI guidance promises progressively greater autonomy and precision in the operating room [207].
In transplantation, AI offers an opportunity to move allocation and wait-list management toward precision medicine. Machine-learning models have outperformed conventional scoring in several exploratory studies by better capturing patient trajectories and complex predictors of wait-list mortality [208,209]. For example, ML-derived algorithms (e.g., OPOM) have shown improved short-term mortality prediction over MELD in retrospective registry analyses and simulation studies, suggesting potential reductions in wait-list deaths and changes in allocation equity. Similarly, hybrid ML–statistical approaches have been used to predict HCC dropout, producing competitive discrimination metrics in validation cohorts [210]. Nevertheless, these encouraging findings remain exploratory: current guidelines stress the need for careful external, prospective validation and simulation testing before clinical deployment to identify biases and ensure safe, equitable adoption.
Together, these developments illustrate a broad AI footprint across hepatology, from automated slide interpretation and imaging augmentation to intraoperative support and smarter transplant prioritization, while underscoring persistent challenges: data standardization, model interpretability, population diversity in training sets, and rigorous prospective validation.

4. Qualitative Appraisal of Study Quality

Overall, the included studies exhibited moderate methodological quality. Most clearly stated their objectives and used valid datasets; however, external validation of AI models was inconsistently reported. Approximately half of the studies provided detailed performance metrics such as accuracy, sensitivity, and AUC values, while others reported results narratively without standardized measures. Reporting transparency and data availability were often limited, which may increase risk of bias and reduce reproducibility. Only a small number of studies explicitly described reproducibility strategies or open-source code sharing. Despite these limitations, the majority demonstrated methodological soundness sufficient to support the synthesized conclusions.

4.1. Advantages, Limitations, and Mitigation Strategies of Artificial Intelligence in Biomedicine

The integration of AI technologies into biomedical research and clinical care offers substantial advantages but also introduces new challenges. While the domain-specific sections of this review discuss these aspects in detail, the following synthesis provides a consolidated perspective useful for clinicians, regulators, and researchers. The relationships among the four biomedical domains and their convergence toward shared AI workflows are summarized in Figure 3.
AI enables large-scale data analysis, high-dimensional pattern recognition, and automated image interpretation far beyond human capability. It accelerates diagnostic workflows, supports personalized medicine through predictive modeling, and enhances treatment planning via multimodal data integration. In research contexts, AI allows hypothesis generation from complex datasets, facilitating drug discovery and biomarker identification that were previously infeasible.
Despite its potential, AI implementation faces several challenges. Data bias and lack of representativeness may lead to skewed predictions, particularly across demographic or institutional boundaries. Model interpretability remains limited, complicating clinical validation and trust. Overfitting and false positives can arise when models are trained on small or unbalanced datasets. Furthermore, AI integration may impose technical burdens on existing clinical workflows, requiring new infrastructure, continuous updates, and clinician training.
To enhance reliability and clinical adoption, rigorous prospective multicentre validation is essential. Implementing explainability methods (e.g., SHAP, LIME, and Grad-CAM) can improve transparency. Data harmonization and standardized acquisition protocols reduce bias and improve generalizability. Continuous model monitoring and adaptive retraining ensure long-term performance stability, while regulatory frameworks and ethical oversight remain key to patient safety and accountability.
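Full SHAP or LIME analyses require their respective libraries, but the underlying idea, attributing a model's behavior to individual input features, can be illustrated with permutation importance, a simpler model-agnostic technique: shuffle one feature and measure how much accuracy drops. A hedged sketch with an invented toy model and synthetic data (nothing here comes from a cited study):

```python
import random

def permutation_importance(model, X, y, feature, n_repeats=20, seed=0):
    """Mean drop in accuracy after shuffling one feature column:
    the larger the drop, the more the model relies on that feature."""
    rng = random.Random(seed)
    acc = lambda data: sum(model(row) == t for row, t in zip(data, y)) / len(y)
    baseline = acc(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - acc(X_perm))
    return sum(drops) / n_repeats

# Toy "model" that only looks at feature 0; feature 1 is ignored
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]]
y = [1, 1, 0, 0]
```

Here shuffling feature 1 costs nothing (importance 0), correctly flagging it as unused, the same kind of signal clinicians need to judge whether a model attends to plausible inputs.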

4.2. Regulatory and Governance Context of AI in Healthcare (2024–2025)

The global regulatory landscape for Artificial Intelligence (AI) in healthcare has evolved rapidly, particularly concerning adaptive and continuously learning models. Current frameworks emphasize safety, transparency, data governance, and ongoing performance monitoring, ensuring that AI-based medical tools maintain reliability throughout their life cycle.
United States (FDA). The U.S. Food and Drug Administration (FDA) finalized its Predetermined Change Control Plan (PCCP) guidance for AI-enabled device software functions in late 2024. The PCCP specifies how developers must define anticipated post-authorization modifications to algorithms, verification methods, and validation criteria, an essential step for regulating adaptive models [211]. This builds upon the Good Machine Learning Practice (GMLP) principles jointly published by the FDA, Health Canada, and the UK MHRA, which outline standards for data quality, separation of training and testing datasets, and documentation of model versioning [212].
European Union. The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, complementing the existing Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) [213]. AI systems used in healthcare are generally classified as “high-risk,” requiring strict conformity assessments, risk management procedures, and post-market monitoring. The AI Act introduces explicit obligations for data governance, transparency, human oversight, and harmonized technical documentation, aiming to ensure both safety and ethical accountability across the EU.
United Kingdom (MHRA). The Software and AI as a Medical Device (SaMD/AIaMD) Change Programme continues to modernize the UKCA regulatory framework, aligning it with international GMLP principles. The programme defines requirements for evidence generation, validation, and monitoring of AI systems following deployment, emphasizing clinical safety and explainability [214].
Global alignment. The International Medical Device Regulators Forum (IMDRF) provides the foundational definitions and risk-categorization framework for SaMD, widely adopted by the FDA, EU, and MHRA [215]. Recent ISO standards—including ISO/IEC 23894 (AI risk management) [216] and ISO/IEC 42001 (AI management systems) [217]—can be integrated into existing quality management systems (ISO 13485, ISO 14971, IEC 62304) [218,219,220]. Additionally, the World Health Organization (WHO) has published ethical and governance guidelines for large multimodal AI models, focusing on data provenance, auditability, and human oversight [221].
To support safe and reproducible deployment of AI in healthcare, several practical steps are recommended:
  • Proper classification as medical device software (SaMD) following IMDRF risk categories to determine the required level of clinical evidence [215].
  • Implementation of a Predetermined Change Control Plan (PCCP) early in development—clearly defining what model parameters may be updated, how re-training will be validated, and criteria for model acceptance [211].
  • Adherence to Good Machine Learning Practice (GMLP): ensure data representativeness, traceability of versions, bias assessment, and robust documentation of design decisions [212].
  • Integration of AI-specific risk management standards (ISO/IEC 23894 and ISO 14971) to identify and mitigate hazards unique to adaptive algorithms [216].
  • Transparency and explainability appropriate to the model’s risk class, including reporting of uncertainty quantification, population coverage, and human-in-the-loop supervision [213,218].
  • Prospective multicentre validation and real-world performance monitoring to verify model generalizability and detect drift after deployment [211,214].
  • Data governance and security: implement policies for data quality, lineage, anonymization, and federated learning where direct data sharing is restricted [213,218].
  • Interoperability and human-factors design to ensure seamless integration with EHR or PACS systems, minimize workflow burden, and maintain clinician oversight [214,216].
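The real-world performance monitoring and drift detection mentioned above can be illustrated with the Population Stability Index (PSI), one common (but not regulator-mandated) drift metric. The sketch below is an assumption-laden toy: the Gaussian samples stand in for a model input feature at development time versus deployment, and the 0.2 threshold is a conventional heuristic, not a value taken from any cited guidance.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10, eps=1e-6):
    """PSI between a reference (training-era) sample and a live sample.

    Bins are fixed from the reference distribution's quantiles; PSI > 0.2
    is a conventional heuristic trigger for investigating drift.
    """
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full real line
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=5000)   # model-development data
stable = rng.normal(0.0, 1.0, size=5000)      # same population at deployment
shifted = rng.normal(0.8, 1.2, size=5000)     # e.g., acquisition-protocol change

psi_stable = population_stability_index(reference, stable)
psi_shifted = population_stability_index(reference, shifted)
print(f"stable: {psi_stable:.3f}, shifted: {psi_shifted:.3f}")
```

A monitoring pipeline would compute such a statistic per input feature (or on the model's output scores) on a rolling window, escalating to the revalidation procedures predefined in the PCCP when the threshold is crossed.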
Collectively, these frameworks establish a pathway for translating AI research into safe and ethically compliant clinical tools. By aligning with evolving regulatory guidance, developers and researchers can enhance transparency, trust, and patient safety while facilitating the responsible adoption of adaptive AI systems in medicine.
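One way to picture how a Predetermined Change Control Plan constrains retraining is as a predeclared acceptance gate: a candidate model replaces the deployed one only if criteria fixed in advance all hold. The sketch below is hypothetical; the metric names, thresholds, and subgroup structure are illustrative assumptions, not taken from the FDA guidance or any reviewed study.

```python
# Hypothetical PCCP-style acceptance gate: a retrained model is deployed only
# if predeclared criteria hold. Thresholds and metric names are illustrative.

def accept_update(old_metrics, new_metrics, min_auc=0.85, max_drop=0.02):
    """Return (accepted, reasons). All criteria must be declared in advance."""
    reasons = []
    if new_metrics["auc"] < min_auc:
        reasons.append(f"AUC {new_metrics['auc']:.3f} below floor {min_auc}")
    if new_metrics["auc"] < old_metrics["auc"] - max_drop:
        reasons.append("AUC regressed beyond allowed margin")
    # Subgroup check: no predeclared subgroup may fall below the floor,
    # reflecting the bias-assessment expectations of GMLP.
    for group, auc in new_metrics["subgroup_auc"].items():
        if auc < min_auc:
            reasons.append(f"subgroup '{group}' AUC {auc:.3f} below floor")
    return (len(reasons) == 0, reasons)

old = {"auc": 0.91}
candidate = {"auc": 0.93, "subgroup_auc": {"site_A": 0.92, "site_B": 0.84}}
ok, why = accept_update(old, candidate)
print(ok, why)  # rejected: site_B falls below the predeclared floor
```

Note that the candidate is rejected despite a better overall AUC, because one site-level subgroup degrades, exactly the kind of outcome a predefined, documented gate is meant to catch before deployment.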

5. Conclusions

Artificial intelligence is rapidly maturing from an experimental toolset into a practical enabler across multiple biomedical domains. Deep learning architectures (convolutional networks for images, recurrent and transformer models for sequences and text, and graph-based approaches for networked data) have shown a strong ability to extract subtle, clinically meaningful signals from imaging, omics, electrophysiology, and behavioral streams. In oncology and nanomedicine, these methods accelerate design and optimization workflows, predict biodistribution and response, and enable richer monitoring through nanoparticle-enhanced imaging and wearable sensors. In cardiology, AI improves automated image quantification, calcium scoring, ECG-based screening, and noninvasive functional assessment; in neurology, multimodal models and speech analysis support earlier detection and longitudinal tracking of Parkinson’s and Alzheimer’s disease. Hepatology benefits from automated histopathology, intraoperative assistance, and allocation-model improvements that move toward more personalized transplant prioritization.

Despite these advances, implementation barriers remain substantial. Model performance is often contingent on dataset quality and representativeness; heterogeneous acquisition protocols, limited external validation, and cohort-specific biases reduce generalizability. Many high-performing networks retain “black-box” characteristics that complicate clinical trust and regulatory approval, while computational demands and integration challenges limit deployment in routine workflows. Ethical concerns, including privacy, fairness, and equitable access, require proactive governance. To translate promise into practice, emphasis should shift to rigorous, prospectively designed multisite validation, harmonized data standards, explainable and uncertainty-aware modeling, and workflows that position AI as an assistive partner to clinicians rather than a replacement.
When these technical, clinical, and social requirements are met, AI is positioned to deliver more precise diagnostics, individualized therapies, and scalable monitoring systems that improve outcomes while preserving safety and equity.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pharmaceutics17121564/s1, Table S1: PRISMA_2020_checklist.

Author Contributions

Conceptualization, D.-M.T., P.I.D. and S.C.; Methodology, D.-M.T., K.R. and G.A.S.; Software, P.I.D. and K.R.; Validation, R.-M.V. and G.A.S.; Formal analysis, D.-M.T. and C.E.S.; Investigation, K.R. and D.-M.T.; Resources, S.C. and R.-M.V.; Data curation, D.-M.T. and G.A.S.; Writing—original draft preparation, D.-M.T. and K.R.; Writing—review and editing, S.C., R.-M.V. and C.E.S.; Visualization, K.R. and G.A.S.; Supervision, R.-M.V. and S.C.; Project administration, S.C.; Funding acquisition, G.A.S. All authors have read and agreed to the published version of the manuscript.

Funding

The Article Processing Charges were funded by the University of Medicine and Pharmacy of Craiova, Romania.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hirani, R.; Noruzi, K.; Khuram, H.; Hussaini, A.S.; Aifuwa, E.I.; Ely, K.E.; Lewis, J.M.; Gabr, A.E.; Smiley, A.; Tiwari, R.K.; et al. Artificial Intelligence and Healthcare: A Journey through History, Present Innovations, and Future Possibilities. Life 2024, 14, 557. [Google Scholar] [CrossRef] [PubMed]
  2. Avanzo, M.; Stancanello, J.; Pirrone, G.; Drigo, A.; Retico, A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers 2024, 16, 3702. [Google Scholar] [CrossRef] [PubMed]
  3. Gou, F.; Liu, J.; Xiao, C.; Wu, J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics 2024, 14, 1472. [Google Scholar] [CrossRef] [PubMed]
  4. Chakroborty, S.; Nath, N.; Jain, A.; Yadav, T.; Yadav, A.S.; Pandey, F.P.; Sharma, A.; Agrawal, Y. Recent developments and trends in silver nanoparticles for biomedical applications. AIP Conf. Proc. 2024, 3178, 090002. [Google Scholar] [CrossRef]
  5. Liu, Y.; Shi, J. Antioxidative nanomaterials and biomedical applications. Nano Today 2019, 27, 146–177. [Google Scholar] [CrossRef]
  6. Onciul, R.; Tataru, C.-I.; Dumitru, A.V.; Crivoi, C.; Serban, M.; Covache-Busuioc, R.-A.; Radoi, M.P.; Toader, C. Artificial Intelligence and Neuroscience: Transformative Synergies in Brain Research and Clinical Applications. J. Clin. Med. 2025, 14, 550. [Google Scholar] [CrossRef]
  7. Zhu, X.; Li, Y.; Gu, N. Application of Artificial Intelligence in the Exploration and Optimization of Biomedical Nanomaterials. Nano Biomed. Eng. 2023, 15, 342–353. [Google Scholar] [CrossRef]
  8. Giansanti, D. Revolutionizing Medical Imaging: The Transformative Role of Artificial Intelligence in Diagnostics and Treatment. Diagnostics 2025, 15, 1557. [Google Scholar] [CrossRef]
  9. Naik, G.G.; Jagtap, V.A. Two heads are better than one: Unravelling the potential impact of artificial intelligence in nanotechnology. Nano TransMed. 2024, 3, 100041. [Google Scholar] [CrossRef]
  10. Introduction to nanotechnology. In Nanomaterials in Clinical Therapeutics: Synthesis and Applications; Wiley Online Library: Hoboken, NJ, USA, 2022.
  11. Naik, G.G.; Minocha, T.; Verma, A.; Yadav, S.K.; Saha, S.; Agrawal, A.K.; Singh, S.; Sahu, A.N. Asparagus racemosus root-derived carbon nanodots as a nano-probe for biomedical applications. J. Mater. Sci. 2022, 57, 20380–20401. [Google Scholar] [CrossRef]
  12. Naik, G.G.; Madavi, R.; Minocha, T.; Mohapatra, D.; Pratap, R.; Shreya, S.; Patel, P.K.; Yadav, S.K.; Parmar, A.; Patra, A.; et al. In vitro cytotoxic potential of cow dung and expired tomato sauces-derived carbon nanodots against A-375 human melanoma cell line. Arab. J. Chem. 2024, 17, 105576. [Google Scholar] [CrossRef]
  13. Fahim, Y.A.; Hasani, I.W.; Kabba, S.; Ragab, W.M. Artificial Intelligence in Healthcare and Medicine: Clinical Applications, Therapeutic Advances, and Future Perspectives. Eur. J. Med. Res. 2025, 30, 848. [Google Scholar] [CrossRef] [PubMed]
  14. Parvin, N.; Joo, S.W.; Jung, J.H.; Mandal, T.K. Multimodal AI in Biomedicine: Pioneering the Future of Biomaterials, Diagnostics, and Personalized Healthcare. Nanomaterials 2025, 15, 895. [Google Scholar] [CrossRef] [PubMed]
  15. Jandoubi, B.; Akhloufi, M.A. Multimodal Artificial Intelligence in Medical Diagnostics. Information 2025, 16, 591. [Google Scholar] [CrossRef]
  16. Naik, G.G.; Mohapatra, D.; Shreya, S.; Madavi, R.; Shambhavi; Patel, P.K.; Sahu, A.N. Nip in the bud: Can carbon/quantum dots be a prospective nano-theranostics against COVID-19? Bull. Mater. Sci. 2023, 47, 6. [Google Scholar] [CrossRef]
  17. Caballero, D.; Sánchez-Margallo, J.A.; Pérez-Salazar, M.J.; Sánchez-Margallo, F.M. Applications of Artificial Intelligence in Minimally Invasive Surgery Training: A Scoping Review. Surgeries 2025, 6, 7. [Google Scholar] [CrossRef]
  18. Pouwels, S.; Mwangi, A.; Koutentakis, M.; Mendoza, M.; Rathod, S.; Parajuli, S.; Singhal, S.; Lakshani, U.; Yang, W.; Au, K.; et al. The Role of Artificial Intelligence and Information Technology in Enhancing and Optimizing Stapling Efficiency in Metabolic and Bariatric Surgery: A Comprehensive Narrative Review. Gastrointest. Disord. 2025, 7, 63. [Google Scholar] [CrossRef]
  19. Acharya, B.; Behera, A.; Behera, S.; Moharana, S. Recent advances in nanotechnology-based drug delivery systems for the diagnosis and treatment of reproductive disorders. ACS Appl. Bio Mater. 2024, 7, 1336–1361. [Google Scholar] [CrossRef]
  20. Galieri, G.; Orlando, V.; Altieri, R.; Barbarisi, M.; Olivi, A.; Sabatino, G.; La Rocca, G. Current Trends and Future Directions in Lumbar Spine Surgery: A Review of Emerging Techniques and Evolving Management Paradigms. J. Clin. Med. 2025, 14, 3390. [Google Scholar] [CrossRef]
  21. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020.
  22. Konstantinova, J.; Jiang, A.; Althoefer, K.; Dasgupta, P.; Nanayakkara, T. Implementation of tactile sensing for palpation in robot-assisted minimally invasive surgery: A review. IEEE Sens. J. 2014, 14, 2490–2501. [Google Scholar] [CrossRef]
  23. Hu, M.; Ge, X.; Chen, X.; Mao, W.; Qian, X.; Yuan, W.-E. Micro/Nanorobot: A promising targeted drug delivery system. Pharmaceutics 2020, 12, 665. [Google Scholar] [CrossRef] [PubMed]
  24. Goglia, M.; Pavone, M.; D’Andrea, V.; De Simone, V.; Gallo, G. Minimally Invasive Rectal Surgery: Current Status and Future Perspectives in the Era of Digital Surgery. J. Clin. Med. 2025, 14, 1234. [Google Scholar] [CrossRef] [PubMed]
  25. Sanchez-Martinez, S.; Camara, O.; Piella, G.; Cikes, M.; González-Ballester, M.Á.; Miron, M.; Vellido, A.; Gómez, E.; Fraser, A.G.; Bijnens, B. Machine learning for clinical decision-making: Challenges and opportunities in cardiovascular imaging. Front. Cardiovasc. Med. 2022, 8, 765693. [Google Scholar] [CrossRef] [PubMed]
  26. Bush, B.; Nifong, L.W.; Alwair, H.; Chitwood, W.R. Robotic mitral valve surgery-current status and future directions. Ann. Cardiothorac. Surg. 2013, 2, 814–817. [Google Scholar]
  27. Nobbenhuis, M.A.E.; Gul, N.; Barton-Smith, P.; O’Sullivan, O.; Moss, E.; Ind, T.E.J.; Royal College of Obstetricians and Gynaecologists. Robotic surgery in gynaecology: Scientific Impact Paper No. 71 (July 2022). BJOG 2023, 130, e1–e8. [Google Scholar] [CrossRef]
  28. Garisto, J.; Ramakrishnan, V.M.; Bertolo, R.; Kaouk, J. The re-discovery of alternative access to the pelvic fossa: The role of the single-port robotic platform. In Single-Port Robotic Surgery in Urology; Elsevier: Amsterdam, The Netherlands, 2022; pp. 35–59. [Google Scholar]
  29. You, Y.; Lai, X.; Pan, Y.; Zheng, H.; Vera, J.; Liu, S.; Deng, S.; Zhang, L. Artificial intelligence in cancer target identification and drug discovery. Signal Transduct. Target. Ther. 2022, 7, 156. [Google Scholar] [CrossRef]
  30. Vijayakumar, M.; Shetty, R. Robotic surgery in oncology. Indian. J. Surg. Oncol. 2020, 11, 549–551. [Google Scholar] [CrossRef]
  31. Yang, Y.; Song, L.; Huang, J.; Cheng, X.; Luo, Q. A uniportal right upper lobectomy by three-arm robotic-assisted thoracoscopic surgery using the da Vinci (Xi) Surgical System in the treatment of early-stage lung cancer. Transl. Lung Cancer Res. 2021, 10, 1571–1575. [Google Scholar] [CrossRef]
  32. Liu, G.; Zhang, S.; Zhang, Y.; Fu, X.; Liu, X. Robotic surgery in rectal cancer: Potential, challenges, and opportunities. Curr. Treat. Options Oncol. 2022, 23, 961–979. [Google Scholar] [CrossRef]
  33. Mehta, C.H.; Narayan, R.; Nayak, U.Y. Computational modeling for formulation design. Drug Discov. Today 2019, 24, 781–788. [Google Scholar] [CrossRef]
  34. Cao, X.; Zheng, Y.-Z.; Liao, H.-Y.; Guo, X.; Li, Y.; Wang, Z.; Zhang, L.; Wang, X.-D.; Wang, X. A clinical nomogram and heat map for assessing survival in patients with stage I nonsmall cell lung cancer after complete resection. Ther. Adv. Med. Oncol. 2020, 12, 1758835920970063. [Google Scholar] [CrossRef] [PubMed]
  35. Yaqoob, A.; Musheer Aziz, R.; Verma, N.K. Applications and techniques of machine learning in cancer classification: A systematic review. Hum.-Centric Intell. Syst. 2023, 3, 588–615. [Google Scholar] [CrossRef]
  36. Jiang, X.; Hu, Z.; Wang, S.; Zhang, Y. Deep learning for medical image-based cancer diagnosis. Cancers 2023, 15, 3608. [Google Scholar] [CrossRef] [PubMed]
  37. Hassan, S.U.; Abdulkadir, S.J.; Zahid, M.S.M.; Al-Selwi, S.M. Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review. Comput. Biol. Med. 2025, 185, 109569. [Google Scholar] [CrossRef]
  38. Rana, M.; Bhushan, M. Machine learning and deep learning approach for medical image analysis: Diagnosis to detection. Multimed. Tools Appl. 2022, 82, 26731–26769. [Google Scholar] [CrossRef]
  39. Yamazaki, K.; Vo-Ho, V.-K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef]
  40. Madan, S.; Lentzen, M.; Brandt, J.; Rueckert, D.; Hofmann-Apitius, M.; Fröhlich, H. Transformer models in biomedicine. BMC Med. Inf. Decis. Mak. 2024, 24, 214. [Google Scholar] [CrossRef]
  41. Dumachi, A.I.; Buiu, C. Applications of Machine Learning in Cancer Imaging: A Review of Diagnostic Methods for Six Major Cancer Types. Electronics 2024, 13, 4697. [Google Scholar] [CrossRef]
  42. Huhulea, E.N.; Huang, L.; Eng, S.; Sumawi, B.; Huang, A.; Aifuwa, E.; Hirani, R.; Tiwari, R.K.; Etienne, M. Artificial Intelligence Advancements in Oncology: A Review of Current Trends and Future Directions. Biomedicines 2025, 13, 951. [Google Scholar] [CrossRef]
  43. Cheng, C.H.; Shi, S.S. Artificial Intelligence in Cancer: Applications, Challenges, and Future Perspectives. Mol. Cancer 2025, 24, 274. [Google Scholar] [CrossRef]
  44. Selvaraj, C.; Cho, W.C.; Langeswaran, K.; Alothaim, A.S.; Vijayakumar, R.; Jayaprakashvel, M.; Desai, D. Artificial Intelligence in Cancer Care: Revolutionizing Diagnosis, Treatment, and Precision Medicine amid Emerging Challenges and Future Opportunities. 3 Biotech 2025, 15, 355. [Google Scholar] [CrossRef]
  45. Alshawwa, S.Z.; Kassem, A.A.; Farid, R.M.; Mostafa, S.K.; Labib, G.S. Nanocarrier drug delivery systems: Characterization, limitations, future perspectives and implementation of artificial intelligence. Pharmaceutics 2022, 14, 883. [Google Scholar] [CrossRef] [PubMed]
  46. Samathoti, P.; Kumarachari, R.K.; Bukke, S.P.N.; Rajasekhar, E.S.K.; Jaiswal, A.A.; Eftekhari, Z. The Role of Nanomedicine and Artificial Intelligence in Cancer Health Care: Individual Applications and Emerging Integrations—A Narrative Review. Discov. Oncol. 2025, 16, 697. [Google Scholar] [CrossRef] [PubMed]
  47. Cai, Z.-M.; Li, Z.-Z.; Zhong, N.-N.; Cao, L.-M.; Xiao, Y.; Li, J.-Q.; Huo, F.-Y.; Liu, B.; Xu, C.; Zhao, Y.; et al. Revolutionizing Lymph Node Metastasis Imaging: The Role of Drug Delivery Systems and Future Perspectives. J. Nanobiotechnol. 2024, 22, 135. [Google Scholar] [CrossRef] [PubMed]
  48. Chow, J.C.L. Nanomaterial-based molecular imaging in cancer: Advances in simulation and AI integration. Biomolecules 2025, 15, 444. [Google Scholar] [CrossRef]
  49. Kulkarni, S.; Lin, B.; Radhakrishnan, R. Machine learning enabled multiscale model for nanoparticle margination and physiology based pharmacokinetics. Comput. Chem. Eng. 2025, 198, 109081. [Google Scholar] [CrossRef]
  50. Shirzad, M.; Salahvarzi, A.; Razzaq, S.; Javid-Naderi, M.J.; Rahdar, A.; Fathi-Karkan, S.; Ghadami, A.; Kharaba, Z.; Ferreira, L.F.R. Revolutionizing prostate cancer therapy: Artificial intelligence—Based nanocarriers for precision diagnosis and treatment. Crit. Rev. Oncol. 2025, 208, 104653. [Google Scholar] [CrossRef]
  51. Bhange, M.; Telange, D. Convergence of nanotechnology and artificial intelligence in the fight against liver cancer: A comprehensive review. Discov. Onc. 2025, 16, 77. [Google Scholar] [CrossRef]
  52. Vora, L.K.; Gholap, A.D.; Jetha, K.; Thakur, R.R.S.; Solanki, H.K.; Chavda, V.P. Artificial intelligence in pharmaceutical technology and drug delivery design. Pharmaceutics 2023, 15, 1916. [Google Scholar] [CrossRef]
  53. Govindan, B.; Sabri, M.A.; Hai, A.; Banat, F.; Haija, M.A. A review of advanced multifunctional magnetic nanostructures for cancer diagnosis and therapy integrated into an artificial intelligence approach. Pharmaceutics 2023, 15, 868. [Google Scholar] [CrossRef]
  54. Xu, M.; Qin, Z.; Chen, Z.; Wang, S.; Peng, L.; Li, X.; Yuan, Z. Nanorobots mediated drug delivery for brain cancer active targeting and controllable therapeutics. Nanoscale Res. Lett. 2024, 19, 183. [Google Scholar] [CrossRef] [PubMed]
  55. Wasilewski, T.; Kamysz, W.; Gębicki, J. AI-assisted detection of biomarkers by sensors and biosensors for early diagnosis and monitoring. Biosensors 2024, 14, 356. [Google Scholar] [CrossRef] [PubMed]
  56. Das, K.P. Nanoparticles and convergence of artificial intelligence for targeted drug delivery for cancer therapy: Current progress and challenges. Front. Med. Technol. 2023, 4, 1067144. [Google Scholar] [CrossRef] [PubMed]
  57. Hannun, A.Y.; Rajpurkar, P.; Haghpanahi, M.; Tison, G.H.; Bourn, C.; Turakhia, M.P.; Ng, A.Y. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 2019, 25, 65–69. [Google Scholar] [CrossRef] [PubMed]
  58. Staszak, K.; Tylkowski, B.; Staszak, M. From data to diagnosis: How machine learning is changing heart health monitoring. Int. J. Environ. Res. Public Health 2023, 20, 4605. [Google Scholar] [CrossRef] [PubMed]
  59. Aziz, S.; Ahmed, S.; Alouini, M.S. ECG-based machine-learning algorithms for heartbeat classification. Sci. Rep. 2021, 11, 18738. [Google Scholar] [CrossRef]
  60. Karatzia, L.; Aung, N.; Aksentijevic, D. Artificial intelligence in cardiology: Hope for the future and power for the present. Front. Cardiovasc. Med. 2022, 9, 945726. [Google Scholar] [CrossRef]
  61. Khalifa, M.; Albadawy, M. AI in diagnostic imaging: Revolutionising accuracy and efficiency. Comput. Methods Programs Biomed. Updat. 2024, 5, 100146. [Google Scholar] [CrossRef]
  62. Siranart, N.; Deepan, N.; Techasatian, W.; Phutinart, S.; Sowalertrat, W.; Kaewkanha, P.; Pajareya, P.; Tokavanich, N.; Prasitlumkum, N.; Chokesuwattanaskul, R. Diagnostic accuracy of artificial intelligence in detecting left ventricular hypertrophy by electrocardiograph: A systematic review and meta-analysis. Sci. Rep. 2024, 14, 15882. [Google Scholar] [CrossRef]
  63. Yu, Y.; Gupta, A.; Wu, C.; Masoudi, F.A.; Du, X.; Zhang, J.; Krumholz, H.M.; Li, J. Characteristics, management, and outcomes of patients hospitalized for heart failure in China: The China PEACE retrospective heart failure study. J. Am. Heart Assoc. 2019, 8, e012884. [Google Scholar] [CrossRef]
  64. Kagiyama, N.; Piccirilli, M.; Yanamala, N.; Shrestha, S.; Farjo, P.D.; Casaclang-Verzosa, G.; Tarhuni, W.M.; Nezarat, N.; Budoff, M.J.; Narula, J.; et al. Machine learning assessment of left ventricular diastolic function based on electrocardiographic features. J. Am. Coll. Cardiol. 2020, 76, 930–941. [Google Scholar] [CrossRef] [PubMed]
  65. Jiang, B.; Guo, N.; Ge, Y.; Zhang, L.; Oudkerk, M.; Xie, X. Development and application of artificial intelligence in cardiac imaging. Br. J. Radiol. 2020, 93, 20190812. [Google Scholar] [CrossRef] [PubMed]
  66. Neisius, U.; El-Rewaidy, H.; Nakamori, S.; Rodriguez, J.; Manning, W.J.; Nezafat, R. Radiomic analysis of myocardial native T(1) imaging discriminates between hypertensive heart disease and hypertrophic cardiomyopathy. JACC Cardiovasc. Imaging 2019, 12, 1946–1954. [Google Scholar] [CrossRef] [PubMed]
  67. Lipkin, I.; Telluri, A.; Kim, Y.; Sidahmed, A.; Krepp, J.M.; Choi, B.G.; Jonas, R.; Marques, H.; Chang, H.-J.; Choi, J.H.; et al. Coronary CTA with AI-QCT interpretation: Comparison with myocardial perfusion imaging for detection of obstructive stenosis using invasive angiography as reference standard. AJR Am. J. Roentgenol. 2022, 219, 407–419. [Google Scholar] [CrossRef]
  68. Chiou, A.; Hermel, M.; Sidhu, R.; Hu, E.; van Rosendael, A.; Bagsic, S.; Udoh, E.; Kosturakis, R.; Aziz, M.; Ruiz, C.R.; et al. Artificial intelligence coronary computed tomography, coronary computed tomography angiography using fractional flow reserve, and physician visual interpretation in the per-vessel prediction of abnormal invasive adenosine fractional flow reserve. Eur. Heart J. Imaging Methods Pract. 2024, 2, qyae035. [Google Scholar] [CrossRef]
  69. Lu, M.T.; Ferencik, M.; Roberts, R.S.; Lee, K.L.; Ivanov, A.; Adami, E.; Mark, D.B.; Jaffer, F.A.; Leipsic, J.A.; Douglas, P.S.; et al. Noninvasive FFR derived from coronary CT angiography: Management and outcomes in the PROMISE trial. JACC Cardiovasc. Imaging 2017, 10, 1350–1358. [Google Scholar] [CrossRef]
  70. Martin, S.S.; Mastrodicasa, D.; van Assen, M.; De Cecco, C.N.; Bayer, R.R.; Tesche, C.; Varga-Szemes, A.; Fischer, A.M.; Jacobs, B.E.; Sahbaee, P.; et al. Value of machine learning-based coronary CT fractional flow reserve applied to triple-rule-out CT angiography in acute chest pain. Radiol. Cardiothorac. Imaging 2020, 2, e190137. [Google Scholar] [CrossRef]
  71. Sufian, A.; Hamzi, W.; Sharifi, T.; Zaman, S.; Alsadder, L.; Lee, E.; Hakim, A.; Hamzi, B. AI-driven thoracic X-ray diagnostics: Transformative transfer learning for clinical validation in pulmonary radiography. J. Pers. Med. 2024, 14, 856. [Google Scholar] [CrossRef]
  72. Kaba, Ş.; Haci, H.; Isin, A.; Ilhan, A.; Conkbayir, C. The application of deep learning for the segmentation and classification of coronary arteries. Diagnostics 2023, 13, 2274. [Google Scholar] [CrossRef]
  73. Xue, Y.; Shen, J.; Hong, W.; Zhou, W.; Xiang, Z.; Zhu, Y.; Huang, C.; Luo, S. Risk stratification of ST-segment elevation myocardial infarction (STEMI) patients using machine learning based on lipid profiles. Lipids Health Dis. 2021, 20, 48. [Google Scholar] [CrossRef]
  74. Oikonomou, E.K.; Holste, G.; Yuan, N.; Coppi, A.; McNamara, R.L.; Haynes, N.A.; Vora, A.N.; Velazquez, E.J.; Li, F.; Menon, V.; et al. A multimodal video-based AI biomarker for aortic stenosis development and progression. JAMA Cardiol. 2024, 9, 534–544. [Google Scholar] [CrossRef]
  75. Tseng, A.S.; Shelly-Cohen, M.; Attia, I.Z.; Noseworthy, P.A.; Friedman, P.A.; Oh, J.K.; Lopez-Jimenez, F. Spectrum bias in algorithms derived by artificial intelligence: A case study in detecting aortic stenosis using electrocardiograms. Eur. Heart J. Digit. Health 2021, 2, 561–567. [Google Scholar] [CrossRef] [PubMed]
  76. Narula, S.; Shameer, K.; Salem Omar, A.M.; Dudley, J.T.; Sengupta, P.P. Machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography. J. Am. Coll. Cardiol. 2016, 68, 2287–2295. [Google Scholar] [CrossRef] [PubMed]
  77. Agatston, A.S.; Janowitz, W.R.; Hildner, F.J.; Zusmer, N.R.; Viamonte, M., Jr.; Detrano, R. Quantification of coronary artery calcium using ultrafast computed tomography. J. Am. Coll. Cardiol. 1990, 15, 827–832. [Google Scholar] [CrossRef]
  78. Wolterink, J.M.; Leiner, T.; de Vos, B.D.; van Hamersvelt, R.W.; Viergever, M.A.; Išgum, I. Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks. Med. Image Anal. 2016, 34, 123–136. [Google Scholar] [CrossRef] [PubMed]
  79. Martin, S.S.; van Assen, M.; Rapaka, S.; Hudson, H.T.; Fischer, A.M.; Varga-Szemes, A.; Sahbaee, P.; Schwemmer, C.; Gulsun, M.A.; Cimen, S.; et al. Evaluation of a deep learning-based automated CT coronary artery calcium scoring algorithm. JACC Cardiovasc. Imaging 2020, 13, 524–526. [Google Scholar] [CrossRef]
  80. Cano-Espinosa, C.; González, G.; Washko, G.R.; Cazorla, M.; Estépar, R.S. Automated Agatston score computation in non-ECG gated CT scans using deep learning. Proc. SPIE Int. Soc. Opt. Eng. 2018, 10574, 105742K. [Google Scholar] [CrossRef]
  81. van Hamersvelt, R.W.; Zreik, M.; Voskuil, M.; Viergever, M.A.; Išgum, I.; Leiner, T. Deep learning analysis of left ventricular myocardium in CT angiographic intermediate-degree coronary stenosis improves the diagnostic accuracy for identification of functionally significant stenosis. Eur. Radiol. 2019, 29, 2350–2359. [Google Scholar] [CrossRef]
  82. Biasiolli, L.; Hann, E.; Lukaschuk, E.; Carapella, V.; Paiva, J.M.; Aung, N.; Rayner, J.J.; Werys, K.; Fung, K.; Puchta, H.; et al. Automated localization and quality control of the aorta in cine CMR can significantly accelerate processing of the UK Biobank population data. PLoS ONE 2019, 14, e0212272. [Google Scholar] [CrossRef]
  83. Simulation and Synthesis in Medical Imaging; Springer: New York, NY, USA, 2018.
  84. Olawade, D.B.; Aderinto, N.; Olatunji, G.; Kokori, E.; David-Olawade, A.C.; Hadi, M. Advancements and applications of Artificial Intelligence in cardiology: Current trends and future prospects. J. Med. Surg. Public Health 2024, 3, 100109. [Google Scholar] [CrossRef]
  85. Makimoto, H.; Kohro, T. Adopting artificial intelligence in cardiovascular medicine: A scoping review. Hypertens. Res. 2024, 47, 685–699. [Google Scholar] [CrossRef] [PubMed]
  86. Tarroni, G.; Oktay, O.; Bai, W.; Schuh, A.; Suzuki, H.; Passerat-Palmbach, J.; de Marvao, A.; O’Regan, D.P.; Cook, S.; Glocker, B.; et al. Learning-based quality control for cardiac MR images. IEEE Trans. Med. Imaging 2019, 38, 1127–1138. [Google Scholar] [CrossRef] [PubMed]
  87. Zhang, L.; Gooya, A.; Pereanez, M.; Dong, B.; Piechnik, S.K.; Neubauer, S.; Petersen, S.E.; Frangi, A.F. Automatic assessment of full left ventricular coverage in cardiac cine magnetic resonance imaging with fisher-discriminative 3-D CNN. IEEE Trans. Biomed. Eng. 2018, 66, 1975–1986. [Google Scholar] [CrossRef] [PubMed]
  88. Xue, H.; Tseng, E.; Knott, K.D.; Kotecha, T.; Brown, L.; Plein, S.; Fontana, M.; Moon, J.C.; Kellman, P. Automated detection of left ventricle in arterial input function images for inline perfusion mapping using deep learning: A study of 15,000 patients. Magn. Reson. Med. 2020, 84, 2788–2800. [Google Scholar] [CrossRef]
  89. Tan, L.K.; McLaughlin, R.A.; Lim, E.; Abdul Aziz, Y.F.; Liew, Y.M. Fully automated segmentation of the left ventricle in cine cardiac MRI using neural network regression. J. Magn. Reson. Imaging 2018, 48, 140–152. [Google Scholar] [CrossRef]
  90. Du, X.; Zhang, W.; Zhang, H.; Chen, J.; Zhang, Y.; Warrington, J.C.; Brahm, G.; Li, S. Deep regression segmentation for cardiac bi-ventricle MR images. IEEE Access 2018, 6, 3828–3838. [Google Scholar] [CrossRef]
  91. Bernard, O.; Lalande, A.; Zotti, C.; Cervenansky, F.; Yang, X.; Heng, P.-A.; Cetin, I.; Lekadir, K.; Camara, O.; Ballester, M.A.G.; et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Trans. Med. Imaging 2018, 37, 2514–2525. [Google Scholar] [CrossRef]
  92. Fahmy, A.S.; Rausch, J.; Neisius, U.; Chan, R.H.; Maron, M.S.; Appelbaum, E.; Menze, B.; Nezafat, R. Automated cardiac MR scar quantification in hypertrophic cardiomyopathy using deep convolutional neural networks. JACC Cardiovasc. Imaging 2018, 11, 1917–1918. [Google Scholar] [CrossRef]
  93. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images are more than pictures, they are data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef]
  94. Baessler, B.; Luecke, C.; Lurz, J.; Klingel, K.; Das, A.; von Roeder, M.; de Waha-Thiele, S.; Besler, C.; Rommel, K.-P.; Maintz, D.; et al. Cardiac MRI and texture analysis of myocardial T1 and T2 maps in myocarditis with acute versus chronic symptoms of heart failure. Radiology 2019, 292, 608–617. [Google Scholar] [CrossRef]
  95. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef]
  96. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2018, 19, 1236–1246. [Google Scholar] [CrossRef] [PubMed]
  97. Prashanth, R.; Dutta Roy, S.; Mandal, P.K.; Ghosh, S. High-accuracy detection of early Parkinson’s disease through multimodal features and machine learning. Int. J. Med. Inform. 2016, 90, 13–21. [Google Scholar] [CrossRef] [PubMed]
  98. Amoroso, N.; La Rocca, M.; Monaco, A.; Bellotti, R.; Tangaro, S. Complex networks reveal early MRI markers of Parkinson’s disease. Med. Image Anal. 2018, 48, 12–24. [Google Scholar] [CrossRef] [PubMed]
  99. Choi, H.; Ha, S.; Im, H.J.; Paek, S.H.; Lee, D.S. Refining diagnosis of Parkinson’s disease with deep learning-based interpretation of dopamine transporter imaging. NeuroImage 2017, 16, 586–594. [Google Scholar] [CrossRef]
  100. Prashanth, R.; Roy, S.D.; Mandal, P.K.; Ghosh, S. Automatic classification and prediction models for early Parkinson’s disease diagnosis from SPECT imaging. Expert Syst. Appl. 2014, 41, 3333–3342. [Google Scholar] [CrossRef]
  101. Rana, B.; Juneja, A.; Saxena, M.; Gudwani, S.; Kumaran, S.S.; Behari, M.; Agrawal, R.K. Graph-theory-based spectral feature selection for computer-aided diagnosis of Parkinson’s disease using T1-weighted MRI. Expert Syst. Appl. 2015, 25, 245–255. [Google Scholar] [CrossRef]
  102. Poewe, W.; Seppi, K.; Tanner, C.M.; Halliday, G.M.; Brundin, P.; Volkmann, J.; Schrag, A.E.; Lang, A.E. Parkinson disease. Nat. Rev. Dis. Primers 2017, 3, 17013. [Google Scholar] [CrossRef]
  103. Burciu, R.G.; Vaillancourt, D.E. Imaging of motor cortex physiology in Parkinson’s disease. Mov. Disord. 2018, 33, 1688–1699. [Google Scholar] [CrossRef]
  104. Cao, R.; Wang, X.; Gao, Y.; Li, T.; Zhang, H.; Hussain, W.; Xie, Y.; Wang, J.; Wang, B.; Xiang, J. Abnormal anatomical rich-club organization and structural-functional coupling in mild cognitive impairment and Alzheimer’s disease. Front. Neurol. 2020, 11, 53. [Google Scholar] [CrossRef]
  105. Duncan, G.W.; Firbank, M.J.; Yarnall, A.J.; Khoo, T.K.; Brooks, D.J.; Barker, R.A.; Burn, D.J.; O’Brien, J.T. Gray and white matter imaging: A biomarker for cognitive impairment in early Parkinson’s disease? Mov. Disord. 2016, 31, 103–110. [Google Scholar] [CrossRef] [PubMed]
  106. Schwarz, S.T.; Afzal, M.; Morgan, P.S.; Bajaj, N.; Gowland, P.A.; Auer, D.P. The ‘swallow tail’ appearance of the healthy nigrosome—A new accurate test of Parkinson’s disease: A case-control and cohort study. Lancet Neurol. 2014, 13, 461–470. [Google Scholar] [CrossRef]
  107. Rusz, J.; Cmejla, R.; Ruzickova, H.; Ruzicka, E. Quantitative acoustic measurements for characterization of speech and voice disorders in early untreated Parkinson’s disease. J. Acoust. Soc. Am. 2011, 129, 350–367. [Google Scholar] [CrossRef] [PubMed]
  108. Harel, B.; Cannizzaro, M.; Snyder, P.J. Variability in fundamental frequency during speech in prodromal and incipient Parkinson’s disease: A longitudinal case study. Brain Cogn. 2004, 56, 24–29. [Google Scholar] [CrossRef]
109. Tsanas, A.; Little, M.A.; McSharry, P.E.; Spielman, J.; Ramig, L.O. Novel speech signal processing algorithms for high-accuracy classification of Parkinson’s disease. IEEE Trans. Biomed. Eng. 2012, 59, 1264–1271. [Google Scholar] [CrossRef]
  110. Sakar, C.O.; Serbes, G.; Gunduz, A.; Tunc, H.C.; Nizam, H.; Sakar, B.E.; Tutuncu, M.; Aydin, T.; Isenkul, M.E.; Apaydin, H. A comparative analysis of speech signal processing algorithms for Parkinson’s disease classification and the use of the tunable Q-factor wavelet transform. Appl. Soft Comput. 2019, 74, 255–263. [Google Scholar] [CrossRef]
111. Vaswani, A.; Shazeer, N.; Parmar, N. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  112. Moro-Velázquez, L.; Gómez-García, J.A.; Godino-Llorente, J.I.; Villalba, J.; Orozco-Arroyave, J.R.; Dehak, N. Analysis of speaker recognition methodologies and the influence of kinetic changes to automatically detect Parkinson’s disease. Appl. Soft Comput. 2017, 62, 649–666. [Google Scholar] [CrossRef]
  113. Espay, A.J.; Bonato, P.; Nahab, F.B.; Maetzler, W.; Dean, J.M.; Klucken, J.; Eskofier, B.M.; Merola, A.; Horak, F.; Lang, A.E.; et al. Technology in Parkinson’s disease: Challenges and opportunities. Mov. Disord. 2016, 31, 1272–1282. [Google Scholar] [CrossRef]
  114. Del Din, S.; Godfrey, A.; Mazza, C.; Lord, S.; Rochester, L. Free-living monitoring of Parkinson’s disease: Lessons from the field. Mov. Disord. 2016, 31, 1293–1313. [Google Scholar] [CrossRef]
115. Pereira, C.R.; Pereira, D.R.; Silva, F.A.; Masieiro, J.P.; Weber, S.A.; Hook, C.; Papa, J.P. A new computer vision-based approach to aid the diagnosis of Parkinson’s disease. Comput. Methods Programs Biomed. 2016, 136, 79–88. [Google Scholar] [CrossRef] [PubMed]
  116. Galna, B.; Lord, S.; Burn, D.J.; Rochester, L. Progression of gait dysfunction in incident Parkinson’s disease: Impact of medication and phenotype. Mov. Disord. 2015, 30, 359–367. [Google Scholar] [CrossRef] [PubMed]
  117. Bot, B.M.; Suver, C.; Neto, E.C.; Kellen, M.; Klein, A.; Bare, C.; Doerr, M.; Pratap, A.; Wilbanks, J.; Dorsey, E.R.; et al. The mPower study, Parkinson’s disease mobile data collected using ResearchKit. Sci. Data 2016, 3, 160011. [Google Scholar] [CrossRef] [PubMed]
  118. Zhan, A.; Mohan, S.; Tarolli, C.; Schneider, R.B.; Adams, J.L.; Sharma, S.; Elson, M.J.; Spear, K.L.; Glidden, A.M.; Little, M.A.; et al. Using smartphones and machine learning to quantify Parkinson disease severity: The mobile Parkinson disease score. JAMA Neurol. 2018, 75, 876–880. [Google Scholar] [CrossRef]
  119. Arora, S.; Venkataraman, V.; Zhan, A.; Donohue, S.; Biglan, K.; Dorsey, E.; Little, M. Detecting and monitoring the symptoms of Parkinson’s disease using smartphones: A pilot study. Parkinsonism Relat. Disord. 2015, 21, 650–653. [Google Scholar] [CrossRef]
120. Stamatakis, J.; Ambroise, J.; Crémers, J.; Sharei, H.; Delvaux, V.; Macq, B.; Garraux, G. Finger tapping clinometric score prediction in Parkinson’s disease using low-cost accelerometers. Comput. Intell. Neurosci. 2013, 2013, 717853. [Google Scholar] [CrossRef]
  121. Prince, J.; Andreotti, F.; De Vos, M. Multi-source ensemble learning for the remote prediction of Parkinson’s disease in the presence of source-wise missing data. IEEE Trans. Biomed. Eng. 2019, 66, 1402–1411. [Google Scholar] [CrossRef]
  122. Rusz, J.; Hlavnicka, J.; Cmejla, R.; Ruzicka, E. Automatic evaluation of speech rhythm instability and acceleration in dysarthrias associated with basal ganglia dysfunction. Front. Bioeng. Biotechnol. 2015, 3, 104. [Google Scholar] [CrossRef]
  123. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  124. Olanow, C.W.; Rascol, O.; Hauser, R.; Feigin, P.D.; Jankovic, J.; Lang, A.; Langston, W.; Melamed, E.; Poewe, W.; Stocchi, F.; et al. A double-blind, delayed-start trial of rasagiline in Parkinson’s disease. N. Engl. J. Med. 2009, 361, 1268–1278. [Google Scholar] [CrossRef]
  125. Verschuur, C.V.; Suwijn, S.R.; Boel, J.A.; Post, B.; Bloem, B.R.; van Hilten, J.J.; van Laar, T.; Tissingh, G.; Munts, A.G.; Deuschl, G.; et al. Randomized delayed-start trial of levodopa in Parkinson’s disease. N. Engl. J. Med. 2019, 380, 315–324. [Google Scholar] [CrossRef]
  126. Pahwa, R.; Lyons, K.E.; Wilkinson, S.B.; Simpson, R.K.; Ondo, W.G.; Tarsy, D.; Norregaard, T.; Hubble, J.P.; Smith, D.A.; Hauser, R.A.; et al. Long-term evaluation of deep brain stimulation of the thalamus. J. Neurosurg. 2006, 104, 506–512. [Google Scholar] [CrossRef] [PubMed]
  127. Weaver, F.M.; Follett, K.; Stern, M.; Hur, K.; Harris, C.; Marks, W.J., Jr.; Rothlind, J.; Sagher, O.; Reda, D.; Moy, C.S.; et al. Bilateral deep brain stimulation vs best medical therapy for patients with advanced Parkinson disease: A randomized controlled trial. JAMA 2009, 301, 63–73. [Google Scholar] [CrossRef] [PubMed]
  128. Katzman, J.L.; Shaham, U.; Cloninger, A.; Bates, J.; Jiang, T.; Kluger, Y. DeepSurv: Personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Med. Res. Methodol. 2018, 18, 24. [Google Scholar] [CrossRef] [PubMed]
  129. Rosa, M.; Arlotti, M.; Ardolino, G.; Cogiamanian, F.; Marceglia, S.; Di Fonzo, A.; Cortese, F.; Rampini, P.M.; Priori, A. Adaptive deep brain stimulation in a freely moving parkinsonian patient. Mov. Disord. 2015, 30, 1003–1005. [Google Scholar] [CrossRef]
  130. Twala, B. AI-driven precision diagnosis and treatment in Parkinson’s disease: A comprehensive review and experimental analysis. Front. Aging Neurosci. 2025, 17, 1638340. [Google Scholar] [CrossRef]
  131. He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef]
  132. Ghassemi, M.; Oakden-Rayner, L.; Beam, A.L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 2021, 3, e745–e750. [Google Scholar] [CrossRef]
  133. Larrazabal, A.J.; Nieto, N.; Peterson, V.; Milone, D.H.; Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl. Acad. Sci. USA 2020, 117, 12592–12594. [Google Scholar] [CrossRef]
  134. Gianfrancesco, M.A.; Tamang, S.; Yazdany, J.; Schmajuk, G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern. Med. 2018, 178, 1544–1547. [Google Scholar] [CrossRef]
135. Muehlematter, U.J.; Daniore, P.; Vokinger, K.N. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): A comparative analysis. Lancet Digit. Health 2021, 3, e195–e203. [Google Scholar] [CrossRef] [PubMed]
  136. Sendak, M.P.; Gao, M.; Brajer, N.; Balu, S. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit. Med. 2020, 3, 41. [Google Scholar] [CrossRef] [PubMed]
  137. Yang, Q.; Steinfeld, A.; Rosé, C.; Zimmerman, J. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar]
138. Cortez, J.; Torres, C.G.; Parraguez, V.H.; De los Reyes, M.; Peralta, O.A. Bovine adipose tissue-derived mesenchymal stem cells self-assemble with testicular cells and integrates and modifies the structure of a testicular organoid. Theriogenology 2024, 215, 259–271. [Google Scholar] [CrossRef] [PubMed]
  139. Zhang, Y.; Jiang, X.; Qiao, L.; Liu, M. Modularity-Guided Functional Brain Network Analysis for Early-Stage Dementia Identification. Front. Neurosci. 2021, 15, 720909. [Google Scholar] [CrossRef]
  140. Jiao, F.; Wang, M.; Sun, X.; Ju, Z.; Lu, J.; Wang, L.; Jiang, J.; Zuo, C. Based on Tau PET Radiomics Analysis for the Classification of Alzheimer’s Disease and Mild Cognitive Impairment. Brain Sci. 2023, 13, 367. [Google Scholar] [CrossRef]
  141. Elmotelb, A.S.; Sherif, F.F.; Abohamama, A.S.; Fakhr, M.; Abdelatif, A.M. A Novel Deep Learning Technique for Multiclassification of Alzheimer’s Disease: A Hyperparameter Optimization Approach. Front. Artif. Intell. 2025, 8, 1558725. [Google Scholar] [CrossRef]
142. Nuvoli, S.; Bianconi, F.; Rondini, M.; Lazzarato, A.; Marongiu, A.; Fravolini, M.L.; Cascianelli, S.; Amici, S.; Filippi, L.; Spanu, A.; et al. Differential Diagnosis of Alzheimer Disease vs. Mild Cognitive Impairment Based on Left Temporal Lateral Lobe Hypometabolism on 18F-FDG PET/CT and Automated Classifiers. Diagnostics 2022, 12, 2425. [Google Scholar] [CrossRef]
  143. Akramifard, H.; Balafar, M.; Razavi, S.; Ramli, A.R. Emphasis Learning, Features Repetition in Width Instead of Length to Improve Classification Performance: Case Study-Alzheimer’s Disease Diagnosis. Sensors 2020, 20, 941. [Google Scholar] [CrossRef]
144. Wang, L.; Sheng, J.; Zhang, Q.; Zhou, R.; Li, Z.; Xin, Y. Functional Brain Network Measures for Alzheimer’s Disease Classification. IEEE Access 2023, 11, 111832–111845. [Google Scholar] [CrossRef]
  145. Lama, R.K.; Kwon, G.R. Diagnosis of Alzheimer’s Disease Using Brain Network. Front. Neurosci. 2021, 15, 605115. [Google Scholar] [CrossRef]
  146. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl. Vis. Sci. Technol. 2020, 9, 14. [Google Scholar] [PubMed]
  147. van Loon, W.; de Vos, F.; Fokkema, M.; Szabo, B.; Koini, M.; Schmidt, R.; de Rooij, M. Analyzing Hierarchical Multi-View MRI Data with StaPLR: An Application to Alzheimer’s Disease Classification. Front. Neurosci. 2022, 16, 830630. [Google Scholar] [CrossRef] [PubMed]
  148. Khan, Y.F.; Kaushik, B.; Chowdhary, C.L.; Srivastava, G. Ensemble Model for Diagnostic Classification of Alzheimer’s Disease Based on Brain Anatomical Magnetic Resonance Imaging. Diagnostics 2022, 12, 3193. [Google Scholar] [CrossRef] [PubMed]
  149. Bao, Y.W.; Wang, Z.J.; Shea, Y.F.; Chiu, P.K.-C.; Kwan, J.S.; Chan, F.H.-W.; Mak, H.K.-F. Combined Quantitative amyloid-β PET and Structural MRI Features Improve Alzheimer’s Disease Classification in Random Forest Model—A Multicenter Study. Acad. Radiol. 2024, 31, 5154–5163. [Google Scholar] [CrossRef]
  150. Song, M.; Jung, H.; Lee, S.; Kim, D.; Ahn, M. Diagnostic classification and biomarker identification of alzheimer’s disease with random forest algorithm. Brain Sci. 2021, 11, 453. [Google Scholar] [CrossRef]
  151. Keles, M.K.; Kilic, U. Classification of Brain Volumetric Data to Determine Alzheimer’s Disease Using Artificial Bee Colony Algorithm as Feature Selector. IEEE Access 2022, 10, 82989–83001. [Google Scholar] [CrossRef]
  152. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  153. Kim, C.M.; Lee, W. Classification of Alzheimer’s Disease Using Ensemble Convolutional Neural Network with LFA Algorithm. IEEE Access 2023, 11, 143004–143015. [Google Scholar] [CrossRef]
  154. Mujahid, M.; Rehman, A.; Alam, T.; Alamri, F.S.; Fati, S.M.; Saba, T. An Efficient Ensemble Approach for Alzheimer’s Disease Detection Using an Adaptive Synthetic Technique and Deep Learning. Diagnostics 2023, 13, 2489. [Google Scholar] [CrossRef]
  155. Khan, R.; Akbar, S.; Mehmood, A.; Shahid, F.; Munir, K.; Ilyas, N.; Asif, M.; Zheng, Z. A transfer learning approach for multiclass classification of Alzheimer’s disease using MRI images. Front. Neurosci. 2023, 16, 1050777. [Google Scholar] [CrossRef]
  156. Dhillon, A.; Verma, G.K. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2020, 9, 85–112. [Google Scholar] [CrossRef]
  157. Chen, D.; Hu, F.; Nian, G.; Yang, T. Deep Residual Learning for Nonlinear Regression. Entropy 2020, 22, 193. [Google Scholar] [CrossRef]
  158. Odusami, M.; Maskeliūnas, R.; Damaševičius, R.; Krilavičius, T. Analysis of Features of Alzheimer’s Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network. Diagnostics 2021, 11, 1071. [Google Scholar] [CrossRef] [PubMed]
  159. Liu, Y.; Tang, K.; Cai, W.; Chen, A.; Zhou, G.; Li, L.; Liu, R. MPC-STANet: Alzheimer’s Disease Recognition Method Based on Multiple Phantom Convolution and Spatial Transformation Attention Mechanism. Front. Aging Neurosci. 2022, 14, 918462. [Google Scholar] [CrossRef] [PubMed]
  160. Odusami, M.; Maskeliūnas, R.; Damaševičius, R. An Intelligent System for Early Recognition of Alzheimer’s Disease Using Neuroimaging. Sensors 2022, 22, 740. [Google Scholar] [CrossRef] [PubMed]
  161. Li, C.; Wang, Q.; Liu, X.; Hu, B. An Attention-Based CoT-ResNet with Channel Shuffle Mechanism for Classification of Alzheimer’s Disease Levels. Front. Aging Neurosci. 2022, 14, 930584. [Google Scholar] [CrossRef]
  162. Pusparani, Y.; Lin, C.Y.; Jan, Y.K.; Lin, F.-Y.; Liau, B.-Y.; Ardhianto, P.; Farady, I.; Alex, J.S.R.; Aparajeeta, J.; Chao, W.-H.; et al. Diagnosis of Alzheimer’s Disease Using Convolutional Neural Network with Select Slices by Landmark on Hippocampus in MRI Images. IEEE Access 2023, 11, 61688–61697. [Google Scholar] [CrossRef]
  163. Sun, H.; Wang, A.; Wang, W.; Liu, C. An Improved Deep Residual Network Prediction Model for the Early Diagnosis of Alzheimer’s Disease. Sensors 2021, 21, 4182. [Google Scholar] [CrossRef]
  164. AlSaeed, D.; Omar, S.F. Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning. Sensors 2022, 22, 2911. [Google Scholar] [CrossRef]
  165. Syed Jamalullah, R.; Mary Gladence, L.; Ahmed, M.A.; Lydia, E.L.; Ishak, M.K.; Hadjouni, M.; Mostafa, S.M. Leveraging Brain MRI for Biomedical Alzheimer’s Disease Diagnosis Using Enhanced Manta Ray Foraging Optimization Based Deep Learning. IEEE Access 2023, 11, 81921–81929. [Google Scholar] [CrossRef]
  166. Carcagnì, P.; Leo, M.; Del Coco, M.; Distante, C.; De Salve, A. Convolution Neural Networks and Self-Attention Learners for Alzheimer Dementia Diagnosis from Brain MRI. Sensors 2023, 23, 1694. [Google Scholar] [CrossRef] [PubMed]
  167. Sharma, S.; Gupta, S.; Gupta, D.; Altameem, A.; Saudagar, A.K.J.; Poonia, R.C.; Nayak, S.R. HTLML: Hybrid AI Based Model for Detection of Alzheimer’s Disease. Diagnostics 2022, 12, 1833. [Google Scholar] [CrossRef] [PubMed]
  168. Chen, Z.; Mo, X.; Chen, R.; Feng, P.; Li, H. A Reparametrized CNN Model to Distinguish Alzheimer’s Disease Applying Multiple Morphological Metrics and Deep Semantic Features From Structural MRI. Front. Aging Neurosci. 2022, 14, 856391. [Google Scholar] [CrossRef] [PubMed]
  169. Khagi, B.; Kwon, G.R. 3D CNN Design for the Classification of Alzheimer’s Disease Using Brain MRI and PET. IEEE Access 2020, 8, 217830–217847. [Google Scholar] [CrossRef]
  170. Kaya, M.; Cetin-Kaya, Y. A Novel Deep Learning Architecture Optimization for Multiclass Classification of Alzheimer’s Disease Level. IEEE Access 2024, 12, 46562–46581. [Google Scholar] [CrossRef]
  171. Shamrat, F.M.J.M.; Akter, S.; Azam, S.; Karim, A.; Ghosh, P.; Tasnim, Z.; Hasib, K.M.; De Boer, F.; Ahmed, K. AlzheimerNet:An Effective Deep Learning Based Proposition for Alzheimer’s Disease Stages Classification From Functional Brain Changes in Magnetic Resonance Images. IEEE Access 2023, 11, 16376–16395. [Google Scholar] [CrossRef]
  172. Hazarika, R.A.; Maji, A.K.; Kandar, D.; Jasinska, E.; Krejci, P.; Leonowicz, Z.; Jasinski, M. An Approach for Classification of Alzheimer’s Disease Using Deep Neural Network and Brain Magnetic Resonance Imaging (MRI). Electronics 2023, 12, 676. [Google Scholar] [CrossRef]
  173. Fareed, M.M.S.; Zikria, S.; Ahmed, G.; Din, M.Z.; Mahmood, S.; Aslam, M.; Jillani, S.F.; Moustafa, A. ADD-Net: An Effective Deep Learning Model for Early Detection of Alzheimer Disease in MRI Scans. IEEE Access 2022, 10, 96930–96951. [Google Scholar] [CrossRef]
  174. Sait, A.R.W.; Nagaraj, R. A Feature-Fusion Technique-Based Alzheimer’s Disease Classification Using Magnetic Resonance Imaging. Diagnostics 2024, 14, 2363. [Google Scholar] [CrossRef]
  175. Chabib, C.M.; Hadjileontiadis, L.J.; Shehhi, A.A. DeepCurvMRI: Deep Convolutional Curvelet Transform-Based MRI Approach for Early Detection of Alzheimer’s Disease. IEEE Access 2023, 11, 44650–44659. [Google Scholar] [CrossRef]
  176. Murugan, S.; Venkatesan, C.; Sumithra, M.G.; Gao, X.-Z.; Elakkiya, B.; Akila, M.; Manoharan, S. DEMNET: A Deep Learning Model for Early Diagnosis of Alzheimer Diseases and Dementia from MR Images. IEEE Access 2021, 9, 90319–90329. [Google Scholar] [CrossRef]
  177. Ganokratanaa, T.; Ketcham, M.; Pramkeaw, P. Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models. J. Imaging 2023, 9, 197. [Google Scholar] [CrossRef] [PubMed]
  178. Hazarika, R.A.; Abraham, A.; Kandar, D.; Maji, A.K. An Improved LeNet-Deep Neural Network Model for Alzheimer’s Disease Classification Using Brain Magnetic Resonance Images. IEEE Access 2021, 9, 161194–161207. [Google Scholar] [CrossRef]
  179. Dey, R.; Salem, F.M. Gate-variants of Gated Recurrent Unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar]
  180. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.; Shareef, B.; Ahsan, M.; Islam, K.; Miah, S.; et al. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model. IEEE Access 2024, 12, 8390–8412. [Google Scholar] [CrossRef]
  181. Zhao, Y.; Guo, Q.; Zhang, Y.; Zheng, J.; Yang, Y.; Du, X.; Feng, H.; Zhang, S. Application of Deep Learning for Prediction of Alzheimer’s Disease in PET/MR Imaging. Bioengineering 2023, 10, 1120. [Google Scholar] [CrossRef]
  182. Al-Otaibi, S.; Mujahid, M.; Khan, A.R.; Nobanee, H.; Alyami, J.; Saba, T. Dual Attention Convolutional AutoEncoder for Diagnosis of Alzheimer’s Disorder in Patients Using Neuroimaging and MRI Features. IEEE Access 2024, 12, 58722–58739. [Google Scholar] [CrossRef]
  183. Guo, H.; Zhang, Y. Resting State fMRI and Improved Deep Learning Algorithm for Earlier Detection of Alzheimer’s Disease. IEEE Access 2020, 8, 115383–115392. [Google Scholar] [CrossRef]
  184. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef]
  185. Chui, K.T.; Gupta, B.B.; Alhalabi, W.; Alzahrani, F.S. An MRI Scans-Based Alzheimer’s Disease Detection via Convolutional Neural Network and Transfer Learning. Diagnostics 2022, 12, 1531. [Google Scholar] [CrossRef]
  186. Cao, Y.; Kuai, H.; Liang, P.; Pan, J.S.; Yan, J.; Zhong, N. BNLoop-GAN: A multi-loop generative adversarial model on brain network learning to classify Alzheimer’s disease. Front. Neurosci. 2023, 17, 1202382. [Google Scholar] [CrossRef]
  187. Kale, M.; Wankhede, N.; Pawar, R.; Ballal, S.; Kumawat, R.; Goswami, M.; Khalid, M.; Taksande, B.; Upaganlawar, A.; Umekar, M.; et al. AI-driven innovations in Alzheimer’s disease: Integrating early diagnosis, personalized treatment, and prognostic modelling. Ageing Res. Rev. 2024, 101, 102497. [Google Scholar] [CrossRef]
  188. Aqeel, A.; Hassan, A.; Khan, M.A.; Rehman, S.; Tariq, U.; Kadry, S.; Majumdar, A.; Thinnukool, O. A Long Short-Term Memory Biomarker-Based Prediction Framework for Alzheimer’s Disease. Sensors 2022, 22, 1475. [Google Scholar] [CrossRef]
  189. Khalid, A.; Senan, E.M.; Al-Wagih, K.; Al-Azzam, M.M.A.; Alkhraisha, Z.M. Automatic Analysis of MRI Images for Early Prediction of Alzheimer’s Disease Stages Based on Hybrid Features of CNN and Handcrafted Features. Diagnostics 2023, 13, 1654. [Google Scholar] [CrossRef] [PubMed]
  190. Jain, V.; Nankar, O.; Jerrish, D.J.; Gite, S.; Patil, S.; Kotecha, K. A Novel AI-Based System for Detection and Severity Prediction of Dementia Using MRI. IEEE Access 2021, 9, 154324–154346. [Google Scholar] [CrossRef]
  191. Peng, J.; Wang, W.; Song, Q.; Hou, J.; Jin, H.; Qin, X.; Yuan, Z.; Wei, Y.; Shu, Z. 18F-FDG-PET Radiomics Based on White Matter Predicts The Progression of Mild Cognitive Impairment to Alzheimer Disease: A Machine Learning Study. Acad. Radiol. 2023, 30, 1874–1884. [Google Scholar] [CrossRef]
  192. Lin, W.; Gao, Q.; Yuan, J.; Chen, Z.; Feng, C.; Chen, W.; Du, M.; Tong, T. Predicting Alzheimer’s Disease Conversion From Mild Cognitive Impairment Using an Extreme Learning Machine-Based Grading Method with Multimodal Data. Front. Aging Neurosci. 2020, 12, 77. [Google Scholar] [CrossRef] [PubMed]
  193. Fakoya, A.A.; Parkinson, S. A Novel Image Casting and Fusion for Identifying Individuals at Risk of Alzheimer’s Disease Using MRI and PET Imaging. IEEE Access 2024, 12, 134101–134114. [Google Scholar] [CrossRef]
  194. Pan, D.; Zeng, A.; Yang, B.; Lai, G.; Hu, B.; Song, X.; Jiang, T.; Alzheimer’s Disease Neuroimaging Initiative (ADNI). Deep Learning for Brain MRI Confirms Patterned Pathological Progression in Alzheimer’s Disease. Adv. Sci. 2023, 10, e2204717. [Google Scholar] [CrossRef]
  195. Kim, S.T.; Kucukaslan, U.; Navab, N. Longitudinal Brain MR Image Modeling Using Personalized Memory for Alzheimer’s Disease. IEEE Access 2021, 9, 143212–143221. [Google Scholar] [CrossRef]
  196. Crystal, O.; Maralani, P.J.; Black, S.; Fischer, C.; Moody, A.R.; Khademi, A. Brain Age Estimation on a Dementia Cohort Using FLAIR MRI Biomarkers. Am. J. Neuroradiol. 2023, 44, 1384–1390. [Google Scholar] [CrossRef]
  197. Wang, M.; Wei, M.; Wang, L.; Song, J.; Rominger, A.; Shi, K.; Jiang, J. Tau Protein Accumulation Trajectory-Based Brain Age Prediction in the Alzheimer’s Disease Continuum. Brain Sci. 2024, 14, 575. [Google Scholar] [CrossRef]
  198. Chattopadhyay, T.; Ozarkar, S.S.; Buwa, K.; Joshy, N.A.; Komandur, D.; Naik, J.; Thomopoulos, S.I.; Steeg, G.V.; Ambite, J.L.; Thompson, P.M. Comparison of deep learning architectures for predicting amyloid positivity in Alzheimer’s disease, mild cognitive impairment, and healthy aging, from T1-weighted brain structural MRI. Front. Neurosci. 2024, 18, 1387196. [Google Scholar] [CrossRef] [PubMed]
  199. Habuza, T.; Zaki, N.; Mohamed, E.A.; Statsenko, Y. Deviation from Model of Normal Aging in Alzheimer’s Disease: Application of Deep Learning to Structural MRI Data and Cognitive Tests. IEEE Access 2022, 10, 53234–53249. [Google Scholar] [CrossRef]
  200. Liang, W.; Zhang, K.; Cao, P.; Liu, X.; Yang, J.; Zaiane, O.R. Exploiting task relationships for Alzheimer’s disease cognitive score prediction via multi-task learning. Comput. Biol. Med. 2023, 152, 106367. [Google Scholar] [CrossRef] [PubMed]
  201. Gerussi, A.; Verda, D.; Bernasconi, D.P.; Carbone, M.; Komori, A.; Abe, M.; Inao, M.; Namisaki, T.; Mochida, S.; Yoshiji, H.; et al. Machine learning in primary biliary cholangitis: A novel approach for risk stratification. Liver Int. 2022, 42, 615–627. [Google Scholar] [CrossRef] [PubMed]
  202. Tovoli, F.; Renzulli, M.; Negrini, G.; Brocchi, S.; Ferrarini, A.; Andreone, A.; Benevento, F.; Golfieri, R.; Morselli-Labate, A.M.; Mastroroberto, M.; et al. Interoperator variability and source of errors in tumour response assessment for hepatocellular carcinoma treated with sorafenib. Eur. Radiol. 2018, 28, 3611–3620. [Google Scholar] [CrossRef]
  203. Nam, D.; Chapiro, J.; Paradis, V.; Seraphin, T.P.; Kather, J.N. Artificial Intelligence in liver diseases: Improving diagnostics, prognostics and response prediction. JHEP Rep. 2022, 4, 100443. [Google Scholar] [CrossRef]
  204. Gerussi, A.; Scaravaglio, M.; Cristoferi, L.; Verda, D.; Milani, C.; De Bernardi, E.; Ippolito, D.; Asselta, R.; Invernizzi, P.; Kather, J.N.; et al. Artificial Intelligence for precision medicine in autoimmune liver disease. Front. Immunol. 2022, 13, 966329. [Google Scholar] [CrossRef]
205. Cherqui, D.; Ciria, R.; Kwon, C.H.D.; Kim, K.H.; Broering, D.; Wakabayashi, G.; Samstein, B.; Troisi, R.I.; Han, H.S.; Rotellar, F.; et al. Expert consensus guidelines on minimally invasive donor hepatectomy for living donor liver transplantation from innovation to implementation: A joint initiative from the International Laparoscopic Liver Society (ILLS) and the Asian-Pacific Hepato-Pancreato-Biliary Association (A-PHPBA). Ann. Surg. 2021, 273, 96–108. [Google Scholar]
  206. Lee, K.-W.; Hong, S.K.; Suh, K.-S.; Kim, H.-S.; Ahn, S.-W.; Yoon, K.C.; Lee, J.-M.; Cho, J.-H.; Kim, H.; Yi, N.-J. One hundred fifteen cases of pure laparoscopic living donor right hepatectomy at a single center. Transplantation 2018, 102, 1878–1884. [Google Scholar] [CrossRef]
  207. Chen, P.-D.; Wu, C.-Y.; Hu, R.-H.; Chen, C.-N.; Yuan, R.-H.; Liang, J.-T.; Lai, H.-S.; Wu, Y.-M. Robotic major hepatectomy: Is there a learning curve? Surgery 2017, 161, 642–649. [Google Scholar] [CrossRef]
  208. Burra, P.; Giannini, E.G.; Caraceni, P.; Corradini, S.G.; Rendina, M.; Volpes, R.; Toniutto, P. Specific issues concerning the management of patients on the waiting list and after liver transplantation. Liver Int. 2018, 38, 1338–1362. [Google Scholar] [CrossRef]
  209. Bertsimas, D.; Kung, J.; Trichakis, N.; Wang, Y.; Hirose, R.; Vagefi, P.A. Development and validation of an optimized prediction of mortality for candidates awaiting liver transplantation. Am. J. Transplant 2019, 19, 1109–1118. [Google Scholar] [CrossRef]
210. Kwong, A.; Ebel, N.; Kim, W.; Lake, J.; Smith, J.; Schladt, D.; Skeans, M.; Foutz, J.; Gauntt, K.; Cafarella, M.; et al. OPTN/SRTR 2020 annual data report: Liver. Am. J. Transplant. 2022, 22, 204–309. [Google Scholar] [CrossRef]
  211. Predetermined Change Control Plan for AI/ML-Enabled Device Software Functions: Guidance for Industry and FDA Staff; U.S. Food and Drug Administration (FDA): Silver Spring, MD, USA, 2024.
  212. Good Machine Learning Practice for Medical Device Development: Guiding Principles; FDA: Silver Spring, MD, USA; Health Canada: Ottawa, ON, Canada; MHRA: London, UK, 2021.
213. Artificial Intelligence Act (Regulation (EU) 2024/1689); L168/1; Official Journal of the European Union: Brussels, Belgium, 2024.
  214. Software and AI as a Medical Device Change Programme (AIaMD); Medicines and Healthcare Products Regulatory Agency: London, UK, 2024.
  215. Software as a Medical Device (SaMD): Key Definitions and Risk Categorization; IMDRF: Silver Spring, MD, USA, 2021.
  216. Artificial Intelligence—Guidance on Risk Management; International Organization for Standardization (ISO): Geneva, Switzerland, 2023.
  217. Artificial Intelligence Management System Standard; International Organization for Standardization (ISO): Geneva, Switzerland, 2023.
  218. Medical Devices—Quality Management Systems—Requirements for Regulatory Purposes; International Organization for Standardization (ISO): Geneva, Switzerland, 2016.
  219. Medical Devices—Application of Risk Management to Medical Devices; International Organization for Standardization (ISO): Geneva, Switzerland, 2019.
  220. Medical Device Software—Software Life Cycle Processes; International Electrotechnical Commission (IEC): Geneva, Switzerland, 2015.
  221. Ethics and Governance of Artificial Intelligence for Health; WHO: Geneva, Switzerland, 2023.
Figure 1. Conceptual workflow underlying AI methodologies in biomedicine, summarizing the progression from multimodal data acquisition through preprocessing and feature engineering, model development, validation and performance assessment, to clinical implementation and integration into healthcare workflows.
Figure 2. PRISMA flow diagram of the study selection process.
Figure 3. Cross-domain integration map illustrating AI applications in nanomedicine, cardiology, neurology, and hepatology. Domain-specific use cases converge on shared computational principles—data preprocessing, feature engineering, model architectures, validation frameworks, and interpretability—while the lower panel highlights overarching challenges and opportunities, including data heterogeneity, domain shift and external validity, and transfer learning potential.
Table 1. Summary of AI applications in medicine.
Topic/AreaDescriptionRef.
AI in MedicineAI enables computers and robots to emulate human behavior, assist in healthcare diagnosis, and perform surgical procedures. Applications include drug development, medical data generation, and disease analysis such as cancer.[21]
AI-Powered RoboticsAI-driven surgical robots and nanorobots improve precision and efficacy by enabling targeted drug delivery.[22,23]
AI-Enhanced Soft RoboticsMachine learning–driven soft robotic systems mimic physiological functions for diagnostic and therapeutic applications. Recent advances include ML-enhanced soft robotic platforms inspired by rectal functions to model fecal continence mechanisms and investigate neuromuscular coordination, highlighting the convergence of AI, robotics, and biomedical engineering.[24]
Machine Learning and Deep Learning in HealthcareML and deep learning support clinical diagnostics and treatment decisions. AI-assisted surgical robots are used in procedures like heart valve repair, gynecology, and prostatectomy. Future cancer treatments may rely on unsupervised and reinforcement learning for pattern recognition and strategy optimization.[25,26,27,28]
AI in Computational Biology and Molecular MedicineAI contributes to identifying medicinal targets, managing protein interactions, and advancing genetics and molecular medicine.[29]
Robotic Surgery in Oncology The review highlights the advantages, challenges, and Indian context of robotic surgery in oncology.[30]
Case Example: Da Vinci Robotic Surgical System Yang et al. described a uniportal right upper lobectomy performed with the 4th generation Da Vinci Xi system, demonstrating advanced robotic-assisted surgery capabilities and fast patient recovery.[31]
Robotic Surgery in Rectal CancerRobotic surgery helps overcome limitations of traditional laparoscopy, improving radical operation outcomes. Innovations include the Verb Surgical project and developments in robotic mesorectal excision, lymph node dissection, and AI integration in surgery.[32]
Computational Methods in Drug FormulationComputational modeling optimizes drug formulations (e.g., methotrexate nanosuspension) by analyzing molecular interactions and aggregation. Tools like LAMMPS and GROMACS assess nanoparticle behavior. Mehta et al. reviewed these modeling tools, emphasizing their role in personalized medicine and improved therapeutic outcomes.[33]
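The healthcare ML entries in Table 1 share a common workflow: train a supervised classifier on labeled patient features, then evaluate discrimination on held-out data. The sketch below illustrates that pattern with scikit-learn; the cohort, features, and outcome rule are synthetic and purely illustrative, not drawn from any cited study.

```python
# Minimal sketch of a supervised diagnostic classifier, assuming synthetic
# data only (no clinical model from the review is reproduced here).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: 500 patients, 10 numeric features (e.g., labs, imaging scores).
X = rng.normal(size=(500, 10))
# Outcome depends on two features plus noise, mimicking a weak clinical signal.
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# AUC on the held-out quarter of the cohort, the metric most tables here quote.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```

The same train/validate split discipline underlies the external-validation concerns raised throughout the tables: a model evaluated only on its own training distribution tends to overstate performance.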
Table 2. Comparison of AI applications in CT versus MRI for cardiovascular imaging.

Modality | AI Application/Task | Clinical/Research Use | Advantages | Limitations/Challenges | Ref.
CT—Opportunistic risk stratification (DASSi) | AI-based biomarker extraction from echocardiographic + CMR inputs (Digital Aortic Stenosis Severity Index, DASSi) | Screening and follow-up; risk stratification using even handheld devices for opportunistic screening | Enables personalized screening without complex imaging setups; usable on lower-resource platforms | Depends on heterogeneous inputs (echo + CMR); needs cross-modality harmonization and validation across populations | [74]
CT—AI screening for valve disease (mitral/aortic) | Automated detection/classification of valve disease severity from imaging and ECG/clinical inputs | Large-scale screening, triage, identification of severe aortic stenosis (AS) | High diagnostic performance (AUCs >0.88–0.91 reported in extreme-spectrum cohorts); enables fast triage | Spectrum and selection bias in training data; model interpretability issues; variable imaging acquisition protocols | [75,76]
CT—Automated coronary artery calcium (CAC, Agatston) scoring | Automated CAC detection and Agatston score estimation from non-contrast/low-dose chest CT or CCTA | Risk stratification for coronary atherosclerosis; population screening (e.g., lung CT cohorts) | High throughput; reduces labor for manual scoring; can be applied opportunistically to lung screening CTs | Image noise, motion, or blooming artifacts degrade accuracy; requires robust pre-processing and well-labelled training sets | [77,78,79,80]
CT—CCTA + myocardial analysis for ischemia prediction | Deep learning analysis of left ventricular (LV) myocardium (multiscale CNN + auto-encoding) to predict functionally significant stenosis and stress ischemia | Noninvasive functional assessment adjunct to stenosis grading; improves prediction of ischemia beyond stenosis percentage | Adds myocardial functional information from standard CCTA; improved discrimination (AUC ~0.76 vs. anatomy alone) | Moderate specificity in some reports (e.g., sensitivity 84.6%, specificity 48.4%); method complexity and need for robust validation | [81]
CT—Automated coronary segmentation and classification | DL architectures (EfficientNet, DenseNet201, ResNet101, Xception, MobileNet-v2) for artery segmentation and lesion classification | Automated reporting, quantification of stenosis, and plaque characterization | Very high reported metrics in some models (DenseNet201: accuracy 0.90; AUC 0.9694; specificity 0.9833) | Black-box models; potential overfitting to homogeneous datasets; generalizability issues | [71,72]
CT—Lipid/phenogroup clustering for risk prediction (non-imaging input) | Unsupervised ML to derive phenogroups from lipid profiles to predict outcomes in STEMI | Risk stratification and phenotyping for prognosis and personalized management | Reveals biologically meaningful patient subgroups; strong statistical associations with outcomes | Requires large cohorts and external validation; confounding by treatment and comorbidities | [73]
MRI—Automated QA and pre-processing checks | Automated assessment of image quality and slice selection (e.g., ascending vs. descending aorta detection; basal/apical slice identification; motion-artifact detection) | Quality control before downstream analysis; ensures standardized inputs for segmentation and quantification | Reduces manual QC burden; ensures consistent inputs for AI pipelines; lowers inter-scan variability | Models must handle a wide range of scanning protocols and scanner vendors; edge cases (severe artifacts) may fail | [82,83,84,85]
MRI—Left ventricle (LV) detection and segmentation | CNNs and boundary-/regression-based networks for LV identification and segmentation across the cardiac cycle | Automated EF calculation, volumetry, mass, and wall-motion analysis for clinical and research use | Extremely high detection/segmentation accuracy reported (e.g., LV detection success ~99.98%; Dice scores up to ~0.95); much faster than manual tracing | Requires large annotated datasets (tens of thousands of scans); variation across centers; need for robust external validation | [86,87,88,89,90,91]
MRI—Scar quantification and tissue characterization | Deep CNNs for scar volume (late gadolinium enhancement), T1/T2-mapping radiomics, and myocardial tissue feature extraction | Phenotyping for HCM, ischemic scar, fibrosis assessment, prognosis | Enables quantitative, reproducible tissue characterization; radiomics can discriminate diseases (e.g., HCM vs. hypertensive disease) | Radiomics feature reproducibility across scanners and protocols is challenging; requires harmonization and large multisite datasets | [92,93,94]
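Several MRI rows in Table 2 quote Dice scores as the segmentation accuracy metric. The sketch below shows how the Dice coefficient is computed on binary masks; the toy 8×8 masks are invented for illustration, not cardiac data.

```python
# Dice coefficient on binary segmentation masks: 2|A∩B| / (|A| + |B|),
# where 1.0 means perfect overlap between prediction and ground truth.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy "LV" masks: the prediction lies inside the truth but misses one row.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1           # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=int)
pred[3:6, 2:6] = 1            # 12 predicted pixels, all within the truth

print(f"Dice: {dice_coefficient(pred, truth):.3f}")  # 2*12/(12+16) ≈ 0.857
```

A Dice of ~0.95, as reported for LV segmentation above, therefore means the predicted and expert-drawn contours agree on the vast majority of myocardial pixels.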
Table 5. Machine Learning in AD Diagnosis and Classification.

Algorithm/Model | Description and Study Findings | Ref.
Support Vector Machine (SVM) | A supervised ML tool used to classify AD and MCI by detecting patterns in labeled imaging data. Widely used in neuroimaging for AD/MCI diagnosis. | [138]
Modular-LASSO Feature Selection (MLFS) + SVM | Zhang et al. developed a hybrid MLFS–SVM method incorporating Fuzzy Bayesian Networks for feature detection in resting-state fMRI, enhancing AD/MCI classification accuracy. | [139]
Radiomics-Based SVM Models | Jiao et al. applied SVM to identify radiomics signatures from tau tracer PET images, achieving higher accuracy (84.8 ± 4.5%) compared to SUVR (73.1 ± 3.6%). | [140,141]
SVM for FDG PET Imaging | Nuvoli et al. used linear SVM on FDG PET imaging for AD/MCI differential diagnosis, reporting 76.23% accuracy based on temporal lobe hypometabolism. | [142]
Emphasis Learning with SVM | Akramifard et al. improved classification performance by repeating key features in smaller datasets (emphasis learning), achieving 98.81% accuracy between AD and normal controls. | [143]
SVM + Graph Theory (fMRI) | Wang et al. combined SVM with graph-based measures for fMRI analysis, yielding 96.80% accuracy in distinguishing AD from healthy controls. Slightly lower accuracy was seen in MCI classification. | [144]
SVM + LASSO (Graph-based fMRI) | Combined SVM and LASSO feature selection provided high accuracy in classifying AD, MCI, and healthy controls, outperforming traditional methods. | [145]
Logistic Regression (LR) | LR models input–output correlations using a sigmoid curve. Van Loon et al. introduced StaPLR (stacked penalized logistic regression) for multimodal MRI data fusion, achieving a mean AUC of 0.942, outperforming elastic net regression (AUC 0.848). | [146,147]
Decision Tree (DT) and Random Forest (RF) | Supervised ML methods that classify data hierarchically; RF aggregates multiple DTs to predict outcomes. Widely applied for AD classification. | [145,148,149,150]
Multimodal RF (Aβ PET + sMRI) | Bao et al. demonstrated improved AD classification accuracy using multimodal fusion (AUC = 0.89) compared to single-modality models (AUC = 0.71). | [149]
RF in Feature Selection (Combined with SVM) | Keles et al. used RF as a classifier in combination with optimization algorithms (BABC, BPSO, BGWO, BDE), achieving accuracies of 0.863–0.905. | [151]
Comparison of RF, SVM, MLP, and CNN | Song et al. compared multiple models for AD classification using 63, 29, and 22 features. All models showed high accuracy, but RF exhibited the smallest performance drop (−3.8%), confirming its robustness across feature sets and modalities. | [150]
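Table 5 repeatedly pairs SVM classifiers with LASSO-style feature selection for high-dimensional neuroimaging features. The sketch below illustrates that generic pattern in scikit-learn; the sample sizes, feature counts, and penalty settings are illustrative assumptions, not the pipelines of the cited studies.

```python
# Sketch of LASSO-style (L1) feature selection feeding a linear SVM,
# on synthetic data mimicking a sparse signal among many features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 200 subjects, 50 features of which only 5 are informative.
X, y = make_classification(
    n_samples=200, n_features=50, n_informative=5, n_redundant=0, random_state=0
)

pipe = make_pipeline(
    StandardScaler(),
    # The L1 penalty drives uninformative coefficients to zero, so
    # SelectFromModel keeps only the surviving features for the SVM.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="linear"),
)

# Cross-validation keeps feature selection inside each training fold,
# avoiding the selection leakage that inflates reported accuracies.
scores = cross_val_score(pipe, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

Fitting the selector inside the cross-validation pipeline matters: selecting features on the full dataset before splitting is a common source of the optimistic accuracies this table reports.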
MDPI and ACS Style

Trasca, D.-M.; Dorin, P.I.; Carmen, S.; Varut, R.-M.; Singer, C.E.; Radivojevic, K.; Stoica, G.A. Artificial Intelligence in Biomedicine: A Systematic Review from Nanomedicine to Neurology and Hepatology. Pharmaceutics 2025, 17, 1564. https://doi.org/10.3390/pharmaceutics17121564
