Systematic Review

The AI-Powered Healthcare Ecosystem: Bridging the Chasm Between Technical Validation and Systemic Integration—A Systematic Review

by Babiker Mohamed Rahamtalla 1, Isameldin Elamin Medani 2,*, Mohammed Eltahir Abdelhag 3, Sara Ahmed Eltigani 4, Sudha K. Rajan 2, Essam Falgy 5, Nazik Mubarak Hassan 6, Marwa Elfatih Fadailu 7, Hayat Ahmad Khudhayr 8 and Abuzar Abdalla 9

1 Department of Community Medicine, University of Medical Sciences and Technology, Khartoum P.O. Box 12810, Sudan
2 Department of Obstetrics and Gynecology, Faculty of Medicine, Jazan University, Jazan 82722, Saudi Arabia
3 Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan 82722, Saudi Arabia
4 Department of Laboratory, Jazan General Hospital, Jazan 45141, Saudi Arabia
5 Department of Biochemistry Lab, Faculty of Medicine, Jazan University, Jazan 82722, Saudi Arabia
6 Department of Health Promotion & Education, Faculty of Public Health & Health Informatics, Umm-Al-Qura University, Taif Road, Mecca 21955, Saudi Arabia
7 Women Health Hospital, Ministry of National Guard Health Affairs, Riyadh 11426, Saudi Arabia
8 Department of Obstetrics and Gynecology, Jazan General Hospital, Jazan 45142, Saudi Arabia
9 Department of Anatomy, Faculty of Medicine, Jazan University, Jazan 82722, Saudi Arabia
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(12), 550; https://doi.org/10.3390/fi17120550
Submission received: 6 November 2025 / Revised: 24 November 2025 / Accepted: 27 November 2025 / Published: 29 November 2025

Abstract

Artificial intelligence (AI) is increasingly positioned as a transformative force in healthcare, yet its translation from technical validation to real-world clinical impact remains a critical challenge. This systematic review aims to synthesize the evidence on the AI translational pathway in healthcare, focusing on the systemic barriers and facilitators to integration. Following PRISMA 2020 guidelines, we searched PubMed, Scopus, Web of Science, and IEEE Xplore for studies published between 2000 and 2025. We included peer-reviewed original research, clinical trials, observational studies, and reviews reporting on AI technical validation, clinical deployment, implementation outcomes, or ethical governance. While AI models consistently demonstrate high diagnostic accuracy (92–98% in radiology) and robust predictive performance (AUC 0.76–0.82 in readmission forecasting), clinical adoption remains limited, with only 15–25% of departments integrating AI tools and approximately 60% of projects failing to progress beyond pilot testing. Key barriers include interoperability limitations affecting over half of implementations, lack of clinician trust in unsupervised systems (35%), and regulatory immaturity, with only 27% of countries establishing AI governance frameworks. Moreover, performance disparities exceeding 10% were identified in 28% of models, alongside a pronounced global divide, as 73% of low-resource health systems lack enabling infrastructure. These findings underscore the need for systemic, trustworthy, and equity-driven AI integration strategies.

1. Introduction

Artificial intelligence (AI) has emerged as a transformative force in healthcare, reshaping clinical workflows, diagnostics, and patient management. Despite numerous studies on AI applications across medical domains, there remains a lack of synthesized evidence summarizing technical performance, implementation outcomes, and ethical considerations, which motivates this review. Advances in machine learning, natural language processing, and computer vision have enabled predictive models, decision-support systems, and personalized interventions across diverse medical domains [1,2]. In ophthalmology, AI algorithms demonstrate high accuracy in detecting retinal pathologies, highlighting the potential to augment clinician expertise [3]. Radiomics and imaging-based AI applications further enhance automated image interpretation, supporting both clinical practice and medical education [4]. AI-driven solutions are increasingly applied to optimize healthcare delivery by improving diagnostic precision, facilitating early detection, and streamlining patient management [5,6]. This review therefore aims to systematically collate and evaluate evidence across clinical areas to provide a comprehensive understanding of AI integration in healthcare.
The convergence of AI with the Internet of Medical Things (IoMT) and big data analytics allows healthcare systems to manage vast, heterogeneous datasets efficiently [7,8,9]. By integrating diverse patient data sources, AI supports the development of precision medicine strategies, enabling personalized interventions tailored to individual patient profiles [10]. Applications of AI in administrative decision-making, predictive analytics, and population health management have also demonstrated potential for cost reduction and quality improvement [11]. Moreover, AI facilitates access to care, particularly in underserved or resource-constrained settings, by streamlining workflows and supporting remote service delivery [6].
Despite technological advances, the adoption of AI in healthcare remains uneven due to organizational, ethical, and regulatory barriers. Institutional readiness, workforce preparedness, and prioritization influence implementation across health systems [12,13]. Key obstacles include resistance to change, limited interpretability of algorithms, and concerns regarding data privacy and security [14,15]. Ethical issues, such as algorithmic bias and equitable access, are critical for maintaining trust in AI-assisted care [16,17]. Studies emphasize the need for rigorous evaluation of AI value propositions to capture clinical, operational, and economic impact [18].
AI applications are rapidly expanding, ranging from fraud detection in healthcare claims [19,20] to remote monitoring devices and digital pills that enhance patient adherence and reduce costs [21,22]. Transparency and interpretability are key to successful implementation, and explainable AI approaches have been emphasized to foster clinician understanding, confidence, and adoption [23,24,25]. Education and professional training programs further support AI integration, equipping healthcare providers with the necessary skills for safe and effective engagement with AI technologies [26,27,28]. Nurses and allied health professionals play a particularly important role in operationalizing AI in routine practice [29,30].
Healthcare professionals perceive AI both as an opportunity and a challenge. While AI can augment clinical decision-making, concerns remain regarding workflow disruption and potential shifts in required skills [31]. Emerging generative AI technologies introduce additional implementation challenges, necessitating robust validation and adherence to evidence-based practices [32]. Regulatory initiatives, including FDA guidance and the European Union AI Act, aim to ensure safety, accountability, and governance in AI deployment [33,34,35,36].
High-quality reporting of AI research is essential for reproducibility, evidence synthesis, and clinical translation. Systematic reviews reveal inconsistent reporting in randomized controlled trials and clinical studies involving AI, highlighting the importance of adhering to standardized guidelines [37,38,39]. Transparent methodology, clear algorithm documentation, and standardized outcome measures are critical to advancing AI from research to bedside applications [38,39]. This review is structured to first assess AI technical performance, then evaluate implementation outcomes, ethical considerations, and regulatory challenges, providing a holistic perspective on AI integration in healthcare.
The integration of AI into healthcare presents opportunities to enhance diagnostics, optimize resource use, and personalize patient care. Achieving these benefits requires careful attention to ethical, educational, regulatory, and operational factors, alongside adherence to robust reporting practices [40,41,42,43].
While a substantial body of literature documents the technical prowess of AI algorithms [1,2,3,4], a critical knowledge gap persists in understanding the systemic factors that influence the transition from technical validation to real-world integration [5,6].
This review aims to critically examine the methods, applications, and challenges of artificial intelligence in smart healthcare, emphasizing technological innovations, adoption barriers, regulatory considerations, and implications for clinical practice and healthcare delivery.

2. Materials and Methods

2.1. Protocol

This systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines.

2.2. Eligibility Criteria

Studies were selected based on predefined inclusion and exclusion criteria. Eligible studies included (i) peer-reviewed original research, reviews, and clinical studies evaluating artificial intelligence (AI) applications in healthcare; (ii) studies reporting technical performance, clinical validation, implementation outcomes, or ethical considerations; and (iii) publications in English from 2000 to 2025. Exclusion criteria comprised (i) conference abstracts without full-text availability; (ii) studies not involving healthcare applications of AI; and (iii) opinion pieces, editorials, or non-peer-reviewed reports.

2.3. Information Sources and Search Strategy

A comprehensive literature search was performed across PubMed, Scopus, Web of Science, and IEEE Xplore databases. The search strategy combined controlled vocabulary (MeSH terms) and free-text keywords using Boolean operators to create explicit combinations such as (“artificial intelligence” OR “AI” OR “machine learning” OR “deep learning”) AND (“healthcare” OR “clinical applications” OR “digital health”). Filters for publication date (2000–2025), English language, and human studies were applied. Grey literature, including institutional reports and preprints, was considered to minimize publication bias. The last search was conducted on 1 September 2025 (Table A1; see Appendix A).
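For transparency, the Boolean combinations described above can be assembled programmatically before being adapted to each database's syntax. The sketch below is illustrative only; `build_query` is our own hypothetical helper, not part of the registered protocol.

```python
# Illustrative sketch: assemble the Boolean search string reported above
# from its two term groups. `build_query` is a hypothetical helper name.

def build_query(ai_terms, setting_terms):
    """OR-join each term group, then AND-combine the groups."""
    def group(terms):
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    return f"{group(ai_terms)} AND {group(setting_terms)}"

ai_terms = ["artificial intelligence", "AI", "machine learning", "deep learning"]
setting_terms = ["healthcare", "clinical applications", "digital health"]
print(build_query(ai_terms, setting_terms))
```

Generating the string once and pasting it into each database interface helps keep the search identical across PubMed, Scopus, Web of Science, and IEEE Xplore.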

2.4. Study Selection

All records identified through database searching were imported into EndNote X9 for deduplication. Two independent reviewers screened titles and abstracts, followed by full-text assessment for eligibility. Disagreements were resolved through discussion, with a third reviewer consulted when consensus could not be reached. A PRISMA flow diagram was used to document the selection process (Figure A1; see Appendix B).

2.5. Data Extraction

A standardized data extraction form was developed in Microsoft Excel, capturing: study characteristics (author, year, country, setting), AI methods (machine learning, deep learning, natural language processing, computer vision), clinical domain, sample size, technical performance metrics (accuracy, sensitivity, specificity, AUC), implementation outcomes (workflow impact, adoption rates, economic outcomes), ethical and regulatory considerations, and limitations. Data extraction was independently performed by two reviewers, with discrepancies reconciled through consensus.
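The extracted performance metrics follow their standard definitions. As a reference, the sketch below (our own illustrative code, not drawn from any included study) computes accuracy, sensitivity, and specificity from confusion-matrix counts, and AUC via the Mann–Whitney rank formulation.

```python
# Standard definitions of the extracted performance metrics (illustrative).

def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (Mann-Whitney U formulation; ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

acc, sens, spec = confusion_metrics(tp=90, fp=10, tn=85, fn=10)
print(round(sens, 2), round(spec, 2))               # 0.9 0.89
print(round(auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.4]), 3))  # 0.889
```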

2.6. Quality Assessment

Methodological quality and risk of bias were assessed using the Joanna Briggs Institute Critical Appraisal Tools for observational and interventional studies and the PROBAST tool for predictive modeling studies. Each study was evaluated across domains, including study design, participant selection, data quality, model validation, and reporting transparency. Studies were categorized as low, moderate, or high risk of bias (Table A2; see Appendix C).
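The mapping from domain-level judgments to the three-tier overall categorization can be sketched as follows. This is our own simplification of the PROBAST-style decision rule (overall risk is driven by the worst-rated domain), and the domain names are illustrative.

```python
# Simplified sketch of the overall risk-of-bias judgment: a study is high
# risk if any domain is high, low only if every domain is low, and
# moderate otherwise. This condenses the published PROBAST-style rule.

def overall_risk(domain_judgments):
    """domain_judgments: dict of domain name -> 'low' | 'moderate' | 'high'."""
    levels = set(domain_judgments.values())
    if "high" in levels:
        return "high"
    if levels == {"low"}:
        return "low"
    return "moderate"

study = {"design": "low", "selection": "low", "data quality": "moderate",
         "validation": "low", "reporting": "low"}
print(overall_risk(study))  # moderate
```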

2.7. Data Synthesis

Given the heterogeneity in AI methods, clinical contexts, and outcome measures, a narrative synthesis approach was adopted. Studies were systematically compared across AI techniques, clinical domains, and outcome types. Explicit comparisons were made by summarizing technical performance metrics (accuracy, sensitivity, specificity, and AUC) and implementation outcomes within and across clinical domains. Trends, similarities, and differences between models were identified to organize the synthesis. Descriptive statistics were used to summarize quantitative metrics where possible (e.g., accuracy, sensitivity, AUC, workflow improvements). Emerging themes, challenges, and evidence gaps were identified to inform future research and practice.
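The within-domain comparisons described above amount to simple descriptive aggregation of the extracted metrics. A minimal sketch, using hypothetical records and field names of our own choosing, illustrates the approach.

```python
# Minimal sketch of the descriptive synthesis: group extracted metrics by
# clinical domain and report count, range, and mean. Records are hypothetical.
from collections import defaultdict
from statistics import mean

records = [  # (domain, metric, value) as captured in the extraction form
    ("radiology", "accuracy", 0.92), ("radiology", "accuracy", 0.98),
    ("pathology", "concordance", 0.96),
    ("readmission", "auc", 0.76), ("readmission", "auc", 0.82),
]

grouped = defaultdict(list)
for domain, metric, value in records:
    grouped[(domain, metric)].append(value)

for (domain, metric), values in sorted(grouped.items()):
    print(f"{domain}/{metric}: n={len(values)}, "
          f"range={min(values)}-{max(values)}, mean={mean(values):.2f}")
```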

2.8. Ethical Considerations

This review utilized published and publicly available data and did not involve human participants; thus, formal ethical approval was not required. All procedures adhered to ethical standards for systematic reviews.

2.9. Limitations

Several limitations should be considered. First, the review was restricted to English-language publications, which may introduce language bias. Second, heterogeneity in study designs, AI methodologies, and outcome reporting limited the ability to perform meta-analysis and may affect comparability. Third, the rapidly evolving nature of AI in healthcare implies that some recent studies may have been missed despite comprehensive search strategies. Finally, publication bias may influence reported performance metrics, as studies demonstrating positive AI outcomes are more likely to be published.

3. Results

3.1. Technical Performance and Clinical Validation

Artificial intelligence (AI) applications demonstrate robust technical performance across multiple clinical domains. In medical imaging, AI algorithms for dermatology achieved classification accuracy comparable to board-certified dermatologists, with sensitivity exceeding 95% for malignant melanoma detection [1]. Deep learning models in radiology demonstrated diagnostic accuracy between 92% and 98% for pulmonary nodules, fractures, and intracranial hemorrhages [2,3,4]. In pathology, AI-assisted analysis of digital slides reduced interpretation time by 30–50%, maintaining diagnostic concordance rates above 96% with manual review [3,5]. Natural language processing (NLP) applications extracted clinical information from unstructured texts with F1 scores of 0.85–0.92 for entity recognition tasks, including medication extraction and symptom identification [2,7]. Predictive models for hospital readmission risk achieved area under the curve (AUC) values of 0.76–0.82 across diverse healthcare systems [6,8]. Machine learning tools applied to genomic analyses attained 85–90% accuracy in variant classification and pathogenicity prediction [9,10]. All numerical results are reported as extracted from the original studies, which include clinical trials, retrospective cohort analyses, and simulation studies. Where multiple publications report overlapping datasets, this is noted in Table 1, which summarizes key metrics per domain.

3.2. Implementation Outcomes and Healthcare System Impact

Real-world adoption varies, with AI integrated into 15–25% of clinical departments, most frequently in radiology (42%), pathology (38%), and cardiology (31%) [12,13,40]. Effective implementations reported 20–35% reductions in diagnostic turnaround times and 15–25% improvement in workflow efficiency [11,18,41]. Nonetheless, 60% of AI projects failed to progress beyond the pilot phase due to integration challenges [14,16]. Economic evaluations demonstrated potential operational cost savings of 10–20% through AI-driven resource allocation [11,17], while automated administrative systems reduced billing errors by 25–40% and claims processing time by 50–60% [19,20]. AI-enabled remote monitoring reduced hospital readmissions by 30–45% [21,22] (Figure 1). These values are drawn from a combination of controlled trials, observational studies, and health system reports, as detailed in Table 1.

3.3. Human–AI Interaction and Workforce Integration

Clinician acceptance varies by specialty and experience. Surveys indicate that 68% of physicians are willing to incorporate AI for diagnostic support, whereas only 35% trust AI for treatment decisions [15,42,43]. Among nursing staff, 45% adopted AI-assisted patient monitoring, with 72% noting improved detection of clinical deterioration [26,29,30]. Training programs increased AI literacy by 40–55%, though 65% of institutions lacked structured AI education curricula [27,28]. Human–AI collaboration models improved diagnostic accuracy by 25% when explainability features were provided [23,24,25], but 55% of providers reported increased cognitive load without adequate training [31,32] (Table 1; Figure 2).

3.4. In-Depth Case Studies and Technical Analyses

Case Study 1: Success in radiology AI integration
  • Context: Implementation patterns from successful deployments show that seamless PACS integration and triage-based functionality are critical success factors [12,14,41].
  • AI Solution: Deep learning CNNs for medical imaging have demonstrated high performance in controlled studies [2,3,4].
  • Success Factors: Workflow integration that reduces diagnostic turnaround time by 20–35% without disrupting clinical routine is a consistent theme in successful implementations [14,41]. The “triage assistant” model, where AI flags cases but leaves final diagnosis to clinicians, aligns with the observed preference for “human near the loop” approaches [44].
  • ROI Analysis: Economic evaluations demonstrate that operational cost savings of 10–20% are achievable through AI-driven efficiency gains [11,17].
Technical analysis: The explainable AI (XAI) landscape
Our review identified that while only 28% of clinical AI tools produce interpretable outputs [45], the incorporation of explainability features increases diagnostic accuracy by 25% and boosts clinician trust scores by 35 points [23,25]. The tension between model complexity and interpretability represents a key challenge in clinical deployment. Data are derived from a mix of observational studies and pilot implementations; Table 1 notes overlapping data sources.
Case Study 2: Failure of a predictive analytics platform
  • Context: A multi-hospital US health system invested in a machine learning platform to predict patient readmission risk.
  • AI Solution: A complex ensemble model using EHR data.
  • Root cause of failure:
    • Interoperability challenges: The finding that over 50% of implementations face interoperability issues [46] manifests in failures where models require structured data fields that are inconsistently populated across different EHR systems.
    • Trust barriers: The low trust in unsupervised systems (35% of physicians) [15,42,43] contributes to failure when “black-box” models are deployed without adequate explainability features.
    • Workflow misalignment: Implementations that deliver AI outputs outside clinicians’ native workflow experience low engagement, contrasting with the 20–35% workflow efficiency improvements seen in well-integrated systems [11,18,41].
  • Outcome: The project was decommissioned after a 12-month pilot, representing a significant financial and operational loss. Numerical values originate from institutional reports and peer-reviewed implementation studies, as annotated in Table 1.

3.5. Regulatory Compliance and Quality Assurance

Regulatory frameworks are evolving, with 23% of AI applications receiving FDA clearance as software-as-a-medical-device [33,34]. The EU Artificial Intelligence Act classifies 45% of healthcare AI systems as high-risk, requiring strict validation [35,36]. Quality assessment of randomized controlled trials revealed only 32% met complete reporting standards [37,38]. Post-market surveillance indicated algorithm performance drift in 15% of systems within 12 months [26,39], and bias detection frameworks found disparities exceeding 10% in 28% of evaluated models [47,48,49]. Explainability requirements increased model development time by 25–40% but improved clinician trust scores by 35 points [23,25]. These percentages are derived from a mix of regulatory reports, post-market surveillance studies, RCTs, and observational analyses. Overlaps between post-market performance and regulatory datasets are noted in Table 1 and Table 2.

3.6. Ethical Considerations and Patient Perspectives

Patient acceptance ranged from 52% to 78%, influenced by transparency and perceived benefit [50,51,52]. Data privacy concerns affected 65% of patients, with 45% unwilling to share full medical histories for AI training [53,54]. Ethics committees reported a 35% rise in AI-related protocol submissions, focusing on informed consent (42%) and data governance (38%) [55,56,57]. Health equity assessments revealed 40% of AI applications had performance gaps exceeding 5% across socioeconomic groups [58,59,60], whereas patient-centered design improved satisfaction scores by 28 points [61,62]. Cultural adaptation requirements affected 55% of global AI deployments, necessitating localization costs of 15–25% [63,64]. These figures are derived from cross-sectional surveys, patient cohort studies, and institutional ethics reports. Where multiple surveys report on the same patient populations [65], overlaps are annotated in Table 1 and Table 2.

3.7. Emerging Applications and Future Directions

Generative AI achieved 60% accuracy in automating clinical documentation, reducing physician typing time by 40% [66,67,68]. Large language models reached 75% concordance with specialist responses in patient education contexts, though hallucination rates remained 8–12% [68,69,70]. AI platforms for drug discovery reduced preclinical development time by 30–40% and increased candidate success rates by 25% [71,72]. Blockchain-AI integration improved clinical trial data security by 50%, with implementation costs rising by 20–35% [54,73]. Federated learning reduced inter-institutional data transfer by 80–90% [74,75], while quantum computing–AI hybrids showed potential 100-fold acceleration in molecular simulations, although clinical application remains experimental [76,77,78]. Values are extracted from simulation studies, pilot implementations, and preclinical research analyses. Overlapping data sources across studies are indicated in Table 1.

3.8. Methods and Technological Approaches

AI methods span multiple techniques. GeoAI was applied in 38 studies covering population health, infectious disease tracking, and health equity mapping [79]. Machine learning (ML) and deep learning (DL) dominated over 70% of studies [80], yet only 28% of clinical AI tools produced interpretable outputs for clinicians [45]. Simulation-based approaches were used in 14 healthcare system modeling implementations [81,82]. Less than 20% of models validated in tertiary hospitals reached real-world deployment [83]. These counts come from systematic scoping of peer-reviewed studies and simulation-based healthcare system models. Repeated reporting from overlapping datasets is annotated in Table 1.

3.9. Applications Across Clinical Domains

AI enhances diagnostics, predictive analytics, and workflow automation. A scoping review of 196 articles documented AI-driven system-wide transformation [84], with 52 health systems using AI to strengthen efficiency and equity [85]. High-income countries accounted for 64% of pilot programs [86]. Across 27 real-world deployments, integration with electronic health records showed mixed success [87]. Big data combined with AI presented 45 opportunities and 37 challenges, including bias and generalizability concerns [88]. Ethical reviews cataloged 33 risk scenarios, while opportunity–risk frameworks identified 16 high-value use cases and 12 hazards [89,90]. Specialty-specific applications include pediatrics (12 NICU case studies) [91], reproductive medicine (IVF embryo selection accuracy 85%) [92,93], cardiovascular medicine (predictive sensitivity 78–92%) [94], diabetes management (HbA1c reductions 0.5–1.2%) [95], mental health (112 AI-enabled psychiatric interventions, 84 assessment tools) [96,97], dentistry (1200 publications indexed 2000–2023) [98], and preventive adolescent health (39 AI-assisted tools) [99]. These figures are drawn from scoping reviews, real-world deployment reports, clinical trials, and observational studies. Overlapping datasets and repeated reporting are annotated in Table 1.

3.10. Challenges in AI Adoption

Fewer than 35% of models met reproducibility standards across external datasets [100]. Barriers included infrastructure, data availability, workforce skills [101], and interoperability challenges observed in 41% of integrations [46]. Workforce surveys indicated 43% feared job displacement, while 58% viewed AI as supportive rather than substitutive [102]. Digital literacy training was prioritized by 61% of professional groups [103,104]. Bias and fairness issues were reported in 29 systematic reviews, with under-representation of women and minorities [105]. Ethical dilemmas included informed consent gaps (36%), accountability disputes, and liability ambiguities [106,107,108,109], with human rights risks including surveillance and inequity [110]. Fewer than 10% of studies provided open datasets for independent validation [111]. Data are based on systematic reviews, surveys, and observational studies. Overlaps between datasets, especially across systematic reviews and surveys, are noted in Table 2.

3.11. Legal and Governance Dimensions

Governance frameworks emphasized accountability across 14 domains, including data stewardship and algorithmic oversight [112]. Only 27% of countries had formal AI governance [113]. In the United States, 13 legal gaps were identified [114], and the EU highlighted 21 GDPR compliance issues [115]. Strategic governance addressed harmonization across civil, military, and humanitarian applications [116,117]. Open science reviews indicated 68% of studies lacked transparent code and data [118], while medico-legal analyses examined 17 case-based scenarios [119] and ethical reviews framed AI as a societal issue with 23 consequences [120]. Defense applications included 11 AI interventions adapted from military research [121], and 34 global guidelines were reviewed [122]. Overall, 72% of AI governance challenges remain unresolved [123,124]. These figures are derived from policy analyses, legal reviews, systematic scoping studies, and open science audits. Overlapping reporting across governance and policy studies is documented in Table 2.

3.12. Regional and System-Specific Insights

Regional heterogeneity is pronounced. In Canada, 59% of hospitals initiated AI pilots, but fewer than 15% fully scaled [125,126,127]. Singapore oversaw 18 AI-enabled medical devices [122]. In India, 62% of clinicians reported insufficient training [128]. Low-resource settings reported 73% of facilities lacked reliable internet for AI applications [129]. Global North countries dominate governance discourse, with only 9% of publications from the Global South [130,131]. In the UK, 64% of doctors expressed ethical concerns about AI, emphasizing accountability gaps [132,133] (Figure 3). Numbers come from cross-national surveys, policy reports, and observational studies. Duplicates across multi-country reports are indicated in Table 2.

3.13. Stakeholder Perceptions, Adoption, and Operational Impacts

Clinicians prioritized interpretability and safety over accuracy [134,135]. In the UK, 71% supported AI for diagnostics, but fewer than 30% endorsed unsupervised decision-making [132]. “Human near the loop” models persisted in 82% of implementations [44]. Hospital workflow integration ranged from 20% to 45%, predominantly for diagnostic support [136]. Over 60% of healthcare organizations initiated at least one AI project, mainly in imaging, predictive analytics, and patient monitoring [137,138,139]. Across 38 countries, 72 AI implementations targeted disease prediction, triage, and operational efficiency [140]. Diagnostic accuracy ranged from 85% to 94% in ophthalmology, cardiology, and radiology, reducing errors by 17% and improving guideline adherence by 12% [141,142]. Critical care decision-support systems decreased decision-making time from 22 to 15 min per case, a 32% reduction [143]. Machine learning platforms integrating electronic health records, genomic data, and wearables achieved 91.2% mean accuracy for disease progression [144]. ChatGPT-based models aligned 87% with expert judgment in risk stratification [145,146]. AI-driven health information management reduced manual chart review by 40–55% and improved record completeness [147]. Dashboards decreased administrative delays by 15–20% and increased algorithm utilization by 25% [148,149]. These data are drawn from surveys, observational deployment studies, and pilot reports. Overlaps among multi-center surveys and system implementations are noted in Table 2.

3.14. Evaluation, Validation, and Limitations

Surveys revealed 68% of institutions lacked formal AI bioethics policies [150]. Hospitals implementing structured frameworks emphasizing transparency, accountability, fairness, and human oversight reported 30% higher stakeholder trust [151]. Regulatory uncertainty paused 42% of projects [152]. Only 56% of algorithms achieved human-level equivalence in simulated scenarios [153], and 34% of clinicians expressed concerns about reliability and interpretability [154]. Post-deployment monitoring reduced error rates from 8.5% to 5.2% over six months [142]. Scalability challenges included data heterogeneity, legacy systems, low-resource integration, and clinician resistance, affecting over 50% of projects, with only 18% of hospitals achieving full deployment [139,154,155]. Ethical concerns, algorithmic biases, and lack of interpretability were reported in 62% of institutions [150]. Values originate from surveys, observational studies, and post-deployment monitoring reports. Overlaps in datasets across surveys and monitoring studies are annotated in Table 2.
Table 1. Key metrics per domain.
| Domain | Metric/Outcome | Numerical Result | References |
| --- | --- | --- | --- |
| Medical imaging | Classification accuracy for dermatology AI | Comparable to board-certified dermatologists | [1] |
| | Sensitivity for malignant melanoma detection | >95% | [1] |
| | Radiology diagnostic accuracy | 92–98% | [2,3,4] |
| Pathology | Interpretation time reduction | 30–50% | [3,5] |
| | Diagnostic concordance | >96% | [3,5] |
| Natural language processing (NLP) | F1 score for entity recognition | 0.85–0.92 | [2,6] |
| Predictive modeling | Hospital readmission AUC | 0.76–0.82 | [7,8] |
| Genomics | Variant classification accuracy | 85–90% | [9,10] |
| Implementation & workflow | Adoption in clinical departments | 15–25% | [11,12,13] |
| | Adoption by specialty: Radiology | 42% | [11,12,13] |
| | Adoption by specialty: Pathology | 38% | [11,12,13] |
| | Adoption by specialty: Cardiology | 31% | [11,12,13] |
| | Diagnostic turnaround time reduction | 20–35% | [14,15,16] |
| | Workflow efficiency improvement | 15–25% | [14,15,16] |
| | Cost savings via AI | 10–20% | [15,19] |
| | Billing error reduction | 25–40% | [20,21] |
| | Claims processing time reduction | 50–60% | [20,21] |
| | Hospital readmission reduction (remote monitoring) | 30–45% | [15,19] |
| Human–AI interaction | Physician willingness for AI diagnostics | 68% | [24,25,26] |
| | Physician trust in AI treatment recommendations | 35% | [24,25,26] |
| | Nursing staff adoption of AI monitoring | 45% | [27,28,29] |
| | Improvement in clinical deterioration detection | 72% | [27,28,29] |
| | AI literacy improvement through training | 40–55% | [30,31] |
| | Diagnostic accuracy improvement with AI support | 25% | [32,33,34] |
| Regulatory & quality assurance | FDA clearance of AI applications | 23% | [37,38] |
| | EU high-risk classification of AI systems | 45% | [39,40] |
| | RCTs meeting complete AI reporting standards | 32% | [41,42] |
| | Algorithm performance drift (12 months) | 15% | [27,43] |
| | Performance disparity across demographics | >10% in 28% of models | [47,48,49] |
| Patient & ethical perspectives | Patient acceptance of AI | 52–78% | [50,51,52] |
| | Patients unwilling to share full history | 45% | [53,54] |
| | Increase in AI-related ethics protocol submissions | 35% | [55,56,57] |
| | Health equity performance gaps | >5% in 40% of AI applications | [58,59,60] |
| | Patient satisfaction improvement via patient-centered design | +28 points | [61,62] |
| Emerging applications | Generative AI accuracy in documentation | 60% | [66,67,68] |
| | Reduction in physician typing time | 40% | [66,67,68] |
| | Large language model concordance with specialists | 75% | [68,69,70] |
| | Drug discovery preclinical time reduction | 30–40% | [71,72] |
| | Drug candidate success rate increase | 25% | [71,72] |
| | Blockchain-AI data security improvement | 50% | [54,73] |
| | Federated learning data transfer reduction | 80–90% | [74,75] |
| Clinical specialty applications | IVF embryo selection accuracy | 85% | [92,93] |
| | Cardiovascular predictive sensitivity | 78–92% | [94] |
| | Diabetes HbA1c reduction | 0.5–1.2% | [95] |
| | Critical care decision-making time reduction | 22 → 15 min per case | [143] |
| | Machine learning disease progression prediction accuracy | 91.2% | [144] |
| | ChatGPT-based risk stratification alignment | 87% | [146] |
| | Manual chart review reduction via AI | 40–55% | [147] |
| | Administrative delay reduction | 15–20% | [148,149] |
Table 2. Adoption rates, governance, and ethical/implementation challenges across regions and institutions.
Domain/Aspect | Metric/Outcome | Numerical Result/Observation | References
Global adoption | Healthcare organizations initiating ≥1 AI project | >60% | [139]
Hospital workflow integration | 20–45% | [136]
AI implementations across 38 countries | 72 projects | [140]
Pilot program concentration in high-income countries | 64% | [86]
Regional adoption | Canada: hospitals initiating AI pilots | 59% | [125,126,127]
Canada: hospitals fully scaling AI | <15% | [125,126,127]
Singapore: AI-enabled medical devices | 18 devices | [122]
India: clinicians reporting insufficient AI training | 62% | [128]
Low-resource settings: facilities lacking reliable internet | 73% | [129]
Global North vs. Global South publications | 91% vs. 9% | [130,131]
UK doctors expressing ethical concerns about AI reliance | 64% | [132,133]
Workforce & stakeholder perceptions | Clinicians prioritizing interpretability and safety | Majority | [134]
UK clinicians supporting AI for diagnostics | 71% | [132]
UK clinicians endorsing unsupervised AI decision-making | <30% | [132]
Implementations maintaining “human near the loop” | 82% | [44]
Workforce priorities: training, infrastructure, ethical clarity | 61%, 55%, 48% | [104]
Governance & regulatory | Countries with formal AI governance structures | 27% | [113]
Regulatory uncertainty pausing AI projects | 42% | [152]
Governance challenges unresolved globally | 72% | [123,124]
Legal gaps identified in the US (privacy, liability, malpractice) | 13 gaps | [114]
EU GDPR compliance issues identified | 21 issues | [115]
Hospitals implementing structured frameworks with transparency, accountability, fairness, human oversight | 30% higher stakeholder trust reported | [151]
Ethical & implementation challenges | Institutions lacking formal AI bioethics policies | 68% | [150]
AI projects affected by ethical concerns, algorithmic biases, interpretability | 62% | [150]
Clinicians concerned about reliability and interpretability | 34% | [154]
Scalability challenges in low-resource or legacy systems | >50% of projects | [139,152]
Reproducibility across external datasets | <35% of AI models | [100]
Workforce fear of job displacement | 43% | [102]
Workforce viewing AI as supportive | 58% | [102]
Digital literacy training prioritized | 61% | [104]
Ethical dilemmas: informed consent gaps, accountability disputes, liability ambiguity | 36% informed consent gaps | [106,107,108,109]
Open datasets available for independent validation | <10% | [111]

4. Discussion

The integration of artificial intelligence into the fabric of healthcare represents one of the most significant technological shifts of the 21st century. Our systematic review synthesizes a vast body of evidence and interprets the findings to reveal how technical performance, human–AI interaction, regulatory structures, and global inequities interact to determine real-world AI adoption in healthcare. This discussion situates our findings within the existing literature and uniquely quantifies the systemic barriers, adoption gaps, and ethical challenges, demonstrating where AI technical success fails to translate into clinical and operational impact.
Our analysis confirms the remarkable technical proficiency of AI algorithms, particularly in image-based diagnostics such as radiology, dermatology, and pathology, where performance often rivals or surpasses that of human experts [1,3,4]. Yet our review highlights that systemic barriers (interoperability, workflow integration, and limited scalability) prevent more than 60% of projects from moving beyond the pilot stage, illustrating the gap between algorithmic capability and healthcare impact. Although high-income countries account for 64% of pilot programs, fewer than 35% of externally validated models progress to real-world deployment. This high failure rate mirrors patterns seen in other health information technology implementations but is exacerbated by the “black box” problem and the dynamic nature of AI models [14,16,83].
This “pilot purgatory” is not merely a technical failure but a systemic one. Prior reviews have often listed barriers such as data interoperability and workflow integration in isolation [14,16]. Our synthesis advances this by quantifying their collective impact: over 50% of projects are affected by data heterogeneity and legacy system incompatibility. This extends the work of Coiera et al. [83], who identified translational challenges, by demonstrating that these challenges are the norm, not the exception. The critical insight is that technical accuracy is a necessary but insufficient condition for adoption; the ecosystem’s readiness—defined by digital infrastructure, interoperable data standards, and scalable implementation frameworks—is the primary determinant of success.
The discourse on AI’s impact on the healthcare workforce has often vacillated between utopian and dystopian visions [31]. Our results provide quantitative evidence that 82% of implementations employ a “human near the loop” model, offering an original insight into clinician trust and collaboration patterns across multiple specialties, which previous reviews largely reported qualitatively. Clinicians overwhelmingly support AI as a diagnostic tool (68–71%) but exhibit deep skepticism towards autonomous decision-making (<30%), prioritizing interpretability and safety over raw accuracy [134]. This aligns with and refines the findings of Asan et al. [15], who identified trust as a central component of adoption. We provide quantitative evidence for how to build that trust: the incorporation of explainable AI (XAI) features increased diagnostic accuracy by 25% and boosted clinician trust scores by 35 points. However, a critical, often-overlooked finding is that 55% of providers reported increased cognitive load without adequate training. This challenges the assumption that AI inherently reduces burden and underscores the necessity of co-designing AI systems and training programs simultaneously, a point less emphasized in earlier reviews [26,27]. The future workforce model is not one of replacement but of symbiotic collaboration, requiring a fundamental evolution in clinical education and workflow design.
The regulatory environment for AI in healthcare is in a state of rapid flux. Beyond describing regulatory flux, our review uniquely quantifies its operational consequences: 42% of projects were paused due to incomplete governance structures, emphasizing that regulatory readiness, not algorithmic sophistication, is a critical determinant of adoption [112]. The EU AI Act’s classification of 45% of healthcare AI systems as “high-risk” underscores an appropriate cautionary stance. However, our data reveal a critical implementation gap in post-market surveillance: 15% of deployed algorithms exhibited performance drift within 12 months, and 28% of evaluated models demonstrated performance disparities exceeding 10% across demographic groups. This evidence quantitatively supports concerns about algorithmic bias raised in earlier qualitative ethical analyses and demonstrates that current validation processes are inadequate for ensuring long-term, equitable performance in dynamic clinical environments [47,48,105]. The governance challenge is a trilemma: balancing the pace of innovation with the imperative of patient safety and the ethical requirement of health equity. Our finding that structured governance frameworks increased stakeholder trust by 30% provides a compelling argument for their accelerated development and adoption.
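The post-market surveillance gap described above (performance drift and subgroup disparities going undetected after deployment) can be illustrated with a minimal monitoring sketch. The function names, data, and thresholds below are illustrative assumptions, not the methodology of any reviewed study; the 10-point disparity and 5-point drift cut-offs simply echo the magnitudes reported in this review.

```python
# Illustrative post-deployment monitoring sketch (assumed names/thresholds).

def subgroup_accuracy(records):
    """Accuracy per demographic group from (group, y_true, y_pred) tuples."""
    totals, correct = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

def disparity_flag(acc_by_group, max_gap=0.10):
    """Flag models whose best-to-worst subgroup gap exceeds 10 points."""
    gap = max(acc_by_group.values()) - min(acc_by_group.values())
    return gap, gap > max_gap

def drift_flag(baseline_acc, current_acc, tolerance=0.05):
    """Flag performance drift beyond an agreed tolerance."""
    return (baseline_acc - current_acc) > tolerance

# Toy audit data: group A is classified perfectly, group B only half the time.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]
acc = subgroup_accuracy(records)
gap, flagged = disparity_flag(acc)
print(acc, round(gap, 2), flagged)                       # 0.5 gap -> flagged
print(drift_flag(baseline_acc=0.92, current_acc=0.85))   # >5-point drop -> True
```

Running such checks on a schedule, rather than only at validation time, is what turns a one-off clearance into the continuous surveillance the discussion argues is currently missing.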
A particularly concerning theme emerging from our global analysis is the stark disparity in AI readiness and adoption. Our review uniquely quantifies the global divide: 64% of pilot programs are concentrated in high-income countries, 73% of low-resource facilities lack the infrastructure needed for AI, and only 9% of publications originate from the Global South. These figures indicate that AI development risks exacerbating existing health inequities unless equity-focused, frugal innovations are prioritized [86,131]. The risk is a “double burden”: existing disparities may be compounded by AI systems neither trained on representative data nor designed for low-resource contexts. Solutions include capacity building, federated learning approaches [74,75], and frugal AI tailored to underserved regions.
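The federated learning approach cited above can be sketched in a few lines: each site trains locally and shares only model parameters, which are averaged centrally. This is a minimal federated-averaging (FedAvg-style) illustration; the hospital names, sample counts, and two-parameter "model" are invented for demonstration and do not come from the reviewed studies.

```python
# Minimal federated-averaging sketch (illustrative data and names).

def fed_avg(site_updates):
    """Average model parameters weighted by each site's sample count.

    site_updates: list of (n_samples, params), params being a list of floats.
    Only these small aggregates leave each site -- raw patient records never
    do, which is the source of the 80-90% data-transfer reduction reported
    for federated approaches.
    """
    total = sum(n for n, _ in site_updates)
    dim = len(site_updates[0][1])
    return [
        sum(n * params[i] for n, params in site_updates) / total
        for i in range(dim)
    ]

# Three hypothetical hospitals train locally, then share only parameters.
updates = [
    (100, [0.2, 1.0]),   # hospital A
    (300, [0.4, 0.8]),   # hospital B
    (100, [0.2, 1.0]),   # hospital C
]
print(fed_avg(updates))  # weighted mean of each parameter
```

Because the aggregation needs only parameter vectors, a low-bandwidth site can participate in model building without ever exporting patient-level data, which is precisely why the approach is attractive for the low-resource settings discussed here.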
Finally, the journey of AI in healthcare is at a critical inflection point. Integrating our results reveals that AI success is determined less by algorithmic sophistication than by human factors, governance, and equity: the era of proving technical feasibility is giving way to the more complex challenge of ensuring responsible and equitable integration. Future efforts must therefore focus disproportionately on these systemic areas: developing robust implementation science for AI, creating agile and evidence-based regulatory pathways, fostering a culture of human–AI collaboration through redesigned education and workflows, and prioritizing equity as a core design principle from the outset. The promise of AI in healthcare is immense, but its ultimate value will be determined not by the sophistication of its algorithms but by its capacity to integrate seamlessly, ethically, and equitably into the service of human health.

Limitations

This review has several limitations. First, the restriction to English-language publications may have omitted relevant studies. Second, the heterogeneity of included studies precluded meta-analysis. Third, the rapidly evolving nature of AI means some findings may become dated. Fourth, many adoption estimates derive from surveys and reports that may have sampling biases. Finally, regional disparity analysis is limited by the under-representation of Global South publications.

5. Conclusions

The integration of artificial intelligence into healthcare marks a paradigm shift from a largely experiential discipline to one increasingly augmented by data-driven insights. Our systematic review affirms that the technical prowess of AI is no longer in doubt; its ability to match or exceed human performance in specific diagnostic and predictive tasks is well-established. However, the journey from algorithm to bedside has revealed a far more complex landscape than anticipated.
The central challenge has evolved from one of technical validation to one of systemic integration. The true measure of AI’s success will not be its standalone accuracy but its ability to function as a seamless, trusted, and equitable component of the healthcare ecosystem. This requires overcoming the palpable “implementation chasm” that sees many promising pilots fail to scale. It demands a foundational rethinking of the clinician-AI relationship, moving beyond automation to foster intelligent collaboration that augments, rather than complicates, clinical reasoning. Furthermore, the establishment of robust, adaptive governance and ethical frameworks is not a secondary concern but a prerequisite for sustainable adoption, ensuring that innovation is coupled with accountability, fairness, and transparency.
Looking forward, the focus must pivot. The next generation of AI research should prioritize the development of scalable implementation frameworks, human-centric design principles, and rigorous post-market surveillance. Equally critical is a global commitment to bridging the AI equity gap, ensuring these powerful tools are developed for and accessible to all health systems, not just the most resourced. Ultimately, the promise of AI in healthcare will be realized not by the sophistication of its code but by its capacity to enhance the human aspects of care, strengthen health systems, and deliver on the long-envisioned future of personalized, precise, and accessible medicine for all.

Author Contributions

Conceptualization, I.E.M. and B.M.R.; methodology, A.A.; software, M.E.A.; validation, H.A.K. and S.K.R.; formal analysis, M.E.F.; investigation, B.M.R.; resources, B.M.R.; data curation, I.E.M.; writing—original draft preparation, I.E.M.; writing—review and editing, E.F.; visualization, N.M.H.; supervision, E.F.; project administration, I.E.M.; funding acquisition, S.A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
IoMT: Internet of Medical Things
AUC: Area Under the Curve
NLP: Natural language processing
EU: European Union
GDPR: General Data Protection Regulation
ML: Machine learning
DL: Deep learning
[tiab]: Title/abstract
MeSH: Medical Subject Headings

Appendix A

Table A1. Search strategy.
Search Component | Keywords/MeSH Terms | Field Tags | Boolean/Logic | Rationale
Population/Setting | “Health”, “Healthcare”, “Medical”, “Clinical”, “Patient”, “Clinician”, “Hospital”, “Primary care” | [tiab], [Mesh] | OR combined | Captures all relevant healthcare populations and clinical settings where AI may be applied.
Intervention/Exposure | “Artificial Intelligence”, “Machine Learning”, “Deep Learning”, “Natural Language Processing”, AI, ML, “neural network”, “predictive analytics”, “Decision support” | [tiab], [Mesh] | OR combined | Includes all AI-related technologies and applications relevant to healthcare.
Comparison/Intervention context | “Intervention”, “Application”, “Implementation”, “Tool”, “System” | [tiab] | OR combined | Identifies studies describing practical AI applications or interventions in healthcare.
Outcomes | “Outcome”, “Effectiveness”, “Accuracy”, “Performance”, “Quality of care”, “Efficiency” | [tiab] | OR combined | Captures studies reporting measurable AI outcomes relevant to clinical or health system performance.
Date restriction | 2000–2025 | [Date–Publication] | | Focuses on contemporary evidence reflecting modern AI applications.
Combined search | Population AND Intervention AND Comparison AND Outcomes AND Date | | AND between main components | Ensures retrieval of studies that meet all PICO elements while maintaining sensitivity.
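The OR-within-components, AND-between-components logic of Table A1 can be made concrete by assembling a query string programmatically. The keyword lists below are abbreviated from the table, and the resulting syntax is illustrative (PubMed-style field tags), not the authors' verbatim search string.

```python
# Illustrative assembly of the Table A1 Boolean search (abbreviated terms).

population = ['"Healthcare"[tiab]', '"Clinical"[tiab]', '"Patient"[tiab]']
intervention = ['"Artificial Intelligence"[tiab]', '"Machine Learning"[tiab]',
                '"Deep Learning"[tiab]']
context = ['"Implementation"[tiab]', '"Application"[tiab]']
outcomes = ['"Accuracy"[tiab]', '"Effectiveness"[tiab]', '"Outcome"[tiab]']

def or_block(terms):
    """Join the synonyms of one PICO component with OR, in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# AND between the main components, per the combined-search row of Table A1.
query = " AND ".join(or_block(t) for t in
                     (population, intervention, context, outcomes))
query += ' AND ("2000"[Date - Publication] : "2025"[Date - Publication])'
print(query)
```

Keeping each component as a separate list makes the strategy easy to audit and to re-run with extended synonym sets during search updates.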

Appendix B

Figure A1. PRISMA flow diagram showing the process for the selection of included studies.

Appendix C

Table A2. Representative subset of studies: study design, population, intervention, outcomes, and quality assessment.
Author(s) & Year | Study Design | Population/Setting | AI Intervention/Focus | Comparison | Outcome(s) Assessed | Quality Assessment Tool | Risk of Bias/Quality Rating
Hodges, 2025 [27] | Review | Global health workforce | Skill distortion due to AI | Non-AI workforce models | Workforce displacement, task shifting | CASP | Moderate–High
Starr et al., 2023 [29] | Cross-sectional workforce study | Nursing workforce across multiple countries | Workforce readiness for AI adoption in nursing | None (descriptive) | Skills gaps, readiness levels, barriers | STROBE | Moderate
Areshtanab et al., 2025 [30] | Systematic review | Global nursing settings | Readiness of nurses for AI integration | Traditional care workflows | Knowledge, barriers, readiness | AMSTAR-2 | Moderate
Shinners et al., 2023 [42] | Scoping review | High-, middle-, and low-income countries | AI in nursing practice | Manual decision processes | Role evolution, access disparity | JBI Scoping Review Checklist | Low
Shinners et al., 2020 [43] | Multi-country survey | Nurses in 11 countries | AI adoption in nursing | None | Utilization, perceptions, disparities | STROBE | Low–Moderate
Brault & Saxena, 2021 [47] | Conceptual analysis | Global populations | Algorithmic bias & governance | Standard ethical models | Bias sources, fairness challenges | CASP Qualitative | High
Rashid et al., 2024 [48] | Systematic review | Global health systems | Ethical concerns in medical AI | Traditional ethics models | Bias, autonomy, transparency | AMSTAR-2 | Moderate
Kritharidou et al., 2024 [49] | Comparative review | Clinical AI across multiple regions | Equity in AI clinical implementation | Non-AI clinical pathways | Disparities in performance | CASP | Low–Moderate
Esin et al., 2024 [50] | Cross-sectional | General population, Turkey | Public attitudes toward AI | None | Trust, acceptance, perceived risk | STROBE | Low
Witkowski et al., 2024 [51] | National survey | Poland | Public trust in AI systems | Non-AI technologies | Security perception, trust levels | STROBE | Low–Moderate
Syed et al., 2024 [52] | Survey | Saudi Arabia | Awareness of AI among adults | None | Knowledge score, usage likelihood | STROBE | Moderate
Khalid et al., 2023 [53] | Systematic review | Global healthcare | Blockchain-enabled privacy protection for AI | Conventional security methods | Privacy capability, decentralization outcomes | AMSTAR-2 | Moderate
Ratti et al., 2025 [57] | Policy review | International | Global governance of health AI | Existing governance models | Risks, oversight, inequity | CASP | High
Agarwal & Gao, 2024 [58] | Empirical analysis | Multinational | Global inequity in AI development | None | Innovation disparities, economic inequality | STROBE | Moderate
Thomasian et al., 2021 [59] | Review | LMICs | AI deployment barriers in low-resource regions | HIC AI development models | Infrastructure gaps, scalability | CASP | Moderate
Olawade et al., 2025 [60] | Narrative scoping review | LMICs | Use of telemedicine & AI tools | High-income country adoption | Health disparities, system capacity | JBI Checklist | Moderate

References

  1. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  2. Yu, K.H.; Beam, A.L.; Kohane, I.S. Artificial intelligence in healthcare. Nat. Biomed. Eng. 2018, 2, 719–731. [Google Scholar] [CrossRef] [PubMed]
  3. Bali, J.; Bali, O. Artificial intelligence in ophthalmology and healthcare: An updated review of the techniques in use. Indian J. Ophthalmol. 2021, 69, 8–13. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  4. Waldman, C.E.; Hermel, M.; Hermel, J.A.; Allinson, F.; Pintea, M.N.; Bransky, N.; Udoh, E.; Nicholson, L.; Robinson, A.; Gonzalez, J.; et al. Artificial intelligence in healthcare: A primer for medical education in radiomics. Pers. Med. 2022, 19, 445–456. [Google Scholar] [CrossRef] [PubMed]
  5. Manickam, P.; Mariappan, S.A.; Murugesan, S.M.; Hansda, S.; Kaushik, A.; Shinde, R.; Thipperudraswamy, S.P. Artificial Intelligence [AI] and Internet of Medical Things [IoMT] Assisted Biomedical Systems for Intelligent Healthcare. Biosensors 2022, 12, 562. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  6. Reddy, S.; Fox, J.; Purohit, M.P. Artificial intelligence-enabled healthcare delivery. J. R. Soc. Med. 2019, 112, 22–28. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  7. Gupta, N.S.; Kumar, P. Perspective of artificial intelligence in healthcare data management: A journey towards precision medicine. Comput. Biol. Med. 2023, 162, 107051. [Google Scholar] [CrossRef] [PubMed]
  8. Mehta, N.; Pandit, A.; Shukla, S. Transforming healthcare with big data analytics and artificial intelligence: A systematic mapping study. J. Biomed. Inform. 2019, 100, 103311. [Google Scholar] [CrossRef] [PubMed]
  9. Whirl-Carrillo, M.; Brenner, S.E.; Chen, J.H.; Crawford, D.C.; Kidziński, Ł.; Ouyang, D.; Daneshjou, R. Session Introduction: Precision Medicine: Using Artificial Intelligence to Improve Diagnostics and Healthcare. In Proceedings of the Pacific Symposium on Biocomputing 2023, Kohala Coast, HI, USA, 3–7 January 2023; pp. 257–262. [Google Scholar] [PubMed]
  10. Li, Y.-H.; Li, Y.-L.; Wei, M.-Y.; Li, G.-Y. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci. Rep. 2024, 14, 18994. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  11. Ball, H.C. Improving Healthcare Cost, Quality, and Access Through Artificial Intelligence and Machine Learning Applications. J. Healthc. Manag. 2021, 66, 271–279. [Google Scholar] [CrossRef] [PubMed]
  12. Poon, E.G.; Lemak, C.H.; Rojas, J.C.; Guptill, J.; Classen, D. Adoption of artificial intelligence in healthcare: Survey of health system priorities, successes, and challenges. J. Am. Med. Inform. Assoc. 2025, 32, 1093–1100. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  13. Kassam, A.; Kassam, N. Artificial intelligence in healthcare: A Canadian context. Healthc. Manag. Forum 2020, 33, 5–9. [Google Scholar] [CrossRef] [PubMed]
  14. Hassan, M.; Kushniruk, A.; Borycki, E. Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review. JMIR Hum. Factors 2024, 11, e48633. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  15. Asan, O.; Bayrak, A.E.; Choudhury, A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J. Med. Internet Res. 2020, 22, e15154. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  16. Wubineh, B.Z.; Deriba, F.G.; Woldeyohannis, M.M. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: A systematic literature review. Urol Oncol. 2024, 42, 48–56. [Google Scholar] [CrossRef] [PubMed]
  17. El Arab, R.A.; Al Moosa, O.A.; Sagbakken, M. Economic, ethical, and regulatory dimensions of artificial intelligence in healthcare: An integrative review. Front. Public Health 2025, 13, 1617138. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  18. Hennrich, J.; Ritz, E.; Hofmann, P.; Urbach, N. Capturing artificial intelligence applications’ value proposition in healthcare—A qualitative research study. BMC Health Serv. Res. 2024, 24, 420. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  19. Iqbal, M.S.; Abd-Alrazaq, A.; Househ, M. Artificial Intelligence Solutions to Detect Fraud in Healthcare Settings: A Scoping Review. Stud. Health Technol. Inform. 2022, 295, 20–23. [Google Scholar] [CrossRef] [PubMed]
  20. Sbodio, M.L.; López, V.; Hoang, T.L.; Brisimi, T.; Picco, G.; Vejsbjerg, I.; Rho, V.; Mac Aonghusa, P.; Kristiansen, M.; Segrave-Daly, J. Collaborative artificial intelligence system for investigation of healthcare claims compliance. Sci. Rep. 2024, 14, 11884. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  21. Albert, D. The future of artificial intelligence-based remote monitoring devices and how they will transform the healthcare industry. Future Cardiol. 2022, 18, 89–90. [Google Scholar] [CrossRef] [PubMed]
  22. Ilan, Y. Improving Global Healthcare and Reducing Costs Using Second-Generation Artificial Intelligence-Based Digital Pills: A Market Disruptor. Int. J. Environ. Res. Public Health 2021, 18, 811. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  23. Loh, H.W.; Ooi, C.P.; Seoni, S.; Barua, P.D.; Molinari, F.; Acharya, U.R. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade [2011–2022]. Comput. Methods Programs Biomed. 2022, 226, 107161. [Google Scholar] [CrossRef] [PubMed]
  24. Tavares, J. Application of Artificial Intelligence in Healthcare: The Need for More Interpretable Artificial Intelligence. Acta Med. Port. 2024, 37, 411–414. [Google Scholar] [CrossRef] [PubMed]
  25. Mohapatra, R.K.; Jolly, L.; Dakua, S.P. Advancing explainable AI in healthcare: Necessity, progress, and future directions. Comput. Biol. Chem. 2025, 119, 108599. [Google Scholar] [CrossRef] [PubMed]
  26. Montejo, L.; Fenton, A.; Davis, G. Artificial intelligence [AI] applications in healthcare and considerations for nursing education. Nurse Educ. Pract. 2024, 80, 104158. [Google Scholar] [CrossRef] [PubMed]
  27. Hodges, B.D. Education and the Adoption of AI in Healthcare: “What Is Happening?”. Healthc. Pap. 2025, 22, 39–43. [Google Scholar] [CrossRef] [PubMed]
  28. Kalthoff, D.; Prien, M.; Götz, N.A. “ai4health”—Development and Conception of a Learning Programme in Higher and Continuing Education on the Fundamentals, Applications and Perspectives of AI in Healthcare. Stud. Health Technol. Inform. 2022, 294, 785–789. [Google Scholar] [CrossRef] [PubMed]
  29. Starr, B.; Dickman, E.; Watson, J.L. Artificial Intelligence: Basics, Impact, and How Nurses Can Contribute. Clin. J. Oncol. Nurs. 2023, 27, 595–601. [Google Scholar] [CrossRef]
  30. Areshtanab, H.N.; Rahmani, F.; Vahidi, M.; Saadati, S.Z.; Pourmahmood, A. Nurses perceptions and use of artificial intelligence in healthcare. Sci. Rep. 2025, 15, 27801. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  31. Aquino, Y.S.J.; Rogers, W.A.; Braunack-Mayer, A.; Frazer, H.; Win, K.T.; Houssami, N.; Degeling, C.; Semsarian, C.; Carter, S.M. Utopia versus dystopia: Professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. Int. J. Med. Inform. 2023, 169, 104903. [Google Scholar] [CrossRef] [PubMed]
  32. Roberts, L.J.; Jayasena, R.; Khanna, S.; Arnott, L.; Lane, P.; Bain, C. Challenges for implementing generative artificial intelligence [GenAI] into clinical healthcare. Intern. Med. J. 2025, 55, 1063–1069. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  33. Warraich, H.J.; Tazbaz, T.; Califf, R.M. FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA 2025, 333, 241–247. [Google Scholar] [CrossRef] [PubMed]
  34. Mello, M.M.; Guha, N. Understanding Liability Risk from Using Health Care Artificial Intelligence Tools. N. Engl. J. Med. 2024, 390, 271–278. [Google Scholar] [CrossRef] [PubMed]
  35. Cohen, I.G.; Evgeniou, T.; Gerke, S.; Minssen, T. The European artificial intelligence strategy: Implications and challenges for digital health. Lancet Digit. Health 2020, 2, e376–e379. [Google Scholar] [CrossRef] [PubMed]
  36. van Kolfschooten, H.; van Oirschot, J. The EU Artificial Intelligence Act [2024]: Implications for healthcare. Health Policy 2024, 149, 105152. [Google Scholar] [CrossRef] [PubMed]
  37. Shahzad, R.; Ayub, B.; Siddiqui, M.A.R. Quality of reporting of randomised controlled trials of artificial intelligence in healthcare: A systematic review. BMJ Open 2022, 12, e061519. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  38. Shelmerdine, S.C.; Arthurs, O.J.; Denniston, A.; Sebire, N.J. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform. 2021, 28, e100385. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  39. Teo, Z.L.; Kwee, A.; Lim, J.C.; Lam, C.S.; Ho, D.; Maurer-Stroh, S.; Su, Y.; Chesterman, S.; Chen, T.; Tan, C.C.; et al. Artificial intelligence innovation in healthcare: Relevance of reporting guidelines for clinical translation from bench to bedside. Ann. Acad. Med. Singap. 2023, 52, 199–212. [Google Scholar] [CrossRef] [PubMed]
  40. Chen, M.; Decary, M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc. Manag. Forum. 2020, 33, 10–18. [Google Scholar] [CrossRef] [PubMed]
  41. Mukherjee, J.; Sharma, R.; Dutta, P.; Bhunia, B. Artificial intelligence in healthcare: A mastery. Biotechnol. Genet. Eng. Rev. 2024, 40, 1659–1708. [Google Scholar] [CrossRef] [PubMed]
  42. Shinners, L.; Aggar, C.; Stephens, A.; Grace, S. Healthcare professionals’ experiences and perceptions of artificial intelligence in regional and rural health districts in Australia. Aust. J. Rural. Health 2023, 31, 1203–1213. [Google Scholar] [CrossRef] [PubMed]
  43. Shinners, L.; Aggar, C.; Grace, S.; Smith, S. Exploring healthcare professionals’ understanding and experiences of artificial intelligence technology use in the delivery of healthcare: An integrative review. Health Inform. J. 2020, 26, 1225–1236. [Google Scholar] [CrossRef] [PubMed]
  44. Jackson, J.M.; Pinto, M.D. Human Near the Loop: Implications for Artificial Intelligence in Healthcare. Clin. Nurs. Res. 2024, 33, 135–137. [Google Scholar] [CrossRef] [PubMed]
  45. Markus, A.F.; Kors, J.A.; Rijnbeek, P.R. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 2021, 113, 103655. [Google Scholar] [CrossRef] [PubMed]
Figure 1. AI maturity pathway from technical development to measured system impact. The figure illustrates the four-stage progression of AI integration in healthcare: Technical Development & Validation, Pilot Implementation, Full-Scale Clinical Deployment, and Measured System Impact. More than 60% of healthcare organizations initiate AI projects at the pilot phase; the red crosses mark the key barriers that commonly prevent progression to larger-scale deployment: (1) data interoperability gaps (41%), where fragmented or incompatible data systems limit model performance; (2) lack of demonstrated return on investment (ROI), which reduces organizational willingness to expand AI initiatives; and (3) a pilot-only mindset, in which institutions conduct isolated pilots without establishing the infrastructure and long-term strategic planning required for system-wide adoption. Successful transition to full deployment is supported by explainable AI (XAI), clinician trust and training, and seamless EHR integration, contributing to measurable impacts such as 20–35% reductions in diagnostic turnaround, 30–45% reductions in readmissions, and 25% improvements in diagnostic accuracy through human–AI collaboration.
Figure 2. (A–D): AI utilization and impact in healthcare. (A) Distribution of AI applications across major clinical domains, showing highest adoption in radiology, followed by oncology, cardiology, and other specialties. (B) Use of AI tools within clinical departments, with medicine demonstrating the greatest integration compared with surgery and other units. (C) AI deployment in predictive analytics tasks, indicating predominant use in diagnostic prediction, followed by prognosis estimation and treatment-planning support. (D) Reported impacts of AI on healthcare performance, highlighting improvements in workflow efficiency, diagnostic accuracy, and patient outcomes.
Figure 3. Global disparities in AI healthcare adoption, readiness, and governance: (a) World map illustrating variation in AI healthcare readiness. Countries classified as Pilot Leaders demonstrate advanced AI deployment, whereas regions with Emerging Implementation show partial progress. Constrained regions—primarily low-resource settings—exhibit limited capacity, with 73% lacking the infrastructure required for effective AI integration. (b) Comparative distribution of formal AI governance frameworks by region. The European Union shows the highest level of regulatory maturity, followed by the United States, Canada, and Singapore. Low-income regions fall substantially below the global average. Under the EU AI Act, 45% of AI systems are categorized as “high-risk,” highlighting the need for strengthened oversight.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
