Governing Healthcare AI in the Real World: How Fairness, Transparency, and Human Oversight Can Coexist: A Narrative Review
Abstract
1. Introduction
2. Materials and Methods
3. Bias and Fairness
4. Explainability and Transparency
5. Safety and Quality
6. Privacy and Data Protection
7. Accountability and Liability
8. Human Oversight
9. Procurement and Deployment
10. Discussion
- Equity—ensuring that all patients are treated fairly;
- Transparency—enabling an understanding of how AI systems reach their decisions;
- Safety—preventing harmful errors;
- Privacy—protecting patients’ data;
- Responsibility—clarifying who is accountable in the event of problems;
- Oversight—maintaining meaningful human control;
- Organizational practices—determining how the hospital or institution integrates and uses AI in routine care.
11. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
| --- | --- |
| ABCDS | Algorithm-Based Clinical Decision Support (lifecycle/oversight framework for clinical prediction models) |
| AI | Artificial Intelligence |
| AUC | Area Under the Receiver Operating Characteristic Curve |
| CI/CD | Continuous Integration and Continuous Deployment (or Delivery) pipelines |
| CONSORT-AI | Consolidated Standards of Reporting Trials—Artificial Intelligence (CONSORT extension for AI interventions) |
| DECIDE-AI | Developmental and Exploratory Clinical Investigations of Decision-Support Systems Driven by Artificial Intelligence |
| DP | Differential Privacy |
| DPIA/DPIAs | Data Protection Impact Assessment/Data Protection Impact Assessments |
| ePHI | Electronic protected health information |
| ERM | Enterprise Risk Management |
| FDA | United States Food and Drug Administration |
| FL | Federated Learning |
| GDPR | General Data Protection Regulation |
| GMLP | Good Machine Learning Practice (joint principles) |
| HEAAL | Health Equity Across the AI Lifecycle (framework for assessing how AI affects health equity) |
| HIPAA | Health Insurance Portability and Accountability Act |
| HSA | Health Sciences Authority |
| IRB/IRBs | Institutional Review Board/Institutional Review Boards |
| IT | Information Technology |
| LIME | Local Interpretable Model-agnostic Explanations |
| LLM | Large Language Model |
| MAUDE | Manufacturer and User Facility Device Experience (FDA adverse event database for medical devices) |
| MFDS | Ministry of Food and Drug Safety |
| MHRA | Medicines and Healthcare products Regulatory Agency |
| ML | Machine Learning |
| MRI/CT | Magnetic Resonance Imaging/Computed Tomography |
| NHS | National Health Service |
| NICE | National Institute for Health and Care Excellence |
| NIST | National Institute of Standards and Technology |
| NLP | Natural Language Processing |
| NMPA | National Medical Products Administration |
| PCCP/PCCPs | Predetermined Change Control Plan/Predetermined Change Control Plans (for regulated AI/ML-enabled medical devices) |
| PET/PETs | Privacy-Enhancing Technology/Privacy-Enhancing Technologies |
| PMDA | Pharmaceuticals and Medical Devices Agency |
| RACI | Responsible, Accountable, Consulted, Informed |
| SaMD | Software as a Medical Device |
| SHAP | SHapley Additive exPlanations |
| SLA/SLAs | Service Level Agreement/Service Level Agreements |
| SPIRIT-AI | Standard Protocol Items: Recommendations for Interventional Trials—Artificial Intelligence (SPIRIT extension for AI interventions) |
| TGA | Therapeutic Goods Administration |
| TRL/TRLs | Technology Readiness Level/Technology Readiness Levels |
| XAI | Explainable Artificial Intelligence |
| YY | Chinese medical device industry standard code prefix (YY and YY/T series for sectoral specifications, e.g., dataset quality standards) |
References
- Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
- He, J.; Baxter, S.L.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef]
- Morley, J.; Machado, C.C.V.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef]
- Liu, X.; Cruz Rivera, S.; Moher, D.; Calvert, M.J.; Denniston, A.K. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nat. Med. 2020, 26, 1364–1374. [Google Scholar] [CrossRef]
- Cruz Rivera, S.; Liu, X.; Chan, A.-W.; Denniston, A.K.; Calvert, M.J.; The SPIRIT-AI and CONSORT-AI Working Group; SPIRIT-AI and CONSORT-AI Steering Group; SPIRIT-AI and CONSORT-AI Consensus Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Nat. Med. 2020, 26, 1351–1363. [Google Scholar] [CrossRef] [PubMed]
- Busch, F.; Kather, J.N.; Johner, C.; Moser, M.; Truhn, D.; Adams, L.C.; Bressem, K.K. Navigating the European Union Artificial Intelligence Act for healthcare. npj Digit. Med. 2024, 7, 210. [Google Scholar] [CrossRef] [PubMed]
- U.S. Food and Drug Administration. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions. 18 August 2025. Available online: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence (accessed on 14 January 2026).
- U.S. Food and Drug Administration. Artificial Intelligence in Software as a Medical Device (SaMD). 25 March 2025. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device (accessed on 14 January 2026).
- Medicines and Healthcare products Regulatory Agency (MHRA). Software and AI as a Medical Device Change Programme Roadmap. 14 June 2023. Available online: https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap (accessed on 14 January 2026).
- National Institute for Health and Care Excellence (NICE). Evidence Standards Framework for Digital Health Technologies (Updated to Include AI and Data-Driven Technologies with Adaptive Algorithms). Available online: https://www.nice.org.uk/corporate/ecd7 (accessed on 14 January 2026).
- NHS England. Artificial Intelligence—Information Governance Guidance (Includes Procurement-Time Checks, Medical-Device Status, and Reviewability of Outputs). 30 April 2025. Available online: https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence/ (accessed on 14 January 2026).
- Health Canada. Pre-Market Guidance for Machine Learning-Enabled Medical Devices. 5 February 2025. Available online: https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html (accessed on 14 January 2026).
- Therapeutic Goods Administration (TGA). Artificial Intelligence (AI) and Medical Device Software. 4 September 2025. Available online: https://www.tga.gov.au/products/medical-devices/software-and-artificial-intelligence/manufacturing/artificial-intelligence-ai-and-medical-device-software (accessed on 14 January 2026).
- Personal Data Protection Commission (PDPC). Model AI Governance Framework (Second Edition). Available online: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework (accessed on 14 January 2026).
- Health Sciences Authority (HSA). Regulatory Guidelines for Software Medical Devices—A Life Cycle Approach (Revision 2, 29 April 2022). Available online: https://www.hsa.gov.sg/docs/default-source/hprg-mdb/guidance-documents-for-medical-devices/regulatory-guidelines-for-software-medical-devices---a-life-cycle-approach_r2-%282022-apr%29-pub.pdf (accessed on 14 January 2026).
- Pharmaceuticals and Medical Devices Agency (PMDA). Report on AI-Based Software as a Medical Device (SaMD). 28 August 2023. Available online: https://www.pmda.go.jp/files/000266100.pdf (accessed on 14 January 2026).
- Ministry of Food and Drug Safety (MFDS). Guidance on the Review and Approval of Artificial Intelligence (AI)-Based Medical Devices. 20 July 2023. Available online: https://www.mfds.go.kr/eng/brd/m_40/view.do?seq=72627 (accessed on 14 January 2026).
- National Medical Products Administration (NMPA). NMPA Announcement on Guidance for the Classification Defining of AI-Based Medical Software Products. 8 July 2021. Available online: https://english.nmpa.gov.cn/2021-07/08/c_660267.htm (accessed on 14 January 2026).
- National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Available online: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf (accessed on 14 January 2026).
- World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. 28 June 2021. Available online: https://www.who.int/publications/i/item/9789240029200 (accessed on 14 January 2026).
- Ning, Y.; Teixayavong, S.; Shang, Y.; Savulescu, J.; Nagaraj, V.; Miao, D.; Mertens, M.; Ting, D.S.W.; Ong, J.C.L.; Liu, M.; et al. Generative artificial intelligence and ethical considerations in health care: A scoping review and ethics checklist. Lancet Digit. Health 2024, 6, e848–e856. [Google Scholar] [CrossRef] [PubMed]
- Du, J.; Tao, X.; Zhu, L.; Wang, H.; Qi, W.; Min, X.; Wei, S.; Zhang, X.; Liu, Q.; Du, Q. Development of a visualized risk prediction system for sarcopenia in older adults using machine learning: A cohort study based on CHARLS. Front. Public Health 2025, 13, 1544894. [Google Scholar] [CrossRef]
- Weissman, G.E. Evaluation and regulation of artificial intelligence medical devices. Annu. Rev. Biomed. Data Sci. 2025, 8, 81–99. [Google Scholar] [CrossRef]
- Maliha, G.; Gerke, S.; Cohen, I.G.; Parikh, R.B. Artificial intelligence and liability in medicine: Balancing safety and innovation. Milbank Q. 2021, 99, 629–647. [Google Scholar] [CrossRef]
- Pham, T. Ethical and legal considerations in healthcare AI: Innovation and policy for safe and fair use. R. Soc. Open Sci. 2025, 12, 241873. [Google Scholar] [CrossRef]
- Mahamadou, A.J.D.; Trotsyuk, A.A. Revisiting technical bias mitigation strategies. Annu. Rev. Biomed. Data Sci. 2025, 8, 287–303. [Google Scholar] [CrossRef]
- Hanna, M.G.; Pantanowitz, L.; Jackson, B.; Palmer, O.; Visweswaran, S.; Pantanowitz, J.; Deebajah, M.; Rashidi, H.H. Ethical and bias considerations in artificial intelligence/machine learning. Mod. Pathol. 2025, 38, 100686. [Google Scholar] [CrossRef]
- Dehghani, F.; Paiva, P.; Malik, N.; Lin, J.; Bayat, S.; Bento, M. Accuracy–fairness trade-off in ML for healthcare: A quantitative evaluation of bias mitigation strategies. Inf. Softw. Technol. 2025, 188, 107896. [Google Scholar] [CrossRef]
- Stanley, E.A.M.; Wilms, M.; Forkert, N.D. Disproportionate subgroup impacts and other challenges of fairness in artificial intelligence for medical image analysis. In Ethical and Philosophical Issues in Medical Imaging, Multimodal Learning and Fusion Across Scales for Clinical Decision Support, and Topological Data Analysis for Biomedical Imaging; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13755, pp. 14–25. [Google Scholar]
- Vogt, Y. Disability and algorithmic fairness in healthcare: A narrative review. J. Med. Artif. Intell. 2025, 8, 56. [Google Scholar] [CrossRef]
- Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef] [PubMed]
- Seyyed-Kalantari, L.; Zhang, H.; McDermott, M.B.A.; Chen, I.Y.; Ghassemi, M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat. Med. 2021, 27, 2176–2182. [Google Scholar] [CrossRef] [PubMed]
- Gichoya, J.W.; Banerjee, I.; Bhimireddy, A.R.; Burns, J.L.; Celi, L.A.; Chen, L.-C.; Correa, R.; Dullerud, N.; Ghassemi, M.; Huang, S.-C.; et al. AI recognition of patient race in medical imaging: A modelling study. Lancet Digit. Health 2022, 4, e406–e414. [Google Scholar] [CrossRef]
- Liu, M.; Ning, Y.; Teixayavong, S.; Mertens, M.; Xu, J.; Ting, D.S.W.; Cheng, L.T.-E.; Ong, J.C.L.; Teo, Z.L.; Tan, T.F.; et al. A translational perspective towards clinical AI fairness. npj Digit. Med. 2023, 6, 172. [Google Scholar] [CrossRef]
- Long, Y.; Novak, L.; Walsh, C.G. Searching for value-sensitive design in applied health AI: A narrative review. Yearb. Med. Inform. 2024, 33, 75–82. [Google Scholar] [CrossRef] [PubMed]
- Ferrara, E. Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci 2024, 6, 3. [Google Scholar] [CrossRef]
- Ministry of Economy, Trade and Industry (METI). Governance Guidelines for Implementation of AI Principles (Ver. 1.1). 28 January 2022. Available online: https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf (accessed on 14 January 2026).
- Ministry of Science and ICT (MSIT). MSIT Releases People-Centered “National AI Ethical Guidelines” Draft. 27 November 2020. Available online: https://english.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&mPid=2&nttSeqNo=467&sCode=eng (accessed on 14 January 2026).
- Ministry of Science and Technology of the People’s Republic of China (MOST). Release of “Ethical Norms for New Generation Artificial Intelligence”. 26 September 2021. Available online: https://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (accessed on 14 January 2026).
- Radanliev, P. AI ethics: Integrating transparency, fairness, and privacy in AI development. Appl. Artif. Intell. 2025, 39, 2463722. [Google Scholar] [CrossRef]
- Owens, K.; Griffen, Z.; Damaraju, L. Managing a “responsibility vacuum” in AI monitoring and governance in healthcare: A qualitative study. BMC Health Serv. Res. 2025, 25, 1043. [Google Scholar] [CrossRef]
- Kale, A.U.; Hogg, H.D.J.; Pearson, R.; Glocker, B.; Golder, S.; Coombe, A.; Waring, J.; Liu, X.; Moore, D.J.; et al. Detecting algorithmic errors and patient harms for AI-enabled medical devices in randomized trials: Protocol. JMIR Res. Protoc. 2024, 13, e55707. [Google Scholar] [CrossRef] [PubMed]
- Kumar, A.; Aelgani, V.; Vohra, R.; Gupta, S.K.; Bhagawati, M.; Paul, S.; Saba, L.; Suri, N.; Khanna, N.N.; Laird, J.R.; et al. Artificial intelligence bias in medical system designs: A systematic review. Multimed. Tools Appl. 2024, 83, 18005–18057. [Google Scholar] [CrossRef]
- Zhang, J.; Zhang, Z.-M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med. Inform. Decis. Mak. 2023, 23, 135. [Google Scholar] [CrossRef] [PubMed]
- Hasanzadeh, F.; Josephson, C.B.; Waters, G.; Adedinsewo, D.; Azizi, Z.; White, J.A. Bias recognition and mitigation strategies in healthcare AI. npj Digit. Med. 2025, 8, 12. [Google Scholar] [CrossRef]
- Rajkomar, A.; Hardt, M.; Howell, M.D.; Corrado, G.; Chin, M.H. Ensuring fairness in machine learning to advance health equity. Ann. Intern. Med. 2018, 169, 866–872. [Google Scholar] [CrossRef]
- Griffin, A.C.; Wang, K.H.; Leung, T.I.; Facelli, J.C. Recommendations to promote fairness and inclusion in biomedical AI. J. Biomed. Inform. 2024, 152, 104693. [Google Scholar] [CrossRef]
- Nazer, L.H.; Zatarah, R.; Waldrip, S.; Ke, J.X.C.; Moukheiber, M.; Khanna, A.K.; Hicklen, R.S.; Moukheiber, L.; Moukheiber, D.; Ma, H.; et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLoS Digit. Health 2023, 2, e0000278. [Google Scholar] [CrossRef]
- van den Heuvel, J.; Porter, A.; Kirkpatrick, E.; Verjans, J.; Reddy, S.; Freckelton, I. The silent partner: A narrative review of AI’s impact on informed consent. J. Law Med. 2025, 32, 74–84. [Google Scholar]
- Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef]
- Park, S.H.; Kim, Y.-H.; Lee, J.Y.; Yoo, S.; Kim, C.J. Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review. Sci. Ed. 2019, 6, 91–98. [Google Scholar] [CrossRef]
- Alkhanbouli, R.; Almadhaani, H.M.A.; Alhosani, F.; Simsekler, M.C.E. The role of explainable artificial intelligence in disease prediction: A systematic literature review and future research directions. BMC Med. Inform. Decis. Mak. 2025, 25, 110. [Google Scholar] [CrossRef]
- Ali, S.; Akhlaq, F.; Imran, A.S.; Kastrati, Z.; Daudpota, S.M.; Moosa, M. The enlightening role of explainable artificial intelligence in medical and healthcare domains: A systematic literature review. Comput. Biol. Med. 2023, 166, 107555. [Google Scholar] [CrossRef] [PubMed]
- Ghassemi, M.; Oakden-Rayner, L.; Beam, A.L. The false hope of current approaches to explainable AI in health care. Lancet Digit. Health 2021, 3, e745–e750. [Google Scholar] [CrossRef]
- Kapcia, M.; Eshkiki, H.; Duell, J.; Fan, X.; Zhou, S.; Mora, B. ExMed: An AI tool for experimenting explainable techniques on medical data analytics. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Virtual, 1–3 November 2021; pp. 841–845. [Google Scholar]
- Falvo, F.R.; Cannataro, M. Explainability techniques for artificial intelligence models in medical diagnostic. In Proceedings of the 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Lisbon, Portugal, 3–6 December 2024; pp. 6907–6913. [Google Scholar]
- Phillips, V. A counterintuitive approach to explainable AI in healthcare: Balancing transparency, efficiency, and cost. AI Soc. 2025, 40, 5735–5741. [Google Scholar] [CrossRef]
- Rudin, C. Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
- McNamara, S.L.; Yi, P.H.; Lotter, W. The clinician–AI interface: Intended use and explainability in FDA-cleared AI devices for medical image interpretation. npj Digit. Med. 2024, 7, 80. [Google Scholar] [CrossRef]
- Wenderott, K.; Krups, J.; Zaruchas, F.; Weigl, M. Effects of artificial intelligence implementation on efficiency in medical imaging: A systematic literature review and meta-analysis. npj Digit. Med. 2024, 7, 265. [Google Scholar] [CrossRef]
- Doumard, E.; Aligon, J.; Escriva, E.; Excoffier, J.-B.; Monsarrat, P.; Soulé-Dupuy, C. A quantitative approach for the comparison of additive local explanation methods. Inf. Syst. 2023, 114, 102254. [Google Scholar] [CrossRef]
- van der Velden, B.H.M.; Kuijf, H.J.; Gilhuijs, K.G.A.; Viergever, M.A. Explainable artificial intelligence in deep learning-based medical image analysis: A survey. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef]
- Loh, H.W.; Ooi, C.P.; Seoni, S.; Barua, P.D.; Molinari, F.; Acharya, U.R. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). Int. J. Med. Inform. 2022, 157, 104222. [Google Scholar] [CrossRef] [PubMed]
- Mandava, R.; Vellela, S.S.; Malathi, N.; Haritha, K.; Gorintla, S.; Dalavai, L. Exploring the role of XAI in enhancing predictive model transparency in healthcare risk assessment. In Proceedings of the International Conference on Computational Robotics, Testing and Engineering Evaluation (ICCRTEE), Virudhunagar, India, 28–30 May 2025; pp. 1–5. [Google Scholar] [CrossRef]
- Kaur, A.; Goyal, S. Explainable AI in Healthcare: Introduction. In Explainable Artificial Intelligence in the Healthcare Industry; Kumar, A., Ananth Kumar, T., Das, P., Sharma, C., Dubey, A.K., Eds.; Wiley-Scrivener: Hoboken, NJ, USA, 2025; pp. 307–323. Available online: https://www.wiley.com/en-us/Explainable+Artificial+Intelligence+in+the+Healthcare+Industry-p-9781394249268 (accessed on 14 January 2026).
- Blahodelskyi, O. Systematic review: Innovative approaches in artificial intelligence development. Nigerian J. Technol. 2025, 43, 839–848. [Google Scholar] [CrossRef]
- Aldhafeeri, F.M. Governing artificial intelligence in radiology: A systematic review of ethical, legal, and regulatory frameworks. Diagnostics 2025, 15, 2300. [Google Scholar] [CrossRef] [PubMed]
- Nawawi, M.H.M.; Ishak, M.S.; Raes, R.F.A.; Razak, I.A.; Hasan, S.; Rahim, A.I.A. The intersection of quality improvement, artificial intelligence and patient safety in healthcare—Current applications, challenges and risks, and future directions: A scoping review. J. Med. Artif. Intell. 2025, 8, 57. [Google Scholar] [CrossRef]
- Doolan, P.; Michopoulou, S.; Meades, R. IPEM topical report: Results of a 2024 UK survey of artificial intelligence in medical physics and clinical engineering. Phys. Med. Biol. 2025, 70, 14TR01. [Google Scholar] [CrossRef]
- Hodges, B.D. Education and the adoption of AI in healthcare: “What is happening?”. Healthc. Pap. 2025, 22, 39–43. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, J.; Meng, X.; Li, S.; Wang, H. Interpretation of sectoral standard AI medical device—Specific requirement for datasets: Color fundus images of diabetic retinopathy. Med. J. Peking Union Med. Coll. Hosp. 2025, 16, 916–924. [Google Scholar] [CrossRef]
- Gornet, M.; Maxwell, W. The European approach to regulating AI through technical standards. Internet Policy Rev. 2024, 13, 1–27. [Google Scholar] [CrossRef]
- Reddy, S. Global harmonization of AI-enabled software as a medical device regulation: Addressing challenges and unifying standards. Mayo Clin. Proc. Digit. Health 2024, 3, 100191. [Google Scholar] [CrossRef]
- Chauhan, S.B.; Gaur, R.; Akram, A.; Singh, I. Artificial intelligence-driven insights for regulatory intelligence in medical devices: Evaluating EMA, FDA, and CDSCO frameworks. Glob. Clin. Eng. J. 2025, 7, 11–24. [Google Scholar] [CrossRef]
- Vardas, E.P.; Marketou, M.; Vardas, P.E. Medicine, healthcare and the AI Act: Gaps, challenges and future implications. Eur. Heart J. Digit. Health 2025, 6, 833–839. [Google Scholar] [CrossRef]
- Duffourc, M.N.; Gerke, S. The proposed EU directives for AI liability leave worrying gaps likely to impact medical AI. npj Digit. Med. 2023, 6, 77. [Google Scholar] [CrossRef]
- Wong, A.; Otles, E.; Donnelly, J.P.; Krumm, A.; McCullough, J.; DeTroyer-Cooley, O.; Pestrue, J.; Phillips, M.; Konye, J.; Penoza, C.; et al. External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Intern. Med. 2021, 181, 1065–1070. [Google Scholar] [CrossRef] [PubMed]
- Lyell, D.; Wang, Y.; Coiera, E.; Magrabi, F. More than algorithms: Analysis of safety events involving ML-enabled medical devices reported to the FDA. J. Am. Med. Inform. Assoc. 2023, 30, 1227–1236. [Google Scholar] [PubMed]
- Handley, J.L.; Krevat, S.A.; Fong, A.; Ratwani, R.M. Artificial intelligence-related safety issues associated with ML-enabled medical devices: An analysis of MAUDE reports. npj Digit. Med. 2024, 7, 351. [Google Scholar] [PubMed]
- Abrisqueta-Costa, P.; García-Marco, J.A.; Gutiérrez, A.; Hernández-Rivas, J.Á.; Andreu-Lapiedra, R.; Arguello-Tomas, M.; Leiva-Farré, C.; López-Roda, M.D.; Callejo-Mellén, Á.; Álvarez-García, E.; et al. Real-world evidence on adverse events and healthcare resource utilization in patients with chronic lymphocytic leukaemia in Spain using natural language processing: The SRealCLL study. Cancers 2024, 16, 4004. [Google Scholar] [PubMed]
- Sáez, C.; Ferri, P.; García-Gómez, J.M. Resilient artificial intelligence in health: Synthesis and research agenda toward next-generation trustworthy clinical decision support. J. Med. Internet Res. 2024, 26, e50295. [Google Scholar] [CrossRef]
- Stogiannos, N.; Cuocolo, R.; D’Antonoli, A.T.; dos Santos, D.P.; Harvey, H.; Huisman, M.; Kocak, B.; Kotter, E.; Lekadir, K.; Shelmerdine, S.C.; et al. Recognising errors in AI implementation in radiology: A narrative review. Eur. J. Radiol. 2025, 191, 112311. [Google Scholar] [CrossRef]
- Federico, C.A.; Trotsyuk, A.A. Biomedical data science, artificial intelligence, and ethics: Navigating challenges in the face of explosive growth. Annu. Rev. Biomed. Data Sci. 2024, 7, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Williamson, S.M.; Prybutok, V. Balancing privacy and progress: A review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl. Sci. 2024, 14, 675. [Google Scholar] [CrossRef]
- Amini, M.M.; Jesus, M.; Sheikholeslami, D.F.; Alves, P.; Benam, A.H.; Hariri, F. Artificial intelligence ethics and challenges in healthcare applications: A comprehensive review in the context of the European GDPR mandate. Mach. Learn. Knowl. Extr. 2023, 5, 1023–1035. [Google Scholar] [CrossRef]
- European Parliament. EU AI Act: First Regulation on Artificial Intelligence. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 18 October 2025).
- Meszaros, J.; Minari, J.; Huys, I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Front. Genet. 2022, 13, 927721. [Google Scholar] [CrossRef]
- Personal Data Protection Commission (PDPC). Data Protection Obligations (Personal Data Protection Act, Singapore). Available online: https://www.pdpc.gov.sg/overview-of-pdpa/the-legislation/personal-data-protection-act/data-protection-obligations (accessed on 14 January 2026).
- Japanese Law Translation (Ministry of Justice, Japan). Act on the Protection of Personal Information. Available online: https://www.japaneselawtranslation.go.jp/en/laws/view/4241/en (accessed on 14 January 2026).
- Ministry of Government Legislation (Republic of Korea). Personal Information Protection Act (English Text, National Law Information Center). Available online: https://www.law.go.kr/LSW//lsInfoP.do?chrClsCd=010203&lsiSeq=213857&urlMode=engLsInfoR&viewCls=engLsInfoR (accessed on 14 January 2026).
- Supreme People’s Procuratorate (People’s Republic of China). Personal Information Protection Law of the People’s Republic of China. Available online: https://en.spp.gov.cn/2021-12/29/c_948419.htm (accessed on 14 January 2026).
- Office for Civil Rights (OCR), U.S. Department of Health and Human Services. HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information. Notice of Proposed Rulemaking; 6 January 2025. Available online: https://www.federalregister.gov/documents/2025/01/06/2024-30983/hipaa-security-rule-to-strengthen-the-cybersecurity-of-electronic-protected-health-information (accessed on 14 January 2026).
- Office of the Privacy Commissioner of Canada. Privacy and Artificial Intelligence (AI). Available online: https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/ (accessed on 14 January 2026).
- Office of the Australian Information Commissioner (OAIC). Guidance on Privacy and the Use of Commercially Available AI Products. 21 October 2024. Available online: https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products (accessed on 14 January 2026).
- Wang, Y.; Liu, C.; Zhou, K.; Zhu, T.; Han, X. Towards regulatory generative AI in ophthalmology healthcare: A security and privacy perspective. Br. J. Ophthalmol. 2024, 108, 1349–1353. [Google Scholar] [CrossRef]
- Wang, X.; Li, J.; Ding, X.; Zhang, H.; Sun, L. A survey of differential privacy techniques for federated learning. IEEE Access 2025, 13, 6539–6555. [Google Scholar] [CrossRef]
- Shukla, S.; Rajkumar, S.; Sinha, A.; Esha, M.; Elango, K.; Sampath, V. Federated learning with differential privacy for breast cancer diagnosis enabling secure data sharing and model integrity. Sci. Rep. 2025, 15, 12345. [Google Scholar] [CrossRef]
- Kaissis, G.A.; Makowski, M.R.; Rückert, D.; Braren, R.F. Secure, privacy-preserving and federated machine learning in medical imaging. Nat. Mach. Intell. 2020, 2, 305–311. [Google Scholar] [CrossRef]
- Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. npj Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef]
- Wu, T.; Deng, Y.; Zhou, Q.; Chen, X.; Zhang, M. ADPHE-FL: Federated learning with adaptive differential privacy and homomorphic encryption. Peer-to-Peer Netw. Appl. 2025, 18, 210. [Google Scholar] [CrossRef]
- Ouyang, J.; Han, R.; Zuo, X.; Cheng, Y.; Liu, C.H. Accuracy-aware differential privacy in federated learning of large transformer models. J. Inf. Secur. Appl. 2025, 81, 103844. [Google Scholar] [CrossRef]
- Mahato, G.K.; Banerjee, A.; Chakraborty, S.K.; Gao, X.-Z. Privacy-preserving verifiable federated learning scheme using blockchain and homomorphic encryption. Appl. Soft Comput. 2024, 156, 111208. [Google Scholar] [CrossRef]
- Li, K.; Lohachab, A.; Dumontier, M.; Urovi, V. Privacy preservation in blockchain-based healthcare data sharing: A systematic review. Peer-to-Peer Netw. Appl. 2025, 18, 302. [Google Scholar] [CrossRef] [PubMed]
- Conduah, A.K.; Ofoe, S.; Siaw-Marfo, D. Data privacy in healthcare: Global challenges and solutions. Digit. Health 2025, 11, 20552076241234567. [Google Scholar] [CrossRef]
- Alshohoumi, F. Privacy concerns of IoT medical applications: An empirical analysis of the current privacy policies under the GDPR. Int. J. Electron. Healthc. 2025, 15, 155–175. [Google Scholar] [CrossRef]
- Puneeth, R.P.; Parthasarathy, G. Blockchain-based framework for privacy preservation and securing EHR with patient-centric access control. Acta Inform. Pragensia 2024, 13, 84195–84229. [Google Scholar] [CrossRef]
- Chan, H.Y. A proportionality-by-design approach for mobile mental health and well-being applications. Law Innov. Technol. 2025, 17, 58–83. [Google Scholar] [CrossRef]
- Rocher, L.; Hendrickx, J.M.; de Montjoye, Y.-A. Estimating the success of re-identifications in incomplete datasets using generative models. Nat. Commun. 2019, 10, 3069. [Google Scholar] [CrossRef]
- Schwarz, C.G.; Kremers, W.K.; Therneau, T.M.; Sharp, R.R.; Gunter, J.L.; Vemuri, P.; Arani, A.; Spychalla, A.J.; Kantarci, K.; Knopman, D.S.; et al. Identification of anonymous MRI research participants by face recognition. N. Engl. J. Med. 2019, 381, 1684–1686. [Google Scholar] [CrossRef]
- Pati, S.; Kumar, S.; Varma, A.; Edwards, B.; Lu, C.; Qu, L.; Wang, J.J.; Lakshminarayanan, A.; Wang, S.-H.; Sheller, M.J.; et al. Privacy preservation for federated learning in health care. Patterns 2024, 5, 100974. [Google Scholar] [CrossRef]
- Kaabachi, B.; Despraz, J.; Meurers, T.; Otte, K.; Halilovic, M.; Kulynych, B.; Prasser, F.; Raisaro, J.L. A scoping review of privacy and utility metrics in medical synthetic data. npj Digit. Med. 2025, 8, 60. [Google Scholar] [CrossRef]
- Sella, N.; Guinot, F.; Lagrange, N.; Albou, L.-P.; Desponds, J.; Isambert, H. Preserving information while respecting privacy through an information theoretic framework for synthetic health data generation. npj Digit. Med. 2025, 8, 49. [Google Scholar] [CrossRef]
- Habli, I.; Lawton, T.; Porter, Z. Artificial intelligence in health care: Accountability and safety. Bull. World Health Organ. 2020, 98, 251–256. [Google Scholar] [CrossRef] [PubMed]
- Chan, B. Applying a common enterprise theory of liability to clinical AI systems. Am. J. Law Med. 2021, 47, 351–385. [Google Scholar] [CrossRef]
- Daye, D.; Wiggins, W.F.; Lungren, M.P.; Alkasab, T.; Kottler, N.; Allen, B.; Roth, C.J.; Bizzo, B.C.; Durniak, K.; Brink, J.A.; et al. Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How? Radiology 2022, 305, 555–563. [Google Scholar] [CrossRef] [PubMed]
- Bedoya, A.D.; Economou-Zavlanos, N.J.; Goldstein, B.A.; Young, A.; Jelovsek, J.E.; O’Brien, C.; Parrish, A.B.; Elengold, S.; Lytle, K.; Balu, S.; et al. A framework for the oversight and local deployment of safe and high-quality prediction models. J. Am. Med. Inform. Assoc. 2022, 29, 1631–1636. [Google Scholar] [CrossRef]
- Di Palma, G.; Scendoni, R.; Tambone, V.; Alloni, R.; De Micco, F. Integrating enterprise risk management to address AI-related risks in healthcare: Strategies for effective risk mitigation and implementation. J. Healthc. Risk Manag. 2025, 44, 25–33. [Google Scholar] [CrossRef]
- Contaldo, M.T.; Pasceri, G.; Vignati, G.; Bracchi, L.; Triggiani, S.; Carrafiello, G. AI in radiology: Navigating medical responsibility. Diagnostics 2024, 14, 1506. [Google Scholar] [CrossRef] [PubMed]
- Chan, G.K.Y. AI in healthcare: Regulatory guidelines and judge-made negligence principles for AI implementers. Med. Law Int. 2025. [Google Scholar] [CrossRef]
- Bottomley, D.; Thaldar, D. Liability for harm caused by AI in healthcare: An overview of the core legal concepts. Front. Pharmacol. 2023, 14, 1297353. [Google Scholar] [CrossRef]
- Naidoo, T. Overview of AI regulation in healthcare: A comparative study of the EU and South Africa. S. Afr. J. Bioeth. Law 2024, 17, e2294. [Google Scholar] [CrossRef]
- Chau, M.; Rahman, M.G.; Debnath, T. From black box to clarity: Strategies for effective AI informed consent in healthcare. Artif. Intell. Med. 2025, 167, 103169. [Google Scholar] [CrossRef] [PubMed]
- Srinivasu, P.N.; Sandhya, N.; Jhaveri, R.H.; Raut, R. From blackbox to explainable AI in healthcare: Existing tools and case studies. Mob. Inf. Syst. 2022, 1, 8167821. [Google Scholar] [CrossRef]
- Price, W.N., II; Gerke, S.; Cohen, I.G. Potential liability for physicians using artificial intelligence. JAMA 2019, 322, 1765–1766. [Google Scholar] [CrossRef]
- Hillis, J.M.; Visser, J.J.; Cliff, E.R.S.; van der Geest–Aspers, K.; Bizzo, B.C.; Dreyer, K.J.; Adams-Prassl, J.; Andriole, K.P. The lucent yet opaque challenge of regulating artificial intelligence in radiology. npj Digit. Med. 2024, 7, 69. [Google Scholar] [CrossRef]
- Carvalho, E.; Mascarenhas, M.; Pinheiro, F.; Correia, R.; Balseiro, S.; Barbosa, G.; Guerra, A.; Oliveira, D.; Moura, R.; dos Santos, A.M.; et al. Predetermined change control plans: Guiding principles for advancing safe, effective, and high-quality AI-ML technologies. JMIR AI 2025, 4, e76854. [Google Scholar] [CrossRef]
- Weerakoon, A.T.; Girdis, T.; Peters, O. Artificial intelligence in Australian dental and general healthcare: A scoping review. Aust. Dent. J. 2025, online ahead of print. [Google Scholar] [CrossRef] [PubMed]
- Warraich, H.J.; Tazbaz, T.; Califf, R.M. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. JAMA 2025, 333, 241–247. [Google Scholar] [CrossRef]
- Stetson, P.D.; Choy, J.; Summerville, N.; Baldwin-Medsker, A.; Mak, J.; Chatterjee, A.; Kim, K.; Kumar, C.; Samedy, P.; Halperin, J.; et al. Responsible artificial intelligence governance in oncology. npj Digit. Med. 2025, 8, 407. [Google Scholar] [CrossRef]
- Rozenblit, L.; Price, A.; Solomonides, A.; Joseph, A.L.; Koski, E.; Srivastava, G.; Labkoff, S.; Bray, D.; Lopez-Gonzalez, M.; Singh, R.; et al. Toward responsible AI governance: Balancing multi-stakeholder perspectives on AI in healthcare. Int. J. Med. Inform. 2025, 203, 106015. [Google Scholar] [CrossRef]
- Waeiss, Q.; Cho, M.K. An ecosystem approach to governing commercial actors in healthcare AI. Policy Stud. 2025, 1–14. [Google Scholar] [CrossRef]
- Salwei, M.E.; Davis, S.E.; Reale, C.; Novak, L.L.; Walsh, C.G.; Beebe, R.; Nelson, S.; Sundrani, S.; Rose, S.; Wright, A.; et al. Human-Centered Design of an Artificial Intelligence Monitoring System: The Vanderbilt Algorithmovigilance Monitoring and Operations System. JAMIA Open 2025, 8, ooaf136. [Google Scholar] [CrossRef]
- Balendran, A.; Benchoufi, M.; Evgeniou, T.; Ravaud, P. Algorithmovigilance, lessons from pharmacovigilance. npj Digit. Med. 2024, 7, 270. [Google Scholar] [CrossRef]
- Sridharan, K.; Sivaramakrishnan, G. Leveraging artificial intelligence to detect ethical concerns in medical research: A case study. J. Med. Ethics 2025, 51, 126–134. [Google Scholar] [CrossRef]
- Friesen, P.; Douglas-Jones, R.; Marks, M.; Pierce, R.; Fletcher, K.; Mishra, A.; Lorimer, J.; Véliz, C.; Hallowell, N.; Graham, M.; et al. Governing AI-driven health research: Are IRBs up to the task? Ethics Hum. Res. 2021, 43, 35–42. [Google Scholar] [CrossRef] [PubMed]
- Anderson, E.E.; Johnson, A.; Lynch, H.F. Inclusive, engaged, and accountable institutional review boards. Account. Res. 2024, 31, 1287–1295. [Google Scholar] [CrossRef] [PubMed]
- Labkoff, S.; Oladimeji, B.; Kannry, J.; Solomonides, A.; Leftwich, R.; Koski, E.; Joseph, A.L.; Lopez-Gonzalez, M.; Fleisher, L.A.; Nolen, K.; et al. Toward a responsible future: Recommendations for AI-enabled clinical decision support. J. Am. Med. Inform. Assoc. 2024, 31, 2730–2739. [Google Scholar] [CrossRef]
- Bignami, E.G.; Russo, M.; Semeraro, F.; Bellini, V. The European Union AI Act in an era of global uncertainty. JMIR AI 2025, 4, e75527. [Google Scholar] [CrossRef] [PubMed]
- NHS Digital. Artificial Intelligence (AI) Buyer’s Guide Assessment Template. Available online: https://digital.nhs.uk/services/ai-knowledge-repository/develop-ai/a-buyers-guide-to-ai-in-health-and-care/assessment-template (accessed on 14 January 2026).
- Khan, S.D.; Hoodbhoy, Z.; Raja, M.H.R.; Kim, J.Y.; Hogg, H.D.J.; Manji, A.A.A.; Gulamali, F.; Hasan, A.; Shaikh, A.; Tajuddin, S.; et al. Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLoS Digit. Health 2024, 3, e0000514. [Google Scholar] [CrossRef]
- Bidenko, N.V.; Stuchynska, N.V.; Palamarchuk, Y.V.; Matviienko, M.M. Integrating artificial intelligence in healthcare practice: Challenges and future prospects. Wiad. Lek. 2025, 78, 1199–1205. [Google Scholar] [CrossRef]
- Kim, J.Y.; Hasan, A.; Kellogg, K.C.; Ratliff, W.; Murray, S.G.; Suresh, H.; Valladares, A.; Shaw, K.; Tobey, D.; Vidal, D.E.; et al. Development and preliminary testing of Health Equity Across the AI Lifecycle (HEAAL): A framework for healthcare delivery organizations to mitigate the risk of AI solutions worsening health inequities. PLoS Digit. Health 2024, 3, e0000390. [Google Scholar] [CrossRef]
- Arnaout, A.; Gill, P.; Virani, A.; Flatt, A.; Prodan-Balla, N.; Byres, D.; Stowe, M.; Saremi, A.; Coss, M.; Tatto, M.; et al. Shaping the future of healthcare in British Columbia: Establishing provincial clinical governance for responsible deployment of artificial intelligence tools. Healthc. Manag. Forum 2024, 37, 320–328. [Google Scholar] [CrossRef]
- Torkilsheyggi, A. Flexibility first, then standardize: A strategy for growing inter-departmental systems. Stud. Health Technol. Inform. 2015, 216, 477–481. [Google Scholar]
- Lukkien, D.R.M.; Nap, H.H.; Peine, A.; Minkman, M.M.N.; Moors, E.H.M.; Boon, W.P.C. Responsible scaling of artificial intelligence in healthcare: Standardization meets customization. Ethics Inf. Technol. 2025, 27, 34. [Google Scholar] [CrossRef]
- Vasey, B.; Nagendran, M.; Campbell, B.; Clifton, D.A.; Collins, G.S.; Denaxas, S.; Denniston, A.K.; Faes, L.; Geerts, B.; Ibrahim, M.; et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022, 377, e070904. [Google Scholar] [CrossRef] [PubMed]
- Sakly, H.; Guetari, R.; Kraiem, N. Deployment and Continuous Integration of AI in Healthcare. In Scalable Artificial Intelligence for Healthcare: Advancing AI Solutions for Global Health Challenges; CRC Press: Boca Raton, FL, USA, 2025; pp. 95–112. [Google Scholar]
- Brady, A.P.; Allen, B.; Chong, J.; Kotter, E.; Kottler, N.; Mongan, J.; Oakden-Rayner, L.; dos Santos, D.P.; Tang, A.; Wald, C.; et al. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J. Med. Imaging Radiat. Oncol. 2024, 68, 7–26. [Google Scholar] [CrossRef]
- Rajendra, J.B.; Thuraisingam, A.S. The role of explainability and human intervention in AI decisions: Jurisdictional and regulatory aspects. Inf. Commun. Technol. Law 2025, 34, 1–32. [Google Scholar] [CrossRef]
- Wang, Y.; Li, N.; Chen, L.; Wu, M.; Meng, S.; Dai, Z.; Zhang, Y.; Clarke, M. Guidelines, consensus statements, and standards for the use of artificial intelligence in medicine: Systematic review. J. Med. Internet Res. 2023, 25, e46089. [Google Scholar] [CrossRef] [PubMed]
- Hou, J.; Cheng, X.; Liao, J.; Zhang, Z.; Wang, W. Ethical concerns of AI in healthcare: A systematic review of qualitative studies. Nurs. Ethics 2025, online ahead of print. [Google Scholar] [CrossRef]
- Čartolovni, A.; Tomičić, A.; Lazić Mosler, E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int. J. Med. Inform. 2022, 161, 104738. [Google Scholar] [CrossRef]
- Cestonaro, C.; Delicati, A.; Marcante, B.; Caenazzo, L.; Tozzo, P. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: A systematic review. Front. Med. 2023, 10, 1305756. [Google Scholar] [CrossRef] [PubMed]
| Governance Theme | Key Risk | Priority Response (Operational Mechanisms) |
|---|---|---|
| Bias and Fairness | Systematically unequal performance and allocative harms due to biased data, proxies, and context-dependent deployment. | Fairness-by-design with pre-specified subgroup metrics and thresholds; datasheets/model cards; multi-site external validation; routine subgroup audits post-deployment; documented escalation and remediation (recalibration, retraining, de-implementation); stakeholder/patient review for impact and redress. |
| Explainability and Transparency | Clinically and legally insufficient intelligibility, contestability, and traceability of outputs and updates. | Separate clinical usability explainability from regulatory transparency; task-bound explanation validation (stability, usefulness); standardized documentation (model cards, intended use, limitations); audit logs and versioning; release notes for updates; change control plan linking model/interface changes to evidence. |
| Safety and Quality | Silent performance degradation under dataset shift, workflow change, or updates, with delayed detection of harm. | Pre-deployment silent trials and prospective validation on local workflows; post-market surveillance with drift detection; incident reporting and safety signal review; periodic re-certification/reevaluation; predefined rollback/kill-switch authority; integration into institutional quality management and risk registers. |
| Privacy and Data Protection | Unlawful secondary use, re-identification, leakage/memorization (incl. generative AI), and weak accountability across controllers/processors. | DPIA and purpose limitation; data minimization and access controls; encryption, logging, retention rules; privacy-enhancing techniques where appropriate (e.g., federated learning + differential privacy) with clinically meaningful utility testing; documented data-sharing/transfer governance and breach response. |
| Accountability and Liability | Responsibility gaps in multi-actor systems (developer–vendor–provider–clinician), especially under frequent updates and opaque services. | Explicit Responsible, Accountable, Consulted, Informed (RACI)-style allocation of duties; contractual clauses on documentation, monitoring, audit rights, and update notification; preserved evidence trails (logs, validation reports, monitoring outputs); defined incident investigation pathway; alignment with enterprise risk management; suitable compensation/insurance arrangements where appropriate. |
| Human Oversight | Nominal “human-in-the-loop” without real authority, skills, or time to intervene, leading to automation bias and unmanaged risk. | Defined decision rights (override, pause, retire); training and competency checks; escalation pathways and accountability for monitoring actions; human-factors testing of interfaces/alerts; governance committee with documented meeting outputs; periodic review of reliance patterns and override events. |
| Procurement and Deployment | Technology-led purchasing that omits governance requirements, long-term maintainability, monitoring capacity, and de-implementation. | Multidisciplinary procurement with mandatory governance artefacts (model card, monitoring plan, DPIA, change control plan); performance Service Level Agreements (SLAs) and audit clauses; interoperability and data-quality prerequisites; implementation and training plan; obligations for post-market studies and update transparency; explicit de-implementation/exit provisions. |
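As an illustrative sketch only (not part of the reviewed article), the "routine subgroup audits post-deployment" mechanism listed under Bias and Fairness above could be operationalized as a periodic check of discrimination performance per patient subgroup against a pre-specified threshold, flagging groups that require escalation. The function names, threshold value, and data below are all hypothetical.

```python
# Minimal subgroup-audit sketch: compute per-group AUC from prediction logs
# and flag groups below a pre-specified threshold. Illustrative assumptions:
# records are (group, risk_score, outcome_label) tuples; min_auc is a
# governance-committee-defined floor, not a clinically validated value.
from collections import defaultdict

def auc(scores_pos, scores_neg):
    """AUC via pairwise comparison (Mann-Whitney U divided by n_pos * n_neg)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def subgroup_audit(records, min_auc=0.75):
    """records: iterable of (group, score, label) with label 1 = event, 0 = none.
    Returns {group: {"auc": ..., "flag": ...}}; flagged groups trigger the
    documented escalation/remediation pathway."""
    by_group = defaultdict(lambda: ([], []))  # (negatives, positives) per group
    for group, score, label in records:
        by_group[group][label].append(score)
    report = {}
    for group, (neg, pos) in by_group.items():
        if pos and neg:  # AUC undefined without both outcome classes
            a = auc(pos, neg)
            report[group] = {"auc": round(a, 3), "flag": a < min_auc}
    return report
```

In practice, such an audit would run on a schedule, feed a monitoring dashboard, and log its outputs as evidence for the accountability mechanisms described in the table; the sketch only shows the core per-subgroup computation.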
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Bailo, P.; Nittari, G.; Pesel, G.; Basello, E.; Spasari, T.; Ricci, G. Governing Healthcare AI in the Real World: How Fairness, Transparency, and Human Oversight Can Coexist: A Narrative Review. Sci 2026, 8, 36. https://doi.org/10.3390/sci8020036

