Advances in Closed-Loop Artificial Intelligence for Healthcare
Abstract
1. Introduction
- (1) This work systematically categorises Human-in-the-Loop (HITL) AI approaches in healthcare into pre-deployment and post-deployment stages, providing a structured perspective on how human involvement evolves across the lifecycle of clinical AI systems.
- (2) This work presents a focused review of post-deployment HITL AI in healthcare, synthesising emerging studies that examine clinician interaction, feedback mechanisms, and human–AI collaboration after model deployment in clinical settings.
- (3) This work introduces conceptual frameworks for closed-loop AI without retraining, illustrating how real-time expert feedback can be incorporated into deployed systems to refine AI outputs and support adaptive clinical decision support systems.
2. HITL AI Approaches
2.1. Definitions of Pre-Deployment and Post-Deployment
- Output moderation, where clinicians review and adjust model predictions before they are used in decision-making.
- Input moderation, where clinicians influence model behaviour by modifying inputs or feature representations.
- Internal moderation, where clinicians interact with selected model parameters to guide system behaviour.
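The three moderation modes above can be sketched as a minimal Python example. The toy linear risk model, its weights, and all function names here are illustrative assumptions for exposition, not taken from any reviewed system:

```python
import numpy as np

# Toy linear scorer standing in for a deployed clinical model.
# Weights are frozen, as in a post-deployment setting.
WEIGHTS = np.array([0.8, -0.5, 1.2])

def predict(features: np.ndarray, weights: np.ndarray = WEIGHTS) -> float:
    """Return a risk probability via a logistic link."""
    return 1.0 / (1.0 + np.exp(-float(features @ weights)))

# (1) Output moderation: the clinician reviews the prediction and may
#     replace it before it enters decision-making.
def output_moderation(features, clinician_score=None):
    score = predict(features)
    return clinician_score if clinician_score is not None else score

# (2) Input moderation: the clinician corrects or masks an input feature
#     (e.g. a known-erroneous lab value) before inference.
def input_moderation(features, corrections: dict):
    adjusted = features.copy()
    for idx, value in corrections.items():
        adjusted[idx] = value
    return predict(adjusted)

# (3) Internal moderation: the clinician adjusts an exposed model
#     parameter, here down-weighting one feature's influence.
def internal_moderation(features, feature_idx: int, scale: float):
    weights = WEIGHTS.copy()
    weights[feature_idx] *= scale
    return predict(features, weights)

x = np.array([1.0, 2.0, 0.5])
print(output_moderation(x, clinician_score=0.2))      # → 0.2
print(input_moderation(x, {1: 0.0}))
print(internal_moderation(x, feature_idx=2, scale=0.5))
```

Note that only output moderation leaves the model untouched; the other two modes change what the frozen model sees or how its parameters are applied, without any retraining.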
2.2. Pre-Deployment
2.2.1. Human Labelling Data
2.2.2. Active Learning
2.2.3. Reinforcement Learning
2.2.4. Others
2.3. Post-Deployment
2.3.1. With Model Retraining
Without XAI
With XAI
Active Learning
Reinforcement Learning
2.3.2. Without Model Retraining
Override
Moderation
- (1) Feature Upgrade/Downgrade
- (2) Additional Control Inputs
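A minimal sketch of moderation without retraining follows, assuming a frozen linear model: feature upgrade/downgrade is modelled as a clinician-set multiplier on a feature's contribution at inference time, and an additional control input as a shift on the decision threshold. All names, weights, and thresholds are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModeratedModel:
    """Frozen model wrapped with clinician-adjustable moderation inputs."""
    weights: dict                                        # name -> learned weight (frozen)
    feature_factors: dict = field(default_factory=dict)  # clinician up/downgrades
    threshold_shift: float = 0.0                         # additional control input

    def score(self, features: dict) -> float:
        total = 0.0
        for name, value in features.items():
            factor = self.feature_factors.get(name, 1.0)  # default: unmodified
            total += self.weights[name] * value * factor
        return total

    def decide(self, features: dict, base_threshold: float = 3.0) -> bool:
        # Learned parameters never change; only the moderation inputs do.
        return self.score(features) > base_threshold + self.threshold_shift

model = ModeratedModel(weights={"age": 0.02, "lactate": 0.9, "hr": 0.01})
obs = {"age": 70, "lactate": 2.0, "hr": 95}
print(model.decide(obs))                  # → True  (baseline alert fires)
model.feature_factors["lactate"] = 0.5    # clinician downgrades a feature
model.threshold_shift = 1.0               # clinician raises the alert bar
print(model.decide(obs))                  # → False (alert suppressed)
```

The key property of this design is that the clinician's adjustments are reversible, auditable inputs rather than weight updates, so the deployed model remains exactly the artefact that was validated.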
3. Systematic Search for Post-Deployment HITL AI
3.1. Methodology
3.1.1. Search Strategy and Data Sources
3.1.2. Inclusion Criteria
- They were peer-reviewed original research publications (journal and conference papers).
- They included healthcare applications.
- They reported experimental work.
- The model variables were modified by a human expert.
- They were published in English.
- They were published between 2020 and 2025.
- The full text was available.
3.1.3. Study Selection and Data Collection
3.2. Results
3.2.1. Publication Trend of Initial Search String
3.2.2. Existing Post-Deployment HITL AI Studies
4. Discussion and Future Directions
- (1) Selective intervention mechanisms: HITL interactions should be triggered primarily for low-confidence predictions or high-risk decisions, reducing unnecessary clinician interruptions.
- (2) Explanation prioritisation: XAI interfaces should highlight the most influential features or reasoning steps to reduce interpretation time.
- (3) Workflow-aware integration: HITL mechanisms should align with existing clinical workflows rather than requiring additional steps outside routine decision-making processes.
- (4) Adaptive alerting strategies: Systems should limit excessive notifications and prioritise alerts based on clinical relevance and uncertainty levels.
- (5) Auditability and traceability: Logging clinician interventions supports accountability and enables system improvement without increasing user burden.
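Points (1), (4), and (5) above can be combined into one small control-flow sketch: route only low-confidence or high-risk predictions to the clinician, and log every intervention for later audit. The confidence threshold, risk labels, and log fields are illustrative assumptions:

```python
import time

# In-memory stand-in for a persistent, append-only audit store.
AUDIT_LOG = []

def needs_human_review(confidence: float, risk: str,
                       conf_threshold: float = 0.8) -> bool:
    """Selective intervention: escalate only uncertain or high-risk cases."""
    return confidence < conf_threshold or risk == "high"

def record_intervention(case_id, ai_output, clinician_output):
    """Auditability: every clinician decision is logged with provenance."""
    AUDIT_LOG.append({
        "case_id": case_id,
        "timestamp": time.time(),
        "ai_output": ai_output,
        "clinician_output": clinician_output,
        "overridden": ai_output != clinician_output,
    })

# A confident, low-risk case bypasses the clinician entirely.
print(needs_human_review(confidence=0.95, risk="low"))   # → False
# A low-confidence case is escalated and the override is logged.
if needs_human_review(confidence=0.55, risk="low"):
    record_intervention("case-001", ai_output="sepsis",
                        clinician_output="no sepsis")
print(len(AUDIT_LOG), AUDIT_LOG[0]["overridden"])        # → 1 True
```

Because only escalated cases reach the clinician, the override log also doubles as a targeted dataset for future offline evaluation, without adding steps to routine decision-making.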
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| CDSS | Clinical Decision Support Systems |
| CNN | Convolutional Neural Network |
| EHR | Electronic Health Records |
| GNN | Graph Neural Network |
| GUI | Graphical User Interface |
| HITL | Human-In-The-Loop |
| ICU | Intensive Care Unit |
| LIME | Local Interpretable Model-agnostic Explanations |
| PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
| SHAP | SHapley Additive exPlanations |
| XAI | Explainable AI |
References
- Kim, M.; Sohn, H.; Choi, S.; Kim, S. Requirements for Trustworthy Artificial Intelligence and its Application in Healthcare. Healthc. Inform. Res. 2023, 29, 315–322. [Google Scholar] [CrossRef] [PubMed]
- Scarpato, N.; Ferroni, P.; Guadagni, F. XAI Unveiled: Revealing the Potential of Explainable AI in Medicine: A Systematic Review. IEEE Access 2024, 12, 191498–191516. [Google Scholar] [CrossRef]
- Al-Ansari, A.A.; Nejad, F.A.B.; Al-Nasr, R.J.; Prithula, J.; Rahman, T.; Hasan, A.; Chowdhury, M.E.H.; Alam, M.F. Predicting ICU Mortality Among Septic Patients Using Machine Learning Technique. J. Clin. Med. 2025, 14, 3495. [Google Scholar] [CrossRef]
- Frezza, B.; Nurchis, M.C.; Capolupo, G.T.; Carannante, F.; De Prizio, M.; Rondelli, F.; Alunni Fegatelli, D.; Gili, A.; Lepre, L.; Costa, G. A Comparison of Machine Learning-Based Models and a Simple Clinical Bedside Tool to Predict Morbidity and Mortality After Gastrointestinal Cancer Surgery in the Elderly. Bioengineering 2025, 12, 544. [Google Scholar] [CrossRef]
- Bindewari, S.; Sharma, K.; Gaddam, S.S.; Verma, A.; Parashar, D.; Arse, M. Machine Learning Models for Early Detection of Cardiac Arrest Risk Factors. In Proceedings of the 2025 International Conference on Cognitive Computing in Engineering, Communications, Sciences and Biomedical Health Informatics (IC3ECSBHI), Greater Noida, India, 16–18 January 2025; pp. 1–5. [Google Scholar]
- Moazemi, S.; Vahdati, S.; Li, J.; Kalkhoff, S.; Castano, L.J.V.; Dewitz, B.; Bibo, R.; Sabouniaghdam, P.; Tootooni, M.S.; Bundschuh, R.A.; et al. Artificial intelligence for clinical decision support for monitoring patients in cardiovascular ICUs: A systematic review. Front. Med. 2023, 10, 1109411. [Google Scholar] [CrossRef]
- Abubeker, K.M.; Baskar, S.; Chandran, P.; Yadav, P. Computer Vision-Assisted Smart ICU Framework for Optimized Patient Care. IEEE Sens. Lett. 2024, 8, 6001004. [Google Scholar] [CrossRef]
- Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef]
- Van Dort, B.A.; Engelsma, T.; Medlock, S.; Dusseljee-Peute, L. User-Centered Methods in Explainable AI Development for Hospital Clinical Decision Support: A Scoping Review. Stud. Health Technol. Inform. 2025, 326, 17–21. [Google Scholar] [CrossRef]
- Wang, B.; Asan, O.; Mansouri, M. Patients’ Perceptions of Integrating AI into Healthcare: Systems Thinking Approach. In Proceedings of the 2022 IEEE International Symposium on Systems Engineering (ISSE), Vienna, Austria, 24–26 October 2022; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/10005383 (accessed on 23 March 2026).
- Gerke, S.; Minssen, T.; Cohen, G. Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2020; pp. 295–336. [Google Scholar] [CrossRef]
- Siala, H.; Wang, Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc. Sci. Med. 2022, 296, 114782. [Google Scholar] [CrossRef] [PubMed]
- De Micco, F.; Di Palma, G.; Ferorelli, D.; De Benedictis, A.; Tomassini, L.; Tambone, V.; Cingolani, M.; Scendoni, R. Artificial intelligence in healthcare: Transforming patient safety with intelligent systems-A systematic review. Front. Med. 2024, 11, 1522554. [Google Scholar] [CrossRef] [PubMed]
- Lekadir, K.; Frangi, A.F.; Porras, A.R.; Glocker, B.; Cintas, C.; Langlotz, C.P.; Weicken, E.; Asselbergs, F.W.; Prior, F.; Collins, G.S.; et al. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025, 388, e081554. [Google Scholar] [CrossRef] [PubMed]
- Cross, J.L.; Choma, M.A.; Onofrey, J.A. Bias in medical AI: Implications for clinical decision-making. PLoS Digit. Health 2024, 3, e0000651. [Google Scholar] [CrossRef]
- Adegbesan, A.; Akingbola, A.; Ojo, O.; Jessica, O.U.; Alao, U.H.; Shagaya, U.; Adewole, O.; Abdullahi, O. Ethical Challenges in the Integration of Artificial Intelligence in Palliative Care. J. Med. Surg. Public Health 2024, 4, 100158. [Google Scholar] [CrossRef]
- Karimian, G.; Petelos, E.; Evers, S.M.A.A. The ethical issues of the application of artificial intelligence in healthcare: A systematic scoping review. AI Ethics 2022, 2, 539–551. [Google Scholar] [CrossRef]
- Chen, X.; Wang, X.; Qu, Y. Constructing Ethical AI Based on the “Human-in-the-Loop” System. Systems 2023, 11, 548. [Google Scholar] [CrossRef]
- Sezgin, E. Artificial intelligence in healthcare: Complementing, not replacing, doctors and healthcare providers. Digit. Health 2023, 9, 20552076231186520. [Google Scholar] [CrossRef]
- Mosqueira-Rey, E.; Hernández-Pereira, E.; Alonso-Ríos, D.; Bobes-Bascarán, J.; Fernández-Leal, Á. Human-in-the-loop machine learning: A state of the art. Artif. Intell. Rev. 2022, 56, 3005–3054. [Google Scholar] [CrossRef]
- Tomaszewski, J.E. Overview of the role of artificial intelligence in pathology: The computer as a pathology digital assistant. In Artificial Intelligence and Deep Learning in Pathology; Elsevier: Amsterdam, The Netherlands, 2021; pp. 237–262. [Google Scholar] [CrossRef]
- Leersum, C.M.v.; Maathuis, C. Human centred explainable AI decision-making in healthcare. J. Responsible Technol. 2025, 21, 100108. [Google Scholar] [CrossRef]
- Chustecki, M. Benefits and Risks of AI in Health Care: Narrative Review. Interact. J. Med. Res. 2024, 13, e53616. [Google Scholar] [CrossRef] [PubMed]
- Akingbola, A.; Adeleke, O.; Idris, A.; Adewole, O.; Adegbesan, A. Artificial Intelligence and the Dehumanization of Patient Care. J. Med. Surg. Public Health 2024, 3, 100138. [Google Scholar] [CrossRef]
- European Society of Radiology. Summary of the proceedings of the International Forum 2021: “A more visible radiologist can never be replaced by AI”. Insights Imaging 2022, 13, 43. [Google Scholar] [CrossRef] [PubMed]
- Abbasian Ardakani, A.; Airom, O.; Khorshidi, H.; Bureau, N.J.; Salvi, M.; Molinari, F.; Acharya, U.R. Interpretation of Artificial Intelligence Models in Healthcare: A Pictorial Guide for Clinicians. J. Ultrasound Med. 2024, 43, 1789–1818. [Google Scholar] [CrossRef]
- Sadeghi, Z.; Alizadehsani, R.; Cifci, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Comput. Electr. Eng. 2024, 118, 109370. [Google Scholar] [CrossRef]
- Jung, J.; Kang, S.; Choi, J.; El-Kareh, R.; Lee, H.; Kim, H. Evaluating the impact of explainable AI on clinicians’ decision-making: A study on ICU length of stay prediction. Int. J. Med. Inform. 2025, 201, 105943. [Google Scholar] [CrossRef]
- Bharati, S.; Mondal, M.R.H.; Podder, P. A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? IEEE Trans. Artif. Intell. 2024, 5, 1429–1442. [Google Scholar] [CrossRef]
- Nazar, M.; Alam, M.M.; Yafi, E.; Su’Ud, M.M. A Systematic Review of Human-Computer Interaction and Explainable Artificial Intelligence in Healthcare with Artificial Intelligence Techniques. IEEE Access 2021, 9, 153316–153348. [Google Scholar] [CrossRef]
- Reddy, S. Explainability and artificial intelligence in medicine. Lancet Digit. Health 2022, 4, e214–e215. [Google Scholar] [CrossRef] [PubMed]
- Hamida, S.U.; Chowdhury, M.J.M.; Chakraborty, N.R.; Biswas, K.; Sami, S.K. Exploring the Landscape of Explainable Artificial Intelligence (XAI): A Systematic Review of Techniques and Applications. Big Data Cogn. Comput. 2024, 8, 149. [Google Scholar] [CrossRef]
- Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based Syst. 2023, 263, 110273. [Google Scholar] [CrossRef]
- Kiseleva, A.; Kotzinos, D.; De Hert, P. Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations. Front. Artif. Intell. 2022, 5, 879603. [Google Scholar] [CrossRef]
- Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023, 99, 101805. Available online: https://www.sciencedirect.com/science/article/pii/S1566253523001148 (accessed on 23 March 2026). [CrossRef]
- Roy, S.; Pal, D.; Meena, T. Explainable artificial intelligence to increase transparency for revolutionizing healthcare ecosystem and the road ahead. Netw. Model. Anal. Health Inform. Bioinform. 2024, 13, 4. [Google Scholar] [CrossRef]
- Dritsas, E.; Trigka, M. Application of Deep Learning for Heart Attack Prediction with Explainable Artificial Intelligence. Computers 2024, 13, 244. [Google Scholar] [CrossRef]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
- Vimbi, V.; Shaffi, N.; Mahmud, M. Interpreting artificial intelligence models: A systematic review on the application of LIME and SHAP in Alzheimer’s disease detection. Brain Inform. 2024, 11, 10. [Google Scholar] [CrossRef]
- Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
- Budd, S.; Robinson, E.C.; Kainz, B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med. Image Anal. 2021, 71, 102062. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Gomez, C.; Huang, C.M.; Unberath, M. Explainable medical imaging AI needs human-centered design: Guidelines and evidence from a systematic review. NPJ Digit. Med. 2022, 5, 156. [Google Scholar] [CrossRef]
- Tam, T.Y.C.; Sivarajkumar, S.; Kapoor, S.; Stolyar, A.V.; Polanska, K.; McCarthy, K.R.; Osterhoudt, H.; Wu, X.; Visweswaran, S.; Fu, S.; et al. A framework for human evaluation of large language models in healthcare derived from literature review. NPJ Digit. Med. 2024, 7, 258. [Google Scholar] [CrossRef]
- Yuan, H.; Kang, L.; Li, Y.; Fan, Z. Human-in-the-loop machine learning for healthcare: Current progress and future opportunities in electronic health records. Med. Adv. 2024, 2, 318–322. [Google Scholar] [CrossRef]
- Kabata, F.; Thaldar, D. Human in the loop requirement and AI healthcare applications in low-resource settings: A narrative review. S. Afr. J. Bioeth. Law 2024, 17, 70–73. Available online: https://samajournals.co.za/index.php/sajbl/article/view/1975 (accessed on 23 March 2026).
- Gómez-Carmona, O.; Casado-Mansilla, D.; López-de-Ipiña, D.; García-Zubia, J. Human-in-the-loop machine learning: Reconceptualizing the role of the user in interactive approaches. Internet Things 2024, 25, 101048. [Google Scholar] [CrossRef]
- Kumar, S.; Datta, S.; Singh, V.; Datta, D.; Kumar Singh, S.; Sharma, R. Applications, Challenges, and Future Directions of Human-in-the-Loop Learning. IEEE Access 2024, 12, 75735–75760. [Google Scholar] [CrossRef]
- Sarker, I.H. AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems. SN Comput. Sci. 2022, 3, 158. [Google Scholar] [CrossRef] [PubMed]
- Steidl, M.; Felderer, M.; Ramler, R. The pipeline for the continuous development of artificial intelligence models—Current state of research and practice. J. Syst. Softw. 2023, 199, 111615. [Google Scholar] [CrossRef]
- Ennab, M.; McHeick, H. Enhancing interpretability and accuracy of AI models in healthcare: A comprehensive review on challenges and future directions. Front. Robot. AI 2024, 11, 1444763. [Google Scholar] [CrossRef] [PubMed]
- Adnan, N.; Faizan Ahmed, S.M.; Das, J.K.; Aijaz, S.; Sukhia, R.H.; Hoodbhoy, Z.; Umer, F. Developing an AI-based application for caries index detection on intraoral photographs. Sci. Rep. 2024, 14, 26752. [Google Scholar] [CrossRef]
- Ignesti, G.; Deri, C.; D’Angelo, G.; Pratali, L.; Bruno, A.; Benassi, A.; Salvetti, O.; Moroni, D.; Martinelli, M. Deep learning methods for point-of-care ultrasound examination. In Proceedings of the 17th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2023, Bangkok, Thailand, 8–10 November 2023; pp. 435–440. Available online: https://ieeexplore.ieee.org/document/10472834 (accessed on 23 March 2026).
- Lee, M.H.; Siewiorek, D.P.; Smailagic, A.; Bernardino, A.; Bermúdez I Badia, S. Towards Efficient Annotations for a Human-AI Collaborative, Clinical Decision Support System: A Case Study on Physical Stroke Rehabilitation Assessment. In Proceedings of the International Conference on Intelligent User Interfaces, Proceedings IUI; Association for Computing Machinery: New York, NY, USA, 2022; pp. 4–14. Available online: https://ink.library.smu.edu.sg/sis_research/7307/ (accessed on 23 March 2026).
- Jin, L.; Yang, J.; Kuang, K.; Ni, B.; Gao, Y.; Sun, Y.; Gao, P.; Ma, W.; Tan, M.; Kang, H.; et al. Deep-learning-assisted detection and segmentation of rib fractures from CT scans: Development and validation of FracNet. EBioMedicine 2020, 62, 103106. [Google Scholar] [CrossRef]
- Ramesh, P.V.; Subramaniam, T.; Ray, P.; Devadas, A.K.; Ramesh, S.V.; Ansar, S.M.; Ramesh, M.K.; Rajasekaran, R.; Parthasarathi, S. Utilizing human intelligence in artificial intelligence for detecting glaucomatous fundus images using human-in-the-loop machine learning. Indian J. Ophthalmol. 2022, 70, 1131–1138. [Google Scholar] [CrossRef]
- Huang, Z.; Wang, X.; Liu, X.; Li, J.; Hu, X.; Yu, Q.; Kuang, G.; Xiong, N.; Gao, Y. Human-in-the-loop machine learning-based quantitative assessment of hemifacial spasm based on volumetric interpolated breath-hold examination MR. Br. J. Radiol. 2025, 98, 562–570. [Google Scholar] [CrossRef]
- Yu, R.; Jiang, K.W.; Bao, J.; Hou, Y.; Yi, Y.; Wu, D.; Song, Y.; Hu, C.H.; Yang, G.; Zhang, Y.D. PI-RADS(AI): Introducing a new human-in-the-loop AI model for prostate cancer diagnosis based on MRI. Br. J. Cancer 2023, 128, 1019–1029. [Google Scholar] [CrossRef]
- Zhou, T.; Li, L.; Bredell, G.; Li, J.; Unkelbach, J.; Konukoglu, E. Volumetric memory network for interactive medical image segmentation. Med. Image Anal. 2023, 83, 102599. [Google Scholar] [CrossRef]
- Busch, F.; Xu, L.; Sushko, D.; Weidlich, M.; Truhn, D.; Müller-Franzes, G.; Heimer, M.M.; Niehues, S.M.; Makowski, M.R.; Hinsche, M.; et al. Dual center validation of deep learning for automated multi-label segmentation of thoracic anatomy in bedside chest radiographs. Comput. Methods Programs Biomed. 2023, 234, 107505. [Google Scholar] [CrossRef]
- Talaat, F.M.; Elnaggar, A.R.; Shaban, W.M.; Shehata, M.; Elhosseini, M. CardioRiskNet: A Hybrid AI-Based Model for Explainable Risk Prediction and Prognosis in Cardiovascular Disease. Bioengineering 2024, 11, 822. [Google Scholar] [CrossRef] [PubMed]
- Chandler, C.; Foltz, P.W.; Elvevåg, B. Improving the Applicability of AI for Psychiatric Applications through Human-in-the-loop Methodologies. Schizophr. Bull. 2022, 48, 949–957. [Google Scholar] [CrossRef] [PubMed]
- Brandenburg, J.M.; Jenke, A.C.; Stern, A.; Daum, M.T.J.; Schulze, A.; Younis, R.; Petrynowski, P.; Davitashvili, T.; Vanat, V.; Bhasker, N.; et al. Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: A prospective annotation study. Surg. Endosc. 2023, 37, 8577–8593. [Google Scholar] [CrossRef] [PubMed]
- Banumathi, K.; Venkatesan, L.; Benjamin, L.S.; Vijayalakshmi, K.; Satchi, N.S. Reinforcement Learning in Personalized Medicine: A Comprehensive Review of Treatment Optimization Strategies. Cureus 2025, 17, e82756. [Google Scholar] [CrossRef]
- Tang, S.; Modi, A.; Sjoding, M.W.; Wiens, J. Clinician-in-The-loop decision making: Reinforcement learning with near-optimal set-valued policies. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020; JMLR: Norfolk, MA, USA, 2020; pp. 9329–9338. Available online: https://arxiv.org/abs/2007.12678 (accessed on 23 March 2026).
- Washington, P. A Perspective on Crowdsourcing and Human-in-the-Loop Workflows in Precision Health. J. Med. Internet Res. 2024, 26, e51138. [Google Scholar] [CrossRef]
- Roe, K.D.; Jawa, V.; Zhang, X.; Chute, C.G.; Epstein, J.A.; Matelsky, J.; Shpitser, I.; Taylor, C.O. Feature engineering with clinical expert knowledge: A case study assessment of machine learning model complexity and performance. PLoS ONE 2020, 15, e0231300. [Google Scholar] [CrossRef]
- Wu, Y.; Liu, Y.; Yang, Y.; Yao, M.S.; Yang, W.; Shi, X.; Yang, L.; Li, D.; Liu, Y.; Yin, S.; et al. A concept-based interpretable model for the diagnosis of choroid neoplasias using multimodal data. Nat. Commun. 2025, 16, 3504. [Google Scholar] [CrossRef]
- Subba, B.; Toufiq, M.; Omi, F.; Yurieva, M.; Khan, T.; Rinchai, D.; Palucka, K.; Chaussabel, D. Human-augmented large language model-driven selection of glutathione peroxidase 4 as a candidate blood transcriptional biomarker for circulating erythroid cells. Sci. Rep. 2024, 14, 23225. [Google Scholar] [CrossRef]
- Bodén, A.C.S.; Molin, J.; Garvin, S.; West, R.A.; Lundström, C.; Treanor, D. The human-in-the-loop: An evaluation of pathologists’ interaction with artificial intelligence in clinical practice. Histopathology 2021, 79, 210–218. [Google Scholar] [CrossRef]
- Lee, S.; Lee, J.; Park, J.; Park, J.; Kim, D.; Lee, J.; Oh, J. Deep learning-based natural language processing for detecting medical symptoms and histories in emergency patient triage. Am. J. Emerg. Med. 2024, 77, 29–38. [Google Scholar] [CrossRef]
- Ghani, A. HiLTS©: Human-in-the-Loop Therapeutic System: A Wireless-Enabled Digital Neuromodulation Testbed for Brainwave Entrainment. Technologies 2026, 14, 71. [Google Scholar] [CrossRef]
- Metsch, J.M.; Saranti, A.; Angerschmid, A.; Pfeifer, B.; Klemt, V.; Holzinger, A.; Hauschild, A.C. CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks. J. Biomed. Inform. 2024, 150, 104600. [Google Scholar] [CrossRef] [PubMed]
- Zliobaite, I.; Bifet, A.; Pfahringer, B.; Holmes, G. Active learning with drifting streaming data. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 27–39. [Google Scholar] [CrossRef]
- El-Hasnony, I.M.; Elzeki, O.M.; Alshehri, A.; Salem, H. Multi-Label Active Learning-Based Machine Learning Model for Heart Disease Prediction. Sensors 2022, 22, 1184. [Google Scholar] [CrossRef] [PubMed]
- Ziniu, L.; Ke, X.; Liu, L.; Lanqing, L.; Deheng, Y.; Peilin, Z. Deploying Offline Reinforcement Learning with Human Feedback. arXiv 2023, arXiv:2303.07046. Available online: https://arxiv.org/abs/2303.07046 (accessed on 23 March 2026).
- Wysocki, O.; Davies, J.K.; Vigo, M.; Armstrong, A.C.; Landers, D.; Lee, R.; Freitas, A. Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Artif. Intell. 2023, 316, 103839. [Google Scholar] [CrossRef]
- Mienye, I.D.; Obaido, G.; Jere, N.; Mienye, E.; Aruleba, K.; Emmanuel, I.D.; Ogbuokiri, B. A survey of explainable artificial intelligence in healthcare: Concepts, applications, and challenges. Inform. Med. Unlocked 2024, 51, 101587. [Google Scholar] [CrossRef]
- Steffny, L.; Dahlem, N.; Reichl, L.; Gisa, K.; Greff, T.; Werth, D. Design of a Human-in-the-Loop Centered AI-Based Clinical Decision Support System for Professional Care Planning. In HHAI 2023: Augmenting Human Intellect; IOS Press: Amsterdam, The Netherlands, 2023. [Google Scholar] [CrossRef]
- Zhang, S.; Yu, J.; Xu, X.; Yin, C.; Lu, Y.; Yao, B.; Tory, M.; Padilla, L.M.; Caterino, J.; Zhang, P.; et al. Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis. In Proceedings of the 2024 CHI Conference on Human Factors in Computing System; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
- Sendak, M.; Elish, M.C.; Gao, M.; Futoma, J.; Ratliff, W.; Nichols, M.; Bedoya, A.; Balu, S.; O’Brien, C. The human body is a black box. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2020; pp. 99–109. [Google Scholar] [CrossRef]
- van Voorst, R. Challenges and Limitations of Human Oversight in Ethical Artificial Intelligence Implementation in Health Care: Balancing Digital Literacy and Professional Strain. Mayo Clin. Proc. Digit. Health 2024, 2, 559–563. [Google Scholar] [CrossRef]
- Hau, D. Update on Software as a Medical Device (SaMD): The TGA and IMDRF Perspectives; Therapeutic Goods Administration: Canberra, Australia, 2017. Available online: https://www.tga.gov.au/sites/default/files/update-on-software-as-a-medical-device-samd.pdf (accessed on 15 March 2026).
- OpenRegulatory. FDA Risk Classification for Software as a Medical Device (SaMD). Available online: https://openregulatory.com/articles/fda-risk-classification-for-software-as-a-medical-device-samd (accessed on 15 March 2026).
- Harmon, R.; Williams, P.A.H.; McCauley, V.B. Software as a Medical Device (SaMD): Useful or Useless Term? In Proceedings of the 54th Hawaii International Conference on System Sciences; University of Hawai’i at Mānoa: Honolulu, HI, USA, 2021; Available online: https://researchnow-admin.flinders.edu.au/ws/portalfiles/portal/35103812/Hermon_Williams_McCauley_SaMD_2021_Paper_0367.pdf (accessed on 23 March 2026).
- Harris, S.; Bonnici, T.; Keen, T.; Lilaonitkul, W.; White, M.J.; Swanepoel, N. Clinical deployment environments: Five pillars of translational machine learning for health. Front. Digit. Health 2022, 4, 939292. [Google Scholar] [CrossRef]
- Bayram, F.; Ahmed, B.S.; Kassler, A. From concept drift to model degradation: An overview on performance-aware drift detectors. Knowl.-Based Syst. 2022, 245, 108632. [Google Scholar] [CrossRef]
- Aasvang, E.K.; Meyhoff, C.S. The future of postoperative vital sign monitoring in general wards: Improving patient safety through continuous artificial intelligence-enabled alert formation and reduction. Curr. Opin. Anaesthesiol. 2023, 36, 683–690. [Google Scholar] [CrossRef]
- Aboutalebi, H.; Pavlova, M.; Shafiee, M.J.; Florea, A.; Hryniowski, A.; Wong, A. COVID-Net Biochem: An explainability-driven framework to building machine learning models for predicting survival and kidney injury of COVID-19 patients from clinical and biochemistry data. Sci. Rep. 2023, 13, 17001. [Google Scholar] [CrossRef] [PubMed]
- Sheu, R.K.; Pardeshi, M.S.; Pai, K.C.; Chen, L.C.; Wu, C.L.; Chen, W.C. Interpretable Classification of Pneumonia Infection Using eXplainable AI (XAI-ICP). IEEE Access 2023, 11, 28896–28919. [Google Scholar] [CrossRef]
- Ju, Y.; Waugh, J.L.S.; Singh, S.; Rusin, C.G.; Patel, A.B.; Jain, P.N. A multimodal deep learning tool for detection of junctional ectopic tachycardia in children with congenital heart disease. Heart Rhythm O2 2024, 5, 452–459. [Google Scholar] [CrossRef]
- Gavai, A.K.; van Hillegersberg, J. AI-driven personalized nutrition: RAG-based digital health solution for obesity and type 2 diabetes. PLoS Digit. Health 2025, 4, e0000758. [Google Scholar] [CrossRef] [PubMed]
- Bahani, K.; Moujabbir, M.; Ramdani, M. An accurate fuzzy rule-based classification systems for heart disease diagnosis. Sci. Afr. 2021, 14, e01019. [Google Scholar] [CrossRef]
- Deshmukh, A.; Kallivalappil, N.; D’Souza, K.; Kadam, C. AL-XAI-MERS: Unveiling Alzheimer’s Mysteries with Explainable AI. In Proceedings of the 2nd International Conference on Emerging Trends in Information Technology and Engineering, ICETITE 2024, Vellore, India, 22–23 February 2024; Available online: https://ieeexplore.ieee.org/document/10493489 (accessed on 23 March 2026).
- Bonde, A.; Lorenzen, S.; Brixen, G.; Troelsen, A.; Sillesen, M. Assessing the utility of deep neural networks in detecting superficial surgical site infections from free text electronic health record data. Front. Digit. Health 2023, 5, 1249835. [Google Scholar] [CrossRef]
- Panigutti, C.; Beretta, A.; Fadda, D.; Giannotti, F.; Pedreschi, D.; Perotti, A.; Rinzivillo, S. Co-design of Human-centered, Explainable AI for Clinical Decision Support. ACM Trans. Interact. Intell. Syst. 2023, 13, 21. [Google Scholar] [CrossRef]
- Yang, Y.; Truong, N.D.; Maher, C.; Nikpour, A.; Kavehei, O. Continental generalization of a human-in-the-loop AI system for clinical seizure recognition. Expert Syst. Appl. 2022, 207, 118083. [Google Scholar] [CrossRef]
- Wani, N.A.; Kumar, R.; Bedi, J. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence. Comput. Methods Programs Biomed. 2024, 243, 107879. [Google Scholar] [CrossRef] [PubMed]
- Yu, H.Q.; Alaba, A.; Eziefuna, E. Evaluation of Integrated XAI Frameworks for Explaining Disease Prediction Models in Healthcare. In Internet of Things of Big Data for Healthcare; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2024; pp. 14–28. Available online: https://link.springer.com/chapter/10.1007/978-3-031-52216-1_2 (accessed on 23 March 2026).
- Awais, M.; Ghayvat, H.; Krishnan Pandarathodiyil, A.; Nabillah Ghani, W.M.; Ramanathan, A.; Pandya, S.; Walter, N.; Naufal Saad, M.; Zain, R.B.; Faye, I. Healthcare professional in the loop (HPIL): Classification of standard and oral cancer-causing anomalous regions of oral cavity using textural analysis technique in autofluorescence imaging. Sensors 2020, 20, 5780. [Google Scholar] [CrossRef] [PubMed]
- Saranya, A.; Narayan, S. Risk Prediction of Heart Disease using Deep SHAP Techniques. In Proceedings of the 2nd International Conference on Advancement in Computation and Computer Technologies, InCACCT 2024, Gharuan, India, 2–3 May 2024; pp. 332–336. Available online: https://ieeexplore.ieee.org/document/10551212 (accessed on 23 March 2026).
- Nafisah, S.I.; Muhammad, G. Tuberculosis detection in chest radiograph using convolutional neural network architecture and explainable artificial intelligence. Neural Comput. Appl. 2022, 36, 111–131. [Google Scholar] [CrossRef] [PubMed]
- Hatherley, J.; Sparrow, R. Diachronic and synchronic variation in the performance of adaptive machine learning systems: The ethical challenges. J. Am. Med. Inform. Assoc. 2023, 30, 361–366. [Google Scholar] [CrossRef] [PubMed]
- Pulicharla, M.R. Detecting and addressing model drift: Automated monitoring and real-time retraining in ML pipelines. World J. Adv. Res. Rev. 2019, 3, 147–152. [Google Scholar] [CrossRef]

| Ref | Year | Main Task | Scope | Key Gaps |
|---|---|---|---|---|
| [41] | 2021 | Reviews HITL and active learning in medical imaging | Focused on DL methods for classification and segmentation tasks | Does not cover non-imaging applications, mostly prototype-level studies, needs deployment focus |
| [42] | 2022 | Reviews HITL in XAI through human-centred design | Systematic analysis of usability and transparency in medical imaging AI | Narrow focus on imaging, needs economic and deployment considerations, few studies measure the actual clinical decision impact |
| [43] | 2024 | Presents a framework for human evaluation of clinical large language models (LLMs) | Evaluates trust, quality, safety, and reasoning of LLM outputs | Needs HITL during training or annotation, narrow to evaluation, framework not yet prospectively validated, limited LLM deployment data |
| [44] | 2024 | Classifies HITL roles across the ML lifecycle | Broad coverage: annotation, training, validation | Only covers opportunities for Electronic Health Records, needs practical integration and ethical discussions, conceptual, minimal empirical validation in clinical environments |
| [45] | 2024 | Analyses ethical and regulatory aspects of HITL in low-resource settings | HITL is a governance necessity for safety and fairness | No technical taxonomy, purely normative, needs implementation examples or data, needs practical integration pathways |
| This Work | 2026 | Presents post-deployment HITL framework for clinical settings | HITL design enabling real-time expert modification of the model without model retraining | Prospective deployment evaluation is pending |
| HITL Approaches | Ref | Clinical Domain | Interaction Type | Deployment Setting | AI Model | XAI Method | Evaluation Metrics |
|---|---|---|---|---|---|---|---|
| Post-deployment HITL with retraining | [72] | Kidney cancer | Human feedback on counterfactuals | Synthetic and PPI dataset | GNN | GNNExplainer | Sensitivity: 0.75; Specificity: 0.74 |
| | [88] | COVID-19 survival and acute kidney injury | Human-in-the-loop retraining based on clinician feedback | Hospital (Stony Brook) | eXtreme Gradient Boosting (XGB), LightGBM, CatBoost, RF, LR | GSInquire | XGBoost performs best; Accuracy: 92.30% (survival), 88.05% (AKI) |
| | [89] | Pneumonia infection | Clinician-in-the-loop transfer learning | Multi-centre CXR datasets | DCNN | SHAP | Accuracy: 92.14% (independent learning), 93.29% (TL) |
| | [90] | Pediatric cardiac arrhythmia | Clinician review and model retraining | ECG data, Texas Children's Hospital | 5-layer CNN | LIME | AUC ROC: 0.95 |
| | [91] | Dietary recommendation system | Feedback on AI nutrition suggestions | User-facing recommendation system | LLaMA 3 model | Virtual nutritionist that provides structured, evidence-backed explanations | Success rate: 80.1% (nutritional), 92% (sustainability) |
| Post-deployment HITL without retraining (override) | [92] | Heart disease | Override rule-based HITL | Clinical dataset (Cleveland & CombinedHunVada) | FCRLC (IF-THEN) | Rule-based | Accuracy: 83.17% (CL), 80.46% (CO) |
| | [93] | Alzheimer's diagnosis | Human override and validation | OASIS dataset | DenseNet121 and MobileNetV2 | LIME | Accuracy: 88% (D), 93% (M) |
| | [94] | Surgical site infections | Clinician review | 11 hospitals, Denmark | NLP | Human review | AUC ROC: 0.989 |
| | [95] | Predicting next visit time, diagnosis, and medications | Human-in-the-loop override and explanation | MIMIC-IV ICU data | RNN | DoctorXAI, prototype UI for explanation | Max avg fidelity: 0.90 ± 0.03 |
| | [96] | Seizure recognition | Clinician feedback on predictions | RPAH EEG | Conv-LSTM | Occlusion-based approach | Sensitivity: 92.19% |
| | [97] | Lung cancer detection | Human feedback for model correction | Survey Lung Cancer dataset | Hybrid (ConvXGB) model | SHAP | Accuracy: 97.43% |
| | [98] | Evaluation of XAI frameworks in disease prediction | Human evaluation of explanations | Prostate cancer, pneumonia CXR, medical Q&A | RF, LR (tabular); CNN (image) | LIME, SHAP | AUC ROC (12-s window): 0.84 |
| | [99] | Oral cancer | Graphical User Interface-based clinician review | VELscope imaging (own data) | KNN | GUI visualisation | Accuracy: 83% |
| | [100] | Heart disease risk | Clinician-guided retraining | UCI dataset | CNN | DeepSHAP | Accuracy: 90% |
| | [101] | Tuberculosis detection | Clinician feedback | CXR: Montgomery, Shenzhen, Belarus | CNN | Visualisation | Accuracy: 99.1% |
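The override pattern that recurs in the table above can be illustrated with a minimal, hypothetical sketch. The function name, the review threshold, and the routing logic below are illustrative assumptions of ours, not taken from any cited study: a confident model prediction passes through, a clinician-supplied label always takes precedence, and low-confidence cases are deferred to a human review queue rather than acted on automatically.

```python
from typing import Optional, Tuple

def resolve_decision(
    ai_prediction: int,
    confidence: float,
    clinician_override: Optional[int] = None,
    review_threshold: float = 0.8,
) -> Tuple[Optional[int], str]:
    """Combine a deployed model's output with optional clinician input.

    Returns (final_label, source), where source records who decided:
    'clinician' (override), 'model' (confident prediction), or
    'review' (deferred; final_label is None until a human signs off).
    """
    if clinician_override is not None:
        return clinician_override, "clinician"  # expert input always wins
    if confidence >= review_threshold:
        return ai_prediction, "model"           # accept confident output
    return None, "review"                       # route to a review queue

# A confident prediction passes through, a low-confidence one is
# deferred, and a clinician override replaces the model's label.
print(resolve_decision(1, 0.93))                        # (1, 'model')
print(resolve_decision(1, 0.55))                        # (None, 'review')
print(resolve_decision(1, 0.55, clinician_override=0))  # (0, 'clinician')
```

Because the model itself is never modified, this style of override leaves the validated core unchanged, which is the regulatory advantage noted in the next table.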
| HITL Approaches | Ref | Advantages | Limitations |
|---|---|---|---|
| With model retraining | [72,88,89,90,91] | Improved accuracy over time, adaptability to evolving data, enhanced clinical relevance, error correction and bias reduction, knowledge transfer, improved trust and transparency | High computational and resource costs, data drift risk between retraining cycles, human fatigue and variability, integration complexity, version management and validation overhead, delayed responsiveness |
| Without model retraining (override) | [92,93,94,95,96,97,98,99,100,101] | Real-time adaptation, lower computational burden, rapid feedback integration, improved safety and control, simpler regulatory compliance as the core model is unchanged, clinician trust and oversight | Increased cognitive load, risk of feedback fatigue, and inconsistent handling of edge cases |
| Without model retraining (moderation), proposed frameworks | - | Maintains autonomous operation while allowing human guidance, allows immediate adjustment of model behaviour without delay, no heavy retraining or optimisation steps needed, potentially simpler regulatory management compared with systems requiring frequent model retraining | Human influence may be limited to predefined moderation mechanisms, may require well-designed interfaces or protocols to be effective, moderation logic adds complexity to the inference pipeline, poorly calibrated moderation may suppress important alerts |
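One way to realise the moderation row above is feature upgrade/downgrade at inference time, sketched here under our own assumptions (a simple logistic scoring model with clinician-set per-feature multipliers; this is not a design taken from any cited study). The frozen model's weights stay fixed, and the expert scales individual feature contributions up or down through additional control inputs, so no retraining or optimisation step is involved.

```python
import math
from typing import Sequence

def moderated_logit(
    weights: Sequence[float],
    bias: float,
    features: Sequence[float],
    moderation: Sequence[float],
) -> float:
    """Linear score in which each feature contribution w_i * x_i is scaled
    by a clinician-chosen multiplier m_i (1.0 = unchanged, <1 downgrades,
    >1 upgrades, 0 suppresses the feature entirely)."""
    return bias + sum(w * m * x for w, m, x in zip(weights, moderation, features))

def moderated_probability(
    weights: Sequence[float],
    bias: float,
    features: Sequence[float],
    moderation: Sequence[float],
) -> float:
    """Sigmoid of the moderated score; the trained weights are never changed."""
    return 1.0 / (1.0 + math.exp(-moderated_logit(weights, bias, features, moderation)))

# Downgrading the second (say, noisy) feature lowers the risk estimate
# without touching the trained weights.
baseline = moderated_probability([0.8, 1.5], -0.5, [1.0, 1.0], [1.0, 1.0])
downgraded = moderated_probability([0.8, 1.5], -0.5, [1.0, 1.0], [1.0, 0.2])
print(round(baseline, 3), round(downgraded, 3))
```

The design choice this illustrates is the one flagged as a limitation in the table: human influence is confined to the predefined multipliers, so the moderation interface must expose the features clinicians actually need to adjust, and poorly chosen multipliers can suppress contributions that drive important alerts.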
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Das, D.; Adams, S.D.; Corva, D.M.; Bucknall, T.K.; Kouzani, A.Z. Advances in Closed-Loop Artificial Intelligence for Healthcare. Electronics 2026, 15, 1396. https://doi.org/10.3390/electronics15071396

