Explainable Artificial Intelligence (XAI) in Medical Imaging

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 November 2026 | Viewed by 12895

Special Issue Editors


Guest Editor
Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: cardiac

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) in medical imaging has the potential to transform diagnostics, prognosis, and clinical decision-making. However, the widespread adoption of AI in clinical settings is commonly limited by a lack of transparency and interpretability. Explainable Artificial Intelligence (XAI) addresses this critical need by providing insight into how AI systems arrive at specific predictions, with the aim of increasing clinician trust and facilitating regulatory compliance.

Current XAI techniques—such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM)—have demonstrated utility across various imaging modalities. Yet, significant knowledge gaps remain. These include the lack of standardized benchmarks for evaluating explanation quality, insufficient integration of clinical context in interpretability frameworks, and limited understanding of how explanations influence clinical decision-making.
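
For readers newer to these methods, the sketch below illustrates the core Grad-CAM recipe (a minimal, illustrative example, not a requirement of this Special Issue): class-score gradients are average-pooled into per-channel weights, and the weighted sum of the last convolutional feature maps becomes a relevance heatmap. The ResNet-18 backbone, target layer, and random input are assumptions standing in for whatever model and scan a submission would actually use.

    # Minimal Grad-CAM sketch in PyTorch (illustrative only).
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()
    target_layer = model.layer4[-1]                      # last residual block (assumed choice)

    activations, gradients = {}, {}
    target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

    def grad_cam(x, class_idx=None):
        """Return an [H, W] heatmap in [0, 1] for an image tensor of shape [1, 3, H, W]."""
        scores = model(x)
        idx = int(scores.argmax()) if class_idx is None else class_idx
        model.zero_grad()
        scores[0, idx].backward()
        weights = gradients["g"].mean(dim=(2, 3), keepdim=True)       # pooled class-score gradients
        cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False).squeeze()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    heatmap = grad_cam(torch.randn(1, 3, 224, 224))      # dummy image stands in for a scan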

This Special Issue welcomes original research, reviews, and perspective articles on novel XAI methods as applied to medical imaging. Within this context, topics of interest for this Special Issue include, but are not limited to, the following:

  • Applications of XAI to emerging clinical problems
  • Critical analyses of current XAI pitfalls and limitations
  • Development and validation of new XAI techniques
  • Ethical and regulatory considerations surrounding the clinical use of XAI

Successful submissions will provide rigorous technical validation, incorporate clinical relevance, and offer transparent methodological reporting. We encourage interdisciplinary collaborations between computer scientists and clinicians to ensure both algorithmic robustness and real-world applicability.

Authors are advised to clearly articulate the interpretability goals of their work and to align their submissions with the broader mission of safe, equitable, and trustworthy AI in medicine.

Dr. Quincy A. Hathaway
Dr. Yashbir Singh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Explainable Artificial Intelligence (XAI)
  • medical imaging
  • SHAP/LIME/Grad-CAM

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

Jump to: Other

16 pages, 2231 KB  
Article
Evaluating Explainability: A Framework for Systematic Assessment of Explainable AI Features in Medical Imaging
by Miguel A. Lago, Ghada Zamzmi, Brandon Eich and Jana G. Delfino
Bioengineering 2026, 13(1), 111; https://doi.org/10.3390/bioengineering13010111 - 16 Jan 2026
Cited by 2 | Viewed by 1339
Abstract
Explainability features are intended to provide insight into the internal mechanisms of an Artificial Intelligence (AI) device, but there is a lack of evaluation techniques for assessing the quality of the provided explanations. We propose a framework to assess and report explainable AI features in medical images. Our evaluation framework for AI explainability is based on four criteria that relate to the particular needs of AI-enabled medical devices: (1) consistency quantifies the variability of explanations across similar inputs; (2) plausibility estimates how close the explanation is to the ground truth; (3) fidelity assesses the alignment between the explanation and the model's internal mechanisms; and (4) usefulness evaluates the impact of the explanation on task performance. Finally, we developed a scorecard for AI explainability methods in medical imaging that serves as a complete description and evaluation to accompany this type of device. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps for the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for task-relevant scenarios. This framework establishes criteria through which the quality of explanations provided by medical devices can be quantified. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
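
As a rough illustration of the first criterion above (a sketch under assumed interfaces, not the authors' implementation), consistency can be approximated as the average rank correlation between the heatmap of an image and the heatmaps of slightly noised copies. The explain callable, noise level, and toy explainer below are placeholder assumptions.

    # Hedged consistency sketch: how much does an explanation change under small input noise?
    import numpy as np
    from scipy.stats import spearmanr

    def consistency_score(image, explain, n_perturbations=10, noise_std=0.01, seed=0):
        """Mean rank correlation between the heatmap of `image` and the heatmaps of
        slightly noised copies; values near 1.0 suggest stable explanations."""
        rng = np.random.default_rng(seed)
        reference = explain(image).ravel()
        rhos = []
        for _ in range(n_perturbations):
            noisy = image + rng.normal(0.0, noise_std, size=image.shape)
            rho, _ = spearmanr(reference, explain(noisy).ravel())
            rhos.append(rho)
        return float(np.mean(rhos))

    # Toy check with a stand-in "explainer" (gradient magnitude of the image itself).
    toy_explain = lambda img: np.hypot(*np.gradient(img))
    print(consistency_score(np.random.rand(64, 64), toy_explain))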

Other

Jump to: Research

33 pages, 14786 KB  
Systematic Review
Systematic Review of Artificial Intelligence and Electrocardiography for Cardiovascular Disease Diagnosis
by Hernando Velandia, Aldo Pardo, María Isabel Vera and Miguel Vera
Bioengineering 2025, 12(11), 1248; https://doi.org/10.3390/bioengineering12111248 - 14 Nov 2025
Cited by 4 | Viewed by 4807
Abstract
Cardiovascular diseases (CVDs) are the leading cause of death globally. Electrocardiograms (ECGs) are crucial diagnostic tools; however, their traditional interpretations exhibit limited sensitivity and reproducibility. This systematic review discusses the recent advances in artificial intelligence (AI), including deep learning and machine learning, applied to ECG analysis for CVD detection. It examines over 100 studies from 2019 to 2025, classifying AI applications by disease type (heart failure, myocardial infarction, and atrial fibrillation), model architecture (convolutional neural networks, long short-term memory, and hybrid models), and methodological innovation (signal denoising, synthetic data generation, and explainable AI). Comparative tables and conceptual figures highlight performance metrics, dataset characteristics, and implementation challenges. Our findings indicated that AI models outperform traditional methods, especially in terms of detecting subclinical conditions and enabling real-time monitoring via wearable technologies. Nonetheless, issues such as demographic bias, lack of dataset diversity, and regulatory hurdles persist. The review concludes by offering actionable recommendations to enhance clinical translation, equity, and transparency in AI-ECG applications. These insights aim to guide interdisciplinary efforts toward the safe and effective adoption of AI in cardiovascular diagnostics. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
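
For orientation, the sketch below shows the general shape of one architecture family covered by the review: a small 1D convolutional network over a single-lead ECG window. The layer sizes, window length, and two-class output are illustrative assumptions and are not taken from any of the reviewed studies.

    # Minimal 1D-CNN sketch for ECG classification (illustrative sizes only).
    import torch
    import torch.nn as nn

    class ECGConvNet(nn.Module):
        """Tiny 1D CNN over a single-lead ECG window."""
        def __init__(self, n_classes=2, n_samples=1000):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            )
            self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

        def forward(self, x):                      # x: [batch, 1, n_samples]
            return self.classifier(self.features(x).flatten(1))

    logits = ECGConvNet()(torch.randn(8, 1, 1000))   # e.g., a 4 s window at 250 Hz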

18 pages, 1752 KB  
Systematic Review
Beyond Post hoc Explanations: A Comprehensive Framework for Accountable AI in Medical Imaging Through Transparency, Interpretability, and Explainability
by Yashbir Singh, Quincy A. Hathaway, Varekan Keishing, Sara Salehi, Yujia Wei, Natally Horvat, Diana V. Vera-Garcia, Ashok Choudhary, Almurtadha Mula Kh, Emilio Quaia and Jesper B. Andersen
Bioengineering 2025, 12(8), 879; https://doi.org/10.3390/bioengineering12080879 - 15 Aug 2025
Cited by 27 | Viewed by 5888
Abstract
The integration of artificial intelligence (AI) in medical imaging has revolutionized diagnostic capabilities, yet the black-box nature of deep learning models poses significant challenges for clinical adoption. Current explainable AI (XAI) approaches, including SHAP, LIME, and Grad-CAM, predominantly focus on post hoc explanations that may inadvertently undermine clinical decision-making by providing misleading confidence in AI outputs. This paper presents a systematic review and meta-analysis of 67 studies (covering 23 radiology, 19 pathology, and 25 ophthalmology applications) evaluating XAI fidelity, stability, and performance trade-offs across medical imaging modalities. Our meta-analysis of 847 initially identified studies reveals that LIME achieves superior fidelity (0.81, 95% CI: 0.78–0.84) compared to SHAP (0.38, 95% CI: 0.35–0.41) and Grad-CAM (0.54, 95% CI: 0.51–0.57) across all modalities. Post hoc explanations demonstrated poor stability under noise perturbation, with SHAP showing 53% degradation in ophthalmology applications (ρ = 0.42 at 10% noise) compared to 11% in radiology (ρ = 0.89). We demonstrate a consistent 5–7% AUC performance penalty for interpretable models but identify modality-specific stability patterns suggesting that tailored XAI approaches are necessary. Based on these empirical findings, we propose a comprehensive three-pillar accountability framework that prioritizes transparency in model development, interpretability in architecture design, and a cautious deployment of post hoc explanations with explicit uncertainty quantification. This approach offers a pathway toward genuinely accountable AI systems that enhance rather than compromise clinical decision-making quality and patient safety. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
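
As a hedged illustration of one way explanation fidelity is commonly probed (a deletion-style test, offered as a sketch rather than this paper's protocol), the snippet below removes the pixels an explanation ranks most relevant and tracks how quickly the model's confidence falls. The predict callable, fill value, and step count are placeholder assumptions.

    # Deletion-style fidelity sketch: a faithful explanation should identify pixels
    # whose removal degrades the model's confidence faster than random removal would.
    import numpy as np

    def deletion_fidelity(image, heatmap, predict, n_steps=10, fill_value=0.0):
        """predict(image) -> scalar confidence for the target class (placeholder API).
        Returns the confidence after each batch of top-ranked pixels is deleted."""
        flat_order = np.argsort(heatmap.ravel())[::-1]          # most relevant pixels first
        work = image.copy()
        confidences = [predict(work)]
        step = max(1, flat_order.size // n_steps)
        for k in range(0, flat_order.size, step):
            idx = np.unravel_index(flat_order[k:k + step], image.shape)
            work[idx] = fill_value                              # delete this batch of pixels
            confidences.append(predict(work))
        return np.array(confidences)   # a steep early drop suggests a faithful explanation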
