Review

eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing

by Nadeesha Hettikankanamage, Niusha Shafiabady, Fiona Chatteur, Robert M. X. Wu, Fareed Ud Din and Jianlong Zhou

1 Design and Creative Technology, Torrens University Australia, 88 Wakefield St., Adelaide, SA 5000, Australia
2 Faculty of Science and Technology (Sydney Campus), Charles Darwin University, 815 George Street, Sydney, NSW 2000, Australia
3 Discipline of Information Technology, Peter Faber Business School, Australian Catholic University, Sydney, NSW 2060, Australia
4 Design and Creative Technology, Torrens University Australia, 46-52 Mountain St., Sydney, NSW 2007, Australia
5 Faculty of Engineering and Information Technology, University of Technology Sydney, 15 Broadway, Sydney, NSW 2007, Australia
6 School of Science and Technology, University of New England, Armidale, NSW 2350, Australia
7 Data Science Institute, University of Technology Sydney, 15 Broadway, Sydney, NSW 2007, Australia
* Author to whom correspondence should be addressed.
Sensors 2025, 25(21), 6649; https://doi.org/10.3390/s25216649
Submission received: 27 September 2025 / Revised: 24 October 2025 / Accepted: 27 October 2025 / Published: 30 October 2025

Abstract

Artificial Intelligence (AI) has made immense progress in recent years across a wide array of application domains, with biomedical imaging and sensing emerging as particularly impactful areas. However, the integration of AI into safety-critical fields, particularly biomedical domains, still faces a major challenge: explainability, owing to the opacity of complex prediction models. Overcoming this obstacle is the remit of eXplainable Artificial Intelligence (XAI), which is widely acknowledged as essential for successfully deploying and accepting AI techniques in practice, since it ensures transparency, fairness, and accountability in decision-making and helps mitigate potential biases. This article provides a systematic cross-domain review of XAI techniques applied to quantitative prediction tasks, with a focus on their methodological relevance and potential adaptation to biomedical imaging and sensing. Following PRISMA guidelines, we analysed 44 Q1 journal articles that applied XAI techniques to prediction tasks on quantitative datasets across different fields and examined how each technique contributed to explaining the predictions. In total, 13 XAI techniques were identified for prediction tasks. SHapley Additive exPlanations (SHAP) appeared in 35 of the 44 articles, reflecting its frequent computational use for feature-importance ranking and model interpretation. Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plots (PDPs), and Permutation Feature Importance (PFI) ranked second, third, and fourth in popularity, respectively. The study also recognises theoretical limitations of SHAP and related model-agnostic methods, such as their additive and causal assumptions, which are particularly critical for heterogeneous biomedical data. Furthermore, a synthesis of the reviewed studies reveals that while many evaluate their explanations computationally, none include structured human-subject usability validation, underscoring an important research gap for clinical translation. Overall, this study offers an integrated understanding of quantitative XAI techniques, identifies methodological and usability gaps for biomedical adaptation, and provides guidance for future research aimed at safe and interpretable AI deployment in biomedical imaging and sensing.
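To make the two most common technique families concrete, the sketch below shows how SHAP attributions and permutation feature importance are typically computed for a tabular quantitative prediction model. It is a minimal illustration assuming the Python shap and scikit-learn libraries; the synthetic dataset, the random-forest model, and all parameter values are placeholders for exposition, not drawn from the reviewed studies.

```python
# Minimal sketch: feature-importance ranking with SHAP and permutation
# feature importance on a generic tabular regression task. The data,
# model, and settings are illustrative placeholders only.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic quantitative dataset standing in for a tabular biomedical table.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP: additive local attributions; TreeExplainer handles tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)        # shape: (n_samples, n_features)
mean_abs_shap = np.abs(shap_values).mean(axis=0)   # global ranking from local values

# Permutation feature importance: model-agnostic drop in predictive score
# when each feature is shuffled, used here as a cross-check on SHAP.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i in np.argsort(mean_abs_shap)[::-1]:
    print(f"feature {i}: mean|SHAP| = {mean_abs_shap[i]:.3f}, "
          f"permutation importance = {perm.importances_mean[i]:.3f}")
```

The two measures play complementary roles: SHAP aggregates additive per-sample attributions into a global ranking, while permutation importance measures the loss of predictive score when a feature is scrambled, a useful sanity check given the additive assumptions noted above.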
Keywords: eXplainable artificial intelligence; machine learning; quantitative prediction; PRISMA; SHapley Additive exPlanations (SHAP); Local Interpretable Model-Agnostic Explanations (LIME); partial dependence plots (PDPs); permutation feature importance (PFI)

Share and Cite

MDPI and ACS Style

Hettikankanamage, N.; Shafiabady, N.; Chatteur, F.; Wu, R.M.X.; Ud Din, F.; Zhou, J. eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing. Sensors 2025, 25, 6649. https://doi.org/10.3390/s25216649


