eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing
Abstract
1. Introduction
- Identify and categorise XAI techniques applied for quantitative prediction tasks across diverse domains, and their relevance to biomedical imaging and sensing.
- Highlight the advancements, challenges, and benefits of the XAI techniques applied for numerical prediction tasks.
- Identify the gaps and provide future research directions in applying XAI techniques for predictions.
2. Research Methodology
2.1. Search Strategy
2.2. Selection Criteria and Quality Assessment
2.3. Data Extraction and Analysis
- The RQ specifically examines the XAI techniques, which aim to provide explanations for the prediction outcomes generated by AI models.
- The RQ concerns predictions involving quantitative data.
- The RQ seeks to identify the contribution of XAI methods to the prediction problems in various domains, how these techniques have interpreted the outcomes, and potentially uncover any limitations associated with their use.
3. Results
3.1. Shapley Additive Explanations (SHAP)
3.2. Local Interpretable Model-Agnostic Explanations (LIME)
3.3. Partial Dependence Plots (PDPs)
3.4. Permutation Feature Importance (PFI)
3.5. Accelerated Model-Agnostic Explanations (AcME)
3.6. Accumulated Local Effects (ALE)
3.7. Explain Like I’m 5 (ELI5)
3.8. EXPLAIN
3.9. IME
3.10. Individual Conditional Expectation (ICE)
3.11. Permutation Importance (PIMP)
3.12. KernelSHAP
3.13. SHAPASH
4. Discussion
4.1. Review Summary
4.2. Challenges in Existing XAI Application and Future Direction
4.3. Usability and Clinical Utility of XAI
4.4. Relevance of XAI to Biomedical Imaging and Sensing
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
| Authors | Year | XAI Method | Classifier | Industry | Application | Data |
|---|---|---|---|---|---|---|
| [1] | 2023 | LIME, SHAP | XGBoost | Healthcare | Diagnosis of COVID-19 | COVID-19 positive and negative patients |
| [59] | 2023 | LIME | RF | Finance | Prediction of stock market prices. | Stock market price |
| [42] | 2023 | LIME, SHAP | ET, XGBoost | Environmental | Prediction of daily pan evaporation. | Air temperature (Ta), solar radiation (Rs), and relative humidity (H) |
| [56] | 2023 | SHAP, AcME, KernelSHAP, PDPs | XGBoost | Miscellaneous | Predict abnormal behaviours, assess the impact of changes in feature values on model predictions, and identify feature importance and ranking. | Chemical features; glass data |
| [45] | 2023 | PDPs, SHAP | GBR | Finance | Predict daily stock prices. | Stock market prices |
| [55] | 2023 | SHAP | Poly, RFR, XGBoost, DNN | Mining | Predict the ash content of coal samples based on the composition data of XRF analysis. | XRF data |
| [43] | 2023 | SHAP | XGBoost | Environmental | Predict the responsibility of environmental factors in changing water quality from eutrophic to hypereutrophic states. | Water quality |
| [47] | 2023 | SHAP | RF | Chemical Engineering | Predict the CO2 adsorption capacity. | CO2 adsorption capacity of PC based on TP |
| [37] | 2023 | SHAP | NGB | Energy | Assess the relationship between hydrogeochemical variables and reservoir temperature. | Hydrogeochemical and reservoir temperature data |
| [30] | 2023 | SHAP | XGBoost | Civil Engineering | Identify important input parameters affecting the liquefaction potential. | Historical post-liquefaction data |
| [25] | 2023 | SHAP, PDPs, PFI | XGBoost | Healthcare | Early diagnosis of chronic kidney disease (CKD). | Haemoglobin, specific gravity, and hypertension |
| [31] | 2023 | SHAP | ANN | Civil Engineering | Probabilistic buckling stress prediction models of steel shear panel dampers. | Steel shear panel damper |
| [32] | 2023 | SHAP | RF | Civil Engineering | Predict seismic drifts in CLT buildings. | Drift demands |
| [53] | 2023 | SHAP | SVR | Materials Science | Predict the martensitic transformation peak temperature and investigate the effects of important features and alloying elements on TP for TiZrHfNiCoCu HESMAs. | Temperature |
| [26] | 2023 | SHAP, LIME, SHAPASH | LGBM | Healthcare | Prediction of Parkinson’s Disease (PD) progression. | Clinical data |
| [49] | 2022 | SHAP | LSTM | Economic | Predict economic growth rates and crises by capturing sequential dependencies within the economic cycle. | GDP growth rates |
| [41] | 2022 | SHAP, LIME | LSTM | Energy | Building load prediction. | Energy consumption data |
| [60] | 2022 | LIME | LSTM | Mechanical Engineering | Predict temperature in the District Heating Systems’ (DHS) supply line. | DHS operation |
| [51] | 2022 | SHAP | LR | Aeronautics | Predicting the failure of components and systems in jet engines. | Run-to-failure trajectories for jet engines |
| [52] | 2022 | SHAP | XGBoost | Aviation | Predict demand of air transportation. | Weather, runway reports, flight data |
| [38] | 2022 | SHAP | GBTs | Energy | Analyse secondary control activation. | Frequency restoration reserve (FRR) activation data |
| [61] | 2022 | LIME | RF | Sports | Analyse the match style and gameplay of the national basketball association (NBA). | NBA gameplay data at a seasonal level |
| [77] | 2022 | SHAP | XGBoost | Healthcare | Predict the malnutrition status of children with CHD using explainable ML methods to provide insight into the model’s predictions and outcomes. | Cohort data |
| [33] | 2022 | SHAP | XGBoost | Civil Engineering | Predict the shear capacity of FRP-RC beams; SHAP is used to identify the most important factors that influence the prediction. | Shear critical FRP-RC beams |
| [39] | 2022 | SHAP | | Energy | Power factor prediction with high accuracy, with interpretability tools for material-oriented insight. | Power factors |
| [44] | 2022 | PDPs, ALE, ICE, SHAP | RF | Environmental | Predict biological stream conditions in the Chesapeake Bay Watershed, USA. | Catchments |
| [27] | 2022 | SHAP | MLP | Healthcare | Prediction of post-operative mortality after any surgery in an emergency setting on elderly patients. | Demographic and clinical data, medical and surgical history, preoperative risk factors, frailty, biochemical blood examination, vital parameters, and operative details |
| [46] | 2022 | SHAP | XGBoost | Finance | Predict financial pressure. | Financial pressure and healthcare stock market volatility |
| [34] | 2022 | SHAP | XGBoost | Civil Engineering | Identify failure mode of RC flat slabs without transverse reinforcement. | 610 groups of data |
| [35] | 2022 | SHAP | XGBoost | Civil Engineering | Predict external wind pressure of a low-rise building in urban-like settings. | External wind pressure |
| [48] | 2022 | SHAP | XGBoost | Chemical Engineering | Predict the maximum absorption wavelength of azo dyes. | Azo molecules |
| [40] | 2022 | SHAP | GTB | Energy | Predict yields and higher heating value of torrefied biomass. | Torrefaction data |
| [29] | 2022 | SHAP | XGBoost | Healthcare | Understand the main logic governing the model prediction. | Blood count |
| [36] | 2022 | SHAP | XGBoost | Civil Engineering | Prediction of one-part alkali-activated material enabled by interpretable machine learning. | Alkali activated material (AAM) |
| [66] | 2021 | PFI | SVM | Civil Engineering | Analyse the mechanical performance of fly ash-based geopolymer concrete with different machine learning techniques. | Fly ash (FA)-based geopolymer concrete, sodium hydroxide (NaOH), and sodium silicate (Na2SiO3) |
| [28] | 2022 | SHAP, LIME, PFI | XGBoost | Healthcare | Understand previously undetected relationships between prognostic variables to make informed clinical decisions and effective interventions. | Clinical data |
| [64] | 2021 | PIMP, PDPs | XGBoost | Finance | Predict financial distress | Accounting records and financial ratios |
| [78] | 2021 | SHAP | RF | Economic | Evaluate the precision of housing-price forecasts. | Housing Prices |
| [65] | 2021 | PDPs | RF | Environmental | Predict hydrogen production. | Hydrogen production |
| [74] | 2020 | SHAP | XGBoost | Healthcare | Early and accurate detection of the onset of acute myocardial infarction (AMI). | Electrocardiogram (ECG) data |
| [7] | 2020 | LIME, SHAP, ELI5 | RF | Energy | Gaining insight into solar photovoltaic power generation forecasting. | Weather |
| [67] | 2020 | PFI | RF | Civil Engineering | Predict structural response of RC slabs exposed to blast loading. | Reinforced concrete slabs exposed to blast loading (ten input parameters) |
| [62] | 2019 | LIME | | Civil Engineering | Explain and evaluate data-driven building energy performance. | Building and BAS data |
| [70] | 2017 | EXPLAIN, IME | RF | Finance | Business-to-business (B2B) sales forecasting. | B2B sales data |
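SHAP is the most frequently used method in the studies catalogued above. As a self-contained illustration of the Shapley-value idea that SHAP approximates, the sketch below (a hypothetical toy model, not drawn from any reviewed study) computes exact Shapley values by enumerating all feature coalitions; practical applications would instead use the `shap` library with a trained classifier such as XGBoost, since exact enumeration scales as 2^n in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside a coalition S are held at the baseline value;
    feasible only for a handful of features (2^n model evaluations).
    """
    n = len(x)
    features = range(n)

    def value(subset):
        # Evaluate the model with coalition features at x, the rest at baseline.
        z = [x[i] if i in subset else baseline[i] for i in features]
        return model(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy model with an interaction term (hypothetical, for illustration only).
model = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
x, baseline = [1.0, 2.0], [0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

The interaction term is split evenly between the two features, which is why SHAP attributions for correlated or interacting features must be interpreted with care, a limitation several of the reviewed studies note.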
References
- Yagin, F.H.; Cicek, İ.B.; Alkhateeb, A.; Yagin, B.; Colak, C.; Azzeh, M.; Akbulut, S. Explainable artificial intelligence model for identifying COVID-19 gene biomarkers. Comput. Biol. Med. 2023, 154, 106619. [Google Scholar] [CrossRef] [PubMed]
- Saleh, M.; AlHamaydeh, M.; Zakaria, M. Shear capacity prediction for reinforced concrete deep beams with web openings using artificial intelligence methods. Eng. Struct. 2023, 280, 115675. [Google Scholar] [CrossRef]
- Ngarambe, J.; Yun, G.Y.; Santamouris, M. The use of artificial intelligence (AI) methods in the prediction of thermal comfort in buildings: Energy implications of AI-based thermal comfort controls. Energy Build. 2020, 211, 109807. [Google Scholar] [CrossRef]
- Yi, Z.; Liang, Z.; Xie, T.; Li, F. Financial risk prediction in supply chain finance based on buyer transaction behavior. Decis. Support Syst. 2023, 170, 113964. [Google Scholar] [CrossRef]
- Shafiabady, N.; Hadjinicolaou, N.; Din, F.U.; Bhandari, B.; Wu, R.M.X.; Vakilian, J. Using Artificial Intelligence (AI) to predict organizational agility. PLoS ONE 2023, 18, e0283066. [Google Scholar] [CrossRef]
- You, D.; Niu, S.; Dong, S.; Yan, H.; Chen, Z.; Wu, D.; Shen, L.; Wu, X. Counterfactual explanation generation with minimal feature boundary. Inf. Sci. 2023, 625, 342–366. [Google Scholar] [CrossRef]
- Kuzlu, M.; Cali, U.; Sharma, V.; Güler, Ö. Gaining insight into solar photovoltaic power generation forecasting utilizing explainable artificial intelligence tools. IEEE Access 2020, 8, 187814–187823. [Google Scholar] [CrossRef]
- Gohel, P.; Singh, P.; Mohanty, M. Explainable AI: Current status and future directions. arXiv 2021, arXiv:2107.07045. [Google Scholar] [CrossRef]
- Holzinger, A.; Goebel, R.; Fong, R.; Moon, T.; Müller, K.-R.; Samek, W. xxAI—Beyond Explainable AI; Springer: Cham, Switzerland, 2022; Available online: https://link.springer.com/bookseries/1244 (accessed on 1 April 2023).
- Houssein, E.H.; Gamal, A.M.; Younis, E.M.G.; Mohamed, E. Explainable artificial intelligence for medical imaging systems using deep learning: A comprehensive review. Clust. Comput. 2025, 28, 469. [Google Scholar] [CrossRef]
- Hou, J.; Sicen, L.; Bie, Y.; Wang, H.; Tan, A.; Luo, L.; Chen, H. Self-explainable ai for medical image analysis: A survey and new outlooks. arXiv 2024, arXiv:2410.02331. [Google Scholar]
- Zhou, J.; Gandomi, A.H.; Chen, F.; Holzinger, A. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics 2021, 10, 593. [Google Scholar] [CrossRef]
- Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of Explainable AI Techniques in Healthcare. Sensors 2023, 23, 634. [Google Scholar] [CrossRef] [PubMed]
- Hauser, K.; Kurz, A.; Haggenmüller, S.; Maron, R.C.; von Kalle, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M.; et al. Explainable artificial intelligence in skin cancer recognition: A systematic review. Eur. J. Cancer 2022, 167, 54–69. [Google Scholar] [CrossRef]
- Giuste, F.; Shi, W.; Zhu, Y.; Naren, T.; Isgut, M.; Sha, Y.; Tong, L.; Gupte, M.; Wang, M.D. Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review. IEEE Rev. Biomed. Eng. 2023, 16, 5–21. [Google Scholar] [CrossRef] [PubMed]
- Omeiza, D.; Webb, H.; Jirotka, M.; Kunze, L. Explanations in Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 10142–10162. [Google Scholar] [CrossRef]
- Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Dehkordi, A.H.; Mazaheri, E.; Ibrahim, H.A.; Dalvand, S.; Gheshlagh, R.G. How to Write a Systematic Review: A Narrative Review. Int. J. Prev. Med. 2021, 12, 27. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4768–4777. [Google Scholar]
- Shapley, L.S. A value for n-person games. In Contributions to the Theory of Games II; Annals of Mathematics Studies; Princeton University Press: Princeton, NJ, USA, 1953; Volume 28, pp. 307–317. [Google Scholar]
- Fryer, D.; Strümke, I.; Nguyen, H. Shapley values for feature selection: The good, the bad, and the axioms. IEEE Access 2021, 9, 144352–144360. [Google Scholar] [CrossRef]
- Watson, D.S. Conceptual challenges for interpretable machine learning. Synthese 2022, 200, 65. [Google Scholar] [CrossRef]
- Kovalerchuk, B. Interpretable AI/ML for High-Stakes Tasks with Human-in-the-Loop: Critical Review and Future Trends, Research Square. 2025. Available online: https://www.researchsquare.com/article/rs-3989807/v1 (accessed on 20 October 2025).
- Moreno-Sánchez, P.A. Data-Driven Early Diagnosis of Chronic Kidney Disease: Development and Evaluation of an Explainable AI Model. IEEE Access 2023, 11, 38359–38369. [Google Scholar] [CrossRef]
- Junaid, M.; Ali, S.; Eid, F.; El-Sappagh, S.; Abuhmed, T. Explainable machine learning models based on multimodal time-series data for the early detection of Parkinson’s disease. Comput. Methods Programs Biomed. 2023, 234, 107495. [Google Scholar] [CrossRef] [PubMed]
- Fransvea, P.; Fransvea, G.; Liuzzi, P.; Sganga, G.; Mannini, A.; Costa, G. Study and validation of an explainable machine learning-based mortality prediction following emergency surgery in the elderly: A prospective observational study. Int. J. Surg. 2022, 107, 106954. [Google Scholar] [CrossRef] [PubMed]
- Alabi, R.O.; Almangush, A.; Elmusrati, M.; Leivo, I.; Mäkitie, A.A. An interpretable machine learning prognostic system for risk stratification in oropharyngeal cancer. Int. J. Med. Inform. 2022, 168, 104896. [Google Scholar] [CrossRef]
- Meiseles, A.; Paley, D.; Ziv, M.; Hadid, Y.; Rokach, L.; Tadmor, T. Explainable machine learning for chronic lymphocytic leukemia treatment prediction using only inexpensive tests. Comput. Biol. Med. 2022, 145, 105490. [Google Scholar] [CrossRef]
- Jas, K.; Dodagoudar, G.R. Explainable machine learning model for liquefaction potential assessment of soils using XGBoost-SHAP. Soil Dyn. Earthq. Eng. 2022, 165, 107662. [Google Scholar] [CrossRef]
- Hu, S.; Wang, W.; Lu, Y. Explainable machine learning models for probabilistic buckling stress prediction of steel shear panel dampers. Eng. Struct. 2023, 288, 116235. [Google Scholar] [CrossRef]
- Junda, E.; Málaga-Chuquitaype, C.; Chawgien, K. Interpretable machine learning models for the estimation of seismic drifts in CLT buildings. J. Build. Eng. 2023, 70, 106365. [Google Scholar] [CrossRef]
- Wakjira, T.G.; Al-Hamrani, A.; Ebead, U.; Alnahhal, W. Shear capacity prediction of FRP-RC beams using single and ensemble explainable machine learning models. Compos. Struct. 2022, 287, 115381. [Google Scholar] [CrossRef]
- Shen, Y.; Wu, L.; Liang, S. Explainable machine learning-based model for failure mode identification of RC flat slabs without transverse reinforcement. Eng. Fail. Anal. 2022, 141, 106647. [Google Scholar] [CrossRef]
- Meddage, D.; Ekanayake, I.; Weerasuriya, A.; Lewangamage, C.; Tse, K.; Miyanawala, T.; Ramanayaka, C. Explainable Machine Learning (XML) to predict external wind pressure of a low-rise building in urban-like settings. J. Wind. Eng. Ind. Aerodyn. 2022, 226, 105027. [Google Scholar] [CrossRef]
- Shah, S.F.A.; Chen, B.; Zahid, M.; Ahmad, M.R. Compressive strength prediction of one-part alkali activated material enabled by interpretable machine learning. Constr. Build. Mater. 2022, 360, 129534. [Google Scholar] [CrossRef]
- Ibrahim, B.; Konduah, J.O.; Ahenkorah, I. Predicting Reservoir Temperature of Geothermal Systems in West Anatolia, Turkey: A Focus on Predictive Performance and Explainability of Machine Learning Models. SSRN Electron. J. 2022, 112, 102727. [Google Scholar] [CrossRef]
- Kruse, J.; Schäfer, B.; Witthaut, D. Secondary control activation analysed and predicted with explainable AI. Electr. Power Syst. Res. 2022, 212, 108489. [Google Scholar] [CrossRef]
- Yang, Z.; Sheng, Y.; Zhu, C.; Ni, J.; Zhu, Z.; Xi, J.; Zhang, W.; Yang, J. Accurate and explainable machine learning for the power factors of diamond-like thermoelectric materials. J. Mater. 2022, 8, 633–639. [Google Scholar] [CrossRef]
- Onsree, T.; Tippayawong, N.; Phithakkitnukoon, S.; Lauterbach, J. Interpretable machine-learning model with a collaborative game approach to predict yields and higher heating value of torrefied biomass. Energy 2022, 249, 123676. [Google Scholar] [CrossRef]
- Chung, W.J.; Liu, C. Analysis of input parameters for deep learning-based load prediction for office buildings in different climate zones using eXplainable Artificial Intelligence. Energy Build. 2022, 276, 112521. [Google Scholar] [CrossRef]
- El Bilali, A.; Abdeslam, T.; Ayoub, N.; Lamane, H.; Ezzaouini, M.A.; Elbeltagi, A. An interpretable machine learning approach based on DNN, SVR, Extra Tree, and XGBoost models for predicting daily pan evaporation. J. Environ. Manag. 2023, 327, 116890. [Google Scholar] [CrossRef] [PubMed]
- Kruk, M. Prediction of environmental factors responsible for chlorophyll a-induced hypereutrophy using explainable machine learning. Ecol. Inform. 2023, 75, 102005. [Google Scholar] [CrossRef]
- Maloney, K.O.; Buchanan, C.; Jepsen, R.D.; Krause, K.P.; Cashman, M.J.; Gressler, B.P.; Young, J.A.; Schmid, M. Explainable machine learning improves interpretability in the predictive modeling of biological stream conditions in the Chesapeake Bay Watershed, USA. J. Environ. Manag. 2022, 322, 116068. [Google Scholar] [CrossRef]
- Ghosh, I.; Alfaro-Cortés, E.; Gámez, M.; García-Rubio, N. Role of proliferation COVID-19 media chatter in predicting Indian stock market: Integrated framework of nonlinear feature transformation and advanced AI. Expert Syst. Appl. 2023, 219, 119695. [Google Scholar] [CrossRef]
- Weng, F.; Zhu, J.; Yang, C.; Gao, W.; Zhang, H. Analysis of financial pressure impacts on the health care industry with an explainable machine learning method: China versus the USA. Expert Syst. Appl. 2022, 210, 118482. [Google Scholar] [CrossRef]
- Xie, C.; Xie, Y.; Zhang, C.; Dong, H.; Zhang, L. Explainable machine learning for carbon dioxide adsorption on porous carbon. J. Environ. Chem. Eng. 2023, 11, 109053. [Google Scholar] [CrossRef]
- Mai, J.; Lu, T.; Xu, P.; Lian, Z.; Li, M.; Lu, W. Predicting the maximum absorption wavelength of azo dyes using an interpretable machine learning strategy. Dye. Pigment. 2022, 206, 110647. [Google Scholar] [CrossRef]
- Park, S.; Yang, J.S. Interpretable deep learning LSTM model for intelligent economic decision-making. Knowl.-Based Syst. 2022, 248, 108907. [Google Scholar] [CrossRef]
- Rico-Juan, J.R.; de La Paz, P.T. Machine learning with explainability or spatial hedonics tools? An analysis of the asking prices in the housing market in Alicante, Spain. Expert Syst. Appl. 2021, 171, 114590. [Google Scholar] [CrossRef]
- Baptista, M.L.; Goebel, K.; Henriques, E.M.P. Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Artif. Intell. 2022, 306, 103667. [Google Scholar] [CrossRef]
- Midtfjord, A.D.; De Bin, R.; Huseby, A.B. A decision support system for safer airplane landings: Predicting runway conditions using XGBoost and explainable AI. Cold Reg. Sci. Technol. 2022, 199, 103556. [Google Scholar] [CrossRef]
- He, S.; Wang, Y.; Zhang, Z.; Xiao, F.; Zuo, S.; Zhou, Y.; Cai, X.; Jin, X. Interpretable machine learning workflow for evaluation of the transformation temperatures of TiZrHfNiCoCu high entropy shape memory alloys. Mater. Des. 2023, 225, 111513. [Google Scholar] [CrossRef]
- He, Z.; Huang, H.; Choi, H.; Bilgihan, A. Building organizational resilience with digital transformation. J. Serv. Manag. 2023, 34, 147–171. [Google Scholar] [CrossRef]
- Wen, Z.; Liu, H.; Zhou, M.; Liu, C.; Zhou, C. Explainable machine learning rapid approach to evaluate coal ash content based on X-ray fluorescence. Fuel 2023, 332, 125991. [Google Scholar] [CrossRef]
- Dandolo, D.; Masiero, C.; Carletti, M.; Pezze, D.D.; Susto, G.A. AcME—Accelerated model-agnostic explanations: Fast whitening of the machine-learning black box. Expert Syst. Appl. 2023, 214, 119115. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you? Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
- Garreau, D.; Luxburg, U. Explaining the explainer: A first theoretical analysis of LIME. In Proceedings of the International Conference on Artificial Intelligence and Statistics, València, Spain, 2–4 May 2020; pp. 1287–1296. [Google Scholar]
- Çelik, T.B.; İcan, Ö.; Bulut, E. Extending machine learning prediction capabilities by explainable AI in financial time series prediction. Appl. Soft Comput. 2023, 132, 109876. [Google Scholar] [CrossRef]
- Zdravković, M.; Ćirić, I.; Ignjatović, M. Explainable heat demand forecasting for the novel control strategies of district heating systems. Annu. Rev. Control 2022, 53, 405–413. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, W.; Liu, X. Explainable AI techniques with application to NBA gameplay prediction. Neurocomputing 2022, 483, 59–71. [Google Scholar] [CrossRef]
- Fan, C.; Xiao, F.; Yan, C.; Liu, C.; Li, Z.; Wang, J. A novel methodology to explain and evaluate data-driven building energy performance models based on interpretable machine learning. Appl. Energy 2019, 235, 1551–1560. [Google Scholar] [CrossRef]
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Qian, H.; Wang, B.; Yuan, M.; Gao, S.; Song, Y. Financial distress prediction using a corrected feature selection measure and gradient boosted decision tree. Expert Syst. Appl. 2021, 190, 116202. [Google Scholar] [CrossRef]
- Zhao, S.; Li, J.; Chen, C.; Yan, B.; Tao, J.; Chen, G. Interpretable machine learning for predicting and evaluating hydrogen production via supercritical water gasification of biomass. J. Clean. Prod. 2021, 316, 128244. [Google Scholar] [CrossRef]
- Peng, Y.; Unluer, C. Analyzing the mechanical performance of fly ash-based geopolymer concrete with different machine learning techniques. Constr. Build. Mater. 2021, 316, 125785. [Google Scholar] [CrossRef]
- Almustafa, M.K.; Nehdi, M.L. Machine learning model for predicting structural response of RC slabs exposed to blast loading. Eng. Struct. 2020, 221, 111109. [Google Scholar] [CrossRef]
- Apley, D.W.; Zhu, J. Visualizing the effects of predictor variables in black box supervised learning models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2020, 82, 1059–1086. [Google Scholar] [CrossRef]
- Robnik-Šikonja, M.; Kononenko, I. Explaining Classifications for Individual Instances. IEEE Trans. Knowl. Data Eng. 2008, 20, 589–600. [Google Scholar] [CrossRef]
- Bohanec, M.; Borštnar, M.K.; Robnik-Šikonja, M. Explaining machine learning models in sales predictions. Expert Syst. Appl. 2017, 71, 416–428. [Google Scholar] [CrossRef]
- Štrumbelj, E.; Bosnić, Z.; Kononenko, I.; Zakotnik, B.; Kuhar, C.G. Explanation and reliability of prediction models: The case of breast cancer recurrence. Knowl. Inf. Syst. 2010, 24, 305–324. [Google Scholar] [CrossRef]
- Goldstein, A.; Kapelner, A.; Bleich, J.; Pitkin, E. Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. J. Comput. Graph. Stat. 2015, 24, 44–65. [Google Scholar] [CrossRef]
- Antwarg, L.; Miller, R.M.; Shapira, B.; Rokach, L. Explaining anomalies detected by autoencoders using Shapley Additive Explanations. Expert Syst. Appl. 2021, 186, 115736. [Google Scholar] [CrossRef]
- Ibrahim, L.; Mesinovic, M.; Yang, K.W.; Eid, M.A. Explainable Prediction of Acute Myocardial Infarction using Machine Learning and Shapley Values. IEEE Access 2020, 8, 210410–210417. [Google Scholar] [CrossRef]
- Kumar, I.E.; Venkatasubramanian, S.; Scheidegger, C.; Friedler, S. Problems with Shapley-value-based explanations as feature importance measures. In Proceedings of the International Conference on Machine Learning 2020, Virtual, 13–18 July 2020; pp. 5491–5500. [Google Scholar]
- Fildes, R.; Ma, S.; Kolassa, S. Retail forecasting: Research and practice. Int. J. Forecast. 2022, 38, 1283–1318. [Google Scholar] [CrossRef]
- Shi, H.; Yang, D.; Tang, K.; Hu, C.; Li, L.; Zhang, L.; Gong, T.; Cui, Y. Explainable machine learning model for predicting the occurrence of postoperative malnutrition in children with congenital heart disease. Clin. Nutr. 2022, 41, 202–210. [Google Scholar] [CrossRef]
- Crawford, B.; Sourki, R.; Khayyam, H.; Milani, A.S. A machine learning framework with dataset-knowledgeability pre-assessment and a local decision-boundary crispness score: An industry 4.0-based case study on composite autoclave manufacturing. Comput. Ind. 2021, 132, 103510. [Google Scholar] [CrossRef]
- Noora, S.; Saleh, M.; Akbari, Y.; Maadeed, S.A. A review of explainable AI techniques and their evaluation in mammography for breast cancer screening. Clin. Imaging 2025, 123, 110492. [Google Scholar] [CrossRef]
- Räz, T.; De Mortanges, A.P.; Reyes, M. Explainable AI in medicine: Challenges of integrating XAI into the future clinical routine. Front. Radiol. 2025, 5, 1627169. [Google Scholar] [CrossRef] [PubMed]
- Lahav, O.; Mastronarde, N.; van der Schaar, M. What is interpretable? Using machine learning to design interpretable decision-support systems. arXiv 2018, arXiv:1811.10799. [Google Scholar]
- Giri, A.; Din, F.U. Role of data as an interface between primary, secondary and tertiary care: Evidence from literature. Inform. Health 2025, 2, 63–72. [Google Scholar] [CrossRef]






| Database | Article Parts Searched | Field | Search String |
|---|---|---|---|
| ScienceDirect | Title, Abstract | All Fields | ((“Explainable AI” OR “interpretable AI” OR “explainable artificial intelligence” OR “interpretable artificial intelligence” OR “explainable machine learning” OR “interpretable machine learning”) AND (“Biomedical Imaging” OR “Biomedical Sensing” OR “Biomedical” OR “Imaging” OR “Sensing”)) |
| IEEE | Title, Abstract | All Fields | ((“All Metadata”: explainable AI) OR (“All Metadata”: interpretable AI) OR (“All Metadata”: explainable machine learning) OR (“All Metadata”: interpretable machine learning) AND (“Biomedical Imaging” OR “Biomedical Sensing” OR “Biomedical” OR “Imaging” OR “Sensing”)) |
| Studies | Domain/Application | XAI Method Used | Primary Evaluation Reported | Formal Usability |
|---|---|---|---|---|
| [1] | Healthcare COVID-19 diagnosis | LIME, SHAP | Model predictive performance (classification metrics) + computational explanations (feature importance/visuals) | No (presents computational evaluation of predictability) |
| [25] | Healthcare—prognostic variables | SHAP, PDPs, PFI | Feature importance analyses; model performance metrics | No (presents feature extraction for clinical validity) |
| [26] | Healthcare—Parkinson’s disease progression | SHAP, LIME, SHAPASH | Computational explanation comparisons; predictive accuracy | No (presents data experimentation) |
| [28] | Healthcare—prognostic variables | SHAP, LIME, PFI | Insights into prognostic variables, model performance, and clinical data analysis | No (presents clinical validation of prediction) |
| [29] | Seminal/Method paper (LIME) | LIME | Method demonstration and computational examples | No (presents method proposal/computational evaluation) |
| [56] | Miscellaneous/human-in-the-loop example | SHAP, AcME, KernelSHAP, PDPs | Computational runtime/efficiency and fidelity comparisons; demonstration as root-cause tool | No (demonstrates potential for human-in-the-loop use; no usability trials) |
| [74] | Healthcare—disease detection | SHAP | Prediction performance on ECG/retrospective clinical dataset; feature importance | No (presents ECG data evaluations with experimental results) |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Hettikankanamage, N.; Shafiabady, N.; Chatteur, F.; Wu, R.M.X.; Ud Din, F.; Zhou, J. eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing. Sensors 2025, 25, 6649. https://doi.org/10.3390/s25216649

