A Survey of the Application of Explainable Artificial Intelligence in Biomedical Informatics
Abstract
1. Introduction
1. offer reliable insights;
2. build trust and assist clinicians and stakeholders in comprehending the underlying rationale for AI-generated decisions or identifying biased or erroneous decisions;
3. meet regulatory standards to ensure compliance and ethical implementation [8].
- Section 2 provides a short overview of the main types of XAI methods.
- Section 3 reviews how these methods are applied in biomedical informatics.
- Section 4 covers the evaluation metrics and clinical assessment of XAI explanations.
- In Section 5, we explore the key challenges XAI faces.
- Finally, in Section 6, we discuss possible directions for future research and offer the conclusion of our study.
2. Taxonomy of XAI Methods
- Specificity (S). Specificity distinguishes approaches that are Model-Specific (M-S), i.e., whose working mechanisms are tied to one model architecture, from those that are Model-Agnostic (M-A), i.e., whose mechanisms can be applied to any AI model regardless of the specific architecture [9].
- Scope of Explanation (SE). This can be Global (G), with an approach capable of explaining the overall behaviour of the model; Local (L), with an approach that explains a single specific prediction made by the model; or Both (B), with an approach that can address aspects of both global and local interpretability [10].
- Model Interpretability (MI). This can be Intrinsic (I), usually for simple models whose working mechanism is defined in such a way that they can be explained by design, or Post Hoc (P), for models whose complexity requires approaches that analyse predictions only after the model has been trained in order to generate an explanation [11].
- Explanation Modalities (EM). This criterion categorises XAI methods based on the format(s) of the explanations they provide as output [9], which may include measures or visualisations of how specific features contributed to the decision, rule-based logic explanations, example-based explanations, and text-based summaries (a minimal illustrative encoding of these four axes is sketched after this list).
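To make the four axes concrete, the short Python sketch below encodes them as enumerations and classifies a few representative methods (SHAP, LIME, Grad-CAM) in line with the summary tables later in this survey. The class and function names are illustrative only and not part of any surveyed framework.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the four taxonomy axes described above.
class Specificity(Enum):
    MODEL_SPECIFIC = "M-S"
    MODEL_AGNOSTIC = "M-A"

class Scope(Enum):
    GLOBAL = "G"
    LOCAL = "L"
    BOTH = "B"

class Interpretability(Enum):
    INTRINSIC = "I"
    POST_HOC = "P"

@dataclass
class XAIMethod:
    name: str
    specificity: Specificity
    scope: Scope
    interpretability: Interpretability
    modalities: tuple[str, ...]  # e.g. feature attribution, saliency map, rules, text

# Example classifications consistent with the summary tables in this survey.
CATALOGUE = [
    XAIMethod("SHAP", Specificity.MODEL_AGNOSTIC, Scope.BOTH,
              Interpretability.POST_HOC, ("feature attribution",)),
    XAIMethod("LIME", Specificity.MODEL_AGNOSTIC, Scope.LOCAL,
              Interpretability.POST_HOC, ("feature attribution",)),
    XAIMethod("Grad-CAM", Specificity.MODEL_SPECIFIC, Scope.LOCAL,
              Interpretability.POST_HOC, ("saliency map",)),
]

def filter_methods(specificity=None, scope=None):
    """Return catalogue entries matching the requested taxonomy values."""
    return [m for m in CATALOGUE
            if (specificity is None or m.specificity == specificity)
            and (scope is None or m.scope == scope)]

if __name__ == "__main__":
    for m in filter_methods(specificity=Specificity.MODEL_AGNOSTIC):
        print(m.name, m.specificity.value, m.scope.value, m.interpretability.value)
```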
2.1. On Model-Specific and Model-Agnostic Methods
2.2. Scope of Explanation
2.3. Intrinsic and Post-Hoc Approaches
2.4. Explanation Modalities (Visual, Textual, Symbolic)
2.5. Modalities in Medical Imaging
2.6. Overview of the Main XAI Techniques
3. Applications in Biomedical Informatics
3.1. Genomics and Omics Data
3.2. Electronic Health Records
3.3. Time Series and Clinical Monitoring
3.4. Emerging Approaches
3.5. Other Medical Imaging and Multimodal Applications
Integrating Multimodal Information
3.6. Discussion and Considerations from Further Applications
| Ref. | Modalities | Domain | Datasets | Links | Access |
|---|---|---|---|---|---|
| [113] | Clinical Tabular | EHR | Cardiac MRI Phenotypes & Brain Volumetric MRI | [114] | Con |
| [15] | MRI | Medical Imaging | Brain Tumor MRI Dataset | [115] | Pub |
| [81] | X-ray | Medical Imaging | CAPE Model Development Dataset & Interpretation Evaluation Dataset & NIH Chest X-Ray Public Dataset | [116] | Pri & Pri & Pub |
| [45] | Endoscopy | Medical Imaging | Kvasir-Capsule Dataset | [117] | Pub |
| [118] | Dermoscopic Images | Medical Imaging | Skin Cancer MNIST | [119] | Pub |
| [100] | ICU Tabular | EHR | Al-Ain Hospital ICU Electronic Health Records (EHR) Dataset | ✗ | Pri |
| [66] | SNP | Genomics | ADNI Genetic GWAS Dataset (Alzheimer’s Disease Neuroimaging Initiative) | [120] | Pub |
| [82] | CT | Medical Imaging | IQ-OTH/NCCD Lung Cancer CT Dataset | [121] | Pub |
| [89] | MRI | Medical Imaging | Alzheimer MRI Dataset | [122] | Pub |
| [123] | Pathology Slides | Medical Imaging | Warwick-QU & Cancer Dataset | [124] & Unknown | Pub & Unknown |
| [110] | Oncology | Multimodal Data | GenoMed4All + Synthema MDS Training Cohort | [125,126,127] | Con |
| [71] | Clinical Tabular | EHR | Korean Acute Myocardial Infarction Registry | ✗ | Pri |
| [85] | X-ray | Medical Imaging | Chest X-Ray Pneumonia Dataset & SARS-CoV-2 CT-scan dataset | [128,129] | Pub |
| [42] | X-ray | Medical Imaging | COVID-19 Radiography Database | [130,131] | Pub |
| [93] | CT | Medical Imaging | ICH for Non-Contrast Computed Tomography | ✗ | Pri |
| [90] | MRI | Medical Imaging | Simulated Bias in Artificial Medical Images (SimBA) | [90] | Pub |
| [87] | Institutional Review Board | Clinical Text & Notes | Institutional Review Board (IRB) Protocol Dataset | ✗ | Pri |
| [132] | X-ray | Medical Imaging | Tuberculosis (TB) Chest X-Ray Database | [133] | Pub |
| [80] | MRI | Medical Imaging | UK Biobank Cardiac MRI Dataset | [114] | Con |
| [134] | Microscopic PBS | Medical Imaging | C-NMC-19 & Taleqani Hospital Dataset & Multi-Cancer Dataset | [135,136,137] | Pub |
| [138] | MRI | Medical Imaging | Internal Single-Center Brain Metastasis MRI Dataset | ✗ | Pri |
| [68] | SNP | Genomics | CREA-AA Ex Situ Germplasm Collection | ✗ | Pri |
| [69] | Tabular Data | EHR | Finnish Real-World EHR Dataset of T2D patients | ✗ | Pri |
| [70] | Tabular Data | EHR | Kaggle Stroke Prediction Dataset & Kushtia Medical College Hospital | ✗ | Pub |
| [67] | Microbiome Data | Genomics | YachidaS_2019, YuJ_2015, WirbelJ_2019, ZellerG_2014, VogtmannE_2016 | [139,140,141,142,143] | Pub |
| [144] | Simulation Data & Structural Features | Multimodal Data | Simulated Molecular Structures and QM/MM Reaction Paths | [145] | Pub |
| [96] | Imaging, Tabular Data | Multimodal Data | Survey and Interviews | ✗ | Pri |
| [146] | EHR & Clinical Tabular Data | Multimodal Data | MIMIC-III | [147] | Pub |
| [75] | Time Series & Tabular Data | Multimodal Data | DOMINO & ExtraSensory Dataset | [148] | Pri & Pri |
| [92] | Tabular Data | EHR | 22 Real-World Tabular | [92] | Pub |
| [46] | X-ray | Medical Imaging | Knee ArthroScan, Lung X-Ray, FracAtlas | [149,150,151] | Pub |
| [73] | ICU | Time-series | University Hospital of Fuenlabrada | ✗ | Pri |
| [82] | CT | Medical Imaging | IQ-OTH/NCCD Lung Cancer Dataset | [121] | Pub |
| [44] | SNP | Genomics | MalaCards & OMIM & DisGeNet & SympGAN | [152,153,154,155] | Pub |
| [107] | Image | NDA | CUB-200-2011 | [156] | Pub |
| [72] | Textual | NDA | ALERT Telegram Threat Dataset | [157] | Pub |
| [79] | Imaging | NDA | Japanese Female Facial Expression Database | [158] | Pub |
| [159] | Tabular Data | NDA | CIFAR-10 & CIFAR-100 & ImageNet-1K | [160,161,162] | Pub |
| [94] | CT & MRI | Medical Imaging | LIDC-IDRI & Duke Breast Cancer MRI Dataset | [163,164] | Pub |
| [86] | Ultrasound | Medical Imaging | Gallbladder Diseases Dataset | [165] | Pub |
| [99] | EHR, Text and Tabular Clinical Data | Multimodal Data | UCSF | ✗ | Pri |
| [112] | Tabular Data | EHR | Psychiatric Emergency Department Electronic Health Records | ✗ | Pri |
| [111] | MRI | Medical Imaging | Brain Tumor MRI Dataset & Large MRI Training Dataset | [115,166] | Pub |
Articles: n = 43; HCM: n = 19.

| Ref. | Area | Stakeholders | AI Method(s) Used | S | SE | MI | HCM |
|---|---|---|---|---|---|---|---|
| [113] | Difficult to deploy in real-world settings | Patients | SHAP | M-A | G | P | ✗ |
| [15] | Low interpretability in DL models | Practitioners | PIDL | M-S | L | I | ✗ |
| [81] | Model interpretability | Clinicians | Grad-CAM, LIME, SHAP | M-A | G | P | ✓ |
| [45] | Black-box nature of DL models | Gastroenterologists | Grad-CAM, LIME, SHAP, GradCAM++, LayerCAM | M-A | L | P | ✓ |
| [118] | Data privacy, model interpretability | Healthcare Providers | Saliency Maps, Grad-CAM | M-A | G | P | ✗ |
| [100] | Resource allocation, model transparency | Hospital Administrators | SHAP, Different plots | M-A | G | P | ✓ |
| [66] | Biomarker Identification, Model Interpretability | Neurologists | SHAP | M-A | G | P | ✓ |
| [82] | Improving interpretability of CNN | Radiologists, Oncologists | CNNs, Grad-CAM, SHAP, Attention Mechanisms | M-S | L | P | ✗ |
| [89] | Efficient AI-based screening | Neurologists, Radiologists, Researchers | EfficientNetB0, Dual Attention Mechanisms | M-S | G | P | ✗ |
| [123] | Enhancing diagnostic accuracy | Radiologists, Oncologists | Adaptive Aquila Optimizer, DL Models | M-S | L | P | ✗ |
| [110] | Data scarcity in rare cancers | Oncologists, Researchers | MOSAIC Framework, SHAP, ML | M-S | G | P | ✗ |
| [71] | Interpretability, Trust | Cardiologists | Tree-based models, SHAP, DiCE | M-A | G | P | ✗ |
| [85] | Trust, Usability of Explanations | Radiologists | Grad-CAM, LIME | M-A | L | P | ✓ |
| [42] | Saliency map reliability | Radiologists | Grad-CAM | M-S | L | P | ✗ |
| [93] | Model interpretability in critical care | Neurologists, Radiologists | SHAP, Guided Grad-CAM, CNNs | M-S | G | P | ✗ |
| [90] | Systematic bias in AI models | Model Developers, Policymakers | Fairness Metrics, SHAP | M-A | G | P | ✗ |
| [87] | Uncertainty in AI predictions | Researchers, Health Planners | Transformers, Calibration Layers | M-S | G | P | ✓ |
| [132] | Interpretability of transformer-based models | Radiologists, Pulmonologists | Vision Transformer, Grad-CAM | M-S | L | P | ✗ |
| [80] | Realism and relevance of counterfactuals | Cardiologists, Researchers | MiMICRI Framework | M-A | L | P | ✗ |
| [134] | Trade-off between transparency and model performance | Haematologists, Pathologists | CNN, Grad-CAM, CAM, IG, LIME | M-S | L | P | ✗ |
| [138] | Interpretability of longitudinal monitoring tools | Neurosurgeons, Oncologists | Streamlit, Grad-CAM, SmoothGrad | M-S | L | P | ✗ |
| [68] | Model transparency in breeding programs | Plant Geneticists, Breeders | SHAP, Regression Models | M-A | G | P | ✓ |
| [69] | Improve individualized treatment strategies and interpretability of predictions | Endocrinologists, Public Health Officials | XGBoost, SHAP | M-A | G | P | ✓ |
| [70] | Improve predictive accuracy and interpretability for clinical decision-making | Neurologists, General Practitioners | Ensemble Models, SHAP, LIME | M-A | G | P | ✓ |
| [67] | Improve interpretability of microbiome-based disease prediction, feature interpretation | Oncologists, Microbiome Researchers | SHAP | M-A | G | P | ✓ |
| [144] | Understanding enzyme dynamics and resistance | Structural Biologists, Pharmacologists | SHAPE, XGBoost | M-A | G | P | ✓ |
| [96] | Interprets medical reality and supports clinicians | System Designers, Clinicians | MAP Model, Transparent design | ✗ | ✗ | ✗ | ✓ |
| [146] | Enhance diagnostic and treatment recommendations | Clinicians, Medical AI Developers | Transformer | M-S | L | P | ✓ |
| [75] | Deploy-ability of NeSy HAR | Researchers, Developers | Semantic Loss Functions, GradCAM | M-S | G | I | ✓ |
| [92] | GBDT explainability, efficiency | ML Practitioners | TREX, BoostIn | M-S | L | P | ✗ |
| [46] | Generalization across datasets | Radiologists, clinicians | EfficientNet-B0, ViT, Swin Transformer, CBAM, Grad-CAM | M-S | L | P | ✗ |
| [73] | XAI methods for time-varying outputs | ICU clinicians | IT-SHAP, CCMI, Hadamard Attention | M-A | B | B | ✓ |
| [82] | Interpretability in CNN-based models | Radiologists, Oncologists | Multi-Head Attention (MHA), Grad-CAM, SHAP | B | B | P | ✗ |
| [44] | Integration between risk prediction and actionable recommendations | Oncologists, GPs | hybrid Transformer-CNN, SHAP, LLMs | M-A | B | P | ✗ |
| [107] | Human–AI collaboration and Explainable AI | Designers, Researchers | Deception of Reliance (DoR) metric | M-A | L | P | ✓ |
| [72] | Lack of labeled Telegram data | Policymakers | RoBERTa+, Integrated Gradients, DeepLIFT, LIME, SHAP | B | B | B | ✓ |
| [79] | Transparency with design of DNs | Researchers | PCA, DCT, CCA | M-S | G | I | ✗ |
| [159] | Computational cost of exact Shapley values | Researchers | SHAP | B | L | P | ✗ |
| [94] | Limitations of classical image forensics | Researchers, Cybersecurity | SHAP, Back-in-Time Diffusion | M-S | L | P | ✗ |
| [86] | Misdiagnosis, heterogeneity in lesion appearance | Hepatobiliary Specialists | CNN with multi-scale feature extraction + Grad-CAM, LIME | B | L | P | ✗ |
| [99] | Social determinants | Clinicians, Policymakers | ✗ | M-S | G | P | ✓ |
| [112] | Early identification of suicide risk | Psychiatry | SHAP, BD plots | M-S | L | P | ✗ |
| [111] | Feature interpretability | Radiologists, Neurologists | SHAP | M-A | G | P | ✓ |
3.7. Comparison of XAI Methods
4. Evaluation Metric and Clinical Assessment
4.1. Quantitative Evaluation
4.2. Faithfulness Metrics
4.3. Robustness Metrics
4.4. Clinician-in-the-Loop Evaluation
5. Challenges
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Xu, H.; Shuttleworth, K.M.J. Medical artificial intelligence and the black box problem: A view based on the ethical principle of “do no harm”. Intell. Med. 2024, 4, 52–57. [Google Scholar] [CrossRef]
- Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
- Dauzier, J.; Rock, N.; Kelly, P.; Pons, A.; Andre, A.; Urwin, L. Navigating AI Liability Risks. 2024. Available online: https://www.dlapiperoutsourcing.com/blog/tle/2024/navigating-ai-liability-risks.html (accessed on 10 July 2025).
- Jones, C.; Thornton, J.; Wyatt, J.C. Artificial intelligence and clinical decision support: Clinicians’ perspectives on trust, trustworthiness, and liability. Med. Law Rev. 2023, 31, 501–520. [Google Scholar] [CrossRef]
- Durán, J.M.; Jongsma, K.R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 2021, 47, 329–335. [Google Scholar] [CrossRef]
- Chen, Z. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanit. Soc. Sci. Commun. 2023, 10, 567. [Google Scholar] [CrossRef]
- Roy-Stang, Z.; Davies, J. Human biases and remedies in AI safety and alignment contexts. AI Ethics 2025, 5, 4891–4913. [Google Scholar] [CrossRef]
- Eke, C.I.; Shuib, L. The role of explainability and transparency in fostering trust in AI healthcare systems: A systematic literature review, open issues and potential solutions. Neural Comput. Appl. 2025, 37, 1999–2034. [Google Scholar] [CrossRef]
- Kumar, D.; Mehta, M.A. An Overview of Explainable AI Methods, Forms and Frameworks. In Explainable AI: Foundations, Methodologies and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 43–59. [Google Scholar]
- Agarwal, C.; Ley, D.; Krishna, S.; Saxena, E.; Pawelczyk, M.; Johnson, N.; Puri, I.; Zitnik, M.; Lakkaraju, H. OpenXAI: Towards a Transparent Evaluation of Model Explanations. arXiv 2022, arXiv:2206.11104. [Google Scholar]
- Salih, A.; Raisi-Estabragh, Z.; Boscolo Galazzo, I.; Radeva, P.; Petersen, S.E.; Menegaz, G.; Lekadir, K. A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME. arXiv 2023, arXiv:2305.02012. [Google Scholar] [CrossRef]
- Molnar, C. Interpretable Machine Learning, 3rd ed.; Leanpub: Victoria, BC, Canada, 2025. [Google Scholar]
- Olah, C.; Satyanarayan, A.; Johnson, I.; Carter, S.; Schubert, L.; Ye, K.; Mordvintsev, A. The Building Blocks of Interpretability. Distill 2018, 3. [Google Scholar] [CrossRef]
- Devireddy, K. A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches. arXiv 2025. [Google Scholar] [CrossRef]
- Amin, A.; Hasan, K.; Hossain, M.S. XAI-Empowered MRI Analysis for Consumer Electronic Health. IEEE Trans. Consum. Electron. 2024, 71, 1423–1431. [Google Scholar] [CrossRef]
- Kares, F.; Speith, T.; Zhang, H.; Langer, M. What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI). arXiv 2025, arXiv:2504.17023. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
- Chattopadhyay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks. arXiv 2017, arXiv:1710.11063. [Google Scholar]
- Jiang, P.T.; Zhang, C.B.; Hou, Q.; Cheng, M.M.; Wei, Y. LayerCAM: Exploring Hierarchical Class Activation Maps for Localization. IEEE Trans. Image Process. 2021, 30, 5875–5888. [Google Scholar] [CrossRef] [PubMed]
- Sepiolo, D.; Ligeza, A. Towards Explainability of Tree-Based Ensemble Models: A Critical Overview. In Proceedings of the Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 484, pp. 287–296. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4–9 December 2017; NIPS’17. pp. 4768–4777. [Google Scholar]
- Chen, H.; Covert, I.C.; Lundberg, S.M.; Lee, S.I. Algorithms to estimate Shapley value feature attributions. Nat. Mach. Intell. 2023, 5, 590–601. [Google Scholar] [CrossRef]
- Kothinti, R.R. Fusion of Multi-Modal Deep Learning and Explainable AI for Cardiovascular Disease Risk Stratification. Int. J. Nov. Res. Dev. (IJNRD) 2023, 8, 136–144. [Google Scholar]
- Khamis, M.M.; Klemm, N.; Adamko, D.J.; El-Aneed, A. Comparison of accuracy and precision between multipoint calibration, single point calibration, and relative quantification for targeted metabolomic analysis. Anal. Bioanal. Chem. 2018, 410, 5899–5913. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Model-Agnostic Interpretability of Machine Learning. arXiv 2016, arXiv:1606.05386. [Google Scholar] [CrossRef]
- Shrikumar, A.; Greenside, P.; Kundaje, A. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, NSW, Australia, 6–11 August 2017. [Google Scholar]
- Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic attribution for deep networks. Int. Conf. Mach. Learn. (ICML) 2017, 70, 3319–3328. [Google Scholar]
- Erion, G.; Janizek, J.D.; Sturmfels, P.; Lundberg, S.; Lee, S.I. Improving performance of deep learning models with expected gradients. Nat. Mach. Intell. 2021, 3, 620–631. [Google Scholar] [CrossRef]
- Murdoch, W.J.; Liu, P.J.; Yu, B. Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs. arXiv 2018, arXiv:1801.05453. [Google Scholar] [CrossRef]
- Schulz, E.; Johansson, F.; Sontag, D. CXPlain: Causal explanations for model interpretation under uncertainty. arXiv 2020, arXiv:2003.07258. [Google Scholar]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
- Wiggerthale, J.; Reich, C. Explainable Machine Learning in Critical Decision Systems: Ensuring Safe Application and Correctness. AI 2024, 5, 2864–2896. [Google Scholar] [CrossRef]
- Ignatiev, A.; Izza, Y.; Stuckey, P.J.; Marques-Silva, J. Using MaxSAT for Efficient Explanations of Tree Ensembles. Proc. AAAI Conf. Artif. Intell. 2022, 36, 3776–3785. [Google Scholar] [CrossRef]
- Famiglini, L.; Campagner, A.; Barandas, M.; Maida, G.A.L.; Gallazzi, E.; Cabitza, F. Evidence-based XAI: An empirical approach to design more effective and explainable decision support systems. Comput. Biol. Med. 2024, 170, 108042. [Google Scholar] [CrossRef]
- Slack, D.; Hilgard, S.; Singh, S.; Lakkaraju, H. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability. Adv. Neural Inf. Process. Syst. (NeurIPS) 2020, 33, 9391–9404. [Google Scholar]
- Balagopalan, A.; Zhang, H.; Hamidieh, K.; Hartvigsen, T.; Rudzicz, F.; Ghassemi, M. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. In Proceedings of the 2022 ACM Conference on Fairness Accountability and Transparency, Seoul, Republic of Korea, 21–24 June 2022; ACM: New York, NY, USA, 2022. FAccT ’22. pp. 1194–1206. [Google Scholar] [CrossRef]
- Vale, D.; El-Sharif, A.; Ali, M. Explainable Artificial Intelligence (XAI) Post-hoc Explainability Methods: Risks and Limitations in Non-discrimination Law. AI Ethics 2022, 2, 815–826. [Google Scholar] [CrossRef]
- Eshkiki, H.; Mora, B. Neighbor Migrating Generator: Finding the closest possible neighbor with different classes. In Proceedings of the AISB Convention 2023 Swansea University, Swansea, UK, 13–14 April 2023; p. 79. [Google Scholar]
- Fragkathoulas, C.; Papanikou, V.; Pitoura, E.; Terzi, E. FGCE: Feasible Group Counterfactual Explanations for Auditing Fairness. arXiv 2024, arXiv:2410.22591. [Google Scholar] [CrossRef]
- Kapcia, M.; Eshkiki, H.; Duell, J.; Fan, X.; Zhou, S.; Mora, B. ExMed: An AI Tool for Experimenting Explainable AI Techniques on Medical Data Analytics. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Washington, DC, USA, 1–3 November 2021; pp. 841–845. [Google Scholar] [CrossRef]
- Sun, J.; Shi, W.; Giuste, F.O.; Vaghani, Y.S.; Tang, L.; Wang, M.D. Improving explainable AI with patch perturbation-based evaluation pipeline: A COVID-19 X-ray image analysis case study. Sci. Rep. 2023, 13, 19488. [Google Scholar] [CrossRef] [PubMed]
- Parimbelli, E.; Buonocore, T.M.; Nicora, G.; Michalowski, W.; Wilk, S.; Bellazzi, R. Why did AI get this one wrong?—Tree-based explanations of machine learning model predictions. Artif. Intell. Med. 2023, 135, 102471. [Google Scholar] [CrossRef]
- Lu, K.; Lu, J.; Xu, H.; Guo, K.; Zhang, Q.; Lin, H.; Grosser, M.; Zhang, Y.; Zhang, G. Genomics-Enhanced Cancer Risk Prediction for Personalized LLM-Driven Healthcare Recommender Systems. ACM Trans. Inf. Syst. 2025, 43, 152. [Google Scholar] [CrossRef]
- Varam, D.; Mitra, R.; Mkadmi, M.; Riyas, R.A.; Abuhani, D.A.; Dhou, S.; Alzaatreh, A. Wireless Capsule Endoscopy Image Classification: An Explainable AI Approach. IEEE Access 2023, 11, 105262–105280. [Google Scholar] [CrossRef]
- Das, I.; Sheakh, M.A.; Abdulla, S.; Tahosin, M.S.; Hassan, M.M.; Zaman, S.; Shukla, A. Improving Medical X-ray Imaging Diagnosis with Attention Mechanisms and Robust Transfer Learning Techniques. IEEE Access 2025, 13, 159002–159027. [Google Scholar] [CrossRef]
- Chung, M.; Won, J.B.; Kim, G.; Kim, Y.; Ozbulak, U. Evaluating Visual Explanations of Attention Maps for Transformer-Based Medical Imaging. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2024 Workshops; Springer Nature: Cham, Switzerland, 2025; pp. 110–120. [Google Scholar] [CrossRef]
- Rao, A.; Aalami, O. Towards Improving the Visual Explainability of Artificial Intelligence in the Clinical Setting. BMC Digit. Health 2023, 1, 23. [Google Scholar] [CrossRef]
- Quan, X.; Valentino, M.; Dennis, L.A.; Freitas, A. Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing; Al-Onaizan, Y., Bansal, M., Chen, Y.N., Eds.; Association for Computational Linguistics: Miami, FL, USA, 2024; pp. 2933–2958. [Google Scholar] [CrossRef]
- Olausson, T.; Gu, A.; Lipkin, B.; Zhang, C.; Solar-Lezama, A.; Tenenbaum, J.; Levy, R. LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing; Bouamor, H., Pino, J., Bali, K., Eds.; Association for Computational Linguistics: Singapore, 2023; pp. 5153–5176. [Google Scholar] [CrossRef]
- Kalyanpur, A.; Saravanakumar, K.K.; Barres, V.; McFate, C.; Moon, L.; Seifu, N.; Eremeev, M.; Barrera, J.; Bautista-Castillo, A.; Brown, E.; et al. Multi-step Inference over Unstructured Data. arXiv 2024, arXiv:2406.17987. [Google Scholar]
- Mardaoui, D.; Garreau, D. An Analysis of LIME for Text Data. arXiv 2021, arXiv:2010.12487. [Google Scholar] [CrossRef]
- Alvarez-Melis, D.; Jaakkola, T.S. On the Robustness of Interpretability Methods. arXiv 2018, arXiv:1806.08049. [Google Scholar] [CrossRef]
- Schindele, A.; Krebold, A.; Heiß, U.; Nimptsch, K.; Pfaehler, E.; Berr, C.; Bundschuh, R.A.; Wendler, T.; Kertels, O.; Tran-Gia, J.; et al. Interpretable machine learning for thyroid cancer recurrence prediction: Leveraging XGBoost and SHAP analysis. Eur. J. Radiol. 2025, 186, 112049. [Google Scholar] [CrossRef] [PubMed]
- Alyoubi, A.A.; Alyoubi, B.A. Interpretable multimodal emotion recognition using optimized transformer model with SHAP-based transparency. J. Supercomput. 2025, 81, 1044. [Google Scholar] [CrossRef]
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Sagar, A. Vitbis: Vision transformer for biomedical image segmentation. In Proceedings of the MICCAI Workshop on Distributed and Collaborative Learning; Springer: Berlin/Heidelberg, Germany, 2021; pp. 34–45. [Google Scholar]
- Wang, H.; Zhang, Z. TATCN: Time series prediction model based on time attention mechanism and TCN. In Proceedings of the 2022 IEEE 2nd International Conference on Computer Communication and Artificial Intelligence (CCAI), Beijing, China, 6–8 May 2022; pp. 26–31. [Google Scholar]
- Zhu, H.; Wang, Z.; Shi, Y.; Hua, Y.; Xu, G.; Deng, L. Multimodal Fusion Method Based on Self-Attention Mechanism. Wirel. Commun. Mob. Comput. 2020, 2020, 8843186. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
- Choi, E.; Bahadori, M.T.; Kulas, J.A.; Schuetz, A.; Stewart, W.F.; Sun, J. RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism. arXiv 2017, arXiv:1608.05745. [Google Scholar] [CrossRef]
- Anari, S.; Sadeghi, S.; Sheikhi, G.; Ranjbarzadeh, R.; Bendechache, M. Explainable attention based breast tumor segmentation using a combination of UNet, ResNet, DenseNet, and EfficientNet models. Sci. Rep. 2025, 15, 1027. [Google Scholar] [CrossRef]
- Shen, J.; Wu, J.; Liang, H.; Zhao, Z.; Li, K.; Zhu, K.; Wang, K.; Ma, Y.; Hu, W.; Guo, C.; et al. Physiological signal analysis using explainable artificial intelligence: A systematic review. Neurocomputing 2025, 618, 128920. [Google Scholar] [CrossRef]
- Ni, J.; Mao, R.; Yang, Z.; Lei, H.; Cambria, E. Finding the Pillars of Strength for Multi-Head Attention. arXiv 2023, arXiv:2305.14380. [Google Scholar] [CrossRef]
- Song, R.; Li, Y.; Shi, L.; Giunchiglia, F.; Xu, H. Shortcut Learning in In-Context Learning: A Survey. arXiv 2024, arXiv:2411.02018. [Google Scholar] [CrossRef]
- Khater, T.; Ansari, S.; Saad Alatrany, A.; Alaskar, H.; Mahmoud, S.; Turky, A.; Tawfik, H.; Almajali, E.; Hussain, A. Explainable Machine Learning Model for Alzheimer Detection Using Genetic Data: A Genome-Wide Association Study Approach. IEEE Access 2024, 12, 95091–95105. [Google Scholar] [CrossRef]
- Rynazal, R.; Fujisawa, K.; Shiroma, H.; Salim, F.; Mizutani, S.; Shiba, S.; Yachida, S.; Yamada, T. Leveraging explainable AI for gut microbiome-based colorectal cancer classification. Genome Biol. 2023, 24, 21. [Google Scholar] [CrossRef]
- Novielli, P.; Romano, D.; Pavan, S.; Losciale, P.; Stellacci, A.M.; Diacono, D.; Bellotti, R.; Tangaro, S. Explainable artificial intelligence for genotype-to-phenotype prediction in plant breeding: A case study with a dataset from an almond germplasm collection. Front. Plant Sci. 2024, 15, 1434229. [Google Scholar] [CrossRef]
- Chandra, G.; Lavikainen, P.; Siirtola, P.; Tamminen, S.; Ihalapathirana, A.; Laatikainen, T.; Martikainen, J.; Röning, J. Explainable Prediction of Long-Term Glycated Hemoglobin Response Change in Finnish Patients with Type 2 Diabetes Following Drug Initiation Using Evidence-Based Machine Learning Approaches. Clin. Epidemiol. 2025, 17, 225–240. [Google Scholar] [CrossRef]
- Hossain, M.M.; Ahmed, M.M.; Rakib, M.R.H.; Zia, M.O.; Hasan, R.; Islam, M.R.; Islam, M.S.; Alam, M.S.; Islam, M.K. Optimizing Stroke Risk Prediction: A Primary Dataset-Driven Ensemble Classifier With Explainable Artificial Intelligence. Health Sci. Rep. 2025, 8, e70799. [Google Scholar] [CrossRef]
- Kim, M.; Kang, D.; Kim, M.S.; Choe, J.C.; Lee, S.H.; Ahn, J.H.; Oh, J.H.; Choi, J.H.; Lee, H.C.; Cha, K.S.; et al. Acute myocardial infarction prognosis prediction with reliable and interpretable artificial intelligence system. J. Am. Med. Inform. Assoc. 2024, 31, 1540–1550. [Google Scholar] [CrossRef]
- Ravi, K.; Yuan, J.S. ALERT: Active Learning and Explainable AI for Robust Threat Detection in Telegram. Digit. Threat. 2025, 6, 16. [Google Scholar] [CrossRef]
- Escudero-Arnanz, O.; Soguero-Ruiz, C.; Alvarez-Rodriguez, J.; Marques, A.G. Explainable Temporal Inference for Irregular Multivariate Time Series. A Case Study for Early Prediction of Multidrug Resistance. IEEE Trans. Biomed. Eng. 2025. early access. [Google Scholar] [CrossRef] [PubMed]
- Sarker, M.; Zhou, L.; Eberhart, A.; Hitzler, P. Neuro-symbolic artificial intelligence. AI Commun. 2022, 34, 197–209. [Google Scholar] [CrossRef]
- Arrotta, L.; Civitarese, G.; Bettini, C. Semantic Loss: A New Neuro-Symbolic Approach for Context-Aware Human Activity Recognition. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2024, 7, 147. [Google Scholar] [CrossRef]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561. [Google Scholar] [CrossRef]
- Nemirovsky-Rotman, S.; Bercovich, E. Explicit Physics-Informed Deep Learning for Computer-Aided Diagnostic Tasks in Medical Imaging. Mach. Learn. Knowl. Extr. 2024, 6, 385–401. [Google Scholar] [CrossRef]
- Wang, S.; Sankaran, S.; Wang, H.; Perdikaris, P. An Expert’s Guide to Training Physics-informed Neural Networks. arXiv 2023, arXiv:2308.08468. [Google Scholar]
- Gao, L.; Liu, K.; Guo, Z.; Guan, L. Mathematics-Inspired Models: A Green and Interpretable Learning Paradigm for Multimedia Computing. ACM Trans. Multimed. Comput. Commun. Appl. 2025, 21, 197. [Google Scholar] [CrossRef]
- Guo, G.; Deng, L.; Tandon, A.; Endert, A.; Kwon, B.C. MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), Rio de Janeiro, Brazil, 3–6 June 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1861–1874. [Google Scholar] [CrossRef]
- Zou, L.; Goh, H.L.; Liew, C.J.Y.; Quah, J.L.; Gu, G.T.; Chew, J.J.; Kumar, M.P.; Ang, C.G.L.; Ta, A.W.A. Ensemble Image Explainable AI (XAI) Algorithm for Severe Community-Acquired Pneumonia and COVID-19 Respiratory Infections. IEEE Trans. Artif. Intell. 2023, 4, 242–254. [Google Scholar] [CrossRef]
- Haque, F.; Hasan, M.A.; Siddique, M.A.I.; Roy, T.; Shaha, T.K.; Islam, Y.; Paul, A.; Chowdhury, M.E.H. An End-to-End Concatenated CNN Attention Model for the Classification of Lung Cancer with XAI Techniques. IEEE Access 2025, 13, 96317–96336. [Google Scholar] [CrossRef]
- Pishghadam, N.; Esmaeilyfard, R.; Paknahad, M. Explainable deep learning for age and gender estimation in dental CBCT scans using attention mechanisms and multi-task learning. Sci. Rep. 2025, 25, 03305. [Google Scholar] [CrossRef] [PubMed]
- Akbar, A.; Han, S.; Urr Rehman, N.; Ahmed, K.; Eshkiki, H.; Caraffini, F. Explainable breast cancer prediction from 3-dimensional dynamic contrast-enhanced magnetic resonance imaging. Appl. Intell. 2025, 55, 901. [Google Scholar] [CrossRef]
- Ihongbe, I.E.; Fouad, S.; Mahmoud, T.F.; Rajasekaran, A.; Bhatia, B. Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered Lens. PLoS ONE 2024, 19, e0308758. [Google Scholar] [CrossRef]
- Nabil, H.; Ahmed, I.; Das, A.; Mridha, M.; Kabir, M.; Aung, Z. MSFE-GallNet-X: A multi-scale feature extraction-based CNN Model for gallbladder disease analysis with enhanced explainability. BMC Med. Imaging 2025, 25, 360. [Google Scholar] [CrossRef]
- Ferrell, B.; Raskin, S.E.; Zimmerman, E.B. Calibrating a Transformer-Based Model’s Confidence on Community-Engaged Research Studies: Decision Support Evaluation Study. JMIR Form. Res. 2023, 7, e41516. [Google Scholar] [CrossRef]
- Tharmakulasingam, M.; Wang, W.; Kerby, M.; Ragione, R.L.; Fernando, A. TransAMR: An Interpretable Transformer Model for Accurate Prediction of Antimicrobial Resistance Using Antibiotic Administration Data. IEEE Access 2023, 11, 75337–75350. [Google Scholar] [CrossRef]
- Deenadayalan, T.; Shantharajah, S.P. Prognostic Survival Analysis for AD Diagnosis and Progression Using MRI Data: An AI-Based Approach. IEEE Access 2025, 13, 89059–89078. [Google Scholar] [CrossRef]
- Stanley, E.A.; Souza, R.; Winder, A.J.; Gulve, V.; Amador, K.; Wilms, M.; Forkert, N.D. Towards objective and systematic evaluation of bias in artificial intelligence for medical imaging. J. Am. Med. Inform. Assoc. 2024, 31, 2613–2621. [Google Scholar] [CrossRef]
- Nguyen, H.; Cao, H.; Nguyen, V.; Pham, D. Evaluation of Explainable Artificial Intelligence: SHAP, LIME, and CAM. In Proceedings of the FPT AI Conference (FAIC 2021), Ha Noi, Viet Nam, 4–5 March 2021; pp. 1–6. [Google Scholar]
- Brophy, J.; Hammoudeh, Z.; Lowd, D. Adapting and evaluating influence-estimation methods for gradient-boosted decision trees. J. Mach. Learn. Res. 2023, 24, 154. [Google Scholar]
- Zhang, H.; Yang, Y.F.; Song, X.L.; Hu, H.J.; Yang, Y.Y.; Zhu, X.; Yang, C. An interpretable artificial intelligence model based on CT for prognosis of intracerebral hemorrhage: A multicenter study. BMC Med. Imaging 2024, 24, 170. [Google Scholar] [CrossRef]
- Grabovski, F.M.; Yasur, L.; Amit, G.; Mirsky, Y. Back-in-Time Diffusion: Unsupervised Detection of Medical Deepfakes. ACM Trans. Intell. Syst. Technol. 2025, 16, 123. [Google Scholar] [CrossRef]
- Farhadloo, M.; Sharma, A.; Shekhar, S.; Markovic, S. Spatial Computing Opportunities in Biomedical Decision Support: The Atlas-EHR Vision. ACM Trans. Spat. Algorithms Syst. 2024, 10, 21. [Google Scholar] [CrossRef]
- van Berkel, N.; Bellio, M.; Skov, M.B.; Blandford, A. Measurements, Algorithms, and Presentations of Reality: Framing Interactions with AI-Enabled Decision Support. ACM Trans. Comput.-Hum. Interact. 2023, 30, 32. [Google Scholar] [CrossRef]
- Bibi, N.; Courtney, J.; McGuinness, K. Enhancing Brain Disease Diagnosis with XAI: A Review of Recent Studies. ACM Trans. Comput. Healthc. 2025, 6, 16. [Google Scholar] [CrossRef]
- Patrício, C.; Neves, J.a.C.; Teixeira, L.F. Explainable Deep Learning Methods in Medical Image Classification: A Survey. ACM Comput. Surv. 2023, 56, 85. [Google Scholar] [CrossRef]
- Tong, M.W.; Ziegeler, K.; Kreutzinger, V.; Majumdar, S. Explainable AI reveals tissue pathology and psychosocial drivers of opioid prescription for non-specific chronic low back pain. Sci. Rep. 2025, 15, 30690. [Google Scholar] [CrossRef]
- Alsinglawi, B.S.; Alnajjar, F.; Alorjani, M.S.; Al-Shari, O.M.; Munoz, M.N.; Mubin, O. Predicting Hospital Stay Length Using Explainable Machine Learning. IEEE Access 2024, 12, 90571–90585. [Google Scholar] [CrossRef]
- Bongurala, A.R.; Save, D.; Virmani, A. Progressive role of artificial intelligence in treatment decision-making in the field of medical oncology. Front. Med. 2025, 12, 1533910. [Google Scholar] [CrossRef] [PubMed]
- Hossain, M.I.; Zamzmi, G.; Mouton, P.R.; Salekin, M.S.; Sun, Y.; Goldgof, D. Explainable AI for Medical Data: Current Methods, Limitations, and Future Directions. ACM Comput. Surv. 2025, 57, 148. [Google Scholar] [CrossRef]
- Procter, R.; Tolmie, P.; Rouncefield, M. Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare. ACM Trans. Comput.-Hum. Interact. 2023, 30, 31. [Google Scholar] [CrossRef]
- Sun, Q.; Akman, A.; Schuller, B.W. Explainable Artificial Intelligence for Medical Applications: A Review. ACM Trans. Comput. Healthc. 2025, 6, 17. [Google Scholar] [CrossRef]
- Chander, B.; John, C.; Warrier, L.; Gopalakrishnan, K. Toward Trustworthy Artificial Intelligence (TAI) in the Context of Explainability and Robustness. ACM Comput. Surv. 2025, 57, 144. [Google Scholar] [CrossRef]
- Andersen, T.O.; Nunes, F.; Wilcox, L.; Coiera, E.; Rogers, Y. Introduction to the Special Issue on Human-Centred AI in Healthcare: Challenges Appearing in the Wild. ACM Trans. Comput.-Hum. Interact. 2023, 30, 25. [Google Scholar] [CrossRef]
- Spitzer, P.; Morrison, K.; Turri, V.; Feng, M.; Perer, A.; Kühl, N. Imperfections of XAI: Phenomena Influencing AI-Assisted Decision-Making. ACM Trans. Interact. Intell. Syst. 2025, 15, 17. [Google Scholar] [CrossRef]
- Swamy, V.; Montariol, S.; Blackwell, J.; Frej, J.; Jaggi, M.; Käser, T. Intrinsic User-Centric Interpretability through Global Mixture of Experts. arXiv 2025, arXiv:2402.02933. [Google Scholar]
- Coroama, L.; Groza, A. Evaluation Metrics in Explainable Artificial Intelligence (XAI). In Proceedings of the Advanced Research in Technologies, Information, Innovation and Sustainability; Guarda, T., Portela, F., Augusto, M.F., Eds.; Springer: Cham, Switzerland, 2022; pp. 401–413. [Google Scholar]
- D’Amico, S.; Dall’Olio, L.; Rollo, C.; Alonso, P.; Prada-Luengo, I.; Dall’Olio, D.; Sala, C.; Sauta, E.; Asti, G.; Lanino, L.; et al. MOSAIC: An Artificial Intelligence-Based Framework for Multimodal Analysis, Classification, and Personalized Prognostic Assessment in Rare Cancers. JCO Clin. Cancer Inform. 2024, 8, e2400008. [Google Scholar] [CrossRef]
- Rahman, A.; Hayat, M.; Iqbal, N.; Alarfaj, F.K.; Alkhalaf, S.; Alturise, F. Enhanced MRI brain tumor detection using deep learning in conjunction with explainable AI SHAP based diverse and multi feature analysis. Sci. Rep. 2025, 15, 29411. [Google Scholar] [CrossRef]
- Gericke, F.; Voorspoels, W.; Peeters, E.; Demyttenaere, K.; Sabbe, M.; Bantjes, J.; Bruffaerts, R. Personalised machine-learning decision support for suicidal thoughts and behaviours in the psychiatric emergency department. Psychiatry Res. 2025, 352, 116698. [Google Scholar] [CrossRef]
- Salih, A.M.; Galazzo, I.B.; Raisi-Estabragh, Z.; Petersen, S.E.; Menegaz, G.; Radeva, P. Characterizing the Contribution of Dependent Features in XAI Methods. IEEE J. Biomed. Health Inform. 2024, 28, 6466–6473. [Google Scholar] [CrossRef]
- Sudlow, C.; Gallacher, J.; Allen, N.; Beral, V.; Burton, P.; Danesh, J.; Downey, P.; Elliott, P.; Green, J.; Landray, M.; et al. UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age. PLoS Med. 2015, 12, e1001779. [Google Scholar] [CrossRef]
- Nickparvar, M. Brain Tumor MRI Dataset. 2021. Available online: https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset (accessed on 27 November 2025).
- Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106. [Google Scholar]
- Smedsrud, P.H.; Thambawita, V.; Hicks, S.A.; Gjestang, H.; Nedrejord, O.O.; Næss, E.; Borgli, H.; Jha, D.; Berstad, T.J.D.; Eskeland, S.L.; et al. Kvasir-Capsule, a video capsule endoscopy dataset. Sci. Data 2021, 8, 142. [Google Scholar] [CrossRef] [PubMed]
- Serhani, M.A.; Tariq, A.; Qayyum, T.; Taleb, I.; Din, I.; Trabelsi, Z. Meta-XPFL: An Explainable and Personalized Federated Meta-Learning Framework for Privacy-Aware IoMT. IEEE Internet Things J. 2025, 12, 13790–13805. [Google Scholar] [CrossRef]
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
- Alzheimer’s Disease Neuroimaging Initiative (ADNI). 2025. Available online: https://adni.loni.usc.edu/ (accessed on 27 November 2025).
- Al-Yasriy, H.F.; Al-Husieny, M.S.; Mohsen, F.Y.; Khalil, E.A.; Hassan, Z.S. Diagnosis of lung cancer based on CT scans using CNN. IOP Conf. Ser. Mater. Sci. Eng. 2020, 928, 022035. [Google Scholar] [CrossRef]
- Pinamonti, M. Alzheimer MRI 4 Classes Dataset. 2025. Available online: https://www.kaggle.com/datasets/marcopinamonti/alzheimer-mri-4-classes-dataset (accessed on 27 November 2025).
- Alkhalaf, S.; Alturise, F.; Bahaddad, A.A.; Elnaim, B.M.E.; Shabana, S.; Abdel-Khalek, S.; Mansour, R.F. Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging. Cancers 2023, 15, 1492. [Google Scholar] [CrossRef] [PubMed]
- Sirinukunwattana, K.; Pluim, J.P.; Chen, H.; Qi, X.; Heng, P.A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland segmentation in colon histology images: The glas challenge contest. Med. Image Anal. 2017, 35, 489–502. [Google Scholar] [CrossRef]
- GenoMed4All: Genomics for Next Generation Healthcare. 2025. Available online: https://www.genomed4all.eu (accessed on 27 November 2025).
- Synthema: Synthetic Haematological Data. 2025. Available online: https://www.synthema.eu (accessed on 27 November 2025).
- EuroBloodNet: European Reference Network for Rare Haematological Diseases. 2025. Available online: https://www.eurobloodnet.eu (accessed on 27 November 2025).
- Mooney, P.T. Chest X-Ray Images (Pneumonia). 2025. Available online: https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia (accessed on 27 November 2025).
- Eduardo, P. SARS-CoV-2 CT-Scan Dataset. 2025. Available online: https://www.kaggle.com/datasets/plameneduardo/sarscov2-ctscan-dataset (accessed on 27 November 2025).
- Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
- Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar] [CrossRef]
- Vanitha, K.; Mahesh, T.R.; Kumar, V.V.; Guluwadi, S. Enhanced tuberculosis detection using Vision Transformers and explainable AI with a Grad-CAM approach on chest X-rays. BMC Med. Imaging 2025, 25, 96. [Google Scholar] [CrossRef]
- Rahman, T. Tuberculosis (TB) Chest X-Ray Dataset. 2021. Available online: https://www.kaggle.com/datasets/tawsifurrahman/tuberculosis-tb-chest-xray-dataset (accessed on 27 November 2025).
- Muhammad, D.; Salman, M.; Keles, A.; Bendechache, M. ALL diagnosis: Can efficiency and transparency coexist? An explainable deep learning approach. Sci. Rep. 2025, 15, 12812. [Google Scholar] [CrossRef]
- Mourya, S.; Kant, S.; Kumar, P.; Gupta, A.; Gupta, R. ALL Challenge Dataset of ISBI 2019 (C-NMC 2019). 2019. Available online: https://www.cancerimagingarchive.net/collection/c-nmc-2019/ (accessed on 26 October 2025).
- Aria, M.; Ghaderzadeh, M.; Bashash, D.; Abolghasemi, H.; Asadi, F.; Hosseini, A. Acute lymphoblastic leukemia (ALL) image dataset. Kaggle 2021. [Google Scholar] [CrossRef]
- Naren, O.S. Multi Cancer Dataset. 2022. Available online: https://www.kaggle.com/datasets/obulisainaren/multi-cancer/versions/1 (accessed on 26 October 2025).
- Buga, R.; Buzea, C.G.; Agop, M.; Ochiuz, L.; Vasincu, D.; Popa, O.; Rusu, D.I.; Știrban, I.; Eva, L. Streamlit Application and Deep Learning Model for Brain Metastasis Monitoring After Gamma Knife Treatment. Biomedicines 2025, 13, 423. [Google Scholar] [CrossRef] [PubMed]
- Yachida, S.; Mizutani, S.; Shiroma, H.; Shiba, S.; Nakajima, T.; Sakamoto, T.; Watanabe, H.; Masuda, K.; Nishimoto, Y.; Kubo, M.; et al. Metagenomic and metabolomic analyses reveal distinct stage-specific phenotypes of the gut microbiota in colorectal cancer. Nat. Med. 2019, 25, 968–976. [Google Scholar] [CrossRef] [PubMed]
- Yu, J.; Feng, Q.; Wong, S.H.; Zhang, D.; Liang, Q.y.; Qin, Y.; Tang, L.; Zhao, H.; Stenvang, J.; Li, Y.; et al. Metagenomic analysis of faecal microbiome as a tool towards targeted non-invasive biomarkers for colorectal cancer. Gut 2017, 66, 70–78. [Google Scholar] [CrossRef]
- Wirbel, J.; Pyl, P.T.; Kartal, E.; Zych, K.; Kashani, A.; Milanese, A.; Fleck, J.S.; Voigt, A.Y.; Palleja, A.; Ponnudurai, R.; et al. Meta-analysis of fecal metagenomes reveals global microbial signatures that are specific for colorectal cancer. Nat. Med. 2019, 25, 679–689. [Google Scholar] [CrossRef]
- Zeller, G.; Tap, J.; Voigt, A.Y.; Sunagawa, S.; Kultima, J.R.; Costea, P.I.; Amiot, A.; Böhm, J.; Brunetti, F.; Habermann, N.; et al. Potential of fecal microbiota for early-stage detection of colorectal cancer. Mol. Syst. Biol. 2014, 10, 766. [Google Scholar] [CrossRef]
- Vogtmann, E.; Hua, X.; Zeller, G.; Sunagawa, S.; Voigt, A.Y.; Hercog, R.; Goedert, J.J.; Shi, J.; Bork, P.; Sinha, R. Colorectal Cancer and the Human Gut Microbiome: Reproducibility with Whole-Genome Shotgun Sequencing. PLoS ONE 2016, 11, e0155362. [Google Scholar] [CrossRef]
- Yin, C.; Song, Z.; Tian, H.; Palzkill, T.; Tao, P. Unveiling the structural features that regulate carbapenem deacylation in KPC-2 through QM/MM and interpretable machine learning. Phys. Chem. Chem. Phys. 2023, 25, 1349–1362. [Google Scholar] [CrossRef]
- Contributors, Z. 800 QM/MM Minimum Energy Pathway Conformations for the Deacylation Reactions of KPC-2/Imipenem. 2022. Available online: https://zenodo.org/records/7387266 (accessed on 26 October 2025).
- Raza, S.; Ding, C. Improving Clinical Decision Making With a Two-Stage Recommender System. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 21, 1180–1190. [Google Scholar] [CrossRef]
- Johnson, A.; Pollard, T.; Mark, R. MIMIC-III Clinical Database (Version 1.4). RRID:SCR_007345. 2016. Available online: https://physionet.org/content/mimiciii/1.4/ (accessed on 26 October 2025).
- Vaizman, Y.; Ellis, K.; Lanckriet, G. Recognizing Detailed Human Context in the Wild from Smartphones and Smartwatches. IEEE Pervasive Comput. 2017, 16, 62–74. [Google Scholar] [CrossRef]
- Nouman, H. Annotated Dataset for Knee Arthritis Detection. Kaggle Tech. Rep. 2024. Available online: https://www.kaggle.com/datasets/hafiznouman786/annotated-dataset-for-knee-arthritis-detection (accessed on 27 November 2025).
- Mamalakis, M.; Swift, A.J.; Vorselaars, B.; Ray, S.; Weeks, S.; Ding, W.; Clayton, R.H.; Mackenzie, L.S.; Banerjee, A. DenResCov-19: A deep transfer learning network for robust automatic classification of COVID-19, pneumonia, and tuberculosis from X-rays. Comput. Med. Imaging Graph. 2021, 94, 102008. [Google Scholar] [CrossRef]
- Abedeen, I.; Rahman, M.A.; Prottyasha, F.Z.; Ahmed, T.; Chowdhury, T.M.; Shatabda, S. Fracatlas: A dataset for fracture classification, localization and segmentation of musculoskeletal radiographs. Sci. Data 2023, 10, 521. [Google Scholar] [CrossRef]
- Rappaport, N.; Twik, M.; Plaschkes, I.; Nudel, R.; Iny Stein, T.; Levitt, J.; Gershoni, M.; Morrey, C.P.; Safran, M.; Lancet, D. MalaCards: An amalgamated human disease compendium with diverse clinical and genetic annotation and structured search. Nucleic Acids Res. 2017, 45, D877–D887. [Google Scholar] [CrossRef] [PubMed]
- Hamosh, A. Online Mendelian Inheritance in Man (OMIM). An Online Catalog of Human Genes and Genetic Disorders; McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University: Baltimore, MD, USA, 2014. [Google Scholar]
- Piñero, J.; Ramírez-Anguita, J.M.; Saüch-Pitarch, J.; Ronzano, F.; Centeno, E.; Sanz, F.; Furlong, L.I. The DisGeNET knowledge platform for disease genomics: 2019 update. Nucleic Acids Res. 2020, 48, D845–D855. [Google Scholar] [CrossRef] [PubMed]
- Lu, K.; Yang, K.; Sun, H.; Zhang, Q.; Zheng, Q.; Xu, K.; Chen, J.; Zhou, X. SympGAN: A systematic knowledge integration system for symptom–gene associations network. Knowl.-Based Syst. 2023, 276, 110752. [Google Scholar] [CrossRef]
- Wah, C.; Branson, S.; Welinder, P.; Perona, P.; Belongie, S. The Caltech-UCSD Birds-200-2011 Dataset. In Computation & Neural Systems Technical Report; CNS-TR-2011-001; California Institute of Technology: Pasadena, CA, USA, 2011. [Google Scholar]
- Ravi, K.; Yuan, J. ThreatGram 101—Extreme Telegram Replies Data with Threat Levels. 2024. Available online: https://data.mendeley.com/datasets/tm9s68vgxd/1 (accessed on 26 October 2025).
- Lyons, M.; Akamatsu, S.; Kamachi, M.; Gyoba, J. Coding facial expressions with gabor wavelets. In Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 14–16 April 1998; pp. 200–205. [Google Scholar]
- Gupte, S.; Paparrizos, J. Understanding the Black Box: A Deep Empirical Dive into Shapley Value Approximations for Tabular Data. Proc. ACM Manag. Data 2025, 3, 232. [Google Scholar] [CrossRef]
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Technical Report. 2009. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 26 November 2025).
- Krizhevsky, A.; Nair, V.; Hinton, G. CIFAR-100 (Canadian Institute for Advanced Research). Technical Report. 2009. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 26 November 2025).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
- Armato, S.G., III; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Zhao, B.; Aberle, D.R.; Henschke, C.I.; Hoffman, E.A.; et al. Data From LIDC-IDRI. The Cancer Imaging Archive. [Dataset]. 2015. Available online: https://www.cancerimagingarchive.net/collection/lidc-idri/ (accessed on 26 October 2025).
- Saha, A.; Harowicz, M.R.; Grimm, L.J.; Weng, J.; Cain, E.H.; Kim, C.E.; Ghate, S.V.; Walsh, R.; Mazurowski, M.A. Dynamic Contrast-Enhanced Magnetic Resonance Images of Breast Cancer Patients with Tumor Locations. The Cancer Imaging Archive. [Dataset]. 2021. Available online: https://www.cancerimagingarchive.net/collection/duke-breast-cancer-mri/ (accessed on 26 October 2025).
- Arshed, M.; Mumtaz, S.; Gherghina, Ş.C.; Urooj, N.; Ahmed, S.; Dewi, C. Multiclass AI-Generated Deepfake Face Detection Using Patch-Wise Deep Learning Model, Mendeley Data, V2. 2024. Available online: https://data.mendeley.com/datasets/r6h24d2d3y/2 (accessed on 26 October 2025).
- Çinar, A.; Yildirim, M. Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture. Med. Hypotheses 2020, 139, 109684. [Google Scholar] [CrossRef]
- Nogueira, M.A.; Abreu, P.H.; Martins, P.; Machado, P.; Duarte, H.; Santos, J. Image descriptors in radiology images: A systematic review. Artif. Intell. Rev. 2017, 47, 531–559. [Google Scholar] [CrossRef]
- El-Geneedy, M.; El-Din Moustafa, H.; Khater, H.; Abd-Elsamee, S.; Gamel, S.A. A comprehensive explainable AI approach for enhancing transparency and interpretability in stroke prediction. Sci. Rep. 2025, 15, 26048. [Google Scholar] [CrossRef]
- Amponsah, A.A. Explainable AI for computational pathology identifies model limitations and tissue biomarkers. arXiv 2024, arXiv:2409.03080v2. [Google Scholar]
- Adebayo, J.; Gilmer, J.; Muelly, M.; Goodfellow, I.; Hardt, M.; Kim, B. Sanity checks for saliency maps. Adv. Neural Inf. Process. Syst. 2018, 31, 9525–9536. [Google Scholar]
- Tjoa, E.; Guan, C. Quantifying Explainability of Saliency Methods in Deep Neural Networks With a Synthetic Dataset. IEEE Trans. Artif. Intell. 2023, 4, 858–870. [Google Scholar] [CrossRef]
- Miró-Nicolau, M.; Jaume-i Capó, A.; Moyà-Alcover, G. A comprehensive study on fidelity metrics for XAI. Inf. Process. Manag. 2025, 62, 103900. [Google Scholar] [CrossRef]
- Zheng, X.; Shirani, F.; Chen, Z.; Lin, C.; Cheng, W.; Guo, W.; Luo, D. F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI. arXiv 2024, arXiv:2410.02970. [Google Scholar] [CrossRef]
- Moradi, M.; Samwald, M. Evaluating the Robustness of Neural Language Models to Input Perturbations. arXiv 2021, arXiv:2108.12237. [Google Scholar] [CrossRef]
- Gawantka, F.; Just, F.; Savelyeva, M.; Wappler, M.; Lässig, J. A Novel Metric for Evaluating the Stability of XAI Explanations. Adv. Sci. Technol. Eng. Syst. J. 2024, 9, 133–142. [Google Scholar] [CrossRef]
- Asan, O.; Choudhury, A. Research Trends in Artificial Intelligence Applications in Human Factors Health Care: Mapping Review. JMIR Hum. Factors 2021, 8, e28236. [Google Scholar] [CrossRef]
- Oyeniyi, J.; Oluwaseyi, P. Emerging trends in AI-powered medical imaging: Enhancing diagnostic accuracy and treatment decisions. Int. J. Enhanc. Res. Sci. Technol. Eng. 2024, 13, 81–94. [Google Scholar]
- Yu, F.; Moehring, A.; Banerjee, O.; Salz, T.; Agarwal, N.; Rajpurkar, P. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nat. Med. 2024, 30, 837–849. [Google Scholar] [CrossRef]
- Yu, Y.; Gomez-Cabello, C.A.; Haider, S.A.; Genovese, A.; Prabha, S.; Trabilsy, M.; Collaco, B.G.; Wood, N.G.; Bagaria, S.; Tao, C.; et al. Enhancing Clinician Trust in AI Diagnostics: A Dynamic Framework for Confidence Calibration and Transparency. Diagnostics 2025, 15, 2204. [Google Scholar] [CrossRef]
- Fogliato, R.; Chappidi, S.; Lungren, M.; Fisher, P.; Wilson, D.; Fitzke, M.; Parkinson, M.; Horvitz, E.; Inkpen, K.; Nushi, B. Who goes first? Influences of human-AI workflow on decision making in clinical imaging. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 1362–1374. [Google Scholar]
- Yi, P.H.; Prinster, D.; Mahmood, A. Incorrect AI Advice Influences Diagnostic Decisions in Radiology. Available online: https://www.rsna.org/news/2024/november/ai-influences-diagnostic-decisions (accessed on 28 November 2025).
- Pietilä, E.; Moreno-Sánchez, P.A. When an Explanation is not Enough: An Overview of Evaluation Metrics of Explainable AI Systems in the Healthcare Domain. In Proceedings of the MEDICON’23 and CMBEBIH’23; Badnjević, A., Gurbeta Pokvić, L., Eds.; Springer: Cham, Switzerland, 2024; pp. 573–584. [Google Scholar]
- Hwang, H.; Bell, A.; Fonseca, J.; Pliatsika, V.; Stoyanovich, J.; Whang, S.E. SHAP-based Explanations are Sensitive to Feature Representation. arXiv 2025, arXiv:2505.08345. [Google Scholar] [CrossRef]
- Shobeiri, S. Enhancing transparency in healthcare machine learning models using Shap and Deeplift a methodological approach. Iraqi J. Inf. Commun. Technol. 2024, 7, 56–72. [Google Scholar] [CrossRef]
- Jethani, N.; Sudarshan, M.; Covert, I.; Lee, S.I.; Ranganath, R. FastSHAP: Real-Time Shapley Value Estimation. arXiv 2022, arXiv:2107.07436. [Google Scholar]
- European Parliament and Council of the European Union. Document 32024R1689: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 28 November 2025).
- Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0); Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. [Google Scholar]
- Ambritta, P.N.; Mahalle, P.N.; Bhapkar, H.R.; Shinde, G.R.; Sable, N.P. Improving explainable AI interpretability with mathematical models for evaluating explanation methods. Int. J. Inf. Technol. 2025, 17, 1–21. [Google Scholar] [CrossRef]
- Antoniadi, A.M.; Du, Y.; Guendouz, Y.; Wei, L.; Mazo, C.; Becker, B.A.; Mooney, C. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2021, 11, 5088. [Google Scholar] [CrossRef]
- Laberge, G.; Pequignot, Y.B.; Marchand, M.; Khomh, F. Tackling the XAI disagreement problem with regional explanations. In Proceedings of the International Conference on Artificial Intelligence and Statistics. PMLR, Valencia, Spain, 2–4 May 2024; pp. 2017–2025. [Google Scholar]
- The Royal College of Radiologists. Clinical Radiology Workforce Census 2022; Technical Report; The Royal College of Radiologists: London, UK, 2022. [Google Scholar]
- Wu, J.T.; Wong, K.C.L.; Gur, Y.; Ansari, N.; Karargyris, A.; Sharma, A.; Morris, M.; Saboury, B.; Ahmad, H.; Boyko, O.; et al. Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents. JAMA Netw. Open 2020, 3, e2022779. [Google Scholar] [CrossRef]






| Name | Ref. | S | SE | Generalities | Advantages | Disadvantages | Common Application |
|---|---|---|---|---|---|---|---|
| SHAP | [22] | M-A | L & G | Shapley values (game theory) | Widely used and well understood | Computationally expensive and assumes feature independence. | Feature importance, EHR-based risk scores, relevant biomarkers, tabular ML models, fairness/bias |
| DeepSHAP | [27] | M-S | L & G | DeepLIFT (SHAP) backpropagation | Accurate for NNs; captures layer-wise interactions | Sensitive to reference baseline. Applicable only to NNs. | CNNs, deep model interpretation, attributing importance, multimodal explainability |
| Integrated Gradients | [28] | M-S | L & G | Integrated path from baseline | Provides smooth, noise-reduced attributions. | Dependent on baseline; may produce misleading attributions. Applicable only to NNs. | Heatmaps, explaining deep EHR, time series, genomics. |
| Expected Gradients | [29] | M-S | L | Expected gradient-path values relative to the chosen baselines. | Produces more stable and robust attributions. | High computational load due to sampling from data distribution. | Informing attribution stability and explaining ICU mortality, biomedical systems, multimodal models. |
| Contextual Decomposition | [30] | M-S | L | Decomposition of model activations and attention contributions. | Works well with sequential processing inside deep stacked attention mechanisms. | Attribution rules can become complex for deep architectures. Applicable only to RNNs & attention. | Clinical time-series and time segments, attention-based genomics. |
| CXPlain | [31] | M-A | L | Learns the perturbation-induced change in loss. | Often more faithful than surrogate-based methods. | Performance depends on the quality of explainer training. | EHR/tabular, feature attribution, deep models, multimodal models. |
| Anchors | [32] | M-A | L | Rule-based | Human-readable, rule-based explanations (if–then anchors). | Hard to find anchors in high-dimensional data. | Rule extraction, EHR predictions, clinical text, image explanations. |
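As a concrete illustration of the most widely applied post hoc approach in the surveyed studies, the sketch below applies SHAP's TreeExplainer to a tree ensemble trained on synthetic tabular data. The feature names, the random-forest model, and the synthetic outcome are placeholders rather than details from any cited work; the shap and scikit-learn packages are assumed to be available.

```python
# Minimal sketch: post hoc SHAP attributions for a tabular risk model.
# Synthetic data and feature names are placeholders; requires shap and scikit-learn.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical EHR-style features
X = rng.normal(size=(500, len(feature_names)))
# Synthetic binary outcome loosely driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes (approximate) Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Normalise the output shape across shap versions (list per class vs. 3-D array).
values = np.array(shap_values[1]) if isinstance(shap_values, list) else np.array(shap_values)
if values.ndim == 3:  # (samples, features, classes)
    values = values[..., 1]

# Global view: mean absolute attribution per feature (local values are per sample).
for name, importance in zip(feature_names, np.abs(values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```

A visualisation such as shap.summary_plot would give the combined local/global view referred to in the table above; the per-sample rows of the attribution matrix provide the local explanations.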