Explainable and Trustworthy AI in Health and Biology: Enabling Transparent and Actionable Decision-Making

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".

Deadline for manuscript submissions: 31 August 2026 | Viewed by 4042

Special Issue Editor


Dr. Ahmed Salih
Guest Editor
Department of Population Health Sciences, University of Leicester, University Road, Leicester LE1 7RH, UK
Interests: explainable AI; biological age estimation

Special Issue Information

Dear Colleagues,
  1. Focus:
    This Special Issue will focus on the development, application, and evaluation of explainable and trustworthy artificial intelligence (XAI) techniques in healthcare and the biological sciences. It will bring together interdisciplinary research that advances our understanding of how AI-driven systems can be made transparent, reliable, and clinically actionable, moving beyond predictive accuracy toward models that can be understood and trusted by both clinicians and domain experts.
  2. Scope:
    We welcome original research articles, reviews, and perspective pieces that address (but are not limited to) the following topics:
    • Novel XAI methods designed for medical and biological data (e.g., omics, imaging, EHRs, sensor data);
    • Case studies showcasing how explainability supports clinical decision-making;
    • Human–AI interaction and how explanations impact trust and adoption;
    • Regulatory and ethical considerations for trustworthy AI in health contexts;
    • Benchmarks, evaluation frameworks, or datasets for assessing explainability and clinical relevance;
    • Comparative studies between black-box and interpretable models;
    • Integration of domain knowledge and expert feedback into AI explanations;
    • Theoretical or conceptual work on trust, transparency, and responsibility in health AI.
  3. Purpose:
    The purpose of this Special Issue is to foster the discussion and dissemination of methods that not only predict outcomes but also support the understanding, justification, and safe deployment of AI in real-world clinical and biological settings. It will bridge the gap between algorithm developers and end-users (clinicians, biologists, regulators), contributing to systems that are technically sound, ethically aligned, and clinically useful.

While recent years have seen an explosion in both AI applications in healthcare and explainable AI techniques, there remains a disconnect between technical XAI innovations and their practical deployment in health and biology. The existing literature often emphasizes model explainability in abstract or technical terms, with less attention paid to clinical relevance, human interpretability, and the trustworthiness of model outputs in sensitive domains.

This Special Issue will complement and extend the current body of work by:

  • Emphasizing real-world use cases where explanations have demonstrably influenced practice or decision-making;
  • Exploring the interplay between explanation, trust, and clinical actionability, which is underexplored in much of the XAI literature;
  • Highlighting the user-centered design of explanations, including qualitative evaluations with healthcare professionals or domain experts;
  • Encouraging contributions that consider the ethical, regulatory, and societal dimensions of explainable and trustworthy AI.

By doing so, this Special Issue will serve as a timely and practical resource for researchers, practitioners, and policymakers seeking to implement AI systems that are not only intelligent but also interpretable, safe, and usable.

Dr. Ahmed Salih
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • trustworthy AI
  • model interpretability
  • responsible AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

Jump to: Review

22 pages, 1714 KB  
Article
Integrating Machine-Learning Methods with Importance–Performance Maps to Evaluate Drivers for the Acceptance of New Vaccines: Application to AstraZeneca COVID-19 Vaccine
by Jorge de Andrés-Sánchez, Mar Souto-Romero and Mario Arias-Oliva
AI 2026, 7(1), 34; https://doi.org/10.3390/ai7010034 - 21 Jan 2026
Viewed by 700
Abstract
Background: The acceptance of new vaccines under uncertainty—such as during the COVID-19 pandemic—poses a major public health challenge because efficacy and safety information is still evolving. Methods: We propose an integrative analytical framework that combines a theory-based model of vaccine acceptance—the cognitive–affective–normative (CAN) model—with machine-learning techniques (decision tree regression, random forest, and Extreme Gradient Boosting) and SHapley Additive exPlanations (SHAP) integrated into an importance–performance map (IPM) to prioritize determinants of vaccination intention. Using survey data collected in Spain in September 2020 (N = 600), when the AstraZeneca vaccine had not yet been approved, we examine the roles of perceived efficacy (EF), fear of COVID-19 (FC), fear of the vaccine (FV), and social influence (SI). Results: EF and SI consistently emerged as the most influential determinants across modelling approaches. Ensemble learners (random forest and Extreme Gradient Boosting) achieved stronger out-of-sample predictive performance than the single decision tree, while decision tree regression provided an interpretable, rule-based representation of the main decision pathways. Exploiting the local nature of SHAP values, we also constructed SHAP-based IPMs for the full sample and for the low-acceptance segment, enhancing the policy relevance of the prioritization exercise. Conclusions: By combining theory-driven structural modelling with predictive and explainable machine learning, the proposed framework offers a transparent and replicable tool to support the design of vaccination communication strategies and can be transferred to other settings involving emerging health technologies. Full article
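For readers less familiar with SHAP, the framework described in the abstract above rests on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution across all subsets of the other features. The sketch below is purely illustrative and is not the authors' code; the toy scoring function and its weights are hypothetical, with feature names borrowed from the abstract's constructs (EF, SI, FV):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's average marginal
    contribution over all subsets of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
        phi[f] = total
    return phi

# Toy additive "intention score" over three drivers named after the
# constructs in the abstract; the weights are made up for illustration.
weights = {"EF": 0.4, "SI": 0.3, "FV": -0.1}
score = lambda S: sum(weights[f] for f in S)

phi = shapley_values(["EF", "SI", "FV"], score)
# For an additive model, each Shapley value recovers the feature's own weight.
```

In practice, libraries such as shap approximate these values efficiently for complex models such as the random forest and gradient-boosting learners mentioned above, rather than enumerating all subsets.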

21 pages, 765 KB  
Article
DERI1000: A New Benchmark for Dataset Explainability Readiness
by Andrej Pisarcik, Robert Hudec and Roberta Hlavata
AI 2025, 6(12), 320; https://doi.org/10.3390/ai6120320 - 8 Dec 2025
Viewed by 1189
Abstract
Deep learning models are increasingly evaluated not only for predictive accuracy but also for their robustness, interpretability, and data quality dependencies. However, current benchmarks largely isolate these dimensions, lacking a unified evaluation protocol that integrates data-centric and model-centric properties. To bridge the gap between data quality assessment and eXplainable Artificial Intelligence (XAI), we introduce DERI1000—the Dataset Explainability Readiness Index—a benchmark that quantifies how suitable and well-prepared a dataset is for explainable and trustworthy deep learning. DERI1000 combines eleven measurable factors—sharpness, noise artifacts, exposure, resolution, duplicates, diversity, separation, imbalance, label noise proxy, XAI overlay, and XAI stability—into a single normalized score calibrated around a reference baseline of 1000. Using five MedMNIST datasets (PathMNIST, ChestMNIST, BloodMNIST, OCTMNIST, OrganCMNIST) and five convolutional neural architectures (DenseNet121, ResNet50, ResNet18, VGG16, EfficientNet-B0), we fitted factor weights through multi-dataset impact analysis. The results indicate that imbalance (0.3319), separation (0.1377), and label noise proxy (0.2161) are the dominant contributors to explainability readiness. Experiments demonstrate that DERI1000 effectively distinguishes models with superficially high accuracy (ACC) but poor interpretability or robustness. The framework thereby enables cross-domain, reproducible evaluation of model performance and data quality under unified metrics. We conclude that DERI1000 provides a scalable, interpretable, and extensible foundation for benchmarking deep learning systems across both data-centric and explainability-driven dimensions. Full article

Review

Jump to: Research

29 pages, 1145 KB  
Review
Explainable Artificial Intelligence (XAI) for EEG Analysis: A Survey on Recent Trends and Advancements
by Vassilis Lyberatos, Georgios Kontos, Nikolaos Spanos, Orfeas Menis Mastromichalakis, Athanasios Voulodimos and Giorgos Stamou
AI 2026, 7(3), 95; https://doi.org/10.3390/ai7030095 - 5 Mar 2026
Viewed by 1530
Abstract
Recent advancements in XAI have radically changed the way that AI systems are evaluated, as transparency and trustworthiness are now valued as highly as performance. This is especially true in medical applications, as, in order for such tools to be used in practical applications, interpretability is a key requirement for clinical adoption. Electroencephalography (EEG) analysis, in particular, has seen a significant rise in research, as the difficult and complex nature of EEG signals benefits from these methods, enabling researchers and practitioners to gain new insights from the vast amount of data that is now available. This survey presents a comprehensive analysis of the latest trends and advancements in XAI for EEG analysis. First, we provide a brief overview of fundamental EEG tasks, available datasets, and AI model approaches used for analysis. Then, we classify XAI methods using well-established taxonomies in XAI research, such as locality and generalization of explanations. By exploring all relevant XAI techniques in EEG analysis, our study offers researchers a clear perspective on the current state of the field and identifies potential research gaps. Our review indicates that current XAI approaches for EEG often face limitations in robustness, consistency, and neuroscientific grounding. These findings highlight the need for more reliable and domain-informed explainability methods to support trustworthy EEG analysis in research and clinical practice. Full article
