Explainable and Trustworthy AI in Health and Biology: Enabling Transparent and Actionable Decision-Making

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".

Deadline for manuscript submissions: 31 August 2026

Special Issue Editor


Dr. Ahmed Salih
Guest Editor
Department of Population Health Sciences, University of Leicester, University Road, Leicester LE1 7RH, UK
Interests: explainable AI; biological age estimation

Special Issue Information

Dear Colleagues,
  1. Focus:
    This Special Issue will focus on the development, application, and evaluation of explainable and trustworthy artificial intelligence (XAI) techniques in healthcare and the biological sciences. It will bring together interdisciplinary research that advances our understanding of how AI-driven systems can be made transparent, reliable, and clinically actionable, moving beyond predictive accuracy toward models that can be understood and trusted by clinicians and domain experts alike.
  2. Scope:
    We welcome original research articles, reviews, and perspective pieces that address (but are not limited to) the following topics:
    • Novel XAI methods designed for medical and biological data (e.g., omics, imaging, EHRs, sensor data);
    • Case studies showcasing how explainability supports clinical decision-making;
    • Human–AI interaction and how explanations impact trust and adoption;
    • Regulatory and ethical considerations for trustworthy AI in health contexts;
    • Benchmarks, evaluation frameworks, or datasets for assessing explainability and clinical relevance;
    • Comparative studies between black-box and interpretable models;
    • Integration of domain knowledge and expert feedback into AI explanations;
    • Theoretical or conceptual work on trust, transparency, and responsibility in health AI.
  3. Purpose:
    The purpose of this Special Issue is to foster the discussion and dissemination of methods that not only predict outcomes but also support the understanding, justification, and safe deployment of AI in real-world clinical and biological settings. It will bridge the gap between algorithm developers and end-users (clinicians, biologists, regulators), contributing to systems that are technically sound, ethically aligned, and clinically useful.

While recent years have seen an explosion in both AI applications in healthcare and explainable AI techniques, there remains a disconnect between technical XAI innovations and their practical deployment in health and biology. The existing literature often emphasizes model explainability in abstract or technical terms, with less attention paid to clinical relevance, human interpretability, and the trustworthiness of model outputs in sensitive domains.

This Special Issue will complement and extend the current body of work by:

  • Emphasizing real-world use cases where explanations have demonstrably influenced practice or decision-making;
  • Exploring the interplay between explanation, trust, and clinical actionability, an area that remains underexplored in much of the XAI literature;
  • Highlighting the user-centered design of explanations, including qualitative evaluations with healthcare professionals or domain experts;
  • Encouraging contributions that consider the ethical, regulatory, and societal dimensions of explainable and trustworthy AI.

By doing so, this Special Issue will serve as a timely and practical resource for researchers, practitioners, and policymakers seeking to implement AI systems that are not only intelligent but also interpretable, safe, and usable.

Dr. Ahmed Salih
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • trustworthy AI
  • model interpretability
  • responsible AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

21 pages, 765 KB  
Article
DERI1000: A New Benchmark for Dataset Explainability Readiness
by Andrej Pisarcik, Robert Hudec and Roberta Hlavata
AI 2025, 6(12), 320; https://doi.org/10.3390/ai6120320 - 8 Dec 2025
Abstract
Deep learning models are increasingly evaluated not only for predictive accuracy but also for their robustness, interpretability, and data quality dependencies. However, current benchmarks largely isolate these dimensions, lacking a unified evaluation protocol that integrates data-centric and model-centric properties. To bridge the gap between data quality assessment and eXplainable Artificial Intelligence (XAI), we introduce DERI1000—the Dataset Explainability Readiness Index—a benchmark that quantifies how suitable and well-prepared a dataset is for explainable and trustworthy deep learning. DERI1000 combines eleven measurable factors—sharpness, noise artifacts, exposure, resolution, duplicates, diversity, separation, imbalance, label noise proxy, XAI overlay, and XAI stability—into a single normalized score calibrated around a reference baseline of 1000. Using five MedMNIST datasets (PathMNIST, ChestMNIST, BloodMNIST, OCTMNIST, OrganCMNIST) and five convolutional neural architectures (DenseNet121, ResNet50, ResNet18, VGG16, EfficientNet-B0), we fitted factor weights through multi-dataset impact analysis. The results indicate that imbalance (0.3319), separation (0.1377), and label noise proxy (0.2161) are the dominant contributors to explainability readiness. Experiments demonstrate that DERI1000 effectively distinguishes models with superficially high accuracy (ACC) but poor interpretability or robustness. The framework thereby enables cross-domain, reproducible evaluation of model performance and data quality under unified metrics. We conclude that DERI1000 provides a scalable, interpretable, and extensible foundation for benchmarking deep learning systems across both data-centric and explainability-driven dimensions.
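
To make the kind of aggregation the abstract describes more concrete, the minimal sketch below combines per-factor readiness values into a single index calibrated around a reference baseline of 1000. The abstract does not give the actual DERI1000 formula, so the aggregation rule, the `deri_score` function, and all weights other than the three reported for imbalance, separation, and the label noise proxy are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the aggregation below (weighted sum of normalized
# factor scores, scaled so that a reference baseline maps to 1000) is an
# assumption, not the published DERI1000 method. Factor names follow the
# abstract; the weights for imbalance, separation, and label noise proxy are
# those reported there, and the remaining weights are placeholders.

FACTOR_WEIGHTS = {
    "sharpness": 0.05,           # placeholder weight
    "noise_artifacts": 0.05,     # placeholder weight
    "exposure": 0.05,            # placeholder weight
    "resolution": 0.05,          # placeholder weight
    "duplicates": 0.05,          # placeholder weight
    "diversity": 0.05,           # placeholder weight
    "separation": 0.1377,        # reported in the abstract
    "imbalance": 0.3319,         # reported in the abstract
    "label_noise_proxy": 0.2161, # reported in the abstract
    "xai_overlay": 0.05,         # placeholder weight
    "xai_stability": 0.05,       # placeholder weight
}

BASELINE = 1000  # reference calibration point named in the abstract


def deri_score(factors: dict) -> float:
    """Combine per-factor readiness values (each normalized to [0, 1],
    where 1.0 matches the reference baseline) into a single index."""
    weighted = sum(FACTOR_WEIGHTS[name] * factors[name] for name in FACTOR_WEIGHTS)
    total_weight = sum(FACTOR_WEIGHTS.values())
    return BASELINE * weighted / total_weight


if __name__ == "__main__":
    # A hypothetical dataset matching the baseline on every factor scores
    # exactly 1000; degrading the imbalance factor pulls the score down most,
    # consistent with it being reported as the dominant contributor.
    baseline_like = {name: 1.0 for name in FACTOR_WEIGHTS}
    imbalanced = dict(baseline_like, imbalance=0.5)
    print(round(deri_score(baseline_like)))  # 1000
    print(round(deri_score(imbalanced)))     # noticeably lower score
```

The example is meant only to make the calibration around 1000 and the weight of the dominant factors tangible; the paper itself should be consulted for the actual normalization and weighting procedure.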