Explainable and Trustworthy AI in Health and Biology: Enabling Transparent and Actionable Decision-Making
A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "Medical & Healthcare AI".
Deadline for manuscript submissions: 31 August 2026
Special Issue Information
- Focus:
This Special Issue will focus on the development, application, and evaluation of explainable artificial intelligence (XAI) techniques and trustworthy AI in healthcare and the biological sciences. It will bring together interdisciplinary research that advances our understanding of how AI-driven systems can be made transparent, reliable, and clinically actionable, moving beyond predictive accuracy toward models that both clinicians and domain experts can understand and trust.
- Scope:
We welcome original research articles, reviews, and perspective pieces that address (but are not limited to) the following topics:
- Novel XAI methods designed for medical and biological data (e.g., omics, imaging, electronic health records (EHRs), sensor data);
- Case studies showcasing how explainability supports clinical decision-making;
- Human–AI interaction and how explanations impact trust and adoption;
- Regulatory and ethical considerations for trustworthy AI in health contexts;
- Benchmarks, evaluation frameworks, or datasets for assessing explainability and clinical relevance;
- Comparative studies between black-box and interpretable models;
- Integration of domain knowledge and expert feedback into AI explanations;
- Theoretical or conceptual work on trust, transparency, and responsibility in health AI.
- Purpose:
The purpose of this Special Issue is to foster discussion and dissemination of methods that not only predict outcomes but also support the understanding, justification, and safe deployment of AI in real-world clinical and biological settings. It will bridge the gap between algorithm developers and end users (clinicians, biologists, and regulators), contributing to systems that are technically sound, ethically aligned, and clinically useful.
While recent years have seen an explosion in both AI applications in healthcare and explainable AI techniques, there remains a disconnect between technical XAI innovations and their practical deployment in health and biology. The existing literature often emphasizes model explainability in abstract or technical terms, with less attention paid to clinical relevance, human interpretability, and the trustworthiness of model outputs in sensitive domains.
This Special Issue will complement and extend the current body of work by:
- Emphasizing real-world use cases where explanations have demonstrably influenced practice or decision-making;
- Exploring the interplay between explanation, trust, and clinical actionability, which is underexplored in much of the XAI literature;
- Highlighting the user-centered design of explanations, including qualitative evaluations with healthcare professionals or domain experts;
- Encouraging contributions that consider the ethical, regulatory, and societal dimensions of explainable and trustworthy AI.
By doing so, this Special Issue will serve as a timely and practical resource for researchers, practitioners, and policymakers seeking to implement AI systems that are not only intelligent but also interpretable, safe, and usable.
Dr. Ahmed Salih
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- explainable AI
- trustworthy AI
- model interpretability
- responsible AI
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found here.