Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 July 2025

Special Issue Editors


Dr. Mohamed Shehata
Guest Editor
Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Interests: medical image analysis; artificial intelligence in medicine; deep learning; computer-aided diagnostics; precision medicine; diagnostic and prognostic markers; big data in medicine

Prof. Dr. Mostafa Elhosseini
Guest Editor
Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
Interests: artificial intelligence (AI); machine learning; deep learning; robotics; metaheuristics; computer-assisted diagnosis systems; computer vision; bioinspired optimization algorithms; smart systems engineering

Special Issue Information

Dear Colleagues,

In an era where artificial intelligence (AI) is rapidly transforming the landscape of healthcare, the need for transparency and understandability in AI algorithms has never been more critical. This Special Issue, entitled "Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations, 2nd Edition", seeks to bridge the gap between advanced AI technologies and their practical, ethical, and efficient application in medical settings.

Focus and Scope:

We invite original research, reviews, and insightful studies that delve into the development, implementation, and evaluation of explainable AI systems in medical diagnostics and treatment. This Special Issue aims to spotlight innovative methodologies, case studies, and frameworks that enhance the interpretability and transparency of AI models, thereby fostering trust and reliability among healthcare professionals and patients.

Key Themes:

  • Development of explainable AI models for diagnosis, prognosis, and treatment planning.
  • Ethical implications and considerations in deploying AI in medical settings.
  • Case studies showcasing successful implementation of explainable AI in clinical practice.
  • Advances in machine learning and deep learning that enhance transparency and interpretability.
  • Integration of AI with traditional medical knowledge to improve patient outcomes.
  • User-centric approaches to designing explainable AI systems in healthcare.
  • Regulatory and policy perspectives on the use of AI in medical diagnostics and treatment.

Submissions:

We welcome submissions from researchers, practitioners, and thought leaders in the fields of computer science, medical informatics, bioengineering, and related disciplines. Articles should emphasize not only the technological aspects of AI, but also its practical implications, user experience, and ethical considerations in a medical context.

By focusing on explainable AI in healthcare, this Special Issue aims to illuminate the path toward more transparent, ethical, and effective integration of AI in medicine, ultimately contributing to improved patient care and healthcare outcomes.

Dr. Mohamed Shehata
Prof. Dr. Mostafa Elhosseini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deciphering medicine
  • explainable AI
  • machine learning
  • deep learning
  • medical diagnostics and treatment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

25 pages, 691 KiB  
Article
What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach
by Elisabeth Hildt
Bioengineering 2025, 12(4), 375; https://doi.org/10.3390/bioengineering12040375 - 2 Apr 2025
Abstract
This article reflects on explainability in the context of medical artificial intelligence (AI) applications, focusing on AI-based clinical decision support systems (CDSSs). After introducing the concept of explainability in AI and providing a short overview of AI-based CDSSs and the role of explainability in CDSSs, four use cases of AI-based CDSSs will be presented. The examples were chosen to highlight different types of AI-based CDSSs as well as different types of explanations: a machine learning (ML) tool that lacks explainability; an approach with post hoc explanations; a hybrid model that provides medical knowledge-based explanations; and a causal model that involves complex moral concepts. Then, the role, relevance, and implications of explainability in the context of the use cases will be discussed, focusing on seven explainability-related aspects and themes. These are: (1) the addressees of explainability in medical AI; (2) the relevance of explainability for medical decision making; (3) the type of explanation provided; (4) the (often-cited) conflict between explainability and accuracy; (5) epistemic authority and automation bias; (6) individual preferences and values; (7) patient autonomy and doctor–patient relationships. The case-based discussion reveals that the role and relevance of explainability in AI-based CDSSs vary considerably depending on the tool and use context. While it is plausible to assume that explainability in medical AI has positive implications, empirical data on explainability and its implications are scarce. Use-case-based studies are needed to investigate not only the technical aspects of explainability but also the perspectives of clinicians and patients on the relevance of explainability and its implications.
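
To make the abstract's notion of a post hoc explanation concrete, the sketch below applies permutation feature importance, one common model-agnostic post hoc technique, to an opaque classifier. This is purely illustrative and not drawn from the article: the dataset is synthetic, and the clinical feature names (age, systolic_bp, glucose, bmi, creatinine) are hypothetical placeholders.

```python
# Minimal illustrative sketch (not from the paper): a post hoc explanation
# of a black-box clinical risk classifier via permutation feature importance.
# Data and feature names are synthetic placeholders, not real clinical data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (e.g., labs and vitals).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "systolic_bp", "glucose", "bmi", "creatinine"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an opaque ensemble trained to predict a binary outcome.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post hoc explanation: shuffle each feature on held-out data and measure
# how much performance drops; larger drops indicate stronger reliance.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>12}: {score:.3f}")
```

The same pattern extends to other post hoc explainers (e.g., SHAP or LIME), which attribute individual predictions rather than global behavior; which level of explanation a CDSS should surface to clinicians and patients is exactly the kind of question the article's use cases raise.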