Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 2836

Special Issue Editors


Dr. Mohamed Shehata
Guest Editor
Department of Bioengineering, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Interests: medical image analysis; artificial intelligence in medicine; deep learning; computer-aided diagnostics; precision medicine; diagnostic and prognostic markers; big data in medicine

Prof. Dr. Mostafa Elhosseini
Guest Editor
Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, 35516 Mansoura, Egypt
Interests: artificial intelligence (AI); machine learning; deep learning; robotics; metaheuristics; computer-assisted diagnosis systems; computer vision; bio-inspired optimization algorithms; smart systems engineering

Special Issue Information

Dear Colleagues,

In an era where artificial intelligence (AI) is rapidly transforming the landscape of healthcare, the need for transparency and understandability in AI algorithms has never been more critical. This Special Issue, entitled "Deciphering Medicine: The Role of Explainable Artificial Intelligence in Healthcare Innovations, 2nd Edition", seeks to bridge the gap between advanced AI technologies and their practical, ethical, and efficient application in medical settings.

Focus and Scope:

We invite original research, reviews, and insightful studies that delve into the development, implementation, and evaluation of explainable AI systems in medical diagnostics and treatment. This Special Issue aims to spotlight innovative methodologies, case studies, and frameworks that enhance the interpretability and transparency of AI models, thereby fostering trust and reliability among healthcare professionals and patients.

Key Themes:

  • Development of explainable AI models for diagnosis, prognosis, and treatment planning.
  • Ethical implications and considerations in deploying AI in medical settings.
  • Case studies showcasing successful implementation of explainable AI in clinical practice.
  • Advances in machine learning and deep learning that enhance transparency and interpretability.
  • Integration of AI with traditional medical knowledge to improve patient outcomes.
  • User-centric approaches to designing explainable AI systems in healthcare.
  • Regulatory and policy perspectives on the use of AI in medical diagnostics and treatment.

Submissions:

We welcome submissions from researchers, practitioners, and thought leaders in the fields of computer science, medical informatics, bioengineering, and related disciplines. Articles should emphasize not only the technological aspects of AI, but also its practical implications, user experience, and ethical considerations in a medical context.

By focusing on explainable AI in healthcare, this Special Issue aims to illuminate the path toward more transparent, ethical, and effective integration of AI in medicine, ultimately contributing to improved patient care and healthcare outcomes.

Dr. Mohamed Shehata
Prof. Dr. Mostafa Elhosseini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deciphering medicine
  • explainable AI
  • machine learning
  • deep learning
  • medical diagnostics and treatment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

19 pages, 840 KiB  
Article
A Dual-Feature Framework for Enhanced Diagnosis of Myeloproliferative Neoplasm Subtypes Using Artificial Intelligence
by Amna Bamaqa, N. S. Labeeb, Eman M. El-Gendy, Hani M. Ibrahim, Mohamed Farsi, Hossam Magdy Balaha, Mahmoud Badawy and Mostafa A. Elhosseini
Bioengineering 2025, 12(6), 623; https://doi.org/10.3390/bioengineering12060623 - 7 Jun 2025
Viewed by 365
Abstract
Myeloproliferative neoplasms, particularly the Philadelphia chromosome-negative (Ph-negative) subtypes such as essential thrombocythemia, polycythemia vera, and primary myelofibrosis, present diagnostic challenges due to overlapping morphological features and clinical heterogeneity. Traditional diagnostic approaches, including imaging and histopathological analysis, are often limited by interobserver variability, delayed diagnosis, and subjective interpretations. To address these limitations, we propose a novel framework that integrates handcrafted and automatic feature extraction techniques for improved classification of Ph-negative myeloproliferative neoplasms. Handcrafted features capture interpretable morphological and textural characteristics. In contrast, automatic features utilize deep learning models to identify complex patterns in histopathological images. The extracted features were used to train machine learning models, with hyperparameter optimization performed using Optuna. Our framework achieved high performance across multiple metrics, including precision, recall, F1 score, accuracy, specificity, and weighted average. The concatenated probabilities, which combine both feature types, demonstrated the highest mean weighted average of 0.9969, surpassing the individual performances of handcrafted (0.9765) and embedded features (0.9686). Statistical analysis confirmed the robustness and reliability of the results. However, challenges remain in assuming normal distributions for certain feature types. This study highlights the potential of combining domain-specific knowledge with data-driven approaches to enhance diagnostic accuracy and support clinical decision-making.
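To make the pipeline described in this abstract concrete, here is a minimal, hypothetical Python sketch of the dual-feature idea: one classifier is tuned with Optuna per feature family, and their class probabilities are concatenated for a small combiner. Everything below is an illustrative stand-in (random arrays in place of real handcrafted and deep histopathological features), not the authors' implementation.

import numpy as np
import optuna
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
n = 300
X_hand = rng.normal(size=(n, 32))   # stand-in for handcrafted morphology/texture features
X_deep = rng.normal(size=(n, 128))  # stand-in for deep-learning embeddings of the same images
y = rng.integers(0, 3, size=n)      # three Ph-negative MPN subtypes

def tune_forest(X, y, n_trials=20):
    # Optuna search over random-forest hyperparameters, scored by CV accuracy.
    def objective(trial):
        clf = RandomForestClassifier(
            n_estimators=trial.suggest_int("n_estimators", 100, 400),
            max_depth=trial.suggest_int("max_depth", 3, 20),
            random_state=0,
        )
        return cross_val_score(clf, X, y, cv=3).mean()
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return RandomForestClassifier(**study.best_params, random_state=0).fit(X, y)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                       random_state=0, stratify=y)
m_hand = tune_forest(X_hand[idx_train], y[idx_train])  # one tuned model per feature family
m_deep = tune_forest(X_deep[idx_train], y[idx_train])

# "Concatenated probabilities": stack each model's class probabilities and let a
# simple meta-classifier combine them. (A real pipeline would use out-of-fold
# probabilities here to avoid leaking training fit into the combiner.)
P_train = np.hstack([m_hand.predict_proba(X_hand[idx_train]),
                     m_deep.predict_proba(X_deep[idx_train])])
P_test = np.hstack([m_hand.predict_proba(X_hand[idx_test]),
                    m_deep.predict_proba(X_deep[idx_test])])
meta = LogisticRegression(max_iter=1000).fit(P_train, y[idx_train])
print("held-out accuracy:", meta.score(P_test, y[idx_test]))

Combining at the probability level keeps the combiner tiny and lets each feature family be modeled separately, which is one plausible reading of why the concatenated variant outperforms either feature type alone.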

25 pages, 691 KiB  
Article
What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach
by Elisabeth Hildt
Bioengineering 2025, 12(4), 375; https://doi.org/10.3390/bioengineering12040375 - 2 Apr 2025
Cited by 1 | Viewed by 1932
Abstract
This article reflects on explainability in the context of medical artificial intelligence (AI) applications, focusing on AI-based clinical decision support systems (CDSSs). After introducing the concept of explainability in AI, providing a short overview of AI-based CDSSs, and outlining the role of explainability in CDSSs, four use cases of AI-based CDSSs are presented. The examples were chosen to highlight different types of AI-based CDSSs as well as different types of explanations: a machine learning (ML) tool that lacks explainability; an approach with post hoc explanations; a hybrid model that provides medical knowledge-based explanations; and a causal model that involves complex moral concepts. Then, the role, relevance, and implications of explainability in the context of the use cases are discussed, focusing on seven explainability-related aspects and themes: (1) the addressees of explainability in medical AI; (2) the relevance of explainability for medical decision making; (3) the type of explanation provided; (4) the (often-cited) conflict between explainability and accuracy; (5) epistemic authority and automation bias; (6) individual preferences and values; (7) patient autonomy and doctor–patient relationships. The case-based discussion reveals that the role and relevance of explainability in AI-based CDSSs vary considerably depending on the tool and use context. While it is plausible to assume that explainability in medical AI has positive implications, empirical data on explainability and its implications are scarce. Use-case-based studies are needed to investigate not only the technical aspects of explainability but also the perspectives of clinicians and patients on its relevance and implications.
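As a toy illustration of the "post hoc explanations" category the article discusses, the sketch below attaches a model-agnostic explanation (permutation importance) to an otherwise opaque classifier. The dataset and model are placeholders chosen only so the snippet runs end to end; they are not drawn from the paper's use cases.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An opaque model: accurate, but its internals are not human-readable.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Post hoc explanation: ask how much held-out performance drops when each
# feature is shuffled, without ever opening the model itself.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]:30s} importance={result.importances_mean[i]:.3f}")

Permutation importance is post hoc in the strict sense: it treats the model as a black box and explains it only through its input-output behavior, which is exactly the kind of explanation whose clinical adequacy the article interrogates.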
