Explainable AI: Methods, Applications, and Challenges

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 June 2025

Special Issue Editor


Dr. Mustafa Abdallah
Guest Editor
Computer and Information Technology Department, Purdue University in Indianapolis, Indianapolis, IN 46222, USA
Interests: explainable AI for network intrusion detection

Special Issue Information

Dear Colleagues,

The growing use of artificial intelligence (AI) techniques across various sectors (including network security, autonomous driving, IoT systems, and the medical domain, among others) opens new research directions in this area. However, most existing research has concentrated primarily on the classification accuracy of different AI algorithms without offering insight into their underlying behaviour and decision-making processes.

This limitation highlights the urgent need to better utilize the relatively new field of explainable AI (XAI) to clarify AI decisions across these domains. XAI is essential for building trust, transparency, and accountability in AI systems, particularly in applications where decisions have high-stakes implications. Applying XAI raises several challenges, including generating accurate global and local explanations of AI models, identifying the main features that drive each model's decisions, and evaluating different classes of XAI methods to establish trust in their applicability.
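To make the distinction between global and local explanations concrete, the short sketch below (illustrative only, and not part of this call) contrasts the two on a standard tabular classifier using only scikit-learn and NumPy; the dataset and model are placeholder choices. Permutation feature importance serves as a simple global explanation, and the per-feature contributions of a linear model serve as a simple local one.

    # A minimal sketch, assuming only scikit-learn and NumPy, that contrasts a
    # global explanation (permutation feature importance over a test set) with a
    # local one (per-feature contributions of a linear model for one prediction).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
    model.fit(X_tr, y_tr)

    # Global explanation: which features, on average, drive the model's decisions?
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:5]
    print("Globally most influential features:", [data.feature_names[i] for i in top])

    # Local explanation: coefficient x standardized value for one test sample,
    # i.e. how much each feature pushed this particular prediction up or down.
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    contrib = clf.coef_[0] * scaler.transform(X_te[:1])[0]
    top_local = np.argsort(np.abs(contrib))[::-1][:5]
    print("Locally most influential features for sample 0:",
          [(data.feature_names[i], round(float(contrib[i]), 2)) for i in top_local])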

This Special Issue welcomes original research articles and reviews. It focuses on emerging solutions for achieving efficient and reliable explainable AI in different application domains, along with the state-of-the-art efforts and open challenges in this research area. Potential topics of interest include, but are not limited to, the following:

  • Explainable AI methods for advanced network intrusion detection;
  • Explainable AI methods for Internet of Things (IoT) security;
  • Explainable AI methods in the medical domain;
  • Explainable AI for explaining black-box deep learning methods;
  • Usage of explainable AI for feature selection;
  • Efficiency analysis and optimization of explainable AI methods;
  • Evaluation frameworks for explainable AI solutions;
  • Comparing black-box and white-box AI models and related XAI solutions;
  • Adversarial attacks on explainable AI models;
  • Open challenges in applications of explainable AI;
  • Human trust in explainable AI solutions;
  • Usage of generative AI for enhancing explainable AI solutions.

We look forward to receiving your contributions.

Dr. Mustafa Abdallah
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • black-box AI
  • human trust in XAI
  • generative AI for XAI
  • XAI for feature selection
  • XAI in medical applications
  • XAI for network security

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

23 pages, 7010 KiB  
Article
The Explanation and Sensitivity of AI Algorithms Supplied with Synthetic Medical Data
by Dan Munteanu, Simona Moldovanu and Mihaela Miron
Electronics 2025, 14(7), 1270; https://doi.org/10.3390/electronics14071270 - 24 Mar 2025
Abstract
The increasing complexity and importance of medical data in improving patient care, advancing research, and optimizing healthcare systems led to the proposal of this study, which presents a novel methodology by evaluating the sensitivity of artificial intelligence (AI) algorithms when provided with real data, synthetic data, a mix of both, and synthetic features. Two medical datasets, the Pima Indians Diabetes Database (PIDD) and the Breast Cancer Wisconsin Dataset (BCWD), were used, employing the Gaussian Copula Synthesizer (GCS) and the Synthetic Minority Oversampling Technique (SMOTE) to generate synthetic data. We classified the new datasets using fourteen machine learning (ML) models incorporated into PyCaret AutoML (Automated Machine Learning) and two deep neural networks, evaluating performance using accuracy (ACC), F1-score, Area Under the Curve (AUC), Matthews Correlation Coefficient (MCC), and Kappa metrics. Local Interpretable Model-agnostic Explanations (LIME) provided the explanation and justification for classification results. The quality and content of the medical data are very important; thus, when the classification of original data is unsatisfactory, a good recommendation is to create synthetic data with the SMOTE technique, where an accuracy of 0.924 is obtained, and supply the AI algorithms with a combination of original and synthetic data.
(This article belongs to the Special Issue Explainable AI: Methods, Applications, and Challenges)
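
As a rough, hypothetical illustration of the kind of pipeline this abstract describes (mixing real data with SMOTE-generated synthetic samples, training a classifier, and explaining individual predictions with LIME), the sketch below uses scikit-learn, imbalanced-learn, and lime on a public dataset; it is not the authors' code, and the dataset and model choices are placeholders.

    # A minimal, hypothetical sketch (not the authors' code) of a SMOTE + classifier
    # + LIME pipeline, assuming scikit-learn, imbalanced-learn, and lime are installed.
    from imblearn.over_sampling import SMOTE
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

    # Mix real training data with SMOTE-generated synthetic minority-class samples.
    X_mix, y_mix = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_mix, y_mix)
    print("Test accuracy:", round(clf.score(X_te, y_te), 3))

    # LIME: a local, model-agnostic explanation for one test prediction.
    explainer = LimeTabularExplainer(X_mix,
                                     feature_names=list(data.feature_names),
                                     class_names=list(data.target_names),
                                     mode="classification")
    exp = explainer.explain_instance(X_te[0], clf.predict_proba, num_features=5)
    print(exp.as_list())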
