Advances in Explainable Artificial Intelligence (XAI): 3rd Edition

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (30 September 2025) | Viewed by 7667

Special Issue Editor

School of Computer Science, Technological University Dublin, D08 X622 Dublin, Ireland
Interests: explainable artificial intelligence; defeasible argumentation; deep learning; human-centred design; mental workload modeling

Special Issue Information

Dear Colleagues,

Recently, artificial intelligence has seen a shift in focus towards the design and deployment of intelligent systems that are interpretable and explainable, with the rise of a new field: explainable artificial intelligence (XAI). This shift has been echoed both in the research literature and in the press, attracting scholars from around the world as well as a lay audience. Initially devoted to the design of post hoc methods for explainability, essentially wrapping machine- and deep-learning models with explanations, the field is now expanding its boundaries to ante hoc methods that produce self-interpretable models. In parallel, neuro-symbolic reasoning approaches have been employed in conjunction with machine learning to complement modeling accuracy and precision with self-explainability and justifiability. Scholars have also started shifting the focus towards the structure of explanations, since the ultimate users of interactive technologies are humans, linking artificial intelligence and computer science to psychology, human–computer interaction, philosophy, and sociology.

Explainable artificial intelligence is clearly gaining momentum, and this Special Issue calls for contributions exploring this fascinating new area of research. We seek articles devoted to the theoretical foundations of XAI, its historical perspectives, and the design of explanations and interactive human-centered intelligent systems with knowledge-representation principles and automated learning capabilities, addressed not only to experts but also to the lay audience.

Dr. Luca Longo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • neuro-symbolic reasoning for XAI
  • interpretable deep learning
  • argument-based models of explanations
  • graph neural networks for explainability
  • machine learning and knowledge graphs
  • human-centric explainable AI
  • interpretation of black-box models
  • human-understandable machine learning
  • counterfactual explanations for machine learning
  • natural language processing in XAI
  • quantitative/qualitative evaluation metrics for XAI
  • ante hoc and post hoc XAI methods
  • rule-based systems for XAI
  • fuzzy systems and explainability
  • human-centered learning and explanations
  • model-dependent and model-agnostic explainability
  • case-based explanations for AI systems
  • interactive machine learning and explanations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

27 pages, 3658 KB  
Article
SkinVisualNet: A Hybrid Deep Learning Approach Leveraging Explainable Models for Identifying Lyme Disease from Skin Rash Images
by Amir Sohel, Rittik Chandra Das Turjy, Sarbajit Paul Bappy, Md Assaduzzaman, Ahmed Al Marouf, Jon George Rokne and Reda Alhajj
Mach. Learn. Knowl. Extr. 2025, 7(4), 157; https://doi.org/10.3390/make7040157 - 1 Dec 2025
Viewed by 706
Abstract
Lyme disease, caused by the Borrelia burgdorferi bacterium and transmitted through black-legged (deer) tick bites, is becoming increasingly prevalent globally. According to data from the Lyme Disease Association, the number of cases has surged by more than 357% over the past 15 years. According to the Infectious Diseases Society of America, traditional diagnostic methods are often slow, potentially allowing bacterial proliferation and complicating early management. This study presents a novel hybrid deep learning framework for classifying Lyme disease rashes, using pre-trained models (ResNet50 V2, VGG19, DenseNet201) for initial classification. By combining the VGG19 and DenseNet201 architectures, we developed a hybrid model, SkinVisualNet, which achieved an accuracy of 98.83%, precision of 98.45%, recall of 99.09%, and an F1 score of 98.76%. To assess robustness and generalizability, 5-fold cross-validation (CV) was performed, with validation accuracies between 98.20% and 98.92% across folds. Incorporating image preprocessing techniques such as gamma correction, contrast stretching, and data augmentation led to a 10–13% improvement in model accuracy, significantly enhancing the model's ability to generalize across varied conditions. To improve interpretability, we applied explainable AI methods, including LIME, Grad-CAM, Grad-CAM++, Score-CAM, and SmoothGrad, to visualize the rash image regions most influential in classification. These techniques enhance both diagnostic transparency and model reliability, helping clinicians better understand the diagnostic decisions. The proposed framework represents a significant advance in automated Lyme disease detection, providing a robust and explainable AI-based diagnostic tool that can aid clinicians in improving patient outcomes.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 3rd Edition)
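A minimal sketch of the feature-fusion idea described in this abstract, assuming a Keras/TensorFlow setup: two ImageNet-pretrained backbones (VGG19 and DenseNet201) are used as frozen feature extractors and their pooled features are concatenated before a small classification head. The input size, head width, dropout rate, and optimizer below are illustrative assumptions and do not reproduce the authors' SkinVisualNet configuration.

```python
# Illustrative sketch (not the authors' exact architecture): fusing VGG19 and
# DenseNet201 features for rash classification, as the abstract describes.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, DenseNet201

def build_hybrid_classifier(input_shape=(224, 224, 3), num_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Two ImageNet-pretrained backbones used as frozen feature extractors.
    vgg = VGG19(include_top=False, weights="imagenet", pooling="avg")
    dense = DenseNet201(include_top=False, weights="imagenet", pooling="avg")
    vgg.trainable = False
    dense.trainable = False

    # Each backbone expects its own preprocessing.
    v = vgg(tf.keras.applications.vgg19.preprocess_input(inputs))
    d = dense(tf.keras.applications.densenet.preprocess_input(inputs))

    # Concatenate the pooled feature vectors and add a small classification head.
    x = layers.Concatenate()([v, d])
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = Model(inputs, outputs, name="hybrid_vgg19_densenet201")
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_hybrid_classifier()
model.summary()
```

Once such a model is trained, post hoc methods such as Grad-CAM can be applied to either backbone's final convolutional layer to highlight the rash regions driving a prediction.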

19 pages, 4291 KB  
Article
Comparative Analysis of Perturbation Techniques in LIME for Intrusion Detection Enhancement
by Mantas Bacevicius, Agne Paulauskaite-Taraseviciene, Gintare Zokaityte, Lukas Kersys and Agne Moleikaityte
Mach. Learn. Knowl. Extr. 2025, 7(1), 21; https://doi.org/10.3390/make7010021 - 21 Feb 2025
Cited by 7 | Viewed by 3683
Abstract
The growing sophistication of cyber threats necessitates robust and interpretable intrusion detection systems (IDS) to safeguard network security. While machine learning models such as Decision Tree (DT), Random Forest (RF), k-Nearest Neighbors (K-NN), and XGBoost demonstrate high effectiveness in detecting malicious activities, their interpretability decreases as their complexity and accuracy increase, posing challenges for critical cybersecurity applications. Local Interpretable Model-agnostic Explanations (LIME) is widely used to address this limitation; however, its reliance on a normal distribution for perturbations often fails to capture the non-linear and imbalanced characteristics of datasets like CIC-IDS-2018. To address these challenges, we propose a modified LIME perturbation strategy using Weibull, Gamma, Beta, and Pareto distributions to better capture the characteristics of network traffic data. Our methodology improves the stability of different ML models trained on CIC-IDS datasets, enabling more meaningful and reliable explanations of model predictions. The proposed modifications allow for an increase in explanation fidelity by up to 78% compared to the default Gaussian approach. Among all distributions tested, Pareto-based perturbations consistently yielded the highest explanation fidelity and stability, particularly for K-NN (R2 = 0.9971, S = 0.9907) and DT (R2 = 0.9267, S = 0.9797). This indicates that heavy-tailed distributions fit well with real-world network traffic patterns, reducing the variance in attribute importance explanations and making them more robust.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 3rd Edition)
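As a rough, self-contained sketch of the perturbation idea (not the paper's implementation, nor the lime package internals), the snippet below draws heavy-tailed Pareto perturbations around an instance, fits a proximity-weighted linear surrogate to a black-box classifier, and reports its weighted R2 as a crude fidelity proxy. The Pareto shape and scale, the kernel width, and the random-forest black box are all illustrative assumptions.

```python
# Minimal sketch: heavy-tailed (Pareto) perturbations in a LIME-style local
# surrogate, instead of the default Gaussian sampling.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(x, predict_proba, n_samples=500, shape=3.0, scale=0.1):
    # Heavy-tailed Pareto perturbations (hypothetical parameters), scaled by
    # each feature's standard deviation; signs are randomised.
    std = X.std(axis=0)
    noise = (rng.pareto(shape, size=(n_samples, x.size)) * scale
             * rng.choice([-1.0, 1.0], size=(n_samples, x.size)) * std)
    Z = x + noise

    # Proximity weights, as in LIME: closer samples count more.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * (0.75 * np.sqrt(x.size)) ** 2))

    # Local linear surrogate of the black-box probability for class 1.
    target = predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, target, sample_weight=weights)
    fidelity = surrogate.score(Z, target, sample_weight=weights)  # weighted R^2
    return surrogate.coef_, fidelity

coefs, r2 = explain_instance(X[0], black_box.predict_proba)
print("feature attributions:", np.round(coefs, 3), "| local fidelity R^2:", round(r2, 3))
```

Swapping the Pareto sampler for Weibull, Gamma, or Beta noise only changes the `noise` line, which is what makes this kind of distribution comparison straightforward to run.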

Review

32 pages, 1023 KB  
Review
A Four-Dimensional Analysis of Explainable AI in Energy Forecasting: A Domain-Specific Systematic Review
by Vahid Arabzadeh and Raphael Frank
Mach. Learn. Knowl. Extr. 2025, 7(4), 153; https://doi.org/10.3390/make7040153 - 25 Nov 2025
Viewed by 1552
Abstract
Despite the growing use of Explainable Artificial Intelligence (XAI) in energy time-series forecasting, a systematic evaluation of explanation quality remains limited. This systematic review analyzes 50 peer-reviewed studies (2020–2025) applying XAI to load, price, or renewable generation forecasting. Using a PRISMA-inspired protocol, we introduce a dual-axis taxonomy and a four-factor framework covering global transparency, local fidelity, user relevance, and operational viability to structure our qualitative synthesis. Our analysis reveals that XAI application is not uniform but follows three distinct, domain-specific paradigms: a user-centric approach in load forecasting, a risk management approach in price forecasting, and a physics-informed approach in generation forecasting. Post hoc methods, particularly SHAP, dominate the literature (62% of studies), while rigorous testing of explanation robustness and the reporting of computational overhead (23% of studies) remain critical gaps. We identify key research directions, including the need for standardized robustness testing and human-centered design, and provide actionable guidelines for practitioners.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 3rd Edition)
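To make the reported dominance of SHAP concrete, here is a hedged sketch of the kind of post hoc attribution typically applied in load forecasting: a tree-based regressor on simple lag features, explained with shap.TreeExplainer. The synthetic hourly series, lag choices, and model are illustrative assumptions, not drawn from any study in the review.

```python
# Hedged sketch: SHAP attributions for a tree-based load forecaster on
# synthetic data (daily cycle plus noise).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic hourly "load" with a daily cycle plus noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 90)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Lag features: previous hour, same hour yesterday, and hour of day.
df = pd.DataFrame({
    "lag_1h": load[23:-1],
    "lag_24h": load[:-24],
    "hour_of_day": hours[24:] % 24,
    "target": load[24:],
})
X, y = df.drop(columns="target"), df["target"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer gives per-feature contributions to each forecast.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:",
      dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(2))))
```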