Explainable Artificial Intelligence: Concepts, Techniques, Analytics and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 September 2025 | Viewed by 4102

Special Issue Editors


Dr. Mohit Mittal
Guest Editor
Knowtion GmbH, Amalienbadstraße 41, Bau, 76227 Karlsruhe, Germany
Interests: computer vision; machine learning; deep learning; wireless sensor networks; IoT

Prof. Dr. Valentina E. Balas
Guest Editor
Department of Automatics and Applied Software, Faculty of Engineering, Aurel Vlaicu University of Arad, Ro-310025 Arad, Romania
Interests: neuro-fuzzy technologies; fuzzy logic approaches; adaptive fuzzy

Special Issue Information

Dear Colleagues,

Explainable Artificial Intelligence (XAI) has emerged as a crucial field addressing the need for transparency and understanding in AI systems to foster trust and accountability. XAI aims to make AI decision-making processes comprehensible to humans by developing methodologies that clarify how AI systems arrive at their decisions, encompassing both final outputs and intermediate processes. Techniques to enhance explainability include creating inherently interpretable models like decision trees, applying post hoc methods such as LIME and SHAP to existing models, and using visualization tools like feature importance plots and heatmaps. Analytical frameworks in XAI evaluate the accuracy, comprehensibility, and actionability of explanations to ensure they are useful to end-users. XAI applications span healthcare, finance, autonomous systems, legal contexts, agriculture, and transportation, where transparency enhances trust, safety, regulatory compliance, and operational efficiency. This Special Issue explores the latest advancements in XAI, fostering collaboration and innovation in developing more explainable and reliable AI systems.
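To make the distinction between interpretable-by-design models and post hoc explanations concrete, the following minimal sketch (not tied to any paper in this Issue; the dataset and model choices are illustrative) trains a tree ensemble and derives a SHAP-based summary, the quantity behind a typical feature-importance plot:

import numpy as np
import shap                                              # post hoc explainer (assumed installed)
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = load_diabetes()
X_train, X_test, y_train, _ = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post hoc explanation: SHAP attributes each prediction to individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:20])         # array of shape (n_samples, n_features)

# Global summary: mean absolute attribution per feature, the quantity behind
# a typical feature-importance plot.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>6}: {score:.2f}")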

Topics of Interest:

  1. Bio-inspired methods, deep learning, convolutional neural networks, hybrid architectures, etc.
  2. Time series analysis, fractional-order controllers, gradient field methods, surface reconstruction, and other mathematical models for intelligent feature detection, extraction, and recognition.
  3. Embedded intelligent computer vision algorithms.
  4. Activity recognition models for humans, nature, technology, and various other objects.
  5. Hyper-parameter learning and tuning, automatic calibration, and hybrid and surrogate learning for computational intelligence in vision systems.
  6. Intelligent video and image acquisition techniques.

Dr. Mohit Mittal
Prof. Dr. Valentina E. Balas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • computer vision algorithms
  • embedded AI
  • AI approaches

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

22 pages, 4970 KiB  
Article
AI-Based Impact Location in Structural Health Monitoring for Aerospace Application Evaluation Using Explainable Artificial Intelligence Techniques
by Andrés Pedraza, Daniel del-Río-Velilla and Antonio Fernández-López
Electronics 2025, 14(10), 1975; https://doi.org/10.3390/electronics14101975 - 12 May 2025
Viewed by 211
Abstract
Due to the nature of composites, the ability to accurately locate low-energy impacts on structures is crucial for Structural Health Monitoring (SHM) in the aerospace sector. For this purpose, several techniques have been developed in the past, and, among them, Artificial Intelligence (AI) has demonstrated promising results with high performance. The non-linear behavior of AI-based solutions has made them able to withstand scenarios where complex structures and different impact configurations have been introduced, making accurate location predictions. However, the black-box nature of AI poses a challenge in the aerospace field, where reliability, trustworthiness, and validation capability are paramount. To overcome this problem, Explainable Artificial Intelligence (XAI) techniques emerge as a solution, enhancing model transparency, trust, and validation. This research presents a case study: a previously trained Impact-Locator-AI model initially demonstrates promising location accuracy, but its behavior in real-life scenarios is unknown, and its reliability must be tested before it is embedded in an aerospace structure as an SHM system. By applying XAI methodologies, the Impact-Locator-AI model can be critically evaluated to assess its reliability and potential suitability for aerospace applications, while also laying the groundwork for future research at the intersection of XAI and impact location in SHM.
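As a rough illustration of the kind of XAI-style sanity check described above (the authors' Impact-Locator-AI model, dataset, and sensor layout are not public here, so everything below is a synthetic stand-in), one can train a locator on simulated sensor amplitudes and then ask which sensors actually drive its predictions:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sensors, n_impacts = 8, 2000
sensor_xy = rng.uniform(0, 1, size=(n_sensors, 2))        # hypothetical sensor layout
impact_xy = rng.uniform(0, 1, size=(n_impacts, 2))        # hypothetical impact points
# One feature per sensor: signal amplitude decaying with distance to the impact (+ noise).
dist = np.linalg.norm(impact_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
amplitude = 1.0 / (0.1 + dist) + rng.normal(0, 0.05, size=dist.shape)

# Predict one impact coordinate from the sensor amplitudes.
X_train, X_test, y_train, y_test = train_test_split(amplitude, impact_xy[:, 0], random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: if a sensor far from most impacts ranks as the dominant
# feature, the model may be exploiting an artefact rather than physics.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(-result.importances_mean):
    print(f"sensor {i} at {np.round(sensor_xy[i], 2)}: importance {result.importances_mean[i]:.3f}")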

43 pages, 10194 KiB  
Article
Fuzzy Rules for Explaining Deep Neural Network Decisions (FuzRED)
by Anna L. Buczak, Benjamin D. Baugher and Katie Zaback
Electronics 2025, 14(10), 1965; https://doi.org/10.3390/electronics14101965 - 12 May 2025
Viewed by 188
Abstract
This paper introduces a novel approach to explainable artificial intelligence (XAI) that enhances interpretability by combining local insights from Shapley additive explanations (SHAP)—a widely adopted XAI tool—with global explanations expressed as fuzzy association rules. By employing fuzzy association rules, our method enables AI systems to generate explanations that closely resemble human reasoning, delivering intuitive and comprehensible insights into system behavior. We present the FuzRED methodology and evaluate its performance on models trained across three diverse datasets: two classification tasks (spam identification and phishing link detection) and one reinforcement learning task involving robot navigation. Compared to the Anchors method, FuzRED offers at least an order of magnitude faster execution time (minutes vs. hours) while producing easily interpretable rules that enhance human understanding of AI decision making.
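The sketch below conveys only the general flavour of combining per-instance SHAP values with fuzzy linguistic terms; it is not the authors' FuzRED algorithm (no rule mining or reinforcement learning task is included), and the dataset, membership functions, and rule format are assumptions:

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def fuzzy_term(value, lo, hi):
    """Triangular memberships for LOW / MEDIUM / HIGH; return the strongest term."""
    mid = (lo + hi) / 2.0
    low = max(0.0, (mid - value) / (mid - lo)) if value <= mid else 0.0
    high = max(0.0, (value - mid) / (hi - mid)) if value >= mid else 0.0
    medium = 1.0 - max(low, high)
    return max([("LOW", low), ("MEDIUM", medium), ("HIGH", high)], key=lambda t: t[1])[0]

explainer = shap.TreeExplainer(model)
i = 0                                                    # explain the first instance
sv = np.asarray(explainer.shap_values(X[i:i + 1])).reshape(-1)   # one attribution per feature
top = np.argsort(-np.abs(sv))[:3]                        # three most influential features

# Describe the influential features with fuzzy terms to form a rule-like explanation.
antecedent = " AND ".join(
    f"({data.feature_names[j]} is {fuzzy_term(X[i, j], X[:, j].min(), X[:, j].max())})"
    for j in top
)
print(f"IF {antecedent} THEN class = {data.target_names[model.predict(X[i:i + 1])[0]]}")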

27 pages, 6796 KiB  
Article
Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction
by Aneseh Alvanpour, Cagla Acun, Kyle Spurlock, Christopher K. Robinson, Sumit K. Das, Dan O. Popa and Olfa Nasraoui
Electronics 2025, 14(9), 1868; https://doi.org/10.3390/electronics14091868 - 3 May 2025
Viewed by 214
Abstract
In human–robot collaborative environments, predicting and explaining robotic grasp failures is crucial for effective operation. While machine learning models can predict failures accurately, they often lack transparency, limiting their utility in critical applications. This paper presents a comparative analysis of three post hoc explanation methods—Tree-SHAP, LIME, and TreeInterpreter—for explaining grasp failure predictions from white-box and black-box models. Using a simulated robotic grasping dataset, we evaluate these methods based on their agreement in identifying important features, similarity in feature importance rankings, dependency on model type, and computational efficiency. Our findings reveal that Tree-SHAP and TreeInterpreter demonstrate stronger consistency with each other than with LIME, particularly for correctly predicted failures. The choice of ML model significantly affects explanation consistency, with simpler models yielding more agreement across methods. TreeInterpreter offers a substantial computational advantage, operating approximately 24 times faster than Tree-SHAP and over 2000 times faster than LIME for complex models. All methods consistently identify effort in joint 1 across fingers 1 and 3 as critical factors in grasp failures, aligning with mechanical design principles. These insights contribute to developing more transparent and reliable robotic grasping systems, enabling better human–robot collaboration through improved failure understanding and prevention.
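For readers who want to reproduce this kind of comparison, the minimal sketch below puts Tree-SHAP, LIME, and TreeInterpreter side by side on a single prediction; the simulated grasp dataset from the paper is not reproduced here, so a public regression dataset and a random forest stand in for the grasp-failure model:

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from treeinterpreter import treeinterpreter as ti
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = load_diabetes()
X_train, X_test, y_train, _ = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
x = X_test[:1]                                           # the single instance to explain

# Tree-SHAP: exact Shapley values for tree ensembles.
shap_attr = shap.TreeExplainer(model).shap_values(x)[0]

# TreeInterpreter: decomposes the prediction into bias + per-feature contributions.
_, _, ti_contrib = ti.predict(model, x)
ti_attr = ti_contrib[0]

# LIME: local linear surrogate fitted on perturbed samples.
lime_exp = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), mode="regression"
).explain_instance(x[0], model.predict, num_features=len(data.feature_names))

# Compare rankings: do the methods agree on the most influential features?
print("Tree-SHAP ranking:       ", np.argsort(-np.abs(shap_attr)))
print("TreeInterpreter ranking: ", np.argsort(-np.abs(ti_attr)))
print("LIME (feature, weight):  ", lime_exp.as_list()[:3])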

24 pages, 1761 KiB  
Article
Info-CELS: Informative Saliency Map-Guided Counterfactual Explanation for Time Series Classification
by Peiyu Li, Omar Bahri, Pouya Hosseinzadeh, Soukaïna Filali Boubrahimi and Shah Muhammad Hamdi
Electronics 2025, 14(7), 1311; https://doi.org/10.3390/electronics14071311 - 26 Mar 2025
Viewed by 354
Abstract
As the demand for interpretable machine learning approaches continues to grow, there is an increasing necessity for human involvement in providing informative explanations for model decisions. This is necessary for building trust and transparency in AI-based systems, leading to the emergence of the Explainable Artificial Intelligence (XAI) field. Recently, a novel counterfactual explanation model, CELS, has been introduced. CELS learns a saliency map for the interests of an instance and generates a counterfactual explanation guided by the learned saliency map. While CELS represents the first attempt to exploit learned saliency maps not only to provide intuitive explanations for the reason behind the decision made by the time series classifier but also to explore post hoc counterfactual explanations, it exhibits limitations in validity, which is sacrificed for the sake of ensuring high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS. While the original model achieved promising results in terms of sparsity and proximity, it faced limitations in terms of validity. Our proposed method addresses this limitation by removing mask normalization to provide more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations.
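The following sketch is a generic illustration of the saliency-guided counterfactual idea, not the CELS or Info-CELS implementation: the "saliency" is a simple finite-difference sensitivity, the prototype comes from the nearest training series of the target class, and the edit budget plays the role of the sparsity/validity trade-off discussed above. The classifier and data are toy assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def sensitivity_saliency(predict_proba, x, target_class, eps=1e-2):
    """Finite-difference sensitivity of the target-class probability per timestep."""
    base = predict_proba(x[None, :])[0, target_class]
    sal = np.zeros_like(x)
    for t in range(len(x)):
        xp = x.copy()
        xp[t] += eps
        sal[t] = abs(predict_proba(xp[None, :])[0, target_class] - base) / eps
    return sal

def counterfactual(x, target_proto, saliency, budget=0.2):
    """Overwrite the top `budget` fraction of most salient timesteps with the
    corresponding values from a target-class prototype (sparse, localized edit)."""
    k = max(1, int(budget * len(x)))
    cf = x.copy()
    idx = np.argsort(-saliency)[:k]
    cf[idx] = target_proto[idx]
    return cf

# Toy two-class time series problem standing in for a real classifier.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
class0 = np.sin(t) + rng.normal(0, 0.2, (100, 50))
class1 = np.sin(2 * t) + rng.normal(0, 0.2, (100, 50))
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)

x = class0[0]                                                    # instance to explain
proto = class1[np.argmin(np.linalg.norm(class1 - x, axis=1))]    # nearest class-1 series
sal = sensitivity_saliency(clf.predict_proba, x, target_class=1)
cf = counterfactual(x, proto, sal, budget=0.3)
# A larger budget raises validity (the class is more likely to flip) at the cost of
# sparsity and proximity, which is the trade-off the abstract refers to.
print("original:", clf.predict(x[None, :])[0], "counterfactual:", clf.predict(cf[None, :])[0])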

30 pages, 11982 KiB  
Article
LIMETREE: Consistent and Faithful Surrogate Explanations of Multiple Classes
by Kacper Sokol and Peter Flach
Electronics 2025, 14(5), 929; https://doi.org/10.3390/electronics14050929 - 26 Feb 2025
Cited by 1 | Viewed by 441
Abstract
Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, reasoning over them to obtain a comprehensive view may be difficult since they can present competing or contradictory evidence. To address this challenge, we introduce the novel paradigm of multi-class explanations. We outline the theory behind such techniques and propose a local surrogate model based on multi-output regression trees—called LIMETREE—that offers faithful and consistent explanations of multiple classes for individual predictions while being post hoc, model-agnostic, and data-universal. On top of strong fidelity guarantees, our implementation delivers a range of diverse explanation types, including counterfactual statements favored in the literature. We evaluate our algorithm with respect to explainability desiderata, through quantitative experiments and via a pilot user study, on image and tabular data classification tasks, comparing it with LIME, which is a state-of-the-art surrogate explainer. Our contributions demonstrate the benefits of multi-class explanations and the wide-ranging advantages of our method across a diverse set of scenarios.
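A minimal sketch of the underlying multi-class surrogate idea (not the released LIMETREE code; the dataset, sampling scheme, and tree depth are assumptions): sample around one instance, query the black box for all class probabilities, and fit a single multi-output regression tree so the per-class explanations come from one mutually consistent local model.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

data = load_wine()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

x = data.data[0]                                         # instance to explain
rng = np.random.default_rng(0)
# Local neighbourhood: Gaussian perturbations scaled by each feature's spread.
samples = x + rng.normal(0, 0.5, size=(1000, x.size)) * data.data.std(axis=0)
targets = black_box.predict_proba(samples)               # (1000, n_classes)

# One shallow multi-output tree approximates all class probabilities at once,
# so explanations for different classes cannot contradict each other.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(samples, targets)
print("local fidelity (R^2):", round(surrogate.score(samples, targets), 3))
print(export_text(surrogate, feature_names=list(data.feature_names)))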

26 pages, 13220 KiB  
Article
YOLOv8-Based XR Smart Glasses Mobility Assistive System for Aiding Outdoor Walking of Visually Impaired Individuals in South Korea
by Incheol Jeong, Kapyol Kim, Jungil Jung and Jinsoo Cho
Electronics 2025, 14(3), 425; https://doi.org/10.3390/electronics14030425 - 22 Jan 2025
Viewed by 1902
Abstract
This study proposes an eXtended Reality (XR) glasses-based walking assistance system to support independent and safe outdoor walking for visually impaired people. The system leverages the YOLOv8n deep learning model to recognize walkable areas, public transport facilities, and obstacles in real time and provide appropriate guidance to the user. The core components of the system are Xreal Light Smart Glasses and an Android-based smartphone, which are operated through a mobile application developed using the Unity game engine. The system divides the user’s field of vision into nine zones, assesses the level of danger in each zone, and guides the user along a safe walking path. The YOLOv8n model was trained to recognize sidewalks, pedestrian crossings, bus stops, subway exits, and various obstacles on a smartphone connected to XR glasses and demonstrated an average processing time of 583 ms and an average memory usage of 80 MB, making it suitable for real-time use. The experiments were conducted on a 3.3 km route around Bokjeong Station in South Korea and confirmed that the system works effectively in a variety of walking environments, but recognized the need to improve performance in low-light environments and further testing with visually impaired people. By proposing an innovative walking assistance system that combines XR technology and artificial intelligence, this study is expected to contribute to improving the independent mobility of visually impaired people. Future research will further validate the effectiveness of the system by integrating it with real-time public transport information and conducting extensive experiments with users with varying degrees of visual impairment. Full article
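To make the nine-zone mapping concrete, here is a small sketch of that step alone (the full system runs in Unity on the XR glasses with a custom-trained model; the obstacle class list and danger rule below are assumptions for illustration), using the standard ultralytics YOLOv8 inference API:

import numpy as np
from ultralytics import YOLO                  # assumes the ultralytics package is installed

model = YOLO("yolov8n.pt")                    # a custom-trained model would be used in practice
OBSTACLE_CLASSES = {"person", "bicycle", "car", "motorcycle"}   # assumed obstacle set

def zone_of(cx, cy, width, height):
    """Map a box centre to one of nine zones, numbered 0..8 row by row."""
    col = min(2, int(3 * cx / width))
    row = min(2, int(3 * cy / height))
    return 3 * row + col

def danger_zones(frame_path):
    result = model(frame_path)[0]             # first (only) image in the batch
    h, w = result.orig_shape                  # frame height and width in pixels
    zones = set()
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        if cls_name in OBSTACLE_CLASSES:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            zones.add(zone_of((x1 + x2) / 2, (y1 + y2) / 2, w, h))
    return sorted(zones)

# Example: print(danger_zones("street.jpg"))  # e.g. [4, 5] -> obstacles centre and centre-right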
