Explainable Artificial Intelligence: Concepts, Techniques, Analytics and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 September 2025 | Viewed by 2949

Special Issue Editors


Dr. Mohit Mittal
Guest Editor
Knowtion GmbH, Amalienbadstraße 41, Bau, 76227 Karlsruhe, Germany
Interests: computer vision; machine learning; deep learning; wireless sensor networks; IoT

Prof. Dr. Valentina E. Balas
Guest Editor
Department of Automatics and Applied Software, Faculty of Engineering, Aurel Vlaicu University of Arad, RO-310025 Arad, Romania
Interests: neuro-fuzzy technologies; fuzzy logic approaches; adaptive fuzzy

Special Issue Information

Dear Colleagues,

Explainable Artificial Intelligence (XAI) has emerged as a crucial field addressing the need for transparency and understanding in AI systems to foster trust and accountability. XAI aims to make AI decision-making processes comprehensible to humans by developing methodologies that clarify how AI systems arrive at their decisions, encompassing both final outputs and intermediate processes. Techniques to enhance explainability include creating inherently interpretable models like decision trees, applying post hoc methods such as LIME and SHAP to existing models, and using visualization tools like feature importance plots and heatmaps. Analytical frameworks in XAI evaluate the accuracy, comprehensibility, and actionability of explanations to ensure they are useful to end-users. XAI applications span healthcare, finance, autonomous systems, legal contexts, agriculture, and transportation, where transparency enhances trust, safety, regulatory compliance, and operational efficiency. This Special Issue explores the latest advancements in XAI, fostering collaboration and innovation in developing more explainable and reliable AI systems.
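To make these techniques concrete, the short sketch below trains an inherently interpretable decision tree and then applies a post hoc, model-agnostic explanation via permutation feature importance. It is a minimal illustration in Python with scikit-learn; the Iris dataset and all parameter choices are assumptions standing in for any tabular task.

    # Minimal sketch: an inherently interpretable model plus a post hoc,
    # model-agnostic explanation (Iris is a stand-in for any tabular task).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.inspection import permutation_importance

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Inherently interpretable model: the fitted tree can be printed as rules.
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(export_text(model, feature_names=load_iris().feature_names))

    # Post hoc explanation: permutation importance scores each feature by the
    # drop in test accuracy when its values are randomly shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in zip(load_iris().feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")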

Topics of Interest:

  1. Bio-inspired methods, deep learning, convolutional neural networks, hybrid architectures, etc.
  2. Time series analysis, fractional-order controllers, gradient field methods, surface reconstruction, and other mathematical models for intelligent feature detection, extraction, and recognition.
  3. Embedded intelligent computer vision algorithms.
  4. Activity recognition models for humans, nature, technology, and various other objects.
  5. Hyper-parameter learning and tuning, automatic calibration, and hybrid and surrogate learning for computational intelligence in vision systems.
  6. Intelligent video and image acquisition techniques.

Dr. Mohit Mittal
Prof. Dr. Valentina E. Balas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • computer vision algorithms
  • embedded AI
  • AI approaches

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

24 pages, 1761 KiB  
Article
Info-CELS: Informative Saliency Map-Guided Counterfactual Explanation for Time Series Classification
by Peiyu Li, Omar Bahri, Pouya Hosseinzadeh, Soukaïna Filali Boubrahimi and Shah Muhammad Hamdi
Electronics 2025, 14(7), 1311; https://doi.org/10.3390/electronics14071311 - 26 Mar 2025
Viewed by 280
Abstract
As the demand for interpretable machine learning approaches continues to grow, there is an increasing necessity for human involvement in providing informative explanations for model decisions. This is necessary for building trust and transparency in AI-based systems, leading to the emergence of the Explainable Artificial Intelligence (XAI) field. Recently, a novel counterfactual explanation model, CELS, has been introduced. CELS learns a saliency map for the instance of interest and generates a counterfactual explanation guided by the learned saliency map. While CELS represents the first attempt to exploit learned saliency maps both to provide intuitive explanations for the decisions of a time series classifier and to explore post hoc counterfactual explanations, it sacrifices validity in order to ensure high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS and addresses this limitation by removing mask normalization, providing more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations.
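The general mechanism behind saliency-guided counterfactuals can be sketched as follows. This is a toy PyTorch illustration of the idea only, not the authors' CELS/Info-CELS implementation; the untrained stand-in classifier, the reference series, and all hyperparameters are assumptions.

    # Toy sketch of a saliency-guided counterfactual for a time series
    # (illustrative only; not the authors' CELS/Info-CELS implementation).
    import torch

    torch.manual_seed(0)
    clf = torch.nn.Sequential(           # stand-in classifier (untrained)
        torch.nn.Conv1d(1, 8, 5, padding=2), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2))

    x = torch.randn(1, 1, 96)            # instance to explain
    target = torch.tensor([1])           # desired (counterfactual) class
    nun = torch.randn_like(x)            # reference series from the target class
    # Learnable saliency mask, initialized so x_cf starts close to x.
    mask = torch.full((1, 1, 96), -3.0, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=0.05)

    for _ in range(200):
        m = torch.sigmoid(mask)          # keep mask values in (0, 1)
        x_cf = (1 - m) * x + m * nun     # perturb only where the mask is active
        loss = torch.nn.functional.cross_entropy(clf(x_cf), target) \
               + 0.1 * m.mean()          # sparsity term keeps changes local
        opt.zero_grad()
        loss.backward()
        opt.step()
    # After optimization, sigmoid(mask) highlights where the series was changed.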

30 pages, 11982 KiB  
Article
LIMETREE: Consistent and Faithful Surrogate Explanations of Multiple Classes
by Kacper Sokol and Peter Flach
Electronics 2025, 14(5), 929; https://doi.org/10.3390/electronics14050929 - 26 Feb 2025
Viewed by 375
Abstract
Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, reasoning over them to obtain a comprehensive view may be difficult since they can present competing or contradictory evidence. To address this challenge, we introduce the novel paradigm of multi-class explanations. We outline the theory behind such techniques and propose a local surrogate model based on multi-output regression trees—called LIMETREE—that offers faithful and consistent explanations of multiple classes for individual predictions while being post-hoc, model-agnostic and data-universal. On top of strong fidelity guarantees, our implementation delivers a range of diverse explanation types, including counterfactual statements favored in the literature. We evaluate our algorithm with respect to explainability desiderata, through quantitative experiments and via a pilot user study, on image and tabular data classification tasks, comparing it with LIME, which is a state-of-the-art surrogate explainer. Our contributions demonstrate the benefits of multi-class explanations and the wide-ranging advantages of our method across a diverse set of scenarios.
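The core idea of a multi-class local surrogate can be approximated in a few lines: perturb the instance, query the black box for the probabilities of all classes at once, and fit one multi-output regression tree to them. The sketch below is an illustrative approximation, not the released LIMETREE code; the dataset, the random-forest black box, and the neighbourhood sampling are assumptions.

    # Sketch of a multi-class local surrogate (not the released LIMETREE code).
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeRegressor

    X, y = load_iris(return_X_y=True)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    instance = X[0]
    rng = np.random.default_rng(0)
    # Sample perturbations in a local neighbourhood of the instance.
    neighbourhood = instance + rng.normal(scale=0.3, size=(500, X.shape[1]))
    # Query the black box for the probabilities of *all* classes at once.
    probs = black_box.predict_proba(neighbourhood)

    # One multi-output regression tree approximates every class simultaneously,
    # so the explanations for different classes cannot contradict each other.
    surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
    surrogate.fit(neighbourhood, probs)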

26 pages, 13220 KiB  
Article
YOLOv8-Based XR Smart Glasses Mobility Assistive System for Aiding Outdoor Walking of Visually Impaired Individuals in South Korea
by Incheol Jeong, Kapyol Kim, Jungil Jung and Jinsoo Cho
Electronics 2025, 14(3), 425; https://doi.org/10.3390/electronics14030425 - 22 Jan 2025
Viewed by 1577
Abstract
This study proposes an eXtended Reality (XR) glasses-based walking assistance system to support independent and safe outdoor walking for visually impaired people. The system leverages the YOLOv8n deep learning model to recognize walkable areas, public transport facilities, and obstacles in real time and provide appropriate guidance to the user. The core components of the system are Xreal Light Smart Glasses and an Android-based smartphone, which are operated through a mobile application developed using the Unity game engine. The system divides the user’s field of vision into nine zones, assesses the level of danger in each zone, and guides the user along a safe walking path. The YOLOv8n model was trained to recognize sidewalks, pedestrian crossings, bus stops, subway exits, and various obstacles; running on a smartphone connected to the XR glasses, it demonstrated an average processing time of 583 ms and an average memory usage of 80 MB, making it suitable for real-time use. The experiments were conducted on a 3.3 km route around Bokjeong Station in South Korea and confirmed that the system works effectively in a variety of walking environments, but they also revealed the need for improved performance in low-light environments and for further testing with visually impaired people. By proposing an innovative walking assistance system that combines XR technology and artificial intelligence, this study is expected to contribute to improving the independent mobility of visually impaired people. Future research will further validate the effectiveness of the system by integrating it with real-time public transport information and conducting extensive experiments with users with varying degrees of visual impairment.
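The zone-based guidance logic can be illustrated with a short sketch: each detection's bounding-box centre is mapped to one cell of a 3x3 grid, and per-zone danger scores are accumulated. The class weights, frame size, and example detections below are hypothetical assumptions, not values from the paper.

    # Sketch of mapping detections to a 3x3 grid of zones (illustrative;
    # danger weights, frame size, and detections are hypothetical).
    DANGER = {"obstacle": 3, "crosswalk": 1, "sidewalk": 0}

    def zone_of(cx, cy, width, height):
        """Return the 3x3 zone index (0..8) containing a bounding-box centre."""
        col = min(int(3 * cx / width), 2)
        row = min(int(3 * cy / height), 2)
        return 3 * row + col

    def zone_danger(detections, width=1280, height=720):
        scores = [0] * 9
        for label, (x1, y1, x2, y2) in detections:
            z = zone_of((x1 + x2) / 2, (y1 + y2) / 2, width, height)
            scores[z] += DANGER.get(label, 2)  # unknown classes get a default
        return scores

    # Example: an obstacle ahead-centre, a sidewalk in the lower-left zone.
    print(zone_danger([("obstacle", (600, 300, 700, 420)),
                       ("sidewalk", (100, 600, 400, 710))]))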
