Explainability in AI and Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 November 2024

Special Issue Editors


Guest Editor
Department of Computer Engineering and Informatics, University of Patras, 26504 Patras, Greece
Interests: artificial intelligence; knowledge representation; intelligent systems; intelligent e-learning; sentiment analysis

Guest Editor
Department of Computer Engineering & Informatics, University of Patras, 26504 Rio, Greece
Interests: artificial intelligence; learning technologies; machine learning; human–computer interaction; social media; affective computing; sentiment analysis

Special Issue Information

Dear Colleagues,

Explainable Artificial Intelligence (XAI) concerns, in general, the problem of how AI systems communicate explanations of their decisions to human users. Explanations have been of natural interest for systems and models of "traditional" AI (e.g., knowledge representation and reasoning systems, planning systems), whose "internal" decision-making process is largely transparent (white-box models); such models, although mostly interpretable, are not thereby explainable. More recently, owing to the development of many successful models, explainability has become a particular concern of the machine learning (ML) community. Although some ML models are interpretable, most act as black boxes, and in many applications (e.g., medicine, healthcare, education, automated driving) practitioners want to understand a model's decision making so that they can trust it when it is used in practice.

XAI has thus become an active subfield of machine learning that aims to increase the transparency of ML models. Apart from increasing trust and confidence, explainability can also provide further insights into the model itself and the problem it addresses.

Deep neural networks (DNNs) are the ML models behind many recent major advances, yet a clear understanding of their internal decision making is still lacking. Interpreting the internal mechanisms of DNNs has therefore become a topic of great interest. Symbolic methods could be used for network interpretation, by making the inference patterns inside DNNs explicit and explaining the decisions they make. Alternatively, DNNs could be re-designed in an interpretable or explainable way.

Natural language (NL) techniques, such as NL Generation (NLG) and NL Processing (NLP), can help in providing comprehensible explanations of automated decisions to the human users of AI systems.

Topics of interest include, but are not limited to, the following:

  • Applications of XAI systems;
  • Evaluation of XAI approaches;
  • Explainable Agents;
  • Explaining Black-box Models;
  • Explaining Logical Formulas;
  • Explainable Machine Learning;
  • Explainable Planning;
  • Interpretable Machine Learning;
  • Metrics for Explainability Evaluation;
  • Models for Explainable Recommendations;
  • Natural Language Generation for Explainable AI;
  • Self-explanatory Decision-Support Systems;
  • Verbalizing Knowledge Bases.

Prof. Dr. Ioannis Hatzilygeroudis
Prof. Dr. Vasile Palade
Dr. Isidoros Perikos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

17 pages, 836 KiB  
Article
Explainable Artificial Intelligence Approach for Diagnosing Faults in an Induction Furnace
by Sajad Moosavi, Roozbeh Razavi-Far, Vasile Palade and Mehrdad Saif
Electronics 2024, 13(9), 1721; https://doi.org/10.3390/electronics13091721 - 29 Apr 2024
Abstract
For over a century, induction furnaces have been used in the core of foundries for metal melting and heating. They provide high melting/heating rates with optimal efficiency. The occurrence of faults not only imposes safety risks but also reduces productivity due to unscheduled shutdowns. The problem of diagnosing faults in induction furnaces has not yet been studied, and this work is the first to propose a data-driven framework for diagnosing faults in this application. This paper presents a deep neural network framework for diagnosing electrical faults by measuring real-time electrical parameters at the supply side. Experimental and sensory measurements are collected from multiple energy analyzer devices installed in the foundry. Next, a semi-supervised learning approach, known as the local outlier factor, has been used to discriminate normal and faulty samples from each other and label the data samples. Then, a deep neural network is trained with the collected labeled samples. The performance of the developed model is compared with several state-of-the-art techniques in terms of various performance metrics. The results demonstrate the superior performance of the selected deep neural network model over other classifiers, with an average F-measure of 0.9187. Due to the black box nature of the constructed neural network, the model predictions are interpreted by Shapley additive explanations and local interpretable model-agnostic explanations. The interpretability analysis reveals that classified faults are closely linked to variations in odd voltage/current harmonics of order 3, 11, 13, and 17, highlighting the critical impact of these parameters on the model’s prediction.
(This article belongs to the Special Issue Explainability in AI and Machine Learning)
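As a rough illustration of the pipeline this abstract describes (outlier-based labelling, neural-network classification, model-agnostic explanation), the sketch below uses scikit-learn's LocalOutlierFactor, a small MLPClassifier, and SHAP's KernelExplainer on synthetic data. It is not the authors' implementation; the feature matrix, network size, and hyperparameters are placeholders chosen only to make the example run.

```python
# Minimal sketch (assumed, not the paper's code): LOF labelling -> NN training -> SHAP explanation.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import shap  # pip install shap

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))  # placeholder for supply-side electrical features (e.g., harmonics)

# Step 1: flag outlying samples with the local outlier factor and use the flags as fault labels.
lof = LocalOutlierFactor(n_neighbors=35, contamination=0.05)
y = (lof.fit_predict(X) == -1).astype(int)  # 1 = faulty (outlier), 0 = normal

# Step 2: train a neural-network classifier on the LOF-labelled samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Step 3: model-agnostic attribution of individual predictions with SHAP.
explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X_train, 100))
shap_values = explainer.shap_values(X_test[:5])  # per-feature contributions for five test samples
```

A LIME explainer could be substituted for (or used alongside) SHAP in step 3; the abstract reports using both.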

30 pages, 4185 KiB  
Article
Intelligent Decision Support for Energy Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics
by Dimitrios P. Panagoulias, Elissaios Sarmas, Vangelis Marinakis, Maria Virvou, George A. Tsihrintzis and Haris Doukas
Electronics 2023, 12(21), 4430; https://doi.org/10.3390/electronics12214430 - 27 Oct 2023
Cited by 5
Abstract
This paper presents a novel development methodology for artificial intelligence (AI) analytics in energy management that focuses on tailored explainability to overcome the “black box” issue associated with AI analytics. Our approach addresses the fact that any given analytic service is to be used by different stakeholders, with different backgrounds, preferences, abilities, skills, and goals. Our methodology is aligned with the explainable artificial intelligence (XAI) paradigm and aims to enhance the interpretability of AI-empowered decision support systems (DSSs). Specifically, a clustering-based approach is adopted to customize the depth of explainability based on the specific needs of different user groups. This approach improves the accuracy and effectiveness of energy management analytics while promoting transparency and trust in the decision-making process. The methodology is structured around an iterative development lifecycle for an intelligent decision support system and includes several steps, such as stakeholder identification, an empirical study on usability and explainability, user clustering analysis, and the implementation of an XAI framework. The XAI framework comprises XAI clusters and local and global XAI, which facilitate higher adoption rates of the AI system and ensure responsible and safe deployment. The methodology is tested on a stacked neural network for an analytics service, which estimates energy savings from renovations, and aims to increase adoption rates and benefit the circular economy.
(This article belongs to the Special Issue Explainability in AI and Machine Learning)
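The clustering step described above can be pictured with a short sketch: group stakeholders by their questionnaire responses and map each cluster to a depth of explanation. This is an assumed illustration, not the paper's implementation; the survey features, the number of clusters, and the depth labels are hypothetical.

```python
# Minimal sketch (assumed): cluster users by explainability needs, then tailor explanation depth.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Rows = users; columns = hypothetical 1-5 survey scores:
# [technical background, preference for detail, available time, domain expertise]
responses = rng.integers(1, 6, size=(120, 4)).astype(float)

X = StandardScaler().fit_transform(responses)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)

# Map each cluster to a tailored explanation depth for the decision support system's front end.
depth_by_cluster = {
    0: "global feature-importance summary only",
    1: "local attributions for each individual recommendation",
    2: "full local and global explanations",
}
for user_id, cluster in enumerate(kmeans.labels_[:5]):
    print(f"user {user_id}: cluster {cluster} -> {depth_by_cluster[cluster]}")
```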
