
1,600 Results Found

  • Review
  • Open Access
499 Citations
40,217 Views
19 Pages

Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

  • Jianlong Zhou,
  • Amir H. Gandomi,
  • Fang Chen and
  • Andreas Holzinger

The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences or...

  • Article
  • Open Access
2,726 Views
18 Pages

Evaluating Anomaly Explanations Using Ground Truth

  • Liat Antwarg Friedman,
  • Chen Galed,
  • Lior Rokach and
  • Bracha Shapira

15 November 2024

The widespread use of machine and deep learning algorithms for anomaly detection has created a critical need for robust explanations that can identify the features contributing to anomalies. However, effective evaluation methodologies for anomaly exp...

  • Article
  • Open Access
19 Citations
7,583 Views
41 Pages

Explainable AI Evaluation: A Top-Down Approach for Selecting Optimal Explanations for Black Box Models

  • SeyedehRoksana Mirzaei,
  • Hua Mao,
  • Raid Rafi Omar Al-Nima and
  • Wai Lok Woo

20 December 2023

Explainable Artificial Intelligence (XAI) evaluation has grown significantly due to its extensive adoption, and the catastrophic consequence of misinterpreting sensitive data, especially in the medical field. However, the multidisciplinary nature of...

  • Article
  • Open Access
936 Views
23 Pages

Assessing the quality of multimodal posts is a challenging task that involves using multimodal data to evaluate the quality of posts’ responses to discussion topics. Providing evaluations and explanations plays a crucial role in promoting stude...

  • Article
  • Open Access
712 Views
22 Pages

16 April 2025

Decision support systems are being increasingly applied in critical decision-making domains such as healthcare and criminal justice. Trust in these systems requires transparency and explainability. Among the forms of explanation, globally consistent...

  • Article
  • Open Access
486 Views
37 Pages

In this study, a structured and methodological evaluation approach for eXplainable Artificial Intelligence (XAI) methods in medical image classification is proposed and implemented using LIME and SHAP explanations for chest X-ray interpretations. The...

  • Article
  • Open Access
1,482 Views
16 Pages

An Explainable Data-Driven Optimization Method for Unmanned Autonomous System Performance Assessment

  • Hang Yi,
  • Haisong Zhang,
  • Hao Wang,
  • Wenming Wang,
  • Lixin Jia,
  • Lihang Feng and
  • Dong Wang

14 November 2024

Unmanned autonomous systems (UASs), including drones and robotics, are widely employed across various fields. Despite significant advances in AI-enhanced intelligent systems, there remains a notable deficiency in the interpretability and comprehensiv...

  • Review
  • Open Access
1 Citation
9,297 Views
36 Pages

3 September 2025

The widespread adoption of Artificial Intelligence (AI) in critical domains, such as healthcare, finance, law, and autonomous systems, has brought unprecedented societal benefits. Its black-box (sub-symbolic) nature allows AI to compute prediction wi...

  • Article
  • Open Access
1 Citation
2,514 Views
35 Pages

21 January 2025

In within-visual-range (WVR) air combat, basic fighter maneuvers (BFMs) are widely used. The air combat engagement database (ACED) is a dedicated database for researching WVR air combat. Utilizing the data in ACED, a Transformer-based BFM decision suppor...

  • Article
  • Open Access
11 Citations
7,183 Views
22 Pages

Enhancing Self-Explanation Learning through a Real-Time Feedback System: An Empirical Evaluation Study

  • Ryosuke Nakamoto,
  • Brendan Flanagan,
  • Yiling Dai,
  • Taisei Yamauchi,
  • Kyosuke Takami and
  • Hiroaki Ogata

2 November 2023

This research introduces the self-explanation-based automated feedback (SEAF) system, aimed at alleviating the teaching burden through real-time, automated feedback while aligning with SDG 4’s sustainability goals for quality education. The sys...

  • Communication
  • Open Access
3 Citations
3,090 Views
6 Pages

In Crohn’s disease (CD) and ulcerative colitis (UC), the major inflammatory bowel diseases (IBD) in human beings, the tissue-damaging inflammatory response is characterized by elevated levels of Suppressor of Mothers Against Decapentaplegic (Sm...

  • Article
  • Open Access
38 Citations
8,280 Views
22 Pages

A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence

  • Mi-Young Kim,
  • Shahin Atakishiyev,
  • Housam Khalifa Bashier Babiker,
  • Nawshad Farruque,
  • Randy Goebel,
  • Osmar R. Zaïane,
  • Mohammad-Hossein Motallebi,
  • Juliano Rabelo,
  • Talat Syed and
  • Peter Chun

The rapid growth of research in explainable artificial intelligence (XAI) follows on two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created hi...

  • Article
  • Open Access
8 Citations
3,440 Views
19 Pages

Evaluation of the Relation between Ictal EEG Features and XAI Explanations

  • Sergio E. Sánchez-Hernández,
  • Sulema Torres-Ramos,
  • Israel Román-Godínez and
  • Ricardo A. Salido-Ruiz

Epilepsy is a neurological disease with one of the highest rates of incidence worldwide. Although EEG is a crucial tool for its diagnosis, the manual detection of epileptic seizures is time consuming. Automated methods are needed to streamline this p...

  • Article
  • Open Access
7 Citations
3,682 Views
20 Pages

22 November 2022

The scientific literature and decision makers debate and explore education’s influence on regional development. However, differences among EU regions remain to be explained. The present article proposes to measure these disparities in terms of t...

  • Feature Paper
  • Article
  • Open Access
16 Citations
6,377 Views
16 Pages

An Innovative Design of an Integrated MED-TVC and Reverse Osmosis System for Seawater Desalination: Process Explanation and Performance Evaluation

  • Omer Mohamed Abubaker Al-hotmani,
  • Mudhar Abdul Alwahab Al-Obaidi,
  • Yakubu Mandafiya John,
  • Raj Patel and
  • Iqbal Mohammed Mujtaba

20 May 2020

In recent times, two or more desalination processes have been combined to form integrated systems that have been widely used to resolve the limitations of individual processes and to produce high-performance systems. In this regard, a simple int...

  • Article
  • Open Access
9 Citations
2,874 Views
24 Pages

Towards a Reliable Evaluation of Local Interpretation Methods

  • Jun Li,
  • Daoyu Lin,
  • Yang Wang,
  • Guangluan Xu and
  • Chibiao Ding

18 March 2021

The growing use of deep neural networks in critical applications makes interpretability an urgent problem to solve. Local interpretation methods are the most prevalent and accepted approach for understanding and interpreting deep neural networks. How...

  • Article
  • Open Access
18 Citations
6,428 Views
26 Pages

Critical Thinking, Formation, and Change

  • Carlos Saiz and
  • Silvia F. Rivas

28 November 2023

In this paper, we propose an application of critical thinking (CT) to real-world problems, taking into account personal well-being (PB) and lifelong formation (FO). First, we raise a substantial problem with CT, which is that causal explanation is of...

  • Case Report
  • Open Access
5 Citations
5,407 Views
16 Pages

A Novel Method for Evaluation of Flood Risk Reduction Strategies: Explanation of ICPR FloRiAn GIS-Tool and Its First Application to the Rhine River Basin

  • Adrian Schmid-Breton,
  • Gesa Kutschera,
  • Ton Botterhuis and
  • The ICPR Expert Group ‘Flood Risk Analysis’ (EG HIRI)

To determine the effects of measures on flood risk, the International Commission for the Protection of the Rhine (ICPR), supported by the engineering consultant HKV, has developed a method and a GIS-tool named “ICPR FloRiAn (Flood Risk Analysis)...

  • Article
  • Open Access
12 Citations
3,691 Views
16 Pages

Postural deficits such as hyperlordosis (hollow back) or hyperkyphosis (hunchback) are relevant health issues. Diagnoses depend on the experience of the examiner and are, therefore, often subjective and prone to errors. Machine learning (ML) methods...

  • Article
  • Open Access
44 Citations
4,369 Views
18 Pages

New SHapley Additive ExPlanations (SHAP) Approach to Evaluate the Raw Materials Interactions of Steel-Fiber-Reinforced Concrete

  • Madiha Anjum,
  • Kaffayatullah Khan,
  • Waqas Ahmad,
  • Ayaz Ahmad,
  • Muhammad Nasir Amin and
  • Afnan Nafees

9 September 2022

Recently, artificial intelligence (AI) approaches have gained the attention of researchers in the civil engineering field for estimating the mechanical characteristics of concrete, saving effort, time, and cost. Consequently, the c...

  • Article
  • Open Access
36 Citations
3,384 Views
21 Pages

Evaluating the Strength and Impact of Raw Ingredients of Cement Mortar Incorporating Waste Glass Powder Using Machine Learning and SHapley Additive ExPlanations (SHAP) Methods

  • Hassan Ali Alkadhim,
  • Muhammad Nasir Amin,
  • Waqas Ahmad,
  • Kaffayatullah Khan,
  • Sohaib Nazar,
  • Muhammad Iftikhar Faraz and
  • Muhammad Imran

20 October 2022

This research employed machine learning (ML) and SHapley Additive ExPlanations (SHAP) methods to assess the strength and impact of raw ingredients of cement mortar (CM) incorporated with waste glass powder (WGP). The data required for this study were...
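The SHAP method referenced in the two entries above attributes a model's prediction to its input features using Shapley values from cooperative game theory. As a rough, self-contained illustration of the underlying idea only (this is not the authors' pipeline and not the `shap` library's API; the function and parameter names here are purely illustrative), the exact Shapley value of each feature can be computed by enumerating feature coalitions, with absent features set to baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, instance):
    """Exact Shapley attribution for a model over n features.

    model    -- callable taking a feature vector (list of floats)
    baseline -- values substituted for features 'absent' from a coalition
    instance -- the input whose prediction is being explained
    """
    n = len(instance)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # Sum the weighted marginal contribution of feature i over all
        # coalitions S of the remaining features.
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                x_with = [instance[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                x_without = [instance[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(x_with) - model(x_without))
    return phi

# Toy usage: for a linear model the attributions recover the coefficients.
f = lambda x: 2 * x[0] + 3 * x[1]
print(shapley_values(f, [0.0, 0.0], [1.0, 1.0]))  # approximately [2.0, 3.0]
```

By construction the attributions sum to the difference between the model's output on the instance and on the baseline (the "efficiency" property), which is what makes Shapley-based explanations attractive for quantifying ingredient contributions in studies like those above. Published studies typically use the `shap` package with sampling or tree-based approximations, since exact enumeration is exponential in the number of features.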

  • Article
  • Open Access

Explainability features are intended to provide insight into the internal mechanisms of an Artificial Intelligence (AI) device, but there is a lack of evaluation techniques for assessing the quality of provided explanations. We propose a framework to...

  • Article
  • Open Access
2 Citations
1,713 Views
13 Pages

23 November 2022

All professional decisions prepared for a specific stakeholder can and must be explained. The primary role of explanation is to defend and reinforce the proposed decision, supporting stakeholder confidence in the validity of the decision. In this pap...

  • Article
  • Open Access
17 Citations
6,706 Views
23 Pages

This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna st...

  • Article
  • Open Access
8 Citations
5,836 Views
18 Pages

Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach

  • Ryosuke Nakamoto,
  • Brendan Flanagan,
  • Taisei Yamauchi,
  • Yiling Dai,
  • Kyosuke Takami and
  • Hiroaki Ogata

24 October 2023

In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, t...

  • Article
  • Open Access
1 Citation
3,600 Views
22 Pages

10 November 2022

Robert Nola has recently defended an argument against the existence of God on the basis of naturalistic explanations of religious belief. I will critically evaluate his argument in this paper. Nola’s argument takes the form of an inference to t...

  • Article
  • Open Access
21 Citations
6,679 Views
30 Pages

An Explanation of the LSTM Model Used for DDoS Attacks Classification

  • Abdulmuneem Bashaiwth,
  • Hamad Binsalleeh and
  • Basil AsSadhan

31 July 2023

With the rise of DDoS attacks, several machine learning-based attack detection models have been used to mitigate malicious behavioral attacks. Understanding how machine learning models work is not trivial. This is particularly true for complex and no...

  • Article
  • Open Access
1 Citation
718 Views
18 Pages

24 June 2025

With the growing prevalence of Explainable AI (XAI), the effectiveness, transparency, usefulness, and trustworthiness of explanations have come into focus. However, recent work in XAI often still falls short in terms of integrating human knowledge an...

  • Article
  • Open Access
12 Citations
27,611 Views
25 Pages

22 January 2014

This study surveys and evaluates previous attempts to use game theory to explain the strategic dynamic of the Cuban missile crisis, including, but not limited to, explanations developed in the style of Thomas Schelling, Nigel Howard and Steven Brams....

  • Article
  • Open Access
1,860 Views
20 Pages

Modern network intrusion detection systems (NIDSs) rely on complex deep learning models. However, the “black-box” nature of deep learning methods hinders transparency and trust in predictions, preventing the timely implementation of count...

  • Article
  • Open Access
122 Citations
15,347 Views
31 Pages

Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain

  • Samanta Knapič,
  • Avleen Malhi,
  • Rohit Saluja and
  • Kary Främling

19 September 2021

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve...

  • Article
  • Open Access
2 Citations
3,571 Views
15 Pages

Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as...

  • Article
  • Open Access
4 Citations
4,160 Views
18 Pages

16 December 2023

In our daily lives, we are often faced with the need to explain various phenomena, but we do not always select the most accurate explanation. For example, let us consider a “toxic” relationship with physical and psychological abuse, where...

  • Article
  • Open Access
32 Citations
7,041 Views
16 Pages

An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs

  • Zubaira Naz,
  • Muhammad Usman Ghani Khan,
  • Tanzila Saba,
  • Amjad Rehman,
  • Haitham Nobanee and
  • Saeed Ali Bahaj

3 January 2023

Explainable Artificial Intelligence is a key component of artificially intelligent systems that aim to explain their classification results. Explaining classification results is essential for automatic disease diagnosis in healthcare. The human re...

  • Article
  • Open Access
6 Citations
1,956 Views
15 Pages

6 September 2023

Explainable artificial intelligence (XAI) methods aim to explain to the user on what basis the model makes decisions. Unfortunately, general-purpose approaches that are independent of the types of data, model used and the level of sophistication of t...

  • Article
  • Open Access
7 Citations
3,940 Views
32 Pages

BEERL: Both Ends Explanations for Reinforcement Learning

  • Ahmad Terra,
  • Rafia Inam and
  • Elena Fersman

28 October 2022

Deep Reinforcement Learning (RL) is a black-box method and is hard to understand because the agent employs a neural network (NN). To explain the behavior and decisions made by the agent, different eXplainable RL (XRL) methods are developed; for examp...

  • Article
  • Open Access
2 Citations
1,195 Views
30 Pages

26 February 2025

Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, r...

  • Article
  • Open Access
1,303 Views
34 Pages

Bridging Text and Knowledge: Explainable AI for Knowledge Graph Classification and Concept Map-Based Semantic Domain Discovery with OBOE Framework

  • Raúl A. del Águila Escobar,
  • María del Carmen Suárez-Figueroa,
  • Mariano Fernández López and
  • Boris Villazón Terrazas

18 November 2025

Explainable Artificial Intelligence (XAI) has primarily focused on explaining model predictions, yet a critical gap remains in explaining semantic structure discovery within knowledge graphs derived from concept maps (CMs). This study extends the OBO...

  • Article
  • Open Access
9 Citations
3,186 Views
22 Pages

20 December 2023

Automated vehicles (AVs) are recognized as one of the most effective measures to realize sustainable transport. These vehicles can reduce emissions and environmental pollution, enhance accessibility, improve safety, and produce economic benefits thro...

  • Article
  • Open Access
13 Citations
5,490 Views
22 Pages

Students’ Scientific Evaluations of Water Resources

  • Josh Medrano,
  • Joshua Jaffe,
  • Doug Lombardi,
  • Margaret A. Holzer and
  • Christopher Roemmele

18 July 2020

Socially-relevant and controversial topics, such as water issues, are subject to differences in the explanations that scientists and the public (herein, students) find plausible. Students need to be more evaluative of the validity of explanations (e....

  • Article
  • Open Access
16 Citations
3,854 Views
14 Pages

Explanations for Neural Networks by Neural Networks

  • Sascha Marton,
  • Stefan Lüdtke and
  • Christian Bartelt

18 January 2022

Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaptation to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fideli...

  • Article
  • Open Access

16 January 2026

Large Language Models (LLMs) are increasingly used in industrial monitoring and decision support, yet they remain prone to process-control hallucinations—diagnoses and explanations that sound plausible but conflict with physical constraints, se...

  • Article
  • Open Access
739 Views
22 Pages

5 December 2025

Explanations for static-analysis warnings assist developers in understanding potential code issues. An end-to-end pipeline was implemented to generate natural-language explanations, evaluated on 5183 warning–explanation pairs from Java reposito...

  • Article
  • Open Access
2 Citations
4,425 Views
38 Pages

A Bayesian Network Approach to Explainable Reinforcement Learning with Distal Information

  • Rudy Milani,
  • Maximilian Moll,
  • Renato De Leone and
  • Stefan Pickl

10 February 2023

Nowadays, Artificial Intelligence systems have expanded their competence field from research to industry and daily life, so understanding how they make decisions is becoming fundamental to reducing the lack of trust between users and machines and inc...

  • Article
  • Open Access
15 Citations
11,698 Views
29 Pages

30 June 2025

The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of...

  • Article
  • Open Access
7 Citations
4,977 Views
14 Pages

Outlier Detection with Explanations on Music Streaming Data: A Case Study with Danmark Music Group Ltd.

  • Jonas Herskind Sejr,
  • Thorbjørn Christiansen,
  • Nicolai Dvinge,
  • Dan Hougesen,
  • Peter Schneider-Kamp and
  • Arthur Zimek

4 March 2021

In the digital marketplaces, businesses can micro-monitor sales worldwide and in real-time. Due to the vast amounts of data, there is a pressing need for tools that automatically highlight changing trends and anomalous (outlier) behavior that is pote...

  • Article
  • Open Access
6 Citations
4,315 Views
23 Pages

Face Aging by Explainable Conditional Adversarial Autoencoders

  • Christos Korgialas,
  • Evangelia Pantraki,
  • Angeliki Bolari,
  • Martha Sotiroudi and
  • Constantine Kotropoulos

This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE). The proposed f...

  • Article
  • Open Access
170 Views
16 Pages

25 December 2025

Convolutional neural networks (CNNs) have achieved remarkable progress in recent years, largely driven by advances in computational hardware. However, their increasingly complex architectures continue to pose significant challenges for interpretabili...

  • Feature Paper
  • Article
  • Open Access
2 Citations
1,669 Views
14 Pages

14 February 2025

This paper focuses on explaining changes over time in globally sourced annual temporal data with the specific objective of identifying features in black-box models that contribute to these temporal shifts. Leveraging local explanations, a part of exp...
