
8,340 Results Found

  • Article
  • Open Access
1,684 Views
23 Pages

Metaphysical Explanation: An Empirical Investigation

  • Andrew J. Latham and
  • Kristie Miller

The literature on metaphysical explanation contains three widely accepted assumptions. First, that the notion of metaphysical explanation with which philosophers are interested is a notion with which the folk are familiar: it is at least continuous w...

  • Article
  • Open Access
3 Citations
7,582 Views
26 Pages

13 October 2016

A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions extracted from sensor observations. Existing explanation generation systems draw on ideas that can be mapped...

  • Article
  • Open Access
4 Citations
10,530 Views
11 Pages

17 January 2020

There is a long history of philosophical inquiry into the concept of explanation in science, and this work has some implications for the ways in which science teachers, particularly in the physical sciences (physics and chemistry), explain ideas to s...

  • Feature Paper
  • Article
  • Open Access
129 Citations
23,973 Views
24 Pages

Fairness and Explanation in AI-Informed Decision Making

  • Alessa Angerschmid,
  • Jianlong Zhou,
  • Kevin Theuermann,
  • Fang Chen and
  • Andreas Holzinger

AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationships between the transparency/explanation and fairne...

  • Article
  • Open Access
4 Citations
4,160 Views
18 Pages

16 December 2023

In our daily lives, we are often faced with the need to explain various phenomena, but we do not always select the most accurate explanation. For example, let us consider a “toxic” relationship with physical and psychological abuse, where...

  • Article
  • Open Access
21 Citations
6,679 Views
30 Pages

An Explanation of the LSTM Model Used for DDoS Attacks Classification

  • Abdulmuneem Bashaiwth,
  • Hamad Binsalleeh and
  • Basil AsSadhan

31 July 2023

With the rise of DDoS attacks, several machine learning-based attack detection models have been used to mitigate malicious behavioral attacks. Understanding how machine learning models work is not trivial. This is particularly true for complex and no...

  • Article
  • Open Access
7 Citations
6,588 Views
29 Pages

Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s

  • Michael Silberstein,
  • William Mark Stuckey and
  • Timothy McDevitt

16 January 2021

Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacet...

  • Article
  • Open Access
1 Citation
1,945 Views
17 Pages

In the sciences of the deep past, it is taken for granted that the hypothesis that offers the best explanation is the best confirmed. I examine in detail the debate over the K/Pg mass extinctions that began in 1980 with the publication of the paper b...

  • Article
  • Open Access
150 Citations
12,027 Views
28 Pages

3 February 2022

In recent years, many methods for intrusion detection systems (IDS) have been designed and developed in the research community, which have achieved a perfect detection rate using IDS datasets. Deep neural networks (DNNs) are representative examples a...

  • Article
  • Open Access
11 Citations
7,183 Views
22 Pages

Enhancing Self-Explanation Learning through a Real-Time Feedback System: An Empirical Evaluation Study

  • Ryosuke Nakamoto,
  • Brendan Flanagan,
  • Yiling Dai,
  • Taisei Yamauchi,
  • Kyosuke Takami and
  • Hiroaki Ogata

2 November 2023

This research introduces the self-explanation-based automated feedback (SEAF) system, aimed at alleviating the teaching burden through real-time, automated feedback while aligning with SDG 4’s sustainability goals for quality education. The sys...

  • Article
  • Open Access
17 Citations
6,706 Views
23 Pages

This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna st...

  • Article
  • Open Access
295 Views
27 Pages

3 December 2025

Understanding how expert characteristics shape verdicts is critical. This study identified message (e.g., explanation satisfaction) and source characteristics (e.g., education) that predict perceived expertise and verdicts. We hypothesized an expert...

  • Review
  • Open Access
27 Citations
7,215 Views
28 Pages

Exploring Local Explanation of Practical Industrial AI Applications: A Systematic Literature Review

  • Thi-Thu-Huong Le,
  • Aji Teguh Prihatno,
  • Yustus Eko Oktian,
  • Hyoeun Kang and
  • Howon Kim

8 May 2023

In recent years, numerous explainable artificial intelligence (XAI) use cases have been developed, to solve numerous real problems in industrial applications while maintaining the explainability level of the used artificial intelligence (AI) models t...

  • Article
  • Open Access
1,179 Views
15 Pages

The multimodal knowledge graph link prediction model integrates entity features from multiple modalities, such as text and images, and uses these fused features to infer potential entity links in the knowledge graph. This process is highly dependent...

  • Article
  • Open Access
2 Citations
1,545 Views
24 Pages

Info-CELS: Informative Saliency Map-Guided Counterfactual Explanation for Time Series Classification

  • Peiyu Li,
  • Omar Bahri,
  • Pouya Hosseinzadeh,
  • Soukaïna Filali Boubrahimi and
  • Shah Muhammad Hamdi

As the demand for interpretable machine learning approaches continues to grow, there is an increasing necessity for human involvement in providing informative explanations for model decisions. This is necessary for building trust and transparency in...

  • Article
  • Open Access
7 Citations
3,328 Views
18 Pages

With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving close together. Variations in the explanation itself have been widely studied, with s...

  • Article
  • Open Access
4 Citations
1,643 Views
23 Pages

The increasing complexity and importance of medical data in improving patient care, advancing research, and optimizing healthcare systems led to the proposal of this study, which presents a novel methodology by evaluating the sensitivity of artificia...

  • Article
  • Open Access
7 Citations
4,880 Views
19 Pages

As deep learning research continues to advance, interpretability is becoming as important as model performance. Conducting interpretability studies to understand the decision-making processes of deep learning models can improve performance and provid...

  • Review
  • Open Access
1,529 Views
21 Pages

At present, artificial intelligence (AI) has shown significant potential in digestive endoscopy image analysis, serving as a powerful auxiliary tool for the accurate diagnosis and treatment of gastrointestinal diseases. However, mainstream models rep...

  • Article
  • Open Access
8 Citations
5,836 Views
18 Pages

Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach

  • Ryosuke Nakamoto,
  • Brendan Flanagan,
  • Taisei Yamauchi,
  • Yiling Dai,
  • Kyosuke Takami and
  • Hiroaki Ogata

24 October 2023

In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, t...

  • Article
  • Open Access
170 Views
16 Pages

25 December 2025

Convolutional neural networks (CNNs) have achieved remarkable progress in recent years, largely driven by advances in computational hardware. However, their increasingly complex architectures continue to pose significant challenges for interpretabili...

  • Article
  • Open Access
12 Citations
2,629 Views
15 Pages

15 October 2022

Anomaly detection is critical to ensure cloud infrastructures’ quality of service. However, due to the complexity of inconspicuous (indistinct) anomalies, high dynamicity, and the lack of anomaly labels in the cloud environment, multivariate ti...

  • Article
  • Open Access
1 Citation
2,154 Views
21 Pages

13 May 2024

In text classifier models, the complexity of recurrent neural networks (RNNs) is very high because of the vast state space and uncertainty of transitions, which makes the RNN classifier’s explainability insufficient. It is almost impossible to...

  • Article
  • Open Access
1,330 Views
34 Pages

An Explainable Approach to Parkinson’s Diagnosis Using the Contrastive Explanation Method—CEM

  • Ipek Balikci Cicek,
  • Zeynep Kucukakcali,
  • Birgul Deniz and
  • Fatma Ebru Algül

18 August 2025

Background/Objectives: Parkinson’s disease (PD) is a progressive neurodegenerative disorder that requires early and accurate diagnosis. This study aimed to classify individuals with and without PD using volumetric brain MRI data and to improve...

  • Article
  • Open Access
26 Citations
23,911 Views
32 Pages

29 February 2020

This manuscript outlines a viable approach for training and evaluating machine learning systems for high-stakes, human-centered, or regulated applications using common Python programming tools. The accuracy and intrinsic interpretability of two types...

  • Article
  • Open Access
2 Citations
2,613 Views
12 Pages

23 September 2024

Objective. To establish a risk prediction model for intradialytic hypotension (IDH) in maintenance hemodialysis (MHD) patients and to analyze the explainability of the risk prediction model. Methods. A total of 2,228,650 hemodialysis records of 1075...

  • Proceeding Paper
  • Open Access
1,257 Views
8 Pages

2 September 2024

Interactive videos and digital technology have been recognized by educators as supportive tools for learning science. To assess their impact, we compared seventh-grade students’ abilities to explain scientific phenomena and understand climate c...

  • Article
  • Open Access
2 Citations
1,737 Views
48 Pages

27 November 2024

The ‘black box’ nature of machine learning (ML) approaches makes it challenging to understand how most artificial intelligence (AI) models make decisions. Explainable AI (XAI) aims to provide analytical techniques to understand the behavi...

  • Article
  • Open Access
47 Citations
5,523 Views
16 Pages

Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation

  • Zhenpeng Feng,
  • Mingzhe Zhu,
  • Ljubiša Stanković and
  • Hongbing Ji

1 May 2021

Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR imaging processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentatio...

  • Article
  • Open Access
288 Citations
18,313 Views
17 Pages

Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any...

  • Article
  • Open Access
6 Citations
2,747 Views
14 Pages

10 May 2023

With the development of artificial intelligence technology, machine learning models are becoming more complex and accurate. However, the explainability of the models is decreasing, and much of the decision process is still unclear and difficult to ex...

  • Article
  • Open Access
4 Citations
2,575 Views
25 Pages

Enhancing Structured Query Language Injection Detection with Trustworthy Ensemble Learning and Boosting Models Using Local Explanation Techniques

  • Thi-Thu-Huong Le,
  • Yeonjeong Hwang,
  • Changwoo Choi,
  • Rini Wisnu Wardhani,
  • Dedy Septono Catur Putranto and
  • Howon Kim 

6 November 2024

This paper presents a comparative analysis of several decision models for detecting Structured Query Language (SQL) injection attacks, which remain one of the most prevalent and serious security threats to web applications. SQL injection enables atta...

  • Article
  • Open Access
10 Citations
3,704 Views
16 Pages

18 June 2021

Many computer-aided diagnosis methods, especially ones with deep learning strategies, of liver cancers based on medical images have been proposed. However, most of such methods analyze the images under only one scale, and the deep learning models are...

  • Article
  • Open Access
21 Citations
4,001 Views
19 Pages

Drivers’ Age and Automated Vehicle Explanations

  • Qiaoning Zhang,
  • Xi Jessie Yang and
  • Lionel P. Robert

11 February 2021

Automated vehicles (AV) have the potential to benefit our society. Providing explanations is one approach to facilitating AV trust by decreasing uncertainty about automated decision-making. However, it is not clear whether explanations are equally be...

  • Article
  • Open Access
5 Citations
4,298 Views
13 Pages

Measuring Characteristics of Explanations with Element Maps

  • Steffen Wagner,
  • Karel Kok and
  • Burkhard Priemer

11 February 2020

What are the structural characteristics of written scientific explanations that make them good? This is often difficult to measure. One approach to describing and analyzing structures is to employ network theory. With this research, we aim to describ...

  • Article
  • Open Access
2,726 Views
18 Pages

Evaluating Anomaly Explanations Using Ground Truth

  • Liat Antwarg Friedman,
  • Chen Galed,
  • Lior Rokach and
  • Bracha Shapira

15 November 2024

The widespread use of machine and deep learning algorithms for anomaly detection has created a critical need for robust explanations that can identify the features contributing to anomalies. However, effective evaluation methodologies for anomaly exp...

  • Article
  • Open Access
77 Citations
7,359 Views
26 Pages

Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer based on histopathological images. Even though many such classification methods...

  • Article
  • Open Access
21 Citations
5,105 Views
19 Pages

14 February 2023

Machine learning methods can establish complex nonlinear relationships between input and response variables for stadium fire risk assessment. However, interpreting the output of machine learning models is considered very difficult due to their complex “blac...

  • Article
  • Open Access
2 Citations
2,120 Views
23 Pages

28 October 2023

As an essential component of a universal CNC machine tool, the spindle plays a critical role in determining the accuracy of machining parts. The three cutting process parameters (cutting speed, feed speed, and cutting depth) are the most important op...

  • Article
  • Open Access
2 Citations
3,571 Views
15 Pages

Decision support systems based on machine learning models should be able to help users identify opportunities and threats. Popular model-agnostic explanation models can identify factors that support various predictions, answering questions such as “...

  • Article
  • Open Access
9 Citations
3,186 Views
22 Pages

20 December 2023

Automated vehicles (AVs) are recognized as one of the most effective measures to realize sustainable transport. These vehicles can reduce emissions and environmental pollution, enhance accessibility, improve safety, and produce economic benefits thro...

  • Article
  • Open Access
6 Citations
1,956 Views
15 Pages

6 September 2023

Explainable artificial intelligence (XAI) methods aim to explain to the user on what basis the model makes decisions. Unfortunately, general-purpose approaches that are independent of the types of data, model used and the level of sophistication of t...

  • Review
  • Open Access
499 Citations
40,217 Views
19 Pages

Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics

  • Jianlong Zhou,
  • Amir H. Gandomi,
  • Fang Chen and
  • Andreas Holzinger

The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences or...

  • Article
  • Open Access
10 Citations
4,216 Views
19 Pages

Elementary Students’ Reasoning in Drawn Explanations Based on a Scientific Theory

  • Valeria M. Cabello,
  • Patricia M. Moreira and
  • Paulina Griñó Morales

26 September 2021

Constructing explanations of scientific phenomena is a high-leverage practice that promotes student understanding. In the context of this study, we acknowledge that children are used to receiving explanations from teachers. However, they are rarely e...

  • Article
  • Open Access
1,100 Views
28 Pages

Fidex and FidexGlo: From Local Explanations to Global Explanations of Deep Models

  • Guido Bologna,
  • Jean-Marc Boutay,
  • Damian Boquete,
  • Quentin Leblanc,
  • Deniz Köprülü and
  • Ludovic Pfeiffer

20 February 2025

Deep connectionist models are characterized by many neurons grouped together in many successive layers. As a result, their data classifications are difficult to understand. We present two novel algorithms which explain the responses of several black-...

  • Article
  • Open Access
3 Citations
6,357 Views
21 Pages

Tangled String for Multi-Timescale Explanation of Changes in Stock Market

  • Yukio Ohsawa,
  • Teruaki Hayashi and
  • Takaaki Yoshino

22 March 2019

This work addresses the question of explaining changes in the desired timescales of the stock market. Tangled string is a sequence visualization tool wherein a sequence is compared to a string and trends in the sequence are compared to the appearance...

  • Article
  • Open Access
1 Citation
3,600 Views
22 Pages

10 November 2022

Robert Nola has recently defended an argument against the existence of God on the basis of naturalistic explanations of religious belief. I will critically evaluate his argument in this paper. Nola’s argument takes the form of an inference to t...

  • Article
  • Open Access
10 Citations
4,375 Views
18 Pages

Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs

  • Bogdan Nicula,
  • Mihai Dascalu,
  • Tracy Arner,
  • Renu Balyan and
  • Danielle S. McNamara

14 October 2023

Text comprehension is an essential skill in today’s information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs...
