16,861 Results Found

  • Article
  • Open Access
300 Citations
18,467 Views
17 Pages

Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any...
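
The snippet above summarizes LIME's core idea: explaining one prediction of a black-box model with a local surrogate. A minimal sketch of that workflow for a tabular classifier follows, assuming the `lime` package and scikit-learn; the dataset and random forest are illustrative stand-ins, not taken from the article.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# Assumes scikit-learn and the `lime` package; dataset/model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction by fitting a local surrogate around it.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, local weight) pairs
```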

  • Article
  • Open Access
7 Citations
2,890 Views
27 Pages

24 September 2024

In this study, we present a new approach that combines multiple Bidirectional Encoder Representations from Transformers (BERT) architectures with a Convolutional Neural Network (CNN) framework designed for sexism detection in text at a granular level...
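
The entry above describes combining BERT encoders with a CNN for fine-grained sexism detection. The sketch below shows one common way to wire such a hybrid in PyTorch; the checkpoint name, kernel sizes, and binary label head are assumptions for illustration, not the architecture from the study.

```python
# Minimal sketch of a BERT-encoder + CNN text classifier; hyperparameters and
# the binary "sexist / not sexist" head are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCnnClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # 1-D convolutions over the token dimension of BERT's hidden states.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, 128, kernel_size=k) for k in (3, 4, 5)]
        )
        self.classifier = nn.Linear(128 * 3, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        x = hidden.transpose(1, 2)               # (batch, hidden, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["an example sentence to classify"], return_tensors="pt", padding=True)
logits = BertCnnClassifier()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])
```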

  • Article
  • Open Access
1 Citation
3,616 Views
16 Pages

Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models

  • Ju-Hwan Lee,
  • Dang Thanh Vu,
  • Nam-Kyung Lee,
  • Il-Hong Shin and
  • Jin-Young Kim

7 January 2025

This study explores the integration of concept bottleneck models (CBMs) with knowledge distillation (KD) while preserving the locality characteristics of the CBM. Although KD proves effective in model compression, compressed models often lack interpr...

  • Article
  • Open Access
1,092 Views
22 Pages

Protocol for Evaluating Explainability in Actuarial Models

  • Catalina Lozano-Murcia,
  • Francisco P. Romero and
  • Mᵃ Concepción Gonzalez-Ramos

This paper explores the use of explainable artificial intelligence (XAI) techniques in actuarial science to address the opacity of advanced machine learning models in financial contexts. While technological advancements have enhanced actuarial models...

  • Article
  • Open Access
1 Citation
1,596 Views
16 Pages

Explaining a Logic Dendritic Neuron Model by Using the Morphology of Decision Trees

  • Xingqian Chen,
  • Honghui Fan,
  • Wenhe Chen,
  • Yaoxin Zhang,
  • Dingkun Zhu and
  • Shuangbao Song

3 October 2024

The development of explainable machine learning methods is attracting increasing attention. Dendritic neuron models have emerged as powerful machine learning methods in recent years. However, providing explainability to a dendritic neuron model has n...

  • Article
  • Open Access
3 Citations
2,503 Views
21 Pages

8 July 2024

Machine learning is a well-matured discipline, and exploration of datasets can be performed in an efficient way, leading to accurate and operational prediction and decision models. On the other hand, most methods tend to produce black-box-type models...

  • Article
  • Open Access
70 Citations
7,873 Views
34 Pages

An Explainable AI-Based Fault Diagnosis Model for Bearings

  • Md Junayed Hasan,
  • Muhammad Sohaib and
  • Jong-Myon Kim

13 June 2021

In this paper, an explainable AI-based fault diagnosis model for bearings is proposed with five stages, i.e., (1) a data preprocessing method based on the Stockwell Transformation Coefficient (STC) is proposed to analyze the vibration signals for var...

  • Article
  • Open Access
17 Citations
8,096 Views
30 Pages

Explaining Misinformation Detection Using Large Language Models

  • Vishnu S. Pendyala and
  • Christopher E. Hall

Large language models (LLMs) are a compressed repository of a vast corpus of valuable information on which they are trained. Therefore, this work hypothesizes that LLMs such as Llama, Orca, Falcon, and Mistral can be used for misinformation detection...

  • Article
  • Open Access
14 Citations
12,196 Views
30 Pages

Explainable Aspect-Based Sentiment Analysis Using Transformer Models

  • Isidoros Perikos and
  • Athanasios Diamantopoulos

An aspect-based sentiment analysis (ABSA) aims to perform a fine-grained analysis of text to identify sentiments and opinions associated with specific aspects. Recently, transformers and large language models have demonstrated exceptional performance...
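
The snippet above describes aspect-based sentiment analysis with transformer models. One common formulation, sketched below, treats each (review, aspect) pair as a sentence-pair classification problem; the checkpoint name and label order are placeholders, since the study's own fine-tuned models are not reproduced here.

```python
# Minimal ABSA sketch: pair the review text with each aspect and let a
# sequence-classification transformer score it. The checkpoint and label
# names are illustrative placeholders, not the models from the study.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # placeholder; an ABSA-fine-tuned checkpoint would be used in practice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)
labels = ["negative", "neutral", "positive"]  # assumed label order

review = "The battery life is great, but the screen scratches easily."
for aspect in ["battery life", "screen"]:
    enc = tokenizer(review, aspect, return_tensors="pt")  # sentence-pair input
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[0]
    print(aspect, labels[int(probs.argmax())], probs.tolist())
```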

  • Article
  • Open Access
28 Citations
6,521 Views
14 Pages

15 September 2022

Artificial intelligence is changing the practice of healthcare. While it is essential to employ such solutions, making them transparent to medical experts is more critical. Most of the previous work presented disease prediction models, but did not ex...

  • Review
  • Open Access
4 Citations
3,160 Views
28 Pages

20 May 2025

Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasonin...

  • Article
  • Open Access
5 Citations
1,626 Views
22 Pages

A Meta-Learning-Based Ensemble Model for Explainable Alzheimer’s Disease Diagnosis

  • Fatima Hasan Al-bakri,
  • Wan Mohd Yaakob Wan Bejuri,
  • Mohamed Nasser Al-Andoli,
  • Raja Rina Raja Ikram,
  • Hui Min Khor,
  • Zulkifli Tahir and
  • The Alzheimer’s Disease Neuroimaging Initiative

Background/Objectives: Artificial intelligence (AI) models for Alzheimer’s disease (AD) diagnosis often face the challenge of limited explainability, hindering their clinical adoption. Previous studies have relied on full-scale MRI, which incre...

  • Article
  • Open Access
14 Citations
5,938 Views
17 Pages

Explainable Machine Learning Model for Chronic Kidney Disease Prediction

  • Muhammad Shoaib Arif,
  • Ateeq Ur Rehman and
  • Daniyal Asif

3 October 2024

More than 800 million people worldwide suffer from chronic kidney disease (CKD). It stands as one of the primary causes of global mortality, uniquely noted for an increase in death rates over the past twenty years among non-communicable diseases. Mac...

  • Article
  • Open Access
17 Citations
6,289 Views
20 Pages

TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models

  • Soumick Chatterjee,
  • Arnab Das,
  • Chirag Mandal,
  • Budhaditya Mukhopadhyay,
  • Manish Vipinraj,
  • Aniruddh Shukla,
  • Rajatha Nagaraja Rao,
  • Chompunuch Sarasaen,
  • Oliver Speck and
  • Andreas Nürnberger

10 February 2022

Clinicians are often very sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice. One main reason for this is the black-box nature of these approaches and the inherent problem of missing in...

  • Article
  • Open Access
19 Citations
8,545 Views
23 Pages

15 February 2024

This study aims to establish a greater reliability compared to conventional speech emotion recognition (SER) studies. This is achieved through preprocessing techniques that reduce uncertainty elements, models that combine the structural features of e...

  • Article
  • Open Access
70 Citations
14,011 Views
13 Pages

An Explainable Deep Learning Model to Prediction Dental Caries Using Panoramic Radiograph Images

  • Faruk Oztekin,
  • Oguzhan Katar,
  • Ferhat Sadak,
  • Muhammed Yildirim,
  • Hakan Cakar,
  • Murat Aydogan,
  • Zeynep Ozpolat,
  • Tuba Talo Yildirim,
  • Ozal Yildirim and
  • U. Rajendra Acharya
  • + 1 author

Dental caries is the most frequent dental health issue in the general population. Dental caries can result in extreme pain or infections, lowering people’s quality of life. Applying machine learning models to automatically identify dental carie...

  • Article
  • Open Access
5 Citations
2,691 Views
18 Pages

Transformer-Based Explainable Model for Breast Cancer Lesion Segmentation

  • Huina Wang,
  • Lan Wei,
  • Bo Liu,
  • Jianqiang Li,
  • Jinshu Li,
  • Juan Fang and
  • Catherine Mooney

27 January 2025

Breast cancer is one of the most prevalent cancers among women, with early detection playing a critical role in improving survival rates. This study introduces a novel transformer-based explainable model for breast cancer lesion segmentation (TEBLS),...

  • Article
  • Open Access
4 Citations
4,706 Views
25 Pages

Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represente...

  • Feature Paper
  • Article
  • Open Access
15 Citations
5,298 Views
21 Pages

Machine learning is increasingly and ubiquitously being used in the medical domain. Evaluation metrics like accuracy, precision, and recall may indicate the performance of the models but not necessarily the reliability of their outcomes. This paper a...

  • Article
  • Open Access
6 Citations
5,559 Views
24 Pages

6 June 2025

Background: Industry 4.0’s development requires digitalized manufacturing through Predictive Maintenance (PdM) because such practices decrease equipment failures and operational disruptions. However, its effectiveness is hindered by three key c...

  • Article
  • Open Access
3,076 Views
21 Pages

Explainable Use of Foundation Models for Job Hiring

  • Vishnu S. Pendyala,
  • Neha Bais Thakur and
  • Radhika Agarwal

Automating candidate shortlisting is a non-trivial task that stands to benefit substantially from advances in artificial intelligence. We evaluate a suite of foundation models such as Llama 2, Llama 3, Mixtral, Gemma-2b, Gemma-7b, Phi-3 Small, Phi-3...

  • Article
  • Open Access
1,535 Views
23 Pages

Explainable Machine Learning Models for Credit Rating in Colombian Solidarity Sector Entities

  • María Andrea Arias-Serna,
  • Jhon Jair Quiza-Montealegre,
  • Luis Fernando Móntes-Gómez,
  • Leandro Uribe Clavijo and
  • Andrés Felipe Orozco-Duque

This paper proposes a methodology for implementing a custom-developed explainability model for credit rating using behavioral data registered during the lifecycle of the borrowing that can replicate the score given by the regulatory model for the sol...

  • Article
  • Open Access
8 Citations
7,491 Views
23 Pages

Explaining Bad Forecasts in Global Time Series Models

  • Jože Rožanec,
  • Elena Trajkova,
  • Klemen Kenda,
  • Blaž Fortuna and
  • Dunja Mladenić

4 October 2021

While increasing empirical evidence suggests that global time series forecasting models can achieve better forecasting performance than local ones, there is a research void regarding when and why the global models fail to provide a good forecast. Thi...

  • Article
  • Open Access
56 Citations
9,070 Views
26 Pages

Explainable Boosting Machines for Slope Failure Spatial Predictive Modeling

  • Aaron E. Maxwell,
  • Maneesh Sharma and
  • Kurt A. Donaldson

8 December 2021

Machine learning (ML) methods, such as artificial neural networks (ANN), k-nearest neighbors (kNN), random forests (RF), support vector machines (SVM), and boosted decision trees (DTs), may offer stronger predictive performance than more traditional,...
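
The entry above concerns Explainable Boosting Machines, a glass-box alternative to the black-box learners it lists. A minimal sketch using the `interpret` package follows; the breast-cancer dataset stands in for the slope-failure data and is purely illustrative.

```python
# Minimal Explainable Boosting Machine (EBM) sketch using the `interpret`
# package; the dataset is an illustrative stand-in, not the study's data.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# EBMs are additive, so each feature's learned shape function can be inspected.
global_explanation = ebm.explain_global()
print(accuracy_score(y_test, ebm.predict(X_test)))
```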

  • Communication
  • Open Access
30 Citations
9,475 Views
12 Pages

An Explainable Machine Learning Model for Material Backorder Prediction in Inventory Management

  • Charis Ntakolia,
  • Christos Kokkotis,
  • Patrik Karlsson and
  • Serafeim Moustakidis

27 November 2021

Global competition among businesses demands a more effective and low-cost supply chain, allowing firms to provide products at the desired quality, quantity, and time, with lower production costs. The latter include holding cost, ordering cost, and backo...

  • Article
  • Open Access
25 Citations
13,851 Views
19 Pages

White blood cells (WBCs) are crucial components of the immune system that play a vital role in defending the body against infections and diseases. The identification of WBCs subtypes is useful in the detection of various diseases, such as infections,...

  • Article
  • Open Access
33 Citations
17,361 Views
26 Pages

A Mathematical Model for Customer Segmentation Leveraging Deep Learning, Explainable AI, and RFM Analysis in Targeted Marketing

  • Fatma M. Talaat,
  • Abdussalam Aljadani,
  • Bshair Alharthi,
  • Mohammed A. Farsi,
  • Mahmoud Badawy and
  • Mostafa Elhosseini

15 September 2023

In the evolving landscape of targeted marketing, integrating deep learning (DL) and explainable AI (XAI) offers a promising avenue for enhanced customer segmentation. This paper introduces a groundbreaking approach, DeepLimeSeg, which synergizes DL m...
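
The snippet above pairs deep learning and XAI with RFM analysis. The sketch below shows only the classical RFM scoring step on synthetic transactions; the column names, quartile scheme, and data are illustrative assumptions, not the DeepLimeSeg pipeline itself.

```python
# Minimal RFM (Recency, Frequency, Monetary) sketch on synthetic transactions;
# column names and the quartile scoring are assumptions, not the paper's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
tx = pd.DataFrame({
    "customer_id": rng.integers(1, 101, size=n),
    "date": pd.Timestamp("2023-01-01")
            + pd.to_timedelta(rng.integers(0, 365, size=n), unit="D"),
    "amount": rng.gamma(2.0, 50.0, size=n).round(2),
})
snapshot = tx["date"].max() + pd.Timedelta(days=1)

rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

# Quartile scores (ranked first to avoid tied bin edges): recent buyers score
# high on R, frequent buyers on F, high spenders on M; the code is the segment.
rfm["R"] = pd.qcut(rfm["recency"].rank(method="first"), 4, labels=[4, 3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4])
rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 4, labels=[1, 2, 3, 4])
rfm["segment"] = rfm["R"].astype(str) + rfm["F"].astype(str) + rfm["M"].astype(str)
print(rfm.head())
```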

  • Article
  • Open Access
2 Citations
1,665 Views
17 Pages

A Feature-Augmented Explainable Artificial Intelligence Model for Diagnosing Alzheimer’s Disease from Multimodal Clinical and Neuroimaging Data

  • Fatima Hasan Al-bakri,
  • Wan Mohd Yaakob Wan Bejuri,
  • Mohamed Nasser Al-Andoli,
  • Raja Rina Raja Ikram,
  • Hui Min Khor,
  • Yus Sholva,
  • Umi Kalsom Ariffin,
  • Noorayisahbe Mohd Yaacob,
  • Zuraida Abal Abas and
  • Md Fahmi Abd Samad
  • + 5 authors

17 August 2025

Background/Objectives: This study presents a survey-based evaluation of an explainable AI (Feature-Augmented) approach, which was designed to support the diagnosis of Alzheimer’s disease (AD) by integrating clinical data, MMSE scores, and MRI s...

  • Article
  • Open Access
1 Citation
3,241 Views
34 Pages

Comparing Explainable AI Models: SHAP, LIME, and Their Role in Electric Field Strength Prediction over Urban Areas

  • Ioannis Givisis,
  • Dimitris Kalatzis,
  • Christos Christakis and
  • Yiannis Kiouvrekis

4 December 2025

This study presents a comparative evaluation of state-of-the-art Machine Learning (ML) and Explainable Artificial Intelligence (XAI) methods, specifically SHAP and LIME, for predicting electromagnetic field (EMF) strength in urban environments. Using...
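
The snippet above compares SHAP and LIME as post hoc explainers. Below is a minimal SHAP sketch for a tree-based regressor; the diabetes dataset and gradient-boosting model are stand-ins for the urban EMF data, chosen only to keep the example self-contained.

```python
# Minimal SHAP sketch for a tree-based regressor; dataset and model are
# illustrative stand-ins, not the urban EMF measurements from the study.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one additive explanation per prediction

# Mean |SHAP| per feature gives a global importance ranking.
ranking = sorted(zip(X.columns, np.abs(shap_values).mean(axis=0)),
                 key=lambda t: t[1], reverse=True)
print(ranking[:5])
```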

  • Article
  • Open Access
2 Citations
2,835 Views
14 Pages

An Explainable Fusion of ECG and SpO2-Based Models for Real-Time Sleep Apnea Detection

  • Tanmoy Paul,
  • Omiya Hassan,
  • Christina S. McCrae,
  • Syed Kamrul Islam and
  • Abu Saleh Mohammad Mosa

Obstructive sleep apnea (OSA) is a common disorder characterized by disrupted breathing during sleep, leading to serious health consequences such as daytime fatigue, hypertension, metabolic issues, and cardiovascular disease. Polysomnography (PSG) is...

  • Article
  • Open Access
618 Views
33 Pages

AGF-HAM: Adaptive Gated Fusion Hierarchical Attention Model for Explainable Sentiment Analysis

  • Mahander Kumar,
  • Lal Khan,
  • Mohammad Zubair Khan and
  • Amel Ali Alhussan

5 December 2025

The rapid growth of user-generated content in the digital space has increased the need for sentiment and emotion analysis systems that are both accurate and interpretable. This research paper presents a new hybrid model, HAM (Hybrid Attention-based Model), a...

  • Article
  • Open Access
19 Citations
4,797 Views
23 Pages

Puzzle out Machine Learning Model-Explaining Disintegration Process in ODTs

  • Jakub Szlęk,
  • Mohammad Hassan Khalid,
  • Adam Pacławski,
  • Natalia Czub and
  • Aleksander Mendyk

Tablets are the most common dosage form of pharmaceutical products. While tablets represent the majority of marketed pharmaceutical products, there remain a significant number of patients who find it difficult to swallow conventional tablets. Such di...

  • Article
  • Open Access
62 Citations
7,245 Views
22 Pages

Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)

  • Nida Aslam,
  • Irfan Ullah Khan,
  • Samiha Mirza,
  • Alanoud AlOwayed,
  • Fatima M. Anis,
  • Reef M. Aljuaid and
  • Reham Baageel

16 June 2022

With the expansion of the internet, a major threat has emerged involving the spread of malicious domains, used by attackers to perform illegal activities that target governments, violate the privacy of organizations, and even manipulate every...

  • Article
  • Open Access
1,068 Views
23 Pages

14 October 2025

To develop truly human-centered automated systems, it is essential to acknowledge that human reasoning is prone to systematic deviations from rational judgment, known as Cognitive Biases. The present study investigated such flawed reasoning in the co...

  • Article
  • Open Access
2 Citations
1,439 Views
20 Pages

MAL-XSEL: Enhancing Industrial Web Malware Detection with an Explainable Stacking Ensemble Model

  • Ezz El-Din Hemdan,
  • Samah Alshathri,
  • Haitham Elwahsh,
  • Osama A. Ghoneim and
  • Amged Sayed

26 April 2025

The escalating global incidence of malware presents critical cybersecurity threats to manufacturing, automation, and industrial process control systems. Given the fast-developing web applications and IoT devices in use by industry operations, securin...

  • Article
  • Open Access
1,178 Views
32 Pages

5 September 2025

This study introduces a data-driven twin modeling framework based on modern Koopman operator theory, offering a significant advancement over classical modal decomposition by accurately capturing nonlinear dynamics with reduced complexity and no manua...

  • Article
  • Open Access
3 Citations
5,919 Views
60 Pages

The increasing complexity and volume of cybersecurity logs demand advanced analytical techniques capable of accurate threat detection and explainability. This paper investigates the application of Large Language Models (LLMs), specifically qwen2.5:7b...

  • Article
  • Open Access
204 Views
23 Pages

18 January 2026

Hospitals are among the most energy-intensive buildings, yet their heating systems often operate below optimal efficiency due to outdated controls and limited sensing. Existing facilities often provide only a few accessible measurement points, many o...

  • Article
  • Open Access
574 Views
23 Pages

15 November 2025

Accurate prediction of blast-induced air overpressure (AOp) is vital for environmental management and safety in mining and construction. Traditional empirical models are simple but fail to capture complex meteorological effects, while accurate black-...

  • Article
  • Open Access
383 Views
41 Pages

2 January 2026

Rapid technological innovation has made navigating millions of new patent filings a critical challenge for corporations and research institutions. Existing patent recommendation systems, largely constrained by their static designs, struggle to captur...

  • Article
  • Open Access
11 Citations
3,523 Views
12 Pages

9 December 2022

Organ toxicity caused by chemicals is a serious problem in the creation and usage of chemicals such as medications, insecticides, chemical products, and cosmetics. In recent decades, the initiation and development of chemical-induced organ damage hav...

  • Systematic Review
  • Open Access
3 Citations
5,264 Views
31 Pages

eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing

  • Nadeesha Hettikankanamage,
  • Niusha Shafiabady,
  • Fiona Chatteur,
  • Robert M. X. Wu,
  • Fareed Ud Din and
  • Jianlong Zhou

30 October 2025

Artificial Intelligence (AI) has achieved immense progress in recent years across a wide array of application domains, with biomedical imaging and sensing emerging as particularly impactful areas. However, the integration of AI in safety-critical fie...

  • Article
  • Open Access
13 Citations
2,702 Views
23 Pages

2 November 2022

There is growing tension between high-performance machine-learning (ML) models and explainability within the scientific community. In arsenic modelling, understanding why ML models make certain predictions, for instance, “high arsenic” in...

  • Article
  • Open Access
1,121 Views
14 Pages

28 August 2025

The growing adoption of deep learning (DL) in early-stage cancer diagnosis has demonstrated remarkable performance across multiple imaging tasks. Yet, the lack of transparency in these models (“black-box” problem) limits their adoption in...

  • Article
  • Open Access
25 Citations
5,278 Views
15 Pages

XAI-Fall: Explainable AI for Fall Detection on Wearable Devices Using Sequence Models and XAI Techniques

  • Harsh Mankodiya,
  • Dhairya Jadav,
  • Rajesh Gupta,
  • Sudeep Tanwar,
  • Abdullah Alharbi,
  • Amr Tolba,
  • Bogdan-Constantin Neagu and
  • Maria Simona Raboaca

9 June 2022

A fall detection system is vital for the safety of older people, as it contacts emergency services when it detects a person has fallen. There have been various approaches to detect falls, such as using a single tri-axial accelerometer to detect falls...
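
The entry above describes sequence models over tri-axial accelerometer data for fall detection. A minimal PyTorch sketch of such a model follows; the window length, layer sizes, and fall / no-fall labels are assumptions for illustration, not the XAI-Fall configuration.

```python
# Minimal sequence-model sketch over tri-axial accelerometer windows; window
# length, layer sizes, and the binary labels are illustrative assumptions.
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    def __init__(self, n_channels=3, hidden=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, time_steps, 3 axes)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])         # classify from the final hidden state

# One 2-second window sampled at 50 Hz -> 100 time steps of (ax, ay, az).
window = torch.randn(8, 100, 3)
logits = FallLSTM()(window)
print(logits.shape)                       # torch.Size([8, 2])
```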

  • Article
  • Open Access
5,063 Views
31 Pages

Physics-Informed and Explainable Graph Neural Networks for Generalizable Urban Building Energy Modeling

  • Rudai Shan,
  • Hao Ning,
  • Qianhui Xu,
  • Xuehua Su,
  • Mengjin Guo and
  • Xiaohan Jia

11 August 2025

Urban building energy prediction is a critical challenge for sustainable city planning and large-scale retrofit prioritization. However, traditional data-driven models struggle to capture real urban environments’ spatial and morphological compl...

  • Article
  • Open Access
5 Citations
3,455 Views
21 Pages

24 January 2024

Sandstone-hosted uranium deposits are indeed significant sources of uranium resources globally. They are typically found in sedimentary basins and have been extensively explored and exploited in various countries. They play a significant role in meet...

  • Article
  • Open Access
1 Citation
2,290 Views
23 Pages

Explainable Deep Learning Model for ChatGPT-Rephrased Fake Review Detection Using DistilBERT

  • Rania A. AlQadi,
  • Shereen A. Taie,
  • Amira M. Idrees and
  • Esraa Elhariri

Customers heavily depend on reviews for product information. Fake reviews may influence the perception of product quality, making online reviews less effective. ChatGPT’s (GPT-3.5 and GPT-4) ability to generate human-like reviews and responses...

  • Technical Note
  • Open Access
10 Citations
3,671 Views
17 Pages

30 August 2024

Artificial intelligence (AI) has made remarkable progress in recent years in remote sensing applications, including environmental monitoring, crisis management, city planning, and agriculture. However, the critical challenge in utilizing AI models in...

  • Systematic Review
  • Open Access
659 Views
40 Pages

A Systematic Review of Diffusion Models for Medical Image-Based Diagnosis: Methods, Taxonomies, Clinical Integration, Explainability, and Future Directions

  • Mohammad Azad,
  • Nur Mohammad Fahad,
  • Mohaimenul Azam Khan Raiaan,
  • Tanvir Rahman Anik,
  • Md Faraz Kabir Khan,
  • Habib Mahamadou Kélé Toyé and
  • Ghulam Muhammad

Background and Objectives: Diffusion models, as a recent advancement in generative modeling, have become central to high-resolution image synthesis and reconstruction. Their rapid progress has notably shaped computer vision and health informatics, pa...
