Search Results (132)

Search Parameters:
Keywords = (LIME) local interpretable model-agnostic explanations

26 pages, 2624 KiB  
Article
A Transparent House Price Prediction Framework Using Ensemble Learning, Genetic Algorithm-Based Tuning, and ANOVA-Based Feature Analysis
by Mohammed Ibrahim Hussain, Arslan Munir, Mohammad Mamun, Safiul Haque Chowdhury, Nazim Uddin and Muhammad Minoar Hossain
FinTech 2025, 4(3), 33; https://doi.org/10.3390/fintech4030033 - 18 Jul 2025
Viewed by 312
Abstract
House price prediction is crucial in real estate for informed decision-making. This paper presents an automated prediction system that combines genetic algorithms (GA) for feature optimization and Analysis of Variance (ANOVA) for statistical analysis. We apply and compare five ensemble machine learning (ML) models, namely Extreme Gradient Boosting Regression (XGBR), random forest regression (RFR), Categorical Boosting Regression (CBR), Adaptive Boosting Regression (ADBR), and Gradient Boosted Decision Trees Regression (GBDTR), on a comprehensive dataset. We used a primary dataset of 1000 samples with eight features and a secondary dataset of 3865 samples for external validation. Our integrated approach identifies Categorical Boosting with GA (CBRGA) as the best performer, achieving an R2 of 0.9973 and outperforming existing state-of-the-art methods. ANOVA-based analysis further enhances model interpretability and performance by isolating key factors such as square footage and lot size. To ensure robustness and transparency, we conduct 10-fold cross-validation and employ explainable AI techniques such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), providing insights into model decision-making and feature importance. Full article
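
To illustrate the LIME step described above, the sketch below explains a single prediction of a tabular house-price regressor. A scikit-learn GradientBoostingRegressor trained on synthetic data stands in for the paper's GA-tuned CatBoost pipeline, and the feature names are illustrative assumptions, not the study's actual variables.

```python
# Minimal LIME sketch for a tabular house-price regressor. A scikit-learn
# GradientBoostingRegressor on synthetic data stands in for the paper's
# GA-tuned CatBoost pipeline; feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["sqft", "lot_size", "bedrooms", "bathrooms",
                 "age", "garage", "distance_cbd", "condition"]
X = rng.normal(size=(1000, len(feature_names)))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)  # synthetic price signal

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=5)
print(explanation.as_list())  # top local feature contributions for this one prediction
```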

17 pages, 897 KiB  
Article
The Quest for the Best Explanation: Comparing Models and XAI Methods in Air Quality Modeling Tasks
by Thomas Tasioulis, Evangelos Bagkis, Theodosios Kassandros and Kostas Karatzas
Appl. Sci. 2025, 15(13), 7390; https://doi.org/10.3390/app15137390 - 1 Jul 2025
Viewed by 225
Abstract
Air quality (AQ) modeling is at the forefront of estimating pollution levels in areas where spatial representativity is low. Large metropolitan areas in Asia such as Beijing face significant pollution issues due to rapid industrialization and urbanization. AQ nowcasting, especially in dense urban centers like Beijing, is crucial for public health and safety. One of the most popular and accurate modeling methodologies relies on black-box models that fail to explain the phenomena in an interpretable way. This study investigates the performance and interpretability of Explainable AI (XAI) applied with the eXtreme Gradient Boosting (XGBoost) algorithm, employing SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) for PM2.5 nowcasting. Using a SHAP-based technique for dimensionality reduction, we identified the features responsible for 95% of the target variance, allowing us to perform effective feature selection with minimal impact on accuracy. The findings show that SHAP and LIME yielded complementary insights: SHAP provided a high-level view of model behavior, identifying interaction effects that are often overlooked by gain-based metrics such as feature importance, while LIME offered a local perspective, justifying each explanation and providing low-bias estimates of the environmental data values that affect predictions. Our evaluation covered 12 monitoring stations using temporal splits, with and without lagged-feature engineering. Moreover, the evaluation showed that models retained a substantial degree of predictive power (R2 > 0.93) even at reduced complexity. The findings support deploying interpretable yet performant AQ modeling tools in settings where policy interventions cannot depend solely on predictive analytics. Overall, the findings demonstrate the large potential of directly incorporating explainability methods during model development for more equitable and transparent modeling processes. Full article
(This article belongs to the Special Issue Machine Learning and Reasoning for Reliable and Explainable AI)
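
The SHAP-based dimensionality reduction described above can plausibly be read as keeping the smallest feature set whose mean absolute SHAP values cover 95% of the total attribution mass. The sketch below shows that reading on synthetic data with XGBoost; it is an assumption, not the authors' exact procedure.

```python
# One plausible reading of SHAP-based feature selection: keep the smallest set
# of features whose mean |SHAP| values account for 95% of the total attribution.
# Synthetic data; not the authors' exact procedure.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=2000, n_features=20, noise=0.1, random_state=0)
model = xgb.XGBRegressor(n_estimators=200, random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)   # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)            # global mean |SHAP| per feature

order = np.argsort(importance)[::-1]
cumulative = np.cumsum(importance[order]) / importance.sum()
kept = order[: np.searchsorted(cumulative, 0.95) + 1]    # smallest set reaching 95%
print(f"{len(kept)} of {X.shape[1]} features retained:", sorted(kept.tolist()))
```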

29 pages, 4325 KiB  
Article
Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
by Pamela Hermosilla, Sebastián Berríos and Héctor Allende-Cid
Appl. Sci. 2025, 15(13), 7329; https://doi.org/10.3390/app15137329 - 30 Jun 2025
Viewed by 954
Abstract
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme Gradient Boosting (XGBoost) and Attentive Interpretable Tabular Learning (TabNet), using the UNSW-NB15 dataset. XGBoost achieved 97.8% validation accuracy and outperformed TabNet in explanation stability and global coherence. In addition to classification performance, we evaluate the fidelity, consistency, and forensic relevance of the explanations. The results confirm the complementary strengths of SHAP and LIME, supporting their combined use in building transparent, auditable, and trustworthy AI systems in digital forensic applications. Full article
(This article belongs to the Special Issue New Advances in Computer Security and Cybersecurity)
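
Explanation stability, one of the comparison criteria above, can be probed by explaining the same instance several times and measuring rank agreement of the LIME weights. The sketch below does this with a random forest on synthetic data; Spearman correlation is an assumed metric, and the paper's exact stability protocol may differ.

```python
# Probing LIME explanation stability: explain the same instance twice and
# measure rank agreement of the feature weights. Spearman correlation is an
# assumed metric; the paper's stability protocol may differ.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=3000, n_features=15, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = LimeTabularExplainer(X, mode="classification")

def lime_weights():
    exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=15)
    # sort by the (stable) binned feature description so runs align feature-by-feature
    return np.array([w for _, w in sorted(exp.as_list())])

rho, _ = spearmanr(lime_weights(), lime_weights())
print(f"rank agreement between two independent LIME runs: {rho:.3f}")
```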

14 pages, 675 KiB  
Article
Predicting Predisposition to Tropical Diseases in Female Adults Using Risk Factors: An Explainable-Machine Learning Approach
by Kingsley Friday Attai, Constance Amannah, Moses Ekpenyong, Said Baadel, Okure Obot, Daniel Asuquo, Ekerette Attai, Faith-Valentine Uzoka, Emem Dan, Christie Akwaowo and Faith-Michael Uzoka
Information 2025, 16(7), 520; https://doi.org/10.3390/info16070520 - 21 Jun 2025
Viewed by 341
Abstract
Malaria, typhoid fever, respiratory tract infections, and urinary tract infections significantly impact women, especially in remote, resource-constrained settings, due to limited access to quality healthcare and certain risk factors. Most studies have focused on vector control measures, such as insecticide-treated nets and time series analysis, often neglecting emerging yet critical risk factors vital for effectively preventing febrile diseases. We address this gap by investigating the use of machine learning (ML) models, specifically extreme gradient boost and random forest, in predicting adult females’ susceptibility to these diseases based on biological, environmental, and socioeconomic factors. An explainable AI (XAI) technique, local interpretable model-agnostic explanations (LIME), was applied to enhance the transparency and interpretability of the predictive models. This approach provided insights into the models’ decision-making process and identified key risk factors, enabling healthcare professionals to personalize treatment services. Factors such as high cholesterol levels, poor personal hygiene, and exposure to air pollution emerged as significant contributors to disease susceptibility, revealing critical areas for public health intervention in remote and resource-constrained settings. This study demonstrates the effectiveness of integrating XAI with ML in directing health interventions, providing a clearer understanding of risk factors, and efficiently allocating resources for disease prevention and treatment. Full article
(This article belongs to the Section Information Applications)

21 pages, 7576 KiB  
Article
Interpreting Global Terrestrial Water Storage Dynamics and Drivers with Explainable Deep Learning
by Haijun Huang, Xitian Cai, Lu Li, Xiaolu Wu, Zichun Zhao and Xuezhi Tan
Remote Sens. 2025, 17(13), 2118; https://doi.org/10.3390/rs17132118 - 20 Jun 2025
Viewed by 432
Abstract
Sustained reductions in terrestrial water storage (TWS) have been observed globally using Gravity Recovery and Climate Experiment (GRACE) satellite data since 2002. However, the underlying mechanisms remain incompletely understood due to limited record lengths and data discontinuity. Recently, explainable artificial intelligence (XAI) has provided robust tools for unveiling dynamics in complex Earth systems. In this study, we employed a deep learning technique (Long Short-Term Memory network, LSTM) to reconstruct global TWS dynamics, filling gaps in the GRACE record. We then utilized the Local Interpretable Model-agnostic Explanations (LIME) method to uncover the underlying mechanisms driving observed TWS reductions. Our results reveal a consistent decline in the global mean TWS over the past 22 years (2002–2024), primarily influenced by precipitation (17.7%), temperature (16.0%), and evapotranspiration (10.8%). Seasonally, the global average of TWS peaks in April and reaches a minimum in October, mirroring the pattern of snow water equivalent with approximately a one-month lag. Furthermore, TWS variations exhibit significant differences across latitudes and are driven by distinct factors. The largest declines in TWS occur predominantly in high latitudes, driven by rising temperatures and significant snow/ice variability. Mid-latitude regions have experienced considerable TWS losses, influenced by a combination of precipitation, temperature, air pressure, and runoff. In contrast, most low-latitude regions show an increase in TWS, which the model attributes mainly to increased precipitation. Notably, TWS losses are concentrated in coastal areas, snow- and ice-covered regions, and areas experiencing rapid temperature increases, highlighting climate change impacts. This study offers a comprehensive framework for exploring TWS variations using XAI and provides valuable insights into the mechanisms driving TWS changes on a global scale. Full article

21 pages, 23794 KiB  
Article
Towards Faithful Local Explanations: Leveraging SVM to Interpret Black-Box Machine Learning Models
by Jiaxiang Xu, Zhanhao Zhang, Junfei Wang, Biao Ouyang, Benkuan Zhou, Jianxiong Zhao, Hanfang Ge and Bo Xu
Symmetry 2025, 17(6), 950; https://doi.org/10.3390/sym17060950 - 15 Jun 2025
Viewed by 403
Abstract
Although machine learning (ML) models are widely used in many fields, their prediction processes are often hard to understand. This lack of transparency makes it harder for people to trust them, especially in high-stakes fields like healthcare and finance. Human-interpretable explanations for model predictions are crucial in these contexts. While existing local interpretation methods have been proposed, many suffer from low local fidelity, instability, and limited effectiveness when applied to highly nonlinear models. This paper presents SVM-X, a model-agnostic local explanation approach designed to address these challenges. By leveraging the inherent symmetry of the SVM hyperplane, SVM-X precisely captures the local decision boundaries of complex nonlinear models, providing more accurate and stable explanations. Experimental evaluations on the UCI Adult dataset, the Bank Marketing dataset, and the Amazon Product Review dataset demonstrate that SVM-X consistently outperforms state-of-the-art methods like LIME and LEMNA. Notably, SVM-X achieves up to a 27.2% improvement in accuracy. Our work introduces a reliable and interpretable framework for understanding machine learning predictions, offering a promising new direction for future research. Full article
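
SVM-X itself is the authors' contribution and is not reproduced here. The sketch below only illustrates the general local-surrogate idea it builds on: perturb around one instance, label the perturbations with the black-box model, and fit a linear SVM whose coefficients act as a local explanation. Model, data, and perturbation scale are illustrative assumptions.

```python
# Generic local-surrogate illustration (NOT the authors' SVM-X): perturb around
# one instance, label the perturbations with the black-box model, and fit a
# linear SVM whose coefficients act as a local explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Pick an instance near the decision boundary so the local neighborhood is mixed.
proba = black_box.predict_proba(X)[:, 1]
x0 = X[np.argmin(np.abs(proba - 0.5))]

rng = np.random.default_rng(0)
neighborhood = x0 + rng.normal(scale=0.3 * X.std(axis=0), size=(2000, X.shape[1]))
local_labels = black_box.predict(neighborhood)

surrogate = LinearSVC(C=1.0, max_iter=5000).fit(neighborhood, local_labels)
ranking = np.argsort(np.abs(surrogate.coef_[0]))[::-1]
print("locally most influential features:", ranking[:5].tolist())
```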

14 pages, 667 KiB  
Article
MRI-Based Radiomics Ensemble Model for Predicting Radiation Necrosis in Brain Metastasis Patients Treated with Stereotactic Radiosurgery and Immunotherapy
by Yijun Chen, Corbin Helis, Christina Cramer, Michael Munley, Ariel Raimundo Choi, Josh Tan, Fei Xing, Qing Lyu, Christopher Whitlow, Jeffrey Willey, Michael Chan and Yuming Jiang
Cancers 2025, 17(12), 1974; https://doi.org/10.3390/cancers17121974 - 13 Jun 2025
Viewed by 555
Abstract
Background: Radiation therapy is a cornerstone treatment modality for brain metastasis. However, it can result in complications like necrosis, which may lead to significant neurological deficits. This study aims to develop and validate an ensemble model with radiomics to predict radiation necrosis. Method: This study retrospectively collected and analyzed MRI images and clinical information from 209 stereotactic radiosurgery sessions involving 130 patients with brain metastasis. An ensemble model integrating gradient boosting, random forest, decision tree, and support vector machine was developed and validated using selected radiomic features and clinical factors to predict the likelihood of necrosis. Model performance was evaluated and compared with other machine learning algorithms using metrics including the area under the curve (AUC), sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). SHapley Additive exPlanations (SHAP) analysis and local interpretable model-agnostic explanations (LIME) analysis were applied to explain the model’s predictions. Results: The ensemble model achieved strong performance in the validation cohort, with the highest AUC, and consistently outperformed both the individual models and a stacking ensemble. The model demonstrated superior accuracy, generalizability, and reliability in predicting radiation necrosis. SHAP and LIME were used to interpret the predictive model, and both analyses highlighted similar significant factors, enhancing our understanding of the prediction dynamics. Conclusions: The ensemble model using radiomic features exhibited high accuracy and robustness in predicting the occurrence of radiation necrosis. It could serve as a novel and valuable tool to facilitate radiotherapy for patients with brain metastasis. Full article
(This article belongs to the Special Issue Brain Metastases: From Mechanisms to Treatment)
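
As a rough illustration of the ensemble construction described above, the sketch below wires the four named learners into a scikit-learn soft-voting classifier and scores it by AUC. Synthetic data stands in for the radiomic and clinical features, and the split is a simplification of the paper's validation scheme.

```python
# Soft-voting ensemble of the four learners named in the abstract (gradient
# boosting, random forest, decision tree, SVM), scored by AUC. Synthetic data
# stands in for the radiomic and clinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=209, n_features=30, weights=[0.75], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the four learners
).fit(X_tr, y_tr)

print("validation AUC:", round(roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]), 3))
```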

28 pages, 4269 KiB  
Article
XGB-BIF: An XGBoost-Driven Biomarker Identification Framework for Detecting Cancer Using Human Genomic Data
by Veena Ghuriani, Jyotsna Talreja Wassan, Priyal Tripathi and Anshika Chauhan
Int. J. Mol. Sci. 2025, 26(12), 5590; https://doi.org/10.3390/ijms26125590 - 11 Jun 2025
Viewed by 755
Abstract
The human genome has a profound impact on human health and disease detection. Carcinoma (cancer) is one of the prominent diseases that severely affects human health and requires the development of different treatment strategies and targeted therapies based on effective disease detection. Therefore, our research aims to identify biomarkers associated with distinct cancer types (gastric, lung, and breast) using machine learning. In the current study, we have analyzed the human genomic data of gastric cancer, breast cancer, and lung cancer patients using XGB-BIF (i.e., XGBoost-Driven Biomarker Identification Framework for detecting cancer). The proposed framework utilizes feature selection via XGBoost (eXtreme Gradient Boosting), which captures feature interactions efficiently and handles non-linear effects in the genomic data. The research progressed by training XGBoost on the full dataset, ranking the features based on the Gain measure (importance), followed by the classification phase, which employed support vector machines (SVM), logistic regression (LR), and random forest (RF) models for classifying cancer-diseased and non-diseased states. To ensure interpretability and transparency, we also applied SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), enabling the identification of high-impact biomarkers contributing to risk stratification. Biomarker significance is discussed primarily via pathway enrichment and by studying survival analysis (Kaplan–Meier curves, Cox regression) for identified biomarkers to strengthen translational value. Our models achieved high predictive performance, with an accuracy of more than 90%, in classifying genomic data into diseased (cancer) and non-diseased states. Furthermore, we evaluated the models using Cohen’s Kappa statistic, which confirmed strong agreement between predicted and actual risk categories, with Kappa scores ranging from 0.80 to 0.99. Our proposed framework also achieved strong predictions on the METABRIC dataset during external validation, attaining an AUC-ROC of 93%, an accuracy of 79%, and a Kappa of 74%. Through extensive experimentation, XGB-BIF identified the top biomarker genes for different cancer datasets (gastric, lung, and breast). CBX2, CLDN1, SDC2, PGF, FOXS1, ADAMTS18, POLR1B, and PYCR3 were identified as important biomarkers for distinguishing diseased and non-diseased states of gastric cancer; CAVIN2, ADAMTS5, SCARA5, CD300LG, and GIPC2 were identified as important biomarkers for breast cancer; and CLDN18, MYBL2, ASPA, AQP4, FOLR1, and SLC39A8 were identified as important biomarkers for lung cancer. XGB-BIF could be utilized for identifying biomarkers of different cancer types using genetic data, which can further help clinicians in developing targeted therapies for cancer patients. Full article
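
The core XGB-BIF loop as described, ranking features by XGBoost gain and handing the top-ranked ones to a downstream classifier, can be sketched as follows. The synthetic data and the top-20 cut-off are illustrative assumptions, not the paper's settings.

```python
# Sketch of the XGB-BIF idea as described: rank features by XGBoost gain, keep
# the top-ranked ones, and train a downstream classifier on them. The synthetic
# data and the top-20 cut-off are illustrative assumptions.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=200, n_informative=15, random_state=0)

ranker = xgb.XGBClassifier(n_estimators=300, random_state=0).fit(X, y)
gain = ranker.get_booster().get_score(importance_type="gain")  # e.g. {"f12": 3.4, ...}
ranked = sorted(gain, key=gain.get, reverse=True)
top = [int(name[1:]) for name in ranked[:20]]                  # column indices of top-20 features

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("5-fold CV accuracy on selected features:",
      round(cross_val_score(rf, X[:, top], y, cv=5).mean(), 3))
```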

25 pages, 2136 KiB  
Article
A Hybrid Deep Learning Framework for Wind Speed Prediction with Snake Optimizer and Feature Explainability
by Khaled Yousef, Baris Yuce and Allen He
Sustainability 2025, 17(12), 5363; https://doi.org/10.3390/su17125363 - 11 Jun 2025
Viewed by 568
Abstract
Renewable energy, especially wind power, is required to reduce greenhouse gas emissions and fossil fuel use. Variable wind patterns and weather make wind energy integration into modern grids difficult. Energy trading, resource planning, and grid stability demand accurate forecasting. This study proposes a hybrid deep learning framework that improves forecasting accuracy and interpretability by combining advanced deep learning (DL) architectures, explainable artificial intelligence (XAI), and metaheuristic optimization. The intricate temporal relationships in wind speed data were captured by training Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), LSTM-GRU hybrid, and Bidirectional LSTM-GRU following data preprocessing and normalization. To enhance transparency, Local Interpretable Model-Agnostic Explanations (LIMEs) were applied, revealing key time-step contributions across three urban datasets (Los Angeles, San Francisco, and San Diego). The framework further incorporates the Snake Optimizer Algorithm (SOA) to optimize hyperparameters such as LSTM units, dropout rate, learning rate, and batch size, ensuring improved training efficiency and reduced forecast error. The model predicted 2020–2040 wind speeds using rolling forecasting; the SOA-optimized LSTM model outperformed baseline and hybrid models, achieving low MSE, RMSE, and MAE and high R2 scores. This proves its accuracy, stability, and adaptability across climates, supporting wind energy prediction and sustainable energy planning. Full article

24 pages, 4055 KiB  
Article
Privacy-Preserving Interpretability: An Explainable Federated Learning Model for Predictive Maintenance in Sustainable Manufacturing and Industry 4.0
by Hamad Mohamed Hamdan Alzari Alshkeili, Saif Jasim Almheiri and Muhammad Adnan Khan
AI 2025, 6(6), 117; https://doi.org/10.3390/ai6060117 - 6 Jun 2025
Viewed by 1173
Abstract
Background: Industry 4.0’s development requires digitalized manufacturing through Predictive Maintenance (PdM) because such practices decrease equipment failures and operational disruptions. However, its effectiveness is hindered by three key challenges: (1) data confidentiality, as traditional methods rely on centralized data sharing, raising concerns about security and regulatory compliance; (2) a lack of interpretability, where opaque AI models provide limited transparency, making it difficult for operators to trust and act on failure predictions; and (3) adaptability issues, as many existing solutions struggle to maintain a consistent performance across diverse industrial environments. Addressing these challenges requires a privacy-preserving, interpretable, and adaptive Artificial Intelligence (AI) model that ensures secure, reliable, and transparent PdM while meeting industry standards and regulatory requirements. Methods: Explainable AI (XAI) plays a crucial role in enhancing transparency and trust in PdM models by providing interpretable insights into failure predictions. Meanwhile, Federated Learning (FL) ensures privacy-preserving, decentralized model training, allowing multiple industrial sites to collaborate without sharing sensitive operational data. This proposed research developed a sustainable privacy-preserving Explainable FL (XFL) model that integrates XAI techniques like Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) into an FL structure to improve PdM’s security and interpretability capabilities. Results: The proposed XFL model enables industrial operators to interpret, validate, and refine AI-driven maintenance strategies while ensuring data privacy, accuracy, and regulatory compliance. Conclusions: This model significantly improves failure prediction, reduces unplanned downtime, and strengthens trust in AI-driven decision-making. The simulation results confirm its high reliability, achieving 98.15% accuracy with a minimal 1.85% miss rate, demonstrating its effectiveness as a scalable, secure, and interpretable solution for PdM in Industry 4.0. Full article
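
The authors' XFL model is not reproduced here; the sketch below shows only the FedAvg-style core of federated learning that it builds on: clients train local models on their own data shards and a server averages the parameters without ever receiving the raw data. Simple linear models and a synthetic dataset are used for brevity.

```python
# FedAvg-style core of federated learning (not the authors' XFL model): clients
# train local linear models on their own shards, and the server averages the
# parameters without ever receiving the raw data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
client_shards = np.array_split(np.arange(1200), 3)        # three simulated industrial sites
local_models = [
    SGDClassifier(loss="log_loss", random_state=0).fit(X[idx], y[idx])
    for idx in client_shards
]

# Server-side aggregation: unweighted average of coefficients and intercepts.
global_coef = np.mean([m.coef_ for m in local_models], axis=0).ravel()
global_intercept = float(np.mean([m.intercept_ for m in local_models]))

X_test, y_test = X[1200:], y[1200:]                       # held-out evaluation split
pred = (X_test @ global_coef + global_intercept > 0).astype(int)
print("aggregated model accuracy:", round((pred == y_test).mean(), 3))
```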

25 pages, 1344 KiB  
Article
Customer-Centric Decision-Making with XAI and Counterfactual Explanations for Churn Mitigation
by Simona-Vasilica Oprea and Adela Bâra
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 129; https://doi.org/10.3390/jtaer20020129 - 3 Jun 2025
Viewed by 952
Abstract
In this paper, we propose a methodology designed to deliver actionable insights that help businesses retain customers. While Machine Learning (ML) techniques predict whether a customer is likely to churn, this alone is not enough. Explainable Artificial Intelligence (XAI) methods, such as SHapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), highlight the features influencing the prediction, but businesses need strategies to prevent churn. Counterfactual (CF) explanations bridge this gap by identifying the minimal changes in the business–customer relationship that could shift an outcome from churn to retention, offering steps to enhance customer loyalty and reduce losses to competitors. These explanations might not fully align with business constraints; however, alternative scenarios can be developed to achieve the same objective. Among the six classifiers used to detect churn cases, the Balanced Random Forest classifier was selected for its superior performance, achieving the highest recall score of 0.72. After classification, Diverse Counterfactual Explanations with ML (DiCEML) through Mixed-Integer Linear Programming (MILP) is applied to obtain the required changes in the features, as well as in the range permitted by the business itself. We further apply DiCEML to uncover potential biases within the model, calculating the disparate impact of some features. Full article
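
A minimal counterfactual sketch with the open-source dice-ml library is shown below. It does not reproduce the paper's DiCEML/MILP formulation or business constraints, and the churn features ("tenure", "monthly_charges", "support_calls") and toy data are hypothetical stand-ins.

```python
# Minimal counterfactual sketch with the dice-ml library. The paper's DiCEML /
# MILP formulation and business constraints are not reproduced; the churn
# features below are hypothetical stand-ins.
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "tenure":          [2, 30, 5, 48, 1, 24, 60, 3],
    "monthly_charges": [90, 40, 85, 35, 95, 50, 30, 88],
    "support_calls":   [5, 1, 4, 0, 6, 2, 0, 5],
    "churn":           [1, 0, 1, 0, 1, 0, 0, 1],
})
clf = RandomForestClassifier(random_state=0).fit(df.drop(columns="churn"), df["churn"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=["tenure", "monthly_charges", "support_calls"],
                    outcome_name="churn")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

query = df.drop(columns="churn").iloc[[0]]                # a customer predicted to churn
cfs = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)        # minimal changes that flip the outcome
```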

29 pages, 3354 KiB  
Article
Enhancing Heart Attack Prediction: Feature Identification from Multiparametric Cardiac Data Using Explainable AI
by Muhammad Waqar, Muhammad Bilal Shahnawaz, Sajid Saleem, Hassan Dawood, Usman Muhammad and Hussain Dawood
Algorithms 2025, 18(6), 333; https://doi.org/10.3390/a18060333 - 2 Jun 2025
Viewed by 967
Abstract
Heart attack is a leading cause of mortality, necessitating timely and precise diagnosis to improve patient outcomes. However, timely diagnosis remains a challenge due to the complex and nonlinear relationships between clinical indicators. Machine learning (ML) and deep learning (DL) models have the potential to predict cardiac conditions by identifying complex patterns within data, but their “black-box” nature restricts interpretability, making it challenging for healthcare professionals to comprehend the reasoning behind predictions. This lack of interpretability limits their clinical trust and adoption. The proposed approach addresses this limitation by integrating predictive modeling with Explainable AI (XAI) to ensure both accuracy and transparency in clinical decision-making. The proposed study enhances heart attack prediction using the University of California, Irvine (UCI) dataset, which includes various heart analysis parameters collected through electrocardiogram (ECG) sensors, blood pressure monitors, and biochemical analyzers. Due to class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied to enhance the representation of the minority class. After preprocessing, various ML algorithms were employed, among which Artificial Neural Networks (ANN) achieved the highest performance with 96.1% accuracy, 95.7% recall, and 95.7% F1-score. To enhance the interpretability of ANN, two XAI techniques, specifically SHapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), were utilized. This study incrementally benchmarks SMOTE, ANN, and XAI techniques such as SHAP and LIME on standardized cardiac datasets, emphasizing clinical interpretability and providing a reproducible framework for practical healthcare implementation. These techniques enable healthcare practitioners to understand the model’s decisions, identify key predictive features, and enhance clinical judgment. By bridging the gap between AI-driven performance and practical medical implementation, this work contributes to making heart attack prediction both highly accurate and interpretable, facilitating its adoption in real-world clinical settings. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
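
The resampling-plus-neural-network step described above can be sketched as follows, with scikit-learn's MLPClassifier standing in for the paper's ANN and a synthetic imbalanced set standing in for the UCI heart data.

```python
# SMOTE oversampling followed by a small neural network. scikit-learn's
# MLPClassifier stands in for the paper's ANN, and a synthetic imbalanced set
# stands in for the UCI heart data.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=13, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set keeps its natural imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
ann.fit(X_res, y_res)
print(classification_report(y_te, ann.predict(X_te), digits=3))
```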

16 pages, 1085 KiB  
Systematic Review
Explainable Artificial Intelligence in Radiological Cardiovascular Imaging—A Systematic Review
by Matteo Haupt, Martin H. Maurer and Rohit Philip Thomas
Diagnostics 2025, 15(11), 1399; https://doi.org/10.3390/diagnostics15111399 - 31 May 2025
Cited by 1 | Viewed by 1035
Abstract
Background: Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the “black box” nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. Methods: A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques—such as Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAPs), Local Interpretable Model-Agnostic Explanations (LIMEs), or saliency maps—to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. Results: A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-rays (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. Conclusions: XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on the robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care. Full article
(This article belongs to the Special Issue Latest Advances and Prospects in Cardiovascular Imaging)

28 pages, 2604 KiB  
Article
A Hybrid Approach to Credit Risk Assessment Using Bill Payment Habits Data and Explainable Artificial Intelligence
by Cem Bulut and Emel Arslan
Appl. Sci. 2025, 15(10), 5723; https://doi.org/10.3390/app15105723 - 20 May 2025
Viewed by 671
Abstract
Credit risk is one of the most important issues in the rapidly growing and developing finance sector. This study utilized a dataset containing real information about the bill payments of individuals who made transactions with a payment institution operating in Turkey. First, the transactions in the dataset were analyzed by bill type and by individual, and features reflecting payment habits were extracted. Real credit scores generated by the Credit Registry Office for these individuals were used as the target class. The dataset is a multi-class, imbalanced, alternative dataset, so it was prepared for analysis using data cleaning, feature selection, and sampling techniques and then classified using various classification and evaluation methods. The best results were obtained with a model consisting of the ANOVA F-test, SMOTE, and Extra Trees algorithms, which achieved 80.49% accuracy, 79.89% precision, and a 97.04% AUC. These results are strong for an alternative dataset with 10 classes. The model was made explainable and interpretable using the XAI techniques LIME and SHAP. This study presents a new hybrid model for credit risk assessment based on a multi-class, imbalanced alternative dataset and machine learning. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
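
The combination reported as best above (ANOVA F-test selection, SMOTE, Extra Trees) can be wired into an imbalanced-learn pipeline so that oversampling happens only inside training folds. The sketch below uses a smaller synthetic multi-class dataset in place of the payment-habit features; the class count and k value are illustrative assumptions.

```python
# The combination reported as best (ANOVA F-test selection, SMOTE, Extra Trees)
# wired into an imbalanced-learn Pipeline so oversampling happens only inside
# training folds. A smaller synthetic multi-class set stands in for the
# payment-habit features.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           n_classes=5, n_clusters_per_class=1,
                           weights=[0.5, 0.2, 0.15, 0.1, 0.05], random_state=0)

pipe = Pipeline(steps=[
    ("anova", SelectKBest(score_func=f_classif, k=20)),
    ("smote", SMOTE(random_state=0)),
    ("clf", ExtraTreesClassifier(n_estimators=300, random_state=0)),
])
print("5-fold CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```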

27 pages, 1846 KiB  
Article
Vision-Language Model-Based Local Interpretable Model-Agnostic Explanations Analysis for Explainable In-Vehicle Controller Area Network Intrusion Detection
by Jaeseung Lee and Jehyeok Rew
Sensors 2025, 25(10), 3020; https://doi.org/10.3390/s25103020 - 10 May 2025
Viewed by 786
Abstract
The Controller Area Network (CAN) facilitates efficient communication among vehicle components. While it ensures fast and reliable data transmission, its lightweight design makes it susceptible to data manipulation in the absence of security layers. To address these vulnerabilities, machine learning (ML)-based intrusion detection systems (IDS) have been developed and shown to be effective in identifying anomalous CAN traffic. However, these models often function as black boxes, offering limited transparency into their decision-making processes, which hinders trust in safety-critical environments. To overcome these limitations, this paper proposes a novel method that combines Local Interpretable Model-agnostic Explanations (LIME) with a vision-language model (VLM) to generate detailed textual interpretations of an ML-based CAN IDS. This integration mitigates the challenges of visual-only explanations in traditional XAI and enhances the intuitiveness of IDS outputs. By leveraging the multimodal reasoning capabilities of VLMs, the proposed method bridges the gap between visual and textual interpretability. The method supports both global and local explanations by analyzing feature importance with LIME and translating results into human-readable narratives via VLM. Experiments using a publicly available CAN intrusion detection dataset demonstrate that the proposed method provides coherent, text-based explanations, thereby improving interpretability and end-user trust. Full article
(This article belongs to the Special Issue AI-Based Intrusion Detection Techniques for Vehicle Networks)