Search Results (61)

Search Parameters:
Keywords = explainable AI (SHAP, LIME)

18 pages, 1752 KiB  
Systematic Review
Beyond Post hoc Explanations: A Comprehensive Framework for Accountable AI in Medical Imaging Through Transparency, Interpretability, and Explainability
by Yashbir Singh, Quincy A. Hathaway, Varekan Keishing, Sara Salehi, Yujia Wei, Natally Horvat, Diana V. Vera-Garcia, Ashok Choudhary, Almurtadha Mula Kh, Emilio Quaia and Jesper B Andersen
Bioengineering 2025, 12(8), 879; https://doi.org/10.3390/bioengineering12080879 - 15 Aug 2025
Abstract
The integration of artificial intelligence (AI) in medical imaging has revolutionized diagnostic capabilities, yet the black-box nature of deep learning models poses significant challenges for clinical adoption. Current explainable AI (XAI) approaches, including SHAP, LIME, and Grad-CAM, predominantly focus on post hoc explanations that may inadvertently undermine clinical decision-making by providing misleading confidence in AI outputs. This paper presents a systematic review and meta-analysis of 67 studies (covering 23 radiology, 19 pathology, and 25 ophthalmology applications) evaluating XAI fidelity, stability, and performance trade-offs across medical imaging modalities. From 847 initially identified studies, our meta-analysis reveals that LIME achieves superior fidelity (0.81, 95% CI: 0.78–0.84) compared to SHAP (0.38, 95% CI: 0.35–0.41) and Grad-CAM (0.54, 95% CI: 0.51–0.57) across all modalities. Post hoc explanations demonstrated poor stability under noise perturbation, with SHAP showing 53% degradation in ophthalmology applications (ρ = 0.42 at 10% noise) compared to 11% in radiology (ρ = 0.89). We demonstrate a consistent 5–7% AUC performance penalty for interpretable models but identify modality-specific stability patterns suggesting that tailored XAI approaches are necessary. Based on these empirical findings, we propose a comprehensive three-pillar accountability framework that prioritizes transparency in model development, interpretability in architecture design, and a cautious deployment of post hoc explanations with explicit uncertainty quantification. This approach offers a pathway toward genuinely accountable AI systems that enhance rather than compromise clinical decision-making quality and patient safety.
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Medical Imaging)
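
As a rough illustration of the stability analysis summarized above, the sketch below perturbs inputs with 10% Gaussian noise and reports the Spearman rank correlation between clean and perturbed SHAP attributions. The synthetic data, model choice, and noise protocol are assumptions for illustration, not the review's actual evaluation pipeline.

```python
# Sketch: quantifying explanation stability under input noise, in the spirit of the
# rank-correlation (rho) comparisons reported above. Synthetic data and the 10% noise
# level are illustrative assumptions, not the review's protocol.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
X_noisy = X + np.random.default_rng(0).normal(0.0, 0.1 * X.std(axis=0), X.shape)

phi_clean = explainer.shap_values(X)       # (n_samples, n_features) for a binary GBC
phi_noisy = explainer.shap_values(X_noisy)

# Per-sample Spearman correlation between clean and perturbed attribution rankings.
rhos = [spearmanr(a, b).correlation for a, b in zip(phi_clean, phi_noisy)]
print(f"mean attribution stability (Spearman rho): {np.mean(rhos):.2f}")
```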

16 pages, 1471 KiB  
Article
Leveraging Machine Learning Techniques to Predict Cardiovascular Heart Disease
by Remzi Başar, Öznur Ocak, Alper Erturk and Marcelle de la Roche
Information 2025, 16(8), 639; https://doi.org/10.3390/info16080639 - 27 Jul 2025
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death globally, underscoring the urgent need for data-driven early diagnostic tools. This study proposes a multilayer artificial neural network (ANN) model for heart disease prediction, developed using a real-world clinical dataset comprising 13,981 patient records. Implemented on the Orange data mining platform, the ANN was trained using backpropagation and validated through 10-fold cross-validation. Dimensionality reduction via principal component analysis (PCA) enhanced computational efficiency, while Shapley additive explanations (SHAP) were used to interpret model outputs. Despite achieving 83.4% accuracy and high specificity, the model exhibited poor sensitivity to disease cases, identifying only 76 of 2233 positive samples, with a Matthews correlation coefficient (MCC) of 0.058. Comparative benchmarks showed that random forest and support vector machines significantly outperformed the ANN in terms of discrimination (AUC up to 91.6%). SHAP analysis revealed serum creatinine, diabetes, and hemoglobin levels to be the dominant predictors. To address the current study’s limitations, future work will explore LIME, Grad-CAM, and ensemble techniques like XGBoost to improve interpretability and balance. This research emphasizes the importance of explainability, data representativeness, and robust evaluation in the development of clinically reliable AI tools for heart disease detection.
(This article belongs to the Special Issue Information Systems in Healthcare)
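
The gap between 83.4% accuracy and an MCC of 0.058 is a class-imbalance effect; the sketch below reproduces it with a hypothetical confusion matrix (the study's actual matrix is not given in the abstract) in which only 76 of 2233 positives are detected.

```python
# Sketch: why high accuracy can mask near-random minority-class detection.
# The confusion-matrix counts below are hypothetical, chosen only to mimic a heavily
# imbalanced 13,981-record dataset; they are not the study's actual figures.
import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef, recall_score

tn, fp, fn, tp = 11600, 148, 2157, 76   # hypothetical: 76 of 2233 positives detected
y_true = np.array([0] * (tn + fp) + [1] * (fn + tp))
y_pred = np.array([0] * tn + [1] * fp + [0] * fn + [1] * tp)

print(f"accuracy:    {accuracy_score(y_true, y_pred):.3f}")    # dominated by the majority class
print(f"sensitivity: {recall_score(y_true, y_pred):.3f}")      # fraction of positives detected
print(f"MCC:         {matthews_corrcoef(y_true, y_pred):.3f}") # balanced view of all four cells
```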

31 pages, 2148 KiB  
Article
Supporting Reflective AI Use in Education: A Fuzzy-Explainable Model for Identifying Cognitive Risk Profiles
by Gabriel Marín Díaz
Educ. Sci. 2025, 15(7), 923; https://doi.org/10.3390/educsci15070923 - 18 Jul 2025
Abstract
Generative AI tools are becoming increasingly common in education. They make many tasks easier, but they also raise questions about how students interact with information and whether their ability to think critically might be affected. Although these tools are now part of many learning processes, we still do not fully understand how they influence cognitive behavior or digital maturity. This study proposes a model to help identify different user profiles based on how they engage with AI in educational contexts. The approach combines fuzzy clustering, the Analytic Hierarchy Process (AHP), and explainable AI techniques (SHAP and LIME). It focuses on five dimensions: how AI is used, how users verify information, the cognitive effort involved, decision-making strategies, and reflective behavior. The model was tested on data from 1273 users, revealing three main types of profiles, from users who are highly dependent on automation to more autonomous and critical users. The classification was validated with XGBoost, achieving over 99% accuracy. The explainability analysis helped us understand what factors most influenced each profile. Overall, this framework offers practical insight for educators and institutions looking to promote more responsible and thoughtful use of AI in learning.
(This article belongs to the Special Issue Generative AI in Education: Current Trends and Future Directions)
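
For readers unfamiliar with the AHP step, the sketch below derives weights for the five dimensions from a pairwise-comparison matrix via its principal eigenvector and checks consistency; the comparison values are illustrative assumptions, not the study's elicited judgments.

```python
# Sketch: AHP weighting via the principal eigenvector of a pairwise-comparison matrix.
# The comparison values below are illustrative assumptions on Saaty's 1-9 scale.
import numpy as np

dims = ["AI use", "verification", "cognitive effort", "decision strategy", "reflection"]
# A[i, j] = how much more important dimension i is judged to be than dimension j.
A = np.array([
    [1,   1/3, 1/2, 1,   1/4],
    [3,   1,   2,   3,   1/2],
    [2,   1/2, 1,   2,   1/3],
    [1,   1/3, 1/2, 1,   1/4],
    [4,   2,   3,   4,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio CI / RI, using Saaty's random index for n = 5.
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
cr = ci / 1.12
for d, w in zip(dims, weights):
    print(f"{d:18s} {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (< 0.10 is conventionally acceptable)")
```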

26 pages, 2624 KiB  
Article
A Transparent House Price Prediction Framework Using Ensemble Learning, Genetic Algorithm-Based Tuning, and ANOVA-Based Feature Analysis
by Mohammed Ibrahim Hussain, Arslan Munir, Mohammad Mamun, Safiul Haque Chowdhury, Nazim Uddin and Muhammad Minoar Hossain
FinTech 2025, 4(3), 33; https://doi.org/10.3390/fintech4030033 - 18 Jul 2025
Abstract
House price prediction is crucial in real estate for informed decision-making. This paper presents an automated prediction system that combines genetic algorithms (GA) for feature optimization and Analysis of Variance (ANOVA) for statistical analysis. We apply and compare five ensemble machine learning (ML) models, namely Extreme Gradient Boosting Regression (XGBR), random forest regression (RFR), Categorical Boosting Regression (CBR), Adaptive Boosting Regression (ADBR), and Gradient Boosted Decision Trees Regression (GBDTR), on a comprehensive dataset. We used a primary dataset of 1000 samples with eight features and a secondary dataset of 3865 samples for external validation. Our integrated approach identifies Categorical Boosting with GA (CBRGA) as the best performer, achieving an R2 of 0.9973 and outperforming existing state-of-the-art methods. ANOVA-based analysis further enhances model interpretability and performance by isolating key factors such as square footage and lot size. To ensure robustness and transparency, we conduct 10-fold cross-validation and employ explainable AI techniques such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), providing insights into model decision-making and feature importance.
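
A minimal sketch of genetic-algorithm feature selection wrapped around a gradient-boosted regressor is shown below; the synthetic data, population size, and generation count are assumptions, and the paper's GA additionally tunes model hyperparameters.

```python
# Sketch: a minimal genetic-algorithm wrapper for feature selection around a
# gradient-boosted regressor. Population size, generations, and the synthetic data
# are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=42)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    model = XGBRegressor(n_estimators=200, max_depth=3, verbosity=0)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=5, scoring="r2").mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # binary chromosomes = feature masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # truncation selection: keep top half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.1                 # 10% per-gene mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

final_scores = [fitness(ind) for ind in pop]
best = pop[int(np.argmax(final_scores))]
print("selected feature indices:", np.flatnonzero(best), "CV R^2:", round(max(final_scores), 4))
```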

17 pages, 897 KiB  
Article
The Quest for the Best Explanation: Comparing Models and XAI Methods in Air Quality Modeling Tasks
by Thomas Tasioulis, Evangelos Bagkis, Theodosios Kassandros and Kostas Karatzas
Appl. Sci. 2025, 15(13), 7390; https://doi.org/10.3390/app15137390 - 1 Jul 2025
Abstract
Air quality (AQ) modeling is at the forefront of estimating pollution levels in areas where spatial representativity is low. Large metropolitan areas in Asia such as Beijing face significant pollution issues due to rapid industrialization and urbanization, and AQ nowcasting in such dense urban centers is crucial for public health and safety. One of the most popular and accurate modeling methodologies relies on black-box models that fail to explain the phenomena in an interpretable way. This study investigates the performance and interpretability of Explainable AI (XAI) applied with the eXtreme Gradient Boosting (XGBoost) algorithm, employing SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) for PM2.5 nowcasting. Using a SHAP-based technique for dimensionality reduction, we identified the features responsible for 95% of the target variance, allowing us to perform effective feature selection with minimal impact on accuracy. The findings show that SHAP and LIME supported complementary insights: SHAP provided a high-level view of model behavior, identifying interaction effects that gain-based feature-importance metrics often overlook, while LIME added justified local explanations, providing low-bias estimates of the environmental feature values that affect individual predictions. Our evaluation covered 12 monitoring stations using temporal splits, with and without lagged-feature engineering. Models retained a substantial degree of predictive power (R2 > 0.93) even at reduced complexity. The findings provide evidence for deploying interpretable and performant AQ modeling tools where policy interventions cannot rely solely on predictive analytics, and demonstrate the large potential of incorporating explainability methods directly during model development for more transparent modeling processes.
(This article belongs to the Special Issue Machine Learning and Reasoning for Reliable and Explainable AI)
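
The SHAP-based reduction step can be sketched as keeping the smallest feature subset whose mean |SHAP| reaches the 95% threshold; note that this reads the abstract's 95% criterion as attribution share, which is an assumption, and uses synthetic data rather than the Beijing monitoring records.

```python
# Sketch: SHAP-guided feature reduction - keep the smallest feature set whose mean |SHAP|
# covers 95% of total attribution mass. The 95% threshold is interpreted here as
# attribution share; the paper's exact selection protocol may differ.
import numpy as np
import shap
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=600, n_features=20, n_informative=6, random_state=1)
model = XGBRegressor(n_estimators=300, max_depth=4, verbosity=0).fit(X, y)

phi = shap.TreeExplainer(model).shap_values(X)         # (n_samples, n_features)
importance = np.abs(phi).mean(axis=0)                   # global mean |SHAP| per feature
order = np.argsort(importance)[::-1]
cumulative = np.cumsum(importance[order]) / importance.sum()
keep = order[: np.searchsorted(cumulative, 0.95) + 1]   # smallest prefix reaching 95%

print(f"kept {len(keep)} of {X.shape[1]} features:", sorted(keep.tolist()))
reduced = XGBRegressor(n_estimators=300, max_depth=4, verbosity=0).fit(X[:, keep], y)
print("R^2 full vs reduced:", round(model.score(X, y), 3), round(reduced.score(X[:, keep], y), 3))
```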

28 pages, 1969 KiB  
Article
A Fuzzy-XAI Framework for Customer Segmentation and Risk Detection: Integrating RFM, 2-Tuple Modeling, and Strategic Scoring
by Gabriel Marín Díaz
Mathematics 2025, 13(13), 2141; https://doi.org/10.3390/math13132141 - 30 Jun 2025
Abstract
This article presents an interpretable framework for customer segmentation and churn risk detection, integrating fuzzy clustering, explainable AI (XAI), and strategic scoring. The process began with Fuzzy C-Means (FCM) applied to normalized RFM indicators (Recency, Frequency, Monetary), which were then mapped to a 2-tuple linguistic scale to enhance semantic interpretability. Cluster memberships and centroids were analyzed to identify distinct behavioral patterns. An XGBoost classifier was trained to validate the coherence of the fuzzy segments, while SHAP and LIME provided global and local explanations for the classification decisions. Following segmentation, an AHP-based strategic score was computed for each customer, using weights derived from pairwise comparisons reflecting organizational priorities. These scores were also translated into the 2-tuple domain, reinforcing interpretability. The model then identified customers at risk of disengagement, defined by a combination of low Recency, high Frequency and Monetary values, and a low AHP score. Based on Recency thresholds, customers were classified as Active, Latent, or Probable Churn. A second XGBoost model was applied to predict this risk level, with SHAP used to explain its predictive behavior. Overall, the proposed framework integrated fuzzy logic, semantic representation, and explainable AI to support actionable, transparent, and human-centered customer analytics.
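
A minimal sketch of the input side of such a pipeline, deriving normalized RFM indicators from a transaction log with pandas, is shown below; column names and the toy records are illustrative assumptions.

```python
# Sketch: building normalized RFM indicators from a transaction log, the input to the
# fuzzy segmentation pipeline described above. Column names and toy data are assumptions.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "date": pd.to_datetime(["2025-01-05", "2025-03-20", "2024-11-02",
                            "2025-02-14", "2025-03-28", "2024-09-30"]),
    "amount": [120.0, 80.0, 35.0, 60.0, 45.0, 300.0],
})
snapshot = tx["date"].max() + pd.Timedelta(days=1)

rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),   # days since last purchase
    frequency=("date", "count"),                              # number of purchases
    monetary=("amount", "sum"),                               # total spend
)
rfm_norm = (rfm - rfm.min()) / (rfm.max() - rfm.min())        # min-max scaling to [0, 1]
print(rfm_norm.round(2))
```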

29 pages, 4325 KiB  
Article
Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
by Pamela Hermosilla, Sebastián Berríos and Héctor Allende-Cid
Appl. Sci. 2025, 15(13), 7329; https://doi.org/10.3390/app15137329 - 30 Jun 2025
Abstract
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme Gradient Boosting (XGBoost) and Attentive Interpretable Tabular Learning (TabNet), using the UNSW-NB15 dataset. XGBoost achieved 97.8% validation accuracy and outperformed TabNet in explanation stability and global coherence. In addition to classification performance, we evaluate the fidelity, consistency, and forensic relevance of the explanations. The results confirm the complementary strengths of SHAP and LIME, supporting their combined use in building transparent, auditable, and trustworthy AI systems in digital forensic applications.
(This article belongs to the Special Issue New Advances in Computer Security and Cybersecurity)
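
A minimal sketch of the SHAP/LIME comparison on a single XGBoost prediction is shown below; synthetic tabular data stands in for UNSW-NB15, and the feature and class names are placeholders.

```python
# Sketch: complementary SHAP (exact for trees) and LIME (local surrogate) views of one
# XGBoost prediction. Synthetic data stands in for UNSW-NB15; names are placeholders.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

feature_names = [f"f{i}" for i in range(10)]
X, y = make_classification(n_samples=1000, n_features=10, random_state=7)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss").fit(X, y)

i = 0  # instance to explain
shap_vals = shap.TreeExplainer(model).shap_values(X[i:i + 1])[0]
print("SHAP top features:",
      [feature_names[j] for j in np.argsort(np.abs(shap_vals))[::-1][:3]])

lime_exp = LimeTabularExplainer(X, feature_names=feature_names,
                                class_names=["normal", "attack"], mode="classification")
print("LIME top features:",
      lime_exp.explain_instance(X[i], model.predict_proba, num_features=3).as_list())
```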

15 pages, 653 KiB  
Article
Optimizing Solar Radiation Prediction with ANN and Explainable AI-Based Feature Selection
by Ibrahim Al-Shourbaji and Abdalla Alameen
Technologies 2025, 13(7), 263; https://doi.org/10.3390/technologies13070263 - 20 Jun 2025
Abstract
Reliable and accurate solar radiation (SR) prediction is crucial for renewable energy development amid a growing energy crisis. Machine learning (ML) models are increasingly recognized for their ability to provide accurate and efficient solutions to SR prediction challenges. This paper presents an Artificial Neural Network (ANN) model optimized using feature selection techniques based on Explainable AI (XAI) methods to enhance SR prediction performance. The developed ANN model is evaluated using a publicly available SR dataset, and its prediction performance is compared with five other ML models. The results indicate that the ANN model surpasses the other models, confirming its effectiveness for SR prediction. Two XAI techniques, LIME and SHAP, are then used to explain the best-performing ANN model and reduce its complexity by selecting the most significant features. The findings show that prediction performance is improved after applying the XAI methods, achieving a lower MAE of 0.0024, an RMSE of 0.0111, a MAPE of 0.4016, an RMSER of 0.0393, a higher R2 score of 0.9980, and a PC of 0.9966. This study demonstrates the significant potential of XAI-driven feature selection to create more efficient and accurate ANN models for SR prediction.
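
The evaluation step can be sketched as fitting a small ANN regressor and reporting the abstract's metrics; synthetic data is used, and "PC" is read here as the Pearson correlation coefficient, which is an assumption.

```python
# Sketch: the evaluation step - fit a small ANN regressor and report MAE, RMSE, MAPE,
# R^2, and Pearson correlation. Synthetic data only; "PC" read as Pearson correlation
# is an assumption, not confirmed by the abstract.
import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import make_regression
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=800, n_features=6, noise=5.0, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=3).fit(X_tr, y_tr)
pred = ann.predict(X_te)

print(f"MAE : {mean_absolute_error(y_te, pred):.4f}")
print(f"RMSE: {np.sqrt(mean_squared_error(y_te, pred)):.4f}")
print(f"MAPE: {mean_absolute_percentage_error(y_te, pred):.4f}")
print(f"R^2 : {r2_score(y_te, pred):.4f}")
print(f"PC  : {pearsonr(y_te, pred)[0]:.4f}")
```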

24 pages, 4055 KiB  
Article
Privacy-Preserving Interpretability: An Explainable Federated Learning Model for Predictive Maintenance in Sustainable Manufacturing and Industry 4.0
by Hamad Mohamed Hamdan Alzari Alshkeili, Saif Jasim Almheiri and Muhammad Adnan Khan
AI 2025, 6(6), 117; https://doi.org/10.3390/ai6060117 - 6 Jun 2025
Abstract
Background: Industry 4.0’s development requires digitalized manufacturing through Predictive Maintenance (PdM) because such practices decrease equipment failures and operational disruptions. However, its effectiveness is hindered by three key challenges: (1) data confidentiality, as traditional methods rely on centralized data sharing, raising concerns about security and regulatory compliance; (2) a lack of interpretability, where opaque AI models provide limited transparency, making it difficult for operators to trust and act on failure predictions; and (3) adaptability issues, as many existing solutions struggle to maintain a consistent performance across diverse industrial environments. Addressing these challenges requires a privacy-preserving, interpretable, and adaptive Artificial Intelligence (AI) model that ensures secure, reliable, and transparent PdM while meeting industry standards and regulatory requirements. Methods: Explainable AI (XAI) plays a crucial role in enhancing transparency and trust in PdM models by providing interpretable insights into failure predictions. Meanwhile, Federated Learning (FL) ensures privacy-preserving, decentralized model training, allowing multiple industrial sites to collaborate without sharing sensitive operational data. This research developed a sustainable, privacy-preserving Explainable FL (XFL) model that integrates XAI techniques like Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) into an FL structure to improve PdM’s security and interpretability capabilities. Results: The proposed XFL model enables industrial operators to interpret, validate, and refine AI-driven maintenance strategies while ensuring data privacy, accuracy, and regulatory compliance. Conclusions: This model significantly improves failure prediction, reduces unplanned downtime, and strengthens trust in AI-driven decision-making. The simulation results confirm its high reliability, achieving 98.15% accuracy with a minimal 1.85% miss rate, demonstrating its effectiveness as a scalable, secure, and interpretable solution for PdM in Industry 4.0.
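
The federated backbone of such a model reduces to the FedAvg aggregation step sketched below, in which clients train locally and only model weights (never raw data) are averaged, weighted by local sample counts; the toy linear model is a stand-in, not the paper's XFL architecture.

```python
# Sketch: the federated-averaging (FedAvg) aggregation step that lets sites collaborate
# without sharing raw data. A toy logistic model in NumPy; not the paper's XFL model.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """One client's local training: a few epochs of logistic-regression gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three clients with private datasets of different sizes (data never leaves the client).
clients = [(rng.normal(size=(n, n_features)), rng.integers(0, 2, n)) for n in (120, 300, 60)]
global_w = np.zeros(n_features)

for rnd in range(10):                                 # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: weight each client's model by its share of the total training samples.
    global_w = np.average(local_ws, axis=0, weights=sizes / sizes.sum())

print("aggregated global weights:", np.round(global_w, 3))
```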

15 pages, 1182 KiB  
Article
Interpretable Ensemble Learning Approach for Predicting Student Adaptability in Online Education Environments
by Shakib Sadat Shanto and Akinul Islam Jony
Knowledge 2025, 5(2), 10; https://doi.org/10.3390/knowledge5020010 - 3 Jun 2025
Abstract
The COVID-19 pandemic has accelerated the shift towards online education, making it a critical focus for educational institutions. Understanding students’ adaptability to this new learning environment is crucial for ensuring their academic success. This study aims to predict students’ adaptability levels in online education using a dataset of 1205 observations that incorporates sociodemographic factors and information collected across different educational levels (school, college, and university). Various machine learning (ML) and deep learning (DL) models, including decision tree (DT), random forest (RF), support vector machine (SVM), K-nearest neighbors (KNN), XGBoost, and artificial neural networks (ANNs), are applied for adaptability prediction. The proposed ensemble model achieves superior performance with 95.73% accuracy, significantly outperforming traditional ML and DL models. Furthermore, explainable AI (XAI) techniques, such as LIME and SHAP, were employed to uncover the specific features that significantly impact the adaptability level predictions, with financial condition, class duration, and network type emerging as key factors. By combining robust predictive modeling and interpretable AI, this study contributes to the ongoing efforts to enhance the effectiveness of online education and foster student success in the digital age.
(This article belongs to the Special Issue Knowledge Management in Learning and Education)
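
A minimal sketch of a soft-voting ensemble over the base learners named above is shown below, on synthetic three-class data standing in for the adaptability levels; the paper's exact ensemble design and tuning are not reproduced.

```python
# Sketch: a soft-voting ensemble over the base learners named in the abstract.
# Synthetic three-class data stands in for the adaptability levels; this is not the
# paper's exact ensemble configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1200, n_features=14, n_informative=8,
                           n_classes=3, random_state=5)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=5)),
        ("svm", SVC(probability=True, random_state=5)),   # probability=True enables soft voting
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
    ],
    voting="soft",
)
print("5-fold accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(4))
```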

29 pages, 3354 KiB  
Article
Enhancing Heart Attack Prediction: Feature Identification from Multiparametric Cardiac Data Using Explainable AI
by Muhammad Waqar, Muhammad Bilal Shahnawaz, Sajid Saleem, Hassan Dawood, Usman Muhammad and Hussain Dawood
Algorithms 2025, 18(6), 333; https://doi.org/10.3390/a18060333 - 2 Jun 2025
Abstract
Heart attack is a leading cause of mortality, necessitating timely and precise diagnosis to improve patient outcomes. However, timely diagnosis remains a challenge due to the complex and nonlinear relationships between clinical indicators. Machine learning (ML) and deep learning (DL) models have the potential to predict cardiac conditions by identifying complex patterns within data, but their “black-box” nature restricts interpretability, making it challenging for healthcare professionals to comprehend the reasoning behind predictions. This lack of interpretability limits their clinical trust and adoption. The proposed approach addresses this limitation by integrating predictive modeling with Explainable AI (XAI) to ensure both accuracy and transparency in clinical decision-making. The proposed study enhances heart attack prediction using the University of California, Irvine (UCI) dataset, which includes various heart analysis parameters collected through electrocardiogram (ECG) sensors, blood pressure monitors, and biochemical analyzers. Due to class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied to enhance the representation of the minority class. After preprocessing, various ML algorithms were employed, among which Artificial Neural Networks (ANN) achieved the highest performance with 96.1% accuracy, 95.7% recall, and 95.7% F1-score. To enhance the interpretability of ANN, two XAI techniques, specifically SHapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), were utilized. This study incrementally benchmarks SMOTE, ANN, and XAI techniques such as SHAP and LIME on standardized cardiac datasets, emphasizing clinical interpretability and providing a reproducible framework for practical healthcare implementation. These techniques enable healthcare practitioners to understand the model’s decisions, identify key predictive features, and enhance clinical judgment. By bridging the gap between AI-driven performance and practical medical implementation, this work contributes to making heart attack prediction both highly accurate and interpretable, facilitating its adoption in real-world clinical settings.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
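
A minimal sketch of the SMOTE-plus-ANN pipeline is shown below; using imblearn's Pipeline keeps resampling inside each training fold, and the synthetic imbalanced data is a placeholder for the UCI dataset.

```python
# Sketch: SMOTE oversampling combined with an ANN classifier. imblearn's Pipeline keeps
# SMOTE inside the cross-validation folds so only training data is resampled.
# Synthetic imbalanced data stands in for the UCI heart dataset.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=13, weights=[0.85, 0.15],
                           random_state=11)

pipeline = Pipeline(steps=[
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=11)),          # resampling happens only on each training fold
    ("ann", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=11)),
])

scores = cross_validate(pipeline, X, y, cv=5, scoring=["accuracy", "recall", "f1"])
for name in ("accuracy", "recall", "f1"):
    print(f"{name:8s}: {scores['test_' + name].mean():.3f}")
```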

16 pages, 1085 KiB  
Systematic Review
Explainable Artificial Intelligence in Radiological Cardiovascular Imaging—A Systematic Review
by Matteo Haupt, Martin H. Maurer and Rohit Philip Thomas
Diagnostics 2025, 15(11), 1399; https://doi.org/10.3390/diagnostics15111399 - 31 May 2025
Cited by 2
Abstract
Background: Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the “black box” nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. Methods: A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques—such as Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAPs), Local Interpretable Model-Agnostic Explanations (LIMEs), or saliency maps—to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. Results: A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-rays (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. Conclusions: XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on the robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care.
(This article belongs to the Special Issue Latest Advances and Prospects in Cardiovascular Imaging)
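
Since Grad-CAM is the method most often encountered in these studies, a minimal PyTorch sketch is included below: hooks capture the last convolutional block's activations and gradients, which are combined into a class-discriminative heatmap. The random input and ResNet-18 backbone are placeholders, not any reviewed study's implementation.

```python
# Sketch: a minimal Grad-CAM pass using hooks on the last convolutional block of a
# torchvision ResNet-18. Random input stands in for a cardiac image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4[-1]                       # last residual block's feature maps
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                # placeholder for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()          # gradient of the predicted class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel-wise importance
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to a [0, 1] heatmap
print("Grad-CAM heatmap shape:", tuple(cam.squeeze(1).shape))
```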

26 pages, 2438 KiB  
Article
A Hybrid KAN-BiLSTM Transformer with Multi-Domain Dynamic Attention Model for Cybersecurity
by Aleksandr Chechkin, Ekaterina Pleshakova and Sergey Gataullin
Technologies 2025, 13(6), 223; https://doi.org/10.3390/technologies13060223 - 29 May 2025
Cited by 4
Abstract
With the exponential growth of cyberbullying cases on social media, there is a growing need to develop effective mechanisms for its detection and prediction, which can create a safer and more comfortable digital environment. One of the areas with such potential is the application of natural language processing (NLP) and artificial intelligence (AI). This study applies a novel hybrid architecture, the Hybrid Transformer–Enriched Attention with Multi-Domain Dynamic Attention Network (Hyb-KAN), which combines a transformer-based architecture, an attention mechanism, and BiLSTM recurrent neural networks. A multi-class classification method is used to identify comments containing cyberbullying features. For verification, we compared the proposed method with baseline methods. The Hyb-KAN model demonstrated strong results on the multi-class classification dataset, achieving an accuracy of 95.25%. The synergy of BiLSTM, Transformer, MD-DAN, and KAN components provides flexible and accurate text analysis. The study used explainable visualization techniques, including SHAP and LIME, to analyze the interpretability of the Hyb-KAN model, providing a deeper understanding of the decision-making mechanisms. In the final stage of the study, the results were compared with current research data to confirm their relevance to current trends.

30 pages, 3401 KiB  
Article
Explainable AI Assisted IoMT Security in Future 6G Networks
by Navneet Kaur and Lav Gupta
Future Internet 2025, 17(5), 226; https://doi.org/10.3390/fi17050226 - 20 May 2025
Abstract
The rapid integration of the Internet of Medical Things (IoMT) is transforming healthcare through real-time monitoring, AI-driven diagnostics, and remote treatment. However, the growing reliance on IoMT devices, such as robotic surgical systems, life-support equipment, and wearable health monitors, has expanded the attack surface, exposing healthcare systems to cybersecurity risks like data breaches, device manipulation, and potentially life-threatening disruptions. While 6G networks offer significant benefits for healthcare, such as ultra-low latency, extensive connectivity, and AI-native capabilities, as highlighted in the ITU 6G (IMT-2030) framework, they are expected to introduce new and potentially more severe security challenges. These advancements put critical medical systems at greater risk, highlighting the need for more robust security measures. This study leverages AI techniques to systematically identify security vulnerabilities within 6G-enabled healthcare environments. Additionally, the proposed approach strengthens AI-driven security through the use of multiple XAI techniques cross-validated against each other. Drawing on the insights provided by XAI, we tailor our mitigation strategies to the ITU-defined 6G usage scenarios, with a focus on their applicability to medical IoT networks. We propose that these strategies will effectively address potential vulnerabilities and enhance the security of medical systems leveraging IoT and 6G networks.
(This article belongs to the Special Issue Toward 6G Networks: Challenges and Technologies)

27 pages, 1758 KiB  
Article
Cybersecure XAI Algorithm for Generating Recommendations Based on Financial Fundamentals Using DeepSeek
by Iván García-Magariño, Javier Bravo-Agapito and Raquel Lacuesta
AI 2025, 6(5), 95; https://doi.org/10.3390/ai6050095 - 2 May 2025
Abstract
Background: Investment decisions in stocks are among the most complex tasks due to the uncertainty of which stocks will increase or decrease in value. A diversified portfolio statistically reduces the risk; however, stock choice still substantially influences profitability. Methods: This work proposes a methodology to automate investment decision recommendations with clear explanations. It utilizes generative AI, guided by prompt engineering, to interpret price predictions derived from neural networks. The methodology also includes the Artificial Intelligence Trust, Risk, and Security Management (AI TRiSM) model to provide robust security recommendations for the system. The proposed system provides long-term investment recommendations based on the financial fundamentals of companies, such as the price-to-earnings ratio (PER) and the net margin of profits over total revenue. The proposed explainable artificial intelligence (XAI) system uses DeepSeek for describing recommendations and suggested companies, as well as several charts based on Shapley additive explanation (SHAP) values and local-interpretable model-agnostic explanations (LIMEs) for showing feature importance. Results: In the experiments, we compared the profitability of the proposed portfolios, ranging from 8 to 28 stock values, with the maximum expected price increases over 4 years in the NASDAQ-100 and S&P-500, considering both bull and bear markets, respectively before and after the customs-duty increases imposed on international trade by the USA in April 2025. The proposed system achieved an average profitability of 56.62% across 120 different portfolio recommendations. Conclusions: A Student's t-test confirmed that the difference in profitability compared to the index was statistically significant. A user study revealed that the participants agreed that the portfolio explanations were useful for trusting the system, with an average score of 6.14 on a 7-point Likert scale.
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)
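
The significance check described in the Results can be sketched as a paired t-test of portfolio returns against the benchmark across recommendations; the simulated returns below are placeholders, not the study's 120 actual portfolios.

```python
# Sketch: a paired t-test of portfolio profitability against the benchmark index across
# recommendations. The simulated returns are placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2025)
portfolio_returns = rng.normal(loc=0.57, scale=0.30, size=120)   # simulated 4-year returns
index_returns = rng.normal(loc=0.35, scale=0.20, size=120)       # simulated benchmark returns

t_stat, p_value = ttest_rel(portfolio_returns, index_returns)
print(f"mean excess return: {(portfolio_returns - index_returns).mean():.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```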
