Search Results (733)

Search Parameters:
Keywords = explainable artificial intelligence (XAI)

21 pages, 754 KB  
Article
Effect of Explainable AI Features on User Satisfaction and Purchase Intention in Saudi Mobile Shopping Apps
by Ahmed S. M. Almamy, Sufyan Habib, Layla K. Nasser and Nawaf N. Hamadneh
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 120; https://doi.org/10.3390/jtaer21040120 - 16 Apr 2026
Abstract
This study examines the impact of explainable artificial intelligence (XAI) features on user satisfaction and purchase intention in Saudi mobile shopping applications, utilising the stimulus–organism–response (S–O–R) framework. With the increasing reliance on AI-driven decision support in e-commerce, enhancing transparency, fairness, trustworthiness, and interpretability has become crucial for shaping consumer perceptions and behavioural responses. The research employed a quantitative methodology using partial least squares structural equation modelling (PLS-SEM) to examine the relationships among stimulus factors, cognitive and affective states, consumer satisfaction, and purchase intention. In a survey of 597 respondents from Jeddah and Makkah, Saudi Arabia, the findings highlight that fairness and bias detection, trustworthiness, and transparency significantly influence consumers’ cognitive and affective states, which in turn enhance satisfaction and intention to purchase. Consumer satisfaction emerged as a critical mediator, reinforcing the role of positive emotional and cognitive experiences in driving purchase behaviours. However, interpretability showed limited impact, suggesting that consumers may prioritise fairness and trustworthiness over technical clarity of explanations. Theoretically, this study contributes to advancing knowledge on the role of XAI in consumer behaviour by integrating fairness, transparency, and affective responses into the S–O–R paradigm. From a managerial perspective, the results underscore the importance for mobile shopping platforms to design AI systems that foster trust, reduce perceived bias, and ensure transparency, thereby improving consumer engagement and purchase outcomes. By addressing gaps in interpretability and transparency, businesses can strengthen user trust and loyalty, ultimately enhancing competitive advantage in Saudi Arabia’s rapidly growing e-commerce sector.

40 pages, 2412 KB  
Review
Groundwater Potential Mapping Using Machine Learning Techniques: Current Trends and Future Perspectives
by Mosaad Ali Hussein Ali, Elsayed Ahmed Elsadek, Clinton Williams, Kelly R. Thorp and Diaa Eldin M. Elshikha
Water 2026, 18(8), 947; https://doi.org/10.3390/w18080947 - 15 Apr 2026
Abstract
Groundwater is a vital freshwater resource that supports domestic, agricultural, and industrial activities in many regions worldwide. Accurate groundwater potential mapping (GPM) is essential for sustainable water resource management; however, traditional empirical and statistical approaches often struggle to capture the complex, nonlinear relationships among hydrogeological variables. In recent years, machine learning (ML) has emerged as a powerful data-driven approach for improving GPM accuracy and efficiency. This review synthesizes findings from 83 peer-reviewed studies published between 2015 and 2025, focusing on widely used ML algorithms such as Random Forest, Support Vector Machines, Artificial Neural Networks, and hybrid models. The review evaluates key methodological aspects, including input parameter selection, data partitioning, integration with GIS and remote sensing, and model justification techniques. It also discusses common challenges such as data limitations, regional variability, and model interpretability. The results indicate that ML-based approaches can significantly enhance groundwater prediction when supported by appropriate data and validation strategies. Future research directions include explainable artificial intelligence, uncertainty quantification, multi-source data integration, and improved model transferability. This review provides a comprehensive reference for advancing reliable and sustainable groundwater potential mapping.
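
The ML-based GPM workflow the review describes (conditioning factors in, potential class out) can be sketched with a Random Forest on synthetic data. This is a minimal illustration, not any study's actual pipeline: the feature names, thresholds, and the labeling rule are invented for the example.

```python
# Minimal sketch of ML-based groundwater potential mapping (GPM):
# a Random Forest classifier trained on synthetic hydrogeological
# conditioning factors. Features and the labeling rule are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.uniform(0, 45, n),      # slope (degrees)
    rng.uniform(0, 5, n),       # drainage density
    rng.uniform(200, 1200, n),  # rainfall (mm/yr)
    rng.uniform(0, 3, n),       # lineament density
    rng.uniform(-0.1, 0.8, n),  # NDVI
])
# Synthetic rule: gentle slopes plus high rainfall -> high potential.
y = ((X[:, 0] < 15) & (X[:, 2] > 600)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"hold-out accuracy: {acc:.2f}")
```

In real GPM studies each row would be a map cell with factors derived from GIS and remote-sensing layers, and validation would use spatially separated test areas rather than a random split.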
(This article belongs to the Section Hydrogeology)

18 pages, 469 KB  
Review
Generative Artificial Intelligence Transitions Pharmaceutical Development from Empirical Screening to Predictive Molecular Design and Clinical Trial Optimization
by Ghaith K. Mansour and Hatouf H. Sukkarieh
Pharmaceuticals 2026, 19(4), 614; https://doi.org/10.3390/ph19040614 - 13 Apr 2026
Abstract
The traditional paradigm of pharmaceutical research is characterized by substantial inefficiency, requiring extensive timelines and billions of dollars while suffering from high clinical attrition rates. The integration of generative artificial intelligence (AI) is driving a paradigm shift from empirical experimentation toward predictive, data-driven innovation. This review evaluates state-of-the-art applications of these technologies across the drug discovery and development pipeline. By analyzing multi-omics data streams, AI models can elucidate complex disease mechanisms and identify novel therapeutic targets. Deep generative architectures facilitate the algorithmic creation of novel molecular entities, enabling the design of therapeutics with complex polypharmacological profiles. Furthermore, AI is enhancing the clinical testing phase through large language models (LLMs) that improve patient enrollment and through synthetic control arms (SCAs) that provide computational alternatives to traditional placebo groups. Despite these advances, the scientific community must address inherent algorithmic biases stemming from demographic underrepresentation and mitigate the risks of data hallucinations. Ultimately, realizing the full translational potential of generative AI in precision medicine may require the widespread adoption of explainable AI (XAI) frameworks and rigorous data standards.
(This article belongs to the Section AI in Drug Development)

45 pages, 7613 KB  
Article
BrainTwin-AI: A Multimodal MRI-EEG-Based Cognitive Digital Twin for Real-Time Brain Health Intelligence
by Himadri Nath Saha, Utsho Banerjee, Rajarshi Karmakar, Saptarshi Banerjee and Jon Turdiev
Brain Sci. 2026, 16(4), 411; https://doi.org/10.3390/brainsci16040411 - 13 Apr 2026
Abstract
Background/Objectives: Brain health monitoring is increasingly essential as modern cognitive load, stress, and lifestyle pressures contribute to widespread neural instability. The paper presents BrainTwin, a next-generation cognitive digital twin, as a patient-specific, constantly updating computer model that combines state-of-the-art MRI analytics for neuro-oncological assessment related to the clinical study and management of tumors affecting the central nervous system (including their detection, progression, and monitoring) with real-time EEG-based brain health intelligence. Methods: Structural analysis is driven by an Enhanced Vision Transformer (ViT++), which improves spatial representation and boundary localization, achieving more accurate tumor prediction than conventional models. The extracted tumor volume forms the baseline for short-horizon tumor progression modeling. Parallel to MRI analysis, continuous EEG signals are captured through an in-house wearable skullcap, preprocessed using Edge AI on a Hailo Toolkit-enabled Raspberry Pi 5 for low-latency denoising and secure cloud transmission. Pre-processed EEG packets are authenticated at the fog layer, ensuring secure and reliable cloud transfer and significantly reducing the load on the edge and cloud nodes. In the digital twin, EEG characteristics offer real-time functional monitoring through dynamic brainwave analysis, while a BiLSTM classifier distinguishes relaxed, stress, and fatigue states, which are probabilistically inferred cognitive conditions derived from EEG spectral patterns. Unlike static MRI imaging, EEG provides real-time brain health monitoring. The BrainTwin performs EEG–MRI fusion, correlating functional EEG metrics with ViT++ structural embeddings to produce a single risk score that clinicians can interpret to determine brain vulnerability to future diseases. Explainable artificial intelligence (XAI) provides clinical interpretability through gradient-weighted class activation mapping (Grad-CAM) heatmaps, which are used to interpret ViT++ decisions and are visualized on a 3D interactive brain model to allow more in-depth inspection of spatial details. Results: The evaluation metrics demonstrate a BiLSTM macro-F1 of 0.94 (Precision/Recall/F1: Relaxed 0.96, Stress 0.93, Fatigue 0.92) and a ViT++ MRI accuracy of 96%, outperforming baseline architectures. Conclusions: These results demonstrate BrainTwin’s reliability, interpretability, and clinical utility as an integrated digital companion for tumor assessment and real-time functional brain monitoring.
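
The "dynamic brainwave analysis" feeding such a classifier typically starts from relative band power in the canonical EEG bands. A dependency-light sketch of that feature-extraction step, on a synthetic signal (the sampling rate, window length, and band edges are conventional values, not the paper's stated configuration):

```python
# Sketch of an EEG feature-extraction step: relative band power
# (delta/theta/alpha/beta) computed from a raw signal via an FFT
# periodogram. Signal and parameters are synthetic/illustrative.
import numpy as np

def relative_band_power(signal, fs):
    """Return the fraction of 0.5-30 Hz power in each canonical band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}
    total = psd[(freqs >= 0.5) & (freqs < 30)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

fs = 256                        # Hz
t = np.arange(0, 4, 1.0 / fs)   # one 4-second window
rng = np.random.default_rng(0)
# Synthetic "relaxed" signal: a dominant 10 Hz alpha rhythm plus noise.
sig = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
powers = relative_band_power(sig, fs)
print(max(powers, key=powers.get))  # alpha dominates for this signal
```

A sequence of such band-power vectors over sliding windows is the kind of input a BiLSTM state classifier would consume.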

28 pages, 1987 KB  
Review
Applications, Challenges, and Future Trends of Artificial Intelligence of Things (AIoT)-Enabled Water Quality and Resource Management
by Ashikur Rahman, Gwo Chin Chung and Yin Hoe Ng
Water 2026, 18(8), 919; https://doi.org/10.3390/w18080919 - 12 Apr 2026
Abstract
Safe and sustainable water sources are a serious global concern because of population growth, urbanization, industrialization, and climate change. Conventional water surveillance systems that rely on periodic sampling and laboratory analysis fail to provide the time-sensitive, high-resolution data needed for proactive water management. The Artificial Intelligence of Things (AIoT) offers a viable solution, as it provides tools for continuous monitoring and predictive analytics. The integration of IoT sensor networks with machine learning (ML) methods enables real-time data-driven water resource monitoring and intelligent decision-making, enhances water quality assessment, supports early detection of anomalies, improves predictive capabilities for floods and droughts, and facilitates efficient irrigation and reservoir management, ultimately leading to sustainable and resilient water management systems. The paper presents an extensive overview of AIoT solutions for water quality monitoring and water resource management, including IoT sensor networks for real-time data acquisition; machine learning methods for prediction, classification, and anomaly detection; and edge computing platforms for data processing and decision support. This study also highlights existing possibilities, obstacles, and research gaps identified through a review of the recent literature. Key challenges reported across multiple studies include limited data availability, sensor calibration bias, integration of heterogeneous data, and insufficient model interpretability. Advanced paradigms such as digital twin systems, TinyML, federated learning, and explainable AI (XAI) are examined as enabling technologies to enhance system efficiency, flexibility, and transparency. Future research directions are outlined to develop scalable, interpretable, and real-time water management solutions.
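
Anomaly detection on a streaming sensor is the most code-concrete capability this review discusses. As a dependency-light stand-in for the ML detectors it surveys, a rolling z-score flags readings that deviate sharply from the recent window; the sensor values, window size, and threshold below are all synthetic/illustrative.

```python
# Illustrative anomaly detection for an AIoT water-quality stream:
# flag any reading more than `threshold` standard deviations from
# the mean of the preceding window. Data are synthetic.
import numpy as np

def rolling_zscore_anomalies(x, window=50, threshold=4.0):
    """Return indices whose value deviates strongly from the
    trailing window of `window` readings."""
    flags = []
    for i in range(window, len(x)):
        ref = x[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(x[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

rng = np.random.default_rng(1)
turbidity = rng.normal(5.0, 0.3, 500)   # NTU under normal operation
turbidity[350] = 12.0                   # injected contamination spike
flags = rolling_zscore_anomalies(turbidity)
print(flags)  # the injected spike at index 350 is flagged
```

On a constrained edge node (the review's TinyML setting), this kind of windowed statistic is cheap enough to run on-device, with only flagged readings forwarded to the cloud.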

36 pages, 1657 KB  
Review
The Current Status of Contaminated Site Remediation and Application Prospects of Artificial Intelligence—A Review
by Guodong Zheng, Shengcheng Mei, Yiping Wu and Pengyi Cui
Environments 2026, 13(4), 212; https://doi.org/10.3390/environments13040212 - 12 Apr 2026
Abstract
Industrialization has led to the substantial release of heavy metals and organic pollutants into soil and groundwater, resulting in severe contaminated site issues that pose significant threats to ecosystems and human health. This review systematically examines the current development status and challenges of contaminated site remediation technologies and explores the potential of artificial intelligence (AI) applications in site remediation, providing a theoretical reference for advancing intelligent remediation. Conventional remediation technologies mainly include physical methods (e.g., solidification/stabilization (S/S), soil vapor extraction (SVE), thermal desorption, pump and treat (P&T), groundwater circulation wells (GCWs)), chemical methods (e.g., chemical oxidation/reduction, electrokinetic remediation (EKR), soil washing), and biological methods (phytoremediation, microbial remediation), along with combined strategies that integrate multiple approaches. Although these technologies have achieved certain successes in engineering practice, they still face common challenges such as risks of secondary pollution, long remediation periods, high costs, poor adaptability to complex hydrogeological conditions, and insufficient long-term stability, making it difficult to fully meet the remediation demands of complex contaminated sites. Subsequently, the potential of emerging technologies—including nanomaterial-based remediation, bioelectrochemical systems, and molecular biology-assisted remediation—is introduced. On this basis, the forefront applications of AI in contaminated site remediation are discussed, covering site monitoring and characterization, risk assessment, remedial strategy selection, process prediction and parameter optimization, material design, and post-remediation intelligent stewardship. Machine learning (ML), explainable AI (XAI), and hybrid modeling approaches have markedly improved remediation efficiency and decision-making. Looking forward, with advancements in XAI, mechanism-data fusion models, and environmental foundation models, AI is poised to drive a paradigm shift toward intelligent and precision remediation. However, challenges related to data quality, model interpretability, and interdisciplinary expertise remain key barriers to overcome.
32 pages, 6302 KB  
Article
Disentangling Climatic and Surface-Physical Drivers of the Urban Heat Island Using Explainable AI Across U.S. Cities
by Osama A. B. Aljarrah and Dimitrios Goulias
Sustainability 2026, 18(8), 3694; https://doi.org/10.3390/su18083694 - 8 Apr 2026
Abstract
Urban Heat Islands (UHIs) are widely analyzed using Land Surface Temperature (LST), yet most studies remain limited to single cities, rely on a single machine-learning model, analyze LST alone, and use inconsistent Surface Urban Heat Island Intensity (SUHII) definitions, which restrict cross-city comparability and broader generalization. This study introduces an explainable artificial intelligence (XAI) framework implemented in Google Earth Engine (GEE) to analyze census-tract summer surface heat (2018–2024) across eight climatically contrasting U.S. cities. The main novelty is a standardized tract-scale cross-city framework that jointly models LST and SUHII using a consistent SUHII definition, a common physical predictor set, city-held-out nested cross-validation, and SHAP-based interpretation, allowing absolute surface heat to be distinguished from relative within-city heat anomaly; this combination is rarely implemented within a single urban heat study. Multiple machine-learning models were evaluated, with ensemble trees performing best: Extreme Gradient Boosting (XGBoost) best predicted SUHII (R2 = 0.879; RMSE = 0.213), while Extra Trees best predicted LST (R2 = 0.908; RMSE = 0.745 °C). SHapley Additive exPlanations (SHAP) indicate that SUHII is driven primarily by impervious surface fraction and surface moisture availability, whereas LST is structured by latitude and mean summer air temperature. Overall, the framework provides interpretable multi-city attribution of urban surface heat drivers with demonstrated cross-city generalization.
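
The study attributes SUHII to its drivers with SHAP on gradient-boosted trees. A lighter-weight cousin of that idea, sketched here without the `shap` dependency, is permutation importance: fit a model once, shuffle one feature at a time, and measure the drop in fit. The model (ordinary least squares), data, and driver names below are synthetic stand-ins, not the paper's pipeline.

```python
# Permutation importance as a simple feature-attribution sketch:
# a least-squares surrogate on synthetic "SUHII driver" data.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
impervious = rng.uniform(0, 1, n)     # impervious surface fraction
moisture = rng.uniform(0, 1, n)       # surface moisture availability
albedo = rng.uniform(0.1, 0.4, n)     # irrelevant by construction
# Synthetic SUHII: impervious warms, moisture cools, plus noise.
suhii = 2.0 * impervious - 1.2 * moisture + 0.1 * rng.standard_normal(n)

X = np.column_stack([impervious, moisture, albedo])
names = ["impervious", "moisture", "albedo"]

def design(X):
    return np.c_[X, np.ones(len(X))]   # add an intercept column

def r2(beta, X, y):
    pred = design(X) @ beta
    return 1 - ((y - pred) ** 2).mean() / y.var()

beta, *_ = np.linalg.lstsq(design(X), suhii, rcond=None)
base = r2(beta, X, suhii)

importance = {}
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-target link
    importance[name] = base - r2(beta, Xp, suhii)

print(sorted(importance, key=importance.get, reverse=True))
# → ['impervious', 'moisture', 'albedo']
```

Unlike SHAP, permutation importance gives only a global ranking, with no per-tract (local) attributions; it is shown here purely to make the shuffle-and-score idea concrete.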
(This article belongs to the Special Issue Climate-Responsive Strategies for Sustainable Infrastructure)

15 pages, 1148 KB  
Article
Early Prediction of Well-Being Outcomes in Older Adults Using Explainable AI and Emotional Intelligence Measures
by Evgenia Kouli, Evangelos Bebetsos, Maria Michalopoulou and Filippos Filippou
Appl. Sci. 2026, 16(7), 3586; https://doi.org/10.3390/app16073586 - 7 Apr 2026
Abstract
Background: Well-being in the elderly is shaped by complex emotional and social factors. Early identification of individuals at risk for reduced well-being may support timely preventive or supportive interventions. This study examined whether emotional intelligence indicators collected at baseline can predict well-being status 5 months later using explainable machine learning models. Methods: A cohort of elderly participants aged 60 to 89 years completed emotional intelligence measures at baseline, and well-being was assessed 5 months later using the POMS questionnaire. Four machine learning algorithms, Logistic Regression (LR), Support Vector Machines (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), were developed using 5-fold stratified cross-validation. Model performance was evaluated through accuracy, precision, recall, F1-score, ROC AUC, and normalized confusion matrices. SHapley Additive exPlanations (SHAP) were applied to interpret the contribution and directionality of each predictor. Results: XGBoost achieved the highest predictive performance (accuracy = 0.789; F1 = 0.778) and demonstrated balanced classification across well-being categories. SVM also performed robustly (accuracy = 0.760), while LR showed reduced sensitivity for detecting those with poorer well-being. SHAP analysis identified self-control, emotionality, sociability, self-motivation, and well-being components as the most influential predictors. Lower emotionality, higher sociability, and higher self-control scores were linked to a greater probability of favorable well-being outcomes. Conclusions: The findings demonstrate the feasibility of using explainable machine learning models to predict 5-month well-being status within this sample of older adults using emotional intelligence indicators. XGBoost provided the strongest and most balanced performance, while SHAP analysis clarified how specific emotional intelligence dimensions influenced predictions. These findings suggest that interpretable machine learning approaches may support future efforts toward early recognition of older adults who may be at risk for reduced well-being and guide personalized intervention strategies.

29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Abstract
Predicting loan defaults is a significant challenge for financial institutions; however, current machine learning techniques often encounter issues in areas such as data privacy, cross-institutional cooperation, and model transparency. Practical implementation of advanced predictive models is restricted by centralized training paradigms, which limit their application because of regulatory and confidentiality issues, and by black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for predicting loan defaults. Several machine learning classifiers (K-Nearest Neighbors, support vector machine, random forest, and XGBoost) are trained on partitioned institutional data without any data sharing, and a prediction-level aggregation strategy simulates collaborative decision-making while preserving data locality. SHAP and LIME promote model interpretability by giving both global and local explanations of prediction outcomes. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk. The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework applicable in multi-institutional financial contexts, (ii) a detailed comparison of centralized and decentralized predictive performance, and (iii) an XAI pipeline that increases transparency and regulatory confidence in automated credit risk evaluation. These results demonstrate that decentralized learning combined with explainable AI can deliver high-performing, transparent, and privacy-preserving loan default prediction systems in real-world banking practice.
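
The core mechanism, training local models on disjoint institutional partitions and aggregating only their predictions, can be sketched briefly. The dataset, the choice of logistic regression for every "institution", and the soft-vote aggregation are illustrative stand-ins for the paper's multi-classifier setup.

```python
# Federated-inspired prediction-level aggregation sketch:
# local classifiers are trained on disjoint partitions (no raw-data
# sharing) and only their predicted probabilities are averaged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Split the training rows across three "institutions".
parts = np.array_split(np.arange(len(X_tr)), 3)
local_models = [
    LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    for idx in parts
]

# Aggregate at the prediction level: average local probabilities.
proba = np.mean([m.predict_proba(X_te)[:, 1] for m in local_models], axis=0)
y_hat = (proba >= 0.5).astype(int)
acc = accuracy_score(y_te, y_hat)
print(f"aggregated accuracy: {acc:.2f}")
```

Note that, unlike full federated learning, nothing is shared during training here (not even gradients); the collaboration happens entirely at inference time, which is what "federated-inspired" refers to in the abstract.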

24 pages, 4002 KB  
Article
A Causal XAI Diagnosis and Optimization Framework for Hot-Rolled Strip Shape Incorporating Hybrid Structure Learning
by Yuchun Wu, Pengju Xu, Dongyu Li and Zhimin Lv
Metals 2026, 16(4), 401; https://doi.org/10.3390/met16040401 - 3 Apr 2026
Abstract
Accurate shape control is paramount for ensuring the quality of hot-rolled strip products, which is significantly challenged by the high dimensionality, inherent nonlinearity, and strong coupling of process parameters. While machine learning (ML) methods have demonstrated superior predictive performance in product quality modeling, their inherent “black-box” nature and lack of transparency severely undermine system reliability and hinder practical deployment. Existing explainable artificial intelligence (XAI) approaches predominantly rely on statistical correlations while overlooking the underlying causal mechanisms among coupled variables, which severely limits the validity of explanations. To address these limitations, a causal XAI diagnosis and optimization framework for hot-rolled strip shape is proposed. Initially, a hybrid causal structure learning module is established, which integrates domain knowledge with the NOTEARS-MLP algorithm to accurately reconstruct the causal topology and decode the complex coupling mechanisms among process parameters. Subsequently, a high-performance quality prediction module utilizing AutoML techniques is constructed to establish a robust predictive baseline. Furthermore, a causal XAI and quality optimization module is introduced, which incorporates causal constraints into standard Shapley additive explanation (SHAP) analysis for transparent diagnosis and employs piecewise linear regression (PLR) to generate sample-specific optimization strategies. Comprehensive experimental validation demonstrates that the prediction module significantly outperforms state-of-the-art ML approaches across multiple performance metrics. Additionally, comparative analysis reveals that the optimization strategy based on causal feature attribution achieves a 14.7% defect-rate reduction over the associational baseline, establishing an effective, efficient benchmark for causal explainability in industrial process optimization.
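
The PLR building block, fitting separate linear segments on either side of a breakpoint so that each operating regime gets its own local adjustment rule, can be sketched minimally. The process variable, response curve, and breakpoint-scan approach below are illustrative; the paper's actual PLR formulation is not reproduced here.

```python
# Minimal two-segment piecewise linear regression (PLR) sketch:
# scan candidate breakpoints, fit a least-squares line on each side,
# and keep the split with the lowest total squared error.
import numpy as np

def fit_plr(x, y, min_seg=5):
    """Return (breakpoint, total_sse) of the best two-segment fit."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_bp, best_sse = None, np.inf
    for i in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            A = np.c_[xs, np.ones(len(xs))]
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            sse += ((ys - A @ coef) ** 2).sum()
        if sse < best_sse:
            best_bp, best_sse = x[i], sse
    return best_bp, best_sse

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)   # e.g. a (synthetic) rolling setpoint
# Response rises until x = 6, then falls: two distinct regimes.
y = np.where(x < 6, 0.5 * x, 3.0 - 1.2 * (x - 6)) + 0.05 * rng.standard_normal(200)
bp, sse = fit_plr(x, y)
print(f"estimated breakpoint near x = {bp:.1f}")
```

In an optimization setting, the recovered segment on either side of the breakpoint tells you locally which direction, and roughly how far, to move the parameter for a given sample.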

18 pages, 692 KB  
Review
From Pixels to Prediction: Developing Integrated AI Foundation Models for Personalized Thyroid Cancer Care
by Jae Hyun Park, Younghyun Park, Yong Moon Lee, Sejung Yang and Jong Ho Yoon
Cancers 2026, 18(7), 1155; https://doi.org/10.3390/cancers18071155 - 3 Apr 2026
Abstract
Background: Thyroid cancer incidence continues to rise globally, yet current diagnostic methods, reliant on ultrasound-guided fine-needle aspiration, suffer from substantial inter-observer variability and indeterminate results. Objective: This review explores the transformative potential of integrated artificial intelligence (AI) foundation models in thyroid cancer management. We propose a paradigm shift using foundation models—large-scale, multimodal architectures pre-trained on diverse datasets—to bridge the gap between initial pixels and long-term prognostic prediction. Proposed Models: We introduce two integrated conceptual frameworks: ThyroSight-Prognos for high-precision assessment in specialized tertiary settings and SonoPredict-AI for cost-effective screening in primary care. Key Innovations: By synthesizing data from ultrasound, pathology (WSI), genomics, and clinical parameters through explainable AI (XAI), these models aim to reduce unnecessary surgeries and personalize treatment pathways. Challenges and Outlook: This paper addresses critical implementation challenges, including data heterogeneity, hardware requirements, and regulatory trust, ultimately providing a strategic blueprint for future multi-center prospective clinical validation to revolutionize thyroid care through precision oncology.
(This article belongs to the Special Issue The Changing Paradigms in the Management of Thyroid Cancer)

23 pages, 399 KB  
Article
Integrating Model Explainability and Uncertainty Quantification for Trustworthy Fraud Detection
by Tebogo Forster Mapaila and Makhamisa Senekane
Technologies 2026, 14(4), 212; https://doi.org/10.3390/technologies14040212 - 3 Apr 2026
Abstract
Financial fraud and money laundering continue to challenge financial stability and regulatory oversight, motivating the widespread adoption of machine learning models for transaction monitoring. Although ensemble models such as Random Forest and XGBoost achieve strong predictive performance, their deployment in high-stakes financial environments is constrained by limited interpretability, overconfident predictions, and the absence of principled mechanisms for expressing decision uncertainty. Emerging regulatory expectations increasingly emphasise transparency, accountability, and operational reliability, underscoring the need for evaluation frameworks that extend beyond predictive accuracy. This study proposes the Integrated Transparency and Confidence Framework (ITCF), a deployment-oriented approach that unifies model explainability, statistically valid uncertainty quantification, and operational decision support for fraud detection. ITCF combines instance-level explanations generated via Local Interpretable Model-Agnostic Explanations (LIME) with distribution-free uncertainty estimation using split conformal prediction. The framework incorporates selective explainability, abstention-based routing, and uncertainty-driven triage to support human-in-the-loop review. Using the PaySim dataset of 6,362,620 mobile-money transactions, Random Forest and XGBoost models are evaluated under extreme class imbalance using F1-score, AUC–ROC, and Matthews Correlation Coefficient (MCC). At a target coverage level of 90% (α = 0.1), both models achieve empirical coverage close to the target level, with XGBoost producing smaller prediction sets and superior recall, MCC, and latency. ITCF provides transaction-level explanations for uncertain cases and specifies an auditable workflow that is intended to support transparency, traceability, and risk-aware human review, thereby enabling defensible human decision-making in regulated environments. Overall, this study illustrates how explainability and uncertainty quantification can be combined in a deployment-oriented evaluation workflow, while noting that real-world validation remains a future endeavour.
(This article belongs to the Special Issue Privacy-Preserving and Trustworthy AI for Industrial 4.0 and Beyond)
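The split conformal procedure summarised in the abstract can be sketched in a few lines. This is an illustrative simplification under assumed function names, not the paper's ITCF code: calibration scores are one minus the probability assigned to the true class, and a finite-sample-corrected quantile gives the threshold that yields the target coverage (e.g. 90% at α = 0.1).

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal threshold from a held-out calibration set.

    Nonconformity score: 1 - probability assigned to the true class.
    """
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample corrected quantile level ceil((n+1)(1-alpha))/n.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_set(test_probs, qhat):
    """Include every class whose nonconformity score falls below the threshold."""
    return [np.nonzero(1.0 - p <= qhat)[0].tolist() for p in test_probs]
```

A set containing both classes flags an uncertain transaction for human review, which is the abstention/triage behaviour the framework describes; an empty set signals that no class meets the calibrated confidence level.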

25 pages, 12554 KB  
Article
An Explainable Artificial Intelligence-Driven Framework for Predicting Groundwater Irrigation Suitability in Hard-Rock Aquifers: Moving Beyond Traditional Bivariate Diagnostics
by Mohamed Hussein Yousif, Quanrong Wang, Anurag Tewari, Abara A. Biabak Indrick, Hafizou M. Sow, Yousif Hassan Mohamed Salh and Wakeel Hussain
Water 2026, 18(7), 854; https://doi.org/10.3390/w18070854 - 2 Apr 2026
Abstract
Groundwater is the primary source of irrigation in many semi-arid hard-rock aquifer regions. Yet, its suitability assessment is often hindered by the nonlinear hydrochemical dynamics that traditional bivariate tools, such as the U.S. Salinity Laboratory (USSL) diagram, cannot adequately resolve. To overcome this limitation, we developed an explainable artificial intelligence (XAI) framework that predicts irrigation suitability categories directly from hydrochemical variables, without relying on calculated indices. Using 1872 post-monsoon groundwater samples from Telangana, India, we trained three ensemble tree-based classifiers (Random Forest, LightGBM, and XGBoost) on 11 hydrochemical variables (Na+, K+, Ca2+, Mg2+, HCO3−, Cl−, F−, NO3−, SO42−, pH, and total hardness). Class imbalance was addressed using the Synthetic Minority Over-sampling Technique (SMOTE), and model hyperparameters were optimized with Optuna. Among the tested models, LightGBM achieved the best performance (balanced accuracy = 0.938). Model interpretability was enabled using Shapley Additive Explanations (SHAP), supported by Piper and Gibbs diagrams, revealing a critical distinction between sodicity-driven salinity and hardness-driven mineralization, identifying calcium-saturated waters for which gypsum amendment can be chemically futile. To bridge the gap between algorithmic accuracy and operational simplicity, we distilled SHAP explanations into linear heuristics and quantified the trade-off between accuracy and simplicity. Accordingly, we proposed a tiered hydrochemical triage framework in which quantitative heuristics handled approximately 62.5% of the routine samples, while XAI resolved the complex and ambiguous cases. Overall, the proposed framework transforms classic suitability assessment tools into an adaptable, evidence-informed, proactive decision-support system for sustainable agricultural water management under increasing environmental stress. Full article
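The SMOTE step mentioned above can be illustrated with a minimal numpy sketch: synthetic minority samples are generated by interpolating between a minority-class point and one of its k nearest minority-class neighbours. The paper presumably used the standard imbalanced-learn implementation; the function name and parameters here are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples, SMOTE-style.

    Each synthetic point lies on the segment between a randomly chosen
    minority point and one of its k nearest minority neighbours.
    """
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances to all minority points; position 0 of the argsort is the
        # point itself (distance 0), so slice [1:k+1] picks the k neighbours.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        lam = rng.random()  # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because every synthetic point is a convex combination of two existing minority samples, the oversampled data stays inside the minority class's convex hull rather than duplicating points outright.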

43 pages, 1754 KB  
Systematic Review
Potential Clinical Applicability of Deep Learning in the Diagnosis of Major Depressive Disorder Using rs-fMRI: A Systematic Literature Review
by Maryam Saeedi, Lan Wei, Mercy Edoho and Catherine Mooney
Appl. Sci. 2026, 16(7), 3444; https://doi.org/10.3390/app16073444 - 1 Apr 2026
Abstract
Background: Major Depressive Disorder (MDD) is one of the leading causes of disability worldwide. Deep learning methods have been widely used for MDD detection, with research suggesting that deep models outperform traditional machine learning techniques. However, detecting MDD remains challenging due to data heterogeneity, model complexity, and the requirement for discriminative feature representations. Objective: This review outlines recent progress in deep learning methods for MDD detection from resting-state fMRI (rs-fMRI), with a focus on model generalisability and on features that most effectively represent brain function and anatomy, contributing to biomarker identification and interpretability. Further, the review assesses the applicability of current models to real-world challenges. Methods: This systematic review followed the PRISMA guidelines. Studies involved clinically diagnosed MDD subjects, a control group, and deep learning methods for classification tasks. Results: The cerebellum, thalamus, amygdala, insula, and default mode network are the most frequently reported brain regions associated with depression. Although deep learning has shown impressive results, it has limitations in terms of reliance on labelled data, heterogeneity of data from various hospitals, and model interpretability. Most studies lacked external validation, relied on single-site or regionally homogeneous datasets, and did not consider the temporal and dynamic nature of rs-fMRI data. Conclusion: Deep learning offers considerable potential in advancing MDD diagnosis and understanding its mechanisms. Multi-regional data collection, harmonisation techniques, and rigorous testing in real-world workflows should be the primary focus of future research. Full article

33 pages, 1100 KB  
Systematic Review
The Emerging Role of Explainable Artificial Intelligence in EEG-Based Autism Research: A Systematic Review
by Maria Eugenia Martelli, Simone Colella, Roberta Meloni, Federica Gigliotti, Antonello Rosato, Massimo Panella and Carla Sogos
NeuroSci 2026, 7(2), 41; https://doi.org/10.3390/neurosci7020041 - 1 Apr 2026
Abstract
The increasing prevalence of Autism Spectrum Disorder (ASD) has intensified research efforts aimed at clarifying its neurobiological underpinnings. Electroencephalography (EEG) has enabled the identification of functional alterations in neuronal networks, contributing to the characterization of ASD-related brain dynamics and supporting the investigation of links between neural processes and behavioral impairments. In recent years, Artificial Intelligence (AI) methods have been increasingly applied to EEG analysis, allowing the extraction of complex, high-dimensional features. However, the limited interpretability of many AI-based models represents a major barrier to their clinical translation. To address this issue, Explainable Artificial Intelligence (XAI) approaches have emerged as promising tools to enhance model transparency and neurobiological interpretability. This systematic review examined studies explicitly applying XAI techniques to EEG or event-related potential data from individuals with ASD. A comprehensive literature search was conducted across multiple electronic databases up to November 2025. Studies were included if they involved ASD populations, electrophysiological data, and AI-based analytical approaches with explicit explainability components. Due to substantial methodological heterogeneity, a qualitative narrative synthesis was performed. Eleven studies met the inclusion criteria. Overall, the included articles highlighted partially overlapping electrophysiological patterns involving spectral alterations, functional connectivity, and network organization; however, some studies also revealed marked heterogeneity in study design and limited clinical characterization. Consequently, these findings should be interpreted with caution, as the field remains at a preliminary stage.
This review outlines current trends, methodological limitations, and key gaps in XAI-driven EEG research in ASD, and discusses future directions toward clinically meaningful and interpretable neurophysiological biomarkers. The review protocol was registered in PROSPERO (CRD420251231630). Full article
