Search Results (714)

Search Parameters:
Keywords = explainable artificial intelligence (XAI)

28 pages, 1976 KB  
Review
Advances in Closed-Loop Artificial Intelligence for Healthcare
by Diba Das, Scott D. Adams, Dean M. Corva, Tracey K. Bucknall and Abbas Z. Kouzani
Electronics 2026, 15(7), 1396; https://doi.org/10.3390/electronics15071396 - 27 Mar 2026
Abstract
Artificial intelligence (AI) is increasingly used in healthcare to support clinical decision-making through clinical decision support systems (CDSS). Human-in-the-loop (HITL) approaches introduce clinician oversight to improve model interpretability, reliability, and adaptability, while explainable AI (XAI) helps clinicians understand model behaviour. This review categorises HITL AI approaches in healthcare into pre-deployment and post-deployment stages and provides a dedicated review focusing specifically on post-deployment HITL systems. It also introduces the concept of closed-loop AI, where real-time expert feedback can refine AI outputs without requiring model retraining. A systematic review following PRISMA guidelines was conducted using the Scopus and PubMed databases for studies published between 2020 and July 2025. From 3466 identified records, 3012 remained after duplicate removal. After title and abstract screening, 1630 articles were assessed through full-text review, and 15 studies met the predefined inclusion criteria related to HITL, post-deployment adaptation, and interactive XAI in healthcare. The selected studies indicate growing interest in post-deployment HITL systems that allow clinicians to refine AI outputs, provide real-time feedback, and support adaptive CDSS. These findings highlight a shift toward human-centred, closed-loop AI frameworks that integrate expert feedback into deployed systems to improve transparency, trust, and responsiveness in clinical decision-making.
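
One way to picture the closed-loop idea, where expert feedback refines outputs without retraining, is an override layer sitting in front of a deployed model. The toy sketch below is a conceptual illustration only; the class and method names are invented for this example and are not the review's formalization.

```python
# Toy closed-loop CDSS sketch: clinician feedback refines deployed outputs
# without model retraining. All names here are hypothetical.
class ClosedLoopCDSS:
    def __init__(self, model):
        self.model = model          # any callable: features -> prediction
        self.overrides = {}         # feedback store: case key -> corrected label

    def predict(self, case_key, features):
        # Clinician feedback, when present, takes precedence over the model.
        if case_key in self.overrides:
            return self.overrides[case_key]
        return self.model(features)

    def feedback(self, case_key, corrected_label):
        # Real-time expert correction, applied in place of retraining.
        self.overrides[case_key] = corrected_label

cdss = ClosedLoopCDSS(model=lambda f: "low risk" if f["hr"] < 100 else "high risk")
print(cdss.predict("patient-42", {"hr": 92}))   # model output: low risk
cdss.feedback("patient-42", "high risk")        # clinician overrides
print(cdss.predict("patient-42", {"hr": 92}))   # refined output: high risk
```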

24 pages, 4459 KB  
Article
AI-Driven Decision Support System for Proactive Risk Management in Construction Projects
by Jon Zorrilla, Sandra Seijo, Unai Arenal and Juan Ramón Mena
Intell. Infrastruct. Constr. 2026, 2(2), 4; https://doi.org/10.3390/iic2020004 - 26 Mar 2026
Abstract
Construction projects frequently face risks such as anomalies, delays, and bottlenecks, which can substantially affect timelines and budgets. This study proposes a machine learning (ML)-based framework for early identification of risks in construction projects, enabling pattern understanding and decision-making through clustering, outlier and bottleneck detection, and identification of relevant variables. It uses a business process management (BPM) dataset of construction documents and applies clustering techniques to both numerical and mixed datasets to group documents with similar characteristics, enabling the detection of temporal deviations and the patterns behind them. Additionally, an ensemble anomaly detection model based on different algorithms is implemented to identify outliers through key variables, which may indicate hidden risks and planning errors. Explainable artificial intelligence (XAI) techniques are then used to analyse the importance of the variables, supporting the identification and analysis of bottlenecks that may compromise project success. The results reveal an F1 score of 0.73 in bottleneck detection using three understandable decision rules, a 6% rate of anomalies within the dataset, and three distinct project clusters. This approach enables accurate and timely detection of risks while providing valuable insights for decision-making, improving risk management, and optimising project execution in the architecture, engineering and construction (AEC) industry.
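
As a rough illustration of the ensemble anomaly detection step, one common pattern is to combine several detectors and flag records that a majority agree on. This sketch uses scikit-learn detectors on synthetic data; the specific algorithms, the feature shapes, and the 6% contamination setting are assumptions for the example, not the paper's exact configuration.

```python
# Hedged sketch: ensemble outlier detection by majority vote across detectors.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 4))            # stand-in for document features
X[:30] += 6                               # inject ~6% anomalous records

detectors = [
    IsolationForest(contamination=0.06, random_state=0),
    EllipticEnvelope(contamination=0.06),
]
votes = sum((d.fit_predict(X) == -1).astype(int) for d in detectors)
votes += (LocalOutlierFactor(contamination=0.06).fit_predict(X) == -1).astype(int)

is_outlier = votes >= 2                   # flagged by a majority of detectors
print(f"anomaly rate: {is_outlier.mean():.1%}")
```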

33 pages, 15024 KB  
Article
HFA-Net: Explainable Multi-Scale Deep Learning Framework for Illumination-Invariant Plant Disease Diagnosis in Precision Agriculture
by Muhammad Hassaan Ashraf, Farhana Jabeen, Muhammad Waqar and Ajung Kim
Sensors 2026, 26(7), 2067; https://doi.org/10.3390/s26072067 - 26 Mar 2026
Abstract
Robust plant disease detection in real-world agricultural environments remains challenging due to dynamic environmental conditions. Accurate and reliable disease identification is essential for precision agriculture and effective crop management. Although computer vision and Artificial Intelligence (AI) have shown promising results in controlled settings, their performance often drops under lesion scale variability, inter- and intra-class similarity among diseases, class imbalance, and illumination fluctuations. To overcome these challenges, we propose a Heterogeneous Feature Aggregation Network (HFA-Net) that brings together architectural improvements, illumination-aware preprocessing, and training-level enhancements into a single cohesive framework. To extract richer and more discriminative features from the early layers of the network, HFA-Net introduces a multi-scale, multi-level feature aggregation stem. The Reduction-Expansion (RE) mechanism helps preserve important lesion details while adapting to variations in scale. An Illumination-Adaptive Contrast Enhancement (IACE) preprocessing pipeline is also designed to address illumination variability in real agricultural environments. Experimental results show that HFA-Net achieves 96.03% accuracy under normal conditions and maintains strong performance under challenging lighting scenarios, achieving 92.95% and 93.07% accuracy in extremely dark and bright environments, respectively. Furthermore, quantitative explainability analysis using perturbation-based metrics demonstrates that the model’s predictions are not only accurate but also faithful to disease-relevant regions. Finally, Grad-CAM-based visual explanations confirm that the model’s predictions are driven by disease-specific regions, enhancing interpretability and practical reliability.
(This article belongs to the Section Smart Agriculture)
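
Grad-CAM, mentioned at the end of the abstract, weights a convolutional layer's activations by the gradient of the class score to localize the image regions driving a prediction. The sketch below shows the generic recipe on a stock ResNet-18; the model, layer choice, and random input are placeholders, not HFA-Net itself.

```python
# Minimal Grad-CAM sketch (illustrative model and layer choice).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4[-1]           # last conv block: a common choice

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0].detach()))

x = torch.randn(1, 3, 224, 224)           # stand-in for a leaf image
score = model(x)[0].max()                 # logit of the predicted class
score.backward()                          # populates the gradient hook

w = grads["v"].mean(dim=(2, 3), keepdim=True)       # channel weights
cam = F.relu((w * acts["v"]).sum(dim=1))            # weighted activation map
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)                          # heatmap to overlay on the input
```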

20 pages, 2388 KB  
Article
The Role of Green Official Development Assistance in the Implementation of Sustainable Development Goal 15 Using Explainable AI
by Jeongyeon Chae and Eunho Choi
Forests 2026, 17(4), 412; https://doi.org/10.3390/f17040412 - 26 Mar 2026
Abstract
The Sustainable Development Goals (SDGs) are global objectives adopted by countries worldwide to achieve sustainable development by 2030 and consist of 17 goals and 169 specific targets. Among them, SDG 15 (Life on Land) aims to conserve terrestrial ecosystems and promote their sustainable use. Successful implementation of SDG 15 requires continuous management of terrestrial ecosystems and positive forest transitions. However, systematic analyses examining the role of green official development assistance (ODA), which supports environmental improvement in developing countries, remain limited. Accordingly, this study investigates the role that green ODA can play in forest transitions. Focusing on green ODA provided to developing countries between 2010 and 2023, this study employed Shapley additive explanations (SHAP), an explainable artificial intelligence (XAI) technique, to predict its influence on SDG 15 implementation scores and to analyze the contributions of economic, environmental, and social indicators. In addition, a SHAP value-based decomposition and a gap index were calculated to examine the contribution of green ODA relative to its input. The results indicate that the overall contribution of green ODA to SDG 15 implementation in developing countries is relatively limited. However, statistically significant effects were observed in country groups with higher levels of SDG 15 implementation performance. In contrast, the effects were weakened or constrained in some country groups with lower levels of SDG 15 implementation. These findings suggest that green ODA may function as a transition accelerator that facilitates positive forest transitions in countries with stronger capacities for implementing SDG 15. Addressing the existing limitations of green ODA could enhance its role and enable it to contribute more effectively to sustainable development and the conservation of terrestrial ecosystems in developing countries.
(This article belongs to the Special Issue Forest Economics and Policy Analysis)
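
For readers unfamiliar with the SHAP workflow this abstract relies on, the outline is: fit a tree model, compute per-observation feature attributions, and aggregate their absolute values into global importances. The sketch below uses synthetic data, and the indicator names are invented placeholders, not the study's variables.

```python
# Hedged SHAP sketch: global feature importance from a gradient-boosted model.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "green_oda": rng.gamma(2.0, 1.0, 500),       # assumed economic input
    "forest_cover": rng.uniform(0, 1, 500),      # assumed environmental input
    "gdp_per_capita": rng.lognormal(9, 1, 500),  # assumed economic input
})
y = 50 + 5 * X["forest_cover"] + 0.5 * np.log(X["green_oda"] + 1) + rng.normal(0, 1, 500)

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)    # per-country attributions
print(pd.Series(np.abs(sv).mean(axis=0), index=X.columns))  # global importance
```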

17 pages, 980 KB  
Article
Intelligent Agents for Sustainable Maritime Logistics: Architectures, Applications, and the Path to Robust Autonomy
by Marko Rosić, Dean Sumić and Lada Maleš
Sustainability 2026, 18(7), 3231; https://doi.org/10.3390/su18073231 - 26 Mar 2026
Abstract
The maritime industry faces increasing challenges in balancing operational effectiveness and environmental responsibility. This study examines the application of intelligent agents as a technology that can align these two goals within the triple-bottom-line model, which involves social responsibility, environmental footprint, and economic sustainability. An agent architecture taxonomy is outlined and adapted to the maritime industry, distinguishing between reactive, deliberative, hybrid, and multi-agent systems (MAS). The application of these architectures is analysed throughout the maritime domain. In the ship-centric environment, the analysis highlights the role of agents in autonomous navigation, energy-efficient meteorological routing, and predictive maintenance. The analysis in the port and supply-chain domain demonstrates a shift towards decentralized asset orchestration and logistic coordination rather than centralized control. The paper outlines certain barriers to widespread adoption, namely the reality gap of simulation-based training and the lack of transparency in deep-learning models (the “black box” problem). The paper concludes by outlining a future research agenda proposing the use of explainable artificial intelligence (XAI), high-fidelity simulation-to-real transfer, and communication protocol standardization to continue developing robust autonomous capabilities in sustainable maritime logistics.
(This article belongs to the Special Issue Sustainable Management of Shipping, Ports and Logistics)

13 pages, 763 KB  
Article
Supporting Novice Creativity in Design Education Through Human-Centred Explainable AI
by Ahmed Al-sa’di and Dave Miller
Theor. Appl. Ergon. 2026, 2(2), 4; https://doi.org/10.3390/tae2020004 - 24 Mar 2026
Abstract
Generative artificial intelligence tools are reshaping design by enabling novice designers to produce professional-quality user interfaces rapidly. However, for novice designers, exposure to AI-generated outputs that are far beyond their capabilities can inhibit creative growth. In this work, we investigate AI overperformance, a situation in which superior AI outputs lower the creative confidence of novices, and explore whether human-centred and explainable AI interfaces can mitigate such effects while sustaining creative agency. We conducted a within-subjects experiment with 75 novice designers using a web-based research platform. Participants completed mobile app design tasks under three conditions: Human-Only (baseline), AI Overmatch (exposure to superior AI outputs), and XAI-Enhanced (exposure to AI outputs with an embedded explainable interface). A repeated-measures ANOVA indicated that creative self-efficacy varied significantly, F = 24.67, p < 0.001, η² = 0.18. While creative self-efficacy was significantly decreased in the AI Overmatch condition, M = −1.18, SD = 0.32, when compared to the Human-Only condition, M = 0.08, SD = 0.15, it was significantly increased in the XAI-Enhanced condition, M = 0.42, SD = 0.18. This also led to a rise in creative performance across both ideation and output quality. The results showed that the AI Overmatch condition significantly reduced creative self-efficacy and originality; however, this negative effect was mitigated by the XAI-Enhanced interface, which enhanced confidence and idea quality. Mediation analysis demonstrated that expectancy disconfirmation explains the negative impact of AI overperformance on human creativity. These findings provide constructive design principles for educational AI tools and contribute to HCI theory by demonstrating that pedagogically oriented, transparent AI supports human–AI collaboration without diminishing human agency.
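
For context on the statistics reported here, a repeated-measures ANOVA of this shape (75 participants, three within-subject conditions) can be reproduced as below. The data are simulated around the reported condition means; statsmodels' AnovaRM is one standard implementation, not necessarily the authors' tool.

```python
# Minimal repeated-measures ANOVA sketch (simulated data, not the study's).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(75), 3)
conditions = np.tile(["human_only", "ai_overmatch", "xai_enhanced"], 75)
shift = {"human_only": 0.08, "ai_overmatch": -1.18, "xai_enhanced": 0.42}
cse = np.array([shift[c] for c in conditions]) + rng.normal(0, 0.3, 225)

df = pd.DataFrame({"subject": subjects, "condition": conditions, "cse": cse})
res = AnovaRM(df, depvar="cse", subject="subject", within=["condition"]).fit()
print(res.anova_table)   # F statistic and p-value for the condition effect
```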
64 pages, 8530 KB  
Review
Smart Medical Image Processing System Based on Explainable and Generative Artificial Intelligence: A Comprehensive Review
by Cosmin George Nicolăescu, Florentina Magda Enescu, Alin Gheorghiță Mazăre, Nicu Bizon and Cristian Toma
Algorithms 2026, 19(4), 244; https://doi.org/10.3390/a19040244 - 24 Mar 2026
Abstract
In recent years, the integration of advanced methods in medical imaging has become a major topic of interest due to its potential to enhance diagnostic accuracy, improve clinical efficiency, and increase specialists’ confidence in Artificial Intelligence (AI)-based decision-making. This paper explores the synthesis of Explainable AI (XAI) and Generative AI (GAI) in medical imaging, highlighting the advantages and challenges of these emerging technologies. The objective of this paper is to explore how the combined use of XAI and GAI contributes both to interpretability and to diagnostic accuracy. This research represents a systematic literature review conducted in accordance with PRISMA 2020, based on searches carried out in the PubMed, Scopus, IEEE Xplore, MDPI and ScienceDirect databases. Thus, a comprehensive overview of the integration of XAI and GAI in medical imaging is presented, based on recent studies and validated clinical applications. The advantages of combining transparency and data amplification in diagnostic models are highlighted, demonstrating their complementary roles in improving diagnosis using medical imaging. Ongoing challenges in clinical adoption are also emphasised, including interpretability and the need for validated assessment metrics. Beyond technological benefits, the paper also underlines the importance of ethical and legal considerations in the use of XAI and GAI in medical imaging. Based on the detailed analysis of the investigated studies, the paper also proposes a visual and architectural system concept intended for medical imaging, oriented towards research into the development of a unified system capable of detecting multiple types of pathologies. This research provides a detailed perspective on how XAI and GAI can revolutionise medical imaging by optimising data interpretation, enhancing human-AI collaboration, and increasing patient safety.
(This article belongs to the Special Issue Machine Learning and Deep Learning in Medical Imaging Diagnostics)

26 pages, 2728 KB  
Article
Identification of Road Safety Behavior Patterns in Colombia Using Explainable Artificial Intelligence
by Hugo Ordoñez, Cristian Ordoñez, Carlos Cordoba and Luis Revelo
Societies 2026, 16(4), 104; https://doi.org/10.3390/soc16040104 - 24 Mar 2026
Abstract
This study identifies and explains road safety behavior patterns in Colombia using explainable artificial intelligence (XAI). Based on 9232 records and 38 variables from the Territorial Survey of Road Safety Behavior, the CRISP-DM methodology was applied, including data cleaning, normalization, encoding, and feature selection. XGBoost, Random Forest, Bagging, and AdaBoost models were evaluated, incorporating three domain-specific indices: Distraction Index (DI), Risky Road Interaction Index (RRI), and Normative Compliance Index (NCI). AdaBoost achieved the best overall balance (Precision = 0.78; Recall = 0.75; F1-score = 0.77), simultaneously reducing false positives and false negatives. SHAP analysis revealed that environmental and infrastructure factors (lighting, traffic signals, intersections, congestion, perceived crime) explain more variance than self-reported behaviors (mobile phone use, alcohol consumption, speeding). The complementary indices indicated above-average distraction levels, high exposure to risky interactions, and low compliance in specific segments. These findings enable the prioritization of targeted interventions (improvements in lighting and crossings, focused enforcement, and educational campaigns) and support operation with thresholds adjusted to error costs, providing traceable decision support for public road safety policies. Overall, the proposed approach integrates prediction and explainability to enable actionable decisions and continuous monitoring aimed at reducing traffic accidents.
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)
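
The model-comparison step described above can be sketched with scikit-learn as follows; the synthetic dataset merely mirrors the 38-variable shape of the survey data, and the hyperparameters are illustrative, not the study's.

```python
# Hedged sketch: train AdaBoost and report precision / recall / F1.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=38, n_informative=10,
                           random_state=0)   # stand-in for the survey records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"Precision={precision_score(y_te, pred):.2f} "
      f"Recall={recall_score(y_te, pred):.2f} F1={f1_score(y_te, pred):.2f}")
```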

12 pages, 1203 KB  
Article
Interpretable Machine Learning for Emergency Department Triage: Clinical Insights from 133,198 Patients Using the Korean Triage and Acuity Scale (KTAS)
by MyoungJe Song, Jongsun Kim, Eun-Chul Jang and SoonChan Kwon
Diagnostics 2026, 16(6), 954; https://doi.org/10.3390/diagnostics16060954 - 23 Mar 2026
Abstract
Background/Objectives: Emergency room severity classification (KTAS) is essential for patient safety but has limitations due to its reliance on subjective judgment. Existing machine learning models have not been trusted in clinical settings due to their opaque ‘black box’ nature in decision-making processes. To overcome this, the present study aims to develop an explainable machine learning framework that provides a transparent basis for judgment with high accuracy. Method: We retrospectively analyzed 133,198 emergency room visits from 2022 to 2024. We trained Random Forest (RF) and XGBoost models using vital signs and pain scores and applied explainable AI (XAI) techniques to ensure model transparency. Results: Although XGBoost showed the highest predictive performance (94.7% accuracy within a ±1 error margin), we ultimately selected the RF model, which provides a good balance of predictive power (91.6%) and interpretability for clinical use. The results of the XAI analysis confirmed that pain score, age, and systolic blood pressure were the key variables in prediction, strongly aligning with clinical logic. Conclusions: This study demonstrates that explainable AI can provide transparent insights for KTAS prediction beyond the limitations of traditional black-box models. These models may support emergency department triage by improving consistency and assisting clinicians in identifying potentially high-risk patients. However, further external validation is required before routine clinical implementation.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
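
The "±1 error margin" accuracy quoted above is simply the fraction of predictions that land within one triage level of the true KTAS label. A minimal sketch with synthetic labels (KTAS runs from level 1 to 5):

```python
# Sketch of the "accuracy within a ±1 triage-level margin" metric.
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.integers(1, 6, 1000)                           # KTAS levels 1..5
y_pred = np.clip(y_true + rng.integers(-1, 2, 1000), 1, 5)  # noisy predictions

exact = (y_pred == y_true).mean()
within_one = (np.abs(y_pred - y_true) <= 1).mean()          # the ±1 metric
print(f"exact={exact:.3f}, within ±1 level={within_one:.3f}")
```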

24 pages, 3066 KB  
Article
Enhancing Network Traffic Monitoring Through eXplainable Artificial Intelligence Methodologies
by Cătălin-Eugen Bucur, Georgiana Crihan, Anamaria Rădoi, Elena-Grațiela Robe-Voinea and Iustin-Nicolae Moroșan
Telecom 2026, 7(2), 34; https://doi.org/10.3390/telecom7020034 - 23 Mar 2026
Abstract
In the contemporary digital landscape, AI (Artificial Intelligence) has emerged as a pivotal tool in enhancing the defense technologies developed across the entire network infrastructure. As reliance on AI-based decision-making has grown, so has the need for interpretability, transparency, and trustworthiness, leading to the development and integration of XAI (eXplainable Artificial Intelligence). This research paper provides a comprehensive overview of the current state of the art in XAI approaches that can be effectively implemented for network traffic monitoring, especially in critical digital infrastructures. The main contribution of this research article is a comparative analysis of the XAI SHAP (Shapley Additive Explanation) method applied to different datasets obtained from real-time network traffic monitoring, utilizing several representative parameters; the analysis demonstrates the performance, vulnerabilities, and limitations of the proposed method, as well as the security implications for system resources from a cybersecurity perspective. Experimental results show that Ethernet networks offer higher predictability and clearer decision boundaries. Consequently, they are a safer solution for deployment in sensitive network architectures. In contrast, BYOD (Bring Your Own Device) Wi-Fi environments exhibit greater randomness.

20 pages, 2647 KB  
Article
Explainable Artificial Intelligence Unravels the Possible Distinct Roles of VKORC1 and CYP2C9 in Predicting Warfarin Anticoagulation Control
by Kannan Sridharan and Gowri Sivaramakrishnan
Med. Sci. 2026, 14(1), 156; https://doi.org/10.3390/medsci14010156 - 22 Mar 2026
Abstract
Background: Warfarin pharmacogenomics is critical due to its narrow therapeutic index and significant interpatient variability. While machine learning (ML) can predict anticoagulation control status (ACS), its “black-box” nature limits clinical translatability. Explainable Artificial Intelligence (XAI) addresses this by providing interpretable insights. This study applied ML and XAI to a warfarin pharmacogenomic dataset to predict poor ACS and explain model decisions. Methods: A post hoc analysis was conducted on a cross-sectional dataset of 232 patients receiving warfarin for ≥6 months. Data included age, gender, interacting drugs, SAMe-TT2R2 score, and genotypes for CYP2C9, VKORC1, and CYP4F2. Poor ACS was defined as time in therapeutic range (TTR) < 70%. The dataset was split into training (70%) and testing (30%) cohorts. Three models, Random Forest, XGBoost, and Logistic Regression, were developed and evaluated using AUC-ROC, sensitivity, and specificity. XAI techniques, including permutation importance and SHapley Additive exPlanations (SHAP), were employed for global and local interpretability. Results: Of 232 patients, 141 (60.8%) had poor ACS. XGBoost and Random Forest demonstrated comparable predictive accuracy (AUC-ROC: 0.67), outperforming Logistic Regression. Sensitivity was 0.83 and 0.79 for XGBoost and Random Forest, respectively. However, specificity was modest for both ensemble methods (Random Forest: 0.48; XGBoost: 0.41) and extremely low for Logistic Regression (0.04), indicating poor discrimination, particularly for identifying patients with adequate anticoagulation control. Globally, important predictors included age, SAMe-TT2R2 score, CYP2C9 (*2/*2), female gender, and VKORC1 (C/T). XAI revealed predictions were primarily driven by VKORC1, CYP4F2, SAMe-TT2R2 scores, and drug interactions. Concordance between XAI predictions and actual ACS was 78% for adequate and 88.6% for poor ACS. SHAP analysis showed VKORC1 provided a stable risk signal (mean absolute SHAP: 1.44 ± 0.49 in concordant cases), while CYP2C9 was a high-variance, high-impact driver of discordance (mean SHAP: 3.44 ± 3.79 in discordant cases). Conclusions: ML models, particularly ensemble methods, show modest ability to predict poor warfarin control with limited ability to correctly identify patients with adequate control from our dataset. XAI transforms these models into interpretable tools, with SHAP analysis attributing predictions to specific genetic and clinical features. While predictive accuracy remains modest, this approach enhances transparency and provides a foundation for generating hypotheses that may ultimately support clinical decision-making in pharmacogenomic-guided warfarin therapy.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Cardiovascular Medicine)
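
Permutation importance, one of the XAI techniques named above, scores a feature by how much shuffling its values degrades model performance. The sketch below uses simulated genotype-like features; the feature names, encodings, and outcome rule are assumptions for illustration, not the study's data.

```python
# Hedged sketch: permutation importance for a clinical-style classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "age": rng.integers(30, 85, 232),
    "same_tt2r2": rng.integers(0, 5, 232),   # assumed score range
    "vkorc1_ct": rng.integers(0, 2, 232),    # 1 = carries the C/T genotype
})
y = (0.03 * X["age"] + X["vkorc1_ct"] + rng.normal(0, 1, 232) > 3.2).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
print(pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False))
```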

36 pages, 6452 KB  
Review
Explainable and Federated Recommender Systems: A Survey and Conceptual Framework for Trustworthy Personalization
by Alexandra Vultureanu-Albiși and Costin Bădică
Electronics 2026, 15(6), 1292; https://doi.org/10.3390/electronics15061292 - 19 Mar 2026
Abstract
Federated recommender systems (FRS) enable privacy-preserving collaborative training without sharing raw user data, while explainable recommender systems (XRS) aim to improve transparency, trust, and accountability. However, research that integrates federation and explainability remains limited and fragmented. This survey reviews recent work at the intersection of Federated Learning (FL), Explainable Artificial Intelligence (XAI), and recommender systems, referred to as Explainable Federated Recommender Systems (XFRS). We analyze architectures, learning paradigms, personalization strategies, and explainability mechanisms, and discuss their trade-offs in explainability, privacy, and trustworthiness. We propose a unified conceptual framework that links these components in decentralized recommendation settings. Combining bibliometric analysis with a systematic categorization of the literature, we identify key gaps and emerging trends, including the limited adoption of explainability in federated settings. Finally, we summarize open challenges and future directions toward trustworthy, privacy-aware personalized recommender systems.
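
As background for the federated setting surveyed here, the canonical aggregation step (FedAvg) is a size-weighted average of client updates, so raw user data never leaves a client. A toy sketch, not drawn from the survey itself; the client weights and counts are illustrative:

```python
# Toy federated-averaging (FedAvg) aggregation step.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client model parameters."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.1, 0.3])]
sizes = [100, 300, 50]                     # records held by each client
print(fedavg(clients, sizes))              # global update, no raw data shared
```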

33 pages, 3673 KB  
Review
State of the Art in Monitoring Methane Emissions from Arctic–boreal Wetlands and Lakes
by Masoud Mahdianpari, Oliver Sonnentag, Fariba Mohammadimanesh, Ali Radman, Mohammad Marjani, Peter Morse, Phil Marsh, Martin Lavoie, David Risk, Jianghua Wu, Celestine Neba Suh, David Gee, Garfield Giff, Celtie Ferguson, Matthias Peichl and Jean Granger
Remote Sens. 2026, 18(6), 926; https://doi.org/10.3390/rs18060926 - 18 Mar 2026
Abstract
Arctic–boreal wetlands and lakes are among the most significant and most uncertain natural sources of atmospheric methane. Rapid Arctic amplification, permafrost thaw, hydrological change, and increasing ecosystem productivity are expected to intensify methane emissions from high-latitude landscapes. Yet, significant uncertainties persist in quantifying their magnitude, seasonality, and spatial distribution. This review synthesizes the current state of the art in monitoring methane emissions from Arctic–boreal wetlands and lakes through complementary bottom-up and top-down approaches. We examine Earth observation (EO) capabilities, including optical, thermal infrared (TIR), and synthetic aperture radar (SAR) missions, as well as new emerging satellite platforms. We also assess in situ measurement networks, wetland and lake inventories, empirical and process-based models, and atmospheric inversion frameworks. Key gaps remain in representing small waterbodies, shoreline heterogeneity, winter emissions, inventory harmonization, and integration between atmospheric retrievals and surface-based flux models. Moreover, advances in multi-sensor data fusion, explainable artificial intelligence (XAI), physics-informed inversion methods, and geospatial foundation models offer strong potential to reduce these uncertainties. A coordinated integration of satellite observations, field measurements, and transparent modeling frameworks is essential to improve Arctic–boreal methane budgets and strengthen projections of climate feedback in a rapidly warming region.
(This article belongs to the Special Issue Advances in Machine Learning for Wetland Mapping and Monitoring)

24 pages, 2985 KB  
Article
Explainable AI-Based Analysis of Deflection in RC Beams with Longitudinal GFRP Bars in Tension Zone
by Muhammet Karabulut
Polymers 2026, 18(6), 728; https://doi.org/10.3390/polym18060728 - 17 Mar 2026
Abstract
The research gap addressed in this study is the lack of a transparent and quantitative evaluation of the governing parameters influencing deflection behavior in reinforced concrete (RC) beams reinforced with glass fiber-reinforced polymer (GFRP) bars. The objective of this study is to identify and quantify the relative importance of the key parameters controlling deflection in GFRP-reinforced RC beams, which exhibit fundamentally different behavior compared to steel-reinforced beams due to the linear-elastic response of GFRP bars until rupture. To achieve this objective, the method integrates explainable artificial intelligence (XAI) techniques, including SHapley Additive exPlanations (SHAP), Pearson correlation heatmaps, scatter plot analysis, and sensitivity analysis, with experimental structural data obtained from beams with three different concrete strength classes. The main contribution of this study is the quantitative ranking and interpretation of the governing parameters affecting deflection behavior through a transparent and data-driven framework. Key parameters, including elastic modulus (Ec), compressive strength (fck), creep coefficient (φ), failure moment (Mexp), effective moment of inertia (Ieff), and applied load (P), were evaluated. The results consistently indicate that stiffness- and capacity-related parameters dominate the deflection response. Sensitivity analysis reveals that the failure moment (Mexp) is the most influential parameter, contributing approximately 23% of the total relative influence on deflection, followed by compressive strength (fck) and cracking-related parameters. Pearson correlation heatmap and scatter plot analyses further confirm strong relationships between deflection and Ec, fck, φ, and Ieff. The proposed framework improves the interpretability of deflection prediction in GFRP-reinforced RC beams and provides a transparent basis for serviceability-based structural design and performance-oriented assessment.
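
The Pearson-correlation screening used above reduces to computing a correlation matrix over the measured parameters and inspecting it as a heatmap. The sketch below uses simulated values; the rough Ec–fck link follows the common Ec ≈ 4700√fck approximation and is an assumption for the example, not the paper's experimental data.

```python
# Sketch: Pearson correlation matrix over beam parameters (simulated data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
Ec = rng.normal(30e3, 3e3, 60)             # elastic modulus, MPa (assumed)
fck = (Ec / 4700) ** 2                     # loosely tied to Ec for illustration
deflection = 500 / Ec * 1e3 + rng.normal(0, 0.5, 60)  # stiffer -> less deflection

df = pd.DataFrame({"Ec": Ec, "fck": fck, "deflection": deflection})
print(df.corr(method="pearson"))           # the matrix a heatmap would display
```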

41 pages, 8140 KB  
Article
A Hierarchical Signal-to-Policy Learning Framework for Risk-Aware Portfolio Optimization
by Jiayang Yu and Kuo-Chu Chang
Int. J. Financial Stud. 2026, 14(3), 75; https://doi.org/10.3390/ijfs14030075 - 13 Mar 2026
Abstract
This study proposes a hierarchical signal-to-policy learning framework for risk-aware portfolio optimization that integrates model-based return forecasting, explainable machine learning, and deep reinforcement learning (DRL) within a unified architecture. In the first stage, next-period returns are estimated using gradient-boosted tree models, and SHAP-based feature attributions are extracted to provide transparent, factor-level explanations of the predictive signals. In the second stage, a Proximal Policy Optimization (PPO) agent incorporates both predictive forecasts and explanatory signals into its state representation and learns dynamic allocation policies under a mean–CVaR reward function that explicitly penalizes tail risk while controlling trading frictions. By separating signal extraction from policy learning, the proposed architecture allows economically interpretable predictive signals to be incorporated into the policy’s state representation while preserving the flexibility and adaptability of reinforcement learning. Empirical evaluations on U.S. sector ETFs and Dow Jones Industrial Average constituents show that the hierarchical framework delivers higher and more stable out-of-sample risk-adjusted returns than a single-layer DRL agent trained solely on technical indicators, a mean–CVaR-optimized portfolio using the same parameters as the proposed hierarchical model, and standard equal-weight and index-based benchmarks. These results demonstrate that integrating explainable predictive signals with risk-sensitive reinforcement learning improves the robustness and stability of data-driven portfolio strategies.
(This article belongs to the Special Issue Financial Markets: Risk Forecasting, Dynamic Models and Data Analysis)
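
For readers unfamiliar with the mean–CVaR reward mentioned above: CVaR at level α is the average loss in the worst α-fraction of outcomes, and the reward trades mean return against that tail measure. A minimal sketch; the penalty weight and α are assumed values, not the paper's:

```python
# Minimal mean-CVaR reward sketch (simulated returns; lam and alpha assumed).
import numpy as np

def cvar(returns, alpha=0.05):
    """Average loss in the worst alpha-fraction of outcomes (loss = -return)."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)   # Value-at-Risk threshold
    return losses[losses >= var].mean()    # expected shortfall beyond VaR

def reward(returns, lam=2.0, alpha=0.05):
    # Mean return minus a CVaR penalty; lam trades return against tail risk.
    return np.mean(returns) - lam * cvar(returns, alpha)

rng = np.random.default_rng(4)
r = rng.normal(0.0005, 0.01, 252)          # one simulated year of daily returns
print(f"mean={r.mean():.5f}  CVaR(5%)={cvar(r):.5f}  reward={reward(r):.5f}")
```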
