Search Results (1,442)

Search Parameters:
Keywords = SHAP interpretation

33 pages, 4726 KB  
Article
Interpretable Deep Learning for REIT Return Forecasting: A Comparative Study of LSTM, TVP–VAR Proxy, and SHAP-Based Explanations
by Eddy Suprihadi, Nevi Danila, Zaiton Ali and Gede Pramudya Ananta
Int. J. Financial Stud. 2026, 14(3), 73; https://doi.org/10.3390/ijfs14030073 (registering DOI) - 12 Mar 2026
Abstract
Forecasting returns in Real Estate Investment Trust (REIT) markets remains challenging because REIT performance is shaped by nonlinear and time-varying interactions with macro-financial conditions. This study evaluates the forecasting performance of Long Short-Term Memory (LSTM) neural networks relative to a TVP–VAR proxy implemented as an expanding window VAR for weekly U.S. REIT returns. All models are assessed within a harmonized experimental framework that applies consistent data preprocessing, feature construction, and strictly time-ordered out-of-sample evaluation. The results indicate that the baseline LSTM model delivers modest but more stable error-based performance than the TVP–VAR proxy, with improvements concentrated in RMSE and MAE, while evidence for directional predictability is weak and not consistently distinguishable from benchmark performance. To enhance transparency, SHapley Additive exPlanations (SHAP) are used to interpret the LSTM forecasts. The attribution analysis highlights recent REIT returns, global equity indicators—particularly the Hang Seng Index—and crude oil prices as influential predictors, and shows that their contributions vary across volatility regimes, consistent with time-varying spillovers and changing risk transmission. Overall, the study positions LSTM forecasting combined with SHAP-based interpretation as a transparent and reproducible framework for comparative evaluation and driver analysis in weekly REIT returns, rather than as a strong directional timing tool. Full article
(This article belongs to the Special Issue Advances in Financial Econometrics)
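
As a rough illustration of the SHAP-based interpretation step this abstract describes, the sketch below attributes an LSTM forecaster's outputs with shap.GradientExplainer. It is a minimal toy, assuming a TensorFlow/Keras LSTM and synthetic weekly feature windows; the shapes, window length, and training setup are illustrative and not taken from the paper.

```python
# Minimal sketch: SHAP attributions for a Keras LSTM forecaster.
# All data and hyperparameters here are illustrative placeholders.
import numpy as np
import shap
from tensorflow import keras

X_train = np.random.rand(500, 12, 5).astype("float32")  # (samples, weeks, features)
y_train = np.random.rand(500, 1).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(12, 5)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=2, verbose=0)

# GradientExplainer handles deep models; attributions share X's shape,
# so each (timestep, feature) cell gets its own contribution.
explainer = shap.GradientExplainer(model, X_train[:100])
shap_values = explainer.shap_values(X_train[:10])
```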

30 pages, 2372 KB  
Article
Explainable AI for Employee Retention in Green Human Resource Management: Integrating Prediction, Interpretation, and Policy Simulation
by Dinh Cuong Nguyen, Dan Tenney and Elif Kongar
Sustainability 2026, 18(6), 2740; https://doi.org/10.3390/su18062740 - 11 Mar 2026
Abstract
Retaining the green workforce, employees driving sustainability and environmental innovation, is essential for organizational resilience and long-term environmental goals. While prior Green HRM research has primarily relied on survey-based methodologies and theoretical frameworks to examine retention factors, these approaches lack predictive capability and fail to provide actionable, employee-specific insights. This study advances beyond descriptive and correlational analyses by employing explainable artificial intelligence (XAI) to develop a transparent, data-driven framework for identifying attrition drivers and quantitatively evaluating retention strategies. Unlike existing studies that rely on self-reported perceptions, our approach leverages objective HR data and machine learning to predict individual-level attrition risk with calibrated probabilities. Leveraging the IBM HR Analytics dataset as a proxy for sustainability-focused roles, we construct an interpretable logistic regression model with strong predictive performance and isotonic regression calibration. Global and local interpretability techniques, including SHAP, LIME, and permutation importance, show that non-monetary factors, such as excessive overtime, frequent business travel, and limited promotion opportunities, have a greater impact on turnover risk than salary levels. These findings align with Green Human Resource Management (Green HRM) principles, which emphasize work–life balance and employee well-being. Crucially, our policy simulation framework, absent from prior Green HRM studies, demonstrates that eliminating overtime could reduce predicted attrition probability by 17.35% for affected employees, potentially retaining 31 staff members, substantially outperforming modest salary adjustments. This work extends predictive AI into HR analytics by consolidating it with Green HRM through a novel methodology that bridges the gap between prediction and actionable intervention. It represents the first systematic integration of XAI-based predictive modeling with counterfactual policy simulation in environmentally conscious organizations. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
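
The calibration-plus-interpretability pipeline this abstract describes can be sketched with scikit-learn and shap; the toy below pairs an isotonic-calibrated logistic regression with SHAP values on synthetic stand-in data. The features and dataset are placeholders, not the paper's HR variables.

```python
# Sketch: isotonic-calibrated logistic regression plus SHAP drivers.
import shap
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)  # toy HR data

# Isotonic regression calibrates the predicted attrition probabilities.
base = LogisticRegression(max_iter=1000)
clf = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X, y)
calibrated_proba = clf.predict_proba(X)[:, 1]

# SHAP values on the underlying linear model expose global/local drivers.
base.fit(X, y)
shap_values = shap.LinearExplainer(base, X).shap_values(X)
```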

21 pages, 1031 KB  
Article
A Machine Learning Framework for Pavement Performance Prediction Under Extreme Climate Conditions
by Noelia Molinero-Pérez, Tatiana García-Segura, Pedro Ortiz-Garrido, Stella Heras and Amalia Sanz-Benlloch
Mathematics 2026, 14(6), 945; https://doi.org/10.3390/math14060945 - 11 Mar 2026
Abstract
Accurate pavement performance prediction is critical for effective pavement management systems (PMS), enabling optimal maintenance and rehabilitation decisions. The Pavement Condition Index (PCI) is the most widely used performance indicator, yet reliable prediction requires models that capture the full spectrum of deterioration drivers, including structural characteristics, traffic loads, and the increasingly impactful extreme climate events. While machine learning (ML) approaches have improved PCI prediction, most existing models overlook climate extremes. This study proposes a comprehensive ML-based PCI model that integrates extreme climate variables from the Expert Team on Climate Change Detection and Indices (ETCCDI). Eleven algorithms were evaluated on a dataset combining pavement age, structural characteristics, traffic loads, and extreme climate variables. Among the evaluated models, the categorical boosting (CatBoost) model achieved the lowest error values and the highest R² (0.81). Explainability analyses using feature importance and SHapley Additive exPlanations (SHAP) identified the number of icing days (ID), daily temperature range in December (DTR_Dec), and consecutive dry days (CDD) as the extreme climate indicators with the greatest negative predictive influence on PCI. Incorporating ETCCDI indices provided additional explanatory power beyond traditional annual average climatic variables, significantly improving both predictive accuracy and model interpretability. These findings highlight the importance of integrating standardized extreme climate indicators into PMS frameworks to support more resilient and sustainable pavement management under evolving climate conditions. Full article
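
A compact sketch of the explainability step this abstract describes, assuming CatBoost and the shap package; the toy features stand in for pavement age, structure, traffic, and ETCCDI indices such as ID, DTR_Dec, and CDD.

```python
# Sketch: CatBoost PCI regressor with exact tree-SHAP attributions.
import numpy as np
import shap
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 6))                                       # toy deterioration drivers
y = 100 - 30 * X[:, 0] - 10 * X[:, 3] + rng.normal(0, 2, 300)  # toy PCI

model = CatBoostRegressor(iterations=200, verbose=False)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for boosted trees;
# mean |SHAP| per column gives a simple global importance ranking.
shap_values = shap.TreeExplainer(model).shap_values(X)
global_importance = np.abs(shap_values).mean(axis=0)
```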

26 pages, 1118 KB  
Article
Representation-Centric Approach for Android Malware Classification: Interpretability-Driven Feature Engineering on Function Call Graphs
by Gyumin Kim, Dongmin Yoon, NaeJoung Kwak and ByoungYup Lee
Appl. Sci. 2026, 16(6), 2670; https://doi.org/10.3390/app16062670 - 11 Mar 2026
Abstract
The existing research on Android malware detection using graph neural networks (GNNs) has largely focused on architectural improvements, while input node feature representations have received less systematic attention. This study adopts a representation-centric approach to enhance function call graph (FCG)-based malware classification through interpretability-driven feature engineering. We propose a dual-level structural feature framework integrating local topological patterns with global graph-level properties. The initial feature set comprises 13 dimensions: five local degree profile (LDP) features and eight global structural features capturing community structure, execution flow, and connectivity patterns. To mitigate the curse of dimensionality, we apply an interpretability-driven selection using integrated gradients (IG), gradient-weighted class activation mapping (GradCAM), and Shapley additive explanations (SHAP), yielding an optimized seven-dimensional subset. Experiments on the MalNet-Tiny benchmark demonstrate that the proposed approach achieves 94.47 ± 0.25% accuracy with jumping knowledge GraphSAGE (JK-GraphSAGE), improving the LDP-only baseline by 0.32 percentage points while reducing feature dimensionality by 46%. The selected features exhibit consistent importance across four GNN architectures and multiple message-passing layers, demonstrating model-agnostic effectiveness. The results reveal that aggregation mechanisms critically influence feature utility, highlighting the necessity of interpretability-guided design for robust malware detection. This work provides a systematic methodology for feature engineering in graph-based security applications. Full article
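
The five local degree profile (LDP) features this abstract builds on are a standard degree-based node descriptor (own degree plus min/max/mean/std of neighbour degrees); the sketch below computes them with networkx on a random graph standing in for a function call graph.

```python
# Sketch: the five LDP node features on a toy graph standing in for an FCG.
import networkx as nx
import numpy as np

def local_degree_profile(G):
    deg = dict(G.degree())
    feats = []
    for v in G.nodes():
        nd = [deg[u] for u in G.neighbors(v)] or [0]  # isolated node -> 0
        feats.append([deg[v], min(nd), max(nd),
                      float(np.mean(nd)), float(np.std(nd))])
    return np.array(feats)

G = nx.erdos_renyi_graph(50, 0.1, seed=0)
X_nodes = local_degree_profile(G)  # shape (50, 5): the local half of the 13 dims
```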

21 pages, 474 KB  
Article
Performance Evaluation of Machine Learning and Deep Learning Models for Credit Risk Prediction
by Irvine Mapfumo and Thokozani Shongwe
J. Risk Financial Manag. 2026, 19(3), 210; https://doi.org/10.3390/jrfm19030210 - 11 Mar 2026
Abstract
Credit risk prediction is essential for financial institutions to effectively assess the likelihood of borrower defaults and manage associated risks. This study presents a comparative analysis of deep learning architectures and traditional machine learning models on imbalanced credit risk datasets. To address class imbalance, we employ three resampling techniques: Synthetic Minority Over-sampling Technique (SMOTE), Edited Nearest Neighbors (ENN), and the hybrid SMOTE-ENN. We evaluate the performance of various models, including multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), gated recurrent unit (GRU), logistic regression, decision tree, support vector machine (SVM), random forest, adaptive boosting, and extreme gradient boosting. The analysis reveals that SMOTE-ENN combined with MLP achieves the highest F1-score of 0.928 (accuracy 95.4%) on the German dataset, while SMOTE-ENN with random forest attains the best F1-score of 0.789 (accuracy 82.1%) on the Taiwanese dataset. SHapley Additive exPlanations (SHAP) are employed to enhance model interpretability, identifying key drivers of credit default. These findings provide actionable guidance for developing transparent, high-performing, and robust credit risk assessment systems. Full article
(This article belongs to the Section Financial Technology and Innovation)
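
A minimal version of the abstract's best-performing combination, SMOTE-ENN resampling feeding an MLP, using imbalanced-learn; the synthetic dataset and resulting score are placeholders, not the German or Taiwanese credit data.

```python
# Sketch: SMOTE-ENN + MLP inside a pipeline so resampling never
# touches the held-out test split.
from imblearn.combine import SMOTEENN
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("resample", SMOTEENN(random_state=0)),
    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, pipe.predict(X_te)))
```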

23 pages, 2586 KB  
Article
Explainable AI-Based Hyperspectral Classification Reveals Differences in Spectral Response over Phenological Stages
by Rameez Ahsen, Pierpaolo Di Bitonto, Pierfrancesco Novielli, Michele Magarelli, Donato Romano, Martina Di Venosa, Anna Maria Stellacci, Nicola Amoroso, Alfonso Monaco, Bruno Basso, Roberto Bellotti and Sabina Tangaro
Biology 2026, 15(6), 454; https://doi.org/10.3390/biology15060454 - 11 Mar 2026
Abstract
Optimizing nitrogen (N) fertilization is essential for sustaining durum wheat yield and grain quality while reducing the environmental impacts associated with N over-application. Hyperspectral sensing provides a rapid and non-destructive approach for monitoring crop N status. However, high-dimensional data, phenology-dependent spectral responses, and spatial autocorrelation in field measurements limit robust nitrogen classification and interpretation. This study evaluated hyperspectral-based nitrogen status classification in durum wheat under Mediterranean field conditions and identified key spectral regions using explainable artificial intelligence. A field experiment was conducted in Southern Italy using ten N fertilization rates (0–180 kg N ha⁻¹). Canopy reflectance was acquired at the booting and heading stages from georeferenced sampling locations. Three nitrogen stratification strategies (binary Low–High, Extreme, and three-level) were evaluated using Random Forest, SVM-RBF, and XGBoost classifiers. Model performance was assessed using spatially independent Leave-One-Plot-Out cross-validation at both the sample and plot levels, with plot-level predictions derived through majority voting. Classification robustness was strongly influenced by the stratification strategy and phenological stage. The binary Low–High stratification achieved the highest sample-level accuracy, with a maximum of 0.78 at booting (SVM-RBF) and 0.75 at heading (SVM-RBF), whereas the Extreme stratification produced intermediate performance, with maximum accuracies of 0.73 at booting (SVM-RBF) and 0.63 at heading (XGBoost). Plot-level aggregation improved performance, reaching up to 0.90 at booting and 1.00 at heading. SHAP analysis highlighted red, red-edge, and near-infrared wavelengths as the dominant contributors, with increased reliance on longer wavelengths at the heading stage. Overall, explainable machine learning provides a robust framework for hyperspectral nitrogen monitoring in durum wheat. Full article
(This article belongs to the Special Issue Adaptation of Living Species to Environmental Stress (2nd Edition))
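
Leave-One-Plot-Out cross-validation, as used in this abstract, is a grouped CV in which all samples from one plot are held out together; a sketch with scikit-learn's LeaveOneGroupOut on placeholder reflectance data follows.

```python
# Sketch: spatially independent Leave-One-Plot-Out CV via grouped folds.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 20))         # stand-in for band reflectances
y = rng.integers(0, 2, 200)       # stand-in Low/High nitrogen labels
plots = rng.integers(0, 10, 200)  # plot id per sample

scores = cross_val_score(SVC(kernel="rbf"), X, y,
                         cv=LeaveOneGroupOut(), groups=plots)
```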

20 pages, 1559 KB  
Article
Prediction of Bulk Density in Laser Powder Bed Fusion of Pure Zinc Using Supervised Machine Learning
by Kristijan Šket, Snehashis Pal, Tomaž Brajlih, Igor Drstvenšek and Mirko Ficko
Metals 2026, 16(3), 309; https://doi.org/10.3390/met16030309 - 11 Mar 2026
Abstract
This work used machine learning to forecast product density and optimize the laser powder bed fusion (LPBF) process for parts made of pure zinc (Zn). A relative density of 90–97% (6.42–6.95 g/cm³) was obtained by varying combinations of key process parameters, including laser power, scanning speed, track overlapping, hatch spacing, and layer thickness. Machine learning provided models for density prediction and better comprehension of the impact of input parameters. A SHapley Additive exPlanations (SHAP) analysis quantified the contributions of specific features, enhancing model interpretability. Fifty-one experimental runs were used to test several methods, including Bayesian ridge, CatBoost, elastic net, lasso, linear regression, random forest, ridge regression, and XGBoost. CatBoost performed best, with a test coefficient of determination (R²) of 0.893, a mean absolute percentage error (MAPE) of 0.010, and a root mean square error (RMSE) of 0.015. A feature importance analysis showed that laser power (49%) and scanning speed (42%) had the greatest influence, while hatch spacing (5%) and layer thickness (4%) had minimal impacts on product density. Therefore, selecting the correct optimized set of process parameters determines the resulting density and can support more efficient LPBF process development. Full article
(This article belongs to the Special Issue Advances in Metal Additive Manufacturing: Process and Performance)
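
Percentage-style importances like the laser power (49%) and scanning speed (42%) figures can be read straight off CatBoost, whose default importances are normalized to sum to 100; the sketch below uses illustrative parameter names and toy density data, not the paper's 51 experimental runs.

```python
# Sketch: CatBoost feature importances as percentages of influence.
import numpy as np
from catboost import CatBoostRegressor, Pool

rng = np.random.default_rng(0)
cols = ["power", "speed", "overlap", "hatch", "layer"]  # illustrative names
X = rng.random((51, 5))
y = 6.4 + 0.5 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.02, 51)  # toy density

model = CatBoostRegressor(iterations=300, verbose=False).fit(X, y)

# Default (PredictionValuesChange) importances sum to 100.
for name, pct in sorted(zip(cols, model.get_feature_importance(Pool(X, y))),
                        key=lambda t: -t[1]):
    print(f"{name}: {pct:.1f}%")
```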

32 pages, 6394 KB  
Article
A Machine-Learning Approach for Evaluating Perceived Walking Comfort in Macau’s High-Density Urban Environment
by Zhimu Gong, Junling Zhou, Xuefang Zhang, Lingfeng Xie, Guanxu Luo, Xiping Luo, Jiayi Fu, Yitong Guo and Xiaoyan Zhi
Buildings 2026, 16(6), 1103; https://doi.org/10.3390/buildings16061103 - 10 Mar 2026
Abstract
Evaluating pedestrian comfort in high-density cities requires methods integrating subjective experience with urban morphology. This study develops an integrated framework combining pairwise comparison scoring, semantic segmentation (DeepLabv3+), ensemble learning (Random Forest), and SHAP-based interpretability. EfficientNet-B7 is used to expand pairwise datasets and derive continuous comfort scores across Macau’s street network. Four experiential street types are identified: historical–cultural districts, urban lifestyle areas, natural corridors, and leisure zones. SHAP analysis illustrates stable associations between predicted comfort scores and multi-layered spatial configurations, including cultural legibility and sequencing in historic cores, moderate greenery with functional anchoring in residential areas, and scene coherence in tourism zones. Semantic features serve as effective morphological proxies within the modeling framework. Methodologically, the framework demonstrates how explainable machine learning can be applied to dense Asian cities under observational conditions. Design implications emphasize interface continuity, microclimate adaptation, and functional enrichment, suggesting that pedestrian comfort is closely related to coherent spatial–experiential structures rather than isolated environmental upgrades. Full article
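
One standard way to turn pairwise comfort judgements into the continuous scores this abstract mentions is a Bradley-Terry fit; the sketch below implements it as regularized logistic regression on indicator differences. This is a generic technique on toy data; the paper's exact scoring scheme may differ.

```python
# Sketch: Bradley-Terry scores from pairwise "image w beat image l" data.
import numpy as np
from sklearn.linear_model import LogisticRegression

pairs = [(0, 1), (2, 1), (0, 3), (2, 3), (2, 0), (3, 1)]  # toy judgements
n_images = 4

X = np.zeros((len(pairs), n_images))
for i, (w, l) in enumerate(pairs):
    X[i, w], X[i, l] = 1.0, -1.0   # +1 for winner, -1 for loser
y = np.ones(len(pairs))

# Mirror each comparison so both classes are present.
X, y = np.vstack([X, -X]), np.concatenate([y, np.zeros(len(pairs))])

bt = LogisticRegression(fit_intercept=False, C=10.0).fit(X, y)
comfort_scores = bt.coef_.ravel()  # one continuous score per image
```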

43 pages, 10191 KB  
Article
Heatwave Effects of Emerging Industry Clustering in Chinese Urban Agglomerations
by Yang Chen, Wanhua Huang and Xu Wei
Sustainability 2026, 18(6), 2697; https://doi.org/10.3390/su18062697 - 10 Mar 2026
Abstract
Under the dual pressures of global warming and high-density urbanization, extreme heatwaves have emerged as a critical ecological risk constraining the sustainable development of Chinese urban agglomerations. Based on multi-source remote sensing, meteorological, and economic data for 19 major urban agglomerations from 2014 to 2023, this study develops an emerging industrial agglomeration–energy activity–thermal environment response framework. Using XGBoost-SHAP interpretable machine learning and GeoShapley spatial decomposition, the nonlinear and spatially heterogeneous impacts of industrial agglomeration on heatwave characteristics are systematically quantified. Results indicate that the heatwave index increased from 0.619 to 0.637, with the model explaining 80.7 percent and 74.7 percent of variance in duration and frequency, respectively. Moreover, emerging industrial agglomeration ranks among the top contributors to both duration and frequency, explaining over 20 percent of duration variability and surpassing traditional industrial and socioeconomic factors. Heatwave duration and frequency exhibit nonlinear relationships with agglomeration intensity. During early agglomeration, energy efficiency improvements generated marginal cooling of five to eight percent, whereas intensified agglomeration amplifies duration by over ten percent through energy-intensive activities and infrastructure heat islands. Meanwhile, green innovation at high agglomeration levels mitigates six to nine percent of the warming effect. In addition, spatial differentiation of industrial agglomeration, reflected by a Gini increase from 0.685 to 0.728 and an inter-regional contribution of around 62 percent, underpins heat risk heterogeneity. Furthermore, natural endowments, socioeconomic development, infrastructure, environmental regulation, and technological innovation significantly moderate these effects, with high-tech innovation attenuating heatwave amplification. Consequently, the thermal effects of industrial agglomeration follow a three-stage spatial evolution of warming, stabilization, and counter-regulation. These findings highlight that coordinated optimization of industrial spatial layout and green technological innovation is crucial for enhancing climate resilience and promoting low-carbon transformation in urban agglomerations. Full article
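
The XGBoost-SHAP machinery behind contribution shares like "over 20 percent of duration variability" can be approximated by normalizing mean |SHAP| values per feature; a hedged toy sketch, with synthetic covariates in place of the study's agglomeration and energy variables:

```python
# Sketch: XGBoost + SHAP, with mean |SHAP| turned into contribution shares.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((500, 8))  # stand-in agglomeration/energy covariates
y = 0.6 + 0.3 * X[:, 0] ** 2 + 0.1 * X[:, 1] + rng.normal(0, 0.02, 500)

model = xgb.XGBRegressor(n_estimators=300).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)

shares = np.abs(sv).mean(axis=0)
shares = 100 * shares / shares.sum()  # % contribution per feature
```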

25 pages, 4156 KB  
Article
Building Trustworthy Digital Archival Services: A Deep Semantic Auditing Approach Based on SHAP Interpretability
by Lihang Feng, Zhengyang Cao, Lili Sun, Yongshi Jin, Jiantao Shi and Dong Wang
Electronics 2026, 15(6), 1147; https://doi.org/10.3390/electronics15061147 - 10 Mar 2026
Viewed by 21
Abstract
In the context of the cross-disciplinary integration of data science and archival management, archival openness auditing stands as a critical process for public information access but faces challenges in processing long texts with sparse core information. To address this challenge, this paper proposes an Assisted Archival Auditing Model (ALC-MCFN) based on deep semantic understanding and decision transparency. The model aims to leverage intelligent analytics to optimize the decision-making process of archival openness. For deep semantic understanding, a semantic-aware dynamic truncation mechanism is first employed to effectively remove redundancy while preserving key logical structures. Subsequently, by fusing global, local, and logical semantic features extracted by BERT, TextCNN, and TextGCN, the model overcomes the limitations of single-view feature representation. Furthermore, to address the “black box” issue of deep learning in compliance auditing, the SHAP method is introduced to provide post hoc interpretability. By visualizing the contribution of key textual features to the auditing results, the model enhances the transparency and trustworthiness of decision-making. Experimental results demonstrate that ALC-MCFN outperforms mainstream baseline models, with a 77.21% F1-score on the self-built archival-domain OParchives dataset (1.15 percentage points higher than the BERT baseline), providing robust data science support for risk control and efficiency improvement in intelligent archival management. Full article
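
The BERT/TextCNN/TextGCN fusion is described only at a high level; one plain way to combine three semantic views is concatenation followed by a classifier head, sketched below in PyTorch with illustrative dimensions. ALC-MCFN's actual fusion may be more elaborate.

```python
# Sketch: concatenating global (BERT), local (TextCNN), and logical
# (TextGCN) text features before classification; dims are illustrative.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, d_bert=768, d_cnn=300, d_gcn=200, n_classes=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(d_bert + d_cnn + d_gcn, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, h_bert, h_cnn, h_gcn):
        return self.fc(torch.cat([h_bert, h_cnn, h_gcn], dim=-1))

head = FusionHead()
logits = head(torch.randn(4, 768), torch.randn(4, 300), torch.randn(4, 200))
```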

22 pages, 17254 KB  
Article
Landslide Susceptibility Assessment Based on a Deep Learning-Derived Landslide Inventory in Moxi Town, Sichuan, China
by Yitong Yao, Yixiang Du, Wenjun Zhang, Xianwen Liu, Jialun Cai, Hui Feng, Hongyao Xiang, Rong Hu, Yuhao Yang and Tongben Fu
Remote Sens. 2026, 18(6), 849; https://doi.org/10.3390/rs18060849 - 10 Mar 2026
Viewed by 38
Abstract
Landslides occur suddenly and can cause damage over wide areas; accurate prediction of their susceptibility is an important prerequisite for regional risk prevention and control. To address the difficulties in acquiring landslide inventories in complex terrain areas and the insufficient interpretability of existing prediction models, this study proposes a landslide susceptibility assessment (LSA) framework that integrates automated sample detection and interpretability analysis. The proposed framework is applied to Moxi Town, a typical alpine valley area in Sichuan Province, China. A Mask R-CNN instance segmentation model was introduced to achieve automated detection of landslide samples, resulting in a high-quality inventory containing 923 landslides. Based on the spatial relationships between the landslide inventory and influencing factors, a convolutional neural network (CNN) landslide susceptibility assessment model incorporating Shapley Additive exPlanations (SHAP) interpretability analysis was constructed. The CNN model was further compared with random forest (RF) and extreme gradient boosting (XGBoost) machine learning models. The results show that the AUC value of the CNN model is 4.3% and 3.2% higher than those of the RF and XGBoost models, respectively, and that the CNN significantly reduces the salt-and-pepper effect in landslide susceptibility mapping (LSM). These results validate the reliability of the proposed framework, which can provide technical support for landslide disaster prevention and monitoring. Full article
(This article belongs to the Special Issue Landslide Detection Using Machine and Deep Learning)
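
Instance segmentation for automated landslide detection, as this abstract describes, can be prototyped with torchvision's Mask R-CNN; the sketch below runs COCO-pretrained inference on a placeholder tile (torchvision >= 0.13 assumed), whereas the paper fine-tunes on its own imagery.

```python
# Sketch: Mask R-CNN inference on a placeholder remote-sensing tile.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO weights

img = torch.rand(3, 512, 512)  # stand-in image tensor in [0, 1]
with torch.no_grad():
    out = model([img])[0]      # dict: boxes, labels, scores, masks

keep = out["scores"] > 0.5     # confidence threshold
masks = out["masks"][keep]     # per-instance segmentation masks
```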

22 pages, 6838 KB  
Article
A Dynamic Landslide Susceptibility Assessment Method Based on Multi-Source Remote Sensing, XGBoost, and SHAP: A Case Study in Yongsheng County, Yunnan Province
by Shuhao Yan, Shanshan Wang, Yixuan Guo, Xingxing Rong, Dan Zhao and Wei Li
Remote Sens. 2026, 18(6), 845; https://doi.org/10.3390/rs18060845 - 10 Mar 2026
Viewed by 42
Abstract
Landslide susceptibility assessment (LSA) heavily depends on the completeness of landslide inventories and the interpretability of predictive models. Conventional inventories, based solely on historical records, often fail to identify newly occurring or slow-moving landslides, leading to biased susceptibility estimates. To address this limitation, this study proposes a dynamic LSA framework that integrates multi-source remote sensing data, Extreme Gradient Boosting (XGBoost) modeling, and Shapley Additive Explanations (SHAP), with a case study in Yongsheng County, Yunnan Province, China. This study jointly uses multi-temporal optical remote sensing imagery and Sentinel-1 InSAR (Interferometric Synthetic Aperture Radar) deformation data to update the landslide inventory. Compared with the historical inventory containing 334 landslide points, the updated inventory incorporates an additional 140 deformation-related landslide hazard points. XGBoost models were developed using conditioning factors selected through multicollinearity analysis to evaluate the influence of inventory completeness on model performance. Results show that the model based on the updated inventory achieves a significant improvement in predictive accuracy: the Area Under the Curve (AUC) value for susceptibility zoning increases from 0.857 to 0.928, and high and very high susceptibility zones occupy 8.28% of the study area in the resulting susceptibility map. SHAP-based interpretation reveals that distance to roads and maximum deformation rate are the dominant factors controlling landslide occurrence, reflecting the combined effects of human activities and dynamic ground deformation. Overall, the proposed framework improves both the accuracy and interpretability of LSA and demonstrates the effectiveness of multi-source remote sensing data for dynamic landslide hazard assessment in mountainous regions. Full article
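
The multicollinearity screening step mentioned in this abstract is commonly done with variance inflation factors; a toy sketch using statsmodels before fitting XGBoost, with hypothetical factor names in place of the study's conditioning factors:

```python
# Sketch: drop high-VIF conditioning factors, then fit XGBoost.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((400, 6)),
                  columns=[f"factor_{i}" for i in range(6)])  # toy factors
y = rng.integers(0, 2, 400)                                   # landslide / not

vif = [variance_inflation_factor(df.values, i) for i in range(df.shape[1])]
keep = [c for c, v in zip(df.columns, vif) if v < 10]  # common threshold

model = XGBClassifier(n_estimators=300).fit(df[keep], y)
```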

16 pages, 1277 KB  
Article
Limitations of MMSE in Cognitive Assessment: Revealing Latent Risk via Structural Brain Atrophy
by Moonhyeok Choi, Jaehyun Jo and Jinhyoung Jeong
Life 2026, 16(3), 451; https://doi.org/10.3390/life16030451 - 10 Mar 2026
Viewed by 36
Abstract
The primary objective of this study was to evaluate the relative contributions of the MMSE and nWBV in three-class cognitive stage classification, with a secondary objective of conducting a subgroup analysis to investigate latent risk within the MMSE-normal population. To achieve this, we proposed an explainable deep-learning-based analytical framework integrating the MMSE with nWBV, a structural brain atrophy indicator, and systematically assessed the relative contributions of each variable in cognitive impairment stage classification and potential risk screening. Although the MMSE is widely used in clinical practice as a cognitive screening tool, it has limited sensitivity to early or subtle cognitive decline and may not adequately reflect structural brain changes due to the ceiling effect. To address this limitation, we compared four tabular deep learning models—MLP, Tab ResNet, Tab Transformer, and FT Transformer—under identical fivefold cross-validation conditions. Age and sex were fixed as covariates, and feature ablation analysis was conducted to examine the independent and combined effects of the MMSE and nWBV. The results showed no statistically significant differences in classification performance among model architectures, indicating that predictive performance was primarily determined by the informational content of the input variables rather than model complexity. In the feature ablation analysis, the MMSE alone demonstrated strong discriminative power, whereas nWBV alone showed relatively limited performance; however, when combined with the MMSE, nWBV consistently improved classification performance. Furthermore, for interpretability analysis, both Integrated Gradients (IG) and SHAP were applied to validate variable contributions from complementary perspectives. Across both methods, the MMSE and nWBV were repeatedly identified as key contributing features, and interpretability stability was maintained throughout cross-validation folds, supporting the robustness and reliability of the explanatory results. Beyond simple model performance comparisons, this study provides evidence supporting the complementary integration of structural brain atrophy information into MMSE-centered traditional cognitive assessment by jointly considering variable contribution and interpretability stability. This approach is expected to contribute to precision risk screening and clinical decision support in the early stages of cognitive decline. Although the MMSE exhibited strong discriminative performance, nWBV provided complementary structural risk signals within the MMSE-normal subgroup, suggesting that integrating cognitive assessment with structural biomarkers may enhance early risk identification. Full article
(This article belongs to the Section Physiology and Pathology)
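
Integrated Gradients, one of the two attribution methods this abstract pairs with SHAP, is available in Captum; the sketch below attributes a tiny stand-in tabular network, with the four-feature input only loosely echoing the MMSE/nWBV/age/sex setup.

```python
# Sketch: Integrated Gradients attributions for a toy tabular classifier.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3)).eval()

x = torch.randn(8, 4)  # stand-in rows of [MMSE, nWBV, age, sex]-like inputs
ig = IntegratedGradients(model)

# Contribution of each feature to the class-0 logit, relative to a
# zero baseline.
attr = ig.attribute(x, baselines=torch.zeros_like(x), target=0)
```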

20 pages, 508 KB  
Article
Predictive Modelling of Credit Default Risk Using Machine Learning and Ensemble Techniques
by Mofoka Rebuseditsoe Mathibela and Daniel Maposa
Math. Comput. Appl. 2026, 31(2), 45; https://doi.org/10.3390/mca31020045 - 10 Mar 2026
Viewed by 59
Abstract
This study develops a hybrid framework integrating ensemble learning with explainable artificial intelligence to address the methodological challenge of balancing predictive accuracy and interpretability in credit risk model comparison. Using the German Credit Dataset, we implemented a comprehensive preprocessing pipeline, including feature encoding, scaling, and SMOTE for class imbalance handling. Four base models (logistic regression, Random Forest, XGBoost, and Multilayer Perceptron) were combined through a stacked ensemble with a logistic regression meta-learner. The ensemble demonstrated strong performance, achieving an AUC of 0.761, precision of 0.783, recall of 0.806, and an F1 score of 0.794, the highest scores among all models tested. Notably, Random Forest (AUC = 0.749) surpassed XGBoost (AUC = 0.733), challenging conventional algorithmic hierarchies. SHAP analysis provided transparent global and local interpretability, identifying Current Account status (SHAP = 0.153), Loan Duration (0.064), and Savings Account (0.063) as the dominant predictor variables. Class-imbalance handling and threshold optimisation enhanced practical utility by reducing false positives from 39 to 16, thereby aligning with financial risk priorities. The framework provides a reproducible methodological pipeline for systematically comparing credit scoring approaches, demonstrating how predictive performance can be evaluated alongside interpretability considerations within a benchmark dataset context. Full article
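
A condensed sketch of the stacked ensemble and threshold optimisation this abstract describes, using scikit-learn's StackingClassifier with a logistic meta-learner; the XGBoost base model is omitted for brevity, and the data and scores are synthetic.

```python
# Sketch: stacked ensemble + decision-threshold sweep.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("mlp", MLPClassifier(max_iter=500, random_state=0))],
    final_estimator=LogisticRegression(),  # logistic meta-learner
).fit(X_tr, y_tr)

# Sweep the threshold instead of defaulting to 0.5, e.g. to cut
# false positives at an acceptable recall.
proba = stack.predict_proba(X_te)[:, 1]
best_t = max(np.linspace(0.1, 0.9, 81),
             key=lambda t: f1_score(y_te, proba >= t))
```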

18 pages, 1769 KB  
Article
An Artificial Intelligence Approach to Predict Tracheostomy Requirement in Mechanically Ventilated Critically Ill Patients: A Retrospective Single-Center Study
by Dicle Birtane, Fatma Özdemir, Damla Yavuz and Zafer Çukurova
J. Clin. Med. 2026, 15(5), 2081; https://doi.org/10.3390/jcm15052081 - 9 Mar 2026
Viewed by 90
Abstract
Background: In critically ill patients, tracheostomy decisions are driven by heterogeneous and dynamic clinical trajectories, and no universally accepted scoring system exists to reliably predict tracheostomy requirement. An accurate and interpretable prediction model could help earlier decision-making and potentially reduce prolonged mechanical ventilation (MV) and failed weaning. Methods: In this retrospective study, data from 6507 mechanically ventilated intensive care unit (ICU) patients were analyzed using an electronic clinical decision support system; 1049 patients required tracheostomy and 5458 did not. The primary outcome was the prediction of tracheostomy occurrence during ICU stay based on invasive mechanical ventilation (IMV) parameters obtained within the first five days. The secondary outcome was the identification of the most influential parameters guiding tracheostomy decision-making during early IMV. Ten machine learning algorithms were developed using an 80/20 train–test split. Model performance was assessed using discrimination, calibration, and clinical performance metrics. Explainability was evaluated using SHapley Additive exPlanations (SHAP) analysis. Results: Among all models, Gradient Boosting demonstrated strong discrimination and calibration performance (AUROC 0.92, AUPRC 0.56, specificity 97%, F1 score 0.46, Brier score 0.078). In the Gradient Boosting model, feature importance analysis demonstrated that secretion count was the strongest predictor of tracheostomy requirement, accounting for 14.72% of the model’s predictive contribution. This was followed by lactate level (6.12%), arterial pH (3.74%), and peak airway pressure (3.57%). SHAP-based analyses consistently identified secretion count as the strongest predictor of tracheostomy requirement, followed by lactate level, Glasgow Coma Scale (GCS), and arterial pH. In addition, SHAP provided clinically interpretable insights into the direction and magnitude of the effects of individual predictors. Conclusions: Machine learning models integrating early-phase ventilatory and physiological data may enable clinically meaningful prediction of tracheostomy requirement. The combination of strong performance and explainability suggests potential utility as a decision-support tool in critically ill patients requiring prolonged MV. Full article
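
The discrimination and calibration metrics reported here (AUROC, AUPRC, Brier score, F1) are all one-liners in scikit-learn; a sketch on synthetic labels and scores:

```python
# Sketch: discrimination + calibration metrics for a binary predictor.
import numpy as np
from sklearn.metrics import (average_precision_score, brier_score_loss,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                              # toy outcomes
y_prob = np.clip(0.4 * y_true + 0.6 * rng.random(1000), 0, 1)  # toy scores

print("AUROC:", roc_auc_score(y_true, y_prob))          # discrimination
print("AUPRC:", average_precision_score(y_true, y_prob))
print("Brier:", brier_score_loss(y_true, y_prob))       # calibration
print("F1:   ", f1_score(y_true, y_prob >= 0.5))
```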
