Search Results (922)

Search Parameters:
Keywords = Shapley value

26 pages, 11478 KB  
Article
The Analysis of Urban Nighttime Light Spatial Heterogeneity and Driving Factors Based on SDGSAT-1 Data
by Jinke Liu, Yiran Zhang, Yifei Zhu, Xuesheng Zhao and Wei Guo
Sensors 2026, 26(7), 2094; https://doi.org/10.3390/s26072094 - 27 Mar 2026
Abstract
Artificial light at night (ALAN) data is widely used in urban function analysis and socio-economic activity monitoring, but its application at the micro-scale of cities still faces challenges. This study utilizes high spatial resolution SDGSAT-1 nighttime light data to explore the spatial heterogeneity of ALAN at the street scale in two representative Chinese cities—Beijing and Guangzhou. By integrating multi-source data (such as building vector data, road networks, and point of interest data), a multi-dimensional indicator system covering urban morphology, functional structure, and transportation accessibility is constructed. Based on this, the study employs a Geographically Weighted Random Forest (GWRF) model combined with the Shapley Additive Explanations (SHAP) method to deeply analyze the non-linear relationships between ALAN intensity and multiple driving factors, as well as their spatial variability. Results demonstrate the superiority of the GWRF model over global models in capturing spatial non-stationarity, with R² values of 0.67 for Beijing and 0.74 for Guangzhou, compared to 0.62 and 0.71 for the random forest models, respectively. Road density is the dominant factor influencing nighttime light intensity in both Beijing and Guangzhou. However, the relationship between ALAN and its driving factors varies across these cities. In Beijing, a balanced multi-factor model is observed, whereas in Guangzhou, ALAN intensity is primarily driven by road density, with secondary influences from other factors like sky view factor. This study validates SDGSAT-1 for micro-scale analysis, offering a scientific basis for differentiated urban lighting planning.
(This article belongs to the Special Issue Sensor-Based Systems for Environmental Monitoring and Assessment)
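Several of the results in this listing pair tree-ensemble models with SHAP attribution, as in the abstract above. For orientation, here is a minimal, self-contained sketch of that workflow using the open-source shap package with scikit-learn; the synthetic data and feature names are illustrative placeholders, not the study's SDGSAT-1 pipeline.

```python
# Minimal sketch: SHAP attribution for a tree-ensemble regressor.
# Data and feature names are synthetic placeholders, not the authors' dataset.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 3))   # e.g., road density, POI density, sky view factor (illustrative)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles efficiently.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature.
print(np.abs(shap_values).mean(axis=0))
```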
31 pages, 6307 KB  
Article
A Novel Urban Biological Parameter Estimation Method Based on LiDAR Point Cloud Single-Tree Segmentation
by Tongtong Lu, Fang Huang, Yuxin Ding, Qingzhe Lv, Hao Guan, Gongwei Li, Xiang Kang and Geer Teng
Remote Sens. 2026, 18(7), 1001; https://doi.org/10.3390/rs18071001 - 27 Mar 2026
Abstract
To address diverse urban tree structures and the difficulty of extracting and utilizing vegetation point clouds, this study proposed single-tree-scale biological parameter estimation methods for urban scenarios to enhance the application value of point clouds in urban greening management. For single-tree segmentation, a method was constructed based on constraints from the trees’ geometric features combined with gravitational modeling, called the CGF-CG single-tree segmentation method. This method (i) combines clustering and principal direction analysis to extract trunk points, (ii) introduces canopy segmentation based on trunk positions, and (iii) optimizes edge point attributes via a gravitational model. Based on CGF-CG’s accurate results, an improved random forest method for single-tree biological parameter (IRF-BP) estimation (aboveground biomass, carbon storage, leaf area index, living vegetation volume) was proposed: (i) correlation analysis with variable screening, (ii) adaptive feature selection and pigeon-inspired optimization to enhance model generalization, and (iii) Shapley Additive Explanations (SHAP) to improve interpretability. Based on these, a complete model for different tree species was constructed. Validation showed that CGF-CG exhibited negligible over-segmentation and under-segmentation in the selected study areas, with overall average precision, recall, and F1-score over 98.5%. Additionally, over the selected region as a whole, the overall mF1 score, mPTP, and mPTR of our method are 99.13%, 99.15%, and 99.12%, respectively, which are superior to the Forestmetrics, lidR, PyCrown, and DBSCAN methods. IRF-BP performed well, with a highest R² of 0.81 and a lowest mean absolute percentage error of 7.5%, effectively surpassing traditional models such as RFR, GBR, KNN, and XGB. In summary, the results provide theoretical and technical support for urban green resource management and evaluation.
20 pages, 6374 KB  
Article
Uncovering the Spatiotemporal Evolution and Driving Factors of Flash Flood in the Qinghai–Tibet Plateau
by Chaoyue Li, Xinyu Feng, Guotao Zhang, Zhonggen Wang, Wen Jin and Chengjie Li
Remote Sens. 2026, 18(7), 996; https://doi.org/10.3390/rs18070996 - 26 Mar 2026
Abstract
Frequent flash floods threaten human well-being, hydropower infrastructure, and ecosystems. However, the long-term evolution of flash flood patterns over recent decades remains insufficiently understood, particularly in data-scarce high-altitude regions. Using multi-source remote sensing data integrated with historical disaster records and field investigations, this study examined the spatiotemporal evolution and driving factors of flash floods across the Qinghai–Tibet Plateau (QTP). The results indicate that flash floods have increased exponentially, which may be influenced by disaster management policies, with peaks in July–August and frequent occurrences from April to September. The seasonal trajectory of the center of gravity of flash floods from April to September exhibited a clear directional pattern. Regions with the highest disaster density were concentrated in the headwaters of five major rivers, including the Yarlung Zangbo, Jinsha, Nu, Lancang, and Yellow Rivers. Shapley Additive Explanation (SHAP) and Random Forest analyses reveal that soil moisture, anthropogenic intensity, and seasonal runoff variability are the dominant driving factors. With ongoing socioeconomic development, intensified human activities have become a key contributor to the increasing frequency of flash floods. These findings highlight the value of remote sensing-based assessments for flash flood monitoring and early warning and provide scientific support for risk mitigation, loss reduction, and the advancement of water-related targets under the United Nations’ Sustainable Development Goals.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
32 pages, 3153 KB  
Article
A Rough Set-Based Decision Framework for Customer-Driven Product Design: A Case Study on Public-Access Faucets
by Hong Jia and Jianning Su
Appl. Sci. 2026, 16(7), 3193; https://doi.org/10.3390/app16073193 - 26 Mar 2026
Abstract
Translating heterogeneous user requirements (URs) into robust engineering specifications for public-access products is a critical challenge, often impeded by information uncertainty and fragmented design processes. To address this, we propose an integrated decision-making framework underpinned by Rough Set Theory (RST) as a unified mathematical language for uncertainty management. The framework systematically guides customer-driven product development by integrating a series of RST-based methods: a Kano model analysis to screen URs, a novel rough-Shapley value model to determine their interdependent weights, a rough-QFD approach to translate them into weighted design requirements (DRs), and the rough-VIKOR method to select the optimal design alternative. A case study on public-access faucets validates the framework’s efficacy. The results demonstrate its capability to identify critical URs, derive robust DRs by systematically resolving technical attribute conflicts, and select a superior design solution that optimally balances hygiene, durability, and user experience. The application of the framework successfully identified Alternative A1 (Push-Activated Spout) as the optimal solution, demonstrating superior performance in proactive hygiene and core functionality. The results prove that maintaining data integrity through a unified RST pipeline effectively resolves early-stage design conflicts. This research contributes a rigorous, data-driven decision support system that enhances objectivity and information fidelity, providing a transparent and auditable methodology for designing human-centered public infrastructure.
20 pages, 2388 KB  
Article
The Role of Green Official Development Assistance in the Implementation of Sustainable Development Goal 15 Using Explainable AI
by Jeongyeon Chae and Eunho Choi
Forests 2026, 17(4), 412; https://doi.org/10.3390/f17040412 - 26 Mar 2026
Abstract
The Sustainable Development Goals (SDGs) are global objectives adopted by countries worldwide to achieve sustainable development by 2030 and consist of 17 goals and 169 specific targets. Among them, SDG 15 (Life on Land) aims to conserve terrestrial ecosystems and promote their sustainable use. Successful implementation of SDG 15 requires continuous management of terrestrial ecosystems and positive forest transitions. However, systematic analyses examining the role of green official development assistance (ODA), which supports environmental improvement in developing countries, remain limited. Accordingly, this study investigates the role that green ODA can play in forest transitions. Focusing on green ODA provided to developing countries between 2010 and 2023, this study employed Shapley additive explanations (SHAP), an explainable artificial intelligence (XAI) technique, to predict its influence on SDG 15 implementation scores and to analyze the contributions of economic, environmental, and social indicators. In addition, a SHAP value-based decomposition and a gap index were calculated to examine the contribution of green ODA relative to its input. The results indicate that the overall contribution of green ODA to SDG 15 implementation in developing countries is relatively limited. However, statistically significant effects were observed in country groups with higher levels of SDG 15 implementation performance. In contrast, the effects were weakened or constrained in some country groups with lower levels of SDG 15 implementation. These findings suggest that green ODA may function as a transition accelerator that facilitates positive forest transitions in countries with stronger capacities for implementing SDG 15. Strengthening and improving the existing limitations of green ODA could enhance its role and enable it to contribute more effectively to sustainable development and the conservation of terrestrial ecosystems in developing countries.
(This article belongs to the Special Issue Forest Economics and Policy Analysis)
29 pages, 9088 KB  
Article
Fine-Scale Mapping of the Wildland–Urban Interface and Seasonal Wildfire Susceptibility Analysis in the High-Altitude Mountainous Areas of Southwestern China
by Shenghao Li, Mingshan Wu, Jiangxia Ye, Xun Zhao, Sophia Xiaoxia Duan, Mengting Xue, Wenlong Yang, Zhichao Huang, Bingjie Han, Shuai He and Fangrong Zhou
Fire 2026, 9(4), 140; https://doi.org/10.3390/fire9040140 - 25 Mar 2026
Abstract
Wildfires at the wildland–urban interface (WUI) have increased in frequency and severity under global warming and intensified human activities. As a representative high-altitude mountainous region in southwestern China, Yunnan features complex topography, steep climatic gradients, and dispersed settlements interwoven with wildlands, making it a fire-prone area where wildfire management is particularly challenging. However, a fine-scale WUI dataset is currently lacking for this region. To address this gap, we refined WUI classification thresholds using a one-factor-at-a-time (OFAT) method and generated the first fine-resolution WUI map of Yunnan. Seasonal wildfire driving factors from 2004 to 2023 were quantified, and machine learning models were applied to produce seasonal susceptibility maps. SHapley Additive exPlanations (SHAP) were employed to interpret the dominant contributing factors. The resulting WUI covers 25,730.67 km², accounting for 6.5% of Yunnan’s land area. Random forest models effectively captured seasonal wildfire susceptibility patterns, with AUC values exceeding 0.83 across all seasons. High susceptibility zones (>0.5) comprised 30.09% of the WUI in spring, 25.74% in winter, 22.61% in autumn, and 13.74% in summer. SHAP analysis revealed that anthropogenic factors consistently drive wildfire occurrence, while climatic conditions in the preceding season influence vegetation status and subsequently affect wildfire likelihood in the current season. By integrating static “where” mapping with dynamic “when” susceptibility analysis, this study establishes a comprehensive “When–Where” framework that supports both long-term WUI planning and short-term seasonal early warning. The integration of fine-scale WUI mapping with seasonal susceptibility modeling enhances wildfire risk management in complex high-altitude regions. These findings provide a scientific basis for location-specific, time-sensitive, and full-chain wildfire management in mountainous landscapes and contribute to cross-border ecological security governance in the Indo-China Peninsula.
20 pages, 2636 KB  
Article
Inferring Wildfire Ignition Causes in Spain Using Machine Learning and Explainable AI
by Clara Ochoa, Magí Franquesa, Marcos Rodrigues and Emilio Chuvieco
Fire 2026, 9(4), 138; https://doi.org/10.3390/fire9040138 - 24 Mar 2026
Abstract
A substantial proportion of wildfires in Mediterranean regions continue to be recorded without information about the cause or source of ignition, limiting our ability to understand ignition drivers and design effective prevention strategies. In this study, we develop a spatially harmonised wildfire database for mainland Spain by integrating ignition records from the Spanish General Fire Statistics (EGIF) with fire perimeters generated from satellite images. We then apply a Random Forest classifier to infer ignition causes for events lacking cause attribution. To interpret model behaviour, we use Shapley Additive Explanation (SHAP) values at both global and local scales. Results indicate that human-caused ignitions are dominant, with intentional and negligence-related fires accounting for 52.13% of all known events, although they are associated with contrasting climatic and land-use settings. Negligence-related fires tend to occur under hot, dry and windy conditions, often in agricultural interfaces, whereas intentional fires are more frequent under cooler and wetter conditions and in areas with higher population density and land-use change. Lightning-caused fires represent a small fraction of total ignitions (3%) but exhibit a distinct climatic signature, occurring primarily in sparsely populated areas, under intermediate moisture conditions, and often leading to larger burned areas. Despite strong overall model performance (F1-score = 0.82), minority classes (e.g., lightning and fire rekindling, 0.17%) remain challenging to classify, reflecting both data imbalance and uncertainty in causal attribution. Overall, the combined use of machine learning and explainable AI provides a coherent spatial characterisation of wildfire ignition drivers across mainland Spain, highlights systematic differences among ignition causes, and identifies key limitations in existing fire cause records. This framework represents a practical step towards improving fire cause information by integrating remote sensing products with field-based fire reports, thereby supporting more targeted and evidence-based fire risk management.
27 pages, 3445 KB  
Article
Artificial Neural Network-Based Prediction of Compressive Strength for Mix Design Evaluation in Sustainable Expanded Polystyrene-Infused Concrete
by Kavin John O. Castillanes and Gilford B. Estores
Buildings 2026, 16(6), 1252; https://doi.org/10.3390/buildings16061252 - 21 Mar 2026
Abstract
Lightweight concrete incorporating expanded polystyrene (EPS) remains an active area of research due to its potential to produce more sustainable resource-efficient construction materials. However, identifying the optimal mix design for EPS-infused concrete typically requires extensive experimental trials, resulting in significant time, cost, and material consumption. To address this challenge, this study proposes an artificial neural network (ANN) predictive model with 5-fold cross-validation to estimate compressive strength performance and to develop mix design recommendations based on actual and predicted results. A total of 55 experimental samples were prepared and grouped into 11 batches, with the EPS volume replacement levels ranging from 0% to 50% at 5% increments. Model performance was evaluated using mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), coefficient of determination (R²), and scatter index (SI), with graphical representations like predicted vs. actual plots, response plots, and residual plots, and the results were benchmarked against a multiple linear regression (MLR) model. Among the tested configurations, the 4-5-1 ANN model demonstrated the highest predictive accuracy. Furthermore, a SHapley Additive exPlanations (SHAP) analysis was conducted to interpret the model behavior and determine the relative importance of the input variables. The findings reveal that EPS content had the greatest influence on compressive strength prediction, followed by slump value, then gravel content, and finally concrete density.
35 pages, 9721 KB  
Article
Research on Carbon Allowance Allocation Based on the Shapley Value: An In-Depth Study of Jiangsu Province
by Boya Jiang, Lujia Cai, Baolin Huang and Hongxian Li
Sustainability 2026, 18(6), 3093; https://doi.org/10.3390/su18063093 - 21 Mar 2026
Abstract
Given less than five years remaining until the target year for the first phase of China’s dual carbon goals, this paper studies carbon allowance allocation with an in-depth study of Jiangsu Province due to its significant role in driving the Yangtze River Delta’s pioneering achievement of the dual carbon goals. This study considered 2017 (the intermediate target year) as the base year and incorporated socio-economic data such as population, GDP, and the urbanization rate. Then, methods including the entropy weight method, gravity model and social network analysis were applied to classify Jiangsu’s 95 counties. From a regional coordination perspective, carbon governance clusters were constructed with the Shapley value, based on which spatial heterogeneity patterns were analyzed, and a carbon quota allocation was proposed. The findings reveal that: (1) The dominant factors influencing cross-scale carbon reduction capacity at the county level are natural carbon sink capacity (indicator weight: 0.180) and urbanization rate (indicator weight: 0.145). (2) The correlation between carbon reduction factors among different districts and counties exhibits an uneven spatial pattern, and the spatial configuration shows a multi-tiered, network-like distribution. (3) Through spatial analysis and spatial grouping, Jiangsu could be divided into 14 county-level carbon governance alliances, with the number of member counties ranging from 4 to 10 within each alliance. (4) The allocation of carbon quotas in Jiangsu exhibits a distinct descending gradient from the southern to the northern regions, which is coupled with the regional economic geography. This is exemplified by the highest quota in Jiangyin (496.46 Mt) in the south and the lowest in Lianyun (34.90 Mt) in the north. It is concluded that two carbon emission reduction pathways should be established as a priority: (a) Tongshan-Gulou (Xuzhou)-Yunlong-Quanshan-Jiawang and (b) Tianning-Jiangyin-Zhangjiagang-Changshu-Taicang-Kunshan.
(This article belongs to the Section Development Goals towards Sustainability)
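For readers unfamiliar with the cooperative-game Shapley value underlying allocation studies like the one above, a minimal sketch of the exact computation follows, using the standard formula φ_i = Σ_{S ⊆ N\{i}} |S|!(n−|S|−1)!/n! · (v(S ∪ {i}) − v(S)). The three-player characteristic function is a toy example, not the paper's county-level carbon data.

```python
# Minimal sketch: exact Shapley values for a small cooperative game.
# The characteristic function below is a toy example, not the paper's data.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight |S|!(n-|S|-1)!/n! times i's marginal contribution.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Toy game: coalitions of regions jointly "save" emissions.
worth = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
         frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60, frozenset("ABC"): 90}
phi = shapley_values(["A", "B", "C"], lambda S: worth[frozenset(S)])
print(phi)   # {'A': 20.0, 'B': 30.0, 'C': 40.0} — efficiency: sums to v(ABC) = 90
```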
21 pages, 6010 KB  
Article
A Deep Neural Network Model for Thermochemical Equilibrium Prediction in Diesel Combustion with Uncertainty Quantification and Explainability
by Huangchang Ji, Zhefeng Guo, Yang Han and Timothy Lee
Energies 2026, 19(6), 1551; https://doi.org/10.3390/en19061551 - 20 Mar 2026
Abstract
Deep neural networks (DNNs) have demonstrated remarkable capability in accurately predicting equilibrium combustion products and thermodynamic properties of diesel combustion. However, their lack of uncertainty awareness and interpretability has limited their scientific credibility and practical application. In this work, an enhanced DNN framework with uncertainty quantification and explainability is developed. The model achieves high accuracy across all outputs, with R² values exceeding 0.99 for major thermodynamic variables. In this model, Monte Carlo dropout sampling is used to estimate epistemic uncertainty, and prediction confidence intervals are analyzed across all species and thermodynamic outputs, revealing strong correlations for major components. Model explainability is further explored using Shapley additive explanations (SHAP), which attribute the influence of equivalence ratio, temperature, and pressure on each predicted species and combustion characteristics. The combined uncertainty quantification and explainability framework not only enhances confidence in DNN combustion models but also provides physical insight into the relationships between input conditions and equilibrium thermochemistry that are learned by the DNN.
(This article belongs to the Section I2: Energy and Combustion Science)
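Monte Carlo dropout, the uncertainty technique named in the abstract above, keeps dropout active at inference time and aggregates repeated stochastic forward passes. A minimal PyTorch sketch follows; the architecture, sizes, and untrained weights are illustrative assumptions, not the paper's DNN.

```python
# Minimal sketch: Monte Carlo dropout for epistemic uncertainty estimation.
# Architecture and inputs are illustrative, not the paper's model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()   # keep dropout layers stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # mean prediction, epistemic spread

# Hypothetical input: e.g., equivalence ratio, temperature, pressure (unscaled).
x = torch.tensor([[1.0, 2000.0, 50.0]])
mean, std = mc_dropout_predict(model, x)
print(mean.item(), std.item())
```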
38 pages, 2846 KB  
Article
On Importance Sampling and Multilinear Extensions for Approximating Shapley Values with Applications to Explainable Artificial Intelligence
by Tim Pollmann and Jochen Staudacher
Complexities 2026, 2(1), 7; https://doi.org/10.3390/complexities2010007 - 17 Mar 2026
Abstract
Shapley values are the most widely used point-valued solution concept for cooperative games and have recently garnered attention for their applicability in explainable machine learning. Due to the complexity of Shapley value computation, users mostly resort to Monte Carlo approximations for large problems. We take a detailed look at an approximation method grounded in multilinear extensions proposed in 2021 under the name “Owen sampling”. We point out why Owen sampling is biased and propose unbiased alternatives based on combining multilinear extensions with stratified sampling and importance sampling. Finally, we discuss empirical results of the presented algorithms for various cooperative games, including real-world explainability scenarios.
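As background to the abstract above: multilinear-extension methods estimate φ_i as ∫₀¹ E[v(S_q ∪ {i}) − v(S_q)] dq, where S_q includes each other player independently with probability q. The sketch below illustrates that idea only; drawing q uniformly at random makes every coalition size equally likely, which matches the Shapley coalition weights, but the toy game and sample count are illustrative and this is not the authors' stratified or importance-sampling estimator.

```python
# Minimal sketch: Shapley estimation via the multilinear extension.
# Sampling q ~ U(0,1), then including each other player with probability q,
# draws coalitions with the Shapley weights |S|!(n-|S|-1)!/n!.
import numpy as np

def multilinear_shapley(n, v, i, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    others = [j for j in range(n) if j != i]
    acc = 0.0
    for _ in range(n_samples):
        q = rng.random()                              # inclusion probability
        S = {j for j in others if rng.random() < q}   # random coalition S_q
        acc += v(S | {i}) - v(S)                      # marginal contribution of i
    return acc / n_samples

# Toy game: v(S) = (sum of member weights)^2; exact Shapley values are 10, 20, 30, 40.
w = [1.0, 2.0, 3.0, 4.0]
v = lambda S: sum(w[j] for j in S) ** 2
print([round(multilinear_shapley(4, v, i), 2) for i in range(4)])
```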
19 pages, 2866 KB  
Article
Machine Learning Models for Sepsis: From Early Detection to Short- and Long-Term Prognosis
by Maria Vittoria Ristori, Filippo Ruffini, Silvia Spoto, Roberto Cammarata, Vincenzo La Vaccara, Lucrezia Bani, Damiano Caputo, Paolo Soda, Valerio Guarrasi and Silvia Angeletti
Int. J. Mol. Sci. 2026, 27(6), 2721; https://doi.org/10.3390/ijms27062721 - 17 Mar 2026
Abstract
Sepsis is a leading cause of morbidity and mortality worldwide, and its outcomes depend on early recognition and timely intervention. Conventional clinical scores and biomarkers provide prognostic value but often lack accuracy for individualized prediction. Machine learning (ML) offers the ability to integrate multidimensional data to improve risk stratification. We analyzed 477 patients admitted to our hospital, including 251 with sepsis, 100 with septic shock, and 126 controls. Demographic, clinical, and laboratory data were collected. Univariate correlation analyses explored associations with sepsis severity and mortality (in-hospital, 30-day, and 90-day). Several ML models were tested, with performance assessed by area under the receiver operating characteristic curve (AUC-ROC) and Matthews correlation coefficient (MCC). Model interpretability was evaluated using SHAP (SHapley Additive exPlanations). Sepsis severity and mortality correlated with biomarkers (procalcitonin, mid-regional pro-adrenomedullin, lactate) and clinical scores (SOFA, qSOFA). In-hospital mortality was associated with ADM, catecholamine use, and SOFA, while 90-day mortality involved smoking and Gram-negative or polymicrobial infections. Different machine learning models were evaluated, and the model achieving the highest performance on the validation set was selected. The selected model either outperformed or demonstrated comparable performance to logistic regression, depending on the specific prediction task (AUC 0.99 for sepsis, 0.96 for septic shock, 0.70 for ICU admission; 0.90, 0.72, and 0.87 for in-hospital, 30-day, and 90-day mortality). SHAP confirmed the clinical relevance of these predictors. ML models integrating clinical and biochemical data outperform conventional methods in predicting sepsis progression and mortality, while maintaining interpretability. These findings support the use of ML-based tools for early diagnosis and personalized risk stratification in sepsis, though external validation is required before clinical application.
(This article belongs to the Special Issue New Insights in Translational Bioinformatics: Second Edition)
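Studies like this one often use the model-agnostic Kernel SHAP variant, which can explain any prediction function rather than only tree ensembles. Below is a minimal sketch on synthetic data; the model, features, and values are illustrative stand-ins for clinical variables, not the study's sepsis cohort.

```python
# Minimal sketch: model-agnostic Kernel SHAP for a classifier's predicted risk.
# Data and feature meanings are synthetic placeholders, not the study's cohort.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))   # e.g., PCT, lactate, MR-proADM, SOFA (illustrative)
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

background = shap.sample(X, 50)   # small background set keeps Kernel SHAP tractable
explainer = shap.KernelExplainer(lambda x: clf.predict_proba(x)[:, 1], background)
sv = explainer.shap_values(X[:5])   # local attributions for 5 synthetic "patients"
print(np.round(sv, 3))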
14 pages, 935 KB  
Article
Biomarker Discovery for Autism Prediction Using Massive Feature Extraction Based on EEG Signals
by Nauman Hafeez, Abdul Rehman Aslam and Muhammad Awais Bin Altaf
Sensors 2026, 26(6), 1862; https://doi.org/10.3390/s26061862 - 16 Mar 2026
Abstract
Autism spectrum disorder (ASD) is a heterogeneous neurodevelopmental disorder that requires early diagnosis for better intervention. However, current clinical behavioural examinations are time-consuming and prone to human error. Objective and effective biomarkers are essential for the diagnosis and prognosis of the disorder. Electroencephalography (EEG) is a non-invasive and inexpensive brain-imaging technique that is widely applied in the diagnosis of ASD. Feature-based methods have shown better performance in EEG-based applications. Here, we present a prediction framework based on massive feature extraction using the highly comparative time-series analysis (HCTSA) method and a hybrid feature selection method for the classification of ASD from resting-state EEG. Machine-learning models are trained and tested on different numbers of selected features. Our models demonstrated 100% accuracy with ≥50 features on a balanced dataset of 56 participants. The most discriminating EEG channels and features were used in the prediction process, and Shapley values were used to provide explainability of our framework. Whilst these results are promising, we acknowledge the limitations of a single small-scale dataset and emphasise the need for validation on larger independent cohorts before clinical translation.
(This article belongs to the Section Biomedical Sensors)
26 pages, 2686 KB  
Article
Algorithmic Stability in Turbulent Markets: Unveiling the Superiority of Shallow Learning over Deep Architectures in Cryptocurrency Forecasting
by Ceyda Yerdelen Kaygın, Musa Gün, Osman Nuri Akarsu, Haşim Bağcı and Ahmet Yanık
Mathematics 2026, 14(6), 989; https://doi.org/10.3390/math14060989 - 14 Mar 2026
Abstract
Forecasting cryptocurrency prices is challenging due to extreme volatility, nonlinear dynamics, and frequent structural shifts in digital asset markets. While recent research increasingly applies deep learning architectures, the predictive advantage of highly complex models in noisy financial environments remains uncertain. This study evaluates the forecasting performance of shallow and deep learning approaches by comparing Support Vector Machines (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models, along with hybrid configurations (GRU + SVM, LSTM + SVM, and GRU + LSTM). Using daily data spanning from 1 October 2020 to 23 September 2025 for five major cryptocurrencies—Bitcoin, Ethereum, Binance Coin, Solana, and Ripple—the models are estimated within a consistent framework and assessed using out-of-sample performance metrics, including MAE, MAPE, MSE, and R². The results indicate that greater algorithmic complexity does not necessarily improve forecasting accuracy. In several cases, the parsimonious SVM model outperforms deep neural network architectures, particularly for highly volatile assets, while hybrid models fail to provide systematic improvements and sometimes amplify prediction errors. SHapley Additive exPlanations analysis further shows that immediate price-based variables dominate predictive power, whereas many lagged technical indicators contribute relatively limited explanatory value. Overall, the findings underscore the importance of algorithmic parsimony, suggesting that simpler machine learning models may deliver more robust forecasts in highly volatile cryptocurrency markets.
(This article belongs to the Special Issue Recent Computational Techniques to Forecast Cryptocurrency Markets)
17 pages, 566 KB  
Article
Analyst-of-Record: A Proof-of-Concept for Influence-Based Analyst Credit Assignment in Human-Feedback Decision Support
by Devon L. Brown and Danda B. Rawat
Electronics 2026, 15(6), 1210; https://doi.org/10.3390/electronics15061210 - 13 Mar 2026
Abstract
The purpose of this study is to examine whether analyst-level credit can be assigned quantitatively in a lightweight human-feedback decision-support pipeline. In intelligence and national security workflows, analysts often provide edits, comments, and evaluative feedback during the production of analytic products, yet these intermediate contributions are usually discarded, leaving no auditable record of how individual feedback shaped the final output. To address this problem, this study proposes a proof-of-concept Analyst-of-Record framework that combines synthetic analyst feedback, a linear ridge reward model, first-order influence functions, and additive Shapley aggregation to estimate both feedback-item and analyst-level contribution scores. The research design uses the Fact Extraction and VERification (FEVER) fact-verification dataset under controlled experimental settings. The pipeline retrieves evidence with Best Matching 25 (BM25), generates a grounded template-based response, derives three synthetic analyst feedback channels from FEVER annotations, trains a reward model on simple claim–answer and analyst-identity features, and aggregates per-feedback influence scores into an Analyst Contribution Index (ACI). The main experiments are conducted on a 500-claim subset across five random seeds, with additional ablation and bootstrap analyses used to assess sensitivity and stability. The findings show that the reward model achieves a mean validation R² of 0.801 ± 0.037, indicating that the synthetic feedback signals are learnable under the selected featurization. The analyst-level contribution scores remain stable across random seeds, with approximately half of the total influence magnitude attributed to the explanation-quality channel and the remainder split across the other two channels. Ablation results further show that removing the explanation-quality channel collapses validation fit, while bootstrap resampling demonstrates tight concentration of absolute ACI magnitudes. Theoretically, this study extends attribution research beyond document-only grounding by showing how analyst feedback itself can be modeled as an object of contribution analysis. It also demonstrates that influence functions and Shapley-style aggregation can be adapted into a tractable framework for estimating interpretable analyst-level credit in a reproducible experimental setting. Practically, the proposed framework offers an initial foundation for more traceable and accountable decision-support workflows in which intermediate analyst contributions can be preserved rather than lost. The results also provide a feasible implementation path for future systems that incorporate stronger generators, richer evidence representations, and real analyst annotations.
(This article belongs to the Section Computer Science & Engineering)