
Search Results (117)

Search Parameters:
Keywords = interpretable generalized additive neural network

18 pages, 2585 KB  
Article
Interpretable Machine Learning Model Integrating Electrocardiographic and Acute Physiology Metrics for Mortality Prediction in Critically Ill Patients
by Qiuyu Wang, Bin Wang, Bo Chen, Qing Li, Yutong Zhao, Tianshan Dong, Yifei Wang and Ping Zhang
J. Clin. Med. 2025, 14(20), 7163; https://doi.org/10.3390/jcm14207163 - 11 Oct 2025
Viewed by 259
Abstract
Background: Critically ill patients in the intensive care unit (ICU) are characterized by complex comorbidities and a high risk of short-term mortality. Traditional severity scoring systems rely on physiological and laboratory variables but lack direct integration of electrocardiogram (ECG) data. This study aimed to construct an interpretable machine learning (ML) model combining ECG-derived and clinical variables to predict 28-day mortality in ICU patients. Methods: A retrospective cohort analysis was performed with data from the MIMIC-IV v2.2 database. The primary outcome was 28-day mortality. An ECG-based risk score was generated from the first ECG after ICU admission using a deep residual convolutional neural network. Feature selection was guided by XGBoost importance ranking, SHapley Additive exPlanations, and clinical relevance. A three-variable model comprising ECG score, APS-III score, and age (termed the E3A score) was developed and evaluated across four ML algorithms. We evaluated model performance by calculating the AUC of ROC curves, examining calibration, and applying decision curve analysis. Results: A total of 18,256 ICU patients were included, with 2412 deaths within 28 days. The ECG score was significantly higher in non-survivors than in survivors (median [IQR]: 24.4 [15.6–33.4] vs. 13.5 [7.2–22.1], p < 0.001). Logistic regression demonstrated the best discrimination for the E3A score, achieving an AUC of 0.806 (95% CI: 0.784–0.826) for the test set and 0.804 (95% CI: 0.772–0.835) for the validation set. Conclusions: Integrating ECG-derived features with clinical variables improves prognostic accuracy for 28-day mortality prediction in ICU patients, supporting early risk stratification in critical care. Full article
(This article belongs to the Special Issue New Insights into Critical Care)
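The E3A idea above (a three-variable logistic regression over an ECG-derived score, APS-III, and age, evaluated by ROC AUC) can be sketched as follows. This is a minimal illustration on synthetic data, not the MIMIC-IV cohort; the coefficients and score distributions are invented for the example.

```python
# Sketch of a three-variable "E3A"-style mortality model: logistic regression
# over ECG score, APS-III, and age, scored by ROC AUC. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ecg_score = rng.normal(15, 8, n)   # hypothetical ECG-derived risk score
aps_iii = rng.normal(50, 20, n)    # hypothetical APS-III severity score
age = rng.normal(65, 12, n)

# Synthetic outcome: 28-day mortality risk rises with all three features.
logit = 0.08 * ecg_score + 0.03 * aps_iii + 0.02 * age - 5.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([ecg_score, aps_iii, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

A model this small is easy to calibrate and audit, which is presumably why the paper prefers it over larger feature sets with similar discrimination.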

26 pages, 3383 KB  
Article
Biomass Gasification for Waste-to-Energy Conversion: Artificial Intelligence for Generalizable Modeling and Multi-Objective Optimization of Syngas Production
by Gema Báez-Barrón, Francisco Javier Lopéz-Flores, Eusiel Rubio-Castro and José María Ponce-Ortega
Resources 2025, 14(10), 157; https://doi.org/10.3390/resources14100157 - 8 Oct 2025
Viewed by 516
Abstract
Biomass gasification, a key waste-to-energy technology, is a complex thermochemical process with many input variables influencing the yield and quality of syngas. In this study, data-driven machine learning models are developed to capture the nonlinear relationships between feedstock properties, operating conditions, and syngas composition, in order to optimize process performance. Random Forest (RF), CatBoost (Categorical Boosting), and an Artificial Neural Network (ANN) were trained to predict key syngas outputs (syngas composition and syngas yield) from process inputs. The best-performing model (ANN) was then integrated into a multi-objective optimization framework using the open-source Optimization & Machine Learning Toolkit (OMLT) in Pyomo. An optimization problem was formulated with two objectives—maximizing the hydrogen-to-carbon monoxide (H2/CO) ratio and maximizing the syngas yield simultaneously, subject to operational constraints. The trade-off between these competing objectives was resolved by generating a Pareto frontier, which identifies optimal operating points for different priority weightings of syngas quality vs. quantity. To interpret the ML models and validate domain knowledge, SHapley Additive exPlanations (SHAP) were applied, revealing that parameters such as equivalence ratio, steam-to-biomass ratio, feedstock lower heating value, and fixed carbon content significantly influence syngas outputs. Our results highlight a clear trade-off between maximizing hydrogen content and total gas yield and pinpoint optimal conditions for balancing this trade-off. This integrated approach, combining advanced ML predictions, explainability, and rigorous multi-objective optimization, is novel for biomass gasification and provides actionable insights to improve syngas production efficiency, demonstrating the value of data-driven optimization in sustainable waste-to-energy conversion processes. Full article

28 pages, 3628 KB  
Article
From Questionnaires to Heatmaps: Visual Classification and Interpretation of Quantitative Response Data Using Convolutional Neural Networks
by Michael Woelk, Modelice Nam, Björn Häckel and Matthias Spörrle
Appl. Sci. 2025, 15(19), 10642; https://doi.org/10.3390/app151910642 - 1 Oct 2025
Viewed by 282
Abstract
Structured quantitative data, such as survey responses in human resource management research, are often analysed using machine learning methods, including logistic regression. Although these methods provide accurate statistical predictions, their results are frequently abstract and difficult for non-specialists to comprehend. This limits their usefulness in practice, particularly in contexts where eXplainable Artificial Intelligence (XAI) is essential. This study proposes a domain-independent approach for the autonomous classification and interpretation of quantitative data using visual processing. This method transforms individual responses based on rating scales into visual representations, which are subsequently processed by Convolutional Neural Networks (CNNs). In combination with Class Activation Maps (CAMs), image-based CNN models enable not only accurate and reproducible classification but also visual interpretability of the underlying decision-making process. Our evaluation found that CNN models with bar chart coding achieved an accuracy of between 93.05% and 93.16%, comparable to the 93.19% achieved by logistic regression. Compared with conventional numerical approaches, exemplified by logistic regression in this study, the approach achieves comparable classification accuracy while providing additional comprehensibility and transparency through graphical representations. Robustness is demonstrated by consistent results across different visualisations generated from the same underlying data. By converting abstract numerical information into visual explanations, this approach addresses a core challenge: bridging the gap between model performance and human understanding. Its transparency, domain-agnostic design, and straightforward interpretability make it particularly suitable for XAI-driven applications across diverse disciplines that use quantitative response data. Full article
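The core move above, encoding rating-scale responses as bar-chart images a CNN can consume, can be sketched in a few lines of NumPy. Image dimensions and the 1–5 scale are illustrative choices, not the paper's settings.

```python
# Encode a vector of rating-scale answers as a binary bar-chart image,
# the visual representation a CNN (plus CAMs) would then classify.
import numpy as np

def responses_to_image(responses, scale_max=5, bar_height=20, bar_width=8, gap=2):
    """Render each 1..scale_max answer as a vertical bar in a 2-D array."""
    n = len(responses)
    img = np.zeros((bar_height, n * (bar_width + gap)), dtype=np.uint8)
    for i, r in enumerate(responses):
        h = int(round(bar_height * r / scale_max))   # bar height ∝ answer
        x0 = i * (bar_width + gap)
        img[bar_height - h:, x0:x0 + bar_width] = 1  # bars grow from the bottom
    return img

img = responses_to_image([5, 3, 1, 4])
```

Because each survey item owns a fixed horizontal slot, a Class Activation Map over such an image points back to specific questionnaire items, which is what makes the approach interpretable to non-specialists.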

20 pages, 2916 KB  
Article
Domain-Driven Teacher–Student Machine Learning Framework for Predicting Slope Stability Under Dry Conditions
by Semachew Molla Kassa, Betelhem Zewdu Wubineh, Africa Mulumar Geremew, Nandyala Darga Kumar and Grzegorz Kacprzak
Appl. Sci. 2025, 15(19), 10613; https://doi.org/10.3390/app151910613 - 30 Sep 2025
Viewed by 354
Abstract
Slope stability prediction is a critical task in geotechnical engineering, but machine learning (ML) models require large datasets, which are often costly and time-consuming to obtain. This study proposes a domain-driven teacher–student framework to overcome data limitations for predicting the dry factor of safety (FS dry). The teacher model, XGBoost, was trained on the original dataset to capture nonlinear relationships among key site-specific features (unit weight, cohesion, friction angle) and assign pseudo-labels to synthetic samples generated via domain-driven simulations. Six student models, random forest (RF), decision tree (DT), shallow artificial neural network (SNN), linear regression (LR), support vector regression (SVR), and K-nearest neighbors (KNN), were trained on the augmented dataset to approximate the teacher’s predictions. Models were evaluated using a train–test split and five-fold cross-validation. RF achieved the highest predictive accuracy, with an R2 of up to 0.9663 and low error metrics (MAE = 0.0233, RMSE = 0.0531), outperforming other student models. Integrating domain knowledge and synthetic data improved prediction reliability despite limited experimental datasets. The framework provides a robust and interpretable tool for slope stability assessment, supporting infrastructure safety in regions with sparse geotechnical data. Future work will expand the dataset with additional field and laboratory tests to further improve model performance. Full article
(This article belongs to the Section Civil Engineering)
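The teacher–student pattern above (a boosted teacher pseudo-labels synthetic samples, a simpler student trains on the augmented set) can be sketched as below. GradientBoosting stands in for XGBoost to keep the example scikit-learn-only, and the feature ranges and FS_dry formula are invented for illustration.

```python
# Teacher-student pseudo-labeling sketch: train a teacher on scarce "real"
# data, label synthetic samples with it, then fit a student on the union.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(1)

# Small "experimental" set: unit weight, cohesion, friction angle -> FS_dry.
X_real = rng.uniform([16, 5, 20], [22, 40, 45], size=(60, 3))
y_real = (0.02 * X_real[:, 1] + 0.03 * X_real[:, 2]
          - 0.01 * X_real[:, 0] + rng.normal(0, 0.02, 60))

teacher = GradientBoostingRegressor(random_state=0).fit(X_real, y_real)

# Domain-driven synthetic samples, pseudo-labeled by the teacher.
X_syn = rng.uniform([16, 5, 20], [22, 40, 45], size=(500, 3))
y_syn = teacher.predict(X_syn)

X_aug = np.vstack([X_real, X_syn])
y_aug = np.concatenate([y_real, y_syn])
student = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_aug, y_aug)

preds = student.predict(X_real)
rmse = float(np.sqrt(np.mean((preds - y_real) ** 2)))
```

The student never sees raw labels for the synthetic points, only the teacher's estimates, so the scheme amplifies a small dataset without pretending the synthetic samples are measurements.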

29 pages, 7187 KB  
Article
A Novel Framework for Predicting Daily Reference Evapotranspiration Using Interpretable Machine Learning Techniques
by Elsayed Ahmed Elsadek, Mosaad Ali Hussein Ali, Clinton Williams, Kelly R. Thorp and Diaa Eldin M. Elshikha
Agriculture 2025, 15(18), 1985; https://doi.org/10.3390/agriculture15181985 - 20 Sep 2025
Cited by 1 | Viewed by 426
Abstract
Accurate estimation of daily reference evapotranspiration (ETo) is crucial for sustainable water resource management and irrigation scheduling, especially in water-scarce regions like Arizona. The standardized Penman–Monteith (PM) method is costly and requires specialized instruments and expertise, making it generally impractical for commercial growers. This study developed 35 ETo models to predict daily ETo across Coolidge, Maricopa, and Queen Creek in Pinal County, Arizona. Seven input combinations of daily meteorological variables were used for training and testing five machine learning (ML) models: Artificial Neural Network (ANN), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Support Vector Machine (SVM). Four statistical indicators, coefficient of determination (R2), the normalized root-mean-squared error (RMSEn), mean absolute error (MAE), and simulation error (Se), were used to evaluate the ML models' performance in comparison with the FAO-56 PM standardized method. The SHapley Additive exPlanations (SHAP) method was used to interpret each meteorological variable's contribution to the model predictions. Overall, the 35 ETo-developed models showed an excellent to fair performance in predicting daily ETo over the three weather stations. Employing ANN10, RF10, XGBoost10, CatBoost10, and SVM10, incorporating all ten meteorological variables, yielded the highest accuracies during training and testing periods (0.994 ≤ R2 ≤ 1.0, 0.729 ≤ RMSEn ≤ 3.662, 0.030 ≤ MAE ≤ 0.181 mm·day−1, and 0.833 ≤ Se ≤ 2.295). Excluding meteorological variables caused a gradual decline in ETo-developed models' performance across the stations. However, 3-variable models using only maximum, minimum, and average temperatures (Tmax, Tmin, and Tave) predicted ETo well across the three stations during testing (13.469 ≤ RMSEn ≤ 17.655 and Se ≤ 15.45%).
Results highlighted that Tmax, solar radiation (Rs), and wind speed at 2 m height (U2) are the most influential factors affecting ETo at the central Arizona sites, followed by extraterrestrial solar radiation (Ra) and Tave. In contrast, humidity-related variables (RHmin, RHmax, and RHave), along with Tmin and precipitation (Pr), had minimal impact on the model’s predictions. The results are informative for assisting growers and policymakers in developing effective water management strategies, especially for arid regions like central Arizona. Full article
(This article belongs to the Section Agricultural Water Management)
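The evaluation metrics named above can be computed directly. The exact definitions below (RMSE normalized as a percentage of the observed mean, and Se as mean absolute relative error in percent) are common conventions assumed for this sketch, not taken from the paper.

```python
# Assumed definitions of the abstract's RMSEn and Se metrics for daily ETo.
import numpy as np

def rmsen(obs, pred):
    """Normalized RMSE, in % of the observed mean (assumed convention)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()

def sim_error(obs, pred):
    """Simulation error Se: mean absolute relative error, in % (assumed)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs(pred - obs) / obs)

eto_obs = [4.0, 6.0, 8.0]    # mm/day, illustrative observations
eto_pred = [4.4, 5.4, 8.0]   # illustrative model output
```

For these toy values, RMSEn is about 6.94% and Se about 6.67%, both well inside the "excellent" range the abstract reports for the full-variable models.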

30 pages, 2870 KB  
Article
Hybrid Explainable AI Framework for Predictive Maintenance of Aeration Systems in Wastewater Treatment Plants
by Daniel Voipan, Andreea Elena Voipan and Marian Barbu
Water 2025, 17(17), 2636; https://doi.org/10.3390/w17172636 - 6 Sep 2025
Viewed by 1190
Abstract
Aeration systems are among the most energy-intensive components of wastewater treatment plants (WWTPs), consuming up to 75% of total electricity while being prone to performance degradation caused by diffuser fouling and pressure losses. Traditional maintenance strategies are largely reactive or preventive, leading to inefficient interventions, higher operational costs, and limited fault anticipation. This study addresses the need for an advanced predictive maintenance framework capable of early detection and differentiation of multiple aeration system faults. Using the Benchmark Simulation Model No. 2 (BSM2), two representative degradation scenarios—acute airflow pressure loss and chronic diffuser fouling—were simulated to generate a labeled dataset. A hybrid machine learning approach was developed, combining Random Forest-based feature selection with Long Short-Term Memory (LSTM) neural networks for temporal, multi-label fault classification. To enhance interpretability and operator trust, SHapley Additive exPlanations (SHAP) were applied to quantify feature contributions and provide transparent model predictions. The results show that the proposed framework achieves over 94% detection accuracy and provides early warnings compared to static threshold-based methods. The integration of explainable AI ensures actionable insights for maintenance planning. This approach supports more energy-efficient, reliable, and sustainable operation of WWTP aeration systems and offers a benchmark methodology for future predictive maintenance research. Full article
(This article belongs to the Special Issue AI, Machine Learning and Digital Twin Applications in Water)
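The first stage of the hybrid pipeline above, Random Forest importance ranking to shrink the sensor feature set before the LSTM, can be sketched as follows. The signals and fault rule are synthetic stand-ins, not BSM2 outputs.

```python
# Random Forest importance ranking as a feature-selection front end,
# on synthetic WWTP-like sensor channels with an injected fault label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 1500
airflow_pressure = rng.normal(1.0, 0.1, n)
dissolved_oxygen = rng.normal(2.0, 0.3, n)
noise_a = rng.normal(0, 1, n)
noise_b = rng.normal(0, 1, n)

# Fault label driven only by the two physical signals.
y = ((airflow_pressure < 0.95) & (dissolved_oxygen < 2.0)).astype(int)

X = np.column_stack([airflow_pressure, dissolved_oxygen, noise_a, noise_b])
names = ["airflow_pressure", "dissolved_oxygen", "noise_a", "noise_b"]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
selected = [name for name, imp in ranked[:2]]
```

Only the selected channels would then be windowed into sequences for the LSTM, which keeps the temporal model small and its SHAP attributions focused on physically meaningful inputs.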

33 pages, 1992 KB  
Article
Future Skills in the GenAI Era: A Labor Market Classification System Using Kolmogorov–Arnold Networks and Explainable AI
by Dimitrios Christos Kavargyris, Konstantinos Georgiou, Eleanna Papaioannou, Theodoros Moysiadis, Nikolaos Mittas and Lefteris Angelis
Algorithms 2025, 18(9), 554; https://doi.org/10.3390/a18090554 - 2 Sep 2025
Viewed by 727
Abstract
Generative Artificial Intelligence (GenAI) is widely recognized for its profound impact on labor market demand, supply, and skill dynamics. However, due to its transformative nature, GenAI increasingly overlaps with traditional AI roles, blurring boundaries and intensifying the need to reassess workforce competencies. To address this challenge, this paper introduces KANVAS (Kolmogorov–Arnold Network Versatile Algorithmic Solution)—a framework based on Kolmogorov–Arnold Networks (KANs), which utilize B-spline-based, compact, and interpretable neural units—to distinguish between traditional AI roles and emerging GenAI-related positions. The aim of the study is to develop a reliable and interpretable labor market classification system that differentiates these roles using explainable machine learning. Unlike prior studies that emphasize predictive performance, our work is the first to employ KANs as an explanatory tool for labor classification, to reveal how GenAI-related and European Skills, Competences, Qualifications, and Occupations (ESCO)-aligned skills differentially contribute to distinguishing modern from traditional AI job roles. Using raw job vacancy data from two labor market platforms, KANVAS implements a hybrid pipeline combining a state-of-the-art Large Language Model (LLM) with Explainable AI (XAI) techniques, including Shapley Additive Explanations (SHAP), to enhance model transparency. The framework achieves approximately 80% classification consistency between traditional and GenAI-aligned roles, while also identifying the most influential skills contributing to each category. Our findings indicate that GenAI positions prioritize competencies such as prompt engineering and LLM integration, whereas traditional roles emphasize statistical modeling and legacy toolkits. By surfacing these distinctions, the framework offers actionable insights for curriculum design, targeted reskilling programs, and workforce policy development. 
Overall, KANVAS contributes a novel, interpretable approach to understanding how GenAI reshapes job roles and skill requirements in a rapidly evolving labor market. Finally, the open-source implementation of KANVAS is flexible and well-suited for HR managers and relevant stakeholders. Full article
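The B-spline machinery behind a KAN edge function, mentioned above ("B-spline-based, compact, and interpretable neural units"), can be sketched with the Cox–de Boor recursion: each learnable unit is a weighted sum of B-spline basis functions. The knot layout and coefficients below are illustrative.

```python
# Cox-de Boor evaluation of a B-spline basis, the building block of a KAN
# edge function phi(x) = sum_i c_i * B_i(x). Coefficients here are illustrative.
import numpy as np

def bspline_basis(x, knots, degree):
    """Values of all degree-`degree` B-spline basis functions at scalar x."""
    n = len(knots) - degree - 1
    # Degree-0 bases: indicator of each knot interval.
    B = [1.0 if knots[i] <= x < knots[i + 1] else 0.0
         for i in range(len(knots) - 1)]
    for d in range(1, degree + 1):
        B_next = []
        for i in range(len(knots) - d - 1):
            left = 0.0
            if knots[i + d] != knots[i]:
                left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
            right = 0.0
            if knots[i + d + 1] != knots[i + 1]:
                right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
            B_next.append(left + right)
        B = B_next
    return np.array(B[:n])

# Clamped cubic knot vector on [0, 1]; a KAN would learn the coefficients.
knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
coefs = np.array([0.0, 0.3, -0.2, 0.5, 0.1, 0.4, 0.2])
phi = lambda x: coefs @ bspline_basis(x, knots, 3)
```

Because the basis is local and sums to one over the interior of the domain, each coefficient has a readable meaning (the function's level near one knot span), which is the interpretability property the KANVAS framework leans on.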

19 pages, 7102 KB  
Article
Enhanced Convolutional Neural Network–Transformer Framework for Accurate Prediction of the Flexural Capacity of Ultra-High-Performance Concrete Beams
by Long Yan, Pengfei Liu, Fan Yang and Xu Feng
Buildings 2025, 15(17), 3138; https://doi.org/10.3390/buildings15173138 - 1 Sep 2025
Viewed by 558
Abstract
Ultra-high-performance concrete (UHPC) is increasingly employed in long-span and heavily loaded structural applications; however, the accurate prediction of its flexural capacity remains a significant challenge because of the complex interactions among geometric parameters, reinforcement details, and advanced material properties. Existing design codes and single-architecture machine learning models often struggle to capture these nonlinear relationships, particularly when experimental datasets are limited in size and diversity. This study proposes a compact hybrid CNN–Transformer model that combines convolutional layers for local feature extraction with self-attention mechanisms for modeling long-range dependencies, enabling robust learning from a database of 120 UHPC beam tests drawn from 13 laboratories worldwide. The model’s predictive performance is benchmarked against conventional design codes, analytical and semi-empirical formulations, and alternative machine learning approaches including Convolutional Neural Networks (CNN), eXtreme Gradient Boosting (XGBoost), and K-Nearest Neighbors (KNN). Results show that the proposed architecture achieves the highest accuracy with an R2 of 0.943, an RMSE of 41.310, and a 25% reduction in RMSE compared with the best-performing baseline, while maintaining strong generalization across varying fiber dosages, reinforcement ratios, and shear-span ratios. Model interpretation via SHapley Additive exPlanations (SHAP) analysis identifies key parameters influencing capacity, providing actionable design insights. The findings demonstrate the potential of hybrid deep-learning frameworks to improve structural performance prediction for UHPC beams and lay the groundwork for future integration into reliability-based design codes. Full article
(This article belongs to the Special Issue Trends and Prospects in Cementitious Material)
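The self-attention step a CNN–Transformer hybrid adds on top of convolutional features, as described above, can be sketched in NumPy as scaled dot-product attention over feature tokens. Weights are random and the dimensions are illustrative; this shows the mechanism, not the paper's trained model.

```python
# Single-head scaled dot-product attention over a short token sequence,
# the long-range component of a CNN-Transformer hybrid. Random weights.
import numpy as np

def attention(X, Wq, Wk, Wv):
    """softmax(Q K^T / sqrt(d)) V for one head; returns output and weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(3)
tokens = rng.normal(size=(6, 16))   # e.g., 6 beam-feature tokens, 16 channels
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out, attn = attention(tokens, Wq, Wk, Wv)
```

Each output token is a convex mixture of all value vectors, so a parameter at one end of the feature sequence (say, fiber dosage) can directly influence the representation of another (say, shear-span ratio), which convolution alone cannot do in one layer.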

22 pages, 2120 KB  
Article
Machine Learning Algorithms and Explainable Artificial Intelligence for Property Valuation
by Gabriella Maselli and Antonio Nesticò
Real Estate 2025, 2(3), 12; https://doi.org/10.3390/realestate2030012 - 1 Aug 2025
Viewed by 1112
Abstract
The accurate estimation of urban property values is a key challenge for appraisers, market participants, financial institutions, and urban planners. In recent years, machine learning (ML) techniques have emerged as promising tools for price forecasting due to their ability to model complex relationships among variables. However, their application raises two main critical issues: (i) the risk of overfitting, especially with small datasets or with noisy data; (ii) the interpretive issues associated with the “black box” nature of many models. Within this framework, this paper proposes a methodological approach that addresses both these issues, comparing the predictive performance of three ML algorithms—k-Nearest Neighbors (kNN), Random Forest (RF), and the Artificial Neural Network (ANN)—applied to the housing market in the city of Salerno, Italy. For each model, overfitting is preliminarily assessed to ensure predictive robustness. Subsequently, the results are interpreted using explainability techniques, such as SHapley Additive exPlanations (SHAPs) and Permutation Feature Importance (PFI). This analysis reveals that the Random Forest offers the best balance between predictive accuracy and transparency, with features such as area and proximity to the train station identified as the main drivers of property prices. kNN and the ANN are viable alternatives that are particularly robust in terms of generalization. The results demonstrate how the defined methodological framework successfully balances predictive effectiveness and interpretability, supporting the informed and transparent use of ML in real estate valuation. Full article
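Permutation Feature Importance (PFI), one of the two explainability techniques named above, can be sketched by shuffling one feature at a time and measuring the drop in score. Synthetic housing-like data; the price formula and feature names are invented for the example.

```python
# Manual permutation feature importance: shuffle a column, re-score,
# and record the drop relative to the unpermuted baseline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 1000
area = rng.uniform(40, 200, n)           # m^2
dist_station = rng.uniform(0.1, 5.0, n)  # km to the train station
noise = rng.normal(0, 1, n)              # irrelevant feature

price = 2.0 * area - 15.0 * dist_station + rng.normal(0, 5, n)
X = np.column_stack([area, dist_station, noise])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, price)
base = model.score(X, price)   # R^2 before permutation

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - model.score(Xp, price))
```

Consistent with the abstract's finding, area and station proximity dominate: permuting either destroys predictive power, while permuting the noise column barely moves the score. (scikit-learn also ships this as `sklearn.inspection.permutation_importance`.)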

21 pages, 4147 KB  
Article
OLTEM: Lumped Thermal and Deep Neural Model for PMSM Temperature
by Yuzhong Sheng, Xin Liu, Qi Chen, Zhenghao Zhu, Chuangxin Huang and Qiuliang Wang
AI 2025, 6(8), 173; https://doi.org/10.3390/ai6080173 - 31 Jul 2025
Viewed by 945
Abstract
Background and Objective: Temperature management is key for reliable operation of permanent magnet synchronous motors (PMSMs). The lumped-parameter thermal network (LPTN) is fast and interpretable but struggles with nonlinear behavior under high power density. We propose OLTEM, a physics-informed deep model that combines LPTN with a thermal neural network (TNN) to improve prediction accuracy while keeping physical meaning. Methods: OLTEM embeds LPTN into a recurrent state-space formulation and learns three parameter sets: thermal conductance, inverse thermal capacitance, and power loss. Two additions are introduced: (i) a state-conditioned squeeze-and-excitation (SC-SE) attention that adapts feature weights using the current temperature state, and (ii) an enhanced power-loss sub-network that uses a deep MLP with SC-SE and non-negativity constraints. The model is trained and evaluated on the public Electric Motor Temperature dataset (Paderborn University/Kaggle). Performance is measured by mean squared error (MSE) and maximum absolute error across permanent-magnet, stator-yoke, stator-tooth, and stator-winding temperatures. Results: OLTEM tracks fast thermal transients and yields lower MSE than both the baseline TNN and a CNN–RNN model for all four components. On a held-out generalization set, MSE remains below 4.0 °C2 and the maximum absolute error is about 4.3–8.2 °C. Ablation shows that removing either SC-SE or the enhanced power-loss module degrades accuracy, confirming their complementary roles. Conclusions: By combining physics with learned attention and loss modeling, OLTEM improves PMSM temperature prediction while preserving interpretability. This approach can support motor thermal design and control; future work will study transfer to other machines and further reduce short-term errors during abrupt operating changes. Full article

26 pages, 2219 KB  
Article
Predicting Cognitive Decline in Parkinson’s Disease Using Artificial Neural Networks: An Explainable AI Approach
by Laura Colautti, Monica Casella, Matteo Robba, Davide Marocco, Michela Ponticorvo, Paola Iannello, Alessandro Antonietti, Camillo Marra and for the CPP Integrated Parkinson’s Database
Brain Sci. 2025, 15(8), 782; https://doi.org/10.3390/brainsci15080782 - 23 Jul 2025
Viewed by 1063
Abstract
Background/Objectives: The study aims to identify key cognitive and non-cognitive variables (e.g., clinical, neuroimaging, and genetic data) predicting cognitive decline in Parkinson’s disease (PD) patients using machine learning applied to a sample (N = 618) from the Parkinson’s Progression Markers Initiative database. Traditional research has mainly employed explanatory approaches to explore variable relationships, rather than maximizing predictive accuracy for future cognitive decline. In the present study, we implemented a predictive framework that integrates a broad range of baseline cognitive, clinical, genetic, and imaging data to accurately forecast changes in cognitive functioning in PD patients. Methods: An artificial neural network was trained on baseline data to predict general cognitive status three years later. Model performance was evaluated using 5-fold stratified cross-validation. We investigated model interpretability using explainable artificial intelligence techniques, including Shapley Additive Explanations (SHAP) values, Group-Wise Feature Masking, and Brute-Force Combinatorial Masking, to identify the most influential predictors of cognitive decline. Results: The model achieved a recall of 0.91 for identifying patients who developed cognitive decline, with an overall classification accuracy of 0.79. All applied explainability techniques consistently highlighted baseline MoCA scores, memory performance, the motor examination score (MDS-UPDRS Part III), and anxiety as the most predictive features. Conclusions: From a clinical perspective, the findings can support the early detection of PD patients who are more prone to developing cognitive decline, thereby helping to prevent cognitive impairments by designing specific treatments. This can improve the quality of life for patients and caregivers, supporting patient autonomy. Full article
(This article belongs to the Section Neurodegenerative Diseases)
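Group-Wise Feature Masking, one of the explainability techniques listed above, can be sketched as: mask one feature group at a time (replace it with its mean) and record the accuracy drop. A small scikit-learn MLP stands in for the paper's network; the features, groups, and outcome rule are synthetic.

```python
# Group-wise feature masking: zero out (mean-impute) one standardized
# feature group at a time and measure the accuracy drop.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 1200
moca = rng.normal(26, 3, n)     # hypothetical baseline MoCA scores
memory = rng.normal(0, 1, n)    # hypothetical memory z-scores
motor = rng.normal(30, 10, n)   # hypothetical MDS-UPDRS III scores

# Synthetic outcome: decline depends only on the cognitive group.
y = ((26 - moca) + 2 * (0 - memory) + rng.normal(0, 1, n) > 1.5).astype(int)

X = np.column_stack([moca, memory, motor])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for the MLP
groups = {"cognitive": [0, 1], "motor": [2]}

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
base_acc = clf.score(X, y)

drops = {}
for name, cols in groups.items():
    Xm = X.copy()
    Xm[:, cols] = 0.0   # masking with the (zero) post-standardization mean
    drops[name] = base_acc - clf.score(Xm, y)
```

Unlike single-feature SHAP values, masking a whole group captures the joint contribution of correlated predictors, which is why the paper reports it alongside SHAP and brute-force combinatorial masking.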

31 pages, 7723 KB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Viewed by 1353
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening. Full article

21 pages, 2547 KB  
Article
Remaining Available Energy Prediction for Energy Storage Batteries Based on Interpretable Generalized Additive Neural Network
by Ji Qi, Pengrui Li, Yifan Dong, Zhicheng Fu, Zhanguo Wang, Yong Yi and Jie Tian
Batteries 2025, 11(7), 276; https://doi.org/10.3390/batteries11070276 - 20 Jul 2025
Cited by 1 | Viewed by 585
Abstract
Precise estimation of the remaining available energy in batteries is not only key to improving energy management efficiency, but also serves as a critical safeguard for ensuring the safe operation of battery systems. To address the challenges associated with energy state estimation under dynamic operating conditions, this study proposes a method for predicting the remaining available energy of energy storage batteries based on an interpretable generalized additive neural network (IGANN). First, considering the variability in battery operating conditions, the study designs a battery working voltage threshold that accounts for safety margins and proposes an available energy state assessment metric, which enhances prediction consistency under different discharge conditions. Subsequently, 12 features are selected from both direct observation and statistical characteristics to capture the operating condition information of the battery, and a dataset is constructed using actual operational data from an energy storage station. The model is then trained and validated on the feature dataset. The validation results show that the model achieves an average absolute error of 2.39%, indicating that it effectively captures the energy variation characteristics within the 0.2 C to 0.6 C dynamic current range. Furthermore, the contribution of each feature is analyzed based on the model’s interpretability, and the model is optimized by utilizing high-contribution features. This optimization improves both the accuracy and runtime efficiency of the model. Finally, a dynamic prediction is conducted for a discharge cycle, comparing the predictions of the IGANN model with those of three other machine learning methods. The IGANN model demonstrates the best performance, with the average absolute error consistently controlled within 3%, proving the model’s accuracy and robustness under complex conditions. Full article
(This article belongs to the Special Issue Advances in Lithium-Ion Battery Safety and Fire: 2nd Edition)
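The interpretability the abstract refers to comes from the additive structure of a generalized additive model: the prediction decomposes as an intercept plus one learned shape function per feature, so each feature's contribution can be read off directly. The sketch below illustrates that structure with a classical backfitting GAM on synthetic data — it is not the IGANN training procedure (which fits neural shape functions), and the three features are hypothetical stand-ins, not the paper's 12 battery features.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in features (e.g. mean current, voltage spread, temperature).
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.3 * X[:, 2] \
    + rng.normal(0, 0.05, 500)

n_bins, n_iter = 20, 30
edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)) for j in range(3)]
bins = [np.clip(np.searchsorted(edges[j], X[:, j]) - 1, 0, n_bins - 1)
        for j in range(3)]
f = np.zeros((3, n_bins))   # per-feature shape functions (the "contributions")
intercept = y.mean()

for _ in range(n_iter):      # backfitting: cycle over features, fit residuals
    for j in range(3):
        resid = y - intercept - sum(f[k][bins[k]] for k in range(3) if k != j)
        for b in range(n_bins):
            mask = bins[j] == b
            if mask.any():
                f[j, b] = resid[mask].mean()
        f[j] -= f[j].mean()  # keep each contribution centred around zero

pred = intercept + sum(f[j][bins[j]] for j in range(3))
mae = np.mean(np.abs(pred - y))
```

After fitting, `f[j][bins[j]]` gives feature j's contribution to every prediction — the same per-feature attribution the paper uses to identify and keep high-contribution features.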
23 pages, 6067 KB  
Article
Daily-Scale Fire Risk Assessment for Eastern Mongolian Grasslands by Integrating Multi-Source Remote Sensing and Machine Learning
by Risu Na, Byambakhuu Gantumur, Wala Du, Sainbuyan Bayarsaikhan, Yu Shan, Qier Mu, Yuhai Bao, Nyamaa Tegshjargal and Battsengel Vandansambuu
Fire 2025, 8(7), 273; https://doi.org/10.3390/fire8070273 - 11 Jul 2025
Viewed by 1246
Abstract
Frequent wildfires in the eastern grasslands of Mongolia pose significant threats to the ecological environment and pastoral livelihoods, creating an urgent need for high-temporal-resolution and high-precision fire prediction. To address this, this study established a daily-scale grassland fire risk assessment framework integrating multi-source remote sensing data to enhance predictive capabilities in eastern Mongolia. Utilizing fire point data from eastern Mongolia (2012–2022), we fused multiple feature variables and developed and optimized three models: random forest (RF), XGBoost, and deep neural network (DNN). Model performance was enhanced using Bayesian hyperparameter optimization via Optuna. Results indicate that the Bayesian-optimized XGBoost model achieved the best generalization performance, with an overall accuracy of 92.3%. Shapley additive explanations (SHAP) interpretability analysis revealed that daily-scale meteorological factors—daily average relative humidity, daily average wind speed, daily maximum temperature—and the normalized difference vegetation index (NDVI) were consistently among the top four contributing variables across all three models, identifying them as key drivers of fire occurrence. Spatiotemporal validation using historical fire data from 2023 demonstrated that fire points recorded on 8 April and 1 May 2023 fell within areas predicted to have “extremely high” fire risk probability on those respective days. Moreover, points A (117.36° E, 46.70° N) and B (116.34° E, 49.57° N) exhibited the highest number of days classified as “high” or “extremely high” risk during the April/May and September/October periods, consistent with actual fire occurrences. In summary, the integration of multi-source data fusion and Bayesian-optimized machine learning has enabled the first high-precision daily-scale wildfire risk prediction for the eastern Mongolian grasslands, thus providing a scientific foundation and decision-making support for wildfire prevention and control in the region. Full article
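The workflow above — train a boosted-tree fire classifier on meteorological and vegetation features, then rank the drivers of fire occurrence — can be sketched with widely available tools. This is a stand-in on synthetic data: scikit-learn's gradient boosting replaces XGBoost, permutation importance replaces SHAP, and Optuna's Bayesian search is omitted; the feature names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
names = ["rel_humidity", "wind_speed", "max_temp", "ndvi", "precip", "elevation"]
X = rng.normal(size=(600, len(names)))
# Synthetic fire labels driven mainly by the first four variables,
# mimicking the key drivers the SHAP analysis identified.
logit = -1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.8 * X[:, 2] - 0.7 * X[:, 3]
y = (logit + rng.normal(0, 0.5, 600) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
imp = permutation_importance(model, Xte, yte, n_repeats=10, random_state=0)
ranking = [names[i] for i in np.argsort(imp.importances_mean)[::-1]]
print(ranking[:4])
```

In the paper's actual pipeline, an Optuna study would tune the XGBoost hyperparameters before this ranking step, and SHAP values would give signed, per-prediction attributions rather than a single global ordering.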
21 pages, 6305 KB  
Article
Use of BOIvy Optimization Algorithm-Based Machine Learning Models in Predicting the Compressive Strength of Bentonite Plastic Concrete
by Shuai Huang, Chuanqi Li, Jian Zhou, Xiancheng Mei and Jiamin Zhang
Materials 2025, 18(13), 3123; https://doi.org/10.3390/ma18133123 - 1 Jul 2025
Viewed by 461
Abstract
The combination of bentonite and conventional plastic concrete is an effective method for protecting structures and adsorbing heavy metals. Determining the compressive strength (CS) is a crucial step in the design of bentonite plastic concrete (BPC). Traditional experimental analyses are resource-intensive, time-consuming, and prone to high uncertainties. To address these challenges, several machine learning (ML) models, including support vector regression (SVR), artificial neural network (ANN), and random forest (RF), are developed to forecast the CS of BPC materials. To improve the prediction accuracy, a meta-heuristic optimization algorithm, the Ivy algorithm, is integrated with Bayesian optimization (BOIvy) to optimize the ML models. Several statistical indices, including the coefficient of determination (R2), root mean square error (RMSE), prediction accuracy (U1), prediction quality (U2), and variance accounted for (VAF), are adopted to evaluate the predictive performance of all models. Additionally, Shapley additive explanation (SHAP) and sensitivity analysis are conducted to enhance model interpretability. The results indicate that the best model is the BOIvy-ANN model, which achieves the optimal indices during the testing. Moreover, water, curing time, and cement are found to be more influential on the prediction of the CS of BPC than other features. This paper provides a strong example of applying artificial intelligence (AI) techniques to estimate the performance of BPC materials. Full article
(This article belongs to the Section Construction and Building Materials)
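The evaluation indices the abstract lists can be computed directly from predictions; for instance, VAF is conventionally defined as (1 − var(y − ŷ)/var(y)) × 100. The sketch below fits one of the candidate models (an RF regressor) on synthetic mix-design data and reports R2, RMSE, and VAF — a minimal illustration, not the authors' BOIvy-optimized pipeline; the feature names and strength function are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic mix-design features (stand-ins: cement, water, bentonite, curing time).
X = rng.uniform(0, 1, size=(400, 4))
cs = 30 * X[:, 0] - 15 * X[:, 1] + 5 * X[:, 2] + 10 * np.log1p(X[:, 3]) \
     + rng.normal(0, 1.0, 400)          # hypothetical CS values in MPa

Xtr, Xte, ytr, yte = train_test_split(X, cs, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)

r2 = r2_score(yte, pred)
rmse = float(np.sqrt(mean_squared_error(yte, pred)))
vaf = float((1 - np.var(yte - pred) / np.var(yte)) * 100)  # variance accounted for
print(round(r2, 3), round(rmse, 3), round(vaf, 1))
```

In the paper, the BOIvy metaheuristic would tune each model's hyperparameters before these indices are compared across SVR, ANN, and RF.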