Search Results (2,748)

Search Parameters:
Keywords = ensemble analysis

20 pages, 1742 KB  
Article
Ensembling Transformer-Based Models for 3D Ischemic Stroke Segmentation in Non-Contrast CT
by Lyailya Cherikbayeva, Vladimir Berikov, Zarina Melis, Arman Yeleussinov, Dametken Baigozhanova, Nurbolat Tasbolatuly, Zhanerke Temirbekova and Denis Mikhailapov
Appl. Sci. 2025, 15(17), 9725; https://doi.org/10.3390/app15179725 - 4 Sep 2025
Abstract
Ischemic stroke remains one of the leading causes of mortality and disability, and accurate segmentation of the affected areas on CT brain images plays a crucial role in timely diagnosis and clinical decision-making. This study proposes an ensemble approach based on the combination of the transformer-based models SE-UNETR and Swin UNETR using a weighted voting strategy. Its performance was evaluated using the Dice similarity coefficient, which quantifies the overlap between the predicted lesion regions and the ground-truth annotations. In this study, three-dimensional CT scans of the brain from 98 patients with a confirmed diagnosis of acute ischemic stroke were used. The data were provided by the International Tomography Center, SB RAS. The experimental results demonstrated that the ensemble based on transformer models significantly outperforms each individual model, providing more stable and accurate predictions. The final Dice coefficient reached 0.7983, indicating the high effectiveness of the proposed approach for ischemic lesion segmentation in CT images. The analysis showed more precise delineation of ischemic lesion boundaries and a reduction in segmentation errors. The proposed method can serve as an effective tool in automated stroke diagnosis systems and other applications requiring high-accuracy medical image analysis. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
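The ensemble step described in this abstract (weighted voting over per-model probability maps, scored with the Dice coefficient) can be sketched in a few lines. The weights and the tiny 1D arrays below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def weighted_vote(prob_maps, weights, threshold=0.5):
    """Fuse per-model probability maps by normalized weighted averaging."""
    w = np.asarray(weights, dtype=float)
    fused = np.tensordot(w / w.sum(), np.stack(prob_maps), axes=1)
    return fused > threshold  # binary lesion mask

# toy 1D "volumes" standing in for 3D CT probability maps
p_se = np.array([0.9, 0.6, 0.2, 0.1])    # e.g. SE-UNETR output
p_swin = np.array([0.8, 0.4, 0.3, 0.0])  # e.g. Swin UNETR output
truth = np.array([1, 1, 0, 0], dtype=bool)

mask = weighted_vote([p_se, p_swin], weights=[0.6, 0.4])
```

The fused probability map is thresholded once, so the ensemble decision can differ from either model's individual thresholded mask.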

20 pages, 5097 KB  
Article
A Robust Optimization Framework for Hydraulic Containment System Design Under Uncertain Hydraulic Conductivity Fields
by Wenfeng Gao, Yawei Kou, Hao Dong, Haoran Liu and Simin Jiang
Water 2025, 17(17), 2617; https://doi.org/10.3390/w17172617 - 4 Sep 2025
Abstract
Effective containment of contaminant plumes in heterogeneous aquifers is critically challenged by the inherent uncertainty in hydraulic conductivity (K). Conventional, deterministic optimization approaches for pump-and-treat (P&T) system design often fail when confronted with real-world geological variability. This study proposes a novel robust simulation-optimization framework to design reliable hydraulic containment systems that explicitly account for this subsurface uncertainty. The framework integrates the Karhunen–Loève Expansion (KLE) for efficient stochastic representation of heterogeneous K-fields with a Genetic Algorithm (GA) implemented via the pymoo library, coupled with the MODFLOW groundwater flow model for physics-based performance evaluation. The core innovation lies in a multi-scenario assessment process, where candidate well configurations (locations and pumping rates) are evaluated against an ensemble of K-field realizations generated by KLE. This approach shifts the design objective from optimality under a single scenario to robustness across a spectrum of plausible subsurface conditions. A structured three-step filtering method—based on mean performance, consistency (pass rate), and stability (low variability)—is employed to identify the most reliable solutions. The framework’s effectiveness is demonstrated through a numerical case study. Results confirm that deterministic designs are highly sensitive to the specific K-field realization. In contrast, the robust framework successfully identifies well configurations that maintain a high and stable containment performance across diverse K-field scenarios, effectively mitigating the risk of failure associated with single-scenario designs. Furthermore, the analysis reveals how varying degrees of aquifer heterogeneity influence both the required operational cost and the attainable level of robustness. 
This systematic approach provides decision-makers with a practical and reliable strategy for designing cost-effective P&T systems that are resilient to geological uncertainty, offering significant advantages over traditional methods for contaminated site remediation. Full article
(This article belongs to the Special Issue Groundwater Quality and Contamination at Regional Scales)
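A minimal sketch of the multi-scenario robustness idea described above: each candidate design is scored against an ensemble of K-field realizations and kept only if it passes all three filters (mean performance, consistency, stability). The scoring function and every number below are hypothetical stand-ins for the MODFLOW/KLE machinery:

```python
import numpy as np

# stand-in for a MODFLOW run: containment performance of a candidate
# design under one hydraulic-conductivity (K) realization
def containment_score(design, k_field):
    return 1.0 - abs(design - k_field)

designs = [0.3, 0.5, 0.7]                    # hypothetical well configurations
realizations = np.linspace(0.35, 0.65, 31)   # stand-in KLE ensemble of K-fields

def robust_filter(designs, realizations,
                  mean_min=0.85, pass_target=0.8, pass_min=0.9, std_max=0.15):
    """Three-step filtering: mean performance, pass rate, stability."""
    kept = []
    for d in designs:
        s = np.array([containment_score(d, k) for k in realizations])
        if (s.mean() >= mean_min                          # step 1: mean
                and (s >= pass_target).mean() >= pass_min  # step 2: pass rate
                and s.std() <= std_max):                   # step 3: low variability
            kept.append(d)
    return kept

robust = robust_filter(designs, realizations)
```

Only the design that performs acceptably across the whole spectrum of realizations survives, which is the shift from single-scenario optimality to robustness.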

27 pages, 8592 KB  
Article
Metallic and Translucent Decorative Layers: Analytical and Historical Insights from the Medieval Sculptural Complex of the Refectory of San Salvador de Oña (Burgos, Spain)
by Ana María Cuesta Sánchez
Heritage 2025, 8(9), 357; https://doi.org/10.3390/heritage8090357 - 2 Sep 2025
Abstract
The Monastery of San Salvador de Oña (Burgos) is a Benedictine site that has undergone substantial modifications since its foundation in the 11th century and preserves a significant corpus of Medieval, Renaissance, and Baroque artistic remains. Among these, the refectory stands out as a particularly distinctive ensemble, exhibiting sculptural influences from the Burgundy region and serving as a notable example in terms of structure, craftsmanship, and decoration. Material characterization analyses of this ensemble have not only identified the range of pigments present but also documented metallic materials and applied decorative elements, providing the basis for a proposed chronological framework for the various pictorial strata and stages. A detailed examination of the metallic materials and their overlaying layers has facilitated a comprehensive analysis focused on materiality, manufacturing techniques, and methods of application, while also situating the decoration within its historical, artistic, and cultural context. Full article
(This article belongs to the Section Materials and Heritage)

61 pages, 3596 KB  
Review
Beginner-Friendly Review of Research on R-Based Energy Forecasting: Insights from Text Mining
by Minjoong Kim, Hyeonwoo Kim and Jihoon Moon
Electronics 2025, 14(17), 3513; https://doi.org/10.3390/electronics14173513 - 2 Sep 2025
Abstract
Data-driven forecasting is becoming increasingly central to modern energy management, yet nonspecialists without a background in artificial intelligence (AI) face significant barriers to entry. While Python is the dominant machine learning language, R remains a practical and accessible tool for users with expertise in statistics, engineering, or domain-specific analysis. To inform tool selection, we first provide an evidence-based comparison of R with major alternatives before reviewing 49 peer-reviewed articles published between 2020 and 2025 in Science Citation Index Expanded (SCIE)-level journals that utilized R for energy forecasting tasks, including electricity (regional and site-level), solar, wind, thermal energy, and natural gas. Despite such growth, the field still lacks a systematic, cross-domain synthesis that clarifies which R-based methods prevail, how accessible workflows are implemented, and where methodological gaps remain; this motivated our use of text mining. Text mining techniques were employed to categorize the literature according to forecasting objectives, modeling methods, application domains, and tool usage patterns. The results indicate that tree-based ensemble learning models—e.g., random forests, gradient boosting, and hybrid variants—are employed most frequently, particularly for solar and short-term load forecasting. Notably, few studies incorporated automated model selection or explainable AI; however, there is a growing shift toward interpretable and beginner-friendly workflows. This review offers a practical reference for nonexperts seeking to apply R in energy forecasting contexts, emphasizing accessible modeling strategies and reproducible practices. We also curate example R scripts, workflow templates, and a study-level link catalog to support replication. The findings of this review support the broader democratization of energy analytics by identifying trends and methodologies suitable for users without advanced AI training. 
Finally, we synthesize domain-specific evidence and outline the text-mining pipeline, present visual keyword profiles and comparative performance tables that surface prevailing strategies and unmet needs, and conclude with practical guidance and targeted directions for future research. Full article

20 pages, 7962 KB  
Article
Uncertainty Analysis of Snow Depth Retrieval Products over China via the Triple Collocation Method and Ground-Based Measurements
by Jianwei Yang, Lingmei Jiang, Meiqing Chen and Jiajie Ying
Remote Sens. 2025, 17(17), 3036; https://doi.org/10.3390/rs17173036 - 1 Sep 2025
Abstract
Snow depth is a crucial variable when assessing the hydrological cycle and total water supply. Therefore, thorough and large-scale assessments of the widely used gridded snow depth products are highly important. In previous studies, triple collocation analysis (TCA) was applied as a complementary method to assess various snow depth products. Nevertheless, TCA-derived errors have not yet been validated against ground-based measurements. Specifically, the reliability of the TCA for quantitatively evaluating snow depth datasets remains unknown. In this study, we first generate a long-term snow depth product using our previously proposed remotely sensed retrieval algorithm. Then, we assess the results obtained with this algorithm together with other widely used assimilated (GlobSnow-v3.0) and reanalysis (ERA5-land and MERRA2) products. The reliability of the TCA method is investigated by comparing the errors derived from TCA and from ground-based measurements, as well as their relative performance rankings. Our results reveal that the unRMSE values of snow depth products are highly correlated with the TCA-derived errors, and both provide consistent performance rankings across most areas. However, in northern Xinjiang (NXJ), the TCA-derived errors for MERRA2 are underestimated against the ground-based results. Furthermore, we decomposed the covariance equations of TCA to assess their scientific robustness, and we found that the variance of MERRA2 is low due to the narrow dynamic range and severe underestimation in the snow season. Additionally, any two datasets in the triplet must exhibit correlation, at least displaying the same trend in snow depth. This paper provides a comprehensive assessment of snow depth products and demonstrates the reliability of TCA-based uncertainty analysis, which is particularly useful for applying multiproduct snow depth ensembles in the future. Full article
(This article belongs to the Special Issue Snow Water Equivalent Retrieval Using Remote Sensing)
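The covariance form of triple collocation used in this line of work can be sketched directly: given three collocated products with mutually independent, zero-mean errors (and no relative rescaling), each product's error variance follows from the pairwise covariances. The data below are synthetic, not the study's products:

```python
import numpy as np

def tca_error_std(x, y, z):
    """Covariance-based triple collocation: estimated error standard
    deviation of each product, assuming independent zero-mean errors
    and a common (unscaled) truth signal."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt(np.maximum([ex2, ey2, ez2], 0.0))

# synthetic "snow depth" truth plus independent noise per product
rng = np.random.default_rng(42)
truth = rng.uniform(0.0, 50.0, size=20_000)          # cm
retrieval = truth + rng.normal(0.0, 2.0, 20_000)     # remote-sensing product
reanalysis = truth + rng.normal(0.0, 4.0, 20_000)    # reanalysis product
station = truth + rng.normal(0.0, 1.0, 20_000)       # station-adjusted product

err = tca_error_std(retrieval, reanalysis, station)  # ≈ [2, 4, 1]
```

The abstract's point about MERRA2 corresponds to the case where one product's variance is compressed (narrow dynamic range), which biases the covariance terms and hence the recovered error estimate.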

23 pages, 8928 KB  
Article
Dynamic Fracture Strength Prediction of HPFRC Using a Feature-Weighted Linear Ensemble Approach
by Xin Cai, Yunmin Wang, Yihan Zhao, Liye Chen and Jifeng Yuan
Materials 2025, 18(17), 4097; https://doi.org/10.3390/ma18174097 - 1 Sep 2025
Abstract
Owing to its excellent crack resistance and durability, High-Performance Fiber-Reinforced Concrete (HPFRC) has been extensively applied in engineering structures exposed to extreme loading conditions. The Mode I dynamic fracture strength of HPFRC under high-strain-rate conditions exhibits significant strain-rate sensitivity and nonlinear response characteristics. However, existing experimental methods for strength measurement are limited by high costs and the absence of standardized testing protocols. Meanwhile, conventional data-driven models for strength prediction struggle to achieve both high-precision prediction and physical interpretability. To address these limitations, this study introduces a dynamic fracture strength prediction method based on a feature-weighted linear ensemble (FWL) mechanism. A comprehensive database comprising 161 sets of high-strain-rate test data on HPFRC fracture strength was first constructed. Key modeling variables were then identified through correlation analysis and an error-driven feature selection approach. Subsequently, six representative machine learning models (KNN, RF, SVR, LGBM, XGBoost, MLPNN) were employed as base learners to construct two types of ensemble models, FWL and Voting, enabling a systematic comparison of their performance. Finally, the predictive mechanisms of the models were analyzed for interpretability at both global and local scales using SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) methods. The results demonstrate that the FWL model achieved optimal predictive performance on the test set (R² = 0.908, RMSE = 2.632), significantly outperforming both individual models and the conventional ensemble method. Interpretability analysis revealed that strain rate and fiber volume fraction are the primary factors influencing dynamic fracture strength, with strain rate demonstrating a highly nonlinear response mechanism across different ranges. 
The integrated prediction framework developed in this study offers the combined advantages of high accuracy, robustness, and interpretability, providing a novel and effective approach for predicting the fracture behavior of HPFRC under high-strain-rate conditions. Full article
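The abstract does not spell out the FWL weighting rule, so the sketch below shows one plausible form of a feature-weighted linear ensemble: combination weights for the base learners fitted by least squares, compared against plain averaging (a Voting-style baseline). The base-learner predictions are simulated, not real model outputs:

```python
import numpy as np

def fit_linear_ensemble(base_preds, y):
    """Learn combination weights for base-learner predictions by least
    squares (one plausible reading of a weighted linear ensemble)."""
    A = np.column_stack(base_preds)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def rmse(pred, y):
    return float(np.sqrt(np.mean((pred - y) ** 2)))

rng = np.random.default_rng(1)
y = rng.uniform(10.0, 40.0, size=200)        # stand-in fracture strengths (MPa)
p_knn = y + rng.normal(0.0, 1.0, 200)        # accurate base learner
p_svr = 0.8 * y + rng.normal(0.0, 1.0, 200)  # systematically biased learner
p_rf = y + rng.normal(0.0, 5.0, 200)         # noisy learner

w = fit_linear_ensemble([p_knn, p_svr, p_rf], y)
fwl_pred = np.column_stack([p_knn, p_svr, p_rf]) @ w
voting_pred = (p_knn + p_svr + p_rf) / 3.0   # plain-average baseline
```

Learned weights can correct a biased learner and down-weight a noisy one, which plain averaging cannot; that is the qualitative gap the abstract reports between FWL and Voting.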

16 pages, 949 KB  
Article
Predicting the Cognitive and Social–Emotional Development of Minority Children in Early Education: A Data Science Approach
by Danail Brezov, Nadia Koltcheva and Desislava Stoyanova
AppliedMath 2025, 5(3), 113; https://doi.org/10.3390/appliedmath5030113 - 1 Sep 2025
Abstract
Our study tracks the development of 105 Roma children between 3 and 5 years of age (median age: 51 months), enrolled in an NGO-aided developmental program. Each child undergoes pre- and post-assessment based on the Developmental Assessment of Young Children (DAYC), a standard tool used to track progress in early childhood development and detect delays. Data are gathered from three sources (teacher, parent/caregiver, and specialist), covering four developmental domains and an adaptive behavior scale. There are subjective biases; however, in the post-assessment, the teachers’ and parents’ evaluations converge. The test results confirm significant improvement in all areas (p < 0.0001), with the highest gain in cognitive skills (32.2%) and the lowest in physical development (14.4%). We also apply machine learning methods to impute missing data and predict the likely future progress for a given student in the program based on the initial input, while also evaluating the influence of environmental factors. Our weighted ensemble regression models are coupled with principal component analysis (PCA) and yield average coefficients of determination R² ≈ 0.7 for the features of interest. Also, we perform k-means clustering in the cognitive vs. social–emotional progress plane and consider the classification problem of predicting the group to which a given student would eventually be assigned, with a weighted F1-score of 0.83 and a macro-averaged area under the curve (AUC) of 0.94. This could be useful in practice for the optimized formation of study groups. We explore classification as a means of imputing missing categorical data too, e.g., education, employment or marital status of the parents. Our algorithms provide solutions with F1-scores ranging from 0.92 to 0.97 and, respectively, an AUC between 0.99 and 1. Full article
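The k-means step (clustering children in the cognitive vs. social–emotional progress plane) can be sketched with plain Lloyd iterations; the progress scores below are simulated, not study data:

```python
import numpy as np

def kmeans(points, init_idx=(0, -1), iters=20):
    """Plain Lloyd's k-means; centers seeded from two given points."""
    centers = points[list(init_idx)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)       # assign each point to nearest center
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)  # recenter
    return labels, centers

# hypothetical (cognitive %, social-emotional %) progress pairs per child
rng = np.random.default_rng(7)
slower = rng.normal([10.0, 8.0], 3.0, size=(40, 2))
faster = rng.normal([32.0, 25.0], 3.0, size=(40, 2))
points = np.vstack([slower, faster])

labels, centers = kmeans(points)
```

Once groups exist, "which group will this child end up in" becomes an ordinary classification target, which is how the abstract's F1/AUC figures arise.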

22 pages, 1076 KB  
Article
Comparative Analysis of Machine Learning and Deep Learning Models for Tourism Demand Forecasting with Economic Indicators
by Ivanka Vasenska
FinTech 2025, 4(3), 46; https://doi.org/10.3390/fintech4030046 - 1 Sep 2025
Abstract
This study addresses the critical need for accurate tourism demand (TD) forecasting in Bulgaria using economic indicators, developing robust predictive models to navigate post-pandemic market volatility. The COVID-19 pandemic exposed tourism’s vulnerability to systemic shocks, highlighting deficiencies in traditional forecasting approaches. Bulgaria’s tourism industry, characterized by strong seasonal variations and economic sensitivity, requires enhanced methodologies for strategic planning in uncertain environments. The research employs a comprehensive comparative analysis of machine learning (ML) and deep machine learning (DML) methodologies. Monthly overnight stay data from Bulgaria’s National Statistical Institute (2005–2024) were integrated with COVID-19 case data, Consumer Price Index (CPI), and Bulgarian Gross Domestic Product (GDP) variables for the same period. Multiple approaches were implemented, including Prophet with external regressors, Ridge regression, LightGBM, and gradient boosting models using inverse MAE weighting optimization, alongside deep learning architectures such as Bidirectional LSTM with attention mechanisms and XGBoost configurations, and the statistical significance of each model was estimated. Contrary to prevailing assumptions about deep learning superiority, traditional machine learning ensemble approaches demonstrated superior performance. The ensemble model combining Prophet, LightGBM, and Ridge regression achieved optimal results with an MAE of 156,847 and a MAPE of 14.23%, outperforming individual models by 10.2%. Deep learning alternatives, particularly Bi-LSTM architectures, exhibited significant deficiencies with negative R² scores, indicating fundamental limitations in capturing seasonal tourism patterns, probably due to data dependence and overfitting. The findings provide tourism stakeholders and policymakers with empirically validated forecasting tools for enhanced decision-making. 
The ensemble approach combined with statistical significance testing offers improved accuracy for investment planning, marketing budget allocation, and operational capacity management during economic volatility. Economic indicator integration enables proactive responses to market disruptions, supporting resilient tourism planning strategies and crisis management protocols. Full article
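The inverse-MAE weighting mentioned above has a standard closed form: each model's weight is proportional to the reciprocal of its validation MAE. The MAE values and forecasts below are made up for illustration:

```python
import numpy as np

def inverse_mae_weights(maes):
    """Weight each model inversely to its validation MAE; weights sum to 1."""
    inv = 1.0 / np.asarray(maes, dtype=float)
    return inv / inv.sum()

# hypothetical validation MAEs for Prophet, LightGBM and Ridge (overnight stays)
w = inverse_mae_weights([180_000, 165_000, 210_000])

# blend the three models' monthly forecasts with those weights
forecasts = np.array([[1.20e6, 1.25e6, 1.18e6],   # Prophet
                      [1.22e6, 1.27e6, 1.15e6],   # LightGBM
                      [1.10e6, 1.30e6, 1.20e6]])  # Ridge
ensemble = w @ forecasts
```

Because the weights form a convex combination, the blended forecast always lies between the most pessimistic and most optimistic individual model for each month.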

21 pages, 3262 KB  
Article
An Artificial Intelligence-Based Melt Flow Rate Prediction Method for Analyzing Polymer Properties
by Mohammad Anwar Parvez and Ibrahim M. Mehedi
Polymers 2025, 17(17), 2382; https://doi.org/10.3390/polym17172382 - 31 Aug 2025
Abstract
The polymer industry has gained increasing importance due to the ability of polymers to replace traditional materials such as wood, glass, and metals in various applications, offering advantages such as a high strength-to-weight ratio, corrosion resistance, and ease of fabrication. Among key performance indicators, melt flow rate (MFR) plays a crucial role in determining polymer quality and processability. However, conventional offline laboratory methods for measuring MFR are time-consuming and unsuitable for real-time quality control in industrial settings. To address this challenge, the study proposes a model that leverages artificial intelligence with machine learning for melt flow rate prediction in polymer property analysis (LAIML-MFRPPPA). A dataset of 1044 polymer samples was used, incorporating six input features, such as reactor temperature, pressure, hydrogen-to-propylene ratio, and catalyst feed rate, with MFR as the target variable. The input features were normalized using min–max scaling. Two ensemble models, kernel extreme learning machine (KELM) and random vector functional link (RVFL), were developed and optimized using the pelican optimization algorithm (POA) for improved predictive accuracy. The proposed method outperformed traditional and deep learning models, achieving an R² of 0.965, an MAE of 0.09, an RMSE of 0.12, and a MAPE of 3.4%. A SHAP-based sensitivity analysis was conducted to interpret the influence of input features, confirming the dominance of melt temperature and molecular weight. Overall, the LAIML-MFRPPPA model offers a robust, accurate, and deployable solution for real-time polymer quality monitoring in manufacturing environments. Full article
(This article belongs to the Special Issue Scientific Machine Learning for Polymeric Materials)
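Two of the preprocessing and evaluation ingredients named above, min–max scaling of the input features and the R² score, are simple enough to sketch directly; the feature values are hypothetical:

```python
import numpy as np

def minmax_scale(X):
    """Column-wise min-max normalization to [0, 1]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)  # guard constant columns

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical reactor features: temperature (°C), pressure (bar), H2/C3 ratio
X = np.array([[70.0, 30.0, 0.10],
              [80.0, 34.0, 0.25],
              [75.0, 32.0, 0.40]])
Xs = minmax_scale(X)
```

Scaling matters here because the features live on very different ranges (tens of degrees vs. a ratio below one), and distance- or kernel-based learners such as KELM are sensitive to that.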

22 pages, 5263 KB  
Article
Educational Facility Site Selection Based on Multi-Source Data and Ensemble Learning: A Case Study of Primary Schools in Tianjin
by Zhenhui Sun, Ying Xu, Junjie Ning, Yufan Wang and Yunxiao Sun
ISPRS Int. J. Geo-Inf. 2025, 14(9), 337; https://doi.org/10.3390/ijgi14090337 - 30 Aug 2025
Abstract
To achieve the objective of a “15 min living circle” for educational services, this study develops an integrated method for primary school site selection in Tianjin, China, by combining multi-source data and ensemble learning techniques. At a 500 m grid scale, a suitability prediction model was constructed based on the existing distribution of primary schools, utilizing Random Forest (RF) and Extreme Gradient Boosting (XGBoost) models. Comprehensive evaluation, feature importance analysis, and SHAP (SHapley Additive exPlanations) interpretation were conducted to ensure model reliability and interpretability. Spatial overlay analysis, incorporating population structure and the education supply–demand ratio, identified highly suitable areas for primary school construction. The results demonstrate: (1) RF and XGBoost achieved evaluation metrics exceeding 85%, outperforming traditional single models such as Logistic Regression, SVM, KNN, and CART. Validation against actual primary school distributions yielded accuracies of 84.70% and 92.41% for RF and XGBoost, respectively. (2) SHAP analysis identified population density, proximity to other educational institutions, and accessibility to transportation facilities as the most critical factors influencing site suitability. (3) Suitable areas for primary school construction are concentrated in central Tianjin and surrounding areas, including Baoping Street (Baodi District), Huaming Street (Dongli District), and Zhongbei Town (Xiqing District), among others, to meet high-quality educational service demands. Full article
(This article belongs to the Special Issue Spatial Information for Improved Living Spaces)
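The spatial overlay step described above can be sketched as a joint threshold on model-predicted suitability and the education supply–demand ratio; all grid values here are invented for illustration:

```python
import numpy as np

# hypothetical per-cell values on a 500 m grid
suitability = np.array([0.91, 0.40, 0.87, 0.95, 0.30])    # model probability
supply_demand = np.array([0.60, 1.20, 1.10, 0.50, 0.90])  # seats / demand

# overlay: a cell is a candidate site if the model rates it highly
# suitable AND local school supply falls short of demand
candidates = np.where((suitability >= 0.85) & (supply_demand < 1.0))[0]
```

The point of the overlay is that model suitability alone is not enough: a highly suitable cell that is already well served (ratio ≥ 1) is filtered out.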

12 pages, 2991 KB  
Article
A Novel Pattern Recognition Method for Non-Destructive and Accurate Origin Identification of Food and Medicine Homologous Substances with Portable Near-Infrared Spectroscopy
by Wei Liu, Ziqin Zhang, Yang Liu, Liwen Jiang, Pao Li and Wei Fan
Molecules 2025, 30(17), 3565; https://doi.org/10.3390/molecules30173565 - 30 Aug 2025
Abstract
In this study, a novel pattern recognition method named boosting–partial least squares–discriminant analysis (Boosting-PLS-DA) was developed for the non-destructive and accurate origin identification of food and medicine homologous substances (FMHSs). Taking Gastrodia elata, Aurantii Fructus Immaturus, and Angelica dahurica as examples, spectra of FMHSs from different origins were obtained by portable near-infrared (NIR) spectroscopy without destroying the samples. The identification models were developed with Boosting-PLS-DA and compared with principal component analysis (PCA) and partial least squares–discriminant analysis (PLS-DA) models. Model performance was evaluated using the validation set and an external validation set obtained one month later. The results showed that the Boosting-PLS-DA method achieved the best results. For the analysis of Aurantii Fructus Immaturus and Angelica dahurica, 100% accuracies on the validation sets and external validation sets were obtained using Boosting-PLS-DA models. For the analysis of Gastrodia elata, Boosting-PLS-DA models showed significant improvements in external validation set accuracy compared to PLS-DA, reducing the risk of overfitting. The Boosting-PLS-DA method combines the high robustness of ensemble learning with the strong discriminative capability of discriminant analysis. Its generalizability will be further validated with a sufficiently large external validation set and more types of FMHSs. Full article
(This article belongs to the Special Issue Application of Spectroscopy for Drugs)
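The boosting stage can be sketched generically: AdaBoost-style reweighting concentrates later rounds on the samples earlier rounds misclassified. A trivial threshold stump stands in for the PLS-DA base model here, and the 1D "spectral feature" is synthetic; this is a schematic of the boosting mechanism, not the paper's algorithm:

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted best threshold classifier on a single feature
    (a deliberately weak stand-in for the PLS-DA base model)."""
    best_err, best = np.inf, None
    for t in np.unique(X):
        for s in (1.0, -1.0):
            pred = s * np.sign(X - t + 1e-12)
            err = w[pred != y].sum()
            if err < best_err:
                best_err, best = err, (t, s)
    t, s = best
    return lambda Z, t=t, s=s: s * np.sign(Z - t + 1e-12)

def boost(X, y, n_rounds=10):
    """AdaBoost-style reweighting: later rounds focus on the samples
    earlier rounds misclassified. Labels y must be in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        clf = fit_stump(X, y, w)
        pred = clf(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        w = w * np.exp(-alpha * y * pred)       # upweight mistakes
        w = w / w.sum()
        ensemble.append((alpha, clf))
    return lambda Z: np.sign(sum(a * c(Z) for a, c in ensemble))

# toy 1D feature with a non-monotone class pattern that a single
# stump cannot fit but a boosted ensemble can
X = np.arange(8, dtype=float)
y = np.array([-1, -1, 1, 1, 1, 1, -1, -1], dtype=float)
model = boost(X, y)
accuracy = float(np.mean(model(X) == y))
```

The same reweighting loop works with any base learner that accepts sample weights, which is how a PLS-DA classifier would slot in.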

36 pages, 11682 KB  
Article
Isoliquiritigenin as a Neuronal Radiation Mitigant: Mitigating Radiation-Induced Anhedonia Tendency Targeting Grik3/Grm8/Grin3a via Integrated Proteomics and AI-Driven Discovery
by Boyang Li, Suqian Cheng, Han Zhang and Bo Li
Pharmaceuticals 2025, 18(9), 1307; https://doi.org/10.3390/ph18091307 - 30 Aug 2025
Abstract
Background/Objectives: Radiotherapy can cause severe and irreversible brain damage, including cognitive impairment, increased dementia risk, debilitating depression, and other neuropsychiatric disorders. Current radioprotective drugs face limitations, such as single-target inefficacy or manufacturing hurdles. Isoliquiritigenin (ISL), a natural flavonoid derived from licorice root, exhibits broad bioactivities, including anti-inflammatory, anti-cancer, immunoregulatory, hepatoprotective, and cardioprotective activities. This study aimed to elucidate ISL’s neuronal radiation mitigation effects and key targets. Methods: In vitro and in vivo models of radiation-induced neuronal injury were established. ISL’s bioactivities were evaluated through cellular cytotoxicity assays and LDH release, ROS, ATP, glutamate, and GSH levels. In vivo, ISL’s radiation mitigation effect was evaluated with the sucrose preference test, IL-1β levels, histopathological analysis, and Golgi-Cox staining analysis. Proteomics, pathway enrichment, and ensemble models (four machine learning models, a weighted gene co-expression network, and protein–protein interaction analysis) identified core targets. Molecular docking and dynamic simulations validated ISL’s binding stability with key targets. Results: ISL attenuated radiation-induced cellular cytotoxicity, reduced LDH/ROS, restored ATP, elevated GSH, and mitigated glutamate accumulation. In rats, ISL alleviated anhedonia-like phenotypes and hippocampal synaptic loss. ISL also significantly suppressed radiation-induced neuroinflammation, as evidenced by reduced levels of the pro-inflammatory cytokine IL-1β. Proteomic analysis revealed that ISL’s main protective pathways included the synaptic vesicle cycle, glutamatergic synapse, MAPK signaling pathway, SNARE interactions in vesicular transport, insulin signaling pathway, and insulin secretion. Grm8, Grik3, and Grin3a were identified as key targets using the integrated models. 
The expression of these targets was upregulated post-radiation and restored by ISL. Molecular docking and dynamic simulations indicated that ISL showed stable binding to these receptors compared to native ligands. Conclusions: ISL demonstrates multi-scale radiation mitigation activities in vitro and in vivo by modulating synaptic and inflammatory pathways, with glutamate receptors as core targets. This work nominates ISL as an important natural product for mitigating radiotherapy-induced neural damage. Full article
21 pages, 2926 KB  
Article
Multi-Algorithm Ensemble Learning Framework for Predicting the Solder Joint Reliability of Wafer-Level Packaging
by Qinghua Su and Kuo-Ning Chiang
Materials 2025, 18(17), 4074; https://doi.org/10.3390/ma18174074 - 30 Aug 2025
Viewed by 207
Abstract
To enhance design efficiency, this study employs an effective prediction approach that uses validated finite element analysis (FEA) to generate simulation data and then applies machine learning (ML) techniques to predict packaging reliability. Validated FEA models replace the costly design-of-experiments approach. However, training some ML algorithms is computationally expensive; reducing the size of the training dataset to lower this cost is therefore a critical issue. At the same time, smaller datasets introduce new challenges in maintaining prediction accuracy due to the inherent limitations of small-data machine learning. To address these challenges, this work adopts Wafer-Level Packaging (WLP) as a case study and proposes an ensemble learning framework that integrates multiple machine learning algorithms to enhance predictive robustness. By leveraging the complementary strengths of different algorithms and frameworks, the ensemble approach effectively improves generalization, enabling accurate predictions even with constrained training data. Full article
(This article belongs to the Section Materials Simulation and Design)
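A minimal sketch of the kind of multi-algorithm averaging the abstract describes, on a small simulated dataset standing in for expensive FEA runs. The model choices, the three input features, and the toy "cycles to failure" target are illustrative assumptions, not the paper's implementation.

```python
# Sketch: train several regressors on a small synthetic dataset and
# average their predictions (a simple unweighted ensemble).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 3))  # e.g. solder thickness, pad size, die size (assumed)
y = 1000.0 / (1 + 5 * X[:, 0] + 2 * X[:, 1] ** 2)  # toy "cycles to failure"

models = [RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0),
          Ridge(alpha=1.0)]
for m in models:
    m.fit(X, y)

X_new = rng.uniform(size=(5, 3))
preds = np.column_stack([m.predict(X_new) for m in models])
ensemble = preds.mean(axis=1)  # one averaged prediction per new design
```

With so few training samples, averaging diverse learners tends to damp the variance any single model shows on unseen designs.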
28 pages, 6018 KB  
Article
Analysis of Factors Influencing Driving Safety at Typical Curve Sections of Tibet Plateau Mountainous Areas Based on Explainability-Oriented Dynamic Ensemble Learning Strategy
by Xinhang Wu, Fei Chen, Wu Bo, Yicheng Shuai, Xue Zhang, Wa Da, Huijing Liu and Junhao Chen
Sustainability 2025, 17(17), 7820; https://doi.org/10.3390/su17177820 - 30 Aug 2025
Viewed by 332
Abstract
The complex topography of China’s Tibetan Plateau mountainous roads, characterized by diverse curve types and frequent traffic accidents, significantly impacts the safety and sustainability of the transportation system. To enhance driving safety on these mountain roads and promote low-carbon, resilient transportation development, this study investigates the mechanisms through which different curve types affect driving safety and proposes optimization strategies based on interpretable machine learning methods. Focusing on three typical curve types in plateau regions, the study used drone high-altitude photography to capture footage of three specific curves along China’s National Highway G318. Oblique photography was used to acquire road environment information, from which 11 data indicators were extracted. Of these, 8 indicators, including cornering preference and vehicle type, were designated as explanatory variables, the curve-type indicator was set as the dependent variable, and the remaining indicators served as safety assessment indicators. Linear models (logistic regression, ridge regression) and non-linear models (Random Forest, LightGBM, XGBoost) were compared and used for factor analysis. Ultimately, the three non-linear models were combined under an explainability-oriented dynamic ensemble learning strategy (X-DEL) to evaluate the three curve types. The results indicate that the non-linear models outperform the linear models in accuracy and scene adaptability, that X-DEL benefits the construction of driving safety models and factor analysis on Tibetan Plateau mountainous roads, and that the contribution of indicators to driving safety varies across curve types. 
This research not only deepens the scientific understanding of safety issues on plateau mountainous roads but, more importantly, its proposed solutions directly contribute to building safer, more efficient, and environmentally friendly transportation systems, thereby providing crucial impetus for sustainable transportation and high-quality regional development in the Tibetan Plateau. Full article
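One plausible reading of a "dynamic" ensemble weighting scheme, sketched below, is to weight each classifier's predicted class probabilities by its validation accuracy. This is an illustration, not the X-DEL algorithm itself; LightGBM and XGBoost are stood in for by scikit-learn models, and the three-class "curve type" data are synthetic.

```python
# Sketch: soft voting over three classifiers, with each model's vote
# weighted by its accuracy on a held-out validation split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

models = [RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0),
          LogisticRegression(max_iter=1000)]
weights = []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(m.score(X_val, y_val))  # validation accuracy as weight
weights = np.array(weights) / sum(weights)  # normalize to sum to 1

# Weighted average of per-class probabilities, then argmax per sample.
proba = sum(w * m.predict_proba(X_val) for w, m in zip(weights, models))
y_pred = proba.argmax(axis=1)
```

Because the weights sum to 1, the combined probabilities still form a valid distribution per sample; better-validated models simply pull the vote harder.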
21 pages, 3121 KB  
Article
An Interpretable Stacked Ensemble Learning Framework for Wheat Storage Quality Prediction
by Xinze Li, Wenyue Wang, Bing Pan, Siyu Zhu, Junhui Zhang, Yunzhao Ma, Hongpeng Guo, Zhe Liu, Wenfu Wu and Yan Xu
Agriculture 2025, 15(17), 1844; https://doi.org/10.3390/agriculture15171844 - 29 Aug 2025
Viewed by 172
Abstract
Accurate prediction of wheat storage quality is essential for ensuring storage safety and providing early warnings of quality deterioration. However, existing methods focus solely on storage environmental conditions, neglecting the spatial distribution of temperature within grain piles, lacking interpretability, and generally failing to provide reliable forecasts of future quality changes. To overcome these challenges, an interpretable prediction framework for wheat storage quality based on stacked ensemble learning is proposed. Three key features, Effective Accumulated Temperature (EAT), Cumulative High Temperature Deviation (CHTD), and Cumulative Temperature Gradient (CTG), were derived from grain temperature data to capture the spatiotemporal dynamics of the internal temperature field. These features were then input into the stacked ensemble learning model to accurately predict historical quality changes. In addition, future grain temperatures were predicted with high precision using a Graph Convolutional Network-Temporal Fusion Transformer (GCN-TFT) model. The temperature prediction results were then employed to construct features and were fed into the stacked ensemble learning model to enable future quality change prediction. Baseline experiments indicated that the stacked model significantly outperformed individual models, achieving R2 = 0.94, MAE = 0.44 mg KOH/100 g, and RMSE = 0.59 mg KOH/100 g. SHAP interpretability analysis revealed that EAT constituted the primary driver of wheat quality deterioration, followed by CHTD and CTG. 
Moreover, in future quality prediction experiments, the GCN-TFT model demonstrated high accuracy in 60-day grain temperature forecasts. Although the prediction accuracy of fatty acid value changes based on features derived from predicted temperatures declined slightly compared with features based on actual temperature data, it remained within an acceptable range, achieving an MAE of 0.28 mg KOH/100 g and an RMSE of 0.33 mg KOH/100 g. The experiments validated that the overall technical route, from grain temperature prediction to quality prediction, achieves good accuracy and feasibility, providing an efficient, stable, and interpretable quality monitoring and early warning tool for grain storage management that helps managers make scientific decisions and interventions to ensure storage safety. Full article
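To make the three temperature features concrete, the sketch below computes EAT- and CHTD-style sums from a toy daily grain-temperature series. The base temperature (15 °C) and high-temperature threshold (25 °C) are assumed values, not the paper's, and the CTG line uses a temporal-difference proxy rather than the paper's spatial gradient within the grain pile.

```python
# Sketch: cumulative temperature features from a daily grain-temperature
# series (thresholds and the gradient proxy are illustrative assumptions).
import numpy as np

temps = np.array([12.0, 16.0, 21.0, 27.0, 30.0, 24.0])  # daily mean temps, °C

T_BASE, T_HIGH = 15.0, 25.0
eat = np.sum(np.maximum(temps - T_BASE, 0.0))   # Effective Accumulated Temperature
chtd = np.sum(np.maximum(temps - T_HIGH, 0.0))  # Cumulative High Temperature Deviation
ctg = np.sum(np.abs(np.diff(temps)))            # Cumulative Temperature Gradient (proxy)
```

Each feature is a running sum, so it can be updated daily as new sensor readings arrive and fed into the downstream quality model.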