Search Results (4,507)

Search Parameters:
Keywords = non-linear feature

23 pages, 5798 KB  
Article
Application of Generative AI in Financial Risk Prediction: Enhancing Model Accuracy and Interpretability
by Kai-Chao Yao, Hsiu-Chu Hung, Ching-Hsin Wang, Wei-Lun Huang, Hui-Ting Liang, Tzu-Hsin Chu, Bo-Siang Chen and Wei-Sho Ho
Information 2025, 16(10), 857; https://doi.org/10.3390/info16100857 - 3 Oct 2025
Abstract
This study explores the application of generative artificial intelligence (AI) in financial risk forecasting, aiming to assess its potential in enhancing both the accuracy and interpretability of predictive models. Traditional methods often struggle with the complexity and nonlinearity of financial data, whereas generative AI—such as large language models and generative adversarial networks (GANs)—offers novel solutions to these challenges. The study begins with a comprehensive review of current research on generative AI in financial risk prediction, with a focus on its roles in data augmentation and feature extraction. It then investigates techniques such as Generative Adversarial Explanation (GAX) to evaluate their effectiveness in improving model interpretability. Case studies demonstrate the practical value of generative AI in real-world financial forecasting and quantify its contribution to predictive accuracy. Furthermore, the study identifies key challenges—including data quality, model training costs, and regulatory compliance—and proposes corresponding mitigation strategies. The findings suggest that generative AI can significantly improve the accuracy and interpretability of financial risk models, though its adoption must be carefully managed to address associated risks. This study offers insights and guidance for future research in applying generative AI to financial risk forecasting. Full article
(This article belongs to the Special Issue Modeling in the Era of Generative AI)
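
A minimal sketch of one role the abstract assigns to generative AI: GAN-based augmentation of tabular risk data. Everything here (feature count, layer sizes, the stand-in "financial" records) is an illustrative assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

N_FEATURES, LATENT_DIM = 8, 16          # assumed number of risk indicators and noise dimension

generator = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
discriminator = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, N_FEATURES)        # stand-in for scaled financial risk records

for step in range(200):
    # discriminator: distinguish real records from generated ones
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator: produce records the discriminator accepts as real
    g_loss = bce(discriminator(generator(torch.randn(64, LATENT_DIM))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# generated rows can then be appended to the training set as augmentation
augmented = torch.cat([real_data, generator(torch.randn(256, LATENT_DIM)).detach()])
print(augmented.shape)
```
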
21 pages, 406 KB  
Article
DRBoost: A Learning-Based Method for Steel Quality Prediction
by Yang Song, Shuaida He and Qiyu Wu
Symmetry 2025, 17(10), 1644; https://doi.org/10.3390/sym17101644 - 3 Oct 2025
Abstract
Steel products play an important role in daily production and life as a common production material. Currently, the quality of steel products is judged by manual experience. However, various inspection criteria employed by human operators and complex factors and mechanisms in the steelmaking process may lead to inaccuracies. To address these issues, we propose a learning-based method for steel quality prediction, named DRBoost, based on multiple machine learning techniques, including decision trees, random forests, and the LSBoost algorithm. In our method, the decision tree clearly captures the nonlinear relationships between features and serves as a solid baseline for making preliminary predictions. The random forest enhances the model’s robustness and avoids overfitting by aggregating multiple decision trees. LSBoost uses gradient descent training to assign contribution coefficients to different kinds of raw materials to obtain more accurate predictions. Five key chemical elements, namely carbon, silicon, manganese, phosphorus, and sulfur, which significantly influence the major performance characteristics of steel products, are selected, and steel quality prediction is conducted by predicting the contents of these chemical elements. Multiple models are constructed to predict the contents of the five key chemical elements in steel products. These models are symmetrically complementary, meeting the requirements of different production scenarios and forming a more accurate and universal method for predicting steel product quality. In addition, the prediction method provides a symmetric quality control system for steel product production. Experimental evaluations are conducted on a dataset of 2012 samples from a steel plant in Liaoning Province, China. The input variables include various raw material usages, while the outputs are the contents of the five key chemical elements that influence the quality of steel products. The experimental results show that the models demonstrate their advantages in different performance metrics and are applicable to practical steelmaking scenarios. Full article
(This article belongs to the Section Computer)
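
A rough sketch of the three model families named in the DRBoost abstract, applied to predicting one element's content from raw-material usages. The synthetic data and the use of scikit-learn's squared-error GradientBoostingRegressor as an LSBoost-style learner are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2012, 10))                                   # raw-material usage amounts (toy)
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] ** 2 + 0.01 * rng.normal(size=2012)   # e.g., carbon content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    # squared-error gradient boosting as an LSBoost-style learner
    "LS boosting": GradientBoostingRegressor(loss="squared_error", n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```
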
34 pages, 2710 KB  
Review
The Role of Fractional Calculus in Modern Optimization: A Survey of Algorithms, Applications, and Open Challenges
by Edson Fernandez, Victor Huilcapi, Isabela Birs and Ricardo Cajo
Mathematics 2025, 13(19), 3172; https://doi.org/10.3390/math13193172 - 3 Oct 2025
Abstract
This paper provides a comprehensive overview of the application of fractional calculus in modern optimization methods, with a focus on its impact in artificial intelligence (AI) and computational science. We examine how fractional-order derivatives have been integrated into traditional methodologies, including gradient descent, least mean squares algorithms, particle swarm optimization, and evolutionary methods. These modifications leverage the intrinsic memory and nonlocal features of fractional operators to enhance convergence, increase resilience in high-dimensional and non-linear environments, and achieve a better trade-off between exploration and exploitation. A systematic and chronological analysis of algorithmic developments from 2017 to 2025 is presented, together with representative pseudocode formulations and application cases spanning neural networks, adaptive filtering, control, and computer vision. Special attention is given to advances in variable- and adaptive-order formulations, hybrid models, and distributed optimization frameworks, which highlight the versatility of fractional-order methods in addressing complex optimization challenges in AI-driven and computational settings. Despite these benefits, persistent issues remain regarding computational overhead, parameter selection, and rigorous convergence analysis. This review aims to establish both a conceptual foundation and a practical reference for researchers seeking to apply fractional calculus in the development of next-generation optimization algorithms. Full article
(This article belongs to the Special Issue Fractional Order Systems and Its Applications)
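
A toy sketch of the kind of fractional-order gradient descent the survey reviews: past gradients enter the update through Grünwald–Letnikov-style memory weights. The quadratic objective, order, step size, and memory length are illustrative choices, not a formulation endorsed by the paper.

```python
import numpy as np

def gl_weights(alpha, memory):
    # w_j = (-1)^j * C(alpha, j), via the recurrence w_j = w_{j-1} * (j - 1 - alpha) / j
    w = [1.0]
    for j in range(1, memory):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return np.array(w)

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    # gradient of the quadratic objective 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

alpha, lr, memory = 0.6, 0.05, 5      # fractional order, step size, memory horizon (all illustrative)
w = gl_weights(alpha, memory)
x = np.zeros(2)
grads = []

for _ in range(2000):
    grads.append(grad(x))
    recent = grads[-memory:][::-1]    # newest gradient first
    x = x - lr * sum(wj * gj for wj, gj in zip(w, recent))

print("fractional-GD iterate :", np.round(x, 4))
print("least-squares solution:", np.round(np.linalg.solve(A.T @ A, A.T @ b), 4))
```
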
26 pages, 1838 KB  
Article
Modeling the Emergence of Insight via Quantum Interference on Semantic Graphs
by Arianna Pavone and Simone Faro
Mathematics 2025, 13(19), 3171; https://doi.org/10.3390/math13193171 - 3 Oct 2025
Abstract
Creative insight is a core phenomenon of human cognition, often characterized by the sudden emergence of novel and contextually appropriate ideas. Classical models based on symbolic search or associative networks struggle to capture the non-linear, context-sensitive, and interference-driven aspects of insight. In this work, we propose a computational model of insight generation grounded in continuous-time quantum walks over weighted semantic graphs, where nodes represent conceptual units and edges encode associative relationships. By exploiting the principles of quantum superposition and interference, the model enables the probabilistic amplification of semantically distant but contextually relevant concepts, providing a plausible account of non-local transitions in thought. The model is implemented using standard Python 3.10 libraries and is available both as an interactive fully reproducible Google Colab notebook and a public repository with code and derived datasets. Comparative experiments on ConceptNet-derived subgraphs, including the Candle Problem, 20 Remote Associates Test triads, and Alternative Uses, show that, relative to classical diffusion, quantum walks concentrate more probability on correct targets (higher AUC and peaks reached earlier) and, in open-ended settings, explore more broadly and deeply (higher entropy and coverage, larger expected radius, and faster access to distant regions). These findings are robust under normalized generators and a common time normalization, align with our formal conditions for transient interference-driven amplification, and support quantum-like dynamics as a principled process model for key features of insight. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
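
A minimal sketch of the continuous-time quantum walk dynamics described in the abstract: the state evolves under U(t) = exp(-iAt) on a weighted graph whose nodes stand for concepts. The tiny five-node graph and its weights are invented here; the paper uses ConceptNet-derived subgraphs.

```python
import numpy as np
from scipy.linalg import expm

# weighted adjacency matrix of a toy five-concept semantic graph (symmetric)
A = np.array([
    [0.0, 1.0, 0.5, 0.0, 0.0],
    [1.0, 0.0, 0.8, 0.2, 0.0],
    [0.5, 0.8, 0.0, 0.9, 0.1],
    [0.0, 0.2, 0.9, 0.0, 1.0],
    [0.0, 0.0, 0.1, 1.0, 0.0],
])

psi0 = np.zeros(5, dtype=complex)
psi0[0] = 1.0                       # walker starts localized on the cue concept (node 0)

for t in (0.5, 1.0, 2.0):
    U = expm(-1j * A * t)           # continuous-time quantum walk propagator
    prob = np.abs(U @ psi0) ** 2    # probability of observing each concept at time t
    print(f"t={t}:", np.round(prob, 3))
```
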
13 pages, 322 KB  
Article
Observer-Based Exponential Stabilization for Time Delay Takagi–Sugeno–Lipschitz Models
by Omar Kahouli, Hamdi Gassara, Lilia El Amraoui and Mohamed Ayari
Mathematics 2025, 13(19), 3170; https://doi.org/10.3390/math13193170 - 3 Oct 2025
Abstract
This paper addresses the problem of observer-based control (OBC) for nonlinear systems with time delay (TD). A novel hybrid modeling framework for nonlinear TD systems is first introduced by synergistically combining TD Takagi–Sugeno (TDTS) fuzzy and Lipschitz approaches. The proposed methodology broadens the range of representable systems by enabling Lipschitz nonlinearities to fulfill dual functions: they may describe essential dynamic behaviors of the system or represent aggregated uncertainties, depending on the specific application. The proposed TDTS–Lipschitz (TDTSL) model class features measurable premise variables while accommodating Lipschitz nonlinearities that may depend on unmeasurable system states. Then, through the construction of an appropriate Lyapunov–Krasovskii (L-K) functional, we derive sufficient conditions to ensure exponential stability of the augmented closed-loop model. Subsequently, through a decoupling methodology, these stability conditions are reformulated as a set of linear matrix inequalities (LMIs). Finally, the proposed OBC design is validated through application to a continuous stirred tank reactor (CSTR) with lumped uncertainties. Full article
(This article belongs to the Special Issue Advances in Nonlinear Analysis: Theory, Methods and Applications)
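
The paper's conditions involve delay-dependent Lyapunov–Krasovskii functionals; as a far simpler illustration of how such stability conditions are checked numerically once cast as LMIs, here is a delay-free Lyapunov feasibility problem in cvxpy. The toy system matrix and tolerance are assumptions.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # toy stable system matrix, stand-in only
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                      # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]       # Lyapunov inequality as an LMI
problem = cp.Problem(cp.Minimize(0), constraints)         # pure feasibility problem
problem.solve()

print(problem.status)
print(np.round(P.value, 3))
```
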
24 pages, 1421 KB  
Article
Machine Learning-Aided Supply Chain Analysis of Waste Management Systems: System Optimization for Sustainable Production
by Zhe Wee Ng, Biswajit Debnath and Amit K Chattopadhyay
Sustainability 2025, 17(19), 8848; https://doi.org/10.3390/su17198848 - 2 Oct 2025
Abstract
Electronic-waste (e-waste) management is a key challenge in engineering smart cities due to its rapid accumulation, complex composition, sparse data availability, and significant environmental and economic impacts. This study employs a bespoke machine learning infrastructure on an Indian e-waste supply chain network (SCN) focusing on the three pillars of sustainability—environmental, economic, and social. The economic resilience of the SCN is investigated against external perturbations, like market fluctuations or policy changes, by analyzing six stochastically perturbed modules, generated from the optimal point of the original dataset using Monte Carlo Simulation (MCS). In the process, MCS is demonstrated as a powerful technique to deal with sparse statistics in SCN modeling. The perturbed model is then analyzed to uncover “hidden” non-linear relationships between key variables and their sensitivity in dictating economic arbitrage. Two complementary ensemble-based approaches have been used—Feedforward Neural Network (FNN) model and Random Forest (RF) model. While FNN excels in regressing the model performance against the industry-specified target, RF is better in dealing with feature engineering and dimensional reduction, thus identifying the most influential variables. Our results demonstrate that the FNN model is a superior predictor of arbitrage conditions compared to the RF model. The tangible deliverable is a data-driven toolkit for smart engineering solutions to ensure sustainable e-waste management. Full article
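
A sketch of the two complementary models the abstract compares: a feed-forward neural network regressed against a target and a random forest used for feature ranking. The synthetic supply-chain variables are placeholders for the study's Indian e-waste SCN data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 6))                                # perturbed SCN variables (toy)
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

fnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)

print("FNN R^2:", round(r2_score(y_te, fnn.predict(X_te)), 3))
print("RF  R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print("RF feature ranking:", np.argsort(rf.feature_importances_)[::-1])   # most influential first
```
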
15 pages, 1337 KB  
Article
Sinusoidal Approximation Theorem for Kolmogorov–Arnold Networks
by Sergei Gleyzer, Hanh Nguyen, Dinesh P. Ramakrishnan and Eric A. F. Reinhardt
Mathematics 2025, 13(19), 3157; https://doi.org/10.3390/math13193157 - 2 Oct 2025
Abstract
The Kolmogorov–Arnold representation theorem states that any continuous multivariable function can be exactly represented as a finite superposition of continuous single-variable functions. Subsequent simplifications of this representation involve expressing these functions as parameterized sums of a smaller number of unique monotonic functions. Kolmogorov–Arnold Networks (KANs) have recently been proposed as an alternative to multilayer perceptrons. KANs feature learnable nonlinear activations applied directly to input values, modeled as weighted sums of basis spline functions. This approach replaces the linear transformations and sigmoidal post-activations used in traditional perceptrons. In this work, we propose a novel KAN variant by replacing both the inner and outer functions in the Kolmogorov–Arnold representation with weighted sinusoidal functions of learnable frequencies. We fix the phases of the sinusoidal activations to linearly spaced constant values and provide a proof of their theoretical validity. We also conduct numerical experiments to evaluate its performance on a range of multivariable functions, comparing it with fixed-frequency Fourier transform methods, basis spline KANs (B-SplineKANs), and multilayer perceptrons (MLPs). We show that it outperforms both the fixed-frequency Fourier transform method and the B-SplineKAN, and achieves performance comparable to the MLP. Full article
(This article belongs to the Section E: Applied Mathematics)
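
A rough sketch of a KAN-style layer with sinusoidal activations in the spirit of the abstract: learnable frequencies and amplitudes, fixed linearly spaced phases, summed per output. Layer sizes, initialization, and the toy target are guesses, not the paper's exact design.

```python
import math
import torch
import torch.nn as nn

class SineKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_terms=8):
        super().__init__()
        self.freq = nn.Parameter(torch.randn(out_dim, in_dim, n_terms))        # learnable frequencies
        self.amp = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_terms))   # learnable amplitudes
        # fixed, linearly spaced phases (not trained)
        self.register_buffer("phase", torch.linspace(0.0, math.pi, n_terms))

    def forward(self, x):                       # x: (batch, in_dim)
        x = x[:, None, :, None]                 # (batch, 1, in_dim, 1) for broadcasting
        terms = self.amp * torch.sin(self.freq * x + self.phase)
        return terms.sum(dim=(2, 3))            # (batch, out_dim)

net = nn.Sequential(SineKANLayer(2, 16), SineKANLayer(16, 1))
x = torch.rand(256, 2)
y = torch.sin(3 * x[:, :1]) * x[:, 1:]          # toy multivariable target
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```
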
19 pages, 3140 KB  
Article
Exploring Non-Linear Effects of a Station-Area Built Environment on Origin–Destination Flow in a Large-Scale Urban Metro Network
by Wenming Rao, Yuan Yao, Siping Ke and Zhao Liu
Sustainability 2025, 17(19), 8829; https://doi.org/10.3390/su17198829 - 2 Oct 2025
Abstract
Origin–destination (OD) passenger flow is a critical variable for metro system planning and operation. While numerous studies have investigated the influence of the built environment on passenger flow, most have focused on ingress or egress flows at metro stations. The impact of the built environment on OD flow dynamics, particularly the differences between origin-side and destination-side effects, remains poorly understood. This study proposes a novel method for exploring the non-linear effects of station-area built environments on OD flow in large-scale metro networks. First, hourly OD flows and station-area built environment features were extracted from multi-source data. Next, an analytical framework was developed to model the built environment–OD flow relationship using a gradient boosting decision tree model. Finally, the contributions of built environment variables and their non-linear effects on OD flows were systematically investigated. The proposed method was implemented on the Suzhou metro network in China. Test results show that most built environment variables exhibit time-varying, non-linear correlations with OD flows. Even the same variable demonstrates notable differences in its effect between the origin and destination sides. The findings of this study provide valuable guidance for metro planning and station-area urban development. Full article
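
A sketch of the analysis pattern described in the abstract: fit a gradient boosting decision tree to OD flows and read off one variable's non-linear effect with a partial-dependence curve. The synthetic built-environment features are stand-ins for the study's multi-source data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(3000, 5))          # origin/destination built-environment variables (toy)
od_flow = 50 * np.sqrt(X[:, 0]) + 30 * X[:, 1] * (X[:, 1] > 0.5) + 5 * rng.normal(size=3000)

gbdt = GradientBoostingRegressor(n_estimators=400, max_depth=3, random_state=2).fit(X, od_flow)

# manual partial-dependence curve for feature 0: fix it on a grid, average the predictions
grid = np.linspace(0, 1, 11)
pd_curve = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, 0] = v
    pd_curve.append(gbdt.predict(X_mod).mean())

print(np.round(grid, 1))
print(np.round(pd_curve, 1))
```
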
30 pages, 4602 KB  
Article
Intelligent Fault Diagnosis of Ball Bearing Induction Motors for Predictive Maintenance Industrial Applications
by Vasileios I. Vlachou, Theoklitos S. Karakatsanis, Stavros D. Vologiannidis, Dimitrios E. Efstathiou, Elisavet L. Karapalidou, Efstathios N. Antoniou, Agisilaos E. Efraimidis, Vasiliki E. Balaska and Eftychios I. Vlachou
Machines 2025, 13(10), 902; https://doi.org/10.3390/machines13100902 - 2 Oct 2025
Abstract
Induction motors (IMs) are crucial in many industrial applications, offering a cost-effective and reliable source of power transmission and generation. However, their continuous operation imposes considerable stress on electrical and mechanical parts, leading to progressive wear that can cause unexpected system shutdowns. Bearings, which enable shaft motion and reduce friction under varying loads, are the most failure-prone components, with bearing ball defects representing the most severe mechanical failures. Early and accurate fault diagnosis is therefore essential to prevent damage and ensure operational continuity. Recent advances in the Internet of Things (IoT) and machine learning (ML) have enabled timely and effective predictive maintenance strategies. Among various diagnostic parameters, vibration analysis has proven particularly effective for detecting bearing faults. This study proposes a hybrid diagnostic framework for induction motor bearings, combining vibration signal analysis with Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) in an IoT-enabled Industry 4.0 architecture. Statistical and frequency-domain features were extracted, reduced using Principal Component Analysis (PCA), and classified with SVMs and ANNs, achieving over 95% accuracy. The novelty of this work lies in the hybrid integration of interpretable and non-linear ML models within an IoT-based edge–cloud framework. Its main contribution is a scalable and accurate real-time predictive maintenance solution, ensuring high diagnostic reliability and seamless integration in Industry 4.0 environments. Full article
(This article belongs to the Special Issue Vibration Detection of Induction and PM Motors)
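
A condensed sketch of the diagnostic chain outlined in the abstract: statistical features from vibration segments, PCA, then an SVM classifier. The simulated signals, feature list, and healthy-versus-ball-defect labels are illustrative only.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2048)

def segment_features(sig):
    rms = np.sqrt(np.mean(sig ** 2))
    return [sig.mean(), sig.std(), np.abs(sig).max(), kurtosis(sig), skew(sig), rms]

def simulate(n, faulty):
    feats = []
    for _ in range(n):
        sig = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)   # shaft rotation + noise
        if faulty:
            sig += 0.8 * np.sin(2 * np.pi * 120 * t) ** 20                  # periodic impulses (ball defect)
        feats.append(segment_features(sig))
    return np.array(feats)

X = np.vstack([simulate(200, False), simulate(200, True)])
y = np.array([0] * 200 + [1] * 200)

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```
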
14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability—an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
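
A sketch of the comparison pipeline the abstract describes, with item-level MMSE scores as inputs and a small neural network evaluated against Random Forest and SVM baselines. Synthetic data replaces the 164-participant clinical dataset, and permutation importance stands in here for the paper's SHAP analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.integers(0, 4, size=(164, 19)).astype(float)    # 19 item-level MMSE scores (toy)
y = (X[:, 10] + X[:, 11] + X[:, 16] < 5).astype(int)    # 1 = impaired, from a toy rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=4, stratify=y)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=4),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=4),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))

# which items drive the network's predictions (stand-in for SHAP attribution)
imp = permutation_importance(models["MLP"], X_te, y_te, n_repeats=20, random_state=4)
print("most influential items:", np.argsort(imp.importances_mean)[::-1][:3])
```
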
25 pages, 4372 KB  
Article
A Hybrid Framework Integrating Past Decomposable Mixing and Inverted Transformer for GNSS-Based Landslide Displacement Prediction
by Jinhua Wu, Chengdu Cao, Liang Fei, Xiangyang Han, Yuli Wang and Ting On Chan
Sensors 2025, 25(19), 6041; https://doi.org/10.3390/s25196041 - 1 Oct 2025
Abstract
Landslide displacement prediction is vital for geohazard early warning and infrastructure safety. To address the challenges of modeling nonstationary, nonlinear, and multiscale behaviors inherent in GNSS time series, this study proposes a hybrid predicting framework that integrates Past Decomposable Mixing with an inverted Transformer architecture (PDM-iTransformer). The PDM module decomposes the original sequence into multi-resolution trend and seasonal components, using structured bottom-up and top-down mixing strategies to enhance feature representation. The iTransformer then models each variable’s time series independently, applying cross-variable self-attention to capture latent dependencies and using feed-forward networks to extract local dynamic features. This design enables simultaneous modeling of long-term trends and short-term fluctuations. Experimental results on GNSS monitoring data demonstrate that the proposed method significantly outperforms traditional models, with R2 increased by 16.2–48.3% and RMSE and MAE reduced by up to 1.33 mm and 1.08 mm, respectively. These findings validate the framework’s effectiveness and robustness in predicting landslide displacement under complex terrain conditions. Full article
(This article belongs to the Special Issue Structural Health Monitoring and Smart Disaster Prevention)
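
A tiny sketch of the decomposition idea behind the PDM module: split a displacement series into a smooth trend and a seasonal/residual part before prediction, here with a plain moving average rather than the paper's multi-resolution mixing. The synthetic GNSS-like series and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(365)
displacement = 0.05 * t + 2 * np.sin(2 * np.pi * t / 30) + 0.5 * rng.normal(size=t.size)

def decompose(series, window=31):
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")   # moving-average trend
    seasonal = series - trend                                             # residual after detrending
    return trend, seasonal

trend, seasonal = decompose(displacement)
print("trend range :", round(float(trend.min()), 2), "to", round(float(trend.max()), 2))
print("seasonal std:", round(float(seasonal.std()), 2))
```
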
34 pages, 4605 KB  
Article
Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
by Roberto De Fazio, Şule Esma Yalçınkaya, Ilaria Cascella, Carolina Del-Valle-Soto, Massimo De Vittorio and Paolo Visconti
Sensors 2025, 25(19), 6021; https://doi.org/10.3390/s25196021 - 1 Oct 2025
Abstract
Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results—overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set—demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, mainly for unobtrusive, home-based sleep monitoring systems. Full article
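
A sketch of the feature-reduction step described in the abstract: per-epoch features (here, simple Welch band powers) reduced with PCA to the components covering a target share of variance. The simulated epochs and band definitions are placeholders for the study's much larger feature set.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

fs = 250                                     # sampling rate in Hz (assumed)
rng = np.random.default_rng(6)
epochs = rng.normal(size=(300, 30 * fs))     # 300 synthetic 30-second epochs

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def epoch_features(epoch):
    freqs, psd = welch(epoch, fs=fs, nperseg=4 * fs)
    powers = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands.values()]
    powers.append(powers[1] / powers[2])     # theta/alpha ratio as an extra non-linear feature
    return powers

features = np.array([epoch_features(e) for e in epochs])

pca = PCA(n_components=0.94)                 # keep components covering 94% of the variance
reduced = pca.fit_transform(features)
print("components kept:", pca.n_components_)
print("cumulative explained variance:", round(float(pca.explained_variance_ratio_.sum()), 3))
```
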
20 pages, 2916 KB  
Article
Domain-Driven Teacher–Student Machine Learning Framework for Predicting Slope Stability Under Dry Conditions
by Semachew Molla Kassa, Betelhem Zewdu Wubineh, Africa Mulumar Geremew, Nandyala Darga Kumar and Grzegorz Kacprzak
Appl. Sci. 2025, 15(19), 10613; https://doi.org/10.3390/app151910613 - 30 Sep 2025
Abstract
Slope stability prediction is a critical task in geotechnical engineering, but machine learning (ML) models require large datasets, which are often costly and time-consuming to obtain. This study proposes a domain-driven teacher–student framework to overcome data limitations for predicting the dry factor of safety (FS dry). The teacher model, XGBoost, was trained on the original dataset to capture nonlinear relationships among key site-specific features (unit weight, cohesion, friction angle) and assign pseudo-labels to synthetic samples generated via domain-driven simulations. Six student models, random forest (RF), decision tree (DT), shallow artificial neural network (SNN), linear regression (LR), support vector regression (SVR), and K-nearest neighbors (KNN), were trained on the augmented dataset to approximate the teacher’s predictions. Models were evaluated using a train–test split and five-fold cross-validation. RF achieved the highest predictive accuracy, with an R2 of up to 0.9663 and low error metrics (MAE = 0.0233, RMSE = 0.0531), outperforming other student models. Integrating domain knowledge and synthetic data improved prediction reliability despite limited experimental datasets. The framework provides a robust and interpretable tool for slope stability assessment, supporting infrastructure safety in regions with sparse geotechnical data. Future work will expand the dataset with additional field and laboratory tests to further improve model performance. Full article
(This article belongs to the Section Civil Engineering)
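
A minimal sketch of the teacher–student idea in the abstract: an XGBoost teacher fitted on a small measured dataset pseudo-labels synthetic samples, and a random-forest student learns from the augmented set. The geotechnical features, sampling ranges, and toy factor-of-safety function are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(7)

def toy_fs_dry(unit_weight, cohesion, friction_angle):
    # invented stand-in for the true dry factor of safety
    return 0.02 * cohesion + 0.03 * unit_weight * np.tan(np.radians(friction_angle))

def sample(n):
    return np.column_stack([rng.uniform(16, 22, n),      # unit weight (kN/m^3)
                            rng.uniform(5, 40, n),       # cohesion (kPa)
                            rng.uniform(20, 40, n)])     # friction angle (deg)

# small "measured" dataset trains the teacher
X_small = sample(60)
y_small = toy_fs_dry(*X_small.T) + 0.02 * rng.normal(size=60)
teacher = XGBRegressor(n_estimators=300, max_depth=4).fit(X_small, y_small)

# domain-driven synthetic samples are pseudo-labelled by the teacher
X_syn = sample(2000)
y_syn = teacher.predict(X_syn)

student = RandomForestRegressor(n_estimators=300, random_state=7)
student.fit(np.vstack([X_small, X_syn]), np.concatenate([y_small, y_syn]))

X_test = sample(200)
print("student R^2 vs. toy ground truth:",
      round(r2_score(toy_fs_dry(*X_test.T), student.predict(X_test)), 3))
```
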
22 pages, 3419 KB  
Article
A Small-Sample Prediction Model for Ground Surface Settlement in Shield Tunneling Based on Adjacent-Ring Graph Convolutional Networks (GCN-SSPM)
by Jinpo Li, Haoxuan Huang and Gang Wang
Buildings 2025, 15(19), 3519; https://doi.org/10.3390/buildings15193519 - 30 Sep 2025
Abstract
In some projects, a lack of data causes problems for presenting an accurate prediction model for surface settlement caused by shield tunneling. Existing models often rely on large volumes of data and struggle to maintain accuracy and reliability in shield tunneling. In particular, the spatial dependency between adjacent rings is overlooked. To address these limitations, this study presents a small-sample prediction framework for settlement induced by shield tunneling, using an adjacent-ring graph convolutional network (GCN-SSPM). Gaussian smoothing, empirical mode decomposition (EMD), and principal component analysis (PCA) are integrated into the model, which incorporates spatial topological priors by constructing a ring-based adjacency graph to extract essential features. A dynamic ensemble strategy is further employed to enhance robustness across layered geological conditions. Monitoring data from the Wuhan Metro project is used to demonstrate that GCN-SSPM yields accurate and stable predictions, particularly in zones facing abrupt settlement shifts. Compared to LSTM+GRU+Attention and XGBoost, the proposed model reduces RMSE by over 90% (LSTM) and 75% (XGBoost), respectively, while achieving an R2 of about 0.71. Notably, the ensemble assigns over 70% of predictive weight to GCN-SSPM in disturbance-sensitive zones, emphasizing its effectiveness in capturing spatially coupled and nonlinear settlement behavior. The prediction error remains within ±1.2 mm, indicating strong potential for practical applications in intelligent construction and early risk mitigation in complex geological conditions. Full article
(This article belongs to the Section Building Structures)
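
A sketch of the adjacent-ring graph convolution at the core of GCN-SSPM as the abstract describes it: rings are nodes of a chain graph, and a normalized-adjacency GCN layer mixes features along the tunnel axis. Feature content, sizes, and the plain chain adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_rings, in_dim, hid = 40, 6, 16

# chain adjacency: ring i is linked to ring i+1, plus self-loops
A = torch.zeros(n_rings, n_rings)
idx = torch.arange(n_rings - 1)
A[idx, idx + 1] = 1.0
A[idx + 1, idx] = 1.0
A += torch.eye(n_rings)
d = A.sum(dim=1).pow(-0.5)
A_hat = d[:, None] * A * d[None, :]              # D^{-1/2} (A + I) D^{-1/2}

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x):
        return torch.relu(self.lin(A_hat @ x))   # aggregate neighbouring rings, then transform

model = nn.Sequential(GCNLayer(in_dim, hid), GCNLayer(hid, hid), nn.Linear(hid, 1))
x = torch.randn(n_rings, in_dim)                 # per-ring shield parameters / geology features (toy)
settlement = model(x)                            # predicted surface settlement per ring
print(settlement.shape)                          # torch.Size([40, 1])
```
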
20 pages, 2215 KB  
Article
Research on Thermal Failure Characteristics and Prediction Methods of Lithium–Sulfur Batteries
by Lu Cheng, Junshuai Lu and Bihui Jin
World Electr. Veh. J. 2025, 16(10), 555; https://doi.org/10.3390/wevj16100555 - 30 Sep 2025
Abstract
Lithium–sulfur (Li-S) batteries are promising energy storage solutions due to their high density and cost-effectiveness. However, the risk of thermal failure limits their widespread use. Understanding thermal failure characteristics and developing accurate prediction methods are crucial for ensuring battery safety and reliability. This study aims to analyze the thermal failure characteristics of Li-S batteries and offer machine learning-based prediction methods for the early detection of potential thermal failures. The research begins with collecting temperature data from sensors deployed over numerous planes of a Li-S battery module under varied operating conditions. The data are created using proven numerical models that simulate various failure conditions. To improve model stability and learning efficiency, temperature data are preprocessed using min–max normalization to scale them to a consistent range. We suggest using a machine learning algorithm, such as the Energy Valley Optimizer Muted Multilayer Perceptrons with Mutual Information (EneVO-MPMI) algorithm. These models are trained on temperature data which are combined with Multilayer Perceptrons (MPs) to capture complicated, nonlinear correlations in thermal failure predictions, whereas the Energy Valley Optimizer (EneVO) optimizes the model’s structure and hyperparameters to avoid overfitting. Mutual Information (MI) assists in the selection of relevant features, resulting in accurate prediction from sensor data. To assess the models’ generalizability, five-fold cross-validation is used and achieves an average F1-score of 97.2%, a recall of 97.6%, an accuracy of 97.3%, and a precision of 96.9%. The EneVO-MPMI method emerges as the most effective, delivering a higher accuracy in forecasting thermal failure while requiring less training and prediction time. It shows that the EneVO-MPMI method is the most accurate and efficient at forecasting thermal breakdown in Li-S batteries. The technique can be used to identify Li-S battery defects early on, reducing the possibility of thermal instability and improving battery safety in a variety of applications. Full article
(This article belongs to the Section Storage Systems)
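
A sketch of the preprocessing-plus-classifier chain the abstract outlines: min–max scaling of plane temperatures, mutual-information feature selection, and a multilayer perceptron; the EneVO hyperparameter search is omitted. Data and sizes are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(8)
X = rng.uniform(20, 90, size=(1000, 12))                # 12 sensor-plane temperatures (toy, in C)
y = (X[:, [2, 5, 7]].mean(axis=1) > 70).astype(int)     # toy thermal-failure label

clf = make_pipeline(
    MinMaxScaler(),                                     # min-max normalization
    SelectKBest(mutual_info_classif, k=6),              # mutual-information feature selection
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=8),
)
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```
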