Search Results (4,373)

Search Parameters:
Keywords = nonlinear learning

28 pages, 2484 KB  
Review
Research Progress on Application of Machine Learning in Continuous Casting
by Zhaofeng Wang, Jinghao Shao, Shuai Zhang, Jiahui Zhang and Yuqi Pang
Metals 2025, 15(12), 1383; https://doi.org/10.3390/met15121383 (registering DOI) - 17 Dec 2025
Abstract
Continuous casting is a key core link in steel production with characteristics of strong nonlinearity, multi-parameter coupling and dynamic fluctuations under working conditions. Traditional experience-dependent or mechanism-driven models are no longer suitable for the high-quality and high-efficiency production demands of modern steel industries. Machine learning provides an effective technical path for solving the complex control problems in the continuous casting process through its powerful data mining and pattern recognition capabilities. This paper systematically reviews the research progress of machine learning applications in the field of continuous casting, focusing on three core scenarios: abnormal prediction, quality defect detection and process parameter optimization. It sorts out the evolution from single models to feature optimization and integration, deep learning hybrid models, and mechanism-data dual-driven models. It summarizes the significant achievements of this technology in reducing production risks and improving the stability of cast billet quality, and it analyzes the prominent challenges currently faced such as data distortion and distribution imbalance, insufficient model interpretability and limited cross-scenario generalization ability. Finally, it looks forward to future technological innovation and application expansion directions, providing theoretical support and technical references for the digital and intelligent transformation of the steel industry. Full article
25 pages, 5917 KB  
Article
Explainable Machine Learning-Based Prediction of Compressive Strength in Sustainable Recycled Aggregate Self-Compacting Concrete Using SHAP Analysis
by Ahmed Almutairi
Sustainability 2025, 17(24), 11334; https://doi.org/10.3390/su172411334 - 17 Dec 2025
Abstract
The increasing emphasis on sustainability in construction materials has led to a surge of research focused on recycled aggregate self-compacting concrete (RA-SCC). However, predicting the compressive strength of such concrete remains challenging because of the nonlinear interactions among the mix’s constituents. The distinct contribution of this study is to develop an interpretable machine learning (ML) framework that accurately forecasts the compressive strength of RA-SCC and identifies the most influential mix parameters. A dataset comprising 400 experimental samples was compiled, incorporating eight input variables: age, cement strength, cement, fly ash, blast furnace slag, water, recycled aggregate, and superplasticizer, with compressive strength as the output variable. Four ML algorithms, namely support vector regression (SVR), random forest (RF), Multilayer Perceptron (MLP), and extreme gradient boosting (XGBoost), were trained and optimized using Bayesian hyperparameter tuning combined with 10-fold cross-validation. Among the evaluated models, XGBoost demonstrated superior accuracy, with R2 = 0.98 and RMSE = 2.95 MPa during training, and R2 = 0.96 with RMSE = 3.25 MPa during testing, confirming its robustness and minimal overfitting. SHAP (SHapley Additive exPlanations) analysis indicates that superplasticizer, cement, and cement strength were the most dominant factors influencing compressive strength, whereas higher water content showed a negative impact. The developed framework demonstrates that explainable ML can effectively capture the complex nonlinear behavior of RA-SCC, offering a reliable tool for mix design optimization and sustainable concrete production. These findings contribute to advancing data-driven decision making in eco-efficient materials engineering. Full article
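As an illustration of the SHAP-style attribution used in this study, the sketch below computes exact Shapley values for a toy three-feature model by enumerating feature coalitions. The model, coefficients, and baseline values are invented for demonstration; they are not taken from the paper, which uses the SHAP library on a trained XGBoost model.

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, features, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    predict  -- function mapping a dict of feature values to a prediction
    features -- dict of feature name -> value for the instance explained
    baseline -- dict of feature name -> background (reference) value
    """
    names = list(features)
    n = len(names)
    phi = {}
    for name in names:
        others = [f for f in names if f != name]
        contrib = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight of a coalition of size k among n features.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {f: (features[f] if f in subset or f == name else baseline[f])
                          for f in names}
                without_f = {f: (features[f] if f in subset else baseline[f])
                             for f in names}
                contrib += w * (predict(with_f) - predict(without_f))
        phi[name] = contrib
    return phi

# Toy linear "strength" model with hypothetical coefficients.
def model(x):
    return 10 + 0.5 * x["cement"] + 2.0 * x["superplasticizer"] - 0.1 * x["water"]

instance = {"cement": 400.0, "superplasticizer": 6.0, "water": 180.0}
background = {"cement": 350.0, "superplasticizer": 4.0, "water": 190.0}
phi = exact_shapley(model, instance, background)
```

For a linear model, each feature's Shapley value reduces to coefficient times (instance value minus baseline value), and the values always sum to the difference between the prediction for the instance and the prediction for the baseline, which is the efficiency property that makes SHAP attributions interpretable.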
28 pages, 1670 KB  
Article
Research on Intelligent Control Method of Camber for Medium and Heavy Plate Based on Machine Vision
by Chunyu He, Chunpo Yue, Zhong Zhao, Zhiqiang Wu and Zhijie Jiao
Materials 2025, 18(24), 5668; https://doi.org/10.3390/ma18245668 - 17 Dec 2025
Abstract
With the continuous development of intelligent manufacturing in the iron and steel industry, requirements for the quality control and precision of steel products keep increasing. Camber is one of the critical defects affecting product quality in medium and heavy plates. Its occurrence during the rolling process not only reduces the yield of plates but also leads to serious production accidents such as rolling scrap and equipment damage, increasing the operational costs of enterprises. Because camber is influenced by complex, coupled factors and is difficult to model and control directly, this study proposes a camber detection and control method for medium and heavy plates based on image processing and machine learning algorithms, relying on an actual plate production line. An Optuna-XGBoost model is used to mine and train plate rolling production data, extracting the optimal control experience of operators as pre-control values for camber. The Optuna-XGBoost model achieves an R2 of 0.9999 on the training set and 0.9794 on the test set, demonstrating excellent fitting performance. Meanwhile, a machine vision-based camber detection technology for the plate rolling process is developed, and a feedback control model for the camber of medium and heavy plates based on distal lateral movement is established. The combined application of pre-control and feedback control reduces the occurrence of camber, ensuring the overall flatness of steel plates during the rolling process. This paper establishes an intelligent control framework for plate camber, synergized by data-driven pre-control and machine vision-based feedback control, offering a novel approach for the online optimal control of complex nonlinear industrial processes. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
23 pages, 6068 KB  
Article
Relationship Between Built-Up Spatial Pattern, Green Space Morphology and Carbon Sequestration at the Community Scale: A Case Study of Shanghai
by Lixian Peng, Yunfang Jiang, Xianghua Li, Chunjing Li and Jing Huang
Land 2025, 14(12), 2437; https://doi.org/10.3390/land14122437 - 17 Dec 2025
Abstract
Enhancing the carbon sequestration (CS) capacity of urban green spaces is crucial for mitigating global warming, environmental degradation, and urbanisation-induced issues. This study focuses on the urban community unit to establish a system of determining factors for the CS capacity of green space, considering the built-up spatial pattern and green space morphology. An interpretable machine learning approach (Random Forest + Shapley Additive exPlanations) is employed to systematically analyse the non-linear relationships of built-up spatial pattern and green space morphology factors. Results demonstrate significant urban zonal heterogeneity in green space CS, with communities in the southern suburban area exhibiting higher capacity. Among green space morphology factors, higher fractional vegetation cover (FVC) and cohesion were positively correlated with green space CS capacity. Leaf area index (LAI), canopy density (CD), and the evergreen-broadleaf forest ratio further enhanced the positive effect of two-dimensional green space factors on CS. Among built-up spatial pattern factors, communities with a high green space ratio and low development intensity exhibited higher CS capacity. The optimal ranges of FVC, LAI and CD for effective facilitation of community green space CS were identified as 0.6–0.75, 4.85–5.5 and 0.68–0.7, respectively. Moreover, cohesion, LAI and CD bolstered CS capacity in communities with a high building density and plot ratio. This study provides a rational basis for the planning and layout of green space patterns to enhance CS efficiency at the urban community scale. Full article
29 pages, 4874 KB  
Article
Hierarchical Control for USV Trajectory Tracking with Proactive–Reactive Reward Shaping
by Zixiao Luo, Dongmei Du, Dandan Liu, Qiangqiang Yang, Yi Chai, Shiyu Hu and Jiayou Wu
J. Mar. Sci. Eng. 2025, 13(12), 2392; https://doi.org/10.3390/jmse13122392 - 17 Dec 2025
Abstract
To address trajectory tracking of underactuated unmanned surface vessels (USVs) under disturbances and model uncertainty, we propose a hierarchical control framework that combines model predictive control (MPC) with proximal policy optimization (PPO). The outer loop runs in the inertial reference frame, where an MPC planner based on a kinematic model enforces velocity and safety constraints and generates feasible body-fixed velocity references. The inner loop runs in the body-fixed reference frame, where a PPO policy learns the nonlinear inverse mapping from velocity to multi-thruster thrust, compensating for hydrodynamic modeling errors and external disturbances. On top of this framework, we design a Proactive–Reactive Adaptive Reward (PRAR) that uses the MPC prediction sequence and real-time pose errors to adaptively reweight the reward across surge, sway and yaw, improving robustness and cross-model generalization. Simulation studies on circular and curvilinear trajectories compare the proposed PRAR-driven dual-loop controller (PRAR-DLC) with MPC-PID, PPO-Only, MPC-PPO and PPO variants. On the curvilinear trajectory, PRAR-DLC reduces surge MAE and maximum tracking error from 0.269 m and 0.963 m (MPC-PID) to 0.138 m and 0.337 m, respectively; on the circular trajectory it achieves about an 8.5% reduction in surge MAE while maintaining sway and yaw accuracy comparable to the baseline controllers. Real-time profiling further shows that the average MPC and PPO evaluation times remain below the control sampling period, indicating that the proposed architecture is compatible with real-time onboard implementation and physical deployment. Full article
(This article belongs to the Section Ocean Engineering)
32 pages, 1043 KB  
Article
Modeling Student Acceptance of AI Technologies in Higher Education: A Hybrid SEM–ANN Approach
by Charmine Sheena R. Saflor
Future Internet 2025, 17(12), 581; https://doi.org/10.3390/fi17120581 - 17 Dec 2025
Abstract
This study examines the role of different factors in supporting the sustainable use of Artificial Intelligence (AI) technologies in higher education, particularly in the context of student interactions with intelligent and human-centered learning tools. Using Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN) within the Technology Acceptance Model (TAM), the research provides a detailed look at how trust influences students’ attitudes and behaviors toward AI-based learning platforms. Data were gathered from 200 students at Occidental Mindoro State College to analyze the effects of social influence, self-efficacy, perceived ease of use, perceived risk, attitude toward use, behavioral intention, acceptance, and actual use. Results from SEM indicate that perceived risk and ease of use have a stronger impact on AI adoption than perceived usefulness and trust. The ANN analysis further shows that acceptance is the most important factor influencing actual AI use, reflecting the complex, non-linear relationships between trust, risk, and adoption. These findings highlight the need for AI systems that are adaptive, transparent, and designed with the user experience in mind. By building interfaces that are more intuitive and reliable, educators and designers can strengthen human–AI interaction and promote responsible and lasting integration of AI in education. Full article
16 pages, 1209 KB  
Article
Comparative Analysis of Machine Learning and Statistical Models for Railroad–Highway Grade Crossing Safety
by Erickson Senkondo, Deo Chimba, Masanja Madalo, Afia Yeboah and Shala Blue
Vehicles 2025, 7(4), 163; https://doi.org/10.3390/vehicles7040163 - 17 Dec 2025
Abstract
Railroad-highway grade crossings (RHGCs) are critical points of conflict between roadway and rail systems, contributing to over 2000 crashes and 250 fatalities annually in the United States. This study applied machine learning (ML) techniques to model and predict crash frequency at RHGCs, using a comprehensive dataset from the Federal Railroad Administration (FRA) and the Tennessee Department of Transportation (TDOT). The dataset included 807 validated crossings, incorporating roadway geometry, traffic volumes, rail characteristics, and control features. Five ML models—Random Forest, XGBoost, PSO-Elastic Net, Transformer-CNN, and Autoencoder-MLP—were developed and compared to a traditional Negative Binomial (NB) regression model. Results showed that the ML models significantly outperformed the NB model in predictive accuracy, with the Transformer-CNN achieving the lowest Mean Squared Error (21.4) and Mean Absolute Error (3.2). Feature importance analysis using SHAP values consistently identified Annual Average Daily Traffic (AADT), Truck Traffic Percentage, and Number of Lanes as the most influential predictors, findings that were underrepresented or statistically insignificant in the NB model. Notably, the NB model failed to detect the nonlinear relationships and interaction effects that the ML algorithms captured effectively. While only three variables were statistically significant in the NB model, the ML models revealed a broader spectrum of critical crash determinants, offering deeper interpretability and higher sensitivity. These findings emphasize the superiority of machine learning approaches in modeling RHGC safety and highlight their potential to support data-driven interventions and policy decisions for reducing crash risks at grade crossings. Full article
29 pages, 4822 KB  
Article
Depth-Specific Prediction of Coastal Soil Salinization Using Multi-Source Environmental Data and an Optimized GWO–RF–XGBoost Ensemble Model
by Yuanbo Wang, Xiao Yang, Xingjun Lv, Wei He, Ming Shao, Hongmei Liu and Chao Jia
Remote Sens. 2025, 17(24), 4043; https://doi.org/10.3390/rs17244043 - 16 Dec 2025
Abstract
Soil salinization is an escalating global concern threatening agricultural productivity and ecological sustainability, particularly in coastal regions where complex interactions among hydrological, climatic, and anthropogenic factors govern salt accumulation. The vertical differentiation and spatial heterogeneity of salinity drivers remain poorly resolved. We present an integrated modeling framework combining ensemble machine learning and spatial statistics to investigate the depth-specific dynamics of soil salinity in the Yellow River Delta, a vulnerable coastal agroecosystem. Using multi-source environmental predictors and 220 field samples harmonized to 30 m resolution, the hybrid Gray Wolf Optimizer–Random Forest–XGBoost model achieved high predictive accuracy for surface salinity (R2 = 0.91, RMSE = 0.03 g/kg, MAE = 0.02 g/kg). Spatial autocorrelation analysis (Global Moran’s I = 0.25, p < 0.01) revealed pronounced clustering of high-salinity hotspots associated with seawater intrusion pathways and capillary rise. The results reveal distinct vertical control mechanisms: vegetation indices and soil water content dominate surface salinity, while total dissolved solids (TDS), pH, and groundwater depth increasingly influence the middle and deep layers. By applying SHAP (SHapley Additive exPlanations), we quantified nonlinear feature contributions and ranked key predictors across layers, offering mechanistic insights beyond conventional correlation. Our findings highlight the importance of depth-specific monitoring and intervention strategies and demonstrate how explainable machine learning can bridge the gap between black-box prediction and process understanding. The framework is generalizable and can be adapted to other coastal agroecosystems with similar hydro-environmental conditions. Full article
(This article belongs to the Topic Water Management in the Age of Climate Change)
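For reference, the Global Moran's I statistic reported in this abstract measures spatial autocorrelation: I = (n/W) · Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A minimal pure-Python sketch on a made-up one-dimensional transect of four cells follows; the binary contiguity weights and values are illustrative, not the paper's data.

```python
def morans_i(values, weights):
    """Global Moran's I. `weights[i][j]` is the spatial weight between
    observations i and j (diagonal assumed zero)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)          # W: total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))    # cross-products
    den = sum(d * d for d in dev)                     # variance term
    return (n / w_sum) * (num / den)

# Four cells along a transect, each cell neighbouring the next (chain).
weights = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
clustered = [1.0, 1.0, 0.0, 0.0]   # high values grouped together
dispersed = [1.0, 0.0, 1.0, 0.0]   # high and low values alternate
```

Clustered values yield a positive I (neighbours are similar, as in the salinity hotspots described above), while a checkerboard-like pattern yields a negative I.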
21 pages, 843 KB  
Article
Multi-Condition Degradation Sequence Analysis in Computers Using Adversarial Learning and Soft Dynamic Time Warping
by Yuanhong Mao, Xi Liu, Pengchao He, Bo Chai, Ling Li, Yilin Zhang, Xin Hu and Yunan Li
Mathematics 2025, 13(24), 4007; https://doi.org/10.3390/math13244007 - 16 Dec 2025
Abstract
Predicting the degradation and lifespan of embedded computers relies critically on the accurate evaluation of key parameter degradation within computing systems. Accelerated high-temperature tests are frequently employed as an alternative to ambient-temperature degradation tests, primarily due to the excessive duration and cost of ambient-temperature testing. However, the scarcity of effective methodologies for correlating degradation trends across distinct temperature conditions persists as a prominent challenge. This study addresses this gap by leveraging adversarial learning to generate low-temperature degradation sequences from high-temperature datasets. The adversarial learning framework enables feature transfer across diverse operating conditions and facilitates domain adaptation learning. This empowers the model to extract features invariant to degradation trends across multiple temperature conditions. Furthermore, soft dynamic time warping (SDTW) is utilized to precisely align the generated low-temperature sequences with their real-world counterparts. This alignment methodology enables elastic matching of time series data exhibiting nonlinear temporal variations, thereby ensuring accurate comparison and synchronization of degradation sequences. Compared with prior methodologies, our proposed approach delivers superior performance on computer degradation data. It offers a more accurate and reliable solution for the degradation analysis and lifespan prediction of embedded computers, thereby advancing the reliability of computational systems. Full article
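The alignment step can be illustrated with soft dynamic time warping's recursion, which replaces classic DTW's hard minimum with a smoothed soft-minimum controlled by a parameter γ. The sketch below is a minimal pure-Python version on toy 1-D sequences, not the authors' implementation; as γ → 0 it approaches ordinary DTW.

```python
import math

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW cost between two 1-D sequences (squared-error ground cost).
    Smaller gamma -> closer to classic (hard) dynamic time warping."""
    def softmin(a, b, c):
        m = min(a, b, c)  # shift by the min for numerical stability
        s = sum(math.exp(-(v - m) / gamma) for v in (a, b, c))
        return m - gamma * math.log(s)

    n, m_len = len(x), len(y)
    INF = float("inf")
    # R[i][j] holds the (soft) cost of aligning x[:i] with y[:j].
    R = [[INF] * (m_len + 1) for _ in range(n + 1)]
    R[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m_len + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i][j] = cost + softmin(R[i - 1][j - 1],  # match
                                     R[i - 1][j],      # insertion
                                     R[i][j - 1])      # deletion
    return R[n][m_len]
```

Because the soft-min is differentiable everywhere, soft-DTW can serve directly as a training loss for sequence-generation models, which is what makes it suitable for aligning generated and measured degradation sequences with nonlinear temporal variations.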
19 pages, 1680 KB  
Article
An Improved DQN Framework with Dual Residual Horizontal Feature Pyramid for Autonomous Fault Diagnosis in Strong-Noise Scenarios
by Sha Li, Tong Wang, Xin Xu, Weiting Gan, Kun Chen, Xinyan Fan and Xueming Xu
Sensors 2025, 25(24), 7639; https://doi.org/10.3390/s25247639 - 16 Dec 2025
Abstract
Fault diagnosis methods based on deep learning have made considerable progress in recent years. However, in actual industrial scenarios, strong background noise and limited computing resources pose challenges to the practical application of fault diagnosis models. In response to these issues, this paper proposes a novel noise-resistant and lightweight fault diagnosis framework with a nonlinear timestep degenerative greedy strategy (NTDGS) and a dual residual horizontal feature pyramid (DRHFPN) for fault diagnosis in strong-noise scenarios. The method exploits the strong fitting ability of deep learning to parameterize the agent in reinforcement learning, fully leveraging the advantages of both deep learning and reinforcement learning methods. NTDGS is developed to adaptively adjust the action sampling strategy of the agent at different training stages, improving the convergence speed of the network. To enhance the noise resistance of the network, DRHFPN is constructed, which can filter out interference noise at the feature-map level by fusing local feature details and global semantic information. Furthermore, a feature map weighting attention mechanism (FMWAM) is designed to enhance the weak-feature extraction ability of the network through adaptive weighting of the feature maps. Finally, the performance of the proposed method is evaluated on different datasets and in strong-noise environments. Experiments show that across various fault diagnosis scenarios, the proposed method has better noise resistance, higher fault diagnosis accuracy, and fewer parameters than other methods. Full article
(This article belongs to the Special Issue Smart Sensors for Machine Condition Monitoring and Fault Diagnosis)
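The abstract does not spell out the NTDGS schedule, but a common nonlinear alternative to linear ε-decay in DQN-style agents is an exponential schedule, in which exploration falls off quickly early on and flattens near a floor. The sketch below illustrates that general idea; all constants are illustrative assumptions, not values from the paper.

```python
import math
import random

def epsilon(t, eps_start=1.0, eps_end=0.05, decay=2000.0):
    """Exploration rate after t training steps: decays exponentially from
    eps_start toward the floor eps_end, so the agent explores heavily at
    first and increasingly exploits its Q-estimates as training proceeds."""
    return eps_end + (eps_start - eps_end) * math.exp(-t / decay)

def select_action(q_values, t):
    """Epsilon-greedy action selection using the nonlinear schedule."""
    if random.random() < epsilon(t):
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

action = select_action([0.1, 0.9, 0.3], t=100)
```

Compared with a linear ramp, the exponential form concentrates exploration in the early, high-uncertainty phase of training, which is the usual motivation for making the greedy schedule nonlinear in the timestep.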
24 pages, 4961 KB  
Article
U-PKAN: A Dual-Module Kolmogorov–Arnold Network for Agricultural Plant Disease Detection
by Dejun Xi, Baotong Zhang and Yi-Jia Wang
Agriculture 2025, 15(24), 2599; https://doi.org/10.3390/agriculture15242599 - 16 Dec 2025
Abstract
Crop diseases and pests have a significant impact on planting costs and crop yields and, in severe cases, can threaten food security and farmers’ incomes. Currently, most researchers employ various deep learning methods, such as the YOLO series algorithms and U-Net and its variants, for the detection of agricultural plant diseases. However, the existing algorithms suffer from insufficient interpretability and are limited to linear modeling, which can lead to issues such as trust crises in current technologies, restricted applications and difficulties in tracing and correcting errors. To address these issues, a dual-module Kolmogorov–Arnold Network (U-PKAN) is proposed for agricultural plant disease detection in this paper. A KAN encoder–decoder structure is adopted to construct the network. To ensure the network fully extracts features, two different modules, namely Patchembed-KAN (P-KAN) and Decoder-KAN (D-KAN), are designed. To enhance the network’s feature fusion capability, a KAN-based symmetrical structure for skip connections is designed. The proposed method places learnable activation functions on weights, enabling it to achieve higher accuracy with fewer parameters. Moreover, it can reveal the compositional structure and variable dependencies of synthetic datasets through symbolic formulas, thus exhibiting excellent interpretability. A field corn disease image dataset was collected and constructed. Additionally, the performance of the U-PKAN model was verified using the open plant disease dataset PlantDoc and a gear pitting dataset. To better understand the performance differences between different methods, U-PKAN was compared with U-KAN, U-Net, AttUNet, and U-Net++ models for performance benchmarking. IoU and the Dice coefficient were chosen as evaluation metrics. The experimental results demonstrate that the proposed method achieves faster convergence and higher segmentation accuracy. 
Overall, the proposed method demonstrates outstanding performance in aspects such as function approximation, global perception, interpretability and computational efficiency. Full article
25 pages, 3067 KB  
Article
SVR-Based Cryptocurrency Price Prediction Using a Hybrid FISA-Rao and Firefly Algorithm for Feature and Hyperparameter Selection
by Merve Er, Kenan Bayaz and Seniye Ümit Oktay Fırat
Appl. Sci. 2025, 15(24), 13177; https://doi.org/10.3390/app152413177 - 16 Dec 2025
Abstract
Financial forecasting is a challenging task due to the complexity and nonlinear volatility that characterize modern financial markets. Machine learning algorithms are very effective at increasing prediction accuracy, thereby supporting data-driven decision making, optimizing pricing strategies, and improving financial risk management. In particular, combining machine learning techniques with metaheuristic algorithms often leads to significant performance improvements across various domains. This study proposes a hybrid framework for cryptocurrency price prediction, in which Support Vector Regression (SVR) with a radial basis function kernel performs the prediction, while a Firefly algorithm is employed for correlation-based feature selection and hyperparameter tuning. To improve search performance, the parameters of the Firefly algorithm are optimized using the Fully Informed Search Algorithm (FISA), an improved version of the parameterless Rao algorithm. The model is applied separately to hourly data for Bitcoin, Ethereum, Binance, Solana and Ripple. The model’s performance is evaluated by comparison with Gated Recurrent Unit (GRU), Multilayer Perceptron (MLP), and SVR methods using MSE, MAE, and MAPE metrics, along with statistical validation by Wilcoxon’s signed-rank test. The results show that the proposed model achieves superior accuracy and demonstrate the critical importance of feature selection and hyperparameter tuning for accurate predictions in volatile markets. Moreover, customizing both feature sets and model configurations for each cryptocurrency allows the model to capture distinct market characteristics and provides deeper insights into intra-day market dynamics. Full article
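To give a flavour of the metaheuristic component, here is a bare-bones firefly algorithm minimizing a one-dimensional test function: each firefly moves toward every brighter (lower-cost) firefly with an attractiveness that decays with distance, plus a shrinking random walk. The hyperparameters and objective are illustrative only; the paper applies the algorithm to SVR feature and hyperparameter selection, which this sketch does not reproduce.

```python
import math
import random

def firefly_minimize(f, lo, hi, n_fireflies=15, n_iter=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimize f on [lo, hi] with a 1-D firefly algorithm."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_fireflies)]
    for it in range(n_iter):
        step = alpha * (hi - lo) * (0.97 ** it)   # shrink random step over time
        costs = [f(x) for x in xs]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if costs[j] < costs[i]:           # j is "brighter": move i toward j
                    r = abs(xs[i] - xs[j])
                    beta = beta0 * math.exp(-gamma * r * r)  # attractiveness
                    xs[i] += beta * (xs[j] - xs[i]) + step * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)
            costs[i] = f(xs[i])
    best = min(xs, key=f)
    return best, f(best)

# Toy objective with a known minimum at x = 2.
best_x, best_cost = firefly_minimize(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```

In the hybrid framework described above, the "position" of a firefly would instead encode an SVR configuration (selected features plus kernel and regularization hyperparameters), and brightness would be cross-validated prediction error.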
33 pages, 5511 KB  
Article
Physics-Informed Transfer Learning for Predicting Engine Oil Degradation and RUL Across Heterogeneous Heavy-Duty Equipment Fleets
by Mohamed G. A. Nassef, Omar Wael, Youssef H. Elkady, Habiba Elshazly, Jahy Ossama, Sherwet Amin, Dina ElGayar, Florian Pape and Islam Ali
Lubricants 2025, 13(12), 545; https://doi.org/10.3390/lubricants13120545 - 16 Dec 2025
Abstract
Predicting the Remaining Useful Life (RUL) of engine oil is critical for proactive maintenance and fleet reliability. However, irregular and noisy single-point sampling presents challenges for conventional prognostic models. To address this, a hierarchical physics-informed transfer learning (TL) framework is proposed that reconstructs nonlinear degradation trajectories directly from non-time-series data. The method uniquely integrates Arrhenius-type oxidation kinetics and thermochemical laws within a multi-level TL architecture, coupling fleet-level generalization with engine-specific adaptation. Unlike conventional approaches, this framework embeds physical priors directly into the transfer process, ensuring thermodynamically consistent predictions across different equipment. An integrated uncertainty quantification module provides calibrated confidence intervals for RUL estimation. Validation was conducted on 1760 oil samples from dump trucks, dozers, shovels, and wheel loaders operating under real mining conditions. The framework achieved an average R2 of 0.979 and RMSE of 10.185. This represents a 69% reduction in prediction error and a 75% narrowing of confidence intervals for RUL estimates compared to baseline models. TL outperformed the asset-specific model, reducing RMSE by up to 3 times across all equipment. Overall, this work introduces a new direction for physics-informed transfer learning, enabling accurate and uncertainty-aware RUL prediction from uncontrolled industrial data and bridging the gap between idealized degradation studies and real-world maintenance practices. Full article
(This article belongs to the Special Issue Intelligent Algorithms for Triboinformatics)
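The Arrhenius-type oxidation prior mentioned in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's model: the pre-exponential factor, activation energy, and condemning limit are assumed placeholder values, and the degradation rate is held constant at a single operating temperature.

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K), universal gas constant

def arrhenius_rate(temp_k, pre_exp=1e5, activation_energy=60e3):
    """Oxidation rate constant k(T) = A * exp(-Ea / (R * T)) at absolute
    temperature temp_k (Kelvin); parameter values here are placeholders."""
    return pre_exp * np.exp(-activation_energy / (R_GAS * temp_k))

def rul_hours(current_level, limit, temp_k, **kw):
    """Hours until a degradation marker (e.g. an oxidation index) reaches its
    condemning limit, assuming a constant rate at the given temperature."""
    k = arrhenius_rate(temp_k, **kw)
    return max(limit - current_level, 0.0) / k

# Hotter sump temperature -> faster oxidation -> shorter remaining useful life.
rul_cool = rul_hours(current_level=10.0, limit=25.0, temp_k=363.15)  # 90 C
rul_hot = rul_hours(current_level=10.0, limit=25.0, temp_k=383.15)   # 110 C
```

Embedding a prior like this into the transfer process is what keeps predictions thermodynamically consistent: no learned correction can make the oil degrade more slowly at a higher temperature.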
26 pages, 3813 KB  
Article
Deep Learning for the Greenium: Evidence from Green Bonds, Risk Disclosures, and Market Sentiment
by Meryem Raissi, Abdelhadi Darkaoui, Souhail Admi and Hind Bouzid
J. Risk Financial Manag. 2025, 18(12), 717; https://doi.org/10.3390/jrfm18120717 - 16 Dec 2025
Abstract
This study examines how physical and transition climate risks affect the greenium, assuming that implied volatility serves as a proxy for investor sentiment generated by these risks. A significant empirical contribution is the application of a Gated Recurrent Unit (GRU) deep learning model to daily data from January 2020 to June 2025 with a rigorous train–test split, which avoids the drawbacks of full-sample estimation and ensures strong out-of-sample generalizability. Our findings show that adding the interaction between these climate risks and the sentiment proxy slightly increases predictive power. The GRU model outperforms random forest and linear regression benchmarks in generalizability, but it remains sensitive to the choice of data split and to hyperparameter tuning. These results highlight the value of complex, non-linear models for risk forecasting and portfolio allocation by investors and risk managers, as well as the need for regular climate disclosure by policymakers to reduce information asymmetry. The GRU’s stringent validation framework directly enables more reliable pricing and exposure management. Full article
(This article belongs to the Topic Sustainable and Green Finance)
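For readers unfamiliar with the recurrent machinery involved, one GRU update step can be written out directly. This is a minimal, self-contained sketch in plain NumPy rather than a deep learning framework, with randomly initialized weights and hypothetical input features (spread, climate-risk index, implied volatility), so it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: gates decide how much of the previous hidden state
    to keep versus overwrite with the new candidate state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)             # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)             # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1.0 - z) * h + z * h_cand

n_in, n_hid = 3, 8  # e.g. [greenium spread, climate-risk index, implied vol]
params = tuple(rng.standard_normal(s) * 0.1 for s in
               [(n_in, n_hid), (n_hid, n_hid), (n_hid,)] * 3)
h = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):  # five daily observations
    h = gru_step(x, h, params)
```

The gating is what lets the model retain slow-moving climate-risk information across many daily observations while still reacting to sentiment shocks.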
Show Figures

Figure 1

28 pages, 1813 KB  
Article
Econometric and Python-Based Forecasting Tools for Global Market Price Prediction in the Context of Economic Security
by Dmytro Zherlitsyn, Volodymyr Kravchenko, Oleksiy Mints, Oleh Kolodiziev, Olena Khadzhynova and Oleksandr Shchepka
Econometrics 2025, 13(4), 52; https://doi.org/10.3390/econometrics13040052 - 15 Dec 2025
Abstract
Debate persists over whether classical econometric or modern machine learning (ML) approaches provide superior forecasts for volatile monthly price series. Despite extensive research, no systematic cross-domain comparison exists to guide model selection across diverse asset types. In this study, we compare traditional econometric models with classical ML baselines and hybrid approaches across financial assets, futures, commodities, and market index domains. Universal Python-based forecasting tools include month-end preprocessing, automated ARIMA order selection, Fourier terms for seasonality, circular terms, and ML frameworks for forecasting and residual corrections. Performance is assessed via anchored rolling-origin backtests with expanding windows and a fixed 12-month horizon. MAPE comparisons show that ARIMA-based models provide stable, transparent benchmarks but often fail to capture the nonlinear structure of high-volatility series. ML tools can enhance accuracy in these cases, but they are susceptible to instability and overfitting on short monthly histories. The most accurate and reliable forecasts come from models that combine ARIMA-based methods with Fourier terms, lightly enhanced with machine-learning residual correction. ARIMA-based approaches achieve about 30% lower forecast errors than pure ML (18.5% vs. 26.2% average MAPE and 11.6% vs. 16.8% median MAPE), with hybrid models offering only marginal gains (0.1 pp median improvement) at significantly higher computational cost. This work demonstrates the domain-specific nature of model performance, clarifying when hybridization is effective and providing reproducible Python pipelines suited for economic security applications. Full article
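The hybrid idea (seasonal Fourier regressors on top of a baseline model, followed by an ML residual-correction stage) can be illustrated on synthetic monthly data. This is a simplified sketch, not the paper's pipeline: a least-squares trend stands in for the ARIMA baseline, and a lag-1 autoregression on the residuals stands in for the ML correction.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(120)  # ten years of monthly observations

def fourier_terms(t, period=12, n_harmonics=2):
    """sin/cos pairs at the seasonal period and its harmonics."""
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Synthetic price series: level + trend + annual seasonality + noise.
y = 10.0 + 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

# Stage 1: baseline fit on trend + Fourier regressors (ordinary least squares).
X = np.column_stack([np.ones_like(t, dtype=float), t, fourier_terms(t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline = X @ beta

# Stage 2: residual correction, here a lag-1 autoregression on the residuals.
resid = y - baseline
phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
corrected = baseline[1:] + phi * resid[:-1]

def mape(actual, pred):
    """Mean absolute percentage error, as used for the comparisons above."""
    return np.mean(np.abs((actual - pred) / actual)) * 100

m_base = mape(y[1:], baseline[1:])
m_hyb = mape(y[1:], corrected)
```

The marginal in-sample gain from Stage 2 mirrors the paper's finding: when the baseline already captures trend and seasonality, residual correction adds little but costs extra fitting and tuning.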