Search Results (1,117)

Search Parameters:
Keywords = gaussian process regression

21 pages, 6204 KB  
Article
Event-Triggered Data-Driven Robust Model Predictive Control for an Omni-Directional Mobile Manipulator
by Pu Guo, Chunli Li, Binjie Wang and Chao Ren
Actuators 2026, 15(4), 185; https://doi.org/10.3390/act15040185 - 27 Mar 2026
Abstract
Omni-directional mobile manipulators (OMMs) are inherently nonlinear, strongly coupled, multiple-input multiple-output systems, posing significant challenges to developing accurate mechanistic models. Koopman operator theory offers a data-driven modeling framework that leverages input–output data to characterize system dynamics, but the resulting models often contain errors. In this paper, an event-triggered data-driven linear model predictive control (MPC) framework is proposed for an OMM, without using any prior knowledge of the robot system. A finite-dimensional approximate linear Koopman model is established for the OMM using input–output data. Gaussian process regression (GPR) is employed to estimate the model’s errors, while an extended state observer (ESO) is designed to estimate external disturbances. Since the introduction of GPR increases the computational burden, an event-triggered (ET) mechanism is introduced to reduce unnecessary controller recomputations and the controller update frequency. Finally, comparative experiments are carried out to verify the effectiveness and performance superiority of the proposed control scheme.
(This article belongs to the Section Control Systems)
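The GPR-based error estimation described above can be illustrated with a minimal scikit-learn sketch: fit a crude linear (Koopman-like) nominal model, then train a GP on its residuals so the combined predictor corrects the modeling error. The data and model structure here are invented for illustration and are not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
# Toy dynamics: a linear part plus an unmodeled nonlinearity.
y = 1.5 * X[:, 0] + 0.5 * np.sin(3 * X[:, 0])

nominal = LinearRegression().fit(X, y)      # stand-in for the linear Koopman model
residual = y - nominal.predict(X)           # modeling error the GP will learn

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-3), normalize_y=True)
gp.fit(X, residual)

y_hat = nominal.predict(X) + gp.predict(X)  # nominal prediction + GP error correction
rmse_nominal = np.sqrt(np.mean((y - nominal.predict(X)) ** 2))
rmse_corrected = np.sqrt(np.mean((y - y_hat) ** 2))
```

The GP correction should recover most of the structure the linear model misses, which is the same division of labor the paper assigns to GPR (model error) versus the ESO (external disturbance).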
22 pages, 2650 KB  
Article
Design and Implementation of an Eyewear-Integrated Infrared Eye-Tracking System
by Carlo Pezzoli, Marco Brando Mario Paracchini, Daniele Maria Crafa, Marco Carminati, Luca Merigo, Tommaso Ongarello and Marco Marcon
Sensors 2026, 26(7), 2065; https://doi.org/10.3390/s26072065 - 26 Mar 2026
Viewed by 46
Abstract
Eye-tracking is a key enabling technology for smart eyewear, supporting hands-free interaction, accessibility, and context-aware human–machine interfaces under strict constraints on size, power consumption, and computational complexity. While camera-based solutions provide high accuracy, their integration into lightweight and low-power wearable platforms remains challenging. This paper is a feasibility study for the design, simulation, and experimental evaluation of a photosensor oculography (PSOG) eye-tracking system that is fully integrated into an eyewear frame, based on near-infrared (NIR) emitters and photodiodes. The proposed approach combines simulation-driven optimization of the optical constellation, a multi-frequency modulation and demodulation scheme enabling parallel source discrimination and robust ambient-light rejection, and a resource-efficient signal acquisition pipeline suitable for embedded implementation. Eye rotations in azimuth and elevation are inferred from differential reflectance patterns of ocular regions (sclera, iris, and pupil) using lightweight regression techniques, including shallow neural networks and Gaussian process regression, selected to balance estimation accuracy with computational and power constraints. System performance is evaluated using a controllable artificial-eye platform under defined geometric and illumination conditions, enabling repeatable assessment of gaze-estimation accuracy and algorithmic behavior. Sub-degree errors are achieved in this controlled setting, demonstrating the feasibility and potential effectiveness of the proposed architecture. Practical considerations for translation to real-world smart eyewear, including human-subject validation, anatomical variability, calibration strategies, and embedded deployment, are discussed and identified as directions for future work. By detailing the optical design methodology, modulation strategy, and algorithmic trade-offs, this work clarifies the distinct contributions of the proposed PSOG system relative to existing frame-integrated and camera-free eye-tracking approaches, and provides a foundation for further development toward wearable and augmented-reality applications.

25 pages, 2754 KB  
Article
GPCN: A Decomposition-Based Hybrid Model for a Lithium-Ion Capacity Forecasting and RUL Inference Framework
by Li Wang, Guosheng Cai, Yuan Gao and Caoxin Shen
World Electr. Veh. J. 2026, 17(4), 171; https://doi.org/10.3390/wevj17040171 - 25 Mar 2026
Viewed by 120
Abstract
To address the non-stationary fluctuations caused by capacity regeneration and measurement noise during lithium-ion battery aging, this paper proposes a decomposition-guided heterogeneous prognostic framework for capacity forecasting and remaining useful life (RUL) inference. First, the raw capacity sequence is decomposed by CEEMDAN to separate the long-term degradation trend from short-term regeneration-related disturbances across different time scales. Next, a temporal convolutional network (TCN) is employed to model the trend component, while Gaussian process regression (GPR) is used to characterize local fluctuation behavior and provide predictive uncertainty. Finally, Dempster–Shafer (D-S) evidence theory is introduced to fuse multi-source prognostic outputs, yielding a more robust capacity trajectory for end-of-life (EOL) threshold localization and RUL estimation. Experiments are conducted on the lithium-ion battery dataset released by NASA Ames. Across the four tested battery cells, the proposed method achieves RMSE values of 0.0257–0.0445 Ah and EOL cycle deviations of 1.17–5.53 cycles, while yielding a more balanced trade-off than representative baselines between point-wise prediction accuracy and threshold-crossing stability. Moreover, under direct multi-step forecasting, the prediction error increases with the forecasting horizon, which is consistent with the expected characteristics of long-horizon capacity extrapolation. Overall, this work provides an implementable and interpretable prognostic framework for battery health assessment in the presence of capacity regeneration phenomena.
(This article belongs to the Section Storage Systems)
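The predictive uncertainty that makes GPR attractive for capacity forecasting is available directly in scikit-learn via `return_std=True`. A minimal sketch on an invented linear fade curve (not the NASA Ames data) shows the characteristic behavior: uncertainty is small inside the training range and grows when extrapolating beyond it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
cycles = np.linspace(0, 100, 60)[:, None]
# Toy capacity fade: linear degradation plus measurement noise.
capacity = 2.0 - 0.01 * cycles[:, 0] + 0.02 * rng.standard_normal(60)

kernel = RBF(length_scale=20.0, length_scale_bounds=(5.0, 50.0)) + WhiteKernel(1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(cycles, capacity)

# Predict inside the data (cycle 50) and far beyond it (cycle 200).
mean, std = gp.predict(np.array([[50.0], [200.0]]), return_std=True)
```

The growing standard deviation at long horizons mirrors the paper's observation that multi-step forecast error increases with the forecasting horizon.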

15 pages, 1555 KB  
Article
Optimization of Cu₂O Nano-Additive-Doped Diesel Engine Performance via Physics-Informed Hybrid GPR Framework
by Recep Cagri Orman
Energies 2026, 19(7), 1603; https://doi.org/10.3390/en19071603 - 25 Mar 2026
Viewed by 184
Abstract
In this study, a novel “Physics-Informed Hybrid Machine Learning” framework was developed to model and optimize the complex combustion and carbon-based emission characteristics of Cu₂O nano-additive-doped diesel fuel. To reduce reliance on purely empirical correlations, the proposed framework integrates alterations in fuel physical properties into the prediction loop, thereby enhancing physical consistency and model generalizability. The methodology comprises data pre-processing, modeling via Gaussian Process Regression (GPR) with an Automatic Relevance Determination (ARD) kernel, and multi-objective optimization using NSGA-II. Experimental tests were conducted at a constant engine speed of 2000 rpm under varying load conditions. The developed hybrid model exhibited high predictive accuracy, particularly for performance metrics and gaseous emissions (e.g., R² > 0.95 for BSFC and CO). ARD-based feature importance analysis confirmed that nano-additive dosage plays a critical role in the fine-tuning of emissions. Crucially, the optimization algorithm identified a nano-additive dosage of ~29 ppm and an engine load of 15.5 Nm as the optimal operating point for the simultaneous improvement of performance and carbonaceous emissions. This finding, exploring the unmeasured design space, demonstrates the framework’s capability to discover optimal conditions beyond discrete experimental points.
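An ARD kernel of the kind used above corresponds, in scikit-learn, to an anisotropic RBF kernel with one length scale per input feature: after fitting, relevant inputs end up with short length scales and irrelevant ones with long length scales. The sketch below uses made-up features standing in for quantities like dosage and load; it illustrates the ARD mechanism, not the paper's dataset or model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(150, 3))   # hypothetical inputs, e.g. [dosage, load, speed]
# Feature 0 dominates, feature 1 has a weak effect, feature 2 is irrelevant.
y = np.sin(6 * X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(150)

# One length scale per input dimension = ARD.
ard_kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(1e-3)
gp = GaussianProcessRegressor(kernel=ard_kernel, normalize_y=True).fit(X, y)

scales = gp.kernel_.k1.length_scale    # fitted per-feature length scales
```

Ranking features by inverse length scale is the usual ARD-based importance readout: the strongly driving feature gets the shortest scale.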

29 pages, 7114 KB  
Article
Modeling and Experimental Study of Fuzzy Control System for Operating Parameters of Grain Combine Harvester Cleaning Device
by Jing Pang, Yahao Tian, Zhanchao Dai, Zhe Du, Fengkui Dang, Xinqi Chen and Xinping Li
Appl. Sci. 2026, 16(7), 3137; https://doi.org/10.3390/app16073137 - 24 Mar 2026
Viewed by 46
Abstract
The cleaning unit is a key functional component of grain combine harvesters, yet its operating parameters are still predominantly adjusted according to operator experience, resulting in limited adaptability to fluctuating working conditions. To enhance the intelligence and stability of the cleaning process, this study develops a fuzzy control approach supported by data-driven performance modeling. Based on multi-condition bench experiments, feeding rate, fan speed, cleaning sieve vibration frequency, and sieve opening were selected as input variables. Gaussian Process Regression (GPR) models were established to describe the nonlinear relationships between operating parameters and cleaning loss rate and impurity rate, and impurity rate was inferred online to compensate for the absence of a reliable sensor. Taking feeding rate variation as the primary disturbance, a dual-input fuzzy control strategy was designed using loss rate monitoring and model-predicted impurity rate as feedback signals. Simulation and bench test results show that, under small and moderate load disturbances (±20% and ±35%), the proposed method reduces either impurity rate or cleaning loss rate through coordinated parameter adjustment. Under large disturbances (±50%), performance deterioration cannot be fully eliminated, but its extent is alleviated compared with open-loop conditions.

17 pages, 3224 KB  
Article
Research on Surface Acoustic Wave Yarn Tension Sensor for Spinning Machines: Structural Optimization, Sensitivity Enhancement and Temperature Compensation
by Hao Chen, Yang Feng, Shuai Zhu, Ben Wang, Bingkun Zhang, Hua Xia, Xulehan Yu and Wanqing Chen
Textiles 2026, 6(1), 37; https://doi.org/10.3390/textiles6010037 - 23 Mar 2026
Viewed by 109
Abstract
This paper presents a yarn tension sensor based on Surface Acoustic Waves (SAW). To enhance the detection accuracy of the sensor, an improved beam structure is designed for tension measurement, along with intelligent algorithms for temperature compensation. Firstly, regarding the sensor structure, a simply supported beam with a hyperbolic surface is designed to achieve stress concentration by reducing the section modulus at the beam’s midpoint. Secondly, by incorporating an unbalanced split-electrode Interdigital Transducer (IDT) design, the sensor effectively suppresses signal sidelobe interference and significantly improves the structure’s tension sensitivity. Finally, in terms of signal processing, to eliminate the influence of environmental temperature fluctuations on measurements, a temperature-compensation algorithm based on a Bayesian Optimization Least Squares Support Vector Machine (BO-LSSVM) with Gaussian process regression is proposed. Experimental results show that the tension sensitivity of the improved structure was 8.2% higher than that of the doubly clamped beam and 12.7% higher than that of the cantilever beam. For temperature compensation, the BO-LSSVM model reduced the Mean Relative Error (MRE) by 5.67 percentage points relative to raw data and by 2.04 percentage points relative to the fixed-parameter LSSVM model, lowering the temperature sensitivity coefficient from 4.09 × 10⁻³/°C to 0.41 × 10⁻³/°C.

29 pages, 10106 KB  
Article
Polynomial Chaos Expanded Gaussian Process
by Dominik Polke, Tim Kösters, Elmar Ahle and Dirk Söffker
Mach. Learn. Knowl. Extr. 2026, 8(3), 78; https://doi.org/10.3390/make8030078 - 19 Mar 2026
Viewed by 180
Abstract
In complex and unknown processes, global models are fitted over the entire input domain but often tend to perform poorly whenever the response surface exhibits non-stationary behavior and varying smoothness. A common approach is to use local models, which requires partitioning the input domain into subdomains and training multiple models, thereby adding significant complexity. Recognizing this limitation, this study addresses the need for models that represent the input–output relationship consistently over the full domain while still adapting to local variations in the response. It introduces a novel machine learning approach: the Polynomial Chaos Expanded Gaussian Process (PCEGP), leveraging polynomial chaos expansion to calculate input-dependent hyperparameters of the Gaussian process (GP). This provides a mathematically interpretable approach that incorporates non-stationary covariance functions and heteroscedastic noise estimation to generate locally adapted models. The model performance is compared to different algorithms in benchmark tests for regression tasks. The results demonstrate low prediction errors of the PCEGP, highlighting model performance that is often competitive with or better than previous methods. A key advantage of the presented model is its interpretable hyperparameters along with training and prediction runtimes comparable to those of a standard GP.
(This article belongs to the Section Learning)

25 pages, 2146 KB  
Article
Machine Learning-Based Predictive Modelling of Key Operating Parameters in an Industrial-Scale Wet Vertical Stirred Media Mill
by Okay Altun, Aydın Kaya, Ali Seydi Keçeli, Ece Uzun, Meltem Güler and Nurettin Alper Toprak
Minerals 2026, 16(3), 311; https://doi.org/10.3390/min16030311 - 16 Mar 2026
Viewed by 187
Abstract
To the authors’ knowledge, this is the first industrial machine learning (ML) study focused on wet vertical stirred media milling. The study develops and validates ML models to predict the key operating parameters, namely mill discharge product size, mill feed slurry flow rate, mill power draw, and the specific energy consumption of an industrial wet vertical stirred media mill operating at a copper plant. A physics-guided workflow was adopted, combining relief coefficient-based variable screening with fundamental stirred milling principles to define 20 different structured model input scenarios. Within this scope, six regression approaches, linear regression (LR), fine tree regression (FTR), support vector regression (SVR), random forest regression (RFR), artificial neural network regression (ANN), and Gaussian process regression (GPR), were trained and validated using plant sensor data and evaluated using R² and RMSE. Overall performance was reasonable, with GPR providing the highest predictive accuracy, followed by RFR/ANN, while LR, SVR, and FTR performed worse. The potential benefit of feed size was also assessed conceptually through an upper-bound sensitivity analysis, representing a best-case scenario where an online feed size measurement would be available. Because the feed size descriptor (F80) was not independently measured but derived from an energy–size relationship, the associated accuracy gains are reported as theoretical upper-bound indications rather than independent predictive capability. Overall, the findings support ML-based decision support in stirred milling operations and motivate future work using independently measured feed size (or reliable proxy sensing).
(This article belongs to the Collection Advances in Comminution: From Crushing to Grinding Optimization)

18 pages, 2647 KB  
Article
High-Precision Aeromagnetic Compensation Method Under the Influence of the Geomagnetic Field
by You Li, Guochao Wang, Qi Han and Qiong Li
Sensors 2026, 26(6), 1867; https://doi.org/10.3390/s26061867 - 16 Mar 2026
Viewed by 212
Abstract
Aeromagnetic surveys play an important role in geophysical exploration and many other fields. In many applications, magnetometers are installed aboard an aircraft to survey large areas. Due to its composition, an aircraft has its own magnetic field, which degrades the reliability of the measurements, and thus a technique that reduces the effect of magnetic interference, known as aeromagnetic compensation, is required. Commonly, based on a figure-of-merit (FOM) flight, this issue is solved as a linear regression problem. However, the influence of the geomagnetic field, which refers to the magnetic interference introduced by the non-uniform magnetic field in the region, creates accuracy problems when estimating the model coefficients. The analysis in this study indicates that the geomagnetic field can be obtained by a data processing method based on Gaussian process regression (GPR) combined with the measurement process. Accordingly, we propose a high-precision compensation method, designated the Geomagnetic Field-Based (GF-Based) method, which isolates geomagnetic influence to enhance calibration fidelity. This method restricts the impact of the geomagnetic field and improves the precision of the calibration. Compared with existing methods that consider the geomagnetic field, the proposed method improves the improvement ratio (IR), which is verified by a set of airborne experiments.

18 pages, 3814 KB  
Article
A Theory-Guided Machine Learning and Molecular Dynamics Approach for Characterizing Fast-Curing Polyurethane Systems
by Luohaoran Wang, Jacob Harris, Steven Mamolo, Sangharsha Gharat, Ali Zolali, Alan Taub and Mihaela Banu
Polymers 2026, 18(6), 679; https://doi.org/10.3390/polym18060679 - 11 Mar 2026
Viewed by 381
Abstract
Fast-curing polyurethane (PU) systems are attractive for high-throughput manufacturing, but quantifying cure kinetics, gelation, and the cure-dependent glass transition temperature (Tg) is difficult, especially at a low degree of cure (DoC). Here, a fast-reacting BASF PU formulation was studied using non-isothermal differential scanning calorimetry (DSC) at multiple heating rates, rheometry at 50 °C, and molecular dynamics (MD) simulations to extend the Tg(α) relationship into the low-DoC regime. DSC provided reaction enthalpy and conversion histories, and Kamal–Sourour (KS) parameters were identified by robust nonlinear fitting, reproducing conversion and curing rate profiles (R² > 0.99 and > 0.95, respectively). Rheology indicated gelation between 475 and 625 s (DoC ≈ 0.53), and DSC-based Tg at the uncured, gelation, and fully cured states established the experimental Tg trend. MD (LAMMPS) with topological crosslinking and NPT thermal scans extracted Tg from density–temperature slopes at selected DoC points. Experimental and MD Tg data were fused with Gaussian process regression constrained by the DiBenedetto relationship (5-fold cross-validation), giving λ ≈ 0.29 and confidence intervals. This framework links kinetics, gelation, and Tg evolution for fast-curing PU and identifies the low-DoC region as the main source of uncertainty.
(This article belongs to the Section Polymer Analysis and Characterization)

20 pages, 3087 KB  
Article
Classification and Prediction of Average Current in High-Power Semiconductor Devices: A Machine Learning Framework
by Fawad Ahmad, Luis Vaccaro, Armel Asongu Nkembi, Mario Marchesoni and Federico Portesine
Electronics 2026, 15(6), 1149; https://doi.org/10.3390/electronics15061149 - 10 Mar 2026
Viewed by 192
Abstract
The applications of machine learning (ML) in power electronics are expanding with time, providing effective tools that reduce design complexity and enhance predictive accuracy. In high-power semiconductor devices, such as thyristors and high-power diodes, electrical parameters may directly influence electro-thermal behavior, reliability, and overall device performance. Consequently, accurate prediction and classification of average current are critical to ensure optimal device selection, optimize design, and assess performance. In this article, a comprehensive dataset based on data from industrial thyristors capturing electrical and structural parameters relevant to current handling capability is utilized to classify and predict the average current of devices. Additionally, Shapley additive explanation (SHAP) analysis has been performed, highlighting the importance of crucial parameters and identifying the impact of each parameter on model output. Moreover, several ML models, including artificial neural networks (ANNs), support vector machines (SVMs), ensembles, and Gaussian process regression (GPR) are implemented and then compared to assess their performance. The proposed methodology provides manufacturers and designers with data-driven design tools that enhance reliability assessments and facilitate optimized device selection for high-power applications.
(This article belongs to the Section Semiconductor Devices)

22 pages, 4084 KB  
Article
Multi-Objective Optimization of Surface Roughness and Material Removal Rate in Ultrasonic Vibration-Assisted CBN Grinding of External Cylindrical Surfaces
by Toan-Thang Ha, Anh-Tung Luu and Ngoc-Pi Vu
Coatings 2026, 16(3), 333; https://doi.org/10.3390/coatings16030333 - 8 Mar 2026
Viewed by 319
Abstract
Ultrasonic vibration-assisted grinding using cubic boron nitride (CBN) wheels has emerged as an effective approach for improving surface integrity and machining efficiency in hard-to-machine materials. However, achieving a desirable balance between surface roughness and material removal rate remains a critical challenge due to their inherently conflicting nature. In this study, a multi-objective optimization framework is proposed to simultaneously minimize surface roughness (Ra) and maximize material removal rate (MRR) in external cylindrical CBN grinding performed on a computer numerical control (CNC) milling machine under ultrasonic vibration assistance. Gaussian process regression models were first developed to accurately represent the nonlinear relationships between machining parameters and the target responses. These surrogate models were subsequently integrated with the non-dominated sorting genetic algorithm II (NSGA-II) to generate a set of Pareto-optimal solutions. The convergence behavior of the optimization process was evaluated using the hypervolume indicator, confirming fast and stable convergence. The resulting Pareto front clearly illustrates the trade-off between Ra and MRR, and a knee point solution was identified as a practical compromise for industrial application. The optimized results demonstrate that ultrasonic vibration-assisted CBN grinding can significantly enhance machining performance while maintaining acceptable surface quality. The proposed methodology provides an effective decision-support tool for multi-objective process optimization in advanced grinding applications. Full article
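The surrogate-plus-optimizer pattern described above can be sketched compactly: fit one GPR surrogate per objective, evaluate both on a candidate set, and keep the non-dominated points. A simple brute-force non-dominated filter stands in for NSGA-II here; the objectives, parameter names, and data are all invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(80, 2))     # hypothetical [feed rate, depth of cut]
ra = 0.2 + 0.8 * X[:, 0] ** 2           # toy roughness: worse at high feed
mrr = X[:, 0] * (0.5 + 0.5 * X[:, 1])   # toy removal rate: better at high feed

kern = RBF([0.3, 0.3]) + WhiteKernel(1e-4)
gp_ra = GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(X, ra)
gp_mrr = GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(X, mrr)

cand = rng.uniform(0, 1, size=(500, 2))
f1 = gp_ra.predict(cand)                # minimize Ra
f2 = -gp_mrr.predict(cand)              # maximize MRR -> minimize its negative

def pareto_mask(F):
    """True for rows not dominated by any other row (minimization in all columns)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

front = cand[pareto_mask(np.column_stack([f1, f2]))]
```

NSGA-II does the same dominance bookkeeping with crossover and mutation, which scales far better than grid scans in higher-dimensional parameter spaces.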

17 pages, 2179 KB  
Article
Machine Learning-Assisted Analysis of Fracture Energy in Externally Bonded Reinforcement on Groove Bond Strength Prediction
by Bahareh Mehdizadeh, Pouyan Fakharian, Younes Nouri, Mohammad Afrazi and Bijan Samali
Buildings 2026, 16(5), 1070; https://doi.org/10.3390/buildings16051070 - 8 Mar 2026
Viewed by 191
Abstract
Established models for predicting the tensile capacity of a connection invariably account for the bond behavior between CFRP layers and concrete. In structures reinforced with CFRP, the prediction of the bond force between concrete and CFRP is essential, as the connection must be designed to withstand the required tensile capacity. An underestimation can lead to inefficient design, while an overestimation risks premature debonding failure, potentially compromising structural safety and serviceability. In recent applications, the bond force between concrete and CFRP has been increased through the use of the Externally Bonded Reinforcement on Groove (EBROG) method, in which CFRP layers are placed into grooves to enhance the interaction among the adhesive, concrete, and CFRP. However, due to the structural complexity introduced by the grooved interface, accurate prediction of its bond strength remains challenging, and conventional analytical models may not fully capture the underlying nonlinear interactions. This study develops a machine learning (ML) framework to predict the bond strength of the EBROG technique. Four ML models, Support Vector Machine (SVM), Gaussian Process Regression (GPR), Decision Tree, and XGBoost, were implemented, and their hyperparameters were optimized via Bayesian optimization. The models were evaluated using multiple statistical metrics, with the XGBoost algorithm demonstrating superior predictive performance, achieving an R² of 0.987 and an RMSE of 0.522 kN. This represents an improvement of approximately 5.6% in R² and a reduction of over 53% in RMSE compared to the existing analytical model. SHAP analysis provided interpretable, data-driven insights, revealing that fracture energy is the predominant factor governing bond strength and elucidating nonlinear interactions between key design parameters. This ML-fracture mechanics framework not only offers superior prediction but also advances the mechanistic understanding of EBROG bond behavior.
(This article belongs to the Section Building Structures)

22 pages, 8037 KB  
Article
A Deep Learning-Driven Spatio-Temporal Framework for Timely Corn Yield Estimation Across Multiple Remote Sensing Scenarios
by Xiaoyu Zhou, Yaoshuai Dang, Jinling Song, Zhiqiang Xiao and Hua Yang
Remote Sens. 2026, 18(5), 743; https://doi.org/10.3390/rs18050743 - 28 Feb 2026
Viewed by 320
Abstract
Crop yield estimation, particularly early-season yield prediction, is highly important for global food security and disaster mitigation. In this study, we utilized deep learning models combined with remote sensing data to develop in-season crop yield estimation models, enabling immediate yield prediction. We employed a convolutional neural network (CNN) for spatial feature extraction and a long short-term memory network (LSTM) for temporal patterns, complemented by Gaussian process regression (GP) that introduced geographical coordinates. Three groups of in-season yield prediction experiments were designed, utilizing four-phase, two-phase, and single-phase data, respectively. The results indicated that under the two-phase training scheme, the LSTM_GP model achieved the highest performance in the sixth period, with an R² value of 0.61 and a root mean square error (RMSE) of 983.38 kg/ha. When trained on single-phase data at the twelfth phase (approximately mid-to-late July), the LSTM_GP model also performed best, attaining an R² value of 0.62 and an RMSE of 969.06 kg/ha. The single-phase prediction model outperformed time-series models in yield prediction accuracy. The period from mid-to-late July to early-to-mid August covers critical crop growth stages that are essential for accurate yield prediction. We also found that adding GP can improve the prediction accuracy, especially for LSTM. Moreover, the proposed single-phase prediction model realized reliable crop yield prediction as early as the silking to early grain-filling stage (mid-to-late July), providing a critical lead time of approximately 2–2.5 months before harvest to support pre-harvest agricultural decision-making.

13 pages, 2158 KB  
Article
A Gaussian Process Regression Model for Estimating Pore Volume in the Longmaxi Shale Formation
by Sirong Zhu, Ning Li, Zhiwen Huang, Mingze Sun, Jie Zeng and Wenxi Ren
Processes 2026, 14(5), 798; https://doi.org/10.3390/pr14050798 - 28 Feb 2026
Viewed by 247
Abstract
Shale pore volume is a critical parameter for reservoir evaluation. Accurate and rapid determination of this parameter is essential for identifying sweet spots and performing reliable reserve estimations. Currently, laboratory experiments remain the standard for determining pore volume; however, these methods are typically time-consuming, costly, and labor-intensive. To complement traditional experimental approaches, we developed a Gaussian Process Regression (GPR) model to estimate shale pore volume based on mineralogical compositions. The model is specifically tailored for the Longmaxi shale, utilizing six input features: the contents of Total Organic Carbon (TOC), clay, quartz, feldspar, carbonate, and pyrite. The GPR model achieved a mean absolute percentage error (MAPE) of 9.97% on the testing dataset, while it yielded an MAPE of 17.66% when applied to an additional independent validation set. Finally, a sensitivity analysis using the Shapley additive explanations was conducted to elucidate the influence of mineralogical constituents on shale pore volume.
