Search Results (84)

Search Parameters:
Keywords = kernel ridge regression

28 pages, 6311 KB  
Article
Machine Learning-Assisted Optimisation of the Laser Beam Powder Bed Fusion (PBF-LB) Process Parameters of H13 Tool Steel Fabricated on a Preheated to 350 °C Building Platform
by Katsiaryna Kosarava, Paweł Widomski, Michał Ziętala, Daniel Dobras, Marek Muzyk and Bartłomiej Adam Wysocki
Materials 2026, 19(1), 210; https://doi.org/10.3390/ma19010210 - 5 Jan 2026
Viewed by 480
Abstract
This study presents the first application of Machine Learning (ML) models to optimise Powder Bed Fusion using Laser Beam (PBF-LB) process parameters for H13 steel fabricated on a 350 °C preheated building platform. A total of 189 cylindrical specimens were produced for training and testing the ML models using variable process parameters: laser power (250–350 W), scanning speed (1050–1300 mm/s), and hatch spacing (65–90 μm). Eight ML models were investigated: 1. Support Vector Regression (SVR), 2. Kernel Ridge Regression (KRR), 3. Stochastic Gradient Descent Regressor, 4. Random Forest Regressor (RFR), 5. Extreme Gradient Boosting (XGBoost), 6. Extreme Gradient Boosting with limited depth (XGBoost LD), 7. Extra Trees Regressor (ETR), and 8. Light Gradient Boosting Machine (LightGBM). All models were trained using the Fast Library for Automated Machine Learning & Tuning (FLAML) framework to predict the relative density of the fabricated samples. Among these, the XGBoost model achieved the highest predictive accuracy, with a coefficient of determination R² = 0.977, mean absolute percentage error MAPE = 0.002, and mean absolute error MAE = 0.017. Experimental validation was conducted on 27 newly fabricated samples using ML-predicted process parameters. Relative densities exceeded 99.6% of the theoretical value (7.76 g/cm³) for all models except XGBoost LD and KRR. The lowest MAE = 0.004 and the smallest difference between the ML-predicted and PBF-LB-validated density were obtained for samples made with LightGBM-predicted parameters. Those samples exhibited a hardness of 604 ± 13 HV0.5, which increased to approximately 630 HV0.5 after tempering at 550 °C. The LightGBM-optimised parameters were further applied to fabricate part of a forging die incorporating internal through-cooling channels, demonstrating the efficacy of machine-learning-guided optimisation in achieving dense, defect-free H13 components suitable for industrial applications.
(This article belongs to the Special Issue Multiscale Design and Optimisation for Metal Additive Manufacturing)
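The entry above reports model quality via R², MAPE, and MAE. These are standard regression metrics; a minimal standard-library sketch of how they are computed (the relative-density values below are made up for illustration and are not from the paper's dataset):

```python
def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # Mean absolute percentage error, expressed as a fraction
    # (matching the "MAPE = 0.002" style quoted above).
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical relative-density measurements (% of theoretical density).
y_true = [99.2, 99.5, 99.7, 99.8, 99.6]
y_pred = [99.3, 99.4, 99.7, 99.7, 99.6]
scores = (mae(y_true, y_pred), mape(y_true, y_pred), r2(y_true, y_pred))
```

The same three functions apply to any of the density-prediction models listed above; only the predictions change.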

17 pages, 1810 KB  
Article
Comparative Analysis of Machine Learning and Multi-View Learning for Predicting Peak Penetration Resistance of Spudcans: A Study Using Centrifuge Test Data
by Mingyuan Wang, Xiuqing Yang, Xing Yang, Dong Wang, Wenjing Sun and Huimin Sun
J. Mar. Sci. Eng. 2026, 14(1), 62; https://doi.org/10.3390/jmse14010062 - 29 Dec 2025
Viewed by 146
Abstract
Punch-through accidents pose a significant risk during the positioning of jack-up rigs. To mitigate this hazard, accurate prediction of the peak penetration resistance of spudcan foundations is essential for developing safe operational plans. Advances in artificial intelligence have spurred the widespread application of machine learning (ML) to geotechnical engineering. To evaluate how different algorithmic frameworks predict spudcan peak resistance, this study assesses the feasibility of ML and multi-view learning (MVL) methods using existing centrifuge test data. Six ML models (Random Forest; Support Vector Machines with Gaussian, second-degree, and third-degree polynomial kernels; Multiple Linear Regression; and Neural Networks), alongside a Ridge Regression-based MVL method, are employed. The performance of these models is rigorously assessed through training and testing across various working conditions. The results indicate that well-trained ML and MVL models achieve accurate predictions for both sand-over-clay and three-layer clay strata. For the sand-over-clay stratum, the mean relative error (MRE) across the 58-case dataset is approximately 15%. The Neural Network and the MVL method demonstrate the highest accuracy. This study provides a viable and effective empirical solution for predicting spudcan peak resistance and offers practical guidance for algorithm selection in different stratigraphic conditions, ultimately supporting enhanced safety planning for jack-up rig operations.
(This article belongs to the Section Ocean Engineering)
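The MVL method above builds on ridge regression, which in the one-feature case has a simple closed form. A stdlib sketch under the assumption of a single centered predictor (toy numbers, not the centrifuge dataset):

```python
def ridge_1d(x, y, lam):
    # Closed-form ridge coefficient for one centered feature:
    #   w = sum(x_i * y_i) / (sum(x_i^2) + lambda)
    # lam = 0 recovers ordinary least squares; lam > 0 shrinks the slope.
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    xc = [xi - mx for xi in x]
    yc = [yi - my for yi in y]
    w = sum(a * b for a, b in zip(xc, yc)) / (sum(a * a for a in xc) + lam)
    b = my - w * mx  # intercept recovered after centering
    return w, b

# Toy data lying exactly on y = 2x + 1.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = ridge_1d(x, y, lam=0.5)  # slope shrunk slightly below 2
```

In the multi-feature case the same idea becomes solving (XᵀX + λI)w = Xᵀy, which is what a ridge-based MVL view-combiner ultimately computes per view.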

19 pages, 874 KB  
Article
Methods in PES-Learn: Direct-Fit Machine Learning of Born–Oppenheimer Potential Energy Surfaces
by Ian T. Beck, Justin M. Turney and Henry F. Schaefer
Molecules 2026, 31(1), 100; https://doi.org/10.3390/molecules31010100 - 25 Dec 2025
Viewed by 504
Abstract
The release of PES-Learn version 1.0 as an open-source software package for the automatic construction of machine learning models of semi-global molecular potential energy surfaces (PESs) is presented. Improvements to PES-Learn's interoperability are highlighted, including a new Python API that simplifies workflows for PES construction via interaction with QCSchema input and output infrastructure. In addition, a new machine learning method is introduced to PES-Learn: kernel ridge regression (KRR). The capabilities of KRR are demonstrated through examination of selected semi-global PESs. All machine learning methods available in PES-Learn are benchmarked on benzene and ethanol datasets from the rMD17 database to illustrate PES-Learn's performance. Fitting performance and timings are assessed for both systems. Finally, the ability to predict gradients with neural network models is presented and benchmarked with ethanol and benzene. PES-Learn is an active project and welcomes community suggestions and contributions.
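KRR, the method added to PES-Learn here, fits a kernel model by solving a regularized linear system in the dual. A minimal standard-library sketch on a toy one-dimensional "surface" (illustrative hyperparameters; this is the generic KRR algorithm, not PES-Learn's implementation):

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel between two scalar coordinates.
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, y):
    # Gaussian elimination with partial pivoting for A x = y.
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3, gamma=1.0):
    # Solve (K + lambda * I) alpha = y for the dual coefficients.
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(xs, alpha, x, gamma=1.0):
    # f(x) = sum_i alpha_i * k(x_i, x)
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, xs))

# Toy 1-D "energy curve": fit sin sampled on a grid.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
alpha = krr_fit(xs, ys)
```

With a small λ the fit nearly interpolates the training points; real PES fits use many geometries, vector inputs, and tuned γ and λ.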

27 pages, 2727 KB  
Article
The Module Gradient Descent Algorithm via L2 Regularization for Wavelet Neural Networks
by Khidir Shaib Mohamed, Ibrahim M. A. Suliman, Abdalilah Alhalangy, Alawia Adam, Muntasir Suhail, Habeeb Ibrahim, Mona A. Mohamed, Sofian A. A. Saad and Yousif Shoaib Mohammed
Axioms 2025, 14(12), 899; https://doi.org/10.3390/axioms14120899 - 4 Dec 2025
Viewed by 575
Abstract
Although wavelet neural networks (WNNs) combine the expressive capability of neural models with multiscale localization, there are currently few theoretical guarantees for their training. We investigate the optimization dynamics of gradient descent (GD) with weight decay (L2 regularization) for WNNs. First, in the feature regime, where wavelet atoms are fixed and only the linear head is trained, we demonstrate global linear convergence to the unique ridge solution, with explicit rates controlled by the spectrum of the regularized Gram matrix. Second, for fully trainable WNNs, we establish convergence of GD to stationary points under standard smoothness and boundedness of the wavelet parameters, and demonstrate linear rates in regions satisfying a Polyak–Łojasiewicz (PL) inequality; weight decay enlarges these regions by suppressing flat directions. Third, we characterize the implicit bias in the over-parameterized neural tangent kernel (NTK) regime: GD with L2 converges to the minimum reproducing kernel Hilbert space (RKHS) norm interpolant associated with the WNN kernel. In addition to assessments on synthetic regression, denoising, and ablations across λ and step size, we supplement the theory with practical recommendations on initialization, step-size schedules, and regularization scales. Together, our findings give a principled prescription for dependable training, shed light on when and why L2-regularized GD is stable and fast for WNNs, and have broad applicability to signal processing.
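The feature-regime result above says GD with weight decay on a fixed-feature linear head converges to the unique ridge solution. A toy stdlib sketch with a single fixed feature (illustrative step size and data, not from the paper):

```python
# Gradient descent with weight decay on a 1-D least-squares problem:
#   loss(w) = 0.5 * sum((w * x_i - y_i)^2) + 0.5 * lam * w^2
# The unique minimizer is the ridge solution w* = sum(x*y) / (sum(x^2) + lam).

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]   # exactly y = 2x; ridge shrinks the slope below 2
lam = 1.0

sxx = sum(a * a for a in x)              # 14
sxy = sum(a * b for a, b in zip(x, y))   # 28
w_star = sxy / (sxx + lam)               # closed-form ridge solution

w, step = 0.0, 0.05
for _ in range(500):
    # Gradient of the regularized loss: data term plus weight decay.
    grad = sum((w * a - b) * a for a, b in zip(x, y)) + lam * w
    w -= step * grad
```

Here the GD iterates contract toward w* geometrically (factor |1 − step·(sxx + lam)| = 0.25 per step), a toy instance of the "linear convergence at rates set by the regularized Gram spectrum" statement.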

12 pages, 977 KB  
Article
Simultaneous Detection and Quantification of Age-Dependent Dopamine Release
by Ibrahim Moubarak Nchouwat Ndumgouo, Mohammad Zahir Uddin Chowdhury and Stephanie Schuckers
BioMedInformatics 2025, 5(4), 64; https://doi.org/10.3390/biomedinformatics5040064 - 21 Nov 2025
Viewed by 468
Abstract
Background: Dopamine (DA) is a key biomarker for neurodegenerative diseases such as Parkinson's. However, gaining detailed insight into how DA release in the brain changes with aging remains challenging. Integrating machine learning with DA sensing platforms has proven more effective in tracking age-dependent DA dynamics than using the sensing platforms alone. Method: This study presents a machine learning framework to automatically detect and quantify DA release using the near-infrared catecholamine nanosensors (nIRCats) dataset of acute mouse brain tissue across three age groups (4, 8.5, and 12 weeks), focusing on the dorsolateral (DLS) and dorsomedial striatum (DMS). A total of 251 image frames from the dataset were analyzed to extract features for training a CatBoost regression model. To enhance speed while retaining much of the predictive accuracy, the model was distilled into a kernel ridge regression model. Results: The model achieved a validation Mean Squared Error (MSE) of 0.004 and an R² of 0.79. When the acceptable prediction range was expanded to include values within ±10% of the actual DA release and mouse age, model performance improved to a validation MSE of 0.001 and an R² of 0.97. Conclusions: These results demonstrate that the proposed approach can accurately and automatically predict spatial and age-dependent dopamine dynamics, a crucial requirement for optimizing deep brain stimulation therapies for neurodegenerative disorders such as Parkinson's disease (PD) and depression.

22 pages, 57273 KB  
Article
Adaptive Software-Defined Network Control Using Kernel-Based Reinforcement Learning: An Empirical Study
by Yedil Nurakhov, Abzal Kyzyrkanov, Zhenis Otarbay and Danil Lebedev
Appl. Sci. 2025, 15(23), 12349; https://doi.org/10.3390/app152312349 - 21 Nov 2025
Viewed by 540
Abstract
Software-defined networking (SDN) requires adaptive control strategies to handle dynamic traffic conditions and heterogeneous network environments. Reinforcement learning (RL) has emerged as a promising solution, yet deep RL methods often face instability, non-stationarity, and reproducibility challenges that limit practical deployment. To address these issues, a kernel-based RL framework is introduced, embedding transition dynamics into reproducing kernel Hilbert spaces (RKHS) and combining kernel ridge regression with policy iteration. This approach enables stable value estimation, enhanced sample efficiency, and interpretability, making it suitable for large-scale and evolving SDN scenarios. Experimental evaluation demonstrates consistent convergence and robustness under traffic variability, with cumulative rewards exceeding those of baseline deep RL methods by more than 22%. The findings highlight the potential of kernel-embedded RL as a practical and theoretically grounded solution for adaptive SDN management and contribute to the broader development of intelligent systems in complex environments.

24 pages, 4364 KB  
Article
Determining the Optimal T-Value for the Temperature Scaling Calibration Method Using the Open-Vocabulary Detection Model YOLO-World
by Max Andreas Ingrisch, Rani Marcel Schilling, Ingo Chmielewski and Stefan Twieg
Appl. Sci. 2025, 15(22), 12062; https://doi.org/10.3390/app152212062 - 13 Nov 2025
Cited by 1 | Viewed by 1280
Abstract
Object detection is an important tool in many areas, such as robotics and autonomous driving, where a wide variety of object classes must be detected or interacted with. Models from the field of Open-Vocabulary Detection (OVD) provide a solution here, as they can detect not only base classes but also novel object classes, i.e., classes not seen during training. However, one problem with OVD models is their poor calibration: their predictions are often markedly over- or under-confident. To improve calibration, Temperature Scaling is used in this study. Using YOLO-World, one of the best-performing OVD models, the aim is to determine the optimal T-value for this calibration method. To this end, it is investigated whether there is a correlation between the logit distribution and the optimal T-value, and how this correlation can be modeled. Finally, the influence of Temperature Scaling on the Expected Calibration Error (ECE) and the mean Average Precision (mAP) is analyzed. The results of this study show that similar logit distributions of different datasets result in the same optimal T-values. This correlation was best modeled using Kernel Ridge Regression (KRR) and a Support Vector Machine (SVM). In all cases, the ECE was improved by Temperature Scaling without significantly reducing the mAP.
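Temperature Scaling itself is a one-parameter post-hoc calibration: divide the logits by T before the softmax, which softens (T > 1) or sharpens (T < 1) confidences without changing the predicted class. A stdlib sketch with made-up logits:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: divide logits by T, then normalize.
    # Subtracting the max is a standard numerical-stability trick.
    z = [l / T for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

# Overconfident raw prediction (hypothetical detector logits)...
logits = [8.0, 2.0, 1.0]
p_raw = softmax(logits, T=1.0)
# ...softened by a temperature T > 1 that would be chosen on a validation set.
p_cal = softmax(logits, T=3.0)
```

The ranking of classes is preserved (argmax unchanged), so mAP is largely unaffected while the top-class confidence drops toward a better-calibrated value; the study above is about predicting a good T from the logit distribution instead of fitting it per dataset.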

28 pages, 4706 KB  
Article
Comparative Performance Analysis of Machine Learning-Based Annual and Seasonal Approaches for Power Output Prediction in Combined Cycle Power Plants
by Asiye Aslan and Ali Osman Büyükköse
Energies 2025, 18(19), 5110; https://doi.org/10.3390/en18195110 - 25 Sep 2025
Viewed by 1039
Abstract
This study develops an innovative framework that uses real-time operational data to forecast electrical power output (EPO) in Combined Cycle Power Plants (CCPPs) through a temperature-segmentation-based modeling approach. Instead of the single general prediction model commonly seen in the literature, three separate prediction models were created to explicitly capture the nonlinear effect of ambient temperature (AT) on efficiency (AT < 12 °C, 12 °C ≤ AT < 20 °C, AT ≥ 20 °C). Linear Ridge, Medium Tree, Rational Quadratic Gaussian Process Regression (GPR), Support Vector Machine (SVM) Kernel, and Neural Network methods were applied. The modeling considered AT, relative humidity (RH), atmospheric pressure (AP), and condenser vacuum (V) as variables. The highest performance was achieved with the Rational Quadratic GPR method: the weighted average Mean Absolute Error (MAE) was 2.225 with seasonal segmentation, versus 2.417 for the non-segmented model. Applying the seasonal prediction models reduced the hourly EPO prediction error by 192 kW and brought the predicted power output values to a 99.77% average convergence with the actual values. This demonstrates the contribution of the proposed approach to enhancing operational efficiency.
(This article belongs to the Section F1: Electrical Power System)
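The segmentation idea above amounts to routing each sample to a band-specific model by ambient temperature rather than using one global fit. A sketch of that dispatch logic, with placeholder linear models (the coefficients are hypothetical, not the paper's fitted values):

```python
# Piecewise model selection by ambient temperature (AT), mirroring the
# three bands described above. The band models here are toy linear fits.
MODELS = {
    "cold": lambda at, rh: 480.0 - 0.5 * at + 0.01 * rh,   # AT < 12 C
    "mild": lambda at, rh: 478.0 - 0.8 * at + 0.01 * rh,   # 12 C <= AT < 20 C
    "warm": lambda at, rh: 475.0 - 1.1 * at + 0.01 * rh,   # AT >= 20 C
}

def predict_epo(at, rh):
    # Route the sample to its temperature band's model; each band can then
    # capture a different local slope of the AT-efficiency relationship.
    if at < 12.0:
        band = "cold"
    elif at < 20.0:
        band = "mild"
    else:
        band = "warm"
    return MODELS[band](at, rh)
```

In the study each band model is a full regressor (GPR, SVM, etc.) over AT, RH, AP, and V; the dispatch structure is the same.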

28 pages, 3780 KB  
Article
Machine Learning Prediction Models of Beneficial and Toxicological Effects of Zinc Oxide Nanoparticles in Rat Feed
by Leonid Legashev, Ivan Khokhlov, Irina Bolodurina, Alexander Shukhman and Svetlana Kolesnik
Mach. Learn. Knowl. Extr. 2025, 7(3), 91; https://doi.org/10.3390/make7030091 - 29 Aug 2025
Viewed by 1856
Abstract
Nanoparticles have found widespread application across diverse fields, including agriculture and animal husbandry. However, a persistent challenge in laboratory-based studies involving nanoparticle exposure is the limited availability of experimental data, which constrains the robustness and generalizability of findings. This study presents a comprehensive analysis of the impact of zinc oxide nanoparticles (ZnO NPs) in feed on elemental homeostasis in male Wistar rats. Using correlation-based network analysis, a correlation graph weight of 15.44 and a newly proposed weighted importance score of 1.319 were calculated, indicating that a dose of 3.1 mg/kg represents an optimal balance between efficacy and physiological stability. To address the limited sample size, synthetic data generation was performed using generative adversarial networks, enabling data augmentation while preserving the statistical characteristics of the original dataset. Machine learning models based on fully connected neural networks and kernel ridge regression, enhanced with a custom loss function, were developed and evaluated. These models demonstrated strong predictive performance across a ZnO NP concentration range of 1–150 mg/kg, accurately capturing how essential element, protein, and enzyme levels in blood depend on nanoparticle dosage. Notably, toxic elements and some other elements at ultra-low concentrations exhibited non-random patterns, suggesting potential systemic responses or early indicators of nanoparticle-induced perturbations, as well as a probable inability of the synthetic data to capture the true dynamics. The integration of machine learning with synthetic data expansion provides a promising approach for analyzing complex biological responses in data-scarce experimental settings, contributing to the safer and more effective application of nanoparticles in animal nutrition.

16 pages, 1499 KB  
Article
Predicting Flatfish Growth in Aquaculture Using Bayesian Deep Kernel Machines
by Junhee Kim, Seung-Won Seo, Ho-Jin Jung, Hyun-Seok Jang, Han-Kyu Lim and Seongil Jo
Appl. Sci. 2025, 15(17), 9487; https://doi.org/10.3390/app15179487 - 29 Aug 2025
Viewed by 770
Abstract
Olive flounder (Paralichthys olivaceus) is a key aquaculture species in South Korea, but its production has been challenged by rising mortality under stress from key environmental factors such as water temperature, dissolved oxygen, and feeding conditions. To support adaptive management, this study proposes a Bayesian Deep Kernel Machine Regression (BDKMR) model that integrates Gaussian process regression with neural network-based feature learning. Using longitudinal data from commercial farms, we model fish growth as a function of water temperature, dissolved oxygen, and feed quantity. Model performance is assessed via Leave-One-Out Cross-Validation and compared against kernel ridge regression and Bayesian kernel machine regression. The results show that BDKMR achieves substantially lower prediction errors, indicating superior accuracy and robustness. These findings suggest that BDKMR offers a flexible and effective framework for predictive modeling in aquaculture systems.
(This article belongs to the Section Agricultural Science and Technology)
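Leave-One-Out Cross-Validation, the evaluation protocol named above, refits the model n times, each time holding out one observation and scoring the prediction on it. A generic stdlib sketch, shown with a trivial mean-only baseline standing in for the regression being validated:

```python
def loocv_mse(xs, ys, fit, predict):
    # Leave-one-out cross-validation: hold out each point in turn,
    # fit on the remaining n-1 points, and average the squared errors.
    total = 0.0
    for i in range(len(xs)):
        tr_x = xs[:i] + xs[i + 1:]
        tr_y = ys[:i] + ys[i + 1:]
        model = fit(tr_x, tr_y)
        total += (predict(model, xs[i]) - ys[i]) ** 2
    return total / len(xs)

# Mean-only baseline "model" as a stand-in for BDKMR/KRR in this sketch.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda m, x: m

mse = loocv_mse([1.0, 2.0, 3.0], [10.0, 12.0, 14.0], fit_mean, predict_mean)
```

Swapping in different `fit`/`predict` pairs (KRR, BKMR, BDKMR) and comparing the resulting LOOCV errors is exactly the comparison the abstract describes.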

22 pages, 13770 KB  
Article
Prediction Model of Powdery Mildew Disease Index in Rubber Trees Based on Machine Learning
by Jiazheng Zhu, Xize Huang, Xiaoyu Liang, Meng Wang and Yu Zhang
Plants 2025, 14(15), 2402; https://doi.org/10.3390/plants14152402 - 3 Aug 2025
Viewed by 1239
Abstract
Powdery mildew, caused by Erysiphe quercicola, is one of the primary diseases responsible for the reduction in natural rubber production in China. Its causal agent is a typical airborne pathogen, able to spread via air currents and rapidly escalate into an epidemic under favorable environmental conditions. Accurate prediction and determination of the prevention and control period represent both a critical challenge and a key focus in managing rubber-tree powdery mildew. This study investigates the effects of spore concentration, environmental factors, and infection time on the progression of powdery mildew in rubber trees. Using six distinct machine learning methods, with the disease index of rubber-tree powdery mildew as the response variable and spore concentration, temperature, humidity, and infection time as predictive variables, a preliminary predictive model for the disease index was developed. Results from indoor inoculation experiments indicate that spore concentration directly influences disease progression and severity: higher spore concentrations lead to faster disease development and increased severity. The optimal relative humidity for powdery mildew development in rubber trees is 80% RH. At varying temperatures, the influence of humidity on the disease index differs across spore concentrations, exhibiting distinct trends. Each model effectively simulates the progression of powdery mildew in rubber trees, with predicted values closely aligning with observed data. Among the models, the Kernel Ridge Regression (KRR) model demonstrates the highest accuracy: the R² values for the training and test sets were 0.978 and 0.964, and the RMSE values were 4.037 and 4.926, respectively. This research provides a robust technical foundation for reducing the labor intensity of traditional prediction methods and offers valuable insights for forecasting airborne forest diseases.
(This article belongs to the Section Plant Modeling)

33 pages, 2441 KB  
Article
Kernel Ridge-Type Shrinkage Estimators in Partially Linear Regression Models with Correlated Errors
by Syed Ejaz Ahmed, Ersin Yilmaz and Dursun Aydın
Mathematics 2025, 13(12), 1959; https://doi.org/10.3390/math13121959 - 13 Jun 2025
Viewed by 761
Abstract
Partially linear time series models often suffer from multicollinearity among regressors and autocorrelated errors, both of which can inflate estimation risk. This study introduces a generalized ridge-type kernel (GRTK) framework that combines kernel smoothing with ridge shrinkage and augments it through ordinary and positive-part Stein adjustments. Closed-form expressions and large-sample properties are established, and data-driven criteria, including GCV, AICc, BIC, and RECP, are used to tune the bandwidth and shrinkage penalties. Monte Carlo simulations indicate that the proposed procedures usually reduce risk relative to existing semiparametric alternatives, particularly when the predictors are strongly correlated and the error process is dependent. An empirical study of US airline-delay data further demonstrates that GRTK produces a stable, interpretable fit, captures a nonlinear air-time effect overlooked by conventional approaches, and leaves only modest residual autocorrelation. By tackling multicollinearity and autocorrelation within a single, flexible estimator, the GRTK family offers practitioners a practical avenue for more reliable inference in partially linear time series settings.
(This article belongs to the Special Issue Statistical Forecasting: Theories, Methods and Applications)
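The "positive-part Stein adjustment" mentioned above belongs to the shrinkage family illustrated by the classic positive-part James–Stein estimator: scale an estimate toward a target by a data-driven factor, truncated at zero so shrinkage never overshoots into sign reversal. A stdlib sketch of that classic estimator (shrinking toward zero with known noise variance; this illustrates the positive-part idea, not the paper's GRTK estimator):

```python
def positive_part_james_stein(x, sigma2=1.0):
    # Positive-part James-Stein shrinkage of a p-dimensional estimate (p >= 3)
    # toward the origin:
    #   factor = max(0, 1 - (p - 2) * sigma2 / ||x||^2)
    # The max(0, .) truncation is the "positive-part" adjustment.
    p = len(x)
    norm2 = sum(v * v for v in x)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [factor * v for v in x]

# ||x||^2 = 25, so factor = 1 - 1/25 = 0.96: mild shrinkage.
shrunk = positive_part_james_stein([3.0, 0.0, 4.0])
```

When the estimate is small relative to the noise, the factor hits zero and the whole vector collapses to the target, which is what protects the positive-part variant from the sign-flipping behavior of plain Stein shrinkage.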

16 pages, 17659 KB  
Article
Extracting Multi-Dimensional Features for BMI Estimation Using a Multiplex Network
by Anying Xu, Tianshu Wang, Tao Yang and Kongfa Hu
Symmetry 2025, 17(6), 877; https://doi.org/10.3390/sym17060877 - 4 Jun 2025
Cited by 1 | Viewed by 2043
Abstract
Body Mass Index (BMI) is a crucial indicator for assessing human obesity and overall health, providing valuable insights for applications such as health monitoring, patient re-identification, and personalized healthcare. Recently, several data-driven methods have been developed to estimate BMI using 2D and 3D features extracted from facial and body images or RGB-D data. However, current research faces challenges such as incomplete consideration of anthropometric features, neglect of multiplex networks, and low BMI-estimation performance. To address these issues, this paper proposes three 3D anthropometric features, one 2D anthropometric feature, and a deep feature extraction method, so that anthropometric features are considered more comprehensively. Additionally, a BMI estimation method based on a multiplex network is introduced: three types of features are extracted by constructing a multichannel network, and BMI estimation is performed using Kernel Ridge Regression (KRR). The experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods. By incorporating symmetry into the analysis, deeper patterns and relationships within complex systems can be uncovered, leading to a more comprehensive understanding of the phenomena under investigation.
(This article belongs to the Section Computer)

20 pages, 3122 KB  
Article
Forecasting Sovereign Credit Risk Amidst a Political Crisis: A Machine Learning and Deep Learning Approach
by Amira Abid
J. Risk Financial Manag. 2025, 18(6), 300; https://doi.org/10.3390/jrfm18060300 - 1 Jun 2025
Cited by 2 | Viewed by 2359
Abstract
The purpose of this paper is to forecast sovereign credit risk for Egypt, Morocco, and Saudi Arabia during political crises. Our approach uses machine learning models (Linear Regression, Ridge Regression, Lasso Regression, XGBoost, and Kernel Ridge) and deep learning models (RNN, LSTM, BiLSTM, and GRU) to predict CDS-based implied default probabilities. We compare the predictive accuracy of the tested models; the results show that Linear Regression outperforms all other techniques, while deep learning architectures such as RNN and GRU demonstrate competitive performance. To validate the sovereign credit risk prediction, we use the forecasted implied default probability from the Linear Regression model to determine the corresponding forecasted implied rating according to the Thomson Reuters StarMine Sovereign Risk model. The results reveal significant differences in the perceived creditworthiness of Egypt, Morocco, and Saudi Arabia, reflecting each country's economic fundamentals and ability to manage global shocks, particularly those related to the Russo-Ukrainian war. Specifically, Egypt is perceived as the most vulnerable, Morocco occupies an intermediate position, and Saudi Arabia is seen as having low credit risk. This study provides valuable managerial insights by enhancing tools for sovereign credit risk analysis, supporting reliable decision-making in volatile global markets. The alignment between forecasted ratings and default probabilities underscores the practical relevance of the results, guiding stakeholders in effectively managing credit risk amidst economic uncertainty.
(This article belongs to the Special Issue Forecasting and Time Series Analysis)

22 pages, 3422 KB  
Article
Estimation of Reference Crop Evapotranspiration in the Yellow River Basin Based on Machine Learning and Its Regional and Drought Adaptability Analysis
by Jun Zhao, Huayu Zhong and Congfeng Wang
Agronomy 2025, 15(5), 1237; https://doi.org/10.3390/agronomy15051237 - 19 May 2025
Viewed by 911
Abstract
In recent years, the Yellow River Basin has experienced frequent extreme climate events, with droughts increasing in intensity and frequency, exacerbating regional water scarcity and severely constraining agricultural irrigation efficiency and sustainable water resource utilization. Accurate estimation of reference crop evapotranspiration (ET0) is crucial for developing scientifically sound irrigation strategies and enhancing water resource management. This study used daily-scale meteorological data from 31 stations across the Yellow River Basin spanning 1960–2023 to construct four machine learning models: Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (GB), and Ridge Regression (Ridge), using the meteorological variables required by the Priestley–Taylor (PT) and Hargreaves (HG) equations as inputs. These models span a range of algorithmic structures, from nonlinear ensemble methods (RF, GB) to kernel-based regression (SVM) and linear regularized regression (Ridge). The objective was to comprehensively evaluate their performance and robustness in estimating ET0 across different climatic zones and drought conditions, and to compare them with traditional empirical formulas. The main findings are as follows: machine learning models, particularly nonlinear approaches, significantly outperformed the PT and HG methods across all climatic regions. Among them, the RF model demonstrated the highest simulation accuracy, achieving an R² of 0.77, and reduced the mean daily ET0 estimation error by 0.057 mm/day and 0.076 mm/day compared to the PT and HG models, respectively. Under drought-year scenarios, although all models showed slight performance degradation, nonlinear machine learning models still surpassed the traditional formulas, with the R² of the RF model decreasing only marginally from 0.77 to 0.73, indicating strong robustness. In contrast, linear models such as Ridge Regression were more sensitive to changes in feature distributions during drought years, with estimation accuracy dropping significantly below that of the PT and HG methods. The results indicate that in data-sparse regions, machine learning approaches with simplified inputs can serve as effective alternatives to empirical formulas, offering superior adaptability and estimation accuracy. This study provides theoretical foundations and methodological support for regional water resource management, agricultural drought mitigation, and climate-resilient irrigation planning in the Yellow River Basin.
