Search Results (689)

Search Parameters:
Keywords = gaussian kernel

30 pages, 7812 KB  
Article
Drone-Based Road Marking Condition Mapping: A Drone Imaging and Geospatial Pipeline for Asset Management
by Minh Dinh Bui, Jubin Lee, Kanghyeok Choi, HyunSoo Kim and Changjae Kim
Drones 2026, 10(2), 77; https://doi.org/10.3390/drones10020077 - 23 Jan 2026
Abstract
This study presents a drone-based method for assessing the condition of road markings from high-resolution imagery acquired by a UAV. A DJI Matrice 300 RTK (Real-Time Kinematic) equipped with a Zenmuse P1 camera (DJI, China) is flown over urban road corridors to capture images with centimeter-level ground sampling distance. In contrast to common approaches that rely on vehicle-mounted or street-view cameras, using a UAV reduces survey time and deployment effort while still providing views that are suitable for assessing the markings. The flight altitude, overlap, and corridor pattern are chosen to limit occlusions from traffic and building shadows while preserving the resolution required for condition assessment. From these images, the method locates individual markings, assigns a class to each marking, and estimates its level of deterioration. Candidate markings are first detected with YOLOv9 on the UAV imagery. The detections are cropped and segmented, which refines marking boundaries and thin structures. The condition is then estimated at the pixel level by modeling gray-level statistics with kernel density estimation (KDE) and a two-component Gaussian mixture model (GMM) to separate intact and distressed material. Subsequently, we compute a per-instance damage ratio that summarizes the proportion of degraded pixels within each marking. All results are georeferenced to map coordinates using a 3D reference model, allowing visualization on base maps and integration into road asset inventories. Experiments on unseen urban areas report detection performance (precision, recall, mean average precision) and segmentation performance (intersection over union), and analyze the stability of the damage ratio and processing time. The findings indicate that the drone-based method can identify road markings, estimate their condition, and attach each record to geographic space in a way that is useful for inspection scheduling and maintenance planning. Full article
(This article belongs to the Special Issue Urban Traffic Monitoring and Analysis Using UAVs)
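The pixel-level condition step described above (a two-component Gaussian mixture over gray levels, followed by a per-marking damage ratio) can be sketched in a few lines. The sketch below uses scikit-learn on synthetic pixel values; the function name, the "darker component is distressed" rule, and the data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: model the gray-level histogram of one segmented marking with a
# two-component Gaussian mixture (intact vs. distressed paint) and report the
# fraction of pixels assigned to the darker (distressed) component.
import numpy as np
from sklearn.mixture import GaussianMixture

def damage_ratio(marking_pixels: np.ndarray) -> float:
    """marking_pixels: 1-D array of gray values inside one detected marking."""
    x = marking_pixels.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    distressed = int(np.argmin(gmm.means_.ravel()))  # lower-mean component = worn paint
    labels = gmm.predict(x)
    return float(np.mean(labels == distressed))

# Synthetic example: bright intact paint plus dark worn patches
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(210, 10, 800), rng.normal(120, 15, 200)])
print(f"estimated damage ratio: {damage_ratio(pixels):.2f}")  # roughly 0.20
```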
30 pages, 6571 KB  
Article
MRKAN: A Multi-Scale Network for Dual-Polarization Radar Multi-Parameter Extrapolation
by Junfei Wang, Yonghong Zhang, Linglong Zhu, Qi Liu, Haiyang Lin, Huaqing Peng and Lei Wu
Remote Sens. 2026, 18(2), 372; https://doi.org/10.3390/rs18020372 - 22 Jan 2026
Abstract
Severe convective weather is marked by abrupt onset, rapid evolution, and substantial destructive potential, posing major threats to economic activities and human safety. To address this challenge, this study proposes MRKAN, a multi-parameter prediction algorithm for dual-polarization radar that integrates Mamba, radial basis functions (RBFs), and the Kolmogorov–Arnold Network (KAN). The method predicts radar reflectivity, differential reflectivity, and the specific differential phase, enabling a refined depiction of the dynamic structure of severe convective systems. MRKAN incorporates four key innovations. First, a Cross-Scan Mamba module is designed to enhance global spatiotemporal dependencies through point-wise modeling across multiple complementary scans. Second, a Multi-Order KAN module is developed that employs multi-order β-spline functions to overcome the linear limitations of convolution kernels and to achieve high-order representations of nonlinear local features. Third, a Gaussian and Inverse Multiquadratic RBF module is constructed to extract mesoscale features using a combination of Gaussian radial basis functions and Inverse Multiquadratic radial basis functions. Finally, a Multi-Scale Feature Fusion module is designed to integrate global, local, and mesoscale information, thereby enhancing multi-scale adaptive modeling capability. Experimental results show that MRKAN significantly outperforms mainstream methods across multiple key metrics and yields a more accurate depiction of the spatiotemporal evolution of severe convective weather. Full article
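For readers unfamiliar with the two radial basis functions named in the mesoscale module, a minimal sketch follows. Only the kernel formulas are standard; the shape parameter and how the responses are combined inside MRKAN are illustrative assumptions.

```python
# Gaussian RBF: exp(-(eps*r)^2); inverse multiquadric RBF: 1/sqrt(1 + (eps*r)^2).
import numpy as np

def gaussian_rbf(r: np.ndarray, epsilon: float = 1.0) -> np.ndarray:
    return np.exp(-(epsilon * r) ** 2)

def inverse_multiquadric_rbf(r: np.ndarray, epsilon: float = 1.0) -> np.ndarray:
    return 1.0 / np.sqrt(1.0 + (epsilon * r) ** 2)

r = np.linspace(0.0, 3.0, 7)            # radial distances to a center
print(gaussian_rbf(r))                   # decays quickly (localized features)
print(inverse_multiquadric_rbf(r))       # heavier tail (broader, mesoscale support)
```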
23 pages, 6077 KB  
Article
Patient Similarity Networks for Irritable Bowel Syndrome: Revisiting Brain Morphometry and Cognitive Features
by Arvid Lundervold, Julie Billing, Birgitte Berentsen and Astri J. Lundervold
Diagnostics 2026, 16(2), 357; https://doi.org/10.3390/diagnostics16020357 - 22 Jan 2026
Abstract
Background: Irritable Bowel Syndrome (IBS) is a heterogeneous gastrointestinal disorder characterized by complex brain–gut interactions. Patient Similarity Networks (PSNs) offer a novel approach for exploring this heterogeneity and identifying clinically relevant patient subgroups. Methods: We analyzed data from 78 participants (49 IBS patients and 29 healthy controls) with 36 brain morphometric measures (FreeSurfer v7.4.1) and 6 measures of cognitive functions (5 RBANS domain indices plus a Total Scale score). PSNs were constructed using multiple similarity measures (Euclidean, cosine, correlation-based) with Gaussian kernel transformation. We performed community detection (Louvain algorithm), centrality analyses, feature importance analysis, and correlations with symptom severity. Statistical validation included bootstrap confidence intervals and permutation testing. Results: The PSN comprised 78 nodes connected by 469 edges, with four communities detected. These communities did not significantly correspond to diagnostic groups (Adjusted Rand Index = 0.011, permutation p=0.212), indicating IBS patients and healthy controls were intermixed. However, each community exhibited distinct neurobiological profiles: Community 1 (oldest, preserved cognition) showed elevated intracranial volume but reduced subcortical gray matter; Community 2 (youngest, most severe IBS symptoms) had elevated cortical volumes but reduced white matter; Community 3 (most balanced IBS/HC ratio, mildest IBS symptoms) showed the largest subcortical volumes; Community 4 (lowest cognitive performance across multiple domains) displayed the lowest RBANS scores alongside high IBS prevalence. Top network features included subcortical structures, corpus callosum, and cognitive indices (Language, Attention). Conclusions: PSN identifies brain–cognition communities that cut across diagnostic categories, with distinct feature profiles suggesting different hypothesis-generating neurobiological patterns within IBS that may inform personalized treatment strategies. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
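The PSN construction described above (distances transformed into similarities with a Gaussian kernel, then Louvain community detection) can be illustrated with a short sketch. The bandwidth rule, sparsification threshold, and synthetic feature matrix below are assumptions for illustration, not the authors' exact settings.

```python
# Hedged sketch: pairwise Euclidean distances -> Gaussian-kernel similarities ->
# sparse weighted graph -> Louvain communities (networkx >= 2.8).
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(78, 42))                  # 78 participants x 42 features (synthetic)
X = (X - X.mean(0)) / X.std(0)                 # z-score features

D = squareform(pdist(X, metric="euclidean"))   # pairwise distances
sigma = np.median(D[D > 0])                    # median-heuristic bandwidth (assumption)
S = np.exp(-(D ** 2) / (2 * sigma ** 2))       # Gaussian-kernel similarity
np.fill_diagonal(S, 0.0)

keep = S >= np.quantile(S[S > 0], 0.85)        # keep only the strongest edges (assumption)
G = nx.from_numpy_array(np.where(keep, S, 0.0))
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
print(f"{G.number_of_edges()} edges, {len(communities)} communities")
```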
18 pages, 4205 KB  
Article
Research on Field Weed Target Detection Algorithm Based on Deep Learning
by Ziyang Chen, Le Wu, Zhenhong Jia, Jiajia Wang, Gang Zhou and Zhensen Zhang
Sensors 2026, 26(2), 677; https://doi.org/10.3390/s26020677 - 20 Jan 2026
Abstract
Weed detection algorithms based on deep learning are considered crucial for smart agriculture, with the YOLO series algorithms being widely adopted due to their efficiency. However, existing YOLO algorithms struggle to maintain high accuracy while keeping parameter counts and computational cost low when weeds with occlusion or overlap are detected. To address this challenge, a target detection algorithm called SSS-YOLO based on YOLOv9t is proposed in this paper. First, the SCB (Spatial Channel Conv Block) module is introduced, in which large kernel convolution is employed to capture long-range dependencies, occluded weed regions are bypassed by being associated with unobstructed areas, and features of unobstructed regions are enhanced through inter-channel relationships. Second, the SPPF EGAS (Spatial Pyramid Pooling Fast Edge Gaussian Aggregation Super) module is proposed, where multi-scale max pooling is utilized to extract hierarchical contextual features, large receptive fields are leveraged to acquire background information around occluded objects, and features of weed regions obscured by crops are inferred. Finally, the EMSN (Efficient Multi-Scale Spatial-Feedforward Network) module is developed, through which semantic information of occluded regions is reconstructed by contextual reasoning and background vegetation interference is effectively suppressed while visible regional details are preserved. To validate the performance of this method, experiments are conducted on both our self-built dataset and the publicly available Cotton WeedDet12 dataset. The results demonstrate that compared to existing algorithms, significant performance improvements are achieved by the proposed method. Full article
(This article belongs to the Section Smart Agriculture)
18 pages, 10325 KB  
Article
Eye Movement Analysis: A Kernel Density Estimation Approach for Saccade Direction and Amplitude
by Paula Fehlinger, Bernhard Ertl and Bianca Watzka
J. Eye Mov. Res. 2026, 19(1), 10; https://doi.org/10.3390/jemr19010010 - 19 Jan 2026
Abstract
Eye movements are important indicators of problem-solving or solution strategies and are recorded using eye-tracking technologies. As they reveal how viewers interact with presented information during task processing, their analysis is crucial for educational research. Traditional methods for analyzing saccades, such as histograms or polar diagrams, are limited in capturing patterns in direction and amplitude. To address this, we propose a kernel density estimation approach that explicitly accounts for the data structure: for the circular distribution of saccade direction, we use the von Mises kernel, and for saccade amplitude, a Gaussian kernel. This yields continuous probability distributions that not only improve accuracy of representations but also model the underlying distribution of eye movements. This method enables the identification of strategies used during task processing and reveals the connections to the underlying cognitive processes. It allows for a deeper understanding of information processing during learning. By applying our new method to an empirical dataset, we uncovered differences in solution strategies that conventional techniques could not reveal. The insights gained can contribute to the development of more effective teaching methods, better tailored to the individual needs of learners, thereby enhancing their academic success. Full article
(This article belongs to the Special Issue Eye Tracking and Visualization)
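The estimator described above pairs a von Mises kernel for circular saccade directions with a Gaussian kernel for amplitudes. The sketch below shows both on synthetic saccades; the concentration parameter kappa and the amplitude bandwidth are illustrative choices rather than the paper's settings.

```python
# Circular KDE with von Mises kernels plus a standard Gaussian-kernel KDE.
import numpy as np
from scipy.special import i0            # modified Bessel function of order 0
from scipy.stats import gaussian_kde

def vonmises_kde(theta_grid, directions, kappa=20.0):
    """Average of von Mises kernels centered at each observed saccade direction."""
    diffs = theta_grid[:, None] - directions[None, :]
    kernels = np.exp(kappa * np.cos(diffs)) / (2 * np.pi * i0(kappa))
    return kernels.mean(axis=1)

rng = np.random.default_rng(1)
directions = rng.vonmises(mu=0.0, kappa=4.0, size=300)   # radians, mostly rightward saccades
amplitudes = np.abs(rng.normal(4.0, 2.0, size=300))      # degrees of visual angle

theta = np.linspace(-np.pi, np.pi, 361)
direction_density = vonmises_kde(theta, directions)      # continuous circular density
amplitude_density = gaussian_kde(amplitudes)              # Gaussian-kernel KDE for amplitude

print(theta[np.argmax(direction_density)])                # modal saccade direction (near 0)
print(amplitude_density(np.array([2.0, 4.0, 8.0])))       # density at sample amplitudes
```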
27 pages, 2979 KB  
Article
A Study on the Measurement and Spatial Non-Equilibrium of Marine New-Quality Productivity in China: Differences, Polarization, and Causes
by Yao Wu, Renhong Wu, Lihua Yang, Zixin Lin and Wei Wang
Water 2026, 18(2), 240; https://doi.org/10.3390/w18020240 - 16 Jan 2026
Abstract
Compared to traditional marine productivity, marine new-quality productivity (MNQP) is composed of advanced productive forces driven by the deepening application of new technologies, is characterized by the rapid emergence of new industries, new business models, and new modes of operation, and is marked by a substantial increase in total factor productivity in the marine economy. It has, therefore, become a new engine and pathway for China’s development into a maritime power. The main research approaches and conclusions of this paper are as follows: ① Using a combined order relation analysis method–Entropy Weight Method (G1-EWM) weighting method that integrates subjective and objective factors, we measured the development level of China’s MNQP from 2006 to 2021 across two dimensions: “factor structure” and “quality and efficiency”. The findings indicate that China’s MNQP is developing robustly and still holds considerable potential for improvement. ② Utilizing Gaussian Kernel Density Estimation and Spatial Markov Chain analysis to examine the dynamic evolution of China’s MNQP, the study identifies breaking the low-end lock-in of MNQP as crucial for accelerating balanced development. Spatial imbalances in China’s MNQP may exist both at the national level and within the three major marine economic zones. ③ To further examine potential spatial imbalances, Dagum Gini decomposition was employed to assess regional disparities in China’s MNQP. The DER polarization index and EGR polarization index were used to analyze spatial polarization levels, revealing an intensifying spatial imbalance in China’s MNQP. ④ Finally, geographic detectors were employed to identify the factors influencing spatial imbalances in China’s MNQP. Results indicate that these imbalances result from the combined effects of multiple factors, with marine economic development emerging as the core determinant exerting a dominant influence. The core conclusions of this study provide theoretical support and practical evidence for advancing the enhancement of China’s MNQP, thereby contributing to the realization of the goal of building a maritime power. Full article
(This article belongs to the Section Oceans and Coastal Zones)
32 pages, 4385 KB  
Article
Probabilistic Wind Speed Forecasting Under at Site and Regional Frameworks: A Comparative Evaluation of BART, GPR, and QRF
by Khaled Haddad and Ataur Rahman
Climate 2026, 14(1), 21; https://doi.org/10.3390/cli14010021 - 15 Jan 2026
Abstract
Reliable probabilistic wind speed forecasts are essential for integrating renewable energy into power grids and managing operational uncertainty. This study compares Quantile Regression Forests (QRF), Bayesian Additive Regression Trees (BART), and Gaussian Process Regression (GPR) under at-site and regional pooled frameworks using 21 years (2000–2020) of daily wind data from eleven stations in New South Wales and Queensland, Australia. Models are evaluated via strict year-based holdout validation across seven metrics: RMSE, MAE, R2, bias, correlation, coverage, and Continuous Ranked Probability Score (CRPS). Regional QRF achieves exceptional point forecast stability with minimal RMSE increase but suffers persistent under-coverage, rendering probabilistic bounds unreliable. BART attains near-nominal coverage at individual sites but experiences catastrophic calibration collapse under regional pooling, driven by fixed noise priors inadequate for spatially heterogeneous data. In contrast, GPR maintains robust probabilistic skill regionally despite larger point forecast RMSE penalties, achieving the lowest overall CRPS and near-nominal coverage through kernel-based variance inflation. Variable importance analysis identifies surface pressure and minimum temperature as dominant predictors (60–80%), with spatial covariates critical for regional differentiation. Operationally, regional QRF is prioritised for point accuracy, regional GPR for calibrated probabilistic forecasts in risk-sensitive applications, and at-site BART when local data suffice. These findings show that Bayesian machine learning methods can effectively navigate the trade-off between local specificity and regional pooling, a challenge common to wind forecasting in diverse terrain globally. The methodology and insights are transferable to other heterogeneous regions, providing guidance for probabilistic wind forecasting and renewable energy grid integration. Full article
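Of the metrics listed above, the Continuous Ranked Probability Score is the one that jointly rewards sharpness and calibration. For a Gaussian predictive distribution it has a closed form, CRPS(N(mu, sigma^2), y) = sigma * [ z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ] with z = (y - mu)/sigma, which the sketch below implements on synthetic forecasts.

```python
# Closed-form CRPS for Gaussian predictive distributions; lower is better.
import numpy as np
from scipy.stats import norm

def crps_gaussian(y: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

y_obs = np.array([6.2, 4.8, 9.1])     # observed daily wind speeds (m/s), synthetic
mu    = np.array([5.9, 5.5, 8.0])     # predictive means
sigma = np.array([1.0, 1.5, 2.0])     # predictive standard deviations
print(crps_gaussian(y_obs, mu, sigma).mean())
```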
17 pages, 1776 KB  
Article
Multi-Scale Adaptive Light Stripe Center Extraction for Line-Structured Light Vision Based Online Wheelset Measurement
by Saisai Liu, Qixin He, Wenjie Fu, Boshi Du and Qibo Feng
Sensors 2026, 26(2), 600; https://doi.org/10.3390/s26020600 - 15 Jan 2026
Abstract
The extraction of the light stripe center is a pivotal step in line-structured light vision measurement. This paper addresses a key challenge in the online measurement of train wheel treads, where the diverse and complex profile characteristics of the tread surface lead to uneven gray-level distribution and varying width features in the stripe image, ultimately degrading the accuracy of center extraction. To solve this problem, a region-adaptive multiscale method for light stripe center extraction is proposed. First, potential light stripe regions are identified and enhanced based on the gray-gradient features of the image, enabling precise segmentation. Subsequently, by normalizing the feature responses under Gaussian kernels with different scales, the locally optimal scale parameter (σ) is determined adaptively for each stripe region. Sub-pixel center extraction is then performed using the Hessian matrix corresponding to this optimal σ. Experimental results demonstrate that under on-site conditions featuring uneven wheel surface reflectivity, the proposed method can reliably extract light stripe centers with high stability. It achieves a repeatability of 0.10 mm, with mean measurement errors of 0.12 mm for flange height and 0.10 mm for flange thickness, thereby enhancing both stability and accuracy in industrial measurement environments. The repeatability and reproducibility of the method were further validated through repeated testing of multiple wheels. Full article
(This article belongs to the Special Issue Intelligent Sensors and Signal Processing in Industry)
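The core idea above, selecting a locally optimal Gaussian scale from normalized responses and then extracting a sub-pixel center, can be illustrated with a deliberately simplified one-dimensional sketch. The paper works with the full 2-D Hessian per stripe region; the per-column version below, with its candidate scales and synthetic stripe, only demonstrates the adaptive-scale principle.

```python
# Simplified sketch: per column, pick the sigma with the strongest scale-normalized
# second-derivative (ridge) response, then refine the center by quadratic interpolation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def stripe_centers(img: np.ndarray, sigmas=(1.0, 2.0, 3.0, 4.0)) -> np.ndarray:
    """img: 2-D gray image with a roughly horizontal bright stripe; returns one row per column."""
    centers = np.full(img.shape[1], np.nan)
    for col in range(img.shape[1]):
        profile = img[:, col].astype(float)
        best = None
        for s in sigmas:
            d2 = gaussian_filter1d(profile, s, order=2) * s ** 2   # scale-normalized response
            r = int(np.argmin(d2))                                 # strongest negative curvature
            if best is None or d2[r] < best[0]:
                best = (d2[r], r, gaussian_filter1d(profile, s))
        _, r, smooth = best
        if 0 < r < len(profile) - 1:                               # sub-pixel quadratic fit
            y0, y1, y2 = smooth[r - 1], smooth[r], smooth[r + 1]
            denom = y0 - 2 * y1 + y2
            centers[col] = r + (0.5 * (y0 - y2) / denom if denom != 0 else 0.0)
    return centers

# Synthetic stripe whose width varies across columns
rows, cols = np.arange(64)[:, None], np.arange(128)[None, :]
width = 2.0 + 2.0 * cols / 128
img = 255 * np.exp(-((rows - 32.3) ** 2) / (2 * width ** 2))
print(stripe_centers(img)[:5])   # values close to 32.3
```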
19 pages, 7967 KB  
Article
State-of-Charge Estimation of Lithium-Ion Batteries Based on GMMCC-AEKF in Non-Gaussian Noise Environment
by Fuxiang Li, Haifeng Wang, Hao Chen, Limin Geng and Chunling Wu
Batteries 2026, 12(1), 29; https://doi.org/10.3390/batteries12010029 - 14 Jan 2026
Abstract
To improve the accuracy and robustness of lithium-ion battery state of charge (SOC) estimation, this paper proposes a generalized mixture maximum correlation-entropy criterion-based adaptive extended Kalman filter (GMMCC-AEKF) algorithm, addressing the performance degradation of the traditional extended Kalman filter (EKF) under non-Gaussian noise and inaccurate initial conditions. Based on the GMMCC theory, the proposed algorithm introduces an adaptive mechanism and employs two generalized Gaussian kernels to construct a mixed kernel function, thereby formulating the generalized mixture correlation-entropy criterion. This enhances the algorithm’s adaptability to complex non-Gaussian noise. Simultaneously, by incorporating adaptive filtering concepts, the state and measurement covariance matrices are dynamically adjusted to improve stability under varying noise intensities and environmental conditions. Furthermore, the use of statistical linearization and fixed-point iteration techniques effectively improves both the convergence behavior and the accuracy of nonlinear system estimation. To investigate the effectiveness of the suggested method, experiments for SOC estimation were implemented using two lithium-ion cells featuring distinct rated capacities. These tests employed both dynamic stress test (DST) and federal test procedure (FTP) profiles under three representative temperature settings: 40 °C, 25 °C, and 10 °C. The experimental findings prove that when exposed to non-Gaussian noise, the GMMCC-AEKF algorithm consistently outperforms both the traditional EKF and the generalized mixture maximum correlation-entropy-based extended Kalman filter (GMMCC-EKF) under various test conditions. Specifically, under the 25 °C DST profile, GMMCC-AEKF improves estimation accuracy by 86.54% and 10.47% over EKF and GMMCC-EKF, respectively, for the No. 1 battery. Under the FTP profile for the No. 2 battery, it achieves improvements of 55.89% and 28.61%, respectively. Even under extreme temperatures (10 °C, 40 °C), GMMCC-AEKF maintains high accuracy and stable convergence, and the algorithm demonstrates rapid convergence to the true SOC value. In summary, the GMMCC-AEKF confirms excellent estimation accuracy under various temperatures and non-Gaussian noise conditions, contributing a practical approach for accurate SOC estimation in power battery systems. Full article
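The mixed kernel at the heart of the criterion above is a convex combination of two generalized Gaussian kernels applied to the estimation error. The sketch below shows only that building block; the shape parameters, bandwidths, and mixing weight are illustrative assumptions, and the full GMMCC-AEKF additionally embeds this kernel in a fixed-point-iterated, adaptively weighted extended Kalman filter.

```python
# Mixture of two generalized Gaussian kernels, G(e) = exp(-|e/sigma|^alpha).
import numpy as np

def generalized_gaussian_kernel(e, sigma, alpha):
    return np.exp(-np.abs(e / sigma) ** alpha)

def mixed_kernel(e, lam=0.6, sigma1=0.5, alpha1=2.0, sigma2=2.0, alpha2=1.0):
    """lam * Gaussian-like component + (1 - lam) * heavier-tailed (Laplacian-like) component."""
    return (lam * generalized_gaussian_kernel(e, sigma1, alpha1)
            + (1 - lam) * generalized_gaussian_kernel(e, sigma2, alpha2))

errors = np.array([0.0, 0.3, 1.0, 3.0, 8.0])   # innovations, including outliers
print(mixed_kernel(errors))   # large errors are sharply down-weighted, unlike a quadratic loss
```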
46 pages, 5566 KB  
Article
Classifying with the Fine Structure of Distributions: Leveraging Distributional Information for Robust and Plausible Naïve Bayes
by Quirin Stier, Jörg Hoffmann and Michael C. Thrun
Mach. Learn. Knowl. Extr. 2026, 8(1), 13; https://doi.org/10.3390/make8010013 - 5 Jan 2026
Abstract
In machine learning, the Bayes classifier represents the theoretical optimum for minimizing classification errors. Since estimating high-dimensional probability densities is impractical, simplified approximations such as naïve Bayes and k-nearest neighbor are widely used as baseline classifiers. Despite their simplicity, these methods require design choices—such as the distance measures in kNN, or the feature independence in naïve Bayes. In particular, naïve Bayes relies on implicit assumptions by using Gaussian mixtures or univariate kernel density estimators. Such design choices, however, often fail to capture heterogeneous distributional structures across features. We propose a flexible naïve Bayes classifier that leverages Pareto Density Estimation (PDE), a parameter-free, non-parametric approach shown to outperform standard kernel methods in exploratory statistics. PDE avoids prior distributional assumptions and supports interpretability through visualization of class-conditional likelihoods. In addition, we address a recently described pitfall of Bayes’ theorem: the misclassification of observations with low evidence. Building on the concept of plausible Bayes, we introduce a safeguard to handle uncertain cases more reliably. While not aiming to surpass state-of-the-art classifiers, our results show that PDE-flexible naïve Bayes with uncertainty handling provides a robust, scalable, and interpretable baseline that can be applied across diverse data scenarios. Full article
(This article belongs to the Section Learning)
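The general shape of the classifier above, naive Bayes with flexible per-feature densities and a safeguard for low-evidence observations, can be sketched compactly. The sketch uses a Gaussian KDE as a stand-in for Pareto Density Estimation, and the evidence threshold is an illustrative assumption, not the authors' plausible-Bayes rule.

```python
# KDE-based naive Bayes with a crude low-evidence refusal option.
import numpy as np
from scipy.stats import gaussian_kde

class KDENaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X, min_evidence=1e-6):
        out = []
        for x in X:
            joint = {c: self.priors_[c] * np.prod([k(xj)[0] for k, xj in zip(self.kdes_[c], x)])
                     for c in self.classes_}
            evidence = sum(joint.values())
            # Safeguard: refuse to classify points far outside the training support.
            out.append(-1 if evidence < min_evidence else max(joint, key=joint.get))
        return np.array(out)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = KDENaiveBayes().fit(X, y)
print(model.predict(np.array([[0.2, -0.1], [3.1, 2.8], [40.0, 40.0]])))  # [0 1 -1]
```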
13 pages, 1432 KB  
Article
Online Hyperparameter Tuning in Bayesian Optimization for Material Parameter Identification: An Application in Strain-Hardening Plasticity for Automotive Structural Steel
by Teng Long, Leyu Wang, Cing-Dao Kan and James D. Lee
AppliedMath 2026, 6(1), 6; https://doi.org/10.3390/appliedmath6010006 - 3 Jan 2026
Abstract
Effective identification of strain-hardening parameters is essential for predictive plasticity models used in automotive applications. However, the performance of Bayesian optimization depends strongly on kernel hyperparameters in the Gaussian-process surrogate, which are often kept fixed. In this work, we propose a likelihood-based online hyperparameter strategy within Bayesian optimization to identify strain-hardening parameters in plasticity. Specifically, we used the rational polynomial strain-hardening scheme for the plasticity model to fit the force vs. displacement response of automotive structural steel in tension. An in-house Bayesian optimization framework was first developed, and an online hyperparameter tuning algorithm was further incorporated to advance the optimization scheme. The optimization histories obtained from the fixed and online-tuning hyperparameters were compared. For the same number of iterations, the online hyperparameter adaptation reduced the final residual by approximately 20.4%, 24.0%, and 3.8% for Specimens 1–3, respectively. These results demonstrate that the proposed strategy can significantly improve the efficiency and quality of strain-hardening parameter identification. The results show that the online tuning scheme improved the optimization efficiency. This proposed strategy may be readily extensible to other materials and identification problems where enhancing optimization efficiency is needed. Full article
(This article belongs to the Special Issue Optimization and Machine Learning)
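The fixed-versus-online contrast discussed above can be illustrated with a compact Bayesian-optimization loop, using scikit-learn as a stand-in for the authors' in-house framework: at every iteration the GP surrogate is refit either with its RBF hyperparameters frozen or re-tuned by maximizing the log marginal likelihood. The objective, bounds, and expected-improvement acquisition below are illustrative assumptions, not the paper's setup.

```python
# Bayesian optimization with fixed vs. online-tuned GP kernel hyperparameters.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def objective(x):                      # toy stand-in for the simulation-vs-test residual
    return (x - 0.3) ** 2 + 0.05 * np.sin(25 * x)

def run_bo(online_tuning: bool, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, (4, 1)); y = objective(X).ravel()
    kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
    grid = np.linspace(0, 1, 501).reshape(-1, 1)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(
            kernel=kernel, normalize_y=True,
            optimizer="fmin_l_bfgs_b" if online_tuning else None).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        best = y.min(); z = (best - mu) / np.maximum(sd, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)      # expected improvement
        x_next = grid[np.argmax(ei)].reshape(1, 1)
        X = np.vstack([X, x_next]); y = np.append(y, objective(x_next).ravel())
    return y.min()

print("fixed hyperparameters :", run_bo(online_tuning=False))
print("online tuning         :", run_bo(online_tuning=True))
```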
17 pages, 2706 KB  
Article
Gaussian Process Modeling of EDM Performance Using a Taguchi Design
by Dragan Rodić, Milenko Sekulić, Borislav Savković, Anđelko Aleksić, Aleksandra Kosanović and Vladislav Blagojević
Eng 2026, 7(1), 14; https://doi.org/10.3390/eng7010014 - 1 Jan 2026
Abstract
Electrical discharge machining (EDM) is widely used for machining hard and difficult-to-cut materials; however, the complex and nonlinear nature of the process makes the accurate prediction of key performance indicators challenging, particularly when only limited experimental data are available. In this study, a combined Taguchi design and Gaussian process regression (GPR) modeling framework is proposed to predict the surface roughness (Ra), material removal rate (MRR), and overcut (OC) in die-sinking EDM. An L18 Taguchi orthogonal array was employed to efficiently design experiments involving discharge current, pulse duration, and electrode material. GPR models with an automatic relevance determination (ARD) radial basis function kernel were developed to capture nonlinear relationships and varying parameter relevance. Model performance was evaluated using strict leave-one-out cross-validation (LOOCV). The developed GPR models achieved low prediction errors, with RMSE (MAE) values of 0.54 µm (0.41 µm) for Ra, 1.56 mm3/min (1.21 mm3/min) for MRR, and 0.0065 mm (0.0055 mm) for OC, corresponding to approximately 9.8%, 5.4%, and 5.9% of the respective response ranges. These results confirm stable and reliable predictive accuracy within the investigated parameter domain. Based on the validated surrogate models, multi-objective optimization was performed to identify Pareto-optimal process conditions, revealing graphite electrodes as the dominant choice within the feasible operating region. The proposed approach demonstrates that accurate and robust prediction of EDM performance can be achieved even with compact experimental datasets, providing a practical tool for process analysis and optimization. Full article
(This article belongs to the Special Issue Emerging Trends and Technologies in Manufacturing Engineering)
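The modeling recipe above, a GP with an anisotropic (ARD) RBF kernel evaluated by leave-one-out cross-validation, is easy to reproduce in outline. The synthetic 18-run design and surrogate response below merely stand in for the actual L18 Taguchi data, so the error values are not the paper's results.

```python
# GPR with one length scale per input (ARD) and strict leave-one-out CV.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.uniform([1, 50, 0], [10, 400, 1], size=(18, 3))    # current, pulse duration, electrode code
y = 0.4 * X[:, 0] + 0.003 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.2, 18)  # surrogate Ra

# Vector length_scale -> automatic relevance determination (ARD)
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(1e-2)

errors = []
for train, test in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[train], y[train])
    errors.append((y[test] - gp.predict(X[test])).item())
errors = np.array(errors)
print(f"LOOCV RMSE = {np.sqrt(np.mean(errors**2)):.3f}, MAE = {np.mean(np.abs(errors)):.3f}")
```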
23 pages, 1581 KB  
Article
Fast Riemannian Manifold Hamiltonian Monte Carlo for Hierarchical Gaussian Process Models
by Takashi Hayakawa and Satoshi Asai
Mathematics 2026, 14(1), 146; https://doi.org/10.3390/math14010146 - 30 Dec 2025
Abstract
Hierarchical Bayesian models based on Gaussian processes are considered useful for describing complex nonlinear statistical dependencies among variables in real-world data. However, effective Monte Carlo algorithms for inference with these models have not yet been established, except for several simple cases. In this study, we show that, compared with the slow inference achieved with existing program libraries, the performance of Riemannian manifold Hamiltonian Monte Carlo (RMHMC) can be drastically improved by applying the chain rule for the differentiation of the Hamiltonian in the optimal order determined by the model structure, and by dynamically programming the eigendecomposition of the Riemannian metric with the recursive update of the eigenvectors at the previous move. This improvement cannot be achieved when using a naive automatic differentiator included in commonly used libraries. We numerically demonstrate that RMHMC effectively samples from the posterior, allowing the calculation of model evidence, in a Bayesian logistic regression on simulated data and in the estimation of propensity functions for the American national medical expenditure data using several Bayesian multiple-kernel models. These results lay a foundation for implementing effective Monte Carlo algorithms for analysing real-world data with Gaussian processes, and highlight the need to develop a customisable library set that allows users to incorporate dynamically programmed objects and to finely optimise the mode of automatic differentiation depending on the model structure. Full article
(This article belongs to the Special Issue Bayesian Statistics and Applications)
15 pages, 3785 KB  
Article
A Sustainable Manufacturing Approach: Experimental and Machine Learning-Based Surface Roughness Modelling in PMEDM
by Vaibhav Ganachari, Aleksandar Ašonja, Shailesh Shirguppikar, Ruturaj U. Kakade, Mladen Radojković, Blaža Stojanović and Aleksandar Vencl
J. Manuf. Mater. Process. 2026, 10(1), 10; https://doi.org/10.3390/jmmp10010010 - 29 Dec 2025
Abstract
The powder-mixed electric-discharge machining (PMEDM) process has been the focus of researchers for quite some time. This method overcomes the constraints of conventional machining, viz., low material removal rate (MRR) and high surface roughness (SR) in hard-cut materials, tool failure, and a high tool wear ratio (TWR). However, to determine the optimal machining parameter levels for improving MRR, surface finish must be measured during actual experimentation using various parameter levels across different materials. It is a very costly and time-consuming process for industries. However, in the age of Industry 4.0 and artificial intelligence machine learning (AI-ML), it provides an efficient solution to real manufacturing problems when big data is available. In this study, experimentation was conducted on AISI D2 steel using the PMEDM process for SR analysis with different parameters, viz. current, voltage, cycle time (TOn), powder concentration (PC), and duty factor (DF). Moreover, machine learning models were used to predict SR values for selected parameter levels in the PMEDM process. In this research, Gaussian process regression (GPR) with a squared exponential kernel, support vector machines, and ensemble regression models were used for computational analysis. The results of this work showed that Gaussian regression, support vector machine, and ensemble regression achieved 95%, 92%, and 83% accuracy, respectively. The GPR model achieved the best predictive performance among these three models. Full article
37 pages, 26723 KB  
Article
Investigation of the Hydrodynamic Characteristics of a Wandering Reach with Multiple Mid-Channel Shoals in the Upper Yellow River
by Hefang Jing, Haoqian Li, Weihong Wang, Yongxia Liu and Jianping Lv
Sustainability 2026, 18(1), 264; https://doi.org/10.3390/su18010264 - 26 Dec 2025
Abstract
Sustainable management of sediment-laden rivers is essential for balancing flood control, ecological protection, and socioeconomic development. The Upper Yellow River, supporting 160 million people, faces escalating challenges in maintaining channel stability under intensified water–sediment imbalances. This study investigates the Sipaikou reach in Ningxia—a representative wandering channel with multiple mid-channel shoals—through integrated UAV-USV-GNSS RTK field measurements and hydrodynamic and sediment transport modeling. Field measurements reveal that mid-channel shoal morphology coupled with bend circulation governs flow division patterns, with discharge ratios of 44.16% and 86.31% at the primary and secondary shoals, respectively. Gaussian kernel density estimation demonstrates velocity distributions evolving from right-skewed to left-skewed around shoals, while spur dike regions display strong left skewness with concentrated main flow. Numerical simulations under six discharge scenarios indicate: (1) Head loss exhibits diminishing marginal effects at the primary shoal, an inflection point at a critical discharge at the secondary shoal, and superlinear growth in the spur dike region. (2) The normal-flow period represents the critical threshold for erosion–deposition regime transition. (3) Spur dike series achieve bank protection through main flow constriction and inter-dike low-velocity zone creation. These findings provide scientific foundations for sustainable flood risk management and ecological restoration in wandering rivers. The integrated measurement–simulation framework offers a transferable methodology for adaptive river management under changing hydrological conditions. Full article
(This article belongs to the Section Sustainable Water Management)
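The distributional analysis mentioned above, characterizing how velocity samples skew around the shoals versus the spur-dike region, amounts to a Gaussian-kernel density estimate paired with a skewness statistic. The sketch below uses synthetic velocity samples; only the KDE-plus-skewness recipe is illustrated, not the study's measured data.

```python
# Gaussian-kernel density estimation and skewness of velocity samples.
import numpy as np
from scipy.stats import gaussian_kde, skew

rng = np.random.default_rng(0)
v_shoal = rng.gamma(shape=3.0, scale=0.3, size=500)          # right-skewed velocities (m/s)
v_dike  = 2.0 - rng.gamma(shape=3.0, scale=0.2, size=500)    # left-skewed, concentrated main flow

grid = np.linspace(0, 2.5, 200)
for name, v in [("shoal", v_shoal), ("spur dike", v_dike)]:
    density = gaussian_kde(v)(grid)
    print(f"{name:9s}: mode ~ {grid[np.argmax(density)]:.2f} m/s, skewness = {skew(v):+.2f}")
```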