Search Results (850)

Search Parameters:
Keywords = uncertainty quantification model

36 pages, 16232 KB  
Article
Hybrid Multimodal Surrogate Modeling and Uncertainty-Aware Co-Design for L-PBF Ti-6Al-4V with Nanomaterials-Informed Morphology Proxies
by Rifath Bin Hossain, Xuchao Pan, Geng Chang, Xin Su, Yu Tao and Xinyi Han
Nanomaterials 2026, 16(8), 447; https://doi.org/10.3390/nano16080447 - 8 Apr 2026
Abstract
Reliable property prediction and process selection in laser powder bed fusion are hindered by small, set-level datasets in which key morphology descriptors are intermittently missing, limiting both generalization and actionable co-design. A hybrid multimodal surrogate strategy is introduced that couples engineered process physics features with morphology proxies through a deployable two-stage embedding module and gradient-boosted tree regressors. Set-resolved inputs are assembled from L-PBF parameters, linear energy density and related energy-density variants, pore and prior-β grain summary statistics, and stress–strain-derived descriptors, followed by missingness-aware feature filtering, median imputation, and 5-fold GroupKFold evaluation grouped by set_id, with morphology embeddings learned on training folds and predicted when absent. Across six targets, the final deployable models achieve an RMSE/R2 of 11.07 MPa/0.895 (yield), 13.88 MPa/0.873 (UTS), 0.677%/0.861 (elongation), and 2.38 GPa/0.663 (modulus), while roughness and hardness remain challenging (RMSE 2.31 μm and 16.54 HV; R2 about 0.12 and 0.11). These surrogates enable constraint-aware candidate generation that identifies a concise set of manufacturing recipes balancing strength and surface objectives under uncertainty-aware screening. The resulting framework provides a practical blueprint for multimodal, small-data additive manufacturing studies and can be extended to richer microstructure measurements and prospective validation to accelerate functional and biomedical alloy development. Full article
(This article belongs to the Section Nanofabrication and Nanomanufacturing)
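As a rough illustration of the evaluation protocol described in this abstract (median imputation, gradient-boosted trees, and 5-fold GroupKFold grouped by set_id), the following is a minimal scikit-learn sketch on a hypothetical set-level CSV; the file name, column names, and the single target shown are placeholders, not the authors' code.

```python
# Minimal sketch: median imputation + gradient-boosted trees evaluated with
# GroupKFold so that samples from the same build set never straddle folds.
# File and column names (lpbf_sets.csv, set_id, yield_strength) are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("lpbf_sets.csv")                     # hypothetical set-level dataset
X = df.drop(columns=["set_id", "yield_strength"])
y = df["yield_strength"]
groups = df["set_id"]

model = make_pipeline(SimpleImputer(strategy="median"),
                      HistGradientBoostingRegressor(random_state=0))

rmses, r2s = [], []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    pred = model.predict(X.iloc[test_idx])
    rmses.append(mean_squared_error(y.iloc[test_idx], pred) ** 0.5)
    r2s.append(r2_score(y.iloc[test_idx], pred))

print(f"RMSE = {np.mean(rmses):.2f}, R2 = {np.mean(r2s):.3f}")
```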

37 pages, 1897 KB  
Article
A Bayesian Feature Weighting Model with Simplex-Constrained Dirichlet and Contamination-Aware Priors for Noisy Medical Data
by Mehmet Ali Cengiz, Zeynep Öztürk and Abdulmohsen Alharthi
Mathematics 2026, 14(8), 1243; https://doi.org/10.3390/math14081243 - 8 Apr 2026
Abstract
Feature weighting plays a central role in medical classification by enhancing predictive accuracy, interpretability, and clinical trust through the explicit quantification of variable relevance. Despite their widespread use, existing filter-, wrapper-, and embedded-based feature weighting methods are predominantly deterministic and exhibit pronounced sensitivity to label noise and outliers, which are pervasive in real-world medical data. This often results in unstable importance estimates and unreliable clinical interpretations. In this work, we introduce a novel Bayesian feature weighting model that fundamentally departs from existing approaches by jointly integrating simplex-constrained Dirichlet priors for global feature weights, hierarchical shrinkage priors for coefficient regularization, and contamination-aware priors for explicit modeling of label noise within a single coherent probabilistic framework. Unlike conventional Bayesian feature selection or robust classification models, the proposed formulation yields globally interpretable feature weights defined on the probability simplex, while simultaneously providing full posterior uncertainty quantification and robustness to both mislabeled observations and aberrant feature values through principled influence control. Comprehensive simulation studies across diverse contamination scenarios, together with applications to multiple real-world medical datasets, demonstrate that the proposed model consistently outperforms classical and state-of-the-art baselines in terms of discrimination, probabilistic calibration, and stability of feature-importance estimates. These results highlight the practical and methodological significance of the proposed framework as a robust, uncertainty-aware, and interpretable solution for medical decision making under noisy data conditions. Full article
(This article belongs to the Special Issue Statistical Machine Learning: Models and Its Applications)
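The simplex-constrained weighting idea can be sketched with a small PyMC model: Dirichlet-distributed feature weights entering a logistic likelihood. This is a minimal illustration on synthetic data and deliberately omits the paper's hierarchical shrinkage and contamination-aware components.

```python
# Minimal sketch: Dirichlet-distributed feature weights on the probability
# simplex inside a logistic regression likelihood (synthetic data; the
# hierarchical shrinkage and label-noise components of the paper are omitted).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

with pm.Model() as model:
    w = pm.Dirichlet("w", a=np.ones(p))            # global feature weights, sum to 1
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=p)
    intercept = pm.Normal("intercept", 0.0, 1.0)
    # (X w) . beta = X . (w * beta): weighted features enter the linear predictor
    eta = intercept + pm.math.dot(X, w * beta)
    pm.Bernoulli("obs", logit_p=eta, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

print(idata.posterior["w"].mean(dim=("chain", "draw")).values)
```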

19 pages, 7516 KB  
Article
ForSOC-UA: A Novel Framework for Forest Soil Organic Carbon Estimation and Uncertainty Assessment with Multi-Source Data and Spatial Modeling
by Qingbin Wei, Miao Li, Zhen Zhen, Shuying Zang, Hongwei Ni, Xingfeng Dong and Ye Ma
Remote Sens. 2026, 18(8), 1106; https://doi.org/10.3390/rs18081106 - 8 Apr 2026
Abstract
Accurate estimation of forest soil organic carbon (SOC) is considered critical for understanding terrestrial carbon cycling and supporting climate change mitigation strategies. However, canopy obstruction, the intricate vertical structure of forests, and the constraints of single-source remote sensing data have presented considerable obstacles to estimating forest SOC. This study proposes a forest SOC estimation and uncertainty analysis (ForSOC-UA) framework to enhance forest SOC estimation and quantify its uncertainty in the natural secondary forests of northern China by integrating hyperspectral imagery (ZY-1F), synthetic aperture radar data (Sentinel-1), and environmental covariates (such as topography, vegetation, and soil indices). The performance of traditional machine learning models (RF, SVM, and CNN), geographically weighted regression (GWR), and a geographically weighted random forest (GWRF) model was compared across three different soil depths (0–5 cm, 5–10 cm, and 10–30 cm). The results showed that GWRF consistently outperformed all other models across all soil depth layers, with the highest accuracy achieved using multi-source data (R2 = 0.58, RMSE = 27.49 g/kg, rRMSE = 0.31). Analysis of feature importance revealed that soil moisture, terrain characteristics, and Sentinel-1 polarization attributes were the primary predictors, while spectral derivatives in the red and near-infrared bands from ZY-1F also played a significant role in forest SOC estimation. The uncertainty analysis indicated a forest SOC estimation uncertainty of 37.2 g/kg in the 0–5 cm soil layer, with a decreasing trend as depth increased. This pattern is associated with the vertical spatial distribution of the measured forest SOC. This integrated approach effectively captures spatial heterogeneity and nonlinear relationships between features and forest SOC, while also assessing estimation uncertainty, thus providing a robust methodology for predicting forest SOC. The ForSOC-UA framework addresses the uncertainty quantification of SOC estimation at different vertical depths based on machine learning, providing methodological enhancements for the assessment of large-scale forest SOC and the monitoring of carbon sinks within forest ecosystems. Full article
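The GWRF model itself is not reproduced here; as a generic illustration of attaching an uncertainty estimate to tree-ensemble SOC predictions, the sketch below uses the spread of individual trees in a plain random forest on synthetic covariates (an assumption of this example, not the ForSOC-UA procedure).

```python
# Minimal sketch: attach an uncertainty estimate to random-forest SOC
# predictions from the spread of the individual trees' predictions.
# Generic illustration on synthetic data, not the GWRF / ForSOC-UA procedure.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                                        # stand-in environmental covariates
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=300) # synthetic SOC, g/kg

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

X_new = rng.normal(size=(5, 8))
tree_preds = np.stack([tree.predict(X_new) for tree in rf.estimators_])  # (n_trees, n_samples)
mean_pred = tree_preds.mean(axis=0)
std_pred = tree_preds.std(axis=0)      # per-sample spread as a rough uncertainty proxy
for m, s in zip(mean_pred, std_pred):
    print(f"SOC ~ {m:.1f} +/- {s:.1f} g/kg")
```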

29 pages, 6506 KB  
Article
A Hybrid VMD–Informer Framework for Forecasting Volatile Pork Prices
by Xudong Lin, Guobao Liu, Zhiguo Du, Bin Wen, Zhihui Wu, Xianzhi Tu and Yongjie Zhang
Agriculture 2026, 16(8), 827; https://doi.org/10.3390/agriculture16080827 - 8 Apr 2026
Abstract
Accurate forecasting of pork prices is important yet challenging because pork price series are highly volatile and non-stationary. Existing hybrid forecasting models often rely on fixed-weight integration, which may limit their ability to adapt to multi-scale temporal variation and complex temporal dependencies. To address these issues, this study proposes VMD–EMSA–HCTM–Informer, a hybrid forecasting framework that combines signal decomposition with an enhanced encoder–decoder architecture. Variational Mode Decomposition (VMD) is first used to reduce signal non-stationarity by extracting intrinsic mode functions. Within the Informer backbone, an Enhanced Multi-Scale Attention (EMSA) encoder is introduced to capture local fluctuations at different temporal scales, while a Hybrid Convolutional–Temporal Module (HCTM) decoder is used to strengthen temporal feature extraction and channel interaction modeling. Empirical evaluation was conducted on daily pork price data from the China Pig Industry Network and a large-scale intensive breeding enterprise in southern China over the period 2013–2025. Under the current experimental setting, the proposed framework achieved the lowest average errors among the compared baselines across five independent runs, with an average MAE of 0.4875 and an average MAPE of 3.0540%. These results suggest that the proposed framework provides a useful and relatively stable univariate forecasting approach for volatile pork prices. However, the findings should be interpreted within the scope of the present dataset and experimental design, and future work will extend the framework to multivariate forecasting with exogenous drivers and uncertainty quantification. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
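A minimal sketch of the decompose-then-forecast idea, assuming the third-party vmdpy package is available for Variational Mode Decomposition; a trivial persistence forecast per mode stands in for the EMSA/HCTM-enhanced Informer described in the abstract.

```python
# Minimal sketch: decompose a price series into intrinsic mode functions with
# VMD (third-party vmdpy package, assumed installed), forecast each mode
# separately, and recombine. A trivial persistence forecast stands in for the
# Informer-based components described in the paper.
import numpy as np
from vmdpy import VMD

t = np.linspace(0, 1, 500)
price = (20 + 3 * np.sin(2 * np.pi * 5 * t)
         + np.cumsum(np.random.default_rng(0).normal(0, 0.1, 500)))  # synthetic series

# Positional VMD arguments: bandwidth constraint, noise tolerance, number of
# modes K, DC-mode flag, initialization, convergence tolerance.
u, u_hat, omega = VMD(price, 2000, 0.0, 5, 0, 1, 1e-7)

# Forecast each mode with last-value persistence, then sum the mode forecasts.
next_value = sum(mode[-1] for mode in u)
print(f"one-step-ahead forecast ~ {next_value:.2f}")
```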

70 pages, 5061 KB  
Systematic Review
Beyond Accuracy: Transferability Limits, Validation Inflation, and Uncertainty Gaps in Satellite-Based Water Quality Monitoring—A Systematic Quantitative Synthesis and Operational Framework
by Saeid Pourmorad, Valerie Graw, Andreas Rienow and Luca Antonio Dimuccio
Remote Sens. 2026, 18(7), 1098; https://doi.org/10.3390/rs18071098 - 7 Apr 2026
Abstract
Satellite remote sensing has become essential for water quality assessment across inland and coastal environments, with rapid improvements in recent years. Significant advances have been made in detecting optically active parameters (such as chlorophyll-a, suspended matter, and turbidity), showing consistently strong performance across multiple studies. Specifically, the median validation performance (R2) derived from the quantitative synthesis indicates R2 = 0.82 for chlorophyll-a (interquartile range—IQR: 0.75–0.90), R2 = 0.80 for total suspended matter (IQR: 0.78–0.85), and R2 = 0.88 for turbidity (IQR: 0.85–0.90). Conversely, the retrieval of optically inactive parameters (such as nutrients like total phosphorus and total nitrogen) remains more context dependent. It exhibits moderate, more variable results, with median R2 = 0.68 (IQR: 0.64–0.74) for total phosphorus and R2 = 0.75 (IQR: 0.70–0.80) for total nitrogen. These findings clearly illustrate the varying success of retrievals of optically active and inactive parameters and underscore the inherent difficulties of indirect estimation methods. However, high reported accuracy has yet to translate into transferable, uncertainty-informed, and operational monitoring systems. This gap stems from structural issues in validation design, physics integration, uncertainty management, and multi-sensor compatibility rather than data limitations alone. We present a PRISMA-guided, distribution-aware quantitative synthesis of 152 peer-reviewed studies (1980–2025), based on a systematic search protocol, to evaluate satellite-based retrievals of both optically active and inactive parameters. Instead of simply averaging performance, we analyse the empirical distributions of validation metrics, considering the validation protocol, sensor type, parameter category, degree of physics integration, and uncertainty quantification. The synthesis demonstrates that validation strategy often influences reported results more than the algorithm class itself, with accuracy inflated under non-independent cross-validation methods and notable variability between studies concealed by mean-based reports. Across four decades, four persistent structural challenges remain: limited transferability across sites and sensors beyond calibration areas; weak or implicit physical integration in many data-driven models; lack of or inconsistency in uncertainty quantification; and fragmented multi-sensor harmonisation that restricts operational scalability. To address these issues, we introduce two evidence-based coding frameworks: a physics-integration taxonomy (P0–P4) and an uncertainty-quantification hierarchy (U0–U4). Applying these frameworks shows that most studies remain focused on low-to-moderate levels of physics integration and primarily consider uncertainty at the prediction stage, with limited attention to upstream sources throughout the observation and inference process. Building on this structured synthesis, we propose a transferable, physics-informed, and uncertainty-aware conceptual framework that links model architecture, validation robustness, and probabilistic uncertainty to well-founded design principles. By shifting satellite water quality modelling from isolated algorithm demonstrations towards integrated, evidence-based system design, this study promotes scalable, decision-grade environmental monitoring amid the accelerating impacts of climate change. Full article
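The distribution-aware reporting the review argues for (medians and interquartile ranges rather than means) reduces to a small aggregation step; the sketch below uses made-up R2 values purely for illustration.

```python
# Minimal sketch: report median and interquartile range of validation R2 per
# parameter instead of a mean, as the distribution-aware synthesis advocates.
# The values below are made up for illustration.
import pandas as pd

r2 = pd.DataFrame({
    "parameter": ["chl-a"] * 4 + ["turbidity"] * 4 + ["total_phosphorus"] * 4,
    "R2": [0.75, 0.82, 0.90, 0.84, 0.85, 0.88, 0.90, 0.87, 0.64, 0.68, 0.74, 0.70],
})

summary = r2.groupby("parameter")["R2"].agg(
    median="median",
    q25=lambda s: s.quantile(0.25),
    q75=lambda s: s.quantile(0.75),
)
print(summary)
```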
25 pages, 6996 KB  
Article
Uncertainty and Sensitivity Analysis of Input Parameters in the CANDLE Module: A Morris–Sobol–LHS–Iman–Conover Framework
by Fenghui Yang, Wanhong Wang, Rubing Ma and Xiaoming Yang
J. Nucl. Eng. 2026, 7(2), 27; https://doi.org/10.3390/jne7020027 - 6 Apr 2026
Abstract
In this study, an uncertainty quantification (UQ) and sensitivity analysis (SA) workflow was developed for the input parameters of the CANDLE module, which is currently being tested and verified for calculating the downward relocation and solidification of molten core material. The workflow consists of three steps: (i) Morris screening to reduce the input set, (ii) Sobol variance decomposition on the screened subset to compute Sobol sensitivity indices, and (iii) uncertainty propagation using a 2 × 2 design that combines two sampling schemes (MC and LHS) with two dependence settings (independent and correlated inputs). The four cases considered were independent MC, correlated MC, independent LHS, and correlated LHS–Iman–Conover (LHS-IC). We considered 16 input parameters and three output figures of merit (FOMs) and compared the four cases in terms of propagated uncertainty and Shapley-based importance rankings, thereby distinguishing the effects of the sampling scheme, the imposed input dependence, and their interaction. The results show that the molten mass of the current material in the source node is the dominant factor governing the drained melt mass and the remaining melt mass in the receiving node, whereas the cold-wall surface temperature has a significant effect on the mass of molten material that solidifies in the receiving node. The mass of molten material that remains available in the receiving node is mainly governed by the coupled effects of the molten mass of the current material at the source node, the length of the receiving node, and the velocity limit. Under the non-uniform input-parameter distributions adopted in this study, LHS broadened the range of the outputs. After input correlations were introduced, the output distributions changed slightly. This study improves the understanding of input parameter sensitivities and uncertainty propagation in the CANDLE module. It also demonstrates the practical use of LHS-IC for module-level UQ/SA with correlated inputs, providing guidance for subsequent model improvements and parameter tuning. Full article
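The Sobol step of such a workflow can be sketched with SALib on a toy three-parameter function; the parameter names and bounds below are illustrative, and the Morris screening and Iman–Conover correlation-induction steps are not reproduced.

```python
# Minimal sketch: Sobol variance decomposition on a toy 3-parameter model with
# SALib (independent, uniform inputs). Parameter names and bounds are
# illustrative; Morris screening and Iman-Conover correlation are omitted.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["melt_mass", "wall_temp", "node_length"],
    "bounds": [[0.5, 2.0], [300.0, 600.0], [0.1, 0.5]],
}

X = saltelli.sample(problem, 1024)                      # (N*(2D+2), 3) sample matrix
Y = X[:, 0] ** 2 + 0.1 * X[:, 1] + X[:, 0] * X[:, 2]    # stand-in output figure of merit

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order indices
```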

22 pages, 4256 KB  
Systematic Review
Modeling the Resilience of Multimodal Freight Networks Under Disruptions: A Systematic Review
by Tariq Lamei, Ahmed Elsayed, Ahmed Ibrahim and Ahmed Abdel-Rahim
Infrastructures 2026, 11(4), 130; https://doi.org/10.3390/infrastructures11040130 - 6 Apr 2026
Viewed by 85
Abstract
Multimodal freight transportation networks are increasingly exposed to natural and human-made disruptions, yet prior research remains fragmented in how disruptions are represented, which modeling techniques are applied, and how results are validated, limiting comparability and actionable guidance for resilient planning. This study presents a PRISMA-guided systematic review of disruption modeling in multimodal freight networks. A total of 21 studies were identified and coded to address three research questions concerning (RQ1) which analytical and computational modeling techniques are applied; (RQ2) to what extent models represent cross-modal interdependencies, cascading failures, and recovery processes; and (RQ3) what validation, calibration, and empirical testing strategies are employed. The review shows that optimization-based approaches and hybrid frameworks dominate the literature, complemented by fewer network science and data-driven methods. Most studies model disruptions as node/link failures and/or capacity degradation using static single-event scenarios, and explicit representations of cascading effects, operational delay propagation, and time-evolving recovery trajectories remain relatively rare. While many studies rely on real network data, formal calibration and historical backtesting against observed disruption events are uncommon, and validation is primarily case study-based. These findings highlight the need for more dynamic resilience modeling, stronger uncertainty quantification, standardized reporting of performance and resilience metrics, and greater use of empirically grounded validation to improve the generalizability and decision relevance of multimodal freight resilience models. Full article
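The node/link-failure representation that dominates the reviewed literature can be illustrated in a few lines with networkx: remove one link of a toy two-mode network and compare origin–destination travel times before and after (the network and weights are invented for this example).

```python
# Minimal sketch: represent a disruption as a link failure in a small
# multimodal network and measure the change in shortest origin-destination
# travel time. The network and weights are illustrative only.
import networkx as nx

G = nx.DiGraph()
G.add_edge("port", "rail_hub", weight=4, mode="rail")
G.add_edge("rail_hub", "city", weight=6, mode="rail")
G.add_edge("port", "highway_jct", weight=7, mode="road")
G.add_edge("highway_jct", "city", weight=8, mode="road")

baseline = nx.shortest_path_length(G, "port", "city", weight="weight")

G.remove_edge("rail_hub", "city")                  # disruption: rail link fails
disrupted = nx.shortest_path_length(G, "port", "city", weight="weight")

print(f"travel time: {baseline} -> {disrupted} (+{disrupted - baseline})")
```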

23 pages, 3709 KB  
Article
A Metric-Driven Evaluation Framework for Remaining Useful Life Prognosis with Quantified Uncertainty
by Govind Vashishtha, Sumika Chauhan and Merve Ertarğın
Sensors 2026, 26(7), 2230; https://doi.org/10.3390/s26072230 - 3 Apr 2026
Viewed by 187
Abstract
This paper introduces a novel metric-driven evaluation framework for Remaining Useful Life (RUL) prognosis in rotating machinery, featuring robust uncertainty quantification. Accurate RUL prediction is vital for optimizing maintenance and preventing failures, but existing methods often struggle with complex nonlinear degradation or lack reliable uncertainty estimates. Our proposed framework integrates a probabilistic Deep State Space Model (DSSM) with a variational inference approach to model complex, non-linear degradation trends and inherent aleatoric uncertainty. A key innovation is the use of the Slime Mold Algorithm (SMA) for efficient hyperparameter optimization, ensuring maximum accuracy. Furthermore, an online adaptation mechanism, governed by a heuristic reinforcement learning agent, allows the model to continuously update its knowledge and adapt to concept drift in real-time. Experimental validation on the IMS bearing dataset demonstrates superior RUL prediction accuracy, evidenced by the lowest Root Mean Square Error (RMSE) of 8.1829 cycles, and a PICP of 0.59416. This dual capability makes the framework highly suitable for real-world predictive maintenance, enhancing safety and reliability. Full article
(This article belongs to the Special Issue Sensor-Based Fault Diagnosis and Prognosis)
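The two headline metrics above, RMSE of the point RUL predictions and PICP of the accompanying intervals, can be computed as in the following sketch on made-up arrays.

```python
# Minimal sketch: RMSE of point RUL predictions and prediction-interval
# coverage probability (PICP) of the accompanying intervals, on made-up data.
import numpy as np

rul_true = np.array([120.0, 95.0, 60.0, 30.0, 10.0])
rul_pred = np.array([112.0, 101.0, 55.0, 35.0, 14.0])
lower = rul_pred - 8.0          # illustrative interval bounds
upper = rul_pred + 8.0

rmse = np.sqrt(np.mean((rul_true - rul_pred) ** 2))
picp = np.mean((rul_true >= lower) & (rul_true <= upper))

print(f"RMSE = {rmse:.2f} cycles, PICP = {picp:.2f}")
```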

27 pages, 439 KB  
Article
Bayesian Versus Frequentist Inference in Structural Equation Modeling: Finite-Sample Properties and Economic Applications
by Bojan Baškot, Andrej Ševa, Vesna Lešević and Bogdan Ubiparipović
Mathematics 2026, 14(7), 1198; https://doi.org/10.3390/math14071198 - 3 Apr 2026
Viewed by 186
Abstract
Structural Equation Modeling (SEM) is a key framework for analyzing complex economic relationships involving latent variables, mediation effects, and endogeneity, yet the choice between frequentist and Bayesian estimation remains theoretically and practically contested, especially in settings with non-stationary data and small samples. This study provides a formal comparison of the two approaches by formulating SEM as a probabilistic graphical model and deriving the corresponding estimation procedures, identifiability conditions, and uncertainty measures. We examine asymptotic properties of frequentist estimators and posterior consistency in Bayesian SEM, with particular attention to integrated time-series SEM applications such as shadow economy estimation. The analysis shows that while both approaches converge under large-sample conditions, important differences arise in finite samples. Bayesian methods exhibit more stable point estimates through coherent uncertainty quantification, particularly when prior information regularizes an otherwise ill-conditioned likelihood. Under model misspecification, Bayesian posteriors concentrate around the pseudo-true parameter defined by the Kullback-Leibler projection, providing a probabilistic representation of misspecification uncertainty through posterior spread—an advantage over frequentist inference, which typically conditions on the maintained model as exact. These findings carry direct implications for empirical economic modeling under realistic data constraints. In settings where sample sizes are small, identification is weak, and model uncertainty is substantial, conditions that routinely characterize macroeconomic research, the choice of inferential framework is not a matter of philosophical preference but a determinant of whether policy-relevant conclusions can be credibly defended. Bayesian SEM offers a principled and transparent path forward in precisely these conditions. Full article
23 pages, 399 KB  
Article
Integrating Model Explainability and Uncertainty Quantification for Trustworthy Fraud Detection
by Tebogo Forster Mapaila and Makhamisa Senekane
Technologies 2026, 14(4), 212; https://doi.org/10.3390/technologies14040212 - 3 Apr 2026
Viewed by 207
Abstract
Financial fraud and money laundering continue to challenge financial stability and regulatory oversight, motivating the widespread adoption of machine learning models for transaction monitoring. Although ensemble models such as Random Forest and XGBoost achieve strong predictive performance, their deployment in high-stakes financial environments is constrained by limited interpretability, overconfident predictions, and the absence of principled mechanisms for expressing decision uncertainty. Emerging regulatory expectations increasingly emphasise transparency, accountability, and operational reliability, underscoring the need for evaluation frameworks that extend beyond predictive accuracy. This study proposes the Integrated Transparency and Confidence Framework (ITCF), a deployment-oriented approach that unifies model explainability, statistically valid uncertainty quantification, and operational decision support for fraud detection. ITCF combines instance-level explanations generated via Local Interpretable Model-Agnostic Explanations (LIME) with distribution-free uncertainty estimation using split conformal prediction. The framework incorporates selective explainability, abstention-based routing, and uncertainty-driven triage to support human-in-the-loop review. Using the PaySim dataset of 6,362,620 mobile-money transactions, Random Forest and XGBoost models are evaluated under extreme class imbalance using F1-score, AUC–ROC, and Matthews Correlation Coefficient (MCC). At a target coverage level of 90% (α=0.1), both models achieve empirical coverage close to the target level, with XGBoost producing smaller prediction sets and superior recall, MCC, and latency. ITCF provides transaction-level explanations for uncertain cases and specifies an auditable workflow that is intended to support transparency, traceability, and risk-aware human review, thereby enabling defensible human decision-making in regulated environments. Overall, this study illustrates how explainability and uncertainty quantification can be combined in a deployment-oriented evaluation workflow while noting that real-world validation remains a future endeavour. Full article
(This article belongs to the Special Issue Privacy-Preserving and Trustworthy AI for Industrial 4.0 and Beyond)
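Split conformal prediction, the uncertainty component of ITCF, follows a standard recipe: hold out a calibration set, compute nonconformity scores, threshold at the adjusted (1 − α) quantile, and emit prediction sets. The sketch below applies that generic recipe to a synthetic imbalanced dataset; it is not the ITCF implementation.

```python
# Minimal sketch: split conformal prediction for a binary classifier.
# Calibrate a nonconformity threshold on held-out data at alpha = 0.1 and
# emit prediction sets whose empirical coverage is ~90%. Generic recipe on
# synthetic data, not the ITCF implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

alpha = 0.1
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]   # nonconformity
n = len(cal_scores)
qhat = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

test_probs = clf.predict_proba(X_test)
pred_sets = test_probs >= 1.0 - qhat            # boolean (n_test, n_classes) prediction sets
coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
print(f"empirical coverage = {coverage:.3f}, avg set size = {pred_sets.sum(1).mean():.2f}")
```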

27 pages, 8750 KB  
Article
Uncertainty-Aware Prediction of Unconfined Compressive Strength and Fracture Anisotropy in Deep Shales: A Leakage-Free Physics-Constrained Machine Learning Framework
by Yicheng Song and Xinpu Shen
Appl. Sci. 2026, 16(7), 3471; https://doi.org/10.3390/app16073471 - 2 Apr 2026
Viewed by 156
Abstract
The continuous prediction and uncertainty quantification of unconfined compressive strength (UCS) and the fracture-related index of anisotropy (FRIA) are essential for optimizing drilling operations and hydraulic fracturing design in shale gas development. However, machine-learning-based log inversion often suffers from (1) spatial information leakage caused by autocorrelation in well logs, (2) implicit target contamination during multi-source data fusion, and (3) biased evaluation under random data splitting, which can overestimate apparent performance and underestimate extrapolation risk in deep heterogeneous intervals. To address these limitations, we propose a leakage-free, physics-constrained framework for predicting UCS and FRIA in the Weiyuan shale gas reservoir. Using 18,440 quality-controlled, depth-aligned samples, we adopt a contiguous depth-based split that preserves stratigraphic continuity while isolating training, validation, and test intervals to block spatial leakage. Under a strict leakage-free protocol, we evaluate single-task ensemble trees (STL-RF/HGB), a multi-task neural network (MTL-MLP), and a physics-informed variant (PINN-MLP) for deep-interval stabilization. The best model is target-dependent: STL-RF achieves R2 = 0.984 for FRIA, whereas MTL-MLP attains R2 = 0.874 for UCS. For deep formations (>4800 m), PINN-MLP with a depth-continuity constraint reduces deep-interval prediction error by 47.5%. Multi-seed experiments with 95% Student’s t confidence intervals further confirm robustness. Overall, the framework provides a reproducible workflow for continuous geomechanical-parameter prediction and risk-aware deployment in deep unconventional reservoirs. Full article
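The contiguous depth-based split used to block spatial leakage amounts to sorting by measured depth and cutting the record into contiguous intervals rather than shuffling; the sketch below assumes a hypothetical well_logs.csv with a depth_m column.

```python
# Minimal sketch: contiguous depth-based split to block spatial leakage in
# well-log modeling. Sort samples by measured depth and cut the record into
# contiguous train / validation / test intervals rather than shuffling.
# The file and column names are illustrative placeholders.
import pandas as pd

logs = pd.read_csv("well_logs.csv").sort_values("depth_m").reset_index(drop=True)

n = len(logs)
train = logs.iloc[: int(0.7 * n)]                 # shallowest, contiguous interval
valid = logs.iloc[int(0.7 * n): int(0.85 * n)]
test = logs.iloc[int(0.85 * n):]                  # deepest interval: true extrapolation test

assert train["depth_m"].max() <= valid["depth_m"].min() <= test["depth_m"].min()
print(len(train), len(valid), len(test))
```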

31 pages, 1921 KB  
Article
Wind Turbine Gearbox Oil Temperature Forecasting Using Stochastic Differential Equations and Multi-Objective Grey Modeling
by Bo Wang and Yizhong Wu
Machines 2026, 14(4), 386; https://doi.org/10.3390/machines14040386 - 1 Apr 2026
Viewed by 142
Abstract
This study develops and evaluates three complementary predictive modeling frameworks for gearbox oil temperature forecasting: Stochastic Differential Equation (SDE) modeling with iterative Markov correction, multi-objective genetic algorithm-enhanced grey modeling (MOGA-GM(1,N)), and multi-output Gaussian Process Regression (MO-GPR). The study used supervisory control and data acquisition (SCADA) data from a 1.5 MW wind turbine gearbox, comprising 14 temperature measurements spanning 789 operational hours. The SDE framework partitions temperature evolution into deterministic aging effects and stochastic environmental perturbations, achieving a fitting accuracy of 2.5% and testing accuracy of 8.0% after thirty iterative corrections. The MOGA-GM(1,N) approach optimizes weight coefficients through the dual objective of minimizing the posterior difference ratio and maximizing small error probability, attaining first-class accuracy classification (C=0.06; P=0.99) while identifying mechanical loads and rotational speeds as dominant thermal drivers. MO-GPR demonstrates competitive performance with uncertainty quantification capabilities, achieving RMSE values of 2.51–7.48 depending on training SCADA data proportions. Comparative analysis shows that the iteratively refined SDE method achieves the best prediction accuracy in this case study for continuous thermal trajectory forecasting, while MOGA-GM(1,N) excels at wear source diagnostics and operational factor analysis. The proposed framework addresses persistent challenges in wind turbine condition monitoring, including extreme nonlinearity, discontinuous data, and unpredictable thermal spikes. The results suggest potential for implementation in preventive maintenance systems, enabling timely intervention before critical thermal thresholds that precipitate component failure. Full article
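The general SDE idea, a deterministic drift plus a stochastic perturbation term, can be sketched with a plain Euler–Maruyama loop; the coefficients below are invented, and the paper's drift specification and iterative Markov correction are not reproduced.

```python
# Minimal sketch: Euler-Maruyama simulation of an SDE that splits temperature
# evolution into a deterministic drift (relaxation toward a load-dependent
# level) and a stochastic perturbation term. Coefficients are made up; the
# paper's drift model and iterative Markov correction are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                # hours
n_steps = 2000
theta, T_eq, sigma = 0.05, 55.0, 0.8    # illustrative relaxation rate, level, noise scale

T = np.empty(n_steps)
T[0] = 40.0
for k in range(1, n_steps):
    drift = theta * (T_eq - T[k - 1])                   # deterministic part
    diffusion = sigma * np.sqrt(dt) * rng.normal()      # stochastic part
    T[k] = T[k - 1] + drift * dt + diffusion

print(f"final simulated oil temperature ~ {T[-1]:.1f} degC")
```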

39 pages, 23703 KB  
Article
A Unified Framework for Uncertainty Quantification and Sensitivity Analysis of Shaped Charge Jet Penetration in Oil Shale
by Yancheng Li, Huifeng Zhang, Li Li, Lusheng Yang, Zhenghe Liu and Haojie Lian
Processes 2026, 14(7), 1127; https://doi.org/10.3390/pr14071127 - 31 Mar 2026
Viewed by 221
Abstract
Shaped charges are widely used in petroleum drilling, yet the inherent parametric uncertainty of oil shale significantly affects perforation outcomes. The complex coupling of oil shale constitutive parameters under extreme strains poses challenges for uncertainty quantification. A coupled algorithm integrating an improved material point method (MPM) with polynomial chaos expansion (PCE) is presented and used to systematically analyze the uncertainty and sensitivity of shaped charge jet penetration depth. Mechanical parameters from oil shale samples at Checun Coal Mine well No. 1 were tested to define key parameter ranges and establish a reliable uncertainty space. A benchmark simulation of a single isolated shaped charge jet validated the algorithm, and Sobol’ global sensitivity analysis identified internal friction angle, density, and Poisson’s ratio as strongly sensitive parameters, while tensile strength, Young’s modulus, and cohesion showed weak sensitivity, supporting surrogate model dimensionality reduction. Composite detonation models of three and five charges further revealed the effects of multi-projectile blast wave coupling on jet dynamics, providing new theoretical insights into cluster effects under high-energy, high-pressure, and extreme-strain conditions. Sensitivity and uncertainty analyses based on surrogate models emphasized the critical influence of internal friction angle alongside Poisson’s ratio and density. A reliable numerical framework is established for multi-physics coupled simulations of geomechanical responses under complex multi-source explosive loading. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
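The PCE-surrogate route to Sobol' indices can be sketched with the third-party chaospy package (assumed available); an analytic toy function stands in for the MPM penetration simulation, and the parameter names and ranges are illustrative only.

```python
# Minimal sketch: polynomial chaos expansion surrogate plus Sobol' indices with
# the chaospy package (assumed available). An analytic toy function stands in
# for the MPM penetration-depth simulation; parameter names are illustrative.
import numpy as np
import chaospy as cp

dist = cp.J(cp.Uniform(20, 40),      # internal friction angle, deg
            cp.Uniform(2000, 2600),  # density, kg/m^3
            cp.Uniform(0.2, 0.35))   # Poisson's ratio

samples = dist.sample(300, rule="sobol")                 # shape (3, 300)

def toy_depth(phi, rho, nu):                             # stand-in for the MPM model
    return 0.5 * phi + 0.001 * rho - 20.0 * nu

evals = toy_depth(*samples)

expansion = cp.generate_expansion(3, dist)               # 3rd-order PCE basis
surrogate = cp.fit_regression(expansion, samples, evals)

print("first-order Sobol indices:", np.round(cp.Sens_m(surrogate, dist), 3))
print("total Sobol indices:      ", np.round(cp.Sens_t(surrogate, dist), 3))
```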

26 pages, 5644 KB  
Article
Interpretable Performance Prediction for Wet Scrubbers Using Multi-Gene Genetic Programming: An Application-Oriented Study
by Linling Zhu, Ruhua Zhu, Jun Zhou, Huiqing Luo, Xiaochuan Li and Tao Wei
Mathematics 2026, 14(7), 1142; https://doi.org/10.3390/math14071142 - 29 Mar 2026
Viewed by 167
Abstract
The removal efficiency of wet scrubbers is governed by complex nonlinear interactions among operating parameters such as liquid level, airflow velocity, and dust concentration, making accurate real-time prediction challenging, which in turn leads to operational instability, increased energy consumption, and excessive emissions. To address this bottleneck, we first introduce multi-gene genetic programming (MGGP) to develop interpretable models quantifying multi-parameter coupling and predicting removal efficiency for PM1, PM2.5, PM10, and TSP. Key input variables, including liquid level height, inlet airflow velocity, system pressure, and inlet dust concentration, were identified via correlation analysis. Explicit mathematical models were derived. Global sensitivity analysis using the elementary effect test (EET) identified inlet airflow velocity as most influential. Uncertainty quantification via quantile regression (QR) confirmed the model’s reliability with narrow prediction intervals and high coverage probabilities. MGGP offers a favorable balance of accuracy, generalization, and interpretability compared to extreme gradient boosting (XGBoost) and multiple nonlinear regression (MNR). Its explicit form quantifies parameter interactions, enabling efficient on-site monitoring with low computational cost. This study provides an interpretable prediction tool for intelligent wet scrubber operation, supporting cleaner production and refined control in complex industrial processes. Full article
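The quantile-regression interval idea used here for uncertainty quantification can be sketched with scikit-learn's quantile-loss gradient boosting on synthetic data; MGGP itself requires a genetic-programming toolkit and is not reproduced.

```python
# Minimal sketch: prediction intervals via quantile regression, as used in the
# paper for uncertainty quantification. Scikit-learn's quantile-loss gradient
# boosting on synthetic data stands in for the MGGP model itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 8.0, size=(400, 1))                   # e.g., inlet airflow velocity, m/s
y = 90 + 2 * X[:, 0] + rng.normal(scale=1.5, size=400)     # synthetic removal efficiency, %

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
median = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)

x_new = np.array([[5.0]])
print(f"predicted efficiency ~ {median.predict(x_new)[0]:.1f}% "
      f"(90% interval {lower.predict(x_new)[0]:.1f} to {upper.predict(x_new)[0]:.1f}%)")
```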

33 pages, 3590 KB  
Systematic Review
Diffusion-Based Approaches for Medical Image Segmentation: An In-Depth Review
by Muhammad Yaseen, Maisam Ali, Sikandar Ali and Hee-Cheol Kim
Electronics 2026, 15(7), 1400; https://doi.org/10.3390/electronics15071400 - 27 Mar 2026
Viewed by 421
Abstract
Medical image segmentation represents a fundamental task in medical image analysis, serving as a critical component for accurate diagnosis, treatment planning, and disease monitoring. The emergence of Denoising Diffusion Probabilistic Models (DDPMs) has revolutionized the landscape of generative modeling and recently gained significant attention in medical image analysis. This comprehensive review examines the current state of the art in diffusion models for medical image segmentation, covering theoretical foundations, methodological innovations, computational efficiency strategies, and clinical applications. We analyze recent advances in latent diffusion frameworks, transformer-based architectures, and ambiguous segmentation modeling while addressing the practical challenges of implementing these models in clinical environments. The review encompasses applications across multiple medical imaging modalities including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, and X-ray imaging, providing insights into performance achievements and identifying future research directions. Through systematic analysis of publications mostly from 2019 to 2025, we demonstrate that diffusion models have achieved remarkable progress in addressing fundamental challenges including data scarcity, inter-observer variability, and uncertainty quantification. Notable achievements include inference time being reduced from 91.23 s to 0.34 s for echocardiogram segmentation (LDSeg, Echo dataset), DSC scores up to 0.96 for knee cartilage MRI segmentation, and a +13.87% DSC improvement over baseline methods for breast ultrasound segmentation. This review serves as a comprehensive resource for researchers and clinicians interested in leveraging diffusion models for medical image segmentation, providing a roadmap for future research and clinical translation. Full article
(This article belongs to the Special Issue Advanced Techniques in Real-Time Image Processing)
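The forward (noising) process underlying the surveyed DDPMs, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, is easy to sketch in numpy on a toy mask; the learned reverse process and segmentation-specific conditioning are beyond a few lines.

```python
# Minimal sketch: the DDPM forward (noising) process underlying the surveyed
# models, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, applied
# to a toy binary "mask". The learned reverse (denoising) process and the
# segmentation-specific conditioning are not shown.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.zeros((64, 64))
x0[20:44, 20:44] = 1.0                      # toy binary segmentation mask

t = 400                                     # pick an intermediate timestep
eps = rng.normal(size=x0.shape)
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

print(f"signal fraction kept at t={t}: {np.sqrt(alpha_bar[t]):.3f}")
```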
