Search Results (473)

Search Parameters:
Keywords = uncertainty analysis and quantification

27 pages, 2137 KB  
Article
Multiregional Forecasting of Traffic Accidents Using Prophet Models with Statistical Residual Validation
by Jaime Sayago-Heredia, Tatiana Elizabeth Landivar, Roberto Vásconez and Wilson Chango-Sailema
Computation 2026, 14(4), 78; https://doi.org/10.3390/computation14040078 - 26 Mar 2026
Abstract
This study develops a multiregional forecasting framework for road traffic accidents in Ecuador, addressing a critical limitation in existing predictive approaches that rely predominantly on point error metrics without validating the statistical assumptions underlying forecast uncertainty. Although the analysis is conducted at the provincial level, the spatial dimension is used primarily for cross-regional comparison and risk classification rather than for explicit spatial interaction modeling. Using a dataset of 27,648 monthly observations covering all 24 provinces from 2014 to 2025, the study applies the Prophet model within a Design Science Research paradigm and a CRISP-DM implementation cycle. Separate provincial models are estimated with a 24-month forecasting horizon, and methodological rigor is ensured through systematic residual diagnostics using the Shapiro–Wilk test for normality and the Ljung–Box test for temporal independence. Empirical results indicate that the Prophet-based artifact outperforms a naïve seasonal benchmark in 70.8% of the provinces, demonstrating excellent predictive accuracy in structurally stable regions such as Tungurahua (MAPE = 10.9%). At the same time, the framework enables the identification of critical emerging risks in provinces such as Santo Domingo and Cotopaxi, where projected increases exceed 49% despite acceptable point forecasts. The findings confirm that point accuracy alone does not guarantee the validity of confidence intervals and that residual validation is essential for trustworthy uncertainty quantification. Overall, the proposed approach provides a robust foundation for a predictive surveillance system capable of supporting differentiated, evidence-based road safety policies in territorially heterogeneous contexts.
(This article belongs to the Section Computational Engineering)
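The residual-validation step this abstract emphasizes is easy to prototype. The sketch below is a minimal illustration, not the paper's code: synthetic monthly counts stand in for a province's accident series, and the 12-month Ljung–Box lag is an assumed choice.

```python
import numpy as np
import pandas as pd
from prophet import Prophet
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic stand-in for one province's monthly accident counts (2014-2025).
rng = np.random.default_rng(0)
ds = pd.date_range("2014-01-01", "2025-12-01", freq="MS")
y = 200 + 10 * np.sin(2 * np.pi * ds.month / 12) + rng.normal(0, 8, len(ds))
df = pd.DataFrame({"ds": ds, "y": y})

model = Prophet(yearly_seasonality=True)
model.fit(df)
forecast = model.predict(model.make_future_dataframe(periods=24, freq="MS"))

# Residual diagnostics on the fitted history, as in the abstract.
residuals = df["y"].to_numpy() - forecast["yhat"].to_numpy()[: len(df)]
_, p_normal = shapiro(residuals)  # H0: residuals are Gaussian
p_indep = acorr_ljungbox(residuals, lags=[12])["lb_pvalue"].iloc[0]  # H0: no autocorrelation
print(f"Shapiro-Wilk p={p_normal:.3f}, Ljung-Box p={p_indep:.3f}")
# Rejection of either test would cast doubt on the model's interval coverage.
```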

20 pages, 2881 KB  
Article
Structural Deformation Prediction and Uncertainty Quantification via Physics-Informed Data-Driven Learning
by Tong Zhang and Shiwei Qin
Appl. Sci. 2026, 16(7), 3194; https://doi.org/10.3390/app16073194 - 26 Mar 2026
Abstract
In structural health monitoring, purely data-driven methods for deformation prediction are often susceptible to time-varying boundary conditions under complex operating scenarios, leading to insufficient physical interpretability and limited generalization across different conditions. To address these challenges, this study proposes a Physics-Informed Dual-branch Long Short-Term Memory framework (PINN-DualSHM). The framework employs dual-branch LSTMs to separately extract temporal features of structural mechanical responses and environmental thermal effects. Dynamic decoupling and fusion of these heterogeneous features are achieved through an adaptive cross-attention mechanism. Furthermore, physical priors, including the thermodynamic superposition principle and structural settlement monotonicity, are embedded into the loss function as regularization terms, complemented by a dual uncertainty quantification system based on heteroscedastic regression and MC Dropout. Experimental results based on long-term measured data from an industrial base project in Shenzhen demonstrate that PINN-DualSHM significantly outperforms baseline models such as LSTM, CNN-LSTM, and GAT-LSTM. Specifically, the Root Mean Square Error (RMSE) is reduced by 65.25%, and the coefficient of determination (R²) reaches 0.925. Physical consistency analysis confirms that the introduction of physical constraints effectively suppresses anomalous predictive fluctuations that violate mechanical laws. Uncertainty decomposition reveals that aleatoric uncertainty is dominant (93.7%), objectively indicating that the current system’s accuracy bottleneck lies in sensor noise rather than model capability. By enhancing prediction accuracy while providing credible quantitative assessments and physical interpretability, the proposed method provides a scientific basis for the operation, maintenance optimization, and upgrading decisions of SHM systems.
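A minimal sketch of the dual uncertainty idea described here, assuming a generic feed-forward network rather than the paper's PINN-DualSHM: a heteroscedastic head predicts a per-sample noise variance (aleatoric), while dropout kept active at test time yields a spread across stochastic forward passes (epistemic). All sizes and the dropout rate are illustrative.

```python
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    def __init__(self, d_in=8, d_hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(0.1),
        )
        self.mu = nn.Linear(d_hidden, 1)       # predicted deformation
        self.log_var = nn.Linear(d_hidden, 1)  # predicted observation noise

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout stochastic at inference time
    mus, noise_vars = [], []
    for _ in range(n_samples):
        mu, log_var = model(x)
        mus.append(mu)
        noise_vars.append(log_var.exp())
    mus = torch.stack(mus)
    epistemic = mus.var(dim=0)                       # spread across dropout samples
    aleatoric = torch.stack(noise_vars).mean(dim=0)  # average predicted noise
    return mus.mean(dim=0), aleatoric, epistemic

model = HeteroscedasticNet()
mean, alea, epi = mc_dropout_predict(model, torch.randn(5, 8))
print(alea / (alea + epi))  # aleatoric share, cf. the paper's 93.7%
```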

27 pages, 3395 KB  
Article
Probabilistic Water Quality Monitoring Using Multi-Temporal Sentinel-2 Data: A Situational Awareness Framework for Harmful Algal Bloom Forecasting
by Muhammad Zaid Qamar, Cristiano Ciccarelli, Mohammed Ajaoud and Massimiliano Lega
Remote Sens. 2026, 18(6), 959; https://doi.org/10.3390/rs18060959 - 23 Mar 2026
Viewed by 137
Abstract
Environmental monitoring systems require robust uncertainty quantification for effective decision-making in complex ecological processes. Harmful algal blooms represent a critical challenge where prediction uncertainty directly impacts resource allocation and response timing, yet current remote sensing-based prediction systems provide only deterministic classifications without confidence measures. This gap between algorithmic predictions and actionable risk assessment limits operational utility for stakeholders managing water quality under varying risk tolerances. This study developed a transferable probabilistic forecasting framework integrating Sentinel-2 multispectral imagery with quantile regression and ensemble machine learning to generate continuous confidence indicators for cyanobacteria density prediction, demonstrated through its application to Lake Okeechobee, Florida. The methodology combines spectral indices extracted from Sentinel-2 data with XGBoost for quantile regression at the 0.05, 0.50, and 0.95 probability levels, and LightGBM for multi-horizon temporal forecasting. Sentinel-2’s 13 spectral bands, spanning visible to shortwave infrared wavelengths, combined with its 5-day revisit frequency, provide a spectrally rich and temporally dense input space that is well-suited to gradient boosting methods such as XGBoost, which can exploit complex nonlinear interactions among spectral features to distinguish cyanobacterial signatures from background water constituents. LightGBM achieved mean absolute percentage errors of 2.9% for 10-day forecasts and 5.7% for 20-day forecasts, outperforming conventional regression models. The framework generates 90% prediction intervals that enable reliable risk classifications for operational bloom management. This approach bridges the gap between satellite-based algal bloom detection and actionable decision-making by quantifying predictive uncertainty, representing a shift from binary classifications to probability-based environmental monitoring systems that accommodate varying stakeholder risk tolerances in water quality management applications.
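The interval construction described here can be sketched with one quantile model per probability level. scikit-learn's quantile gradient boosting is used below as a stand-in for the paper's XGBoost quantile objective; the features and target are synthetic placeholders for Sentinel-2 spectral indices and cyanobacteria density.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # stand-in spectral indices
y = X[:, 0] * 2.0 + rng.gumbel(size=500)  # stand-in cyanobacteria density

# One model per quantile level, as in the abstract (0.05, 0.50, 0.95).
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.05, 0.50, 0.95)
}
lo, med, hi = (models[q].predict(X[:5]) for q in (0.05, 0.50, 0.95))
print(np.c_[lo, med, hi])  # 90% interval = [q05, q95] around the median
```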

21 pages, 3822 KB  
Article
Uncertainty-Aware Framework for CT Radiation Dose Optimization in the Active Surveillance of Small Renal Masses: Clinical and Radiological Considerations
by M. A. Elsabagh, Amira Samy Talaat, Dalia Elwi, Shaimaa M. Hassan, Sameer Alqassimi and Esraa Hassan
Diagnostics 2026, 16(6), 943; https://doi.org/10.3390/diagnostics16060943 - 23 Mar 2026
Viewed by 120
Abstract
Background: Active surveillance of small renal masses is challenged by cumulative radiation exposure from repeated CT imaging, raising long-term health concerns. Low-dose CT protocols offer a strategy to mitigate this risk but are limited by uncertainty regarding measurement accuracy and potential effects on clinical decision-making. Methods: We propose an uncertainty-aware analytical framework using a multi-observer dataset of 40 paired CT cases (low-dose vs. standard-dose). The methodology combines statistical agreement assessment (concordance correlation coefficient, intraclass correlation coefficient), multi-algorithm machine learning prediction (linear regression, random forest, gradient boosting, and SVR), and integrated uncertainty quantification to evaluate equivalence across imaging protocols. Results: Comparative analysis demonstrates near-perfect concordance between protocols (concordance correlation coefficient = 0.9930). Linear regression achieved the highest predictive performance (R² = 0.9933, MAE = 0.4239 mm, MAPE = 2.07%), outperforming more complex ensemble models, highlighting that interpretable models can achieve superior accuracy without compromising reliability. Conclusions: Clinically, the framework supports the safe adoption of low-dose CT for longitudinal tumor assessment, preserving measurement fidelity and diagnostic confidence essential for timely intervention or continued surveillance. Radiologically, it ensures robust lesion characterization across protocols while minimizing cumulative radiation exposure, particularly in younger patients. By integrating uncertainty quantification, this approach enhances transparency, informs clinical decision-making, and facilitates personalized, evidence-based surveillance strategies, promoting safer, dose-optimized imaging in the management of small renal masses.
(This article belongs to the Section Medical Imaging and Theranostics)
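Lin's concordance correlation coefficient, the agreement statistic reported above (CCC = 0.9930), is a short computation. The paired diameters below are invented for illustration.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var_x + var_y + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population (ddof=0) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired lesion diameters (mm) under the two protocols.
standard = np.array([21.4, 18.9, 25.1, 30.2, 17.6])
low_dose = np.array([21.1, 19.2, 24.8, 30.5, 17.3])
print(f"CCC = {concordance_ccc(standard, low_dose):.4f}")
```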

32 pages, 2678 KB  
Article
Enhanced Chronic Kidney Disease Detection: A Hybrid Deep Learning Framework Using Clinical Biomarkers and Ensemble Feature Engineering with DeepCKD-Net
by Mostafa Al Ghamdi and Saleh Alyahyan
Appl. Sci. 2026, 16(6), 3024; https://doi.org/10.3390/app16063024 - 20 Mar 2026
Viewed by 91
Abstract
Chronic Kidney Disease (CKD) affects over 850 million people globally, with early detection critical for effective intervention. We present DeepCKD-Net, a hybrid deep learning framework that synergistically integrates transformer architectures with gradient-boosting ensembles for multi-stage CKD prediction. Using a clinical dataset of 400 patients with 26 biomarker features from the UCI repository, our framework introduces three key innovations: (1) a hierarchical attention mechanism capturing complex inter-dependencies among clinical parameters, (2) an adaptive feature fusion module combining transformer-learned patterns with gradient-boosting decision boundaries, and (3) a confidence-aware ensemble strategy providing uncertainty quantification for clinical decision support. DeepCKD-Net achieves 98.7% accuracy and 0.993 AUC, surpassing state-of-the-art methods by 4.2% while maintaining 16.8 ms inference time suitable for real-time clinical deployment. Integrated SHAP analysis provides interpretable predictions, with serum creatinine (SHAP value: 0.342) and blood urea (0.287) identified as top predictive biomarkers, aligning with established clinical knowledge. The framework demonstrates robust performance under realistic clinical conditions, maintaining >90% accuracy with 20% missing data. Our contributions advance AI-driven nephrology diagnostics by providing a deployable, interpretable, and clinically validated solution for early CKD detection.
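The SHAP step described above can be sketched for any tree ensemble. The model, data, and biomarker names below are placeholders, not DeepCKD-Net or the UCI dataset.

```python
import numpy as np
import shap
import xgboost as xgb

# Synthetic stand-in for a clinical biomarker table with a binary CKD label.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)
features = ["serum_creatinine", "blood_urea", "hemoglobin", "albumin", "sodium"]

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per biomarker.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```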

22 pages, 13068 KB  
Article
A Block-Wise ICP Method for Retrieving 3D Landslide Displacement Vectors Based on Terrestrial Laser Scanning Point Clouds
by Zhao Xian, Jia-Wen Zhou, Zhi-Yu Li, Yuan-Mao Xu and Nan Jiang
Remote Sens. 2026, 18(6), 923; https://doi.org/10.3390/rs18060923 - 18 Mar 2026
Viewed by 127
Abstract
Terrestrial laser scanning (TLS) provides dense point clouds for landslide monitoring, yet occlusion, heterogeneous point density, and seasonal vegetation introduce noise and unstable deformation boundaries in multi-temporal change detection. To overcome the limitations of the multiscale model-to-model cloud comparison (M3C2) method under dominant downslope tangential motion and vegetation disturbance, we propose a block-wise ICP method to retrieve 3D displacement vectors. The scene is partitioned into local sub-blocks; rigid registration is performed within each sub-block, and the estimated translation is assigned to the sub-block center. A two-stage matching and quality control procedure removes under-constrained sub-blocks, enabling the direct retrieval of 3D displacement vectors and interpretable boundaries. Applied to the Longxigou landslide in Wenchuan using RIEGL VZ-2000i surveys on 1 November 2023 and 23 May 2024, the proposed method produces a more continuous displacement field and clearer boundaries than M3C2. For a tower target, manual measurements indicate a displacement of 0.41–0.63 m; our estimates are within 0.33–0.40 m, whereas M3C2 mostly falls between −0.25 and 0.25 m. In a seasonal vegetation change scene, we detect a canopy envelope expansion of approximately 0.20–0.40 m, while M3C2 shows scattered canopy responses that hinder boundary interpretation. A sensitivity analysis indicates a block-scale trade-off between boundary stability and peak preservation, motivating adaptive multi-scale blocking and uncertainty quantification.
(This article belongs to the Special Issue Advances in Remote Sensing Technology for Ground Deformation)
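The block-wise idea reduces to: partition epoch 1 into blocks, register each block to epoch 2, and keep the translation. A heavily simplified Open3D sketch, assuming both epochs arrive as N×3 NumPy arrays; the 5 m block size, 1 m correspondence distance, and point-count quality check are illustrative stand-ins for the paper's two-stage matching and quality control.

```python
import numpy as np
import open3d as o3d

def blockwise_icp(src_pts, tgt_pts, block=5.0, min_pts=100):
    """Per-block point-to-point ICP; return the translation of each block."""
    keys = np.floor(src_pts[:, :2] / block).astype(int)  # 2D blocks on the slope
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))
    vectors = {}
    for key in {tuple(k) for k in keys}:
        mask = np.all(keys == key, axis=1)
        if mask.sum() < min_pts:
            continue  # quality control: drop under-constrained blocks
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts[mask]))
        reg = o3d.pipelines.registration.registration_icp(
            src, tgt, max_correspondence_distance=1.0,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        vectors[key] = reg.transformation[:3, 3]  # 3D displacement at block center
    return vectors

# Synthetic two-epoch demo: a rigid 0.3 m downslope shift.
rng = np.random.default_rng(0)
epoch1 = rng.uniform(0, 20, size=(20000, 3))
epoch2 = epoch1 + np.array([0.3, 0.1, -0.05])
vecs = blockwise_icp(epoch1, epoch2)
print(len(vecs), "blocks;", next(iter(vecs.values())))
```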

25 pages, 7558 KB  
Review
A Bibliometric Study on Machine Learning-Based Quantification of Agricultural Soil Respiration and Implications for the Management of Agricultural Soil Carbon Sinks
by Tongde Chen, Lingling Wang, Xingshuai Mei, Jiarong Hou and Fengqiuli Zhang
Agriculture 2026, 16(6), 646; https://doi.org/10.3390/agriculture16060646 - 12 Mar 2026
Viewed by 251
Abstract
This study used bibliometric methods to systematically analyze the development trends, knowledge structure, and evolution path of the field of “quantitative research on agricultural soil respiration based on machine learning” from 2021 to 2025, and further explored its implications for agricultural soil carbon sinks. Based on 966 articles indexed in the Web of Science Core Collection, the paper uses tools such as Biblioshiny, CiteSpace, and VOSviewer to carry out a multi-dimensional analysis covering annual publication trends, international and institutional cooperation networks, keyword clustering, and keyword burst evolution. The field has evolved in phases over the past five years, from technology-driven work through mechanism deepening to application expansion. Early in the period, the introduction of machine learning methods and model verification formed the core, gradually expanding to multi-algorithm comparison, environmental factor coupling mechanisms, and multi-source data fusion. Recently, the field has focused on regional-scale simulation, uncertainty quantification, and model interpretability. Keyword clustering identifies three thematic clusters (machine learning algorithms and model optimization, environmental driving factors and process mechanisms, and remote sensing fusion and regional application), which together form a co-evolving “method–mechanism–application” knowledge system. The national cooperation network presents a pattern of “Asia-led, China–US dual-core, and European connectivity”: China dominates scientific research output, while the United States plays a key role in international cooperation. The field’s development provides important methodological support and a scientific basis for accurate assessment, intelligent management, and carbon-neutrality decision-making for agricultural soil carbon sinks. Based on these findings, future research should focus on developing mechanism- and data-fusion-based intelligent models, constructing multi-source data assimilation and uncertainty assessment frameworks, expanding case studies across diverse global agricultural systems, and promoting an open, shared international research-cooperation ecosystem. This study provides empirical evidence and directional guidance for academic development, research planning, carbon sink management, and international collaboration in this field.

22 pages, 4178 KB  
Article
Uncertainty Assessment of S-Parameters in Vector Network Analyzers Under De-Embedding Conditions
by Jiangmiao Zhu, Yifan Wang, Chaoxian Fu, Kaige Man and Kejia Zhao
Metrology 2026, 6(1), 20; https://doi.org/10.3390/metrology6010020 - 11 Mar 2026
Viewed by 190
Abstract
This study proposes a method to quantify uncertainty in scattering parameter (S-parameter) measurements when de-embedding techniques are used. After calibrating the measurement setup with reference standards, de-embedding algorithms are employed to extract the intrinsic S-parameters of the device under test (DUT). This process introduces additional complexity to the uncertainty analysis. This study investigates the sources of uncertainty inherent to vector network analyzer (VNA) measurements. Subsequently, a covariance matrix-based approach is employed to propagate these uncertainties, culminating in the quantification of S-parameter uncertainty. The effectiveness of the proposed method is assessed by comparing the measured S-parameters of power dividers and couplers to their nominal values, considering parameters such as balance, coupling, and voltage standing wave ratio (VSWR). Additionally, an uncertainty analysis is conducted for the power divider’s S-parameters, tracing the uncertainty sources back to the calibration standards.
(This article belongs to the Collection Measurement Uncertainty)
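Covariance-based propagation of the kind described here follows, to first order, Uy = J Ux J^T. The sketch below estimates the Jacobian J numerically; the scalar "de-embedding" function is a toy stand-in for the paper's S-parameter algebra.

```python
import numpy as np

def propagate(f, x, Ux, eps=1e-7):
    """First-order propagation: Uy = J Ux J^T, J by finite differences."""
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - y0) / eps
    return y0, J @ Ux @ J.T

# Toy "de-embedding": recover |S21_dut| from a cascaded measurement.
f = lambda x: np.array([x[0] / (x[1] * x[2])])  # x = [|S21_meas|, |S21_fixA|, |S21_fixB|]
x = np.array([0.45, 0.95, 0.97])
Ux = np.diag([1e-4, 4e-6, 4e-6])                # assumed input variances
y, Uy = propagate(f, x, Ux)
print(f"|S21_dut| = {y[0]:.4f} +/- {np.sqrt(Uy[0, 0]):.4f} (k=1)")
```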

20 pages, 1386 KB  
Article
A New Functional Setting for Term Structure Modeling Using the Heath–Jarrow–Morton Framework
by Michael Pokojovy, Ebenezer Nkum and Thomas M. Fullerton
Econometrics 2026, 14(1), 14; https://doi.org/10.3390/econometrics14010014 - 11 Mar 2026
Viewed by 197
Abstract
The well-known Heath–Jarrow–Morton (HJM) framework provides a universal and efficacious instrument for modeling the stochastic evolution of an entire yield curve by explaining the interest rate dynamics in continuous time under no-arbitrage conditions. Existing implementations involve exponentially weighted function spaces as the theoretical setting for this stochastic evolution. While the choice of weight can have a drastic effect on model calibration and subsequent forecasting, it cannot be estimated from market data and does not allow for any objective interpretation. The proposed approach does not have this shortcoming, as it adopts a suitably designed unweighted function space. The HJM equation is discretized using a finite difference approach. The resulting semiparametric model is then calibrated on real-world yield data with a new type of functional principal component analysis (PCA)-based approach. Backtesting and benchmarking are conducted against the one-factor Vasicek model using historical data to illustrate the framework’s simulation capabilities for prediction and uncertainty quantification. Additionally, in contrast to widely studied US treasuries, negative interest rates are observed for AAA Euro Bonds during the sample period employed for this study. Accordingly, the framework allows for the possibility of negative yields.
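The PCA-based calibration idea can be illustrated on discretized forward curves. Everything below is synthetic and schematic; the paper's functional setting and finite-difference HJM discretization are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
maturities = np.linspace(0.25, 30, 120)
days = 500
# Stand-in forward curves: level + slope factors plus noise
# (curves may dip negative, as for the AAA Euro bonds in the abstract).
curves = (0.01 * rng.normal(size=(days, 1))
          + 0.002 * rng.normal(size=(days, 1)) * maturities
          + 0.0005 * rng.normal(size=(days, 120)))

increments = np.diff(curves, axis=0)  # daily curve moves drive the HJM volatility
centered = increments - increments.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()
print("first 3 PCs explain", explained[:3].round(3))  # factor loadings live in Vt[:3]
```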

23 pages, 10640 KB  
Article
Machine Learning-Driven Computer Vision System for Automated Fat and Energy Quantification in Human Milk Microcapillaries
by Lujan E. Huamanga-Chumbes, Erwin J. Sacoto-Cabrera, Jaime Lloret, Vinie Lee Silva-Alvarado, Alfz Huicho-Mendigure and Edison Moreno-Cardenas
Sensors 2026, 26(6), 1756; https://doi.org/10.3390/s26061756 - 10 Mar 2026
Viewed by 377
Abstract
Neonatal health requires precise lipid quantification in human milk to ensure proper nutritional development. Traditional manual methods, such as the creamatocrit, are limited by human-induced bias and significant measurement uncertainty. This study presents a low-cost Computer Vision System, acting as an automated optical sensing modality, that estimates the cream fraction (c) using advanced Machine Learning regression, which is subsequently used to derive fat and energy quantification through established analytical equations. The system is optimized for the Gold-LED spectrum, which enhances the dynamic range to 226 a.u. for robust feature extraction. We evaluated 28 distinct ML regression models across three feature spaces (Gray Scale, RGB, and Combined). The results, based on 6400 samples, demonstrate that the Rational Quadratic GPR model achieved the highest predictive stability, with a coefficient of determination of R² = 0.867. This computational framework achieved a 57.5% reduction in relative error compared to manual benchmarks. SHAP analysis indicates that the model selectively attributes higher importance to Red channel intensities and Blue contrast gradients, which correspond to the optical scattering characteristics of lipid globules. These findings validate the system as a stable sensing modality for non-invasive quantification. The proposed architecture integrates cost-effective hardware with high-precision analytical modeling, offering a reagent-free and operationally feasible alternative for standardized nutritional assessment in neonatal intensive care units and milk banks.
(This article belongs to the Section Sensing and Imaging)
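A Rational Quadratic Gaussian process regressor of the kind the abstract reports is available off the shelf in scikit-learn. The image features and cream-fraction targets below are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic, WhiteKernel

rng = np.random.default_rng(7)
X = rng.uniform(0, 255, size=(200, 4))  # e.g. R, G, B means + gray contrast
c = 0.02 + 0.0004 * X[:, 0] + rng.normal(scale=0.005, size=200)  # cream fraction

kernel = RationalQuadratic() + WhiteKernel()  # WhiteKernel absorbs sensor noise
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, c)
c_hat, c_std = gpr.predict(X[:3], return_std=True)
print(np.c_[c_hat, c_std])  # predictive mean and its uncertainty per sample
```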

45 pages, 6030 KB  
Article
An Open-Source Life Cycle Inventory (LCI) Model to Assess the Environmental Impacts of IGBT Power Semiconductor Manufacturing
by Thomas Guillemet, Pierre-Yves Pichon and Nicolas Degrenne
Sustainability 2026, 18(5), 2663; https://doi.org/10.3390/su18052663 - 9 Mar 2026
Viewed by 338
Abstract
While sustainability is set as a goal by a broad range of international organizations, its definition varies, and there is still a lack of practical criteria for product designers to evaluate the degree of (un)sustainability in the design phase. Life cycle assessment (LCA) can quantify the environmental impacts of a product but is often carried out post-design, when the manufacturing process is already settled. Finally, while significant advances have been made towards standardizing LCA calculations by providing product category rules, large uncertainties remain in the calculation results due to a lack of transparency regarding the choices of databases, system boundaries, allocation, cut-off rules, and level of data granularity. A practical way to improve in those areas is to share with the semiconductor community a parametrizable life cycle inventory (LCI) model based on a target device to (1) identify knowledge gaps in LCA methods for such products, (2) identify the main process variables, and (3) provide a starting point for LCA calculations by the designers themselves. With this aim, a parametrizable cradle-to-gate manufacturing LCI model was developed based on the peer-reviewed process flow of a trench field-stop silicon insulated gate bipolar transistor (IGBT) semiconductor power device. The model allows computation of the environmental impacts of the IGBT manufacturing process based on different tunable parameters such as die size, wafer diameter, manufacturing yield, abatement efficiency, wafer fab throughput, wafer fab location, and associated electricity mix. Embedding a high level of data granularity, it helps identify, at elementary process levels, key environmental hotspots and associated technical levers for their reduction. Analysis of the IGBT manufacturing process demonstrates the importance of an impact assessment approach that considers multiple environmental categories, going beyond the sole focus on greenhouse gas emissions and accounting for potential transfers of impact. With an open-source mindset and from a continuous-improvement perspective, the manufacturing inventory model and its associated tools are freely available from a public GitHub repository and open for comments and consolidation from users.
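A parametrizable LCI of this sort ultimately allocates wafer-level burdens to individual dies. The sketch below uses the classic gross-die-per-wafer approximation with die size, wafer diameter, and yield as the tunable parameters; all numeric values are assumptions, not outputs of the paper's model.

```python
import math

def dies_per_wafer(wafer_d_mm, die_area_mm2):
    """Classic gross-die estimate: usable wafer area minus an edge-loss term."""
    r = wafer_d_mm / 2
    return math.pi * r**2 / die_area_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2)

def impact_per_good_die(wafer_impact_kgco2e, wafer_d_mm, die_area_mm2, yield_frac):
    """Allocate a wafer-level impact to one functional die."""
    return wafer_impact_kgco2e / (dies_per_wafer(wafer_d_mm, die_area_mm2) * yield_frac)

# Example: 200 mm wafer, 1 cm^2 IGBT die, 90% yield, 100 kgCO2e/wafer (all assumed).
print(f"{impact_per_good_die(100.0, 200, 100, 0.9):.3f} kgCO2e per good die")
```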

81 pages, 28674 KB  
Article
Representation Learning for Maritime Vessel Behaviour: A Three-Stage Pipeline for Robust Trajectory Embeddings
by Ghassan Al-Falouji, Shang Gao, Zhixin Huang, Ben Biesenbach, Peer Kröger, Bernhard Sick and Sven Tomforde
J. Mar. Sci. Eng. 2026, 14(5), 507; https://doi.org/10.3390/jmse14050507 - 8 Mar 2026
Viewed by 229
Abstract
The growing complexity of maritime navigation creates safety challenges that drive the shift toward autonomous systems. Maritime vessel behaviour modelling is critical for safe and efficient autonomous operations. Representation learning offers a systematic approach to learn feature embeddings encoding vessel behaviour for improved situational awareness and decision-making. We introduce a three-stage representation learning pipeline evaluating six architectures on real-world AIS trajectories. Grouped Masked Autoencoder (GMAE)-Risk Extrapolation (REx) combines group-wise masked autoencoding at the semantic feature level with risk extrapolation regularisation, forcing encoders to learn cross-group dependencies between temporal, kinematic, spatial, and interaction features. DAE and EAE provide robust and uncertainty-aware baselines. Evaluation uses a dual-pipeline framework on two years of Kiel Fjord AIS data (176,787 trajectories, 527,225 segments). Pipeline 1 applies three-stage representation learning, using vessel-type classification as an encoder-selection probe. GMAE-REx achieves 86.03% validation accuracy, outperforming DAE (85.63%), EAE (85.56%), and the baselines Transformer (84.93%), TCN (76.27%), and LiST (85.12%). Pipeline 2 applies unsupervised clustering to discover intrinsic behavioural structure. Learnt representations consistently outperform expert features on DBCV, conductance, and modularity metrics, organising trajectories by operational context rather than vessel type. This behaviour-oriented organisation enables cross-vessel knowledge transfer for autonomous navigation, VTS monitoring, and safety analysis.
(This article belongs to the Special Issue Intelligent Solutions for Marine Operations)
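The group-wise masking idea can be sketched in a few lines of PyTorch: zero out one whole semantic feature group and train the autoencoder to reconstruct it from the others. Group boundaries, widths, and the plain MLP encoder are illustrative; the REx regularisation and trajectory encoders are omitted.

```python
import torch
import torch.nn as nn

GROUPS = {"temporal": slice(0, 4), "kinematic": slice(4, 8),
          "spatial": slice(8, 12), "interaction": slice(12, 16)}

encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
decoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

x = torch.randn(256, 16)  # a batch of segment feature vectors (synthetic)
for step in range(100):
    masked = x.clone()
    group = list(GROUPS)[torch.randint(len(GROUPS), (1,)).item()]
    masked[:, GROUPS[group]] = 0.0  # hide one whole semantic group
    recon = decoder(encoder(masked))
    # Loss only on the hidden group: it must be inferred from the other groups.
    loss = ((recon[:, GROUPS[group]] - x[:, GROUPS[group]]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final reconstruction loss on masked group: {loss.item():.4f}")
```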

30 pages, 3388 KB  
Article
Nonstationary Flood Frequency Analysis for Urban Watersheds Using Open-Source Bayesian Software: Contrasting Case Studies from Texas
by C. Haden Smith, Brian Skahill and David A. Margo
Water 2026, 18(5), 636; https://doi.org/10.3390/w18050636 - 7 Mar 2026
Viewed by 717
Abstract
Urban flood frequency analysis faces unique challenges as land development alters watershed hydrology, producing nonstationary flood records. This study demonstrates nonstationary flood frequency analysis (NSFFA) using RMC-BestFit, an open-source Bayesian software package, through two Texas case studies. Brays Bayou at Houston (96 years of record) exemplifies an urbanized watershed with increasing flood trends; a step-logistic model captures both the abrupt increase in mean flood magnitude around 1968 and the progressive decrease in log-space variance as urbanization homogenized runoff response. O.C. Fisher Reservoir (169 years of record) exhibits decreasing trends attributed to brush encroachment and groundwater extraction; despite a sinusoidal model achieving the best information criteria, a step function was selected based on physical reasoning, demonstrating that statistical fit alone should not dictate model selection. Results reveal contrasting frequency curve patterns: at O.C. Fisher, stationary and nonstationary curves differ uniformly (53% reduction in the 100-year flood), while at Brays Bayou, curves differ substantially for frequent events (48% increase in the 2-year flood) but converge in the extreme tail due to opposing trends in location and scale parameters. These findings underscore that NSFFA relevance depends on decision context. Bayesian methods offer key advantages including flexible integration of diverse data sources, comprehensive uncertainty quantification, and principled model comparison. Open-source software democratizes access to these methods, promoting transparency and reproducibility.
(This article belongs to the Special Issue Urban Flood Frequency Analysis and Risk Assessment, 2nd Edition)
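The step-change idea can be sketched outside RMC-BestFit. The toy below fits, by maximum likelihood, a Gumbel distribution whose location shifts at an assumed change year, echoing the Brays Bayou step around 1968; the data are synthetic, and the paper's Bayesian machinery and richer distributions are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)
years = np.arange(1930, 2026)
# Synthetic annual peaks whose mean jumps after the change year.
peaks = gumbel_r.rvs(loc=np.where(years < 1968, 100, 180), scale=30,
                     size=years.size, random_state=rng)

def neg_log_lik(theta):
    loc_pre, loc_post, scale = theta
    if scale <= 0:
        return np.inf  # keep the optimizer in the valid region
    loc = np.where(years < 1968, loc_pre, loc_post)
    return -gumbel_r.logpdf(peaks, loc=loc, scale=scale).sum()

res = minimize(neg_log_lik, x0=[100, 150, 25], method="Nelder-Mead")
loc_pre, loc_post, scale = res.x
q100 = gumbel_r.ppf(0.99, loc=loc_post, scale=scale)  # post-change 100-year flood
print(f"pre/post location: {loc_pre:.0f}/{loc_post:.0f}, 100-yr flood now ~{q100:.0f}")
```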

24 pages, 1727 KB  
Article
Symmetry-Guided Deep Generative Model for Multi-Step Evolution of Complex Dynamical Systems
by Ying Xu, Chengbo Zhu, Nannan Su, Yingying Wang and Ziqi Fan
Symmetry 2026, 18(3), 450; https://doi.org/10.3390/sym18030450 - 6 Mar 2026
Viewed by 210
Abstract
Complex dynamical systems are characterized by inherent nonlinearity, high dimensionality, spatiotemporal uncertainty, and implicit symmetry, posing fundamental challenges for their mathematical modeling and multi-step evolution prediction. For example, wind power exhibits strong randomness, intermittency, and latent temporal symmetry. To address these challenges, this paper proposes a symmetry-guided deep generative model, the bi-directional recurrent generative adversarial network (BDR-GAN), for the multi-step rolling prediction of such systems. The BDR-GAN formalizes multi-step evolution as a conditional probability distribution learning problem. It systematically integrates three forms of symmetry to enhance modeling validity: bi-directional temporal symmetry captured by a BiLSTM-based generator, structural symmetry within the adversarial learning framework between the generator and a 1D-CNN discriminator, and rolling symmetry enabled by a recursive prediction strategy that supports cyclic state updates. Theoretical analysis demonstrates that this symmetry-embedded adversarial mechanism enables BDR-GAN to effectively approximate the underlying dynamic operators and the conditional distribution of future states, improving the learned model’s generalization. Experimental validation on wind power datasets confirms the framework’s superiority. Compared to benchmark models, BDR-GAN achieves superior prediction accuracy (e.g., RMSE 0.236, MAPE 5.12%), provides reliable uncertainty quantification (PICP 95.5%), and exhibits enhanced robustness against noise and variability. This work provides a generalizable, symmetry-guided modeling framework for the multi-step evolution of complex dynamical systems, offering theoretical and technical support for high-precision prediction in critical applications such as wind power integration and smart grid operation.
(This article belongs to the Special Issue Application of Symmetry/Asymmetry and Machine Learning)
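A structural sketch of the generator/discriminator pairing named here, with illustrative dimensions: a BiLSTM maps a window of past values to the next step, and a 1D-CNN scores sequences as real or generated. The adversarial training loop and rolling updates are omitted.

```python
import torch
import torch.nn as nn

class BiLSTMGenerator(nn.Module):
    def __init__(self, d_in=2, d_hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(d_in, d_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_hidden, 1)

    def forward(self, x):          # x: (batch, window, features + noise)
        h, _ = self.rnn(x)
        return self.head(h[:, -1])  # one-step-ahead wind power

discriminator = nn.Sequential(      # 1D-CNN judging real vs generated windows
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.LeakyReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
)

gen = BiLSTMGenerator()
window = torch.randn(8, 24, 2)      # 24 past steps of (power, noise)
print(gen(window).shape, discriminator(torch.randn(8, 1, 24)).shape)
```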

16 pages, 1553 KB  
Article
Machine-Learning Algorithm and Decline-Curve Analysis Comparison in Forecasting Gas Production
by Dan-Romulus Jacota, Cristina Roxana Popa, Maria Tănase and Cristina Veres
Processes 2026, 14(5), 826; https://doi.org/10.3390/pr14050826 - 3 Mar 2026
Viewed by 357
Abstract
This study utilizes machine-learning algorithms to reinterpret existing datasets originally plotted using Decline-Curve Analysis (DCA), aiming to enhance predictive accuracy without requiring new field-data acquisition. Historical production records were compiled (monthly oil/gas rates, bottom-hole pressures, and cumulative productions) and fitted to Arps equations via least-squares optimization, and the key decline parameters, such as the initial rate, nominal decline rate, and hyperbolic exponent, served as input data. Four machine-learning models were trained and validated, Artificial Neural Networks (ANN), Support Vector Machines (SVM), Random Forest (RF), and Linear Regression (LR), using 80/20 train–test splits and 5-fold cross-validation. Models were evaluated using Mean Squared Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and the coefficient of determination (R²). The ANN emerged as the best-performing method, achieving near-unity predictive accuracy (R² ≈ 1) on the independent test set, with low error values (MSE = 0.0012 Ncm²/month², RMSE = 0.035 Ncm/month, MAE = 0.028 Ncm/month) for oil production rates. Similar levels of accuracy were obtained for gas rates and pressures. These results reflect the strong and highly regular relationships present in the dataset analyzed rather than an exact zero-error fit. The multi-layer architecture of the ANN effectively captured the nonlinear interactions between Arps parameters and transient flow regimes, outperforming the empirical and physics-constrained approaches. Linear regression yielded strong results (R² = 0.98, RMSE = 0.15 Ncm/month) but faltered in high-decline scenarios, failing to model exponential tails accurately. SVM exhibited the highest deviations (RMSE = 0.42 Ncm/month, R² = 0.89), attributable to kernel sensitivity in sparse, noisy decline data. RF provided intermediate performance (R² = 0.97). This ANN-driven approach redefines decline analysis by automating parameter tuning and uncertainty quantification, reducing forecasting errors by 85% versus classical Arps methods.
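The first stage described here, recovering the Arps parameters (qi, Di, b) from rate history by least squares, is a standard curve fit. The rates below are synthetic; in the abstract's workflow, the fitted parameters would then serve as inputs to the ML models.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, Di, b):
    """Hyperbolic Arps decline: q(t) = qi / (1 + b*Di*t)^(1/b)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

t = np.arange(60)  # months on production
rng = np.random.default_rng(5)
q = arps_hyperbolic(t, qi=120.0, Di=0.08, b=0.7) * rng.normal(1, 0.02, t.size)

(qi, Di, b), _ = curve_fit(arps_hyperbolic, t, q, p0=[100.0, 0.05, 0.5],
                           bounds=([0, 0, 0.01], [np.inf, 1.0, 2.0]))
print(f"qi={qi:.1f}, Di={Di:.3f}/month, b={b:.2f}")  # ML feature inputs
```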