Search Results (726)

Search Parameters:
Keywords = Monte Carlo framework

29 pages, 6950 KiB  
Article
At-Site Versus Regional Frequency Analysis of Sub-Hourly Rainfall for Urban Hydrology Applications During Recent Extreme Events
by Sunghun Kim, Kyungmin Sung, Ju-Young Shin and Jun-Haeng Heo
Water 2025, 17(15), 2213; https://doi.org/10.3390/w17152213 - 24 Jul 2025
Abstract
Accurate rainfall quantile estimation is critical for urban flood management, particularly given the escalating climate change impacts. This study comprehensively compared at-site frequency analysis and regional frequency analysis for sub-hourly rainfall quantile estimation, using data from 27 sites across Seoul. The analysis focused on Seoul’s disaster prevention framework (30-year and 100-year return periods). Employing L-moment statistics and Monte Carlo simulations, the rainfall quantiles were estimated, the methodological performance was evaluated, and Seoul’s current disaster prevention standards were assessed. The analysis revealed significant spatio-temporal variability in Seoul’s precipitation, causing considerable uncertainty in individual site estimates. A performance evaluation, including the relative root mean square error and confidence interval, consistently showed the superiority of regional frequency analysis over at-site frequency analysis. While at-site frequency analysis demonstrated better performance only for short return periods (e.g., 2 years), regional frequency analysis exhibited a substantially lower relative root mean square error and significantly narrower confidence intervals for larger return periods (e.g., 10, 30, 100 years). Regional frequency analysis reduced the average 95% confidence interval width by a factor of approximately 2.7 (26.98 mm versus 73.99 mm). This enhanced reliability stems from the information-pooling capabilities of regional frequency analysis, mitigating uncertainties due to limited record lengths and localized variabilities. Critically, regionally derived 100-year rainfall estimates consistently exceeded Seoul’s 100 mm disaster prevention threshold across most areas, suggesting that the current infrastructure may be substantially under-designed. The use of minute-scale data underscored its necessity for urban hydrological modeling, highlighting the inadequacy of conventional daily rainfall analyses. Full article
(This article belongs to the Special Issue Urban Flood Frequency Analysis and Risk Assessment)
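The at-site versus regional comparison can be illustrated with a small Monte Carlo experiment. The sketch below (a simplified illustration, not the authors' code) draws synthetic annual-maximum-style rainfall from an assumed GEV parent, estimates the 100-year quantile from a single 30-year record and from an index-flood-style pooled regional sample, and compares the relative RMSE of the two estimators; the site count, record length, and GEV parameters are illustrative assumptions, and scipy's maximum-likelihood fit stands in for the paper's L-moment estimators.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# assumed GEV parent for sub-hourly annual maxima (shape in scipy's convention)
shape, loc, scale = -0.1, 60.0, 15.0
true_q100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc, scale)

n_sites, n_years, n_trials = 27, 30, 100   # illustrative region: 27 sites, 30-year records
err_at_site, err_regional = [], []
for _ in range(n_trials):
    data = stats.genextreme.rvs(shape, loc, scale, size=(n_sites, n_years), random_state=rng)
    # at-site estimate: fit only one site's short record
    c, l, s = stats.genextreme.fit(data[0])
    err_at_site.append(stats.genextreme.ppf(0.99, c, l, s) - true_q100)
    # regional (index-flood style) estimate: pool records scaled by each site's mean
    pooled = (data / data.mean(axis=1, keepdims=True)).ravel()
    c, l, s = stats.genextreme.fit(pooled)
    err_regional.append(stats.genextreme.ppf(0.99, c, l, s) * data[0].mean() - true_q100)

rrmse = lambda e: np.sqrt(np.mean(np.square(e))) / true_q100
print(f"relative RMSE, at-site : {rrmse(err_at_site):.3f}")
print(f"relative RMSE, regional: {rrmse(err_regional):.3f}")
```

Because the synthetic region is homogeneous by construction, the pooled estimator benefits fully from information pooling; real regions require a homogeneity test before pooling.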
36 pages, 11035 KiB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Abstract
A novel unified progressive hybrid censoring is introduced to combine both progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build the asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications utilizing rare minerals from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science. By applying four different optimality criteria to multiple competing plans, an analysis of various progressive censoring strategies that yield the best performance is conducted. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
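As a rough illustration of the Bayesian side of such an analysis, the sketch below runs a random-walk Metropolis–Hastings sampler for the two inverted Nadarajah–Haghighi parameters under independent gamma priors, using a complete (uncensored) synthetic sample rather than the paper's unified progressive hybrid censoring scheme; the density form, hyperparameters, and proposal scale are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik(alpha, lam, x):
    """Log-likelihood of the inverted Nadarajah-Haghighi model,
    F(x) = exp{1 - (1 + lam/x)^alpha}, for a complete sample x."""
    if alpha <= 0 or lam <= 0:
        return -np.inf
    t = 1.0 + lam / x
    return np.sum(np.log(alpha) + np.log(lam) - 2 * np.log(x)
                  + (alpha - 1) * np.log(t) + 1.0 - t**alpha)

def logpost(theta, x, a=(1.0, 1.0), b=(1.0, 1.0)):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return -np.inf
    # independent Gamma(a, b) priors on alpha and lambda (illustrative hyperparameters)
    logprior = ((a[0] - 1) * np.log(alpha) - b[0] * alpha
                + (a[1] - 1) * np.log(lam) - b[1] * lam)
    return loglik(alpha, lam, x) + logprior

# synthetic complete sample via the inverse CDF: F^{-1}(u) = lam / ((1 - log u)^(1/alpha) - 1)
alpha_true, lam_true = 1.5, 2.0
u = rng.uniform(size=80)
x = lam_true / ((1.0 - np.log(u)) ** (1.0 / alpha_true) - 1.0)

# random-walk Metropolis-Hastings
theta = np.array([1.0, 1.0])
lp = logpost(theta, x)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.15, size=2)
    lp_prop = logpost(prop, x)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

post = np.array(chain[5000:])        # discard burn-in
print("posterior means (alpha, lambda):", post.mean(axis=0))
print("95% credible intervals:", np.percentile(post, [2.5, 97.5], axis=0))
```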
11 pages, 961 KiB  
Article
Viscous Cosmology in f(Q, L_m) Gravity: Insights from CC, BAO, and GRB Data
by Dheeraj Singh Rana, Sai Swagat Mishra, Aaqid Bhat and Pradyumn Kumar Sahoo
Universe 2025, 11(8), 242; https://doi.org/10.3390/universe11080242 - 23 Jul 2025
Abstract
In this article, we investigate the influence of viscosity on the evolution of the cosmos within the framework of the newly proposed f(Q, L_m) gravity. We have considered a linear functional form f(Q, L_m) = αQ + βL_m with a bulk viscous coefficient ζ = ζ_0 + ζ_1 H for our analysis and obtained exact solutions to the field equations associated with a flat FLRW metric. In addition, we utilized Cosmic Chronometers (CC), CC + BAO, CC + BAO + GRB, and GRB data samples to determine the constrained values of independent parameters in the derived exact solution. The likelihood function and the Markov Chain Monte Carlo (MCMC) sampling technique are combined to yield the posterior probability using Bayesian statistical methods. Furthermore, by comparing our results with the standard cosmological model, we found that our considered model supports the acceleration of the universe at late times. Full article
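The MCMC constraint step can be sketched in a few lines: a Metropolis sampler fits an expansion history H(z) to a handful of cosmic-chronometer-style (z, H, σ) points. The data triplets and the flat-ΛCDM-form H(z) used here are illustrative placeholders; the paper's exact viscous f(Q, L_m) solution would replace the model function.

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative cosmic-chronometer-style measurements: (z, H [km/s/Mpc], sigma_H)
z  = np.array([0.07, 0.20, 0.40, 0.60, 0.90, 1.30, 1.75])
H  = np.array([69.0, 72.9, 82.0, 87.9, 110.0, 168.0, 202.0])
sH = np.array([19.6, 29.6, 10.0, 6.1, 15.0, 17.0, 40.0])

def H_model(z, H0, Om):
    # stand-in expansion history (flat LCDM form); the paper's viscous f(Q, L_m)
    # solution would replace this function
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def logpost(theta):
    H0, Om = theta
    if not (40 < H0 < 100 and 0 < Om < 1):   # flat priors
        return -np.inf
    return -0.5 * np.sum(((H - H_model(z, H0, Om)) / sH) ** 2)

theta = np.array([70.0, 0.3])
lp = logpost(theta)
chain = []
for _ in range(30000):
    prop = theta + rng.normal(scale=[1.0, 0.02])
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

post = np.array(chain[5000:])
print("H0 = %.1f +/- %.1f" % (post[:, 0].mean(), post[:, 0].std()))
print("Om = %.3f +/- %.3f" % (post[:, 1].mean(), post[:, 1].std()))
```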

13 pages, 793 KiB  
Communication
Gamma-Ray Bursts Calibrated by Using Artificial Neural Networks from the Pantheon+ Sample
by Zhen Huang, Xin Luo, Bin Zhang, Jianchao Feng, Puxun Wu, Yu Liu and Nan Liang
Universe 2025, 11(8), 241; https://doi.org/10.3390/universe11080241 - 23 Jul 2025
Abstract
In this paper, we calibrate the luminosity relation of gamma-ray bursts (GRBs) by employing artificial neural networks (ANNs) to analyze the Pantheon+ sample of type Ia supernovae (SNe Ia) in a manner independent of cosmological assumptions. The A219 GRB dataset is used to calibrate the Amati relation (E_p–E_iso) at low redshift with the ANN framework, facilitating the construction of the Hubble diagram at higher redshifts. Cosmological models are constrained with GRBs at high redshift and the latest observational Hubble data (OHD) via the Markov chain Monte Carlo numerical approach. For the Chevallier–Polarski–Linder (CPL) model within a flat universe, we obtain Ω_m = 0.321^{+0.078}_{-0.069}, h = 0.654^{+0.053}_{-0.071}, w_0 = -1.02^{+0.67}_{-0.50}, and w_a = -0.98^{+0.58}_{-0.58} at the 1σ confidence level, which indicates a preference for dark energy with potential redshift evolution (w_a ≠ 0). These findings using ANNs align closely with those derived from GRBs calibrated using Gaussian processes (GPs). Full article
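The calibration idea can be sketched as follows: a small neural network (scikit-learn's MLPRegressor as a stand-in for the paper's ANN) reconstructs the distance modulus versus redshift relation from a mock SN-like sample without assuming a cosmology, and the reconstruction then assigns distance moduli to GRBs in the overlapping redshift range. All data and the smooth placeholder relation below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# mock "Pantheon+-like" sample: redshift vs. distance modulus with scatter
z_sn = np.sort(rng.uniform(0.01, 1.4, 600))
mu_true = 43.2 + 5 * np.log10(z_sn * (1 + 0.5 * z_sn))   # smooth placeholder relation
mu_sn = mu_true + rng.normal(0, 0.15, z_sn.size)

# ANN reconstruction of mu(z), independent of any cosmological model
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
ann.fit(z_sn.reshape(-1, 1), mu_sn)

# calibrate low-redshift GRBs from the reconstructed mu(z)
z_grb = np.array([0.3, 0.7, 1.1, 1.35])
mu_grb = ann.predict(z_grb.reshape(-1, 1))
# with mu in hand, E_iso = 4*pi*d_L^2*S_bolo/(1+z) would feed the Amati (E_p-E_iso) fit
d_L_mpc = 10 ** ((mu_grb - 25) / 5)
print(dict(zip(z_grb, np.round(d_L_mpc, 1))))
```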

25 pages, 2201 KiB  
Article
Evolutionary-Assisted Data-Driven Approach for Dissolved Oxygen Modeling: A Case Study in Kosovo
by Bruno da S. Macêdo, Larissa Lima, Douglas Lima Fonseca, Tales H. A. Boratto, Camila M. Saporetti, Osman Fetoshi, Edmond Hajrizi, Pajtim Bytyçi, Uilson R. V. Aires, Roland Yonaba, Priscila Capriles and Leonardo Goliatt
Earth 2025, 6(3), 81; https://doi.org/10.3390/earth6030081 - 21 Jul 2025
Abstract
Dissolved oxygen (DO) is widely recognized as a fundamental parameter in assessing water quality, given its critical role in supporting aquatic ecosystems. Accurate estimation of DO levels is crucial for effective management of riverine environments, especially in anthropogenically stressed regions. In this study, a hybrid machine learning (ML) framework is introduced to predict DO concentrations, where optimization is performed through Genetic Algorithm Search with Cross-Validation (GASearchCV). The methodology was applied to a dataset collected from the Sitnica River in Kosovo, comprising more than 18,000 observations of temperature, conductivity, pH, and dissolved oxygen. The ML models Elastic Net (EN), Support Vector Regression (SVR), and Light Gradient Boosting Machine (LGBM) were fine-tuned using cross-validation and assessed using five performance metrics: coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE), and mean square error (MSE). Among them, the LGBM model yielded the best predictive results, achieving an R2 of 0.944 and RMSE of 8.430 mg/L on average. A Monte Carlo Simulation-based uncertainty analysis further confirmed the model’s robustness, enabling comparison of the trade-off between uncertainty and predictive precision. Comparison with recent studies confirms the proposed framework’s competitive performance, demonstrating the effectiveness of automated tuning and ensemble learning in achieving reliable and real-time water quality forecasting. The methodology offers a scalable and reliable solution for advancing data-driven water quality forecasting, with direct applicability to real-time environmental monitoring and sustainable resource management. Full article
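The tuning-and-evaluation loop has roughly the following shape. Because the exact GASearchCV interface from the sklearn-genetic-opt package is not reproduced here, scikit-learn's RandomizedSearchCV stands in for the genetic search, and a synthetic dataset with the same three predictors replaces the Sitnica records; the parameter grid and metrics are illustrative.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(4)
n = 2000
# synthetic stand-in for the river records: temperature, conductivity, pH -> DO
X = np.column_stack([rng.uniform(0, 30, n),        # temperature (deg C)
                     rng.uniform(200, 900, n),     # conductivity (uS/cm)
                     rng.uniform(6.5, 9.0, n)])    # pH
y = 14.6 - 0.35 * X[:, 0] + 0.4 * (X[:, 2] - 7.5) - 0.0005 * X[:, 1] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = RandomizedSearchCV(
    LGBMRegressor(random_state=0),
    param_distributions={"n_estimators": [200, 400, 800],
                         "num_leaves": [15, 31, 63],
                         "learning_rate": [0.01, 0.05, 0.1]},
    n_iter=10, cv=5, scoring="neg_root_mean_squared_error", random_state=0)
search.fit(X_tr, y_tr)

pred = search.best_estimator_.predict(X_te)
print("best params:", search.best_params_)
print("R2  :", round(r2_score(y_te, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y_te, pred))), 3))
print("MAE :", round(mean_absolute_error(y_te, pred), 3))
```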

18 pages, 2549 KiB  
Article
A Multi-Fusion Early Warning Method for Vehicle–Pedestrian Collision Risk at Unsignalized Intersections
by Weijing Zhu, Junji Dai, Xiaoqin Zhou, Xu Gao, Rui Cheng, Bingheng Yang, Enchu Li, Qingmei Lü, Wenting Wang and Qiuyan Tan
World Electr. Veh. J. 2025, 16(7), 407; https://doi.org/10.3390/wevj16070407 - 21 Jul 2025
Abstract
Traditional collision risk warning methods primarily focus on vehicle-to-vehicle collisions, neglecting conflicts between vehicles and vulnerable road users (VRUs) such as pedestrians, while the difficulty in predicting pedestrian trajectories further limits the accuracy of collision warnings. To address this problem, this study proposes a vehicle-to-everything (V2X)-based multi-fusion vehicle–pedestrian collision warning method, aiming to enhance traffic safety protection for VRUs. First, unmanned aerial vehicle (UAV) imagery combined with the YOLOv7 and DeepSort algorithms is utilized to achieve target detection and tracking at unsignalized intersections, thereby constructing a vehicle–pedestrian interaction trajectory dataset. Subsequently, key foundational modules for collision warning are developed, including the vehicle trajectory module, the pedestrian trajectory module, and the risk detection module. The vehicle trajectory module is based on a kinematic model, while the pedestrian trajectory module adopts an Attention-based Social GAN (AS-GAN) model that integrates a generative adversarial network with a soft attention mechanism, enhancing prediction accuracy through a dual-discriminator strategy involving adversarial loss and displacement loss. The risk detection module applies an elliptical buffer zone algorithm to perform dynamic spatial collision determination. Finally, a collision warning framework based on the Monte Carlo (MC) method is developed. Multiple sampled pedestrian trajectories are generated by applying Gaussian perturbations to the predicted mean trajectory and combined with vehicle trajectories and collision determination results to identify potential collision targets. Furthermore, the driver perception–braking time (TTM) is incorporated to estimate the joint collision probability and assist in warning decision-making. Simulation results show that the proposed warning method achieves an accuracy of 94.5% at unsignalized intersections, outperforming traditional Time-to-Collision (TTC) and braking distance models, and effectively reducing missed and false warnings, thereby improving pedestrian traffic safety at unsignalized intersections. Full article
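The Monte Carlo warning step can be sketched directly: Gaussian perturbations of a predicted mean pedestrian path are checked against the vehicle path using an elliptical buffer centred on the vehicle, and the fraction of colliding samples is reported as the collision probability. The trajectories, noise level, and ellipse semi-axes below are illustrative assumptions; the AS-GAN predictor and the driver-reaction (TTM) term are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, horizon = 0.1, 3.0                       # 3 s prediction horizon, 0.1 s steps
t = np.arange(0.0, horizon + dt, dt)

# vehicle: constant 10 m/s along x; pedestrian mean prediction: 1.4 m/s crossing along y
veh = np.column_stack([-15.0 + 10.0 * t, np.zeros_like(t)])
ped_mean = np.column_stack([np.zeros_like(t), -3.5 + 1.4 * t])

a, b = 2.5, 1.2      # semi-axes of the elliptical buffer around the vehicle (m)
sigma = 0.3          # Gaussian perturbation of the predicted pedestrian path (m)
n_samples = 2000

def collides(ped_traj):
    """True if any time step falls inside the vehicle-centred ellipse."""
    d = ped_traj - veh
    return np.any((d[:, 0] / a) ** 2 + (d[:, 1] / b) ** 2 <= 1.0)

hits = sum(collides(ped_mean + rng.normal(0.0, sigma, ped_mean.shape))
           for _ in range(n_samples))
p_collision = hits / n_samples
print(f"Monte Carlo collision probability over {horizon:.0f} s: {p_collision:.1%}")
# a warning would be issued when p_collision exceeds a calibrated threshold
```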

25 pages, 10024 KiB  
Article
Forecasting with a Bivariate Hysteretic Time Series Model Incorporating Asymmetric Volatility and Dynamic Correlations
by Hong Thi Than
Entropy 2025, 27(7), 771; https://doi.org/10.3390/e27070771 - 21 Jul 2025
Abstract
This study explores asymmetric volatility structures within multivariate hysteretic autoregressive (MHAR) models that incorporate conditional correlations, aiming to flexibly capture the dynamic behavior of global financial assets. The proposed framework integrates regime switching and time-varying delays governed by a hysteresis variable, enabling the model to account for both asymmetric volatility and evolving correlation patterns over time. We adopt a fully Bayesian inference approach using adaptive Markov chain Monte Carlo (MCMC) techniques, allowing for the joint estimation of model parameters, Value-at-Risk (VaR), and Marginal Expected Shortfall (MES). The accuracy of VaR forecasts is assessed through two standard backtesting procedures. Our empirical analysis involves both simulated data and real-world financial datasets to evaluate the model’s effectiveness in capturing downside risk dynamics. We demonstrate the application of the proposed method on three pairs of daily log returns involving the S&P500, Bank of America (BAC), Intercontinental Exchange (ICE), and Goldman Sachs (GS), present the results obtained, and compare them against the original model framework. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
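The risk-measure step alone can be illustrated as below: given simulated joint return draws for a market index and a stock, one-day VaR and the marginal expected shortfall (MES) follow from simple tail statistics. A bivariate normal generator stands in here for the fitted MHAR model's posterior-predictive draws, and the moments are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# stand-in for posterior-predictive draws from the fitted bivariate model:
# column 0 = market (e.g., S&P500) log return, column 1 = stock (e.g., BAC) log return
mean = np.array([0.0003, 0.0002])
cov = np.array([[0.00012, 0.00010],
                [0.00010, 0.00020]])
draws = rng.multivariate_normal(mean, cov, size=100_000)

alpha = 0.05
var_market = -np.quantile(draws[:, 0], alpha)     # 95% one-day Value-at-Risk
var_stock = -np.quantile(draws[:, 1], alpha)

# MES: expected stock loss conditional on the market being in its worst alpha-tail
tail = draws[:, 0] <= np.quantile(draws[:, 0], alpha)
mes_stock = -draws[tail, 1].mean()

print(f"VaR(95%) market: {var_market:.4f}  stock: {var_stock:.4f}")
print(f"MES(95%) stock given market tail: {mes_stock:.4f}")
```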

21 pages, 1057 KiB  
Article
Hybrid Sensor Placement Framework Using Criterion-Guided Candidate Selection and Optimization
by Se-Hee Kim, JungHyun Kyung, Jae-Hyoung An and Hee-Chang Eun
Sensors 2025, 25(14), 4513; https://doi.org/10.3390/s25144513 - 21 Jul 2025
Abstract
This study presents a hybrid sensor placement methodology that combines criterion-based candidate selection with advanced optimization algorithms. Four established selection criteria—modal kinetic energy (MKE), modal strain energy (MSE), modal assurance criterion (MAC) sensitivity, and mutual information (MI)—are used to evaluate DOF sensitivity and generate candidate pools. These are followed by one of four optimization algorithms—greedy, genetic algorithm (GA), particle swarm optimization (PSO), or simulated annealing (SA)—to identify the optimal subset of sensor locations. A key feature of the proposed approach is the incorporation of constraint dynamics using the Udwadia–Kalaba (U–K) generalized inverse formulation, which enables the accurate expansion of structural responses from sparse sensor data. The framework assumes a noise-free environment during the initial sensor design phase, but robustness is verified through extensive Monte Carlo simulations under multiple noise levels in a numerical experiment. This combined methodology offers an effective and flexible solution for data-driven sensor deployment in structural health monitoring. Unlike conventional Moore–Penrose pseudo-inverses, which yield purely algebraic solutions without physical insight, the U–K generalized inverse incorporates physical constraints derived from partial mode shapes directly into the reconstruction process. The resulting constrained dynamic solution reflects known structural behavior, improves numerical conditioning in underdetermined or ill-posed cases, and ensures that reconstructed responses adhere to dynamic compatibility, enabling a more accurate and physically consistent reconstruction of unmeasured responses under sparse sensing and reducing artifacts caused by sparse measurements or noise. Compared to unconstrained least-squares solutions, the U–K approach also improves stability and interpretability in practical SHM scenarios. Full article
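A stripped-down version of the candidate-then-optimize idea, using one criterion and one optimizer: degrees of freedom are ranked by modal kinetic energy to form a candidate pool, and a greedy search then maximizes the log-determinant of Phi_s^T Phi_s (a D-optimal, effective-independence-style objective). The spring-mass chain, pool size, and sensor count are illustrative, and the U–K response-expansion step is not included.

```python
import numpy as np

# modal data from a simple N-DOF spring-mass chain (illustrative structure)
N, n_modes = 40, 6
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # tridiagonal stiffness
M = np.eye(N)                                           # unit masses
_, Phi = np.linalg.eigh(K)                              # with M = I, eigh(K) gives the modes
Phi = Phi[:, :n_modes]                                  # keep the lowest modes

# criterion-guided candidate pool: modal kinetic energy per DOF
mke = np.sum(M @ (Phi ** 2), axis=1)
candidates = list(np.argsort(mke)[::-1][:20])           # top-20 DOFs by MKE

# greedy optimization: maximize log det(Phi_s^T Phi_s) over the candidate pool
def objective(dofs):
    Ps = Phi[dofs, :]
    return np.linalg.slogdet(Ps.T @ Ps + 1e-9 * np.eye(n_modes))[1]

selected = []
for _ in range(8):                                      # place 8 sensors
    best = max((d for d in candidates if d not in selected),
               key=lambda d: objective(selected + [d]))
    selected.append(best)

print("selected sensor DOFs:", sorted(int(d) for d in selected))
print("log det(Phi_s^T Phi_s):", round(objective(selected), 3))
```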

77 pages, 2935 KiB  
Review
Assessment Methods for Building Energy Retrofits with Emphasis on Financial Evaluation: A Systematic Literature Review
by Maria D. Papangelopoulou, Konstantinos Alexakis and Dimitris Askounis
Buildings 2025, 15(14), 2562; https://doi.org/10.3390/buildings15142562 - 20 Jul 2025
Abstract
The building sector remains one of the largest contributors to global energy consumption and CO2 emissions, yet selecting optimal retrofit strategies is often hindered by inconsistent evaluation practices and limited integration of environmental and social impacts. This review addresses that gap by systematically analyzing how various assessment methods are applied to building retrofits, particularly from a financial and environmental perspective. A structured literature review was conducted across four major scientific databases using predefined keywords, filters, and inclusion/exclusion criteria, resulting in a final sample of 50 studies (green colored citations of this paper). The review focuses on the application of Life Cycle Cost Analysis (LCCA), Cost–Benefit Analysis (CBA), and Life Cycle Assessment (LCA), as well as additional indicators that quantify energy and sustainability performance. Results show that LCCA is the most frequently used method, applied in over 60% of the studies, often in combination with LCA (particularly for long time horizons). CBA appears in fewer than 25% of cases. More than 50% of studies are based in Europe, and over 60% of case studies involve residential buildings. EnergyPlus and DesignBuilder were the most common simulation tools, used in 28% and 16% of the cases, respectively. Risk and uncertainty were typically addressed through Monte Carlo simulations (22%) and sensitivity analysis. Comfort and social impact indicators were underrepresented, with thermal comfort included in only 12% of studies and no formal use of tools like Social-LCA or SROI. The findings highlight the growing sophistication of retrofit assessments post-2020, but also reveal gaps such as geographic imbalance (absence of African case studies), inconsistent treatment of discount rates, and limited integration of social indicators. The study concludes that future research should develop standardized, multidimensional evaluation frameworks that incorporate social equity, stakeholder values, and long-term resilience alongside cost and carbon metrics. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
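The financial core of many of the reviewed assessments, LCCA with Monte Carlo treatment of uncertainty, reduces to a discounted cash-flow calculation over sampled inputs. The sketch below computes an NPV distribution for a hypothetical retrofit; the investment cost, annual savings, price escalation, and discount-rate ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sim, years = 20_000, 25

# assumed uncertain inputs for a hypothetical retrofit package
capex = rng.normal(60_000, 8_000, n_sim)          # investment cost (EUR)
savings0 = rng.normal(4_500, 600, n_sim)          # first-year energy savings (EUR/yr)
escalation = rng.uniform(0.01, 0.04, n_sim)       # energy price escalation
discount = rng.uniform(0.02, 0.06, n_sim)         # discount rate

t = np.arange(1, years + 1)
# discounted savings: sum_t savings0*(1+e)^(t-1) / (1+r)^t, vectorized over simulations
cash = savings0[:, None] * (1 + escalation[:, None]) ** (t - 1)
npv = (cash / (1 + discount[:, None]) ** t).sum(axis=1) - capex

print(f"mean NPV: {npv.mean():,.0f} EUR")
print(f"P(NPV > 0): {(npv > 0).mean():.1%}")
print(f"5th-95th percentile: {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f} EUR")
```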

34 pages, 3704 KiB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
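The Monte Carlo dropout mechanism itself is compact, as the PyTorch sketch below shows: dropout layers are kept active at inference and several stochastic forward passes are averaged, yielding a predictive mean and a spread per trial. The fully connected placeholder network, input shape, and random inputs are assumptions for illustration, not EEGNet/TCNet Fusion or the channel-dropout variant used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_channels, n_samples, n_classes = 32, 256, 2   # illustrative MI-EEG trial shape

model = nn.Sequential(                          # untrained placeholder classifier
    nn.Flatten(),
    nn.Linear(n_channels * n_samples, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, n_classes),
)

def mc_dropout_predict(model, x, n_passes=50):
    """Average softmax outputs over stochastic passes with dropout kept active."""
    model.train()                               # keeps Dropout layers sampling
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread

x = torch.randn(8, n_channels, n_samples)       # a batch of 8 synthetic trials
mean_prob, std_prob = mc_dropout_predict(model, x)
pred = mean_prob.argmax(dim=-1)
for i in range(len(x)):
    print(f"trial {i}: class {pred[i].item()}  "
          f"p={mean_prob[i, pred[i]].item():.2f} +/- {std_prob[i, pred[i]].item():.2f}")
```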

18 pages, 1797 KiB  
Article
Extreme Grid Operation Scenario Generation Framework Considering Discrete Failures and Continuous Output Variations
by Dong Liu, Guodong Guo, Zhidong Wang, Fan Li, Kaiyuan Jia, Chenzhenghan Zhu, Haotian Wang and Yingyun Sun
Energies 2025, 18(14), 3838; https://doi.org/10.3390/en18143838 - 18 Jul 2025
Abstract
In recent years, extreme weather events have occurred more frequently. The resulting equipment failure, renewable energy extreme output, and other extreme operation scenarios affect the smooth operation of power grids. The occurrence probability of extreme operation scenarios is small, and the occurrence frequency in historical operation data is low, which affects the modeling accuracy for scenario generation. Meanwhile, extreme operation scenarios in the form of discrete temporal data lack corresponding modeling methods. Therefore, this paper proposes a definition and generation framework for extreme power grid operation scenarios triggered by extreme weather events. Extreme operation scenario expansion is realized based on the sequential Monte Carlo sampling method and the distribution shifting algorithm. To generate equipment failure scenarios in discrete temporal data form and extreme output scenarios in continuous temporal data form for renewable energy, a Gumbel-Softmax variational autoencoder and an extreme conditional generative adversarial network are respectively proposed. Numerical examples show that the proposed models can effectively overcome limitations related to insufficient historical extreme data and discrete extreme scenario training. Additionally, they can generate improved-quality equipment failure scenarios and renewable energy extreme output scenarios and provide scenario support for power grid planning and operation. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Electrical Power Systems)
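The sequential Monte Carlo sampling used to expand failure scenarios can be sketched as a two-state availability sampler: exponential times-to-failure and times-to-repair are drawn alternately to produce hourly 0/1 in-service series per component. Component names, failure and repair rates, and the horizon are illustrative; the Gumbel-Softmax VAE and conditional GAN components of the framework are not sketched.

```python
import numpy as np

rng = np.random.default_rng(8)

def sequential_mc_state_series(mttf_h, mttr_h, horizon_h):
    """Sample one hourly 0/1 availability series (1 = in service) by alternately
    drawing exponential time-to-failure and time-to-repair durations."""
    series = np.ones(horizon_h, dtype=int)
    t, up = 0.0, True
    while t < horizon_h:
        dur = rng.exponential(mttf_h if up else mttr_h)
        if not up:
            series[int(t):min(int(t + dur), horizon_h)] = 0
        t += dur
        up = not up
    return series

# illustrative fleet: (name, mean time to failure, mean time to repair) in hours
components = [("line_1", 4000, 12), ("line_2", 4000, 12), ("gen_1", 1500, 48)]
horizon = 7 * 24                                   # one simulated week

scenario = {name: sequential_mc_state_series(mttf, mttr, horizon)
            for name, mttf, mttr in components}
outage_hours = {k: int((v == 0).sum()) for k, v in scenario.items()}
any_outage = int(np.any([v == 0 for v in scenario.values()], axis=0).sum())
print("outage hours per component:", outage_hours)
print("hours with at least one outage:", any_outage)
```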

23 pages, 2250 KiB  
Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
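One of the compared techniques, bootstrap-ensemble uncertainty, is easy to illustrate: several surrogate models are trained on resampled data and the spread of their predictions is used as the uncertainty estimate, which could then drive the informed sampling mentioned above. The gradient-boosting regressor and the one-parameter synthetic "dynamic aperture" function below are stand-ins, not the authors' surrogate models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

rng = np.random.default_rng(9)

# synthetic stand-in for dynamic-aperture data: DA vs. a single machine parameter
X = rng.uniform(0, 1, (300, 1))
y = 12 - 6 * X[:, 0] + np.sin(8 * X[:, 0]) + rng.normal(0, 0.3, 300)

# bootstrap ensemble: each member sees a resampled training set
ensemble = []
for seed in range(30):
    Xb, yb = resample(X, y, random_state=seed)
    ensemble.append(GradientBoostingRegressor(random_state=seed).fit(Xb, yb))

X_new = np.linspace(0, 1, 5).reshape(-1, 1)
preds = np.stack([m.predict(X_new) for m in ensemble])   # (members, points)
mean, std = preds.mean(axis=0), preds.std(axis=0)

for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"parameter={x:.2f}  predicted DA={m:.2f} +/- {s:.2f}")
# points with a large spread are natural targets for the active-learning retraining step
```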

20 pages, 774 KiB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) and there are extreme outliers in the response variables or covariates (e.g., p/n > 0.1). Traditional penalized regression techniques, however, exhibit notable vulnerability to data outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised resilience. To address this critical limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, this approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This innovative design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the need for ad hoc selection of optimal penalty parameters—a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of random errors in the model. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to data outlier contamination. Full article
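The composite quantile regression objective at the heart of the method can be written down without the Gibbs/EM/M-H machinery: the sketch below minimizes the summed check losses over several quantile levels plus an L1 penalty with scipy, on synthetic data containing extreme outliers. This is a simple frequentist stand-in for the Bayesian LASSO-CQR sampler; the penalty weight and data dimensions are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# sparse linear model with heavy outlier contamination in the response
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=2, size=n)        # heavy-tailed errors
y[rng.choice(n, 5, replace=False)] += 25                # extreme outliers

taus = np.arange(1, 10) / 10                            # composite quantile levels
lam = 2.0                                               # LASSO penalty weight

def cqr_lasso_obj(params):
    b = params[:p]
    intercepts = params[p:]                             # one intercept per quantile level
    r = y[:, None] - X @ b[:, None] - intercepts[None, :]
    check = np.where(r >= 0, taus * r, (taus - 1) * r)  # quantile check loss
    return check.sum() + lam * np.abs(b).sum()

x0 = np.zeros(p + len(taus))
res = minimize(cqr_lasso_obj, x0, method="Powell")
beta_hat = res.x[:p]
print("selected (|beta| > 0.1):", np.where(np.abs(beta_hat) > 0.1)[0])
print("estimates for first 3 coefficients:", np.round(beta_hat[:3], 2))
```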

15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas based on Bayes’ theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. This approach can significantly improve the prediction accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data is incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices. Full article
(This article belongs to the Section Ocean Engineering)
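The Bayes-updating idea can be illustrated with importance re-weighting of Monte Carlo prior samples: prior draws of peak resistance and its depth are re-weighted by the likelihood of early penetration measurements, sharpening the prediction as monitoring data accumulate. The soil-parameter distributions, the resistance formula, and the "measurements" below are all hypothetical placeholders, not the centrifuge database or the ISO 19905-1 method.

```python
import numpy as np

rng = np.random.default_rng(11)
n_prior = 50_000

# Monte Carlo prior: uncertain undrained strengths and stiff-layer thickness (illustrative)
su_stiff = rng.lognormal(np.log(60), 0.15, n_prior)   # kPa
su_soft = rng.lognormal(np.log(15), 0.20, n_prior)    # kPa
h_stiff = rng.normal(3.0, 0.4, n_prior)               # m

# placeholder empirical model for peak resistance and its depth (not a design formula)
q_peak = 6.0 * su_stiff * (1 + 0.2 * h_stiff) + 3.0 * su_soft
d_peak = 0.8 * h_stiff

# "monitoring data": resistance observed at shallow depths during penetration (hypothetical)
obs_depth = np.array([0.5, 1.0, 1.5])
obs_q = np.array([310.0, 340.0, 365.0])               # kPa
sigma_obs = 25.0

# likelihood of each prior sample under a simple linear mobilization model
q_model = q_peak[:, None] * (obs_depth[None, :] / d_peak[:, None]).clip(max=1.0)
logw = -0.5 * np.sum(((obs_q - q_model) / sigma_obs) ** 2, axis=1)
w = np.exp(logw - logw.max())
w /= w.sum()

def wmean(x, w): return np.sum(w * x)
print(f"prior     q_peak: {q_peak.mean():.0f} kPa, depth {d_peak.mean():.2f} m")
print(f"posterior q_peak: {wmean(q_peak, w):.0f} kPa, depth {wmean(d_peak, w):.2f} m")
```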

129 pages, 6810 KiB  
Review
Statistical Mechanics of Linear k-mer Lattice Gases: From Theory to Applications
by Julian Jose Riccardo, Pedro Marcelo Pasinetti, Jose Luis Riccardo and Antonio Jose Ramirez-Pastor
Entropy 2025, 27(7), 750; https://doi.org/10.3390/e27070750 - 14 Jul 2025
Abstract
The statistical mechanics of structured particles with arbitrary size and shape adsorbed onto discrete lattices presents a longstanding theoretical challenge, mainly due to complex spatial correlations and entropic effects that emerge at finite densities. Even for simplified systems such as hard-core linear k-mers, exact solutions remain limited to low-dimensional or highly constrained cases. In this review, we summarize the main theoretical approaches developed by our research group over the past three decades to describe adsorption phenomena involving linear k-mers—also known as multisite occupancy adsorption—on regular lattices. We examine modern approximations such as an extension to two dimensions of the exact thermodynamic functions obtained in one dimension, the Fractional Statistical Theory of Adsorption based on Haldane’s fractional statistics, and the so-called Occupation Balance based on expansion of the reciprocal of the fugacity, and hybrid approaches such as the semi-empirical model obtained by combining exact one-dimensional calculations and the Guggenheim–DiMarzio approach. For interacting systems, statistical thermodynamics is explored within generalized Bragg–Williams and quasi-chemical frameworks. Particular focus is given to the recently proposed Multiple Exclusion statistics, which capture the correlated exclusion effects inherent to non-monomeric particles. Applications to monolayer and multilayer adsorption are analyzed, with relevance to hydrocarbon separation technologies. Finally, computational strategies, including advanced Monte Carlo techniques, are reviewed in the context of high-density regimes. This work provides a unified framework for understanding entropic and cooperative effects in lattice-adsorbed polyatomic systems and highlights promising directions for future theoretical and computational research. Full article
(This article belongs to the Special Issue Statistical Mechanics of Lattice Gases)
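On the computational side, a minimal grand-canonical Monte Carlo sampler for non-interacting hard-core linear k-mers on a one-dimensional ring illustrates the kind of simulation the review discusses: insertion and deletion moves are accepted with standard Metropolis ratios, giving coverage as a function of chemical potential. The lattice size, k, and the sampled values of the reduced chemical potential are arbitrary; interacting and two-dimensional cases are not covered.

```python
import numpy as np

rng = np.random.default_rng(12)

def gcmc_kmers_1d(M=200, k=3, beta_mu=1.0, sweeps=20000):
    """Grand-canonical MC of hard-core linear k-mers on a 1-D ring of M sites."""
    z = np.exp(beta_mu)                 # activity
    occupied = np.zeros(M, dtype=bool)
    heads = []                          # left-end site of each adsorbed k-mer
    cov_samples = []
    for step in range(sweeps):
        if rng.random() < 0.5:          # insertion attempt at a random head site
            s = rng.integers(M)
            sites = [(s + j) % M for j in range(k)]
            if not occupied[sites].any() and rng.random() < min(1.0, z * M / (len(heads) + 1)):
                occupied[sites] = True
                heads.append(s)
        elif heads:                     # deletion attempt of a randomly chosen k-mer
            if rng.random() < min(1.0, len(heads) / (z * M)):
                i = rng.integers(len(heads))
                s = heads.pop(i)
                occupied[[(s + j) % M for j in range(k)]] = False
        if step > sweeps // 2:          # sample coverage after equilibration
            cov_samples.append(k * len(heads) / M)
    return np.mean(cov_samples)

for beta_mu in (-2.0, 0.0, 2.0, 4.0):
    print(f"beta*mu = {beta_mu:+.1f} -> coverage = {gcmc_kmers_1d(beta_mu=beta_mu):.3f}")
```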
