Search Results (8,721)

Search Parameters:
Keywords = Monte Carlo

14 pages, 1078 KB  
Article
Research on Spare Part Activation Strategy and Reliability Index Calculation of Cold Standby Voting Systems Under Weibull Distribution
by Ziwen Yang, Xiaochuan Ai, Longlong Liu and Jun Wu
Mathematics 2026, 14(9), 1533; https://doi.org/10.3390/math14091533 (registering DOI) - 30 Apr 2026
Abstract
This study investigates the impact of standby activation strategies on system reliability. The results show that a delayed activation strategy effectively improves system reliability. Additionally, to tackle the difficulty of deriving analytical solutions for reliability metrics under the Weibull distribution, a non-homogeneous Markov model based on the delayed activation strategy is introduced. The system’s residual life is modeled computationally using the state transition method. The numerical results suggest that the proposed method aligns closely with Monte Carlo simulations. It significantly improves computational efficiency while maintaining high accuracy, thus confirming its effectiveness. Full article
(This article belongs to the Special Issue Statistical Analysis and Data Science for Complex Data, 2nd Edition)
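As a rough companion to the abstract above, the sketch below estimates the reliability of a cold standby voting system by plain Monte Carlo, the baseline the proposed non-homogeneous Markov model is compared against. It is not the authors' implementation: the 2-out-of-3 configuration, the Weibull shape and scale values, the single spare, and the instantaneous spare activation are illustrative assumptions, and the "delayed" policy here simply holds the spare until the voting threshold would otherwise be violated.

```python
import numpy as np

rng = np.random.default_rng(42)

def system_lifetime(k, n, spares, shape, scale, strategy="delayed"):
    """One simulated lifetime of a k-out-of-n voting system with cold standby spares.

    Active units carry Weibull(shape, scale) lifetimes; a cold spare does not age
    until activated and is as good as new at activation. Under the "immediate"
    strategy a spare is activated at every unit failure; under the "delayed"
    strategy it is held back until the system would otherwise drop below k units."""
    fail_times = sorted(scale * rng.weibull(shape, n))   # absolute failure times of active units
    s = spares
    while True:
        t = fail_times.pop(0)                            # next unit failure
        need_spare = (strategy == "immediate") or (len(fail_times) < k)
        if need_spare and s > 0:
            s -= 1
            fail_times.append(t + scale * rng.weibull(shape))  # fresh spare starts aging at t
            fail_times.sort()
        elif len(fail_times) < k:
            return t                                     # fewer than k operating units: system failure

# Reliability at a mission time for a 2-out-of-3 voting system with one cold spare.
T_mission = 800.0
for strat in ("immediate", "delayed"):
    lives = np.array([system_lifetime(2, 3, 1, 1.8, 1000.0, strat) for _ in range(20_000)])
    print(f"{strat:9s}  R({T_mission:.0f}) ~ {(lives > T_mission).mean():.4f}")
```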
30 pages, 11635 KB  
Article
A Traffic-Density-Aware, Speed-Adaptive Control Strategy to Mitigate Traffic Congestion for New Energy Vehicle Networks
by Chia-Kai Wen and Chia-Sheng Tsai
World Electr. Veh. J. 2026, 17(5), 241; https://doi.org/10.3390/wevj17050241 (registering DOI) - 30 Apr 2026
Abstract
The rising market penetration of new energy vehicles (NEVs) is transforming urban traffic into a heterogeneous mix of battery electric (BEVs), hybrid electric (HEVs), and conventional fuel vehicles (FVs). For analytical brevity, traditional internal combustion engine vehicles (ICEVs) are hereafter referred to as ‘fuel vehicles (FVs)’ in the discussion of New Energy Vehicle (NEV) networks. This research investigates the efficacy of centralized coordination for NEVs within a localized region, as opposed to individualized speed control, in enhancing the mitigation of traffic congestion. Evaluating traffic efficiency and decarbonization strategies in such settings often requires extensive random sampling and Monte Carlo simulations over a large set of parameter combinations. However, conventional microscopic traffic simulators (e.g., SUMO), which rely on fine-grained modeling of vehicle dynamics and signal control, incur prohibitive computational time when scaled to large networks and numerous experimental scenarios. In this study, battery electric vehicles and hybrid electric vehicles are designed as density-aware vehicles, whose movement speed is adaptively adjusted according to the regional traffic density in their vicinity and the control parameter β. In contrast, fuel vehicles adopt a stochastic movement speed and, together with other vehicle types, exhibit either movement or stoppage in the lattice environment. This density-driven speed-adaptive control and lattice arbitration mechanism is intended to reproduce, in a simplified yet extensible manner, changes in mobility and traffic-flow stability under high-density traffic conditions. The simulation results indicate that, under the same Manhattan road network and vehicle-density conditions, tuning the β parameter of new energy vehicles to reduce their movement speed in high-density areas and to mitigate abrupt position changes can suppress traffic-flow oscillations, delay the onset of the congestion phase transition, and promote spatial equilibrium of traffic flow. Meanwhile, this study develops simplified energy-consumption and carbon emission models for battery electric vehicles, hybrid electric vehicles, and fuel vehicles, demonstrating that incorporating a speed-adaptive density strategy into mixed traffic flow not only helps alleviate abnormal congestion but also reduces potential energy use and carbon emissions caused by congestion and stop-and-go behavior. From a sensing and practical perspective, the proposed framework assumes that future connected and autonomous vehicles (CAVs) can estimate vehicle states and local traffic density through GNSS–IMU multi-sensor fusion and V2X communications, indicating methodological consistency between the proposed model and real-world CAV sensing capabilities and making it a suitable and effective experimental platform for investigating the relationships among new energy vehicle penetration, density-control strategies, and carbon footprint. Full article
(This article belongs to the Section Automated and Connected Vehicles)
21 pages, 8939 KB  
Article
Enhancing Battery Consistency Through Physics-Machine Learning Integration: A Calendering Process-Oriented Optimization Strategy
by Wenhao Zhu, Yankun Liao, Gang Wu and Fei Lei
Energies 2026, 19(9), 2186; https://doi.org/10.3390/en19092186 - 30 Apr 2026
Abstract
Manufacturing tolerances inevitably induce cell-to-cell inconsistencies. These inconsistent cells are connected in series and parallel to form battery packs, which affects the safety and reliability of the battery system. This study presents a novel optimization framework integrating the multi-level physical model with machine learning to improve battery consistency from the manufacturing perspective. The multi-level physical modeling approach is applied to establish the link between the parameter deviations of the calendering process and the battery inconsistency performance. Based on the multi-level physical model, the Monte Carlo method is used to describe parameter deviations and generate datasets of electrochemical properties. The coefficients of variation of battery capacity and resistance are calculated from these datasets as the consistency evaluation index. The proposed optimization approach applies machine learning to reduce the computational cost incurred by the large number of Monte Carlo runs of the multi-level physical simulations. Combining the multi-level physical model with the neural network model, the multi-objective particle swarm optimization algorithm is adopted to provide the optimal calendering process parameter deviations by trading off battery consistency performance against manufacturing cost. Results indicate that battery consistency performance is improved by controlling the precision of the calendering process alongside manufacturing cost. This approach provides effective feedback and guidance for the inverse design of the manufacturing process. Full article
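The following minimal sketch illustrates the Monte Carlo step described above: propagating calendering-process parameter deviations through a model and reporting the coefficient of variation of capacity as a consistency index. The capacity_surrogate function, the nominal thickness and porosity, and the deviation levels are hypothetical stand-ins for the paper's multi-level physical and neural-network models.

```python
import numpy as np

rng = np.random.default_rng(0)

def capacity_surrogate(thickness_um, porosity):
    """Hypothetical stand-in for the multi-level physical / neural-network model:
    maps calendered electrode thickness (um) and porosity to cell capacity (Ah)."""
    return 5.0 * (thickness_um / 70.0) * (1.0 - 0.8 * (porosity - 0.30))

def consistency_cv(sigma_thickness_um, sigma_porosity, n=50_000):
    """Propagate calendering-process deviations by Monte Carlo and return the
    coefficient of variation (CV) of capacity as a consistency index."""
    thickness = rng.normal(70.0, sigma_thickness_um, n)    # nominal 70 um after calendering
    porosity = rng.normal(0.30, sigma_porosity, n)         # nominal 30% porosity
    capacity = capacity_surrogate(thickness, porosity)
    return capacity.std(ddof=1) / capacity.mean()

for s_th, s_po in [(1.0, 0.005), (2.0, 0.010), (3.0, 0.015)]:
    print(f"sigma_thickness={s_th} um, sigma_porosity={s_po}: capacity CV = {consistency_cv(s_th, s_po):.4f}")
```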
21 pages, 1419 KB  
Article
Sim-Exact Methods for Stochastic Optimization: A Complementary Approach to Simheuristics
by Angel A. Juan, Antonio R. Uguina, Marc Escoto and Veronica Medina
Mathematics 2026, 14(9), 1518; https://doi.org/10.3390/math14091518 - 30 Apr 2026
Abstract
This paper introduces a sim-exact methodology for stochastic combinatorial optimization problems. The approach combines exact optimization models with Monte Carlo or discrete-event simulation to evaluate candidate solutions under uncertainty. The method iteratively adjusts a control parameter based on simulation feedback and solves a sequence of deterministic optimization problems. Unlike scenario-based stochastic programming, the approach does not rely on explicit scenario enumeration, and unlike simheuristics, it preserves optimality with respect to each deterministic subproblem. The methodology is tested on the vehicle routing problem with stochastic demands under different levels of demand variability. Results are compared with a simheuristic approach and a sample average approximation (SAA) method. The results show that sim-exact performance is comparable to simheuristics, with no statistically significant differences in most cases, while SAA shows weaker performance under medium and high variability. Full article
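A generic sim-exact loop of the kind described above can be sketched as follows: an exact deterministic subproblem is re-solved while a single control parameter is adjusted from Monte Carlo feedback until the simulated solution meets a service target. The inventory-style subproblem, the lognormal demand, and the 2% inflation step are illustrative assumptions; in the paper the subproblem is a vehicle routing model with stochastic demands.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stochastic demand and service-level target (assumed values).
MEAN_DEMAND, SIGMA_DEMAND, TARGET_SERVICE = 100.0, 0.35, 0.95

def solve_deterministic(safety_factor):
    """'Exact' deterministic subproblem: stock exactly the inflated deterministic demand.
    In the paper's VRP setting this would be a routing model solved to optimality
    for demands inflated by the current control parameter."""
    return MEAN_DEMAND * safety_factor

def simulate_service_level(stock, n=50_000):
    """Monte Carlo evaluation of a candidate solution under demand uncertainty."""
    demand = rng.lognormal(np.log(MEAN_DEMAND) - 0.5 * SIGMA_DEMAND**2, SIGMA_DEMAND, n)
    return float((demand <= stock).mean())

safety = 1.0
for it in range(40):
    stock = solve_deterministic(safety)        # exact solve for the current control parameter
    service = simulate_service_level(stock)    # simulation feedback on the exact solution
    print(f"iter {it:2d}: safety={safety:.3f}  stock={stock:.1f}  service={service:.3f}")
    if service >= TARGET_SERVICE:
        break
    safety *= 1.02                             # adjust control parameter and re-solve
```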
29 pages, 927 KB  
Article
Integrated PdM–OEE–LCC Framework: A Stochastic Control Approach for Industry 4.0 Systems
by Przemysław Drożyner and Małgorzata Jasiulewicz-Kaczmarek
Appl. Sci. 2026, 16(9), 4391; https://doi.org/10.3390/app16094391 - 30 Apr 2026
Abstract
In the Industry 4.0 era, effective maintenance management is paramount to ensuring production continuity, operational efficiency, and cost-effectiveness. Modern industrial systems operate under inherent uncertainty and limited observability, necessitating the development of sophisticated decision-support frameworks. This study introduces a comprehensive approach to optimizing maintenance control for industrial assets under stochastic degradation and partial observability. The framework integrates stochastic processes for degradation modeling with Overall Equipment Effectiveness (OEE) and Life Cycle Cost (LCC) analysis for multi-dimensional performance assessment. Maintenance interventions are governed by threshold-based strategies, where optimal service limits (Θ*) are determined through extensive Monte Carlo simulations. Furthermore, both local and global sensitivity analyses are employed to identify critical drivers of decision-making, such as failure penalties, process volatility, and maintenance efficacy. The model is extended to incorporate Digital Twin concepts, enhancing state estimation under noisy sensor data, and addresses multi-machine scenarios with resource constraints to reflect real-world operational complexities. Results indicate that failure costs and process uncertainty are the primary determinants of maintenance timing. Notably, Digital Twin integration significantly bolsters decision accuracy in the presence of measurement noise, providing a robust and scalable solution for modern manufacturing environments. Full article
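The threshold-search idea, finding an optimal service limit Θ* by simulation, can be illustrated with a deliberately simple degradation model: the sketch below scans candidate thresholds, simulates many degradation paths per threshold, and keeps the one with the lowest average maintenance-plus-failure cost. All parameters (failure limit, costs, exponential increments) are assumed for illustration and are not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed illustrative parameters, not the paper's calibration.
L_FAIL = 10.0        # degradation level at which the asset fails
C_PREV = 1.0         # cost of a preventive intervention
C_FAIL = 8.0         # penalty cost of running into failure
HORIZON = 500        # number of inspection periods per simulated history
INCR_MEAN = 0.25     # mean stochastic degradation increment per period

def mean_cost(theta, n_runs=1_000):
    """Average total cost over the horizon for preventive threshold `theta`:
    restore to as-good-as-new at C_PREV once degradation >= theta, or at C_FAIL
    if the failure limit has already been crossed."""
    total = 0.0
    for _ in range(n_runs):
        x = 0.0
        cost = 0.0
        for _ in range(HORIZON):
            x += rng.exponential(INCR_MEAN)
            if x >= L_FAIL:
                cost += C_FAIL
                x = 0.0
            elif x >= theta:
                cost += C_PREV
                x = 0.0
        total += cost
    return total / n_runs

thetas = np.linspace(3.0, 9.5, 14)
costs = [mean_cost(t) for t in thetas]
print("estimated optimal threshold Theta* ~", round(float(thetas[int(np.argmin(costs))]), 2))
```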
31 pages, 2774 KB  
Article
Economic Evaluation of Phased Digital Transformation Investments in SMEs: A Cost–Benefit Analysis in the Turkish Metal Processing Sector
by Sultan Gül Özdamar and Süleyman Ersöz
Adm. Sci. 2026, 16(5), 214; https://doi.org/10.3390/admsci16050214 - 30 Apr 2026
Abstract
This study examines how manufacturing SMEs can structure digital transformation as a strategic, risk-managed process under demand uncertainty and resource constraints. Integrating digital maturity assessment with cost–benefit analysis (D3A–CBA framework), the study evaluates a phased investment strategy at a Turkish metal processing SME, grounding the analysis in real production order data and firm-level financial records. The phased structure—informed by real options reasoning—conditions capacity expansion on measurable Phase-1 performance thresholds, thereby limiting downside risk while preserving strategic flexibility. Under the base scenario (10% real discount rate), Phase-1 yields an NPV of TRY 3,830,738 and an IRR of 12.4%; the combined portfolio reaches TRY 17,365,066. However, a 10,000-iteration Monte Carlo simulation reveals a 29.8–33.0% probability of negative NPV, and sensitivity analysis exposes an asymmetric risk profile in which moderate demand shocks—rather than cost shocks—drive non-viability. The findings demonstrate that digital transformation in resource-constrained SMEs requires not only positive financial returns but also strategic mechanisms to manage demand uncertainty, exchange rate volatility, and organizational adaptation. The proposed framework offers SME managers a reproducible, evidence-based approach to aligning investment decisions with strategic objectives while containing capital risk. Full article
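A Monte Carlo NPV check of the kind reported above (10,000 iterations, probability of negative NPV) reduces to a few lines; the sketch below uses placeholder cash flows and a ±15% demand-shock factor rather than the study's TRY figures and phased investment structure.

```python
import numpy as np

rng = np.random.default_rng(3)

def npv(cashflows, rate):
    """Net present value of yearly cash flows, with cashflows[0] occurring at t = 0."""
    t = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows) / (1.0 + rate) ** t))

def monte_carlo_npv(n_iter=10_000, rate=0.10, invest=-4_000_000.0,
                    base_inflow=1_200_000.0, years=5):
    """Perturb yearly inflows with a demand-shock factor and collect NPV draws."""
    results = np.empty(n_iter)
    for i in range(n_iter):
        demand_shock = rng.normal(1.0, 0.15, years)             # +/- 15% demand uncertainty
        flows = np.concatenate(([invest], base_inflow * demand_shock))
        results[i] = npv(flows, rate)
    return results

draws = monte_carlo_npv()
print(f"mean NPV = {draws.mean():,.0f}   P(NPV < 0) = {(draws < 0).mean():.1%}")
```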
30 pages, 17252 KB  
Article
From BIM to Digital Twin: A Data-Driven Closed-Loop Framework for Dynamic Construction Progress Management
by Han Wu, Zhaoyi Zeng, Yangfa Peng, Qi Yang, Hao Deng, Jian Yu and Peng Zhou
Buildings 2026, 16(9), 1788; https://doi.org/10.3390/buildings16091788 - 30 Apr 2026
Abstract
Traditional construction progress management is hindered by reliance on manual monitoring, delayed information feedback, and a lack of proactive correction capabilities. To address these issues, this study proposes a data-driven closed-loop framework for dynamic progress management leveraging Building Information Modeling (BIM) and Digital Twin (DT). The proposed framework is operationalized through three integrated modules: (i) a dynamic perception layer that synchronizes on-site conditions via IoT and digitized construction logs; (ii) a stochastic prediction engine coupling machine learning with Monte Carlo Simulation (MCS) to quantify delay risks; and (iii) an optimization module based on a Constraint Satisfaction Problem (CSP) model for automated strategy generation. The system’s efficacy was preliminarily evaluated through a prototype application on a primary school building project. Findings from this case indicate that the framework enables near real-time synchronization for schedule deviation warnings, effectively compressing the information latency from days to within a single management cycle. Furthermore, within the empirical scope of the case study, the implementation of DT-driven strategies was associated with a 3-day schedule advancement relative to the simulated baseline and a 15% reduction in the resource idle rate compared to the pre-deployment phase. This study provides a potential pathway for enhancing progress control precision in similar construction environments. Full article
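One building block of such a framework, Monte Carlo quantification of schedule delay risk, can be sketched independently of the BIM/DT stack: sample activity durations, roll them up along the schedule, and report exceedance probabilities. The serial four-activity chain and triangular duration ranges below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical serial activity chain: (optimistic, most likely, pessimistic) durations in days.
activities = [(3, 4, 7), (5, 6, 10), (2, 3, 5), (8, 10, 15)]
planned_duration = 24.0

def simulate_completion_times(n=100_000):
    """Sample triangular durations for each activity and sum them along the serial path."""
    total = np.zeros(n)
    for low, mode, high in activities:
        total += rng.triangular(low, mode, high, n)
    return total

durations = simulate_completion_times()
print(f"P(delay beyond plan) = {(durations > planned_duration).mean():.1%}")
print(f"P80 completion time  = {np.quantile(durations, 0.80):.1f} days")
```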
874 KB  
Proceeding Paper
Detection of Deteriorated Areas in Water Distribution Networks Exploiting Chlorine Measurements in a Bayesian Framework
by Benedetta Sansone, Alfonso Cozzolino, Roberta Padulano, Cristiana Di Cristo and Giuseppe Del Giudice
Eng. Proc. 2026, 135(1), 7; https://doi.org/10.3390/engproc2026135007 - 29 Apr 2026
Abstract
This study proposes a methodology to identify deteriorated pipes in water distribution networks using prior system information and routine chlorine residual data. While bulk chlorine decay k_bulk can be measured in laboratories, wall decay k_wall depends on pipe material, diameter, and ageing, particularly in unlined metallic pipes. Empirical data were used to estimate k_wall, which was integrated into a Bayesian inference framework solved with Markov Chain Monte Carlo. Applied to an Italian network with synthetic chlorine data, this method demonstrated effectiveness across three test scenarios, exploiting the contrast between k_wall and k_bulk to detect deteriorated pipes within a computationally efficient environment. Full article
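A minimal version of the Bayesian step, inferring a wall decay coefficient from chlorine residuals with Markov Chain Monte Carlo, is sketched below for a single pipe with first-order decay and a random-walk Metropolis sampler. The decay rates, travel times, prior, and noise level are assumed values, and the network hydraulics of the actual method are not represented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed single-pipe setup: first-order chlorine decay with known bulk coefficient.
k_bulk, c0, sigma_obs = 0.10, 1.0, 0.02              # 1/h, mg/L, mg/L
travel_time = np.array([0.5, 1.0, 2.0, 3.0, 4.0])    # hours of travel to each sensor
k_wall_true = 0.30                                    # value used to generate synthetic data
obs = c0 * np.exp(-(k_bulk + k_wall_true) * travel_time) + rng.normal(0, sigma_obs, travel_time.size)

def log_posterior(k_wall):
    """Gaussian likelihood for the observed residuals plus an exponential prior on k_wall."""
    if k_wall <= 0:
        return -np.inf
    pred = c0 * np.exp(-(k_bulk + k_wall) * travel_time)
    log_lik = -0.5 * np.sum(((obs - pred) / sigma_obs) ** 2)
    log_prior = -k_wall / 0.5                          # exponential prior with mean 0.5 1/h
    return log_lik + log_prior

# Random-walk Metropolis sampler.
k, lp, chain = 0.2, log_posterior(0.2), []
for _ in range(20_000):
    k_prop = k + rng.normal(0, 0.05)
    lp_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

posterior = np.array(chain[5_000:])                    # discard burn-in
print(f"posterior k_wall: mean = {posterior.mean():.3f}, "
      f"95% CI = ({np.quantile(posterior, 0.025):.3f}, {np.quantile(posterior, 0.975):.3f})")
```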
40 pages, 42115 KB  
Article
Artificial Intelligence for Learning 2D Debris-Flow Dynamics: Application of Fourier Neural Operators and Synthetic Data to a Case Study in Central Italy
by Mauricio Secchi, Antonio Pasculli and Nicola Sciarra
Land 2026, 15(5), 759; https://doi.org/10.3390/land15050759 - 29 Apr 2026
Abstract
Physics-based simulation of debris flows over complex terrain is essential for hazard assessment, but repeated numerical integration is costly when many scenarios must be explored. We develop a general deep-learning surrogate modelling framework for two-dimensional (2D) debris-flow propagation, here applied to the Morino–Rendinara area (central Italy) using a three-dimensional (3D) Fourier Neural Operator (FNO) trained on synthetic simulations generated by a validated in-house finite-volume shallow-water solver. The solver reproduces debris-flow propagation over complex terrain and is specifically developed for artificial intelligence (AI) applications. It is based on a depth-averaged 2D formulation using the Harten–Lax–van Leer–Contact (HLLC) approximate Riemann solver, hydrostatic reconstruction, positivity-preserving wet–dry treatment, and Voellmy-type basal friction, and was verified through analytical benchmarks, numerical tests, and back-analyses of real events. The dataset was built from four site-specific release settings derived from real topography, combining different released volumes and bulk densities while preserving local geomorphological and rheological characteristics. Each simulation was stored as a full spatio-temporal tensor and used to train an FNO conditioned on coordinates, topography, friction parameters, bulk density, and initial release thickness. Training used a novel loss to emphasize active-flow areas and improve velocity reconstruction, and was performed using a graphics processing unit (GPU). The surrogate shows effective generalization to within-distribution validation samples, with global relative mean squared errors of 5.49% for flow thickness, 5.34% for velocity component u, and 2.60% for v, and mean R² values of 0.95, 0.94, and 0.97. For a representative sample, the surrogate predicts the full spatio-temporal solution in 0.52 s, versus about 47 s for the first-order finite-volume solver, corresponding to a speed-up of about 91×, with an even larger gap expected for higher-order solvers, since, whilst the computation time of the solver increases as its complexity increases, the computation time of the FNO remains essentially unchanged. These results indicate that the proposed FNO is a reliable site-specific surrogate for rapid approximation of 2D debris-flow dynamics over real terrain, with potential for uncertainty propagation, Monte Carlo analysis, large-ensemble simulation, and hazard-oriented scenario assessment. Full article
29 pages, 10117 KB  
Article
A Multi-Source Geospatial Framework for the Evaluation of Urban Flood Resilience Under Extreme Rainfall: Evidence from Chongqing, China
by Tao Yang, Yingxia Yun, Fengliang Tang and Xiaolei Zheng
Water 2026, 18(9), 1067; https://doi.org/10.3390/w18091067 - 29 Apr 2026
Abstract
Mountainous megacities face a distinctive form of pluvial waterlogging in which terrain-controlled flow convergence, accelerating imperviousness, and aging drainage interact to produce chronic, spatially clustered failures rather than stochastic events. Existing frameworks, such as hydrodynamic modeling, data-driven machine learning, and multi-criteria composite indexing, carry distinctive failure modes at the municipal scale. This study develops and externally validates a city-wide, grid-based assessment framework for Chongqing, China, through three integrated choices. First, resilience is reformulated as a stabilized adaptation-to-risk ratio and subjected to an explicit falsification test against independent waterlogging observations. Second, multi-source hydroclimatic, topographic–hydrologic, land-cover, and service-accessibility indicators are integrated on a 500 m fishnet (22,500 cells) through within-component CRITIC–Entropy weighting and TOPSIS, with robustness diagnosed by a 500-iteration Monte Carlo weight-perturbation analysis. Third, a spatially grouped LightGBM classifier with SHAP interpretation serves both as an independent validation layer and as a mechanistic lens on non-linear driver thresholds. The composite risk surface achieves ROC-AUC values of 0.834 and 0.873 against two independent waterlogging registries, is strongly spatially clustered (Moran’s I = 0.81, p < 0.001), and preserves its ranking under aggressive weight perturbation (Spearman ρ ≥ 0.95 in 95% of scenarios). A counterintuitive finding emerges from the falsification test: resilience yields ROC-AUC below 0.5 on both point sets, indicating that accessibility-based capacity proxies systematically capture urban centrality rather than drainage robustness, a diagnosable measurement problem that affects the wider resilience-index literature. LightGBM concentrates 88.0% of waterlogging cells within the top 10% of scored grids, and SHAP-derived thresholds align with saturation-ponding, well-drained, and convergence–hotspot regimes of classical hydrology. Together, these results reframe waterlogging assessment in complex terrain from a cartographic exercise into a falsifiable, resource-aware prioritization framework, and clarify why capacity maps and risk maps should be published as complementary instruments of flood governance. Full article
(This article belongs to the Section Urban Water Management)
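The weight-perturbation robustness check described above (500 Monte Carlo iterations, Spearman ρ on the resulting rankings) can be reproduced in miniature on synthetic data; the basic TOPSIS scoring, the toy 200-cell decision matrix, and the ±20% weight jitter below are illustrative assumptions, not the study's CRITIC–Entropy weights or indicators.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(13)

def topsis(X, w):
    """Basic TOPSIS score for a benefit-only criteria matrix X (rows = spatial units)."""
    Z = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion column
    V = Z * w                                # apply criterion weights
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Toy decision matrix: 200 grid cells x 5 risk indicators (synthetic data).
X = rng.random((200, 5))
w0 = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
base_score = topsis(X, w0)

# 500-iteration Monte Carlo weight perturbation: jitter weights, renormalize,
# re-score, and compare the induced ranking with the baseline via Spearman's rho.
rhos = []
for _ in range(500):
    w = np.clip(w0 * rng.normal(1.0, 0.2, w0.size), 1e-6, None)
    w /= w.sum()
    rhos.append(spearmanr(base_score, topsis(X, w))[0])
rhos = np.array(rhos)
print(f"share of perturbed runs with Spearman rho >= 0.95: {(rhos >= 0.95).mean():.0%}")
```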
28 pages, 6364 KB  
Article
Data-Driven Bedload Inference from RFID Pebble Tracing in a Pre-Alpine Stream
by Oleksandr Didkovskyi, Monica Corti, Monica Papini, Alessandra Menafoglio and Laura Longoni
Water 2026, 18(9), 1064; https://doi.org/10.3390/w18091064 - 29 Apr 2026
Abstract
We analyse pebble RFID tracing observations to investigate sediment transport dynamics in gravel-bed rivers using statistical modelling. This study examines a dataset of nearly 3500 tracer displacement measurements collected during 27 sediment-mobilizing events in a pre-Alpine reach in Italy. Our analysis follows three main steps, addressing tracer mobility patterns, event-scale transport dynamics, and reach-scale bedload inference. First, using Markov Chain analysis of state transitions on typical and high-magnitude transport events, we demonstrate that pebbles tend to maintain their mobility state between events, characterizing the between-event intermittency of bedload transport. A subsequent analysis of flow characteristics reveals that consecutive floods of similar magnitude exhibit increasing movement probability while maintaining similar virtual velocities. Finally, we train Gradient Boosting regression models to estimate distributions of pebble displacements and virtual velocities (defined, following common usage, as the ratio between the distance a tracer travels during a mobilising event and the duration of that event). Together with Monte Carlo propagation, these models are used to derive reach-scale volume estimates. The models identify flow rate and event duration as primary controls, while grain size has minimal influence within the sampled range of tracer dimensions. To strengthen our approach, we implement an extensive multi-stage validation process aimed at both single-tracer predictions and overall basin-scale movement estimates. The results indicate that high-magnitude transport events (12% of observations) contribute similar bedload volumes as typical events (88% of observations), highlighting the significant role of extreme events in total sediment transport. Model predictions yield bedload volume estimates that align well with independent measurements from a downstream sediment retention basin. Full article
(This article belongs to the Section Water Erosion and Sediment Transport)
24 pages, 502 KB  
Article
QML Inference for Spatio-Temporal GARCH Models with Spatial Volatility Interactions
by Khaoula Aouati, Soumia Kharfouchi, Khudhayr A. Rashedi, Tariq S. Alshammari and Abdullah H. Alenezy
Mathematics 2026, 14(9), 1507; https://doi.org/10.3390/math14091507 - 29 Apr 2026
Abstract
We propose a new class of spatio-temporal GARCH models designed to capture volatility dynamics that propagate jointly across time and space. Existing spatio-temporal GARCH formulations typically account for either lagged spatial spillovers or contemporaneous interactions separately, and therefore fail to capture the combined effect of instantaneous spatial volatility feedback and its propagation over time. To address this gap, we introduce a unified framework that incorporates both contemporaneous and lagged spatial volatility interactions within a single coherent model. At each time point, conditional variances evolve according to a temporal GARCH recursion combined with both contemporaneous and lagged spatial volatility interactions defined on a lattice. This structure allows volatility shocks to diffuse instantaneously across neighboring locations and persist over time through spatially structured feedback mechanisms, extending existing spatial and spatio-temporal GARCH formulations. We establish sufficient conditions for the existence of a unique strictly stationary and ergodic solution based on contraction properties of a combined spatial–temporal operator. Statistical inference is conducted via Gaussian quasi-maximum likelihood estimation (QMLE). We derive consistency and asymptotic normality of the QMLE under two asymptotic regimes: (i) increasing temporal domain with fixed spatial size, and (ii) joint asymptotics where both the number of time periods and spatial locations diverge. In both cases, the asymptotic covariance matrix admits a standard sandwich form and can be consistently estimated. An extensive Monte Carlo study confirms the theoretical results. The simulations show that the QMLE performs well even under strong spatial and temporal persistence and remains robust to heavy-tailed innovations. In particular, increasing the spatial domain substantially improves estimation accuracy, highlighting the efficiency gains induced by spatial information. The proposed model provides a flexible and tractable framework for analyzing volatility processes evolving jointly in time and space. Full article
(This article belongs to the Section D1: Probability and Statistics)
15 pages, 563 KB  
Article
Leveraging ChatGPT for Vancomycin Therapeutic Drug Monitoring: Simulation Using Bayesian Estimation and Hyperparameter Optimization
by Akira Kageyama, Takahiko Aoyama, Rikuya Maehara, Dai Harada, Takashi Kawakubo and Yasuhiro Tsuji
Sci. Pharm. 2026, 94(2), 34; https://doi.org/10.3390/scipharm94020034 - 29 Apr 2026
Abstract
The usefulness of ChatGPT, a large language model, has recently been explored in medical research. However, no studies have examined its reproducibility or applicability to therapeutic drug monitoring (TDM), a core task of clinical pharmacists. In this simulation study, we evaluated the feasibility of using ChatGPT for vancomycin (VCM) TDM based on Bayesian estimation. A total of 1000 virtual patients were generated by Monte Carlo simulations using a population pharmacokinetic model of VCM. Bayesian-estimated pharmacokinetic parameters and predicted concentrations were input into ChatGPT, and dosage regimens were compared across three conditions using temperature as a hyperparameter (T = 0.1, 0.5, and 1.0). Reproducibility was evaluated using the mode percentage in repeated runs. The reproducibility of the ChatGPT output was higher at T = 0.1 than at T = 0.5 and T = 1.0. When ChatGPT simulated the mode-recommended regimen (T = 0.1), the target attainment rate for the area under the serum concentration–time curve (AUC, 400–600 mg·h/L) improved from 25.5% (pre-optimization AUC, fixed-dose regimen) to 71.5% (post-optimization AUC, ChatGPT-guided regimen). These findings demonstrate that ChatGPT-based TDM using Bayesian estimation can enhance dose optimization. Adjusting the hyperparameter temperature to 0.1 improved reproducibility, suggesting that a reliable ChatGPT-assisted TDM support system may be clinically useful. Full article
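The virtual-patient step, Monte Carlo generation of patients and evaluation of AUC target attainment, can be illustrated with the steady-state relation AUC24 = daily dose / clearance and log-normal between-patient variability on clearance; the clearance and variability values below are assumptions, not the published population pharmacokinetic model.

```python
import numpy as np

rng = np.random.default_rng(21)

def auc_target_attainment(daily_dose_mg, n_patients=1_000, cl_typical=3.5, omega_cl=0.35):
    """Fraction of virtual patients whose steady-state AUC24 falls in 400-600 mg*h/L.

    Uses AUC24 = daily dose / CL with log-normal between-patient variability on
    clearance; cl_typical (L/h) and omega_cl are illustrative values, not the
    published vancomycin population pharmacokinetic model."""
    cl = cl_typical * np.exp(rng.normal(0.0, omega_cl, n_patients))   # individual clearances
    auc24 = daily_dose_mg / cl
    return np.mean((auc24 >= 400) & (auc24 <= 600))

for dose in (2000, 2500, 3000):
    print(f"daily dose {dose} mg: AUC target attainment = {auc_target_attainment(dose):.1%}")
```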
20 pages, 2207 KB  
Article
Critical Benchmark Validation of the Core Physics Multigroup Cross-Section Library TPEX
by Ying Chen, Haicheng Wu, Lili Wen, Yue Xiao, Jinchao Zhang, Qian Zhang, Xiaofei Wu and Huanyu Zhang
Energies 2026, 19(9), 2143; https://doi.org/10.3390/en19092143 - 29 Apr 2026
Abstract
Core physics multigroup cross-section libraries provide essential cross-section and burnup data for reactor neutron physics calculations, serving as a fundamental prerequisite for reactor physics analysis. The China Nuclear Data Center has developed the TPEX multigroup cross-section library for pressurized water reactors (PWRs) based on the Chinese Evaluated Nuclear Data Library CENDL-3.2. A systematic critical benchmark validation of the newly developed TPEX library has been performed. To verify its applicability and accuracy, the validation was conducted against 131 critical benchmark experiments from the International Criticality Safety Benchmark Evaluation Project (ICSBEP 2006) and the WIMS-D library update project. The calculated effective multiplication factors (k_eff) are compared with the experimental values, results from equivalent multigroup libraries, and reference solutions from a Monte Carlo code. The results indicate that the absolute average deviations between the k_eff values calculated with the TPEX library and the experimental measurements are 280 pcm for the uranium solution experiments, 410 pcm for the plutonium solution experiments, 10 pcm for the uranium metal lattice experiments, 20 pcm for the uranium dioxide lattice experiments, 22 pcm for the MOX fuel lattice experiments, and 150 pcm for the LCT001 uranium oxide assembly experiments. Accordingly, the TPEX library demonstrates excellent performance in reactivity predictions for PWRs. Full article
29 pages, 1860 KB  
Article
Confidence Intervals for Parameter Variance of Zero-Inflated Two-Parameter Rayleigh Distribution
by Sasipong Kijsason, Sa-Aat Niwitpong and Suparat Niwitpong
Symmetry 2026, 18(5), 765; https://doi.org/10.3390/sym18050765 - 29 Apr 2026
Abstract
This study develops confidence and credible intervals for the variance of the zero-inflated two-parameter Rayleigh distribution, a flexible model for non-negative data with excess zeros. Seven approaches are proposed: Bayesian Markov chain Monte Carlo (MCMC), Bayesian highest posterior density (HPD), the standard confidence interval, the normal approximation, the percentile bootstrap, the bootstrap method with standard error, and the generalized confidence interval (GCI). Their performance is assessed through Monte Carlo simulation using coverage probability (CP) and expected length (EL). The results show that the Bayesian HPD interval performs best overall, attaining coverage close to the nominal level while yielding shorter intervals than the alternatives, especially for small samples. The methods are illustrated with road traffic fatality data from Chiang Mai Province, Thailand, recorded in March 2024. These findings support the practical usefulness of the HPD approach for variance interval estimation in zero-inflated continuous models. Full article
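Of the seven interval methods compared above, the percentile bootstrap is the simplest to sketch: resample the data with replacement, recompute the variance each time, and take the empirical quantiles. The zero-inflated sample below is synthetic (a zero with probability 0.3, otherwise a Rayleigh draw), not the Chiang Mai fatality data.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic zero-inflated sample: an exact zero with probability 0.3,
# otherwise a positive Rayleigh draw (illustrative, not the road-fatality data).
p_zero, sigma, n = 0.3, 2.0, 60
x = np.where(rng.random(n) < p_zero, 0.0, rng.rayleigh(sigma, n))

def percentile_bootstrap_ci(data, stat=np.var, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap interval for a statistic (here, the sample variance)."""
    boot = np.array([stat(rng.choice(data, data.size, replace=True)) for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_bootstrap_ci(x)
print(f"sample variance = {x.var():.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```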