Search Results (2,300)

Search Parameters:
Keywords = Monte Carlo approach

16 pages, 993 KB  
Article
TSS GAZ PTP: Towards Improving Gumbel AlphaZero with Two-Stage Self-Play for Multi-Constrained Electric Vehicle Routing Problems
by Hui Wang, Xufeng Zhang and Chaoxu Mu
Smart Cities 2026, 9(2), 21; https://doi.org/10.3390/smartcities9020021 - 23 Jan 2026
Abstract
Deep reinforcement learning (DRL) with self-play has emerged as a promising paradigm for solving combinatorial optimization (CO) problems. The recently proposed Gumbel AlphaZero Plan-to-Play (GAZ PTP) framework adopts a competitive training setup between a learning agent and an opponent to tackle classical CO tasks such as the Traveling Salesman Problem (TSP). However, in complex and multi-constrained environments like the Electric Vehicle Routing Problem (EVRP), standard self-play often suffers from opponent mismatch: when the opponent is either too weak or too strong, the resulting learning signal becomes ineffective. To address this challenge, we introduce Two-Stage Self-Play GAZ PTP (TSS GAZ PTP), a novel DRL method designed to maintain adaptive and effective learning pressure throughout the training process. In the first stage, the learning agent, guided by Gumbel Monte Carlo Tree Search (MCTS), competes against a greedy opponent that follows the best historical policy. As training progresses, the framework transitions to a second stage in which both agents employ Gumbel MCTS, thereby establishing a dynamically balanced competitive environment that encourages continuous strategy refinement. The primary objective of this work is to develop a robust self-play mechanism capable of handling the high-dimensional constraints inherent in real-world routing problems. We first validate our approach on the TSP, a benchmark used in the original GAZ PTP study, and then extend it to the multi-constrained EVRP, which incorporates practical limitations including battery capacity, time windows, vehicle load limits, and charging infrastructure availability. The experimental results show that TSS GAZ PTP consistently outperforms existing DRL methods, with particularly notable improvements on large-scale instances. Full article
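
As background for the Gumbel MCTS component mentioned above, the short Python sketch below illustrates the Gumbel-top-k trick that Gumbel AlphaZero-style methods use at the search root: adding independent Gumbel noise to policy logits and keeping the top-k indices yields a sample of k actions without replacement. This is a generic illustration under that assumption, not the authors' TSS GAZ PTP code; the logits are arbitrary placeholders.

```python
import numpy as np

def gumbel_top_k(logits, k, rng):
    """Sample k distinct actions (without replacement) from a categorical
    policy by perturbing logits with Gumbel noise and taking the top-k."""
    gumbels = rng.gumbel(size=logits.shape)   # G_i ~ Gumbel(0, 1)
    perturbed = logits + gumbels              # argmax is distributed as softmax(logits)
    return np.argsort(-perturbed)[:k]         # top-k indices = sample without replacement

rng = np.random.default_rng(0)
logits = np.log(np.array([0.5, 0.2, 0.2, 0.05, 0.05]))  # illustrative root policy
print(gumbel_top_k(logits, k=3, rng=rng))
```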

30 pages, 8054 KB  
Article
A New, Discrete Model of Lindley Families: Theory, Inference, and Real-World Reliability Analysis
by Refah Alotaibi and Ahmed Elshahhat
Mathematics 2026, 14(3), 397; https://doi.org/10.3390/math14030397 - 23 Jan 2026
Abstract
Recent developments in discrete probability models play a crucial role in reliability and survival analysis when lifetimes are recorded as counts. Motivated by this need, we introduce the discrete ZLindley (DZL) distribution, a novel discretization of the continuous ZL law. Constructed using a survival-function approach, the DZL retains the analytical tractability of its continuous parent while simultaneously exhibiting a monotonically decreasing probability mass function and a strictly increasing hazard rate—properties that are rarely achieved together in existing discrete models. We derive key statistical properties of the proposed distribution, including moments, quantiles, order statistics, and reliability indices such as stress–strength reliability and the mean residual life. These results demonstrate the DZL’s flexibility in modeling skewness, over-dispersion, and heavy-tailed behavior. For statistical inference, we develop maximum likelihood and symmetric Bayesian estimation procedures under censored sampling schemes, supported by asymptotic approximations, bootstrap methods, and Markov chain Monte Carlo techniques. Monte Carlo simulation studies confirm the robustness and efficiency of the Bayesian estimators, particularly under informative prior specifications. The practical applicability of the DZL is illustrated using two real datasets: failure times (in hours) of 18 electronic systems and remission durations (in weeks) of 20 leukemia patients. In both cases, the DZL provides substantially better fits than nine established discrete distributions. By combining structural simplicity, inferential flexibility, and strong empirical performance, the DZL distribution advances discrete reliability theory and offers a versatile tool for contemporary statistical modeling. Full article
(This article belongs to the Special Issue Statistical Models and Their Applications)
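The survival-function construction named in this abstract can be illustrated generically: a discrete pmf is obtained from a continuous parent via P(X = k) = S(k) − S(k + 1). The ZLindley survival function is not reproduced in this listing, so the sketch below uses the classical one-parameter Lindley survival function S(x) = (1 + θx/(1 + θ))e^(−θx) purely as a stand-in parent; it shows the discretization recipe, not the DZL itself.

```python
import numpy as np

def lindley_sf(x, theta):
    """Survival function of the continuous one-parameter Lindley law,
    used here only as a stand-in parent distribution."""
    return (1.0 + theta * x / (1.0 + theta)) * np.exp(-theta * x)

def discretize_by_survival(sf, kmax, **params):
    """Survival discretization: P(X = k) = S(k) - S(k + 1), k = 0, 1, ..."""
    k = np.arange(kmax + 1)
    pmf = sf(k, **params) - sf(k + 1, **params)
    return k, pmf

k, pmf = discretize_by_survival(lindley_sf, kmax=20, theta=0.5)
print(pmf[:5], pmf.sum())   # pmf is decreasing; the sum approaches 1 as kmax grows
```
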
10 pages, 863 KB  
Article
Destruction/Inactivation of SARS-CoV-2 Virus Using Ultrasound Excitation: A Preliminary Study
by Almunther Alhasawi, Fajer Alassaf and Alshimaa Hassan
Viruses 2026, 18(2), 152; https://doi.org/10.3390/v18020152 - 23 Jan 2026
Abstract
SARS-CoV-2, the causative virus of the COVID-19 pandemic, is a highly transmissible, enveloped, single-stranded RNA virus that has mutated into several variants, complicating vaccine strategies and contributing to drug resistance. Novel treatment modalities targeting conserved structural vulnerabilities are essential to combat these variants. The primary aim of the current study is to test the mechanical vulnerability of the SARS-CoV-2 virus envelope and spike proteins to focused, high-frequency ultrasound waves (25 MHz) in vitro. The study used a preliminary pretest–posttest design and was conducted on a virus sample within a distilled water matrix under controlled laboratory biosafety conditions. Since detailed imaging tools were unavailable, viral disruption was indirectly measured using real-time PCR cycle threshold (Ct) values. Ct values increased significantly after high-frequency ultrasound exposure, indicating a reduction in amplifiable viral genomic material. A paired t-test indicated a significant difference between the pretest and posttest Ct (p < 0.001), supported by Monte Carlo test results that revealed statistically significant shifts in viral load categories (p = 0.001, two-sided). Specifically, 85.7% of high-viral-load samples converted to low or moderate content, and 46.7% of low or moderate samples shifted to negative content. This intervention produced a large effect size (Cohen’s d = 2.422). These results indicate that ultrasound may offer a promising non-pharmacological approach to destroy or inactivate SARS-CoV-2 variants in an aqueous environment. Full article
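
The pre/post analysis reported here (a paired t-test plus a paired-samples effect size) can be reproduced on any pre/post Ct table. The sketch below uses made-up Ct values, not the study's data, and computes Cohen's d for paired samples as mean(diff)/sd(diff); it is only a worked illustration of the statistics named in the abstract.

```python
import numpy as np
from scipy import stats

# Illustrative pre/post cycle-threshold (Ct) values -- NOT the study's data.
ct_pre  = np.array([18.2, 21.5, 24.1, 19.8, 27.3, 22.6, 25.0, 20.4])
ct_post = np.array([26.9, 30.2, 31.8, 27.5, 35.1, 29.4, 33.0, 28.8])

t_stat, p_value = stats.ttest_rel(ct_post, ct_pre)     # paired t-test on Ct values
diff = ct_post - ct_pre
cohens_d = diff.mean() / diff.std(ddof=1)               # paired-samples effect size

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```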

17 pages, 1563 KB  
Article
Assessing Methane Emission Patterns and Sensitivities at High-Emission Point Sources in China via Gaussian Plume Modeling
by Haomin Li, Ning Wang, Lingling Ma, Yongguang Zhao, Jiaqi Hu, Beibei Zhang, Jingmei Li and Qijin Han
Environments 2026, 13(1), 62; https://doi.org/10.3390/environments13010062 - 22 Jan 2026
Abstract
Accurate quantification of methane (CH4) emissions from individual point sources is essential for understanding localized greenhouse gas dynamics and supporting mitigation strategies. This study employs satellite-based point-source emission rate data from the Carbon Mapper initiative, combined with ERA5 meteorological reanalysis, to simulate near-surface CH4 dispersion using a Gaussian plume model coupled with Monte Carlo simulations. This approach captures local dispersion characteristics around each emission source. Simulations driven by these emission inputs reveal a highly skewed, heavy-tailed concentration distribution (consistent with log-normal characteristics), where the 95th percentile (1292.1 ppm) significantly exceeds the mean (475.9 ppm), indicating the dominant influence of a small number of super-emitters. Sectoral analysis shows that coal mining contributes the most high-emission sites, while the solid waste and oil & gas sectors present higher per-source intensities, averaging 1931.1 ppm and 1647.6 ppm, respectively. Spatially, emissions are concentrated in North and Northwest China, particularly Shanxi Province, which hosts 62 high-emission sites with an average maximum of 1583.9 ppm. Sensitivity analysis reveals that emission rate perturbations produce nearly linear responses in concentration, whereas wind speed variations induce an inverse and asymmetric nonlinear response, with sensitivity amplified under low wind speed conditions (a ±30% change in wind speed results in more than ±25% variation in concentration). Under stable atmospheric conditions (Class E), concentrations are approximately 1.3 times higher than those under weakly unstable conditions (Class C). Monte Carlo simulations further indicate that output uncertainty peaks within 150–300 m downwind of emission sources. These results provide a quantitative basis for improving uncertainty characterization in satellite-based methane inversion and for prioritizing risk-based monitoring strategies. Full article
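
The coupling described above, a Gaussian plume dispersion kernel driven by Monte Carlo-sampled inputs, can be sketched as follows. The plume expression is the standard ground-level form with ground reflection; the dispersion-coefficient fits, emission rate, and wind-speed distribution are illustrative placeholders, not Carbon Mapper or ERA5 values, and the output units differ from the ppm figures quoted in the abstract.

```python
import numpy as np

def gaussian_plume_ground(q_kg_s, u, x, y=0.0, h_stack=10.0):
    """Ground-level concentration (kg/m^3) of a continuous point source, using the
    standard Gaussian plume formula with full ground reflection. sigma_y, sigma_z
    use illustrative power-law fits (roughly neutral stability)."""
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    lateral  = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = 2.0 * np.exp(-h_stack**2 / (2.0 * sigma_z**2))   # z = 0 with reflection
    return q_kg_s / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

rng = np.random.default_rng(42)
n = 10_000
q = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)   # emission rate, kg/s (placeholder)
u = np.clip(rng.weibull(2.0, size=n) * 4.0, 0.5, None)    # wind speed, m/s (placeholder)

conc = gaussian_plume_ground(q, u, x=200.0)               # 200 m downwind
print(f"mean = {conc.mean():.2e} kg/m^3, 95th pct = {np.percentile(conc, 95):.2e} kg/m^3")
```

The skewed gap between the mean and the 95th percentile in such a run is the same heavy-tailed behavior the abstract attributes to a small number of super-emitters.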

28 pages, 5265 KB  
Article
Research on Energy Futures Hedging Strategies for Electricity Retailers’ Risk Based on Monthly Electricity Price Forecasting
by Weiqing Sun and Chenxi Wu
Energies 2026, 19(2), 552; https://doi.org/10.3390/en19020552 - 22 Jan 2026
Abstract
The widespread adoption of electricity market trading platforms has enhanced the standardization and transparency of trading processes. As markets become more liberalized, regulatory policies are phasing out protective electricity pricing mechanisms, leaving retailers exposed to price volatility risks. In response, demand for risk management tools has grown significantly. Futures contracts serve as a core instrument for managing risks in the energy sector. This paper proposes a futures-based risk hedging model grounded in electricity price forecasting. A price prediction model is constructed using historical data from electricity markets and energy futures, with SHAP values used to analyze the transmission effects of energy futures prices on monthly electricity trading prices. The Monte Carlo simulation method, combined with a t-GARCH model, is applied to calculate CVaR and determine optimal portfolio weights for futures products. This approach captures the volatility clustering and fat-tailed characteristics typical of energy futures returns. To validate the model’s effectiveness, an empirical analysis is conducted using actual market data. By forecasting electricity price trends and formulating futures strategies, the study evaluates the hedging and profitability performance of futures trading under different market conditions. Results show that the proposed model effectively mitigates risks in volatile market environments. Full article
(This article belongs to the Section C: Energy Economics and Policy)
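The risk step described above, Monte Carlo simulation of fat-tailed returns followed by a CVaR read-off, can be sketched generically. The GARCH(1,1) parameters, Student-t degrees of freedom, and horizon below are illustrative assumptions, not values calibrated to the paper's electricity or futures data.

```python
import numpy as np

def simulate_garch_t_paths(n_paths, horizon, omega, alpha, beta, nu, sigma0, rng):
    """Simulate return paths from a GARCH(1,1) with Student-t innovations
    (scaled to unit variance); returns cumulative return per path."""
    sigma2 = np.full(n_paths, sigma0**2)
    cum_ret = np.zeros(n_paths)
    for _ in range(horizon):
        z = rng.standard_t(nu, size=n_paths) * np.sqrt((nu - 2.0) / nu)  # unit-variance t
        r = np.sqrt(sigma2) * z
        cum_ret += r
        sigma2 = omega + alpha * r**2 + beta * sigma2
    return cum_ret

rng = np.random.default_rng(7)
returns = simulate_garch_t_paths(n_paths=50_000, horizon=21, omega=1e-6,
                                 alpha=0.08, beta=0.90, nu=6, sigma0=0.015, rng=rng)

level = 0.95
var = -np.quantile(returns, 1.0 - level)        # 95% Value-at-Risk (loss)
cvar = -returns[returns <= -var].mean()         # CVaR: expected loss beyond VaR
print(f"VaR(95%) = {var:.3%}, CVaR(95%) = {cvar:.3%}")
```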

33 pages, 1664 KB  
Article
Modeling Healthcare Data with a Novel Flexible Three-Parameter Distribution
by Thamer Manshi, Ammar M. Sarhan and M. E. Sobh
Mathematics 2026, 14(2), 359; https://doi.org/10.3390/math14020359 - 21 Jan 2026
Abstract
Developing flexible lifetime distributions is essential for accurately modeling reliability and lifetime data across various scientific and engineering contexts. In this work, we introduce a new three-parameter lifetime distribution, which extends the well-known two-parameter Sarhan–Tadj–Hamilton model. We derive and discuss several of its important theoretical properties, including the reliability characteristics and moments. The parameter estimation is carried out using both maximum likelihood and Bayesian approaches, providing a comprehensive comparison of inferential techniques. To further examine the efficiency and robustness of the proposed estimators, a detailed Monte Carlo simulation study is conducted under different sample sizes and parameter settings. The practical usefulness of the distribution is illustrated through its application to three real-world datasets, namely cancer and COVID-19 data, where it demonstrates superior fit and flexibility compared to existing and nested lifetime models. These findings highlight the potential of the proposed model as a valuable addition to the toolbox of applied statisticians and reliability practitioners. Full article
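
The Monte Carlo simulation study described here follows a standard design: for each sample size, repeatedly draw samples, fit the model by maximum likelihood, and record the bias and MSE of the estimates. The Sarhan–Tadj–Hamilton extension's density is not reproduced in this listing, so the sketch below applies the pattern to a Weibull stand-in; it illustrates the study design, not the proposed three-parameter distribution.

```python
import numpy as np
from scipy import stats

def mc_mle_study(true_shape, true_scale, sample_sizes, n_reps=500, seed=1):
    """For each n: draw Weibull samples, fit by MLE, report bias/MSE of the shape."""
    rng = np.random.default_rng(seed)
    results = {}
    for n in sample_sizes:
        est = np.empty(n_reps)
        for r in range(n_reps):
            x = true_scale * rng.weibull(true_shape, size=n)
            shape_hat, _, scale_hat = stats.weibull_min.fit(x, floc=0)  # MLE, loc fixed at 0
            est[r] = shape_hat
        results[n] = {"bias": est.mean() - true_shape,
                      "mse": np.mean((est - true_shape) ** 2)}
    return results

print(mc_mle_study(true_shape=1.5, true_scale=2.0, sample_sizes=[20, 50, 100]))
```
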
36 pages, 4550 KB  
Article
Probabilistic Load Forecasting for Green Marine Shore Power Systems: Enabling Efficient Port Energy Utilization Through Monte Carlo Analysis
by Bingchu Zhao, Fenghui Han, Yu Luo, Shuhang Lu, Yulong Ji and Zhe Wang
J. Mar. Sci. Eng. 2026, 14(2), 213; https://doi.org/10.3390/jmse14020213 - 20 Jan 2026
Abstract
The global shipping industry is surging ahead, and with it, a quiet revolution is taking place on the water: marine lithium-ion batteries have emerged as a crucial clean energy carrier, powering everything from ferries to container ships. When these vessels dock, they increasingly rely on shore power charging systems to refuel—essentially, plugging in instead of idling on diesel. But predicting how much power they will need is not straightforward. Think about it: different ships, varying battery sizes, mixed charging technologies, and unpredictable port stays all come into play, creating a load profile that is random, uneven, and often concentrated—a real headache for grid planners. So how do you forecast something so inherently variable? This study turned to the Monte Carlo method, a probabilistic technique that thrives on uncertainty. Instead of seeking a single fixed answer, the model embraces randomness, feeding in real-world data on supply modes, vessel types, battery capacity, and operational hours. Through repeated random sampling and load simulation, it builds up a realistic picture of potential charging demand. We ran the numbers for a simulated fleet of 400 vessels, and the results speak for themselves: load factors landed at 0.35 for conventional AC shore power, 0.39 for high-voltage DC, 0.33 for renewable-based systems, 0.64 for smart microgrids, and 0.76 when energy storage joined the mix. Notice how storage and microgrids really smooth things out? What does this mean in practice? Well, it turns out that Monte Carlo is not just academically elegant, it is practically useful. By quantifying uncertainty and delivering load factors within confidence intervals, the method offers port operators something precious: a data-backed foundation for decision-making. Whether it is sizing infrastructure, designing tariff incentives, or weighing the grid impact of different shore power setups, this approach adds clarity. In the bigger picture, that kind of insight matters. As ports worldwide strive to support cleaner shipping and align with climate goals—China’s “dual carbon” ambition being a case in point—achieving a reliable handle on charging demand is not just technical; it is strategic. Here, probabilistic modeling shifts from a simulation exercise to a tangible tool for greener, more resilient port energy management. Full article
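
The probabilistic load model described above reduces to repeated random sampling of per-vessel charging sessions, aggregation into a port load profile, and a load-factor read-off (mean load divided by peak load). The fleet size matches the abstract's 400-vessel scenario, but the power levels and dwell-time distribution below are illustrative assumptions rather than the paper's survey data.

```python
import numpy as np

def simulate_port_load(n_vessels=400, n_days=30, seed=3):
    """Monte Carlo port charging load: each vessel gets a random arrival hour,
    charging power, and connection duration; loads are summed into an hourly
    profile and the load factor = mean / peak is returned."""
    rng = np.random.default_rng(seed)
    hours = 24 * n_days
    load = np.zeros(hours)
    for _ in range(n_vessels):
        day      = rng.integers(0, n_days)
        arrival  = day * 24 + rng.integers(0, 24)
        power_mw = rng.choice([0.3, 0.6, 1.0, 2.0], p=[0.4, 0.3, 0.2, 0.1])  # per-vessel demand, MW
        duration = int(np.clip(rng.normal(6, 2), 1, 14))                     # hours at berth
        end = min(arrival + duration, hours)
        load[arrival:end] += power_mw
    return load.mean() / load.max()

print(f"simulated load factor ≈ {simulate_port_load():.2f}")
```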

23 pages, 4564 KB  
Article
Control of Wave Energy Converters Using Reinforcement Learning
by Odai R. Bani Hani, Zeiad Khafagy, Matthew Staber, Ashraf Gaffar and Ossama Abdelkhalik
J. Mar. Sci. Eng. 2026, 14(2), 211; https://doi.org/10.3390/jmse14020211 - 20 Jan 2026
Abstract
Efficient control of wave energy converters (WECs) is crucial for maximizing energy capture and reducing the Levelized Cost of Energy (LCoE). In this study, we employ a deep reinforcement learning (DRL) framework based on the Soft Actor-Critic (SAC) and Deep Deterministic Policy Gradient (DDPG) algorithms for WEC control. Our approach leverages a novel decoupled co-simulation architecture, training agents episodically in MATLAB to export a robust policy within the WEC-Sim environment. Furthermore, we utilize a rigorous benchmarking protocol to compare the SAC and DDPG agents against a classical Bang-Singular-Bang (BSB) optimal control benchmark. Evaluation under realistic, irregular Pierson-Moskowitz sea states demonstrates that the performance of the RL agents is very close to that of the BSB optimal control baseline. Monte Carlo simulations show that both the DDPG and SAC agents can perform even better than the BSB when the model of the BSB is different from the simulation environment. Full article
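
Monte Carlo evaluation under irregular sea states rests on generating random wave-elevation realizations from a spectrum. The sketch below builds one realization from the Pierson-Moskowitz spectrum by superposing cosine components with random phases; the wind speed and frequency discretization are illustrative, and this is independent of the authors' WEC-Sim/MATLAB setup.

```python
import numpy as np

def pierson_moskowitz(omega, u_wind, g=9.81, alpha=0.0081, beta=0.74):
    """Classical Pierson-Moskowitz spectrum S(omega) for a fully developed sea,
    parameterized by the wind speed u_wind (m/s)."""
    return alpha * g**2 / omega**5 * np.exp(-beta * (g / (u_wind * omega))**4)

def irregular_wave(t, u_wind, n_components=200, seed=11):
    """One random wave-elevation realization: superpose cosines with amplitudes
    sqrt(2 S(w) dw) and independent uniform random phases (one Monte Carlo draw)."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(0.2, 3.0, n_components)
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * pierson_moskowitz(omega, u_wind) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sum(amp[:, None] * np.cos(omega[:, None] * t[None, :] + phase[:, None]), axis=0)

t = np.arange(0.0, 600.0, 0.1)                       # 10 minutes sampled at 10 Hz
eta = irregular_wave(t, u_wind=12.0)
print(f"significant wave height ≈ {4.0 * eta.std():.2f} m")   # Hs ≈ 4 * std(eta)
```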

33 pages, 2214 KB  
Article
Research on Microgrid Resilience in Highway Service Areas Based on Federated Multi-Agent Deep Reinforcement Learning
by Jiyong Li, Zhiliang Cheng, Yide Peng, Hao Huang and Chen Ye
Sustainability 2026, 18(2), 1027; https://doi.org/10.3390/su18021027 - 19 Jan 2026
Abstract
This paper proposes a Federated Multi-Agent Deep Reinforcement Learning (FMADRL) framework to enhance the resilience of highway service area microgrids against extreme weather events. The method integrates Generative Adversarial Networks with Monte Carlo simulations to generate high-fidelity weather scenarios, enabling privacy-preserving collaborative optimization across distributed microgrids. A multi-objective approach using the Ripple-Spreading Algorithm yields balanced solutions for economic efficiency, reliability, and response speed. Large-scale simulations demonstrate significant improvements: the proposed method achieves an 88.3 score on the comprehensive system resilience metric, reduces the average fault recovery time from 46.6 min to 8.4 min, lowers annual operating costs by 69.3%, equivalent to 536,945.1 USD, and achieves annual carbon emissions reductions of 285 Mg. This approach provides an innovative solution for enhancing the resilience of distributed microgrids during extreme weather events. Full article
(This article belongs to the Section Hazards and Sustainability)

21 pages, 10379 KB  
Article
Spatial Optimization of Urban-Scale Sponge Structures and Functional Areas Using an Integrated Framework Based on a Hydrodynamic Model and GIS Technique
by Mengxiao Jin, Quanyi Zheng, Yu Shao, Yong Tian, Jiang Yu and Ying Zhang
Water 2026, 18(2), 262; https://doi.org/10.3390/w18020262 - 19 Jan 2026
Abstract
Rapid urbanization has exacerbated urban-stormwater challenges, highlighting the critical need for coordinated surface-water and groundwater management through rainfall recharge. However, current sponge city construction methods often overlook the crucial role of underground aquifers in regulating the water cycle and mostly rely on simplified engineering approaches. To address these limitations, this study proposes a spatial optimization framework for urban-scale sponge systems that integrates a hydrodynamic model (FVCOM), geographic information systems (GIS), and Monte Carlo simulations. This framework establishes a comprehensive evaluation system that synergistically integrates surface water inundation depth, geological lithology, and groundwater depth to quantitatively assess sponge city suitability. The FVCOM was employed to simulate surface water inundation processes under extreme rainfall scenarios, while GIS facilitated spatial analysis and data integration. The Monte Carlo simulation was utilized to optimize the spatial layout by objectively determining factor weights and evaluate result uncertainty. Using Shenzhen City in China as a case study, this research combined the “matrix-corridor-patch” theory from landscape ecology to optimize the spatial structure of the sponge system. Furthermore, differentiated planning and management strategies were proposed based on regional characteristics and uncertainty analysis. The research findings provide a replicable and verifiable methodology for developing sponge city systems in high-density urban areas. The core value of this methodology lies in its creation of a scientific decision-making tool for direct application in urban planning. This tool can significantly enhance a city’s climate resilience and facilitate the coordinated, optimal management of water resources amid environmental changes. Full article
(This article belongs to the Special Issue "Watershed–Urban" Flooding and Waterlogging Disasters)
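The Monte Carlo step described above, randomizing factor weights to test how robust the suitability ranking is, can be sketched generically. The three factors (inundation depth, lithology score, groundwater depth) and the cell scores below are placeholders, and Dirichlet-sampled weight vectors are one common choice assumed here, not necessarily the paper's weighting scheme.

```python
import numpy as np

def weight_uncertainty(scores, n_draws=10_000, seed=5):
    """scores: (n_cells, n_factors) normalized suitability scores in [0, 1].
    Draw random weight vectors summing to 1, compute weighted suitability,
    and return each cell's mean score and how often it lands in the top quartile."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(alpha=np.ones(scores.shape[1]), size=n_draws)  # (n_draws, n_factors)
    suit = weights @ scores.T                                              # (n_draws, n_cells)
    top_q = suit >= np.quantile(suit, 0.75, axis=1, keepdims=True)
    return suit.mean(axis=0), top_q.mean(axis=0)

# Placeholder scores for 6 grid cells x 3 factors (depth, lithology, groundwater).
scores = np.array([[0.9, 0.4, 0.7],
                   [0.2, 0.8, 0.5],
                   [0.6, 0.6, 0.6],
                   [0.1, 0.3, 0.9],
                   [0.8, 0.7, 0.2],
                   [0.5, 0.5, 0.5]])
mean_suit, p_top = weight_uncertainty(scores)
print(np.round(mean_suit, 2), np.round(p_top, 2))
```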

25 pages, 10707 KB  
Article
Stochastic–Fuzzy Assessment Framework for Firefighting Functionality of Urban Water Distribution Networks Against Post-Earthquake Fires
by Xiang He, Hong Huang, Fengjiao Xu, Chao Zhang and Tingxin Qin
Sustainability 2026, 18(2), 949; https://doi.org/10.3390/su18020949 - 16 Jan 2026
Abstract
Post-earthquake fires often cause more severe losses than the earthquakes themselves, highlighting the critical role of water distribution networks (WDNs) in mitigating fire risks. This study proposed an improved assessment framework for the post-earthquake firefighting functionality of WDNs. This framework integrates a WDN firefighting simulation model into a cloud model-based assessment method. By combining seismic damage and firefighting scenarios, the simulation model derives sample values of the functional indexes through Monte Carlo simulations. These indexes integrate the spatiotemporal characteristics of the firefighting flow and pressure deficiencies to assess a WDN’s capability to control fire and address fire hazards across three dimensions: average, severe, and prolonged severe deficiencies. The cloud model-based assessment method integrates the sample values of functional indexes with expert opinions, enabling qualitative and quantitative assessments under stochastic–fuzzy conditions. An illustrative study validated the efficacy of this method. The flow- and pressure-based indexes elucidated functionality degradation owing to excessive firefighting flow and the diminished supply capacity of a WDN, respectively. The spatiotemporal characteristics of severe flow and pressure deficiencies demonstrated the capability of firefighting resources to manage concurrent fires while ensuring a sustained water supply to fire sites. This method addressed the limitations of traditional quantitative and qualitative assessment approaches, resulting in more reliable outcomes. Full article
(This article belongs to the Section Hazards and Sustainability)

23 pages, 8263 KB  
Article
Uncertainty-Aware Deep Learning for Sugarcane Leaf Disease Detection Using Monte Carlo Dropout and MobileNetV3
by Pathmanaban Pugazhendi, Chetan M. Badgujar, Madasamy Raja Ganapathy and Manikandan Arumugam
AgriEngineering 2026, 8(1), 31; https://doi.org/10.3390/agriengineering8010031 - 16 Jan 2026
Abstract
Sugarcane diseases cause estimated global annual losses of over $5 billion. While deep learning shows promise for disease detection, current approaches lack transparency and confidence estimates, limiting their adoption by agricultural stakeholders. We developed an uncertainty-aware detection system integrating Monte Carlo (MC) dropout with MobileNetV3, trained on 2521 images across five categories: Healthy, Mosaic, Red Rot, Rust, and Yellow. The proposed framework achieved 97.23% accuracy with a lightweight architecture comprising 5.4 M parameters. It enabled 2.3 s inference while generating well-calibrated uncertainty estimates that were 4.0 times higher for misclassifications. High-confidence predictions (>70%) achieved 98.2% accuracy. Gradient-weighted Class Activation Mapping provided interpretable disease localization, and the system was deployed on Hugging Face Spaces for global accessibility. The model achieved comparatively higher recall for the Healthy and Red Rot classes. The inclusion of uncertainty quantification provides additional information that may support more informed decision-making in precision agriculture applications involving farmers and agronomists. Full article
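
The uncertainty mechanism described above, Monte Carlo dropout, amounts to keeping dropout layers stochastic at inference time and averaging several forward passes. The sketch below shows that recipe with torchvision's MobileNetV3-Small as a stand-in backbone and a random input tensor; the five-class head, number of passes, and uncertainty measure (per-class standard deviation across passes) mirror the abstract's description only loosely.

```python
import torch
import torch.nn as nn
from torchvision import models

# MobileNetV3-Small backbone with a 5-class head (Healthy, Mosaic, Red Rot, Rust, Yellow).
model = models.mobilenet_v3_small(weights=None)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 5)
model.eval()

def enable_mc_dropout(m):
    """Keep dropout layers sampling at inference time (the core of MC dropout)."""
    for layer in m.modules():
        if isinstance(layer, nn.Dropout):
            layer.train()

@torch.no_grad()
def mc_dropout_predict(m, x, n_passes=30):
    """Average softmax outputs over stochastic passes; the per-class standard
    deviation across passes serves as the uncertainty estimate."""
    enable_mc_dropout(m)
    probs = torch.stack([torch.softmax(m(x), dim=1) for _ in range(n_passes)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed leaf image
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=1), uncertainty.max())
```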

15 pages, 2092 KB  
Article
Improved NB Model Analysis of Earthquake Recurrence Interval Coefficient of Variation for Major Active Faults in the Hetao Graben and Northern Marginal Region
by Jinchen Li and Xing Guo
Entropy 2026, 28(1), 107; https://doi.org/10.3390/e28010107 - 16 Jan 2026
Abstract
This study presents an improved Nishenko–Buland (NB) model to address systematic biases in estimating the coefficient of variation for earthquake recurrence intervals based on a normalizing function T/T_ave. Through Monte Carlo simulations, we demonstrate that traditional NB methods significantly underestimate the coefficient of variation when applied to limited paleoseismic datasets, with deviations reaching between 30 and 40% for small sample sizes. We developed a linear transformation and iterative optimization approach that corrects these statistical biases by standardizing recurrence interval data from different sample sizes to conform to a common standardized distribution. Application to 26 fault segments across 15 major active faults in the Hetao graben system yields a corrected coefficient of variation of α = 0.381, representing a 24% increase over the traditional method (α₀ = 0.307). This correction demonstrates that conventional approaches systematically underestimate earthquake recurrence variability, potentially compromising seismic hazard assessments. The improved model successfully eliminates sampling bias through iterative convergence, providing more reliable parameters for probability distributions in renewal-based earthquake forecasting. Full article
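
The bias the authors correct, underestimation of the coefficient of variation when each fault contributes only a few recurrence intervals normalized by its own sample mean (T/T_ave), is easy to reproduce by Monte Carlo. The sketch below assumes lognormal recurrence intervals with a true coefficient of variation of 0.4 purely for illustration and measures the apparent coefficient of variation of the pooled normalized intervals for small samples; it demonstrates the direction of the bias, not the paper's corrected figures.

```python
import numpy as np

def apparent_cv(true_cv=0.4, n_per_fault=4, n_faults=5000, seed=9):
    """Draw small samples of lognormal recurrence intervals, normalize each
    sample by its own mean (T / T_ave), pool them, and measure the apparent CV."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + true_cv**2))            # lognormal sigma for a target CV
    t = rng.lognormal(mean=0.0, sigma=sigma, size=(n_faults, n_per_fault))
    normalized = t / t.mean(axis=1, keepdims=True)        # T / T_ave within each fault
    return normalized.std() / normalized.mean()

for n in (3, 4, 6, 10):
    print(n, round(apparent_cv(n_per_fault=n), 3))         # apparent CV stays below 0.4 for small n
```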

21 pages, 7908 KB  
Article
Bi-Level Decision-Making for Commercial Charging Stations in Demand Response Considering Nonlinear User Satisfaction
by Weiqing Sun, En Xie and Wenwei Yang
Sustainability 2026, 18(2), 907; https://doi.org/10.3390/su18020907 - 15 Jan 2026
Abstract
With the widespread adoption of electric vehicles, commercial charging stations (CCS) have grown rapidly as a core component of charging infrastructure. Due to the concentrated and high-power charging load characteristics of CCS, a ‘peak on peak’ phenomenon can occur in the power distribution network. Demand response (DR) serves as an important and flexible regulation tool for power systems, offering a new approach to addressing this issue. However, when CCS participates in DR, it faces a dual dilemma between operational revenue and user satisfaction. To address this, this paper proposes a bi-level, multi-objective framework that co-optimizes station profit and nonlinear user satisfaction. An asymmetric sigmoid mapping is used to capture threshold effects and diminishing marginal utility. Uncertainty in users’ charging behaviors is evaluated using a Monte Carlo scenario simulation together with chance constraints enforced at a 0.95 confidence level. The model is solved using the fast non-dominated sorting genetic algorithm, NSGA-II, and the compromise optimal solution is identified via the entropy-weighted Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Case studies show robust peak shaving with a 6.6 percent reduction in the daily maximum load, high satisfaction with a mean of around 0.96, and higher revenue with an improvement of about 12.4 percent over the baseline. Full article
(This article belongs to the Section Energy Sustainability)
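The uncertainty handling described above, Monte Carlo scenarios combined with chance constraints at a 0.95 confidence level, can be illustrated by checking a single constraint empirically: simulate charging-demand scenarios and verify that the station's aggregate load stays below a capacity limit in at least 95% of them. The arrival rate, per-vehicle power range, and capacity figures below are placeholders, not the paper's CCS data.

```python
import numpy as np

def chance_constraint_satisfied(capacity_kw, n_scenarios=20_000, confidence=0.95, seed=2):
    """Empirical check of P(aggregate charging load <= capacity) >= confidence,
    using Monte Carlo scenarios of vehicle arrivals and per-vehicle charging power."""
    rng = np.random.default_rng(seed)
    n_vehicles = rng.poisson(lam=40, size=n_scenarios)                   # arrivals in the peak hour
    load = np.array([rng.uniform(7.0, 120.0, size=n).sum() for n in n_vehicles])  # kW per scenario
    prob_ok = np.mean(load <= capacity_kw)
    return prob_ok, prob_ok >= confidence

for cap in (2000.0, 2600.0, 3200.0):
    p, ok = chance_constraint_satisfied(cap)
    print(f"capacity {cap:.0f} kW: P(load <= cap) = {p:.3f}, meets 0.95 level: {ok}")
```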

45 pages, 2207 KB  
Article
Integrating the Contrasting Perspectives Between the Constrained Disorder Principle and Deterministic Optical Nanoscopy: Enhancing Information Extraction from Imaging of Complex Systems
by Yaron Ilan
Bioengineering 2026, 13(1), 103; https://doi.org/10.3390/bioengineering13010103 - 15 Jan 2026
Abstract
This paper examines the contrasting yet complementary approaches of the Constrained Disorder Principle (CDP) and Stefan Hell’s deterministic optical nanoscopy for managing noise in complex systems. The CDP suggests that controlled disorder within dynamic boundaries is crucial for optimal system function, particularly in biological contexts, where variability acts as an adaptive mechanism rather than being merely a measurement error. In contrast, Hell’s recent breakthrough in nanoscopy demonstrates that engineered diffraction minima can achieve sub-nanometer resolution without relying on stochastic (random) molecular switching, thereby replacing randomness with deterministic measurement precision. Philosophically, these two approaches are distinct: the CDP views noise as functionally necessary, while Hell’s method seeks to overcome noise limitations. However, both frameworks address complementary aspects of information extraction. The primary goal of microscopy is to provide information about structures, thereby facilitating a better understanding of their functionality. Noise is inherent to biological structures and functions and is part of the information in complex systems. This manuscript achieves integration through three specific contributions: (1) a mathematical framework combining CDP variability bounds with Hell’s precision measurements, validated through Monte Carlo simulations showing 15–30% precision improvements; (2) computational demonstrations with N = 10,000 trials quantifying performance under varying biological noise regimes; and (3) practical protocols for experimental implementation, including calibration procedures and real-time parameter optimization. The CDP provides a theoretical understanding of variability patterns at the system level, while Hell’s technique offers precision tools at the molecular level for validation. Integrating these approaches enables multi-scale analysis, allowing for deterministic measurements to accurately quantify the functional variability that the CDP theory predicts is vital for system health. This synthesis opens up new possibilities for adaptive imaging systems that maintain biologically meaningful noise while achieving unprecedented measurement precision. Specific applications include cancer diagnostics through chromosomal organization variability, neurodegenerative disease monitoring via protein aggregation disorder patterns, and drug screening by assessing cellular response heterogeneity. The framework comprises machine learning integration pathways for automated recognition of variability patterns and adaptive acquisition strategies. Full article
(This article belongs to the Section Biosignal Processing)
