Search Results (77)

Search Parameters:
Keywords = reverse Monte Carlo modelling

23 pages, 7446 KB  
Article
MCMC Correction of Score-Based Diffusion Models for Model Composition
by Anders Sjöberg, Jakob Lindqvist, Magnus Önnheim, Mats Jirstrand and Lennart Svensson
Entropy 2026, 28(3), 351; https://doi.org/10.3390/e28030351 - 20 Mar 2026
Viewed by 268
Abstract
Diffusion models can be parameterized in terms of either a score or an energy function. The energy parameterization is attractive as it enables sampling procedures such as Markov Chain Monte Carlo (MCMC) that incorporate a Metropolis–Hastings (MH) correction step based on energy differences between proposed samples. Such corrections can significantly improve sampling quality, particularly in the context of model composition, where pre-trained models are combined to generate samples from novel distributions. Score-based diffusion models, on the other hand, are more widely adopted and come with a rich ecosystem of pre-trained models. However, they do not, in general, define an underlying energy function, making MH-based sampling inapplicable. In this work, we address this limitation by retaining the score parameterization and introducing a novel MH-like acceptance rule based on line integration of the score function. This allows the reuse of existing diffusion models while still combining the reverse process with various MCMC techniques, viewed as an instance of annealed MCMC. Through experiments on synthetic and real-world data, we show that our MH-like samplers yield relative improvements of similar magnitude to those observed with energy-based models, without requiring explicit energy parameterization.
(This article belongs to the Section Statistical Physics)
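The acceptance rule the abstract describes can be sketched on a toy target whose score is known in closed form. The following is a minimal illustration under stated assumptions, not the paper's implementation: it runs Langevin (MALA-style) proposals on a standard normal target and obtains the energy difference needed for the MH test by numerically integrating the score along the straight line between the current and proposed points. The step size and quadrature resolution are arbitrary choices.

```python
import numpy as np

def score(x):
    # Score of a standard normal target: grad log p(x) = -x
    return -x

def energy_diff_via_line_integral(x, y, n_quad=17):
    # E(y) - E(x) = -∫_0^1 s(x + t(y - x)) · (y - x) dt, trapezoid rule.
    # Only the score is needed; no explicit energy function is required.
    t = np.linspace(0.0, 1.0, n_quad)
    pts = x + t[:, None] * (y - x)
    integrand = score(pts) @ (y - x)
    dt = t[1] - t[0]
    integral = dt * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
    return -integral

def mh_corrected_langevin(n_steps=20000, eps=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(1)
    out = np.empty(n_steps)
    for i in range(n_steps):
        y = x + eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(1)
        d_e = energy_diff_via_line_integral(x, y)
        # Asymmetric-proposal correction for the Langevin kernel
        log_q_fwd = -np.sum((y - x - eps * score(x)) ** 2) / (4 * eps)
        log_q_bwd = -np.sum((x - y - eps * score(y)) ** 2) / (4 * eps)
        if np.log(rng.uniform()) < -d_e + log_q_bwd - log_q_fwd:
            x = y
        out[i] = x[0]
    return out
```

For a Gaussian target the integrand is linear in t, so the trapezoid rule is exact and the chain targets the correct distribution; in the paper's setting the analogous rule is applied along the annealed reverse diffusion process.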

24 pages, 3023 KB  
Review
Porous Organic Polymers with Azo, Azoxy, and Azodioxy Linkages: Design, Synthesis, and CO2 Adsorption Properties
by Ivan Kodrin and Ivana Biljan
Polymers 2026, 18(6), 735; https://doi.org/10.3390/polym18060735 - 17 Mar 2026
Viewed by 475
Abstract
Rising atmospheric CO2 levels have increased the demand for robust, scalable adsorbents for practical CO2 capture and separation. Porous organic polymers (POPs) are attractive candidates because their pore architecture and binding site properties can be precisely tuned via building blocks and linkage formation. This review summarizes experimental and computational studies of azo-linked POPs and, more broadly, nitrogen–nitrogen (N–N) linked systems, emphasizing how synthetic routes, building blocks, and framework topology govern CO2 uptake. We highlight key synthetic strategies and representative systems, including porphyrin–azo networks, and discuss the relatively sparse experimental literature on alternative N–N linked POPs incorporating azoxy and azodioxy motifs. Emphasis is placed on reversible nitroso/azodioxide chemistry as a potential pathway to ordered porous organic materials. Computational studies provide a practical route to connect structure with adsorption behavior in largely amorphous or partially ordered networks. We review hierarchical workflows combining periodic DFT and electrostatic potential properties, grand canonical Monte Carlo (GCMC) simulations, and binding energy calculations to rationalize trends and identify favorable binding environments. Computational findings demonstrate that pore accessibility and stacking models can strongly influence predicted CO2 adsorption. This review provides guidelines for designing POPs with enhanced CO2 adsorption, offering an outlook and discussing challenges for future studies.

16 pages, 1565 KB  
Article
Shrimp Market Under Innovation Schemes: Hidden Markov Modeling
by Johnny Javier Triviño-Sanchez, Alexander Fernando Haro-Sarango, Julián Coronel-Reyes, Carlos Alfredo De Loor-Platón and Dayanna Soria-Encalada
J. Risk Financial Manag. 2026, 19(3), 214; https://doi.org/10.3390/jrfm19030214 - 12 Mar 2026
Viewed by 336
Abstract
This article models the Ecuadorian shrimp market as a nonlinear system with recurring latent regimes that affect margins and planning decisions. A multivariate Hidden Markov Model (HMM) with Gaussian emissions in log space is estimated via the Baum–Welch algorithm to segment the joint dynamics of pounds produced, dollars invoiced, and average price. The analysis uses monthly data from January 2017 to May 2025 (T = 101). The selected four-state specification shows strong fit and outperforms linear alternatives (log likelihood = 480.9; AIC = 859.8; BIC = 729.5). The dominant regime (State 2) concentrates high prices (~USD 2.97/lb) with intermediate production and acts as an attractor (stationary probability ≈ 1), while States 0 and 1 capture orderly expansion and oversupply conditions, and State 3 reflects episodic demand rallies. Adverse regimes (States 0–1) exhibit expected durations of 6–8 months, suggesting natural reversion toward the profitable regime. These estimates enable probabilistic regime forecasting and Monte Carlo scenario simulation to support hedging, inventory management, and financial stress testing. Overall, the proposed HMM framework provides an operational decision tool for producers, traders, and policymakers seeking to anticipate regime shifts, mitigate oversupply cycles, and stabilize margins.
(This article belongs to the Section Mathematics and Finance)
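As a sketch of the machinery involved, the forward recursion that underlies Baum–Welch likelihood evaluation for a Gaussian-emission HMM fits in a few lines. This is the generic textbook algorithm, not the authors' code, and the two-state parameters below are invented for illustration:

```python
import numpy as np
from scipy.special import logsumexp

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def forward_loglik(obs, pi, A, mus, sigmas):
    # Log-space forward recursion: alpha_t(k) = log p(x_1..x_t, z_t = k)
    log_alpha = np.log(pi) + gaussian_logpdf(obs[0], mus, sigmas)
    for x in obs[1:]:
        log_alpha = gaussian_logpdf(x, mus, sigmas) + logsumexp(
            log_alpha[:, None] + np.log(A), axis=0
        )
    return logsumexp(log_alpha)

# Hypothetical 2-state example (e.g., "low-price" vs "high-price" regime)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
mus = np.array([0.0, 1.0])
sigmas = np.array([1.0, 1.0])
obs = np.array([0.5, -0.2, 1.0])
ll = forward_loglik(obs, pi, A, mus, sigmas)
```

Baum–Welch alternates this forward pass (plus a backward pass) with closed-form parameter updates until the log-likelihood converges.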

30 pages, 8048 KB  
Article
High-Precision Multi-View Simulation of Ship Infrared Characteristics Using BP-ERMCM
by Shucheng Zhou, Shengliang Hu, Hai Wu, Yasong Luo and Pengfei Zhang
Appl. Sci. 2026, 16(5), 2318; https://doi.org/10.3390/app16052318 - 27 Feb 2026
Viewed by 268
Abstract
This study addresses key challenges in obtaining reliable infrared data for maritime ship observation and limitations of existing models, such as simplified reflectance assumptions and incomplete multi-band coverage. To improve modeling accuracy and computational efficiency, a high-precision Bidirectional Reflectance and Pseudo-random Vector Enhanced Reverse Monte Carlo Method (BP-ERMCM) is developed. By combining the Bidirectional Reflectance Distribution Function (BRDF), pseudo-random vector approaches, and improved ray-tracking algorithms with precomputed thermal radiation and MODTRAN's atmospheric transfer model, BP-ERMCM provides multi-view infrared characteristic simulations across the 3–5 μm and 8–12 μm bands. Simulations using a 3D ship model with 191 viewpoints reveal seasonal sensitivity, with summer peak intensity at 9.8 μm being 39.3% higher than in winter, and viewpoint dependency, with oblique overhead radiation 5.65 times greater than that from bow angles. Long-wave contours enhance target distinction, while mid-wave regions are dominated by reflection, increasing intensity at 3.8 μm by 56.1–85.7%. These findings highlight BP-ERMCM's capability to enhance efficiency and accuracy, informing infrared signature database construction, detector optimization, and maritime observation strategies.
(This article belongs to the Section Optics and Lasers)

19 pages, 3010 KB  
Article
Efficient mmWave PA in 90 nm CMOS: Stacked-Inverter Topology, L/T Matching, and EM-Validated Results
by Nusrat Jahan, Ramisha Anan and Jannatul Maua Nazia
Chips 2025, 4(4), 52; https://doi.org/10.3390/chips4040052 - 15 Dec 2025
Viewed by 925
Abstract
In this study, we present the design and analysis of a stacked inverter-based millimeter-wave (mmWave) power amplifier (PA) in 90 nm CMOS targeting wideband Q-band operation. The PA employs two PMOS and two NMOS devices in a fully stacked inverter topology to distribute device stress, remove the need for an RF choke, and increase effective transconductance while preserving a compact layout. A resistor ladder biases the stack near VDD/4 per device, and capacitive division steers intermediate-node swings to enable class-E-like voltage shaping at the output. Closed-form models are developed for gain, output power, drain efficiency/PAE, and linearity, alongside a small-signal stacked-ladder formulation that quantifies stress sharing and the impedance presented to the matching networks; L/T network synthesis relations are provided to co-optimize bandwidth and insertion loss. Post-layout simulation in 90 nm CMOS shows |S21| = 10 dB at 39.84 GHz with a 3 dB bandwidth from 36.8 to 42.4 GHz, a peak PAE of 18.38% near 41 GHz, and a saturated output power Psat = 8.67 dBm at VDD = 4 V, with S11 < −15 dB and reverse isolation of 16 dB. The layout occupies 1.6 × 1.6 mm² and draws 31.08 mW. Robustness is validated via a 200-run Monte Carlo analysis showing tight clustering of Psat and PAE, sensitivity sweeps identifying sizing/tolerance trade-offs (±10% devices/passives), and EM co-simulation of on-chip passives indicating only minor loss/shift relative to schematic while preserving the target bandwidth and efficiency. The results demonstrate a balanced gain–efficiency–power trade-off with layout-aware resilience, positioning stacked-inverter CMOS PAs as a power- and area-efficient solution for mmWave front-ends.
(This article belongs to the Special Issue IC Design Techniques for Power/Energy-Constrained Applications)

24 pages, 3213 KB  
Article
The UG-EM Lifetime Model: Analysis and Application to Symmetric and Asymmetric Survival Data
by Omalsad H. Odhah, Saba M. Alwan and Sarah Aljohani
Symmetry 2025, 17(12), 2027; https://doi.org/10.3390/sym17122027 - 26 Nov 2025
Viewed by 549
Abstract
This paper introduces the UG-EM (Unconditional Gamma-Exponential Model) as a new compound lifetime model designed to enhance flexibility in tail behavior compared to traditional distributions. The UG-EM model provides a unified framework for analyzing deviations from symmetry in survival data, effectively capturing right-skewed patterns, which are commonly observed in real-world lifetime phenomena. The main analytical properties are derived, including the probability density, cumulative distribution, hazard and reversed-hazard functions, mean residual life, and several measures of dispersion and uncertainty. The effects of the UG-EM parameters (α and λ) are examined, showing that increasing either parameter can cause a temporary reduction in entropy H(T) at early times followed by a long-term increase; in some cases, the influence of α is stronger than that of λ. Parameter estimation is carried out using the maximum likelihood method and assessed through Monte Carlo simulations to evaluate estimator bias and variability, highlighting the significant role of sample size in estimation accuracy. The proposed model is applied to three survival datasets (Lung, Veteran, and Kidney) and compared with classical alternatives such as Exponential, Weibull, and Log-normal distributions using standard goodness-of-fit criteria. Results indicate that the UG-EM model offers superior flexibility and can capture patterns that simpler models fail to represent, although the empirical results do not demonstrate a clear, consistent superiority over standard competitors across all tested datasets. The paper also discusses identifiability issues, estimation challenges, and practical implications for reliability and medical survival analysis. Recommendations for further theoretical development and broader model comparison are provided.
(This article belongs to the Section Mathematics)

19 pages, 2942 KB  
Article
Research on the Quantitative Relationship Between Positioning Error and Coherent Synthesis Success Rate in a Moving Platform Distributed Coherent Synthesis System
by Peiheng Li, Liang Chen, Long Li and Meng Yang
Electronics 2025, 14(22), 4408; https://doi.org/10.3390/electronics14224408 - 12 Nov 2025
Viewed by 444
Abstract
Distributed coherent synthesis on dynamic platforms suffers from phase misalignment and significantly reduced synthesis efficiency due to navigation errors and communication delays. To address this challenge and dramatically enhance the synthesis efficiency, this paper proposes an "error-performance" quantification framework and corresponding compensation methods: (1) Phase compensation strategy: an Adaptive Kalman Filter (AKF) with a multi-index fusion-based adaptive factor derived from innovation sequences, enabling intelligent switching between predictive and robust modes for improved phase compensation; (2) Positioning error modeling method: employing an adaptive robust Kalman filter (ARKF) to synthesize error trajectories, with the standard deviation σ as the primary control parameter. Monte Carlo simulations establish a quantitative relationship between the positioning error standard deviation (σ) and the coherent synthesis success rate: under a 3-transmitter configuration, the success rate is ≥ 95% when σ ≤ 100 mm; the 100–237.3 mm range constitutes a transition zone where the success rate decreases from 95% to 80%; when σ ≥ 460 mm, the success rate stabilizes at 56–58%. The core conclusion indicates that when σ ≤ 237.3 mm, the system achieves high coherent synthesis efficiency with 80% probability. This paper aims to establish a cross-platform transferable error-performance quantification framework, providing a direct reference for navigational accuracy selection in distributed coherent systems.
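To make the adaptive-filtering idea concrete: below is a deliberately simplified one-dimensional Kalman filter in which the predicted covariance is inflated whenever the normalized innovation squared exceeds a threshold, switching the filter into a "robust" mode. The random-walk model, threshold, and noise values are illustrative assumptions, not the paper's multi-index fusion scheme.

```python
import numpy as np

def adaptive_kf(z, q=0.01, r=1.0, thresh=4.0):
    # 1-D random-walk Kalman filter with an innovation-based adaptive factor:
    # if the normalized innovation squared exceeds `thresh`, the prediction
    # covariance is inflated so the filter trusts the model less.
    x, p = z[0], 1.0
    est = [x]
    for zk in z[1:]:
        p_pred = p + q
        nu = zk - x                 # innovation (measurement residual)
        s = p_pred + r              # innovation covariance
        if nu**2 / s > thresh:      # robust mode: inflate prediction covariance
            p_pred = p + q * (nu**2 / s)
            s = p_pred + r
        k = p_pred / s              # Kalman gain
        x = x + k * nu
        p = (1 - k) * p_pred
        est.append(x)
    return np.array(est)
```

On a constant signal with additive noise, the estimate settles near the true value while occasional outliers trigger the robust branch instead of dragging the state.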

8 pages, 1340 KB  
Proceeding Paper
Trans-Dimensional Diffusive Nested Sampling for Metabolic Network Inference
by Johann Fredrik Jadebeck, Wolfgang Wiechert and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 5; https://doi.org/10.3390/psf2025012005 - 24 Sep 2025
Viewed by 806
Abstract
Bayesian analysis is particularly useful for inferring models and their parameters given data. This is a common task in metabolic modeling, where models of varying complexity are used to interpret data. Nested sampling is a class of probabilistic inference algorithms that are particularly effective for estimating evidence and sampling the parameter posterior probability distributions. However, the practicality of nested sampling for metabolic network inference has yet to be studied. In this technical report, we explore the amalgamation of nested sampling, specifically diffusive nested sampling, with reversible jump Markov chain Monte Carlo. We apply the algorithm to two synthetic problems from the field of metabolic flux analysis. We present run times and share insights into hyperparameter choices, providing a useful point of reference for future applications of nested sampling to metabolic flux problems.

24 pages, 983 KB  
Article
Bayesian Learning Strategies for Reducing Uncertainty of Decision-Making in Case of Missing Values
by Vitaly Schetinin and Livija Jakaite
Mach. Learn. Knowl. Extr. 2025, 7(3), 106; https://doi.org/10.3390/make7030106 - 22 Sep 2025
Cited by 1 | Viewed by 1655
Abstract
Background: Liquidity crises pose significant risks to financial stability, and missing data in predictive models increase the uncertainty in decision-making. This study aims to develop a robust Bayesian Model Averaging (BMA) framework using decision trees (DTs) to enhance liquidity crisis prediction under missing data conditions, offering reliable probabilistic estimates and insights into uncertainty. Methods: We propose a BMA framework over DTs, employing Reversible Jump Markov Chain Monte Carlo (RJ MCMC) sampling with a sweeping strategy to mitigate overfitting. Three preprocessing techniques for missing data were evaluated: Cont (treating variables as continuous with missing values labeled by a constant), ContCat (converting variables with missing values to categorical), and Ext (extending features with binary missing-value indicators). Results: The Ext method achieved 100% accuracy on a synthetic dataset and 92.2% on a real-world dataset of 20,000 companies (11% in crisis), outperforming baselines (AUC PRC 0.817 vs. 0.803, p < 0.05). The framework provided interpretable uncertainty estimates and identified key financial indicators driving crisis predictions. Conclusions: The BMA-DT framework with the Ext technique offers a scalable, interpretable solution for handling missing data, improving prediction accuracy and uncertainty estimation in liquidity crisis forecasting, with potential applications in finance, healthcare, and environmental modeling.
(This article belongs to the Section Learning)
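The "Ext" preprocessing the abstract describes, extending the feature matrix with binary missing-value indicators, can be sketched in a few lines. This is a generic illustration; the fill constant is an arbitrary choice:

```python
import numpy as np

def extend_with_indicators(X, fill=0.0):
    # "Ext": append one binary is-missing indicator column per feature,
    # then replace NaNs with a constant so tree-based models can split
    # both on the observed value and on missingness itself.
    mask = np.isnan(X).astype(float)
    filled = np.where(mask == 1, fill, X)
    return np.hstack([filled, mask])

X = np.array([[1.0, np.nan],
              [np.nan, 2.0]])
X_ext = extend_with_indicators(X)
```

The indicator columns let a decision tree treat "value is missing" as a predictive signal in its own right rather than an imputation artifact.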

23 pages, 5178 KB  
Article
Variable Dimensional Bayesian Method for Identifying Depth Parameters of Substation Grounding Grid Based on Pulsed Eddy Current
by Xiaofei Kang, Zhiling Li, Jie Hou, Su Xu, Yanjun Zhang, Zhihao Zhou and Jingang Wang
Energies 2025, 18(17), 4649; https://doi.org/10.3390/en18174649 - 1 Sep 2025
Viewed by 711
Abstract
The substation grounding grid, as the primary path for fault current dissipation, is crucial for ensuring the safe operation of the power system and requires regular inspection. The pulsed eddy current method, known for its non-destructive and efficient features, is widely used in grounding grid detection. However, during the parameter identification process, it is prone to local minima or no solution. To address this issue, this paper first develops a pulsed eddy current forward response model for the substation grounding grid based on the magnetic dipole superposition principle, with accuracy validation. Then, a variable dimensional Bayesian parameter identification method is introduced, utilizing the Reversible-Jump Markov Chain Monte Carlo (RJMCMC) algorithm. By using nonlinear optimization results as the initial model and introducing a dual-factor control strategy to dynamically adjust the sampling step size, the model enhances coverage of high-probability regions, enabling effective estimation of grounding grid parameter uncertainties. Finally, the proposed method is validated by comparing the forward response model with field test results, showing that the error is within 10%, demonstrating both the accuracy and practical applicability of the proposed parameter identification method.
(This article belongs to the Special Issue Reliability of Power Electronics Devices and Converter Systems)

25 pages, 4032 KB  
Article
New Logistic Family of Distributions: Applications to Reliability Engineering
by Laxmi Prasad Sapkota, Nirajan Bam, Pankaj Kumar and Vijay Kumar
Axioms 2025, 14(8), 643; https://doi.org/10.3390/axioms14080643 - 19 Aug 2025
Cited by 1 | Viewed by 1272
Abstract
This study introduces a novel family of probability distributions, termed the Pi-Power Logistic-G family, constructed through the application of the Pi-power transformation technique. By employing the Weibull distribution as the baseline generator, a new and flexible model, the Pi-Power Logistic Weibull distribution, is formulated. Particular emphasis is given to this specific member of the family, which demonstrates a rich variety of hazard rate shapes, including J-shaped, reverse J-shaped, and monotonic increasing patterns, thereby highlighting its adaptability in modeling diverse types of lifetime data. The paper examines the fundamental properties of this distribution and applies the method of maximum likelihood estimation (MLE) to determine its parameters. A Monte Carlo simulation was performed to assess the performance of the estimation method, demonstrating that both bias and mean squared error decline as the sample size increases. The utility of the proposed distribution is further highlighted through its application to real-world engineering datasets. Using model selection metrics and goodness-of-fit tests, the results demonstrate that the proposed model outperforms existing alternatives. In addition, a Bayesian approach was used to estimate the parameters for both datasets, further extending the model's applicability. The findings of this study have significant implications for the fields of reliability modeling, survival analysis, and distribution theory, enhancing methodologies and offering valuable theoretical insights.

24 pages, 2253 KB  
Article
Modeling Spatial Data with Heteroscedasticity Using PLVCSAR Model: A Bayesian Quantile Regression Approach
by Rongshang Chen and Zhiyong Chen
Entropy 2025, 27(7), 715; https://doi.org/10.3390/e27070715 - 1 Jul 2025
Viewed by 875
Abstract
Spatial data not only enables smart cities to visualize, analyze, and interpret data related to location and space, but also helps departments make more informed decisions. We apply Bayesian quantile regression (BQR) to the partially linear varying coefficient spatial autoregressive (PLVCSAR) model for spatial data to improve prediction performance. The model captures both linear and nonlinear covariate effects at different quantile points. Approximating the nonparametric functions with free-knot splines, we develop a Bayesian sampling approach based on Markov chain Monte Carlo (MCMC) and design an efficient Metropolis–Hastings-within-Gibbs algorithm to explore the joint posterior distributions. Computational efficiency is achieved through a modified reversible-jump MCMC algorithm incorporating adaptive movement steps to accelerate chain convergence. The simulation results demonstrate that our estimator exhibits robustness to alternative spatial weight matrices and outperforms both quantile regression (QR) and instrumental variable quantile regression (IVQR) in finite samples at different quantiles. The effectiveness of the proposed model and estimation method is demonstrated using real data on Boston median house prices.
(This article belongs to the Special Issue Bayesian Hierarchical Models with Applications)

23 pages, 13777 KB  
Article
The Sine Alpha Power-G Family of Distributions: Characterizations, Regression Modeling, and Applications
by Amani S. Alghamdi, Shatha F. ALoufi and Lamya A. Baharith
Symmetry 2025, 17(3), 468; https://doi.org/10.3390/sym17030468 - 20 Mar 2025
Cited by 8 | Viewed by 1733
Abstract
This study develops a new method for generating families of distributions based on the alpha power transformation and the trigonometric function, which enables enormous versatility in the resulting sub-models and enhances the ability to more accurately characterize tail shapes. This proposed family of distributions is characterized by a single parameter, which exhibits considerable flexibility in capturing asymmetric datasets, making it a valuable alternative to some families of distributions that require additional parameters to achieve similar levels of flexibility. The sine alpha power generated family is introduced using the proposed method, and some of its members and properties are discussed. A particular member, the sine alpha power-Weibull (SAP-W), is investigated in depth. Graphical representations of the new distribution display monotone and non-monotone forms, whereas the hazard rate function takes reversed J, J, bathtub, increasing, and decreasing shapes. Various characteristics of the SAP-W distribution are derived, including moments, Rényi entropies, and order statistics. Parameters of SAP-W are estimated using the maximum likelihood technique, and the effectiveness of these estimators is examined via Monte Carlo simulations. The superiority and potential of the proposed approach are demonstrated by analyzing three real-life engineering applications. The SAP-W outperforms several competing models, showing its flexibility. Additionally, a novel log-location-scale regression model is presented using SAP-W. The regression model's significance is illustrated through its application to real data.
(This article belongs to the Section Mathematics)

18 pages, 3631 KB  
Article
Reliability Evaluation of Integrated Electricity–Water System Based on Multi-State Models of Equipment in Water System
by Yang Liu, Chunyan Li, Jiyuan Tang, Yiming Yao, Kaigui Xie, Bo Hu, Changzheng Shao and Tao Wu
Appl. Sci. 2025, 15(5), 2275; https://doi.org/10.3390/app15052275 - 20 Feb 2025
Viewed by 944
Abstract
The power system and the water system are two important infrastructures of human society that are closely related and interdependent. However, the reliability problems of the power system and water system are becoming more and more prominent. To better reveal the impact of the complex coupling relationship between the power system and the water system on the reliability of the Integrated Electricity–Water System (IEWS), this paper investigates a reliability evaluation method for the IEWS based on multi-state models of equipment in the water system. Firstly, a multi-state reliability model is established based on the failure mechanisms of equipment in the water system, such as pipes and Reverse Osmosis (RO) desalination plants. Secondly, combining the multi-state model of equipment in the water system with the Markov chain Monte Carlo (MCMC) method, the IEWS reliability evaluation method is established. Finally, two IEWSs with different scales are simulated to verify the validity and adaptability of the proposed model.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
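As a minimal sketch of the multi-state idea (not the paper's model; the transition probabilities and capacities below are invented), a component with fully-working, degraded, and failed states can be simulated as a discrete-time Markov chain and its long-run expected delivered capacity estimated by Monte Carlo:

```python
import numpy as np

def expected_capacity_mc(P, cap, n_steps=50000, seed=0):
    # Sample a state trajectory from transition matrix P and average
    # the per-state delivered capacity along the way.
    rng = np.random.default_rng(seed)
    s, total = 0, 0.0
    for _ in range(n_steps):
        total += cap[s]
        s = rng.choice(len(cap), p=P[s])
    return total / n_steps

# Hypothetical 3-state RO desalination unit: up / degraded / failed
P = np.array([[0.95, 0.04, 0.01],
              [0.30, 0.60, 0.10],
              [0.50, 0.00, 0.50]])
cap = np.array([1.0, 0.5, 0.0])   # fraction of rated output per state
ec = expected_capacity_mc(P, cap)
```

The Monte Carlo estimate can be cross-checked against the stationary distribution π solving πP = π, for which the expected capacity is π · cap.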

28 pages, 4351 KB  
Article
Optimal Scheduling of Microgrids Based on an Improved Dung Beetle Optimization Algorithm
by Yuntao Yue, Haoran Ren, Dong Liu and Lenian Zhang
Appl. Sci. 2025, 15(2), 975; https://doi.org/10.3390/app15020975 - 20 Jan 2025
Cited by 6 | Viewed by 1853
Abstract
More distributed energy resources are being integrated into microgrid systems, making scheduling more complex and challenging. In order to achieve the utilization of renewable energy and peak load shifting in a microgrid system, an optimal scheduling model is established. Firstly, a microgrid operation model including a photovoltaic array, wind turbine, micro gas turbine, diesel generator, energy storage, and grid connection is constructed, considering the demand response and the uncertainty of wind and solar power. The demand response is modeled via a price–demand elasticity matrix, whereas the uncertainty of wind and solar power is handled using Monte Carlo sampling and a K-means clustering algorithm. Secondly, a multi-objective function that includes operational and environmental treatment costs is constructed. To optimize the objective function, an Improved Dung Beetle Optimization algorithm (IDBO) is proposed: tent mapping, non-dominated sorting, and a reverse elite learning strategy are introduced to improve the Dung Beetle Optimization algorithm (DBO), yielding the IDBO. Finally, the proposed model and algorithm are validated through simulation experiments. A benchmark function test shows that the IDBO has a fast convergence speed and high accuracy. The microgrid system scheduled by the IDBO has the lowest total cost, and its ability to achieve peak load shifting and improve the utilization of renewable energy is demonstrated through tests involving different scenarios. The results show that compared with traditional optimal scheduling models and algorithms, this approach is more reliable and cost-effective.
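The wind/solar uncertainty step — Monte Carlo sampling followed by K-means clustering into a few representative scenarios — can be sketched as follows. This is a numpy-only Lloyd's algorithm; the forecast profile, noise level, and scenario counts are invented for illustration:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    # Plain Lloyd's algorithm: assign points to the nearest center,
    # then move each center to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

# Hypothetical 24-hour wind-power forecast plus Monte Carlo noise (MW)
rng = np.random.default_rng(1)
forecast = 50 + 20 * np.sin(np.linspace(0, np.pi, 24))
scenarios = forecast + rng.normal(0, 5, size=(500, 24))   # Monte Carlo draws
centers, labels = kmeans(scenarios, k=5)
probs = np.bincount(labels, minlength=5) / len(labels)    # scenario weights
```

The five cluster centers, weighted by their empirical probabilities, then stand in for the 500 raw draws in the scheduling optimization, keeping the problem tractable.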
