Search Results (28)

Search Parameters:
Keywords = Monte Carlo search (MCS)

55 pages, 18379 KiB  
Article
Maritime Risk Assessment: A Cutting-Edge Hybrid Model Integrating Automated Machine Learning and Deep Learning with Hydrodynamic and Monte Carlo Simulations
by Egemen Ander Balas and Can Elmar Balas
J. Mar. Sci. Eng. 2025, 13(5), 939; https://doi.org/10.3390/jmse13050939 - 11 May 2025
Viewed by 927
Abstract
In this study, a Hybrid Maritime Risk Assessment Model (HMRA) integrating automated machine learning (AML) and deep learning (DL) with hydrodynamic and Monte Carlo simulations (MCS) was developed to assess maritime accident probabilities and risks. The machine learning models of Light Gradient Boosting (LightGBM), XGBoost, Random Forest, and Multilayer Perceptron (MLP) were employed. Cross-validation of model architectures, calibrated baseline configurations, and hyperparameter optimization improved predictive precision and generalizability. The hybrid model establishes a robust maritime accident probability prediction framework through a multi-stage methodology built on an ensemble learning architecture. The model was applied to İzmit Bay (Türkiye), a congested maritime area with dense traffic, providing a complete methodology to evaluate and rank risk factors. This research advances maritime safety studies by developing an integrated, simulation-based decision-making model that supports risk assessment actions for policymakers and stakeholders in marine spatial planning (MSP). A potential spill of 20 barrels (bbl) from an accident between two tankers was simulated using the developed model, which interconnects HYDROTAM-3D and the MCS. The average accident probability in İzmit Bay was estimated to be 5.5 × 10⁻⁴ in the AML-based MCS, with a probability range between 2.15 × 10⁻⁴ and 7.93 × 10⁻⁴. The order of magnitude of the predictions was consistent with the accident data for İzmit Bay from the Undersecretariat of Maritime Affairs Search and Rescue Department. The simulated spill reaches the narrow strait of the inner basin within the first six hours. This study identifies areas within the bay at high risk of accidents and advocates for establishing emergency response centers in these critical areas.
(This article belongs to the Special Issue Recent Advances in Maritime Safety and Ship Collision Avoidance)
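
As a rough illustration of the Monte Carlo step described in the abstract (not the authors' HMRA implementation), the Python sketch below draws environmental and traffic conditions at random, queries a placeholder probability model standing in for the trained AML/DL predictor, and reports the mean accident probability together with a percentile range. The distributions, coefficients, and the `predicted_accident_probability` function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def predicted_accident_probability(wind, wave, traffic):
    """Hypothetical stand-in for the trained AML/DL probability model (logistic form)."""
    score = 0.04 * wind + 0.3 * wave + 0.002 * traffic - 9.0
    return 1.0 / (1.0 + np.exp(-score))

N = 100_000
wind = rng.weibull(2.0, N) * 8.0      # wind-speed draws [m/s], assumed Weibull
wave = rng.gamma(2.0, 0.5, N)         # significant wave-height draws [m], assumed Gamma
traffic = rng.poisson(120, N)         # daily vessel transits, assumed Poisson

p = predicted_accident_probability(wind, wave, traffic)
print(f"mean accident probability: {p.mean():.2e}")
print(f"2.5%-97.5% range: {np.percentile(p, 2.5):.2e} - {np.percentile(p, 97.5):.2e}")
```

The percentile band plays an analogous role to the probability range reported in the abstract: it summarizes the spread of model outputs over the sampled conditions.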
Show Figures

Figure 1

13 pages, 879 KiB  
Article
Impact of Proton–Proton Collisions on the Cosmic-Ray Spectrum in Giant Clouds
by Hao-Ran Ge and Ruo-Yu Liu
Universe 2025, 11(2), 35; https://doi.org/10.3390/universe11020035 - 23 Jan 2025
Cited by 1 | Viewed by 728
Abstract
Gamma-ray production by proton–proton (pp) inelastic collisions plays an important role in searching for cosmic-ray (CR) accelerators. Understanding the pionic gamma-ray production associated with giant molecular/atomic clouds is thus crucial for identifying this process. In this work, we study the feedback of pionic gamma-ray production on the CR distribution by considering the collision-induced energy loss of cosmic-ray protons in the dense core region of molecular clouds (MCs). We introduce a Monte Carlo simulation framework to quantify this effect and present a detailed analysis of how pp collisions harden the cosmic-ray proton spectrum and the resulting gamma-ray spectrum in giant clouds.
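
A toy Monte Carlo sketch of the hardening mechanism (not the authors' framework): protons are drawn from a power-law injection spectrum, and each is weighted by a survival factor built from an assumed energy-dependent residence time in the cloud core and a roughly energy-independent pp loss time, so the low-energy part of the spectrum is suppressed more strongly. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw proton energies from an E^-2.7 injection spectrum between 10 GeV and 10 TeV.
alpha, E_min, E_max = 2.7, 1e1, 1e4                      # GeV
u = rng.random(1_000_000)
E = (E_min**(1 - alpha) + u * (E_max**(1 - alpha) - E_min**(1 - alpha)))**(1 / (1 - alpha))

# Assumed scalings: residence time in the cloud core falls with energy (diffusive escape),
# while the pp energy-loss time is roughly energy independent.
t_res = 1e5 * (E / E_min) ** -0.5                        # yr
t_pp = 5e4                                               # yr
weight = np.exp(-t_res / t_pp)                           # survival probability after losses

bins = np.logspace(1, 4, 25)
n_in, _ = np.histogram(E, bins)
n_out, _ = np.histogram(E, bins, weights=weight)
centers = np.sqrt(bins[:-1] * bins[1:])
mask = n_in > 0
slope_in = np.polyfit(np.log(centers[mask]), np.log(n_in[mask]), 1)[0]
slope_out = np.polyfit(np.log(centers[mask]), np.log(n_out[mask]), 1)[0]
print(f"injected slope ~ {slope_in:.2f}, processed slope ~ {slope_out:.2f} (harder)")
```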

20 pages, 4568 KiB  
Article
Neutronics Analysis on High-Temperature Gas-Cooled Pebble Bed Reactors by Coupling Monte Carlo Method and Discrete Element Method
by Kashminder S. Mehta, Braden Goddard and Zeyun Wu
Energies 2024, 17(20), 5188; https://doi.org/10.3390/en17205188 - 18 Oct 2024
Cited by 2 | Viewed by 1521
Abstract
The High-Temperature Gas-Cooled Pebble Bed Reactor (HTG-PBR) is notable in the advanced reactor realm for its online refueling capabilities and inherent safety features. However, the multiphysics coupling nature of the HTG-PBR, involving neutronic analysis, pebble flow movement, and thermo-fluid dynamics, creates significant challenges for its development, optimization, and safety analysis. This study focuses on high-fidelity neutronic modelling and analysis of the HTG-PBR with an emphasis on achieving an equilibrium state of the reactor for long-term operations. Computational approaches are developed to perform high-fidelity neutronics analysis by coupling the modelling strengths of the Monte Carlo Method (MCM) and the Discrete Element Method (DEM). The MCM-based code OpenMC and the DEM-based code LIGGGHTS are employed to simulate the neutron transport and pebble movement phenomena in the reactor, respectively. To improve computational efficiency and expedite the equilibrium core search, the reactor core is discretized by grouping pebbles in the axial and radial directions using the pebble position information from the DEM simulations. The OpenMC model is modified to integrate fuel circulation and fresh fuel loading. Together, these measures enable the successful generation of an equilibrium core for the HTG-PBR. For demonstration, X-energy's Xe-100 reactor, a 165 MW thermal power HTG-PBR, is used as the model reactor in this study. Starting from a reactor core loaded entirely with fresh pebbles, the equilibrium core search indicates that continuous loading of fresh fuel is required to sustain reactor operation after 1000 days of fuel depletion with depleted-fuel circulation. Additionally, the model predicts that 213 fresh pebbles must be added to the top layer of the reactor to ensure that keff does not fall below the assumed reactivity limit of 1.01.
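
The pebble-grouping step lends itself to a short sketch. The code below, a simplified stand-in for post-processing a LIGGGHTS/DEM dump, bins pebble centre coordinates into equal-volume radial rings and uniform axial layers, which is the kind of (radial, axial) zoning the abstract describes; the cylinder dimensions and zone counts are made-up example values.

```python
import numpy as np

def assign_zones(positions, core_radius, core_height, n_radial=5, n_axial=10):
    """Map pebble centre coordinates (x, y, z) to (radial, axial) zone indices.

    `positions` plays the role of a DEM (e.g. LIGGGHTS) dump of pebble centres;
    here it is just an array of shape (n_pebbles, 3).
    """
    x, y, z = positions.T
    r = np.sqrt(x**2 + y**2)
    # Equal-volume radial rings: boundary i sits at core_radius * sqrt(i / n_radial).
    radial_idx = np.minimum((n_radial * (r / core_radius) ** 2).astype(int), n_radial - 1)
    axial_idx = np.minimum((n_axial * z / core_height).astype(int), n_axial - 1)
    return radial_idx, axial_idx

# Toy pebble bed: uniform random pebble centres in a cylinder.
rng = np.random.default_rng(1)
n = 10_000
r = 1.2 * np.sqrt(rng.random(n))       # uniform over the cross-sectional area
theta = 2 * np.pi * rng.random(n)
z = 8.0 * rng.random(n)
pos = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

rad, ax = assign_zones(pos, core_radius=1.2, core_height=8.0)
counts = np.zeros((5, 10), dtype=int)
np.add.at(counts, (rad, ax), 1)
print(counts.sum(axis=1))   # pebbles per radial ring (roughly equal by construction)
```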

28 pages, 2740 KiB  
Article
Maximizing Net Present Value for Resource Constraint Project Scheduling Problems with Payments at Event Occurrences Using Approximate Dynamic Programming
by Tshewang Phuntsho and Tad Gonsalves
Algorithms 2024, 17(5), 180; https://doi.org/10.3390/a17050180 - 28 Apr 2024
Cited by 1 | Viewed by 2098
Abstract
The Resource-Constrained Project Scheduling Problem with Discounted Cash Flows (RCPSPDC) focuses on maximizing the net present value by summing the discounted cash flows of project activities. An extension of this problem is the Payment at Event Occurrences (PEO) scheme, where the client makes multiple payments to the contractor upon completion of predefined activities, with an additional final settlement at project completion. Numerous approximation methods, such as metaheuristics, have been proposed to solve this NP-hard problem. However, these methods suffer from parameter-control issues and/or the computational cost of correcting infeasible solutions. Alternatively, approximate dynamic programming (ADP) sequentially generates a schedule based on strategies computed via Monte Carlo (MC) simulations. This saves the computations required for solution corrections, but its performance is highly dependent on the strategy used. In this study, we propose the hybridization of ADP with three different metaheuristics to take advantage of their combined strengths, resulting in six different models. The Estimation of Distribution Algorithm (EDA) and Ant Colony Optimization (ACO) were used to recommend policies for ADP, and a Discrete Cuckoo Search (DCS) further improved the schedules generated by ADP. Our experimental analysis on the j30, j60, and j90 datasets of PSPLIB shows that ADP–DCS is better than ADP alone. Implementing the EDA and ACO as prioritization strategies for the Monte Carlo simulations greatly improved the solutions, with high statistical significance. In addition, models with the EDA showed better performance than those with ACO and random priority, especially as the number of events increased.
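
A minimal sketch of the Monte Carlo rollout idea that underlies the ADP component (not the authors' models): a serial schedule generation scheme builds a resource-feasible schedule from a random priority list, the schedule's NPV is computed from discounted cash flows at activity completion, and the best rollout is kept. The toy project data, the single renewable resource, and the omission of the PEO payment events are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy project: activity -> (duration, resource demand, cash flow on completion, predecessors).
ACTS = {
    "A": (3, 2,  -50, []),
    "B": (2, 2,  -30, ["A"]),
    "C": (4, 3,  120, ["A"]),
    "D": (2, 2,   80, ["B"]),
    "E": (1, 1,  100, ["C", "D"]),
}
CAPACITY, RATE, HORIZON = 4, 0.01, 50

def serial_sgs(priority):
    """Resource-feasible serial schedule generation scheme driven by a priority list."""
    usage = np.zeros(HORIZON, dtype=int)
    finish, remaining = {}, set(ACTS)
    while remaining:
        eligible = [a for a in remaining if all(p in finish for p in ACTS[a][3])]
        act = min(eligible, key=priority.get)          # most urgent eligible activity
        dur, demand, _, preds = ACTS[act]
        start = max((finish[p] for p in preds), default=0)
        while np.any(usage[start:start + dur] + demand > CAPACITY):
            start += 1                                 # delay until resources are free
        usage[start:start + dur] += demand
        finish[act] = start + dur
        remaining.remove(act)
    return finish

def npv(finish):
    return sum(cf / (1 + RATE) ** finish[a] for a, (_, _, cf, _) in ACTS.items())

# Monte Carlo rollouts over random priority lists; keep the best NPV found.
best = max(npv(serial_sgs({a: rng.random() for a in ACTS})) for _ in range(2000))
print(f"best NPV over 2000 rollouts: {best:.2f}")
```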

16 pages, 5249 KiB  
Article
Reverse Engineering of Radical Polymerizations by Multi-Objective Optimization
by Jelena Fiosina, Philipp Sievers, Gavaskar Kanagaraj, Marco Drache and Sabine Beuermann
Polymers 2024, 16(7), 945; https://doi.org/10.3390/polym16070945 - 29 Mar 2024
Cited by 4 | Viewed by 1423
Abstract
Reverse engineering is applied to identify optimum polymerization conditions for the synthesis of polymers with pre-defined properties. The proposed approach uses multi-objective optimization (MOO) and provides multiple candidate polymerization procedures to achieve the targeted polymer property. The objectives for optimization include maximal similarity of the molar mass distribution (MMD) to the target MMD, a minimal reaction time, and maximal monomer conversion. The method is tested for vinyl acetate radical polymerizations and can be adapted to other monomers. The data for the optimization procedure are generated by an in-house-developed kinetic Monte Carlo (kMC) simulator for a selected recipe search space. The proposed reverse engineering algorithm comprises several steps: kMC simulations over the selected recipe search space to derive the initial data, MOO for a targeted MMD, and identification of the Pareto-optimal space. The last step uses a weighted-sum optimization function to calculate the weighted score of each candidate polymerization condition. To decrease the execution time, clustering of the search space based on MMDs is applied. The performance of the proposed approach is tested for various target MMDs. The suggested MOO-based reverse engineering provides multiple recipe candidates depending on the competing objectives.
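
The weighted-score step in the last stage can be illustrated directly. The sketch below, with invented candidate data, converts the three objectives (MMD dissimilarity to the target, reaction time, monomer conversion) into minimization form, min-max normalizes them, and ranks candidate recipes by a weighted sum; the weights are arbitrary example values, not the authors'.

```python
import numpy as np

# Hypothetical Pareto-front candidates from kMC simulations:
# columns = [MMD dissimilarity to target, reaction time / h, monomer conversion].
candidates = np.array([
    [0.02, 6.0, 0.92],
    [0.05, 3.5, 0.88],
    [0.01, 9.0, 0.95],
    [0.08, 2.0, 0.80],
])
weights = np.array([0.5, 0.25, 0.25])   # assumed relative importance of the objectives

# Convert everything to "lower is better" and min-max normalize each column.
costs = np.column_stack([candidates[:, 0], candidates[:, 1], 1.0 - candidates[:, 2]])
norm = (costs - costs.min(axis=0)) / (np.ptp(costs, axis=0) + 1e-12)
scores = norm @ weights

ranking = np.argsort(scores)
print("candidate recipes ranked best to worst:", ranking)
```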

20 pages, 840 KiB  
Article
Set-Based Group Search Optimizer for Stochastic Many-Objective Optimal Power Flow
by Jiehui Zheng, Mingming Tao, Zhigang Li and Qinghua Wu
Appl. Sci. 2023, 13(18), 10247; https://doi.org/10.3390/app131810247 - 12 Sep 2023
Cited by 1 | Viewed by 1449
Abstract
The conventional optimal power flow (OPF) is confronted with challenges in tackling more than three objectives and the stochastic characteristics arising from the uncertainty and intermittency of renewable energy sources (RESs). However, few methods are available that simultaneously address high-dimensional objective optimization and uncertainty handling. This paper proposes a set-based group search optimizer (SetGSO) to tackle the stochastic many-objective optimal power flow (MaOPF) of power systems penetrated by renewable energy sources. The proposed SetGSO represents the original stochastic variables by set-based individuals under the evolutionary strategy of the basic GSO, without using repeated sampling or probabilistic information. Consequently, two metrics, hyper-volume and average imprecision, are introduced to transform the stochastic MaOPF into a deterministic bi-objective OPF, guaranteeing a much superior Pareto-optimal front. Finally, the method was evaluated on three modified bus systems containing renewable energy sources and compared with the basic GSO using Monte Carlo sampling (GSO-MC) and a set-based genetic algorithm (SetGA) in solving the stochastic MaOPF. The numerical results show that the proposed SetGSO saves 90% of the computation time compared to the sampling-based approaches and achieves improvements in both the hyper-volume and average imprecision indicators, with maximum enhancements of approximately 30% and 7%, respectively, compared to SetGA.

26 pages, 10217 KiB  
Article
Searching for Promisingly Trained Artificial Neural Networks
by Juan M. Lujano-Rojas, Rodolfo Dufo-López, Jesús Sergio Artal-Sevil and Eduardo García-Paricio
Forecasting 2023, 5(3), 550-575; https://doi.org/10.3390/forecast5030031 - 4 Sep 2023
Cited by 1 | Viewed by 1939
Abstract
Assessing the training process of artificial neural networks (ANNs) is vital for enhancing their performance and broadening their applicability. This paper employs the Monte Carlo simulation (MCS) technique, integrated with a stopping criterion, to construct the probability distribution of the learning error of an ANN designed for short-term forecasting. The training and validation processes were conducted multiple times, each time from a unique random starting point, and the subsequent one-step-ahead forecasting error was calculated. From this, we ascertained the probability of having obtained all the local optima. Our extensive computational analysis involved training a shallow feedforward neural network (FFNN) using wind power and load demand data from the transmission systems of the Netherlands and Germany. Furthermore, the analysis was expanded to include wind speed prediction using a long short-term memory (LSTM) network at a site in Spain. The improvement gained from the FFNN, which has a high probability of being the global optimum, ranges from 0.7% to 8.6%, depending on the forecasting variable. This solution outperforms the persistent model by 5.5% to 20.3%. For wind speed predictions using an LSTM, the improvement over an average-trained network stands at 9.5%, and is 6% superior to the persistent approach. These outcomes suggest that the advantages of exhaustive search vary based on the problem being analyzed and the type of network in use. The MCS method we implemented, which estimates the probability of identifying all local optima, can act as a foundational step for other techniques like Bayesian model selection, which assumes that the global optimum is encompassed within the available hypotheses.
(This article belongs to the Special Issue Advanced Methods for Renewable Energy Forecasting)
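
A compact sketch of the Monte Carlo restart idea (not the paper's exact stopping criterion): a small feedforward network is retrained from many random starting points on a toy one-step-ahead series, the validation errors are collected into an empirical distribution, and distinct local optima are counted by grouping rounded error values. The lag structure, network size, and grouping rule are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy one-step-ahead forecasting task (stand-in for the wind/load series in the paper).
t = np.arange(600, dtype=float)
series = np.sin(0.07 * t) + 0.1 * rng.standard_normal(t.size)
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

errors = []
for seed in range(30):                       # Monte Carlo restarts from random weights
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    net.fit(X_tr, y_tr)
    errors.append(np.mean((net.predict(X_va) - y_va) ** 2))

errors = np.round(errors, 4)
distinct = np.unique(errors)                 # crude grouping of local optima by error value
print(f"{len(distinct)} distinct optima found in {len(errors)} restarts;"
      f" best validation MSE = {errors.min():.4f}")
```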

29 pages, 6523 KiB  
Article
Improving the Performance of Hydrological Model Parameter Uncertainty Analysis Using a Constrained Multi-Objective Intelligent Optimization Algorithm
by Xichen Liu, Guangyuan Kan, Liuqian Ding, Xiaoyan He, Ronghua Liu and Ke Liang
Water 2023, 15(15), 2700; https://doi.org/10.3390/w15152700 - 26 Jul 2023
Cited by 1 | Viewed by 2112
Abstract
In the field of hydrological model parameter uncertainty analysis, sampling methods such as Differential Evolution based on Markov Chain Monte Carlo (DE-MC) and Shuffled Complex Evolution Metropolis (SCEM-UA) algorithms have been widely applied. However, they have two drawbacks that may degrade the uncertainty analysis. The first is that few optimization algorithms consider the physical meaning and reasonable range of the model parameters, so traditional sampling algorithms may generate non-physical parameter values and poorly simulated hydrographs. The second is that the widely used sampling algorithms commonly involve only a single objective. Such sampling procedures implicitly introduce too strong an "exploitation" property into the sampling process, consequently destroying the diversity of the sampled population, i.e., its "exploration" property. Here, "exploitation" refers to refining good already-existing solutions so that their fitness improves further, while "exploration" denotes searching for new solutions in new regions. To improve the performance of uncertainty analysis algorithms, this research proposes a constrained multi-objective intelligent optimization algorithm that preserves the physical meaning of the model parameters using the penalty function method and maintains population diversity using a Non-dominated Sorting Genetic Algorithm II (NSGA-II) multi-objective optimization procedure. The representativeness of the parameter population is estimated on the basis of the mean and standard deviation of the Nash–Sutcliffe coefficient, and the diversity is evaluated on the basis of the mean Euclidean distance. The Chengcun watershed is selected as the study area, and uncertainty analysis is carried out. The numerical simulations indicate that the performance of the proposed algorithm is significantly improved, preserving the physical meaning and reasonable range of the model parameters while significantly improving the diversity and reliability of the sampled parameter population.
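
Two ingredients of the proposed algorithm, the penalty for non-physical parameter values and the mean-Euclidean-distance diversity metric, can be sketched as follows; the parameter ranges, the toy "model", and the penalty weight are hypothetical.

```python
import numpy as np

# Physically meaningful ranges for two hypothetical model parameters.
LOWER = np.array([0.1, 0.0])
UPPER = np.array([2.0, 50.0])

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (higher is better, 1 is perfect)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def penalized_objective(params, simulate, obs, penalty_weight=100.0):
    """NSE turned into a minimization objective plus a penalty for leaving the range."""
    violation = np.sum(np.maximum(LOWER - params, 0) + np.maximum(params - UPPER, 0))
    return -nse(simulate(params), obs) + penalty_weight * violation

def mean_euclidean_distance(population):
    """Diversity metric used to judge the 'exploration' of a sampled population."""
    diffs = population[:, None, :] - population[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    n = len(population)
    return d.sum() / (n * (n - 1))

# Example with a toy linear 'hydrological model'.
obs = np.array([1.0, 2.1, 2.9, 4.2])
simulate = lambda p: p[0] * np.array([1, 2, 3, 4]) + p[1] * 0.01
print(penalized_objective(np.array([1.0, 10.0]), simulate, obs))   # inside the range
print(penalized_objective(np.array([2.5, 60.0]), simulate, obs))   # penalized
pop = np.array([[0.5, 10.0], [1.2, 20.0], [1.8, 35.0]])
print(f"population diversity: {mean_euclidean_distance(pop):.2f}")
```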

16 pages, 2925 KiB  
Article
Research on Seismic Connectivity Reliability Analysis of Water Distribution System Based on CUDA
by Li Long, Huaping Yang, Yan Zhou and Yong Yang
Water 2023, 15(11), 2087; https://doi.org/10.3390/w15112087 - 31 May 2023
Cited by 1 | Viewed by 1562
Abstract
To improve the efficiency of seismic connectivity reliability (SCR) analysis of water distribution systems (WDS) based on Monte Carlo (MC) simulation, the quasi-Monte Carlo (QMC) method, sampled with a low-discrepancy sequence, is applied. Furthermore, a parallel algorithm for SCR analysis of WDS, combining the breadth-first search algorithm with the QMC method on the Compute Unified Device Architecture (CUDA) platform, is proposed. A city WDS was taken as a computational example, the accuracy and efficiency of the traditional MC algorithm and the parallel algorithm were compared, and the influence of the Sobol sequence and the pseudo-random number sequence was analysed. The results show that when 1,000,000 simulations are performed, the maximum error between the two methods is 0.2%, and the parallel method achieves a six-fold speedup over the serial method, indicating that the proposed parallel method is correct, meets the accuracy requirements, and helps to improve the efficiency of SCR analysis. For the same number of simulations, the results based on the Sobol sequence are more accurate than those based on the pseudo-random number sequence. The proposed parallel method also achieves a good acceleration effect in the SCR analysis of large-scale WDS.
(This article belongs to the Section Urban Water Management)
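
A minimal CPU-only sketch of the sampling comparison (the CUDA parallelism is omitted): each pipe of a toy network fails independently with its seismic failure probability, source-to-demand connectivity is checked with a breadth-first search, and the reliability is estimated from both pseudo-random and scrambled Sobol samples via `scipy.stats.qmc`. The network, probabilities, and sample size are invented.

```python
import numpy as np
from collections import deque
from scipy.stats import qmc

# Toy water distribution network: pipes as (node_a, node_b, seismic failure probability).
EDGES = [(0, 1, 0.10), (1, 2, 0.15), (0, 3, 0.20), (3, 2, 0.10), (2, 4, 0.05)]
N_NODES, SOURCE, DEMAND, N = 5, 0, 4, 2 ** 14

def connected(alive):
    """Breadth-first search over surviving pipes; `alive` holds one boolean per edge."""
    adj = {i: [] for i in range(N_NODES)}
    for ok, (a, b, _) in zip(alive, EDGES):
        if ok:
            adj[a].append(b)
            adj[b].append(a)
    seen, queue = {SOURCE}, deque([SOURCE])
    while queue:
        for nb in adj[queue.popleft()]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return DEMAND in seen

p_fail = np.array([p for _, _, p in EDGES])
u_mc = np.random.default_rng(3).random((N, len(EDGES)))            # pseudo-random
u_qmc = qmc.Sobol(d=len(EDGES), scramble=True, seed=3).random(N)   # low-discrepancy

for name, u in [("pseudo-random", u_mc), ("Sobol        ", u_qmc)]:
    survive = u >= p_fail                  # pipe survives if its uniform draw exceeds p_fail
    rel = np.mean([connected(row) for row in survive])
    print(f"{name} SCR estimate: {rel:.4f}")
```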

19 pages, 8090 KiB  
Article
Dynamic Optimal Power Dispatch in Unbalanced Distribution Networks with Single-Phase Solar PV Units and BESS
by Jordan Radosavljević, Aphrodite Ktena, Milena Gajić, Miloš Milovanović and Jovana Živić
Energies 2023, 16(11), 4356; https://doi.org/10.3390/en16114356 - 26 May 2023
Cited by 8 | Viewed by 1769
Abstract
Battery energy storage systems (BESSs) are a promising solution for increasing the efficiency and flexibility of distribution networks (DNs) with a significant penetration level of photovoltaic (PV) systems. Various issues related to the optimal operation of DNs with integrated PV systems and BESS need to be addressed to maximize DN performance. This paper deals with day-ahead optimal active–reactive power dispatching in unbalanced DNs with integrated single-phase PV generation and BESS. The objectives are the minimization of the cost of electricity, the energy losses in the DN, and the voltage unbalance at three-phase load buses through optimal management of active and reactive power flows. To solve this highly constrained non-linear optimization problem, a hybrid of particle swarm optimization with sigmoid-based acceleration coefficients (PSOS) and a chaotic gravitational search algorithm (CGSA), called the PSOS-CGSA algorithm, is proposed. A scenario-based approach combining the Monte Carlo simulation (MCS) method with a simultaneous backward reduction algorithm is used for the probabilistic assessment of the uncertainty of PV generation and load power. The effectiveness of the proposed procedure is evaluated through a series of test cases on a modified IEEE 13-bus feeder. The simulation results show that the proposed approach enables a large reduction in daily electricity costs, as well as a 22% reduction in expected daily energy losses in the DN compared to the base case without BESS, while ensuring that the phase voltage unbalance rate (PVUR) remains below the maximum limit of 2% for all three-phase buses in the DN.
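
The scenario-generation step can be sketched independently of the dispatch problem: Monte Carlo PV output scenarios are generated around a clear-sky profile and then thinned with a simple simultaneous backward reduction that repeatedly drops the scenario whose removal costs the least and transfers its probability to its nearest neighbour. The scenario model and counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo generation of daily PV output scenarios (24 hourly values, per-unit).
hours = np.arange(24)
clear_sky = np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)
scenarios = np.clip(clear_sky * rng.uniform(0.4, 1.0, (200, 1))
                    + 0.05 * rng.standard_normal((200, 24)), 0, None)
probs = np.full(200, 1 / 200)

def backward_reduction(scen, prob, keep):
    """Simultaneous backward reduction: repeatedly drop the scenario whose removal
    costs the least (its probability times the distance to its closest remaining
    scenario), transferring its probability to that closest scenario."""
    idx = list(range(len(scen)))
    prob = prob.copy()
    while len(idx) > keep:
        sub = scen[idx]
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        cost = prob[idx] * d.min(axis=1)        # removal cost of each remaining scenario
        j = int(np.argmin(cost))                # position (within idx) of scenario to drop
        nearest = idx[int(np.argmin(d[j]))]     # closest surviving scenario (global index)
        prob[nearest] += prob[idx[j]]           # transfer its probability
        idx.pop(j)
    return scen[idx], prob[idx]

reduced, reduced_probs = backward_reduction(scenarios, probs, keep=5)
print(reduced.shape, reduced_probs.sum())       # (5, 24) and total probability 1.0
```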

24 pages, 3447 KiB  
Article
UAV Aerial Image Generation of Crucial Components of High-Voltage Transmission Lines Based on Multi-Level Generative Adversarial Network
by Jinyu Wang, Yingna Li and Wenxiang Chen
Remote Sens. 2023, 15(5), 1412; https://doi.org/10.3390/rs15051412 - 2 Mar 2023
Cited by 16 | Viewed by 2718
Abstract
Improving the quality of images of the crucial components of transmission lines taken by unmanned aerial vehicles (UAVs) is a prerequisite for defect and fault location on high-voltage transmission lines and has attracted great attention from researchers in the UAV field. In recent years, generative adversarial networks (GANs) have achieved good results in image generation tasks. However, the generation of high-resolution images with rich semantic details from complex backgrounds is still challenging. Therefore, we propose a novel GAN-based image generation model for the critical components of power lines. The image background of the common public data set CPLID (Chinese Power Line Insulator Dataset) is simple and cannot fully reflect the complex environments of transmission line images; therefore, we established an image data set named "KCIGD" (The Key Component Image Generation Dataset) that can be used for model training. CFM-GAN (a GAN based on coarse–fine-grained generators and multiscale discriminators) can generate images of the critical components of transmission lines with rich semantic details and high resolution, providing high-quality inputs for transmission line fault detection and line inspection models to help guarantee the safe operation of power systems. These high-quality images can also be used to expand the data set. In addition, CFM-GAN consists of two generators and multiple discriminators, which can be flexibly applied to image generation tasks in other scenarios. We incorporate a penalty mechanism based on Monte Carlo search (MCS) into the CFM-GAN model to introduce more semantic details into the generated images, and we present a multiscale discriminator structure based on multitask learning mechanisms to effectively enhance the quality of the generated images. Experiments with the CFM-GAN model on the KCIGD dataset and the publicly available CPLID indicate that the proposed model outperforms existing mainstream models in image resolution and quality.
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Remote Sensing Big Data)

22 pages, 3513 KiB  
Article
Electric Vehicle Participation in Regional Grid Demand Response: Potential Analysis Model and Architecture Planning
by Qian Wang, Xiaolong Yang, Xiaoyu Yu, Jingwen Yun and Jinbo Zhang
Sustainability 2023, 15(3), 2763; https://doi.org/10.3390/su15032763 - 3 Feb 2023
Cited by 11 | Viewed by 2891
Abstract
When a large-scale random charging load is connected to the regional power grid, it can negatively affect the safe and stable operation of the power grid. Therefore, its charging load and response potential need to be studied in advance so that electric vehicles can interact well with the regional grid once connected. Firstly, after analyzing the factors influencing regional electric vehicle ownership, an electric vehicle ownership prediction model based on the sparrow search algorithm-improved BP neural network (SSA-BPNN) is established. On this basis, an electric vehicle charging load prediction model based on the sparrow search algorithm-improved BP neural network and the Monte Carlo algorithm (SSA-BPNN-MC) is established. Secondly, the charging behavior of different types of electric vehicles is analyzed and modeled, and data from a certain area are taken as an example for the prediction. Then, according to the load forecasting results, the future potential of electric vehicles in the region to participate in demand response is analyzed in depth using the scenario analysis method. Finally, to address the processing of massive multi-source heterogeneous data and the management of electric vehicles participating in regional grid demand response, a basic framework for such participation is developed, providing effective support for promoting electric vehicle participation in regional grid demand response.
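
A bare-bones sketch of the Monte Carlo charging-load step (the SSA-BPNN ownership forecast is replaced by a fixed vehicle count): plug-in times and energy demands are sampled per vehicle and accumulated into a 15-minute load profile. The distributions and charger rating are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(11)

N_EV = 5000            # predicted regional EV ownership (stand-in for the SSA-BPNN forecast)
CHARGER_KW = 7.0
STEPS = 96             # 15-minute resolution over one day

load = np.zeros(STEPS)
for _ in range(N_EV):
    # Private-car pattern: plug-in time around the evening peak, energy need from daily use.
    start = int(rng.normal(19.0, 1.5) * 4) % STEPS
    energy_kwh = np.clip(rng.normal(40, 12), 5, 80) * 0.2   # ~20% of battery recharged
    duration = int(np.ceil(energy_kwh / CHARGER_KW * 4))
    idx = (start + np.arange(duration)) % STEPS
    load[idx] += CHARGER_KW

peak_step = int(load.argmax())
print(f"peak charging load: {load.max() / 1000:.2f} MW at {peak_step / 4:.2f} h")
```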

18 pages, 4750 KiB  
Article
An Efficient Reliability Method with Multiple Shape Parameters Based on Radial Basis Function
by Wenyi Du, Juan Ma, Peng Yue and Yongzhen Gong
Appl. Sci. 2022, 12(19), 9689; https://doi.org/10.3390/app12199689 - 27 Sep 2022
Cited by 3 | Viewed by 1541
Abstract
Structural reliability analysis involves an inherent trade-off between efficiency and accuracy. A metamodel can significantly reduce the computational cost of reliability analysis through a simpler approximation, so it is crucial to build a metamodel that achieves accurate estimation with a minimum number of simulations. To this end, an effective adaptive metamodel based on the combination of the radial basis function (RBF) model and Monte Carlo simulation (MCS) is proposed. Different shape parameters are first used to generate the weighted prediction variance, and the search for new training samples is guided by an active learning function that achieves a tradeoff between (1) being close enough to the limit state function (LSF) to have high reliability sensitivity; (2) keeping enough distance from the existing samples to avoid a clustering problem; and (3) lying in the sensitive region to ensure the effectiveness of the information obtained. The performance of the proposed method for nonlinear, non-convex, and high-dimensional reliability analysis is validated by three numerical cases. The results indicate the high efficiency and accuracy of the proposed method.
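
A compact sketch of the multi-shape-parameter idea (not the paper's exact learning function): Gaussian RBF surrogates are fitted with several shape parameters, the spread of their predictions over a Monte Carlo candidate pool serves as the prediction variance, and a simple score prefers points that are close to the limit state, far from existing samples, and where the surrogates disagree. The toy performance function and the score formula are assumptions.

```python
import numpy as np

def rbf_fit(X, y, eps):
    """Gaussian RBF interpolation weights for one shape parameter eps."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    return np.linalg.solve(np.exp(-(eps * d) ** 2), y)

def rbf_predict(X, w, eps, Xq):
    d = np.linalg.norm(Xq[:, None] - X[None, :], axis=2)
    return np.exp(-(eps * d) ** 2) @ w

def g(x):                      # toy implicit performance function (g < 0 means failure)
    return x[:, 0] ** 3 + x[:, 1] + 6.0

rng = np.random.default_rng(2)
X = rng.uniform(-4, 4, (12, 2))                        # initial design of experiments
y = g(X)
candidates = rng.standard_normal((10_000, 2)) * 2.0    # MCS candidate pool

eps_set = [0.3, 0.6, 1.2]                              # multiple shape parameters
preds = np.stack([rbf_predict(X, rbf_fit(X, y, e), e, candidates) for e in eps_set])
mean, var = preds.mean(axis=0), preds.var(axis=0)      # prediction variance (equal weights)

dist_to_doe = np.linalg.norm(candidates[:, None] - X[None, :], axis=2).min(axis=1)
score = var * dist_to_doe / (np.abs(mean) + 1e-6)      # close to LSF, far from samples
new_sample = candidates[score.argmax()]
print("next training point suggested by the learning function:", new_sample)
```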

14 pages, 413 KiB  
Article
Demand Side Management Strategy for Multi-Objective Day-Ahead Scheduling Considering Wind Energy in Smart Grid
by Kalim Ullah, Taimoor Ahmad Khan, Ghulam Hafeez, Imran Khan, Sadia Murawwat, Basem Alamri, Faheem Ali, Sajjad Ali and Sheraz Khan
Energies 2022, 15(19), 6900; https://doi.org/10.3390/en15196900 - 21 Sep 2022
Cited by 35 | Viewed by 2682
Abstract
Distributed energy resources (DERs) and demand side management (DSM) strategy implementation in smart grids (SGs) lead to environmental and economic benefits. In this paper, a new DSM strategy is proposed for the day-ahead scheduling problem in SGs with a high penetration of wind energy, optimizing a tri-objective problem: minimization of operating cost and pollution emissions, minimization of the cost associated with load curtailment, and minimization of the deviation between wind turbine (WT) output power and demand. Due to climatic conditions, wind energy is uncertain, and its prediction for day-ahead scheduling is challenging. Monte Carlo simulation (MCS) was used to predict wind energy before integration with the SG. The DSM strategy used in this study combines real-time pricing and incentives in a hybrid demand response program (H-DRP). To solve the proposed tri-objective SG scheduling problem, a multi-objective genetic algorithm (MOGA) is proposed, which yields non-dominated solutions in the feasible search area. In addition, a decision-making mechanism (DMM) was applied to find the optimal solution amongst the non-dominated solutions. The proposed scheduling model successfully optimizes the objective functions. The simulations were carried out in MATLAB 2021a, and the model was validated on the SG using multiple balancing constraints for power balance at the consumer end.
(This article belongs to the Special Issue New Trends in Power Networks' Transition towards Renewable Energy)
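
The wind-prediction step can be sketched on its own: Monte Carlo wind-speed draws from an assumed hourly Weibull model are passed through a generic turbine power curve and averaged to give an expected day-ahead output profile; the Weibull parameters, diurnal variation, and turbine ratings are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Assumed hourly Weibull wind-speed model and a generic 2 MW turbine power curve.
K = 2.0                                                    # Weibull shape
CUT_IN, RATED_V, CUT_OUT, RATED_P = 3.0, 12.0, 25.0, 2.0   # m/s, m/s, m/s, MW

def wt_power(v):
    ramp = RATED_P * (v**3 - CUT_IN**3) / (RATED_V**3 - CUT_IN**3)
    return np.where(v < CUT_IN, 0.0,
           np.where(v < RATED_V, ramp,
           np.where(v < CUT_OUT, RATED_P, 0.0)))

N = 20_000
expected = np.empty(24)
for h in range(24):
    c_h = 8.0 + 2.0 * np.sin(2 * np.pi * h / 24)   # diurnal variation of the Weibull scale
    v = c_h * rng.weibull(K, N)                    # Monte Carlo wind-speed draws for hour h
    expected[h] = wt_power(v).mean()

print("expected day-ahead WT output profile (MW):")
print(np.round(expected, 2))
```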

21 pages, 3791 KiB  
Article
A Grasshopper Optimization Algorithm-Based Response Surface Method for Non-Probabilistic Structural Reliability Analysis with an Implicit Performance Function
by Qi Li, Junmu Wang and Guoshao Su
Buildings 2022, 12(7), 1061; https://doi.org/10.3390/buildings12071061 - 21 Jul 2022
Cited by 4 | Viewed by 2595
Abstract
Non-probabilistic reliability analysis has great developmental potential in the field of structural reliability analysis, as it is often difficult to obtain enough samples to construct an accurate probability distribution function of the random variables based on probabilistic theory. In practical engineering cases, the performance function (PF) is commonly implicit. Monte Carlo simulation (MCS) is commonly used for structural reliability analysis with implicit PFs; however, MCS requires the calculation of thousands of PF values. Such calculation can be time-consuming when the structural systems are complicated and numerical analysis procedures such as the finite element method have to be adopted to obtain the PF values. To address this issue, this paper presents a grasshopper optimization algorithm-based response surface method (RSM). First, the method employs a quadratic polynomial to approximate the implicit PF using a small set of actual values of the implicit PF. Second, the grasshopper optimization algorithm (GOA) is used to search for the global optimal solution of the scaling factor of the convex set, since the problem of solving the reliability index is transformed into an unconstrained optimization problem. During the GOA search process, a dynamic response surface updating strategy is used to improve the approximation accuracy near the current optimal point and thereby the computational efficiency. Two mathematical examples and two engineering structural examples are given to verify the feasibility of the proposed method. The results compare favorably with those of MCS. The proposed method can be non-invasively combined with finite element analysis software to solve non-probabilistic reliability analysis problems of structures with implicit PFs with high efficiency and high accuracy.
(This article belongs to the Section Building Structures)
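
A rough sketch of the response-surface idea (with the grasshopper optimizer replaced by SciPy's differential evolution, and without the dynamic updating strategy): a quadratic polynomial is fitted to a handful of evaluations of a stand-in implicit performance function, and the smallest symmetric interval scaling factor at which the approximated PF can reach zero is found by bisection. The toy PF, sample counts, and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def g_true(x):
    """Stand-in for an implicit performance function (in practice a FEM call); g < 0 = failure."""
    return 3.0 - 0.1 * x[0] ** 2 - x[1]

# Step 1: fit a quadratic polynomial response surface to a small set of PF evaluations.
rng = np.random.default_rng(4)
X = rng.uniform(-2.0, 2.0, (15, 2))
y = np.array([g_true(x) for x in X])

def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2], axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
g_hat = lambda x: features(np.asarray(x)) @ coef

# Step 2: smallest symmetric interval scaling factor s for which the approximated PF can
# reach zero inside the box |x_i| <= s, found by bisection on s. The inner global
# minimisation uses differential evolution in place of the paper's grasshopper optimiser.
def min_g_in_box(s):
    return differential_evolution(g_hat, [(-s, s)] * 2, seed=0, tol=1e-9).fun

lo, hi = 1e-3, 10.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if min_g_in_box(mid) <= 0 else (mid, hi)
print(f"non-probabilistic reliability index (critical scaling factor) ~ {hi:.3f}")
```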
