Search Results (6,338)

Search Parameters:
Keywords = Monte Carlo simulation

30 pages, 18533 KB  
Article
Distance Velocity Fusion Algorithm Based on Sequential Monte Carlo Probability Hypothesis Density Filter in Low-to-No Power Scenario
by Wei Chen, Fei Teng, Hu Jin, Yingke Lei, Feng Qian and Mengbo Zhang
Electronics 2026, 15(9), 1787; https://doi.org/10.3390/electronics15091787 - 22 Apr 2026
Abstract
In the context of an increasingly chaotic electromagnetic environment, the problem of multisensor data fusion for tracking airborne maneuvering targets has garnered significant attention and found wide application. In low-to-no power scenarios, certain sensors exhibit measurement inaccuracies, and the disparity in measurement precision among networked sensors leads to data inequality. This results in poor fusion accuracy in the multisensor fusion process, particularly when prior weights are unknown. To address these problems, this study first redefines the motion model of airborne maneuvering targets to capture the complexity of the target's trajectory. Subsequently, a modeling framework for low-to-no power scenarios is established using a one-transmitter three-receiver radar system; in this model, the Signal-to-Noise Ratio (SNR) of two of the sensors is intentionally reduced to simulate data inequality. Finally, a distance velocity (DV) fusion algorithm is designed based on the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) algorithm. Specifically, after the state extraction step of the SMC-PHD filter, the final target estimate is obtained in two steps: judgment and weighted summation. The simulation results demonstrate the effectiveness of the proposed algorithm in improving fusion accuracy and robustness in dynamic environments and under real electromagnetic interference. Full article
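The abstract does not detail the judgment and weighted-summation steps, so the following minimal sketch only illustrates the general idea of combining estimates from sensors of unequal precision: a crude consistency check followed by inverse-variance weighting. All names and numbers are hypothetical and are not taken from the paper.

```python
import numpy as np

def fuse_estimates(estimates, variances, gate=50.0):
    """Fuse per-sensor estimates of unequal precision (illustrative only).

    Estimates that disagree with the median by more than `gate` (a
    hypothetical consistency check standing in for the paper's
    'judgment' step) are discarded; the survivors are combined by
    inverse-variance weighting.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Crude consistency check: keep estimates close to the median.
    keep = np.abs(estimates - np.median(estimates)) <= gate
    est, var = estimates[keep], variances[keep]

    # Inverse-variance weighted sum (weights sum to 1).
    weights = (1.0 / var) / np.sum(1.0 / var)
    return float(np.sum(weights * est))

# Two precise receivers and one degraded (low-SNR) receiver.
print(fuse_estimates([1012.0, 1008.0, 1150.0], [4.0, 5.0, 400.0]))
```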
27 pages, 1563 KB  
Article
A Safety-Constrained Multi-Objective Optimization Framework for Autonomous Mining Systems: Statistical Validation in Surface and Underground Environments
by Rajesh Patil and Magnus Löfstrand
Technologies 2026, 14(5), 248; https://doi.org/10.3390/technologies14050248 - 22 Apr 2026
Abstract
The incorporation of artificial intelligence, multi-sensor perception, and cyber-physical control into mining operations offers tremendous opportunities for increasing productivity, safety, and sustainability. However, present frameworks focus on discrete subsystems rather than providing a unified, safety-constrained optimization method that has been verified in both surface and underground environments. This paper describes a scalable, hierarchical autonomous mining architecture that incorporates sensor fusion, edge intelligence, fleet coordination, and digital twin-based decision support. It is designed to operate in GNSS-denied conditions and extreme climatic constraints common to Nordic mining environments. A mathematical modeling approach formalizes vehicle dynamics, drilling mechanics, and multi-agent fleet coordination inside a safety-constrained multi-objective optimization formulation. The framework is validated using Monte Carlo simulation with uncertainty measurement, sensitivity analysis, and statistical hypothesis testing. The preliminary results show improvements over a typical baseline, with productivity increasing by approximately 24.3% ± 3.2%, energy consumption decreasing by 12.8% ± 2.5%, and safety risk decreasing by 48.6% ± 4.1%. A sensitivity study identifies localization accuracy, communication delay, and optimization weighting as the primary system performance drivers. The suggested framework serves as a reproducible and transferable reference model for next-generation intelligent mining systems, having direct applications to both industrial deployment and future research in autonomous resource extraction. Full article
(This article belongs to the Section Information and Communication Technologies)
16 pages, 6386 KB  
Article
Nano-Power OTA-Based Low-Pass Filter for Ultra-Low-Energy Biomedical Signal Processing
by Tomasz Kulej, Montree Kumngern and Fabian Khateb
Sensors 2026, 26(9), 2586; https://doi.org/10.3390/s26092586 - 22 Apr 2026
Abstract
This paper presents a nanowatt-scale operational transconductance amplifier (OTA) and an electronically tunable third-order low-pass filter (LPF) designed for energy-constrained biomedical signal conditioning. The circuits are implemented in a 65 nm CMOS process and verified through comprehensive schematic-level simulations. Biased in the deep subthreshold region at 1 nA, the OTA achieves a 50 dB low-frequency gain, a 225 Hz unity-gain bandwidth at 10 pF load capacitance and an input-referred noise floor of 1.55 μV/√Hz, with a total power consumption of only 1.75 nW. The integrated third-order LPF provides a wide tuning range (37–668 Hz) via bias current modulation, exhibiting excellent linearity with a THD of 0.059% and a 65.3 dB dynamic range. Monte Carlo and PVT corner analyses demonstrate the design’s theoretical robustness against process variations and environmental fluctuations. ECG signal simulations validate the circuit’s effectiveness in suppressing high-frequency artifacts while preserving morphological integrity, providing a proof-of-concept for ultra-low-power wearable healthcare architectures. Full article
(This article belongs to the Section Biomedical Sensors)
23 pages, 3142 KB  
Article
A SAR Echo Simulation Method for Ship Targets in the Sea Based on Model Segmentation and Electromagnetic Scattering Characteristics Simulation
by Feixiang Ren, Pengbo Wang and Jiaquan Wen
Remote Sens. 2026, 18(9), 1266; https://doi.org/10.3390/rs18091266 - 22 Apr 2026
Abstract
The simulation of synthetic aperture radar (SAR) echo signals usually relies on complex hardware equipment and a large amount of scene data, which results in high costs and low efficiency. In order to simulate SAR echo signals of ship targets in the sea quickly and accurately in complex environments at a lower cost, this paper proposes a SAR echo simulation method based on model segmentation and electromagnetic scattering characteristic simulation. This method first simulates sea surface models under different sea conditions based on the PM wave spectrum model and the Monte Carlo method, and segments them according to the requirements of simulation resolution. Then, it uses the Python 3.11 API in Blender 4.5 to segment the ship model automatically and optimize the visible surface elements and mesh for each sub-model. Next, it uses the Lua API in Feko to simulate the electromagnetic scattering characteristics of each sub-model of the sea and the ship target automatically, and obtains the required radar cross section (RCS) data of the ship target in the sea after processing. Finally, SAR echo simulation is realized through dual-channel technology. To further verify the simulation result, the chirp scaling (CS) algorithm is used for imaging processing. The results show that this method can realize SAR echo simulation of various ship targets under different sea conditions in a quick, accurate and cost-effective manner without the need for any hardware equipment. Full article
(This article belongs to the Special Issue SAR Monitoring of Marine and Coastal Environments)
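The random-phase Monte Carlo construction of a sea surface from a wave spectrum is a standard technique; the sketch below is a minimal one-dimensional version, assuming the Pierson-Moskowitz form of the PM spectrum and a hypothetical 10 m/s wind speed, and does not reproduce the paper's Blender/Feko pipeline.

```python
import numpy as np

G = 9.81                      # gravity, m/s^2
ALPHA, BETA = 8.1e-3, 0.74    # standard Pierson-Moskowitz constants

def pm_spectrum(omega, wind_speed):
    """Pierson-Moskowitz frequency spectrum S(omega)."""
    return (ALPHA * G**2 / omega**5) * np.exp(-BETA * (G / (wind_speed * omega))**4)

def sea_surface_profile(x, wind_speed, n_components=200, seed=0):
    """One Monte Carlo realization of a 1-D sea-surface elevation profile.

    Each spectral component gets amplitude sqrt(2*S*d_omega) and an
    independent uniform random phase; deep-water dispersion k = omega^2/g.
    """
    rng = np.random.default_rng(seed)
    omega = np.linspace(0.2, 3.0, n_components)
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * pm_spectrum(omega, wind_speed) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_components)
    k = omega**2 / G
    return np.sum(amp[:, None] * np.cos(k[:, None] * x[None, :] + phase[:, None]), axis=0)

x = np.linspace(0.0, 500.0, 1001)                 # metres along the profile
eta = sea_surface_profile(x, wind_speed=10.0)     # hypothetical sea state
print(eta.std())   # significant wave height is roughly 4 * std
```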
16 pages, 5098 KB  
Article
Etch-ViGen: A Video Generation Model for Etching Simulation
by Li Ding, Hua Shao, Zhiqiang Li, Nan Liu, Rui Chen and Zhenjie Yao
AI 2026, 7(4), 149; https://doi.org/10.3390/ai7040149 - 21 Apr 2026
Abstract
With the scaling down of integrated circuit dimensions and the increasing complexity of transistor structures, the role of etching in manufacturing has become increasingly critical. We propose an etching simulation approach based on a video generation model, which models the evolution of the etching process as a video generation task. By embedding frames into quantized latent codeword representations using VQ-VAE (Vector Quantized Variational Autoencoder), injecting physical conditions with a CLIP projection layer, and leveraging a temporal autoregressive prediction model, we propose a generative model of the etching process. We validate the effectiveness of our model on both simulated and experimental data. Our approach achieves a 6000× speedup over the Monte Carlo method while reducing the simulation MAE (Mean Absolute Error) by 14.4% compared with the state-of-the-art video model. Furthermore, results generated by our video-based model show strong agreement with experimental data. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
38 pages, 4167 KB  
Article
Sustainable Operational Decision-Making for Thermal Power Enterprises’ Carbon Assets Oriented Toward Medium- and Long-Term Risk Exposure
by Ying Kuai, Yue Liu, Wu Wan, Boyan Zou and Yao Qin
Sustainability 2026, 18(8), 4094; https://doi.org/10.3390/su18084094 - 20 Apr 2026
Abstract
Against the background of deepening “dual carbon” goals and the continuously tightening policies of the national carbon market, the carbon asset risks faced by thermal power enterprises have shifted from short-term compliance cost fluctuations to medium- and long-term systemic risks. Managing these risks effectively is essential for ensuring the financial viability of thermal power operations during the low-carbon transition, thereby supporting the long-term sustainability of the energy sector. This study constructs a risk management framework for carbon assets in thermal power enterprises based on the LSTM model and option portfolios. First, the multi-dimensional characteristics of medium- and long-term carbon asset risks are systematically identified at the policy, market, and enterprise levels. Second, a dual-layer LSTM model with Dropout regularization is employed to simulate medium- and long-term carbon prices. The prediction results indicate a moderate upward trend in future carbon prices, with the fluctuation range gradually narrowing. On this basis, a combined hedging strategy of “core call options + auxiliary put options” is designed, capping the maximum procurement cost at 72.63 CNY/ton and covering over 90% of the risk of carbon price increases. Monte Carlo simulations and rolling window backtesting, conducted using operational data from a thermal power enterprise to validate the framework, verify the effectiveness and robustness of the strategy. The study shows that, through the integration of accurate LSTM predictions and proactive option hedging, thermal power enterprises can transform their carbon asset management from passive compliance to active value creation, thereby enhancing their operational sustainability and resilience during the energy transition. Full article
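As a rough illustration of how Monte Carlo simulation yields the distribution of hedged procurement cost under a call-option cap, the sketch below substitutes a geometric Brownian motion price model for the paper's LSTM forecast and considers a single call option; the strike echoes the 72.63 CNY/ton cap mentioned in the abstract, but all other numbers are hypothetical.

```python
import numpy as np

def hedged_cost_distribution(s0=60.0, strike=72.63, premium=3.0,
                             mu=0.05, sigma=0.25, horizon=1.0,
                             n_paths=100_000, seed=1):
    """Monte Carlo distribution of per-ton procurement cost with a call hedge.

    The carbon price is simulated with geometric Brownian motion purely
    for illustration (the paper uses an LSTM forecast instead). Holding a
    call with the given strike caps the effective spot purchase price at
    the strike, at the cost of the option premium.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    unhedged = s_t
    hedged = np.minimum(s_t, strike) + premium
    return unhedged, hedged

unhedged, hedged = hedged_cost_distribution()
print(f"unhedged 95th pct: {np.percentile(unhedged, 95):.2f} CNY/ton")
print(f"hedged   95th pct: {np.percentile(hedged, 95):.2f} CNY/ton")
```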
29 pages, 2275 KB  
Article
Reliability Analysis of Tuned Mass Damper-Equipped Structures Under Stochastic Excitation
by Lun Shao, Alexandre Saidi, Abdel-Malek Zine and Mohamed Ichchou
Vibration 2026, 9(2), 29; https://doi.org/10.3390/vibration9020029 - 20 Apr 2026
Abstract
Tuned mass dampers (TMDs) are commonly used to reduce excessive vibrations in engineering structures. Although their vibration control performance has been widely studied, the reliability of TMD-equipped structures under stochastic excitations has not been sufficiently investigated. In practical applications, random loads and system uncertainties may significantly affect structural safety, and an efficient evaluation of failure probability remains a challenging task; as a result, the application of reliability methods in vibration control has been greatly limited. In this work, the structural reliability of systems equipped with TMDs is analyzed by adopting the first-passage time (FPT) as the failure criterion. Numerical investigations are performed on continuous beam models with TMDs under different types of stochastic excitation. In addition, an experimental study on a two-story steel frame structure is conducted to further examine the reliability performance of TMD-controlled systems. To reduce the computational cost associated with Monte Carlo simulation, a data-driven classification method is employed to approximate the failure domain based on a limited number of samples. The results indicate that the proposed approach enables accurate reliability estimation with a substantial reduction in computational cost, making it suitable for large-scale reliability analysis of vibration-controlled structures under stochastic excitation. The experimental results further demonstrate the applicability of the proposed reliability assessment method for practical vibration control problems. Full article
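For readers unfamiliar with the first-passage criterion, the minimal sketch below estimates a first-passage failure probability by plain Monte Carlo for a single-degree-of-freedom oscillator under Gaussian white noise; it does not include a TMD or the paper's data-driven classifier, and every parameter value is hypothetical.

```python
import numpy as np

def first_passage_probability(omega=2 * np.pi, zeta=0.02, sigma=1.0,
                              barrier=0.6, duration=10.0, dt=2e-3,
                              n_samples=500, seed=2):
    """Crude Monte Carlo estimate of the first-passage failure probability.

    A single-degree-of-freedom oscillator under Gaussian white noise is
    integrated with Euler-Maruyama; a sample 'fails' if |x(t)| exceeds
    the barrier at any time within the duration (the first-passage
    criterion). Parameters are hypothetical, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(duration / dt)
    failures = 0
    for _ in range(n_samples):
        x, v = 0.0, 0.0
        noise = rng.standard_normal(n_steps) * sigma * np.sqrt(dt)
        for k in range(n_steps):
            v += (-2.0 * zeta * omega * v - omega**2 * x) * dt + noise[k]
            x += v * dt
            if abs(x) > barrier:
                failures += 1
                break
    return failures / n_samples

print(first_passage_probability())
```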
25 pages, 4753 KB  
Article
Agent-Based Modeling of Green Hydrogen Industry Scale-Up in Russia: Critical Thresholds, Phase Dynamics, and Investment Requirements
by Konstantin Gomonov, Svetlana Ratner, Arsen A. Petrosyan and Svetlana Revinova
Hydrogen 2026, 7(2), 53; https://doi.org/10.3390/hydrogen7020053 - 20 Apr 2026
Abstract
The development of a green hydrogen industry is a strategic priority for Russia’s energy transition, yet the dynamics of scaling up this nascent sector remain poorly understood. This study uses agent-based modeling (ABM) to simulate the co-evolution of Russia’s electricity, hydrogen, and electrolyzer sectors over 2024–2050. The model incorporates three types of heterogeneous agents (power producers, hydrogen producers, and electrolyzer manufacturers) operating under bounded rationality. Four scenarios are examined across 50 Monte Carlo runs each, varying the electrolyzer learning rate (10–25%), willingness to pay for green hydrogen (2–6 $/kg), and government support intensity. The results reveal an endogenous three-phase development pattern: Phase I (2024–2028) dominated by renewable capacity build-up reaching ~30 GW; Phase II (2029–2040) characterized by rapid electrolyzer deployment scaling to 14.5 GW; and Phase III (2041–2050) marked by stabilization at approximately 30 GW producing 1.12 Mt/year at 3.1 $/kg. Two critical thresholds are identified: renewable capacity exceeding 30–38 GW and low-cost electricity above 4–7 TWh/year. The electrolyzer learning rate emerges as the most influential parameter, while the pessimistic scenario confirms market failure without a green premium (WTP < 2 $/kg). Strategic investment losses of 2–6 billion USD are necessary catalysts for industry emergence. Russia’s 2030 production target (0.55 Mt) is found structurally infeasible under all scenarios. Full article
(This article belongs to the Special Issue Green Hydrogen Production)
21 pages, 11108 KB  
Article
Using Negative Power Transformation to Model Block Minima
by Thanawan Prahadchai, Piyapatr Busababodhin, Taeyong Kwon and Sanghoo Yoon
Mathematics 2026, 14(8), 1383; https://doi.org/10.3390/math14081383 - 20 Apr 2026
Abstract
This study proposes a novel transformation method for analyzing block minima using the generalized extreme value distribution (GEVD). The negative power transformation (NPT), which includes a tunable hyperparameter and reduces to the reciprocal transformation (RT) when set to 1, improves the accuracy and robustness in estimating long-term return levels (RL). Compared to traditional methods, the NPT-GEVD demonstrates lower bias, standard errors, and root mean square errors in Monte Carlo simulations. Furthermore, the NPT-GEVD provides consistent RL estimates with improved robustness across varying parameterizations and sample sizes, mainly when using L-moments for small datasets. The application of the NPT-GEVD to rainfall data from South Korea revealed that the RLs for detecting hourly cumulative rainfall threshold levels varied from 30 min to over 4 h, depending on the location and threshold. This research underscores the value of advanced transformation techniques in environmental risk management, offering critical insights for flood prediction and mitigation strategies in climate change. Full article
(This article belongs to the Special Issue Extreme Value Theory: Theory, Methodology and Applications)
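A minimal sketch of the transform-then-fit idea, assuming the negative power transformation takes the form x ↦ x^(-p) on positive data (with p = 1 giving the reciprocal transform) and using SciPy's maximum-likelihood GEV fit rather than the L-moment estimators discussed in the paper; the data are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

def npt_return_level(block_minima, return_period, power=1.0):
    """Return-level estimate for block minima via a negative power transform.

    Sketch only: small minima map to large transformed values, a GEV
    distribution is fitted to the transformed block values, and the
    fitted return level is mapped back. Requires strictly positive data.
    """
    x = np.asarray(block_minima, dtype=float)
    y = x ** (-power)                       # minima become (large) maxima
    c, loc, scale = genextreme.fit(y)       # ML fit of the GEV to transformed data
    y_rl = genextreme.isf(1.0 / return_period, c, loc=loc, scale=scale)
    return y_rl ** (-1.0 / power)           # back-transform to the original scale

rng = np.random.default_rng(3)
# Synthetic yearly minima of a positive quantity (hypothetical data).
minima = rng.gamma(shape=3.0, scale=5.0, size=60).round(2)
print(npt_return_level(minima, return_period=50, power=1.0))
```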
22 pages, 8531 KB  
Article
Research on the Trend of CO2 Emissions and Sustainable Scenario Prediction Before 2060—A Study of Hebei Province, China
by Yamei Chen, Xiaoning Wang and Qiong Chen
Sustainability 2026, 18(8), 4048; https://doi.org/10.3390/su18084048 - 19 Apr 2026
Abstract
Due to urbanization and industrialization, there are significant regional differences in carbon emissions, making it increasingly urgent and necessary to conduct an in-depth examination of carbon emission trends from energy consumption across various sectors at the provincial level. Taking Hebei Province, a major carbon-emitting province in China, as a case study, we analyzed carbon emissions from three perspectives: historical emissions, influencing factors, and scenario projections. First, we established a carbon emission inventory for energy consumption. Second, using the integrated LMDI-SD-MC framework, we constructed four subsystems (economy, society, energy, and technology) and employed three scenarios for forecasting. The results show that: (1) Carbon emissions in Hebei Province from 2003 to 2021 exhibited an increasing trend year by year, with the share of coal and coke decreasing and the share of natural gas increasing. The industry, residential, and transportation sectors accounted for more than 95% of total carbon emissions. (2) In terms of influencing factors, energy intensity and the level of economic development contributed the most significantly, with contribution rates of −75.97% and 195.97%, respectively. (3) Among the scenario projections, the low-carbon development scenario is the most suitable for Hebei Province, enabling the province to achieve its “Dual Carbon” goals as scheduled. Under the baseline development scenario, the peak is reached in 2040. Under the rapid development scenario, carbon emissions will reach 1130.86 × 10⁶ tons by 2060. (4) Uncertainty analysis using Monte Carlo simulation for all three scenarios showed errors within ±10%, indicating that the model results are robust and interpretable. This study provides a provincial-level emission reduction perspective for China to achieve its “Dual Carbon” goals and sustainable development. Full article
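The LMDI part of the LMDI-SD-MC framework refers to the logarithmic mean Divisia index decomposition; the sketch below shows the standard additive LMDI-I formula for a multiplicative emission identity, with purely hypothetical numbers (not the Hebei figures), so that factor contributions sum exactly to the total change.

```python
from math import log

def log_mean(a, b):
    """Logarithmic mean used by the additive LMDI-I method."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_decomposition(c0, cT, factors0, factorsT):
    """Additive LMDI-I decomposition of a change in emissions.

    Emissions are assumed to factor multiplicatively, e.g.
    C = (C/E) * (E/GDP) * GDP; each factor's contribution is
    L(C_T, C_0) * ln(f_T / f_0), and the contributions sum exactly
    to C_T - C_0.
    """
    L = log_mean(cT, c0)
    return {name: L * log(fT / f0)
            for (name, f0), (_, fT) in zip(factors0.items(), factorsT.items())}

# C = carbon intensity of energy * energy intensity of GDP * GDP (hypothetical values)
factors0 = {"carbon_intensity": 2.4, "energy_intensity": 0.50, "gdp": 500.0}   # C0 = 600
factorsT = {"carbon_intensity": 2.2, "energy_intensity": 0.40, "gdp": 900.0}   # CT = 792
effects = lmdi_decomposition(600.0, 792.0, factors0, factorsT)
print(effects, sum(effects.values()))   # contributions sum to 192
```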
18 pages, 744 KB  
Article
Evaluating the Impact of Intelligent Data Processing for Corporate Finance with the Use of Real Options Analysis
by Stanimir Ivanov Kabaivanov and Veneta Metodieva Markovska
J. Risk Financial Manag. 2026, 19(4), 292; https://doi.org/10.3390/jrfm19040292 - 18 Apr 2026
Abstract
Technological innovation is changing virtually every aspect of business practices and operational procedures. The introduction of large language models and various types of intelligent processing, commonly referred to as artificial intelligence, presents significant change to cope with. In this paper, we suggest an estimation method, based on real options analysis (ROA), that improves the assessment and valuation of intelligent data processing’s impact on organizations. The presented approach can reflect direct and indirect effects from introducing artificial intelligence methods and is therefore better suited than traditional financial metrics for the assessment of contemporary intelligent tools and solutions. Using Monte Carlo simulation and American-style real options, we have estimated two sample use cases to compare the ROA results against other common valuation methods. Numerical experiments indicate that the suggested approach is capable of capturing both the direct and indirect impact of new technologies, which improves relevant financial and management decisions. Full article
(This article belongs to the Special Issue The Role of Digitization in Corporate Finance)
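The abstract pairs Monte Carlo simulation with American-style real options; a common way to value such options is Longstaff-Schwarz least-squares Monte Carlo. The sketch below is a minimal version for an abandonment-type option (an American put on project value), assuming geometric Brownian motion for the underlying and hypothetical parameters; it is not the authors' implementation or either of their use cases.

```python
import numpy as np

def lsm_american_put(v0=100.0, strike=90.0, r=0.05, sigma=0.30,
                     maturity=3.0, n_steps=36, n_paths=50_000, seed=4):
    """Longstaff-Schwartz Monte Carlo value of an American-style real option.

    Illustrative stand-in: the underlying project value follows geometric
    Brownian motion (an assumption, as are all the numbers), and the option
    to abandon the project for a fixed salvage value behaves like an
    American put. Continuation values are regressed on a quadratic
    polynomial of the project value.
    """
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths of the underlying project value.
    z = rng.standard_normal((n_steps, n_paths))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    v = v0 * np.exp(np.vstack([np.zeros(n_paths), np.cumsum(increments, axis=0)]))

    cashflow = np.maximum(strike - v[-1], 0.0)           # exercise value at maturity
    for t in range(n_steps - 1, 0, -1):
        cashflow *= disc                                 # discount to time t
        payoff = np.maximum(strike - v[t], 0.0)
        itm = payoff > 0.0
        if itm.sum() > 10:
            coeffs = np.polyfit(v[t, itm], cashflow[itm], 2)
            continuation = np.polyval(coeffs, v[t, itm])
            exercise = payoff[itm] > continuation
            idx = np.where(itm)[0][exercise]
            cashflow[idx] = payoff[idx]
    return float(np.mean(cashflow) * disc)               # discount from t=1 to t=0

print(lsm_american_put())
```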
28 pages, 2087 KB  
Article
The q-Deformed Lindley Distribution: Properties, Statistical Inference, and Applications
by Mahmoud M. El-Awady, Hanan Haj Ahmad, Yazan Rabaiah and Ahmed T. Ramadan
Mathematics 2026, 14(8), 1364; https://doi.org/10.3390/math14081364 - 18 Apr 2026
Abstract
This paper introduces a q-deformed extension of the Lindley distribution. This extension is obtained by replacing the classical exponential with the q-exponential function from Tsallis non-extensive statistical techniques. This transformation offers more control over the tail behavior of the distribution, providing a transition between exponential and power-law decay patterns. Such flexibility is particularly useful when modeling right-skewed data with excess kurtosis, where classical models may not adequately describe heavy-tailed and highly skewed data. We derive several key properties, including the quantile function, expressed by the Lambert–Tsallis function Wq, the raw and incomplete moments, the probability-weighted moments, and the Tsallis entropy. The distribution of order statistics is also investigated. For parameter estimation, we employ several frequentist methods and conduct extensive Monte Carlo simulation studies to assess and compare their performance. Finally, applications to real-world datasets show that the q-deformed Lindley model is practically superior and more flexible than the classical Lindley distribution and other well-known models. Full article
31 pages, 543 KB  
Article
Frequentist and Bayesian Predictive Inference for the Log-Logistic Distribution Under Progressive Type-II Censoring
by Ziteng Zhang and Wenhao Gui
Entropy 2026, 28(4), 466; https://doi.org/10.3390/e28040466 - 18 Apr 2026
Abstract
This paper investigates the prediction of unobserved future failure times for the heavy-tailed Log-Logistic distribution under Progressive Type-II censoring. We first develop point and interval estimates for the unknown parameters using both frequentist maximum likelihood and Bayesian approaches. For predicting future failures, we derive three distinct point predictors: the Best Unbiased Predictor (BUP), the Conditional Median Predictor (CMP), and the Bayesian Predictor (BP). Corresponding prediction intervals are constructed using frequentist pivotal quantities, Bayesian Equal-Tailed Intervals (ETIs), and Highest Posterior Density (HPD) methods. The Bayesian procedures are implemented via Markov chain Monte Carlo (MCMC) sampling. We evaluate the finite-sample performance of the proposed methodologies through a Monte Carlo simulation study and further validate them using two real-world datasets, namely bladder cancer remission times and guinea pig survival times. The numerical results indicate that the proposed BP, particularly under the empirical prior, provides the most accurate and stable overall performance for point prediction, while the frequentist predictors become less reliable in extreme heavy-tailed settings. For interval prediction, the Bayesian HPD method consistently outperforms the alternatives, substantially reducing interval lengths for right-skewed data while maintaining the nominal coverage probability. Full article
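A Monte Carlo study of this setting typically starts by simulating progressively Type-II censored samples; the sketch below uses the Balakrishnan-Sandhu uniform-transformation algorithm combined with the Log-Logistic quantile function. The distribution parameters and censoring scheme are hypothetical, not those used in the paper.

```python
import numpy as np

def progressive_type2_sample(n, scheme, alpha=2.0, beta=1.5, seed=5):
    """Simulate a progressively Type-II censored sample from a Log-Logistic law.

    Uses the Balakrishnan-Sandhu algorithm: uniform progressively censored
    order statistics are generated first and then pushed through the
    Log-Logistic quantile function Q(u) = alpha * (u / (1 - u))**(1 / beta).
    `scheme` is (R_1, ..., R_m) with sum(scheme) = n - m units withdrawn.
    """
    rng = np.random.default_rng(seed)
    scheme = np.asarray(scheme)
    m = len(scheme)
    assert scheme.sum() + m == n, "censoring scheme inconsistent with n"

    w = rng.uniform(size=m)
    # Exponent for V_i is i + R_m + R_{m-1} + ... + R_{m-i+1}.
    tail_sums = np.cumsum(scheme[::-1])                 # R_m, R_m + R_{m-1}, ...
    v = w ** (1.0 / (np.arange(1, m + 1) + tail_sums))
    u = 1.0 - np.cumprod(v[::-1])                       # U_1, ..., U_m
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)      # Log-Logistic quantile

# n = 30 units, observe m = 15 failures, remove one survivor at each failure.
print(progressive_type2_sample(n=30, scheme=[1] * 15))
```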
20 pages, 718 KB  
Article
Robustness of Energy Delivery and Economic Sensitivity in Onshore and Offshore Wind Power
by Fernando M. Camilo, Paulo J. Santos and Armando J. Pires
Energies 2026, 19(8), 1951; https://doi.org/10.3390/en19081951 - 17 Apr 2026
Abstract
The increasing penetration of wind generation requires performance evaluation methods that extend beyond average annual energy production. Temporal delivery characteristics, such as monthly dispersion and exposure to low-production periods, can influence both technical robustness and economic sensitivity. Building upon a previously developed probabilistic and entropy-based assessment framework, this study evaluates the robustness of delivery-oriented performance metrics for onshore and offshore wind units under parametric and economic uncertainty. Using high-resolution operational data from four wind units (three onshore and one offshore), the analysis incorporates percentile sensitivity, threshold variation in low-production exposure, bootstrap-based uncertainty intervals, and Monte Carlo simulation of economic inputs including CAPEX, operation and maintenance costs, and discount rate. The results indicate that variations in percentile definitions and stochastic economic assumptions modify absolute performance values but do not substantially alter the relative positioning between offshore and onshore units. Averaged over 2022–2024, the analyzed offshore unit exhibited a lower monthly energy dispersion coefficient (CVE = 0.255) than the analyzed onshore units (CVE = 0.368), corresponding to an approximate 30% reduction in relative variability. The offshore unit also showed lower mean low-production exposure (LPE = 0.526 versus 0.581 for onshore units) and consistently lower amplification of robustness-adjusted LCOE under conservative delivery assumptions. These results indicate that the analyzed offshore unit retains stronger delivery robustness and lower economic sensitivity across the tested parameter ranges. The proposed robustness-validation framework complements conventional yield-based assessments and provides additional insight for risk-aware evaluation of wind generation assets in renewable-dominated power systems. Full article
(This article belongs to the Special Issue Recent Innovations in Offshore Wind Energy)
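As a rough analogue of the economic-input Monte Carlo described in the abstract, the sketch below propagates hypothetical CAPEX, O&M and discount-rate distributions (not the paper's data) through the standard discounted LCOE formula; the bootstrap intervals and delivery-robustness metrics of the paper are not reproduced.

```python
import numpy as np

def lcoe_monte_carlo(annual_energy_mwh=35_000.0, lifetime=25,
                     n_samples=20_000, seed=6):
    """Monte Carlo distribution of LCOE under uncertain economic inputs.

    CAPEX, annual O&M cost, and discount rate are drawn from hypothetical
    distributions and propagated through the discounted LCOE formula:
    LCOE = (CAPEX + sum_t OPEX/(1+r)^t) / (sum_t E_t/(1+r)^t).
    """
    rng = np.random.default_rng(seed)
    capex = rng.normal(45e6, 5e6, n_samples)       # EUR, hypothetical
    opex = rng.normal(1.2e6, 0.2e6, n_samples)     # EUR/year, hypothetical
    rate = rng.uniform(0.04, 0.09, n_samples)      # discount rate

    years = np.arange(1, lifetime + 1)
    disc_sum = ((1.0 + rate[:, None]) ** -years[None, :]).sum(axis=1)
    return (capex + opex * disc_sum) / (annual_energy_mwh * disc_sum)

lcoe = lcoe_monte_carlo()
print(f"median {np.median(lcoe):.1f} EUR/MWh, "
      f"90% interval [{np.percentile(lcoe, 5):.1f}, {np.percentile(lcoe, 95):.1f}]")
```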
45 pages, 4863 KB  
Article
A Novel Version of the Arcsine–Rayleigh Distribution with Entropy Measures, Statistical Inference, and Applications
by Asmaa S. Al-Moisheer, Khalaf S. Sultan, Moustafa N. Mousa and Mahmoud M. M. Mansour
Entropy 2026, 28(4), 464; https://doi.org/10.3390/e28040464 - 17 Apr 2026
Abstract
This paper presents a new distribution on the unit interval, named the Unit Arcsine–Rayleigh distribution (UASRD), which results from an exponential transformation of the Arcsine–Rayleigh distribution. The suggested model is versatile and can be used to model bounded reliability and proportion data. Entropy-based measures are also studied to quantify the uncertainty and information content of the proposed model, further explaining its probabilistic nature and its potential applicability in information-theoretic and reliability tasks. These findings demonstrate the utility of the suggested model for the study of bounded data in the context of information theory. Basic statistical characteristics are derived, such as the cumulative distribution and density functions, quantile function, reliability and hazard functions, and ordinary moments. Parameters are estimated using maximum likelihood, maximum product spacing, and Bayesian approaches. The performance of the estimators is assessed by a Monte Carlo simulation study, and an application to real data demonstrates the utility of the proposed model for the analysis of bounded data. Full article