Search Results (4,293)

Search Parameters:
Keywords = Monte-Carlo models

23 pages, 3120 KiB  
Article
Bee Swarm Metropolis–Hastings Sampling for Bayesian Inference in the Ginzburg–Landau Equation
by Shucan Xia and Lipu Zhang
Algorithms 2025, 18(8), 476; https://doi.org/10.3390/a18080476 (registering DOI) - 2 Aug 2025
Abstract
To improve the sampling efficiency of Markov Chain Monte Carlo in complex parameter spaces, this paper proposes the BeeSwarm-MH algorithm, an adaptive sampling method that integrates a swarm-intelligence mechanism. The method combines global exploration by scout bees with local exploitation by worker bees. It employs multi-stage perturbation intensities and adaptive step-size tuning to enable efficient posterior sampling. Focusing on Bayesian inference for parameter estimation in the soliton solutions of the two-dimensional complex Ginzburg–Landau equation, we design a dedicated inference framework to systematically compare the performance of BeeSwarm-MH with the classical Metropolis–Hastings algorithm. Experimental results demonstrate that BeeSwarm-MH achieves comparable estimation accuracy while significantly reducing the required number of iterations and total computation time for convergence. Moreover, it exhibits superior global search capabilities and adaptive features, offering a practical approach for efficient Bayesian inference in complex physical models. Full article
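For orientation, the classical baseline the paper compares against can be sketched in a few lines: a textbook random-walk Metropolis–Hastings sampler. The target distribution, step size, and burn-in below are illustrative choices, not the paper's.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Compare in log space for numerical stability.
        if math.log(max(rng.random(), 1e-300)) < log_post(proposal) - log_post(x):
            x = proposal
        samples.append(x)
    return samples

# Standard-normal log-posterior; after burn-in the chain mean should be near 0.
chain = metropolis_hastings(lambda t: -0.5 * t * t, x0=3.0, n_samples=20_000)
post_mean = sum(chain[5_000:]) / len(chain[5_000:])
```

BeeSwarm-MH replaces the single chain above with scout/worker bees and adaptive step sizes, but the accept/reject core is the same.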
17 pages, 2439 KiB  
Article
Monte Carlo-Based VaR Estimation and Backtesting Under Basel III
by Yueming Cheng
Risks 2025, 13(8), 146; https://doi.org/10.3390/risks13080146 (registering DOI) - 1 Aug 2025
Abstract
Value-at-Risk (VaR) is a key metric widely applied in market risk assessment and regulatory compliance under the Basel III framework. This study compares two Monte Carlo-based VaR models using publicly available equity data: a return-based model calibrated to historical portfolio volatility, and a CAPM-style factor-based model that simulates risk via systematic factor exposures. The two models are applied to a technology-sector portfolio and evaluated under historical and rolling backtesting frameworks. Under the Basel III backtesting framework, both initially fall into the red zone, with 13 VaR violations. With rolling-window estimation, the return-based model shows modest improvement but remains in the red zone (11 exceptions), while the factor-based model reduces exceptions to eight, placing it into the yellow zone. These results demonstrate the advantages of incorporating factor structures for more stable exception behavior and improved regulatory performance. The proposed framework, fully transparent and reproducible, offers practical relevance for internal validation, educational use, and model benchmarking. Full article
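A minimal sketch of the two pieces the study combines: a Monte Carlo one-day VaR estimate and the Basel traffic-light zoning of backtesting exceptions (green for at most 4, yellow for 5–9, red for 10 or more over 250 days at 99% VaR). The normal return model and its parameters are illustrative assumptions, not the paper's calibration.

```python
import random

def mc_var(mu, sigma, alpha=0.99, n_sims=100_000, seed=1):
    """One-day VaR via Monte Carlo: simulate portfolio returns and take the
    loss at the (1 - alpha) quantile."""
    rng = random.Random(seed)
    sims = sorted(rng.gauss(mu, sigma) for _ in range(n_sims))
    return -sims[int((1 - alpha) * n_sims)]

def basel_zone(exceptions):
    """Basel traffic-light zones for 250 days of 99% VaR backtesting."""
    if exceptions <= 4:
        return "green"
    return "yellow" if exceptions <= 9 else "red"

var99 = mc_var(0.0, 0.02)  # for normal returns, roughly 2.326 * sigma
```

With this zoning, the abstract's 13 exceptions land in the red zone and 8 in the yellow zone.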
20 pages, 1457 KiB  
Article
A Semi-Random Elliptical Movement Model for Relay Nodes in Flying Ad Hoc Networks
by Hyeon Choe and Dongsu Kang
Telecom 2025, 6(3), 56; https://doi.org/10.3390/telecom6030056 (registering DOI) - 1 Aug 2025
Abstract
This study presents a semi-random mobility model called Semi-Random Elliptical Movement (SREM), developed for relay-oriented Flying Ad Hoc Networks (FANETs). In FANETs, node distribution has a major impact on network performance, making the mobility model a critical design element. While random models offer simplicity and path diversity, they often result in unstable relay paths due to inconsistent node placement. In contrast, planned path models provide alignment but lack the flexibility needed in dynamic environments. SREM addresses these challenges by enabling nodes to move along elliptical trajectories, combining autonomous movement with alignment to the relay path. This approach encourages natural node concentration along the relay path while maintaining distributed mobility. The spatial characteristics of SREM have been analytically defined and validated through the Monte Carlo method, confirming stable node distributions that support effective relaying. Computer simulation results show that SREM performs better than general mobility models that do not account for relaying, offering more suitable performance in relay-focused scenarios. These findings suggest that SREM provides both structural consistency and practical effectiveness, making it a strong candidate for improving the realism and reliability of FANET simulations involving relay-based communication. Full article
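The elliptical-trajectory idea can be illustrated directly; the semi-axes and the Monte Carlo spread check below are illustrative assumptions, not SREM's actual parameterization.

```python
import math
import random

def ellipse_position(phase, a, b, center=(0.0, 0.0)):
    """Node position on an elliptical trajectory with semi-axis a along the
    relay path and b across it."""
    cx, cy = center
    return (cx + a * math.cos(phase), cy + b * math.sin(phase))

# Monte Carlo check of the stationary spread: with b << a, nodes concentrate
# near the relay axis (small lateral offset |y|).
rng = random.Random(0)
ys = [abs(ellipse_position(rng.uniform(0.0, 2.0 * math.pi), a=100.0, b=10.0)[1])
      for _ in range(10_000)]
mean_abs_y = sum(ys) / len(ys)  # analytically b * 2 / pi, about 6.37 here
```

Shrinking `b` relative to `a` is what trades path diversity for concentration along the relay path.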
16 pages, 3282 KiB  
Article
First-Principles Study on Periodic Pt2Fe Alloy Surface Models for Highly Efficient CO Poisoning Resistance
by Junmei Wang, Qingkun Tian, Harry E. Ruda, Li Chen, Maoyou Yang and Yujun Song
Nanomaterials 2025, 15(15), 1185; https://doi.org/10.3390/nano15151185 (registering DOI) - 1 Aug 2025
Abstract
Surface and sub-surface atomic configurations are critical for catalysis as they host the active sites governing electrochemical processes. This study employs density functional theory (DFT) calculations and Monte Carlo simulations combined with the cluster-expansion approach to investigate atom distribution and Pt segregation in Pt-Fe alloys across varying Pt/Fe ratios. Our simulations reveal a strong tendency for Pt atoms to segregate to the surface layer while Fe atoms enrich the sub-surface region. Crucially, the calculations predict the stability of a periodic Pt2Fe alloy surface model, characterized by specific defect structures, at low platinum content and low annealing temperatures. Electronic structure analysis indicates that forming this Pt2Fe surface alloy lowers the d-band center of Pt atoms, weakening CO adsorption and thereby enhancing resistance to CO poisoning. Although defect-induced strains can modulate the d-band center, crystal orbital Hamilton population (COHP) analysis confirms that such strains generally strengthen Pt-CO interactions. Therefore, the theoretical design of Pt2Fe alloy surfaces and controlling defect density are predicted to be effective strategies for enhancing catalyst resistance to CO poisoning. This work highlights the advantages of periodic Pt2Fe surface models for anti-CO poisoning and provides computational guidance for designing efficient Pt-based electrocatalysts. Full article
(This article belongs to the Section Theory and Simulation of Nanostructures)
21 pages, 670 KiB  
Article
I-fp Convergence in Fuzzy Paranormed Spaces and Its Application to Robust Base-Stock Policies with Triangular Fuzzy Demand
by Muhammed Recai Türkmen and Hasan Öğünmez
Mathematics 2025, 13(15), 2478; https://doi.org/10.3390/math13152478 - 1 Aug 2025
Abstract
We introduce I-fp convergence (ideal convergence in fuzzy paranormed spaces) and develop its core theory, including stability results and an equivalence to I*-fp convergence under the AP Property. Building on this foundation, we design an adaptive base-stock policy for a single-echelon inventory system in which weekly demand is expressed as triangular fuzzy numbers while holiday or promotion weeks are treated as ideal-small anomalies. The policy is updated by a simple learning rule that can be implemented in any spreadsheet, requires no optimisation software, and remains insensitive to tuning choices. Extensive simulation confirms that the method simultaneously lowers cost, reduces average inventory and raises service level relative to a crisp benchmark, all while filtering sparse demand spikes in a principled way. These findings position I-fp convergence as a lightweight yet rigorous tool for blending linguistic uncertainty with anomaly-aware decision making in supply-chain analytics. Full article
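A spreadsheet-friendly update of this kind can be sketched as follows; the centroid defuzzification and the exponential-smoothing learning rule here are generic stand-ins, not the paper's exact policy.

```python
def defuzzify_triangular(low, mode, high):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    return (low + mode + high) / 3.0

def update_base_stock(level, observed_demand, eta=0.1):
    """Move the order-up-to level a fraction eta toward the latest
    defuzzified demand -- a simple learning rule."""
    return level + eta * (observed_demand - level)

level = 100.0
weekly_demand = [(80, 100, 130), (90, 110, 140), (70, 95, 120)]  # fuzzy weeks
for low, mode, high in weekly_demand:
    level = update_base_stock(level, defuzzify_triangular(low, mode, high))
```

In the paper's setting, weeks flagged as ideal-small anomalies (holidays, promotions) would simply be skipped by the update.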
17 pages, 13918 KiB  
Article
Occurrence State and Controlling Factors of Methane in Deep Marine Shale: A Case Study from Silurian Longmaxi Formation in Sichuan Basin, SW China
by Junwei Pu, Tongtong Luo, Yalan Li, Hongwei Jiang and Lin Qi
Minerals 2025, 15(8), 820; https://doi.org/10.3390/min15080820 (registering DOI) - 1 Aug 2025
Abstract
Deep marine shale is the primary carrier of shale gas resources in Southwestern China. Because the occurrence and gas content of methane vary with burial conditions, understanding the microscopic mechanism of methane occurrence in deep marine shale is critical for effective shale gas exploitation. The temperature and pressure conditions in deep shale exceed the operating limits of experimental equipment; thus, few studies have discussed the microscopic occurrence mechanism of shale gas in deep marine shale. This study applies molecular simulation technology to reveal methane's microscopic occurrence mechanism, particularly the main controlling factor of adsorbed methane in deep marine shale. Two types of simulation models are also proposed. The Grand Canonical Monte Carlo (GCMC) method is used to simulate the adsorption behavior of methane molecules in these two models. The results indicate that the isosteric adsorption heat of methane in both models is below 42 kJ/mol, suggesting that methane adsorption in deep shale is physical adsorption. Adsorbed methane concentrates on the pore wall surface and forms a double-layer adsorption. Furthermore, adsorbed methane can transition to single-layer adsorption if the pore size is less than 1.6 nm. The total adsorption capacity increases with rising pressure, although the growth rate decreases. Excess adsorption capacity is highly sensitive to pressure and can become negative at high pressures. Methane adsorption capacity is determined by pore size and adsorption potential, while accommodation space and adsorption potential are influenced by pore size and mineral type. Under deep marine shale reservoir burial conditions, as burial depth increases, the effect of temperature on shale gas occurrence is weaker than that of pressure. Higher temperatures inhibit shale gas occurrence, and high pressure enhances shale gas preservation. Smaller pores facilitate the occurrence of adsorbed methane, and larger pores have larger total methane adsorption capacity. Deep marine shale with high formation pressure and high clay mineral content is conducive to the microscopic accumulation of shale gas in deep marine shale reservoirs. This study discusses the microscopic occurrence state of deep marine shale gas and provides a reference for the exploration and development of deep shale gas. Full article
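The negative excess adsorption at high pressure follows directly from the Gibbs definition: excess uptake is absolute uptake minus the gas that would occupy the pore volume at bulk density. A toy isotherm illustrates this; the Langmuir form and all parameters are illustrative, not fitted to the GCMC results.

```python
def langmuir_absolute(p, qmax=10.0, b=0.2):
    """Langmuir isotherm for absolute (total) uptake at pressure p."""
    return qmax * b * p / (1.0 + b * p)

def gibbs_excess(absolute, pore_volume, bulk_density):
    """Gibbs excess adsorption: absolute uptake minus the gas that would fill
    the pore volume at bulk density; it can turn negative at high pressure."""
    return absolute - pore_volume * bulk_density

# Toy equation of state: bulk density rising linearly with pressure.
excess = [gibbs_excess(langmuir_absolute(p), pore_volume=0.3,
                       bulk_density=0.4 * p) for p in range(1, 101)]
```

Absolute uptake saturates while the bulk-gas term keeps growing, so the excess curve peaks and then crosses zero.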
(This article belongs to the Special Issue Element Enrichment and Gas Accumulation in Black Rock Series)
26 pages, 8736 KiB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 (registering DOI) - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
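The Monte Carlo dropout decomposition mentioned above is commonly computed as below; the stochastic "forward passes" here are simulated numbers standing in for a dropout-enabled network.

```python
import random

def decompose_uncertainty(passes):
    """Each stochastic forward pass yields (predicted_mean, predicted_var).
    Epistemic = variance of the means across passes (model uncertainty);
    aleatoric = average predicted variance (irreducible data noise)."""
    means = [m for m, _ in passes]
    mu = sum(means) / len(means)
    epistemic = sum((m - mu) ** 2 for m in means) / len(means)
    aleatoric = sum(v for _, v in passes) / len(passes)
    return epistemic, aleatoric

# Simulated dropout passes: mean jitter of std 0.3, fixed noise estimate 0.05.
rng = random.Random(0)
passes = [(rng.gauss(1.0, 0.3), 0.05) for _ in range(2_000)]
epistemic, aleatoric = decompose_uncertainty(passes)
```

The paper's observation that fault severity raises both components corresponds to larger predicted variances (aleatoric) and wider disagreement between passes (epistemic).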
(This article belongs to the Section Machines Testing and Maintenance)
48 pages, 2506 KiB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
18 pages, 1738 KiB  
Article
Extreme Wind Speed Prediction Based on a Typhoon Straight-Line Path Model and the Monte Carlo Simulation Method: A Case for Guangzhou
by Zhike Lu, Xinrui Zhang, Junling Hong and Wanhai Xu
Appl. Sci. 2025, 15(15), 8486; https://doi.org/10.3390/app15158486 (registering DOI) - 31 Jul 2025
Abstract
The southeastern coastal region of China has long been affected by typhoon disasters, which pose significant threats to the safety of offshore structures. Therefore, predicting extreme wind speeds corresponding to various return periods on the basis of limited typhoon samples is particularly important for wind-resistant design. This study systematically predicts extreme typhoon wind speeds for various return periods and quantitatively assesses the sensitivity of key parameters by employing a Monte Carlo stochastic simulation framework integrated with a typhoon straight-line trajectory model and the Yan Meng wind field model. Focusing on Guangzhou (23.13° N, 113.28° E), a representative coastal city in southeastern China, this research establishes a modular analytical framework that provides generalizable solutions for typhoon disaster assessment in coastal regions. The probabilistic wind load data generated by this framework significantly improves the cost-effectiveness and safety of wind-resistant structural design. Full article
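The return-period logic can be illustrated with an empirical quantile over simulated annual maxima; the Gaussian parent distribution and sample counts below are placeholders, not the paper's typhoon wind-field model.

```python
import random

def return_level(annual_maxima, T):
    """Empirical T-year return level: the (1 - 1/T) quantile of simulated
    annual maximum wind speeds."""
    xs = sorted(annual_maxima)
    idx = min(len(xs) - 1, int((1.0 - 1.0 / T) * len(xs)))
    return xs[idx]

# Placeholder simulation: each year's maximum over 20 storm wind speeds (m/s).
rng = random.Random(0)
maxima = [max(rng.gauss(30.0, 5.0) for _ in range(20)) for _ in range(5_000)]
v50 = return_level(maxima, 50)  # 50-year return wind speed
v10 = return_level(maxima, 10)
```

In the paper, the annual maxima come from synthetic typhoon tracks and the Yan Meng wind field rather than a simple parametric draw, but the quantile step is the same.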
(This article belongs to the Special Issue Transportation and Infrastructures Under Extreme Weather Conditions)
24 pages, 3980 KiB  
Article
A Two-Stage Restoration Method for Distribution Networks Considering Generator Start-Up and Load Recovery Under an Earthquake Disaster
by Lin Peng, Aihua Zhou, Junfeng Qiao, Qinghe Sun, Zhonghao Qian, Min Xu and Sen Pan
Electronics 2025, 14(15), 3049; https://doi.org/10.3390/electronics14153049 - 30 Jul 2025
Abstract
Earthquakes can severely disrupt power distribution networks, causing extensive outages and disconnection from the transmission grid. This paper proposes a two-stage restoration method tailored for post-earthquake distribution systems. First, earthquake-induced damage is modeled using ground motion prediction equations (GMPEs) and fragility curves, and degraded network topologies are generated by Monte Carlo simulation. Then, a time-domain generator start-up model is developed as a mixed-integer linear program (MILP), incorporating cranking power and radial topology constraints. Further, a prioritized load recovery model is formulated as a mixed-integer second-order cone program (MISOCP), integrating power flow, voltage, and current constraints. Finally, case studies demonstrate the effectiveness and general applicability of the proposed method, confirming its capability to support resilient and adaptive distribution network restoration under various earthquake scenarios. Full article
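The first stage, sampling damaged topologies from fragility curves, can be sketched as follows; the lognormal fragility parameters and per-line intensities are illustrative assumptions.

```python
import math
import random

def fragility(pga, median=0.5, beta=0.6):
    """Lognormal fragility curve: probability of component damage at peak
    ground acceleration `pga` (in g). Parameters are illustrative."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median) / (beta * math.sqrt(2.0))))

def sample_damaged_lines(line_pgas, rng):
    """One Monte Carlo topology draw: each line fails independently with the
    probability given by its fragility curve."""
    return [i for i, pga in enumerate(line_pgas) if rng.random() < fragility(pga)]

rng = random.Random(0)
draws = [sample_damaged_lines([0.2, 0.5, 1.0], rng) for _ in range(10_000)]
failure_rate_line2 = sum(2 in d for d in draws) / len(draws)
```

Each draw yields one degraded network topology, which then feeds the MILP start-up and MISOCP load-recovery stages.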
23 pages, 3453 KiB  
Article
Robust Peak Detection Techniques for Harmonic FMCW Radar Systems: Algorithmic Comparison and FPGA Feasibility Under Phase Noise
by Ahmed El-Awamry, Feng Zheng, Thomas Kaiser and Maher Khaliel
Signals 2025, 6(3), 36; https://doi.org/10.3390/signals6030036 - 30 Jul 2025
Abstract
Accurate peak detection in the frequency domain is fundamental to reliable range estimation in Frequency-Modulated Continuous-Wave (FMCW) radar systems, particularly in challenging conditions characterized by a low signal-to-noise ratio (SNR) and phase noise impairments. This paper presents a comprehensive comparative analysis of five peak detection algorithms: FFT thresholding, Cell-Averaging Constant False Alarm Rate (CA-CFAR), a simplified Matrix Pencil Method (MPM), SVD-based detection, and a novel Learned Thresholded Subspace Projection (LTSP) approach. The proposed LTSP method leverages singular value decomposition (SVD) to extract the dominant signal subspace, followed by signal reconstruction and spectral peak analysis, enabling robust detection in noisy and spectrally distorted environments. Each technique was analytically modeled and extensively evaluated through Monte Carlo simulations across a wide range of SNRs and oscillator phase noise levels, from 100 dBc/Hz to 70 dBc/Hz. Additionally, real-world validation was performed using a custom-built harmonic FMCW radar prototype operating in the 2.4–2.5 GHz transmission band and 4.8–5.0 GHz harmonic reception band. Results show that CA-CFAR offers the highest resilience to phase noise, while the proposed LTSP method delivers competitive detection performance with improved robustness over conventional FFT and MPM techniques. Furthermore, the hardware feasibility of each algorithm is assessed for implementation on a Xilinx FPGA platform, highlighting practical trade-offs between detection performance, computational complexity, and resource utilization. These findings provide valuable guidance for the design of real-time, embedded FMCW radar systems operating under adverse conditions. Full article
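Of the five detectors compared, CA-CFAR is compact enough to sketch directly; the window sizes and threshold scale below are typical illustrative values, not the paper's configuration.

```python
def ca_cfar(spectrum, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag a cell as a detection when it exceeds
    `scale` times the mean of its training cells (guard cells excluded)."""
    n = len(spectrum)
    detections = []
    for i in range(n):
        window = range(max(0, i - guard - train), min(n, i + guard + train + 1))
        train_cells = [spectrum[j] for j in window if abs(j - i) > guard]
        noise_level = sum(train_cells) / len(train_cells)
        if spectrum[i] > scale * noise_level:
            detections.append(i)
    return detections

# Flat noise floor with one strong beat-frequency peak at bin 30.
spectrum = [1.0] * 64
spectrum[30] = 12.0
```

The guard cells keep the peak's own energy out of the noise estimate, which is what keeps the false-alarm rate constant as the floor varies.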
23 pages, 1652 KiB  
Article
Case Study on Emissions Abatement Strategies for Aging Cruise Vessels: Environmental and Economic Comparison of Scrubbers and Low-Sulphur Fuels
by Luis Alfonso Díaz-Secades, Luís Baptista and Sandrina Pereira
J. Mar. Sci. Eng. 2025, 13(8), 1454; https://doi.org/10.3390/jmse13081454 - 30 Jul 2025
Abstract
The maritime sector is undergoing rapid transformation, driven by increasingly stringent international regulations targeting air pollution. While newly built vessels integrate advanced technologies for compliance, the global fleet averages 21.8 years of age and must meet emission requirements through retrofitting or operational changes. This study evaluates, at environmental and economic levels, two key sulphur abatement strategies for a 1998-built cruise vessel nearing the end of its service life: (i) the installation of open-loop scrubbers with fuel enhancement devices, and (ii) a switch to marine diesel oil as main fuel. The analysis was based on real operational data from a cruise vessel. For the environmental assessment, a Tier III hybrid emissions model was used. The results show that scrubbers reduce SOx emissions by approximately 97% but increase fuel consumption by 3.6%, raising both CO2 and NOx emissions, while particulate matter decreases by only 6.7%. In contrast, switching to MDO achieves over 99% SOx reduction, an 89% drop in particulate matter, and a nearly 5% reduction in CO2 emissions. At an economic level, it was found that, despite a CAPEX of nearly USD 1.9 million, scrubber installation provides an average annual net saving exceeding USD 8.2 million. From the deterministic and probabilistic analyses performed, including Monte Carlo simulations under various fuel price correlation scenarios, scrubber installation consistently shows high profitability, with NPVs surpassing USD 70 million and payback periods under four months. Full article
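The economic appraisal can be sketched as a Monte Carlo over NPV; only the CAPEX and mean-savings scale echo the abstract, while the discount rate, horizon, and savings volatility are assumptions.

```python
import random

def npv(capex, annual_savings, rate=0.08):
    """Net present value: upfront CAPEX against discounted annual savings."""
    return -capex + sum(s / (1.0 + rate) ** (t + 1)
                        for t, s in enumerate(annual_savings))

# Monte Carlo over uncertain fuel-price-driven savings (10-year horizon,
# CAPEX ~USD 1.9M and mean savings ~USD 8.2M/yr as in the abstract).
rng = random.Random(0)
npvs = [npv(1.9e6, [rng.gauss(8.2e6, 1.5e6) for _ in range(10)])
        for _ in range(2_000)]
mean_npv = sum(npvs) / len(npvs)
loss_prob = sum(v < 0 for v in npvs) / len(npvs)
```

With savings an order of magnitude above CAPEX, the NPV distribution sits far from zero, which is why the study reports payback periods of months rather than years.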
(This article belongs to the Special Issue Sustainable and Efficient Maritime Operations)
22 pages, 4895 KiB  
Article
Machine Learning-Assisted Secure Random Communication System
by Areeb Ahmed and Zoran Bosnić
Entropy 2025, 27(8), 815; https://doi.org/10.3390/e27080815 - 29 Jul 2025
Abstract
Machine learning techniques have revolutionized physical layer security (PLS) and provided opportunities for optimizing the performance and security of modern communication systems. In this study, we propose the first machine learning-assisted random communication system (ML-RCS). It comprises a pretrained decision tree (DT)-based receiver that extracts binary information from the transmitted random noise carrier signals. The ML-RCS employs skewed alpha-stable (α-stable) noise as a random carrier to encode the incoming binary bits securely. The DT model is pretrained on an extensively developed dataset encompassing all the selected parameter combinations to generate and detect the α-stable noise signals. The legitimate receiver leverages the pretrained DT and a predetermined key, specifically the pulse length of a single binary information bit, to securely decode the hidden binary bits. The performance evaluations included single-bit transmission, confusion matrices, and a bit error rate (BER) analysis via Monte Carlo simulations. A BER of 10⁻³ confirms the ability of the proposed system to establish successful secure communication between a transmitter and a legitimate receiver. Additionally, the ML-RCS provides an increased data rate compared to previous random communication systems. From a security perspective, the confusion matrices and the computed false negative rate of 50.2% demonstrate the failure of an eavesdropper to decode the binary bits without access to the predetermined key and the private dataset. These findings highlight the potential of unconventional ML-RCSs to promote the development of secure next-generation communication devices with built-in PLS. Full article
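For context on the BER evaluation, here is a plain Monte Carlo BER estimate for BPSK over AWGN, a textbook stand-in; the paper's α-stable noise carrier and DT receiver are not reproduced.

```python
import random

def ber_bpsk(ebn0_db, n_bits=200_000, seed=0):
    """Monte Carlo bit-error rate for BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = random.Random(seed)
    sigma = (0.5 / 10.0 ** (ebn0_db / 10.0)) ** 0.5  # noise std per dimension
    errors = 0
    for _ in range(n_bits):
        symbol = rng.choice([1.0, -1.0])  # BPSK mapping of a random bit
        received = symbol + rng.gauss(0.0, sigma)  # AWGN channel
        errors += (received < 0.0) != (symbol < 0.0)
    return errors / n_bits

ber_7db = ber_bpsk(7.0)  # theory: Q(sqrt(2 * Eb/N0)), on the order of 1e-3
```

Counting errors over many random bits is the same Monte Carlo procedure used to reach the 10⁻³ figure in the abstract, just with a conventional modulation.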
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
13 pages, 600 KiB  
Article
Frequentist and Bayesian Estimation Under Progressive Type-II Random Censoring for a Two-Parameter Exponential Distribution
by Rajni Goel, Mahmoud M. Abdelwahab and Tejaswar Kamble
Symmetry 2025, 17(8), 1205; https://doi.org/10.3390/sym17081205 - 29 Jul 2025
Abstract
In medical research, random censoring often occurs due to unforeseen subject withdrawals, whereas progressive censoring is intentionally applied to minimize time and resource requirements during experimentation. This work focuses on estimating the parameters of a two-parameter exponential distribution under a progressive Type-II random censoring scheme, which integrates both censoring strategies. The use of symmetric properties in failure and censoring time models, arising from a shared location parameter, facilitates a balanced and robust inferential framework. This symmetry ensures interpretational clarity and enhances the tractability of both frequentist and Bayesian methods. Maximum likelihood estimators (MLEs) are obtained, along with asymptotic confidence intervals. A Bayesian approach is also introduced, utilizing inverse gamma priors, and Gibbs sampling is implemented to derive Bayesian estimates. The effectiveness of the proposed methodologies was assessed through extensive Monte Carlo simulations and demonstrated using an actual dataset. Full article
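For the complete-sample case, the two-parameter exponential admits closed-form MLEs (location = sample minimum, scale = mean excess over it); the paper's censored-sample estimators generalize these. A quick sketch with simulated data (true location 2, true scale 3):

```python
import random

def two_param_exp_mle(sample):
    """Closed-form MLEs for the two-parameter exponential from a complete
    (uncensored) sample: location = min(x), scale = mean of (x - location)."""
    location = min(sample)
    scale = sum(x - location for x in sample) / len(sample)
    return location, scale

rng = random.Random(0)
data = [2.0 + rng.expovariate(1.0 / 3.0) for _ in range(5_000)]
mu_hat, theta_hat = two_param_exp_mle(data)
```

Under progressive Type-II random censoring, the likelihood gains censoring terms and these estimators lose their simple form, which is what motivates the paper's asymptotic intervals and Gibbs sampling.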
(This article belongs to the Section Mathematics)
18 pages, 2954 KiB  
Article
A Multi-Objective Decision-Making Method for Optimal Scheduling Operating Points in Integrated Main-Distribution Networks with Static Security Region Constraints
by Kang Xu, Zhaopeng Liu and Shuaihu Li
Energies 2025, 18(15), 4018; https://doi.org/10.3390/en18154018 - 28 Jul 2025
Abstract
With the increasing penetration of distributed generation (DG), integrated main-distribution networks (IMDNs) face challenges in rapidly and effectively performing comprehensive operational risk assessments under multiple uncertainties. As a result, the traditional hierarchical economic scheduling method struggles to accurately locate the optimal scheduling operating point. To address this problem, this paper proposes a multi-objective dispatch decision-making optimization model for the IMDN with static security region (SSR) constraints. Firstly, non-sequential Monte Carlo sampling is employed to generate diverse operational scenarios, and then the key risk characteristics are extracted to construct the risk assessment index system for the transmission and distribution grids, respectively. Secondly, a hyperplane model of the SSR is developed for the IMDN based on alternating current power flow equations and line current constraints. Thirdly, a risk assessment matrix is constructed through optimal power flow calculations across multiple load levels, with the index weights determined via principal component analysis (PCA). Subsequently, a scheduling optimization model is formulated to minimize both the system generation costs and the comprehensive risk, where the adaptive grid density-improved multi-objective particle swarm optimization (AG-MOPSO) algorithm is employed to efficiently generate Pareto-optimal operating point solutions. A membership matrix of the solution set is then established using fuzzy comprehensive evaluation to identify the optimal compromised operating point for dispatch decision support. Finally, the effectiveness and superiority of the proposed method are validated using an integrated IEEE 9-bus and IEEE 33-bus test system. Full article
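The Pareto-optimal operating points at the heart of the decision step are simply the non-dominated cost/risk pairs; a minimal dominance filter illustrates this (toy numbers, not AG-MOPSO).

```python
def pareto_front(points):
    """Non-dominated points for a minimization problem: p is kept unless some
    q is <= p in every objective and strictly < p in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate operating points as (generation cost, comprehensive risk).
candidates = [(100, 0.9), (120, 0.5), (110, 0.7), (130, 0.6), (105, 0.95)]
front = pareto_front(candidates)
```

AG-MOPSO searches for such a front efficiently in a high-dimensional dispatch space; the fuzzy membership matrix then picks one compromise point from it.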