Search Results (412)

Search Parameters:
Keywords = physical-stochastic models

21 pages, 538 KB  
Article
Evaluation of GPU-Accelerated Edge Platforms for Stochastic Simulations: Performance and Energy Efficiency Analysis
by Pilsung Kang
Mathematics 2025, 13(20), 3305; https://doi.org/10.3390/math13203305 - 16 Oct 2025
Abstract
With the increasing emphasis on energy-efficient computing, edge devices accelerated by graphics processing units (GPUs) are gaining attention for their potential in scientific workloads. These platforms support compute-intensive simulations under strict energy and resource constraints, yet their computational efficiency across architectures remains an open question. This study evaluates the performance of GPU-based edge platforms for executing the stochastic simulation algorithm (SSA), a widely used and inherently compute-intensive method for modeling biochemical and physical systems. Execution time, floating point throughput, and the trade-offs between cost and power consumption are analyzed, with a focus on how variations in core count, clock speed, and architectural features impact SSA scalability. Experimental results show that the Jetson Orin NX consistently outperforms Xavier NX and Orin Nano in both speed and efficiency, reaching up to 4.86 million iterations per second while operating under a 20 W power envelope. At the largest workload scale, it achieves 2102.7 ms/W in energy efficiency and 105.3 ms/USD in cost-performance, substantially better than the other Jetson devices. These findings highlight the architectural considerations necessary for selecting edge GPUs for scientific computing and offer practical guidance for deploying compute-intensive workloads beyond artificial intelligence (AI) applications.
(This article belongs to the Special Issue Advances in High-Performance Computing, Optimization and Simulation)
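The SSA benchmarked in this study is, at its core, a tight loop of propensity updates and exponentially distributed waiting times. A minimal single-species sketch of Gillespie's direct method, assuming a placeholder decay reaction and rate (the paper ports this kind of kernel to Jetson GPUs; none of the model details below come from it):

```python
import math
import random

def gillespie_decay(x0, k, t_end, seed=0):
    """Direct-method SSA for a single decay reaction X -> 0 with
    propensity a(x) = k*x. Illustrative placeholder model only."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end and x > 0:
        a = k * x                                # total propensity
        t += -math.log(1.0 - rng.random()) / a   # exponential waiting time
        x -= 1                                   # fire the only reaction
        times.append(t)
        states.append(x)
    return times, states

times, states = gillespie_decay(x0=1000, k=0.5, t_end=10.0)
```

Because each step draws a random waiting time from the current total propensity, millions of such iterations per second is the throughput figure the paper reports for edge GPUs.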

22 pages, 4825 KB  
Article
Multidimensional Visualization and AI-Driven Prediction Using Clinical and Biochemical Biomarkers in Premature Cardiovascular Aging
by Kuat Abzaliyev, Madina Suleimenova, Symbat Abzaliyeva, Madina Mansurova, Adai Shomanov, Akbota Bugibayeva, Arai Tolemisova, Almagul Kurmanova and Nargiz Nassyrova
Biomedicines 2025, 13(10), 2482; https://doi.org/10.3390/biomedicines13102482 - 12 Oct 2025
Abstract
Background: Cardiovascular diseases (CVDs) remain the primary cause of global mortality, with arterial hypertension, ischemic heart disease (IHD), and cerebrovascular accident (CVA) forming a progressive continuum from early risk factors to severe outcomes. While numerous studies focus on isolated biomarkers, few integrate multidimensional visualization with artificial intelligence to reveal hidden, clinically relevant patterns. Methods: We conducted a comprehensive analysis of 106 patients using an integrated framework that combined clinical, biochemical, and lifestyle data. Parameters included renal function (glomerular filtration rate, cystatin C), inflammatory markers, lipid profile, enzymatic activity, and behavioral factors. After normalization and imputation, we applied correlation analysis, parallel coordinates visualization, t-distributed stochastic neighbor embedding (t-SNE) with k-means clustering, principal component analysis (PCA), and Random Forest modeling with SHAP (SHapley Additive exPlanations) interpretation. Bootstrap resampling was used to estimate 95% confidence intervals for mean absolute SHAP values, assessing feature stability. Results: Consistent patterns across outcomes revealed impaired renal function, reduced physical activity, and high hypertension prevalence in IHD and CVA. t-SNE clustering achieved complete separation of a high-risk group (100% CVD-positive) from a predominantly low-risk group (7.8% CVD rate), demonstrating unsupervised validation of biomarker discriminative power. PCA confirmed multidimensional structure, while Random Forest identified renal function, hypertension status, and physical activity as dominant predictors, achieving robust performance (Accuracy 0.818; AUC-ROC 0.854). SHAP analysis identified arterial hypertension, BMI, and physical inactivity as dominant predictors, complemented by renal biomarkers (GFR, cystatin C) and NT-proBNP.
Conclusions: This study pioneers the integration of multidimensional visualization and AI-driven analysis for CVD risk profiling, enabling interpretable, data-driven identification of high- and low-risk clusters. Despite the limited single-center cohort (n = 106) and cross-sectional design, the findings highlight the potential of interpretable models for precision prevention and transparent decision support in cardiovascular aging research.
(This article belongs to the Section Molecular and Translational Medicine)
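The unsupervised separation step (t-SNE embedding followed by k-means) can be sketched on synthetic data. Everything below, the two-group toy "biomarker" table and the cluster count, is a hypothetical stand-in, not the 106-patient cohort:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical stand-in for the patient biomarker table: two latent
# risk groups separated along eight synthetic "biomarker" axes.
low_risk = rng.normal(0.0, 1.0, size=(60, 8))
high_risk = rng.normal(3.0, 1.0, size=(46, 8))
X = np.vstack([low_risk, high_risk])

# Embed to 2-D, then cluster the embedding, mirroring the paper's pipeline.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
```

When the groups are genuinely separable in feature space, the clusters recovered from the embedding align with them, which is the sense in which the paper's clustering "validates" biomarker discriminative power.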

29 pages, 3821 KB  
Article
Mathematical Framework for Digital Risk Twins in Safety-Critical Systems
by Igor Kabashkin
Mathematics 2025, 13(19), 3222; https://doi.org/10.3390/math13193222 - 8 Oct 2025
Abstract
This paper introduces a formal mathematical framework for Digital Risk Twins (DRTs) as an extension of traditional digital twin (DT) architectures, explicitly tailored to the needs of safety-critical systems. While conventional DTs enable real-time monitoring and simulation of physical assets, they often lack structured mechanisms to model stochastic failure processes; evaluate dynamic risk; or support resilient, risk-aware decision-making. The proposed DRT framework addresses these limitations by embedding probabilistic hazard modeling, reliability theory, and coherent risk measures into a modular and mathematically interpretable structure. The DT to DRT transformation is formalized as a composition of operators that project system trajectories onto risk-relevant features, compute failure intensities, and evaluate risk metrics under uncertainty. The framework supports layered integration of simulation, feature extraction, hazard dynamics, and decision-oriented evaluation, providing traceability, scalability, and explainability. Its utility is demonstrated through a case study involving an aircraft brake system, showcasing early warning detection, inspection schedule optimization, and visual risk interpretation. The results confirm that the DRT enables modular, explainable, and domain-agnostic integration of reliability logic into digital twin systems, enhancing their value in safety-critical applications.
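The failure-intensity and risk-metric layer of such a framework can be sketched with a Weibull wear-out law. The parameters below are placeholders, not values from the aircraft-brake case study, and the paper's operator composition is considerably richer than this:

```python
import math

def weibull_hazard(t, beta, eta):
    """Instantaneous failure intensity h(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def survival(t, beta, eta):
    """Reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical brake-wear parameters (not taken from the case study):
beta, eta = 2.5, 1200.0          # shape > 1 => increasing (wear-out) hazard
risk_now = 1.0 - survival(600.0, beta, eta)      # failure prob. by 600 h
alert = weibull_hazard(600.0, beta, eta) > 1e-3  # toy early-warning rule
```

An increasing hazard is what makes early warning and inspection-schedule optimization meaningful: the risk metric grows with operating hours rather than staying constant.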

19 pages, 1035 KB  
Article
Spectral Bounds and Exit Times for a Stochastic Model of Corruption
by José Villa-Morales
Math. Comput. Appl. 2025, 30(5), 111; https://doi.org/10.3390/mca30050111 - 8 Oct 2025
Abstract
We study a stochastic differential model for the dynamics of institutional corruption, extending a deterministic three-variable system—corruption perception, proportion of sanctioned acts, and policy laxity—by incorporating Gaussian perturbations into key parameters. We prove global existence and uniqueness of solutions in the physically relevant domain, and we analyze the linearization around the asymptotically stable equilibrium of the deterministic system. Explicit mean square bounds for the linearized process are derived in terms of the spectral properties of a symmetric matrix, providing insight into the temporal validity of the linear approximation. To investigate global behavior, we relate the first exit time from the domain of interest to backward Kolmogorov equations and numerically solve the associated elliptic and parabolic PDEs with FreeFEM, obtaining estimates of expectations and survival probabilities. An application to the case of Mexico highlights nontrivial effects: while the spectral structure governs local stability, institutional volatility can non-monotonically accelerate global exit, showing that highly reactive interventions without effective sanctions increase uncertainty. Policy implications and possible extensions are discussed.
(This article belongs to the Section Social Sciences)
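The first-exit-time quantities that the paper computes via backward Kolmogorov PDEs can also be estimated by brute-force Monte Carlo. A scalar Euler–Maruyama sketch, where the drift, domain, and parameters are illustrative stand-ins for the three-variable corruption model:

```python
import math
import random

def mean_exit_time(x0, mu, sigma, dt=0.01, n_paths=300, t_max=20.0, seed=1):
    """Monte Carlo estimate of the mean first-exit time from (0, 1) for
    dX = mu*(0.5 - X) dt + sigma dW, via Euler-Maruyama. A scalar stand-in
    for the paper's three-variable Kolmogorov-PDE computations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while 0.0 < x < 1.0 and t < t_max:
            x += mu * (0.5 - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths

# In this toy model, larger volatility shortens the expected exit time.
fast = mean_exit_time(0.5, 1.0, 0.8)
slow = mean_exit_time(0.5, 1.0, 0.3)
```

The PDE route the paper takes with FreeFEM gives the same expectations without sampling noise, at the cost of meshing the domain; Monte Carlo is the quick cross-check.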

49 pages, 3694 KB  
Systematic Review
A Systematic Review of Models for Fire Spread in Wildfires by Spotting
by Edna Cardoso, Domingos Xavier Viegas and António Gameiro Lopes
Fire 2025, 8(10), 392; https://doi.org/10.3390/fire8100392 - 3 Oct 2025
Abstract
Fire spotting (FS), the process by which firebrands are lofted, transported, and ignite new fires ahead of the main flame front, plays a critical role in escalating extreme wildfire events. This systematic literature review (SLR) analyzes peer-reviewed articles and book chapters published in English from 2000 to 2023 to assess the evolution of FS models, identify prevailing methodologies, and highlight existing gaps. Following a PRISMA-guided approach, 102 studies were selected from Scopus, Web of Science, and Google Scholar, with searches conducted up to December 2023. The results indicate a marked increase in scientific interest after 2010. Thematic and bibliometric analyses reveal a dominant research focus on integrating the FS model within existing and new fire spread models, as well as empirical research and individual FS phases, particularly firebrand transport and ignition. However, generation and ignition FS phases, physics-based FS models (encompassing all FS phases), and integrated operational models remain underexplored. Modeling strategies have advanced from empirical and semi-empirical approaches to machine learning and physical-mechanistic simulations. Despite advancements, most models still struggle to replicate the stochastic and nonlinear nature of spotting. Geographically, research is concentrated in the United States, Australia, and parts of Europe, with notable gaps in representation across the Global South. This review underscores the need for interdisciplinary, data-driven, and regionally inclusive approaches to improve the predictive accuracy and operational applicability of FS models under future climate scenarios.

21 pages, 1618 KB  
Article
Towards Realistic Virtual Power Plant Operation: Behavioral Uncertainty Modeling and Robust Dispatch Through Prospect Theory and Social Network-Driven Scenario Design
by Yi Lu, Ziteng Liu, Shanna Luo, Jianli Zhao, Changbin Hu and Kun Shi
Sustainability 2025, 17(19), 8736; https://doi.org/10.3390/su17198736 - 29 Sep 2025
Abstract
The growing complexity of distribution-level virtual power plants (VPPs) demands a rethinking of how flexible demand is modeled, aggregated, and dispatched under uncertainty. Traditional optimization frameworks often rely on deterministic or homogeneous assumptions about end-user behavior, thereby overestimating controllability and underestimating risk. In this paper, we propose a behavior-aware, two-stage stochastic dispatch framework for VPPs that explicitly models heterogeneous user participation via integrated behavioral economics and social interaction structures. At the behavioral layer, user responses to demand response (DR) incentives are captured using a Prospect Theory-based utility function, parameterized by loss aversion, nonlinear gain perception, and subjective probability weighting. In parallel, social influence dynamics are modeled using a peer interaction network that modulates individual participation probabilities through local contagion effects. These two mechanisms are combined to produce a high-dimensional, time-varying participation map across user classes, including residential, commercial, and industrial actors. This probabilistic behavioral landscape is embedded within a scenario-based two-stage stochastic optimization model. The first stage determines pre-committed dispatch quantities across flexible loads, electric vehicles, and distributed storage systems, while the second stage executes real-time recourse based on realized participation trajectories. The dispatch model includes physical constraints (e.g., energy balance, network limits), behavioral fatigue, and the intertemporal coupling of flexible resources. A scenario reduction technique and the Conditional Value-at-Risk (CVaR) metric are used to ensure computational tractability and robustness against extreme behavior deviations.
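The Prospect Theory utility layer can be sketched with the classic Tversky–Kahneman functional forms. The parameter values below are the standard 1992 estimates used as placeholders, not the paper's calibrated ones, and the lottery is hypothetical:

```python
def value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave over gains, convex and
    loss-averse over losses (classic 1992 parameter estimates)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S subjective probability weighting w(p)."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Subjective utility of a hypothetical DR incentive lottery:
# gain 10 with probability 0.3, lose 5 otherwise.
u = weight(0.3) * value(10.0) + weight(0.7) * value(-5.0)
```

Loss aversion (the factor lam) is why modeled users under-participate relative to an expected-value calculation: a possible comfort loss weighs more than an equally sized incentive gain.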

17 pages, 956 KB  
Article
Energy Optimization of Motor-Driven Systems Using Variable Frequency Control, Soft Starters, and Machine Learning Forecasting
by Hashnayne Ahmed, Cristián Cárdenas-Lailhacar and S. A. Sherif
Energies 2025, 18(19), 5135; https://doi.org/10.3390/en18195135 - 26 Sep 2025
Abstract
This paper presents a unified modeling framework for quantifying power and energy consumption in motor-driven systems operating under variable frequency control and soft starter conditions. By formulating normalized expressions for voltage, current, and power factor as functions of motor speed, the model enables accurate estimation of instantaneous and cumulative energy use using only measurable electrical quantities. The effect of soft starter operation during startup is incorporated through ramp-based profiles, while variable frequency control is modeled through dynamic speed modulation. Analytical results show that variable speed control can achieve energy savings of up to 36.1% for sinusoidal speed profiles and up to 42.9% when combined with soft starter operation, with the soft starter alone contributing a consistent 8.6% reduction independent of the power factor. To support energy optimization under uncertain demand scenarios, a two-stage stochastic optimization framework is developed for motor sizing and control assignment, and four physics-guided machine learning models—MLP, LSTM, GRU, and XGBoost—are benchmarked to forecast normalized energy ratios from key electrical parameters, enabling rapid and interpretable predictions. The proposed framework provides a scalable, interpretable, and practical tool for monitoring, diagnostics, and smart energy management of industrial motor-driven systems.
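The flavor of the savings calculation can be shown under a textbook affinity-law assumption (power proportional to the cube of normalized speed). This is a simplification of the paper's voltage/current/power-factor model, and the sinusoidal profile below is illustrative, not the one behind the 36.1% figure:

```python
import math

def mean_power_ratio(speed_profile, exponent=3.0):
    """Average normalized power over a profile of normalized speeds n(t),
    assuming an affinity-law P/P0 = (n/n0)**exponent. A textbook
    simplification of the paper's electrical model."""
    return sum(n ** exponent for n in speed_profile) / len(speed_profile)

# Sinusoidal modulation between 60% and 100% of rated speed:
profile = [0.8 + 0.2 * math.sin(2 * math.pi * k / 100) for k in range(100)]
savings = 1.0 - mean_power_ratio(profile)   # fraction saved vs. full speed
```

The cubic nonlinearity is the whole point of variable speed control: even a modest average speed reduction cuts mean power disproportionately, which is why savings in the tens of percent are plausible.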

21 pages, 6147 KB  
Article
A Two-Stage Hybrid Modeling Strategy for Early-Age Concrete Temperature Prediction Using Decoupled Physical Processes
by Xiaoyi Hu, Min Gan, Liangliang Zhang, Zhou Yu and Xin Lin
Buildings 2025, 15(19), 3479; https://doi.org/10.3390/buildings15193479 - 26 Sep 2025
Abstract
Predicting early-age temperature evolution in mass concrete is crucial for controlling thermal cracks. This process involves two distinct physical stages: an initial, hydration-driven heating stage (Stage I) and a subsequent, environment-dominated cooling stage (Stage II). To address this challenge, we propose a novel two-stage hybrid modeling strategy that decouples the underlying physical processes. This approach was developed and validated using a 450-h on-site monitoring dataset. For the deterministic heating phase (Stage I), we employed polynomial regression. For the subsequent stochastic cooling phase (Stage II), a Random Forest algorithm was used to model the complex environmental interactions. The proposed hybrid model was benchmarked against several alternatives, including a physics-based finite element model (FEM) and a single Random Forest model. During the critical cooling stage, our approach demonstrated superior performance, achieving a Root Mean Square Error (RMSE) of 0.24 °C. This represents a 17.2% improvement over the best-performing single model. Furthermore, cumulative error analysis indicated that the hybrid model maintained a stable and unbiased prediction trend throughout the monitoring period. This addresses a key weakness in single-stage models, where underlying phase-specific errors can accumulate and lead to long-term drift. The proposed framework offers an accurate, robust, and transferable paradigm for modeling other complex engineering processes that exhibit distinct multi-stage characteristics.
(This article belongs to the Special Issue Urban Renewal: Protection and Restoration of Existing Buildings)
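A toy version of the two-stage split, with synthetic temperatures standing in for the 450-h monitoring data: polynomial regression for the heating stage, a Random Forest for the environment-driven cooling stage. The stage boundary, forcing terms, and noise level below are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 450.0, 901)            # hours
stage1 = t <= 72.0                          # hypothetical stage boundary

# Synthetic stand-in for the monitored core temperature (not the paper's data):
hydration = 20.0 + 35.0 * (1.0 - np.exp(-t / 24.0))   # stage-I heating
ambient = 10.0 * np.sin(2.0 * np.pi * t / 24.0)       # diurnal forcing
temp = np.where(stage1, hydration,
                55.0 * np.exp(-(t - 72.0) / 200.0) + 0.3 * ambient)
temp = temp + rng.normal(0.0, 0.2, t.size)

# Stage I: low-order polynomial regression in time.
coef = np.polyfit(t[stage1], temp[stage1], deg=3)
pred1 = np.polyval(coef, t[stage1])

# Stage II: Random Forest on time plus environmental features.
X2 = np.column_stack([t[~stage1], ambient[~stage1]])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X2, temp[~stage1])
pred2 = rf.predict(X2)
rmse2 = float(np.sqrt(np.mean((pred2 - temp[~stage1]) ** 2)))
```

The decoupling matters because the two stages have different error structures: a smooth deterministic trend suits a low-order polynomial, while the noisy, environment-coupled cooling phase suits an ensemble learner.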

29 pages, 3717 KB  
Article
Inverse Procedure to Initial Parameter Estimation for Air-Dropped Packages Using Neural Networks
by Beata Potrzeszcz-Sut and Marta Grzyb
Appl. Sci. 2025, 15(19), 10422; https://doi.org/10.3390/app151910422 - 25 Sep 2025
Abstract
This paper presents a neural network–driven framework for solving the inverse problem of initial parameter estimation in air-dropped package missions. Unlike traditional analytical methods, which are computationally intensive and often impractical in real time, the proposed system leverages the flexibility of multilayer perceptrons to model both forward and inverse relationships between drop conditions and flight outcomes. In the forward stage, a trained network predicts range, flight time, and impact velocity from predefined release parameters. In the inverse stage, a deeper neural model reconstructs the required release velocity, angle, and altitude directly from the desired operational outcomes. By employing a hybrid workflow—combining physics-based simulation with neural approximation—our approach generates large, high-quality datasets at low computational cost. Results demonstrate that the inverse network achieves high accuracy across deterministic and stochastic tests, with minimal error when operating within the training domain. The study confirms the suitability of neural architectures for tackling complex, nonlinear identification tasks in precision airdrop operations. Beyond their technical efficiency, such models enable agile, GPS-independent mission planning, offering a reliable and low-cost decision support tool for humanitarian aid, scientific research, and defense logistics. This work highlights how artificial intelligence can transform conventional trajectory design into a fast, adaptive, and autonomous capability.
(This article belongs to the Special Issue Application of Neural Computation in Artificial Intelligence)
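The forward-then-inverse workflow can be sketched with a drag-free ballistic stand-in for the drop physics: the closed-form forward model cheaply generates training data, and a network learns the inverse map from desired outcomes back to release parameters. The speed/angle ranges, network size, and feature scaling below are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

g = 9.81
rng = np.random.default_rng(0)

# Forward stage: drag-free ballistic formulas as a cheap data generator
# (the paper uses a richer physics simulation).
v = rng.uniform(50.0, 150.0, 2000)                          # release speed [m/s]
th = rng.uniform(np.deg2rad(20.0), np.deg2rad(70.0), 2000)  # release angle
R = v ** 2 * np.sin(2.0 * th) / g                           # horizontal range [m]
T = 2.0 * v * np.sin(th) / g                                # flight time [s]

# Inverse stage: learn (speed, angle) back from the desired (range, time).
X = np.column_stack([R / 2300.0, T / 30.0])                 # crude feature scaling
y = np.column_stack([v / 150.0, th])
inv = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)
pred = inv.predict(X)
speed_mae = float(np.mean(np.abs(pred[:, 0] * 150.0 - v)))  # [m/s]
```

For this drag-free model the inverse map happens to be unique on the chosen domain, which is what makes a direct regression from outcomes to release parameters well posed.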

20 pages, 1690 KB  
Article
3V-GM: A Tri-Layer “Point–Line–Plane” Critical Node Identification Algorithm for New Power Systems
by Yuzhuo Dai, Min Zhao, Gengchen Zhang and Tianze Zhao
Entropy 2025, 27(9), 937; https://doi.org/10.3390/e27090937 - 7 Sep 2025
Abstract
With the increasing penetration of renewable energy, the stochastic and intermittent nature of its generation increases operational uncertainty and vulnerability, posing significant challenges for grid stability. However, traditional algorithms typically identify critical nodes by focusing solely on the network topology or power flow, or by combining the two, which leads to the inaccurate and incomplete identification of essential nodes. To address this, we propose the Three-Dimensional Value-Based Gravity Model (3V-GM), which integrates structural and electrical–physical attributes across three layers. In the plane layer, we combine each node's global topological position with its real-time supply–demand voltage state. In the line layer, we introduce an electrical coupling distance to quantify the strength of electromagnetic interactions between nodes. In the point layer, we apply eigenvector centrality to detect latent hub nodes whose influence is not immediately apparent. The performance of our proposed method was evaluated by examining the change in the load loss rate as nodes were sequentially removed. To assess the effectiveness of the 3V-GM approach, simulations were conducted on the IEEE 39 system, as well as six other benchmark networks. The simulations were performed using Python scripts, with operational parameters such as bus voltages, active and reactive power flows, and branch impedances obtained from standard test cases provided by MATPOWER v7.1. The results consistently show that removing the same number of nodes identified by 3V-GM leads to a greater load loss compared to the six baseline methods. This demonstrates the superior accuracy and stability of our approach. Additionally, an ablation experiment, which decomposed and recombined the three layers, further highlights the unique contribution of each component to the overall performance.
(This article belongs to the Section Complexity)
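The point-layer score is standard eigenvector centrality, which can be computed by power iteration on the adjacency matrix. The five-bus toy network below is hypothetical, not one of the paper's benchmark systems:

```python
import numpy as np

def eigenvector_centrality(A, iters=200, tol=1e-10):
    """Power iteration for the leading eigenvector of a nonnegative
    adjacency matrix (the point-layer score in a 3V-GM-style ranking)."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        norm = np.linalg.norm(w)
        if norm == 0.0:
            return v
        w /= norm
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Hypothetical five-bus toy network: bus 0 is the hub, buses 3-4 share a tie.
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
scores = eigenvector_centrality(A)
```

Unlike plain degree, eigenvector centrality rewards nodes whose neighbors are themselves well connected, which is how "latent" hubs with modest degree can still rank highly.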

46 pages, 8337 KB  
Review
Numerical Modelling of Keratinocyte Behaviour: A Comprehensive Review of Biochemical and Mechanical Frameworks
by Sarjeel Rashid, Raman Maiti and Anish Roy
Cells 2025, 14(17), 1382; https://doi.org/10.3390/cells14171382 - 4 Sep 2025
Abstract
Keratinocytes are the primary cells of the epidermis layer in our skin. They play a crucial role in maintaining skin health, responding to injuries, and counteracting disease progression. Understanding their behaviour is essential for advancing wound healing therapies, improving outcomes in regenerative medicine, and developing numerical models that accurately mimic skin deformation. To create physically representative models, it is essential to evaluate the nuanced ways in which keratinocytes deform, interact, and respond to mechanical and biochemical signals. This has prompted researchers to investigate various computational methods that capture these dynamics effectively. This review summarises the main mathematical and biomechanical modelling techniques (with particular focus on the literature published since 2010). It includes reaction–diffusion frameworks, finite element analysis, viscoelastic models, stochastic simulations, and agent-based approaches. We also highlight how machine learning is being integrated to accelerate model calibration, improve image-based analyses, and enhance predictive simulations. While these models have significantly improved our understanding of keratinocyte function, many approaches rely on idealised assumptions. These may be two-dimensional unicellular analysis, simplistic material properties, or uncoupled analyses between mechanical and biochemical factors. We discuss the need for multiscale, integrative modelling frameworks that bridge these computational and experimental approaches. A more holistic representation of keratinocyte behaviour could enhance the development of personalised therapies, improve disease modelling, and refine bioengineered skin substitutes for clinical applications.
(This article belongs to the Section Cellular Biophysics)
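Of the frameworks surveyed, reaction–diffusion models are the easiest to sketch. A 1-D Fisher–KPP scheme as a caricature of a keratinocyte sheet migrating and proliferating into a wound; the grid, rates, and initial condition are illustrative choices, not taken from any reviewed model:

```python
import numpy as np

def fisher_kpp(n=200, steps=2000, D=1.0, r=1.0, dx=0.5, dt=0.05):
    """Explicit 1-D Fisher-KPP scheme du/dt = D u_xx + r u(1 - u):
    a minimal reaction-diffusion caricature of a keratinocyte sheet
    (u = normalized cell density) closing a wound on the right."""
    u = np.zeros(n)
    u[: n // 4] = 1.0                             # confluent cells on the left
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        lap[0] = 2.0 * (u[1] - u[0]) / dx ** 2    # reflective (no-flux) ends
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx ** 2
        u = np.clip(u + dt * (D * lap + r * u * (1.0 - u)), 0.0, 1.0)
    return u

u = fisher_kpp()   # by the final step the front has crossed the whole domain
```

The travelling front that this equation produces (speed roughly 2*sqrt(D*r)) is the classic continuum picture of wound closure that the more elaborate finite element and agent-based models in the review refine.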

20 pages, 5097 KB  
Article
A Robust Optimization Framework for Hydraulic Containment System Design Under Uncertain Hydraulic Conductivity Fields
by Wenfeng Gao, Yawei Kou, Hao Dong, Haoran Liu and Simin Jiang
Water 2025, 17(17), 2617; https://doi.org/10.3390/w17172617 - 4 Sep 2025
Abstract
Effective containment of contaminant plumes in heterogeneous aquifers is critically challenged by the inherent uncertainty in hydraulic conductivity (K). Conventional, deterministic optimization approaches for pump-and-treat (P&T) system design often fail when confronted with real-world geological variability. This study proposes a novel robust simulation-optimization framework to design reliable hydraulic containment systems that explicitly account for this subsurface uncertainty. The framework integrates the Karhunen–Loève Expansion (KLE) for efficient stochastic representation of heterogeneous K-fields with a Genetic Algorithm (GA) implemented via the pymoo library, coupled with the MODFLOW groundwater flow model for physics-based performance evaluation. The core innovation lies in a multi-scenario assessment process, where candidate well configurations (locations and pumping rates) are evaluated against an ensemble of K-field realizations generated by KLE. This approach shifts the design objective from optimality under a single scenario to robustness across a spectrum of plausible subsurface conditions. A structured three-step filtering method—based on mean performance, consistency (pass rate), and stability (low variability)—is employed to identify the most reliable solutions. The framework's effectiveness is demonstrated through a numerical case study. Results confirm that deterministic designs are highly sensitive to the specific K-field realization. In contrast, the robust framework successfully identifies well configurations that maintain a high and stable containment performance across diverse K-field scenarios, effectively mitigating the risk of failure associated with single-scenario designs. Furthermore, the analysis reveals how varying degrees of aquifer heterogeneity influence both the required operational cost and the attainable level of robustness. This systematic approach provides decision-makers with a practical and reliable strategy for designing cost-effective P&T systems that are resilient to geological uncertainty, offering significant advantages over traditional methods for contaminated site remediation.
(This article belongs to the Special Issue Groundwater Quality and Contamination at Regional Scales)
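The KLE step compresses a spatially correlated conductivity field into a handful of independent random coefficients, so an ensemble of realizations is cheap to generate. A 1-D toy with squared-exponential covariance; the length scale, mode count, and grid are assumptions, not the case-study values:

```python
import numpy as np

def kle_realizations(n=100, length_scale=0.2, n_modes=10, n_real=5, seed=0):
    """Truncated Karhunen-Loeve expansion of a 1-D Gaussian log-conductivity
    field with squared-exponential covariance; each row is one K-field
    realization of the kind fed to the multi-scenario evaluation."""
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * length_scale ** 2))
    eigval, eigvec = np.linalg.eigh(C)                   # ascending order
    eigval = np.clip(eigval[::-1][:n_modes], 0.0, None)  # leading modes
    eigvec = eigvec[:, ::-1][:, :n_modes]
    xi = np.random.default_rng(seed).standard_normal((n_real, n_modes))
    return xi @ (eigvec * np.sqrt(eigval)).T             # (n_real, n)

fields = kle_realizations()
```

Each candidate well configuration in the framework is then scored against every row of such an ensemble (via MODFLOW runs in the paper), which is what turns single-scenario optimality into robustness.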

23 pages, 4363 KB  
Article
Hybrid SDE-Neural Networks for Interpretable Wind Power Prediction Using SCADA Data
by Mehrdad Ghadiri and Luca Di Persio
Electricity 2025, 6(3), 48; https://doi.org/10.3390/electricity6030048 - 1 Sep 2025
Abstract
Wind turbine power forecasting is crucial for optimising energy production, planning maintenance, and enhancing grid stability. This research focuses on predicting the output of a Senvion MM92 wind turbine at the Kelmarsh wind farm in the UK using SCADA data from 2020. Two approaches are explored: a hybrid model combining Stochastic Differential Equations (SDEs) with Neural Networks (NNs), and Deep Learning models, in particular Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), and a combination of Convolutional Neural Networks (CNNs) and LSTM. Notably, while SDE-NN models are well suited for predictions in cases where data patterns are chaotic and lack consistent trends, incorporating stochastic processes increases the complexity of learning within SDE models. Moreover, while SDE-NNs cannot be classified as purely "white box" models, they are also not entirely "black box" like traditional Neural Networks. Instead, they occupy a middle ground, offering improved interpretability over pure NNs while still leveraging the power of Deep Learning. This balance is especially valuable in fields such as wind power prediction, where both accuracy and understanding of the underlying physical processes are essential. The evaluation of the results demonstrates the effectiveness of the SDE-NNs compared to traditional Deep Learning models for wind power prediction. The SDE-NNs achieve slightly better accuracy than other Deep Learning models, highlighting their potential as a powerful alternative.
32 pages, 1569 KB  
Systematic Review
A Review of Multi-Energy Systems from Resiliency and Equity Perspectives
by Kathryn Hinkelman, Juan Diego Flores Garcia, Saranya Anbarasu and Wangda Zuo
Energies 2025, 18(17), 4536; https://doi.org/10.3390/en18174536 - 27 Aug 2025
Abstract
Multi-energy systems (MES), or energy hubs, offer a technologically viable solution for maintaining resilient energy infrastructure in the face of increasingly frequent disasters, which disproportionately affect low-income and disadvantaged communities; however, their adoption for these purposes remains poorly understood. Following PRISMA 2020, this paper systematically reviews the MES literature from both resiliency and equity perspectives to identify synergies, disparities, and gaps in the context of climate change and long-term decarbonization goals. From 2420 records identified in Scopus (1997–2023), we included 211 original MES research publications for detailed review, with studies excluded based on their scale, scope, or technology. Risk of bias was minimized through dual-stage screening and statistical analysis across 18 physical-system and research-approach categories. The results show that papers addressing equity are statistically more likely to involve fully renewable energy systems, while middle-income countries tend to adopt renewable systems with biofuels more than high-income countries do. Sector coupling with two energy types improved the resiliency index the most (a 73% difference between the baseline and the proposed MES), suggesting two-type systems are optimal. Statistically significant differences in modeling formulations also emerged: equity-focused MES studies adopt deterministic design models, while resilience-focused studies favor stochastic control formulations and load-shedding objectives. While preliminary studies indicate that low operational costs and high resilience can be achieved synergistically, further MES case studies are needed in low-income communities and extreme climates. Broadly, this review is the first to apply structured statistical analysis to the MES domain, revealing key trends in technology adoption, modeling approaches, and equity-resilience integration. Full article
(This article belongs to the Topic Multi-Energy Systems, 2nd Edition)
25 pages, 3918 KB  
Article
Sensitivity Analysis of Component Parameters in Dual-Channel Time-Domain Correlated UWB Fuze Receivers Under Parametric Deviations
by Yanbin Liang, Kaiwei Wu, Bing Yang, Shijun Hao and Zhonghua Huang
Sensors 2025, 25(16), 5065; https://doi.org/10.3390/s25165065 - 14 Aug 2025
Abstract
In ultra-wideband (UWB) radio fuze architectures, the receiver is the core component for receiving target-reflected signals, and its performance directly determines system detection accuracy. Manufacturing tolerances and operational environments induce inherent stochastic perturbations in circuit components, causing actual parameters to deviate from their nominal values. This degrades the signal-to-noise ratio (SNR) of the receiver outputs and compromises ranging precision. To overcome these limitations and identify the critical sensitive components in the receiver, this study proposes (1) a dual-channel time-domain correlated UWB fuze detection model, and (2) the integration of an asymmetric tolerance mathematical model for dual-channel correlated receivers with a Morris-LHS-Sobol collaborative strategy to quantify independent effects and coupling interactions across multidimensional parameter spaces. Simulation results demonstrate that integrating capacitors and resistors are the dominant sensitivity sources, exhibiting significantly positive synergistic effects. Physical simulation correlation and hardware circuit verification confirm that the proposed model and sensitivity analysis method outperform conventional approaches in tolerance resolution and allocation optimization, thereby advancing the theoretical characterization of nonlinear coupling effects between parameters. Full article
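As a rough illustration of the variance-based half of such an analysis (not the paper's full Morris-LHS-Sobol pipeline), the sketch below estimates first-order Sobol indices with a pick-and-freeze Monte Carlo estimator on a hypothetical RC stage, whose cutoff frequency is f_c = 1/(2*pi*R*C). The component values and tolerances (1 kOhm +/- 5%, 100 nF +/- 10%) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cutoff(r, c):
    """Toy receiver stage: -3 dB cutoff of an RC low-pass, f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * np.pi * r * c)

# Hypothetical tolerances: R = 1 kOhm +/- 5 %, C = 100 nF +/- 10 %.
def sample(size):
    r = rng.uniform(950.0, 1050.0, size)
    c = rng.uniform(90e-9, 110e-9, size)
    return r, c

n = 20000
rA, cA = sample(n)          # base sample A
rB, cB = sample(n)          # independent base sample B
fA, fB = cutoff(rA, cA), cutoff(rB, cB)
total_var = np.var(np.concatenate([fA, fB]))

def first_order(f_ab):
    """Pick-and-freeze estimator of a first-order Sobol index."""
    return float(np.mean(fB * (f_ab - fA)) / total_var)

S_R = first_order(cutoff(rB, cA))   # only R taken from sample B
S_C = first_order(cutoff(rA, cB))   # only C taken from sample B
print(f"S_R ~ {S_R:.2f}, S_C ~ {S_C:.2f}")
```

With these assumed tolerances the capacitor dominates, since its +/- 10% spread contributes roughly four times the output variance of the +/- 5% resistor. In a workflow like the one the paper describes, a Morris screening pass would typically precede this step to discard non-influential parameters before spending samples on Sobol indices.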
(This article belongs to the Section Communications)