Search Results (1,210)

Search Parameters:
Keywords = stochastic dynamic system

22 pages, 2341 KB  
Article
A Multi-Expert Evolutionary Boosting Method for Proactive Control in Unstable Environments
by Alexander Musaev and Dmitry Grigoriev
Algorithms 2025, 18(11), 692; https://doi.org/10.3390/a18110692 - 2 Nov 2025
Abstract
Unstable technological processes, such as turbulent gas and hydrodynamic flows, generate time series that deviate sharply from the assumptions of classical statistical forecasting. These signals are shaped by stochastic chaos, characterized by weak inertia, abrupt trend reversals, and pronounced low-frequency contamination. Traditional extrapolators, including linear and polynomial models, therefore act only as weak forecasters, introducing systematic phase lag and rapidly losing directional reliability. To address these challenges, this study introduces an evolutionary boosting framework within a multi-expert system (MES) architecture. Each expert is defined by a compact genome encoding training-window length and polynomial order, and experts evolve across generations through variation, mutation, and selection. Unlike conventional boosting, which adapts only weights, evolutionary boosting adapts both the weights and the structure of the expert pool, allowing the system to escape local optima and remain responsive to rapid environmental shifts. Numerical experiments on real monitoring data demonstrate consistent error reduction, highlighting the advantage of short windows and moderate polynomial orders in balancing responsiveness with robustness. The results show that evolutionary boosting transforms weak extrapolators into a strong short-horizon forecaster, offering a lightweight and interpretable tool for proactive control in environments dominated by chaotic dynamics. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
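The evolutionary boosting loop described in the abstract above can be sketched compactly. This is a hypothetical minimal illustration, not the authors' implementation: each expert's genome is a (training-window length, polynomial order) pair, experts are scored on rolling one-step errors, and the pool evolves by selection plus mutation, so both the weights and the structure of the pool adapt. All function names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(series, window, order):
    """Fit a polynomial to the last `window` points and extrapolate one step ahead."""
    y = series[-window:]
    x = np.arange(window)
    coef = np.polynomial.polynomial.polyfit(x, y, order)
    return np.polynomial.polynomial.polyval(window, coef)

def evolve(series, pop, horizon=50):
    """One generation of evolutionary boosting: score each expert genome
    (window, order) by its mean rolling one-step error, keep the best half,
    and refill the pool with mutated copies of the survivors."""
    scores = []
    for window, order in pop:
        errs = [abs(forecast(series[:t], window, order) - series[t])
                for t in range(len(series) - horizon, len(series))]
        scores.append(np.mean(errs))
    survivors = [pop[i] for i in np.argsort(scores)[: len(pop) // 2]]
    children = [(max(5, w + int(rng.integers(-2, 3))),          # mutate window
                 min(3, max(1, o + int(rng.integers(-1, 2)))))  # mutate order
                for w, o in survivors]
    return survivors + children

t = np.arange(300)
series = np.sin(0.07 * t) + 0.3 * rng.standard_normal(300)   # toy noisy signal
pop = [(int(w), int(o)) for w, o in zip(rng.integers(5, 40, 8), rng.integers(1, 4, 8))]
pop = evolve(series, pop)
```

Short windows with moderate orders tend to survive on signals like this one, which matches the trade-off the abstract reports between responsiveness and robustness.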

53 pages, 4192 KB  
Article
A Methodology for Assessing Digital Readiness of Industrial Enterprises for Ecosystem Adaptation: Evidence from Kazakhstan’s Sustainable Industrial Transformation
by Larissa Tashenova, Dinara Mamrayeva and Barno Kulzhambekova
Sustainability 2025, 17(21), 9763; https://doi.org/10.3390/su17219763 - 1 Nov 2025
Abstract
This article examines the effectiveness of digital transformation in Kazakhstan’s industry, focusing on how well enterprises convert digital resources into economically measurable results during the transition to a model of sustainable industrial growth. The aim of the study is to develop a comprehensive methodology for assessing the digital readiness of industrial enterprises to implement and adapt digital ecosystems based on a synthesis of conceptual and empirical approaches. The methodology developed by the authors combines a parametric diagnostic system and stochastic frontier analysis (SFA) tools, which allows for a quantitative assessment of not only the scale but also the effectiveness of digital transformations at the regional level. The empirical part of the study includes statistical data for 2023, reflecting the dynamics of the introduction of ICT, cloud technologies, big data analytics, etc., in the industrial sector. The analysis showed steady development of digitalization alongside pronounced spatial asymmetry. The application of SFA made it possible to identify technological “frontiers” and reveal the hidden potential for increasing the effectiveness of digital investments at the regional level. The practical value of the study lies in its applicability for assessing the digital readiness of industrial enterprises for ecosystem adaptation, diagnosing regional digital disparities, and justifying targeted government policy measures aimed at strengthening the digital maturity and sustainability of the industrial sector. Full article

51 pages, 28544 KB  
Article
Spatial Flows of Information Entropy as Indicators of Climate Variability and Extremes
by Bernard Twaróg
Entropy 2025, 27(11), 1132; https://doi.org/10.3390/e27111132 - 31 Oct 2025
Abstract
The objective of this study is to analyze spatial entropy flows that reveal the directional dynamics of climate change—patterns that remain obscured in traditional statistical analyses. This approach enables the identification of pathways for “climate information transport”, highlights associations with atmospheric circulation types, and allows for the localization of both sources and “informational voids”—regions where entropy is dissipated. The analytical framework is grounded in a quantitative assessment of long-term climate variability across Europe over the period 1901–2010, utilizing Shannon entropy as a measure of atmospheric system uncertainty and variability. The underlying assumption is that the variability of temperature and precipitation reflects the inherently dynamic character of climate as a nonlinear system prone to fluctuations. The study focuses on calculating entropy estimated within a 70-year moving window for each calendar month, using bivariate distributions of temperature and precipitation modeled with copula functions. Marginal distributions were selected based on the Akaike Information Criterion (AIC). To improve the accuracy of the estimation, a block bootstrap resampling technique was applied, along with numerical integration to compute the Shannon entropy values at each of the 4165 grid points with a spatial resolution of 0.5° × 0.5°. The results indicate that entropy and its derivative are complementary indicators of atmospheric system instability—entropy proving effective in long-term diagnostics, while its derivative provides insight into the short-term forecasting of abrupt changes. A lag analysis and Spearman rank correlation between entropy values and their potential supported the investigation of how circulation variability influences the occurrence of extreme precipitation events. 
Particularly noteworthy is the temporal derivative of entropy, which revealed strong nonlinear relationships between local dynamic conditions and climatic extremes. A spatial analysis of the information entropy field was also conducted, revealing distinct structures with varying degrees of climatic complexity on a continental scale. This field appears to be clearly structured, reflecting not only the directional patterns of change but also the potential sources of meteorological fluctuations. A field-theory-based spatial classification allows for the identification of transitional regions—areas with heightened susceptibility to shifts in local dynamics—as well as entropy source and sink regions. The study is embedded within the Fokker–Planck formalism, wherein the change in the stochastic distribution characterizes the rate of entropy production. In this context, regions of positive divergence are interpreted as active generators of variability, while sink regions function as stabilizing zones that dampen fluctuations. Full article
(This article belongs to the Special Issue 25 Years of Sample Entropy)
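The core numerical step above (estimating Shannon entropy of a bivariate temperature–precipitation density by numerical integration) can be illustrated with a toy version. The sketch below uses a bivariate Gaussian in place of the fitted copula model so the result can be checked against the closed form; SciPy availability, grid settings, and the correlation value are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_entropy(cov, lim=8.0, n=400):
    """Differential Shannon entropy of a bivariate density, estimated by
    evaluating -p*log(p) on a regular grid and summing (simple Riemann rule)."""
    x = np.linspace(-lim, lim, n)
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x)
    p = multivariate_normal(mean=[0.0, 0.0], cov=cov).pdf(np.dstack([X, Y]))
    integrand = np.where(p > 0, -p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    return integrand.sum() * dx * dx

rho = 0.6
H_num = bivariate_entropy([[1.0, rho], [rho, 1.0]])
# Gaussian closed form: ln(2*pi*e) + 0.5*ln(det Sigma), for a sanity check
H_exact = np.log(2 * np.pi * np.e) + 0.5 * np.log(1 - rho**2)
```

In the paper's pipeline the density would instead come from copula-modeled marginals selected by AIC, evaluated in a 70-year moving window per grid point.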
24 pages, 3446 KB  
Article
DAHG: A Dynamic Augmented Heterogeneous Graph Framework for Precipitation Forecasting with Incomplete Data
by Hailiang Tang, Hyunho Yang and Wenxiao Zhang
Information 2025, 16(11), 946; https://doi.org/10.3390/info16110946 - 30 Oct 2025
Abstract
Accurate and timely precipitation forecasting is critical for climate risk management, agriculture, and hydrological regulation. However, this task remains challenging due to the dynamic evolution of atmospheric systems, heterogeneous environmental factors, and frequent missing data in multi-source observations. To address these issues, we propose DAHG, a novel long-term precipitation forecasting framework based on dynamic augmented heterogeneous graphs with reinforced graph generation, contrastive representation learning, and long short-term memory (LSTM) networks. Specifically, DAHG constructs a temporal heterogeneous graph to model the complex interactions among multiple meteorological variables (e.g., precipitation, humidity, wind) and remote sensing indicators (e.g., NDVI). The forecasting task is formulated as a dynamic spatiotemporal regression problem, where predicting future precipitation values corresponds to inferring attributes of target nodes in the evolving graph sequence. To handle missing data, we present a reinforced dynamic graph generation module that leverages reinforcement learning to complete incomplete graph sequences, enhancing the consistency of long-range forecasting. Additionally, a self-supervised contrastive learning strategy is employed to extract robust representations of multi-view graph snapshots (i.e., temporally adjacent frames and stochastically augmented graph views). Finally, DAHG integrates temporal dependency through long short-term memory (LSTM) networks to capture the evolving precipitation patterns and outputs future precipitation estimations. Experimental evaluations on multiple real-world meteorological datasets show that DAHG reduces MAE by 3% and improves R2 by 0.02 over state-of-the-art baselines (p < 0.01), confirming significant gains in accuracy and robustness, particularly in scenarios with partially missing observations (e.g., due to sensor outages or cloud-covered satellite readings). Full article

18 pages, 1681 KB  
Article
Modeling Dynamic Regime Shifts in Diffusion Processes: Approximate Maximum Likelihood Estimation for Two-Threshold Ornstein–Uhlenbeck Models
by Svajone Bekesiene, Anatolii Nikitin and Serhii Nechyporuk
Mathematics 2025, 13(21), 3450; https://doi.org/10.3390/math13213450 - 29 Oct 2025
Abstract
This study addresses the problem of estimating parameters in a two-threshold Ornstein–Uhlenbeck diffusion process, a model suitable for describing systems that exhibit changes in dynamics when crossing specific boundaries. Such behavior is often observed in real economic and physical processes. The main objective is to develop and evaluate a method for accurately identifying key parameters, including the threshold levels, drift changes, and diffusion coefficient, within this stochastic framework. The paper proposes an iterative algorithm based on approximate maximum likelihood estimation, which recalculates parameter values step by step until convergence is achieved. This procedure simultaneously estimates both the threshold positions and the associated process parameters, allowing it to adapt effectively to structural changes in the data. Unlike previously studied single-threshold systems, two-threshold models are more natural and offer improved applicability. The method is implemented through custom programming and tested using synthetically generated data to assess its precision and reliability. The novelty of this study lies in extending the approximate maximum likelihood framework to a two-threshold Ornstein–Uhlenbeck process and in developing an iterative estimation procedure capable of jointly recovering both threshold locations and regime-specific parameters with proven convergence properties. Results show that the algorithm successfully captures changes in the process dynamics and provides consistent parameter estimates across different scenarios. The proposed approach offers a practical tool for analyzing systems influenced by shifting regimes and contributes to a better understanding of dynamic processes in various applied fields. Full article
(This article belongs to the Special Issue Stochastic Differential Equations and Applications)
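The two-threshold Ornstein–Uhlenbeck dynamics can be simulated with a plain Euler–Maruyama step, which is how synthetic test data for such an estimator is typically generated. This is a hypothetical simulation sketch with illustrative parameter values, not the paper's approximate maximum likelihood procedure: the drift switches among three regimes depending on whether the state is below, between, or above the two thresholds.

```python
import numpy as np

def simulate_tou(theta=(2.0, 0.5, 2.0), mu=(-1.0, 0.0, 1.0),
                 r=(-0.8, 0.8), sigma=0.3, x0=0.0, n=20_000, dt=1e-3, seed=1):
    """Euler-Maruyama path of a two-threshold Ornstein-Uhlenbeck process:
    dX = theta_k (mu_k - X) dt + sigma dW, where regime k is 0 below r[0],
    1 between the thresholds, and 2 above r[1]."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        k = 0 if x[i - 1] < r[0] else (2 if x[i - 1] > r[1] else 1)
        drift = theta[k] * (mu[k] - x[i - 1])
        x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_tou()
```

An estimation routine of the kind the paper proposes would then alternate between re-estimating the thresholds and the regime-specific (theta, mu, sigma) parameters on such a path until convergence.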

25 pages, 3099 KB  
Article
Joint Energy–Resilience Optimization of Grid-Forming Storage in Islanded Microgrids via Wasserstein Distributionally Robust Framework
by Yinchi Shao, Yu Gong, Xiaoyu Wang, Xianmiao Huang, Yang Zhao and Shanna Luo
Energies 2025, 18(21), 5674; https://doi.org/10.3390/en18215674 - 29 Oct 2025
Abstract
The increasing deployment of islanded microgrids in disaster-prone and infrastructure-constrained regions has elevated the importance of resilient energy storage systems capable of supporting autonomous operation. Grid-forming energy storage (GFES) units—designed to provide frequency reference, voltage regulation, and black-start capabilities—are emerging as critical assets for maintaining both energy adequacy and dynamic stability in isolated environments. However, conventional storage planning models fail to capture the interplay between uncertain renewable generation, time-coupled operational constraints, and control-oriented performance metrics such as virtual inertia and voltage ride-through. To address this gap, this paper proposes a novel distributionally robust optimization (DRO) framework that jointly optimizes the siting and sizing of GFES under renewable and load uncertainty. The model is grounded in Wasserstein-metric DRO, allowing worst-case expectation minimization over an ambiguity set constructed from empirical historical data. A multi-period convex formulation is developed that incorporates energy balance, degradation cost, state-of-charge dynamics, black-start reserve margins, and stability-aware constraints. Frequency sensitivity and voltage compliance metrics are explicitly embedded into the optimization, enabling control-aware dispatch and resilience-informed placement of storage assets. A tractable reformulation is achieved using strong duality and solved via a nested column-and-constraint generation algorithm. The framework is validated on a modified IEEE 33-bus distribution network with high PV penetration and heterogeneous demand profiles. Case study results demonstrate that the proposed model reduces worst-case blackout duration by 17.4%, improves voltage recovery speed by 12.9%, and achieves 22.3% higher SoC utilization efficiency compared to deterministic and stochastic baselines. 
Furthermore, sensitivity analyses reveal that GFES deployment naturally concentrates at nodes with high dynamic control leverage, confirming the effectiveness of the control-informed robust design. This work provides a scalable, data-driven planning tool for resilient microgrid development in the face of deep temporal and structural uncertainty. Full article

24 pages, 628 KB  
Article
Advanced Stochastic Modeling for Series Production Processes: A Markov Chains and Queuing Theory Approach to Optimizing Manufacturing Efficiency
by Nestor E. Caicedo-Solano, Darwin Peña-González, Diego Vergara, Izabel F. Machado and Edwan Anderson Ariza-Echeverri
Processes 2025, 13(11), 3468; https://doi.org/10.3390/pr13113468 - 28 Oct 2025
Abstract
The study presents an integrated stochastic modeling framework that combines Markov chains and queuing theory to optimize production efficiency in a metallurgical machining process. The model captures the dynamic behavior of key manufacturing stages, milling, drilling, reaming, and packing, by representing rework, inspection, and waste as distinct probabilistic states. Effective arrival rates and service rates are computed to evaluate machine utilization, total processing time, and throughput. The proposed approach was applied to an industrial case study, where the results showed that reprocessing activities increased the total cost per conforming unit by approximately 0.3% and affected overall system performance. By adjusting machine allocation and minimizing rework probabilities, the model demonstrated measurable improvements in cycle time and cost efficiency. This integrated methodology provides a practical decision-support tool for optimizing production flow, balancing resource utilization, and reducing the financial impact of nonconformity in continuous manufacturing environments. Full article
(This article belongs to the Special Issue Production and Industrial Engineering in Metal Processing)
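The rework-as-state idea above can be illustrated with an absorbing Markov chain: one transient state per stage, a self-loop for rework, and the fundamental matrix N = (I − Q)⁻¹ giving the expected number of visits to each stage. The transition probabilities below are made up for illustration and are not the paper's case-study data.

```python
import numpy as np

# Transient states: milling, drilling, reaming, packing (absorbing: done/scrap).
# Per stage: pass to the next stage with prob p, rework (stay) with prob r,
# scrap otherwise. Illustrative probabilities, not the case-study values.
p, r = 0.90, 0.07
Q = np.zeros((4, 4))
for i in range(4):
    Q[i, i] = r                 # rework: the part revisits the same stage
    if i < 3:
        Q[i, i + 1] = p         # conforming part moves on

N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix: expected visits
visits = N[0]                      # expected visits per stage, starting at milling
yield_prob = visits[3] * p         # reach packing and pass it -> conforming unit
```

Expected visits above 1 quantify the rework load per stage; multiplying by per-stage service times and costs gives the total processing time and cost per conforming unit that the model optimizes.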

17 pages, 1801 KB  
Article
Reliability Modeling and Assessment of a Dual-Span Rotor System with Misalignment Fault and Shared Load
by Peng Gao
Appl. Sci. 2025, 15(21), 11477; https://doi.org/10.3390/app152111477 - 27 Oct 2025
Abstract
To address the challenge of time-varying reliability assessment for double-span rotor systems under misalignment faults and load-sharing conditions, a time-varying reliability modeling method based on neural networks and reliability velocity mapping is proposed in this paper. By establishing a system dynamics model coupled with misalignment fault, a feedforward neural network surrogate model is constructed to efficiently predict stochastic stress responses, overcoming the limitations of high computational cost and difficulty in probabilistic analysis inherent in traditional finite element methods. Furthermore, by introducing the concept of reliability velocity, an intelligent mapping from independent systems to dependent systems is established, significantly enhancing the assessment accuracy of system time-varying reliability under small-sample conditions. Case study validation demonstrates that the proposed method can accurately capture the system degradation behavior under load-sharing and failure dependency mechanisms, providing a theoretical foundation for reliability analysis and intelligent operation and maintenance of rotor systems. Full article

31 pages, 656 KB  
Article
Qualitative Analysis of Delay Stochastic Systems with Generalized Memory Effects
by Abdelhamid Mohammed Djaouti and Muhammad Imran Liaqat
Mathematics 2025, 13(21), 3409; https://doi.org/10.3390/math13213409 - 26 Oct 2025
Abstract
Fractional stochastic differential equations (FSDEs) are powerful tools for modeling real-world phenomena, as they incorporate both memory effects and stochastic noise. A central focus in their analysis is establishing the well-posedness and regularity of solutions. Moreover, the averaging principle offers a systematic approach to simplify complex dynamical systems by approximating their behavior through time-averaged models. In this paper, we develop a theoretical framework for a class of FSDEs involving the Hilfer–Katugampola derivative. Our main contributions include proving the well-posedness and regularity of solutions, establishing a generalized averaging principle, and demonstrating real-life applications solved via the Euler–Maruyama method. All numerical simulations were conducted using the Python programming language (version 3.11). These results are formulated for the p̃th moment, providing a unified analysis that extends existing findings. Full article

37 pages, 4383 KB  
Article
The Spatial Regime Conversion Method
by Charles G. Cameron, Cameron A. Smith and Christian A. Yates
Mathematics 2025, 13(21), 3406; https://doi.org/10.3390/math13213406 - 26 Oct 2025
Abstract
We present the spatial regime conversion method (SRCM), a novel hybrid modelling framework for simulating reaction–diffusion systems that adaptively combines stochastic discrete and deterministic continuum representations. Extending the regime conversion method (RCM) to spatial settings, the SRCM employs a discrete reaction–diffusion master equation (RDME) representation in regions of low concentration and continuum partial differential equations (PDEs) where concentrations are high, dynamically switching based on local thresholds. This is an advancement over the existing methods in the literature, requiring no fixed spatial interfaces, enabling efficient and accurate simulation of systems in which stochasticity plays a key role but is not required uniformly across the domain. We specify the full mathematical formulation of the SRCM, including conversion reactions, hybrid kinetic rules, and consistent numerical updates. The method is validated across several one-dimensional test systems, including simple diffusion from a region of high concentration, the formation of a morphogen gradient, and the propagation of FKPP travelling waves. The results show that the SRCM captures key stochastic features while offering substantial gains in computational efficiency over fully stochastic models. Full article
(This article belongs to the Special Issue Stochastic Models in Mathematical Biology, 2nd Edition)

19 pages, 321 KB  
Article
Entropy Production and Irreversibility in the Linearized Stochastic Amari Neural Model
by Dario Lucente, Giacomo Gradenigo and Luca Salasnich
Entropy 2025, 27(11), 1104; https://doi.org/10.3390/e27111104 - 25 Oct 2025
Abstract
One among the most intriguing results coming from the application of statistical mechanics to the study of the brain is the understanding that it, as a dynamical system, is inherently out of equilibrium. In the realm of non-equilibrium statistical mechanics and stochastic processes, the standard observable computed to determine whether a system is at equilibrium or not is the entropy produced along the dynamics. For this reason, we present here a detailed calculation of the entropy production in the Amari model, a coarse-grained model of the brain neural network, consisting of an integro-differential equation for the neural activity field, when stochasticity is added to the original dynamics. Since the way to add stochasticity is always to some extent arbitrary, particularly for coarse-grained models, there is no general prescription to do so. We precisely investigate the interplay between noise properties and the original model features, discussing in which cases the stationary state is in thermal equilibrium and which cases it is out of equilibrium, providing explicit and simple formulae. Following the derivation for the particular case considered, we also show how the entropy production rate is related to the variation in time of the Shannon entropy of the system. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
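For the special case of a linear (multivariate Ornstein–Uhlenbeck) diffusion, the stationary entropy production rate has a closed form that the sketch below evaluates numerically: the stationary covariance solves a Lyapunov equation, and the entropy production is built from the irreversible part of the probability current. This standard OU result is offered only as a finite-dimensional stand-in for the linearized Amari field; SciPy availability and the example matrices are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def entropy_production_rate(A, D):
    """Stationary entropy production rate of dx = A x dt + sqrt(2 D) dW.
    Sigma solves A Sigma + Sigma A^T + 2 D = 0; the current is J = Omega x p
    with Omega = A + D Sigma^{-1}, and sigma_ep = tr(Omega^T D^{-1} Omega Sigma).
    At thermal equilibrium (detailed balance) Omega = 0 and the rate vanishes."""
    Sigma = solve_continuous_lyapunov(A, -2 * D)
    Omega = A + D @ np.linalg.inv(Sigma)
    return np.trace(Omega.T @ np.linalg.inv(D) @ Omega @ Sigma)

D = np.eye(2)
ep_eq = entropy_production_rate(np.array([[-2.0, 0.5], [0.5, -1.0]]), D)    # symmetric drift
ep_neq = entropy_production_rate(np.array([[-1.0, 1.0], [-1.0, -1.0]]), D)  # rotational drift
```

A symmetric drift with isotropic noise satisfies detailed balance and produces no entropy, while the rotational (nonreciprocal) drift sustains a circulating current and a strictly positive rate, mirroring the equilibrium/non-equilibrium dichotomy discussed in the abstract.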
23 pages, 3153 KB  
Article
Domain-Specific Acceleration of Gravity Forward Modeling via Hardware–Software Co-Design
by Yong Yang, Daying Sun, Zhiyuan Ma and Wenhua Gu
Micromachines 2025, 16(11), 1215; https://doi.org/10.3390/mi16111215 - 25 Oct 2025
Abstract
The gravity forward modeling algorithm is a compute-intensive method and is widely used in scientific computing, particularly in geophysics, to predict the impact of subsurface structures on surface gravity fields. Traditional implementations rely on CPUs, where performance gains are mainly achieved through algorithmic optimization. With the rise of domain-specific architectures, FPGAs offer a promising platform for acceleration, but face challenges such as limited programmability and the high cost of nonlinear function implementation. This work proposes an FPGA-based co-processor to accelerate gravity forward modeling. A RISC-V core is integrated with a custom instruction set targeting key computation steps. Tasks are dynamically scheduled and executed on eight fully pipelined processing units, achieving high parallelism while retaining programmability. To address nonlinear operations, we introduce a piecewise linear approximation method optimized via stochastic gradient descent (SGD), significantly reducing resource usage and latency. The design is implemented on the AMD UltraScale+ ZCU102 FPGA (Advanced Micro Devices, Inc. (AMD), Santa Clara, CA, USA) and evaluated across several forward modeling scenarios. At 250 MHz, the system achieves up to 179× speedup over an Intel Xeon 5218R CPU (Intel Corporation, Santa Clara, CA, USA) and improves energy efficiency by 2040×. To the best of our knowledge, this is the first FPGA-based gravity forward modeling accelerator design. Full article
(This article belongs to the Special Issue Recent Advances in Field-Programmable Gate Array (FPGA))
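The SGD-optimized piecewise linear approximation mentioned above reads, in spirit, like the sketch below: fix breakpoints over the input range, then fit each segment's slope and intercept by stochastic gradient descent against the target nonlinearity. Everything here is an assumption for illustration (segment count, learning rate, and arctan as a stand-in for the kernel's nonlinear terms); the paper's fixed-point and hardware details are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
K, lo, hi = 16, 0.0, 4.0                  # number of segments and input range
edges = np.linspace(lo, hi, K + 1)
a = np.zeros(K)                           # per-segment slopes
b = np.arctan(edges[:-1])                 # per-segment intercepts (warm start)

def pwl(x):
    """Evaluate the piecewise-linear approximation."""
    k = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, K - 1)
    return a[k] * (x - edges[k]) + b[k]

# SGD on squared error against the true nonlinearity (arctan as a stand-in)
lr = 0.2
for _ in range(50_000):
    x = rng.uniform(lo, hi, 32)
    k = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, K - 1)
    err = a[k] * (x - edges[k]) + b[k] - np.arctan(x)
    np.add.at(a, k, -lr * 2 * err * (x - edges[k]) / 32)  # slope gradients
    np.add.at(b, k, -lr * 2 * err / 32)                   # intercept gradients

grid = np.linspace(lo, hi, 2000)
max_err = np.abs(pwl(grid) - np.arctan(grid)).max()
```

On hardware, each segment's slope/intercept pair becomes a small lookup table plus one multiply-add, which is why this replaces costly nonlinear function units.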

20 pages, 1690 KB  
Article
Hybrid Drive Simulation Architecture for Power Distribution Based on the Federated Evolutionary Monte Carlo Algorithm
by Dongli Jia, Xiaoyu Yang, Wanxing Sheng, Keyan Liu, Tingyan Jin, Xiaoming Li and Weijie Dong
Energies 2025, 18(21), 5595; https://doi.org/10.3390/en18215595 - 24 Oct 2025
Abstract
Modern active distribution networks are increasingly characterized by high complexity, uncertainty, and distributed clustering, posing challenges for traditional model-based simulations in capturing nonlinear dynamics and stochastic variations. This study develops a data–model hybrid-driven simulation architecture that integrates a Federated Evolutionary Monte Carlo Optimization (FEMCO) algorithm for distribution network optimization. The model-driven module employs spectral clustering to decompose the network into multiple autonomous subsystems and performs distributed reconstruction through gradient descent. The data-driven module, built upon Long Short-Term Memory (LSTM) networks, learns temporal dependencies between load curves and operational parameters to enhance predictive accuracy. These two modules are fused via a Random Forest ensemble, while FEMCO jointly leverages Monte Carlo global sampling, Federated Learning-based distributed training, and Genetic Algorithm-driven evolutionary optimization. Simulation studies on the IEEE 33 bus distribution system demonstrate that the proposed framework reduces power losses by 25–45% and voltage deviations by 75–85% compared with conventional Genetic Algorithm and Monte Carlo approaches. The results confirm that the proposed hybrid architecture effectively improves convergence stability, optimization precision, and adaptability, providing a scalable solution for the intelligent operation and distributed control of modern power distribution systems. Full article

26 pages, 4340 KB  
Article
Vertical Motion Stabilization of High-Speed Multihulls in Irregular Seas Using ESO-Based Backstepping Control
by Xianjin Fang, Huayang Li, Zhilin Liu, Guosheng Li, Tianze Ni, Fan Jiang and Jie Zhang
J. Mar. Sci. Eng. 2025, 13(11), 2040; https://doi.org/10.3390/jmse13112040 - 24 Oct 2025
Abstract
The severe vertical motion of high-speed multihull vessels significantly impairs their seakeeping performance, making the design of effective anti-motion controllers crucial. However, existing controllers, predominantly designed based on deterministic dynamic models, suffer from limitations such as insufficient robustness, reliance on empirical knowledge, structural complexity, and suboptimal performance, which hinder their practical applicability. To address this, this paper proposes a robust decoupled vertical motion controller based on the step response inversion method and incorporating an Extended State Observer (ESO) uncertainty compensation term. The control algorithm is designed leveraging the equivalent noise bandwidth theory to account for the stochastic characteristics of pitch/heave motion, with ESO compensation introduced to enhance robustness. The stability of the closed loop system is rigorously proven through theoretical analysis. Simulation results demonstrate that the proposed algorithm significantly suppresses the amplitudes of both pitch and heave motions. Full article
(This article belongs to the Special Issue Advanced Control Strategies for Autonomous Maritime Systems)

25 pages, 1868 KB  
Article
AI-Powered Digital Twin Co-Simulation Framework for Climate-Adaptive Renewable Energy Grids
by Kwabena Addo, Musasa Kabeya and Evans Eshiemogie Ojo
Energies 2025, 18(21), 5593; https://doi.org/10.3390/en18215593 - 24 Oct 2025
Abstract
Climate change is accelerating the frequency and intensity of extreme weather events, posing a critical threat to the stability, efficiency, and resilience of modern renewable energy grids. In this study, we propose a modular, AI-integrated digital twin co-simulation framework that enables climate adaptive control of distributed energy resources (DERs) and storage assets in distribution networks. The framework leverages deep reinforcement learning (DDPG) agents trained within a high-fidelity co-simulation environment that couples physical grid dynamics, weather disturbances, and cyber-physical control loops using HELICS middleware. Through real-time coordination of photovoltaic systems, wind turbines, battery storage, and demand side flexibility, the trained agent autonomously learns to minimize power losses, voltage violations, and load shedding under stochastic climate perturbations. Simulation results on the IEEE 33-bus radial test system augmented with ERA5 climate reanalysis data demonstrate improvements in voltage regulation, energy efficiency, and resilience metrics. The framework also exhibits strong generalization across unseen weather scenarios and outperforms baseline rule based controls by reducing energy loss by 14.6% and improving recovery time by 19.5%. These findings position AI-integrated digital twins as a promising paradigm for future-proof, climate-resilient smart grids. Full article
