Search Results (198)

Search Parameters:
Keywords = parametric stochastic modeling

23 pages, 557 KB  
Article
A Multi-Stage Decomposition and Hybrid Statistical Framework for Time Series Forecasting
by Swera Zeb Abbasi, Mahmoud M. Abdelwahab, Imam Hussain, Moiz Qureshi, Moeeba Rind, Paulo Canas Rodrigues, Ijaz Hussain and Mohamed A. Abdelkawy
Axioms 2026, 15(4), 273; https://doi.org/10.3390/axioms15040273 - 9 Apr 2026
Viewed by 218
Abstract
Modeling and forecasting nonstationary and nonlinear economic time series remain fundamentally challenging due to structural breaks, volatility clustering, and noise contamination that distort the intrinsic stochastic structure. To address these limitations, this study proposes a novel three-stage hybrid statistical framework that systematically integrates multi-level signal decomposition with structured parametric modeling to enhance predictive accuracy. The proposed hybrid architectures—EMD–EEMD–ARIMA, EMD–EEMD–GMDH, and EMD–EEMD–ETS—employ a hierarchical decomposition–reconstruction strategy before forecasting. In the first stage, Empirical Mode Decomposition (EMD) decomposes the observed series into intrinsic mode functions (IMFs) and a residual component. In the second stage, Ensemble Empirical Mode Decomposition (EEMD) is applied to further refine the extracted components, mitigating mode mixing and improving signal separability. In the final stage, each reconstructed component is modeled using ARIMA, Exponential Smoothing State Space (ETS), and Group Method of Data Handling (GMDH) frameworks, and the individual forecasts are aggregated to obtain the final prediction. Empirical evaluation based on a recursive one-step-ahead forecasting scheme demonstrates consistent numerical improvements across all standard accuracy measures. In particular, the proposed EMD–EEMD–ARIMA model achieves the lowest forecasting error, reducing the root-mean-square error (RMSE) by approximately 6–7% relative to the best-performing single-stage model and by about 3–4% relative to the two-stage EMD-based hybrids. Similar improvements are observed in mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), indicating enhanced stability and robustness of the three-stage architecture. 
The results provide strong numerical evidence that multi-level decomposition combined with structured statistical modeling yields superior predictive performance for complex nonlinear and nonstationary time series. The proposed framework offers a mathematically coherent, computationally tractable, and systematically structured hybrid modeling strategy that effectively integrates noise-assisted decomposition with parametric and data-driven forecasting techniques.
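The decompose–model–aggregate strategy described above can be illustrated in miniature. A real implementation of the EMD/EEMD stages needs a dedicated package (e.g. PyEMD) and an ARIMA library; the sketch below is only an assumption-laden stand-in that splits a series into a moving-average trend and a residual, fits a least-squares AR(1) to each component, and sums the one-step forecasts, mirroring the paper's aggregation step.

```python
import math

def moving_average(x, w=5):
    # Crude stand-in for the decomposition stage: a centered moving average.
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def ar1_forecast(x):
    # One-step forecast from x[t] = a*x[t-1] + b, fitted by least squares.
    n = len(x) - 1
    xs, ys = x[:-1], x[1:]
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((u - mx) ** 2 for u in xs) or 1.0
    a = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / var
    return a * x[-1] + (my - a * mx)

def hybrid_forecast(series):
    # Model each component separately, then aggregate the forecasts.
    trend = moving_average(series)
    residual = [v - t for v, t in zip(series, trend)]
    return ar1_forecast(trend) + ar1_forecast(residual)

series = [math.sin(0.3 * t) + 0.01 * t for t in range(60)]
f = hybrid_forecast(series)
print(round(f, 3))
```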

22 pages, 5539 KB  
Article
Artificial Neural Network-Based PID Parameter Estimation Using Black Kite Algorithm Hyperparameter Optimization for DC Motor Speed Control
by Yılmaz Seryar Arıkuşu
Biomimetics 2026, 11(4), 242; https://doi.org/10.3390/biomimetics11040242 - 3 Apr 2026
Viewed by 272
Abstract
This paper proposes a Black Kite Algorithm (BKA)-based hyperparameter optimization method for Artificial Neural Network (ANN) training, mitigating local minimum issues associated with conventional training techniques. The resulting BKA-ANN model is then employed to estimate PID controller parameters for DC motor speed regulation. A large-scale dataset of 100,000 samples was generated via MATLAB simulation, with reference speed and load torque stochastically varied, and optimal PID parameters determined by minimizing the ITAE criterion for each operating condition. The optimized controller was evaluated under various operating conditions including transient response, frequency domain analysis (phase margin and bandwidth), parametric robustness, and load disturbance suppression, along with control effort and energy consumption assessments. The proposed BKA-ANN approach was benchmarked against nine algorithms: hybrid atom search optimization-simulated annealing (hASO-SA), Harris hawks optimization (HHO), Henry gas solubility optimization with opposition-based learning (OBL/HGSO), atom search optimization (ASO), Henry gas solubility optimization (HGSO), stochastic fractal search (SFS), grey wolf optimization (GWO), sine–cosine algorithm (SCA), and standard ANN. Simulation results indicate that BKA-ANN achieves stable performance across all tested scenarios, with minimal oscillation and competitive settling time compared to the evaluated algorithms.
(This article belongs to the Section Biological Optimisation and Management)
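The ITAE criterion used above for PID parameter selection is the time-weighted integral of absolute error. A minimal sketch, assuming a hypothetical first-order DC-motor speed model and plain proportional control (not the paper's BKA-ANN pipeline), shows how the index rewards fast, well-damped responses:

```python
# ITAE = integral of t * |e(t)| dt, discretized with step dt.
def itae(errors, dt):
    return sum((k * dt) * abs(e) * dt for k, e in enumerate(errors))

def simulate_p_control(kp, K=1.0, tau=0.5, setpoint=1.0, dt=0.01, T=5.0):
    # Hypothetical first-order motor speed model: tau * dy/dt = K*u - y,
    # integrated with forward Euler under proportional control u = kp*e.
    y, errs = 0.0, []
    for _ in range(int(T / dt)):
        e = setpoint - y
        errs.append(e)
        y += dt * (K * kp * e - y) / tau
    return errs

i_low_gain = itae(simulate_p_control(1.0), 0.01)
i_high_gain = itae(simulate_p_control(10.0), 0.01)
print(i_high_gain < i_low_gain)  # the faster-settling controller scores lower
```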

28 pages, 6829 KB  
Article
Numerical Simulation of Particle Deposition on Superhydrophobic Surfaces with Randomly Distributed Roughness—A Coupled LBM-IMBM-DEM Method
by Wenjun Zhao and Hao Lu
Coatings 2026, 16(3), 377; https://doi.org/10.3390/coatings16030377 - 17 Mar 2026
Viewed by 460
Abstract
Dust pollution has emerged as a critical issue in a wide range of industrial applications, creating an urgent demand for effective strategies to mitigate particle deposition. Recent experimental studies have demonstrated that superhydrophobic coatings represent a promising class of self-cleaning materials, primarily attributed to their hierarchical rough structures and intrinsically low surface energy. Nevertheless, the underlying self-cleaning mechanisms of superhydrophobic surfaces have not yet been fully elucidated. This work examines particle deposition on superhydrophobic surfaces featuring stochastic roughness distributions through computational modeling. Surface topographies were generated using Fast Fourier Transform techniques. An integrated lattice Boltzmann–discrete element method (LBM–DEM) framework simulated particle transport in superhydrophobic-coated channels. Particle–fluid coupling was achieved via the immersed moving boundary approach, while particle–surface interactions employed a modified Johnson–Kendall–Roberts (JKR) adhesion model. Parametric studies quantified effects of particle size, interfacial energy, flow Reynolds number, and topographical statistics on deposition dynamics. Experimental validation demonstrates good agreement between numerical predictions and measurements. Smaller particles exhibit a lower tendency to deposit on superhydrophobic surfaces, whereas increasing surface energy significantly enhances particle deposition due to stronger adhesion forces and the suppression of particle resuspension. In addition, higher Reynolds numbers effectively reduce particle deposition. The revealed self-cleaning mechanisms provide theoretical guidance for the design of high-performance self-cleaning coatings, and the identified effects of particle and surface parameters offer practical insights for anti-pollution engineering applications. Full article

25 pages, 4085 KB  
Article
Load Frequency Control in Multi-Area Power Systems Using Incremental Proportional–Integral–Derivative and Model-Free Adaptive Control
by Md Asif Shaharear, Chengyu Zhou, Shahin Shaikh and Md Mehedy Hasan Faruk
Appl. Syst. Innov. 2026, 9(3), 59; https://doi.org/10.3390/asi9030059 - 16 Mar 2026
Viewed by 666
Abstract
Maintaining frequency stability in modern multi-area interconnected power systems has become increasingly challenging due to the stochastic nature of wind power and reduced effective system inertia. Under these dynamic conditions, traditional fixed-gain PID controllers frequently fail to provide robust regulation. To address this limitation, this study proposes and evaluates a practical model-free secondary control strategy for multi-area Load Frequency Control (LFC). The proposed hybrid MFAC–PID framework integrates an incremental model-free adaptive control (MFAC) law with a low-gain incremental PID damping term. This combination leverages real-time input–output data to determine primary control actions without relying on an explicit plant model, while the PID component supplies supplementary damping based on recent control errors. Furthermore, the controller utilizes online pseudo-gradient estimation to dynamically adapt to stochastic wind fluctuations and ±5% parametric uncertainty. Simulation results demonstrate that the hybrid design substantially enhances Area Control Error (ACE) regulation. Under wind-disturbed conditions, it reduces the aggregated Integral Absolute Error (IAEtotal) from 92.76 to 41.10, representing an improvement of over 50% compared with the fixed-gain PID baseline. Additionally, the controller maintains a low computational overhead of 0.306 milliseconds per control cycle. These findings indicate that the hybrid MFAC–PID structure provides a robust, computationally efficient solution for real-time Automatic Generation Control (AGC) in renewable-integrated multi-area power grids. Full article
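The control structure described above — a compact-form MFAC law with online pseudo-gradient estimation plus a low-gain incremental PID damping term — can be sketched on a toy plant. The plant, gains, and setpoint below are hypothetical stand-ins, not the paper's two-area LFC model; the point is the structure: the controller uses only input–output increments, never the plant equations.

```python
def run(steps=400, r=1.0):
    # Hypothetical unknown plant, used only to generate I/O data:
    # y(k+1) = 0.6*y(k) + 0.4*u(k). The controller never sees this equation.
    y_prev = y = u_prev = u = 0.0
    phi = 1.0                      # pseudo-gradient estimate
    eta, mu = 0.5, 1.0             # estimator step size / regularizer
    rho, lam = 0.8, 1.0            # MFAC gain / regularizer
    kp, ki, kd = 0.05, 0.02, 0.01  # low-gain incremental PID damping
    e1 = e2 = 0.0
    errs = []
    for _ in range(steps):
        # Online pseudo-gradient estimation from the latest I/O increments.
        du = u - u_prev
        phi += eta * du / (mu + du * du) * ((y - y_prev) - phi * du)
        e = r - y
        # Incremental MFAC action plus incremental PID damping term.
        du_new = (rho * phi / (lam + phi * phi) * e
                  + kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2))
        u_prev, u = u, u + du_new
        y_prev, y = y, 0.6 * y + 0.4 * u  # plant responds
        e2, e1 = e1, e
        errs.append(abs(e))
    return errs

errs = run()
print(round(errs[-1], 4))  # tracking error after adaptation
```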

21 pages, 6110 KB  
Article
Stochastic Dynamic Analysis and Vibration Suppression of FG-GPLRC Cylinder–Plate Combined Structures with Distributed Dynamic Vibration Absorbers
by Qingtao Gong, Ai Zhang, Yao Teng and Yuan Wang
Materials 2026, 19(6), 1082; https://doi.org/10.3390/ma19061082 - 11 Mar 2026
Viewed by 323
Abstract
Cylinder–plate combined structures (CPCS) are widely used in aerospace, marine engineering, and offshore platform systems. During service, they are frequently subjected to stochastic excitations induced by turbulent boundary layers, acoustic loads, hydrodynamic disturbances, and broadband operational vibrations. Excessive random vibration responses may significantly reduce structural reliability, accelerate fatigue damage, and compromise operational safety. To address these engineering challenges, a unified stochastic dynamic analysis and vibration suppression framework is established for functionally graded graphene platelet-reinforced composites (FG-GPLRC) CPCS equipped with distributed dynamic vibration absorbers (DVAs). Adopting the First-order Shear Deformation Theory (FSDT), a comprehensive energy functional for the CPCS is established, in which the penalty method is implemented to impose boundary conditions and ensure interface continuity. Subsequently, the Pseudo-excitation Method (PEM) is utilized to convert the stochastic vibration analysis into an equivalent deterministic harmonic problem, and the governing equations are spatially discretized by combining the spectral geometric method (SGM) with the Ritz variational procedure, enabling efficient evaluation of power spectral density (PSD) and root-mean-square (RMS) responses. The reliability of the proposed model is verified through a series of numerical validation comparisons. On this basis, comprehensive parametric investigations are conducted to assess how material properties, structural geometries, and critical DVA parameters influence system behavior. The results demonstrate that the incorporation of distributed DVAs can achieve superior vibration suppression performance. 
This study provides an efficient and reliable theoretical framework for stochastic vibration analysis and damping design of advanced composite plate–shell coupled structures operating in complex random environments, offering important theoretical support for dynamic optimization design in aerospace and marine engineering applications.
(This article belongs to the Special Issue Research on Vibration of Composite Structures)

37 pages, 6274 KB  
Article
Analysis and Prediction Evaluation of Provincial Carbon Emissions Under Multi-Model Fusion
by Ketong Liu, Hao Ren, Siyao Lu, Xuecheng Shang, Zheng Liu and Baofu Yu
Sustainability 2026, 18(5), 2545; https://doi.org/10.3390/su18052545 - 5 Mar 2026
Cited by 1 | Viewed by 314
Abstract
Against the backdrop of sustainable development and global climate governance, this study focuses on the evaluation and trend prediction of provincial carbon emission efficiency and constructs a multi-model integrated analytical framework featuring “data preprocessing—efficiency decomposition—dynamic forecasting—policy deduction”. First, economic, energy consumption and carbon emission data for 30 provinces in China from 2009 to 2019 are collected. Data cleaning is performed through outlier identification and Lagrange interpolation, and a cross-regionally comparable quantification system is established based on a unified carbon emission standard, laying a foundation for subsequent analysis. Second, data envelopment analysis (DEA) is adopted to decompose carbon emission efficiency. It is found that approximately 23% of provinces lie on the technical efficiency frontier, with the average variance share of technical inefficiency being 0.62; 6% of provinces have the potential for scale expansion; and 10% suffer from diseconomies of scale, reflecting significant structural efficiency losses in regions concentrated with high-carbon industries. Third, the long short-term memory (LSTM) neural network is employed for dynamic forecasting and scenario simulation of carbon emissions by 2025. The model’s prediction error in 2019 is controlled within 8.7%. Simulation results show that when the share of clean energy rises to 35%, China’s national carbon emission growth rate can be reduced to 1.2% by 2025. However, multi-scenario sensitivity analysis indicates that the achievement of this target highly depends on policy enforcement intensity and power grid accommodation capacity. In addition, stochastic frontier analysis (SFA) reveals the heterogeneous contributions of different energy types to economic and social outputs. 
The consumption elasticities of electricity, liquefied petroleum gas and gasoline are significantly positive, whereas the negative elasticities of oil, fuel oil and coal reflect the low energy utilization efficiency and rigid lock-in of high-carbon industries in some regions. Finally, combined with efficiency evaluation, trend prediction and mechanism analysis, differentiated emission reduction strategies are proposed for technologically backward provinces, scale-imbalanced provinces and clean energy base provinces, forming a complete closed loop from “efficiency diagnosis” to “future deduction” and then to “policy feedback”. This study breaks through the limitations of a single model. Through the coupling of parametric and non-parametric methods, as well as the integration of dynamic forecasting and scenario simulation, it effectively addresses issues such as data heterogeneity. It provides scientific support for local governments to formulate emission reduction policies and optimize energy structures, establishes a methodological foundation for industrial efficiency analysis and international carbon responsibility allocation research, and helps to promote regional clean, low-carbon, and sustainable development.

14 pages, 2881 KB  
Article
Analysis of Noise-Induced Deformations of Population Dynamics with an Allee Effect and Immigration
by Lev Ryashko and Irina Bashkirtseva
Mathematics 2026, 14(4), 655; https://doi.org/10.3390/math14040655 - 12 Feb 2026
Viewed by 329
Abstract
The problem of analyzing the mechanisms of variability in population dynamics caused by the combined influence of the Allee effect, immigration and random fluctuations is addressed. In this study, we explore such a multi-factorial problem based on a Ricker-type population model. For the deterministic version of the model, the transformations of system dynamic regimes caused by changes in parameters of growth rate and intensity of immigration are determined using bifurcation analysis. For the randomly forced population model, the phenomena of stochastic excitement and noise-induced temporal extinction are revealed and investigated. The parametric study of these effects uses statistical data obtained from direct numerical modeling as well as an analytical approach based on the stochastic sensitivity technique and the confidence interval method.
(This article belongs to the Section E3: Mathematical Biology)
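A toy version of such a study can be run in a few lines. The specific map below, x(t+1) = x·exp(r(1−x)(x−A)) + m with Allee threshold A and immigration m, plus noise on the growth exponent, is an illustrative assumption and not necessarily the authors' model; it exhibits the coexistence of a high equilibrium and an immigration-sustained low state that makes noise-induced transitions possible.

```python
import math
import random

def ricker_allee(x, r=2.0, A=0.2, m=0.005, sigma=0.0, rng=None):
    # Growth exponent r(1 - x)(x - A) is negative below the Allee
    # threshold A; immigration m keeps the population strictly positive.
    noise = sigma * rng.gauss(0.0, 1.0) if rng else 0.0
    return x * math.exp(r * (1.0 - x) * (x - A) + noise) + m

rng = random.Random(42)
det, sto = [0.5], [0.5]
for _ in range(300):
    det.append(ricker_allee(det[-1]))                    # deterministic
    sto.append(ricker_allee(sto[-1], sigma=0.25, rng=rng))  # randomly forced

# Compare the deterministic equilibrium with the noisy trajectory's range.
print(round(det[-1], 3), round(min(sto), 3))
```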

35 pages, 942 KB  
Article
Parametric Resonance, Arithmetic Geometry, and Adelic Topology of Microtubules: A Bridge to Orch OR Theory
by Michel Planat
Int. J. Topol. 2026, 3(1), 1; https://doi.org/10.3390/ijt3010001 - 7 Jan 2026
Cited by 2 | Viewed by 1362
Abstract
Microtubules are cylindrical protein polymers that organize the cytoskeleton and play essential roles in intracellular transport, cell division, and possibly cognition. Their highly ordered, quasi-crystalline lattice of tubulin dimers, notably tryptophan residues, endows them with a rich topological and arithmetic structure, making them natural candidates for supporting coherent excitations at optical and terahertz frequencies. The Penrose–Hameroff Orch OR theory proposes that such coherences could couple to gravitationally induced state reduction, forming the quantum substrate of conscious events. Although controversial, recent analyses of dipolar coupling, stochastic resonance, and structured noise in biological media suggest that microtubular assemblies may indeed host transient quantum correlations that persist over biologically relevant timescales. In this work, we build upon two complementary approaches: the parametric resonance model of Nishiyama et al. and our arithmetic–geometric framework, both recently developed in Quantum Reports. We unify these perspectives by describing microtubules as rectangular lattices governed by the imaginary quadratic field Q(i), within which nonlinear dipolar oscillations undergo stochastic parametric amplification. Quantization of the resonant modes follows Gaussian norms N = p² + q², linking the optical and geometric properties of microtubules to the arithmetic structure of Q(i). We further connect these discrete resonances to the derivative of the elliptic L-function, L′(E,1), which acts as an arithmetic free energy and defines the scaling between modular invariants and measurable biological ratios. In the appended adelic extension, this framework is shown to merge naturally with the Bost–Connes and Connes–Marcolli systems, where the norm character on the ideles couples to the Hecke character of an elliptic curve to form a unified adelic partition function.
The resulting arithmetic–elliptic resonance model provides a coherent bridge between number theory, topological quantum phases, and biological structure, suggesting that consciousness, as envisioned in the Orch OR theory, may emerge from resonant processes organized by deep arithmetic symmetries of space, time, and matter.
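The quantization rule quoted in this abstract restricts resonant modes to integers that are norms of Gaussian integers. A quick enumeration, purely illustrative of that arithmetic constraint, shows which small mode numbers are admissible:

```python
# Norms N = p^2 + q^2 of nonzero Gaussian integers p + qi, for small p, q.
norms = sorted({p * p + q * q for p in range(8) for q in range(8) if p or q})
admissible = [n for n in norms if n <= 30]
print(admissible)  # [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29]
```

Integers like 3, 7, or 30 never appear: a number is a Gaussian norm exactly when every prime factor congruent to 3 mod 4 occurs to an even power.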

29 pages, 26089 KB  
Article
A Machine Learning Vibration-Based Methodology for Robust Detection and Severity Characterization of Gear Incipient Faults Under Variable Working Speed and Load
by Dimitrios M. Bourdalos and John S. Sakellariou
Machines 2026, 14(1), 9; https://doi.org/10.3390/machines14010009 - 19 Dec 2025
Viewed by 805
Abstract
A machine learning (ML) methodology for the robust detection and severity characterization of incipient gear faults under variable speed and load is postulated. The methodology is trained using vibration signals from a single accelerometer mounted on the gearbox, simultaneously acquired with tachometer signals at a sample of working conditions (WCs) from the range of interest. A special parametric identification procedure of gearbox dynamics that may account for the continuous range of WCs is introduced through ‘clouds’ of advanced stochastic data-driven Functionally Pooled models, estimated from angularly resampled vibration signals. Each cloud represents the gearbox dynamics at a specific fault severity level, while the pseudo-static effects of the WCs on the dynamics are accounted for through data pooling. Fault detection and severity characterization are achieved by testing the consistency of a vibration signal with each model cloud within a hypothesis testing framework in which the unknown load is also estimated. The methodology is assessed through 18,300 experiments on a single-stage spur gearbox including four incipient single-tooth pinion faults, 61 speeds, and four load levels. The faults produce no significant changes in the time-domain signals, while their frequency-domain effects overlap with the variations caused by the WCs, rendering the diagnosis problem highly challenging. The comparison with a state-of-the-art deep Stacked Autoencoder (SAE) demonstrates the ML method’s superior performance, achieving 95.4% and 91.6% accuracy in fault detection and characterization, respectively. Full article

27 pages, 391 KB  
Article
Analysis of λ-Hölder Stability of Economic Equilibria and Dynamical Systems with Nonsmooth Structures
by Anna V. Aleshina, Andrey L. Bulgakov, Yanliang Xin and Igor Y. Panarin
Mathematics 2025, 13(24), 3993; https://doi.org/10.3390/math13243993 - 15 Dec 2025
Viewed by 608
Abstract
This paper develops a mathematical approach to the analysis of the stability of economic equilibria in nonsmooth models. The λ-Hölder apparatus of subdifferentials is used, which extends the class of systems under study beyond traditional smooth optimization and linear approximations. Stability conditions are obtained for solutions to intertemporal choice problems and capital accumulation models in the presence of nonsmooth dependencies, threshold effects, and discontinuities in elasticities. For λ-Hölder production and utility functions, estimates of the sensitivity of equilibria to parameters are obtained, and indicators of the convergence rate of trajectories to the stationary state are derived for λ>1. The methodology is tested on a multisectoral model of economic growth with technological shocks and stochastic disturbances in capital dynamics. Numerical experiments confirm the theoretical results: a power-law dependence of equilibrium sensitivity on the magnitude of parametric disturbances is revealed, as well as consistency between the analytical λ-Hölder convergence rate and the results of numerical integration. Stochastic disturbances of small variance do not violate stability. The results obtained provide a rigorous mathematical foundation for the analysis of complex economic systems with nonsmooth structures, which are increasingly used in macroeconomics, decision theory, and regulation models. Full article
(This article belongs to the Section E5: Financial Mathematics)
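The λ-Hölder condition underlying this paper, |f(x) − f(y)| ≤ C|x − y|^λ, can be probed numerically by regressing log-increments on log-distances. The functions below are generic illustrations, not the paper's production or utility functions: √x has local exponent λ = 1/2 at the origin, while a smooth function has λ = 1.

```python
import math

def local_holder_exponent(f, x0, scales):
    # Estimate lambda from |f(x0+h) - f(x0)| ~ C * h**lambda via a
    # least-squares slope of log-increments against log h.
    xs = [math.log(h) for h in scales]
    ys = [math.log(abs(f(x0 + h) - f(x0))) for h in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

scales = [10.0 ** -k for k in range(1, 8)]
lam_sqrt = local_holder_exponent(math.sqrt, 0.0, scales)          # ~0.5
lam_smooth = local_holder_exponent(lambda x: x * x, 1.0, scales)  # ~1.0
print(round(lam_sqrt, 2), round(lam_smooth, 2))
```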
22 pages, 1405 KB  
Article
Entropy-Based Evidence Functions for Testing Dilation Order via Cumulative Entropies
by Mashael A. Alshehri
Entropy 2025, 27(12), 1235; https://doi.org/10.3390/e27121235 - 5 Dec 2025
Viewed by 369
Abstract
This paper introduces novel non-parametric entropy-based evidence functions and associated test statistics for assessing the dilation order of probability distributions constructed from cumulative residual entropy and cumulative entropy. The proposed evidence functions are explicitly tuned to questions about distributional variability and stochastic ordering, rather than global model fit, and are developed within a rigorous evidential framework. Their asymptotic distributions are established, providing a solid foundation for large-sample inference. Beyond their theoretical appeal, these procedures act as effective entropy-driven tools for quantifying statistical evidence, offering a compelling non-parametric alternative to traditional approaches, such as Kullback–Leibler discrepancies. Comprehensive Monte Carlo simulations highlight their robustness and consistently high power across a wide range of distributional scenarios, including heavy-tailed models, where conventional methods often perform poorly. A real-data example further illustrates their practical utility, showing how cumulative entropies can provide sharper statistical evidence and clarify stochastic comparisons in applied settings. Altogether, these results advance the theoretical foundation of evidential statistics and open avenues for applying cumulative entropies to broader classes of stochastic inference problems. Full article
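Of the two measures named above, cumulative residual entropy has a simple plug-in estimator: with order statistics x(1) ≤ … ≤ x(n) and empirical survival function S, CRE = −∫ S(x) ln S(x) dx becomes a sum over spacings. The sketch below is an illustrative estimator, not the paper's test statistic; it recovers the known value CRE = 1 for the unit exponential distribution.

```python
import math
import random

def cumulative_residual_entropy(sample):
    # Plug-in estimator: accumulate -S*ln(S) times each spacing, where
    # S = (n - i)/n is the empirical survival function on [x[i-1], x[i]).
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(1, n):
        s = (n - i) / n
        total -= (xs[i] - xs[i - 1]) * s * math.log(s)
    return total

rng = random.Random(0)
sample = [rng.expovariate(1.0) for _ in range(20000)]
val = cumulative_residual_entropy(sample)
print(round(val, 2))  # close to the theoretical value 1 for Exp(1)
```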

21 pages, 2740 KB  
Article
Charting the Landscape of Data Envelopment Analysis in Renewable Energy and Carbon Emission Efficiency
by Thu-Thao Le and Wen-Min Lu
Energies 2025, 18(23), 6147; https://doi.org/10.3390/en18236147 - 24 Nov 2025
Cited by 1 | Viewed by 922
Abstract
This study explores the intellectual landscape and methodological evolution of Data Envelopment Analysis (DEA) in the context of renewable energy and carbon emission efficiency. Using bibliometric techniques and data extracted from the Web of Science Core Collection (2389 publications from 2000 to 2024), the research identifies influential authors, institutions, and thematic clusters shaping the field. The results reveal that DEA has evolved from a traditional efficiency assessment tool into a comprehensive analytical framework supporting sustainable energy transition and carbon mitigation policies. Six major research clusters were identified, encompassing carbon emission measurement, efficiency benchmarking, methodological innovations, industrial applications, circular economy perspectives, and international productivity comparisons. Notably, Asian scholars, particularly from China and Taiwan, dominate the research landscape, reflecting strong regional leadership in empirical and methodological advancements. The findings demonstrate that recent studies increasingly adopt advanced models such as network DEA, dynamic DEA, DEA–Malmquist, and hybrid DEA–machine learning approaches to address complex energy systems. Comparative insights highlight DEA’s advantages over Stochastic Frontier Analysis (SFA) in handling multi-dimensional, non-parametric data, while emphasizing the need for hybrid frameworks to improve robustness. This study contributes to the ongoing discourse on energy sustainability by mapping knowledge structures, revealing methodological trajectories, and providing guidance for future research on efficiency and carbon reduction strategies. Full article
(This article belongs to the Special Issue Challenges and Opportunities in the Global Clean Energy Transition)

35 pages, 2126 KB  
Review
Techniques and Developments in Stochastic Streamflow Synthesis—A Comprehensive Review
by Shirin Studnicka and Umed S. Panu
Encyclopedia 2025, 5(4), 198; https://doi.org/10.3390/encyclopedia5040198 - 21 Nov 2025
Cited by 1 | Viewed by 1004
Abstract
Stochastic streamflow synthesis has long been the cornerstone of water resource planning, enabling the generation of extended hydrological sequences that reflect natural variability beyond the limitations of observed records. This paper presents a comprehensive review of the theoretical foundations, methodological advancements, and evolving trends in synthetic streamflow generation. Historical progression is explored through three distinct eras: the pre-modern formulation era (pre-1960), the era dominated by autoregressive models (1960–2000), and the recent period marked by the rise of data-driven AI/ML approaches. Various modelling paradigms, parametric versus non-parametric, traditional versus AI-based, and single- versus multi-scale approaches, are critically assessed and compared with a focus on their applicability across temporal resolutions and hydrological regimes. This study also categorizes evaluation criteria into four dimensions: preservation of stochastic characteristics, distributional consistency, error-based metrics, and operational performance. In addition, the use and impact of transformation techniques (e.g., log or Box-Cox) employed to normalize streamflow distributions for improved model fidelity are examined. A bibliometric analysis of over 200 studies highlights the global research footprint, showing that the United States leads with 70 studies, followed by Canada with 15, reflecting the growing international engagement in the field. The analysis also identifies the most active journals publishing streamflow synthesis research: Water Resources Research (50 publications, since 1967), Journal of Hydrology (25 publications, since 1963), and Journal of the American Water Resources Association (9 publications, since 1974). This review not only synthesizes past and current practices but also outlines key challenges and future research directions to advance stochastic hydrology in an era of climatic uncertainty and data complexity. Full article
(This article belongs to the Section Earth Sciences)
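The parametric, transformation-based generation this review surveys can be illustrated with a minimal lag-1 autoregressive sketch in log space. This is a generic illustration, not the review's own method; the "observed" record is simulated and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed annual flows (arbitrary units); in practice these
# would come from a gauged streamflow record.
obs = rng.lognormal(mean=3.0, sigma=0.4, size=60)

# Log-transform to approximate normality (one of the transformations the
# review discusses alongside Box-Cox).
z = np.log(obs)
mu, sigma = z.mean(), z.std(ddof=1)

# Lag-1 autocorrelation of the transformed record.
rho = np.corrcoef(z[:-1], z[1:])[0, 1]

def synthesize(n, mu, sigma, rho, rng):
    """Generate n synthetic flows from a lag-1 autoregressive model in
    log space, then back-transform to the flow domain."""
    z_sim = np.empty(n)
    z_sim[0] = rng.normal(mu, sigma)
    for t in range(1, n):
        # Innovation variance chosen so the marginal std stays sigma.
        innov = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
        z_sim[t] = mu + rho * (z_sim[t - 1] - mu) + innov
    return np.exp(z_sim)

sim = synthesize(5000, mu, sigma, rho, rng)
```

The back-transformed series is strictly positive and preserves the mean, standard deviation, and lag-1 correlation of the log-transformed record, which is the kind of "preservation of stochastic characteristics" criterion the review categorizes.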
10 pages, 2230 KB  
Proceeding Paper
Bayesian Functional Data Analysis in Astronomy
by Thomas Loredo, Tamás Budavári, David Kent and David Ruppert
Phys. Sci. Forum 2025, 12(1), 12; https://doi.org/10.3390/psf2025012012 - 4 Nov 2025
Viewed by 803
Abstract
Cosmic demographics—the statistical study of populations of astrophysical objects—has long relied on tools from multivariate statistics for analyzing data comprising fixed-length vectors of object properties, as might be compiled in a tabular astronomical catalog (say, with sky coordinates and brightness measurements in a fixed number of spectral passbands). But beginning with the emergence of automated digital sky surveys, ca. 2000, astronomers began producing large collections of data with more complex structures: light curves (brightness time series) and spectra (brightness vs. wavelength). These comprise what statisticians call functional data—measurements of populations of functions. Upcoming automated sky surveys will soon provide astronomers with a flood of functional data. New methods are needed to accurately and optimally analyze large ensembles of light curves and spectra, accumulating information both along individual measured functions and across a population of such functions. Functional data analysis (FDA) provides tools for statistical modeling of functional data. Astronomical data present several challenges for FDA methodology, e.g., sparse, irregular, and asynchronous sampling, and heteroscedastic measurement error. Bayesian FDA uses hierarchical Bayesian models for function populations and is well suited to addressing these challenges. We provide an overview of astronomical functional data and some key Bayesian FDA modeling approaches, including functional mixed effects models and stochastic process models. We briefly describe a Bayesian FDA framework combining FDA and machine learning methods to build low-dimensional parametric models for galaxy spectra.
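The hierarchical shrinkage idea at the heart of Bayesian FDA can be sketched with a minimal normal-normal partial-pooling example: each object's sparse, heteroscedastic measurements are pooled toward a population mean, with noisier objects shrunk more. Everything below is simulated; the population parameters, sample sizes, and noise levels are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of objects: each has a true mean magnitude drawn from a
# population distribution.
pop_mean, pop_sd = 15.0, 0.3
n_obj = 200
true_means = rng.normal(pop_mean, pop_sd, n_obj)

obs_means, obs_vars = [], []
for m in true_means:
    n_i = rng.integers(2, 8)              # sparse, uneven sampling
    sig = rng.uniform(0.2, 0.8, n_i)      # heteroscedastic errors
    y = rng.normal(m, sig)
    w = 1.0 / sig**2
    obs_means.append(np.sum(w * y) / np.sum(w))  # per-object weighted mean
    obs_vars.append(1.0 / np.sum(w))             # its sampling variance
obs_means = np.array(obs_means)
obs_vars = np.array(obs_vars)

# Normal-normal posterior means: shrink each object's estimate toward the
# population mean in proportion to its noise. Hyperparameters are plugged
# in here for brevity (empirical Bayes); a fully Bayesian treatment would
# marginalize over them.
shrink = obs_vars / (obs_vars + pop_sd**2)
post_means = shrink * pop_mean + (1 - shrink) * obs_means
```

Across the population, the shrunken estimates have lower aggregate error than the raw per-object averages, which is the payoff of accumulating information "across a population of such functions."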
53 pages, 4192 KB  
Article
A Methodology for Assessing Digital Readiness of Industrial Enterprises for Ecosystem Adaptation: Evidence from Kazakhstan’s Sustainable Industrial Transformation
by Larissa Tashenova, Dinara Mamrayeva and Barno Kulzhambekova
Sustainability 2025, 17(21), 9763; https://doi.org/10.3390/su17219763 - 1 Nov 2025
Cited by 1 | Viewed by 1580
Abstract
This article examines the effectiveness of digital transformation in Kazakhstan's industry, focusing on how well enterprises convert digital resources into economically measurable results during the transition to a model of sustainable industrial growth. The aim of the study is to develop a comprehensive methodology for assessing the digital readiness of industrial enterprises to implement and adapt digital ecosystems, based on a synthesis of conceptual and empirical approaches. The methodology developed by the authors combines a parametric diagnostic system with stochastic frontier analysis (SFA) tools, which allows for a quantitative assessment not only of the scale but also of the effectiveness of digital transformations at the regional level. The empirical part of the study draws on statistical data for 2023 reflecting the adoption of ICT, cloud technologies, big data analytics, and related technologies in the industrial sector. The results show steady progress in digitalization alongside pronounced spatial asymmetry. The application of SFA made it possible to identify technological “frontiers” and reveal hidden potential for increasing the effectiveness of digital investments at the regional level. The practical value of the study lies in its applicability for assessing the digital readiness of industrial enterprises for ecosystem adaptation, diagnosing regional digital disparities, and justifying targeted government policy measures aimed at strengthening the digital maturity and sustainability of the industrial sector.
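The SFA component can be sketched with the standard normal/half-normal stochastic frontier likelihood: output equals a frontier plus symmetric noise minus a one-sided inefficiency term, and the gap to the fitted frontier measures unrealized potential. The data, variable names, and parameter values below are invented for illustration and do not come from the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated "digital input -> output" data with a known frontier:
#   y = b0 + b1*x + v - u,  v ~ N(0, sv^2) noise, u ~ |N(0, su^2)| inefficiency.
n = 2000
x = rng.uniform(0.0, 2.0, n)
v = rng.normal(0.0, 0.2, n)
u = np.abs(rng.normal(0.0, 0.4, n))   # one-sided inefficiency term
y = 1.0 + 0.8 * x + v - u

def neg_loglik(theta):
    """Negative log-likelihood of the normal/half-normal frontier model."""
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)   # log-parameterized for positivity
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - b0 - b1 * x                     # composed residual v - u
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0, np.log(0.3), np.log(0.3)],
               method="Nelder-Mead", options={"maxiter": 20000})
b0_hat, b1_hat = res.x[0], res.x[1]
```

With enough observations the maximum-likelihood fit recovers the frontier slope, and per-unit inefficiency can then be predicted from the composed residuals, which is how SFA separates a technological "frontier" from hidden potential.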