Search Results (12,119)

Search Parameters:
Keywords = stochastics

21 pages, 66333 KB  
Review
Diffusion Models: Unlocking the “4 Secrets” of High-Quality Image Generation
by Tao Zhou, Zhe Zhang, Mingzhe Zhang, Wenwen Chai, Yong Xia and Fuyuan Hu
Electronics 2026, 15(8), 1755; https://doi.org/10.3390/electronics15081755 (registering DOI) - 21 Apr 2026
Abstract
The diffusion model (DM) is a hot topic in deep generative models and is widely applied in image generation. In diffusion models, there are four main “secrets” that affect high-quality image generation: constructing the diffusion model, improving the sampling velocity, designing the diffusion process, and guiding diffusion models. How should one construct the diffusion model? How can one improve the sampling velocity? How should one design the diffusion process? How should one guide diffusion models? These questions are critical to enhancing diffusion model performance. However, most existing review papers focus on applications, while discussion of the four key technical aspects remains limited. In response, this paper summarizes four key technologies and six representative application directions. First, the basic principles of diffusion models are reviewed from three perspectives: denoising diffusion probabilistic models, noise conditional score network models, and stochastic differential equation models. Second, key techniques for improving sampling velocity are summarized from three perspectives: non-Markovian sampling, knowledge distillation sampling, and discrete optimization sampling. Third, the diffusion process design is summarized from three perspectives: latent space, Transformer-based diffusion, and non-Euclidean space. Fourth, guidance strategies are summarized from three perspectives: classifier guidance, classifier-free guidance, and multimodal guidance. Fifth, the advantages and applications of diffusion models are discussed in high-quality text-to-image generation, high-quality text-to-video generation, and high-quality image-to-image generation. Finally, this paper discusses the challenges faced by diffusion models in image generation. Overall, this review systematically discusses the four “secrets” of diffusion models for image generation and provides a useful reference for future research in this field. Full article
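One of the guidance strategies this review names, classifier-free guidance, can be sketched in a few lines. This is the standard textbook formulation, not code from the paper; the function and argument names are illustrative.

```python
# Classifier-free guidance: blend the unconditional and conditional noise
# predictions; w = 0 gives the unconditional estimate, w = 1 the conditional
# one, and w > 1 extrapolates past it to amplify the conditioning signal.
# Illustrative sketch only; names and values are not from the paper.
def cfg(eps_uncond, eps_cond, w):
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]
```

For example, with w = 2 a per-component pair of predictions (0, 1) is pushed to 2, overshooting the conditional estimate.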

45 pages, 3902 KB  
Article
Machine Learning-Based Power Quality Prediction in a Microgrid for Community Energy Systems
by Ibrahim Jahan, Khoa Nguyen Dang Dinh, Vojtech Blazek, Vaclav Snasel, Stanislav Misak, Ivo Pergl, Faisal Mohamed and Abdesselam Mechali
Energies 2026, 19(8), 1998; https://doi.org/10.3390/en19081998 (registering DOI) - 21 Apr 2026
Abstract
To mitigate environmental impact, specifically the CO2 emissions associated with conventional thermal and nuclear facilities, renewable energy sources are increasingly being adopted as primary alternatives. However, integrating these renewable sources into the utility grid poses a significant challenge, primarily due to the stochastic and nonlinear nature of weather. Consequently, it is imperative that power systems operate under an intelligent control model to ensure energy output meets strict power quality standards. In this context, accurate forecasting is a cornerstone of smart power management, particularly in off-grid architectures, where predicting Power Quality Parameters (PQPs) is fundamental for system optimization and error correction. This study conducts a comprehensive comparative evaluation of nine different predictive architectures for estimating PQPs. The algorithms analyzed include LSTM, GRU, DNN, CNN1D-LSTM, BiLSTM, attention mechanisms, DT, SVM, and XGBoost. The central objective is to develop a reliable basis for the automated regulation and enhancement of electrical quality in isolated systems. The specific parameters investigated are power voltage (U), Voltage Total Harmonic Distortion (THDu), Current Total Harmonic Distortion (THDi), and short-term flicker severity (Pst). Data for this investigation were acquired from an experimental off-grid setup at VSB-Technical University of Ostrava (VSB-TUO), Czech Republic. To assess model performance, we utilized root mean square error (RMSE) as the primary accuracy metric, while simultaneously evaluating computational efficiency in terms of processing speed and memory consumption during testing. Full article
23 pages, 4408 KB  
Article
Measurement-Informed Latency Limits for Real-Time UAV Swarm Coordination
by Rodolfo Vera-Amaro, Alberto Luviano-Juárez, Mario E. Rivero-Ángeles, Diego Márquez-González and Danna P. Suárez-Ángeles
Drones 2026, 10(4), 310; https://doi.org/10.3390/drones10040310 (registering DOI) - 21 Apr 2026
Abstract
Communication latency is one of the main factors limiting the practical scalability of unmanned aerial vehicle (UAV) swarms operating with distributed formation control. In real-time UAV missions, such as coordinated swarm navigation, autonomous inspection, and aerial monitoring, delayed information exchange directly affects formation stability and operational safety. In practical aerial networks, inter-UAV communication latency is influenced by stochastic effects including jitter, burst delays, and multi-hop propagation, which are rarely captured by the simplified deterministic delay assumptions commonly adopted in analytical formation-control studies. This paper introduces a measurement-informed stochastic delay model and a communication–control delay-feasibility framework that jointly account for per-link latency behavior, multi-hop delay accumulation, and controller-level delay tolerance. The proposed framework is evaluated using an attractive–repulsive distance-based potential field (ARD–PF) formation controller, for which the maximum admissible end-to-end delay is quantified as a function of swarm size and inter-UAV separation. The delay model is calibrated and validated using more than 15,000 in-flight communication delay samples collected from a multi-UAV LoRa platform operating under realistic flight conditions. The results show that different mechanisms limit swarm operation under different operating scenarios. In some configurations, stochastic communication latency becomes the dominant constraint, whereas in others, formation geometry or network load determines the feasible operating region. Based on these elements, the proposed framework characterizes delay-feasible operating regions and predicts the maximum feasible swarm size under distributed formation control and realistic multi-hop communication latency. Full article
(This article belongs to the Special Issue Low-Latency Communication for Real-Time UAV Applications)
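The delay-feasibility idea described above can be sketched as a Monte Carlo check: sample per-link latencies, accumulate them over hops, and compare a high quantile of the end-to-end delay against the controller's tolerance. All distributions and constants below are my assumptions for illustration, not the paper's measurement-calibrated model.

```python
import random

# Per-link latency model: fixed base plus exponential jitter, in seconds.
# These parameters are illustrative stand-ins, not measured LoRa values.
def end_to_end_delay(hops, rng):
    return sum(0.05 + rng.expovariate(1 / 0.02) for _ in range(hops))

# A hop count is "feasible" if the chosen delay quantile stays within the
# controller's admissible end-to-end delay.
def feasible(hops, tolerance_s, trials=1000, quantile=0.95, seed=0):
    rng = random.Random(seed)
    delays = sorted(end_to_end_delay(hops, rng) for _ in range(trials))
    return delays[int(quantile * trials) - 1] <= tolerance_s
```

Sweeping `hops` against a fixed tolerance then traces out a delay-feasible operating region of the kind the framework characterizes.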
48 pages, 2582 KB  
Article
Multi-Strategy Improved Red-Billed Blue Magpie Optimization Algorithm and Its Engineering Applications
by Junchao Ni, Jianhua Miao, Yejun Zheng, Li Cao, Yang Qiu and Yinggao Yue
Biomimetics 2026, 11(4), 287; https://doi.org/10.3390/biomimetics11040287 (registering DOI) - 21 Apr 2026
Abstract
In response to the decline in population diversity, the imbalance between exploration and exploitation, and the low convergence efficiency in the middle and later stages of the Red-billed Blue Magpie Optimizer (RBMO) when addressing complex optimization problems, this study proposes a multi-strategy enhanced variant termed CLD-RBMO. The proposed algorithm improves the original search mechanism from three perspectives: strengthened global exploration, enhanced local refinement, and directed exploitation in the middle and later stages. During the exploration phase, a hierarchical perturbation mechanism based on Logistic chaotic mapping and Lévy flight is introduced to enhance randomness and spatial coverage in the early search process. In the local exploitation phase, a Cauchy–Gauss hybrid mutation operator is employed to improve the algorithm’s capability to escape from local optima. In the middle and later search stages, a stochastic differential mutation strategy is incorporated to provide population-structure-based directional guidance for individuals, thereby accelerating convergence and improving optimization accuracy. Simulation results on the CEC2017 benchmark test functions indicate that CLD-RBMO demonstrates clear superiority over the original algorithm and several representative swarm intelligence optimization algorithms in terms of optimization accuracy, stability, and overall performance ranking. Convergence curve analysis confirms its dynamic performance improvements across different search stages, and the Wilcoxon rank-sum test further statistically validates the significance of the performance enhancement achieved by the proposed improvements compared with the original algorithm. Moreover, evaluations on two representative mechanical engineering optimization case studies further demonstrate the algorithm’s strong stability and engineering generalization capability. Full article
(This article belongs to the Special Issue Advances in Biological and Bio-Inspired Algorithms: 2nd Edition)
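The "stochastic differential mutation strategy" mentioned above presumably follows the classic differential-evolution form, in which a candidate is perturbed along the difference of two other population members. The sketch below is that generic operator, an assumption of mine, not the CLD-RBMO operator itself.

```python
import random

# Classic differential mutation (generic DE form, not CLD-RBMO itself):
# v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i.
def differential_mutation(pop, i, F=0.5, rng=None):
    rng = rng or random.Random(0)
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    dim = len(pop[0])
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
```

The difference vector gives the perturbation a direction informed by the current population structure, which is the property the abstract credits with accelerating convergence.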
32 pages, 7900 KB  
Article
Smart Manufacturing Scheduling Under Data Latency: A Rolling-Horizon Two-Stage MILP Framework for OEM–Tier-1 Coordination
by Harshkumar K. Parmar and Shivakumar Raman
J. Manuf. Mater. Process. 2026, 10(4), 142; https://doi.org/10.3390/jmmp10040142 (registering DOI) - 21 Apr 2026
Abstract
Real-time coordination across OEM–Tier-1 manufacturing networks remains challenging due to delayed shop-floor data, stochastic machine availability, and the need for schedule stability. This paper presents a protocol-agnostic, two-stage mixed-integer linear programming (MILP) framework for real-time family-level scheduling. The method integrates MTConnect-like data streams without requiring adherence to any single communication standard. In Stage 1, a baseline plan is generated using expected capacity; in Stage 2, a rolling-horizon recourse model adapts the plan to observed (possibly lagged) capacity while incorporating a stability penalty to control resequencing. A synthetic OEM–Tier-1 testbed with three machines (two Tier-1, one OEM) is used to benchmark performance under real-time (L = 0) and delayed (L = 5) data scenarios. Across these scenarios, the real-time rolling scheduler improves strict on-time fulfillment by approximately 70% and eliminates terminal backlog relative to static planning, while MILP solve times remain under 0.1 s per cycle. Sensitivity experiments that vary disruption intensity, replanning interval (Δ), and stability weight (λ) show consistent qualitative trends and illustrate how the framework can be tuned to balance service performance against schedule stability without sacrificing computational tractability. Full article

17 pages, 2015 KB  
Article
Efficient Battery State of Health Estimation Using Lightweight ML Models Based on Limited Voltage Measurements
by Mohammad Okour, Mohannad Alkhalil, Mutaz Al Fayad, Juhyun Bak, Kevin R. James, Sulaiman Mohaidat, Xiaoqi Liu, Fadi Alsaleem, Michael Hempel, Hamid Sharif-Kashani and Mahmoud Alahmad
J. Low Power Electron. Appl. 2026, 16(2), 16; https://doi.org/10.3390/jlpea16020016 (registering DOI) - 21 Apr 2026
Abstract
Accurate estimation of lithium-ion battery State of Health (SoH) is critical for emerging applications such as reconfigurable battery systems. Although data-driven machine learning methods are promising, they often rely on costly, time-intensive aging experiments and extensive feature engineering. This work proposes a lightweight SoH-prediction framework validated on both physics-informed synthetic aging data and the NASA battery aging dataset. We evaluated Random Forest (RF) and Feedforward Neural Network (FNN) models that use only a limited number of samples from an early segment of the raw discharge voltage curve as input. Results show that RF consistently outperforms FNN across input sizes in deterministic or noise-free environments, achieving an RMSE of 0.07% SoH using just 5 voltage samples. In inherently stochastic experimental data, however, FNN can achieve an RMSE 50% lower than RF (1.28 vs. 2.87), but requires 37× more mathematical operations per inference. These findings emphasize the predictive value of the early-discharge-voltage region and demonstrate that compact, low-feature-complexity models can deliver accurate SoH estimates. Overall, the approach supports a goal of combining informed synthetic data with limited real measurements to build robust, scalable SoH predictors, reducing dependence on labor-intensive degradation testing and feature-heavy pipelines. Full article
(This article belongs to the Special Issue 15th Anniversary of Journal of Low Power Electronics and Applications)
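RMSE, the accuracy metric used above, and the reported relative gap between the two models can be reproduced directly. The sample arrays are hypothetical; only the two headline RMSE values (1.28 vs. 2.87 % SoH) come from the abstract.

```python
import math

# Root mean square error over paired predictions and targets.
def rmse(pred, true):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Relative gap between the reported experimental RMSEs (% SoH):
fnn_rmse, rf_rmse = 1.28, 2.87
gap = (rf_rmse - fnn_rmse) / rf_rmse  # ~0.55, i.e. roughly "50% lower"
```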

28 pages, 1664 KB  
Article
Failing to Use the Balance Sheet to Manage Cycle Shocks: Evidence from Nigeria
by Akolisa Ufodike
J. Risk Financial Manag. 2026, 19(4), 298; https://doi.org/10.3390/jrfm19040298 - 20 Apr 2026
Abstract
Nigeria entered the 2020 COVID-19-related oil price downturn without the fiscal buffers that numerous resource-rich economies had built over time. Despite heavy dependence on petroleum revenues, the country has made limited use of stabilization tools such as structured hedging programs, sovereign savings mechanisms, or strategic reserves, leaving public finances exposed to external shocks. Drawing on political choice theory and the resource governance literature, this study examines how institutional conditions shaped crisis management during the 2020 oil price collapse and the COVID-19 pandemic. The study combines qualitative institutional analysis with a stochastic counterfactual simulation. It compares Nigeria’s policy approach with those of oil-producing countries including Mexico, Saudi Arabia, the United Arab Emirates, Angola, and Ghana, using data from the IMF, World Bank, Afreximbank, and peer-reviewed sources. The counterfactual simulation is calibrated to Nigeria’s 2019 federal budget oil benchmark of US $60 per barrel, with the IMF’s 2019 petroleum price assumption used as a robustness check. The model treats hedging as a form of partial fiscal insurance rather than full stabilization. Results suggest that hedging sufficient to offset 10%, 20%, and 30% of the shock would have improved the 2020 GDP decline from −1.80% to approximately −1.62%, −1.44%, and −1.26%, respectively. The analysis identifies institutional gaps in Nigeria’s use of hedging, sovereign savings, and reserve infrastructure. The counterfactual results indicate that even modest oil hedging could have meaningfully softened the 2020 downturn, with the 20% scenario reducing GDP contraction by an estimated 0.36 percentage points. These findings suggest that governance constraints contributed materially to fiscal vulnerability. The study proposes a four-pillar framework centered on risk hedging, revenue savings, strategic investment, and institutional reform to strengthen fiscal stability and resilience to external shocks. Full article
(This article belongs to the Special Issue Commodity Price Risk and Corporate Valuation)
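The counterfactual numbers reported above are consistent with the GDP contraction scaling linearly with the unhedged share of the shock. A sketch under that assumption (mine, far simpler than the paper's stochastic simulation):

```python
# Counterfactual 2020 GDP decline under partial oil-price hedging, assuming
# the contraction scales with the unhedged share of the shock (my simplifying
# assumption, consistent with the reported scenario values).
BASE_DECLINE = -1.80  # reported 2020 GDP decline, %

def hedged_decline(offset_share):
    return BASE_DECLINE * (1.0 - offset_share)

# 10%, 20%, 30% offsets -> approximately -1.62%, -1.44%, -1.26%,
# and the 20% scenario recovers 0.36 percentage points of contraction.
```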

20 pages, 4655 KB  
Article
Experimental Characterization and Non-Linear Dynamic Modelling of PCD Bearings: A Digital-Twin Approach for the Condition Monitoring of Rotating Machinery
by Alessio Cascino, Andrea Amedei, Enrico Meli and Andrea Rindi
Sensors 2026, 26(8), 2545; https://doi.org/10.3390/s26082545 - 20 Apr 2026
Abstract
This study proposes a comprehensive methodology for the experimental characterization and non-linear dynamic modelling of Polycrystalline Diamond (PCD) bearings, establishing a high-fidelity digital twin approach for the condition monitoring of rotating machinery. The research addresses complex rotor–stator interactions through the development of a multibody numerical framework. A structural 1D Finite Element (FE) model of the stator assembly was first calibrated via experimental modal analysis, achieving a high correlation with the first four bending modes and a maximum frequency discrepancy of only 1.4%. This validated structure was integrated into a non-linear multibody environment to simulate transient rub-impact events at rotational speeds up to 5500 rpm across varying clearance configurations. The model successfully captures the transition from stable periodic orbital motion to the stochastic and chaotic regimes observed in high-clearance setups. Frequency-domain validation further confirms the model’s accuracy in identifying supersynchronous harmonics and energy distribution patterns. Quantitative analysis shows that high-clearance configurations generate impact forces exceeding 6000 N, providing critical data for structural health assessment. These results demonstrate that the proposed digital twin serves as a robust physical foundation for diagnostic systems, enabling the identification of contact-induced vibrational signatures that are essential for training prognostic algorithms. This approach facilitates the autonomous monitoring of critical rotating machinery in demanding industrial and subsea applications, supporting the transition toward active balancing and model-based vibration control strategies. Full article
(This article belongs to the Special Issue Robust Measurement and Control Under Noise and Vibrations)

29 pages, 2275 KB  
Article
Reliability Analysis of Tuned Mass Damper-Equipped Structures Under Stochastic Excitation
by Lun Shao, Alexandre Saidi, Abdel-Malek Zine and Mohamed Ichchou
Vibration 2026, 9(2), 29; https://doi.org/10.3390/vibration9020029 - 20 Apr 2026
Abstract
Tuned mass dampers (TMDs) are commonly used to reduce excessive vibrations in engineering structures. Although their vibration control performance has been widely studied, the reliability of TMD-equipped structures under stochastic excitations has not been sufficiently investigated. In practical applications, random loads and system uncertainties may significantly affect structural safety, and efficient evaluation of failure probability remains a challenging task; this computational burden has limited the use of reliability methods in vibration control. In this work, the structural reliability of systems equipped with TMDs is analyzed by adopting the first-passage time (FPT) as the failure criterion. Numerical investigations are performed on continuous beam models with TMDs under different types of stochastic excitation. In addition, an experimental study on a two-story steel frame structure is conducted to further examine the reliability performance of TMD-controlled systems. To reduce the computational cost associated with Monte Carlo simulation, a data-driven classification method is employed to approximate the failure domain based on a limited number of samples. The results indicate that the proposed approach enables accurate reliability estimation with a substantial reduction in computational cost, making it suitable for large-scale reliability analysis of vibration-controlled structures under stochastic excitation. The experimental results further demonstrate the applicability of the proposed reliability assessment method for practical vibration control problems. Full article
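The first-passage failure criterion used above can be illustrated with a toy Monte Carlo estimate on a damped, noisy response. The AR(1) process, threshold, and sample counts below are illustrative stand-ins, not the paper's beam or frame models.

```python
import random

# Monte Carlo estimate of first-passage failure probability: the fraction of
# response sample paths that cross a safety threshold within a time horizon.
# Toy damped AR(1) response; parameters are assumptions for illustration.
def first_passage_prob(threshold=3.0, steps=200, trials=2000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x = 0.95 * x + rng.gauss(0.0, 0.3)  # damping plus random load
            if abs(x) > threshold:
                failures += 1
                break  # first passage: count the trial and stop
    return failures / trials
```

Every trial simulates the full horizon, which is exactly the cost the paper's data-driven failure-domain classifier is designed to reduce.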
26 pages, 17603 KB  
Article
SICABI: Symmetry-Informed Stochastic Modeling via Dominant-Period Stationarity and Recursive Adaptive Parametric Density Estimation
by Daniel Canton-Enriquez, Jorge-Luis Perez-Ramos, Selene Ramirez-Rosales, Luis-Antonio Diaz-Jimenez, Ana-Marcela Herrera-Navarro and Hugo Jimenez-Hernandez
Symmetry 2026, 18(4), 681; https://doi.org/10.3390/sym18040681 - 20 Apr 2026
Abstract
Wind dynamics in urban environments exhibit non-stationarity and marked spatial variability, complicating stochastic modeling when a single global distribution is assumed. This article discusses the estimation of wind density under quasi-stationary regimes at the local level using SICABI, a two-phase framework: (i) Stationary Region Identification (ISR) estimates, through spectral power analysis, a specific dominant period for each location and validates the induced subsampling using the Augmented Dickey–Fuller (ADF) test, and (ii) RAPID adjusts an adaptive parametric density by recursively updating the mixture parameters and creating new components when a normalized membership distance exceeds a threshold. The analysis uses wind speed records collected from eight stations in the Metropolitan Area of Queretaro, Mexico, during the period from 1 January 2023 to 31 December 2023, aggregated at a 10 min resolution, from which X_{δ,s} is constructed for each site. RAPID is compared against Gaussian Kernel Density Estimation (KDE) with Silverman bandwidth and EM-fitted Gaussian mixtures with BIC-based selection (K_max = 12). The resulting densities were compared with an empirical density estimated from a histogram over a fixed grid (m = 50) using the MISE and RMSE metrics. The results reveal marked site-dependent differences in dominant periodicity and residual behavior, including asymmetry and heavy tails. ISR identified dominant periods ranging from 37 to 166 days, and RAPID adapted its complexity with K_s ∈ [5, 10] without fixing the number of mixture components in advance. Quantitatively, RAPID achieved the lowest RMSE at 6/8 sites and the lowest MISE at 5/8 sites, while also exhibiting shorter execution times than KDE and MoG under the same input X_{δ,s}. The results support RAPID as a competitive adaptive method for site-specific density estimation in non-stationary urban climate signals. In this context, local regimes can be viewed as approximate invariants under time translation in the weak stochastic sense, while deviations from this assumption are reflected in increased distributional complexity across sites. Full article

20 pages, 1246 KB  
Article
Comparative Performance of Gaussian Plume and Backward Lagrangian Stochastic Models for Near-Field Methane Emission Estimation Using a Single Controlled Release Experiment
by Aashish Upreti, Kira B. Shonkwiler, Stuart N. Riddick and Daniel J. Zimmerle
Atmosphere 2026, 17(4), 417; https://doi.org/10.3390/atmos17040417 - 20 Apr 2026
Abstract
Methane (CH4) is a major component of natural gas and a potent greenhouse gas. Atmospheric methane concentrations have risen by an average of 13 ppb per year since 2020, an increase attributed to anthropogenic emissions and linked to a changing global climate. Mitigating CH4 emissions from oil and gas production sites has recently become a target to reduce overall greenhouse gas emissions; however, monitoring the efficacy of mitigation strategies depends on accurate quantification of CH4 emissions at the facility level. Near-field quantification of CH4 emissions from oil and gas (O&G) facilities remains challenging due to the effects of atmospheric variability and sensor configuration on atmospheric dispersion models. This study evaluates the performance of two atmospheric dispersion models, the Gaussian plume (GP) and backward Lagrangian stochastic (bLS), by comparing calculated CH4 emissions to controlled single-point emissions between 0.4 and 5.2 kg CH4 h⁻¹. Emissions were calculated by both models using 121 individual sets of measurements comprising five-minute averaged downwind methane mixing ratios and matching meteorological data. The comparison shows that the bLS approach achieved a higher proportion of emission estimates within a factor of two (FAC2) of the known emission rates compared to the GP approach. The emissions calculated by the bLS model also had a lower multiplicative error and reduced bias relative to GP. Other error-based metrics further confirmed that the bLS model performed better, as it yielded lower RMSE and MAE than GP. Statistical analysis of the emission data shows that the lateral and vertical alignment of the source and the sensor plays a critical role in emission estimation, as measurements made closer to the plume centerline and at a distance between 40 and 80 m downwind yielded the best FAC2 agreement. High wind meander degraded the ability of both approaches to generate representative emission estimates, particularly for the GP approach, as it violates that model’s assumption of steady-state conditions. The data suggest that emissions calculated by the bLS model are consistently in better agreement, but the computational demands of the approach and its integration into fenceline systems limit real-time applicability. While these results provide insight into model performance under controlled near-field conditions, their applicability to more complex or heterogeneous oil and gas production environments (e.g., the Marcellus or Uinta Basins) remains limited and uncertain. Full article
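FAC2, the headline metric above, is simply the fraction of estimates falling within a factor of two of the true release rate. The sample numbers below are hypothetical, not data from the study.

```python
# FAC2: fraction of emission estimates within a factor of two of the truth,
# i.e. with 0.5 <= estimate / true_rate <= 2.0.
def fac2(estimates, true_rate):
    ratios = [e / true_rate for e in estimates]
    return sum(0.5 <= r <= 2.0 for r in ratios) / len(ratios)

# Hypothetical estimates (kg CH4/h) against a 1.0 kg/h controlled release:
# fac2([0.9, 1.6, 2.5, 0.4, 1.1], 1.0) -> 0.6 (3 of 5 within a factor of two)
```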

21 pages, 4154 KB  
Article
Automatic Modal Parameter Identification for Offshore Wind Turbines Using Modified Clustering-Based Methodology
by Yang Yang, Fayun Liang, Qingxin Zhu and Hao Zhang
Sensors 2026, 26(8), 2536; https://doi.org/10.3390/s26082536 - 20 Apr 2026
Abstract
Offshore wind power is a clean, low-carbon energy option that is expanding rapidly as part of efforts to achieve carbon neutrality. Effective monitoring of the dynamic response of wind turbines is necessary to analyze their modal parameters, which are key indicators of whether the turbines are operating safely. Modal parameter identification for offshore wind turbines (OWTs) is therefore essential, given the limited acceptable range of natural frequencies under dynamic loads. This paper introduces a novel machine learning-based method that combines data-driven stochastic subspace identification (SSI-data) with clustering analysis, employing DBSCAN (Density-Based Spatial Clustering of Applications with Noise) and the K-means clustering algorithm. The proposed method can automatically define the number of K-means clusters. Validation was carried out through a theoretical analysis using a four-degree-of-freedom model and an OpenSees numerical simulation model of an OWT. The verification and case study outcomes demonstrate that the proposed method possesses the accuracy required for automated modal parameter identification. Compared with the benchmark case results, the differences between the frequencies identified by the proposed method and the reference values are 0.0%, 0.30%, and 0.18% for the first three orders, respectively. This research not only provides valuable insights for professionals in related dynamic monitoring fields but also offers technical support for diagnosing abnormal states of OWTs utilizing dynamic response data. Full article

7 pages, 1295 KB  
Proceeding Paper
Parameter Analysis of a Stochastic Approach for Generating Spectrum-Compatible Ground Motions
by Wei-Chih Su
Eng. Proc. 2026, 136(1), 3; https://doi.org/10.3390/engproc2026136003 - 20 Apr 2026
Abstract
To validate a structure and confirm its conformity with the stipulated design conditions, the responses and member internal forces of its finite element model under artificial earthquakes must be simulated. A variety of methodologies exist for generating artificial earthquake waveforms that match a design response spectrum; among them, the frequency-domain method is intuitive and convenient. However, fluctuations in energy within specific frequency bands influence the acceleration responses across all frequency ranges, which impedes convergence during the generation of artificial earthquake waveforms. The present study proposes a refined procedure for generating artificial earthquake waveforms in the frequency domain. The procedure is applied to generate artificial earthquake waveforms for the site of the Maanshan Nuclear Power Plant in Taiwan. The effects of the parameters, including the coverage range of the weighting function and the peak ground acceleration of the initial guess, were compared to ascertain the convergence properties of the proposed approach. Full article
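A minimal frequency-domain spectral-matching loop of the kind this abstract describes might look as follows. Everything here is an illustrative assumption rather than the paper's settings: the flat target spectrum, the Gaussian envelope on the initial random-phase guess, the damping ratio, and the iteration count are all invented for the sketch.

```python
import numpy as np

def response_spectrum(ag, dt, freqs, zeta=0.05):
    """Pseudo-acceleration spectrum of ground acceleration ag,
    computed per oscillator via an FFT transfer function."""
    n = len(ag)
    Ag = np.fft.rfft(ag)
    w = 2 * np.pi * np.fft.rfftfreq(n, dt)
    sa = np.empty(len(freqs))
    for i, fn in enumerate(freqs):
        wn = 2 * np.pi * fn
        # relative-displacement transfer function of a damped SDOF oscillator
        H = -1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)
        u = np.fft.irfft(Ag * H, n)
        sa[i] = wn**2 * np.max(np.abs(u))   # Sa = wn^2 * peak |u|
    return sa

dt, n = 0.01, 2048
t = np.arange(n) * dt
match_freqs = np.linspace(0.5, 10.0, 20)
target = np.full(len(match_freqs), 1.0)     # flat 1.0 target (stand-in spectrum)

# Initial guess: random-phase noise under a smooth envelope
rng = np.random.default_rng(1)
ag = rng.standard_normal(n) * np.exp(-((t - 10.0) / 6.0) ** 2)

for _ in range(15):  # iterative Fourier-amplitude correction
    sa = response_spectrum(ag, dt, match_freqs)
    ratio = target / sa
    f = np.fft.rfftfreq(n, dt)
    corr = np.interp(f, match_freqs, ratio)  # correction on the FFT grid
    ag = np.fft.irfft(np.fft.rfft(ag) * corr, n)

sa = response_spectrum(ag, dt, match_freqs)
err = np.max(np.abs(sa - target) / target)
print(f"max spectral mismatch after matching: {err:.3f}")
```

The loop illustrates the convergence issue the abstract raises: each amplitude correction at one control frequency also perturbs the oscillator responses at neighboring frequencies, which is why weighting and initial-guess scaling matter.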

28 pages, 8935 KB  
Article
Wind-Sound Synergy and Fractal Design: Intelligent, Adaptive Acoustic Façades for High-Performance, Climate-Responsive Buildings
by Lingge Tan, Xinyue Zhang, Donghui Cui and Stephen Jia Wang
Buildings 2026, 16(8), 1615; https://doi.org/10.3390/buildings16081615 - 20 Apr 2026
Abstract
The building façade serves as the primary interface between the built environment and external climate, marking the transition from static regulation to dynamic response in climate-adaptive design. While existing research predominantly addresses periodic climatic elements such as temperature and solar radiation, the highly stochastic wind environment and its potential for internal acoustic problems remain systematically unexplored. This study investigates the acoustic modulation mechanism of building façades under dynamic wind conditions through a simulation-based methodology. The primary aim is to demonstrate the use of active control to mitigate the influence of fluctuating wind on the internal acoustic environment of buildings with open windows or semi-open boundaries, focusing on the coupling between stochastic wind fields and architectural acoustics in humid subtropical climates. We propose a wind-responsive adaptive acoustic façade system employing fractal geometry and configurable delay strategies, and develop a high-fidelity simulation framework to quantify how façade geometry and activation logic regulate acoustic parameters under varying wind conditions (1–8 m/s). Results indicate that: (1) support vector regression-based mapping of wind speed to delay strategies maintains key sound-field parameters (Lateral Fraction (LF), Speech Clarity (C50), and Early Decay Time to Reverberation Time ratio (EDT/RT30)) within 10% fluctuation across wind regimes; (2) fractal configurations achieve balanced wide-band (125 Hz–8 kHz) performance, with SPL fluctuation <3 dB, spectral tilt (+0.3 dB), and reverberation time slope <0.3; (3) configurational switching between column (high LF) and row (high C50) arrangements enables dynamic trade-off between spatial impression and speech clarity. This work establishes an integrated framework coupling wind dynamics, façade morphology, and acoustic modulation to regulate objective indoor acoustic parameters. Based on the simulated omnidirectional point-source model, the results show that key acoustic indicators remain stable across varying wind conditions, providing a theoretical and quantifiable basis for climate-responsive acoustic envelope design. Future work will include empirical prototype testing and listening tests to determine whether these simulated acoustic parameters translate into improved comfort and well-being for occupants. Full article
(This article belongs to the Special Issue Advanced Research on Improvement of the Indoor Acoustic Environment)
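The wind-speed-to-delay mapping described in result (1) can be sketched with a smooth kernel regressor. The code below uses kernel ridge regression with an RBF kernel as a simple stand-in for the paper's support vector regression, and the wind/delay training pairs are entirely invented for illustration (the paper does not publish them).

```python
import numpy as np

# Hypothetical training data: wind speed (m/s) -> façade actuation delay (ms),
# imagined as the delays that keep LF/C50 within the target band. Illustrative only.
wind  = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
delay = np.array([120, 95, 78, 64, 55, 48, 44, 41], dtype=float)

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel matrix between 1-D point sets a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Kernel ridge regression: solve (K + lam*I) alpha = y once at training time
lam = 1e-3
K = rbf(wind, wind)
alpha = np.linalg.solve(K + lam * np.eye(len(wind)), delay)

def predict(v):
    """Predict actuation delay (ms) for measured wind speeds v (m/s)."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    return rbf(v, wind) @ alpha

print(predict([3.5, 6.5]))  # interpolated delays between training points
```

The design point this illustrates is that a smooth regressor lets the façade controller respond continuously to any measured wind speed in the 1–8 m/s range, rather than switching between a few discrete delay presets.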

15 pages, 234 KB  
Article
Enhancing or Jeopardizing Human Creativity? Will Humans Be Able to Defend Themselves Against AI Superpowers in an Age of Ethics Washing and Law Washing?
by Lorenzo Magnani
Philosophies 2026, 11(2), 65; https://doi.org/10.3390/philosophies11020065 - 20 Apr 2026
Abstract
I recently introduced the concept of eco-cognitive openness and situatedness to explain how cognitive systems—human or artificial—dynamically interact with their environments to generate information and creative outputs through abductive cognition. Humans display high eco-cognitive openness, integrating tools and cultural contexts through “unlocked strategies” that also enable exceptional creativity. By contrast, generative AI like LLMs operates via “locked strategies” based on pre-existing datasets with limited real-time interaction, which constrains higher creativity. Although LLMs surpass humans in many cognitive tasks, they lack the openness required for truly advanced abductive performance. Notably, most human cognition is repetitive and imitative—humans themselves often resemble “stochastic parrots.” In this sense, LLMs reveal human intellectual poverty more than they expose flaws in artificial intelligence. I will illustrate how LLMs can act as powerful enhancers of human performance while simultaneously threatening our most distinctive prerogative: creativity. Future human–AI collaboration could expand our eco-cognitive openness, but demands vigilant oversight to counter bias and so-called overcomputationalization. GenAI can serve as an epistemic mediator toward unlocked creativity only if humans maintain agency and embed its outputs in broader socio-cultural frameworks. My greatest concern is that ethical and legal safeguards will remain ineffective in practice, resulting in mere “ethics washing” and “law washing” without genuine enforcement. Full article
(This article belongs to the Special Issue Intelligent Inquiry into Intelligence)