Search Results (2,073)

Search Parameters:
Keywords = Gaussian optimality

19 pages, 1190 KB  
Article
Integrating Multi-Strategy Improvements to Sand Cat Group Optimization and Gradient-Boosting Trees for Accurate Prediction of Microclimate in Solar Greenhouses
by Xiao Cui, Yuwei Cheng, Zhimin Zhang, Juanjuan Mu and Wuping Zhang
Agriculture 2025, 15(17), 1849; https://doi.org/10.3390/agriculture15171849 - 29 Aug 2025
Abstract
Solar greenhouses are an important component of modern facility agriculture, and the dynamic changes in their internal environment directly affect crop growth and yield. Among these factors, crops release water vapor through transpiration, directly altering the indoor humidity balance and forming a dynamic coupling with factors such as temperature and light. The solar greenhouse environment exhibits highly nonlinear, multivariate coupling, leaving existing models with insufficient prediction accuracy, yet accurate predictions are crucial for regulating crop growth and yield. Current mainstream greenhouse environmental prediction models still have obvious limitations when dealing with such complexity: traditional machine learning models and single-variable-driven models suffer from insufficient accuracy (average MAE 15–20% higher than in this study) and weak adaptability to nonlinear environmental changes in multi-factor coupling predictions, making it difficult to meet the needs of precision farming. A review of relevant research over the past five years shows that while LSTM-based models perform well in time-series prediction, they ignore the spatial correlations between environmental factors, and models incorporating attention mechanisms can capture key variables but incur high computational costs. To address these issues, this study proposes a prediction model based on multi-strategy optimization and gradient-boosted decision tree (GBDT) algorithms. A multi-scale feature fusion module addresses the accuracy issues in multi-factor coupling prediction, and a lightweight network design balances prediction performance and computational efficiency, filling the gap in existing research on complex greenhouse environments. 
The model optimizes data preprocessing and model parameters through Sobol sequence initialization, adaptive t-distribution perturbation strategies, and Gaussian–Cauchy mixture mutation strategies and combines CatBoost for modeling to enhance prediction accuracy. Experimental results show that the MSCSO–CatBoost model performs excellently in temperature prediction, with the mean absolute error (MAE) and root mean square error (RMSE) reduced by 22.5% (2.34 °C) and 24.4% (3.12 °C), respectively, and the coefficient of determination (R2) improved to 0.91, significantly outperforming traditional regression methods and combinations of other optimization algorithms. Additionally, the model demonstrates good generalization capability in predicting multiple environmental variables such as temperature, humidity, and light intensity, adapting to environmental fluctuations under different climatic conditions. This study confirms that combining multi-strategy optimization with gradient-boosting algorithms can significantly improve the prediction accuracy of solar greenhouse environments, providing reliable support for precision agricultural management. Future research could further explore the model’s adaptive optimization in complex climatic regions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
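The abstract above names a Gaussian–Cauchy mixture mutation among the MSCSO improvements but does not spell it out; a generic sketch of such an operator follows, with the mixing probability `p_gauss` and step `scale` as illustrative assumptions rather than the paper's settings:

```python
import math
import random

def gaussian_cauchy_mutate(x, bounds, p_gauss=0.5, scale=0.1, rng=random):
    """Perturb a candidate solution with a Gaussian-Cauchy mixture.

    Each coordinate takes either a Gaussian step (fine local search) or a
    heavy-tailed Cauchy step (occasional long jumps that help escape local
    optima), then is clipped back into the search bounds.
    """
    out = []
    for xi, (lo, hi) in zip(x, bounds):
        span = hi - lo
        if rng.random() < p_gauss:
            step = rng.gauss(0.0, scale * span)            # Gaussian perturbation
        else:
            # standard Cauchy variate via inverse CDF: tan(pi * (u - 1/2))
            step = scale * span * math.tan(math.pi * (rng.random() - 0.5))
        out.append(min(hi, max(lo, xi + step)))            # clip to bounds
    return out
```

In a population-based optimizer this would be applied to selected individuals each generation, with the Cauchy branch typically weighted more heavily early (exploration) and the Gaussian branch later (exploitation).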
18 pages, 538 KB  
Article
Optimizing Carbon Footprint and Strength in High-Performance Concrete Through Data-Driven Modeling
by Saloua Helali, Shadiah Albalawi, Maer Alanazi, Bashayr Alanazi and Nizar Bel Hadj Ali
Sustainability 2025, 17(17), 7808; https://doi.org/10.3390/su17177808 - 29 Aug 2025
Abstract
High-performance concrete (HPC) is an essential construction material for modern buildings and infrastructure assets, recognized for its exceptional strength, durability, and performance under harsh conditions. Nonetheless, HPC production is frequently associated with elevated carbon emissions, principally attributable to the high quantity of cement used, which dominates its carbon footprint. In this study, data-driven modeling and optimization strategies are employed to minimize the carbon footprint of high-performance concretes while preserving their performance properties. Starting from an experimental dataset, artificial neural networks (ANNs), ensemble techniques (ETs), and Gaussian process regression (GPR) are employed to build predictive models for the compressive strength of HPC mixes. The models' input variables are the components of HPC: cement, water, superplasticizer, fly ash, blast furnace slag, and coarse and fine aggregates. Models are trained using a dataset of 356 records. Results show that the GPR-based model exhibits excellent accuracy, with a determination coefficient of 0.90. The prediction model is then used in a bi-objective optimization task formulated to identify mix configurations that combine high mechanical performance with reduced carbon emissions. The multi-objective optimization task is undertaken using genetic algorithms (GAs). Promising results are obtained when the machine learning prediction model is coupled with GA optimization to identify strong yet sustainable mix configurations. Full article
(This article belongs to the Special Issue Advancements in Concrete Materials for Sustainable Construction)
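The GA-driven mix search described above can be illustrated with a minimal real-coded genetic algorithm. The operator choices (tournament selection, arithmetic crossover, Gaussian mutation, elitism) and all parameters are generic textbook assumptions; in the paper's setting `f` would scalarize or Pareto-rank the strength and carbon objectives rather than the toy objective used in the test:

```python
import random

def ga_minimize(f, bounds, pop_size=40, gens=60, rng=None):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, elitism of the best."""
    rng = rng or random.Random()
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        nxt = [best[:]]                                   # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=f)           # tournament selection
            p2 = min(rng.sample(pop, 3), key=f)
            a = rng.random()
            child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):         # Gaussian mutation
                if rng.random() < 0.2:
                    child[i] += rng.gauss(0.0, 0.1 * (hi - lo))
                child[i] = min(hi, max(lo, child[i]))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=f)
    return best, f(best)
```

For a true multi-objective run one would replace the scalar `f` with non-dominated sorting (e.g. NSGA-II style) to obtain a strength-vs-CO2 Pareto front instead of a single mix.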
17 pages, 8835 KB  
Article
Evolutionary Gaussian Decomposition
by Roman Y. Pishchalnikov, Denis D. Chesalin, Vasiliy A. Kurkov, Andrei P. Razjivin, Sergey V. Gudkov, Alexey S. Dorokhov and Andrey Yu. Izmailov
Mathematics 2025, 13(17), 2760; https://doi.org/10.3390/math13172760 - 27 Aug 2025
Abstract
We present a computational approach for performing the Gaussian decomposition (GD) of experimental spectral data, called evolutionary Gaussian decomposition (EGD). The key feature of EGD is its ability to estimate the optimal number of Gaussian components required to fit a target function, which can be any experimental functional dependence. The efficiency and robustness of EGD are achieved through the use of the differential evolution (DE) algorithm, which allows us to tune the performance of the method. Based on statistics from the independent trials of DE, EGD can determine the number of Gaussians above which further improvement in fit quality does not occur. EGD works by collecting statistics on local minima in the vicinity of the estimated optimal number of Gaussians, and, if necessary, repeats this process several times during optimization until the desired results are obtained. The method was tested using both synthetic spectral-like functions and measured spectra of photosynthetic pigments. In addition to the local minima statistics, the most significant factors that affect the results of the analysis were the median and minimum values of the cost function. These values were obtained for each different number of Gaussian functions used in the evaluation process. Full article
(This article belongs to the Special Issue Evolutionary Computation, Optimization, and Their Applications)
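The engine behind EGD is differential evolution. A bare-bones DE/rand/1/bin loop fitting a fixed number of Gaussian components to sampled data might look as follows; the parameter bounds and control settings are illustrative, and the actual EGD procedure additionally collects local-minima statistics across independent trials to choose the component count:

```python
import math
import random

def gauss_model(params, x):
    """Sum of Gaussians: params = [a1, mu1, s1, a2, mu2, s2, ...]."""
    y = 0.0
    for i in range(0, len(params), 3):
        a, mu, s = params[i:i + 3]
        y += a * math.exp(-((x - mu) ** 2) / (2 * s * s))
    return y

def de_fit(xs, ys, n_gauss, iters=200, np_=30, f_=0.7, cr=0.9, rng=None):
    """DE/rand/1/bin search for Gaussian-component parameters that
    minimize the sum-of-squares misfit to (xs, ys)."""
    rng = rng or random.Random()
    dim = 3 * n_gauss
    # illustrative box bounds: amplitude, center within data range, width
    lo = [0.1, min(xs), 0.05] * n_gauss
    hi = [2.0, max(xs), 2.0] * n_gauss
    cost = lambda p: sum((gauss_model(p, x) - y) ** 2 for x, y in zip(xs, ys))
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(np_)]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)                    # force one mutated gene
            trial = [a[j] + f_ * (b[j] - c[j])
                     if (rng.random() < cr or j == jrand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(h, max(l, t)) for t, l, h in zip(trial, lo, hi)]
            if cost(trial) <= cost(pop[i]):               # greedy selection
                pop[i] = trial
    return min(pop, key=cost)
```

EGD would run many such independent trials for each candidate number of Gaussians and stop increasing the count once the median and minimum cost no longer improve.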

29 pages, 8415 KB  
Article
Three-Dimensional Modeling and Analysis of Directed Energy Deposition Melt Pools Based on Physical Information Neural Networks
by Xiang Han, Zhuang Qian, Xinyue Gao, Huaping Li, Zhongqing Peng and Yu Long
Appl. Sci. 2025, 15(17), 9401; https://doi.org/10.3390/app15179401 - 27 Aug 2025
Abstract
In Directed Energy Deposition (DED), modeling the molten pool temperature field is crucial for precise temperature control, process optimization, and quality improvement. However, conventional numerical methods suffer from limitations such as high computational costs and poor transferability. This study proposes a physics-informed neural network with dynamic learning rate (DLR-PINN) model, which integrates transfer learning to enable rapid prediction of 3D temperature fields and dimensions of molten pools across process parameters. Its validity is verified by a finite element method (FEM) calibrated via single-track DED experiments. Results show that DLR-PINN exhibits superior convergence and stability compared to traditional PINN. Combined with transfer learning, training efficiency is significantly enhanced, with a single prediction taking only 10 s. Using the FEM as the benchmark, it achieves a mean absolute percentage error (MAPE) of 0.53% for temperature prediction, and MAPEs of 3.69%, 2.48%, and 6.96% for the molten pool dimension predictions. Sensitivity analysis of process parameters reveals that scanning speed has a significantly greater regulatory effect on molten pool characteristics than laser power. Additionally, the temperature field of the flat-top heat source is more uniform than that of the Gaussian heat source, which is more conducive to improving printing quality and efficiency. Full article

14 pages, 7032 KB  
Article
Frequency-Domain Gaussian Cooperative Filtering Demodulation Method for Spatially Modulated Full-Polarization Imaging Systems
by Ziyang Zhang, Pengbo Ma, Shixiao Ye, Song Ye, Wei Luo, Shu Li, Wei Xiong, Yuting Zhang, Wentao Zhang, Fangyuan Wang, Jiejun Wang, Xinqiang Wang and Niyan Chen
Photonics 2025, 12(9), 857; https://doi.org/10.3390/photonics12090857 - 26 Aug 2025
Abstract
The spatially modulated full-polarization imaging system encodes complete polarization information into a single interferogram, enabling rapid demodulation. However, traditional single Gaussian low-pass filtering cannot adequately suppress crosstalk among Stokes components, leading to reduced accuracy. To address this issue, this paper proposes a frequency-domain Gaussian cooperative filter (FGCF) based on a divide-and-conquer strategy in the frequency domain. Specifically, the method employs six Gaussian high-pass filters to effectively identify and suppress interference signals located at different positions in the frequency domain, while utilizing a single Gaussian low-pass filter to preserve critical polarization information within the image. Through the cooperative processing of the low-pass filter response and the complementary responses of the high-pass filters, simultaneous optimization of information retention and interference suppression is achieved. Simulation and real-scene experiments show that FGCF significantly enhances demodulation quality, especially for S1, and achieves superior structural similarity compared with traditional low-pass filtering. Full article
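The cooperative filter structure described above, one Gaussian low-pass multiplied by several Gaussian notch (high-pass) responses, can be sketched directly as a frequency-domain transfer function. The cutoffs and interference-carrier positions below are placeholders, not values from the paper:

```python
import math

def gaussian_lp(u, v, d0):
    """Gaussian low-pass transfer function centered at DC."""
    return math.exp(-(u * u + v * v) / (2 * d0 * d0))

def gaussian_notch(u, v, uc, vc, d0):
    """Gaussian high-pass (notch) suppressing a neighborhood of (uc, vc);
    applied symmetrically at (-uc, -vc) so the spectrum stays conjugate-even."""
    d1 = math.exp(-(((u - uc) ** 2) + (v - vc) ** 2) / (2 * d0 * d0))
    d2 = math.exp(-(((u + uc) ** 2) + (v + vc) ** 2) / (2 * d0 * d0))
    return (1 - d1) * (1 - d2)

def cooperative_response(u, v, centers, d_lp=40.0, d_hp=10.0):
    """Combined transfer function: one low-pass retaining the baseband
    polarization term, times one notch per interference carrier."""
    h = gaussian_lp(u, v, d_lp)
    for uc, vc in centers:
        h *= gaussian_notch(u, v, uc, vc, d_hp)
    return h
```

Evaluating `cooperative_response` over the FFT grid and multiplying it into the interferogram spectrum before the inverse transform reproduces the divide-and-conquer idea: the baseband passes nearly unchanged while each modeled carrier is driven to zero.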

27 pages, 19263 KB  
Article
An Adaptive Dual-Channel Underwater Target Detection Method Based on a Vector Cross-Trispectrum Diagonal Slice
by Weixuan Zhang, Yu Chen, Qiang Bian, Yuyao Liu, Yan Liang and Zhou Meng
J. Mar. Sci. Eng. 2025, 13(9), 1628; https://doi.org/10.3390/jmse13091628 - 26 Aug 2025
Abstract
This paper introduces a method for detecting weak line spectrum signals in dynamic, non-Gaussian marine noise using a single vector hydrophone. The trispectrum diagonal slice is employed to extract coupled line spectrum features, enabling the detection of line spectra with independent frequencies and phases while effectively suppressing Gaussian noise. By constructing a cross-trispectrum diagonal slice spectrum from the hydrophone’s sound pressure and composite particle velocity, the method leverages coherence gain to enhance the signal-to-noise ratio (SNR). Furthermore, a discriminator based on the cross-coherence function of pressure and velocity is proposed, which uses a dynamic threshold to adaptively select, in real time, either the vector cross-trispectrum diagonal slice (V-TriD) or conventional energy detection (ED) as the optimal detection channel for each incoming signal. The feasibility and effectiveness of this method were validated through simulations and sea trial data from the South China Sea. Experimental results demonstrate that the proposed algorithm can effectively detect the target signal, achieving an SNR improvement of 3 dB at the target frequency and an average reduction in broadband noise energy of 1–2 dB compared to traditional energy spectrum detection. The proposed algorithm exhibits computational efficiency, adaptability, and robustness, making it well suited for real-time underwater target detection in critical applications, including harbor security, waterway monitoring, and marine bioacoustic studies. Full article
(This article belongs to the Section Ocean Engineering)
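The adaptive channel selection is the part of this method easiest to sketch. The abstract says the threshold on pressure-velocity cross-coherence is dynamic but not how it is formed, so the mean-plus-k-sigma rule below is purely an assumption:

```python
def select_channel(coherence_history, current_coherence, k=1.0):
    """Adaptive dual-channel discriminator sketch.

    The threshold tracks the running mean + k * std of recent
    pressure-velocity coherence values; high coherence suggests a
    coherent line-spectrum target, so the higher-order V-TriD channel
    is chosen, otherwise plain energy detection (ED) is used.
    """
    n = len(coherence_history)
    mean = sum(coherence_history) / n
    var = sum((c - mean) ** 2 for c in coherence_history) / n
    threshold = mean + k * var ** 0.5
    return "V-TriD" if current_coherence > threshold else "ED"
```

In a streaming implementation `coherence_history` would be a sliding window so the threshold follows slow changes in the ambient noise field.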

23 pages, 6313 KB  
Article
Time-Optimal Trajectory Planning for Industrial Robots Based on Improved Fire Hawk Optimizer
by Shuxia Ye, Bo Jiang, Yongwei Zhang, Liwen Cai, Liang Qi and Siyu Fei
Machines 2025, 13(9), 764; https://doi.org/10.3390/machines13090764 - 26 Aug 2025
Abstract
Focusing on joint-space time-optimal trajectory planning for industrial robots, this study integrates 3-5-3 piecewise polynomial parameterization with an improved Fire Hawk Optimization algorithm (TFHO). Subject to joint position, velocity, and acceleration limits, segment durations are optimized as decision variables. TFHO employs Tent-chaotic initialization to improve the uniformity of initial solutions and a two-phase adaptive Lévy–Gaussian–Cauchy hybrid mutation to balance early global exploration with late local exploitation, mitigating premature convergence and enhancing stability. On benchmark functions, TFHO attains the lowest mean area under the convergence curve (AUC; lower is better). Wilcoxon signed-rank tests show statistically significant improvements over FHO, PSO, GWO, and WOA (p < 0.05). Ablation studies indicate a pronounced reduction in run-to-run variability: the standard deviation decreases from 0.3157 (FHO) to 0.0023 with TFHO, a 99.27% drop. In an ABB IRB-2600 simulation case, the execution time is shortened from 12.00 s to 9.88 s (−17.66%) while preserving smooth and continuous kinematic profiles (position, velocity, and acceleration), demonstrating practical engineering applicability. Full article
(This article belongs to the Section Automation and Control Systems)
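Tent-chaotic initialization, the first of TFHO's improvements, is simple to reproduce: a single chaotic orbit of the tent map is reshaped into a population and scaled into the search bounds. The map parameter `mu` and seed `x0` below are illustrative:

```python
def tent_sequence(x0, n, mu=1.99):
    """Iterate the tent map x_{k+1} = mu * min(x_k, 1 - x_k); for mu close
    to 2 the orbit is chaotic and spreads quasi-uniformly over (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * min(x, 1.0 - x)
        xs.append(x)
    return xs

def tent_init(pop_size, bounds, x0=0.37):
    """Chaotic population initialization: one long tent-map orbit is cut
    into pop_size individuals and affinely mapped into the box bounds."""
    dim = len(bounds)
    seq = tent_sequence(x0, pop_size * dim)
    return [[lo + seq[i * dim + j] * (hi - lo)
             for j, (lo, hi) in enumerate(bounds)]
            for i in range(pop_size)]
```

Compared with independent uniform draws, the chaotic orbit avoids accidental clustering of the initial population, which is the uniformity benefit the abstract refers to.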

27 pages, 1880 KB  
Article
Optimal Choice of the Shape Parameter for the Radial Basis Functions Method in One-Dimensional Parabolic Inverse Problems
by Sanduni Wasana and Upeksha Perera
Algorithms 2025, 18(9), 539; https://doi.org/10.3390/a18090539 - 25 Aug 2025
Abstract
Inverse problems have numerous important applications in science, engineering, medicine, and other disciplines. In this study, we present a numerical solution for a one-dimensional parabolic inverse problem with energy overspecification at a fixed spatial point, using the radial basis function (RBF) method. The collocation matrix arising in RBF-based approaches is typically highly ill-conditioned, and the method’s performance is strongly influenced by the choice of the radial basis function and its shape parameter. Unlike previous studies that focused primarily on Gaussian radial basis functions, this work investigates and compares the performance of three RBF types—Gaussian (GRBF), Multiquadrics (MQRBF), and Inverse Multiquadrics (IMQRBF). By transforming the inverse problem into an equivalent direct problem, we apply the RBF collocation method in both space and time. Numerical experiments on two test problems with known analytical solutions are conducted to evaluate the approximation error, optimal shape parameters, and matrix conditioning. Results indicate that both MQRBF and IMQRBF generally provide better accuracy than GRBF. Furthermore, IMQRBF enhances numerical stability due to its lower condition number, making it a more robust choice for solving ill-posed inverse problems where both stability and accuracy are critical. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
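The three kernels compared in this study, and the collocation solve whose conditioning the shape parameter `c` governs, can be sketched for plain 1-D interpolation as follows. The paper applies collocation in both space and time to the inverse problem; this is only the kernel-and-matrix core, with illustrative node and shape-parameter choices:

```python
import math

def rbf(kind, r, c):
    """The three radial kernels compared in the paper (c = shape parameter)."""
    if kind == "gaussian":                                # GRBF
        return math.exp(-((c * r) ** 2))
    if kind == "multiquadric":                            # MQRBF
        return math.sqrt(1.0 + (c * r) ** 2)
    return 1.0 / math.sqrt(1.0 + (c * r) ** 2)            # IMQRBF

def solve(a, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            m[i] = [mi - f * mk for mi, mk in zip(m[i], m[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def rbf_interpolate(kind, c, xs, ys):
    """Collocation: solve A w = y with A_ij = phi(|x_i - x_j|), then return
    the interpolant s(x) = sum_j w_j * phi(|x - x_j|)."""
    a = [[rbf(kind, abs(xi - xj), c) for xj in xs] for xi in xs]
    w = solve(a, ys)
    return lambda x: sum(wj * rbf(kind, abs(x - xj), c) for wj, xj in zip(w, xs))
```

The study's conditioning observation shows up here directly: for the Gaussian kernel, shrinking `c` drives all matrix entries toward 1 and the system toward singularity, while the bounded, slowly varying IMQ entries keep the condition number lower.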

30 pages, 1831 KB  
Article
Integrating Cacao Physicochemical-Sensory Profiles via Gaussian Processes Crowd Learning and Localized Annotator Trustworthiness
by Juan Camilo Lugo-Rojas, Maria José Chica-Morales, Sergio Leonardo Florez-González, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Foods 2025, 14(17), 2961; https://doi.org/10.3390/foods14172961 - 25 Aug 2025
Abstract
Understanding the intricate relationship between sensory perception and physicochemical properties of cacao-based products is crucial for advancing quality control and driving product innovation. However, effectively integrating these heterogeneous data sources poses a significant challenge, particularly when sensory evaluations are derived from low-quality, subjective, and often inconsistent annotations provided by multiple experts. We propose a comprehensive framework that leverages a correlated chained Gaussian processes model for learning from crowds, termed MAR-CCGP, specifically designed for a customized Casa Luker database that integrates sensory and physicochemical data on cacao-based products. By formulating sensory evaluations as regression tasks, our approach enables the estimation of continuous perceptual scores from physicochemical inputs, while concurrently inferring the latent, input-dependent reliability of each annotator. To address the inherent noise, subjectivity, and non-stationarity in expert-generated sensory data, we introduce a three-stage methodology: (i) construction of an integrated database that unifies physicochemical parameters with corresponding sensory descriptors; (ii) application of a MAR-CCGP model to infer the underlying ground truth from noisy, crowd-sourced, and non-stationary sensory annotations; and (iii) development of a novel localized expert trustworthiness approach, also based on MAR-CCGP, which dynamically adjusts for variations in annotator consistency across the input space. Our approach provides a robust, interpretable, and scalable solution for learning from heterogeneous and noisy sensory data, establishing a principled foundation for advancing data-driven sensory analysis and product optimization in the food science domain. 
We validate the effectiveness of our method through a series of experiments on both semi-synthetic data and a novel real-world dataset developed in collaboration with Casa Luker, which integrates sensory evaluations with detailed physicochemical profiles of cacao-based products. Compared to state-of-the-art learning-from-crowds baselines, our framework consistently achieves superior predictive performance and more precise annotator reliability estimation, demonstrating its efficacy in multi-annotator regression settings. Of note, our unique combination of a novel database, robust noisy-data regression, and input-dependent trust scoring sets MAR-CCGP apart from existing approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)

29 pages, 5578 KB  
Article
A Comprehensive Study of Machine Learning for Waste-to-Energy Process Modeling and Optimization
by Jianzhao Zhou, Jingyuan Liu, Jingzheng Ren and Chang He
Processes 2025, 13(9), 2691; https://doi.org/10.3390/pr13092691 - 24 Aug 2025
Abstract
This study presents an integrated framework combining machine learning, life cycle assessment (LCA), and heuristic optimization to achieve a low-carbon medical waste (MW)-to-fuel process. A detailed process simulation coupled with cradle-to-gate LCA is employed to generate a dataset covering diverse process operating conditions, the embodied carbon of supplying H2, and the associated carbon emission factor of MW treatment (CEF). Four machine learning techniques (support vector machine, artificial neural network, Gaussian process regression, and XGBoost) are trained, each achieving a test R2 close to 0.90 and an RMSE of ~0.26. These models are integrated with heuristic algorithms to optimize operating parameters under various green hydrogen mixes (20–80%). Our results show that the machine learning models outperform the detailed process model (DPM), achieving a minimum CEF of ~1.3 to ~1.1 kg CO2-eq/kg MW with higher computational stability. Importantly, optimization times dropped from hours (DPM) to seconds (machine learning models), and the combination of Gaussian process regression and particle swarm optimization stands out with an optimization time under one second. The optimized process holds promise for carbon reduction compared to traditional MW disposal methods. These findings show that machine learning can achieve high predictive accuracy while dramatically enhancing optimization speed and stability, providing a scalable framework for extensive scenario analysis during waste-to-energy process design and further real-time optimization applications. Full article
(This article belongs to the Special Issue Modeling and Optimization for Multi-scale Integration)
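The surrogate-plus-heuristic pattern highlighted above (for instance the GPR + particle swarm combination) relies on a cheap optimizer running over the trained model. A bare-bones PSO such as the following is one way to fill that role; all coefficients are standard textbook defaults, not the paper's settings:

```python
import random

def pso_minimize(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Bare-bones particle swarm optimization over box bounds: inertia w,
    cognitive pull c1 toward each particle's best, social pull c2 toward
    the swarm's best."""
    rng = rng or random.Random()
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the paper's workflow `f` would be the trained surrogate's CEF prediction as a function of operating parameters, which is why each optimization run finishes in well under a second.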

26 pages, 5810 KB  
Review
Bayesian Optimization for Chemical Synthesis in the Era of Artificial Intelligence: Advances and Applications
by Runqiu Shen, Guihua Luo and An Su
Processes 2025, 13(9), 2687; https://doi.org/10.3390/pr13092687 - 23 Aug 2025
Abstract
This review highlights recent advances in the application of Bayesian optimization to chemical synthesis. In the era of artificial intelligence, Bayesian optimization has emerged as a powerful machine learning approach that transforms reaction engineering by enabling efficient and cost-effective optimization of complex reaction systems. We begin with a concise overview of the theoretical foundations of Bayesian optimization, emphasizing key components such as Gaussian process-based surrogate models and acquisition functions that balance exploration and exploitation. Subsequently, we examine its practical applications across various chemical synthesis contexts, including reaction parameter tuning, catalyst screening, molecular design, synthetic route planning, self-optimizing systems, and autonomous laboratories. In addition, we discuss the integration of emerging techniques, such as noise-robust methods, multi-task learning, transfer learning, and multi-fidelity modeling, which enhance the versatility of Bayesian optimization in addressing the challenges and limitations inherent in chemical synthesis. Full article
(This article belongs to the Special Issue Machine Learning Optimization of Chemical Processes)
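The two ingredients this review emphasizes, a Gaussian process surrogate and an acquisition function balancing exploration and exploitation, can be sketched minimally for a 1-D problem. The squared-exponential kernel, length scale, and noise level below are illustrative assumptions:

```python
import math

def rbf_kernel(x1, x2, ell=0.5):
    """Squared-exponential covariance with length scale ell."""
    return math.exp(-((x1 - x2) ** 2) / (2 * ell * ell))

def gp_posterior(xtr, ytr, xq, ell=0.5, noise=1e-8):
    """GP regression posterior mean and variance at one query point,
    via a direct solve of K alpha = y (fine for a handful of points)."""
    n = len(xtr)
    k = [[rbf_kernel(a, b, ell) + (noise if i == j else 0.0)
          for j, b in enumerate(xtr)] for i, a in enumerate(xtr)]
    kq = [rbf_kernel(x, xq, ell) for x in xtr]
    # Gauss-Jordan elimination solving K alpha = y and K beta = k_* jointly
    m = [row[:] + [yi, ki] for row, yi, ki in zip(k, ytr, kq)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    alpha = [m[i][n] / m[i][i] for i in range(n)]
    beta = [m[i][n + 1] / m[i][i] for i in range(n)]
    mean = sum(a * ki for a, ki in zip(alpha, kq))
    var = max(1e-12, rbf_kernel(xq, xq, ell)
              - sum(b * ki for b, ki in zip(beta, kq)))
    return mean, var

def expected_improvement(mean, var, best):
    """EI for minimization: E[max(best - f, 0)] under N(mean, var)."""
    s = math.sqrt(var)
    z = (best - mean) / s
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (best - mean) * cdf + s * pdf
```

A Bayesian optimization loop would evaluate `expected_improvement` on a grid of candidate reaction conditions, run the experiment at the maximizer, and refit the GP, repeating until the budget is exhausted.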

25 pages, 7421 KB  
Article
Analysis of Internal Explosion Vibration Characteristics of Explosion-Proof Equipment in Coal Mines Using Laser Doppler
by Xusheng Xue, Junbiao Qiu, Hongkui Zhang, Wenjuan Yang, Huahao Wan and Fandong Chen
Appl. Sci. 2025, 15(17), 9255; https://doi.org/10.3390/app15179255 - 22 Aug 2025
Abstract
Currently, there is a lack of methods for detecting the mechanism of gas explosion propagation within flameproof enclosures and the dynamic behavior of flameproof enclosures under explosion impact. Therefore, this paper studies a method for detecting the vibration characteristics of coal mine explosion-proof equipment under internal gas explosions using laser Doppler. First, a model of gas explosion propagation and explosion transmission response in flameproof enclosures is established to reveal the mechanism of gas explosion transmission inside coal mine flameproof enclosures. Second, a laser Doppler measurement method for coal mine flameproof enclosures is proposed, along with a step-by-step progressive vibration characteristic analysis method. This begins with a single-frequency-dimension analysis using the fast Fourier transform (FFT), extends to time–frequency joint analysis using the short-time Fourier transform (STFT) to incorporate a time scale, and then advances to a three-dimensional linkage of scale, time, and frequency using the discrete wavelet transform (DWT) to overcome the fixed-window-length limitation of the STFT, thereby achieving a dynamic characterization of the detonation response. Finally, a non-symmetric Gaussian impact load inversion model is constructed to validate the overall scheme. The experimental results show that the FFT analysis identified a 2000 Hz main frequency, along with the global frequency components of the flameproof enclosure vibration signal; the STFT analysis revealed the dynamic evolution of the 2000 Hz main frequency and global frequencies over time; and the wavelet transform localized frequency amplitudes in the time domain more accurately, with better time resolution. In addition, the experimental platform showed an error of less than 5% compared with the actual measured impact load, and the error between the inverted impact load and the actual load was less than 15%. 
The experimental platform is feasible, and the inversion model has good accuracy. The laser Doppler measurement method has significant advantages over traditional coal mine flameproof equipment measurement and analysis methods and can provide further failure analysis and prevention, design optimization, and safety performance evaluation of flameproof enclosures in the future. Full article
(This article belongs to the Special Issue Advanced Blasting Technology for Mining)
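A non-symmetric Gaussian impact load, the waveform family inverted in this study, can be modeled by letting the pulse width differ before and after the peak. The specific amplitude and widths in the test are placeholders, not values from the paper:

```python
import math

def asymmetric_gaussian_load(t, peak, t_peak, sigma_rise, sigma_fall):
    """Non-symmetric Gaussian pulse: a fast Gaussian rise before the peak
    and a slower Gaussian decay after it, as a sketch of an impact-load
    time history p(t)."""
    sigma = sigma_rise if t < t_peak else sigma_fall
    return peak * math.exp(-((t - t_peak) ** 2) / (2 * sigma * sigma))
```

Load inversion then amounts to fitting `peak`, `t_peak`, and the two widths so that the predicted enclosure vibration response matches the laser Doppler measurement.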

16 pages, 2441 KB  
Article
Federated Hybrid Graph Attention Network with Two-Step Optimization for Electricity Consumption Forecasting
by Hao Yang, Xinwu Ji, Qingchan Liu, Lukun Zeng, Yuan Ai and Hang Dai
Energies 2025, 18(17), 4465; https://doi.org/10.3390/en18174465 - 22 Aug 2025
Abstract
Electricity demand forecasting is essential for smart grid management, yet it presents challenges due to the dynamic nature of consumption trends and regional variability in usage patterns. While federated learning (FL) offers a privacy-preserving solution for handling sensitive, region-specific data, traditional FL approaches struggle when local datasets are limited, often leading models to overfit noisy peak fluctuations. Additionally, many regions exhibit stable, periodic consumption behaviors, further complicating the need for a global model that can effectively capture diverse patterns without overfitting. To address these issues, we propose Federated Hybrid Graph Attention Network with Two-step Optimization for Electricity Consumption Forecasting (FedHMGAT), a hybrid modeling framework designed to balance periodic trends and numerical variations. Specifically, FedHMGAT leverages a numerical structure graph with a Gaussian encoder to model peak fluctuations as dynamic covariance features, mitigating noise-driven overfitting, while a multi-scale attention mechanism captures periodic consumption patterns through hybrid feature representation. These feature components are then fused to produce robust predictions. To enhance global model aggregation, FedHMGAT employs a two-step parameter aggregation strategy: first, a regularization term ensures parameter similarity across local models during training, and second, adaptive dynamic fusion at the server tailors aggregation weights to regional data characteristics, preventing feature dilution. Experimental results verify that FedHMGAT outperforms conventional FL methods, offering a scalable and privacy-aware solution for electricity demand forecasting. Full article
(This article belongs to the Special Issue AI, Big Data, and IoT for Smart Grids and Electric Vehicles)

29 pages, 3625 KB  
Article
Wind Farm Collector Line Fault Diagnosis and Location System Based on CNN-LSTM and ICEEMDAN-PE Combined with Wavelet Denoising
by Huida Duan, Song Bai, Zhipeng Gao and Ying Zhao
Electronics 2025, 14(17), 3347; https://doi.org/10.3390/electronics14173347 - 22 Aug 2025
Abstract
To enhance the accuracy and precision of fault diagnosis and location for the collector lines in wind farms under complex operating conditions, an intelligent combined method based on CNN-LSTM and ICEEMDAN-PE-improved wavelet threshold denoising is proposed. A wind power plant model is established using the PSCAD V4.6/EMTDC software. In response to the issue of indistinct fault current signal characteristics under complex fault conditions, a hybrid fault diagnosis model is constructed using CNN-LSTM. The convolutional neural network is utilized to extract the local time-frequency features of the current signals, while the long short-term memory network is employed to capture the dynamic time series patterns of faults. Combined with the improved phase-mode transformation, various types of faults are intelligently classified, effectively resolving the problem of fault feature extraction and achieving a fault diagnosis accuracy rate of 96.5%. To resolve the problem of small fault current amplitudes, low fault traveling wave amplitudes, and difficulty in accurate location due to noise interference in actual wind farms with high-resistance grounding faults, a combined denoising algorithm based on ICEEMDAN-PE-improved wavelet threshold is proposed. This algorithm, through the collaborative optimization of modal decomposition and entropy threshold, significantly improves the signal-to-noise ratio and reduces the root mean square error under simulated conditions with injected Gaussian white noise, stabilizing the fault location error within 0.5%. Extensive simulation results demonstrate that the fault diagnosis and location method proposed in this paper can effectively meet engineering requirements and provide reliable technical support for the intelligent operation and maintenance system of a wind farm. Full article
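The PE in ICEEMDAN-PE is permutation entropy, typically used to rank decomposed modes by how noise-like they are before thresholding. A minimal sketch of normalized permutation entropy follows; this is the standard definition, not the paper's code, and the embedding parameters are illustrative defaults.

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of series x with embedding
    dimension m and delay tau. Values near 1 indicate noise-like
    signals (candidates for heavier wavelet thresholding); values
    near 0 indicate strongly ordered signals."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        window = x[i:i + m * tau:tau]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values())) / n
    h = -(probs * np.log(probs)).sum()        # Shannon entropy of patterns
    return h / log(factorial(m))              # normalize to [0, 1]
```

A strictly monotonic series yields a single ordinal pattern and entropy 0, while white noise spreads probability across all m! patterns and approaches 1.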
(This article belongs to the Special Issue Advanced Online Monitoring and Fault Diagnosis of Power Equipment)

23 pages, 3801 KB  
Article
Multi-Variable Evaluation via Position Binarization-Based Sparrow Search
by Jiwei Hua, Xin Gu, Debing Sun, Jinqi Zhu and Shuqin Wang
Electronics 2025, 14(16), 3312; https://doi.org/10.3390/electronics14163312 - 20 Aug 2025
Abstract
The Sparrow Search Algorithm (SSA), a metaheuristic renowned for rapid convergence, good stability, and high search accuracy in continuous optimization, faces inherent limitations when applied to discrete multi-variable combinatorial optimization problems like feature selection. To enable effective multi-variable evaluation and discrete feature subset selection using SSA, a novel binary variant, Position Binarization-based Sparrow Search Algorithm (BSSA), is proposed. BSSA employs a sigmoid transformation function to convert the continuous position vectors generated by the standard SSA into binary solutions, representing feature inclusion or exclusion. Recognizing that the inherent exploitation bias of SSA and the complexity of high-dimensional feature spaces can lead to premature convergence and suboptimal solutions, we further enhance BSSA by introducing stochastic Gaussian noise (zero mean) into the sigmoid transformation. This strategic perturbation actively diversifies the search population, improves exploration capability, and bolsters the algorithm’s robustness against local optima stagnation during multi-variable evaluation. The fitness of each candidate feature subset (solution) is evaluated using the classification accuracy of a Support Vector Machine (SVM) classifier. The BSSA algorithm is compared with four high-performance optimization algorithms on 12 diverse benchmark datasets selected from the UCI repository, utilizing multiple performance metrics. Experimental results demonstrate that BSSA achieves superior performance in classification accuracy, computational efficiency, and optimal feature selection, significantly advancing multi-variable evaluation for feature selection tasks. Full article
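The binarization step described above — a sigmoid transfer function perturbed by zero-mean Gaussian noise, mapping continuous SSA positions to a binary feature mask — can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code; the noise scale and stochastic-rounding rule are assumptions.

```python
import numpy as np

def binarize_positions(positions, sigma=0.1, rng=None):
    """Convert continuous sparrow positions to a 0/1 feature-selection
    mask: perturb with zero-mean Gaussian noise (diversifies the
    population against local-optima stagnation), squash through a
    sigmoid, and round stochastically."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = positions + rng.normal(0.0, sigma, size=positions.shape)
    probs = 1.0 / (1.0 + np.exp(-noisy))      # sigmoid transfer function
    return (rng.random(positions.shape) < probs).astype(int)
```

Each resulting mask selects a feature subset whose fitness would then be scored by an SVM classifier, as the abstract describes.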