Article

Improved Football Team Training Algorithm Based on Modal Decomposition and BiLSTM Method for Short-Term Wind Power Forecasting

School of Electrical Engineering, Guangxi University, Nanning 530004, China
*
Author to whom correspondence should be addressed.
Processes 2026, 14(6), 951; https://doi.org/10.3390/pr14060951
Submission received: 29 January 2026 / Revised: 4 March 2026 / Accepted: 12 March 2026 / Published: 17 March 2026
(This article belongs to the Special Issue Adaptive Control and Optimization in Power Grids)

Abstract

Reliable wind power forecasting is essential for maintaining the safe and stable operation of power systems with high renewable energy penetration. This study proposes a short-term wind power forecasting model based on a decomposition–optimization–prediction framework, integrating complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), the improved football team training algorithm (IFTTA), and the bidirectional long short-term memory (BiLSTM) network. CEEMDAN is employed to decompose the non-stationary wind power sequence into relatively stable intrinsic mode functions (IMFs), thereby separating multi-scale fluctuation features. The IFTTA incorporates a dynamic probability allocation strategy and an adaptive parameter adjustment mechanism, which contribute to a better balance between global exploration and local exploitation. After the hyperparameters of the BiLSTM were optimized using the IFTTA, the prediction performance improved significantly. Validations were conducted on three datasets from Xinjiang, Ningxia, and Inner Mongolia, China, each containing 1440 samples (1152 for training and 288 for testing). Comparisons with benchmark forecasting models demonstrate that the proposed model reduces the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) by at least 25.29%, 29.62%, and 20.66%, respectively. Correspondingly, the coefficient of determination (R2) was improved by at least 0.0069. This model provides an effective solution for short-term wind power prediction in practical engineering.

1. Introduction

For grid-connected wind farms, short-term wind power forecasting is critical for mitigating generation variability and ensuring the secure and reliable operation of power grids. As a core support for China’s Dual Carbon goals, wind energy has seen a surge in installed capacity—with 373 GW of new renewable energy capacity added in 2024, accounting for 86% of the country’s total new power installations [1,2,3,4,5]—yet its inherent volatility poses challenges to grid stability [6,7,8]. This underscores the need to improve short-term forecasting accuracy for the reliable dispatch of power systems with high renewable penetration.
Unlike long-term forecasting for planning purposes, short-term predictions (15 min to 24 h) deliver the real-time precision required for operational decision-making [9,10]. They optimize maintenance scheduling, minimize outage losses, enable the rational allocation of balancing reserves, and reduce wind power curtailment. Moreover, the development of precise short-term forecasting technology for wind farms is urgently needed. Accurate forecasts not only support achieving renewable energy consumption targets under the dual carbon framework, but also enhance operational economic efficiency. However, operational data from wind farms are often affected by equipment noise, meteorological anomalies, and measurement errors, leading to highly non-stationary power sequences that impede forecasting accuracy [11]. Therefore, developing a high-precision short-term forecasting method tailored to the operational characteristics of existing wind farms is of significant engineering value.
Wind power forecasting methods primarily include physical approaches based on numerical weather prediction, statistical methods based on time series analysis, artificial intelligence methods based on machine learning, and hybrid models that integrate the strengths of multiple techniques [11,12,13,14,15]. Physical approaches [16,17] suffer from complex models and high computational cost. Statistical methods require a substantial volume of reliable historical power data; missing or abnormal data directly degrade model training. Advances in deep learning [18,19] have further promoted the evolution of forecasting approaches.
The hybrid forecasting model is currently the mainstream method for wind power prediction research. The typical “decomposition–prediction–optimization” three-stage framework significantly improves predictive performance. In the data preprocessing stage, signal decomposition techniques can effectively enhance the data quality. Reference [20] employed a method combining variational mode decomposition (VMD) with recurrent neural networks, confirming the effectiveness of signal decomposition in suppressing anomalous data interference. However, traditional decomposition methods such as EEMD suffer from mode aliasing issues. The CEEMDAN method proposed by Torres et al. achieves more precise signal separation through an adaptive noise control mechanism [21]. Dragomiretskiy et al. further demonstrated the advantages of adaptive signal decomposition in handling non-stationary time series [22].
Parameter optimization is a critical step in enhancing model performance. The introduction of intelligent optimization algorithms effectively addresses the hyperparameter sensitivity issue in deep learning models. While traditional optimization methods such as particle swarm optimization (PSO) [23], genetic algorithm (GA) [20], and whale optimization algorithm (WOA) [24] have achieved certain results, they still suffer from the limitation of being prone to local optima. As an emerging meta-heuristic algorithm, the football team training algorithm (FTTA) demonstrates excellent convergence properties through its unique group collaboration mechanism [25]. However, its parameter sensitivity and its imbalance between exploration and exploitation limit its application effectiveness. Standard long short-term memory (LSTM) [26] networks effectively capture long-term dependencies in time series through their gating mechanisms, but their unidirectional structure limits the full utilization of temporal context. In fact, fluctuations in wind power are not only correlated with past states, but are also influenced by the evolution of future meteorological conditions such as wind speed and atmospheric pressure. The bidirectional long short-term memory (BiLSTM) network processes sequences simultaneously in forward and backward directions, enabling a more comprehensive capture of dependencies between data points and their contextual information. This approach provides deeper insight into the dynamic patterns inherent in wind power sequences [27]. The enhanced IFTTA adopted in this study aims to overcome the limitations of the original FTTA, offering a more robust and efficient means of parameter tuning for forecasting models.
To address the aforementioned challenges, this study proposes a hybrid CEEMDAN–IFTTA–BiLSTM model for short-term wind power prediction. First, CEEMDAN is employed to adaptively decompose highly volatile wind power data, thereby reducing noise interference. Second, the FTTA is improved by introducing an adaptive parameter adjustment mechanism and a dynamic probability allocation strategy, together with a mixed grouping strategy and a three-stage learning mechanism, effectively enhancing its global exploration and local exploitation capabilities. Third, a BiLSTM network is utilized to capture both historical and future temporal features simultaneously. The IFTTA is applied to optimize the hyperparameters of the BiLSTM network, improving the overall forecasting performance. Table 1 presents the existing research findings on wind power forecasting models and their limitations.
Finally, a comparative analysis of the proposed model against other wind power forecasting models was conducted across three distinct datasets. The findings confirm that the proposed model improves both the precision and reliability of wind power forecasting.
The rest of this paper is structured as follows. Section 2 details the mathematical methods for CEEMDAN data processing and the construction of the BiLSTM prediction network; Section 3 focuses on discussing improvement strategies for the FTTA; Section 4 establishes the proposed prediction framework; Section 5 presents the experimental results of the proposed method and compares them with other existing prediction methods; and Section 6 concludes by summarizing the research findings and outlining future research directions.

2. Prediction Model Algorithms

2.1. Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN)

Traditional empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) suffer from mode aliasing and residual noise, which limit their application in non-stationary signal processing. CEEMDAN adds dynamically amplitude-dependent Gaussian white noise to the original signal, decomposes the noise-added signal via EMD, and obtains each IMF by averaging the corresponding components of multiple decompositions. The noise amplitude is dynamically adjusted based on the signal fluctuation intensity, ensuring the effective separation of different frequency components while suppressing residual noise. CEEMDAN achieves complete decomposition without mode aliasing, and the decomposition results are more robust to noise interference compared with EMD and EEMD [14,26,28].
The process of decomposing the wind power raw data by CEEMDAN is as follows [29]:
Let $x(t) = \{x_1, x_2, \ldots, x_t\}$ denote the raw time series of wind power data; $\widetilde{IMF}_k(t) = \{\widetilde{IMF}_k(1), \widetilde{IMF}_k(2), \ldots, \widetilde{IMF}_k(t)\}$ is the $k$-th IMF obtained via CEEMDAN; and $E_k(\cdot)$ is the operator that extracts the $k$-th IMF via EMD. The signal-to-noise ratio (SNR) of each stage is set by the scalar coefficient $\varepsilon_k$, which determines the standard deviation of the added white noise.
Step 1: Add adaptive Gaussian white noise to the original wind power sequence x(t), with the noise amplitude dynamically adjusted based on the sequence’s fluctuation intensity. The noise-added sequence is:
$$x^i(t) = x(t) + \varepsilon_0 \omega^i(t), \quad i = 1, 2, 3, \ldots, K$$
where $\varepsilon_0$ represents the amplitude of the Gaussian white noise added to the original time series; $\omega^i(t)$ is the $i$-th white-noise sequence; and $K$ indicates the total number of times white noise is added.
Step 2: Perform EMD decomposition on each noise-added subsequence and denote the first modal component of the $i$-th decomposition as $IMF_1^i(t)$. Then, calculate the mean using Formula (2) to obtain the first IMF of the ensemble decomposition.
$$\widetilde{IMF}_1(t) = \frac{1}{K}\sum_{i=1}^{K} IMF_1^i(t) = \overline{IMF}_1(t)$$
Step 3: Calculate the first residual.
$$r_1(t) = x(t) - \widetilde{IMF}_1(t)$$
Step 4: By constructing a new signal and performing EMD decomposition, the second IMF can be obtained.
$$\widetilde{IMF}_2(t) = \frac{1}{K}\sum_{i=1}^{K} E_1\!\left(r_1(t) + \varepsilon_1 E_1(\omega^i(t))\right)$$
$$r_2(t) = r_1(t) - \widetilde{IMF}_2(t)$$
Step 5: Similarly, repeat Step 4 to obtain the (k + 1)-th IMF and the k-th residual.
$$\widetilde{IMF}_{k+1}(t) = \frac{1}{K}\sum_{i=1}^{K} E_1\!\left(r_k(t) + \varepsilon_k E_k(\omega^i(t))\right)$$
$$r_k(t) = r_{k-1}(t) - \widetilde{IMF}_k(t)$$
Step 6: Repeat the above steps until no further useful information can be extracted from the residual, at which point the algorithm terminates. The original signal can then be expressed as:
$$x(t) = \sum_{k=1}^{K} \widetilde{IMF}_k(t) + r(t)$$
The CEEMDAN decomposition process is shown in Figure 1:
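As a concrete illustration, the averaging-over-noise-realizations structure of the steps above can be sketched in Python. The `first_imf` helper below is a crude moving-average stand-in for EMD's sifting (real implementations, such as the PyEMD package, use cubic-spline envelope sifting), so only the outer CEEMDAN recursion is faithful; the completeness property of Step 6 holds by construction regardless of the extractor used.

```python
import numpy as np

def first_imf(sig, win=5):
    # Stand-in for EMD's first sifting pass: keep the high-frequency part
    # left after removing a moving-average trend (NOT real EMD sifting).
    kernel = np.ones(win) / win
    trend = np.convolve(sig, kernel, mode="same")
    return sig - trend

def ceemdan_sketch(x, n_imfs=4, n_noise=50, eps=0.2, seed=0):
    """Sketch of the CEEMDAN recursion described in Steps 1-6 above."""
    rng = np.random.default_rng(seed)
    noises = rng.standard_normal((n_noise, x.size))
    # Steps 1-2: average the first IMF over K noise-added copies of x.
    imfs = [np.mean([first_imf(x + eps * w) for w in noises], axis=0)]
    r = x - imfs[0]                           # Step 3: first residual
    for _ in range(1, n_imfs):                # Steps 4-5: recurse on residuals
        imf_k = np.mean(
            [first_imf(r + eps * first_imf(w)) for w in noises], axis=0)
        imfs.append(imf_k)
        r = r - imf_k
    return np.array(imfs), r                  # Step 6: x = sum(IMFs) + r

t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, resid = ceemdan_sketch(x)
```

Because each residual is formed by subtracting the averaged IMF, the decomposition telescopes and reconstructs the signal exactly, which is the "complete" property that distinguishes CEEMDAN from EEMD.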

2.2. Bidirectional Long Short-Term Memory Network Model (BiLSTM)

Long short-term memory (LSTM) networks address the gradient vanishing and exploding issues inherent in traditional recurrent neural networks (RNNs), enhancing memory retention during prolonged training. LSTM networks regulate information influx and decay through a gate mechanism comprising a forget gate, an input gate, and an output gate. The LSTM architecture is illustrated in Figure 2, and the gate computations are as follows:
$$g_t = \sigma(\omega_{xg} x_t + \omega_{hg} h_{t-1} + \omega_{cg} c_{t-1} + b_g)$$
$$i_t = \sigma(\omega_{xi} x_t + \omega_{hi} h_{t-1} + \omega_{ci} c_{t-1} + b_i)$$
$$c_t = g_t c_{t-1} + i_t \tanh(\omega_{xc} x_t + \omega_{hc} h_{t-1} + b_c)$$
$$o_t = \sigma(\omega_{xo} x_t + \omega_{ho} h_{t-1} + \omega_{co} c_t + b_o)$$
$$h_t = o_t \tanh(c_t)$$
where $\sigma(x) = \frac{1}{1+e^{-x}}$ and $\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$; $\omega_{xg}, \omega_{xi}, \omega_{xc}, \omega_{xo}$ denote the weight matrices connecting to the input $x_t$; $\omega_{hg}, \omega_{hi}, \omega_{hc}, \omega_{ho}$ denote the weight matrices connecting to the previous hidden state $h_{t-1}$; $\omega_{cg}, \omega_{ci}, \omega_{co}$ denote the weight matrices connecting to the memory cell state; $b_g, b_i, b_c, b_o$ are the bias vectors; $g_t$, $i_t$, and $o_t$ denote the forget gate, input gate, and output gate, respectively; and $\{x_1, x_2, \ldots, x_T\}$ represents the input sequence.
LSTM can only process forward information, making it difficult to fully extract potential features from wind power sequences. In contrast, BiLSTM consists of both forward and backward LSTM layers, enabling it to simultaneously capture the intrinsic relationships between current power parameters and past/future time points. This significantly enhances the utilization efficiency of wind power sequence features. In the forward LSTM layer of BiLSTM, data are trained sequentially; in the backward LSTM layer, data are trained in reverse order. The structure of BiLSTM is shown in Figure 3. The data flow is not only from the past to the future but also from the future to the past [30,31,32].
The output of BiLSTM at time t is as follows:
$$\overrightarrow{h}_t = \sigma(\omega_{x1} x_t + \omega_{h1} \overrightarrow{h}_{t-1} + \overrightarrow{b})$$
$$\overleftarrow{h}_t = \sigma(\omega_{x2} x_t + \omega_{h2} \overleftarrow{h}_{t+1} + \overleftarrow{b})$$
$$y_t = \omega_{y1} \overrightarrow{h}_t + \omega_{y2} \overleftarrow{h}_t + b_y$$
where $\overrightarrow{h}_t$, $\overleftarrow{h}_t$, and $y_t$ represent the forward hidden state, backward hidden state, and output vector, respectively; $\omega_{x1}, \omega_{x2}$ are the weight matrices connecting to the input $x_t$; $\omega_{h1}, \omega_{h2}$ are the weight matrices connecting to the adjacent hidden states; $\omega_{y1}, \omega_{y2}$ are the weight matrices of the output at the current time step; and $\overrightarrow{b}$, $\overleftarrow{b}$, $b_y$ are the bias vectors of the forward layer, backward layer, and output layer, respectively.
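To make the bidirectional wiring concrete, the toy scalar sketch below scans the sequence in both directions and combines the two hidden states at each step, mirroring the three equations above. It is a deliberate simplification: the gates are omitted, so a single sigmoid stands in for the full LSTM cell, and all weights are hypothetical scalars.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidirectional_pass(x, wx1, wh1, bf, wx2, wh2, bb, wy1, wy2, by):
    """Simplified bidirectional recurrent pass: a forward state scanned
    left-to-right, a backward state scanned right-to-left, combined at
    each step into the output y_t (gates omitted for brevity)."""
    T = len(x)
    h_f = np.zeros(T)
    h_b = np.zeros(T)
    hf = 0.0
    for t in range(T):                     # forward layer: past -> future
        hf = sigmoid(wx1 * x[t] + wh1 * hf + bf)
        h_f[t] = hf
    hb = 0.0
    for t in reversed(range(T)):           # backward layer: future -> past
        hb = sigmoid(wx2 * x[t] + wh2 * hb + bb)
        h_b[t] = hb
    return wy1 * h_f + wy2 * h_b + by      # y_t combines both directions

x = np.array([0.1, 0.5, -0.3, 0.8])
y = bidirectional_pass(x, 0.6, 0.4, 0.0, 0.5, 0.3, 0.0, 1.0, 1.0, 0.0)
```

Note that changing a *future* input (e.g. `x[3]`) changes `y[0]` through the backward chain, which is exactly the contextual information a unidirectional LSTM cannot use.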

3. Improved Football Team Training Optimization Algorithm (IFTTA)

The collective training stage of the traditional football team training optimization algorithm (FTTA) [25] adopts four fixed state transition probabilities, lacking adaptability to the dynamic nature of the search process. To overcome this constraint, the present study introduced a multi-mode search strategy utilizing dynamic probability allocation. This approach effectively strikes a balance between global exploration and local refinement by modifying the probability distribution across the various search behaviors.

3.1. Dynamic Probability Allocation Strategy

Traditional FTTAs categorize training states [25] into four types: followers, discoverers, thinkers, and fluctuators. During the initial phase, state transitions employ an equal-probability allocation scheme. This allocation pattern fails to adapt to the algorithm’s varying search strategy requirements across different optimization stages, making it difficult to achieve a dynamic equilibrium between global exploration and local exploitation capabilities [25]. To address this, this study introduced a dynamic probability allocation mechanism [33], enabling adaptive adjustment of the transition probabilities for each training state throughout the iteration process. This aligns with the search requirements of different optimization phases. The probability allocation expression is as follows:
$$p_1(t) = 0.7 \times \left(1 - \frac{t}{T}\right) + 0.1, \quad p_2 = 0.2, \quad p_3 = 0.1, \quad p_4 = 0.1$$
where $t$ denotes the current iteration count; $T$ represents the maximum iteration count; and $p_1(t)$, $p_2$, $p_3$, $p_4$ represent the allocation probabilities of Modes 1, 2, 3, and 4, respectively, as defined below.
In the improved football team training optimization algorithm, the design of the dynamic probability allocation strategy adheres to the optimization process’s phase characteristics: emphasizing exploration in the early stages, focusing on exploitation in the later stages, and maintaining diversity throughout the entire process. Specifically, the probability p 1 ( t ) of Mode 1 is designed to decrease linearly with iteration count [33,34]. This ensures that the follower dominates the search process during early iterations, guiding the population to rapidly focus on promising regions. In later stages, its influence gradually diminishes to prevent premature convergence caused by over-reliance on elite individuals. The Mode 2 probability p 2 is fixed at 0.2, corresponding to a strong exploration strategy based on extreme value differences [35]. This maintains the algorithm’s global search capability and population diversity throughout the entire process. The Mode 3 probability p 3 and the Mode 4 probability p 4 are both fixed at 0.1, representing a balanced strategy and an adaptive perturbation strategy, respectively [25,36]. Their proportions are small in the early stages. As p 1 ( t ) decreases, their relative roles in the later search gradually increase, naturally achieving a transition from broad exploration to detailed refinement. This probability allocation mechanism achieves dynamic adaptive switching of search modes through a structurally simple approach: dynamically adjusting only the dominant probability p 1 , supplemented by fixed probabilities p 2 , p 3 , and p 4 . This lays the foundation for the effective coordination of subsequent multi-mode training strategies.
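Under these definitions, mode selection at each iteration can be sketched as a roulette-wheel draw over the four probabilities. Since the stated values sum to more than 1 in early iterations, the sketch renormalizes them before drawing — an assumption on our part, as the paper does not state how the surplus is handled; the function name is illustrative.

```python
import random

def select_mode(t, T, rng=random):
    """Pick a training mode via the dynamic probabilities above:
    p1 decays linearly over the run while p2-p4 stay fixed.
    Probabilities are renormalized so they always sum to 1 (assumption)."""
    p1 = 0.7 * (1 - t / T) + 0.1
    probs = [p1, 0.2, 0.1, 0.1]
    total = sum(probs)
    probs = [p / total for p in probs]
    r, acc = rng.random(), 0.0
    for mode, p in enumerate(probs, start=1):
        acc += p
        if r < acc:
            return mode
    return 4  # guard against floating-point rounding

rng = random.Random(42)
modes_early = [select_mode(0, 100, rng) for _ in range(1000)]
modes_late = [select_mode(100, 100, rng) for _ in range(1000)]
```

Early in the run Mode 1 dominates the draws; at the final iteration its share drops to the fixed floor, handing the search over to the exploration and refinement modes.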

3.2. Multi-Modal Training Strategy

Based on the aforementioned probability allocation mechanism, four complementary training modes were designed, each addressing different optimization requirements.
1. Training Mode 1: Directed learning mode
This mode simulates the process of players learning from the best individuals in the team, combining elite guidance and group information to enhance search directionality [37], as shown in Figure 4a. The search direction expression [34] is as follows:
$$x_i^{new} = x_i + r_1 \times (x_{best} - x_i) + 0.1 \times r_2 \times (\bar{x} - x_i)$$
where $x_i^{new}$ is the updated position of individual $i$; $x_{best}$ is the best individual in the team; $x_i$ is the solution requiring optimization; $\bar{x}$ denotes the population mean position; and $r_1$ and $r_2$ are random vectors within the range $[0, 1]$.
2. Training Mode 2: Information-guided training
This mode utilizes the difference between the best and worst individuals in the population for information-driven training, as shown in Figure 4b. It broadens the search space by leveraging extreme value variations and improves the ability to escape local optima by incorporating random disturbances [38]. The expression is as follows:
$$\omega = 0.5 + 0.5 \times \mathrm{rand}$$
$$x_i^{new} = x_i + \omega \times r_1 \times (x_{best} - x_{worst}) + 0.05 \times \lambda(0, 1)$$
where $\omega$ represents the adaptive weight; $x_{worst}$ is the worst individual in the team; $\lambda(0, 1)$ denotes a standard normal random number; and $\mathrm{rand}$ is a random number uniformly distributed in the interval $[0, 1]$.
3. Training Mode 3: Balance training mode
This mode considers staying away from disadvantaged areas and getting as close as possible to advantageous areas [35]. The bidirectional equilibrium exploration is shown in Figure 4c. The expression is as follows:
$$\omega_1 = 0.3 + 0.7 \times \frac{t}{T}$$
$$x_i^{new} = x_i - \omega_1 \times r_1 \times (x_{worst} - x_i) + (1 - \omega_1) \times r_2 \times (x_{best} - x_i)$$
where ω 1 represents the dynamic weight that shifts the search focus.
As the iteration process increases, the search behavior gradually transitions from focusing on escaping from disadvantaged areas in the early stage to focusing on approaching advantageous areas in the later stage.
4. Training Mode 4: Disturbance training mode
Relatively large multiplicative perturbations in the early stage enhance global exploration capability, while additive perturbations combined with elite guidance in the later stage improve local exploitation efficiency. The adaptive perturbation strategy is shown in Figure 4d. The staged disturbance strategy [36] is as follows:
$$x_i^{new} = \begin{cases} x_i \times \left(1 + 0.5 \times \lambda(0,1) \times \left(1 - \frac{t}{T}\right)\right), & t < \frac{T}{2} \\ x_i + 0.1 \times \lambda(0,1) \times (x_{best} - x_i), & t \geq \frac{T}{2} \end{cases}$$
Through the dynamic probability allocation mechanism, Modes 1 and 2 broadly search promising regions across different hyperparameter combinations, while Modes 3 and 4 perform fine-grained exploitation and local fine-tuning within advantageous parameter regions.
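The four update rules above can be sketched as a single vectorized step. This is an illustrative reconstruction under stated assumptions — a population matrix of shape (N, D), a minimization convention (lower fitness is better), and the hypothetical function name `ifta_update` — not the authors' code.

```python
import numpy as np

def ifta_update(pop, fitness, mode, t, T, rng):
    """One position update of the whole population for a given training
    mode; `pop` is (N, D), `fitness` is (N,), lower fitness is better."""
    best = pop[np.argmin(fitness)]        # elite individual x_best
    worst = pop[np.argmax(fitness)]       # worst individual x_worst
    mean = pop.mean(axis=0)               # population mean position
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    lam = rng.standard_normal(pop.shape)  # lambda(0, 1) perturbation
    if mode == 1:   # directed learning toward the best and the mean
        return pop + r1 * (best - pop) + 0.1 * r2 * (mean - pop)
    if mode == 2:   # information-guided: best-worst difference + noise
        w = 0.5 + 0.5 * rng.random()
        return pop + w * r1 * (best - worst) + 0.05 * lam
    if mode == 3:   # balance: leave the worst region, approach the best
        w1 = 0.3 + 0.7 * t / T
        return pop - w1 * r1 * (worst - pop) + (1 - w1) * r2 * (best - pop)
    # mode 4: staged disturbance
    if t < T / 2:   # early: multiplicative perturbation, decaying with t
        return pop * (1 + 0.5 * lam * (1 - t / T))
    return pop + 0.1 * lam * (best - pop)  # late: elite-guided additive

rng = np.random.default_rng(0)
pop = rng.random((5, 3))
fitness = rng.random(5)
new_pop = ifta_update(pop, fitness, mode=1, t=10, T=35, rng=rng)
```

In a full optimizer, `select_mode`-style sampling would choose `mode` per iteration and the fitness of `new_pop` would be evaluated before the next update.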

3.3. Uncertainty Analysis of the IFTTA

To further validate the robustness of the IFTTA optimization algorithm, an uncertainty analysis was conducted through 60 independent runs (with different random initializations and hyperparameter perturbations). The algorithm’s performance stability under random factors was quantified by measuring the uncertainty characteristics of the performance metric distribution [39,40].
Figure 5 shows the box plots of the IFTTA uncertainty performance metrics. The median in each box plot reflects the typical accuracy level of the model, while the box height (interquartile range) characterizes the fluctuation range between runs. The whiskers and outliers indicate performance deviations under extreme conditions. The boxes for the MAE, RMSE, and MAPE metrics were narrow with few outliers, indicating that the IFTTA effectively constrains error fluctuations caused by random initialization and hyperparameter perturbations. This ensures relatively stable error levels across different operating conditions. The R2 metric consistently clustered within the high-value range of 0.9981 to 0.999, with no significant box dispersion. This demonstrates the model’s strong ability to explain wind power variations and excellent reproducibility across repeated training runs, indicating that the optimization effectiveness of the IFTTA does not significantly degrade due to random factors.
Figure 6 shows the probability density distribution of the IFTTA uncertainty performance metrics. The probability density distributions of RMSE and MAE exhibited peak concentration with short tails, indicating that the vast majority of independent runs achieved comparable high-precision levels. Only a small number of runs experienced minor error increases due to extreme random perturbations. The probability density distribution of R2 was highly clustered near 1, further validating that the models adapted by the IFTTA optimization algorithm are not merely “one-off optimal” but achieve statistically optimal and stable performance.
Through uncertainty analysis, it is evident that the IFTTA effectively suppresses fluctuations in model performance caused by random initialization and hyperparameter perturbations. This achieves statistically high accuracy and stability in predictive performance, significantly enhancing the model’s robustness against disturbances.

3.4. Iterative Analysis

Figure 7 illustrates the convergence behavior of different optimization strategies during the hyperparameter search. As iterations progress, the IFTTA optimization curve descends more rapidly and smoothly than other curves, indicating that the IFTTA optimization process can continuously refine hyperparameter combinations and converge to a stable region. This highlights its advantages in both “search efficiency” and “final achievable optimality level”, thereby providing supporting evidence for the ultimate performance enhancement.

4. CEEMDAN–IFTTA–BiLSTM

This study presents a hybrid model for short-term wind power prediction, combining CEEMDAN, IFTTA, and BiLSTM. The overall framework is shown in Figure 8.
The technical workflow of this study is illustrated in Figure 9. First, historical wind power data are processed using the sliding window method to generate input samples suitable for time series forecasting. Next, the preprocessed historical power data undergo CEEMDAN decomposition, splitting them into multiple intrinsic mode function (IMF) sub-sequences. Each IMF component is normalized to eliminate scale and amplitude differences between features. Subsequently, the data are divided into training and testing subsets. During the training phase, the BiLSTM is trained while the loss function is evaluated. The IFTTA further enhances the prediction model’s performance: when the optimization criteria are not met, the IFTTA continues adjusting the hyperparameters and retraining the model until an optimal solution is achieved. Finally, the model is retrained with the optimal hyperparameter combination and evaluated on the test set, and the prediction results from each modal component are combined to obtain the final prediction. The performance of the model is assessed through relevant evaluation metrics.
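The per-component normalization step in this workflow can be illustrated with a simple min-max scaling sketch. The paper does not state which scaling it uses, so scaling to [0, 1] is an assumption here; the inverse mapping is needed to restore component predictions to the original power scale before they are summed.

```python
import numpy as np

def minmax_scale(x):
    """Min-max normalization of one IMF component to [0, 1], removing
    scale and amplitude differences between components (a common choice)."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def minmax_invert(x_scaled, params):
    """Map scaled values (e.g. component predictions) back to the
    original power scale."""
    lo, hi = params
    return x_scaled * (hi - lo) + lo

imf = np.array([3.0, 7.0, 5.0, 9.0])   # stand-in for one IMF component
scaled, params = minmax_scale(imf)
restored = minmax_invert(scaled, params)
```

The scaling parameters must be fitted on the training split only and then reused on the test split, otherwise information leaks from the test set into training.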

5. Example Analysis

5.1. Data Description

For short-term wind power forecasting, wind speed data collected from wind farms are generally employed as the input for prediction models. The datasets used in this study were obtained from wind farms located in three distinct climatic and wind resource regions across China. To reduce geographical constraints and improve data representativeness and coverage, wind farms were selected from the arid and semi-arid region of Northwest China (Xinjiang, Ningxia) and the mid-temperate continental climate zone (Inner Mongolia). The specific datasets are described as follows: Dataset 1 consists of historical wind power generation data from a wind farm in Xinjiang, covering the period from 1 to 15 March 2019; Dataset 2 contains historical wind power generation records from a wind farm in Ningxia, collected from 15 to 30 June 2019; and Dataset 3 includes wind power generation records from a wind farm in Inner Mongolia, spanning 1 to 15 December 2019.
All three wind farms have a consistent installed capacity of 200 MW, which eliminates interference caused by capacity differences in the generation data. Data were recorded daily from 00:00 to 23:45 at a 15-min sampling interval to guarantee data continuity and integrity. Each wind farm provided 1440 valid data samples, among which 230 missing values were filled via linear interpolation to maintain data continuity. In total, 80% of the sample data were used for model training and optimization, and the remaining 20% were applied as the test set for short-term wind power forecasting, thereby ensuring the rationality and reliability of model training and evaluation.
Using sliding window technology, input–output sample pairs are generated from preprocessed wind power time series data. Based on model parameter settings, each sample’s input consists of historical feature vectors spanning 24 consecutive time steps, while the output is the target wind power value for the single time step immediately following this input sequence. The sliding window step size was set to 1, meaning that the window advances only one time step per iteration. By constructing highly overlapping sample pairs, this approach maximizes the utilization of temporal correlations between adjacent moments in the wind power time series, fully leveraging the data’s sequential characteristics.
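This sample construction can be sketched directly. The function name and the stand-in series below are illustrative, not from the paper; the window length (24), forecast horizon (1 step), and step size (1) follow the description above.

```python
import numpy as np

def make_windows(series, window=24, horizon=1, step=1):
    """Build input-output pairs with a sliding window: each input is
    `window` consecutive steps, the target is the value `horizon` steps
    after the window; the window advances by `step` per sample."""
    X, y = [], []
    for start in range(0, len(series) - window - horizon + 1, step):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1])
    return np.array(X), np.array(y)

power = np.arange(100, dtype=float)  # stand-in for a wind power series
X, y = make_windows(power, window=24)
```

With step size 1, consecutive samples overlap in 23 of their 24 input steps, which is what maximizes the use of the temporal correlation between adjacent moments.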

5.2. Experimental Environment

The experimental hardware environment comprised a 12th Gen Intel(R) Core (TM) i7-13700H 2.40 GHz processor, equipped with 16 GB of RAM and 1 TB SSD storage, with MATLAB 2025a used as the software platform.

5.3. Evaluation Indicators

This paper evaluated the prediction model using five performance metrics: mean absolute error (MAE), which indicates the level of prediction error; root mean square error (RMSE), reflecting the difference between actual and predicted values; mean absolute percentage error (MAPE); the coefficient of determination $R^2$; and calculation time (CT). These metrics complement each other, providing a more accurate reflection of the actual prediction error [30,31,39,41]. Their respective equations are as follows:
$$MAE = \frac{1}{L}\sum_{t=1}^{L}\left| Y_{PRE}(t) - Y_{ACT}(t) \right|$$
$$RMSE = \sqrt{\frac{1}{L}\sum_{t=1}^{L}\left( Y_{PRE}(t) - Y_{ACT}(t) \right)^2}$$
$$MAPE = \frac{1}{L}\sum_{t=1}^{L}\left| \frac{Y_{ACT}(t) - Y_{PRE}(t)}{Y_{ACT}(t)} \right| \times 100\%$$
$$R^2 = 1 - \frac{\sum_{t=1}^{L}\left( Y_{ACT}(t) - Y_{PRE}(t) \right)^2}{\sum_{t=1}^{L}\left( Y_{ACT}(t) - \overline{Y}_{ACT} \right)^2}$$
where $L$ denotes the size of the test dataset; $Y_{PRE}(t)$ is the $t$-th predicted value of the wind power data; $Y_{ACT}(t)$ is the $t$-th actual value; and $\overline{Y}_{ACT}$ is the mean of the actual values.
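The four accuracy metrics above translate directly into a few lines of NumPy. This is a minimal sketch; the MAPE term assumes the actual values are nonzero, which in practice means near-zero power periods are excluded or floored.

```python
import numpy as np

def metrics(y_act, y_pre):
    """Compute MAE, RMSE, MAPE (%), and R^2 as defined above."""
    err = y_pre - y_act
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_act)) * 100.0  # assumes y_act != 0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_act - y_act.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mape, r2

y_act = np.array([10.0, 12.0, 15.0, 14.0])   # illustrative values
y_pre = np.array([11.0, 12.0, 14.0, 14.0])
mae, rmse, mape, r2 = metrics(y_act, y_pre)
```

Note that MAE and RMSE carry the units of the power series (e.g. MW), MAPE is dimensionless, and $R^2$ is bounded above by 1, which is why the paper reports its improvement as an absolute increment rather than a percentage.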

5.4. CEEMDAN Decomposition Results

CEEMDAN was used to preprocess the historical wind power data. The noise standard deviation was set to 0.2, the number of noise additions was 500, and the maximum number of iterations was 5000. Figure 10 shows the signal after CEEMDAN decomposition. As can be observed from Figure 10, CEEMDAN decomposed Dataset 1, Dataset 2, and Dataset 3 into 11 sequences, labeled IMF1 through IMF11. Each component reflected the characteristics of the original signal at a different frequency scale, and each IMF component exhibited clear extreme points. A larger number of extreme points indicates faster signal fluctuations and a higher frequency; conversely, fewer extreme points suggest a lower frequency. Therefore, a higher IMF index corresponds to a lower frequency.

5.5. Model Training

The parameters of the BiLSTM used in this study are shown in Table 2. As shown in Figure 7, IFTTA converged after 35 iterations, so the training iteration count was set to 35. The hyperparameters of the BiLSTM optimized by IFTTA, including learning rate, number of hidden nodes, and L2 regularization parameter, are shown in Table 3.

5.6. Model Comparison Analysis

To validate the predictive performance of the proposed method, we used LSTM, autoregressive integrated moving average (ARIMA), support vector regression (SVR), CEEMDAN–BiLSTM, CEEMDAN–FDA–BiLSTM, CEEMDAN–FTTA–BiLSTM, and the proposed CEEMDAN–IFTTA–BiLSTM methods to predict the wind power. The comparison of the prediction result curves of different algorithms for three datasets is shown in Figure 11, Figure 12 and Figure 13.
As illustrated in Figure 11, Figure 12 and Figure 13, the prediction outcomes of the proposed approach exhibited strong agreement with the actual data when compared to alternative methods. Specifically, compared to traditional statistical models (ARIMA) and regression models (SVR) as well as standalone deep learning models (LSTM), the combined model CEEMDAN–BiLSTM—derived from CEEMDAN decomposition—exhibited lower error rates. However, it still showed significant discrepancies from the actual data. Optimizing the hyperparameters of the prediction model through algorithmic refinement can further enhance forecasting accuracy. Compared to the force directed algorithm (FDA) and FTTA optimization algorithms proposed in recent years, the IFTTA optimization algorithm achieved a more substantial improvement in prediction precision.
As shown in Table 4, the predictive capability of the model was significantly enhanced by decomposing the data using the CEEMDAN method and optimizing the hyperparameters of the BiLSTM through the IFTTA. In Dataset 1, compared to the LSTM, ARIMA, SVR, CEEMDAN–BiLSTM, CEEMDAN–FDA–BiLSTM, and CEEMDAN–FTTA–BiLSTM, MAE decreased by 73.25%, 78.73%, 56.65%, 62.44%, 48.91%, and 25.29%, respectively, while RMSE decreased by 77.47%, 81.18%, 58.13%, 68.43%, 56.78%, and 29.62%, respectively. MAPE decreased by 42.94%, 39.32%, 33.73%, 25.90%, 29.11%, and 21.13%, respectively, while R2 increased by 0.1261, 0.1836, 0.0318, 0.0609, 0.0294, and 0.0069, respectively. In Dataset 2, compared to the other six models, MAE decreased by 78.75%, 84.70%, 67.47%, 65.83%, 58.33%, and 54.03%, respectively. RMSE decreased by 80.72%, 84.65%, 66.93%, 69.06%, 60.19%, and 55.78%, respectively, MAPE decreased by 59.45%, 51.22%, 45.48%, 33.90%, 24.12%, and 10.23%, respectively, while R2 increased by 0.0896, 0.1468, 0.0242, 0.0290, 0.0137, and 0.0093, respectively. In Dataset 3, compared to the other six models, MAE decreased by 66.14%, 71.25%, 49.75%, 48.96%, 31.94%, and 35.84%, respectively. RMSE decreased by 66.93%, 70.98%, 48.26%, 53.60%, 33.26%, and 45.05%, respectively, MAPE decreased by 62.10%, 58.02%, 53.74%, 36.32%, 32.91%, and 20.66%, respectively, while R2 increased by 0.0582, 0.0777, 0.0196, 0.0261, 0.0089, and 0.0166, respectively.
In terms of computational time, the proposed CEEMDAN–IFTTA–BiLSTM model required an average of 271 s, longer than the other models; however, its superior prediction accuracy compensates for the additional computational cost in high-precision forecasting scenarios.
In summary, the prediction model proposed in this article adopts a dynamic probability allocation and multi-mode search strategy, which effectively improves hyperparameter optimization and prediction accuracy. The proposed model achieved significantly better wind power prediction performance than the other models across different seasons and regions.
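The paper's exact IFTTA update rules are given in its methodology section; purely as an illustration of the dynamic-probability idea, a schedule that gradually shifts selection probability from global exploration toward local exploitation might look like the following (the linear schedule, function names, and default probabilities are assumptions for illustration, not the authors' rules):

```python
import random

def strategy_probabilities(t, t_max, p_explore_start=0.8, p_explore_end=0.2):
    """Linearly shift probability mass from global exploration toward
    local exploitation as iteration t approaches t_max (illustrative)."""
    frac = t / t_max
    p_explore = p_explore_start + (p_explore_end - p_explore_start) * frac
    return {"explore": p_explore, "exploit": 1.0 - p_explore}

def pick_strategy(t, t_max, rng=random.random):
    """Roulette-style choice between the two search modes."""
    probs = strategy_probabilities(t, t_max)
    return "explore" if rng() < probs["explore"] else "exploit"

# Early iterations favour exploration; late iterations favour exploitation.
early = strategy_probabilities(0, 100)    # explore prob. starts near 0.8
late = strategy_probabilities(100, 100)   # explore prob. ends near 0.2
```

Such a schedule captures the exploration-to-exploitation balance the IFTTA is credited with, without reproducing its specific operators.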

6. Conclusions

To address the high volatility and randomness of wind power generation, a short-term wind power forecasting model based on CEEMDAN–IFTTA–BiLSTM was proposed. According to the experimental results, the following conclusions can be drawn:
(1)
CEEMDAN adaptively decomposes the original wind power sequence, which effectively reduces noise interference, enhances the extraction of temporal features, and provides more regular inputs for the prediction model.
(2)
By improving the FTTA with a multi-mode search strategy and dynamic probability allocation, the IFTTA overcomes the subjectivity of traditional empirical tuning. Compared with the FTTA, the IFTTA enhances both the global search and local optimization abilities of hyperparameter tuning, leading to a substantial improvement in wind power forecasting accuracy.
(3)
Compared to the other six models, the proposed method in this paper reduced the MAE, RMSE, and MAPE by at least 25.29%, 29.62%, and 21.13%, respectively, in Dataset 1, while increasing R2 by at least 0.0069. In Dataset 2, MAE, RMSE, and MAPE decreased by at least 54.03%, 55.78%, and 10.23%, respectively, while R2 improved by at least 0.0093. In Dataset 3, MAE, RMSE, and MAPE decreased by at least 31.94%, 33.26%, and 20.66%, respectively, while R2 improved by at least 0.0089. The proposed forecasting method significantly enhances the prediction accuracy of wind power generation, providing robust support for optimizing the scheduling and economic operation of wind power systems.
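The "at least" figures for Dataset 1 correspond to the gap between CEEMDAN–FTTA–BiLSTM and the proposed model in Table 4 and can be reproduced directly (a quick arithmetic check, not part of the original analysis):

```python
def reduction_pct(baseline, proposed):
    """Percentage reduction of an error metric relative to a baseline."""
    return (baseline - proposed) / baseline * 100.0

# Dataset 1, CEEMDAN-FTTA-BiLSTM vs. CEEMDAN-IFTTA-BiLSTM (values from Table 4)
mae_red = reduction_pct(1.7278, 1.2909)   # MAE  reduction, about 25.29 %
rmse_red = reduction_pct(2.5491, 1.7941)  # RMSE reduction, about 29.62 %
mape_red = reduction_pct(9.94, 7.84)      # MAPE reduction, about 21.13 %
r2_gain = 0.9933 - 0.9864                 # R2 gain, about 0.0069
```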
It should be noted that the model proposed in this paper still requires further exploration and refinement for predictions at the extreme points of the power curve. In future work, more meteorological factors will be considered, and multi-source data fusion will be employed to further improve the model’s forecasting accuracy under complex weather conditions.

Author Contributions

Conceptualization, Y.L.; Methodology, Y.L.; Software, Y.L.; Validation, L.X., Y.L., C.L. and L.L.; Formal analysis, Y.L. and C.L.; Investigation, L.X., Y.L., L.L. and F.L.; Resources, L.X.; Writing—original draft, Y.L.; Writing—review & editing, L.X. and Y.L.; Visualization, Y.L. and F.L.; Supervision, L.X.; Project administration, L.X.; Funding acquisition, L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangxi Natural Science Foundation, grant number 2021GXNSFAA220132.

Data Availability Statement

Data may be shared and made available upon request. Researchers with a legitimate need may contact the corresponding author to obtain the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. National Energy Administration. Grid Operation Status of Renewable Energy in 2024. 2025. Available online: https://www.nea.gov.cn/20250221/e10f363cabe3458aaf78ba4558970054/c.html (accessed on 29 July 2025).
  2. Wang, Y.; Zou, R.; Liu, F.; Zhang, L.; Liu, Q. A review of wind speed and wind power forecasting with deep neural networks. Appl. Energy 2021, 304, 117766. [Google Scholar] [CrossRef]
  3. Lei, Y.; Wang, Z.; Wang, D.; Zhang, X.; Che, H.; Yue, X.; Tian, C.; Zhong, J.; Guo, L.; Li, L.; et al. Co-benefits of carbon neutrality in enhancing and stabilizing solar and wind energy. Nat. Clim. Change 2023, 13, 693–700. [Google Scholar]
  4. Holttinen, H.; Lindroos, T.J.; Lehtilä, A.; Koljonen, T.; Kiviluoma, J.; Korpås, M. Estimating the CO2 Impacts of Wind Energy in the Transition Towards Carbon-Neutral Energy Systems. Energies 2025, 18, 1548. [Google Scholar] [CrossRef]
  5. Goyal, V.; Aishwarya, M.; C F, T.C.; Varghese, J.; Marathe, A.; Bhagat, G.P.; Hemalatha, H. Role of Wind Energy in Achieving Global Carbon Neutrality: Challenges and Opportunities. E3S Web Conf. 2024, 591, 02002. [Google Scholar] [CrossRef]
  6. Nomandela, S.; Mnguni, M.E.S.; Raji, A.K. Adaptive Control and Interoperability Frameworks for Wind Power Plant Integration: A Comprehensive Review of Strategies, Standards, and Real-Time Validation. Appl. Sci. 2025, 15, 12729. [Google Scholar] [CrossRef]
  7. Xie, Y.; Li, C.; Li, M.; Liu, F.; Taukenova, M. An overview of deterministic and probabilistic forecasting methods of wind energy. iScience 2023, 26, 105804. [Google Scholar] [CrossRef]
  8. Baningobera, B.E.; Oleinikova, I.; Uhlen, K.; Pokhrel, B.R. Challenges and solutions in low-inertia power systems with high wind penetration. IET Gener. Transm. Distrib. 2024, 18, 4221–4244. [Google Scholar]
  9. Soman, S.S.; Zareipour, H.; Malik, O.; Mandal, P. A review of wind power and wind speed forecasting methods with different time horizons. In Proceedings of the North American Power Symposium 2010, Arlington, TX, USA, 26–28 September 2010. [Google Scholar]
  10. Wan, C.; Xu, Z.; Pinson, P.; Dong, Z.Y.; Wong, K.P. Probabilistic Forecasting of Wind Power Generation Using Extreme Learning Machine. IEEE Trans. Power Syst. 2014, 29, 1033–1044. [Google Scholar] [CrossRef]
  11. Deng, X.; Shao, H.; Hu, C.; Jiang, D.; Jiang, Y. Wind Power Forecasting Methods Based on Deep Learning: A Survey. Comput. Model. Eng. Sci. 2020, 122, 273–302. [Google Scholar] [CrossRef]
  12. Hanifi, S.; Liu, X.; Lin, Z.; Lotfian, S. A Critical Review of Wind Power Forecasting Methods—Past, Present and Future. Energies 2020, 13, 3764. [Google Scholar] [CrossRef]
  13. Yang, W.; Wang, J.; Niu, T.; Du, P. A hybrid forecasting system based on a dual decomposition strategy and multi-objective optimization for electricity price forecasting. Appl. Energy 2019, 235, 1205–1225. [Google Scholar] [CrossRef]
  14. Rayi, V.K.; Mishra, S.; Naik, J.; Dash, P. Adaptive VMD based optimized deep learning mixed kernel ELM autoencoder for single and multistep wind power forecasting. Energy 2022, 244, 122585. [Google Scholar] [CrossRef]
  15. Hossain, M.A.; Chakrabortty, R.K.; Elsawah, S.; Ryan, M.J. Very short-term forecasting of wind power generation using hybrid deep learning model. J. Clean. Prod. 2021, 296, 126564. [Google Scholar] [CrossRef]
  16. Lu, H.; Ma, X.; Ma, M. A hybrid multi-objective optimizer-based model for daily electricity demand prediction considering COVID-19. Energy 2021, 219, 119568. [Google Scholar] [CrossRef] [PubMed]
  17. Liu, C.; Zhang, X.; Mei, S.; Zhen, Z.; Jia, M.; Li, Z.; Tang, H. Numerical weather prediction enhanced wind power forecasting: Rank ensemble and probabilistic fluctuation awareness. Appl. Energy 2022, 313, 118769. [Google Scholar] [CrossRef]
  18. Giantsidi, S.; Tarantola, C. Deep learning for financial forecasting: A review of recent trends. Int. Rev. Econ. Financ. 2025, 104, 104719. [Google Scholar] [CrossRef]
  19. Mojtahedi, F.F.; Yousefpour, N.; Chow, S.H.; Cassidy, M. Deep Learning for Time Series Forecasting: Review and Applications in Geotechnics and Geosciences. Arch. Comput. Methods Eng. 2025, 32, 3415–3445. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Pan, G.; Chen, B.; Han, J.; Zhao, Y.; Zhang, C. Short-term wind speed prediction model based on GA-ANN improved by VMD. Renew. Energy 2020, 156, 1373–1388. [Google Scholar] [CrossRef]
  21. Torres, M.E.; Colominas, M.A.; Schlotthauer, G.; Flandrin, P. A complete ensemble empirical mode decomposition with adaptive noise. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011. [Google Scholar]
  22. Dragomiretskiy, K.; Zosso, D. Variational Mode Decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  23. You, H.; Bai, S.; Wang, R.; Li, Z.; Xiang, S.; Huang, F. New PSO-SVM Short-Term Wind Power Forecasting Algorithm Based on the CEEMDAN Model. J. Electr. Comput. Eng. 2022, 2022, 7161445. [Google Scholar] [CrossRef]
  24. Ding, Y.; Chen, Z.; Zhang, H.; Wang, X.; Guo, Y. A short-term wind power prediction model based on CEEMD and WOA-KELM. Renew. Energy 2022, 189, 188–198. [Google Scholar] [CrossRef]
  25. Tian, Z.; Gai, M. Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization. Expert Syst. Appl. 2024, 245, 123088. [Google Scholar] [CrossRef]
  26. Jiang, T.; Liu, Y. A short-term wind power prediction approach based on ensemble empirical mode decomposition and improved long short-term memory. Comput. Electr. Eng. 2023, 110, 108830. [Google Scholar] [CrossRef]
  27. Graves, A.; Schmidhuber, J.R. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 2005, 18, 602–610. [Google Scholar] [CrossRef]
  28. Zeng, W.; Cao, Y.; Feng, L.; Fan, J.; Zhong, M.; Mo, W.; Tan, Z. Hybrid CEEMDAN-DBN-ELM for online DGA serials and transformer status forecasting. Electr. Power Syst. Res. 2023, 217, 109176. [Google Scholar] [CrossRef]
  29. Fang, N.; Liu, Z.; Fan, S. Short-Term Wind Power Prediction Method Based on CEEMDAN-VMD-GRU Hybrid Model. Energies 2025, 18, 1465. [Google Scholar] [CrossRef]
  30. Xiong, J.; Peng, T.; Tao, Z.; Zhang, C.; Song, S.; Nazir, M.S. A dual-scale deep learning model based on ELM-BiLSTM and improved reptile search algorithm for wind power prediction. Energy 2023, 266, 126419. [Google Scholar] [CrossRef]
  31. Kuang, M.; Liu, X.; Zhao, M.; Zhang, H.; Wu, X.; Tian, Y. MC-VMD-CNN-BiLSTM short-term wind power prediction considering rolling error correction. Eng. Res. Express 2024, 6, 045304. [Google Scholar] [CrossRef]
  32. Zhang, Z.; Deng, A.; Wang, Z.; Li, J.; Zhao, H.; Yang, X. Wind Power Prediction Based on EMD-KPCA-BiLSTM-ATT Model. Energies 2024, 17, 2568. [Google Scholar] [CrossRef]
  33. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings: IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 4–9 May 1998. [Google Scholar]
  34. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995. [Google Scholar]
  35. Neri, F.; Tirronen, V. Recent advances in differential evolution: A survey and experimental analysis. Artif. Intell. Rev. 2010, 33, 61–106. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  37. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  38. Akay, B.; Karaboga, D. A modified Artificial Bee Colony algorithm for real-parameter optimization. Inf. Sci. 2012, 192, 120–142. [Google Scholar] [CrossRef]
  39. Yan, Y.; Qian, Y.; Zhou, Y. Nonparametric Probabilistic Prediction of Ultra-Short-Term Wind Power Based on MultiFusion–ChronoNet–AMC. Energies 2025, 18, 1646. [Google Scholar] [CrossRef]
  40. Lehmann, C.; Paromau, Y. Quantifying Uncertainty and Variability in Machine Learning: Confidence Intervals for Quantiles in Performance Metric Distributions. arXiv 2025, arXiv:2501.16931. [Google Scholar]
  41. Wang, S.; Shi, J.; Yang, W.; Yin, Q. High and low frequency wind power prediction based on Transformer and BiGRU-Attention. Energy 2024, 288, 129753. [Google Scholar] [CrossRef]
Figure 1. CEEMDAN decomposition algorithm flowchart.
Figure 2. LSTM architecture.
Figure 3. BiLSTM architecture diagram.
Figure 4. Motion trajectory diagram. (a) Elite-guided directional learning. (b) Extreme value difference-driven exploration. (c) The bidirectional equilibrium exploration. (d) The adaptive perturbation strategy.
Figure 5. IFTTA uncertainty performance indicator.
Figure 6. Probability density distribution of IFTTA uncertainty performance indicators.
Figure 7. Convergence behavior of different optimization strategies across datasets during the hyperparameter search. (a) Dataset 1; (b) Dataset 2; (c) Dataset 3.
Figure 8. CEEMDAN–IFTTA–BiLSTM structural diagram.
Figure 9. CEEMDAN–IFTTA–BiLSTM flowchart.
Figure 10. CEEMDAN decomposition diagram. (a) Dataset 1 CEEMDAN decomposition diagram; (b) Dataset 2 CEEMDAN decomposition diagram; (c) Dataset 3 CEEMDAN decomposition diagram.
Figure 11. Comparison of prediction results for Dataset 1.
Figure 12. Comparison of prediction results for Dataset 2.
Figure 13. Comparison of prediction results for Dataset 3.
Table 1. Current research progress.

| Research Category | Typical Methods | Key Advantages | Limitations |
|---|---|---|---|
| Physical prediction methods | Numerical Weather Prediction (NWP) | Based on meteorological physical mechanisms; requires minimal historical data; suitable for new wind farms | Complex models with high computational cost; accuracy heavily influenced by the precision of the meteorological model |
| Traditional statistical methods | Time series analysis (AR, ARMA, ARIMA) | Simple model structure, fast computation, high interpretability | Poor fit for nonlinear, non-stationary wind power sequences; strongly affected by anomalous or missing data |
| Single deep learning model | LSTM | Captures long-term temporal dependencies; adapts to nonlinear fluctuations | Unidirectional structure cannot fully exploit forward and backward temporal context |
| Composite deep prediction model | BiLSTM | Bidirectional time-series modeling extracts historical and future correlation features simultaneously | Hyperparameter sensitivity makes manual tuning inefficient and limits accuracy |
| Decomposition–prediction hybrid models | EEMD/CEEMDAN + deep learning model | Decomposes non-stationary sequences; reduces noise and mode aliasing; enhances prediction stability | Residual interference persists after a single decomposition without further optimization |
| Intelligent optimization algorithms | PSO, GA, WOA | Automatically optimize model hyperparameters to enhance generalization | Prone to local optima; difficult to balance convergence speed and accuracy |
| Novel intelligent optimization algorithms | FTTA (football team training algorithm) | Strong swarm collaboration mechanism with good convergence performance | Parameter sensitivity and an exploration–exploitation imbalance limit its effectiveness |
| The model proposed in this paper | CEEMDAN–IFTTA–BiLSTM | 1. CEEMDAN adaptive denoising and decomposition; 2. IFTTA hyperparameter optimization balancing exploration and exploitation; 3. BiLSTM bidirectional temporal feature extraction | High predictive accuracy, but errors remain at the extreme points of the prediction curve |
Table 2. BiLSTM parameters.

| Parameter | Value |
|---|---|
| Sliding window | 1 |
| Data window | 24 |
| Number of convolutional layers | 0 |
| BiLSTM hidden units | 64 |
| BiLSTM layers | 1 |
| Optimizer | Adam |
| Initial learning rate | 0.002 |
| L2 regularization | 0.000 |
Table 3. BiLSTM hyperparameter optimization.

| Dataset | Parameter | Optimization Range | Optimization Result |
|---|---|---|---|
| Dataset 1 | Learning rate | (1 × 10⁻⁵, 0.001) | 9.0 × 10⁻⁴ |
| | BiLSTM hidden nodes | (100, 200) | 144 |
| | L2 regularization parameter | (1 × 10⁻⁶, 0.01) | 4.0 × 10⁻⁵ |
| Dataset 2 | Learning rate | (1 × 10⁻⁵, 0.001) | 8.0 × 10⁻⁴ |
| | BiLSTM hidden nodes | (100, 200) | 160 |
| | L2 regularization parameter | (1 × 10⁻⁶, 0.01) | 3.0 × 10⁻⁵ |
| Dataset 3 | Learning rate | (1 × 10⁻⁵, 0.001) | 9.5 × 10⁻⁴ |
| | BiLSTM hidden nodes | (100, 200) | 152 |
| | L2 regularization parameter | (1 × 10⁻⁶, 0.01) | 4.5 × 10⁻⁵ |
Table 4. Comparison of prediction performance for Datasets 1–3.

| Dataset | Prediction Model | MAE (MW) | RMSE (MW) | MAPE (%) | R2 | CT (s) |
|---|---|---|---|---|---|---|
| Dataset 1 | LSTM | 4.8249 | 7.9631 | 13.74 | 0.8672 | 92 |
| | ARIMA | 6.0702 | 9.5338 | 12.92 | 0.8097 | 38 |
| | SVR | 2.9785 | 4.2854 | 11.83 | 0.9615 | 44 |
| | CEEMDAN–BiLSTM | 3.4370 | 5.6831 | 10.58 | 0.9324 | 168 |
| | CEEMDAN–FDA–BiLSTM | 2.5269 | 4.1505 | 11.06 | 0.9639 | 213 |
| | CEEMDAN–FTTA–BiLSTM | 1.7278 | 2.5491 | 9.94 | 0.9864 | 222 |
| | CEEMDAN–IFTTA–BiLSTM | 1.2909 | 1.7941 | 7.84 | 0.9933 | 268 |
| Dataset 2 | LSTM | 15.4028 | 24.3498 | 14.28 | 0.9009 | 98 |
| | ARIMA | 21.3856 | 30.5883 | 11.87 | 0.8437 | 41 |
| | SVR | 10.0610 | 14.1976 | 10.62 | 0.9663 | 46 |
| | CEEMDAN–BiLSTM | 9.5777 | 15.1751 | 8.76 | 0.9615 | 176 |
| | CEEMDAN–FDA–BiLSTM | 7.8531 | 11.7953 | 7.63 | 0.9768 | 228 |
| | CEEMDAN–FTTA–BiLSTM | 7.1189 | 10.6162 | 6.45 | 0.9812 | 235 |
| | CEEMDAN–IFTTA–BiLSTM | 3.2727 | 4.6948 | 5.79 | 0.9905 | 287 |
| Dataset 3 | LSTM | 5.3516 | 7.9937 | 13.88 | 0.9347 | 87 |
| | ARIMA | 6.3030 | 9.1096 | 12.53 | 0.9152 | 36 |
| | SVR | 3.6055 | 5.1104 | 11.37 | 0.9733 | 42 |
| | CEEMDAN–BiLSTM | 3.5502 | 5.6980 | 8.26 | 0.9668 | 161 |
| | CEEMDAN–FDA–BiLSTM | 2.6624 | 3.9612 | 7.84 | 0.9840 | 202 |
| | CEEMDAN–FTTA–BiLSTM | 2.8238 | 4.8120 | 6.63 | 0.9763 | 214 |
| | CEEMDAN–IFTTA–BiLSTM | 1.8119 | 2.6439 | 5.26 | 0.9929 | 259 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Xie, L.; Luo, Y.; Li, C.; Li, L.; Liu, F. Improved Football Team Training Algorithm Based on Modal Decomposition and BiLSTM Method for Short-Term Wind Power Forecasting. Processes 2026, 14, 951. https://doi.org/10.3390/pr14060951
