Article

A Novel Hybrid Model Based on an Improved Seagull Optimization Algorithm for Short-Term Wind Speed Forecasting

1 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044, China
3 Smart Energy Center, CSIC (Chongqing) Haizhuang Wind Power Equipment Co., Ltd., Chongqing 401122, China
* Author to whom correspondence should be addressed.
Processes 2021, 9(2), 387; https://doi.org/10.3390/pr9020387
Submission received: 18 January 2021 / Revised: 14 February 2021 / Accepted: 15 February 2021 / Published: 20 February 2021

Abstract

Wind energy is a clean energy source and is receiving widespread attention. Improving the operating efficiency and economic benefits of wind power generation systems depends on more accurate short-term wind speed predictions. In this study, a new hybrid model for short-term wind speed forecasting is proposed. The model combines variational mode decomposition (VMD), the proposed improved seagull optimization algorithm (ISOA) and the kernel extreme learning machine (KELM) network. The model adopts a hybrid modeling strategy: first, VMD is used to decompose the wind speed time series into several wind speed subseries. Second, a KELM optimized by ISOA is used to predict each decomposed subseries. The ISOA technique is employed to accurately find the best parameters of each KELM network so that the predictability of a single KELM model can be enhanced. Finally, the predictions of the wind speed subseries are aggregated to obtain the final wind speed forecast. This hybrid model effectively characterizes the nonlinear and nonstationary characteristics of wind speed and greatly improves the forecasting performance. The experimental results demonstrate that: (1) the proposed VMD-ISOA-KELM model obtains the best performance for three different prediction horizons compared with the other classic individual models, and (2) the proposed hybrid model combining the VMD technique and the ISOA optimization algorithm performs better than models using other data preprocessing techniques.

1. Introduction

To achieve global clean energy development, reduce greenhouse gas emissions and avert the depletion of nonrenewable fossil energy reserves, the large-scale use of clean energy has become a global energy development trend [1,2]. Among the various widely used new energy sources, wind energy is exploited worldwide due to its wide distribution, pollution-free nature and sustainability, and tapping its potential is of great significance for adjusting the traditional energy structure. According to a report released by the Global Wind Energy Council (GWEC), the newly installed global wind power capacity in 2019 was 60.4 GW, bringing the cumulative total to 651 GW. As of the end of 2019, China's cumulative installed wind power capacity had reached 210 GW [3]. The chaotic, random and intermittent characteristics of wind speed pose considerable challenges to power systems. Violent fluctuations of wind power over a short period cause short-term imbalances in the power system, which may cause it to collapse. Therefore, accurate wind speed forecasting is critical for predicting wind power output and stabilizing the operating state of the power system.
At present, wind speed prediction methods mainly fall into four categories: (i) physical model methods, (ii) time series methods, (iii) spatial correlation methods and (iv) artificial intelligence methods [4,5,6]. Physical model methods mainly use physical parameters describing the conditions under which wind is generated to construct complex mathematical equations, and rely on numerical weather prediction (NWP) for simulation. Classic numerical simulation approaches include the high-resolution limited area model (HIRLAM) [7], the fifth-generation mesoscale model (MM5) [8] and the weather research and forecasting (WRF) model [9]. However, physical methods have disadvantages such as the difficulty of obtaining physical data and the consumption of large computing resources, and they are unsuitable for short-term wind speed prediction [10]. Time series methods exploit the temporal dependence and correlation in historical wind speed data to build a model. Common statistical wind speed models include autoregressive (AR) [11], autoregressive moving average (ARMA) [12], autoregressive integrated moving average (ARIMA) [13] and autoregressive fractionally integrated moving average (ARFIMA) [14] models. Although time series approaches are simpler and more economical than physical model methods, they are limited by the nonlinearity and nonstationarity of the wind speed time series. The spatial correlation method, by contrast, uses wind speed data from sites surrounding the target location and selects appropriate sites to build a spatial model. Samalot et al. [15] successfully combined Kalman filtering and kriging to reduce the bias of the weather research and forecasting (WRF) model. However, this method has strict measurement requirements and is difficult to implement.
In addition, with the rise of artificial intelligence, artificial intelligence methods have shown strong advantages in extracting the nonlinear characteristics of wind speed fluctuations and have gradually become a research hotspot in the field of forecasting. Many methods, including artificial neural networks (ANNs) [16,17], support vector machines (SVMs) [18,19] and fuzzy logic (FL) methods [20,21], have been applied to wind speed prediction. Monfared et al. [22] combined fuzzy logic with an artificial neural network, which not only effectively reduced the rule base but also improved the accuracy of wind speed prediction. Li et al. [23] compared three neural networks (adaptive linear element (ALE), back propagation (BP) and radial basis function (RBF) networks) for 1-h wind speed prediction and concluded that the best prediction model depends not only on the type of neural network but also on the data source. Guo et al. [24] proposed a backpropagation neural network wind speed prediction method that eliminates seasonal effects to predict the daily average wind speed; this method can effectively remove seasonal effects from actual wind speed data. Zhang et al. [25] proposed a two-step method to determine the connection weights of an RBF network for predicting future wind speed intervals; compared with the traditional multilayer perceptron (MLP) method, this method can effectively improve the prediction intervals. Compared with traditional neural networks, the extreme learning machine (ELM) has a faster convergence speed and requires less human intervention, which gives it strong generalization ability on heterogeneous datasets [26].
Neural networks improve the prediction accuracy of wind speed series to a certain extent. However, the instability of the wind speed sequence and the corresponding noise also create considerable interference during neural network training; as a result, the trained model performs poorly and the wind speed prediction error is large. Therefore, to address the random interference in wind speed sequences, various preprocessing technologies have been developed. Liu et al. [27] used wavelet transform (WT) preprocessing to decompose the original sequence into multiple wind speed subsequences, and then made predictions with an echo state network. Niu et al. [28] used empirical mode decomposition (EMD) to decompose the original signal and then predicted each subsequence with a general regression neural network (GRNN) optimized by the fruit fly optimization algorithm (FOA), which improved the accuracy of wind prediction. However, EMD cannot effectively decompose the original wind speed series because of disadvantages such as end effects and modal aliasing. Subsequently, Ren et al. [29] studied prediction models based on EMD, its improved versions and two intelligent algorithms, and suggested complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) combined with support vector regression (SVR) as the best wind speed prediction method. Zhou et al. [30] proposed a hybrid framework for multistep wind speed prediction based on variational mode decomposition (VMD) and convolutional neural networks. Furthermore, chaos theory has attracted increasing attention, since multifractal patterns of wind speed can be obtained through chaotic characteristics analysis. Jiang et al. [31] employed a hybrid linear-nonlinear modeling method based on chaos theory to capture the linear and nonlinear factors hidden in wind speed time series, which incorporated VMD to remove the noise in the original data. The experimental results showed that the hybrid model was more accurate than other models.
Based on the analysis above, artificial intelligence methods have been the most extensive and successful approaches to short-term wind speed prediction, but the prediction ability of a single artificial intelligence method is limited. Hybrid approaches have shown better performance than single models. Therefore, it has gradually become a popular trend to apply data preprocessing techniques before sending wind speed data into forecasting models.
In this study, a novel hybrid strategy is proposed that includes three portions: data preprocessing, optimization and forecasting. Specifically, based on the decomposition and integration strategy, VMD decomposition is used to decompose the original wind speed series into several variational modes to filter out the noise in the original wind speed time series. Then, the KELM prediction network is applied to the problem of wind speed forecasting. At the same time, the improved seagull optimization algorithm is used to optimize the kernel parameters of the KELM network, thereby forming a hybrid model.
The main contributions and innovations of this research are as follows: (1) Data preprocessing technology is included to reduce the volatility and randomness of the wind speed series and improve the prediction accuracy; VMD decomposes the original wind speed series into a set of relatively stable modes. (2) In the prediction phase, a kernel function is added to the ELM to map the one-dimensional wind speed sequence into a high-dimensional space for prediction, which reduces the difficulty of prediction. (3) An improved seagull optimization algorithm (ISOA) is proposed to determine the two best parameters of KELM simultaneously. During prediction, ISOA continuously searches for the two kernel-related parameters of KELM while retaining the best approximate solution found in each search, so that the KELM network is optimized and both the accuracy and the stability of the prediction are improved. (4) A systematic assessment system is established to evaluate the forecasting ability of the developed hybrid model. Four multistep prediction experiments and three performance indicators are included in this study to compare and analyze the forecasting capacity of the proposed hybrid model in each case.

2. Methods

The technologies used in the hybrid strategy are introduced in this section, including the data preprocessing technology (VMD), the KELM network and the improved seagull optimization algorithm. In the last part, the workflow of the hybrid strategy is presented.

2.1. Variational Mode Decomposition (VMD)

VMD is a novel signal decomposition method proposed by Dragomiretskiy and Zosso in 2014 [32]; it decomposes a one-dimensional signal into a finite number of band-limited modes, each around a center frequency, through an iterative search. VMD has good adaptive ability and can overcome modal aliasing. It can decompose nonstationary wind speed time series into subseries called intrinsic mode functions (IMFs), each of which contains rich information. The mathematical model of VMD can be expressed as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_k u_k = f,$$
where f is the signal to be decomposed, δ ( t ) is the impulse function and u k and ω k are the k -th mode component and the corresponding center frequency, respectively.
To solve the constrained problem in Equation (1), a Lagrange multiplier $\lambda$ and a quadratic penalty factor $\alpha$ are introduced, leading to the augmented Lagrangian:
$$L\big(\{u_k\},\{\omega_k\},\lambda\big) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f - \sum_{k=1}^K u_k \right\|_2^2 + \left\langle \lambda,\; f - \sum_{k=1}^K u_k \right\rangle,$$
The following shows the whole process of VMD decomposition:
Step 1: Initialize $\{\hat{u}_k^1\}$, $\{\omega_k^1\}$ and $\hat{\lambda}^1$, and set $n = 0$, where the hat denotes the frequency-domain representation obtained through the Parseval/Plancherel Fourier isometry.
Step 2: Update $\hat{u}_k$, $\omega_k$ and $\hat{\lambda}$ according to Equations (3)–(5), respectively;
$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2}}{1 + 2\alpha\,(\omega - \omega_k)^2},$$
$$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega \, |\hat{u}_k(\omega)|^2 \, d\omega}{\int_0^{\infty} |\hat{u}_k(\omega)|^2 \, d\omega},$$
$$\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^n(\omega) + \tau \left( \hat{f}(\omega) - \sum_k \hat{u}_k^{n+1}(\omega) \right),$$
Step 3: Repeat step 2 until the stopping criterion in Equation (6) is satisfied, then output the resulting modes.
$$\sum_k \frac{\left\| \hat{u}_k^{n+1} - \hat{u}_k^n \right\|_2^2}{\left\| \hat{u}_k^n \right\|_2^2} < \varepsilon.$$
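For illustration, the iteration above can be prototyped directly from Equations (3)–(6). The following Python/NumPy sketch is a minimal, assumption-laden implementation (not the authors' code): the mirror extension of the signal, the handling of the positive-frequency spectrum and the default values of K, alpha and tau are all illustrative choices.

```python
import numpy as np

def vmd(f, K=4, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Minimal VMD sketch following Equations (3)-(6) (frequency-domain updates)."""
    T = len(f)
    # Mirror-extend the signal to reduce boundary effects (a common convention, assumed here).
    f_ext = np.concatenate([f[T // 2 - 1::-1], f, f[-1:T // 2 - 1:-1]])
    N = len(f_ext)
    omega_axis = np.arange(N) / N - 0.5             # centred frequency axis after fftshift
    f_hat = np.fft.fftshift(np.fft.fft(f_ext))
    f_hat_plus = f_hat.copy()
    f_hat_plus[: N // 2] = 0                        # keep only the positive-frequency half

    u_hat = np.zeros((K, N), dtype=complex)         # mode spectra
    omega_k = 0.5 * np.arange(K) / K                # initial centre frequencies
    lam_hat = np.zeros(N, dtype=complex)            # Lagrange multiplier spectrum

    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Eq. (3): Wiener-filter-like update of mode k
            u_hat[k] = (f_hat_plus - others + lam_hat / 2) / (1 + 2 * alpha * (omega_axis - omega_k[k]) ** 2)
            # Eq. (4): centre frequency = spectral centroid of the mode
            p = np.abs(u_hat[k, N // 2:]) ** 2
            omega_k[k] = np.dot(omega_axis[N // 2:], p) / (p.sum() + 1e-14)
        # Eq. (5): dual ascent enforcing sum_k u_k = f
        lam_hat += tau * (f_hat_plus - u_hat.sum(axis=0))
        # Eq. (6): relative-change stopping criterion
        diff = sum(np.sum(np.abs(u_hat[k] - u_prev[k]) ** 2) /
                   (np.sum(np.abs(u_prev[k]) ** 2) + 1e-14) for k in range(K))
        if diff < tol:
            break

    # Back to the time domain: restore Hermitian symmetry, invert, crop the mirror extension.
    u = np.zeros((K, T))
    for k in range(K):
        full = np.zeros(N, dtype=complex)
        full[N // 2:] = u_hat[k, N // 2:]
        full[1:N // 2] = np.conj(u_hat[k, N // 2 + 1:][::-1])
        u[k] = np.real(np.fft.ifft(np.fft.ifftshift(full)))[T // 2: T // 2 + T]
    return u, omega_k

# Hypothetical usage: imfs, centres = vmd(wind_speed_series, K=5)
```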

2.2. Kernel Extreme Learning Machine

KELM is a kernel-based extension of the ELM, which is a single hidden layer feedforward neural network (SLFN). Traditional feedforward neural networks train slowly, easily fall into local minima and are sensitive to the choice of learning rate. The ELM randomly generates the connection weights between the input layer and the hidden layer and the biases of the hidden neurons, and then obtains a unique optimal solution analytically. For $N$ arbitrary distinct samples $(x_i, o_i)$, where $x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in \mathbb{R}^n$ and $o_i = [o_{i1}, o_{i2}, \ldots, o_{im}]^T \in \mathbb{R}^m$, the output of an ELM with $L$ hidden neurons can be expressed as
$$\Theta(x_j) = \sum_{i=1}^{L} \beta_i \, g(a_i \cdot x_j + b_i) = o_j, \quad j = 1, 2, \ldots, N,$$
where $g(\cdot)$ represents the activation function of the hidden layer, $a_i = [a_{i1}, a_{i2}, \ldots, a_{in}]^T$ is the input weight vector of the $i$-th hidden neuron, $\beta_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^T$ is its output weight vector and $b_i$ is its bias.
Equation (7) can be simplified as
H β = T ,
where
$$H = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_N) \end{bmatrix} = \begin{bmatrix} g(a_1 \cdot x_1 + b_1) & \cdots & g(a_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(a_1 \cdot x_N + b_1) & \cdots & g(a_L \cdot x_N + b_L) \end{bmatrix}_{N \times L},$$
$$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m} \quad \text{and} \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m},$$
where H is called the ELM hidden layer output matrix. Training a network of ELMs can be understood as finding a suitable set of a ^ , b ^ and β ^ satisfying:
$$\left\| H(\hat{a}, \hat{b})\,\hat{\beta} - T \right\| = \min_{a, b, \beta} \left\| H(a, b)\,\beta - T \right\|,$$
The regularization coefficient C is introduced and the regularized least square solution is obtained:
$$\hat{\beta} = H^T \left( I/C + H H^T \right)^{-1} T,$$
Thus, the output function of the ELM model is transformed into:
$$\Theta(x) = h(x)\,\hat{\beta} = H\hat{\beta},$$
KELM combines the ELM algorithm with a kernel function. The idea of the kernel function is to map the input spatial sample data to the high-dimensional feature space, and replace the inner product operation in the transformed high-dimensional space with the kernel function operation in the original input space.
In the KELM, the H H T of Equation (12) is constructed as follows:
$$\left( H H^T \right)_{i,j} = K(x_i, x_j),$$
Then, we can deduce Equation (15),
$$H H^T = \Omega_{ELM}, \qquad \Omega_{ELM}(i,j) = h(x_i) \cdot h(x_j) = K(x_i, x_j),$$
where K ( , ) denotes the kernel functions. It can be seen that KELM’s output function Θ ( x ) and the output layer β are as follows:
$$\begin{cases} \Theta(x) = h(x)\,\beta = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^T \left( I/C + \Omega_{ELM} \right)^{-1} T \\[6pt] \beta = \left( I/C + \Omega_{ELM} \right)^{-1} T \end{cases},$$
It is worth noting that the Gaussian kernel function is employed in this paper according to the Mercer theorem as follows:
$$K(x_i, x_j) = \exp\left( -\frac{\left\| x_i - x_j \right\|^2}{\gamma^2} \right),$$
where $\gamma$ is the kernel parameter. Therefore, two parameters need to be adjusted in KELM, and its accuracy can be improved by tuning $C$ and $\gamma$.
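As a concrete illustration, a minimal Gaussian-kernel KELM regressor following Equations (14)–(17) might be written as below. This is a sketch under assumptions: the class name, the default values of C and gamma, and the use of a linear solve in place of an explicit matrix inverse are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

class KELM:
    """Minimal kernel extreme learning machine with a Gaussian kernel (Eqs. (14)-(17))."""

    def __init__(self, C=100.0, gamma=1.0):
        self.C = C            # regularization coefficient C
        self.gamma = gamma    # Gaussian kernel parameter gamma
        self.X_train = None
        self.alpha = None     # beta = (I/C + Omega_ELM)^{-1} T, Eq. (16)

    def _kernel(self, A, B):
        # K(x_i, x_j) = exp(-||x_i - x_j||^2 / gamma^2), Eq. (17)
        d2 = np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-np.maximum(d2, 0.0) / self.gamma ** 2)

    def fit(self, X, y):
        self.X_train = np.asarray(X, dtype=float)
        T = np.asarray(y, dtype=float).reshape(len(X), -1)
        omega = self._kernel(self.X_train, self.X_train)                   # Omega_ELM, Eq. (15)
        self.alpha = np.linalg.solve(np.eye(len(X)) / self.C + omega, T)   # Eq. (16)
        return self

    def predict(self, X):
        k = self._kernel(np.asarray(X, dtype=float), self.X_train)  # [K(x, x_1) ... K(x, x_N)]
        return k @ self.alpha                                       # Theta(x), Eq. (16)
```

For example, `KELM(C=100, gamma=2).fit(X_train, y_train).predict(X_test)` would produce subseries forecasts, with C and gamma supplied by the optimizer described in the next subsection.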

2.3. The Proposed ISOA Algorithm

2.3.1. Seagull Optimization Algorithm

An increasing number of scholars have become committed to the design and development of new intelligent optimization algorithms. Dhiman G and Kumar V [33] developed a new type of bioinspired optimization algorithm, the seagull optimization algorithm, by studying the biological characteristics of seagulls. Seagulls live in groups, using their intelligence to find and attack their prey. The most important characteristics of seagulls are migration and aggressive behavior. The mathematical expression of the natural behavior of seagulls is as follows.
During the migration process, seagulls move from one position to another and meet three conditions:
  • Avoid collision: To avoid collisions with other seagulls, variable A is employed to calculate the new position of the search seagull.
    C s ( t ) = A × P s ( t ) ,
    where C s ( t ) represents a new position that does not conflict with other search seagulls, P s ( t ) represents the current position of the search seagull, t represents the current iteration and A represents the motion behavior of the search seagull in a given search space.
$$A = f_c - t \times \left( \frac{f_c}{Max_{iteration}} \right),$$
where $t = 0, 1, 2, \ldots, Max_{iteration}$, and $f_c$ controls the frequency of variation of $A$; with $f_c = 2$, the value of $A$ decreases linearly from 2 to 0.
  • Best position: After avoiding overlapping with other seagulls, seagulls will move in the direction of the best position.
$$M_s(t) = B \times \big( P_{bs}(t) - P_s(t) \big),$$
where $M_s(t)$ represents the movement of the search seagull towards the best position $P_{bs}(t)$, and $B$ is a random factor responsible for balancing global and local search.
$$B = 2 \times A^2 \times rd,$$
    where r d is a random number that lies in the range of [ 0 , 1 ] .
  • Close to the best search seagull: After the seagull moves to a position where it does not collide with other seagulls, it moves in the direction of the best position to reach its new position.
    D s ( t ) = | C s ( t ) + M s ( t ) | ,
    where D s ( t ) represents the best fit search seagull.
Seagulls can constantly change their attack angle and speed during their migration. They use their wings and weight to maintain height. When attacking prey, they move in a spiral shape in the air. The motion behavior in the x , y and z planes is described as follows:
x = r × cos ( θ ) ,
y = r × sin ( θ ) ,
z = r × θ ,
$$r = u \times e^{\theta v},$$
where r is the radius of the spiral and θ is a random angle in the range of [ 0 , 2 π ] . u and v are the correlation constants of the spiral shape, and e is the base of the natural logarithm. The attack position of seagulls is constantly updated.
P s ( t ) = D s ( t ) × x × y × z + P b s ( t ) ,
where P s ( t ) saves the best solution and updates the position of other search seagulls.
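For clarity, one position update of a single seagull according to Equations (18)–(27) can be sketched as follows; the vectorization over dimensions and the uniform sampling of the spiral angle are assumptions consistent with, but not dictated by, the description above.

```python
import numpy as np

def soa_update(P_s, P_bs, A, u=1.0, v=1.0, rng=np.random):
    """One SOA position update for a single seagull, following Eqs. (18)-(27)."""
    C_s = A * P_s                          # Eq. (18): collision-avoidance position
    B = 2 * A ** 2 * rng.random()          # Eq. (21): balances global and local search
    M_s = B * (P_bs - P_s)                 # Eq. (20): movement towards the best seagull
    D_s = np.abs(C_s + M_s)                # Eq. (22): distance to the best seagull
    theta = rng.uniform(0.0, 2 * np.pi)    # random spiral angle in [0, 2*pi]
    r = u * np.exp(theta * v)              # Eq. (26): spiral radius
    x, y, z = r * np.cos(theta), r * np.sin(theta), r * theta   # Eqs. (23)-(25)
    return D_s * x * y * z + P_bs          # Eq. (27): new (attack) position
```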

2.3.2. Improved Seagull Optimization Algorithm (ISOA)

The SOA has the advantages of being able to solve large-scale constrained problems with low computational cost and fast convergence, giving it strong competitiveness compared with other optimization algorithms. However, the global search control of SOA is linear, as shown in Equation (19). This linear decay means that the global search capability of SOA cannot be fully exploited. Therefore, we propose a nonlinear search control formula, shown in Equation (28), which targets the exploration stage of the seagull population and improves the speed and accuracy of the algorithm.
$$A = f_c \times \frac{1}{e^{\,4\left( t / Max_{iteration} \right)^4}},$$
where e represents the base of natural logarithm.
The specific implementation procedure of the proposed ISOA is as follows (a code sketch is given after the steps):
Step 1: Set the initial parameters of the SOA, including A , B , M a x i t e r a t i o n , f c = 2 , u = 1 , and v = 1 .
Step 2: Initialize the seagull population.
Step 3: Calculate the fitness value of each seagull using the fitness function and select the current best seagull position.
Step 4: Update the seagull migration and attack positions using the equations described in Section 2.3.1, with $A$ computed by Equation (28).
Step 5: Repeat steps 3 and 4 to update the best seagull position and fitness value until the maximum number of iterations is reached.
Step 6: Obtain the final best seagull position and fitness value.
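A minimal sketch of Steps 1–6, built on the `soa_update` helper of Section 2.3.1 and the nonlinear control parameter as reconstructed in Equation (28), is given below. The population size, bounds handling (clipping), random seed and greedy best-solution retention are illustrative assumptions.

```python
import numpy as np

def isoa(fitness, dim, bounds, n_seagulls=30, max_iter=200, fc=2.0, u=1.0, v=1.0, seed=0):
    """Minimal ISOA loop (Steps 1-6) with the nonlinear control parameter of Eq. (28)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(n_seagulls, dim))          # Step 2: initial population
    fit = np.array([fitness(p) for p in pop])                  # Step 3: initial fitness
    best = pop[fit.argmin()].copy()
    best_fit = fit.min()

    for t in range(max_iter):                                  # Steps 4-5: iterate
        A = fc / np.exp(4 * (t / max_iter) ** 4)               # Eq. (28): nonlinear decrease of A
        for i in range(n_seagulls):
            pop[i] = np.clip(soa_update(pop[i], best, A, u, v, rng), lo, hi)
            f_i = fitness(pop[i])
            if f_i < best_fit:                                  # retain the best solution found so far
                best, best_fit = pop[i].copy(), f_i
    return best, best_fit                                      # Step 6: final best position and fitness
```

In this work the fitness would be, for example, the RMSE of a KELM trained with a candidate pair (C, gamma) on a validation split; the bounds on (C, gamma) are a user choice.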

2.4. Workflow of the Hybrid Model

Through decomposition-based data preprocessing, VMD, ISOA and KELM were combined to establish a hybrid method for wind speed prediction. To improve the prediction accuracy and search speed, the improved seagull algorithm was used to search synchronously for the optimal parameters $C$ and $\gamma$ of KELM, with the root mean square error used as the fitness function. The workflow of this study is provided in Figure 1 and detailed explanations are given below.

2.4.1. Data Preprocessing

The original wind speed sequence was volatile and random. At this stage, VMD technology was used to decompose the complex wind speed data. The modes decomposed by VMD had their own center frequencies, which were stable relative to the original wind speed time series.

2.4.2. Hybrid Models Forecasting

The KELM model was used as the basic predictive model of the system because of its fast learning and strong nonlinear mapping ability. The decomposed subseries were predicted separately by KELM models. ISOA was used to find the two best parameters of each KELM during the subseries prediction process to ensure that the prediction of each subseries was optimal; the two parameters of each subseries reached their optimal values when the maximum number of iterations was reached. Then, the forecasting results of these models were combined to obtain the final wind speed forecasting result. The ISOA-KELM process is shown in Figure 1.
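Putting the pieces together, the per-subseries loop described above reduces to a short pipeline. The sketch below relies on the `vmd`, `KELM` and `isoa` sketches of Sections 2.1–2.3 and on a `make_lag_matrix` helper sketched under Section 2.4.3; the 80/20 validation split used as the fitness and the search bounds on (C, gamma) are illustrative assumptions.

```python
import numpy as np

def vmd_isoa_kelm_forecast(train_series, n_lags=6, K=4, horizon=1):
    """Sketch of the hybrid loop: decompose, optimize a KELM per mode, sum the forecasts."""
    modes, _ = vmd(train_series, K=K)                     # Section 2.4.1: VMD decomposition
    total_forecast = 0.0
    for mode in modes:                                    # Section 2.4.2: one KELM per subseries
        X, y = make_lag_matrix(mode, n_lags, horizon)     # lag inputs -> l-step-ahead target
        split = int(0.8 * len(X))                         # hold out data to score candidates

        def fitness(params):                              # RMSE of a candidate (C, gamma)
            C, gamma = params
            model = KELM(C=C, gamma=gamma).fit(X[:split], y[:split])
            err = model.predict(X[split:]).ravel() - y[split:]
            return float(np.sqrt(np.mean(err ** 2)))

        (C_best, g_best), _ = isoa(fitness, dim=2, bounds=([1e-2, 1e-2], [1e3, 1e2]))
        model = KELM(C=C_best, gamma=g_best).fit(X, y)    # refit with the optimized parameters
        x_last = mode[-n_lags:].reshape(1, -1)            # most recent lags of this mode
        total_forecast = total_forecast + model.predict(x_last)
    return float(np.ravel(total_forecast)[0])             # Section 2.4.2: sum over subseries
```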

2.4.3. Multi-Step Ahead Forecasting

The developed combined model was employed in this study to forecast short-term wind speed. One-step, two-step and three-step forecasts were included, and multi-step forecasting was conducted to evaluate the predictive ability of the proposed strategy. Multi-step ahead forecasting can be described as follows: assume that the input datasets are $\{x(t-5), x(t-4), \ldots, x(t-1), x(t)\}$ and the output datasets are $\{x(t+l)\}$, where $t$ denotes a certain moment and $l$ denotes the forecast horizon. When $l$ is a positive integer, the output is set to $\hat{y}(l) = x(t+l)$; that is, $\hat{y}(l)$ is the $l$-step ahead forecast of the original value $x(t+l)$.
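A minimal helper that forms the input-output pairs described above (six lagged values predicting the value l steps ahead) might look as follows; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def make_lag_matrix(series, n_lags=6, horizon=1):
    """Build supervised pairs: X[i] = [x(t-5), ..., x(t)], y[i] = x(t + l) with l = horizon."""
    s = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(n_lags - 1, len(s) - horizon):
        X.append(s[t - n_lags + 1: t + 1])   # the six most recent observations
        y.append(s[t + horizon])             # the l-step-ahead target
    return np.array(X), np.array(y)

# Example: X_train, y_train = make_lag_matrix(wind_speed, n_lags=6, horizon=3)  # three-step ahead
```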

3. Experimental Design

3.1. Data Description

The experimental data for this study were taken from the Shanghai (SH) wind farm, which possesses rich wind energy resources. The data sets were collected starting on 8 April, 4 July, 20 October and 15 January 2019, respectively. Each data set includes 1006 points recorded every 10 min, covering approximately one week. The first six points of each series were used as warm-up inputs, and the whole dataset was divided into a training set and a test set before the experiment: the first 80% was used for training and the last 20% for testing. The maximum (Max.), minimum (Min.), mean, standard deviation (SD), skewness (Skew.) and kurtosis (Kurt.) of the four data sets are given in Table 1.

3.2. Performance Metrics

The values predicted by a model generally deviate from the true values. Performance indicators evaluate the prediction performance of different models by quantifying the error between the observed and predicted values, and different indicators have different evaluation capabilities. In this study, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were calculated. MAE and RMSE avoid the cancellation of positive and negative prediction errors and reflect the average magnitude of the error between the predicted and observed values. MAPE is the average of the absolute percentage errors and is the most widely implemented indicator for reflecting the effectiveness and reliability of a proposed new model. To explain the performance indicators more clearly, Table 2 lists the definitions and formulas of the three error indicators. Here $Y_o(i)$ and $\hat{Y}_p(i)$ represent the actual value and the predicted value, respectively, and $N$ is the sample size.
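The three metrics in Table 2 translate directly into code; a short sketch is given below (the MAPE implementation assumes strictly positive observed wind speeds, which holds for the minima reported in Table 1).

```python
import numpy as np

def mae(y_obs, y_pred):
    return float(np.mean(np.abs(y_obs - y_pred)))

def rmse(y_obs, y_pred):
    return float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))

def mape(y_obs, y_pred):
    # Assumes observed values are strictly positive (Table 1 minima are all > 0).
    return float(np.mean(np.abs((y_obs - y_pred) / y_obs)) * 100.0)
```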

4. Different Experiments and Relative Analysis

In this section, a detailed evaluation and analysis of the proposed model are carried out. Two sets of experiments are designed, and the graphs and tables visually show the corresponding prediction results and evaluation indicators. The experimental setup and results are as follows.

4.1. Experimental Setup

Two sets of comparative experiments were used to compare the forecasting ability between the proposed model and other comparable models. Experiment 1 compared the proposed combined model with five independent models to investigate its prediction performance. Experiment 2 compared the forecasting accuracy between the proposed model and models using various data preprocessing technologies. The four data sets were tested by all models. The results of multistep ahead forecasting further illustrated the forecasting capability of different models. Three error evaluation indicators were used to quantify the predictive ability. The smaller the value of error criteria, the better the predictive performance.
In Experiment 1, we selected five widely used individual models (BP, SVM, LSTM, ELM and KELM) as the control group of the comparative experiment. In order to compare the developed strategy with the prediction ability based on different data preprocessing technologies, such as discrete wavelet transform (DWT), EMD and complementary ensemble empirical mode decomposition (CEEMD), we conducted experiment 2.

4.2. Experiment I: Comparison with Other Individual Models

Table 3 shows the comparison of the results of the proposed model and the other individual models in the four seasons datasets. Figure 2, Figure 3 and Figure 4 show the forecasting results of individual forecasting models in SH in April. At the top of the chart, the predicted results versus 10 min interval sampling points for all forecasting models are shown. Below, the error distribution diagram of forecasting and the scatter diagram of each individual model are presented.
For SH Apr, in one-step forecasting, the proposed model showed the best MAE, RMSE and MAPE scores of 0.315, 0.408 and 6.606%, respectively, followed by the KELM model, whose MAE, RMSE and MAPE were 0.888, 1.190 and 17.373%, respectively. The worst was the BP neural network, with MAE, RMSE and MAPE scores of 1.247, 1.642 and 30.167%, respectively. In two-step forecasting, the developed model had the best accuracy with an RMSE of 0.436. In three-step forecasting, the proposed model still had the best predictive ability with an RMSE of 0.496, and the second most accurate model was the BP network. Figure 2, Figure 3 and Figure 4 show the prediction results of the proposed model and the individual models for the spring experimental series (SH Apr).
For SH Jul, in one-step forecasting, the proposed VMD-ISOA-KELM hybrid model achieved the highest accuracy with a MAPE value of 3.140%; by comparison, the individual models had considerably higher MAPE values of 9.792%, 7.434%, 8.561%, 7.355% and 7.342%, respectively. In the two-step and three-step forecasting, the developed combined model was also more effective than the other methods for wind speed forecasting. Meanwhile, KELM had the lowest MAPE values among the five individual models in the one-step and two-step forecasting, at 7.342% and 9.883%, respectively.
For SH Oct, according to the evaluation criteria shown in Table 3, the proposed model still outperformed the individual models in all three steps, with MAPE values of 2.367%, 2.541% and 2.844%. According to the obtained MAPE, long short-term memory (LSTM) was ranked as the second most effective model in the three forecasts, with MAPE values of 7.731%, 10.557% and 11.753%.
For SH Jan, in all forecasting steps, the developed combined model exceeded the five benchmark models with MAPE values of 3.894%, 4.276% and 4.737%. In the two-step and three-step forecasting, the five individual models performed poorly, and their RMSE values were all over 1.

4.3. Experiment II: Comparison with Other Models Using Different Data Preprocessing Methods

This experiment demonstrated the forecasting performance on the wind speed time series by comparing the proposed VMD-ISOA-KELM model with models using different data preprocessing methods, namely DWT, EMD and CEEMD. The comparison results are listed in Table 4 and shown in Figure 5, Figure 6, Figure 7 and Figure 8. More details of the experiment are given below:
For SH Apr, in the one-step forecasting, the proposed model showed the best performance with a MAPE value of 6.606%. In comparison, the model after pretreatment of VMD ranked as the second most effective model among the other data preprocessing technologies, with MAPE values of 7.089%, 7.412% and 8.340%, respectively, from one-step to three-step forecasting. Correspondingly, the DWT-Model showed the worst forecasting accuracy with MAPE values of 18.12%, 28.585%, and 36.064% from one-step to three-step forecasting.
For SH Jul, according to the evaluation criteria shown in Table 4, the proposed model still outperformed the other models in one-step forecasting, with the lowest MAE, RMSE and MAPE values of 0.221, 0.270 and 3.140%. According to the obtained MAPE, the VMD-Model ranked as the second most effective model across the three forecasting steps, with MAPE values of 4.098%, 3.533% and 4.002%.
For SH Oct, when the forecasting was one-step, the proposed VMD-ISOA-KELM hybrid model achieved the highest accuracy with a MAPE value of 2.367%. Comparatively, the DWT-Model, EMD-Model, CEEMD-Model and VMD-Model had MAPE values of 5.981%, 6.744%, 3.452% and 3.114%, respectively, which were inferior to our developed hybrid model. The comparison results of our forecasting strategy and the DWT-Model, EMD-Model and CEEMD-Model are shown in Figure 7.
For SH Jan, when the model forecasting was one-step, the prediction accuracy of the hybrid model, which had the lowest MAE, RMSE and MAPE values of 0.252, 0.333 and 3.894%, respectively, was still superior to the other models using different preprocessing methods. In addition, the CEEMD-Model showed a better forecasting performance than the EMD-Model, with MAPE values of 6.807%, 7.610% and 8.246%, respectively, as the forecasting changed from one-step to three-step.

5. Discussion

This section presents an insightful discussion of the experiment results, namely the main contributions, the performance of the employed optimization algorithm, the effectiveness of the proposed model and improvements of the proposed model. The concrete details are as follows.

5.1. Main Achievements and Results

Considering the noisy and highly nonlinear features of real wind speed data, this paper proposes an optimized hybrid forecasting strategy based on VMD, KELM and ISOA for short-term wind speed forecasting. VMD decomposition has advantages in weakening the non-stationarity of wind speed data, as found by comparing and analyzing the experimental results of the VMD-KELM, EMD-KELM, CEEMD-KELM and DWT-KELM techniques. For wind speed forecasting, KELM is used as a powerful regression core to characterize the relationship between the samples in each subsequence and the expected output; Experiment 1 showed that KELM has a certain advantage over several widely used individual models. However, the prediction accuracy of KELM is sensitive to its parameters. For this purpose, a novel algorithm, ISOA, was proposed to solve the optimization issue by transforming the global search control from linear to nonlinear. To further improve the prediction, the two parameters of KELM were optimized by the proposed ISOA algorithm. The superiority of the proposed prediction strategy was shown through the relative experiments and contrastive analysis.

5.2. Performance of the Employed Optimization Algorithm

In this subsection, eight typical benchmark functions were used to verify the proposed ISOA algorithm, including three unimodal functions, four multimodal functions and one fixed-dimension function. The unimodal functions were used to test exploitation ability, while the multimodal functions were used to test exploration ability and the capacity to avoid falling into local optima. These benchmark functions are shown in Table 5, where Peak denotes the feature of the function, Dim denotes its dimension, Range denotes its definition domain and $f_{\min}$ denotes its optimal value.
In addition, seven classic optimization algorithms were selected for comparison with the new algorithm, namely particle swarm optimization (PSO), differential evolution (DE), the seagull optimization algorithm (SOA), the gray wolf optimizer (GWO), the sine cosine algorithm (SCA), moth flame optimization (MFO) and the multiverse optimizer (MVO). All algorithms were run 50 times on each benchmark function with a maximum of 200 iterations. Figure 9 shows the convergence curves of ISOA and the comparison algorithms for the same dimensions. Compared with SOA, ISOA was closer to the optimal value for the same number of iterations, and among all compared functions ISOA had the fastest convergence speed, reflecting its efficient exploration capability. The average value (AVG) and standard deviation (STD) over the 50 runs were used to evaluate the results. The data in Table 6 demonstrate that the optimization results of ISOA were the best among all optimization algorithms, while the STD values of its solutions were generally the smallest, indicating the stability of ISOA.
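To make the test setup concrete, two of the benchmarks in Table 5 can be coded directly and passed as fitness functions to the ISOA sketch of Section 2.3.2; the bounds below are those listed in Table 5, while the function names are illustrative.

```python
import numpy as np

def f1_sphere(x):                       # F1 in Table 5: unimodal, f_min = 0
    return float(np.sum(np.asarray(x) ** 2))

def f9_rastrigin(x):                    # F9 in Table 5: multimodal, f_min = 0
    x = np.asarray(x)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

# Example run on F9 with the Table 5 domain and the paper's 200-iteration budget:
# best, best_fit = isoa(f9_rastrigin, dim=30, bounds=([-5.12] * 30, [5.12] * 30), max_iter=200)
```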

5.3. Effectiveness of the Developed Strategy

To investigate the different effectiveness of the developed model and other comparison models, the Diebold-Mariano (DM) test was employed, which is a statistical hypothesis test. The null hypothesis H 0 and alternative hypothesis H 1 are written as follows:
$$H_0: \; E\left[ F(e_i^1) \right] = E\left[ F(e_i^2) \right]$$
$$H_1: \; E\left[ F(e_i^1) \right] \neq E\left[ F(e_i^2) \right]$$
where $F$ is the loss function of the forecasting errors, and $e_i^1$ and $e_i^2$ are the forecasting errors of the two compared models with respect to the actual values. The DM test statistic can then be computed as
$$DM = \frac{\frac{1}{n}\sum_{i=1}^{n} \left( F(e_i^1) - F(e_i^2) \right)}{\sqrt{\tau^2 / n}}$$
where $\tau^2$ denotes an estimate of the variance of $F(e_i^1) - F(e_i^2)$.
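Under a squared-error loss, the statistic above can be computed as in the short sketch below; treating the loss differentials as uncorrelated (no autocovariance correction) is a simplifying assumption of this sketch, not a statement about the authors' exact procedure.

```python
import numpy as np

def dm_statistic(e1, e2):
    """Diebold-Mariano statistic for two forecast-error series under squared-error loss."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential F(e^1) - F(e^2)
    n = len(d)
    tau2 = np.var(d, ddof=1)                        # estimate of the variance of d
    return float(np.mean(d) / np.sqrt(tau2 / n))    # compared against standard normal quantiles
```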
Table 7 lists the mean DM values for one- to three-step forecasting. For all of the one-step, two-step and three-step forecasts, the DM values of the nine comparison models were clearly significant; for the classic individual models, all DM values were much larger than the upper limit at the 1% significance level. Moreover, when compared with the models applying different data pretreatment technologies, the proposed hybrid model similarly showed a significant improvement.

5.4. Improvements of the Proposed Model

To further discuss and evaluate the degree of improvement in forecasting when comparing a selected model with the proposed model, we adopted the improvement percentage of the MAPE criterion ($P_{MAPE}$), which enables a comprehensive analysis of the proposed hybrid model. It is defined as
$$P_{MAPE} = \left| \frac{MAPE_1 - MAPE_2}{MAPE_1} \right| \times 100\%$$
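As a worked example using the values already reported in Table 3: for SH Apr one-step forecasting, KELM gives $MAPE_1 = 17.373\%$ and the proposed model gives $MAPE_2 = 6.606\%$, so $P_{MAPE} = |17.373 - 6.606| / 17.373 \times 100\% \approx 61.98\%$, which is the corresponding entry in Table 8.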
According to the definition of P MAPE , the larger the P MAPE , the better the forecasting accuracy of our developed model relative to the selected models. Table 8 presents the improvement percentages of MAPE for the proposed model and other forecasting models. From further analysis of the results shown in Table 8, we are able to state the following.
  • The improvement ratios of the evaluation indicators of the proposed strategy compared with the individual models are greater than 50%. Among the classic individual models, the maximum improvement percentages of MAPE for the three forecasting steps are 78.10% (SH Apr, one-step), 81.49% (SH Oct, two-step) and 83.69% (SH Apr, three-step), which shows the developed model's significant improvements in multi-step forecasting.
  • Similar to previous research, when compared with other models using different data preprocessing technologies, the improvements in the forecasting effectiveness of the proposed model are fairly evident. For instance, in comparison with DWT-KELM, EMD-KELM, CEEMD-KELM and VMD-KELM on SH Apr, the proposed model achieves 63.54%, 52.06%, 53.83% and 6.81% reductions in MAPE for one-step forecasting, respectively. Thus, the developed combined model can obtain satisfactory forecasting effectiveness.
  • These results show that there is still much room for individual models to improve forecasting accuracy. Adding a data preprocessing technique can significantly improve the forecast precision. However, the use of optimization algorithms can further improve the accuracy and stability of short-term wind speed forecasting.

6. Conclusions

To follow the trend of clean energy development, strive to achieve low-carbon environmental protection and vigorously develop wind energy resources, this paper proposes a hybrid forecasting model based on VMD, an improved seagull optimization algorithm and KELM. Firstly, VMD is applied to decompose the given non-stationary wind speed data into several subseries at various scales. Then, KELM is used as a powerful regression core to characterize the relationship between the samples in each subsequence and the expected output. To enhance the prediction performance, the proposed ISOA is designed by including a nonlinear formula that controls the population migration and attack processes of SOA. Subsequently, the proposed ISOA algorithm is applied to the simultaneous optimization of the two parameters of the KELM model. Finally, the final predicted value is obtained by summing the results of all subseries. Furthermore, to evaluate the effectiveness and applicability of the developed combined model, different forecasting models were implemented on four datasets; the selected forecasting models include five classic individual models and four hybrid models. The experimental results for the three metrics show that (1) VMD is effective in improving the accuracy and stability of the wind speed predictions; (2) compared with the common ANN and SVM models, the KELM models show advantages in capturing the nonlinear characteristics of the wind speed time series; and (3) regardless of the forecasting step or the observation datasets, the proposed combined strategy was superior to all of the selected methods, with average MAPE values of 3.865%, 4.213% and 4.614% for one- to three-step forecasting.

Author Contributions

Conceptualization, X.C. and Y.L.; writing—original draft preparation, X.C.; writing—review and editing, X.Y. and X.X.; supervision, Y.Z. and F.Z.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 41675156, the Talent Startup Project of Nanjing University of Information Science and Technology under Grant no. 2243141701053, the general program of natural science research in Jiangsu Province, grant number 19KJB170004, key scientific research projects of China State Railway Group, grant number N2019T003, and the science and technology major project of China State Shanghai Railway Group, grant number 201904.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

This work was supported by the National Natural Science Foundation of China, grant number. 41675156, the Talent Startup Project of Nanjing University of Information Science and Technology under Grant no. 2243141701053, the general program of natural science research in Jiangsu Province, grant number 19KJB170004, key scientific research projects of China State Railway Group, grant number N2019T003, and the science and technology major project of China State Shanghai Railway Group, grant number 201904.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anoune, K.; Bouya, M.; Astito, A.; Abdellah, A.B. Sizing methods and optimization techniques for PV-wind based hybrid renewable energy system: A review. Renew. Sustain. Energy Rev. 2018, 93, 652–673. [Google Scholar] [CrossRef]
  2. Duan, J.; Zuo, H.; Bai, Y.; Duan, J.; Chang, M.; Chen, B. Short-term wind speed forecasting using recurrent neural networks with error correction. Energy 2021, 217, 119397. [Google Scholar] [CrossRef]
  3. Lee, J.; Zhao, F.; Dutton, A.; Lathigara, A. Global wind Report 2019; Global Wind Energy Council (GWEC): Brussels, Belgium, 2020; Available online: https://gwec.net/global-wind-report-2019/ (accessed on 25 March 2020).
  4. Jiang, P.; Liu, Z.; Niu, X.; Zhang, L. A combined forecasting system based on statistical method, artificial neural networks, and deep learning methods for short-term wind speed forecasting. Energy 2020, 217, 119361. [Google Scholar] [CrossRef]
  5. Peng, T.; Zhang, C.; Zhou, J.; Nazir, M.S. Negative correlation learning-based RELM ensemble model integrated with OVMD for multi-step ahead wind speed forecasting. Renew. Energy 2020, 156, 804–819. [Google Scholar] [CrossRef]
  6. Song, J.; Wang, J.; Lu, H. A novel combined model based on advanced optimization algorithm for short-term wind speed forecasting. Appl. Energy 2018, 215, 643–658. [Google Scholar] [CrossRef]
  7. Landberg, L. Short-term prediction of the power production from wind farms. J. Wind. Eng. Ind. Aerodyn. 1999, 80, 207–220. [Google Scholar] [CrossRef]
  8. Salcedo-Sanz, S.; Perez-Bellido, Á.M.; Ortiz-García, E.G.; Portilla-Figueras, A.; Prieto, L.; Correoso, F. Accurate short-term wind speed prediction by exploiting diversity in input data using banks of artificial neural networks. Neurocomputing 2009, 72, 1336–1341. [Google Scholar] [CrossRef]
  9. Prósper, M.A.; Otero-Casal, C.; Fernández, F.C.; Miguez-Macho, G. Wind power forecasting for a real onshore wind farm on complex terrain using WRF high resolution simulations. Renew. Energy 2019, 135, 674–686. [Google Scholar] [CrossRef]
  10. Wang, K.; Fu, W.; Chen, T.; Zhang, B.; Xiong, D.; Fang, P. A compound framework for wind speed forecasting based on comprehensive feature selection, quantile regression incorporated into convolutional simplified long short-term memory network and residual error correction. Energy Convers. Manag. 2020, 222, 113234. [Google Scholar] [CrossRef]
  11. Firat, U.; Engin, S.N.; Saraclar, M.; Ertuzun, A.B. Wind Speed Forecasting Based on Second Order Blind Identification and Autoregressive Model. In Proceedings of the 2010 Ninth International Conference on Machine Learning and Applications, Washington, DC, USA, 12–14 December 2010; pp. 686–691. [Google Scholar] [CrossRef] [Green Version]
  12. Erdem, E.; Shi, J. ARMA based approaches for forecasting the tuple of wind speed and direction. Appl. Energy 2011, 88, 1405–1414. [Google Scholar] [CrossRef]
  13. Zhang, J.; Wei, Y.; Tan, Z. An adaptive hybrid model for short term wind speed forecasting. Energy 2020, 190, 115615. [Google Scholar] [CrossRef]
  14. Kavasseri, R.G.; Seetharaman, K. Day-ahead wind speed forecasting using f-ARIMA models. Renew. Energy 2009, 34, 1388–1393. [Google Scholar] [CrossRef]
  15. Samalot, A.; Astitha, M.; Yang, J.; Galanis, G. Combined Kalman filter and universal kriging to improve storm wind speed predictions for the northeastern United States. Weather. Forecast. 2019, 34, 587–601. [Google Scholar] [CrossRef]
  16. Wang, J.; Li, Y. An innovative hybrid approach for multi-step ahead wind speed prediction. Appl. Soft Comput. 2019, 78, 296–309. [Google Scholar] [CrossRef]
  17. Liu, H.; Tian, H.-Q.; Pan, D.-F.; Li, Y.-F. Forecasting models for wind speed using wavelet, wavelet packet, time series and Artificial Neural Networks. Appl. Energy 2013, 107, 191–208. [Google Scholar] [CrossRef]
  18. Zhou, J.; Shi, J.; Li, G. Fine tuning support vector machines for short-term wind speed forecasting. Energy Convers. Manag. 2011, 52, 1990–1998. [Google Scholar] [CrossRef]
  19. Liu, D.; Niu, D.; Wang, H.; Fan, L. Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm. Renew. Energy 2014, 62, 592–597. [Google Scholar] [CrossRef]
  20. Yang, H.; Jiang, Z.; Lu, H. A hybrid wind speed forecasting system based on a 'decomposition and ensemble' strategy and fuzzy time series. Energies 2017, 10, 1422. [Google Scholar] [CrossRef] [Green Version]
  21. Li, C.; Zhu, Z.; Yang, H.; Li, R. An innovative hybrid system for wind speed forecasting based on fuzzy preprocessing scheme and multi-objective optimization. Energy 2019, 174, 1219–1237. [Google Scholar] [CrossRef]
  22. Monfared, M.; Rastegar, H.; Kojabadi, H.M. A new strategy for wind speed forecasting using artificial intelligent methods. Renew. Energy 2009, 34, 845–848. [Google Scholar] [CrossRef]
  23. Li, G.; Shi, J. On comparing three artificial neural networks for wind speed forecasting. Appl. Energy 2010, 87, 2313–2320. [Google Scholar] [CrossRef]
  24. Guo, Z.-H.; Wu, J.; Lu, H.-Y.; Wang, J.-Z. A case study on a hybrid wind speed forecasting method using BP neural network. Knowl.-Based Syst. 2011, 24, 1048–1056. [Google Scholar] [CrossRef]
  25. Zhang, C.; Wei, H.; Xie, L.; Shen, Y.; Zhang, K. Direct interval forecasting of wind speed using radial basis function neural networks in a multi-objective optimization framework. Neurocomputing 2016, 205, 53–63. [Google Scholar] [CrossRef]
  26. Sun, W.; Liu, M. Wind speed forecasting using FEEMD echo state networks with RELM in Hebei, China. Energy Convers. Manag. 2016, 114, 197–208. [Google Scholar] [CrossRef]
  27. Liu, D.; Wang, J.; Wang, H. Short-term wind speed forecasting based on spectral clustering and optimised echo state networks. Renew. Energy 2015, 78, 599–608. [Google Scholar] [CrossRef]
  28. Niu, D.; Liang, Y.; Hong, W.-C. Wind speed forecasting based on EMD and GRNN optimized by FOA. Energies 2017, 10, 2001. [Google Scholar] [CrossRef] [Green Version]
  29. Ren, Y.; Suganthan, P.; Srikanth, N. A comparative study of empirical mode decomposition-based short-term wind speed forecasting methods. IEEE Trans. Sustain. Energy 2014, 6, 236–244. [Google Scholar] [CrossRef]
  30. Zhou, J.; Liu, H.; Xu, Y.; Jiang, W. A hybrid framework for short term multi-step wind speed forecasting based on variational model decomposition and convolutional neural network. Energies 2018, 11, 2292. [Google Scholar] [CrossRef] [Green Version]
  31. Jiang, P.; Wang, B.; Li, H.; Lu, H. Modeling for chaotic time series based on linear and nonlinear framework: Application to wind speed forecasting. Energy 2019, 173, 468–482. [Google Scholar] [CrossRef]
  32. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  33. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed model. VMD: variational modal decomposition; KELM: kernel extreme learning machine; ISOA: improved seagull optimization algorithm.
Figure 2. The results of each prediction model in one-step prediction in SH Apr.
Figure 3. The results of each prediction model in two-step prediction in SH Apr.
Figure 4. The results of each prediction model in three-step prediction in SH Apr.
Figure 5. Forecasting performance of decomposed models in one-, two- and three-step ahead forecasting for the spring dataset.
Figure 6. Forecasting performance of decomposed models in one-, two- and three-step ahead forecasting for the summer dataset.
Figure 7. Forecasting performance of decomposed models in one-, two- and three-step ahead forecasting for the autumn dataset.
Figure 8. Forecasting performance of decomposed models in one-, two- and three-step ahead forecasting for the winter dataset.
Figure 9. Convergence curves of ISOA, seagull optimization algorithm (SOA), particle swarm optimization (PSO), differential evolution (DE), gray wolf optimizer (GWO), sine cosine algorithm (SCA), moth flame optimization (MFO) and multiverse optimizer (MVO) tested on various benchmark functions. (a) F1; (b) F2; (c) F5; (d) F8; (e) F9; (f) F10; (g) F11; (h) F15.
Table 1. Statistical indicators of the four datasets.

Dataset   Period           Max. (m/s)   Min. (m/s)   Mean (m/s)   SD (m/s)   Skew.   Kurt.
Spring    8–14 April       15.17        0.37         6.97         2.79        0.19    2.31
Summer    4–10 July        21.39        0.12         7.36         4.32        1.27    4.14
Autumn    20–26 October    12.58        0.76         5.63         2.14        0.25    2.77
Winter    15–21 January    12.34        0.93         6.45         1.97       −0.11    3.07
Table 2. Three error metrics.

Metric   Definition                        Equation
MAE      Mean absolute error               $MAE = \frac{1}{N} \sum_{i=1}^{N} \left| Y_o(i) - \hat{Y}_p(i) \right|$
RMSE     Root mean square error            $RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( Y_o(i) - \hat{Y}_p(i) \right)^2}$
MAPE     Mean absolute percentage error    $MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{Y_o(i) - \hat{Y}_p(i)}{Y_o(i)} \right| \times 100\%$
Table 3. Comparison of forecasting performances of the proposed model and other independent models. BP: backpropagation; SVM: support vector machine; LSTM: long short-term memory; ELM: extreme learning machine; KELM: kernel extreme learning machine.

Dataset   Model      One-Step                     Two-Step                     Three-Step
                     MAE     RMSE    MAPE         MAE     RMSE    MAPE         MAE     RMSE    MAPE
                     (m/s)   (m/s)   (%)          (m/s)   (m/s)   (%)          (m/s)   (m/s)   (%)
SH Apr    BP         1.247   1.642   30.167       1.273   1.747   33.836       1.274   1.713   31.622
          SVM        0.919   1.248   23.690       1.202   1.648   31.104       1.338   1.796   35.701
          LSTM       1.014   1.331   21.583       1.496   1.919   29.888       1.516   1.961   36.335
          ELM        0.954   1.303   21.051       1.340   1.890   35.802       1.576   2.281   46.062
          KELM       0.888   1.190   17.373       1.156   1.568   23.916       1.270   1.731   29.056
          Proposed   0.315   0.408   6.606        0.330   0.436   6.837        0.378   0.496   7.512
SH Jul    BP         0.677   0.864   9.792        0.770   0.961   11.105       1.002   1.228   14.638
          SVM        0.519   0.678   7.434        0.687   0.858   9.956        0.767   0.931   11.197
          LSTM       0.638   0.819   8.561        0.761   0.946   11.168       0.765   0.952   10.830
          ELM        0.521   0.684   7.355        0.693   0.856   9.931        0.787   0.969   11.431
          KELM       0.515   0.672   7.342        0.680   0.839   9.883        0.739   0.900   10.853
          Proposed   0.221   0.270   3.140        0.226   0.276   3.205        0.237   0.288   3.361
SH Oct    BP         0.676   0.886   8.966        1.055   1.326   13.731       0.853   1.120   11.471
          SVM        0.749   1.079   8.763        0.937   1.243   11.468       1.070   1.393   13.285
          LSTM       0.616   0.823   7.731        0.823   1.073   10.557       0.937   1.221   11.753
          ELM        0.671   0.947   8.184        0.897   1.219   11.145       1.045   1.396   12.996
          KELM       0.750   1.018   8.981        0.941   1.210   11.672       1.056   1.348   13.268
          Proposed   0.182   0.235   2.367        0.198   0.257   2.541        0.223   0.287   2.844
SH Jan    BP         0.809   1.095   11.848       0.880   1.179   13.159       0.985   1.347   14.676
          SVM        0.629   0.903   9.066        0.828   1.112   12.333       0.942   1.262   14.244
          LSTM       0.655   0.940   9.485        0.875   1.161   12.714       0.902   1.223   13.556
          ELM        0.739   1.120   10.279       0.970   1.374   14.066       1.092   1.539   15.869
          KELM       0.632   0.891   9.179        0.823   1.104   12.239       0.916   1.239   13.783
          Proposed   0.252   0.333   3.894        0.280   0.372   4.276        0.314   0.418   4.737
Table 4. Comparison of forecasting performances of the combined model and other models using different data preprocessing methods. DWT: discrete wavelet transform; EMD: empirical mode decomposition; CEEMD: complementary ensemble empirical mode decomposition; VMD: variational mode decomposition.

Dataset   Model      One-Step                     Two-Step                     Three-Step
                     MAE     RMSE    MAPE         MAE     RMSE    MAPE         MAE     RMSE    MAPE
                     (m/s)   (m/s)   (%)          (m/s)   (m/s)   (%)          (m/s)   (m/s)   (%)
SH Apr    DWT        0.639   1.074   18.12        1.121   1.532   28.585       1.377   1.808   36.064
          EMD        0.606   0.768   13.779       0.764   0.999   19.892       0.856   1.156   23.910
          CEEMD      0.552   0.731   14.308       0.634   0.860   14.796       0.699   0.951   16.466
          VMD        0.331   0.437   7.089        0.353   0.471   7.412        0.404   0.528   8.340
          Proposed   0.315   0.408   6.606        0.330   0.436   6.837        0.378   0.496   7.512
SH Jul    DWT        0.427   0.589   6.116        0.649   0.825   9.301        0.759   0.917   11.002
          EMD        0.441   0.555   6.031        0.494   0.630   6.817        0.531   0.671   7.346
          CEEMD      0.288   0.374   4.114        0.334   0.436   4.682        0.388   0.503   5.464
          VMD        0.289   0.353   4.098        0.248   0.302   3.533        0.279   0.340   4.002
          Proposed   0.221   0.270   3.140        0.226   0.276   3.205        0.237   0.288   3.361
SH Oct    DWT        0.521   0.848   5.981        0.875   1.206   10.587       1.043   1.388   12.927
          EMD        0.505   0.677   6.744        0.565   0.768   7.452        0.635   0.844   8.293
          CEEMD      0.266   0.372   3.452        0.350   0.485   4.559        0.410   0.561   5.412
          VMD        0.251   0.316   3.114        0.337   0.423   4.154        0.365   0.458   4.529
          Proposed   0.182   0.235   2.367        0.198   0.257   2.541        0.223   0.287   2.844
SH Jan    DWT        0.416   0.701   6.016        0.780   1.042   11.552       0.917   1.216   13.738
          EMD        0.51    0.672   7.569        0.579   0.764   8.661        0.634   0.838   9.448
          CEEMD      0.442   0.596   6.807        0.489   0.669   7.610        0.531   0.727   8.246
          VMD        0.273   0.364   4.200        0.308   0.410   4.690        0.336   0.445   5.077
          Proposed   0.252   0.333   3.894        0.280   0.372   4.276        0.314   0.418   4.737
Table 5. Description of unimodal, multimodal and fixed-dimension benchmark functions.

$f_1 = \sum_{i=1}^{n} x_i^2$; Unimodal; Dim 30; Range [−100, 100]; $f_{\min} = 0$
$f_2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$; Unimodal; Dim 30; Range [−10, 10]; $f_{\min} = 0$
$f_5 = \sum_{i=1}^{n-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$; Unimodal; Dim 30; Range [−30, 30]; $f_{\min} = 0$
$f_8 = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right)$; Multimodal; Dim 30; Range [−500, 500]; $f_{\min} = -12{,}569.5$
$f_9 = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$; Multimodal; Dim 30; Range [−5.12, 5.12]; $f_{\min} = 0$
$f_{10} = -20\exp\left( -0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2} \right) - \exp\left( \tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i) \right) + 20 + e$; Multimodal; Dim 30; Range [−32, 32]; $f_{\min} = 0$
$f_{11} = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left( \tfrac{x_i}{\sqrt{i}} \right) + 1$; Multimodal; Dim 30; Range [−600, 600]; $f_{\min} = 0$
$f_{15} = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1\,(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$; Fixed-dimension; Dim 4; Range [−5, 5]; $f_{\min} = 0.0003$
Table 6. Test results of 50 trials of ISOA and other algorithms.

ID    Metric   ISOA            SOA             PSO            DE             GWO            SCA            MFO            MVO
F1    AVG      1.80 × 10^−96   9.18 × 10^−72   3.90 × 10^−1   2.64 × 10^−6   8.70 × 10^−9   6.87 × 10^2    3.98 × 10^4    8.41 × 10^0
      STD      1.27 × 10^−95   6.49 × 10^−71   2.75 × 10^−1   2.15 × 10^−6   8.14 × 10^−9   7.40 × 10^2    5.13 × 10^3    2.60 × 10^0
F2    AVG      9.45 × 10^−68   9.40 × 10^−63   1.23 × 10^0    1.05 × 10^−4   5.61 × 10^−6   1.50 × 10^0    3.95 × 10^1    4.28 × 10^1
      STD      4.51 × 10^−67   3.97 × 10^−62   4.62 × 10^−1   3.42 × 10^−5   2.91 × 10^−6   1.41 × 10^0    1.90 × 10^1    8.35 × 10^1
F5    AVG      2.88 × 10^1     2.88 × 10^1     4.17 × 10^2    3.17 × 10^1    2.78 × 10^2    2.05 × 10^6    5.54 × 10^6    1.07 × 10^3
      STD      2.95 × 10^−2    4.62 × 10^−2    5.17 × 10^2    1.84 × 10^1    7.76 × 10^1    4.70 × 10^6    1.91 × 10^7    1.59 × 10^3
F8    AVG      −1.25 × 10^4    −1.25 × 10^4    −3.40 × 10^3   −4.18 × 10^3   −5.81 × 10^3   −3.51 × 10^3   −8.31 × 10^3   −7.51 × 10^3
      STD      5.07 × 10^1     7.95 × 10^1     5.23 × 10^2    3.57 × 10^1    1.16 × 10^3    2.71 × 10^2    8.04 × 10^2    5.74 × 10^2
F9    AVG      0.00 × 10^0     0.00 × 10^0     1.08 × 10^2    4.72 × 10^0    1.47 × 10^1    1.03 × 10^2    1.73 × 10^2    1.35 × 10^2
      STD      0.00 × 10^0     0.00 × 10^0     3.25 × 10^1    2.11 × 10^0    8.63 × 10^0    4.94 × 10^1    2.72 × 10^1    3.23 × 10^1
F10   AVG      8.89 × 10^−16   8.89 × 10^−16   1.51 × 10^0    7.09 × 10^−4   1.67 × 10^−6   1.47 × 10^1    1.57 × 10^1    2.70 × 10^0
      STD      0.00 × 10^0     0.00 × 10^0     5.16 × 10^−1   3.08 × 10^−4   9.73 × 10^−6   7.21 × 10^0    4.69 × 10^0    5.79 × 10^−1
F11   AVG      0.00 × 10^0     2.02 × 10^−2    5.96 × 10^0    9.69 × 10^−2   1.03 × 10^−2   6.89 × 10^0    2.55 × 10^1    1.07 × 10^0
      STD      0.00 × 10^0     1.43 × 10^−1    3.09 × 10^0    5.57 × 10^−2   1.48 × 10^−2   5.44 × 10^0    3.35 × 10^1    1.98 × 10^−2
F15   AVG      3.70 × 10^−4    4.40 × 10^−3    9.10 × 10^−4   3.67 × 10^−2   4.20 × 10^−3   1.10 × 10^−3   1.90 × 10^−3   6.70 × 10^−3
      STD      2.90 × 10^−4    4.80 × 10^−3    2.19 × 10^4    4.24 × 10^−2   7.71 × 10^−3   3.96 × 10^−4   4.00 × 10^−3   8.81 × 10^−3
Table 7. Diebold–Mariano (DM) test of different models.

Model    1-Step    2-Step    3-Step
BP       7.9252    8.6438    8.6631
SVM      6.3969    7.9864    8.4509
LSTM     7.0239    8.2106    8.6123
ELM      6.9602    7.0022    7.3714
KELM     6.6534    8.1960    8.7345
DWT      6.3367    6.6578    7.5850
EMD      4.2412    6.6594    6.8246
CEEMD    5.5755    5.4812    5.6415
VMD      3.6386    4.6848    4.1407
Table 8. Improvement percentages of the proposed model.

Model    SH April                   SH July                    SH October                 SH January
         1-Step  2-Step  3-Step     1-Step  2-Step  3-Step     1-Step  2-Step  3-Step     1-Step  2-Step  3-Step
BP       78.10   79.79   76.24      67.93   71.14   77.04      73.60   81.49   75.21      67.13   67.51   67.72
SVM      72.11   78.02   78.96      57.76   67.81   69.98      72.99   77.84   78.59      57.05   65.33   66.74
LSTM     69.39   77.12   79.33      63.32   71.30   68.97      69.38   75.93   75.80      58.95   66.37   65.06
ELM      68.62   80.90   83.69      57.31   67.73   70.60      71.08   77.20   78.12      62.12   69.60   70.15
KELM     61.98   71.41   74.15      57.23   67.57   69.03      73.64   78.23   78.56      57.58   65.06   65.63
DWT      63.54   76.08   79.17      48.66   65.54   69.45      60.42   76.00   78.00      35.27   62.98   65.52
EMD      52.06   65.63   68.58      47.94   52.99   54.25      64.90   65.90   65.71      48.55   50.63   49.86
CEEMD    53.83   53.79   54.38      23.68   31.55   38.49      31.43   44.26   47.45      42.79   43.81   42.55
VMD      6.81    7.76    9.93       23.38   9.28    16.02      23.99   38.83   37.20      7.29    8.83    6.70
Note: The units of all values in the table are (%).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
