
Wind Energy Potential Assessment and Forecasting Research Based on the Data Pre-Processing Technique and Swarm Intelligent Optimization Algorithms

by 1, 2,* and 3
1 Department of Basic Courses, Lanzhou Polytechnic College, Lanzhou 730050, China
2 School of Mathematics & Statistics, Lanzhou University, Lanzhou 730000, China
3 School of Mathematics and Computer Science, Northwest University for Nationalities, Lanzhou 730030, China
* Author to whom correspondence should be addressed.
Academic Editor: Francesco Nocera
Sustainability 2016, 8(11), 1191; https://doi.org/10.3390/su8111191
Received: 8 August 2016 / Revised: 7 October 2016 / Accepted: 29 October 2016 / Published: 18 November 2016

Abstract

Accurate wind energy potential assessment and accurate wind speed forecasting are essential for optimal wind farm design, evaluation and scheduling. However, both remain difficult and challenging research topics. Traditional wind energy assessment and forecasting models usually ignore data pre-processing and parameter optimization, which leads to low accuracy. Therefore, this paper aims to assess the wind energy potential and forecast the wind speed at four locations in China based on a data pre-processing technique and swarm intelligent optimization algorithms. In the assessment stage, the cuckoo search (CS) algorithm, ant colony (AC) algorithm, firefly algorithm (FA) and genetic algorithm (GA) are used to estimate the two unknown parameters of the Weibull distribution. Then, the wind energy potential assessment results obtained by three data pre-processing approaches are compared to identify the best data pre-processing approach, which is used to process the original wind speed time series. In the forecasting stage, taking the pre-processed wind speed time series as the input data, the CS and AC optimization algorithms are adopted to optimize three neural networks, namely, the Elman neural network, back propagation neural network and wavelet neural network. The comparison results demonstrate that the proposed wind energy assessment and speed forecasting techniques produce promising assessments and predictions and perform better than the single assessment and forecasting components.
Keywords: wind energy assessment and forecasting; data pre-processing; swarm intelligent optimization; neural network; error evaluation

1. Introduction

As a clean and renewable resource, wind energy plays an important role in the energy supply, and wind turbines convert it into electricity. However, not all locations are suitable for wind turbine installation, so wind energy assessment should be performed in advance. Furthermore, to guarantee the safe use of wind energy, accurate wind speed forecasting must be ensured. Wind energy assessment and wind speed forecasting therefore remain two challenging research topics.
Wind energy assessment plays a significant role in wind turbine installation decisions in many countries worldwide, and the technologies used for wind energy potential assessment are varied. Based on different moment constraints, Liu and Chang [1] performed a validity analysis of the maximum entropy distribution for wind energy assessment in Taiwan. A nested ensemble Numerical Weather Prediction approach was proposed by Al-Yahyai et al. [2] to perform a wind energy assessment over Oman. Wu et al. [3] proposed an assessment model based on the Weibull distribution and different particle swarm optimization and differential evolution algorithms to assess the wind energy potential in Inner Mongolia, China. Jung and Kwon [4] introduced artificial neural networks to improve the wind energy potential estimation for four sites surrounding the Saemangeum Seawall. The wind analysis model was adopted by Boudia et al. [5] to assess the wind energy of four locations in the Algerian Sahara. In addition to the wind analysis model, Quan and Leephakpreeda [6] used economic analysis to assess the wind energy potential in Thailand. A GIS-based method was applied by Siyal et al. [7] for wind energy assessment in Sweden.
One of the most vital factors in wind energy assessment is the wind speed, and the quality of the assessment directly depends on the accuracy of the wind speed forecasting. Many techniques have recently been proposed to forecast the wind speed, and they can usually be divided into three categories: short-term [8,9,10], medium-term [11] and long-term wind speed forecasting. One of the most popular strategies for wind speed forecasting is to construct a hybrid model from several single forecasting approaches. For example, Wang et al. [12] presented a hybrid model with the assistance of the phase space reconstruction algorithm and the Markov algorithm. Based on the extreme learning machine, Ljung-Box Q-test and seasonal auto-regressive integrated moving average (ARIMA) models, a hybrid wind speed forecasting model was proposed by Wang et al. [13] to estimate the wind speed at different sites in northwestern China. The ARIMA model was also used by Shukur and Lee [14] in a hybrid wind speed forecasting model with the Kalman filter and an artificial neural network. Liu et al. [15] demonstrated a hybrid approach using a secondary decomposition model and Elman neural networks. Fei [16] used a hybrid method that consists of empirical mode decomposition and multiple-kernel relevance vector regression.
In this paper, based on the cuckoo search (CS) algorithm and ant colony (AC) algorithm, two new wind energy assessment models and six wind speed forecasting models are proposed. In the assessment process, the AC and CS algorithms are applied to optimize the two unknown parameters of the Weibull distribution, and four assessment error evaluation criteria are adopted to evaluate the effectiveness of the two newly proposed assessment models. In the forecasting process, the CS and AC algorithms are used to optimize three neural networks, namely the Elman, back propagation and wavelet neural networks, and the proposed approaches are validated by three forecasting error evaluation criteria.
The remaining part of this paper is organized as follows: A description of wind energy potential assessment methodologies is given and the results are evaluated in Section 2. Section 3 presents the connection between the energy assessment and forecasting to identify the best data pre-processing approach. The proposed integrated forecasting framework and forecasting results are presented in Section 4, and the last section presents the concluding remarks.

2. Wind Energy Potential Assessment Methodologies and Results

In this section, related single methodologies as well as the proposed hybrid methods used to assess the wind energy potential are introduced; then, the assessment results are presented to demonstrate the performance of the methods.

2.1. Related Methodologies

This subsection focuses on the related single and hybrid methodologies to assess the wind energy potential.

2.1.1. Related Single Methodologies

This section describes two parameter optimization algorithms and the assessment approach.

Parameter Optimization Algorithms

(a) Cuckoo Search Algorithm
The cuckoo search (CS) algorithm [17] is derived from the behavior of the cuckoo when searching for nests. To simplify the CS algorithm, three idealized rules are hypothesized. First, each cuckoo lays only one egg at a time and randomly selects a parasitic nest to hatch it. Second, among the randomly selected parasitic nests, the best one is carried over to the next generation. Third, the number of available parasitic nests is fixed, and the probability that the host discovers the alien egg is p_a ∈ [0, 1]. Once alien eggs are discovered, the host birds either throw them out or abandon the nest and build a new one elsewhere. For simplicity, one egg in a nest represents a solution, and new and potentially better solutions replace the bad ones.
On the basis of these three ideal rules, the new solution is generated by:
x^(t+1) = x^(t) + α × Lévy(λ)
where α is the step size (in most cases, α = 1) and "×" denotes entry-wise multiplication. In essence, Equation (1) is a random-walk equation: the future position is determined by the current position (the first term in Equation (1)) and the transition probability (the second term in Equation (1)). Lévy(λ) denotes the random search path, whose random step length follows the Lévy distribution shown in Equation (2), i.e.,
Lévy ~ u = t^(−λ)
where λ takes values in the interval (1, 3].
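The Lévy-flight step of Equations (1) and (2) can be sketched in code. The paper does not specify how the Lévy-distributed step lengths are sampled, so the sketch below uses Mantegna's algorithm, a common choice, with β = λ − 1 as the stability index; treat the parameter values as illustrative assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step length via Mantegna's algorithm.

    beta plays the role of lambda - 1 from Equation (2); values in (0, 2]
    give the heavy-tailed steps used by cuckoo search."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_update(x, alpha=1.0, beta=1.5):
    """Equation (1): x^(t+1) = x^(t) + alpha * Levy(lambda), entry-wise."""
    return [xi + alpha * levy_step(beta) for xi in x]
```

Because the step distribution is heavy-tailed, most moves are small local refinements while occasional large jumps let the search escape local optima.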
(b) Ant Colony Algorithm
The ant colony (AC) algorithm was proposed by the Italian scientist Marco Dorigo and his colleagues in 1991. To facilitate the research, the following assumptions are made [18]: (1) ants communicate through pheromone and the environment; (2) an ant's response to the environment is determined by its internal model; (3) ant individuals are independent; and (4) the entire ant colony exhibits random behavior.
Through adaptation and collaboration, ants transition from a disordered state to an ordered one and obtain the optimum path. The key to path selection is the transition probability, i.e., the probability that the kth ant moves from city i to city j at time t, calculated by Equation (3) [19]:
p_ij^k(t) = [τ_ij(t)]^α · [η_ij(t)]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α · [η_is(t)]^β, if j ∈ allowed_k; p_ij^k(t) = 0, otherwise
where τ_ij(t) and η_ij(t) represent the intensity of the pheromone trail and the visibility of edge (i, j), respectively; allowed_k is the set of cities still to be visited by the kth ant when at city i; and α and β are two coefficients that tune the relative importance of the trail versus visibility.
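As a small illustration of Equation (3), the sketch below computes the move probabilities for one ant; the pheromone matrix `tau`, visibility matrix `eta` and the parameter values are hypothetical, chosen only to show the formula in action.

```python
def transition_probs(tau, eta, allowed, i, alpha=1.0, beta=5.0):
    """Equation (3): probability that an ant at city i moves to each allowed city j.

    tau[i][j] -- pheromone intensity on edge (i, j)
    eta[i][j] -- visibility (heuristic desirability) of edge (i, j)
    allowed   -- set of cities the ant may still visit"""
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```

Edges with stronger pheromone trails or better visibility receive proportionally higher selection probability, which is how the colony gradually reinforces good paths.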

Assessment Approach

The Weibull distribution is introduced in this paper to assess the potential wind energy. The probability density function (PDF) of the Weibull distribution is given by Equation (4):
p(x; k, c) = (k/c) (x/c)^(k−1) exp[−(x/c)^k]
where x is the random variable, which represents the wind speed in this paper, and k and c are the shape and scale parameters, respectively.
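Equation (4) transcribes directly into a short function, which is also the fitness function used by the assessment algorithms later on:

```python
import math

def weibull_pdf(x, k, c):
    """Equation (4): Weibull PDF with shape k > 0 and scale c > 0."""
    if x < 0:
        return 0.0
    return (k / c) * (x / c) ** (k - 1) * math.exp(-(x / c) ** k)
```

For k = 2 the Weibull distribution reduces to the Rayleigh distribution, a common special case in wind studies.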

2.1.2. Proposed Wind Energy Potential Assessment Model

In this paper, the CS algorithm is used to estimate the unknown parameters k and c in the Weibull distribution. This new model is abbreviated as the CS-Weibull model, and its pseudo code is presented in Algorithm 1. Similarly, the AC algorithm is adopted to estimate the two parameters; the corresponding model is abbreviated as the AC-Weibull model, and its pseudo code is provided in Algorithm 2.
Algorithm 1: CS-Weibull
Input:
  • x s ( 0 ) = ( x ( 0 ) ( 1 ) , x ( 0 ) ( 2 ) , , x ( 0 ) ( q ) ) —a sequence of training data.
  • x p ( 0 ) = ( x ( 0 ) ( q + 1 ) , x ( 0 ) ( q + 2 ) , , x ( 0 ) ( q + d ) ) —a sequence of verifying data
Output:
  • xb—the value of x with the best fitness value in population of nests
  • Fitness Function: f ( x ) = ( k / c ) × ( x / c ) k 1 × exp [ ( x / c ) k ]
Parameters:
  Num Cuckoos = 50 // number of initial population
  Min Number Of Eggs = 2 // minimum number of eggs for each cuckoo
  Max Number Of Eggs = 4 // maximum number of eggs for each cuckoo
  Max Iter = 200 // maximum iterations of the cuckoo algorithm
  Knn Cluster Num = 1 // number of clusters to form
  Motion Coeff = 20 // lambda variable in the COA paper, default = 2
  accuracy = 1.0 × 10−10 // required accuracy of the answer
  Max Num Of Cuckoos = 20 // maximum number of cuckoos alive at the same time
  Radius Coeff = 0.05 // control parameter of egg laying
  Cuckoo Pop Variance = 1 × 10−10 // population variance that stops the optimization
1: /* Initialize population of n host nests xi (i = 1, 2, ..., n) randomly*/
2: FOR EACH i: 1 ≤ in DO
3: Evaluate the corresponding fitness function Fi
4: END FOR
5: WHILE (g < GenMax) DO
6: /* Get new nests by Lévy flights */
7: FOR EACH i: 1 ≤ in DO
8: xL = xi + αLevy(λ);
9: END FOR
10: FOR EACH i: 1 ≤ in DO
11: Compute FL
12: IF (FL < Fi) THEN
13: xixL;
14: END IF
15: END FOR
16: Compute Fp
17: /*Update best nest xp of the d generation*/
18: IF (Fp < Fb) THEN
19: xbxp;
20: END IF
21: END WHILE
22: RETURN xb
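The loop of Algorithm 1 can be sketched roughly as below. This is a simplified, assumption-laden version: the Weibull negative log-likelihood serves as the fitness, Gaussian perturbations stand in for the Lévy flights of lines 6–9, and the egg-laying details (cluster numbers, egg counts, population variance cutoff) are omitted.

```python
import math
import random

def weibull_nll(k, c, data):
    """Negative log-likelihood of Weibull(k, c) -- the fitness to minimize."""
    if k <= 0 or c <= 0:
        return float("inf")
    return -sum(math.log(k / c) + (k - 1) * math.log(x / c) - (x / c) ** k
                for x in data)

def cs_weibull(data, n_nests=15, max_iter=2000, pa=0.25, seed=0):
    """Minimal cuckoo-search estimate of [k, c]; search ranges are illustrative."""
    rng = random.Random(seed)
    new_nest = lambda: [rng.uniform(0.5, 5.0), rng.uniform(0.5, 10.0)]
    nests = [new_nest() for _ in range(n_nests)]
    fit = [weibull_nll(k, c, data) for k, c in nests]
    for _ in range(max_iter):
        # new solution from a random nest (Gaussian step instead of a Lévy flight)
        i = rng.randrange(n_nests)
        cand = [nests[i][0] + rng.gauss(0, 0.1), nests[i][1] + rng.gauss(0, 0.1)]
        f = weibull_nll(cand[0], cand[1], data)
        j = rng.randrange(n_nests)
        if f < fit[j]:  # lines 10-15: keep the better nest
            nests[j], fit[j] = cand, f
        if rng.random() < pa:  # rule 3: abandon a bad nest with probability pa
            w = max(range(n_nests), key=fit.__getitem__)
            nests[w] = new_nest()
            fit[w] = weibull_nll(nests[w][0], nests[w][1], data)
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best]  # estimated [k, c]
```

Because the best nest is never abandoned, the best fitness found is monotonically non-increasing over the iterations.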
Algorithm 2: AC-Weibull
Input:
  • x s ( 0 ) = ( x ( 0 ) ( 1 ) , x ( 0 ) ( 2 ) , , x ( 0 ) ( q ) ) —a sequence of training data.
  • x p ( 0 ) = ( x ( 0 ) ( q + 1 ) , x ( 0 ) ( q + 2 ) , , x ( 0 ) ( q + d ) ) —a sequence of verifying data
Output:
  • xb—the value of x with the best fitness value in population of nests
  • Fitness Function: f ( x ) = ( k / c ) × ( x / c ) k 1 × exp [ ( x / c ) k ]
Parameters:
NC_max — maximum iterations: 50
m — number of ants: 30
Alpha — importance of the pheromone trail: 1
Beta — importance of the heuristic factor: 5
Rho — pheromone evaporation coefficient: 0.1
Q — pheromone increasing intensity coefficient: 100
1: /*Initialize popsize candidates with the values between 0 and 1*/
2: FOR EACH i: 1 ≤ in DO
3: α i 1 = r a n d ( m , n )
4: END FOR
5: P = { α i i t e r : 1 i p o p s i z e }
6: iter = 1; Evaluate the corresponding fitness function Fi
7: /* Find the best value of repeatedly until the maximum iterations are reached. */
8: WHILE ( i t e r i t e r m a x ) DO
9: /* Find the best fitness value for each candidates */
10: FOR EACH α i i t e r P DO
11: Build neural network by using x s ( 0 ) with the α i i t e r value
12: Calculate x ^ p ( 0 )   =   ( x ^ p + 1 ( 0 ) , x ^ p + 2 ( 0 ) , , x ^ p + 3 ( 0 ) ) by neural network
13: /* Choose the best fitness value of the ith candidate in history */
14: IF (pBesti > fitness( α i i t e r )) THEN
15: pBesti = fitness( α i i t e r )
16: END IF
17: END FOR
18: /* Choose the candidate with the best fitness value of all the candidates */
19: FOR EACH α i i t e r P DO
20: IF (gBest > pBesti) THEN
21: gBest = pBesti
22: α b e s t = α i i t e r
23: END IF
24: END FOR
25: /*Update the values of all the candidates by using ACO’s evolution equations.*/
26: FOR EACH α i i t e r P DO
27: α_(t+1) = 0.1 × α_t
28: x̄_gbest = x_gbest ± (x_gbest × 0.01), where the sign is (+) if f(x̄_gbest) ≤ f(x_gbest) and (−) otherwise
29: END FOR
30: P = { α i i t e r : 1     i     p o p s i z e }
31: iter = iter + 1
32: END WHILE

2.2. Wind Energy Potential Assessment Case Study

In this paper, wind speed data from 2009 to 2013 are adopted to assess the wind energy in four locations—[125, 40], [122.5, 40], [125, 42.5], and [120, 40]—where the first component represents the longitude and the second one the latitude. The collected wind speed data are applied in two ways:
1. Single-year application: wind speed data from each single year are analyzed to obtain yearly assessment results.
2. Whole five-year application: wind speed data from each season of the five years are analyzed to obtain seasonal assessment results as well as whole five-year assessment results.
In addition, beyond the CS-Weibull and AC-Weibull models, an original Weibull model and two other models related to the Firefly Algorithm (FA) and the Genetic Algorithm (GA) are introduced to compare the assessment effectiveness. The two models are abbreviated as the FA-Weibull and GA-Weibull models, respectively.

2.2.1. Assessment Results in a Single Year

Wind energy assessment is an important indicator for determining the potential of wind resources and describes the amount of wind energy available at various wind speeds in a particular location. In wind energy assessment studies, the common parameter estimation methods include the method of moments, maximum likelihood estimation and least squares estimation, each of which has disadvantages and limitations. The method of moments is simple: only the moments of the population are needed, not knowledge of the full population distribution. However, it can only be applied when the population moments exist, the moments carry only part of the information, and the method performs well only for large sample sizes. Maximum likelihood estimation (MLE) estimates the parameters of a statistical model by finding the parameter values that maximize the likelihood of the observations given the parameters. However, MLE requires the sample distribution, and the likelihood equations are often complicated and are typically solved approximately by iterative computation; the method may also yield multiple optima or non-optimal solutions. Least squares estimation can be applied to both linear and nonlinear relationships and requires no probabilistic information about the observed data. However, it has two defects: if the model noise is colored, the least squares estimate is biased, and as the data size increases, "data saturation" appears. Bayesian parameter estimation, in turn, requires knowledge of the distribution of the random error.
When the sample size is small, the prior probability has a significant influence on the Bayesian estimation result (the results of the maximum likelihood, method of moments, least squares and Bayesian parameter estimates are given in Appendix A). In summary, in this paper, four optimization algorithms (the firefly, genetic, ant colony and cuckoo search algorithms) are evaluated for determining the shape (k) and scale (c) parameters of the Weibull distribution function used to calculate the wind power density. The comparison of assessment results shows that the swarm intelligent algorithms deliver effective assessment performance.
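To make concrete the iterative computation that MLE requires, the Weibull case can be solved with a standard textbook procedure: the profile-likelihood equation for k has a single root that bisection can find, after which c follows in closed form. This is a generic sketch for comparison, not the paper's own procedure.

```python
import math

def weibull_mle(data, tol=1e-9):
    """Maximum-likelihood estimates (k, c) for Weibull-distributed data > 0."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(data)

    def g(k):
        # g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x); its root is the MLE k.
        s1 = sum(x ** k * math.log(x) for x in data)
        s0 = sum(x ** k for x in data)
        return s1 / s0 - 1.0 / k - mean_log

    lo, hi = 1e-3, 100.0  # g is increasing: negative near 0, positive for large k
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    c = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)  # closed form for c
    return k, c
```

This illustrates the text's point: unlike the method of moments, MLE here needs an iterative root search, but it converges reliably because the profile equation is monotone in k.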
The parameter estimation results in a single year, from 2009 to 2013, of the five models are listed in Table 1. According to the estimated parameters given in Table 1, the five models can be determined, and Figure 1 is the indication of the PDF fitting results in a single year from 2009 to 2013.
Based on the PDF fitting results, the following four error evaluation criteria (Equations (5)–(8)) are adopted to evaluate the assessment performance:
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|
SSE = Σ_{i=1}^{n} (y_i − ŷ_i)²
RMSE = √[(1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²]
R² = [Σ_{i=1}^{n} (y_i − ȳ)² − Σ_{i=1}^{n} (y_i − ŷ_i)²] / Σ_{i=1}^{n} (y_i − ȳ)²
where y_i is the observed value, ŷ_i is the forecasted value, and ȳ = (1/n) Σ_{i=1}^{n} y_i.
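Equations (5)–(8) translate directly into code; a minimal sketch:

```python
def evaluation_criteria(y, y_hat):
    """MAE, SSE, RMSE and R^2 from Equations (5)-(8)."""
    n = len(y)
    mae = sum(abs(a - b) for a, b in zip(y, y_hat)) / n        # Equation (5)
    sse = sum((a - b) ** 2 for a, b in zip(y, y_hat))          # Equation (6)
    rmse = (sse / n) ** 0.5                                    # Equation (7)
    y_bar = sum(y) / n
    sst = sum((a - y_bar) ** 2 for a in y)
    r2 = (sst - sse) / sst                                     # Equation (8)
    return mae, sse, rmse, r2
```

A perfect fit gives MAE = SSE = RMSE = 0 and R² = 1; predicting the mean everywhere gives R² = 0.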
Table 2 provides the yearly assessment performance of the four optimization algorithms from 2009 to 2013 in terms of the MAE, RMSE, SSE and R². As seen from Table 2, although descriptive statistics provide meaningful analysis, especially regarding the distribution of the wind speed, they cannot by themselves judge the precision of each optimization algorithm for estimating the parameters of the Weibull distribution. Therefore, the evaluation criteria introduced in Equations (5)–(8) are employed to appraise the performances of the four selected parameter estimation optimization algorithms. Each statistical criterion supplies a different useful view for comparing the algorithms; as a result, the combination of all statistical indicators provides an effective way to compare the different parameter estimation optimization algorithms for wind power assessment. The accuracy of the assessed wind power density values changes when the parameter estimation algorithm changes. This is apparent at each research site when the CS, GA, FA and AC algorithms are utilized to estimate the parameters of the Weibull distribution, as indicated by low error values and high R² values. According to the statistical results in Table 2, for the four Chinese wind farm sites, the best results for calculating the wind power density are achieved when the four optimization algorithms are employed to compute the k and c parameters; for each site, the most precise results are obtained using different optimization algorithms [20].

2.2.2. Seasonal and Whole Five-Year Assessment Results

Considering that wind speed data may be vastly different in different years, this section provides seasonal and whole five-year wind energy assessment results by comprehensively using the wind speed data in the five years from 2009 to 2013. Similarly, Table 3 lists the seasonal and whole five-year parameter estimation results, and Figure 2 and Table 4 present the PDF fitting and corresponding error results.
The same conclusion can be obtained from these results; i.e., the four new proposed models based on the FA, GA, CS algorithm and AC algorithm are superior to the original Weibull model.
The two-parameter Weibull distribution function has been widely applied to many kinds of wind energy-related investigations due to its simplicity, flexibility and effectiveness. In this paper, the performance of four optimization algorithms, including the FA, GA, CS, and AC algorithms, was assessed for optimizing the k and c parameters of the Weibull probability distribution function when calculating the wind power density at four sites in China. The assessments were conducted on both a seasonal and annual basis to offer a more complete analysis. Both the annual and seasonal results showed that using different parameter estimation methods, through different optimization algorithms, for determining the k and c parameters of the Weibull distribution changes the accuracy of the calculated wind power density values. According to the wind energy assessment results from the statistical analysis, the FA, GA, CS, and AC algorithms provided a very desirable performance for each site. Another finding concerned the efficiency of the CS and AC algorithms. The assessment results also show that the most appropriate parameter estimation algorithm was not universal among the examined sites; in fact, the wind energy properties of a site can be a significant factor in wind energy assessment. Annually and seasonally for Site 1, the CS algorithm was recognized as the more appropriate algorithm, while the FA showed weak performance for wind power assessment. For Site 2, all four optimization algorithms were effective Weibull parameter estimation algorithms for the wind power density in each year and season. For Site 3, the AC algorithm showed poor performance for the annual wind power density distribution, and the FA was recognized as the more appropriate method. For Site 4, both the FA and GA performed better for the seasonal wind power density.
The suggested parameter estimation methods have excellent performance for representing the distribution of seasonal and annual wind power density as well as determining different statistical properties of the power density [20].

3. Connection between Energy Assessment and Forecasting

In recent years, de-noising methods such as Ensemble Empirical Mode Decomposition (EEMD), Singular Spectrum Analysis (SSA) and Wavelet Decomposition (WD) have been widely used to pre-process wind speed time series. Thus far, however, there is no established way to choose which de-noising method should be applied to the original wind speed time series. In this section, the wind energy assessment method with the smallest error values is used to choose the best de-noising method for pre-processing the wind speed time series.
Figure 3 presents the PDF fitting results obtained by the three de-noising methods for the four sites, and Table 5 shows the parameter estimation and error results for the de-noised wind speed time series. As seen from Figure 3 and Table 5, the R² values from Site 1 to Site 4 for the WD de-noising method are all closest to 1, and the MAE values of the WD method are the smallest among the three de-noising models. In this paper, the WD de-noising method is therefore adopted to pre-process the original wind speed data to improve the forecasting accuracy.
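To illustrate the WD idea concretely (not the paper's exact wavelet basis or threshold rule — the Haar basis and hard thresholding below are assumptions), one decomposition level can be written as:

```python
def haar_dwt(signal):
    """One-level Haar split into approximation and detail coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal) - 1, 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction from (a, d)."""
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / 2 ** 0.5)
        out.append((ai - di) / 2 ** 0.5)
    return out

def wd_denoise(signal, threshold):
    """WD-style de-noising: decompose, hard-threshold the detail
    coefficients (where high-frequency noise lives), reconstruct."""
    a, d = haar_dwt(signal)
    d = [di if abs(di) > threshold else 0.0 for di in d]
    return haar_idwt(a, d)
```

With a zero threshold the round trip reproduces the input exactly; with a positive threshold, small high-frequency fluctuations are suppressed while the low-frequency trend survives.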

4. Proposed Integrated Forecasting Framework and Forecasting Results

In this section, three basic neural network forecasting models are first introduced; then, the integrated forecasting framework proposed in this paper is shown. Finally, the forecasting results obtained by the new proposed forecasting framework are analyzed.

4.1. Basic Neural Network Forecasting Models

Artificial neural networks are widely used in forecasting because they can approximate nonlinear functions with arbitrary accuracy. Three neural network models are introduced in this paper for wind speed forecasting.

4.1.1. Back Propagation Neural Network

The back propagation neural network (BPNN) [21] is a multilayer feed-forward neural network. Its two main features are the feed-forward signal and the back-propagated error. In the feed-forward process, the signal is passed layer by layer from the input layer to the hidden layer and then to the output layer, and the state of the neurons in one layer only affects the neurons in the next layer. If the output of the output layer is not as expected, back propagation starts.
Suppose X_1, X_2, …, X_n are the input values of the BPNN, Y_1, Y_2, …, Y_m are the corresponding output values, and ω_ij and ω_jk are the weights; then the BPNN can be viewed as a nonlinear function, with the input and output values regarded as the independent and dependent variables. The BPNN structure in Figure 4 expresses the mapping from n independent variables to m dependent variables.
The network training is the main task of the BPNN. Through the training operation, the BPNN has capacity for associative memory and forecasting. The training process of the BPNN includes the following steps:
Step 1: Network initialization. Based on the practical problem, determine the number of nodes in the input, hidden and output layers. Then, initialize the following values: the connection weights ω i j and ω j k , threshold values θ j and θ k in the hidden and output layers, respectively, and the learning rate η and the transfer functions.
Step 2: Output calculation of the hidden layer. From the input vector X = (X_1, X_2, …, X_n), the connection weights ω_ij between the input and hidden layers, and the threshold values θ_j of the hidden layer, the output of the hidden layer is calculated by Equation (9):
H_j = f(Σ_{i=1}^{n} ω_ij x_i − θ_j), j = 1, 2, …, l
where l is the number of nodes in the hidden layer and f(·) is the transfer function of the hidden layer, which can take a variety of forms. In this research, the following form (Equation (10)) is adopted:
f(x) = 1 / (1 + e^(−x))
Step 3: Output calculation of the output layer. From the hidden-layer output H_j, the connection weights ω_jk between the hidden and output layers, and the threshold values θ_k of the output layer, the forecasting output of the BPNN is expressed by Equation (11):
Y_k = g(Σ_{j=1}^{l} ω_jk H_j − θ_k)
where g(·) is the transfer function from the hidden layer to the output layer, defined in this research by Equation (12):
g(x) = 1 / (1 + e^(−x))
Step 4: Error calculation. With the predicted output Y = (Y_1, Y_2, …, Y_m) and the desired output DY = (DY_1, DY_2, …, DY_m), the forecasting error of the network is computed by Equation (13):
e = (1/(2P)) Σ_{p=1}^{P} Σ_{j=1}^{m} (DY_j^p − Y_j^p)²
where P is the number of the input and output pairs.
Step 5: Weight update. Update the connection weights ω_ij and ω_jk by Equations (14) and (15):
ω_jk = ω_jk + η δ_k H_j
ω_ij = ω_ij + η δ_j X_i
where η is the learning rate, and δ_k and δ_j are given by Equations (16) and (17):
δ_k = Y_k (1 − Y_k)(DY_k − Y_k)
δ_j = H_j (1 − H_j) Σ_k ω_jk δ_k
Step 6: Threshold update. Using the forecasting error of the network, update the thresholds by Equations (18) and (19):
θ_k = θ_k − η δ_k
θ_j = θ_j − η δ_j
Step 7: Termination determination. Determine whether the termination requirement is met; if so, stop; otherwise, return to Step 2.
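Steps 2–6 for a single training sample can be sketched as follows, using the sigmoid of Equations (10) and (12). The network sizes, initial weights and learning rate in the usage example are illustrative assumptions.

```python
import math

def sigmoid(x):
    """Equations (10) and (12): logistic transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def train_step(X, DY, w_ij, w_jk, theta_j, theta_k, eta=0.5):
    """One BPNN iteration for one sample (Steps 2-6), updating weights in place.

    w_ij[i][j]: input->hidden weights; w_jk[j][k]: hidden->output weights;
    theta_j / theta_k: hidden / output thresholds; eta: learning rate."""
    # Step 2: hidden-layer output, Equation (9)
    H = [sigmoid(sum(w_ij[i][j] * X[i] for i in range(len(X))) - theta_j[j])
         for j in range(len(theta_j))]
    # Step 3: output-layer output, Equation (11)
    Y = [sigmoid(sum(w_jk[j][k] * H[j] for j in range(len(H))) - theta_k[k])
         for k in range(len(theta_k))]
    # Equations (16) and (17): local error gradients
    delta_k = [Y[k] * (1 - Y[k]) * (DY[k] - Y[k]) for k in range(len(Y))]
    delta_j = [H[j] * (1 - H[j]) * sum(w_jk[j][k] * delta_k[k] for k in range(len(Y)))
               for j in range(len(H))]
    # Step 5: weight updates, Equations (14) and (15)
    for j in range(len(H)):
        for k in range(len(Y)):
            w_jk[j][k] += eta * delta_k[k] * H[j]
    for i in range(len(X)):
        for j in range(len(H)):
            w_ij[i][j] += eta * delta_j[j] * X[i]
    # Step 6: threshold updates, Equations (18) and (19)
    for k in range(len(Y)):
        theta_k[k] -= eta * delta_k[k]
    for j in range(len(H)):
        theta_j[j] -= eta * delta_j[j]
    return Y
```

Repeating the step on a sample drives the pre-update output toward the desired output, which is the Step 7 convergence check in miniature.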

4.1.2. Wavelet Neural Network

The Wavelet Neural Network (WNN) [22] is a neural network constructed on the basis of the BPNN topology, with a wavelet basis function as the transfer function of the hidden-layer nodes. In this type of network, the signal is transferred feed-forward, while the error is propagated backward. Suppose X_1, X_2, …, X_n are the inputs of the network, Y_1, Y_2, …, Y_m are the forecasted outputs, and ω_ij and ω_jk are the weights; then the output of the hidden layer is represented by Equation (20):
h_j = h((Σ_{i=1}^{n} ω_ij X_i − b_j) / a_j)
where h_j is the output of the jth hidden-layer node, ω_ij is the connection weight between the input and hidden layers, h(·) is the wavelet function, b_j is the shift factor of the wavelet function, and a_j is the stretch factor of the wavelet function.
The forecasted value of the output layer can be calculated by Equation (21):
y k = j = 1 l ω j k h j ,   k = 1 , 2 , , m
where ω_jk is the weight between the hidden and output layers, h_j is the output of the jth hidden-layer node, l is the number of nodes in the hidden layer, and m is the number of nodes in the output layer.
The process of the WNN algorithm is as follows:
Step 1: Network initialization. Randomly initialize the stretch factors a_j, shift factors b_j, network connection weights ω_ij and ω_jk, and the network learning rate η.
Step 2: Sample classification. Divide the samples into the training and testing samples, which are used to train the network and test the forecasting accuracy of the network, respectively.
Step 3: Output prediction. Input the training sample into the network and calculate the predicted output of the network as well as the error between the network output and desired output.
Step 4: Weight correction. Correct the network weights and the parameters of the wavelet function according to the calculated error values, so that the network's predicted values approach the expected values.
Step 5: Algorithm termination judgment. Determine whether the termination condition is satisfied; if not, return to Step 3.
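As a concrete illustration, the forward pass defined by Equations (20) and (21) takes only a few lines of NumPy. The Morlet wavelet used below for h(·), and the layer sizes, are illustrative assumptions, since the paper does not specify its basis function:

```python
import numpy as np

def morlet(t):
    # Morlet mother wavelet, one common choice for h(.) in WNNs (assumed here)
    return np.cos(1.75 * t) * np.exp(-t**2 / 2.0)

def wnn_forward(x, w_in, w_out, a, b):
    """Forward pass of a wavelet neural network.

    x     : (n,)   input vector
    w_in  : (l, n) input-to-hidden weights  (omega_ij)
    w_out : (m, l) hidden-to-output weights (omega_jk)
    a, b  : (l,)   stretch and shift factors of the wavelet
    """
    # Equation (20): h_j = h((sum_i omega_ij * x_i - b_j) / a_j)
    h = morlet((w_in @ x - b) / a)
    # Equation (21): y_k = sum_j omega_jk * h_j
    return w_out @ h

rng = np.random.default_rng(0)
n, l, m = 4, 6, 1                      # input, hidden, output sizes (assumed)
y = wnn_forward(rng.normal(size=n),
                rng.normal(size=(l, n)),
                rng.normal(size=(m, l)),
                np.ones(l), np.zeros(l))
print(y.shape)  # (1,)
```

During training (Step 4), the weights and the wavelet parameters a_j, b_j would all be adjusted by gradient descent on the prediction error.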

4.1.3. Elman Neural Network

The ENN [23] generally comprises four layers: input, hidden, context and output. The connections among the input, hidden and output layers are similar to those of a feed-forward network. The nodes in the input layer only transmit the signal, while those in the output layer apply a linear weighting. The transfer function of the hidden layer can be either linear or nonlinear. The context layer, also known as the undertake or state layer, remembers the previous output of the hidden layer and returns it to the network input, so it can be regarded as a single-step delay operator.
Through the delay and storage of the context layer, the output of the hidden layer is fed back into the hidden layer's input. This self-connection makes the network sensitive to historical data and increases its capacity to process dynamic information, thereby achieving the purpose of dynamic modeling. In addition, the ENN can approximate any nonlinear map with arbitrary precision, without considering the specific form of the external noise acting on the system. Therefore, given the input-output pairs of the system, the system can be modeled.
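The single-step delay role of the context layer can be made concrete with a minimal sketch (layer sizes, the tanh hidden activation and the random weights are illustrative assumptions): at each time step the previous hidden output is fed back alongside the current input, which is what gives the ENN its memory of past wind speeds.

```python
import numpy as np

def elman_step(x, context, W_x, W_c, W_out, b_h, b_y):
    """One time step of an Elman network.

    The context layer simply stores the previous hidden output and
    feeds it back into the hidden layer (a single-step delay operator).
    """
    h = np.tanh(W_x @ x + W_c @ context + b_h)   # hidden layer
    y = W_out @ h + b_y                          # linear output layer
    return y, h                                  # h becomes the next context

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 3, 5, 1                     # assumed sizes
W_x = rng.normal(size=(n_hid, n_in))
W_c = rng.normal(size=(n_hid, n_hid))
W_out = rng.normal(size=(n_out, n_hid))
b_h, b_y = np.zeros(n_hid), np.zeros(n_out)

context = np.zeros(n_hid)                        # initial context state
outputs = []
for x in rng.normal(size=(10, n_in)):            # a 10-step input sequence
    y, context = elman_step(x, context, W_x, W_c, W_out, b_h, b_y)
    outputs.append(y[0])
print(len(outputs))  # 10
```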

4.2. Structure of the Proposed Integrated Forecasting Framework

In this paper, forecasting models based on the three artificial neural networks described in Section 4.1 (the ENN, BPNN and WNN) are used to forecast the wind speed. The integrated forecasting framework is shown in Figure 5 and comprises the following main procedures. First, wavelet decomposition (WD) [24] is used to pre-process the original wind speed data; as shown in Section 3, WD is the best pre-processing method according to the wind energy assessment results. This yields three new models, abbreviated WD-ENN, WD-BPNN and WD-WNN. Second, the CS and AC algorithms are adopted to optimize the unknown weight and bias matrices between the hidden and output layers of these three models. This produces three networks optimized by the CS algorithm, named WD-CS-ENN, WD-CS-BPNN and WD-CS-WNN, and three networks optimized by the AC algorithm, abbreviated WD-AC-ENN, WD-AC-BPNN and WD-AC-WNN (shown in Figure 4). The related pseudocodes are presented in Algorithms 3 and 4.
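As a sketch of the WD pre-processing step, wavelet decomposition with the PyWavelets library can de-noise a wind speed window by shrinking the detail coefficients. The db4 wavelet, the three-level decomposition and the soft universal threshold are our assumptions, not necessarily the paper's exact settings:

```python
import numpy as np
import pywt  # PyWavelets

def wd_denoise(series, wavelet="db4", level=3):
    """De-noise a wind speed series by wavelet decomposition:
    decompose, shrink the detail coefficients, reconstruct.
    Wavelet family, level and threshold rule are illustrative choices.
    """
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # universal threshold estimated from the finest detail level
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

rng = np.random.default_rng(2)
t = np.linspace(0, 6 * np.pi, 1008)              # one training window
wind = 8 + 2 * np.sin(t) + rng.normal(0, 0.5, t.size)  # synthetic speeds
smooth = wd_denoise(wind)
print(smooth.shape)
```

The de-noised series then serves as the network input, while the original series remains the training target, as described in Section 4.3.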
Algorithm 3: Three Neural Networks Optimized by the CS Algorithm
Input:
  • x_s^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(q)) — a sequence of training data
  • x_p^(0) = (x^(0)(q+1), x^(0)(q+2), ..., x^(0)(q+d)) — a sequence of verifying data
Output:
  • x_b — the value of x with the best fitness value in the population of nests
  • Fitness function: x(k) = f(ω_1·x_c(k) + ω_2·u(k−1)) (ENN)
  •          f(net) = 1/(1 + e^(−net)) (BPNN)
  •          h(j) = h_j((Σ_{i=1}^{k} ω_ij·x_i − b_j)/a_j) (WNN)
Parameters:
  NumCuckoos = 50; number of initial population
  MinNumberOfEggs = 2; minimum number of eggs for each cuckoo
  MaxNumberOfEggs = 4; maximum number of eggs for each cuckoo
  MaxIter = 200; maximum iterations of the cuckoo algorithm
  KnnClusterNum = 1; number of clusters that we want to make
  MotionCoeff = 20; lambda variable in the COA paper, default = 2
  accuracy = 0 × 10^(−10); how much accuracy in the answer is needed
  MaxNumOfCuckoos = 20; maximum number of cuckoos that can live at the same time
  RadiusCoeff = 0.05; control parameter of egg laying
  CuckooPopVariance = 1 × 10^(−10); population variance that stops the optimization

1: /* Initialize population of n host nests x_i (i = 1, 2, ..., n) randomly */
2: FOR EACH i: 1 ≤ i ≤ n DO
3:   Evaluate the corresponding fitness function F_i
4: END FOR
5: WHILE (g < GenMax) DO
6:   /* Get new nests by Lévy flights */
7:   FOR EACH i: 1 ≤ i ≤ n DO
8:     x_L = x_i + α·Lévy(λ);
9:   END FOR
10:  FOR EACH i: 1 ≤ i ≤ n DO
11:    Compute F_L
12:    IF (F_L < F_i) THEN
13:      x_i ← x_L;
14:    END IF
15:  END FOR
16:  Compute F_p
17:  /* Update the best nest x_p of the current generation */
18:  IF (F_p < F_b) THEN
19:    x_b ← x_p;
20:  END IF
21: END WHILE
22: RETURN x_b
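The Lévy-flight update on line 8 of Algorithm 3 is commonly implemented with Mantegna's algorithm. The sketch below wraps it in a minimal cuckoo search over a stand-in fitness function; in the paper the fitness would be the network's training error, and the values β = 1.5, α = 0.01 and p_a = 0.25 are conventional assumptions on our part:

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, n=15, alpha=0.01, pa=0.25, iters=200, seed=0):
    """Minimal cuckoo search: Levy flights plus abandonment of a fraction
    pa of the worst nests (a standard CS step left implicit in the paper)."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-1, 1, (n, dim))
    fit = np.array([fitness(x) for x in nests])
    for _ in range(iters):
        for i in range(n):
            trial = nests[i] + alpha * levy_step(dim, rng=rng)
            f = fitness(trial)
            if f < fit[i]:                       # greedy replacement
                nests[i], fit[i] = trial, f
        # abandon the worst nests and rebuild them randomly
        worst = fit.argsort()[-int(pa * n):]
        nests[worst] = rng.uniform(-1, 1, (len(worst), dim))
        fit[worst] = [fitness(x) for x in nests[worst]]
    return nests[fit.argmin()], fit.min()

# fitting two weights against a sphere surrogate of the training error
best_x, best_f = cuckoo_search(lambda x: np.sum(x ** 2), dim=2)
print(best_f)
```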
Algorithm 4: Three Neural Networks Optimized by the AC Optimization Algorithm
Input:
  • x_s^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(q)) — a sequence of training data
  • x_p^(0) = (x^(0)(q+1), x^(0)(q+2), ..., x^(0)(q+d)) — a sequence of verifying data
Output:
  • x_b — the value of x with the best fitness value in the population of candidates
  • Fitness function: x(k) = f(ω_1·x_c(k) + ω_2·u(k−1)) (ENN)
  •          f(net) = 1/(1 + e^(−net)) (BPNN)
  •          h(j) = h_j((Σ_{i=1}^{k} ω_ij·x_i − b_j)/a_j) (WNN)
Parameters:
  NC_max = 50; maximum iterations
  m = 30; number of ants
  Alpha = 1; importance of the pheromone (information elements)
  Beta = 5; importance of the heuristic factor
  Rho = 0.1; pheromone evaporation coefficient
  Q = 100; pheromone increasing intensity coefficient
1: /* Initialize popsize candidates with values between 0 and 1 */
2: FOR EACH i: 1 ≤ i ≤ n DO
3:   α_i^1 = rand(m, n)
4: END FOR
5: P = { α_i^iter : 1 ≤ i ≤ popsize }
6: iter = 1; Evaluate the corresponding fitness function F_i
7: /* Search for the best value repeatedly until the maximum iterations are reached */
8: WHILE (iter ≤ iter_max) DO
9:   /* Find the best fitness value for each candidate */
10:  FOR EACH α_i^iter ∈ P DO
11:    Build the neural network by using x_s^(0) with the α_i^iter value
12:    Calculate x̂_p^(0) = (x̂_{p+1}^(0), x̂_{p+2}^(0), ..., x̂_{p+d}^(0)) by the neural network
13:    /* Choose the best fitness value of the ith candidate in history */
14:    IF (pBest_i > fitness(α_i^iter)) THEN
15:      pBest_i = fitness(α_i^iter)
16:    END IF
17:  END FOR
18:  /* Choose the candidate with the best fitness value of all the candidates */
19:  FOR EACH α_i^iter ∈ P DO
20:    IF (gBest > pBest_i) THEN
21:      gBest = pBest_i
22:      α_best = α_i^iter
23:    END IF
24:  END FOR
25:  /* Update the values of all the candidates by using ACO's evolution equations */
26:  FOR EACH α_i^iter ∈ P DO
27:    α_{t+1} = 0.1 × α_t
28:    x̄_gbest = x_gbest ± (x_gbest × 0.01), where the sign is (+) if f(x̄_gbest) ≤ f(x_gbest) and (−) otherwise
29:  END FOR
30:  P = { α_i^iter : 1 ≤ i ≤ popsize }
31:  iter = iter + 1
32: END WHILE
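The evolution loop of Algorithm 4 (personal bests, a global best, and a ±1% perturbation of the global best on line 28) can be sketched as follows. The contraction coefficient 0.1 mirrors line 27, the 0.01 noise scale is our assumption, and the sphere-like fitness is again a stand-in for the network training error:

```python
import numpy as np

def ac_optimize(fitness, dim, popsize=30, iters=50, seed=0):
    """Simplified version of the paper's AC evolution loop: track personal
    and global bests, contract the population toward the global best, and
    nudge the global best by +/-1%, keeping the nudge only if it improves."""
    rng = np.random.default_rng(seed)
    pop = rng.random((popsize, dim))             # candidates in [0, 1]
    p_best_f = np.array([fitness(x) for x in pop])
    p_best_x = pop.copy()
    g = p_best_f.argmin()
    g_best_x, g_best_f = p_best_x[g].copy(), p_best_f[g]
    for _ in range(iters):
        # contraction toward the best plus a small exploration noise
        pop = g_best_x + 0.1 * (pop - g_best_x) \
              + 0.01 * rng.standard_normal((popsize, dim))
        f = np.array([fitness(x) for x in pop])
        better = f < p_best_f                    # update personal bests
        p_best_f[better], p_best_x[better] = f[better], pop[better]
        # +/-1% perturbation of the global best, sign chosen by improvement
        trial = g_best_x * 1.01
        if fitness(trial) > fitness(g_best_x):
            trial = g_best_x * 0.99
        if fitness(trial) < g_best_f:
            g_best_x, g_best_f = trial, fitness(trial)
        g = p_best_f.argmin()                    # update the global best
        if p_best_f[g] < g_best_f:
            g_best_x, g_best_f = p_best_x[g].copy(), p_best_f[g]
    return g_best_x, g_best_f

best_x, best_f = ac_optimize(lambda x: np.sum((x - 0.3) ** 2), dim=3)
print(best_f)
```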

4.3. Wind Speed Forecasting Case Study

Once the original wind speed time series has been processed by the WD method, the pre-processed series is taken as the input of the optimized BPNN, ENN and WNN models. It is worth noting that the way the original wind speed time series is divided into training and testing sets is quite important. In the network training procedure, the training inputs are the de-noised data, while the training output is the original training time series. In the testing step, the inputs are likewise the de-noised wind speed data, and the output is the original testing output, which is treated as unknown.
Figure 6 presents the data division; the training dataset is a fixed window of length N = 1008 taken from the original time series. Apart from the data division, the forecasting horizon is also an important setting. In this paper, multi-step-ahead forecasting with h = 1, 2 and 3 is analyzed, where h is the prediction step.
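The fixed training window of N = 1008 points and the h-step-ahead targets reduce to simple index slices; the lag length of 4 and the variable names below are our illustrative assumptions:

```python
import numpy as np

def make_samples(series, n_lags=4, h=1):
    """Turn a univariate series into (X, y) pairs for h-step-ahead
    forecasting: each input row is n_lags consecutive values and the
    target is the value h steps after the window."""
    X, y = [], []
    for t in range(len(series) - n_lags - h + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + h - 1])
    return np.array(X), np.array(y)

series = np.arange(1208.0)                   # stand-in for wind speeds
train, test = series[:1008], series[1008:]   # fixed window N = 1008
X_train, y_train = make_samples(train, h=3)  # three-step-ahead targets
print(X_train.shape, y_train.shape)          # (1002, 4) (1002,)
```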
The parameter initialization values of the different neural networks are shown in Table 6. Based on the error evaluation criterion MAE, defined in Equation (5), and the two additional forecasting error evaluation criteria shown in Equations (22) and (23), the forecasting error values obtained by the different neural networks are listed in Table 7.
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
where y i and y ^ i are the actual and forecasted wind speed values, and n is the number of the data samples.
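Equations (22) and (23) translate directly into code (note that MAPE is often reported as a percentage, whereas Equation (23) leaves it as a fraction):

```python
import numpy as np

def mse(y, y_hat):
    # Equation (22): mean squared error
    return np.mean((y - y_hat) ** 2)

def mape(y, y_hat):
    # Equation (23): mean absolute percentage error as a fraction;
    # multiply by 100 to report a percentage
    return np.mean(np.abs((y - y_hat) / y))

y = np.array([8.0, 10.0, 5.0])      # actual wind speeds (illustrative)
y_hat = np.array([7.0, 11.0, 5.0])  # forecasted wind speeds
print(mse(y, y_hat), mape(y, y_hat))  # ≈ 0.6667 and 0.075
```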
Table 7 provides the forecasting error results for three horizons: one-step-ahead, two-step-ahead and three-step-ahead. Under the same horizon, the nine optimized neural networks all outperform the three single neural networks. Additionally, the models optimized by WD together with CS or AC are all superior to those optimized by WD alone. Comparing the WD-CS models with the WD-AC models: for the one-step-ahead results shown in Figure 7, the error values of the WD-CS models are all smaller than those of the corresponding WD-AC models; for the two-step-ahead results shown in Figure 8, the BPNN model optimized by WD and CS is worse than the one optimized by WD and AC; and for the three-step-ahead results shown in Figure 9, the ENN and BPNN models optimized by WD and CS are both worse than those optimized by WD and AC. In conclusion, the novel optimized models proposed in this paper all outperform the original models.

5. Conclusions

Effective wind energy potential assessment and forecasting for a particular site plays an indispensable role in the design, evaluation and scheduling of wind farms. In this paper, based on the CS and AC algorithms, two new wind energy assessment models and six wind speed forecasting models are proposed. First, the CS and AC algorithms are introduced to estimate the two unknown parameters of the Weibull distribution and improve the assessment accuracy; the four sets of assessment error evaluation criteria demonstrate that the two newly proposed assessment models are effective and meaningful. Then, the best data pre-processing approach is selected according to the wind energy potential evaluation results and is adopted to process the wind speed time series. Finally, the CS and AC algorithms are used to optimize three neural networks, namely the ENN, BPNN and WNN, and the three sets of forecasting error evaluation criteria demonstrate that the six newly proposed forecasting models perform better than the original ones. Therefore, forecasting researchers can benefit greatly from data pre-processing and swarm intelligence optimization techniques, which allow for significant improvements in accuracy.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (Grant No. 11161041).

Author Contributions

Zhilong Wang and Chen Wang conceived and designed the experiments; Chen Wang performed the experiments; Zhilong Wang and Chen Wang analyzed the data; Zhilong Wang and Jie Wu wrote the paper and Chen Wang checked the whole paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The result of method of moments estimate, maximum likelihood estimate, least squares estimate, Bayesian prior estimate and Bayesian posterior estimate.
[125, 40][122.5, 40]
ParameterMMMLELSEBayesian PriorBayesian PosteriorMMMLELSEBayesian PriorBayesian Posterior
2009k8.76678.76848.76868.7758.76028.91658.91948.9178.9288.916
c2.4822.4822.45752.45232.50022.31132.31132.30212.26352.3293
MAE0.01150.00920.02060.0080.01450.01280.01570.01530.01770.0133
SSE0.19920.16490.21230.16960.19760.36350.03630.03090.03220.0224
RMSE0.01210.01220.02160.0110.01640.0180.01610.01390.0140.0127
R20.94390.94410.94190.9420.94470.95110.95140.95030.94660.9527
2010k8.52548.52678.5258.51628.51898.9068.90788.90628.90558.8965
c2.54322.54322.54842.53322.56072.47372.47372.47142.38152.4949
MAE0.01010.00860.01450.01090.01110.01230.00970.00780.00940.0106
SSE0.14720.1340.10830.15050.18510.19790.1660.16440.11160.1758
RMSE0.01160.02090.01930.02290.0210.01190.01090.01110.00920.011
R20.89050.89080.89070.88850.89020.90560.90580.90540.89230.9068
2011k8.66578.66788.66578.66888.65798.84818.858.84998.85628.8493
c2.41472.41472.41532.37542.4372.46172.46172.43962.42722.4713
MAE0.01560.01360.01220.01380.0130.01140.01180.01140.01070.0111
SSE0.27270.24120.27450.28720.30140.20650.18010.16460.16630.1287
RMSE0.0150.0120.01340.01310.01370.01250.0110.0120.00920.0084
R20.93860.93890.93860.9350.93930.96150.96170.96040.960.9621
2012k8.69998.70068.7078.71598.69748.74818.75078.74948.76278.7487
c2.65042.65042.58062.59142.65512.33962.33962.31282.28142.3555
MAE0.01710.01530.0130.01450.0140.01280.01140.01160.01390.0129
SSE0.35630.32370.2790.32630.3040.19980.20630.19570.2050.1392
RMSE0.01890.01780.01760.01460.01180.0140.01270.01180.01080.0123
R20.95220.95220.94510.94670.95250.94640.94660.94270.93860.9486
2013k9.11379.11559.11739.13039.11369.32649.32919.3289.34269.3066
c2.47282.47282.42822.40692.48292.35732.35732.33022.30992.3895
MAE0.0150.01960.01560.01720.0190.01220.01630.01510.01120.0154
SSE0.35430.3420.40210.32860.31460.33010.2210.25970.22080.1968
RMSE0.01960.01230.0160.01540.01960.01570.01560.0140.01240.0141
R20.96030.96040.95490.95270.96140.95340.95350.94940.94690.9563
First seasonk8.88128.88278.88288.88548.87418.78478.7868.78928.79878.7844
c2.51162.51162.49282.49212.52932.54632.54632.49592.47572.5485
MAE0.01490.01170.01380.01320.01060.01270.01080.01210.01040.0109
SSE0.35560.35370.22580.32410.24470.2960.24940.20220.15190.1361
RMSE0.01640.01050.01120.01260.01430.01180.01180.01320.01080.0095
R20.94310.94330.94210.94240.94320.96240.96250.95730.95520.9626
Second seasonk8.48948.49138.49128.4998.48969.19529.19799.19719.21359.1931
c2.44512.44512.41992.39632.44742.34632.34632.31142.29092.3632
MAE0.01380.01630.01590.01430.01420.01480.0150.01080.01080.0152
SSE0.41550.36220.27660.43150.24470.28650.25070.29780.19670.162
RMSE0.01620.01410.01020.01340.01440.01510.01230.01420.01210.0102
R20.96880.96890.96740.9660.9690.94160.94170.93610.93330.9439
Third seasonk8.33758.33968.33878.34788.333810.018110.020510.1910.027210.0131
c2.4042.4042.38422.36352.41172.41042.41042.39812.372.4253
MAE0.01850.01630.01540.02030.01470.01740.0150.01450.01620.0139
SSE0.53270.41340.29020.32520.31520.34960.3360.27310.17330.1516
RMSE0.02110.01230.00980.01250.01650.01610.0190.01740.01390.0138
R20.96540.96550.96380.96230.96580.9680.96820.9670.96460.9689
Fourth seasonk8.32368.32558.32578.33468.32369.86079.86349.86139.86919.8567
c2.44742.44742.41742.38682.45022.38622.38622.37912.33792.4014
MAE0.02040.02060.02140.02020.01530.02310.01480.01260.01640.0201
SSE0.57690.32470.32220.74050.37090.42120.2770.3230.23560.1386
RMSE0.02290.01930.01480.01770.01690.02260.01360.0160.01490.0143
R20.96230.96240.95940.95630.96250.95150.95170.95090.9470.9525
[125, 42.5][120, 40]
ParameterMMMLELSEBayesian PriorBayesian PosteriorMMMLELSEBayesian PriorBayesian Posterior
2009k9.39139.39399.39179.39979.37059.1169.11889.11689.12929.1008
c2.36952.36952.36372.31292.40192.33322.33322.31682.27472.3632
MAE0.00910.0090.00850.00830.01110.01290.01180.01020.01060.0144
SSE0.1530.14150.10980.13770.21810.22840.01920.01480.01580.0273
RMSE0.01050.010.00880.01140.01150.01430.01210.01130.01210.0164
R20.94830.94850.94760.94140.94960.95150.95170.94910.94250.9546
2010k9.37899.38169.38049.39379.36949.26119.26379.26219.27319.2589
c2.36312.36312.33982.32232.38572.36932.36932.35252.32252.3851
MAE0.01060.00980.01740.01970.01220.01350.01340.01320.01060.0123
SSE0.22110.01920.02190.03520.02280.26170.02030.02280.02070.026
RMSE0.01160.01030.01660.01660.0120.01550.01280.01340.01120.013
R20.95630.95650.95320.95130.95860.95210.95230.950.94650.9538
2011k9.44019.44189.4419.44279.42869.35739.35969.35769.36359.3551
c2.48782.48782.47682.45192.50892.40812.40812.40342.38432.4221
MAE0.00840.00810.00840.00880.00720.01220.01030.00990.00940.0114
SSE0.1350.13160.1160.13340.13180.19970.16840.19080.1640.1707
RMSE0.00910.00850.00920.00990.00850.01280.01270.01260.01240.0106
R20.94350.94370.94270.94060.94390.95230.95250.95220.95160.9527
2012k9.4749.4779.47529.4919.47189.55789.56029.56049.57549.5567
c2.31382.31382.28882.27022.33112.40412.40412.36722.33772.4165
MAE0.01150.01140.00970.00780.00970.01130.01230.01140.01240.0115
SSE0.16590.13680.12360.16780.12880.19550.17640.16770.19430.2051
RMSE0.01170.01130.00890.00890.0110.01530.01280.01360.01280.0139
R20.95010.95020.94650.94430.95230.94510.94520.93920.93460.9469
2013k9.91729.929.91839.93059.89089.81399.81689.8159.82819.8021
c2.36322.36322.34632.32962.39792.34612.34612.32762.30492.3697
MAE0.00880.01260.01010.01080.0110.01280.01330.0120.01560.0143
SSE0.17370.14480.16120.17980.18170.26520.19110.22270.19070.284
RMSE0.01290.00970.01450.01520.01070.01540.01130.01630.01390.0135
R20.90990.91010.90750.90560.91260.93520.93540.93280.93040.9371
First seasonk8.84838.85048.84948.85558.83738.50988.5118.5128.51298.5088
c2.42952.42952.4152.37592.45382.55052.55052.52582.52432.5537
MAE0.01080.00980.00760.00920.09090.01150.01040.00950.01110.0086
SSE0.2330.20750.13370.12050.21320.2560.19120.18540.20610.2218
RMSE0.01260.1190.0090.00860.10620.01340.01260.01160.01250.0107
R20.9380.93820.93620.9310.93970.95980.95990.95890.95890.9598
Second seasonk9.60439.60649.60559.61249.60058.5128.51428.51118.51238.506
c2.43572.43572.42022.38522.44972.39622.39622.40992.33192.4178
MAE0.00970.01140.00920.01060.12520.01420.00860.01180.01110.0108
SSE0.22890.23040.15750.15110.22890.27750.1940.24310.21490.2718
RMSE0.01310.1260.00940.00910.14310.01380.01080.01540.01140.0112
R20.960.96020.95840.95440.96120.9410.94120.94140.93670.941
Third seasonk10.602410.604510.604710.611810.56069.11389.11589.11449.11759.097
c2.48092.48092.45782.43532.52372.45232.45232.44522.41372.4805
MAE0.01520.01580.01110.00990.12360.01330.01030.01220.01590.0168
SSE0.26740.26760.14010.17550.26210.36130.23270.22060.26890.2856
RMSE0.01250.17260.01410.01410.14750.01520.01340.01870.01590.0137
R20.92410.92420.92080.91770.92680.94340.94360.94270.93920.9443
Fourth seasonk10.494510.497110.495310.503210.46748.99298.99468.99458.99878.9824
c2.40812.40812.39792.36092.44022.48982.48982.47092.46132.5114
MAE0.02230.01550.01220.01370.15560.0180.0160.01210.01870.0114
SSE0.29820.23650.18940.17140.30030.35410.22170.28470.30540.3246
RMSE0.01630.13470.0170.01420.18220.02140.01750.02060.01780.0137
R20.94040.94060.93930.93490.94170.94030.94050.93850.93790.9412

References

  1. Liu, F.J.; Chang, T.P. Validity analysis of maximum entropy distribution based on different moment constraints for wind energy assessment. Energy 2011, 36, 1820–1826. [Google Scholar] [CrossRef]
  2. Al-Yahyai, S.; Charabi, Y.; Al-Badi, A.; Gastli, A. Nested ensemble NWP approach for wind energy assessment. Renew. Energy 2012, 37, 150–160. [Google Scholar] [CrossRef]
  3. Wu, J.; Wang, J.; Chi, D. Wind energy potential assessment for the site of Inner Mongolia in China. Renew. Sust. Energ. Rev. 2013, 21, 215–228. [Google Scholar] [CrossRef]
  4. Jung, S.; Kwon, S.-D. Weighted error functions in artificial neural networks for improved wind energy potential estimation. Appl. Energ. 2013, 111, 778–790. [Google Scholar] [CrossRef]
  5. Boudia, S.M.; Benmansour, A.; Ghellai, N.; Benmedjahed, M.; Hellal, M.A.T. Temporal assessment of wind energy resource at four locations in Algerian Sahara. Energy Convers. Manag. 2013, 76, 654–664. [Google Scholar] [CrossRef]
  6. Quan, P.; Leephakpreeda, T. Assessment of wind energy potential for selecting wind turbines: An application to Thailand. Sustain. Energy Technol. Assess. 2015, 11, 17–26. [Google Scholar] [CrossRef]
  7. Siyal, S.H.; Mörtberg, U.; Mentis, D.; Welsch, M.; Babelon, I.; Howells, M. Wind energy assessment considering geographic and environmental restrictions in Sweden: A GIS-based approach. Energy 2015, 83, 447–461. [Google Scholar] [CrossRef]
  8. Liu, D.; Wang, J.; Wang, H. Short-term wind speed forecasting based on spectral clustering and optimised echo state networks. Renew. Energy 2015, 78, 599–608. [Google Scholar] [CrossRef]
  9. Hu, J.; Wang, J. Short-term wind speed prediction using empirical wavelet transform and Gaussian process regression. Energy 2015, 93, 1456–1466. [Google Scholar] [CrossRef]
  10. Lydia, M.; Kumar, S.S.; Selvakumar, A.I.; Kumar, G.E.P. Linear and non-linear autoregressive models for short-term wind speed Forecasting. Energy Convers. Manag. 2016, 112, 115–124. [Google Scholar] [CrossRef]
  11. Wang, J.; Qin, S.; Zhou, Q.; Jiang, H. Medium-term wind speeds forecasting utilizing hybrid models for three different sites in Xinjiang, China. Renew. Energy 2015, 76, 91–101. [Google Scholar] [CrossRef]
  12. Wang, Y.; Wang, J.; Wei, X. A hybrid wind speed forecasting model based on phase space reconstruction theory and Markov model: A case study of wind farms in northwest China. Energy 2015, 91, 556–572. [Google Scholar] [CrossRef]
  13. Wang, J.; Hu, J.; Ma, K.; Zhang, Y. A self-adaptive hybrid approach for wind speed forecasting. Renew. Energy 2015, 78, 374–385. [Google Scholar] [CrossRef]
  14. Shukur, O.B.; Lee, M.H. Daily wind speed forecasting through hybrid KF-ANN model based on ARIMA. Renew. Energy 2015, 76, 637–647. [Google Scholar] [CrossRef]
  15. Liu, H.; Tian, H.Q.; Liang, X.F.; Li, Y.F. Wind speed forecasting approach using secondary decomposition algorithm and Elman neural networks. Appl. Energy 2015, 157, 183–194. [Google Scholar] [CrossRef]
  16. Fei, S.W. A hybrid model of EMD and multiple-kernel RVR algorithm for wind speed prediction. Electr. Power Energy Syst. 2016, 78, 910–915. [Google Scholar] [CrossRef]
  17. Tuba, M.; Subotic, M.; Stanarevic, N. Modified cuckoo search algorithm for unconstrained optimization problems. In Proceedings of the European Computing Conference, Paris, France, 28–30 April 2011; pp. 263–268.
  18. Cheng, Y. The basic principle and applications of ACA. Pioneer. Sci. Technol. Mon. 2011, 4, 117–121. [Google Scholar]
  19. Wang, X.; Ni, J.; Wan, W. Research on the ant colony optimization algorithm with multi-population hierarchy evolution. In Advances in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; pp. 222–230. [Google Scholar]
  20. Alavi, K.O.; Mostafaeipour, A.; Goudarzi, N.; Jalilvand, M. Assessing different parameters estimation methods of Weibull distribution to compute wind power density. Energy Convers. Manag. 2016, 108, 322–335. [Google Scholar]
  21. Guo, Z.; Wu, J.; Lu, H.; Wang, J. A case study on a hybrid wind speed forecasting method using BP neural network. Knowl.-Based Syst. 2011, 24, 1048–1056. [Google Scholar] [CrossRef]
  22. Xun, L.; Xie, H. Wavelet neural networks based on Genetic algorithm. Comput. Digit. Eng. 2007, 35, 5–7. [Google Scholar]
  23. Lin, F.J.; Kung, Y.S.; Chen, S.Y.; Liu, Y.H. Recurrent wavelet-based Elman neural network control for multi-axis motion control stage using linear ultrasonic motors. IET Electri. Power Appl. 2010, 4, 314–332. [Google Scholar] [CrossRef]
  24. Mabrouk, A.B.; Abdallah, N.B.; Dhifaoui, Z. Wavelet decomposition and autoregressive model for time series prediction. Appl. Math. Comput. 2008, 199, 334–340. [Google Scholar] [CrossRef]
Figure 1. PDF fitting results in the single year from 2009 to 2013.
Figure 2. Seasonal PDF and whole five-year fitting results.
Figure 3. PDF fitting results obtained by three different de-noising methods for the four sites.
Figure 4. Three optimized neural networks.
Figure 5. The flowchart of this proposed integrated forecasting model.
Figure 6. Data division.
Figure 7. One-step-ahead forecasting results obtained by different models.
Figure 8. Two-step-ahead forecasting results obtained by different models.
Figure 9. Three-step-ahead forecasting results obtained by different models.
Table 1. Parameter estimation results in a single year from 2009 to 2013.
YearLocationWeibullFA-WeibullGA-WeibullCS-WeibullAC-Weibull
kckckckckc
2009[125, 40]8.76402.46398.83352.47208.67742.36108.79772.51638.66472.3911
[122.5, 40]8.90712.28208.94862.31028.95752.32718.90682.27279.06922.2952
[125, 42.5]9.37492.33439.34962.33959.49582.33349.23282.31379.55142.2632
[120, 40]9.10342.29759.06162.29249.06402.28249.07772.28489.07712.2464
2010[125, 40]8.51282.53688.42642.50828.53742.49318.41462.60558.46982.5977
[122.5, 40]8.87032.41508.80242.36898.99402.44918.84272.34338.83172.3450
[125, 42.5]9.37582.33849.41272.40189.43752.21869.18112.31379.27912.3050
[120, 40]9.25292.34079.30292.31459.21462.26429.21772.32219.26382.2973
2011[125, 40]8.65362.39008.50272.40638.79142.41588.76272.36358.48632.4199
[122.5, 40]8.84322.44078.94702.37918.69232.53848.70692.45218.77142.4255
[125, 42.5]9.42852.46549.41272.40189.43752.21869.18112.31379.27912.3050
[120, 40]9.35352.39339.34902.40159.44022.40689.27292.39809.47622.3849
2012[125, 40]8.70222.61918.87432.64298.71552.70558.65362.63168.58992.6867
[122.5, 40]8.74892.30778.80062.21358.64322.37338.68392.35438.73212.3135
[125, 42.5]9.47972.29129.61492.28559.67162.34849.38942.25599.32962.2571
[120, 40]9.55092.36819.55992.33189.64122.33509.42752.39739.54872.3346
2013[125, 40]9.10472.43389.36722.52258.96712.41089.00992.43299.06502.4348
[122.5, 40]9.32182.32889.29552.31559.29012.33959.38662.29209.37242.4228
[125, 42.5]9.91502.34289.61492.28559.67162.34849.38942.25599.32962.2571
[120, 40]9.80892.32119.76842.37839.89062.33879.98912.34799.44102.2976
Table 2. Assessment error results in a single year from 2009 to 2013.
YearMetricLocation [125, 40]Location [122.5, 40]
WeibullFA-WeibullGA-WeibullCS-WeibullAC-WeibullWeibullFA-WeibullGA-WeibullCS-WeibullAC-Weibull
2009MAE0.010.0080.01660.00790.01140.01270.01220.01240.0140.0117
SSE0.17520.16150.16970.16880.16650.28560.02830.02780.02740.02
RMSE0.0110.00960.01670.00950.01330.0140.01310.0130.01290.011
R20.95940.96890.96770.96890.9690.96380.96770.96790.96750.9693
2010MAE0.00890.00860.0140.00980.00950.010.00960.0070.0080.0094
SSE0.13660.11630.09770.13150.17510.17440.1620.14340.10810.1546
RMSE0.00970.0170.0150.02190.0210.01090.010.00940.00820.0098
R20.98120.98360.98370.98520.98240.97230.97350.9750.97580.9738
2011MAE0.00980.00810.00750.00740.00710.01550.01450.01520.01540.0156
SSE0.16840.01370.01020.00980.01020.43130.37070.40390.39450.3968
RMSE0.01070.01070.00930.00910.00930.01720.01530.01520.01680.0172
R20.97320.98640.98710.98650.98780.96120.9740.97260.97320.9726
2012MAE0.01120.01020.01010.00880.00980.01270.01220.01170.0110.0107
SSE0.21960.20120.17270.15680.19570.29390.26330.25090.27670.2603
RMSE0.01220.01070.01160.01090.01060.01420.01410.00980.01450.0089
R20.96110.96790.96890.96750.96770.96210.97190.9730.97020.973
2013MAE0.0120.01150.01090.01070.01060.00980.00910.00910.00960.0096
SSE0.25090.24040.24060.24770.24080.16910.14720.15840.13530.1193
RMSE0.01310.01140.01140.01090.01080.01080.00920.00950.00880.0083
R20.95880.96790.96720.96880.96850.95980.96290.96250.9630.9638
YearMetricLocation [125, 42.5]Location [120, 40]
WeibullFA-WeibullGA-WeibullCS-WeibullAC-WeibullWeibullFA-WeibullGA-WeibullCS-WeibullAC-Weibull
2009MAE0.00860.00770.00780.00790.00950.01040.01040.00810.00840.012
SSE0.12850.11810.10430.13520.18010.1890.01860.01390.01280.024
RMSE0.00940.00890.00840.00960.0110.01140.01130.00980.00940.0128
R20.96670.9770.97720.97720.97530.96030.96710.96780.9680.9671
2010MAE0.01040.00820.0170.01560.01110.01110.0110.01120.00990.0109
SSE0.18910.01790.01790.02920.0180.21850.01980.02260.01710.0205
RMSE0.01140.00850.01540.01380.01090.01220.01040.01110.00970.0106
R20.960.96890.96750.96810.96820.96290.97790.97830.97830.9779
2011MAE0.00870.0080.00810.00810.00870.01060.00860.01030.01140.0114
SSE0.13110.1150.11460.11450.11590.19680.16310.16370.16810.1598
RMSE0.00950.00890.0130.01120.00890.01160.00850.0110.01110.0108
R20.9720.97910.97770.97680.97770.97090.97920.97910.97870.979
2012MAE0.01170.010.0120.00750.00890.01180.010.01040.01050.0108
SSE0.24540.22020.20790.20740.21060.24590.2020.21430.2410.2395
RMSE0.01290.010.01180.00850.00980.0130.01170.01210.01290.0126
R20.95550.96770.96780.96880.96880.95270.96160.96120.95970.9606
2013MAE0.0080.00790.0070.00710.00710.00940.0090.0090.0090.009
SSE0.11410.10940.10070.11010.10750.16050.15730.14960.14870.165
RMSE0.00880.00820.00810.00820.00820.01050.01010.01010.01010.0102
R20.96950.97240.97080.97190.97110.96830.97570.97580.9770.9765
Table 3. Seasonal and whole five-year parameter estimation results.
YearLocationWeibullFA-WeibullGA-WeibullCS-WeibullAC-Weibull
kckckckckc
First season[125, 40]8.87832.49958.73462.46728.88992.48438.95042.53129.07392.5050
[122.5, 40]8.48102.41558.57732.46028.55042.47398.45472.30048.45652.4358
[125, 42.5]8.33252.37938.27842.33488.46302.36618.39522.40168.31892.4561
[120, 40]8.31272.41088.11712.36898.14272.39838.35422.41108.36122.5259
Second season[125, 40]8.87632.49878.98352.50089.00522.50248.85212.51848.88922.5043
[122.5, 40]8.47672.41648.48052.34228.53482.45478.47972.42538.49112.3732
[125, 42.5]8.32732.38148.42972.40168.41372.38798.37662.37818.27402.3765
[120, 40]8.30892.41128.40852.46538.25262.41108.29942.38598.26542.3733
Third season[125, 40]8.87582.49798.71432.41868.88302.55418.83172.56298.91942.5700
[122.5, 40]8.47562.41558.46512.41738.45802.42788.36142.39118.33122.4068
[125, 42.5]8.32532.38078.45352.49958.13922.30618.49792.41028.19192.2587
[120, 40]8.30712.41058.38582.32518.35932.38928.21102.35208.20302.3716
Fourth season[125, 40]8.50402.53438.55632.51388.46282.51098.48462.46978.59092.6049
[122.5, 40]8.48732.35488.62272.35958.31832.38398.66322.37418.46462.3547
[125, 42.5]9.10232.42828.99452.38969.28032.45119.09672.51148.99212.4545
[120, 40]8.98802.47229.00082.43568.93972.42699.00662.51599.07292.4519
Whole five year[125, 40]8.74592.47778.78032.51858.68522.45938.76372.45178.73092.5006
[122.5, 40]8.93562.34638.99272.35338.91142.34238.96252.39018.97162.3568
[125, 42.5]9.51352.34739.52862.38909.51932.35759.57602.39629.48492.3152
[120, 40]9.41312.33799.47442.34999.39312.29689.50492.35429.33082.3495
Table 4. Seasonal and whole five-year assessment error results.
Location [125, 40]:

| Period | Metric | Weibull | FA-Weibull | GA-Weibull | CS-Weibull | AC-Weibull |
|---|---|---|---|---|---|---|
| First season | MAE | 0.0100 | 0.0096 | 0.0071 | 0.0088 | 0.0093 |
| First season | SSE | 0.2175 | 0.0200 | 0.0192 | 0.0207 | 0.0205 |
| First season | RMSE | 0.0109 | 0.0100 | 0.0087 | 0.0073 | 0.0106 |
| First season | R² | 0.9734 | 0.9785 | 0.9781 | 0.9793 | 0.9790 |
| Second season | MAE | 0.0100 | 0.0091 | 0.0095 | 0.0091 | 0.0097 |
| Second season | SSE | 0.2177 | 0.1976 | 0.2981 | 0.1400 | 0.1428 |
| Second season | RMSE | 0.0109 | 0.0085 | 0.0104 | 0.0072 | 0.0072 |
| Second season | R² | 0.9733 | 0.9792 | 0.9789 | 0.9792 | 0.9790 |
| Third season | MAE | 0.0100 | 0.0897 | 0.0666 | 0.0651 | 0.0585 |
| Third season | SSE | 0.2176 | 0.2109 | 0.1801 | 0.1298 | 0.1574 |
| Third season | RMSE | 0.0109 | 0.0102 | 0.0094 | 0.0080 | 0.0088 |
| Third season | R² | 0.9733 | 0.9737 | 0.9734 | 0.9739 | 0.9738 |
| Fourth season | MAE | 0.0122 | 0.0114 | 0.0108 | 0.0115 | 0.0104 |
| Fourth season | SSE | 0.3312 | 0.2730 | 0.2068 | 0.2739 | 0.2360 |
| Fourth season | RMSE | 0.0135 | 0.0104 | 0.0090 | 0.0104 | 0.0115 |
| Fourth season | R² | 0.9691 | 0.9693 | 0.9697 | 0.9693 | 0.9703 |
| Whole five years | MAE | 0.0131 | 0.0114 | 0.0129 | 0.0115 | 0.0106 |
| Whole five years | SSE | 1.4982 | 1.4616 | 1.3454 | 1.2164 | 1.1307 |
| Whole five years | RMSE | 0.0143 | 0.0140 | 0.0134 | 0.0127 | 0.0123 |
| Whole five years | R² | 0.9645 | 0.9785 | 0.9786 | 0.9784 | 0.9786 |

Location [122.5, 40]:

| Period | Metric | Weibull | FA-Weibull | GA-Weibull | CS-Weibull | AC-Weibull |
|---|---|---|---|---|---|---|
| First season | MAE | 0.0171 | 0.0162 | 0.0165 | 0.0164 | 0.0167 |
| First season | SSE | 0.6410 | 0.6625 | 0.6353 | 0.6556 | 0.6049 |
| First season | RMSE | 0.0187 | 0.0151 | 0.0176 | 0.0170 | 0.0174 |
| First season | R² | 0.9572 | 0.9635 | 0.9612 | 0.9627 | 0.9630 |
| Second season | MAE | 0.0172 | 0.0127 | 0.0109 | 0.0102 | 0.0167 |
| Second season | SSE | 0.6421 | 0.5993 | 0.3500 | 0.4908 | 0.6130 |
| Second season | RMSE | 0.0188 | 0.0167 | 0.0127 | 0.0151 | 0.0147 |
| Second season | R² | 0.9572 | 0.9609 | 0.9613 | 0.9605 | 0.9580 |
| Third season | MAE | 0.0172 | 0.0160 | 0.0128 | 0.0149 | 0.0159 |
| Third season | SSE | 0.6423 | 0.0626 | 0.0454 | 0.0481 | 0.0580 |
| Third season | RMSE | 0.0188 | 0.0169 | 0.0144 | 0.0148 | 0.0163 |
| Third season | R² | 0.9571 | 0.9619 | 0.9627 | 0.9636 | 0.9634 |
| Fourth season | MAE | 0.0105 | 0.0097 | 0.0095 | 0.0095 | 0.0100 |
| Fourth season | SSE | 0.2397 | 0.1982 | 0.1979 | 0.1291 | 0.1055 |
| Fourth season | RMSE | 0.0115 | 0.0105 | 0.0105 | 0.0085 | 0.0077 |
| Fourth season | R² | 0.9770 | 0.9813 | 0.9825 | 0.9832 | 0.9826 |
| Whole five years | MAE | 0.0150 | 0.0141 | 0.0130 | 0.0116 | 0.0198 |
| Whole five years | SSE | 1.9688 | 1.7822 | 1.5466 | 1.6229 | 1.7183 |
| Whole five years | RMSE | 0.0164 | 0.0153 | 0.0118 | 0.0134 | 0.0183 |
| Whole five years | R² | 0.9568 | 0.9688 | 0.9693 | 0.9694 | 0.9681 |

Location [125, 42.5]:

| Period | Metric | Weibull | FA-Weibull | GA-Weibull | CS-Weibull | AC-Weibull |
|---|---|---|---|---|---|---|
| First season | MAE | 0.0126 | 0.0126 | 0.0122 | 0.0110 | 0.0087 |
| First season | SSE | 0.3507 | 0.0288 | 0.0323 | 0.0234 | 0.0186 |
| First season | RMSE | 0.0139 | 0.0109 | 0.0116 | 0.0098 | 0.0088 |
| First season | R² | 0.9644 | 0.9688 | 0.9679 | 0.9684 | 0.9690 |
| Second season | MAE | 0.0127 | 0.0114 | 0.0118 | 0.0096 | 0.0125 |
| Second season | SSE | 0.3517 | 0.0308 | 0.0367 | 0.0319 | 0.0333 |
| Second season | RMSE | 0.0139 | 0.0115 | 0.0126 | 0.0117 | 0.0120 |
| Second season | R² | 0.9645 | 0.9670 | 0.9664 | 0.9665 | 0.9665 |
| Third season | MAE | 0.0127 | 0.0113 | 0.0141 | 0.0098 | 0.0135 |
| Third season | SSE | 0.3521 | 0.3095 | 0.3336 | 0.2939 | 0.3497 |
| Third season | RMSE | 0.0139 | 0.0108 | 0.0128 | 0.0088 | 0.0130 |
| Third season | R² | 0.9644 | 0.9669 | 0.9666 | 0.9673 | 0.9660 |
| Fourth season | MAE | 0.0091 | 0.0089 | 0.0074 | 0.0075 | 0.0881 |
| Fourth season | SSE | 0.1803 | 0.1769 | 0.1031 | 0.1033 | 0.1717 |
| Fourth season | RMSE | 0.0099 | 0.0962 | 0.0081 | 0.0081 | 0.0946 |
| Fourth season | R² | 0.9712 | 0.9760 | 0.9786 | 0.9784 | 0.9762 |
| Whole five years | MAE | 0.0116 | 0.0112 | 0.0101 | 0.0118 | 0.0143 |
| Whole five years | SSE | 1.1793 | 1.6723 | 1.3886 | 1.7188 | 1.6658 |
| Whole five years | RMSE | 0.0127 | 0.0126 | 0.0115 | 0.0128 | 0.0159 |
| Whole five years | R² | 0.9608 | 0.9692 | 0.9692 | 0.9693 | 0.9687 |

Location [120, 40]:

| Period | Metric | Weibull | FA-Weibull | GA-Weibull | CS-Weibull | AC-Weibull |
|---|---|---|---|---|---|---|
| First season | MAE | 0.0151 | 0.0133 | 0.0114 | 0.0118 | 0.0072 |
| First season | SSE | 0.4995 | 0.4634 | 0.3285 | 0.3661 | 0.2851 |
| First season | RMSE | 0.0165 | 0.0159 | 0.0134 | 0.0142 | 0.0125 |
| First season | R² | 0.9589 | 0.9652 | 0.9658 | 0.9631 | 0.9660 |
| Second season | MAE | 0.0151 | 0.0138 | 0.0147 | 0.0131 | 0.0149 |
| Second season | SSE | 0.5005 | 0.4468 | 0.4697 | 0.3217 | 0.4802 |
| Second season | RMSE | 0.0166 | 0.0127 | 0.0155 | 0.0107 | 0.0147 |
| Second season | R² | 0.9589 | 0.9631 | 0.9613 | 0.9631 | 0.9624 |
| Third season | MAE | 0.0151 | 0.0115 | 0.0080 | 0.0139 | 0.0109 |
| Third season | SSE | 0.5009 | 0.4929 | 0.4391 | 0.4979 | 0.4929 |
| Third season | RMSE | 0.0166 | 0.0162 | 0.0113 | 0.0165 | 0.0169 |
| Third season | R² | 0.9588 | 0.9618 | 0.9631 | 0.9596 | 0.9617 |
| Fourth season | MAE | 0.0096 | 0.0084 | 0.0084 | 0.0087 | 0.0085 |
| Fourth season | SSE | 0.2020 | 0.1568 | 0.1570 | 0.1701 | 0.1853 |
| Fourth season | RMSE | 0.0105 | 0.0099 | 0.0099 | 0.0101 | 0.0101 |
| Fourth season | R² | 0.9712 | 0.9796 | 0.9791 | 0.9779 | 0.9785 |
| Whole five years | MAE | 0.0120 | 0.0103 | 0.0130 | 0.0104 | 0.0124 |
| Whole five years | SSE | 1.2646 | 0.9585 | 1.3108 | 0.9469 | 1.2449 |
| Whole five years | RMSE | 0.0132 | 0.0119 | 0.0139 | 0.0118 | 0.0136 |
| Whole five years | R² | 0.9602 | 0.9688 | 0.9687 | 0.9690 | 0.9687 |
Table 5. Assessment results of each de-noising wind speed time series.
| Location | Model | k | c | MAE | SSE | RMSE | R² |
|---|---|---|---|---|---|---|---|
| [125, 40] | EEMD-Weibull | 8.6156 | 3.3543 | 0.0044 | 0.2787 | 0.0062 | 0.9861 |
| [125, 40] | SSA-Weibull | 8.6133 | 3.5266 | 0.0048 | 0.3299 | 0.0067 | 0.9870 |
| [125, 40] | WD-Weibull | 8.7345 | 2.2940 | 0.0043 | 0.3142 | 0.0061 | 0.9897 |
| [122.5, 40] | EEMD-Weibull | 8.8592 | 3.4234 | 0.0053 | 0.4161 | 0.0075 | 0.9767 |
| [122.5, 40] | SSA-Weibull | 8.8369 | 3.2418 | 0.0046 | 0.3233 | 0.0067 | 0.9826 |
| [122.5, 40] | WD-Weibull | 8.9252 | 2.1892 | 0.0045 | 0.3133 | 0.0063 | 0.9857 |
| [125, 42.5] | EEMD-Weibull | 9.4236 | 3.1842 | 0.0042 | 0.2500 | 0.0059 | 0.9879 |
| [125, 42.5] | SSA-Weibull | 9.4138 | 3.1809 | 0.0041 | 0.2423 | 0.0058 | 0.9876 |
| [125, 42.5] | WD-Weibull | 9.5075 | 2.2305 | 0.0041 | 0.2454 | 0.0059 | 0.9899 |
| [120, 40] | EEMD-Weibull | 9.2853 | 3.1982 | 0.0043 | 0.2559 | 0.0059 | 0.9884 |
| [120, 40] | SSA-Weibull | 9.3002 | 3.2783 | 0.0044 | 0.2762 | 0.0061 | 0.9885 |
| [120, 40] | WD-Weibull | 9.4049 | 2.2173 | 0.0045 | 0.2463 | 0.0009 | 0.9898 |
Table 6. Related parameter initialization values in the neural networks.
| Parameter | WD-CS-ENN | WD-AC-ENN | WD-CS-BPNN | WD-AC-BPNN | WD-CS-WNN | WD-AC-WNN |
|---|---|---|---|---|---|---|
| Number of input neurons (Ni) | 3 | 4 | 5 | 5 | 5 | 3 |
| Number of hidden-layer neurons (Nj) | 16 | 22 | 15 | 16 | 19 | 20 |
| Number of output neurons (Nk) | 1 | 1 | 1 | 1 | 1 | 1 |
| Maximum number of iterations | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
| Learning rate | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
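To make the structures configured by Table 6 concrete, the sketch below builds a three-layer feed-forward network with the sizes listed in the WD-CS-BPNN column (Ni = 5, Nj = 15, Nk = 1). The random weights here are placeholders; in the paper these are the quantities tuned by the CS or AC algorithm before back-propagation training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the WD-CS-BPNN column of Table 6:
# 5 input neurons, 15 hidden neurons, 1 output neuron.
Ni, Nj, Nk = 5, 15, 1

# Random placeholder weights (untrained, for structure only).
W1 = rng.standard_normal((Nj, Ni)) * 0.1
b1 = np.zeros(Nj)
W2 = rng.standard_normal((Nk, Nj)) * 0.1
b2 = np.zeros(Nk)

def forward(x):
    """One forward pass: sigmoid hidden layer, linear output."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

# Forecast the next value from the previous five wind speeds (m/s);
# the input values are hypothetical.
x = np.array([6.2, 6.8, 7.1, 6.9, 7.4])
y = forward(x)  # untrained, so the value is illustrative only
```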
Table 7. Forecasting error values of each model.
| Horizon | Criterion | ENN | BPNN | WNN | WD-ENN | WD-BPNN | WD-WNN | WD-CS-ENN | WD-CS-BPNN | WD-CS-WNN | WD-AC-ENN | WD-AC-BPNN | WD-AC-WNN |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| One-step-ahead | MAE | 0.6387 | 0.5164 | 0.5424 | 0.5579 | 0.4067 | 0.2769 | 0.2842 | 0.2681 | 0.2168 | 0.3612 | 0.2845 | 0.3131 |
| One-step-ahead | MSE | 0.6951 | 0.4561 | 0.5503 | 0.5554 | 0.2913 | 0.1484 | 0.1545 | 0.1376 | 0.0851 | 0.2203 | 0.1636 | 0.1755 |
| One-step-ahead | MAPE | 0.0961 | 0.0770 | 0.0788 | 0.0832 | 0.0619 | 0.0593 | 0.0402 | 0.0379 | 0.0383 | 0.0534 | 0.0361 | 0.0371 |
| Two-steps-ahead | MAE | 0.6941 | 0.5360 | 0.5431 | 0.6405 | 0.4084 | 0.3622 | 0.3037 | 0.2844 | 0.2370 | 0.3793 | 0.3408 | 0.3489 |
| Two-steps-ahead | MSE | 0.8167 | 0.4987 | 0.5335 | 0.7155 | 0.4541 | 0.4546 | 0.5060 | 0.4585 | 0.4557 | 0.4895 | 0.4399 | 0.4469 |
| Two-steps-ahead | MAPE | 0.1038 | 0.0790 | 0.0792 | 0.0953 | 0.0698 | 0.0646 | 0.0744 | 0.0698 | 0.0634 | 0.0728 | 0.0682 | 0.0684 |
| Three-steps-ahead | MAE | 0.7199 | 0.5535 | 0.5814 | 0.6815 | 0.4620 | 0.5285 | 0.3556 | 0.3192 | 0.3153 | 0.3553 | 0.2624 | 0.2850 |
| Three-steps-ahead | MSE | 0.9084 | 0.7310 | 0.7546 | 0.8149 | 0.7046 | 0.6995 | 0.6527 | 0.6042 | 0.6059 | 0.2117 | 0.1310 | 0.1569 |
| Three-steps-ahead | MAPE | 0.1065 | 0.0818 | 0.0850 | 0.1007 | 0.0786 | 0.0755 | 0.0845 | 0.0792 | 0.0781 | 0.0838 | 0.0677 | 0.0704 |
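The three criteria in Table 7 are standard point-forecast errors; note that the MAPE values in the table are reported as fractions (e.g., 0.0961 rather than 9.61%). A sketch of the conventional definitions, assuming paired vectors of actual and forecast wind speeds:

```python
def forecast_errors(actual, predicted):
    """MAE, MSE and MAPE as used in Table 7 (MAPE as a fraction)."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mape = sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / n
    return mae, mse, mape

# Example with hypothetical wind speeds (m/s)
mae, mse, mape = forecast_errors([10.0, 20.0], [9.0, 22.0])
# mae = 1.5, mse = 2.5, mape = 0.1
```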