Article

Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm

1
School of Information, Zhejiang University of Finance & Economics, Hangzhou 310018, China
2
School of Economics & Management, Nanjing Tech University, Nanjing 211800, China
3
Department of Information Management, Oriental Institute of Technology, 58 Sec. 2, Sichuan Road, Panchiao, Taipei 220, Taiwan
*
Author to whom correspondence should be addressed.
Energies 2016, 9(2), 70; https://doi.org/10.3390/en9020070
Submission received: 19 October 2015 / Revised: 8 December 2015 / Accepted: 21 January 2016 / Published: 26 January 2016

Abstract: This paper proposes a new electric load forecasting model that hybridizes fuzzy time series (FTS) and the global harmony search algorithm (GHSA) with least squares support vector machines (LSSVM), namely the GHSA-FTS-LSSVM model. Firstly, the fuzzy c-means (FCM) clustering algorithm is used to calculate the clustering center of each cluster. Secondly, the LSSVM is applied to model the resultant series, and its parameters are optimized by GHSA. Finally, a real-world example is adopted to test the performance of the proposed model. In this investigation, the proposed model is verified using experimental datasets from the Guangdong Province Industrial Development Database, and the results are compared against the autoregressive integrated moving average (ARIMA) model and other algorithms hybridized with LSSVM, including the genetic algorithm (GA), particle swarm optimization (PSO), and harmony search. The forecasting results indicate that the proposed GHSA-FTS-LSSVM model generates more accurate predictions.

1. Introduction

Load forecasting plays an important role in electric system planning and operation. In recent years, many researchers have studied the load forecasting problem and developed a variety of load forecasting methods. Load forecasting algorithms can be divided into three major categories: traditional methods, modern intelligent methods, and hybrid algorithms [1]. The traditional methods [1,2] mainly include autoregressive (AR), autoregressive moving average (ARMA) [3], autoregressive integrated moving average (ARIMA) [4], semi-parametric [5], grey model [6,7], similar-day [8], and Kalman filtering [9] approaches. Due to the theoretical limitations of the algorithms themselves, it is difficult to improve forecasting accuracy with these approaches. For example, the ARIMA model cannot capture the rapidly changing processes underlying the electric load from historical data patterns, the Kalman filter model cannot avoid observation noise, and the forecasting accuracy of the grey model decreases as the degree of discretization of the data increases.
The intelligent methods mainly include the artificial neural network (ANN) [10], fuzzy systems [11], the knowledge-based expert system (KBES) approach [12], wavelet analysis [13], the support vector machine (SVM) [14], and so on. A knowledge-based expert system combines the knowledge and experience of numerous experts to maximize the experts' ability, but the method has no self-learning ability. Besides, KBES is limited by the total amount of knowledge stored in its database, and it has difficulty handling any sudden change in conditions [15]. The ANN has the ability of nonlinear approximation, self-learning, parallel processing, and high adaptability; however, it also has some problems, such as the difficulty of choosing parameters and high computational complexity. The SVM is a machine learning method proposed by Cortes and Vapnik [16], based on the principle of structural risk minimization (SRM) in statistical learning theory. Practical problems such as small samples, nonlinearity, high dimensionality, and local minima can be handled by the SVM by solving a convex quadratic programming (QP) problem. However, the traditional SVM also has shortcomings: it cannot determine the input variables effectively and reasonably, and it converges slowly and forecasts poorly on time series with strong random fluctuations. Compared with the SVM, the least squares support vector machine (LSSVM), proposed by Suykens and Vandewalle [17], is an improved model of the original SVM: it uses equality constraints instead of the inequality constraints in the standard SVM, and it solves a set of linear equations instead of a QP problem [13]. LSSVM has been widely applied to forecasting problems in many fields, such as stock index forecasting [18], credit rating forecasting [19], GPRS traffic forecasting [20], tax forecasting [21], and prevailing wind direction forecasting [22].
Fuzzy time series (FTS), as a significant quantitative forecasting model, has been broadly applied in electric load forecasting, and much of the literature has focused on FTS-related issues that are also involved in this paper [23,24,25,26,27]. Lee and Hong [23] proposed new FTS approaches for electric power load forecasting. Efendi et al. [24] discussed the fuzzy logical relationships used to determine the electric load forecast in FTS modeling. Sadaei et al. [26] presented an enhanced hybrid method based on a sophisticated exponentially weighted fuzzy algorithm to forecast short-term load. FTS is often combined with other models for forecasting; for example, a method for forecasting the TAIEX based on FTS and SVMs is presented in [28].
In addition, various optimization algorithms are widely employed in LSSVM to improve its search performance, such as the genetic algorithm (GA) [29], particle swarm optimization (PSO) [30], the harmony search algorithm (HSA), and the artificial bee colony algorithm (ABC) [31]. All of these optimization methods improve the efficiency of the model in some way. Although a single forecasting method can improve the forecasting accuracy in some respects, it is difficult for it to yield the desired accuracy in all electric load forecasting cases. Thus, by hybridizing two or more approaches, a hybrid model can combine the merits of its components, as proposed by several researchers. A hybrid forecasting method, namely ESPLSSVM, based on empirical mode decomposition, seasonal adjustment, PSO, and the LSSVM model is proposed in [32]. Hybridizing support vector regression (SVR) with chaotic sequences and evolutionary algorithms avoids solutions becoming trapped in a local optimum and successfully improves forecasting accuracy [33]. Ghofrani et al. [34] proposed a hybrid forecasting framework that applies a new data preprocessing algorithm with time series and regression analysis to enhance the forecasting accuracy of a Bayesian neural network (BNN). A hybrid algorithm based on a refined high-order weighted fuzzy algorithm and the imperialist competitive algorithm (RHWFTS-ICA) has also been developed [35]. In this paper, the global harmony search algorithm (GHSA) is hybridized with LSSVM to optimize the parameters of LSSVM.
The rest of this paper consists of three sections: the proposed GHSA-FTS-LSSVM method, including the FTS model, the fuzzy c-means (FCM) clustering algorithm, GA, global harmony search, and least squares SVM, is introduced in Section 2; a numerical example is illustrated in Section 3; and conclusions are discussed in Section 4.

2. Methodology of Global Harmony Search Algorithm-Fuzzy Time Series-Least Squares Support Vector Machines Model

2.1. Least Squares Support Vector Machine Model

LSSVM is a kind of supervised learning model that is widely used in both classification problems and regression analysis. Compared with the SVM model, LSSVM finds the solution by solving a set of linear equations, whereas the classical SVM needs to solve a convex QP problem. For the regression problem, given a training data set $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$ with $x_i \in \mathbb{R}^n$ and $y_i \in \mathbb{R}$, the regression function in the feature space is given by Equation (1):
$$y(x) = w^T \varphi(x)$$
where w refers to the weight vector and φ(x) is a nonlinear mapping from the input space to the feature space. Structural risk minimization is then used to formulate the following optimization problem for the function estimation, as in Equation (2):
$$\min:\ \frac{1}{2}\|w\|^2 + \frac{1}{2}\gamma \sum_{i=1}^{n} \varepsilon_i^2 \qquad \text{subject to: } y_i = w^T \varphi(x_i) + b + \varepsilon_i,\quad i = 1, 2, \ldots, n$$
where γ refers to the regularization constant, ε_i to the error variable at time i, and b to the bias term.
Define the Lagrange function as Equation (3):
$$L(w, b, \varepsilon, \alpha) = \frac{1}{2}\|w\|^2 + \frac{1}{2}\gamma \sum_{i=1}^{n} \varepsilon_i^2 - \sum_{i=1}^{n} \alpha_i \left\{ w^T \varphi(x_i) + b + \varepsilon_i - y_i \right\}$$
where α i is the Lagrange multiplier.
Setting the partial derivatives of the Lagrange function to zero and introducing the kernel function, the final nonlinear function estimate of LSSVM can be written as Equation (4):
$$f(X) = \sum_{i=1}^{n} \alpha_i K(X, X_i) + b$$
As for the selection of the kernel function, this paper uses the Gaussian radial basis function (RBF), which is well suited to nonlinear regression problems. The RBF can be expressed as Equation (5):
$$K(X, X_i) = \exp\left( -\frac{\|X - X_i\|^2}{2\sigma^2} \right)$$
From the above description, we can see that the selection of the regularization constant γ and the Gaussian kernel parameter σ has a significant influence on the learning and generalization ability of LSSVM. However, the LSSVM model itself provides no suitable method to select these parameters, so we employ the global harmony search algorithm to select them adaptively.
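In the LSSVM dual, Equations (2)–(4) reduce to a single linear system in the bias b and the multipliers α. The following NumPy sketch solves that system with the RBF kernel of Equation (5); all function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian RBF, Eq. (5): K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # LSSVM dual: solve the linear system
    #   [ 0   1^T         ] [b]   [0]
    #   [ 1   K + I/gamma ] [a] = [y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, multipliers alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    # Eq. (4): f(X) = sum_i alpha_i K(X, X_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

The ridge term I/γ plays the role of the regularization constant: a larger γ fits the training data more tightly.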

2.2. Global Harmony Search Algorithm in Parameters Determination of Least Squares Support Vector Machines Model

In music improvisation, musicians search for a perfect state of harmony by repeatedly adjusting the pitch of their instruments. Inspired by this phenomenon, the HSA [36,37] was proposed by Geem et al. [36] as a new intelligent optimization search algorithm. However, every candidate solution in the fundamental HSA is independent of the others, with no information-sharing mechanism, and this characteristic limits the efficiency of the algorithm. Lin and Li [38] developed a GHSA that borrows concepts from swarm intelligence to enhance its performance [39]. The GHSA procedure is illustrated as follows, and the corresponding flowchart is shown in Figure 1.
Step 1: Define the objective function and initialize parameters.
Firstly, f(x) is the objective function of the problem, where x is a candidate solution consisting of N decision variables x_i with LB_i ≤ x_i ≤ UB_i; LB_i and UB_i are the lower and upper bounds of each variable. The remaining parameters used in GHSA are also initialized in this step.
Step 2: Initialize the harmony memory.
The initialization process is as follows:
  • Randomly generate a harmony memory of size 2 × HMS from a uniform distribution over the range [LB_i, UB_i] (i = 1, 2, ..., N).
  • Calculate the fitness of each candidate solution in the harmony memory and sort the results in ascending order.
  • The harmony memory is then given by [x_1, x_2, ..., x_HMS].
Step 3: Improvisation.
The purpose of this step is to generate a new harmony. The new harmony vector X' = {x'_1, x'_2, ..., x'_n} is generated by the following rules:
Firstly, randomly generate r_1, r_2 from a uniform distribution over the range [0, 1].
  • If r_1 < HMCR and r_2 ≥ PAR, then x'_i takes the i-th component of a randomly chosen harmony in the harmony memory.
  • If r_1 < HMCR and r_2 < PAR, then x'_i = Rnd(x_i^gBest − bw, x_i^gBest + bw), where bw is an arbitrary distance bandwidth (BW) and x_i^gBest is the i-th dimension of the best candidate solution.
  • If r_1 ≥ HMCR, then x'_i = Rnd(LB_i, UB_i).
Step 4: Update harmony memory.
If the fitness of the new harmony vector is better than that of the worst harmony, it will take the place of the worst harmony in the HM.
Step 5: Check the stopping criterion.
Terminate when the maximum number of iterations is reached.
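Steps 1–5 can be sketched as follows. This is a minimal Python sketch of the update rules above (memory consideration, global-best pitch adjustment, random re-initialisation, worst-replacement); all names are illustrative.

```python
import random

def ghsa(f, bounds, hms=20, hmcr=0.9, par=0.5, bw=0.05, iters=2000, seed=0):
    # Global harmony search: minimise f over the box `bounds`.
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        best = hm[min(range(hms), key=fit.__getitem__)]
        new = []
        for i, (lo, hi) in enumerate(bounds):
            r1, r2 = rng.random(), rng.random()
            if r1 < hmcr and r2 < par:
                # pitch adjustment around the global best harmony
                v = rng.uniform(best[i] - bw, best[i] + bw)
            elif r1 < hmcr:
                # memory consideration: copy from a random harmony
                v = hm[rng.randrange(hms)][i]
            else:
                # random re-initialisation within the bounds
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=fit.__getitem__)
        fn = f(new)
        if fn < fit[worst]:  # Step 4: replace the worst harmony
            hm[worst], fit[worst] = new, fn
    return hm[min(range(hms), key=fit.__getitem__)]
```

For example, `ghsa(lambda x: x[0]**2 + x[1]**2, [(-5, 5), (-5, 5)])` drives the objective close to its minimum at the origin.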

2.3. Fuzzy Time Series Generation

This paper builds the FCM model by using the fuzzy c-means clustering algorithm with GA to process the raw data and generate the FTS. The flowchart is shown in Figure 1. Firstly, the number of clusters k is computed as the initial value. Secondly, the clustering centers are obtained once the stopping criteria of the algorithm are reached. Finally, the time-series fuzzy memberships are determined.

2.3.1. Fuzzy Time Series Model

A FTS is defined [40,41] as follows:
Definition 1: Let Y(t) (t = 0, 1, 2, ...), a subset of the real numbers, be the universe of discourse on which fuzzy sets f_i (i = 1, 2, ..., n) are defined. If F(t) is a collection of f_1, f_2, ..., then F(t) represents a FTS on Y(t).
Definition 2: If F(t) is caused by F(t−1) only, the FTS relationship can be expressed as F(t−1) → F(t). Let F(t−1) = A_i and F(t) = A_j; then the relationship between F(t−1) and F(t), referred to as a fuzzy logical relationship, can be denoted by A_i → A_j.
We present the general definitions of FTS as follows:
Suppose U is divided into n subsets, U = {u_1, u_2, ..., u_n}. Then a fuzzy set A on the universe of discourse U can be expressed as Equation (6):
$$A = \frac{f_A(u_1)}{u_1} + \frac{f_A(u_2)}{u_2} + \cdots + \frac{f_A(u_n)}{u_n}$$
where f_A(u_i) denotes the degree of membership of u_i in A, with f_A(u_i) ∈ [0, 1].

2.3.2. Fuzzy C-Means Clustering Algorithm

Fuzzy c-means (FCM) [42] is a common clustering algorithm that allows one piece of data to belong to multiple clusters. Let X = {x_j | j = 1, 2, ..., n} be the observation data set and C = {c_i | i = 1, 2, ..., k} the set of cluster centers. The result of fuzzy clustering is expressed by the membership matrix U = {u_ij | i = 1, 2, ..., k; j = 1, 2, ..., n}, where u_ij ∈ [0, 1] and u_ij is also constrained by Equations (7) and (8).
Figure 1. Global harmony search algorithm-fuzzy time series-least squares support vector machines (GHSA-FTS-LSSVM) algorithm flowchart.
$$\sum_{j=1}^{n} u_{ij} \in (0, n)$$
$$\sum_{i=1}^{k} u_{ij} = 1$$
The objective function of FCM can be expressed as Equation (9):
$$J(u_{ij}, c_i) = \sum_{i=1}^{k} \sum_{j=1}^{n} u_{ij}^{\,m}\, \|x_j - c_i\|^2 \qquad (m > 1)$$
The cluster centers and the membership functions U are calculated by Equations (10) and (11):
$$c_i = \frac{\sum_{j=1}^{n} (u_{ij})^m \cdot x_j}{\sum_{j=1}^{n} (u_{ij})^m}$$
$$U(x_j, c_i) = u_{ij} = \frac{\|x_j - c_i\|^{-2/(m-1)}}{\sum_{l=1}^{k} \|x_j - c_l\|^{-2/(m-1)}}$$
where m is a real number greater than 1 called the weight index, u_ij represents the membership of x_j in the i-th cluster, and ‖x_j − c_i‖ refers to the Euclidean distance between the observation x_j and the fuzzy cluster center c_i.
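The alternating updates of Equations (10) and (11) can be sketched as follows. This is a minimal NumPy sketch; the function name, the small epsilon guarding against division by zero, and the convergence test on U are illustrative choices.

```python
import numpy as np

def fcm(X, k, m=2.0, max_iter=200, min_change=1e-7, seed=0):
    # Fuzzy c-means: alternate the centre update, Eq. (10), and the
    # membership update, Eq. (11), until U stabilises.
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((k, n))
    U /= U.sum(axis=0)  # enforce Eq. (8): sum_i u_ij = 1
    for _ in range(max_iter):
        Um = U ** m
        c = (Um @ X) / Um.sum(axis=1, keepdims=True)      # Eq. (10)
        d2 = ((X[None, :, :] - c[:, None, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=0)                     # Eq. (11)
        if np.abs(U_new - U).max() < min_change:
            U = U_new
            break
        U = U_new
    return c, U
```

With two well-separated clusters of points, the returned centers land near the cluster means and each column of U concentrates its mass on one cluster.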

3. Numerical Example

3.1. Data Set

The experiment employs electric load data from the Guangdong Province Industrial Development Database to compare the forecasting performance of the proposed GHSA-FTS-LSSVM model with that of the GHSA-LSSVM, GA-LSSVM, PSO-LSSVM, and ARIMA models. The detailed data used in this paper are shown in Table 1. The electric load data from January 2011 to December 2013 were used for model fitting and training, and the data from April to December 2014 were used for forecasting.
Table 1. Monthly electric load in Guangdong Province from January 2011 to November 2014 (unit: thousand million W/h).
Date | Load | Date | Load | Date | Load
January 2011 | 284.1 | May 2012 | 351.6 | September 2013 | 372.3
February 2011 | 263.2 | June 2012 | 353.1 | October 2013 | 375.6
March 2011 | 339.8 | July 2012 | 386.5 | November 2013 | 386.4
April 2011 | 325.7 | August 2012 | 376.1 | December 2013 | 410.9
May 2011 | 336.2 | September 2012 | 338 | January 2014 | 384.5
June 2011 | 341 | October 2012 | 343 | February 2014 | 322.1
July 2011 | 371.7 | November 2012 | 356.1 | March 2014 | 389.2
August 2011 | 366.4 | December 2012 | 362.4 | April 2014 | 373.3
September 2011 | 329.8 | January 2013 | 331 | May 2014 | 387.6
October 2011 | 326.9 | February 2013 | 278.1 | June 2014 | 393.4
November 2011 | 331.4 | March 2013 | 368.3 | July 2014 | 429.8
December 2011 | 362.3 | April 2013 | 357.2 | August 2014 | 416.7
January 2012 | 341.5 | May 2013 | 368.1 | September 2014 | 379.9
February 2012 | 328.3 | June 2013 | 373.3 | October 2014 | 385.3
March 2012 | 358.7 | July 2013 | 419.4 | November 2014 | 398.2
April 2012 | 335.2 | August 2013 | 426.6 | December 2014 | 374.8
The procedure of data preprocessing is illustrated as follows:
Step 1: Data normalization
Before FCM, we normalized the original data by Equation (12):
$$X(i) = \frac{T(i) - T_{\min}}{T_{\max} - T_{\min}}$$
where T(i) (i = 1, 2, ..., n) is the time series of n observations, T_min and T_max refer to the minimum and maximum values of the data, and X(i) (i = 1, 2, ..., n) is the normalized time series.
Step 2: Clustering calculation.
The number of clustering k is calculated by Equation (13) [43]:
$$k = \left[ \frac{(T_{\max} - T_{\min}) \cdot (n - 1)}{\sum_{i=2}^{n} |X(i) - X(i-1)|} \right]$$
where '[ ]' denotes rounding to the nearest integer. According to Equation (13), k = 8.
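Steps 1 and 2 can be sketched as follows. The reading of Equation (13) below, with the absolute first differences taken over the raw series T, is an assumption, since the printed equation is ambiguous about whether the differences use the raw or the normalized series; the function names are illustrative.

```python
def normalize(T):
    # Eq. (12): min-max normalisation of the series to [0, 1]
    tmin, tmax = min(T), max(T)
    return [(t - tmin) / (tmax - tmin) for t in T]

def cluster_count(T):
    # Eq. (13): k = [ (Tmax - Tmin)(n - 1) / sum_{i>=2} |T(i) - T(i-1)| ],
    # i.e. the load range divided by the mean absolute first difference.
    n = len(T)
    rng_ = max(T) - min(T)
    denom = sum(abs(T[i] - T[i - 1]) for i in range(1, n))
    return round(rng_ * (n - 1) / denom)
```

For a strictly monotone series, the summed differences equal the range, so the formula returns n − 1.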
Step 3: Parameters initialization
We set the maximum number of iterations MAXI = 200 and the minimum change of membership MINC = 10^(−7). The performance of the algorithm depends on the initial cluster centers, so a set of cluster centers must be specified at random.
Step 4: Update operator
If the objective function is better than the previous ones, the membership functions and cluster centers will be updated by Equations (10) and (11) after each iteration.
Step 5: Termination operator
In this paper, we use the iteration number and the change of memberships as termination criteria: if the current iteration exceeds MAXI, or the current change of membership is smaller than MINC, the FCM finishes and the cluster centers are obtained.
After FCM, we obtained the set of cluster centers {0.6115, 0.4595, 0.3949, 0.6668, 0.9491, 0.0732, 0.7485, 0.5416}, and the final time-series fuzzy memberships are shown in Table 2.
Table 2. The final fuzzy time series (FTS) (partly).
Date | FTS membership in clusters 1–8
11 January | 0.0104 | 0.0220 | 0.0338 | 0.0084 | 0.0036 | 0.9013 | 0.0063 | 0.0142
11 February | 0.0128 | 0.0227 | 0.0307 | 0.0108 | 0.0053 | 0.8928 | 0.0085 | 0.0163
11 March | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
11 April | 0.0064 | 0.0506 | 0.9181 | 0.0042 | 0.0011 | 0.0039 | 0.0026 | 0.0130
11 May | 0.0115 | 0.7581 | 0.1842 | 0.0066 | 0.0013 | 0.0026 | 0.0036 | 0.0322
11 June | 0.0026 | 0.9743 | 0.0106 | 0.0014 | 0.0002 | 0.0004 | 0.0007 | 0.0099
11 July | 0.1255 | 0.0054 | 0.0030 | 0.8256 | 0.0022 | 0.0006 | 0.0210 | 0.0165
11 August | 0.9547 | 0.0024 | 0.0012 | 0.0272 | 0.0006 | 0.0002 | 0.0037 | 0.0101
11 September | 0.0005 | 0.0064 | 0.9911 | 0.0003 | 0.0001 | 0.0002 | 0.0002 | 0.0011
11 October | 0.0029 | 0.0256 | 0.9604 | 0.0019 | 0.0005 | 0.0016 | 0.0011 | 0.0060
11 November | 0.0046 | 0.0747 | 0.9032 | 0.0028 | 0.0006 | 0.0017 | 0.0016 | 0.0107
11 December | 0.8419 | 0.0127 | 0.0058 | 0.0449 | 0.0019 | 0.0009 | 0.0098 | 0.0821

3.2. Global Harmony Search Algorithm-Least Squares Support Vector Machines Model

3.2.1. Parameters Selection by Global Harmony Search Algorithm

Before running the GHSA, we need to determine its parameters: the number of variables, the range [LB_i, UB_i] of each variable, the harmony memory size (HMS), the harmony memory considering rate (HMCR), the value of BW, the pitch adjusting rate (PAR), and the number of iterations (NI).
In the GHSA experiments, a larger HMCR favors convergence, while a smaller HMCR preserves the diversity of the population; in this paper, we set the HMCR to 0.8. For the PAR, a smaller value enhances the local search ability of the algorithm, while a larger value makes it easier to adjust the search area around the harmony memory. The value of BW also affects the search results: a larger BW helps the algorithm avoid being trapped in a local optimum, while a smaller BW supports a meticulous search in a local area. In our experiment, we therefore use a small PAR and a large BW in the early iterations; as the number of iterations grows, BW is reduced while PAR increases. We adopt the following equations:
$$PAR = (PAR_{\max} - PAR_{\min}) \times \frac{currentIteration}{NI} + PAR_{\min}$$
$$BW = BW_{\max} \times \exp\left( \log\left( \frac{BW_{\min}}{BW_{\max}} \right) \times \frac{currentIteration}{NI} \right)$$
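The two schedules above can be sketched as follows (a minimal Python sketch; the function name and default parameter values, taken from Table 3, are illustrative). PAR grows linearly from PAR_min to PAR_max while BW decays exponentially from BW_max to BW_min, so early iterations explore widely and late iterations refine locally.

```python
import math

def par_bw_schedule(it, ni, par_min=0.1, par_max=0.9,
                    bw_min=0.001, bw_max=1.0):
    # Eq. (14): linear growth of the pitch adjusting rate
    par = (par_max - par_min) * it / ni + par_min
    # Eq. (15): exponential decay of the bandwidth
    bw = bw_max * math.exp(math.log(bw_min / bw_max) * it / ni)
    return par, bw
```

At iteration 0 this yields (PAR_min, BW_max); at iteration NI it yields (PAR_max, BW_min).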
In swarm intelligence algorithms, the global optimization ability improves as the population size increases. However, the search time also increases and convergence slows as the population grows larger; conversely, a small population is more easily trapped in a local optimum. The original data consist of 48 observations. Combining relevant research experience with extensive experiments, we divided the number of data points by the number of parameters to obtain a quotient of 24 as an initial HMS; after further optimization experiments, we finally set the HMS in GHSA to 20.
In the LSSVM model, the regularization parameter γ is a compromise that controls the trade-off between the proportion of misclassified samples and the complexity of the model; it adjusts the empirical risk and the confidence interval of the data until the LSSVM achieves good generalization performance. When the kernel parameter σ approaches zero, the training samples can be classified correctly, but the model suffers from over-fitting, which reduces the generalization performance of the LSSVM. Based on the authors' previous research experience, the ranges of the parameters γ and σ are set to [0, 10000] and [0, 100], respectively. The parameters selected in GHSA are shown in Table 3.
Table 3. Parameters selection in GHSA.
Parameter | Value | Comment
num | num = 2 | Number of variables
γ | γ ∈ [0, 10000] | Range of each variable
σ | σ ∈ [0, 100] | Range of each variable
HMS | HMS = 20 | Harmony memory size
HMCR | HMCR = 0.9 | Harmony memory considering rate
PAR | PAR_max = 0.9, PAR_min = 0.1 | Pitch adjusting rate
bw | bw_max = 1, bw_min = 0.001 | Bandwidth
NI | NI = 200 | Number of iterations

3.2.2. Fitness Function in Global Harmony Search Algorithm

The fitness function in GHSA measures the fitness of a generated harmony vector: only if its fitness is better than that of the worst harmony in the harmony memory can it replace the worst harmony. The fitness function is given in Equation (16):
$$fit = \frac{100}{n} \times \sum_{i=1}^{n} \frac{|y_i - y_i'|}{y_i}$$
where n refers to the number of test sample, y i refers to the observation value and y i ' to the predictive value in LSSVM.
We then calculate the fitness function in GHSA by Equation (16). After the GHSA finishes, the optimal parameters are γ = 9746.7 and σ = 30.4; with these we establish the LSSVM model, train it on the historical data to forecast the next electric load, and obtain a set of LSSVM outputs. Finally, we denormalize the outputs of the LSSVM.

3.2.3. Denormalization

After the GHSA has determined the optimal parameters γ and σ, we establish the LSSVM model and train it on historical data to forecast the next electric load. The outputs of the LSSVM are normalized values, so we need to denormalize them to real values. The denormalization method is given by Equation (17):
$$v_{\mathrm{real}} = v_i \times (\max - \min) + \min$$
where max and min refer to the maximum and minimum values of the original data.

3.2.4. Defuzzification Mechanism

Several experiments indicate that there are inherent errors between the actual values and the fuzzy values. Therefore, it is necessary to estimate this fuzzy effect in order to achieve more accurate forecasting. In this paper, we propose an approach to adjust for the fuzzy effect, namely the defuzzification mechanism, shown in Equation (18):
$$df_t = \mathrm{AVG}\left( \frac{Y_1^t}{FY_1^t}, \frac{Y_2^t}{FY_2^t}, \ldots, \frac{Y_i^t}{FY_i^t}, \ldots, \frac{Y_n^t}{FY_n^t} \right)$$
where t = 1, 2, ..., 12 indexes the twelve months of a year, n is the total number of years in the data set, i = 1, 2, ..., n indexes the years, and Y_i^t and FY_i^t are the actual value and the fuzzy value of month t in the i-th year, respectively. The final forecasting result can then be expressed as Equation (19), and the defuzzification multipliers are shown in Table 4:
$$\hat{y}_t = y'_t \times df_t$$
Table 4. Defuzzification multiplier of each month.
Month | Multiplier | Month | Multiplier
January | 1.00244 | July | 1.00612
February | 0.98222 | August | 1.00567
March | 0.99931 | September | 1.00069
April | 0.99522 | October | 0.99932
May | 0.99772 | November | 1.00937
June | 1.00493 | December | 0.99996
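The defuzzification mechanism of Equations (18) and (19) can be sketched as follows. This is a minimal Python sketch; the function names and the assumed data layout (one 12-month list of values per year) are illustrative.

```python
def defuzz_multipliers(actual, fuzzy):
    # Eq. (18): for calendar month t, df_t is the mean ratio of actual
    # to fuzzy-model values across the n years in the data set.
    n = len(actual)
    return [sum(actual[i][t] / fuzzy[i][t] for i in range(n)) / n
            for t in range(12)]

def adjust(forecast, df):
    # Eq. (19): scale the month-t forecast by its multiplier df_t.
    return [f * df[t] for t, f in enumerate(forecast)]
```

A multiplier above 1 (e.g. November in Table 4) means the fuzzy model systematically under-predicts that month, so its forecast is scaled up.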

3.3. Performance Evaluation

We compare the proposals in different respects. First, the efficiency of the proposed GHSA is compared with that of other optimization algorithms, namely HSA, PSO, and GA; each algorithm is used to optimize the parameters γ and σ.
This experimental procedure is repeated 20 times for each optimization algorithm; the comparison of fitness curves is shown in Figure 2, and the performance comparison for the different algorithms is presented in Table 5. We can see from Figure 2 and Table 5 that the values of γ⁻¹ obtained by the four algorithms are all close to 0.0001, i.e., all the search algorithms achieve a similar optimum, but the values of the parameter σ optimized by the different algorithms differ, and this directly affects the fitness. The convergence of PSO is the fastest; however, due to its algorithmic complexity, its running time is long. The execution time of HSA is the shortest, but its fitness is the worst. The running time of GHSA is comparable to that of HSA, and the fitness of GHSA is the best among the four algorithms. In the second group of experiments, the forecasting accuracy of the proposed algorithm is compared with ARIMA, GA-LSSVM [29], PSO-LSSVM [30], and the first-group models. We use the mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean squared error (RMSE) to evaluate the accuracy of the proposed method; they are defined in Equations (20)–(22):
$$MAPE = \frac{100}{n} \times \sum_{i=1}^{n} \frac{|y_i - y_i'|}{y_i}$$
$$MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - y_i'|$$
$$RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - y_i')^2 }$$
where n refers to the number of samples, y_i is the observed value, and y_i' is the predicted value. Using the optimal values in Table 5, the forecasting results of the GHSA-FTS-LSSVM, GHSA-LSSVM, GA-LSSVM, PSO-LSSVM, and ARIMA models are shown in Table 6.
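The three error metrics in Equations (20)–(22) can be sketched as follows (a minimal Python sketch; function names are illustrative):

```python
def mape(y, yhat):
    # Eq. (20): mean absolute percentage error, in percent
    return 100.0 * sum(abs(a - p) / a for a, p in zip(y, yhat)) / len(y)

def mae(y, yhat):
    # Eq. (21): mean absolute error, in the units of the load series
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    # Eq. (22): root mean squared error; penalises large errors more
    return (sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y)) ** 0.5
```

MAPE is scale-free, which is why Table 6 reports it in percent alongside the unit-bearing MAE and RMSE.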
Figure 2. Comparison of (a) average fitness curves; and (b) best fitness curves.
Table 5. Performance comparison for different algorithms. PSO: particle swarm optimization; HSA: harmony search algorithm; GA: genetic algorithm.
Algorithm | Fitness | γ⁻¹ | σ | Running Time/s
GHSA | 0.0397 | 0.000103 | 0.3977 | 9.2977
HSA | 0.0489 | 0.000105 | 2.8422 | 8.2681
GA | 0.0439 | 0.000105 | 2.8422 | 68.6248
PSO | 0.0451 | 0.000112 | 2.3965 | 69.9352
Table 6. Forecasting results of GHSA-FTS-LSSVM, GHSA-LSSVM, GA-LSSVM, PSO-LSSVM and autoregressive integrated moving average (ARIMA) models (unit: thousand million W/h).
Time | Actual | GHSA-FTS-LSSVM | GHSA-LSSVM | GA-LSSVM [29] | PSO-LSSVM [30] | ARIMA
15 January | 384.5 | 388.5989 | 387.094 | 387.066 | 393.205 | 399.142
15 February | 352.1 | 379.4326 | 372.661 | 372.65 | 373.62 | 381.038
15 March | 349.2 | 368.1298 | 355.006 | 355.01 | 352.864 | 359.864
15 April | 373.3 | 359.5839 | 353.429 | 353.434 | 351.189 | 377.003
15 May | 387.6 | 380.1802 | 366.55 | 366.545 | 366.026 | 362.173
15 June | 393.4 | 392.6603 | 374.353 | 374.341 | 375.799 | 361.905
15 July | 429.8 | 387.9569 | 377.522 | 377.506 | 379.962 | 399.488
15 August | 416.7 | 395.6517 | 397.452 | 397.409 | 408.614 | 432.612
15 September | 379.9 | 395.7048 | 390.271 | 390.239 | 397.814 | 423.027
15 October | 385.3 | 376.4279 | 370.15 | 370.142 | 370.449 | 404.338
15 November | 398.2 | 391.7981 | 373.098 | 373.086 | 374.179 | 390.129
15 December | 374.8 | 380.8968 | 380.146 | 380.127 | 383.494 | 385.307
MAPE (%) | - | 3.709 | 4.579 | 4.579 | 4.654 | 5.219
MAE | - | 14.358 | 18.035 | 18.035 | 18.215 | 20.153
RMSE | - | 18.180 | 21.914 | 21.921 | 21.525 | 23.0717
The excellent performance of the GHSA-FTS-LSSVM method is due to the following reasons. First, we use FCM to process the original data, turning the exact load values into a set of input variables with fuzzy features; this overcomes the defects of the original data and uncovers its implicit information. Second, the proposed algorithm employs the GHSA to improve search efficiency. Finally, the LSSVM reduces the equation-solving time and improves the accuracy and generalization ability of the model.

4. Conclusions

Traditional electric load forecasting methods are based on the exact values of a time series, but the electric power market is very complex and the functional relations between variables are difficult to describe, so this paper adopts the FTS model and defines load values as fuzzy sets. We then compared the four algorithms GHSA, HSA, PSO, and GA. According to the experimental results, GHSA, which finds the optimal solution quickly and efficiently, is the best search algorithm for the LSSVM model. In terms of prediction accuracy, the MAPE of the GHSA-FTS-LSSVM model is better than that of the GHSA-LSSVM model, which has no fuzzy processing. Our method also outperforms the corresponding methods based on GA and PSO.

Acknowledgments

This work was supported by the Ministry of Science and Technology, Taiwan (MOST 104-2410-H-161-002) and the Urban Public Utility Government Regulation Laboratory of Zhejiang University of Finance & Economics (No. Z0406413011/002/002).

Author Contributions

Yan Hong Chen and Wei-Chiang Hong conceived and designed the experiments; Wen Shen and Ning Ning Huang performed the experiments; Yan Hong Chen and Wei-Chiang Hong analyzed the data; Yan Hong Chen wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghayekhloo, M.; Menhaj, M.B.; Ghofrani, M. A hybrid short-term load forecasting with a new data preprocessing framework. Electr. Power Syst. Res. 2015, 119, 138–148. [Google Scholar] [CrossRef]
  2. Kouhi, S.; Keynia, F. A new cascade NN based method to short-term load forecast in deregulated electricity market. Energy Convers. Manag. 2013, 71, 76–83. [Google Scholar] [CrossRef]
  3. Chen, J.F.; Wang, W.M.; Huang, C.M. Analysis of an adaptive time-series autoregressive moving-average (ARMA) model for short-term load forecasting. Electr. Power Syst. Res. 1995, 34, 187–196. [Google Scholar] [CrossRef]
  4. Nie, H.; Liu, G.; Liu, X.; Wang, Y. Hybrid of ARIMA and SVMs for short-term load forecasting. Energy Proced. 2012, 16, 1455–1460. [Google Scholar] [CrossRef]
  5. Nedellec, R.; Cugliari, J.; Goude, Y. GEFCom2012: Electric load forecasting and backcasting with semi-parametric models. Int. J. Forecast. 2014, 30, 375–381. [Google Scholar] [CrossRef]
  6. Kang, J.; Zhao, H. Application of improved grey model in long-term load forecasting of power engineering. Syst. Eng. Proced. 2012, 3, 85–91. [Google Scholar] [CrossRef]
  7. Bahrami, S.; Hooshmand, R.A.; Parastegari, M. Short term electric load forecasting by wavelet transform and grey model improved by PSO (particle swarm optimization) algorithm. Energy 2014, 72, 434–442. [Google Scholar] [CrossRef]
  8. Mandal, P.; Senjyu, T.; Urasaki, N.; Funabashi, T. A neural network based several-hour-ahead electric load forecasting using similar days approach. Int. J. Electr. Power Energy Syst. 2006, 28, 367–373. [Google Scholar] [CrossRef]
  9. Ko, C.N.; Lee, C.M. Short-term load forecasting using SVR (support vector regression)-based radial basis function neural network with dual extended Kalman filter. Energy 2013, 49, 413–422. [Google Scholar] [CrossRef]
  10. Yu, F.; Xu, X. A short-term load forecasting model of natural gas based on optimized genetic algorithm and improved BP neural network. Appl. Energy 2014, 134, 102–113. [Google Scholar] [CrossRef]
  11. Pai, P.F. Hybrid ellipsoidal fuzzy systems in forecasting regional electricity loads. Energy Convers. Manag. 2006, 47, 2283–2289. [Google Scholar] [CrossRef]
  12. Chandrashekara, A.S.; Ananthapadmanabha, T.; Kulkarni, A.D. A neuro-expert system for planning and load forecasting of distribution systems. Int. J. Electr. Power Energy Syst. 1999, 21, 309–314. [Google Scholar] [CrossRef]
  13. Chen, Y.; Yang, Y.; Liu, C.; Li, L. A hybrid application algorithm based on the support vector machine and artificial intelligence: An example of electric load forecasting. Appl. Math. Model. 2015, 39, 2617–2632. [Google Scholar] [CrossRef]
  14. Selakov, A.; Cvijetinović, D.; Milović, L.; Bekut, D. Hybrid PSO–SVM method for short-term load forecasting during periods with significant temperature variations in city of Burbank. Appl. Soft Comput. 2014, 16, 80–88. [Google Scholar] [CrossRef]
  15. Hong, W.C. Electric load forecasting by seasonal recurrent SVR (support vector regression) with chaotic artificial bee colony algorithm. Energy 2011, 36, 5568–5578. [Google Scholar] [CrossRef]
  16. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  17. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  18. Huang, S.C.; Wu, T.K. Integrating GA-based time-scale feature extractions with SVMs for stock index forecasting. Expert Syst. Appl. 2008, 35, 2080–2088. [Google Scholar] [CrossRef]
  19. Huang, S.C. Integrating nonlinear graph based dimensionality reduction schemes with SVMs for credit rating forecasting. Expert Syst. Appl. 2009, 36, 7515–7518. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Lv, T. An empirical study on GPRS traffic forecasting based on chaos and SVM theory. J. China Univ. Post. Telecommun. 2010, 17, 41–44. [Google Scholar] [CrossRef]
  21. Li, X.L.; Yi, Q.Z.; Liu, X.Y. Tax forecasting theory and model based on SVM optimized by PSO. Expert Syst. Appl. 2011, 38, 116–120. [Google Scholar]
  22. Yang, Y.; Zhao, Y. Prevailing Wind Direction Forecasting for Natural Ventilation Adjustment in Greenhouses Based on LE-SVM. Energy Proced. 2012, 16, 252–258. [Google Scholar] [CrossRef]
  23. Lee, W.J.; Hong, J. A hybrid dynamic and fuzzy time series model for mid-term power load forecasting. Int. J. Electr. Power Energy Syst. 2015, 64, 1057–1062. [Google Scholar] [CrossRef]
  24. Efendi, R.; Ismail, Z.; Deris, M.M. A new linguistic out-sample approach of fuzzy time series for daily forecasting of Malaysian electricity load demand. Appl. Soft Comput. 2015, 28, 422–430. [Google Scholar] [CrossRef]
  25. Pereira, C.M.; De Almeida, N.N.; Velloso, M.L.F. Fuzzy Modeling to Forecast an Electric Load Time Series. Proced. Comput. Sci. 2015, 55, 395–404. [Google Scholar] [CrossRef]
  26. Sadaei, H.J.; Enayatifar, R.; Abdullah, A.H.; Gani, A. Short-term load forecasting using a hybrid model with a refined exponentially weighted fuzzy time series and an improved harmony search. Int. J. Electr. Power Energy Syst. 2014, 62, 118–129. [Google Scholar] [CrossRef]
  27. Day, P.; Fabian, M.; Noble, D.; Ruwisch, G.; Spencer, R.; Stevenson, J.; Thoppay, R. Residential power load forecasting. Proced. Comput. Sci. 2014, 28, 457–464. [Google Scholar] [CrossRef]
  28. Chen, S.M.; Kao, P.Y. TAIEX forecasting based on fuzzy time series, particle swarm optimization techniques and support vector machines. Inf. Sci. 2013, 247, 62–71. [Google Scholar] [CrossRef]
  29. Mahjoob, M.J.; Abdollahzade, M.; Zarringhalam, R. GA based optimized LS-SVM forecasting of short term electricity price in competitive power markets. In Proceedings of the IEEE Conference on Industrial Electronics and Applications, Singapore, 3–5 June 2008; pp. 73–78.
  30. Xiang, Y.; Jang, L. Water quality prediction using LS-SVM and particle swarm optimization. In Proceedings of the IEEE Second International Workshop on Knowledge Discovery and Data Mining, Moscow, Russia, 23–25 January 2009; pp. 900–904.
  31. Mustaffa, Z.; Yusof, Y.; Kamaruddin, S.S. Gasoline Price Forecasting: An Application of LSSVM with Improved ABC. Proced. Soc. Behav. Sci. 2014, 129, 601–609. [Google Scholar] [CrossRef]
  32. Zhai, M.Y. A new method for short-term load forecasting based on fractal interpretation and wavelet analysis. Int. J. Electr. Power Energy Syst. 2015, 69, 241–245. [Google Scholar] [CrossRef]
  33. Zhang, W.Y.; Hong, W.C.; Dong, Y.; Tsai, G.; Sung, J.T.; Fan, G.F. Application of SVR with chaotic GASA algorithm in cyclic electric load forecasting. Energy 2012, 45, 850–858. [Google Scholar] [CrossRef]
  34. Ghofrani, M.; Ghayekhloo, M.; Arabali, A.; Ghayekhloo, A. A hybrid short-term load forecasting with a new input selection framework. Energy 2015, 81, 777–786. [Google Scholar] [CrossRef]
  35. Enayatifar, R.; Sadaei, H.J.; Abdullah, A.H.; Gani, A. Imperialist competitive algorithm combined with refined high-order weighted fuzzy time series (RHWFTS–ICA) for short term load forecasting. Energy Convers. Manag. 2013, 76, 1104–1116. [Google Scholar] [CrossRef]
  36. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  37. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  38. Lin, J.; Li, X. Global harmony search optimization based ink preset for offset printing. Chin. J. Sci. Instrum. 2010, 10, 2248–2253. (In Chinese) [Google Scholar]
  39. Wang, C.M.; Huang, Y.F. Self-adaptive harmony search algorithm for optimization. Expert Syst. Appl. 2010, 37, 2826–2837. [Google Scholar] [CrossRef]
  40. Song, Q.; Chissom, B.S. Fuzzy time series and its models. Fuzzy Sets Syst. 1993, 54, 269–277. [Google Scholar] [CrossRef]
  41. Egrioglu, E.; Aladag, C.H.; Yolcu, U.; Uslu, V.R.; Erilli, N.A. Fuzzy time series forecasting method based on Gustafson–Kessel fuzzy clustering. Expert Syst. Appl. 2011, 38, 10355–10357. [Google Scholar] [CrossRef]
  42. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
  43. Yang, Y.W.; Lin, Y.P. Multi-step forecasting of stock markets based on fuzzy time series model. Comput. Eng. Appl. 2014, 50, 252–256. [Google Scholar]

Share and Cite

MDPI and ACS Style

Chen, Y.H.; Hong, W.-C.; Shen, W.; Huang, N.N. Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm. Energies 2016, 9, 70. https://doi.org/10.3390/en9020070


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
