
Electric Vehicle Charging Load Forecasting: A Comparative Study of Deep Learning Approaches

Juncheng Zhu, Zhile Yang, Monjur Mourshed, Yuanjun Guo, Yimin Zhou, Yan Chang, Yanjie Wei and Shengzhong Feng
1 School of Information Engineering, Zhengzhou University, Zhengzhou 450001, China
2 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3 School of Engineering, Cardiff University, Cardiff CF24 3AA, UK
4 School of Software Engineering, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Energies 2019, 12(14), 2692; https://doi.org/10.3390/en12142692
Submission received: 19 June 2019 / Revised: 4 July 2019 / Accepted: 5 July 2019 / Published: 13 July 2019

Abstract:
Load forecasting is one of the major challenges of power system operation and is crucial to effective scheduling for economic dispatch at multiple time scales. Numerous load forecasting methods have been proposed for household and commercial demand, as well as for loads at various nodes in a power grid. However, compared with conventional loads, the uncoordinated charging of a large penetration of plug-in electric vehicles differs in periodicity and fluctuation, which renders current load forecasting techniques ineffective. Deep learning methods, empowered by unprecedented learning ability from extensive data, provide novel approaches for solving challenging forecasting tasks. This research presents a comparative study of deep learning approaches for forecasting the super-short-term stochastic charging load of plug-in electric vehicles. Several popular and novel deep-learning based methods are utilized to establish forecasting models on minute-level real-world data from a plug-in electric vehicle charging station, and their forecasting performance is compared. Numerical results of twelve cases at various time steps show that deep learning methods achieve high accuracy in super-short-term plug-in electric vehicle load forecasting. Among the various deep learning approaches, the long short-term memory method performs best, reducing the forecasting error by over 30% compared with the conventional artificial neural network model.

1. Introduction

Uninterrupted supply of electricity is crucial to the functioning of modern civilization. Today’s electricity grid is highly complex and increasingly vulnerable to potential disruptions. Load forecasting has, therefore, been a key measure in power system planning, scheduling and operation. With the increasing penetration of variable renewable energy resources, accurate forecasting of both generation and demand profiles is important for effective and economic dispatching of power contributors. Load forecasting can be categorized into short-, medium- and long-term depending on the time span or resolution [1]. Short-term load forecasting is useful for optimal utility operations and scheduling, while long-term load forecasting is delivered at the system planning stage. Minute-level load forecasting, known as super-short-term load forecasting [2], has been utilized in real-time power quality and security monitoring.
The electrification of the transportation sector is seen as an effective means to reduce greenhouse gas emissions from the burning of fossil fuels. Other environmental concerns, such as urban air quality and related health impacts, have also prompted policy makers and stakeholders to opt for the popularization of electric vehicles (EVs) [3] in place of traditional internal combustion engine (ICE) based vehicles. EVs can be considered zero-emission vehicles during operation when they are charged with electricity from renewable sources. However, the rapid development of the EV industry is introducing new challenges to the existing power system structure, owing to the large battery capacity of EVs [4] and their highly stochastic individual charging behavior.
Model based load forecasting techniques include statistical models using recursive and traditional mathematical tools [5,6,7,8], and artificial intelligence models involving various state-of-the-art machine learning approaches [9,10]. Traditional forecasting methods are generally straightforward and utilize explainable representations in the model composition, whereas artificial intelligence methods produce grey- or black-box models when generating the forecasting results. Due to its strong adaptive learning and generalization ability, the artificial neural network (ANN) has become successful in delivering load forecasting tasks [11]. However, the increasing resolution and dimensionality of emerging datasets challenge canonical ANN approaches. Deep learning methods have been in the spotlight and have seen remarkable success in image semantic segmentation and feature classification [12,13,14], natural language processing [15] and various computationally intensive science and engineering fields [16]. The load forecasting problem is a featured time-series problem that strongly resembles natural language processing, where deep learning methods have the potential to contribute effectively.
At the distribution grid level, EV charging load exerts strong pressure due to its highly periodic and fluctuating characteristics. The EV demand curve sees significant peak-valley differences and large spikes in featured time slots, particularly at a super-short-term time scale. More precise forecasting of this novel load type can significantly benefit power system operators in both economic and security terms, and more powerful tools are needed. The key contributions of the paper are as follows:
(1) New model: a novel exploratory super-short-term multi-step load forecasting model is proposed, particularly for modeling the super-short-term EV charging load. The minute-level super-short-term model plays a crucial role in the operation and maintenance of EV charging aggregators as well as in power flow analysis and control [17]. The impact of multiple historical time steps on forecasting accuracy is evaluated.
(2) New method: a new EV load forecasting deep learning framework is established and applied to the modeling task, where six conventional and deep learning based methods are comprehensively and comparatively studied on various index criteria.
(3) New scenario: a brand new scenario with a whole year of real world historical data from plug-in electric vehicle charging stations in Shenzhen is adopted to validate the model effectiveness. To the best of our knowledge, this is the first study that utilizes real world super-short-term EV charging data, rather than simulation data, to forecast multi-time-step super-short-term charging load profiles.
The rest of the paper is organized as follows: Section 2 elaborates the background of load forecasting and briefly reviews load forecasting methods; Section 3 demonstrates the principles of several featured deep learning methods and the corresponding load forecasting framework; Section 4 gives a detailed discussion and analysis of the charging data used; the experimental results of the comparative studies are shown in Section 5; finally, Section 6 concludes the paper.

2. Literature Study

Based on the time dimension characteristics of load data, the initial research on the load forecasting problem was based on the time series prediction method proposed by Box et al. [5] in 1976. The method has low input requirements for the load forecasting model: it considers only the time series of historical data and ignores the other multi-faceted factors that influence the load. The study in [18] provides a load peak model that takes external factors such as weather and humidity into account. Based on the Box–Jenkins method, Hagan et al. [19] proposed an autoregressive moving average (ARMA) prediction method, and Juberias et al. [20] established an autoregressive integrated moving average (ARIMA) model to achieve load forecasting. In order to further improve the accuracy of load forecasting, many hybrid predictive models have been proposed. Jie et al. [21] combined the seasonal exponential adjustment method with regression methods. Pai and Hong [22] applied the support vector machine (SVM) to load forecasting and used the simulated annealing algorithm to select the kernel parameters. Guo et al. [23] utilized time-indexed autoregressive with exogenous terms (ARX) models with two-stage weighted least squares regression for modeling hourly cooling load.
Due to the different types of load and the complexity of influencing factors, the selection of input features and of the method used to construct the load forecasting model becomes important. Many intelligent prediction methods have been proposed to utilize more relevant information. The authors in [24] used a random forest approach to build a load forecasting model whose inputs are refined by expert feature selection using fuzzy rules. Feng and Xu [25] proposed a combinational approach for short-term gas load forecasting based on a genetic algorithm-optimized neural network. Kouhi and Keynia [26] proposed a cascaded neural network method with a two-stage feature selection method for selecting the best inputs. Mahmoud et al. [27] utilized a tuned fuzzy system and an ANN method for modeling medium voltage load. Existing approaches have proved the applicability of time-series-based mathematical models and computational intelligence models to the load forecasting problem. However, new participants such as renewable generation and EVs exhibit more complicated characteristics and higher uncertainties, which challenge the conventional approaches.
In 2006, the deep learning concept was first proposed by Hinton et al. [28]. Deep learning methods have stronger nonlinear learning ability, robustness and generalization than traditional methods, in particular for large scale data resources. A model trained by deep learning methods can be applied to large scale and intractable scenarios, where appropriate adjustment of only a limited number of hyper-parameters can achieve the desired effect. Featured deep learning based networks include the convolutional neural network (CNN), the recurrent neural network (RNN), Deep Boltzmann Machines [29], the Stacked AutoEncoder [30], etc. Long short-term memory (LSTM) is an improved RNN, first introduced by Hochreiter et al. [31], aimed at relieving the vanishing gradient problem of the original RNN. In 2016, Marino et al. [32] used LSTM for building energy load forecasting and discussed the effect of the number of neuron nodes on the forecasting error. In 2017, Kong et al. [33] applied LSTM to residential load forecasting, where LSTM showed the best performance among its counterparts. Zheng et al. [34] presented a hybrid LSTM model for short-term load forecasting and obtained comparatively good results against other counterparts. In 2018, Bouktif et al. [35] proposed an optimal LSTM model for electric load forecasting using feature selection and a genetic algorithm; this method trained several linear and nonlinear machine learning algorithms and selected the best performing algorithm as the baseline. These case studies have effectively verified the feasibility and superiority of deep learning, particularly LSTM methods, in the field of load forecasting.
In recent years, plug-in electric vehicles (PEVs) have emerged worldwide. The large power capacity of their batteries poses an unprecedented challenge to the existing power system. Accurate and efficient load forecasting for PEV charging is critical for the maintenance and operation of charging stations [36,37]. Mu et al. [38] presented a spatial-temporal model to evaluate the impact of large scale deployment of PEVs on urban distribution networks. Qian et al. [39] proposed a methodology for modeling and analyzing the load in a distribution system considering PEV battery charging, and adopted Monte Carlo simulation for scenario generation. Alizadeh et al. [40] proposed a stochastic model based on queuing theory for PEV charging load. Luo et al. [41] proposed a Monte Carlo simulation based model to forecast the charging load of PEVs in China. Lu et al. [42] utilized a random forest to forecast 15 min level EV charging data. However, these traditional methods struggle to quantify the external factors that affect the charging load of PEVs, and a deterministic model is impossible to establish. In our previous study [43], a deep learning method was used for hourly PEV load forecasting and achieved good performance; however, minute-level super-short-term forecasting is more challenging. In this paper, the super-short-term PEV charging load model is established using minute-level historical data for training, validation and testing. Moreover, the performance of multiple deep learning methods in solving the super-short-term PEV forecasting problem is comprehensively evaluated.

3. The Deep Learning Based PEV Charging Load Forecasting Framework

In 2015, LeCun et al. [44] systematically reviewed the featured methods and applications of deep learning in the literature, where CNNs and RNNs have been among the most powerful tools for solving image based and time sequential data problems, respectively. In particular, recurrent nets have shone on sequential data such as text and speech. Load forecasting data are sequential data similar to text and speech, and the RNN is a sequence-based model that has the potential to demonstrate better performance than traditional methods in solving time series problems.

3.1. RNN Model

For the load forecasting problem, when the load value at time t-1 is taken as the input of the RNN, the model can output the load value at time t. RNN models can better capture the characteristics of the input data by using the recurrent structure shown in Figure 1.
For time t:
S_t = \phi(U x_t + W S_{t-1} + b_1), (1)
o_t = \phi(V S_t + b_2), (2)
\hat{y}_t = \phi(o_t), (3)
where x_t, S_t and o_t denote the input, hidden and output units at time t, respectively. The network connection weights are denoted by U, W and V. Moreover, b and \hat{y} represent the bias and the predicted output value, and \phi denotes the activation function. As the amount and dimensionality of the data increase, the RNN has to remember a great deal of information from before time t, which leads to the vanishing gradient problem. In light of this, the LSTM method was proposed to solve this problem.
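To make the recurrence concrete, the following is a minimal NumPy sketch of Equations (1)-(3); the function name, the dimensions and the choice of tanh for \phi are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def rnn_step(x_t, s_prev, U, W, V, b1, b2, phi=np.tanh):
    """One recurrence of Equations (1)-(3): hidden state, output, prediction."""
    s_t = phi(U @ x_t + W @ s_prev + b1)   # S_t = phi(U x_t + W S_{t-1} + b_1)
    o_t = phi(V @ s_t + b2)                # o_t = phi(V S_t + b_2)
    y_hat_t = phi(o_t)                     # y_hat_t = phi(o_t)
    return s_t, y_hat_t

# Unrolling over a load sequence feeds the value at t-1 to predict t:
#   s = np.zeros(hidden_dim)
#   for x_t in sequence: s, y_hat = rnn_step(x_t, s, U, W, V, b1, b2)
```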

3.2. LSTM Model

The LSTM structure maintains the key logic of the original recurrent network scheme. The major difference between LSTM and RNN is that the LSTM method adds a “processor” to the algorithm to determine whether the input information is useful or not. The processor item is named the “cell” and encapsulates all features of the LSTM module. As shown in Figure 2, three gates are designed in the cell, named the input gate (i_t), forget gate (f_t) and output gate (o_t), respectively, for maintaining and updating the valuable information of the data before time t. The model training method for LSTM is the widely adopted back-propagation through time (BPTT) [45]. It has been proved that LSTM is an effective method for the problem of long-range dependencies, with universal applicability across various learning and prediction problems. The cell state and parameter updating scheme is as follows:
f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), (4)
i_t = \sigma(W_i [h_{t-1}, x_t] + b_i), (5)
\tilde{C}_t = \tanh(W_c [h_{t-1}, x_t] + b_c), (6)
C_t = i_t \tilde{C}_t + f_t C_{t-1}, (7)
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o), (8)
h_t = o_t \tanh(C_t), (9)
where i_t, f_t and o_t are the input, forget and output gates, respectively; h_{t-1} is the output at time slot t-1, x_t is the input at the current moment, and C_{t-1} is the memory from the previous block. The forget gate (f_t) reads the information in h_{t-1} and x_t, then outputs a value between 0 and 1 for the cell state C_{t-1}, where 1 denotes “completely retained” and 0 “completely discarded”. The input gate i_t decides how much new information is added to the cell state: the first stage is the sigmoid layer that decides which information will be updated, and the second stage is the tanh layer that generates a vector of new candidate values \tilde{C}_t. The memory of the current block, C_t, accumulates the previous block's memory scaled by the forget gate and the new candidate scaled by the input gate. Finally, the output gate (o_t) determines which part of the cell state is output. Here, W and b denote the weights and biases, and \sigma and \tanh are activation functions defined as follows:
\sigma(x) = \frac{1}{1 + e^{-x}}, (10)
\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}. (11)
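As a cross-check of Equations (4)-(9), a NumPy sketch of a single cell update is given below; the stacked [h_{t-1}, x_t] weight layout and the shapes are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    z = np.concatenate([h_prev, x_t])    # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)         # forget gate, Eq. (4)
    i_t = sigmoid(W_i @ z + b_i)         # input gate, Eq. (5)
    c_tilde = np.tanh(W_c @ z + b_c)     # candidate memory, Eq. (6)
    c_t = i_t * c_tilde + f_t * c_prev   # cell state update, Eq. (7)
    o_t = sigmoid(W_o @ z + b_o)         # output gate, Eq. (8)
    h_t = o_t * np.tanh(c_t)             # hidden output, Eq. (9)
    return h_t, c_t
```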

3.3. The LSTM Based PEV Charging Load Forecasting Framework

The framework for PEV charging load forecasting using LSTM is shown in Figure 3. In this case, the model training and validation data are real world PEV charging load at one-minute intervals. The original data are the accumulated sampling output of the overall charging actions, i.e., from the beginning to the end of each charging process at each charging post. Therefore, the framework starts with data pre-processing, after which the input is the PEV charging load power per minute. The whole prepared data set is then divided into training, test and validation sets: the training set is used to train the model, the validation set is used to tune the hyper-parameters in order to obtain the best-performing forecasting model, and the test set is used to verify the validity of the model.
The data set should be normalized before being fed into the model, which simplifies the calculations during training and accelerates network convergence. The normalization formula is as follows:
y = \frac{x - x_{min}}{x_{max} - x_{min}}. (12)
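A small sketch of Equation (12) and its inverse (needed later to recover forecasts in physical units); the function names are illustrative.

```python
import numpy as np

def min_max_scale(x):
    """Scale a 1-D load array to [0, 1] per Equation (12)."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def inverse_scale(y, x_min, x_max):
    """Undo the scaling to recover load values in kW."""
    return y * (x_max - x_min) + x_min
```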
The next step is to set a time step for the look-back, and Figure 4 shows how the time step works. If TimeStep = 1, the model uses the information from the previous moment (x_{n-1}) as the input, and the output is the current time information (x_n). If TimeStep = 3, the model utilizes the three previous moments, x_{n-3}, x_{n-2}, x_{n-1}, as the inputs, and the output is the current time information (x_n). Proper hyper-parameters are preset for the LSTM net, and the training set data are used to train the model with the BPTT method. Finally, the test set data are used to evaluate the performance of the model.
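The look-back mechanism of Figure 4 can be expressed as a simple windowing routine; the sketch below is an assumed reconstruction, with the reshape matching the (samples, time steps, features) layout that Keras recurrent layers expect.

```python
import numpy as np

def make_windows(series, time_step):
    """Build (inputs, targets): x_{n-k}, ..., x_{n-1} -> x_n for TimeStep = k."""
    X, y = [], []
    for n in range(time_step, len(series)):
        X.append(series[n - time_step:n])   # the previous time_step values
        y.append(series[n])                 # the current value to forecast
    # Keras recurrent layers expect (samples, time steps, features); features = 1
    return np.asarray(X).reshape(-1, time_step, 1), np.asarray(y)
```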
After the LSTM block, a dense layer maps the outputs of the LSTM block to a single value. The PEV charging load forecast at a one-minute interval is obtained by inverse normalization of the output data from the LSTM block. In this paper, several other deep learning based models are also considered in the comparative studies, and they share the same forecasting framework; due to space limitations, those methods are not detailed in this section. All framework modules are implemented on a desktop workstation with a 3.0 GHz Intel i7 and 64 GB of memory, with an Nvidia GeForce GTX 1080 Ti GPU, and all code runs on the Keras library [46] with a TensorFlow [47] backend.
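For concreteness, a hedged Keras reconstruction of the network in Figure 3 follows, using the layer sizes later reported in Section 5.2 (two LSTM hidden layers of 16 nodes, a dropout layer and a single-output dense layer); this is a sketch, not the authors' exact code.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def build_lstm_model(time_step):
    """LSTM block plus a dense layer mapping to a single load value."""
    return Sequential([
        LSTM(16, return_sequences=True, input_shape=(time_step, 1)),
        LSTM(16),          # second hidden LSTM layer, 16 nodes (Section 5.2)
        Dropout(0.3),      # dropout against over-fitting (Section 5.2)
        Dense(1),          # dense layer mapping the LSTM output to one value
    ])
```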

4. Data Analysis

In dealing with the load forecasting and energy prediction tasks, data-driven approaches have achieved good results compared to other analytical models [48]. The deep learning model adopted in this paper is also a data-driven method, in which the data analysis is vital to train the model for the given task. In this section, the characteristics of the dataset will be comprehensively analyzed.

4.1. Data Statistical System

The dataset used in this paper is collected from a large scale PEV charging station in Shenzhen with photovoltaic panels on the roof and considerable energy storage. The charging station has 64 parking spaces for pure-electric buses, 12 charging spaces for cars, and 24 built-up charging piles. Our dataset is the charging load collected from the 24 charging piles. The charging station's power distribution mode and data statistics system diagram are shown in Figure 5. The charging station has two power distribution modes: solar carport generation and battery energy storage. The charging data are stored in the charging pile and transmitted to the data center through a wireless router. The data center is established in the cloud and managed by a centralized platform. The real-time data are downloaded and delivered to the data processing step, which is implemented on a local workstation.

4.2. Data Pre-Processing and Feature Analysis

The original data contain three fields: charging start time, charging end time and total charging amount. The data range from 31 March 2017 to 17 July 2018. Because many vehicles choose to charge at night, a single charging record can span two days. In order to obtain one year of data from 1 July 2017 to 30 June 2018, we choose the data from 3 June 2017 to 1 July 2018. The outlier data in the dataset are then found and processed with the method elaborated in the following paragraphs.
A fetched load value is first judged by Equation (13) and, once identified as an outlier or missing datum, is determined by Equation (14). In this paper, outlier data are mainly tackled by interpolation. The detailed threshold handling method is as follows:
when:
\max\left[\left|\frac{y_d^t - y_d^{t-1}}{y_d^t}\right|, \left|\frac{y_d^t - y_d^{t+1}}{y_d^t}\right|\right] \geq \varepsilon_a, (13)
where y_d^t is the fetched load value at time t and \varepsilon_a is the predetermined threshold. The outlier is replaced in a simple interpolation manner, with y_d^t recomputed as:
y_d^t = \frac{y_d^{t-1} + y_d^{t+1}}{2}. (14)
A similar operation is implemented for erroneous data. It should be noted that the interpolation method is only effective for minor outliers, whereas large amounts of erroneous data may require a more specific manner of handling or prevention. Finally, the pandas library is used to split the data at one-minute intervals for one year, from 1 July 2017 12:00 a.m. to 30 June 2018 11:59 a.m. At this point, the data pre-processing task is complete.
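A hedged pandas sketch of the two pre-processing steps just described, i.e., accumulating constant-power charging sessions into a one-minute series and patching isolated outliers via Equations (13) and (14); the column names and the threshold value are assumptions.

```python
import pandas as pd

def minute_load(sessions, start, end):
    """Spread each session's energy uniformly over its duration, per minute."""
    load = pd.Series(0.0, index=pd.date_range(start, end, freq="1min"))
    for _, s in sessions.iterrows():
        span = load.loc[s["start_time"]:s["end_time"]]
        if span.empty:
            continue
        power_kw = s["charge_kwh"] / (len(span) / 60.0)  # constant charging power
        load.loc[span.index] += power_kw
    return load

def patch_outliers(load, eps=0.5):
    """Replace values flagged by Eq. (13) with the neighbour mean, Eq. (14)."""
    prev, nxt = load.shift(1), load.shift(-1)
    rel = pd.concat([(load - prev).abs(),
                     (load - nxt).abs()], axis=1).max(axis=1) / load.where(load != 0)
    bad = rel >= eps                    # NaN comparisons evaluate to False
    load[bad] = (prev[bad] + nxt[bad]) / 2.0
    return load
```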
Shenzhen is located in the southern part of China and has a subtropical monsoon climate. Summer spans more than six months, so it is not reasonable to analyze the data features on a quarterly basis. Figure 6 shows the load curve for half a month. It can be seen from the figure that the PEV charging load as a whole follows a fairly fixed pattern, which is related to the fixed bus routes and the commutes of passengers. Therefore, according to the climate characteristics of Shenzhen, the distribution of charging load is compared between the dry season and the rainy season. The rainy season runs from April to September, while the dry season is from October to March. The box plot of charging load distribution per minute is shown in Figure 7. It can be seen that the peak load, median load, upper quartile and lower quartile in the rainy season are all higher than those of the dry season, and the peak load in the rainy season exceeds 40 kW. This indicates that the rainy season is pleasant for travel and power consumption is higher, so the number of charging vehicles is larger.
In addition to the climate impact, holidays and working days normally have completely different charging profiles. Figure 7 also shows the box plot comparison of load distribution between weekdays and holidays in each month. Overall, the peak load and upper quartile of holidays are higher than those of weekdays. In February 2018, the weekday load distribution has many outliers. The reason is that this month contained the Chinese Spring Festival, when people are mostly reunited at home and the charging load is lower than at other times. These points are judged as abnormal data in the box plot, but this reflects the authenticity of the data and the impact of holidays on the charging load.

5. Numerical Results for Case Study

5.1. Evaluation Metrics and Error Function

Generally, three popular metrics are used to evaluate model performance: the root mean squared error (RMSE), the mean absolute percentage error (MAPE) and the mean absolute error (MAE). The equations of these metrics are as follows:
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}, (15)
MAPE = \frac{1}{N}\sum_{i=1}^{N}\left(\left|\frac{\hat{y}_i - y_i}{y_i}\right| \times 100\right), (16)
MAE = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|, (17)
where N is the number of samples, \hat{y}_i is the forecast value, and y_i is the actual value. In this paper, the EV charging facilities use constant charging power. To keep this commonly used unit, the minute based charging load of a whole charging station is calculated by accumulating the charging time slots of all the charging piles in each minute; the accumulated charging power also denotes the total charging amount of the overall charging station in every minute. Owing to the durability of the PEV batteries, the drivers' habits and the periods of frequent vehicle use are regular. Therefore, the sum of the charging load of the piles in a certain period of time may be 0. However, in Equation (16) the actual value is the denominator and must not be 0. Therefore, in our study, the MAPE metric is not considered in the evaluation of forecasting accuracy. Instead, RMSE and MAE are chosen as the metrics, and a coefficient of determination, named R square, is adopted to estimate the goodness of fit; its formula is as follows:
R^2 = 1 - \frac{\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}{\sum_{i=1}^{N}(\bar{y} - y_i)^2}, (18)
where \bar{y} is the average of all samples. The range of R^2 is (0, 1), and the closer R^2 is to 1, the higher the forecasting accuracy.
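For reference, straightforward NumPy versions of the retained metrics; MAPE is omitted for the zero-denominator reason discussed above, and the helper names are illustrative.

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_pred - y_true) ** 2))        # Eq. (15)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_pred - y_true))                # Eq. (17)

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_pred - y_true) ** 2)                # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)       # total sum of squares
    return 1.0 - ss_res / ss_tot                           # Eq. (18)
```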

5.2. Experimental Setup

Load forecasting models can be divided into univariate and multivariate models. Some traditional load forecasting methods cannot capture the pattern of change from the load variable alone, and must add temperature and other features to express a more complete relationship map. The deep learning model is capable of capturing the load variation characteristics from a univariate series, which greatly improves the efficiency of real-time prediction. The final load prediction error confirms the validity of a univariate deep learning prediction model.
In this paper, we choose two datasets of PEV charging load covering a whole year from 1 July 2017 to 30 June 2018: a charging station case and an official charging site aggregator case. For both cases, the charging load is recorded at one-minute intervals, giving 525,600 rows of data in total. To make full use of the data, they are split into three subsets in the proportion 0.7/0.2/0.1: the training set covers 1 July 2017 to 31 January 2018 and 22 April 2018 to 21 May 2018, the test set covers 1 February 2018 to 21 April 2018, and the validation set covers 22 May 2018 to 30 June 2018. The training set is first used to obtain the pre-trained model, after which model performance is improved by adjusting the hyper-parameters on the validation set. Finally, the test set is used for result evaluation.
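The date-based split can be expressed directly with pandas datetime slicing; the boundaries below are those stated in this subsection, while the variable names (and the `load` series from pre-processing) are illustrative.

```python
import pandas as pd

# `load` is the one-minute pandas Series produced during pre-processing
train = pd.concat([load.loc["2017-07-01":"2018-01-31"],
                   load.loc["2018-04-22":"2018-05-21"]])   # ~0.7 of the year
test = load.loc["2018-02-01":"2018-04-21"]                 # ~0.2
val = load.loc["2018-05-22":"2018-06-30"]                  # ~0.1
```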
In the application of deep learning models, hyper-parameter tuning is essential to obtain the best performance. In this case study, six featured and popular models are selected for performance comparison: ANN, RNN [49], canonical LSTM, gated recurrent units (GRU) [50], stacked auto-encoders (SAEs) [51] and bi-directional long short-term memory (Bi-LSTM) [52]. These methods broadly cover the conventional neural network approaches as well as the state-of-the-art deep learning methods. For deep learning models, the adjustment of hyper-parameters relies heavily on repeated experiments: hyper-parameters are adjusted by observing the performance on the validation set. If the number of neural network layers is too large, the model will over-fit, while too few layers lead to under-fitting. Through repeated experiments, the ANN is configured with a single hidden layer for parameter tuning simplification; the RNN, GRU, Bi-LSTM and LSTM have two hidden layers, the SAEs have four hidden layers, and all hidden layers have 16 nodes. The learning rate is also an important parameter, often set between 0.00001 and 0.01; if it is too high, the network cannot converge to the global optimum, and if it is too low, convergence is excessively slow. Through extensive experiments, the learning rate is set to 0.001. The epoch count and batch size of the training process are chosen according to the data set size so that network convergence can be achieved; in this experiment, we set them to 30 and 512, respectively. To prevent over-fitting of the network, we add a dropout layer. The dropout concept, first introduced by Hinton et al. [54], refers to temporarily discarding neural network units from the network with a certain probability during training; its rate is usually set in the range 0.3–0.8, and through experimentation a dropout of 0.3 achieves the lowest prediction error here. The other comparison models also achieved their optimal results under this set of parameters and are discussed separately. The MAE is chosen as the loss function, while RMSprop [53] is adopted as the optimizer. The time step parameter represents the second dimension of the input matrix; in the experimental study, the performance of the models at three different time steps is compared. The dimension conversion in each layer is shown in Figure 8.
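Putting the reported hyper-parameters together, a plausible training call is sketched below (MAE loss, RMSprop at a learning rate of 0.001, 30 epochs, batch size 512); `build_lstm_model` and the windowed arrays refer to the illustrative sketches in Section 3.3, so this remains an assumed reconstruction rather than the authors' code.

```python
from tensorflow.keras.optimizers import RMSprop

# X_train/y_train and X_val/y_val come from make_windows() on the scaled series
model = build_lstm_model(time_step=15)                  # 1, 5 or 15 in the cases
model.compile(loss="mae", optimizer=RMSprop(learning_rate=0.001))
history = model.fit(X_train, y_train,
                    epochs=30, batch_size=512,
                    validation_data=(X_val, y_val))     # watched for over-fitting
```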

5.3. Case 1: PEV Charging Station Case Study

The first step in the experimental process is to read the load data file and convert the load data into the required matrix. The data are then fed into the model for pre-training. It is necessary to keep observing whether the loss function of the model on the training set converges; once it does, the loss on the validation set should be checked for convergence as well. If that loss does not converge, an over-fitting phenomenon has occurred, meaning the hyper-parameters of the model are not effective and should be adjusted. The procedure is repeated until the best performance on the validation set is obtained; the best hyper-parameters are given in the previous subsection. The training and validation losses at selected epochs are shown in Table 1. Multiple scenarios with three different time steps for each method are trained. As can be seen from the table, the LSTM model has the minimum loss in the last epoch for all three scenarios: 0.0068 (one time step), 0.0065 (five time steps) and 0.0064 (15 time steps) on the training set, and 0.0031 (one time step), 0.0043 (five time steps) and 0.0034 (15 time steps) on the validation set. The loss curves of the three scenarios are shown in Figure 9, where the losses of each model converge. Though the Bi-LSTM method performs comparatively well, its training time grows too long as the time steps increase, requiring 58 s per epoch at 15 time steps, whereas LSTM needs only 8 s. Overall, LSTM proves to perform the best among all the counterparts.
Then, the trained models are recalled to test the accuracy on the test set. The average errors (MAE, RMSE) and goodness of fit (R2) are calculated and shown in Table 2, where the LSTM proves more accurate than the ANN, RNN, GRU, SAEs and Bi-LSTM models for minute-level load forecasting. The MAE of the ANN model at the three time steps is 2.3582, 3.0206 and 2.9988, against 0.4782, 0.5734 and 0.5500 for the LSTM model; the MAE is thus reduced by 1.8800 at one time step, 2.4472 at five time steps, and 2.4489 at 15 time steps. As the time step increases, the MAE gap between the two models also widens. The average RMSE of LSTM over the three scenarios is 0.8988. When the time step changes from 1 to 5, the RMSE of RNN, GRU, Bi-LSTM and LSTM all decrease, while those of ANN and SAEs increase. Such results show that the sequence models work better for long sequences. When the time step changes from 5 to 15, the RMSE of the other models increases, while that of LSTM and Bi-LSTM decreases. This shows that LSTM can remember longer input sequences and process their information efficiently, providing a more robust solution for sequence data with longer time intervals and longer delays. In addition, as the time step increases, the R2 of ANN becomes smaller, from 0.8623 at one time step to 0.8168 at 15 time steps. The R2 of LSTM is very close to one at all three time steps, which indicates that LSTM is highly competitive for super-short-term EV charging load forecasting. The metric and error comparison histograms for the three scenarios are shown in Figure 10; the histograms show more intuitively that LSTM has the lowest error and the best goodness of fit of the six methods. Compared with the best results obtained from all the counterparts, the LSTM method has on average 30% lower errors on all the index criteria.
Figure 11 shows the actual data curve and forecasting curve for each model in the three scenarios for a single day and a whole week. For the single day in the left column, 1440 points are plotted on the curve. It can be seen that the charging load begins to increase sharply at 11:00 p.m. This is because the public transport fleet in Shenzhen, such as taxis and buses, consists entirely of electric vehicles; these vehicles work during the day and can be recharged at night once off duty by 11:00 p.m. In addition, the charging price is at its lowest after 11:00 p.m., which helps reduce the charging cost. In the enlarged view, the change in per-minute load can be observed as a clearer waveform. Both the ANN and SAEs models show slight fluctuations in the forecasting curve when the time step is 1, and the fluctuation disappears as the time step increases. Compared with the actual data curve, especially near the peak, the ANN and SAEs models fail to capture the step changes in the load value. The LSTM model, in contrast, closely predicts the load at each point and captures all the slight step changes.
For the whole week in the right column of Figure 11, the data from 15 February 2018 to 22 February 2018 are adopted and 10,080 points are plotted on the curve. Due to the inherent working mode of public transport, the curve shows a clear periodicity. Moreover, using one-minute intervals effectively describes the nonlinear characteristics of the data; however, it makes accurate load forecasting more difficult for a general model. From the first figure, it can be observed that, when the time step is 1, the RNN model cannot fit the peak and valley loads well, and its forecast value is larger than the actual value. The ANN model cannot capture the subtle changes of the load, in particular near the peak load. In the partially enlarged graph, the forecasting curve of the SAEs model is basically a straight line near the initial position of the curve, essentially unable to capture the subtle changes. The GRU, Bi-LSTM and LSTM models can reasonably forecast the value at each point, with LSTM showing the best forecasting accuracy among the three. The second figure illustrates the curves of the five time step models; the RNN forecast is largely improved, but in the enlarged graph the forecasting curves of SAEs and ANN are still straight lines near the initial position, and the LSTM model still obtains the best performance among all the counterparts. The third figure demonstrates the curves of the 15 time step models and shows the same results as Table 2. In this scenario, the forecasting accuracy of the RNN model is reduced: its forecasts of peak and valley load are higher than the actual values in the one time step case and lower than the true data in the 15 time step case.
As shown in Figure 12, the forecast results of a holiday (dry season) and a working day (rainy season) are compared. On the working day (rainy season), the charging load peaks at 11:00 p.m., begins to decrease at 3:00 a.m., and increases again at 8:00 a.m. The charging load is also higher on working days, while on the holiday (dry season) the daytime charging load is low. This is because people often choose to travel on holidays; most private EVs are charged on working days, while most public transport EVs can only be charged at night. Again, owing to the large amount of public transport, the charging load increases greatly at 11:00 p.m. The prediction results of each model show that the prediction accuracy of LSTM on both holidays and working days is better than the other five counterparts.
In these results, the LSTM model exhibits strongly competitive performance in minute-level PEV charging load forecasting, and its variants GRU and Bi-LSTM also show comparably good accuracy. The effectiveness of deep learning based models for minute-level super-short-term PEV charging load forecasting is thus demonstrated.

5.4. Case 2: PEV Aggregator Case Study

The load curves of the PEV charging station exhibit highly periodic characteristics, which might not be convincing enough to demonstrate the superior performance of deep learning methods. In order to further verify the validity of the proposed model, another minute-level load dataset, from a PEV aggregator for commercial building chargers in Shenzhen, is used to validate the proposed super-short-term model; the prediction result curves are shown in Figure 13. It can be observed that the charging behavior of a commercial chargers aggregator is more random and fluctuating, completely different from the charging station profile. In this case study, all six methods are compared again, and the performance comparison results are shown in Table 3, also with three time-step options. The LSTM again shows the best performance on all the time-step tests, with MAEs at the three time steps as small as 0.3096, 0.4699 and 0.2864, respectively. Though the accuracy is slightly lower than in the periodic charging station case study, the deep learning models again demonstrate competitive performance for super-short-term PEV charging load forecasting.

6. Conclusions and Future Work

In this study, deep learning approaches are for the first time utilized in super-short-term, minute-level PEV charging load forecasting. Unlike previous shallow-structure methods, the deep learning based models do not need many features for model training: they effectively capture the latent load change characteristics, such as nonlinearity and temporal correlations, using only historical load data. Comprehensive comparative studies including the ANN, RNN, LSTM, GRU, SAEs and Bi-LSTM models are implemented in three scenarios, where the unsupervised learning algorithm is applied to pre-train the models. Fine-tuning and proper hyper-parameters are thoroughly investigated to achieve the best performance. Comprehensive metrics including RMSE, MAE and R2 are used to evaluate model performance. The results show that deep learning models effectively forecast super-short-term PEV charging load, providing accurate prediction curves for dynamic power system scheduling. Among the deep learning methods, the LSTM model is superior to the others and is competent in forecasting super-short-term PEV charging load.
With the rapid mass roll-out of PEVs at multiple levels of the power grid, accurate forecasting of PEV charging load has the potential to bring significant economic and social benefits. The proposed deep learning model provides an important tool paving the way for the large-scale integration of PEVs into the power system and offers a competitive artificial intelligence showcase for low carbon energy systems.

Author Contributions

J.Z. and Z.Y. proposed the algorithms and drafted the paper. M.M., Y.G., Y.W. and S.F. prepared the data and revised the paper. Y.Z. and Y.C. ran the data tests and prepared the materials.

Funding

This research is financially supported by China NSFC (51607177), the Natural Science Foundation of Guangdong Province (2018A030310671, 2018A030313755), the China Post-Doctoral Science Foundation (2018M631005), the European Commission’s Horizon 2020 project, ’Demand response integration technologies: unlocking the demand response potential in the distribution grid’ (DRIvE, 774431), the Outstanding Young Researcher Innovation Fund of the Shenzhen Institute of Advanced Technology, the Chinese Academy of Sciences (201822), the Shenzhen Basic Research Fund under Grant No. JCYJ20160331190123578 and the Shenzhen Discipline Construction Project for Urban Computing and Data Intelligence.

Acknowledgments

The authors would like to thank Huikun Yang from Winline Co. Ltd. (Shenzhen, China) for kindly providing the charging data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raza, M.Q.; Khosravi, A. A review on artificial intelligence based load demand forecasting techniques for smart grid and buildings. Renew. Sustain. Energy Rev. 2015, 50, 1352–1372. [Google Scholar] [CrossRef]
  2. Taylor, J.W. An evaluation of methods for very short-term load forecasting using minute-by-minute British data. Int. J. Forecast. 2008, 24, 645–658. [Google Scholar] [CrossRef]
  3. Yang, Z.; Li, K.; Foley, A. Computational scheduling methods for integrating plug-in electric vehicles with power systems: A review. Renew. Sustain. Energy Rev. 2015, 51, 396–416. [Google Scholar] [CrossRef]
  4. Zhang, C.; Yang, Z.; Li, K. Modeling of electric vehicle batteries using rbf neural networks. In Proceedings of the 2014 International Conference on Computing, Management and Telecommunications (ComManTel), Da Nang, Vietnam, 27–29 April 2014; pp. 116–121. [Google Scholar]
  5. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley and Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  6. Wei, L.; Zhen-gang, Z. Based on time sequence of arima model in the application of short-term electricity load forecasting. In Proceedings of the 2009 International Conference on Research Challenges in Computer Science, Shanghai, China, 28–29 December 2009; pp. 11–14. [Google Scholar]
  7. Haida, T.; Muto, S. Regression based peak load forecasting using a transformation technique. IEEE Trans. Power Syst. 1994, 9, 1788–1794. [Google Scholar] [CrossRef]
  8. Shankar, R.; Chatterjee, K.; Chatterjee, T. A very short-term load forecasting using kalman filter for load frequency control with economic load dispatch. J. Eng. Sci. Technol. Rev. 2012, 5, 97–103. [Google Scholar] [CrossRef]
  9. Park, D.C.; El-Sharkawi, M.; Marks, R.; Atlas, L.; Damborg, M. Electric load forecasting using an artificial neural network. IEEE Trans. Power Syst. 1991, 6, 442–449. [Google Scholar] [CrossRef]
  10. Chen, B.J.; Chang, M.W. Load forecasting using support vector machines: A study on eunite competition 2001. IEEE Trans. Power Syst. 2004, 19, 1821–1830. [Google Scholar] [CrossRef]
  11. Hippert, H.S.; Pedreira, C.E.; Souza, R.C. Neural networks for short-term load forecasting: A review and evaluation. IEEE Trans. Power Syst. 2001, 16, 44–55. [Google Scholar] [CrossRef]
  12. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; NIPS: Grenada, Spain, 2012; pp. 1097–1105. [Google Scholar]
  14. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  15. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems; NIPS: Grenada, Spain, 2014; pp. 3104–3112. [Google Scholar]
  16. Yang, D.; Pang, Y.; Zhou, B.; Li, K. Fault Diagnosis for Energy Internet Using Correlation Processing-Based Convolutional Neural Networks. IEEE Trans. Syst. Man Cybern. Syst. 2019. [Google Scholar] [CrossRef]
  17. Dommel, H.W.; Tinney, W.F. Optimal power flow solutions. IEEE Trans. Power Appar. Syst. 1968, 87, 1866–1876. [Google Scholar] [CrossRef]
  18. Corpening, S.L.; Reppen, N.D.; Ringlee, R.J. Experience with weather sensitive load models for short and long-term forecasting. IEEE Trans. Power Appar. Syst. 1966, PAS-92, 1966–1972. [Google Scholar] [CrossRef]
  19. Hagan, M.T.; Behr, S.M. The time series approach to short term load forecasting. IEEE Trans. Power Syst. 1987, 2, 785–791. [Google Scholar] [CrossRef]
  20. Juberias, G.; Yunta, R.; Moreno, J.G.; Mendivil, C. A new arima model for hourly load forecasting. In Proceedings of the Transmission and Distribution Conference, New Orleans, LA, USA, 11–16 April 1999. [Google Scholar]
  21. Jie, W.U.; Wang, J.; Haiyan, L.U.; Dong, Y.; Xiaoxiao, L.U. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model. Energy Convers. Manag. 2013, 70, 1–9. [Google Scholar]
  22. Pai, P.F.; Hong, W.C. Support vector machines with simulated annealing algorithms in electricity load forecasting. Energy Convers. Manag. 2005, 46, 2669–2688. [Google Scholar] [CrossRef]
  23. Guo, Y.; Nazarian, E.; Ko, J.; Rajurkar, K. Hourly cooling load forecasting using time-indexed arx models with two-stage weighted least squares regression. Energy Convers. Manag. 2014, 80, 46–53. [Google Scholar] [CrossRef]
  24. Lahouar, A.; Slama, J.B.H. Day-ahead load forecast using random forest and expert input selection. Energy Convers. Manag. 2015, 103, 1040–1051. [Google Scholar] [CrossRef]
  25. Feng, Y.; Xiaozhong, X. A short-term load forecasting model of natural gas based on optimized genetic algorithm and improved bp neural network. Appl. Energy 2014, 134, 102–113. [Google Scholar]
  26. Kouhi, S.; Keynia, F. A new cascade NN based method to short-term load forecast in deregulated electricity market. Energy Convers. Manag. 2013, 71, 76–83. [Google Scholar] [CrossRef]
  27. Mahmoud, T.S.; Habibi, D.; Hassan, M.Y.; Bass, O. Modelling self-optimised short term load forecasting for medium voltage loads using tunning fuzzy systems and artificial neural networks. Energy Convers. Manag. 2015, 106, 1396–1408. [Google Scholar] [CrossRef]
  28. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  29. Salakhutdinov, R.; Hinton, G. Deep boltzmann machines. In Artificial Intelligence and Statistics; Addison-Wesley: New York, NY, USA, 2009; pp. 448–455. [Google Scholar]
  30. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  31. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  32. Marino, D.L.; Amarasinghe, K.; Manic, M. Building energy load forecasting using deep neural networks. In Proceedings of the IECON 2016-42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 23–26 October 2016; pp. 7046–7051. [Google Scholar]
  33. Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y. Short-term residential load forecasting based on lstm recurrent neural network. IEEE Trans. Smart Grid 2017, 10, 841–851. [Google Scholar] [CrossRef]
  34. Zheng, H.; Yuan, J.; Chen, L. Short-term load forecasting using emd-lstm neural networks with a xgboost algorithm for feature importance evaluation. Energies 2017, 10, 1168. [Google Scholar] [CrossRef]
  35. Bouktif, S.; Fiaz, A.; Ouni, A.; Serhani, M. Optimal deep learning lstm model for electric load forecasting using feature selection and genetic algorithm: Comparison with machine learning approaches. Energies 2018, 11, 1636. [Google Scholar] [CrossRef]
  36. Aziz, M.; Oda, T.; Mitani, T.; Watanabe, Y.; Kashiwagi, T. Utilization of electric vehicles and their used batteries for peak-load shifting. Energies 2015, 8, 3720–3738. [Google Scholar] [CrossRef]
  37. Gerossier, A.; Girard, R.; Kariniotakis, G. Modeling and Forecasting Electric Vehicle Consumption Profiles. Energies 2019, 12, 1341. [Google Scholar] [CrossRef]
  38. Mu, Y.; Wu, J.; Jenkins, N.; Jia, H.; Wang, C. A spatial-temporal model for grid impact analysis of plug-in electric vehicles. Appl. Energy 2014, 114, 456–465. [Google Scholar] [CrossRef]
  39. Qian, K.; Zhou, C.; Allan, M.; Yue, Y. Modeling of load demand due to ev battery charging in distribution systems. IEEE Trans. Power Syst. 2011, 26, 802–810. [Google Scholar] [CrossRef]
  40. Alizadeh, M.; Scaglione, A.; Davies, J.; Kurani, K.S. A scalable stochastic model for the electricity demand of electric and plugin hybrid vehicles. IEEE Trans. Smart Grid 2014, 5, 848–860. [Google Scholar] [CrossRef]
  41. Luo, Z.; Song, Y.; Hu, Z.; Xu, Z.; Xia, Y.; Zhan, K. Forecasting charging load of plug-in electric vehicles in china. In Proceedings of the Power and Energy Society General Meeting, San Diego, CA, USA, 24–29 July 2011. [Google Scholar]
  42. Lu, Y.; Li, Y.; Xie, D.; Wei, E.; Bao, X.; Chen, H.; Zhong, X. The Application of Improved Random Forest Algorithm on the Prediction of Electric Vehicle Charging Load. Energies 2018, 11, 3207. [Google Scholar] [CrossRef]
  43. Zhu, J.; Yang, Z.; Guo, Y.; Zhang, J.; Yang, H. Short-Term Load Forecasting for Electric Vehicle Charging Stations Based on Deep Learning Approaches. Appl. Sci. 2019, 9, 1723. [Google Scholar] [CrossRef]
  44. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef] [PubMed]
  45. Werbos, P.J. Backpropagation through time: What it does and how to do it. Proc. IEEE 1990, 78, 1550–1560. [Google Scholar] [CrossRef]
  46. Chollet, F. Keras. Available online: https://keras.io/ (accessed on 18 November 2018).
  47. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow: A System for Large-Scale Machine Learning; OSDI: Boulder, CO, USA, 2016; Volume 16, pp. 265–283. [Google Scholar]
  48. Wei, Y.; Zhang, X.; Shi, Y.; Xia, L.; Pan, S.; Wu, J.; Han, M.; Zhao, X. A review of data-driven approaches for prediction and classification of building energy consumption. Renew. Sustain. Energy Rev. 2018, 82, 1027–1047. [Google Scholar] [CrossRef]
  49. Vermaak, J.; Botha, E. Recurrent neural networks for short-term load forecasting. IEEE Trans. Power Syst. 1998, 13, 126–132. [Google Scholar] [CrossRef]
  50. Cho, K.; Van Merrienboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv Preprint 2014, arXiv:1409.1259. [Google Scholar]
  51. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems; NIPS: Grenada, Spain, 2007; pp. 153–160. [Google Scholar]
  52. Graves, A.; Schmidhuber, J. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 2005, 18, 602–610. [Google Scholar] [CrossRef]
  53. Tieleman, T.; Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA Neural Netw. Mach. Learn. 2012, 4, 26–31. [Google Scholar]
  54. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv Preprint 2012, arXiv:1207.0580. [Google Scholar]
Figure 1. Basic RNN structure.
Figure 2. The structure of LSTM.
Figure 3. The LSTM based forecasting framework.
Figure 4. Time step working mechanism.
Figure 5. Power distribution mode and data statistics system.
Figure 6. Load curve for half a month data.
Figure 7. Box plot of EV charging load.
Figure 8. Dimension conversion in each layer.
Figure 9. Loss curve comparison in three time steps.
Figure 10. RMSE, MAE and R2 comparison histograms.
Figure 11. Different time steps load forecasting curve effect comparison graph of one week.
Figure 12. Load forecasting curve effect comparison graph of holiday (dry) and weekday (rainy).
Figure 13. Load forecasting curve effect comparison graph of the PEV aggregator.
Table 1. Training and validation loss of different algorithms in different time steps.

| T-Step | Epoch | Loss | ANN | RNN | GRU | SAEs | Bi-LSTM | LSTM |
|--------|-------|------------|--------|--------|--------|--------|---------|--------|
| 1 | 1 | Training | 0.4227 | 0.1007 | 0.1067 | 0.1421 | 0.1076 | 0.0746 |
| | | Validation | 0.1525 | 0.0253 | 0.0258 | 0.0583 | 0.0089 | 0.0136 |
| | 10 | Training | 0.0540 | 0.0270 | 0.0193 | 0.0229 | 0.0142 | 0.0079 |
| | | Validation | 0.0483 | 0.0169 | 0.0098 | 0.0161 | 0.0072 | 0.0048 |
| | 20 | Training | 0.0455 | 0.0271 | 0.0190 | 0.0212 | 0.0140 | 0.0070 |
| | | Validation | 0.0289 | 0.0166 | 0.0105 | 0.0116 | 0.0101 | 0.0033 |
| | 30 | Training | 0.0399 | 0.0271 | 0.0188 | 0.0209 | 0.0133 | 0.0068 |
| | | Validation | 0.0153 | 0.0149 | 0.0096 | 0.0107 | 0.0068 | 0.0031 |
| 5 | 1 | Training | 0.3010 | 0.0810 | 0.0771 | 0.0862 | 0.0581 | 0.0356 |
| | | Validation | 0.0793 | 0.0240 | 0.0226 | 0.0293 | 0.0156 | 0.0253 |
| | 10 | Training | 0.0382 | 0.0195 | 0.0167 | 0.0222 | 0.0118 | 0.0074 |
| | | Validation | 0.0185 | 0.0100 | 0.0087 | 0.0149 | 0.0067 | 0.0077 |
| | 20 | Training | 0.0380 | 0.0189 | 0.0163 | 0.0216 | 0.0116 | 0.0067 |
| | | Validation | 0.0196 | 0.0126 | 0.0066 | 0.0121 | 0.0052 | 0.0053 |
| | 30 | Training | 0.0378 | 0.0186 | 0.0161 | 0.0214 | 0.0115 | 0.0065 |
| | | Validation | 0.0196 | 0.0087 | 0.0081 | 0.0118 | 0.0055 | 0.0043 |
| 15 | 1 | Training | 0.1726 | 0.0824 | 0.0763 | 0.1540 | 0.0519 | 0.0337 |
| | | Validation | 0.0360 | 0.0756 | 0.0373 | 0.0509 | 0.0156 | 0.0692 |
| | 10 | Training | 0.0383 | 0.0195 | 0.0167 | 0.0224 | 0.0120 | 0.0073 |
| | | Validation | 0.0204 | 0.0120 | 0.0090 | 0.0129 | 0.0101 | 0.0072 |
| | 20 | Training | 0.0379 | 0.0197 | 0.0164 | 0.0217 | 0.0116 | 0.0067 |
| | | Validation | 0.0196 | 0.0167 | 0.0086 | 0.0120 | 0.0094 | 0.0084 |
| | 30 | Training | 0.0377 | 0.0195 | 0.0162 | 0.0214 | 0.0115 | 0.0064 |
| | | Validation | 0.0194 | 0.0097 | 0.0075 | 0.0115 | 0.0049 | 0.0034 |
Table 2. Performance comparison of the MAE, RMSE, and R2 for all methods.

| T-Step | Metrics | ANN | RNN | GRU | SAEs | Bi-LSTM | LSTM |
|--------|---------|--------|--------|--------|--------|---------|--------|
| 1 | MAE | 2.3582 | 3.2397 | 1.9116 | 1.0886 | 1.3096 | 0.4782 |
| | RMSE | 4.3078 | 3.7915 | 2.4333 | 1.5689 | 1.5996 | 0.9546 |
| | R2 | 0.8623 | 0.8716 | 0.9495 | 0.9403 | 0.9844 | 0.9953 |
| 5 | MAE | 3.0206 | 2.7457 | 1.3134 | 2.1616 | 0.9045 | 0.5734 |
| | RMSE | 5.0117 | 3.5703 | 1.7376 | 3.1042 | 1.2288 | 0.8937 |
| | R2 | 0.8136 | 0.9104 | 0.9788 | 0.9323 | 0.9894 | 0.9944 |
| 15 | MAE | 2.9988 | 3.1559 | 1.3269 | 2.2516 | 0.8296 | 0.5500 |
| | RMSE | 4.9680 | 3.8630 | 1.7880 | 3.1556 | 1.0934 | 0.8452 |
| | R2 | 0.8168 | 0.8941 | 0.9756 | 0.9292 | 0.9916 | 0.9950 |
Table 3. Performance comparison of the MAE, the RMSE, and the R2 for all methods.

| T-Step | Metrics | ANN | RNN | GRU | SAEs | Bi-LSTM | LSTM |
|--------|---------|--------|--------|--------|--------|---------|--------|
| 1 | MAE | 0.9098 | 0.4751 | 0.4281 | 0.7008 | 0.5321 | 0.3096 |
| | RMSE | 1.2581 | 0.6890 | 0.6340 | 0.9551 | 0.7702 | 0.5095 |
| | R2 | 0.8603 | 0.9581 | 0.9645 | 0.9195 | 0.9476 | 0.9771 |
| 5 | MAE | 0.8912 | 0.4830 | 0.5112 | 0.5529 | 0.6241 | 0.4699 |
| | RMSE | 1.2654 | 0.6761 | 0.7218 | 0.7638 | 0.8091 | 0.6219 |
| | R2 | 0.8585 | 0.9596 | 0.9361 | 0.9484 | 0.9421 | 0.9658 |
| 15 | MAE | 0.8823 | 0.4659 | 0.6111 | 0.6576 | 0.8157 | 0.2864 |
| | RMSE | 1.2489 | 0.6506 | 0.8519 | 0.8956 | 1.0260 | 0.4418 |
| | R2 | 0.8626 | 0.9627 | 0.9284 | 0.9293 | 0.9072 | 0.9828 |
