Article

An Improved Gated Recurrent Unit Network Model for State-of-Charge Estimation of Lithium-Ion Battery

1 State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
2 Taizhou Automobile Power Transmission Research Institute, Jilin University, Taizhou 210008, China
3 Zhengzhou Yutong Bus Co., Ltd., Zhengzhou 450016, China
4 School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130022, China
* Author to whom correspondence should be addressed.
Energies 2020, 13(23), 6366; https://doi.org/10.3390/en13236366
Submission received: 23 October 2020 / Revised: 22 November 2020 / Accepted: 29 November 2020 / Published: 3 December 2020
(This article belongs to the Special Issue Battery Management for Electric Vehicles)

Abstract

An accurate state-of-charge (SOC) estimate not only provides a safe and reliable guarantee for the whole system but also extends the service life of the battery pack. Given that the chemical reactions inside a lithium-ion battery form a highly nonlinear dynamic system, obtaining an accurate SOC for the battery management system is very challenging. This paper proposes a gated recurrent unit recurrent neural network model with activation function layers (GRU-ATL) to estimate battery SOC. The model uses deep learning to establish the nonlinear relationship between the current, voltage, and temperature measurement signals and the battery SOC. Online SOC estimation is then carried out on different test sets using the trained model. The experiments in this paper showed that the GRU-ATL network model could perform online SOC estimation under different working conditions without relying on an accurate battery model. Compared with the gated recurrent unit (GRU) network model and the long short-term memory (LSTM) network model, the GRU-ATL network model gave more stable and accurate SOC predictions. When the measurement data contained noise, the experimental results showed that the SOC prediction accuracy of the GRU-ATL model was 0.1–0.4% higher than that of the GRU model and 0.3–0.7% higher than that of the LSTM model. The mean absolute error (MAE) of the SOC predicted by the GRU-ATL model remained within 0.7–1.4%, and the root mean square error (RMSE) remained within 1.2–1.9%. The model retained high prediction accuracy and robustness under noisy measurements and can therefore meet the requirements of SOC estimation under complex vehicle working conditions.

1. Introduction

With the development of science and technology, the world's dependence on energy is increasing year by year. At present, oil and coal are the main non-renewable energy sources, and their exhaust emissions inevitably cause serious environmental problems [1]. To address these problems, governments around the world strongly advocate replacing fuel vehicles with electric vehicles to reduce the emission of harmful gases. Lithium-ion batteries have made considerable progress in the past decade [2]. Compared with other battery chemistries, lithium-ion batteries have the advantages of high energy density, long life, and high power, and they have been widely used in mobile phones, computers, electric vehicles, and satellites [3]. Because the chemical reactions inside the battery form a highly nonlinear dynamic system, obtaining an accurate state-of-charge (SOC) for the battery management system is a very challenging task [4].

1.1. Literature Review

Contemporary methods obtain the battery SOC mainly from current, voltage, and temperature measurement signals; they include the ampere-hour integration method [5], the open-circuit voltage method [6], model-based methods [7], and data-driven methods [8,9]. The ampere-hour integration method obtains the SOC value by integrating the current. However, an inaccurate initial SOC value and accumulated measurement errors eventually cause the estimated SOC to deviate from the actual SOC. The open-circuit voltage method establishes the relationship between SOC and open-circuit voltage by table lookup or polynomial fitting. However, to obtain an accurate SOC value, this method requires the battery to rest for more than two hours. Therefore, neither the ampere-hour integration method nor the open-circuit voltage method is suitable for online SOC estimation under complex dynamic conditions.
The electrochemical model uses partial differential equations to relate SOC to the battery impedance or the lithium concentration in the battery and can thereby obtain accurate SOC values [10,11]. However, the electrochemical model equations are excessively complex and involve many model parameters; the accuracy of online SOC estimation is therefore low, and the model can only be applied to offline battery design and performance analysis under laboratory conditions. At present, most research focuses on methods based on the equivalent circuit model, which include the extended Kalman filter algorithm [12], the unscented Kalman filter algorithm (UKF) [13], the square root unscented Kalman filter algorithm [14], the particle filter algorithm (PF) [15], and the H-infinity filter [16]. The least squares method with a forgetting factor has been used to identify the parameters online and obtain more accurate SOC values [17,18]. Xia et al. [19] used the Kalman filter algorithm as a density function to improve the PF algorithm, which improved the prediction accuracy of the model. Although these model-based methods can obtain accurate SOC values online, their prediction performance depends heavily on the accuracy of the battery model. Such methods require designers to spend considerable time selecting model parameters and the noise covariances in the filter algorithm. In addition, they assume by default that the noise in the battery system is Gaussian.
The data-driven method does not need an accurate battery model. It uses the strong learning ability of the model to train on a large amount of data and obtain a network model that reflects the relationship between the battery measurement signals (such as current, voltage, temperature, and internal resistance) and SOC. The network model is then used to estimate the SOC on an unknown data set. Fotouhi et al. [20] proposed an adaptive neuro-fuzzy inference system combined with the coulomb counting method for SOC estimation of a lithium–sulfur battery. However, the internal resistance and open-circuit voltage were used as input feature vectors, and these are not easy to measure directly. Guo et al. [21] proposed an improved back propagation neural network (BPNN), which used the measured voltage, current, and temperature as input features to predict the actual SOC. However, the convergence speed of the BPNN algorithm was slow, it was very sensitive to the initial weights, and it was prone to gradient diffusion or to becoming trapped in local minima. To solve this problem and obtain an accurate SOC value, Wang et al. [22] used the artificial fish swarm algorithm to optimize some parameters of the BP neural network. However, this swarm intelligence optimization algorithm greatly increases the computational cost when the amount of data is large. With the development of computer hardware, some complex deep learning networks have been applied to battery SOC estimation. Deep learning can capture the complex and variable relationship between the measurement signals and SOC through multilayer nonlinear transformations in a deep neural network. Chemali et al. [23] proposed a long short-term memory (LSTM) network to estimate the battery SOC from signal features such as current, voltage, and temperature. This method can complete SOC estimation under different working conditions without using a Kalman filter algorithm. To further improve the prediction accuracy of battery SOC, Bian et al. [24] used a bidirectional long short-term memory (BILSTM) neural network to predict battery SOC. However, the disadvantage of the BILSTM network is that the decoding accuracy is seriously degraded when decoding begins without enough input sequence information. Vidal et al. [25] proposed a deep feed-forward neural network (FNN) approach for battery SOC estimation. The method still showed good SOC estimation performance when errors were added to the measurement signals, but it needed to calculate the average value of the measured signals over a certain time step, and the averaging window had a certain influence on the prediction results. Yang et al. [26] used a gated recurrent unit recurrent neural network (GRU-RNN) to predict the battery SOC from the measured current, voltage, and temperature signals. Jiao et al. [27] used a momentum gradient to optimize the network weights and improve the prediction accuracy of the GRU network model. In addition, some methods have combined neural networks with filter algorithms and also obtained accurate SOC estimates. Dong et al. [28] proposed a hybrid model of a wavelet neural network and a particle filter algorithm and estimated the state of energy of a battery under different working conditions. Tian et al. [29] proposed a method combining LSTM with an adaptive cubature Kalman filter for battery SOC estimation and realized accurate estimation of battery SOC with less data.
Although this type of hybrid method can reduce the SOC estimation error to some extent, the final SOC estimate still depends heavily on the SOC value estimated by the deep learning or neural network model. To achieve good SOC estimation results, it also requires many experiments to tune the process noise covariance and measurement noise covariance.

1.2. The Contributions

Inspired by reference [25], this paper proposes a GRU-RNN network model with activation function layers (GRU-ATL) for battery SOC estimation. The model adds a Tanh layer, a leaky rectified linear unit (ReLU) layer, and a clipped ReLU layer to the GRU-RNN network model. The relationship between current, voltage, temperature, and SOC was established on the training set, and online SOC estimation was then carried out on different test sets using the trained model. The main contributions of this paper are as follows:
(1)
Deep learning can overcome the low SOC prediction accuracy of traditional neural networks. It does not need an accurate battery equivalent circuit model, it avoids the tedious parameter tuning required by model-based methods, and it greatly reduces the time needed for the whole SOC estimation process;
(2)
Compared with an LSTM network, a GRU network structure has fewer parameters and a simpler structure. It can save a large amount of training and prediction time while maintaining the prediction accuracy of the model. Compared with the FNN, LSTM, and GRU network models, the proposed model was found to obtain more accurate and stable SOC estimation results under different operating conditions. The Tanh activation function is a saturated activation function, which can enhance the nonlinear learning ability of the neural network. The leaky ReLU and the clipped ReLU are unsaturated activation functions, which can alleviate the vanishing gradient problem encountered in neural networks. The output of the neural network was bounded within a certain range, which improved the prediction performance of the model. Adding these three activation function layers to the GRU-RNN network can improve the prediction accuracy and robustness of the model;
(3)
The SOC prediction performance of LSTM, GRU, and GRU-ATL network models was compared when the measurement signals contain Gaussian noise and non-Gaussian noise. The experimental results showed that the SOC prediction accuracy of GRU-ATL model was 0.1–0.4% higher than GRU model and 0.3–0.7% higher than LSTM model;
(4)
The battery model obtained by the deep learning network can be used for online SOC prediction under different temperature conditions. The GRU-ATL model still had high prediction accuracy and robustness when the measurement data contained noise, so it can meet the requirements of complex dynamic vehicle operating conditions.

1.3. The Organization of the Paper

The remainder of the paper is organized as follows. The second part introduces the basic principle of the GRU-ATL network model in detail. The third part describes the battery data used in the experiment and the evaluation index of the model. The fourth part discusses and analyzes in detail the SOC prediction performance of the GRU-ATL network model under various operating conditions. The fifth part is the conclusion of this paper.

2. GRU-ATL Network Model

2.1. GRU Structural Unit

A GRU network is formed by adding a gating mechanism to a simple RNN network to control the transmission of information in the neural network. A GRU network can effectively capture dependencies across large time steps in a time series and mitigates the gradient vanishing or explosion problems encountered in long-term memorization and back-propagation [30]. Compared with an LSTM network, the GRU network has fewer parameters, a simpler structure, and higher computational efficiency, which makes it suitable for building larger networks [31,32]. GRU networks have already proven effective in several application scenarios. A GRU network is mainly composed of a reset gate and an update gate. Its structural unit is shown in Figure 1.
As Figure 1 shows, the inputs of the reset and update gates of the gating unit are the hidden state Ht−1 and the input Xt, and their outputs are computed with the Sigmoid activation function. Suppose that the number of hidden units is h, the mini-batch input at time step t is Xt ∈ ℝn×d (n is the number of samples in the mini-batch and d is the number of input features), and the hidden state of the previous time step is Ht−1 ∈ ℝn×h. The reset gate Rt ∈ ℝn×h and update gate Zt ∈ ℝn×h are calculated as follows:
$$R_t = \sigma_{gru}\left(X_t W_{xr} + H_{t-1} W_{hr} + b_r\right), \quad Z_t = \sigma_{gru}\left(X_t W_{xz} + H_{t-1} W_{hz} + b_z\right) \tag{1}$$
where Wxr, Wxz ∈ ℝd×h and Whr, Whz ∈ ℝh×h are network weight parameters, br and bz are network bias parameters, and σgru is the Sigmoid activation function, which maps the values in both gates to the range [0, 1].
The candidate hidden state Ht* ∈ ℝn×h is obtained by combining the output of the reset gate with the hidden state Ht−1 of the previous time step. When an element of the reset gate is close to 0, the corresponding element of the hidden state Ht−1 is discarded; when it is close to 1, the hidden state Ht−1 is retained. The calculation is as follows:
$$H_t^{*} = \tanh\left(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h\right) \tag{2}$$
where Wxh ∈ ℝd×h and Whh ∈ ℝh×h are network weight parameters, bh is the network bias parameter, and ⊙ denotes the element-wise product. The Tanh activation function maps all elements to the range [−1, 1].
In an LSTM network, the input and forget gates are complementary and therefore somewhat redundant. By contrast, a GRU network directly uses an update gate to control the balance between inputting new information and forgetting. The hidden state Ht ∈ ℝn×h at the current time step is obtained by combining the previous hidden state Ht−1 and the current candidate hidden state Ht* through the update gate Zt of the current time step:
$$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot H_t^{*} \tag{3}$$
Equations (1)–(3) show that when Zt = 0 and Rt = 1, a GRU network will degenerate into a simple RNN network. When Zt = 0 and Rt = 0, the current state Ht is only related to the current input Xt, and is not involved with the historical state Ht−1. When Zt = 1, the current state Ht is equal to the previous hidden state Ht−1.
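To make the gate computations concrete, the following minimal numpy sketch implements one GRU time step exactly as written in Equations (1)–(3). The dimensions (a toy batch of n = 4 samples, d = 3 input features, h = 55 hidden units) and the random weights are illustrative assumptions only and are not the parameters trained in this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(X_t, H_prev, params):
    """One GRU time step following Equations (1)-(3).
    X_t: (n, d) mini-batch input, H_prev: (n, h) previous hidden state."""
    R_t = sigmoid(X_t @ params["W_xr"] + H_prev @ params["W_hr"] + params["b_r"])      # reset gate, Eq. (1)
    Z_t = sigmoid(X_t @ params["W_xz"] + H_prev @ params["W_hz"] + params["b_z"])      # update gate, Eq. (1)
    H_cand = np.tanh(X_t @ params["W_xh"] + (R_t * H_prev) @ params["W_hh"] + params["b_h"])  # candidate state, Eq. (2)
    return Z_t * H_prev + (1.0 - Z_t) * H_cand                                         # new hidden state, Eq. (3)

# Toy example: d = 3 inputs (current, voltage, temperature), h = 55 hidden units
n, d, h = 4, 3, 55
rng = np.random.default_rng(0)
shapes = {"W_xr": (d, h), "W_hr": (h, h), "b_r": (h,),
          "W_xz": (d, h), "W_hz": (h, h), "b_z": (h,),
          "W_xh": (d, h), "W_hh": (h, h), "b_h": (h,)}
params = {name: 0.1 * rng.standard_normal(shape) for name, shape in shapes.items()}
H = gru_step(rng.standard_normal((n, d)), np.zeros((n, h)), params)   # H has shape (n, h)
```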

2.2. SOC Estimation Based on GRU-ATL Network

As shown in Figure 2, a GRU-RNN [8,26] network model consists of a sequence input layer, a GRU network layer, a fully connected (FC) layer, a fully connected output layer, and a regression output layer. The input of the GRU network model is composed of the current, voltage, and temperature measurement signals, and the output is the battery SOC at the current time, namely xt = [It, Vt, Tt] and yt = SOCt. The nonlinear relationship between current, voltage, temperature, and SOC is established on the training set, and the prediction accuracy of the model is verified on the test set. Inspired by reference [25], this paper proposes a GRU-RNN network model with activation function layers (GRU-ATL) for battery SOC estimation. As shown in Figure 2, this model adds a Tanh layer, a leaky ReLU layer, and a clipped ReLU layer to the GRU-RNN network model.
The leaky ReLU layer performs a threshold operation: any input value less than zero is multiplied by a fixed scale factor [33]. Its expression is as follows:
$$f(x) = \begin{cases} x, & x \ge 0 \\ scale \cdot x, & x < 0 \end{cases} \tag{4}$$
where scale is the scale factor applied when the input x is negative.
The clipped ReLU layer also performs a threshold operation: any input value less than zero is set to zero, and any value above the clipping ceiling is set to the clipping ceiling [34]. The expression is as follows:
$$f(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x < T_C \\ T_C, & x \ge T_C \end{cases} \tag{5}$$
where TC is the threshold of the clipped ReLU layer.
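As a concrete illustration of the layer stack described above, the following PyTorch sketch assembles a GRU layer followed by the Tanh, fully connected, leaky ReLU, and clipped ReLU layers and a regression output. The exact layer ordering, the leaky ReLU scale factor, and the clipping threshold T_C are specified in Figure 2 and are treated here as assumptions; this is not the authors' MATLAB implementation.

```python
import torch
import torch.nn as nn

class GRUATL(nn.Module):
    """Sketch of a GRU network with added activation-function layers (GRU-ATL).
    Layer ordering and hyperparameters (leaky-ReLU scale, clipping ceiling T_C)
    are illustrative assumptions."""
    def __init__(self, n_inputs=3, n_gru=55, n_fc=160,
                 leaky_scale=0.01, clip_ceiling=1.0):
        super().__init__()
        self.gru = nn.GRU(n_inputs, n_gru, batch_first=True)  # GRU network layer
        self.tanh = nn.Tanh()                                  # saturated activation layer
        self.fc = nn.Linear(n_gru, n_fc)                       # fully connected layer
        self.leaky = nn.LeakyReLU(leaky_scale)                 # leaky ReLU layer, Eq. (4)
        self.clip_ceiling = clip_ceiling                       # clipped ReLU threshold T_C, Eq. (5)
        self.out = nn.Linear(n_fc, 1)                          # regression output (SOC)

    def forward(self, x):                                      # x: (batch, time, 3) = [I, V, T]
        h, _ = self.gru(x)
        h = self.leaky(self.fc(self.tanh(h)))
        h = torch.clamp(h, min=0.0, max=self.clip_ceiling)     # clipped ReLU layer, Eq. (5)
        return self.out(h).squeeze(-1)                         # SOC estimate at every time step

model = GRUATL()
soc_hat = model(torch.randn(8, 100, 3))   # toy batch: 8 sequences of 100 time steps
```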

2.3. Selection of Other Parameters in Network

In deep learning networks, common optimizers include the adaptive moment estimation (Adam) optimizer, the stochastic gradient descent (SGD) optimizer, the adaptive gradient (Adagrad) optimizer, the root mean square propagation (RMSProp) optimizer, and the batch gradient descent (BGD) optimizer. The Adam optimizer was proposed by Kingma and Ba and combines the advantages of momentum and RMSProp [35]. The principle and implementation of the algorithm are simple, and its memory requirement is low. The parameter updates are not affected by gradient rescaling, the hyperparameters have good interpretability, and the learning rate is adapted automatically with little or no fine-tuning. The Adam optimizer has achieved good results in practical applications and is suitable for problems with sparse or noisy gradients.
The update equation for the parameter θadam is as follows:
$$\theta_{adam,t} = \theta_{adam,t-1} - \alpha_{adam}\,\hat{m}_t / \left(\sqrt{\hat{v}_t} + \varepsilon\right) \tag{6}$$
where m̂t is the bias-corrected first moment estimate, v̂t is the bias-corrected second moment estimate, αadam is the learning rate, and ε is a constant.
The above formula shows that the Adam algorithm adapts the update from two aspects, the gradient mean and the squared gradient. To prevent division errors, the constant ε is set to 1 × 10−8. The convergence of the network becomes very slow when αadam is too small, whereas the loss function oscillates and may even diverge from the minimum when αadam is too large. Therefore, the initial learning rate αadam,0 is set to 0.01, and αadam is decayed by a factor of 0.1 every 250 iterations.
In a deep learning network, when many parameters must be set and the network model is too complex, overfitting occurs; that is, the model performs well on the training set but poorly on the actual test set and does not generalize well. Therefore, an L2 regularization algorithm is selected to avoid overfitting. L2 regularization adds the sum of squares of the weight parameters to the original loss function [36]. The expression is as follows:
$$E_{L2}(\theta_{adam}) = E(\theta_{adam}) + \frac{C_{L2}}{2} \lVert W \rVert^2 \tag{7}$$
where E(θadam) is the original loss function, EL2(θadam) is the regularized loss function, CL2 is the regularization coefficient (CL2 = 0.001), and W is the weight vector.
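The following numpy sketch ties Equations (6) and (7) together with the settings stated above (ε = 1 × 10−8, initial learning rate 0.01 decayed by a factor of 0.1 every 250 iterations, CL2 = 0.001). The exponential decay rates β1 and β2 and the toy quadratic loss are assumptions, since the paper does not list them.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-8, c_l2=0.001):
    """One Adam update (Eq. 6) with the L2 penalty of Eq. (7) folded into the gradient.
    beta1/beta2 are the usual Adam defaults (an assumption here)."""
    grad = grad + c_l2 * theta                    # d/dtheta of (C_L2 / 2) * ||W||^2
    m = beta1 * m + (1 - beta1) * grad            # first raw moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second raw moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                  # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(10)
m = np.zeros_like(theta)
v = np.zeros_like(theta)
lr0 = 0.01
for t in range(1, 1001):
    lr = lr0 * 0.1 ** ((t - 1) // 250)            # decay by a factor of 0.1 every 250 iterations
    grad = 2.0 * (theta - 1.0)                    # gradient of a toy quadratic loss
    theta, m, v = adam_step(theta, grad, m, v, t, lr)
```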

3. Battery Data Description and Processing Process

3.1. Battery Data Description

To verify the feasibility of the deep learning network model proposed in this paper, we chose the open-source battery test data set of the McMaster Institute for Automotive Research and Technology at McMaster University as the research target [25]. The data set contains charge and discharge tests performed on LG HG2 batteries under multiple working conditions, effectively simulating the real driving environment of electric vehicles. Each battery test record contains measurement signals such as time, current, voltage, temperature, and capacity. The parameters of the LG HG2 battery are listed in Table 1.
The four standard discharge test conditions are the Urban Dynamometer Driving Schedule (UDDS), the Highway Fuel Economy Driving Schedule (HWFET), the LA92 Dynamometer Driving Schedule (LA92), and the Supplemental Federal Test Procedure Driving Schedule (US06). The charging test condition is a fast constant current and constant voltage (CC-CV) charging mode: a constant current of 3 A is used for charging, and when the voltage reaches the 4.2 V cut-off, the battery is charged in constant voltage mode until the charging current falls to the 0.05 A cut-off. In addition, the data set includes eight mixed dynamic test conditions, each of which is a random combination of the four standard dynamic conditions. The test data at −10, 0, 10, 25, and 40 °C were used to verify the feasibility of the model. Figure 3 shows the test data of the LA92 dynamic test condition. The figure shows a highly nonlinear relationship between the battery capacity and the three measurement signals (current, voltage, and temperature) during discharge, and the discharge capacity at 25 °C is significantly greater than that at −10 and 0 °C. This indicates that as the temperature decreases, the chemical reactions inside the battery slow down and the actual capacity gradually decreases. It is worth noting that some aging appears in the battery after several charge–discharge cycle tests. In the original data set, the sampling interval was 0.1 s for the dynamic conditions and 60 s for the charging condition. To reduce the computational cost, the dynamic test data recorded at 1 s intervals were selected as the research object of this article.

3.2. Data Processing Process and Evaluation Indicators

As shown in Figure 3, the measured signals (current, voltage, and temperature) in the data set fluctuate greatly, which affects the results of data analysis. Normalizing the data not only speeds up network learning but also eliminates the adverse effects of singular sample values. In this paper, the mapminmax function was used to normalize the three measurement signals to [0, 1]. The expression is as follows:
$$x_{normal} = \frac{x_{signal} - x_{min}}{x_{max} - x_{min}} \tag{8}$$
where xmin and xmax are the minimum and maximum values in real measurement data; xsignal is the real measurement data; and xnormal is the normalized data.
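For reference, Equation (8) corresponds to the following scaling, here written as a small numpy stand-in for MATLAB's mapminmax; the array values are toy voltage samples used only for illustration.

```python
import numpy as np

def normalize(x_signal):
    """Min-max scaling to [0, 1] as in Equation (8)."""
    x_min, x_max = x_signal.min(), x_signal.max()
    return (x_signal - x_min) / (x_max - x_min)

voltage = np.array([3.2, 3.6, 4.0, 4.2])   # toy voltage samples (V)
scaled = normalize(voltage)                # smallest value maps to 0, largest to 1
```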
The output of the GRU-ATL network model was the battery SOC, which was mainly obtained through the time integration of the current. The calculation process is as follows:
$$SOC(t) = SOC(t_0) - \int_{t_0}^{t} \frac{\eta I_t}{3600\, C_N} \, d\tau \tag{9}$$
where SOC(t0) is the initial SOC value; SOC(t) is the SOC value at the current time t; η is the coulombic efficiency (η = 1); It is the current flowing through the battery at time t; and CN is the actual capacity value for each working condition.
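A short sketch of the coulomb-counting reference in Equation (9) follows; the discharge-positive sign convention and the constant 3 A toy profile are assumptions used only to illustrate the computation.

```python
import numpy as np

def soc_reference(current_a, dt_s, soc0, capacity_ah, eta=1.0):
    """Coulomb-counting SOC reference as in Equation (9).
    current_a: discharge-positive current samples (A); dt_s: sample period (s)."""
    discharged_ah = np.cumsum(eta * current_a * dt_s) / 3600.0
    return soc0 - discharged_ah / capacity_ah

current = np.full(3600, 3.0)                  # 1 h of 3 A discharge sampled at 1 s
soc = soc_reference(current, dt_s=1.0, soc0=1.0, capacity_ah=3.0)
print(soc[-1])                                # -> 0.0 (nominally fully discharged)
```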
The data set was divided into one training set and five test sets. The GRU-ATL network model was trained on the training set, and the trained model was then used for simulation verification on the test sets. It is worth noting that some conditions did not complete the charge–discharge experiment; for example, at 25 °C the discharge time of HWFET was only 600 s, so this part of the data was not added to the training or test sets. All simulation experiments were carried out in the CPU environment of MATLAB on a computer with an Intel Core i5-7400 CPU @ 3.0 GHz and 8 GB of memory. Each experiment was carried out three times, and the average of the three runs was taken as the final result. Finally, the mean absolute error (MAE) and root mean square error (RMSE) were used to evaluate the prediction performance of the GRU-ATL network model:
$$MAE_{soc} = \frac{1}{n} \sum_{t=1}^{n} \left| SOC_{Actual} - SOC_{Estimated} \right| \tag{10}$$
$$RMSE_{soc} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( SOC_{Actual} - SOC_{Estimated} \right)^2} \tag{11}$$
where MAEsoc is the mean absolute error of the SOC; RMSEsoc is the root mean square error of the SOC; SOCActual is the actual SOC value; SOCEstimated is the predicted SOC value; and n is the sample size.
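Equations (10) and (11) reduce to the following two helpers; the toy arrays are illustrative, and the SOC values may be fractions or percentages as long as both inputs use the same unit.

```python
import numpy as np

def mae(soc_actual, soc_estimated):
    """Mean absolute error, Equation (10)."""
    return np.mean(np.abs(soc_actual - soc_estimated))

def rmse(soc_actual, soc_estimated):
    """Root mean square error, Equation (11)."""
    return np.sqrt(np.mean((soc_actual - soc_estimated) ** 2))

# toy check: a constant 1% offset gives MAE = RMSE = approximately 0.01
true_soc = np.linspace(1.0, 0.0, 100)
print(mae(true_soc, true_soc - 0.01), rmse(true_soc, true_soc - 0.01))
```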

4. Experimental Results and Discussion

4.1. SOC Estimation Results of Four Network Models

In this experiment, three network models (FNN [25], LSTM, and GRU) were introduced for comparison with the GRU-ATL network model. The training set was composed of the eight mixed dynamic test conditions, the CC-CV charging condition, and the four standard dynamic test conditions. Each test set was composed of the CC-CV charging condition and the four standard dynamic test conditions; there was one training set and five test sets. The detailed division of the training and test sets is shown in Table 2. A large number of simulation experiments showed that the number of iterations had a certain impact on the SOC prediction accuracy of the four models, but this is beyond the scope of this article. Following reference [25], the maximum number of iterations of the FNN network model was 5500. To save computation while ensuring accuracy, the maximum number of iterations of the other three network models was set to 2000. The other parameters of the four network models were identical, and the number of neurons in each layer was 55.
Figure 4 and Figure 5 show the SOC prediction results of the four network models at 0 and 25 °C, respectively. When the current, voltage, and temperature features at time t were used to predict the SOC value at time t, the FNN network model gave the worst prediction results, whereas the other three network models could effectively track the real SOC curve. Although the FNN network model needed the shortest time for the whole prediction process, the information in its structure propagates only in the forward direction, with no recurrent feedback, so the resulting battery model could not reflect the relationship between the measurement signals and SOC very well and its SOC error was very large. The SOC error curves of each working condition show that the SOC error of the LSTM network model was larger than those of the GRU and GRU-ATL network models. The reason is that the LSTM structure has four times as many parameters as a simple RNN [37]; with too many parameters, overfitting occurs. The GRU has only two gates, and its number of parameters is three times that of a simple RNN [38]; it therefore suffers less from overfitting, improves the prediction accuracy of the model, and saves considerable training time. The results in Table 3 show that the SOC estimated by the GRU-ATL network model was more accurate than that of the GRU network structure under most dynamic conditions. Under the UDDS condition at 0 °C, the MAE and RMSE of the GRU network model were 1.2% and 2.3%, respectively. Under the UDDS and LA92 dynamic conditions, the GRU-ATL network model achieved good estimation results at all temperatures, with an MAE below 0.9% and an RMSE below 1.2%. Owing to the large discharge current in the US06 condition, the chemical reaction inside the battery was more intense, resulting in larger SOC errors than in the previous two conditions; even so, the MAE remained within 1.5% and the RMSE within 1.9%. The SOC predictions of the GRU model were more accurate than those of the GRU-ATL network model in the LA92 condition at 10 °C and the US06 condition at 0 °C, mainly because the difference in SOC prediction accuracy between the GRU and GRU-ATL models was not very large, so in some periods the GRU-ATL predictions were inevitably slightly less accurate than those of the GRU model. Over the whole test set at each temperature, however, the SOC predictions of GRU-ATL were more accurate than those of GRU. Under the CC-CV condition, the SOC predictions of the GRU-ATL model were more accurate than those of the other three models, but the SOC errors of all four network models were relatively large, mainly because the 60 s sampling interval of the CC-CV data did not provide enough data for network training. From the above results, the GRU-ATL network model provided the most accurate and most stable SOC estimation among the four network models, which shows that the Tanh, leaky ReLU, and clipped ReLU activation functions help to improve the prediction accuracy and stability of the GRU network model.

4.2. SOC Estimation Results under Unknown Conditions

Vehicles encounter a variety of complex working conditions while driving. Therefore, to verify the prediction performance of the GRU-ATL network under unknown conditions, the training set used in this part was composed of the eight mixed dynamic test conditions and the CC-CV charging condition, while the test sets consisted of the CC-CV charging condition and the four standard dynamic test conditions; in other words, the test conditions were not included in the training set. In addition, this part mainly discusses the influence of the number of neurons in the two network layers on the prediction results. The number of neurons in the two layers has a certain influence on the final predictions, but no systematic and effective method for selecting it has been established. Swarm intelligence or evolutionary algorithms can find the best number of neurons when the amount of data is small, so that the model achieves its best prediction performance, but they require a very large amount of computation when the amount of data is large and are therefore not suitable for this paper. We therefore used three cases as examples to analyze the influence of the number of neurons on the accuracy of the model. In addition to the two evaluation indexes introduced in Section 3.2, the training time was also used as an evaluation index. In case 1, the number of neurons in the GRU layer (NGRU) was fixed at 55 and the number of neurons in the FC layer (NFC) was increased according to the values in Table 4. In case 2, NFC was fixed at 55 and NGRU was increased according to the values in Table 4. In case 3, both NGRU and NFC were increased according to the values in Table 4.
Under the 25 °C condition, the indexes for the three cases are shown in Figure 6, and the statistical results are listed in Table 4. Figure 6a,b shows that as NGRU and NFC increased, the SOC estimation accuracy improved to a certain degree; once the number of neurons exceeded a certain value, the estimation accuracy tended to decline. However, no clear mathematical relationship was found between the number of neurons and the accuracy of the model. Table 4 shows that in cases 2 and 3, as NGRU in the GRU layer increased, SOC estimation in some configurations did not achieve ideal results. The reason is that too many neurons caused overfitting, eventually leading to the failure of the GRU-ATL network model in SOC prediction. As can be seen from Figure 6c, in cases 2 and 3 the training time increased sharply as NGRU in the GRU layer increased, whereas it did not increase significantly with NFC in the FC layer.
A large number of simulation experiments showed that when NGRU in the GRU layer was 55 and NFC in the FC layer was 160, the proposed network model achieved higher SOC estimation accuracy and saved considerable training time. Figure 7 shows the SOC estimation results at 25 °C. As seen in the enlarged views of the three dynamic conditions in Figure 7c–e, the predicted SOC curve effectively tracked the real SOC curve. Table 5 shows the statistical results of the SOC error at the five temperatures. At 0, 10, 25, and 40 °C, the MAE of the SOC error was within 1.1% and the RMSE was within 1.5%. At −10 °C, the error indexes were relatively high: the MAE was 1.271% and the RMSE was 2.005%. The main reason is that the amount of test data at −10 °C was small, and the chemical behavior of the battery at low temperature also affected the quality of the data set. From the above experimental results, the GRU-ATL network model can obtain accurate SOC estimation results under unknown conditions.

4.3. SOC Estimation Results with Noise

During the operation of a battery management system, the complex and changeable external environment and the limited accuracy of the signal acquisition sensors mean that the collected current, voltage, and temperature signals usually contain certain measurement errors. Gaussian and non-Gaussian noises were therefore added to the measured signals to test the prediction performance of the GRU-ATL network model.
As shown in Figure 8a, Gaussian distribution noise (Noise 1) is white Gaussian noise with a mean value of 0 and a standard deviation of 0.02. Its calculation formula is as follows:
$$Noise_1 = \alpha_{Noise1} \cdot \mathrm{randn}(1, n) \tag{12}$$
where αNoise1 is the coefficient of standard deviation, αNoise1 = 0.02, and n is the length of prediction data.
As shown in Figure 8b, Noise 2 was a uniformly distributed random noise between [0, 0.01]. Noise 3 is non-Gaussian noise, which is the sum of Noise 1 and Noise 2. The expression is as follows:
$$Noise_3 = Noise_1 + \alpha_{Noise2} \cdot \mathrm{rand}(1, n) \tag{13}$$
where αNoise2 is the coefficient of uniformly distributed noise, αNoise2 = 0.01.
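The noise terms in Equations (12) and (13) can be generated as in the numpy sketch below. The measurement arrays are placeholders invented for illustration, and the 25-fold temperature scaling follows the description in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # length of the prediction data

noise1 = 0.02 * rng.standard_normal(n)       # Gaussian noise, Eq. (12)
noise2 = 0.01 * rng.random(n)                # uniform noise on [0, 0.01]
noise3 = noise1 + noise2                     # non-Gaussian noise, Eq. (13)

# Placeholder measurement signals used only to show the corruption step;
# the temperature noise is scaled 25-fold, as described in the following paragraph.
current = rng.uniform(-20.0, 3.0, n)         # toy current samples (A)
voltage = rng.uniform(2.5, 4.2, n)           # toy voltage samples (V)
temperature = np.full(n, 25.0)               # toy temperature samples (degrees C)

current_noisy = current + noise1             # case 4 style corruption with Gaussian noise
voltage_noisy = voltage + noise1
temperature_noisy = temperature + 25.0 * noise1
```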
To further study the influence of measurement noise on the prediction accuracy of the model, the current with Noise 1 and Noise 3 was labeled case 1, the voltage with Noise 1 and Noise 3 was labeled case 2, the temperature with Noise 1 and Noise 3 was labeled case 3, and all three measurement signals with Noise 1 and Noise 3 were labeled case 4. The Gaussian and non-Gaussian noises applied to the temperature are 25 times those of Noise 1 and Noise 3, respectively. For example, when a Noise 1 sample was 0.05, the Gaussian noise on the current was 0.05 A, on the voltage 0.05 V, and on the temperature 1.25 °C. The training set used in the experiment was the same as in Section 4.2, and the test sets were the data sets with the different noises added to the measured signals at each temperature; that is, there was one training set and 40 test sets. In the GRU-ATL network model, NGRU in the GRU layer was set to 55, NFC in the FC layer was set to 160, and the number of iterations was set to 2000. The other parameters were consistent with those in Section 2.3. The LSTM and GRU network models were also introduced for comparison.
Figure 9 shows the SOC prediction results of the LSTM model with Gaussian noise at 10 °C, Figure 10 those of the GRU-ATL model with Gaussian noise at 10 °C, Figure 11 those of a simple GRU model with non-Gaussian noise at 40 °C, and Figure 12 those of the GRU-ATL model with non-Gaussian noise at 40 °C. As can be seen from Figure 9, Figure 10, Figure 11 and Figure 12, the SOC prediction curves of the three network models tracked the actual SOC curve well in all four cases. Table 6 shows the statistical results of the SOC error for each condition and model. According to these results, the SOC prediction accuracy of the GRU-ATL model was 0.1–0.4% higher than that of the GRU model and 0.3–0.7% higher than that of the LSTM model. The MAE of the SOC predicted by the GRU-ATL model remained within 0.7–1.4%, and the RMSE remained within 1.2–1.9%. In some cases at −10 °C, the MAE and RMSE of the SOC predicted by the GRU model reached 1.7% and 2.5%, respectively, and those of the LSTM model reached 1.8% and 2.7%. This shows that the SOC predictions of the GRU-ATL model were more stable and accurate at low temperature. Compared with the LSTM and GRU network models, the GRU-ATL network model had better prediction accuracy and stronger robustness under unknown conditions with noise.

5. Conclusions

This paper proposed a gated recurrent unit recurrent neural network (GRU-RNN) model with activation function layers (GRU-ATL) for battery SOC estimation. The model adds a Tanh layer, a leaky rectified linear unit (ReLU) layer, and a clipped ReLU layer to the GRU-RNN network model. The relationship between current, voltage, temperature, and SOC was established on the training set, and online SOC estimation was then carried out on different test sets using the trained model. The main work of this paper is as follows:
(1)
Compared with an LSTM network, a GRU network structure has fewer parameters and a simpler structure, so it can save a large amount of training and prediction time while maintaining the prediction accuracy of the model. Compared with the FNN, LSTM, and GRU network models, the proposed model obtained more accurate and stable SOC estimation results under different operating conditions. Adding the three activation function layers to the GRU-RNN network improved the prediction accuracy and robustness of the model;
(2)
The prediction accuracy of the model could be improved by appropriately increasing the number of neurons in the GRU layer and FC layer, but an excessive number of neurons in the two layers caused overfitting, which degraded the SOC estimation accuracy. To save computation, the number of neurons in the GRU layer was kept two to three times smaller than that in the FC layer. A large number of experiments showed that the SOC was most accurate when the number of neurons in the GRU layer was 55 and that of the FC layer was 160; the MAE was less than 1.3% and the RMSE was no more than about 2%;
(3)
The SOC prediction performance of LSTM, GRU, and GRU-ATL network models was compared when the measurement signals contained Gaussian noise and non-Gaussian noise. The experimental results showed that the SOC prediction accuracy of GRU-ATL model was 0.1–0.4% higher than the GRU model and 0.3–0.7% higher than the LSTM model. The MAE of SOC predicted by the GRU-ATL model was stable in the range of 0.7–1.4%, and the RMSE was stable between 1.2–1.9%. The SOC prediction results of the GRU-ATL network model were still more accurate and stable at a low temperature;
(4)
The experiments in this paper showed that the GRU-ATL network model could perform online SOC estimation under different working conditions without relying on an accurate battery model. When the measurement data contained noise, the model still had high prediction accuracy and robustness, so it can meet the requirements of SOC estimation under complex vehicle working conditions.
Future work will mainly involve verifying the SOC estimation of the proposed model on other battery data sets. The SOC values used in the experiments were calculated from the actual capacity, which is difficult to obtain because of battery aging. Therefore, using deep learning networks to predict the actual capacity is also an important future task.

Author Contributions

Conceptualization, W.D. and F.X.; methodology, W.D. and C.S.; software, S.P. and S.S.; validation, S.P., S.S. and Y.S.; formal analysis, S.P. and Y.S.; writing—original draft preparation, W.D. and F.X.; writing—review and editing, W.D. and C.S.; project administration, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Tackling Item in Science and Technology Department of Jilin Province, China, grant number 20150204017GX, and the Jilin Provincial Natural Science Foundation, China, grant number 20150101037JC.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berecibar, M.; Gandiaga, I.; Villarreal, I.; Omar, N.; Van Mierlo, J.; Van den Bossche, P. Critical review of state of health estimation methods of Li-ion batteries for real applications. Renew. Sustain. Energy Rev. 2016, 56, 572–587. [Google Scholar] [CrossRef]
  2. Lipu, M.S.H.; Hannan, M.A.; Hussain, A.; Hoque, M.M.; Ker, P.J.; Saad, M.H.M.; Ayob, A. A review of state of health and remaining useful life estimation methods for lithium-ion battery in electric vehicles: Challenges and recommendations. J. Clean. Prod. 2018, 205, 115–133. [Google Scholar] [CrossRef]
  3. Zhang, C.; He, Y.; Yuan, L.; Xiang, S. Capacity prognostics of lithium-ion batteries using EMD denoising and multiple kernel RVM. IEEE Access 2017, 5, 12061–12070. [Google Scholar] [CrossRef]
  4. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S. Overview of model-based online state-of-charge estimation using Kalman filter family for lithium-ion batteries. Renew. Sustain. Energy Rev. 2019, 113, 109233. [Google Scholar] [CrossRef]
  5. Ng, K.S.; Moo, C.; Chen, Y.; Hsieh, Y. Enhanced coulomb counting method for estimating state-of-charge and state-of-health of lithium-ion batteries. Appl. Energy 2009, 86, 1506–1511. [Google Scholar] [CrossRef]
  6. Petzl, M.; Danzer, M.A. Advancements in OCV Measurement and Analysis for Lithium-Ion Batteries. IEEE Trans. Energy Conver. 2013, 28, 675–681. [Google Scholar] [CrossRef]
  7. Wang, Q.; Wang, J.; Zhao, P.; Kang, J.; Yan, F.; Du, C. Correlation between the model accuracy and model-based SOC estimation. Electrochim. Acta. 2017, 228, 146–159. [Google Scholar] [CrossRef]
  8. Li, C.; Xiao, F.; Fan, Y. An approach to state of charge estimation of lithium-ion batteries based on recurrent neural networks with gated recurrent unit. Energies 2019, 12, 1592. [Google Scholar] [CrossRef] [Green Version]
  9. Yang, F.; Song, X.; Xu, F.; Tsui, K. State-of-charge estimation of lithium-ion batteries via long short-term memory network. IEEE Access 2019, 7, 53792–53799. [Google Scholar] [CrossRef]
  10. Kim, M.; Chun, H.; Kim, J.; Kim, K.; Yu, J.; Kim, T.; Han, S. Data-efficient parameter identification of electrochemical lithium-ion battery model using deep Bayesian harmony search. Appl. Energy 2019, 254, 113644. [Google Scholar] [CrossRef]
  11. Bartlett, A.; Marcicki, J.; Onori, S.; Rizzoni, G.; Yang, X.G.; Miller, T. Electrochemical model-based state of charge and capacity estimation for a composite electrode lithium-ion battery. IEEE Trans. Contr. Syst. Technol. 2015, 24, 384–399. [Google Scholar] [CrossRef]
  12. Xiong, R.; Sun, F.; Chen, Z.; He, H. A data-driven multi-scale extended Kalman filtering based parameter and state estimation approach of lithium-ion polymer battery in electric vehicles. Appl. Energy 2014, 113, 463–476. [Google Scholar] [CrossRef]
  13. Partovibakhsh, M.; Liu, G. An adaptive unscented kalman filtering approach for online estimation of model parameters and state-of-charge of lithium-ion batteries for autonomous mobile robots. IEEE Trans. Contr. Syst. Technol. 2015, 23, 357–363. [Google Scholar] [CrossRef]
  14. Wang, S.; Fernandez, C.; Cao, W.; Zou, C.; Yu, C.; Li, X. An adaptive working state iterative calculation method of the power battery by using the improved Kalman filtering algorithm and considering the relaxation effect. J. Power Sources 2019, 428, 67–75. [Google Scholar] [CrossRef]
  15. Ye, M.; Guo, H.; Cao, B. A model-based adaptive state of charge estimator for a lithium-ion battery using an improved adaptive particle filter. Appl. Energy 2017, 190, 740–748. [Google Scholar] [CrossRef]
  16. Chen, C.; Xiong, R.; Shen, W. A lithium-ion battery-in-the-loop approach to test and validate multiscale dual h infinity filters for state-of-charge and capacity estimation. IEEE Trans. Power Electron. 2018, 33, 332–342. [Google Scholar] [CrossRef]
  17. Dai, H.; Xu, T.; Zhu, L.; Wei, X.; Sun, Z. Adaptive model parameter identification for large capacity Li-ion batteries on separated time scales. Appl. Energy 2016, 184, 119–131. [Google Scholar] [CrossRef]
  18. Yang, H.; Sun, X.; An, Y.; Zhang, X.; Wei, T.; Ma, Y. Online parameters identification and state of charge estimation for lithium-ion capacitor based on improved Cubature Kalman filter. J. Energy Storage 2019, 24, 100810. [Google Scholar] [CrossRef]
  19. Xia, B.; Sun, Z.; Zhang, R.; Cui, D.; Lao, Z.; Wang, W.; Sun, W.; Lai, Y.; Wang, M. A comparative study of three improved algorithms based on particle filter algorithms in soc estimation of lithium ion batteries. Energies 2017, 10, 1149. [Google Scholar] [CrossRef]
  20. Fotouhi, A.; Auger, D.J.; Propp, K.; Longo, S. Lithium–sulfur battery state-of-charge observability analysis and estimation. IEEE Trans. Power Electr. 2018, 33, 5847–5859. [Google Scholar] [CrossRef]
  21. Guo, Y.; Zhao, Z.; Huang, L. Soc estimation of lithium battery based on improved bp neural network. Energy Procedia 2017, 105, 4153–4158. [Google Scholar] [CrossRef]
  22. Wang, Q.; Wu, P.; Lian, J. SOC estimation algorithm of power lithium battery based on AFSA-BP neural network. J. Eng. 2020, 13, 535–539. [Google Scholar] [CrossRef]
  23. Chemali, E.; Kollmeyer, P.J.; Preindl, M.; Ahmed, R.; Emadi, A. Long short-term memory networks for accurate state-of-charge estimation of li-ion batteries. IEEE Trans. Ind. Electron. 2018, 65, 6730–6739. [Google Scholar] [CrossRef]
  24. Bian, C.; He, H.; Yang, S. Stacked bidirectional long short-term memory networks for state-of-charge estimation of lithium-ion batteries. Energy 2020, 191, 116538. [Google Scholar] [CrossRef]
  25. Vidal, C.; Kollmeyer, P.J.; Naguib, M.; Malysz, P.; Gross, O.; Emadi, A. Robust xev battery state-of-charge estimator design using a feedforward deep neural network. SAE Int. J. Adv. Curr. Prac. Mobil. 2020, 2, 2872–2880. [Google Scholar]
  26. Yang, F.; Li, W.; Li, C.; Miao, Q. State-of-charge estimation of lithium-ion batteries based on gated recurrent neural network. Energy 2019, 175, 66–75. [Google Scholar] [CrossRef]
  27. Jiao, M.; Wang, D.; Qiu, J. A GRU-RNN based momentum optimized algorithm for SOC estimation. J. Power Sources 2020, 459, 228051. [Google Scholar] [CrossRef]
  28. Dong, G.; Zhang, X.; Zhang, C.; Chen, Z. A method for state of energy estimation of lithium-ion batteries based on neural network model. Energy 2015, 90, 879–888. [Google Scholar] [CrossRef]
  29. Tian, Y.; Lai, R.; Li, X.; Xiang, L.; Tian, J. A combined method for state-of-charge estimation for lithium-ion batteries using a long short-term memory network and an adaptive cubature Kalman filter. Appl. Energy 2020, 265, 114789. [Google Scholar] [CrossRef]
  30. Cho, K.; Courville, A.; Bengio, Y. Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks. IEEE Trans. Multimed. 2015, 17, 1875–1886. [Google Scholar] [CrossRef] [Green Version]
  31. Song, Y.; Li, L.; Peng, Y.; Liu, D. Lithium-Ion Battery Remaining Useful Life Prediction Based on GRU-RNN; IEEE: Piscataway, NJ, USA, 2018; Volume 2018, pp. 317–322. [Google Scholar]
  32. Zhang, A.; Lipton, Z.C.; Li, M.; Smola, A.J. Dive into Deep Learning. 2020. Available online: https://github.com/d2l-ai/d2l-zh (accessed on 29 November 2020).
  33. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning, Stanford University, Stanford, CA, USA, 17–19 June 2013; p. 3. [Google Scholar]
  34. Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G. Deep Speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567. [Google Scholar]
  35. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2015, arXiv:1412.6980. [Google Scholar]
  36. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  37. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural. Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  38. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the EMNLP 2014—2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar]
Figure 1. Gated recurrent unit (GRU) structural unit [32].
Figure 2. Schematic diagram of a GRU-RNN network model with an activation function layer (GRU-ATL) network structure.
Figure 3. Test data of LA92 at three temperatures (−10, 0, and 25 °C); (a) current (b) voltage (c) temperature (d) discharge capacity.
Figure 4. State-of-charge (SOC) and SOC error of four network models at 0 °C: (a,e) Urban Dynamometer Driving Schedule (UDDS), (b,f) LA92, (c,g) US06, (d,h) constant current and constant voltage (CC-CV).
Figure 5. SOC and SOC error of four network models at 25 °C: (a,e) UDDS, (b,f) LA92, (c,g) US06, (d,h) CC-CV.
Figure 6. The evaluation index values in three cases: (a) Mean absolute error (MAE), (b) root mean square error (RMSE), (c) training time.
Figure 7. SOC estimation error of case 1 at 25 °C: (a) SOC, (b) SOC error, (c) local enlargement of UDDS, (d) local enlargement of LA92, (e) local enlargement of US06.
Figure 8. Two kinds of random noise: (a) Gaussian distributed noise, (b) uniformly distributed noise.
Figure 9. SOC and SOC error of long short-term memory (LSTM) model in four cases at 10 °C: (a) SOC, (b) SOC error, (c) local enlargement of HWFET, (d) local enlargement of US06.
Figure 10. SOC and SOC error of GRU-ATL model in four cases at 10 °C: (a) SOC, (b) SOC error, (c) local enlargement of HWFET, (d) local enlargement of US06.
Figure 11. SOC and SOC error of GRU model in four cases at 40 °C: (a) SOC, (b) SOC error, (c) local enlargement of UDDS, (d) local enlargement of LA92.
Figure 12. SOC and SOC error of GRU-ATL model in four cases at 40 °C: (a) SOC, (b) SOC error, (c) local enlargement of UDDS, (d) local enlargement of LA92.
Table 1. The main specifications of LG 18650HG2.
Parameter | Value
Nominal capacity/voltage | 3.0 Ah/3.6 V
Charge and discharge cut-off voltage | 4.2 V/2.5 V
Normal end-of-charge current | 50 mA
Max continuous discharge current | 20 A
Standard charge current | 1.5 A
Energy density | 240 Wh/kg
Table 2. The detailed division of training set and test set.
Data Set | T (°C) | Condition
Training | −10, 0, 10, 25, 40 | Mixed (1–8), CC-CV (3 A), UDDS, HWFET, LA92, US06
Testing 1 | −10 | UDDS, HWFET, LA92, US06, CC-CV (3 A)
Testing 2 | 0 | UDDS, HWFET, LA92, US06, CC-CV (3 A)
Testing 3 | 10 | UDDS, HWFET, LA92, US06, CC-CV (3 A)
Testing 4 | 25 | UDDS, LA92, US06, CC-CV (3 A)
Testing 5 | 40 | UDDS, HWFET, LA92, US06, CC-CV (3 A)
Table 3. SOC error results of four network models at five temperatures.
Network Model | T (°C) | UDDS MAE/RMSE (%) | LA92 MAE/RMSE (%) | US06 MAE/RMSE (%) | CC-CV MAE/RMSE (%)
FNN | −10 | 5.303/6.489 | 4.869/6.594 | 9.066/11.899 | 11.479/14.352
FNN | 0 | 4.041/4.821 | 3.437/4.507 | 7.519/9.632 | 9.475/11.597
FNN | 10 | 2.726/3.328 | 2.451/3.216 | 6.037/7.680 | 8.139/10.425
FNN | 25 | 1.612/2.078 | 1.784/2.404 | 4.305/5.490 | 7.427/8.827
FNN | 40 | 1.517/1.920 | 1.615/2.105 | 3.363/4.289 | 7.808/9.438
LSTM | −10 | 2.067/2.993 | 1.658/2.085 | 2.265/2.77 | 2.075/2.822
LSTM | 0 | 1.256/2.471 | 1.271/1.613 | 2.216/2.834 | 2.868/3.313
LSTM | 10 | 1.572/2.264 | 1.227/1.605 | 2.297/2.954 | 1.714/1.999
LSTM | 25 | 1.261/2.017 | 1.080/1.416 | 2.225/2.787 | 2.869/3.249
LSTM | 40 | 1.327/2.103 | 1.164/1.489 | 1.870/2.286 | 2.068/2.389
GRU | −10 | 0.558/0.899 | 0.454/0.551 | 1.587/1.724 | 3.165/4.387
GRU | 0 | 1.129/2.333 | 0.694/0.865 | 1.294/1.670 | 1.650/2.069
GRU | 10 | 0.702/1.314 | 0.461/0.554 | 1.474/1.842 | 1.409/1.901
GRU | 25 | 0.988/1.454 | 0.570/0.714 | 1.007/1.253 | 1.847/2.179
GRU | 40 | 0.612/1.046 | 0.620/0.800 | 0.989/1.321 | 1.604/2.072
GRU-ATL | −10 | 0.424/0.752 | 0.445/0.596 | 1.126/1.323 | 2.436/3.004
GRU-ATL | 0 | 0.406/1.161 | 0.361/0.500 | 1.324/1.898 | 1.142/1.685
GRU-ATL | 10 | 0.457/0.859 | 0.638/0.765 | 1.429/1.730 | 0.799/1.066
GRU-ATL | 25 | 0.814/1.165 | 0.526/0.652 | 0.978/1.179 | 1.407/1.763
GRU-ATL | 40 | 0.609/0.890 | 0.541/0.709 | 0.911/1.148 | 1.303/1.613
Table 4. Model evaluation indexes under three conditions at 25 °C.
Case 1 | NFC | 10, 30, 70, 100, 130, 160, 200, 230, 260, 300
Case 1 | MAE (%) | 0.956, 0.832, 1.113, 0.775, 0.787, 0.734, 1.094, 0.949, 1.032, 1.179
Case 1 | RMSE (%) | 1.623, 1.628, 1.812, 1.347, 1.291, 1.158, 1.779, 1.556, 1.623, 1.544
Case 1 | Time (h) | 5.92, 5.95, 6.01, 6.03, 6.13, 6.15, 6.18, 6.21, 6.26, 6.33
Case 2 | NGRU | 10, 30, 70, 100, 130, 160, 200, 230, 260, 295
Case 2 | MAE (%) | 1.283, 0.988, 0.979, 0.911, 0.840, 1.029, NA, 0.997, NA, 1.093
Case 2 | RMSE (%) | 2.056, 1.543, 1.433, 1.555, 1.311, 1.338, NA, 1.636, NA, 1.575
Case 2 | Time (h) | 4.37, 4.98, 7.05, 10.26, 15.55, 22.38, NA, 28.93, NA, 34.16
Case 3 | NGRU and NFC | 10, 30, 70, 100, 130, 160, 200, 230, 260, 300
Case 3 | MAE (%) | 1.095, 0.854, 0.831, 0.945, 1.014, 1.096, 0.934, NA, 0.930, 1.083
Case 3 | RMSE (%) | 1.969, 1.845, 1.482, 1.466, 1.678, 1.571, 1.553, NA, 1.329, 1.565
Case 3 | Time (h) | 4.27, 5.01, 8.36, 12.37, 17.52, 21.05, 24.37, NA, 30.45, 35.25
Table 5. SOC error results of case 1 (NGRU = 55, NFC = 160).
T (°C) | −10 | 0 | 10 | 25 | 40
MAE (%) | 1.271 | 0.814 | 1.032 | 0.734 | 0.799
RMSE (%) | 2.005 | 1.101 | 1.461 | 1.158 | 1.259
Table 6. The statistical results of SOC error in four cases.
Network Model | Case | Metric | Noise 1 (−10, 0, 10, 25, 40 °C) | Noise 3 (−10, 0, 10, 25, 40 °C)
LSTM | Case 1 | MAE (%) | 1.755, 1.293, 1.202, 1.195, 1.157 | 1.738, 1.306, 1.197, 1.201, 1.160
LSTM | Case 1 | RMSE (%) | 2.369, 1.879, 1.695, 1.751, 1.689 | 2.358, 1.894, 1.693, 1.754, 1.691
LSTM | Case 2 | MAE (%) | 1.820, 1.364, 1.295, 1.288, 1.258 | 1.738, 1.357, 1.457, 1.284, 1.284
LSTM | Case 2 | RMSE (%) | 2.449, 1.943, 1.815, 1.195, 1.782 | 2.669, 1.878, 1.962, 1.853, 1.813
LSTM | Case 3 | MAE (%) | 1.756, 1.301, 1.206, 1.198, 1.161 | 1.750, 1.317, 1.210, 1.201, 1.168
LSTM | Case 3 | RMSE (%) | 2.370, 1.886, 1.699, 1.753, 1.692 | 2.372, 1.911, 1.704, 1.755, 1.670
LSTM | Case 4 | MAE (%) | 1.788, 1.335, 1.265, 1.266, 1.234 | 1.981, 1.317, 1.414, 1.251, 1.255
LSTM | Case 4 | RMSE (%) | 2.410, 1.915, 1.780, 1.815, 1.759 | 2.572, 1.850, 1.913, 1.823, 1.786
GRU | Case 1 | MAE (%) | 1.687, 1.277, 1.169, 1.272, 1.010 | 1.687, 1.295, 1.165, 1.281, 1.015
GRU | Case 1 | RMSE (%) | 2.440, 1.808, 1.610, 1.826, 1.411 | 2.441, 1.828, 1.608, 1.835, 1.417
GRU | Case 2 | MAE (%) | 1.697, 1.299, 1.180, 1.305, 1.113 | 1.687, 1.160, 1.410, 1.250, 1.062
GRU | Case 2 | RMSE (%) | 2.469, 1.828, 1.638, 1.860, 1.506 | 2.494, 1.661, 1.832, 1.805, 1.439
GRU | Case 3 | MAE (%) | 1.692, 1.278, 1.169, 1.272, 1.011 | 1.700, 1.297, 1.168, 1.273, 1.019
GRU | Case 3 | RMSE (%) | 2.449, 1.808, 1.611, 1.827, 1.411 | 2.450, 1.829, 1.609, 1.825, 1.421
GRU | Case 4 | MAE (%) | 1.697, 1.298, 1.176, 1.299, 1.089 | 1.700, 1.176, 1.388, 1.238, 1.035
GRU | Case 4 | RMSE (%) | 2.473, 1.826, 1.634, 1.853, 1.479 | 2.473, 1.679, 1.810, 1.795, 1.411
GRU-ATL | Case 1 | MAE (%) | 1.249, 0.697, 0.998, 0.891, 0.937 | 1.247, 0.696, 1.009, 0.901, 0.947
GRU-ATL | Case 1 | RMSE (%) | 1.859, 1.155, 1.409, 1.248, 1.273 | 1.863, 1.159, 1.422, 1.257, 1.282
GRU-ATL | Case 2 | MAE (%) | 1.265, 0.718, 1.038, 0.924, 1.055 | 1.247, 0.887, 0.909, 0.828, 0.906
GRU-ATL | Case 2 | RMSE (%) | 1.888, 1.171, 1.450, 1.288, 1.413 | 1.865, 1.235, 1.278, 1.197, 1.275
GRU-ATL | Case 3 | MAE (%) | 1.251, 0.698, 1.002, 0.893, 0.940 | 1.264, 0.705, 1.010, 0.890, 0.960
GRU-ATL | Case 3 | RMSE (%) | 1.864, 1.158, 1.413, 1.250, 1.278 | 1.878, 1.175, 1.422, 1.247, 1.294
GRU-ATL | Case 4 | MAE (%) | 1.268, 0.722, 1.040, 0.928, 1.042 | 1.359, 0.855, 0.910, 0.828, 0.893
GRU-ATL | Case 4 | RMSE (%) | 1.892, 1.176, 1.450, 1.292, 1.395 | 1.865, 1.218, 1.281, 1.195, 1.259