Article

Estimation of Lithium-ion Battery Discharge Capacity by Integrating Optimized Explainable-AI and Stacked LSTM Model

1 Department of Mechanical Engineering, School of Technology, PDEU Gandhinagar, Gandhinagar 382426, Gujarat, India
2 Mechanical Engineering Department, Medi-Caps University, Indore 453331, Madhya Pradesh, India
3 Mechanical Engineering Department, National Institute of Technology, Kurukshetra 136119, Haryana, India
* Authors to whom correspondence should be addressed.
Batteries 2023, 9(2), 125; https://doi.org/10.3390/batteries9020125
Submission received: 21 December 2022 / Revised: 1 February 2023 / Accepted: 6 February 2023 / Published: 9 February 2023
(This article belongs to the Special Issue Lithium-Ion Batteries Aging Mechanisms, 2nd Edition)

Abstract

Accurate evaluation of the state of health of lithium-ion batteries is crucial for correctly operating and managing battery-based energy storage systems. Experimental determination is problematic in these applications because the battery must continue to operate normally. Machine learning techniques enable accurate and effective data-driven predictions in such situations. In the present paper, an optimized explainable artificial intelligence (Ex-AI) model is proposed to predict the discharge capacity of the battery. In the initial stage, three deep learning (DL) models, stacked long short-term memory networks (stacked LSTMs), gated recurrent unit (GRU) networks, and stacked recurrent neural networks (SRNNs), were developed by training on six input features. Ex-AI was applied to identify the relevant features, and the jellyfish metaheuristic optimization technique was used to tune the Ex-AI operating parameters. The results reveal that discharge capacity was better predicted when the jellyfish-Ex-AI model was applied. A very low RMSE of 0.04, MAE of 0.60, and MAPE of 0.03 were observed with the stacked LSTM model, demonstrating the utility of the proposed methodology.

1. Introduction

Batteries are an integral and crucial element in electric vehicles (EVs). The development of EVs is crucial for reducing our reliance on fossil fuels and minimizing vehicle emissions [1,2,3]. Lead-acid, nickel-metal hydride, and lithium-ion (Li-ion) batteries are the three primary types of power batteries. Li-ion batteries have been employed in electric cars and hybrid electric vehicles (HEVs), as well as in other fields. They serve a crucial role by providing the requisite energy storage for various applications, including microgrids and renewable energy [4,5], electric vehicles [6], and consumer electronics. This is attributable primarily to their favorable properties, such as high energy density, high efficiency, and long cycle life. Despite having several benefits over other battery types, the performance of Li-ion batteries diminishes with repeated charging and discharging. Therefore, engineering research focuses on identifying their degradation, estimating and forecasting their states, and optimizing their maintenance [7]. A battery’s deterioration process must be closely monitored in order to maximize energy output, avoid early failure, and increase reliability and longevity. In a battery management system (BMS), this process entails assessing the state of health (SOH) [8], forecasting the state of charge (SOC), and estimating the remaining useful life (RUL) [9]. To maintain the safe operation and optimum utilization of lithium-ion batteries, these estimates can also be used to schedule maintenance tasks in an automated and efficient manner. The end of life (EOL) is defined as the capacity approaching 70–80% of its nominal value [10] under the same SOC and operating conditions. Capacity is a direct indicator of SOH in batteries. SOH is a quantitative evaluation showing a battery’s overall health and ability to deliver the specified performance compared to a brand-new battery [11]. A lithium battery is generally considered to have failed when its capacity drops by 20% from its rated value [12]. With usage, changes in the lithium-ion battery’s internal micro-chemical structure affect its external characteristics, most notably through a reduction in capacity and an increase in impedance [13]. The most accurate battery SOH indicator, the capacity to store energy, has long been a research focus.
Battery modeling is required to establish relationships between the operating parameters of batteries, such as charging and discharging voltage, cycle life, temperature, and so on. Machine learning (ML) has emerged as a promising technique for developing reliable and successful prediction models over the last two decades. It has been successfully applied in various core engineering applications, such as predicting bearing degradation [14], surface morphology analysis in machined components [15], and fracture prediction in welded joints [16]. ML has also been extended to battery modeling with various approaches, and its full utility remains to be explored. Current methods for determining charging–discharging capacity, predicting RUL, and related tasks may be broadly categorized into model-based techniques and data-driven methods [17]. The two most popular model-based strategies are the electrochemical model (EM) [18,19,20] and the equivalent circuit model (ECM) [21]. An EM is used to calculate the SOH by examining the electrochemical reactions inside the battery. Because these reactions are difficult to model, this strategy is challenging to put into practice. Yu et al. [22] proposed a particle filter (PF) method based on quantum particle swarm optimization for ECM-based SOH estimation. Their solution has fewer variables to control, reduces computational complexity, and simplifies application. In order to balance model accuracy and computational complexity, Torai et al. [23] introduced an EM-based SOH estimation model that takes advantage of the differential capacity of the LiFePO4 (LFP)/graphite battery. The parameters of their model were shown to be related to the phase transition behavior of both the active LFP and graphite materials. Data-driven techniques have become popular since they depend on historical experimental data rather than sophisticated physical or mathematical models. Data-driven predictions are based purely on battery operating parameters (such as current, voltage, and temperature) or on characteristics derived from charging and discharging processes.
These techniques disregard the battery’s failure mechanism and electrochemical reactions [24]. A mapping between the measured data and battery SOH must be created using specific methodologies. Several machine learning algorithms have been reported for SOH estimation. Data from Li-ion battery discharge cycles were acquired, and features were extracted and assessed by Patil et al. [25]. The regression and classification versions of SVM were then used to predict the SOH and RUL. In order to estimate SOH, Nuhic et al. [26] collected several variables under various aging scenarios. Furthermore, Zheng and Deng [27] extracted health indicators from the charging procedures, selected features using mutual information, and analyzed the relationship between the selected features and SOH. The authors concluded that multiple Gaussian process regression demonstrates higher prediction accuracy compared to the conventional GPR model. In another study, Sun et al. [28] demonstrated the utility of an LSTM neural network model, which shows high prognosis accuracy under varying charging and discharging conditions of Li-ion batteries.
The non-linear relationship between the input parameters and their mapping to SOH adds complexity to the development of accurate DL models. Further, prediction results pertaining to RUL or charging–discharging capacity could be biased due to the random splitting of input data into training and testing sets. Therefore, the present work attempts to construct a robust model that blends the meta-heuristic jellyfish optimization technique, the Ex-AI model, and deep learning (DL) models to overcome the issues outlined above. According to the literature survey conducted by the authors, only a few studies have examined the potential of feature identification using Ex-AI coupled with optimized DL models for Li-ion battery discharge capacity prediction. In the present paper, the authors formulate an integrated model for accurately predicting the discharge capacity of Li-ion batteries, and the results are analyzed in detail. The remaining portion of the paper is arranged as follows: the methodology used in this investigation is explained briefly in Section 2; in Section 3, the findings are explained with supporting information; finally, the findings and study outcomes are summarized in Section 4. Figure 1 shows the flowchart of the optimized explainable-AI methodology applied to predict the discharge capacity from the Li-ion battery dataset.

2. Materials and Methods

2.1. Deep Learning Algorithms

Deep learning (DL) is a subset of ML algorithms that has been successfully applied to a variety of applications such as determination of surface roughness [29], analysis of machined images [30], bearing degradation analysis using TGAN [31], etc. DL is an advanced form of neural network architecture that tries to simulate the functioning of the human brain. The availability of additional hidden layers can help optimize and refine the prediction capability specifically needed for large datasets [31]. In the present study, the authors utilized long short-term memory (LSTM), gated recurrent units (GRUs), and stacked recurrent neural networks (SRNNs) to predict the discharge capacity of a Li-ion battery. The description of the algorithms is as follows:

2.1.1. Stacked Long Short-Term Memory Network

Long short-term memory (LSTM) is a type of recurrent neural network capable of learning and predicting order dependence in sequence prediction tasks. Due to the presence of feedback connections, the entire sequence of data can be processed efficiently. The LSTM, which consists of cells and gates through which the flow of information is regulated, is utilized to circumvent the loss of short-term memory [32]. An LSTM layer is made up of a collection of recurrently connected memory blocks. These units can be compared to differentiable memory chips found in digital computers. Each of them contains input, output, and forget gates, as well as one or more memory cells coupled in a loop. These three multiplicative units continually mimic the write, read, and reset operations for the memory cells.
The input, output, and forget gates are calculated as follows:
$$G_i = \sigma\left(y_t W_{yi} + S_{t-1} W_{si} + \beta_i\right)$$
$$G_f = \sigma\left(y_t W_{yf} + S_{t-1} W_{sf} + \beta_f\right)$$
$$G_o = \sigma\left(y_t W_{yo} + S_{t-1} W_{so} + \beta_o\right)$$
The input node memory cell is computed as:
$$\tilde{\theta}_t = \tanh\left(y_t w_{yc} + S_{t-1} w_{sc} + \beta_c\right)$$
The memory cell internal state is calculated as:
$$\theta_t = G_f \odot \theta_{t-1} + G_i \odot \tilde{\theta}_t$$
The hidden state is calculated with the help of the activation function as [30]:
$$S_t = G_o \odot \tanh\left(\theta_t\right)$$
Here, $t$ is the time step and $y_t$ is the input vector.
In the stacked LSTM model, several hidden layers are included as compared to a single layer in the conventional LSTM model. The advantage lies in the fact that the model parameters are distributed over the whole space of the model without increasing memory capacity, which enables the model to accelerate convergence and refine nonlinear operations of raw data. Figure 2 shows the architecture of the stacked LSTM model.
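To make the stacked arrangement concrete, the following minimal sketch builds a two-layer LSTM regressor in Keras. The layer sizes, optimizer, and training settings are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal stacked-LSTM regressor sketch (TensorFlow/Keras). Layer sizes and
# training settings are illustrative assumptions, not the paper's exact setup.
from tensorflow import keras
from tensorflow.keras import layers

def build_stacked_lstm(n_timesteps: int, n_features: int) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        layers.LSTM(64, return_sequences=True),  # lower LSTM layer passes the full sequence on
        layers.LSTM(32),                         # upper LSTM layer summarizes the sequence
        layers.Dense(16, activation="relu"),
        layers.Dense(1),                         # predicted discharge capacity (Ah)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Usage: X has shape (samples, timesteps, features); y holds the capacity targets.
# model = build_stacked_lstm(n_timesteps=1, n_features=6)
# model.fit(X_train, y_train, epochs=100, batch_size=32, validation_split=0.1)
```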

2.1.2. Gated Recurrent Unit

The gated recurrent unit (GRU) is a promising deep-learning model for sequence prediction. It was introduced by Cho et al. [33] in 2014 and, as the name implies, uses gates to regulate the information flow through the network. GRUs are very similar to long short-term memory (LSTM) networks and are reported to give better predictions with lower computational complexity. A GRU consists of a hidden unit in which the input and forget gates are merged into a single update gate. The three gates of the LSTM are thus replaced by a reset gate and an update gate, as shown in Figure 3. The reset gate helps capture short-term dependencies in the sequence, whereas the update gate helps capture long-term dependencies.
The reset gate ($G_r$) and update gate ($G_u$) are calculated as follows:
$$G_r = \sigma\left(y_t w_{yr} + S_{t-1} w_{hr} + \beta_r\right)$$
$$G_u = \sigma\left(y_t w_{yu} + S_{t-1} w_{hu} + \beta_u\right)$$
Here, $y_t$ represents the input variables; $w_{yr}$, $w_{yu}$, $w_{hr}$, and $w_{hu}$ are weight parameters, whereas $\beta_r$ and $\beta_u$ are bias parameters. $S_{t-1}$ represents the hidden state from the previous time step.
The candidate hidden state is calculated as
$$\tilde{S}_t = \tanh\left(y_t w_{yh} + G_r \odot \left(S_{t-1} w_{hh}\right) + \beta_h\right)$$
where tanh represents the activation function and $\odot$ is the Hadamard (element-wise) product operator.
Finally, the new hidden state incorporating the effect of the update gate $G_u$ is computed as
$$S_t = G_u \odot S_{t-1} + \left(1 - G_u\right) \odot \tilde{S}_t$$
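As a worked example of the gate equations above, the sketch below performs a single GRU step in NumPy using the paper's notation ($y_t$ input, $S_t$ hidden state). The weight shapes and the random toy initialization are assumptions for illustration only.

```python
# One GRU step implementing the reset/update-gate equations above (NumPy sketch).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(y_t, S_prev, W_yr, W_hr, b_r, W_yu, W_hu, b_u, W_yh, W_hh, b_h):
    G_r = sigmoid(y_t @ W_yr + S_prev @ W_hr + b_r)              # reset gate
    G_u = sigmoid(y_t @ W_yu + S_prev @ W_hu + b_u)              # update gate
    S_cand = np.tanh(y_t @ W_yh + (G_r * S_prev) @ W_hh + b_h)   # candidate hidden state
    return G_u * S_prev + (1.0 - G_u) * S_cand                   # blended new hidden state

# Toy usage with 6 input features and a hidden size of 8 (random placeholder weights):
rng = np.random.default_rng(0)
d_in, d_h = 6, 8
params = [rng.standard_normal(shape) * 0.1
          for shape in [(d_in, d_h), (d_h, d_h), (d_h,)] * 3]
S_t = gru_step(rng.standard_normal(d_in), np.zeros(d_h), *params)
```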

2.1.3. Stacked Recurrent Neural Network

The stacked recurrent neural network (SRNN) is a type of deep learning algorithm consisting of multiple stacked recurrent layers. An input layer, a hidden layer, and an output layer are the three layers of a conventional RNN structure. Each neuron in the hidden layer has a state feedback mechanism that allows RNNs to remember historical knowledge translated from incoming data [34]. As a result, RNNs are more suited to dealing with sequential data. Another significant property of the RNN is that it can handle sequences of varying lengths due to its recurring structure.
Let the input sequence at time step $t$ be denoted by $x_t$ and the target sequence by $y_t$. A single-layer RNN with a single cell, shown in Figure 4, is then modeled as
$$V_t = f\left(A x_t + B V_{t-1} + u\right)$$
$$y_t = g\left(C V_t + w\right)$$
Here, $V_t$ and $V_{t-1}$ represent the state vectors of the corresponding cell layers at time steps $t$ and $t-1$. $A$, $B$, and $C$ denote the input weighting matrix, the weighting matrix between adjacent state vectors, and the output weighting matrix, respectively [35]. The bias vectors of the cell layer and the output are denoted by $u$ and $w$.
The stacked RNN, considering all layers at the different time steps $t-1$, $t$, $t+1$, and $t+2$ as shown in Figure 4, is modeled as
$$V_t^{l} = f\left(A^{l} V_t^{l-1} + B^{l} V_{t-1}^{l} + u^{l}\right)$$
$$y_t = g\left(C V_t^{L} + w\right)$$
Here, $V_t^{l}$, $A^{l}$, and $y_t$ denote the hidden state of the $l$-th layer, the weighting matrix between adjacent layers, and the predicted output vector of the S-RNN, respectively.
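A stacked simple-RNN of this kind can be sketched in Keras as follows; the two-layer arrangement and layer widths are assumptions for illustration. ReLU is used here because the discussion in Section 3 mentions it for the SRNN.

```python
# Stacked (multi-layer) simple-RNN sketch in Keras; layer widths are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

def build_srnn(n_timesteps: int, n_features: int) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),
        layers.SimpleRNN(64, activation="relu", return_sequences=True),  # lower recurrent layer
        layers.SimpleRNN(32, activation="relu"),                         # upper recurrent layer
        layers.Dense(1),                                                 # capacity estimate
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```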

2.2. Jellyfish Optimization

Meta-heuristic optimization algorithms are crucial for tackling complex problems in both commercial applications and scientific research, and many such algorithms have been created over the past ten years. The jellyfish search optimizer (JSO) was recently developed as a swarm intelligence algorithm for global optimization [36]. JSO is designed to mimic how marine jellyfish behave in the ocean, including how they move within swarms, follow ocean currents, and switch between these motions using a time control mechanism. Within a large collection of jellyfish, known as a swarm, an individual will either move passively (Type A) around its own position or actively (Type B) toward another position. Type A motion with the updated location is formulated as:
$$Y_i(t+1) = Y_i(t) + \gamma \cdot \mathrm{rand}(0,1) \cdot \left(B_U - B_L\right)$$
Here, $B_U$ and $B_L$ represent the upper and lower bounds of the search space, and $\gamma$ is the motion coefficient.
To implement Type B movement, another jellyfish is chosen randomly, and a vector is drawn from the jellyfish of interest ($i$) to the selected jellyfish ($j$) to determine the direction of movement. The updated position of the jellyfish is then calculated as follows:
$$\mathrm{Step} = \mathrm{rand}(0,1) \cdot \vec{D}(t)$$
$$\vec{D}(t) = \begin{cases} Y_j(t) - Y_i(t), & \text{if } f(Y_i) \geq f(Y_j) \\ Y_i(t) - Y_j(t), & \text{if } f(Y_i) < f(Y_j) \end{cases}$$
$$Y_i(t+1) = Y_i(t) + \mathrm{Step}$$
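The sketch below implements only the two swarm motions formulated above (Type A and Type B) for a minimization problem; the full JSO of Chou and Truong [36] additionally includes an ocean-current motion and a time-control mechanism deciding which motion is executed, which are omitted here. The greedy acceptance step and parameter values are assumptions of this illustration.

```python
# Simplified jellyfish-search swarm update (Type A / Type B motions only).
import numpy as np

def jellyfish_swarm_step(Y, fitness, lb, ub, gamma=0.1, rng=None):
    """One swarm-motion update of population Y (n_jellyfish x n_dims), minimizing `fitness`."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = Y.shape
    f = np.array([fitness(y) for y in Y])
    Y_new = Y.copy()
    for i in range(n):
        if rng.random() < 0.5:
            # Type A: passive motion around the jellyfish's own position
            candidate = Y[i] + gamma * rng.random(d) * (ub - lb)
        else:
            # Type B: move along the direction defined by a randomly chosen jellyfish j
            j = int(rng.integers(n))
            direction = Y[j] - Y[i] if f[i] >= f[j] else Y[i] - Y[j]
            candidate = Y[i] + rng.random(d) * direction
        candidate = np.clip(candidate, lb, ub)   # keep the candidate inside the search bounds
        if fitness(candidate) <= f[i]:           # greedy acceptance (assumption)
            Y_new[i] = candidate
    return Y_new

# Usage: iterate jellyfish_swarm_step over a random initial population to tune,
# e.g., XGBoost hyperparameters, with `fitness` returning a validation error.
```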

2.3. Experimentation and Data Acquisition

The battery aging dataset utilized by the authors belongs to the National Aeronautics and Space Administration (NASA) lithium-ion battery aging experimental data set [37]. The experiments for the dataset were performed on various commercially available Li-ion 18650-sized rechargeable batteries using a custom experimental setup at NASA’s Prognostics Centre of Excellence (PCoE). This dataset contains information about the charge, discharge, and electrochemical impedance spectroscopy cycles performed at various temperatures. The experiments were performed at a room temperature of 24 °C on batteries B0005, B0006, B0007, and B0018, which were selected for the work presented here. The capacity of the LiNi0.8Co0.15Al0.05O2 batteries used by NASA was 2 Ah. The batteries’ aging performance was evaluated using a constant-current/constant-voltage (CCCV) charging and constant-current (CC) discharging profile; specifically, a 0.75C charging rate and a 1C discharging rate were chosen. The maximum permissible charging voltage was 4.2 V, and the maximum permissible charging current rate was 0.01 A. During the charging stage of a cycle, the batteries were charged at a constant 1.5 A current until the voltage reached 4.2 V, after which they were put through the discharge stage at a constant 2 A current. The cycle was stopped once the voltage dropped from 4.2 V to the corresponding cut-off voltage, as shown in Table 1.
Voltage and current were measured at both the battery terminals and the load during each discharge cycle. Temperature was recorded at the battery during the discharge cycles, and the number of cycles and the measured capacity were additional elements of the dataset that relate to discharge capacity. Finally, the battery capacity was recorded at the end of each discharge cycle; this is the quantity to be predicted with the DL models. Figure 5 shows the capacity recorded from the first discharge cycle until the end of the final cycle for the selected batteries. It is clear that the discharge capacity of these batteries decreases significantly over time. Side reactions between the electrolyte and the electrodes contribute to the capacity deterioration as the battery experiences more discharge cycles. The experiments were terminated when the observed capacity of these batteries fell below 70% of their rated capacity, as shown by the dotted horizontal line in Figure 5. Table 2 lists the features that were extracted throughout the discharge cycles. Here, current measured is the amount of current flowing into and out of the Li-ion battery at a given time, as recorded by a measuring device, usually in amperes (A), whereas current discharge is the rate at which a Li-ion battery is discharging, or releasing its stored energy, at any given time, also stated in amperes (A). The difference between these two quantities is critical for understanding the performance and behavior of Li-ion batteries, particularly in applications such as electric cars, mobile devices, and renewable energy systems. Current measurements provide details about the system’s real energy flow, while discharge current readings reveal details about the battery’s charge and power delivery capacity. Similarly, voltage measured refers to the precise voltage, commonly measured in volts (V), between the terminals of a Li-ion battery at any particular time. Voltage discharge refers to the voltage between the battery terminals at a specific time during discharge or energy release; it reflects the energy stored in the battery and its capacity to transfer energy to the load, and it is also expressed in volts (V).
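For readers who want to reproduce the feature table, the hedged sketch below reads one NASA PCoE battery file (e.g., B0005.mat) into a per-cycle table. The nested-struct field names ('Voltage_measured', 'Current_load', 'Capacity', etc.) follow the commonly distributed .mat layout and should be verified against the actual files; summarizing each cycle by its mean values is likewise an illustrative choice, not necessarily the authors' preprocessing.

```python
# Hedged sketch: read the discharge cycles of one NASA PCoE battery file into a table.
import numpy as np
import pandas as pd
from scipy.io import loadmat

def load_discharge_cycles(path: str, battery_id: str) -> pd.DataFrame:
    mat = loadmat(path, simplify_cells=True)          # flattens MATLAB structs into dicts
    rows = []
    for k, cyc in enumerate(mat[battery_id]["cycle"], start=1):
        if cyc["type"] != "discharge":
            continue
        d = cyc["data"]
        rows.append({
            "battery_id": battery_id,
            "cycle": k,
            "voltage_measured": float(np.mean(d["Voltage_measured"])),
            "current_measured": float(np.mean(d["Current_measured"])),
            "temperature_measured": float(np.mean(d["Temperature_measured"])),
            "current_charge": float(np.mean(d["Current_load"])),      # current at the load
            "voltage_charge": float(np.mean(d["Voltage_load"])),      # voltage at the load
            "time": float(np.asarray(d["Time"]).ravel()[-1]),         # cycle duration (s)
            "capacity": float(np.asarray(d["Capacity"]).ravel()[0]),  # target (Ah)
        })
    return pd.DataFrame(rows)

# df = load_discharge_cycles("B0005.mat", "B0005")
```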
Figure 6 shows the variations in the measured discharge voltage through discharge cycles C28, C56, C84, C112, C140, and C168 for battery B0005. The measured voltage decreased with time until it reached the cut-off voltage of 2.75 V. Moreover, the discharge voltage dropped significantly faster as the battery progressed through more discharge cycles. Similarly, Figure 7 shows the variations in the measured current for the same battery B0005, indicating that the battery reached the end of the discharge cycle sooner as the number of cycles increased.

3. Results and Discussion

Several DL algorithms were applied to the Li-ion battery dataset to estimate the discharge capacity and to demonstrate the usefulness of the suggested methodology. In the initial stage, the performance of the stacked LSTM, GRU, and SRNN models was evaluated with three standard metrics: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The formulas to calculate these metrics are as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{j=1}^{m}\left(A_v - P_v\right)^2}$$
$$\mathrm{MAE} = \frac{1}{m}\sum_{j=1}^{m}\left|A_v - P_v\right|$$
$$\mathrm{MAPE} = \frac{1}{m}\sum_{j=1}^{m}\left|\frac{A_v - P_v}{A_v}\right|$$
where $A_v$, $P_v$, and $m$ denote the actual experimental value, the predicted value, and the total number of observations, respectively.
In the current study, the Li-ion battery dataset was split in a 70:30 ratio, meaning that 70% of the data was used for training the model while 30% was used for testing. Training is needed in machine learning because it allows the model to learn the underlying patterns and relationships in the training data, which enables the model to generalize to new, unseen data and make accurate predictions. The authors also evaluated the performance of the DL models based on ten-fold cross-validation results, as this is a reliable methodology for evaluating a model’s performance and removing the bias that arises from a single random split of the dataset. The dataset was first divided into ten equal sections; nine were used to train the model, and the remaining one was used for testing. In the second step, a different section was held out for testing and the other nine sections were used for training. This process was repeated until every section had been used once for testing. The overall prediction results are the average of the predictions over all ten folds. Figure 8 shows the procedure for ten-fold cross-validation applied to any DL model.
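A minimal sketch of this evaluation loop is given below, using scikit-learn's KFold for the ten-fold split; `build_model` is a placeholder for any of the three DL models and is an assumption of this illustration, not a function from the paper.

```python
# RMSE / MAE / MAPE metrics and a ten-fold cross-validation loop (sketch).
import numpy as np
from sklearn.model_selection import KFold

def rmse(a, p): return float(np.sqrt(np.mean((a - p) ** 2)))
def mae(a, p):  return float(np.mean(np.abs(a - p)))
def mape(a, p): return float(np.mean(np.abs((a - p) / a)))

def ten_fold_scores(X, y, build_model):
    """Average (RMSE, MAE, MAPE) over ten folds; build_model() returns a fresh, untrained model."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = build_model()                      # new model for every fold
        model.fit(X[train_idx], y[train_idx])
        pred = np.ravel(model.predict(X[test_idx]))
        scores.append((rmse(y[test_idx], pred),
                       mae(y[test_idx], pred),
                       mape(y[test_idx], pred)))
    return np.mean(scores, axis=0)                 # averaged metrics over the ten folds
```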
After applying ML/DL models to the Li-ion battery dataset, it is challenging for researchers to understand and reconstruct how an algorithm arrived at a result. In the conventional approach, the entire computation is treated as a “black box,” which makes it difficult to understand and interpret the results. These “black box” models are created directly from the raw data, and even the researchers or data scientists who created the algorithm often cannot fully explain what is happening inside it or how it came to a particular conclusion. Explainable artificial intelligence (Ex-AI) is a group of techniques and methodologies that, when applied to machine learning algorithms, enable researchers to comprehend the outcomes produced by the algorithms. Ex-AI is a promising technique for describing an AI model and its anticipated impact and for investigating any biases in the predicted results. It helps to characterize the model’s accuracy, fairness, and transparency and supports AI-assisted decision-making. When bringing AI models into production, an organization must ensure that its AI is explainable, which assists in systematically developing AI-based methodologies. The authors built models based on the stacked LSTM, GRU, and SRNN DL architectures, where training, testing, and the ten-fold technique were performed on all input parameters in order to thoroughly evaluate the prediction outcomes of the proposed methodology on the Li-ion battery dataset. Ex-AI was used in the subsequent stages to choose the pertinent input parameters that could significantly enhance the prediction of battery discharge capacity. In our study, the XGBoost algorithm was chosen to select the input parameters through the Ex-AI model. It is always challenging to optimally select the hyperparameters of any ML/DL algorithm; therefore, in the present study, jellyfish search optimization was implemented to optimally select the hyperparameter values of the Ex-AI model. Details about the XGBoost algorithm parameters, their ranges, and the optimized values are given in Table 3 and Table 4.
Features, or variables, are essential for correctly predicting the output when using machine learning models. Not all features are needed for prediction, and removing unnecessary features may significantly improve a model’s performance. Feature selection identifies the essential features using specific statistical formulations. In the case of a high-dimensional dataset, irrelevant features increase computational complexity and training time and degrade out-of-sample performance, so feature selection is highly desirable. Further, feature explainability is needed to validate the prediction accuracy of an AI model. SHAP (Shapley additive explanations) is an open-source Python package for formulating an Ex-AI model. SHAP is extremely useful in explaining the influence of individual input parameters on predictions. The Shapley value of a feature is its average marginal contribution to the prediction over all feasible coalitions of features. The SHAP scores for the Li-ion battery dataset features are shown in Figure 9 and Figure 10. The horizontal axis of each bar graph represents the mean absolute SHAP values, and the vertical axis represents the features. Features with a mean absolute SHAP value higher than zero were retained, and features with values at or near zero were discarded. With the Ex-AI model, the selected features were cycle, current charge, battery id, and current measured, as seen in Figure 9. Moreover, when the hyperparameters of the Ex-AI model were optimized with jellyfish optimization, the selected features were cycle, current charge, and battery id, as can be observed from Figure 10. The influence of the selected features on prediction capability is shown in Figure 11, Figure 12 and Figure 13.
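The feature-selection step can be sketched as follows: fit an XGBoost regressor, compute SHAP values with a TreeExplainer, and keep the features whose mean absolute SHAP value is clearly above zero. The small numerical threshold and the hyperparameter values shown in the usage comment (taken from the jellyfish-optimized row of Table 4) are assumptions of this illustration.

```python
# Ex-AI feature-selection sketch: XGBoost + mean-|SHAP| ranking.
import numpy as np
import shap
import xgboost as xgb

def select_features_shap(X, y, feature_names, threshold=1e-6, **xgb_params):
    model = xgb.XGBRegressor(**xgb_params)        # hyperparameters may come from jellyfish search
    model.fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)        # shape: (n_samples, n_features)
    mean_abs = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
    keep = [name for name, s in zip(feature_names, mean_abs) if s > threshold]
    return keep, mean_abs

# Example call with the jellyfish-optimized hyperparameters of Table 4:
# keep, scores = select_features_shap(X, y, names,
#                                     learning_rate=0.10, max_depth=2,
#                                     min_child_weight=2.79, n_estimators=307,
#                                     n_jobs=1, subsample=0.98)
```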
After identifying the relevant features, the prediction capability of the models was evaluated through three performance metrics: RMSE, MAE, and MAPE. Training, testing, and ten-fold cross-validation were applied to the three DL models for discharge capacity prediction. Because of their unbiasedness and high reliability, the ten-fold results are the ones recommended for comparison. Figure 11a–c shows the RMSE values obtained after applying the three DL models with three feature conditions: (a) all features, (b) Ex-AI features, and (c) jellyfish-optimized Ex-AI features. Here, all features refers to the input values listed in Table 2, Ex-AI features refers to the four features obtained after applying the mean SHAP score (Figure 9), and jellyfish Ex-AI refers to the three features obtained after applying the mean SHAP score to the optimized model (Figure 10). Since the mean SHAP scores of current measured and voltage measured are near zero, these features were discarded. It is observed from Figure 11a–c that the lowest ten-fold RMSE of 0.04 was obtained from the stacked LSTM model considering the three optimized Ex-AI features, whereas the highest RMSE of 0.11 was observed from the GRU model when the four Ex-AI features were considered. The prediction results of the jellyfish-optimized Ex-AI features provide lower estimation errors (RMSEs), as seen in Figure 11c. In contrast, the prediction results considering all features exhibited higher RMSE prediction errors for all three DL models, as observed from Figure 11a,b. As can be observed from Figure 12a–c, the discharge capacity of the Li-ion batteries was accurately predicted by all three models considered in this investigation. However, the lowest ten-fold MAE of 0.60 was observed from the stacked LSTM model (Figure 12c), whereas the maximum MAE observed was 0.90 from the GRU model (Figure 12a,b). Figure 13a–c shows the MAPE results. The stacked LSTM model gives the least error under all feature conditions when the ten-fold prediction results are considered; the lowest MAPE of 0.03 is observed from the stacked LSTM model, whereas the maximum MAPE of 0.07 is observed from the GRU model. Additionally, the RMSE, MAE, and MAPE values obtained from training and testing on the Li-ion battery aging dataset are very small compared with the ten-fold prediction results in all situations, as shown in Figure 11, Figure 12 and Figure 13. However, as already noted, the ten-fold results should be preferred because of their unbiasedness and reliability. Ten-fold cross-validation (CV) can give a higher prediction error than training and testing on a single data split because it is a more rigorous evaluation of the model’s performance. In ten-fold CV, the data are divided into ten subsets, and the model is trained and tested ten times, each time using a different subset as the test set. This gives a better estimate of the model’s performance on new, unseen data, but it also means that the model is tested on a wider variety of data, which can lead to a higher prediction error. Additionally, using ten-fold CV can increase the variance in the estimated performance, contributing to a higher error. The prediction findings show that, independent of the model, prediction accuracy increases when Ex-AI is used for feature selection. These errors are further decreased when Ex-AI is tuned using the jellyfish optimization model.
It should also be noted that the prediction error range is significantly smaller for the RMSE and MAPE values (Figure 11 and Figure 13), while the maximum MAE observed is less than 1 (ten-fold), which shows the robustness of our proposed methodology. Figure 14a–c shows the prediction errors reported from the optimized Ex-AI features with all three models. The optimized Ex-AI features were chosen since their RMSE, MAE, and MAPE are significantly lower, which signifies the smallest prediction error. Further, it is evident from the figure that when ten-fold cross-validation is applied, the predicted values match the experimental data accurately. Our outcomes show the utility of jellyfish-optimized Ex-AI features for accurately predicting the discharge capacity of the Li-ion battery dataset considered in our study.
Table 5 shows the Li-ion discharge prediction accuracy when individual features are considered. When individual features are used with ten-fold prediction, the minimum RMSE observed is 0.10 with the SRNN model and the cycle feature, which is much higher than when the Ex-AI features or jellyfish-optimized Ex-AI features are considered (Figure 11b,c). Similarly, when MAE is considered for individual features, the minimum MAE observed is 1.02 (ten-fold) with the stacked LSTM model and cycle as the feature, which again is much higher than with the Ex-AI features or jellyfish-optimized Ex-AI features (Figure 12b,c). Moreover, the minimum ten-fold MAPE of 0.10 is obtained with the GRU model and current charge as the feature, which is also much higher than with the Ex-AI features or jellyfish-optimized Ex-AI features (Figure 13b,c). The results from Table 5 and Figure 11, Figure 12 and Figure 13 confirm that the Ex-AI and jellyfish-optimized features predict the discharge capacity of the Li-ion battery much better than individual battery features. Table 6 shows the computational time required to develop the ML models.
The results show that the stacked LSTM model predicts the discharge capacity better than the GRU and SRNN models. The possible reasons are that LSTM is more accurate on larger datasets, whereas GRU models exhibit a slower convergence rate and lower learning efficiency compared to the other two models. Furthermore, the computation time of SRNN is very high due to the use of the ReLU activation function. For large datasets, vanishing gradient issues were observed with SRNN; LSTM, however, was explicitly designed to avoid the vanishing gradient problem. Although the prediction results are good, there are several ways to further enhance the prediction capability of the stacked LSTM, GRU, and SRNN models, such as experimenting with different layer sizes, numbers of layers, and activation functions, combining multiple models into a single prediction by averaging or weighted averaging of their outputs, and implementing attention mechanisms to help the model focus on the most relevant parts of the input sequence.

4. Conclusions

In the present work, an efficient optimized Ex-AI model to predict the discharge capacity of Li-ion batteries is investigated in detail. The methodology was developed from an openly accessible Li-ion battery aging dataset [37]. Three deep learning models, stacked LSTM, GRU, and SRNN, are considered, and their prediction capability is evaluated using the features selected through the Ex-AI and optimized Ex-AI models. The RMSE, MAE, and MAPE results obtained after performing training, testing, and the ten-fold procedure are discussed in detail. When the ten-fold CV prediction results are considered, the observations are as follows:
  • The lowest RMSE value of 0.04 is observed from the stacked LSTM model when jellyfish-optimized Ex-AI features were considered.
  • A very low MAE of 0.60 and MAPE of 0.03 were obtained from the stacked LSTM model when jellyfish-optimized Ex-AI features were considered.
  • Stacked LSTM better predicts the discharge capacity of Li-ion batteries as compared to GRU and SRNN deep learning models.
  • Features selected after applying jellyfish-optimized Ex-AI models were found to exhibit better prediction capability compared to both Ex-AI features and all features.
The prediction results obtained after incorporating our proposed methodology are accurate and reliable. In the future, more comparative analyses can be performed to formulate a more generalized model for health estimation of different types of batteries.

Author Contributions

Conceptualization, V.V. and V.W.; methodology, V.V. and M.S.; software, M.S. and P.N.; validation, V.V. and M.S.; formal analysis, P.S. and H.B.; investigation, P.N. and M.S.; resources, V.V. and P.N.; data curation, P.S. and H.B.; writing—original draft preparation M.S., V.W., and P.N.; writing—review and editing, V.V. and H.B.; visualization, P.S. and M.S.; supervision, V.V. and V.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors used a publicly available Li-ion battery dataset. Link: https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognosticdata-repository/ (accessed on 1 September 2022).

Acknowledgments

The authors would like to thank B. Saha and K. Goebel, NASA Ames Research Center, for providing access to the dataset for research purposes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, L.; Wang, S.; Qu, X. Optimal Electric Bus Fleet Scheduling Considering Battery Degradation and Non-Linear Charging Profile. Transp. Res. Part E Logist. Transp. Rev. 2021, 154, 102445. [Google Scholar]
  2. Zhang, L.; Zeng, Z.; Qu, X. On The Role of Battery Capacity Fading Mechanism in The Lifecycle Cost of Electric Bus Fleet. IEEE Trans. Intell. Transp. Syst. 2021, 22, 2371–2380. [Google Scholar]
  3. Zhang, W.; Zhao, H.; Xu, M. Optimal Operating Strategy of Short Turning Lines for The Battery Electric Bus System. Commun. Transp. Res. 2021, 1, 100023. [Google Scholar]
  4. Reza, M.; Mannan, M.; Wali, S.; Hannan, M.; Jern, K.; Rahman, S.; Muttaqi, K.; Mahlia, T. Energy Storage Integration Towards Achieving Grid Decarbonization: A Bibliometric Analysis and Future Directions. J. Energy Storage 2021, 41, 102855. [Google Scholar] [CrossRef]
  5. Hannan, M.; Faisal, M.; Jern Ker, P.; Begum, R.; Dong, Z.; Zhang, C. Review of Optimal Methods and Algorithms for Sizing Energy Storage Systems to Achieve Decarbonization in Microgrid Applications. Renew. Sustain. Energy Rev. 2020, 131, 110022. [Google Scholar] [CrossRef]
  6. Wen, J.; Zhao, D.; Zhang, C. An Overview of Electricity Powered Vehicles: Lithium-Ion Battery Energy Storage Density and Energy Conversion Efficiency. Renew. Energy 2020, 162, 1629–1648. [Google Scholar]
  7. Wu, J.; Wang, Y.; Zhang, X.; Chen, Z. A Novel State of Health Estimation Method of Li-Ion Battery Using Group Method of Data Handling. J. Power Sources 2016, 327, 457–464. [Google Scholar] [CrossRef]
  8. Xiong, R.; Yu, Q.; Shen, W.; Lin, C.; Sun, F. A Sensor Fault Diagnosis Method for A Lithium-Ion Battery Pack in Electric Vehicles. IEEE Trans. Power Electron. 2019, 34, 9709–9718. [Google Scholar]
  9. Rivera-Barrera, J.; Muñoz-Galeano, N.; Sarmiento-Maldonado, H. Soc Estimation for Lithium-Ion Batteries: Review and Future Challenges. Electronics 2017, 6, 102. [Google Scholar] [CrossRef]
  10. El Mejdoubi, A.; Oukaour, A.; Chaoui, H.; Gualous, H.; Sabor, J.; Slamani, Y. State-Of-Charge and State-Of-Health Lithium-Ion Batteries’ Diagnosis According to Surface Temperature Variation. IEEE Trans. Ind. Electron. 2016, 63, 2391–2402. [Google Scholar] [CrossRef]
  11. Yu, J. State-Of-Health Monitoring and Prediction of Lithium-Ion Battery Using Probabilistic Indication and State-Space Model. IEEE Trans. Instrum. Meas. 2015, 64, 2937–2949. [Google Scholar]
  12. Pascoe, P.; Anbuky, A. Standby Power System VRLA Battery Reserve Life Estimation Scheme. IEEE Trans. Energy Convers. 2005, 20, 887–895. [Google Scholar]
  13. Zhi, Y.; Wang, H.; Wang, L. A State of Health Estimation Method for Electric Vehicle Li-Ion Batteries Using GA-PSO-SVR. Complex Intell. Syst. 2022, 8, 2167–2182. [Google Scholar]
  14. Bhavsar, K.; Vakharia, V.; Chaudhari, R.; Vora, J.; Pimenov, D.; Giasin, K. A Comparative Study to Predict Bearing Degradation Using Discrete Wavelet Transform (DWT), Tabular Generative Adversarial Networks (TGAN) and Machine Learning Models. Machines 2022, 10, 176. [Google Scholar]
  15. Vakharia, V.; Vora, J.; Khanna, S.; Chaudhari, R.; Shah, M.; Pimenov, D.Y.; Giasin, K.; Prajapati, P.; Wojciechowski, S. Experimental investigations and prediction of WEDMed surface of nitinol SMA using SinGAN and DenseNet deep learning model. J. Mater. Res. Technol. 2022, 18, 325–337. [Google Scholar]
  16. Mishra, A.; Dasgupta, A. Supervised and Unsupervised Machine Learning Algorithms for Forecasting the Fracture Location in Dissimilar Friction-Stir-Welded Joints. Forecasting 2022, 4, 787–797. [Google Scholar] [CrossRef]
  17. Xiong, R.; Li, L.; Tian, J. Towards A Smarter Battery Management System: A Critical Review on Battery State of Health Monitoring Methods. J. Power Sources 2018, 405, 18–29. [Google Scholar]
  18. Li, J.; Adewuyi, K.; Lotfi, N.; Landers, R.; Park, J. A Single Particle Model with Chemical/Mechanical Degradation Physics for Lithium-Ion Battery State of Health (SOH) Estimation. Appl. Energy 2018, 212, 1178–1190. [Google Scholar]
  19. Gao, Y.; Liu, K.; Zhu, C.; Zhang, X.; Zhang, D. Co-Estimation of State-of-Charge and State-of- Health for Lithium-Ion Batteries Using an Enhanced Electrochemical Model. IEEE Trans. Ind. Electron. 2022, 69, 2684–2696. [Google Scholar] [CrossRef]
  20. Wu, L.; Pang, H.; Geng, Y.; Liu, X.; Liu, J.; Liu, K. Low-Complexity State of Charge and Anode Potential Prediction for Lithium-Ion Batteries Using a Simplified Electrochemical Model-Based Observer under Variable Load Condition. Int. J. Energy Res. 2022, 46, 11834–11848. [Google Scholar]
  21. Wei, Z.; Zhao, J.; Ji, D.; Tseng, K. A Multi-Timescale Estimator for Battery State of Charge and Capacity Dual Estimation Based on An Online Identified Model. Appl. Energy 2017, 204, 1264–1274. [Google Scholar] [CrossRef]
  22. Yu, J.; Mo, B.; Tang, D.; Liu, H.; Wan, J. Remaining Useful Life Prediction for Lithium-Ion Batteries Using a Quantum Particle Swarm Optimization-Based Particle Filter. Qual. Eng. 2017, 29, 536–546. [Google Scholar]
  23. Torai, S.; Nakagomi, M.; Yoshitake, S.; Yamaguchi, S.; Oyama, N. State-Of-Health Estimation of Lifepo4/Graphite Batteries Based on a Model Using Differential Capacity. J. Power Sources 2016, 306, 62–69. [Google Scholar]
  24. Cui, S.; Joe, I. A Dynamic Spatial-Temporal Attention-Based GRU Model with Healthy Features for State-of-Health Estimation of Lithium-Ion Batteries. IEEE Access 2021, 9, 27374–27388. [Google Scholar]
  25. Patil, M.; Tagade, P.; Hariharan, K.; Kolake, S.; Song, T.; Yeo, T.; Doo, S. A Novel Multistage Support Vector Machine Based Approach for Li Ion Battery Remaining Useful Life Estimation. Appl. Energy 2015, 159, 285–297. [Google Scholar]
  26. Nuhic, A.; Terzimehic, T.; Soczka-Guth, T.; Buchholz, M.; Dietmayer, K. Health Diagnosis and Remaining Useful Life Prognostics of Lithium-Ion Batteries Using Data-Driven Methods. J. Power Sources 2013, 239, 680–688. [Google Scholar] [CrossRef]
  27. Zheng, X.; Deng, X. State-of-Health Prediction for Lithium-Ion Batteries with Multiple Gaussian Process Regression Model. IEEE Access 2019, 7, 150383–150394. [Google Scholar]
  28. Sun, B.; Pan, J.; Wu, Z.; Xia, Q.; Wang, Z.; Ren, Y.; Yang, D.; Guo, X.; Feng, Q. Adaptive evolution enhanced physics-informed neural networks for time-variant health prognosis of lithium-ion batteries. J. Power Sources 2023, 556, 232432. [Google Scholar] [CrossRef]
  29. Patel, D.R.; Kiran, M.B.; Vakharia, V. Modeling and prediction of surface roughness using multiple regressions: A noncontact approach. Eng. Rep. 2020, 2, e12119. [Google Scholar] [CrossRef]
  30. Vakharia, V.; Kiran, M.B.; Dave, N.J.; Kagathara, U. Feature extraction and classification of machined component texture images using wavelet and artificial intelligence techniques. In Proceedings of the 2017 8th International Conference on Mechanical and Aerospace Engineering (ICMAE), Prague, Czech Republic, 22–25 July 2017; pp. 140–144. [Google Scholar] [CrossRef]
  31. Shah, M.; Vakharia, V.; Chaudhari, R.; Vora, J.; Pimenov, D.; Giasin, K. Tool Wear Prediction in Face Milling of Stainless Steel Using Singular Generative Adversarial Network And LSTM Deep Learning Models. Int. J. Adv. Manuf. Technol. 2022, 121, 723–736. [Google Scholar] [CrossRef]
  32. Pang, H.; Wu, L.; Liu, J.; Liu, X.; Liu, K. Physics-informed neural network approach for heat generation rate estimation of lithium-ion battery under various driving conditions. J. Energy Chem. 2023, 78, 1–12. [Google Scholar] [CrossRef]
  33. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078. [Google Scholar]
  34. Čerňanský, M.; Makula, M.; Beňušková, Ľ. Organization of the state space of a simple recurrent network before and after training on recursive linguistic structures. Neural Netw. 2007, 20, 236–244. [Google Scholar] [CrossRef] [PubMed]
  35. Liu, X.; Zhou, J.; Qian, H. Short-Term Wind Power Forecasting by Stacked Recurrent Neural Networks with Parametric Sine Activation Function. Electr. Power Syst. Res. 2021, 192, 107011. [Google Scholar] [CrossRef]
  36. Chou, J.; Truong, D. A Novel Metaheuristic Optimizer Inspired by Behavior of Jellyfish in Ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  37. Saha, B.; Goebel, K. Battery Data Set. In NASA Ames Prognostics Data Repository; NASA Ames Research Centre: Mountain View, CA, USA, 2007. Available online: https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognosticdata-repository/ (accessed on 1 September 2022).
Figure 1. Ex-AI methodology to estimate Li-ion battery discharge capacity.
Figure 2. Stacked LSTM architecture.
Figure 3. GRU architecture.
Figure 4. S-RNN architecture.
Figure 5. The discharge capacities of selected batteries corresponding to the respective discharge cycles.
Figure 6. Discharge voltage measured at battery B0005 at different cycles.
Figure 7. Current measured at the load for battery B0005 at different cycles.
Figure 8. Ten-fold procedure applied to the Li-ion battery dataset.
Figure 9. Features selected with the Ex-AI model.
Figure 10. Features selected with the optimized Ex-AI model.
Figure 11. (a–c) RMSE values from the three models: (a) all features; (b) Ex-AI features; (c) jellyfish-optimized Ex-AI features.
Figure 12. (a–c) MAE values from the three models: (a) all features; (b) Ex-AI features; (c) jellyfish-optimized Ex-AI features.
Figure 13. (a–c) MAPE values from the three models: (a) all features; (b) Ex-AI features; (c) jellyfish-optimized Ex-AI features.
Figure 14. (a–c) Actual vs. predicted results with jellyfish-optimized Ex-AI features.
Table 1. Batteries selected for the current work, indicating the number of cycles, ambient temperature, and cut-off voltages.

Battery No. | Ambient Temperature | No. of Cycles | Discharge Cut-Off Voltage
B0005 | 24 °C | 168 | 2.7 V
B0006 | 24 °C | 168 | 2.5 V
B0007 | 24 °C | 168 | 2.2 V
B0018 | 24 °C | 132 | 2.5 V
Table 2. Features corresponding to the discharge profile in the Li-ion battery dataset.

Feature Name | Description
Voltage measured | Battery terminal voltage during discharge (V)
Current measured | Battery output current during discharge (A)
Temperature measured | Battery temperature measured during discharge (°C)
Current charge | Current measured at the load during discharge (A)
Voltage charge | Voltage measured at the load during discharge (V)
Time | Time vector from start to end of a discharge cycle (s)
Cycle | Number of discharge cycles for the battery
Battery ID | Identifies the battery number among the four batteries
Table 3. XGBoost parameters.

Parameter | Function | Range
P1 | Learning rate | (0, 1]
P2 | Max depth | [0, inf)
P3 | Min child weight | [0, inf)
P4 | N estimators | [50, 500]
P5 | N jobs | [1, inf)
P6 | Subsample | (0, 1]
Table 4. Selected parameters.

XGBoost | P1 | P2 | P3 | P4 | P5 | P6
Default | 0.3 | 6 | 1 | 100 | 1 | 1
Jellyfish | 0.10 | 2 | 2.79 | 307 | 1 | 0.98
Table 5. Prediction results with individual features.

Feature | Metric | Stacked LSTM (Train / Test / Ten-Fold) | GRU (Train / Test / Ten-Fold) | SRNN (Train / Test / Ten-Fold)
Voltage measured | RMSE | 0.187 / 0.187 / 0.194 | 0.186 / 0.187 / 0.209 | 0.229 / 0.229 / 0.227
Voltage measured | MAE | 0.360 / 0.362 / 1.336 | 0.358 / 0.358 / 1.424 | 0.412 / 0.414 / 1.512
Voltage measured | MAPE | 0.104 / 0.104 / 0.118 | 0.103 / 0.103 / 0.111 | 0.110 / 0.110 / 0.115
Current measured | RMSE | 0.190 / 0.191 / 0.196 | 0.206 / 0.207 / 0.212 | 0.190 / 0.190 / 0.208
Current measured | MAE | 0.365 / 0.367 / 1.360 | 0.396 / 0.396 / 1.480 | 0.365 / 0.367 / 1.416
Current measured | MAPE | 0.103 / 0.104 / 0.119 | 0.118 / 0.118 / 0.122 | 0.104 / 0.104 / 0.112
Temperature measured | RMSE | 0.190 / 0.190 / 0.194 | 0.259 / 0.260 / 0.242 | 0.195 / 0.196 / 0.208
Temperature measured | MAE | 0.367 / 0.369 / 1.344 | 0.479 / 0.482 / 1.664 | 0.369 / 0.371 / 1.472
Temperature measured | MAPE | 0.106 / 0.106 / 0.120 | 0.147 / 0.148 / 0.132 | 0.103 / 0.103 / 0.128
Current charge | RMSE | 0.164 / 0.165 / 0.173 | 0.180 / 0.181 / 0.176 | 0.179 / 0.180 / 0.186
Current charge | MAE | 0.308 / 0.311 / 1.168 | 0.324 / 0.329 / 1.192 | 0.335 / 0.338 / 1.272
Current charge | MAPE | 0.102 / 0.101 / 0.115 | 0.100 / 0.100 / 0.107 | 0.101 / 0.101 / 0.159
Voltage charge | RMSE | 0.183 / 0.184 / 0.192 | 0.186 / 0.187 / 0.205 | 0.191 / 0.191 / 0.223
Voltage charge | MAE | 0.351 / 0.353 / 1.320 | 0.356 / 0.356 / 1.384 | 0.358 / 0.360 / 1.488
Voltage charge | MAPE | 0.156 / 0.157 / 0.165 | 0.158 / 0.158 / 0.173 | 0.159 / 0.160 / 0.186
Time | RMSE | 0.189 / 0.190 / 0.196 | 0.378 / 0.378 / 0.228 | 0.222 / 0.223 / 0.208
Time | MAE | 0.365 / 0.367 / 1.360 | 0.743 / 0.743 / 1.560 | 0.405 / 0.407 / 1.480
Time | MAPE | 0.104 / 0.104 / 0.119 | 0.197 / 0.198 / 0.123 | 0.109 / 0.109 / 0.118
Cycle | RMSE | 0.100 / 0.102 / 0.195 | 0.117 / 0.117 / 0.133 | 0.103 / 0.104 / 0.105
Cycle | MAE | 0.285 / 0.290 / 1.027 | 0.475 / 0.475 / 1.521 | 0.430 / 0.435 / 1.092
Cycle | MAPE | 0.107 / 0.107 / 0.151 | 0.104 / 0.104 / 0.176 | 0.103 / 0.104 / 0.155
Table 6. Computational time required.

Features | Validation | LSTM Time (s) | GRU Time (s) | SRNN Time (s)
All features | Train | 330 | 270 | 550
All features | Test | 30 | 11 | 13
All features | 10 CV | 4200 | 3500 | 6500
Ex-AI features | Train | 308 | 230 | 510
Ex-AI features | Test | 30 | 10 | 10
Ex-AI features | 10 CV | 3900 | 3300 | 6200
Optimized Ex-AI features | Train | 300 | 220 | 490
Optimized Ex-AI features | Test | 27 | 10 | 10
Optimized Ex-AI features | 10 CV | 3500 | 3200 | 6000
