Article

Machine Learning-Based Lifetime Prediction of Lithium Batteries: A Comparative Assessment for Electric Vehicle Applications

1
University of Caen Normandy, LUSAC Laboratory, 50130 Cherbourg-en-Cotentin, France
2
Paris-Saclay University, CentraleSupelec, CNRS, GeePs Laboratory, 91192 Gif Sur Yvette, France
3
Batten College of Engineering and Technology, Old Dominion University, Norfolk, VA 23529, USA
*
Author to whom correspondence should be addressed.
Energies 2026, 19(5), 1203; https://doi.org/10.3390/en19051203
Submission received: 12 January 2026 / Revised: 18 February 2026 / Accepted: 22 February 2026 / Published: 27 February 2026

Abstract

This paper evaluates and compares four data-driven methods (Gaussian Process Regression (GPR), echo state network (ESN), gated recurrent unit (GRU), and long short-term memory (LSTM)) for lithium-ion capacity prognostics adapted to electric vehicle conditions. The comparison aims to identify the most efficient prognosis method under two constraints: limited computational power and the unavailability of on-board capacity measurement, which requires full charge and discharge conditions. The machine learning models are trained using capacity values estimated under vehicle conditions. The ageing data are collected from cycling tests of two battery chemistries, Lithium Iron Phosphate (LFP) and Nickel Manganese Cobalt (NMC), with different ageing trends. The prognosis algorithms are tuned with three different percentages of the data, allowing the methods to be evaluated at different ageing stages. The comparison and analysis of the results show that the ESN outperforms the other methods; it has the lowest prediction error (mean absolute percentage error less than 1.4% at initial ageing of the cells) and the shortest training time, making it the most appropriate method for automotive applications.

1. Introduction

Lithium-ion batteries are currently the most used energy storage system in electric vehicles (EVs) due to their higher performance in comparison with other energy storage systems: high power and energy density, slow self-discharge rate, and fast response time [1,2]. However, as with any electrochemical system, lithium-ion batteries suffer from different irreversible reactions that lead to their degradation and the loss of their characteristics [3]. Ageing mechanisms mainly lead to battery capacity loss and internal impedance increase [4]. Therefore, EV batteries’ prognosis and health management (PHM) are necessary for efficient and secure battery system management. Regarding vehicular applications, the PHM methods focus on monitoring and forecasting the battery capacity evolution to assess battery health status [5]. The state of health (SoH) prediction methods can be model-based or data-driven.
The model-based methods consist of fitting the parameters of a degradation model using available battery measurements. The SoH prediction is obtained by extrapolating the fitted ageing model to the upcoming cycles [6]. Usually, battery prognosis is performed with particle filters (PF) to fit and update the model parameters. The study presented in [7] used a generalised Dakin degradation model to predict the capacity of LFP and LTO cells; the model parameters are optimised using a PF, and the prediction error is less than 6.64%. In [8], the authors used an exponential model to predict the capacity of LCO cells, while the model parameters were fitted with an improved PF algorithm. The study in [9] used a grey model that combines exponential, linear, and polynomial models to predict the battery capacity, with the model's parameters fitted using a PF. The accuracy of the method is verified using the NASA Public Lithium-Ion Battery Data Set. The results show that combining the three models achieves higher prediction accuracy than single-model results.
Data-driven approaches to battery prognosis proposed in the literature use sequences of previous capacity values to predict their values over future cycles [10]. Among the different approaches, machine learning techniques such as Gaussian Process Regression, Support Vector Machine, and deep-learning methods such as recurrent neural networks (RNNs) are widely referenced [10,11,12,13,14,15,16,17,18]. The study in [12] proposed and evaluated an optimised LSTM model for capacity prediction. This method was evaluated based on experimental cycling tests of NCA cells under two different temperatures. The results showed that the capacity was predicted with a mean absolute error (MAE) of less than 0.012 Ah. In [13], the authors used a GPR model to predict the capacity of NCA cells from calendar ageing data; the RMSE is less than 0.0105 Ah. In [14], the authors used an LSTM model for battery prognosis. First, the capacity trends are decomposed using complete ensemble empirical mode decomposition and principal component analysis. Second, an LSTM model based on transfer learning used the decomposed signal to predict the cell's capacity. The relative error of the prediction is less than 1.9%.
Finally, although these prediction methods show good accuracy and robustness for different battery chemistry capacity predictions, the data used to design the models are generally obtained from experimental laboratory tests. This limits their use in electric vehicles because the tests needed to measure capacity are time-consuming, expensive, and require specific conditions [15,16]. In addition, the ageing behaviour varies between cells due to several factors, such as working conditions, position of the cells in the module, and driver behaviour [17]. Therefore, the prognosis methods should take into account the ageing trajectory of each cell to predict its future evolution. The prognosis frameworks proposed in the literature require significant computing resources and usually combine several algorithms and mechanisms for SoH prediction [18,19], making them difficult to implement for on-board training. A practical method of predicting the state of health (SoH) of batteries in electric vehicles should be based on a history of capacity estimated on-board to take into consideration the ageing trajectory of each cell.
Concerning data-driven prediction methods, recurrent neural networks such as LSTM, GRU, and their variants have shown their efficiency for different time-series prediction tasks, particularly for lithium-ion prognostics [10,11,12,13,14,15,16,17,18,19]. The GPR model has proven its efficiency for predicting battery ageing across various studies, due to its low computational cost and minimal data requirements [13,20]. Recently, the echo state network (ESN) has emerged as an alternative to gradient-descent-based training approaches and has been successfully applied to time-series prediction [21]; the main advantages of the ESN are its capacity to learn from a small database and the simplicity of its training process (a linear regression problem), which makes it convenient for applications where computational resources are limited [22]. In the following, the four prediction approaches previously mentioned are evaluated and compared: LSTM, GRU, ESN, and GPR. The objective is to assess and compare simple architectures for capacity prediction in order to select algorithms that achieve higher accuracy with less computational power for electric vehicles. These methods are first trained with measured capacity values to test their ability to predict the battery ageing trend. In the second step, the prediction models are trained using the estimated capacity. This step allows for assessing their accuracy and robustness to the estimation errors in the training data, which is close to the actual conditions under which vehicle battery capacity is estimated. Finally, the prediction models are evaluated using training data at three ageing stages of the cells (initial, medium, and advanced), allowing the assessment of long-term, medium-term, and short-term prediction performance. The model training and validation data are generated experimentally from battery cycling of two cell chemistries (LFP and NMC) with two different ageing trends.
Results are compared and evaluated in terms of prediction accuracy and learning time. The main contributions of this paper are:
The assessment and comparison of four machine learning methods (LSTM, GRU, GPR, and ESN) for the capacity prediction of lithium-ion cells for vehicular applications.
The comparison of the prediction models under two scenarios, with measured or estimated capacity values, and with different percentages of the training data.
The models are evaluated and compared using data from two different ageing trends extracted from the cycling of LFP and NMC cells.
The rest of the paper is organised as follows: Section 2 details the prediction structure, the experimental tests and the training data, and the prognosis models. Section 3 presents the prediction results and their analysis. A conclusion closes the paper.

2. Materials and Methods

2.1. Prognosis Model Structure

The prediction structure used for this application is the non-iterative structure (Figure 1). The training process consists of finding the relationship (the model) between the number of cycles (or cycle duration) and the corresponding capacities [23]. The model predicts capacity over k number of future cycles (as represented in Figure 1). Indeed, the selected structure allows long-term capacity prediction over the battery’s lifetime. Furthermore, unlike the iterative structure, which uses previously predicted values to predict the next one and thus accumulates prediction errors within the model [24], the non-iterative structure should be stable and robust in the face of uncertainties in the estimated capacity values [25]. It should be noted that this structure must be used with caution, as the prediction model cannot be generalised because it is linked to the cell used for training.
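To make the non-iterative structure concrete, the following sketch maps the cycle number directly to capacity and queries the fitted model at future cycle indices in one shot; the capacity history and the quadratic fit are purely illustrative stand-ins for the experimental data and the learned models compared in this paper:

```python
import numpy as np

# Hypothetical ageing history: capacity (Ah) over the first 400 cycles.
rng = np.random.default_rng(0)
cycles = np.arange(1, 401)
capacity = 20.0 - 0.002 * cycles + 0.01 * rng.standard_normal(cycles.size)

# Non-iterative (direct) prediction: fit cycle number -> capacity, then
# evaluate the model directly at future cycle indices. No predicted value
# is fed back as an input, so prediction errors do not accumulate.
coeffs = np.polyfit(cycles, capacity, deg=2)   # stand-in for the ML model
future_cycles = np.arange(401, 1001)
predicted = np.polyval(coeffs, future_cycles)  # k future cycles in one shot
```

The same cycle-indexed interface applies regardless of the fitted model (LSTM, GRU, ESN, or GPR); only the regression function changes.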

2.2. Experimental Design and Training Data

2.2.1. Experimental Design

The experimental data (voltage, temperature, and capacity) used for the model evaluation are obtained from accelerated cycling tests of two battery chemistries (LFP and NMC) under an environmental temperature of 35 °C [26,27]. The cycled cells are charged using Constant-Current–Constant-Voltage mode and then discharged using a dynamic current profile adapted from the Worldwide Harmonized Light Vehicles Test Cycles. Figure 2 shows the cycling profile. A check-up test is performed to monitor the evolution of battery capacity, energy, and internal resistance with cycling [28]. Many studies have shown that ambient temperature significantly affects battery ageing [26]. The higher the environmental temperature, the higher the rate of ageing. In addition, battery temperature rises during charging and discharging due to the internal electrochemical process and the Joule effect. Therefore, the ambient temperature is set to 35 °C to conduct accelerated ageing tests without exceeding 45 °C [27]. More details about the experiments and protocols are provided in [26,28].
In these tests, four LFP cells with a rated capacity of 20 Ah and four NMC cells with a rated capacity of 3 Ah are cycled. Figure 3 shows capacity evolution as a function of the number of cycles for the four cells for each chemistry. The results show that capacity decreases with ageing for both chemistries; the NMC cells lost about 20% of their initial capacity after 985 cycles, while the LFP cells lost 14.2% after 1057 cycles.
The results show that capacity fade varies between cells: regarding the NMC cells, cell #1 and cell #2 lose 20% of their capacity after 716 cycles, while cell #3 and cell #4 lose 20% after 844 and 985 cycles, respectively. The capacity fade of the LFP cells varies between 11% and 15% after 1057 cycles. The comparison of the two chemistries shows that the LFP cells have a longer lifetime than the NMC cells. These results highlight differences in ageing trends between cells and between the two chemistries, allowing evaluation of the adaptability of the prediction models to different ageing trends. Future work will focus on conducting tests under different operating conditions (calendar ageing, different load profiles, temperatures, and SoC windows) to evaluate the models across different scenarios.

2.2.2. Training Data

The prediction models are evaluated considering two scenarios:
-
In the first scenario (Figure 4a), the prognosis models are trained with the capacities measured during laboratory tests. The first step is to evaluate the model’s ability to predict the progression of ageing. This would be different in the case of actual electric vehicles, for which capacity measurements are often unavailable.
-
In the second scenario (Figure 4b), the prediction models use the estimated values to predict the battery capacity evolution. This configuration is more realistic because, in actual applications, the capacity is generally calculated on board.
Several data-driven methods have recently been proposed to estimate battery capacity with high accuracy for vehicular applications, based on available measurements such as voltage, current, and temperature [27,28,29,30]. This study uses the estimation method proposed in [27] to generate a historical record of capacity data. The diagnosis method combines two auto-encoders with an LSTM neural network to estimate capacity from voltage and temperature measurements under the WLTC current profile (Figure 2). The model achieved an accurate estimation of the capacity (mean absolute percentage error less than 2%) for the two chemistries (LFP and NMC).

2.2.3. Evaluation Procedure

The prediction models are trained for each cell at three stages of its life (40%, 60%, and 80% of cycles) to assess their performance. The percentages represent the ratio between the number of cycles used to fit the model and the total number of cycles. This approach enables the evaluation of predictions at three stages of the battery’s life (initial, intermediate, and advanced ageing) and allows the assessment of long-term, medium-term, and short-term prediction performance. The prediction errors are evaluated using the mean absolute percentage error (MAPE) presented in (1) instead of the MAE and RMSE, which would not be suitable because the nominal capacities are 3 Ah and 20 Ah for NMC and LFP cells, respectively:
$\mathrm{MAPE}\,(\%) = \dfrac{100}{N} \sum_{i=1}^{N} \left| \dfrac{\hat{X}_i - X_{\mathrm{exp},i}}{X_{\mathrm{exp},i}} \right|$ (1)
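Equation (1) can be implemented directly; the three-point capacity vector below is illustrative only:

```python
import numpy as np

def mape(x_hat, x_exp):
    """Mean absolute percentage error, Equation (1)."""
    x_hat = np.asarray(x_hat, dtype=float)
    x_exp = np.asarray(x_exp, dtype=float)
    return 100.0 * np.mean(np.abs((x_hat - x_exp) / x_exp))

# A uniform 1% under-estimation of the capacity gives a MAPE of 1%.
measured = np.array([20.0, 19.5, 19.0])   # illustrative capacities (Ah)
predicted = 0.99 * measured
error = mape(predicted, measured)         # -> 1.0
```

Because MAPE is scale-free, it allows a fair comparison between the 3 Ah NMC and 20 Ah LFP cells, unlike MAE or RMSE.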

2.2.4. Long Short-Term Memory

The long short-term memory neural network is a deep recurrent neural network composed of LSTM units. Each unit is made up of three gates: the input gate, the forget gate, and the output gate, described by (2), (3), and (4), respectively [31]:
$i_k = \sigma\left(w_i \left[h_{k-1}, x_k\right] + b_i\right)$ (2)
$f_k = \sigma\left(w_f \left[h_{k-1}, x_k\right] + b_f\right)$ (3)
$o_k = \sigma\left(w_o \left[h_{k-1}, x_k\right] + b_o\right)$ (4)
where $w_X$ and $b_X$ are the weight and bias, respectively, and $\sigma$ is the activation function (sigmoid function). The candidate cell state $\tilde{c}_k$ is obtained from Equation (5):
$\tilde{c}_k = \tanh\left(w_c \left[h_{k-1}, x_k\right] + b_c\right)$ (5)
where $w_c$ is the weight, $b_c$ is the bias, and $\tanh$ is the hyperbolic tangent function.
Subsequently, the cell state $c_k$ is updated based on its previous value $c_{k-1}$ and the candidate value $\tilde{c}_k$ (6). The cell output $h_k$ is calculated using (7):
$c_k = f_k \times c_{k-1} + i_k \times \tilde{c}_k$ (6)
$h_k = o_k \times \tanh\left(c_k\right)$ (7)
This gating mechanism in LSTM networks offers several advantages for time-series problems: it allows the model to retain and propagate relevant information over the processed time steps. In addition, the gates in LSTM provide adaptive memory control: the forget gate allows the model to discard irrelevant information from previous time steps, while the input gate selectively incorporates new information. This adaptability enables the LSTM to focus on the relevant information and reduce the impact of noise and irrelevant features on the sequence [31]. The architecture and hyperparameters are obtained via a trial-and-error approach. The strategy aims to select the optimal architecture by minimising the size and complexity of the neural network; the hyperparameters were selected through a progressive architecture search. Training began with a three-layer neural network, with the number of units per layer varied from 30 to 200 in steps of 10. The architecture was expanded by adding layers only when the validation-set prediction error remained above 3%. Table 1 summarises the LSTM configuration used for the capacity prediction.
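The gate equations (2)–(7) can be traced with a minimal single-cell update; the random weights below are placeholders for illustration, not the trained configuration of Table 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_k, h_prev, c_prev, p):
    """One LSTM cell update following Equations (2)-(7); each weight
    matrix acts on the concatenation [h_{k-1}, x_k]."""
    hx = np.concatenate([h_prev, x_k])
    i_k = sigmoid(p["w_i"] @ hx + p["b_i"])        # input gate, Eq. (2)
    f_k = sigmoid(p["w_f"] @ hx + p["b_f"])        # forget gate, Eq. (3)
    o_k = sigmoid(p["w_o"] @ hx + p["b_o"])        # output gate, Eq. (4)
    c_tilde = np.tanh(p["w_c"] @ hx + p["b_c"])    # candidate state, Eq. (5)
    c_k = f_k * c_prev + i_k * c_tilde             # state update, Eq. (6)
    h_k = o_k * np.tanh(c_k)                       # cell output, Eq. (7)
    return h_k, c_k

rng = np.random.default_rng(1)
n_h, n_x = 4, 1                                    # toy sizes
p = {f"w_{g}": rng.standard_normal((n_h, n_h + n_x)) for g in "ifoc"}
p |= {f"b_{g}": np.zeros(n_h) for g in "ifoc"}
h_k, c_k = lstm_step(np.array([0.5]), np.zeros(n_h), np.zeros(n_h), p)
```

Note that the output $h_k$ is always bounded by the $\tanh$ and sigmoid saturations, which stabilises long-sequence processing.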

2.2.5. Gated Recurrent Unit Network

The gated recurrent unit network is a simplified variant of LSTM networks. The simplification seeks to maintain the performance of the LSTM network unit while decreasing the network complexity [34]. The GRU network is a stack of several layers of gated recurrent units; each unit consists of two components: an update gate and a reset gate. The update gate u k assesses the relevance of features and captures long-term dependencies, whereas the reset gate r k regulates the extent to which past information is discarded.
The two gates are calculated based on the input vector x k and the preceding hidden state h k 1 , as expressed by Equations (8) and (9) [35]:
$u_k = \sigma\left(w_u \left[h_{k-1}, x_k\right] + b_u\right)$ (8)
$r_k = \sigma\left(w_r \left[h_{k-1}, x_k\right] + b_r\right)$ (9)
where $w_X$ and $b_X$ are the weight matrices and the bias vectors, respectively; $\sigma$ denotes the sigmoid activation function.
The candidate hidden state $\tilde{h}_k$ is expressed as follows:
$\tilde{h}_k = \tanh\left(w_h \left[r_k \times h_{k-1}, x_k\right] + b_h\right)$ (10)
where $w_h$ and $b_h$ are the weight and bias, respectively; $\tanh$ represents the hyperbolic tangent function. The current hidden state $h_k$ is then updated based on the candidate $\tilde{h}_k$ and the previous hidden state $h_{k-1}$:
$h_k = \left(1 - u_k\right) \times h_{k-1} + u_k \times \tilde{h}_k$ (11)
The training consists of optimising the network’s parameters (weights and biases) to minimise the error between the predicted and measured values. The GRU architecture and configuration follow the same procedure used for the LSTM architecture (trial and error method). Table 2 summarises the GRU architecture’s hyperparameters used for the capacity prediction.
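A single GRU update following Equations (8)–(11) can be sketched the same way; again, the random weights are placeholders rather than the configuration of Table 2:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_k, h_prev, p):
    """One GRU update following Equations (8)-(11)."""
    hx = np.concatenate([h_prev, x_k])
    u_k = sigmoid(p["w_u"] @ hx + p["b_u"])   # update gate, Eq. (8)
    r_k = sigmoid(p["w_r"] @ hx + p["b_r"])   # reset gate, Eq. (9)
    h_tilde = np.tanh(p["w_h"] @ np.concatenate([r_k * h_prev, x_k])
                      + p["b_h"])             # candidate state, Eq. (10)
    return (1.0 - u_k) * h_prev + u_k * h_tilde   # hidden state, Eq. (11)

rng = np.random.default_rng(2)
n_h, n_x = 4, 1
p = {f"w_{g}": rng.standard_normal((n_h, n_h + n_x)) for g in "urh"}
p |= {f"b_{g}": np.zeros(n_h) for g in "urh"}
h_k = gru_step(np.array([0.5]), np.zeros(n_h), p)
```

Comparing the two sketches makes the simplification explicit: the GRU merges the input and forget gates into a single update gate and drops the separate cell state, reducing the parameter count per unit.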

2.2.6. Echo State Neural Network

The echo state network (ESN), represented in Figure 5, is a recurrent neural network with a different architecture and training process than conventional neural networks (LSTM, GRU, MLP, etc.). The ESN is a class of reservoir computing methods where the inputs are mapped to a non-linear high-dimensional state [36]. The reservoir states act as an expansion of the inputs into a higher-dimensional state space. The output of the ESN is calculated as a linear combination of the reservoir state’s components, the weights of which are the only optimised parameters during the training process. Therefore, the training process becomes a linear regression problem, which reduces its complexity and avoids the drawback of the vanishing gradient present in deep neural networks [37].
In an echo state network with n neurons, the input vector u k is mapped into the recurrent neurons of the reservoir. The ESN state is defined by the vector a k . During each iteration, the ESN state is updated based on the current input u k , the previous state a k 1 and the previous network output y ^ k 1 , as described by Equations (12) and (13) [36]:
$z_k = W a_{k-1} + w_{in} u_k + w_{fb} \hat{y}_{k-1}$ (12)
$a_k = \tanh\left(z_k \times \mathcal{N}_n\left(1, \varepsilon^2\right)\right)$ (13)
where $z_k$ is the pre-activation state vector before being processed by the activation function, $W$ is the weight matrix between the reservoir units, $w_{in}$ is the input weight matrix, $w_{fb}$ is the matrix of feedback connections between the ESN output and the ESN state, and $\tanh$ refers to the hyperbolic tangent function. $\mathcal{N}_n\left(1, \varepsilon^2\right) \in \mathbb{R}^n$ is a state-noise vector composed of elements that follow a normal distribution, where $\varepsilon^2$ is the noise variance. The current output of the ESN is calculated from the output weight matrix $w_{out}$ and the current state $a_k$, as presented in (14):
$\hat{y}_k = a_k w_{out}$ (14)
$W$, $w_{in}$, and $w_{fb}$ are randomly assigned and remain constant during the training and test processes. The output weight matrix $w_{out}$ is the only parameter optimised during training, using linear regression to minimise the squared error between the predicted output $\hat{y}_k$ and the measured value. Three parameters must be set for this network: the number of recurrent neurons, the reservoir density, and the state noise variance. In this study, these parameters are obtained using the trial-and-error method [38,39]: we first set the density to 10% and the noise variance to 0.1, and varied the number of neurons from 500 down to 100 in steps of 50. With a relative prediction error of less than 3%, we set the number of recurrent neurons to 200. Second, the variance is reduced to minimise the prediction fluctuations and sensitivity to outliers during training. Finally, the state noise variance is set to 0.03. The reservoir sparsity is kept constant at 10%.
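The ESN training described above reduces to a linear regression on the reservoir states. The sketch below, on a toy sine sequence, keeps the reported reservoir size (200 neurons), 10% density, and 0.03 state noise; the spectral-radius scaling of 0.9, the input weight range, the additive placement of the noise, and the omission of the output-feedback term $w_{fb}$ are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, density, rho = 200, 0.10, 0.9          # reservoir size, sparsity; rho assumed

# Fixed random reservoir: sparse recurrent weights W rescaled to spectral
# radius rho, plus random input weights w_in. Neither is trained.
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, (n, 1))

# Drive the reservoir with a toy input sequence; state noise is added to
# the pre-activation for simplicity, and output feedback is omitted.
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 400)).reshape(-1, 1)
states = np.zeros((len(u), n))
a = np.zeros(n)
for k in range(len(u)):
    a = np.tanh(W @ a + (w_in @ u[k]) + 0.03 * rng.standard_normal(n))
    states[k] = a

# Training is only a (ridge) linear regression for the readout w_out.
y = np.roll(u.ravel(), -1)[:-1]           # target: next input value
A = states[:-1]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n), A.T @ y)
y_hat = A @ w_out
```

Because only $w_{out}$ is fitted, training cost amounts to a single $n \times n$ linear solve, which is consistent with the short ESN training times reported in Section 3.3.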

2.2.7. Gaussian Process Regression

The GPR is a Bayesian supervised machine learning technique for regression problems [40]. The Gaussian process ($GP$) is defined by the mean $m(x)$ and covariance $\Sigma(x, x')$ functions of the input vector $x$ [41]:
$f(x) \sim GP\left(m(x), \Sigma(x, x')\right)$ (15)
$m(x) = E\left[f(x)\right]$ (16)
$\Sigma(x, x') = E\left[\left(f(x) - m(x)\right)\left(f(x') - m(x')\right)\right]$ (17)
The covariance matrix depends on the selected kernel function. In this application, different kernels (‘Squaredexponential’, ‘Exponential’, ‘Matern32’, and ‘Matern52’ [41,42]) are compared to select the most suitable kernel for battery lifetime prognosis. Table 3 presents the mean absolute error for NMC and LFP cells, considering the three training percentages (40%, 60%, and 80%).
The performance comparison shows that the Matern32 kernel (Equation (18)) provides good and stable accuracy (bold values) compared with other kernels across different ageing stages of NMC and LFP cells. Therefore, Matern32 is selected as a kernel function for this application:
$\Sigma\left(x_i, x_j\right) = \sigma^2 \left(1 + \dfrac{\sqrt{3}\,r}{\sigma_l}\right) \exp\left(-\dfrac{\sqrt{3}\,r}{\sigma_l}\right)$ (18)
where σ is the standard deviation and σ l is the characteristic length scale. These two parameters are optimised during the training phase of the GPR. The Euclidean distance r between x i and x j is calculated by (19):
$r = \sqrt{\left(x_i - x_j\right)^T \left(x_i - x_j\right)}$ (19)
Considering the dataset defined as $\{(x_i, y_i)\}_{i \in \{1, 2, \ldots, m\}}$, the regression model is formulated as follows:
$y_i = f\left(x_i\right) + \epsilon, \quad i \in \{1, 2, \ldots, m\}$ (20)
where $f(\cdot)$ stands for the unknown regression function and $\epsilon$ is Gaussian noise with variance $\sigma_n^2$. Under the assumption that the mean function is zero ($m(x) = 0$) [41,42], the GP is then expressed as follows:
$y \sim GP\left(0, \Sigma(x, x') + \sigma_n^2 I\right)$ (21)
Using the GPR in forecasting problems consists of fitting the GP's free parameters ($\sigma$, $\sigma_l$, and $\sigma_n$) using the input/output pairs $\{(x_i, y_i)\}_{i \in \{1, 2, \ldots, m\}}$. These parameters are adjusted to minimise the error between the predicted and measured values. The fitted model is then used to predict the output values $\{\bar{y}_i\}_{i \in \{m+1, m+2, \ldots, n\}}$ corresponding to the future inputs $\{\bar{x}_i\}_{i \in \{m+1, m+2, \ldots, n\}}$.
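Under the zero-mean assumption, the GP posterior mean at new inputs is $K(x_*, X)\,[K(X, X) + \sigma_n^2 I]^{-1} y$. The sketch below applies the Matern32 kernel of Equation (18) to an illustrative linear ageing curve; the hyperparameter values ($\sigma = 1$, $\sigma_l = 50$, $\sigma_n^2 = 10^{-4}$) are assumptions for illustration, not the fitted values:

```python
import numpy as np

def matern32(xi, xj, sigma=1.0, ell=50.0):
    """Matern 3/2 kernel, Equation (18), with r from Equation (19)."""
    r = np.abs(xi[:, None] - xj[None, :])      # pairwise distances (1-D inputs)
    s = np.sqrt(3.0) * r / ell
    return sigma**2 * (1.0 + s) * np.exp(-s)

# Illustrative training data: capacity (Ah) vs cycle number.
x_train = np.arange(0.0, 400.0, 20.0)
y_train = 20.0 - 0.002 * x_train
x_test = np.array([410.0, 600.0, 2000.0])

# Posterior mean with zero prior mean, Equation (21).
sn2 = 1e-4
K = matern32(x_train, x_train) + sn2 * np.eye(x_train.size)
mu = matern32(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Evaluating the far query (cycle 2000) shows the posterior mean decaying toward the zero prior mean instead of following the ageing trend, which illustrates the long-horizon limitation of a local kernel discussed in Section 3.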

3. Results

3.1. NMC Cells

Figure A1, Figure A2, Figure A3, Figure A4, Figure A5 and Figure A6 present the evolution of the predicted and measured capacity using measured and estimated data for all NMC cells. Table 4 and Table 5 present the MAPE of the capacity prediction of each cell, using experimental and estimated training data, respectively. The mean and standard deviation of the MAPE values are also provided. Figure 6 and Figure 7 present the evolution of the mean MAPE and the standard deviation for NMC cells for each prediction model when the prediction models are trained with measured and estimated data, respectively.
In the model trained with measured capacity, the best results are obtained with ESN (MAPE average value lower than 1% even with only 40% of the training dataset), followed by LSTM and GRU (MAPE average value around 1%). Regarding GPR, the results show that non-uniformity in the ageing trend significantly increases the prediction error, which is the case for cells #3 and #4:
-
At initial ageing (40% of the dataset), the error is no greater than 2% for LSTM and GRU and 1.5% for ESN. At the same time, the error reaches 25% for GPR.
-
At average ageing (60% of the dataset), the error is lower than 2% for LSTM and 2.2% for GRU. These errors are higher than those of the initial ageing due to the higher prediction error of cell #4. Regarding ESN, the MAPE reaches 1.07%. In comparison, the error reaches 8% for GPR.
-
In advanced ageing conditions (80% of the dataset), the four prediction models present a lower prediction error than the first two ageing cases. For LSTM and ESN models, the error is lower than 0.91%, less than 0.51% for GRU, and 1.45% for GPR.
Regarding the performance of the prediction models trained with estimated capacity values, the prediction errors are higher than in the previous case because they include the capacity estimation errors. The predicted capacities converge to the target values with maximum errors of 2.73% for LSTM, 3.27% for GRU, 2.12% for ESN, and 32.74% for GPR. Regarding the evolution of prediction accuracy as the training percentage increases, the following conclusions can be drawn:
-
At initial ageing (40% of the dataset), the maximum error is no greater than 1.60% for LSTM, 2.51% for GRU, and 1.07% for ESN. The error reaches 32.74% for GPR.
-
At average ageing (60% of the dataset), the prediction error reaches 1.28%, 2.10%, 1.74%, and 14.32% for LSTM, GRU, ESN, and GPR, respectively.
-
At advanced ageing conditions (80% of the dataset), the highest prediction errors for LSTM, GRU, and ESN increase to 2.73%, 3.27%, and 2.12%, respectively. Meanwhile, the mean error is drastically reduced to 0.88% with GPR.

3.2. LFP Cells

Figure A7, Figure A8, Figure A9, Figure A10, Figure A11 and Figure A12 present the evolution of the predicted and measured capacity when the prediction models are trained using measured and estimated data. Table 6 and Table 7 display the MAPE of all models trained with estimated and measured data, respectively; the mean and the standard deviation are also provided. Figure 8 and Figure 9 present the evolution of the average MAPE and the standard deviation for LFP cells when the models are trained with measured and estimated data, respectively.
In the case of using experimental data as a training set, the LSTM, GRU, and ESN present a maximum prediction error of up to 2% for all study cases. Regarding the GPR model, the prediction error decreases from 13.19% when 40% of the dataset is used for training to 0.43% when 80% is used. The errors are more significant for the cells whose ageing evolution is non-uniform, as for cell #2.
Regarding the model’s performance when trained with experimental data, the following conclusions can be drawn:
At initial ageing (40% of the dataset), the three recurrent networks (LSTM, GRU, and ESN) achieve good results: the error is less than 1.93% for LSTM, 1.06% for GRU, and 0.64% for ESN. The GPR prediction error is not higher than 3% for cells #1, #3, and #4, while for cell #2, it reaches 13.19%.
At average ageing (60% of the dataset), the prediction accuracy increases: the error is not higher than 0.42% for LSTM, 0.87% for GRU, and 0.37% for ESN. Regarding GPR, the prediction error is still higher for cell #2 (less than 4%), while it is not higher than 0.7% for the other cells.
The four prediction models perform better at advanced ageing conditions (80% of the dataset). The error is lower than 1% for LSTM and GRU and lower than 0.5% for ESN and GPR.
Regarding the models' performance when they are trained with estimated data, the prediction results of the LFP cells are in agreement with those of the NMC cells. The prediction errors are higher than when the models are trained with measured capacity (Table 6). The maximum MAPE is less than 3.15% for LSTM, 3.35% for GRU, and 2.31% for ESN. The GPR is the most affected by the estimated capacity errors, with a MAPE as high as 22.54%:
-
At initial ageing (40% of the cycles), the three recurrent networks (LSTM, GRU, and ESN) achieve good results. The maximum error is less than 2.72% for LSTM, 2.59% for GRU, and 2.31% for ESN. Meanwhile, the prediction error for GPR varies between 16.75% and 22.54%.
-
At average ageing (60% of the dataset), the prediction errors decrease to 1.22% and 1.99% for LSTM and ESN, respectively. Meanwhile, for GRU, the MAPE increases to 3.34%. GPR still presents a higher prediction error, ranging from 3.15% to 7.4%, compared to the three recurrent networks.
-
At advanced ageing (80% of the dataset), the prediction errors increase to 3.15% and 3.35% for LSTM and GRU, respectively. The MAPE for ESN is less than 1.92%, while for GPR, the prediction error is less than 2.03%.

3.3. Training Time

Finally, Table 8 displays the training times and the memory usage during training of each prediction model, considering the three dataset percentages (40%, 60%, and 80%) for NMC and LFP cells. The prediction algorithms are developed in a Python 3.10.19 environment using the TensorFlow 2.20.0 and GPflow 2.10.0 libraries. The algorithms are designed and evaluated in a Windows 10® environment, on a computer with a 2.50 GHz Intel Core i5 processor and 16 GB of RAM.
As expected, the results show that the required training time increases as the data percentage increases. The comparative evaluation shows that LSTM has the longest training duration, followed by GRU and GPR. ESN has a much shorter training time (less than 0.15 s), consistent with the simplicity of its training process described above. Regarding memory usage, ESN exhibits the lowest memory requirement, followed by GPR, LSTM, and GRU. The GRU requires more memory than the LSTM because it uses two hidden layers (Table 2).
The evaluation of the prognosis models on both NMC and LFP cells shows that the three recurrent networks (LSTM, GRU, and ESN) exhibit good prediction accuracy at the three ageing stages (40%, 60%, and 80%). The results also highlight that the prediction errors differ from one cell to another. These differences are related to the ageing evolution of each cell. Regarding GPR, the evaluation results indicate its limitations for long-term prediction and its sensitivity to the non-uniformity of the ageing trend [42]. The limitation of the GPR for long-term prediction is related to the use of a local kernel. As presented in Figure 10, the higher the cycle number, the greater the distance $\|x_i - x_j\|$, and thus the kernel function decays. Consequently, the prediction converges to the mean of the function, which was set to zero [33,34].
Regarding LSTM and GRU networks, the results confirm the efficiency of the gating mechanisms for filtering the training data (Equations (2)–(4), (8) and (9)) [43]. The high accuracy of the LSTM is due to the superiority of the gating mechanisms compared with GRU. The results obtained for the ESN also confirm its efficiency in handling time-series problems and capacity to learn from noisy data, as reported in several studies [44,45].
The results of the four prediction models, when using the measured dataset, show that the higher the percentage of cycles used for model fitting, the better the prediction accuracy. When the prediction models are trained using estimated capacity values, the results show that the prediction error increases due to the estimation errors and outliers in the training data. However, the mean error for LSTM, GRU, and ESN is always less than 2% (between 1% and 1.5% for LSTM and ESN). Finally, comparing the prognosis models, ESN outperforms the other models in accuracy and computational time, making it the most suitable for vehicular applications [46]. Table 9 presents the MAPE of the proposal compared with other prognostic models from the literature. The results show that even when the prediction models (LSTM and ESN) are trained with estimated capacity values, the prediction accuracy is comparable to that obtained with models trained on measured capacity values.

4. Conclusions

This paper evaluated and compared four machine learning methods for lithium-ion battery capacity prediction under vehicle conditions and constraints: long short-term memory (LSTM), gated recurrent unit (GRU), echo state network (ESN), and Gaussian Process Regression (GPR). The models were first trained with capacity values obtained from experimental measurements and then evaluated in a second scenario in which the training capacities were estimated. This scenario is more realistic, since capacity is usually estimated in real applications, and it assesses the robustness of the predictions to estimation errors in the training data. The ageing data used for model evaluation were generated experimentally by cycling NMC and LFP cells. The two chemistries exhibit different ageing trends, allowing the prognosis methods to be tested on both behaviours. The results show that the ESN achieves the best accuracy and the shortest training time in all studied cases (MAPE less than 1.4% at the early prediction stage, i.e., the initial 40% of the cycle life). The GRU and LSTM achieve good predictive performance but suffer from long training times due to their more complex training processes. The GPR model is accurate at the late prediction stage but shows limited performance at the early stage and is sensitive to non-uniformity and perturbations in the ageing trends. Future work will focus on exploring different prediction architectures and models and on extending the evaluation to other working conditions (field data, varying temperatures, and loading profiles) to improve the robustness and flexibility of the prediction models.

Author Contributions

Conceptualization, A.H., R.P., D.D. and H.G.; methodology, A.H., R.P., D.D. and H.G.; software, A.H.; validation, A.H., R.P., D.D., H.G., B.T.-I., P.M.B. and H.C.; formal analysis, A.H., R.P., D.D., H.G., B.T.-I., P.M.B. and H.C.; investigation, A.H., R.P., D.D., H.G., B.T.-I., P.M.B. and H.C.; data curation, A.H.; writing—original draft preparation, A.H.; writing—review and editing, A.H., R.P., D.D., H.G. and H.C.; visualisation, A.H., R.P., D.D., H.G., B.T.-I., P.M.B. and H.C.; supervision, R.P., D.D., H.G., B.T.-I., P.M.B. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Capacity prediction results based on experimental data for NMC cells (trained on 40% of cycles).
Figure A2. Capacity prediction results based on estimated data for NMC cells (trained on 40% of cycles).
Figure A3. Capacity prediction results based on experimental data for NMC cells (trained on 60% of cycles).
Figure A4. Capacity prediction results based on estimated data for NMC cells (trained on 60% of cycles).
Figure A5. Capacity prediction results based on experimental data for NMC cells (trained on 80% of cycles).
Figure A6. Capacity prediction results based on estimated data for NMC cells (trained on 80% of cycles).

Appendix B

Figure A7. Capacity prediction results based on experimental data for LFP cells (trained on 40% of cycles).
Figure A8. Capacity prediction results based on estimated data for LFP cells (trained on 40% of cycles).
Figure A9. Capacity prediction results based on experimental data for LFP cells (trained on 60% of cycles).
Figure A10. Capacity prediction results based on estimated data for LFP cells (trained on 60% of cycles).
Figure A11. Capacity prediction results based on experimental data for LFP cells (trained on 80% of cycles).
Figure A12. Capacity prediction results based on estimated data for LFP cells (trained on 80% of cycles).

References

1. Krishna, T.N.V.; Kumar, S.V.S.V.P.D.; Rao, S.S.; Chang, L. Powering the Future: Advanced Battery Management Systems (BMS) for Electric Vehicles. Energies 2024, 17, 3360.
2. Selvaraj, V.; Vairavasundaram, I. A comprehensive review of state of charge estimation in lithium-ion batteries used in electric vehicles. J. Energy Storage 2023, 72, 108777.
3. Cho, S.; Kim, H. AI-Integrated Smart Grading System for End-of-Life Lithium-Ion Batteries Based on Multi-Parameter Diagnostics. Energies 2025, 18, 5915.
4. Che, Y.; Deng, Z.; Lin, X.; Hu, L.; Hu, X. Predictive Battery Health Management with Transfer Learning and Online Model Correction. IEEE Trans. Veh. Technol. 2021, 70, 1269–1277.
5. Zhang, Y.; Wang, A.; Zhang, C.; He, P.; Shao, K.; Cheng, K.; Zhou, Y. State-of-Health Estimation for Lithium-Ion Batteries via Incremental Energy Analysis and Hybrid Deep Learning Model. Batteries 2025, 11, 217.
6. Chen, L.; Chen, J.; Wang, H.; Wang, Y.; An, J.; Yang, R.; Pan, H. Remaining Useful Life Prediction of Battery Using a Novel Indicator and Framework with Fractional Grey Model and Unscented Particle Filter. IEEE Trans. Power Electron. 2020, 35, 5850–5859.
7. Liu, Z.; Sun, G.; Bu, S.; Han, J.; Tang, X.; Pecht, M. Particle Learning Framework for Estimating the Remaining Useful Life of Lithium-Ion Batteries. IEEE Trans. Instrum. Meas. 2017, 66, 280–293.
8. El Mejdoubi, A.; Chaoui, H.; Gualous, H.; Van Den Bossche, P.; Omar, N.; Van Mierlo, J. Lithium-ion batteries health prognosis considering ageing conditions. IEEE Trans. Power Electron. 2019, 34, 6834–6844.
9. Hong, S.; Qin, C.; Lai, X.; Meng, Z.; Dai, H. State-of-health estimation and remaining useful life prediction for lithium-ion batteries based on an improved particle filter algorithm. J. Energy Storage 2023, 64, 107179.
10. Meng, X.; Zhang, H.; Lan, H.; Cui, S.; Huang, Y.; Li, G.; Dong, Y.; Zhou, S. Lithium-Ion Battery SOH Prediction Method Based on ICEEMDAN+FC-BiLSTM. Energies 2025, 18, 5617.
11. Song, K.; Hu, D.; Tong, Y.; Yue, X. Remaining life prediction of lithium-ion batteries based on health management: A review. J. Energy Storage 2023, 57, 106193.
12. Zhang, Y.; Xiong, R.; He, H.; Pecht, M.G. Long Short-Term Memory Recurrent Neural Network for Remaining Useful Life Prediction of Lithium-Ion Batteries. IEEE Trans. Veh. Technol. 2018, 67, 5695–5705.
13. Liu, K.; Li, Y.; Hu, X.; Lucu, M.; Widanage, W.D. Gaussian Process Regression With Automatic Relevance Determination Kernel for Calendar Ageing Prediction of Lithium-Ion Batteries. IEEE Trans. Ind. Inform. 2020, 16, 3767–3777.
14. Chen, Z.; Chen, L.; Shen, W.; Xu, K. Remaining Useful Life Prediction of Lithium-Ion Battery via a Sequence Decomposition and Deep Learning Integrated Approach. IEEE Trans. Veh. Technol. 2022, 71, 1466–1479.
15. Xu, Z.; Wang, J.; Lund, P.D.; Zhang, Y. Estimation and prediction of state of health of electric vehicle batteries using discrete incremental capacity analysis based on real driving data. Energy 2021, 225, 120160.
16. Peng, J.; Meng, J.; Chen, D.; Liu, H.; Hao, S.; Sui, X.; Du, X. A Review of Lithium-Ion Battery Capacity Estimation Methods for Onboard Battery Management Systems: Recent Progress and Perspectives. Batteries 2022, 8, 229.
17. Guo, J.; Li, Y.; Pedersen, K.; Stroe, D.-I. Lithium-Ion Battery Operation, Degradation, and Ageing Mechanism in Electric Vehicles: An Overview. Energies 2021, 14, 5220.
18. Yuzhao, L.; Ming, Z.; Tian, C.; Zewei, Z. Remaining useful life prediction for stratospheric airships based on a channel and temporal attention network. Commun. Nonlinear Sci. Numer. Simul. 2025, 143, 108634.
19. Wan, A.; Zhang, H.; Chen, T.; AL-Bukhaiti, K.; Wang, W. A hybrid deep learning model for robust aeroengine remaining useful life prediction. Signal Image Video Process. 2025, 19, 550.
20. Zhang, C.; Zhao, S.; He, Y. An Integrated Method of the Future Capacity and RUL Prediction for Lithium-Ion Battery Pack. IEEE Trans. Veh. Technol. 2022, 71, 2601–2613.
21. Lin, X.; Yang, Z.; Song, Y. Short-term stock price prediction based on echo state networks. Expert Syst. Appl. 2009, 36, 7313–7317.
22. Ghazijahani, M.S.; Heyder, F.; Schumacher, J.; Cierpka, C. On the benefits and limitations of Echo State Networks for turbulent flow prediction. Meas. Sci. Technol. 2022, 34, 014002.
23. Hu, X.; Xu, L.; Lin, X.; Pecht, M. Battery Lifetime Prognostics. Joule 2020, 4, 310–346.
24. Vassallo, D.; Krishnamurthy, R.; Sherman, T.; Fernando, H.J.S. Analysis of Random Forest Modeling Strategies for Multi-Step Wind Speed Forecasting. Energies 2020, 13, 5488.
25. Liu, D.; Zhou, J.; Pan, D.; Peng, Y.; Peng, X. Lithium-ion battery remaining useful life estimation with an optimized Relevance Vector Machine algorithm with incremental learning. Measurement 2015, 63, 143–151.
26. Hammou, A.; Petrone, R.; Gualous, H.; Diallo, D. Deep learning framework for state of health estimation of NMC and LFP Li-ion batteries for vehicular applications. J. Energy Storage 2023, 70, 108083.
27. Barcellona, S.; Colnago, S.; Codecasa, L.; Piegari, L. An accelerated ageing test procedure for lithium-ion battery based on a dual-temperature approach. J. Power Sources 2025, 644, 237167.
28. Hammou, A.; Petrone, R.; Diallo, D.; Gualous, H. Estimating the Health Status of Li-ion NMC Batteries from Energy Characteristics for EV Applications. IEEE Trans. Energy Convers. 2023, 38, 2160–2168.
29. Wang, Z.; Zhao, X.; Fu, L.; Zhen, D.; Gu, F.; Ball, A.D. A review on rapid state of health estimation of lithium-ion batteries in electric vehicles. Sustain. Energy Technol. Assess. 2023, 60, 103457.
30. Heinrich, F.; Pruckner, M. Virtual experiments for battery state of health estimation based on neural networks and in-vehicle data. J. Energy Storage 2022, 48, 103856.
31. Chai, X.; Li, S.; Liang, F. A novel battery SOC estimation method based on random search optimized LSTM neural network. Energy 2024, 306, 132583.
32. Liu, Y.; Wang, X.; Wang, L.; Liu, D. A modified leaky ReLU scheme (MLRS) for topology optimization with multiple materials. Appl. Math. Comput. 2019, 352, 188–204.
33. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980. https://arxiv.org/abs/1412.6980v9.
34. Pikus, M.; Wąs, J.; Kozina, A. Using Deep Learning in Forecasting the Production of Electricity from Photovoltaic and Wind Farms. Energies 2025, 18, 3913.
35. Liu, Q.; Zang, W.; Zhang, W.; Zhang, Y.; Tong, Y.; Feng, Y. Steady-State Model Enabled Dynamic PEMFC Performance Degradation Prediction via Recurrent Neural Network. Energies 2025, 18, 2665.
36. Prokhorov, D. Echo state networks: Appeal and challenges. In Proceedings of the International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; Volume 3, pp. 1463–1466.
37. Matzner, F. Hyperparameter Tuning in Echo State Networks. In Proceedings of the GECCO 2022—Proceedings of the 2022 Genetic and Evolutionary Computation Conference, Boston, MA, USA, 9–13 July 2022; pp. 404–412.
38. Rodan, A.; Tiňo, P. Minimum complexity echo state network. IEEE Trans. Neural Netw. 2011, 22, 131–144.
39. Lukoševičius, M. A Practical Guide to Applying Echo State Networks. In Neural Networks: Tricks of the Trade; Lecture Notes in Computer Science; Montavon, G., Orr, G.B., Müller, K.R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7700.
40. Meng, J.; Yue, M.; Diallo, D. A Degradation Empirical-Model-Free Battery End-Of-Life Prediction Framework Based on Gaussian Process Regression and Kalman Filter. IEEE Trans. Transp. Electrif. 2022, 9, 4898–4908.
41. Cong, X.; Zhang, C.; Jiang, J.; Zhang, W.; Jiang, Y. A Hybrid Method for the Prediction of the Remaining Useful Life of Lithium-Ion Batteries with Accelerated Capacity Degradation. IEEE Trans. Veh. Technol. 2020, 69, 12775–12785.
42. Li, Z.; Hong, X.; Hao, K.; Chen, L.; Huang, B. Gaussian process regression with heteroscedastic noises—A machine-learning predictive variance approach. Chem. Eng. Res. Des. 2020, 157, 162–173.
43. Liu, M.; Wang, Y.; Wang, J.; Wang, J.; Xie, X. Speech Enhancement Method Based on LSTM Neural Network for Speech Recognition. In Proceedings of the 2018 14th IEEE International Conference on Signal Processing (ICSP), Beijing, China, 12–16 August 2018; pp. 245–249.
44. Duarte, A.L.O.; Eisencraft, M. Denoising of discrete-time chaotic signals using echo state networks. Signal Process. 2024, 214, 109252.
45. Wen, S.; Hu, R.; Yang, Y.; Huang, T.; Zeng, Z.; Song, Y.-D. Memristor-Based Echo State Network with Online Least Mean Square. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1787–1796.
46. Shrivastava, P.; Naidu, P.A.; Sharma, S.; Panigrahi, B.K.; Garg, A. Review on technological advancement of lithium-ion battery states estimation methods for electric vehicle applications. J. Energy Storage 2023, 64, 107159.
47. Jiang, B.; Dai, H.; Wei, X.; Jiang, Z. Multi-Kernel Relevance Vector Machine with Parameter Optimization for Cycling Ageing Prediction of Lithium-Ion Batteries. IEEE J. Emerg. Sel. Top. Power Electron. 2023, 11, 175–186.
48. Wang, Q.; Yan, H.; Wang, Y.; Yang, Y.; Liu, X.; Zhu, Z.; Huang, G.; Huang, Z. Probabilistic State of Health Prediction for Lithium-Ion Batteries Based on Incremental Capacity and Differential Voltage Curves. Energies 2025, 18, 5450.
49. Hou, X.; Guo, X.; Yuan, Y.; Zhao, K.; Tong, L.; Yuan, C.; Teng, L. The state of health prediction of Li-ion batteries based on an improved extreme learning machine. J. Energy Storage 2023, 70, 108044.
Figure 1. The prediction model structure.
Figure 2. The applied current during cell cycling.
Figure 3. The evolution of cells’ capacity with ageing for NMC and LFP chemistry.
Figure 4. The two configurations used for the prognosis model’s evaluation (a) With the measured capacity values, (b) With the estimated capacity values.
Figure 5. The architecture of the echo state network [38].
Figure 6. The mean MAPE as a function of the training data percentage in the case of the measured training set for NMC cells.
Figure 7. The evolution of the mean error as a function of the training data percentage in the case of the estimated training set for NMC cells.
Figure 8. The evolution of the mean MAPE in the case of the measured training set for LFP cells.
Figure 9. The evolution of the mean error as a function of the training data percentage in the case of the estimated training set for LFP cells.
Figure 10. The evolution of the kernel used for the GPR model as a function of the distance between x_i and x_j.
Table 1. The selected hyperparameters of the LSTM algorithm.

Parameter | Description
Input layer | 50 LSTM units
Hidden layer | 100 LSTM units
Output layer | One dense unit with Leaky Rectified Linear activation function [32]
Optimisation algorithm | ‘ADAM’ optimizer [33] with ‘MSE’ loss function
Table 2. The selected hyperparameters for the GRU algorithm.

Parameter | Description
Input layer | 50 GRUs
Hidden layer 1 | 100 GRUs
Hidden layer 2 | 100 GRUs
Output layer | One dense unit with Leaky Rectified Linear activation function [32]
Optimisation algorithm | ‘ADAM’ optimizer [33] with ‘MSE’ loss function
Table 3. Mean Absolute Error comparison of the different kernels.

Battery Chemistry | Percentage Data | Squared Exponential | Exponential | Matern 3/2 | Matern 5/2
NMC | 40% | 6.27 | 4.16 | 0.76 | 2.26
NMC | 60% | 1.82 | 1.85 | 0.42 | 0.48
NMC | 80% | 0.09 | 0.58 | 0.11 | 0.19
LFP | 40% | 13.48 | 2.23 | 1.67 | 3.95
LFP | 60% | 1.62 | 0.98 | 0.24 | 0.39
LFP | 80% | 0.13 | 0.17 | 0.02 | 0.02
Table 4. The MAPE of the predictions for NMC cells (models trained with measured data).

Model | Percentage Data | Cell #1 | Cell #2 | Cell #3 | Cell #4 | Mean | Standard Deviation
LSTM | 40% | 0.73 | 1.51 | 0.77 | 1.74 | 1.19 | 0.51
LSTM | 60% | 0.60 | 0.38 | 0.81 | 1.96 | 0.94 | 0.70
LSTM | 80% | 0.86 | 0.79 | 0.66 | 0.53 | 0.71 | 0.15
GRU | 40% | 1.13 | 1.68 | 0.54 | 1.41 | 1.19 | 0.49
GRU | 60% | 0.35 | 0.55 | 0.51 | 2.18 | 0.90 | 0.86
GRU | 80% | 0.51 | 0.44 | 0.43 | 0.40 | 0.45 | 0.05
ESN | 40% | 0.46 | 0.47 | 0.63 | 1.48 | 0.76 | 0.49
ESN | 60% | 0.28 | 0.33 | 0.51 | 1.07 | 0.55 | 0.36
ESN | 80% | 0.25 | 0.28 | 0.36 | 0.91 | 0.45 | 0.31
GPR | 40% | 0.35 | 0.16 | 8.22 | 25.11 | 8.46 | 11.72
GPR | 60% | 0.08 | 0.10 | 1.55 | 8.07 | 2.45 | 3.81
GPR | 80% | 0.13 | 0.17 | 0.06 | 1.45 | 0.45 | 0.66
Table 5. The MAPE of the predictions for NMC cells (models trained with estimated data).

Model | Percentage Data | Cell #1 | Cell #2 | Cell #3 | Cell #4 | Mean | Standard Deviation
LSTM | 40% | 0.45 | 0.40 | 1.60 | 0.81 | 0.81 | 0.55
LSTM | 60% | 1.28 | 0.40 | 0.87 | 2.52 | 1.27 | 0.91
LSTM | 80% | 2.73 | 0.73 | 0.95 | 1.02 | 1.36 | 0.92
GRU | 40% | 1.39 | 1.60 | 2.51 | 0.79 | 1.57 | 0.71
GRU | 60% | 1.98 | 0.43 | 0.89 | 2.10 | 1.35 | 0.82
GRU | 80% | 3.27 | 0.73 | 1.02 | 0.90 | 1.48 | 1.20
ESN | 40% | 0.92 | 1.07 | 0.98 | 0.83 | 0.95 | 0.10
ESN | 60% | 1.74 | 1.00 | 0.78 | 0.96 | 1.12 | 0.42
ESN | 80% | 1.70 | 0.82 | 2.12 | 0.70 | 1.34 | 0.68
GPR | 40% | 4.71 | 6.07 | 9.83 | 32.74 | 13.34 | 13.11
GPR | 60% | 3.34 | 2.77 | 1.48 | 14.32 | 5.48 | 5.95
GPR | 80% | 0.24 | 0.04 | 0.30 | 2.92 | 0.88 | 1.37
Table 6. The MAPE of the predictions for LFP cells (models trained with measured data).

Model | Percentage Data | Cell #1 | Cell #2 | Cell #3 | Cell #4 | Mean | Standard Deviation
LSTM | 40% | 1.41 | 1.27 | 1.93 | 1.49 | 1.53 | 0.28
LSTM | 60% | 0.30 | 0.25 | 0.42 | 0.28 | 0.31 | 0.08
LSTM | 80% | 0.62 | 0.66 | 0.91 | 0.71 | 0.73 | 0.13
GRU | 40% | 1.06 | 0.85 | 1.22 | 0.98 | 1.03 | 0.16
GRU | 60% | 0.38 | 0.87 | 0.83 | 0.61 | 0.67 | 0.23
GRU | 80% | 0.73 | 0.94 | 0.99 | 0.76 | 0.86 | 0.13
ESN | 40% | 0.35 | 0.35 | 0.65 | 0.21 | 0.39 | 0.19
ESN | 60% | 0.36 | 0.32 | 0.37 | 0.26 | 0.33 | 0.05
ESN | 80% | 0.20 | 0.20 | 0.27 | 0.20 | 0.22 | 0.03
GPR | 40% | 3.02 | 13.19 | 1.75 | 1.83 | 4.94 | 5.53
GPR | 60% | 0.68 | 3.94 | 0.41 | 0.34 | 1.34 | 1.74
GPR | 80% | 0.09 | 0.43 | 0.04 | 0.05 | 0.15 | 0.18
Table 7. The MAPE of the predictions for LFP cells (models trained with estimated data).

Model | Percentage Data | Cell #1 | Cell #2 | Cell #3 | Cell #4 | Mean MAPE | Standard Deviation
LSTM | 40% | 0.62 | 2.72 | 0.38 | 1.38 | 1.28 | 1.05
LSTM | 60% | 1.22 | 0.65 | 1.22 | 0.28 | 0.84 | 0.46
LSTM | 80% | 0.21 | 0.69 | 3.15 | 0.78 | 1.21 | 1.32
GRU | 40% | 0.51 | 2.36 | 2.58 | 1.39 | 1.71 | 0.95
GRU | 60% | 1.59 | 0.42 | 3.34 | 1.31 | 1.66 | 1.22
GRU | 80% | 0.45 | 1.13 | 3.35 | 1.46 | 1.60 | 1.24
ESN | 40% | 1.54 | 0.82 | 2.31 | 0.70 | 1.34 | 0.74
ESN | 60% | 1.99 | 0.48 | 0.98 | 0.81 | 1.07 | 0.65
ESN | 80% | 0.44 | 0.37 | 1.92 | 0.57 | 0.83 | 0.73
GPR | 40% | 17.48 | 21.82 | 16.57 | 22.54 | 19.60 | 3.01
GPR | 60% | 7.27 | 7.39 | 4.39 | 3.15 | 5.55 | 2.12
GPR | 80% | 2.03 | 0.62 | 0.58 | 0.68 | 0.98 | 0.70
Table 8. Computational time and memory usage of the prediction models for NMC and LFP cells.

Metric | Battery Chemistry | Percentage Data | LSTM | GRU | GPR | ESN
Training time (s) | NMC | 40% | 45.21 | 29.76 | 2.48 | 0.06
Training time (s) | NMC | 60% | 59.91 | 32.02 | 5.44 | 0.09
Training time (s) | NMC | 80% | 75.85 | 34.41 | 11.16 | 0.09
Training time (s) | LFP | 40% | 53.69 | 30.94 | 3.91 | 0.08
Training time (s) | LFP | 60% | 71.79 | 33.74 | 6.48 | 0.09
Training time (s) | LFP | 80% | 88.75 | 37.00 | 14.83 | 0.15
Memory usage (MB) | — | — | 29.48 | 49.59 | 11.33 | 6.71
Table 9. Performance of prognosis models.

Method | Dataset | Data | MAPE
ESN (this study) | Laboratory tests | Estimated | <1.4%
LSTM (this study) | Laboratory tests | Estimated | <1.6%
Empirical mode decomposition and LSTM [14] | NASA and CALCE | Measured | <1.9%
Multi-kernel RVM and particle swarm optimisation [47] | Laboratory tests | Measured | <1.34%
Whale optimisation algorithm and bidirectional LSTM [48] | Oxford dataset | Measured | <1.26%
Extreme learning machine [49] | NASA dataset | Measured | <1.2%

Share and Cite

MDPI and ACS Style

Hammou, A.; Petrone, R.; Diallo, D.; Tala-Ighil, B.; Boussiengue, P.M.; Chaoui, H.; Gualous, H. Machine Learning-Based Lifetime Prediction of Lithium Batteries: A Comparative Assessment for Electric Vehicle Applications. Energies 2026, 19, 1203. https://doi.org/10.3390/en19051203
