Article

Multi-Temperature State-of-Charge Estimation of Lithium-Ion Batteries Based on Spatial Transformer Network

by Yu Cao 1,*, Xin Wen 1,* and Hongyu Liang 2
1 College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2 Zhuhai Southern Intelligent Transportation Co., Ltd., Zhuhai 519088, China
* Authors to whom correspondence should be addressed.
Energies 2024, 17(20), 5029; https://doi.org/10.3390/en17205029
Submission received: 5 September 2024 / Revised: 4 October 2024 / Accepted: 7 October 2024 / Published: 10 October 2024
(This article belongs to the Section F1: Electrical Power System)

Abstract:
Accurately estimating the state of charge of a lithium-ion battery plays an important role in managing battery health and operation. Traditional state-of-charge estimation methods encounter difficulties in processing diverse temporal data sequences and producing adaptive predictions. To address these problems, we propose a spatial transformer network (STN) for multi-temperature state-of-charge estimation of lithium-ion batteries. The proposed STN consists of a convolutional neural network with a temporal–spatial module and a long short-term memory transformer network, which together are able to efficiently capture the spatiotemporal features. To train the STN under multi-temperature conditions, denoising augmentation and attention prediction are proposed to enhance the model’s generalizability within a unified framework. Experimental results show that the proposed method reduces the mean absolute error and root mean square error by 41% and 43%, respectively, compared with existing methods; in the semi-supervised setting, the respective reductions are 23% and 38%. These results indicate that effective extraction of spatiotemporal features combined with denoising augmentation benefits state-of-charge estimation and can promote the development of battery management systems based on semi-supervised learning.

1. Introduction

Due to growing concerns over global warming, greenhouse gas emissions, and the depletion of fossil fuels, electric vehicles (EVs) have gained widespread popularity in recent decades for their improved performance and efficiency. Lithium-ion batteries (LIBs) are commonly used in various devices, including EVs, thanks to their light weight, high energy density, good performance, and long lifetime [1]. Owing to these attractive characteristics, substantial investments have been made to improve the durability and reliability of LIB technology [2]. These investments aim to enhance the stability and robustness of LIBs, making them more efficient and reliable for various applications. However, capacity degradation occurs as the number of charge–discharge cycles increases. Prognostics and health management (PHM) methods, particularly estimation of the battery’s state of charge (SOC), play a crucial role in enabling systems such as the battery management system (BMS) to ensure the reliability and safety of electronic devices [3]. The SOC represents the remaining capacity of the battery relative to its full charge capacity. Estimating the SOC in advance is essential, as it provides key information for battery maintenance and replacement and thereby helps to ensure safety [4].
A robust BMS is essential for ensuring the safe and reliable operation of LIBs. The BMS assesses various battery states, including the SOC, in order to prevent overcharging or overdischarging. Accurate SOC estimation is essential for the health and longevity of electronic devices. However, estimating the SOC of a battery is challenging due to intricate electrochemical reactions and sensitivity to environmental factors. Common approaches to addressing this challenge rely on electrochemical or electrical equivalent models of the battery [5]. However, these methods have drawbacks, including difficulties in parameter identification, complex computations, and model mismatch caused by battery aging. Traditional SOC estimation methods [6,7] such as Coulomb counting and voltage-based methods often suffer from limited accuracy and reliability, especially under dynamic operating conditions and with aging batteries. Recent progress in machine learning has sparked interest in data-driven approaches for battery health estimation [5]. The extensive application of lithium batteries in scenarios such as energy storage and electric vehicles generates large volumes of data, which support the adaptation of data-driven models to new operating contexts. Due to its strong learning and predictive capabilities, machine learning (ML) plays a significant role in areas related to LIB research, production, service, and disposal, contributing to the refinement of intelligent management across all stages [8]. ML-based methods have been extensively applied to SOC estimation, and researchers have validated these approaches by conducting experiments on LIBs under various conditions such as EV drive cycles, temperature fluctuations, noise interference, and aging effects [9,10]. ML-based methods have also shown promise in addressing related challenges by leveraging data-driven models to capture the complex relationships between the SOC and various battery parameters.
Recently, deep learning has achieved significant success in various tasks, including estimation, classification, and prediction [11]. Deep learning models are widely used for SOC estimation to capture the nonlinear nature of battery capacity [12]. For instance, convolutional neural networks (CNNs) have been proposed to process spatial hierarchies in data, allowing the charging process and terminal voltage curve of a battery to be modeled and their relationship captured. Recurrent neural networks (RNNs) have been developed to model the nonlinear trends relevant to SOC assessment. Long short-term memory (LSTM) networks have been applied to analyze capacity sequences in order to understand the trend of battery degradation. Cao et al. [13] proposed a novel LSTM–transformer hybrid architecture with attention mechanisms and sequence modeling specialized for multitask real-time predictions. Various approaches for enhancing the accuracy and efficiency of SOC estimation have been proposed in the recent literature [14,15]. Chen et al. [16] proposed a long short-term memory recurrent neural network (LSTM–RNN) with extended input and constrained output for battery SOC estimation. El Fallah et al. [17] compared the impact of different network architectures on SOC estimation and identified the respective strengths of the various networks. These studies show the significance of deep learning-based methods for accurately estimating the SOC in various applications, showcasing their potential for improving BMSs.
When it comes to estimating the SOC, architectures based on the RNN framework, including gated recurrent unit (GRU) and LSTM models, have proven adept at handling sequential data. However, existing RNN-based models face challenges when raw data collected at varying temperatures are fed directly into the network for representation learning, as the data carry noise [18]. This noise can hinder the ability of the resulting models to accurately learn and represent the data. In addition, training RNN-based networks recurrently on sequential data incurs high time costs and degrades performance due to long-term dependencies. Moreover, most approaches treat data denoising and model estimation as separate tasks, neglecting the correlation between them.
To address the problems mentioned above, this paper proposes a spatial transformer network (STN) for multi-temperature state-of-charge estimation of lithium-ion batteries. The STN is equipped with an attention predictor that enhances training efficiency by analyzing trends in the sequences. It consists of a temporal–spatial module within a convolutional neural network and an LSTM network with a transformer, which together can efficiently capture spatiotemporal features and predict the SOC precisely at different temperatures. To train the STN in diverse temperature environments, denoising augmentation is proposed to expand the dataset and improve the model’s generalization ability; perturbations are applied to the training data, and pseudolabels for the augmented data are generated through consistent prediction to improve the robustness of the model. Finally, a series of experiments was carried out to verify the effectiveness of the proposed approach under varying ambient temperatures.

2. Related Works

2.1. Traditional Physics-Based SOC Estimation

Traditional SOC estimation methods for lithium batteries have shown limitations, prompting the development of novel approaches. Huria et al. [19] introduced a simplified extended Kalman filter (EKF) observer for SOC estimation, utilizing a two-RC-block equivalent circuit and the traditional Coulomb counting method. This implementation offers a computationally efficient option for real-time SOC evaluation in vehicles. Similarly, Zhou et al. [20] proposed an adaptive wavelet neural network (AWNN) model for accurate online SOC estimation, addressing the drawbacks of traditional algorithms. Wang et al. [21] focused on lithium–bismuth liquid metal batteries, evaluating three SOC estimation algorithms (EKF, unscented Kalman filter, and particle filter) using various discharge profiles. Luo et al. [22] highlighted the advantages of the EKF algorithm over traditional methods, emphasizing the importance of accurate battery models for precise SOC estimation. Wu et al. [23] introduced an online SOC estimation method based on a simplified electrochemical model for lithium-ion batteries, considering sensor bias for improved accuracy. Chen et al. [24] proposed a modified electrochemical-distributed thermal coupled model (MEDTM) to enhance SOC estimation accuracy for cylindrical lithium-ion batteries. Wang et al. [25] developed a sliding mode observer for dynamic equivalent state-of-charge (ESOC) estimation in hybrid energy storage systems. However, these methods struggle to handle time-varying estimation for data sequences and to produce adaptive predictions.

2.2. Deep Learning-Based SOC Estimation

Recently, deep learning has emerged as a powerful tool for SOC estimation in lithium-ion batteries. Herle et al. [26] proposed a temporal convolutional network capable of capturing SOC features with an estimation error of less than 1%. Bian et al. [27] introduced a bidirectional LSTM encoder–decoder architecture for SOC sequence estimation, allowing the model to capture long-term dependencies from both past and future directions. This approach was evaluated on public battery datasets under dynamic loading profiles. Similarly, Hannan et al. [28] proposed a transformer model trained with self-supervised learning for accurate SOC estimation without the need for feature engineering or adaptive filtering. Their model achieved an RMSE of less than 1.9% on test data with minimal training epochs and data usage. Furthermore, Bian et al. [29] developed a deep transfer neural network with multiscale distribution adaptation for cross-domain SOC estimation of lithium-ion batteries, emphasizing the importance of the data distribution alignment for successful deep learning applications. Ma et al. [30] also explored enhancing SOC estimation performance using unlabeled training samples, showcasing the potential for improving accuracy with additional data. In the context of EVs, Venkitaraman et al. [31] proposed a hybrid deep learning mechanism for charging control and management to ensure safe and efficient operations while preventing battery overcharging or discharging. Additionally, Hai et al. [12] focused on deep learning-based prediction of lithium-ion battery SOC for EVs in standard driving cycles, highlighting the relevance of accurate SOC estimation for optimizing vehicle performance. Overall, deep learning approaches have demonstrated significant advancements in SOC estimation for lithium-ion batteries, showcasing the potential for improved accuracy, efficiency, and performance in various applications.

3. Proposed Method

The proposed method aims to estimate the SOC using historical data. In addition, we introduce an objective function that combines a temporal–spatial module and pseudolabels. As specified by the U.S. Advanced Battery Consortium [32], SOC is typically quantified as the proportion of remaining capacity with respect to the nominal capacity. The mathematical form can be expressed as a ratio, as shown in Equation (1):
$$\mathrm{SOC} = \frac{Q_r}{Q_R} \times 100\%$$
where $Q_r$ is the remaining charge and $Q_R$ is the rated capacity of the battery.

3.1. Model Architecture

Ensuring continuous battery operation and estimating the SOC accurately and promptly are crucial tasks. In this paper, a novel deep learning model called STN is proposed to address limitations in existing RNN-based methods. As illustrated in Figure 1, the STN comprises four components: normalization, feature extraction, denoising augmentation, and attention prediction.

3.1.1. Normalization

To mitigate the impact of alterations in the input data distribution, it is essential to normalize the data. The input sequence $x = \{x_1, \dots, x_n\}$, representing capacity with a length of $n$, is transformed to the range $(0, 1]$; its true label is $y$. The $i$-th sample $x_i = (v_i, c_i, p_i)$ contains the voltage, current, and battery temperature related to the SOC, respectively. The normalization of the input $x$ is defined as follows:
$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$
where $x'$ denotes the normalized data, $x_{\max}$ is the maximum of the raw data, and $x_{\min}$ is the minimum.
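As an illustration, the min-max scaling in Equation (2) can be applied per feature column (voltage, current, temperature). The following is a minimal sketch in Python/NumPy; the function name and the per-column choice are illustrative assumptions rather than details taken from the paper.
```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Apply Equation (2) column-wise to a sequence of [voltage, current, temperature] samples."""
    x_min = x.min(axis=0)          # per-feature minimum of the raw data
    x_max = x.max(axis=0)          # per-feature maximum of the raw data
    return (x - x_min) / (x_max - x_min)

# Toy usage with illustrative values (not data from the Panasonic 18650PF dataset).
x = np.array([[3.2, -1.0, 25.0],
              [3.6,  0.5, 25.4],
              [4.1,  1.2, 26.1]])
print(min_max_normalize(x))        # each column is scaled to [0, 1]
```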

3.1.2. Denoising Augmentation

Data augmentation is one way to enhance the robustness of a model. Training data often contain noise, particularly during charge/discharge regeneration. In many approaches, noisy data are fed directly into neural networks without any denoising step, leading to reduced prediction accuracy. In this part, we propose a denoising augmentation (DA) step in which small perturbations ε are added to the data while maintaining consistency in the labels, as shown in Equation (3):
$$\lVert (x + \varepsilon) - x \rVert = \lVert \varepsilon \rVert \le \delta$$
where $\delta$ is a small value that constrains the magnitude of the perturbation.
In order to ensure stability and robustness, it is crucial to denoise the input data before feeding them into the deep neural network. An unsupervised learning method that focuses on extracting useful features is utilized here to reconstruct the input data from a lower-dimensional representation while preserving as much information as possible.
Inspired by the idea of denoising autoencoders [33], we construct a denoising model with a denoising encoder and denoising decoder. We sample either an original data sequence or its perturbed version. Small Gaussian noise is incorporated into the normalized input to create the corrupted vector $\hat{x}$. The DA step has the dual role of removing noise from the original input and capturing nonlinear features. By denoising the raw input, DA aims to improve the quality of the data and enhance the learning process. Additionally, it seeks to extract intricate patterns and representations from the input data through the nonlinear transformation of the denoising encoder, as follows:
$$d = f_d\!\left(W_1 \hat{x} + b_1\right)$$
where $W_1$, $b_1$, and $f_d$ are the weight matrix, bias, and activation function of the encoder, respectively.
The input vector is reconstructed by the denoising decoder, which maps the latent representation back to the input space. This mapping of the denoising decoder is defined as follows:
$$\hat{x}_r = f_r\!\left(W_2 d + b_2\right)$$
where $f_r$ is the reconstruction function and $\hat{x}_r$ is the reconstructed denoised data.
In our network architecture, the activation functions are rectified linear units (ReLUs). The DA loss is subsequently defined as follows:
$$L_{DA}(W_1, W_2) = L_M\!\left(x, \hat{x}_r\right) + \lVert W_1 \rVert_F^2 + \lVert W_2 \rVert_F^2$$
where $L_M$ denotes the mean square error function and $\lVert \cdot \rVert_F$ is the Frobenius norm.
With the pretrained model $M$, the pseudolabels are determined as follows:
$$\hat{y} = M\!\left(\hat{x}_r\right)$$
where the pseudolabel $\hat{y}$ is constrained as follows:
$$\lVert \hat{y} - y \rVert \le c\,\delta$$
where $c$ is a constant that is set to 2.
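A minimal PyTorch sketch of the denoising encoder/decoder and the loss in Equations (4)–(6) is given below; the hidden size, noise level, and class name are illustrative assumptions rather than the exact architecture used in the paper.
```python
import torch
import torch.nn as nn

class DenoisingAugmenter(nn.Module):
    """Single-layer denoising encoder/decoder following Equations (4) and (5)."""
    def __init__(self, in_dim: int = 3, hidden_dim: int = 16):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)   # W1, b1
        self.decoder = nn.Linear(hidden_dim, in_dim)   # W2, b2
        self.act = nn.ReLU()                           # ReLU activations, as stated in the paper

    def forward(self, x: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
        x_hat = x + sigma * torch.randn_like(x)        # corrupted vector with small Gaussian noise
        d = self.act(self.encoder(x_hat))              # Equation (4)
        return self.act(self.decoder(d))               # Equation (5): reconstructed input

def da_loss(model: DenoisingAugmenter, x: torch.Tensor) -> torch.Tensor:
    """Equation (6): reconstruction MSE plus Frobenius-norm regularization of W1 and W2."""
    x_rec = model(x)
    mse = nn.functional.mse_loss(x_rec, x)
    reg = model.encoder.weight.norm(p="fro") ** 2 + model.decoder.weight.norm(p="fro") ** 2
    return mse + reg
```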

3.1.3. Attention Prediction

The attention mechanism focuses on a part of the information while neglecting other perceptible information [34]. A conventional transformer model comprises an encoder and decoder for sequence-to-sequence tasks. The encoder processes the input sequence to produce a high-dimensional vector, which is then utilized by the decoder to generate the output sequence. This paper employs a transformer encoder to capture the long-term dependencies associated with the decline in capacity observed in battery operation data. To maximize the utilization of sequence position information, we incorporate relative position tokens into the sequence. These relative position tokens are used to enhance the understanding of the relationship between different elements in the sequence.
The attention part of the prediction focuses on modeling feature dependencies without considering their spatial arrangement in the sequence. It involves parallel attention functions for capturing relationships within the representation of a specific layer. Using the input feature map $F$, the attention prediction is defined as
$$\mathrm{SOC} = f_A\!\left(F Q_\theta, F K_\theta, F V_\theta\right),$$
where $f_A$ is defined as follows [35]:
$$f_A(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_h}}\right) V$$
where $Q$, $K$, and $V$ respectively denote the query, key, and value, while the ratio $d_h = d/h$ is used to prevent small gradients and create a smoother attention distribution [35].
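For concreteness, Equation (10) corresponds to the standard scaled dot-product attention of [35]; a minimal single-head sketch is shown below, with tensor shapes chosen only for illustration.
```python
import math
import torch

def scaled_dot_product_attention(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Equation (10): softmax(Q K^T / sqrt(d_h)) V for a single attention head."""
    d_h = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_h)    # scaled similarity between queries and keys
    weights = torch.softmax(scores, dim=-1)              # attention distribution over the sequence
    return weights @ V

# Toy usage: batch of 2 windows of length 90 with head dimension 8 (sizes are illustrative).
Q = torch.randn(2, 90, 8)
K = torch.randn(2, 90, 8)
V = torch.randn(2, 90, 8)
out = scaled_dot_product_attention(Q, K, V)              # shape (2, 90, 8)
```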

3.2. Objective Function Based on Spatial Transformer Network

The proposed method involves two primary tasks: denoising and prediction. Rather than treating these tasks independently, we introduce an objective function based on STN that connects them. This function enables the simultaneous optimization of both tasks within a unified framework. The loss evaluation is based on the mean square error (MSE), and the objective function is formally defined as follows:
$$L(D, \theta) = L_M(D) + \lambda_1 L_{DA}(W_1, W_2) + \lambda_2 \lVert \theta \rVert^2$$
where $\lambda_1$ and $\lambda_2$ are the coefficients of the corresponding terms, and the parameters $W_1$ and $W_2$ are part of $\theta$.
During training, the objective function is optimized through the stochastic gradient descent (SGD) method with momentum [36], formulated as follows:
$$\theta^{*} = \arg\min_{\theta} L(D, \theta)$$
where $\theta^{*}$ stands for the optimized parameters of the model.
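A minimal sketch of one training epoch that optimizes Equation (11) with SGD and momentum is shown below. The coefficients follow the settings reported later in Section 4.2 (λ1 = 1, λ2 = 0.1), while the `stn` model, data loader, learning rate, and momentum value are illustrative assumptions; `da_loss` refers to the denoising sketch in Section 3.1.2.
```python
import torch

def train_one_epoch(stn, augmenter, loader, lam1=1.0, lam2=0.1, lr=1e-3):
    """One pass over the data minimizing Equation (11): L_M(D) + lam1*L_DA + lam2*||theta||^2."""
    params = list(stn.parameters()) + list(augmenter.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)    # SGD with momentum [36]; values assumed
    for x, y in loader:
        soc_pred = stn(x).squeeze(-1)                           # attention-based SOC prediction
        pred_loss = torch.nn.functional.mse_loss(soc_pred, y)   # L_M(D)
        denoise_loss = da_loss(augmenter, x)                    # L_DA(W1, W2), Equation (6)
        l2 = sum(p.pow(2).sum() for p in stn.parameters())      # ||theta||^2 regularization
        loss = pred_loss + lam1 * denoise_loss + lam2 * l2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```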

4. Experiments

4.1. Dataset

This paper assesses the cross-domain SOC estimation performance of the proposed method under different temperature conditions. The evaluation is conducted using the Panasonic 18650PF dataset [37] and involves multiple transfer experiments. This dataset consists of data collected from brand-new Panasonic NCR18650PF batteries produced by Panasonic Energy Co., Ltd. in Moriguchi City, Japan. It is widely recognized and utilized in the research community as a validation benchmark.
Our assessment based on the Panasonic dataset involved conducting various drive cycles such as Cycle1, Cycle2, Cycle3, Cycle4, US06, HWFET, UDDS, LA92, and NN at different temperatures of 25 °C, 10 °C, 0 °C, −10 °C, and −20 °C. Cycles 1–4 were a mix of different drive cycles, including US06, HWFET, UDDS, LA92, and NN. The NN drive cycle was a combination of the US06 and LA92 cycles with additional dynamics to aid in neural network training. We split the dataset into a training set and test set at a 4:1 ratio. A visualization of the dataset is shown in Figure 2.

4.2. Implementation Details

This paper proposes the STN for multi-temperature SOC estimation of LIBs. The STN comprises four components: normalization, feature extraction, denoising augmentation, and attention prediction. Due to limited GPU memory, the batch size was set to 128 and the length of the time window $W$ was set to 90. The coefficients $\lambda_1$ and $\lambda_2$ were set to 1 and 0.1, respectively. The denoising augmentation model was pretrained first in an unsupervised setting; $\delta$ in Equation (3) was set to 0.1 and $\varepsilon$ consisted of small Gaussian noise. All experiments were conducted in the PyTorch framework. The models were trained using an NVIDIA GeForce GTX 1060 with 6 GB of memory.
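To make the data pipeline concrete, the following sketch shows one way to window a continuous drive-cycle log under these settings (window length 90, batch size 128); the feature layout and the alignment of the SOC target with the final step of each window are assumptions, not details specified in the paper.
```python
import torch
from torch.utils.data import Dataset, DataLoader

class SlidingWindowDataset(Dataset):
    """Cut a continuous [voltage, current, temperature] log into fixed-length windows."""
    def __init__(self, features, soc, window: int = 90):
        self.features = torch.as_tensor(features, dtype=torch.float32)  # shape (T, 3)
        self.soc = torch.as_tensor(soc, dtype=torch.float32)            # shape (T,)
        self.window = window

    def __len__(self):
        return len(self.features) - self.window + 1

    def __getitem__(self, i):
        x = self.features[i:i + self.window]     # one window of normalized samples
        y = self.soc[i + self.window - 1]        # SOC label at the last step of the window
        return x, y

# Illustrative usage: loader = DataLoader(SlidingWindowDataset(features, soc), batch_size=128, shuffle=True)
```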
The performance of the proposed method is compared with other methods using root mean square error (RMSE) and mean absolute error (MAE) as performance evaluation metrics. The formulas for the evaluation metrics are shown in Equations (13) and (14), with smaller values of both metrics indicating better estimation accuracy:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i^{\,p}\right)^2}$$
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i^{\,p}\right|$$
where $N$ is the number of samples, and $\hat{y}_i^{\,p}$ and $y_i$ are the $i$-th predicted label and the true SOC label, respectively.
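Both metrics can be computed directly from the predicted and true SOC sequences; the short NumPy sketch below uses illustrative values rather than results from the paper.
```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (13): root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (14): mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy check: every prediction is off by 0.01, so both metrics equal 0.01.
y_true = np.array([0.80, 0.75, 0.70])
y_pred = np.array([0.79, 0.76, 0.71])
print(rmse(y_true, y_pred), mae(y_true, y_pred))
```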

4.3. Compared Methods

4.3.1. Overall Performance

To validate the efficacy of the proposed method across various temperatures, experiments were conducted for SOC estimation at five different ambient temperatures. The performance metrics included the MAE and RMSE scores. Table 1 presents the performance results of the models trained at different temperatures. The average MAE and RMSE values are 0.55% and 0.71%, respectively. Among the five tested temperatures, the best MAE and RMSE values are 0.31% and 0.40% at 10 °C, which suggests that the tested batteries may operate more stably at 10 °C. The second-best MAE and RMSE values are 0.48% and 0.57% at 25 °C. As shown in Figure 3, the model was trained at different ambient temperatures and tested on different drive cycles. The curves with different colors indicate the true label (Y_test) and the predicted values. It can be noted that Figure 3a,d display jagged SOC trends, yet the proposed method still achieves superior estimation results. The training loss on the training set is shown in Figure 4.

4.3.2. Network Architecture Comparison

In this experiment, we compared our model to other network architectures, including LSTM, GRU, and the methods of [38,39]. The experimental results are shown in Table 2, which presents the performance of the models trained on all ambient temperatures. Among all methods, the proposed method obtains the best average MAE and RMSE values of 0.64% and 0.82%, respectively. These results demonstrate that the proposed model extracts useful spatiotemporal information when modeling the capacity sequences. GRU achieves better estimation results than LSTM; however, it is less flexible in real-world applications. A possible explanation for the results of the LSTM and GRU models is that they struggled to extract rich and effective features. The compared method from [38] yields a smaller RMSE at 10 °C, while the method from [39] achieves the best MAE and RMSE at 0 °C. The proposed method performs best at the remaining temperatures and obtains the lowest average errors overall. Compared to the other network structures, it exhibits better nonlinear feature mapping, resulting in smoother estimations with reduced fluctuations. Compared with the first experiment, the model in this experiment was trained on the whole dataset, resulting in higher average MAE and RMSE values. It is hypothesized that varying temperatures lead to data distribution biases, creating obstacles during learning; these challenges may impact the accuracy of SOC estimation.

4.3.3. Semi-Supervised Estimation

The proposed method introduces denoising augmentation and pseudolabels for the augmented data, allowing the SOC to be estimated in a semi-supervised setting. This process includes pretraining the model on labeled data and then applying pseudolabels to the unlabeled data. In the semi-supervised experiments, only about 5% of the training data were labeled, while the rest were unlabeled.
The experimental results are shown in Table 3, which presents the semi-supervised performance of the model trained with limited labeled data. The proposed method achieves the best average MAE and RMSE of 0.93% and 1.02%, respectively. This comprehensive estimation demonstrates that the proposed method obtains smoother and more accurate SOC estimation thanks to the consistent prediction training. Notably, at 0 °C the method from [39] obtains the best MAE and RMSE of 0.73% and 0.98%, respectively. This is attributed to their model design, which consists of a cross-domain network framework adapted for semi-supervised settings.
Overall, the proposed method with a spatial transformer network results in improved SOC estimation performance. The transformer model is able to simultaneously learn spatiotemporal features that are effective for estimating SOC. This parallel feature learning capability makes it suitable for SOC estimation.

5. Conclusions

In this paper, we have reviewed traditional and deep learning-based methods for SOC estimation and discussed the challenges that they encounter under different temperature conditions. To mitigate the impact of temperature variations on noise levels, an STN model with a temporal–spatial module is proposed for accurate SOC estimation, which can enhance battery safety and prolong battery lifespans. The proposed model utilizes attention-based prediction to improve training efficiency by extracting sequence patterns. The STN efficiently captures spatiotemporal features to accurately predict battery SOC at multiple temperature settings. The proposed method includes denoising augmentation through the construction of a simple denoising encoder and denoising decoder. Perturbations were introduced to the training data and pseudolabels were generated through consistent predictions, further enhancing the robustness of the proposed model. Experimental results showed that the proposed method reduces MAE by 41% and RMSE by 43% compared to existing methods. In a semi-supervised setting, the method achieved respective reductions of 23% and 38% in the same metrics. The proposed method effectively improves prediction accuracy and enables the model to be trained in a semi-supervised manner by performing the crucial step of denoising the data at different temperatures. The experimental results indicate the potential for practical applications in battery management systems.
In the future, we aim to broaden the application of the proposed method. Training a model on only a portion of the data with noisy labels may lead to insufficient robustness and bias. To improve model training, additional charge–discharge data need to be incorporated, and criteria for assessing the charge level of a battery need to be explored to correct the predictions. Operational conditions, including varying temperatures and currents, significantly influence battery degradation trends; therefore, our future research will focus on estimating the SOC and remaining useful life (RUL) of batteries under different operating conditions in semi-supervised and transfer learning settings.

Author Contributions

Conceptualization, Y.C. and X.W.; Methodology, Y.C.; Formal analysis, Y.C.; Investigation, Y.C.; Resources, H.L.; Writing—original draft, Y.C.; Writing—review and editing, Y.C.; Supervision, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are contained within the paper.

Conflicts of Interest

Author Hongyu Liang was employed by the Zhuhai Southern Intelligent Transportation Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Saw, L.H.; Ye, Y.; Tay, A.A. Integration issues of lithium-ion battery into electric vehicles battery pack. J. Clean. Prod. 2016, 113, 1032–1045. [Google Scholar] [CrossRef]
  2. Zheng, Y.; Zhou, S.; Zhang, Z.; Zhou, X.; Cao, R.; Li, Q.; Gao, Z.; Fan, C.; Yang, S. A capacity fade reliability model for lithium-ion battery packs based on real-vehicle data. Energy 2024, 307, 132782. [Google Scholar] [CrossRef]
  3. Liu, J.; Zhou, L.; Zhang, Y.; Wang, J.; Wang, Z. Aging behavior and mechanisms of lithium-ion battery under multi-aging path. J. Clean. Prod. 2023, 423, 138678. [Google Scholar] [CrossRef]
  4. Chavan, S.; Venkateswarlu, B.; Prabakaran, R.; Salman, M.; Joo, S.W.; Choi, G.S.; Kim, S.C. Thermal runaway and mitigation strategies for electric vehicle lithium-ion batteries using battery cooling approach: A review of the current status and challenges. J. Energy Storage 2023, 72, 108569. [Google Scholar] [CrossRef]
  5. Korkmaz, M. SoC estimation of lithium-ion batteries based on machine learning techniques: A filtered approach. J. Energy Storage 2023, 72, 108268. [Google Scholar] [CrossRef]
  6. Tao, Z.; Zhao, Z.; Wang, C.; Huang, L.; Jie, H.; Li, H.; Hao, Q.; Zhou, Y.; See, K.Y. State of charge estimation of lithium batteries: Review for equivalent circuit model methods. Measurement 2024, 236, 115148. [Google Scholar] [CrossRef]
  7. Zhang, S.; Guo, X.; Dou, X.; Zhang, X. A data-driven coulomb counting method for state of charge calibration and estimation of lithium-ion battery. Sustain. Energy Technol. Assess. 2020, 40, 100752. [Google Scholar] [CrossRef]
  8. Zhai, Q.; Jiang, H.; Long, N.; Kang, Q.; Meng, X.; Zhou, M.; Yan, L.; Ma, T. Machine learning for full lifecycle management of lithium-ion batteries. Renew. Sustain. Energy Rev. 2024, 202, 114647. [Google Scholar] [CrossRef]
  9. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Ker, P.J.; Mahlia, T.M.I.; Mansor, M.; Ayob, A.; Saad, M.H.; Dong, Z.Y. Toward Enhanced State of Charge Estimation of Lithium-ion Batteries Using Optimized Machine Learning Techniques. Sci. Rep. 2020, 10, 4687. [Google Scholar] [CrossRef]
  10. Gu, T.; Wang, D. A beetle antennae search optimized recurrent extreme learning machine for battery state of charge estimation. Int. J. Energy Res. 2022, 46, 19190–19205. [Google Scholar] [CrossRef]
  11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  12. Hai, T.; Dhahad, H.A.; Fadhil Jasim, K.; Sharma, K.; Zhou, J.; Fouad, H.; El-Shafai, W. Deep learning-based prediction of lithium-ion batteries state of charge for electric vehicles in standard driving cycle. Sustain. Energy Technol. Assess. 2023, 60, 103461. [Google Scholar] [CrossRef]
  13. Cao, K.; Zhang, T.; Huang, J. Advanced hybrid LSTM-transformer architecture for real-time multi-task prediction in engineering systems. Sci. Rep. 2024, 14, 4890. [Google Scholar] [CrossRef] [PubMed]
  14. Ul Hassan, M.; Saha, S.; Haque, M.E.; Islam, S.; Mahmud, A.; Mendis, N. A comprehensive review of battery state of charge estimation techniques. Sustain. Energy Technol. Assess. 2022, 54, 102801. [Google Scholar] [CrossRef]
  15. D.V.S.R., S.; Badachi, C.; Green, R.C., II. A review on data-driven SOC estimation with Li-Ion batteries: Implementation methods and future aspirations. J. Energy Storage 2023, 72, 108420. [Google Scholar] [CrossRef]
  16. Chen, J.; Zhang, Y.; Wu, J.; Cheng, W.; Zhu, Q. SOC estimation for lithium-ion battery using the LSTM-RNN with extended input and constrained output. Energy 2023, 262, 125375. [Google Scholar] [CrossRef]
  17. El Fallah, S.; Kharbach, J.; Hammouch, Z.; Rezzouk, A.; Ouazzani Jamil, M. State of charge estimation of an electric vehicle’s battery using Deep Neural Networks: Simulation and experimental results. J. Energy Storage 2023, 62, 106904. [Google Scholar] [CrossRef]
  18. Hannan, M.; Lipu, M.; Hussain, A.; Mohamed, A. A review of lithium-ion battery state of charge estimation and management system in electric vehicle applications: Challenges and recommendations. Renew. Sustain. Energy Rev. 2017, 78, 834–854. [Google Scholar] [CrossRef]
  19. Turner, J. Simplified Extended Kalman Filter Observer for SOC Estimation of Commercial Power-Oriented LFP Lithium Battery Cells (2013-01-1544). In Progress in Modeling and Simulation of Batteries; SAE International: Warrendale, PA, USA, 2016; Chapter 3; pp. 19–28. [Google Scholar]
  20. Zhou, F.; Wang, L.; Lin, H.; Lv, Z. High accuracy state-of-charge online estimation of EV/HEV lithium batteries based on Adaptive Wavelet Neural Network. In Proceedings of the 2013 IEEE ECCE Asia Downunder, Melbourne, Australia, 3–6 June 2013; pp. 513–517. [Google Scholar] [CrossRef]
  21. Wang, X.; Song, Z.; Yang, K.; Yin, X.; Geng, Y.; Wang, J. State of Charge Estimation for Lithium-Bismuth Liquid Metal Batteries. Energies 2019, 12, 183. [Google Scholar] [CrossRef]
  22. Luo, Y.; Qi, P.; Kan, Y.; Huang, J.; Huang, H.; Luo, J.; Wang, J.; Wei, Y.; Xiao, R.; Zhao, S. State of charge estimation method based on the extended Kalman filter algorithm with consideration of time-varying battery parameters. Int. J. Energy Res. 2020, 44, 10538–10550. [Google Scholar] [CrossRef]
  23. Wu, L.; Liu, K.; Pang, H.; Jin, J. Online SOC Estimation Based on Simplified Electrochemical Model for Lithium-Ion Batteries Considering Current Bias. Energies 2021, 14, 5265. [Google Scholar] [CrossRef]
  24. Chen, G.; Liu, Z.; Su, H.; Zhuang, W. Electrochemical-distributed thermal coupled model-based state of charge estimation for cylindrical lithium-ion batteries. Control. Eng. Pract. 2021, 109, 104734. [Google Scholar] [CrossRef]
  25. Wang, Y.; Jiang, W.; Zhu, C.; Xu, Z.; Deng, Y. Research on Dynamic Equivalent SOC Estimation of Hybrid Energy Storage System Based on Sliding Mode Observer. Front. Energy Res. 2021, 9, 711716. [Google Scholar] [CrossRef]
  26. Hannan, M.A.; How, D.N.T.; Lipu, M.S.H.; Ker, P.J.; Dong, Z.Y.; Mansur, M.; Blaabjerg, F. SOC Estimation of Li-ion Batteries with Learning Rate-Optimized Deep Fully Convolutional Network. IEEE Trans. Power Electron. 2021, 36, 7349–7353. [Google Scholar] [CrossRef]
  27. Bian, C.; He, H.; Yang, S.; Huang, T. State-of-charge sequence estimation of lithium-ion battery based on bidirectional long short-term memory encoder-decoder architecture. J. Power Sources 2020, 449, 227558. [Google Scholar] [CrossRef]
  28. Hannan, M.; How, D.; Lipu, M.S.H.; Mansor, M.; Ker, P.; Dong, Z.; Sahari, K.; Tiong, S.; Muttaqi, K.; Mahlia, T.M.I.; et al. Towards Accurate State of Charge Estimation for Lithium-Ion Batteries using Self-supervised Transformer Model: A Deep Learning Approach. Sci. Rep. 2021. under review. [Google Scholar] [CrossRef]
  29. Bian, C.; Yang, S.; Miao, Q. Cross-Domain State-of-Charge Estimation of Li-Ion Batteries Based on Deep Transfer Neural Network with Multiscale Distribution Adaptation. IEEE Trans. Transp. Electrif. 2021, 7, 1260–1270. [Google Scholar] [CrossRef]
  30. Ma, L.; Zhang, T. Deep learning-based battery state of charge estimation: Enhancing estimation performance with unlabelled training samples. J. Energy Chem. 2023, 80, 48–57. [Google Scholar] [CrossRef]
  31. Ka, A.; Kosuru, V.S.R. Hybrid Deep Learning Mechanism for Charging Control and Management of Electric Vehicles. Eur. J. Electr. Eng. Comput. Sci. 2023, 7, 38–46. [Google Scholar] [CrossRef]
  32. Pang, S.; Farrell, J.; Du, J.; Barth, M. Battery State-of-Charge Estimation. In Proceedings of the 2001 American Control Conference (Cat. No.01CH37148), Arlington, VA, USA, 25–27 June 2001; Volume 2, pp. 1644–1649. [Google Scholar] [CrossRef]
  33. Bengio, Y.; Yao, L.; Alain, G.; Vincent, P. Generalized Denoising Auto-Encoders as Generative Models. In Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; Burges, C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2013; Volume 26. [Google Scholar]
  34. Huang, Z.; Liang, S.; Liang, M. A generic shared attention mechanism for various backbone neural networks. Neurocomputing 2024, 128697, in press. [Google Scholar] [CrossRef]
  35. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.u.; Polosukhin, I. Attention is All you Need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  36. Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017; pp. 1–16. [Google Scholar]
  37. Kollmeyer, P. Panasonic 18650PF Li-ion Battery Data; University of Wisconsin Madison: Madison, WI, USA; McMaster University: Hamilton, ON, USA, 2018. [Google Scholar] [CrossRef]
  38. Qin, Y.; Adams, S.; Yuen, C. Transfer Learning-Based State of Charge Estimation for Lithium-Ion Battery at Varying Ambient Temperatures. IEEE Trans. Ind. Inform. 2021, 17, 7304–7315. [Google Scholar] [CrossRef]
  39. Shen, L.; Li, J.; Liu, J.; Zhu, L.; Shen, H.T. Temperature Adaptive Transfer Network for Cross-Domain State-of-Charge Estimation of Li-Ion Batteries. IEEE Trans. Power Electron. 2023, 38, 3857–3869. [Google Scholar] [CrossRef]
Figure 1. Illustration of the proposed model structure. With the proposed denoising augmentation, small perturbations are randomly added to the input data, allowing the STN to be trained with improved robustness while effectively capturing the spatiotemporal features.
Figure 2. The drive cycles of US06, HWFET, UDDS, LA92, and Neural Network (NN) at different ambient temperatures.
Figure 3. Performance of the model on different drive cycles at different ambient temperatures.
Figure 4. Training loss on the training set under different temperatures.
Table 1. Test performance of the model trained at different temperatures.

Metrics     25 °C    10 °C    0 °C     −10 °C   −20 °C   Average
MAE (%)     0.48     0.31     0.61     0.64     0.71     0.55
RMSE (%)    0.57     0.40     0.84     0.85     0.92     0.71

Table 2. Performance of the model trained on all ambient temperatures, with the best results displayed in bold.

Architecture        Metrics     25 °C    10 °C    0 °C     −10 °C   −20 °C   Average
LSTM                MAE (%)     1.09     1.74     0.63     1.11     2.39     1.62
                    RMSE (%)    3.08     3.93     1.61     1.96     3.49     3.04
GRU                 MAE (%)     1.09     1.38     0.64     1.12     1.70     1.19
                    RMSE (%)    3.07     3.30     1.60     1.94     2.55     2.49
Qin et al. [38]     MAE (%)     2.74     0.58     1.91     2.89     4.94     2.61
                    RMSE (%)    2.27     0.45     1.70     2.17     3.95     2.11
Shen et al. [39]    MAE (%)     0.76     1.37     0.53     1.11     1.69     1.09
                    RMSE (%)    1.07     1.81     0.68     1.46     2.18     1.44
Proposed method     MAE (%)     0.56     0.56     0.59     0.64     0.86     0.64
                    RMSE (%)    0.71     0.79     0.74     0.78     1.10     0.82

Table 3. Semi-supervised performance of the model, with the best results displayed in bold.

Architecture        Metrics     25 °C    10 °C    0 °C     −10 °C   −20 °C   Average
Qin et al. [38]     MAE (%)     2.88     1.60     2.10     2.96     4.99     2.90
                    RMSE (%)    2.95     1.72     2.32     3.11     5.04     3.02
Shen et al. [39]    MAE (%)     0.88     1.40     0.73     1.23     1.88     1.22
                    RMSE (%)    1.27     1.91     0.98     1.63     2.48     1.65
Proposed method     MAE (%)     0.86     0.82     0.79     0.94     1.26     0.93
                    RMSE (%)    0.91     1.06     1.04     0.78     1.33     1.02
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
