Review

Exploiting Artificial Neural Networks for the State of Charge Estimation in EV/HV Battery Systems: A Review

by Pierpaolo Dini * and Davide Paolini
Department of Information Engineering, University of Pisa, Via G. Caruso n.16, 56122 Pisa, Italy
* Author to whom correspondence should be addressed.
Batteries 2025, 11(3), 107; https://doi.org/10.3390/batteries11030107
Submission received: 8 January 2025 / Revised: 1 March 2025 / Accepted: 7 March 2025 / Published: 13 March 2025
(This article belongs to the Special Issue Machine Learning for Advanced Battery Systems)

Abstract:
Artificial Neural Networks (ANNs) improve battery management in electric vehicles (EVs) by enhancing the safety, durability, and reliability of electrochemical batteries, particularly through improvements in the State of Charge (SOC) estimation. EV batteries operate under demanding conditions, which can affect performance and, in extreme cases, lead to critical failures such as thermal runaway—an exothermic chain reaction that may result in overheating, fires, and even explosions. Addressing these risks requires advanced diagnostic and management strategies, and machine learning presents a powerful solution due to its ability to adapt across multiple facets of battery management. The versatility of ML enables its application to material discovery, model development, quality control, real-time monitoring, charge optimization, and fault detection, positioning it as an essential technology for modern battery management systems. Specifically, ANN models excel at detecting subtle, complex patterns that reflect battery health and performance, crucial for accurate SOC estimation. The effectiveness of ML applications in this domain, however, is highly dependent on the selection of quality datasets, relevant features, and suitable algorithms. Advanced techniques such as active learning are being explored to enhance ANN model performance by improving the models’ responsiveness to diverse and nuanced battery behavior. This compact survey consolidates recent advances in machine learning for SOC estimation, analyzing the current state of the field and highlighting the challenges and opportunities that remain. By structuring insights from the extensive literature, this paper aims to establish ANNs as a foundational tool in next-generation battery management systems, ultimately supporting safer and more efficient EVs through real-time fault detection, accurate SOC estimation, and robust safety protocols. 
Future research directions include refining dataset quality, optimizing algorithm selection, and enhancing diagnostic precision, thereby broadening ANNs’ role in ensuring reliable battery management in electric vehicles.

1. Introduction

With the rise of neural networks and machine learning techniques, there is an increasing demand to apply these technologies to various engineering challenges. These applications include luxury yacht energy management [1,2], energy efficiency in buildings [3,4], industrial robotics [5,6], and electric vehicles [7,8]. AI techniques achieve outstanding results while circumventing the difficulties associated with developing accurate physical models of these problems [9]. The transportation industry is investing in and confronting a range of challenges to improve efficiency, boost performance, enhance connectivity, expand autonomy, and reduce emissions [10,11]. Major automotive markets, especially in Europe, have set ambitious CO2 reduction targets, positioning electric powertrains as one of the most effective solutions. Increasing economic incentives will be allocated to facilitate the transition toward the full adoption of electric vehicles across the region [12], with the goal of achieving this milestone by 2034 [13], as shown in Figure 1.
However, one of the critical challenges in this domain is achieving an ideal balance between cost-effectiveness and performance [14]. Batteries, as one of the highest-cost components in EVs [15], play a crucial role in determining both the efficiency and overall economics of these vehicles. An accurate estimation of the battery’s State of Charge (SOC) is essential for reducing design costs [16], optimizing vehicle efficiency, and improving overall system reliability [17]. This makes advanced battery management system (BMS) software (e.g., the MATLAB/Simulink or Modelica frameworks) indispensable for the precise estimation of SOC [18]. Currently, SOC estimation methods for EV/HV battery systems are broadly classified into direct and indirect approaches [19,20], as depicted in Figure 2.
Traditional methods for SOC estimation, such as open-circuit voltage (OCV) measurement [21] and coulomb counting [22], are commonly employed due to their simplicity. However, these methods are often limited by sensor inaccuracies and the inherent uncertainty associated with battery behavior under various operational conditions [23]. To address these limitations, more advanced techniques have been developed, combining equivalent circuit models (ECMs) with Kalman Filters to enhance SOC estimation accuracy [24]. These hybrid methods typically require extensive battery testing to model and calibrate parameters, which can be resource-intensive [25]. In recent years, machine learning has introduced transformative potential to SOC estimation by enabling data-driven approaches that leverage the vast amounts of data generated by modern battery systems. Through representation learning and deep learning, ANN techniques can capture complex patterns in battery data that are challenging to model with conventional physics-based methods [26]. Voltage, current, and temperature signals, among other factors, can be analyzed using ANNs to provide indirect but highly accurate SOC estimates, an approach that aligns well with the requirements of real-time EV applications [27]. Despite the advantages of ANN-based SOC estimation, challenges remain due to the nonlinear nature of batteries, where factors such as degradation, temperature variations, and dynamic load conditions can introduce significant variability in SOC predictions [28,29]. Ensuring accurate SOC estimation is essential not only for optimizing vehicle performance but also for improving passenger safety, enhancing driving comfort, and reducing costs associated with battery overdesign or oversizing [30]. This paper provides a comprehensive overview of recent advancements in ANN-based SOC estimation methods for electric vehicles. 
By analyzing a range of machine learning algorithms, we aim to highlight the strengths and limitations of these techniques, discussing their applicability and potential to transform EV battery management. For those interested in foundational machine learning concepts, additional resources are available that provide a deeper understanding of these techniques and their underlying principles. As battery technology continues to evolve rapidly, the availability of extensive datasets for battery performance optimization has grown substantially. These data, encompassing time-series measurements of voltage, current, and temperature, allow for more nuanced SOC estimation models, which are critical for the real-time management of energy in EV batteries [31].

2. Methods Analyzed

In this overview, we examine and categorize the main machine learning methods for estimating the battery State of Charge (SOC) in Electric Vehicles (EVs). Before diving into specific approaches, we first address the rationale for selecting machine learning techniques over traditional methods based on Kalman Filters. As illustrated in Figure 3, Kalman Filters rely on detailed knowledge of the battery’s model, requiring several engineering steps to achieve accurate SOC estimation.
In contrast, machine learning offers a more streamlined and direct path to reliable results, using datasets built from real battery measurements or Digital Twin (DT) models, as shown in Figure 4 and Figure 5. The use of DT models to train neural networks has demonstrated considerable success not only in the electric vehicle industry but also across various applications, such as energy efficiency monitoring in luxury yachts, hotel energy management, autonomous navigation for automotive vehicles, and other robotic systems [32,33,34,35]. A key advantage of this approach is its ability to produce a well-trained neural network from DT model outputs, even in early development stages when real-world data are limited. This flexibility makes machine learning particularly valuable in initial project phases, allowing for robust SOC estimation from minimal measurement data and broadening its applicability across different sectors [36].
In this section of the article, we provide an in-depth analysis of recent machine learning methods developed for addressing the challenge of State of Charge (SOC) estimation. We present a concise overview of these approaches, highlighting the conceptual frameworks that underpin how each method functions as proposed in the literature. To facilitate a clear understanding of the techniques, we categorize them into two main types:
  • Feedforward Neural Networks (FNNs);
  • Recurrent Neural Networks (RNNs).
This division enables us to compare their unique architectures, advantages, and limitations in capturing battery dynamics, as well as their suitability for real-time SOC estimation in electric vehicle applications. In this study, we chose to focus exclusively on data-driven approaches based on Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs) due to their proven effectiveness and suitability for the State of Charge (SOC) estimation. These methods stand out for their ability to model complex nonlinear relationships between inputs and SOC without relying on detailed and often computationally intensive battery models. This capability makes them particularly well-suited for handling the dynamic and nonlinear behavior inherent in battery systems. Moreover, FNNs and RNNs are highly advantageous for embedded applications in electric vehicles. Their computational efficiency and relatively low memory requirements allow seamless integration into the resource-constrained environments of vehicle systems. This efficiency not only enables real-time SOC estimation but also ensures that these methods can operate effectively under the hardware limitations typical of embedded systems. The rapid advancements and growing interest in FNNs and RNNs within the field further motivated their selection for analysis. These neural network architectures have demonstrated a strong potential to enhance the performance, adaptability, and robustness of battery management systems. By focusing on these approaches, our study aims to contribute to the understanding and development of SOC estimation techniques that are both practical and scalable for real-world applications. Each method’s structure and operational principles are examined, with attention given to how they handle data patterns and temporal dependencies inherent in SOC estimation. 
This categorization will also help to clarify the decision-making process in selecting appropriate neural network types, considering the specific demands of battery state monitoring and predictive accuracy. To ensure a fair and consistent comparison of SOC estimation methods, such as the machine learning-based approaches detailed in this paper, it is crucial to evaluate algorithms under uniform conditions. This involves using comparable datasets, aligning the number of trainable parameters, and applying consistent training and testing methodologies. By following these guidelines, meaningful insights into the relative strengths and limitations of various approaches can be obtained. Specifically, we recommend adhering to the following four principles when evaluating ANN-based SOC estimation techniques:
  • Consistent Quality Metrics: Utilize identical or comparable quality indices across studies to ensure that the evaluation criteria are uniform and allow for objective comparisons. These metrics could include Mean Absolute Error (MAE), root mean squared error (RMSE), or accuracy percentages, as referenced in the literature;
  • Dataset Transparency: Clearly state the dataset used for training and testing, specifying whether it is a standard publicly available dataset or a custom dataset obtained from specific battery configurations. This helps in understanding the results;
  • Algorithm Complexity: Analyze the computational complexity of the proposed methods, highlighting their advantages and drawbacks. This includes evaluating processing time, memory requirements, and the feasibility of real-time implementation on embedded systems;
  • Methodological Diversity: we selected articles whose methods addressed the problem of SOC estimation in distinct and noteworthy ways.
Integrating these principles into the evaluation process provides a structured framework for assessing SOC estimation methodologies. This approach is particularly relevant when comparing the diverse machine learning models discussed earlier, including FNNs, LSTMs, BiLSTMs, and GRUs. By aligning the evaluation standards, the trade-offs between accuracy, computational efficiency, and practical applicability can be better understood, enabling the selection of the most suitable method for real-world deployment.
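As a concrete illustration of the first principle, the quality indices named above can be computed in a few lines of NumPy. The function and the SOC samples below are illustrative, not results from any cited study:

```python
import numpy as np

def soc_metrics(y_true, y_pred):
    """Return MAE, RMSE, and MAPE for an SOC trace (SOC as a fraction)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))            # Mean Absolute Error
    rmse = np.sqrt(np.mean(err ** 2))     # Root Mean Squared Error
    mape = np.mean(np.abs(err / y_true))  # Mean Absolute Percentage Error
    return mae, rmse, mape

# Hypothetical true vs. estimated SOC samples
soc_true = np.array([0.90, 0.80, 0.70, 0.60])
soc_est = np.array([0.91, 0.78, 0.71, 0.59])
mae, rmse, mape = soc_metrics(soc_true, soc_est)
```

Reporting all three indices on the same data, as recommended here, is what makes results from different studies directly comparable.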

3. Feedforward Neural Networks (FNNs) for Battery SOC Estimation

Feedforward Neural Networks (FNNs) are powerful tools for implementing complex non-linear mappings in systems with multiple inputs and outputs [37]. Their flexibility and simplicity make them particularly valuable in fields such as battery state estimation, where precise, data-driven models are essential for accurate predictions [38]. In this context, we focus on a straightforward FNN structure featuring a single hidden layer, an approach commonly used in state estimation due to its simplicity and reliability [39]. In designing an FNN, the choice of activation function $f_{\mathrm{activ}}(x)$ plays a crucial role in defining the network’s output behavior. A popular option is the hyperbolic tangent function, defined as
$$f_{\mathrm{activ}}(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},$$
which bounds the output within the range $[-1, 1]$, creating smooth and stable transitions that often facilitate efficient training. Another interesting choice of activation function is the sigmoid function, especially in binary classification problems, as it maps input values to the range $(0, 1)$. While similar to tanh, it does not provide zero-centered outputs, which may slow down training in certain networks. Its mathematical expression is
$$f_{\mathrm{activ}}(x) = \frac{1}{1 + e^{-x}}.$$
Alternatively, the Rectified Linear Unit (ReLU) activation function is widely adopted for its computational efficiency and handling of sparse activations. ReLU is defined as
$$f_{\mathrm{activ}}(x) = \mathrm{ReLU}(x) = \begin{cases} 0, & \text{if } x < 0 \\ x, & \text{if } x \geq 0. \end{cases}$$
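The three activation functions above can be sketched directly; the sample inputs are illustrative:

```python
import numpy as np

def tanh_act(x):
    # Hyperbolic tangent: bounded in [-1, 1] and zero-centered
    return np.tanh(x)

def sigmoid(x):
    # Logistic sigmoid: bounded in (0, 1), not zero-centered
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified Linear Unit: zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])  # illustrative inputs
outputs = {"tanh": tanh_act(x), "sigmoid": sigmoid(x), "relu": relu(x)}
```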
The ReLU function’s ability to set all negative inputs to zero introduces a piecewise linear structure that enables faster learning and helps to mitigate the vanishing gradient problem, a common issue in neural networks. The choice of activation function can impact both the network’s ability to learn from data and the computational efficiency during training [40,41]. For instance, tanh and sigmoid activations are bounded, which can be beneficial for controlling output ranges, but they may suffer from saturation effects, leading to slower gradient updates. On the other hand, ReLU mitigates the vanishing gradient problem, especially in deep networks, but can cause “dead neurons” (neurons that stop updating if they consistently output zero) [42]. The training of an FNN involves optimizing its weights $W$ and biases $b$ to minimize the loss function, commonly defined as the sum of squared errors:
$$L = \frac{1}{2} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2,$$
where $y_i$ is the actual target value, $\hat{y}_i$ is the predicted output from the network, and $n$ is the number of training samples. Backpropagation, a widely used algorithm for this purpose, calculates the partial derivatives of the loss function relative to each weight and bias, iteratively updating them to reduce $L$. The number of training iterations, or epochs, is often used to establish a stopping condition, though additional criteria, such as improvement in error reduction, can also determine when training should conclude [43]. A generic architecture of FNNs is shown in Figure 6.
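The training procedure just described can be made concrete with a minimal single-hidden-layer FNN trained by backpropagation on the sum-of-squared-errors loss. The synthetic target, layer size, learning rate, and epoch count below are all illustrative choices, not values from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task standing in for SOC data: one input feature,
# one nonlinear target.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 0.5 * X + 0.3 * np.sin(3.0 * X)

m = 8                                            # hidden neurons
W1 = rng.normal(0.0, 0.5, size=(1, m)); b1 = np.zeros(m)
W2 = rng.normal(0.0, 0.5, size=(m, 1)); b2 = np.zeros(1)
lr = 0.05

def sse(y_true, y_hat):
    # Sum-of-squared-errors loss L = 0.5 * sum((y - y_hat)^2)
    return 0.5 * np.sum((y_true - y_hat) ** 2)

loss0 = sse(y, np.tanh(X @ W1 + b1) @ W2 + b2)   # loss before training

for epoch in range(500):                         # fixed epoch count as stopping rule
    H = np.tanh(X @ W1 + b1)                     # hidden-layer activations
    y_hat = H @ W2 + b2                          # linear output layer
    err = y_hat - y
    # Backpropagation: chain-rule gradients, averaged over the batch
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)           # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

final_loss = sse(y, np.tanh(X @ W1 + b1) @ W2 + b2)
```

After training, `final_loss` is lower than the initial `loss0`, which is the basic behavior every FNN-based SOC estimator in this section relies on.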
For applications like the State of Charge (SOC) estimation in electric vehicle batteries, hybrid approaches that integrate an Equivalent Circuit Model (ECM) with an FNN have proven effective [20]. Traditionally, battery SOC estimation relies on lookup tables (LUTs) to correlate SOC with open-circuit voltage (OCV). However, using an FNN to model this correlation offers greater flexibility and potentially higher accuracy [44]. The FNN uses OCV as an input, processes it through a hidden layer with $m$ neurons, and outputs the SOC estimate. The ECM complements this model by estimating OCV from the battery’s terminal voltage, $V_{\mathrm{terminal}}$, thus enhancing the robustness of the SOC prediction. This method is schematically represented in Figure 7.
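For contrast with the FNN mapping, the traditional LUT approach amounts to piecewise-linear interpolation over a calibration table. The grid values below are hypothetical, not taken from any real datasheet:

```python
import numpy as np

# Hypothetical OCV->SOC calibration points
ocv_grid = np.array([3.0, 3.3, 3.6, 3.9, 4.2])  # open-circuit voltage [V]
soc_grid = np.array([0.0, 0.2, 0.5, 0.8, 1.0])  # corresponding SOC fraction

def soc_from_ocv_lut(ocv):
    # Classic LUT approach: piecewise-linear interpolation over the table
    return np.interp(ocv, ocv_grid, soc_grid)

soc_mid = soc_from_ocv_lut(3.45)  # query an OCV between two table points
```

An FNN trained on the same calibration pairs would replace `np.interp` with a smooth learned mapping, which is where the greater flexibility and potentially higher accuracy cited above come from.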
Further improvements in SOC estimation accuracy have been achieved by integrating the FNN with a Kalman Filter [45], specifically an Unscented Kalman Filter (UKF), for Lithium Iron Phosphate (LFP) batteries [46]. In this hybrid model, the FNN generates initial SOC estimates, which the UKF then refines. The FNN operates as a supervised machine learning algorithm, comparing its predictions to known SOC values to compute estimation errors. For training, datasets collected from automotive homologation driving cycles, such as US06, FUDS, and the Dynamic Stress Test (DST) specified by the U.S. Advanced Battery Consortium, provide diverse operating conditions. Despite the relatively narrow training dataset, the UKF effectively reduces error, increasing the model’s accuracy even when applied to new data. This approach can be illustrated by the following Figure 8.
Recent studies have demonstrated the potential of Feedforward Neural Networks (FNNs) for estimating the State of Charge (SOC) in hybrid energy storage systems without relying on Kalman Filters [47,48,49,50]. One prominent work [51] explored the application of an FNN in a 12V hybrid energy storage configuration that combines a Lithium Iron Phosphate (LFP) battery and a lead–acid battery, both used to power a belt starter–generator system. This system allows the electric machine to operate as either a motor or generator, following a control strategy that cycles the Li-ion battery within a specific SOC range. In this setup, an FNN was developed to estimate the SOC of both batteries simultaneously, leveraging a single network structure. This method can be summarized as in Figure 9.
Despite this approach, further research is needed to compare this dual-output FNN with separate single-output models, in order to clarify the advantages of a shared neural network structure for SOC estimation [52,53]. To address the complexities of accurately predicting SOC—given the influence of various battery parameters like SOC, State of Health (SOH), current, and temperature—some researchers [48,54] have explored using battery polarization states as inputs to the FNN model, as shown in Figure 10.
Different polarization state time constants, denoted as $\tau$, can vary (e.g., from 0 to 1000) to account for distinct dynamic behaviors. Four polarization states with specific $\tau$ values were chosen, alongside the battery current, voltage, and temperature as inputs to the FNN, as shown in Figure 11.
The network was trained and validated on data from various driving cycles—EUDC, HL07, HWFET, and NEDC—each tested under ambient temperatures ranging from −10 °C to 25 °C. This data division allocated 80% for training and the remaining 20% for validation and testing, with additional testing conducted on a Hardware-in-the-Loop (HIL) setup to evaluate real-world applicability. This study further demonstrated the FNN’s capability of accurately estimating SOC across a range of temperatures, including low-temperature conditions down to −20 °C. Although FNNs do not inherently capture time-dependent information, temporal dependencies can be indirectly modeled by creating new input features based on moving averages of the battery’s terminal voltage and current. This approach enabled results comparable to those of Recurrent Neural Networks (RNNs), which are specifically designed for sequential data [55,56,57]. Through a systematic evaluation of FNN architectures—including variations in neuron counts, hidden layers, and averaging window lengths (100 and 400 timesteps)—the study found that, at 25 °C, the optimal results were obtained using a 400-timestep rolling window. As expected, performance degraded at −20 °C due to the adverse effects of low temperatures on Li-ion batteries. The authors hypothesized that model accuracy in these conditions could be improved with a larger dataset or a more complex FNN capable of capturing the intricate dynamics at low temperatures. Another noteworthy finding is the significant improvement in model robustness and accuracy, up to 41%, achievable by using augmented datasets. Additionally, internal resistance data—measured in a laboratory setting along with battery voltage, current, and temperature—were incorporated to train and test the FNN for SOC estimation. 
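The moving-average feature construction described above can be sketched as follows. The voltage and current traces are hypothetical; the 400-step window mirrors the study’s best-performing setting at 25 °C:

```python
import numpy as np

def rolling_mean(signal, window):
    """Causal moving average: sample k averages the last `window` values
    (fewer near the start of the trace, where less history exists)."""
    signal = np.asarray(signal, dtype=float)
    csum = np.cumsum(signal)
    out = np.empty_like(signal)
    for k in range(len(signal)):
        lo = max(0, k - window + 1)
        out[k] = (csum[k] - (csum[lo - 1] if lo > 0 else 0.0)) / (k - lo + 1)
    return out

# Hypothetical voltage/current traces for a discharge
voltage = np.linspace(4.2, 3.0, 1000)
current = np.full(1000, 1.5)
features = np.column_stack([
    voltage, current,
    rolling_mean(voltage, 400),  # extra inputs carrying recent history
    rolling_mean(current, 400),
])
```

Feeding `features` instead of the raw signals gives a memoryless FNN an indirect view of temporal context, which is how it approaches RNN-level accuracy on sequential data.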
Although the direct measurement of internal resistance in vehicles poses practical challenges, it is feasible to integrate a model-based estimate of internal resistance onboard to provide real-time input for both SOC and SOH estimation. In refining FNN structures, optimization algorithms have shown great potential in streamlining the setup process by identifying the optimal FNN structure, thus minimizing reliance on extensive engineering experience to set parameters effectively [58,59,60]. While initial optimization efforts typically focus on neuron count and learning rates, additional parameters—such as the number of hidden layers and the initialization of weight distributions—could further enhance accuracy. However, considering these factors also introduces added complexity and an increased offline computational burden when searching for optimal structures. An interesting extension of FNN design is presented in [61], where a unique model architecture composed of three parallel FNNs was developed, as summarized in Figure 12.
Each FNN was individually trained on distinct operational data corresponding to the three primary operating modes of idling, charging, and discharging. Although the accuracy of this model was limited by the available data (with training based on a pulsed profile and validation on the US06 dataset), the study addressed the influence of randomly initialized network weights by performing multiple training iterations. By averaging the resulting error across these training runs, the authors obtained a final error measurement, accounting for variations caused by initial weight values. This insight underscores the impact of initialization in neural network training, a factor often underexplored in the SOC estimation literature despite its importance in determining convergence to desirable local minima. In another study, the authors in [62] present an advanced method for estimating battery State of Charge (SOC) by integrating a Backpropagation Neural Network (BPNN) with the Backtracking Search Algorithm (BSA). The methodology begins by collecting three key input parameters—voltage, current, and temperature—from DST and FUDS drive cycles conducted at three different temperatures (0 °C, 25 °C, and 45 °C). These raw measurements are initially processed through a moving-average low-pass FIR filter to smooth the data, and then normalized to prepare for subsequent analysis. Once preprocessed, the normalized inputs are fed into the BPNN, which leverages its capacity to model complex, nonlinear relationships between the measured parameters and the battery’s SOC to produce an initial estimate. However, given that neural networks are prone to becoming trapped in local minima due to suboptimal initial weights and biases, the approach enhances performance by incorporating the BSA. 
In the optimization stage, BSA fine-tunes the network by determining the optimal number of neurons in the hidden layer and the ideal learning rate, all based on the minimization of the root mean squared error (RMSE). The entire procedure is structured into four distinct stages. In Stage I, raw data are collected, filtered to reduce noise, and normalized. Stage II uses these preprocessed data within the BPNN to generate an initial SOC estimation. Stage III applies BSA to optimize the hidden layer configuration and learning rate, thereby refining the network’s performance. Finally, Stage IV focuses on training and validating the optimized model with designated training and testing datasets, culminating in the computation of the final SOC and its associated estimation error. To further validate the superiority of the proposed BPNN–BSA model, a comparative study was conducted against alternative approaches, including RBFNN–BSA, GRNN–BSA, and ELM–BSA models. In each case, current, voltage, and temperature served as inputs, and their hyperparameters were similarly tuned using BSA. Evaluation metrics such as RMSE, Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and SOC error were calculated for both DST and FUDS cycles across the three temperature settings. For instance, during the DST cycle, the BPNN–BSA model achieved RMSE values of 1.47%, 0.81%, and 0.48% at 0 °C, 25 °C, and 45 °C, respectively, while, for the FUDS cycle, RMSE values were 1.74%, 0.91%, and 0.57%. These results indicate significant improvements; at 25 °C, the DST cycle RMSE was reduced by 34%, 62%, and 56% compared to the RBFNN–BSA, GRNN–BSA, and ELM–BSA models, respectively. Similar enhancements were observed for the FUDS cycle, with the proposed method consistently yielding lower error metrics across all temperatures. Beyond RMSE, the proposed model demonstrated notable reductions in MAE and MAPE.
For example, the DST cycle MAE reached as low as 0.32% at 45 °C, reflecting improvements of 43%, 68%, and 63% over the alternative methods at 25 °C. Similarly, for the FUDS cycle, the MAE at 0 °C was 0.87%, showing significant decreases compared to higher temperatures and outperforming the competing models by substantial margins. Furthermore, the MAPE for the DST cycle at 25 °C was 7.15% for the proposed model—considerably lower than the corresponding values for the other methods—while the SOC error remained minimal, underscoring the enhanced accuracy and robustness of the BPNN–BSA approach. Overall, integrating BPNN with BSA not only optimizes the neural network’s parameters but also delivers superior performance in terms of accuracy and robustness across varying operating conditions and temperature profiles. Despite the increased computational cost inherent in such optimization, the improved precision in SOC estimation and the model’s ability to generalize across different scenarios make the BPNN–BSA approach a highly effective solution for battery management in electric vehicles and energy storage systems. This method with the BSA algorithm is summarized in Figure 13.
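Stage III of the procedure above pairs each candidate hidden-layer size and learning rate with a validation RMSE and keeps the best pair. BSA itself is beyond a short sketch, so the snippet below substitutes a plain grid search over the same two hyperparameters; `validation_rmse` is a synthetic stand-in for actually training a BPNN on DST/FUDS data:

```python
import numpy as np

def validation_rmse(n_hidden, lr):
    """Stand-in for training a BPNN and returning its validation RMSE.
    A synthetic bowl-shaped surface (minimum near 20 neurons, lr = 0.01)
    keeps the sketch runnable without battery data."""
    return 0.5 + 0.001 * (n_hidden - 20) ** 2 + 50.0 * (np.log10(lr) + 2.0) ** 2

# Search the same two hyperparameters BSA tunes: hidden size and learning rate
best = None
for n_hidden in range(5, 41, 5):
    for lr in (0.001, 0.003, 0.01, 0.03, 0.1):
        score = validation_rmse(n_hidden, lr)
        if best is None or score < best[0]:
            best = (score, n_hidden, lr)

best_rmse, best_neurons, best_lr = best
```

BSA replaces the exhaustive loop with a population-based search, which matters once each evaluation requires a full network training run.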
An intriguing recent study presents a highly accurate estimator for the State of Charge (SOC) of lithium-ion batteries by meticulously integrating their underlying physical behavior into the modeling process. The authors introduce a sophisticated hybrid architecture that fuses a physics-based model—as detailed in [63]—with a data-driven deep Feedforward Neural Network (FNN) for battery cells, as described in [64,65]. This innovative framework is implemented within the paradigm of Physics-Informed Neural Networks (PINNs), which uniquely combine traditional neural network architectures with the governing physical laws of the system. Unlike conventional data-driven methods, PINNs inherently respect the electrochemical principles and dynamic processes that govern battery operation, thereby yielding predictions that are not only accurate and reliable but also physically plausible, even in cases where experimental data are limited or contaminated with noise. Central to this methodology is a composite loss function that seamlessly integrates two components. The first is a data fidelity term, which minimizes the discrepancy between the network’s predictions and the observed measurements, mirroring the approach taken by conventional neural networks. The second, and more distinctive, component, is the physics fidelity term, which quantifies deviations from established physical laws—such as charge conservation and the kinetics of battery charge and discharge—by directly incorporating the governing differential equations, along with their boundary and initial conditions, into the loss function. This design ensures that the network, which accepts critical input variables such as current, voltage, and temperature, produces SOC predictions that remain consistent with the underlying physics across a broad spectrum of operating conditions. The experimental data used to train the PINN model were gathered under rigorously controlled laboratory conditions. 
Experiments were conducted at ambient temperatures ranging from −10 °C to 25 °C, thereby capturing the battery’s performance across a diverse thermal spectrum. A series of power profiles were generated for an electric vehicle by simulating standard drive cycles—including UDDS, US06, LA92, and HWFET—as well as “Mix” cycles composed of randomized segments of these standard profiles. During these experiments, the battery was housed within a climate chamber where ambient temperature was systematically varied, and key parameters such as voltage, current, battery temperature, and ampere-hours were continuously recorded. This comprehensive dataset, generated under varying dynamic loads and temperature conditions, provided the necessary depth and diversity to effectively train the PINN model, enabling it to accurately estimate SOC across all tested thermal conditions. To further validate the approach, the study conducted a rigorous comparison between the proposed PINN model, an Adaptive Kalman Filter (AKF), and a conventional FNN. Twelve distinct neural network setups were trained and evaluated using the Li-ion battery dataset. Notably, the battery’s behavior at −10 °C posed significant challenges due to substantial voltage deviations under load, heightened nonlinearity, and increased internal resistance. Despite these hurdles, the PINN model demonstrated exceptional performance, with error metrics—such as RMSE, MAE, and maximum error—remaining impressively low. Even at −10 °C, the RMSE was maintained below 2.85% (approximately 0.028310), and the maximum error, indicative of estimation variance, stayed low even as the SOC dropped to 20% (around 0.119825). These results underscore the PINN’s adeptness at capturing the intricate, nonlinear dynamics and resistance characteristics under extreme conditions, thereby affirming its reliability and robustness. 
Furthermore, embedding physical laws within the neural network enhances its interpretability, a significant advantage over conventional “black-box” models. By elucidating the internal mechanisms that govern battery behavior, this approach not only bolsters confidence in the model’s predictions but also facilitates the identification and resolution of potential issues under novel or extreme operating scenarios. Although the computational demands are higher—due to the necessity of calculating derivatives and enforcing additional constraints during training—the substantial benefits in terms of prediction accuracy, robustness, and real-time applicability render the PINN-based approach a highly promising solution for advanced battery management systems in electric vehicles. By elegantly merging data-driven techniques with a profound understanding of battery physics, this innovative method represents a significant advancement over traditional SOC estimation techniques, offering a robust, adaptable, and interpretable tool for navigating the complexities inherent in lithium-ion battery behavior. This particular architecture is shown in Figure 14.
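To make the composite loss concrete, the following is a minimal NumPy sketch, not the authors' implementation: it pairs a data-fidelity MSE term with a physics residual built from the coulomb-counting ODE dSOC/dt = -I/(3600*Q), used here as a simple stand-in for the governing equations of the paper. The function name and signature are illustrative assumptions.

```python
import numpy as np

def pinn_loss(soc_pred, soc_meas, current, dt, capacity_ah, lam=1.0):
    """Composite PINN-style loss: data fidelity + physics fidelity (sketch).

    soc_pred    : network SOC predictions over a trajectory (fraction, 0..1)
    soc_meas    : reference SOC labels
    current     : measured current in A (positive = discharge)
    dt          : sample period in seconds
    capacity_ah : nominal cell capacity in Ah
    lam         : weight of the physics term
    """
    # Data fidelity: MSE between predictions and labels.
    data_loss = np.mean((soc_pred - soc_meas) ** 2)

    # Physics fidelity: residual of the charge-conservation ODE
    # dSOC/dt = -I / (3600 * Q), discretized with finite differences.
    dsoc_dt = np.diff(soc_pred) / dt
    residual = dsoc_dt + current[:-1] / (3600.0 * capacity_ah)
    physics_loss = np.mean(residual ** 2)

    return data_loss + lam * physics_loss
```

A trajectory that exactly obeys coulomb counting and matches its labels drives both terms toward zero, which is the mechanism that keeps predictions physically plausible even with sparse data.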
Finally, there is another interesting study [66], in which an innovative hybrid method is presented that significantly enhances battery state estimation by integrating the traditional Equivalent Circuit Model (ECM) with Feedforward Neural Networks. The approach commences with a first-order ECM that characterizes the battery’s dynamic behavior using fundamental components such as ohmic resistance, polarization resistance, and polarization capacitance. The initial parameters are determined online via a Recursive Least Squares algorithm with a forgetting factor, ensuring robust calibration against real-world measurements. However, under extreme conditions—such as freezing temperatures or rapid charging and discharging—the ECM alone proves insufficient to capture the battery’s pronounced nonlinear behavior. To overcome these limitations, three Feedforward Neural Networks are embedded within the ECM as virtual electronic components, each dedicated to correcting the residual errors in estimating a specific parameter. These networks are trained offline using an iterative strategy that incorporates the battery’s state-space equations into a physics-informed loss function. This loss function, derived from the mean squared error between the predicted and measured terminal voltages, directs the backpropagation process to fine-tune the network parameters. The evaluation of the hybrid model is based on a real-world battery dataset, which comprises charging and discharging measurements—such as current, voltage, and cell temperature—collected from a brand-new Panasonic 18650PF cell under diverse operating conditions. To thoroughly assess the model’s performance, the authors selected three different operating modes (HPPC, US06, and HWFET) and conducted tests under four ambient temperatures (−20 °C, −10 °C, 0 °C, and 10 °C), resulting in a total of 12 scenarios. 
To further validate the model’s accuracy, the predicted terminal voltage was computed using both the first-order ECM and the proposed hybrid model with the identified parameters. Under the HPPC mode, the traditional ECM demonstrated high accuracy during the initial stages of discharging; however, its estimation error progressively increased due to error accumulation from the recursive FFRLS method and its limited adaptability. In contrast, the hybrid model maintained consistently high accuracy throughout the discharging process, as the neural network modules dynamically compensated for the parameter estimation biases with their superior nonlinear-fitting capabilities. Under the US06 and HWFET operating conditions, the traditional ECM exhibited considerable fluctuations and larger estimation errors, whereas the hybrid model closely tracked the terminal voltage without introducing extraneous noise. This enhanced performance is attributed to the FNN modules’ ability to effectively learn the rapid variations in voltage and current characteristic of these conditions. Notably, when discharging at −20 °C—a scenario in which the battery’s electrochemical behavior becomes exceedingly complex—the traditional ECM produced the highest errors across all operating modes, while the hybrid model continued to deliver satisfactory accuracy. Following the offline training phase, the refined parameters are deployed for real-time State of Charge (SOC) estimation using an Extended Kalman Filter (EKF). When applied to the dataset, the hybrid model achieved a reduction in SOC estimation errors, ranging from 29% to 64% across various conditions, demonstrating marked improvements in accuracy, adaptability, and robustness compared to conventional ECM approaches. 
By merging the interpretability of physics-based models with the advanced nonlinear learning capabilities of neural networks, this method offers a precise and dependable solution for battery state estimation in complex and challenging environments. This approach can be summarized as in Figure 15.
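The online parameter identification step of this hybrid approach can be sketched with a generic recursive least squares estimator with a forgetting factor. This is a textbook linear-in-parameters form, not the paper's exact ECM regressor; the class name and defaults are illustrative.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with forgetting factor (sketch).

    Fits y_k = phi_k^T theta online; older samples are down-weighted
    by lambda_ so the estimate can track slowly drifting ECM parameters
    such as ohmic and polarization resistance.
    """
    def __init__(self, n_params, lambda_=0.98, p0=1e3):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = np.eye(n_params) * p0       # covariance matrix
        self.lam = lambda_

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        err = y - phi @ self.theta           # innovation
        self.theta = self.theta + k * err
        # Covariance update with forgetting
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

With a forgetting factor below 1, recent measurements dominate, which is what lets the ECM parameters stay calibrated as the cell's operating point drifts.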
Based on the studies analyzed in this section, it is evident that, with the integration of advanced optimization techniques and the use of augmented datasets, Feedforward Neural Networks (FNNs) emerge as a highly promising solution for the next generation of battery management systems, particularly in practical and real-world applications [67]. However, despite the benefits discussed in this section, FNNs suffer from exploding and vanishing gradients during training [68] and, especially in complex systems, fail to capture the correlations that can occur between distant time instants. Recurrent Neural Networks, which we analyze in the next section of the article, are the usual remedy for these problems.

4. Recurrent Neural Networks (RNNs) for Battery SOC Estimation

In this section, we introduce various types of Recurrent Neural Networks (RNNs) and examine key findings from the recent literature on their application in estimating the State of Charge (SOC) of batteries. RNN-based machine learning models, which retain and leverage past information, show particular promise for such tasks [69,70]. Unlike traditional neural networks, RNNs incorporate temporal dependencies by feeding previous outputs or intermediate states back as inputs [71]. For instance, the SOC from time step $k-1$ can be fed as an input at time $k$. This architecture is well-suited for capturing short-term sequence dependencies, though it may struggle with the long-term dependencies typically seen in battery systems. However, training RNNs can be challenging due to issues such as vanishing or exploding gradients during backpropagation, which can hinder the learning of long-term dependencies [72]. To address these limitations, several RNN variants have been developed, including Long Short-Term Memory (LSTM) [73], Bidirectional LSTM (BiLSTM) [74], and Gated Recurrent Unit (GRU) [75]. These architectures use specialized gating mechanisms to manage and leverage dependencies over longer sequences, making them suitable for tasks involving extensive sequential data, such as speech recognition and battery SOC estimation [69,76,77].

4.1. LSTM Neural Network

The LSTM network introduces a memory cell $c_k$ to store and manage information over time. It uses three primary gates:
  • Forget Gate $f_k$: Controls how much information from the previous memory cell $c_{k-1}$ should be retained.
    $f_k = \sigma(W_f \cdot [h_{k-1}, x_k] + b_f)$
  • Input Gate $i_k$: Regulates how much of the new information will be added to the memory cell.
    $i_k = \sigma(W_i \cdot [h_{k-1}, x_k] + b_i)$
  • Output Gate $o_k$: Determines how much of the information from the memory cell $c_k$ should be used in the output $h_k$.
    $o_k = \sigma(W_o \cdot [h_{k-1}, x_k] + b_o)$
These gates work together to update the memory cell and produce the final output:
$\tilde{c}_k = \tanh(W_c \cdot [h_{k-1}, x_k] + b_c)$
$c_k = f_k \odot c_{k-1} + i_k \odot \tilde{c}_k$
$h_k = o_k \odot \tanh(c_k)$
This setup enables the LSTM to selectively retain or discard information, allowing it to capture long-term dependencies [78]. An example of a generic architecture of an LSTM is shown in Figure 16.
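The gate equations above translate directly into a single NumPy time step. This is a didactic sketch (weights act on the concatenated vector $[h_{k-1}, x_k]$, as in the equations), not a production implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_k, h_prev, c_prev, W, b):
    """One LSTM time step following the gate equations above.

    W is a dict of weight matrices ('f', 'i', 'o', 'c') acting on the
    concatenated vector [h_{k-1}, x_k]; b is a dict of bias vectors.
    """
    z = np.concatenate([h_prev, x_k])       # [h_{k-1}, x_k]
    f = sigmoid(W['f'] @ z + b['f'])        # forget gate
    i = sigmoid(W['i'] @ z + b['i'])        # input gate
    o = sigmoid(W['o'] @ z + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ z + b['c'])  # candidate cell state
    c = f * c_prev + i * c_tilde            # new memory cell
    h = o * np.tanh(c)                      # new hidden state / output
    return h, c
```

With all weights at zero, every gate evaluates to 0.5 and the cell simply halves its memory each step, which illustrates how the gates interpolate between retaining and discarding information.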
In [79], the authors applied a Long Short-Term Memory (LSTM) network to estimate the State of Charge (SOC) using only directly measured battery signals, such as terminal voltage, load current, and ambient temperature. This approach does not rely on additional methods or estimation filters, simplifying the overall implementation. A notable outcome of this work is the LSTM’s ability to accurately estimate SOC across varying ambient temperatures. This capability represents a significant advantage over traditional methods that rely on lookup tables (LUTs), which typically require separate LUTs to be created for each temperature range [80]. However, it is important to highlight that the effectiveness of this approach depends on the quality of the dataset used to train the LSTM. Specifically, the dataset must contain sufficient information to encode the temperature variations within the model parameters, which may necessitate a large and comprehensive dataset. For their study, the authors utilized a dataset from a Panasonic 18650PF Li-ion battery cell, acquired at multiple ambient temperatures. This dataset is publicly available for download from [81]. Another interesting way to improve LSTM networks, exploiting their ability to retain information over short- to long-term observation horizons, is to combine the LSTM with other neural networks [82,83,84]. The idea is to use a hybrid CNN–LSTM network to estimate the State of Charge (SOC) of lithium-ion batteries by capturing their complex nonlinear dynamics using voltage, current, and temperature measurements. The method leverages spatial features extracted from the immediate relationships among the measured variables—including their averages—as well as temporal dependencies derived from historical data. The network begins by processing time-sequenced input data with a one-dimensional convolution layer that transforms raw signals into high-level representations emphasizing internal correlations. 
These features are then passed to an LSTM network that effectively integrates historical context over both short and long periods. This combined approach enhances performance beyond that of a traditional LSTM alone. To simulate real-world electric vehicle battery loading behaviors, the DST–FUDS–US06 (DFU) profile—an integration of the dynamic stress test (DST), federal urban driving schedule (FUDS), and US06 test profiles—was employed. During discharge, the DFU profile was repeatedly applied until full battery discharge, and tests were conducted under various temperature conditions (0 °C, 10 °C, 20 °C, 30 °C, 40 °C, 50 °C, and room temperature) to build comprehensive testing datasets. More specifically, the network was trained with DST data (8438 samples), FUDS data (8390 samples), and US06 data (7987 samples) at room temperature, while its online SOC estimation performance was evaluated using DFU data (8350 samples) at room temperature. The input vector $x_k = [I_k, V_k, T_k, I_{avg,k}, V_{avg,k}]$ is mapped to the output $y_k = [SOC_k]$. Estimation results are reported in the original paper both at room temperature and under varying temperature conditions. During training, although increasing the number of epochs generally enhances model accuracy, it also extends training time. To determine an optimal number of epochs, the root mean squared error (RMSE) for both training and testing datasets was monitored. The RMSE dropped below 4% after 2000 epochs and stabilized around 2% after 6200 epochs, with the best performance achieved between 8000 and 11,000 epochs. Consequently, 10,000 epochs were selected for training. For performance comparison, the proposed network was benchmarked against a standalone LSTM (i.e., without the convolutional layer) and a CNN comprising three convolutional layers (each with six filters and zero padding to preserve input length). 
All networks were trained for 10,000 epochs, with training times of 161 min for the proposed network, 231 min for the LSTM, and 102 min for the CNN. On a laptop, the average computation times per time step were 0.098 ms, 0.116 ms, and 0.082 ms for the proposed, LSTM, and CNN networks, respectively. When the battery starts at 100% SOC, the CNN network operating independently of past inputs produces highly fluctuating estimates, whereas both the LSTM and the proposed CNN–LSTM deliver much smoother and more accurate predictions. The proposed network consistently maintains estimation errors within 2%, while the standalone LSTM can sometimes exceed 4%. Although both networks yield satisfactory results, the CNN–LSTM demonstrates slightly better accuracy. Since the initial battery SOC is not always known a priori, robustness against unknown initial conditions is critical. To simulate this, data with lower initial SOC values were generated (for example, by removing data with SOC greater than 80% to simulate an 80% initial SOC). When the SOC starts at 80%, both the LSTM and the proposed networks exhibit a two-stage behavior. Initially, the effect of the unknown state is dominant, and the proposed network takes slightly longer than the LSTM to track the true SOC. Once convergence is achieved, however, the proposed network exhibits lower and more consistent errors, with overall RMSE and mean absolute error (MAE) values of 1.35% and 0.87% compared to 1.43% and 0.95% for the LSTM. When the initial SOC is set to 60%, the proposed network converges to the true SOC faster than the LSTM and subsequently provides significantly more stable and accurate estimates (with an RMSE of 0.92% and MAE of 0.48% versus 2.97% and 2.03% for the LSTM). Although the CNN network is less affected by unknown initial conditions, its inability to model temporal dependencies results in inferior overall performance. 
Overall, this integrated CNN–LSTM design effectively combines local and temporal feature extraction, adapts to environmental variations and uncertain initial conditions, and delivers superior performance in estimating the battery’s State of Charge. This type of hybrid approach is schematized in Figure 17.
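The convolutional front end of such a hybrid can be illustrated with a toy one-dimensional convolution over a raw signal. This is a simplified stand-in for the paper's learned convolution layer; the function name, the ReLU choice, and the equal-length-kernel assumption are all part of this sketch.

```python
import numpy as np

def conv1d_features(signal, kernels, bias=None):
    """Toy 1-D convolution front end, as used before the LSTM stage.

    Each kernel slides over the raw signal ('valid' mode, so kernels are
    assumed to share one length) and produces one feature channel; a ReLU
    keeps the high-level representation nonnegative.
    """
    feats = []
    for j, k in enumerate(kernels):
        # np.convolve flips its second argument, so reversing the kernel
        # yields the cross-correlation used by neural network conv layers.
        y = np.convolve(signal, k[::-1], mode='valid')
        if bias is not None:
            y = y + bias[j]
        feats.append(np.maximum(y, 0.0))    # ReLU activation
    return np.stack(feats)                  # shape: (channels, time)
```

Each output channel emphasizes one local pattern in the voltage/current/temperature sequence; the resulting feature map is what the LSTM stage then integrates over time.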
The recent work [85] presents the EI–LSTM–CO, an improved LSTM–RNN model for SOC estimation. By incorporating a sliding window average voltage into the input, the model enhances its ability to capture nonlinear battery behavior. Additionally, a state flow strategy based on ampere-hour integration (AhI) constrains output variations, resulting in smoother and more accurate SOC predictions. This particular LSTM neural network is schematized in the following diagram (Figure 18).
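The sliding window average voltage feature is straightforward to compute; a minimal sketch follows. The handling of the first few samples (averaging whatever history exists so the output matches the input length) is an assumption of this sketch, not a detail taken from [85].

```python
import numpy as np

def sliding_window_mean(v, window):
    """Sliding-window average of the terminal voltage, used as an
    extra model input. The first window-1 samples average the
    available history so the output has the same length as v."""
    out = np.empty_like(v, dtype=float)
    for k in range(len(v)):
        out[k] = v[max(0, k - window + 1):k + 1].mean()
    return out
```

Smoothing the voltage this way suppresses instantaneous load transients, giving the network a cleaner correlate of the underlying open-circuit behavior.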
An interesting recent work [86] presents a novel approach for battery State of Charge (SOC) estimation—a critical factor for electric vehicle safety and range prediction—using an LSTM neural network optimized via a random search algorithm (RS-LSTM), as depicted in Figure 19. Unlike most SOC studies that rely on manual hyperparameter tuning, which is often haphazard and fails to achieve a globally optimal solution, this method automates the optimization process. Few studies have attempted automatic hyperparameter tuning; those that have generally lack sufficient validation in real-world scenarios and do not adequately account for the complexities of actual vehicle operation. Real driving conditions introduce numerous measurable battery parameters, and including less relevant features can compromise both model accuracy and interpretability. To simulate the diverse conditions encountered by lithium-ion batteries in practical use, the study utilizes a dataset from the University of Maryland (CALCE) based on the INR 18650-20R battery model. The CALCE team conducted a series of dynamic tests under varying temperatures (0 °C, 25 °C, and 45 °C) and initial SOC conditions (80% and 50%) using datasets from the Dynamic Stress Test (DST) and the Federal Urban Driving Schedule (FUDS). To address the challenges associated with manual hyperparameter tuning, the authors first apply a Random Forest algorithm for dimensionality reduction, selecting the most relevant input features (discharge capacity and discharge energy) to reduce noise and prevent overfitting. Building on this foundation, the RS-LSTM method optimizes the LSTM network’s hyperparameters—namely, look-back, epochs, batch size, and learning rate—within a carefully designed search space tailored to the battery data. This process yields optimal settings of a look-back of 45, 177 epochs, a batch size of 64, and a learning rate of 0.0026. 
The study further compares the feature selection results from the Random Forest algorithm with those derived from the Pearson correlation coefficient to identify the best combination of model inputs from both linear and nonlinear perspectives. Following dimensionality reduction and hyperparameter optimization, the optimized LSTM network is evaluated using standard metrics such as Mean Absolute Error (MAE), root mean squared error (RMSE), and the coefficient of determination ($R^2$). Experimental validation under various SOC intervals, temperature conditions, and even scenarios with added Gaussian noise confirms the method’s superiority, effectiveness, and robustness. Furthermore, to assess the efficiency of the proposed method under complex driving dynamics—including frequent charging, discharging, and pause states—the study incorporates real-world vehicle data collected from three vehicles between 1 and 15 April 2022. The SOC estimation results for both training and validation sets indicate that all vehicles exhibit excellent performance, demonstrating that RS-LSTM effectively captures vehicle-specific SOC change patterns. Although some deviations occur during vehicle pause states—likely due to insufficient training data—the overall performance remains robust. Notably, Vehicle C shows exceptional results (with an $R^2$ of 0.998, MAE of 0.871, and RMSE of 1.218 in the training set, and similar metrics in the validation set), while Vehicles A and B display only minor increases in error. The RS-LSTM method significantly outperforms alternative approaches such as TCN–LSTM, CNN–BiGRU, CNN, and standard LSTM networks, achieving an MAE of 0.221%, an RMSE of 0.262%, and an $R^2$ of 0.997. By automating hyperparameter tuning and employing robust feature selection, this integrated approach produces a model that is both highly accurate and stable, overcoming the limitations of manual tuning. 
The authors recommend that future research further adapt the model to the complex nature of real-world vehicle data and develop more adaptive feature selection methods to dynamically respond to evolving data characteristics. This innovative approach not only represents a significant advancement for battery management systems in electric vehicles but also holds potential for broader applications, such as in portable electronic devices and supercapacitors.
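The random-search step of RS-LSTM amounts to sampling configurations from a candidate space and keeping the best. The following generic sketch captures that logic; the objective is assumed to train and validate a model and return an error score, and the names and search space are illustrative, not taken from [86].

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Random-search hyperparameter tuner (sketch, in the spirit of RS-LSTM).

    space     : dict mapping hyperparameter name -> list of candidate values
    objective : callable taking a configuration dict and returning a
                validation error (lower is better)
    """
    rng = random.Random(seed)
    best_cfg, best_err = None, float('inf')
    for _ in range(n_trials):
        # Sample each hyperparameter uniformly from its candidate list.
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        err = objective(cfg)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err
```

In the paper's setting the space would cover look-back, epochs, batch size, and learning rate, and the objective would be the LSTM's validation RMSE.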
The use of LSTM networks comes with challenges, such as their unidirectional nature, which limits them to processing data only from past to present, potentially losing valuable future context. They are also prone to overfitting, especially with limited or noisy datasets, and optimizing parameters like look-back length and learning rate can be computationally expensive and time-consuming. Additionally, LSTMs may struggle with capturing complex long-range dependencies and require significant computational resources for processing long sequences. To partially solve these problems, Bidirectional LSTM (BiLSTM) networks address these limitations by processing data in both forward and backward directions, allowing them to capture both past and future contexts [87]. This bidirectional approach enhances their ability to handle noisy or incomplete data, model complex dependencies more effectively, and achieve greater accuracy in tasks like SOC estimation [88]. While BiLSTMs require more computational resources than standard LSTMs, their improved robustness and precision make them a valuable tool for applications requiring detailed sequential data analysis. In the next section of this article, we will discuss this recurrent neural network, highlighting some interesting applications and variants for estimating SOC.

4.2. Bidirectional LSTM Neural Networks

The Bidirectional Long Short-Term Memory (BiLSTM) network is an extension of the traditional LSTM architecture that processes data in both forward and backward directions. By combining the outputs from two LSTMs—one processing the input sequence in its natural order and the other processing it in reverse—BiLSTMs can capture both past (previous time steps) and future (upcoming time steps) context for every point in the sequence [89]. This bidirectional architecture makes BiLSTMs particularly useful for applications where the context from both ends of a sequence significantly impacts predictions, such as text translation, speech recognition, and battery state estimation tasks like State of Charge (SOC) [90] and remaining useful life estimation [91].
  • Forward LSTM: Processes the input sequence $x = [x_1, x_2, \ldots, x_T]$ from the first time step $t = 1$ to the last $t = T$, generating a sequence of hidden states.
    $\overrightarrow{h}_t = \mathrm{LSTM}_{\mathrm{forward}}(x_t, \overrightarrow{h}_{t-1})$
    Here, $\overrightarrow{h}_t$ represents the hidden state at time $t$ for the forward LSTM;
  • Backward LSTM: Processes the same input sequence in reverse, from $t = T$ back to $t = 1$, generating another sequence of hidden states.
    $\overleftarrow{h}_t = \mathrm{LSTM}_{\mathrm{backward}}(x_t, \overleftarrow{h}_{t+1})$
    $\overleftarrow{h}_t$ is the hidden state at time $t$ for the backward LSTM;
  • Combined Output: The output of the BiLSTM at each time step $t$ is a combination of the hidden states from the forward and backward LSTMs. The combination is typically achieved using concatenation, summation, or averaging:
    $h_t = f(\overrightarrow{h}_t, \overleftarrow{h}_t)$
    where $f(\cdot)$ is a function that combines the two hidden states, such as
    - Concatenation: $h_t = \overrightarrow{h}_t \oplus \overleftarrow{h}_t$
    - Summation: $h_t = \overrightarrow{h}_t + \overleftarrow{h}_t$
Key equations for the BiLSTM include the following:
  • Forward LSTM Hidden State:
    $\overrightarrow{h}_t = \tanh(W_f x_t + U_f \overrightarrow{h}_{t-1} + b_f)$
    where $W_f$, $U_f$, and $b_f$ are the weights and biases for the forward LSTM;
  • Backward LSTM Hidden State:
    $\overleftarrow{h}_t = \tanh(W_b x_t + U_b \overleftarrow{h}_{t+1} + b_b)$
    where $W_b$, $U_b$, and $b_b$ are the weights and biases for the backward LSTM;
  • Output Combination:
    $h_t = f(\overrightarrow{h}_t, \overleftarrow{h}_t)$
    Depending on the task, the choice of $f$ (concatenation, summation, or averaging) can vary.
This dual-direction approach allows BiLSTM to capture temporal or contextual dependencies from both ends of the sequence, making it ideal for applications such as text translation, where context from both the start and end of a phrase impacts the final interpretation. An example of this architecture is shown in Figure 20.
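The two passes and their combination can be sketched generically: run any single-step recurrence forward and backward over the sequence and merge the hidden states per time step. The simple accumulator step used in the test below is purely illustrative.

```python
import numpy as np

def bilstm_outputs(x_seq, step_fwd, step_bwd, h0, combine='concat'):
    """Bidirectional pass over x_seq (sketch).

    step_fwd / step_bwd : single-step functions h_t = step(x_t, h_prev),
                          standing in for the forward/backward LSTM cells
    h0                  : initial hidden state for both directions
    combine             : 'concat' or 'sum' per-time-step combination
    """
    T = len(x_seq)
    h_f = [None] * T
    h_b = [None] * T
    h = h0
    for t in range(T):                  # forward pass: t = 1 .. T
        h = step_fwd(x_seq[t], h)
        h_f[t] = h
    h = h0
    for t in reversed(range(T)):        # backward pass: t = T .. 1
        h = step_bwd(x_seq[t], h)
        h_b[t] = h
    if combine == 'concat':
        return [np.concatenate([f, b]) for f, b in zip(h_f, h_b)]
    return [f + b for f, b in zip(h_f, h_b)]    # summation
```

Note that every output $h_t$ depends on the entire sequence, which is exactly why BiLSTMs capture both past and future context at each point.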
Many interesting works, such as [92,93], build on the BiLSTM concept and introduce a hybrid deep learning architecture, combining a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network, to enhance the estimation of the State of Charge (SOC) in the lithium-ion batteries of electric vehicles by uniting spatial and temporal feature analysis. Accurate SOC estimation requires capturing spatial features—which represent the interrelationships among current inputs—and temporal features, which reflect the correlations between the present SOC and historical data. To achieve this, the authors propose a hybrid CNN–BiLSTM network. The one-dimensional Convolutional Neural Network (CNN) extracts advanced spatial features from raw data by adjusting convolution kernel weights and window widths, effectively generating multiple features that serve as inputs for the subsequent recurrent model. The bidirectional long short-term memory (BiLSTM) network then models the temporal dynamics by learning from both past and future dependencies, thereby capturing the evolution of battery behavior over time. A key innovation of this work is the integration of the Group Learning Algorithm (GLA) to automatically optimize the hyperparameters of the CNN–BiLSTM network. By tuning parameters such as the learning rate, the number of LSTM units, and the number of bidirectional layers, GLA enhances the model’s predictability and stability while reducing the computational burden and eliminating the need for manual configuration. This strategic approach improves the model’s efficiency and adaptability to diverse operating conditions without the risk of overfitting or underfitting. 
The deep neural network is trained using the mean squared error (MSE) loss function, and its performance is evaluated with metrics including $R^2$, relative error, normalized RMSE, RMSE, and Mean Absolute Error (MAE). The experiments were conducted on an AMD Ryzen 5 5500U using MATLAB 2022, with data collected from cylindrical INR21700-40T Li-ion batteries provided by the CALCE Research Group. The batteries were discharged under standard industry procedures at temperatures of 25 °C, 0 °C, and 45 °C. Analysis of the data revealed strong positive associations between SOC, voltage, and discharge capacity, as well as the significant influence of urban driving cycles and environmental conditions on battery performance. Evaluations across different temperatures demonstrated that the proposed GLA–CNN–BiLSTM model consistently outperforms alternative approaches (such as CNN, CNN–LSTM, GLA–LSTM, and GLA–CNN) in terms of accuracy and stability. At 0 °C, the model produced significantly lower error metrics than the alternatives, while, at 25 °C and 45 °C, it maintained high prediction accuracy with minimal errors. Further performance assessments, including the Granger causality test, confirmed the model’s superior ability to capture the intricate dynamics of battery behavior, achieving remarkably low error rates that surpass state-of-the-art methods. This work presents a robust and efficient methodology for SOC estimation in electric vehicles. By effectively integrating spatial and temporal feature extraction with an innovative hyperparameter tuning strategy via GLA, the hybrid model significantly enhances the accuracy, reliability, and adaptability of battery SOC estimation in real-world applications. These methods based on GLA–CNN–BiLSTM are summarized in Figure 21.
An interesting enhancement of bidirectional LSTM networks is presented in the work [94], where the authors integrate an attention mechanism to improve performance and introduce multi-dimensional temperature compensation to address variations in temperature. This approach describes an advanced method for estimating the State of Charge (SOC) of lithium-ion batteries using a temperature-compensated Bidirectional LSTM (BiLSTM) network enhanced with an integrated attention mechanism. This approach improves feature extraction by weighting the hidden outputs of multiple LSTM cells, which allows the network to selectively focus on the most relevant inputs in complex time-series battery data. In this method, spatial attention is applied during the encoding phase via a convolutional neural network that learns high-dimensional representations, ensuring that the most critical features are emphasized. During decoding, temporal attention is used to dynamically assign weights across time steps, effectively capturing essential temporal information for accurate SOC prediction. Temperature effects are addressed through a multidimensional compensation strategy. A polynomial model expresses the open-circuit voltage (OCV) as a function of both SOC and temperature, while the reference SOC is computed using ampere-hour integration. This integration of temperature data not only compensates for temperature-induced variations but also improves error convergence during training. Experimental validation was performed using a laboratory test bench that includes a battery testing system, an electrochemical workstation, a host computer, and a thermostat. Lithium-ion cells with a LiNiCoAlO2 cathode, graphite anode, and solid polymer electrolyte (rated at 3.7 V and 2.6 Ah) were tested at temperatures of 0 °C, 25 °C, and 40 °C under two conditions: a dynamic stress test (DST) and an urban dynamometer driving schedule (UDDS). 
Under DST conditions at 25 °C with a discharge period of 13,500 s sampled every 10 s (yielding 1350 data points), the initial high-quality data allowed for accurate predictions by the LSTM. However, bidirectional errors introduced fluctuations when the dataset was small. The temperature-compensated model with integrated attention (AMBiLSTM TC) effectively mitigated these errors, maintaining stable and lower prediction errors throughout the discharge. Under UDDS conditions, where data are sampled every second (yielding 13,500 data points) and the temperature rises rapidly within the first 120 s, the AMBiLSTM TC again outperformed standard methods by significantly reducing the root mean squared error (RMSE). Further analysis during training showed that the introduction of attention mechanisms led to rapid error convergence, with an optimal configuration found at 16 hidden units, balancing improved accuracy with only a modest increase in computational time. By integrating both spatial and temporal attention mechanisms with multidimensional temperature compensation and validating the approach through extensive experimental testing, the proposed method markedly improves feature extraction from time-series data, resulting in a robust and accurate SOC estimation under a wide range of operating conditions while maintaining computational efficiency. This technique is summarized in Figure 22.
Experimental results show that the proposed method achieves maximum errors of less than 1% and improves accuracy by 9.39% and 22.36% under two different current conditions. Furthermore, in the context of battery health prognosis, the approach enhances accuracy by 21.45% compared to standard bidirectional LSTM networks, demonstrating its significant potential for applications in battery management systems. However, BiLSTMs require a sufficient sequence length before outputting predictions, which can introduce computational limits.
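The temporal-attention idea of weighting hidden outputs across time steps can be reduced to a softmax-weighted pooling. The scoring rule below (a single learned vector dotted with each hidden state) is one common simple choice, assumed here for illustration rather than taken from [94].

```python
import numpy as np

def temporal_attention(H, w):
    """Toy temporal-attention pooling over recurrent hidden outputs.

    H : (T, d) matrix of hidden states over T time steps
    w : (d,) scoring vector (a learned parameter in a real model)
    Returns the attention-weighted context vector and the weights.
    """
    scores = H @ w                      # one relevance score per time step
    scores = scores - scores.max()      # shift for numerical stability
    alpha = np.exp(scores)
    alpha = alpha / alpha.sum()         # softmax over time steps
    context = alpha @ H                 # weighted sum of hidden states
    return context, alpha
```

Time steps with higher scores dominate the context vector, which is how the network "selectively focuses" on the most informative portions of the battery time series.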

4.3. Gated Recurrent Unit Neural Networks

Another effective RNN variant for managing long-term dependencies is the Gated Recurrent Unit (GRU) [95,96,97]. The GRU is a simplified variant of the Long Short-Term Memory (LSTM) architecture: it merges memory retention and update decisions into a single gating mechanism, a design that mitigates the exploding and vanishing gradient problems of recurrent neural networks (RNNs) while reducing computational complexity [98,99]. An example of this particular architecture is shown in Figure 23.
Unlike the LSTM, which uses separate forget and input gates, the GRU combines these into a single update gate. This design reduces the number of parameters and simplifies training, while still enabling the model to capture long-term dependencies in sequential data [100,101]. The GRU consists of two main gates:
  • Reset Gate ($r_k$): Controls how much of the past information to forget;
  • Update Gate ($z_k$): Determines how much of the previous state to retain and how much to update with new information.
The core equations governing the GRU are as follows:
  • Update Gate:
    $z_k = \sigma(W_z x_k + U_z h_{k-1} + b_z)$
    where $x_k$ is the input vector at time step $k$, $h_{k-1}$ is the hidden state from the previous time step, $W_z$ and $U_z$ are weight matrices, $b_z$ is the bias term, and $\sigma$ is the sigmoid activation function;
  • Reset Gate:
    $r_k = \sigma(W_r x_k + U_r h_{k-1} + b_r)$
    Here, $W_r$, $U_r$, and $b_r$ are the corresponding weight matrices and bias for the reset gate;
  • Candidate Hidden State: The reset gate determines how much of the past hidden state contributes to the candidate hidden state.
    $\tilde{h}_k = \tanh(W_h x_k + U_h (r_k \odot h_{k-1}) + b_h)$
    where $\odot$ denotes element-wise multiplication, and $\tanh$ is the hyperbolic tangent activation function;
  • Current Hidden State: The update gate determines the final hidden state as a combination of the previous hidden state and the candidate hidden state.
    $h_k = z_k \odot h_{k-1} + (1 - z_k) \odot \tilde{h}_k$
This streamlined design allows GRUs to achieve performance similar to LSTMs with reduced computational complexity [102]. The GRU’s update gate decides which information from the previous and current steps should contribute to the output [103].
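The four gate equations above can be exercised directly. The following scalar (one-dimensional) sketch implements a single GRU time step exactly as written, using the convention h_k = z_k ⊙ h_{k−1} + (1 − z_k) ⊙ h̃_k from the text; the weight values in any usage are illustrative, not trained parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_k, h_prev, W, U, b):
    """One GRU time step for scalar input and hidden state (for clarity).

    W, U, b are dicts with keys 'z', 'r', 'h' holding the scalar
    weights/biases of the update gate, reset gate, and candidate state.
    """
    z = sigmoid(W['z'] * x_k + U['z'] * h_prev + b['z'])        # update gate
    r = sigmoid(W['r'] * x_k + U['r'] * h_prev + b['r'])        # reset gate
    h_cand = math.tanh(W['h'] * x_k + U['h'] * (r * h_prev) + b['h'])
    return z * h_prev + (1.0 - z) * h_cand                      # new hidden state
```

Note the limiting behavior: when z saturates at 1 the unit simply copies h_{k−1} (memory retention), and when z is near 0 the unit is overwritten by the candidate state; the reset gate r decides how much history enters that candidate.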
An interesting application of a GRU variant is presented in the article [104], where the authors propose an enhanced model for State of Charge (SOC) estimation using a Bidirectional Gated Recurrent Unit (Bi-GRU) network optimized with the Nesterov Accelerated Gradient (NAG) algorithm, as shown in Figure 24.
To validate their approach, the authors utilize a well-known lithium-ion battery dataset from the University of Maryland, ensuring a comprehensive evaluation of the model’s effectiveness. The model achieved strong accuracy in SOC estimation across various ambient temperatures, outperforming existing state-of-the-art data-driven methods and demonstrating robustness under diverse conditions. In [105], the authors combine Gated Recurrent Units (GRUs) with Recurrent Neural Networks (RNNs) to propose an advanced method for estimating the State of Charge (SOC) in lithium batteries, as shown in Figure 25. The paper presents an innovative approach that integrates a GRU–RNN model with an enhanced momentum gradient optimization algorithm. Traditional techniques—such as ampere-hour integration and open-circuit voltage measurements—often struggle with both accuracy and real-time performance. To address these challenges, the authors develop a model that takes measured voltage and current as inputs to a GRU–RNN, which then outputs an estimated SOC. The GRU, an evolution of conventional recurrent neural networks, effectively manages long-term dependencies while mitigating issues like exploding or vanishing gradients. Additionally, the paper introduces an improved training strategy based on a momentum gradient algorithm, a refined version of the classic gradient descent method. The authors first review the conventional gradient descent approach and then present the momentum-enhanced version. This algorithm considers both the current gradient and historical gradient information: when the weight update direction at the current iteration aligns with that of the previous one, the update is reinforced; otherwise, it is dampened. This approach not only reduces oscillations during weight updates but also accelerates convergence during training. To further enhance generalization and prevent overfitting, random noise is incorporated into the training data.
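The momentum update just described can be sketched in a few lines. This is a generic momentum gradient descent illustration; the learning rate and momentum coefficient below are arbitrary choices for the demo, not the tuned values reported in the paper:

```python
def momentum_update(grad, velocity, lr=0.1, beta=0.9):
    """One momentum step: the new velocity blends the accumulated history
    (beta * velocity) with the current gradient (lr * grad). Updates that
    keep pointing the same way are reinforced; reversals are dampened."""
    return beta * velocity + lr * grad

# Illustrative usage: minimize f(w) = w^2, whose gradient is 2w.
w, v = 5.0, 0.0
for _ in range(300):
    v = momentum_update(2.0 * w, v)
    w -= v  # the weight update uses the velocity, not the raw gradient
```

After the loop, `w` has converged close to the minimum at 0; with a plain gradient step the trajectory is slower, while an excessively large `beta` produces the instability the authors observe for high momentum coefficients.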
Experimental validation is carried out on a lithium battery test platform that includes a battery tester (NEWARE CT-4008-5V12A-TB), a battery holder, BTcap 21700 (2200 mAh) lithium batteries, and a host computer. The study systematically evaluates various parameters—such as the momentum coefficient, noise variance, number of training epochs, and number of hidden neurons—to optimize performance. The results reveal that a moderate momentum coefficient significantly improves convergence speed and minimizes oscillations, while excessively high values can lead to instability. Moreover, an optimal noise variance of 0.03, combined with 30 hidden neurons, yields the best prediction accuracy, as indicated by improvements in RMSE, MAE, and R² metrics. This study demonstrates that the momentum-optimized GRU–RNN provides a reliable and efficient method for SOC estimation in battery management systems. The integration of advanced machine learning techniques with tailored optimization strategies represents a significant contribution to the field and holds promise for application to other dynamic systems, paving the way for more accurate and efficient battery performance monitoring.
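The RMSE, MAE, and R² metrics used throughout these comparisons are straightforward to compute; a minimal reference implementation (function name is ours, not from any cited work):

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for paired observation/prediction lists.
    Assumes y_true is not constant (so the R^2 denominator is nonzero)."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)       # root mean squared error
    mae = sum(abs(e) for e in errs) / n                  # mean absolute error
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                    # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)      # total sum of squares
    return rmse, mae, 1.0 - ss_res / ss_tot
```

RMSE penalizes occasional large SOC errors more heavily than MAE, while R² measures how much of the SOC variation the model explains, which is why studies typically report all three together.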
The authors of a recent study [106] present a robust and innovative approach for estimating the State of Charge (SOC) in lithium-ion batteries, known as GRU–ASG. This method integrates a Gated Recurrent Unit (GRU) neural network with an adaptive Savitzky–Golay (ASG) filter, enhancing both the accuracy and stability of SOC estimation. The core principles of this approach are illustrated in Figure 26. Accurate SOC estimation is fundamental for ensuring the safety, efficiency, and longevity of energy storage systems. Traditional methods, such as ampere-hour integration and open-circuit voltage measurement, suffer from cumulative errors and rely on precise initial SOC values, making them impractical for many real-world applications. Model-based techniques, like Kalman Filtering, require detailed system parameters, which are often unavailable or difficult to determine. More recently, deep learning approaches, particularly Recurrent Neural Networks (RNNs) such as GRU and LSTM, have demonstrated superior performance in SOC estimation. However, these models struggle with fluctuations in predictions, particularly under dynamic operating conditions. To overcome these challenges, the GRU–ASG method introduces a two-fold solution. First, the “many-to-one” structured GRU network captures long-term dependencies by incorporating multiple past measurements into each SOC estimate, thereby enhancing prediction accuracy. Second, the adaptive Savitzky–Golay filter (ASG) smooths the GRU-generated SOC predictions. Unlike Kalman-based filters, which require a reference SOC, ASG autonomously optimizes its parameters, eliminating the need for predefined initial SOC values. The method operates in two main stages. The GRU network first generates an initial SOC estimate, leveraging historical voltage, current, and temperature data. 
The ASG filtering algorithm then refines this estimate by dynamically adjusting the smoothing window length, optimizing the trade-off between reducing fluctuations and preserving accuracy. This adaptive approach eliminates the need for manual parameter tuning, making it particularly well-suited for real-time applications in battery management systems. To validate the effectiveness of the many-to-one GRU architecture, the authors compare its performance against standard RNN and LSTM models. The three models have identical architectures, except for the type of neural network layers. Experimental results indicate that while all three models can capture the general decreasing trend of SOC, their accuracy varies significantly. The GRU model achieves the lowest maximum estimation error at approximately 7%, followed by LSTM at 8% and RNN at 12%. These findings underscore the greater reliability of the GRU for SOC estimation. In terms of error metrics, the GRU achieves a mean squared error (MSE) of 0.17%, which is 59% lower than RNN and 39% lower than LSTM. Similarly, its mean absolute error (MAE) is 3.49%, 33% lower than RNN and 20% lower than LSTM. These results highlight the clear advantage of the many-to-one GRU structure in improving SOC estimation accuracy. To further enhance its robustness, the ASG filtering algorithm is applied to refine the GRU-generated SOC estimates. The GRU–ASG model is trained using four out of six operating-condition datasets collected from an energy storage plant, with the remaining two datasets used for validation. Experimental results confirm that GRU–ASG consistently outperforms other approaches, including GRU, GRU-MD (moving median), and GRU-GA (Gaussian filter), achieving performance comparable to GRU–SG (Savitzky–Golay filtering with an optimally tuned window length). Notably, across all test datasets, MSE remains below 0.15% and MAE does not exceed 3%.
Additionally, the GRU–ASG model demonstrates strong generalization capabilities, enabling accurate SOC estimation even under complex and unpredictable operating conditions. When compared to alternative filtering techniques, GRU–ASG shows clear advantages. While all filtering methods enhance the stability of GRU predictions, ASG outperforms both moving median (MD) and Gaussian (GA) filters by dynamically selecting the optimal smoothing window length. Unlike Kalman-based methods, which require an accurate initial SOC, GRU–ASG operates independently, making it highly practical for real-world applications in energy storage. As energy storage systems continue to expand, the increasing availability of real-world battery data will further improve the training and performance of GRU–ASG, strengthening its applicability for large-scale SOC estimation and battery safety management. This scalability highlights its potential for widespread adoption in modern battery management systems. Future research will focus on incorporating additional environmental factors, such as temperature fluctuations and battery capacity degradation, to further enhance robustness under real-world operating conditions. Moreover, transfer learning techniques will be explored to extend the adaptability of GRU–ASG across different battery chemistries and usage scenarios, ensuring even greater versatility in practical applications. The GRU–ASG model represents a significant advancement in SOC estimation, surpassing both traditional and deep learning-based approaches. The many-to-one GRU structure effectively integrates historical data, while the ASG filter enhances stability and adaptability. With its strong generalization capabilities and scalability, GRU–ASG stands as a highly promising solution for next-generation energy storage systems, where precise SOC estimation is crucial for ensuring safety, efficiency, and longevity.
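The smoothing step at the heart of GRU–ASG can be illustrated with a fixed-window Savitzky–Golay filter. The adaptive variant in the paper additionally tunes the window length; the sketch below uses the classical 5-point quadratic-fit coefficients (−3, 12, 17, 12, −3)/35 and, for simplicity, leaves the endpoints unsmoothed:

```python
def savgol5(y):
    """Savitzky-Golay smoothing with a fixed 5-point window and a
    quadratic least-squares fit. Each interior sample is replaced by a
    fixed linear combination of its 5-point neighborhood; polynomials
    up to the fit degree pass through unchanged, while high-frequency
    noise is attenuated."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)  # endpoints are kept as-is in this simplified sketch
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i - 2 + j] for j, cj in enumerate(c)) / 35.0
    return out
```

Because the quadratic fit reproduces smooth trends exactly, this filter suppresses fluctuations in the GRU's SOC predictions without flattening the underlying discharge curve, which is the trade-off the ASG step optimizes when it adapts the window length.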

5. Discussion

The growing demand for machine learning-based methods for estimating the State of Charge (SOC) of EV/HV battery systems arises from the limitations of traditional approaches, which struggle with the uncertainty and complexity inherent in battery models. Unlike conventional methods, machine learning techniques exploit large datasets and advanced artificial intelligence algorithms to bypass the need for explicit battery modeling. Instead, they train specialized models to uncover nonlinear relationships between input parameters, such as voltage, current, and temperature, and the SOC. These techniques are recognized for their high accuracy, adaptability, and strong generalization capabilities. This review consolidates recent advancements in machine learning-based SOC estimation, presenting challenges, opportunities, and areas for further research. Additionally, the methods analyzed are summarized in Table 1, Table 2, Table 3 and Table 4, which outline their respective advantages and disadvantages.

5.1. Challenges in Machine Learning-Based SOC Estimation

From the analysis of the methods reviewed in this article, it is possible to identify problems that these methods have in common; these represent the challenges to be overcome in future work to make ANN-based algorithms even more performant. They can be summarized in the following key points:
  • Inconsistencies in Model Evaluation: Comparing different models remains challenging due to variability in datasets, hyperparameter settings, optimization algorithms, and computational resources used across studies [107]. These inconsistencies complicate cross-study benchmarking and make it difficult to assess the relative effectiveness of different approaches. Furthermore, the use of complex composite algorithms, which often combine serial or parallel structures, increases model complexity [108]. As the number of modules grows, the model’s overall controllability diminishes, making it more vulnerable to disturbances and raising concerns about its robustness in practical applications [109];
  • Dataset Limitations: Large variations in datasets are caused by differences in experimental conditions and the mismatch between training data and real-world electric vehicle operation [110]. This discrepancy impacts even the most sophisticated models, reducing their accuracy and applicability in real-world scenarios [111]. The lack of standardized, comprehensive datasets exacerbates this issue;
  • Computational Complexity: Many advanced SOC estimation algorithms require significant computational power due to their intricate structures and optimization techniques. This increases the demands on hardware resources, potentially hindering the real-time performance of SOC estimation systems [112]. Balancing computational efficiency with accuracy is critical for enabling online SOC estimation;
  • Lack of Open-Source Tools: Few researchers publish their code, and there is often little transparency regarding preprocessing techniques, input parameter choices, and hyperparameter tuning. These gaps hinder reproducibility and prevent effective collaboration within the research community. Without standardized benchmarks, it is difficult to attribute performance improvements to either the model or preprocessing techniques [113];
  • Real-Time Implementation: Implementing advanced machine learning algorithms in real-time systems, especially in resource-constrained vehicles, poses significant technical hurdles [114,115];
  • Generalization to Diverse Conditions: SOC estimation models must be adapted to perform reliably across a wide range of driving conditions and environmental factors [116].

5.2. Areas for Improvement and Key Observations

The Artificial Neural Network (ANN) methods for State of Charge (SOC) estimation reviewed here have proven effective for their intended tasks. However, our analysis shows that several improvements can boost their performance, reliability, and practical applicability. Below, we present key areas for improvement along with emerging challenges that future research must tackle:
  • Standardization of Datasets and Data Quality: Machine learning models often depend on specific datasets that limit generalization. We must develop high-quality, standardized datasets with precise measurements and diverse data types. Researchers should also improve data quality by applying advanced data preprocessing techniques [117], sensor fusion methods, and machine learning-based denoising algorithms to ensure accurate and reliable input data [118];
  • Open-Source Benchmarks: Creating open-source initiatives and establishing standardized benchmarks for data preprocessing, model architectures, and hyperparameter tuning may lead to significant improvements [119]. Adopting uniform techniques to clean, normalize, and prepare data establishes a consistent foundation for experiments. Designing model architectures with comparable features and systematically tuning hyperparameters—by adjusting factors like learning rates and network layers—minimizes biases from arbitrary decisions. This approach promises enhanced reproducibility, fair comparisons among methods, and increased collaboration, ultimately accelerating progress in ANN-based SOC estimation for electric vehicle batteries [120];
  • Balanced Algorithm Complexity: In the pursuit of increased accuracy, SOC estimation models must still deliver computational efficiency and respect real-world constraints [121]. Evaluating these models from diverse perspectives—including robustness, adaptability, and efficiency—ensures practicality for deployment;
  • Generalization and Transfer Learning: One of the challenges in achieving accurate State of Charge estimation is that vehicles are subjected to a wide range of driving conditions, many of which are not fully represented during the training phase. Ensuring that these models generalize well across such diverse conditions remains a significant concern [122]. Transfer learning is emerging as a promising approach to adapt these models to varying scenarios, with researchers actively exploring how to leverage knowledge from one domain to enhance performance in another;
  • AI Hardware Acceleration: The advent of specialized hardware for AI and machine learning is expected to accelerate the deployment of complex models in real-time applications [123]. In particular, hardware accelerators such as GPUs and TPUs drastically reduce training and inference times, paving the way for the development and integration of increasingly sophisticated state-of-charge estimation models in electric vehicles;
  • Big Data and Cloud Computing: Integrating big data analytics with cloud computing platforms can optimize SOC management. Cloud-based solutions provide scalability and allow remote monitoring and optimization by analyzing data from large fleets, leading to more robust and adaptive management strategies [124];
  • Advanced Machine Learning Techniques: Advanced models—including Convolutional Neural Networks (CNNs) and Transformer-based architectures—promise significant improvements [125]. These techniques excel at feature extraction and capture complex, non-linear relationships in battery data. For example, models using attention mechanisms (such as AMBiLSTM) and multi-head self-attention in Transformers effectively identify intricate patterns and dynamic operating conditions [126];
  • Hybrid Models and Optimization Strategies: Combining multiple architectures (e.g., GRU–ASG, GLA–CNN–BiLSTM) in hybrid models can improve accuracy and robustness [127]. Researchers must optimize these models to manage increased computational complexity. Likewise, advanced optimization strategies (as seen in RS-LSTM) can boost performance while requiring more computational resources [128].
Addressing these improvements and challenges will enhance the performance, reliability, and applicability of ANN-based SOC estimation methods, driving the development of more efficient and robust battery management systems.

6. Conclusions

This work presents a comprehensive and detailed review of the State of Charge (SOC) estimation methods for battery systems based on machine learning algorithms. By analyzing and comparing various methods, this paper provides a clear understanding of the advantages and limitations of different approaches, offering valuable insights into the evolution and current trends in SOC estimation. A structured methodology for evaluating and benchmarking different models is introduced, enabling a more consistent and fair comparison across diverse studies. This framework addresses key challenges in the field, including dataset variability, computational complexity, and the lack of standardized benchmarks, while offering guidance on how these obstacles can be overcome. Furthermore, the study emphasizes areas for improvement, such as enhancing generalization capabilities, fostering open-source collaboration, and achieving a balance between accuracy and computational efficiency. The potential of machine learning-based SOC estimation lies in its ability to handle the nonlinear dynamics of batteries, its adaptability to various operating conditions, and its robust generalization capabilities.
By addressing challenges such as dataset standardization and promoting the adoption of open-source benchmarks, this work aims to accelerate innovation in SOC estimation methods. Additionally, it advocates for the development of scalable, efficient, and robust machine learning models that align with real-world constraints, fostering advances in battery management systems. Ultimately, this study aims to contribute to the enhancement of EV/HV battery system performance and safety, driving forward the intelligent evolution of SOC estimation techniques. These advancements will play a pivotal role in supporting the sustainable development of electric vehicles and new energy technologies, paving the way for a cleaner, smarter energy future.

Author Contributions

All authors have contributed equally to the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially supported by CN1 Spoke 6; by HARDNESS PE SERICS Spoke 7; and by MIUR project FoReLab.

Data Availability Statement

No new data have been generated.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hindawi, M.; Basheer, Y.; Qaisar, S.M.; Waqar, A. An Overview of Artificial Intelligence Driven Li-Ion Battery State Estimation. In IoT Enabled-DC Microgrids; CRC Press: Boca Raton, FL, USA, 2024; pp. 121–157. [Google Scholar]
  2. Dini, P.; Paolini, D.; Saponara, S.; Minossi, M. Leaveraging Digital Twin & Artificial Intelligence in Consumption Forecasting System for Sustainable Luxury Yacht. IEEE Access 2024, 12, 160700–160714. [Google Scholar]
  3. Lu, C.; Li, S.; Lu, Z. Building energy prediction using artificial neural networks: A literature survey. Energy Build. 2022, 262, 111718. [Google Scholar] [CrossRef]
  4. Moayedi, H.; Mosavi, A. Double-target based neural networks in predicting energy consumption in residential buildings. Energies 2021, 14, 1331. [Google Scholar] [CrossRef]
  5. Dzedzickis, A.; Subačiūtė-Žemaitienė, J.; Šutinys, E.; Samukaitė-Bubnienė, U.; Bučinskas, V. Advanced applications of industrial robotics: New trends and possibilities. Appl. Sci. 2021, 12, 135. [Google Scholar] [CrossRef]
  6. Tsapin, D.; Pitelinskiy, K.; Suvorov, S.; Osipov, A.; Pleshakova, E.; Gataullin, S. Machine learning methods for the industrial robotic systems security. J. Comput. Virol. Hacking Tech. 2024, 20, 397–414. [Google Scholar] [CrossRef]
  7. Manoharan, A.; Begam, K.; Aparow, V.R.; Sooriamoorthy, D. Artificial Neural Networks, Gradient Boosting and Support Vector Machines for electric vehicle battery state estimation: A review. J. Energy Storage 2022, 55, 105384. [Google Scholar] [CrossRef]
  8. Djaballah, Y.; Negadi, K.; Boudiaf, M. Enhanced lithium–ion battery state of charge estimation in electric vehicles using extended Kalman filter and deep neural network. Int. J. Dyn. Control 2024, 12, 2864–2871. [Google Scholar] [CrossRef]
  9. Allal, Z.; Noura, H.N.; Salman, O.; Chahine, K. Machine learning solutions for renewable energy systems: Applications, challenges, limitations, and future directions. J. Environ. Manag. 2024, 354, 120392. [Google Scholar] [CrossRef]
  10. Rinchi, O.; Alsharoa, A.; Shatnawi, I.; Arora, A. The Role of Intelligent Transportation Systems and Artificial Intelligence in Energy Efficiency and Emission Reduction. arXiv 2024, arXiv:2401.14560. [Google Scholar]
  11. Chen, W.; Men, Y.; Fuster, N.; Osorio, C.; Juan, A.A. Artificial intelligence in logistics optimization with sustainable criteria: A review. Sustainability 2024, 16, 9145. [Google Scholar] [CrossRef]
  12. Qadir, S.A.; Ahmad, F.; Al-Wahedi, A.M.A.; Iqbal, A.; Ali, A. Navigating the complex realities of electric vehicle adoption: A comprehensive study of government strategies, policies, and incentives. Energy Strategy Rev. 2024, 53, 101379. [Google Scholar] [CrossRef]
  13. Möring-Martínez, G.; Senzeybek, M.; Jochem, P. Clustering the European Union electric vehicle markets: A scenario analysis until 2035. Transp. Res. Part D Transp. Environ. 2024, 135, 104372. [Google Scholar] [CrossRef]
  14. Raganati, F.; Ammendola, P. CO2 post-combustion capture: A critical review of current technologies and future directions. Energy Fuels 2024, 38, 13858–13905. [Google Scholar] [CrossRef]
  15. al Irsyad, M.I.; Firmansyah, A.I.; Harisetyawan, V.T.F.; Supriatna, N.K.; Gunawan, Y.; Jupesta, J.; Yaumidin, U.K.; Purwanto, J.; Inayah, I. Comparative total cost assessments of electric and conventional vehicles in ASEAN: Commercial vehicles and motorcycle conversion. Energy Sustain. Dev. 2025, 101599. [Google Scholar] [CrossRef]
  16. Hussein, H.M.; Aghmadi, A.; Abdelrahman, M.S.; Rafin, S.S.H.; Mohammed, O. A review of battery state of charge estimation and management systems: Models and future prospective. Wiley Interdiscip. Rev. Energy Environ. 2024, 13, e507. [Google Scholar]
  17. Nekahi, A.; Madikere Raghunatha Reddy, A.K.; Li, X.; Deng, S.; Zaghib, K. Rechargeable Batteries for the Electrification of Society: Past, Present, and Future. Electrochem. Energy Rev. 2025, 8, 1–30. [Google Scholar] [CrossRef]
  18. Dini, P.; Saponara, S.; Colicelli, A. Overview on battery charging systems for electric vehicles. Electronics 2023, 12, 4295. [Google Scholar] [CrossRef]
  19. Ria, A.; Dini, P. A Compact Overview on Li-Ion Batteries Characteristics and Battery Management Systems Integration for Automotive Applications. Energies 2024, 17, 5992. [Google Scholar] [CrossRef]
  20. Demirci, O.; Taskin, S.; Schaltz, E.; Demirci, B.A. Review of battery state estimation methods for electric vehicles-Part I: SOC estimation. J. Energy Storage 2024, 87, 111435. [Google Scholar] [CrossRef]
  21. Pillai, P.; Sundaresan, S.; Kumar, P.; Pattipati, K.R.; Balasingam, B. Open-circuit voltage models for battery management systems: A review. Energies 2022, 15, 6803. [Google Scholar] [CrossRef]
  22. Movassagh, K.; Raihan, A.; Balasingam, B.; Pattipati, K. A critical look at coulomb counting approach for state of charge estimation in batteries. Energies 2021, 14, 4074. [Google Scholar] [CrossRef]
  23. Lipu, M.H.; Hannan, M.; Hussain, A.; Ayob, A.; Saad, M.H.; Karim, T.F.; How, D.N. Data-driven state of charge estimation of lithium-ion batteries: Algorithms, implementation factors, limitations and future trends. J. Clean. Prod. 2020, 277, 124110. [Google Scholar] [CrossRef]
  24. Tao, Z.; Zhao, Z.; Wang, C.; Huang, L.; Jie, H.; Li, H.; Hao, Q.; Zhou, Y.; See, K.Y. State of charge estimation of lithium Batteries: Review for equivalent circuit model methods. Measurement 2024, 236, 115148. [Google Scholar] [CrossRef]
  25. Shrivastava, P.; Soon, T.K.; Idris, M.Y.I.B.; Mekhilef, S. Overview of model-based online state-of-charge estimation using Kalman filter family for lithium-ion batteries. Renew. Sustain. Energy Rev. 2019, 113, 109233. [Google Scholar] [CrossRef]
  26. Farea, A.; Yli-Harja, O.; Emmert-Streib, F. Understanding physics-informed neural networks: Techniques, applications, trends, and challenges. AI 2024, 5, 1534–1557. [Google Scholar] [CrossRef]
  27. Reza, M.; Mannan, M.; Mansor, M.; Ker, P.J.; Mahlia, T.I.; Hannan, M. Recent advancement of remaining useful life prediction of lithium-ion battery in electric vehicle applications: A review of modelling mechanisms, network configurations, factors, and outstanding issues. Energy Rep. 2024, 11, 4824–4848. [Google Scholar] [CrossRef]
  28. Rahman, T.; Alharbi, T. Exploring lithium-Ion battery degradation: A concise review of critical factors, impacts, data-driven degradation estimation techniques, and sustainable directions for energy storage systems. Batteries 2024, 10, 220. [Google Scholar] [CrossRef]
  29. Marzbani, F.; Osman, A.H.; Hassan, M.S. Electric vehicle energy demand prediction techniques: An in-depth and critical systematic review. IEEE Access 2023, 11, 96242–96255. [Google Scholar] [CrossRef]
  30. Memon, S.A.; Hamza, A.; Zaidi, S.S.H.; Khan, B.M. Estimating state of charge and state of health of electrified vehicle battery by data driven approach: Machine learning. In Proceedings of the 2022 International Conference on Emerging Technologies in Electronics, Computing and Communication (ICETECC), Jamshoro, Pakistan, 7–9 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–9. [Google Scholar]
  31. Dini, P.; Colicelli, A.; Saponara, S. Review on modeling and soc/soh estimation of batteries for automotive applications. Batteries 2024, 10, 34. [Google Scholar] [CrossRef]
  32. Ferriol-Galmés, M.; Suárez-Varela, J.; Paillissé, J.; Shi, X.; Xiao, S.; Cheng, X.; Barlet-Ros, P.; Cabellos-Aparicio, A. Building a digital twin for network optimization using graph neural networks. Comput. Netw. 2022, 217, 109329. [Google Scholar] [CrossRef]
  33. Di Fonso, R.; Teodorescu, R.; Cecati, C.; Bharadwaj, P. A Battery Digital Twin From Laboratory Data Using Wavelet Analysis and Neural Networks. IEEE Trans. Ind. Inform. 2024, 20, 6889–6899. [Google Scholar] [CrossRef]
  34. Li, W.; Li, Y.; Garg, A.; Gao, L. Enhancing real-time degradation prediction of lithium-ion battery: A digital twin framework with CNN-LSTM-attention model. Energy 2024, 286, 129681. [Google Scholar] [CrossRef]
  35. Wang, S.; Zhang, J.; Wang, P.; Law, J.; Calinescu, R.; Mihaylova, L. A deep learning-enhanced Digital Twin framework for improving safety and reliability in human–robot collaborative manufacturing. Robot. Comput.-Integr. Manuf. 2024, 85, 102608. [Google Scholar] [CrossRef]
  36. Iliuţă, M.E.; Moisescu, M.A.; Pop, E.; Ionita, A.D.; Caramihai, S.I.; Mitulescu, T.C. Digital Twin—A Review of the Evolution from Concept to Technology and Its Analytical Perspectives on Applications in Various Fields. Appl. Sci. 2024, 14, 5454. [Google Scholar] [CrossRef]
  37. Chen, S.; Billings, S.A. Neural networks for nonlinear dynamic system modelling and identification. Int. J. Control 1992, 56, 319–346. [Google Scholar] [CrossRef]
  38. Arulampalam, G.; Bouzerdoum, A. A generalized feedforward neural network architecture for classification and regression. Neural Netw. 2003, 16, 561–568. [Google Scholar] [CrossRef] [PubMed]
  39. Wang, Y.; Tian, J.; Sun, Z.; Wang, L.; Xu, R.; Li, M.; Chen, Z. A comprehensive review of battery modeling and state estimation approaches for advanced battery management systems. Renew. Sustain. Energy Rev. 2020, 131, 110015. [Google Scholar] [CrossRef]
  40. Rane, C.; Tyagi, K.; Kline, A.; Chugh, T.; Manry, M. Optimizing performance of feedforward and convolutional neural networks through dynamic activation functions. Evol. Intell. 2024, 17, 4083–4093. [Google Scholar] [CrossRef]
  41. Szandała, T. Review and comparison of commonly used activation functions for deep neural networks. In Bio-Inspired Neurocomputing; Springer: Singapore, 2021; pp. 203–224. [Google Scholar]
  42. Hammad, M. Deep Learning Activation Functions: Fixed-Shape, Parametric, Adaptive, Stochastic, Miscellaneous, Non-Standard, Ensemble. arXiv 2024, arXiv:2407.11090. [Google Scholar]
  43. Narkhede, M.V.; Bartakke, P.P.; Sutaone, M.S. A review on weight initialization strategies for neural networks. Artif. Intell. Rev. 2022, 55, 291–322. [Google Scholar] [CrossRef]
  44. Dang, X.; Yan, L.; Jiang, H.; Wu, X.; Sun, H. Open-circuit voltage-based state of charge estimation of lithium-ion power battery by combining controlled auto-regressive and moving average modeling with feedforward-feedback compensation method. Int. J. Electr. Power Energy Syst. 2017, 90, 27–36. [Google Scholar] [CrossRef]
  45. Feng, S.; Li, X.; Zhang, S.; Jian, Z.; Duan, H.; Wang, Z. A review: State estimation based on hybrid models of Kalman filter and neural network. Syst. Sci. Control Eng. 2023, 11, 2173682. [Google Scholar] [CrossRef]
  46. He, W.; Williard, N.; Chen, C.; Pecht, M. State of charge estimation for Li-ion batteries using neural network modeling and unscented Kalman filter-based error cancellation. Int. J. Electr. Power Energy Syst. 2014, 62, 783–791. [Google Scholar] [CrossRef]
  47. Murawwat, S.; Gulzar, M.M.; Alzahrani, A.; Hafeez, G.; Khan, F.A.; Abed, A.M. State of charge estimation and error analysis of lithium-ion batteries for electric vehicles using Kalman filter and deep neural network. J. Energy Storage 2023, 72, 108039. [Google Scholar]
  48. Chen, C.; Xiong, R.; Yang, R.; Shen, W.; Sun, F. State-of-charge estimation of lithium-ion battery using an improved neural network model and extended Kalman filter. J. Clean. Prod. 2019, 234, 1153–1164. [Google Scholar] [CrossRef]
  49. Srihari, S.; Vasanthi, V. SOC Estimation Using Extended Kalman Filter in Electric Vehicle Batteries. In Proceedings of the 2024 International Conference on Advancements in Power, Communication and Intelligent Systems (APCI), Kannur, India, 21–22 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
  50. Khan, U.; Kirmani, S.; Rafat, Y.; Rehman, M.U.; Alam, M.S. Improved deep learning based state of charge estimation of lithium ion battery for electrified transportation. J. Energy Storage 2024, 91, 111877. [Google Scholar] [CrossRef]
  51. Vidal, C.; Haußmann, M.; Barroso, D.; Shamsabadi, P.M.; Biswas, A.; Chemali, E.; Ahmed, R.; Emadi, A. Hybrid energy storage system state-of-charge estimation using artificial neural network for micro-hybrid applications. In Proceedings of the 2018 IEEE Transportation Electrification Conference and Expo (ITEC), Long Beach, CA, USA, 13–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1075–1081. [Google Scholar]
  52. Cui, X.; Liu, S.; Ruan, G.; Wang, Y. Data-driven aggregation of thermal dynamics within building virtual power plants. Appl. Energy 2024, 353, 122126. [Google Scholar] [CrossRef]
  53. Barik, S.; Saravanan, B. Recent developments and challenges in state-of-charge estimation techniques for electric vehicle batteries: A review. J. Energy Storage 2024, 100, 113623. [Google Scholar] [CrossRef]
  54. Vidal, C.; Gross, O.; Gu, R.; Kollmeyer, P.; Emadi, A. xEV Li-ion battery low-temperature effects. IEEE Trans. Veh. Technol. 2019, 68, 4560–4572. [Google Scholar] [CrossRef]
  55. Bian, C.; Duan, Z.; Hao, Y.; Yang, S.; Feng, J. Exploring large language model for generic and robust state-of-charge estimation of Li-ion batteries: A mixed prompt learning method. Energy 2024, 12, 131856. [Google Scholar] [CrossRef]
  56. Rastegarpanah, A.; Asif, M.E.; Stolkin, R. Hybrid Neural Networks for Enhanced Predictions of Remaining Useful Life in Lithium-Ion Batteries. Batteries 2024, 10, 106. [Google Scholar] [CrossRef]
  57. Badfar, M.; Yildirim, M.; Chinnam, R. State-of-charge estimation across battery chemistries: A novel regression-based method and insights from unsupervised domain adaptation. J. Power Sources 2025, 628, 235760. [Google Scholar] [CrossRef]
  58. Han, J.; Moraga, C.; Sinne, S. Optimization of feedforward neural networks. Eng. Appl. Artif. Intell. 1996, 9, 109–119. [Google Scholar] [CrossRef]
  59. Ojha, V.K.; Abraham, A.; Snášel, V. Metaheuristic design of feedforward neural networks: A review of two decades of research. Eng. Appl. Artif. Intell. 2017, 60, 97–116. [Google Scholar] [CrossRef]
  60. Abdolrasol, M.G.; Hussain, S.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial neural networks based optimization techniques: A review. Electronics 2021, 10, 2689. [Google Scholar] [CrossRef]
  61. Tong, S.; Lacap, J.H.; Park, J.W. Battery state of charge estimation using a load-classifying neural network. J. Energy Storage 2016, 7, 236–243. [Google Scholar] [CrossRef]
  62. Hannan, M.A.; Lipu, M.S.H.; Hussain, A.; Saad, M.H.; Ayob, A. Neural network approach for estimating state of charge of lithium-ion battery using backtracking search algorithm. IEEE Access 2018, 6, 10069–10079. [Google Scholar] [CrossRef]
  63. Vidal, C.; Kollmeyer, P.; Naguib, M.; Malysz, P.; Gross, O.; Emadi, A. Robust xev battery state-of-charge estimator design using a feedforward deep neural network. SAE Int. J. Adv. Curr. Pract. Mobil. 2020, 2, 2872–2880. [Google Scholar] [CrossRef]
  64. Naguib, M.; Kollmeyer, P.; Vidal, C.; Emadi, A. Accurate surface temperature estimation of lithium-ion batteries using feedforward and recurrent artificial neural networks. In Proceedings of the 2021 IEEE Transportation Electrification Conference & Expo (ITEC), Chicago, IL, USA, 21–25 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 52–57. [Google Scholar]
  65. Baraean, A.; Kassas, M.; Abido, M. Physics-Informed NN for Improving Electric Vehicles Lithium-Ion Battery State-of-Charge Estimation Robustness. In Proceedings of the 2024 IEEE 12th International Conference on Smart Energy Grid Engineering (SEGE), Oshawa, ON, Canada, 18–20 August 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 245–250. [Google Scholar]
  66. Guo, Z.; Li, Y.; Yan, Z.; Chow, M.Y. A Neural-Network-Embedded Equivalent Circuit Model for Lithium-ion Battery State Estimation. arXiv 2024, arXiv:2407.20262. [Google Scholar]
  67. Michailidis, P.; Michailidis, I.; Gkelios, S.; Kosmatopoulos, E. Artificial Neural Network Applications for Energy Management in Buildings: Current Trends and Future Directions. Energies 2024, 17, 570. [Google Scholar] [CrossRef]
  68. Hu, Y.; Huber, A.; Anumula, J.; Liu, S.C. Overcoming the vanishing gradient problem in plain recurrent networks. arXiv 2018, arXiv:1801.06105. [Google Scholar]
  69. Mienye, I.D.; Swart, T.G.; Obaido, G. Recurrent neural networks: A comprehensive review of architectures, variants, and applications. Information 2024, 15, 517. [Google Scholar] [CrossRef]
  70. Mousaei, A.; Naderi, Y.; Bayram, I.S. Advancing state of charge management in electric vehicles with machine learning: A technological review. IEEE Access 2024, 12, 43255–43283. [Google Scholar] [CrossRef]
  71. Graves, A.; Mohamed, A.r.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6645–6649. [Google Scholar]
  72. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. arXiv 2013, arXiv:1211.5063. [Google Scholar]
  73. Graves, A. Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 37–45. [Google Scholar]
  74. Yao, Q.; Kollmeyer, P.J.; Lu, D.D.C.; Emadi, A. A Comparison Study of Unidirectional and Bidirectional Recurrent Neural Network for Battery State of Charge Estimation. In Proceedings of the 2024 IEEE Transportation Electrification Conference and Expo (ITEC), Chicago, IL, USA, 19–21 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  75. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  76. Zhang, S.; Wu, Y.; Che, T.; Lin, Z.; Memisevic, R.; Salakhutdinov, R.R.; Bengio, Y. Architectural complexity measures of recurrent neural networks. arXiv 2016, arXiv:1602.08210. [Google Scholar] [CrossRef]
  77. El Fallah, S.; Kharbach, J.; Vanagas, J.; Vilkelytė, Ž.; Tolvaišienė, S.; Gudžius, S.; Kalvaitis, A.; Lehmam, O.; Masrour, R.; Hammouch, Z.; et al. Advanced state of charge estimation using deep neural network, gated recurrent unit, and long short-term memory models for lithium-Ion batteries under aging and temperature conditions. Appl. Sci. 2024, 14, 6648. [Google Scholar] [CrossRef]
  78. Duan, J.; Zhang, P.F.; Qiu, R.; Huang, Z. Long short-term enhanced memory for sequential recommendation. World Wide Web 2023, 26, 561–583. [Google Scholar] [CrossRef]
  79. Chemali, E.; Kollmeyer, P.J.; Preindl, M.; Ahmed, R.; Emadi, A. Long short-term memory networks for accurate state-of-charge estimation of Li-ion batteries. IEEE Trans. Ind. Electron. 2017, 65, 6730–6739. [Google Scholar] [CrossRef]
  80. Charkhgard, M.; Farrokhi, M. State-of-charge estimation for lithium-ion batteries using neural networks and EKF. IEEE Trans. Ind. Electron. 2010, 57, 4178–4187. [Google Scholar] [CrossRef]
  81. Kollmeyer, P. Panasonic 18650PF Li-ion Battery Data. Mendeley Data, 2018. Available online: https://data.mendeley.com/datasets/wykht8y7tg/1 (accessed on 1 January 2025).
  82. Song, X.; Yang, F.; Wang, D.; Tsui, K.L. Combined CNN-LSTM network for state-of-charge estimation of lithium-ion batteries. IEEE Access 2019, 7, 88894–88902. [Google Scholar] [CrossRef]
  83. Tian, Y.; Lai, R.; Li, X.; Xiang, L.; Tian, J. A combined method for state-of-charge estimation for lithium-ion batteries using a long short-term memory network and an adaptive cubature Kalman filter. Appl. Energy 2020, 265, 114789. [Google Scholar] [CrossRef]
  84. Wei, M.; Ye, M.; Li, J.B.; Wang, Q.; Xu, X. State of charge estimation of lithium-ion batteries using LSTM and NARX neural networks. IEEE Access 2020, 8, 189236–189245. [Google Scholar] [CrossRef]
  85. Chen, J.; Zhang, Y.; Wu, J.; Cheng, W.; Zhu, Q. SOC estimation for lithium-ion battery using the LSTM-RNN with extended input and constrained output. Energy 2023, 262, 125375. [Google Scholar] [CrossRef]
  86. Chai, X.; Li, S.; Liang, F. A novel battery SOC estimation method based on random search optimized LSTM neural network. Energy 2024, 306, 132583. [Google Scholar] [CrossRef]
  87. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. The performance of LSTM and BiLSTM in forecasting time series. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3285–3292. [Google Scholar]
  88. Liu, K.; Peng, Q.; Che, Y.; Zheng, Y.; Li, K.; Teodorescu, R.; Widanage, D.; Barai, A. Transfer learning for battery smarter state estimation and ageing prognostics: Recent progress, challenges, and prospects. Adv. Appl. Energy 2023, 9, 100117. [Google Scholar] [CrossRef]
  89. Alizadegan, H.; Rashidi Malki, B.; Radmehr, A.; Karimi, H.; Ilani, M.A. Comparative study of long short-term memory (LSTM), bidirectional LSTM, and traditional machine learning approaches for energy consumption prediction. Energy Explor. Exploit. 2024, 43, 01445987241269496. [Google Scholar] [CrossRef]
  90. Zhao, D.; Li, H.; Zhou, F.; Zhong, Y.; Zhang, G.; Liu, Z.; Hou, J. Research progress on data-driven methods for battery states estimation of electric buses. World Electr. Veh. J. 2023, 14, 145. [Google Scholar] [CrossRef]
  91. Begni, A.; Dini, P.; Saponara, S. Design and test of an lstm-based algorithm for li-ion batteries remaining useful life estimation. In Proceedings of the International Conference on Applications in Electronics Pervading Industry, Environment and Society, Genoa, Italy, 26–27 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 373–379. [Google Scholar]
  92. Khan, M.K.; Abou Houran, M.; Kauhaniemi, K.; Zafar, M.H.; Mansoor, M.; Rashid, S. Efficient state of charge estimation of lithium-ion batteries in electric vehicles using evolutionary intelligence-assisted GLA–CNN–Bi-LSTM deep learning model. Heliyon 2024, 10, e35183. [Google Scholar] [CrossRef]
  93. Sherkatghanad, Z.; Ghazanfari, A.; Makarenkov, V. A self-attention-based CNN-Bi-LSTM model for accurate state-of-charge estimation of lithium-ion batteries. J. Energy Storage 2024, 88, 111524. [Google Scholar] [CrossRef]
  94. Xu, P.; Wang, C.; Ye, J.; Ouyang, T. State-of-charge estimation and health prognosis for lithium-ion batteries based on temperature-compensated Bi-LSTM network and integrated attention mechanism. IEEE Trans. Ind. Electron. 2023, 71, 5586–5596. [Google Scholar] [CrossRef]
  95. Shiri, F.M.; Perumal, T.; Mustapha, N.; Mohamed, R. A comprehensive overview and comparative analysis on deep learning models: CNN, RNN, LSTM, GRU. arXiv 2023, arXiv:2305.17473. [Google Scholar]
  96. Su, Y.; Kuo, C.C.J. Recurrent neural networks and their memory behavior: A survey. APSIPA Trans. Signal Inf. Process. 2022, 11, e26. [Google Scholar] [CrossRef]
  97. Zhou, R.; Dai, X.; Lin, F.; Zhang, J.; Ma, H. TGT: Battery State of Charge Estimation with Robustness to Missing Data. In Proceedings of the 2024 CPSS & IEEE International Symposium on Energy Storage and Conversion (ISESC), Xi’an, China, 8–11 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 431–436. [Google Scholar]
  98. Kanai, S.; Fujiwara, Y.; Iwamura, S. Preventing gradient explosions in gated recurrent units. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  99. Salem, F.M.; Salem, F.M. Gated RNN: The Gated Recurrent Unit (GRU) RNN. In Recurrent Neural Networks: From Simple to Gated Architectures; Springer: Cham, Switzerland, 2022; pp. 85–100. [Google Scholar]
  100. Rodriguez, S. Gated Recurrent Units-Enhancements and Applications: Studying Enhancements to Gated Recurrent Unit (GRU) Architectures and Their Applications in Sequential Modeling Tasks. Adv. Deep Learn. Tech. 2023, 3, 16–30. [Google Scholar]
  101. Yang, F.; Li, W.; Li, C.; Miao, Q. State-of-charge estimation of lithium-ion batteries based on gated recurrent neural network. Energy 2019, 175, 66–75. [Google Scholar] [CrossRef]
  102. Cahuantzi, R.; Chen, X.; Güttel, S. A comparison of LSTM and GRU networks for learning symbolic sequences. In Proceedings of the Science and Information Conference, London, UK, 22–23 June 2023; Springer: Cham, Switzerland, 2023; pp. 771–785. [Google Scholar]
  103. Li, X.; Ma, X.; Xiao, F.; Wang, F.; Zhang, S. Application of gated recurrent unit (GRU) neural network for smart batch production prediction. Energies 2020, 13, 6121. [Google Scholar] [CrossRef]
  104. Zhang, Z.; Dong, Z.; Gao, M.; Han, Y.; Ji, X.; Wang, M.; He, Y. State-of-Charge Estimation of Lithium-ion Batteries Using the Nesterov Accelerated Gradient Algorithm Based Bi-GRU. In Proceedings of the 2020 8th International Conference on Power Electronics Systems and Applications (PESA), Hong Kong, China, 7–10 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  105. Jiao, M.; Wang, D.; Qiu, J. A GRU-RNN based momentum optimized algorithm for SOC estimation. J. Power Sources 2020, 459, 228051. [Google Scholar] [CrossRef]
  106. Lu, J.; He, Y.; Liang, H.; Li, M.; Shi, Z.; Zhou, K.; Li, Z.; Gong, X.; Yuan, G. State of charge estimation for energy storage lithium-ion batteries based on gated recurrent unit neural network and adaptive Savitzky-Golay filter. Ionics 2024, 30, 297–310. [Google Scholar] [CrossRef]
  107. Liao, L.; Li, H.; Shang, W.; Ma, L. An empirical study of the impact of hyperparameter tuning and model optimization on the performance properties of deep neural networks. ACM Trans. Softw. Eng. Methodol. (TOSEM) 2022, 31, 1–40. [Google Scholar] [CrossRef]
  108. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2022, 55, 59–110. [Google Scholar] [CrossRef]
  109. Raiaan, M.A.K.; Mukta, M.S.H.; Fatema, K.; Fahad, N.M.; Sakib, S.; Mim, M.M.J.; Ahmad, J.; Ali, M.E.; Azam, S. A review on large language models: Architectures, applications, taxonomies, open issues and challenges. IEEE Access 2024, 12, 26839–26874. [Google Scholar] [CrossRef]
  110. Pozzato, G.; Allam, A.; Pulvirenti, L.; Negoita, G.A.; Paxton, W.A.; Onori, S. Analysis and key findings from real-world electric vehicle field data. Joule 2023, 7, 2035–2053. [Google Scholar] [CrossRef]
  111. Fang, W.; Chen, H.; Zhou, F. Fault diagnosis for cell voltage inconsistency of a battery pack in electric vehicles based on real-world driving data. Comput. Electr. Eng. 2022, 102, 108095. [Google Scholar] [CrossRef]
  112. Wong, R.H.; Sooriamoorthy, D.; Manoharan, A.; Binti Sariff, N.; Hilmi Ismail, Z. Balancing accuracy and efficiency: A homogeneous ensemble approach for lithium-ion battery state of charge estimation in electric vehicles. Neural Comput. Appl. 2024, 36, 19157–19171. [Google Scholar] [CrossRef]
  113. Sesidhar, D.; Badachi, C.; Green, R.C., II. A review on data-driven SOC estimation with Li-Ion batteries: Implementation methods & future aspirations. J. Energy Storage 2023, 72, 108420. [Google Scholar]
  114. Nagarale, S.D.; Patil, B. Accelerating AI-Based Battery Management System’s SOC and SOH on FPGA. Appl. Comput. Intell. Soft Comput. 2023, 2023, 2060808. [Google Scholar] [CrossRef]
  115. He, Z.; Shen, X.; Sun, Y.; Zhao, S.; Fan, B.; Pan, C. State-of-health estimation based on real data of electric vehicles concerning user behavior. J. Energy Storage 2021, 41, 102867. [Google Scholar] [CrossRef]
  116. Hashemi, S.R.; Bahadoran Baghbadorani, A.; Esmaeeli, R.; Mahajan, A.; Farhad, S. Machine learning-based model for lithium-ion batteries in BMS of electric/hybrid electric aircraft. Int. J. Energy Res. 2021, 45, 5747–5765. [Google Scholar] [CrossRef]
  117. Li, S.; He, H.; Zhao, P.; Cheng, S. Data cleaning and restoring method for vehicle battery big data platform. Appl. Energy 2022, 320, 119292. [Google Scholar] [CrossRef]
  118. Akbar, K.; Zou, Y.; Awais, Q.; Baig, M.J.A.; Jamil, M. A machine learning-based robust state of health (SOH) prediction model for electric vehicle batteries. Electronics 2022, 11, 1216. [Google Scholar] [CrossRef]
  119. Zhang, H.; Gui, X.; Zheng, S.; Lu, Z.; Li, Y.; Bian, J. BatteryML: An open-source platform for machine learning on battery degradation. arXiv 2023, arXiv:2310.14714. [Google Scholar]
  120. Zhang, D.; Zhong, C.; Xu, P.; Tian, Y. Deep learning in the state of charge estimation for li-ion batteries of electric vehicles: A review. Machines 2022, 10, 912. [Google Scholar] [CrossRef]
  121. Rivera-Barrera, J.P.; Muñoz-Galeano, N.; Sarmiento-Maldonado, H.O. SoC estimation for lithium-ion batteries: Review and future challenges. Electronics 2017, 6, 102. [Google Scholar] [CrossRef]
  122. Liu, F.; Yu, D.; Su, W.; Ma, S.; Bu, F. Adaptive Multitimescale Joint Estimation Method for SOC and Capacity of Series Battery Pack. IEEE Trans. Transp. Electrif. 2023, 10, 4484–4502. [Google Scholar] [CrossRef]
  123. Timilsina, L.; Hoang, P.H.; Moghassemi, A.; Buraimoh, E.; Chamarthi, P.K.; Ozkan, G.; Papari, B.; Edrington, C.S. A real-time prognostic-based control framework for hybrid electric vehicles. IEEE Access 2023, 11, 127589–127607. [Google Scholar] [CrossRef]
  124. He, H.; Meng, X.; Wang, Y.; Khajepour, A.; An, X.; Wang, R.; Sun, F. Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives. Renew. Sustain. Energy Rev. 2024, 192, 114248. [Google Scholar] [CrossRef]
  125. Guirguis, J.; Ahmed, R. Transformer-based deep learning models for state of charge and state of health estimation of li-ion batteries: A survey study. Energies 2024, 17, 3502. [Google Scholar] [CrossRef]
  126. Salucci, C.B.; Bakdi, A.; Glad, I.K.; Vanem, E.; De Bin, R. A novel semi-supervised learning approach for State of Health monitoring of maritime lithium-ion batteries. J. Power Sources 2023, 556, 232429. [Google Scholar] [CrossRef]
  127. Zafar, M.H.; Mansoor, M.; Abou Houran, M.; Khan, N.M.; Khan, K.; Moosavi, S.K.R.; Sanfilippo, F. Hybrid deep learning model for efficient state of charge estimation of Li-ion batteries in electric vehicles. Energy 2023, 282, 128317. [Google Scholar] [CrossRef]
  128. Krzywanski, J.; Sosnowski, M.; Grabowska, K.; Zylka, A.; Lasek, L.; Kijo-Kleczkowska, A. Advanced computational methods for modeling, prediction and optimization—A review. Materials 2024, 17, 3521. [Google Scholar] [CrossRef]
Figure 1. European map of EV sales, with countries clustered by their progress in the transition to net-zero-CO2-emission passenger car sales [13].
Figure 2. Summary diagram of the major SOC estimators used in the literature.
Figure 3. Generic model for SOC estimation with a KF.
Figure 4. Generic model for SOC estimation with ML, using a dataset collected from real battery measurements.
Figure 5. Generic model for SOC estimation with ML, using a dataset generated by a DT model.
Figure 6. Generic structure of FNNs.
Figure 7. BP neural network used to build the battery OCV–SOC curve offline for SOC estimation.
Figure 8. SOC estimation scheme based on FNN and Unscented Kalman Filter (UKF).
Figure 9. Architecture of a 4-layer BPNN with 12 inputs, 6 from each battery, for estimating the SOC of LFP and LA batteries.
Figure 10. Structure of the SOC estimation algorithm combining an FNN with an EKF.
Figure 11. Experimental data collection for training and validation of a neural network for SOC estimation accounting for polarization and temperature.
Figure 12. Schematic of the three-classification Feedforward Neural Network for battery SOC estimation based on battery current.
Figure 13. Schematic diagram of SOC estimation with an FNN combined with the BSA algorithm.
Figure 14. Physics-informed FNN (PINN) architecture for battery SOC estimation.
Figure 15. Hybrid ECM approach using three FNNs to model the resistance and capacity parameters of the battery.
Figure 16. LSTM generic architecture.
Figure 17. Architecture combining an LSTM with another RNN.
Figure 18. Framework of SOC estimation using the EI–LSTM–CO algorithm.
Figure 19. Flowchart for optimization of the LSTM neural network by the random search (RS) algorithm.
Figure 20. BiLSTM generic architecture.
Figure 21. Schematic structure of GLA–CNN–BiLSTM deep learning model.
Figure 22. Framework of SOC estimation based on the AMBiLSTM architecture.
Figure 23. GRU generic architecture.
Figure 24. Architecture of Bidirectional GRU (Bi-GRU).
Figure 25. Scheme of GRU–RNN architecture.
Figure 26. Overall structure of GRU–ASG model.
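As a concrete companion to the GRU schematics of Figures 23–26, the following minimal NumPy sketch spells out a single GRU step with its update and reset gating. All weight matrices, layer sizes, and the input window are illustrative placeholders, not taken from any of the cited models; inputs could, for example, be per-sample voltage/current/temperature vectors.

```python
import numpy as np

# One GRU cell step, spelled out to show the update/reset gating sketched
# in the GRU architecture figures. Weights are random placeholders.

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)                # update gate
    r = sigmoid(Wr @ x + Ur @ h)                # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))    # candidate state
    return (1.0 - z) * h + z * h_tilde          # blend old and new state

D, H = 3, 8                                     # input dim, hidden dim
Wz, Wr, Wh = (rng.standard_normal((H, D)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((H, H)) * 0.1 for _ in range(3))
h = np.zeros(H)
for x in rng.standard_normal((50, D)):          # a 50-sample input window
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays bounded, which is part of why GRUs mitigate the exploding-gradient issues discussed in [98].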
Table 1. Comparison of SOC estimation methods.

| Article | Innovations | Dataset & EV Tests | Performance | Advantages | Disadvantages | Computational Cost |
|---|---|---|---|---|---|---|
| UKF for SOC Estimation | OCV–SOC-T table with temperature (0–50 °C); UKF for nonlinearity handling; simplified Rint model | Lab tests on LiFePO4 (18650) with DST, OCV–SOC-T, FUDS; no direct EV tests | RMS error < 5% (40 °C), up to 16.4% (0 °C) | Accurate SOC estimation with temperature integration; real-time capable | Poor performance at low temperatures; sensitive to flat OCV–SOC regions | Low; optimized for BMS real-time use |
| Neural-Network-Embedded ECM [66] | ECM with 3 feedforward NNs for residual error correction; physics-informed loss in training | Panasonic 18650PF; HPPC, US06, HWFET at 10 °C to −20 °C; lab-tested under EV conditions | MSE reduced by 82.5% (US06); RMSE improved 33–64% (US06), 5–29% (HWFET) | High accuracy, especially in extreme conditions; adaptive to uncertainties | Initial/final phase errors; complex offline training, requires retraining | Offline: 264–450 s; online: 1.3–2.2 s (real-time suitable) |
| Physics-Informed NN (PINN) [63] | Deep FNN with physics-constrained loss (data + charge conservation) | McMaster Univ. (LG HG2, 3 Ah); UDDS, US06, LA92, HWFET, Mix; −10 °C to 25 °C; climatic chamber tests | RMSE < 2.85%; outperforms NN-only and AKF methods | High accuracy and robustness with noisy/sparse data; physically consistent solutions | Demanding offline training; complex design | Offline: demanding; online: efficient, real-time capable |
| NN + UKF for Error Cancellation [45] | Hybrid NN for SOC estimation + UKF for noise/error reduction; no OCV–SOC table needed | Dynamic tests; NN trained on DST, validated on US06/FUDS; Li-ion (mostly LiFePO4) under EV conditions | RMS error (US06, 0 °C) reduced from 4.1% (NN only) to 2.4% (NN + UKF) | Higher robustness via UKF filtering; no need for OCV–SOC table | Risk of NN overfitting; complex NN–UKF integration | Offline: structural optimization; online: real-time with filtered errors |
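The filter-based entries in Table 1 share the predict/correct structure of Figure 3: coulomb counting propagates the SOC, and a terminal-voltage measurement corrects it. The sketch below illustrates that loop with a deliberately simplified linear OCV curve and Rint model; every parameter value (capacity, internal resistance, noise covariances) is an illustrative placeholder, not taken from any of the cited papers.

```python
import numpy as np

# Minimal SOC predict/correct loop in the spirit of the Kalman-filter
# estimators of Table 1. Linear OCV map and Rint model are toy choices.

def ocv(soc):
    """Hypothetical linear OCV(SOC) map, in volts."""
    return 3.0 + 1.2 * soc

def soc_kf_step(soc, P, i_meas, v_meas, dt, q_ah=2.0, r_int=0.05,
                q_proc=1e-7, r_meas=1e-3):
    # Predict: coulomb counting (discharge current taken as positive).
    soc_pred = soc - i_meas * dt / (q_ah * 3600.0)
    P_pred = P + q_proc
    # Correct: compare predicted terminal voltage with the measurement.
    h = 1.2                                  # d(OCV)/d(SOC) of the toy map
    v_pred = ocv(soc_pred) - r_int * i_meas
    K = P_pred * h / (h * P_pred * h + r_meas)   # Kalman gain
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * h) * P_pred
    return float(np.clip(soc_new, 0.0, 1.0)), P_new
```

Even when started from a deliberately wrong initial SOC, the voltage correction pulls the estimate back toward the true value, which is exactly the property that makes these filters attractive for BMS use despite the flat-OCV weakness noted in the table.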
Table 2. Comparison of SOC estimation methods (part 2).

| Article | Innovations | Dataset & EV Tests | Performance | Advantages | Disadvantages | Computational Cost |
|---|---|---|---|---|---|---|
| Hybrid FNNs for SOC [51] | Hybrid FNN model for SOC estimation; NN optimization to improve generalization and reduce errors | Lab data on Li-ion (LiFePO4) batteries; DST, US06, FUDS at variable temperatures | RMS error < 3–4% under optimal conditions | Robust to nonlinearities; no need for OCV–SOC tables | Complex offline training; requires careful optimization | Offline: demanding; online: real-time capable |
| FNN + EKF τ [48] | Hybrid approach: FNN + EKF with parameter τ for filter update; direct SOC estimation without OCV–SOC table | Lab data on Li-ion (LiFePO4); DST, US06, FUDS under various temperatures | Reduced RMS and max errors; stable SOC estimation | Accurate and robust; EKF ensures real-time applicability | EKF less accurate under strong nonlinearities; requires tuning and sensor calibration | Offline: demanding; online: real-time suitable |
| BPNN + BSA Model [62] | BPNN with BSA model for direct SOC estimation; improved data fusion (voltage, current, temperature) | Lab data on Li-ion (LiFePO4); DST, US06, FUDS at various temperatures | RMS and MAE errors typically < 5% | Strong nonlinear learning; no OCV–SOC table needed | Complex training; requires high-quality data | Offline: computationally heavy; online: real-time feasible |
| LSTM–RNN for SOC [85] | EI-LSTM-CO model: extended input (avg. voltage via sliding window), output constrained by AhI strategy for stability | Public dataset (CALCE, LiFePO4); DST for training, US06 and FUDS for validation (7 temperatures, 1 s sampling) | RMSE < 1.3%, MAXE < 3.2%; improved stability vs. standard LSTM–RNN | Smooth SOC estimation; no need for accurate initial SOC; high accuracy for BMS | Requires careful tuning (epochs, window size, constraints); more complex than conventional NNs | Offline: 150 epochs, window size 50; online: fast, real-time capable |
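The FNN estimators of Table 2 all reduce to a learned static map from instantaneous measurements (voltage, current, temperature) to SOC, as in Figure 6. The sketch below shows only the forward pass of such a map; the layer sizes, random untrained weights, and ad hoc input normalization are stand-ins for a network trained on drive-cycle data (DST, US06, FUDS).

```python
import numpy as np

# Forward pass of a small feedforward SOC estimator in the style of
# Table 2. Weights are random placeholders, not a trained model.

rng = np.random.default_rng(0)

def init_fnn(sizes):
    """He-style random initialization for the given layer sizes."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def fnn_soc(params, x):
    """ReLU hidden layers; sigmoid output keeps SOC in (0, 1)."""
    for W, b in params[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

params = init_fnn([3, 16, 16, 1])   # inputs: [voltage, current, temperature]
# Crude per-feature normalization (illustrative scales only).
features = np.array([3.7 / 4.2, -1.2 / 5.0, 25.0 / 60.0])
soc = fnn_soc(params, features)
```

A trained version of this map would be fitted by backpropagation on labeled SOC data; the sigmoid output is one simple way to respect the physical 0–100% range without the output constraints used by [85].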
Table 3. Comparison of SOC estimation methods (part 3).

| Article | Innovations | Dataset & EV Tests | Performance | Advantages | Disadvantages | Computational Cost |
|---|---|---|---|---|---|---|
| CNN–LSTM for SOC [82] | Hybrid CNN–LSTM for feature extraction and temporal modeling; robust to unknown initial SOC and temperature variations | A123 18650 LiFePO4; DST, FUDS, US06; climatic chamber tests | MAE < 1%, RMSE < 2% (unknown initial SOC); RMSE < 2%, MAE < 1.5% (temperature variations) | Stable and accurate SOC estimation; rapid convergence; good adaptation to environmental variations | Complex architecture; long offline training and parameter tuning | Offline: 10,000 epochs (161 min on GPU); online: 0.098 ms per time step (real-time suitable) |
| RS-LSTM (Random Search LSTM) [86] | LSTM with Random Search optimization; Random Forest for feature selection; auto-tuned hyperparameters | CALCE dataset (INR 18650-20R); DST, FUDS, US06; lab and real vehicle data at different temperatures | Optimal settings: look back = 45, epochs = 177, batch = 64, LR = 0.0026; MAE = 0.221%, RMSE = 0.262% | Reduces noise via Random Forest; auto-optimized hyperparameters; high accuracy and stability | High offline computation due to hyperparameter search; complex model tuning | Offline: intensive training; online: fast, real-time capable |
| GLA–CNN–BiLSTM for SOC [92] | CNN for spatial and BiLSTM for temporal features; GLA for automatic hyperparameter tuning | Six EV discharge datasets (HWFET, US06, BJDST, DST, FUDS, UDDS); lab tests on Li-ion | MAE < 1%, RMSE < 1%, max error < 2% | Combines CNN (features) + BiLSTM (time); GLA ensures fast convergence and adaptability | Intensive offline training; complex hyperparameter tuning | Offline: high due to GLA; online: fast, real-time suitable |
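Unlike the static FNN maps, the recurrent estimators of Table 3 consume a sliding window of measurements, carrying charge history across time steps through the LSTM's gated cell state (Figure 16). The NumPy sketch below spells out one LSTM step and runs it over a window; the weights, dimensions, and the 50-sample window length are illustrative placeholders, not values from the cited papers.

```python
import numpy as np

# A single LSTM cell step, written out to show the gating that lets the
# recurrent estimators of Table 3 carry charge history across time steps.
# Weights are random placeholders; inputs stand in for [V, I, T] samples.

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step. W: (4H, D), U: (4H, H), b: (4H,). Gate order: i, f, o, g."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g           # cell state: long-term memory
    h_new = o * np.tanh(c_new)      # hidden state: short-term output
    return h_new, c_new

D, H = 3, 8
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.standard_normal((50, D)):   # e.g. a 50-sample sliding window
    h, c = lstm_step(x, h, c, W, U, b)
```

In a full estimator such as [82] or [86], the final hidden state would feed a small output layer producing SOC; BiLSTM variants simply run a second pass over the window in reverse and concatenate both hidden states.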
Table 4. Comparison of SOC estimation methods (part 4).

| Article | Innovations | Dataset & EV Tests | Performance | Advantages | Disadvantages | Computational Cost |
|---|---|---|---|---|---|---|
| AMBiLSTM with Attention [94] | BiLSTM with spatial/temporal attention; temperature compensation in OCV and capacity; SOC and SOH co-estimation | Lab tests on LiNiCoAlO2 (2.6 Ah); 0–40 °C; DST, UDDS; 1 s/10 s sampling | DST (25 °C): RMSE ↓9.39%; UDDS (25 °C): RMSE ↓22.36%; SOH ↑21.45% | Improved feature extraction; robust to temperature variations; simultaneous SOC/SOH estimation | Complex model; requires large dataset and intensive training; potential short-sequence errors | Offline: GPU-intensive (RTX 3090); online: fast, real-time feasible |
| GRU–RNN with Momentum [105] | GRU–RNN with momentum gradient algorithm; noise injection to prevent overfitting | Lab tests on BTcap 21700 (2.2 Ah); charge–discharge cycles; voltage–current measurements | Sigma = 0.03: RMSE = 0.0092, MAE = 0.0041, R2 = 0.9990 | Fast convergence; reduced weight oscillations; high accuracy and generalization | Requires precise tuning of momentum (beta) and hyperparameters | Offline: intensive; online: extremely fast, real-time capable |
| GRU–ASG for SOC [106] | GRU with adaptive Savitzky–Golay (ASG) filter; Spearman coefficient for dynamic window selection | Real data from LiFePO4 (280 Ah, 3.2 V); energy storage plant; six discharge datasets | MSE < 0.15%, MAE < 3%; outperforms standard GRU and filters | Robust in varying conditions; adaptive filter eliminates manual tuning; effective memory structure | Complex integration of NN with adaptive filtering; additional online computation | Offline: GPU training (GTX 1060, TensorFlow); online: fast, real-time suitable |
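The GRU–ASG entry in Table 4 post-filters the network output with a Savitzky–Golay filter to remove estimation jitter. The sketch below shows only the fixed-window version of that idea, smoothing a synthetic noisy SOC trace by sliding-window polynomial least squares; it omits the paper's adaptive, Spearman-based window selection, and the window length, polynomial order, and synthetic data are all illustrative choices.

```python
import numpy as np

# Fixed-window Savitzky–Golay smoothing of a noisy SOC trace, as a rough
# stand-in for the adaptive SG post-filter of the GRU–ASG method.

def savgol(y, window=11, order=2):
    """Least-squares polynomial smoothing over a sliding window."""
    assert window % 2 == 1 and order < window
    half = window // 2
    ypad = np.pad(y, half, mode="edge")     # replicate edges for the ends
    x = np.arange(-half, half + 1)
    out = np.empty_like(y, dtype=float)
    for k in range(len(y)):
        coeffs = np.polyfit(x, ypad[k:k + window], order)
        out[k] = np.polyval(coeffs, 0.0)    # value of the local fit at center
    return out

# Synthetic discharge: linear SOC ramp plus measurement-like noise.
t = np.linspace(0.0, 1.0, 200)
noise = 0.02 * np.random.default_rng(2).standard_normal(200)
soc_noisy = 1.0 - 0.8 * t + noise
soc_smooth = savgol(soc_noisy)
```

SciPy users would reach for `scipy.signal.savgol_filter` instead of the hand-rolled loop; the explicit version is kept here only to make the local polynomial fit visible.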