Article

A Battery State-of-Charge Prediction Method Based on a Hammerstein Model Integrated with a Hippopotamus Optimization Algorithm and Neural Network

School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(3), 698; https://doi.org/10.3390/electronics15030698
Submission received: 17 January 2026 / Revised: 1 February 2026 / Accepted: 3 February 2026 / Published: 5 February 2026

Abstract

Accurate estimation of the state of charge (SOC) of lithium-ion batteries is critical for assessing the safety and remaining range of electric vehicles. However, owing to the complex and variable operating environment of batteries and their highly nonlinear internal mechanisms, high-precision SOC prediction remains a central challenge in current research. To this end, this paper proposes a nonlinear Hammerstein model in which the Hippopotamus Optimization (HO) algorithm optimizes a backpropagation (BP) neural network, thereby enhancing the accuracy of SOC prediction. The HO-BP-Hammerstein model optimizes the BP neural network architecture with the HO algorithm, and its SOC prediction accuracy is tested on real-world data. Experimental results demonstrate the superiority of the proposed method through comparative accuracy analysis against other SOC prediction approaches under different operating conditions, confirming its significant engineering application value.

1. Introduction

With the rapid advancement of new green energy technologies, the precise detection of battery state of charge within battery management systems (BMSs) has become a research hotspot in the energy sector [1]. As the core energy storage component in new energy technologies, lithium-ion batteries have achieved large-scale application in electric vehicles and smart grids due to their significant advantages, including high energy density [2], long cycle life, and absence of memory effect [3]. Notably, the safe operation of electric vehicle powertrains heavily relies on the BMSs’ real-time monitoring of SOC [4]. However, SOC measurement is constrained by the complexity of the battery’s electrochemical system and cannot be directly measured.
The current mainstream methods for predicting the SOC of batteries can be categorized into three major technical approaches: Kalman filter algorithms based on physical models, traditional empirical model-driven ampere-hour integration methods, and emerging data-driven methods [5,6,7]. While the ampere-hour integration method offers the advantage of a simple principle, its error exhibits exponential growth over time. Neural network methods can achieve high-precision predictions but are constrained by the scale of training data. Open-circuit voltage and internal resistance methods, dependent on battery static conditions and measurement device accuracy, struggle to meet practical engineering requirements. With the rapid advancement of deep learning technology [8], data-driven approaches have emerged as the core research methodology in this field [9] due to their structural simplicity and ease of implementation. This method effectively approximates complex nonlinear mapping relationships, enabling the construction of high-precision systems [10]. Notably, the Hammerstein model demonstrates unique advantages in SOC prediction due to its distinctive nonlinear mapping capability [11]. By cascading a static nonlinear module with a dynamic linear module, this model effectively decomposes complex nonlinear problems into separately manageable nonlinear compensation and linear control issues.
The nonlinear input-linear dynamic response characteristics of the Hammerstein model [12,13] align well with the autonomous learning capabilities of neural networks, enabling the construction of highly accurate SOC prediction models. Yang et al. [14] employed a three-layer BP neural network to estimate SOC; however, traditional BP neural networks suffer from sensitivity to initial weights and susceptibility to local optima, resulting in imprecise SOC predictions. Liu et al. [15] used a genetic algorithm (GA) to optimize the BP neural network, but this optimization requires tuning numerous parameters, such as the crossover and mutation rates, which slows convergence. Li et al. [16] introduced particle swarm optimization (PSO) for weight and threshold correction, constructing a PSO-optimized BP neural network model; although this improved convergence speed, its local search capability remained insufficient. Gao et al. [17] used the Grey Wolf Optimizer (GWO) to optimize the weights of BP neural networks, yet a reducible steady-state error margin persisted after optimization. Zhao et al. [18] employed an improved firefly algorithm to optimize backpropagation neural networks, but this approach exhibited high computational complexity and time consumption. Wei et al. [19] used the cuckoo search algorithm to optimize the BP neural network for predicting the state of health of lithium batteries, but it used few iterations and yielded large errors. Ahmad et al. [20] applied the HO algorithm to electric vehicle charging network research, and Saber et al. [21] used the HO algorithm for risk assessment with LSTM network optimization. These studies showed that the HO algorithm delivers good optimization performance and prediction accuracy in fields such as mechanical design and photovoltaic power generation forecasting.
On this basis, this article combines the HO algorithm with the Hammerstein model for the first time to construct an HO-BP-Hammerstein composite structure, which is applied to predict the state of charge (SOC) of lithium-ion batteries. Among existing SOC prediction methods for lithium-ion batteries, using optimization algorithms to improve neural networks has become the mainstream research direction. However, the mainstream algorithms each have inherent limitations that restrict further improvement of prediction performance. The genetic algorithm (GA) relies on complex manual parameter tuning and converges slowly; the particle swarm optimization (PSO) algorithm converges faster but is prone to premature convergence in complex nonlinear spaces and lacks local exploitation capability, resulting in large steady-state errors; the Grey Wolf Optimizer (GWO) and similar algorithms show poor stability of local optimization under dynamic conditions, which limits the model’s generalization ability. Compared with these methods, the Hippopotamus Optimization (HO) algorithm [22] introduced in this article differs significantly in mechanism and offers innovative value: it simulates the social structure and behavioral competition of hippopotamus herds, yielding an optimization framework with fewer parameters and stronger search dynamics [23]. During the global exploration phase, the HO algorithm maintains population diversity more effectively and avoids falling into local optima; during the local exploitation phase, it conducts more refined searches.
This mechanism enables HO to optimize neural network weights more accurately when confronting the highly nonlinear and dynamically coupled characteristics of battery systems, achieving a better balance between exploration and exploitation and providing a more efficient algorithmic framework for constructing high-precision SOC prediction models. Subsequent experiments demonstrate that incorporating this mechanism into the Hammerstein model significantly improves predictive performance.
Among existing lithium-ion battery state-of-charge prediction methods, the Hammerstein model can characterize the dynamic–static coupling characteristics of batteries. However, its nonlinear modules often employ traditional function-approximation methods, leading to issues such as lagging dynamic response and insufficient generalization capability under complex operating conditions. Current mainstream Hammerstein model identification methods include key-term separation [24], overparameterized identification [25,26], hierarchical recursive identification [27,28,29], and filtering techniques [30,31]. Among these, the Hammerstein model developed by Liu and Wang et al. [11,24], which combines key-term separation with hierarchical identification principles, predicts the battery’s state of charge; however, these methods generally suffer from insufficient dynamic tracking capability and high sensitivity to noise. To address this bottleneck, this paper introduces a hippopotamus-algorithm-optimized BP neural network into the nonlinear module of the Hammerstein model, which offers enhanced algorithmic performance, an optimized model structure, and improved engineering applicability. Experiments show that the proposed HO-BP-Hammerstein model achieves significant advantages in standard function optimization (such as an F4 function accuracy of 6.1 × 10−2.61). In battery SOC prediction, the MAE was as low as 0.469% under 0 °C FUDS conditions and remained below 0.74% in wide-temperature-range testing, and a prediction accuracy of R2 > 97% was achieved on real vehicle data, significantly improving the robustness and practicality of the prediction.
The main contributions of this paper are as follows:
(1)
For lithium-ion battery state-of-charge prediction under complex operating conditions, a composite architecture integrating neural networks with the Hammerstein model is proposed. This approach suppresses the accumulation of multi-factor errors through a nonlinear dynamic coupling mechanism.
(2)
Employing the key-term separation concept, the coupling between parameters of a non-linear portion and a linear portion in the Hammerstein SOC model is separated with minimal parameters and computational effort.
(3)
The proposed HO-BP-Hammerstein method demonstrates superior performance across various operating conditions through comparisons with the GWO-BP and PSO-BP approaches, achieving an average error below 0.74%.
(4)
Practical in-vehicle validation: Using hybrid electric vehicle data, the proposed method proves applicable with high stability and accuracy demonstrated through experiments.
The remainder of this article is organized as follows: Section 2 introduces the Hammerstein model based on the HO-algorithm-optimized backpropagation neural network and the BP deep neural network architecture. Section 3 presents the mathematical modeling of the hippopotamus optimization process. Section 4 verifies the superior performance of the hippopotamus optimization algorithm and describes the prediction process of the HO-BP-Hammerstein model. Section 5 elaborates on the data sources and processing methods used in the experiments, as well as the experimental parameter settings and result analysis. Section 6 summarizes this paper and proposes directions for further research.

2. Complete Algorithm Demonstration

2.1. Hammerstein Model

As shown in Figure 1, the proposed HO-BP-Hammerstein composite model consists of an optimized neural network nonlinear module cascaded with a dynamic linear module. It is defined as follows:
α(z)·SOC(t) = β(z)·S(t) + υ(t)
The input U(t) = [u₁(t), u₂(t)]ᵀ = [V(t), I(t)]ᵀ ∈ ℝ² consists of the battery’s voltage and current, while the model output Y(t) = SOC(t) is the state of charge of the battery. S(t) is the output of the nonlinear module and also serves as the input of the linear module. υ(t) is white random noise, and α(z) and β(z) are known polynomials in z.
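To make the cascade concrete, Equation (1) can be simulated with illustrative choices: a made-up static nonlinearity stands in for the neural-network module, and first-order polynomials α(z) = 1 − 0.5z⁻¹ and β(z) = 0.3z⁻¹ are assumed (these coefficients are not the identified model, only a sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

def nonlinear_block(u):
    # Made-up static nonlinearity S(t) = g(V, I), standing in for the
    # neural-network module of the actual model.
    v, i = u
    return np.tanh(0.8 * v - 0.4 * i)

def simulate(U, noise_std=0.0):
    # alpha(z)*SOC(t) = beta(z)*S(t) + v(t) with the assumed polynomials
    # rearranges to: SOC(t) = 0.5*SOC(t-1) + 0.3*S(t-1) + v(t)
    soc = np.zeros(len(U))
    s = np.array([nonlinear_block(u) for u in U])
    for t in range(1, len(U)):
        soc[t] = 0.5 * soc[t - 1] + 0.3 * s[t - 1] + noise_std * rng.normal()
    return soc

U = rng.uniform(-1, 1, (30, 2))   # normalized (voltage, current) pairs
soc = simulate(U)
```

The nonlinear block produces S(t) from the inputs, and the linear difference equation then propagates the dynamics, mirroring the two-module structure of Figure 1.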

2.2. Building a BP Neural Network

The BP neural network model adopts a classic three-layer feedforward structure comprising an input layer, a hidden layer, and an output layer [32]. Parameter optimization is achieved through the synergistic mechanism of forward propagation and backpropagation [33]. Forward propagation transmits sampled data from the input layer to the output layer, while backpropagation adjusts weights and biases to minimize output error by propagating errors backward [34]. Based on Equation (2), the three-layer BP neural network structure can be determined.
H = √(n + m) + α
Here H is the number of hidden-layer nodes, n the number of input nodes, and m the number of output nodes; α is an integer selected from [1, 10]. The network structure is shown in Figure 2.
Through systematic comparison, this article found that with α = 7 (see Table 1) the model achieves the best balance among training efficiency, generalization ability, and final prediction accuracy (RMSE, MAE) under a wide temperature range and dynamic operating conditions. This value was therefore chosen to construct the final HO-BP-Hammerstein model to address the complex nonlinear challenges faced by electric vehicle batteries in actual operation.
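As a concrete illustration, the node-count rule of Equation (2) can be sketched in a few lines; rounding the square root to the nearest integer is an implementation choice here, and the sizes (two inputs for voltage and current, one SOC output, α = 7 per Table 1) follow the text:

```python
import math

def hidden_nodes(n_in: int, n_out: int, alpha: int) -> int:
    # Empirical rule from Equation (2): H = sqrt(n + m) + alpha,
    # with the square root rounded to the nearest integer (our choice).
    return round(math.sqrt(n_in + n_out)) + alpha

# Two inputs (voltage, current), one output (SOC), alpha = 7
H = hidden_nodes(2, 1, 7)
```

With these sizes the rule yields H = 9 hidden nodes.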
In Figure 2, N_j^[m] denotes the j-th neuron in the m-th layer, which receives n input elements. The input vector is (x₁, x₂, x₃, …, x_n)ᵀ, and its weighted net input is as follows:
Z_j^[m] = w₁x₁ + w₂x₂ + w₃x₃ + ⋯ + w_n x_n + b_j^[m]
Equivalently, the net input Z_j^[m] can be written in compact form as in Equation (4).
Z_j^[m] = ∑_{i=1}^{n} w_ji^[m] x_ji^[m] + b_j^[m] = W_j X + b_j
The neuron’s output y_j is then given by Equation (5), y_j = f(·) [35], where f(·) is an activation function [36].
y_j = f(Z_j) = f(∑_{i=1}^{n} w_ji^[m] x_ji^[m] + b_j^[m]) = F(W_j X + b_j)
The sigmoid activation function is expressed as follows:
f(x) = 1 / (1 + e^(−x))
This article uses the sigmoid function to constrain the output range, while recognizing the gradient saturation that can occur when the input magnitudes are very large. The input data are therefore normalized to the [−1, 1] interval before training, which keeps the activations away from the saturated tails and ensures that gradients propagate effectively and converge stably during training.
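A minimal sketch of Equation (6) together with the [−1, 1] normalization described above; the voltage bounds used here are illustrative:

```python
import numpy as np

def sigmoid(x):
    # Equation (6): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def normalize(x, lo, hi):
    # Linear map onto [-1, 1], keeping sigmoid inputs out of the saturated tails
    return 2.0 * (x - lo) / (hi - lo) - 1.0

v = np.array([2.5, 3.35, 4.2])     # sample cell voltages (V), invented values
v_norm = normalize(v, 2.5, 4.2)    # endpoints map to -1 and +1
```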
The backpropagation algorithm drives iterative weight adjustments along the negative error gradient, aiming to minimize the total error. The node notations in Figure 2 are defined as follows: X_i for input nodes, H_j for hidden nodes, and Y_k for output nodes. The connection weights are denoted W_ji^(1) (input to hidden) and W_kj^(2) (hidden to output). The specific steps of backpropagation are as follows.
Step 1: Confirm the output of the hidden layer and the output layer nodes
H_j = f(∑_{i=0}^{n} W_ji^(1) x_i − θ_j),  Y_k = f(∑_{j=0}^{H} W_kj^(2) H_j − θ_k)
Step 2: Calculate the error. The layer index m takes the values 1 and 2.
E_j = (1/2) ∑_{k=1}^{n} e_jk²,  E = ∑_{j=1}^{N} E_j = (1/2) ∑_{j=1}^{N} ∑_{k=1}^{n} e_jk²,  e_jk = ŷ_jk − y_jk
Step 3: This step entails calculating the error function and its partial derivatives with respect to the nodes in both the output and hidden layers.
∂E/∂W_kj^(2) = (∂E/∂Y_k)·(∂Y_k/∂W_kj^(2)) = −e_k · f′(Y_k) · H_j
∂E/∂W_ji^(1) = ∑_k (∂E/∂Y_k)·(∂Y_k/∂H_j)·(∂H_j/∂W_ji^(1)) = −f′(H_j) · x_i · ∑_k e_k · f′(Y_k) · W_kj^(2)
Step 4: Based on the principle of gradient descent, adjust the connection weights using the learning rate σ, and update the connection weights between nodes in the output layer and hidden layer according to Equations (10) and (11).
W_kj^(2)(m+1) = W_kj^(2)(m) + ΔW_kj^(2) = W_kj^(2)(m) + σ · e_k · f′(Y_k) · H_j
W_ji^(1)(m+1) = W_ji^(1)(m) + ΔW_ji^(1) = W_ji^(1)(m) + σ · f′(H_j) · x_i · ∑_k e_k · f′(Y_k) · W_kj^(2)(m)
Step 5: The iteration index m is incremented until the error meets the predefined threshold, at which point the neural network training terminates.
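Steps 1–4 can be condensed into a single NumPy training step; the layer sizes, learning rate, and data below are illustrative stand-ins, and bias vectors take the place of the thresholds θ:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-9-1 network; sizes and learning rate are illustrative
n_in, n_hid, n_out = 2, 9, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_out, n_hid)); b2 = np.zeros(n_out)

def backprop_step(x, y_true, lr=0.1):
    """One forward/backward pass following Steps 1-4."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)                   # hidden output (Step 1)
    y = sigmoid(W2 @ h + b2)                   # network output (Step 1)
    e = y_true - y                             # error (Step 2)
    d2 = e * y * (1.0 - y)                     # output-layer delta (Step 3)
    d1 = (W2.T @ d2) * h * (1.0 - h)           # hidden-layer delta (Step 3)
    W2 += lr * np.outer(d2, h); b2 += lr * d2  # updates, Equations (10)-(11)
    W1 += lr * np.outer(d1, x); b1 += lr * d1
    return float(0.5 * e @ e)

x_s, y_s = np.array([0.2, -0.4]), np.array([0.7])
losses = [backprop_step(x_s, y_s) for _ in range(200)]
```

Repeated calls move the weights down the error gradient, so the loss sequence decreases, matching the termination criterion of Step 5.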

3. Mathematical Modeling of the HO Algorithm

3.1. Population Initialization

Hippopotamus optimization is a population-based optimization algorithm whose search agents consist of hippopotamus individuals. Hippopotamuses represent candidate solutions to the optimization problem, meaning each hippopotamus’s updated position within the search space signifies an adjustment to the decision variable’s value. The initialization phase requires generating random initial solutions. During this process, decision variables are generated according to the following formula:
χ_i: x_ij = lb_j + r · (ub_j − lb_j)
In this formula, χ_i denotes the position of the i-th candidate solution, r is a uniform random number in [0, 1], and lb_j and ub_j are the lower and upper bounds of the j-th decision variable. N is the population size, and m is the number of decision variables.
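Equation (12) amounts to a uniform draw inside the box constraints; a minimal sketch (the population size, dimension, and bounds below are illustrative):

```python
import numpy as np

def init_population(N, m, lb, ub, seed=0):
    # Equation (12): x_ij = lb_j + r * (ub_j - lb_j), with r ~ U[0, 1]
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return lb + rng.random((N, m)) * (ub - lb)

pop = init_population(N=30, m=5, lb=[-1] * 5, ub=[1] * 5)
```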
The HO algorithm simulates the social structure and juvenile behavior characteristics of hippopotamus herds, integrating leader guidance, exploratory wandering, and risk avoidance mechanisms into the optimization process. Its core concept employs biological heuristic strategies to balance global search with local exploitation, thereby enhancing the efficiency of solving complex optimization problems [21].
In this study, the Hippo Optimization (HO) algorithm was employed as a global optimizer, directly optimizing all weights and bias parameters of the BP neural network rather than solely for parameter initialization. The entire training process was dominated by HO, with the standard BP backpropagation algorithm serving to provide fitness calculations for HO based on network outputs, rather than performing an independent gradient descent training phase.

3.2. Phase One: Hippopotamus Position Updates in Rivers or Ponds (Exploration Phase)

The first stage of the HO algorithm achieves global exploration of the solution space by simulating the competitive and dispersal behaviors of male members within a hippopotamus herd. The dominant male hippopotamus (representing the global optimum) guides the population toward convergence, while expelled males enhance diversity through random perturbations or competitive mechanisms, thereby improving the algorithm’s robustness in tackling complex optimization problems.
Hippopotamus populations consist of several adult female hippos, hippo calves, multiple adult male hippos, and one dominant male hippo (the group leader). The selection of the dominant male hippo is based on the iterative evaluation of objective function values (taking the minimum value in minimization problems and the maximum value in maximization problems). The mathematical expression for the position of male hippopotamus members in a river or pond is shown in Equation (13).
χ_i^Mhippo: x_ij^Mhippo = x_ij + y₁ · (D_hippo − I₁ · x_ij)
χ i M h i p p o denotes the position of the i-th male hippopotamus, y 1 is a random number in the interval [0, 1], and D h i p p o represents the position of the dominant hippopotamus (i.e., the solution with the highest fitness in the current iteration).
h = { I₂ · r₁ + (∼Q₁),  2 · r₂ − 1,  r₃,  I₁ · r₄ + (∼Q₂),  r₅ }
T = exp(−t / τ)
χ_i^FBhippo: x_ij^FBhippo = x_ij + h₁ · (D_hippo − I₂ · MG_i), if T > 0.6;  Ξ, otherwise
Ξ = x_ij + h₂ · (MG_i − D_hippo), if r₆ > 0.5;  lb_j + r₇ · (ub_j − lb_j), otherwise
In Formula (14), r₁–r₄ are random vectors in [0, 1], r₅ is a random scalar in [0, 1], I₁ and I₂ are random integers taking the value 1 or 2, and Q₁ and Q₂ are random integers taking the value 0 or 1. In Formula (16), MG_i denotes the mean position of a randomly selected subset of hippos (which may include the current hippo χ_i). Formulas (16) and (17) describe the position-update rules for female or immature hippos within the herd; if T is greater than 0.6, the calf is far from its mother. r₆ is a random number in [0, 1]; if it exceeds 0.5, local exploration is performed, otherwise global exploration is conducted. h₁ and h₂ are coefficients or vectors randomly selected from the preset scenarios to adjust movement direction and step size, and r₇ is a random number in [0, 1]. In Formulas (18) and (19), F_i is the objective function value used to evaluate the quality of candidate solutions.
χ_i = χ_i^Mhippo, if F_i^Mhippo < F_i;  χ_i, otherwise
χ_i = χ_i^FBhippo, if F_i^FBhippo < F_i;  χ_i, otherwise
By simulating hippopotamus characteristics, a dynamic equilibrium between exploration and exploitation is achieved. Position updates driven by random perturbations and fitness optimize neural network weights and biases, thereby enhancing model performance.
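The male-hippo update of Equation (13) combined with the greedy selection of Equations (18) and (19) can be sketched as follows, with a toy sphere function standing in for the fitness (all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def male_update(x_i, dominant, y1, I1):
    # Equation (13): x_new = x + y1 * (D_hippo - I1 * x)
    return x_i + y1 * (dominant - I1 * x_i)

def greedy_select(x_old, x_new, f):
    # Equations (18)-(19): keep the new position only if fitness improves
    return x_new if f(x_new) < f(x_old) else x_old

f = lambda x: float(np.sum(x ** 2))   # toy sphere fitness (illustrative)
x0 = np.array([0.8, -0.6])            # current individual
dom = np.zeros(2)                     # dominant hippo (best-known solution)
x1 = greedy_select(x0, male_update(x0, dom, rng.random(), rng.integers(1, 3)), f)
```

Because the selection is greedy, the fitness of an individual never worsens from one iteration to the next.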

3.3. Phase Two: Hippopotamus Defending Against Predators (Exploration Phase)

The second stage of the HO algorithm transforms low-fitness regions (“predator locations”) into exploration drivers by simulating hippopotamus defense against predators. Through directed perturbations and multi-strategy exploration mechanisms, it enhances the algorithm’s global search capability in complex solution spaces. Together with the first phase, it provides an adaptive balance between exploration and exploitation. The predator’s position within the search space is defined by Equation (20).
Predator: Predator_j = lb_j + r₈ · (ub_j − lb_j)
Here, r 8 denotes a random vector ranging from 0 to 1.
D⃗ = |Predator_j − x_ij|
The proximity of the i-th candidate solution to the low-fitness region is expressed as in Equation (21). Formulas (21) and (22) achieve an adaptive switching between local exploration and global search in the optimization algorithm by simulating a hippopotamus’s differential response to predator distance. This mechanism enables the HO algorithm to perform exceptionally well in complex optimization problems, particularly in optimizing neural network weights and biases, effectively balancing training speed and model performance.
χ_i^HippoR: x_ij^HippoR = RL ⊕ Predator_j + (f / (c − d · cos(2πg))) · (1/D), if F(Predator_j) < F_i;  RL ⊕ Predator_j + (f / (c − d · cos(2πg))) · (1/(2D + r₉)), if F(Predator_j) ≥ F_i
i = ⌊N/2⌋ + 1, …, N,  j = 1, 2, …, m
χ_i^HippoR is a candidate solution in the defensive state, and RL is a random vector drawn from a Levy distribution, simulating sudden position changes during predator attacks. In Equation (22), f takes values in [2, 4], c in [1, 1.5], d in [2, 3], and g in [−1, 1]; r₉ is a multidimensional random direction vector. In Equations (23) and (24), w and v take values in [0, 1], ϑ is a constant equal to 1.5, Γ denotes the gamma function, and σ_w is the scale parameter.
Levy(ϑ) = 0.05 × (w × σ_w) / |v|^(1/ϑ)
σ_w = [ Γ(1 + ϑ) · sin(πϑ/2) / ( Γ((1 + ϑ)/2) · ϑ · 2^((ϑ−1)/2) ) ]^(1/ϑ)
χ_i = χ_i^HippoR, if F_i^HippoR < F_i;  χ_i, if F_i^HippoR ≥ F_i
The survival-competition rule applied after defending against predators, shown in the formula above, retains or replaces each individual based on a fitness comparison. The second stage markedly strengthens the global search, and Phases One and Two complement each other to prevent the search from stagnating in local optima.
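The Levy step of Equations (23) and (24) follows Mantegna's algorithm with the 0.05 scale factor stated above; a sketch with ϑ = 1.5 as in the text (the dimension is illustrative):

```python
import math
import numpy as np

def levy(m, theta=1.5, seed=0):
    # Equation (24): Mantegna's scale parameter sigma_w
    rng = np.random.default_rng(seed)
    sigma_w = (math.gamma(1 + theta) * math.sin(math.pi * theta / 2)
               / (math.gamma((1 + theta) / 2) * theta * 2 ** ((theta - 1) / 2))
               ) ** (1 / theta)
    # Equation (23): heavy-tailed step 0.05 * w * sigma_w / |v|^(1/theta)
    w, v = rng.normal(0.0, sigma_w, m), rng.normal(0.0, 1.0, m)
    return 0.05 * w / np.abs(v) ** (1 / theta)

step = levy(5)
```

The heavy-tailed steps produce occasional long jumps, which is what lets the defensive phase escape local optima.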

3.4. Phase Three: Hippopotamus Escapes Predator (Exploitation Phase)

When hippos encounter groups of predators or fail to disperse threats through defensive behaviors, their survival strategy involves rapidly retreating to nearby safe zones such as lakes or ponds. The third phase of the HO algorithm simulates this behavior by generating random positions near the current location, enhancing local exploration capabilities, and optimizing the solution’s fine-grained search.
Equations (26)–(29) establish a multi-scenario-based local search optimization, where t denotes the current iteration count, and T represents the maximum iteration count, dynamically adjusting the search intensity.
lb_j^local = lb_j / t,  ub_j^local = ub_j / t
χ_i^HippoE: x_ij^HippoE = x_ij + r₁₀ · (lb_j^local + s₁ · (ub_j^local − lb_j^local))
χ_i^HippoE denotes a candidate solution in the local exploitation phase. s₁ is randomly selected from the three scenarios in Equation (28), each emphasizing a different local search behavior.
s = { 2 · r₁₁ − 1,  r₁₂,  r₁₃ }
r₁₁ denotes a vector uniformly distributed over [0, 1]^m, while r₁₀ and r₁₃ are scalars uniformly distributed over [0, 1], and r₁₂ is a normally distributed scalar with mean 0 and standard deviation 1. s represents a set of search strategies emphasizing different exploitation behaviors (detailed search, gradient following, and random perturbation). This design gives the HO algorithm high flexibility and efficiency during the local exploitation phase, making it particularly suitable for scenarios requiring fine-tuned optimization.
χ_i = χ_i^HippoE, if F_i^HippoE < F_i;  χ_i, if F_i^HippoE ≥ F_i
The HO algorithm implicitly simulates the characteristics of different hippo roles through behavioral fusion and random perturbations, rather than explicitly categorizing the population. This approach prevents excessive computational complexity and low efficiency and avoids over-parameterization in network optimization. This design embodies a balance between “biologically inspired” principles and “engineering practicality,” ensuring the algorithm maximizes optimization performance while preserving the essence of natural behavior.
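Phase Three's shrinking-window search (Equations (26)–(28)) can be sketched as follows; the bounds and iteration number are illustrative, and one of the three step scenarios of Equation (28) is drawn at random:

```python
import numpy as np

def escape_update(x_i, lb, ub, t, rng):
    # Equation (26): shrink the bounds by the iteration count t
    lb_loc, ub_loc = lb / t, ub / t
    # Equation (28): draw one of three step scenarios for s1
    s1 = rng.choice([2.0 * rng.random() - 1.0, rng.normal(), rng.random()])
    r10 = rng.random()
    # Equation (27): probe a random point near the current position
    return x_i + r10 * (lb_loc + s1 * (ub_loc - lb_loc))

rng = np.random.default_rng(2)
lb, ub = np.full(3, -5.0), np.full(3, 5.0)
x_new = escape_update(np.zeros(3), lb, ub, t=10, rng=rng)
```

Because the local bounds shrink as 1/t, late iterations probe an ever-smaller neighborhood, which is what yields the fine-grained search described above.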

4. HO-BP-Hammerstein Model

4.1. Performance Evaluation and Analysis of the HO Algorithm

To validate the optimization performance of the HO algorithm, this paper employs 23 standard test functions for evaluation and selects four representative test functions for comparative analysis in Table 2.
In Figure 3, Figure 4, Figure 5 and Figure 6, F4 and F7 are single-peak test functions, F8 is a multi-peak test function, and F13 is a fixed-dimensional multi-peak test function. This article selects four standard test functions, F4, F7, F8, and F13, to systematically evaluate the performance of algorithms in smooth optimization and high-dimensional nonlinear search. These characteristics correspond to practical prediction challenges such as steady-state fitting, dynamic response, and multi-factor coupling in battery data, ensuring the comprehensiveness and pertinence of algorithm evaluation. The results from the figure indicate that, despite requiring more optimization iterations, the HO algorithm demonstrates superior global search capability compared to the Grey Wolf Optimization (GWO) and Particle Swarm Optimization (PSO) algorithms, achieving the smallest converged function value. Regarding convergence rate, the HO algorithm exhibits the steepest initial slope for rapid convergence toward the optimum, followed by a stable phase with minimal error (as shown in Figure 3, Figure 4 and Figure 5), outperforming both GWO and PSO. In computational efficiency, the HO algorithm achieves optimal results with fewer iterations through a balanced strategy, avoiding redundancy and demonstrating synergistic advantages of “high precision, fast convergence, and low computational time.”

4.2. HO-BP-Hammerstein Model Prediction Process

The proposed HO-BP-Hammerstein model follows the modeling framework outlined below, as shown in Figure 7.
Step 1: Input normalized battery voltage, current, and historical SOC data into a BP neural network to construct the nonlinear module. Determine the hidden-layer nodes, apply a sigmoid activation function to the output layer to constrain the prediction range, and use the mean squared error as the fitness function.
Step 2: Configure the HO algorithm parameters: hippo population size, maximum iterations, and problem dimension equal to the total number of network parameters. Define the upper and lower boundaries of the solution space and randomly generate the initial population so that candidate solutions are evenly distributed in the search space.
Step 3: During optimization, the leader hippo dynamically updates its position by simulating group behavior, balancing global exploration and local exploitation.
Step 4: After each iteration, verify the model error on the training and testing sets. Training terminates when either of the following conditions is met: (1) the preset maximum number of iterations is reached; (2) the training error converges below the threshold of 0.000001, indicating that the model has fully converged. If neither condition is met, the optimization continues.
Step 5: Map the global optimal solution to the network weight matrices and bias vectors. The HO-BP model serves as the Hammerstein nonlinear module, and the linear dynamic module is fitted via least squares.
Step 6: Compare the HO-BP-Hammerstein model’s predictions against actual values, as well as against predictions from the GWO-BP and PSO-BP methods, to evaluate the model’s accuracy and superiority.
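Steps 1–5 can be condensed into a runnable outline in which a greatly simplified HO-style loop searches over all flattened BP weights with network MSE as the fitness; the data, population size, and update rule below are toy stand-ins for the full algorithm of Section 3:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data standing in for normalized (voltage, current) -> SOC samples
X = rng.uniform(-1, 1, (50, 2))
y = 0.5 + 0.3 * X[:, 0] - 0.2 * X[:, 1]

n_hid = 4
dim = n_hid * 2 + n_hid + n_hid + 1      # W1, b1, W2, b2 flattened

def unpack(theta):
    W1 = theta[:n_hid * 2].reshape(n_hid, 2)
    b1 = theta[n_hid * 2:n_hid * 3]
    W2 = theta[n_hid * 3:n_hid * 4]
    b2 = theta[-1]
    return W1, b1, W2, b2

def fitness(theta):
    # Step 1: network MSE serves as the HO fitness function
    W1, b1, W2, b2 = unpack(theta)
    h = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
    pred = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((pred - y) ** 2))

# Steps 2-5: HO-style search over all weights and biases (greatly simplified)
pop = rng.uniform(-1.0, 1.0, (20, dim))
fit = np.array([fitness(p) for p in pop])
init_best = float(fit.min())
for t in range(1, 101):
    best = pop[fit.argmin()]
    for i in range(len(pop)):
        # exploration step toward the dominant individual (cf. Equation (13))
        trial = pop[i] + rng.random() * (best - rng.integers(1, 3) * pop[i])
        f_trial = fitness(trial)
        if f_trial < fit[i]:             # greedy selection, Equation (18)
            pop[i], fit[i] = trial, f_trial
    if fit.min() < 1e-6:                 # convergence threshold (Step 4)
        break

best_mse = float(fit.min())
```

The best vector found would then be unpacked into the network weights serving as the Hammerstein nonlinear module (Step 5).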

5. Experimental Simulation and Results Processing

5.1. Experimental Data

The lithium battery dataset used in this paper originates from the University of Maryland (CALCE), with the battery model being INR18650-20R. Key parameters are listed in Table 3. This includes the Federal Urban Driving Schedule (FUDS) dataset and the Dynamic Stress Test (DST) dataset, as shown in Figure 8 and Figure 9. When the test temperature is set to 0 °C and the initial SOC of the battery is 80%, detailed battery parameters for both the FUDS and DST datasets—such as voltage, current, voltage change rate, and SOC—are provided.

5.2. Performance Evaluation Metrics

This paper employs Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Determination (R2) to evaluate the model’s performance in predicting SOC. The calculation formulas are as follows:
MSE = (1/n) ∑_{i=1}^{n} (y_i − ŷ_i)²
RMSE = √[ (1/n) ∑_{i=1}^{n} (y_i − ŷ_i)² ]
MAE = (1/n) ∑_{i=1}^{n} |ŷ_i − y_i|
R² = 1 − ∑_i (ŷ_i − y_i)² / ∑_i (y_i − ȳ)²
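The four metrics can be computed directly from the definitions above; a small self-check with invented values:

```python
import numpy as np

def soc_metrics(y_true, y_pred):
    # MSE, RMSE, MAE, and R^2 as defined in the four formulas above
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return mse, rmse, mae, r2

mse, rmse, mae, r2 = soc_metrics([0.8, 0.6, 0.4], [0.79, 0.61, 0.42])
```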

5.3. Verification of SOC Prediction Accuracy Superiority

To validate the SOC prediction performance of the HO-BP-Hammerstein model [37], this study predicts the state of charge during discharge under FUDS conditions (0 °C, 25 °C, 45 °C) with an initial SOC of 80%. The results are compared with other SOC prediction methods, namely GWO-BP and PSO-BP. As shown in Figure 10b, during the initial simulation phase, the PSO-BP and GWO-BP methods predicted values closer to the actual SOC than the HO-BP-Hammerstein model. However, as the battery continued discharging, the predictions of both PSO-BP and GWO-BP exhibited significant fluctuations with gradually increasing errors. Even in the phases where PSO-BP and GWO-BP appear stable, the HO-BP-Hammerstein model consistently maintains a narrower gap between predicted and actual SOC, clearly exceeding the comparison methods in prediction accuracy. The modeling results of the HO-BP-Hammerstein model thus achieve both higher prediction accuracy and superior stability. The comparative evaluation metrics for each method are summarized in Table 4. The HO-BP-Hammerstein model achieved the best performance across all metrics: R2 = 0.976, MSE = 3.7 × 10−5, RMSE = 0.615%, and MAE = 0.469%. This fully validates the model’s comprehensive advantages in both accuracy and stability.
To verify the effectiveness of the HO-BP-Hammerstein model in predicting SOC under different operating conditions, SOC prediction experiments were also conducted under DST conditions (0 °C, 25 °C, 45 °C) with an initial SOC of 80%. The prediction results of the HO-BP-Hammerstein model at different temperatures show that its SOC prediction curves agree most closely with the measured values in Figure 11a–c. The simulation results demonstrate that, compared with the reference methods, the HO-BP-Hammerstein model adapts well to different operating conditions and maintains higher stability in battery SOC prediction. As shown in Table 5, at 0 °C the HO-BP-Hammerstein model achieves an R2 of 0.986, with MAE and RMSE of 0.368% and 0.485%, respectively, whereas the GWO-BP model achieves an R2 of 0.941, with MAE and RMSE of 0.83% and 1.001%; the proposed model therefore reduces errors significantly. These results further demonstrate the applicability and high accuracy of the HO-BP-Hammerstein model under varying operating conditions, highlighting its value for engineering practice.
According to Table 6, the training time of HO-BP-Hammerstein is longer than that of PSO-BP and GWO-BP because the HO algorithm involves more complex population-behavior simulation and fitness evaluation. During the deployment phase, however, the single-inference time of all three methods is negligible, since the trained model requires only one simple forward pass per prediction. Although model training (an offline step) takes longer, this does not affect the online near-real-time estimation performance of the vehicle BMS. Therefore, the HO-BP-Hammerstein model not only significantly improves prediction accuracy but is also engineering-feasible for near-real-time, high-precision SOC estimation on actual BMS hardware.

5.4. Real Vehicle Battery Prediction Simulation

In practical applications, accurate SOC prediction is crucial, yet its accuracy is affected by many factors. This section therefore conducts experimental validation using battery data collected from two different electric vehicle brands. Both vehicles use 18650-type ternary lithium-ion battery systems, charged under fast-charging and slow-charging modes, respectively. Through characteristic analysis of the battery data combined with experimental simulation, the generalization capability of the HO-BP-Hammerstein model is verified.
To ensure the input quality and training stability of the model, we have systematically preprocessed the raw data. The specific steps and methods are as follows:
Due to the high sampling frequency, the original charging data contain significant redundancy: multiple repeated or slightly differing single-cell voltage records correspond to the same SOC value. We first deduplicated the data to eliminate identical records. We then handled outliers using a combination of physical constraints and statistical analysis: based on the normal operating voltage range of lithium-ion batteries (2.5 V–4.2 V), physically impossible values outside this range were removed, while abnormal voltage/current values in the historical operating data were smoothly filled by cubic spline interpolation, restoring the continuity of the data and avoiding the sequence breaks or information loss that direct deletion would cause. After cleaning, 140,866 valid charging data points were obtained, providing a reliable data foundation for subsequent modeling. To further improve training efficiency and numerical stability, all feature data were linearly normalized to the [−1, 1] interval. This range matches the output domain of the hyperbolic tangent activation function commonly used in neural networks; it unifies the scales of differently scaled features, accelerates model convergence, and provides a smoother search space for the subsequent HO optimization. The normalization formula is:
data_input = 2 × (data − data_min) / (data_max − data_min) − 1
where data_input denotes the normalized value, data the raw value, and data_max and data_min the maximum and minimum of the raw data. All processed charge data therefore lie within [−1, 1].
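The cleaning and normalization described above can be sketched as follows. This is an illustrative NumPy/SciPy implementation under the stated assumptions (2.5–4.2 V physical limits, cubic-spline filling of outlier samples, [−1, 1] scaling); deduplication is omitted for brevity, and this is not the authors’ exact code:

```python
import numpy as np
from scipy.interpolate import CubicSpline

V_MIN, V_MAX = 2.5, 4.2  # normal operating voltage range quoted in the text

def fill_voltage_outliers(voltage):
    """Replace physically impossible voltage samples by cubic-spline
    interpolation over the remaining valid samples, keeping the sequence
    continuous instead of deleting rows."""
    voltage = np.asarray(voltage, dtype=float)
    bad = (voltage < V_MIN) | (voltage > V_MAX)
    good_idx = np.flatnonzero(~bad)
    spline = CubicSpline(good_idx, voltage[good_idx])
    filled = voltage.copy()
    filled[bad] = spline(np.flatnonzero(bad))
    return filled

def normalize_pm1(data):
    """Linear map of one feature column to [-1, 1], matching the formula above."""
    data = np.asarray(data, dtype=float)
    return 2.0 * (data - data.min()) / (data.max() - data.min()) - 1.0
```

Interpolating rather than dropping the flagged samples is what preserves the continuity of the charging sequence that the dynamic (Hammerstein) part of the model relies on.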
Algorithm parameters: hippopotamus population size 50, maximum iteration count 50, network iteration count 100, and convergence threshold 1 × 10−6. Of the experimental data, 80% is used for model training and the remaining 20% is reserved for validation, ensuring the model maintains good predictive accuracy under real-world conditions.
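Under these settings, the data split and optimizer configuration might look like the sketch below; the shuffled split and the random seed are assumptions, as the paper does not state how the 80/20 partition is drawn:

```python
import numpy as np

# Hyperparameters quoted in the text
POP_SIZE   = 50      # hippopotamus population size
MAX_ITER   = 50      # maximum HO iterations
NET_EPOCHS = 100     # BP network iteration count
TOL        = 1e-6    # convergence threshold

def split_train_val(X, y, train_ratio=0.8, seed=0):
    """Shuffled 80/20 split into training and validation sets (assumed scheme)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_ratio * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```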
The model prediction results, shown in Figure 12 and Figure 13, demonstrate excellent predictive accuracy. Under these conditions, the HO-BP-Hammerstein model achieved R2 = 0.971918, MAE = 0.081149, MSE = 0.010592, and RMSE = 0.10292. The experimental results demonstrate the model’s strong generalization capability on new data, highlighting its applicability and stability.
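The four reported metrics follow their standard definitions; a small helper such as the one below (illustrative, not the authors’ code) reproduces them from a pair of true and predicted SOC sequences:

```python
import numpy as np

def soc_metrics(y_true, y_pred):
    """R^2, MAE, MSE and RMSE as reported in Tables 4-6 (standard definitions)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    ss_res = float(np.sum(err ** 2))                      # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))  # total sum of squares
    return {
        "R2":   1.0 - ss_res / ss_tot,
        "MAE":  float(np.mean(np.abs(err))),
        "MSE":  mse,
        "RMSE": float(np.sqrt(mse)),
    }
```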
In the HO-BP-Hammerstein model, the core mechanism of SOC prediction lies in organically integrating the nonlinear dynamic modeling capability of the Hammerstein system with the parameter adaptation feature of the BP neural network through a hybrid optimization strategy. Simultaneously, leveraging the global search advantage of the HO algorithm effectively avoids the pitfall of traditional gradient descent methods, which are prone to getting stuck in local minima. The model is implemented through a three-tiered structure: the Hammerstein module consists of a static nonlinear block cascaded with a dynamic linear block; the BP neural network serves as the parameter identifier, with its weight matrix and bias vector forming the optimization variables for the HO algorithm; and the HO algorithm efficiently searches the global parameter space to locate the optimal network parameter combination that minimizes prediction error. Theoretically, the effective separation of static nonlinear and dynamic linear parameters within the Hammerstein model enables gradient information calculation for the BP network through backpropagation of errors. Combined with the adaptive search mechanism of the HO algorithm, this achieves optimal parameter selection. The error norm exhibits exponential decay with iteration, ultimately confining SOC prediction errors strictly within engineering accuracy requirements.
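The cascade described above — a static nonlinear block (the BP network) feeding a dynamic linear block — can be illustrated with the following minimal sketch. The one-hidden-layer tanh network and the ARX form of the linear block are simplifying assumptions; in the full method, the flattened network parameters form the candidate vector that each hippopotamus in the HO population encodes:

```python
import numpy as np

class HammersteinSOC:
    """Minimal Hammerstein cascade: v(k) = f(u(k)) via a tanh network, then
    y(k) = sum_i a_i * y(k-1-i) + sum_j b_j * v(k-j)."""

    def __init__(self, a, b, W1, b1, w2, b2):
        self.a = np.asarray(a, dtype=float)   # AR coefficients of the linear block
        self.b = np.asarray(b, dtype=float)   # input coefficients of the linear block
        self.W1, self.b1, self.w2, self.b2 = W1, b1, w2, b2  # BP network parameters

    def static_nl(self, u):
        """Static nonlinear block: one-hidden-layer tanh network f(u)."""
        h = np.tanh(self.W1 @ np.atleast_1d(u) + self.b1)
        return float(h @ self.w2 + self.b2)

    def predict(self, u_seq):
        """Run the cascade over an input sequence (e.g. measured current/voltage)."""
        v = np.array([self.static_nl(u) for u in u_seq])  # intermediate signal
        y = np.zeros(len(v))
        for k in range(len(v)):
            ar = sum(self.a[i] * y[k - 1 - i]
                     for i in range(len(self.a)) if k - 1 - i >= 0)
            ma = sum(self.b[j] * v[k - j]
                     for j in range(len(self.b)) if k - j >= 0)
            y[k] = ar + ma
        return y
```

An HO-style optimizer would flatten (W1, b1, w2, b2) into one candidate vector per hippopotamus and rank candidates by the prediction error of `predict` on the training sequence, which is the global search step the text describes.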

6. Conclusions

This paper proposes a Hammerstein model based on a BP neural network optimized by the Hippopotamus Optimization algorithm (HO-BP-Hammerstein), achieving high-precision prediction of lithium-ion battery state of charge. By globally optimizing the neural network’s weight matrices and bias parameters via the HO algorithm, the model’s nonlinear fitting capability is significantly enhanced. In multi-condition testing under FUDS and DST protocols across the 0 °C to 45 °C temperature range, the model achieved a mean absolute error of 0.368%, a 46.2% reduction compared to the GWO-BP method, demonstrating excellent temperature adaptability and prediction accuracy. Practical engineering validation confirms the model’s effective application in hybrid electric vehicle battery management systems, enabling reliable SOC prediction for real electric vehicles and providing critical data support for battery thermal-runaway early warning. Although this study makes clear progress in lithium-ion battery SOC prediction, limitations remain: the model training process incurs higher computational cost, and the dataset size and diversity of operating conditions used for validation are still limited. Future work should focus on concrete, feasible paths, in particular on fusing the multi-source heterogeneous data generated in real vehicle operation. Specifically, the system needs to integrate multidimensional time-series data collected synchronously under different charging strategies (such as fast- and slow-charging modes), including voltage curves, temperature distributions, cumulative cycle counts, and current change rates, all of which significantly affect the battery’s dynamic characteristics and aging state.
To effectively extract key information from such complex heterogeneous data, we recommend a technical path that uses an attention mechanism to extract key features and dynamically weight the different data sources, achieving adaptive co-optimization of model structure and parameters. This would enable a higher-precision next-generation battery state prediction framework and promote the transfer of these research results to more complex real-vehicle environments.

Author Contributions

Conceptualization, L.Z. and B.Y.; investigation, L.L. and S.C.; resources, L.Z., B.Y. and L.L.; writing—original draft, L.Z., B.Y., L.L., S.C., H.L. and W.W.; writing—review and editing, L.Z., B.Y., L.L., S.C. and H.L.; supervision, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Major Science and Technology Special Project of Jilin Province under Grant 20240204001SF.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, H.; Wu, W.; Cui, L.; Zhou, G.; Yang, Y.; Zhang, Q. State of charge prediction for lithium-ion batteries based on multi-process scale encoding and adaptive graph convolution. J. Energy Storage 2025, 113, 115482.
  2. Shi, H.; Wang, S.; Liang, J.; Takyi-Aninakwa, P.; Yang, X.; Fernandez, C.; Wang, L. Multi-Time Scale Identification of Key Kinetic Processes for Lithium-Ion Batteries Considering Variable Characteristic Frequency. J. Energy Chem. 2023, 82, 521–536.
  3. Zhang, L.; Zhang, J.; Gao, T.; Lyu, L.; Wang, L.; Shi, W.; Cai, G. Improved LSTM based state of health estimation using random segments of the charging curves for lithium-ion batteries. J. Energy Storage 2023, 74, 109370.
  4. Dong, J.; Wang, R.; Wang, H.; Hua, S.; Geng, G.; Jiang, Q. Adaptive SOC Estimation of Grid-Level BESS for Multiple Operational Scenarios. J. Energy Storage 2025, 132, 117776.
  5. He, Y.; Zhu, D.; Wang, Y.; Sun, J.; Mo, L. Non-uniformly sampled data for data-driven analysis of nonlinear singularly perturbed hybrid systems. Nonlinear Dyn. 2025, 113, 14165–14179.
  6. Meng, J.; Luo, G.; Ricco, M.; Swierczynski, M.; Stroe, D.I.; Teodorescu, R. Overview of lithium-ion battery modeling methods for state-of-charge estimation in electrical vehicles. Appl. Sci. 2018, 8, 659.
  7. Fang, X.; Xie, L.; Dimarogonas, D.V. Simultaneous distributed localization and formation tracking control via matrix-weighted position constraints. Automatica 2025, 175, 112188.
  8. Liu, L.; Yan, Z.; Xu, B.; Zhang, P.; Cai, C.; Yang, H. A highly scalable integrated voltage equalizer based on parallel-transformers for high-voltage energy storage systems. IEEE Trans. Ind. Electron. 2023, 71, 595–603.
  9. Liu, Z.; Tan, Z.; Wang, Y. A Mconv TCN-Informer deep learning model for SOC prediction of lithium-ion batteries. J. Energy Storage 2025, 129, 117092.
  10. Qu, W.; Chen, G.; Zhang, T. An adaptive noise reduction approach for remaining useful life prediction of lithium-ion batteries. Energies 2022, 15, 7422.
  11. Liu, Z.; Chen, J.; Fan, Q.; Wang, D. A key-term separation based least square method for Hammerstein SOC estimation model. Sustain. Energy Grids Netw. 2023, 35, 101089.
  12. Li, F.; Zheng, T.; Cao, Q. Modeling and identification for practical nonlinear process using neural fuzzy network–based Hammerstein system. Trans. Inst. Meas. Control 2023, 45, 2091–2102.
  13. Li, F.; Yang, Y.; Xia, Y. Identification for nonlinear systems modelled by deep long short-term memory networks based Wiener model. Mech. Syst. Signal Process. 2024, 220, 111631.
  14. Xiong, R.; Zhang, Y.; He, H.; Zhou, X.; Pecht, M.G. A double-scale, particle-filtering, energy state prediction algorithm for lithium-ion batteries. IEEE Trans. Ind. Electron. 2018, 65, 1526–1538.
  15. Liu, C.; Ling, J.; Kou, L. Comparison of Performance between GA-BP Neural Network and BP Neural Network. China Health Stat. 2013, 30, 173–176.
  16. Li, Z.; Wang, J.; Guo, C. A new method of BP network optimized based on particle swarm optimization and simulation test. Acta Electron. Sin. 2008, 36, 2224–2228.
  17. Zhang, L.; Gao, T.; Cai, G.; Hai, K.L. Research on electric vehicle charging safety warning model based on back propagation neural network optimized by improved gray wolf algorithm. J. Energy Storage 2022, 49, 104092.
  18. Zhao, X.; Xu, L. Improved Firefly Algorithm Optimization for Health Status Estimation of Power Lithium ion Batteries Using Backpropagation Neural Network. Energy Storage Sci. Technol. 2023, 12, 934–940.
  19. Wei, X.; She, S.; Rong, W.; Liu, A. Health status prediction of lithium batteries based on cuckoo algorithm optimized BP neural network. Comput. Meas. Control 2021, 29, 65–69.
  20. Ahmad, A.; Senjyu, T.; Ahmed, S.; Uehara, A.; Rahman, M.S.A.; Talaat, M.; Song, D.; Elkholy, M. Resilience-oriented optimization of a green hydrogen backup system with an integrated electric vehicle charging network using the hippopotamus optimization algorithm in semi-urban areas of Pakistan. Energy Convers. Manag. 2026, 350, 120993.
  21. Saber, A.; De Luca, C.; Pourzangbar, A.; Tondelli, S.; Bell, M.L. Heat wave risk assessment in Bologna using Spatio-temporal artificial intelligence: Leveraging LSTM enhanced by the hippopotamus optimization algorithm. Sustain. Cities Soc. 2025, 131, 106671.
  22. Geetha, N.; Kavitha, D.; Kumaresan, D. Influence of Electrode Parameters on the Performance Behavior of Lithium-Ion Battery. J. Electrochem. Energy Convers. Storage 2023, 20, 011013.
  23. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032.
  24. Wang, D. Key-term separation based hierarchical gradient approach for NN based Hammerstein battery model. Appl. Math. Lett. 2024, 157, 10920.
  25. Hou, J.; Wang, H.; Su, H.; Chen, F.; Liu, J. A bias-correction modeling method of Hammerstein–Wiener systems with polynomial nonlinearities using noisy measurements. Mech. Syst. Signal Process. 2024, 213, 111329.
  26. Hou, J. Parsimonious model based consistent subspace identification of Hammerstein systems under periodic disturbances. Int. J. Control Autom. Syst. 2024, 22, 61–71.
  27. Ding, F.; Xu, L.; Zhang, X.; Ma, H. Hierarchical gradient-and least-squares-based iterative estimation algorithms for input-nonlinear output-error systems from measurement information by using the over-parameterization. Int. J. Robust Nonlinear Control 2024, 34, 1120–1147.
  28. Ding, F.; Xu, L.; Zhang, X.; Zhou, Y.; Luan, X. Recursive identification methods for general stochastic systems with colored noises by using the hierarchical identification principle and the filtering identification idea. Annu. Rev. Control 2024, 57, 100942.
  29. Chen, J.; Huang, B.; Gan, M.; Chen, C.P. A novel reduced-order algorithm for rational models based on Arnoldi process and Krylov subspace. Automatica 2021, 129, 109663.
  30. Ding, F.; Xu, L.; Zhang, X.; Zhou, Y. Filtered auxiliary model recursive generalized extended parameter estimation methods for Box–Jenkins systems by means of the filtering identification idea. Int. J. Robust Nonlinear Control 2023, 33, 5510–5535.
  31. Ding, F.; Shao, X.; Xu, L.; Zhang, X.; Xu, H.; Zhou, Y. Filtered generalized iterative parameter identification for equation-error autoregressive models based on the filtering identification idea. Int. J. Adapt. Control Signal Process. 2024, 38, 1363–1385.
  32. Dasgupta, R. Smooth saturation function-based position and attitude tracking of a quad-rotorcraft avoiding singularity. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 692–703.
  33. Dai, H.; MacBeth, C. Effects of learning parameters on learning procedure and performance of a BPNN. Neural Netw. 1997, 10, 1505–1521.
  34. Bhat, A.Y.; Qayoum, A. Viscosity of CuO nanofluids: Experimental investigation and modelling with FFBP-ANN. Thermochim. Acta 2022, 714, 179267.
  35. Seo, M.; Goh, T.; Park, M.; Koo, G.; Kim, S.W. Detection of internal short circuit in lithium ion battery using model-based switching model method. Energies 2017, 10, 76.
  36. Dey, S.; Biron, Z.A.; Tatipamula, S.; Das, N.; Mohon, S.; Ayalew, B.; Pisu, P. Model-based real-time thermal fault diagnosis of Lithium-ion batteries. Control Eng. Pract. 2016, 56, 37–48.
  37. Lu, X.; Qiu, J.; Lei, G.; Zhu, J. Degradation mode knowledge transfer method for LFP batteries. IEEE Trans. Transp. Electrif. 2022, 9, 1142–1152.
Figure 1. Hammerstein model based on HO-BP.
Figure 2. Structure of BP and its neurons.
Figure 3. Analysis of F4 Standard Function.
Figure 4. Analysis of F7 Standard Function.
Figure 5. Analysis of F8 Standard Function.
Figure 6. Analysis of F13 Standard Function.
Figure 7. Flowchart of the Hammerstein Model Based on HO-BP.
Figure 8. Detailed parameters of FUDS operating conditions.
Figure 9. Detailed parameters of DST operating conditions.
Figure 10. Validity verification results of FUDS operating conditions at different temperatures.
Figure 11. Validity verification results of DST operating conditions at different temperatures.
Figure 12. Comparison of real vehicle battery SOC predictions.
Figure 13. Error between test set prediction and real data.
Table 1. The impact of different α values on model performance.

| α | Network Scale and Complexity | Performance Impact in SOC Prediction Tasks |
|---|---|---|
| 1–3 | Few neurons; simple model structure. | Fast training, but limited fitting ability; may fail to capture complex nonlinear features of the dynamics, lowering prediction accuracy. |
| 4–6 | Moderate number of neurons; balanced complexity and expressiveness. | Good balance between generalization and fitting ability; reliable accuracy under most conventional operating conditions (e.g., 25 °C FUDS/DST), a robust default choice. |
| 7–10 | Many neurons; complex structure with strong representation ability. | Strong nonlinear fitting ability; learns the complex mappings of the battery system across a wide temperature range (0–45 °C) and dynamic loads, providing a structural basis for the high-precision prediction reported in this article. |
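The α values in Table 1 are consistent with the common empirical sizing rule for BP hidden layers, n_h = sqrt(n_in + n_out) + α. Note that this exact formula is our assumption based on the standard convention for that notation, since the table only lists α ranges; the input features named in the comment are likewise illustrative:

```python
import math

def hidden_nodes(n_in, n_out, alpha):
    """Empirical hidden-layer size: n_h = round(sqrt(n_in + n_out)) + alpha
    (assumed convention; alpha is the tuning integer varied in Table 1)."""
    return round(math.sqrt(n_in + n_out)) + alpha

# e.g. 3 inputs (say voltage, current, temperature - assumed) and 1 output (SOC):
sizes = [hidden_nodes(3, 1, a) for a in (1, 5, 10)]
```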
Table 2. Comparison of Intelligent Algorithm Performance.

| Algorithm | F4 | F7 | F8 | F13 |
|---|---|---|---|---|
| HO | 6.1 × 10−261 | 4.6 × 10−5 | −2.1 × 104 | 1.1 × 10−4 |
| GWO | 4.3 × 10−15 | 2.1 × 10−3 | −5.6 × 103 | 0.286 |
| PSO | 4.7 × 10−51 | 6.1 × 10−2 | −3.1 × 103 | 0.064 |
Table 3. Battery parameter information.

| Battery Specification | Value |
|---|---|
| Battery Type | INR18650-20R |
| Rated Capacity | 2000 mAh |
| Battery Materials | LiNiMnCo/Graphite |
| Saturation Voltage | 4.2 V |
| Cut-off Voltage | 2.5 V |
| Weight | 45 g |
| Diameter | 18.33 mm |
| Length | 64.85 mm |
Table 4. Comparison of evaluation indicators for FUDS at different temperatures.

| Temperature | Method | R2 | MSE | RMSE | MAE |
|---|---|---|---|---|---|
| 0 °C | HO-BP-Hammerstein | 0.976 | 3.7 × 10−5 | 0.615% | 0.469% |
| 0 °C | GWO-BP | 0.947 | 8.4 × 10−5 | 0.916% | 0.654% |
| 0 °C | PSO-BP | 0.898 | 1.6 × 10−4 | 1.269% | 0.983% |
| 25 °C | HO-BP-Hammerstein | 0.938 | 1.2 × 10−4 | 1.121% | 0.481% |
| 25 °C | GWO-BP | 0.909 | 1.8 × 10−4 | 1.349% | 0.623% |
| 25 °C | PSO-BP | 0.909 | 1.8 × 10−4 | 1.353% | 0.789% |
| 45 °C | HO-BP-Hammerstein | 0.963 | 7.3 × 10−5 | 0.856% | 0.739% |
| 45 °C | GWO-BP | 0.921 | 1.6 × 10−4 | 1.263% | 1.105% |
| 45 °C | PSO-BP | 0.924 | 1.5 × 10−4 | 1.235% | 0.979% |
Table 5. Comparison of evaluation indicators for DST at different temperatures.

| Temperature | Method | R2 | MSE | RMSE | MAE |
|---|---|---|---|---|---|
| 0 °C | HO-BP-Hammerstein | 0.986 | 2.4 × 10−5 | 0.485% | 0.368% |
| 0 °C | GWO-BP | 0.941 | 1.0 × 10−4 | 1.001% | 0.83% |
| 0 °C | PSO-BP | 0.978 | 3.8 × 10−5 | 0.613% | 0.5% |
| 25 °C | HO-BP-Hammerstein | 0.968 | 6.7 × 10−5 | 0.816% | 0.569% |
| 25 °C | GWO-BP | 0.919 | 1.7 × 10−4 | 1.302% | 0.99% |
| 25 °C | PSO-BP | 0.925 | 1.6 × 10−4 | 1.248% | 0.497% |
| 45 °C | HO-BP-Hammerstein | 0.971 | 6.1 × 10−5 | 0.784% | 0.604% |
| 45 °C | GWO-BP | 0.961 | 8.2 × 10−5 | 0.905% | 0.771% |
| 45 °C | PSO-BP | 0.921 | 1.7 × 10−4 | 1.295% | 0.947% |
Table 6. Comparison of computational efficiency of different optimization algorithms.

| Method | Average Training Time (s) | Training Conditions |
|---|---|---|
| HO-BP-Hammerstein | 310 ± 15.3 | Iterations: 50 |
| GWO-BP | 178.47 ± 9.8 | Iterations: 30 |
| PSO-BP | 141.47 ± 10.5 | Iterations: 40 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Zhang, L.; Yang, B.; Lyu, L.; Che, S.; Li, H.; Wang, W. A Battery State-of-Charge Prediction Method Based on a Hammerstein Model Integrated with a Hippopotamus Optimization Algorithm and Neural Network. Electronics 2026, 15, 698. https://doi.org/10.3390/electronics15030698