Article

Linear Continuous-Time Regression and Dequantizer for Lithium-Ion Battery Cells with Compromised Measurement Quality

by Zoltan Mark Pinter 1,*, Mattia Marinelli 1, M. Scott Trimboli 2 and Gregory L. Plett 2

1 Department of Wind and Energy Systems, Technical University of Denmark, Frederiksborgvej 399, 4000 Roskilde, Denmark
2 Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Pkwy, Colorado Springs, CO 80918, USA
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2025, 16(3), 116; https://doi.org/10.3390/wevj16030116
Submission received: 11 December 2024 / Revised: 29 January 2025 / Accepted: 8 February 2025 / Published: 20 February 2025

Abstract

Battery parameter identification is a key challenge for battery management systems, as parameterizing lithium-ion batteries is resource-intensive. Equivalent circuit models (ECMs) provide an alternative, but their parameters change with physical conditions and battery age, necessitating regular parameter identification. This paper presents two modular algorithms to improve data quality and enable fast, robust parameter identification. First, the dequantizer algorithm restores the time series generating the noisy, quantized data using the inverse normal distribution function. Then, the Linear Continuous-Time Regression (LCTR) algorithm extracts exponential parameters from first-order or overdamped second-order systems, deducing ECM parameters and guaranteeing optimality with respect to root-mean-square error (RMSE). Because the parameters are continuous-time, they have low sensitivity to measurement noise. Sensitivity analyses confirm the algorithms' suitability for battery management across various levels of Gaussian measurement noise, accuracy, time constants, and state of charge (SoC), using evaluation metrics such as RMSE (<2 mV), relative time constant errors, and steady-state error. If the rounding is not extremely coarse, the steady-state is restored to within a fraction of a millivolt. While a slight overestimation of the lower time constant occurs for overdamped systems, the algorithms outperform the conventional benchmark for first-order systems. Their robustness is further validated in real-life applications, highlighting their potential to enhance commercial battery management systems.

1. Introduction

Battery systems play a crucial role in the shift towards green energy by storing and rapidly transferring electrical energy. While lithium-ion batteries offer high efficiency and performance, their sensitivity to various physical variables and battery chemistry parameters is a challenge. Understanding these nonlinear sensitivities is vital for both performance and safety [1]. Although physics-based bottom-up models can be employed, they are often costly, time-consuming, and energy-intensive for parameter identification [2]. Additionally, the parameters change over time with battery aging, rendering initial identifications less accurate.
The Thévenin equivalent, or equivalent circuit model (ECM), serves as a primary alternative to physics-based models [3] (see Figure 1). It captures input–output battery behavior to the extent the user requires and data quality allows. However, this is an imposed interpretation; hence, when the physical variables change, the ECM parameters change with them [4]. Including more variables, such as state of charge (SoC), temperature, frequency, current, hysteresis state, or pressure, results in the curse of dimensionality for parameter lookup tables [5]. This necessitates neglecting certain factors or resorting to online identification of parameters, which requires high-quality measurements of current and terminal voltage as inputs and outputs, respectively.
Parameter identification methods fall into two categories: offline and online algorithms. Offline algorithms process data in batches to find a posterior, while online algorithms incorporate earlier identifications into their prior for updates [6]; this makes the latter computationally more efficient. Identification algorithms can determine either time- or frequency-domain parameters. They include regression techniques such as linear regression, Gaussian processes, support vector machines, neural networks, and ensembles, which are considered machine learning methods due to their data-driven nature [7]. Input–output methods can identify transfer function, state-space, and polynomial model parameters. Such methods produce autoregressive models with exogenous input (ARX, or ARMAX if stochastic). Models can be both linear and nonlinear and are easily computed via online algorithms. For cases where nonlinearities can be separated from the linear model, Hammerstein–Wiener models can be applied [8].
Nonlinear identification methods are notoriously sensitive to initial parameter guesses. Furthermore, they are iterative as well as time- and memory-intensive. One commonly used linear method, the ARX model, is particularly sensitive to noise, leading to the overfitting of irrelevant patterns. To provide a robust alternative, this paper extends the linear continuous-time regression presented in [9,10] by fitting it to the battery parameter identification problem.
Challenges that all identification algorithms face are sensitivities to noise, quantization, sampling rate, and the window of the investigated data batch. Assessing lower-cost device feasibility requires understanding the trade-offs in data quality due to sensor limitations. High data quality requires better measurement devices, though this can be impractical or costly; alternatively, the quality can be improved by software solutions. The issue arises in various fields. In radar applications [11], for example, addressing spectrum corruption from angular quantization during blind phase searches involves low-pass filtering, enhancing phase noise tolerance. Analog-to-digital conversion [12] tackles sigma-delta quantization error with an optimization algorithm, showing polynomial decay in estimation error with increased sample numbers. Image reconstruction in image processing [13] utilizes generalized expectation maximization for noisy, quantized, and sparse measurements. Seismology [14] finds that noise and rounding burden the slope of the Gutenberg–Richter magnitude distribution, revealing biased probability distribution modeling. Compressed sensing studies such as [15,16] explore the impact of coarsely quantized, sparse, and noisy data, indicating the superiority of L1-regularized maximum likelihood over L1-regularized least squares. In ARMA or output error (OE) models, parameter bound estimates for quantized data are examined in [17], with such measurements enhancing the innovation signal in [18]. An unscented Kalman filter is adapted in [19] to handle rounding and Gaussian (normal) measurement noise, noting that a high standard deviation (denoted as std. or $\sigma$) has a whitening effect on the combined noise. The Monte Carlo simulation reveals potential sample misplacement due to Gaussian noise in the combined PDF. To achieve parameters with low sensitivity to measurement noise, this approach identifies continuous-time parameters rather than relying on the commonly used linear discrete-time regression.
In our application, the identification of step response functions is studied, where we note two main problems introduced by quantization: (1) the steady-state estimate is biased, and (2) the information about the transient is distorted by colored noise. Despite the code's microcontroller-friendly compactness, an offline algorithm is designed to confirm specific assumptions in the investigated data batch. These assumptions are as follows: (1) sufficient charge transfer at the start of the series for accurate ECM capacitor current estimation, (2) knowledge of the SoC at the end of charge transfer due to strong parameter dependency, and (3) a sufficiently long battery cell rest period to capture the parameters. The battery cell needs to rest during the identification period since a change in SoC introduces nonlinearities to the investigated time series, and the proposed algorithm assumes constant input. Given the intended accuracy of the application, parameter dependency on current and temperature is neglected.
Contribution. We desire to efficiently determine the ECM parameters of lithium-ion battery cells throughout their lifetime, despite white Gaussian sensor noise and quantization corrupting the terminal voltage measurements. This work proposes two combined, but modular algorithms to address these challenges and investigate their parameter sensitivities. The key contributions (depicted in Figure 1) are as follows:
  • A dequantizing algorithm that recovers information lost due to quantization or large noise magnitudes, hence restoring terminal voltage. It utilizes the inverse normal distribution function to reconstruct time series data from quantized measurements corrupted by Gaussian measurement noise.
  • The filtered data are processed by the novel Linear Continuous-Time Regression (LCTR) algorithm, capable of deducing gains, time constants, and bias for first-order or overdamped second-order systems.
  • Investigation of the combined algorithms’ sensitivity to quantization noise, measurement noise, steady-state, and time constants. Evaluation statistics include root-mean-square-error (RMSE), time-constant errors, and steady-state errors.
The proposed LCTR algorithm aims at providing a fast alternative to nonlinear algorithms, while also ensuring optimality with respect to RMSE. Additionally, we presume that continuous-time parameters have low sensitivity to measurement noise compared to linear regression with discrete-time models.
This paper is structured as follows: Section 2.1 gives an overview of the parameter identification algorithm, then Section 2.2 elaborates on the dequantization and Section 2.3 on the LCTR. The results are described in Section 3 and concluded in Section 4.

2. Methodology

2.1. Overview of the Identification Algorithm

The investigated time series (both simulated and deployed) is 3600 s in length. The battery cell (with capacity $Q = 100$ Ah) starts from rest at 85% SoC, is discharged at 30 A from 500 s to 1000 s, and then rests until 3600 s. Time is denoted by $t$ and is a discrete variable when used with an index ($t_k$). The sampling time is 1 s. There are five main steps for deducing the parameters of the lithium-ion cell, namely:
I. Finding switching times. Observations indicate that the dynamics behind the $R_0$ step reach steady-state within 1 ms. This means that $R_0$ is overestimated at the 1 s sample period, since after 1 s the voltages over the ECM RC branches are already on the rise. This creates a bias in the estimates of the ECM resistances. To address this, the first sample before a current jump of at least the 5 A current threshold is taken as the switching time. There is one switch-on and one switch-off in each time series.
II. Dequantizing. When applied, the dequantization algorithm operates on the time series data after the switch-off time and estimates the underlying data generating the measurement samples. The first estimate then becomes the first measurement. The last 100 samples are discarded as the receding sample window deteriorates the statistics.
III. Finding $R_0$. The difference between the first sample after switching, $v(t_\mathrm{off} + 1\,\mathrm{s})$, and the mean of the 10 samples before switching, $\bar{v}_{t_\mathrm{off}-10:t_\mathrm{off}}$, is divided by the instantaneous jump in the current $\bar{I}$ to find $R_0$. The current after switching is zero and is averaged over the 500 samples before the jump to reduce the effect of measurement outliers.
$$\hat{R}_0 = \frac{v(t_\mathrm{off} + 1\,\mathrm{s}) - \bar{v}_{t_\mathrm{off}-10:t_\mathrm{off}}}{\bar{I}_{t_\mathrm{off}-500:t_\mathrm{off}}}, \qquad \mathbf{v}_{t_\mathrm{off}-10:t_\mathrm{off}} = \begin{bmatrix} v_{t_\mathrm{off}-10\,\mathrm{s}} & \cdots & v_{t_\mathrm{off}} \end{bmatrix}^T, \qquad \mathbf{I}_{t_\mathrm{off}-500:t_\mathrm{off}} = \begin{bmatrix} I_{t_\mathrm{off}-500\,\mathrm{s}} & \cdots & I_{t_\mathrm{off}} \end{bmatrix}^T \tag{1}$$
IV. Linear Continuous-Time Regression. The algorithm is used to find the initial values $b_i$ of one or two exponentials, the coefficients $a_i$ of their exponents, and their bias. The index of an exponential is denoted by $i$. Since the regression has multiple objectives, it is corrupted by colored noise. Hence, the initial values are fixed by stretching them to fit between the first measurement after switching and the mean of the last 100 points of the dequantized time series, i.e.,
$$b_i' = \frac{v(t_\mathrm{off} + 1\,\mathrm{s}) - \bar{v}_{\mathrm{pf},\,t_{3400}:t_{3500}}}{\sum_i b_i}\, b_i, \quad i \in \{1,2\}, \qquad \bar{v}_{\mathrm{pf},\,t_{3400}:t_{3500}} = \mathrm{mean}\!\left( \begin{bmatrix} v_{t_{3400}} & \cdots & v_{t_{3500}} \end{bmatrix}^T \right). \tag{2}$$
V. Deducing RC branch parameters. The time constants are the reciprocals of the coefficients of the exponents. The currents through the resistances of the RC branches, $\hat{i}_{R_i, t_\mathrm{off}}$, are calculated from the differential equation of the RC branch, assuming that the exponents are exact and the current is constant before switch-off. Finally, the RC resistances are obtained from the voltage before switch-off and the current through the resistance,
$$\hat{\tau}_i = \frac{1}{a_i}, \tag{3}$$

$$\hat{R}_i = \frac{\hat{v}_{i, t_\mathrm{off}}}{\hat{i}_{R_i, t_\mathrm{off}}}, \quad \text{where} \quad \hat{v}_{i, t_\mathrm{off}} = b_i \quad \text{and} \quad \hat{i}_{R_i, t_\mathrm{off}} = \left( 1 - e^{-(t_\mathrm{off} - t_\mathrm{on}) a_i} \right) \bar{I}_{t_\mathrm{off}-100:t_\mathrm{off}}. \tag{4}$$
Equations (3) and (4) find the time constants $\tau_i$ and the resistances $R_i$ within the Thévenin equivalent in Figure 1, whose parameters are related to the response time and the magnitude of the diffusion voltage, respectively. These parameters will be identified by the LCTR algorithm. A brief numeric check of these steps follows.
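As a hypothetical numeric check of Equations (1), (3), and (4), using the nominal values of Table 1 and the 30 A, 500 s discharge of this scenario (the voltage values are illustrative, chosen to be consistent with those parameters): an instantaneous voltage recovery of 18.9 mV at switch-off gives
$$\hat{R}_0 = \frac{18.9\,\mathrm{mV}}{30\,\mathrm{A}} = 0.63\,\mathrm{m\Omega},$$
while $a_1 = 1/22\,\mathrm{s}^{-1}$ gives $\hat{\tau}_1 = 22$ s, and since $1 - e^{-500/22} \approx 1$, the first branch current has essentially converged to the full 30 A, so a fitted $b_1 = 14.1$ mV would imply
$$\hat{R}_1 = \frac{14.1\,\mathrm{mV}}{30\,\mathrm{A}} = 0.47\,\mathrm{m\Omega}.$$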

2.2. Dequantizing Algorithm to Alleviate Corrupted Measurements

The dequantizing algorithm aims to recover information lost due to Gaussian measurement noise and quantization. It addresses the rise in the current step response with smoothing, and the slower dynamics by leveraging the inverse normal distribution function and the Gaussian assumption. Figure 2a shows how the mean is encoded in the samples. Estimation quality depends on the sampling rate, with limited benefits beyond a certain level. This sampling rate is typically 1 Hz due to BMS limitations; however, increasing it to 100 Hz–1 kHz can provide better $R_0$ estimations [20]. The dequantization algorithm assumes that a sufficient amount of data is collected at each time instant to estimate the mean, and that the electrochemical dynamics are negligible within the investigated time window.
The ratio of samples from the quantization regions is fed to the inverse normal distribution function to determine the mean value from which the data are generated. Therefore, the data samples and the threshold in the center or right of Figure 2a are utilized to find the distribution on the left. The average estimation error in Figure 2b depends on the location of the mean between $lR - R/2$ (denoted by 0) and $lR$ (denoted by 1), where $R$ is the accuracy bit and $l \in \mathbb{Z}$. The expected value of the error is zero at $lR$. It rises linearly in the vicinity, since the number of samples at another value of $l$ is negligible. Close to $lR - R/2$, there are more samples at different $l$ values to describe the distribution. As described in [19], the increasing standard deviation whitens the estimation error. This latter effect is stronger for the green curves with a larger standard deviation.
The estimated mean is calculated at each sampling point based on the sample data batch $\mathbf{y}_s(t)$. The time window $w$ is
$$w = \max\!\left( 200,\; 200 \cdot \max\!\left( \frac{2\,\mathrm{mV}}{\sigma}, \frac{\sigma}{2\,\mathrm{mV}} \right)^{\!2} \right), \qquad \mathbf{y}_s(t) = \begin{bmatrix} y(t_{k-w}) & \cdots & y(t_{k+w}) \end{bmatrix}^T, \quad \text{hence } \|\mathbf{y}_s(t)\| = 2w + 1. \tag{5}$$
The minimum window size is 200 s; it increases in proportion to the square of the lack or excess of noise power, since either the estimation error is not whitened sufficiently or the signal-to-noise ratio (SNR) is too low. The length of the batch is $\|\mathbf{y}_s(t)\|$. The sampling occurs within a symmetric window around the time instant in question, meaning that the window recedes at the edges, decreasing both the sample length and the estimation quality.
The range of the sample $r(t)$, counted in accuracy bits $R$, is found via Equation (6). Too large an $R$ leads to $r(t) = 0$; hence, the quantized data are the sole source of information, giving $\mu(t)$ as the mean. For nonzero and even/odd $r(t)$, Figure 2a depicts the mechanism for finding the threshold for data separation. The thresholds $T_i(t)$ indicate where data are naturally separated by the rounding (red); otherwise, the estimation becomes biased. To alleviate the sampling bias, the inverse normal distribution is evaluated symmetrically from two opposite points of the distribution function.
$$r(t) = \frac{\max \mathbf{y}_s(t) - \min \mathbf{y}_s(t)}{R} \tag{6}$$

$$\begin{cases} \mu(t) = \bar{y}_s(t), & \text{if } r(t) = 0 \\ T_1(t) = \dfrac{\min \mathbf{y}_s(t) + \max \mathbf{y}_s(t)}{2} - \dfrac{R}{2}, \quad T_2(t) = \dfrac{\min \mathbf{y}_s(t) + \max \mathbf{y}_s(t)}{2} + \dfrac{R}{2}, & \text{if } r(t) \in \mathbb{Z}^+ \text{ is even} \\ T_1(t) = T_2(t) = \dfrac{\min \mathbf{y}_s(t) + \max \mathbf{y}_s(t)}{2}, & \text{if } r(t) \in \mathbb{Z}^+ \text{ is odd} \end{cases}$$
The sample realization $\mathbf{y}_s(t)$ of the random variable $Y(t)$ is classified according to how large a proportion of the data points is less than or equal to the threshold $T_i(t)$. This estimated probability $\widehat{\Pr}[Y(t) \le T_i(t)]$ is expressed in Equation (7). The estimated deviation from the threshold value, $\hat{\tilde{y}}_i(t)$, maximizes the likelihood function $L(\tilde{y}(t) \mid \mathbf{y}_s(t), i)$.
$$\widehat{\Pr}\!\left[ Y(t) \le T_i(t) \right] = \frac{\sum_{j=k-w}^{k+w} q(t_j)}{\|\mathbf{y}_s(t)\|}, \quad i \in \{1,2\}, \qquad q(t_j) = \begin{cases} 1, & \text{if } y(t_j) \le T_i(t) \\ 0, & \text{if } y(t_j) > T_i(t) \end{cases}, \qquad \hat{\tilde{y}}_i(t) = N^{-1}\!\left( \widehat{\Pr}\!\left[ Y(t) \le T_i(t) \right] \;\middle|\; \mu = 0, \sigma \right) \tag{7}$$
The mean of the sample can then be calculated from the threshold and the deviation as,
$$\hat{\mu}_i(t) = T_i(t) - \hat{\tilde{y}}_i(t), \quad i \in \{1,2\}, \qquad \hat{\mu}(t) = \frac{\hat{\mu}_1(t) + \hat{\mu}_2(t)}{2}, \quad \text{if } r(t) \in \mathbb{Z}^+. \tag{8}$$
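As a brief hypothetical example of Equations (7) and (8): if $\sigma = 2$ mV and 70% of the windowed samples fall at or below the threshold, then $\hat{\tilde{y}}_i(t) = N^{-1}(0.7 \mid 0, 2\,\mathrm{mV}) \approx 1.05$ mV, so the mean estimate sits about 1.05 mV below the threshold,
$$\hat{\mu}_i(t) = T_i(t) - 1.05\,\mathrm{mV}.$$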
At the beginning of the time series, dynamics affect the sample. During this transient region, better results can be obtained if the following smoothing function is used:
$$y_\mathrm{smooth}(t) = \frac{\sum_{j=k-w/4}^{k+w/4} y(t_j)}{\frac{w}{2} + 1}. \tag{9}$$
At the edges, smoothing is conducted with a receding window, similar to the statistics of the inverse normal distribution. The weighting function $W_{dq}(t)$ in Equation (10) is used to merge the results of Equations (8) and (9). It consists of a linearly spaced vector with its elements raised to the power $c$, where $c$ is proportional to the noise power; larger noise thus shifts the weight toward the statistical mean estimate. Finally, the dequantized signal $\mathbf{y}_\mathrm{dq}$ is obtained from the weighted samples $y_\mathrm{dq}(t)$.
$$\mathbf{W}_{dq} = \frac{\left( \left[ \|\mathbf{y}_s(t)\| : -1 : 1 \right]^T \right)^c}{\|\mathbf{y}_s(t)\|^c}, \quad \text{where } c = \frac{4\sigma}{2\,\mathrm{mV}}, \qquad y_\mathrm{dq}(t) = W_{dq}(t)\, y_\mathrm{smooth}(t) + \left( 1 - W_{dq}(t) \right) \hat{\mu}(t). \tag{10}$$

2.3. Linear Continuous-Time Regression

The challenge with fitting a sum of exponentials to measured data is that it is inherently a nonlinear regression problem. The literature suggests either nonlinear optimization, or considers special cases and converts the problem to an approximate linear regression problem to be solved in closed form. However, most of these methods do not generalize easily to ECM parameter identification. The method below (introduced in [10]) provides a solution to this challenge.
The problem. We desire to take a set of time points $\{t_1, \ldots, t_N\}$ together with a set of data points $\{y_1, \ldots, y_N\}$ and fit them to the model in Equation (11). The parameter $m$ denotes the number of exponential terms in the identification problem. The problem requires that we find the constants $b = \{b_0, b_1, \ldots, b_m\}$ and $a = \{a_1, a_2, \ldots, a_m\}$ so as to minimize the root-mean-square error (RMSE) defined in Equation (12).
$$\hat{y}(t) = b_0 + \sum_{i=1}^{m} b_i e^{-a_i t} = b_0 + b_1 e^{-a_1 t} + b_2 e^{-a_2 t} + \cdots \tag{11}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y(t_i) - \hat{y}(t_i) \right)^2}. \tag{12}$$

2.3.1. Single Exponential with and Without Bias

In Equation (13), we begin with the simplest exponential form that also allows a steady-state bias. Its time derivative is calculated in Equation (14). Using Equations (13) and (14), we then find a form without the exponential function appearing explicitly, as stated in Equation (15).
$$\hat{y}(t) = b_0 + b_1 e^{-a_1 t}. \tag{13}$$

$$\dot{\hat{y}}(t) = -b_1 a_1 e^{-a_1 t}. \tag{14}$$

$$\dot{\hat{y}}(t) = -a_1 \hat{y}(t) + b_0 a_1. \tag{15}$$
The initial value of Equation (13) is
$$\hat{y}(0) = b_0 + b_1. \tag{16}$$
The next step is to integrate both sides of the expression,
$$\int_0^t \dot{\hat{y}}(\tau)\,d\tau = -a_1 \int_0^t \hat{y}(\tau)\,d\tau + \int_0^t b_0 a_1\,d\tau \quad\Longrightarrow\quad \hat{y}(t) = b_0 + b_1 + b_0 a_1 t - a_1 \int_0^t \hat{y}(\tau)\,d\tau. \tag{17}$$
The rightmost term is approximated in two ways: (i) by replacing $\hat{y}(t)$ with the measured data $y(t)$, and (ii) by replacing the integration with a trapezoidal approximation. We can then write the last line of Equation (17) as Equation (18), where $I_1(t) \approx \int_0^t y(\tau)\,d\tau$ is the approximate integral of the data between time 0 and $t$.
$$\hat{y}(t) = b_0 + b_1 + b_0 a_1 t - a_1 I_1(t), \quad \text{rewritten as} \quad \hat{y}(t) = A + Bt - C I_1(t). \tag{18}$$
This form is now linear in the parameters, and the values of $A$, $B$, and $C$ can be found by standard linear regression,
$$\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ \vdots \end{bmatrix}}_{\mathbf{Y}} = \underbrace{\begin{bmatrix} 1 & t_0 & -I_1(t_0) \\ 1 & t_1 & -I_1(t_1) \\ \vdots & \vdots & \vdots \end{bmatrix}}_{\mathbf{M}} \underbrace{\begin{bmatrix} A \\ B \\ C \end{bmatrix}}_{\mathbf{X}} \tag{19}$$
The least squares solution is computed as in Equation (20), where † denotes the matrix pseudo-inverse.
$$\mathbf{X} = \mathbf{M}^{\dagger} \mathbf{Y} \tag{20}$$
Once we have computed $\mathbf{X}$, and therefore $A$, $B$, and $C$, we can find the coefficients we desire as,
$$a_1 = C, \qquad b_0 = B / a_1, \qquad b_1 = A - b_0. \tag{21}$$
If, for some reason, we happen to be confident that $b_0 = 0$, then we can simplify the method. We form
$$\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ \vdots \end{bmatrix}}_{\mathbf{Y}} = \underbrace{\begin{bmatrix} 1 & -I_1(t_0) \\ 1 & -I_1(t_1) \\ \vdots & \vdots \end{bmatrix}}_{\mathbf{M}} \underbrace{\begin{bmatrix} A \\ B \end{bmatrix}}_{\mathbf{X}} \tag{22}$$
and compute the least-squares solution $\mathbf{X} = \mathbf{M}^{\dagger} \mathbf{Y}$. Once we have computed $\mathbf{X}$, and therefore $A$ and $B$, we can find the coefficients we desire as $a_1 = B$ and $b_1 = A$.
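A minimal sketch of the single-exponential fit with bias, following Equations (18)–(21) (illustrative code, not the authors' original listing; the weighting and constraints of Section 2.3.3 are omitted):

function [a1, b0, b1] = lctr1(t, y)
% Minimal first-order LCTR fit of y(t) = b0 + b1*exp(-a1*t),
% following Equations (18)-(21); no weighting or constraints.
t = t(:); y = y(:);
I1 = cumtrapz(t, y);              % trapezoidal running integral of y
M  = [ones(size(t)), t, -I1];     % regressor matrix of Equation (19)
X  = M \ y;                       % least-squares solution, Equation (20)
A = X(1); B = X(2); C = X(3);
a1 = C;                           % Equation (21)
b0 = B / a1;
b1 = A - b0;
end

For the no-bias variant of Equation (22), the $t$ column is dropped, and $a_1 = B$, $b_1 = A$.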

2.3.2. Double Exponential with and Without Bias

The method presented in this work generalizes nicely to higher-order exponential fits. We next consider a double-exponential form with a steady-state bias in Equation (23), with its first and second time derivatives shown in Equation (24). We can express the second derivative from these equations as shown in Equation (25).
$$\hat{y}(t) = b_0 + b_1 e^{-a_1 t} + b_2 e^{-a_2 t}. \tag{23}$$

$$\dot{\hat{y}}(t) = -b_1 a_1 e^{-a_1 t} - b_2 a_2 e^{-a_2 t}, \qquad \ddot{\hat{y}}(t) = b_1 a_1^2 e^{-a_1 t} + b_2 a_2^2 e^{-a_2 t}. \tag{24}$$

$$\ddot{\hat{y}}(t) = -(a_1 + a_2)\,\dot{\hat{y}}(t) - a_1 a_2\,\hat{y}(t) + b_0 a_1 a_2. \tag{25}$$
The initial values for the model are as follows:
$$\hat{y}(0) = b_0 + b_1 + b_2, \qquad \dot{\hat{y}}(0) = -b_1 a_1 - b_2 a_2. \tag{26}$$
Again, we integrate both sides of Equation (25) (we integrate twice here; we will do so one step at a time for clarity),
$$\begin{aligned} \int_0^t \ddot{\hat{y}}(\tau)\,d\tau &= -(a_1 + a_2) \int_0^t \dot{\hat{y}}(\tau)\,d\tau - a_1 a_2 \int_0^t \hat{y}(\tau)\,d\tau + \int_0^t b_0 a_1 a_2\,d\tau \\ \dot{\hat{y}}(t) - \dot{\hat{y}}(0) &= -(a_1 + a_2)\left( \hat{y}(t) - \hat{y}(0) \right) - a_1 a_2 \int_0^t \hat{y}(\tau)\,d\tau + b_0 a_1 a_2 t \\ \dot{\hat{y}}(t) &= -(a_1 + a_2)\,\hat{y}(t) - a_1 a_2 \int_0^t \hat{y}(\tau)\,d\tau + b_0 a_1 a_2 t + b_1 a_2 + b_2 a_1 + b_0 (a_1 + a_2). \end{aligned} \tag{27}$$
We now integrate again to write,
$$\begin{aligned} \int_0^t \dot{\hat{y}}(\tau)\,d\tau = {} & -(a_1 + a_2) \int_0^t \hat{y}(\tau)\,d\tau - a_1 a_2 \int_0^t\!\!\int_0^{\zeta} \hat{y}(\tau)\,d\tau\,d\zeta + b_0 a_1 a_2 \int_0^t \tau\,d\tau \\ & + \int_0^t \left( b_1 a_2 + b_2 a_1 + b_0 (a_1 + a_2) \right) d\tau \\ \hat{y}(t) = {} & -(a_1 + a_2) \int_0^t \hat{y}(\tau)\,d\tau - a_1 a_2 \int_0^t\!\!\int_0^{\zeta} \hat{y}(\tau)\,d\tau\,d\zeta + b_0 a_1 a_2\, t^2/2 \\ & + \left( b_1 a_2 + b_2 a_1 + b_0 (a_1 + a_2) \right) t + (b_0 + b_1 + b_2). \end{aligned} \tag{28}$$
The two integrals are again approximated by replacing $\hat{y}(t)$ with the measured data $y(t)$ and by replacing the integration with a trapezoidal approximation; $I_2(t)$ denotes the approximate double integral. Then, we can write,
$$\hat{y}(t) = -(a_1 + a_2) I_1(t) - a_1 a_2 I_2(t) + b_0 a_1 a_2\, t^2/2 + \left( b_1 a_2 + b_2 a_1 + b_0 (a_1 + a_2) \right) t + (b_0 + b_1 + b_2), \quad \text{rewritten as} \quad \hat{y}(t) = A + Bt + Ct^2 - D I_1(t) - E I_2(t). \tag{29}$$
As before, this form is now linear in the parameters, and the values of $A$ through $E$ can be found via standard linear regression,
$$\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ \vdots \end{bmatrix}}_{\mathbf{Y}} = \underbrace{\begin{bmatrix} 1 & t_0 & t_0^2 & -I_1(t_0) & -I_2(t_0) \\ 1 & t_1 & t_1^2 & -I_1(t_1) & -I_2(t_1) \\ \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix}}_{\mathbf{M}} \underbrace{\begin{bmatrix} A \\ B \\ C \\ D \\ E \end{bmatrix}}_{\mathbf{X}} \tag{30}$$
by computing the least-squares solution $\mathbf{X} = \mathbf{M}^{\dagger} \mathbf{Y}$. To determine the $a$ and $b$ sets of coefficients from this solution, consider the polynomial $(x - a_1)(x - a_2) = 0$. It can be seen that $a_1$ and $a_2$ are the roots of the first expression in Equation (31), which can be found using the quadratic formula. We can solve for $b_0$ similarly, using the coefficient definitions in Equation (29). Then, we can assemble a system of equations and solve for $b_1$ and $b_2$ as stated in Equation (32).
$$x^2 - Dx + E = 0 \qquad \text{and} \qquad b_0 = 2C/E. \tag{31}$$

$$\begin{bmatrix} A - b_0 \\ B - b_0 D \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ a_2 & a_1 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \tag{32}$$
If, for some reason, we happen to be confident that $b_0 = 0$, then we can simplify the method. We form
$$\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ \vdots \end{bmatrix}}_{\mathbf{Y}} = \underbrace{\begin{bmatrix} 1 & t_0 & -I_1(t_0) & -I_2(t_0) \\ 1 & t_1 & -I_1(t_1) & -I_2(t_1) \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}}_{\mathbf{M}} \underbrace{\begin{bmatrix} A \\ B \\ C \\ D \end{bmatrix}}_{\mathbf{X}} \tag{33}$$
and compute the least-squares solution $\mathbf{X} = \mathbf{M}^{\dagger} \mathbf{Y}$. Once we have computed $\mathbf{X}$, we can then solve for $a_1$ and $a_2$ as the roots of the polynomial $x^2 - Cx + D = 0$, which can be found using the quadratic formula. Then, we solve the system of equations in Equation (34) to find $b_1$ and $b_2$.
$$\begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ a_2 & a_1 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \tag{34}$$

2.3.3. The LCTR Algorithm

Special modifications were made to improve LCTR performance. Modified quantities are denoted by a prime.
Weighted least squares and normalization. The least-squares solution of Equation (20) is modified to the weighted least-squares solution (for both orders) in Equation (35). Normalization was applied to improve matrix conditioning; here, $\max |\mathbf{M}_{:j}|$ denotes the maximum absolute value of column $j$. The vector $\mathbf{X}$ consists of the elements $X_j$.
$$X_j = \frac{\left[ \left( \mathbf{M}'^{T} \mathbf{W} \mathbf{M}' \right)^{-1} \mathbf{M}'^{T} \mathbf{W}\, \mathbf{y} \right]_j}{\max \left| \mathbf{M}_{:j} \right|}, \quad \text{where } M'_{ij} = \frac{M_{ij}}{\max \left| \mathbf{M}_{:j} \right|} \;\; \forall \{i,j\}. \tag{35}$$
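In Matlab terms, Equation (35) amounts to the following sketch (assuming a diagonal weight matrix W assembled from the weighting function of Figure 3; variable names are illustrative):

% Weighted least squares with column normalization (Equation (35)).
s  = max(abs(M), [], 1);              % per-column scales max|M_(:,j)|
Mn = M ./ s;                          % normalized regressor matrix M'
Xn = (Mn' * W * Mn) \ (Mn' * W * y);  % weighted LS on normalized columns
X  = Xn ./ s';                        % undo normalization to recover X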
Differential constraint for the second-order case. Reduced data quality can often lead to one of the time constants being incorrectly identified as negative. To enforce stable parameters, differential constraints were introduced in Equation (36) for the last 1000 data points, equating the differential of the estimated output with zero. Note that $\mathbf{X}$ is constant; hence, $\dot{\mathbf{X}} = \mathbf{0}$. The augmented matrices are $\mathbf{M}'$ and $\mathbf{Y}'$. The weight function for the differential constraint $\mathbf{W}$ in Figure 3 increases quadratically after sample point 2500, since subsequent data points are expected to change less than the preceding ones.
$$\mathbf{Y} = \mathbf{M} \mathbf{X}, \qquad \dot{\mathbf{Y}}_{t_{2600}:t_{3600}} = \mathbf{0} = \dot{\mathbf{M}}_{t_{2600}:t_{3600},\,:}\, \mathbf{X}, \quad \text{where} \quad \mathbf{Y}_{t_{2600}:t_{3600}} = \begin{bmatrix} Y_{t_{2600}} & \cdots & Y_{t_{3600}} \end{bmatrix}^T, \quad \mathbf{M}_{t_{2600}:t_{3600},\,:} = \begin{bmatrix} M_{t_{2600},i} & \cdots & M_{t_{3600},i} \end{bmatrix}^T \;\forall i, \qquad \mathbf{Y}' = \begin{bmatrix} \mathbf{Y} \\ \mathbf{0} \end{bmatrix}, \quad \mathbf{M}' = \begin{bmatrix} \mathbf{M} \\ \dot{\mathbf{M}} \end{bmatrix}. \tag{36}$$
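Since $dI_1/dt = y$ and $dI_2/dt = I_1$ under the regressor definitions of Equations (29) and (30), the derivative rows of $\dot{\mathbf{M}}$ can be assembled as in the following sketch (illustrative variable names):

% Augment the second-order regression with the steady-state differential
% constraint of Equation (36): dy/dt = B + 2*C*t - D*y - E*I1 = 0 is
% imposed on the last 1000 samples.
idx = numel(t)-999 : numel(t);
Md  = [zeros(1000,1), ones(1000,1), 2*t(idx), -y(idx), -I1(idx)];
Ya  = [y; zeros(1000,1)];             % augmented target vector Y'
Ma  = [M; Md];                        % augmented regressor matrix M'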
Transient weighting. Figure 3 also shows that the weighting of data points during the rise times of the transients is greater than that at the end of the data series. This ensures that the more informative dynamics at the beginning dominate the fitting, so that the right time constants are found. Due to the nature of least squares, the longer the data series, the less exactly the rise periods are fitted. Furthermore, since regression techniques assume white noise [7], the noise coloring caused by larger time windows (see Equation (5)) reduces estimation quality. This was taken into consideration when choosing window sizes.

2.4. Matlab Implementations

  • The Matlab code of the dequantizer:
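A minimal reconstruction of the dequantizer core based on Equations (6)–(8) (a sketch, not the authors' original listing): it assumes a fixed symmetric window, omits the edge handling, the transient smoothing of Equation (9), and the weighted merging of Equation (10), and uses norminv from the Statistics and Machine Learning Toolbox.

function y_dq = dequantize(y, R, sigma, w)
% Dequantizer core sketch: estimates the mean generating quantized,
% noisy samples via the inverse normal CDF (Equations (6)-(8)).
N = numel(y);
y_dq = y;
for k = (w+1):(N-w)
    ys = y(k-w:k+w);                          % symmetric sample window
    r  = round((max(ys) - min(ys)) / R);      % range in accuracy bits, Eq. (6)
    if r == 0
        y_dq(k) = mean(ys);                   % quantized data are the sole info
        continue
    end
    if mod(r, 2) == 0                         % even range: two thresholds
        T = (min(ys) + max(ys))/2 + [-R/2, R/2];
    else                                      % odd range: one repeated threshold
        T = (min(ys) + max(ys))/2 * [1, 1];
    end
    n  = numel(ys);
    mu = zeros(1, 2);
    for i = 1:2
        p = sum(ys <= T(i)) / n;              % empirical Pr(Y <= T_i), Eq. (7)
        p = min(max(p, 1/n), 1 - 1/n);        % keep norminv finite
        mu(i) = T(i) - norminv(p, 0, sigma);  % mean estimate, Eq. (8)
    end
    y_dq(k) = mean(mu);                       % average the two estimates
end
end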
  • The Matlab code for second-order fit:
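A minimal sketch of the unweighted second-order fit, following Equations (29)–(32) (not the authors' original listing; the weighting, normalization, and differential constraints of Section 2.3.3 are omitted):

function [a, b] = lctr2(t, y)
% Second-order LCTR sketch: fits y(t) = b0 + b1*exp(-a1*t) + b2*exp(-a2*t)
% via the linear regression of Equations (29)-(32).
t = t(:); y = y(:);
I1 = cumtrapz(t, y);                      % single trapezoidal integral
I2 = cumtrapz(t, I1);                     % double trapezoidal integral
M  = [ones(size(t)), t, t.^2, -I1, -I2];  % regressor matrix, Eq. (30)
X  = M \ y;                               % least-squares solution
A = X(1); B = X(2); C = X(3); D = X(4); E = X(5);
a  = roots([1, -D, E]);                   % a1, a2 solve x^2 - D*x + E = 0, Eq. (31)
b0 = 2*C / E;                             % Eq. (31)
b12 = [1, 1; a(2), a(1)] \ [A - b0; B - b0*D];  % Eq. (32)
b  = [b0; b12];
end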

3. Results

3.1. Sensitivity Analysis Overview

A sensitivity analysis was performed to assess the performance and robustness of the algorithms with respect to Gaussian measurement noise $\sigma$, resolution $R$, state of charge (SoC, analogous to open-circuit voltage), and time constants $\tau_i$. Evaluation metrics included the RMSE (see Equation (12)), relative time constant errors ($\tilde{\tau}_i / \tau_i$, where the tilde denotes estimation error), and steady-state errors. Steady-state errors are approximated as the error at the last time instant of the window. Table 1 displays the nominal and extreme parameter values. Nominal values are used when a parameter is not varied.

3.2. Visualization of Parameter Identification

First, visualizations of the dequantized and LCTR time series are shown for minimal and maximal resolution and Gaussian noise, using a second-order model. Next, the RMSE improvement with the dequantizer is shown; the dequantizer is used in all that follows. Then, the sensitivity of the relative time constant error to the time constants is shown, in order to explore how the combined locations of the two time constants challenge the program. Tests on the first-order system show that a reduced number of parameters is easier to find. A comparison of the RMSE improvement and relative time constant errors with those of a benchmark algorithm follows. Steady-state errors are plotted against accuracy and state of charge to highlight biased estimation due to coarse rounding. Finally, a real-life scenario test is presented. Matlab experiments show that the dequantizer takes 20 ms, and the LCTR takes 40 ms for second-order and 20 ms for first-order systems.
Figure 4 shows the performance of the LCTR for low/high values of Gaussian noise and resolution. The first row shows that the dequantizer’s smoothing effect slightly distorts the data; by using overlapping sample batches, it adds a correlation between data points. However, as soon as the data quality decreases and the data statistics begin to convey more than the individual points, it becomes useful. When the data are coarsely quantized, the steady-state is missed by the estimation. The LCTR estimates unstable time constants for the dataset with high Gaussian noise without dequantization. It also overestimates low time constants for large Gaussian noise standard deviation.
Figure 5 displays RMSE variations based on accuracy and Gaussian noise standard deviation, comparing the unfiltered and dequantized cases. The red regions on the right indicate challenges posed by the Gaussian noise, particularly evident at low resolution, but notably mitigated by dequantization. Yet, at low Gaussian noise, dequantization's impact on RMSE is not substantial due to the coloring effect of quantization, where the dequantizer lacks sufficient information to recover the steady-state. Given these improvements, the dequantizer is used in all subsequent experiments.
Figure 6 depicts the relative time constant errors $(\hat{\tau}_i - \tau_i)/\tau_i$ for varying time constant values. The estimation of $\tau_1$ deteriorates at lower values, since limited samples describe the rise. It also worsens when $\tau_2$ is lower or $\tau_1$ is higher, indicating challenges when the two time constants are close together, which leads the LCTR to mix information about the exponentials. The relative error of $\tau_2$ increases with larger values, likely due to the incomplete convergence captured in the 2500 s time window.

3.3. Parameter Sensitivities for the First-Order Model

The benchmark algorithm refers to the analytical method used to find the parameters of a first-order model. If the sampling rate is high enough, the data are deterministic, and the end of the time series is near the steady-state, then it gives nearly exact estimates. The mean of the last 100 samples is taken as the estimated steady-state. The estimated initial value is the difference between the initial data point and the steady-state. The time constant is the time at which the curve has completed $1 - e^{-1} \approx 63\%$ of the transition from its initial value to the steady-state. This algorithm takes 7 ms.
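A sketch of this benchmark under the stated assumptions (illustrative variable names; y holds the relaxation voltage and t the time vector):

% Analytical first-order benchmark: steady-state from the last 100
% samples, time constant from the 63% point of the transition.
yss = mean(y(end-99:end));            % estimated steady-state
b1  = y(1) - yss;                     % initial deviation from steady-state
k   = find(abs(y - yss) <= abs(b1)*exp(-1), 1);  % first 63%-complete sample
tau = t(k) - t(1);                    % estimated time constant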
Tests on the first-order model used the geometric mean of $\tau_1$ and $\tau_2$, i.e., 119.2 s, as the time constant. Figure 7 (left) shows the LCTR RMSE improvement over the benchmark for first-order systems. It is larger in regions with a high Gaussian noise standard deviation due to the inherent robustness of the LCTR. However, it worsens in the coarsely rounded region with low noise. Figure 7 (right) shows the steady-state error of the LCTR plotted against accuracy and SoC (with similar results for the benchmark). The OCV is a strictly monotonic function of the SoC and equates to the steady-state; hence, the rounding error shows periodicity over SoC, as displayed in Figure 2b. The dequantizer reduces the steady-state error to a fraction of a mV in most of the region, except where the data are coarsely rounded.

3.4. Field Test Results

Figure 8 shows one of a series of measurements that were made to test a LiFePO4 cell [21]. The battery cell was charged for 15 min and then rested for 40 min, with data captured at 10 mV accuracy. The dequantizer algorithm restored the data sufficiently for the LCTR to process it. In doing so, the LCTR found $R_0 = 0.9\,\mathrm{m\Omega}$, $R_1 = 1.1\,\mathrm{m\Omega}$, $R_2 = 1.5\,\mathrm{m\Omega}$, $\tau_1 = 225$ s, and $\tau_2 = 859$ s for this particular cell. Based on the data and general experience with the LiFePO4 chemistry (see such a cell from another manufacturer in [2]), all the parameters are realistic except for $\tau_1$, which is normally expected to be around one minute.

4. Conclusions

This work addresses the parameter estimation of single and double exponential functions, with application to lithium-ion battery cells under compromised data quality conditions. Its main contributions are as follows: (i) the dequantizer algorithm, (ii) the Linear Continuous-Time Regression (LCTR), and (iii) the corresponding sensitivity analysis. The dequantizer is able to recover the shape of the underlying data in the presence of Gaussian noise, although it introduces some correlation due to overlapping sample windows and fails to find the steady-state at extremely coarse resolution. The LCTR demonstrates robustness after data processing by the dequantizer and outperforms an analytical benchmark algorithm typical of commercial applications. The RMSE is normally below 2 mV, and the steady-state error is within a fraction of a millivolt if the rounding is not extremely coarse. The lower time constant of second-order systems may be overestimated under higher noise levels. In summary, the algorithm is fast, robust, and analytically sound for estimating battery parameters throughout their lifespan.
There are several avenues of improvement for future work. The dequantizer used a maximum of two thresholds; dividing the data along all possible thresholds could further improve recovery. The authors observed that higher sampling rates improve the lower time constant's estimate, warranting a sensitivity analysis. Addressing missing or intermittent data could also be included in such an analysis. Lastly, converting the least-squares algorithm to a recursive formulation could potentially reduce memory usage and improve algorithm speed.

Author Contributions

Conceptualization, Z.M.P., M.S.T. and G.L.P.; methodology, Z.M.P. and G.L.P.; software, Z.M.P.; validation, Z.M.P.; data curation, Z.M.P.; writing—original draft preparation, Z.M.P.; writing—review and editing, M.M., M.S.T. and G.L.P.; visualization, Z.M.P.; project administration, M.M.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the IFD TopchargE project under Grant Agreement No. 9090-00035A (http://topcharge.eu/, accessed on 1 April 2024) and the H2020 Insulae project under Grant Agreement No. 824433 (http://insulae-h2020.eu/, accessed on 1 April 2024). The TopchargE project is centered around a variable-topology battery system that stores electricity and uses it to charge electric vehicles directly from the battery. The goal of the Insulae project is to foster innovative energy solutions to decarbonise European islands.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Pinter, Z.M.; Papageorgiou, D.; Rohde, G.; Marinelli, M.; Træholt, C. Review of control algorithms for reconfigurable battery systems with an industrial example. In Proceedings of the 2021 56th International Universities Power Engineering Conference (UPEC), Middlesbrough, UK, 31 August–3 September 2021; pp. 1–6. [Google Scholar]
  2. Pinter, Z.M.; Engelhardt, J.; Rohde, G.; Træholt, C.; Marinelli, M. Validation of a Single-Cell Reference Model for the Control of a Reconfigurable Battery System. In Proceedings of the 2022 International Conference on Renewable Energies and Smart Technologies (REST), Tirana, Albania, 28–29 July 2022; Volume 1, pp. 1–5. [Google Scholar]
  3. Plett, G.L. Battery Management Systems, Volume II: Equivalent-Circuit Methods; Artech House: Norwood, MA, USA, 2015. [Google Scholar]
  4. Marinelli, M.; Calearo, L.; Engelhardt, J.; Rohde, G. Electrical Thermal and Degradation Measurements of the LEAF e-plus 62-kWh Battery Pack. In Proceedings of the 2022 International Conference on Renewable Energies and Smart Technologies (REST), Tirana, Albania, 28–29 July 2022; Volume 1, pp. 1–5. [Google Scholar]
  5. Pinter, Z.M.; Rohde, G.; Marinelli, M. Comparative Analysis of Rule-Based and Model Predictive Control Algorithms in Reconfigurable Battery Systems for EV Fast-Charging Stations. J. Energy Storage. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4882648 (accessed on 2 July 2024).
  6. Åström, K.J.; Wittenmark, B. Adaptive Control; Courier Corporation: Chelmsford, MA, USA, 2008. [Google Scholar]
  7. Bishop, C. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 2, pp. 531–537. [Google Scholar]
  8. Wills, A.; Schön, T.B.; Ljung, L.; Ninness, B. Identification of Hammerstein–Wiener models. Automatica 2013, 49, 70–81. [Google Scholar] [CrossRef]
  9. Foss, S.D. A method of exponential curve fitting by numerical integration. Biometrics 1970, 26, 815–821. [Google Scholar] [CrossRef]
  10. Plett, G.L. A Linear Method to Fit Equivalent Circuit Model Parameter Values to HPPC Relaxation Data From Lithium-Ion Cells. ASME Lett. Dyn. Syst. Control 2025, 5, 011003. [Google Scholar] [CrossRef]
  11. Rodrigo Navarro, J.; Kakkar, A.; Schatz, R.; Pang, X.; Ozolins, O.; Udalcovs, A.; Popov, S.; Jacobsen, G. Blind phase search with angular quantization noise mitigation for efficient carrier phase recovery. Photonics 2017, 4, 37. [Google Scholar] [CrossRef]
  12. Saab, R.; Wang, R.; Yılmaz, Ö. Quantization of compressive samples with stable and robust recovery. Appl. Comput. Harmon. Anal. 2018, 44, 123–143. [Google Scholar] [CrossRef]
  13. Qiu, K.; Dogandzic, A. Sparse signal reconstruction from quantized noisy measurements via GEM hard thresholding. IEEE Trans. Signal Process. 2012, 60, 2628–2634. [Google Scholar] [CrossRef]
  14. Márquez-Ramírez, V.; Nava, F.; Zúñiga, F. Correcting the Gutenberg–Richter b-value for effects of rounding and noise. Earthq. Sci. 2015, 28, 129–134. [Google Scholar] [CrossRef]
  15. Stanković, I.; Brajović, M.; Daković, M.; Ioana, C.; Stanković, L. Quantization in compressive sensing: A signal processing approach. IEEE Access 2020, 8, 50611–50625. [Google Scholar] [CrossRef]
  16. Zymnis, A.; Boyd, S.; Candes, E. Compressed sensing with quantized measurements. IEEE Signal Process. Lett. 2009, 17, 149–152. [Google Scholar] [CrossRef]
  17. Casini, M.; Garulli, A.; Vicino, A. Bounding nonconvex feasible sets in set membership identification: OE and ARX models with quantized information. IFAC Proc. Vol. 2012, 45, 1191–1196. [Google Scholar] [CrossRef]
  18. Yu, C.; You, K.; Xie, L. Quantized identification of ARMA systems with colored measurement noise. Automatica 2016, 66, 101–108. [Google Scholar] [CrossRef]
  19. Xu, J.; Li, J.; Xu, S. Analysis of quantization noise and state estimation with quantized measurements. J. Control Theory Appl. 2011, 9, 66–75. [Google Scholar] [CrossRef]
  20. Steinstraeter, M.; Gandlgruber, J.; Everken, J.; Lienkamp, M. Influence of pulse width modulated auxiliary consumers on battery aging in electric vehicles. J. Energy Storage 2022, 48, 104009. [Google Scholar] [CrossRef]
  21. GWL/Power CALB CA100FI—Lithium Cell LiFePO4 (3.2 V/100 Ah), Datasheet. Available online: https://files.gwl.eu/ (accessed on 1 January 2024).
Figure 1. (Top): Data generation (meaning the assumptions for the identification and the simulation) and ECM parameter identification. (Bottom): the Thévenin equivalent (or ECM, see [2]) of a second-order model. Estimation is denoted by $\hat{\square}$, where $\square$ is the variable of interest. The open-circuit voltage (OCV) is found from a lookup table based on the SoC. The SoC is calculated by Coulomb counting, $\Delta SoC = \frac{100\%}{Q} \int I\,dt$, where $Q$ is the capacity and $I$ is the current.
Figure 2. Features of the dequantizing algorithm. (a) Choosing the threshold (orange) for the inverse normal distribution. Rounded values are dashed blue. (Top): odd number of values; (bottom): even number of values. (Left): Gaussian distribution; (center): large std. (green); (right): low std. (black). (b) Estimation error dependence on the mean's position between the rounded value (1) and the average of neighboring rounded values (0). Green: large std. Black: low std.
Figure 3. The LCTR weighting function. The first 2500 samples set a preference for modeling early data points; the last 1000 samples weight the steady-state differential.
Figure 4. Dependency of estimation quality on Gaussian noise and resolution for an overdamped second-order system.
Figure 5. RMSE for the unfiltered (left) and the dequantized (right) estimation for an overdamped second-order system.
Figure 6. Relative error of $\tau_1$ (left) and $\tau_2$ (right). White areas denote estimates below 10% of or above 10 times the true value.
Figure 7. (Left) Difference in RMSE in mV between the benchmark and the LCTR. Yellow means a larger error for the benchmark; dark blue means a larger error for the LCTR. The investigated system is first-order. (Right) Steady-state error in mV for the LCTR (similar results were found for the benchmark).
Figure 8. Results of the first-order algorithm on a field test with a lithium iron phosphate cell. (Top) current time series; (bottom) measured, dequantized, and estimated voltage time series. The spikes in the voltage occur because the model uses the noisy current measurement as an input.
Table 1. Nominal and extreme values and the number of parameter test points for second-order systems (Figure 1). The test points are distributed logarithmically, except for the SoC, which is linear. The base of the logarithm is 2 for $\sigma$ and $R$ and 1.5 for $\tau_1$ and $\tau_2$.

| | $\sigma$ | $R$ | $\tau_1$ | $\tau_2$ | $R_0$ | $R_1$ | $R_2$ | SoC$(t_0)$ |
|---|---|---|---|---|---|---|---|---|
| Nominal value | 2 mV | 0.31 mV | 22 s | 647 s | 0.63 mΩ | 0.47 mΩ | 0.24 mΩ | 80% |
| Minimum value | 0.25 mV | 0.63 mV | 4.3 s | 128 s | – | – | – | 10% |
| Maximum value | 16 mV | 10 mV | 111 s | 3275 s | – | – | – | 100% |
| Number of test points | 7 | 6 | 9 | 9 | 1 | 1 | 1 | 91 |
