
Simultaneous State and Parameter Estimation Methods Based on Kalman Filters and Luenberger Observers: A Tutorial & Review

Mechanical Engineering Department, Cullen College of Engineering, University of Houston, Houston, TX 77204, USA
* Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 7043; https://doi.org/10.3390/s25227043
Submission received: 11 October 2025 / Revised: 15 November 2025 / Accepted: 17 November 2025 / Published: 18 November 2025
(This article belongs to the Section Physical Sensors)

Abstract

Simultaneous state and parameter estimation is essential for control system design and dynamic modeling of physical systems. This capability provides critical real-time insight into system behavior, supports the discovery of underlying mechanisms, and facilitates adaptive control strategies. Surveyed in this review paper are two classes of state and parameter estimation methods: Kalman Filters and Luenberger Observers. The Kalman Filter framework, including its major variants such as the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), Cubature Kalman Filter (CKF), and Ensemble Kalman Filter (EnKF), has been widely applied for joint and dual estimation in linear and nonlinear systems under uncertainty. In parallel, Luenberger observers, typically used in deterministic settings, offer alternative approaches through high-gain, sliding mode, and adaptive observer structures. This review focuses on the theoretical foundations, algorithmic developments, and application domains of these methods and provides a comparative analysis of their advantages, limitations, and practical relevance across diverse engineering scenarios.

1. Introduction

State estimation is fundamental to a broad range of engineering and scientific fields [1,2,3,4], as it involves inferring non-measurable internal states of a system. These states are essential for understanding, predicting, and controlling system performance [5,6,7], yet they often remain inaccessible due to sensor limitations, challenging environmental conditions [1], or the inherent system complexity. To address these challenges, various state estimation techniques have been developed, enabling the inference of internal states from available measurement data [8]. Among the widely used techniques are the Kalman Filter and its nonlinear extensions [9,10,11,12,13] and the Luenberger observer [14] for linear systems and its nonlinear system extensions [15,16,17,18].
Traditionally, state estimation techniques assume that model parameters are either (i) known a priori or (ii) unknown but constant throughout the estimation horizon (i.e., time-invariant) [15]. While this simplifies system modeling and analysis, it fails to account for the complexities of real-world systems where parameters may be uncertain or time-varying. In practice, model parameters often change over time due to environmental variations, operational shifts, or gradual system degradation. Neglecting these dynamics can lead to distorted representations of system behavior and inaccurate state predictions, as states and parameters are inherently interdependent. Therefore, it is crucial to employ methods capable of simultaneously estimating both system states and time-varying system parameters, enabling a more accurate and comprehensive characterization of system dynamics.
Extensive research has been devoted to state estimation in systems with known parameters, including numerous comparative analyses of classical filters and observers. Studies evaluating the Extended, Unscented, Cubature, and Ensemble Kalman Filters in nonlinear systems [19,20,21] have highlighted important trade-offs between estimation accuracy, numerical stability, and computational complexity, while comparative investigations of deterministic observers and adaptive Kalman filters under various noise and dynamic conditions [22,23] have provided valuable insights into estimator performance. However, these works primarily focus on state estimation alone and do not explicitly address the challenges associated with estimating system parameters that are uncertain or time-varying. Despite numerous review papers on Kalman filtering and observer design, to the best of the authors’ knowledge, no comprehensive review unifies and compares methods developed for simultaneous state and parameter estimation across both stochastic (Kalman-based) and deterministic (observer-based) frameworks. The present work therefore fills this gap by providing a structured, tutorial-style review that integrates theoretical foundations, algorithmic developments, and comparative perspectives on the principal approaches to concurrent state and parameter estimation.
The manuscript structure is as follows: Presented in Section 2 are Kalman filter-based state and parameter estimation methods for both linear and nonlinear systems, along with their modified versions for joint state-parameter estimation; this section also includes a comparative analysis of these methods based on the reviewed literature and highlights their key limitations. Developed in Section 3 is a detailed examination of observer-based methods and their applications in state-parameter estimation. Provided in Section 4 is a comparative discussion of Kalman-based and observer-based approaches. Finally, summarized in Section 5 are the key insights from the reviewed methods, followed by concluding remarks and future research directions.

2. State and Parameter Estimation: Kalman-Based Methods

The Kalman filter, introduced by Rudolf E. Kalman [24], is a widely used algorithm for estimating the internal states of linear dynamic systems from noisy or incomplete measurement data [25]. It operates recursively, processing a sequence of measurements over time to predict and refine state estimates. By accounting for uncertainties in both the system model and measurements, the Kalman filter produces optimal estimates of the system’s current state. Its strength lies in its ability to iteratively improve predictions by integrating new data, thereby reducing uncertainty over time. The filter operates in two key steps: the prediction (propagation) step, where the state estimate and covariance are projected forward in time, and the update (correction) step, where new measurements are incorporated to adjust the state estimate and minimize estimation error [26]. This recursive framework makes the Kalman filter highly effective for real-time applications in various fields, including control systems, navigation, and signal processing.
To describe the Kalman filter, consider the stochastic system
$$x_{k+1} = F_k x_k + B_k u_k + w_k, \quad w_k \sim \mathcal{N}(0, Q_k), \tag{1}$$
$$y_k = H_k x_k + v_k, \quad v_k \sim \mathcal{N}(0, R_k), \tag{2}$$
where $x_k$ is the state, $u_k$ the input, and $y_k$ the measurement; $F_k$, $B_k$, and $H_k$ are the state transition, input, and observation matrices; and $Q_k$ and $R_k$ are the process and measurement noise covariances. Note that the time-invariant case is recovered by setting $F_k = F$, $B_k = B$, $H_k = H$, $Q_k = Q$, and $R_k = R$.
The Kalman filter algorithm uses the state transition in (1) to predict the states at time step k, incorporating the process noise as zero-mean Gaussian noise with covariance Q . These predictions are used to compute the predicted system output through the measurement model in (2). The uncertainties are captured in the error covariance matrix P , which influences the calculation of the Kalman gain. The Kalman gain optimally balances the predicted states and measured data by weighting them according to the noise statistics represented in the Q and R matrices. This recursive process improves the system behavior estimates.
The Kalman filter process is summarized in Table 1, where $\hat{x}_k^-$ represents the predicted state estimate, $\hat{x}_k^+$ the updated state estimate, $P_k^-$ the predicted error covariance, $P_k^+$ the updated error covariance, $u_{k-1}$ the control input, $y_k$ the measurement at time $k$, $\tilde{y}_k$ the measurement residual, and $K_k$ the Kalman gain. $I$ is the identity matrix, employed in the update step to adjust the error covariance.
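As a concrete illustration of the predict/update cycle described above, the following is a minimal NumPy sketch of one linear Kalman filter step, exercised on a hypothetical one-dimensional constant-velocity tracking model; the model, noise levels, and variable names are illustrative assumptions, not taken from the reviewed literature.

```python
import numpy as np

# Minimal sketch of the predict/update cycle summarized in Table 1;
# the 1-D constant-velocity model and noise values are illustrative.
def kalman_filter_step(x_hat, P, u, y, F, B, H, Q, R):
    # Prediction (propagation) step
    x_pred = F @ x_hat + B @ u                     # predicted state
    P_pred = F @ P @ F.T + Q                       # predicted covariance
    # Update (correction) step
    y_res = y - H @ x_pred                         # measurement residual
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    x_upd = x_pred + K @ y_res                     # updated state
    P_upd = (np.eye(x_hat.size) - K @ H) @ P_pred  # updated covariance
    return x_upd, P_upd

# Usage: track position and velocity from noisy position measurements
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
rng = np.random.default_rng(0)
x_true, x_hat, P = np.array([0.0, 1.0]), np.zeros(2), np.eye(2)
for _ in range(200):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = kalman_filter_step(x_hat, P, np.zeros(1), y, F, B, H, Q, R)
```

The loop highlights the recursive nature of the filter: each call consumes the previous estimate and covariance and returns the refined pair.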
The use of the Kalman filter for simultaneous state estimation and parameter identification was first explored in the early 1970s [27,28,29,30]. Parameter identification involves utilizing the Kalman filter in a dynamically evolving system, where the state variables are augmented to incorporate the parameters being estimated [31]. This approach, known as joint state and parameter estimation, allows for the simultaneous estimation of both system states and parameters within a single filtering process, recursively updating the estimates over time as new measurements become available.
Dual state-parameter estimation is an alternative technique [32]. In this approach [33,34], the state vector $x$ and the parameter vector $\theta$ are treated separately by two distinct, concurrently executed filters: one for state estimation and the other for parameter estimation. This separation allows each filter to specialize in its task, potentially improving accuracy and robustness. Other approaches combine a Kalman-based filter for state estimation with a parameter estimation algorithm such as Least Squares Estimation (LSE) [35,36] or Recursive Least Squares Estimation (RLSE) [37]. In these hybrid solutions, the Kalman filter sequentially updates the state estimates while the parameters are periodically updated using a batch or recursive least squares approach [15].

2.1. Kalman-Based Filters

This section presents the different variants of Kalman-based algorithms and their adaptations for both joint and dual state-parameter estimation. A comparative analysis is also provided, drawing on representative applications and published studies to evaluate the performance of these approaches in simultaneous state and parameter estimation tasks.

2.1.1. Extended Kalman Filter

The Extended Kalman Filter (EKF) is a nonlinear extension of the standard Kalman Filter, designed to estimate the states of systems with nonlinear dynamics and measurements [38]. It accomplishes this by employing a Taylor series expansion to linearize the nonlinear system dynamics around the current state estimate, allowing it to effectively handle the complexities of nonlinear behavior.
To illustrate the EKF, consider the following discrete-time nonlinear state-space model, described by the state and measurement equations [1]
$$x_k = f(x_{k-1}, u_{k-1}) + w_{k-1} \tag{3}$$
$$y_k = h(x_k) + v_k \tag{4}$$
where $x_k$ and $y_k$ are the state and measurement vectors, respectively. The process noise $w_{k-1}$ and the measurement noise $v_k$ are assumed to be zero-mean with covariance matrices $Q_k$ and $R_k$, respectively. The functions $f$ and $h$ represent the nonlinear state transition and measurement functions. The Jacobians $F_{k-1}$ and $H_k$ are defined as
State Transition Jacobian
$$F_{k-1} = \left. \frac{\partial f}{\partial x} \right|_{\hat{x}_{k-1|k-1},\, u_{k-1}} \tag{5}$$
Measurement Jacobian
$$H_k = \left. \frac{\partial h}{\partial x} \right|_{\hat{x}_{k|k-1}} \tag{6}$$
where $F_{k-1}$ and $H_k$ are the Jacobian matrices of the nonlinear state transition function $f$ and the measurement function $h$, evaluated at the updated state estimate $\hat{x}_{k-1|k-1}$ with control input $u_{k-1}$ and at the predicted state $\hat{x}_{k|k-1}$, respectively.
The EKF is executed through the following two recursive steps [1], as described herein.
State and Covariance Estimate Prediction: In the prediction step, the state estimate $\hat{x}_{k|k-1}$ is projected forward in time by applying the nonlinear state transition function $f$ to the previous state estimate $\hat{x}_{k-1|k-1}$ and the control input $u_{k-1}$. Simultaneously, the covariance estimate $P_{k|k-1}$ is obtained by propagating the previous covariance $P_{k-1|k-1}$ through the Jacobian $F_{k-1}$, with the process noise covariance $Q_k$ added to account for uncertainty:
$$\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}) \tag{7}$$
$$P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^{\top} + Q_k \tag{8}$$
State and Covariance Estimate Update: In the update step, the Kalman gain $K_k$ is computed; it weighs the new measurement $y_k$ against the predicted measurement obtained from the nonlinear measurement function $h$. The gain is used to correct the predicted state estimate $\hat{x}_{k|k-1}$, yielding the updated estimate $\hat{x}_{k|k}$. The covariance estimate is also updated by incorporating the measurement, and the updated covariance $P_{k|k}$ reflects the reduced uncertainty after the new measurement has been considered:
$$K_k = P_{k|k-1} H_k^{\top} \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1} \tag{9}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( y_k - h(\hat{x}_{k|k-1}) \right) \tag{10}$$
$$P_{k|k} = \left( I - K_k H_k \right) P_{k|k-1} \tag{11}$$
In Equations (7)–(11), $K_k$ represents the Kalman gain, which determines the weight given to the new measurement versus its estimate during the update. The term $\hat{x}_{k|k-1}$ denotes the a priori state estimate, predicted before incorporating the current measurement, while $\hat{x}_{k|k}$ is the a posteriori state estimate, updated after accounting for the measurement. Similarly, $P_{k|k-1}$ is the a priori covariance matrix, representing the uncertainty in the state prediction before the update, and $P_{k|k}$ is the a posteriori error covariance matrix, which reflects the reduced uncertainty after the measurement has been processed. This process allows the filter to refine both the state estimate and its uncertainty (covariance) with each incoming measurement, iteratively improving the accuracy of the system characterization as new data become available.
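The EKF recursion in Equations (7)–(11) can be sketched in a few lines of NumPy; the damped-pendulum model, its Euler discretization, and all noise values below are illustrative assumptions chosen so that both $f$ and $h$ are nonlinear.

```python
import numpy as np

# EKF sketch for an illustrative damped pendulum, x = [angle, rate];
# the dynamics, discretization, and noise levels are assumptions.
dt, g, Lp, c = 0.01, 9.81, 1.0, 0.1

def f(x, u):
    # Euler-discretized pendulum dynamics with applied torque u
    return np.array([x[0] + dt * x[1],
                     x[1] + dt * (-(g / Lp) * np.sin(x[0]) - c * x[1] + u)])

def h(x):
    return np.array([np.sin(x[0])])        # nonlinear angle measurement

def F_jac(x, u):
    # State-transition Jacobian, evaluated at the updated estimate
    return np.array([[1.0, dt],
                     [-dt * (g / Lp) * np.cos(x[0]), 1.0 - dt * c]])

def H_jac(x):
    # Measurement Jacobian, evaluated at the predicted estimate
    return np.array([[np.cos(x[0]), 0.0]])

def ekf_step(x_hat, P, u, y, Q, R):
    # Prediction
    Fk = F_jac(x_hat, u)
    x_pred = f(x_hat, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update
    Hk = H_jac(x_pred)
    K = P_pred @ Hk.T @ np.linalg.inv(Hk @ P_pred @ Hk.T + R)
    x_upd = x_pred + K @ (y - h(x_pred))
    P_upd = (np.eye(2) - K @ Hk) @ P_pred
    return x_upd, P_upd

# Usage: track the pendulum from noisy sin(angle) measurements
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])
rng = np.random.default_rng(1)
x_true, x_hat, P = np.array([0.5, 0.0]), np.array([0.3, 0.0]), 0.1 * np.eye(2)
for _ in range(500):
    x_true = f(x_true, 0.0) + rng.multivariate_normal(np.zeros(2), Q)
    y = h(x_true) + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = ekf_step(x_hat, P, 0.0, y, Q, R)
```

Note that the Jacobians are re-evaluated at every step, which is the defining cost (and fragility) of the EKF relative to the derivative-free variants discussed next.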

2.1.2. Unscented Kalman Filter

The Extended Kalman Filter approximates nonlinear systems by linearizing them around the current state estimate, whereas the Unscented Kalman Filter (UKF) works directly with the system's nonlinear functions. By propagating a set of sample points, called sigma points, through the true nonlinear dynamics, the UKF avoids the approximation errors inherent in the EKF's linearization and offers greater accuracy, particularly in highly nonlinear and complex systems. This makes the UKF a more attractive choice in scenarios where the EKF's performance is constrained. The UKF algorithm involves generating sigma points, predicting the state and covariance, incorporating new measurements, and refining the state estimate. The detailed steps are outlined as follows [39].
Initialization:
Begin by initializing the state estimate $\hat{x}_0$ and the covariance $P_0$:
$$\hat{x}_0 = E[x_0]$$
$$P_0 = E\left[ (x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^{\top} \right]$$
Sigma Point Generation: Generate sigma points $\chi_{k-1}$ based on the current state estimate $\hat{x}_{k-1}$ and covariance $P_{k-1}$:
$$\chi_{k-1}^{(0)} = \hat{x}_{k-1}$$
$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} + \left( \sqrt{(L + \lambda) P_{k-1}} \right)_i, \quad i = 1, \ldots, L$$
$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} - \left( \sqrt{(L + \lambda) P_{k-1}} \right)_{i-L}, \quad i = L+1, \ldots, 2L$$
where $\lambda = \alpha^2 (L + \kappa) - L$, $L$ is the dimension of the state vector, and $\alpha$ and $\kappa$ are scaling parameters.
State Prediction: Propagate the sigma points through the nonlinear state transition function:
$$\chi_{k|k-1}^{(i)} = f\left( \chi_{k-1}^{(i)} \right)$$
Calculate the predicted state mean $\hat{x}_k^-$ and covariance $P_k^-$:
$$\hat{x}_k^- = \sum_{i=0}^{2L} W_m^{(i)} \chi_{k|k-1}^{(i)}$$
$$P_k^- = \sum_{i=0}^{2L} W_c^{(i)} \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right) \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right)^{\top} + Q_k$$
where $Q_k$ is the process noise covariance, and $W_m^{(i)}$ and $W_c^{(i)}$ are the weights for the mean and covariance, respectively, defined as
$$W_m^{(0)} = \frac{\lambda}{L + \lambda}, \qquad W_c^{(0)} = \frac{\lambda}{L + \lambda} + \left( 1 - \alpha^2 + \beta \right)$$
$$W_m^{(i)} = W_c^{(i)} = \frac{1}{2(L + \lambda)}, \quad i = 1, \ldots, 2L$$
Here, $\beta$ is a constant used to incorporate prior knowledge of the distribution of $x$ (e.g., $\beta = 2$ is optimal for a Gaussian distribution) [39].
Measurement Update: Propagate the sigma points through the nonlinear measurement function:
$$\Upsilon_k^{(i)} = h\left( \chi_{k|k-1}^{(i)} \right)$$
Calculate the predicted measurement mean $\hat{y}_k$ and its covariance $P_k^y$:
$$\hat{y}_k = \sum_{i=0}^{2L} W_m^{(i)} \Upsilon_k^{(i)}$$
$$P_k^y = \sum_{i=0}^{2L} W_c^{(i)} \left( \Upsilon_k^{(i)} - \hat{y}_k \right) \left( \Upsilon_k^{(i)} - \hat{y}_k \right)^{\top} + R$$
where $R$ is the measurement noise covariance.
Kalman Gain and State Update: Compute the cross-covariance $P_{xy}$ and the Kalman gain $K$:
$$P_{xy} = \sum_{i=0}^{2L} W_c^{(i)} \left( \chi_{k|k-1}^{(i)} - \hat{x}_k^- \right) \left( \Upsilon_k^{(i)} - \hat{y}_k \right)^{\top}$$
$$K = P_{xy} \left( P_k^y \right)^{-1}$$
Finally, update the state estimate $\hat{x}_k$ and the covariance $P_k$ using the actual measurement $y_k$:
$$\hat{x}_k = \hat{x}_k^- + K \left( y_k - \hat{y}_k \right)$$
$$P_k = P_k^- - K P_k^y K^{\top}$$
These steps collectively define the Unscented Kalman Filter (UKF) algorithm, where each stage, from sigma point generation to the final state update, is executed deterministically based on the underlying nonlinear functions. This structured process enables the UKF to deliver recursive state estimates without requiring explicit Jacobians, thereby simplifying implementation while preserving higher-order accuracy.
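A compact sketch of the full UKF recursion, from sigma-point generation through the final update, is given below; the pendulum-like test model and the tuning values ($\alpha = 1$, $\beta = 2$, $\kappa = 0$) are illustrative assumptions, not prescriptions from the reviewed works.

```python
import numpy as np

# Minimal UKF recursion following the steps above; model and noise
# levels are illustrative. alpha, beta, kappa are the UT parameters.
def ukf_step(x_hat, P, y, f, h, Q, R, alpha=1.0, beta=2.0, kappa=0.0):
    L = x_hat.size
    lam = alpha**2 * (L + kappa) - L
    # Sigma points: mean plus/minus columns of sqrt((L + lam) * P)
    S = np.linalg.cholesky((L + lam) * P)
    chi = np.column_stack([x_hat] +
                          [x_hat + S[:, i] for i in range(L)] +
                          [x_hat - S[:, i] for i in range(L)])
    # Mean and covariance weights
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    # State prediction: propagate sigma points through f
    chi_p = np.column_stack([f(chi[:, i]) for i in range(2 * L + 1)])
    x_pred = chi_p @ Wm
    dX = chi_p - x_pred[:, None]
    P_pred = dX @ np.diag(Wc) @ dX.T + Q
    # Measurement update: propagate through h
    Y = np.column_stack([h(chi_p[:, i]) for i in range(2 * L + 1)])
    y_pred = Y @ Wm
    dY = Y - y_pred[:, None]
    Pyy = dY @ np.diag(Wc) @ dY.T + R
    Pxy = dX @ np.diag(Wc) @ dY.T
    K = Pxy @ np.linalg.inv(Pyy)
    return x_pred + K @ (y - y_pred), P_pred - K @ Pyy @ K.T

# Usage: pendulum-like system observed through noisy sin(angle)
fx = lambda x: np.array([x[0] + 0.05 * x[1], x[1] - 0.05 * np.sin(x[0])])
hx = lambda x: np.array([np.sin(x[0])])
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])
rng = np.random.default_rng(2)
x_true, x_hat, P = np.array([0.5, 0.0]), np.array([0.3, 0.0]), 0.1 * np.eye(2)
for _ in range(300):
    x_true = fx(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    y = hx(x_true) + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = ukf_step(x_hat, P, y, fx, hx, Q, R)
```

In contrast to the EKF sketch, no Jacobians appear anywhere: the nonlinear functions `f` and `h` are called directly on the sigma points.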

2.1.3. Cubature Kalman Filter

The Cubature Kalman Filter (CKF) is an advanced variant of the Kalman Filter designed to handle the complexities of nonlinear systems, particularly in high-dimensional spaces. Unlike the Unscented Kalman Filter, which relies on sigma points, the CKF employs a cubature rule based on spherical-radial integration, enhancing its ability to manage system nonlinearities more effectively, thereby improving estimation performance in complex scenarios [40,41]. The following presents a detailed formulation of the CKF, including its prediction and correction phases for a general nonlinear discrete-time system.
For a nonlinear discrete-time system, the CKF operates through the following steps.
Initialization: Initialize state and covariance estimates
$$\hat{x}_{0|0} = E[x_0]$$
$$P_{0|0} = E\left[ (x_0 - \hat{x}_{0|0})(x_0 - \hat{x}_{0|0})^{\top} \right]$$
Prediction Step: Compute the cubature points $\chi_i$ based on the current state estimate $\hat{x}_{k-1|k-1}$ and its covariance $P_{k-1|k-1}$:
$$\chi_i = \hat{x}_{k-1|k-1} + \sqrt{P_{k-1|k-1}}\, \zeta_i$$
where $\zeta_i$ are the cubature points chosen according to the spherical-radial cubature rule [41]. The cubature points are then propagated through the state transition function $f$:
$$\chi_{i,k|k-1} = f\left( \chi_i, u_{k-1} \right)$$
Compute the predicted state mean as
$$\hat{x}_{k|k-1} = \sum_{i=1}^{2n} \omega_i \chi_{i,k|k-1}$$
where $\omega_i = \frac{1}{2n}$.
Compute the predicted state covariance as
$$P_{k|k-1} = \sum_{i=1}^{2n} \omega_i \left( \chi_{i,k|k-1} - \hat{x}_{k|k-1} \right) \left( \chi_{i,k|k-1} - \hat{x}_{k|k-1} \right)^{\top} + Q_{k-1}$$
Correction Step: Propagate the cubature points through the measurement function $h$:
$$y_{i,k|k-1} = h\left( \chi_{i,k|k-1} \right)$$
Calculate the predicted measurement mean as
$$\hat{y}_{k|k-1} = \sum_{i=1}^{2n} \omega_i y_{i,k|k-1}$$
Next, compute the innovation covariance $P_{yy,k}$ and the cross-covariance $P_{xy,k}$ as
$$P_{yy,k} = \sum_{i=1}^{2n} \omega_i \left( y_{i,k|k-1} - \hat{y}_{k|k-1} \right) \left( y_{i,k|k-1} - \hat{y}_{k|k-1} \right)^{\top} + R_k$$
$$P_{xy,k} = \sum_{i=1}^{2n} \omega_i \left( \chi_{i,k|k-1} - \hat{x}_{k|k-1} \right) \left( y_{i,k|k-1} - \hat{y}_{k|k-1} \right)^{\top}$$
Calculate the Kalman gain $K_k$ as
$$K_k = P_{xy,k} P_{yy,k}^{-1}$$
Finally, the updated state estimate and state covariance are
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( y_k - \hat{y}_{k|k-1} \right)$$
$$P_{k|k} = P_{k|k-1} - K_k P_{yy,k} K_k^{\top}$$
The CKF algorithm, as detailed above, defines a structured, recursive process for nonlinear state estimation. By leveraging numerical integration through deterministic cubature points, it enables consistent prediction and correction without linearization or heuristic sampling, making it well suited to a broad range of complex nonlinear system applications.
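The CKF recursion above condenses into a short derivative-free routine. Following the formulation in this section, the propagated cubature points are reused in the measurement step (some implementations instead regenerate points from the predicted distribution); the test model and noise values are illustrative assumptions.

```python
import numpy as np

# Minimal CKF sketch using the spherical-radial rule: 2n equally
# weighted points at +/- sqrt(n) times the unit vectors.
def ckf_step(x_hat, P, y, f, h, Q, R):
    n = x_hat.size
    zeta = np.sqrt(n) * np.column_stack([np.eye(n), -np.eye(n)])
    chi = x_hat[:, None] + np.linalg.cholesky(P) @ zeta
    w = 1.0 / (2 * n)                      # equal cubature weights
    # Prediction: propagate cubature points through f
    chi_p = np.column_stack([f(chi[:, i]) for i in range(2 * n)])
    x_pred = chi_p.mean(axis=1)
    dX = chi_p - x_pred[:, None]
    P_pred = w * (dX @ dX.T) + Q
    # Correction: propagate through h, then gain and update
    Y = np.column_stack([h(chi_p[:, i]) for i in range(2 * n)])
    y_pred = Y.mean(axis=1)
    dY = Y - y_pred[:, None]
    Pyy = w * (dY @ dY.T) + R
    Pxy = w * (dX @ dY.T)
    K = Pxy @ np.linalg.inv(Pyy)
    return x_pred + K @ (y - y_pred), P_pred - K @ Pyy @ K.T

# Usage on an illustrative nonlinear model
fx = lambda x: np.array([x[0] + 0.05 * x[1], x[1] - 0.05 * np.sin(x[0])])
hx = lambda x: np.array([np.sin(x[0])])
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])
rng = np.random.default_rng(4)
x_true, x_hat, P = np.array([0.5, 0.0]), np.array([0.3, 0.0]), 0.1 * np.eye(2)
for _ in range(300):
    x_true = fx(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    y = hx(x_true) + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = ckf_step(x_hat, P, y, fx, hx, Q, R)
```

Compared with the UKF, there are no tuning parameters: the point set and weights are fixed by the state dimension alone.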

2.1.4. Ensemble Kalman Filter

The Ensemble Kalman Filter (EnKF), introduced by Evensen in 1994 [42,43], is a Monte Carlo approximation of the traditional Kalman Filter intended for high-dimensional, nonlinear system applications [24]. Departing from the standard Kalman filter, which assumes Gaussian distributions and computes a single state estimate, the EnKF employs a large ensemble of state estimates to approximate the distribution of possible states, making it more robust in complex and uncertain environments [44,45]. The four primary steps in the EnKF process are initialization, prediction, update, and iteration.
Initialization: An ensemble of $N$ members $x_i^0 \in \mathbb{R}^n$ is initialized, where each member represents a possible state of the system.
Prediction Step: Propagate each ensemble member $x_i^k$ through the state dynamics:
$$x_i^{f,k+1} = f\left( x_i^k \right) + w_i^k$$
where $f$ represents the system dynamics and $w_i^k$ the process noise for the $i$th ensemble member.
Update Step: Update each predicted ensemble member using the observation:
$$x_i^{a,k+1} = x_i^{f,k+1} + K^{k+1} \left( y^{k+1} - H\left( x_i^{f,k+1} \right) + \epsilon_i^{k+1} \right)$$
where $x_i^{a,k+1}$ is the updated ensemble member, $K^{k+1}$ is the Kalman gain, $y^{k+1}$ is the measurement, $H$ is the observation operator, and $\epsilon_i^{k+1} \sim \mathcal{N}(0, R^{k+1})$ is the measurement perturbation noise.
Kalman Gain Calculation: Compute the Kalman gain using the forecast covariance matrix $P^{f,k+1}$ derived from the ensemble members:
$$K^{k+1} = P^{f,k+1} H^{\top} \left( H P^{f,k+1} H^{\top} + R^{k+1} \right)^{-1}$$
The forecast covariance matrix is approximated by
$$P^{f,k+1} = \frac{1}{N-1} \sum_{i=1}^{N} \left( x_i^{f,k+1} - \tilde{x}^{f,k+1} \right) \left( x_i^{f,k+1} - \tilde{x}^{f,k+1} \right)^{\top}$$
where $\tilde{x}^{f,k+1}$ is the sample mean of the forecasted ensemble:
$$\tilde{x}^{f,k+1} = \frac{1}{N} \sum_{i=1}^{N} x_i^{f,k+1}$$
Iteration: The updated ensemble x i a , k + 1 is then used as the new initial condition for the next iteration of the process.
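The four steps above can be sketched as a single forecast/analysis cycle operating on an ensemble matrix; the linear constant-velocity test model and all noise values are illustrative assumptions (the EnKF itself accepts any nonlinear dynamics $f$).

```python
import numpy as np

# EnKF cycle sketch; the linear test model and noise levels are
# illustrative assumptions (f may be any nonlinear map).
rng = np.random.default_rng(42)

def enkf_step(ensemble, y, f, H, Q, R):
    """One forecast/analysis cycle; ensemble has shape (n, N)."""
    n, N = ensemble.shape
    # Forecast: propagate every member with sampled process noise
    Xf = np.column_stack([f(ensemble[:, i]) +
                          rng.multivariate_normal(np.zeros(n), Q)
                          for i in range(N)])
    A = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (N - 1)                 # sample forecast covariance
    # Kalman gain computed from the ensemble statistics
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    # Analysis: update each member with a perturbed observation
    Xa = np.empty_like(Xf)
    for i in range(N):
        eps = rng.multivariate_normal(np.zeros(y.size), R)
        Xa[:, i] = Xf[:, i] + K @ (y - H @ Xf[:, i] + eps)
    return Xa

# Usage: constant-velocity target observed through noisy position
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x_true = np.array([0.0, 1.0])
ensemble = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=50).T
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    ensemble = enkf_step(ensemble, y, lambda x: F @ x, H, Q, R)
x_mean_est = ensemble.mean(axis=1)         # point estimate from ensemble
```

The ensemble mean serves as the point estimate, while the spread of the members approximates the estimation uncertainty without ever forming a full covariance recursion.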

2.2. Joint State and Parameter Estimation Using Kalman-Based Algorithms

In joint state and parameter estimation, the system’s state vector is augmented to include both the dynamic states and the unknown parameters to be estimated. The standard filter equations are then applied, modified to facilitate the simultaneous estimation of both quantities, states and parameters, from system measurements.
Let $x$ denote the state vector and $\theta$ the model parameter vector; the augmented state vector is then defined as
$$x_a = \begin{bmatrix} x \\ \theta \end{bmatrix}$$
Note: When a system includes unknown parameters, even a model that is linear in the state variables may lead to a nonlinear estimation problem once the parameters are treated as additional states. This occurs because the parameters often appear multiplicatively in the system matrices, introducing nonlinear couplings between the original states and the parameters in the augmented model.
By incorporating the dynamics of the parameters into the system and measurement models, the updated representation becomes
$$x_{a,k+1} = A_a x_{a,k} + B_a u_k + w_a,$$
$$y_k = H_a x_{a,k} + v_k,$$
where $A_a$, $B_a$, and $H_a$ are the augmented state transition, input, and observation matrices, respectively, and $w_a$ is the augmented process noise vector. These matrices explicitly combine the nominal system matrices with the parameter-dependent components, embedding the parameter effects directly within the dynamic update. Specifically, $A_a$ incorporates both the system dynamics and parameter coupling terms, $B_a$ represents the input mapping including parameter dependencies, and $H_a$ defines how both the states and parameters influence the measured outputs. This representation preserves the discrete-time state-space structure while enabling simultaneous estimation of both system states and parameters within a unified framework.
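To make the augmentation concrete, the following hypothetical example applies an EKF to a scalar plant $x_{k+1} = a x_k + u_k$ with unknown constant parameter $a$. Because $a$ multiplies $x$, the augmented model is bilinear and the estimation problem becomes nonlinear, exactly as noted above; all values are illustrative assumptions.

```python
import numpy as np

# Joint-EKF sketch: augmented state x_a = [x, a] for the scalar plant
# x_{k+1} = a*x_k + u_k with unknown constant parameter a.
def f_aug(xa, u):
    x, a = xa
    return np.array([a * x + u, a])        # parameter modeled as constant

def F_aug(xa, u):
    x, a = xa
    return np.array([[a, x],               # Jacobian of f_aug w.r.t. [x, a]
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])                 # only the state x is measured
Q = np.diag([1e-4, 1e-6])                  # tiny noise on a lets it adapt
R = np.array([[1e-2]])

rng = np.random.default_rng(3)
a_true = 0.95
x_true = 1.0
xa_hat, P = np.array([1.0, 0.5]), np.eye(2)
for k in range(400):
    u = np.sin(0.1 * k)                    # persistently exciting input
    x_true = a_true * x_true + u + 1e-2 * rng.standard_normal()
    y = x_true + 0.1 * rng.standard_normal()
    # EKF prediction on the augmented model
    Fk = F_aug(xa_hat, u)
    xa_hat = f_aug(xa_hat, u)
    P = Fk @ P @ Fk.T + Q
    # EKF measurement update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    xa_hat = xa_hat + K @ (np.array([y]) - H @ xa_hat)
    P = (np.eye(2) - K @ H) @ P
```

A single filter thus refines the state `xa_hat[0]` and the parameter `xa_hat[1]` in one recursion; the off-diagonal terms of `P` capture the state-parameter coupling that the augmentation introduces.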
Joint state and parameter estimation is widely applied in engineering. For both linear and nonlinear systems, the Joint Extended Kalman Filter (JEKF) is a commonly used method. A typical application is in battery management systems, where the JEKF is used to estimate the state of charge (SOC) and various model parameters [46]. However, in systems with significant nonlinearities, the performance of the JEKF may degrade due to the inaccuracies introduced by the linearization and Jacobian approximation.
To overcome these limitations, the Joint Unscented Kalman Filter (JUKF) is often employed as it avoids linearization by propagating sigma points through the nonlinear system models, capturing the true mean and covariance of the state distribution more accurately [39]. It has been successfully employed in many research studies. For instance, in [47], the JUKF was used for simultaneous state and parameter estimation in vehicle dynamics, enhancing the performance of driver assistance systems under nonlinear conditions. Similarly, in [48], the JUKF was applied to Managed Pressure Drilling (MPD) to estimate downhole states and drilling parameters using topside measurements, demonstrating reliable performance even during transient drilling operations.
An alternative to the JUKF is the Joint Cubature Kalman Filter (JCKF), which replaces sigma points with cubature points based on spherical-radial integration rules [41]. This method retains the benefits of derivative-free estimation while improving numerical stability and accuracy. In [49], the JCKF was employed for estimating vehicle states and road-dependent parameters, such as tire-road friction coefficients, which are critical for adaptive vehicle control under varying road conditions.
The Joint Ensemble Kalman Filter (JEnKF) leverages ensemble-based methods for joint estimation in high-dimensional, nonlinear systems. Rather than relying on analytical approximations of distributions, the JEnKF uses a Monte Carlo ensemble to represent the posterior distribution. In [44], the JEnKF was employed to estimate soil moisture states and land surface parameters, achieving high estimation accuracy. Additional applications of the JEnKF in joint state-parameter estimation can be found in [44,50,51,52].

2.3. Dual State and Parameter Estimation

In dual estimation, two independent filters operate in parallel: one for state estimation and the other for model parameter identification. The state filter utilizes system measurements and the current model to estimate the dynamic state variables, while the parameter filter relies on the updated state estimates to refine the parameter estimates. This structure enables a decoupled yet interactive estimation process that enhances adaptability and robustness. A schematic of the dual estimation framework is illustrated in Figure 1.
The dual estimation approach has been applied successfully in various engineering domains. For example, in [53], the Dual Extended Kalman Filter (DEKF) was employed to estimate both vehicle states and parameters, specifically tire cornering stiffness, which changes under different driving conditions. By allowing continuous adaptation of the parameters as vehicle dynamics evolved, the method dramatically improved state estimation accuracy and enhanced real-time vehicle control performance.
Similarly, in [54] the Dual Unscented Kalman Filter (DUKF) was utilized for state and parameter estimation in surface-atmosphere exchange processes, addressing missing data in net ecosystem CO2 exchange (NEE). The DUKF provided continuous updates that outperformed conventional gap-filling techniques by improving estimation accuracy and reducing subjectivity, while effectively accounting for uncertainties in both model structure and measurements.
In [55], the Dual Cubature Kalman Filter (DCKF) was utilized to estimate the state of charge and circuit model parameters in lithium-ion batteries, delivering accurate and reliable results under varying operational conditions. Additionally, in [56], the Dual Ensemble Kalman Filter (DEnKF) was implemented to enhance streamflow forecasting, enabling simultaneous estimation of hydrological states and parameters, such as soil moisture and storage capacities, within a dynamic model. This approach addressed limitations of traditional calibration techniques by enabling continuous updates in response to environmental changes.
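A minimal sketch of the dual architecture for a scalar plant $x_{k+1} = a x_k + u_k$ is given below: a state filter runs with the current parameter estimate, while a parallel scalar parameter filter regresses each new measurement on the latest state estimate. The plant, noise levels, and regressor choice are illustrative assumptions, not a formulation taken from the cited works.

```python
import numpy as np

# Dual-estimation sketch for the scalar plant x_{k+1} = a*x_k + u_k:
# one KF estimates x using the current a_hat, a second scalar KF
# (random-walk model for a) updates a_hat from the latest x estimate.
rng = np.random.default_rng(7)
a_true, a_hat = 0.9, 0.5
x_true, x_hat = 1.0, 0.0
Px, Pa = 1.0, 1.0                          # state / parameter variances
Qx, Qa, R = 1e-4, 1e-6, 1e-2

for k in range(500):
    u = np.sin(0.1 * k)                    # persistently exciting input
    x_true = a_true * x_true + u + 1e-2 * rng.standard_normal()
    y = x_true + 0.1 * rng.standard_normal()

    x_prev = x_hat                         # state estimate at time k-1

    # --- State filter (uses current parameter estimate a_hat) ---
    x_pred = a_hat * x_prev + u
    Px_pred = a_hat * Px * a_hat + Qx
    Kx = Px_pred / (Px_pred + R)
    x_hat = x_pred + Kx * (y - x_pred)
    Px = (1.0 - Kx) * Px_pred

    # --- Parameter filter (uses latest state estimate) ---
    # Measurement model: y ~ a * x_prev + u, so the regressor is x_prev.
    Pa_pred = Pa + Qa
    Sa = x_prev * Pa_pred * x_prev + R
    Ka = Pa_pred * x_prev / Sa
    a_hat = a_hat + Ka * (y - (a_hat * x_prev + u))
    Pa = (1.0 - Ka * x_prev) * Pa_pred
```

The two recursions exchange only their latest estimates, illustrating the decoupled-yet-interactive structure described above; in particular, the parameter filter could simply be frozen (skipping its update) once `Pa` falls below a threshold, which is the shut-off strategy discussed for the DEKF and MDUKF in the comparison section.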

2.4. Comparison

Both joint and dual state-parameter estimation methods aim to recover the same quantities, namely, the system states and the unknown parameters, from noisy measurements. The distinction lies not in the problem formulation but in the computational architecture used to achieve this goal, specifically: (1) In joint estimation, the state and parameter vectors are combined into a single augmented state vector, and one unified filter (e.g., EKF, UKF, CKF, or EnKF) simultaneously estimates them within a single recursive process. (2) In dual estimation, two filters operate in parallel: one estimates the states using the current parameter estimates, while the other updates the parameters based on the most recent state estimates. This decoupling reduces filter coupling errors and often improves numerical stability, at the cost of increased computational complexity and a possible time lag between state and parameter updates.
Numerous research studies have compared the performance of joint and dual estimation approaches across different applications, including chemical processes, vehicle dynamics, and environmental systems [44,49,53,54,56,57,58,59,60]. In [57] the Joint Extended Kalman Filter (JEKF) was evaluated against the Dual Extended Kalman Filter (DEKF) for simultaneous estimation of states and parameters in a highly nonlinear continuous stirred tank reactor (CSTR). The results highlighted the advantages of DEKF, particularly in reducing the risk of filter divergence by decoupling the state and parameter estimation processes. Moreover, the DEKF reduces computational load by allowing the parameter estimator to be turned off once the parameters converge to their optimal values, without compromising the accuracy of state estimation. This feature is particularly beneficial in systems where parameters stabilize over time, thus improving filter efficiency.
Another study in [58] proposed a modified version of the Dual Unscented Kalman Filter (MDUKF), which builds on the DUKF’s ability to deactivate the parameter estimator once sufficient accuracy is achieved. Unlike the standard DUKF, which continuously estimates state and parameter variables separately, the MDUKF improves performance by integrating a refined approach that allows the parameter estimator to be turned off once a satisfactory accuracy level is reached via a selective update mechanism. This strategy significantly reduces computational load while improving estimation performance. The effectiveness of this approach was demonstrated in the dual estimation of vehicle states and critical parameters such as side-slip angle, lateral tire-road forces, and the tire-road friction coefficient.
Similarly, the performance of the JCKF against the DCKF was assessed in [49]. The simulation results indicate that while both methods achieve accurate estimations, the DCKF consistently surpasses the JCKF in terms of accuracy, computational efficiency, and robustness, especially under varying conditions. These findings make the dual estimation approach the preferred choice for real-time applications, especially in scenarios where environmental conditions are unpredictable.
A concise summary of the main characteristics, advantages, and limitations of the joint and dual Kalman-based estimation methods is provided in Table 2.

2.5. Challenges with Kalman-Based Methods

The performance of Kalman-based filters heavily depends on the appropriate selection of the process and measurement noise covariance matrices, Q and R [1]. These matrices define the statistical properties of process and measurement noise, respectively. Achieving the right balance is crucial: if the covariance values are set too low, the filter may become overly sensitive to disturbances, leading to inaccurate estimates. Conversely, excessively high values can result in overly conservative estimates, diminishing the filter’s responsiveness and adaptability [1].
One major challenge is accurately modeling the complex and nonlinear noise sources that are prevalent in real-world applications [61]. Noise characteristics often change over time and are affected by environmental factors and operating conditions, making it challenging to accurately capture their statistical properties. Inaccurate noise modeling can result in incorrect assumptions about system behavior, leading to degraded filter performance or even divergence [62,63].
Another challenge lies in tuning critical algorithm-specific parameters, such as the unscented transform parameters ( α , β , and κ ) for the Unscented Kalman Filter (UKF) [64,65], the optimal placement and weights of cubature points for the Cubature Kalman Filter (CKF), and the appropriate number of ensemble members for the Ensemble Kalman Filter (EnKF). These parameters are often tuned manually with minimal theoretical guidance, relying on trial and error until satisfactory filter performance is achieved. However, this ad hoc approach is time-consuming and does not guarantee optimal parameter selection, especially given the large number of parameters involved [46].
To address these challenges, several research efforts have proposed systematic tuning approaches [66,67,68]. For example, a reference recursive recipe (RRR) has been developed for tuning Kalman filter covariance matrices, iteratively updating them based on sample statistics derived from the filter [69,70]. Another line of work applies particle swarm optimization (PSO) to fine-tune filter parameters, adjusting the covariance matrices at each time step by minimizing a predefined cost function such as the mean squared error [1,71,72]. Additionally, Ref. [64] introduces a systematic framework for tuning the scaling parameters α , β , and κ of the unscented transform in the UKF, providing a structured approach that enhances filter performance while reducing reliance on manual tuning.
These advancements provide promising alternatives to traditional filter parameter selection methods, enabling more efficient and accurate selection and enhancing the robustness of Kalman-based filters in practical applications.
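As a minimal illustration of covariance tuning by optimization, the sketch below tunes the scalar process and measurement variances of a one-dimensional Kalman filter. A coarse grid search stands in for the PSO optimizer of [72], and, since the true state is known in simulation, the cost is the state-estimation mean squared error; the plant model and all numerical values are assumed for illustration only.

```python
import numpy as np

def kf_estimates(y, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk model:
       x_{k+1} = x_k + w_k (variance q),  y_k = x_k + v_k (variance r)."""
    x, p = x0, p0
    out = np.empty(len(y))
    for i, yk in enumerate(y):
        p = p + q                 # predict: propagate error covariance
        k = p / (p + r)           # Kalman gain
        x = x + k * (yk - x)      # update with the innovation
        p = (1.0 - k) * p
        out[i] = x
    return out

rng = np.random.default_rng(0)
true_x = np.cumsum(0.05 * rng.standard_normal(500))   # slowly drifting state
y = true_x + 0.5 * rng.standard_normal(500)           # noisy measurements

# Coarse grid search over (q, r) standing in for a PSO-style optimizer
grid = [(q, r) for q in (1e-4, 1e-3, 1e-2, 1e-1) for r in (0.01, 0.1, 0.25, 1.0)]
cost = lambda qr: np.mean((kf_estimates(y, *qr) - true_x) ** 2)
q_best, r_best = min(grid, key=cost)
print("selected (q, r):", q_best, r_best)
```

In practice the true state is unavailable, so innovation-based costs or the recursive statistics of the RRR are used instead of the ground-truth error above.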

3. Simultaneous States and Parameters Estimation: Observer-Based Methods

Observer-based approaches provide an alternative framework to Kalman filtering for the simultaneous estimation of system states and parameters. These methods are grounded in deterministic system theory and are often favored for their conceptual simplicity, reduced computational requirements, and ease of implementation in certain classes of systems. This section presents a range of observer-based techniques that have been explored in the literature for simultaneous state and parameter estimation. For consistency with the formulations in Section 2, all observer-based approaches discussed in this section are derived from the general discrete-time nonlinear system model (3) and (4), with specific assumptions (linearity, time-variation, or parameter dependence) applied as required by each observer type.

3.1. Observer-Based Methods for Simultaneous State and Parameters Estimation

State estimation via Luenberger observers was first introduced by D. Luenberger [14]. It typically has the following form
$$\hat{x}_{k+1} = A\,\hat{x}_k + B\,u_k + L\,(y_k - C\,\hat{x}_k)$$
where x ^ k is the estimated state, u k is the control input, y k is the system output, A and B are the system matrices, and C is the output matrix. L is the observer gain matrix, selected to place the poles of the error dynamics matrix ( A − L C ) at locations determined by the desired convergence rate of the estimates. One major assumption in the design of the Luenberger observer is that the system model (i.e., the A , B , and C matrices) matches the true plant dynamics (i.e., Equation (1)).
Note: To maintain consistency with the notation used in Section 2, the matrices A , B , and C in Equation (50) correspond respectively to F , B , and H in Equation (1).
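For illustration, the observer in Equation (50) can be implemented in a few lines. The sketch below uses an assumed second-order discrete-time plant and a hand-picked gain L, first verifying that A − L C is Schur stable and then confirming that the estimation error decays despite a wrong initial guess.

```python
import numpy as np

# Illustrative second-order discrete-time plant (all values assumed)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

# Observer gain chosen so that A - L C is Schur stable
L = np.array([[0.9], [0.5]])
assert np.all(np.abs(np.linalg.eigvals(A - L @ C)) < 1.0)

x = np.array([[1.0], [-1.0]])   # true state
x_hat = np.zeros((2, 1))        # observer state, deliberately wrong initial guess
for k in range(100):
    u = np.array([[np.sin(0.1 * k)]])
    y = C @ x
    # Luenberger update, Equation (50)
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    x = A @ x + B @ u

print("estimation error norm:", np.linalg.norm(x - x_hat))
```

Because the model matches the plant exactly here, the error obeys e_{k+1} = (A − L C) e_k and vanishes at the rate set by the chosen eigenvalues.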
The application of Luenberger observers for simultaneous state and parameter estimation in linear time-invariant systems was first explored in the early 1970s [73]. The fundamental approach involved augmenting the observer with integrators to estimate constant parameters alongside system states. Later developments extended these methods to time-varying systems [74].
For linear systems, a simple approach is to augment the state vector with the unknown parameters using an augmented Luenberger observer [75]. An alternative approach involves organizing the unknown parameters into structured uncertainty matrices, which are then identified using a modified Luenberger observer architecture in combination with a parameter estimation algorithm to simultaneously perform state and parameter estimation [15].
For nonlinear systems, adaptive observers [76], high-gain observers [77], and sliding mode observers are popular. These techniques provide robust alternatives for handling system nonlinearities and uncertainties. A common assumption across all these methods is that the parameter vector θ is observable from the available measurements, ensuring the system’s states and parameters can be effectively estimated.
The reviewed literature demonstrates a variety of observer-based approaches for simultaneous state and parameter estimation, offering effective alternatives to stochastic filtering methods, especially in systems where model structure is well-understood and noise characteristics are not dominant.

3.1.1. Augmented Luenberger Observer

Consider the discrete-time linear system defined as
$$x_{k+1} = A\,x_k + B\,u_k + A_{\theta,k}\,\theta_k,$$
$$y_k = C\,x_k,$$
where x k ∈ R n is the state vector, u k ∈ R m is the control input, y k ∈ R p is the measurement output, and θ k ∈ R q is the vector of unknown parameters. The matrices A , B , and C represent the system dynamics, input, and output, respectively, while A θ , k is the parameter matrix, capturing the influence of the parameters on the system dynamics.
In this method, the modified observer equation for the augmented system can be written as [75]
$$\hat{z}_{k+1} = A(\hat{\theta}_k)\,\hat{z}_k + B(\hat{\theta}_k)\,u_k + L\big(y_k - C(\hat{\theta}_k)\,\hat{z}_k\big),$$
where z ^ k = [ x ^ k T θ ^ k T ] T is the estimate of the augmented state vector, and L is the observer gain matrix. The gain L is designed to ensure that the matrix A ( θ ^ k ) − L C ( θ ^ k ) is Schur stable, meaning that all of its eigenvalues lie strictly inside the unit circle. As in the standard Luenberger observer, the output error term ( y k − C ( θ ^ k ) z ^ k ) is used to correct the estimates, driving them towards the true values. The convergence rate of the observer can be adjusted when designing the observer gain L , commonly via pole placement or linear matrix inequalities (LMIs) [78,79].
The system matrices A ( θ k ) , B ( θ k ) , and C ( θ k ) of the augmented system in (53) can be written as
$$A(\theta_k) = \begin{bmatrix} A & A_{\theta}(\theta_k) \\ 0 & I \end{bmatrix}, \qquad B(\theta_k) = \begin{bmatrix} B(\theta_k) \\ 0 \end{bmatrix}, \qquad C(\theta_k) = \begin{bmatrix} C(\theta_k) & 0 \end{bmatrix}.$$
It is commonly assumed that the influence of the parameters is incorporated into the state matrix A ( θ k ) . However, in practice, parameters may also affect the input matrix B [15] or the output matrix C [80]. This flexibility allows the observer structure to be adapted to different system representations. However, a notable drawback of this method is its restriction to linear systems, as described in Equations (51) and (52), along with the assumption that the parameters are constant or vary slowly ( θ ˙ ( t ) = 0 ). When the unknown parameters undergo significant variations, this assumption can degrade performance, preventing the estimates from accurately tracking changes in the true parameters. Moreover, since many real-world systems are inherently nonlinear, directly applying this method often fails to achieve effective state-parameter estimation. In such cases, alternative approaches, such as adaptive observers, are required to overcome these challenges.
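As an illustrative sketch of the augmented formulation above, the following example uses an assumed scalar plant with an unknown constant bias parameter, stacks the parameter into the state vector, and runs a standard Luenberger observer on the augmented system. All numerical values are assumed.

```python
import numpy as np

# Scalar plant with an unknown constant bias parameter theta (values assumed):
#   x_{k+1} = 0.8 x_k + u_k + theta,   y_k = x_k
a, theta_true = 0.8, 0.3

# Augmented system z = [x, theta]^T, with theta_{k+1} = theta_k
A_aug = np.array([[a, 1.0],
                  [0.0, 1.0]])
B_aug = np.array([[1.0], [0.0]])
C_aug = np.array([[1.0, 0.0]])

# Gain placing the eigenvalues of A_aug - L C_aug inside the unit circle
L = np.array([[1.0], [0.2]])
assert np.all(np.abs(np.linalg.eigvals(A_aug - L @ C_aug)) < 1.0)

x = 0.0
z_hat = np.zeros((2, 1))          # [x_hat, theta_hat]
for k in range(200):
    u = np.sin(0.05 * k)
    y = np.array([[x]])
    # Augmented Luenberger update: the observer estimates state and bias jointly
    z_hat = A_aug @ z_hat + B_aug * u + L @ (y - C_aug @ z_hat)
    x = a * x + u + theta_true

print("theta estimate:", z_hat[1, 0])
```

The augmented pair (A_aug, C_aug) is observable here, so the parameter estimate converges along with the state; if θ varied quickly, the constant-parameter model inside A_aug would lag behind, as noted above.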

3.1.2. Observer-RLSE-CR Method

The Observer-RLSE-CR method integrates deterministic state estimation with adaptive parameter identification in systems with time-varying uncertainties. In this approach, the system matrices are expressed as the sum of a nominal and a perturbation matrix [15]
$$x_{k+1} = A(k)\,x_k + B(k)\,u_k$$
$$y_k = C\,x_k$$
where x k ∈ R n is the system state, u k ∈ R m is the system input, and y k ∈ R p is the measured system output. The system matrices are defined as A ( k ) = A 0 + Δ A ( k ) ∈ R n × n and B ( k ) = B 0 + Δ B ( k ) ∈ R n × m , where A 0 and B 0 are the nominal system parameters, and Δ A ( k ) and Δ B ( k ) are the unknown time-varying parameters. The system output matrix, C ∈ R p × n , defines the individual state(s) that are measured.
The study in [15] proposes an Observer-RLSE-CR framework for the joint estimation of system states and parameters. The proposed algorithm integrates a modified Luenberger equation for state estimation, as shown in Equation (57), while concurrently estimating the uncertainty matrices online using recursive least squares estimation with covariance reset (RLSE-CR). Specifically,
$$\hat{x}_{k+1} = \big(A_0 + \Delta\hat{A}(k)\big)\,\hat{x}_k + L\,\big(y_k - C\,\hat{x}_k\big) + \big(B_0 + \Delta\hat{B}(k)\big)\,u_k$$
where x ^ k is the state estimate, L is the observer gain, Δ A ^ ( k ) and Δ B ^ ( k ) are the estimates of the system state and input uncertainty matrices, respectively. These time-varying uncertainty estimates are obtained online using the RLSE-CR algorithm, which recursively minimizes a least-squares cost function while implementing a covariance reset mechanism to improve estimator stability and responsiveness. The reset is activated once convergence criteria are met, thereby enhancing adaptability in the presence of parameter drift or abrupt changes.
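The recursive least-squares update with covariance reset can be sketched as follows. This is a generic RLS step with forgetting and an illustrative trace-based reset rule standing in for the convergence criteria of [15]; the regressor, parameter trajectory, and all numerical values are assumed.

```python
import numpy as np

def rlse_cr_step(theta_hat, P, phi, y, lam=0.95, p_reset=100.0, p_min=1e-3):
    """One recursive least-squares update with a simple covariance-reset rule.
    The reset trigger (trace of P below p_min) is an illustrative stand-in
    for the convergence criteria used in the RLSE-CR algorithm."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)            # estimator gain
    theta_hat = theta_hat + k * (y - phi.T @ theta_hat)
    P = (P - k @ phi.T @ P) / lam                    # covariance update
    if np.trace(P) < p_min:                          # covariance reset
        P = p_reset * np.eye(P.shape[0])
    return theta_hat, P

rng = np.random.default_rng(1)
theta_hat = np.zeros((1, 1))
P = 100.0 * np.eye(1)
for k in range(300):
    theta_true = 2.0 if k < 150 else -1.0            # abrupt parameter change
    phi = np.array([np.sin(0.2 * k) + 1.5])          # persistently exciting regressor
    y = float(phi @ np.array([theta_true])) + 0.01 * rng.standard_normal()
    theta_hat, P = rlse_cr_step(theta_hat, P, phi, y)

print("final estimate:", theta_hat[0, 0])
```

The forgetting factor lets the estimator discount old data after the abrupt change, while the reset keeps the covariance, and hence the update gain, from collapsing to zero.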
The flowchart of the algorithm proposed in [15] is presented in Figure 2. Further insights into the parameter estimation process can be found in [15]. Additionally, a systematic method is introduced for selecting the sliding window size p in the RLSE-CR procedure, based on the desired convergence rate and the system settling time.
A comparison between the Observer-RLSE-CR estimator, the augmented Kalman filter, and the Kalman filter coupled with RLSE is provided in [15]. While all the algorithms effectively estimate both states and parameters, the Observer-RLSE-CR has shown benefits over the Kalman-based filters in capturing uncertainties within the input uncertainty matrix Δ B .
This approach presents several advantages over the augmented Luenberger observer. It not only enables accurate joint estimation of system states and time-varying parameters but also effectively mitigates noise introduced by measurement sensors. However, a key limitation of this method is its restriction to linear systems.

3.1.3. High Gain Observer

High-gain observers are widely used for nonlinear systems or systems with varying dynamics [81,82]. They employ a large observer gain to amplify the output error, enabling rapid correction of the estimates and fast convergence to the true values. In linear systems, a standard augmented Luenberger observer formulation is generally adopted, with the observer gain set to a high level. For nonlinear systems, a more structured approach is required, often involving transforming the system into an observable canonical form and defining an appropriate parameter estimation law before applying the high-gain observer.
Consider the discrete-time nonlinear system defined by
$$x_{k+1} = f(x_k, u_k, \theta_k)$$
$$y_k = h(x_k)$$
where x k ∈ R n is the system state, u k ∈ R m is the system input, y k ∈ R p is the system output, θ k ∈ R q is the vector of unknown parameters, and f and h are nonlinear functions describing the system dynamics and output behavior. The parameters may be constant or slowly varying, evolving as θ k + 1 = g ( θ k ) , where g is the dynamic law governing the parameter variation over time.
When dealing with constant parameters, i.e., θ k + 1 = θ k , the estimation process is simplified. An immediate method for concurrently estimating the system states and unknown parameters is to augment the state vector with the unknown parameters [77]. Defining the augmented state vector as
$$z_k = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix} \in \mathbb{R}^{n+q}$$
Thus, the high-gain observer for the nonlinear augmented system can be written as
$$\hat{z}_{k+1} = \begin{bmatrix} f(\hat{x}_k, u_k, \hat{\theta}_k) \\ \hat{\theta}_k \end{bmatrix} + L\,\big(y_k - h(\hat{x}_k)\big)$$
where L is the high-gain observer matrix for the augmented system. The innovation term ( y k − h ( x ^ k ) ) corrects the estimates based on output discrepancies.
For time-varying parameters, a separate adaptive update law is introduced to allow the observer to track parameter changes, as follows
$$\hat{\theta}_{k+1} = \hat{\theta}_k + \gamma_k\,\big(y_k - h(\hat{x}_k)\big)$$
Here, γ k is a time-varying learning rate that controls the adaptation speed. The innovation term continues to serve as the correction signal, enabling the parameter estimate to follow dynamic changes effectively.
Meanwhile, the state estimates continue to evolve based on the standard high-gain observer formulation, using the estimated parameters to generate forward predictions, as follows
$$\hat{x}_{k+1} = f(\hat{x}_k, u_k, \hat{\theta}_k) + L_x\,\big(y_k - h(\hat{x}_k)\big)$$
where L x is the high-gain matrix for the state estimation.
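The parameter and state updates above can be sketched for an assumed scalar plant with an unknown input gain. Here the time-varying learning rate is chosen as γ_k = γ u_k, reflecting the output's sensitivity to θ for this particular plant; the plant, gains, and input are all assumptions made for illustration.

```python
import numpy as np

# Illustrative scalar plant with unknown input gain theta (values assumed):
#   x_{k+1} = x_k + dt * (-x_k + theta * u_k),   y_k = x_k
dt, theta_true = 0.01, 2.0

def f(x, u, theta):
    return x + dt * (-x + theta * u)

x, x_hat, theta_hat = 1.0, 0.0, 0.0
L_x, gamma = 0.5, 0.5
for k in range(2000):
    u = 1.0 + np.sin(0.01 * k)       # persistently exciting input
    e = x - x_hat                    # innovation y_k - h(x_hat_k)
    # Parameter update with learning rate gamma_k = gamma * u, matching
    # the sensitivity of the output to theta for this plant (an assumption)
    theta_hat = theta_hat + gamma * u * e
    # High-gain state update using the latest parameter estimate
    x_hat = f(x_hat, u, theta_hat) + L_x * e
    x = f(x, u, theta_true)

print("theta estimate:", theta_hat)
```

With a sign-consistent learning rate the coupled error dynamics contract, and the parameter estimate settles at the true input gain; flipping the sign of γ_k would drive the estimate away instead, which is why the sensitivity direction matters.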
Another approach that can be adopted for the joint state-parameter estimation is concurrently employing a high-gain observer (i.e., Equation (63)) for state estimation and another estimator such as a Kalman filter or a Recursive Least Squares for parameter estimation.
One drawback of high-gain observers is their heightened sensitivity to modeling errors and measurement noise. The large gains can amplify noise, leading to reduced estimation accuracy and diminished performance. Additionally, high-gain observers often require the system to be expressed in a particular form, such as an observable canonical form or strict-feedback form, thereby limiting their applicability to certain systems.

3.1.4. Sliding Mode Observer

State estimation via sliding mode observers (SMOs) was first introduced in the mid-1980s [83,84]. For the system x k + 1 = A x k + B u k , y k = C x k , the SMO takes a form similar to that of the standard observer, with the innovation term replaced by a discontinuous switching function, as shown below [84]
$$\hat{x}_{k+1} = A\,\hat{x}_k + B\,u_k + L\,\operatorname{sign}(y_k - C\,\hat{x}_k)$$
The sliding mode term L sign ( y k − C x ^ k ) enhances the observer’s capacity to manage uncertainties and disturbances by introducing a sliding surface that forces the estimation error to converge to zero in finite time. However, in practical implementations, this mechanism can lead to undesirable chattering effects. To alleviate this, practitioners often replace the switching term with a saturation function, thereby reducing chattering while preserving robustness. Thus, the modified SMO can be written in the form
$$\hat{x}_{k+1} = A\,\hat{x}_k + B\,u_k + L\,\operatorname{sat}(e_k, \epsilon)$$
where e k = y k − C x ^ k is the estimation error and ϵ is a small positive threshold used to reduce the effect of chattering. The saturation function is defined as [85,86]
$$\operatorname{sat}(e_k, \epsilon) = \begin{cases} \dfrac{e_k}{\epsilon}, & \text{if } |e_k| < \epsilon, \\[4pt] \operatorname{sign}(e_k), & \text{otherwise.} \end{cases}$$
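The saturation-based SMO can be sketched as follows for an assumed plant with a bounded matched disturbance. Note that in discrete time the effective gain L/ϵ inside the boundary layer must keep A − (L/ϵ)C Schur stable, so the gains below are deliberately modest; the plant, disturbance, and all values are assumptions for illustration.

```python
import numpy as np

def sat(e, eps):
    """Saturation replacement for sign() that reduces chattering."""
    return np.clip(e / eps, -1.0, 1.0)

# Illustrative plant with an unknown bounded disturbance (values assumed)
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4], [0.1]])   # switching gain, sized so A - (L/eps)C stays Schur
eps = 0.5                      # boundary-layer width

# Inside the boundary layer the observer behaves linearly; check stability there
assert np.all(np.abs(np.linalg.eigvals(A - (L / eps) @ C)) < 1.0)

x = np.array([[1.0], [0.0]])
x_hat = np.zeros((2, 1))
for k in range(300):
    u = np.array([[np.sin(0.1 * k)]])
    d = np.array([[0.0], [0.02 * np.sign(np.sin(0.3 * k))]])  # matched disturbance
    y = C @ x
    e = y - C @ x_hat
    x_hat = A @ x_hat + B @ u + L * sat(e, eps)   # modified SMO with saturation
    x = A @ x + B @ u + d

print("output estimation error:", abs((C @ (x - x_hat))[0, 0]))
```

Outside the boundary layer the bounded switching term pulls the error toward the layer; inside it, the correction acts like a linear observer, trading finite-time convergence for chatter-free estimates.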
Various strategies have been proposed to extend SMOs for joint estimation of states and parameters. One approach involves augmenting the state vector with unknown parameters, treating them as additional state variables to be estimated using a single SMO framework [87]. Another method employs dual SMO structures, with one observer dedicated to state estimation and the other to parameter identification, operating in parallel [88]. An alternative strategy uses an adaptive SMO for state estimation coupled with a separate online parameter estimator. In this configuration, the parameter estimator continuously updates the model parameters in real time, and the observer adapts accordingly [89].

3.1.5. Adaptive Observer

Adaptive observers are commonly used for nonlinear time-varying systems [74,90,91]. Consider the nonlinear system defined in Equations (58) and (59). The observer for the state estimates can be represented by the following equation
$$\hat{x}_{k+1} = f(\hat{x}_k, u_k, \hat{\theta}_k) + L_k\,e_k$$
where L k is the nonlinear, time-varying observer gain, and e k = y k y ^ k is the estimation error. For adaptive observers, an adaptation law is designed to update the parameter estimates as follows
$$\hat{\theta}_{k+1} = \hat{\theta}_k + \Gamma_k\,\Phi_k\,e_k$$
where Γ k is a positive definite matrix controlling the rate of parameter adaptation and Φ k is the regressor related to the system dynamics. For simple problems, Γ k is often set constant such that
$$\Gamma = \begin{bmatrix} \gamma_1 & 0 & \cdots & 0 \\ 0 & \gamma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \gamma_q \end{bmatrix}$$
where γ i > 0 are constants that determine the adaptation speed of each parameter θ i . Larger values of γ i lead to faster convergence but may also amplify noise and destabilize the system if set too high.
In more advanced implementations, time-varying gain matrices Γ k are adopted. One strategy involves decreasing the values of Γ k as the estimation error diminishes, helping to stabilize the adaptation over time. Another common method leverages Lyapunov stability theory to guarantee asymptotic stability of the estimation error [90,92].
Defining the Lyapunov function
$$V_k = e_k^{\top} P\, e_k + \tilde{\theta}_k^{\top}\, \Gamma_k^{-1}\, \tilde{\theta}_k$$
where P is a symmetric positive definite matrix, and θ ˜ k = θ − θ ^ k is the parameter estimation error.
Stability is ensured by choosing Γ k such that the Lyapunov difference satisfies Δ V k = V k + 1 − V k < 0 [93].
Adaptive observers are particularly effective in nonlinear, time-varying settings to achieve accurate state and parameter estimation. They rely on an adaptive law that continuously updates parameters over time. The key features of their design include the time-varying observer gain L k , the adaptation law driven by the estimation error e k , and the parameter update rate Γ k , which may be constant or time-varying, adaptively tuned to maintain stability and desired performance.
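The observer and adaptation law above can be sketched for an assumed scalar plant with an unknown decay parameter. The regressor Φ_k is taken here as the sensitivity of the next output to the parameter (−dt x̂_k for this plant), and the gains, plant, and input are all illustrative assumptions rather than a prescribed design.

```python
import numpy as np

# Illustrative plant with unknown decay parameter a (all values assumed):
#   x_{k+1} = x_k + dt * (-a * x_k + u_k),   y_k = x_k
dt, a_true = 0.01, 2.0

x, x_hat, a_hat = 1.0, 0.0, 0.0
L, gamma = 0.5, 50.0                  # observer gain L_k and adaptation rate
for k in range(5000):
    u = 1.0 + np.sin(0.01 * k)        # persistently exciting input
    e = x - x_hat                     # estimation error e_k = y_k - y_hat_k
    phi = -dt * x_hat                 # regressor: sensitivity of x_{k+1} to a
    a_hat = a_hat + gamma * phi * e   # adaptation law  theta_hat += Gamma * Phi * e
    x_hat = x_hat + dt * (-a_hat * x_hat + u) + L * e   # adaptive observer update
    x = x + dt * (-a_true * x + u)

print("estimated parameter:", a_hat)
```

The input here keeps the regressor persistently exciting; with a constant input the product Φ_k e_k would vanish as the state error converges, and the parameter estimate could stall short of the true value, which is exactly the persistent-excitation requirement discussed in Section 3.2.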

3.2. Challenges with Observer-Based Methods

The performance of the Luenberger observer depends heavily on the accuracy of the system model. Modeling uncertainties or incorrect system dynamics can significantly degrade its estimation capabilities, and in some cases, lead to divergence [94]. Furthermore, a major limitation of observer-based methods is their sensitivity to noise, as they do not explicitly incorporate process and measurement noise the way Kalman-based filters do. This makes them particularly vulnerable in noisy environments, often resulting in biased or noisy estimates of states and parameters.
High-gain observers are especially sensitive to measurement noise. Although the use of large gains accelerates convergence, it also amplifies noise, resulting in a trade-off between convergence speed and robustness. Proper tuning of the observer gain is therefore critical, as excessive gain values may destabilize the estimation process or introduce significant noise artifacts into the state estimates.
Sliding mode observers face their own set of challenges, notably the chattering effect caused by the high-frequency switching nature of the sliding mode correction term. This can introduce undesirable oscillations that can be detrimental to physical systems. Furthermore, designing effective SMOs often requires careful selection of the sliding surface and observer parameters, typically achieved through trial-and-error procedures, which further complicates practical implementation and tuning.
Adaptive observers pose unique challenges as well. One key requirement is the presence of persistent excitation in the system inputs to ensure convergence of parameter estimates. If this condition is not met, the observer may fail to adequately capture parameter dynamics, resulting in poor estimation accuracy. Additionally, the design of the adaptation gain matrix must balance speed of convergence with robustness to noise and modeling uncertainties.
Finally, across all observer types, another common limitation is their dependence on accurate initial conditions. Poorly initialized state or parameter estimates can lead to slow convergence, large transient errors, or in some cases, complete divergence of the estimation process.

4. Comparative Discussion of Kalman-Based and Observer-Based Approaches

From a broader perspective, Kalman-based and observer-based approaches represent two complementary philosophies in simultaneous state and parameter estimation. Kalman-based filters, rooted in stochastic estimation theory, explicitly account for process and measurement noise through covariance modeling and probabilistic estimation. When the statistical properties of the noise are well characterized, these filters provide statistically optimal estimates in the minimum mean-square error sense. The probabilistic foundation enables Kalman-based algorithms to achieve high robustness against random disturbances, measurement uncertainties, and modeling imperfections. However, these advantages come at the expense of increased computational cost and sensitivity to incorrect noise covariance tuning or modeling errors, which can lead to divergence or degraded performance, particularly in highly nonlinear or time-varying systems.
Observer-based methods, by contrast, are grounded in deterministic system theory. They reconstruct system states and parameters through feedback mechanisms that exploit the structure of the underlying model rather than probabilistic assumptions. Because they do not rely on explicit noise statistics, observers are often applied in systems where the model dynamics are well characterized but noise properties are uncertain or difficult to quantify. While this independence from statistical noise modeling simplifies implementation, it also limits robustness to measurement disturbances. Nevertheless, deterministic observers generally exhibit faster convergence, simpler implementation, and lower computational requirements than their Kalman-based counterparts. However, they lack built-in uncertainty quantification mechanisms (e.g., they do not, by construction, produce uncertainty measures such as covariance matrices or confidence intervals) and their performance may degrade in the presence of significant measurement noise, unmodeled dynamics, or parameter drift.
In practical terms, the selection between Kalman-based and observer-based approaches depends on the nature of the system and the information available about its uncertainty sources. For systems dominated by stochastic disturbances, where statistical modeling is feasible and computational resources are sufficient, Kalman-based filters remain the preferred choice. Conversely, for systems with reliable deterministic models, limited sensor data, or stringent real-time constraints, observer-based approaches often provide a more efficient and flexible solution. A concise comparison of the key characteristics, advantages, and limitations of Kalman-based and observer-based approaches is presented in Table 3.
Recent studies have also explored hybrid estimation frameworks that integrate the statistical optimality of Kalman filtering with the structural adaptability of deterministic observers. Such hybrid schemes have been proposed across various domains, including vehicle dynamics, robotics, and energy systems, to enhance robustness and estimation accuracy under mixed stochastic and deterministic uncertainties [88,95,96,97]. These hybrid architectures show promising directions for achieving improved estimation accuracy and stability, particularly in nonlinear systems with time-varying parameters.

5. Conclusions

Presented in this manuscript is a comprehensive review and a comparative analysis of various methodologies explored in the literature for the simultaneous estimation of system states and parameters, focusing on approaches based on Kalman filters and Luenberger observers. While Kalman-based methods excel in handling stochastic noise, observer-based methods offer simplicity and efficiency in deterministic settings.
The methods reviewed are generally categorized into three main approaches:
  • Augmented State Approach: The unknown parameters are treated as additional states within the system, leading to an augmented state vector. An observer is then designed to estimate the full set of states, including both the system states and the unknown parameters. This method is advantageous in systems where the states and parameters are interdependent, but the complexity of the augmented observer increases with the number of parameters, which can affect computational efficiency and robustness to noise.
  • Decoupled Estimation Approach: The estimation tasks for the unknown parameters and the system states are decoupled, and two separate observers are run concurrently, one for state estimation and the other for parameter estimation. This approach simplifies the observer design by separating the two estimation problems. However, challenges arise when strong interactions exist between the system states and parameters, which can lead to inaccuracies or slow convergence.
  • Parameter Identification Coupled with State Estimation: A parameter identification technique is used to estimate the unknown parameters, which is then coupled with a state observer for state estimation. This approach leverages well-established parameter estimation techniques, such as least squares, in conjunction with traditional state observers. While this method can be efficient, its accuracy and convergence are highly dependent on the parameter identification process and the model’s sensitivity to parameter changes.
The challenges and limitations associated with each estimation method have been explored. A significant challenge is noise sensitivity, particularly in methods like the Luenberger observer, which rely heavily on accurate model assumptions. Robustness to noise is a critical factor, especially in real-world systems where measurements are often noisy or uncertain. Additionally, the computational complexity of the methods varies significantly, with some requiring substantial resources due to the complexity of the observer design or the number of parameters involved.
Each method offers distinct advantages and trade-offs depending on the system’s characteristics and performance requirements. For example, the augmented state approach provides a unified framework for simultaneous state and parameter estimation but may impose a higher computational burden. On the other hand, decoupled approaches can reduce computational effort but may struggle when state and parameter dynamics are strongly coupled.
The selection of the most appropriate method ultimately depends on several factors, including the quality and quantity of available data, the complexity of the system dynamics, the accuracy of the system model, and the desired level of computational efficiency. Future research could focus on hybrid approaches that integrate the strengths of multiple methods, as well as adaptive techniques that dynamically adjust the estimation strategy based on real-time data characteristics and evolving system behavior. These advancements could further enhance the robustness and applicability of state and parameter estimation methods in complex, noisy environments.

Future Research Directions

Although significant progress has been made in simultaneous state and parameter estimation using Kalman filters and observer-based approaches, several important research challenges remain open. One promising direction is the development of hybrid stochastic-deterministic frameworks that leverage the complementary strengths of both paradigms. For example, embedding observer feedback structures within Kalman-based filters could improve numerical stability and convergence under parameter uncertainty, while integrating covariance adaptation into deterministic observers may enhance robustness to noise and unmodeled dynamics.
A second key challenge concerns the treatment of strongly coupled states and parameters (e.g., cases where the contributions of states and parameters to the measured outputs are difficult to distinguish), where interdependence reduces observability and can lead to estimator divergence or slow convergence. Future research should focus on systematic observability analysis, parameter sensitivity quantification, and coupling-decoupling strategies that preserve estimation accuracy while maintaining computational tractability.
Another important direction involves handling fast time-varying parameters. Most existing simultaneous state and parameter estimation algorithms assume slowly varying or piecewise-constant (quasi-static) parameters (i.e., θ k + 1 = θ k within the estimation window), which limits their applicability in systems with rapid parameter evolution. Addressing this issue requires adaptive observers and Kalman filter variants capable of tracking fast parameter dynamics without compromising numerical stability or state accuracy.
Furthermore, data-driven and learning-assisted estimation techniques are expected to play an increasingly important role. The fusion of physics-based observers with neural or regression-based parameter estimators could enable real-time adaptation to nonlinear or partially known dynamics while maintaining interpretability and physical consistency.
From a computational perspective, the scalability of simultaneous state and parameter estimation algorithms to high-dimensional systems remains an unsolved challenge. Efficient reduced-order, distributed implementations, particularly for systems with large augmented state vectors, represent a critical area for future exploration.

Author Contributions

Conceptualization, A.C. and M.A.F.; investigation, A.C.; resources, M.A.F.; writing—original draft preparation, A.C.; writing—review and editing, M.A.F. and K.G.; supervision, M.A.F.; project administration, M.A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KF: Kalman Filter
EKF: Extended Kalman Filter
UKF: Unscented Kalman Filter
CKF: Cubature Kalman Filter
EnKF: Ensemble Kalman Filter
JEKF: Joint Extended Kalman Filter
JUKF: Joint Unscented Kalman Filter
JCKF: Joint Cubature Kalman Filter
JEnKF: Joint Ensemble Kalman Filter
DEKF: Dual Extended Kalman Filter
DUKF: Dual Unscented Kalman Filter
DCKF: Dual Cubature Kalman Filter
DEnKF: Dual Ensemble Kalman Filter
LSE: Least Squares Estimation
RLSE: Recursive Least Squares Estimation
RLSE-CR: Recursive Least Squares Estimation with Covariance Reset
SMO: Sliding Mode Observer
PSO: Particle Swarm Optimization
RRR: Reference Recursive Recipe
LMI: Linear Matrix Inequality
PID: Proportional-Integral-Derivative controller

References

  1. Chebbi, A.; Grigoriadis, K.; Franchek, M.; Cescon, M. Substrate temperature estimation and control in advanced MOCVD process for superconductor manufacturing. Int. J. Adv. Manuf. Technol. 2024, 133, 273–285. [Google Scholar] [CrossRef]
  2. Zhao, J.; Zhu, Y.; Zhang, B.; Liu, M.; Wang, J.; Liu, C.; Hao, X. Review of State Estimation and Remaining Useful Life Prediction Methods for Lithium–Ion Batteries. Sustainability 2023, 15, 5014. [Google Scholar] [CrossRef]
  3. Alsaadi, F.E.; Liu, Y.; Alharbi, N.S. Design of robust H∞ state estimator for delayed polytopic uncertain genetic regulatory networks: Dealing with finite-time boundedness. Neurocomputing 2022, 497, 170–181. [Google Scholar] [CrossRef]
  4. Chebbi, A. Observer-Based Simultaneous States and Parameters Estimation Method with Application to System Health Monitoring. Ph.D. Dissertation, University of Houston, Houston, TX, USA, 2024. [Google Scholar]
  5. Kadakia, Y.A.; Alnajdi, A.; Abdullah, F.; Christofides, P.D. Encrypted distributed model predictive control with state estimation for nonlinear processes. Digit. Chem. Eng. 2023, 9, 100133. [Google Scholar] [CrossRef]
  6. Le, V.T.H.; Stoica, C.; Alamo, T.; Camacho, E.F.; Dumur, D. Zonotopes: From Guaranteed State-Estimation to Control; John Wiley & Sons, Incorporated: Hoboken, NJ, USA, 2013. [Google Scholar]
  7. Ricker, L.N. Model predictive control with state estimation. Ind. Eng. Chem. Res. 1990, 29, 374–382. [Google Scholar] [CrossRef]
  8. Holten, L.; Gjelsvik, A.; Aam, S.; Wu, F.; Liu, W.H. Comparison of different methods for state estimation. IEEE Trans. Power Syst. 1988, 3, 1798–1806. [Google Scholar] [CrossRef]
  9. Khodarahmi, M.; Maihami, V. A Review on Kalman Filter Models. Arch. Comput. Methods Eng. 2023, 30, 727–747. [Google Scholar] [CrossRef]
  10. Varshney, D.; Bhushan, M.; Patwardhan, S.C. State and parameter estimation using extended Kitanidis Kalman filter. J. Process Control 2019, 76, 98–111. [Google Scholar] [CrossRef]
  11. Reina, G.; Messina, A. Vehicle dynamics estimation via augmented Extended Kalman Filtering. Measurement 2019, 133, 383–395. [Google Scholar] [CrossRef]
  12. Urrea, C.; Agramonte, R. Kalman Filter: Historical Overview and Review of Its Use in Robotics 60 Years after Its Creation. J. Sens. 2021, 2021, 9674015. [Google Scholar] [CrossRef]
  13. Garcia, R.V.; Pardal, P.; Kuga, H.K.; Zanardi, M.C. Nonlinear filtering for sequential spacecraft attitude estimation with real data: Cubature Kalman Filter, Unscented Kalman Filter and Extended Kalman Filter. Adv. Space Res. 2019, 63, 1038–1050. [Google Scholar] [CrossRef]
  14. Luenberger, D.G. Observers for multivariable systems. IEEE Trans. Autom. Control 1966, 11, 190–197. [Google Scholar] [CrossRef]
  15. Chebbi, A.; Franchek, M.A.; Grigoriadis, K. A Modified Observer Method for the Joint Estimation of States and Parameters for the Class of Linear Uncertain Discrete-Time Systems. Int. J. Eng. Res. Appl. 2024, 14, 7–18. [Google Scholar]
  16. Unterrieder, C.; Priewasser, R.; Marsili, S.; Huemer, M. Battery State Estimation Using Mixed Kalman/H∞, Adaptive Luenberger and Sliding Mode Observer. In Proceedings of the 2013 IEEE Vehicle Power and Propulsion Conference (VPPC), Beijing, China, 15–18 October 2013. [Google Scholar]
  17. Hu, X.; Sun, F.; Zou, Y. Estimation of State of Charge of a Lithium-Ion Battery Pack for Electric Vehicles Using an Adaptive Luenberger Observer. Energies 2010, 3, 1586–1603. [Google Scholar] [CrossRef]
  18. Dong, N.V.; Thai, T.Q.; Duc, P.M.; Thuan, N.V. Estimation of Vehicle Dynamics States Using Luenberger Observer. Int. J. Mech. Eng. Robot. Res. 2019, 8, 430–436. [Google Scholar] [CrossRef]
  19. Gupta, S.D.; Yu, J.Y.; Mallick, M.; Coates, M.; Morelande, M. Comparison of Angle-only Filtering Algorithms in 3D Using EKF, UKF, PF, PFF, and Ensemble KF. In Proceedings of the 18th International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015. [Google Scholar]
  20. Khan, Z.A.; Shrivastava, P.; Amrr, S.M.; Mekhilef, S.; Algethami, A.A.; Seyedmahmoudian, M.; Stojcevski, A. A Comparative Study on Different Online State of Charge Estimation Algorithms for Lithium-Ion Batteries. Sustainability 2022, 14, 7412. [Google Scholar] [CrossRef]
  21. Si, J.; Niu, Y.; Wang, B. A Review of Nonlinear Filtering Algorithms in Integrated Navigation Systems. Sensors 2025, 25, 6462. [Google Scholar] [CrossRef] [PubMed]
  22. Roubache, T.; Chaouch, S.; Naït-Saïd, M.S. Comparative Study Between Luenberger Observer and Extended Kalman Filter for Fault-Tolerant Control of Induction Motor Drives. Adv. Model. Anal. C 2018, 73, 29–36. [Google Scholar] [CrossRef]
  23. Kirubarajan, T.; Bar-Shalom, Y. Kalman Filter versus IMM Estimator: When Do We Need the Latter? IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1452–1470. [Google Scholar] [CrossRef]
  24. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. Trans. ASME 1960, 82, 34–45. [Google Scholar] [CrossRef]
  25. Chui, C.K.; Chen, G. Kalman Filtering; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  26. Kordic, V. Kalman Filter; IntechOpen: Rijeka, Croatia, 2010. [Google Scholar]
  27. Jazwinski, A.H. Stochastic Processes and Filtering Theory; Mathematics in Science and Engineering; Academic Press: New York, NY, USA, 1970; Volume 64. [Google Scholar]
  28. The Analytic Sciences Corporation. Applied Optimal Estimation; MIT Press: Cambridge, MA, USA, 1974. [Google Scholar]
  29. Bierman, G.J. Factorization Methods for Discrete Sequential Estimation; Mathematics in Science and Engineering; Academic Press: New York, NY, USA, 1977; Volume 128. [Google Scholar]
  30. Feeley, J.J. State and Parameter Estimation in a Nuclear Fuel Pin Using the Extended Kalman Filter; Idaho National Engineering Lab.: Idaho Falls, ID, USA, 1979. [Google Scholar]
  31. Wilson, D.I.; Agarwal, M.; Rippin, D. Experiences implementing the extended Kalman filter on an industrial batch reactor. Comput. Chem. Eng. 1998, 22, 1653–1672. [Google Scholar] [CrossRef]
  32. Nelson, L.; Stear, E. The simultaneous on-line estimation of parameters and states in linear systems. IEEE Trans. Autom. Control 1976, 21, 94–98. [Google Scholar] [CrossRef]
  33. Wan, E.A.; Nelson, A.T. Dual extended Kalman filter methods. In Kalman Filtering and Neural Networks; John Wiley & Sons Inc.: New York, NY, USA, 2001; pp. 123–173. [Google Scholar]
  34. Saatci, E.; Akan, A. Dual Unscented Kalman Filter and Its Applications to Respiratory System Modelling. In Kalman Filter Recent Advances and Applications; IntechOpen: Rijeka, Croatia, 2009. [Google Scholar]
  35. Yang, J.N.; Huang, H.; Lin, S. Sequential non-linear least-square estimation for damage identification of structures. Int. J. Non-Linear Mech. 2006, 41, 124–140. [Google Scholar] [CrossRef]
  36. Feng, D. Combined state and least squares parameter estimation algorithms for dynamic systems. Appl. Math. Model. 2014, 38, 403–412. [Google Scholar] [CrossRef]
  37. Wang, W.; Ding, F.; Alsaedi, A.; Hayat, T. Filtering based parameter estimation for observer canonical state space systems with colored noise. J. Frankl. Inst. 2017, 354, 593–609. [Google Scholar] [CrossRef]
  38. Simon, D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  39. Wan, E.A.; Merwe, R.V.D. The Unscented Kalman Filter for Nonlinear Estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium, Lake Louise, AB, Canada, 4 October 2000. [Google Scholar]
  40. Wenkai, F.; Jie, X.; Guang, O.; Jun, L. Sensor Fault Diagnosis for Flight Control System Based on Cubature Kalman Filter. In Proceedings of the 2014 IEEE Chinese Guidance, Navigation and Control Conference, Yantai, China, 8–10 August 2014. [Google Scholar]
  41. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef]
  42. Evensen, G. Sequential data assimilation with a nonlinear quasigeostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res. Ocean. 1994, 99, 10143–10162. [Google Scholar] [CrossRef]
  43. Evensen, G. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn. 2003, 53, 343–367. [Google Scholar] [CrossRef]
  44. Nie, S.; Zhu, J.; Luo, Y. Simultaneous estimation of land surface scheme states and parameters using the ensemble Kalman filter: Identical twin experiments. Hydrol. Earth Syst. Sci. 2011, 15, 2437–2457. [Google Scholar] [CrossRef]
  45. Aanonsen, S.I.; Nævdal, G.; Oliver, D.S.; Reynolds, A.C.; Vallès, B. The Ensemble Kalman Filter in Reservoir Engineering—A Review. SPE J. 2009, 14, 393–412. [Google Scholar] [CrossRef]
  46. Beelen, H.; Bergveld, H.J.; Donkers, M.C.F. Joint Estimation of Battery Parameters and State of Charge Using an Extended Kalman Filter: A Single-Parameter Tuning Approach. IEEE Trans. Control Syst. Technol. 2021, 29, 1087–1101. [Google Scholar] [CrossRef]
  47. Wielitzka, M.; Dagen, M.; Ortmaier, T. Joint Unscented Kalman Filter for State and Parameter Estimation in Vehicle Dynamics. In Proceedings of the 2015 IEEE Conference on Control Applications (CCA), Sydney, NSW, Australia, 21–23 September 2015. [Google Scholar]
  48. Mahdianfar, H.; Pavlov, A.; Aamo, O.M. Joint unscented Kalman filter for state and parameter estimation in Managed Pressure Drilling. In Proceedings of the 2013 European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013. [Google Scholar]
  49. Sun, Y.; Chen, Q. Joint estimation of states and parameters of vehicle model using cubature Kalman filter. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016. [Google Scholar]
  50. Rajakumar, A.G.; Kumar, M.S.M.; Amrutur, B.; Kapelan, Z. Real-Time Water Quality Modeling with Ensemble Kalman Filter for State and Parameter Estimation in Water Distribution Networks. J. Water Resour. Plan. Manag. 2019, 145, 04019049. [Google Scholar] [CrossRef]
  51. Cammalleri, C.; Ciraolo, G. State and parameter update in a coupled energy/hydrologic balance model using ensemble Kalman filtering. J. Hydrol. 2012, 416–417, 171–181. [Google Scholar] [CrossRef]
  52. Liu, K.; Huang, G.; Šimůnek, J.; Xu, X.; Xiong, Y.; Huang, Q. Comparison of ensemble data assimilation methods for the estimation of time-varying soil hydraulic parameters. J. Hydrol. 2021, 594, 125729. [Google Scholar] [CrossRef]
  53. Ok, M.; Ok, S.; Park, J.H. Estimation of Vehicle Attitude, Acceleration, and Angular Velocity Using Convolutional Neural Network and Dual Extended Kalman Filter. Sensors 2021, 21, 1282. [Google Scholar] [CrossRef] [PubMed]
  54. Gove, J.H.; Hollinger, D.Y. Application of a dual unscented Kalman filter for simultaneous state and parameter estimation in problems of surface-atmosphere exchange. J. Geophys. Res. 2006, 111. [Google Scholar] [CrossRef]
  55. Li, X.; Song, Y.; Ren, H. State of Charge Estimation of Lithium-Ion Batteries Based on Fractional-Order Model with Multi-Innovations Dual Cubature Kalman Filter Method. J. Electrochem. Soc. 2024, 171, 127007. [Google Scholar] [CrossRef]
  56. Moradkhani, H.; Sorooshian, S.; Gupta, H.V.; Houser, P.R. Dual state–parameter estimation of hydrological models using ensemble Kalman filter. Adv. Water Resour. 2005, 28, 135–147. [Google Scholar] [CrossRef]
  57. Khodadadi, H.; Jazayeri, H.J. Applying a dual extended Kalman filter for the nonlinear state and parameter estimations of a continuous stirred tank reactor. Comput. Chem. Eng. 2011, 35, 2426–2436. [Google Scholar] [CrossRef]
  58. Shaimaa, N.E.; Abd-Alla, M.; Ahmed, A. Modified Dual Unscented Kalman Filter Approach for Measuring Vehicle States and Vehicle Parameters. Int. J. Eng. Res. Technol. (IJERT) 2014, 3, 1423–1430. [Google Scholar]
  59. Elenchezhiyan, M.; Thirunavukkarasu, I.; Kumar, E.G. Applying Joint and Dual State and Parameter Estimation Using Derivative-Free Kalman Filter for a Switched Nonlinear System. In Intelligent Control, Robotics, and Industrial Automation; Suresh, S., Lal, S., Kiran, M.S., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2024; Volume 1220, pp. 441–458. [Google Scholar]
  60. Horváth, T.; Fodor, D. Performance Comparison of the General, the Dual, and the Joint Sigma Point Kalman Filters on State Estimation of Li-Ion Battery Cells for BMSs. Eng. Proc. 2024, 79, 45. [Google Scholar] [CrossRef]
  61. Akhlaghi, S.; Zhou, N.; Huang, Z. Adaptive adjustment of noise covariance in Kalman filter for dynamic state estimation. In Proceedings of the IEEE Power & Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5. [Google Scholar]
  62. Cetenovic, D.; Rankovic, A.; Zhao, J.; Jin, Z.; Wu, J.; Terzija, V. An Adaptive Method for Tuning Process Noise Covariance Matrix in EKF-based Three-phase Distribution System State Estimation. Int. J. Electr. Power Energy Syst. 2021, 132, 107192. [Google Scholar] [CrossRef]
  63. Mironova, A.; Haus, B.; Zedler, A.; Mercorelli, P. Extended Kalman Filter for Temperature Estimation and Control of Peltier Cells in a Novel Industrial Milling Process. IEEE Trans. Ind. Appl. 2020, 56, 1670–1678. [Google Scholar] [CrossRef]
  64. Scardua, L.A.; da Cruz, J.J. Particle-Based Tuning of the Unscented Kalman Filter. J. Control. Autom. Electr. Syst. 2016, 27, 10–18. [Google Scholar] [CrossRef]
  65. Ghorbani, E.; Dollon, Q.; Gosselin, F.P. Physics-aware tuning of the unscented Kalman filter: Statistical framework for solving inverse problems involving nonlinear dynamical systems and missing data. Nonlinear Dyn. 2024, 113, 4301–4323. [Google Scholar] [CrossRef]
  66. Nez, A.; Fradet, L.; Marin, F.; Monnet, T.; Lacouture, P. Identification of Noise Covariance Matrices to Improve Orientation Estimation by Kalman Filter. Sensors 2018, 18, 3490. [Google Scholar] [CrossRef] [PubMed]
  67. Schneider, R.; Georgakis, C. How To NOT Make the Extended Kalman Filter Fail. Ind. Eng. Chem. Res. 2013, 52, 3354–3362. [Google Scholar] [CrossRef]
  68. Feng, B.; Fu, M.; Ma, H.; Xia, Y.; Wang, B. Kalman Filter With Recursive Covariance Estimation—Sequentially Estimating Process Noise Covariance. IEEE Trans. Ind. Electron. 2014, 61, 6253–6263. [Google Scholar] [CrossRef]
  69. Ananthasayanam, M.R.; Mohan, M.S.; Naik, N.; Gemson, R.M.O. A heuristic reference recursive recipe for adaptively tuning the Kalman filter statistics part-1: Formulation and simulation studies. Sādhanā 2016, 41, 1473–1490. [Google Scholar] [CrossRef]
  70. Mohan, M.S.; Naik, N.; Gemson, R.M.O.; Ananthasayanam, M.R. A heuristic reference recursive recipe for adaptively tuning the Kalman filter statistics part-2: Real data studies. Sādhanā 2016, 41, 1491–1507. [Google Scholar] [CrossRef]
  71. Laamari, Y.; Chafaa, K.; Athamena, B. Particle swarm optimization of an extended Kalman filter for speed and rotor flux estimation of an induction motor drive. Electr. Eng. 2015, 97, 129–138. [Google Scholar] [CrossRef]
  72. Jatoth, R.K.; Kumar, T. Particle Swarm Optimization Based Tuning of Unscented Kalman Filter for Bearings Only Tracking. In Proceedings of the 2009 International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, India, 27–28 October 2009. [Google Scholar]
  73. Luders, G.; Narendra, K. An adaptive observer and identifier for a linear system. IEEE Trans. Autom. Control 1973, 18, 496–499. [Google Scholar] [CrossRef]
  74. Bastin, G.; Gevers, M. Stable adaptive observers for nonlinear time-varying systems. IEEE Trans. Autom. Control 1988, 33, 650–658. [Google Scholar] [CrossRef]
  75. Damon, P.M.; Dabladji, H.; Ichalal, D.; Nehaoua, L.; Arioui, H. Estimation of lateral motorcycle dynamics and rider action with Luenberger observer. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016. [Google Scholar]
  76. Besançon, G. Remarks on nonlinear adaptive observer design. Syst. Control Lett. 2000, 41, 271–280. [Google Scholar] [CrossRef]
  77. Nicosia, S.; Tornambè, A. High-gain observers in the state and parameter estimation of robots having elastic joints. Syst. Control Lett. 1989, 13, 331–337. [Google Scholar] [CrossRef]
  78. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; Studies in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1994; Volume 15. [Google Scholar]
  79. Khlebnikov, M.V. Nonfragile Filtering under Bounded Exogenous Disturbances. Autom. Remote Control 2024, 85, 477–488. [Google Scholar] [CrossRef]
  80. Hattab, O. Observer Based Parameter Estimation For Linear Uncertain Discrete-Time Systems. Ph.D. Thesis, University of Houston, Houston, TX, USA, 2018. [Google Scholar]
  81. Khalil, H.K.; Praly, L. High-gain observers in nonlinear feedback control. Int. J. Robust Nonlinear Control 2014, 24, 993–1015. [Google Scholar] [CrossRef]
  82. Zhuang, X.; Wang, H.; Ahmed-Ali, S.; Tian, Y. Design of a joint adaptive high-gain observer for a class of nonlinear sampled-output system with unknown states and parameters. Int. J. Robust Nonlinear Control 2022, 32, 9174–9194. [Google Scholar] [CrossRef]
  83. Slotine, J.; Hedrick, J.; Misawa, E. On sliding observers for nonlinear systems. In Proceedings of the American Control Conference, Seattle, WA, USA, 18–20 June 1986; pp. 1794–1800. [Google Scholar]
  84. Spurgeon, S.K. Sliding mode observers: A survey. Int. J. Syst. Sci. 2008, 39, 751–764. [Google Scholar] [CrossRef]
  85. Zhou, Y.; Soh, Y.C.; Shen, J.X. Speed Estimation and Nonmatched Time-Varying Parameter Identification for a DC Motor With Hybrid Sliding-Mode Observer. IEEE Trans. Ind. Electron. 2013, 60, 5539–5549. [Google Scholar] [CrossRef]
  86. Shtessel, Y.; Edwards, C.; Fridman, L.; Levant, A. Sliding Mode Control and Observation; Birkhäuser: Basel, Switzerland, 2014. [Google Scholar]
  87. Liu, Y.; Fang, J.; Tan, K.; Huang, B.; He, W. Sliding Mode Observer with Adaptive Parameter Estimation for Sensorless Control of IPMSM. Energies 2020, 13, 5991. [Google Scholar] [CrossRef]
  88. Chen, Y.; Ma, Y.; Chen, H. State of charge and state of health estimation for lithium-ion battery through dual sliding mode observer based on AMESim-Simulink co-simulation. J. Renew. Sustain. Energy 2018, 10, 034103. [Google Scholar] [CrossRef]
  89. Ning, B.; Cao, B.; Wang, B.; Zou, Z. Adaptive sliding mode observers for lithium-ion battery state estimation based on parameters identified online. Energy 2018, 153, 732–742. [Google Scholar] [CrossRef]
  90. You, J.; Wu, W.; Wang, Y. An Adaptive Luenberger Observer for Speed-Sensorless Estimation of Induction Machines. In Proceedings of the 2018 Annual American Control Conference (ACC), Wisconsin Center, Milwaukee, WI, USA, 27–29 June 2018. [Google Scholar]
  91. Cecilia, A. Adaptive Observers: Direct and Indirect Redesign. In Advances in Nonlinear Observer Design for State and Parameter Estimation in Energy Systems; Springer: Cham, Switzerland, 2023. [Google Scholar]
  92. Xie, L.; Khargonekar, P.P. Lyapunov-based adaptive state estimation for a class of nonlinear stochastic systems. Automatica 2012, 48, 1423–1431. [Google Scholar] [CrossRef]
  93. Bof, N.; Carli, R.; Schenato, L. Lyapunov Theory for Discrete Time Systems. arXiv 2018, arXiv:1809.05289. [Google Scholar] [CrossRef]
  94. Yu, W. Luenberger Observer Design for Uncertainty Nonlinear Systems. In New Perspectives and Applications of Modern Control Theory; Springer: Cham, Switzerland, 2017; pp. 25–42. [Google Scholar]
  95. Ben Moussa, H.; Bakhti, M. Dual sliding mode observer and Kalman filter-based nonlinear control of a BLDC actuated in-wheel anti-lock braking system. Int. J. Dyn. Control 2025, 13, 155. [Google Scholar] [CrossRef]
  96. Phuong, T.T.; Ohishi, K.; Yokokura, Y. Fine Sensorless Force Control Realization Based on Dither Periodic Component Elimination Kalman Filter and Wide Band Disturbance Observer. IEEE Trans. Ind. Electron. 2020, 67, 757–767. [Google Scholar] [CrossRef]
  97. Lee, A.S.; Hilal, W.; Gadsden, S.A.; Al-Shabi, M. Combined Kalman and sliding innovation filtering: An adaptive estimation strategy. Measurement 2023, 218, 113228. [Google Scholar] [CrossRef]
Figure 1. Dual estimation schematic.
Figure 2. Flowchart of computing the parameter and state estimate using Observer-RLSE-CR Method [15].
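The covariance-reset idea behind the Observer-RLSE-CR method of Figure 2 can be illustrated with a generic recursive least-squares estimator whose covariance matrix is re-inflated when it becomes too small, keeping the estimator responsive to parameter changes. This sketch is illustrative only and is not the specific algorithm of [15]; the class name `RLSWithReset` and the trace-threshold trigger `trace_floor` are assumptions made here for demonstration.

```python
import numpy as np

class RLSWithReset:
    """Recursive least squares with a simple covariance reset:
    when trace(P) drops below a threshold, P is re-inflated so the
    estimator stays alert to time-varying parameters."""

    def __init__(self, n, p0=1e3, trace_floor=1e-3):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = p0 * np.eye(n)         # estimation covariance
        self.p0 = p0
        self.trace_floor = trace_floor

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        # Standard RLS gain and update (unit forgetting factor)
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = self.P - np.outer(k, phi @ self.P)
        # Covariance reset: re-inflate P when it has collapsed
        if np.trace(self.P) < self.trace_floor:
            self.P = self.p0 * np.eye(len(self.theta))
        return self.theta
```

With noise-free regressions such as y = 2x + 1 and regressor phi = [x, 1], the estimate converges to the true coefficients after a modest number of samples; the reset branch only fires once the covariance has fully collapsed.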
Table 1. Kalman filter algorithm.

Prediction:
Predicted state estimate: $\hat{x}_k^{-} = F_{k-1}\hat{x}_{k-1}^{+} + B_{k-1}u_{k-1}$
Predicted error covariance: $P_k^{-} = F_{k-1}P_{k-1}^{+}F_{k-1}^{\top} + Q_{k-1}$

Update:
Measurement residual: $\tilde{y}_k = y_k - H_k\hat{x}_k^{-}$
Kalman gain: $K_k = P_k^{-}H_k^{\top}\left(H_k P_k^{-}H_k^{\top} + R_k\right)^{-1}$
Updated state estimate: $\hat{x}_k^{+} = \hat{x}_k^{-} + K_k\tilde{y}_k$
Updated error covariance: $P_k^{+} = \left(I - K_k H_k\right)P_k^{-}$
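The prediction and update equations of Table 1 translate directly into a few lines of NumPy. The function name `kalman_step` and the scalar toy system in the usage note are illustrative choices, not part of any cited work.

```python
import numpy as np

def kalman_step(x, P, u, y, F, B, H, Q, R):
    """One predict/update cycle of the linear Kalman filter (Table 1)."""
    # Prediction
    x_pred = F @ x + B @ u                      # predicted state estimate
    P_pred = F @ P @ F.T + Q                    # predicted error covariance
    # Update
    resid = y - H @ x_pred                      # measurement residual
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ resid                  # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # updated error covariance
    return x_new, P_new
```

For example, filtering a constant scalar state (F = H = 1, B = 0) with repeated measurements y = 5 drives the estimate to 5 within a handful of iterations while the covariance shrinks toward its steady-state value.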
Table 2. Comparison of joint and dual Kalman-based estimation methods.

| Filter Type | Scheme | Computation | Advantages | Limitations |
| --- | --- | --- | --- | --- |
| EKF | Joint (JEKF) | Low-Medium | Compact implementation. | Linearization errors; loss of accuracy for strong nonlinearities. |
| EKF | Dual (DEKF) | Medium | Improved stability via state-parameter decoupling. | Higher cost; potential delay between estimates. |
| UKF | Joint (JUKF) | Medium | Jacobian-free; second-order accuracy for smooth dynamics. | Sensitive to scaling parameters ($\alpha$, $\beta$, $\kappa$). |
| UKF | Dual (DUKF) | Med-High | Handles strong nonlinear coupling robustly. | Requires two filters; increased tuning and cost. |
| CKF | Joint (JCKF) | Med-High | Numerically stable; suitable for higher dimensions. | Cubature integration increases computation. |
| CKF | Dual (DCKF) | High | Strong noise attenuation; robust under parameter drift. | Sensitive to covariance tuning; computationally heavy. |
| EnKF | Joint (JEnKF) | High | Scalable to large stochastic systems. | Sampling noise; large ensemble needed. |
| EnKF | Dual (DEnKF) | High-Very High | Robust to model uncertainty; adaptive to time-varying parameters. | High computational load; possible ensemble divergence. |
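All joint schemes in Table 2 share one mechanism: the unknown parameters are appended to the state vector and propagated under a random-walk model, so a single filter estimates both. A minimal joint-EKF sketch for the scalar system $x_{k+1} = a x_k + u_k$ with unknown parameter $a$ follows; the function name `jekf_step` and all tunings are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def jekf_step(z, P, u, y, Q, R):
    """One joint-EKF step for x_{k+1} = a*x_k + u_k with unknown a.
    Augmented state z = [x, a]; the parameter follows a random walk."""
    x, a = z
    # Prediction through the augmented nonlinear model
    z_pred = np.array([a * x + u, a])
    F = np.array([[a, x],        # Jacobian of the augmented dynamics
                  [0.0, 1.0]])   # parameter random walk
    P_pred = F @ P @ F.T + Q
    # Update with the direct state measurement y = x + noise
    H = np.array([[1.0, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + (K @ (y - H @ z_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return z_new, P_new
```

Run against simulated data with a true value of a = 0.8 and a persistently exciting input, the parameter component of the augmented state converges toward the true value, which is exactly the coupling (and the linearization sensitivity) the joint column of Table 2 describes.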
Table 3. Comparison of Kalman-based and Observer-based approaches for simultaneous state and parameter estimation.

| Aspect | Kalman-Based Methods | Observer-Based Methods |
| --- | --- | --- |
| Theoretical Basis | Stochastic estimation theory (probabilistic, covariance-driven). | Deterministic system theory (model-structure-driven). |
| Noise Handling | Explicitly models process and measurement noise via covariance matrices ($Q$, $R$). | Implicitly accounts for disturbances through feedback; no explicit noise statistics. |
| Optimality | Statistically optimal or near-optimal under Gaussian noise assumptions. | Not statistically optimal; relies on deterministic stability and observability conditions. |
| Computational Complexity | Moderate to high (matrix operations, covariance updates, or ensemble propagation). | Generally low, via simpler recursive structure. |
| Model Dependence | Robust to moderate model mismatch if noise statistics are accurate. | Sensitive to modeling errors; accuracy depends on precise system representation. |
| Uncertainty Quantification | Covariance-based uncertainty metrics. | Does not directly quantify estimation uncertainty. |
| Nonlinearity Handling | Extended, Unscented, Cubature, and Ensemble variants manage nonlinearities effectively. | Nonlinear extensions via adaptive, high-gain, or sliding-mode observers. |
| Tuning Parameters | Requires careful selection of $Q$, $R$, and algorithm-specific parameters. | Primarily tuned via gain matrices or adaptation laws. |
| Typical Applications | Navigation, tracking, robotics, battery management, hydrology. | Vehicle dynamics, process control, electromechanical systems, robotics. |
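The contrast drawn in Table 3 is visible in code: where the Kalman filter propagates a covariance, a discrete-time Luenberger observer needs only a fixed gain $L$ chosen so that $A - LC$ is Schur stable. The double-integrator numbers below are an illustrative example constructed here (the gain places both observer poles at 0.7), not a system from any cited work.

```python
import numpy as np

# Discrete-time Luenberger observer: xhat+ = A xhat + B u + L (y - C xhat)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])        # only position is measured
L = np.array([[0.6], [0.9]])      # places both eigenvalues of A - L C at 0.7

def observer_step(xhat, u, y):
    """One observer iteration: model prediction plus output-error correction."""
    return A @ xhat + B @ u + L @ (y - C @ xhat)
```

Because the estimation error obeys $e_{k+1} = (A - LC)e_k$ with spectral radius 0.7, the observer reconstructs the unmeasured velocity from position measurements alone, with no covariance bookkeeping, which is the "generally low" complexity row of Table 3 in practice.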
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chebbi, A.; Franchek, M.A.; Grigoriadis, K. Simultaneous State and Parameter Estimation Methods Based on Kalman Filters and Luenberger Observers: A Tutorial & Review. Sensors 2025, 25, 7043. https://doi.org/10.3390/s25227043

