
Error-State Kalman Filtering with Linearized State Constraints

by Hoang Viet Do 1,2 and Jin-woo Song 2,3,*
1 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
2 Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
3 Department of Artificial Intelligence and Robotics, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
Aerospace 2025, 12(3), 243; https://doi.org/10.3390/aerospace12030243
Submission received: 5 February 2025 / Revised: 3 March 2025 / Accepted: 14 March 2025 / Published: 16 March 2025

Abstract: In recent years, the error-state Kalman filter (ErKF) has been widely employed in various applications, including robotics, aerospace, and localization. However, incorporating state constraints into the ErKF framework using the estimate projection method remains ambiguous. This paper examines this issue in depth, specifically exploring whether constraints should be enforced before or after the ErKF correction step. We adopt a mathematical approach, deriving analytical solutions and analyzing their statistical properties. Our findings prove that, for a linear system with linear constraints, both methods yield statistically equivalent results. However, the filter’s behavior becomes uncertain when dealing with linearized constraints. We further identify a special case of a nonlinear constraint where the results of the linear case remain valid. To support our theoretical analysis and evaluate the filter’s performance under non-ideal conditions, we conduct two Monte Carlo simulations considering increasing initialization errors and constraint incompleteness. The simulation results validate our theoretical insights and suggest that applying constraints to the error state after the correction step may lead to superior performance compared to the alternative approach.

1. Introduction

The Kalman filter (KF) has become a ubiquitous mathematical tool for linear state estimation problems in a diverse range of fields, including but not limited to robotics [1], aerospace [2], navigation [3], and industrial applications [4]. It provides a globally optimal estimate based on a mathematical model of the system dynamics and a series of noisy sensor measurements [5]. Its ability to fuse information from multiple sensors and to handle time-varying systems makes it a popular choice in many fields. Although linear systems are often used in theoretical studies, nonlinear systems are more commonly encountered in practical applications. While many techniques are available for state estimation in nonlinear systems [6], linearization of the nonlinearity has emerged as the most popular choice due to its simplicity and effectiveness. The utilization of the linearized system within the framework of the KF is commonly known as the extended Kalman filter (EKF).
Another variant of the KF that is especially popular in attitude estimation and localization problems is the error-state Kalman filter (ErKF). This indirect (error-state) technique is designed to estimate the difference between the actual state vector and the estimated state vector. One of the primary benefits of utilizing the ErKF is the ability to avoid the platform’s dynamic model complexity [7]. Moreover, for a nonlinear estimation problem that involves linearization, it estimates not only the estimation error but also the linearization error. This allows us to obtain a more accurate and reliable estimation [8]. Note that the ErKF and the KF are mathematically equivalent for linear systems. However, the former shows superior numerical stability compared to the latter. For nonlinear systems, although the ErKF and the EKF share mathematical similarity, they are different in the statistical sense [8], and the ErKF provides better numerical performance compared to the EKF.
In many practical applications, certain aspects of system dynamics may be difficult or complex to express mathematically. As a result, researchers often either disregard this information entirely or treat it as an additional noise-free measurement. An alternative approach is to incorporate such information as constraints in the estimation process. When applied within the Kalman filter (KF) framework, this technique is known as the constrained Kalman filter (CKF). By enforcing constraints on the KF’s estimation, CKF methods can account for such information without compromising the accuracy and robustness of the standard KF [9]. In this paper, we focus on scenarios where the constraint is either inherently linear or linearized, given the partial equivalence in their treatment. Consequently, the results derived for linear constraints may be directly applicable to their linearized counterparts [10,11].
As described earlier, the ErKF estimates the error state, not the full state; hence, an additional correction step is required at the end of each epoch of the algorithm to compute the final solution of interest. This extra step introduces ambiguity when applying existing CKF methods to the error state, particularly when using the estimate projection method (EPM). The reason is that the ErKF’s estimate is reset to zero after the correction ([12], Section 13.6). Consequently, physically speaking, the corrected full state now carries the estimation error, not the error state. As a result, the mean and variance of the error state are altered [13], which potentially affects the constrained estimation. This raises the question of whether the error projection should be applied before or after the correction step. The ambiguity associated with the EPM does not arise in the other constraint-based approaches discussed in [9]. Therefore, in this paper, we comprehensively investigate potential ways of incorporating linear constraints into the ErKF framework from the estimate projection perspective. We also aim to determine the circumstances under which these approaches are equivalent or yield distinct outcomes. To the best of the authors’ knowledge, this inquiry has yet to be explored in the existing literature. To recap, our key contributions in this article can be summarized as follows:
  • We identify two possible approaches and provide a rigorous mathematical derivation of their respective solutions. Specifically, one method enforces the constraint before the correction step, while the other enforces it after the correction step.
  • Building upon our proposed solutions, we examine their statistical properties by leveraging previous results on the full-state CKF in [10]. In particular, we mathematically prove that, in the linear constraint case, imposing constraints either before or after the correction step yields identical performance in terms of mean square error. Furthermore, we provide the specific conditions under which this equivalence holds when linearized constraints are taken into account.
  • We present two numerical examples to evaluate the filter’s performance. In the first example, complete and incomplete constraints are imposed, whereas in the second example, a nonlinear constraint that does not meet our assumption is applied, which hinders a rigorous theoretical assessment of the filter’s behavior. Moreover, we conduct extensive Monte Carlo simulations to scrutinize the filter’s accuracy and robustness across different scenarios of initialization errors, an error factor that plays a crucial role in filtering theory [14,15]. The simulation results lend support to our theoretical analysis.
The rest of this article is organized as follows: Section 2 reviews related works in the context of constrained filtering. Section 3 provides the mathematical notations, along with a brief overview of both the constrained Kalman filter and the error-state Kalman filter. After laying the foundation, we develop the proposed constrained error-state Kalman filter in Section 4. Next, Section 5 presents two numerical examples with Monte Carlo simulations to validate the filter’s performance. Finally, Section 6 provides concluding remarks and future work directions.

2. Related Works

A great deal of effort and thorough analysis has been devoted to the full-state CKF with both linear [10,16] and nonlinear constraints [11,17]. There are various ways to formulate the CKF for linear constraints, including but not limited to the estimate projection method [10], the system projection method (SPM) [18], and the gain projection method (GPM) [19]. While the EPM projects the KF’s estimate onto the constraint surface, the SPM and the GPM project the system dynamics and the Kalman gain onto the constraint surface, respectively. To avoid being overwhelmed by the vast number of approaches available, Simon [9] provides a comprehensive summary of most existing methods with rigorous analysis and comparison from both theoretical and practical implementation perspectives. In contrast to the extensive attention given to the CKF, the constrained ErKF (C-ErKF) has received little attention in the research community, which has made it difficult for the authors to find references.
While there have been some studies of the C-ErKF, its full potential and limitations are not yet fully understood. For instance, Jung and Park [20] incorporated depth constraints into an ErKF-based filter to estimate the poses of a visual–inertial navigation system. Although this filter showed great performance compared to the traditional ErKF, only the optimization problem was given. Without an analytical solution, it is unclear whether the constraint was enforced before or after the error correction. This ambiguous formulation also appears in [21,22], in which the error correction was not taken into account during the filter’s design. Note that both of the aforementioned studies used the EPM to obtain their solutions, which may be a contributing factor to this ambiguity. By contrast, Zanetti et al. [22] and Zanetti and Bishop [23] constrained the error state via the GPM. By doing so, the potential impact of the error correction may be disregarded, making their approach appear distinct. This observation applies to the SPM [9] as well, in which the error covariance and the process noise covariance matrices, rather than the KF’s estimate, are subject to certain constraints. While one might ignore this ambiguity and directly apply the CKF’s results to the error-state form in one way or another, we believe that a thorough inquiry is warranted to improve the process of filter selection.

3. Preliminaries

3.1. Mathematical Notation

Throughout the following sections, unless otherwise stated, we write $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ for the $n$-dimensional Euclidean space and the set of $m \times n$ real matrices, respectively. Scalars and vectors are denoted by lowercase letters $x$, matrices by capital letters $X$, and their transposes by $X^\top$. The notations $I_n$ and $1_n$ indicate an identity matrix of dimension $n \times n$ and an $n \times 1$ all-ones vector, respectively. An $m \times n$ zero matrix is denoted by $0_{m \times n}$. We adopt the symbol $X \circ Y$ to denote the Hadamard product of two matrices $X$ and $Y$ of the same dimensions. A general multivariate Gaussian distribution of a random vector $x$ with mean $\mu$ and covariance matrix $P$ is denoted by $x \sim \mathcal{N}(\mu, P)$. Finally, for any square matrices $X, Y \in \mathbb{R}^{n \times n}$, we write $X \succeq Y$ to mean that $X - Y$ is positive semi-definite.

3.2. Constrained Kalman Filter with Linearized Constraints

In this section, we review the problem of linear state estimation subject to a set of known, not necessarily time-invariant, equality constraints. Under these conditions, the topic of finding an optimal solution from the unconstrained estimate of the linear KF has been thoroughly studied and generalized in the remarkable survey [9]. As a brief summary, let us consider a linear time-varying discrete-time dynamical system as follows:

$$x_k = F_{k-1} x_{k-1} + w_{k-1} \qquad (1a)$$
$$y_k = H_k x_k + v_k, \qquad (1b)$$

where the state vector $x_k$ and the measurement vector $y_k$ at time step $k$ belong to an open subset of $\mathbb{R}^n \times \mathbb{R}^m$, and $F_{k-1} \in \mathbb{R}^{n \times n}$ is the invertible transition matrix. It is assumed that only part of the state vector is directly measured by the sensors, so that $m < n$, which leads to the additional assumption that the known measurement matrix $H_k \in \mathbb{R}^{m \times n}$ has full row rank. In addition, the state and measurement vectors are perturbed by uncorrelated zero-mean white Gaussian process noise $w_{k-1} \sim \mathcal{N}(0, Q_{k-1})$ and measurement noise $v_k \sim \mathcal{N}(0, R_k)$, respectively.
Suppose now that only some components of the state vector should satisfy some nonlinear equality constraints at time step k such that
$$g(x_k) = d, \qquad (2)$$

where $g(x_k) \in \mathbb{R}^s$ is a nonlinear state-dependent function, its dimension $s$ represents the number of constraints, and $d$ is a known constant vector. As suggested by Simon [10], to incorporate (2) into the linear KF framework, one may linearize it around the best current KF estimate, which is available after the measurement update step. In other words, the above nonlinear constraint can be linearized using a first-order Taylor expansion around the a posteriori estimate $\hat{x}_k^+$ as follows:

$$g(x_k) \approx D_k x_k = d_k, \qquad (3)$$

where $D_k = \left.\frac{\partial g(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k^+}$ and $d_k = d - g(\hat{x}_k^+) + D_k \hat{x}_k^+$. Here, the constraint matrix $D_k$ is a state-dependent matrix. Moreover, in this paper, we are interested in the situation where $D_k$ has full rank and $\mathrm{rank}(D_k) = s < n$ [10].
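As a numerical aside, the linearization in (3) is straightforward to sketch. The snippet below is a minimal illustration only: the circle constraint $g(x) = x_1^2 + x_2^2 = 1$, the helper name `linearize_constraint`, and all numerical values are our own assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the constraint linearization in Eq. (3). The circle
# constraint g(x) = x1^2 + x2^2 = 1 and all numbers are assumed for
# illustration only; they do not come from the paper's case studies.
def linearize_constraint(g, jac, x_lin, d):
    """Return (D_k, d_k) such that g(x) ~ D_k x = d_k near x_lin."""
    D = jac(x_lin)                    # D_k: Jacobian at the linearization point
    d_k = d - g(x_lin) + D @ x_lin    # d_k = d - g(x_hat) + D_k x_hat
    return D, d_k

g = lambda x: np.array([x[0] ** 2 + x[1] ** 2])    # g(x) = x1^2 + x2^2
jac = lambda x: np.array([[2 * x[0], 2 * x[1]]])   # its 1 x 2 Jacobian

x_hat = np.array([0.9, 0.1])                       # current best estimate
D, d_k = linearize_constraint(g, jac, x_hat, d=np.array([1.0]))

# A point exactly on the unit circle near x_hat almost satisfies D_k x = d_k;
# the leftover is the second-order linearization error.
x_true = np.array([np.cos(0.12), np.sin(0.12)])
residual = D @ x_true - d_k
print(abs(residual[0]))  # small but nonzero: the linearization is approximate
```

The residual shrinks as the linearization point approaches the true state, which is why the constraint matrix is re-evaluated at the best available estimate.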
The goal is to recursively find a constrained estimate of $x_k$ at time step $k$, denoted by $\tilde{x}_k$, given the current unconstrained estimate $\hat{x}_k^+$ and the linearized constraint (3). That is to say, we seek the mean of the conditional distribution $\tilde{x}_k = E[x_k \mid \hat{x}_k^+, d_k]$ from the following optimization problem (OP) [10]:

$$\underset{\tilde{x}_k \in \mathbb{R}^n}{\text{minimize}} \;\; (\tilde{x}_k - \hat{x}_k^+)^\top (P_k^+)^{-1} (\tilde{x}_k - \hat{x}_k^+), \qquad (4a)$$
$$\text{subject to} \;\; D_k \tilde{x}_k = d_k, \qquad (4b)$$

where $\hat{x}_k^+$ is the a posteriori state estimate of the KF and $P_k^+$ its corresponding covariance (mean-square error) matrix. The above OP is formulated by directly projecting the unconstrained state estimate onto the constraint surface (the estimate projection method [9]), and the solution to this problem, provided the constraint (4b) is feasible, is given as follows [10]:

$$\tilde{x}_k = \hat{x}_k^+ - P_k^+ D_k^\top \big(D_k P_k^+ D_k^\top\big)^{-1} \big(D_k \hat{x}_k^+ - d_k\big) \qquad (5a)$$
$$\tilde{P}_k = P_k^+ - P_k^+ D_k^\top \big(D_k P_k^+ D_k^\top\big)^{-1} D_k P_k^+, \qquad (5b)$$
where $\tilde{P}_k$ is the covariance matrix of $\tilde{x}_k$. These calculations are referred to as the CKF. In the above expressions, the vector $\hat{x}_k^+$ is obtained from the following procedure ([24], pp. 128–129):

$$\hat{x}_k^- = F_{k-1} \hat{x}_{k-1}^+ \qquad (6a)$$
$$P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^\top + Q_{k-1} \qquad (6b)$$
$$K_k = P_k^- H_k^\top \big(H_k P_k^- H_k^\top + R_k\big)^{-1} \qquad (6c)$$
$$\hat{x}_k^+ = \hat{x}_k^- + K_k \big(y_k - H_k \hat{x}_k^-\big) \qquad (6d)$$
$$P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^\top + K_k R_k K_k^\top, \qquad (6e)$$

where $P_k^-$ is the covariance matrix of the a priori state estimate $\hat{x}_k^-$ of the KF. It is worth noting that we choose the Joseph formula (6e) to calculate $P_k^+$ for improved numerical stability ([25], Section 5.2.6, p. 190).
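For concreteness, one cycle of (5)–(6) can be sketched as follows. This is a hedged illustration with arbitrary toy matrices; the helper names `kf_update` and `project_estimate` are ours, not the paper's.

```python
import numpy as np

# Hedged sketch of one CKF cycle, Eqs. (5)-(6): a standard KF measurement
# update in Joseph form followed by the estimate projection of Eq. (5).
# The scalar measurement and all numbers are illustrative.
def kf_update(x_prior, P_prior, y, H, R):
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)   # Eq. (6c)
    x_post = x_prior + K @ (y - H @ x_prior)                   # Eq. (6d)
    I_KH = np.eye(len(x_prior)) - K @ H
    P_post = I_KH @ P_prior @ I_KH.T + K @ R @ K.T             # Eq. (6e), Joseph
    return x_post, P_post

def project_estimate(x_post, P_post, D, d):
    W = P_post @ D.T @ np.linalg.inv(D @ P_post @ D.T)
    x_c = x_post - W @ (D @ x_post - d)        # Eq. (5a)
    P_c = P_post - W @ D @ P_post              # Eq. (5b)
    return x_c, P_c

H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x_post, P_post = kf_update(np.array([1.2, 0.8]), np.eye(2), np.array([1.0]), H, R)

D = np.array([[1.0, 1.0]])   # linear constraint: x1 + x2 = 2
d = np.array([2.0])
x_c, P_c = project_estimate(x_post, P_post, D, d)
print(D @ x_c)  # the projected estimate satisfies the constraint exactly
```

The projection also shrinks the covariance, consistent with (5b): the eigenvalues of $P_k^+ - \tilde{P}_k$ are non-negative.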

3.3. Error-State Kalman Filter

Defining the error state as $\delta x_k = x_k - \hat{x}_k$, the system in (1) can be reformulated as the following error dynamics model:

$$\delta x_k = F_{k-1}\, \delta x_{k-1} + w_{k-1} \qquad (7a)$$
$$\delta y_k = H_k\, \delta x_k + v_k. \qquad (7b)$$
In this section, to avoid introducing additional symbols, we abuse notation by reusing the symbols of Section 3.2 with the same meaning but applied to the error state (e.g., $w_{k-1}$ and $v_k$).
The procedure for finding the optimal error estimate of the above system using the KF is referred to as the error-state Kalman filter and can be summarized as follows [26] ([24], Section 13.1):

$$P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^\top + Q_{k-1} \qquad (8a)$$
$$K_k = P_k^- H_k^\top \big(H_k P_k^- H_k^\top + R_k\big)^{-1} \qquad (8b)$$
$$\delta\hat{x}_k^+ = K_k \big(y_k - H_k \hat{x}_k^-\big) \qquad (8c)$$
$$P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^\top + K_k R_k K_k^\top \qquad (8d)$$
$$\hat{x}_k^+ = F_{k-1} \hat{x}_{k-1}^+ + \delta\hat{x}_k^+. \qquad (8e)$$

Compared to the direct KF formulas in (6), the ErKF intentionally sets the a priori error-state estimate $\delta\hat{x}_k^-$ to zero at the beginning of every time step ([12], p. 406). This implies that the a posteriori error-state estimate of the previous step, $\delta\hat{x}_{k-1}^+$, is also forced to zero, because the correction phase (8e) explicitly transfers the estimation error to the actual state, as in a closed-loop KF ([27], Section 3.2.6).
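A minimal sketch of one ErKF cycle per (8) is given below; the toy system matrices are assumed for illustration. For a linear system, it reproduces the direct KF of (6), consistent with the equivalence noted in the Introduction.

```python
import numpy as np

# Minimal sketch of one ErKF cycle per Eq. (8): the a priori error state
# is reset to zero, the filter estimates only the error, and Eq. (8e)
# folds the estimated error back into the full state.
def erkf_step(x_prev, P_prev, y, F, H, Q, R):
    x_prior = F @ x_prev                               # full-state propagation
    P_prior = F @ P_prev @ F.T + Q                     # Eq. (8a)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)   # Eq. (8b)
    dx_post = K @ (y - H @ x_prior)                    # Eq. (8c), a priori error = 0
    I_KH = np.eye(len(x_prev)) - K @ H
    P_post = I_KH @ P_prior @ I_KH.T + K @ R @ K.T     # Eq. (8d), Joseph form
    x_post = x_prior + dx_post                         # Eq. (8e): correction / reset
    return x_post, P_post

F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

x, P = erkf_step(np.array([0.0, 1.0]), np.eye(2), np.array([0.2]), F, H, Q, R)
# For this linear toy system the result matches the direct KF of Eq. (6).
```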

4. Error-State Kalman Filter with Linearized Constraints

4.1. Problem and Solution Formulation

Henceforth, the term unconstrained ErKF (unC-ErKF) will be used interchangeably with the conventional ErKF discussed in the previous section. Due to the presence of an error correction step, incorporating constraints into the unC-ErKF algorithm using the projection method might not have a unique solution. For this reason, we focus our efforts on the following two possible approaches:
  • Pre-constrained ErKF (PreC-ErKF): This method constrains the error state before the correction step. Thus, the OP is formulated around the a priori state estimate $\hat{x}_k^-$, since the a posteriori state estimate $\hat{x}_k^+$ has not yet been calculated. Moreover, in this case, the non-zero ErKF estimate is projected onto the constraint surface.
  • Post-constrained ErKF (PostC-ErKF): In contrast to the above approach, the postC-ErKF enforces the constraints after the correction step. Consequently, the OP is computed with $\hat{x}_k^+$, and, owing to the value reset procedure, the zero-valued ErKF error state is projected onto the constraint surface.
The above two constrained ErKFs are described with the aid of a block diagram, as depicted in Figure 1. One can notice that since the ErKF’s estimate is reset after the correction (i.e., $\delta\hat{x}_k^+ = 0$), its statistical properties are altered noticeably [13,28]. Therefore, constraint enforcement before and after this stage could provide divergent solutions, and both cases need to be rigorously studied.
Since this paper considers nonlinear constraints, we need to linearize them around the current estimated state to derive the corresponding error-state constraints. Assume that the error state $\delta x_k$ is sufficiently small that the higher-order terms of the linearization can be ignored. Then, from (2) and the preceding definitions of $\delta x_k$ and $D_k$, we have

$$\left.\frac{\partial g(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k} \delta x_k = d - g(\hat{x}_k) \;\;\Longleftrightarrow\;\; D_k\, \delta x_k = \delta d_k, \qquad (9)$$

where the definition of $\delta d_k$ is apparent from the above equation. To support our proposal, as established earlier in this section, and to enhance the clarity of the development below, we further define

$$D_k^- = \left.\frac{\partial g(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k^-}, \qquad \delta d_k^- = d - g(\hat{x}_k^-), \qquad (10)$$

and

$$D_k^+ = \left.\frac{\partial g(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_k^+}, \qquad \delta d_k^+ = d - g(\hat{x}_k^+). \qquad (11)$$

The above $D_k^-$ and $D_k^+$ are used for the preC-ErKF and the postC-ErKF, respectively. This discrepancy arises because, in the pre-constrained approach, only the a priori estimate $\hat{x}_k^-$ is available at the time $D_k$ is computed. By contrast, since the postC-ErKF enforces the constraints after the correction, the a posteriori estimate $\hat{x}_k^+$ is obtainable for calculating $D_k$, which thus becomes $D_k^+$.
Note that the estimates $D_k^-$ and $D_k^+$ of $D_k$ are calculated using different state estimates. Since the a posteriori estimate $\hat{x}_k^+$ has a smaller error covariance than the a priori estimate $\hat{x}_k^-$ [5], one may reasonably argue that using the former could reduce the linearization error of $D_k$. This argument is implicitly consistent with the findings presented in [11]. For the subsequent analysis, we introduce the following mathematical derivations of the two techniques:
Lemma 1 
(PreC-ErKF). Given the ErKF’s a posteriori estimate $\delta\hat{x}_k^+$ at time step $k$, finding the pre-constrained error-state estimate $\delta\tilde{x}_k^-$ involves the following OP:

$$\delta\tilde{x}_k^- = \underset{\delta\tilde{x}_k \in \mathbb{R}^n}{\arg\min} \;\; (\delta\tilde{x}_k - \delta\hat{x}_k^+)^\top (P_k^+)^{-1} (\delta\tilde{x}_k - \delta\hat{x}_k^+), \qquad (12a)$$
$$\text{subject to} \;\; D_k^- \delta\tilde{x}_k = \delta d_k^-, \qquad (12b)$$

and its solution is

$$\delta\tilde{x}_k^- = \delta\hat{x}_k^+ - W_k^- \big(D_k^- \delta\hat{x}_k^+ - \delta d_k^-\big) \qquad (13a)$$
$$\tilde{x}_k^- = \hat{x}_k^- + \delta\tilde{x}_k^-, \qquad (13b)$$

where $W_k^- = P_k^+ (D_k^-)^\top \big(D_k^- P_k^+ (D_k^-)^\top\big)^{-1}$, and the vector $\tilde{x}_k^-$ is the final estimate obtained by correcting the KF’s a priori estimate $\hat{x}_k^-$ with the constrained error estimate $\delta\tilde{x}_k^-$. Note that the estimation characteristics (e.g., mean and error covariance) of $\tilde{x}_k^-$ and $\hat{x}_k^-$ are distinct.
Proof. 
The constrained problem for the error state can be derived directly from the estimate projection method in Section 3.2. In particular, (4a) can be rewritten as

$$\big((x_k - \tilde{x}_k) - (x_k - \hat{x}_k^+)\big)^\top (P_k^+)^{-1} \big((x_k - \tilde{x}_k) - (x_k - \hat{x}_k^+)\big). \qquad (14)$$

Let $\delta\tilde{x}_k = x_k - \tilde{x}_k$ be the difference between the actual state and the constrained estimate. It can be verified, by the definition of the true error $\delta x_k$ in Section 3.3, that $\delta\hat{x}_k^+ = x_k - \hat{x}_k^+$. These together yield (12a). Next, note that after calculating $\delta\tilde{x}_k^-$, we need to correct the a priori estimate with the constrained estimation error to find the constrained estimate, as in (13b). By substituting $\tilde{x}_k^-$ into (4b) and moving the linearization point of $d_k$ to the a priori estimate $\hat{x}_k^-$, we obtain (12b). Combining (12a) with (12b), we formulate an estimate projection problem for the error state. One may use the method of Lagrange multipliers to solve this problem; doing so yields the solution (13a). The step-by-step derivation of this algorithm can be found in ([10], Section III).

Alternatively, the solution can be obtained directly from the CKF result (5a). Specifically, substituting (5a) into (13b) gives

$$\hat{x}_k^+ - W_k^- \big(D_k^- \hat{x}_k^+ - d_k^-\big) = \hat{x}_k^- + \delta\tilde{x}_k^-. \qquad (15)$$

By inserting (8e) into (15), we obtain

$$\hat{x}_k^- + \delta\hat{x}_k^+ - W_k^- \big(D_k^- (\hat{x}_k^- + \delta\hat{x}_k^+) - d_k^-\big) = \hat{x}_k^- + \delta\tilde{x}_k^-. \qquad (16)$$

Canceling out the common factors and rearranging the terms, we have

$$\delta\tilde{x}_k^- = \delta\hat{x}_k^+ - W_k^- \big(D_k^- \delta\hat{x}_k^+ - (d_k^- - D_k^- \hat{x}_k^-)\big) = \delta\hat{x}_k^+ - W_k^- \big(D_k^- \delta\hat{x}_k^+ - \delta d_k^-\big). \qquad (17)$$

In the above expression, we have replaced $g(\hat{x}_k^+)$ and $D_k \hat{x}_k^+$ in $d_k$ of (3) with $g(\hat{x}_k^-)$ and $D_k^- \hat{x}_k^-$, respectively, because the a posteriori estimate $\hat{x}_k^+$ has not yet been calculated at this step. This completes the proof of Lemma 1.    □
Remark 1. 
The PreC-ErKF is statistically consistent with the constrained Kalman gain methods in [22,29], particularly in that the constraint error is corrected based on the a priori estimate $\hat{x}_k^-$. Interestingly, the two approaches are identical if the weighting factor $P_k^+$ in (12a) is intentionally set to an identity matrix [9].
Lemma 2 
(PostC-ErKF). Given that the a posteriori estimation error of the KF has been corrected, i.e., $\hat{x}_k^+$ is computed and $\delta\hat{x}_k^+$ is reset to zero, finding the post-constrained error-state estimate $\delta\tilde{x}_k^+$ involves the following OP:

$$\delta\tilde{x}_k^+ = \underset{\delta\tilde{x}_k \in \mathbb{R}^n}{\arg\min} \;\; \delta\tilde{x}_k^\top (P_k^+)^{-1} \delta\tilde{x}_k, \qquad (18a)$$
$$\text{subject to} \;\; D_k^+ \delta\tilde{x}_k = \delta d_k^+, \qquad (18b)$$

and its solution is

$$\delta\tilde{x}_k^+ = W_k^+ \delta d_k^+ \qquad (19a)$$
$$\tilde{x}_k^+ = \hat{x}_k^+ + \delta\tilde{x}_k^+, \qquad (19b)$$

where $W_k^+ = P_k^+ (D_k^+)^\top \big(D_k^+ P_k^+ (D_k^+)^\top\big)^{-1}$, and the vector $\tilde{x}_k^+$ is the final estimate obtained by correcting the KF’s a posteriori estimate $\hat{x}_k^+$ with the constrained error estimate $\delta\tilde{x}_k^+$. The estimation characteristics of $\tilde{x}_k^+$ and $\hat{x}_k^+$ are clearly different.
Proof 
(Sketch version). As the situation is analogous to Lemma 1, we give a sketch only. By substituting $\delta\hat{x}_k^+ = 0$ into (12a) and (13a), we obtain (18a) and (19a), respectively. Note that, in this scenario, the error estimate from the KF has already been corrected, so $\hat{x}_k^+$ is available. Hence, we obtain the exact formulas of $D_k^+$ and $\delta d_k^+$ as in (11). Additionally, by substituting $\tilde{x}_k^+$ of (19b) into (4b), we directly obtain (18b) without any modification.    □
Remark 2. 
The result in (19a) shows that $\delta\tilde{x}_k^+ = 0$ if $\delta d_k^+ = 0$. This explicitly demonstrates that the unconstrained estimate $\hat{x}_k^+$ satisfies the equality constraint in (2), so no further action (constraining) needs to be taken. This observation is more intelligible than in the preC-ErKF approach. In particular, from (13a) we have $\delta\tilde{x}_k^- = \delta\hat{x}_k^+$ only when $D_k^- \delta\hat{x}_k^+ = \delta d_k^-$, which is equivalent to $D_k^- \hat{x}_k^+ = d_k^-$. This means that it is indeed $\hat{x}_k^+$, not $\hat{x}_k^-$, that satisfies the direct constraint.
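To make Lemmas 1 and 2 concrete, the following sketch applies both projections to the same linear constraint and confirms that they return the same constrained full-state estimate. All values are arbitrary illustrative numbers, not the paper's simulation settings.

```python
import numpy as np

# Numerical check of Lemmas 1 and 2 for a *linear* constraint D x = d:
# projecting the error state before the correction (preC-ErKF) or after
# it (postC-ErKF) yields the same constrained full-state estimate.
rng = np.random.default_rng(0)
n, s = 4, 2
P_post = np.eye(n) + 0.1 * np.ones((n, n))     # a valid P_k^+
x_prior = rng.normal(size=n)                   # a priori estimate
dx_post = rng.normal(size=n)                   # ErKF error estimate, Eq. (8c)
x_post = x_prior + dx_post                     # a posteriori estimate, Eq. (8e)
D = rng.normal(size=(s, n))                    # full-row-rank constraint matrix
d = rng.normal(size=s)

W = P_post @ D.T @ np.linalg.inv(D @ P_post @ D.T)

# PreC-ErKF (Lemma 1): constrain the non-zero error state around x_prior.
dd_pre = d - D @ x_prior                       # delta d_k^-
dx_pre = dx_post - W @ (D @ dx_post - dd_pre)  # Eq. (13a)
x_pre = x_prior + dx_pre                       # Eq. (13b)

# PostC-ErKF (Lemma 2): the error was reset, so constrain around x_post.
dd_post = d - D @ x_post                       # delta d_k^+
x_postc = x_post + W @ dd_post                 # Eq. (19)

print(np.allclose(x_pre, x_postc))  # True: identical for a linear constraint
```

Both results also satisfy the constraint exactly, since $D W = I_s$ by construction.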

4.2. Properties of the CErKF

In the following section, we scrutinize some of the critical attributes of the preC-ErKF and the postC-ErKF in the context of a linear constraint (i.e., $g(x_k)$ can be expressed as $D x_k$, where $D$ is a constant matrix with full rank $s$, and $d_k = d$). The motivation for this is that linearization makes it challenging to analyze the filter’s behavior theoretically. From the development of the linear case, however, we then establish a particular case of the nonlinear constraint $g(x_k)$ in which the effect of the linearization error is canceled out of the filter’s estimation confidence. Most of the developments below are formulated based on the full-state CKF results in [10], with appropriate adjustments.
Theorem 1. 
The constrained state estimates $\tilde{x}_k^-$ and $\tilde{x}_k^+$, given by the pre-constrained and post-constrained ErKF, respectively, are both unbiased state estimators when the constraint is linear. That is,

$$E[\tilde{x}_k^-] = E[\tilde{x}_k^+] = E[x_k]. \qquad (20)$$
Proof. 
In this proof, we use the same symbol to denote a random variable and the value taken by it; no confusion should arise given their distinct roles.

Because the constraint is now a linear equation, we can rewrite the term $g(\hat{x}_k^-)$ in (10) as $D \hat{x}_k^-$. As a clear consequence, we have $\delta d_k^- = d - D \hat{x}_k^-$ and $W_k^- = W_k^+ = W_k = P_k^+ D^\top (D P_k^+ D^\top)^{-1}$. From Lemma 1, it can be shown that

$$\tilde{x}_k^- = \hat{x}_k^- + \delta\hat{x}_k^+ - W_k \big(D\, \delta\hat{x}_k^+ - d + D \hat{x}_k^-\big) \qquad (21)$$
$$= (I_n - W_k D)\big(\hat{x}_k^- + \delta\hat{x}_k^+\big) + W_k d \qquad (22)$$
$$= (I_n - W_k D)\, \hat{x}_k^+ + W_k d. \qquad (23)$$

Then, using $W_k d = W_k D x_k$, since the true state satisfies $D x_k = d$,

$$x_k - \tilde{x}_k^- = (I_n - W_k D)\big(x_k - \hat{x}_k^+\big). \qquad (24)$$

Taking the expectation of both sides of the above equation gives

$$E\big[x_k - \tilde{x}_k^-\big] = (I_n - W_k D)\, E\big[x_k - \hat{x}_k^+\big]. \qquad (25)$$

It should be noted that we take the random matrix $W_k$ out of the expectation operator under the same assumptions as those used in the derivation of the EKF [10,22].

In a similar manner, from Lemma 2, we can write

$$E\big[x_k - \tilde{x}_k^+\big] = (I_n - W_k D)\, E\big[x_k - \hat{x}_k^+\big]. \qquad (26)$$

Since the unC-ErKF provides an unbiased estimate of $x_k$, we have $E[x_k - \hat{x}_k^+] = 0$. This implies that $E[x_k - \tilde{x}_k^-] = E[x_k - \tilde{x}_k^+] = 0$.    □
Theorem 2. 
Let $\tilde{P}_k^-$ and $\tilde{P}_k^+$ be the error covariances of the constrained estimates before the correction, $\tilde{x}_k^-$, and after the correction, $\tilde{x}_k^+$, respectively. Suppose that the constraint is linear; then,

$$\tilde{P}_k^- = \tilde{P}_k^+ \preceq P_k^+. \qquad (27)$$
Proof. 
It can be shown from (25) that, if the constraint is linear, we can write

$$\tilde{P}_k^- = E\big[(x_k - \tilde{x}_k^-)(x_k - \tilde{x}_k^-)^\top\big] \qquad (28)$$
$$= (I_n - W_k D)\, E\big[(x_k - \hat{x}_k^+)(x_k - \hat{x}_k^+)^\top\big] (I_n - W_k D)^\top \qquad (29)$$
$$= (I_n - W_k D)\, P_k^+ (I_n - W_k D)^\top. \qquad (30)$$

In a similar manner, from (26), we also have

$$\tilde{P}_k^+ = E\big[(x_k - \tilde{x}_k^+)(x_k - \tilde{x}_k^+)^\top\big] = (I_n - W_k D)\, P_k^+ (I_n - W_k D)^\top. \qquad (31)$$

Clearly, one can conclude that

$$\tilde{P}_k^- = \tilde{P}_k^+ = (I_n - W_k D)\, P_k^+ (I_n - W_k D)^\top. \qquad (32)$$

Next, the above equation can be further expressed as

$$\tilde{P}_k^+ - P_k^+ = \tilde{P}_k^- - P_k^+ = -W_k D P_k^+. \qquad (33)$$

It is worth noting that (33) is obtained using the following equality, which collapses the Joseph-like form (32):

$$W_k D P_k^+ = P_k^+ D^\top W_k^\top = W_k D P_k^+ D^\top W_k^\top. \qquad (34)$$

Since $D$ is assumed to be full-rank (i.e., $\mathrm{rank}(D) = s < n$), the matrix $W_k D P_k^+$ is positive semi-definite ([10], Th. 2). Thus, we can conclude that

$$\tilde{P}_k^+ \preceq P_k^+ \quad \text{and} \quad \tilde{P}_k^- \preceq P_k^+. \qquad (35)$$

By combining these results with (32), we complete the proof of Theorem 2.    □
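Theorem 2 and the identity (34) are easy to verify numerically. The sketch below, with an arbitrary positive definite $P_k^+$ and a random full-row-rank $D$, confirms that the Joseph-like form (32) collapses to $P_k^+ - W_k D P_k^+$ and that the constrained covariance does not exceed $P_k^+$ in the positive semi-definite ordering.

```python
import numpy as np

# Numerical check of Theorem 2 and identity (34) with arbitrary values:
# for a linear, full-row-rank D, the Joseph-like form (32) collapses to
# P+ - W D P+, and the constrained covariance never exceeds P+.
rng = np.random.default_rng(1)
n, s = 5, 2
A = rng.normal(size=(n, n))
P_post = A @ A.T + n * np.eye(n)               # symmetric positive definite P_k^+
D = rng.normal(size=(s, n))                    # rank(D) = s < n almost surely

W = P_post @ D.T @ np.linalg.inv(D @ P_post @ D.T)
I = np.eye(n)

P_joseph = (I - W @ D) @ P_post @ (I - W @ D).T   # Eq. (32)
P_short = P_post - W @ D @ P_post                 # collapsed via identity (34)

print(np.allclose(P_joseph, P_short))             # the two forms agree
print(np.all(np.linalg.eigvalsh(P_post - P_short) >= -1e-8))  # covariance shrinks
```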
Theorem 3. 
Let $\Sigma_k^-$ and $\Sigma_k^+$ be the covariances of the constrained estimates $\tilde{x}_k^-$ and $\tilde{x}_k^+$, respectively. If the constraint is linear, then the following equality holds:

$$\Sigma_k^+ = \Sigma_k^-. \qquad (36)$$
Proof. 
The covariance of the pre-constrained estimate $\tilde{x}_k^-$ can be given as

$$\Sigma_k^- = E\Big[\big(\tilde{x}_k^- - E[\tilde{x}_k^-]\big)\big(\tilde{x}_k^- - E[\tilde{x}_k^-]\big)^\top\Big]. \qquad (37)$$

Recall that $\tilde{x}_k^- = x_k - \delta\tilde{x}_k^-$, where $\delta\tilde{x}_k^- = x_k - \tilde{x}_k^-$ denotes the constrained estimation error. Write

$$\Sigma_k^- = E\Big[\big(x_k - \delta\tilde{x}_k^- - E[x_k - \delta\tilde{x}_k^-]\big)\big(x_k - \delta\tilde{x}_k^- - E[x_k - \delta\tilde{x}_k^-]\big)^\top\Big]. \qquad (38)$$

By Theorem 1, we know that $E[\delta\tilde{x}_k^-] = 0$ if the constraint is linear. Thus, the above equation becomes

$$\Sigma_k^- = E\Big[\big(x_k - \delta\tilde{x}_k^- - E[x_k]\big)\big(x_k - \delta\tilde{x}_k^- - E[x_k]\big)^\top\Big] \qquad (39)$$
$$= E\Big[\big(x_k - E[x_k]\big)\big(x_k - E[x_k]\big)^\top\Big] + E\big[\delta\tilde{x}_k^- (\delta\tilde{x}_k^-)^\top\big], \qquad (40)$$

where we use the fact that the constrained estimation error $\delta\tilde{x}_k^-$ is zero-mean and uncorrelated with the full state ([5], Ch. 5). Similarly, we can show that

$$\Sigma_k^+ = E\Big[\big(x_k - E[x_k]\big)\big(x_k - E[x_k]\big)^\top\Big] + E\big[\delta\tilde{x}_k^+ (\delta\tilde{x}_k^+)^\top\big]. \qquad (41)$$

Taking the difference between $\Sigma_k^+$ and $\Sigma_k^-$, we obtain

$$\Sigma_k^+ - \Sigma_k^- = E\big[\delta\tilde{x}_k^+ (\delta\tilde{x}_k^+)^\top\big] - E\big[\delta\tilde{x}_k^- (\delta\tilde{x}_k^-)^\top\big] = \tilde{P}_k^+ - \tilde{P}_k^-. \qquad (42)$$

By Theorem 2, we can conclude that if the constraint is linear, then (32) holds, from which the result (36) follows.    □
Proposition 1. 
Suppose that the constraint is now nonlinear and must be linearized in order to apply the results derived so far. Consider the following circumstances:
1. 
If the nonlinear constraint can be expressed as

$$g(x_k) = c\, D_k x_k = d, \qquad (43)$$

where $c$ is a constant scalar and the state-dependent matrix $D_k = D(x_k)$ is obtained by linearizing $g(x_k)$, then Theorem 1 and (35) hold.
2. 
If the above linearized constraint matrix $D_k$ can be further written as

$$D_k = C \circ \big(u(x_k)\, 1_n^\top\big), \qquad (44)$$

where $C \in \mathbb{R}^{s \times n}$ is a constant matrix and the vector $u(x_k) \in \mathbb{R}^s$ contains state-dependent functions $u_i : \mathbb{R}^n \to \mathbb{R} \setminus \{0\}$, then not only are both of the considered filters unbiased estimators, but their corresponding constrained error covariances $\tilde{P}_k^-$ and $\tilde{P}_k^+$ are also identical. Consequently, it can be inferred from the proof of Theorem 3 that $\Sigma_k^-$ and $\Sigma_k^+$ are also identical.
Proof. 
We first provide an illustrative example to enhance the clarity of the subsequent discussion. Suppose that the state vector is $x_k = [x_1 \;\, x_2 \;\, x_3]^\top \in \mathbb{R}^3$ and that it satisfies the polynomial equations $g(x_k) = [x_1^2 \;\, x_2^2]^\top = d$. This constraint fulfills both (43) (with $c = 1/2$) and (44).
Remark 3. 
Since the system’s constraints not only come from the physical laws but can also be chosen and designed by engineers, it is possible for the nonlinear constraint g ( x k ) to meet one of those assumptions in practice.
Now, assuming that (43) holds, one can rewrite it as

$$D_k x_k = \frac{d}{c}. \qquad (45)$$

Because the constraint is nonlinear, (21) becomes

$$\tilde{x}_k^- = \hat{x}_k^- + \delta\hat{x}_k^+ - W_k^- \Big(D_k^-\, \delta\hat{x}_k^+ - \frac{d}{c} + \frac{1}{c}\, g(\hat{x}_k^-)\Big) \qquad (46)$$
$$= \hat{x}_k^+ - W_k^- \Big(D_k^- \hat{x}_k^+ - D_k^- \hat{x}_k^- - \frac{d}{c} + D_k^- \hat{x}_k^-\Big) \qquad (47)$$
$$= \big(I_n - W_k^- D_k^-\big)\, \hat{x}_k^+ + \frac{1}{c}\, W_k^- d. \qquad (48)$$

Similarly, one obtains

$$\tilde{x}_k^+ = \big(I_n - W_k^+ D_k^+\big)\, \hat{x}_k^+ + \frac{1}{c}\, W_k^+ d. \qquad (49)$$

Clearly, by the same rationale as in the proof of Theorem 1, it follows that $E[x_k - \tilde{x}_k^-] = E[x_k - \tilde{x}_k^+] = 0$. This means the preC-ErKF and the postC-ErKF are both unbiased estimators.

From (48), (49), and (33), it can be verified that

$$\tilde{P}_k^+ - P_k^+ = -W_k^+ D_k^+ P_k^+ \qquad (50)$$

and

$$\tilde{P}_k^- - P_k^+ = -W_k^- D_k^- P_k^+. \qquad (51)$$

Since $W_k^+ D_k^+ P_k^+$ and $W_k^- D_k^- P_k^+$ are both positive semi-definite for all $D_k^-$ and $D_k^+$ ([10], Th. 2), (35) holds.
Next, it follows from (50) and (51) that

$$\tilde{P}_k^+ - \tilde{P}_k^- = W_k^- D_k^- P_k^+ - W_k^+ D_k^+ P_k^+. \qquad (52)$$

If $D_k$ satisfies (44), then from the definition of $W_k^+$ in Lemma 2, we have

$$\tilde{P}_k^+ = P_k^+ - W_k^+ D_k^+ P_k^+ \qquad (53)$$
$$= P_k^+ - P_k^+ (D_k^+)^\top \big(D_k^+ P_k^+ (D_k^+)^\top\big)^{-1} D_k^+ P_k^+ \qquad (54)$$
$$= P_k^+ - P_k^+ \big(C \circ (u(\hat{x}_k^+) 1_n^\top)\big)^\top \Big(\big(C \circ (u(\hat{x}_k^+) 1_n^\top)\big) P_k^+ \big(C \circ (u(\hat{x}_k^+) 1_n^\top)\big)^\top\Big)^{-1} \big(C \circ (u(\hat{x}_k^+) 1_n^\top)\big) P_k^+. \qquad (55)$$

Notice that the Hadamard product terms in (55) can be rewritten as $C \circ \big(u(\hat{x}_k^+) 1_n^\top\big) = \mathrm{diag}\big(u(\hat{x}_k^+)\big)\, C$, where the operator $\mathrm{diag} : \mathbb{R}^s \to \mathbb{R}^{s \times s}$ returns a square diagonal matrix.
Remark 4. 
The ranges of the functions $u_i$ guarantee that the matrix $\mathrm{diag}\big(u(\hat{x}_k^+)\big)$ is nonsingular. In fact, this is equivalent to $D_k$ having full rank, which coincides with the assumption in Section 3.2. Thus, the condition is trivially met.
By substituting this equality into (55), we obtain

P̃_k⁺ = P_k⁺ − P_k⁺ ( diag( u(x̂_k⁺) ) C )ᵀ [ diag( u(x̂_k⁺) ) C P_k⁺ Cᵀ diag( u(x̂_k⁺) )ᵀ ]⁻¹ diag( u(x̂_k⁺) ) C P_k⁺   (56)
  = P_k⁺ − P_k⁺ Cᵀ ( C P_k⁺ Cᵀ )⁻¹ C P_k⁺.   (57)
It can be seen that the above equation is equivalent to (32), meaning that the effect of the linearization error from D_k⁺ on the post-constrained error covariance P̃_k⁺ is eliminated. An analogous derivation yields the same expression for the pre-constrained error covariance P̃_k⁻, where the effect of the linearization error from D_k⁻ is likewise canceled out. Hence, from (57), one can show that

P̃_k⁺ − P̃_k⁻ = 0_{n×n}.   (58)
From the proof of Theorem 3, we directly have Σ_k⁺ − Σ_k⁻ = 0_{n×n}.    □
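The cancellation in (57) is easy to check numerically. The following sketch, with hypothetical values and pure-Python matrix helpers, verifies that when the Jacobian factors as D = diag(u)C, the projected covariance P − P Dᵀ(D P Dᵀ)⁻¹ D P does not depend on u:

```python
# Numerical check of the diag(u) cancellation in (57): the EPM-projected
# covariance computed with D = diag(u) C equals the one computed with C alone.
# The matrices P, C, and the vector u below are hypothetical values.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def projected_cov(P, D):
    # P~ = P - P D^T (D P D^T)^(-1) D P, the EPM-constrained covariance
    Dt = transpose(D)
    S = matmul(matmul(D, P), Dt)
    W = matmul(matmul(P, Dt), inv2(S))
    WDP = matmul(matmul(W, D), P)
    return [[P[i][j] - WDP[i][j] for j in range(len(P))] for i in range(len(P))]

P = [[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 2.0]]  # any SPD covariance
C = [[2.0, -2.0, 0.0], [0.0, 1.0, 1.0]]                  # constant factor of D
u = [0.7, -3.1]                                          # arbitrary nonzero entries
D = [[u[i] * C[i][j] for j in range(3)] for i in range(2)]  # D = diag(u) C

P1 = projected_cov(P, D)  # uses the state-dependent Jacobian D
P2 = projected_cov(P, C)  # uses the constant factor C only
assert all(abs(P1[i][j] - P2[i][j]) < 1e-8 for i in range(3) for j in range(3))
```

The assertion passes for any nonsingular diag(u), mirroring the elimination of the linearization error from the constrained covariance.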

4.3. Implementation

This section explains the step-by-step practical implementation of the filters in detail. Since the preC-ErKF and postC-ErKF share the same initialization and part of the time-update process, Algorithm 1 presents their pseudocode jointly. It is worth mentioning that using the constrained process noise covariance Q̃_0 instead of Q_0 (lines 2 to 4 of Algorithm 1) is optional, although Q̃_0 is the recommended initial value for the KF framework, as it can yield significant enhancements (see Table 1 and ([9], Table II)). However, this technique [18] did not originate with the proposed filters; hence, it remains optional.
Algorithm 1 Constrained Error-state Kalman Filter
  1: Initialize: x̂_0⁺, P_0⁺, Q_0, R_0.
  2: if the constrained Q̃_0 is chosen then
  3:     From [18] and [9], compute Q̃_0 = N Q_0 N, where N = U_2 U_2ᵀ and D_0ᵀ = [U_1 U_2] S_0 Vᵀ.
  4: end if
  5: for k = 1 to K do
  6:     Time update ← (8a); δx̂_k⁻ = 0_{n×1}.
  7:     if measurement is available then
  8:         Measurement update ← (8b) to (8d).
            Option #1: Pre-constrained
  9:         Compute D_k⁻ and δd_k⁻ with x̂_k⁻.
 10:         if D_k⁻ δx̂_k ≠ δd_k⁻ then
 11:             Constraint ← calculate δx̃_k as in (13a).
 12:             Correction ← calculate x̃_k⁻ as in (13b).
 13:         else
 14:             Unconstrained correction ← calculate x̂_k⁺ as in (8e).
 15:             Reset δx̂_k⁺ = 0_{n×1}.
 16:             x̃_k⁻ = x̂_k⁺.
 17:         end if
            Option #2: Post-constrained
 18:         Unconstrained correction ← calculate x̂_k⁺ as in (8e).
 19:         Compute D_k⁺ and δd_k⁺ with x̂_k⁺.
 20:         Reset δx̂_k⁺ = 0_{n×1}.
 21:         if δd_k⁺ ≠ 0_{s×1} (see Remark 2) then
 22:             Constraint ← calculate δx̃_k⁺ as in (19a).
 23:             Correction ← calculate x̃_k⁺ as in (19b).
 24:         else
 25:             x̃_k⁺ = x̂_k⁺.
 26:         end if
 27:     else
 28:         x̂_k⁺ = x̂_k⁻, P_k⁺ = P_k⁻.
 29:     end if
 30: end for
As illustrated in Figure 1, the difference between the two methods is the number of correction steps. When constraining is deemed necessary, the postC-ErKF requires two correction steps, one for the unconstrained estimate (8e) and another for the constrained estimate (19b). In contrast, the preC-ErKF requires only a single correction step, (8e) or (13b), irrespective of the extent to which the constraints are satisfied. It is important to note that the final solutions x̃_k⁻ and x̃_k⁺ of the two filters are not fed back into the conventional ErKF's mainstream. Although we have not verified this, we believe that propagating a constrained estimate through an unconstrained model may negatively affect the ErKF's statistical characteristics and behavior.
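The constrained process noise Q̃_0 = N Q_0 N used in Algorithm 1 only requires the orthogonal projector N onto the null space of D_0. The sketch below computes N without an explicit SVD, via the identity N = I − D_0ᵀ(D_0 D_0ᵀ)⁻¹D_0, which equals the U_2 U_2ᵀ form; using the road-constraint matrix from the first example as a stand-in for D_0 is our choice for illustration.

```python
# Sketch: constrained process noise Q~_0 = N Q_0 N, with N the orthogonal
# projector onto null(D_0), computed via the pseudoinverse identity instead of
# an SVD. D_0 here is borrowed from Example 1 (an illustrative assumption).
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

th = math.radians(60.0)
D0 = [[1.0, -math.tan(th), 0.0, 0.0],
      [0.0, 0.0, 1.0, -math.tan(th)]]
I4 = [[float(i == j) for j in range(4)] for i in range(4)]

Dt = transpose(D0)
G = inv2(matmul(D0, Dt))            # (D0 D0^T)^(-1), 2x2
Proj = matmul(matmul(Dt, G), D0)    # projector onto row space of D0
N = [[I4[i][j] - Proj[i][j] for j in range(4)] for i in range(4)]

Q0 = [[4.0, 0, 0, 0], [0, 4.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
Qc = matmul(matmul(N, Q0), N)       # constrained Q~_0 = N Q0 N

# sanity checks: N is idempotent and D0 annihilates it
NN = matmul(N, N)
DN = matmul(D0, N)
assert all(abs(NN[i][j] - N[i][j]) < 1e-9 for i in range(4) for j in range(4))
assert all(abs(DN[i][j]) < 1e-9 for i in range(2) for j in range(4))
```

Because N is a symmetric projector, N Q_0 N and N Q_0 Nᵀ coincide, so the form written in Algorithm 1 is well defined.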

5. Case Studies

To rigorously validate the performance of the two investigated filters, we reproduce the two test problems from [9] with increasing initial-condition uncertainties. These problems are suitable validation benchmarks because their system models are simple yet still reflect most of the filters' salient behavior. Moreover, such testing scenarios are frequently encountered in practical applications, and their results can be compared against the summarized outcomes in the cited survey paper. For clarity, we have preserved the exact values of all parameters used in both simulations, except the number of Monte Carlo runs, and repeat them here for the reader's benefit.
The primary objective of this section is to analyze the numerical performance of the preC-ErKF and postC-ErKF to determine the more effective approach for incorporating constraints into the ErKF when the EPM is chosen. Additionally, we compare these methods with the unC-ErKF and the gain projection ErKF (GP-ErKF) presented in [9]. This comparison is motivated by the fact that, although the GPM is mentioned in [9], it was not numerically evaluated there. By including this analysis, we aim to provide a more comprehensive comparison of constrained filtering techniques.
Accuracy is the sole metric for comparison between approaches. Since all candidates are KF-based algorithms, the consistency of the filters can be theoretically guaranteed if the KF's process noise covariance Q_k and measurement noise covariance R_k are well chosen; thus, we intentionally omit a consistency analysis. To evaluate estimation accuracy, a Monte Carlo-averaged root mean square estimation error (RMSEE) E_rms,k is considered [30], computed over a set of M Monte Carlo trials as

E_rms,k = sqrt( (1/M) Σ_{m=1}^{M} ‖ x_k^m − x̂_k^m ‖₂² ).

Here, x_k^m and x̂_k^m are the actual and estimated states at the time instant k during the m-th Monte Carlo run. From this, the time-averaged RMSEE is calculated as Ē_rms = (1/K) Σ_{k=1}^{K} E_rms,k. The constraint error is also taken into account in terms of the RMS error [10]. Specifically, the RMS constraint error (RMSCE) is defined as

C_rms,k = sqrt( (1/M) Σ_{m=1}^{M} ‖ D_k^m x_k^m − D̂_k^m x̂_k^m ‖₂² ),

and the time-averaged RMSCE is computed as C̄_rms = (1/K) Σ_{k=1}^{K} C_rms,k.
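The metric computation can be sketched in a few lines; the function name and array layout below are our own choices:

```python
# Monte Carlo-averaged RMS error (a sketch): per-step E_rms,k and its time
# average, following the definitions in the text.
from math import sqrt

def monte_carlo_rms(truth, est):
    """truth/est: [M runs][K steps][n components]; returns (per-step RMS, time average)."""
    M, K, n = len(truth), len(truth[0]), len(truth[0][0])
    per_step = [
        sqrt(sum(sum((truth[m][k][i] - est[m][k][i]) ** 2 for i in range(n))
                 for m in range(M)) / M)
        for k in range(K)
    ]
    return per_step, sum(per_step) / K

# Feeding D_k^m x_k^m and D^_k^m x^_k^m in place of the states yields the RMSCE.
```

For example, with truth identically zero and every estimate equal to (1, 1), each per-step value is sqrt(2), as is the time average.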

5.1. Example 1: A Linear System with Linear Equality Constraints

In this example, we study simple position estimation of a two-dimensional (2D) vehicle. The state vector x_k = [p_n,k p_e,k v_n,k v_e,k]ᵀ contains the vehicle's position and velocity in the north (y-axis) and east (x-axis) directions. The state-space error dynamical model for this system can be deduced from its full-state form as
δx_k = [ 1 0 Δt 0 ; 0 1 0 Δt ; 0 0 1 0 ; 0 0 0 1 ] δx_{k−1} + w_{k−1}

δy_k = [ 1 0 0 0 ; 0 1 0 0 ] δx_k + v_k,
where Δ t is the sample period of the filter. Note that to obtain the above system from the original full-state dynamical equation, we assume that the control input is perfectly known. Next, the constraint on the error state is given as
[ 1 −tan θ 0 0 ; 0 0 1 −tan θ ] δx_k = D_1 δx_k = δd_{1,k},
where θ is the vehicle's heading angle measured counter-clockwise from due east, and the error constraint value is written as δd_{1,k} = d_1 − D_1 x̂_k⁻. To conduct a more comprehensive analysis, we additionally employ an incomplete variant of D_1 in which only the vehicle's velocity is constrained. In particular, this constraint can be written as

D_2 = [ 0 0 1 −tan θ ].
This constraint is considered incomplete. In contrast to D_1, the constraint D_2 relies on the principle that velocity determines position, implying that a velocity constraint implicitly constrains position. However, since position is derived from velocity through approximate integration, any error in the velocity estimate can adversely affect the position constraint. For a detailed discussion, please refer to [9] and the literature cited therein.
For the system model's parameters, the heading angle θ is a constant 60 degrees, and the constraint values on the full state, d_1 and d_2, are set to [0 0]ᵀ and 0, respectively. For the filter's parameters, the sample period Δt is set to 3 s, and the KF's initial state x̂_0⁺ and its corresponding covariance are set as in [9], with P_0⁺ = diag(900, 900, 4, 4). The process noise w_{k−1} and measurement noise v_k are presumed to have the same statistical characteristics as considered in Section 3.2, with noise covariance matrices Q_k = diag(4, 4, 1, 1) and R_k = diag(900, 900). We implement all the algorithms (unC-ErKF, GP-ErKF, preC-ErKF, and postC-ErKF) on a 150 s simulation over 600 Monte Carlo trials for two scenarios in which D_1 and D_2 are individually considered.
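One post-constrained filtering cycle for this example can be sketched as follows. The prior state and the measurement below are hypothetical values chosen only to exercise the step; the projection uses d_1 = 0 as specified above.

```python
# Sketch of one predict / update / project (post-constrained) cycle for the 2D
# vehicle example. Prior state x and measurement z are hypothetical values.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

dt, th = 3.0, math.radians(60.0)
F = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]]
H = [[1, 0, 0, 0], [0, 1, 0, 0]]
Q = [[4, 0, 0, 0], [0, 4, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
R = [[900, 0], [0, 900]]
D1 = [[1.0, -math.tan(th), 0.0, 0.0],
      [0.0, 0.0, 1.0, -math.tan(th)]]

x = [[10.0], [5.0], [1.0], [1.7]]          # hypothetical prior state
P = [[900, 0, 0, 0], [0, 900, 0, 0], [0, 0, 4, 0], [0, 0, 0, 4]]

# time update
xp = matmul(F, x)
Pp = madd(matmul(matmul(F, P), transpose(F)), Q)

# unconstrained measurement update with a hypothetical position fix z
z = [[14.0], [9.0]]
S = madd(matmul(matmul(H, Pp), transpose(H)), R)
K = matmul(matmul(Pp, transpose(H)), inv2(S))
xu = madd(xp, matmul(K, msub(z, matmul(H, xp))))
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
Pu = matmul(msub(I4, matmul(K, H)), Pp)

# post-constrained projection onto D1 x = 0 (d1 = 0)
Sd = matmul(matmul(D1, Pu), transpose(D1))
W = matmul(matmul(Pu, transpose(D1)), inv2(Sd))
xt = msub(xu, matmul(W, matmul(D1, xu)))

res = matmul(D1, xt)
assert all(abs(res[i][0]) < 1e-6 for i in range(2))  # constraint satisfied
```

After the projection, the constrained estimate satisfies D_1 x̃ = 0 to machine precision, which is the defining property of the EPM step.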
Table 1 lists the time-averaged RMSEE E ¯ r m s , k and RMSCE C ¯ r m s , k obtained from the above-mentioned simulation. Two observations can be drawn from these results. Firstly, the preC-ErKF, postC-ErKF, and GP-ErKF together outperform the UnC-ErKF as can be easily predicted both intuitively and mathematically from the result of Theorem 2. Secondly, when using D 1 , the estimation and constraint errors of all frameworks, except for the UnC-ErKF, are indistinguishable. This aligns with the findings in [9], which state that all the constrained filters yield identical performance if the constraint is complete.
In the case of the incomplete constraint matrix D_2, although the evaluated approaches achieve comparable levels of constraint satisfaction, the postC-ErKF provides a slightly larger improvement than the preC-ErKF and matches the GP-ErKF. Based on this result, one may tentatively conclude that the preC-ErKF is more sensitive to constraint incompleteness than the others.
It is also worth emphasizing that, unlike in [9], we compare the unC-ErKF with Q_0 and the C-ErKFs with Q̃_0. This implementation is based on the principle that the unconstrained filter should completely ignore the constraint information. The results presented in Table 1 provide partial corroboration of the theorems proven previously.

5.2. Example 2: A Nonlinear System with a Nonlinear Equality Constraint

In the following, the investigated filters are employed to estimate a nonlinear pendulum with a nonlinear energy constraint [31]. We linearize the system using the first-order Taylor expansion to deal with the nonlinearity. Hence, the error-state model of the pendulum can be expressed as follows:
[ δφ_k ; δΩ_k ] = [ 1 Δt ; −(g/L) Δt cos φ_{k−1} 1 ] [ δφ_{k−1} ; δΩ_{k−1} ] + ω_{k−1}

δy_k = [ 1 0 ; 0 1 ] [ δφ_k ; δΩ_k ] + v_k,
where δ ϕ k and δ Ω k are the angular position and velocity error at the time instant k. The system’s state is updated at a time step of Δ t = 0.05 s. By conservation of energy, the system (62) satisfies the following first-order linearized constraint [9]:
[ m g L sin φ_k   m L² Ω_k ] [ δφ_k ; δΩ_k ] = δd_k.

Here, δd_k = C + m g L cos φ_k − (1/2) m L² Ω_k², where C is a constant determined by the conserved energy. The system model's parameters are chosen as follows: the gravitational constant g = 9.81 m/s², the pendulum length L = 1 m, and the pendulum mass m = 1 kg. The system's initial state is x_0 = [ π/4  π/50 ]ᵀ.
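To make the linearization concrete, the following sketch implements the pendulum's error-state transition matrix and the energy-constraint terms. Function names are ours, and the signs follow the standard pendulum linearization, which is an assumption on our part.

```python
# Sketch of the pendulum linearization: error-state transition Jacobian and
# energy-constraint terms. Function names are ours; signs follow the standard
# pendulum model (an assumption).
import math

g, L, m, dt = 9.81, 1.0, 1.0, 0.05   # parameters from this example

def error_state_transition(phi_prev):
    # Jacobian of the discretized dynamics; the (2,1) entry comes from
    # d/dphi [-(g/L) sin(phi)] = -(g/L) cos(phi).
    return [[1.0, dt],
            [-(g / L) * dt * math.cos(phi_prev), 1.0]]

def energy_constraint_jacobian(phi, omega):
    # D_k = [m g L sin(phi), m L^2 omega], the gradient of the pendulum energy
    return [m * g * L * math.sin(phi), m * L * L * omega]

def energy_residual(phi, omega, C):
    # delta d_k = C + m g L cos(phi) - (1/2) m L^2 omega^2
    return C + m * g * L * math.cos(phi) - 0.5 * m * L * L * omega * omega

# At the true initial state, choosing C as the initial energy gives zero residual.
phi0, omega0 = math.pi / 4, math.pi / 50
C0 = 0.5 * m * L * L * omega0 ** 2 - m * g * L * math.cos(phi0)
assert abs(energy_residual(phi0, omega0, C0)) < 1e-12
```

In the filter, these functions would be evaluated at the current estimate to obtain D_k and δd_k at each step.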
The filter's process noise covariance matrix is chosen as Q_k = diag(0.007², 0.007²), and the measurement noise covariance matrix is R_k = diag(0.1², 0.1²). To further evaluate performance, we analyze robustness to poor initialization with increasing initial condition errors of 25%, 50%, and 75% in x̂_0⁺. Specifically, this error is defined as x̂_0⁺ = (1 − β) x_0, where β ∈ {0.25, 0.5, 0.75}. For this reason, the initial estimation error covariance is set distinctly for each x̂_0⁺, namely P_0⁺ = E[ (x_0 − x̂_0⁺)(x_0 − x̂_0⁺)ᵀ ]. The simulation was performed for a duration of 25 s, with 300 Monte Carlo trials executed.
Figure 2 shows the square root of the actual estimation error covariance of φ̂_k and Ω̂_k from the Monte Carlo simulations. These results provide an initial, intuitive assessment and suggest that the theorems presented in this paper are not guaranteed to hold for general nonlinear systems. It is apparent from Figure 2a that when the initial error is relatively small (β = 0.25), both C-ErKF methods exhibit a smaller error covariance than the conventional unC-ErKF at all time steps k. This finding is consistent with the inequality outlined in (27), despite the system being nonlinear and the constraint matrix failing to satisfy the assumptions of Proposition 1. Furthermore, although the two considered methods perform similarly overall, the post-constrained filter exhibits a slightly better response during the early stages; the postC-ErKF also yields a smaller error covariance than the others, though the difference may be difficult to discern from the figure. In contrast, the GP-ErKF produces mixed results, performing worse than the unC-ErKF when β = 0.25 and β = 0.5. This may be due to the constraint imposed on the Kalman gain, which can excessively prioritize constraint satisfaction (see Table 2). These findings suggest that the EPM outperforms the GPM. It is important to note that a comparison between the EPM and the GPM was not conducted in [9]; our results therefore extend that study.
As the initial error increases, the postC-ErKF shows significantly superior performance during the first few steps compared to the others. Surprisingly, the preC-ErKF exhibits counterintuitive behavior relative to the traditional approach during this period. To be precise, in the case of β = 0.75, Figure 2c illustrates that the preC-ErKF has the largest estimation error covariance among all candidates in both the angular position φ̂_k and velocity Ω̂_k variables. This indicates that during these steps the pre-constrained strategy has low confidence in its estimate, and the estimation accuracy is accordingly diminished. After these early steps, however, the preC-ErKF quickly approaches the steady state, outperforming the unC-ErKF and reporting performance similar to the postC-ErKF.
The RMSEE and RMSCE defined earlier in this section are highlighted in Figure 3. The results display a trend similar to that of Figure 2, owing to the correlation between the error covariance matrix and the filter's accuracy. In particular, while the postC-ErKF demonstrates the best performance in both accuracy and robustness, the preC-ErKF produces a significant overshoot for β = 0.5 and β = 0.75 compared to the unC-ErKF during the settling time (Figure 3b,c). Still, the pre-constrained approach outperforms the conventional ErKF in the steady state. Moreover, the post-constrained filter clearly exhibits superior efficiency and improvement over its competitors in terms of the RMSCE C_rms,k. This finding supports our discussion in Section 4.1.
Table 2 summarizes the time-averaged RMSEE and RMSCE of the second example for three β configurations. It should be noted that, for consistency and in accordance with the same rationale, we also implement the constrained filters with Q ˜ 0 and the unconstrained filter with Q 0 as in the first example. This also applies to those results in Figure 2, Figure 3 and Figure 4. The table is revealing in several ways.
  • Firstly, the postC-ErKF provides the smallest estimation and constraint errors across all evaluated cases. For instance, in the worst case (β = 0.75), it gives Ē_rms = 0.0059 for φ̂, while the preC-ErKF, GP-ErKF, and unC-ErKF offer 0.0075, 0.0084, and 0.0129, respectively. In other words, the post-constrained approach decreases the estimation error by approximately 21%, 30%, and 54% relative to the pre-constrained, gain-constrained, and unconstrained methods, respectively. Similarly, it reduces the constraint error by approximately 70%, 7%, and 93%. This table also supports our earlier explanation that the GP-ErKF overly enforces the constraint: although its constraint error is small, its final estimate is poorer than those of the other methods.
  • Secondly, the behavior of the preC-ErKF degrades considerably under large initial errors. Specifically, as β increases from 0.25 to 0.75, the pre-constrained strategy produces a corresponding increase in RMSCE with significant gaps (e.g., from 0.0035 to 0.0076 to 0.0207). Conversely, the post-constrained method yields only a negligibly small increase in response to growing initialization errors.
For a comprehensive analysis, we also compute the time-averaged RMSEE and RMSCE of each Monte Carlo run; in particular, their values are computed as Ē_rms,m = sqrt( (1/K) Σ_{k=1}^{K} ‖ x_k^m − x̂_k^m ‖₂² ) and C̄_rms,m = sqrt( (1/K) Σ_{k=1}^{K} ‖ D_k^m x_k^m − D̂_k^m x̂_k^m ‖₂² ), respectively. The results for β = 0.5 are illustrated in Figure 4. It is apparent from this figure that the postC-ErKF outperforms the other filters in every Monte Carlo run for angular position estimation. Nevertheless, both the gain-constrained and pre-constrained methods provide smaller Ē_rms,m and C̄_rms,m than the unconstrained filter. These observations agree with the discussion so far.
The numerical results thus far indicate that the postC-ErKF outperforms the preC-ErKF. Although Proposition 1 identifies a special case in which both methods yield the same estimate, their relative performance remains uncertain in other scenarios. Based on the derivations presented, however, the postC-ErKF can be expected to hold an advantage because it linearizes the nonlinear constraints around the a posteriori estimate, yielding linearized constraints that are closer to the true constraints. In contrast, the preC-ErKF linearizes the constraints around the a priori estimate, which may lead to less accurate results.

6. Conclusions

In this paper, we have investigated two possible ways, to the best of our knowledge not previously considered, to incorporate linearized equality constraints into the conventional ErKF framework using the EPM. Since the ErKF involves an extra error-correction step, the constraints can be enforced before or after it; we refer to the resulting filters as the preC-ErKF and postC-ErKF, respectively. Through a complete examination, we have mathematically proven that both filters exhibit identical performance in all aspects for linear constraints, as well as for the particular class of nonlinear constraints that can be expressed in a certain form. Their behavior when g(x) does not satisfy the stated conditions is not mathematically established. However, numerical Monte Carlo simulations have shown that the postC-ErKF provides a notable enhancement over the preC-ErKF in such scenarios: the post-constrained filter exhibits smaller time-averaged estimation and constraint errors and is significantly more robust against increasing initialization error and constraint incompleteness.
It has been clearly shown that the ErKF's correction step substantially impacts the estimation results when nonlinear constraints are involved. Although the results of this paper provide adequate evidence that the post-constrained filter should be prioritized over the pre-constrained approach, a rigorous study of their behavior in the presence of general nonlinear constraints remains a motivation for future work.

Author Contributions

Conceptualization, H.V.D. and J.-w.S.; methodology, H.V.D.; software, H.V.D.; validation, H.V.D. and J.-w.S.; formal analysis, H.V.D. and J.-w.S.; investigation, H.V.D. and J.-w.S.; resources, H.V.D.; data curation, H.V.D.; writing—original draft preparation, H.V.D.; writing—review and editing, H.V.D.; visualization, H.V.D.; supervision, J.-w.S.; project administration, J.-w.S.; funding acquisition, J.-w.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Unmanned Vehicles Core Technology Research and Development Program through the National Research Foundation of Korea (NRF) (NRF-2023M3C1C1A01098408), and by the Unmanned Vehicle Advanced Research Center (UVARC) funded by the Ministry of Science and ICT, the Republic of Korea (No. 2020M3C1C1A01086408).

Data Availability Statement

The data and code presented in this study are available on request from the corresponding author (the data and code are not publicly available due to privacy).

Acknowledgments

We sincerely thank Jae Hyung Jung from the Smart Robotics Laboratory, School of Computation, Information and Technology, Technical University of Munich, and Chan Gook Park from the Navigation and Electronic System Laboratory, Department of Aerospace Engineering, Institute of Advanced Aerospace Technology, Seoul National University, for their valuable comments and suggestions, which have significantly improved this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, S.Y. Kalman Filter for Robot Vision: A Survey. IEEE Trans. Ind. Electron. 2012, 59, 4409–4420.
  2. Kim, S.G.; Crassidis, J.L.; Cheng, Y.; Fosbury, A.M.; Junkins, J.L. Kalman Filtering for Relative Spacecraft Attitude and Position Estimation. J. Guid. Control. Dyn. 2007, 30, 133–143.
  3. Gao, G.; Gao, B.; Gao, S.; Hu, G.; Zhong, Y. A Hypothesis Test-Constrained Robust Kalman Filter for INS/GNSS Integration With Abnormal Measurement. IEEE Trans. Veh. Technol. 2023, 72, 1662–1673.
  4. Auger, F.; Hilairet, M.; Guerrero, J.M.; Monmasson, E.; Orlowska-Kowalska, T.; Katsura, S. Industrial Applications of the Kalman Filter: A Review. IEEE Trans. Ind. Electron. 2013, 60, 5458–5471.
  5. Anderson, B.D.; Moore, J.B. Optimal Filtering; Courier Corporation: North Chelmsford, MA, USA, 2012.
  6. Daum, F. Nonlinear filters: Beyond the Kalman filter. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 57–69.
  7. Roumeliotis, S.; Sukhatme, G.; Bekey, G. Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization. In Proceedings of the 1999 International Conference on Robotics and Automation (ICRA), Detroit, MI, USA, 10–15 May 1999; Volume 2, pp. 1656–1663.
  8. Madyastha, V.; Ravindra, V.; Mallikarjunan, S.; Goyal, A. Extended Kalman Filter vs. Error State Kalman Filter for Aircraft Attitude Estimation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Portland, OR, USA, 8–11 August 2011; p. 6615.
  9. Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Control. Theory Appl. 2010, 4, 1303–1318.
  10. Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 128–136.
  11. Yang, C.; Blasch, E. Kalman Filtering with Nonlinear State Constraints. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 70–84.
  12. Titterton, D.; Weston, J. Strapdown Inertial Navigation Technology; IET: Herts, UK, 2004; Volume 17.
  13. Mueller, M.W.; Hehn, M.; D’Andrea, R. Covariance Correction Step for Kalman Filtering with an Attitude. J. Guid. Control. Dyn. 2017, 40, 2301–2306.
  14. Linderoth, M.; Soltesz, K.; Robertsson, A.; Johansson, R. Initialization of the Kalman filter without assumptions on the initial state. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4992–4997.
  15. Kneip, L.; Scaramuzza, D.; Siegwart, R. On the initialization of statistical optimum filters with application to motion estimation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1500–1506.
  16. Pizzinga, A. Constrained Kalman Filtering: Additional Results. Int. Stat. Rev. 2010, 78, 189–208.
  17. Julier, S.J.; LaViola, J.J. On Kalman Filtering With Nonlinear Equality Constraints. IEEE Trans. Signal Process. 2007, 55, 2774–2784.
  18. Ko, S.; Bitmead, R.R. State estimation for linear systems with state equality constraints. Automatica 2007, 43, 1363–1368.
  19. Teixeira, B.O.S.; Chandrasekar, J.; Palanthandalam-Madapusi, H.J.; Torres, L.A.B.; Aguirre, L.A.; Bernstein, D.S. Gain-Constrained Kalman Filtering for Linear and Nonlinear Systems. IEEE Trans. Signal Process. 2008, 56, 4113–4123.
  20. Jung, J.H.; Gook Park, C. Constrained Filtering-based Fusion of Images, Events, and Inertial Measurements for Pose Estimation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 644–650.
  21. Do, H.V.; Kwon, Y.S.; Kim, H.J.; Song, J.W. An Improvement of 3D DR/INS/GNSS Integrated System using Inequality Constrained EKF. In Proceedings of the 2022 22nd International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; pp. 1780–1783.
  22. Zanetti, R.; Majji, M.; Bishop, R.H.; Mortari, D. Norm-constrained Kalman filtering. J. Guid. Control. Dyn. 2009, 32, 1458–1465.
  23. Zanetti, R.; Bishop, R. Quaternion estimation and norm constrained Kalman filtering. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Keystone, CO, USA, 21–24 August 2006; p. 6164.
  24. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  25. Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice with MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  26. Trawny, N.; Roumeliotis, S.I. Indirect Kalman filter for 3D attitude estimation. Univ. Minnesota Dept. Comp. Sci. Eng. Tech. Rep. 2005, 2, 2005.
  27. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: New York, NY, USA, 2008.
  28. Gill, R.; Mueller, M.W.; D’Andrea, R. Full-Order Solution to the Attitude Reset Problem for Kalman Filtering of Attitudes. J. Guid. Control. Dyn. 2020, 43, 1232–1246.
  29. Chee, S.A.; Forbes, J.R. Norm-constrained consider Kalman filtering. J. Guid. Control. Dyn. 2014, 37, 2048–2053.
  30. Raihan, D.; Chakravorty, S. Particle Gaussian mixture filters-I. Automatica 2018, 98, 331–340.
  31. Teixeira, B.O.; Chandrasekar, J.; Tôrres, L.A.; Aguirre, L.A.; Bernstein, D.S. State estimation for linear and non-linear equality-constrained systems. Int. J. Control. 2009, 82, 918–936.
Figure 1. Block diagram illustrating two possible approaches for incorporating constraints into the ErKF procedure using the estimate projection method.
Figure 2. Average standard deviation of the second example’s estimate ϕ ^ and Ω ^ for 300 Monte Carlo runs with increasing initial uncertainties β .
Figure 3. The second example’s Monte Carlo-averaged RMSEE (sum of all state’s elements) and RMSCE for 300 runs with increasing initial uncertainties β .
Figure 4. Time-averaged angular position RMSEE and RMSCE of each Monte Carlo simulation for the case of β = 0.5 .
Table 1. Time-averaged RMS estimation (sum of all state’s elements) and constraint error of four considered filtering algorithms of the first example with two different constraints (complete and incomplete) over 600 Monte Carlo trials.
Filter Type | Ē_rms (D₁) | Ē_rms (D₂) | C̄_rms (D₁) | C̄_rms (D₂)
Unconstrained ErKF (Q₀) | 24.141 | 24.085 | 32.560 | 4.495
GP ErKF (Q̃₀) [9] | 18.373 | 22.480 | 0.0 | 0.0
Pre-constrained ErKF (Q̃₀) | 18.373 | 22.516 | 0.0 | 0.0
Post-constrained ErKF (Q̃₀) | 18.373 | 22.480 | 0.0 | 0.0
The numbers rendered in bold mark the minimum value, and all values are rounded to three decimal digits.
Table 2. Time-averaged RMS estimation and constraint error of four considered filtering algorithms for the second example under 25 % , 50 % , and 75 % initial error test cases.
Filter Type | Time-Averaged RMS Estimation Error (φ̂, Ω̂): β = 0.25 | β = 0.5 | β = 0.75 | Time-Averaged RMS Constraint Error: β = 0.25 | β = 0.5 | β = 0.75
UnC-ErKF (Q₀) | 0.0119, 0.0335 | 0.0124, 0.0336 | 0.0129, 0.0337 | 0.0865 | 0.0887 | 0.0901
GP-ErKF (Q̃₀) [9] | 0.0075, 0.0194 | 0.0080, 0.0199 | 0.0084, 0.0202 | 0.0035 | 0.0054 | 0.0066
PreC-ErKF (Q̃₀) | 0.0055, 0.0145 | 0.0058, 0.0145 | 0.0075, 0.0153 | 0.0035 | 0.0076 | 0.0207
PostC-ErKF (Q̃₀) | 0.0049, 0.0138 | 0.0055, 0.0144 | 0.0059, 0.0146 | 0.0031 | 0.0050 | 0.0061
The bold numbers represent better results (the smaller number) and all values are rounded to four decimal digits.