Abstract
In recent years, the error-state Kalman filter (ErKF) has been widely employed in various applications, including robotics, aerospace, and localization. However, incorporating state constraints into the ErKF framework using the estimate projection method remains ambiguous. This paper examines this issue in depth, specifically exploring whether constraints should be enforced before or after the ErKF correction step. We adopt a mathematical approach, deriving analytical solutions and analyzing their statistical properties. Our findings prove that, for a linear system with linear constraints, both methods yield statistically equivalent results. However, the filter’s behavior becomes uncertain when dealing with linearized constraints. We further identify a special case of a nonlinear constraint where the results of the linear case remain valid. To support our theoretical analysis and evaluate the filter’s performance under non-ideal conditions, we conduct two Monte Carlo simulations considering increasing initialization errors and constraint incompleteness. The simulation results validate our theoretical insights and suggest that applying constraints to the error state after the correction step may lead to superior performance compared to the alternative approach.
1. Introduction
The Kalman filter (KF) has become a ubiquitous mathematical tool for linear state estimation problems in a diverse range of fields, including but not limited to robotics [1], aerospace [2], navigation [3], and industrial applications [4]. It provides a globally optimal estimate based on a mathematical model of the system dynamics and a series of noisy sensor measurements [5]. Its ability to fuse information from multiple sensors and to handle time-varying systems makes it a popular choice in many fields. Although linear systems are often used in theoretical studies, nonlinear systems are more commonly encountered in practical applications. While many techniques are available for state estimation in nonlinear systems [6], linearization has emerged as the most popular choice due to its simplicity and effectiveness. The utilization of the linearized system within the framework of the KF is commonly known as the extended Kalman filter (EKF).
Another variant of the KF that is especially popular in attitude estimation and localization problems is the error-state Kalman filter (ErKF). This indirect (error-state) technique is designed to estimate the difference between the actual state vector and the estimated state vector. One of the primary benefits of utilizing the ErKF is the ability to avoid the platform’s dynamic model complexity [7]. Moreover, for a nonlinear estimation problem that involves linearization, it estimates not only the estimation error but also the linearization error. This allows us to obtain a more accurate and reliable estimation [8]. Note that the ErKF and the KF are mathematically equivalent for linear systems. However, the former shows superior numerical stability compared to the latter. For nonlinear systems, although the ErKF and the EKF share mathematical similarity, they are different in the statistical sense [8], and the ErKF provides better numerical performance compared to the EKF.
In many practical applications, certain aspects of system dynamics may be difficult or complex to express mathematically. As a result, researchers often either disregard this information entirely or treat it as an additional noise-free measurement. An alternative approach is to incorporate such information as constraints in the estimation process. When applied within the Kalman filter (KF) framework, this technique is known as the constrained Kalman filter (CKF). By enforcing constraints on the KF’s estimation, CKF methods can account for such information without compromising the accuracy and robustness of the standard KF [9]. In this paper, we focus on scenarios where the constraint is either inherently linear or linearized, given the partial equivalence in their treatment. Consequently, the results derived for linear constraints may be directly applicable to their linearized counterparts [10,11].
As described earlier, the ErKF estimates the error state, not the full state; hence, an additional correction step is required at the end of each epoch of the algorithm to compute the final solution of interest. This extra step introduces ambiguity when applying existing CKF methods to the error state, particularly when using the estimate projection method (EPM). The reason is that the ErKF’s estimate is reset to zero after the correction ([12], Section 13.6). Consequently, physically speaking, the corrected full state then carries the estimation error rather than the error state. As a result, the mean and variance of the error state are altered [13], which potentially affects the constrained estimation. This raises the question of whether the error projection should be applied before or after the correction step. The ambiguity associated with the EPM does not arise in the other constraint-based approaches discussed in [9]. Therefore, in this paper, we comprehensively investigate potential ways of incorporating linear constraints into the ErKF framework from the estimate projection perspective. We also aim to determine the circumstances under which these approaches are equivalent or yield distinct outcomes. To the best of the authors’ knowledge, this inquiry has yet to be explored in the existing literature. Our key contributions can be summarized as follows:
- We identify two possible approaches and provide a rigorous mathematical derivation of their respective solutions. Specifically, one method enforces the constraint before the correction step, while the other enforces the constraint after the correction step.
- Building upon the proposed solutions, we examine their statistical properties by leveraging previous results on the full-state CKF in [10]. In particular, we mathematically prove that, in the linear constraint case, imposing constraints either before or after the correction step yields identical performance in terms of mean square error. Furthermore, we provide the specific conditions under which this equivalence holds when linearized constraints are taken into account.
- We present two numerical examples to evaluate the filter’s performance. In the first example, complete and incomplete constraints are imposed, whereas in the second example, a nonlinear constraint that does not meet our assumptions is applied, which precludes a rigorous theoretical assessment of the filter’s behavior. Moreover, we conduct extensive Monte Carlo simulations to scrutinize the filter’s accuracy and robustness across different scenarios of initialization errors, an error factor that plays a crucial role in filtering theory [14,15]. The simulation results support our theoretical framework and conclusions.
The rest of this article is organized as follows: Section 2 reviews related works in the context of constrained filtering. Section 3 provides the mathematical notations, along with a brief overview of both the constrained Kalman filter and the error-state Kalman filter. After laying the foundation, we develop the proposed constrained error-state Kalman filter in Section 4. Next, Section 5 presents two numerical examples with Monte Carlo simulations to validate the filter’s performance. Finally, Section 6 provides concluding remarks and future work directions.
2. Related Works
A great deal of effort and thorough analysis has been devoted to the full-state CKF with both linear [10,16] and nonlinear constraints [11,17]. There are various ways to formulate the CKF for linear constraints, including but not limited to the estimate projection method [10], the system projection method (SPM) [18], and the gain projection method (GPM) [19]. While the EPM projects the KF’s estimate onto the constraint surface, the SPM and the GPM project the system dynamics and the Kalman gain onto the constraint surface, respectively. To help navigate the vast number of approaches available, Simon [9] provides a comprehensive summary of most existing methods, with rigorous analysis and comparison from both theoretical and practical implementation perspectives. In contrast to the extensive research devoted to the CKF, the constrained ErKF (C-ErKF) has received little attention in the research community, leaving few references to draw upon.
While there have been some studies of the C-ErKF, its full potential and limitations are not yet fully understood. For instance, Jung and Park [20] incorporated depth constraints into an ErKF-based filter to estimate the poses of a visual–inertial navigation system. Although this filter showed great performance compared to the traditional ErKF, only the optimization problem was given. Without an analytical solution, it is unclear whether the constraint was enforced before or after the error correction. This ambiguous formulation was also used in [21,22], whereby the error correction was not taken into account during the filter’s design. Note that both of these works used the EPM to derive their solutions, which may have contributed to this ambiguity. In contrast, Zanetti et al. [22] and Zanetti and Bishop [23] constrained the error state via the GPM. By doing so, the potential impact of the error correction may be disregarded, making their approach appear distinct. This observation applies to the SPM [9] as well, in which the error covariance and the process noise covariance matrices are subject to certain constraints instead of the KF’s estimate. While one might ignore this ambiguity and directly apply the CKF results to the error-state form in one way or another, we believe a thorough inquiry is warranted to inform filter selection.
3. Preliminaries
3.1. Mathematical Notation
Throughout the following sections, unless otherwise stated, we shall write $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ for the set of n-dimensional Euclidean space and the set of real $m \times n$ matrices, respectively. Scalars and vectors are denoted by lowercase letters x, matrices by capital letters X, and their transposes by $X^\top$. The notations $I_n$ and $\mathbf{1}_n$ indicate an identity matrix of dimension $n \times n$ and an all-ones vector, respectively. An $m \times n$ zeros matrix is denoted by $0_{m \times n}$. We shall adopt the symbol $\odot$ to denote the Hadamard product of two matrices X and Y with the same dimensions. A general multivariate Gaussian distribution of a random vector x with mean $\bar{x}$ and covariance matrix P is denoted as $\mathcal{N}(\bar{x}, P)$. Finally, if we write $A \succeq B$ for square matrices A and B, we mean that $A - B$ is positive semi-definite.
3.2. Constrained Kalman Filter with Linearized Constraints
In this section, we shall proceed to review the problem of linear state estimation that is subjected to a set of known, not necessarily time-invariant, equality constraints. Given these conditions, the topic of finding an optimal solution from the unconstrained estimate of the linear KF has been thoroughly studied and generalized in the remarkable survey [9]. For a brief summary, let us consider a linear time-varying discrete-time dynamical system as follows:
where the state vector and the measurement vector at time step k belong to an open subset of ; is the invertible transition matrix. It is assumed that only part of the state vector is directly measured from a set of sensors, indicating that and leading to an additional assumption in which the known measurement matrix has full row rank. In addition, both the state and measurement vectors are perturbed by an uncorrelated zero-mean white Gaussian process noise and measurement noise , respectively.
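For concreteness, the snippet below simulates one step of the type of system assumed above. Since the displayed model (1) is not reproduced in this version, the standard discrete-time form x_k = F_{k-1} x_{k-1} + w_{k-1}, z_k = H_k x_k + v_k is an assumption on our part, and all numerical values, symbols, and function names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_step(x, F, H, Q, R):
    """One step of the assumed linear model: propagate the state with
    Gaussian process noise, then generate a noisy partial measurement."""
    w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)   # process noise ~ N(0, Q)
    v = rng.multivariate_normal(np.zeros(R.shape[0]), R)   # measurement noise ~ N(0, R)
    x_next = F @ x + w
    z = H @ x_next + v                                     # only part of the state is measured
    return x_next, z

# Illustrative 2-state example: H has full row rank and measures the first component only.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[1e-2]])
x, z = simulate_step(np.array([0.0, 1.0]), F, H, Q, R)
```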
Suppose now that only some components of the state vector should satisfy some nonlinear equality constraints at time step k such that
where is a nonlinear state-dependent function, its dimension s represents the number of constraints, and d is a known constant vector. As suggested by Simon [10], to incorporate (2) into the linear KF framework, one may linearize it around the best current KF’s estimate, which could be obtained after the measurement update step. In other words, the above nonlinear constraint can be linearized using the first-order Taylor expansion around the a posteriori estimate as follows:
where and . Here, the constraint matrix is a state-dependent matrix. Moreover, in this paper, we are interested in the situation where has full rank and [10].
The goal is to recursively find a constrained estimate of at time step k denoted by given the current unconstrained estimate and the linearized constraints (3). That is to say, we seek the mean value of a conditional distribution from the following optimization problem (OP) [10]:
where the vector is the a posteriori state estimate of the KF and its corresponding covariance matrix (mean squares) . The above OP is formulated by directly projecting the unconstrained state estimation onto the constraint surface (estimate projection method [9]), and the solution to this problem, if the constraint (4b) is feasible, is then given as follows [10]:
where is the covariance matrix of . These calculations are referred to as the CKF. In the above expression, the vector can be obtained from the following procedure ([24], pp. 128–129):
where is the covariance matrix of the a priori state estimate of the KF. It is worth noting that we choose the Joseph formula to calculate as in (6e) for improving numerical stability ([25], Section 5.2.6, p. 190).
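As a reference sketch of the procedure above, the snippet below performs a standard KF measurement update in the Joseph form of (6e) and then applies the estimate projection of the CKF, using the covariance-weighted (minimum-variance) projection gain P D^T (D P D^T)^{-1} from [10]. The function and variable names are ours, and the interfaces are assumptions made for illustration.

```python
import numpy as np

def kf_update_joseph(x_prior, P_prior, z, H, R):
    """Standard KF measurement update; the Joseph form improves numerical stability."""
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (z - H @ x_prior)
    I_KH = np.eye(len(x_prior)) - K @ H
    P_post = I_KH @ P_prior @ I_KH.T + K @ R @ K.T
    return x_post, P_post

def project_estimate(x_post, P_post, D, d):
    """Estimate projection method: project the unconstrained a posteriori estimate
    onto the surface D x = d using the covariance-weighted projection."""
    S = D @ P_post @ D.T                       # s x s, invertible when D has full row rank
    gain = P_post @ D.T @ np.linalg.inv(S)
    x_c = x_post - gain @ (D @ x_post - d)     # constrained estimate
    P_c = P_post - gain @ D @ P_post           # covariance of the constrained estimate
    return x_c, P_c
```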
3.3. Error-State Kalman Filter
Defining the error-state estimate as , the system in (1) can be reformulated as the following error dynamics model:
In this section, to avoid introducing additional symbols, we shall abuse notation by reusing the symbols of Section 3.2 with the same meaning but applied to the error state (e.g., and ).
The procedure to find the optimal error estimate for the above system using KF is referred to as the error-state Kalman filter and can be generalized as follows [26] ([24], Section 13.1):
Compared to the direct KF formulas in (6), the ErKF intentionally sets the a priori error-state estimate to zero at the beginning of every time step ([12], p. 406). This implies that the a posteriori error-state estimate of the previous step is also forced to zero. The reason is that the correction phase in (8e) explicitly transfers the estimation error to the actual state, as in a closed-loop KF ([27], Section 3.2.6).
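To make the zero-reset logic of (8) explicit, a minimal sketch of one ErKF epoch is given below. The generic propagation and measurement-prediction callbacks (`propagate_nominal`, `predict_meas`) are placeholders we introduce for illustration; they stand in for whatever mechanization and measurement model the application uses.

```python
import numpy as np

def erkf_epoch(x_nom, P, z, F, H, Q, R, propagate_nominal, predict_meas):
    """One epoch of the (unconstrained) error-state KF.

    x_nom : current nominal (full) state estimate
    P     : error-state covariance
    propagate_nominal(x_nom) -> propagated nominal state
    predict_meas(x_nom)      -> predicted measurement for the nominal state
    """
    # Time update: the a priori error estimate is reset to zero by construction,
    # so only the covariance needs to be propagated.
    x_nom = propagate_nominal(x_nom)
    P = F @ P @ F.T + Q

    # Measurement update on the error state (the prior error estimate is zero).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    dx = K @ (z - predict_meas(x_nom))         # a posteriori error-state estimate
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T        # Joseph form

    # Correction (closed loop): fold the error estimate into the nominal state,
    # then reset the error state to zero for the next epoch.
    x_nom = x_nom + dx
    dx = np.zeros_like(dx)
    return x_nom, P, dx
```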
4. Error-State Kalman Filter with Linearized Constraints
4.1. Problem and Solution Formulation
Henceforth, the term unconstrained ErKF (unC-ErKF) will be defined as being interchangeable with the conventional ErKF that has been discussed in the previous section. Due to the presence of an error correction step, incorporating constraints into the unC-ErKF algorithm using the projection method might not have a unique solution. For this reason, we shall focus our efforts on identifying the following two possible approaches:
- Pre-constrained ErKF (PreC-ErKF): This method enforces the constraint on the error state before the correction step. Thus, the OP will be formulated around the a priori state estimate since the a posteriori state estimate has not yet been calculated. Moreover, in this case, non-zero ErKF estimates will be projected onto the constraint surface.
- Post-constrained ErKF (PostC-ErKF): In contrast to the above approach, the postC-ErKF enforces the constraints after the correction step. Consequently, the OP will now be computed with , and the zero-valued ErKF’s error state will be projected onto the constraint surface due to the value reset procedure.
The above two constrained ErKFs are described with the aid of a block diagram, as depicted in Figure 1. One can notice that since the ErKF’s estimates are reset after the correction (i.e., ), their statistical properties will be altered noticeably [13,28]. Therefore, constraint enforcement before and after this stage could provide divergent solutions, and both cases need to be rigorously studied.
Figure 1.
Block diagram illustrating two possible approaches for incorporating constraints into the ErKF procedure using the estimate projection method.
Since this paper considers nonlinear constraints, we need to linearize them around the current estimated state to derive the corresponding error-state constraints. Assume that the error state is sufficiently small so that the high-order terms of the linearization can be ignored. Thus, from (2) and the preceding definition of and , we have
where the description of is apparent from the above equation. To support our proposal, as established earlier in this section, and to enhance the clarity of the development below, one needs to further define
and
The above and are, respectively, used for the preC-ErKF and postC-ErKF. The occurrence of this discrepancy can be attributed to the fact that in the case of the pre-constrained approach, at the time of calculating , only the a priori estimate is available. On the contrary, since the postC-ErKF enforces the constraints after the correction, the a posteriori estimate is obtainable for calculating so that it becomes .
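The only computational difference between the two definitions is therefore the point at which the constraint Jacobian is evaluated. A minimal sketch is shown below; the finite-difference Jacobian is used purely for illustration (an analytical Jacobian would normally be preferred), and the names `D_pre` and `D_post` are ours.

```python
import numpy as np

def numerical_jacobian(g, x, eps=1e-6):
    """Finite-difference Jacobian of the constraint function g at x."""
    x = np.asarray(x, dtype=float)
    gx = np.atleast_1d(g(x))
    J = np.zeros((gx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(g(x + dx)) - gx) / eps
    return J

# PreC-ErKF: linearize around the a priori estimate (the a posteriori one is not yet available).
# D_pre  = numerical_jacobian(g, x_prior)
# PostC-ErKF: linearize around the a posteriori estimate obtained after the correction step.
# D_post = numerical_jacobian(g, x_post)
```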
One can see that the estimates and of are calculated using different state estimates. Since the a posteriori estimate has a smaller error covariance than the a priori estimate [5], we may reasonably argue that using the former could reduce the linearization error of . This argument is implicitly consistent with the findings presented in [11]. For the subsequent analysis, we introduce the following mathematical derivations of the techniques mentioned earlier:
Lemma 1
(PreC-ErKF). Given the ErKF’s a posteriori estimate at time step k, finding the pre-constrained error-state estimate involves the following OP:
and its solution
where and the vector is the final estimate obtained by correcting the constrained error estimate from the KF’s a priori estimate . Note that the estimation characteristics (e.g., mean and error covariance) of and are distinct.
Proof.
The constrained problem of the error state can be directly derived from the estimate projection method in Section 3.2. Particularly, (4a) can be rewritten as
Let be the difference between the actual state and the constrained estimate. It can be verified, by the definition of the true error in Section 3.3, that . These together yield (12a). Next, by the fact that after calculating , we need to correct the constrained estimation error to find the constrained estimate as in (13b). By substituting into (4b) and replacing the linearization point of to the a priori estimate , we obtain (12b). Now, combining (12a) with (12b), we formulate an estimate projection problem for the error state. One may use the method of Lagrange multipliers to solve this problem. By doing so, the solution should be the same as in (13b). The step-by-step derivation of this algorithm can be found in ([10], Section III).
Alternatively, the solution can be directly obtained from the direct CKF result in (5a). Specifically, by substituting (5a) into (13b), we obtain
By inserting (8e) into (15), we obtain
Canceling out the common factors and rearranging the terms, we have
In the above expression, we have changed and of in (3) to and , respectively. The reason is that the a posteriori estimate has not yet been calculated at this step. This completes the proof of Lemma 1. □
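Because the displayed equations (12)–(16) are not reproduced in this version, the following sketch reflects our reading of Lemma 1: the non-zero a posteriori error estimate is projected onto the constraint linearized about the a priori estimate, and the projected error is then used in a single correction of the a priori full state. For compactness, the known constraint value d is absorbed into g, so the constraint reads g(x) = 0; all names and interfaces are assumptions.

```python
import numpy as np

def prec_erkf(x_prior, dx_post, P_post, g, D_pre):
    """Pre-constrained ErKF step (a sketch of our reading of Lemma 1).

    x_prior : a priori full-state estimate
    dx_post : non-zero a posteriori error-state estimate (before correction)
    P_post  : a posteriori error covariance
    g       : constraint function with its target value absorbed, so g(x) = 0 on the surface
    D_pre   : constraint Jacobian evaluated at x_prior
    """
    # Linearized error-state constraint around x_prior: D_pre @ dx = -g(x_prior) (to first order).
    r = D_pre @ dx_post + np.atleast_1d(g(x_prior))           # constraint residual of the error estimate
    gain = P_post @ D_pre.T @ np.linalg.inv(D_pre @ P_post @ D_pre.T)
    dx_c = dx_post - gain @ r                                 # projected (constrained) error estimate
    x_c = x_prior + dx_c                                      # single correction of the a priori estimate
    P_c = P_post - gain @ D_pre @ P_post                      # constrained error covariance
    return x_c, dx_c, P_c
```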
Remark 1.
The PreC-ErKF is statistically consistent with the constrained Kalman gain methods in [22,29], particularly in that the constraint error is corrected based on the a priori estimate . Interestingly, these two approaches are identical if the weighting factor in (12a) is intentionally set as an identity matrix [9].
Lemma 2
(PostC-ErKF). Given that the a posteriori estimation error of KF has been corrected, i.e., is computed and is reset to zero, finding the post-constrained error-state estimate involves the following OP:
and its solution
where , and the vector is the final estimate obtained by correcting the constrained error estimate from the KF’s a posteriori estimate . The estimation characteristics of and are obviously different.
Proof
(Sketch version). As the situation is analogous to Lemma 1, we give only a sketch. By substituting into (12a) and (13a), we obtain (18a) and (19a), respectively. Note that, in this scenario, the error estimate from the KF has been corrected so that we have . Hence, we obtain the exact formulas of and as in (11). Additionally, by substituting in (19b) into (4b), we directly obtain (18b) without any modification. □
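Analogously, a sketch of our reading of Lemma 2 is given below: after the unconstrained correction, the (reset) zero error estimate is projected onto the constraint linearized about the a posteriori estimate, and the result is applied as a second correction. Again, the constraint value d is absorbed into g, and all names are ours.

```python
import numpy as np

def postc_erkf(x_post, P_post, g, D_post):
    """Post-constrained ErKF step (a sketch of our reading of Lemma 2).

    x_post : a posteriori full-state estimate (the error state has been folded in and reset to zero)
    P_post : a posteriori error covariance
    g      : constraint function with its target value absorbed, so g(x) = 0 on the surface
    D_post : constraint Jacobian evaluated at x_post
    """
    gain = P_post @ D_post.T @ np.linalg.inv(D_post @ P_post @ D_post.T)
    # Project the (reset) zero error estimate: it is moved just enough to cancel
    # the constraint residual of the corrected full state.
    dx_c = -gain @ np.atleast_1d(g(x_post))
    x_c = x_post + dx_c                                       # second correction step
    P_c = P_post - gain @ D_post @ P_post                     # constrained error covariance
    return x_c, dx_c, P_c
```

Consistent with Remark 2, if the corrected estimate already satisfies the constraint, i.e., g(x_post) = 0, the projected error is zero and the estimate is left unchanged.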
Remark 2.
The result in (19a) shows that if . This explicitly demonstrates that the unconstrained estimate satisfies the equality constraint in (2). Therefore, no more action (constraining) needs to be taken. This observation is more intelligible than the preC-ErKF approach. In particular, if the a priori unconstrained estimate satisfies the constraint, then from (13a), we have , so that , which is equivalent to . This means that indeed, satisfies the direct constraint, not .
4.2. Properties of the CErKF
In the following section, we will scrutinize some of the critical attributes of the preC-ErKF and postC-ErKF in the context of a linear constraint (i.e., can be expressed as , where D is a constant matrix that has full rank s, and ). The motivation for this restriction is that linearization makes it challenging to analyze the filter’s behavior theoretically. However, building on the development of the linear case, we will establish a particular case of the nonlinear constraint , in which the effect of the linearization error is canceled out in the filter’s estimation confidence. Most of the developments below are formulated based on the full-state CKF results in [10] with appropriate adjustments.
Theorem 1.
The constrained state estimates and , as given by the pre-constrained and post-constrained ErKF, respectively, are both unbiased state estimators when the constraint is linear. That is,
Proof.
In this proof, we shall use the same symbol to denote a random variable and the value it takes; no confusion should arise, as the distinction is clear from context.
Because the constraint is now a linear equation, we can rewrite the term in (10) as . As a clear consequence, we have and . From Lemma 1, it can be shown that
Then,
Taking the expectation of both sides of the above equation gives
It should be noted that we take the random variable out of the expectation operator under the same assumptions as those used in the derivation of the EKF, specifically that is zero-mean [10,22].
In a similar manner, from Lemma 2, we can write
Since the unC-ErKF provides an unbiased estimate of , we have . This implies that . □
Theorem 2.
Let and be the error covariances of the constrained estimate before the correction and after the correction , respectively. Suppose that the constraint is linear; then,
Proof.
It can be shown from (25) that if the constraint is linear, we can write
In a similar manner, from (26), we also have
Clearly, one could show that
Next, the above equation can be further expressed as
It might be worth noting that (33) is obtained using the following equality:
Since D is assumed to be full-rank (i.e., rank), then is positive definite ([10], Th. 2). Thus, we can conclude that
By combining these results with (32), we complete the proof of Theorem 2. □
Theorem 3.
Let and , respectively, be the covariances of the constrained estimation errors and . If the constraint is linear, then the following equality holds:
Proof.
The covariance of the pre-constrained estimation error can be given as
Recall that . Write
By Theorem 1, we know that if the constraint is linear. Thus, the above equation becomes
where we use the fact that the constrained estimation error is zero-mean and uncorrelated with the full state ([5], Ch. 5). Similarly, we can show that
Taking the difference between and , we obtain
By Theorem 2, we can conclude that if the constraint is linear, then (32) holds, from which the result (36) follows. □
Proposition 1.
Suppose that the constraint is now nonlinear, and we need to linearize it to apply the results that have been derived so far. Let us consider the following circumstances:
- 1.
- If the nonlinear constraint can be expressed aswhere c is a constant scalar, and the state-dependent matrix is obtained from linearizing , then Theorem 1 and (35) hold.
- 2.
- If the above linearized constraint matrix can be further written aswhere is a constant matrix, and the vector contains state-dependent functions such that , then, not only are both of the considered filters unbiased estimators, but also their corresponding constrained error covariances and are identical. Consequently, it can be inferred from the proof of Theorem 3 that and are also identical.
Proof.
We first provide an illustrative example to enhance the clarity of the subsequent discussion. Suppose that the state variable is and that it satisfies the following polynomial equations: . This constraint fulfills both (43) and (44).
Remark 3.
Since the system’s constraints not only come from the physical laws but can also be chosen and designed by engineers, it is possible for the nonlinear constraint to meet one of those assumptions in practice.
Now, assuming that (43) holds, one could rewrite it as
Because the constraint is nonlinear, (21) becomes
Similarly, one could obtain
Clearly, by a rationale similar to that presented in the proof of Theorem 1, it follows that . This means that the preC-ErKF and the postC-ErKF are both unbiased estimators.
From (48), (49), and (33), it can be verified that
and
Since and are both positive definite for all and ([10], Th. 2), (35) holds.
Next, it follows from (50) and (51) that
If satisfies (44), from the definition of in Lemma 2, we have
Notice that the Hadamard product terms in (55) can be rewritten as , where the operator returns a square diagonal matrix.
Remark 4.
The range of function f guarantees that the matrix is not singular. In fact, this is equivalent to having a full-rank , which coincides with the assumption in Section 3.2. Thus, the condition is trivially met.
By substituting this equality into (55), we obtain
It can be seen that the above equation is equivalent to (32), meaning that the effect of the linearization error from on the post-constrained error covariance is eliminated. This results in an error covariance similar to that of the pre-constrained approach, in which the effect of the linearization error from is also canceled out. Hence, from (57), one can show that
From the proof of Theorem 3, we directly have . □
4.3. Implementation
This section explains the step-by-step practical implementation of the filters in detail. Specifically, since the preC-ErKF and postC-ErKF share the same initialization and part of the time update process, Algorithm 1 presents their pseudocode jointly. It is worth mentioning that using the constrained process noise covariance instead of (lines 2 to 4 of Algorithm 1) is optional, even though it is recommended as the initial value within the KF framework. By doing so, one could achieve significant enhancements (see Table 1 and ([9], Table II)). However, this technique [18] does not originate from the proposed filters; hence, it remains optional.
Table 1.
Time-averaged RMS estimation (sum of all state’s elements) and constraint error of four considered filtering algorithms of the first example with two different constraints (complete and incomplete) over 600 Monte Carlo trials.
Algorithm 1. Constrained Error-state Kalman Filter (Option #1: Pre-constrained; Option #2: Post-constrained).
As pointed out in Figure 1, the difference between those two methods lies in the number of correction steps. In cases where constraining is deemed necessary, the postC-ErKF mandates two correction steps, one for the unconstrained estimate (8e) and another for the constrained estimate (19b). In contrast, the preC-ErKF necessitates only a single correction step, (8e) or (13b), irrespective of the extent to which the constraints are satisfied. It is important to note that the ultimate solutions and of the mentioned filters are not fed back into the conventional ErKF’s main recursion. Although we have not verified this, we believe that propagating a constrained estimate through an unconstrained model may have a negative impact on the ErKF’s statistical characteristics and behavior.
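Since the pseudocode of Algorithm 1 is not reproduced above, the sketch below shows how one epoch of the constrained ErKF could be organized around the building blocks discussed so far, with a flag selecting Option #1 (pre-constrained) or Option #2 (post-constrained). The optional constrained process noise covariance of [18] is omitted, the question of which quantities are propagated to the next epoch is deliberately left outside the sketch, and all names and interfaces are assumptions.

```python
import numpy as np

def cerkf_epoch(x_nom, P, z, F, H, Q, R, g, jac_g,
                propagate_nominal, predict_meas, option="post"):
    """One epoch of the constrained ErKF (our sketch of Algorithm 1).

    option = "pre"  -> Option #1: constrain the error state before the correction step
    option = "post" -> Option #2: constrain the error state after the correction step
    g(x) is the constraint function with its target value absorbed (g(x) = 0 on the surface).
    """
    # Shared time update (the a priori error estimate is zero by construction).
    x_nom = propagate_nominal(x_nom)
    P = F @ P @ F.T + Q

    # Shared measurement update on the error state.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    dx = K @ (z - predict_meas(x_nom))
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T

    if option == "pre":
        # Option #1: project the non-zero error estimate, then perform a single correction.
        D = jac_g(x_nom)                                  # Jacobian at the a priori estimate
        gain = P @ D.T @ np.linalg.inv(D @ P @ D.T)
        dx_c = dx - gain @ (D @ dx + np.atleast_1d(g(x_nom)))
        x_c = x_nom + dx_c
    else:
        # Option #2: correct first, then project the (reset) error estimate; two corrections.
        x_nom = x_nom + dx
        D = jac_g(x_nom)                                  # Jacobian at the a posteriori estimate
        gain = P @ D.T @ np.linalg.inv(D @ P @ D.T)
        x_c = x_nom - gain @ np.atleast_1d(g(x_nom))

    P_c = P - gain @ D @ P                                # constrained error covariance
    # Which quantities (constrained or unconstrained) are carried to the next epoch
    # is left to Algorithm 1 and the note above; this sketch only returns the outputs.
    return x_c, P_c
```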
5. Case Studies
To rigorously validate the performance of the two investigated filters, we reproduce two test-case problems from [9] with increasing initial condition uncertainties. These two problems are suitable validation benchmarks because of their simple system models, which nevertheless reflect most of the filters’ salient behavior. Moreover, these testing scenarios are frequently encountered in practical applications, and their results can be compared against the summary outcomes reported in the cited survey paper. For the sake of clarity, we have preserved the exact values of all parameters used in both simulations, except the number of Monte Carlo runs, and repeat them here for the reader’s benefit.
The primary objective of this section is to analyze the numerical performance of the preC-ErKF and postC-ErKF to determine the more effective approach for incorporating constraints into the ErKF when the EPM is chosen. Additionally, we compare these methods with the UnC-ErKF and the gain projection ErKF (GP-ErKF) presented in [9]. This comparison is motivated by the fact that, despite being mentioned in [9], the GPM was neither considered nor numerically evaluated. By including this analysis, we aim to provide a more comprehensive comparison of constrained filtering techniques.
Accuracy is the only metric used for comparison between the approaches. Note that since they are all KF-based algorithms, the consistency of the filters can be theoretically guaranteed if the KF’s process noise and measurement noise covariance matrices are well chosen. Thus, we intentionally omit the filter consistency analysis. For evaluating the accuracy of the estimates, a Monte Carlo-averaged root mean square estimation error (RMSEE) is considered [30]. The value of is computed over a set of M given Monte Carlo trials as . Here, and are the actual and estimated states at the time instant k during the m-th Monte Carlo run. From this, the time-averaged RMSEE can be calculated as . The constraint error is also taken into account in terms of the RMS error [10]. Specifically, the RMS constraint error (RMSCE) is defined as . Subsequently, the time-averaged RMSCE is computed as .
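For reference, the metrics defined above can be computed from Monte Carlo logs as sketched below. The array layout (`x_true` and `x_est` as M × K × n arrays over M runs, K time steps, and n state elements) and the convention that the constraint target is absorbed into g are assumptions we make for illustration.

```python
import numpy as np

def rmsee(x_true, x_est):
    """Monte Carlo-averaged RMS estimation error per time step (summed over state elements)."""
    err = x_est - x_true                                        # shape (M, K, n)
    return np.sqrt(np.mean(np.sum(err**2, axis=-1), axis=0))    # shape (K,)

def rmsce(x_est, g):
    """Monte Carlo-averaged RMS constraint error per time step."""
    ce = np.array([[np.sum(np.atleast_1d(g(x))**2) for x in run] for run in x_est])
    return np.sqrt(np.mean(ce, axis=0))                         # shape (K,)

# Time-averaged values are then simple means over the time axis:
# rmsee_bar = np.mean(rmsee(x_true, x_est))
# rmsce_bar = np.mean(rmsce(x_est, g))
```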
5.1. Example 1: A Linear System with Linear Equality Constraints
In this example, we study a simple position estimation of a two-dimensional (2D) vehicle. The state vector contains the position and velocity of the vehicle in the east (x-axis) and north (y-axis) directions, respectively. The state space error dynamical model for this system can be deduced from its full-state form as
where is the sample period of the filter. Note that to obtain the above system from the original full-state dynamical equation, we assume that the control input is perfectly known. Next, the constraint on the error state is given as
where is the vehicle’s heading angle measured counter-clockwise from due east, and the error constraint value is written as . To conduct a more comprehensive analysis, we additionally employ an incomplete variant of in which only the vehicle’s velocity is constrained. In particular, this constraint can be written as follows:
This constraint is considered as an incomplete one. In contrast to the constraint , the constraint relies on the principle that velocity determines position, implying that a velocity constraint implicitly constrains position. However, since position is derived from velocity through an approximate integration, any error in the velocity estimation could adversely affect the position constraint. For a detailed discussion, please refer to [9] and the relevant literature mentioned therein.
For the system model’s parameters, the heading angle is chosen as a constant of 60 degrees, and the constraint values on the full state, and , are set to and 0, respectively. For the filter’s parameters, the sample period is set to 3 s, the KF’s initial state and its corresponding covariance are and . The process noise and measurement noise are presumed to have the same statistical characteristics as considered in Section 3.2 with the value of their noise covariance matrices being and . We implement all the algorithms (unC-ErKF, GP-ErKF, preC-ErKF, and postC-ErKF) on a 150 s simulation over 600 Monte Carlo trials for two scenarios where and are individually considered.
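The following sketch sets up this first example in code. The state ordering, the sign conventions of the heading constraint, and the zero constraint targets are assumptions on our part (they mirror the land-vehicle benchmark of [9,10]), since the displayed matrices are not reproduced above; the numerical values of the sample period and heading follow the text.

```python
import numpy as np

# State ordering assumed here: [east position, east velocity, north position, north velocity].
T = 3.0                                   # sample period [s]
theta = np.deg2rad(60.0)                  # heading angle, counter-clockwise from due east

F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)

# Complete constraint: both position and velocity lie along the heading direction,
# i.e. (north component) = tan(theta) * (east component) for position and for velocity.
D_complete = np.array([[np.tan(theta), 0, -1, 0],
                       [0, np.tan(theta), 0, -1]])
d_complete = np.zeros(2)                  # zero targets assumed for illustration

# Incomplete constraint: only the velocity direction is constrained.
D_incomplete = np.array([[0, np.tan(theta), 0, -1]])
d_incomplete = np.zeros(1)
```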
Table 1 lists the time-averaged RMSEE and RMSCE obtained from the above-mentioned simulation. Two observations can be drawn from these results. Firstly, the preC-ErKF, postC-ErKF, and GP-ErKF all outperform the unC-ErKF, as can be easily predicted both intuitively and mathematically from the result of Theorem 2. Secondly, when using , the estimation and constraint errors of all frameworks, except for the unC-ErKF, are indistinguishable. This aligns with the findings in [9], which state that all the constrained filters yield identical performance if the constraint is complete.
In the case of the incomplete constraint matrix , although the evaluated approaches achieve comparable levels of constraint satisfaction, the postC-ErKF provides a slightly larger improvement than the preC-ErKF and is identical to the GP-ErKF. Based on this result, one may tentatively conclude that the preC-ErKF is more sensitive to constraint incompleteness than the others.
It is also worth emphasizing that unlike in [9], we compared the unC-ErKF with and the C-ErKFs with . This implementation is based on the principle that the unconstrained filter should completely ignore the constraint’s information. The results presented in Table 1 provide partial corroboration of the theorems that we have proven previously.
5.2. Example 2: A Nonlinear System with a Nonlinear Equality Constraint
In the following, the investigated filters are employed to estimate the state of a nonlinear pendulum subject to a nonlinear energy constraint [31]. We linearize the system using the first-order Taylor expansion to deal with the nonlinearity. Hence, the error-state model of the pendulum can be expressed as follows:
where and are the angular position and velocity error at the time instant k. The system’s state is updated at a time step of s. By conservation of energy, the system (62) satisfies the following first-order linearized constraint [9]:
Here, , where C is some constant. The system model’s parameters are chosen as follows: the gravitational acceleration m/s2, the pendulum length m, and the pendulum mass kg. The system’s initial state is set as .
The filter’s process noise covariance matrix is chosen as , and the measurement noise covariance matrix is . To further evaluate the filter’s performance, we analyze the robustness to bad initialization with an increasing initial condition error of for , and , respectively. Specifically, this error is mathematically defined as , where . For this reason, the initial estimation error covariance will be set distinctly for each . In particular, . The simulation was performed for a duration of 25 s, with 300 Monte Carlo trials executed.
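A sketch of the pendulum example is given below. The Euler discretization, the exact energy expression, and the parameter values for the length, mass, and time step are assumptions for illustration (the text’s exact values are not reproduced here); the linearized energy constraint Jacobian corresponds to the first-order expansion described above.

```python
import numpy as np

# Illustrative parameter values; the text's exact values are not reproduced in this version.
grav, L, m = 9.81, 1.0, 1.0                # gravitational acceleration, pendulum length, mass
dt = 0.05                                  # update time step [s]

def f_nominal(x):
    """Discrete nominal pendulum dynamics (simple Euler step): x = [theta, theta_dot]."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega - dt * (grav / L) * np.sin(theta)])

def F_jacobian(x):
    """Error-state transition matrix: Jacobian of f_nominal at the current estimate."""
    theta, _ = x
    return np.array([[1.0, dt],
                     [-dt * (grav / L) * np.cos(theta), 1.0]])

def energy_constraint(x, C):
    """Total pendulum energy minus its known value C (so g(x) = 0 on the constraint surface)."""
    theta, omega = x
    return np.array([0.5 * m * L**2 * omega**2 + m * grav * L * (1.0 - np.cos(theta)) - C])

def energy_jacobian(x):
    """First-order (linearized) constraint matrix used by both constrained filters."""
    theta, omega = x
    return np.array([[m * grav * L * np.sin(theta), m * L**2 * omega]])
```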
Figure 2 shows the square root of the actual estimation error covariance of and obtained from the Monte Carlo simulations. These results provide an initial intuitive assessment, suggesting that the theorems presented in this paper are not guaranteed to hold for general nonlinear systems. It is apparent from Figure 2a that when the initial error is relatively small (), both proposed C-ErKF methods exhibit a smaller error covariance than the conventional unC-ErKF at all time steps k. This finding is consistent with the inequality outlined in (27), despite the system being nonlinear and the constraint matrix failing to satisfy the assumptions of Proposition 1. Furthermore, despite the overall similarity in performance between the two considered methods, the post-constrained approach exhibits a slightly better response during the early stages than its counterpart. Notably, the postC-ErKF also yields a smaller error covariance than the others, although this may be difficult to discern from the figure. On the other hand, the GP-ErKF produces mixed results, exhibiting worse performance than the unC-ErKF when and . This may be due to the constraint imposed on the Kalman gain, which could excessively prioritize constraint satisfaction (see Table 2). These findings suggest that the EPM outperforms the GPM. It is important to note that a comparison between the EPM and the GPM was not conducted in [9]; therefore, our results extend the work presented in that study.
Figure 2.
Average standard deviation of the second example’s estimate and for 300 Monte Carlo runs with increasing initial uncertainties .
Table 2.
Time-averaged RMS estimation and constraint error of four considered filtering algorithms for the second example under , , and initial error test cases.
As the initial error increases, the postC-ErKF shows significantly superior performance at some initial steps compared to the others. Surprisingly, the preC-ErKF reveals counterintuitive performance relative to the traditional approach during this period. To be precise, in the case of , Figure 2c illustrates that the preC-ErKF has the largest estimation error covariance among all candidates in both the angular position and velocity variables. This clearly indicates that during these time steps, the pre-constrained strategy has low confidence in its estimation; as a consequence, the accuracy of the estimation is diminished. However, after these early steps, the preC-ErKF quickly approaches the steady state, outperforming the unC-ErKF and reporting performance similar to the postC-ErKF.
The RMSEE and RMSCE mentioned earlier in this section are highlighted in Figure 3. The results display a trend similar to that of Figure 2, owing to the correlation between the error covariance matrix and the filter’s accuracy. In particular, while the postC-ErKF demonstrates the most outstanding performance in both accuracy and robustness, the preC-ErKF exhibits a significant overshoot when and compared to the unC-ErKF during the settling time (Figure 3b,c). Still, the pre-constrained approach outperforms the conventional ErKF in the steady state. Moreover, it can be clearly seen that the post-constrained filter exhibits superior efficiency and improvement compared to the other competitors in terms of the RMSCE . This finding supports our discussion in Section 4.1.
Figure 3.
The second example’s Monte Carlo-averaged RMSEE (sum of all state’s elements) and RMSCE for 300 runs with increasing initial uncertainties .
Table 2 summarizes the time-averaged RMSEE and RMSCE of the second example for three configurations. It should be noted that, for consistency and in accordance with the same rationale, we also implement the constrained filters with and the unconstrained filter with as in the first example. This also applies to those results in Figure 2, Figure 3 and Figure 4. The table is revealing in several ways.
Figure 4.
Time-averaged angular position RMSEE and RMSCE of each Monte Carlo simulation for the case of .
- Firstly, the postC-ErKF provides the smallest estimation error and constraint across all evaluated cases. For instance, in the worst case, when , it gives for , while the preC-ErKF, GP-ErKF, and unC-ErKF offer , , and , respectively. In other words, the post-constrained approach decreases the estimation error rate by approximately , , and compared to that of the pre-constrained, gain-constrained, and unconstrained methods, respectively. In the context of constraint error, similarly, it reduces the constraint error rate by approximately , , and compared to others. This table also supports our previous explanation that the GP-ErKF overly enforces the estimation. As a result, although the constraint error is small, the final estimation is poorer compared to the other methods.
- Secondly, the behavior of the preC-ErKF degrades considerably when faced with large initial errors. Specifically, as increases from to , the pre-constrained strategy produces a corresponding increase in RMSCE with a significant gap (e.g., from to , and to , respectively). Conversely, the proposed post-constrained method yields only a negligibly small increase in response to increasing initialization errors.
For a comprehensive analysis, we also compute the time-averaged RMSEE and RMSCE of each Monte Carlo run; in particular, their values can be computed as and , respectively. The results with are illustrated in Figure 4. It is apparent from this figure that the postC-ErKF outclasses other filters in every Monte Carlo simulation for angular position estimation. Nevertheless, both the gain-constrained and pre-constrained methods provide smaller and compared to the unconstrained filter. These observations are in agreement with what we have discussed so far.
The numerical results thus far indicate that the PostC-ErKF outperforms the PreC-ErKF. Although Proposition 1 provides a special case in which both methods yield the same estimation results, the performance of the two approaches remains uncertain in other scenarios. However, based on the derivations presented, it can be suggested that the PostC-ErKF has an advantage because it linearizes the nonlinear constraints around the a posteriori estimate, resulting in linearized constraints that are closer to the true constraints. In contrast, the PreC-ErKF linearizes the constraints around the a priori estimate, which may lead to less accurate results.
6. Conclusions
In this paper, we have investigated two possible ways to incorporate linearized equality constraints into the conventional ErKF framework using the EPM, which, to the best of our knowledge, has not been considered previously. Since the ErKF involves an extra error correction, the constraints can be enforced before or after this step; we refer to these as the preC-ErKF and postC-ErKF, respectively. Through a complete examination, we have mathematically proven that both proposed filters exhibit identical performance in all aspects for linear constraints, or for particular nonlinear constraints that can be expressed in a certain form. Nevertheless, their behavior when the constraint does not satisfy the mentioned conditions has not been mathematically established. However, the numerical Monte Carlo simulations have shown that the postC-ErKF provides a notable enhancement compared to the preC-ErKF in such scenarios. Specifically, the post-constrained filter exhibits smaller time-averaged estimation and constraint errors and is significantly more robust against increasing initialization error and constraint incompleteness.
It has been clearly shown that the ErKF’s correction step substantially impacts the estimation results when nonlinear constraints are involved. Although this paper’s results deliver adequate evidence for determining whether the post-constrained filter should be prioritized over the pre-constrained approach, further rigorous study of their behavior in the presence of general nonlinear constraints is planned as future work.
Author Contributions
Conceptualization, H.V.D. and J.-w.S.; methodology, H.V.D.; software, H.V.D.; validation, H.V.D. and J.-w.S.; formal analysis, H.V.D. and J.-w.S.; investigation, H.V.D. and J.-w.S.; resources, H.V.D.; data curation, H.V.D.; writing—original draft preparation, H.V.D.; writing—review and editing, H.V.D.; visualization, H.V.D.; supervision, J.-w.S.; project administration, J.-w.S.; funding acquisition, J.-w.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Unmanned Vehicles Core Technology Research and Development Program through the National Research Foundation of Korea (NRF) (NRF-2023M3C1C1A01098408), and by the Unmanned Vehicle Advanced Research Center (UVARC) funded by the Ministry of Science and ICT, the Republic of Korea (No. 2020M3C1C1A01086408).
Data Availability Statement
The data and code presented in this study are available on request from the corresponding author (the data and code are not publicly available due to privacy).
Acknowledgments
We sincerely thank Jae Hyung Jung from the Smart Robotics Laboratory, School of Computation, Information and Technology, Technical University of Munich, and Chan Gook Park from the Navigation and Electronic System Laboratory, Department of Aerospace Engineering, Institute of Advanced Aerospace Technology, Seoul National University, for their valuable comments and suggestions, which have significantly improved this work.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Chen, S.Y. Kalman Filter for Robot Vision: A Survey. IEEE Trans. Ind. Electron. 2012, 59, 4409–4420.
- Kim, S.G.; Crassidis, J.L.; Cheng, Y.; Fosbury, A.M.; Junkins, J.L. Kalman Filtering for Relative Spacecraft Attitude and Position Estimation. J. Guid. Control. Dyn. 2007, 30, 133–143.
- Gao, G.; Gao, B.; Gao, S.; Hu, G.; Zhong, Y. A Hypothesis Test-Constrained Robust Kalman Filter for INS/GNSS Integration With Abnormal Measurement. IEEE Trans. Veh. Technol. 2023, 72, 1662–1673.
- Auger, F.; Hilairet, M.; Guerrero, J.M.; Monmasson, E.; Orlowska-Kowalska, T.; Katsura, S. Industrial Applications of the Kalman Filter: A Review. IEEE Trans. Ind. Electron. 2013, 60, 5458–5471.
- Anderson, B.D.; Moore, J.B. Optimal Filtering; Courier Corporation: North Chelmsford, MA, USA, 2012.
- Daum, F. Nonlinear filters: Beyond the Kalman filter. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 57–69.
- Roumeliotis, S.; Sukhatme, G.; Bekey, G. Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization. In Proceedings of the 1999 International Conference on Robotics and Automation (ICRA), Detroit, MI, USA, 10–15 May 1999; Volume 2, pp. 1656–1663.
- Madyastha, V.; Ravindra, V.; Mallikarjunan, S.; Goyal, A. Extended Kalman Filter vs. Error State Kalman Filter for Aircraft Attitude Estimation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Portland, OR, USA, 8–11 August 2011; p. 6615.
- Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Control. Theory Appl. 2010, 4, 1303–1318.
- Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 128–136.
- Yang, C.; Blasch, E. Kalman Filtering with Nonlinear State Constraints. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 70–84.
- Titterton, D.; Weston, J. Strapdown Inertial Navigation Technology; IET: Herts, UK, 2004; Volume 17.
- Mueller, M.W.; Hehn, M.; D’Andrea, R. Covariance Correction Step for Kalman Filtering with an Attitude. J. Guid. Control. Dyn. 2017, 40, 2301–2306.
- Linderoth, M.; Soltesz, K.; Robertsson, A.; Johansson, R. Initialization of the Kalman filter without assumptions on the initial state. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4992–4997.
- Kneip, L.; Scaramuzza, D.; Siegwart, R. On the initialization of statistical optimum filters with application to motion estimation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1500–1506.
- Pizzinga, A. Constrained Kalman Filtering: Additional Results. Int. Stat. Rev. 2010, 78, 189–208.
- Julier, S.J.; LaViola, J.J. On Kalman Filtering With Nonlinear Equality Constraints. IEEE Trans. Signal Process. 2007, 55, 2774–2784.
- Ko, S.; Bitmead, R.R. State estimation for linear systems with state equality constraints. Automatica 2007, 43, 1363–1368.
- Teixeira, B.O.S.; Chandrasekar, J.; Palanthandalam-Madapusi, H.J.; Torres, L.A.B.; Aguirre, L.A.; Bernstein, D.S. Gain-Constrained Kalman Filtering for Linear and Nonlinear Systems. IEEE Trans. Signal Process. 2008, 56, 4113–4123.
- Jung, J.H.; Gook Park, C. Constrained Filtering-based Fusion of Images, Events, and Inertial Measurements for Pose Estimation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 644–650.
- Do, H.V.; Kwon, Y.S.; Kim, H.J.; Song, J.W. An Improvement of 3D DR/INS/GNSS Integrated System using Inequality Constrained EKF. In Proceedings of the 2022 22nd International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 27 November–1 December 2022; pp. 1780–1783.
- Zanetti, R.; Majji, M.; Bishop, R.H.; Mortari, D. Norm-constrained Kalman filtering. J. Guid. Control. Dyn. 2009, 32, 1458–1465.
- Zanetti, R.; Bishop, R. Quaternion estimation and norm constrained Kalman filtering. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Keystone, CO, USA, 21–24 August 2006; p. 6164.
- Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
- Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice with MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2014.
- Trawny, N.; Roumeliotis, S.I. Indirect Kalman filter for 3D attitude estimation. Univ. Minnesota Dept. Comp. Sci. Eng. Tech. Rep. 2005, 2, 2005.
- Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: New York, NY, USA, 2008.
- Gill, R.; Mueller, M.W.; D’Andrea, R. Full-Order Solution to the Attitude Reset Problem for Kalman Filtering of Attitudes. J. Guid. Control. Dyn. 2020, 43, 1232–1246.
- Chee, S.A.; Forbes, J.R. Norm-constrained consider Kalman filtering. J. Guid. Control. Dyn. 2014, 37, 2048–2053.
- Raihan, D.; Chakravorty, S. Particle Gaussian mixture filters-I. Automatica 2018, 98, 331–340.
- Teixeira, B.O.; Chandrasekar, J.; Tôrres, L.A.; Aguirre, L.A.; Bernstein, D.S. State estimation for linear and non-linear equality-constrained systems. Int. J. Control. 2009, 82, 918–936.