
Mathematics 2019, 7(12), 1168; https://doi.org/10.3390/math7121168

Article
Ellipsoidal and Gaussian Kalman Filter Model for Discrete-Time Nonlinear Systems
1 Geodätisches Institut Hannover, Leibniz Universität Hannover, 30167 Hannover, Germany
2 Institut für Geoinformation und Vermessung Dessau, Hochschule Anhalt, 06846 Dessau, Germany
3 Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA
* Author to whom correspondence should be addressed.
Received: 9 September 2019 / Accepted: 20 November 2019 / Published: 3 December 2019

Abstract

In this paper, we propose a new technique—called the Ellipsoidal and Gaussian Kalman filter—for state estimation of discrete-time nonlinear systems in situations when, for some parts of the uncertainty, we know the probability distributions, while for other parts, we only know the bounds (but we do not know the corresponding probabilities). Similarly to the usual Kalman filter, our algorithm is iterative: on each iteration, we first predict the state at the next moment of time, and then we use the measurement results to correct the corresponding estimates. On each correction step, we solve a convex optimization problem to find the optimal estimate for the system’s state (and the optimal ellipsoid for describing the system’s uncertainty). Testing our algorithm on several highly nonlinear problems has shown that the new algorithm outperforms the extended Kalman filter technique—the state estimation technique usually applied to such nonlinear problems.
Keywords:
Ellipsoidal and Gaussian Kalman filter; state estimation; unknown but bounded uncertainty; nonlinear programming; convex optimization

1. Introduction

State estimation is important for virtually all areas of engineering and science. Every discipline which uses the mathematical modeling of its systems needs state estimation; this includes electrical engineering, mechanical engineering, chemical engineering, aerospace engineering, robotics, dynamical systems’ control and many others.
We estimate the system’s state based on the measurement results. In this estimation, we need to take into account that measurements are never absolutely accurate: the measurement results contain inaccuracy (“noise”)—e.g., due to the inevitable imperfection of the measuring instruments. Our understanding of the system’s dynamics is also usually approximate. In addition to the internal factors (which are described by the system’s state) and the known external factors, there are usually also many other factors that affect the system—and which, from the viewpoint of the model, constitute the noise. The presence of noise affects our ability to predict the system’s behavior and to control the system. It is therefore desirable to minimize the effect of the noise—in particular, its effect on state estimation. In engineering, techniques for decreasing the effect of noise—i.e., for separating (“filtering”) signal from noise—are traditionally known as filtering; because of this, the state estimation problem can be viewed as an important particular case of filtering.
To formulate the state estimation problem in precise terms, we need to have a mathematical model of the actual system, and we need to have a model describing the system’s and measurement uncertainties (noise). To select the best filtering technique, we also need to select a numerical measure for describing the remaining inaccuracy of state estimation. Once this measure is selected, we need, given the measurement results, to find the state estimates for which the selected measure of inaccuracy attains the smallest possible value; see, e.g., [1]. In other words, from the mathematical viewpoint, estimating a state means solving the corresponding optimization problem.
The quality of the resulting state estimates depends on how well our models— i.e., our assumptions about the system and about the noise—describe the actual system and the actual noise. The most widely used state estimation techniques are stochastic techniques, i.e., techniques based on the assumption that we know the probability distribution of the noise (or, more generally, that we have some information about this distribution). In most techniques, it is assumed that the noise is Gaussian, but other distributions have also been considered. Stochastic techniques have been actively developed for dozens (if not hundreds) of years.
The most widely used stochastic state estimation techniques are the Kalman Filter (KF) techniques [2] and the Extended Kalman Filter (EKF) techniques [3,4]. In most practical applications, the traditional Kalman filters are used, which are based on the assumption that the system is linear (and that the noise is Gaussian). In practice, however, many real-life systems are non-linear (and the actual noise distribution is sometimes non-Gaussian).
Because of the ubiquity of non-linear systems, several filtering techniques have been developed for the non-linear case. Many applications use the finite sum approximation techniques such as the Gaussian sum filter (see, e.g., [5]) or linearization techniques such as EKF (see, e.g., [6]). These techniques work well for many non-linear systems. Their main limitation is that they assume that both the system noise and the measurement noise are normally distributed. As a result, sometimes, when the actual distributions are non-Gaussian—e.g., when they are multi-modal—these techniques do not perform well.
Another widely used filtering technique applicable to non-linear systems is the Unscented Kalman Filter (UKF), first proposed in [7]. The UKF technique is based on using a special non-linear transformation of the data—called the Unscented Transform (UT)—that, crudely speaking, transforms the actual probability distributions into simpler ones. The main advantage of UKF is that, in contrast to EKF, UKF techniques do not assume that the required nonlinear transformations are smooth (differentiable). As a result, UKF does not involve the time-consuming computation of the corresponding derivatives—i.e., Jacobian and Hessian matrices. On the other hand, because of their more general nature, UKF algorithms are more complex and, in general, require more computation time.
Yet another widely used filtering technique for nonlinear (and non-Gaussian, and non-stationary) situations is the Particle Filter (PF) technique [8]. The main idea of this technique is that we simulate each probability distribution by selecting several sample points (called particles). We start by selecting several states distributed according to the known prior distribution of the system’s state. To each of the selected states, we apply the system’s dynamics—simulating the corresponding system’s noise and measurement noise. The resulting sample of states (and sample of expected measurement results) represents the distributions corresponding to the next moment of time. We can then use the actual observation results obtained at this moment of time to update the distributions. This method works very well in many practical applications. For example, in geodesy, the PF techniques have been successfully used to accurately determine the trajectory of a moving vehicle based on Laser-scanner-based multi-sensor measurements [9,10].
While these techniques work well in many practical situations, they all have a common limitation: they assume that we have reasonably detailed information about the corresponding probability distributions. In practice, often, we do not have that information. Sometimes, some approximate distributions are provided, but the actual frequencies of different noise values are very different from what these approximate distributions predict. This is a frequent situation, e.g., for measuring instruments, when the manufacturer provides us only with an upper bound on the systematic error component (or even on the overall measurement error) without providing any information on the probabilities of different values within the given bounds; see, e.g., [11].
Such situations are known as situations with Unknown But Bounded uncertainty (UBB). Techniques for state estimation under such uncertainty have been developed since the 1960s; see, e.g., [12,13,14].
In the case of UBB uncertainty, for each noise component, once we know the upper bound $\Delta$ on its absolute value, the only information that we have about the actual noise value is that this value is located in the interval $[-\Delta, \Delta]$; we do not know the probabilities of different values from this interval. Once we know the bounds $\Delta_i$ on each noise value $n_i$ ($1 \le i \le k$), we can therefore conclude that the tuple $n = (n_1, \ldots, n_k)$ formed by these noise values belongs to the box
$$[-\Delta_1, \Delta_1] \times \cdots \times [-\Delta_k, \Delta_k];$$
methods for processing such information are given, e.g., in [15,16,17].
We may also have some additional information about the relation between different noise values—e.g., an upper bound on the difference between noise values at two consecutive moments of time. As a result, in addition to the interval containing each noise value, we may know a bounded closed (hence compact) set containing the tuples of possible noise values.
There exist several techniques for dealing with such uncertainty. The most accurate techniques are the ones that use generic polytopes [18,19]—since by using polytopes, we can approximate any reasonable compact set with any given accuracy. However, to get an accurate approximation, we need to use a large number of parameters—especially in the multi-dimensional case. As a result, in practice, these general methods are only efficient in low-dimensional situations.
Somewhat faster techniques emerge when we limit ourselves to a special class of polytopes called zonotopes; see, e.g., [1,20,21,22,23]. Mathematically, zonotopes are defined as Minkowski sums of intervals (for those who are not familiar with this notion, the definition of Minkowski sum is given in the next section). These methods are more efficient, but still mostly limited to low-dimensional cases.
The only efficient general techniques available in the higher-dimensional cases are techniques based on using ellipsoids; see, e.g., [24,25,26]. If we use ellipsoids, then some problems become easy to solve: e.g., if the system is linear and we know the ellipsoid that contains its initial state, then the set of all possible states at the next moment of time is also an ellipsoid, and this ellipsoid can be easily computed. Other problems are not so easy. For example, if we have two different sources of noise and each of them is described by an ellipsoid, then the set of possible values of the overall noise is no longer an ellipsoid—using the term which is explained in the next section, it is the Minkowski sum of the two original ellipsoids. To apply the ellipsoid technique to this situation, we need to enclose this Minkowski sum into an ellipsoid. We want this enclosing ellipsoid to represent the sum as accurately as possible—so, to compute this ellipsoid, we need to solve the corresponding optimization problem (this will also be described in the next section).
The above techniques take care of the situations in which we either know all the corresponding probability distributions or we do not know the probabilities at all—we only know the bounds on the noises. In practice, often, we have the probability information about some noise components, and we only know bounds on other components of the noise. For such situations, we need state estimation techniques that would take both types of uncertainty into account. For the linear case, such techniques have been developed by Benjamin Noack in his PhD dissertation [27]. In this paper, we extend these techniques to the general nonlinear case.
Following Noack, we use ellipsoids to describe the UBB uncertainty. At this stage of our research, we limit ourselves to the situations when all the probability distributions are Gaussian. Thus, we call our new filtering techniques the Ellipsoidal and Gaussian Kalman Filter (EGKF, for short).
For random uncertainties, the newly proposed method keeps the Kalman filter’s recursive framework, and thus, with respect to this uncertainty, is as efficient as the original KF. Of course, since we also use ellipsoidal uncertainty, we need to solve an optimization problem on each step, as a result of which our method requires somewhat more computation time than the usual KF.
The structure of the paper is as follows. Section 2 describes the mathematical definitions and results that are used in the following text—including the general definitions and results about the corresponding dynamical systems and filters. In Section 3, we analyze the corresponding problem, derive the formulas of the resulting EGKF algorithm, and, finally, present this algorithm in a practical ready-to-use form. In Section 4, we show, on several test cases, how the new method works—and we show that, in most cases, it indeed performs better than EKF. The last section contains conclusions and future work.

2. Mathematical Model

This section contains the mathematical definitions and results that are used in the following text—including the general definitions and results about the corresponding dynamical systems and filters. It also includes the formulation of the problem in precise terms.

2.1. Mathematical Definitions and Results Used in This Paper

The following definitions, theorems and corollaries are required for the derivation of the new filter model EGKF. Proofs of these results can be found, e.g., in [26].
Definition 1.
Let n be a positive integer. By a bounded ellipsoid in $\mathbb{R}^n$ with nonempty interior (or simply ellipsoid, for short) we mean a set
$$E = E(c, S) = \{x \in \mathbb{R}^n \mid (x - c)^T S^{-1} (x - c) \le 1\},$$
where $c \in \mathbb{R}^n$ and $S$ is a positive definite matrix (this will be denoted by $S > 0$). $c$ is called the center of the ellipsoid $E$, and $S$ is called the shape matrix.
The shape matrix specifies the size and orientation of the ellipsoid.
Definition 2.
Let A and B be sets in the Euclidean space $\mathbb{R}^n$. By the Minkowski sum $A + B$ of A and B, we mean the set of all possible values obtained by adding elements of A and B:
$$A + B = \{a + b \mid a \in A,\ b \in B\}.$$
In general, for K ellipsoids in $\mathbb{R}^n$,
$$E_k = E(c_k, S_k), \quad k = 1, 2, \ldots, K,$$
their Minkowski sum
$$U_K = \sum_{k=1}^{K} E_k$$
is not an ellipsoid anymore; however, it is still a convex set.
In this paper, we will look for the ellipsoid with the smallest trace $\operatorname{tr}(S)$ that contains the Minkowski sum. We selected this criterion since it corresponds to minimizing the mean square error in the probabilistic case. In principle, we could instead minimize the volume of the ellipsoid—which corresponds to minimizing the determinant $|S|$ of the shape matrix—or the largest eigenvalue $\lambda_{\max}(S)$ of the shape matrix. Minimizing the largest eigenvalue $\lambda_{\max}(S)$ makes the ellipsoid more like a ball (in the 2D case, more like a circle). Minimizing the volume sometimes leads to oblate ellipsoids, with large uncertainty in some directions.
For minimizing the trace, we will denote the corresponding minimization problem by $T^+$:
$$E^* = \arg\min_{U_K \subseteq E} \operatorname{tr}(S) \qquad (\text{Problem } T^+).$$
It is known that the optimal ellipsoid E * always exists and is unique [26].
Theorem 1.
The center of the optimal ellipsoid $E^*$ for Problem $T^+$ is given by
$$c^* = \sum_{k=1}^{K} c_k. \tag{6}$$
Theorem 2.
Let D be the convex set of all vectors $\alpha \in \mathbb{R}^K$ for which $\alpha_k > 0$ for all k and $\sum_{k=1}^{K} \alpha_k = 1$. Then, for any $\alpha \in D$, the ellipsoid $E_\alpha = E(c^*, S_\alpha)$, with $c^*$ defined by (6) and
$$S_\alpha = \sum_{k=1}^{K} \alpha_k^{-1} S_k,$$
contains the Minkowski sum $U_K$.
Corollary 1.
(Special case of Theorem 2) When $K = 2$, we have $\alpha_1 + \alpha_2 = 1$, so the formula for $S_\alpha$ can be rewritten as
$$S_\alpha = \frac{1}{\alpha_1} S_1 + \frac{1}{\alpha_2} S_2 = \left(1 + \frac{1}{\beta}\right) S_1 + (1 + \beta) S_2,$$
where β can be any positive real number.
Proof of Corollary 1.
Indeed, for each $\beta > 0$, we can take $\alpha_1 = \frac{\beta}{1 + \beta}$ and $\alpha_2 = \frac{1}{1 + \beta}$. ☐
Theorem 3.
In the family $E_\alpha = E(c^*, S_\alpha)$, the minimal-trace ellipsoid containing the Minkowski sum of the ellipsoids $E_k = E(c_k, S_k)$, $k = 1, 2, \ldots, K$, is obtained for
$$S_{\alpha^*} = \left(\sum_{k=1}^{K} \sqrt{\operatorname{tr}(S_k)}\right) \cdot \sum_{k=1}^{K} \frac{S_k}{\sqrt{\operatorname{tr}(S_k)}}.$$
Corollary 2.
(Special case of Theorem 3) When $K = 2$, we have
$$S_{\alpha^*} = \left(1 + \frac{1}{\beta^*}\right) S_1 + (1 + \beta^*) S_2, \quad \text{where } \beta^* = \sqrt{\frac{\operatorname{tr}(S_1)}{\operatorname{tr}(S_2)}}.$$
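To make Corollary 2 concrete, the following sketch (Python with NumPy; the helper name `min_trace_sum` is ours, not from the paper) builds the minimal-trace enclosing shape matrix for the Minkowski sum of two centered ellipsoids and spot-checks containment on sampled boundary points:

```python
import numpy as np

def min_trace_sum(S1, S2):
    """Minimal-trace shape matrix enclosing the Minkowski sum of E(0,S1) and E(0,S2)
    (Corollary 2)."""
    beta = np.sqrt(np.trace(S1) / np.trace(S2))
    return (1.0 + 1.0 / beta) * S1 + (1.0 + beta) * S2

S1 = np.diag([4.0, 1.0])
S2 = np.diag([1.0, 2.0])
S_star = min_trace_sum(S1, S2)

# Spot check: any a in E(0,S1) plus any b in E(0,S2) must lie inside E(0, S_star).
rng = np.random.default_rng(0)
Sinv = np.linalg.inv(S_star)
L1, L2 = np.linalg.cholesky(S1), np.linalg.cholesky(S2)
for _ in range(1000):
    u = rng.normal(size=2); u /= np.linalg.norm(u)   # boundary point of E(0,S1)
    v = rng.normal(size=2); v /= np.linalg.norm(v)   # boundary point of E(0,S2)
    x = L1 @ u + L2 @ v
    assert x @ Sinv @ x <= 1.0 + 1e-9
```

Note that the enclosure is minimal only in the trace criterion; other criteria (volume, largest eigenvalue) would give different enclosing ellipsoids.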

2.2. Dynamical Systems and Linearization

In this paper, we consider the following general discrete-time nonlinear system:
$$x_{k+1} = f_k(x_k, u_k, w_k, a_k), \tag{11a}$$
$$y_k = h_k(x_k, v_k, b_k), \tag{11b}$$
where:
  • $x_k$ is the n-dimensional state vector at time $t_k$,
  • $u_k$ is the known input vector,
  • $w_k \sim \mathcal{N}(0, C_k^u)$ is a Gaussian system noise with covariance matrix $C_k^u$,
  • $a_k \in E(0, S_k^u)$ is a UBB perturbation with shape matrix $S_k^u$,
  • $v_k \sim \mathcal{N}(0, C_k^z)$ is a Gaussian measurement noise with covariance matrix $C_k^z$,
  • $b_k \in E(0, S_k^z)$ is a UBB perturbation with shape matrix $S_k^z$—all at the time $t_k$.
To make notations clearer, in this paper, parameters related to the first (system) equation will be denoted by u and parameters related to the second (measurement) equation will be denoted by z.
Similarly to EKF, we start with linearization: namely, we expand the right-hand side of the above equations in a Taylor series and keep only the linear terms in the corresponding expansion. For the system Equation (11a), we perform the Taylor expansion around the point $(x_k = \hat{x}_k^+,\ u_k = u_k,\ w_k = 0,\ a_k = 0)$ and get the following result:
$$\begin{aligned} x_{k+1} &\approx f_k(\hat{x}_k^+, u_k, 0, 0) + \frac{\partial f_k}{\partial x_k}\Big|_{(\hat{x}_k^+, u_k, 0, 0)} (x_k - \hat{x}_k^+) + \frac{\partial f_k}{\partial w_k}\Big|_{(\hat{x}_k^+, u_k, 0, 0)} w_k + \frac{\partial f_k}{\partial a_k}\Big|_{(\hat{x}_k^+, u_k, 0, 0)} a_k \\ &= f_k(\hat{x}_k^+, u_k, 0, 0) + F_{x,k}(x_k - \hat{x}_k^+) + F_{w,k} w_k + F_{a,k} a_k \\ &= F_{x,k} x_k + \left[f_k(\hat{x}_k^+, u_k, 0, 0) - F_{x,k}\hat{x}_k^+\right] + F_{w,k} w_k + F_{a,k} a_k \\ &= F_{x,k} x_k + \tilde{u}_k + F_{w,k} w_k + F_{a,k} a_k. \end{aligned}$$
Here, we denoted $\tilde{u}_k = f_k(\hat{x}_k^+, u_k, 0, 0) - F_{x,k}\hat{x}_k^+$, and $F_{x,k}$, $F_{w,k}$, and $F_{a,k}$ denote the corresponding derivatives. Note that, in spite of the fact that our uncertainty model is different from the purely probabilistic model underlying the EKF technique, the resulting formulas are similar to the corresponding EKF formulas—since at this stage, we did not yet use any information about the uncertainty.
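The Jacobians $F_{x,k}$, $F_{w,k}$, and $F_{a,k}$ (and, below, $H_{x,k}$, $H_{v,k}$, $H_{b,k}$) can be computed analytically or approximated numerically. A minimal central-difference sketch (Python/NumPy; the helper `jacobian` is ours, not from the paper):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central-difference approximation of the Jacobian of a vector function f at x."""
    x = np.asarray(x, dtype=float)
    fx = np.atleast_1d(np.asarray(f(x), dtype=float))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (np.atleast_1d(f(x + d)) - np.atleast_1d(f(x - d))) / (2 * eps)
    return J

# Example: Jacobian of f(x) = [x0^2, x0*x1] at (1, 2); analytically [[2, 0], [2, 1]].
Fx = jacobian(lambda x: np.array([x[0]**2, x[0] * x[1]]), np.array([1.0, 2.0]))
```

In the filter, the same helper would be evaluated once per step at the current estimate, with the noise arguments held at zero.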
For the measurement Equation (11b), a similar Taylor expansion around the natural point $(x_k = \hat{x}_k^-,\ v_k = 0,\ b_k = 0)$ leads to the following formula:
$$\begin{aligned} y_k &\approx h_k(\hat{x}_k^-, 0, 0) + \frac{\partial h_k}{\partial x_k}\Big|_{(\hat{x}_k^-, 0, 0)} (x_k - \hat{x}_k^-) + \frac{\partial h_k}{\partial v_k}\Big|_{(\hat{x}_k^-, 0, 0)} v_k + \frac{\partial h_k}{\partial b_k}\Big|_{(\hat{x}_k^-, 0, 0)} b_k \\ &= h_k(\hat{x}_k^-, 0, 0) + H_{x,k}(x_k - \hat{x}_k^-) + H_{v,k} v_k + H_{b,k} b_k \\ &= H_{x,k} x_k + \tilde{z}_k + H_{v,k} v_k + H_{b,k} b_k. \end{aligned}$$
Here, we denoted $\tilde{z}_k = h_k(\hat{x}_k^-, 0, 0) - H_{x,k}\hat{x}_k^-$. Notice that if the measurement equation is linear, then $\tilde{z}_k = 0$.
Thus, from the original system (11), we get the following linearized system:
$$x_{k+1} = F_{x,k} x_k + \tilde{u}_k + F_{w,k} w_k + F_{a,k} a_k, \tag{14a}$$
$$y_k = H_{x,k} x_k + \tilde{z}_k + H_{v,k} v_k + H_{b,k} b_k. \tag{14b}$$
These equations describe the change in the (unknown) true state x k . In practice, we only know the estimates for the state. Following the general idea of Kalman filter, at each moment of time t k , we will consider:
  • an a priori estimate of this state—i.e., what we predict based on the information available at the previous moment of time—and
  • an a posteriori estimate—what we get after taking into account the results of measurements performed at this moment of time.
In this paper, we will denote the a priori estimate by $\hat{x}_k^-$ and the a posteriori estimate by $\hat{x}_k^+$.
Since we consider the case when the system has both the random and the unknown-but-bounded error components (14), both a priori and a posteriori estimates should have the same form:
$$\hat{x}_k^- = c_k^- + g_k^- + e_k^-, \tag{15a}$$
$$\hat{x}_k^+ = c_k^+ + g_k^+ + e_k^+, \tag{15b}$$
where $c_k^-$ and $c_k^+$ are points, $g_k^- \sim \mathcal{N}(0, C_k^-)$ and $g_k^+ \sim \mathcal{N}(0, C_k^+)$ are random variables with Gaussian distributions, and $e_k^- \in E(0, S_k^-)$ and $e_k^+ \in E(0, S_k^+)$ are points within two ellipsoids with shape matrices $S_k^-$ and $S_k^+$, respectively.
Figure 1 illustrates how a state $x_0$ can be decomposed into the sum of the point estimate $c_0$ and two perturbations: the UBB noise component $e_0$ and the Gaussian noise component $g_0$.
Our objective is to use the measurement results to generate the a posteriori estimate $\hat{x}_k^+$, i.e., in precise terms, to compute the point estimate $c_k^+$ and the matrices $C_k^+$ and $S_k^+$.

3. Derivation of Ellipsoidal and Gaussian Kalman Filter

In this section, we derive the EGKF model. The derivation is based on the results presented in Section 2.1 and is summarized by the two theorems presented in the following subsections.

3.1. Prediction

As we have mentioned, our algorithm will describe, moment by moment, how our knowledge of the system’s state changes. At the first moment of time, we have information $\hat{x}_0^+$ about the initial state of the system. For each moment $t_k$, $k = 0, 1, \ldots$, our algorithm will describe the transition from moment $t_k$ to the next moment $t_{k+1}$. So, at moment $t_1$, after applying our algorithm, we will know the a posteriori estimate $\hat{x}_1^+$ for the state of the system at this moment. Then, at moment $t_2$, we will get an a posteriori estimate $\hat{x}_2^+$ for the state of the system at moment $t_2$, etc.
Let us describe the transition from moment $t_k$ to the next moment $t_{k+1}$. At moment $t_k$, we know the a posteriori estimate of the state, i.e., we know that
$$\hat{x}_k^+ = c_k^+ + g_k^+ + e_k^+, \tag{17}$$
where $g_k^+ \sim \mathcal{N}(0, C_k^+)$, $e_k^+ \in E(0, S_k^+)$, and $c_k^+$, $C_k^+$, $S_k^+$ are given.
As we showed in the previous section, the a priori estimate at the next moment of time has the form
$$\hat{x}_{k+1}^- = F_{x,k}\hat{x}_k^+ + \tilde{u}_k + F_{w,k} w_k + F_{a,k} a_k. \tag{18}$$
The following theorem describes this a priori estimate in the desired form (15).
Theorem 4.
The a priori estimate $\hat{x}_{k+1}^-$, defined by the Formula (18), is equal to $\hat{x}_{k+1}^- = c_{k+1}^- + g_{k+1}^- + e_{k+1}^-$, where
$$c_{k+1}^- = F_{x,k} c_k^+ + \tilde{u}_k, \tag{19}$$
$$g_{k+1}^- \sim \mathcal{N}(0, C_{k+1}^-), \tag{20}$$
$$e_{k+1}^- \in E(0, S_{k+1}^-), \tag{21}$$
$$C_{k+1}^- = F_{x,k} C_k^+ F_{x,k}^T + F_{w,k} C_k^u F_{w,k}^T, \tag{22}$$
and
$$S_{k+1}^- = \left(\sqrt{\operatorname{tr}\!\left(F_{x,k} S_k^+ F_{x,k}^T\right)} + \sqrt{\operatorname{tr}\!\left(F_{a,k} S_k^u F_{a,k}^T\right)}\right) \cdot \left(\frac{F_{x,k} S_k^+ F_{x,k}^T}{\sqrt{\operatorname{tr}\!\left(F_{x,k} S_k^+ F_{x,k}^T\right)}} + \frac{F_{a,k} S_k^u F_{a,k}^T}{\sqrt{\operatorname{tr}\!\left(F_{a,k} S_k^u F_{a,k}^T\right)}}\right). \tag{23}$$
Proof. 
From (17) and (18), we get
$$\hat{x}_{k+1}^- = F_{x,k}\left(c_k^+ + g_k^+ + e_k^+\right) + \tilde{u}_k + F_{w,k} w_k + F_{a,k} a_k = F_{x,k} c_k^+ + F_{x,k} g_k^+ + F_{x,k} e_k^+ + \tilde{u}_k + F_{w,k} w_k + F_{a,k} a_k = c_{k+1}^- + g_{k+1}^- + e_{k+1}^-,$$
where we denoted:
$$c_{k+1}^- = F_{x,k} c_k^+ + \tilde{u}_k, \tag{25}$$
$$g_{k+1}^- = F_{x,k} g_k^+ + F_{w,k} w_k, \tag{26}$$
$$e_{k+1}^- = F_{x,k} e_k^+ + F_{a,k} a_k. \tag{27}$$
Here,
$$g_k^+ \sim \mathcal{N}(0, C_k^+), \quad w_k \sim \mathcal{N}(0, C_k^u);$$
therefore, according to the properties of Gaussian distributions and (26),
$$g_{k+1}^- \sim \mathcal{N}(0, C_{k+1}^-),$$
where
$$C_{k+1}^- = F_{x,k} C_k^+ F_{x,k}^T + F_{w,k} C_k^u F_{w,k}^T.$$
Also, here,
$$e_k^+ \in E(0, S_k^+), \quad a_k \in E(0, S_k^u);$$
therefore, according to Corollary 2:
$$e_{k+1}^- \in E(0, S_{k+1}^-),$$
where
$$S_{k+1}^- = \left(\sqrt{\operatorname{tr}\!\left(F_{x,k} S_k^+ F_{x,k}^T\right)} + \sqrt{\operatorname{tr}\!\left(F_{a,k} S_k^u F_{a,k}^T\right)}\right) \cdot \left(\frac{F_{x,k} S_k^+ F_{x,k}^T}{\sqrt{\operatorname{tr}\!\left(F_{x,k} S_k^+ F_{x,k}^T\right)}} + \frac{F_{a,k} S_k^u F_{a,k}^T}{\sqrt{\operatorname{tr}\!\left(F_{a,k} S_k^u F_{a,k}^T\right)}}\right).$$
The proof is complete. ☐
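Theorem 4 translates directly into code. Here is a sketch of one EGKF prediction step (Python/NumPy; the function and variable names are ours):

```python
import numpy as np

def egkf_predict(c, C, S, Fx, Fw, Fa, Cu, Su, u_tilde):
    """One EGKF prediction step (Theorem 4): propagate the point estimate,
    the Gaussian covariance, and the UBB shape matrix."""
    c_pred = Fx @ c + u_tilde                  # point estimate
    C_pred = Fx @ C @ Fx.T + Fw @ Cu @ Fw.T    # Gaussian covariance
    P = Fx @ S @ Fx.T                          # propagated UBB ellipsoid
    Q = Fa @ Su @ Fa.T                         # propagated UBB perturbation
    rP, rQ = np.sqrt(np.trace(P)), np.sqrt(np.trace(Q))
    S_pred = (rP + rQ) * (P / rP + Q / rQ)     # minimal-trace enclosure (Corollary 2)
    return c_pred, C_pred, S_pred

# Scalar sanity check with F_x = 2 and all other quantities equal to 1:
# S^- = (sqrt(4) + sqrt(1)) * (4/2 + 1/1) = 9 and C^- = 4 + 1 = 5.
cp, Cp, Sp = egkf_predict(np.array([1.0]), np.array([[1.0]]), np.array([[1.0]]),
                          np.array([[2.0]]), np.array([[1.0]]), np.array([[1.0]]),
                          np.array([[1.0]]), np.array([[1.0]]), np.array([0.0]))
```

The scalar check agrees with Corollary 2 applied directly: $\beta^* = \sqrt{4/1} = 2$ gives $(1 + \tfrac{1}{2}) \cdot 4 + (1 + 2) \cdot 1 = 9$.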

3.2. Filtering

In the previous section, we showed how to generate the a priori estimate for the system’s state. As a result, at each moment $t_k$, we have the following estimate:
$$\hat{x}_k^- = c_k^- + g_k^- + e_k^-, \tag{34}$$
where $g_k^- \sim \mathcal{N}(0, C_k^-)$, $e_k^- \in E(0, S_k^-)$, and $c_k^-$, $C_k^-$, $S_k^-$ are known.
Let us now describe how we take the observation results $y_k$ into account and thus transform the a priori estimate into an a posteriori one. In line with the general KF idea, to compensate for the prediction error $y_k - h_k(\hat{x}_k^-, 0, 0)$, we add, to the a priori estimate, a term proportional to this error, i.e., we compute the following a posteriori estimate:
$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left[y_k - h_k(\hat{x}_k^-, 0, 0)\right], \tag{35}$$
for an appropriate matrix K k . In this section, we consider what happens for a general selection of the gain factor K k . In the next subsection, we will show how to select the optimal gain factor.
Let us show how the resulting a posteriori estimate can be represented in the desired form (15). To come up with this form, we introduce a new parameter β ; the optimal value of this parameter will also be described in the next subsection.
Theorem 5.
Let $\beta > 0$. Then, the a posteriori estimate $\hat{x}_k^+$, as defined by the Formula (35), has the form $\hat{x}_k^+ = c_k^+ + g_k^+ + e_k^+$, where:
$$c_k^+ = (I - K_k H_{x,k}) c_k^- + K_k (y_k - \tilde{z}_k), \tag{36}$$
$$g_k^+ \sim \mathcal{N}(0, C_k^+), \tag{37}$$
$$e_k^+ \in E(0, S_k^+), \tag{38}$$
$$C_k^+ = (I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T + K_k H_{v,k} C_k^z H_{v,k}^T K_k^T, \tag{39}$$
and
$$S_k^+(\beta) = \left(1 + \frac{1}{\beta}\right)(I - K_k H_{x,k}) S_k^- (I - K_k H_{x,k})^T + (1 + \beta) K_k H_{b,k} S_k^z H_{b,k}^T K_k^T. \tag{40}$$
Proof. 
From (34) and (35), we get
$$\begin{aligned} \hat{x}_k^+ &= \hat{x}_k^- + K_k\left[y_k - h_k(\hat{x}_k^-, 0, 0)\right] = \hat{x}_k^- + K_k\left[y_k - H_{x,k}\hat{x}_k^- - \tilde{z}_k\right] = (I - K_k H_{x,k})\hat{x}_k^- + K_k (y_k - \tilde{z}_k) \\ &= (I - K_k H_{x,k})\left(c_k^- + g_k^- + e_k^-\right) + K_k\left(H_{x,k} x_k + H_{v,k} v_k + H_{b,k} b_k\right) = c_k^+ + g_k^+ + e_k^+, \end{aligned}$$
where we denoted
$$c_k^+ = (I - K_k H_{x,k}) c_k^- + K_k H_{x,k} x_k, \tag{42}$$
$$g_k^+ = (I - K_k H_{x,k}) g_k^- + K_k H_{v,k} v_k, \tag{43}$$
$$e_k^+ = (I - K_k H_{x,k}) e_k^- + K_k H_{b,k} b_k. \tag{44}$$
For the point estimate part, we have $v_k = 0$ and $b_k = 0$; therefore, from (42), we get
$$c_k^+ = (I - K_k H_{x,k}) c_k^- + K_k (y_k - \tilde{z}_k).$$
Here,
$$g_k^- \sim \mathcal{N}(0, C_k^-), \quad v_k \sim \mathcal{N}(0, C_k^z);$$
therefore, according to the properties of Gaussian distributions and (37),
$$g_k^+ \sim \mathcal{N}(0, C_k^+),$$
where
$$C_k^+ = (I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T + K_k H_{v,k} C_k^z H_{v,k}^T K_k^T. \tag{48}$$
Also, here,
$$e_k^- \in E(0, S_k^-), \quad b_k \in E(0, S_k^z);$$
then, according to Corollary 1, for any $\beta > 0$, we get
$$e_k^+ \in E(0, S_k^+),$$
where
$$S_k^+(\beta) = \left(1 + \frac{1}{\beta}\right)(I - K_k H_{x,k}) S_k^- (I - K_k H_{x,k})^T + (1 + \beta) K_k H_{b,k} S_k^z H_{b,k}^T K_k^T. \tag{51}$$
The proof is complete. ☐
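Theorem 5 likewise gives the correction step for a fixed gain $K_k$ and a fixed β. A sketch (Python/NumPy; names are ours; the optimal $K_k$ and β are derived in Section 3.3):

```python
import numpy as np

def egkf_correct(c, C, S, Hx, Hv, Hb, Cz, Sz, K, beta, y, z_tilde):
    """One EGKF correction step (Theorem 5) for a given gain K and parameter beta."""
    L = np.eye(len(c)) - K @ Hx
    c_post = L @ c + K @ (y - z_tilde)                    # point estimate
    C_post = L @ C @ L.T + K @ Hv @ Cz @ Hv.T @ K.T       # Gaussian part
    S_post = ((1 + 1 / beta) * L @ S @ L.T
              + (1 + beta) * K @ Hb @ Sz @ Hb.T @ K.T)    # UBB part (Corollary 1)
    return c_post, C_post, S_post

# Scalar sanity check with all quantities equal to 1, K = 0.5, beta = 1, y = 2:
# L = 0.5, so c^+ = 0.5 + 0.5*2 = 1.5, C^+ = 0.25 + 0.25 = 0.5, S^+ = 0.5 + 0.5 = 1.
cpost, Cpost, Spost = egkf_correct(np.array([1.0]), np.array([[1.0]]), np.array([[1.0]]),
                                   np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]),
                                   np.array([[1.0]]), np.array([[1.0]]),
                                   np.array([[0.5]]), 1.0, np.array([2.0]), np.array([0.0]))
```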

3.3. Optimization Problem

To finalize our algorithm, the only thing left is to find the optimal gain factor $K_k$. Similarly to KF and EKF, we will find the gain factor that minimizes the mean square error of the a posteriori estimate.
In our case, this error has two components: the statistical component and the UBB component. To combine these errors into a single objective function, we need to decide how much weight we give to each component. This relative weight will be described by a parameter $\eta \in [0, 1]$, so that the resulting cost function will have the following form:
$$J(\beta) = (1 - \eta)\operatorname{tr}\!\left(C_k^+\right) + \eta\operatorname{tr}\!\left(S_k^+(\beta)\right). \tag{52}$$
This expression represents the overall uncertainty of the a posteriori estimate. We will find the gain factor $K_k$ and the value β that minimize this cost function.
Plugging (48) and (51) into (52), we get:
$$\begin{aligned} J(\beta) &= (1 - \eta)\operatorname{tr}\!\left[(I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T\right] + (1 - \eta)\operatorname{tr}\!\left[K_k H_{v,k} C_k^z H_{v,k}^T K_k^T\right] \\ &\quad + \eta\left(1 + \frac{1}{\beta}\right)\operatorname{tr}\!\left[(I - K_k H_{x,k}) S_k^- (I - K_k H_{x,k})^T\right] + \eta(1 + \beta)\operatorname{tr}\!\left[K_k H_{b,k} S_k^z H_{b,k}^T K_k^T\right] \\ &= (1 - \eta)\operatorname{tr}\!\left[(I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T\right] + (1 - \eta)\operatorname{tr}\!\left[K_k H_{v,k} C_k^z H_{v,k}^T K_k^T\right] \\ &\quad + \eta\left(1 + \frac{1}{\beta}\right) M + \eta(1 + \beta) N, \end{aligned}$$
where we denoted $M = \operatorname{tr}\!\left[(I - K_k H_{x,k}) S_k^- (I - K_k H_{x,k})^T\right]$ and $N = \operatorname{tr}\!\left[K_k H_{b,k} S_k^z H_{b,k}^T K_k^T\right]$.
Let us first find the optimal value β. Since $M > 0$ and $N > 0$, taking into account that the arithmetic mean is always larger than or equal to the geometric mean, we conclude that
$$\left(1 + \frac{1}{\beta}\right) M + (1 + \beta) N = M + N + \frac{1}{\beta} M + \beta N \ge M + N + 2\sqrt{MN}.$$
In particular, when $\frac{1}{\beta} M = \beta N$, i.e., when $M = \beta^2 N$ and $\beta = \sqrt{\frac{M}{N}}$, we have:
$$\left(1 + \frac{1}{\beta}\right) M + (1 + \beta) N = M + N + 2\sqrt{MN} = \left(\sqrt{M} + \sqrt{N}\right)^2.$$
Thus, the smallest possible value of the cost function is attained when $\beta = \sqrt{\frac{M}{N}}$. In this case, the cost function $J(\beta)$, as described by the Formula (52), has the following form:
$$J(\beta) = (1 - \eta)\operatorname{tr}\!\left[(I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T\right] + (1 - \eta)\operatorname{tr}\!\left[K_k H_{v,k} C_k^z H_{v,k}^T K_k^T\right] + \eta\left(\sqrt{M} + \sqrt{N}\right)^2.$$
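The arithmetic-geometric mean argument above is easy to verify numerically. The following sketch (Python/NumPy, with illustrative values for M and N) checks that $\beta = \sqrt{M/N}$ attains the bound $\left(\sqrt{M} + \sqrt{N}\right)^2$ and is the minimizer on a dense grid:

```python
import numpy as np

# Illustrative positive values; in the filter, M and N are the traces defined above.
M, N = 5.0, 2.0
g = lambda b: (1 + 1 / b) * M + (1 + b) * N   # beta-dependent part of the cost
beta_star = np.sqrt(M / N)

grid = np.linspace(0.01, 20.0, 4000)
assert g(beta_star) <= g(grid).min() + 1e-9                       # minimizer on the grid
assert abs(g(beta_star) - (np.sqrt(M) + np.sqrt(N))**2) < 1e-12   # attains the AM-GM bound
```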
To find the optimal value of the gain factor $K_k$, let us differentiate the resulting expression with respect to $K_k$ and equate the resulting derivative to 0.
Notice that
$$\frac{\partial \sqrt{M}}{\partial K_k} = \frac{1}{2\sqrt{M}}\frac{\partial M}{\partial K_k} = -\frac{1}{2\sqrt{M}}(I - K_k H_{x,k})\left(S_k^{-,T} + S_k^-\right) H_{x,k}^T = -\frac{1}{\sqrt{M}}(I - K_k H_{x,k}) S_k^- H_{x,k}^T,$$
$$\frac{\partial \sqrt{N}}{\partial K_k} = \frac{1}{2\sqrt{N}}\frac{\partial N}{\partial K_k} = \frac{1}{2\sqrt{N}} K_k H_{b,k}\left(S_k^{z,T} + S_k^z\right) H_{b,k}^T = \frac{1}{\sqrt{N}} K_k H_{b,k} S_k^z H_{b,k}^T.$$
Therefore,
$$\frac{\partial J}{\partial K_k} = 2(1 - \eta)(K_k H_{x,k} - I) C_k^- H_{x,k}^T + 2(1 - \eta) K_k H_{v,k} C_k^z H_{v,k}^T + 2\eta\left(1 + \frac{1}{\beta}\right)(K_k H_{x,k} - I) S_k^- H_{x,k}^T + 2\eta(1 + \beta) K_k H_{b,k} S_k^z H_{b,k}^T.$$
Equating the resulting derivative $\frac{\partial J}{\partial K_k}$ to 0, we get the following explicit formula for the optimal gain $K_k$:
$$K_k = \left[(1 - \eta) C_k^- H_{x,k}^T + \eta\left(1 + \frac{1}{\beta}\right) S_k^- H_{x,k}^T\right] \cdot A^{-1}, \tag{59}$$
where we denoted:
$$A = (1 - \eta) H_{x,k} C_k^- H_{x,k}^T + (1 - \eta) H_{v,k} C_k^z H_{v,k}^T + \eta\left(1 + \frac{1}{\beta}\right) H_{x,k} S_k^- H_{x,k}^T + \eta(1 + \beta) H_{b,k} S_k^z H_{b,k}^T.$$
The only thing left to determine is the parameter β. For each β, substituting the expressions (48), (51) and (59) into the formula for the cost function, we get the corresponding value of $J(\beta)$. We thus need to find the value $\beta^*$ that minimizes this cost function, i.e., that solves the following minimization problem:
$$\min_{\beta} J(\beta) \quad \text{s.t.} \quad \beta \in (0, +\infty) \subset \mathbb{R}^1. \tag{61}$$
This optimization problem is more complex than the quadratic optimization problems which usually occur in filtering. Indeed, for quadratic objective functions, derivatives are linear, and thus, when we equate all the derivatives to 0, we get an easy-to-solve system of linear equations. In contrast, for non-quadratic objective functions, equating derivatives to 0 leads to a difficult-to-solve system of nonlinear equations. Another reason why non-quadratic optimization problems are often difficult to solve is that traditional methods for solving these problems—such as gradient descent—often lead to a local minimum and not to the desired global minimum of the function. The good news is that in our case, similarly to [27], we can show that the cost function is convex. For convex objective functions, every local minimum is also a global minimum; see, e.g., [28]. Because of this, there exist many efficient techniques for solving convex optimization problems; for example, we can use the corresponding algorithm from the INTLAB toolbox [28].
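Because $J(\beta)$ is convex in β, any one-dimensional convex solver can be used in place of the toolbox cited above. As an illustration only, here is a golden-section search sketch (Python; `optimal_beta` is our own helper, not the paper's implementation), tested on a cost of the form $(1 + 1/\beta)M + (1 + \beta)N$, whose minimizer $\sqrt{M/N}$ we know in closed form:

```python
def optimal_beta(J, lo=1e-6, hi=1e4, iters=200):
    """Golden-section search for the minimizer of a convex scalar function J on (lo, hi)."""
    phi = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if J(c) < J(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# Example: closed-form minimizer of (1 + 1/b)*M + (1 + b)*N is sqrt(M/N).
M, N = 5.0, 2.0
beta_star = optimal_beta(lambda b: (1 + 1 / b) * M + (1 + b) * N)
```

In the filter itself, J would be the full cost (61), re-evaluating $K_k(\beta)$, $C_k^+$, and $S_k^+$ for each candidate β.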
Once we find the optimal value β * , we can substitute it into the Formulas (48), (51) and (59), and get the desired a posteriori state estimate x ^ k + .
It should be mentioned that this estimate depends on the parameter η —which was introduced to balance the random uncertainty and UBB uncertainty. Specifically, we should mention three important cases—as described, e.g., in [26]:
(1)
When $\eta = \frac{1}{2}$, the stochastic uncertainty and the UBB uncertainty have the same weight, and the optimal gain factor $K_k$ contains no η in this case. This setting is recommended to users when there is no expert-based information about η available.
(2)
When η = 0 , we get:
$$K_k(\beta) = C_k^- H_{x,k}^T \cdot \left(H_{x,k} C_k^- H_{x,k}^T + H_{v,k} C_k^z H_{v,k}^T\right)^{-1}.$$
In this case the filtering model contains no UBB uncertainty and the EGKF algorithm reduces to EKF [6].
(3)
When $\eta = 1$, the model contains only UBB uncertainty. In this case,
$$K_k(\beta) = \left(1 + \frac{1}{\beta}\right) S_k^- H_{x,k}^T \cdot \left[\left(1 + \frac{1}{\beta}\right) H_{x,k} S_k^- H_{x,k}^T + (1 + \beta) H_{b,k} S_k^z H_{b,k}^T\right]^{-1}.$$
Based on Theorems 4 and 5, and Equations (59) and (61), we can summarize the EGKF model into the following Algorithm 1:
Algorithm 1 Ellipsoidal and Gaussian Kalman filter model.
1:
Input:
  • System information: f k , h k , C k u , C k z , S k u , S k z .
  • Initial values: c 0 + , C 0 + , S 0 + .
  • Parameters: n, η .
2:for k=1,2,…,K do
3:  Calculation:
$c_k^- = F_{x,k-1} c_{k-1}^+ + \tilde{u}_{k-1}.$
$C_k^- = F_{x,k-1} C_{k-1}^+ F_{x,k-1}^T + F_{w,k-1} C_{k-1}^u F_{w,k-1}^T.$
$B = \sqrt{\operatorname{tr}\!\left(F_{x,k-1} S_{k-1}^+ F_{x,k-1}^T\right)} + \sqrt{\operatorname{tr}\!\left(F_{a,k-1} S_{k-1}^u F_{a,k-1}^T\right)}.$
$S_k^- = B \cdot \left( \frac{F_{x,k-1} S_{k-1}^+ F_{x,k-1}^T}{\sqrt{\operatorname{tr}\!\left(F_{x,k-1} S_{k-1}^+ F_{x,k-1}^T\right)}} + \frac{F_{a,k-1} S_{k-1}^u F_{a,k-1}^T}{\sqrt{\operatorname{tr}\!\left(F_{a,k-1} S_{k-1}^u F_{a,k-1}^T\right)}} \right).$
$A = (1-\eta) H_{x,k} C_k^- H_{x,k}^T + (1-\eta) H_{v,k} C_k^z H_{v,k}^T + \eta \left(1+\tfrac{1}{\beta}\right) H_{x,k} S_k^- H_{x,k}^T + \eta (1+\beta) H_{b,k} S_k^z H_{b,k}^T.$
$K_k = \left[ (1-\eta) C_k^- H_{x,k}^T + \eta \left(1+\tfrac{1}{\beta}\right) S_k^- H_{x,k}^T \right] A^{-1}.$
$c_k^+ = (I - K_k H_{x,k}) c_k^- + K_k (y_k - \tilde{z}_k).$
$C_k^+ = (I - K_k H_{x,k}) C_k^- (I - K_k H_{x,k})^T + F_{v,k} C_k^z F_{v,k}^T.$
$S_k^+(\beta) = \left(1+\tfrac{1}{\beta}\right) (I - K_k H_{x,k}) S_k^- (I - K_k H_{x,k})^T + (1+\beta) K_k H_{b,k} S_k^z H_{b,k}^T K_k^T.$
$\beta^* = \arg\min_{\beta > 0} \left\{ (1-\eta) \operatorname{tr} C_k^+(\beta) + \eta \operatorname{tr} S_k^+(\beta) \right\}.$
4:  Output:
   $c_k^+$, $C_k^+$, and $S_k^+$.
5:end for
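To make the structure of Algorithm 1 concrete, here is a minimal Python sketch of one predict/correct cycle for a scalar linear system. Several simplifying assumptions are made for illustration: the state is 1-D, all noise Jacobians equal 1, the stochastic update uses the standard Joseph form, and instead of solving the convex program for β*, the sketch picks β by the simple closed-form interval-sum rule; the actual algorithm minimizes (1 − η) tr C⁺(β) + η tr S⁺(β) numerically:

```python
import math

def egkf_step(c, C, S, F, H, Cu, Cz, Su, Sz, u, y, eta=0.5):
    """One predict/correct cycle of an EGKF-style filter (scalar sketch).

    Assumptions: 1-D state, all noise Jacobians equal to 1, and beta chosen
    by the closed-form interval-sum rule instead of the convex program.
    """
    # --- prediction ---
    c_pred = F * c + u
    C_pred = F * C * F + Cu                               # stochastic covariance
    S_pred = (math.sqrt(F * S * F) + math.sqrt(Su)) ** 2  # 1-D ellipsoid (interval) sum

    # --- correction ---
    beta = math.sqrt((H * S_pred * H) / Sz)               # illustrative choice of beta
    A = ((1 - eta) * (H * C_pred * H + Cz)
         + eta * ((1 + 1 / beta) * H * S_pred * H + (1 + beta) * Sz))
    K = ((1 - eta) * C_pred * H + eta * (1 + 1 / beta) * S_pred * H) / A

    c_post = c_pred + K * (y - H * c_pred)
    C_post = (1 - K * H) ** 2 * C_pred + K ** 2 * Cz      # Joseph-style update
    S_post = ((1 + 1 / beta) * (1 - K * H) ** 2 * S_pred
              + (1 + beta) * K ** 2 * Sz)
    return c_post, C_post, S_post
```

Running one step shrinks both the covariance and the shape "matrix" relative to their predicted values, mirroring the behavior of the full matrix algorithm.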

4. Applications

This section contains the results of two tests in which EGKF was applied to simulated data sets. We also applied EKF to both problems and compared the root-mean-square errors (RMSE) of the two methods.
  • In the first test, EGKF was applied to a highly nonlinear benchmark problem. To compare the results, we performed 100 simulations.
  • In the second test, we applied EGKF to a two-dimensional trajectory estimation problem. For this problem, we performed 1000 Monte Carlo runs.
To simulate the UBB uncertainty, we used uniform distributions on the corresponding ellipsoids.
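One standard way to generate such samples (a sketch; the exact generator used for the simulations is not specified beyond uniformity): draw a point uniformly in the unit ball, then map it through a Cholesky factor of the shape matrix S, which sends the ball {u : uᵀu ≤ 1} onto the ellipsoid {x : xᵀS⁻¹x ≤ 1}:

```python
import numpy as np

def sample_in_ellipsoid(S, rng):
    """Draw one point uniformly from the ellipsoid {x : x^T S^{-1} x <= 1}."""
    n = S.shape[0]
    # Uniform point in the unit n-ball: Gaussian direction, radius r ~ U(0,1)^(1/n).
    d = rng.standard_normal(n)
    d /= np.linalg.norm(d)
    r = rng.uniform() ** (1.0 / n)
    u = r * d
    L = np.linalg.cholesky(S)  # S = L L^T; the linear map L sends the ball to the ellipsoid
    return L @ u
```

In the 1-D case with S = [[9]], this reproduces uniform sampling on the interval [−3, 3] used in Example 1.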

4.1. Example 1: A Highly Nonlinear Benchmark Example

We consider the following example:
$x_{k+1} = \frac{1}{2} x_k + \frac{25 x_k}{1 + x_k^2} + 8 \cos[1.2(k-1)] + w_k + a_k,$
$y_k = \frac{1}{20} x_k^2 + v_k + b_k,$
where:
  • $x_k$ is the scalar state,
  • $u_k = 8 \cos[1.2(k-1)]$ is the known input,
  • $w_k \sim \mathcal{N}(0, 1)$ is Gaussian process noise,
  • $a_k \in \mathcal{E}(0, 9)$ is the unknown but bounded process perturbation; in this 1-D case, the ellipsoid is simply the interval $[-3, 3]$,
  • $v_k \sim \mathcal{N}(0, 1)$ is Gaussian measurement noise,
  • $b_k \in \mathcal{E}(0, 4)$ is the unknown but bounded measurement perturbation; in this case, the ellipsoid is the interval $[-2, 2]$.
Here:
  • the initial true state is $x_0 = 0.1$,
  • the initial state estimate is $\hat{x}_0 = x_0$,
  • the initial estimate for the covariance matrix is $C_0^+ = 2$,
  • the initial estimate for the shape matrix is $S_0^+ = 1 \times 10^{-3}$.
This discrete-time dynamical system is a well-known benchmark in nonlinear estimation theory; see, e.g., [6,8,29]. The high degree of nonlinearity in both the process and measurement equations makes the state estimation problem for this system very difficult.
We use this example to show that the new EGKF method performs better than the traditional first-order EKF when UBB uncertainties are taken into account.
We used a simulation length of 50 time steps. Following recommendations from the previous section, we selected the weight parameter η = 0.5 .
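Under these settings, the data generation can be sketched as follows, drawing the UBB terms uniformly from their bounding intervals as described above (the seed is arbitrary, and this is one possible simulation layout, not the authors' exact code):

```python
import math
import random

random.seed(42)

def simulate_benchmark(K=50):
    """Simulate the benchmark system of Example 1 for K time steps.

    w_k, v_k ~ N(0, 1); the UBB terms a_k and b_k are drawn uniformly
    from the intervals [-3, 3] and [-2, 2] (shape 'matrices' 9 and 4).
    """
    x = 0.1                                # initial true state
    xs, ys = [], []
    for k in range(1, K + 1):
        u = 8 * math.cos(1.2 * (k - 1))    # known input
        w = random.gauss(0, 1)             # Gaussian process noise
        a = random.uniform(-3, 3)          # UBB process perturbation
        x = 0.5 * x + 25 * x / (1 + x * x) + u + w + a
        v = random.gauss(0, 1)             # Gaussian measurement noise
        b = random.uniform(-2, 2)          # UBB measurement perturbation
        xs.append(x)
        ys.append(x * x / 20 + v + b)
    return xs, ys
```

The quadratic measurement y = x²/20 is what makes this benchmark hard: the sign of the state is not observable from a single measurement.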
We repeated our simulation 100 times. Figure 2 shows the results of applying both EGKF and EKF for several of the 100 runs, namely, for runs 25, 50, 75, and 100.
In the above figure:
  • the red stars denote the true state,
  • the black crosses represent the EKF estimates of the state,
  • the blue lines represent the UBB ellipsoids estimated by EGKF (in this 1D case they are intervals),
  • the blue plus signs mark the centers of the output ellipsoids.
One can see that the centers of the ellipsoids are, in general, different from the EKF estimates.
For each of the two methods and for each of the 100 simulation runs, at each moment of time, we can calculate the difference between the actual state and the corresponding estimate. (As the estimates corresponding to EGKF, we took the centers of the corresponding a posteriori ellipsoids.)
Based on the 50 moments of time, we get a vector consisting of 50 such actual-state-vs.-estimated-state differences. To compare the vectors corresponding to the two different methods, we calculated the l2 norms of these vectors—this is equivalent to comparing the root-mean-square estimation errors. The resulting values are presented in Table 1 for simulations 1, 11, …, 91.
In most cases, the EGKF leads to a smaller mean squared estimation error. Over all 100 simulations:
  • the average l2 norm of the EGKF estimates is 148.70, while
  • for the extended Kalman filter, the average l2 norm is much higher: it is equal to 192.29.
Thus, we conclude that, on average, the new EGKF algorithm performs much better than EKF.
We got a similar conclusion when, instead of comparing the l2 norms, we compared:
  • the l1 norms—which correspond to comparing the mean absolute values of the estimation errors, and
  • the l∞ norms—which correspond to comparing the largest estimation errors.
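The three comparisons can be computed in a few lines; here `true_states` and `estimates` stand for the actual-state and estimated-state sequences of one run:

```python
import math

def error_norms(true_states, estimates):
    """l2, l1, and l-infinity norms of the estimation-error vector."""
    e = [abs(t, ) if False else abs(t - s) for t, s in zip(true_states, estimates)]
    return {
        "l2": math.sqrt(sum(x * x for x in e)),  # root-sum-square; sqrt(n) times the RMSE
        "l1": sum(e),                            # n times the mean absolute error
        "linf": max(e),                          # largest estimation error
    }
```

Comparing the l2 norms of two methods on the same run is therefore equivalent to comparing their RMSE values, since both vectors have the same length.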

4.2. Example 2: Two-Dimensional Trajectory Estimation

In the second test, we used 1000 Monte Carlo simulations to compare the performance of EGKF and EKF on a 2D trajectory estimation problem from [10].
In this problem, a vehicle moves on a plane, following a curved trajectory. The state vector $x = (x, y, v_x, v_y)^T$ contains the positions and velocities of the vehicle in the x-direction and y-direction, respectively. After linearization, we get the following equation:
x k + 1 = F k x k + w k + a k .
Here:
  • $x_k = (x_k, y_k, v_{x,k}, v_{y,k})^T$ is the state vector at time $t_k$;
  • the transition matrix F k has the form:
    $F_k = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix};$
  • w k is Gaussian noise with covariance matrix C k u ,
  • a k is the unknown but bounded uncertainty, which is bounded by an ellipsoid with shape matrix S k u .
In total, in each of the 1000 simulations, we observed 500 epochs with time step Δ t = 0.1 s. In our formulas, as units of time, distance, and angle, we chose second, meter and degree, respectively.
In this experiment, two observation stations located at points $S_1 = [s_{11}, s_{12}]$ and $S_2 = [s_{21}, s_{22}]$ performed the measurements. Each station measured the distance and the direction to the vehicle, as described by the following equation:
$y_k = \begin{pmatrix} d_1 \\ d_2 \\ \theta_1 \\ \theta_2 \end{pmatrix} = \begin{pmatrix} \sqrt{(x - s_{11})^2 + (y - s_{12})^2} \\ \sqrt{(x - s_{21})^2 + (y - s_{22})^2} \\ \arctan[(y - s_{12})/(x - s_{11})] \\ \arctan[(y - s_{22})/(x - s_{21})] \end{pmatrix} + v_k + b_k.$
Here:
  • v k represents random uncertainty; it is Gaussian with covariance matrix C k z ,
  • b k represents the unknown but bounded uncertainty; it is bounded by an ellipsoid with shape matrix S k z .
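Putting the motion and measurement models together, a sketch with hypothetical station coordinates (the paper's actual values for s₁₁, …, s₂₂ are not restated in this section) might look as follows; `atan2` is used in place of `arctan` to resolve the quadrant ambiguity:

```python
import math
import numpy as np

DT = 0.1  # time step in seconds

# Constant-velocity transition matrix for the state (x, y, v_x, v_y).
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# Hypothetical station coordinates (assumptions for illustration only).
S1 = (0.0, 10.0)
S2 = (50.0, 10.0)

def h(state):
    """Noise-free measurement: ranges and bearings from the two stations."""
    x, y = state[0], state[1]
    return np.array([
        math.hypot(x - S1[0], y - S1[1]),   # d1
        math.hypot(x - S2[0], y - S2[1]),   # d2
        math.atan2(y - S1[1], x - S1[0]),   # theta1
        math.atan2(y - S2[1], x - S2[0]),   # theta2
    ])
```

The Jacobians $H_{x,k}$ used by the filter are obtained by differentiating `h` at the current state estimate.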
The initial state estimate, and initial estimates for the covariance matrix and for the shape matrix are:
  • $x_0 = (0, 0, 0, 0)^T$,
  • $C_0^+ = \operatorname{diag}(0.01, 0.01, 0.01, 0.01)$,
  • $S_0^+ = \operatorname{diag}(10^{-6}, 10^{-6}, 10^{-6}, 10^{-6})$.
The initial covariance matrices in the process and measurement equations are:
$C_0^u = \begin{pmatrix} 0.0033 & 0 & 0.005 & 0 \\ 0 & 0.0033 & 0 & 0.005 \\ 0.005 & 0 & 0.01 & 0 \\ 0 & 0.005 & 0 & 0.01 \end{pmatrix}, \quad C_0^z = \operatorname{diag}\left(0.05^2, 0.05^2, 0.01^2, 0.01^2\right).$
The initial shape matrices of the UBB uncertainties in the process and measurement equations are:
$S_0^u = \operatorname{diag}\left(1^2, 1^2, 0.5^2, 0.5^2\right), \quad S_0^z = \operatorname{diag}\left(0.01^2, 0.01^2, (\pi/180)^2, (\pi/180)^2\right).$
As in the first example, we select the value η = 0.5 of the weighting parameter.
Figure 3 shows an example of the trajectory estimates obtained by EGKF and EKF in one of the 1000 runs—namely, in the 5th run. For a clearer view, Figure 4 shows more detailed information for several portions of the trajectory.
Here:
  • the red stars mark the true positions,
  • black crosses mark the EKF estimates,
  • the blue plus signs are the centers of the ellipsoids computed by EGKF.
The two observation stations are marked by red triangles; they are connected by a straight line.
Figure 5 shows the comparison of the RMSE of EGKF and EKF averaged over all 1000 Monte Carlo runs.
The peak in the figure between epochs 600 and 800 is probably caused by the random uncertainty; EKF sometimes behaves similarly in nonlinear applications. In this experiment, we introduced UBB uncertainty but still kept the influence of the random uncertainty ($\eta = 0.5$). The new EGKF provides better global estimation than EKF, but a few exceptions may still happen. This unexpected jump also explains the larger standard deviation value of EGKF.
In almost all simulations (namely, in 997 cases out of 1000), the RMSE of EGKF was significantly smaller than that of EKF. We can therefore conclude that the new EGKF technique provides better estimation for this nonlinear system than EKF. Detailed statistics of the comparison can be found in Table 2.
In Figure 6, we compare the average traces (over all 1000 Monte Carlo runs) of the EGKF UBB shape matrix $S_k^+$, the EGKF covariance matrix $C_k^+$, and the EKF covariance matrix in every epoch.
Notice that the trace of the EGKF shape matrix is much larger than the traces of the covariance matrices.
We can also see that the UBB uncertainty grows in the starting part of the trajectory and when the trajectory crosses the straight line connecting the observation stations—this corresponds to $k \in [210, 220]$. This phenomenon is easy to explain: in this case, both angle measurements measure, in effect, the same quantity, so from the measurements, we get less information than in other parts of the trajectory.
The EGKF algorithm has also been applied to a data set obtained from a real-world experiment; see [30]. In that experiment, from the field of georeferencing of terrestrial laser scanners, a multi-sensor system collected the trajectory information of two GNSS antennas. EGKF was applied to estimate the positions and velocities of the two antennas, and the results were compared with EKF.

5. Conclusions and Future Work

In this paper, we propose a new method for state estimation in situations when, in addition to the probabilistic uncertainty, we also have unknown-but-bounded errors. Our testing shows that, on average, the new method leads to much more accurate estimates than the standard extended Kalman filter.
While our method has been shown to be successful, there is room for improvement.
First, the efficiency of our method depends on the initial selection of the parameters describing the system uncertainty and the measurement uncertainties. In some practical situations, we have prior information about these uncertainties, but in other practical cases, we do not have this information. Similarly to the usual Kalman filter, our method eventually converges to the correct uncertainty estimate, but this convergence may be somewhat slow. It would be nice to come up with recommendations on how to select the initial uncertainty parameters that would guarantee faster convergence.
Second, in our method, we assumed that measurements are reasonably frequent, so that during the time interval between the two measurements, we can safely ignore terms quadratic in terms of the corresponding changes—and thus consider a linearized system of equations. This is true in many practical situations—e.g., in the example of GNSS-based navigation that we considered in this paper. However, in some practical situations, measurements are rarer, and so, in the interval between the two measurements, we can no longer ignore terms which are quadratic in terms of changes. To cover such situations, it is desirable to extend our technique to second-order Kalman filtering.
Third, in this paper, we followed the usual Kalman filter techniques in assuming that the corresponding probability distributions are Gaussian. As we mentioned in Section 1, in practice, sometimes distributions are not Gaussian. It is therefore desirable to extend our method to the general non-Gaussian case—e.g., by using ideas of unscented Kalman filtering.
Fourth, in our method, we minimized the mean square estimation error—which, for our method, corresponds to selecting an ellipsoid with the smallest possible trace of the corresponding shape matrix. While in many practical problems, minimizing the mean square error is a reasonable idea, in some problems, it may be more reasonable to minimize, e.g., the largest possible estimation error. In this case, as we have mentioned, instead of minimizing the trace, we should be minimizing the largest eigenvalue of the shape matrix. It is thus desirable to extend our method to this—and other—possible criteria.
Fifth, to make computations feasible, we approximated the set of possible states by an ellipsoid. In principle, we can get more accurate estimates if we use families of sets with more parameters for such approximation—e.g., zonotopes or, more generally, generic polytopes. From this viewpoint, it is desirable to come up with efficient algorithms for processing zonotope (and, more generally, polytope) uncertainty.
Finally, it is desirable to analyze (and, if necessary, improve) the stability of our method (and of other state estimation techniques). While stability is very important for practical problems, it is not even clear how to formulate this problem in precise mathematical terms, since state estimation problems are usually ill-posed—as are most inverse problems (see, e.g., [31])—and thus, strictly speaking, not stable.

Author Contributions

Conceptualization, I.N.; Data curation, L.S. and H.A.; Formal analysis, L.S. and V.K.; Funding acquisition, I.N.; Investigation, L.S. and H.A.; Methodology, L.S., B.K. and V.K.; Project administration, I.N.; Resources, H.A. and I.N.; Software, B.K.; Supervision, V.K. and I.N.; Validation, L.S.; Visualization, L.S.; Writing—original draft, L.S.; Writing—review & editing, H.A., B.K., V.K. and I.N.

Funding

This work was funded by the German Research Foundation (DFG) as a part of the Research Training Group i.c.sens (GRK2159).

Acknowledgments

The authors acknowledge the discussions with Steffen Schön, Franz Rottensteiner and Claus Brenner from Leibniz University Hannover. The authors also thank Sergey Grigorian from Department of Mathematics in University of Texas Rio Grande Valley for his comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Le, V.T.H.; Stoica, C.; Alamo, T.; Camacho, E.F.; Dumur, D. Zonotopic guaranteed state estimation for uncertain systems. Automatica 2013, 49, 3418–3424. [Google Scholar] [CrossRef]
  2. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  3. Smith, G.L.; Schmidt, S.F.; McGee, L.A. Application of Statistical Filter Theory to the Optimal Estimation of Position and Velocity on Board a Circumlunar Vehicle; NASA: Washington, DC, USA, 1962.
  4. McElhoe, B.A. An assessment of the navigation and course corrections for a manned flyby of mars or venus. IEEE Trans. Aerosp. Electron. Syst. 1966, 4, 613–623. [Google Scholar] [CrossRef]
  5. Sorenson, H.W.; Alspach, D.L. Recursive Bayesian estimation using Gaussian sums. Automatica 1971, 7, 465–479. [Google Scholar] [CrossRef]
  6. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; JohnWiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  7. Julier, S.; Uhlmann, A. New extension of the Kalman filter to nonlinear systems. In Proceedings of the AeroSense, Orlando, FL, USA, 28 July 1997. [Google Scholar]
  8. Gordon, N.; Salmond, D.; Smith, A.F.M. Novel approach to nonlinear/non-gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. 1993, 140, 107–113. [Google Scholar] [CrossRef]
  9. Alkhatib, H.; Paffenholz, J.-A.; Kutterer, H. Sequential Monte Carlo filtering for nonlinear GNSS trajectories. In Proceedings of the Symposium on VII Hotine-Marussi Symposium on Mathematical Geodesy, Rome, Italy, 6–10 June 2009; Sneeuw, N., Novák, P., Crespi, M., Sansò, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 81–86. [Google Scholar]
  10. Alkhatib, H. Alternative Nonlinear Filtering Techniques in Geodesy for Dual State and Adaptive Parameter Estimation. In the 1st International Workshop on the Quality of Geodetic Observation and Monitoring Systems (QuGOMS’11); Kutterer, H., Seitz, F., Alkhatib, H., Schmidt, M., Eds.; Springer: Cham, Switzerland, 2015; Volume 140. [Google Scholar]
  11. Milanese, M.; Vicino, A. Optimal estimation theory for dynamic systems with set membership uncertainty: An overview. Automatica 1996, 27, 997–1009. [Google Scholar] [CrossRef]
  12. Witsenhausen, H.S. Sets of possible states of linear systems given perturbed observations. IEEE Trans. Autom. Control 1968, 13, 556–558. [Google Scholar] [CrossRef]
  13. Schweppe, F. Recursive state estimation: Unknown but bounded errors and system inputs. IEEE Trans. Autom. Control 1968, 13, 22–28. [Google Scholar] [CrossRef]
  14. Schweppe, F.C. Uncertain Dynamic Systems; Prentice Hall: Englewood Cliffs, NJ, USA, 1973. [Google Scholar]
  15. Kreinovich, V.; Lakeyev, A.; Rohn, J.; Kahl, P. Computational Complexity and Feasibility of Data Processing and Interval Computations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 10. [Google Scholar]
  16. Ferson, S.; Kreinovich, V.; Hajagos, J.; Oberkampf, W.; Ginzburg, L. Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty; Report SAND2007-0939,162; Sandia National Laboratories: Albuquerque, NM, USA, 2007.
  17. Kutterer, H.; Neumann, I. Recursive least-squares estimation in case of interval observation data. Int. J. Reliab. Saf. 2011, 5, 229–249. [Google Scholar] [CrossRef]
  18. Vicino, A.A.; Zappa, G. Sequential approximation of feasible parameter sets for identification with set membership uncertainty. IEEE Trans. Autom. Control 1996, 41, 774–785. [Google Scholar] [CrossRef]
  19. Walter, E.; Piet-Lahanier, H. Exact recursive polyhedral description of the feasible parameter set for bounded-error models. IEEE Trans. Autom. Control 1989, 34, 911–915. [Google Scholar] [CrossRef]
  20. Combastel, C. A state bounding observer for uncertain non-linear continuous-time systems based on zonotopes. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 15 December 2005; pp. 7228–7234. [Google Scholar]
  21. Alamo, T.; Bravo, J.M.; Camacho, E.F. Guaranteed state estimation by zonotopes. Automatica 2005, 41, 1035–1043. [Google Scholar] [CrossRef]
  22. Althoff, M.; Stursberg, O.; Buss, M. Safety assessment for stochastic linear systems using enclosing hulls of probability density functions. In Proceedings of the 2009 European Control Conference, Budapest, Hungary, 23–26 August 2009; pp. 625–630. [Google Scholar]
  23. Schön, S.; Kutterer, H. Using zonotopes for overestimation-free interval least-squares–some geodetic applications. Reliab. Comput. 2005, 11, 137–155. [Google Scholar] [CrossRef]
  24. Bertsekas, D.P.; Rhodes, I.B. On the minimax reachability of target sets and target tubes. Automatica 1971, 7, 233–247. [Google Scholar] [CrossRef]
  25. Polyak, B.T.; Nazin, S.A.; Durieu, C.; Walter, E. Ellipsoidal parameter or state estimation under model uncertainty. Automatica 2004, 40, 1171–1179. [Google Scholar] [CrossRef]
  26. Durieu, C.; Walter, E.; Polyak, B. Multi-input multi-output ellipsoidal state bounding. J. Optim. Theory Appl. 2001, 111, 273–303. [Google Scholar] [CrossRef]
  27. Noack, B. State Estimation for Distributed Systems with Stochastic and Set-Membership Uncertainties; KIT Scientific Publishing: Karlsruhe, Germany, 2014; Volume 14. [Google Scholar]
  28. Rump, S.M. INTLAB—INTerval LABoratory. In Developments in Reliable Computing; Csendes, T., Ed.; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1999; pp. 77–104. [Google Scholar]
  29. Kitagawa, G. Non-Gaussian state–space modeling of nonstationary time series. J. Am. Stat. Assoc. 1987, 82, 1032–1041. [Google Scholar]
  30. Sun, L.; Alkhatib, H.; Paffenholz, J.A.; Neumann, I. Geo-Referencing of a Multi-Sensor System Based on Set-membership Kalman Filter. In Proceedings of the 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 1–5. [Google Scholar]
  31. Blank, L. State estimation analysed as inverse problem. In Assessment and Future Directions of Nonlinear Model Predictive Control; Springer: New York, NY, USA, 2007; pp. 335–346. [Google Scholar]
Figure 1. Decomposition of state vector x 0 = c 0 + e 0 + g 0 .
Figure 2. Comparison results in 4 simulations.
Figure 3. 2D trajectory estimated results of the 5th simulation.
Figure 4. 2D trajectory estimated results of the 5th simulation (zoom in).
Figure 5. Root-mean-square error (RMSE) comparison of EGKF and EKF in 1000 runs.
Figure 6. Average trace comparison for 1000 MC runs of EGKF and EKF in every epoch.
Table 1. Comparison of Ellipsoidal and Gaussian Kalman Filter (EGKF) and Extended Kalman Filter (EKF) in 10 simulations.
k          1       11      21      31      41     51      61      71     81     91
EGKF   53.59   116.38   70.38  102.95  100.16  75.81   82.11   48.71  66.30  81.55
EKF    72.49   116.64  230.08  234.69   80.73  31.32  109.53   79.69   8.63  64.95
Table 2. RMSE Comparison of EGKF and EKF.
RMSE [m]              EGKF     EKF
Min                   0.187    0.241
Max                   0.501    0.278
Mean                  0.210    0.259
Median                0.209    0.259
Standard Deviation    0.019    0.006