Article

Robust Estimation in Continuous–Discrete Cubature Kalman Filters for Bearings-Only Tracking

Haoran Hu, Shuxin Chen, Hao Wu and Renke He
1 Information and Navigation College, Air Force Engineering University, Xi’an 710077, China
2 State Key Laboratory of Geo-Information Engineering, Xi’an Research Institute of Survey and Mapping, Xi’an 710054, China
3 Science and Technology Complex Aviation Systems Simulation Laboratory, Beijing 100076, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8167; https://doi.org/10.3390/app12168167
Submission received: 12 July 2022 / Revised: 9 August 2022 / Accepted: 13 August 2022 / Published: 15 August 2022
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

The model of bearings-only tracking is generally described by discrete–discrete filtering systems, and discrete robust methods are frequently used to address measurement uncertainty in bearings-only tracking. The recently popular continuous–discrete filtering system considers the state model of the target to be continuous in time and is more suitable for bearings-only tracking because of its higher mathematical solution accuracy. However, a sufficient evaluation of robust methods in continuous–discrete systems is not yet available. In addition, the choice of a robust algorithm in different continuous–discrete measurement environments also needs to be discussed. To fill this gap, this paper first establishes the continuous–discrete target tracking model and then evaluates the performance of the proposed robust square-root continuous–discrete cubature Kalman filter algorithms under measurement uncertainty. The simulation results show that the robust square-root continuous–discrete maximum correntropy cubature Kalman filter algorithm and the variational Bayesian square-root continuous–discrete cubature Kalman filter algorithm have better environmental adaptability, which provides a promising means for solving continuous–discrete robust problems.

1. Introduction

Bearings-only tracking [1,2] uses a sequence of angle measurements to estimate the state (position and velocity) of a target and is widely used in the field of navigation, especially in passive target tracking. In the system model of bearings-only tracking, the target state is generally built as a discrete-time model for convenience, and the angle measurements of the target are also discrete in time. This is called the discrete–discrete time system model. However, in the actual target tracking situation, the target state is continuous in nature while the measurement is discrete in time. Such a system model is called the continuous–discrete time system [1]. Compared with the discrete–discrete system model, the continuous–discrete system model is closer to the real situation, and its solutions have advantages such as higher accuracy [3].
Based on the continuous–discrete time system, continuous–discrete filtering, whose model is built on the covariance matrix of the random errors, can be established. In [4], the cubature criterion was introduced into the continuous–discrete time system, and the continuous–discrete cubature Kalman filter (CD-CKF) algorithm was obtained. It uses stochastic differential equations (SDEs) to describe the continuous-time target state model, and the accuracy of the SDE solution greatly affects the filtering performance. In [5], Crouse summarized the Euler–Maruyama method of strong order 0.5 and the Itô–Taylor method, which were shown to have acceptable accuracy. In order to further improve the accuracy of state estimation, high-order numerical approximation methods are used to deal with the stochastic differential equations. In [6], Crouse described the continuous-time model in terms of its expectation and covariance and then solved the time prediction problem of the continuous model. In [7], the nested implicit Runge–Kutta method was used for the continuous–discrete extended Kalman filter (CD-EKF), and the corresponding application in the CD-CKF was discussed in [8]. Subsequently, adaptive methods were proposed. In [9], Kulikova et al. used an adaptive step size to further improve the accuracy. The adaptive feedback strategy [10] reduced the impact of unpredictable errors in the prediction step by means of covariance adjustment; thus, both the performance and the efficiency were improved.
In the actual measurement environment, harsh conditions such as signal interference cause non-Gaussian noise [11,12] or outliers to appear. If they are not dealt with, filters built on the Gaussian assumption become inaccurate or even divergent. For this measurement uncertainty problem, various robust methods [13] have been proposed for discrete–discrete time filtering systems. As a method of modification, robust methods aim to improve robustness in the presence of outliers in the data. For instance, Huber’s M-estimation method [14,15] uses a robust cost function to reduce the weight of abnormal data; it is widely used and has proven effective in filtering problems [16]. Wu et al. [17,18] proposed a more practical robust CKF algorithm based on generalized M-estimation, which requires neither an empirical threshold for the weight function nor statistical information about the outliers. In [19], Särkkä dealt with time-varying noise using the variational Bayes criterion in the case of unknown noise covariance. In [20,21,22], the maximum correntropy criterion (MCC) was introduced into discrete–discrete filtering systems. The correntropy can utilize the higher-order information of the measurements instead of the second-order moment information of the minimum mean square error criterion, which makes it more suitable for systems contaminated by non-Gaussian noise.
The above robust methods are successfully used in discrete–discrete time systems, but a sufficient evaluation in continuous–discrete time systems is not available. Moreover, how to balance the accuracy and the robustness of continuous–discrete filtering algorithms is of great significance. Therefore, this paper combines the square-root continuous–discrete cubature Kalman filter (SRCD-CKF) with Huber’s method, the maximum correntropy criterion, and the variational Bayesian criterion, and analyzes the filtering performance of the resulting algorithms under measurement uncertainty. The CD-CKF is chosen because it is derivative-free and generally more accurate than the CD-EKF. Additionally, compared with the continuous–discrete unscented Kalman filter (CD-UKF), the CD-CKF has a smaller computational burden. The square-root technique further improves the stability of the filter. Among the proposed robust algorithms, the robust square-root continuous–discrete maximum correntropy cubature Kalman filter (RSRCD-MCCKF) algorithm and the variational Bayesian square-root continuous–discrete cubature Kalman filter (VBSRCD-CKF) algorithm show higher accuracy and stronger robustness.
To sum up, in this paper, robust estimation methods based on the continuous–discrete system are proposed in order to provide a sufficient evaluation of continuous–discrete robust algorithms. Compared with the discrete–discrete filtering system, considering the continuous–discrete filtering system can effectively ensure the accuracy of bearings-only tracking, while the robust methods are used to improve the robustness of the tracking. From the perspective of improving both accuracy and robustness, the proposed algorithms show excellent performance in continuous–discrete systems. Another contribution is that the article compares the performance of the algorithms in different continuous–discrete measurement environments, fully considering the environmental adaptability of the algorithms.
The structure of the rest of the article is as follows: Section 2 reviews the target tracking model of the continuous–discrete system and the SRCD-CKF algorithm; Section 3 proposes the robust SRCD-CKF algorithms based on Huber’s method, the RSRCD-MCCKF algorithm, and the VBSRCD-CKF algorithm; Section 4 presents the simulation results; and Section 5 summarizes the main work of this paper.

2. Continuous–Discrete System Model and Square-Root Continuous–Discrete Cubature Kalman Filter Algorithm for Target Tracking

2.1. Continuous–Discrete System Model

In a continuous–discrete target tracking system, the target state model is continuous in time, and the measurement model is discrete in time. Considering random disturbances, the continuous-time target state model can be described in the following stochastic differential equation form [1]:
dx(t) = f(x(t), t)\,dt + \sqrt{Q}\,dw(t),
where x(t) is the n-dimensional state vector, f is known as the drift function, w(t) is a zero-mean Gaussian white-noise process, and Q represents the covariance matrix of this noise process.
In order to deal with the continuous time model, the target model of the continuous–discrete system can be described by the moment differential equations (MDEs) [9], which are shown as
d\hat{x}(t)/dt = F(\hat{x}(t)),
dP(t)/dt = J(\hat{x}(t))\,P(t) + P(t)\,J^{T}(\hat{x}(t)) + Q,
where \hat{x}(t) is the expectation of x(t) at time t, P(t) represents the error covariance matrix at time t, F(\hat{x}(t)) is the state equation of the system, and J(\hat{x}(t)) = \partial F(\hat{x}(t))/\partial\hat{x}(t) is the Jacobian matrix of F(\hat{x}(t)).
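As an illustration only (not the paper’s higher-order solver), the structure of the moment differential equations above can be reproduced with simple Euler steps between measurements; the drift F, its Jacobian J, and the diffusion covariance Q are assumed to be supplied by the user:

```python
# Minimal sketch: Euler integration of d x_hat/dt = F(x_hat),
# dP/dt = J P + P J^T + Q over one sampling interval.
import numpy as np

def propagate_mde(x_hat, P, F, J, Q, dt, n_steps=10):
    h = dt / n_steps
    for _ in range(n_steps):
        Jx = J(x_hat)                       # Jacobian at the current mean
        x_hat = x_hat + h * F(x_hat)        # mean propagation
        P = P + h * (Jx @ P + P @ Jx.T + Q) # covariance propagation
    return x_hat, P
```

In the paper, higher-order and square-root propagation schemes replace this crude Euler step; the sketch only shows how Equations (2) and (3) are advanced in time.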
The discrete time measurement model is
z_k = h(x_k, k) + r_k,
where z_k is the actual observation value, h is the observation function, and r_k is the observation noise, which may be Gaussian or non-Gaussian. The specific form of r_k is defined by the measurement scenario, and its covariance matrix is R_k. Here, k indexes the discrete measurement times.

2.2. Square-Root Continuous–Discrete Cubature Kalman Filter Algorithm

In order to ensure the positive definiteness and symmetry of the error covariance matrix, the square-root continuous–discrete cubature Kalman filter is introduced here. Similar to the typical CKF method, the continuous–discrete time system is combined with the cubature criterion, which is also divided into two steps: time update and measurement update.
In the SRCD-CKF algorithm, the state cubature points can be defined as [23]
X_i(t) = S(t)\,\xi_i + \hat{x}(t),
where S ( t ) is the lower triangular matrix of the covariance matrix, satisfying P ( t ) = S ( t ) S T ( t ) , and ξ i is the cubature point set, which is defined as
\xi_i = \begin{cases} \sqrt{n}\,\eta_i, & i = 1, \dots, n \\ -\sqrt{n}\,\eta_{i-n}, & i = n+1, \dots, 2n \end{cases}
where \eta_i is the i-th coordinate column vector in \mathbb{R}^n.
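For concreteness, a minimal sketch (an illustration, not the authors’ code) of generating the cubature points defined by the two equations above, assuming S is a lower-triangular square root of P(t):

```python
# Generate the 2n cubature points X_i(t) = S xi_i + x_hat.
import numpy as np

def cubature_points(x_hat, S):
    n = x_hat.size
    # Columns of xi are sqrt(n)*eta_i followed by -sqrt(n)*eta_i.
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return S @ xi + x_hat[:, None]          # shape (n, 2n)
```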
Based on the cubature criterion, its expectation and covariance can be re-expressed as
d\hat{x}(t)/dt = F(X(t))\,\varepsilon,
dP(t)/dt = X(t)\,W\,F^{T}(X(t)) + F(X(t))\,W\,X^{T}(t) + Q(t).
The parameters \varepsilon and W are defined as follows:
\varepsilon = \frac{1}{2n}\mathbf{1}_{2n}, \qquad W = \left(I_{2n} - \mathbf{1}^{T}\otimes\varepsilon\right)\,\mathrm{diag}\!\left(1/(2n), \dots, 1/(2n)\right)\left(I_{2n} - \mathbf{1}^{T}\otimes\varepsilon\right)^{T},
where I_{2n} is the identity matrix of dimension 2n, \mathbf{1} is a column vector of ones, and \otimes denotes the Kronecker product.
For the calculation of covariance propagation, the square-root covariance calculation method proposed in [24] is used. Meanwhile, the high-order numerical approximation method is used to deal with the problem:
dS(t)/dt = S(t)\,\Phi(B(t)),
\Phi(B)_{i,j} = \begin{cases} B_{i,j}, & i > j \\ \tfrac{1}{2}B_{i,j}, & i = j \\ 0, & i < j \end{cases}
where B_{i,j} is the element in the i-th row and j-th column of B(t). The matrix B(t) is defined as
B(t) = S^{-1}(t)\left[X(t)\,W\,F^{T}(X(t)) + F(X(t))\,W\,X^{T}(t) + Q(t)\right]S^{-T}(t).
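The square-root covariance propagation above can be sketched as follows (illustrative only; the cubature-point matrix X, the propagated points F(X), the weight matrix W, and Q are assumed to be available at the current integration time):

```python
# Phi(.) keeps the strictly lower-triangular part of B and halves its diagonal;
# dS/dt = S Phi(B) with B = S^{-1} [X W F^T + F W X^T + Q] S^{-T}.
import numpy as np

def phi(B):
    return np.tril(B, k=-1) + 0.5 * np.diag(np.diag(B))

def dS_dt(S, X, FX, W, Q):
    M = X @ W @ FX.T + FX @ W @ X.T + Q
    B = np.linalg.solve(S, np.linalg.solve(S, M.T).T)   # S^{-1} M S^{-T}
    return S @ phi(B)
```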
Based on this, the predicted state cubature point X i * ( t ) can be obtained. The specific SRCD-CKF Algorithm 1 is as follows:
Algorithm 1: SRCD-CKF.
Time update
Step 1 Expectation and covariance matrix initialization: \hat{x}(t_{k-1}) = \hat{x}_{k-1|k-1}, P(t_{k-1}) = P_{k-1|k-1}.
Step 2.1 Covariance decomposition: P(t) = S(t)\,S^{T}(t).
Step 2.2 Calculate the state cubature points: X_i(t) = S(t)\,\xi_i + \hat{x}(t).
Step 2.3 State cubature point propagation over [t_{k-1}, t_k]: d\hat{x}(t)/dt = F(X(t))\,\varepsilon, dS(t)/dt = S(t)\,\Phi(B(t)).
Step 2.4 Calculate the state prediction: \hat{x}_{k|k-1} = \left[X_1^{*}(t_k) + \cdots + X_{2n}^{*}(t_k)\right]/(2n).
Step 2.5 Calculate the predicted square-root covariance:
S(t_k) = \left[S_1(t_k)\ \ S_2(t_k)\ \ \cdots\ \ S_{2n}(t_k)\right],
where S_i(t_k) represents a single column vector, S_i(t_k) = \left[X_i^{*}(t_k) - \hat{x}_{k|k-1}\right]/\sqrt{2n}.
Measurement update
Step 3.1 Calculate the state cubature points: S_{k|k-1} = S(t_k), X_{i,k|k-1} = S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1}.
Step 3.2 Propagate the cubature points through the measurement function: Z_{i,k|k-1} = h(X_{i,k|k-1}, k).
Step 3.3 Calculate the predicted measurement: \hat{z}_{k|k-1} = \frac{1}{2n}\sum_{i=1}^{2n} Z_{i,k|k-1}.
Step 3.4 Construct the weighted, centered measurement matrix:
\mathcal{Z}_{k|k-1} = \frac{1}{\sqrt{2n}}\left[Z_{1,k|k-1} - \hat{z}_{k|k-1}, \dots, Z_{2n,k|k-1} - \hat{z}_{k|k-1}\right].
Step 3.5 Calculate the innovation covariance matrix: P_{zz,k|k-1} = \mathcal{Z}_{k|k-1}\mathcal{Z}_{k|k-1}^{T} + R_k.
Step 3.6 Construct the weighted, centered state matrix:
\mathcal{X}_{k|k-1} = \frac{1}{\sqrt{2n}}\left[X_{1,k|k-1} - \hat{x}_{k|k-1}, \dots, X_{2n,k|k-1} - \hat{x}_{k|k-1}\right].
Step 3.7 Calculate the cross-covariance matrix: P_{xz,k|k-1} = \mathcal{X}_{k|k-1}\mathcal{Z}_{k|k-1}^{T}.
Step 3.8 Calculate the continuous–discrete cubature gain: K_k = P_{xz,k|k-1}\,P_{zz,k|k-1}^{-1}.
Step 3.9 Calculate the state estimate: \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_{k|k-1}\right).
Step 3.10 Update the covariance matrix: P_{k|k} = P_{k|k-1} - K_k\,P_{zz,k|k-1}\,K_k^{T}.
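For illustration, the measurement update in Steps 3.1–3.10 can be written compactly as in the following sketch (not the authors’ code; h is assumed to return a one-dimensional measurement array, and the plain covariance form is used here rather than the square-root arrays of Remark 1 below):

```python
# Compact SRCD-CKF measurement update (Steps 3.1-3.10).
import numpy as np

def srcd_ckf_update(x_pred, S_pred, z, h, R):
    n = x_pred.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    X = S_pred @ xi + x_pred[:, None]                     # state cubature points
    Z = np.column_stack([h(X[:, i]) for i in range(2 * n)])
    z_pred = Z.mean(axis=1)                               # predicted measurement
    Zc = (Z - z_pred[:, None]) / np.sqrt(2 * n)           # weighted centered matrices
    Xc = (X - x_pred[:, None]) / np.sqrt(2 * n)
    Pzz = Zc @ Zc.T + R                                   # innovation covariance
    Pxz = Xc @ Zc.T                                       # cross covariance
    K = Pxz @ np.linalg.inv(Pzz)                          # cubature gain
    x_upd = x_pred + K @ (z - z_pred)
    # Note: Xc @ Xc.T equals S_pred S_pred^T = P_{k|k-1}.
    P_upd = Xc @ Xc.T - K @ Pzz @ K.T
    return x_upd, P_upd
```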
Remark 1.
The above method gives the square-root form of the prediction covariance and uses it in the measurement update process, which effectively guarantees the stability of the filtering. At the same time, a more thorough square-root version is also available.
Here we redefine the cross-covariance matrix P_{xz,k|k-1}, the innovation covariance square root R_{e,k}^{1/2}, and the filtering covariance square root S_{k|k} by QR decomposition:
\mathrm{qr}\left\{\begin{bmatrix} \mathcal{Z}_{k|k-1} & R_k^{1/2} \\ \mathcal{X}_{k|k-1} & 0 \end{bmatrix}\right\} \Rightarrow \begin{bmatrix} R_{e,k}^{1/2} & 0 \\ P_{xz,k|k-1} & S_{k|k} \end{bmatrix},
where 0 is a zero block of the proper size, R_k^{1/2} is the square root of the measurement noise covariance obtained by Cholesky factorization, and R_{e,k}^{1/2} and S_{k|k} are lower triangular matrices.
On this basis, we can obtain the updated state and covariance. In addition, the robust estimation methods under this square-root version can be obtained in the same manner.
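A small sketch (an assumption-laden illustration, not the authors’ implementation) of the array form in Remark 1 is given below; the blocks returned by the QR factorization provide the same quantities as the conventional update, with the gain recovered as the redefined cross term times the inverse of the innovation square root:

```python
# Lower-triangularize the pre-array [Zc R^{1/2}; Xc 0] via QR of its transpose.
import numpy as np

def sqrt_arrays(Zc, Xc, R_sqrt):
    d, n = Zc.shape[0], Xc.shape[0]
    pre = np.block([[Zc, R_sqrt],
                    [Xc, np.zeros((n, d))]])
    _, Rfac = np.linalg.qr(pre.T)     # upper-triangular factor
    post = Rfac.T                     # lower-triangular post-array
    Re_sqrt = post[:d, :d]            # innovation covariance square root
    Pxz_bar = post[d:, :d]            # redefined (normalized) cross term
    S_upd = post[d:, d:]              # filtering covariance square root
    # Gain: K = Pxz_bar @ inv(Re_sqrt); diagonal signs may need normalization.
    return Re_sqrt, Pxz_bar, S_upd
```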

3. Robust Square-Root Continuous–Discrete Cubature Kalman Filter Algorithms

3.1. Robust SRCD-CKF with Huber’s Method

The robust SRCD-CKF algorithms are used to correct the abnormal measurements. Huber’s method uses the robust cost function to reduce the weight of abnormal data. Among them, the most common weight reduction functions are the two-stage weight function based on Huber and the three-stage weight function based on p-Huber. The Huber weight function usually includes two parts. This means that the measurement data are divided into normal data and abnormal data. When the absolute value of the standardized residual exceeds the critical value, the data are considered abnormal and the weight is reduced.
\bar{\mu}_k = \begin{cases} 1, & \tilde{\lambda}_b < c \\ c/\tilde{\lambda}_b, & \tilde{\lambda}_b \ge c \end{cases}
where \tilde{\lambda}_b is the absolute value of the standardized residual, and c is the critical value for abnormal error discrimination. The Huber robust square-root continuous–discrete cubature Kalman filter (HRSRCD-CKF) Algorithm 2 is as follows:
Algorithm 2: HRSRCD-CKF.
Step 1 Repeat Step 1 of the SRCD-CKF algorithm.
Steps 2.1–3.7 are equivalent to Steps 2.1–3.7 of the SRCD-CKF algorithm.
Step 3.8 The innovation covariance matrix is redefined: \bar{P}_{zz,k|k-1} = \mathcal{Z}_{k|k-1}\mathcal{Z}_{k|k-1}^{T} + \bar{\mu}_k^{-1}R_k.
Steps 3.9–3.11 are equivalent to Steps 3.8–3.10 of the SRCD-CKF algorithm.
The weight function based on the Mahalanobis distance is also called the p-Huber three-stage weight function. Compared with the Huber function, the p-Huber function uses the α quantile of the chi-square distribution to adjust the critical value of abnormal error discrimination, which is more reasonable than the empirical value. The discriminant value is defined as follows:
d_k = M_k^2 = \left(z_k - \hat{z}_{k|k-1}\right)^{T} P_{zz,k|k-1}^{-1}\left(z_k - \hat{z}_{k|k-1}\right).
In Equation (15), M_k = \sqrt{\left(z_k - \hat{z}_{k|k-1}\right)^{T} P_{zz,k|k-1}^{-1}\left(z_k - \hat{z}_{k|k-1}\right)} is the Mahalanobis distance. The new three-stage weight function is
\bar{\mu}_k = \begin{cases} 1, & d_k < \chi_{\alpha_1} \\ \chi_{\alpha_1}/d_k, & \chi_{\alpha_1} \le d_k < \chi_{\alpha_2} \\ 0, & d_k \ge \chi_{\alpha_2} \end{cases}
where \chi_{\alpha} denotes the \alpha quantile of the chi-square distribution. When d_k exceeds \chi_{\alpha_1}, the measurement is considered to contain an outlier, and its weight is reduced. At the same time, in order to further eliminate the influence of large outliers, measurements with d_k \ge \chi_{\alpha_2} are rejected. The M-estimation–based robust square-root continuous–discrete cubature Kalman filter (MRSRCD-CKF) Algorithm 3 is as follows:
Algorithm 3: MRSRCD-CKF.
Step 1 Repeat Step 1 of the SRCD-CKF algorithm.
Steps 2.1–3.7 are equivalent to Steps 2.1–3.7 of the SRCD-CKF algorithm.
Steps 3.8–3.11 are equivalent to Steps 3.8–3.11 of the HRSRCD-CKF algorithm, and μ ¯ k is calculated by Equation (16).
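For illustration, the two weight functions above and the way the weight enters the redefined innovation covariance can be sketched as follows (hypothetical helper names; the critical values c, chi1, and chi2 are assumed to be given, e.g., as in Section 4):

```python
# Two-stage Huber weight and chi-square-based three-stage weight (Eqs. (14)-(16)).
import numpy as np

def huber_weight(lam_abs, c=1.345):
    """Weight from the absolute standardized residual."""
    return 1.0 if lam_abs < c else c / lam_abs

def three_stage_weight(innov, Pzz, chi1, chi2):
    """Weight from the squared Mahalanobis distance d_k."""
    d = float(innov @ np.linalg.inv(Pzz) @ innov)
    if d < chi1:
        return 1.0
    if d < chi2:
        return chi1 / d
    return 0.0                       # very large outliers are rejected

# The weight inflates the noise term in Step 3.8:
# Pzz_robust = Zc @ Zc.T + R / mu_bar
```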

3.2. RSRCD-MCCKF Algorithm Based on Maximum Correntropy Criterion

Correntropy is a measure of the similarity between two random variables. Given the random variables X and Y, with joint distribution F_{X,Y}(x, y), the correntropy can be defined as follows [22]:
V(X, Y) = E\left[\kappa(X, Y)\right] = \int \kappa(x, y)\,dF_{X,Y}(x, y),
where E is the mathematical expectation, and \kappa is a Mercer-type positive definite kernel function. In this paper, we select the Gaussian kernel as the kernel function of the correntropy, which is defined as follows:
\kappa(x, y) = G_{\sigma}(e) = \exp\left(-\frac{e^2}{2\sigma^2}\right),
where e = x - y, and \sigma > 0 is the kernel bandwidth of the kernel function.
In practice, the joint probability density function is usually difficult to obtain, and the data sampling points are also limited. Therefore, the correntropy can be estimated by the approach of T data sampling points:
\hat{V}(X, Y) = \frac{1}{T}\sum_{i=1}^{T} G_{\sigma}(e_i),
where e_i = x_i - y_i, and \{x_i, y_i\}_{i=1}^{T} represents the sequence of sample points drawn from the joint distribution F_{X,Y}(x, y). Expanding the Gaussian kernel function using the Taylor series, we can obtain
V(X, Y) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n \sigma^{2n}\, n!}\, E\left[(X - Y)^{2n}\right].
It can be seen from Equation (20) that the correntropy information contains all the even-order moments of the random variable error X−Y. Therefore, as long as the appropriate kernel bandwidth is selected, higher-order information can be obtained. For the systems contaminated by noise, compared with the minimum mean square error criterion, the correntropy can be used to describe its statistical information.
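As a toy illustration of the sample estimator in Equation (19), the correntropy under a Gaussian kernel can be computed as follows; the bandwidth sigma controls how strongly large errors are down-weighted:

```python
# Sample correntropy estimate V_hat = (1/T) sum_i exp(-e_i^2 / (2 sigma^2)).
import numpy as np

def sample_correntropy(x, y, sigma):
    e = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))
```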
On this basis, this paper introduces the MCC into the continuous–discrete system. First, \bar{H}_k is defined as the pseudo-measurement matrix and \bar{R}_k as the estimated covariance matrix of the observation noise, calculated as follows:
\bar{H}_k = P_{xz,k|k-1}^{T}\, P_{k|k-1}^{-1},
\bar{R}_k = P_{zz,k|k-1} - \bar{H}_k\, P_{k|k-1}\, \bar{H}_k^{T}.
The cost function based on the MCC filtering can be defined by
J_{MCC} = G_{\sigma}\!\left(\left\| x_k - \hat{x}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}\right) + G_{\sigma}\!\left(\left\| z_k - \bar{H}_k x_k - \hat{z}_{k|k-1} + \bar{H}_k \hat{x}_{k|k-1} \right\|_{\bar{R}_k^{-1}}\right).
In this paper, the process noise is assumed to be Gaussian, and we modify the cost function as follows:
J_{MCC} = \gamma\left\| x_k - \hat{x}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}^{2} - \beta\, G_{\sigma}\!\left(\left\| z_k - \bar{H}_k x_k - \hat{z}_{k|k-1} + \bar{H}_k \hat{x}_{k|k-1} \right\|_{\bar{R}_k^{-1}}\right),
where \gamma and \beta are adjusting weights, and \|x\|_{A}^{2} = x^{T}Ax. Then, the optimal estimate of x_k is
\hat{x}_k = \arg\min_{x_k} J_{MCC}(x_k).
The optimal solution of the above equation can be obtained by setting the derivative to zero:
\frac{\partial J_{MCC}}{\partial x_k} = \gamma P_{k|k-1}^{-1}\left(x_k - \hat{x}_{k|k-1}\right) - \frac{\beta L_k}{2\sigma^2}\,\bar{H}_k^{T}\bar{R}_k^{-1}\left(z_k - \bar{H}_k x_k - \hat{z}_{k|k-1} + \bar{H}_k\hat{x}_{k|k-1}\right) = 0,
with
L_k = G_{\sigma}\!\left(\left\| z_k - \bar{H}_k x_k - \hat{z}_{k|k-1} + \bar{H}_k \hat{x}_{k|k-1} \right\|_{\bar{R}_k^{-1}}\right).
This correntropy criterion follows the idea of M-estimation, applying the Gaussian kernel into each element of the estimation error matrix. The adjustment factor obtained by the Gaussian kernel function (denoted as L k ) will be a scalar, which is easily separated and avoids the numerical calculation problem of zero matrix inversion.
As \sigma \to \infty, in order to guarantee that the algorithm converges to the traditional CD-CKF algorithm, we set \gamma = 1 and \beta = 2\sigma^2 in this paper. Then, the state estimate and the new filter gain can be obtained by the fixed-point iteration method:
\hat{x}_{k|k} = \hat{x}_{k|k-1} + \bar{\bar{K}}_k\left(z_k - \hat{z}_{k|k-1}\right),
\bar{\bar{K}}_k = L_k\, P_{k|k-1}\bar{H}_k^{T}\left(\bar{R}_k + L_k\,\bar{H}_k P_{k|k-1}\bar{H}_k^{T}\right)^{-1}.
The update of the error covariance matrix can be calculated by the following equation:
P_{k|k} = \left(I - \bar{\bar{K}}_k\bar{H}_k\right)P_{k|k-1}.
Considering that in the fixed-point iteration \bar{H}_k x_k can be replaced by \bar{H}_k\hat{x}_{k|k-1} [20], L_k can be rewritten as
L_k = G_{\sigma}\!\left(\left\| z_k - \hat{z}_{k|k-1} \right\|_{\bar{R}_k^{-1}}\right),
where G σ can be calculated by Equation (18).
Therefore, the RSRCD-MCCKF Algorithm 4 can be summarized as follows:
Algorithm 4: RSRCD-MCCKF.
Step 1 Repeat Step 1 of the SRCD-CKF algorithm.
Steps 2.1–3.7 are equivalent to Steps 2.1–3.7 of the SRCD-CKF algorithm.
Step 3.8 Calculate \bar{H}_k, \bar{R}_k, and L_k using Equations (21), (22), and (31).
Step 3.9 Obtain the new Kalman filter gain \bar{\bar{K}}_k using Equation (29).
Step 3.10 Complete the estimation of the state value and the update of the error covariance matrix using Equations (28) and (30).
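A minimal sketch of Steps 3.8–3.10 (one fixed-point pass, using the simplification of Equation (31); an illustration rather than the authors’ code) is given below:

```python
# Correntropy-rescaled measurement update of RSRCD-MCCKF.
import numpy as np

def mcc_update(x_pred, P_pred, z, z_pred, Pzz, Pxz, sigma):
    H_bar = Pxz.T @ np.linalg.inv(P_pred)            # pseudo-measurement matrix, Eq. (21)
    R_bar = Pzz - H_bar @ P_pred @ H_bar.T           # estimated noise covariance, Eq. (22)
    innov = z - z_pred
    e2 = float(innov @ np.linalg.inv(R_bar) @ innov)
    L = np.exp(-e2 / (2.0 * sigma**2))               # L_k per Eqs. (18) and (31)
    K = L * P_pred @ H_bar.T @ np.linalg.inv(R_bar + L * H_bar @ P_pred @ H_bar.T)  # Eq. (29)
    x_upd = x_pred + K @ innov                       # Eq. (28)
    P_upd = (np.eye(P_pred.shape[0]) - K @ H_bar) @ P_pred                          # Eq. (30)
    return x_upd, P_upd
```

Note how L approaches 1 as sigma grows, so the gain falls back to the ordinary cubature gain, consistent with the convergence argument above.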
In the RSRCD-MCCKF, note that the Gaussian kernel bandwidth \sigma greatly affects the accuracy of the filtering algorithm. Studies have shown [25] that when \sigma is small, the filter is more robust to non-Gaussian measurement noise; however, when \sigma is too small, the filtering performance degrades severely, and the filter may even diverge. For the actual situation of continuous–discrete systems, how to select a proper value of \sigma for different scenarios is therefore also of great significance.

3.3. VBSRCD-CKF Algorithm Based on Variational Bayes Criterion

Variational Bayesian approximation is an iterative optimization algorithm that approximates the posterior probability distribution of parameters [26]. The VB-based filter uses VB approximation to estimate the joint posterior distribution of the state and covariance p x k , R k z 1 : k , as follows:
p(x_k, R_k \mid z_{1:k}) \approx Q_x(x_k)\,Q_R(R_k),
where Q x x k and Q R R k are the yet unknown approximating densities.
Unlike the correntropy, the Kullback-Leibler (KL) divergence compares the closeness of two probability distributions. It is now possible to form a VB approximation by minimizing the KL divergence between the separable approximation and the true posterior:
\mathrm{KL}\left[Q_x(x_k)Q_R(R_k)\,\middle\|\,p(x_k, R_k \mid z_{1:k})\right] = \int\!\!\int Q_x(x_k)Q_R(R_k)\,\log\frac{Q_x(x_k)Q_R(R_k)}{p(x_k, R_k \mid z_{1:k})}\,dx_k\,dR_k.
Minimizing the KL divergence associated with the probability densities Q x x k and Q R R k in turn, while keeping the other fixed, we can obtain the following equations:
Q_x(x_k) \propto \exp\left(\int \log p(z_k, x_k, R_k \mid z_{1:k-1})\,Q_R(R_k)\,dR_k\right),
Q_R(R_k) \propto \exp\left(\int \log p(z_k, x_k, R_k \mid z_{1:k-1})\,Q_x(x_k)\,dx_k\right).
Computing the above equations, we can obtain the following densities [19]:
Q_x(x_k) = N\left(x_k \mid \hat{x}_k, P_k\right),
Q_R(R_k) = \mathrm{IW}\left(R_k \mid v_k, V_k\right),
where IW(·) represents the inverse Wishart (IW) distribution, and the parameters x ^ k , P k , v k and V k can be calculated by Kalman-type filters. The VBSRCD-CKF Algorithm 5 is as follows:
Algorithm 5: VBSRCD-CKF.
Step 1 As in Step 1 of the SRCD-CKF algorithm, with the initialization of v_k and V_k included.
Steps 2.1–2.5 are equivalent to Steps 2.1–2.5 of the SRCD-CKF algorithm.
Step 2.6 Calculate the predicted parameters of the IW distribution of the measurement noise covariance: v_{k|k-1} = \rho\left(v_{k-1} - n - 1\right) + n + 1, V_{k|k-1} = C V_{k-1} C^{T}, where \rho is a scale factor with 0 < \rho \le 1, and C is a matrix with entries in (0, 1], a reasonable choice being C = \sqrt{\rho}\,I_d; here I_d is an identity matrix, and d is the dimension of the measurement.
Step 3.1 Before calculating the state cubature points, first set \hat{x}_k^{(0)} = \hat{x}_{k|k-1}, V_k^{(0)} = V_{k|k-1}, and v_k = 1 + v_{k|k-1}.
Steps 3.2–3.8 are equivalent to Steps 3.1–3.7 of the SRCD-CKF algorithm.
Step 3.9 For j = 1, \dots, M, iterate the following steps (M is the number of inner iterations).
Step 3.9.1 Calculate the measurement noise covariance matrix: R_k^{(j)} = \left(v_k - n - 1\right)^{-1} V_k^{(j-1)}.
Step 3.9.2 Update the innovation covariance matrix: P_{zz,k|k-1}^{(j)} = \mathcal{Z}_{k|k-1}\mathcal{Z}_{k|k-1}^{T} + R_k^{(j)}.
Step 3.9.3 Calculate the continuous–discrete filter gain: K_k^{(j)} = P_{xz,k|k-1}\left(P_{zz,k|k-1}^{(j)}\right)^{-1}.
Step 3.9.4 Calculate the state estimate and update the covariance:
    \hat{x}_k^{(j)} = \hat{x}_{k|k-1} + K_k^{(j)}\left(z_k - \hat{z}_{k|k-1}\right),
    P_k^{(j)} = P_{k|k-1} - K_k^{(j)}\, P_{zz,k|k-1}^{(j)}\left(K_k^{(j)}\right)^{T}.
Step 3.9.5 Calculate the updated parameters of the IW distribution of the measurement noise covariance:
    X_{i,k}^{(j)} = S_k^{(j)}\,\xi_i + \hat{x}_k^{(j)},
    V_k^{(j)} = V_{k|k-1} + \frac{1}{2n}\sum_{i=1}^{2n}\left[z_k - h\!\left(X_{i,k}^{(j)}, k\right)\right]\left[z_k - h\!\left(X_{i,k}^{(j)}, k\right)\right]^{T},
where S_k^{(j)} is the square root of P_k^{(j)}, i.e., P_k^{(j)} = S_k^{(j)}\left(S_k^{(j)}\right)^{T}.
Step 3.10 When j = M, output \hat{x}_{k|k} = \hat{x}_k^{(M)}, P_{k|k} = P_k^{(M)}, and V_k = V_k^{(M)}.
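A simplified sketch of the VB inner loop (Steps 3.9.1–3.9.5) is given below; it assumes h returns a one-dimensional measurement array, M ≥ 1, and keeps the document’s use of n in the inverse-Wishart mean of Step 3.9.1:

```python
# Inner VB iteration: draw R from the IW parameters, perform a CKF-style
# update, then refresh the IW scale matrix from the new state estimate.
import numpy as np

def vb_update(x_pred, P_pred, Pxz, Zc, z, z_pred, h, V_pred, v_k, M):
    n = x_pred.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    V = V_pred.copy()
    for _ in range(M):
        R = V / (v_k - n - 1)                    # Step 3.9.1 (document's convention)
        Pzz = Zc @ Zc.T + R                      # Step 3.9.2
        K = Pxz @ np.linalg.inv(Pzz)             # Step 3.9.3
        x_upd = x_pred + K @ (z - z_pred)        # Step 3.9.4
        P_upd = P_pred - K @ Pzz @ K.T
        S = np.linalg.cholesky(P_upd)            # Step 3.9.5: refresh the IW scale
        X = S @ xi + x_upd[:, None]
        resid = np.column_stack([z - h(X[:, i]) for i in range(2 * n)])
        V = V_pred + (resid @ resid.T) / (2 * n)
    return x_upd, P_upd, V
```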

4. Numerical Simulation

To evaluate the performance of the proposed methods, a remote passive target tracking scenario [27] is considered. The motion model of the target is a coordinated turn, and we analyze the performance of the four continuous–discrete robust algorithms proposed in this paper for measurements with Gaussian noise (Section 4.1), non-Gaussian noise (Section 4.2), and outliers (Section 4.3). When random disturbances are considered, coordinated turn motion is a typical nonlinear motion, and the continuous-time motion model can be expressed by a stochastic differential equation.
The state vector of the target is x(t) = [x_t, \dot{x}_t, y_t, \dot{y}_t, \omega]^{T}, where x_t, y_t and \dot{x}_t, \dot{y}_t are the position and velocity of the target in the Cartesian coordinate system, respectively, and \omega is a constant turn rate of 0.01 rad/s. The single observer moves according to the CA model with initial state [0 km, 0.025 km/s, 0 km, 0.02 km/s]^{T}. The state equation of the system is F(x(t)) = [\dot{x}_t, -\omega\dot{y}_t, \dot{y}_t, \omega\dot{x}_t, 0]^{T}. The random noise term is w(t) = [w_1(t), w_2(t), w_3(t), w_4(t), w_5(t)]^{T}, whose increments are independent of the state. The process noise covariance matrix is Q = \mathrm{diag}[0, \tau_1, 0, \tau_1, \tau_2], where \tau_1 = 2 and \tau_2 = 7 \times 10^{-6}. The initial state is x(t_0) = [40 km, 0 km/s, 50 km, 0.2 km/s, 0.01 rad/s]^{T}, and the initial covariance matrix is P(t_0) = \mathrm{diag}[0.01, 0.01, 0.01, 0.01, 0]. The measurement sampling interval is 1 min, and the number of Monte Carlo simulations is 200. The experiments were run in MATLAB on an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz.
Among them, the critical value of abnormal error discrimination in the HRSRCD-CKF algorithm is c = 1.345; the \alpha quantiles of the chi-square distribution in the MRSRCD-CKF algorithm correspond to \alpha_1 = 0.1 and \alpha_2 = 0.1\%; the Gaussian kernel bandwidth \sigma in the RSRCD-MCCKF algorithm is selected according to the simulation scenario; and the initial parameters in the VBSRCD-CKF algorithm are v_0 = 600, V_0 = 0.01, \rho = 1 - e^{-4}, with M = 10 inner iterations.
The discrete-time nonlinear measurement model is
z_k = \tan^{-1}\left(y_r(t_k)/x_r(t_k)\right) + r_k,
where x_r(t_k) and y_r(t_k) are the relative positions of the target with respect to the observer at time t_k, r_k is the measurement noise, and its specific form is defined according to the simulation scenarios.
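For reference, illustrative definitions of the coordinated-turn drift and the bearing measurement described above might look as follows (the sign convention of F(x) and the use of arctan2 for quadrant handling are assumptions of this sketch):

```python
# Coordinated-turn drift and bearings-only measurement for the simulation scenario.
import numpy as np

def drift(x):
    # x = [x, x_dot, y, y_dot, omega]
    return np.array([x[1], -x[4] * x[3], x[3], x[4] * x[1], 0.0])

def bearing(x, observer_pos):
    xr = x[0] - observer_pos[0]
    yr = x[2] - observer_pos[1]
    return np.array([np.arctan2(yr, xr)])   # noiseless bearing; r_k is added separately
```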
In order to evaluate the performance of filtering, the Root Mean Square Error (RMSE) of the position state and the velocity state are respectively defined as
\mathrm{RMSE}_{pos}(t_k) = \left[\frac{1}{N}\sum_{i=1}^{N}\left\| x(t_k) - \hat{x}_i(t_k) \right\|_2^2\right]^{1/2},
\mathrm{RMSE}_{vel}(t_k) = \left[\frac{1}{N}\sum_{i=1}^{N}\left\| \dot{x}(t_k) - \hat{\dot{x}}_i(t_k) \right\|_2^2\right]^{1/2},
where N is the number of Monte Carlo simulations, \hat{x}_i(t_k) and \hat{\dot{x}}_i(t_k) are the results of the i-th Monte Carlo state estimation, and x(t_k) and \dot{x}(t_k) are the true position and velocity of the target, respectively. The trajectory of the target and observer is shown in Figure 1.
As shown in Figure 1, the bearings-only tracking scenario involves a two-dimensional target moving in a coordinated turn at a constant turn rate. The observer is maneuvered to achieve observability. In addition, it should be noted that the target tracked in this paper is an interference source on the ground, and the purpose of this paper is to improve the accuracy and robustness of this tracking process in different measurement environments through the continuous–discrete robust methods.

4.1. Gaussian Noise

The measurement noise r_k satisfies a zero-mean Gaussian distribution, that is, r_k \sim N(0, R_k). The initial value of R_k is a one-dimensional constant of 0.001 rad. Before comparing the proposed algorithms, the influence of the Gaussian kernel bandwidth \sigma on the RSRCD-MCCKF algorithm is discussed. The RMSE_pos and RMSE_vel of the algorithm at the final time instant are shown in Table 1.
As shown in Table 1, the algorithm shows different performance as the Gaussian kernel bandwidth \sigma changes. When the value of \sigma is too small or too large, the filtering accuracy is reduced; the symbol — indicates that the filter diverged. As \sigma increases, the filtering performance approaches that of the SRCD-CKF algorithm, which is in line with the previous theoretical derivation, and even at \sigma = 50 the filtering accuracy is still higher than that of the SRCD-CKF algorithm. In other words, as long as a proper value of \sigma is selected, the filtering accuracy of the RSRCD-MCCKF algorithm is higher than that of the traditional algorithm.
On this basis, the Gaussian kernel bandwidth \sigma = 5 is selected, and the RMSE_pos and RMSE_vel comparison of each algorithm is shown in Figure 2.
As shown in Figure 2, when the measurement noise is Gaussian, the error trends of the algorithms are roughly the same, and they all converge as time increases. Among them, HRSRCD-CKF and MRSRCD-CKF show small fluctuations compared with the SRCD-CKF algorithm, and their accuracy is basically the same under normal measurement conditions. The accuracy of the VBSRCD-CKF algorithm is slightly higher than that of HRSRCD-CKF and MRSRCD-CKF, which reflects the higher accuracy of the variational Bayesian method in approximate estimation. The RSRCD-MCCKF has higher accuracy than the above algorithms, because the maximum correntropy criterion can utilize higher-order information of the measurements, applying the Gaussian kernel to the estimation error, which combines well with the SRCD-CKF.

4.2. Gaussian Mixture Noise

The measurement noise is Gaussian mixture noise, a typical non-Gaussian distribution: r_k \sim 0.99\,N(0, R_k) + 0.01\,N(0, 500R_k). The RMSE_pos and RMSE_vel of the RSRCD-MCCKF algorithm at the final time instant, as the kernel bandwidth \sigma changes in this non-Gaussian noise environment, are shown in Table 2.
As shown in Table 2, the range of \sigma values that can suppress the non-Gaussian noise has changed. Compared with the Gaussian case, the filtering accuracy of the algorithm is reduced, but an appropriate \sigma can still suppress the non-Gaussian noise appearing in the measurements. Therefore, it is all the more important to select the value of \sigma adaptively according to the scenario.
Here, the Gaussian kernel bandwidth \sigma = 5 is also selected under the non-Gaussian noise condition. The RMSE_pos and RMSE_vel comparison of each algorithm is shown in Figure 3.
It can be seen from Figure 3 that when the measurement noise is non-Gaussian, the errors of all algorithms increase. The HRSRCD-CKF and MRSRCD-CKF algorithms cannot effectively suppress the non-Gaussian measurement noise and essentially fail. The VBSRCD-CKF and RSRCD-MCCKF algorithms can suppress the non-Gaussian noise to a certain extent; their filtering performance is basically equivalent and finally reaches relatively high convergence accuracy, which reflects their ability to deal with non-Gaussian noise problems.
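For reproducibility, measurement noise of the mixture form used in this subsection can be sampled as in the following sketch (illustrative; a scalar R is assumed):

```python
# Sample r_k ~ 0.99 N(0, R) + 0.01 N(0, 500 R).
import numpy as np

def mixture_noise(R, size, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    outlier = rng.random(size) < 0.01
    std = np.where(outlier, np.sqrt(500 * R), np.sqrt(R))
    return rng.normal(0.0, std)
```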

4.3. Gaussian Noise Together with Shot Noise

The measurement noise satisfies a zero-mean Gaussian distribution, and the measurement covariance matrix suddenly changes to 30 times its original value at t = 10 min, i.e., R_k(10) = 30R_k. Considering that \sigma = 5 gives good filtering performance in the RSRCD-MCCKF under both Gaussian and non-Gaussian noise, we still choose \sigma = 5 here. The RMSE_pos and RMSE_vel of each algorithm are shown in Figure 4.
As shown in Figure 4, when the measurement is abnormal, the error of the SRCD-CKF algorithm increases abruptly, i.e., it is not robust, while the other algorithms effectively resist the abnormal measurement. Among them, the VBSRCD-CKF algorithm shows the best robustness and the most obvious suppression of the abnormal measurements; the RSRCD-MCCKF algorithm is second only to the VBSRCD-CKF algorithm and shows good robustness with high accuracy; and the suppression effect of the HRSRCD-CKF and MRSRCD-CKF algorithms is slightly worse, indicating that the RSRCD-MCCKF and VBSRCD-CKF algorithms are more effective in dealing with abnormal measurement problems.

4.4. Computational Complexity

When comparing the computational complexity of each algorithm, the parameter settings are the same as in Section 4.1 and Section 4.3, respectively. The relative computation time of SRCD-CKF is set to “1”. The relative computation time of each algorithm is shown in Table 3.
As shown in Table 3, all four robust algorithms take longer than the original SRCD-CKF algorithm, but the difference is not large. In addition, the computation time of each algorithm increases slightly when interference is encountered. Among them, the HRSRCD-CKF and MRSRCD-CKF algorithms have relatively low complexity; the computation time of the VBSRCD-CKF algorithm increases due to its inner loop; and the computation time of the RSRCD-MCCKF is the longest, because its calculation includes more matrix inversions and exponential evaluations of the Gaussian kernel function. However, considering the accuracy of the algorithms and the application scenarios, this overhead is acceptable.
Furthermore, although the computation times of the proposed algorithms are longer than that of the original algorithm, their improvement in accuracy and robustness is critical, which is the focus of this research. At the same time, considering that the increase in computation time is small, the proposed algorithms can also be used for other filtering problems.

5. Conclusions

In this paper, in order to evaluate the performance of robust methods in continuous–discrete tracking systems, four robust square-root continuous–discrete cubature Kalman filter algorithms are proposed. From the results, we can draw the following conclusions:
(1)
The effectiveness of the algorithms in this paper is evaluated through a remote passive target tracking scenario, and the simulation results demonstrate the better environmental adaptability of the proposed algorithms in solving continuous–discrete robust problems. In addition, this paper aims to solve the slow tracking problem in remote passive target tracking, and the proposed algorithms can also be used in other navigation domains.
(2)
As a common tool for data processing, Huber’s estimation has a wide range of applications. The RSRCD-MCCKF and VBSRCD-CKF algorithms proposed in this paper outperform the Huber-type estimators, which provides an alternative means for the field of robust estimation.
(3)
From the simulation results, we can see that the filtering performance of the non-robust algorithm is very poor when the measurements are contaminated. Therefore, when the system’s data are contaminated by outliers, a suitable robust estimation method is indispensable.

Author Contributions

Conceptualization, R.H. and S.C.; methodology, H.H.; software, H.H.; validation, H.W. and H.H.; formal analysis, H.W.; investigation, H.H. and S.C.; resources, R.H.; data curation, H.H. and R.H.; writing—original draft preparation, H.H.; writing—review and editing, H.W.; visualization, H.H.; supervision, S.C.; project administration, S.C.; funding acquisition, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers [62073337, 61703420]; the National Postdoctoral Program for Innovative Talent, grant number [BX20190260]; the China Postdoctoral Science Foundation [2019M663998]; the Natural Science Foundation of Shaanxi Province, grant number [2020JQ-479]; and the Young Elite Scientists Sponsorship Program by SNAST [20190109].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, R.K.; Chen, S.X.; Wu, H.; Chen, K.; Liu, J. Adaptive Covariance Feedback Cubature Kalman Filtering for Continuous-Discrete Bearings-Only Tracking System. IEEE Access 2019, 7, 2686–2694. [Google Scholar] [CrossRef]
  2. Li, L.Q.; Wang, X.L.; Liu, Z.X.; Xie, W.X. Auxiliary truncated unscented Kalman filter for bearings-only maneuvering target tracking. Sensors 2017, 17, 972. [Google Scholar] [CrossRef] [PubMed]
  3. Crouse, D. Basic tracking using nonlinear continuous-time dynamic models. IEEE Aerosp. Electron. Syst. Mag. 2015, 30, 4–41. [Google Scholar] [CrossRef]
  4. Arasaratnam, I.; Haykin, S.; Hurd, T.R. Cubature Kalman Filtering for Continuous-Discrete Systems: Theory and Simulation. IEEE Trans. Signal Processing 2010, 58, 4977–4993. [Google Scholar] [CrossRef]
  5. Crouse, D.F. Cubature Kalman Filters for Continuous-Time Dynamic Models Part I: Solutions Discretizing the Langevin Equation. In Proceedings of the 2014 IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014. [Google Scholar]
  6. Crouse, D.F. Cubature Kalman Filters for Continuous-Time Dynamic Models Part II: A Solution Based on Moment Matching. In Proceedings of the 2014 IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014. [Google Scholar]
  7. Kulikov, G.Y.; Kulikova, M.V. Accurate Numerical Implementation of the Continuous-Discrete Extended Kalman Filter. IEEE Trans. Autom. Control 2014, 59, 273–279. [Google Scholar] [CrossRef]
  8. Wang, J.L.; Zhang, D.X.; Shao, X.W. New version of continuous-discrete cubature Kalman filtering for nonlinear continuous-discrete systems. ISA Trans. 2019, 91, 174–783. [Google Scholar] [CrossRef]
  9. Kulikov, G.Y.; Kulikova, M.V. NIRK-based Accurate Continuous-Discrete Extended Kalman Filters for Estimating Continuous-Time Stochastic Target Tracking Models. J. Comput. Appl. Math. 2016, 316, 260–270. [Google Scholar] [CrossRef]
  10. He, R.K.; Chen, S.X.; Wu, H.; Chen, K. Stochastic feedback based continuous-discrete cubature Kalman filtering for bearings-only tracking. Sensors 2018, 18, 1959. [Google Scholar] [CrossRef]
  11. Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation. Radar Signal Processing IEE Proc. Part F 1993, 140, 107–113. [Google Scholar] [CrossRef]
  12. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Processing 2002, 50, 174–188. [Google Scholar] [CrossRef]
  13. Meinhold, R.J.; Singpurwalla, N.D. Robustification of Kalman filter methods. J. Am. Stat. Assoc. 1989, 84, 479–486. [Google Scholar] [CrossRef]
  14. Huber, P.J.; Ronchetti, E.M. Robust Statistics, 2nd ed.; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  15. Jureckova, J.; Picek, J.; Schindler, M. Robust Statistical Methods with R, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  16. Chang, J.B.; Liu, M. M-estimator-based robust Kalman filter for systems with process modeling errors and rank deficient measurement models. Nonlinear Dyn. 2015, 80, 1431–1449. [Google Scholar] [CrossRef]
  17. Wu, H.; Chen, S.X.; Chen, K.; Yang, B.F. Robust cubature Kalman filter target tracking algorithm based on generalized M-estimation. Acta Phys. Sin. 2015, 64, 456–463. [Google Scholar] [CrossRef]
  18. Wu, H.; Chen, S.X.; Chen, K.; Yang, B.F. Robust Derivative-Free Cubature Kalman Filter for Bearings-Only Tracking. J. Guid. Control. Dyn. 2016, 39, 1865–1870. [Google Scholar] [CrossRef]
  19. Särkkä, S.; Nummenmaa, A. Recursive noise adaptive Kalman filtering by variational Bayesian approximations. IEEE Trans. Autom. Control. 2009, 54, 596–600. [Google Scholar] [CrossRef]
  20. Wang, G.Q.; Li, N.; Zhang, Y.G. Maximum correntropy unscented Kalman and information filters for non-Gaussian measurement noise. J. Frankl. Inst. 2017, 354, 8659–8677. [Google Scholar] [CrossRef]
  21. Liu, X.; Qu, H.; Zhao, J.; Yue, P.; Wang, M. Maximum correntropy unscented Kalman filter for spacecraft relative state estimation. Sensors 2016, 16, 1530. [Google Scholar] [CrossRef]
  22. Liu, X.; Chen, B.; Xu, B.; Wu, Z.; Honeine, P. Maximum correntropy unscented filter. Int. J. Syst. Sci. 2017, 48, 1607–1615. [Google Scholar] [CrossRef]
  23. Särkkä, S.; Solin, A. On continuous-discrete cubature Kalman filtering. IFAC Proc. Vol. 2012, 45, 1221–1226. [Google Scholar] [CrossRef]
  24. Särkkä, S. On unscented Kalman filtering for state estimation of continuous-time nonlinear systems. IEEE Trans. Autom. Control 2007, 52, 1631–1641. [Google Scholar] [CrossRef]
  25. Chen, B.D.; Liu, X.; Zhao, H.Q.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef]
  26. Särkkä, S.; Hartikainen, J. Non-linear noise adaptive Kalman filtering via variational Bayes. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP) Conference, Southampton, UK, 22–25 September 2013. [Google Scholar]
  27. He, R.K.; Chen, S.X.; Wu, H.; Zhang, F.Z.; Chen, K. Efficient extended cubature Kalman filtering for nonlinear target tracking. Int. J. Syst. Sci. 2020, 52, 392–406. [Google Scholar] [CrossRef]
Figure 1. The trajectory of target and observer.
Figure 2. The RMSE comparison of each algorithm under Gaussian noise. (a) RMSE_pos of each algorithm under Gaussian noise; (b) RMSE_vel of each algorithm under Gaussian noise.
Figure 3. The RMSE comparison of each algorithm under non-Gaussian noise. (a) RMSE_pos of each algorithm under non-Gaussian noise; (b) RMSE_vel of each algorithm under non-Gaussian noise.
Figure 4. The RMSE comparison of each algorithm under abnormal measurement. (a) RMSE_pos of each algorithm under abnormal measurement; (b) RMSE_vel of each algorithm under abnormal measurement.
Table 1. Algorithm accuracy under Gaussian noise when the kernel bandwidth is different.

Algorithms                  RMSE_pos [km]    RMSE_vel [km/s]
RSRCD-MCCKF (σ = 0.1)       —                —
RSRCD-MCCKF (σ = 1)         0.0528           0.000205
RSRCD-MCCKF (σ = 2)         0.0449           0.000188
RSRCD-MCCKF (σ = 5)         0.0482           0.000179
RSRCD-MCCKF (σ = 10)        0.0645           0.000211
RSRCD-MCCKF (σ = 50)        0.0771           0.000239
SRCD-CKF                    0.0784           0.000247
Table 2. Algorithm accuracy under non-Gaussian noise when the kernel bandwidth is different.

Algorithms                  RMSE_pos [km]    RMSE_vel [km/s]
RSRCD-MCCKF (σ = 2)         0.0349
RSRCD-MCCKF (σ = 3)         1.0829           0.0043
RSRCD-MCCKF (σ = 5)         0.4716           0.0014
RSRCD-MCCKF (σ = 7)         0.4813           0.0014
RSRCD-MCCKF (σ = 10)        0.5375           0.0016
SRCD-CKF                    0.6679           0.0019
Table 3. The relative computation time of each algorithm.

Algorithms      Computation Time (Scenario 4.1)    Computation Time (Scenario 4.3)
SRCD-CKF        1                                   1
HRSRCD-CKF      1.029                               1.034
MRSRCD-CKF      1.005                               1.018
RSRCD-MCCKF     1.076                               1.080
VBSRCD-CKF      1.068                               1.071
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
