Article

Filtering in Triplet Markov Chain Model in the Presence of Non-Gaussian Noise with Application to Target Tracking

1 Ministry of Education Key Laboratory for Intelligent Networks and Network Security, School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2 School of Economics and Management, Chang’an University, Xi’an 710054, China
3 Xi’an Satellite Control Center, Xi’an 710043, China
4 State Key Laboratory of Astronautic Dynamics, Xi’an 710043, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5543; https://doi.org/10.3390/rs15235543
Submission received: 30 September 2023 / Revised: 22 November 2023 / Accepted: 26 November 2023 / Published: 28 November 2023
(This article belongs to the Section Engineering Remote Sensing)

Abstract:
In hidden Markov chain (HMC) models, widely used for target tracking, the process noise and measurement noise are in general assumed to be independent and Gaussian for mathematical simplicity. However, the independence and Gaussian assumptions do not always hold in practice. For instance, in a typical radar tracking application, the measurement noise is correlated over time as the sampling frequency of a radar is generally much higher than the bandwidth of the measurement noise. In addition, target maneuvers and measurement outliers imply that the process noise and measurement noise are non-Gaussian. To solve this problem, we resort to triplet Markov chain (TMC) models to describe stochastic systems with correlated noise and derive a new filter under the maximum correntropy criterion to deal with non-Gaussian noise. By stacking the state vector, measurement vector, and auxiliary vector into a triplet state vector, the TMC model can capture the complete dynamics of stochastic systems, which may be subjected to potential parameter uncertainty, non-stationarity, or error sources. Correntropy is used to measure the similarity of two random variables; unlike the commonly used minimum mean square error criterion, which uses only second-order statistics, correntropy uses second-order and higher-order information, and is more suitable for systems in the presence of non-Gaussian noise, particularly some heavy-tailed noise disturbances. Furthermore, to reduce the influence of round-off errors, a square-root implementation of the new filter is provided using QR decomposition. Instead of the full covariance matrices, corresponding Cholesky factors are recursively calculated in the square-root filtering algorithm. This is more numerically stable for ill-conditioned problems compared to the conventional filter. Finally, the effectiveness of the proposed algorithms is illustrated via three numerical examples.

Graphical Abstract

1. Introduction

Target tracking plays an important role in both civilian and military fields [1,2,3,4]. It aims to estimate the kinematic state (e.g., position, velocity and acceleration) of a target in real time [5]. Target tracking problems are usually solved in the framework of state-space models, and the state estimation can be obtained by solving the filtering problem of a state-space model. The hidden Markov chain (HMC) model is the most commonly used state-space model for target tracking. In an HMC model, the hidden state is assumed to be a Markov process and observed via an independent observation process. For linear cases, the filtering problem is usually solved using the Kalman filter (KF) [6], which, at each time step, recursively provides an optimal solution in the minimum mean square error (MMSE) sense. The KF and its variants have been widely used in many practical applications, such as target tracking [2], navigation [1], and smart grids [7], to name but a few.
The KF in general performs well with independent and Gaussian noise sources; that is, both the process noise and the measurement noise are independent Gaussian noise sequences. However, the independence and Gaussian conditions that are typically assumed in target tracking do not always hold in practice [5]. For example, in a typical tracking application, a radar is used to track a vehicle. On the one hand, successive samples of the measurement noise are correlated over time because the sampling frequency of a radar is usually much higher than the bandwidth of the measurement noise [8]. In addition, the discretization of the object's continuous-time motion dynamics brings an extra noise contribution to the sensor measurement model, resulting in correlated process noise and measurement noise [9]. This implies that the independence assumptions implicit in the HMC model may not be satisfied in reality. On the other hand, the Gaussian distribution is commonly used to characterize the process noise and measurement noise for mathematical simplicity [10]. Nevertheless, real-world data are in general non-Gaussian due to ubiquitous disturbances, such as heavy-tailed noise. For instance, when a moving vehicle encounters an emergency (e.g., a jaywalker or a traffic accident), maneuvering is required to avoid potential hazards. Changes in the vehicle aspect with respect to the radar due to maneuvers probably result in significant radar cross-section fluctuations. In this case, the target kinematic state and its associated measurement are found to be non-Gaussian and heavy-tailed [5,11]. In addition, outliers from unreliable sensors imply that the measurement noise is also non-Gaussian and heavy-tailed [12]. When the independence and Gaussian assumptions implicit in the HMC model are not tenable, the KF may fail to produce reliable estimation results.
For stochastic systems with correlated noise, traditional solutions involve reconstructing a new HMC model with independent process noise and measurement noise using a pre-whitening technique, and then using the standard KF to calculate the state estimate [1]. In recent years, several other forms of state-space models have been developed for modeling stochastic systems. One such model is the pairwise Markov chain (PMC) model [13,14,15,16,17]. In the PMC model, the pair $(X, Y)$ is Markovian, where $X$ is the hidden process and $Y$ is the observed one. The PMC model is more general than the HMC model, since $X$ in the PMC model is not necessarily Markovian, which opens the possibility of improving modeling capability. Another popular model is the triplet Markov chain (TMC) model [18,19,20]. In the TMC model, the triplet $(X, R, Y)$ is Markovian, where $R$ is an auxiliary process. The TMC model is more general than the PMC (and the HMC) model, since $(X, Y)$ is not necessarily Markovian in the TMC model. Moreover, such an auxiliary process makes it possible to capture the complete dynamics of stochastic systems, and it is very useful in some engineering applications, which may be subject to potential parameter uncertainty, non-stationarity, or error sources [19]. In the framework of the TMC model, a Kalman-like filter, called the triplet Kalman filter (TKF), has been developed for the linear Gaussian case [18]. The structural advantages of the TMC model make it preferable in solving some practical engineering problems, such as target tracking [19], signal segmentation [21], speech processing [22], and so on.
To solve the filtering problem of stochastic systems in the presence of non-Gaussian noise, particularly heavy-tailed noise disturbances, several strategies have been developed. In general, these strategies can be divided into three categories [23]. The first uses heavy-tailed distributions to model non-Gaussian noise, such as the most commonly used t-distribution [5,11,12,24]. However, unlike the Gaussian distribution, the t-distribution is difficult to handle analytically, so the corresponding filtering algorithms do not have a closed-form solution. The second is the multiple model approach, in which a finite sum of Gaussian distributions is used to represent non-Gaussian noise [25,26]. However, recursive filtering is not feasible in the classic family of conditionally Gaussian linear state-space models (CGLSSMs), which is a natural extension of Gaussian linear systems arrived at by introducing a dependence on switches, and approximate approaches need to be used [27]. As alternative models, conditionally Markov switching hidden linear models (CMSHLMs) allow fast exact optimal filtering in the presence of jumps [28]. In particular, CMSHLM-based filters can be as efficient as particle filters in the case of Gaussian mixtures [29]. CMSHLMs are also used to approximate nonlinear non-Gaussian stationary Markov dynamic systems and allow fast exact filtering [30]. The third strategy is the Monte Carlo method, which employs a set of weighted random particles to approximate the probability density function of the state [31]. In theory, enough particles can approximate the real distribution of a state with arbitrary precision. Sampling methods are generally categorized into deterministic sampling approaches [32] and random sampling approaches [31]. However, sampling approaches, especially random sampling approaches, usually carry a huge computational burden, which does not facilitate online application.
Recently, research on filtering techniques under the maximum correntropy (MC) criterion has become an important direction for the state estimation of stochastic systems in the presence of non-Gaussian noise [33,34,35,36]. Correntropy is a statistical metric that measures the similarity of two random variables in information theory; unlike the commonly used MMSE criterion, which uses only second-order statistics, the MC criterion uses second-order statistics and higher-order information, thus offering the possibility of improving estimation accuracy for systems in the presence of non-Gaussian noise. Several filtering techniques under the MC criterion have been developed for linear non-Gaussian HMC models [23,37,38]. They outperform the classical KF against heavy-tailed noise disturbances. The MC criterion has also been extended to nonlinear cases [39,40]. In addition, one study [38] provides two square-root filtering algorithms under the MC criterion to improve numerical stability. However, these filtering algorithms do not take into account the case of correlated noise in stochastic systems, such as the aforementioned target tracking example, and thus, they may not produce reliable estimation results.
In this paper, our aim is to address the filtering problem of stochastic systems with correlated and non-Gaussian noise. To solve this problem, we resort to a linear TMC model to formulate stochastic systems with correlated noise, and then derive a new filter under the MC criterion to deal with non-Gaussian noise. The inspiration for this idea comes from two aspects. One is that the TMC model relaxes the independence assumptions that are typically made in the HMC model, and it can capture the complete dynamics of stochastic systems subject to potential parameter uncertainty, non-stationarity, or error sources [19]. The other is that the MC criterion can capture higher-order statistics compared to the commonly used MMSE criterion, which utilizes only second-order information. It has been found that the MC criterion is more suitable than the MMSE criterion for stochastic systems in the presence of non-Gaussian noise, particularly heavy-tailed noise disturbances [37,38,41,42].
In brief, the main contributions of this work are as follows:
(1) A new maximum correntropy triplet Kalman filter (MCTKF) is developed to address the estimation problem of dynamic systems with correlated and non-Gaussian noise. In this filter, the TMC model is employed to describe common noise-dependent dynamic systems and the maximum correntropy criterion, instead of the MMSE criterion, is used to address non-Gaussian noise. A single target tracking example, where the process noise is autocorrelated and the process noise and measurement noise are non-Gaussian, is provided to verify the effectiveness of the MCTKF;
(2) A square-root implementation of the MCTKF using QR decomposition is designed to improve the numerical stability with respect to round-off errors, which are ubiquitous in many real-world applications. Instead of full covariance matrices in the MCTKF, corresponding Cholesky factors are recursively computed in the square-root MCTKF. A linear non-Gaussian TMC example with round-off errors is provided to verify the numerical stability of the square-root MCTKF;
(3) A bearing-only multi-target tracking example with non-Gaussian process noise and measurement noise is provided to verify the effectiveness of the nonlinear extension of the MCTKF, i.e., the maximum correntropy extended triplet Kalman filter (MCETKF). We use the multi-Bernoulli (MB) filtering framework [43] to address the multi-target tracking problem, in which the MCETKF, the extended TKF (ETKF), and the extended KF (EKF) are tested. Test results show that the proposed MB-MCETKF outperforms the other filtering algorithms in terms of the estimation accuracy of both the target states and the target number.
The rest of the paper is organized as follows. The problem formulation is first introduced in Section 2. Next, the MCTKF and its square-root implementation using QR decomposition are provided in Section 3. Section 4 discusses how to model practical applications with dependent noise. In Section 5, we validate the proposed filters via simulations. Finally, we conclude the paper in Section 6.

2. Problem Formulation

2.1. Hidden Markov Chain Model

The HMC model is the most popular form of state-space model and it is commonly used for target tracking. Consider a linear HMC model as follows:
$$x_{k+1} = F_k x_k + w_k, \qquad z_k = H_k x_k + v_k, \tag{1}$$
where $x_k \in \mathbb{R}^{n_x}$ and $z_k \in \mathbb{R}^{n_z}$ are the state vector with dimension $n_x$ and the measurement vector with dimension $n_z$ at time $k$, respectively. $F_k$ and $H_k$ are the transition matrix and the measurement matrix, respectively. $w_k \sim \mathcal{N}(0, Q_k)$ and $v_k \sim \mathcal{N}(0, R_k)$ are the process noise and the measurement noise, respectively. $\mathcal{N}(m, P)$ denotes a Gaussian distribution with mean vector $m$ and covariance matrix $P$. In general, the process noise and measurement noise in model (1) are assumed to be individually white, jointly independent, and independent of the initial state $x_0$. Thus, the classical KF can be employed to estimate the state, and it recursively provides an optimal solution in the MMSE sense [6].
However, the independence and Gaussian assumptions implicit in the HMC model may not be satisfied in practice. As in the aforementioned example, in a typical radar target-tracking application, the measurement noise is correlated (or colored) over time because the sampling frequency of a radar is usually much higher than the bandwidth of the measurement noise. In addition, target maneuvers in emergencies and outliers from unreliable sensors imply that the process and measurement noises are non-Gaussian and heavy-tailed. In this case, the KF may not produce reliable state estimates.

2.2. Triplet Markov Chain Model

Taking into account the independence constraints on the noise sources in the HMC model, we resort to a linear TMC model to formulate stochastic systems with correlated noise [18]. By stacking the state vector, measurement vector, and auxiliary vector into a triplet state vector, the TMC model can capture the complete dynamics of stochastic systems subject to potential parameter uncertainty, non-stationarity, or error sources [19]. Consider a linear TMC model given by
$$\underbrace{\begin{bmatrix} x_{k+1} \\ r_{k+1} \\ z_k \end{bmatrix}}_{\zeta_{k+1}} = \underbrace{\begin{bmatrix} F^{xx}_k & F^{xr}_k & F^{xz}_k \\ F^{rx}_k & F^{rr}_k & F^{rz}_k \\ F^{zx}_k & F^{zr}_k & F^{zz}_k \end{bmatrix}}_{F_k} \begin{bmatrix} x_k \\ r_k \\ z_{k-1} \end{bmatrix} + \underbrace{\begin{bmatrix} \xi^x_k \\ \xi^r_k \\ \xi^z_k \end{bmatrix}}_{\xi_k}, \tag{2}$$
where $x_k \in \mathbb{R}^{n_x}$, $r_k \in \mathbb{R}^{n_r}$, and $z_k \in \mathbb{R}^{n_z}$ are the state vector with dimension $n_x$, the auxiliary vector with dimension $n_r$, and the measurement vector with dimension $n_z$ at time $k$, respectively. The matrix $F_k$ is deterministic, and the noise process $\xi_{0:k} = (\xi_0, \xi_1, \ldots, \xi_k)$ is a white noise sequence.
In model (2), only the process $\zeta = \zeta_{0:k}$ is assumed to be a Markov process; the other chains, e.g., $x$, $r$, $z$, $(x, r)$, $(x, z)$, and $(r, z)$, need not be Markov in general [18]. It has been found that model (2) can directly describe systems of the form (1) with autoregressive process noise, autoregressive measurement noise, or correlated process noise and measurement noise (see Section II.B in [18]). The structural advantages of the TMC model make it preferable for modeling some practical engineering problems.

2.3. Triplet Kalman Filter

A Kalman-like filter has been provided for the linear TMC model with Gaussian noise under the MMSE criterion [18]. It is referred to as the triplet Kalman filter (TKF). For convenience, the TKF is briefly reviewed here. In model (2), the state variable $x_k$ and the auxiliary variable $r_k$ are combined into an augmented variable $x^*_k = [x_k^T, r_k^T]^T$. Then, model (2) can be rewritten compactly as
$$\begin{bmatrix} x^*_{k+1} \\ z_k \end{bmatrix} = \begin{bmatrix} F^{x^*x^*}_k & F^{x^*z}_k \\ F^{zx^*}_k & F^{zz}_k \end{bmatrix} \begin{bmatrix} x^*_k \\ z_{k-1} \end{bmatrix} + \begin{bmatrix} \xi^{x^*}_k \\ \xi^z_k \end{bmatrix}. \tag{3}$$
Let us further assume that
$$x^*_0 \sim \mathcal{N}(\hat{x}^*_0, P^*_0) \quad \text{and} \quad \xi_k \sim \mathcal{N}\left( 0, \underbrace{\begin{bmatrix} Q^{x^*x^*}_k & Q^{x^*z}_k \\ Q^{zx^*}_k & Q^{zz}_k \end{bmatrix}}_{Q_k} \right). \tag{4}$$
For model (3) with (4), the TKF recursion is summarized in Proposition 1.
Proposition 1
(Triplet Kalman Filter). Consider the linear Gaussian triplet Markov system (3) with (4). Then, the state estimate $\hat{x}^*_{k|k}$ and its corresponding covariance $P^*_{k|k}$ can be computed from $\hat{x}^*_{k-1|k-1}$ and $P^*_{k-1|k-1}$ via the following equations:
Initialization: Set initial values and constants
1. $\hat{x}^*_{0|0} = \bar{x}^*_0$, $P^*_{0|0} = P^*_0$,
2. $\hat{F}^{x^*x^*}_k = F^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zx^*}_k$,
3. $\hat{F}^{x^*z}_k = F^{x^*z}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zz}_k$,
4. $\hat{Q}^{x^*x^*}_k = Q^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} Q^{zx^*}_k$.
Prediction: Compute the predicted state $\hat{x}^*_{k|k-1}$ and $P^*_{k|k-1}$ by
5. $\hat{x}^*_{k|k-1} = \hat{F}^{x^*x^*}_{k-1} \hat{x}^*_{k-1|k-1} + Q^{x^*z}_{k-1} (Q^{zz}_{k-1})^{-1} z_{k-1}$,
6. $P^*_{k|k-1} = \hat{Q}^{x^*x^*}_{k-1} + \hat{F}^{x^*x^*}_{k-1} P^*_{k-1|k-1} (\hat{F}^{x^*x^*}_{k-1})^T$.
Update: Compute the updated state $\hat{x}^*_{k|k}$ and $P^*_{k|k}$ by
7. $e_k = z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1}$,
8. $R_{e,k} = F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T + Q^{zz}_k$,
9. $K_k = P^*_{k|k-1} (F^{zx^*}_k)^T R_{e,k}^{-1}$,
10. $\hat{x}^*_{k|k} = \hat{x}^*_{k|k-1} + K_k e_k$,
11. $P^*_{k|k} = \left( I - K_k F^{zx^*}_k \right) P^*_{k|k-1}$.
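To make the recursion concrete, the following is a minimal NumPy sketch of one TKF cycle under the block partitioning of (3) and (4); the function and variable names are ours (not from [18]), and the prediction implements step 5 exactly as stated above.

```python
import numpy as np

def tkf_step(x_hat, P, z_prev, z, F, Q, nxs):
    """One TKF recursion (Proposition 1). F and Q are partitioned as in
    model (3)-(4); nxs is the dimension of the augmented state x*."""
    Fzx, Fzz = F[nxs:, :nxs], F[nxs:, nxs:]
    Qxx, Qxz = Q[:nxs, :nxs], Q[:nxs, nxs:]
    Qzz = Q[nxs:, nxs:]
    Qzz_inv = np.linalg.inv(Qzz)
    # Steps 2 and 4: decorrelate the state noise from the measurement noise.
    Fxx_h = F[:nxs, :nxs] - Qxz @ Qzz_inv @ Fzx
    Qxx_h = Qxx - Qxz @ Qzz_inv @ Qxz.T
    # Steps 5-6: prediction.
    x_pred = Fxx_h @ x_hat + Qxz @ Qzz_inv @ z_prev
    P_pred = Qxx_h + Fxx_h @ P @ Fxx_h.T
    # Steps 7-11: measurement update.
    e = z - Fzx @ x_pred - Fzz @ z_prev
    Re = Fzx @ P_pred @ Fzx.T + Qzz
    K = P_pred @ Fzx.T @ np.linalg.inv(Re)
    return x_pred + K @ e, (np.eye(nxs) - K @ Fzx) @ P_pred
```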
The TKF recursively provides an optimal solution in the MMSE sense and, in general, performs well in the presence of Gaussian noise. However, it may fail to produce reliable estimation results for stochastic systems in the presence of non-Gaussian noise, such as heavy-tailed noise disturbances. The main reason for this is that the MMSE criterion adopted by the TKF captures only the second-order statistics of the innovation $e_k$ and is sensitive to impulsive noise disturbances. To solve this problem, a new filter is derived under the maximum correntropy criterion in the next section. As correntropy utilizes second-order and higher-order statistics of the innovation, the new filter may perform much better than the TKF for stochastic systems in the presence of non-Gaussian noise.
In terms of floating-point operations, it can be concluded from Table 1 that the computational complexity of the TKF is
$$S_{\mathrm{TKF}} = 6 n_{x^*}^3 + n_{x^*}^2 n_z + n_{x^*} n_z^2 + 5 n_{x^*} n_z - n_{x^*} + n_z^2 + 2\, O(n_z^3). \tag{5}$$

3. Methods

3.1. Maximum Correntropy Triplet Kalman Filter

3.1.1. Correntropy

Correntropy is an important statistical metric in information theory and is widely used in signal processing, pattern recognition, and machine learning [44,45]. It is used to measure the similarity between two random variables. Specifically, given two random variables $X$ and $Y$, the correntropy is defined by
$$C(X, Y) = \mathrm{E}[\kappa(X, Y)] = \int \kappa(x, y)\, \mathrm{d}F_{XY}(x, y), \tag{6}$$
where $\mathrm{E}[\cdot]$ denotes the expectation operator, $\kappa(\cdot, \cdot)$ denotes a positive definite kernel function satisfying Mercer's theorem (i.e., the kernel matrix is positive semi-definite), and $F_{XY}(x, y)$ is the joint distribution function of $X$ and $Y$. In practice, the joint distribution $F_{XY}(x, y)$ is usually unknown, and only a finite number of samples is available. In this case, a sample mean estimator can be used to compute the correntropy:
$$\hat{C}(X, Y) = \frac{1}{N} \sum_{i=1}^{N} \kappa(x_i, y_i), \tag{7}$$
where $\{(x_i, y_i)\}_{i=1}^{N}$ denotes $N$ samples drawn from the joint distribution $F_{XY}$.
In this paper, unless otherwise specified, the kernel function is a Gaussian kernel function given by
$$\kappa(x_i, y_i) = G_\sigma(x_i - y_i), \tag{8}$$
where $G_\sigma(x_i - y_i) = \exp\left( -\frac{(x_i - y_i)^2}{2\sigma^2} \right)$ and $\sigma$ is the kernel size. From (8), one can see that the Gaussian correntropy is positive and bounded, and it reaches its maximum if and only if $X = Y$.
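As a quick illustration of (7) and (8), the sample correntropy takes a few lines of NumPy (our sketch; names are ours); note how a single gross outlier barely moves the estimate, in contrast to a mean-square statistic.

```python
import numpy as np

def sample_correntropy(x, y, sigma):
    """Sample estimator (7) of correntropy with the Gaussian kernel (8)."""
    e = np.asarray(x) - np.asarray(y)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

x = np.zeros(100)
y = np.zeros(100); y[0] = 1e3          # one gross outlier
print(sample_correntropy(x, x, 1.0))   # 1.0, the maximum
print(sample_correntropy(x, y, 1.0))   # ~0.99, the outlier is suppressed
```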
Taking the Taylor series expansion of the Gaussian kernel function, we have
$$C(X, Y) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n \sigma^{2n} n!} \mathrm{E}\left[ (X - Y)^{2n} \right]. \tag{9}$$
As can be seen from (9), the correntropy is expressed as a weighted sum of all even-order moments of $X - Y$ [37]. Compared to the MMSE criterion, which uses only second-order statistics of the error information and is sensitive to large outliers, the correntropy captures the second-order and higher-order moments of the error, which makes it preferable for addressing non-Gaussian noise. Note that when $\sigma$ tends to infinity, the correntropy is dominated by the leading terms on the right-hand side of (9), i.e., the constant $n = 0$ term and the second-order $n = 1$ term, so the MC criterion then behaves like the MMSE criterion.

3.1.2. Main Result

To address the filtering problem of stochastic systems with correlated and non-Gaussian noise, a new filter is derived based on the maximum correntropy (MC) criterion in the framework of the linear TMC model. The new filter is referred to as the maximum correntropy triplet Kalman filter (MCTKF) and is summarized in Proposition 2.
Proposition 2
(Maximum Correntropy Triplet Kalman Filter). Consider the linear triplet Markov system (3) with nominal Gaussian parameters (4), operating in the presence of non-Gaussian noise. Then, the state estimate $\hat{x}^*_{k|k}$ and its corresponding covariance $P^*_{k|k}$ can be computed from $\hat{x}^*_{k-1|k-1}$ and $P^*_{k-1|k-1}$ via the following equations:
Initialization: Set initial values and constants
1. $\hat{x}^*_{0|0} = \bar{x}^*_0$, $P^*_{0|0} = P^*_0$,
2. $\hat{F}^{x^*x^*}_k = F^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zx^*}_k$,
3. $\hat{F}^{x^*z}_k = F^{x^*z}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zz}_k$,
4. $\hat{Q}^{x^*x^*}_k = Q^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} Q^{zx^*}_k$.
Prediction: Compute the predicted state $\hat{x}^*_{k|k-1}$ and $P^*_{k|k-1}$ by
5. $\hat{x}^*_{k|k-1} = \hat{F}^{x^*x^*}_{k-1} \hat{x}^*_{k-1|k-1} + Q^{x^*z}_{k-1} (Q^{zz}_{k-1})^{-1} z_{k-1}$,
6. $P^*_{k|k-1} = \hat{Q}^{x^*x^*}_{k-1} + \hat{F}^{x^*x^*}_{k-1} P^*_{k-1|k-1} (\hat{F}^{x^*x^*}_{k-1})^T$.
Update: Compute the updated state $\hat{x}^*_{k|k}$ and $P^*_{k|k}$ by
7. $e_k = z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1}$,
8. $\lambda_k = G_{\sigma_k}\big( \| e_k \|_{(Q^{zz}_k)^{-1}} \big)$,
9. $R^\lambda_{e,k} = \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T + Q^{zz}_k$,
10. $K^\lambda_k = \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T (R^\lambda_{e,k})^{-1}$,
11. $\hat{x}^*_{k|k} = \hat{x}^*_{k|k-1} + K^\lambda_k e_k$,
12. $P^*_{k|k} = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1}$.
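Since the MCTKF differs from the TKF only through the inflation parameter $\lambda_k$ in the update step, a minimal NumPy sketch of that update is given below. It uses the adaptive kernel size (10) discussed next, under which $\lambda_k$ is the constant $\exp(-1/2)$; all names are ours, and the prediction step is identical to that of the TKF sketch in Section 2.3.

```python
import numpy as np

def mctkf_update(x_pred, P_pred, z_prev, z, Fzx, Fzz, Qzz):
    """MCTKF measurement update (steps 7-12 of Proposition 2)."""
    e = z - Fzx @ x_pred - Fzz @ z_prev
    # Step 8 with the adaptive kernel size (10): sigma_k equals the
    # Q_zz^{-1}-weighted norm of e_k, so G_{sigma_k}(.) = exp(-1/2).
    # With a fixed kernel size one would instead compute
    # lam = exp(-(e @ inv(Qzz) @ e) / (2 * sigma_fixed**2)).
    lam = np.exp(-0.5)
    # Steps 9-12: lambda_k-scaled innovation covariance and gain.
    Re = lam * Fzx @ P_pred @ Fzx.T + Qzz
    K = lam * P_pred @ Fzx.T @ np.linalg.inv(Re)
    x_upd = x_pred + K @ e
    P_upd = (np.eye(len(x_pred)) - K @ Fzx) @ P_pred
    return x_upd, P_upd
```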
The MCTKF has a recursive structure similar to that of the TKF, except that an extra inflation parameter $\lambda_k$ is involved in the update step. The inflation parameter $\lambda_k$ can be regarded as a scale factor that controls the information inflation of $K^\lambda_k$, and it is calculated according to the MC criterion. The MC criterion uses second-order and higher-order statistics of the innovation $e_k$, which makes the MCTKF perform well for stochastic systems in the presence of non-Gaussian noise.
The kernel size $\sigma_k$ plays an important role in the behavior of the MCTKF: the MCTKF reduces to the TKF when $\sigma_k$ tends to infinity, as the inflation parameter then becomes $\lambda_k = 1$. Here, we adopt an adaptive strategy suggested in [23] to choose $\sigma_k$, which is a function of the innovation and is computed by
$$\sigma_k = \left\| z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1} \right\|_{(Q^{zz}_k)^{-1}}. \tag{10}$$
Choosing (10) for $\sigma_k$ results in the parameter $\lambda_k$ being a constant, i.e., $\lambda_k = \exp(-\frac{1}{2}) \approx 0.6065$. According to the results in Section 5.1, although this strategy in general cannot obtain the optimal value of $\sigma_k$, it still allows the MCTKF to outperform the TKF in dealing with non-Gaussian noise. Furthermore, the MCTKF with (10) also outperforms an MCTKF with a fixed kernel size whenever that fixed $\sigma_k$ is inappropriately selected. To sum up, the kernel size plays a very important role in the MCTKF, and (10) is a fair, competitive strategy at present. We will study this problem in our future research.
According to Table 2, the computational complexity of the MCTKF given in (11) is almost the same as that of the TKF shown in (5), which facilitates its practical application.
$$S_{\mathrm{MCTKF}} = 6 n_{x^*}^3 + n_{x^*}^2 n_z + n_{x^*} n_z^2 + n_{x^*} n_z - n_{x^*} + n_z^2 + n_z + 3\, O(n_z^3). \tag{11}$$

3.1.3. Derivation of the MCTKF

For the linear TMC model (3) with (4), according to the prediction step of the TKF in Proposition 1, we have
$$\begin{bmatrix} \hat{x}^*_{k|k-1} \\ z_k \end{bmatrix} = \begin{bmatrix} I \\ F^{zx^*}_k \end{bmatrix} x^*_k + \begin{bmatrix} 0 \\ F^{zz}_k \end{bmatrix} z_{k-1} + \eta_k, \tag{12}$$
where $I$ and $0$ denote an $n_{x^*} \times n_{x^*}$ identity matrix and an $n_{x^*} \times n_z$ zero matrix, respectively. The error $\eta_k$ is
$$\eta_k = \begin{bmatrix} x^*_k - \hat{x}^*_{k|k-1} \\ \xi^z_k \end{bmatrix}, \quad \text{and} \quad \mathrm{E}[\eta_k \eta_k^T] = \begin{bmatrix} P^*_{k|k-1} & 0 \\ 0 & Q^{zz}_k \end{bmatrix}. \tag{13}$$
For the case of non-Gaussian noise, we use the maximum Gaussian kernel-based correntropy criterion instead of the MMSE criterion to derive the update equations. The main reason for this is that the MMSE criterion uses only second-order statistics of the error signal and is sensitive to large outliers, whereas the correntropy captures second-order and higher-order moments of the error, and may therefore perform much better for non-Gaussian noise, especially when the dynamic system is disturbed by heavy-tailed impulsive noise. Thus, the objective function under the maximum Gaussian kernel-based correntropy criterion is
$$J_1(x^*_k) = G_{\sigma_k}\left( z_k - F^{zx^*}_k x^*_k - F^{zz}_k z_{k-1} \right) + G_{\sigma_k}\left( x^*_k - \hat{x}^*_{k|k-1} \right). \tag{14}$$
In addition, a weighting matrix, as in weighted least squares (WLS), contributes to obtaining a minimum-covariance estimate. Therefore, under the MC and WLS criteria, we establish a new objective function given by
$$J(x^*_k) = G_{\sigma_k}\left( \left\| z_k - F^{zx^*}_k x^*_k - F^{zz}_k z_{k-1} \right\|_{(Q^{zz}_k)^{-1}} \right) + G_{\sigma_k}\left( \left\| x^*_k - \hat{x}^*_{k|k-1} \right\|_{(P^*_{k|k-1})^{-1}} \right). \tag{15}$$
Our goal is to find a solution $\hat{x}^*_k$ that maximizes objective function (15) to deal with non-Gaussian noise, i.e.,
$$\hat{x}^*_k = \arg\max_{x^*_k} J(x^*_k). \tag{16}$$
One can obtain the maximum of objective function (15) by setting its gradient to zero, because the objective function is concave and its maximizer is unique. The main reason is as follows. Both terms on the right-hand side of (15) are Gaussian kernel functions. On the one hand, the exponential function with base $e \approx 2.71828$ is a positive, monotonically increasing function. On the other hand, the exponent is a negative semi-definite quadratic form, which is a concave function with a unique maximum. Thus, each Gaussian kernel function reaches its maximum value when the exponent takes its maximum value, which is found by setting the gradient of the exponent to zero. In addition, according to the properties of the exponential function, setting the gradient of the exponential function to zero is equivalent to setting the gradient of its exponent to zero. Therefore, the solution obtained by setting the gradient of the objective function to zero maximizes the objective function.
According to the above analysis, maximization of the objective function $J(x^*_k)$ with respect to $x^*_k$ implies $\frac{\partial J(x^*_k)}{\partial x^*_k} = 0$, i.e.,
$$\begin{aligned} \frac{\partial J(x^*_k)}{\partial x^*_k} = {}& \frac{1}{\sigma_k^2} G_{\sigma_k}\left( \left\| z_k - F^{zx^*}_k x^*_k - F^{zz}_k z_{k-1} \right\|_{(Q^{zz}_k)^{-1}} \right) (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} \left( z_k - F^{zx^*}_k x^*_k - F^{zz}_k z_{k-1} \right) \\ & - \frac{1}{\sigma_k^2} G_{\sigma_k}\left( \left\| x^*_k - \hat{x}^*_{k|k-1} \right\|_{(P^*_{k|k-1})^{-1}} \right) (P^*_{k|k-1})^{-1} \left( x^*_k - \hat{x}^*_{k|k-1} \right) = 0. \end{aligned} \tag{17}$$
Equation (17) can be written more compactly as
$$\Psi_k x^*_k = (P^*_{k|k-1})^{-1} \hat{x}^*_{k|k-1} + \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} \left( z_k - F^{zz}_k z_{k-1} \right), \tag{18}$$
where
$$\Psi_k = (P^*_{k|k-1})^{-1} + \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} F^{zx^*}_k, \tag{19}$$
$$\lambda_k = \frac{ G_{\sigma_k}\left( \left\| z_k - F^{zx^*}_k x^*_k - F^{zz}_k z_{k-1} \right\|_{(Q^{zz}_k)^{-1}} \right) }{ G_{\sigma_k}\left( \left\| x^*_k - \hat{x}^*_{k|k-1} \right\|_{(P^*_{k|k-1})^{-1}} \right) }. \tag{20}$$
Adding and subtracting the term $\lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} F^{zx^*}_k \hat{x}^*_{k|k-1}$ on the right-hand side of (18) gives
$$\Psi_k x^*_k = \Psi_k \hat{x}^*_{k|k-1} + \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} \left( z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1} \right). \tag{21}$$
Then, the estimate of $x^*_k$ is
$$\hat{x}^*_{k|k} = \hat{x}^*_{k|k-1} + K^\lambda_k \left( z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1} \right), \tag{22}$$
where
$$K^\lambda_k = \Psi_k^{-1} \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1}. \tag{23}$$
The covariance matrix $P^*_{k|k}$ has a form similar to that of the standard TKF (see step 11 in Proposition 1), except that $K_k$ is replaced by $K^\lambda_k$, i.e.,
$$P^*_{k|k} = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1}. \tag{24}$$
Remark 1.
In the gain matrix $K^\lambda_k$, the involved $\lambda_k$ is a function of the variable $x^*_k$. In other words, (22) is a fixed-point equation, i.e., $x^*_k = f(x^*_k)$. In theory, the estimate of $x^*_k$ can be obtained via a fixed-point iterative technique given by
$$\hat{x}^{*,n+1}_{k|k} = f\left( \hat{x}^{*,n}_{k|k} \right),$$
where $\hat{x}^{*,n}_{k|k}$ denotes the estimation result of the $n$th iteration, initialized by $\hat{x}^{*,0}_{k|k} = \hat{x}^*_{k|k-1}$. It has been found that only one iteration of the fixed-point rule is required in practice [38]. Therefore, by substituting $x^*_k \approx \hat{x}^*_{k|k-1}$ into (20), we have
$$\lambda_k = G_{\sigma_k}\left( \left\| z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1} \right\|_{(Q^{zz}_k)^{-1}} \right), \tag{25}$$
since the denominator of (20) is $G_{\sigma_k}(0) = 1$.
Remark 2.
The calculation of $K^\lambda_k$ in (23) involves two $n_{x^*} \times n_{x^*}$ matrix inversions and one $n_z \times n_z$ matrix inversion. Matrix inversion generally requires substantial computing resources and becomes impractical when the dimension of the matrix is very large. In addition, from the perspective of numerical stability, matrix inversion should also be avoided as far as possible. Inspired by the theoretical result of Lemma 1 in [38], we similarly provide several algebraically equivalent formulas for the gain matrix $K^\lambda_k$ and the covariance matrix $P^*_{k|k}$ in Lemma 1 below.
Lemma 1.
Consider the state-space model (3) with (4) in the presence of non-Gaussian noise. The estimation problem can be solved by the MCTKF shown in Proposition 2, where the gain matrix $K^\lambda_k$ and the covariance matrix $P^*_{k|k}$ can be equivalently replaced by the following formulas:
$$K^\lambda_k = \lambda_k \Psi_k^{-1} (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} \tag{26}$$
$$= \lambda_k P^*_{k|k} (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} \tag{27}$$
$$= \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T (R^\lambda_{e,k})^{-1}, \tag{28}$$
$$P^*_{k|k} = \Psi_k^{-1} \tag{29}$$
$$= \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} \tag{30}$$
$$= P^*_{k|k-1} - \lambda_k^{-1} K^\lambda_k R^\lambda_{e,k} (K^\lambda_k)^T \tag{31}$$
$$= \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} \left( I - \lambda_k K^\lambda_k F^{zx^*}_k \right)^T + K^\lambda_k Q^{zz}_k (K^\lambda_k)^T, \tag{32}$$
where $\Psi_k$ and $R^\lambda_{e,k}$ are as follows:
$$\Psi_k = (P^*_{k|k-1})^{-1} + \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} F^{zx^*}_k,$$
$$R^\lambda_{e,k} = \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T + Q^{zz}_k,$$
and $\lambda_k$ is given by (25).
Proof. 
(1) Algebraic equivalence of the $K^\lambda_k$ formulas. First, (26) can be directly obtained by substituting (29) into (27), which implies that (26) and (27) are algebraically equivalent.
Next, we prove the algebraic equivalence of Formulas (27) and (28). Substituting (30) into (27), we have
$$K^\lambda_k = \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} - \lambda_k K^\lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T (Q^{zz}_k)^{-1}. \tag{33}$$
Formula (33) can be rewritten as
$$K^\lambda_k \left( Q^{zz}_k + \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T \right) (Q^{zz}_k)^{-1} = \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T (Q^{zz}_k)^{-1}, \tag{34}$$
and thus, we have
$$K^\lambda_k = \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T \left( Q^{zz}_k + \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T \right)^{-1}. \tag{35}$$
Hence, Formulas (26)–(28) of the gain matrix $K^\lambda_k$ are algebraically equivalent.
(2) Algebraic equivalence of the $P^*_{k|k}$ formulas. First, we prove the algebraic equivalence of Formulas (29) and (30). According to the matrix inversion lemma [46], i.e.,
$$\left( A + BCD \right)^{-1} = A^{-1} - A^{-1} B \left( C^{-1} + D A^{-1} B \right)^{-1} D A^{-1}, \tag{36}$$
Formula (29) can be rewritten as
$$P^*_{k|k} = P^*_{k|k-1} - \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T \left( Q^{zz}_k + \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T \right)^{-1} F^{zx^*}_k P^*_{k|k-1}. \tag{37}$$
Substituting (28) into (37), we have
$$P^*_{k|k} = P^*_{k|k-1} - K^\lambda_k F^{zx^*}_k P^*_{k|k-1} = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1}. \tag{38}$$
The last line of Formula (38) is exactly the same as (30), which implies that (29) and (30) are algebraically equivalent.
Next, we prove the algebraic equivalence of Formulas (30) and (31). Substituting (28) into (31), we have
$$P^*_{k|k} = P^*_{k|k-1} - \lambda_k^{-1} K^\lambda_k R^\lambda_{e,k} (R^\lambda_{e,k})^{-1} F^{zx^*}_k P^*_{k|k-1} \lambda_k = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1}. \tag{39}$$
The last line of Formula (39) is also exactly the same as (30), which implies that (30) and (31) are algebraically equivalent.
Finally, we want to verify (32). To this end, we add and subtract the term $K^\lambda_k Q^{zz}_k (K^\lambda_k)^T$ on the right-hand side of (30), i.e.,
$$P^*_{k|k} = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} + K^\lambda_k Q^{zz}_k (K^\lambda_k)^T - K^\lambda_k Q^{zz}_k (K^\lambda_k)^T. \tag{40}$$
In addition, by substituting (30) into (27), $K^\lambda_k$ can be rewritten as
$$K^\lambda_k = \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1}. \tag{41}$$
Then, (32) can be derived by substituting (41) into (40) as follows:
$$\begin{aligned} P^*_{k|k} &= \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} + K^\lambda_k Q^{zz}_k (K^\lambda_k)^T - \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} \lambda_k (F^{zx^*}_k)^T (Q^{zz}_k)^{-1} Q^{zz}_k (K^\lambda_k)^T \\ &= \left( I - K^\lambda_k F^{zx^*}_k \right) \left( P^*_{k|k-1} - P^*_{k|k-1} \lambda_k (F^{zx^*}_k)^T (K^\lambda_k)^T \right) + K^\lambda_k Q^{zz}_k (K^\lambda_k)^T \\ &= \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1} \left( I - \lambda_k K^\lambda_k F^{zx^*}_k \right)^T + K^\lambda_k Q^{zz}_k (K^\lambda_k)^T. \end{aligned} \tag{42}$$
Hence, Formulas (29)–(32) of the covariance matrix $P^*_{k|k}$ are algebraically equivalent. □
We can use (28) instead of (23) to compute the gain matrix $K^\lambda_k$, since the former requires only one $n_z \times n_z$ matrix inversion. This reduces the computational cost and improves the numerical stability.

3.2. Square-Root MCTKF

The MCTKF, in general, performs well for stochastic systems with correlated and non-Gaussian noise. However, it may suffer from the influence of round-off errors, which is an important issue in practice [38]. Studies have shown that the square-root filtering technique is an effective strategy for enhancing the numerical stability of filtering algorithms, and can significantly reduce the influence of round-off errors. The key idea is that a square-root factor of the covariance matrix, instead of the full matrix, is propagated at each time step. In this section, a square-root implementation of the MCTKF is provided to reduce the influence of round-off errors.
Cholesky decomposition is the most commonly used approach in square-root filtering algorithms. An important reason for this is that for a symmetric positive definite matrix, the Cholesky factor exists and is unique. Even if the matrix is only positive semi-definite, a Cholesky factor still exists, but it is not unique [47]. More exactly, for a symmetric positive definite matrix $A$, the Cholesky decomposition gives the expression $A = A^{T/2} A^{1/2}$, where the factor $A^{1/2}$ has a triangular form with positive diagonal elements. Triangular forms are preferred in most engineering applications. For descriptions of some other square-root filtering variants, readers can refer to [48].
In this section, we employ the Cholesky decomposition to provide a square-root implementation of the MCTKF. Instead of the full covariance matrices $P^*_{k|k-1}$ and $P^*_{k|k}$, the corresponding Cholesky factors $(P^*_{k|k-1})^{1/2}$ and $(P^*_{k|k})^{1/2}$ are calculated at each time step. Throughout this paper, the factor $A^{1/2}$ is specified as an upper triangular matrix for the Cholesky decomposition $A = A^{T/2} A^{1/2}$. It should be noted that the square-root filtering technique is not completely free of round-off errors, but the influence of round-off errors is reduced in the following two respects [38,49]: (i) the product $A^{T/2} A^{1/2}$ can never be negative definite, even in the presence of round-off errors, while round-off errors may otherwise lead to negative covariance matrices; and (ii) the numerical conditioning of $A^{1/2}$ is usually much better than that of $A$, as the condition number of matrix $A$ is $C(A) = C(A^{T/2} A^{1/2}) = [C(A^{1/2})]^2$. This means that the square-root implementation can yield twice the effective precision of the conventional filter in ill-conditioned problems [49].
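The conditioning claim is easy to check numerically; the small sketch below (ours, for illustration only) demonstrates $C(A) = [C(A^{1/2})]^2$ on an ill-conditioned covariance.

```python
import numpy as np

# Condition number of a covariance vs. that of its Cholesky factor:
# cond(A) = cond(A^{T/2} A^{1/2}) = cond(A^{1/2})**2.
A = np.diag([1.0, 1e-8])        # an ill-conditioned covariance
S = np.linalg.cholesky(A).T     # upper-triangular factor, A = S.T @ S
print(np.linalg.cond(A))        # ~1e8
print(np.linalg.cond(S)**2)     # ~1e8 as well
```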
In addition, modern square-root filtering techniques apply a QR factorization at each time step to calculate the corresponding Cholesky factors. More precisely, first, a pre-array $A$ is constructed according to the model parameters of the stochastic system. Next, an orthogonal operator $Q$ is applied to the pre-array to obtain an upper (or lower) triangular post-array $B$, i.e., $QA = B$. Finally, the Cholesky factor can simply be extracted from the post-array.
Taking into account that the inflation parameter λ k is a scalar value, we design a square-root implementation of the MCTKF using QR decomposition. It is referred to as a square-root MCTKF, and is summarized in Proposition 3.
Proposition 3
(Square-Root Maximum Correntropy Triplet Kalman Filter). Consider the linear triplet Markov system (3) with nominal Gaussian parameters (4), operating in the presence of non-Gaussian noise. Then, the state estimate $\hat{x}^*_{k|k}$ and the factor $(P^*_{k|k})^{1/2}$ can be computed from $\hat{x}^*_{k-1|k-1}$ and $(P^*_{k-1|k-1})^{1/2}$ via the following equations:
Cholesky Decomposition: Find square roots
1. $P^*_0 = (P^*_0)^{T/2} (P^*_0)^{1/2}$, $Q^{zz}_{k-1} = (Q^{zz}_{k-1})^{T/2} (Q^{zz}_{k-1})^{1/2}$.
Initialization: Set initial values and constants
2. $\hat{x}^*_{0|0} = \bar{x}^*_0$, $(P^*_{0|0})^{1/2} = (P^*_0)^{1/2}$,
3. $\hat{F}^{x^*x^*}_k = F^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zx^*}_k$,
4. $\hat{F}^{x^*z}_k = F^{x^*z}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} F^{zz}_k$,
5. $\hat{Q}^{x^*x^*}_k = Q^{x^*x^*}_k - Q^{x^*z}_k (Q^{zz}_k)^{-1} Q^{zx^*}_k$,
6. Find the square root $\hat{Q}^{x^*x^*}_k = (\hat{Q}^{x^*x^*}_k)^{T/2} (\hat{Q}^{x^*x^*}_k)^{1/2}$.
Prediction: Compute the predicted state $\hat{x}^*_{k|k-1}$ and $(P^*_{k|k-1})^{1/2}$ by
7. $\hat{x}^*_{k|k-1} = \hat{F}^{x^*x^*}_{k-1} \hat{x}^*_{k-1|k-1} + Q^{x^*z}_{k-1} (Q^{zz}_{k-1})^{-1} z_{k-1}$,
8. Form the pre-array $A^p_k = \begin{bmatrix} (P^*_{k-1|k-1})^{1/2} (\hat{F}^{x^*x^*}_{k-1})^T \\ (\hat{Q}^{x^*x^*}_{k-1})^{1/2} \end{bmatrix}$,
9. Find the post-array $B^p_k = Q^p_k A^p_k = \begin{bmatrix} (P^*_{k|k-1})^{1/2} \\ 0 \end{bmatrix}$.
Update: Compute the updated state $\hat{x}^*_{k|k}$ and $(P^*_{k|k})^{1/2}$ by
10. $e_k = z_k - F^{zx^*}_k \hat{x}^*_{k|k-1} - F^{zz}_k z_{k-1}$,
11. $\lambda_k = G_{\sigma_k}\big( \| e_k \|_{(Q^{zz}_k)^{-1}} \big)$,
12. Form the pre-array $A^u_k = \begin{bmatrix} (Q^{zz}_k)^{1/2} & 0 \\ \lambda_k^{1/2} (P^*_{k|k-1})^{1/2} (F^{zx^*}_k)^T & (P^*_{k|k-1})^{1/2} \end{bmatrix}$,
13. Find the post-array $B^u_k = Q^u_k A^u_k = \begin{bmatrix} (R^\lambda_{e,k})^{1/2} & (\bar{K}^\lambda_k)^T \\ 0 & (P^*_{k|k})^{1/2} \end{bmatrix}$,
14. $\hat{x}^*_{k|k} = \hat{x}^*_{k|k-1} + \lambda_k^{1/2} \bar{K}^\lambda_k (R^\lambda_{e,k})^{-T/2} e_k$.
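As a minimal sketch of the update (steps 10–14), the following NumPy code uses np.linalg.qr as the orthogonal operator $Q^u_k$; all names are ours. QR determines the post-array only up to the signs of its rows, which leaves every product of the form $B^T B$ used by the filter unchanged.

```python
import numpy as np

def sr_mctkf_update(x_pred, sqP, z_prev, z, Fzx, Fzz, Qzz_inv, sqQzz, lam):
    """Square-root MCTKF update (steps 10-14 of Proposition 3).
    sqP and sqQzz are Cholesky factors in the A = A^{T/2} A^{1/2} sense."""
    nx, nz = len(x_pred), len(z)
    # Step 12: form the pre-array.
    A = np.zeros((nx + nz, nx + nz))
    A[:nz, :nz] = sqQzz
    A[nz:, :nz] = np.sqrt(lam) * sqP @ Fzx.T
    A[nz:, nz:] = sqP
    # Step 13: QR gives the upper-triangular post-array R with R^T R = A^T A
    # (row signs are irrelevant since only R^T R matters).
    R = np.linalg.qr(A, mode='r')
    sqRe = R[:nz, :nz]       # (R_{e,k}^lambda)^{1/2}
    Kbar = R[:nz, nz:].T     # normalized gain K-bar
    sqP_upd = R[nz:, nz:]    # (P_{k|k}^*)^{1/2}
    # Step 14: x + sqrt(lam) * Kbar @ (R^{1/2})^{-T} e.
    e = z - Fzx @ x_pred - Fzz @ z_prev
    x_upd = x_pred + np.sqrt(lam) * Kbar @ np.linalg.solve(sqRe.T, e)
    return x_upd, sqP_upd
```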
Instead of the full covariance matrices $P^*_{k|k-1}$ and $P^*_{k|k}$, the Cholesky factors $(P^*_{k|k-1})^{1/2}$ and $(P^*_{k|k})^{1/2}$ are recursively calculated in the square-root MCTKF. In fact, the Cholesky decomposition is applied only once for covariance matrix factorization, i.e., $P^*_0 = (P^*_0)^{T/2} (P^*_0)^{1/2}$ in step 1 of Proposition 3. Stable orthogonal transformations should be applied as far as possible in the square-root algorithm. In Proposition 3, we utilize the QR decomposition, in which $Q$ can be any orthogonal transformation and the resulting post-array is an upper triangular matrix. Although the square-root algorithm is not free of round-off errors, it can significantly reduce their influence and is more numerically stable for ill-conditioned problems than the MCTKF.
In essence, the square-root MCTKF is algebraically equivalent to the conventional MCTKF. This can be easily proved by utilizing the properties of orthogonal matrices. In brief, $B^T B = A^T Q^T Q A = A^T A$ follows from $QA = B$. Then, the required formulas are obtained by comparing both sides of the resulting equality $A^T A = B^T B$. The proof of the algebraic equivalence of the square-root MCTKF and the conventional MCTKF is given below.
Proof. 
First, according to the equation in step 8 of the square-root MCTKF (Proposition 3), we have
$$\hat{F}^{x^*x^*}_{k-1} (P^*_{k-1|k-1})^{T/2} (P^*_{k-1|k-1})^{1/2} (\hat{F}^{x^*x^*}_{k-1})^T + (\hat{Q}^{x^*x^*}_{k-1})^{T/2} (\hat{Q}^{x^*x^*}_{k-1})^{1/2} = (P^*_{k|k-1})^{T/2} (P^*_{k|k-1})^{1/2}, \tag{43}$$
which is consistent with the equation in step 6 of the conventional MCTKF (Proposition 2).
Next, according to step 12 of Proposition 3, we have
$$\lambda_k F^{zx^*}_k (P^*_{k|k-1})^{T/2} (P^*_{k|k-1})^{1/2} (F^{zx^*}_k)^T + (Q^{zz}_k)^{T/2} (Q^{zz}_k)^{1/2} = (R^\lambda_{e,k})^{T/2} (R^\lambda_{e,k})^{1/2}, \tag{44}$$
$$\lambda_k^{1/2} (P^*_{k|k-1})^{T/2} (P^*_{k|k-1})^{1/2} (F^{zx^*}_k)^T = \bar{K}^\lambda_k (R^\lambda_{e,k})^{1/2}, \tag{45}$$
$$(P^*_{k|k})^{T/2} (P^*_{k|k})^{1/2} + \bar{K}^\lambda_k (\bar{K}^\lambda_k)^T = (P^*_{k|k-1})^{T/2} (P^*_{k|k-1})^{1/2}. \tag{46}$$
We have $R^\lambda_{e,k} = \lambda_k F^{zx^*}_k P^*_{k|k-1} (F^{zx^*}_k)^T + Q^{zz}_k$ according to (44), as $\lambda_k$ is a scalar value. In addition, from (45) we have
$$\bar{K}^\lambda_k = \lambda_k^{1/2} P^*_{k|k-1} (F^{zx^*}_k)^T (R^\lambda_{e,k})^{-1/2}, \tag{47}$$
which can be regarded as a "normalized" gain matrix. According to (28), the relationship between $K^\lambda_k$ in step 10 of the MCTKF and its normalized form $\bar{K}^\lambda_k$ in step 13 of the square-root MCTKF is as follows:
$$K^\lambda_k = \lambda_k^{1/2} \bar{K}^\lambda_k (R^\lambda_{e,k})^{-T/2}. \tag{48}$$
Therefore, according to step 11 of the MCTKF, the state estimate $\hat{x}^*_{k|k}$ can be obtained by
$$\hat{x}^*_{k|k} = \hat{x}^*_{k|k-1} + K^\lambda_k e_k = \hat{x}^*_{k|k-1} + \lambda_k^{1/2} \bar{K}^\lambda_k (R^\lambda_{e,k})^{-T/2} e_k. \tag{49}$$
The last line of (49) is consistent with the equation in step 14 of the square-root MCTKF.
Finally, taking into account that $\lambda_k$ is a scalar value and that the covariance matrices are symmetric, from (45) and (46) we have
$$\begin{aligned} P^*_{k|k} &= P^*_{k|k-1} - \bar{K}^\lambda_k (\bar{K}^\lambda_k)^T \\ &= P^*_{k|k-1} - \lambda_k^{1/2} P^*_{k|k-1} (F^{zx^*}_k)^T (R^\lambda_{e,k})^{-1/2} (R^\lambda_{e,k})^{-T/2} F^{zx^*}_k P^*_{k|k-1} \lambda_k^{1/2} \\ &= P^*_{k|k-1} - \lambda_k P^*_{k|k-1} (F^{zx^*}_k)^T (R^\lambda_{e,k})^{-1} F^{zx^*}_k P^*_{k|k-1} \\ &= \left( I - K^\lambda_k F^{zx^*}_k \right) P^*_{k|k-1}. \end{aligned} \tag{50}$$
The last line of (50) is exactly the same as the equation in step 12 of the MCTKF.
Hence, the square-root MCTKF (Proposition 3) and the conventional MCTKF (Proposition 2) are algebraically equivalent. □
From Table 3, the computational complexity of the square-root MCTKF is
$$S_{\mathrm{SMCTKF}} = 8 n_{x^*}^3 + 8 n_{x^*}^2 n_z + n_{x^*}^2 + 10 n_{x^*} n_z^2 + 7 n_{x^*} n_z + 2 n_z^3 + 3 n_z^2 + n_z + 3\, O(n_z^3). \tag{51}$$
Note that in steps 9 and 13 of Proposition 3, the square-root MCTKF simply extracts the triangular blocks of the post-arrays, which does not involve floating-point operations.

4. Applications

This section focuses on how to use the TMC model to formulate common noise-dependent dynamic systems. For mathematical convenience, it is usually assumed that the process noise and the measurement noise of a dynamic system are white noise and independent of each other. However, the independence assumption does not always hold in practice. For example, in some dynamic systems, the process noise may be autocorrelated, or the measurement noise may be autocorrelated, or the process noise and the measurement noise may be cross-correlated. These common noise-dependent dynamic systems can be described using the TMC model.
Consider a linear HMC model as follows:
$$x_{k+1} = F_k x_k + w_k, \qquad z_k = H_k x_k + v_k, \tag{52}$$
where $x_k$ and $z_k$ are the hidden state and the measurement, respectively; $F_k$ and $H_k$ are the transition matrix and the measurement matrix, respectively; and $w_k$ and $v_k$ are the process noise and the measurement noise, respectively.
(1) Autocorrelated Process Noise
Consider the case where, in (52), the process noise is a Markov chain (MC) process, i.e., $w_{k+1} = A^w_k w_k + \xi^w_k$, in which $\xi^w_k$ is zero-mean white noise, while the measurement noise remains zero-mean white noise. In this case, the state $x_k$ is no longer an MC process, but $(x_k, w_k)$ is. Setting $r_k = w_k$, the system can be reformulated as a TMC model as follows:
$$\begin{bmatrix} x_{k+1} \\ w_{k+1} \\ z_k \end{bmatrix} = \begin{bmatrix} F_k & I_{n_x \times n_w} & 0_{n_x \times n_z} \\ 0_{n_w \times n_x} & A^w_k & 0_{n_w \times n_z} \\ H_k & 0_{n_z \times n_w} & 0_{n_z \times n_z} \end{bmatrix} \begin{bmatrix} x_k \\ w_k \\ z_{k-1} \end{bmatrix} + \begin{bmatrix} 0_{n_x \times 1} \\ \xi^w_k \\ v_k \end{bmatrix}, \tag{53}$$
where $\xi^r_k = \xi^w_k$ and $\xi^z_k = v_k$ are independent.
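As an illustration of case (1), the sketch below (ours; the example matrices are hypothetical) assembles the TMC transition matrix of (53) from the HMC matrices $F_k$, $H_k$ and the AR coefficient matrix $A^w_k$.

```python
import numpy as np

def tmc_from_ar_process_noise(F, H, Aw):
    """Build the TMC transition matrix of (53) for model (52) whose process
    noise follows w_{k+1} = Aw @ w_k + xi_w (auxiliary variable r_k = w_k)."""
    nx, nw, nz = F.shape[0], Aw.shape[0], H.shape[0]
    top = np.hstack([F, np.eye(nx, nw), np.zeros((nx, nz))])
    mid = np.hstack([np.zeros((nw, nx)), Aw, np.zeros((nw, nz))])
    bot = np.hstack([H, np.zeros((nz, nw)), np.zeros((nz, nz))])
    return np.vstack([top, mid, bot])

# Example: a constant-velocity model with position measurements (T = 1)
# and a hypothetical AR(1) coefficient matrix for the process noise.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Aw = 0.8 * np.eye(2)
print(tmc_from_ar_process_noise(F, H, Aw))   # the 5x5 matrix of (53)
```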
(2) Autocorrelated Measurement Noise
Consider the case where, in (52), the process noise remains zero-mean white noise, but the measurement noise is an MC process, i.e., $v_{k+1} = A^v_k v_k + \xi^v_k$, in which $\xi^v_k$ is zero-mean white noise. Setting $r_k = v_k$, the system can be reformulated as a TMC model as follows:
$$\begin{bmatrix} x_{k+1} \\ v_{k+1} \\ z_k \end{bmatrix} = \begin{bmatrix} F_k & 0_{n_x \times n_v} & 0_{n_x \times n_z} \\ 0_{n_v \times n_x} & A^v_k & 0_{n_v \times n_z} \\ H_k & I_{n_z \times n_v} & 0_{n_z \times n_z} \end{bmatrix} \begin{bmatrix} x_k \\ v_k \\ z_{k-1} \end{bmatrix} + \begin{bmatrix} w_k \\ \xi^v_k \\ 0_{n_z \times 1} \end{bmatrix}, \tag{54}$$
where $\xi^x_k = w_k$ and $\xi^r_k = \xi^v_k$ are independent.
(3) Autocorrelated Process Noise and Measurement Noise
Consider the case where, in (52), both the process noise and the measurement noise are MC processes, i.e., $w_{k+1} = A^w_k w_k + \xi^w_k$ and $v_{k+1} = A^v_k v_k + \xi^v_k$, in which $\xi^w_k$ and $\xi^v_k$ are zero-mean white noise and independent of each other. Setting $r_k = [w_k^T, v_k^T]^T$, the system can be reformulated as a TMC model as follows:
$$\begin{bmatrix} x_{k+1} \\ w_{k+1} \\ v_{k+1} \\ z_k \end{bmatrix} = \begin{bmatrix} F_k & I_{n_x \times n_w} & 0_{n_x \times n_v} & 0_{n_x \times n_z} \\ 0_{n_w \times n_x} & A^w_k & 0_{n_w \times n_v} & 0_{n_w \times n_z} \\ 0_{n_v \times n_x} & 0_{n_v \times n_w} & A^v_k & 0_{n_v \times n_z} \\ H_k & 0_{n_z \times n_w} & I_{n_z \times n_v} & 0_{n_z \times n_z} \end{bmatrix} \begin{bmatrix} x_k \\ w_k \\ v_k \\ z_{k-1} \end{bmatrix} + \begin{bmatrix} 0_{n_x \times 1} \\ \xi^w_k \\ \xi^v_k \\ 0_{n_z \times 1} \end{bmatrix}, \tag{55}$$
where $\xi^r_k = [(\xi^w_k)^T, (\xi^v_k)^T]^T$.
(4) Cross-correlated Process Noise and Measurement Noise
Consider the case where, in (52), the process noise and the measurement noise are cross-correlated, i.e., $\mathrm{E}[w_k v_k^T] = S_k \neq 0$. In this case, the auxiliary variable $r_k$ is dropped, and the system can be reformulated as a PMC model as follows:
$$\begin{bmatrix} x_{k+1} \\ z_k \end{bmatrix} = \begin{bmatrix} F_k & 0_{n_x \times n_z} \\ H_k & 0_{n_z \times n_z} \end{bmatrix} \begin{bmatrix} x_k \\ z_{k-1} \end{bmatrix} + \begin{bmatrix} w_k \\ v_k \end{bmatrix}. \tag{56}$$
The PMC model is a particular form of the TMC model in which the auxiliary variable is dropped and $x^*_k = x_k$ in (3).

5. Results and Analysis

In this section, three illustrative examples are provided to demonstrate the effectiveness of the proposed algorithms. First (Section 5.1), a single target-tracking example with correlated and non-Gaussian noise is considered to verify the effectiveness of the MCTKF. Second (Section 5.2), a linear non-Gaussian TMC example with a round-off error is provided to verify the effectiveness of the square-root MCTKF. Third (Section 5.3), a nonlinear bearing-only multi-tracking example is given to verify the effectiveness of the nonlinear extension of the MCTKF.
Simulations are carried out using Matlab R2018a on a PC with the following specifications: Intel(R) Core(TM) i7-7700 CPU at 3.6 GHz, 16.0 GB RAM.

5.1. Single Target-Tracking Example with Correlated and Non-Gaussian Noise

It has been found that the TMC model is suitable for applications with correlated noise [20]. Let us consider a typical linear HMC model, in which the process noise is assumed to be a Markov process, i.e.,
$$x_{k+1} = F_k x_k + G_k u_k, \qquad u_{k+1} = B u_k + \varsigma_k, \qquad z_k = H_k x_k + v_k, \tag{57}$$
where the process noise $u_k$ is assumed to be a Markov process, and the noise processes $\varsigma_k$ and $v_k$ are zero-mean white noise, independent of the initial state $x_0 \sim \mathcal{N}(\hat{x}_0, P_0)$. The corresponding covariance matrices are defined by $Q_k = \mathrm{E}[\varsigma_k \varsigma_k^T]$, $R_k = \mathrm{E}[v_k v_k^T]$, and $\mathrm{E}[\varsigma_k v_k^T] = 0$. In (57), $u_k$ acts as an error source and is a Markov process excited by the white driving noise $\varsigma_k$. This implies that the whiteness assumption on the process noise is no longer satisfied. Thus, the standard Kalman filter is inappropriate for this system. However, (57) can be directly converted into a linear TMC model of the form (2), i.e.,
$$\underbrace{\begin{bmatrix} x_{k+1} \\ u_{k+1} \\ z_k \end{bmatrix}}_{\zeta_{k+1}} = \underbrace{\begin{bmatrix} F_k & G_k & 0_{n_x \times n_z} \\ 0_{n_u \times n_x} & B_k & 0_{n_u \times n_z} \\ H_k & 0_{n_z \times n_u} & 0_{n_z \times n_z} \end{bmatrix}}_{F_k} \begin{bmatrix} x_k \\ u_k \\ z_{k-1} \end{bmatrix} + \underbrace{\begin{bmatrix} 0_{n_x \times 1} \\ \varsigma_k \\ v_k \end{bmatrix}}_{w_k}, \tag{58}$$
with
$$\mathrm{E}[w_k w_k^T] = \begin{bmatrix} 0_{n_x \times n_x} & 0_{n_x \times n_u} & 0_{n_x \times n_z} \\ 0_{n_u \times n_x} & Q_k & 0_{n_u \times n_z} \\ 0_{n_z \times n_x} & 0_{n_z \times n_u} & R_k \end{bmatrix}. \tag{59}$$
Consider a typical two-state target-tracking problem, where the state x k = [ p k , p ˙ k ] T contains the position and velocity in the Cartesian coordinates with position measurements only [20]. The model parameters in (57) are set as follows:
$$F_k = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad G_k = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}, \quad H_k = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad B_k = 0.8, \tag{60}$$
where $T = 1$ s is the sampling period. The initial state $x^*_0 = [x_0^T, u_0^T]^T$ is Gaussian with $x^*_0 \sim \mathcal{N}(\hat{x}^*_0, P^*_0)$, where $\hat{x}^*_0 = [200, 0.5, 0.2]^T$ and $P^*_0 = 100 I_3$. To verify the effectiveness of the proposed algorithms, two cases are considered for the system in the presence of non-Gaussian noise.
Case 1: Both ς k and v k are Gaussian noise disturbed by shot noise, and the shot noise occurs with a probability of p shot = 0.1 , i.e.,
$$\varsigma_k = \mathcal{N}(0, Q_k) + \text{shot noise}, \qquad v_k = \mathcal{N}(0, R_k) + \text{shot noise}, \tag{61}$$
where $Q_k = 10^2\ (\mathrm{m/s^2})^2$, $R_k = 0.1^2\ \mathrm{m}^2$, and the shot noise components of $\varsigma_k$ and $v_k$ are generated by randi([5,10]) and randi([10,20]), respectively. The symbol randi([a,b]) is a Matlab instruction that returns an integer drawn uniformly at random from the discrete interval [a,b].
Case 2: Both ς k and v k are Gaussian mixture noise, i.e.,
$$\varsigma_k = \alpha \mathcal{N}(0, Q_1) + (1 - \alpha) \mathcal{N}(0, Q_2), \qquad v_k = \alpha \mathcal{N}(0, R_1) + (1 - \alpha) \mathcal{N}(0, R_2), \tag{62}$$
where $Q_1 = 10^2\ (\mathrm{m/s^2})^2$, $Q_2 = 3^2 Q_1$, $R_1 = 0.1^2\ \mathrm{m}^2$, $R_2 = 100^2 R_1$, and $\alpha = 0.9$.
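For reference, the two noise cases can be simulated as follows; this is our NumPy rendering of the stated Matlab setup (the seed and function names are ours).

```python
import numpy as np
rng = np.random.default_rng(0)

def shot_noise_sample(q, lo, hi, p_shot=0.1):
    """Case 1: Gaussian noise plus, with probability p_shot, an integer
    shot amplitude drawn as in Matlab's randi([lo, hi])."""
    s = rng.normal(0.0, np.sqrt(q))
    if rng.random() < p_shot:
        s += rng.integers(lo, hi + 1)   # randi is inclusive on both ends
    return s

def mixture_noise_sample(q1, q2, alpha=0.9):
    """Case 2: two-component Gaussian mixture alpha*N(0,q1)+(1-alpha)*N(0,q2)."""
    q = q1 if rng.random() < alpha else q2
    return rng.normal(0.0, np.sqrt(q))

sk = shot_noise_sample(10**2, 5, 10)          # process noise, Case 1
vk = shot_noise_sample(0.1**2, 10, 20)        # measurement noise, Case 1
ck = mixture_noise_sample(10**2, (3*10)**2)   # process noise, Case 2
```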
The following filtering algorithms are tested for a comparative study: (1) the Kalman filter (KF), (2) the triplet Kalman filter (TKF), (3) the triplet Student's t-filter (TTF) [5], (4) the proposed maximum correntropy TKF (MCTKF), and (5) its square-root implementation (square-root MCTKF). In the TTF, the noise is assumed to follow a Student's t-distribution, i.e., $\xi_k \sim \mathcal{T}(u, \Sigma, \nu)$, where the mean vector is $u = 0$, the scale matrix is $\Sigma_k = \frac{\nu - 2}{\nu} \mathrm{E}[w_k w_k^T]$ (see Equation (59)), and the degrees of freedom is $\nu = 5$. To compare the performance of the filters, the root mean square error (RMSE) is used as a metric, i.e.,
$$\mathrm{RMSE}_{x_{k,i}} = \sqrt{ \frac{1}{M} \sum_{j=1}^{M} \left( x^j_{k,i} - \hat{x}^j_{k,i} \right)^2 }, \tag{63}$$
where $x_{k,i}$ denotes the $i$th component of the state vector $x_k$ at time $k$, $M$ is the total number of Monte Carlo trials, and $x^j_{k,i}$ and $\hat{x}^j_{k,i}$ are the $i$th element of the true state vector and of its estimate at time $k$ in the $j$th Monte Carlo trial, respectively. We performed $M = 500$ Monte Carlo trials.
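Over a batch of Monte Carlo trials, (63) amounts to one line of NumPy (our sketch; the array layout is an assumption):

```python
import numpy as np

def rmse(x_true, x_est):
    """RMSE (63) per time step and state component; inputs are shaped
    (M trials, K time steps, n state components)."""
    return np.sqrt(np.mean((x_true - x_est) ** 2, axis=0))
```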
The true velocity of the target and the results of its estimation in a single trial are given in Figure 1, which shows that the proposed MCTKF and its square-root form perform better than the other filters. It should be noted that all filters produce similar estimates of the target position. One possible reason for this is that the target position is directly observed. In addition, the velocity RMSEs over 500 Monte Carlo trials are shown in Figure 2. The MCTKF and its square-root form have identical velocity RMSEs, which are smaller than those of the TKF and KF. This indicates that the MCTKF and its square-root implementation are algebraically equivalent and can effectively deal with the state estimation problem of stochastic systems with correlated and non-Gaussian noise. The estimation performance of the TKF degrades when the system is disturbed by heavy-tailed noise. The KF has the worst estimation performance, because the independence and Gaussian assumptions on the process noise and measurement noise required by the KF are not satisfied in this system. The TTF is also effective for non-Gaussian noise and even performs better than the MCTKF most of the time. This is because the Student's t-distribution can describe a system with non-Gaussian noise more accurately than the Gaussian distribution, although it does not admit an analytical solution. However, the TTF exhibits poorer robustness than the MCTKF. As shown in Figure 1 and Figure 2, the former has a large estimation bias at some moments. A possible reason for this is that for large outliers, the TTF relies more on the measurements, while the MCTKF can reduce the impact of outliers on the state estimate via the parameter $\lambda_k$. Therefore, there is a trade-off between estimation accuracy and robustness when choosing which method to use.
Note that the difference between the TKF and the MCTKF in Figure 2 is more significant than in Figure 1. The main reason for this is the difference in the y-axis range between the two figures: a smaller range makes the difference appear more significant. In addition, Figure 1 shows the result of a single trial, which has a certain degree of randomness, while Figure 2 shows the statistical result of 500 Monte Carlo trials. This may also contribute to the difference.
We further study the effect of the kernel size $\sigma_k$ on the proposed MCTKF. To this end, the MCTKF with a fixed $\sigma_k$ (MCTKF-fixed) is tested. The averaged velocity RMSEs of the MCTKF-fixed with different kernel sizes are shown in Figure 3. The results suggest that the kernel size plays a significant role in the accuracy of the MCTKF. When the kernel size is too small, the estimation accuracy of the MCTKF-fixed deteriorates severely and may even diverge. In contrast, the MCTKF-fixed reduces to the conventional TKF as $\sigma_k \to \infty$. Indeed, within a certain range, i.e., $\sigma_1 \leq \sigma_k \leq \sigma_2$, the MCTKF-fixed performs better than the MCTKF. However, using strategy (10) for the kernel size $\sigma_k$, the MCTKF performs better than the TKF, even though this strategy cannot obtain the optimal value. Therefore, the selection of the optimal kernel size is a very important issue in correntropy-based filters. We will focus on this issue in future research.

5.2. Linear TMC Example with Round-off Error

The square-root implementation of the MCTKF aims to improve the numerical stability of the filtering algorithm and reduce the influence of round-off errors. To further verify the robustness of the square-root MCTKF, consider the model (3) with (4) with the following parameters:
$$F_k = \begin{bmatrix} F^{x^*x^*}_k & F^{x^*z}_k \\ F^{zx^*}_k & F^{zz}_k \end{bmatrix} = \begin{bmatrix} 0.9 & 0 & 0 & \delta & 0 \\ 0 & 0.9 & 0 & 0 & \delta \\ 0 & 0 & 0.9 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 + \delta & 0 & 1 \end{bmatrix}, \tag{64}$$
$$Q_k = \begin{bmatrix} Q^{x^*x^*}_k & Q^{x^*z}_k \\ Q^{zx^*}_k & Q^{zz}_k \end{bmatrix} = \begin{bmatrix} Q^{x^*x^*}_k & 0_{3 \times 2} \\ 0_{2 \times 3} & \delta^2 I_2 \end{bmatrix}, \tag{65}$$
where the parameter $\delta$ is utilized to simulate round-off. We assume that $\delta^2 < \varepsilon_{\text{round-off}}$ but $\delta > \varepsilon_{\text{round-off}}$, where $\varepsilon_{\text{round-off}}$ denotes the unit round-off error (computer round-off for floating-point arithmetic is often characterized by a single parameter $\varepsilon_{\text{round-off}}$, defined in different sources as the largest number such that either $1 + \varepsilon_{\text{round-off}} = 1$ or $1 + \varepsilon_{\text{round-off}}/2 = 1$ in machine precision [38]). Two cases are considered for the system in the presence of non-Gaussian noise.
Case 1: The process noise $w^{x^*}_k$ is Gaussian noise disturbed by shot noise, and the shot noise occurs with a probability of $p_{\text{shot}} = 0.1$, i.e.,
$$w^{x^*}_k = \mathcal{N}(0, Q_k) + \text{shot noise}, \tag{66}$$
where $Q_k = I_3$ and the shot noise is generated by randi([10,20]), as defined in Section 5.1.
Case 2: The process noise $w^{x^*}_k$ is Gaussian mixture noise given by
$$w^{x^*}_k = \alpha \mathcal{N}(0, Q_1) + (1 - \alpha) \mathcal{N}(0, Q_2), \tag{67}$$
where $Q_1 = I_3$, $Q_2 = 10^2 Q_1$, and $\alpha = 0.9$.
The MCTKF and the square-root MCTKF are tested for a comparative study. We conduct 100 Monte Carlo trials for various ill-conditioned parameter values $\delta$. For the MCTKF, the source of the difficulty lies in the inversion of the matrix $R^\lambda_{e,k}$. More precisely, even though the rank of the observation matrix is $n_z$ (i.e., $n_z = 2$), the matrix $R^\lambda_{e,k}$ becomes severely ill-conditioned as $\delta \to \varepsilon_{\text{round-off}}$, i.e., as it approaches the machine precision limit. The averaged RMSEs and the CPU time of a single trial are shown in Table 4.
By analyzing the results given in Table 4, we can explore the numerical behavior of each filter as the ill-conditioning grows. Specifically, when $\delta = 10^{-6}$, both filters perform well and have the same RMSE results. This further indicates that the square-root MCTKF is algebraically equivalent to the conventional MCTKF. As the parameter $\delta$ decreases, the RMSE results of the two filtering algorithms are no longer the same. The smaller RMSE results indicate that the square-root MCTKF is more numerically stable than the MCTKF. In particular, when $\delta = 10^{-8}$ and $\delta = 10^{-9}$, the MCTKF diverges, but its square-root form still performs well. In addition, consistent with the computational complexity comparison, the averaged CPU time of the square-root MCTKF is about 1/3 more than that of the MCTKF.
The above analysis shows that the square-root MCTKF is more numerically stable than the conventional MCTKF, and can significantly reduce the influence of round-off errors.

5.3. Nonlinear Bearing-Only Multi-Target Tracking Example in the Presence of Non-Gaussian Noise

In this section, a bearing-only multi-target tracking example is provided to verify the effectiveness of the proposed strategy in dealing with correlated non-Gaussian noise. In multi-target tracking, data association is the main difficulty due to the uncertainties in target birth and death, clutter, and missed detections. Interestingly, random finite set-based multi-target filtering methods [50] deal with the multi-target tracking problem from the perspective of set-valued estimation, avoiding the data association process. Therefore, we use the multi-Bernoulli (MB) filtering framework [43] to address the multi-target tracking problem, in which, due to the nonlinear bearing measurements, the extended Kalman filter (EKF), the extended triplet Kalman filter (ETKF), and the maximum correntropy extended triplet Kalman filter (MCETKF) are tested.
There are a total of 10 targets in the surveillance region $[-2000, 2000]\ \mathrm{m} \times [-2000, 2000]\ \mathrm{m}$. Each target moves at an approximately constant turn rate, but the turn rate $\omega_k$ is time-varying. Let the target state be $x_k = [p_{x,k}, \dot{p}_{x,k}, p_{y,k}, \dot{p}_{y,k}]^T$, where $(p_{x,k}, p_{y,k})$ and $(\dot{p}_{x,k}, \dot{p}_{y,k})$ are the target position and velocity, respectively. Hence, the state transition model is
$$x_{k+1} = F(\omega_k) x_k + G w_k, \qquad \omega_{k+1} = \omega_k + T u_k,$$
where
$$F(\omega) = \begin{bmatrix} 1 & \dfrac{\sin \omega T}{\omega} & 0 & -\dfrac{1 - \cos \omega T}{\omega} \\ 0 & \cos \omega T & 0 & -\sin \omega T \\ 0 & \dfrac{1 - \cos \omega T}{\omega} & 1 & \dfrac{\sin \omega T}{\omega} \\ 0 & \sin \omega T & 0 & \cos \omega T \end{bmatrix}, \qquad G = \begin{bmatrix} \dfrac{T^2}{2} & 0 \\ T & 0 \\ 0 & \dfrac{T^2}{2} \\ 0 & T \end{bmatrix},$$
$w_k$ is the process noise, $u_k$ is the angular acceleration, and $T = 1$ s is the sampling time. Therefore, the transition model of the hidden state $x_k^* = [x_k^{\mathrm{T}}, \omega_k]^{\mathrm{T}}$ is
$$\underbrace{\begin{bmatrix} x_{k+1} \\ \omega_{k+1} \end{bmatrix}}_{x_{k+1}^*} = \underbrace{\begin{bmatrix} F(\omega_k) & 0_{n_x \times 1} \\ 0_{1 \times n_x} & 1 \end{bmatrix}}_{F_k^{x^*}} \underbrace{\begin{bmatrix} x_k \\ \omega_k \end{bmatrix}}_{x_k^*} + \underbrace{\begin{bmatrix} G w_k \\ T u_k \end{bmatrix}}_{\xi_k^{x^*}}.$$
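For readers implementing this model, a sketch of the augmented transition step follows (Python/NumPy; our own code built from the equations above, with a constant-velocity fallback as $\omega \to 0$ that the text does not spell out):

```python
import numpy as np

T = 1.0  # sampling time (s)

def F_ct(omega: float) -> np.ndarray:
    """Coordinated-turn transition matrix for the state [px, vx, py, vy]."""
    if abs(omega) < 1e-10:
        swt, cwt = T, 0.0  # limits of sin(wT)/w and (1 - cos(wT))/w as w -> 0
    else:
        swt = np.sin(omega * T) / omega
        cwt = (1.0 - np.cos(omega * T)) / omega
    return np.array([
        [1.0, swt,               0.0, -cwt],
        [0.0, np.cos(omega * T), 0.0, -np.sin(omega * T)],
        [0.0, cwt,               1.0, swt],
        [0.0, np.sin(omega * T), 0.0, np.cos(omega * T)],
    ])

G = np.array([[T**2 / 2, 0.0],
              [T,        0.0],
              [0.0,      T**2 / 2],
              [0.0,      T]])

def predict_state(x_aug: np.ndarray, w: np.ndarray, u: float) -> np.ndarray:
    """One step of the augmented model x* = [x; omega]."""
    x, omega = x_aug[:4], x_aug[4]
    x_next = F_ct(omega) @ x + G @ w
    return np.append(x_next, omega + T * u)
```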
Both $w_k$ and $u_k$ are white noise processes, independent of each other. They are assumed to be non-Gaussian and to follow t-distributions: $w_k \sim \mathcal{T}(0, \Sigma_w, \eta_w)$ and $u_k \sim \mathcal{T}(0, \Sigma_u, \eta_u)$, where $\mathcal{T}(\mu, \Sigma, \eta)$ denotes a t-distribution with mean $\mu$, scale matrix $\Sigma$, and degrees of freedom $\eta$; the t-distribution is the most commonly used heavy-tailed distribution. The parameters are set as follows: $\Sigma_w = \frac{\eta_w - 2}{\eta_w} \sigma_w^2 I_2$ and $\Sigma_u = \frac{\eta_u - 2}{\eta_u} \sigma_u^2$, where $\sigma_w = 5\,\text{m/s}^2$, $\sigma_u = \pi/180\,\text{rad/s}$, and $\eta_w = \eta_u = 4$.
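Such t-distributed noise can be drawn with the standard Gaussian/chi-square construction; a minimal sketch using the parameter values above (the helper function is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mvt(Sigma: np.ndarray, eta: float) -> np.ndarray:
    """Sample from T(0, Sigma, eta): a Gaussian scaled by sqrt(eta / chi2_eta)."""
    g = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma)
    return g * np.sqrt(eta / rng.chisquare(eta))

eta_w, sigma_w = 4.0, 5.0                               # dof and std (m/s^2)
Sigma_w = (eta_w - 2.0) / eta_w * sigma_w**2 * np.eye(2)
w_k = sample_mvt(Sigma_w, eta_w)  # covariance works out to sigma_w^2 * I_2
```

Note that the scale matrix is shrunk by $(\eta - 2)/\eta$ precisely so that the resulting covariance, $\frac{\eta}{\eta - 2} \Sigma$, equals the nominal $\sigma_w^2 I_2$.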
Two sensors are used to observe targets, and the measurement equation is
$$z_k = \begin{bmatrix} \arctan\!\left( (p_{x,k} - s_{x,k}^1) / (p_{y,k} - s_{y,k}^1) \right) \\ \arctan\!\left( (p_{x,k} - s_{x,k}^2) / (p_{y,k} - s_{y,k}^2) \right) \end{bmatrix} + v_k,$$
where $(s_{x,k}^i, s_{y,k}^i)$ denotes the position of the $i$th sensor, $i \in \{1, 2\}$, and $v_k \sim \mathcal{T}(0, \Sigma_v, \eta_v)$ is the measurement noise, with $\Sigma_v = \frac{\eta_v - 2}{\eta_v} R$, $R = \mathrm{diag}(\sigma_{\theta_1}^2, \sigma_{\theta_2}^2)$, $\sigma_{\theta_1} = \sigma_{\theta_2} = \pi/1800\,\text{rad}$, and $\eta_v = 4$. Outliers in the sensor measurements are also considered: at times $k = 25, 45, 65, 85$, the scale matrix of the measurement noise is $\Sigma_v$ with probability 0.5 and $10 \Sigma_v$ with probability 0.5. The total simulation time is 100 s. Each sensor moves at a constant velocity, and their trajectories, shown in Figure 4a, are set as follows (in MATLAB colon notation):
$$s_1 = \begin{bmatrix} -2000 : 10 : -1010 \\ -2000 : 5 : -1505 \end{bmatrix}, \qquad s_2 = \begin{bmatrix} 2000 : -5 : 1505 \\ 1000 : 5 : 1495 \end{bmatrix}.$$
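A sketch of the corresponding measurement function, using atan2 so the bearing (measured from the y-axis, as in the equation above) lands in the correct quadrant; the numeric positions in the usage line are placeholders of ours, not values from the scenario:

```python
import numpy as np

def bearing_measurement(px: float, py: float, sensors) -> np.ndarray:
    """Noise-free bearings from each sensor to the target, from the y-axis."""
    return np.array([np.arctan2(px - sx, py - sy) for sx, sy in sensors])

z_clean = bearing_measurement(500.0, -300.0,
                              [(-2000.0, -2000.0), (2000.0, 1000.0)])
```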
The detection probability of each target is $p_{D,k} = 0.98$. Clutter is modeled as a Poisson process with density $\lambda_c = 1.25 \times 10^{-7}\,\text{m}^{-2}$ over the surveillance region (i.e., an average of two clutter returns per scan). Note that bearing-only target tracking is sensitive to clutter, and a high clutter density leads to many false alarms; range-and-bearing tracking is better suited to scenes with higher clutter density.
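A minimal sketch of this clutter model (our own illustration; clutter points are drawn uniformly over the region in Cartesian coordinates and would then be converted to bearings per sensor):

```python
import numpy as np

rng = np.random.default_rng(0)

lam_c = 1.25e-7                      # clutter density (m^-2)
area = 4000.0 * 4000.0               # [-2000, 2000] x [-2000, 2000] region
n_c = rng.poisson(lam_c * area)      # Poisson-distributed count, mean ~2 per scan
clutter_xy = rng.uniform(-2000.0, 2000.0, size=(n_c, 2))
```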
Under the MB filtering framework [43], the EKF, the ETKF, and the MCETKF are tested. For the MB-ETKF and the MB-MCETKF, the birth process is a multi-Bernoulli random finite set with density $\pi_\Gamma = \{ (r_\Gamma^{(i)}, p_\Gamma^{*,(i)}) \}_{i=1}^{4}$, where the existence probabilities are $r_\Gamma^{(1)} = r_\Gamma^{(2)} = 0.02$ and $r_\Gamma^{(3)} = r_\Gamma^{(4)} = 0.03$, and the corresponding probability density functions are $p_\Gamma^{*,(i)}(x^*) = \mathcal{N}(x^*; m_\Gamma^{*,(i)}, P_\Gamma^*)$, $i \in \{1, 2, 3, 4\}$, with parameters
$$m_\Gamma^{*,1} = [1500, 0, 250, 0, 0]^{\mathrm{T}}, \quad m_\Gamma^{*,2} = [250, 0, 1000, 0, 0]^{\mathrm{T}}, \quad m_\Gamma^{*,3} = [250, 0, 750, 0, 0]^{\mathrm{T}}, \quad m_\Gamma^{*,4} = [1000, 0, 1500, 0, 0]^{\mathrm{T}}, \quad P_\Gamma^* = \mathrm{diag}\!\left([50, 50, 50, 50, \pi/30]^{\mathrm{T}}\right)^2.$$
The prior knowledge of the process noise is taken to be
$$\xi_k^{x^*} = \begin{bmatrix} G w_k \\ T u_k \end{bmatrix} \sim \mathcal{N}\!\left( 0, \begin{bmatrix} G \sigma_w^2 I_2 G^{\mathrm{T}} & 0 \\ 0 & \sigma_u^2 \end{bmatrix} \right),$$
and the prior knowledge of the measurement noise is taken to be $\xi_k^z = v_k \sim \mathcal{N}(0, R)$. For the MB-EKF, all parameters are the same as in the MB-ETKF and MB-MCETKF, except that the auxiliary turn-rate variable $\omega_k$ is ignored and the turn rate in $F(\omega)$ is fixed at $\omega = \pi/180\,\text{rad/s}$. The survival probability of a target is $p_{S,k} = 0.99$.
The true trajectories of the targets and sensors in a single trial are shown in Figure 4a. The target state estimates of the MB-EKF, the MB-ETKF, and the MB-MCETKF are given in Figure 4b–d, respectively. Intuitively, the MB-EKF and the MB-ETKF perform poorly because some targets are lost during tracking, such as targets 1 and 6 in Figure 4b and target 1 in Figure 4c. The main reason is that both filters are derived under the MMSE criterion with a Gaussian noise assumption, and the MMSE criterion uses only the second-order term of the innovation; they therefore easily mistake outliers originating from targets for false alarms, resulting in target loss. Moreover, several false-alarm estimates appear in Figure 4b,c, most likely because measurement outliers near a target state can degrade the state estimate, even though both filters have a certain robustness toward measurement outliers. In terms of details, a significant estimation bias gradually appears over time for targets 6 and 7 in Figure 4b, because the MB-EKF does not account for the time-varying turn rate of a target. The proposed MB-MCETKF shows better estimation performance than the other two filters: it suffers from neither target loss nor false alarms, and it accurately estimates the state of each target. There are two main reasons for this. First, the TMC model is more accurate, as it accounts for the turn rate of a target via an auxiliary variable. Second, the proposed algorithm adopts the correntropy criterion instead of the MMSE criterion, and shows better robustness toward outliers by utilizing higher-order information of the innovation.
Figure 5 shows the target number estimates of the different filters over 500 Monte Carlo trials. Initially, the target number estimate of the MB-ETKF is almost the same as that of the MB-EKF but worse than that of the MB-MCETKF. The main reason is again that the first two filters are derived under the MMSE criterion with a Gaussian noise assumption, use only the second-order term of the innovation, and easily mistake outliers for false alarms, leading to an underestimated target number. As time goes by, the target number estimate of the MB-EKF becomes worse than that of the MB-ETKF, because the MB-EKF does not model the turn rate, which yields a less accurate predicted target state and further reduces its robustness toward outliers. The target number estimate of the MB-MCETKF is more accurate than those of the other two filters, mainly because it is derived under the correntropy criterion, which utilizes not only second-order but also higher-order information of the innovation and is thus more resistant to outliers. Note that the MB-MCETKF can only reduce, not completely eliminate, the influence of outliers; large outliers may still be regarded as false alarms, making the estimated number of targets slightly lower than the true number.
To further compare the algorithms, the popular optimal subpattern assignment (OSPA) distance for multi-target tracking is used as a metric [43]; it accounts for both the number and the states of targets. Let $X = \{x_1, \ldots, x_m\}$ and $\hat{X} = \{\hat{x}_1, \ldots, \hat{x}_n\}$ be the true and estimated finite sets of targets, respectively. Let $d^c(x, \hat{x}) = \min(c, \| x - \hat{x} \|)$, where $c > 0$ is the truncation distance and $\| \cdot \|$ is the Euclidean norm, and let $\Pi_n$ denote the set of permutations of $\{1, \ldots, n\}$. For $p \geq 1$, the OSPA distance is defined by
$$\bar{d}_p^c(X, \hat{X}) = \left( \frac{1}{n} \left( \min_{\pi \in \Pi_n} \sum_{i=1}^{m} d^c(x_i, \hat{x}_{\pi(i)})^p + c^p (n - m) \right) \right)^{1/p}$$
if $m \leq n$, and $\bar{d}_p^c(X, \hat{X}) = \bar{d}_p^c(\hat{X}, X)$ if $m > n$.
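Because the minimization over permutations is an assignment problem, the OSPA distance is usually computed with the Hungarian algorithm rather than by enumerating $\Pi_n$; a compact sketch of the definition above (our own implementation, not code from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X: np.ndarray, Xhat: np.ndarray, c: float = 100.0, p: float = 2.0) -> float:
    """OSPA distance between target sets X (m x d) and Xhat (n x d)."""
    m, n = len(X), len(Xhat)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                     # the definition assumes m <= n
        X, Xhat, m, n = Xhat, X, n, m
    D = np.linalg.norm(X[:, None, :] - Xhat[None, :, :], axis=2)
    Dp = np.minimum(D, c) ** p                    # truncated distances, raised to p
    rows, cols = linear_sum_assignment(Dp)        # optimal assignment = min over pi
    cost = Dp[rows, cols].sum() + c**p * (n - m)  # plus the cardinality penalty
    return (cost / n) ** (1.0 / p)
```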
The OSPA distances (with $c = 100$ and $p = 2$) of the different filters over 500 Monte Carlo trials are shown in Figure 6. The smaller OSPA distance of the MB-MCETKF indicates that it performs better than the other two filters, since it estimates the states and number of targets more accurately, as shown in Figures 4 and 5, respectively. In the initial stage, the MB-EKF and the MB-ETKF have similar OSPA distances and similar target number estimates (Figure 5), which indicates that the MB-EKF can also accurately estimate the target states at first, even though it ignores the turn rate; this is because the initial turn rate is set to 0 rad/s in the true scenario. As time goes by, the OSPA distance of the MB-EKF becomes worse than that of the MB-ETKF, both because its target number estimate degrades (Figure 5) and because its state estimates worsen (Figure 4b); for example, there is a significant estimation bias for targets 6 and 7.
The average CPU time per filtering step of each algorithm is given in Table 5. The MB-MCETKF takes slightly more CPU time than the MB-ETKF because it additionally computes the parameter $\lambda_k$ (step 2 in Proposition 2); otherwise, their costs are almost identical. The MB-EKF takes the least CPU time of the three, mainly because inverting a lower-dimensional matrix is cheaper: the state dimension of the MB-EKF is 4, compared with 5 for the other two filters.

6. Conclusions

In this paper, a new filter called the MCTKF is developed to address the filtering problem of stochastic systems with correlated and non-Gaussian noise. In this filter, the linear TMC model is employed to capture the correlation structure of the stochastic system, and the maximum correntropy (MC) criterion is adopted instead of the MMSE criterion to deal with non-Gaussian noise, as the former uses not only second-order but also higher-order statistics of the innovation. Furthermore, a square-root implementation of the MCTKF is designed using QR decomposition to improve the numerical stability with respect to round-off errors. Although the two filters are algebraically equivalent, simulation results show that the square-root algorithm is more numerically stable for ill-conditioned problems. Both filters have simple forms, which facilitates their practical application. Numerical examples, including a nonlinear extension of the MCTKF in a bearing-only multi-target tracking scenario with non-Gaussian noise, demonstrate the effectiveness of the proposed algorithms.
In addition, our results show that the kernel size plays an important role in both filters. The adaptive kernel size selection method adopted in this paper proves effective in simulation, but it is not optimal and leaves room for improvement; optimal kernel size selection therefore deserves further study.

Author Contributions

Conceptualization, G.Z. and L.Z.; methodology, G.Z. and X.Z.; software and validation, G.Z., S.D. and M.Z.; formal analysis, G.Z. and F.L.; investigation, resources, and data curation, G.Z. and X.Z.; writing—original draft preparation, G.Z.; writing—review and editing, G.Z. and L.Z.; visualization, G.Z., S.D. and M.Z.; supervision, F.L.; project administration and funding acquisition, G.Z. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grants 62103318 and 62173266, and the Special Fund for Basic Research Funds of Central Universities (Humanities and Social Sciences) under Grant 300102231627.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software; Wiley: New York, NY, USA, 2001.
2. Jiang, M.; Guo, S.; Luo, H.; Yao, Y.; Cui, G. A Robust Target Tracking Method for Crowded Indoor Environments Using mmWave Radar. Remote Sens. 2023, 15, 2425.
3. Zandavi, S.M.; Chung, V. State Estimation of Nonlinear Dynamic System Using Novel Heuristic Filter Based on Genetic Algorithm. Soft Comput. 2019, 23, 5559–5570.
4. Lan, J.; Li, X.R. Nonlinear Estimation Based on Conversion-Sample Optimization. Automatica 2020, 121, 109160.
5. Zhang, G.; Lan, J.; Zhang, L.; He, F.; Li, S. Filtering in Pairwise Markov Model with Student's t Non-Stationary Noise with Application to Target Tracking. IEEE Trans. Signal Process. 2021, 69, 1627–1641.
6. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45.
7. An, D.; Zhang, F.; Yang, Q.; Zhang, C. Data Integrity Attack in Dynamic State Estimation of Smart Grid: Attack Model and Countermeasures. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1631–1644.
8. Wu, W.R.; Chang, D.C. Maneuvering Target Tracking with Colored Noise. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1311–1320.
9. Saha, S.; Gustafsson, F. Particle Filtering with Dependent Noise Processes. IEEE Trans. Signal Process. 2012, 60, 4497–4508.
10. Li, W.; Jia, Y.; Du, J.; Zhang, J. PHD Filter for Multi-Target Tracking with Glint Noise. Signal Process. 2014, 94, 48–56.
11. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z.; Chambers, J.A. A Novel Robust Student's t-Based Kalman Filter. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1545–1554.
12. Roth, M.; Özkan, E.; Gustafsson, F. A Student's t Filter for Heavy Tailed Process and Measurement Noise. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 5770–5774.
13. Pieczynski, W.; Desbouvries, F. Kalman Filtering Using Pairwise Gaussian Models. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2003, Hong Kong, China, 6–10 April 2003; pp. 57–60.
14. Pieczynski, W. Pairwise Markov Chains. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 634–639.
15. Némesin, V.; Derrode, S. Robust Blind Pairwise Kalman Algorithms Using QR Decompositions. IEEE Trans. Signal Process. 2013, 61, 5–9.
16. Zhang, G.H.; Han, C.Z.; Lian, F.; Zeng, L.H. Cardinality Balanced Multi-target Multi-Bernoulli Filter for Pairwise Markov Model. Acta Autom. Sin. 2017, 43, 2100–2108.
17. Petetin, Y.; Desbouvries, F. Bayesian Multi-Object Filtering for Pairwise Markov Chains. IEEE Trans. Signal Process. 2013, 61, 4481–4490.
18. Ait-El-Fquih, B.; Desbouvries, F. Kalman Filtering in Triplet Markov Chains. IEEE Trans. Signal Process. 2006, 54, 2957–2963.
19. Lehmann, F.; Pieczynski, W. Reduced-Dimension Filtering in Triplet Markov Models. IEEE Trans. Autom. Control 2021, 67, 605–617.
20. Lehmann, F.; Pieczynski, W. Suboptimal Kalman Filtering in Triplet Markov Models Using Model Order Reduction. IEEE Signal Process. Lett. 2020, 27, 1100–1104.
21. Petetin, Y.; Desbouvries, F. Exact Bayesian Estimation in Constrained Triplet Markov Chains. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Reims, France, 21–24 September 2014; pp. 1–16.
22. Ait El Fquih, B.; Desbouvries, F. Kalman Filtering for Triplet Markov Chains: Applications and Extensions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Philadelphia, PA, USA, 18–23 March 2005; Volume IV, pp. 685–688.
23. Izanloo, R.; Fakoorian, S.A.; Yazdi, H.S.; Simon, D. Kalman Filtering Based on the Maximum Correntropy Criterion in the Presence of Non-Gaussian Noise. In Proceedings of the 2016 Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 16–18 March 2016; pp. 500–505.
24. Zhu, J.; Xie, W.; Liu, Z. Student's t-Based Robust Poisson Multi-Bernoulli Mixture Filter under Heavy-Tailed Process and Measurement Noises. Remote Sens. 2023, 15, 4232.
25. Bilik, I.; Tabrikian, J. MMSE-Based Filtering in Presence of Non-Gaussian System and Measurement Noise. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1153–1170.
26. Shan, C.; Zhou, W.; Jiang, Z.; Shan, H. A New Gaussian Approximate Filter with Colored Non-Stationary Heavy-Tailed Measurement Noise. Digit. Signal Process. 2021, 122, 103358.
27. Zheng, F.; Derrode, S.; Pieczynski, W. Semi-Supervised Optimal Recursive Filtering and Smoothing in Non-Gaussian Markov Switching Models. Signal Process. 2020, 171, 107511.
28. Pieczynski, W. Exact Filtering in Conditionally Markov Switching Hidden Linear Models. C. R. Math. 2011, 349, 587–590.
29. Abbassi, N.; Benboudjema, D.; Derrode, S.; Pieczynski, W. Optimal Filter Approximations in Conditionally Gaussian Pairwise Markov Switching Models. IEEE Trans. Autom. Control 2014, 60, 1104–1109.
30. Gorynin, I.; Derrode, S.; Monfrini, E.; Pieczynski, W. Fast Filtering in Switching Approximations of Nonlinear Markov Systems with Applications to Stochastic Volatility. IEEE Trans. Autom. Control 2016, 62, 853–862.
31. Kotecha, J.; Djuric, P. Gaussian Sum Particle Filtering. IEEE Trans. Signal Process. 2003, 51, 2602–2612.
32. Liu, X.; Qu, H.; Zhao, J.; Yue, P. Maximum Correntropy Square-Root Cubature Kalman Filter with Application to SINS/GPS Integrated Systems. ISA Trans. 2018, 80, 195–202.
33. Liu, W.; Pokharel, P.P.; Príncipe, J.C. Correntropy: Properties and Applications in Non-Gaussian Signal Processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298.
34. Wang, D.; Zhang, H.; Huang, H.; Ge, B. A Redundant Measurement-Based Maximum Correntropy Extended Kalman Filter for the Noise Covariance Estimation in INS/GNSS Integration. Remote Sens. 2023, 15, 2430.
35. Liao, T.; Hirota, K.; Wu, X.; Shao, S.; Dai, Y. A Dynamic Self-Tuning Maximum Correntropy Kalman Filter for Wireless Sensors Networks Positioning Systems. Remote Sens. 2022, 14, 4345.
36. Li, X.; Guo, Y.; Meng, Q. Variational Bayesian-Based Improved Maximum Mixture Correntropy Kalman Filter for Non-Gaussian Noise. Entropy 2022, 24, 117.
37. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum Correntropy Kalman Filter. Automatica 2017, 76, 70–77.
38. Kulikova, M.V. Square-Root Algorithms for Maximum Correntropy Estimation of Linear Discrete-Time Systems in Presence of Non-Gaussian Noise. Syst. Control Lett. 2017, 108, 8–15.
39. Liu, X.; Qu, H.; Zhao, J.; Chen, B. State Space Maximum Correntropy Filter. Signal Process. 2017, 130, 152–158.
40. Liu, X.; Chen, B.; Xu, B.; Wu, Z.; Honeine, P. Maximum Correntropy Unscented Filter. Int. J. Syst. Sci. 2017, 48, 1607–1615.
41. Gunduz, A.; Príncipe, J.C. Correntropy as a Novel Measure for Nonlinearity Tests. Signal Process. 2009, 89, 14–23.
42. Cinar, G.T.; Príncipe, J.C. Hidden State Estimation Using the Correntropy Filter with Fixed Point Update and Adaptive Kernel Size. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–6.
43. Vo, B.T.; Vo, B.N.; Cantoni, A. The Cardinality Balanced Multi-Target Multi-Bernoulli Filter and Its Implementations. IEEE Trans. Signal Process. 2009, 57, 409–423.
44. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006.
45. Zhang, G.; Lian, F.; Han, C.; Chen, H.; Fu, N. Two Novel Sensor Control Schemes for Multi-Target Tracking via Delta Generalised Labelled Multi-Bernoulli Filtering. IET Signal Process. 2018, 12, 1131–1139.
46. Higham, N.J. Accuracy and Stability of Numerical Algorithms; SIAM: Philadelphia, PA, USA, 2002.
47. Higham, N.J. Analysis of the Cholesky Decomposition of a Semi-Definite Matrix; Oxford University Press: Manchester, UK, 1990.
48. Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice with MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2014.
49. Kaminski, P.; Bryson, A.; Schmidt, S. Discrete Square Root Filtering: A Survey of Current Techniques. IEEE Trans. Autom. Control 1971, 16, 727–736.
50. Mahler, R.P. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2014.
Figure 1. True velocity and estimates in a single trial. (a) Case 1. (b) Case 2.
Figure 2. Velocity RMSEs over time. (a) Case 1. (b) Case 2.
Figure 3. Averaged velocity RMSEs of the MCTKF-fixed with different kernel sizes $\sigma_k$. (a) Case 1. (b) Case 2.
Figure 4. The estimated results by different filters. (a) True trajectories of targets and sensors. (b) MB-EKF estimated result. (c) MB-ETKF estimated result. (d) MB-MCETKF estimated result.
Figure 5. Target number estimates.
Figure 6. OSPA distances.
Table 1. Computational complexities of the TKF's recursive equations.

Step | Addition/Subtraction and Multiplication | Matrix Inversion
5 | $2n_{x^*}^2 + 2n_{x^*}n_z^2 + 4n_{x^*}n_z - n_{x^*}$ | $O(n_z^3)$
6 | $4n_{x^*}^3 - n_{x^*}^2$ | 0
7 | $2n_{x^*}n_z + 2n_z^2$ | 0
8 | $2n_{x^*}^2 n_z + 2n_{x^*}n_z^2 - n_{x^*}n_z - n_z^2$ | 0
9 | $2n_{x^*}^2 n_z + 2n_{x^*}n_z^2 - 2n_{x^*}n_z$ | $O(n_z^3)$
10 | $2n_{x^*}n_z$ | 0
11 | $2n_{x^*}^3 + 2n_{x^*}^2 n_z - n_{x^*}^2$ | 0
Table 2. Computational complexities of the recursive equations of the MCTKF.

Step | Addition/Subtraction and Multiplication | Matrix Inversion
5 | $2n_{x^*}^2 + 2n_{x^*}n_z^2 + 4n_{x^*}n_z - n_{x^*}$ | $O(n_z^3)$
6 | $4n_{x^*}^3 - n_{x^*}^2$ | 0
7 | $2n_{x^*}n_z + 2n_z^2$ | 0
8 | $n_z^2 + n_z$ | $O(n_z^3)$
9 | $2n_{x^*}^2 n_z + 2n_{x^*}n_z^2 - n_{x^*}n_z - n_z^2$ | 0
10 | $2n_{x^*}^2 n_z + 2n_{x^*}n_z^2 - n_{x^*}n_z$ | $O(n_z^3)$
11 | $2n_{x^*}n_z$ | 0
12 | $2n_{x^*}^3 + 2n_{x^*}^2 n_z - n_{x^*}^2$ | 0
Table 3. Computational complexities of the recursive equations of the square-root MCTKF.

Step | Addition/Subtraction and Multiplication | Matrix Inversion
7 | $2n_{x^*}^2 + 2n_{x^*}n_z^2 + 4n_{x^*}n_z - n_{x^*}$ | $O(n_z^3)$
8 | $6n_{x^*}^3 - n_{x^*}^2$ | 0
9 | 0 | 0
10 | $2n_{x^*}n_z + 2n_z^2$ | 0
11 | $n_z^2 + n_z$ | $O(n_z^3)$
12 | $2n_{x^*}^3 + 8n_{x^*}^2 n_z + 6n_{x^*}n_z^2 + 2n_z^3$ | 0
13 | 0 | 0
14 | $2n_{x^*}n_z^2 + n_{x^*}n_z + n_{x^*}$ | $O(n_z^3)$
Table 4. RMSE results and average CPU time in the presence of shot noise (Case 1) and Gaussian mixture noise (Case 2).

Case 1 (shot noise):

Method | RMSE ($\delta = 10^{-6}$) | RMSE ($\delta = 10^{-7}$) | RMSE ($\delta = 10^{-8}$) | RMSE ($\delta = 10^{-9}$) | CPU Time (s)
MCTKF | 2.8790 | 2.9882 | $9.4331 \times 10^{3}$ | $9.3607 \times 10^{5}$ | $6.4793 \times 10^{-3}$
Square-root MCTKF | 2.8790 | 2.9868 | 2.8630 | 2.8635 | $1.9792 \times 10^{-3}$

Case 2 (Gaussian mixture noise):

Method | RMSE ($\delta = 10^{-6}$) | RMSE ($\delta = 10^{-7}$) | RMSE ($\delta = 10^{-8}$) | RMSE ($\delta = 10^{-9}$) | CPU Time (s)
MCTKF | 5.7947 | 5.7867 | $3.6565 \times 10^{3}$ | $3.7070 \times 10^{5}$ | $6.2853 \times 10^{-3}$
Square-root MCTKF | 5.7949 | 5.7866 | 6.0405 | 5.7510 | $1.9325 \times 10^{-3}$
Table 5. Average CPU time of a single step of each filtering algorithm.

Filter | MB-EKF | MB-ETKF | MB-MCETKF
CPU time (s) | $6.7576 \times 10^{-3}$ | $8.6420 \times 10^{-3}$ | $8.6714 \times 10^{-3}$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
