Article

Multi-Fading Factor and Updated Monitoring Strategy Adaptive Kalman Filter-Based Variational Bayesian

1 College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
2 Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(1), 198; https://doi.org/10.3390/s21010198
Received: 6 December 2020 / Revised: 23 December 2020 / Accepted: 26 December 2020 / Published: 30 December 2020

Abstract: When the statistical characteristics of the process and measurement noise covariance matrices are inaccurate and time-varying in the linear Gaussian state-space model, the estimation performance of the adaptive Kalman filter degrades. To address this problem, a multi-fading factor and updated monitoring strategy adaptive Kalman filter based on variational Bayes is proposed. The inverse Wishart distribution is selected as the measurement noise model, and the system state vector and measurement noise covariance matrix are estimated with the variational Bayesian method. The process noise covariance matrix is estimated by the maximum a posteriori principle, and an updated monitoring strategy with adjustment factors is used to maintain the positive semi-definiteness of the updated matrix. The above optimal estimation results are introduced as time-varying parameters into the multiple fading factors to improve the estimation accuracy of the one-step predicted state error covariance matrix. The application of the proposed algorithm to target tracking is simulated. The results show that, compared with current filters, the proposed algorithm achieves better accuracy and convergence performance and realizes the simultaneous estimation of inaccurate, time-varying process and measurement noise covariance matrices.

1. Introduction

In many practical engineering applications, the true values of the required state variables are not directly available. For example, when a radar detects an air target, it can calculate the target distance from information such as reflected waves, but random interference in the detection process introduces random noise into the observation signal. In this case, the required state variables cannot be obtained exactly and can only be estimated or predicted from the observed signal. For linear systems, the Kalman filter is the optimal filter [1]. With the development of computer technology, the computational requirements and complexity of Kalman filtering are no longer obstacles to its application [2]. At present, Kalman filtering theory is widely used in tracking, navigation, guidance, and other areas [3,4,5,6,7,8,9].
Applying the Kalman filter requires prior knowledge of the mathematical model of the system and the statistical characteristics of the noise, but in many practical applications these are unknown or only partially known [10,11,12]. If an inaccurate mathematical model or noise statistics are used to design the Kalman filter, its performance degrades, producing larger estimation errors and even filter divergence. To solve this problem, various adaptive Kalman filters (AKF) have been developed [13,14].
The Sage–Husa filter (SH-KF) is widely used because of its simple algorithm, which can estimate the first and second moments of the noise online [15]. However, the Sage–Husa adaptive noise estimator suffers from a large computational burden and easily divergent state estimates [16]. In addition, some literature has pointed out that the Sage–Husa adaptive estimator cannot estimate the process and measurement noise covariance matrices simultaneously in real time; it can only estimate one noise covariance matrix when the other is known [17,18]. The maximum-likelihood-based adaptive filtering method (ML-KF) can evaluate and correct the second-order moments of the noise statistics online, but it relies on an accurate innovation covariance estimate and requires a large sliding window to obtain a precise estimate of the noise covariance matrix, which makes it theoretically unsuitable for estimating time-varying noise covariance matrices [19,20]. The strong tracking Kalman filter (ST-KF) is an adaptive filtering algorithm based on the principle of residual orthogonality. It adjusts the weight of new measurement data through an estimate introduced into the one-step predicted covariance matrix. It is robust to model parameter mismatch, has low sensitivity to the noise statistics and initial values, and has a strong ability to track sudden changes. However, its adjustment ability is the same for every filtering channel, and neither the state predicted error covariance matrix (PECM) nor the measurement noise covariance matrix (MNCM) is estimated [21,22].
In recent years, many scholars have introduced the variational Bayesian machine-learning method into the KF algorithm and proposed AKF algorithms based on the variational Bayesian approach (VB-KF), an approximation of the Bayesian method. By choosing a suitable conjugate prior distribution, a slowly time-varying measurement noise covariance can be estimated [23,24,25]. The literature [26] proposed a variational adaptive Kalman filter (R-VBKF) that estimates only the system state vector and the measurement noise covariance matrix (MNCM), and its accuracy is not satisfactory. In the algorithm presented in [27], the state predicted error covariance matrix (PECM) and the MNCM are estimated, but the process noise covariance matrix (PNCM) is not directly estimated: a specific estimate of the PNCM cannot be obtained, and the estimation accuracy needs further improvement.
Aiming at the linear Gaussian state–space model with slowly time-varying process and measurement noise covariances, and taking into account estimation accuracy, convergence performance, robustness, and the simultaneous estimation of the noise covariance matrices, the multi-fading factor and updated monitoring strategy AKF-based variational Bayesian (MFMS-VBAKF) is proposed. Its feasibility is demonstrated by simulation experiments.
The main contributions of the algorithm proposed in this paper are as follows. Compared with the inverse-Gamma distribution used by the R-VBKF algorithm in [25], the proposed algorithm models the measurement noise with the more reasonable inverse Wishart distribution in the variational Bayesian approach, which improves the estimation accuracy of the system state vector and the MNCM. Compared with SH-KF in [15], the proposed algorithm guarantees the positive semi-definiteness of the PNCM under maximum a posteriori estimation by designing an updated monitoring strategy, and achieves the simultaneous estimation of the PNCM and MNCM. Compared with ST-KF [21], the proposed algorithm provides different adjustment abilities for each channel by introducing multiple fading factors, and the PECM estimation accuracy is also improved.
The main structure is as follows: Section 2 formulates the mathematical model of the problem. Section 3 derives the multi-fading factor and updated monitoring strategy AKF-based variational Bayesian. Section 4 compares the proposed algorithm with existing algorithms in a simulated target tracking application and demonstrates its performance. Section 5 summarizes the conclusions. Finally, Section 6 outlines future work.

2. Problem Modeling

Consider the following discrete linear stochastic system in state-space form:

$$X_k = \Phi_{k-1} X_{k-1} + \omega_{k-1}, \tag{1}$$

$$Z_k = H_k X_k + v_k, \tag{2}$$

where (1) and (2) are the process and measurement equations, respectively; $k$ is the discrete time, $X_k \in \mathbb{R}^{n}$ is the state vector of the system at time $k$, and $Z_k \in \mathbb{R}^{m}$ is the measurement vector of the corresponding state. $\Phi_k \in \mathbb{R}^{n \times n}$ is the state-transition matrix and $H_k \in \mathbb{R}^{m \times n}$ is the measurement matrix. $\omega_k \in \mathbb{R}^{n}$ and $v_k \in \mathbb{R}^{m}$ are uncorrelated zero-mean white Gaussian noises with covariance matrices $Q_k$ and $R_k$, respectively. The initial state $X_0$ is assumed to be Gaussian with mean vector $\hat{X}_0$ and covariance matrix $P_0$, and $X_0$ is uncorrelated with $\omega_k$ and $v_k$ at any time [1].
For linear Gaussian state–space models, the Kalman filter (KF) is the optimal estimator. If the noise covariance matrices $Q_k$ and $R_k$ are fully known, the KF estimates the state vector $X_k$ from the measurements $Z_{1:k}$ with satisfactory accuracy. However, the performance of the KF depends heavily on prior knowledge of the noise statistics: if the time-varying covariance matrices $Q_k$ and $R_k$ are unknown or inaccurate, the accuracy of the KF degrades, and the estimate may even diverge. Moreover, most existing AKF algorithms diverge when they estimate the PNCM $Q_k$ and the MNCM $R_k$ at the same time. Therefore, a multi-fading factor and updated monitoring strategy AKF-based variational Bayesian for inaccurate, time-varying PNCM and MNCM is proposed.
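For reference, the baseline KF recursion for the model (1)–(2) with fully known $Q_k$ and $R_k$ can be sketched as follows (a minimal Python illustration; the function name and the constant-velocity test model below are my own, not from the paper):

```python
import numpy as np

def kf_step(x, P, z, Phi, H, Q, R):
    """One predict/update cycle of the standard Kalman filter for
    X_k = Phi X_{k-1} + w,  Z_k = H X_k + v, with known Q and R."""
    # Time update (prediction)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # Measurement update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

On a simple 1D constant-velocity model, feeding in measurements that lie exactly on the true trajectory drives the estimate toward the true state within a few dozen steps.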

3. The Proposed Multi-Fading Factor and Updated Monitoring Strategy AKF-Based Variational Bayesian

In the VBAKF algorithm, the independent state vector $X_k$ and the measurement noise covariance matrix $R_k$ are regarded as the parameters to be estimated.

3.1. AKF-Based Variational Bayesian (VBAKF)

3.1.1. Prediction Process and Distribution Selection

In the traditional Kalman filter framework, Gaussian distributions are selected for the one-step predicted probability density function (PDF) $p(X_k|Z_{1:k-1})$ and the likelihood PDF $p(Z_k|X_k)$:

$$p(X_k|Z_{1:k-1}) = N\big(X_k; \hat{X}_{k|k-1}, P_{k|k-1}\big), \tag{3}$$

$$p(Z_k|X_k) = N\big(Z_k; H_k X_k, R_k\big), \tag{4}$$

where $N(G; \mu, \Sigma)$ is the Gaussian distribution with mean $\mu$ and covariance $\Sigma$. The PDF of the Gaussian distribution is:

$$N(G; \mu, \Sigma) = \frac{1}{\sqrt{|2\pi\Sigma|}}\exp\left(-\frac{1}{2}(G-\mu)^{T}\Sigma^{-1}(G-\mu)\right). \tag{5}$$
According to Equation (1), the predicted state vector $\hat{X}_{k|k-1}$ and the corresponding one-step predicted error covariance matrix (PECM) $P_{k|k-1}$ can be written as:

$$\hat{X}_{k|k-1} = \Phi_{k-1}\hat{X}_{k-1|k-1}, \tag{6}$$

$$P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^{T} + Q_{k-1}, \tag{7}$$

where $\hat{X}_{k-1|k-1}$ and $P_{k-1|k-1}$ are the state estimate at time $k-1$ and the corresponding estimation error covariance matrix, and $(\cdot)^{T}$ denotes the matrix transpose. The true PNCM $Q_k$ is assumed unknown because of complex environmental factors in real applications, which makes $P_{k|k-1}$ in Equation (7) inaccurate. Estimation methods for $Q_k$ and $P_{k|k-1}$ are given in the next two sections.
In this section, the aim is to infer $X_k$ together with $R_k$. For this purpose, a conjugate prior distribution must first be selected for the inaccurate MNCM $R_k$, since conjugacy ensures that the posterior distribution and the prior distribution keep the same functional form.
According to Bayesian statistical theory, if a Gaussian distribution has a known mean, the conjugate prior for its covariance matrix is the inverse Wishart (IW) distribution [28]. Let $C^{-1}$ be the inverse of a positive definite matrix $C$. If $C^{-1}$ follows the Wishart distribution $W(C^{-1}; \lambda, \Psi^{-1})$, then $C$ follows the IW distribution:

$$IW(C; \lambda, \Psi) = \frac{|\Psi|^{\lambda/2}}{2^{\lambda d/2}\,\Gamma_{d}(\lambda/2)}\,|C|^{-(\lambda+d+1)/2}\exp\left(-\frac{1}{2}\mathrm{tr}\big[\Psi C^{-1}\big]\right). \tag{8}$$

In Equation (8), $C$ is a symmetric positive definite random matrix, $\lambda$ is a degrees-of-freedom (dof) parameter, $\Psi$ is a symmetric positive definite scale matrix, $d$ is the dimension of $C$, $\Gamma_{d}(\cdot)$ is the multivariate gamma function, and $\mathrm{tr}[\cdot]$ is the matrix trace. Additionally, if $\lambda > d+1$ and $C \sim IW(C; \lambda, \Psi)$, then $E[C^{-1}] = (\lambda-d-1)\Psi^{-1}$, where $E(\cdot)$ denotes mathematical expectation [29]. Since $R_k$ is the covariance matrix of a Gaussian PDF, the prior distribution $p(R_k|Z_{1:k-1})$ can be written as an IW distribution:
$$p(R_k|Z_{1:k-1}) = IW\big(R_k; \hat{t}_{k|k-1}, \hat{T}_{k|k-1}\big), \tag{9}$$

where $\hat{t}_{k|k-1}$ and $\hat{T}_{k|k-1}$ denote the dof parameter and scale matrix of $p(R_k|Z_{1:k-1})$, respectively. Next, the values of $\hat{t}_{k|k-1}$ and $\hat{T}_{k|k-1}$ need to be assigned.
Owing to the Bayesian theorem, the prior distribution $p(R_k|Z_{1:k-1})$ can be written as:

$$p(R_k|Z_{1:k-1}) = \int p(R_k|R_{k-1})\,p(R_{k-1}|Z_{1:k-1})\,dR_{k-1}, \tag{10}$$

where $p(R_{k-1}|Z_{1:k-1})$ is the posterior probability density function (PDF) of the MNCM $R_{k-1}$.
Since the prior distribution of the MNCM $R_{k-1}$ is selected as inverse Wishart, it follows from (9) that the posterior PDF $p(R_{k-1}|Z_{1:k-1})$ is also inverse Wishart and can be written as:

$$p(R_{k-1}|Z_{1:k-1}) = IW\big(R_{k-1}; \hat{t}_{k-1|k-1}, \hat{T}_{k-1|k-1}\big). \tag{11}$$
To guarantee that $p(R_k|Z_{1:k-1})$ also obeys an inverse Wishart distribution, a changing factor $\rho$ is introduced to modify the one-step predicted values of the distribution parameters $\hat{t}_{k|k-1}$ and $\hat{T}_{k|k-1}$:

$$\hat{t}_{k|k-1} = \rho\big(\hat{t}_{k-1|k-1} - m - 1\big) + m + 1, \tag{12}$$

$$\hat{T}_{k|k-1} = \rho\,\hat{T}_{k-1|k-1}, \tag{13}$$

where $m$ is the dimension of $Z_k$ and $\rho \in (0,1]$. In this way, the slowly time-varying measurement noise covariance matrix can change with a certain probability distribution while the posterior and prior probability density functions keep the same distribution family.
In addition, the initial PDF of the MNCM $R_0$ is also assumed to be inverse Wishart: $p(R_0) = IW(R_0; \hat{t}_{0|0}, \hat{T}_{0|0})$. At the initial moment, to encode the prior information of the measurement noise covariance matrix, the mean value of $R_0$ is taken as the initial fixed measurement noise covariance matrix $\tilde{R}_0$, i.e.,

$$\tilde{R}_0 = \frac{\hat{T}_{0|0}}{\hat{t}_{0|0} - m - 1}. \tag{14}$$
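Equation (14) is simply the mean of the inverse Wishart distribution, which exists for $\hat{t}_{0|0} > m+1$. A quick Monte Carlo sanity check with SciPy (the dof and scale values below are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.stats import invwishart

m = 2                         # measurement dimension
t0 = 10.0                     # dof parameter t_hat_{0|0} (illustrative)
T0 = np.array([[3.0, 0.5],
               [0.5, 2.0]])   # scale matrix T_hat_{0|0} (illustrative)

# Mean of IW(t0, T0) is T0 / (t0 - m - 1), valid for t0 > m + 1 -- Eq. (14)
R0_mean = T0 / (t0 - m - 1)

# Monte Carlo confirmation: the sample mean approaches R0_mean
samples = invwishart.rvs(df=t0, scale=T0, size=50_000, random_state=0)
print(np.max(np.abs(samples.mean(axis=0) - R0_mean)))  # small
```

The sample mean matches $T_0/(t_0-m-1)$ to within Monte Carlo error, confirming the interpretation of (14) as the prior mean.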
Assuming that the prior joint PDF of the state vector and the MNCM factorizes into the product of a Gaussian and an inverse Wishart distribution, the prediction step can be written as:

$$p(X_k, R_k|Z_{1:k-1}) = p(X_k|Z_{1:k-1})\,p(R_k|Z_{1:k-1}) = N\big(X_k; \hat{X}_{k|k-1}, P_{k|k-1}\big)\,IW\big(R_k; \hat{t}_{k|k-1}, \hat{T}_{k|k-1}\big). \tag{15}$$

3.1.2. Variational Update Process

To estimate the state $X_k$ and the MNCM $R_k$, their joint posterior PDF $p(X_k, R_k|Z_{1:k})$ needs to be calculated. However, an analytical solution of this joint posterior PDF cannot be obtained directly. The variational Bayesian method is utilized to find a free-form approximate factorization [30]:

$$p(X_k, R_k|Z_{1:k}) \approx q(X_k)\,q(R_k), \tag{16}$$

where $q(\cdot)$ denotes the approximate posterior PDF of $p(\cdot)$.
In the standard VB method, the Kullback–Leibler divergence (KLD) measures the degree of approximation between the approximate posterior PDF and the true posterior PDF, and the optimal solution is obtained by minimizing the KLD. The VB method provides a closed-form solution for the approximate posterior PDF. The VB approximation is formed by minimizing the KLD between the approximate posterior $q(X_k)q(R_k)$ and the true joint posterior $p(X_k, R_k|Z_{1:k})$ [30]:

$$\{q(X_k), q(R_k)\} = \arg\min\,\mathrm{KLD}\big(q(X_k)q(R_k)\,\|\,p(X_k, R_k|Z_{1:k})\big), \tag{17}$$

where the divergence function $\mathrm{KLD}(\cdot)$ is defined as:

$$\mathrm{KLD}\big(q(A)\,\|\,p(A)\big) = \int q(A)\log\frac{q(A)}{p(A)}\,dA. \tag{18}$$
Combining Equations (17) and (18), the optimal solution of Equation (17) is derived as:

$$\log q(X_k) = E_{R_k}\big[\log p(X_k, R_k, Z_{1:k})\big] + c_{X_k}, \tag{19}$$

$$\log q(R_k) = E_{X_k}\big[\log p(X_k, R_k, Z_{1:k})\big] + c_{R_k}, \tag{20}$$

where $\log(\cdot)$ is the natural logarithm, $E_{\varphi}[\cdot]$ denotes expectation with respect to the approximate posterior PDF of the variable $\varphi$, and $c_{X_k}$ and $c_{R_k}$ are constants with respect to $X_k$ and $R_k$, respectively. Equations (19) and (20) cannot be solved directly because $q(X_k)$ and $q(R_k)$ are coupled; therefore, the fixed-point iteration method is introduced to compute these parameters.
Equation (20) can be further derived as (see Appendix A for details):

$$\log q^{(i+1)}(R_k) = -\frac{1}{2}\big(m + \hat{t}_{k|k-1} + 2\big)\log|R_k| - \frac{1}{2}\mathrm{tr}\Big(\big(E^{(i)}\big[(Z_k - H_k X_k)(Z_k - H_k X_k)^{T}\big] + \hat{T}_{k|k-1}\big)R_k^{-1}\Big) + c_{R_k}, \tag{21}$$

where $q^{(i)}(\cdot)$ represents the approximate probability distribution of $q(\cdot)$ at the $i$-th iteration, $\mathrm{tr}(\cdot)$ is the matrix trace, $c_{R_k}$ is a constant independent of $R_k$, and $m$ is the dimension of the measurement. The expectation part of Equation (21) is defined as $V_k^{(i)}$ and expands to:

$$V_k^{(i)} = E^{(i)}\big[(Z_k - H_k X_k)(Z_k - H_k X_k)^{T}\big] = \int (Z_k - H_k X_k)(Z_k - H_k X_k)^{T}\,N\big(X_k; \hat{X}_k^{(i)}, P_k^{(i)}\big)\,dX_k = \big(Z_k - H_k\hat{X}_k^{(i)}\big)\big(Z_k - H_k\hat{X}_k^{(i)}\big)^{T} + H_k P_k^{(i)} H_k^{T}. \tag{22}$$
It can be seen that $q^{(i+1)}(R_k)$ obeys a new inverse Wishart distribution:

$$q^{(i+1)}(R_k) = IW\big(R_k; \hat{t}_k^{(i+1)}, \hat{T}_k^{(i+1)}\big), \tag{23}$$

with distribution parameters given, respectively, by:

$$\hat{t}_k^{(i+1)} = \hat{t}_{k|k-1} + 1, \tag{24}$$

$$\hat{T}_k^{(i+1)} = V_k^{(i)} + \hat{T}_{k|k-1}. \tag{25}$$
Similarly, the logarithm of the approximate distribution of the system state $X_k$ is:

$$\log q^{(i+1)}(X_k) = -\frac{1}{2}(Z_k - H_k X_k)^{T} E^{(i+1)}\big[R_k^{-1}\big](Z_k - H_k X_k) - \frac{1}{2}\big(X_k - \hat{X}_{k|k-1}\big)^{T} P_{k|k-1}^{-1}\big(X_k - \hat{X}_{k|k-1}\big) + c_{X_k}, \tag{26}$$

where $E^{(i+1)}[R_k^{-1}]^{-1}$ is given by:

$$E^{(i+1)}\big[R_k^{-1}\big]^{-1} = \big(\hat{t}_k^{(i+1)} - m - 1\big)^{-1}\hat{T}_k^{(i+1)}. \tag{27}$$
The likelihood PDF $p(Z_k|X_k)$ in Equation (4) after the $(i+1)$-th iteration becomes:

$$p^{(i+1)}(Z_k|X_k) = N\big(Z_k; H_k X_k, \hat{R}_k^{(i+1)}\big), \tag{28}$$

where the corrected measurement noise covariance matrix (MNCM) $\hat{R}_k^{(i+1)}$ is:

$$\hat{R}_k^{(i+1)} = E^{(i+1)}\big[R_k^{-1}\big]^{-1}. \tag{29}$$
Since $q(X_k)$ is Gaussian, $q(X_k) = N(X_k; \hat{X}_k, P_k)$. Combining this with the standard Kalman filter framework, the gain matrix $K_k^{(i+1)}$, system state $\hat{X}_k^{(i+1)}$, and state covariance $P_k^{(i+1)}$ in the variational measurement update are corrected as follows, respectively:

$$K_k^{(i+1)} = P_{k|k-1}H_k^{T}\big(H_k P_{k|k-1}H_k^{T} + \hat{R}_k^{(i+1)}\big)^{-1}, \tag{30}$$

$$\hat{X}_k^{(i+1)} = \hat{X}_{k|k-1} + K_k^{(i+1)}\big(Z_k - H_k\hat{X}_{k|k-1}\big), \tag{31}$$

$$P_k^{(i+1)} = \big(I - K_k^{(i+1)}H_k\big)P_{k|k-1}\big(I - K_k^{(i+1)}H_k\big)^{T} + K_k^{(i+1)}\hat{R}_k^{(i+1)}\big(K_k^{(i+1)}\big)^{T}. \tag{32}$$
Analyzing the above derivation, the implicit solution of the variational update is constituted by Equations (22), (24), (25), and (29)–(32). The expectation–maximization approach iteratively computes $q(X_k)$ and $q(R_k)$ to continuously update the estimates of $X_k$ and $R_k$. As $q(X_k)$ and $q(R_k)$ approach $p(X_k, R_k|Z_{1:k})$, the KLD in Equation (17) decreases, and the estimates adaptively approach the true values until the variational update iteration terminates. The optimal estimates $\hat{X}_k$ and $\hat{R}_k$ at time $k$ are then ($N$ is the number of fixed-point iterations):

$$q(X_k) \approx q^{(N)}(X_k) = N\big(X_k; \hat{X}_{k|k}^{(N)}, P_{k|k}^{(N)}\big) = N\big(X_k; \hat{X}_{k|k}, P_{k|k}\big), \tag{33}$$

$$q(R_k) \approx q^{(N)}(R_k) = IW\big(R_k; \hat{t}_k^{(N)}, \hat{T}_k^{(N)}\big) = IW\big(R_k; \hat{t}_{k|k}, \hat{T}_{k|k}\big). \tag{34}$$
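One fixed-point cycle of Equations (22)–(32) can be sketched as follows. This is an illustrative simplification, not the authors' code: the variable names are my own, the scale update adds the predicted parameter $\hat{T}_{k|k-1}$ as in Equation (25), and $\hat{t}_{k|k-1} > m$ is assumed so that the point estimate of $R_k$ in Equation (27) is well defined.

```python
import numpy as np

def vb_measurement_update(x_pred, P_pred, z, H, t_pred, T_pred, n_iter=10):
    """Variational measurement update: jointly estimates the state and the
    MNCM R_k under an inverse Wishart model (sketch of Eqs. (22)-(32))."""
    m = len(z)
    x, P = x_pred.copy(), P_pred.copy()
    t_new = t_pred + 1.0                      # dof update, Eq. (24)
    for _ in range(n_iter):
        # Eq. (22): expected squared residual under current q(X_k)
        r = z - H @ x
        V = np.outer(r, r) + H @ P @ H.T
        # Eq. (25): scale matrix update
        T_new = V + T_pred
        # Eqs. (27)/(29): point estimate of R (requires t_new > m + 1)
        R_hat = T_new / (t_new - m - 1)
        # Eqs. (30)-(32): Kalman-type correction with R_hat
        S = H @ P_pred @ H.T + R_hat
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (z - H @ x_pred)
        IKH = np.eye(len(x_pred)) - K @ H
        P = IKH @ P_pred @ IKH.T + K @ R_hat @ K.T
    return x, P, t_new, T_new, R_hat
```

Each pass recomputes $V_k^{(i)}$ from the latest state estimate, so the state and the MNCM estimates refine each other until the fixed point is reached.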

3.2. Updated Monitoring Strategy Based on Maximum a Posteriori (MAP) for Estimating the PNCM Q k

When existing adaptive filtering methods estimate the process noise covariance matrix (PNCM) $Q_k$ and the measurement noise covariance matrix (MNCM) $R_k$ at the same time, the accuracy of the estimated state $X_k$ easily decreases, and the filter may even diverge. This is caused by $Q_k$ becoming a negative definite matrix during the estimation process [17].
To realize the simultaneous estimation of the PNCM $Q_k$ and the MNCM $R_k$ while improving the estimation accuracy of the state vector $X_k$, an updated monitoring strategy based on the maximum a posteriori (MAP) principle for estimating the PNCM $Q_k$ is proposed.
According to the state–space model in Equations (1) and (2), the MAP suboptimal unbiased estimator of the PNCM $Q_k$ based on the noise statistics of the measurements $\{Z_1, Z_2, \ldots, Z_k\}$ is given in [15] as:

$$\hat{Q}_k = \frac{1}{k+1}\sum_{s=0}^{k}\Big[K_s\gamma_s\gamma_s^{T}K_s^{T} + P_{s|s} - \Phi_{s-1}P_{s-1|s-1}\Phi_{s-1}^{T}\Big], \tag{35}$$

where, combining Equation (35) with the conclusions of Section 3.1, $K_s = K_s^{(N)}$ is the optimal gain computed by VBAKF after $N$ variational iterations at time $s$, $P_{s|s} = P_s^{(N)}$ is the state covariance after $N$ variational iterations at time $s$, and $\gamma_s = Z_s - H_s\hat{X}_{s|s-1}$ is the residual.
From a statistical point of view, Equation (35) is an arithmetic average with weight coefficient $1/(k+1)$ for every item. When estimating a time-varying process noise covariance matrix, however, the role of the latest information should be highlighted, which can be achieved by multiplying each item in $\sum_{s=0}^{k}[\cdot]$ by a different weighting coefficient. Using exponential weighting, different weights can be assigned to each item in $\sum_{s=0}^{k}[\cdot]$, increasing the weight of the latest information and thereby improving the accuracy of the $\hat{Q}_k$ estimate. The exponential weighting coefficients $\{\vartheta_s\}$ are selected to satisfy:

$$\vartheta_{s-1} = b\,\vartheta_s;\quad b \in (0,1);\quad \sum_{s=0}^{k}\vartheta_s = 1, \tag{36}$$

which is further deduced as:

$$\vartheta_s = d_k\,b^{k-s},\quad d_k = \frac{1-b}{1-b^{k+1}},\quad s \in [0,k], \tag{37}$$

where $b$ is the attenuation factor. Replacing the original weight of each item in $\sum_{s=0}^{k}[\cdot]$ of Equation (35) with these coefficients yields the time-varying PNCM estimator, whose recursive form is derived as:

$$\hat{Q}_k = (1-d_k)\hat{Q}_{k-1} + d_k\Big(K_k^{(N)}\gamma_k\gamma_k^{T}\big(K_k^{(N)}\big)^{T} + P_k^{(N)} - \Phi_{k-1}P_{k-1}^{(N)}\Phi_{k-1}^{T}\Big). \tag{38}$$
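Unrolling the recursion in Equation (38) confirms the exponential weighting: the bracketed item at time $s$ accumulates weight $d_k b^{k-s}$, and the weights sum to one. A small numerical check (treating each bracketed item as a scalar; the horizon length is illustrative):

```python
# Verify that the recursion Q_k = (1 - d_k) Q_{k-1} + d_k * item_k
# assigns weight d_k * b**(k - s) to item_s, with weights summing to 1.
b = 0.96   # attenuation factor, as in the paper's simulation settings
K = 50     # number of steps (illustrative)

def d(k):
    return (1 - b) / (1 - b ** (k + 1))

# Weight each item_s accumulates after running the recursion up to step K:
# d(s) at its own step, then multiplied by (1 - d(k)) at every later step.
weights = []
for s in range(K + 1):
    w = d(s)
    for k in range(s + 1, K + 1):
        w *= (1 - d(k))
    weights.append(w)

print(sum(weights))           # ~1: weights are normalized
print(weights[-1], d(K))      # newest item has the largest weight, d_K
print(weights[0], d(K) * b**K)  # oldest item decays geometrically
```

This matches the closed form $\vartheta_s = d_k b^{k-s}$: the newest item always carries weight $d_k$, and older items decay geometrically with factor $b$.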
Equations (7), (30)–(32), and (38) constitute a VBAKF algorithm that simultaneously estimates the PNCM $Q_k$, MNCM $R_k$, state vector $X_k$, and one-step predicted state error covariance matrix (PECM) $P_{k|k-1}$.
However, simulation experiments show that the estimate of the PNCM $Q_k$ produced by the above algorithm is prone to abnormality: it can lose positive semi-definiteness, which leads to filter divergence.
To solve this problem without losing the information of the original process noise estimator (38), an updated monitoring strategy for the process noise parameters is designed. First, it is judged whether $\hat{Q}_k$ calculated by Equation (38) is a positive semi-definite matrix. If not, the adjustment factor $\beta$ is introduced to update the process noise estimate so that the corrected PNCM meets the requirement.
The right side of Equation (38) can be rearranged as:

$$\hat{Q}_k = \hat{Q}_{k-1} + d_k\Big(K_k^{(N)}\gamma_k\gamma_k^{T}\big(K_k^{(N)}\big)^{T} + D_k\Big), \tag{39}$$

$$D_k = P_k^{(N)} - \Phi_{k-1}P_{k-1}^{(N)}\Phi_{k-1}^{T} - \hat{Q}_{k-1}. \tag{40}$$
Generally speaking, the initial value of the state error covariance matrix $P_0$ is imprecise and deviates strongly from the ideal value in the initial stage of filtering. As a result, the magnitude of the term $D_k$ in Equation (40) can be much larger than that of $\gamma_k\gamma_k^{T}$, which easily makes $\hat{Q}_k$ lose positive semi-definiteness and causes the filter to diverge. Therefore, an adjustment factor is introduced to attenuate the effect of the state error covariance matrix at the initial moments, avoiding indefiniteness of the estimated PNCM and preventing filter divergence. The updated monitoring strategy for the PNCM is:

$$\hat{Q}_k = \hat{Q}_{k-1} + d_k\Big(K_k^{(N)}\gamma_k\gamma_k^{T}\big(K_k^{(N)}\big)^{T} + (\beta_k)^{p}\big(P_k^{(N)} - \Phi_{k-1}P_{k-1}^{(N)}\Phi_{k-1}^{T} - \hat{Q}_{k-1}\big)\Big), \tag{41}$$

where $p \geq 1$ is a positive integer (initialized to 1) and $\beta_k$ is the adjustment factor, whose value is related to the state error covariance as follows:

$$\beta_k = \exp(a_k), \tag{42}$$

$$a_k = \frac{\mathrm{tr}\big(P_k^{(N)} - \Phi_{k-1}P_{k-1}^{(N)}\Phi_{k-1}^{T} - \hat{Q}_{k-1}\big)}{\mathrm{tr}\big(K_k^{(N)}\gamma_k\gamma_k^{T}\big(K_k^{(N)}\big)^{T}\big)}. \tag{43}$$
The specific procedure of the updated monitoring strategy is as follows: monitor the process noise covariance matrix calculated by Equation (38) and judge whether $\hat{Q}_k$ is positive semi-definite. If it is, no update is needed and $\hat{Q}_k$ is output directly. Otherwise, set $p = 1$ and update the process noise parameters with Equation (41). Continue monitoring the updated estimate to determine whether a further update is necessary; if so, set $p = p + 1$ and recompute $\hat{Q}_k$ with Equation (41). The loop executes until $\hat{Q}_k$ is positive semi-definite, which ends the update of the process noise covariance matrix at the current moment. The flowchart of one time-step of the updated monitoring strategy is shown in Figure 1.
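The monitoring loop can be sketched as follows. This is an illustrative Python version, not the authors' code: it assumes $\beta_k = \exp(a_k) < 1$ in the problematic case (where $\mathrm{tr}(D_k) < 0$), and adds an iteration cap as a safeguard that is not part of the paper's description.

```python
import numpy as np

def monitor_update_Q(Q_prev, d_k, K_k, gamma_k, Pk, Phi, Pk_prev, max_iter=50):
    """Updated monitoring strategy (sketch of Eqs. (38)-(43)): attenuate the
    D_k term by beta^p until the estimated PNCM is positive semi-definite."""
    KggK = K_k @ np.outer(gamma_k, gamma_k) @ K_k.T
    D = Pk - Phi @ Pk_prev @ Phi.T - Q_prev            # Eq. (40)
    # Plain estimate, Eqs. (38)/(39)
    Q_new = Q_prev + d_k * (KggK + D)
    if np.min(np.linalg.eigvalsh(Q_new)) >= 0:
        return Q_new                                    # already PSD: done
    # Adjustment factor, Eqs. (42)-(43); exp(a_k) < 1 when trace(D) < 0
    a_k = np.trace(D) / np.trace(KggK)
    beta = np.exp(a_k)
    p = 1
    while np.min(np.linalg.eigvalsh(Q_new)) < 0 and p <= max_iter:
        Q_new = Q_prev + d_k * (KggK + beta ** p * D)   # Eq. (41)
        p += 1
    return Q_new
```

As $\beta_k^{p} \to 0$, the update degenerates to $\hat{Q}_{k-1} + d_k K_k\gamma_k\gamma_k^{T}K_k^{T}$, which is positive semi-definite whenever $\hat{Q}_{k-1}$ is, so the loop terminates.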
So far, combined with the traditional Kalman filter framework, the VBAKF algorithm with the updated monitoring strategy simultaneously estimates the system state vector $X_k$, the process noise covariance matrix (PNCM) $Q_k$, the measurement noise covariance matrix (MNCM) $R_k$, the one-step predicted state error covariance matrix (PECM) $P_{k|k-1}$, and the state error covariance matrix $P_k$.

3.3. Improvement by Introducing Multiple Fading Factors

If the statistical characteristics of the process and measurement noise are time-varying, the convergence of VBAKF slows down, and a noticeable error appears in the estimation result, which is reflected in the residual sequence $\gamma_k$ [31].
In view of this, to improve the estimation accuracy, the multiple fading factor $L_{md}$ is introduced to correct the one-step predicted error covariance matrix (PECM) $P_{k|k-1}$. Equation (7) is rewritten as:

$$P_{k|k-1}^{*} = L_{md}\,\Phi_{k-1}P_{k-1}\Phi_{k-1}^{T} + \hat{Q}_{k-1}, \tag{44}$$
which adjusts the gain $K_k^{(N)}$ in real time to keep the residuals $\gamma_k$ orthogonal, forcing the filter to track the actual state of the system and thereby improving its tracking ability [31]. The multiple fading factor $L_{md}$ is computed as follows:

$$L_{md} = \mathrm{diag}\big[\lambda_k^{1}, \lambda_k^{2}, \ldots, \lambda_k^{n}\big], \tag{45}$$

$$\lambda_k^{i} = \begin{cases}\lambda_{k0}^{i}, & \lambda_{k0}^{i} > 1\\ 1, & \lambda_{k0}^{i} \leq 1\end{cases}\qquad i = 1, 2, \ldots, n, \tag{46}$$

where $n$ is the dimension of the state vector $X_k$, $\lambda_{k0}^{i} = \alpha_i \cdot G_k$, and the values $\alpha_i$ are determined by prior information about the system. $G_k$ is given by:

$$G_k = \frac{\mathrm{tr}(N_k)}{\sum_{i=1}^{n}\alpha_i\,M_k^{ii}}, \tag{47}$$

where $M_k^{ii}$ is the $i$-th main-diagonal element of $M_k$, and $N_k$ and $M_k$ are computed as:

$$N_k = B_k - H_{k-1}\hat{Q}_{k-1}H_{k-1}^{T} - \tau\hat{R}_k, \tag{48}$$

$$M_k = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^{T}H_{k-1}^{T}H_{k-1}. \tag{49}$$
In Equation (48), $\tau$ is the weakening factor, and the unknown matrix $B_k$ can be estimated by:

$$B_k = \begin{cases}\gamma_k\gamma_k^{T}, & k = 1\\[4pt] \dfrac{\mu B_{k-1} + \gamma_k\gamma_k^{T}}{1+\mu}, & k > 1\end{cases} \tag{50}$$

where $\mu \in (0,1]$ is the forgetting factor.
Since $L_{md}$ corrects the one-step predicted error covariance matrix (PECM) in the prediction step of the filtering algorithm, the initial value $\hat{R}_1$ of the MNCM $\hat{R}_k$ must be set in advance, assuming that $\hat{R}_1$ also obeys the inverse Wishart distribution. The estimate of the MNCM $\hat{R}_k$ in the time-update step is defined as:

$$\hat{R}_k = \begin{cases}\dfrac{\hat{T}_0}{\hat{t}_0 - m - 1}, & k = 1\\[4pt] \hat{R}_{k-1}, & k > 1\end{cases} \tag{51}$$

In Equation (51), the variation of the slowly time-varying measurement noise covariance matrix is small, so the estimate $\hat{R}_{k-1}$ from the previous time step remains a good reference for the current step. Therefore, the $\hat{R}_{k-1}$ obtained by the variational update at the previous step is used as $\hat{R}_k$ for $k > 1$. $\hat{R}_k$ then serves as the time-varying parameter of $L_{md}$ to correct the one-step PECM $P_{k|k-1}^{*}$ more accurately, and an accurate $P_{k|k-1}^{*}$ directly improves the estimation results of the variational iteration.
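The fading-factor computation in Equations (45)–(49) can be sketched as follows (an illustrative Python version; the argument names are my own, and the latest available state covariance $P_{k-1|k-1}$ is passed in for $M_k$):

```python
import numpy as np

def multi_fading_matrix(B, H, Q_prev, R_hat, Phi, P_prev, alpha, tau=0.4):
    """Compute the multiple fading factor matrix L_md of Eqs. (45)-(49).
    alpha: per-channel prior weights alpha_i (length n = state dimension)."""
    N_k = B - H @ Q_prev @ H.T - tau * R_hat              # Eq. (48)
    M_k = Phi @ P_prev @ Phi.T @ H.T @ H                  # Eq. (49), n x n
    G_k = np.trace(N_k) / np.sum(alpha * np.diag(M_k))    # Eq. (47)
    lam0 = alpha * G_k                                    # lambda_{k0}^i
    lam = np.where(lam0 > 1.0, lam0, 1.0)                 # Eq. (46)
    return np.diag(lam)                                   # Eq. (45): L_md
```

Channels whose candidate factor $\lambda_{k0}^{i}$ falls at or below one keep the standard prediction ($\lambda_k^{i}=1$), so the fading only inflates the predicted covariance and never shrinks it.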
The multi-fading factor and updated monitoring strategy AKF-based variational Bayesian in this paper is composed of Equations (6), (12)–(14), (22)–(25), (27), (29)–(34), and (41)–(51). The pseudo-code of the proposed MFMS-VBAKF algorithm is listed in Algorithm 1.
Algorithm 1 One time-step of the proposed multi-fading factor and updated monitoring strategy AKF-based variational Bayesian
Inputs: $\hat{X}_{k-1|k-1}$, $P_{k-1|k-1}$, $\hat{t}_{k-1|k-1}$, $\hat{T}_{k-1|k-1}$, $\Phi$, $H$, $Z_k$, $m$, $n$, $\rho$, $b$, $\mu$, $\tau$, $\alpha_i$, $N$
Time update:
1. $\hat{X}_{k|k-1} = \Phi\hat{X}_{k-1|k-1}$, $\gamma_k = Z_k - H\hat{X}_{k|k-1}$
2. if $k = 1$ then
3.   $\hat{R}_k = \hat{T}_0/(\hat{t}_0 - m - 1)$, $Q_k = \tilde{Q}_0$
4.   $B_k = \gamma_k\gamma_k^{T}$
5. else
6.   $\hat{R}_k = \hat{R}_{k-1}$, $Q_k = \hat{Q}_{k-1}$
7.   $B_k = (\mu B_{k-1} + \gamma_k\gamma_k^{T})/(1+\mu)$
8. $N_k = B_k - H_{k-1}\hat{Q}_{k-1}H_{k-1}^{T} - \tau\hat{R}_k$
9. $M_k = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^{T}H_{k-1}^{T}H_{k-1}$
10. $G_k = \mathrm{tr}(N_k)/\sum_{i=1}^{n}(\alpha_i\cdot M_k^{ii})$, $\lambda_{k0}^{i} = \alpha_i\cdot G_k$
11. if $\lambda_{k0}^{i} > 1$ then
12.   $\lambda_k^{i} = \lambda_{k0}^{i}$
13. else
14.   $\lambda_k^{i} = 1$
15. $L_{md} = \mathrm{diag}[\lambda_k^{1}, \lambda_k^{2}, \ldots, \lambda_k^{n}]$
16. $d_k = (1-b)/(1-b^{k+1})$
17. $\hat{Q}_k = (1-d_k)\hat{Q}_{k-1} + d_k(K_k\gamma_k\gamma_k^{T}K_k^{T} + P_k - \Phi_{k-1}P_{k-1}\Phi_{k-1}^{T})$, $p = 1$
18. $\theta = \min(\mathrm{eig}(\hat{Q}_k))$
19. $a_k = \mathrm{tr}(P_k - \Phi_{k-1}P_{k-1}\Phi_{k-1}^{T} - \hat{Q}_{k-1})/\mathrm{tr}(K_k\gamma_k\gamma_k^{T}K_k^{T})$
20. $\beta_k = \exp(a_k)$
21. while $\theta < 0$ do
22.   $\hat{Q}_k = \hat{Q}_{k-1} + d_k(K_k\gamma_k\gamma_k^{T}K_k^{T} + (\beta_k)^{p}(P_k - \Phi_{k-1}P_{k-1}\Phi_{k-1}^{T} - \hat{Q}_{k-1}))$
23.   $\theta = \min(\mathrm{eig}(\hat{Q}_k))$, $p = p + 1$
24. $P_{k|k-1}^{*} = L_{md}\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^{T} + \hat{Q}_{k-1}$
Iterated measurement update:
25. Initialization
26. $V_k^{(i)} = (Z_k - H_k\hat{X}_k^{(i)})(Z_k - H_k\hat{X}_k^{(i)})^{T} + H_kP_k^{(i)}H_k^{T}$
27. $\hat{t}_k^{(i+1)} = \hat{t}_{k|k-1} + 1$, $\hat{T}_k^{(i+1)} = V_k^{(i)} + \hat{T}_{k|k-1}$
for $i = 0 : N-1$
  Update $q^{(i+1)}(X_k)$ given $P_{k|k-1}^{*}$ and $q^{(i+1)}(R_k)$:
28. $E^{(i+1)}[R_k^{-1}]^{-1} = (\hat{t}_k^{(i+1)} - m - 1)^{-1}\hat{T}_k^{(i+1)}$
29. $\hat{R}_k^{(i+1)} = E^{(i+1)}[R_k^{-1}]^{-1}$
30. $K_k^{(i+1)} = P_{k|k-1}^{*}H_k^{T}(H_kP_{k|k-1}^{*}H_k^{T} + \hat{R}_k^{(i+1)})^{-1}$
31. $\hat{X}_k^{(i+1)} = \hat{X}_{k|k-1} + K_k^{(i+1)}(Z_k - H_k\hat{X}_{k|k-1})$
32. $\hat{P}_k^{(i+1)} = (I - K_k^{(i+1)}H_k)P_{k|k-1}^{*}(I - K_k^{(i+1)}H_k)^{T} + K_k^{(i+1)}\hat{R}_k^{(i+1)}(K_k^{(i+1)})^{T}$
end for
33. $\hat{X}_{k|k} = \hat{X}_{k|k}^{(N)}$, $\hat{P}_{k|k} = \hat{P}_{k|k}^{(N)}$, $\hat{t}_{k|k} = \hat{t}_k^{(N)}$, $\hat{T}_{k|k} = \hat{T}_k^{(N)}$, $\hat{R}_k = \hat{R}_k^{(N)}$
Outputs: $\hat{X}_{k|k}$, $\hat{P}_{k|k}$, $\hat{t}_{k|k}$, $\hat{T}_{k|k}$, $\hat{R}_k$, $\hat{Q}_k$

4. Simulations and Results

The application of the proposed algorithm in target tracking is simulated. The target moves according to the continuous white noise accelerated motion model in the two-dimensional Cartesian coordinate system. Sensors are used to collect the target location. The system state is defined as x k = [ x k   x ˙ k   y k   y ˙ k ] , where x k   and   y k represent the Cartesian coordinates of the target at time   k ,   x ˙ k   and y ˙ k   represent the velocity of the target at the corresponding position [24,25]. The state transition matrix Φ k 1 and the measurement matrix H k are respectively set as:
$$\Phi_{k-1}=\begin{bmatrix} I_2 & \Delta t\,I_2 \\ 0 & I_2 \end{bmatrix},\qquad H_k=\begin{bmatrix} I_2 & 0 \end{bmatrix},$$
where Δt = 1 s is the sampling interval and I_n denotes the n × n identity matrix. Similar to [25], the true process noise covariance matrix (PNCM) and measurement noise covariance matrix (MNCM) are set to the following slowly time-varying models:
$$\begin{cases} Q_k=\left(9.5+0.5\cos\dfrac{\pi k}{T}\right)q\begin{bmatrix} \dfrac{\Delta t^3}{3}I_2 & \dfrac{\Delta t^2}{2}I_2 \\ \dfrac{\Delta t^2}{2}I_2 & \Delta t\,I_2 \end{bmatrix} \\[2ex] R_k=\left(0.1+0.05\cos\dfrac{\pi k}{T}\right)r\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix} \end{cases}$$
The simulation environment is set as follows: T = 1000   s is the total simulation time, q is a parameter related to process noise, and r is a parameter related to measurement noise. The fixed PNCM and MNCM are set as Q ˜ k = σ I 4 and R ˜ k = ε I 2 , respectively, where σ and ε are the prior confidence parameters used to adjust the initial fixed noise covariance matrix.
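As a sketch, the simulation model above can be set up as follows (in Python/NumPy rather than the authors' MATLAB; the values of q and r are illustrative assumptions, since this excerpt does not state them):

```python
import numpy as np

dt = 1.0            # sampling interval Delta t = 1 s
T_total = 1000      # total simulation time T = 1000 s
I2 = np.eye(2)

# State transition and measurement matrices from the paper
Phi = np.block([[I2, dt * I2], [np.zeros((2, 2)), I2]])
H = np.block([I2, np.zeros((2, 2))])

def true_Q(k, q=1.0):
    """Slowly time-varying true PNCM; q is an assumed illustrative value."""
    base = np.block([[dt**3 / 3 * I2, dt**2 / 2 * I2],
                     [dt**2 / 2 * I2, dt * I2]])
    return (9.5 + 0.5 * np.cos(np.pi * k / T_total)) * q * base

def true_R(k, r=1.0):
    """Slowly time-varying true MNCM; r is an assumed illustrative value."""
    return (0.1 + 0.05 * np.cos(np.pi * k / T_total)) * r * np.array([[1.0, 0.5],
                                                                      [0.5, 1.0]])
```

Both matrices oscillate slowly over the 1000 s run, which is what makes a fixed prior Q̃_k, R̃_k inaccurate.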
The parameters of the proposed MFMS-VBAKF algorithm are set as follows: σ = 1, ε = 100, changing factor ρ = 1 − exp(−4), number of variational iterations N = 10, initial variational parameters t̂_0 = 1 and T̂_0 = 300 I_2, forgetting factor μ = 0.95, weakening factor τ = 0.4, parameters [α_1 α_2 α_3 α_4] = [1.7 1.1 1.7 1.1], and attenuation factor b = 0.96.
This paper compares MFMS-VBAKF with the true noise covariance matrix Kalman filter (TCMKF) [1], the fixed noise covariance matrix Kalman filter (FCMKF) [1], SH-KF [15], ML-KF [20], ST-KF [21], and R-VBKF [26]. Table 1 lists the estimated parameters and parameter settings of the existing algorithms. All algorithms were implemented in MATLAB R2018a and run on a computer with an Intel® Core™ i5-6300HQ CPU at 2.30 GHz and 8 GB of RAM.
To evaluate the accuracy of the state estimation, the root mean square error (RMSE) and average RMSE (ARMSE) of position and velocity are used as performance indicators, defined as follows:
$$E_{\mathrm{RMSE,pos}} \triangleq \sqrt{\frac{1}{M}\sum_{s=1}^{M}\left[\left(x_k^s-\hat{x}_k^s\right)^2+\left(y_k^s-\hat{y}_k^s\right)^2\right]}$$
$$E_{\mathrm{ARMSE,pos}} \triangleq \sqrt{\frac{1}{MT}\sum_{k=1}^{T}\sum_{s=1}^{M}\left[\left(x_k^s-\hat{x}_k^s\right)^2+\left(y_k^s-\hat{y}_k^s\right)^2\right]}$$
where (x_k^s, x̂_k^s) and (y_k^s, ŷ_k^s) denote the true and estimated positions in the s-th Monte Carlo run, and M = 1000 is the total number of Monte Carlo runs. The RMSE and ARMSE of the velocity are defined analogously.
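These error measures can be sketched directly in Python/NumPy (the array shapes are an assumed convention; numerically the two functions coincide because the mean is taken over all axes):

```python
import numpy as np

def rmse_pos(x_true, x_est, y_true, y_est):
    """Position RMSE at one time step k; inputs are (M,) arrays over runs."""
    return np.sqrt(np.mean((x_true - x_est) ** 2 + (y_true - y_est) ** 2))

def armse_pos(x_true, x_est, y_true, y_est):
    """Position ARMSE; inputs are (T, M) arrays, averaged over both axes."""
    return np.sqrt(np.mean((x_true - x_est) ** 2 + (y_true - y_est) ** 2))
```

The velocity RMSE/ARMSE follow the same pattern with ẋ and ẏ in place of x and y.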
True and estimated trajectories of the target are shown in Figure 2. It can be seen that, compared to the existing adaptive filtering algorithm, the proposed filter maintains excellent target tracking performance throughout the entire process.
Figure 3a,b plot the RMSE curves of position and velocity, respectively, for the existing filters and the proposed MFMS-VBAKF. The RMSE of the TCMKF estimation result is regarded as the benchmark. It can be seen from Figure 3 that, compared with existing algorithms, the proposed algorithm converges faster and is more accurate. To further illustrate the advantages of the proposed algorithm, Table 2 lists the average root mean square error (ARMSE) of the different KF algorithms.
According to Table 2, in comparison with the other AKF algorithms, MFMS-VBAKF has the smallest ARMSE and thus the highest accuracy in estimating target position and velocity.
To compare computational complexity, Table 3 lists the single-step running time of each algorithm. Compared with R-VBKF, which also uses the variational Bayesian method, the proposed MFMS-VBAKF increases the single-step running time by 0.21 μs: the multi-fading factors and the updated monitoring strategy buy a substantial increase in estimation accuracy at the cost of moderately higher computational complexity.
To evaluate the estimation accuracy of the one-step predicted state error covariance matrix (PECM) and of the noise covariance matrices (PNCM and MNCM), the square root of the normalized Frobenius norm (SRNFN) and the averaged SRNFN (ASRNFN) are used as error measures, defined as:
$$E_{\mathrm{SRNFN},P} \triangleq \left(\frac{1}{n^2 M}\sum_{s=1}^{M}\left\|\hat{P}_{k:k-1}^{s}-P_{o,k:k-1}^{s}\right\|^2\right)^{1/4}$$
$$E_{\mathrm{ASRNFN},P} \triangleq \left(\frac{1}{n^2 M T}\sum_{k=1}^{T}\sum_{s=1}^{M}\left\|\hat{P}_{k:k-1}^{s}-P_{o,k:k-1}^{s}\right\|^2\right)^{1/4}$$
where P_{o,k:k−1}^s and P̂_{k:k−1}^s denote the true and estimated values of the one-step predicted state error covariance matrix (or of the corresponding noise covariance matrix) in the s-th Monte Carlo run, and n is its dimension. The SRNFN and ASRNFN of the PECM estimates are shown in Figure 4 and Table 4, respectively.
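A minimal sketch of these matrix-error measures (Python/NumPy; the stacked-array shapes are an assumed convention):

```python
import numpy as np

def srnfn(P_est, P_true):
    """SRNFN at time k; P_est, P_true are (M, n, n) stacks over Monte Carlo runs."""
    M, n, _ = P_est.shape
    fro2 = np.sum((P_est - P_true) ** 2, axis=(1, 2))   # squared Frobenius norm per run
    return (np.sum(fro2) / (n ** 2 * M)) ** 0.25

def asrnfn(P_est, P_true):
    """ASRNFN over all time steps; P_est, P_true are (T, M, n, n)."""
    T, M, n, _ = P_est.shape
    fro2 = np.sum((P_est - P_true) ** 2, axis=(2, 3))
    return (np.sum(fro2) / (n ** 2 * M * T)) ** 0.25
```

The same functions apply to the PECM, PNCM, and MNCM by passing the corresponding matrix stacks.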
It can be clearly seen that, when the noise covariance matrices are slowly time-varying, the SRNFN of the MFMS-VBAKF algorithm is smaller than that of the existing adaptive KF algorithms. Compared with R-VBKF, which has similar performance, the ASRNFN of MFMS-VBAKF is reduced by 5.45%.
Figure 5 shows the SRNFN of the MNCM estimates. The MFMS-VBAKF algorithm clearly has the strongest tracking ability, the highest estimation accuracy, and the fastest convergence for the slowly time-varying measurement noise covariance matrix.
Figure 6 and Table 5 show the SRNFNs and ASRNFNs, respectively, of the PNCM estimates from the existing filters and the MFMS-VBAKF algorithm. Both measures are smaller for the proposed MFMS-VBAKF than for the current filters; thus, MFMS-VBAKF achieves better estimation accuracy and satisfactory convergence speed in PNCM estimation.
Next, we analyze the influence of four critical parameters (changing factor ρ, forgetting factor μ, weakening factor τ, and attenuation factor b) of the MFMS-VBAKF algorithm on the estimation performance.
Figure 7 shows the RMSEs of position and velocity for the existing filters and MFMS-VBAKF with ρ = 0.85, 0.93, 0.95, 1 − exp(−4), 1.0. For all of these values, MFMS-VBAKF is more accurate than the existing adaptive KF filters, and ρ = 1 − exp(−4) gives the best estimation accuracy and convergence performance.
Figure 8 plots the RMSE curves of position and velocity for the existing filters and MFMS-VBAKF with μ = 0.65, 0.75, 0.85, 0.95, 1.0. For all of these values, MFMS-VBAKF is more accurate than the existing adaptive KF filters, and μ = 0.95 gives the best estimation accuracy and convergence performance.
Figure 9 plots the RMSE curves of position and velocity for the existing filters and MFMS-VBAKF with τ = 0.2, 0.4, 0.6, 0.8, 1.0. For all of these values, MFMS-VBAKF is more accurate than the existing adaptive KF filters, and τ = 0.4 gives the best estimation accuracy and convergence performance.
Figure 10 shows the RMSEs of position and velocity for the existing filters and MFMS-VBAKF with b = 0.76, 0.86, 0.96, 0.99. For all of these values, MFMS-VBAKF is more accurate than the existing adaptive KF filters, and b = 0.96 gives the best estimation accuracy and convergence performance.
To test the robustness of the adaptive correction capability of the MFMS-VBAKF algorithm when the fixed noise covariance matrices are given different initial values, the prior confidence parameters σ and ε are varied jointly over the grid (σ, ε) ∈ [0.1, 800] × [0.1, 800]. The resulting position and velocity ARMSEs are displayed in Figure 11.
Figure 11 shows that the position and velocity ARMSEs are flat over most of the grid, with estimates close to the true values. Only in an extremely narrow region at the right edge of the grid, where the initial fixed noise covariance matrices differ too much from the true values, is the estimation performance unsatisfactory; this occurs because the variational Bayesian method guarantees only local convergence. Overall, the estimates of the MFMS-VBAKF algorithm converge to near the true values, demonstrating excellent robustness.

5. Conclusions

This paper presents a multi-fading factor and updated monitoring strategy adaptive Kalman filter based on variational Bayesian inference for linear systems with inaccurate, time-varying process and measurement noise covariance matrices. The measurement noise is modeled with an inverse Wishart distribution, and the variational Bayesian method recursively estimates the measurement noise covariance matrix and the system state, both of which approach their true values. The process noise covariance matrix is estimated by the maximum a posteriori principle, and an updated monitoring strategy with adjustment factors guarantees the positive semi-definiteness of the updated matrix. The measurement noise covariance estimate from the variational iterations and the process noise covariance estimate from the updated monitoring strategy then serve as time-varying parameters of the multiple fading factors, yielding a more accurate state predicted error covariance. The variational Bayesian method, the updated monitoring strategy, and the multi-fading factors complement each other, which both enhances the responsiveness of target tracking and improves the accuracy of the variational iterations. Simulation results show that the proposed MFMS-VBAKF algorithm estimates the process and measurement noise covariance matrices simultaneously and achieves satisfactory estimation accuracy, convergence performance, and robustness.

6. Future Work

Compared with existing filtering algorithms, the proposed MFMS-VBAKF exhibits excellent performance in state estimation for linear systems with inaccurate, time-varying process and measurement noise covariance matrices. In real applications, however, the influence of the system's control input cannot be neglected: control inputs can cause large changes in the noise, which may even become non-Gaussian. Moreover, most real systems are nonlinear. In these cases, MFMS-VBAKF is not applicable. Future work will therefore extend the theory of this paper to nonlinear adaptive Kalman filtering, and will further study nonlinear AKF with an inaccurate state transition matrix and non-Gaussian noise distributions.

Author Contributions

C.S. conceived the idea for this paper, designed the experiments and wrote the paper; Z.J. assisted in model designing and experiments; W.Z. and Y.Y. polished the language. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grant number 61573113.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank Tuan Duong Hoang, at the School of Electrical and Data Engineering, University of Technology, Sydney for providing comments and suggestions on the revised version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of (21)

According to Bayesian estimation theory, the joint PDF can be written as:
$$p(X_k,R_k,Z_{1:k}) = p(Z_k\mid X_k,R_k)\,p(X_k\mid Z_{1:k-1})\,p(R_k\mid Z_{1:k-1})\,p(Z_{1:k-1})$$
Equations (3), (4), and (11) are substituted into (A1) to derive:
$$p(X_k,R_k,Z_{1:k}) = \mathrm{N}(Z_k; H_k X_k, R_k)\,\mathrm{N}(X_k; \hat{X}_{k:k-1}, P_{k:k-1})\,\mathrm{IW}(R_k; \hat{t}_{k:k-1}, \hat{T}_{k:k-1})\,p(Z_{1:k-1})$$
Combined with Equation (5), log N ( G ; μ , Σ ) can be rewritten as:
$$\log \mathrm{N}(G;\mu,\Sigma) = \log\!\left[\frac{1}{\sqrt{|2\pi\Sigma|}}\exp\!\left(-\tfrac{1}{2}(G-\mu)^{\mathrm T}\Sigma^{-1}(G-\mu)\right)\right] = -\tfrac{1}{2}\log|\Sigma| - \tfrac{1}{2}(G-\mu)^{\mathrm T}\Sigma^{-1}(G-\mu) + c_G,$$
where c G denotes a constant related to the variable G .
Combined with Equation (8), log I W ( C ; λ , Ψ ) can be rewritten as:
$$\log \mathrm{IW}(C;\lambda,\Psi) = \log\!\left\{\frac{|\Psi|^{\lambda/2}}{2^{\lambda d/2}\,\Gamma_d(\lambda/2)}\,|C|^{-(\lambda+d+1)/2}\exp\!\left(-\tfrac{1}{2}\mathrm{tr}\!\left(\Psi C^{-1}\right)\right)\right\} = -\frac{\lambda+d+1}{2}\log|C| - \tfrac{1}{2}\mathrm{tr}\!\left(\Psi C^{-1}\right) + c_C,$$
where c C denotes the constant related to the variable C .
Combining (A2)–(A4), the logarithmic form of the joint PDF can be written as:
$$\begin{aligned}
\log p(X_k,R_k,Z_{1:k}) ={}& -\tfrac{1}{2}\log|R_k| - \tfrac{1}{2}(Z_k-H_kX_k)^{\mathrm T}R_k^{-1}(Z_k-H_kX_k) \\
&-\tfrac{1}{2}\log|P_{k:k-1}| - \tfrac{1}{2}(X_k-\hat{X}_{k:k-1})^{\mathrm T}P_{k:k-1}^{-1}(X_k-\hat{X}_{k:k-1}) \\
&-\frac{\hat{t}_{k:k-1}+m+1}{2}\log|R_k| - \tfrac{1}{2}\mathrm{tr}\!\left(\hat{T}_{k:k-1}R_k^{-1}\right) + c \\
={}& -\frac{\hat{t}_{k:k-1}+m+2}{2}\log|R_k| - \tfrac{1}{2}(Z_k-H_kX_k)^{\mathrm T}R_k^{-1}(Z_k-H_kX_k) - \tfrac{1}{2}\mathrm{tr}\!\left(\hat{T}_{k:k-1}R_k^{-1}\right) \\
&-\tfrac{1}{2}\log|P_{k:k-1}| - \tfrac{1}{2}(X_k-\hat{X}_{k:k-1})^{\mathrm T}P_{k:k-1}^{-1}(X_k-\hat{X}_{k:k-1}) + c
\end{aligned}$$
where c denotes a constant related to the variables X k and R k .
Substituting (A5) into Equation (20) gives the further derivation of Equation (21):
$$\begin{aligned}
\log q^{(i+1)}(R_k) ={}& \mathrm{E}^{(i)}_{X_k}\!\left[\log p(X_k,R_k,Z_{1:k})\right] + c_{R_k} \\
={}& -\frac{\hat{t}_{k:k-1}+m+2}{2}\log|R_k| - \tfrac{1}{2}\mathrm{E}^{(i)}\!\left[(Z_k-H_kX_k)^{\mathrm T}R_k^{-1}(Z_k-H_kX_k)\right] - \tfrac{1}{2}\mathrm{tr}\!\left(\hat{T}_{k:k-1}R_k^{-1}\right) + c_{R_k} \\
={}& -\frac{\hat{t}_{k:k-1}+m+2}{2}\log|R_k| - \tfrac{1}{2}\mathrm{tr}\!\left[\left(V_k^{(i)}+\hat{T}_{k:k-1}\right)R_k^{-1}\right] + c_{R_k}
\end{aligned}$$
where
$$c_{R_k} = -\tfrac{1}{2}\log|P_{k:k-1}| - \tfrac{1}{2}\mathrm{E}^{(i)}\!\left[(X_k-\hat{X}_{k:k-1})^{\mathrm T}P_{k:k-1}^{-1}(X_k-\hat{X}_{k:k-1})\right] + c.$$
Equation (26) can be derived in a similar way.

References

1. Simon, D. The Discrete-Time Kalman Filter. In Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006; pp. 121–148.
2. Kawase, T.; Hideshi, T.; Ehara, N.; Sasase, I. A Kalman Tracker with a Turning Acceleration Estimator. Electron. Commun. Jpn. 2001, 84, 1–11.
3. Kaur, H.; Sahambi, J.S. Vehicle Tracking in Video Using Fractional Feedback Kalman Filter. IEEE Trans. Comput. Imaging 2016, 4, 550–561.
4. Meng, Z.; Zhou, W.; Gazor, S. Robust Widely Linear Beamforming Using Estimation of Extended Covariance Matrix and Steering Vector. EURASIP J. Wirel. Commun. Netw. 2020, 1, 1–20.
5. Zhang, Y.G.; Dang, Y.F.; Li, N.; Huang, Y.L. An Integrated Navigation Algorithm Based on Distributed Kalman Filter. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 2132–2135.
6. Zhu, W.; Wang, W.; Yuan, G.N. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking. Sensors 2016, 16, 805.
7. Susmita, B.; Dinesh, M. Kalman Filter-Based RAIM for Reliable Aircraft Positioning with GPS and NavIC Constellations. Sensors 2020, 20, 6606.
8. Kumar, G.S.; Ghosh, R.; Ghose, D.; Vengadarajan, A. Guidance of Seekerless Interceptors Using Innovation Covariance Based Tuning of Kalman Filters. J. Guid. Control Dyn. 2017, 40, 603–614.
9. Caccia, M.; Bibuli, M.; Bono, R.; Bruzzone, G. Basic Navigation, Guidance and Control of an Unmanned Surface Vehicle. Auton. Robot. 2008, 25, 349–365.
10. Liu, Y.H.; Fan, X.Q.; Lv, C. An Innovative Information Fusion Method with Adaptive Kalman Filter for Integrated INS/GPS Navigation of Autonomous Vehicles. Mech. Syst. Signal Process. 2018, 100, 605–616.
11. Sun, F.C.; Hu, X.S.; Zou, Y.; Li, S.G. Adaptive Unscented Kalman Filtering for State of Charge Estimation of a Lithium-Ion Battery for Electric Vehicles. Energy 2011, 36, 3531–3540.
12. Bisht, S.; Mahendra, P.S. An Adaptive Unscented Kalman Filter for Tracking Sudden Stiffness Changes. Mech. Syst. Signal Process. 2014, 49, 181–195.
13. Mohamed, A.H. Optimizing the Estimation Procedure in INS/GPS Integration for Kinematic Applications; University of Calgary: Calgary, AB, Canada, 1999.
14. Xu, F.; Su, Y.; Liu, H. Research of Optimized Adaptive Kalman Filtering. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; pp. 1210–1214.
15. Sage, A.P.; Husa, G.W. Adaptive Filtering with Unknown Prior Statistics. In Proceedings of the Joint Automatic Control Conference, Boulder, CO, USA, 5–7 August 1969; pp. 760–769.
16. Mehra, R. Approaches to Adaptive Filtering. IEEE Trans. Autom. Control 1972, 17, 693–698.
17. Zhu, Z.; Liu, S.R.; Zhang, B.T. An Improved Sage–Husa Adaptive Filtering Algorithm. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 5113–5117.
18. Xu, S.Q.; Zhou, H.Y.; Wang, J.Q.; He, Z.M.; Wang, D.Y. SINS/CNS/GNSS Integrated Navigation Based on an Improved Federated Sage–Husa Adaptive Filter. Sensors 2019, 19, 3812.
19. Fu, M.Y.; de Souza, C.E.; Luo, Z.Q. Finite-Horizon Robust Kalman Filter Design. IEEE Trans. Signal Process. 2001, 49, 2103–2112.
20. Mohamed, A.; Schwarz, K. Adaptive Kalman Filtering for INS/GPS. J. Geod. 1999, 73, 193–203.
21. Chen, X.Y.; Shen, C.; Zhang, W.B.; Tomizuka, M.; Xu, Y.; Chiu, K.L. Novel Hybrid of Strong Tracking Kalman Filter and Wavelet Neural Network for GPS/INS During GPS Outages. Measurement 2013, 46, 3847–3854.
22. Chang, G.B.; Liu, M. An Adaptive Fading Kalman Filter Based on Mahalanobis Distance. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2015, 229, 1114–1123.
23. Sarkka, S.; Nummenmaa, A. Recursive Noise Adaptive Kalman Filtering by Variational Bayesian Approximations. IEEE Trans. Autom. Control 2009, 54, 596–600.
24. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z.; Chambers, J.A. A Novel Robust Student's t-Based Kalman Filter. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1545–1554.
25. Tohid, A. Approximate Bayesian Smoothing with Unknown Process and Measurement Noise Covariances. IEEE Signal Process. Lett. 2015, 22, 2450–2454.
26. Särkkä, S.; Hartikainen, J. Variational Bayesian Adaptation of Noise Covariances in Non-Linear Kalman Filtering. arXiv 2013, arXiv:1302.0681v1.
27. Li, Z.F.; Zhang, J.Y.; Wang, J.G.; Zhou, Q.S. Recursive Noise Adaptive Extended Object Tracking by Variational Bayesian Approximation. IEEE Access 2019, 7, 151168–151179.
28. Berger, J.O. Prior Information and Subjective Probability. In Statistical Decision Theory and Bayesian Analysis; Springer: New York, NY, USA, 1985; pp. 74–117.
29. Lindley, D. Kendall's Advanced Theory of Statistics, Volume 2B, Bayesian Inference, 2nd ed. J. R. Stat. Soc. Ser. A 2005, 168, 259–260.
30. Tzikas, D.G.; Likas, A.C.; Galatsanos, N.P. The Variational Approximation for Bayesian Inference. IEEE Signal Process. Mag. 2008, 25, 131–146.
31. Zhou, D.H.; Frank, P. Strong Tracking Kalman Filtering of Nonlinear Time-Varying Stochastic Systems with Coloured Noise: Application to Parameter Estimation and Empirical Robustness Analysis. Int. J. Control 1996, 65, 295–307.
Figure 1. The flowchart of one time-step of the updated monitoring strategy based on maximum a posteriori (MAP) for estimating the process noise covariance matrix (PNCM).
Figure 2. True and estimated trajectories of the target.
Figure 3. The root mean square errors (RMSEs) of the target position and velocity estimation.
Figure 4. The SRNFN of PECM estimation.
Figure 5. The square root of the normalized Frobenius norm (SRNFN) of the measurement noise covariance matrix (MNCM) estimation.
Figure 6. The SRNFN of the PNCM estimation.
Figure 7. The RMSEs of the position and velocity estimation in the case of ρ = 0.85, 0.93, 0.95, 1 − exp(−4), 1.0.
Figure 8. The RMSE of the position and velocity estimation in the case of μ = 0.65, 0.75, 0.85, 0.95, 1.0.
Figure 9. The RMSE of the position and velocity estimation in the case of τ = 0.2 ,   0.4 ,   0.6 ,   0.8 ,   1.0 .
Figure 10. The RMSE of the position and velocity estimation in the case of b = 0.76 ,   0.86 ,   0.96 ,   0.99 .
Figure 11. The ARMSEs of the position and velocity estimation under a combination of ( σ , ε ) [ 0.1 ,   800 ] × [ 0.1 ,   800 ] .
Table 1. Estimated parameters and parameter settings of the existing algorithms.

| Filtering Algorithm | Estimated Parameters | Algorithm Parameter | Value |
| --- | --- | --- | --- |
| SH-KF | X̂_k, P̂_{k:k−1}, P̂_k, Q̂_k | Forgetting factor | 0.96 |
| ML-KF | X̂_k, P̂_{k:k−1}, P̂_k, Q̂_k | Size of sliding window | 150 |
| ST-KF | X̂_k, P̂_{k:k−1}, P̂_k | Forgetting factor | 0.94 |
| R-VBKF | X̂_k, P̂_{k:k−1}, P̂_k, R̂_k | Number of iterations; forgetting factor | 10; 0.98 |

SH-KF, Sage–Husa Kalman filter in [15]; ML-KF, maximum likelihood Kalman filter in [20]; ST-KF, strong-tracking Kalman filter in [21]; R-VBKF, the Kalman filter algorithm that uses variational iteration to recursively estimate R̂_k and X̂_k in [26].
Table 2. Average root mean square error (ARMSE) of various Kalman filter algorithms.

| Filter | E_ARMSE,pos (m) | E_ARMSE,vel (m·s⁻¹) |
| --- | --- | --- |
| FCMKF | 9.853 | 6.348 |
| ML-KF | 4.596 | 29.759 |
| SH-KF | 9.646 | 9.545 |
| R-VBKF | 9.352 | 5.961 |
| ST-KF | 6.157 | 5.653 |
| MFMS-VBAKF | 4.073 | 3.946 |
| TCMKF | 3.649 | 3.253 |
Table 3. Single-step running time of each algorithm.

| Filter | Single-Step Running Time (μs) |
| --- | --- |
| FCMKF | 0.28 |
| ST-KF | 0.41 |
| SH-KF | 0.46 |
| ML-KF | 0.54 |
| R-VBKF | 0.71 |
| MFMS-VBAKF | 0.92 |
Table 4. ASRNFN of PECM estimation of various Kalman filter algorithms.

| Filter | E_ASRNFN,P |
| --- | --- |
| ML-KF | 21.365 |
| ST-KF | 5.573 |
| SH-KF | 4.113 |
| FCMKF | 3.852 |
| R-VBKF | 3.115 |
| MFMS-VBAKF | 2.945 |
Table 5. The averaged SRNFN (ASRNFN) of PNCM estimation of various Kalman filter algorithms.

| Filter | E_ASRNFN,Q |
| --- | --- |
| SH-KF | 4.915 |
| ML-KF | 3.688 |
| FCMKF | 3.357 |
| MFMS-VBAKF | 2.754 |

Share and Cite

Shan, C.; Zhou, W.; Yang, Y.; Jiang, Z. Multi-Fading Factor and Updated Monitoring Strategy Adaptive Kalman Filter-Based Variational Bayesian. Sensors 2021, 21, 198. https://doi.org/10.3390/s21010198
