Multi-Fading Factor and Updated Monitoring Strategy Adaptive Kalman Filter-Based Variational Bayesian

To address the degradation of adaptive Kalman filter estimation when the statistical characteristics of the process and measurement noise covariance matrices are inaccurate and time-varying in a linear Gaussian state-space model, a multi-fading factor and updated monitoring strategy adaptive Kalman filter based on variational Bayesian inference is proposed. The inverse Wishart distribution is selected as the measurement noise model, and the system state vector and measurement noise covariance matrix are estimated with the variational Bayesian method. The process noise covariance matrix is estimated by the maximum a posteriori principle, and an updated monitoring strategy with adjustment factors is used to maintain the positive semi-definiteness of the updated matrix. These optimal estimates are then introduced as time-varying parameters into the multiple fading factors to improve the estimation accuracy of the one-step predicted state error covariance matrix. The proposed algorithm is evaluated in a simulated target-tracking application. The results show that, compared with current filters, the proposed algorithm offers better accuracy and convergence performance, and achieves simultaneous estimation of inaccurate, time-varying process and measurement noise covariance matrices.


Introduction
In many practical engineering applications, the actual values of the required state variables are often not directly available. For example, when a radar detects an airborne target, it can infer the target range from information such as the reflected waves; however, random interference in the detection process introduces random noise into the observed signal. In this case, the required state variables cannot be obtained exactly and can only be estimated or predicted from the observed signal. For linear systems, the Kalman filter is the optimal filter [1]. With the development of computer technology, the computational requirements and complexity of Kalman filtering are no longer obstacles to its application [2]. At present, Kalman filtering theory is widely used in tracking, navigation, guidance, and other areas [3][4][5][6][7][8][9].
Applying the Kalman filter requires prior knowledge of the system's mathematical model and of the statistical characteristics of the noise, yet in many practical applications these are unknown or only partially known [10][11][12]. If an inaccurate mathematical model or inaccurate noise statistics are used to design the Kalman filter, its performance degrades, producing larger estimation errors and even filter divergence. To solve this problem, various adaptive Kalman filters (AKF) have been developed [13,14].
The Sage-Husa filter (SH-KF) is widely used because of its simple algorithm, which can estimate the first and second moments of the noise online [15]. However, like most existing AKF algorithms, it tends to diverge when the process and measurement noise covariance matrices are estimated simultaneously.

Consider the linear Gaussian state-space model:

X_k = Φ_{k−1} X_{k−1} + ω_{k−1}, (1)
Z_k = H_k X_k + v_k, (2)

where (1) and (2) are the process and measurement equations, respectively, k is the discrete time, X_k ∈ R^n is the state vector of the system at time k, and Z_k ∈ R^m is the measurement vector of the corresponding state. Φ_k ∈ R^{n×n} is the state-transition matrix and H_k ∈ R^{m×n} is the measurement matrix. ω_k ∈ R^n and v_k ∈ R^m are uncorrelated zero-mean white Gaussian noises with covariance matrices Q_k and R_k, respectively. The initial state X_0 is assumed to be Gaussian with mean vector X̂_0 and covariance matrix P_0, and X_0 is uncorrelated with ω_k and v_k at any time [1]. For linear Gaussian state-space models, the Kalman filter (KF) is the optimal estimator. If the noise covariance matrices Q_k and R_k are fully known, the KF estimates the state vector X_k from the measurements Z_{1:k} with satisfactory accuracy. However, the performance of the KF depends heavily on this prior knowledge of the noise statistics: if the time-varying Q_k and R_k are unknown or inaccurate, the accuracy of the KF decreases and the estimates may even diverge. Moreover, when most existing AKF algorithms estimate the PNCM Q_k and the MNCM R_k at the same time, the filter diverges. Therefore, a multi-fading factor and updated monitoring strategy variational Bayesian-based AKF for inaccurate, time-varying PNCM and MNCM is proposed.
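For reference, one predict/update cycle of the standard KF for the model (1)-(2) can be sketched as follows; the matrices passed in are illustrative placeholders, not values from the simulation section.

```python
import numpy as np

def kf_step(x, P, z, Phi, H, Q, R):
    """One predict/update cycle of the standard Kalman filter."""
    # Time update (prediction)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # Measurement update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    resid = z - H @ x_pred                # innovation (residual)
    x_new = x_pred + K @ resid
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
```

The recursion is exact only when Q and R match the true noise statistics, which is precisely the assumption the adaptive schemes below relax.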

The Proposed Multi-Fading Factor and Updated Monitoring Strategy AKF-Based Variational Bayesian
In the VBAKF algorithm, the independent state vector X k and the measurement noise covariance matrix R k are regarded as the parameters to be estimated.

Prediction Process and Distribution Selection
In the traditional Kalman filter framework, Gaussian distributions are selected for the one-step predicted probability density function (PDF) p(X_k|Z_{1:k−1}) and the likelihood PDF p(Z_k|X_k):

p(X_k|Z_{1:k−1}) = N(X_k; X̂_{k:k−1}, P_{k:k−1}), p(Z_k|X_k) = N(Z_k; H_k X_k, R_k), (4)

where N(G; µ, Σ) is the Gaussian distribution with mean µ and covariance Σ. The PDF of the Gaussian distribution is:

N(G; µ, Σ) = (2π)^{−d/2} |Σ|^{−1/2} exp[ −0.5 (G − µ)^T Σ^{−1} (G − µ) ], (5)

where d is the dimension of G. According to Equation (1), the predicted state vector X̂_{k:k−1} and the corresponding one-step predicted error covariance matrix (PECM) P_{k:k−1} can be written as:

X̂_{k:k−1} = Φ_{k−1} X̂_{k−1:k−1}, (6)
P_{k:k−1} = Φ_{k−1} P_{k−1:k−1} Φ_{k−1}^T + Q_{k−1}, (7)

where X̂_{k−1:k−1} and P_{k−1:k−1} denote the state estimate at time k − 1 and the corresponding estimation error covariance matrix, respectively, and (.)^T denotes the matrix transpose. It is assumed that the true PNCM Q_k is unknown due to complex environmental factors in real applications, which makes P_{k:k−1} in Equation (7) inaccurate. The estimation methods for Q_k and P_{k:k−1} are given in the next two sections. In this section, the aim is to infer X_k together with R_k. For this purpose, a conjugate prior distribution must first be selected for the inaccurate MNCM R_k, since conjugacy ensures that the posterior and prior distributions keep the same functional form.
According to Bayesian statistical theory, if a Gaussian distribution has a known mean, the conjugate prior of its covariance matrix is the inverse Wishart (IW) distribution [28]. Let C^{−1} be the inverse of a positive definite matrix C. If C^{−1} follows the Wishart distribution W(C^{−1}; λ, Ψ^{−1}), then C follows the IW distribution:

IW(C; λ, Ψ) = |Ψ|^{λ/2} |C|^{−(λ+d+1)/2} exp[ −0.5 tr(Ψ C^{−1}) ] / [ 2^{λd/2} Γ_d(λ/2) ], (8)

In Equation (8), C is a symmetric positive definite random matrix, λ is a degrees-of-freedom (dof) parameter, Ψ is a symmetric positive definite scale matrix, d is the dimension of C, Γ_d(.) is the multivariate gamma function, tr[.] is the matrix trace, and E[.] stands for mathematical expectation [29]. Since R_k is the covariance matrix of a Gaussian PDF, the prior distribution p(R_k|Z_{1:k−1}) can be written as an IW distribution:

p(R_k|Z_{1:k−1}) = IW(R_k; t̂_{k:k−1}, T̂_{k:k−1}), (9)

where t̂_{k:k−1} and T̂_{k:k−1} denote the dof parameter and scale matrix of p(R_k|Z_{1:k−1}), respectively. Next, the values of t̂_{k:k−1} and T̂_{k:k−1} need to be assigned.
Owing to the Bayesian theorem, the prior distribution p(R_k|Z_{1:k−1}) can be written as:

p(R_k|Z_{1:k−1}) = ∫ p(R_k|R_{k−1}) p(R_{k−1}|Z_{1:k−1}) dR_{k−1}, (10)

where p(R_{k−1}|Z_{1:k−1}) is the posterior PDF of the MNCM R_{k−1}. Utilizing (9), since the prior distribution of the MNCM R_{k−1} is selected as inverse Wishart, the posterior PDF p(R_{k−1}|Z_{1:k−1}) is also inverse Wishart and can be written as:

p(R_{k−1}|Z_{1:k−1}) = IW(R_{k−1}; t̂_{k−1:k−1}, T̂_{k−1:k−1}). (11)

To guarantee that p(R_k|Z_{1:k−1}) also obeys an inverse Wishart distribution, a changing factor ρ is introduced to modify the one-step predicted values of the distribution parameters t̂_{k:k−1} and T̂_{k:k−1}:

t̂_{k:k−1} = ρ (t̂_{k−1:k−1} − m − 1) + m + 1, T̂_{k:k−1} = ρ T̂_{k−1:k−1}, (12)

where m is the dimension of Z_k and ρ ∈ (0, 1]. In this way, the time-varying measurement noise covariance matrix can change with a certain probability while the posterior and prior PDFs keep the same distribution family. In addition, the initial PDF of the MNCM R_0 is also assumed to be inverse Wishart, p(R_0) = IW(R_0; t̂_{0:0}, T̂_{0:0}). At the initial moment, to encode the prior information on the measurement noise, the mean of R_0 is set to the initial fixed measurement noise covariance matrix R̄_0, i.e.,

T̂_{0:0} = (t̂_{0:0} − m − 1) R̄_0. (13)

Assuming that the prior joint PDF of the state variable and the MNCM is the product of a Gaussian and an inverse Wishart distribution, the prediction step can be written as:

p(X_k, R_k|Z_{1:k−1}) = N(X_k; X̂_{k:k−1}, P_{k:k−1}) IW(R_k; t̂_{k:k−1}, T̂_{k:k−1}). (14)

To estimate the state X_k and the MNCM R_k, their joint posterior PDF p(X_k, R_k|Z_{1:k}) needs to be calculated; however, its analytical solution cannot be obtained directly. The variational Bayesian method is used to find an approximate PDF of free form as follows [30]:

p(X_k, R_k|Z_{1:k}) ≈ q(X_k) q(R_k), (15)

where q(.) denotes the approximate posterior PDF of p(.).
In the standard VB method, the Kullback-Leibler divergence (KLD) is used to measure the degree of approximation between the approximate posterior PDF and the true posterior PDF, and the optimal solution is obtained by minimizing the KLD; the VB method then provides a closed-form solution for the approximate posterior PDF. The VB approximation is formed by minimizing the KLD between the approximate posteriors q(X_k), q(R_k) and the true joint posterior p(X_k, R_k|Z_{1:k}) [30]:

{q(X_k), q(R_k)} = arg min KLD( q(X_k) q(R_k) || p(X_k, R_k|Z_{1:k}) ), (16)

where the divergence function KLD(.) is defined as:

KLD( q(x) || p(x) ) = ∫ q(x) log [ q(x) / p(x) ] dx. (17)

Combining Equations (15)-(17), the optimal solution of Equation (16) is derived as:

log q(X_k) = E_{R_k}[ log p(X_k, R_k, Z_{1:k}) ] + c_{X_k}, (19)
log q(R_k) = E_{X_k}[ log p(X_k, R_k, Z_{1:k}) ] + c_{R_k}, (20)

where log(.) stands for the natural logarithm, E_ϕ[.] denotes the expectation with respect to the approximate posterior PDF of the variable ϕ, and c_{X_k} and c_{R_k} are constants with respect to X_k and R_k, respectively. Equations (19) and (20) cannot be solved directly since q(X_k) and q(R_k) are coupled; therefore, the fixed-point iteration method is introduced to calculate these parameters. A further form of Equation (20) can be derived as (see Appendix A for details):

log q^{(i+1)}(R_k) = −0.5 (t̂_{k:k−1} + m + 2) log|R_k| − 0.5 tr[ (T̂_{k:k−1} + E^{(i)}[(Z_k − H_k X_k)(Z_k − H_k X_k)^T]) R_k^{−1} ] + C_{R_k}, (21)

where q^{(i)}(.) represents the approximate distribution of q(.) at the i-th iteration, tr(.) is the matrix trace, C_{R_k} is a constant related to R_k that is independent of the distribution form, and m is the dimension of the measurement vector.
The expectation part of Equation (21) is defined as V_k^{(i)} and expands as:

V_k^{(i)} = E^{(i)}[(Z_k − H_k X_k)(Z_k − H_k X_k)^T] = (Z_k − H_k X̂_k^{(i)})(Z_k − H_k X̂_k^{(i)})^T + H_k P_k^{(i)} H_k^T. (22)

It can be seen that q^{(i+1)}(R_k) obeys a new inverse Wishart distribution:

q^{(i+1)}(R_k) = IW(R_k; t̂_k^{(i+1)}, T̂_k^{(i+1)}), (23)

whose distribution parameters are, respectively:

t̂_k^{(i+1)} = t̂_{k:k−1} + 1, T̂_k^{(i+1)} = T̂_{k:k−1} + V_k^{(i)}. (24)

Similarly, the logarithmic expression of the approximate distribution of the system state X_k is:

log q^{(i+1)}(X_k) = −0.5 (Z_k − H_k X_k)^T E^{(i+1)}[R_k^{−1}] (Z_k − H_k X_k) − 0.5 (X_k − X̂_{k:k−1})^T P_{k:k−1}^{−1} (X_k − X̂_{k:k−1}) + c_{X_k}, (25)

where E^{(i+1)}[R_k^{−1}] is given by:

E^{(i+1)}[R_k^{−1}] = t̂_k^{(i+1)} (T̂_k^{(i+1)})^{−1}, (26)

so that q^{(i+1)}(X_k) is Gaussian:

q^{(i+1)}(X_k) = N(X_k; X̂_k^{(i+1)}, P_k^{(i+1)}). (27)

The likelihood PDF p(Z_k|X_k) in Equation (4) after the (i+1)-th iteration can be derived as:

p^{(i+1)}(Z_k|X_k) = N(Z_k; H_k X_k, R̂_k^{(i+1)}), (28)

where the corrected measurement noise covariance matrix (MNCM) R̂_k^{(i+1)} can be written as:

R̂_k^{(i+1)} = { E^{(i+1)}[R_k^{−1}] }^{−1} = T̂_k^{(i+1)} / t̂_k^{(i+1)}. (29)

Since q(X_k) obeys the Gaussian distribution q(X_k) = N(X_k; X̂_k, P_k), combining with the standard Kalman filter framework, the gain matrix and the state update of the variational measurement update are corrected as, respectively:

K_k^{(i+1)} = P_{k:k−1} H_k^T (H_k P_{k:k−1} H_k^T + R̂_k^{(i+1)})^{−1}, (30)
X̂_k^{(i+1)} = X̂_{k:k−1} + K_k^{(i+1)} (Z_k − H_k X̂_{k:k−1}), (31)
P_k^{(i+1)} = P_{k:k−1} − K_k^{(i+1)} H_k P_{k:k−1}. (32)

Analyzing the above derivation, the implicit solution of the variational update is constituted by Equations (22), (24), (25), and (29)-(32). The expectation-maximization approach is used to iteratively compute q(X_k) and q(R_k), continuously updating the estimates of X_k and R_k. As q(X_k) and q(R_k) approach p(X_k, R_k|Z_{1:k}), the KLD value of Equation (17) becomes smaller, and the estimates adaptively approach the true values until the variational update iteration finishes. The optimal estimates at time k are then (N is the number of fixed-point iterations):

X̂_k = X̂_k^{(N)}, P_k = P_k^{(N)}, (33)
R̂_k = R̂_k^{(N)}. (34)

When some existing adaptive filtering methods estimate the process noise covariance matrix (PNCM) Q_k and the measurement noise covariance matrix (MNCM) R_k at the same time, the accuracy of the state estimate X̂_k easily decreases or the filter even diverges. This is caused by Q_k becoming a negative definite matrix during the estimation process [17].
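A minimal sketch of the fixed-point variational measurement update described above; the iteration count and scalar dimensions are illustrative, and the IW parameterization (in particular the dof convention behind the corrected MNCM) follows the reconstruction in this text, so it should be checked against the original paper.

```python
import numpy as np

def vb_measurement_update(x_pred, P_pred, z, H, t_pred, T_pred, n_iter=10):
    """Fixed-point VB iteration refining q(X_k) and q(R_k) = IW(t, T)."""
    m = z.shape[0]
    x, P = x_pred.copy(), P_pred.copy()
    t = t_pred + 1.0                 # dof update: iteration-independent
    T = T_pred.copy()
    for _ in range(n_iter):
        # Update q(R_k): expected squared residual under current q(X_k)
        r = z - H @ x
        V = np.outer(r, r) + H @ P @ H.T
        T = T_pred + V               # scale-matrix update
        R_hat = T / t                # corrected MNCM under this dof convention
        # Update q(X_k): standard KF update using the corrected R_hat
        S = H @ P_pred @ H.T + R_hat
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (z - H @ x_pred)
        P = P_pred - K @ H @ P_pred
    return x, P, t, T
```

Each pass tightens both factors: the residual statistics reshape q(R_k), and the refreshed R_hat re-weights the Kalman gain for q(X_k).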
To achieve simultaneous estimation of the PNCM Q_k and the MNCM R_k while improving the estimation accuracy of the state vector X_k, an updated monitoring strategy based on the maximum a posteriori (MAP) principle for estimating the PNCM Q_k is proposed.
According to the state-space model (1) and (2), paper [15] gives the MAP suboptimal unbiased estimator of the PNCM Q_k based on the noise statistics of the measurements {Z_1, Z_2, Z_3, ..., Z_k}:

Q̂_k = [1/(k+1)] Σ_{s=0}^{k} [ K_s γ_s γ_s^T K_s^T + P_s − Φ_{s−1} P_{s−1} Φ_{s−1}^T ], (35)

where γ_s = Z_s − H_s X̂_{s:s−1} is the residual (innovation) at time s. From a statistical point of view, Equation (35) is an arithmetic average: every term carries the weight coefficient 1/(k + 1). However, when estimating a time-varying process noise covariance matrix, the role of the latest information should be highlighted, which can be achieved by multiplying each term in Σ_{s=0}^{k}[.] by a different weighting coefficient.
Owing to the exponential weighting method, different weights can be assigned to each term in Σ_{s=0}^{k}[.], increasing the weight of the latest information and thereby improving the accuracy of the Q̂_k estimate. The weighting coefficients {ϑ_s} are selected to satisfy:

ϑ_s = b ϑ_{s+1}, 0 < b < 1, Σ_{s=0}^{k} ϑ_s = 1, (36)

so that newer terms receive larger weights; this can be further deduced as:

d_k = (1 − b)/(1 − b^{k+1}), (37)

where b is the attenuation factor. Each term in the [.] of Equation (35) is multiplied by the weight coefficient d_k instead of the original weight 1/(k + 1). The resulting time-varying process noise covariance matrix (PNCM) estimator is obtained in recursive form:

Q̂_k = (1 − d_k) Q̂_{k−1} + d_k [ K_k γ_k γ_k^T K_k^T + P_k − Φ_{k−1} P_{k−1} Φ_{k−1}^T ]. (38)

Equations (7), (30)-(32), and (38) constitute a VBAKF algorithm that simultaneously estimates the PNCM Q_k, the MNCM R_k, the state vector X_k, and the one-step predicted state error covariance matrix (PECM) P_{k:k−1}.
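The weighting coefficient d_k = (1 − b)/(1 − b^{k+1}) and the resulting fading-memory average can be sketched as follows; the scalar samples stand in for the bracketed matrix terms of Equation (35).

```python
def fading_weight(k, b=0.96):
    """Exponentially decaying weight d_k = (1 - b) / (1 - b**(k + 1))."""
    return (1.0 - b) / (1.0 - b ** (k + 1))

def fading_average(samples, b=0.96):
    """Recursive fading-memory average: newest samples carry the most weight."""
    avg = None
    for k, s in enumerate(samples):
        d = fading_weight(k, b)
        avg = s if avg is None else (1.0 - d) * avg + d * s
    return avg
```

For a constant sequence the average recovers that constant, while after a step change it tracks the new level far faster than the 1/(k + 1) arithmetic mean, which is exactly the behavior wanted for time-varying noise.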
However, through simulation experiments, the estimation of the PNCM Q k by the above algorithm is prone to abnormality; that is, it loses positive semi-definiteness, which leads to filtering divergence.
To solve this problem, without discarding the information in the original process noise estimator (38), an updated monitoring strategy for the process noise parameters is designed. First, it is judged whether Q̂_k calculated by Equation (38) is a positive semi-definite matrix. If not, an adjustment factor β is introduced to update the process noise estimate so that the corrected PNCM meets the requirement.
The right-hand side of Equation (38) can be rearranged by isolating the covariance-difference term:

D_k = P_k − Φ_{k−1} P_{k−1} Φ_{k−1}^T. (39)

Generally speaking, the initial value of the state error covariance matrix P_0 is imprecise and deviates greatly from the ideal value in the initial stage of filtering, so the magnitude of the theoretical process-noise term D_k determined by Equation (39) is much larger than γ_k γ_k^T. This easily causes Q̂_k to lose positive semi-definiteness, and then the filter diverges. It is therefore necessary to introduce an adjustment factor that attenuates the effect of the state error covariance term at the initial moments, avoiding an indefinite PNCM estimate and preventing filter divergence. The updated monitoring strategy of the PNCM is:

Q̂_k = (1 − d_k) Q̂_{k−1} + d_k [ K_k γ_k γ_k^T K_k^T + β_k^{−p} D_k ], (40)

where p ≥ 1 is a positive integer (the initial value is 1) and β_k is the adjustment factor, whose value is determined from the state error covariance matrix. The specific process of the updated monitoring strategy is as follows: monitor the process noise covariance matrix calculated by Equation (38), and judge whether Q̂_k is positive semi-definite to decide whether it needs to be updated. If no update is needed, output Q̂_k. Otherwise, set p = 1 and recompute Q̂_k with Equation (40). Continue monitoring the updated estimate to determine whether a further update is necessary; if so, set p = p + 1 and recompute Q̂_k with Equation (40). The loop executes until Q̂_k is positive semi-definite, which ends the update of the process noise covariance matrix at the current moment. The flowchart of one time-step of the updated monitoring strategy is shown in Figure 1.
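The monitoring loop can be sketched as follows. The β**(-p) attenuation of the D_k term mirrors the strategy described above, but the exact form of the paper's update equation, and of β_k itself (supplied here as a constant argument), is an assumption.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Positive semi-definiteness test via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh((M + M.T) / 2.0)) >= -tol

def monitored_Q_update(Q_prev, d_k, K, resid, D_k, beta, max_p=20):
    """Recompute Q_hat with D_k attenuated by beta**(-p) until it is PSD.

    beta**(-p) is an assumed attenuation; the paper's Equation for the
    adjustment factor may differ in form.
    """
    G = K @ np.outer(resid, resid) @ K.T           # residual-driven term
    Q_hat = (1.0 - d_k) * Q_prev + d_k * (G + D_k)
    p = 1
    while not is_psd(Q_hat) and p <= max_p:
        Q_hat = (1.0 - d_k) * Q_prev + d_k * (G + D_k / beta ** p)
        p += 1
    return Q_hat
```

When the initial estimate is already positive semi-definite the loop never runs, so the original estimator (38) is left untouched, matching the monitoring logic above.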
So far, combined with the traditional Kalman filter framework, the VBAKF algorithm with updated monitoring strategy is derived to estimate system state vector X k , the process noise covariance matrix (PNCM) Q k , measurement noise covariance matrix (MNCM) R k , one-step predicted state error covariance matrix (PECM) P k:k−1 , and state error covariance matrix P k at the same time.

Improved by Introducing Multiple Fading Factors
If the statistical characteristics of the process and measurement noise are time-varying, the convergence of VBAKF slows down and a noticeable error appears in the estimation result, which is reflected in the residual sequence γ_k [31].
In view of this, to improve the estimation accuracy, the multiple fading factor matrix L_md is introduced to correct the one-step predicted error covariance matrix (PECM) P_{k:k−1}. Equation (7) is rewritten as:

P*_{k:k−1} = L_md Φ_{k−1} P_{k−1} Φ_{k−1}^T L_md^T + Q̂_{k−1}, (43)

which adjusts the gain K_k^{(N)} in real time to keep the residual sequence γ_k orthogonal, forcing the filter to keep track of the actual state of the system and thereby improving its tracking ability [31]. The multiple fading factor matrix L_md is calculated as:

L_md = diag(λ_k^1, λ_k^2, ..., λ_k^n), λ_k^i = max(1, λ_{0i,k}), (44)
λ_{0i,k} = α_i · G_k, (45)

where n is the dimension of the state vector X_k and the values of α_i are determined by the system prior information. The formula for G_k is:

G_k = tr(N_k) / Σ_{i=1}^{n} α_i M_k^{ii}, (46)

where M_k^{ii} is the i-th element of the main diagonal of M_k, and M_k and N_k are calculated as:

M_k = Φ_{k−1} P_{k−1} Φ_{k−1}^T H_k^T H_k, (47)
N_k = B_k − H_k Q̂_{k−1} H_k^T − τ R̂_k. (48)

In Equation (48), τ is the weakening factor and the innovation covariance B_k is unknown; it can be estimated by:

B_k = γ_1 γ_1^T for k = 1, B_k = (µ B_{k−1} + γ_k γ_k^T) / (1 + µ) for k > 1, (49)

where µ ∈ (0, 1] is the forgetting factor. Since L_md is used to correct the one-step predicted error covariance matrix (PECM) in the prediction step of the filtering algorithm, the initial value R̂_1 of the MNCM R̂_k must be set in advance, assuming that R̂_1 also obeys the inverse Wishart distribution. The estimated value of the measurement noise covariance matrix in the time-update step is defined as:

R̂_k = R̂_{k−1}, k > 1. (51)

In Equation (51), because the variation range of the slowly time-varying measurement noise covariance matrix is small, the estimate R̂_{k−1} from the previous time still has great reference value for the estimation at the current time.
Therefore, R̂_{k−1}, estimated by the variational update recursion at the previous time, is used as R̂_k at time k > 1. R̂_k serves as the time-varying parameter of L_md to modify the one-step predicted error covariance matrix (PECM) P*_{k:k−1} more accurately, and an accurate P*_{k:k−1} directly affects the accuracy of the estimation results of the variational iteration recursion.
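Under the formulas as reconstructed above (the exact N_k, M_k, and G_k definitions should be checked against the original paper), the fading-factor computation can be sketched as:

```python
import numpy as np

def multi_fading_matrix(B, H, Phi, P, Q, R, alpha, tau=1.0):
    """Diagonal multiple-fading-factor matrix L_md = diag(max(1, alpha_i * G_k)).

    B is the estimated innovation covariance; N_k / M_k follow the
    strong-tracking construction as reconstructed in the text.
    """
    N = B - H @ Q @ H.T - tau * R                 # new information in the residual
    M = Phi @ P @ Phi.T @ H.T @ H                 # predicted-covariance contribution
    denom = float(np.sum(np.asarray(alpha) * np.diag(M)))
    G = np.trace(N) / denom
    lam = np.maximum(1.0, np.asarray(alpha) * G)  # fading only when residuals grow
    return np.diag(lam)

def update_innov_cov(B_prev, resid, mu=0.95, first=False):
    """Fading-memory estimate of the innovation covariance B_k."""
    rr = np.outer(resid, resid)
    return rr if first else (mu * B_prev + rr) / (1.0 + mu)
```

With small residuals G_k falls below 1 and L_md collapses to the identity, so the corrected PECM reduces to the ordinary prediction (7); large residuals inflate the prediction covariance and restore tracking.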

Simulations and Results
The application of the proposed algorithm to target tracking is simulated. The target moves according to the continuous white-noise-acceleration motion model in a two-dimensional Cartesian coordinate system, and sensors collect the target position.
The system state is defined as x_k = [x_k, y_k, ẋ_k, ẏ_k]^T, where (x_k, y_k) and (ẋ_k, ẏ_k) denote the Cartesian position and the corresponding velocity, respectively [24,25]. The state transition matrix Φ_{k−1} and the measurement matrix H_k are respectively set as:

Φ_{k−1} = [ I_2, Δt I_2; 0, I_2 ], H_k = [ I_2, 0 ],

where the parameter Δt = 1 s represents the sampling interval and I_n represents the n-dimensional identity matrix. Similar to [25], the true process noise covariance matrix (PNCM) and measurement noise covariance matrix (MNCM) are set as slowly time-varying models. The simulation environment is set as follows: T = 1000 s is the total simulation time, q is a parameter related to the process noise, and r is a parameter related to the measurement noise. The fixed (nominal) PNCM and MNCM are set as Q_k = σ I_4 and R_k = ε I_2, respectively, where σ and ε are the prior confidence parameters used to adjust the initial fixed noise covariance matrices.
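The simulation model can be set up as follows; the CWNA process-noise structure in Q_cwna is a common textbook choice, and the paper's exact slowly time-varying noise profiles are not reproduced here.

```python
import numpy as np

dt = 1.0  # sampling interval (s)
I2 = np.eye(2)

# CWNA model: state [x, y, vx, vy], position-only measurements
Phi = np.block([[I2, dt * I2],
                [np.zeros((2, 2)), I2]])
H = np.block([I2, np.zeros((2, 2))])

def Q_cwna(q):
    """Standard continuous white-noise-acceleration covariance, intensity q."""
    return q * np.block([[dt ** 3 / 3 * I2, dt ** 2 / 2 * I2],
                         [dt ** 2 / 2 * I2, dt * I2]])
```

A slowly time-varying truth can then be produced by modulating q (and the measurement intensity r) over the T = 1000 s horizon.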
To evaluate the accuracy of the state estimation, the root mean square error (RMSE) and average RMSE (ARMSE) of position and velocity are used as performance indicators, defined as:

RMSE_pos(k) = sqrt( (1/M) Σ_{s=1}^{M} [ (x_k^s − x̂_k^s)² + (y_k^s − ŷ_k^s)² ] ),
ARMSE_pos = (1/T) Σ_{k=1}^{T} RMSE_pos(k),

where x_k^s, x̂_k^s and y_k^s, ŷ_k^s respectively represent the true and estimated positions in the s-th Monte Carlo experiment, and M = 1000 is the total number of Monte Carlo runs. The RMSE and ARMSE of the corresponding velocity are computed analogously. The proposed algorithm is compared with SH-KF, the Sage-Husa Kalman filter in [15]; MLKF, the maximum likelihood Kalman filter in [20]; ST-KF, the strong-tracking Kalman filter in [21]; and R-VBKF, the Kalman filter that uses variational iteration to recursively estimate the state and MNCM in [26].

True and estimated trajectories of the target are shown in Figure 2. It can be seen that, compared with the existing adaptive filtering algorithms, the proposed filter maintains excellent target-tracking performance throughout the entire process, with a faster convergence speed and higher accuracy. To further elaborate on the advantages of the proposed algorithm, Table 2 lists the ARMSE of the different KF filtering algorithms. According to the data in Table 2, in comparison with the other AKF algorithms, the MFMS-VBAKF algorithm has the smallest ARMSE and the highest accuracy in estimating target position and velocity.
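The RMSE/ARMSE estimators above can be sketched directly over arrays of Monte Carlo runs:

```python
import numpy as np

def rmse_pos(x_true, x_est, y_true, y_est):
    """Position RMSE per time step; Monte Carlo runs are along axis 0."""
    sq = (x_true - x_est) ** 2 + (y_true - y_est) ** 2
    return np.sqrt(np.mean(sq, axis=0))

def armse(rmse_series):
    """Average RMSE over the simulation horizon."""
    return float(np.mean(rmse_series))
```

The same two functions apply unchanged to the velocity components.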
To compare computational complexity with the existing algorithms, Table 3 lists the single-step running time of each algorithm. Compared with the R-VBKF using the variational Bayesian method, the proposed MFMS-VBAKF increases the single-step running time by 0.21 µs: the multi-fading factors and the updated monitoring strategy buy a substantial increase in estimation accuracy at the cost of somewhat higher computational complexity.

To evaluate the accuracy of the estimates of the one-step predicted state error covariance matrix (PECM) and the noise covariance matrices (PNCM, MNCM), the square root of the normalized Frobenius norm (SRNFN) and the averaged SRNFN (ASRNFN) are used as error measures, defined as:

SRNFN(k) = [ (1/M) Σ_{s=1}^{M} ||P̂_{k:k−1}^s − P_{o,k:k−1}^s||_F² / n² ]^{1/4},
ASRNFN = (1/T) Σ_{k=1}^{T} SRNFN(k),

where P_{o,k:k−1}^s and P̂_{k:k−1}^s represent the true and estimated values of the noise covariance matrix or one-step predicted state error covariance matrix in the s-th Monte Carlo experiment, respectively. The SRNFN and ASRNFN of the PECM estimates are shown in Figure 4 and Table 4, respectively. It can be clearly seen that, when the noise covariance matrices are slowly time-varying, the SRNFN of the MFMS-VBAKF algorithm is smaller than that of the existing adaptive KF algorithms. Compared with R-VBKF, which has similar performance, the ASRNFN of MFMS-VBAKF is reduced by 5.45%.

Figure 5 shows the SRNFN of the measurement noise covariance matrix (MNCM) estimates. The MFMS-VBAKF algorithm has the strongest tracking ability, the highest estimation accuracy, and the fastest convergence in estimating the slowly time-varying measurement noise covariance matrix. Figure 6 and Table 5 respectively show the SRNFNs and ASRNFNs of the PNCM estimates from the existing filters and the MFMS-VBAKF algorithm. Both the SRNFN and the ASRNFN of the proposed MFMS-VBAKF are smaller than those of the current filters; thus, MFMS-VBAKF achieves better estimation accuracy and satisfactory convergence speed in PNCM estimation.

Next, the influence of the four critical parameters of the MFMS-VBAKF algorithm (changing factor ρ, forgetting factor µ, weakening factor τ, and attenuation factor b) on the estimation results is compared and analyzed.

To test the robustness of the adaptive correction capability of the MFMS-VBAKF algorithm when the fixed noise covariance matrices are given different initial values, the prior confidence parameters σ and ε are varied in combination over the grid area (σ, ε) ∈ [0.1, 800] × [0.1, 800]. The ARMSEs of position and velocity estimated by the algorithm are displayed in Figure 10. It can be seen from Figure 11 that the ARMSEs of position and velocity estimation are flat over a large area of the grid, with estimation results close to the actual values. However, in the extremely narrow area at the right edge of the grid, the initial settings of the fixed noise covariance matrices differ too much from the actual values, leading to unsatisfactory estimation results; this is caused by the variational Bayesian method guaranteeing only local convergence. In general, the estimates of the MFMS-VBAKF algorithm converge to near the actual values, showing excellent robustness.
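A sketch of the SRNFN/ASRNFN measures as reconstructed above; the 1/n² normalization and the fourth root are assumptions about the exact definition used in the paper.

```python
import numpy as np

def srnfn(P_true, P_est):
    """SRNFN at one time step.

    Fourth root of the Monte Carlo mean of ||P_est - P_true||_F^2 / n^2;
    inputs have shape (M, n, n), with M Monte Carlo runs.
    """
    diffs = P_est - P_true
    n = P_true.shape[-1]
    frob2 = np.sum(diffs ** 2, axis=(-2, -1)) / n ** 2
    return float(np.mean(frob2) ** 0.25)

def asrnfn(srnfn_series):
    """ASRNFN: time average of the per-step SRNFN values."""
    return float(np.mean(srnfn_series))
```

The same measure is applied in turn to the PECM, PNCM, and MNCM estimates reported in Figures 4-6.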

Conclusions
This paper presents a multi-fading factor and updated monitoring strategy AKF-based variational Bayesian filter for inaccurate, time-varying process and measurement noise covariance matrices. The measurement noise model is defined as an inverse Wishart distribution, and the variational Bayesian method is used to recursively estimate the measurement noise covariance matrix and the system state, whose estimates can both approach their true values. The process noise covariance matrix is estimated by the maximum a posteriori principle, and the updated monitoring strategy with adjustment factors guarantees the positive semi-definiteness of the updated matrix. The measurement noise covariance estimate obtained by the variational iteration recursion and the process noise covariance estimate obtained by the updated monitoring strategy are used as time-varying parameters of the multiple fading factors, yielding a more accurate state prediction error covariance. Variational Bayesian inference, the updated monitoring strategy, and the multiple fading factors complement one another, which not only enhances the responsiveness of target tracking but also improves the estimation accuracy of the variational iteration recursion. The simulation results show that the proposed MFMS-VBAKF algorithm realizes simultaneous estimation of the process noise covariance matrix and the measurement noise covariance matrix, and achieves satisfactory results in terms of estimation accuracy, convergence performance, and robustness.

Future Work
Compared with existing filtering algorithms, the MFMS-VBAKF algorithm proposed in this paper exhibits excellent performance in state estimation for linear systems with inaccurate, time-varying process and measurement noise covariance matrices. However, in real applications the influence of the system's control input should not be underestimated: control inputs can cause large changes in the noise, which may even become non-Gaussian. Moreover, most systems in real applications are nonlinear. In these cases MFMS-VBAKF is not applicable. Therefore, in future work, the theory of this paper will be further extended to the problem of nonlinear adaptive Kalman filtering, and the problem of nonlinear AKF with an inaccurate state transition matrix and non-Gaussian noise distributions will be further studied.