Article

An Adaptive Robust Event-Triggered Variational Bayesian Filtering Method with Heavy-Tailed Noise

1 Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
2 Shanghai Research Institute for Intelligent Autonomous Systems, Shanghai 201210, China
3 Department of Automation, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3130; https://doi.org/10.3390/s25103130
Submission received: 2 April 2025 / Revised: 8 May 2025 / Accepted: 9 May 2025 / Published: 15 May 2025
(This article belongs to the Special Issue Advances in Wireless Sensor Networks for Smart City)

Abstract: Event-triggered state estimation has attracted significant attention due to the advantage of efficiently utilizing communication resources in wireless sensor networks. In this paper, an adaptive robust event-triggered variational Bayesian filtering method is designed for heavy-tailed noise with inaccurate nominal covariance matrices. The one-step state prediction probability density function and the measurement likelihood function are modeled as Student's t-distributions. By choosing inverse Wishart priors, the system state, the prediction error covariance, and the measurement noise covariance are jointly estimated based on the variational Bayesian inference and the fixed-point iteration. In the proposed filtering algorithm, the system states and the unknown covariances are adaptively updated by taking advantage of the event-triggered probabilistic information and the transmitted measurement data in the cases of non-transmission and transmission, respectively. The tracking simulations show that the proposed filtering method achieves good and robust estimation performance with low communication overhead.

1. Introduction

State estimation and filtering for dynamic systems have great practical significance in signal processing, communication systems, control engineering, and related fields [1]. State estimation techniques have been widely used in numerous practical applications, such as navigation [2], target tracking [3], and intelligent vehicles [4]. The classical Kalman filter (KF) provides the optimal state estimate for linear Gaussian systems under the minimum mean squared error (MMSE) criterion [5]. However, practical systems often involve incomplete information and non-Gaussian features, which cause the Kalman filter to fail.
Constrained by limited communication resources, the event-triggered scheduling mechanism has emerged as a promising transmission paradigm for optimizing resource utilization while maintaining estimation and control performance by mitigating unnecessary system activations [6,7]. However, the incomplete information resulting from event-triggered scheduling complicates the design of state estimators. Consequently, the design of event-triggered state estimators has received much attention recently, including the deterministic event-triggered scheme [8] and the stochastic event-triggered scheme [9]. Most of the literature on event-triggered state estimation assumes that the system noise is Gaussian with known statistical information. This assumption makes the above filters poorly robust to outliers. As a common type of non-Gaussian phenomenon, outliers are of enormous practical significance inasmuch as they occur relatively often [10]. Noise corrupted by outliers has heavy-tailed features, which cannot be captured by a Gaussian distribution [11]. Due to its advantages in dealing with intractable posterior probability density functions (PDFs), variational Bayesian inference (VBI) has become an effective approximation method for deriving robust estimators.
Motivated by the above discussion, in this paper, we study the robust state estimation with the heavy-tailed process and measurement noise for the stochastic event-triggered scheduling scheme. An adaptive robust event-triggered filtering algorithm is presented based on the VBI approach and the fixed-point iteration. The contributions of this paper are summarized as follows:
1. In the stochastic event-triggered state estimation problem, heavy-tailed process and measurement noise with inaccurate nominal noise covariances are considered. Student's t-distributions and inverse Wishart distributions are adopted to model the one-step prediction and measurement likelihood PDFs and the unknown covariance matrices, respectively.
2. A robust event-triggered variational Bayesian filtering method is proposed to jointly estimate the system state together with the prediction error covariance and the measurement noise covariance. In the filtering algorithm, the event-triggered probabilistic information and the transmitted measurement are used to adaptively update the system states in the cases of non-transmission ($\gamma_k = 0$) and transmission ($\gamma_k = 1$), respectively.
3. The simulation results verify that the proposed filtering method achieves promising estimation performance in the presence of intermittent measurements and outliers. The comparison with several other variational filtering algorithms confirms that our filter greatly saves communication resources and achieves comparable estimation performance without knowing the accurate noise covariances.
The remainder of this paper is organized as follows. Section 2 reviews the literature on event-triggered robust state estimation. Section 3 formulates the stochastic event-triggered robust state estimation problem with the heavy-tailed process and measurement noise, and it presents the hierarchical Gaussian model based on Student’s t-distribution. Section 4 proposes the robust event-triggered variational Bayesian filtering method for cases of non-transmission and transmission, respectively. Section 5 presents the simulation experiments and results to show the effectiveness of the proposed filtering. A conclusion is given in Section 6.
Notations: $\mathbb{S}_{\geq 0}^{n}$ and $\mathbb{S}_{>0}^{n}$ represent the sets of $n \times n$ real symmetric positive semi-definite and positive definite matrices, respectively. $I_n$ is the $n \times n$ identity matrix. $\mathcal{N}(\mu, P)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $P$. $\mathcal{N}(x \mid \mu, P)$ stands for the Gaussian PDF of the random variable $x$ with mean $\mu$ and covariance matrix $P$. $\Pr(\cdot)$ is the probability of a random event.

2. Related Work

Robust state estimation and filtering for dynamic systems have been studied for decades, involving cases of model parameter uncertainty [12], non-Gaussian noise [13], and imperfect sensor information [14,15]. To overcome the failure of conventional Kalman filtering in such problems, various types of filtering methods have been proposed, such as extended Kalman filtering, unscented Kalman filtering, cubature Kalman filtering, and particle filtering.
To date, studies on event-triggered state estimation have produced many exciting results. For innovation-based deterministic event-triggered scheduling schemes, the authors in [8,16] proposed a Kalman-like filter and a distributed KF by using the Gaussian approximation method, respectively. To obtain the MMSE estimator, a stochastic event-triggered scheduling scheme was proposed in [9] with Gaussian-like triggering functions, and the optimal event-triggered KFs were then derived for both the open-loop and closed-loop schemes. Recently, robust event-triggered state estimators based on risk-sensitive functions have been proposed for hidden Markov models [17], linear Gaussian systems [18], and nonlinear systems [19]. In [20], an event-triggered model was designed in the presence of packet drops and Gaussian correlated noise. For nonlinear systems, cubature Kalman filtering and particle filtering methods were used in [21,22], respectively, to design event-triggered state estimators.
Most of the literature on event-triggered state estimation assumes that the system noise is Gaussian with known statistical information. However, in practical engineering applications, many hidden factors and anomalies produce outliers, such as unanticipated environmental disturbances and temporary sensor failures. When the noise is corrupted by outliers, Kalman-based filters usually have poor robustness because the Gaussian distribution cannot capture heavy-tailed features. In recent years, the filtering problem with non-Gaussian heavy-tailed noise has attracted much attention. Variational Bayesian inference (VBI) has been proposed as an effective approximation method to tackle the complicated PDFs and derive robust filters [23,24,25,26,27]. Based on Student's t-distribution, a novel variational filtering algorithm with heavy-tailed noise was developed in [23]. In [24], non-stationary heavy-tailed noise was modeled as Gaussian mixture distributions. Lately, the Gaussian-Gamma mixture filter [25], the Gaussian-Student's t mixture filter [26], and the Student's t mixture filter [27] have been proposed to handle outliers. In [28], a multi-sensor variational Bayesian Student's t-based cubature information fusion algorithm was developed to solve the state estimation of nonlinear systems with heavy-tailed measurement noise. The authors in [29] derived a robust cubature Kalman filter by using a partial variational Bayesian method to deal with non-stationary heavy-tailed noise. For systems with inaccurate Gaussian process noise covariance over binary sensor networks, the authors in [30] developed a distributed sequential estimator via inverse Wishart distributions and fixed-point iterations. In [31], an event-triggered variational Bayesian filter was proposed for Gaussian system noise with unknown and time-varying covariances.
In the case of heavy-tailed process noise and Gaussian measurement noise, an event-triggered robust unscented Kalman filter was proposed in [32] to jointly estimate the states and the unmanned surface vehicle parameters. The authors in [33] introduced a sampled memory event-triggered mechanism and discarded the measurement outliers based on an error upper bound.

3. Problem Formulation

3.1. System Model

Consider the following discrete-time linear system:

$$x_k = A_k x_{k-1} + w_{k-1},$$
$$y_k = C_k x_k + v_k,$$

where $x_k \in \mathbb{R}^n$ is the system state, $y_k \in \mathbb{R}^m$ is the measurement output, and the process noise $w_k$ and measurement noise $v_k$ are heavy-tailed noise with zero mean and nominal covariances $Q_k \in \mathbb{S}_{\geq 0}^{n}$ and $R_k \in \mathbb{S}_{>0}^{m}$, respectively. In practice, both $Q_k$ and $R_k$ are usually inaccurate. The initial state $x_0$ has a Gaussian distribution with mean $\hat{x}_{0|0}$ and covariance $P_{0|0} \in \mathbb{S}_{>0}^{n}$, i.e., $p(x_0) = \mathcal{N}(x_0 \mid \hat{x}_{0|0}, P_{0|0})$. Moreover, $x_0$, $w_k$, and $v_k$ are assumed to be mutually uncorrelated for all $k \in \mathbb{N}_0$.
To save communication resources for the network, the sensor decides whether to send the measurement to the remote estimator according to the following stochastic event-triggered scheme [9]:

$$\gamma_k = \begin{cases} 1, & \text{if } \varsigma_k > \exp\left(-\frac{1}{2}\,\tilde{y}_{k|k-1}^{\top} Y \tilde{y}_{k|k-1}\right), \\ 0, & \text{if } \varsigma_k \leq \exp\left(-\frac{1}{2}\,\tilde{y}_{k|k-1}^{\top} Y \tilde{y}_{k|k-1}\right), \end{cases}$$

where $\gamma_k = 1$ means transmission; otherwise, no transmission occurs. The innovation $\tilde{y}_{k|k-1}$ is defined as $\tilde{y}_{k|k-1} = y_k - \hat{y}_{k|k-1}$ with the feedback prediction measurement $\hat{y}_{k|k-1} = C_k \hat{x}_{k|k-1}$, $Y \in \mathbb{S}_{>0}^{m}$ is a weight matrix of the triggering function, and the random variable $\varsigma_k$ is independent and identically distributed (i.i.d.) following a uniform distribution over $[0, 1]$.
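As a concrete illustration, the stochastic trigger above can be simulated in a few lines. This is a minimal sketch with hypothetical names; `Y` is the triggering weight matrix, and the decision rule follows the scheme just described.

```python
import numpy as np

def stochastic_trigger(y, y_pred, Y, rng):
    """Stochastic event trigger: transmit (gamma_k = 1) when an i.i.d. uniform
    draw exceeds exp(-0.5 * e' Y e), where e is the innovation."""
    e = y - y_pred
    p_silent = np.exp(-0.5 * e @ Y @ e)   # Pr(gamma_k = 0 | y_k, I_{k-1})
    return int(rng.uniform() > p_silent)
```

A zero innovation keeps the sensor silent with probability one, while a large innovation is transmitted almost surely; the weight matrix `Y` thus trades communication rate against estimation accuracy.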
At each time $k$, the remote estimator produces a real-time estimate of the system state $x_k$ based on the information set

$$\mathcal{I}_k \triangleq \{\gamma_0, \gamma_0 y_0; \ldots; \gamma_k, \gamma_k y_k\}, \quad k \in \mathbb{N}_0,$$

with the initial information set $\mathcal{I}_{-1}$.
The goal of this paper is to design a robust event-triggered variational Bayesian filtering method based on the information set $\mathcal{I}_k$ for systems (1)–(2) with heavy-tailed process and measurement noise. To this end, Student's t-distribution, the inverse Wishart distribution, and the Gamma distribution are first introduced as follows.
Definition 1 
([34]). For a $D$-dimensional variable $x$, Student's t-distribution of the variable $x$ is given by

$$\mathrm{St}(x \mid \mu, \Sigma, \nu) = \frac{\Gamma\left(\frac{\nu + D}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)(\nu\pi)^{D/2}\,|\Sigma|^{1/2}}\left[1 + \frac{(x - \mu)^{\top}\Sigma^{-1}(x - \mu)}{\nu}\right]^{-\frac{\nu + D}{2}},$$

where $\mu$ is the mean vector, $\Sigma$ is the scale matrix, $\nu$ is the degree of freedom (dof), and $\Gamma(\cdot)$ denotes the Gamma function. The particular case of $\nu = 1$ is the Cauchy distribution. In the limit $\nu \to \infty$, Student's t-distribution reduces to a Gaussian distribution with mean $\mu$ and covariance $\Sigma$.
Definition 2 
([35]). For a $p \times p$-dimensional symmetric positive definite random matrix $\Lambda$, the inverse Wishart distribution is given by

$$\mathrm{IW}(\Lambda \mid d, \Psi) = B^{-1}(d, \Psi)\,|\Psi|^{d/2}\,|\Lambda|^{-(d + p + 1)/2}\exp\left\{-\mathrm{tr}\left(\Psi\Lambda^{-1}\right)/2\right\},$$

where $B(d, \Psi) = 2^{dp/2}\pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\{(d + 1 - i)/2\}$, $d$ is the dof parameter, and $\Psi$ is a $p \times p$-dimensional symmetric positive definite inverse scale matrix. If $\Lambda \sim \mathrm{IW}(\Lambda \mid d, \Psi)$, then $\mathrm{E}[\Lambda^{-1}] = d\,\Psi^{-1}$ and $\mathrm{E}[\Lambda] = (d - p - 1)^{-1}\Psi$, provided $d > p + 1$.
Definition 3 
([34]). For a positive random variable $\tau > 0$ governed by parameters $a > 0$ and $b > 0$, the Gamma distribution of the variable $\tau$ is given by

$$\mathrm{G}(\tau \mid a, b) = \frac{b^{a}}{\Gamma(a)}\,\tau^{a-1}e^{-b\tau}.$$

If $\tau \sim \mathrm{G}(\tau \mid a, b)$, then $\mathrm{E}[\tau] = a/b$.
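The moment identities in Definition 2 can be checked numerically. The sketch below (illustrative values, not from the paper) samples $\Lambda \sim \mathrm{IW}(d, \Psi)$ through its Wishart dual $\Lambda^{-1} \sim \mathrm{W}(d, \Psi^{-1})$, generated with the Bartlett decomposition so that only NumPy is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 2, 10                              # dimension and dof, with d > p + 1
Psi = np.array([[3.0, 1.0],
                [1.0, 2.0]])              # inverse scale matrix

# Draw Lambda ~ IW(d, Psi) via Lambda^{-1} ~ Wishart(d, Psi^{-1}), sampled
# with the Bartlett decomposition: W = (L A)(L A)^T, where Psi^{-1} = L L^T.
N = 100_000
L = np.linalg.cholesky(np.linalg.inv(Psi))
A = np.zeros((N, p, p))
for i in range(p):
    A[:, i, i] = np.sqrt(rng.chisquare(d - i, size=N))   # diagonal: sqrt(chi^2)
    for j in range(i):
        A[:, i, j] = rng.standard_normal(N)              # below-diagonal: N(0,1)
LA = L @ A
W = LA @ LA.transpose(0, 2, 1)            # W ~ Wishart(d, Psi^{-1})
Lam = np.linalg.inv(W)                    # Lam ~ IW(d, Psi)
```

The sample means `Lam.mean(axis=0)` and `np.linalg.inv(Lam).mean(axis=0)` approach $(d - p - 1)^{-1}\Psi$ and $d\,\Psi^{-1}$, respectively, matching Definition 2.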
Remark 1. 
Compared with the works on event-triggered state estimation [9,36], heavy-tailed noise with inaccurate nominal covariances is considered in this paper. In this case, Kalman-based filters will produce substantial estimation errors because the required Gaussian assumption no longer holds. Hence, the VBI approach is adopted to deal with the non-Gaussianity by choosing appropriate PDFs. Furthermore, the event-triggered scheme leads to incomplete measurements; thus, how to achieve robust estimation in the case of non-transmission is a critical problem.

3.2. Student’s t-Based Hierarchical Gaussian State-Space Model

In this subsection, Student's t-based hierarchical Gaussian state-space model is introduced to obtain approximate analytical estimates.
First, due to the heavy-tailed noise, the measurement noise $v_k$, the likelihood PDF $p(y_k \mid x_k)$, and the one-step prediction PDF $p(x_k \mid \mathcal{I}_{k-1})$ are formulated as Student's t-distributions as follows [23]:

$$p(v_k) = \mathrm{St}(v_k \mid 0, R_k, \nu),$$
$$p(y_k \mid x_k, R_k) = \mathrm{St}(y_k \mid C_k x_k, R_k, \nu),$$
$$p(x_k \mid \mathcal{I}_{k-1}, \Sigma_k) = \mathrm{St}(x_k \mid \hat{x}_{k|k-1}, \Sigma_k, \omega).$$

The prior state estimate $\hat{x}_{k|k-1}$ and the nominal prediction error covariance $P_{k|k-1}$ are recursively given by

$$\hat{x}_{k|k-1} = A_k \hat{x}_{k-1|k-1},$$
$$P_{k|k-1} = A_k P_{k-1|k-1} A_k^{\top} + Q_{k-1}.$$

The covariance $Q_{k-1}$ is not accurate, and the heavy-tailed noise $w_k$ will result in significant estimation errors; hence, variational inference is adopted to estimate the scale matrix $\Sigma_k$ instead of $P_{k|k-1}$.
The unknown covariance matrices are usually modeled by inverse Wishart distributions [35]; thus, the conjugate prior distributions of $\Sigma_k$ and $R_k$ are given as

$$p(\Sigma_k \mid \mathcal{I}_{k-1}) = \mathrm{IW}(\Sigma_k \mid \hat{o}_{k|k-1}, \hat{O}_{k|k-1}),$$
$$p(R_k \mid \mathcal{I}_{k-1}) = \mathrm{IW}(R_k \mid \hat{u}_{k|k-1}, \hat{U}_{k|k-1}).$$

To capture the prior information of $\Sigma_k$, the prior parameters $\hat{o}_{k|k-1}$ and $\hat{O}_{k|k-1}$ satisfy

$$\frac{\hat{O}_{k|k-1}}{\hat{o}_{k|k-1}} = \tilde{P}_{k|k-1} = A_k P_{k-1|k-1} A_k^{\top} + \tilde{Q}_{k-1},$$

where $\tilde{Q}_{k-1}$ is the inaccurate nominal process noise covariance. Let $\hat{o}_{k|k-1} = \tau$, where $\tau > 0$ is a tuning parameter used to reflect the influence of the prior information. It follows that $\hat{O}_{k|k-1} = \tau \tilde{P}_{k|k-1}$. The prior parameters $\hat{u}_{k|k-1}$ and $\hat{U}_{k|k-1}$ are given by [37]

$$\hat{u}_{k|k-1} = \rho\,\hat{u}_{k-1|k-1}, \qquad \hat{U}_{k|k-1} = \rho\,\hat{U}_{k-1|k-1},$$

where $\rho \in (0, 1]$ is a forgetting factor which indicates the extent of the time fluctuations. The initial $R_0$ is also assumed to be inverse Wishart distributed with the dof parameter $u_{0|0}$ and the inverse scale matrix $U_{0|0}$. Then, the initial nominal $\tilde{R}_0$ is set via $\hat{U}_{0|0}/\hat{u}_{0|0} = \tilde{R}_0$.
Student's t-distribution can be viewed as an infinite mixture of Gaussians having the same mean but different covariances. Then, $p(x_k \mid \mathcal{I}_{k-1})$ and $p(y_k \mid x_k)$ can be rewritten as [23]

$$p(x_k \mid \mathcal{I}_{k-1}, \Sigma_k) = \int \mathcal{N}(x_k \mid \hat{x}_{k|k-1}, \Sigma_k/\xi_k)\,\mathrm{G}(\xi_k \mid \omega/2, \omega/2)\,d\xi_k,$$
$$p(y_k \mid x_k, R_k) = \int \mathcal{N}(y_k \mid C_k x_k, R_k/\lambda_k)\,\mathrm{G}(\lambda_k \mid \nu/2, \nu/2)\,d\lambda_k.$$

Therefore, the one-step prediction PDF $p(x_k \mid \mathcal{I}_{k-1})$ and the likelihood PDF $p(y_k \mid x_k)$ are given by the following hierarchical Gaussian forms:

$$p(x_k \mid \mathcal{I}_{k-1}, \Sigma_k, \xi_k) = \mathcal{N}(x_k \mid \hat{x}_{k|k-1}, \Sigma_k/\xi_k),$$
$$p(y_k \mid x_k, R_k, \lambda_k) = \mathcal{N}(y_k \mid C_k x_k, R_k/\lambda_k),$$

where the variables $\xi_k$ and $\lambda_k$ obey the following Gamma distributions:

$$p(\xi_k) = \mathrm{G}(\xi_k \mid \omega/2, \omega/2),$$
$$p(\lambda_k) = \mathrm{G}(\lambda_k \mid \nu/2, \nu/2).$$
As a result, Student’s t-based hierarchical Gaussian state-space model (14)–(17) has been constructed.
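The infinite-mixture representation underlying this hierarchy can be verified by simulation: drawing the mixing weight from its Gamma distribution and then sampling the conditional Gaussian reproduces the moments of Student's t-distribution. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, omega = 0.0, 2.0, 5.0      # illustrative mean, scale, and dof
N = 400_000

# Hierarchical sampling: xi ~ G(omega/2, omega/2) (rate omega/2, i.e. numpy
# scale parameter 2/omega), then x | xi ~ N(mu, sigma2 / xi).
xi = rng.gamma(shape=omega / 2, scale=2.0 / omega, size=N)
x = rng.normal(mu, np.sqrt(sigma2 / xi))

# The marginal of x is St(mu, sigma2, omega); for omega > 2 its variance is
# sigma2 * omega / (omega - 2), noticeably larger than the Gaussian sigma2.
var_true = sigma2 * omega / (omega - 2)
```

Small draws of $\xi_k$ inflate the conditional covariance $\Sigma_k/\xi_k$, which is exactly the mechanism that lets the model absorb outliers.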

4. Robust Event-Triggered Variational Bayesian Filtering

In this section, a robust filtering algorithm is proposed based on the VBI approach under the event-triggered scheduling scheme (3).
VBI is usually applied to solve the optimization problem for systems with both unknown parameters and latent variables. Denote the set of all latent variables and parameters by $\Omega = \{\theta_1, \ldots, \theta_N\}$ and the set of all observed variables by $Y = \{y_1, \ldots, y_M\}$. The joint posterior PDF $p(\Omega \mid Y)$ is not analytically tractable, which hinders the estimation of the unknown parameters and latent variables. The VBI approach is used to search for an approximate PDF in a factorized form [34,38]:

$$p(\Omega \mid Y) \approx \prod_{i=1}^{N} q(\theta_i),$$

where $q(\cdot)$ is the approximate posterior PDF of $p(\cdot)$. The approximation based on VBI can be formed by minimizing the Kullback–Leibler (KL) divergence between the separable approximation and the true posterior distribution. Thus, a general expression for the optimal solution is given by [34,38]

$$\log q^{*}(\theta_i) = \mathrm{E}^{*}_{\Omega \setminus \theta_i}\{\log p(\Omega, Y)\} + c_{\theta_i},$$

where $\Omega \setminus \theta_i$ is the set of all elements in $\Omega$ except $\theta_i$, and $c_{\theta_i}$ is a constant independent of $\theta_i$. When the optimal variational parameters are coupled with each other, the fixed-point iteration method is adopted to solve (19), i.e.,

$$\log q^{i+1}(\theta) = \mathrm{E}^{i}_{\Omega \setminus \theta}\{\log p(\Omega, Y)\} + c_{\theta},$$

where $i$ denotes the $i$-th iteration, and the iterations converge to a local optimum of (19).
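To make the fixed-point recursion (20) concrete, the sketch below runs it on a toy problem that is deliberately simpler than the filtering model: mean-field VB for the mean and precision of i.i.d. Gaussian data under a Normal-Gamma prior. All names and values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(2.0, 1.0, size=2000)        # data from N(mean 2, precision 1)
N, ybar = y.size, y.mean()

# Priors: mu ~ N(mu0, (lam0 * tau)^-1), tau ~ Gamma(a0, b0) (broad values)
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

E_tau = 1.0                                 # initial guess for E[tau]
for _ in range(50):                         # fixed-point sweeps over q(mu), q(tau)
    # q(mu) = N(mu_N, 1/lam_N), holding q(tau) fixed at its current mean
    mu_N = (lam0 * mu0 + N * ybar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N), taking expectations under the current q(mu)
    a_N = a0 + (N + 1) / 2
    E_sq = np.sum((y - mu_N) ** 2) + N / lam_N   # E_q(mu)[sum_i (y_i - mu)^2]
    b_N = b0 + 0.5 * (E_sq + lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N))
    E_tau = a_N / b_N
```

Each sweep re-solves one factor with the other held at its current moments; this is the same coupled structure that the filter below resolves for $\xi_k$, $\lambda_k$, $\Sigma_k$, $R_k$, and the state. Here the iterates settle near `mu_N` ≈ 2 and `E_tau` ≈ 1.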
For Student's t-based hierarchical Gaussian state-space model in Section 3.2, the posterior estimates will be given based on the VBI method, which is illustrated by the graphical model in Figure 1. Depending on whether the measurement data have been received by the remote estimator, the discussion is divided into the non-transmission case, i.e., $\gamma_k = 0$, and the transmission case, i.e., $\gamma_k = 1$. As shown in Figure 1, the measurement $y_k$ can be directly used to infer $x_k$ when $\gamma_k = 1$. When $\gamma_k = 0$, the probabilistic information of the event-triggered scheme (3) is used to infer $x_k$ and $y_k$ jointly.
Remark 2. 
Compared with most of the previous studies [23,31], a more practical robust state estimation problem is considered here, involving both the event-triggered transmission scheme and heavy-tailed noise. How to estimate the system states in the presence of non-transmissions and outliers is a critical problem. As a result, the proposed filter achieves superior robust estimation performance while greatly saving communication resources.

4.1. Variational Filtering in the Case of Non-Transmission

When the measurement $y_k$ is not transmitted by the sensor at time $k$, i.e., $\gamma_k = 0$, $y_k$ is unavailable to the remote estimator. In this case, the set of unknown variables is denoted as

$$\Omega_0 = \{(x_k, y_k), \xi_k, \lambda_k, \Sigma_k, R_k\}.$$

In the VBI framework, the joint posterior PDF $p((x_k, y_k), \xi_k, \lambda_k, \Sigma_k, R_k \mid \mathcal{I}_k)$ is approximated as

$$p((x_k, y_k), \xi_k, \lambda_k, \Sigma_k, R_k \mid \mathcal{I}_k) \approx q(x_k, y_k)\,q(\xi_k)\,q(\lambda_k)\,q(\Sigma_k)\,q(R_k).$$

The approximate posterior PDF for every element in $\Omega_0$ will be calculated in the following.
(1) The logarithm of the joint PDF $p(\Omega_0, \mathcal{I}_k)$: The joint PDF $p(\Omega_0, \mathcal{I}_k)$ is factorized as

$$p(\Omega_0, \mathcal{I}_k) = p(\Omega_0, \mathcal{I}_{k-1}, \gamma_k = 0) = p(\gamma_k = 0 \mid y_k, \mathcal{I}_{k-1})\,p(x_k, y_k \mid \xi_k, \lambda_k, \Sigma_k, R_k, \mathcal{I}_{k-1}) \times p(R_k \mid \mathcal{I}_{k-1})\,p(\Sigma_k \mid \mathcal{I}_{k-1})\,p(\lambda_k)\,p(\xi_k)\,p(\mathcal{I}_{k-1}),$$

where $p(\xi_k)$, $p(\lambda_k)$, $p(\Sigma_k \mid \mathcal{I}_{k-1})$, and $p(R_k \mid \mathcal{I}_{k-1})$ are given by (16), (17), (10), and (11), respectively, and the event-triggered scheme (3) yields

$$p(\gamma_k = 0 \mid y_k, \mathcal{I}_{k-1}) = \exp\left(-\frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top} Y (y_k - \hat{y}_{k|k-1})\right).$$
It has been shown in [9] that $x_k$ and $y_k$ are jointly Gaussian given $\mathcal{I}_k$. We define $\phi_k \triangleq [x_k^{\top}, y_k^{\top}]^{\top}$, $\hat{\phi}_{k|k-1} \triangleq \mathrm{E}[\phi_k \mid \mathcal{I}_{k-1}] = [\hat{x}_{k|k-1}^{\top}, \hat{y}_{k|k-1}^{\top}]^{\top}$, and $\Phi_{k|k-1} \triangleq \mathrm{E}[(\phi_k - \hat{\phi}_{k|k-1})(\phi_k - \hat{\phi}_{k|k-1})^{\top}]$. In view of (12) and (13), $\Phi_{k|k-1}$ is given as

$$\Phi_{k|k-1} = \begin{bmatrix} \Sigma_k/\xi_k & \Sigma_k C_k^{\top}/\xi_k \\ C_k\Sigma_k/\xi_k & C_k\Sigma_k C_k^{\top}/\xi_k + R_k/\lambda_k \end{bmatrix}.$$

It follows that

$$p(x_k, y_k \mid \xi_k, \lambda_k, \Sigma_k, R_k, \mathcal{I}_{k-1}) = \mathcal{N}(\phi_k \mid \hat{\phi}_{k|k-1}, \Phi_{k|k-1}).$$
Substituting the above distributions into (23), we have

$$p(\Omega_0, \mathcal{I}_k) = \exp\left(-\frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top} Y (y_k - \hat{y}_{k|k-1})\right)\mathcal{N}(\phi_k \mid \hat{\phi}_{k|k-1}, \Phi_{k|k-1}) \times \mathrm{IW}(R_k \mid \hat{u}_{k|k-1}, \hat{U}_{k|k-1})\,\mathrm{IW}(\Sigma_k \mid \hat{o}_{k|k-1}, \hat{O}_{k|k-1}) \times \mathrm{G}(\lambda_k \mid \nu/2, \nu/2)\,\mathrm{G}(\xi_k \mid \omega/2, \omega/2)\,p(\mathcal{I}_{k-1}).$$

Then, $\log p(\Omega_0, \mathcal{I}_k)$ is computed as

$$\log p(\Omega_0, \mathcal{I}_k) = -\frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top} Y (y_k - \hat{y}_{k|k-1}) - \frac{1}{2}\log|\Phi_{k|k-1}| - \frac{1}{2}(\phi_k - \hat{\phi}_{k|k-1})^{\top}\Phi_{k|k-1}^{-1}(\phi_k - \hat{\phi}_{k|k-1}) - \frac{\hat{u}_{k|k-1} + m + 1}{2}\log|R_k| - \frac{1}{2}\mathrm{tr}(\hat{U}_{k|k-1}R_k^{-1}) - \frac{\hat{o}_{k|k-1} + n + 1}{2}\log|\Sigma_k| - \frac{1}{2}\mathrm{tr}(\hat{O}_{k|k-1}\Sigma_k^{-1}) + \left(\frac{\omega}{2} - 1\right)\log\xi_k - \frac{\omega}{2}\xi_k + \left(\frac{\nu}{2} - 1\right)\log\lambda_k - \frac{\nu}{2}\lambda_k + C_{\Omega_0},$$

where $C_{\Omega_0}$ is a constant independent of $\Omega_0$.
(2) Decoupling of $\Phi_{k|k-1}$: We use the method proposed in [31] to decouple $\Sigma_k$ and $R_k$ from $\log|\Phi_{k|k-1}|$ and $(\phi_k - \hat{\phi}_{k|k-1})^{\top}\Phi_{k|k-1}^{-1}(\phi_k - \hat{\phi}_{k|k-1})$. First, $\Phi_{k|k-1}$ is factorized as

$$\Phi_{k|k-1} = \begin{bmatrix} I_n & 0 \\ C_k & I_m \end{bmatrix}\begin{bmatrix} \Sigma_k/\xi_k & 0 \\ 0 & R_k/\lambda_k \end{bmatrix}\begin{bmatrix} I_n & C_k^{\top} \\ 0 & I_m \end{bmatrix}.$$

Thus, the determinant and the inverse of $\Phi_{k|k-1}$ are, respectively, computed as

$$|\Phi_{k|k-1}| = |\Sigma_k/\xi_k|\,|R_k/\lambda_k| = \xi_k^{-n}\lambda_k^{-m}\,|\Sigma_k|\,|R_k|,$$
$$\Phi_{k|k-1}^{-1} = \begin{bmatrix} I_n & -C_k^{\top} \\ 0 & I_m \end{bmatrix}\begin{bmatrix} (\Sigma_k/\xi_k)^{-1} & 0 \\ 0 & (R_k/\lambda_k)^{-1} \end{bmatrix}\begin{bmatrix} I_n & 0 \\ -C_k & I_m \end{bmatrix}.$$
Define $\rho_{k|k-1} = \begin{bmatrix} I_n & 0 \\ -C_k & I_m \end{bmatrix}\begin{bmatrix} x_k - \hat{x}_{k|k-1} \\ y_k - \hat{y}_{k|k-1} \end{bmatrix}$; then

$$-\frac{1}{2}(\phi_k - \hat{\phi}_{k|k-1})^{\top}\Phi_{k|k-1}^{-1}(\phi_k - \hat{\phi}_{k|k-1}) = -\frac{1}{2}\rho_{k|k-1}^{\top}\begin{bmatrix} (\Sigma_k/\xi_k)^{-1} & 0 \\ 0 & (R_k/\lambda_k)^{-1} \end{bmatrix}\rho_{k|k-1} = -\frac{1}{2}\mathrm{tr}\left(\rho_{k|k-1}\rho_{k|k-1}^{\top}\begin{bmatrix} (\Sigma_k/\xi_k)^{-1} & 0 \\ 0 & (R_k/\lambda_k)^{-1} \end{bmatrix}\right) = -\frac{\xi_k}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}\Sigma_k^{-1}\right) - \frac{\lambda_k}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}R_k^{-1}\right),$$

where $\rho_{k|k-1}\rho_{k|k-1}^{\top}$ is computed by

$$\rho_{k|k-1}\rho_{k|k-1}^{\top} = \begin{bmatrix} I_n & 0 \\ -C_k & I_m \end{bmatrix}\begin{bmatrix} x_k - \hat{x}_{k|k-1} \\ y_k - \hat{y}_{k|k-1} \end{bmatrix}\begin{bmatrix} x_k - \hat{x}_{k|k-1} \\ y_k - \hat{y}_{k|k-1} \end{bmatrix}^{\top}\begin{bmatrix} I_n & -C_k^{\top} \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} (\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx} & (\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xy} \\ (\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xy}^{\top} & (\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy} \end{bmatrix},$$

and $(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}$ and $(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}$ are given as

$$(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx} = (x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^{\top},$$
$$(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy} = C_k(x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^{\top}C_k^{\top} - (y_k - \hat{y}_{k|k-1})(x_k - \hat{x}_{k|k-1})^{\top}C_k^{\top} - C_k(x_k - \hat{x}_{k|k-1})(y_k - \hat{y}_{k|k-1})^{\top} + (y_k - \hat{y}_{k|k-1})(y_k - \hat{y}_{k|k-1})^{\top}.$$
Substituting (26) and (28) into (25) yields

$$\log p(\Omega_0, \mathcal{I}_k) = \left(\frac{n + \omega}{2} - 1\right)\log\xi_k - \frac{\omega}{2}\xi_k - \frac{\hat{o}_{k|k-1} + n + 2}{2}\log|\Sigma_k| - \frac{1}{2}\mathrm{tr}(\hat{O}_{k|k-1}\Sigma_k^{-1}) - \frac{\xi_k}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}\Sigma_k^{-1}\right) + \left(\frac{m + \nu}{2} - 1\right)\log\lambda_k - \frac{\nu}{2}\lambda_k - \frac{\hat{u}_{k|k-1} + m + 2}{2}\log|R_k| - \frac{1}{2}\mathrm{tr}(\hat{U}_{k|k-1}R_k^{-1}) - \frac{\lambda_k}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}R_k^{-1}\right) - \frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top}Y(y_k - \hat{y}_{k|k-1}) + C_{\Omega_0}.$$
(3) The update of $\xi_k$: Let $\theta = \xi_k$; using (31) in (20), we have

$$\log q^{i+1}(\xi_k) = \left(\frac{n + \omega}{2} - 1\right)\log\xi_k - \frac{1}{2}\xi_k\left(\omega + \mathrm{tr}\left(S_k^{i}\,\mathrm{E}^{i}[\Sigma_k^{-1}]\right)\right) + C_{\xi_k},$$

where $S_k^{i}$ is given by

$$S_k^{i} = \mathrm{E}^{i}\left[(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}\right].$$

According to (32) and Definition 3, $q^{i+1}(\xi_k)$ can be updated as the following Gamma PDF:

$$q^{i+1}(\xi_k) = \mathrm{G}(\xi_k \mid \alpha_k^{i+1}, \beta_k^{i+1}),$$

where the shape parameter $\alpha_k^{i+1}$ and rate parameter $\beta_k^{i+1}$ are given by

$$\alpha_k^{i+1} = \frac{n + \omega}{2}, \qquad \beta_k^{i+1} = \frac{\omega + \mathrm{tr}\left(S_k^{i}\,\mathrm{E}^{i}[\Sigma_k^{-1}]\right)}{2}.$$
(4) The update of $\lambda_k$: Let $\theta = \lambda_k$; using (31) in (20), we have

$$\log q^{i+1}(\lambda_k) = \left(\frac{m + \nu}{2} - 1\right)\log\lambda_k - \frac{1}{2}\lambda_k\left(\nu + \mathrm{tr}\left(D_k^{i}\,\mathrm{E}^{i}[R_k^{-1}]\right)\right) + C_{\lambda_k},$$

where $D_k^{i}$ is given by

$$D_k^{i} = \mathrm{E}^{i}\left[(\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}\right].$$

According to (36) and Definition 3, $q^{i+1}(\lambda_k)$ can be updated as the following Gamma PDF:

$$q^{i+1}(\lambda_k) = \mathrm{G}(\lambda_k \mid \eta_k^{i+1}, \delta_k^{i+1}),$$

where the shape parameter $\eta_k^{i+1}$ and rate parameter $\delta_k^{i+1}$ are given by

$$\eta_k^{i+1} = \frac{m + \nu}{2}, \qquad \delta_k^{i+1} = \frac{\nu + \mathrm{tr}\left(D_k^{i}\,\mathrm{E}^{i}[R_k^{-1}]\right)}{2}.$$
(5) The update of $\Sigma_k$: Let $\theta = \Sigma_k$; using (31) in (20), we have

$$\log q^{i+1}(\Sigma_k) = -\frac{\hat{o}_{k|k-1} + n + 2}{2}\log|\Sigma_k| - \frac{1}{2}\mathrm{tr}\left(\left(\hat{O}_{k|k-1} + \mathrm{E}^{i+1}[\xi_k]S_k^{i}\right)\Sigma_k^{-1}\right) + C_{\Sigma_k}.$$

Based on (40) and Definition 2, $q^{i+1}(\Sigma_k)$ is updated as the following inverse Wishart PDF:

$$q^{i+1}(\Sigma_k) = \mathrm{IW}(\Sigma_k \mid \hat{o}_k^{i+1}, \hat{O}_k^{i+1}),$$

where the dof parameter $\hat{o}_k^{i+1}$ and inverse scale matrix $\hat{O}_k^{i+1}$ are given by

$$\hat{o}_k^{i+1} = \hat{o}_{k|k-1} + 1, \qquad \hat{O}_k^{i+1} = \hat{O}_{k|k-1} + \mathrm{E}^{i+1}[\xi_k]S_k^{i}.$$
(6) The update of $R_k$: Let $\theta = R_k$; using (31) in (20), we have

$$\log q^{i+1}(R_k) = -\frac{\hat{u}_{k|k-1} + m + 2}{2}\log|R_k| - \frac{1}{2}\mathrm{tr}\left(\left(\hat{U}_{k|k-1} + \mathrm{E}^{i+1}[\lambda_k]D_k^{i}\right)R_k^{-1}\right) + C_{R_k}.$$

Based on (43) and Definition 2, $q^{i+1}(R_k)$ can be updated as the following inverse Wishart PDF:

$$q^{i+1}(R_k) = \mathrm{IW}(R_k \mid \hat{u}_k^{i+1}, \hat{U}_k^{i+1}),$$

where the dof parameter $\hat{u}_k^{i+1}$ and inverse scale matrix $\hat{U}_k^{i+1}$ are given by

$$\hat{u}_k^{i+1} = \hat{u}_{k|k-1} + 1, \qquad \hat{U}_k^{i+1} = \hat{U}_{k|k-1} + \mathrm{E}^{i+1}[\lambda_k]D_k^{i}.$$
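Collecting steps (3)–(6), one fixed-point sweep over the four noise-related factors can be sketched as a single helper. The function and argument names are hypothetical; `S` and `D` stand for the statistics $S_k^i$ and $D_k^i$ supplied by the state update.

```python
import numpy as np

def parameter_sweep(S, D, E_Sig_inv, E_R_inv,
                    o_pred, O_pred, u_pred, U_pred, omega, nu):
    """One fixed-point sweep of the q(xi), q(lambda), q(Sigma), q(R) updates.
    Returns the new moments E[xi], E[lambda], E[Sigma^-1], E[R^-1]."""
    n, m = S.shape[0], D.shape[0]
    # Gamma factors: E[xi] = alpha/beta and E[lambda] = eta/delta
    E_xi = (n + omega) / (omega + np.trace(S @ E_Sig_inv))
    E_lam = (m + nu) / (nu + np.trace(D @ E_R_inv))
    # Inverse Wishart factors: dof += 1, inverse scale += E[.] * statistic;
    # per Definition 2, E[Sigma^-1] = o * O^-1 and E[R^-1] = u * U^-1
    O_new = O_pred + E_xi * S
    U_new = U_pred + E_lam * D
    return (E_xi, E_lam,
            (o_pred + 1) * np.linalg.inv(O_new),
            (u_pred + 1) * np.linalg.inv(U_new))
```

A large innovation statistic (an outlier) drives $\mathrm{E}[\lambda_k]$ down, which inflates the effective measurement noise covariance and down-weights the corrupted measurement.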
(7) The update of $\phi_k$: To update $\phi_k$, i.e., $x_k$ and $y_k$, the following two lemmas will be used.
Lemma 1 
([39]). For matrices A, B, C, and D with appropriate dimensions, if A and $E = D - CA^{-1}B$ are nonsingular, then

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}BE^{-1}CA^{-1} & -A^{-1}BE^{-1} \\ -E^{-1}CA^{-1} & E^{-1} \end{bmatrix}.$$

If D is also nonsingular, then $A^{-1} + A^{-1}BE^{-1}CA^{-1} = (A - BD^{-1}C)^{-1}$.
Lemma 2 
([40]). For matrices A, B, U, and V with appropriate dimensions, if A and B are nonsingular, then $(A + UBV)^{-1} = A^{-1} - A^{-1}U(B^{-1} + VA^{-1}U)^{-1}VA^{-1}$.
Theorem 1. 
Considering the event-triggered state estimation system with heavy-tailed process and measurement noise (1)–(3), based on Student's t-based hierarchical Gaussian model (14)–(17) and the variational Bayesian approximation (22), the variational updates $\hat{x}_{k|k}^{i+1}$ and $\hat{y}_{k|k}^{i+1}$ in the case of $\gamma_k = 0$ are given as follows:

$$\hat{x}_{k|k}^{i+1} = \hat{x}_{k|k-1}, \qquad \hat{y}_{k|k}^{i+1} = C_k\hat{x}_{k|k-1},$$
$$P_{k|k}^{i+1} = \tilde{P}_{k|k-1}^{i+1} - K_k^{i+1}C_k\tilde{P}_{k|k-1}^{i+1},$$
$$K_k^{i+1} = \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left(C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top} + \tilde{R}_k^{i+1} + Y^{-1}\right)^{-1},$$
$$P_{yy,k|k}^{i+1} = \left[\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1} + Y\right]^{-1},$$
$$P_{xy,k|k}^{i+1} = \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left[I_m + Y\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)\right]^{-1},$$

where

$$\tilde{R}_k^{i+1} = \frac{\left(\mathrm{E}^{i+1}[R_k^{-1}]\right)^{-1}}{\mathrm{E}^{i+1}[\lambda_k]}, \qquad \tilde{P}_{k|k-1}^{i+1} = \frac{\left(\mathrm{E}^{i+1}[\Sigma_k^{-1}]\right)^{-1}}{\mathrm{E}^{i+1}[\xi_k]}.$$
Proof. 
Let $\theta = \phi_k$; using (31) in (20), we have

$$\log q^{i+1}(\phi_k) = -\frac{1}{2}\mathrm{tr}\left(\mathrm{E}^{i+1}[\xi_k](\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}\,\mathrm{E}^{i+1}[\Sigma_k^{-1}]\right) - \frac{1}{2}\mathrm{tr}\left(\mathrm{E}^{i+1}[\lambda_k](\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}\,\mathrm{E}^{i+1}[R_k^{-1}]\right) - \frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top}Y(y_k - \hat{y}_{k|k-1}) + C_{\phi_k}.$$

Defining the modified measurement noise and prediction error covariance matrices $\tilde{R}_k^{i+1} = \left(\mathrm{E}^{i+1}[R_k^{-1}]\right)^{-1}/\mathrm{E}^{i+1}[\lambda_k]$ and $\tilde{P}_{k|k-1}^{i+1} = \left(\mathrm{E}^{i+1}[\Sigma_k^{-1}]\right)^{-1}/\mathrm{E}^{i+1}[\xi_k]$, respectively, then

$$\log q^{i+1}(\phi_k) = -\frac{1}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{xx}(\tilde{P}_{k|k-1}^{i+1})^{-1}\right) - \frac{1}{2}\mathrm{tr}\left((\rho_{k|k-1}\rho_{k|k-1}^{\top})_{yy}(\tilde{R}_k^{i+1})^{-1}\right) - \frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top}Y(y_k - \hat{y}_{k|k-1}) + C_{\phi_k} = -\frac{1}{2}\rho_{k|k-1}^{\top}\begin{bmatrix} (\tilde{P}_{k|k-1}^{i+1})^{-1} & 0 \\ 0 & (\tilde{R}_k^{i+1})^{-1} \end{bmatrix}\rho_{k|k-1} - \frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top}Y(y_k - \hat{y}_{k|k-1}) + C_{\phi_k} = -\frac{1}{2}(\phi_k - \hat{\phi}_{k|k-1})^{\top}\mathrm{E}^{i+1}[\tilde{\Phi}_{k|k-1}](\phi_k - \hat{\phi}_{k|k-1}) - \frac{1}{2}(y_k - \hat{y}_{k|k-1})^{\top}Y(y_k - \hat{y}_{k|k-1}) + C_{\phi_k} = -\frac{1}{2}(\phi_k - \hat{\phi}_{k|k-1})^{\top}(\Theta_k^{i+1})^{-1}(\phi_k - \hat{\phi}_{k|k-1}) + C_{\phi_k},$$
where $\mathrm{E}^{i+1}[\tilde{\Phi}_{k|k-1}]$ is given as

$$\mathrm{E}^{i+1}[\tilde{\Phi}_{k|k-1}] = \begin{bmatrix} I_n & -C_k^{\top} \\ 0 & I_m \end{bmatrix}\begin{bmatrix} (\tilde{P}_{k|k-1}^{i+1})^{-1} & 0 \\ 0 & (\tilde{R}_k^{i+1})^{-1} \end{bmatrix}\begin{bmatrix} I_n & 0 \\ -C_k & I_m \end{bmatrix} = \begin{bmatrix} (\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}(\tilde{R}_k^{i+1})^{-1}C_k & -C_k^{\top}(\tilde{R}_k^{i+1})^{-1} \\ -(\tilde{R}_k^{i+1})^{-1}C_k & (\tilde{R}_k^{i+1})^{-1} \end{bmatrix},$$

and $\Theta_k^{i+1}$ is given as

$$\Theta_k^{i+1} = \left(\mathrm{E}^{i+1}[\tilde{\Phi}_{k|k-1}] + \begin{bmatrix} 0 & 0 \\ 0 & Y \end{bmatrix}\right)^{-1} = \begin{bmatrix} (\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}(\tilde{R}_k^{i+1})^{-1}C_k & -C_k^{\top}(\tilde{R}_k^{i+1})^{-1} \\ -(\tilde{R}_k^{i+1})^{-1}C_k & (\tilde{R}_k^{i+1})^{-1} + Y \end{bmatrix}^{-1} = \begin{bmatrix} P_{k|k}^{i+1} & P_{xy,k|k}^{i+1} \\ (P_{xy,k|k}^{i+1})^{\top} & P_{yy,k|k}^{i+1} \end{bmatrix} \triangleq \begin{bmatrix} A_\Theta & B_\Theta \\ C_\Theta & D_\Theta \end{bmatrix}^{-1}.$$
Based on Lemma 1, the estimation error covariance of $x_k$ is computed as

$$P_{k|k}^{i+1} = (A_\Theta - B_\Theta D_\Theta^{-1}C_\Theta)^{-1} = \left[(\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}(\tilde{R}_k^{i+1})^{-1}C_k - C_k^{\top}(\tilde{R}_k^{i+1})^{-1}\left((\tilde{R}_k^{i+1})^{-1} + Y\right)^{-1}(\tilde{R}_k^{i+1})^{-1}C_k\right]^{-1} = \left[(\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}\left(\tilde{R}_k^{i+1} + Y^{-1}\right)^{-1}C_k\right]^{-1} = \tilde{P}_{k|k-1}^{i+1} - \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left(C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top} + \tilde{R}_k^{i+1} + Y^{-1}\right)^{-1}C_k\tilde{P}_{k|k-1}^{i+1} \triangleq \tilde{P}_{k|k-1}^{i+1} - K_k^{i+1}C_k\tilde{P}_{k|k-1}^{i+1},$$

where the last two equalities hold because of Lemma 2.
The estimation error covariance of $y_k$ is computed as

$$P_{yy,k|k}^{i+1} = (D_\Theta - C_\Theta A_\Theta^{-1}B_\Theta)^{-1} = \left[(\tilde{R}_k^{i+1})^{-1} - (\tilde{R}_k^{i+1})^{-1}C_k\left((\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}(\tilde{R}_k^{i+1})^{-1}C_k\right)^{-1}C_k^{\top}(\tilde{R}_k^{i+1})^{-1} + Y\right]^{-1} = \left[\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1} + Y\right]^{-1},$$

where the last equality holds due to Lemma 2.
In addition, the cross-covariance between $x_k$ and $y_k$ is derived as follows:

$$P_{xy,k|k}^{i+1} = -A_\Theta^{-1}B_\Theta P_{yy,k|k}^{i+1} = \left((\tilde{P}_{k|k-1}^{i+1})^{-1} + C_k^{\top}(\tilde{R}_k^{i+1})^{-1}C_k\right)^{-1}C_k^{\top}(\tilde{R}_k^{i+1})^{-1}\left[\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1} + Y\right]^{-1} = \left[\tilde{P}_{k|k-1}^{i+1} - \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left(C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top} + \tilde{R}_k^{i+1}\right)^{-1}C_k\tilde{P}_{k|k-1}^{i+1}\right]C_k^{\top}(\tilde{R}_k^{i+1})^{-1}\left[\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1} + Y\right]^{-1} = \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1}\left[\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)^{-1} + Y\right]^{-1} = \tilde{P}_{k|k-1}^{i+1}C_k^{\top}\left[I_m + Y\left(\tilde{R}_k^{i+1} + C_k\tilde{P}_{k|k-1}^{i+1}C_k^{\top}\right)\right]^{-1},$$

where the third equality holds due to Lemma 2.
Therefore, $q^{i+1}(x_k)$ is updated as the Gaussian PDF $q^{i+1}(x_k) = \mathcal{N}(x_k \mid \hat{x}_{k|k}^{i+1}, P_{k|k}^{i+1})$, where the mean $\hat{x}_{k|k}^{i+1} = \hat{x}_{k|k-1}$ and the covariance matrix $P_{k|k}^{i+1}$ is given by (48). Similarly, $q^{i+1}(y_k)$ is updated as the Gaussian PDF $q^{i+1}(y_k) = \mathcal{N}(y_k \mid \hat{y}_{k|k}^{i+1}, P_{yy,k|k}^{i+1})$, where the mean $\hat{y}_{k|k}^{i+1} = \hat{y}_{k|k-1} = C_k\hat{x}_{k|k-1}$ and the covariance matrix $P_{yy,k|k}^{i+1}$ is given by (48). □
(8) The computation of expectations: According to Definition 2 and Definition 3, the required expectations E i + 1 [ ξ k ] , E i + 1 [ λ k ] , E i + 1 [ Σ k 1 ] and E i + 1 [ R k 1 ] are calculated as follows
E i + 1 [ ξ k ] = α k i + 1 / β k i + 1 , E i + 1 [ λ k ] = η k i + 1 / δ k i + 1 ,
E i + 1 [ Σ k 1 ] = o ^ k i + 1 ( O ^ k i + 1 ) 1 , E i + 1 [ R k 1 ] = u ^ k i + 1 ( U ^ k i + 1 ) 1 .
In view of (30), the required expectations S k i + 1 and D k i + 1 are computed as
$$
S_k^{i+1} = \mathbb{E}^{i+1}\big[(\rho_{k|k-1} \rho_{k|k-1}^{\top})_{xx}\big] = P_{k|k}^{i+1} + \big(\hat{x}_{k|k}^{i+1} - \hat{x}_{k|k-1}\big)\big(\hat{x}_{k|k}^{i+1} - \hat{x}_{k|k-1}\big)^{\top},
$$
$$
D_k^{i+1} = \mathbb{E}^{i+1}\big[(\rho_{k|k-1} \rho_{k|k-1}^{\top})_{yy}\big] = C_k P_{k|k}^{i+1} C_k^{\top} - C_k P_{xy,k|k}^{i+1} - \big(C_k P_{xy,k|k}^{i+1}\big)^{\top} + P_{yy,k|k}^{i+1},
$$
where P_{k|k}^{i+1}, P_{yy,k|k}^{i+1}, and P_{xy,k|k}^{i+1} are given by (48). In the case of γ_k = 0, it follows from x̂_{k|k}^{i+1} = x̂_{k|k-1} that S_k^{i+1} = P_{k|k}^{i+1}.

4.2. Variational Bayesian Filtering in the Case of Transmission

When the measurement y k is transmitted by the sensor at time k, i.e., γ k = 1 , the set of the unknown variables is denoted as
Ω 1 = { x k , ξ k , λ k , Σ k , R k } .
The joint PDF p ( Ω 1 , I k ) is factorized as
$$
\begin{aligned}
p(\Omega_1, I_k) &= p(\Omega_1, I_{k-1}, \gamma_k = 1, y_k) \\
&= p(\gamma_k = 1 \mid y_k, I_{k-1})\, p(y_k \mid x_k, \lambda_k, R_k)\, p(x_k \mid \xi_k, \Sigma_k, I_{k-1})\, p(R_k \mid I_{k-1})\, p(\Sigma_k \mid I_{k-1})\, p(\lambda_k)\, p(\xi_k)\, p(I_{k-1}) \\
&= \Big[1 - \exp\big(-\tfrac{1}{2} \tilde{y}_{k|k-1}^{\top} Y \tilde{y}_{k|k-1}\big)\Big]\, \mathrm{N}(y_k \mid C_k x_k, R_k/\lambda_k)\, \mathrm{N}(x_k \mid \hat{x}_{k|k-1}, \Sigma_k/\xi_k)\, \mathrm{IW}(R_k \mid \hat{u}_{k-1|k-1}, \hat{U}_{k-1|k-1}) \\
&\quad \times \mathrm{IW}(\Sigma_k \mid \hat{o}_{k-1|k-1}, \hat{O}_{k-1|k-1})\, \mathrm{G}(\lambda_k \mid \nu/2, \nu/2)\, \mathrm{G}(\xi_k \mid \omega/2, \omega/2)\, p(I_{k-1}).
\end{aligned}
$$
The recursions of ξ k , λ k , Σ k , and R k are the same as those in the case of γ k = 0 . Let θ = x k , so that
$$
\begin{aligned}
\log q^{i+1}(x_k) &= -\tfrac{1}{2} \mathbb{E}^{i+1}[\xi_k]\, (x_k - \hat{x}_{k|k-1})^{\top}\, \mathbb{E}^{i+1}[\Sigma_k^{-1}]\, (x_k - \hat{x}_{k|k-1}) \\
&\quad - \tfrac{1}{2} \mathbb{E}^{i+1}[\lambda_k]\, (y_k - C_k x_k)^{\top}\, \mathbb{E}^{i+1}[R_k^{-1}]\, (y_k - C_k x_k) + C_{x_k} \\
&= -\tfrac{1}{2} (x_k - \hat{x}_{k|k-1})^{\top} \big(\tilde{P}_{k|k-1}^{i+1}\big)^{-1} (x_k - \hat{x}_{k|k-1}) - \tfrac{1}{2} (y_k - C_k x_k)^{\top} \big(\tilde{R}_k^{i+1}\big)^{-1} (y_k - C_k x_k) + C_{x_k},
\end{aligned}
$$
where R ˜ k i + 1 and P ˜ k | k 1 i + 1 are given by (49). Hence x k is updated by the following Gaussian PDF:
q i + 1 ( x k ) = N ( x k | x ^ k | k i + 1 , P k | k i + 1 ) ,
where the mean x ^ k | k i + 1 and covariance matrix P k | k i + 1 are given by
$$
\begin{aligned}
\hat{x}_{k|k}^{i+1} &= \hat{x}_{k|k-1} + K_k^{i+1} \big(y_k - C_k \hat{x}_{k|k-1}\big), \\
P_{k|k}^{i+1} &= \tilde{P}_{k|k-1}^{i+1} - K_k^{i+1} C_k \tilde{P}_{k|k-1}^{i+1}, \\
K_k^{i+1} &= \tilde{P}_{k|k-1}^{i+1} C_k^{\top} \big(C_k \tilde{P}_{k|k-1}^{i+1} C_k^{\top} + \tilde{R}_k^{i+1}\big)^{-1}.
\end{aligned}
$$
The expectations S k i + 1 and D k i + 1 are given as follows.
$$
S_k^{i+1} = P_{k|k}^{i+1} + \big(\hat{x}_{k|k}^{i+1} - \hat{x}_{k|k-1}\big)\big(\hat{x}_{k|k}^{i+1} - \hat{x}_{k|k-1}\big)^{\top},
$$
$$
D_k^{i+1} = \mathbb{E}^{i+1}\big[(y_k - C_k x_k)(y_k - C_k x_k)^{\top}\big] = \big(y_k - C_k \hat{x}_{k|k}^{i+1}\big)\big(y_k - C_k \hat{x}_{k|k}^{i+1}\big)^{\top} + C_k P_{k|k}^{i+1} C_k^{\top},
$$
where P k | k i + 1 is given by (64).
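In the transmission case, the Gaussian update (64) is a standard Kalman-type correction driven by the scaled matrices R̃_k^{i+1} and P̃_{k|k-1}^{i+1} from line 17 of Algorithm 1. A minimal sketch of one such iteration (the function name and array layout are illustrative; the hyperparameter expectations are taken as given inputs, since their own updates (35)–(45) are defined earlier in the paper):

```python
import numpy as np

def gaussian_update(x_pred, C, y, E_R_inv, E_lam, E_Sig_inv, E_xi):
    """One fixed-point iteration of the Gaussian update q^{i+1}(x_k), case gamma_k = 1.

    R~ = (E[R_k^{-1}])^{-1} / E[lambda_k] and P~ = (E[Sigma_k^{-1}])^{-1} / E[xi_k]
    follow line 17 of Algorithm 1; the correction itself is eq. (64).
    """
    R_t = np.linalg.inv(E_R_inv) / E_lam      # modified measurement covariance
    P_t = np.linalg.inv(E_Sig_inv) / E_xi     # modified prediction covariance
    S = C @ P_t @ C.T + R_t                   # innovation covariance
    K = P_t @ C.T @ np.linalg.inv(S)          # gain K_k^{i+1}
    x_upd = x_pred + K @ (y - C @ x_pred)     # posterior mean
    P_upd = P_t - K @ C @ P_t                 # posterior covariance
    return x_upd, P_upd
```

As E[R_k^{-1}] grows (i.e., the measurement noise is believed to be small), the gain approaches the identity-like limit and the update trusts the transmitted y_k more heavily.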
Remark 3. 
In the derivation of variational Bayesian filtering, event-triggered structure information is used to calculate the logarithm of joint PDF p ( Ω , I k ) in the case of non-transmission. Despite the absence of the measurement, the unknown covariances are adaptively estimated by taking advantage of the conditional probabilities of the triggering decision (24). In the case of transmission, the measurement received by the remote estimator can be directly used to update the system states, as seen in (64).
To sum up, we propose the robust event-triggered Student’s t-based variational Bayesian filtering, as summarized in Algorithm 1.
Algorithm 1 Event-triggered Student's t-based robust variational Bayesian filtering
Input: x̂_{k-1|k-1}, P_{k-1|k-1}, A_k, C_k, Q̃_{k-1}, û_{k-1|k-1}, Û_{k-1|k-1}; m, n, ω, ν, τ, ρ, N
1: Time update:
2:   x̂_{k|k-1} = A_k x̂_{k-1|k-1}
3:   P̃_{k|k-1} = A_k P_{k-1|k-1} A_k^⊤ + Q̃_{k-1}
4: Variational measurement update:
5:   Initialization: x̂_{k|k}^0 = x̂_{k|k-1}, P_{k|k}^0 = P̃_{k|k-1}, ô_{k|k-1} = τ, Ô_{k|k-1} = τ P̃_{k|k-1}, û_{k|k-1} = ρ û_{k-1|k-1}, Û_{k|k-1} = ρ Û_{k-1|k-1}, E^0[Σ_k^{-1}] = ô_{k|k-1} Ô_{k|k-1}^{-1}, E^0[R_k^{-1}] = û_{k|k-1} Û_{k|k-1}^{-1}
6:   for i = 0 to N - 1 do
7:     Calculate S_k^i and D_k^i:
8:     if γ_k = 0 then
9:       S_k^i = P_{k|k}^i; D_k^i is calculated by (59)
10:    else
11:      S_k^i and D_k^i are calculated by (65) and (66)
12:    end if
13:    Update q^{i+1}(ξ_k) = G(ξ_k | α_k^{i+1}, β_k^{i+1}) by (35), E^{i+1}[ξ_k] = α_k^{i+1}/β_k^{i+1}
14:    Update q^{i+1}(λ_k) = G(λ_k | η_k^{i+1}, δ_k^{i+1}) by (39), E^{i+1}[λ_k] = η_k^{i+1}/δ_k^{i+1}
15:    Update q^{i+1}(Σ_k) = IW(Σ_k | ô_k^{i+1}, Ô_k^{i+1}) by (42), E^{i+1}[Σ_k^{-1}] = ô_k^{i+1} (Ô_k^{i+1})^{-1}
16:    Update q^{i+1}(R_k) = IW(R_k | û_k^{i+1}, Û_k^{i+1}) by (45), E^{i+1}[R_k^{-1}] = û_k^{i+1} (Û_k^{i+1})^{-1}
17:    Update q^{i+1}(x_k) = N(x_k | x̂_{k|k}^{i+1}, P_{k|k}^{i+1}): R̃_k^{i+1} = (E^{i+1}[R_k^{-1}])^{-1}/E^{i+1}[λ_k], P̃_{k|k-1}^{i+1} = (E^{i+1}[Σ_k^{-1}])^{-1}/E^{i+1}[ξ_k]
18:    if γ_k = 0 then
19:      x̂_{k|k}^{i+1} = x̂_{k|k-1}; P_{k|k}^{i+1}, P_{yy,k|k}^{i+1}, and P_{xy,k|k}^{i+1} are calculated by (48)
20:    else
21:      x̂_{k|k}^{i+1} and P_{k|k}^{i+1} are updated by (64)
22:    end if
23:  end for
24:  x̂_{k|k} = x̂_{k|k}^N, P_{k|k} = P_{k|k}^N, ô_{k|k} = ô_k^N, Ô_{k|k} = Ô_k^N, û_{k|k} = û_k^N, Û_{k|k} = Û_k^N
Output: x̂_{k|k}, P_{k|k}, û_{k|k}, Û_{k|k}
The parameters in Algorithm 1 are discussed as follows. (1) ν and ω are the degrees of freedom (dofs) of the Student's t-distributions (6) and (7), respectively. A Student's t-distribution reduces to the Cauchy distribution when the dof is 1 and to the Gaussian distribution as the dof tends to infinity, so the selection of the dofs reflects how heavy-tailed the target distribution is. (2) τ > 0 is a tuning parameter used to reflect the influence of the prior information P̃_{k|k-1}. For a large τ, the measurement updates introduce more of the prior uncertainty caused by the heavy-tailed process noise; for a small τ, a large amount of information about the system process is lost. Generally, the tuning parameter is suggested to be selected as τ ∈ [2, 6]. (3) ρ ∈ (0, 1] is the forgetting factor used to update the prior scale matrix in the inverse Wishart prior distribution of R_k in (11), which indicates the extent of the temporal fluctuations. The smaller the forgetting factor ρ, the more the information from the previous estimate R̂_{k-1} is forgotten; conversely, a larger ρ retains more of it. Generally, the forgetting factor is suggested to be selected as ρ ∈ [0.94, 1] for better performance.
Remark 4. 
The computational complexity of Algorithm 1 is analyzed as follows. In the time update step, the computational complexity of calculating x̂_{k|k-1} and P̃_{k|k-1} is O(n^3). In the variational measurement update, for each iteration i, the complexity of calculating S_k^i and D_k^i is O(n^2) and O(mn^2 + nm^2), respectively. The complexity of updating ξ_k, λ_k, Σ_k, and R_k is O(n^3), O(m^3), O(n^2), and O(m^2), respectively. Hence, the computational complexity of obtaining x̂_{k|k}^{i+1} is O(mn^2 + nm^2 + m^3 + n^3), where n and m are the dimensions of the system state and the measurement output, respectively.

5. Simulation Results

In this section, the performance of the proposed robust event-triggered variational Bayesian filtering method is illustrated by the problem of target tracking. The target moves according to a constant velocity model in two-dimensional space, and its position is measured. The system matrices are given as
$$
A_k = \begin{bmatrix} I_2 & T I_2 \\ 0 & I_2 \end{bmatrix}, \qquad C_k = \begin{bmatrix} I_2 & 0 \end{bmatrix},
$$
where the parameter T = 1   s is the sampling interval. The state dimension and the measurement dimension are n = 4 and m = 2 , respectively. The nominal process and measurement noise covariances are set as
$$
Q_k = \begin{bmatrix} \frac{T^3}{3} I_2 & \frac{T^2}{2} I_2 \\ \frac{T^2}{2} I_2 & T I_2 \end{bmatrix} q, \qquad R_k = r I_2,
$$
where q = 1 and r = 100   m 2 . The heavy-tailed process and measurement noise are generated by
$$
w_k \sim \begin{cases} \mathrm{N}(0, Q_k), & \text{w.p. } 0.95, \\ \mathrm{N}(0, 100 Q_k), & \text{w.p. } 0.05, \end{cases} \qquad
v_k \sim \begin{cases} \mathrm{N}(0, R_k), & \text{w.p. } 0.90, \\ \mathrm{N}(0, 100 R_k), & \text{w.p. } 0.10. \end{cases}
$$
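The Gaussian-mixture noise above is straightforward to generate: each sample is drawn from the nominal covariance with probability 1 − p and from the inflated covariance with probability p. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def heavy_tailed_noise(cov, outlier_prob, inflation=100.0, size=1, rng=None):
    """Draw from a Gaussian mixture: N(0, cov) w.p. 1-p, N(0, inflation*cov) w.p. p."""
    rng = np.random.default_rng() if rng is None else rng
    d = cov.shape[0]
    nominal = rng.multivariate_normal(np.zeros(d), cov, size)
    inflated = rng.multivariate_normal(np.zeros(d), inflation * cov, size)
    mask = rng.random(size) < outlier_prob        # which samples are outliers
    return np.where(mask[:, None], inflated, nominal)

# v_k: 10% outliers on R_k = 100 I_2 (r = 100 m^2), as in the simulation setup
R = 100.0 * np.eye(2)
v = heavy_tailed_noise(R, outlier_prob=0.10, size=5000, rng=np.random.default_rng(1))
```

The empirical variance of such a mixture is roughly r(0.9 + 0.1 × 100) ≈ 1090, i.e., an order of magnitude larger than the nominal r, which is what defeats filters tuned to the nominal covariance.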
In this simulation, we compare the proposed filtering algorithm, Algorithm 1, with the following estimators.
1. RST-KF (the robust Student's t-based Kalman filter [23]): RST-KF is proposed for heavy-tailed process and measurement noise, where the accurate noise covariances are known and y_k is sent at each time k.
2. ETVBF (the event-triggered variational Bayesian filter [31]): ETVBF is proposed for unknown Gaussian noise covariance, where the unknown Gaussian process noise covariance is formulated as a multiple of the nominal process noise covariance.
3. KFNCM (the Kalman filter with nominal noise covariance matrices): The measurement y_k is sent at each time k.
4. CLSET-KF (the closed-loop stochastic event-triggered Kalman filter [9]): The stochastic event-triggered scheme is adopted, where the process and measurement noise have Gaussian distributions with known covariance matrices.
The inaccurate nominal initial process and measurement noise covariances are set as Q ˜ k = I 4 and R ˜ 0 = β I 2 , respectively. The triggering parameter takes the form of Y = ζ I 2 . The mean and covariance of x 0 are chosen as x ^ 0 | 0 = [ 10 , 10 , 10 , 10 ] and P 0 | 0 = 100 I 4 . The dof parameters are set as ω = ν = 5 . The tuning parameter and the forgetting factor are set as τ = 5 and ρ = 0.997 , respectively. To compare the estimation performance, the root mean square error (RMSE) and the averaged RMSE (ARMSE) are used as the performance metrics, which are defined as
$$
\mathrm{RMSE}_k \triangleq \Bigg[\frac{1}{Mn} \sum_{j=1}^{M} \sum_{l=1}^{n} \big(x_{l,k}^{(j)} - \hat{x}_{l,k|k}^{(j)}\big)^2\Bigg]^{\frac{1}{2}}, \qquad
\mathrm{ARMSE} \triangleq \Bigg[\frac{1}{MKn} \sum_{j=1}^{M} \sum_{k=1}^{K} \sum_{l=1}^{n} \big(x_{l,k}^{(j)} - \hat{x}_{l,k|k}^{(j)}\big)^2\Bigg]^{\frac{1}{2}},
$$
where M = 10,000, K = 200, and n denote the total number of Monte Carlo runs, the number of time steps in each run, and the dimension of the state, respectively. x_{l,k}^{(j)} and x̂_{l,k|k}^{(j)} denote the l-th components of the system state x_k and the estimate x̂_{k|k} at time k in the j-th Monte Carlo run, respectively. To evaluate the estimation performance under different transmission frequencies, the communication rate is defined as
$$
\gamma \triangleq \limsup_{N \to \infty} \frac{1}{N} \sum_{k=0}^{N-1} \mathbb{E}[\gamma_k],
$$
which is numerically calculated by
$$
\gamma = \frac{1}{MK} \sum_{j=1}^{M} \sum_{k=1}^{K} \gamma_{k,j},
$$
where γ k , j denotes γ k in the j-th Monte Carlo run.
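The RMSE, ARMSE, and empirical communication rate are simple averages over the Monte Carlo ensemble. A minimal sketch (the (M, K, n) array layout is an assumption for illustration):

```python
import numpy as np

def armse(x_true, x_est):
    """ARMSE over M runs, K steps, and n state components.

    x_true, x_est: arrays of shape (M, K, n).
    """
    sq = (x_true - x_est) ** 2
    return np.sqrt(sq.mean())            # sqrt of 1/(MKn) * sum of squared errors

def rmse_per_step(x_true, x_est):
    """RMSE_k averaged over runs and components; returns an array of shape (K,)."""
    sq = (x_true - x_est) ** 2
    return np.sqrt(sq.mean(axis=(0, 2)))

def communication_rate(gamma):
    """Empirical rate from the triggering decisions gamma of shape (M, K)."""
    return gamma.mean()
```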
Figure 2 depicts the ARMSEs of the five filtering algorithms under β = 100. The proposed filter performs better than ETVBF, KFNCM, and CLSET-KF, and its ARMSE gradually stabilizes as ζ increases. Unlike RST-KF, the proposed filter does not know the exact nominal noise covariances; hence, its performance is worse than that of RST-KF because of this information loss. ETVBF is developed for unknown Gaussian noise covariances and performs poorly in the presence of outliers. KFNCM and CLSET-KF are the optimal estimators for linear Gaussian systems without and with the closed-loop event-triggered scheme, respectively, and their performance tends to coincide as ζ increases. However, both filters are ineffective under heavy-tailed noise.
Figure 3 shows the communication rates of the three filters that use a closed-loop stochastic event-triggered scheme. As seen in Figure 2 and Figure 3, ETVBF has the lowest communication rate but the largest ARMSE. Compared with CLSET-KF, the proposed filter achieves both a lower communication rate and better estimation performance. Therefore, the proposed filter achieves a robust estimation performance with acceptable communication overhead under heavy-tailed noise.
To compare the robust estimation performance under the stochastic event-triggered scheme, Figure 4 plots the target trajectories and the RMSEs within 200 steps under ζ = 0.025 and β = 100. In this case, the communication rates of the proposed filter, ETVBF, and CLSET-KF are 0.8108, 0.7695, and 0.8445, respectively. As shown in Figure 4a, ETVBF and CLSET-KF fail to track the target in the presence of outliers, while the proposed filter can still track the target position well. As seen in Figure 4b, the mean RMSEs of the proposed filter and CLSET-KF are 13.2289 and 14.5960, respectively. As the time step k increases, the RMSE of the proposed filter becomes significantly smaller than those of CLSET-KF and ETVBF.
To illustrate the influence of the nominal measurement noise covariance R̃_0, comparisons of the ARMSE and the communication rate are depicted in Figure 5. Only ETVBF and the proposed filter are affected by R̃_0, and both perform better as β increases. It is also noted that the ARMSE of the proposed filter stabilizes when β is about 200. However, as β becomes large, the communication rates of both ETVBF and the proposed filter first decrease and then increase. Hence, a trade-off between the estimation performance and the communication rate can be achieved by adjusting the parameter β.

6. Conclusions

In this paper, a robust variational Bayesian filtering method is proposed for the stochastic event-triggered state estimation system with heavy-tailed process and measurement noise. Heavy-tailed noise is approximated as Student’s t-distributions, where the prior distributions of the scale matrices are chosen as the inverse Wishart distributions. Based on a Student’s t-based hierarchical Gaussian state-space model, the variational Bayesian inference approach and the fixed-point iteration method are utilized to jointly estimate the system states and the unknown covariances in the cases of non-transmission and transmission, respectively. The simulation results demonstrate the effectiveness of the proposed stochastic event-triggered filtering method for heavy-tailed process and measurement noise with inaccurate covariance matrices.
This work addresses the practical constraints of limited communication resources and measurement outliers. Future work includes the design of event-triggered robust state estimators under other non-ideal conditions, such as communication delays and model uncertainties, as well as the extension of the proposed estimation method to real application scenarios.

Author Contributions

Conceptualization, D.D. and J.X.; methodology, D.D.; software, D.D.; validation, D.D., P.Y. and J.X.; formal analysis, D.D.; investigation, D.D.; resources, D.D.; data curation, D.D.; writing—original draft preparation, D.D.; writing—review and editing, D.D., P.Y. and J.X.; visualization, D.D.; supervision, P.Y. and J.X.; project administration, J.X. and P.Y.; funding acquisition, P.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hespanha, J.P.; Naghshtabrizi, P.; Xu, Y. A Survey of Recent Results in Networked Control Systems. Proc. IEEE 2007, 95, 138–162.
2. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2001.
3. Yu, X.; Jin, G.; Li, J. Target Tracking Algorithm for System with Gaussian/Non-Gaussian Multiplicative Noise. IEEE Trans. Veh. Technol. 2020, 69, 90–100.
4. Zhu, H.; Yuen, K.V.; Mihaylova, L.; Leung, H. Overview of Environment Perception for Intelligent Vehicles. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2584–2601.
5. Anderson, B.D.; Moore, J.B. Optimal Filtering; Courier Corporation: North Chelmsford, MA, USA, 2012.
6. Peng, C.; Li, F. A Survey on Recent Advances in Event-triggered Communication and Control. Inf. Sci. 2018, 457–458, 113–125.
7. Yan, S.; Gu, Z.; Park, J.H.; Xie, X. Adaptive Memory-Event-Triggered Static Output Control of T–S Fuzzy Wind Turbine Systems. IEEE Trans. Fuzzy Syst. 2022, 30, 3894–3904.
8. Wu, J.; Jia, Q.; Johansson, K.H.; Shi, L. Event-Based Sensor Data Scheduling: Trade-Off Between Communication Rate and Estimation Quality. IEEE Trans. Autom. Control 2013, 58, 1041–1046.
9. Han, D.; Mo, Y.; Wu, J.; Weerakkody, S.; Sinopoli, B.; Shi, L. Stochastic Event-Triggered Sensor Schedule for Remote State Estimation. IEEE Trans. Autom. Control 2015, 60, 2661–2675.
10. Pearson, R. Outliers in process modeling and identification. IEEE Trans. Control Syst. Technol. 2002, 10, 55–63.
11. Agamennoni, G.; Nieto, J.I.; Nebot, E.M. Approximate Inference in State-Space Models with Heavy-Tailed Noise. IEEE Trans. Signal Process. 2012, 60, 5024–5037.
12. Liu, S.; Wang, Z.; Chen, Y.; Wei, G. Protocol-Based Unscented Kalman Filtering in the Presence of Stochastic Uncertainties. IEEE Trans. Autom. Control 2020, 65, 1303–1309.
13. Mu, H.Q.; Yuen, K.V. Novel Outlier-Resistant Extended Kalman Filter for Robust Online Structural Identification. J. Eng. Mech. 2015, 141, 04014100.
14. Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements. Automatica 2012, 48, 2007–2015.
15. Yu, H.; Zou, Y.; Li, Q.; Zhu, J.; Li, H.; Liu, S.; Zhang, H.; Dai, K. Anti-delay Kalman filter fusion algorithm for inter-vehicle sensor network with finite-step convergence. J. Frankl. Inst. 2024, 361, 106786.
16. Qian, J.; Duan, P.; Duan, Z.; Shi, L. Event-Triggered Distributed State Estimation: A Conditional Expectation Method. IEEE Trans. Autom. Control 2023, 68, 6361–6368.
17. Xu, J.; Ho, D.W.C.; Li, F.; Yang, W.; Tang, Y. Event-Triggered Risk-Sensitive State Estimation for Hidden Markov Models. IEEE Trans. Autom. Control 2019, 64, 4276–4283.
18. Huang, J.; Shi, D.; Chen, T. Robust event-triggered state estimation: A risk-sensitive approach. Automatica 2019, 99, 253–265.
19. Sun, Y.C.; Yang, G.H. Remote State Estimation for Nonlinear Systems via a Fading Channel: A Risk-Sensitive Approach. IEEE Trans. Cybern. 2022, 52, 10253–10262.
20. Cheng, G.; Liu, J.; Song, S. Event-Triggered State Filter Estimation for Nonlinear Systems with Packet Dropout and Correlated Noise. Sensors 2024, 24, 769.
21. Li, Z.; Li, S.; Liu, B.; Yu, S.S.; Shi, P. A Stochastic Event-Triggered Robust Cubature Kalman Filtering Approach to Power System Dynamic State Estimation with Non-Gaussian Measurement Noises. IEEE Trans. Control Syst. Technol. 2023, 31, 889–896.
22. Li, W.; Wang, Z.; Liu, Q.; Guo, L. An Information Aware Event-triggered Scheme for Particle Filter based Remote State Estimation. Automatica 2019, 103, 151–158.
23. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z.; Chambers, J.A. A Novel Robust Student's t-Based Kalman Filter. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1545–1554.
24. Zhu, H.; Zhang, G.; Li, Y.; Leung, H. A Novel Robust Kalman Filter with Unknown Non-stationary Heavy-tailed Noise. Automatica 2021, 127, 109511.
25. Zhu, H.; Zhang, G.; Li, Y.; Leung, H. An Adaptive Kalman Filter with Inaccurate Noise Covariances in the Presence of Outliers. IEEE Trans. Autom. Control 2022, 67, 374–381.
26. Huang, Y.; Zhang, Y.; Zhao, Y.; Chambers, J.A. A Novel Robust Gaussian-Student's t Mixture Distribution Based Kalman Filter. IEEE Trans. Signal Process. 2019, 67, 3606–3620.
27. Dong, P.; Jing, Z.; Leung, H.; Shen, K.; Wang, J. Student-t Mixture Labeled Multi-Bernoulli Filter for Multi-target Tracking with Heavy-tailed Noise. Signal Process. 2018, 152, 331–339.
28. Dong, X.; Chisci, L.; Cai, Y. An Adaptive Filter for Nonlinear Multi-Sensor Systems with Heavy-Tailed Noise. Sensors 2020, 20, 6757.
29. Fu, H.; Huang, W.; Li, Z.; Cheng, Y.; Zhang, T. Robust Cubature Kalman Filter with Gaussian-Multivariate Laplacian Mixture Distribution and Partial Variational Bayesian Method. IEEE Trans. Signal Process. 2023, 71, 847–858.
30. Zhang, J.; Wei, G.; Ding, D.; Ju, Y. Distributed Sequential State Estimation over Binary Sensor Networks with Inaccurate Process Noise Covariance: A Variational Bayesian Framework. IEEE Trans. Signal Inf. Process. Over Netw. 2025, 11, 1–10.
31. Lv, X.; Duan, P.; Duan, Z.; Chen, G.; Shi, L. Stochastic Event-Triggered Variational Bayesian Filtering. IEEE Trans. Autom. Control 2023, 68, 4321–4328.
32. Shen, H.; Wen, G.; Lv, Y.; Zhou, J. A Stochastic Event-Triggered Robust Unscented Kalman Filter-Based USV Parameter Estimation. IEEE Trans. Ind. Electron. 2024, 71, 11272–11282.
33. Yan, S.; Gu, Z.; Park, J.H.; Xie, X. Sampled Memory-Event-Triggered Fuzzy Load Frequency Control for Wind Power Systems Subject to Outliers and Transmission Delays. IEEE Trans. Cybern. 2023, 53, 4043–4053.
34. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006.
35. O'Hagan, A.; Forster, J.J. Kendall's Advanced Theory of Statistics: Bayesian Inference; Arnold: London, UK, 2004.
36. Deng, D.; Xiong, J. Stochastic Event-triggered Remote State Estimation over Gaussian Channels without Knowing Triggering Decisions: A Bayesian Inference Approach. Automatica 2023, 152, 110951.
37. Huang, Y.; Zhang, Y.; Wu, Z.; Li, N.; Chambers, J. A Novel Adaptive Kalman Filter with Inaccurate Process and Measurement Noise Covariance Matrices. IEEE Trans. Autom. Control 2018, 63, 594–601.
38. Tzikas, D.G.; Likas, A.C.; Galatsanos, N.P. The variational Approximation for Bayesian Inference. IEEE Signal Process. Mag. 2008, 25, 131–146.
39. Harville, D.A. Matrix Algebra from a Statistician's Perspective; Taylor & Francis: Abingdon, UK, 1998.
40. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2002.
Figure 1. Graphical model for the event-triggered hierarchical Gaussian state-space model based on Student’s t-distribution. The nodes shown with blue rectangles correspond to the observed variable z k = γ k y k , while the nodes with red circles correspond to the inferred variables ξ k , λ k , Σ k , R k , and x k . The blue box represents the two types of observed variables under the event-triggered scheme. The orange boxes represent the latent variables related to the heavy-tailed noise w k and v k .
Figure 2. ARMSEs of the proposed filtering, RST-KF, ETVBF, KFNCM, and CLSET-KF versus the triggering parameter.
Figure 3. The communication rates of the proposed filtering, ETVBF, and CLSET-KF versus the triggering parameter.
Figure 4. Trajectories and RMSEs under ζ = 0.025 and β = 100 .
Figure 5. The comparisons of ARMSE and the communication rate versus β under ζ = 0.02 .
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
