Student’s t-Based Robust Poisson Multi-Bernoulli Mixture Filter under Heavy-Tailed Process and Measurement Noises

1 College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China
2 Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(17), 4232; https://doi.org/10.3390/rs15174232
Submission received: 21 July 2023 / Revised: 24 August 2023 / Accepted: 25 August 2023 / Published: 29 August 2023
(This article belongs to the Special Issue Radar and Microwave Sensor Systems: Technology and Applications)

Abstract

A novel Student's t-based robust Poisson multi-Bernoulli mixture (PMBM) filter is proposed to effectively perform multi-target tracking under heavy-tailed process and measurement noises. To cope with the common scenario where the process and measurement noises possess different heavy-tailed degrees, the proposed filter models these noises as two Student's t-distributions with different degrees of freedom. Furthermore, this method considers that the scale matrix of the one-step predictive probability density function is unknown and models it as an inverse-Wishart distribution to mitigate the influence of heavy-tailed process noise. A closed-form recursion of the PMBM filter for propagating the approximated Gaussian-based PMBM posterior density is derived by introducing the variational Bayesian approach and a hierarchical Gaussian state-space model. The overall performance improvement is demonstrated through three simulations.

1. Introduction

Multi-target tracking (MTT) aims to jointly estimate the time-varying number of targets and their states under the interference of various noises, false alarms, detection uncertainty, and data association uncertainty [1]. It has been widely used in civilian and military fields such as autonomous driving, air traffic control, missile warning, etc. In the past two decades, a new random finite set (RFS)-based MTT strategy, which models the multi-target state and sensor measurement as two individual RFSs, has been developed to avoid the data association required by the traditional MTT strategy [1,2,3].
The multi-target Bayesian (MTB) filter provides an optimal solution for the RFS-based MTT algorithms. However, due to the existence of multiple integrals, as well as the combinatorial nature stemming from the infinite-dimensional multi-target densities, the MTB filter is computationally intractable, which limits its application [1,2,4]. Recently, various suboptimal solutions have been successively developed, with two of the best-known being the probability hypothesis density (PHD) filter [1,5] and the multi-Bernoulli (MeMBer, MB) filter [6]. The PHD filter propagates the multi-target intensity (the first-order statistic of the target RFS), which is computationally tractable but not accurate enough. The MeMBer filter propagates the MB density; however, its cardinality estimation has a certain bias. By introducing additional parameters, many variants, such as the cardinality balanced MeMBer (CBMeMBer) filter [7], the δ-generalized labeled MeMBer (δ-GLMB) filter [8], and the labeled MeMBer (LMB) filter [9], have been developed. It is worth mentioning that both the δ-GLMB filter and the LMB filter can achieve high tracking accuracy and output target trajectories, but at the cost of high computational complexity. In recent years, a new variant called the Poisson multi-Bernoulli mixture (PMBM) filter has been developed [10]. The PMBM filter models the birth process as a Poisson point process, which is different from the MB birth in the GLMB and LMB filters. The propagated conjugate prior is the combination of a Poisson point process (PPP) and an MB mixture (MBM), in which the former models all targets that have never been detected, and the latter considers all the data association hypotheses and is implemented using the track-oriented multiple hypotheses tracking (TOMHT) formulation [10,11]. The PMBM filter achieves a much higher tracking accuracy compared to the aforementioned filters with a lower computational load, so it is a promising filter [12,13,14,15,16,17].
Conventional RFS-based MTT filters are mostly implemented by applying the standard Kalman filter in multi-target fields, so they are only suitable for Gaussian process and measurement noises [2]. However, in engineering applications, such as maneuvering target tracking, as well as the presence of burst interference and unreliable sensors, the distributions of these noises are usually heavy-tailed and non-Gaussian with many outliers. Compared to Gaussian noise, heavy-tailed noise statistically implies a larger uncertainty far from the mean; that is, the probability of a target movement or a measurement far from the mean is higher than under Gaussian noise. In this situation, the tracking performance of these conventional filters may break down. To mitigate this, the Student's t-distribution with naturally heavy-tailed characteristics is employed to model the heavy-tailed noise, but this can make obtaining a closed-form recursion intractable. Variational Bayesian (VB) inference provides a solution to this problem. This approach has been widely used for estimating unknown covariances [3,17,18,19], in which the VB-LMB [18] and Gaussian inverse-Wishart (IW) mixture LMB (GIWM-LMB) [19] filters were proposed by modeling the unknown measurement covariance as inverse-Gamma and IW distributions, respectively. Using a combination of the VB approach and the Student's t-model, a filter called the VB Kalman filter (VB-KF) [20,21] was proposed for single-target tracking (STT) under heavy-tailed measurement noise. This strategy has been extended to MTT fields [22,23,24,25], but the drawback is that these methods cannot cope with heavy-tailed process noise. As a generalization of the ubiquitous Kalman filter, the Student's t-filter (STF) [26] models the process and measurement noises as Student's t-distributions with the same degrees of freedom (DOF) and then propagates the Student's t-based density, which can deal with these two noises in a natural manner. Based on this method, the Student's t mixture CBMeMBer (STM-CBMeMBer) filter [27], the STM-LMB filter [2], and the STM-GLMB filter [28] were proposed, with the latter two in particular achieving higher estimation accuracy.
However, the STF-based filters still have inherent shortcomings. First, the moment-matching approximation is introduced to prevent the DOF from increasing in the update step. This approximation can only capture the first two moments, so it incurs a large error. Second, the STF-based filters require that the process and measurement noises have the same DOFs, which is seldom the case in real-world scenarios [26]. Finally, compared to the Gaussian approximation of a posterior probability density function (PDF), the STF-based filters use a Student's t-distribution to approximate the posterior PDF, which has been shown to be unreasonable [29]. To avoid these drawbacks, a novel heavy-tailed-noise Kalman filter, which models these two noises as Student's t-distributions with different DOFs, has been proposed to perform STT [29]. This filter uses the hierarchical Gaussian state-space model to approximate the posterior PDF. By introducing the VB approach, the target state is estimated iteratively, and the tracking performance is greatly improved.
In this paper, we extend this idea to the multi-target case and propose a robust Student's t-based PMBM (R-ST-PMBM) filter. To the best of our knowledge, the proposed R-ST-PMBM filter is the first efficient MTT filter that can theoretically handle process and measurement noises with different heavy-tailed degrees. Firstly, to match the characteristics of the heavy-tailed process and measurement noises, the one-step predicted PDF and measurement likelihood PDF are modeled as Student's t-distributions, where the DOFs can be set inconsistently to cope with process and measurement noises with different heavy-tailed degrees. Secondly, due to the influence of heavy-tailed process noise, it is not possible to obtain an accurate scale matrix of the one-step predicted PDF. To handle this, we assume this scale matrix is unknown and model it as an IW distribution to improve the filtering performance. Finally, a hierarchical Gaussian state-space model, which includes the parameters of the Student's t-distributions and the IW distribution, is introduced into the standard PMBM filter, and the PMBM posterior density is then iteratively updated through the VB approach. Note that we use an approximated Gaussian-based PMBM posterior rather than a Student's t-based PMBM density to approximate the true posterior PDF, which is more reasonable. The moment-matching approximation in the update of the STF-based filters is avoided, thus preserving the high-order moments completely and achieving higher tracking accuracy. Simulation results in different scenarios demonstrate the effectiveness of our contribution.
The rest of this paper is outlined as follows. Section 2 presents some useful background information, including the related models, the PMBM RFS, and the VB approach. Section 3 presents an analytic implementation for the proposed R-ST-PMBM filter. Section 4 validates the effectiveness of the proposed filter, and Section 5 presents the conclusions.

2. Background

2.1. PMBM RFS

The PMBM RFS involved in this paper consists of two parts: the Poisson RFS and the MBM RFS. Firstly, assume that the RFS X = {x_1, …, x_{|X|}} is a PPP, that is, its cardinality |X| is Poisson. All the elements of X are independent and identically distributed (i.i.d.). The density of this Poisson RFS can be denoted by
f_P(X) = e^{-\int \mu(x)\,dx} \prod_{x \in X} \mu(x)    (1)
where \mu(x) represents the single-target intensity. A single-Bernoulli component (single-target hypothesis) with probability of existence r and density p(x) can be denoted by
f_B(X) = \begin{cases} 1 - r, & X = \emptyset \\ r\,p(x), & X = \{x\} \\ 0, & |X| > 1 \end{cases}    (2)
An MB RFS contains a finite number of single-Bernoulli components, that is,
f_{MB}(X) \propto \sum_{X_1 \uplus \cdots \uplus X_n = X} \prod_{i=1}^{n} f_{B,i}(X_i)    (3)
where \uplus stands for the disjoint union, and \propto denotes proportionality. Similarly, an MBM RFS can be denoted by
f_{MBM}(X) \propto \sum_{j} \sum_{X_1 \uplus \cdots \uplus X_n = X} \prod_{i=1}^{n} w^{j,i} f_{B}^{j,i}(X_i)    (4)
By combining the Poisson RFS and MBM RFS, we obtain the PMBM RFS
f(X) = \sum_{P \uplus M = X} f_P(P) f_{MBM}(M) \propto \sum_{P \uplus M_1 \uplus \cdots \uplus M_n = X} f_P(P) \sum_{j} \prod_{i=1}^{n} w^{j,i} f_{B}^{j,i}(M_i)    (5)
where j is the index of the MB RFS within the mixture, and w^{j,i} is the weight of the i-th single-target hypothesis in the j-th MB RFS.
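For concreteness, the sketch below (our illustration, not code from the paper) shows one way the PMBM posterior of Equation (5) can be organized in software: the Poisson part as a list of weighted Gaussian components, and the MBM part as a list of MB hypotheses, each holding single-Bernoulli components with an existence probability, a weight, and a Gaussian density. The names GaussianComponent, Bernoulli, and PMBMPosterior are ours.

from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class GaussianComponent:      # one term w * N(x; m, P) of the Poisson intensity
    w: float
    m: np.ndarray
    P: np.ndarray

@dataclass
class Bernoulli:              # single-target hypothesis f_B in Equation (2)
    r: float                  # existence probability
    w: float                  # hypothesis weight w^{j,i}
    m: np.ndarray             # Gaussian mean of p(x)
    P: np.ndarray             # Gaussian covariance of p(x)

@dataclass
class PMBMPosterior:
    poisson: List[GaussianComponent] = field(default_factory=list)  # PPP part f_P
    mbm: List[List[Bernoulli]] = field(default_factory=list)        # one inner list per MB hypothesis j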

2.2. Models

We consider the linear state-space system with the target dynamic model and sensor measurement model given by
x_k = F x_{k-1} + w_{k-1}    (6)
z_k = H x_k + v_k    (7)
where x_k ∈ R^{n_x} and z_k ∈ R^{n_z} denote the target state and sensor measurement at time k. F and H are the state transition and measurement matrices. n_x and n_z denote the dimension of the target state and measurement, respectively. Here, we model the heavy-tailed process noise w_{k-1} and the measurement noise v_k as zero-mean Student's t-distributions as follows:
p(w_{k-1}) = S(w_{k-1}; 0, \Sigma_{k-1}, d_w)    (8)
p(v_k) = S(v_k; 0, R, d_v)    (9)
where S(x; \bar{x}, \Lambda, d_x) denotes a Student's t-distribution with mean \bar{x}, scale matrix \Lambda, and DOF d_x. The resulting one-step predicted and likelihood PDFs can be represented by [30]
p(x_k | z_{1:k-1}) = S(x_k; m_{k|k-1}, \Sigma_k, d_w) = \int_0^{\infty} N(x_k; m_{k|k-1}, \Sigma_k / \xi_k)\, G(\xi_k; d_w/2, d_w/2)\, d\xi_k    (10)
p(z_k | x_k) = S(z_k; H x_k, R, d_v) = \int_0^{\infty} N(z_k; H x_k, R / \lambda_k)\, G(\lambda_k; d_v/2, d_v/2)\, d\lambda_k    (11)
where G(x; \alpha, \beta) = \beta^{\alpha} x^{\alpha-1} e^{-\beta x} / \Gamma(\alpha) is the PDF of a Gamma distribution with shape parameter \alpha and rate parameter \beta. N(x; \bar{x}, P) represents a Gaussian distribution with mean \bar{x} and covariance P. \Gamma(\cdot) is the Gamma function [22]. To apply the hierarchical Gaussian state-space model in the single-target hypothesis prediction and update, we assume that the predicted posterior density is Gaussian with mean m_{k|k-1} and nominal covariance P_{k|k-1}. Then, we have
m_{k|k-1} = F m_{k-1}    (12)
P_{k|k-1} = F P_{k-1} F^T + Q    (13)
where Q denotes the nominal process noise covariance. We can derive the nominal single-target transition density as follows:
p(x_k | x_{k-1}) = N(x_k; F x_{k-1}, Q)    (14)
With the introduction of heavy-tailed process noise, Q is inaccurate and therefore not suitable as the scale matrix of the one-step predicted PDF. To handle this, we assume that the scale matrix \Sigma_k in Equation (10) is unknown and model it as an IW distribution with the PDF given by
p(\Sigma_k) = IW(\Sigma_k; u_k, U_k) = \frac{|U_k|^{u_k/2}\, |\Sigma_k|^{-(u_k+d+1)/2} \exp\{-0.5\, tr(U_k \Sigma_k^{-1})\}}{2^{d u_k / 2}\, \Gamma_d(u_k/2)}    (15)
where u_k denotes the DOF parameter. U_k denotes the inverse scale matrix, which is a symmetric and positive definite matrix of dimension d × d. \Gamma_d(\cdot) is the d-variate Gamma function. When u_k > d + 1, E[\Sigma_k^{-1}] = (u_k - d - 1) U_k^{-1} [31]. Let E[\Sigma_k^{-1}] = P_{k|k-1}^{-1}, and we make the following assumptions:
u_k = n_x + \tau + 1    (16)
U_k = \tau P_{k|k-1}    (17)
where τ is the tuning parameter. Rewrite Equations (10) and (11) in the following hierarchical forms:
p(x_k | \xi_k, \Sigma_k, z_{1:k-1}) = N(x_k; m_{k|k-1}, \Sigma_k / \xi_k)    (18)
p(\xi_k) = G(\xi_k; d_w/2, d_w/2)    (19)
p(z_k | x_k, \lambda_k) = N(z_k; H x_k, R / \lambda_k)    (20)
p(\lambda_k) = G(\lambda_k; d_v/2, d_v/2)    (21)
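As a quick illustration of the hierarchy in Equations (18)-(21) (a sketch of ours, not part of the paper), drawing an auxiliary variable from G(d/2, d/2) and then a Gaussian sample whose scale is divided by it reproduces a Student's t-distribution, which is easy to verify numerically:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d_w, m, sigma2 = 4.0, 0.0, 1.0                 # DOF, mean, scalar scale

# xi ~ Gamma(shape=d_w/2, rate=d_w/2); numpy uses the scale parameterization.
xi = rng.gamma(shape=d_w / 2, scale=2.0 / d_w, size=200_000)
x = rng.normal(m, np.sqrt(sigma2 / xi))        # x | xi ~ N(m, sigma2 / xi)

# The marginal of x should match a Student's t with d_w degrees of freedom.
print(np.mean(x > 3.0))                                      # empirical tail probability
print(stats.t.sf(3.0, df=d_w, loc=m, scale=np.sqrt(sigma2))) # Student's t tail probability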

2.3. VB Approximation

As a deterministic approximation scheme, the VB approach is commonly used to obtain sub-optimal approximations of unknown or complex posterior distributions and has been widely applied in machine learning and neural networks [32]. For a single-target hypothesis of our work, the VB approach aims to derive a separable approximation of the posterior density, that is
p(\Delta | z_{1:k}) \approx q(x_k)\, q(\xi_k)\, q(\Sigma_k)\, q(\lambda_k)    (22)
where \Delta = \{x_k, \xi_k, \Sigma_k, \lambda_k\}. The Kullback–Leibler divergence (KLD) between this posterior and its approximation is derived by
KLD(q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k)\, ||\, p(\Delta | z_{1:k})) = \int q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k) \log \frac{q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k)}{p(\Delta | z_{1:k})}\, dx_k\, d\xi_k\, d\Sigma_k\, d\lambda_k    (23)
The joint PDF p(\Delta, z_{1:k}) can be factored as
p(\Delta, z_{1:k}) = p(x_k | \xi_k, \Sigma_k, z_{1:k-1})\, p(\xi_k)\, p(\Sigma_k)\, p(z_k | x_k, \lambda_k)\, p(\lambda_k)    (24)
By minimizing Equation (23), we obtain
\log q(\Phi) = E_{\Delta^{(-\Phi)}}[\log p(\Delta, z_{1:k})] + c_{\Phi}    (25)
where \Phi is an arbitrary element of \Delta, and \Delta^{(-\Phi)} represents the set of all elements in \Delta except for \Phi. By substituting Equations (15), (18)–(21), and (24) into Equation (25), these individual approximate PDFs can be iteratively derived in the following forms:
q(x_k) = N(x_k; m_k^a, P_k^a)    (26)
q(\xi_k) = G(\xi_k; \alpha_k^a, \beta_k^a)    (27)
q(\Sigma_k) = IW(\Sigma_k; u_k^a, U_k^a)    (28)
q(\lambda_k) = G(\lambda_k; \gamma_k^a, \delta_k^a)    (29)
where the superscript a denotes the iteration index. Note that this VB approximation only requires the input of Gaussian parameters for prediction and updating. That is, the recursion of the single-target hypothesis involved in this paper can be considered as the propagation of a Gaussian distribution (see Section 3.2 for the detailed iteration process). The approximated marginal PDF of the measurement z_k can be denoted by
q(z_k) \approx \exp(\ln p(z_k))    (30)
where
\ln p(z_k) = -\frac{1}{2} tr\{\lambda_k^a R^{-1} H P_k^a H^T\} + \ln N(z_k; H m_k^a, R/\lambda_k^a) + \frac{n_x}{2} + \frac{1}{2}\ln|P_k^a| + \frac{n_x}{2}\ln 2 + \left(\frac{d_w}{2} + \frac{n_x}{2} - \alpha_k^a\right)[\psi(\alpha_k^a) - \ln\beta_k^a] + \left(\beta_k^a - \frac{d_w}{2}\right)\xi_k^a + \frac{d_w}{2}\ln\frac{d_w}{2} + a - \ln\Gamma\left(\frac{d_w}{2}\right) - \alpha_k^a \ln\beta_k^a + \ln\Gamma(\alpha_k^a) - \gamma_k^a \ln\delta_k^a + \ln\Gamma(\gamma_k^a) + \left(\delta_k^a - \frac{d_v}{2}\right)\lambda_k^a + \left(\frac{d_v}{2} - \gamma_k^a\right)[\psi(\gamma_k^a) - \ln\delta_k^a] + \frac{d_v}{2}\ln\frac{d_v}{2} - \ln\Gamma\left(\frac{d_v}{2}\right) + \frac{u_k}{2}\ln|U_k| - \ln\Gamma_{n_x}\left(\frac{u_k}{2}\right) - \frac{u_k^a}{2}\ln|U_k^a| + \ln\Gamma_{n_x}\left(\frac{u_k^a}{2}\right)    (31)
where ψ ( · ) is the digamma function. The proof of Equations (30) and (31) is provided in Appendix A.
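The digamma-based Gamma expectations that enter Equation (31), E[ξ_k] = α/β and E[ln ξ_k] = ψ(α) − ln β (see also Equations (A18) and (A19) in Appendix A), can be checked with a few lines of code; the snippet below is illustrative only:

import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0
xi = rng.gamma(shape=alpha, scale=1.0 / beta, size=500_000)    # Gamma(alpha, rate=beta)

print(np.mean(xi), alpha / beta)                               # E[xi]     = alpha / beta
print(np.mean(np.log(xi)), digamma(alpha) - np.log(beta))      # E[ln xi]  = psi(alpha) - ln(beta)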

3. R-ST-PMBM Filter

In this section, a detailed implementation of the proposed filter is described.

3.1. Prediction

The prediction step of the proposed filter includes two parts: the Poisson prediction and the MBM prediction. The former consists of the current newborn targets and the prediction of previously undetected newborn targets, whereas the latter involves the prediction of previously potentially detected targets.

3.1.1. Poisson Prediction

Suppose that the Poisson part of the updated posterior density at time k−1 is a Gaussian mixture of the form
\mu_{k-1}(x_{k-1}) = \sum_{l=1}^{J_{P,k-1}} w_{P,k-1}^{l} N(x_{k-1}; m_{P,k-1}^{l}, P_{P,k-1}^{l})    (32)
where w_{P,k-1}^{l} denotes the weight of the l-th Poisson component. The intensity of the newborn targets at time k is as follows:
\gamma_k(x_k) = \sum_{h=1}^{J_{\gamma,k}} w_{\gamma,k}^{h} N(x_k; m_{\gamma,k}^{h}, P_{\gamma,k}^{h})    (33)
Given the nominal single-target transition density (Equation (14)) and the probability of target survival p_S, the Poisson prediction can be derived by
\mu_{k|k-1}(x_k) = \gamma_k(x_k) + \mu_{S,k|k-1}(x_k)    (34)
where
\mu_{S,k|k-1}(x_k) = \int p_S\, p(x_k | \zeta_{k-1}) \mu_{k-1}(\zeta_{k-1})\, d\zeta_{k-1} = \sum_{l=1}^{J_{P,k-1}} w_{S,k|k-1}^{l} N(x_k; m_{S,k|k-1}^{l}, P_{S,k|k-1}^{l})    (35)
w_{S,k|k-1}^{l} = p_S w_{P,k-1}^{l}    (36)
m_{S,k|k-1}^{l} = F m_{P,k-1}^{l}    (37)
P_{S,k|k-1}^{l} = F P_{P,k-1}^{l} F^T + Q    (38)
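A minimal sketch (our own helper, with assumed (w, m, P) tuples) of the Poisson prediction in Equations (34)-(38): each surviving component is scaled by p_S and pushed through the nominal dynamics, and the birth components are appended.

import numpy as np

def poisson_predict(components, birth, F, Q, p_S):
    """components, birth: lists of (w, m, P); returns the predicted PPP mixture."""
    survived = [(p_S * w, F @ m, F @ P @ F.T + Q) for (w, m, P) in components]
    return list(birth) + survived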

3.1.2. MBM Prediction

Suppose that in the updated MBM RFS at time k−1, the i-th single-target hypothesis of the j-th MB RFS has an existence probability r_{M,k-1}^{j,i}, and the PDF is as follows:
p_{M,k-1}^{j,i}(x_{k-1}) = w_{M,k-1}^{j,i} N(x_{k-1}; m_{M,k-1}^{j,i}, P_{M,k-1}^{j,i})    (39)
The predicted single-target hypothesis at time k can be derived by
r_{M,k|k-1}^{j,i} = p_S r_{M,k-1}^{j,i}    (40)
p_{M,k|k-1}^{j,i}(x_k) = \int p(x_k | \zeta_{k-1}) p_{M,k-1}^{j,i}(\zeta_{k-1})\, d\zeta_{k-1} = w_{M,k|k-1}^{j,i} N(x_k; m_{M,k|k-1}^{j,i}, P_{M,k|k-1}^{j,i})    (41)
where the weight w_{M,k|k-1}^{j,i} will not participate in the update step and
m_{M,k|k-1}^{j,i} = F m_{M,k-1}^{j,i}    (42)
P_{M,k|k-1}^{j,i} = F P_{M,k-1}^{j,i} F^T + Q    (43)
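The corresponding sketch for the single-target-hypothesis prediction in Equations (40)-(43), under the same assumptions as the helper above: the existence probability is scaled by p_S while the Gaussian parameters follow the nominal Kalman prediction.

def bernoulli_predict(r, w, m, P, F, Q, p_S):
    # Equations (40), (42), (43); the weight w is carried through unchanged (Equation (41)).
    return p_S * r, w, F @ m, F @ P @ F.T + Q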

3.2. Update

Similar to the standard PMBM filter, the update step of our filter can be divided into three parts: an update for undetected targets, an update for potential targets detected for the first time, and an update for previously potentially detected targets.

3.2.1. Update for Undetected Targets

The undetected targets refer to the newborn targets that are missed by the sensor at the current and past times, and they are represented in the form of a Poisson RFS. Note that in this update, the form of these targets remains unchanged. Assume that the predicted Poisson RFS is as follows:
\mu_{k|k-1}(x_k) = \sum_{l=1}^{J_{P,k|k-1}} w_{P,k|k-1}^{l} N(x_k; m_{P,k|k-1}^{l}, P_{P,k|k-1}^{l})    (44)
Given the detection probability p_D, the update for undetected targets at time k can be derived by
\mu_k(x_k) = (1 - p_D)\, \mu_{k|k-1}(x_k) = \sum_{l=1}^{J_{P,k|k-1}} (1 - p_D)\, w_{P,k|k-1}^{l} N(x_k; m_{P,k|k-1}^{l}, P_{P,k|k-1}^{l})    (45)

3.2.2. Update for Potential Targets Detected for the First Time

Potential targets detected for the first time comprise two parts: the detected portion of the newborn targets at time k, and the previously undetected newborn targets that were first detected by the sensor at time k. In the prediction step, the newborn targets are modeled as a Poisson RFS, whereas the survival targets are represented as an MBM RFS. A detected newborn target will participate in the next recursion as a survival target; hence, the results of updating the potential targets detected for the first time are in the form of an MBM.
In this update, each measurement needs to go through all the elements of the predicted Poisson RFS that fall into the ellipsoidal gate, and then be combined by weight to construct an individual MB RFS. Given the predicted Poisson RFS at time k (see Equation (44)), for z_k^n, the n-th element of the measurement RFS Z_k, we calculate the Mahalanobis distances from it to all the predicted Poisson components and preserve the closest ones. Assuming that Ĵ_{P,k|k-1} is the number of components of the predicted Poisson RFS that fall into the ellipsoidal gate, before this update, we rewrite the eligible components of Equation (44) in the following conditional form:
\mu_{k|k-1}(x_k | \xi_k, \Sigma_k) = \sum_{l=1}^{\hat{J}_{P,k|k-1}} \mu_{k|k-1}^{l}(x_k) = \sum_{l=1}^{\hat{J}_{P,k|k-1}} w_{P,k|k-1}^{l} N(x_k; m_{P,k|k-1}^{l}, P_{P,k|k-1}^{l})    (46)
where \xi_k and \Sigma_k are the auxiliary variable and scale matrix of the equivalent one-step predicted density in the form of a Student's t-distribution, respectively. Using Equations (15)–(22), (26)–(31), and (46), given the clutter intensity \kappa_k, the individual MB RFS updated by z_k^n can be derived as follows:
r_{M,k}^{n}(z_k^n) = \frac{e_{M,k}^{n}(z_k^n)}{e_{M,k}^{n}(z_k^n) + \kappa_k}    (47)
p_{M,k}^{n}(x_k, \xi_k, \Sigma_k, \lambda_k | z_k^n) = \sum_{l=1}^{\hat{J}_{P,k|k-1}} p_D\, p(z_k^n | x_k, \lambda_k) p(\lambda_k) \mu_{k|k-1}^{l}(x_k | \xi_k, \Sigma_k) p(\xi_k) p(\Sigma_k) / e_{M,k}^{n,l}(z_k^n) \approx \sum_{l=1}^{\hat{J}_{P,k|k-1}} p_D\, w_{P,k|k-1}^{l}\, q^{n,l}(z_k^n)\, q^{n,l}(x_k)\, q^{n,l}(\lambda_k)\, q^{n,l}(\xi_k)\, q^{n,l}(\Sigma_k) \approx \sum_{l=1}^{\hat{J}_{P,k|k-1}} w_{M,k}^{n,l} N(x_k; m_{M,k}^{n,l}, P_{M,k}^{n,l}) G(\lambda_k; \gamma_k^{n,l}, \delta_k^{n,l}) G(\xi_k; \alpha_k^{n,l}, \beta_k^{n,l}) IW(\Sigma_k; u_k^{n,l}, U_k^{n,l})    (48)
where the distribution parameters can be obtained through iteration. First, we assign a = 1, m_{M,k}^{n,l,a} = m_{P,k|k-1}^{l}, P_{M,k}^{n,l,a} = P_{P,k|k-1}^{l}, u_k = n_x + \tau + 1, U_k = \tau P_{P,k|k-1}^{l}, and E^{n,l,a}[\Sigma_k^{-1}] = (u_k - n_x - 1) U_k^{-1}. We set the constant parameters \alpha_k^{n,l} = 0.5(n_x + d_w), \gamma_k^{n,l} = 0.5(n_z + d_v), and u_k^{n,l} = u_k + 1, and then iterate the following process:
\Theta_k^{n,l,a} = P_{M,k}^{n,l,a} + (m_{M,k}^{n,l,a} - m_{P,k|k-1}^{l})(m_{M,k}^{n,l,a} - m_{P,k|k-1}^{l})^T    (49)
\beta_k^{n,l,a} = 0.5\{d_w + tr(\Theta_k^{n,l,a} E^{n,l,a}[\Sigma_k^{-1}])\}    (50)
E^{n,l,a}[\xi_k] = \alpha_k^{n,l} / \beta_k^{n,l,a}    (51)
\Xi_k^{n,l,a} = (z_k^n - H m_{M,k}^{n,l,a})(z_k^n - H m_{M,k}^{n,l,a})^T + H P_{M,k}^{n,l,a} H^T    (52)
\delta_k^{n,l,a} = 0.5\{d_v + tr(\Xi_k^{n,l,a} R^{-1})\}    (53)
E^{n,l,a}[\lambda_k] = \gamma_k^{n,l} / \delta_k^{n,l,a}    (54)
U_k^{n,l,a} = U_k + E^{n,l,a}[\xi_k] \Theta_k^{n,l,a}    (55)
E^{n,l,a}[\Sigma_k^{-1}] = (u_k^{n,l} - n_x - 1)(U_k^{n,l,a})^{-1}    (56)
\tilde{R}_k^{n,l,a} = R / E^{n,l,a}[\lambda_k]    (57)
\tilde{P}_{k|k-1}^{n,l,a} = \{E^{n,l,a}[\Sigma_k^{-1}]\}^{-1} / E^{n,l,a}[\xi_k]    (58)
K_k^{n,l,a} = \tilde{P}_{k|k-1}^{n,l,a} H^T (H \tilde{P}_{k|k-1}^{n,l,a} H^T + \tilde{R}_k^{n,l,a})^{-1}    (59)
m_{M,k}^{n,l,a} = m_{P,k|k-1}^{l} + K_k^{n,l,a}(z_k^n - H m_{P,k|k-1}^{l})    (60)
P_{M,k}^{n,l,a} = \tilde{P}_{k|k-1}^{n,l,a} - K_k^{n,l,a} H \tilde{P}_{k|k-1}^{n,l,a}    (61)
a = a + 1    (62)
The above iteration is terminated when a = N_max or ||m_{M,k}^{n,l,a} - m_{M,k}^{n,l,a-1}|| < \sigma, where a denotes the number of iterations, and N_max and \sigma are given thresholds. When the iteration terminates, we assign the final values in Equation (48) such that m_{M,k}^{n,l} = m_{M,k}^{n,l,a}, P_{M,k}^{n,l} = P_{M,k}^{n,l,a}, \delta_k^{n,l} = \delta_k^{n,l,a}, \beta_k^{n,l} = \beta_k^{n,l,a}, and U_k^{n,l} = U_k^{n,l,a}. Now, using Equations (15), (19)–(21), and (46), we obtain
e_{M,k}^{n}(z_k^n) = \sum_{l=1}^{\hat{J}_{P,k|k-1}} e_{M,k}^{n,l}(z_k^n) = \sum_{l=1}^{\hat{J}_{P,k|k-1}} \int p_D\, p(z_k^n | x_k, \lambda_k) p(\lambda_k) p(\xi_k) p(\Sigma_k) \mu_{k|k-1}^{l}(x_k | \xi_k, \Sigma_k)\, dx_k\, d\lambda_k\, d\xi_k\, d\Sigma_k \approx \sum_{l=1}^{\hat{J}_{P,k|k-1}} p_D\, w_{P,k|k-1}^{l}\, q^{n,l}(z_k^n)    (63)
where q^{n,l}(z_k^n) \approx \exp(\ln p^{n,l}(z_k^n)), and
\ln p^{n,l}(z_k^n) = -\frac{1}{2} tr\{E^{n,l,a}[\lambda_k] R^{-1} H P_{M,k}^{n,l} H^T\} + \ln N(z_k^n; H m_{M,k}^{n,l}, R/E^{n,l,a}[\lambda_k]) + \frac{n_x}{2} + \frac{1}{2}\ln|P_{M,k}^{n,l}| + \frac{n_x}{2}\ln 2 + \left(\frac{d_w}{2} + \frac{n_x}{2} - \alpha_k^{n,l}\right)[\psi(\alpha_k^{n,l}) - \ln\beta_k^{n,l}] + \left(\beta_k^{n,l} - \frac{d_w}{2}\right) E^{n,l,a}[\xi_k] + \frac{d_w}{2}\ln\frac{d_w}{2} + a - \ln\Gamma\left(\frac{d_w}{2}\right) - \alpha_k^{n,l}\ln\beta_k^{n,l} + \ln\Gamma(\alpha_k^{n,l}) - \gamma_k^{n,l}\ln\delta_k^{n,l} + \ln\Gamma(\gamma_k^{n,l}) + \left(\delta_k^{n,l} - \frac{d_v}{2}\right) E^{n,l,a}[\lambda_k] - \frac{u_k^{n,l}}{2}\ln|U_k^{n,l}| + \ln\Gamma_{n_x}\left(\frac{u_k^{n,l}}{2}\right) + \left(\frac{d_v}{2} - \gamma_k^{n,l}\right)[\psi(\gamma_k^{n,l}) - \ln\delta_k^{n,l}] + \frac{d_v}{2}\ln\frac{d_v}{2} - \ln\Gamma\left(\frac{d_v}{2}\right) + \frac{u_k}{2}\ln|U_k| - \ln\Gamma_{n_x}\left(\frac{u_k}{2}\right)    (64)
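The iteration in Equations (49)-(62) can be sketched in code as follows. This is our reading of the equations for one predicted component and one gated measurement, with variable names of our choosing; it is meant to clarify the data flow, not to reproduce the authors' implementation.

import numpy as np

def vb_update(m_pred, P_pred, z, H, R, d_w, d_v, tau=1.0, N_max=10, tol=0.1):
    n_x, n_z = m_pred.size, z.size
    u_k = n_x + tau + 1                                           # Equation (16)
    U_k = tau * P_pred                                            # Equation (17)
    E_Sigma_inv = (u_k - n_x - 1) * np.linalg.inv(U_k)            # initial E[Sigma^-1]
    alpha = 0.5 * (n_x + d_w)                                     # fixed shape parameters
    gamma = 0.5 * (n_z + d_v)
    u_post = u_k + 1
    m, P = m_pred.copy(), P_pred.copy()
    for _ in range(N_max):
        Theta = P + np.outer(m - m_pred, m - m_pred)              # (49)
        beta = 0.5 * (d_w + np.trace(Theta @ E_Sigma_inv))        # (50)
        E_xi = alpha / beta                                       # (51)
        resid = z - H @ m
        Xi = np.outer(resid, resid) + H @ P @ H.T                 # (52)
        delta = 0.5 * (d_v + np.trace(Xi @ np.linalg.inv(R)))     # (53)
        E_lam = gamma / delta                                     # (54)
        U_post = U_k + E_xi * Theta                               # (55)
        E_Sigma_inv = (u_post - n_x - 1) * np.linalg.inv(U_post)  # (56)
        R_mod = R / E_lam                                         # (57)
        P_mod = np.linalg.inv(E_Sigma_inv) / E_xi                 # (58)
        K = P_mod @ H.T @ np.linalg.inv(H @ P_mod @ H.T + R_mod)  # (59)
        m_new = m_pred + K @ (z - H @ m_pred)                     # (60)
        P_new = P_mod - K @ H @ P_mod                             # (61)
        done = np.linalg.norm(m_new - m) < tol
        m, P = m_new, P_new
        if done:
            break
    return m, P, beta, delta, U_post

The same routine applies verbatim to the survival-target update in Section 3.2.3, with m_pred and P_pred taken from the MBM prediction of Equations (42) and (43).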
For Equation (48), its sub-component weight is w_{M,k}^{n,l} = e_{M,k}^{n,l}(z_k^n) = p_D w_{P,k|k-1}^{l} q^{n,l}(z_k^n), and its mono-distribution form can be obtained by merging all the elements within it. In addition, to better approximate the truth, we approximate Equation (48) with the following Gaussian-based posterior density form:
p_{M,k}^{n}(x_k | z_k^n) = w_{M,k}^{n} N(x_k; m_{M,k}^{n}, P_{M,k}^{n})    (65)
where
w_{M,k}^{n} = e_{M,k}^{n}(z_k^n) + \kappa_k    (66)
m_{M,k}^{n} = \frac{1}{e_{M,k}^{n}(z_k^n)} \sum_{l=1}^{\hat{J}_{P,k|k-1}} w_{M,k}^{n,l} m_{M,k}^{n,l}    (67)
P_{M,k}^{n} = \frac{1}{e_{M,k}^{n}(z_k^n)} \sum_{l=1}^{\hat{J}_{P,k|k-1}} w_{M,k}^{n,l} P_{M,k}^{n,l}    (68)
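A short sketch (ours) of the merging in Equations (66)-(68), collapsing the sub-components updated from the same measurement into the single weighted Gaussian of Equation (65):

import numpy as np

def merge_first_detection(ws, ms, Ps, kappa):
    """ws, ms, Ps: per-component weights w^{n,l}, means, covariances; kappa: clutter intensity."""
    e = float(np.sum(ws))                                # e^n in Equation (63)
    w_merged = e + kappa                                 # Equation (66)
    m_merged = sum(w * m for w, m in zip(ws, ms)) / e    # Equation (67)
    P_merged = sum(w * P for w, P in zip(ws, Ps)) / e    # Equation (68)
    return w_merged, m_merged, P_merged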

3.2.3. Update for Previously Potentially Detected Targets

According to Section 3.2.2, the previously potentially detected targets refer to the survival targets, and the updated results remain in an MBM form. In this update, each predicted single-target hypothesis (single-Bernoulli component) needs to go through the sensor measurements and be updated by the measurements that fall into the ellipsoidal gate, and then create new single-target hypotheses. Since the survival targets may be misdetected by the sensor the next time, this update can also be divided into two parts: an update for undetected survival targets and an update for detected survival targets.
Considering the i-th predicted single-target hypothesis of the j-th MB RFS with existence probability (Equation (40)) and density (Equation (41)), the undetected survival target can be updated by
r_{M,k}^{\phi,j,i} = \frac{(1 - p_D)\, r_{M,k|k-1}^{j,i}}{1 - p_D\, r_{M,k|k-1}^{j,i}}    (69)
p_{M,k}^{\phi,j,i}(x_k) = (1 - p_D\, r_{M,k|k-1}^{j,i})\, p_{M,k|k-1}^{j,i}(x_k) = (1 - p_D\, r_{M,k|k-1}^{j,i})\, N(x_k; m_{M,k|k-1}^{j,i}, P_{M,k|k-1}^{j,i})    (70)
Assume that Z_k is the measurement RFS at time k, and Z_k^g is its subset with all elements falling into the gate. Similar to Section 3.2.2, we rewrite the density (Equation (41)) in the conditional form p_{M,k|k-1}^{j,i}(x_k | \xi_k, \Sigma_k). For z_k^n \in Z_k, the updated survival probability of this single-target hypothesis for the detected survival target can be denoted by
r_{M,k}^{n,j,i} = \begin{cases} 1, & z_k^n \in Z_k^g \\ 0, & z_k^n \notin Z_k^g \end{cases}    (71)
For the density update, when z_k^n \notin Z_k^g, we set
p_{M,k}^{n,j,i}(x_k | z_k^n) = p_{M,k}^{n,j,i}(x_k, \xi_k, \Sigma_k, \lambda_k | z_k^n) = 0    (72)
If z_k^n \in Z_k^g, the updated density can be derived as follows:
p_{M,k}^{n,j,i}(x_k, \xi_k, \Sigma_k, \lambda_k | z_k^n) \propto p_D\, r_{M,k|k-1}^{j,i}\, p(z_k^n | x_k, \lambda_k) p(\lambda_k) p_{M,k|k-1}^{j,i}(x_k | \xi_k, \Sigma_k) p(\xi_k) p(\Sigma_k) \approx p_D\, r_{M,k|k-1}^{j,i}\, q^{n,j,i}(z_k^n)\, q^{n,j,i}(x_k)\, q^{n,j,i}(\lambda_k)\, q^{n,j,i}(\xi_k)\, q^{n,j,i}(\Sigma_k) \approx w_{M,k}^{n,j,i} N(x_k; m_{M,k}^{n,j,i}, P_{M,k}^{n,j,i}) G(\lambda_k; \gamma_k^{n,j,i}, \delta_k^{n,j,i}) G(\xi_k; \alpha_k^{n,j,i}, \beta_k^{n,j,i}) IW(\Sigma_k; u_k^{n,j,i}, U_k^{n,j,i})    (73)
where the updated parameters are obtained through iteration. We assign a = 1, m_{M,k}^{n,j,i,a} = m_{M,k|k-1}^{j,i}, P_{M,k}^{n,j,i,a} = P_{M,k|k-1}^{j,i}, u_k = n_x + \tau + 1, U_k = \tau P_{M,k|k-1}^{j,i}, and E^{n,j,i,a}[\Sigma_k^{-1}] = (u_k - n_x - 1) U_k^{-1}. The constant parameters are set as \alpha_k^{n,j,i} = 0.5(n_x + d_w), \gamma_k^{n,j,i} = 0.5(n_z + d_v), and u_k^{n,j,i} = u_k + 1. Then, the following iteration is executed:
\Theta_k^{n,j,i,a} = P_{M,k}^{n,j,i,a} + (m_{M,k}^{n,j,i,a} - m_{M,k|k-1}^{j,i})(m_{M,k}^{n,j,i,a} - m_{M,k|k-1}^{j,i})^T    (74)
\beta_k^{n,j,i,a} = 0.5\{d_w + tr(\Theta_k^{n,j,i,a} E^{n,j,i,a}[\Sigma_k^{-1}])\}    (75)
E^{n,j,i,a}[\xi_k] = \alpha_k^{n,j,i} / \beta_k^{n,j,i,a}    (76)
\Xi_k^{n,j,i,a} = (z_k^n - H m_{M,k}^{n,j,i,a})(z_k^n - H m_{M,k}^{n,j,i,a})^T + H P_{M,k}^{n,j,i,a} H^T    (77)
\delta_k^{n,j,i,a} = 0.5\{d_v + tr(\Xi_k^{n,j,i,a} R^{-1})\}    (78)
E^{n,j,i,a}[\lambda_k] = \gamma_k^{n,j,i} / \delta_k^{n,j,i,a}    (79)
U_k^{n,j,i,a} = U_k + E^{n,j,i,a}[\xi_k] \Theta_k^{n,j,i,a}    (80)
E^{n,j,i,a}[\Sigma_k^{-1}] = (u_k^{n,j,i} - n_x - 1)(U_k^{n,j,i,a})^{-1}    (81)
\tilde{R}_k^{n,j,i,a} = R / E^{n,j,i,a}[\lambda_k]    (82)
\tilde{P}_{k|k-1}^{n,j,i,a} = \{E^{n,j,i,a}[\Sigma_k^{-1}]\}^{-1} / E^{n,j,i,a}[\xi_k]    (83)
K_k^{n,j,i,a} = \tilde{P}_{k|k-1}^{n,j,i,a} H^T (H \tilde{P}_{k|k-1}^{n,j,i,a} H^T + \tilde{R}_k^{n,j,i,a})^{-1}    (84)
m_{M,k}^{n,j,i,a} = m_{M,k|k-1}^{j,i} + K_k^{n,j,i,a}(z_k^n - H m_{M,k|k-1}^{j,i})    (85)
P_{M,k}^{n,j,i,a} = \tilde{P}_{k|k-1}^{n,j,i,a} - K_k^{n,j,i,a} H \tilde{P}_{k|k-1}^{n,j,i,a}    (86)
a = a + 1    (87)
The stopping conditions for this iteration are the same as in the previous section. When it stops, we assign the final values in Equation (73) such that m_{M,k}^{n,j,i} = m_{M,k}^{n,j,i,a}, P_{M,k}^{n,j,i} = P_{M,k}^{n,j,i,a}, \delta_k^{n,j,i} = \delta_k^{n,j,i,a}, \beta_k^{n,j,i} = \beta_k^{n,j,i,a}, and U_k^{n,j,i} = U_k^{n,j,i,a}. Since our filter propagates the Gaussian-based PMBM posterior PDF, we approximate Equation (73) with the following weighted Gaussian form:
p_{M,k}^{n,j,i}(x_k | z_k^n) = w_{M,k}^{n,j,i} N(x_k; m_{M,k}^{n,j,i}, P_{M,k}^{n,j,i})    (88)
where w_{M,k}^{n,j,i} = p_D r_{M,k|k-1}^{j,i} q^{n,j,i}(z_k^n), q^{n,j,i}(z_k^n) \approx \exp(\ln p^{n,j,i}(z_k^n)), and
\ln p^{n,j,i}(z_k^n) = -\frac{1}{2} tr\{E^{n,j,i,a}[\lambda_k] R^{-1} H P_{M,k}^{n,j,i} H^T\} + \ln N(z_k^n; H m_{M,k}^{n,j,i}, R/E^{n,j,i,a}[\lambda_k]) + \frac{n_x}{2} + \frac{1}{2}\ln|P_{M,k}^{n,j,i}| + \frac{n_x}{2}\ln 2 + \left(\frac{d_w}{2} + \frac{n_x}{2} - \alpha_k^{n,j,i}\right)[\psi(\alpha_k^{n,j,i}) - \ln\beta_k^{n,j,i}] + \left(\beta_k^{n,j,i} - \frac{d_w}{2}\right) E^{n,j,i,a}[\xi_k] + \frac{d_w}{2}\ln\frac{d_w}{2} + a - \ln\Gamma\left(\frac{d_w}{2}\right) - \alpha_k^{n,j,i}\ln\beta_k^{n,j,i} + \ln\Gamma(\alpha_k^{n,j,i}) - \gamma_k^{n,j,i}\ln\delta_k^{n,j,i} + \ln\Gamma(\gamma_k^{n,j,i}) + \left(\frac{d_v}{2} - \gamma_k^{n,j,i}\right)[\psi(\gamma_k^{n,j,i}) - \ln\delta_k^{n,j,i}] + \frac{d_v}{2}\ln\frac{d_v}{2} + \left(\delta_k^{n,j,i} - \frac{d_v}{2}\right) E^{n,j,i,a}[\lambda_k] - \ln\Gamma\left(\frac{d_v}{2}\right) + \frac{u_k}{2}\ln|U_k| - \ln\Gamma_{n_x}\left(\frac{u_k}{2}\right) - \frac{u_k^{n,j,i}}{2}\ln|U_k^{n,j,i}| + \ln\Gamma_{n_x}\left(\frac{u_k^{n,j,i}}{2}\right)    (89)

3.3. Organization Method of the Filter

To better show how the proposed filter works, we use pseudocode to demonstrate the key steps of the algorithm. Given that Poisson_{k-1} ∪ MBM_{k-1} is the PMBM posterior at time k−1, where MBM_{k-1} = [MB_{k-1}^{1}, …, MB_{k-1}^{J}], and MB_{k-1}^{j}, the j-th subset of MBM_{k-1}, may contain several single-Bernoulli components, that is, MB_{k-1}^{j} = [B_{k-1}^{1}, …, B_{k-1}^{I}], where i is its index. The filtering steps are shown in Algorithm 1.
Algorithm 1 Pseudocode for one recursion of the filter
Input: PMBM posterior at time k−1, Poisson_{k-1} ∪ MBM_{k-1}; measurement set at time k, Z_k = [z_k^1, …, z_k^{N(k)}].
Output: PMBM posterior at time k, Poisson_k ∪ MBM_k.
1:  Perform prediction and obtain Poisson_{k|k-1} ∪ MBM_{k|k-1} (see Section 3.1);  ▹ Prediction.
2:  Poisson_k = perform Equation (45);  ▹ Update for undetected targets.
3:  MBM_k = ∅;
4:  for j = 1 : J do
5:      MB_k^j = ∅;
6:      for i = 1 : I do
7:          B_k^i = perform Equations (69) and (70);  ▹ Update for previously potentially detected targets (undetected).
8:          for n = 1 : N(k) do  ▹ Update for previously potentially detected targets (detected).
9:              Perform ellipsoidal gating on z_k^n ∈ Z_k;  ▹ Gating.
10:             if B_{k|k-1}^i meets the gating then
11:                 B_k^{i+n×I} = perform Equations (71)–(73);
12:                 Approximate with Gaussian Equation (88);
13:             else
14:                 B_k^{i+n×I} = ∅;
15:             end if
16:             MB_k^j = MB_k^j ∪ B_k^{i+n×I};
17:         end for
18:         MB_k^j = MB_k^j ∪ B_k^i;
19:     end for
20:     MBM_k = MBM_k ∪ MB_k^j;
21: end for
22: for n = 1 : N(k) do  ▹ Update for potential targets detected for the first time.
23:     Perform ellipsoidal gating on z_k^n ∈ Z_k;  ▹ Gating.
24:     if at least one component of Poisson_{k|k-1} meets the gating then
25:         MB_k^{J+n} = perform Equations (47) and (48);
26:         Approximate to Gaussian Equation (65);
27:         MBM_k = MBM_k ∪ MB_k^{J+n};
28:     end if
29: end for
The global hypotheses are a collection of all possible one-to-one correspondences between the potential targets and the measurements. In the update step, to construct the global hypothesis matrix, the cost matrix of all possible associations needs to be generated first.
Assume that at time k, m and n are the numbers of sensor measurements and predicted MB RFSs, respectively. Using the predicted global hypotheses and the updated single-target hypothesis weights, the cost matrix C can be denoted by
C = \begin{bmatrix} w_{1,1}^{\phi} & \cdots & w_{1,n}^{\phi} & v_1 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ w_{m,1}^{\phi} & \cdots & w_{m,n}^{\phi} & 0 & \cdots & v_m \end{bmatrix}_{m \times (n+m)}    (90)
where w_{a,b}^{\phi} = \ln(w_{a,b} / w_b^{\phi}), in which w_{a,b} and w_b^{\phi} are the component weights of Equations (88) and (70), respectively, and v is the log weight of Equation (65). By applying Murty's algorithm, all the possible associations are listed and arranged according to cost from low to high, and then the hypotheses with high costs are pruned. Finally, the corresponding component number of each element in the cost matrix is output. Note that the global hypotheses remain unchanged in the prediction step. The detailed global hypothesis weight calculation and state extraction methods can be found in the standard PMBM filter [11].
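As an illustration (our code, using a common negative-log-weight convention, not necessarily the sign convention of the paper), the cost matrix of Equation (90) can be assembled and its single best global hypothesis found with the Hungarian algorithm; Murty's algorithm, as used in the paper, generalizes this to the N_hypo best assignments. Here w_det[a][b], w_mis[b], and v[a] are assumed to hold the weights of Equations (88), (70), and (65).

import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9   # stands in for a forbidden association

def build_cost_matrix(w_det, w_mis, v):
    m, n = len(v), len(w_mis)
    C = np.full((m, n + m), BIG)
    for a in range(m):
        for b in range(n):
            C[a, b] = -np.log(w_det[a][b] / w_mis[b])    # -w_{a,b}^phi
        C[a, n + a] = -np.log(v[a])                      # first-detection / clutter column
    return C

# best single global hypothesis (rows = measurements, cols = associated components):
# rows, cols = linear_sum_assignment(build_cost_matrix(w_det, w_mis, v))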
Assume that m_k is the number of measurements at time k, and N_{k|k-1}^{P} and N_{k|k-1}^{B} are the predicted number of Poisson components and the total number of single-Bernoulli components across all the predicted MB RFSs, respectively. The computational complexity of the proposed filter can be denoted by O[m_k N_{k|k-1}^{P} + (m_k + 1) N_{k|k-1}^{B}], which is the same as the standard PMBM filter. The increase in the computational load compared to the standard version is caused by the iterative update of the single-target hypothesis.

4. Simulations

In this section, we demonstrate the tracking performance of the proposed filter (R-ST-PMBM) with three examples and compare it with the existing filters. We evaluate the filtering performance using the Optimal Subpattern Assignment (OSPA) distance [33] and Generalized OSPA (GOSPA) metrics [34]. We also consider metrics such as running time and cardinality estimation accuracy.
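For reference, a compact implementation sketch of the OSPA distance [33] used in the evaluation (our own, with the optimal sub-pattern assignment computed by the Hungarian algorithm; the GOSPA metric [34] is evaluated analogously):

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, p=1.0, c=100.0):
    """X, Y: (n, dim) and (m, dim) arrays of true and estimated positions."""
    n, m = len(X), len(Y)
    if n == 0 and m == 0:
        return 0.0
    if n == 0 or m == 0:
        return c
    if n > m:                              # ensure X is the smaller set
        X, Y, n, m = Y, X, m, n
    D = np.minimum(cdist(X, Y), c) ** p    # cut-off base distances
    rows, cols = linear_sum_assignment(D)  # optimal sub-pattern assignment
    cost = D[rows, cols].sum() + (c ** p) * (m - n)   # cardinality penalty
    return (cost / m) ** (1.0 / p)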

4.1. Scenario with the Same Heavy-Tailed Degrees

We consider a two-dimensional region [−1000, 1000] × [−1000, 1000] (in m) with uniform clutter and an unknown and time-varying number of targets. The clutter is Poisson with density λ_c = 5 × 10^{-6} m^{-2}. The target state is denoted by x_k = [p_{x,k}, v_{x,k}, p_{y,k}, v_{y,k}]^T, where (p_{x,k}, p_{y,k}) and (v_{x,k}, v_{y,k}) denote the position and velocity, respectively. We use the following models to execute this simulation:
F = I_2 \otimes \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad Q = \sigma_w^2 I_2 \otimes \begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix}    (91)
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad R = \sigma_v^2 I_2    (92)
where ⊗ denotes the Kronecker product, T = 1 s is the sampling period, and I_2 is an identity matrix of dimension 2 × 2. The nominal standard deviations of the process and measurement noises are set to σ_w = 1 m/s² and σ_v = 5 m, respectively. Both the heavy-tailed process and measurement noises are constructed using the proportional superposition of two Gaussian distributions with significantly different covariances, which can be represented as follows [2]:
w_k \sim p_w N(0, Q) + (1 - p_w) N(0, 100Q)    (93)
v_k \sim p_v N(0, R) + (1 - p_v) N(0, 100R)    (94)
where p_w and p_v denote the superposition proportions, which can be used to adjust the heavy-tailed degrees of these two noises. In this test, we set p_w = p_v = 0.9 to construct a scenario with the same heavy-tailed degrees. The newborn targets are Poisson with intensity
\gamma_k(x_k) = \sum_{h=1}^{4} w_{\gamma,k}^{h} N(x_k; m_{\gamma,k}^{h}, P_{\gamma,k}^{h})    (95)
where w_{\gamma,k}^{h} = 0.01, m_{\gamma,k}^{1} = [0, 0, 0, 0]^T, m_{\gamma,k}^{2} = [400, 0, 600, 0]^T, m_{\gamma,k}^{3} = [800, 0, 200, 0]^T, m_{\gamma,k}^{4} = [200, 0, 800, 0]^T, and P_{\gamma,k}^{h} = diag([100, 100, 100, 100]). We set eight targets to appear and disappear successively in the restricted region, and each target has a survival probability of p_S = 0.99. Note that due to the influence of the heavy-tailed process noise, the targets may experience violent maneuvers. Figure 1a shows the target trajectories of a test.
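The heavy-tailed noises of Equations (93) and (94) can be generated as in the sketch below (an illustrative helper of ours): with probability p a draw comes from the nominal Gaussian, otherwise from the 100-fold inflated one.

import numpy as np

def mixture_noise(rng, cov, p, inflation=100.0, size=1):
    dim = cov.shape[0]
    nominal = rng.multivariate_normal(np.zeros(dim), cov, size)
    inflated = rng.multivariate_normal(np.zeros(dim), inflation * cov, size)
    pick = rng.random(size) < p                     # True -> nominal component
    return np.where(pick[:, None], nominal, inflated)

# e.g. process noise samples: mixture_noise(np.random.default_rng(0), Q, p_w) with p_w = 0.9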
For the parameters required in the update, we use a maximum number of iterations N_max = 10, a cutoff distance σ = 0.1, and a tuning parameter τ = 1. Each target has a probability p_D = 0.98 of being detected by the sensor. The maximum number of global hypotheses is set to N_hypo = 200. The pruning weight thresholds of the Poisson and MBM components are set to w_Pois^{(T)} = 10^{-5} and w_mbm^{(T)} = 10^{-4}, respectively. The threshold of the ellipsoidal gating is set to T_g = 40. The Bernoulli components whose existence probability is lower than r_B^{(T)} = 10^{-5} are removed.
In this test, to match the scenario of the same heavy-tailed degrees (p_w = p_v = 0.9), the DOFs of the process and measurement noises are set to d_w = d_v = 4. Figure 1b shows the position estimates for this simulation. Note that each boundary of the sparse and dense estimation points indicates a severe change in the target velocity.
Figure 2a and Figure 2b, respectively, show the comparisons of the OSPA distances (for p = 1 and c = 100 ) and cardinality estimates of the proposed filter with the STM-LMB, STM-GLMB, and STM-CBMeMBer filters after 100 Monte Carlo runs, indicating better performance compared to the existing filters. It is worth mentioning that the OSPA fluctuation at t = 70 s in Figure 2a is caused by the gradual decrease in the component’s probability of existence when the target disappears. We also use the GOSPA metric, with parameters p = 2 , c = 30 , and α = 2 , to further evaluate the tracking performance of the proposed filter. The comparisons of the GOSPA error, localization error (LE), missed targets error (ME), and false targets error (FE) are shown in Figure 3. It is evident that the R-ST-PMBM filter exhibits a smaller GOSPA error, LE, and ME most of the time. Note that all the errors are root-squared versions. From Figure 3c, we can conclude that the error at t = 1 s in Figure 2a is caused by the misestimation of the true targets when the filtering process starts. These errors also exist in the simulation of other PMBM-based filters.
Table 1 shows the indices of the average OSPA (AOSPA) distance [23], average GOSPA error (AGOSPA), average LE (ALE), average ME (AME), average FE (AFE), and average running time (ART) (in s) of the four filters after 100 Monte Carlo runs. These results demonstrate that our filter exhibits the best tracking performance in the scenario with the same heavy-tailed degrees. It should be noted that compared to the STF-based filters, the performance advantage of the proposed filter is mainly due to the combined effects of the following factors: avoiding the use of moment-matching approximations with large errors, using a Gaussian-based PMBM posterior to approximate the true posterior, and inheriting the high performance of the standard PMBM filter.

4.2. Scenario with Different Heavy-Tailed Degrees

To evaluate the pertinence of the design of the proposed filter, we consider a scenario in which the process noise and the measurement noise have different heavy-tailed degrees. Before the test, to facilitate the selection of appropriate DOFs, we studied the tracking performance of the proposed filter under different heavy-tailed process and measurement noises with different DOF settings, and the results are shown in Figure 4. For Figure 4a, we set the process noise to null and d_w = 4, whereas for Figure 4b, we set p_v = 0.9 and d_v = 4, and the remaining parameters were unchanged.
In this test, we change the superposition proportion of the process noise (see Equation (93)) to p_w = 0.8, which can increase the number of outliers in the mixed process noise and result in a larger heavy-tailed degree compared to the measurement noise. As shown in Figure 4b, the DOF of the process noise is set to d_w = 2, and the other noise parameters are consistent with the previous test.
Theoretically, to track the targets in this scenario, the one-step predicted PDF and likelihood PDF (see Equations (10) and (11)) are usually required to be set to different DOFs, i.e., d_w ≠ d_v. But this is impossible in the STM-CBMeMBer, STM-LMB, and STM-GLMB filters. Therefore, for these three filters, we successively set the DOFs to d_w = d_v = 3 and d_w = d_v = 4 (due to the existence of moment matching, the DOFs in the STF-based filters cannot be set to 2). We set the DOFs of the R-ST-PMBM filter to d_w = 2 and d_v = 4. Figure 5 and Figure 6 show comparisons of the OSPA distances, cardinality estimates, and GOSPA metrics, respectively.
Note that the extensions "S1" and "S2" in the legends in Figure 5 and Figure 6 represent the DOF settings d_w = d_v = 3 and d_w = d_v = 4, respectively. It is evident that the OSPA distance and the GOSPA error of the R-ST-PMBM filter are lower compared to the existing filters, except for the fluctuation when the target number is reduced (at t = 70 s). Table 2 provides a comparison of the different indices, where we can see that the overall performance of the R-ST-PMBM filter is better compared to the existing filters. Compared to the previous test, the performance advantage of the R-ST-PMBM filter in the scenario of noises with different heavy-tailed degrees is enhanced. It should be pointed out that the operational efficiency advantage of the proposed filter stems from its linear computational complexity in the number of hypotheses, whereas the complexity of the STM-LMB and STM-GLMB filters is quadratic.

4.3. Real Data Scenario

The effectiveness of the R-ST-PMBM filter is also verified in a real data scenario. Essentially, all the target maneuvers and sensor measurement outliers can be considered as the affected results of the heavy-tailed process and measurement noises, respectively. The larger the heavy-tailed degree of the noise, the more severe the target motion or the sensor measurement outliers, and vice versa. In this simulation, the surveillance area is restricted to [8000, 11,000] × [1000, 7000] (in m), and the real data are obtained from radar measurements of two aircraft in the surveillance area. The target motions and sensor measurements are projected onto a two-dimensional plane. Two targets appeared successively in this area, lasting 64 s, and were detected 48 times by the sensor.
The scanning interval T (see Equation (91)) of the sensor is not constant. The newborn parameters are set to w_{γ,k} = 1, m_{γ,k} = [8727, 0, 1399, 0]^T, and P_{γ,k} = diag([50, 50, 50, 50])². The nominal standard deviations of the process and measurement noises are set to σ_w = 30 m/s² and σ_v = 10 m, and the DOFs are set to d_w = d_v = 4. The parameters not mentioned can be referred to in Section 4.1. Figure 7 shows the true trajectories and position estimates. Figure 8 shows the OSPA distance and cardinality estimate comparisons of the four filters. It can be seen that the R-ST-PMBM filter has the best tracking effect among these filters. The comparison results of the GOSPA metrics, shown in Figure 9, also support this conclusion. Note that the large fluctuation of the proposed filter at time step k = 18 is directly caused by the sudden increase in the radar scanning period.
The numerical results of the four filters are provided in Table 3. It can be seen that, compared to the other filters, the R-ST-PMBM filter demonstrates a significant overall performance advantage in this real data scenario.

5. Conclusions

In this paper, an improved PMBM filter, called the R-ST-PMBM filter, was proposed to better perform MTT under heavy-tailed process and measurement noises, which provided a Student’s t-based Gaussian approximation to the components of the PMBM density. The one-step prediction and measurement likelihood were modeled as Student’s t-distributions with different DOFs, enabling our filter to be more adaptable to real environments. The scale matrix of the one-step predicted PDF was assumed to be unknown and modeled as an IW distribution, increasing the filter’s robustness. The resulting Student’s t-based hierarchical Gaussian state-space model was iterated to derive a posterior PDF by introducing the auxiliary variables and the VB approach and then approximated to Gaussian to obtain a closed-form recursion in the update step. The simulation results show that in the heavy-tailed process and measurement noise environment, the R-ST-PMBM filter achieves better overall performance compared to the existing filters, even under these noises with different heavy-tailed degrees.
In our future work, maneuvering target tracking in a real-world environment will become an application field for the proposed filter. Applying this filter to extended target tracking is also a possible research topic.

Author Contributions

J.Z.: Conceptualization, Validation, Writing—original draft preparation. W.X.: Resources, Software, Review, Supervision. Z.L.: Methodology, Writing—review and editing, Project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under grant 62171287 and the Shenzhen Science and Technology Program under grants JCYJ20190808120417257 and JCYJ20220818100004008.

Data Availability Statement

The data presented in this study are partly available on request from the corresponding author. The data are not publicly available due to their current restricted access.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

By applying the approximate inference theory described, we can derive the log-marginal PDF of measurement z k as follows [32]:
\ln p(z_k) = L(z_k) + KLD(q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k)\, ||\, p(x_k, \xi_k, \Sigma_k, \lambda_k | z_k))    (A1)
where
L(z_k) = \int q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k) \ln\left(\frac{p(x_k, \xi_k, \Sigma_k, \lambda_k, z_k)}{q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k)}\right) dx_k\, d\xi_k\, d\Sigma_k\, d\lambda_k    (A2)
which denotes the evidence lower bound (ELBO). Assuming that c_z is the error of this inference when minimizing the KLD in Equation (A1), we can derive the log-marginal PDF by
\ln p(z_k) = L(z_k) + c_z = E\{\ln[p(x_k, \xi_k, \Sigma_k, \lambda_k, z_k)]\} - E\{\ln[q(x_k) q(\xi_k) q(\Sigma_k) q(\lambda_k)]\} + c_z    (A3)
= E\{\ln[p(z_k | x_k, \lambda_k)]\} + E\{\ln[p(x_k | \xi_k, \Sigma_k)]\} + E\{\ln[p(\xi_k)]\} + E\{\ln[p(\Sigma_k)]\} + E\{\ln[p(\lambda_k)]\} - E\{\ln[q(x_k)]\} - E\{\ln[q(\xi_k)]\} - E\{\ln[q(\Sigma_k)]\} - E\{\ln[q(\lambda_k)]\} + c_z    (A4)
= E\{\ln[N(z_k; H x_k, R/\lambda_k)]\} + E\{\ln[N(x_k; m_{k|k-1}, \Sigma_k/\xi_k)]\} + E\{\ln[G(\xi_k; d_w/2, d_w/2)]\} + E\{\ln[IW(\Sigma_k; u_{k|k-1}, U_{k|k-1})]\} + E\{\ln[G(\lambda_k; d_v/2, d_v/2)]\} - E\{\ln[N(x_k; m_k^a, P_k^a)]\} - E\{\ln[G(\xi_k; \alpha_k^a, \beta_k^a)]\} - E\{\ln[IW(\Sigma_k; u_k^a, U_k^a)]\} - E\{\ln[G(\lambda_k; \gamma_k^a, \delta_k^a)]\} + c_z    (A5)
where
E\{\ln[N(z_k; H x_k, R/\lambda_k)]\} = -\frac{1}{2}\ln|R| + \frac{n_z}{2} E[\ln\lambda_k] - \frac{n_z}{2}\ln(2\pi) - \frac{1}{2} tr\{E[\lambda_k] R^{-1} E[(z_k - H x_k)(z_k - H x_k)^T]\}    (A6)
E\{\ln[N(x_k; m_{k|k-1}, \Sigma_k/\xi_k)]\} = -\frac{1}{2} E[\ln|\Sigma_k|] + \frac{n_x}{2} E[\ln\xi_k] - \frac{n_x}{2}\ln(2\pi) - \frac{1}{2} tr\{E[\xi_k] E[\Sigma_k^{-1}] E[(x_k - m_{k|k-1})(x_k - m_{k|k-1})^T]\}    (A7)
E\{\ln[G(\xi_k; d_w/2, d_w/2)]\} = \left(\frac{d_w}{2} - 1\right) E[\ln\xi_k] - \frac{d_w}{2} E[\xi_k] + \frac{d_w}{2}\ln\frac{d_w}{2} - \ln\Gamma\left(\frac{d_w}{2}\right)    (A8)
E\{\ln[IW(\Sigma_k; u_{k|k-1}, U_{k|k-1})]\} = -\frac{1}{2} tr\{U_{k|k-1} E[\Sigma_k^{-1}]\} + \frac{u_{k|k-1}}{2}\ln|U_{k|k-1}| - \frac{u_{k|k-1} + n_x + 1}{2} E[\ln|\Sigma_k|] - \frac{u_{k|k-1} n_x}{2}\ln 2 - \ln\Gamma_{n_x}\left(\frac{u_{k|k-1}}{2}\right)    (A9)
E\{\ln[G(\lambda_k; d_v/2, d_v/2)]\} = \left(\frac{d_v}{2} - 1\right) E[\ln\lambda_k] - \frac{d_v}{2} E[\lambda_k] + \frac{d_v}{2}\ln\frac{d_v}{2} - \ln\Gamma\left(\frac{d_v}{2}\right)    (A10)
E\{\ln[N(x_k; m_k^a, P_k^a)]\} = -\frac{1}{2} tr\{(P_k^a)^{-1} E[(x_k - m_k^a)(x_k - m_k^a)^T]\} - \frac{1}{2}\ln|P_k^a| - \frac{n_x}{2}\ln(2\pi)    (A11)
E\{\ln[G(\xi_k; \alpha_k^a, \beta_k^a)]\} = (\alpha_k^a - 1) E[\ln\xi_k] - \beta_k^a E[\xi_k] + \alpha_k^a \ln\beta_k^a - \ln\Gamma(\alpha_k^a)    (A12)
E\{\ln[IW(\Sigma_k; u_k^a, U_k^a)]\} = -\frac{1}{2} tr(U_k^a E[\Sigma_k^{-1}]) - \frac{u_k^a + n_x + 1}{2} E[\ln|\Sigma_k|] + \frac{u_k^a}{2}\ln|U_k^a| - \frac{u_k^a n_x}{2}\ln 2 - \ln\Gamma_{n_x}\left(\frac{u_k^a}{2}\right)    (A13)
E\{\ln[G(\lambda_k; \gamma_k^a, \delta_k^a)]\} = (\gamma_k^a - 1) E[\ln\lambda_k] - \delta_k^a E[\lambda_k] + \gamma_k^a \ln\delta_k^a - \ln\Gamma(\gamma_k^a)    (A14)
Reference [29] provides the following expectations:
E[(z_k - H x_k)(z_k - H x_k)^T] = (z_k - H m_k^a)(z_k - H m_k^a)^T + H P_k^a H^T    (A15)
E[(x_k - m_{k|k-1})(x_k - m_{k|k-1})^T] = P_k^a + (m_k^a - m_{k|k-1})(m_k^a - m_{k|k-1})^T    (A16)
E[\Sigma_k^{-1}] = (u_k - n_x - 1) U_k^{-1}    (A17)
For G(\xi; \alpha, \beta), we have [35]
E[\xi] = \alpha / \beta    (A18)
E[\ln\xi] = \psi(\alpha) - \ln\beta    (A19)
Here, we set the approximate error c_z = a. By substituting Equations (A6)–(A19) into Equation (A5), we can obtain the exponent of Equation (30) and derive the approximated marginal PDF q(z_k).

References

1. Vo, B.T.; Ma, W.K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 2006, 54, 4091–4104.
2. Dong, P.; Jing, Z.; Leung, H.; Wang, J. Student-t mixture labeled multi-Bernoulli filter for multi-target tracking with heavy-tailed noise. Signal Process. 2018, 152, 331–339.
3. Lian, Y.; Lian, F.; Hou, L. Robust labeled multi-Bernoulli filter with inaccurate noise covariances. In Proceedings of the 25th International Conference on Information Fusion, Linköping, Sweden, 4–7 July 2022.
4. Wang, X.; Xie, W.; Li, L. Labeled Multi-Bernoulli Maneuvering Target Tracking Algorithm via TSK Iterative Regression Model. Chinese J. Electron. 2022, 31, 227–239.
5. Yang, Z.; Li, X.; Yao, X.; Sun, J.; Shan, T. Gaussian process Gaussian mixture PHD filter for 3D multiple extended target tracking. Remote Sens. 2023, 15, 3224.
6. Mahler, R. Statistical Multisource-Multitarget Information Fusion; Artech House, Inc.: Norwood, MA, USA, 2007.
7. Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 2009, 57, 409–423.
8. Vo, B.N.; Vo, B.T.; Hoang, H.G. An efficient implementation of the generalized labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2017, 65, 1975–1987.
9. Reuter, S.; Vo, B.T.; Vo, B.N.; Dietmayer, K. The labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2014, 62, 3246–3260.
10. Williams, J.L. Marginal multi-Bernoulli filters: RFS derivation of MHT, JIPDA, and association-based MeMBer. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1664–1687.
11. García-Fernández, A.F.; Williams, J.L.; Granström, K. Poisson multi-Bernoulli mixture filter: Direct derivation and implementation. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1883–1901.
12. Xia, Y.; Granström, K.; Svensson, L.; García-Fernández, A.F. Performance evaluation of multi-Bernoulli conjugate priors for multi-target filtering. In Proceedings of the 20th International Conference on Information Fusion, Xi'an, China, 10–13 July 2017.
13. Du, H.; Xie, W. Extended target marginal distribution Poisson multi-Bernoulli mixture filter. Sensors 2020, 20, 5387.
14. García-Fernández, A.F.; Svensson, L.; Williams, J.L.; Xia, Y.; Granström, K. Trajectory Poisson multi-Bernoulli filters. IEEE Trans. Signal Process. 2020, 68, 4933–4945.
15. Li, Y.; Wei, P.; You, M.; Wei, Y.; Zhang, H. Joint detection, tracking, and classification of multiple extended objects based on the JDTC-PMBM-GGIW filter. Remote Sens. 2023, 15, 887.
16. Li, G.; Kong, L.; Yi, W.; Li, X. Multiple model Poisson multi-Bernoulli mixture filter for maneuvering targets. IEEE Sens. J. 2021, 21, 3143–3154.
17. Li, W.; Gu, H.; Su, W. Robust Poisson multi-Bernoulli mixture filter with inaccurate process and measurement noise covariances. IEEE Access 2020, 8, 52209–52220.
18. Qiu, H.; Huang, G.; Gao, J. Variational Bayesian labeled multi-Bernoulli filter with unknown sensor noise statistics. Chin. J. Aeronaut. 2016, 29, 1378–1384.
19. Li, G.; Wei, P.; Li, Y.; Gao, L.; Zhang, H. A robust fast LMB filter for superpositional sensors. Signal Process. 2020, 174, 107606.
20. Ting, J.A.; Theodorou, E.; Schaal, S. Learning an outlier-robust Kalman filter. In Proceedings of the 18th European Conference on Machine Learning, Warsaw, Poland, 17–21 September 2007; pp. 748–756.
21. Zhu, H.; Leung, H.; He, Z. A variational Bayesian approach to robust sensor fusion based on Student-t distribution. Inf. Sci. 2013, 221, 201–214.
22. Li, W.; Jia, Y.; Du, J.; Zhang, J. PHD filter for multi-target tracking with glint noise. Signal Process. 2014, 94, 48–56.
23. Liu, Z.; Huang, B.; Zou, Y.; Li, L. Multi-object Bayesian filter for jump Markov system under glint noise. Signal Process. 2019, 157, 131–140.
24. Dong, P.; Leung, H.; Jing, Z.; Shen, K.; Li, M. The labeled multi-Bernoulli filter for multitarget tracking with glint noise. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 2253–2268.
25. Hou, L.; Lian, F.; Tan, S.; Xu, C.; De Abreu, G.T.F. Robust generalized labeled multi-Bernoulli filter for multitarget tracking with unknown non-stationary heavy-tailed measurement noise. IEEE Access 2021, 9, 94438–94453.
26. Roth, M.; Özkan, E.; Gustafsson, F. A Student's t filter for heavy-tailed process and measurement noise. In Proceedings of the 2013 IEEE Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013.
27. Wang, M.; Ji, H.; Zhang, Y.; Hu, X. A Student's t mixture cardinality-balanced multi-target multi-Bernoulli filter with heavy-tailed process and measurement noises. IEEE Access 2018, 6, 51098–51109.
28. Hu, X.; Zhang, Q.; Song, B.; Zhao, M.; Xia, Z. Student-t mixture GLMB filter with heavy-tailed noises. In Proceedings of the 2022 IEEE Conference on Signal Processing, Communications and Computing, Xi'an, China, 25–27 October 2022.
29. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z. A novel robust Student's t-based Kalman filter. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1545–1554.
30. Huang, Y.; Zhang, Y.; Li, N.; Wu, Z. A robust Gaussian approximate fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises. IEEE Signal Process. Lett. 2016, 23, 468–472.
31. O'Hagan, A.; Forster, J. Kendall's Advanced Theory of Statistics, Vol 2B: Bayesian Inference; Arnold Publishers: London, UK, 2004; pp. 402–406.
32. Bishop, C. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: New York, NY, USA, 2006; pp. 462–466.
33. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457.
34. Rahmathullah, A.S.; García-Fernández, A.F.; Svensson, L. Generalized optimal sub-pattern assignment metric. In Proceedings of the 20th International Conference on Information Fusion, Xi'an, China, 10–13 July 2017.
35. Huang, Y. Researches on High-Accuracy State Estimation Methods and Their Applications to Target Tracking and Cooperative Localization. Ph.D. Dissertation, Harbin Engineering University, Harbin, China, 2018; pp. 162–163.
Figure 1. (a) True trajectories of the targets in the scenario with the same heavy-tailed degrees. (b) Position estimates of this simulation.
Figure 2. (a) OSPA distances of the four filters versus time. (b) Cardinality estimates versus time.
Figure 3. Comparison of the errors among the four filters versus time. (a) GOSPA error. (b) LE. (c) ME. (d) FE.
Figure 4. AOSPA versus DOF under noises with different heavy-tailed degrees. (a) Process noise is null, d_w = 4. (b) p_v = 0.9, d_v = 4.
Figure 5. (a) OSPA distances of the four filters with different DOF settings. (b) Cardinality estimates versus time.
Figure 6. Comparison of the errors among the four filters with different DOF settings. (a) GOSPA error. (b) LE. (c) ME. (d) FE.
Figure 7. True trajectories and position estimates in the real data scenario.
Figure 8. (a) OSPA distances of the four filters in the real data scenario. (b) Cardinality estimates versus time.
Figure 9. Comparison of the errors among the four filters in the real data simulation. (a) GOSPA error. (b) LE. (c) ME. (d) FE.
Table 1. Numerical results of the four filters in the scenario with the same heavy-tailed degrees.
Filter | AOSPA | AGOSPA | ALE | AME | AFE | ART
STM-CBMeMBer | 18.801 | 26.967 | 18.634 | 15.313 | 11.78 | 4.035
STM-LMB | 10.598 | 21.924 | 17.574 | 10.136 | 7.92 | 8.708
STM-GLMB | 10.561 | 21.934 | 17.755 | 9.886 | 7.771 | 12.452
R-ST-PMBM | 9.571 | 20.299 | 15.923 | 9.25 | 8.138 | 4.02
Table 2. Numerical results of the four filters in the scenario with different heavy-tailed degrees.
Filter | DOF Settings | AOSPA | AGOSPA | ALE | AME | AFE | ART
STM-CBMeMBer | d_w = d_v = 3 | 19.203 | 27.541 | 19.402 | 15.342 | 11.785 | 4.125
STM-CBMeMBer | d_w = d_v = 4 | 19.387 | 26.393 | 17.524 | 16.511 | 10.484 | 3.739
STM-LMB | d_w = d_v = 3 | 11.909 | 25.567 | 21.198 | 10.487 | 9.31 | 9.141
STM-LMB | d_w = d_v = 4 | 12.306 | 24.412 | 19.276 | 11.723 | 8.96 | 8.688
STM-GLMB | d_w = d_v = 3 | 12.101 | 25.864 | 21.354 | 10.609 | 9.616 | 14.978
STM-GLMB | d_w = d_v = 4 | 11.936 | 24.122 | 19.246 | 11.183 | 8.895 | 12.666
R-ST-PMBM | d_w = 2, d_v = 4 | 10.379 | 21.624 | 16.499 | 10.43 | 8.839 | 4.516
Table 3. Numerical results of the four filters in the real data simulation.
Filter | AOSPA | AGOSPA | ALE | AME | AFE | ART
STM-CBMeMBer | 14.555 | 14.111 | 11.581 | 2.652 | 1.326 | 0.302
STM-LMB | 7.964 | 7.353 | 6.62 | 0.884 | 0.442 | 1.152
STM-GLMB | 7.977 | 7.366 | 6.634 | 0.884 | 0.442 | 1.531
R-ST-PMBM | 6.623 | 6.18 | 5.103 | 0.884 | 0.884 | 0.416