# Improved Bearings-Only Multi-Target Tracking with GM-PHD Filtering



Department of Electronic Systems Engineering, Hanyang University, Ansan, Gyeonggi-do 15588, Korea

Author to whom correspondence should be addressed.

Academic Editor: Vittorio M. N. Passaro

Received: 25 July 2016 / Revised: 5 September 2016 / Accepted: 7 September 2016 / Published: 10 September 2016

(This article belongs to the Section Physical Sensors)

In this paper, an improved nonlinear Gaussian mixture probability hypothesis density (GM-PHD) filter is proposed to handle bearings-only measurements in multi-target tracking. The proposed method, called the Gaussian mixture measurements-probability hypothesis density (GMM-PHD) filter, not only approximates the posterior intensity with a Gaussian mixture, but also models the likelihood function with a Gaussian mixture instead of a single Gaussian distribution. In addition, the target birth model of the GMM-PHD filter is assumed to be partially uniform rather than a Gaussian mixture. Simulation results show that the proposed filter outperforms the GM-PHD filter combined with the extended Kalman filter (EKF) and with the unscented Kalman filter (UKF).

Bearings-only multi-target tracking (MTT) [1,2,3] in the presence of clutter and missed detections is a challenging nonlinear problem. A filter for this problem must not only handle the nonlinearity of the measurements, but also resolve the measurement origin uncertainty of multiple targets. Bearings-only measurements [4] are often obtained by passive sensors, such as sonars, and carry little information about the detected targets, leading to an observability problem [5]. For bearings-only single-target tracking, the degree of observability can be evaluated by the Cramér–Rao lower bound (CRLB) [6,7]. However, it is difficult to evaluate the degree of observability theoretically for bearings-only multi-target tracking with clutter and missed detections [8]. To overcome the observability problem, the sensor needs to outmaneuver the targets [5,9]. As the origin of a measurement is not known when tracking in clutter, the tracking filter needs a proper track-to-measurement association method [1] or an equivalent mechanism, such as symmetric measurement equations [10] or random finite sets [11].

For tracking targets with nonlinear measurements, the most popular method is the extended Kalman filter (EKF) [12]. It linearizes the nonlinear measurement function around the predicted target state under the assumption that the true target state is close enough to the predicted one; the measurement update can then be performed with the Kalman filter (KF) [12], which performs well for linear systems. The unscented Kalman filter (UKF) [13] applies the unscented transform to the nonlinear measurement function instead of linearization: the probability density functions (pdfs) are represented and propagated using a small set of sigma points. The particle filter (PF) [14] represents nonlinear pdfs with a set of weighted particles and can achieve better performance than the EKF and the UKF, but at a heavy computational cost. Apart from these, many other nonlinear filters exist, such as the cubature Kalman filter (CKF) [15], the shifted Rayleigh filter (SRF) [16] and the ensemble Kalman filter (EnKF) [17].

To handle the measurement origin uncertainty in MTT with clutter, many techniques have been developed. Most belong to one of the following categories: joint probabilistic data association (JPDA) [18], multiple hypothesis tracking (MHT) [19,20] and random finite set (RFS) [11] based methods. The JPDA filters extend the probabilistic data association (PDA) [1] filter with joint association events and can only track a fixed and known number of targets. To accommodate a varying and unknown number of targets, the joint integrated PDA (JIPDA) [21] filter, which estimates the probability of target existence, was proposed. To improve tracking accuracy, a multi-scan multi-target tracking algorithm, joint integrated track splitting (JITS), was developed in [22]. As the JIPDA and the JITS suffer from a heavy computational load when tracking targets in mutual proximity, the linear multi-target IPDA (LMIPDA) and the linear multi-target ITS (LMITS) were proposed in [22,23], respectively. The iterative JIPDA (iJIPDA) [24] tracker is another computationally efficient MTT algorithm. The MHT filters maintain and evaluate a set of measurement hypotheses with high track scores; their many variants mostly fall into two classes, track-oriented [25] and measurement-oriented [19]. The JPDA and MHT approaches are formulated via data association, which consumes a large share of the computational resources, whereas the RFS approach is an emerging paradigm established without explicit data association. The RFS-based filters [26,27,28] treat the multi-target states and the measurements as a state finite set and a measurement finite set, respectively, and estimate the target set from the measurement set. In this way, RFS-based filters perform multi-target tracking in a computationally efficient manner.

Among RFS-based filters, the probability hypothesis density (PHD) [26] filter propagates a first-moment approximation of the multi-target predicted and posterior densities, without considering data associations between targets and measurements. As the PHD recursion has no general closed-form solution, sequential Monte Carlo (SMC) and Gaussian mixture (GM) implementations were developed, resulting in the SMC-PHD [29,30] and GM-PHD [31] filters. For bearings-only MTT, the SMC-PHD filter is difficult to apply because of the problem of extracting the estimated target states; for this reason, the GM-PHD filter is considered in this paper. The GM-PHD filters combined with the EKF and the UKF are termed GM-PHD-EKF and GM-PHD-UKF, respectively.

In the GM-PHD-EKF and the GM-PHD-UKF, the predicted intensity is modeled by a Gaussian mixture, and the likelihood function [12] is approximated by a single Gaussian distribution. The updated intensity of each filter is obtained after the predicted intensity is updated by the measurements. As bearings-only measurements are severely nonlinear, a likelihood function approximated by a single Gaussian is not accurate enough, and the accuracy of the updated intensity cannot improve sufficiently when the predicted intensity is updated with this inaccurate likelihood. In this paper, the Gaussian mixture measurements-PHD (GMM-PHD) filter is proposed to address bearings-only MTT with clutter and missed detections. To improve tracking accuracy, the GMM-PHD filter approximates the target intensities by Gaussian mixtures and also models the likelihood function by a Gaussian mixture. In this way, after the predicted intensity is updated with the more accurate, Gaussian mixture likelihood function, the updated intensity of the GMM-PHD is much more accurate than those of the GM-PHD-EKF and the GM-PHD-UKF. The proposed filter is a nonlinear MTT algorithm that can handle not only bearings-only measurements, but also other nonlinear measurements. In bearings-only MTT, targets may appear anywhere in the measurement space; to reduce the number of tuning parameters and avoid the undesirable effects of poor parameter selection [8], the GMM-PHD filter is derived with a partially-uniform target birth model [8,32] rather than a Gaussian mixture birth model. In the simulation experiment, the performance of the GMM-PHD is compared with that of the GM-PHD-EKF and the GM-PHD-UKF in terms of the optimal subpattern assignment (OSPA) [33] distance, the OSPA localization and cardinality components, and the average CPU time.

In this paper, bearings-only MTT with one passive sensor on a maneuvering platform is studied. The targets obey the continuous white noise acceleration (CWNA) motion model [1,12]. This section presents the target motion model and the sensor measurement model in 2D Cartesian coordinates. For target t, the target state ${\mathbf{e}}_{k}^{t}$ with position $({x}_{k}^{t},{y}_{k}^{t})$ and velocity $({\dot{x}}_{k}^{t},{\dot{y}}_{k}^{t})$ at time k is expressed as:

$${\mathbf{e}}_{k}^{t}={[{x}_{k}^{t},{y}_{k}^{t},{\dot{x}}_{k}^{t},{\dot{y}}_{k}^{t}]}^{\prime}.$$

An RFS ${\mathbf{X}}_{k}$ of target states can contain any number ${N}_{k}$ of targets at time k and is given by:

$${\mathbf{X}}_{k}=\{{\mathbf{e}}_{k,1}^{t},\cdots ,{\mathbf{e}}_{k,{N}_{k}}^{t}\}.$$

Let **χ** denote the single target state space and $\mathcal{F}\left(\mathit{\chi}\right)$ denote the set of all finite subsets of **χ**, i.e., ${\mathbf{e}}_{k,1}^{t},\dots ,{\mathbf{e}}_{k,{N}_{k}}^{t}\in \mathit{\chi}$ and ${\mathbf{X}}_{k}\in \mathcal{F}\left(\mathit{\chi}\right)$.

As the target t is assumed to follow the CWNA motion model, its dynamic model can be expressed as:

$${\mathbf{e}}_{k}^{t}=\mathbf{\Phi}{\mathbf{e}}_{k-1}^{t}+{\mathit{\nu}}_{k-1},$$

where the state propagation matrix **Φ** is time-invariant,

$$\mathbf{\Phi}=\left[\begin{array}{cc}1& T\\ 0& 1\end{array}\right]\otimes {\mathbf{I}}_{2},$$

${\mathit{\nu}}_{k-1}$ is a sequence of zero mean, white Gaussian process noises with covariance:

$${\mathbf{Q}}_{k-1}=q\left[\begin{array}{cc}{T}^{4}/4& {T}^{3}/2\\ {T}^{3}/2& {T}^{2}\end{array}\right]\otimes {\mathbf{I}}_{2},$$

T is the sampling time, ${\mathbf{I}}_{2}$ is the $2\times 2$ identity matrix and q is the power spectral density (PSD) [1]. The sensor state ${\mathbf{e}}_{k}^{s}$ is given as:

$${\mathbf{e}}_{k}^{s}={[{x}_{k}^{s},{y}_{k}^{s},{\dot{x}}_{k}^{s},{\dot{y}}_{k}^{s}]}^{\prime}.$$
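The matrices **Φ** and ${\mathbf{Q}}_{k-1}$ above can be assembled directly with Kronecker products. The following is a minimal numpy sketch of the CWNA model and one prediction step; the values of T, q and the state are illustrative:

```python
import numpy as np

T = 1.0   # sampling time (illustrative)
q = 0.1   # process-noise power spectral density (illustrative)

I2 = np.eye(2)
# State propagation matrix: Phi = [[1, T], [0, 1]] (Kronecker) I_2
Phi = np.kron(np.array([[1.0, T], [0.0, 1.0]]), I2)
# Process-noise covariance: Q = q * [[T^4/4, T^3/2], [T^3/2, T^2]] (Kronecker) I_2
Q = q * np.kron(np.array([[T**4 / 4, T**3 / 2],
                          [T**3 / 2, T**2]]), I2)

# One prediction step for a target state e = [x, y, xdot, ydot]'
e = np.array([100.0, 200.0, 3.0, -2.0])
e_pred = Phi @ e  # position advances by velocity * T, velocity unchanged
```

With this state ordering, the Kronecker structure makes position and velocity share the same $2\times 2$ block pattern in both Φ and Q.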

Each target t can be detected with probability ${P}_{D,k}$ at time k, and the sensor generates a measurement when the target is detected. We use ${\theta}_{k}^{t}$ to denote the measurement of target t at time k. The measurement from the sensor is:

$${\mathbf{w}}_{k}\triangleq {\theta}_{k}^{t}=h({\mathbf{e}}_{k}^{t},{\mathbf{e}}_{k}^{s})+{\mathit{\varpi}}_{k},$$

where:

$$h({\mathbf{e}}_{k}^{t},{\mathbf{e}}_{k}^{s})={\tan}^{-1}\left(\frac{{x}_{k}^{t}-{x}_{k}^{s}}{{y}_{k}^{t}-{y}_{k}^{s}}\right),$$

and ${\mathit{\varpi}}_{k}$ is zero-mean, white Gaussian measurement noise, uncorrelated with ${\mathit{\nu}}_{k}$, with covariance:

$${R}_{k}={\sigma}_{\theta}^{2}.$$

Therefore, each target t generates a measurement RFS ${\mathbf{\Theta}}_{k}\left({\mathbf{e}}_{k}^{t}\right)$, which is either $\left\{{\theta}_{k}^{t}\right\}$ (target t is detected) or ∅ (target t is not detected). In addition, the sensor also produces false measurements, which form an RFS ${\mathbf{K}}_{k}$ at each time k. The MTT measurement RFS ${\mathbf{W}}_{k}$ at time k can then be expressed as:

$${\mathbf{W}}_{k}={\mathbf{K}}_{k}\cup \left[\bigcup _{\mathbf{e}\in {\mathbf{X}}_{k}}{\mathbf{\Theta}}_{k}\left(\mathbf{e}\right)\right].$$
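The bearing function h is most robustly implemented with a quadrant-aware arctangent; a small sketch with illustrative geometry (note that atan2(Δx, Δy) measures the bearing clockwise from the y-axis, matching the ratio in the equation above while resolving the quadrant):

```python
import numpy as np

def bearing(e_t, e_s):
    """Bearing h(e^t, e^s) = atan2(x_t - x_s, y_t - y_s): the angle from
    the sensor to the target, measured from the y-axis (north)."""
    return np.arctan2(e_t[0] - e_s[0], e_t[1] - e_s[1])

target = np.array([1000.0, 1000.0, 0.0, 0.0])   # illustrative target state
sensor = np.array([0.0, 0.0, 0.0, 0.0])          # illustrative sensor state
theta = bearing(target, sensor)                  # pi/4 for a north-east target

sigma_theta = np.deg2rad(1.0)                    # illustrative bearing noise std
z = theta + sigma_theta * np.random.randn()      # noisy measurement theta_k^t
```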

In RFS-based methods, the Bayesian recursion of the multi-target posterior density is propagated in time as:

$${p}_{k|k-1}\left({\mathbf{X}}_{k}|{\mathbf{W}}_{1:k-1}\right)=\int {f}_{k|k-1}\left({\mathbf{X}}_{k}|\mathbf{X}\right){p}_{k-1}\left(\mathbf{X}|{\mathbf{W}}_{1:k-1}\right){\mu}_{s}\left(d\mathbf{X}\right),$$

$${p}_{k}\left({\mathbf{X}}_{k}|{\mathbf{W}}_{1:k}\right)=\frac{{g}_{k}\left({\mathbf{W}}_{k}|{\mathbf{X}}_{k}\right){p}_{k|k-1}\left({\mathbf{X}}_{k}|{\mathbf{W}}_{1:k-1}\right)}{\int {g}_{k}\left({\mathbf{W}}_{k}|\mathbf{X}\right){p}_{k|k-1}\left(\mathbf{X}|{\mathbf{W}}_{1:k-1}\right){\mu}_{s}\left(d\mathbf{X}\right)},$$

where ${f}_{k|k-1}\left({\mathbf{X}}_{k}|\mathbf{X}\right)$ and ${g}_{k}\left({\mathbf{W}}_{k}|{\mathbf{X}}_{k}\right)$ denote the multi-target transition density and the multi-target likelihood function, respectively, ${p}_{k}\left({\mathbf{X}}_{k}|{\mathbf{W}}_{1:k}\right)$ denotes the multi-target posterior density and ${\mu}_{s}\left(d\mathbf{X}\right)$ is an appropriate reference measure on $\mathcal{F}\left(\mathit{\chi}\right)$ [31].

To reduce the computational intractability, a first-moment approximation of this recursion, the PHD filter, was proposed in [26]. The general form of the PHD recursion (without target spawning) is given by:

$${v}_{k|k-1}\left(\mathbf{e}\right)=\int {P}_{S,k}\left(\mathit{\eta}\right){f}_{k|k-1}\left(\mathbf{e}|\mathit{\eta}\right){v}_{k-1}\left(\mathit{\eta}\right)d\mathit{\eta}+{\gamma}_{k}\left(\mathbf{e}\right),$$

$${v}_{k|k}\left(\mathbf{e}\right)=\left[1-{P}_{D,k}\left(\mathbf{e}\right)\right]{v}_{k|k-1}\left(\mathbf{e}\right)+\sum _{\mathbf{w}\in {\mathbf{W}}_{k}}\frac{{P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}},$$

where ${v}_{k|k-1}\left(\mathbf{e}\right)$ and ${v}_{k|k}\left(\mathbf{e}\right)$ denote the predicted intensity from time $k-1$ to k and the updated intensity, respectively, ${P}_{S,k}\left(\mathit{\eta}\right)$ is the probability that a target survives to time k, ${f}_{k|k-1}\left(\mathbf{e}|\mathit{\eta}\right)$ is the single-target transition density from time $k-1$ to k, ${\gamma}_{k}\left(\mathbf{e}\right)$ is the intensity of spontaneous target births at time k, ${g}_{k}\left(\mathbf{w}|\mathbf{e}\right)$ is the single-target measurement likelihood function and ${\kappa}_{k}\left(\mathbf{w}\right)$ is the clutter intensity.

According to the Gaussian mixture assumption, the posterior intensity at time $k-1$ can be expressed as:

$$\begin{array}{c}\hfill {v}_{k-1}\left(\mathbf{e}\right)=\sum _{i=1}^{{J}_{k-1}}{\omega}_{k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k-1}^{\left(i\right)},{\mathbf{J}}_{k-1}^{\left(i\right)}\right).\end{array}$$

Let θ, r, c and s denote the bearing, range, course and speed in polar coordinates, respectively. The function $\Psi \left(\mathbf{e};{\gamma}_{k}\left(\theta ,r,c,s\right)\right)$ transforms the density ${\gamma}_{k}\left(\theta ,r,c,s\right)$ from polar coordinates to Cartesian coordinates. The predicted intensity at time k is then given by:

$${v}_{k|k-1}\left(\mathbf{e}\right)={v}_{S,k|k-1}\left(\mathbf{e}\right)+\Psi \left(\mathbf{e};{\gamma}_{k}\left(\theta ,r,c,s\right)\right),$$

where:

$${v}_{S,k|k-1}\left(\mathbf{e}\right)={P}_{S,k}\sum _{i=1}^{{J}_{k-1}}{\omega}_{k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right),$$

$${\gamma}_{k}\left(\theta ,r,c,s\right)={\omega}_{k}^{b}U\left(\theta ;{\mathcal{H}}_{\theta}\right)\mathcal{N}\left(r;\overline{r},{\sigma}_{r}^{2}\right)\mathcal{N}\left(c;\theta -\pi ,{\sigma}_{c}^{2}\right)\mathcal{N}\left(s;\overline{s},{\sigma}_{s}^{2}\right).$$

In Equation (17), the survival probability ${P}_{S,k}\left(\mathbf{e}\right)$ is assumed to be independent of the target state and is written ${P}_{S,k}$. In Equation (18), ${\omega}_{k}^{b}$ is the expected number of targets appearing at time k, $U\left(\theta ;{\mathcal{H}}_{\theta}\right)$ is the uniform distribution in θ over the region ${\mathcal{H}}_{\theta}$, $\overline{r}$ is the prior mean of the range and ${\sigma}_{r}^{2}$ is its prior variance. Similarly, $\overline{s}$ is the prior mean of the speed with prior variance ${\sigma}_{s}^{2}$, and ${\sigma}_{c}^{2}$ is the prior course variance. The predicted estimates ${\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)}$ and ${\mathbf{J}}_{k|k-1}^{\left(i\right)}$ are obtained via the Kalman prediction:

$$\begin{array}{c}\hfill {\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)}=\mathbf{\Phi}{\widehat{\mathbf{e}}}_{k-1}^{\left(i\right)},\end{array}$$

$$\begin{array}{c}\hfill {\mathbf{J}}_{k|k-1}^{\left(i\right)}=\mathbf{\Phi}{\mathbf{J}}_{k-1}^{\left(i\right)}{\mathbf{\Phi}}^{\prime}+{\mathbf{Q}}_{k-1}.\end{array}$$

The target state is augmented with a binary variable β to distinguish between the surviving components and the birth components:

$${v}_{k|k-1}\left(\mathbf{e},\beta \right)=\left\{\begin{array}{ll}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right),& \beta =0\\ \Psi \left(\mathbf{e};{\gamma}_{k}\left(\theta ,r,c,s\right)\right),& \beta =1\end{array}\right.$$

where ${\omega}_{k|k-1}^{\left(i\right)}={P}_{S,k}{\omega}_{k-1}^{\left(i\right)}$. As noted in [34], the surviving and birth components are kept separate to avoid biasing the cardinality estimates.

In GM-PHD filters, the likelihood function ${g}_{k}\left(\mathbf{w}|\mathbf{e}\right)$ is conventionally modeled as a single Gaussian distribution. In the proposed GMM-PHD filter, it is instead approximated by a Gaussian mixture.

Although the measurement noise is assumed to be Gaussian, the measurement uncertainty is non-Gaussian (non-elliptical) in Cartesian coordinates. The GMM measurement representation first divides the non-elliptical measurement uncertainty region into several segments; each segment is then modeled by one Gaussian distribution, so that the whole measurement uncertainty is approximated by a Gaussian mixture.

Suppose the measurement uncertainty region in Cartesian coordinates is determined by the range interval $\left[{r}_{k,min},{r}_{k,max}\right]$ and the measurement ${\theta}_{k}^{t}$ with standard deviation ${\sigma}_{\theta}$. The range interval is divided into ${A}_{k}$ subintervals, given by [35,36]:

$$\frac{{r}_{k,a+1}}{{r}_{k,a}}={\tau}_{k};\phantom{\rule{4pt}{0ex}}a=1,\dots ,{A}_{k},$$

where:

$${\tau}_{k}={\left(\frac{{r}_{k,max}}{{r}_{k,min}}\right)}^{1/{A}_{k}}.$$
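The geometric range partition can be computed in a few lines; a minimal sketch, with illustrative values for the range interval and ${A}_{k}$:

```python
import numpy as np

r_min, r_max = 1000.0, 20000.0   # range interval [r_min, r_max] (illustrative)
A = 6                             # number of subintervals A_k (illustrative)

tau = (r_max / r_min) ** (1.0 / A)          # common ratio tau_k
r_edges = r_min * tau ** np.arange(A + 1)   # boundaries r_{k,1}, ..., r_{k,A+1}
```

Because the ratio between consecutive boundaries is constant, far segments are wider than near ones, which matches the growth of cross-range uncertainty with range.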

Each segment a is then determined by ${r}_{k,a}$, ${r}_{k,a+1}$, ${\theta}_{k}^{t}-{\sigma}_{\theta}$ and ${\theta}_{k}^{t}+{\sigma}_{\theta}$ in polar coordinates. Let $\overline{r}=\left({r}_{k,a}+{r}_{k,a+1}\right)/2$ and $\Delta r=\left({r}_{k,a+1}-{r}_{k,a}\right)/2$; then segment a is approximated by a Gaussian distribution with mean ${\widehat{\mathbf{w}}}_{k,a}$ and covariance ${\mathbf{R}}_{k,a}$ in Cartesian coordinates:

$${\widehat{\mathbf{w}}}_{k,a}=\left[\begin{array}{c}{x}_{k}^{s}\\ {y}_{k}^{s}\end{array}\right]+\overline{r}\left[\begin{array}{c}\sin \left({\theta}_{k}^{t}\right)\\ \cos \left({\theta}_{k}^{t}\right)\end{array}\right],$$

$${\mathbf{R}}_{k,a}=\mathit{\varphi}\left[\begin{array}{cc}{\left(\Delta r\right)}^{2}& 0\\ 0& {\overline{r}}^{2}{\sigma}_{\theta}^{2}\end{array}\right]{\mathit{\varphi}}^{\prime},$$

where:

$$\mathit{\varphi}=\left[\begin{array}{cc}\sin \left({\theta}_{k}^{t}\right)& -\cos \left({\theta}_{k}^{t}\right)\\ \cos \left({\theta}_{k}^{t}\right)& \sin \left({\theta}_{k}^{t}\right)\end{array}\right].$$

The area of each segment differs, and the weight of segment a is proportional to its area, given by [35]:

$${\lambda}_{k,a}=\frac{\sqrt{\det \left({\mathbf{R}}_{k,a}\right)}}{{\displaystyle \sum _{a=1}^{{A}_{k}}}\sqrt{\det \left({\mathbf{R}}_{k,a}\right)}},$$

so that

$$\sum _{a=1}^{{A}_{k}}{\lambda}_{k,a}=1.$$
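Putting the segmentation, the per-segment Gaussians and the weights together, the following sketch builds the mixture components for one bearing measurement; all numeric values are illustrative:

```python
import numpy as np

theta = np.deg2rad(30.0)       # bearing measurement theta_k^t (illustrative)
sigma_theta = np.deg2rad(1.0)  # bearing std (illustrative)
sensor_pos = np.array([0.0, 0.0])
r_min, r_max, A = 1000.0, 20000.0, 6

tau = (r_max / r_min) ** (1.0 / A)
edges = r_min * tau ** np.arange(A + 1)

means, covs = [], []
for r_lo, r_hi in zip(edges[:-1], edges[1:]):
    r_bar = 0.5 * (r_lo + r_hi)   # segment mean range
    dr = 0.5 * (r_hi - r_lo)      # segment half-width
    # Mean w_hat: sensor position plus r_bar along the measured bearing
    means.append(sensor_pos + r_bar * np.array([np.sin(theta), np.cos(theta)]))
    # Covariance R = phi * diag((dr)^2, r_bar^2 sigma_theta^2) * phi'
    phi = np.array([[np.sin(theta), -np.cos(theta)],
                    [np.cos(theta),  np.sin(theta)]])
    covs.append(phi @ np.diag([dr**2, r_bar**2 * sigma_theta**2]) @ phi.T)

# Weights proportional to sqrt(det R), normalized to sum to one
dets = np.array([np.sqrt(np.linalg.det(R)) for R in covs])
lam = dets / dets.sum()
```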

Then, the measurement likelihood function in Cartesian coordinates $\left(\beta =0\right)$ is approximated as a Gaussian mixture:

$$\begin{array}{c}\hfill {g}_{k}\left(\mathbf{w}|\mathbf{e},\beta =0\right)\approx {C}_{k}\sum _{a=1}^{{A}_{k}}{\lambda}_{k,a}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\mathbf{e}}_{k},{\mathbf{R}}_{k,a}\right),\end{array}$$

where $\mathbf{H}=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\end{array}\right]$ is the observation matrix and the constant ${C}_{k}$ is calculated as [37]:

$${C}_{k}={\int}_{{r}_{k,min}}^{{r}_{k,max}}r\phantom{\rule{0.166667em}{0ex}}dr=\frac{{r}_{k,max}^{2}-{r}_{k,min}^{2}}{2}.$$

Since the measurement noise is modeled as Gaussian, the likelihood function in polar coordinates $\left(\beta =1\right)$ is expressed as:

$$\begin{array}{c}\hfill {g}_{k}\left(\mathbf{w}|\mathbf{e},\beta =1\right)=\mathcal{N}\left({\theta}_{k}^{t};\theta ,{\sigma}_{\theta}^{2}\right).\end{array}$$

Figure 1 gives an example of the GMM representation. In the figure, the measurement uncertainty of the GMM-PHD is approximated by six measurement components (solid ellipses), while that of the GM-PHD-EKF is modeled by a single Gaussian distribution (dashed ellipse); the measurement uncertainty of the GM-PHD-UKF is likewise approximated by one Gaussian. The true target is marked by a cross. The measurement likelihood function approximated by a Gaussian mixture in the GMM-PHD is clearly much more accurate than the single-Gaussian approximation in the GM-PHD-EKF. Thus, the GMM-PHD can achieve much higher tracking accuracy than the GM-PHD-EKF and the GM-PHD-UKF.

New targets are assumed to always be detected at their time of birth. Furthermore, we also assume that the detection probability of a surviving target is independent of its state:

$$\begin{array}{c}\hfill {P}_{D,k}\left(\mathbf{e},\beta \right)=\left\{\begin{array}{c}{P}_{D,k},\beta =0\\ 1,\beta =1\end{array}\right..\end{array}$$

According to Equation (14), the posterior intensity at time k is given by:

$$\begin{array}{ll}{v}_{k|k}\left(\mathbf{e},\beta =0\right)& =\left[1-{P}_{D,k}\right]\sum _{i=1}^{{J}_{k-1}}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)\\ & \phantom{=}+\sum _{\mathbf{w}\in {\mathbf{W}}_{k}}\sum _{i=1}^{{J}_{k-1}}\frac{{P}_{D,k}{C}_{k}\sum _{a=1}^{{A}_{k}}{\lambda}_{k,a}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\mathbf{e}}_{k},{\mathbf{R}}_{k,a}\right){\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}}\\ & =\left[1-{P}_{D,k}\right]\sum _{i=1}^{{J}_{k-1}}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)\\ & \phantom{=}+\sum _{\mathbf{w}\in {\mathbf{W}}_{k}}\sum _{i=1}^{{J}_{k-1}}\sum _{a=1}^{{A}_{k}}\frac{{P}_{D,k}{C}_{k}{\lambda}_{k,a}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\mathbf{e}}_{k},{\mathbf{R}}_{k,a}\right){\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}},\end{array}$$

and:

$$\begin{array}{ll}{v}_{k|k}\left(\mathbf{e},\beta =1\right)& =\sum _{\mathbf{w}\in {\mathbf{W}}_{k}}\frac{{g}_{k}\left(\mathbf{w}|\mathbf{e}\right)\Psi \left(\mathbf{e};{\gamma}_{k}\left(\theta ,r,c,s\right)\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}}\\ & =\sum _{\mathbf{w}\in {\mathbf{W}}_{k}}\frac{\Psi \left(\mathbf{e};{g}_{k}\left(\mathbf{w}|\mathbf{e}\right){\gamma}_{k}\left(\theta ,r,c,s\right)\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}}.\end{array}$$

In the first step of Equation (34), the likelihood function ${g}_{k}\left(\mathbf{w}|\mathbf{e}\right)$ is expressed by the single Gaussian of Equation (31) rather than the Gaussian mixture of Equation (29), because the target birth model ${\gamma}_{k}\left(\theta ,r,c,s\right)$ is given in polar coordinates (uniform across the bearing space and Gaussian in range, course and speed). As both are expressed in the same polar coordinates, we have:

$${g}_{k}\left(\mathbf{w}|\mathbf{e}\right)\Psi \left(\mathbf{e};{\gamma}_{k}\left(\theta ,r,c,s\right)\right)=\Psi \left(\mathbf{e};{g}_{k}\left(\mathbf{w}|\mathbf{e}\right){\gamma}_{k}\left(\theta ,r,c,s\right)\right).$$

In the above equation,

$$\begin{array}{ll}{\phi}_{k}\left(\theta ,r,c,s\right)& \triangleq {g}_{k}\left(\mathbf{w}|\mathbf{e}\right){\gamma}_{k}\left(\theta ,r,c,s\right)\\ & =\mathcal{N}\left({\theta}_{k}^{t};\theta ,{\sigma}_{\theta}^{2}\right){\omega}_{k}^{b}U\left(\theta ;{\mathcal{H}}_{\theta}\right)\mathcal{N}\left(r;\overline{r},{\sigma}_{r}^{2}\right)\mathcal{N}\left(c;\theta -\pi ,{\sigma}_{c}^{2}\right)\mathcal{N}\left(s;\overline{s},{\sigma}_{s}^{2}\right)\\ & ={\omega}_{k}^{b}\frac{{1}_{\mathcal{H}}\left(\theta \right)}{{V}_{\mathcal{H}}}\mathcal{N}\left(\theta ;{\theta}_{k}^{t},{\sigma}_{\theta}^{2}\right)\mathcal{N}\left(r;\overline{r},{\sigma}_{r}^{2}\right)\mathcal{N}\left(c;\theta -\pi ,{\sigma}_{c}^{2}\right)\mathcal{N}\left(s;\overline{s},{\sigma}_{s}^{2}\right)\\ & \approx \frac{{\omega}_{k}^{b}}{2\pi}\mathcal{N}\left(\theta ;{\theta}_{k}^{t},{\sigma}_{\theta}^{2}\right)\mathcal{N}\left(r;\overline{r},{\sigma}_{r}^{2}\right)\mathcal{N}\left(c;\theta -\pi ,{\sigma}_{c}^{2}\right)\mathcal{N}\left(s;\overline{s},{\sigma}_{s}^{2}\right),\end{array}$$

where ${1}_{\mathcal{H}}\left(\theta \right)$ is the indicator function of the bearing region ${\mathcal{H}}_{\theta}$, ${V}_{\mathcal{H}}$ is the volume of ${\mathcal{H}}_{\theta}$, and we use the approximation:

$${1}_{\mathcal{H}}\left(\theta \right)\mathcal{N}\left(\theta ;{\theta}_{k}^{t},{\sigma}_{\theta}^{2}\right)\approx \mathcal{N}\left(\theta ;{\theta}_{k}^{t},{\sigma}_{\theta}^{2}\right),$$

which is reasonable in practice because ${\sigma}_{\theta}$ is small compared to the region ${\mathcal{H}}_{\theta}$. The transformation function $\Psi \left(\mathbf{e};{\phi}_{k}\left(\theta ,r,c,s\right)\right)$ transforms ${\phi}_{k}\left(\theta ,r,c,s\right)$ from polar coordinates to Cartesian coordinates, given by:

$$\Psi \left(\mathbf{e};{\phi}_{k}\left(\theta ,r,c,s\right)\right)\approx \frac{{\omega}_{k}^{b}}{2\pi}\sum _{a=1}^{{A}_{k}}{\lambda}_{k,a}\mathcal{N}\left(\mathbf{e};{\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right),{\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)\right),$$

where the weight ${\lambda}_{k,a}$ is given by Equation (27), the position parts of the mean ${\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right)$ and of the covariance ${\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)$ are given by Equations (24) and (25), respectively, and the velocity parts are calculated by the approximation [38,39]:

$${\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right)=\left[\begin{array}{c}{x}_{k}^{s}+\overline{r}sin\left({\theta}_{k}^{t}\right)\\ {y}_{k}^{s}+\overline{r}cos\left({\theta}_{k}^{t}\right)\\ \overline{s}sin\left({\theta}_{k}^{t}-\pi \right)\\ \overline{s}cos\left({\theta}_{k}^{t}-\pi \right)\end{array}\right],$$

$${\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)=\left[\begin{array}{cccc}{P}_{xx}& {P}_{xy}& 0& 0\\ {P}_{yx}& {P}_{yy}& 0& 0\\ 0& 0& {P}_{\dot{x}\dot{x}}& {P}_{\dot{x}\dot{y}}\\ 0& 0& {P}_{\dot{y}\dot{x}}& {P}_{\dot{y}\dot{y}}\end{array}\right],$$

$${P}_{xx}={\left(\Delta r\right)}^{2}\phantom{\rule{-0.166667em}{0ex}}{sin}^{2}\phantom{\rule{-0.166667em}{0ex}}\left({\theta}_{k}^{t}\phantom{\rule{-0.166667em}{0ex}}\right)+{\overline{r}}^{2}{\sigma}_{\theta}^{2}{cos}^{2}\phantom{\rule{-0.166667em}{0ex}}\left({\theta}_{k}^{t}\right),$$

$${P}_{yy}={\left(\Delta r\right)}^{2}\phantom{\rule{-0.166667em}{0ex}}{cos}^{2}\phantom{\rule{-0.166667em}{0ex}}\left({\theta}_{k}^{t}\phantom{\rule{-0.166667em}{0ex}}\right)+{\overline{r}}^{2}{\sigma}_{\theta}^{2}{sin}^{2}\phantom{\rule{-0.166667em}{0ex}}\left({\theta}_{k}^{t}\right),$$

$${P}_{xy}={P}_{yx}=\frac{1}{2}sin2{\theta}_{k}^{t}\left[{\left(\Delta r\right)}^{2}-{\overline{r}}^{2}{\sigma}_{\theta}^{2}\right],$$

$${P}_{\dot{x}\dot{x}}={\sigma}_{s}^{2}{sin}^{2}\left({\theta}_{k}^{t}-\pi \right)+{\sigma}_{c}^{2}{\overline{s}}^{2}{cos}^{2}\left({\theta}_{k}^{t}-\pi \right),$$

$${P}_{\dot{y}\dot{y}}={\sigma}_{s}^{2}{cos}^{2}\left({\theta}_{k}^{t}-\pi \right)+{\sigma}_{c}^{2}{\overline{s}}^{2}{sin}^{2}\left({\theta}_{k}^{t}-\pi \right),$$

$${P}_{\dot{x}\dot{y}}={P}_{\dot{y}\dot{x}}=\frac{1}{2}sin\left(2\left({\theta}_{k}^{t}-\pi \right)\right)\left[{\sigma}_{s}^{2}-{\sigma}_{c}^{2}{\overline{s}}^{2}\right].$$
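The Cartesian birth component can be assembled directly from these formulas; a minimal sketch, with illustrative prior values:

```python
import numpy as np

theta = np.deg2rad(45.0)       # measurement bearing theta_k^t (illustrative)
sigma_theta = np.deg2rad(1.0)  # bearing std (illustrative)
r_bar, dr = 10000.0, 2000.0    # segment mean range and half-width (illustrative)
s_bar, sigma_s = 5.0, 2.0      # prior speed mean and std (illustrative)
sigma_c = np.deg2rad(10.0)     # prior course std (illustrative)
sensor = np.array([0.0, 0.0])

# Mean: position along the bearing, velocity along the reciprocal course theta - pi
e_tilde = np.array([sensor[0] + r_bar * np.sin(theta),
                    sensor[1] + r_bar * np.cos(theta),
                    s_bar * np.sin(theta - np.pi),
                    s_bar * np.cos(theta - np.pi)])

# Block-diagonal covariance from the position and velocity approximations above
Pxx = dr**2 * np.sin(theta)**2 + r_bar**2 * sigma_theta**2 * np.cos(theta)**2
Pyy = dr**2 * np.cos(theta)**2 + r_bar**2 * sigma_theta**2 * np.sin(theta)**2
Pxy = 0.5 * np.sin(2 * theta) * (dr**2 - r_bar**2 * sigma_theta**2)
Pvx = sigma_s**2 * np.sin(theta - np.pi)**2 + sigma_c**2 * s_bar**2 * np.cos(theta - np.pi)**2
Pvy = sigma_s**2 * np.cos(theta - np.pi)**2 + sigma_c**2 * s_bar**2 * np.sin(theta - np.pi)**2
Pvxy = 0.5 * np.sin(2 * (theta - np.pi)) * (sigma_s**2 - sigma_c**2 * s_bar**2)

J_tilde = np.array([[Pxx,  Pxy,  0.0,  0.0],
                    [Pxy,  Pyy,  0.0,  0.0],
                    [0.0,  0.0,  Pvx,  Pvxy],
                    [0.0,  0.0,  Pvxy, Pvy]])
```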

In the denominator of Equations (33) and (34), we have the following:
$$\begin{array}{c}\int {P}_{D,k}\left(\mathbf{e}\right){g}_{k}\left(\mathbf{w}|\mathbf{e}\right){v}_{k|k-1}\left(\mathbf{e}\right)d\mathbf{e}\hfill \\ =\int \int {P}_{D,k}\left(\mathbf{e},\beta \right){g}_{k}\left(\mathbf{w}|\mathbf{e},\beta \right){v}_{k|k-1}\left(\mathbf{e},\beta \right)d\mathbf{e}d\beta \hfill \\ =\int {\displaystyle \sum _{\beta =0}^{1}}{P}_{D,k}\left(\mathbf{e},\beta \right){g}_{k}\left(\mathbf{w}|\mathbf{e},\beta \right){v}_{k|k-1}\left(\mathbf{e},\beta \right)d\mathbf{e}\hfill \\ =\int {P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\lambda}_{k,a}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\mathbf{e}}_{k},{\mathbf{R}}_{k,a}\right){\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)d\mathbf{e}\hfill \\ +\int \Psi \left(\mathbf{e};{g}_{k}\left(\mathbf{w}|\mathbf{e}\right){\gamma}_{k}\left(\theta ,r,c,s\right)\right)d\mathbf{e}\hfill \\ ={P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{S}}_{k|k-1}^{\left(i\right)}\right)\int \mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k}^{\left(i,a\right)},{\mathbf{J}}_{k|k}^{\left(i,a\right)}\right)d\mathbf{e}\hfill \\ +\int \frac{{\omega}_{k}^{b}}{2\pi}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\lambda}_{k,a}\mathcal{N}\left(\mathbf{e};{\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right),{\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)\right)d\mathbf{e}\hfill \\ ={P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)+\frac{{\omega}_{k}^{b}}{2\pi},\hfill \end{array}$$

where ${q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)=\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{S}}_{k|k-1}^{\left(i\right)}\right)$ is the likelihood of measurement component ${\widehat{\mathbf{w}}}_{k,a}$ against component $i$, with the innovation covariance:

$${\mathbf{S}}_{k|k-1}^{\left(i\right)}=\mathbf{H}{\mathbf{J}}_{k|k-1}^{\left(i\right)}{\mathbf{H}}^{\prime}+{\mathbf{R}}_{k,a},$$

and the updated target states and corresponding covariances are given by:

$${\widehat{\mathbf{e}}}_{k|k}^{\left(i,a\right)}={\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)}+{\mathbf{K}}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}-\mathbf{H}{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)}\right),$$

$${\mathbf{J}}_{k|k}^{\left(i,a\right)}={\mathbf{J}}_{k|k-1}^{\left(i\right)}-{\mathbf{K}}_{k}^{\left(i\right)}\mathbf{H}{\mathbf{J}}_{k|k-1}^{\left(i\right)},$$

with the Kalman gain:

$${\mathbf{K}}_{k}^{\left(i\right)}={\mathbf{J}}_{k|k-1}^{\left(i\right)}{\mathbf{H}}^{\prime}{\left({\mathbf{S}}_{k|k-1}^{\left(i\right)}\right)}^{-1}.$$
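Computationally, the update above is a bank of Kalman updates, one per pairing of a predicted component $i$ with a measurement component $a$ (whose noise covariance is ${\mathbf{R}}_{k,a}$). A minimal NumPy sketch, with illustrative function and variable names not taken from the paper:

```python
import numpy as np

def update_component(e_pred, J_pred, H, R, w_hat):
    """Kalman update of one predicted Gaussian component against one
    measurement component w_hat with noise covariance R."""
    S = H @ J_pred @ H.T + R                   # innovation covariance
    K = J_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    nu = w_hat - H @ e_pred                    # innovation
    e_upd = e_pred + K @ nu                    # updated state
    J_upd = J_pred - K @ H @ J_pred            # updated covariance
    # component likelihood q = N(w_hat; H e_pred, S)
    q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / \
        np.sqrt((2 * np.pi) ** len(w_hat) * np.linalg.det(S))
    return e_upd, J_upd, q
```

The returned likelihood `q` is the quantity ${q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)$ that enters the weight update.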

The following approximation for the posterior intensity can be obtained:

$$\begin{array}{c}{v}_{k|k}\left(\mathbf{e},\beta =0\right)\hfill \\ =\left[1-{P}_{D,k}\right]{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)\hfill \\ +{\displaystyle \sum _{\mathbf{w}\in {\mathbf{W}}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\displaystyle \sum _{a=1}^{{A}_{k}}}\frac{{P}_{D,k}{C}_{k}{\lambda}_{k,a}\mathcal{N}\left({\widehat{\mathbf{w}}}_{k,a};\mathbf{H}{\mathbf{e}}_{k},{\mathbf{R}}_{k,a}\right){\omega}_{k|k-1}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+{P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)+\frac{{\omega}_{k}^{b}}{2\pi}}\hfill \\ ={\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\omega}_{m,k}^{\left(i\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k-1}^{\left(i\right)},{\mathbf{J}}_{k|k-1}^{\left(i\right)}\right)+{\displaystyle \sum _{\mathbf{w}\in {\mathbf{W}}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\omega}_{s,k}^{\left(i,a\right)}\mathcal{N}\left(\mathbf{e};{\widehat{\mathbf{e}}}_{k|k}^{\left(i,a\right)},{\mathbf{J}}_{k|k}^{\left(i,a\right)}\right),\hfill \end{array}$$

and:

$$\begin{array}{c}{v}_{k|k}\left(\mathbf{e},\beta =1\right)\hfill \\ ={\displaystyle \sum _{\mathbf{w}\in {\mathbf{W}}_{k}}}{\displaystyle \sum _{a=1}^{{A}_{k}}}\frac{\frac{{\omega}_{k}^{b}}{2\pi}{\lambda}_{k,a}\mathcal{N}\left(\mathbf{e};{\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right),{\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+{P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)+\frac{{\omega}_{k}^{b}}{2\pi}}\hfill \\ ={\displaystyle \sum _{\mathbf{w}\in {\mathbf{W}}_{k}}}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\omega}_{b,k}^{\left(a\right)}\mathcal{N}\left(\mathbf{e};{\tilde{\mathbf{e}}}_{k}\left({\theta}_{k}^{t}\right),{\tilde{\mathbf{J}}}_{k}\left({\theta}_{k}^{t}\right)\right),\hfill \end{array}$$

where:

$${\omega}_{m,k}^{\left(i\right)}=\left[1-{P}_{D,k}\right]{\omega}_{k|k-1}^{\left(i\right)},$$

$${\omega}_{s,k}^{\left(i,a\right)}=\frac{{P}_{D,k}{C}_{k}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)}{{\kappa}_{k}\left(\mathbf{w}\right)+{P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)+\frac{{\omega}_{k}^{b}}{2\pi}},$$

$${\omega}_{b,k}^{\left(a\right)}=\frac{{\lambda}_{k,a}{\omega}_{k}^{b}/2\pi}{{\kappa}_{k}\left(\mathbf{w}\right)+{P}_{D,k}{C}_{k}{\displaystyle \sum _{a=1}^{{A}_{k}}}{\displaystyle \sum _{i=1}^{{J}_{k-1}}}{\lambda}_{k,a}{\omega}_{k|k-1}^{\left(i\right)}{q}_{k}^{\left(i\right)}\left({\widehat{\mathbf{w}}}_{k,a}\right)+\frac{{\omega}_{k}^{b}}{2\pi}}.$$

As in the GM-PHD filter, the number of Gaussian components of the updated intensity grows exponentially over time. To reduce the computational load, the same component merging and pruning techniques as in [27,31] are implemented in the GMM-PHD filter. If the weight of a Gaussian component falls below a preset threshold, the component is discarded. When components are sufficiently close, they are merged into a single Gaussian component. If the total number of Gaussian components exceeds the maximum value ${M}_{k}$, only the ${M}_{k}$ components with the largest weights are retained. More details about component management can be found in [31]. State extraction also follows the method in [31]: the states of components whose weights exceed a threshold are extracted as the outputs of the filter.
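The component-management step can be sketched as follows. This is a generic GM-PHD pruning/merging routine, not the authors' code; the default thresholds mirror the values used later in the simulations (pruning threshold ${10}^{-5}$, merging threshold four, maximum ${M}_{k}=100$ components):

```python
import numpy as np

def prune_and_merge(weights, means, covs, prune_th=1e-5, merge_th=4.0, max_comp=100):
    """Prune low-weight components, merge components whose Mahalanobis
    distance to the strongest remaining component is below merge_th,
    and cap the mixture size at max_comp."""
    idx = [i for i, w in enumerate(weights) if w > prune_th]      # pruning
    out_w, out_m, out_P = [], [], []
    while idx:
        j = max(idx, key=lambda i: weights[i])                    # strongest component
        close = [i for i in idx
                 if (means[i] - means[j]) @ np.linalg.solve(covs[i], means[i] - means[j])
                 <= merge_th]
        w = sum(weights[i] for i in close)                        # moment-matched merge
        m = sum(weights[i] * means[i] for i in close) / w
        P = sum(weights[i] * (covs[i] + np.outer(m - means[i], m - means[i]))
                for i in close) / w
        out_w.append(w); out_m.append(m); out_P.append(P)
        idx = [i for i in idx if i not in close]
    order = np.argsort(out_w)[::-1][:max_comp]                    # keep heaviest
    return ([out_w[i] for i in order],
            [out_m[i] for i in order],
            [out_P[i] for i in order])
```

Merging is moment-matched, so the total weight of the mixture is preserved up to the pruned mass.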

To illustrate the improvements of the proposed GMM-PHD filter, we compare it with the GM-PHD-EKF and the GM-PHD-UKF, which also use a partially-uniform birth model. The GM-PHD-EKF and GM-PHD-UKF were introduced in [8] for a challenging bearings-only multi-target scenario.

The sensor is located on a maneuvering platform with initial position $\left[-4200\phantom{\rule{4pt}{0ex}}\mathrm{m},3500\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$. The platform moves at a speed of five knots and changes course twice to ensure the observability of the bearings-only tracking problem. Note that the sensor would have to change course again to track targets that appear after its second maneuver. The initial sensor course is ${220}^{\circ}$ and changes to ${60}^{\circ}$ between 14 min and 17 min. The second course change occurs between 31 min and 34 min, from ${60}^{\circ}$ back to ${220}^{\circ}$. Clockwise rotation from the positive Y-axis is taken as positive. The scenario contains a time-varying number of targets, whose motion profiles are given in Table 1. The starting positions of Targets $\#1\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\#2$ are $\left[-8000\phantom{\rule{4pt}{0ex}}\mathrm{m},-2500\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$ and $\left[-3000\phantom{\rule{4pt}{0ex}}\mathrm{m},-6500\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$, respectively. Similarly, the initial positions of Targets $\#3\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\#4$ are $\left[4100\phantom{\rule{4pt}{0ex}}\mathrm{m},-6100\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$ and $\left[4200\phantom{\rule{4pt}{0ex}}\mathrm{m},-2200\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$, respectively. Finally, Target $\#5$ starts at $\left[6300\phantom{\rule{4pt}{0ex}}\mathrm{m},4000\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$. The trajectories of the sensor and the targets, without process noise and clutter, are illustrated in Figure 2.
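With the course convention above (clockwise from the positive Y-axis), a platform at speed $s$ and course $\theta$ has velocity $s\left(\sin\theta ,\cos\theta \right)$. A small illustrative helper (names are ours, not from the paper) for generating one constant-velocity leg of such a trajectory:

```python
import numpy as np

KNOT = 0.514444  # meters per second per knot

def leg(pos, course_deg, speed_knots, dt, n_steps):
    """Positions along one constant-velocity leg; the course is measured
    clockwise from the positive Y-axis, as in the scenario description."""
    th = np.deg2rad(course_deg)
    v = speed_knots * KNOT * np.array([np.sin(th), np.cos(th)])
    return [np.asarray(pos, dtype=float) + v * dt * k for k in range(1, n_steps + 1)]
```

Chaining three such legs with courses 220°, 60° and 220° reproduces the sensor trajectory described above.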

In the simulation, the sensor sampling time is 10 s, and the total simulation time is 3000 s. The sensor generates bearing measurements with standard deviation ${\sigma}_{\theta}={1}^{\circ}$, and the survival probability is the same for all targets, ${P}_{S,k}=0.98$. The number of clutter measurements follows a Poisson distribution with a mean of 15, and the clutter is distributed uniformly over the whole bearing space $\left[0,\phantom{\rule{4pt}{0ex}}2\pi \right]$. The detection probability in Case 1 is $0.95$. The process noise PSD is $q=2.5\times {10}^{-5}\phantom{\rule{4pt}{0ex}}{\mathrm{m}}^{2}/{\mathrm{s}}^{4}$.
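A single scan under this measurement model can be simulated as follows. This is a sketch under the Case 1 parameters (detection probability 0.95, ${\sigma}_{\theta}={1}^{\circ}$, Poisson clutter with mean 15 uniform on $\left[0,2\pi \right]$); the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def scan(sensor_pos, target_pos_list, p_d=0.95, sigma_theta_deg=1.0, clutter_mean=15):
    """One bearings-only scan: noisy bearings (clockwise from the positive
    Y-axis) of detected targets plus Poisson-distributed uniform clutter."""
    meas = []
    for p in target_pos_list:
        if rng.random() < p_d:                               # missed detections
            d = np.asarray(p) - np.asarray(sensor_pos)
            theta = np.arctan2(d[0], d[1]) % (2 * np.pi)     # bearing from +Y, clockwise
            meas.append(theta + np.deg2rad(sigma_theta_deg) * rng.standard_normal())
    meas += list(rng.uniform(0.0, 2 * np.pi, rng.poisson(clutter_mean)))
    return meas
```

Each scan therefore returns an unordered, unlabeled mix of target-originated bearings and clutter, which is exactly the measurement-origin uncertainty the PHD filter must handle.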

In the target birth model, the prior parameters are the same for all filters. The prior target course is ${\theta}_{k}^{t}-\pi $ with standard deviation ${\sigma}_{c}={50}^{\circ}$. The prior target speed is $\overline{s}=10$ knots with standard deviation ${\sigma}_{s}=4$ knots. The prior target range for the GM-PHD-EKF and the GM-PHD-UKF is $\overline{r}=12,000$ m with standard deviation ${\sigma}_{r}=4000$ m, whereas the GMM-PHD uses the range interval $\left[300\phantom{\rule{4pt}{0ex}}\mathrm{m},\phantom{\rule{4pt}{0ex}}18,000\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$. The number of measurement components in each scan is ${A}_{k}=8$. The birth intensity is set to ${\omega}_{k}^{b}=0.05$ for all algorithms. The thresholds for component merging and pruning are four and ${10}^{-5}$, respectively, and the maximum number of Gaussian components is ${M}_{k}=100$ for all filters.
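To make the birth prior concrete, the sketch below builds one birth component in $\left(\theta ,r,c,s\right)$ coordinates from a measured bearing. The helper and its moment-matching of the uniform range prior (summarizing it by its mean and matched variance) are our illustrative simplifications, not the paper's construction; the bearing standard deviation reuses ${\sigma}_{\theta}={1}^{\circ}$ as an assumption:

```python
import numpy as np

# Birth prior values from the simulation setup.
BIRTH = dict(
    course_sigma_deg=50.0,               # prior course: measured bearing minus pi
    speed_mean_knots=10.0, speed_sigma_knots=4.0,
    range_interval_m=(300.0, 18000.0),   # GMM-PHD: uniform prior in range
    weight=0.05,                         # birth intensity omega_k^b
)

def birth_component(theta_meas):
    """Hypothetical helper: mean and standard deviations of one birth
    component in (bearing, range, course, speed) coordinates."""
    r_lo, r_hi = BIRTH["range_interval_m"]
    mean = np.array([theta_meas,
                     0.5 * (r_lo + r_hi),                  # uniform-prior mean
                     (theta_meas - np.pi) % (2 * np.pi),   # incoming-course prior
                     BIRTH["speed_mean_knots"]])
    std = np.array([np.deg2rad(1.0),                       # assumed bearing sigma
                    (r_hi - r_lo) / np.sqrt(12.0),         # uniform-prior std
                    np.deg2rad(BIRTH["course_sigma_deg"]),
                    BIRTH["speed_sigma_knots"]])
    return mean, std
```

In the actual GMM-PHD, the uniform range interval is instead handled by the Gaussian mixture measurement model rather than collapsed to a single Gaussian.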

In Case 2, a more challenging scenario with higher clutter intensity and lower detection probability is considered. Compared with Case 1, the mean number of clutter measurements is increased to 30, and the detection probability is decreased to 0.85. The other parameters are the same as in Case 1. An example of the measurements of Case 2 is shown in Figure 3.

In Case 3, some filter parameters are changed to examine whether the proposed GMM-PHD still outperforms the GM-PHD-EKF and the GM-PHD-UKF. The survival probability ${P}_{S,k}$ is changed to 0.95, the birth intensity ${\omega}_{k}^{b}$ is set to 0.01 in all filters, and the maximum number of Gaussian components ${M}_{k}$ is increased to 150 in each filter. The other parameters are the same as in Case 1.

In Case 4, two maneuvering targets, $\#6\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\#7$, are added to Case 1. Their starting positions are $\left[1000\phantom{\rule{4pt}{0ex}}\mathrm{m},-8000\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$ and $\left[3000\phantom{\rule{4pt}{0ex}}\mathrm{m},3000\phantom{\rule{4pt}{0ex}}\mathrm{m}\right]$, respectively. The course of Target $\#6$ changes from ${350}^{\circ}$ to ${270}^{\circ}$ at 25 min. The initial course of Target $\#7$ is ${180}^{\circ}$ and changes to ${240}^{\circ}$ at 28 min. The survival times of Targets $\#6\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\#7$ are $\left[200\phantom{\rule{4pt}{0ex}}\mathrm{s},\phantom{\rule{4pt}{0ex}}2800\phantom{\rule{4pt}{0ex}}\mathrm{s}\right]$ and $\left[400\phantom{\rule{4pt}{0ex}}\mathrm{s},\phantom{\rule{4pt}{0ex}}3000\phantom{\rule{4pt}{0ex}}\mathrm{s}\right]$, respectively, and both targets move at eight knots. The other parameters are the same as in Case 1. The trajectories of the sensor and the targets, without process noise and clutter, are illustrated in Figure 4.

The performance of all filters is compared using the average OSPA metric (order two, cutoff 4000 m) over 500 Monte Carlo runs. The simulation results of Case 1, Case 2, Case 3 and Case 4 are presented in Figure 5, Figure 6, Figure 7 and Figure 8, respectively. The OSPA distance consists of two components: localization and cardinality. Figure 5a, Figure 6a, Figure 7a and Figure 8a show the OSPA distances of the three filters. The OSPA localization components are presented in Figure 5b, Figure 6b, Figure 7b and Figure 8b, and the OSPA cardinality components in Figure 5c, Figure 6c, Figure 7c and Figure 8c. The cardinality statistics are shown in Figure 5d, Figure 6d, Figure 7d and Figure 8d. The execution times of the filters for Case 2 are compared in Figure 9.
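The OSPA metric of [33] used for evaluation can be computed as follows. This is a straightforward implementation of the metric with the paper's parameters ($c=4000$ m, $p=2$); the brute-force assignment suffices for the small target counts considered here:

```python
from itertools import permutations
import numpy as np

def ospa(X, Y, c=4000.0, p=2):
    """OSPA distance of order p with cutoff c between two point sets,
    returned as (total, localization part, cardinality part)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0, 0.0, 0.0
    if m > n:                                   # ensure m <= n
        X, Y, m, n = Y, X, n, m
    D = np.array([[min(np.linalg.norm(np.asarray(x) - np.asarray(y)), c) ** p
                   for y in Y] for x in X])
    # optimal sub-pattern assignment of the smaller set into the larger one
    loc = min(sum(D[i, perm[i]] for i in range(m))
              for perm in permutations(range(n), m))
    card = (n - m) * c ** p                     # penalty for unmatched points
    total = ((loc + card) / n) ** (1.0 / p)
    return total, (loc / n) ** (1.0 / p), (card / n) ** (1.0 / p)
```

A cardinality error of one target thus contributes $c/\sqrt{2}\approx 2828$ m when the larger set has two elements, which is why cardinality mistakes dominate the OSPA curves early in each run.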

In Figure 5a, Figure 6a, Figure 7a and Figure 8a, the OSPA distance of the GM-PHD-UKF is slightly lower than that of the GM-PHD-EKF, while the GMM-PHD performs significantly better than both. Before 1000 s, the target states are unobservable, and all filters show a large OSPA distance error. After the sensor maneuver, the target states become observable, and the OSPA distance decreases sharply. From 1500 s to 2500 s, the OSPA distances of all filters increase slightly because the measurements overlap, which makes it hard for the filters to estimate the target states.

In Figure 5b, Figure 6b, Figure 7b and Figure 8b, the OSPA localization error of the GMM-PHD is much lower than those of the GM-PHD-EKF and the GM-PHD-UKF once the target states become observable. Especially in Case 4, which contains two maneuvering targets, the OSPA localization errors of the GM-PHD-EKF and the GM-PHD-UKF increase significantly after the target maneuvers, as shown in Figure 8b. The proposed GMM-PHD, however, maintains high tracking accuracy because its likelihood function is modeled by Gaussian mixtures, which respond quickly to target course changes. The GM-PHD-UKF also outperforms the GM-PHD-EKF on the OSPA localization metric in all cases.

In Case 1, Case 3 and Case 4, the three filters have almost the same OSPA cardinality error, as shown in Figure 5c, Figure 7c and Figure 8c, and all three produce accurate cardinality estimates in Figure 5d, Figure 7d and Figure 8d. For Case 2, the GMM-PHD improves on the OSPA cardinality criterion compared to the GM-PHD-EKF and the GM-PHD-UKF in Figure 6c. Furthermore, the cardinality statistic of the GMM-PHD is more accurate than those of the other two filters in Figure 6d. As Figure 5d, Figure 6d, Figure 7d and Figure 8d show, all three filters suffer a delay in detecting a new target, because the targets are not observable before the sensor maneuver.

According to the discussion above, for Case 1, Case 3 and Case 4 the improvement of the GMM-PHD mainly lies in the tracking accuracy of the target states rather than in the cardinality estimation. When the simulation conditions worsen (higher clutter intensity and lower detection probability) in Case 2, the GMM-PHD improves both the tracking accuracy and the cardinality estimate. Because the measurement likelihood is approximated by a Gaussian mixture, the number of track components in the GMM-PHD filter is larger than in the GM-PHD-EKF and the GM-PHD-UKF, and its execution time is slightly longer, as shown in Figure 9. However, all three filters run much faster than real time. Note that the computational load of each filter grows as the number of targets increases, since the maximum number of components ${M}_{k}$ must be set larger to maintain good performance.

For bearings-only multi-target tracking with clutter and missed detections, the PHD filter is popular and known to be effective. As there is no closed-form solution to the original PHD recursion, the Gaussian mixture assumption is applied, yielding the GM-PHD filter. However, the likelihood function of the GM-PHD filter is always modeled by a single Gaussian distribution. In this paper, the GMM-PHD filter was proposed, in which both the likelihood function and the posterior intensity are modeled by Gaussian mixtures. In this way, the likelihood function of the GMM-PHD can be approximated more accurately than those of the GM-PHD-EKF and the GM-PHD-UKF. The simulation results show that the proposed algorithm significantly improves target state estimation in challenging bearings-only multi-target tracking scenarios, at the expense of additional computational resources. Moreover, in the standard form of the GM-PHD filter, targets can appear only at several fixed points; the GMM-PHD was instead derived with a partially uniform target birth model, which reduces the number of parameters that must be chosen by the end user.

This work was supported by the Defense Acquisition Administration and Agency for Defense Development, Republic of Korea (Grant UD140081CD).

Qian Zhang and Taek Lyul Song provided insights into formulating the ideas, performed the simulations and analyzed the simulation results. Qian Zhang wrote the paper.

The authors declare no conflict of interest.

- Bar-Shalom, Y.; Willett, P.K.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA, 2011.
- Blackman, S.; Popoli, R. Design and Analysis of Modern Tracking Systems; Artech House: Norwood, MA, USA, 1999.
- Challa, S.; Morelande, M.; Mušicki, D. Fundamentals of Object Tracking; Cambridge University Press: Cambridge, UK, 2011.
- Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter; Artech House: Norwood, MA, USA, 2004.
- Nardone, S.C.; Aidala, V.J. Observability criteria for bearings-only target motion analysis. IEEE Trans. Aerosp. Electron. Syst. **1981**, 17, 162–166.
- Tichavsky, P.; Muravchik, C.H.; Nehorai, A. Posterior Cramer-Rao bounds for discrete-time nonlinear filtering. IEEE Trans. Signal Process. **1998**, 46, 1386–1396.
- Arulampalam, S.; Clark, M.; Vinter, R. Performance of the Shifted Rayleigh filter in single-sensor bearings-only tracking. In Proceedings of the 10th International Conference on Information Fusion, Quebec, QC, Canada, 9–12 July 2007; pp. 1–6.
- Beard, M.; Vo, B.T.; Vo, B.N.; Arulampalam, S. A partially uniform target birth model for Gaussian mixture PHD/CPHD filtering. IEEE Trans. Aerosp. Electron. Syst. **2013**, 49, 2835–2844.
- Nardone, S.C.; Lindgren, A.G.; Gong, K.F. Fundamental properties and performance of conventional bearings-only target motion analysis. IEEE Trans. Autom. Control **1984**, 29, 775–787.
- Kamen, E.W. Multiple target tracking based on symmetric measurement equations. IEEE Trans. Autom. Control **1992**, 37, 371–374.
- Mahler, R. Statistical Multisource-Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2007.
- Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; Wiley: New York, NY, USA, 2001.
- Wan, E.A.; van der Merwe, R. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE Adaptive Systems for Signal Processing, Communications, and Control Symposium (AS-SPCC), Lake Louise, AB, Canada, 4 October 2000.
- Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. **2002**, 50, 174–188.
- Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control **2009**, 54, 1254–1269.
- Clark, J.M.C.; Vinter, R.B.; Yaqoob, M.M. Shifted Rayleigh filter: A new algorithm for bearings-only tracking. IEEE Trans. Aerosp. Electron. Syst. **2007**, 43, 1373–1384.
- Evensen, G. The Ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn. **2003**, 53, 343–367.
- Bar-Shalom, Y.; Li, X.R. Multitarget-Multisensor Tracking: Principles and Techniques; University of Connecticut: Storrs, CT, USA, 1995.
- Reid, D.B. An algorithm for tracking multiple targets. IEEE Trans. Autom. Control **1979**, 24, 843–854.
- Li, X.; Li, Y.; Yu, J.; Chen, X.; Dai, M. PMHT approach for multi-target multi-sensor sonar tracking in clutter. Sensors **2015**, 15, 28177–28192.
- Mušicki, D.; Evans, R.J. Joint integrated probabilistic data association: JIPDA. IEEE Trans. Aerosp. Electron. Syst. **2004**, 40, 1093–1099.
- Mušicki, D.; Evans, R.J. Multiscan multitarget tracking in clutter with integrated track splitting filter. IEEE Trans. Aerosp. Electron. Syst. **2009**, 45, 1432–1447.
- Mušicki, D.; la Scala, B. Multi-target tracking in clutter without measurement assignment. IEEE Trans. Aerosp. Electron. Syst. **2008**, 44, 877–896.
- Song, T.L.; Kim, H.W.; Mušicki, D. Iterative joint integrated probabilistic data association for multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. **2015**, 51, 642–653.
- Kurien, T. Multitarget Multisensor Tracking; Artech House: Boston, MA, USA, 1990; Volume 1.
- Mahler, R. Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. **2003**, 39, 1152–1178.
- Vo, B.T.; Vo, B.N.; Cantoni, A. Analytic implementations of the cardinalized probability hypothesis density filter. IEEE Trans. Signal Process. **2007**, 55, 3553–3567.
- Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. **2009**, 57, 409–423.
- Vo, B.N.; Singh, S.; Doucet, A. Sequential Monte Carlo methods for multitarget filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. **2005**, 41, 1224–1245.
- Liu, Z.; Wang, Z.; Xu, M. Cubature information SMC-PHD for multi-target tracking. Sensors **2016**, 16, 653.
- Vo, B.N.; Ma, W.K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. **2006**, 54, 4091–4104.
- Beard, M.; Vo, B.T.; Vo, B.N.; Arulampalam, S. Gaussian mixture PHD and CPHD filtering with partially uniform target birth. In Proceedings of the 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 535–541.
- Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. **2008**, 56, 3447–3457.
- Ristic, B.; Clark, D.; Vo, B.N.; Vo, B.T. Adaptive target birth intensity for PHD and CPHD filters. IEEE Trans. Aerosp. Electron. Syst. **2012**, 48, 1656–1668.
- Kronhamn, T.R. Bearings-only target motion analysis based on a multihypothesis Kalman filter and adaptive ownship motion control. IEE Proc. Radar Sonar Navig. **1998**, 145, 247–252.
- Mušicki, D. Bearings only single-sensor target tracking using Gaussian mixtures. Automatica **2009**, 45, 2088–2092.
- Mušicki, D.; Song, T.L.; Kim, W.C.; Nešič, D. Non-linear automatic target tracking in clutter using dynamic Gaussian mixture. IET Radar Sonar Navig. **2012**, 6, 937–944.
- Lerro, D.; Bar-Shalom, Y. Tracking with debiased consistent converted measurements versus EKF. IEEE Trans. Aerosp. Electron. Syst. **1993**, 29, 1015–1022.
- Beard, M.; Arulampalam, S. Performance of PHD and CPHD filtering versus JIPDA for bearings-only multi-target tracking. In Proceedings of the 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 542–549.

Table 1. Motion profile of the targets.

Target | Survival Time (s) | Course (degree) | Speed (knots) |
---|---|---|---|
#1 | [0, 2400] | 95 | 8 |
#2 | [300, 3000] | 20 | 7 |
#3 | [500, 3000] | 280 | 8 |
#4 | [0, 3000] | 275 | 7 |
#5 | [0, 2700] | 215 | 10 |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).