Open Access

*Sensors* **2019**, *19*(19), 4226; https://doi.org/10.3390/s19194226

Article

A Computationally Efficient Labeled Multi-Bernoulli Smoother for Multi-Target Tracking †

^{1} National Key Laboratory of Science and Technology on ATR, College of Electronic Science, National University of Defense Technology, Changsha 410073, China

^{2} Key Laboratory of Information Fusion Technology (Ministry of Education), School of Automation, Northwestern Polytechnical University, Xi’an 710072, China

* Correspondence: [email protected] (T.L.); [email protected] (H.X.)

^{†} This paper is an extended and modified version of our conference paper “A Forward-Backward Labeled Multi-Bernoulli Smoother” published in Proceedings of the 16th International Conference on Distributed Computing and Artificial Intelligence, Avila, Spain, 26–28 June 2019.

Received: 8 September 2019 / Accepted: 26 September 2019 / Published: 28 September 2019

## Abstract

A forward–backward labeled multi-Bernoulli (LMB) smoother is proposed for multi-target tracking. The proposed smoother consists of two components: forward LMB filtering and backward LMB smoothing. The former is the standard LMB filter, and the latter is proven to be closed under the LMB prior. It is also shown that the proposed LMB smoother improves both the cardinality estimation and the state estimation, while its major computational complexity is linear in the number of targets. An implementation based on the Sequential Monte Carlo method in a representative scenario demonstrates the effectiveness and computational efficiency of the proposed smoother in comparison to existing approaches.

Keywords: random finite set; Bayes smoother; labeled multi-Bernoulli; multi-target tracking; Sequential Monte Carlo

## 1. Introduction

Multi-target tracking (MTT) has been widely used in many engineering fields, including aerospace surveillance, biomedical analytics, autonomous driving, indoor localization and robotic networks [1,2,3,4,5]. In these applications, both the number of targets and their states may vary over time, and the measurements are obscured by clutter and missed detections [2,3]. Traditionally, approaches to MTT are built on appropriate data association methods, typically joint probabilistic data association [1] or multiple hypothesis tracking [6,7]. A novel approach developed by Mahler based on random finite set (RFS) theory [2] has attracted substantial attention in the last decade. Simply speaking, RFS methods explicitly model the target states and the measurements as RFSs, and have gained tremendous interest in recent years [8]. A variety of RFS filters have been proposed, including the probability hypothesis density (PHD) filter [9,10], the cardinalized PHD (CPHD) filter [11], the cardinality balanced multi-Bernoulli (CBMeMBer) filter [12], the generalized labeled multi-Bernoulli (GLMB) filter [13,14] and the labeled multi-Bernoulli (LMB) filter [15]. Whereas filtering estimates the current state, smoothing [16,17,18,19] estimates a past target state using all measurements up to the current time; it achieves better accuracy than filtering but suffers from a higher computational cost. Therefore, it is of great practical significance to develop an RFS smoother that is computationally efficient, reliable and accurate.

The Bernoulli smoother was first proposed in [20,21]; it performs better than the Bernoulli filter, but handles at most one target. For the multi-target problem, forward–backward PHD smoothers were proposed in [17,18,22,23,24]. It has been shown in [17] that the PHD smoother improves the accuracy of position estimation compared with the PHD filter, but does not necessarily yield better cardinality estimation. The CPHD smoother proposed in [25] uses an approximate scheme to overcome the intractability of the classic CPHD smoother [26], but bears a complicated algorithm structure. A forward–backward multi-target multi-Bernoulli (MeMBer) smoother proposed in [27] likewise does not necessarily improve the cardinality estimation. Moreover, these smoothers, as well as the corresponding filters, do not provide information about individual tracks. Therefore, labeled RFS-based filters and smoothers have been developed [28,29,30,31,32] for generating track estimates, which is also the focus of this paper.

The GLMB smoother proposed in [32] is the first exact closed-form solution to the smoothing recursion based on labeled RFSs, but it has not been implemented in practice due to its overcomplicated data associations. Recently, Chen pointed out the challenge of forming tracks, since the optimal solution is not as simple as labeling [33]. Relatedly, a multi-scan GLMB filter based on the smoothing-while-filtering framework was proposed in [34] for better labeling. However, the truncation of the multi-scan GLMB filter requires solving an NP-hard multi-dimensional assignment problem. In short, the existing labeled smoothers have a high computational complexity even when practically implementable. Also related to the smoothing-while-filtering framework [34] is joint smoothing and filtering [19,35,36], which, however, has so far only considered a single target.

In this paper, we derive a forward–backward LMB smoother for multi-target tracking. Preliminary and limited results were published in [37]; this paper provides additional results, complete proofs, and additional experiments. The proposed smoother consists of two parts: forward LMB filtering and backward LMB smoothing. While the former applies the standard LMB filter [15], the key contribution of our work lies in the design of the backward smoothing algorithm. We prove that the proposed backward LMB smoothing is closed under the LMB prior for the standard multi-target system models, and that the backward smoothed density of each track is similar to the Bernoulli backward smoothed density [20]. In contrast to parametric/Gaussian (mixture) approximations [38], the Sequential Monte Carlo (SMC) method is a powerful tool for representing arbitrary/non-Gaussian models [4]. Based on the SMC method, the proposed smoother reduces both the state error and the cardinality error compared with the PHD smoother [17], the MeMBer smoother [27] and the LMB filter [15], and has a lower computational complexity than the PHD smoother and the MeMBer smoother.

The rest of the paper is organized as follows. Basic definitions of labeled RFSs and the multi-target Bayes forward–backward smoother are briefly reviewed in Section 2. The proposed forward–backward LMB smoother is derived in Section 3, and its SMC implementation is presented in Section 4. Simulation results are presented in Section 5 before we conclude in Section 6.

## 2. Background

#### 2.1. Notation

Single-target states are denoted by lowercase letters, for example, x and $\mathbf{x}$. Multi-target states are denoted by uppercase letters, for example, X and $\mathbf{X}$. x and X are unlabeled state representations; $\mathbf{x}$ and $\mathbf{X}$ are labeled state representations. State spaces are denoted by blackboard bold letters; for example, a label space containing a countable number of distinct labels is denoted as $\mathbb{L}$, and an unlabeled target state space is denoted as $\mathbb{X}$. A labeled single-target state has the form $\mathbf{x}=\left(x,\ell \right)$, where $x\in \mathbb{X}$ and $\ell \in \mathbb{L}$. A labeled multi-target state has the form $\mathbf{X}=\left\{{\mathbf{x}}_{1},...,{\mathbf{x}}_{i},...,{\mathbf{x}}_{\left|\mathbf{X}\right|}\right\}$, where ${\mathbf{x}}_{i}$ denotes a labeled single-target state in $\mathbf{X}$, $\left|\cdot \right|$ denotes the cardinality (the number of targets) of a multi-target set, and $\mathbf{X}\subseteq \mathbb{X}\times \mathbb{L}$.

The projection $L:\mathbb{X}\times \mathbb{L}\to \mathbb{L}$ is defined by $L\left(\mathbf{x}\right)=\ell $ and $L\left(\mathbf{X}\right)=\left\{L\left(\mathbf{x}\right):\mathbf{x}\in \mathbf{X}\right\}$. The generalized Kronecker delta function and the inclusion function are defined as:

$${\delta}_{\mathbf{Y}}\left(\mathbf{X}\right)\triangleq \left\{\begin{array}{cc}1,\hfill & \mathbf{X}=\mathbf{Y};\hfill \\ 0,\hfill & \mathrm{otherwise}.\hfill \end{array}\right.$$

$${1}_{\mathbf{Y}}\left(\mathbf{X}\right)\triangleq \left\{\begin{array}{cc}1,\hfill & \mathbf{X}\subseteq \mathbf{Y};\hfill \\ 0,\hfill & \mathrm{otherwise}.\hfill \end{array}\right.$$

Define $\Delta \left(\mathbf{X}\right)\triangleq {\delta}_{\left|\mathbf{X}\right|}\left(\left|L\left(\mathbf{X}\right)\right|\right)$. The inner product of $f\left(\mathbf{x}\right)$ and $g\left(\mathbf{x}\right)$ on a labeled single-target state space is defined as

$$\langle f,g\rangle =\int f\left(\mathbf{x}\right)g\left(\mathbf{x}\right)d\mathbf{x}=\int f\left(x,\ell \right)g\left(x,\ell \right)dx$$

The multi-target exponential form of $\mathbf{X}$ is defined as

$${h}^{\mathbf{X}}\triangleq \left\{\begin{array}{cc}1,\hfill & \mathbf{X}=\emptyset ;\hfill \\ {\prod}_{\mathbf{x}\in \mathbf{X}}h\left(\mathbf{x}\right),\hfill & \mathbf{X}\ne \emptyset .\hfill \end{array}\right.$$

#### 2.2. GLMB and LMB RFS

The GLMB RFS $\mathbf{X}$ is a labeled RFS constructed from labeled multi-target states. The distribution of a GLMB RFS [13] is exactly closed under the prediction and update of the multi-target Bayes filter. The GLMB distribution is denoted as

$$\pi \left(\mathbf{X}\right)=\Delta \left(\mathbf{X}\right)\sum _{c\in \mathbb{C}}{\omega}^{\left(c\right)}\left(L\left(\mathbf{X}\right)\right){\left[{p}^{\left(c\right)}\right]}^{\mathbf{X}}$$

where $\mathbb{C}$ is a discrete index set, ${p}^{\left(c\right)}\left(\cdot ,\ell \right)$ is the density of the track ℓ, ${\omega}^{\left(c\right)}\left(\mathrm{I}\right)$ is the nonnegative weight of the hypothesis $\left(c,\mathrm{I}\right)$, and $\sum _{\mathrm{I}\subseteq \mathbb{L}}\sum _{c\in \mathbb{C}}{\omega}^{\left(c\right)}\left(\mathrm{I}\right)=1$, $\int {p}^{\left(c\right)}\left(x,\ell \right)dx=1$.

The distribution of an LMB RFS with the parameter set $\pi ={\left\{({r}^{\ell},{p}^{\ell})\right\}}_{\ell \in \mathbb{L}}$ is given by [15]

$$\pi \left(\mathbf{X}\right)=\Delta \left(\mathbf{X}\right)\omega \left(L\left(\mathbf{X}\right)\right){p}^{\mathbf{X}}$$

where

$$\omega \left(\mathrm{I}\right)=\prod _{\ell \in \mathbb{L}}\left(1-{r}^{\ell}\right)\prod _{\ell \in \mathrm{I}}\frac{{1}_{\mathbb{L}}\left(\ell \right){r}^{\ell}}{1-{r}^{\ell}}$$

$${p}^{\ell}\triangleq p(x,\ell )$$

Here ${r}^{\ell}$ denotes the existence probability of the track ℓ, $p\left(\cdot ,\ell \right)$ denotes its density, and $\omega \left(\mathrm{I}\right)$ denotes the weight of the hypothesis $\mathrm{I}=\left\{{\ell}_{1},...,{\ell}_{\left|\mathrm{I}\right|}\right\}$. Note that $\mathbb{L}$ is a discrete countable space, and the number of labels in $\mathbb{L}$ equals the number of Bernoulli components (with non-zero existence probability) in the LMB RFS.

An LMB RFS is a special case of a GLMB RFS, in which both the densities and the existence probabilities of different tracks are uncorrelated. Two properties associated with an LMB RFS are as follows. The cardinality distribution [2] is given by

$$p\left(n\right)=\left(\prod _{\ell \in \mathbb{L}}\left(1-{r}^{\ell}\right)\right){\sigma}_{\left|\mathbb{L}\right|,n}\left(\frac{{r}^{{\ell}_{1}}}{1-{r}^{{\ell}_{1}}},\cdots ,\frac{{r}^{{\ell}_{\left|\mathbb{L}\right|}}}{1-{r}^{{\ell}_{\left|\mathbb{L}\right|}}}\right)$$

where ${\sigma}_{\nu ,n}\left({x}_{1},\cdots ,{x}_{\nu}\right)$ is the elementary homogeneous symmetric function of degree n in $\nu $ variables.
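As a concrete illustration, the cardinality distribution above can be evaluated with a short recursion over the elementary symmetric polynomials. The following Python sketch is ours (function name and NumPy usage are illustrative choices, not from the paper) and assumes every existence probability is strictly less than one:

```python
import numpy as np

def lmb_cardinality_distribution(r):
    """Cardinality distribution p(n) of an LMB RFS with existence
    probabilities r = [r_1, ..., r_M], using the elementary symmetric
    function form: p(n) = prod(1 - r_l) * sigma_{M,n}(r_l / (1 - r_l))."""
    r = np.asarray(r, dtype=float)  # requires r_l < 1 for all l
    # e[j] accumulates the elementary symmetric polynomial of degree j
    # in the odds r_l / (1 - r_l), built up one track at a time.
    e = np.zeros(len(r) + 1)
    e[0] = 1.0
    for x in r / (1.0 - r):
        e[1:] = e[1:] + x * e[:-1]  # RHS is evaluated before assignment
    return np.prod(1.0 - r) * e
```

Equivalently, the cardinality is a sum of independent Bernoulli($r^{\ell}$) variables, so the returned probabilities sum to one; for two tracks with $r=0.5$ each, the distribution is $(0.25, 0.5, 0.25)$.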

**Lemma 1.**

If ${\mathbf{X}}_{a}$ and ${\mathbf{X}}_{b}$ are both LMB RFSs with probability densities ${\pi}_{a}\left({\mathbf{X}}_{a}\right)$ and ${\pi}_{b}\left({\mathbf{X}}_{b}\right)$, respectively, where ${\mathbf{X}}_{a}\subseteq \mathbb{X}\times {\mathbb{L}}_{a}$, ${\mathbf{X}}_{b}\subseteq \mathbb{X}\times {\mathbb{L}}_{b}$ and ${\mathbb{L}}_{a}\bigcap {\mathbb{L}}_{b}=\emptyset $, then $\mathbf{X}={\mathbf{X}}_{a}\bigcup {\mathbf{X}}_{b}$ is an LMB RFS with probability density $\pi \left(\mathbf{X}\right)$, and vice versa. The densities ${\pi}_{a}\left({\mathbf{X}}_{a}\right)$, ${\pi}_{b}\left({\mathbf{X}}_{b}\right)$ and $\pi \left(\mathbf{X}\right)$ are related as follows:

$$\pi \left(\mathbf{X}\right)={\pi}_{a}\left({\mathbf{X}}_{a}\right){\pi}_{b}\left({\mathbf{X}}_{b}\right)$$

The proof of Lemma 1 is given in the Appendix A.

#### 2.3. Multi-Target Bayes Forward–Backward Smoother

The recursion of the multi-target Bayes forward–backward smoother is shown in Figure 1; it consists of forward Bayes filtering and backward Bayes smoothing. The multi-target state at time t is ${\mathbf{X}}_{t}$, where ${\mathbf{X}}_{t}\subseteq \mathbb{X}\times {\mathbb{L}}_{1:t}$ and ${\mathbb{L}}_{1:t}$ denotes the label space of the targets at t (including those born prior to t). The measurement set at t is ${Z}_{t}$. The prediction and update of the forward Bayes filtering are [2]

$${\pi}_{t|t-1}\left({\mathbf{X}}_{t}\right)=\int {f}_{t|t-1}\left({\mathbf{X}}_{t}|{\mathbf{X}}_{t-1}\right){\pi}_{t-1|t-1}\left({\mathbf{X}}_{t-1}\right)\delta {\mathbf{X}}_{t-1}$$

$${\pi}_{t|t}\left({\mathbf{X}}_{t}\right)=\frac{{g}_{t}\left({Z}_{t}|{\mathbf{X}}_{t}\right){\pi}_{t|t-1}\left({\mathbf{X}}_{t}\right)}{\int {g}_{t}\left({Z}_{t}|\mathbf{X}\right){\pi}_{t|t-1}\left(\mathbf{X}\right)\delta \mathbf{X}}$$

where ${\pi}_{t|t-1}$ denotes the predicted multi-target density from $t-1$ to t, ${f}_{t|t-1}$ denotes the multi-target Markov transition density at t, ${\pi}_{t|t}$ denotes the multi-target posterior density at t, and ${g}_{t}$ denotes the multi-target likelihood function at t. The integrals in the equations are set integrals. If the backward smoothed density from t to k ($k\le t$), initialized with ${\pi}_{t|t}\left(\mathbf{Y}\right)$, is denoted by ${\pi}_{k|t}\left(\mathbf{Y}\right)$, the backward Bayes smoothed density ${\pi}_{k-1|t}\left(\mathbf{X}\right)$ at $k-1$ can be written as [32]

$${\pi}_{k-1|t}\left(\mathbf{X}\right)={\pi}_{k-1|k-1}\left(\mathbf{X}\right)\int {f}_{k|k-1}\left(\mathbf{Y}|\mathbf{X}\right)\frac{{\pi}_{k|t}\left(\mathbf{Y}\right)}{{\pi}_{k|k-1}\left(\mathbf{Y}\right)}\delta \mathbf{Y}$$

#### 2.4. Multi-Target Motion and Measurement Models

Let ${p}_{s,t|t-1}^{\ell}\triangleq {p}_{s,t|t-1}\left(x,\ell \right)$ denote the survival probability of the target $\left(x,\ell \right)$ from $t-1$ to t, and let ${f}_{t|t-1}\left({x}_{+}|\left(x,\ell \right)\right)$ denote the single-target Markov transition density from $t-1$ to t. $\mathbf{X}$ denotes the multi-target state at $t-1$ and ${\mathbf{Y}}^{-}$ denotes the surviving multi-target state from $t-1$ to t, where $\mathbf{X}\subseteq \mathbb{X}\times {\mathbb{L}}_{1:t-1}$ and $L\left({\mathbf{Y}}^{-}\right)\subseteq L\left(\mathbf{X}\right)$. Considering only the surviving targets, the multi-target Markov transition density ${f}_{s,t|t-1}$ is [2,13]

$${f}_{s,t|t-1}\left({\mathbf{Y}}^{-}|\mathbf{X}\right)=\Delta \left({\mathbf{Y}}^{-}\right)\Delta \left(\mathbf{X}\right){\left(1-{p}_{s,t|t-1}^{\ell}\right)}^{\mathbf{X}}\prod _{\ell \in L\left({\mathbf{Y}}^{-}\right)}\frac{{1}_{L\left(\mathbf{X}\right)}\left(\ell \right){p}_{s,t|t-1}^{\ell}{f}_{t|t-1}\left(y|x,\ell \right)}{\left(1-{p}_{s,t|t-1}^{\ell}\right)}$$

It is assumed that the newborn targets form an LMB RFS denoted as ${\mathbf{Y}}^{+}$, with ${\mathbf{Y}}^{+}\subseteq \mathbb{X}\times {\mathbb{L}}_{t}$, where ${\mathbb{L}}_{t}$ denotes the label space of the newborn targets at t. ${r}_{B,t|t-1}^{\ell}$ denotes the birth probability of target $\left(x,\ell \right)$ at t and ${p}_{B,t|t-1}\left(y,\ell \right)$ denotes the corresponding density. The density ${f}_{B,t|t-1}$ of the newborn targets can be written as [2,13]

$${f}_{B,t|t-1}\left({\mathbf{Y}}^{+}\right)=\Delta \left({\mathbf{Y}}^{+}\right)\prod _{\ell \in {\mathbb{L}}_{t}}\left(1-{r}_{B,t|t-1}^{\ell}\right)\prod _{\ell \in L\left({\mathbf{Y}}^{+}\right)}\frac{{1}_{{\mathbb{L}}_{t}}\left(\ell \right){r}_{B,t|t-1}^{\ell}{p}_{B,t|t-1}\left(y,\ell \right)}{\left(1-{r}_{B,t|t-1}^{\ell}\right)}$$

The multi-target state at t can be denoted as $\mathbf{Y}={\mathbf{Y}}^{+}\bigcup {\mathbf{Y}}^{-}$ where ${\mathbf{Y}}^{+}$ and ${\mathbf{Y}}^{-}$ are disjoint and independent. From Lemma 1, the joint multi-target Markov transition density ${f}_{t|t-1}$ is written as [2,13]

$${f}_{t|t-1}\left(\mathbf{Y}|\mathbf{X}\right)={f}_{B,t|t-1}\left({\mathbf{Y}}^{+}\right){f}_{s,t|t-1}\left({\mathbf{Y}}^{-}|\mathbf{X}\right)$$

The multi-target likelihood function ${g}_{t}\left({Z}_{t}|\mathbf{X}\right)$ is given by [2]

$${g}_{t}\left({Z}_{t}|\mathbf{X}\right)={e}^{-\langle \kappa ,1\rangle }{\kappa}^{{Z}_{t}}{\left(1-{p}_{D,t}^{\ell}\right)}^{\mathbf{X}}\sum _{\theta \in \Theta}\left(\prod _{\theta \left(\ell \right)>0}\frac{{p}_{D,t}^{\ell}{g}_{t}\left({z}_{\theta \left(\ell \right)}|x,\ell \right)}{\left(1-{p}_{D,t}^{\ell}\right)\kappa \left({z}_{\theta \left(\ell \right)}\right)}\right)$$

where ${p}_{D,t}^{\ell}\triangleq {p}_{D,t}\left(x,\ell \right)$ denotes the detection probability of target $\left(x,\ell \right)$ at t, and ${g}_{t}\left(z|x,\ell \right)$ denotes the probability that target $\left(x,\ell \right)$ generates the measurement z if detected. The intensity function (or PHD) of the Poisson clutter is $\kappa \left(\cdot \right)$. The association function $\theta :L\left(\mathbf{X}\right)\to \left\{0,1,...,\left|{Z}_{t}\right|\right\}$ has the property that $\theta \left({\ell}_{i}\right)=\theta \left({\ell}_{{i}^{\prime}}\right)>0$ implies ${\ell}_{i}={\ell}_{{i}^{\prime}}$. $\Theta $ denotes the set of all association functions and $\theta \in \Theta $. When $\theta \left(\ell \right)=0$, the target $\left(x,\ell \right)$ is missed in detection; when $\theta \left(\ell \right)>0$, ${z}_{\theta \left(\ell \right)}$ denotes the measurement associated with target $\left(x,\ell \right)$. ${Z}_{t}\setminus \bigcup _{\ell \in L\left(\mathbf{X}\right),\theta \left(\ell \right)>0}\left\{{z}_{\theta \left(\ell \right)}\right\}$ denotes the false alarms at time t.

## 3. LMB Smoother

In this section, we detail the proposed LMB smoother and discuss the cardinality estimation. The LMB smoother framework, depicted in Figure 2, consists of forward LMB filtering and backward LMB smoothing. The forward LMB filtering used in our approach is the standard LMB filter [15]. At each time step, the backward LMB smoothing performs ${L}_{d}$ backward smoothing recursions, where ${L}_{d}$ denotes the fixed lag.

#### 3.1. Forward LMB Filtering

#### 3.1.1. Prediction

Given that the multi-target prior at time $t-1$ is an LMB (distribution) parameterized as ${\pi}_{t-1|t-1}={\left\{\left({r}_{t-1|t-1}^{\ell},{p}_{t-1|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t-1}}$ and the density of the newborn targets is an LMB parameterized as ${\pi}_{B,t}={\left\{\left({r}_{B,t}^{\ell},{p}_{B,t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{t}}$, the predicted multi-target density at t remains an LMB, which can be written as [12,15]

$${\pi}_{t|t-1}={\left\{\left({r}_{S,t|t-1}^{\ell},{p}_{S,t|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t-1}}\bigcup {\left\{\left({r}_{B,t}^{\ell},{p}_{B,t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{t}}={\left\{\left({r}_{t|t-1}^{\ell},{p}_{t|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t}}$$

where

$${\eta}_{S,t|t-1}^{\ell}=\langle {p}_{s,t|t-1}^{\ell}\left(\cdot \right),{p}_{t-1|t-1}^{\ell}\left(\cdot \right)\rangle $$

$${r}_{S,t|t-1}^{\ell}={\eta}_{S,t|t-1}^{\ell}{r}_{t-1|t-1}^{\ell}$$

$${p}_{S,t|t-1}\left({x}_{+},\ell \right)=\frac{\langle {p}_{s,t|t-1}\left(\cdot ,\ell \right){f}_{t|t-1}\left({x}_{+}|\left(\cdot ,\ell \right)\right),{p}_{t-1|t-1}\left(\cdot ,\ell \right)\rangle }{{\eta}_{S,t|t-1}^{\ell}}$$

$${p}_{t|t-1}\left({x}_{+},\ell \right)={1}_{{\mathbb{L}}_{1:t-1}}\left(\ell \right){p}_{S,t|t-1}\left({x}_{+},\ell \right)+{1}_{{\mathbb{L}}_{t}}\left(\ell \right){p}_{B,t}\left({x}_{+},\ell \right)$$

#### 3.1.2. Update

Assume that the predicted multi-target density at t is an LMB with the parameter set ${\pi}_{t|t-1}={\left\{\left({r}_{t|t-1}^{\ell},{p}_{t|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t}}$, namely,

$${\pi}_{t|t-1}\left(\mathbf{X}\right)=\Delta \left(\mathbf{X}\right){\omega}_{t|t-1}\left(L\left(\mathbf{X}\right)\right){\left[{p}_{t|t-1}^{\ell}\right]}^{\mathbf{X}}$$

Under the likelihood function (17), the updated multi-target posterior density is a GLMB (strictly speaking, a $\delta $-GLMB, which is a special case of a GLMB [13]) given by [13]

$${\pi}_{t|t}\left(\mathbf{X}\right)=\Delta \left(\mathbf{X}\right)\sum _{\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}}{\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right){\delta}_{{\mathrm{I}}_{+}}\left(L\left(\mathbf{X}\right)\right){\left[{p}_{t|t}^{\theta}\left(x,\ell \right)\right]}^{\mathbf{X}}$$

where $F\left({\mathbb{L}}_{1:t}\right)$ denotes the class of finite subsets of ${\mathbb{L}}_{1:t}$, ${\mathrm{I}}_{+}=\left\{{\ell}_{1},...,{\ell}_{\left|{\mathrm{I}}_{+}\right|}\right\}$ denotes the label set of a hypothesis, ${\Theta}_{{\mathrm{I}}_{+}}$ is the set of association functions $\theta :{\mathrm{I}}_{+}\to \left\{0,1,...,\left|Z\right|\right\}$, and

$${\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\propto {\omega}_{t|t-1}\left({\mathrm{I}}_{+}\right){\left[{\eta}_{Z}^{\theta}\right]}^{{\mathrm{I}}_{+}}$$

$${p}_{t|t}^{\theta}\left(x,\ell \right)=\frac{{p}_{t|t-1}\left(x,\ell \right){\psi}_{Z}\left(x,\ell ;\theta \right)}{{\eta}_{Z}^{\theta}\left(\ell \right)}$$

$${\eta}_{Z}^{\theta}\left(\ell \right)=\langle {p}_{t|t-1}\left(\cdot ,\ell \right),{\psi}_{Z}\left(\cdot ,\ell ;\theta \right)\rangle $$

$${\psi}_{Z}\left(x,\ell ;\theta \right)=\left\{\begin{array}{c}\frac{{p}_{D}\left(x,\ell \right)g\left({z}_{\theta \left(\ell \right)}|x,\ell \right)}{\kappa \left({z}_{\theta \left(\ell \right)}\right)},\theta \left(\ell \right)>0\\ 1-{p}_{D}\left(x,\ell \right),\theta \left(\ell \right)=0\end{array}\right.$$

In the update of the forward LMB filtering, the multi-target posterior density is approximated by matching the first-order moment of (24) for computational simplicity, i.e., [15]

$${\pi}_{t|t}\approx {\left\{\left({r}_{t|t}^{\ell},{p}_{t|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t}}$$

where

$${r}_{t|t}^{\ell}=\sum _{\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}}{\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right){1}_{{\mathrm{I}}_{+}}\left(\ell \right)$$

$${p}_{t|t}\left(x,\ell \right)=\frac{1}{{r}_{t|t}^{\ell}}\sum _{\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}}{\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right){1}_{{\mathrm{I}}_{+}}\left(\ell \right){p}_{t|t}^{\theta}\left(x,\ell \right)$$

The approximate posterior density (29) of the forward LMB filtering preserves the first-order moment of the posterior density (24). It is proved in [29,31] that, among all densities in the LMB class, the approximation (29) minimizes the Kullback-Leibler divergence (KLD) relative to the posterior density (24).
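The moment-matching step for the existence probabilities in (30) amounts to summing, for each label, the weights of all hypotheses whose label set contains that label. A minimal Python sketch (the data layout and names are hypothetical, chosen only for illustration):

```python
def lmb_existence_marginals(hypotheses, labels):
    """Moment-matched LMB existence probabilities from weighted
    (delta-)GLMB hypotheses: r^l is the total weight of hypotheses
    whose label set I_+ contains l, as in the marginalization (30).
    hypotheses : iterable of (weight, label_set) pairs, weights sum to 1
    labels     : all labels in the label space
    """
    r = {label: 0.0 for label in labels}
    for weight, label_set in hypotheses:
        for label in label_set:
            r[label] += weight
    return r
```

For instance, with hypotheses $\{(0.6,\{a,b\}),(0.4,\{a\})\}$ the marginals are $r^{a}=1.0$ and $r^{b}=0.6$.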

#### 3.2. Backward LMB Smoothing

In this subsection, the backward LMB smoothing is derived. Our derivation directly relies on the multi-target backward smoothing recursion (13), and is different from the derivation of the backward GLMB smoothing [32] which uses the backward corrector recursion [18] as an intermediate process to avoid the set integral of the quotient of two GLMBs.

**Proposition 1.**

Given that the multi-target posterior ${\pi}_{k-1|k-1}\left(\mathbf{X}\right)$ and the predicted multi-target density ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ are both LMBs, and that the backward smoothed density ${\pi}_{k|t}\left(\mathbf{Y}\right)$ from t to k ($k\le t$) is an LMB, the backward smoothed density ${\pi}_{k-1|t}\left(\mathbf{X}\right)$ from t to $k-1$ is also an LMB, which can be written as

$${\pi}_{k-1|t}={\left\{\left({r}_{k-1|t}^{\ell},{p}_{k-1|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:k-1}}$$

where

$${r}_{k-1|t}^{\ell}=1-\frac{\left(1-{r}_{k-1|k-1}^{\ell}\right)\left(1-{r}_{k|t}^{\ell}\right)}{1-{r}_{k|k-1}^{\ell}}$$

$${p}_{k-1|t}\left(x,\ell \right)=\frac{{p}_{k-1|k-1}\left(x,\ell \right)\left({\alpha}_{s,k|t}\left(x,\ell \right)+{\beta}_{s,k|t}\left(x,\ell \right)\int \frac{{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\right)}{\int {p}_{k-1|k-1}\left(x,\ell \right)\left({\alpha}_{s,k|t}\left(x,\ell \right)+{\beta}_{s,k|t}\left(x,\ell \right)\int \frac{{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\right)dx}$$

$${\alpha}_{s,k|t}\left(x,\ell \right)\triangleq \frac{\left(1-{r}_{k|t}^{\ell}\right)\left(1-{p}_{s,k|k-1}\left(x,\ell \right)\right)}{1-{r}_{k|k-1}^{\ell}}$$

$${\beta}_{s,k|t}\left(x,\ell \right)\triangleq \frac{{r}_{k|t}^{\ell}{p}_{s,k|k-1}\left(x,\ell \right)}{{r}_{k|k-1}^{\ell}}$$

The proof of Proposition 1 is given in Appendix B. It can be observed that the smoothed density $({r}_{k-1|t}^{\ell},{p}_{k-1|t}^{\ell})$ of the track ℓ has the same form as the smoothed Bernoulli density when newborn targets are not taken into account [20]. Therefore, the proposed LMB smoother can be deemed an extension of the Bernoulli smoother [20] to multiple targets. From (33), the existence probability of the track ℓ depends on ${r}_{k-1|k-1}^{\ell}$, ${r}_{k|k-1}^{\ell}$ and ${r}_{k|t}^{\ell}$. From (34), the density of the track ℓ contains two terms: one depends only on ${p}_{k-1|k-1}\left(x,\ell \right)$ and ${\alpha}_{s,k|t}\left(x,\ell \right)$, preserving the forward filtering state at $k-1$, and the other accounts for the backward smoothing.
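Equation (33) can be read as a backward update of the non-existence probability of a track. A one-line Python sketch (the argument names are ours) makes its behavior easy to probe:

```python
def smoothed_existence(r_post, r_pred, r_smooth_next):
    """Backward-smoothed existence probability of one track per (33):
        r_{k-1|t} = 1 - (1 - r_{k-1|k-1})(1 - r_{k|t}) / (1 - r_{k|k-1})
    r_post        : filtered existence probability r_{k-1|k-1}
    r_pred        : predicted existence probability r_{k|k-1} (< 1)
    r_smooth_next : smoothed existence probability r_{k|t}
    """
    return 1.0 - (1.0 - r_post) * (1.0 - r_smooth_next) / (1.0 - r_pred)
```

For example, with ${r}_{k-1|k-1}^{\ell}=0.8$, ${r}_{k|k-1}^{\ell}=0.72$ and ${r}_{k|t}^{\ell}=0.9$, the smoothed value is about $0.929$: later evidence that the track persists raises the smoothed existence probability above its filtered value.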

**Remark 1.**

The proposed LMB smoother achieves good computational efficiency owing to two properties. First, the newborn tracks are uncorrelated with the backward smoothing, since a newborn track cannot be alive prior to its birth time, resulting in a simple label space. Second, the existence probabilities and the probability densities of different tracks in an LMB RFS are uncorrelated, so each track component can be computed separately; the computational complexity of the backward smoothing is therefore linear in the number of tracks.

**Remark 2.**

The proposed LMB smoother is also approximately Bayes optimal, because the LMB family is closed under the prediction operation and the backward smoothing operation. Although the LMB family is not closed under the update operation, the first-order moment approximation in the LMB class minimizes the KLD relative to the posterior density.

## 4. SMC Implementation and Algorithm Analysis

This section presents the SMC implementation and the state extraction, and analyzes the algorithmic complexity of the proposed LMB smoother.

#### 4.1. SMC Implementation

**Prediction**: Suppose the multi-target posterior at $t-1$ is ${\pi}_{t-1|t-1}={\left\{\left({r}_{t-1|t-1}^{\ell},{p}_{t-1|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t-1}}$. In the SMC implementation, ${p}_{t-1|t-1}^{\ell}$ is represented by a set of weighted particles ${\left\{\left({\omega}_{t-1|t-1}^{i}\left(\ell \right),{x}_{t-1|t-1}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{{J}_{t-1|t-1}^{\ell}}$. The density of the newborn targets at t is ${\pi}_{B,t}={\left\{\left({r}_{B,t}^{\ell},{p}_{B,t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{t}}$, where ${p}_{B,t}^{\ell}$ is likewise represented by a set of weighted particles ${\left\{\left({\omega}_{B,t}^{i}\left(\ell \right),{x}_{B,t}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{{J}_{B,t}^{\ell}}$. Through the prediction (18)–(22), the predicted multi-target density at t is ${\pi}_{t|t-1}={\left\{\left({r}_{S,t|t-1}^{\ell},{p}_{S,t|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t-1}}\bigcup {\left\{\left({r}_{B,t}^{\ell},{p}_{B,t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{t}}$, where ${r}_{S,t|t-1}^{\ell}$ is calculated by (20) and ${p}_{S,t|t-1}^{\ell}$ is represented by ${\left\{\left({\omega}_{S,t|t-1}^{i}\left(\ell \right),{x}_{S,t|t-1}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{{J}_{t-1|t-1}^{\ell}}$. That is,

$${\omega}_{S,t|t-1}^{i}\left(\ell \right)=\frac{{p}_{s,t|t-1}\left({x}_{t-1|t-1}^{i}\left(\ell \right),\ell \right){\omega}_{t-1|t-1}^{i}\left(\ell \right)}{{\eta}_{S,t|t-1}^{\ell}}$$

$${\eta}_{S,t|t-1}^{\ell}=\sum _{i=1}^{{J}_{t-1|t-1}^{\ell}}{p}_{s,t|t-1}\left({x}_{t-1|t-1}^{i}\left(\ell \right),\ell \right){\omega}_{t-1|t-1}^{i}\left(\ell \right)$$

$${x}_{S,t|t-1}^{i}\left(\ell \right)\sim {f}_{t|t-1}\left(\xb7|\left({x}_{t-1|t-1}^{i}\left(\ell \right),\ell \right)\right)$$
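The three displayed expressions above can be sketched for a single track as follows. The constant survival probability and the linear-Gaussian motion model in this Python sketch are illustrative assumptions of ours, standing in for the generic ${p}_{s,t|t-1}\left(x,\ell \right)$ and ${f}_{t|t-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def lmb_predict_track(r, w, x, p_s, F, Q):
    """SMC prediction of one surviving Bernoulli track.
    r : existence probability r_{t-1|t-1}
    w : particle weights, shape (J,); x : particles, shape (J, d)
    p_s : constant survival probability (stand-in for p_s(x, l))
    F, Q : assumed linear-Gaussian motion model x' = F x + noise
    """
    # eta_S = sum_i p_s * w_i, which reduces to p_s for constant survival
    eta = np.sum(p_s * w)
    r_pred = eta * r                       # existence prediction, Eq. (20)
    w_pred = (p_s * w) / eta               # survival-weighted particle weights
    # draw each particle through the Markov transition density
    noise = rng.multivariate_normal(np.zeros(len(Q)), Q, size=len(x))
    x_pred = x @ F.T + noise
    return r_pred, w_pred, x_pred
```

With constant $p_s$, the predicted existence probability is simply ${p}_{s}{r}_{t-1|t-1}^{\ell}$ and the particle weights remain normalized.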

**Update**: Denoting the predicted density as ${\pi}_{t|t-1}={\left\{\left({r}_{t|t-1}^{\ell},{p}_{t|t-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t}}$, it can be written in the form of (6) as ${\left\{\left({\omega}_{t|t-1}\left({\mathrm{I}}_{+}\right),{p}_{t|t-1}^{\mathbf{X}}\right)\right\}}_{{\mathrm{I}}_{+}\subset {\mathbb{L}}_{1:t}}$, where ${\omega}_{t|t-1}\left({\mathrm{I}}_{+}\right)$ is given by (7) and ${\mathrm{I}}_{+}=L\left(\mathbf{X}\right)$. ${p}_{t|t-1}^{\ell}$ is represented by a set of weighted particles ${\left\{\left({\omega}_{t|t-1}^{i}\left(\ell \right),{x}_{t|t-1}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{{J}_{t|t-1}^{\ell}}$. The K-shortest paths algorithm [14] is used to truncate the predicted multi-target density in order to reduce the number of hypotheses. The multi-target posterior density given in (24) is denoted by ${\left\{\left({\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right),{\left[{p}_{t|t}^{\theta}\right]}^{\mathbf{X}}\right)\right\}}_{\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}}$. We compute ${p}_{t|t}^{\theta}\left(x,\ell \right)$ and ${\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}$ as follows.

${p}_{t|t}^{\theta}\left(x,\ell \right)$ can also be represented by a set of particles ${\left\{\left({\omega}_{t|t}^{i,\theta \left(\ell \right)},{x}_{t|t}^{i,\theta \left(\ell \right)}\left(\ell \right)\right)\right\}}_{i=1}^{{J}_{t|t-1}^{\ell}}$ where

$${\omega}_{t|t}^{i,\theta \left(\ell \right)}=\frac{{\omega}_{t|t-1}^{i}\left(\ell \right){\psi}_{Z}\left({x}_{t|t-1}^{i}\left(\ell \right),\ell ;\theta \right)}{{\eta}_{Z}^{\theta}\left(\ell \right)}$$

$${x}_{t|t}^{i,\theta \left(\ell \right)}\left(\ell \right)={x}_{t|t-1}^{i}\left(\ell \right)$$

$${\eta}_{Z}^{\theta}\left(\ell \right)=\sum _{i=1}^{{J}_{t|t-1}^{\ell}}{\omega}_{t|t-1}^{i}\left(\ell \right){\psi}_{Z}\left({x}_{t|t-1}^{i}\left(\ell \right),\ell ;\theta \right)$$

$${\psi}_{Z}\left({x}_{t|t-1}^{i}\left(\ell \right),\ell ;\theta \right)=\left\{\begin{array}{c}\frac{{p}_{D}\left({x}_{t|t-1}^{i}\left(\ell \right),\ell \right)g\left({z}_{\theta \left(\ell \right)}|{x}_{t|t-1}^{i}\left(\ell \right),\ell \right)}{\kappa \left({z}_{\theta \left(\ell \right)}\right)},\theta \left(\ell \right)>0\\ 1-{p}_{D}\left({x}_{t|t-1}^{i}\left(\ell \right),\ell \right),\theta \left(\ell \right)=0\end{array}\right.$$
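For one track and one association, the per-particle weight update above reduces to a few array operations. The following Python sketch is ours (argument names are hypothetical) and covers both the detected and missed-detection branches:

```python
import numpy as np

def psi_and_eta(w, gz, p_d, kappa_z, detected):
    """Per-particle association weight psi_Z and its normalizer eta_Z
    for one track and one association function value theta(l).
    w        : predicted particle weights, shape (J,)
    gz       : measurement likelihoods g(z | x_i, l) per particle (J,)
               (ignored in the missed-detection case)
    p_d      : detection probabilities per particle, shape (J,)
    kappa_z  : clutter intensity at the associated measurement
    detected : True if theta(l) > 0, False for theta(l) = 0
    """
    if detected:
        psi = p_d * gz / kappa_z      # detected branch of psi_Z
    else:
        psi = 1.0 - p_d               # missed-detection branch
    eta = np.sum(w * psi)             # eta_Z = sum_i w_i psi_i
    w_upd = w * psi / eta             # normalized updated particle weights
    return psi, eta, w_upd
```

The updated weights sum to one by construction, and $\eta_{Z}^{\theta}\left(\ell \right)$ feeds directly into the hypothesis weight computed next.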

The weight ${\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}$ of the hypothesis $\left({\mathrm{I}}_{+},\theta \right)$ is given by

$${\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right)=\frac{{\tilde{\omega}}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right)}{\sum _{\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}}{\tilde{\omega}}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right)}$$

$${\tilde{\omega}}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right)={\omega}_{t|t-1}\left({\mathrm{I}}_{+}\right){\left[{\eta}_{Z}^{\theta}\right]}^{{\mathrm{I}}_{+}}$$

where $\theta \in {\Theta}_{{\mathrm{I}}_{+}}$ is an association function and $\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}$ is a hypothesis. To avoid computing all the hypotheses and their weights, we reserve only a specific number ${T}_{h}$ of the largest weights, and the ranked optimal assignment algorithm [14] is applied to truncate the multi-target posterior density.

The multi-target posterior density of the forward LMB filtering is approximated by an LMB that matches the first-order moment of (24), with the parameter set ${\pi}_{t|t}\approx {\left\{\left({r}_{t|t}^{\ell},{p}_{t|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:t}}$, where ${r}_{t|t}^{\ell}$ is given by (30) and ${p}_{t|t}^{\ell}$ is represented by a set of weighted particles ${\left\{\left({\omega}_{t|t}^{j}\left(\ell \right),{x}_{t|t}^{j}\left(\ell \right)\right)\right\}}_{j=1}^{{J}_{t|t}^{\ell}}$ with
where $\left({\mathrm{I}}_{+},\theta \right)\in F\left({\mathbb{L}}_{1:t}\right)\times {\Theta}_{{\mathrm{I}}_{+}}$. Since the number of particles for ${p}_{t|t}^{\ell}$ grows rapidly, resampling [39] is required.

$${\omega}_{t|t}^{j}\left(\ell \right)=\frac{{\omega}_{t|t}^{\left({\mathrm{I}}_{+},\theta \right)}\left(Z\right){1}_{{\mathrm{I}}_{+}}\left(\ell \right){\omega}_{t|t}^{i,\theta \left(\ell \right)}}{{r}_{t|t}^{\ell}}$$

$${x}_{t|t}^{j}\left(\ell \right)={x}_{t|t}^{i,\theta \left(\ell \right)}\left(\ell \right)$$
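The marginalization above, which collapses the hypothesis-level posterior to per-track LMB parameters, can be sketched as follows (a simplified illustration with hypothetical data structures; each hypothesis carries its weight, its label set, and per-track weighted particles, and the track is assumed to have nonzero existence probability):

```python
import numpy as np

def marginalize_to_lmb(hyps, label):
    """Collapse a set of weighted hypotheses to one track's LMB parameters.

    hyps : list of (w_hyp, labels, tracks), where w_hyp is the hypothesis
           weight, labels is the hypothesis label set, and tracks maps a
           label to its (weights, states) particle arrays.
    Returns (r, w, x): the existence probability r_{t|t}^l and the pooled
    weighted particles for p_{t|t}^l (before resampling); assumes r > 0.
    """
    # existence probability: total weight of hypotheses containing the label
    r = sum(w_hyp for w_hyp, labels, _ in hyps if label in labels)
    w_all, x_all = [], []
    for w_hyp, labels, tracks in hyps:
        if label in labels:
            w_p, x_p = tracks[label]
            w_all.append(w_hyp * w_p / r)  # pooled particle weights
            x_all.append(x_p)              # states are reused as-is
    return r, np.concatenate(w_all), np.concatenate(x_all)
```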

**Backward smoothing**: From the forward LMB filtering, the multi-target posterior density is ${\pi}_{k-1|k-1}={\left\{\left({r}_{k-1|k-1}^{\ell},{p}_{k-1|k-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:k-1}}$ at $k-1$, where ${p}_{k-1|k-1}^{\ell}$ is approximated by ${\left\{\left({\omega}_{k-1|k-1}^{i}\left(\ell \right),{x}_{k-1|k-1}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{J\left(\ell \right)}$. The predicted multi-target density from $k-1$ to k is an LMB and the existence probability of track ℓ is ${r}_{k|k-1}^{\ell}$ where $\ell \in {\mathbb{L}}_{1:k}$. The multi-target backward smoothed density from t to k ($k\le t$) is denoted as ${\pi}_{k|t}\left(\mathbf{Y}\right)={\left\{\left({r}_{k|t}^{\ell},{p}_{k|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:k}}$, where ${p}_{k|t}^{\ell}$ is represented by a set of weighted particles ${\left\{\left({\omega}_{k|t}^{j}\left(\ell \right),{y}_{k|t}^{j}\left(\ell \right)\right)\right\}}_{j=1}^{Q\left(\ell \right)}$.

Proposition 1 implies that the multi-target backward smoothed density from t to $k-1$ is also an LMB given by ${\pi}_{k-1|t}={\left\{\left({r}_{k-1|t}^{\ell},{p}_{k-1|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{1:k-1}}$, where ${r}_{k-1|t}^{\ell}$ is calculated by (33) and ${p}_{k-1|t}^{\ell}$ is represented by a set of weighted particles ${\left\{\left({\omega}_{k-1|t}^{i}\left(\ell \right),{x}_{k-1|t}^{i}\left(\ell \right)\right)\right\}}_{i=1}^{J\left(\ell \right)}$. The detailed formula of ${p}_{k-1|t}^{\ell}$ is given as
where $i=1,\cdots ,J\left(\ell \right)$ and

$${\omega}_{k-1|t}^{i}\left(\ell \right)=\frac{{\tilde{\omega}}_{k-1|t}^{i}\left(\ell \right)}{{\displaystyle \sum _{i=1}^{J\left(\ell \right)}}{\tilde{\omega}}_{k-1|t}^{i}\left(\ell \right)}$$

$$\begin{array}{cc}\hfill {\tilde{\omega}}_{k-1|t}^{i}\left(\ell \right)=& \frac{\left(1-{r}_{k|t}^{\ell}\right)\left(1-{p}_{s,k|k-1}\left({x}_{k-1|k-1}^{i}\left(\ell \right),\ell \right)\right){\omega}_{k-1|k-1}^{i}\left(\ell \right)}{\left(1-{r}_{k|k-1}^{\ell}\right)}+\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \frac{{r}_{k|t}^{\ell}{p}_{s,k|k-1}\left({x}_{k-1|k-1}^{i}\left(\ell \right),\ell \right){\omega}_{k-1|k-1}^{i}\left(\ell \right)}{{r}_{k|k-1}^{\ell}}\times \hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \sum _{j=1}^{Q\left(\ell \right)}\frac{{f}_{k|k-1}\left({y}_{k|t}^{j}\left(\ell \right)|{x}_{k-1|k-1}^{i}\left(\ell \right),\ell \right){\omega}_{k|t}^{j}\left(\ell \right)}{{p}_{k|k-1}\left({y}_{k|t}^{j}\left(\ell \right),\ell \right)}\hfill \end{array}$$

$${x}_{k-1|t}^{i}\left(\ell \right)={x}_{k-1|k-1}^{i}\left(\ell \right)$$

$${p}_{k|k-1}\left({y}_{k|t}^{j}\left(\ell \right),\ell \right)=\frac{{r}_{k-1|k-1}^{\ell}}{{r}_{k|k-1}^{\ell}}\sum _{i=1}^{J\left(\ell \right)}{\omega}_{k-1|k-1}^{i}\left(\ell \right){p}_{s,k|k-1}\left({x}_{k-1|k-1}^{i}\left(\ell \right),\ell \right){f}_{k|k-1}\left({y}_{k|t}^{j}\left(\ell \right)|{x}_{k-1|k-1}^{i}\left(\ell \right),\ell \right)$$

Note that the predicted density (22) cannot be directly used as ${p}_{k|k-1}\left(\cdot ,\ell \right)$ in (49) for smoothing. Because the forward LMB filtering performs the resampling procedure [39] in each filtering step, the particles of ${p}_{k|t}\left(\cdot ,\ell \right)$, which originate from ${p}_{t|t}\left(\cdot ,\ell \right)$ (the initial smoothed density), differ from those of ${p}_{k|k-1}\left(\cdot ,\ell \right)$. We therefore need to estimate ${p}_{k|k-1}\left({y}_{k|t}^{j}\left(\ell \right),\ell \right)$ for each particle $\left({y}_{k|t}^{j}\left(\ell \right),\ell \right)$, $j=1,\cdots ,Q\left(\ell \right)$, as in (51).
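A sketch of one backward smoothing recursion for a single track, following the weight expressions above (hypothetical function and variable names; `p_s` and `f_trans` are the survival probability and the single-target transition density, and the predicted density is estimated per smoothed particle as discussed in the note):

```python
import numpy as np

def backward_smooth_track(w_f, x_f, w_s, y_s,
                          r_filt, r_pred, r_smooth, p_s, f_trans):
    """One backward LMB smoothing step for a single track (SMC sketch).

    w_f, x_f : filtered particle weights/states at k-1
    w_s, y_s : smoothed particle weights/states at k
    r_filt, r_pred, r_smooth : r_{k-1|k-1}, r_{k|k-1}, r_{k|t}
    p_s      : survival probability as a function of the state
    f_trans  : transition density f_{k|k-1}(y | x)
    Returns the normalized smoothed weights at k-1 (states are reused).
    """
    ps = np.array([p_s(x) for x in x_f])                       # (J,)
    F = np.array([[f_trans(y, x) for y in y_s] for x in x_f])  # (J, Q)
    # estimate the predicted density at each smoothed particle y_j
    p_pred = (r_filt / r_pred) * (w_f * ps) @ F                # (Q,)
    # unnormalized smoothed weights: miss-survival term + survival term
    w_tilde = ((1.0 - r_smooth) * (1.0 - ps) * w_f / (1.0 - r_pred)
               + r_smooth * ps * w_f / r_pred * (F @ (w_s / p_pred)))
    return w_tilde / w_tilde.sum()
```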

#### 4.2. Backward Smoothing and State Extraction

The pseudo code of the proposed backward smoothing algorithm is given in Algorithm 1. The forward filtering runs up to time t and the lag of the backward smoothing is ${L}_{d}$. We need to store ${L}_{d}+1$ multi-target posterior densities from $t-{L}_{d}$ to t and ${L}_{d}$ multi-target predicted densities from $t-{L}_{d}+1$ to t for the ${L}_{d}$-step backward smoothing recursions. The backward smoothed density ${\pi}_{k|t}\left(\mathbf{Y}\right)$ is initialized with ${\pi}_{t|t}\left(\mathbf{Y}\right)$. In the SMC implementation, pruning, truncation and track cleanup are required and the label set varies over time, so ${\mathbb{L}}_{1:i}$ is replaced with ${\mathbb{L}}_{i|j}$, where ${\mathbb{L}}_{i|j}$ denotes the label set of the corresponding density ${\pi}_{i|j}(\cdot )$ and ${\mathbb{L}}_{i|j}\subset {\mathbb{L}}_{1:i}$. Therefore, we use ${\mathbb{L}}_{k-1|t}={\mathbb{L}}_{k-1|k-1}\bigcap {\mathbb{L}}_{k|t}$ as the label set of the backward smoothing at $k-1$, which eliminates the labels of the tracks born at k and the labels of the tracks pruned in the forward filtering.

Note that, in our approach, resampling [39] can be applied either as the final step of the forward filtering or after the backward smoothing, which leads to a negligible difference in performance. Target states are extracted from the output ${\left\{\left({r}_{t-{L}_{d}|t}^{\ell},{p}_{t-{L}_{d}|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{t-{L}_{d}|t}}$. That is, the target number N is first estimated by
where $p\left(n\right)$ is the cardinality distribution (9). Then the $\widehat{N}$ tracks with the largest existence probabilities are extracted, and the target states are the means of the corresponding probability densities.

$$\widehat{N}=\mathrm{arg}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\underset{n}{max}\left\{p\left(n\right)\right\}$$
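Since the cardinality distribution of an LMB is the convolution of independent Bernoulli distributions over the existence probabilities, the MAP cardinality estimate above can be computed as in this short sketch (hypothetical function names):

```python
import numpy as np

def lmb_cardinality_pmf(r):
    """Cardinality distribution p(n) of an LMB with existence probs r."""
    pmf = np.array([1.0])
    for ri in r:
        # convolve with the Bernoulli pmf [P(absent), P(present)]
        pmf = np.convolve(pmf, [1.0 - ri, ri])
    return pmf

def map_cardinality(r):
    """MAP estimate of the target number: N_hat = argmax_n p(n)."""
    return int(np.argmax(lmb_cardinality_pmf(r)))
```

The `N_hat` tracks with the largest existence probabilities would then be reported, as described above.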

Algorithm 1: The proposed backward LMB smoothing algorithm.

Input: lag ${L}_{d}$ at time t, ${\left\{{\left\{\left({r}_{k|k}^{\ell},{p}_{k|k}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{k|k}}\right\}}_{k=max\left(t-{L}_{d},1\right)}^{t}$, ${\left\{{\left\{{r}_{k|k-1}^{\ell}\right\}}_{\ell \in {\mathbb{L}}_{k|k-1}}\right\}}_{k=max\left(t-{L}_{d},1\right)+1}^{t}$;

initialize ${\pi}_{k|t}\left(\mathbf{Y}\right)$ with ${\pi}_{t|t}\left(\mathbf{Y}\right)$;

for k = t:-1:max($t-{L}_{d}$,1)+1

${\mathbb{L}}_{k-1|t}={\mathbb{L}}_{k-1|k-1}\bigcap {\mathbb{L}}_{k|t}$;

for q = 1:size(${\mathbb{L}}_{k-1|t}$,2)

compute ${r}_{k-1|t}^{{\ell}_{q}}$ according to (33);

for j = 1:$Q\left({\ell}_{q}\right)$

estimate ${p}_{k|k-1}\left({y}_{k|t}^{j}\left({\ell}_{q}\right),{\ell}_{q}\right)$ according to (51);

end

for i = 1:$J\left({\ell}_{q}\right)$

compute ${\omega}_{k-1|t}^{i}\left({\ell}_{q}\right)$ according to (48)–(49);

${x}_{k-1|t}^{i}\left({\ell}_{q}\right)={x}_{k-1|k-1}^{i}\left({\ell}_{q}\right)$;

end

end

end

Output: ${\left\{{\left\{\left({r}_{k-1|t}^{\ell},{p}_{k-1|t}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{k-1|t}}\right\}}_{k=max\left(t-{L}_{d},1\right)+1}^{t}$.

#### 4.3. Algorithm Complexity

The major computational cost of the LMB smoother is due to the backward LMB smoothing, whose complexity $O\left({L}_{d}N{L}^{2}\right)$ can be read off from the four nested for-loops of Algorithm 1, where N is the number of tracks (or targets) and L is the number of particles per track. The computational complexity of the innermost two parallel for-loops is $O\left({L}^{2}\right)$; multiplying by the ${L}_{d}$ and N iterations of the outermost two for-loops gives $O\left({L}_{d}N{L}^{2}\right)$.

The structures of the backward smoothing for the PHD smoother [17] and the MeMBer smoother [27] are similar to that of the LMB smoother. The difference is that the PHD and MeMBer smoothers use the particles of all tracks in backward smoothing, whereas our LMB smoother uses only the particles of each individual track for its own backward smoothing. The major computational costs of the PHD smoother (here we consider the classic SMC implementation [17] rather than the fast SMC implementation [23]) and the MeMBer smoother are therefore both approximately $O\left({L}_{d}{N}^{2}{L}^{2}\right)$. Consequently, the computational complexity of the proposed LMB smoother is lower than those of the PHD smoother [17] and the MeMBer smoother [27].

## 5. Simulation Result

A nearly constant turn (NCT) model with a time-varying turn rate is considered, together with noisy range and azimuth measurements [12,13]. The state of the target with label ℓ at time t is denoted as ${\mathbf{x}}_{t}=\left({x}_{t},\ell \right)$, where ${x}_{t}={\left[{\tilde{x}}_{t}^{T},{w}_{t}\right]}^{T}$, ${\tilde{x}}_{t}={\left[{p}_{x,t},{\dot{p}}_{x,t},{p}_{y,t},{\dot{p}}_{y,t}\right]}^{T}$ denotes the planar position and velocity of the target, and ${w}_{t}$ denotes the turn rate. The NCT model can be written as
where ${W}_{t-1}$ and ${u}_{t-1}$ are the process noises of the velocity and the turn rate, respectively, with ${W}_{t-1}\sim N\left(W;0,{\sigma}_{W}^{2}{I}_{2}\right)$ and ${u}_{t-1}\sim N\left(u;0,{\sigma}_{u}^{2}{I}_{1}\right)$, where $N\left(\cdot ;{m}_{N},{P}_{N}\right)$ denotes a Gaussian density with mean ${m}_{N}$ and variance ${P}_{N}$, and ${I}_{i}$ represents the $i\times i$ identity matrix. The velocity noise uses ${\sigma}_{W}=5$ m/s${}^{2}$ and the turn-rate noise uses ${\sigma}_{u}=\pi /180$ rad/s${}^{2}$. The state transition matrix and the noise transition matrix are given as follows, respectively,

$${\tilde{x}}_{t}=F\left({w}_{t-1}\right){\tilde{x}}_{t-1}+G{W}_{t-1}$$

$${w}_{t}={w}_{t-1}+{u}_{t-1}T$$

$$F\left(w\right)=\left[\begin{array}{cccc}1& \frac{\sin wT}{w}& 0& -\frac{1-\cos wT}{w}\\ 0& \cos wT& 0& -\sin wT\\ 0& \frac{1-\cos wT}{w}& 1& \frac{\sin wT}{w}\\ 0& \sin wT& 0& \cos wT\end{array}\right],G=\left[\begin{array}{cc}\frac{{T}^{2}}{2}& 0\\ T& 0\\ 0& \frac{{T}^{2}}{2}\\ 0& T\end{array}\right]$$
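A sketch of one NCT propagation step built from the matrices above (hypothetical function name; `rng=None` disables the process noise, and w must be nonzero because of the divisions in F(w)):

```python
import numpy as np

def nct_step(x_tilde, w, T=1.0, sigma_W=5.0, sigma_u=np.pi / 180,
             rng=None):
    """One step of the nearly constant turn (NCT) model.

    x_tilde = [px, vx, py, vy], w = turn rate (rad/s, nonzero).
    Returns the propagated state and turn rate; process noise is
    added only when an np.random.Generator is passed as rng.
    """
    s, c = np.sin(w * T), np.cos(w * T)
    F = np.array([[1, s / w,       0, -(1 - c) / w],
                  [0, c,           0, -s],
                  [0, (1 - c) / w, 1,  s / w],
                  [0, s,           0,  c]])
    G = np.array([[T**2 / 2, 0], [T, 0], [0, T**2 / 2], [0, T]])
    if rng is None:
        W, u = np.zeros(2), 0.0          # noise-free propagation
    else:
        W = rng.normal(0.0, sigma_W, 2)  # velocity noise
        u = rng.normal(0.0, sigma_u)     # turn-rate noise
    return F @ x_tilde + G @ W, w + u * T
```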

The state transition model is given by (53) and (54). The sampling interval is $T=1$ s. The target survival probability is ${P}_{s}=0.99$. The newborn targets at each time step form an LMB RFS represented as ${\pi}_{B,t}={\left\{\left({r}_{B,t}^{\ell},{p}_{B,t}^{\ell}\right)\right\}}_{\ell \in \mathbb{B}}$, where $\mathbb{B}=\left\{{\ell}_{1},{\ell}_{2}\right\}$, ${r}_{B,t}^{{\ell}_{1}}=0.02$, ${r}_{B,t}^{{\ell}_{2}}=0.03$, and ${p}_{B,t}^{{\ell}_{i}}\sim N\left(\cdot ;{m}_{B,t}^{{\ell}_{i}},{P}_{B}\right)$. The mean of track ${\ell}_{1}$ is ${m}_{B,t}^{{\ell}_{1}}={\left[-1500,\phantom{\rule{4.pt}{0ex}}0,\phantom{\rule{4.pt}{0ex}}250,\phantom{\rule{4.pt}{0ex}}0,\phantom{\rule{4.pt}{0ex}}0\phantom{\rule{4.pt}{0ex}}\right]}^{T}$, the mean of track ${\ell}_{2}$ is ${m}_{B,t}^{{\ell}_{2}}={\left[1000,\phantom{\rule{4.pt}{0ex}}0,\phantom{\rule{4.pt}{0ex}}1500,\phantom{\rule{4.pt}{0ex}}0,\phantom{\rule{4.pt}{0ex}}0\right]}^{T}$, and the covariance of both is ${P}_{B}=diag{\left(50,50,50,50,6\pi /180\right)}^{2}$. The unit of the position ${p}_{x},{p}_{y}$ is m, the unit of the velocity ${\dot{p}}_{x},{\dot{p}}_{y}$ is m/s, and the unit of the turn rate w is rad/s.
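The birth model of this scenario might be set up as follows (a configuration sketch with hypothetical names, using the means, existence probabilities and covariance listed above):

```python
import numpy as np

# Birth LMB parameters of the scenario (position in m, velocity in m/s,
# turn rate in rad/s), as listed in the text.
birth = [
    dict(r=0.02, m=np.array([-1500.0, 0.0, 250.0, 0.0, 0.0])),
    dict(r=0.03, m=np.array([1000.0, 0.0, 1500.0, 0.0, 0.0])),
]
P_B = np.diag([50.0, 50.0, 50.0, 50.0, 6 * np.pi / 180]) ** 2

def sample_birth_particles(comp, n, rng):
    """Draw n particles from one birth component's Gaussian density."""
    return rng.multivariate_normal(comp["m"], P_B, size=n)
```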

The measurement equation is
where the measurement noise is ${\epsilon}_{t}\sim N\left(\cdot ;0,{P}_{\epsilon}\right)$ with ${P}_{\epsilon}=diag\left({\sigma}_{r}^{2},{\sigma}_{\theta}^{2}\right)$, ${\sigma}_{r}=10$ m, and ${\sigma}_{\theta}=2\pi /180$ rad. The observation region is $\left[\phantom{\rule{4.pt}{0ex}}0,2000\right]$ m$\times \left[\phantom{\rule{4.pt}{0ex}}0,\pi \right]$ rad with the detection probability ${p}_{D}=0.98$. The density of the Poisson clutter is $\kappa \left(\cdot \right)=10/\left(2000\pi \right)$, corresponding to an average of 10 clutter measurements per scan.

$${z}_{t}={\left[\sqrt{{p}_{x,t}^{2}+{p}_{y,t}^{2}},arctan\left(\frac{{p}_{y,t}}{{p}_{x,t}}\right)\right]}^{T}+{\epsilon}_{t}$$
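A sketch of the range/azimuth measurement function above (hypothetical name; `np.arctan2` is used instead of a plain arctan for quadrant safety, an implementation choice not stated in the text):

```python
import numpy as np

def measure(x_tilde, sigma_r=10.0, sigma_theta=2 * np.pi / 180, rng=None):
    """Noisy range/azimuth measurement z_t of a planar state.

    x_tilde = [px, vx, py, vy]; noise is added only when an
    np.random.Generator is passed as rng.
    """
    px, _, py, _ = x_tilde
    z = np.array([np.hypot(px, py), np.arctan2(py, px)])
    if rng is not None:
        z += rng.normal(0.0, [sigma_r, sigma_theta])
    return z
```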

The number of particles per hypothesized track is set to 1000. We prune the tracks with a weight smaller than ${P}_{T}={10}^{-4}$. The lag of the smoother is ${L}_{d}=3$. The OSPA metric [40] with cut-off parameter $c=100$ m and order parameter $p=1$ is used. We compare the proposed LMB smoother with the LMB filter [15], the PHD filter [9], the PHD smoother [17], the CBMeMBer filter [12] and the MeMBer smoother [27] over 100 Monte Carlo trials.

Figure 3 and Figure 4 show the results of the LMB smoother in one trial, in the plane and in the x and y coordinates over time, respectively. The number of targets changes over time due to target birth and death, and there are at most five targets in the scenario. The track of each target is smooth.

More specifically, Figure 5, Figure 6 and Figure 7 show the cardinality estimation of the different methods over 100 Monte Carlo trials. We can see from Figure 5 that the estimated cardinality mean converges to the true cardinality most of the time for all methods. Figure 6 shows the errors of the cardinality mean, given by the estimated cardinality mean minus the true cardinality. At $k=1$ s, 10 s, 40 s and 60 s, one or two targets are born and the cardinality errors are negative for the LMB filter because of newborn-target detection delays. At $k=67$ s, 81 s and 91 s, one target disappears and the cardinality errors are positive for the LMB filter because of the delay in detecting the target death. At k = 64∼66 s, 78∼80 s and 88∼90 s, the cardinality mean errors are negative for the PHD and MeMBer smoothers because of premature target deaths. Figure 7 shows the standard deviation of the cardinality estimate for the different methods. The standard deviations of the PHD and MeMBer smoothers are larger than those of the LMB filter and the LMB smoother. In short, the LMB smoother can accurately estimate the cardinality (except for a detection delay at $k=10$ s) and yields the best accuracy.

Figure 8 shows the average OSPA errors of the PHD smoother, the MeMBer smoother, the LMB filter, and the LMB smoother with 100 Monte Carlo trials. It can be seen that the OSPA of the LMB smoother is less than those of the other methods almost all the time. We also give the average OSPA errors of different methods in Table 1. It can be seen that all three kinds of smoothers can effectively reduce OSPA localization components as compared with the corresponding filters. However, the PHD smoother and the MeMBer smoother do not necessarily reduce the OSPA cardinality components, whereas the proposed LMB smoother can improve the cardinality estimation significantly.

Figure 9 shows the average execution time of the different methods over 100 Monte Carlo trials. Summing over all time steps, the times per simulation for the PHD smoother, the MeMBer smoother, the LMB filter, and the LMB smoother are 1341 s, 1537 s, 156 s and 356 s, respectively. The proposed LMB smoother has a higher computational cost than the LMB filter, but a clearly lower one than the PHD smoother and the MeMBer smoother. The results comply with our theoretical analysis.

## 6. Conclusions

This paper derives a computationally efficient forward–backward LMB smoother that is closed under the backward smoothing operation and has the advantage of maintaining the independence of different tracks and their track outputs. Both numerical analysis and simulation have demonstrated that the proposed LMB smoother can effectively improve the tracking performance as compared to the PHD smoother, the MeMBer smoother and the LMB filter, and has a lower computational complexity than the PHD smoother and the MeMBer smoother. We should point out that our smoother cannot solve the problem of track fragmentation [34] when the label of a track changes before the track ends. It is our future work to investigate curve/track fitting approaches [19,35,36] for improving the continuity and smoothness of the estimated tracks.

## Author Contributions

Methodology, software and draft, R.L.; methodology, review and revision, H.F.; methodology, review and revision, T.L.; methodology, supervision and validation, H.X.

## Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities under grant 3102019ZDHQD08 and by National Natural Science Foundation of China under grant 51975482.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A

**Proof of**

**Lemma 1.**

If ${\mathbf{X}}_{a}$ and ${\mathbf{X}}_{b}$ are both LMB RFSs and ${\mathbb{L}}_{a}\bigcap {\mathbb{L}}_{b}=\varnothing $, then $\mathbf{X}={\mathbf{X}}_{a}\bigcup {\mathbf{X}}_{b}$ is an LMB RFS. The conclusion can be obtained from the fundamental convolution theorem [2] and also from the independence of the existence probabilities and densities of different tracks for LMB RFSs [31].

The product of ${\pi}_{a}\left({\mathbf{X}}_{a}\right)$ and ${\pi}_{b}\left({\mathbf{X}}_{b}\right)$ can be denoted as
where $\mathbb{L}={\mathbb{L}}_{a}\bigcup {\mathbb{L}}_{b}$ represents the label space of $\mathbf{X}$. Note that the computation of (A1) is reversible. That is, the union of multiple LMB RFSs on the disjoint subspaces can compose a single LMB RFS on the joint space and vice versa. □

$$\begin{array}{cc}\hfill {\pi}_{a}\left({\mathbf{X}}_{a}\right){\pi}_{b}\left({\mathbf{X}}_{b}\right)=& \Delta \left({\mathbf{X}}_{a}\bigcup {\mathbf{X}}_{b}\right)\left(\prod _{\ell \in {\mathbb{L}}_{a}\bigcup {\mathbb{L}}_{b}}\left(1-{r}^{\ell}\right)\right)\times \hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \left(\prod _{\ell \in L\left({\mathbf{X}}_{a}\bigcup {\mathbf{X}}_{b}\right)}\frac{{1}_{{\mathbb{L}}_{a}\bigcup {\mathbb{L}}_{b}}\left(\ell \right){r}^{\ell}}{\left(1-{r}^{\ell}\right)}\right){\left[p\left(x,\ell \right)\right]}^{{\mathbf{X}}_{a}\bigcup {\mathbf{X}}_{b}}\hfill \\ \hfill =& \Delta \left(\mathbf{X}\right)\left(\prod _{\ell \in \mathbb{L}}\left(1-{r}^{\ell}\right)\prod _{\ell \in L\left(\mathbf{X}\right)}\frac{{1}_{\mathbb{L}}\left(\ell \right){r}^{\ell}}{\left(1-{r}^{\ell}\right)}\right){\left[p\left(x,\ell \right)\right]}^{\mathbf{X}}\hfill \\ \hfill =& \pi \left(\mathbf{X}\right)\hfill \end{array}$$

## Appendix B

**Proof of**

**Proposition 1.**

The proof of Proposition 1 can be summarized as:

Firstly, we decompose the set integral of the backward smoothing recursion (13) into two parts as in (A4): one relates to the surviving targets, and the other relates to the newborn targets.

Then, the second part, for the newborn targets, equals 1 as shown in (A7). We can conclude that newborn targets cannot be propagated back to before their birth, so the set integral of (13) equals the set integral of the first part for the surviving targets.

Finally, we prove that the set integral of the first part for the surviving targets is an LMB.

From the assumptions, ${\pi}_{k|t}\left(\mathbf{Y}\right)$, ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ and ${\pi}_{k-1|k-1}\left(\mathbf{X}\right)$ are all LMB distributions. ${\pi}_{k|t}\left(\mathbf{Y}\right)$ is denoted as

$${\pi}_{k|t}\left(\mathbf{Y}\right)=\Delta \left(\mathbf{Y}\right)\prod _{\ell \in {\mathbb{L}}_{1:k}}\left(1-{r}_{k|t}^{\ell}\right)\prod _{\ell \in L\left(\mathbf{Y}\right)}\frac{{1}_{{\mathbb{L}}_{1:k}}\left(\ell \right){r}_{k|t}^{\ell}{p}_{k|t}\left(y,\ell \right)}{\left(1-{r}_{k|t}^{\ell}\right)}$$

The label space of ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ is ${\mathbb{L}}_{1:k}$. ${\pi}_{k|t}\left(\mathbf{Y}\right)$ is initialized with ${\pi}_{t|t}\left(\mathbf{Y}\right)$ and its label space is ${\mathbb{L}}_{1:k}$ when $k=t$. ${\pi}_{k|t}\left(\mathbf{Y}\right)$ keeps the same label space as ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ at all other times, as explained after (A7). The newborn targets can be denoted by an LMB RFS ${\pi}_{B,k|k-1}={\left\{\left({r}_{B,k|k-1}^{\ell},{p}_{B,k|k-1}^{\ell}\right)\right\}}_{\ell \in {\mathbb{L}}_{k}}$. The multi-target transition density ${f}_{k|k-1}\left(\mathbf{Y}|\mathbf{X}\right)$ can be denoted by (16).

Let $\mathbf{Y}={\mathbf{Y}}^{+}\bigcup {\mathbf{Y}}^{-}$. ${\mathbf{Y}}^{+}$ denotes the newborn targets at k and $L\left({\mathbf{Y}}^{+}\right)\subseteq {\mathbb{L}}_{k}$. ${\mathbf{Y}}^{-}$ denotes the surviving targets from $k-1$ to k and $L\left({\mathbf{Y}}^{-}\right)\subseteq {\mathbb{L}}_{1:k-1}$. The backward smoothing recursion (13) can be reformulated as
where

$${\pi}_{k-1|t}\left(\mathbf{X}\right)={\pi}_{k-1|k-1}\left(\mathbf{X}\right)\int {f}_{B,k|k-1}\left({\mathbf{Y}}^{+}\right){f}_{s,k|k-1}\left({\mathbf{Y}}^{-}|\mathbf{X}\right)\frac{{\pi}_{k|t}\left(\mathbf{Y}\right)}{{\pi}_{k|k-1}\left(\mathbf{Y}\right)}\delta \mathbf{Y}$$

$$=\underset{\text{surviving targets}}{\underbrace{{\pi}_{k-1|k-1}\left(\mathbf{X}\right)\int {f}_{s,k|k-1}\left({\mathbf{Y}}^{-}|\mathbf{X}\right)\frac{{\pi}_{k|t}^{-}\left({\mathbf{Y}}^{-}\right)}{{\pi}_{k|k-1}^{-}\left({\mathbf{Y}}^{-}\right)}\delta {\mathbf{Y}}^{-}}}\underset{\text{newborn targets}}{\underbrace{\int {f}_{B,k|k-1}\left({\mathbf{Y}}^{+}\right)\frac{{\pi}_{k|t}^{+}\left({\mathbf{Y}}^{+}\right)}{{\pi}_{k|k-1}^{+}\left({\mathbf{Y}}^{+}\right)}\delta {\mathbf{Y}}^{+}}}$$

$${\pi}_{k|t}\left(\mathbf{Y}\right)={\pi}_{k|t}^{-}\left({\mathbf{Y}}^{-}\right){\pi}_{k|t}^{+}\left({\mathbf{Y}}^{+}\right)$$

$${\pi}_{k|k-1}\left(\mathbf{Y}\right)={\pi}_{k|k-1}^{-}\left({\mathbf{Y}}^{-}\right){\pi}_{k|k-1}^{+}\left({\mathbf{Y}}^{+}\right)$$

The decomposition of ${\pi}_{k|t}\left(\mathbf{Y}\right)$ in (A5), as well as that of ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ in (A6), complies with Lemma 1. The derivation from (A3) to (A4) also applies the proposition in Section 3.5.3 of [2] that the single set integral on the joint space can be written as a multiple set integral on the disjoint subspaces. Formula (A4) consists of two parts: one, involving $\mathbf{X}$, corresponds to the smoothing of the surviving targets, and the other to the smoothing of the newborn targets. Since ${f}_{B,k|k-1}\left({\mathbf{Y}}^{+}\right)={\pi}_{k|k-1}^{+}\left({\mathbf{Y}}^{+}\right)$, the second part of (A4) equals 1, as shown by

$$\int {f}_{B,k|k-1}\left({\mathbf{Y}}^{+}\right)\frac{{\pi}_{k|t}^{+}\left({\mathbf{Y}}^{+}\right)}{{\pi}_{k|k-1}^{+}\left({\mathbf{Y}}^{+}\right)}\delta {\mathbf{Y}}^{+}=\int {\pi}_{k|t}^{+}\left({\mathbf{Y}}^{+}\right)\delta {\mathbf{Y}}^{+}=1$$

Formula (A7) can be explained as follows: newborn targets cannot be propagated back to before their birth, i.e., if a target is born at k, it cannot be alive at $k-1$ via backward LMB smoothing. Furthermore, ${\pi}_{k|t}\left(\mathbf{Y}\right)$ cannot contain targets born after k, so ${\pi}_{k|t}\left(\mathbf{Y}\right)$ and ${\pi}_{k|k-1}\left(\mathbf{Y}\right)$ have the same label space ${\mathbb{L}}_{1:k}$, which is the union of the label space at $k-1$ and the label space of the targets born at k.

Combining (35), (36) and (A7), Formula (A4) can be further written as

$$\begin{array}{ccc}\hfill {\pi}_{k-1|t}\left(\mathbf{X}\right)& =& {\pi}_{k-1|k-1}\left(\mathbf{X}\right)\int {f}_{s,k|k-1}\left({\mathbf{Y}}^{-}|\mathbf{X}\right)\frac{{\pi}_{k|t}^{-}\left({\mathbf{Y}}^{-}\right)}{{\pi}_{k|k-1}^{-}\left({\mathbf{Y}}^{-}\right)}\delta {\mathbf{Y}}^{-}\hfill \\ & \hfill =& {\pi}_{k-1|k-1}\left(\mathbf{X}\right)\Delta \left(\mathbf{X}\right){\left(\frac{1-{r}_{k|t}^{\ell}}{1-{r}_{k|k-1}^{\ell}}\right)}^{{\mathbb{L}}_{1:k-1}}{\left(1-{p}_{s,k|k-1}\left(x,\ell \right)\right)}^{\mathbf{X}}\int \Delta \left({\mathbf{Y}}^{-}\right)\hfill \\ & & \times {1}_{L\left(\mathbf{X}\right)}\left(L\left({\mathbf{Y}}^{-}\right)\right){\left(\frac{\left(1-{r}_{k|k-1}^{\ell}\right){p}_{s,k|k-1}\left(x,\ell \right){r}_{k|t}^{\ell}{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{\left(1-{p}_{s,k|k-1}\left(x,\ell \right)\right)\left(1-{r}_{k|t}^{\ell}\right){r}_{k|k-1}^{\ell}{p}_{k|k-1}\left(y,\ell \right)}\right)}^{{\mathbf{Y}}^{-}}\delta {\mathbf{Y}}^{-}\hfill \\ & \hfill =& {\pi}_{k-1|k-1}\left(\mathbf{X}\right){\left(\frac{1-{r}_{k|t}^{\ell}}{1-{r}_{k|k-1}^{\ell}}\right)}^{{\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}{\left({\alpha}_{s,k|t}\left(x,\ell \right)\right)}^{\mathbf{X}}\int \Delta \left({\mathbf{Y}}^{-}\right)\hfill \\ & & \times {1}_{L\left(\mathbf{X}\right)}\left(L\left({\mathbf{Y}}^{-}\right)\right){\left(\frac{{\beta}_{s,k|t}\left(x,\ell \right)}{{\alpha}_{s,k|t}\left(x,\ell \right)}\right)}^{L\left({\mathbf{Y}}^{-}\right)}{\left(\frac{{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}\right)}^{{\mathbf{Y}}^{-}}\delta {\mathbf{Y}}^{-}\hfill \end{array}$$

Applying Lemma 3 of [13] (or Lemma 1 in Section 15.5.1 of [2]) and the power-functional identity in Section 3.7 of [2], we have

$$\begin{array}{cc}\hfill {\pi}_{k-1|t}\left(\mathbf{X}\right)=& {\pi}_{k-1|k-1}\left(\mathbf{X}\right){\left(\frac{1-{r}_{k|t}^{\ell}}{1-{r}_{k|k-1}^{\ell}}\right)}^{{\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}{\left({\alpha}_{s,k|t}\left(x,\ell \right)\right)}^{\mathbf{X}}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\times \sum _{L\left({\mathbf{Y}}^{-}\right)\subseteq L\left(\mathbf{X}\right)}\prod _{\ell \in L\left({\mathbf{Y}}^{-}\right)}\int \frac{{\beta}_{s,k|t}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{\alpha}_{s,k|t}\left(x,\ell \right){p}_{k|k-1}\left(y,\ell \right)}dy\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& ={\pi}_{k-1|k-1}\left(\mathbf{X}\right){\left(\frac{1-{r}_{k|t}^{\ell}}{1-{r}_{k|k-1}^{\ell}}\right)}^{{\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}{\left({\alpha}_{s,k|t}\left(x,\ell \right)\right)}^{\mathbf{X}}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\times \prod _{\ell \in L\left(\mathbf{X}\right)}\left(1+\int \frac{{\beta}_{s,k|t}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{\alpha}_{s,k|t}\left(x,\ell \right){p}_{k|k-1}\left(y,\ell \right)}dy\right)\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& ={\pi}_{k-1|k-1}\left(\mathbf{X}\right){\left(\frac{1-{r}_{k|t}^{\ell}}{1-{r}_{k|k-1}^{\ell}}\right)}^{{\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& 
\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\times \prod _{\ell \in L\left(\mathbf{X}\right)}\left({\alpha}_{s,k|t}\left(x,\ell \right)+\int {\beta}_{s,k|t}\left(x,\ell \right)\frac{{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\right)\hfill \end{array}$$

Substituting ${\pi}_{k-1|k-1}\left(\mathbf{X}\right)$ into (A9) yields

$$\begin{array}{cc}\hfill {\pi}_{k-1|t}\left(\mathbf{X}\right)=& \Delta \left(\mathbf{X}\right)\prod _{\ell \in {\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}\frac{\left(1-{r}_{k-1|k-1}^{\ell}\right)\left(1-{r}_{k|t}^{\ell}\right)}{\left(1-{r}_{k|k-1}^{\ell}\right)}\prod _{\ell \in L\left(\mathbf{X}\right)}\left({1}_{{\mathbb{L}}_{1:k-1}}\left(\ell \right){r}_{k-1|k-1}^{\ell}\right.\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\times \left.{p}_{k-1|k-1}\left(x,\ell \right)\left({\alpha}_{s,k|t}\left(x,\ell \right)+\int {\beta}_{s,k|t}\left(x,\ell \right)\frac{{f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\right)\right)\hfill \end{array}$$

Let

$$\begin{array}{cc}\hfill {\pi}_{k-1|t}\left(x,\ell \right)=& {r}_{k-1|t}^{\ell}{p}_{k-1|t}\left(x,\ell \right)\hfill \\ \hfill =& {r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right)\left({\alpha}_{s,k|t}\left(x,\ell \right)+\int \frac{{\beta}_{s,k|t}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\right)\hfill \end{array}$$

We can obtain that

$${r}_{k-1|t}^{\ell}=\int {\pi}_{k-1|t}\left(x,\ell \right)dx$$

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& {p}_{k-1|t}\left(x,\ell \right)=\frac{{\pi}_{k-1|t}\left(x,\ell \right)}{{r}_{k-1|t}^{\ell}}=\frac{{\pi}_{k-1|t}\left(x,\ell \right)}{\int {\pi}_{k-1|t}\left(x,\ell \right)dx}\hfill \end{array}$$

From (A12) we can obtain (33); the detailed proof is given in Appendix C. We can also obtain (34) by substituting (A11) into (A13).

Substituting (A12) and (A13) (or (33) and (34)) into (A10) yields

$$\begin{array}{cc}\hfill {\pi}_{k-1|t}\left(\mathbf{X}\right)=& \Delta \left(\mathbf{X}\right)\prod _{\ell \in {\mathbb{L}}_{1:k-1}-L\left(\mathbf{X}\right)}\left(1-{r}_{k-1|t}^{\ell}\right)\prod _{\ell \in L\left(\mathbf{X}\right)}\left({1}_{{\mathbb{L}}_{1:k-1}}\left(\ell \right){r}_{k-1|t}^{\ell}{p}_{k-1|t}\left(x,\ell \right)\right)\hfill \\ \hfill =& \Delta \left(\mathbf{X}\right)\prod _{\ell \in {\mathbb{L}}_{1:k-1}}\left(1-{r}_{k-1|t}^{\ell}\right)\prod _{\ell \in L\left(\mathbf{X}\right)}\frac{{1}_{{\mathbb{L}}_{1:k-1}}\left(\ell \right){r}_{k-1|t}^{\ell}{p}_{k-1|t}\left(x,\ell \right)}{\left(1-{r}_{k-1|t}^{\ell}\right)}\hfill \end{array}$$

## Appendix C

**Proof.**

**(from (A12) to (33)).**

Firstly, note that the target $\left(x,\ell \right)$ at k in (A12) is a surviving target, with $\ell \in {\mathbb{L}}_{1:k-1}$ and $\ell \notin {\mathbb{L}}_{k}$. This is because newborn targets cannot be propagated back to before their birth via smoothing, from (61) (that is, ${\pi}_{k-1|t}\left(\mathbf{X}\right)$ cannot contain targets born after $k-1$). Therefore, we can apply the labeled single-target predicted density formulas (the same as (20) and (21) for a surviving target)

$${r}_{k|k-1}^{\ell}={r}_{k-1|k-1}^{\ell}\int {p}_{s,k|k-1}\left(x,\ell \right){p}_{k-1|k-1}\left(x,\ell \right)dx$$

$${p}_{k|k-1}\left(y,\ell \right)=\frac{\int {r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right){p}_{s,k|k-1}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right)dx}{{r}_{k|k-1}^{\ell}}$$
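In an SMC implementation, (A14) and (A15) reduce to simple weight and particle operations. The constant survival probability, the linear-Gaussian transition, and all numbers in the sketch below are assumptions for illustration, not the paper's specific model:

```python
import numpy as np

# Illustrative SMC form of the single-target prediction (A14)-(A15).
rng = np.random.default_rng(1)

r_prev = 0.9                               # r_{k-1|k-1}
particles = rng.normal(size=(1000, 2))     # samples of p_{k-1|k-1}(x, l)
weights = np.full(1000, 1.0 / 1000)        # normalized weights

p_s = 0.95                                 # survival probability (assumed constant)
F = np.array([[1.0, 1.0], [0.0, 1.0]])     # transition matrix (assumed)

# (A14): r_{k|k-1} = r_{k-1|k-1} * E[p_s(x)] under p_{k-1|k-1}
r_pred = r_prev * np.sum(weights * p_s)

# (A15): propagate particles through the transition kernel f_{k|k-1};
# with a constant p_s the renormalized weights remain uniform.
particles_pred = particles @ F.T + 0.1 * rng.normal(size=(1000, 2))

assert np.isclose(r_pred, 0.9 * 0.95)
```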

Then, from the definitions of (A11) and (A12), we can obtain

$$\begin{array}{cc}\hfill {r}_{k-1|t}^{\ell}=& \int {\pi}_{k-1|t}\left(x,\ell \right)dx\hfill \\ \hfill =& \int {\alpha}_{s,k|t}\left(x,\ell \right){r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right)dx+\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\int \int \phantom{\rule{0.277778em}{0ex}}\frac{{r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right){\beta}_{s,k|t}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dydx\hfill \end{array}$$

where the first term on the right-hand side of (A17) can be written as

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \int {\alpha}_{s,k|t}\left(x,\ell \right){r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right)dx\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\frac{\left(1-{r}_{k|t}^{\ell}\right){r}_{k-1|k-1}^{\ell}}{\left(1-{r}_{k|k-1}^{\ell}\right)}\int \left(1-{p}_{s,k|k-1}\left(x,\ell \right)\right){p}_{k-1|k-1}\left(x,\ell \right)dx\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\frac{\left(1-{r}_{k|t}^{\ell}\right)}{\left(1-{r}_{k|k-1}^{\ell}\right)}\left({r}_{k-1|k-1}^{\ell}-{r}_{k|k-1}^{\ell}\right)\hfill \end{array}$$

The second term on the right-hand side of (A17) can be written as

$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& \phantom{\rule{0.277778em}{0ex}}\int \int \phantom{\rule{0.277778em}{0ex}}\frac{{r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right){\beta}_{s,k|t}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right){p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dydx\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\int \frac{\int {r}_{k-1|k-1}^{\ell}{p}_{k-1|k-1}\left(x,\ell \right){p}_{s,k|k-1}\left(x,\ell \right){f}_{k|k-1}\left(y|x,\ell \right)dx}{{r}_{k|k-1}^{\ell}}\frac{{r}_{k|t}^{\ell}{p}_{k|t}\left(y,\ell \right)}{{p}_{k|k-1}\left(y,\ell \right)}dy\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& =\int {r}_{k|t}^{\ell}{p}_{k|t}\left(y,\ell \right)dy\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& ={r}_{k|t}^{\ell}\hfill \end{array}$$

Finally, combining (A17)–(A19) yields

$${r}_{k-1|t}^{\ell}=\frac{\left(1-{r}_{k|t}^{\ell}\right)}{\left(1-{r}_{k|k-1}^{\ell}\right)}\left({r}_{k-1|k-1}^{\ell}-{r}_{k|k-1}^{\ell}\right)+{r}_{k|t}^{\ell}=1-\frac{\left(1-{r}_{k-1|k-1}^{\ell}\right)\left(1-{r}_{k|t}^{\ell}\right)}{1-{r}_{k|k-1}^{\ell}}$$

which completes the proof. □
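The closed-form recursion (A20) is straightforward to compute. The following sketch, with illustrative probability values, checks two of its properties:

```python
def smoothed_existence(r_filt, r_pred, r_smooth_next):
    """(A20): smoothed existence probability r_{k-1|t} from the filtered
    r_{k-1|k-1}, the predicted r_{k|k-1}, and the next-step smoothed r_{k|t}."""
    return 1.0 - (1.0 - r_filt) * (1.0 - r_smooth_next) / (1.0 - r_pred)

# If smoothing leaves the next-step probability unchanged (r_{k|t} = r_{k|k-1}),
# the recursion returns the filtered probability unchanged.
assert abs(smoothed_existence(0.9, 0.855, 0.855) - 0.9) < 1e-12

# If the backward pass raises r_{k|t} above r_{k|k-1}, then r_{k-1|t} increases too.
assert smoothed_existence(0.9, 0.855, 0.99) > 0.9
```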

## References

1. Bar-Shalom, Y.; Willett, P.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA, 2001.
2. Mahler, R.P.S. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2014.
3. Vo, B.N.; Mallick, M.; Bar-Shalom, Y.; Coraluppi, S.; Osborne, R.; Mahler, R.P.S.; Vo, B.T. Multitarget Tracking. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015.
4. Wang, X.; Li, T.; Sun, S.; Corchado, J.M. A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking. Sensors 2017, 17, 2707.
5. Meyer, F.; Kropfreiter, T.; Williams, J.L.; Lau, R.; Hlawatsch, F.; Braca, P.; Win, M.Z. Message Passing Algorithms for Scalable Multitarget Tracking. Proc. IEEE 2018, 106, 221–259.
6. Blackman, S.S.; Popoli, R. Design and Analysis of Modern Tracking Systems; Artech House: Norwood, MA, USA, 1999.
7. Willett, P.; Ruan, Y.; Streit, R. PMHT: Problems and some solutions. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 738–754.
8. Mahler, R.P.S. "Statistics 103" for Multitarget Tracking. Sensors 2019, 19, 202.
9. Mahler, R.P.S. Multitarget Bayes Filtering via First-Order Multitarget Moments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1152–1178.
10. Yang, F.; Tang, W.; Liang, Y. A novel track initialization algorithm based on random sample consensus in dense clutter. Int. J. Adv. Robot. Syst. 2018, 15.
11. Vo, B.T.; Vo, B.N.; Cantoni, A. Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter. IEEE Trans. Signal Process. 2007, 55, 3553–3567.
12. Vo, B.T.; Vo, B.N.; Cantoni, A. The Cardinality Balanced Multi-Target Multi-Bernoulli Filter and Its Implementations. IEEE Trans. Signal Process. 2009, 57, 409–423.
13. Vo, B.T.; Vo, B.N. Labeled Random Finite Sets and Multi-Object Conjugate Priors. IEEE Trans. Signal Process. 2013, 61, 3460–3475.
14. Vo, B.N.; Vo, B.T.; Phung, D. Labeled Random Finite Sets and the Bayes Multi-Target Tracking Filter. IEEE Trans. Signal Process. 2014, 62, 6554–6567.
15. Reuter, S.; Vo, B.T.; Vo, B.N.; Dietmayer, K. The Labeled Multi-Bernoulli Filter. IEEE Trans. Signal Process. 2014, 62, 3246–3260.
16. Doucet, A.; Johansen, A.M. A Tutorial on Particle Filtering and Smoothing: Fifteen Years Later. In The Oxford Handbook of Nonlinear Filtering; Crisan, D., Rozovskii, B., Eds.; Oxford University Press: Oxford, UK, 2008; pp. 656–704.
17. Mahler, R.P.S.; Vo, B.T. Forward-Backward Probability Hypothesis Density Smoothing. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 707–728.
18. Vo, B.N.; Vo, B.T.; Mahler, R.P.S. Closed-form solutions to forward-backward smoothing. IEEE Trans. Signal Process. 2012, 60, 2–17.
19. Li, T.; Chen, H.; Sun, S.; Corchado, J.M. Joint Smoothing and Tracking Based on Continuous-Time Target Trajectory Function Fitting. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1476–1483.
20. Vo, B.T.; Clark, D.; Vo, B.N.; Ristic, B. Bernoulli Forward-Backward Smoothing for Joint Target Detection and Tracking. IEEE Trans. Signal Process. 2011, 59, 4473–4477.
21. Wong, S.; Vo, B.T.; Papi, F. Bernoulli Forward-Backward Smoothing for Track-Before-Detect. IEEE Signal Process. Lett. 2014, 21, 727–731.
22. Nadarajah, N.; Kirubarajan, T.; Lang, T. Multitarget Tracking using Probability Hypothesis Density Smoothing. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 2344–2360.
23. Nagappa, S.; Clark, D.E. Fast Sequential Monte Carlo PHD Smoothing. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011; pp. 1819–1825.
24. He, X.; Liu, G. Improved Gaussian mixture probability hypothesis density smoother. Signal Process. 2016, 120, 56–63.
25. Nagappa, S.; Delande, E.D.; Clark, D.E.; Houssineau, J. A Tractable Forward-Backward CPHD Smoother. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 201–217.
26. Clark, D.E. First-moment multi-object forward-backward smoothing. In Proceedings of the International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010; pp. 1–6.
27. Dong, L.; Hou, C.; Yi, D. Multi-Bernoulli smoother for multi-target tracking. Aerosp. Sci. Technol. 2016, 48, 234–245.
28. Vo, B.T.; Vo, B.N.; Hoang, H.G. An Efficient Implementation of the Generalized Labeled Multi-Bernoulli Filter. IEEE Trans. Signal Process. 2017, 65, 1975–1987.
29. Papi, F.; Vo, B.T.; Vo, B.N.; Fantacci, C.; Beard, M. Generalized Labeled Multi-Bernoulli Approximation of Multi-Object Densities. IEEE Trans. Signal Process. 2015, 63, 5487–5497.
30. Beard, M.; Vo, B.T.; Vo, B.N.; Arulampalam, S. Void Probabilities and Cauchy-Schwarz Divergence for Generalized Labeled Multi-Bernoulli Models. IEEE Trans. Signal Process. 2017, 65, 5047–5061.
31. Li, S.; Wei, Y.; Hoseinnezhad, R.; Wang, B.; Kong, L.J. Multi-Object Tracking for Generic Observation Model Using Labeled Random Finite Sets. IEEE Trans. Signal Process. 2018, 66, 368–383.
32. Beard, M.; Vo, B.T.; Vo, B.N. Generalised labelled multi-Bernoulli forward-backward smoothing. In Proceedings of the International Conference on Information Fusion, Heidelberg, Germany, 5–8 July 2016; pp. 1–7.
33. Chen, L. From labels to tracks: It's complicated. Proc. SPIE 2018, 10646.
34. Vo, B.N.; Vo, B.T. A Multi-Scan Labeled Random Finite Set Model for Multi-Object State Estimation. IEEE Trans. Signal Process. 2019, 67, 4948–4963.
35. Li, T. Single-Road-Constrained Positioning Based on Deterministic Trajectory Geometry. IEEE Commun. Lett. 2019, 23, 80–83.
36. Li, T.; Wang, X.; Liang, Y.; Yan, J.; Fan, H. A Track-Oriented Approach to Target Tracking with Random Finite Set Observations. In Proceedings of the ICCAIS 2019, Chengdu, China, 23–26 October 2019.
37. Liu, R.; Fan, H.; Xiao, H. A Forward-Backward Labeled Multi-Bernoulli Smoother. In Proceedings of the International Conference on Distributed Computing and Artificial Intelligence, Avila, Spain, 26–28 June 2019; pp. 253–261.
38. Li, T.; Su, J.; Liu, W.; Corchado, J.M. Approximate Gaussian conjugacy: Recursive parametric filtering under nonlinearity, multimodality, uncertainty, and constraint, and beyond. Front. Inf. Technol. Electron. Eng. 2017, 18, 1913–1939.
39. Li, T.; Bolic, M.; Djuric, P. Resampling Methods for Particle Filtering: Classification, Implementation, and Strategies. IEEE Signal Process. Mag. 2015, 32, 70–86.
40. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457.

**Figure 3.** The true and estimated trajectories of the targets. Trajectories estimated by the LMB smoother are shown as dots of different colors, where '∘' marks the initiation and 'Δ' marks the termination of each trajectory.

**Figure 5.**The true and estimated cardinalities of the PHD smoother, the MeMBer smoother, the LMB filter and the proposed LMB smoother.

| Method | Total OSPA (m) | Localization Component (m) | Cardinality Component (m) |
|---|---|---|---|
| PHD filter | 34.3821 | 26.4094 | 7.9727 |
| PHD smoother | 27.3199 | 18.6526 | 8.6673 |
| CBMeMBer filter | 30.2842 | 22.3566 | 7.9276 |
| MeMBer smoother | 22.0721 | 14.6664 | 7.4056 |
| LMB filter | 25.6800 | 22.6574 | 3.0226 |
| LMB smoother | 15.1762 | 14.4932 | 0.6830 |
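As a quick consistency check, the relative OSPA reduction achieved by each smoother over its corresponding filter can be computed directly from the reported values:

```python
# Total OSPA errors (m) reported in the table above.
ospa_total = {
    "PHD filter": 34.3821, "PHD smoother": 27.3199,
    "CBMeMBer filter": 30.2842, "MeMBer smoother": 22.0721,
    "LMB filter": 25.6800, "LMB smoother": 15.1762,
}

def reduction(filter_name, smoother_name):
    """Relative total-OSPA reduction of a smoother over its filter."""
    f, s = ospa_total[filter_name], ospa_total[smoother_name]
    return (f - s) / f

# The LMB smoother cuts the LMB filter's total OSPA by about 41%,
# versus roughly 21% for the PHD and 27% for the MeMBer smoothers.
assert 0.40 < reduction("LMB filter", "LMB smoother") < 0.42
assert 0.20 < reduction("PHD filter", "PHD smoother") < 0.22
assert 0.26 < reduction("CBMeMBer filter", "MeMBer smoother") < 0.28
```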

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).