A Computationally Efficient Labeled Multi-Bernoulli Smoother for Multi-Target Tracking

A forward–backward labeled multi-Bernoulli (LMB) smoother is proposed for multi-target tracking. The proposed smoother consists of two components: forward LMB filtering and backward LMB smoothing. The former is the standard LMB filter, and the latter is proved to be closed under an LMB prior. It is also shown that the proposed LMB smoother improves both cardinality estimation and state estimation, and that its major computational complexity is linear in the number of targets. A Sequential Monte Carlo implementation in a representative scenario demonstrates the effectiveness and computational efficiency of the proposed smoother in comparison to existing approaches.


Introduction
Multi-target tracking (MTT) is widely used in many engineering fields, including aerospace surveillance, biomedical analytics, autonomous driving, indoor localization and robotic networks [1][2][3][4][5]. In these applications, both the number of targets and their states may vary in time, and the measurements are corrupted by clutter and missed detections [2,3]. Traditionally, approaches to MTT are built on data association methods such as joint probabilistic data association [1] or multiple hypothesis tracking [6,7]. A different approach, developed by Mahler based on the random finite set (RFS) theory [2], has attracted substantial attention in the last decade. RFS methods explicitly model the target states and the measurements as RFSs [8]. A variety of RFS filters have been proposed, including the probability hypothesis density (PHD) filter [9,10], the cardinalized PHD (CPHD) filter [11], the cardinality-balanced multi-Bernoulli (CBMeMBer) filter [12], the generalized labeled multi-Bernoulli (GLMB) filter [13,14] and the labeled multi-Bernoulli (LMB) filter [15]. Whereas filtering estimates the current state, smoothing [16][17][18][19] estimates a past target state using all measurements up to the current time; it achieves better accuracy than filtering but incurs a higher computational cost. It is therefore of great practical significance to develop an RFS smoother that is computationally efficient, reliable and accurate.
The Bernoulli smoother was first proposed in [20,21]; it performs better than the Bernoulli filter but accommodates at most one target. For the multi-target problem, forward-backward PHD smoothers were proposed in [17,18,22,23,24]. It has been shown in [17] that the PHD smoother can improve the accuracy of position estimation compared with the PHD filter, but does not necessarily yield better cardinality estimation. The CPHD smoother proposed in [25] uses an approximate scheme to overcome the intractability of the classic CPHD smoother [26], but bears a complicated algorithm structure. A forward-backward multi-target multi-Bernoulli (MeMBer) smoother was proposed in [27], which also does not necessarily improve the cardinality estimation. However, these smoothers, like the corresponding filters, do not provide information for each track. Labeled RFS-based filters and smoothers have therefore been developed [28][29][30][31][32] to generate track estimates, which is also the focus of this paper.
The GLMB smoother proposed in [32] is the first exact closed-form solution to the smoothing recursion based on labeled RFSs, but it has not been implemented in practice due to the overcomplicated data associations. Recently, Chen has pointed out the challenge of forming tracks, since the optimal solution is not as simple as labeling [33]. Relatedly, a multi-scan GLMB filter based on the smoothing-while-filtering framework was proposed in [34] for better labeling. However, the truncation of the multi-scan GLMB filter requires solving an NP-hard multi-dimensional assignment problem. In short, the existing labeled smoothers have a high computational complexity even when practically implementable. Also related to the smoothing-while-filtering framework [34] is joint smoothing and filtering [19,35,36], which, however, has so far only considered a single target.
In this paper, we derive a forward-backward LMB smoother for multi-target tracking. Preliminary and limited results were published in [37]; this paper provides additional results, complete proofs, and additional experiments. The proposed smoother consists of two parts: forward LMB filtering and backward LMB smoothing. While the former applies the standard LMB filter [15], the key contribution of our work lies in the design of the backward smoothing algorithm. We prove that the proposed backward LMB smoothing is closed under an LMB prior for the standard multi-target system models, and that the backward smoothed density of each track has the same form as the Bernoulli backward smoothed density [20]. In contrast to parametric/Gaussian (mixture) approximations [38], the Sequential Monte Carlo (SMC) method can represent arbitrary/non-Gaussian models [4]. Based on the SMC method, the proposed smoother reduces both the state error and the cardinality error compared with the PHD smoother [17], the MeMBer smoother [27] and the LMB filter [15], and has a lower computational complexity than the PHD smoother and the MeMBer smoother.
The rest of the paper is organized as follows. Basic definitions of labeled RFSs and the multi-target Bayes forward-backward smoother are reviewed briefly in Section 2. The proposed forward-backward LMB smoother is derived in Section 3 and its SMC implementation is given in Section 4. Simulation results are presented in Section 5 before we conclude in Section 6.

Notation
Single-target states are denoted by lowercase letters, for example, x (unlabeled) and 𝐱 (labeled). Multi-target states are denoted by uppercase letters, for example, X (unlabeled) and 𝐗 (labeled). State spaces are denoted by blackboard bold letters: a label space containing a countable number of distinct labels is denoted 𝕃, and the unlabeled target state space is denoted 𝕏. A labeled single-target state has the form 𝐱 = (x, ℓ), where x ∈ 𝕏 and ℓ ∈ 𝕃. A labeled multi-target state has the form 𝐗 = {𝐱_1, ..., 𝐱_i, ..., 𝐱_{|𝐗|}}, where 𝐱_i denotes a labeled single-target state in 𝐗, |·| denotes the cardinality (i.e., the number of targets) of a multi-target set, and 𝐗 ⊆ 𝕏 × 𝕃.
The projection L : 𝕏 × 𝕃 → 𝕃 is defined by L(𝐱) = ℓ and L(𝐗) = {L(𝐱) : 𝐱 ∈ 𝐗}. The generalized Kronecker delta function and the inclusion function are defined as

δ_Y(X) = 1 if X = Y, and δ_Y(X) = 0 otherwise;
1_Y(X) = 1 if X ⊆ Y, and 1_Y(X) = 0 otherwise, with 1_Y(x) shorthand for 1_Y({x}).

The distinct label indicator is defined as Δ(𝐗) ≜ δ_{|𝐗|}(|L(𝐗)|). The inner product of f(x) and g(x) on the labeled single-target space is defined as ⟨f, g⟩ ≜ ∫ f(𝐱) g(𝐱) d𝐱. The multi-target exponential of a real-valued function h is defined as h^𝐗 ≜ ∏_{𝐱∈𝐗} h(𝐱), with h^∅ = 1.

GLMB and LMB RFS
The GLMB RFS X is a labeled RFS constructed from labeled multi-target states. The distribution of a GLMB RFS [13] is exactly closed under the prediction and update of the multi-target Bayes filter. The GLMB distribution is denoted as

π(X) = Δ(X) Σ_{c∈C} ω^{(c)}(L(X)) [p^{(c)}]^X, (5)

where C is a discrete index set, p^{(c)}(·, ℓ) is the density of track ℓ, ω^{(c)}(I) is the nonnegative weight of the hypothesis (c, I), and Σ_{c∈C} Σ_{I⊆L} ω^{(c)}(I) = 1. The distribution of an LMB RFS with the parameter set π = {(r^{(ℓ)}, p^{(ℓ)})}_{ℓ∈L} is given by [15]

π(X) = Δ(X) ω(L(X)) p^X, (6)

where

ω(I) = ∏_{i∈L} (1 − r^{(i)}) ∏_{ℓ∈I} [1_L(ℓ) r^{(ℓ)} / (1 − r^{(ℓ)})], p(x, ℓ) = p^{(ℓ)}(x), (7)

r^{(ℓ)} denotes the existence probability of track ℓ, p^{(ℓ)}(·) denotes its density, and ω(I) denotes the weight of the hypothesis I = {ℓ_1, ..., ℓ_{|I|}}. Note that L is a discrete countable space, and the number of labels in L equals the number of Bernoulli components (with non-zero existence probability) in the LMB RFS. An LMB RFS is a special case of a GLMB RFS, and both the densities and the existence probabilities of different tracks in an LMB RFS are uncorrelated. Two properties associated with an LMB RFS are as follows: its first-order moment (PHD) is v(x) = Σ_{ℓ∈L} r^{(ℓ)} p^{(ℓ)}(x) (8), and its cardinality distribution [2] is given by

ρ(n) = ∏_{j∈L} (1 − r^{(j)}) σ_{|L|,n}( r^{(ℓ_1)}/(1 − r^{(ℓ_1)}), ..., r^{(ℓ_{|L|})}/(1 − r^{(ℓ_{|L|})}) ), (9)

where σ_{ν,n}(x_1, ..., x_ν) is the elementary homogeneous symmetric function of degree n in ν variables.
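As a concrete illustration of (9), the cardinality distribution of an LMB RFS can be computed as the coefficients of the product polynomial ∏_ℓ ((1 − r^{(ℓ)}) + r^{(ℓ)} z), which is equivalent to evaluating the elementary symmetric functions above. A minimal sketch (the existence probabilities are illustrative):

```python
def lmb_cardinality(r):
    """Cardinality distribution of an LMB RFS with existence
    probabilities r: coefficients of prod_l ((1 - r_l) + r_l * z)."""
    rho = [1.0]  # polynomial coefficients, constant term first
    for rl in r:
        nxt = [0.0] * (len(rho) + 1)
        for n, c in enumerate(rho):
            nxt[n] += c * (1.0 - rl)   # track l absent
            nxt[n + 1] += c * rl       # track l present
        rho = nxt
    return rho

# Two tracks with existence probability 0.5 each:
print(lmb_cardinality([0.5, 0.5]))  # [0.25, 0.5, 0.25]
```

Each pass through the loop multiplies in one Bernoulli component, so the cost is quadratic in the number of tracks, which is negligible next to the particle computations.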
Lemma 1. If X_a and X_b are both LMB RFSs with probability densities π_a(X_a) and π_b(X_b), respectively, where X_a ⊆ X × L_a, X_b ⊆ X × L_b and L_a ∩ L_b = ∅, then X = X_a ∪ X_b is an LMB RFS with probability density π(X), and vice versa. The probability densities π_a(X_a), π_b(X_b) and π(X) are related as follows:

π(X) = π_a(X ∩ (X × L_a)) π_b(X ∩ (X × L_b)). (10)

The proof of Lemma 1 is given in Appendix A.

Multi-Target Bayes Forward-Backward Smoother
The recursion of the multi-target Bayes forward-backward smoother is shown in Figure 1; it consists of forward Bayes filtering and backward Bayes smoothing. The multi-target state at time t is X_t, where X_t ⊆ X × L_{1:t} and L_{1:t} denotes the label space of the targets at t (including those born prior to t). The measurement set at t is Z_t. The prediction and update of the forward Bayes filtering are [2]

π_{t|t−1}(X) = ∫ f_{t|t−1}(X|X′) π_{t−1|t−1}(X′) δX′, (11)
π_{t|t}(X) = g_t(Z_t|X) π_{t|t−1}(X) / ∫ g_t(Z_t|X′) π_{t|t−1}(X′) δX′, (12)

where π_{t|t−1} denotes the predicted multi-target density from t − 1 to t, f_{t|t−1} denotes the multi-target Markov transition density at t, π_{t|t} denotes the multi-target posterior density at t, and g_t denotes the multi-target likelihood function at t. The integrals in the equations are set integrals. If the backward smoothed density from t to k (k ≤ t), denoted π_{k|t}(Y), is initialized with π_{t|t}(Y), the backward Bayes smoothed density π_{k−1|t}(X) at k − 1 can be written as [32]

π_{k−1|t}(X) = π_{k−1|k−1}(X) ∫ [ f_{k|k−1}(Y|X) π_{k|t}(Y) / π_{k|k−1}(Y) ] δY. (13)
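For a single target on a finite state space, with ordinary sums in place of set integrals, the recursion (13) reduces to the classic discrete Bayes smoother, which may clarify its structure. A minimal sketch with an illustrative two-state Markov chain and likelihoods (all numbers are assumptions for the example):

```python
def forward_backward(prior, F, liks):
    """F[i][j] = p(x_t = j | x_{t-1} = i); liks[t][j] = g_t(z_t | x_t = j).
    Returns filtered p(x_k|z_1:k), predicted p(x_k|z_1:k-1), and
    smoothed p(x_k|z_1:T) marginals for all time steps k."""
    n, T = len(prior), len(liks)
    filt, pred = [], []
    p = prior
    for t in range(T):
        if t > 0:  # prediction, cf. (11)
            p = [sum(filt[-1][i] * F[i][j] for i in range(n)) for j in range(n)]
        pred.append(p)
        upd = [p[j] * liks[t][j] for j in range(n)]  # update, cf. (12)
        s = sum(upd)
        filt.append([u / s for u in upd])
    # backward recursion, cf. (13):
    # smooth_{k-1}(i) = filt_{k-1}(i) * sum_j F[i][j] * smooth_k(j) / pred_k(j)
    smooth = [None] * T
    smooth[-1] = filt[-1]
    for k in range(T - 1, 0, -1):
        smooth[k - 1] = [filt[k - 1][i]
                         * sum(F[i][j] * smooth[k][j] / pred[k][j]
                               for j in range(n))
                         for i in range(n)]
    return filt, pred, smooth

filt, pred, smooth = forward_backward(
    prior=[0.5, 0.5],
    F=[[0.9, 0.1], [0.2, 0.8]],
    liks=[[0.8, 0.1], [0.7, 0.2], [0.1, 0.9]])
```

The smoothed marginals remain normalized because summing the backward correction over states reproduces the predicted density in the denominator.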

Figure 1. The recursion of the multi-target Bayes forward-backward smoother: initialization, forward filtering (prediction and update) and backward smoothing.

Multi-Target Motion and Measurement Models
Let p_{S,t|t−1}(x, ℓ) denote the survival probability of target (x, ℓ) from t − 1 to t, and let f_{t|t−1}(x_+|(x, ℓ)) denote the single-target Markov transition density from t − 1 to t. X denotes the multi-target state at t − 1 and Y_− denotes the surviving multi-target state from t − 1 to t, where X ⊆ X × L_{1:t−1} and Y_− ⊆ X × L_{1:t−1}. Considering the surviving targets only, the multi-target Markov transition density f_{S,t|t−1}(Y_−|X) is given in [2,13] and denoted (14). The newborn targets are assumed to form an LMB RFS Y_+ ⊆ X × L_t, where L_t denotes the label space of the newborn targets at t; r_{B,t|t−1}^{(ℓ)} denotes the birth probability of track ℓ at t and p_{B,t|t−1}(y, ℓ) denotes the corresponding density, so the density f_{B,t|t−1}(Y_+) of the newborn targets is the LMB density (15) [2,13]. The multi-target state at t is Y = Y_+ ∪ Y_−, where Y_+ and Y_− are disjoint and independent. From Lemma 1, the joint multi-target Markov transition density is [2,13]

f_{t|t−1}(Y|X) = f_{S,t|t−1}(Y ∩ (X × L_{1:t−1}) | X) f_{B,t|t−1}(Y ∩ (X × L_t)). (16)

For the measurement model, p_{D,t}(x, ℓ) denotes the detection probability of target (x, ℓ) at t, and g_t(z|x, ℓ) denotes the likelihood that target (x, ℓ) generates the measurement z when detected. The intensity function (or the PHD) of the Poisson clutter is κ(·). An association map θ : L → {0, 1, ..., |Z_t|} assigns each track at most one measurement index, with θ(ℓ) = 0 denoting a missed detection; the measurements not assigned to any track are false alarms at time t. The resulting multi-target likelihood function g_t(Z_t|Y) is given in [2,13] and denoted (17).

LMB Smoother
In this section, we detail the proposed LMB smoother and discuss the cardinality estimation. The LMB smoother framework is depicted in Figure 2 and consists of forward LMB filtering and backward LMB smoothing. The forward LMB filtering used in our approach is the standard LMB filter [15]. At each time step, the backward LMB smoothing performs L_d backward smoothing recursions, where L_d denotes the fixed lag.

Figure 2. The LMB smoother framework: forward LMB filtering (prediction, update and first-order moment approximation) and backward LMB smoothing.

Prediction
Given that the multi-target prior at time t − 1 is an LMB (distribution) parameterized as π_{t−1|t−1} = {(r_{t−1|t−1}^{(ℓ)}, p_{t−1|t−1}^{(ℓ)})}_{ℓ∈L_{1:t−1}} and the density of the newborn targets is an LMB {(r_{B,t}^{(ℓ)}, p_{B,t}^{(ℓ)})}_{ℓ∈L_t}, the predicted multi-target density at t remains an LMB, which can be denoted as [12,15]

π_{t|t−1} = π_{S,t|t−1} ∪ {(r_{B,t}^{(ℓ)}, p_{B,t}^{(ℓ)})}_{ℓ∈L_t}, (18)

where

π_{S,t|t−1} = {(r_{S,t|t−1}^{(ℓ)}, p_{S,t|t−1}^{(ℓ)})}_{ℓ∈L_{1:t−1}}, (19)
r_{S,t|t−1}^{(ℓ)} = η_S^{(ℓ)} r_{t−1|t−1}^{(ℓ)}, (20)
η_S^{(ℓ)} = ⟨p_{S,t|t−1}(·, ℓ), p_{t−1|t−1}^{(ℓ)}(·)⟩, (21)
p_{S,t|t−1}^{(ℓ)}(x) = ⟨p_{S,t|t−1}(·, ℓ) f_{t|t−1}(x|·, ℓ), p_{t−1|t−1}^{(ℓ)}(·)⟩ / η_S^{(ℓ)}. (22)

Update
Assume that the predicted multi-target density at t is an LMB with the parameter set π_{t|t−1} = {(r_{t|t−1}^{(ℓ)}, p_{t|t−1}^{(ℓ)})}_{ℓ∈L_{1:t}}. Under the likelihood function of (17), the updated multi-target posterior density is a GLMB (strictly speaking, a δ-GLMB, which is a special case of a GLMB [13]), which can be denoted as [13]

π_{t|t}(X) = Δ(X) Σ_{(I_+,θ)∈F(L_{1:t})×Θ_{I_+}} ω^{(I_+,θ)}(Z_t) δ_{I_+}(L(X)) [p^{(θ)}(·|Z_t)]^X, (24)

where F(L_{1:t}) denotes the collection of finite subsets of L_{1:t}, I_+ = {ℓ_1, ..., ℓ_{|I_+|}} denotes the label set of a hypothesis, Θ_{I_+} is the set of association maps θ : I_+ → {0, 1, ..., |Z_t|}, and

ω^{(I_+,θ)}(Z_t) ∝ ω_{t|t−1}(I_+) [η_{Z_t}^{(θ)}]^{I_+}, (25)
p^{(θ)}(x, ℓ|Z_t) = p_{t|t−1}(x, ℓ) ψ_{Z_t}(x, ℓ; θ) / η_{Z_t}^{(θ)}(ℓ), (26)
η_{Z_t}^{(θ)}(ℓ) = ⟨p_{t|t−1}(·, ℓ), ψ_{Z_t}(·, ℓ; θ)⟩, (27)
ψ_{Z_t}(x, ℓ; θ) = p_{D,t}(x, ℓ) g_t(z_{θ(ℓ)}|x, ℓ) / κ(z_{θ(ℓ)}) if θ(ℓ) > 0, and 1 − p_{D,t}(x, ℓ) if θ(ℓ) = 0. (28)

In the update of the forward LMB filtering, the multi-target posterior density is matched to the first-order moment of (24) for computational simplicity, i.e., [15]

π_{t|t} ≈ {(r_{t|t}^{(ℓ)}, p_{t|t}^{(ℓ)})}_{ℓ∈L_{1:t}}, (29)

where

r_{t|t}^{(ℓ)} = Σ_{(I_+,θ)∈F(L_{1:t})×Θ_{I_+}} ω^{(I_+,θ)}(Z_t) 1_{I_+}(ℓ), (30)
p_{t|t}^{(ℓ)}(x) = (1/r_{t|t}^{(ℓ)}) Σ_{(I_+,θ)∈F(L_{1:t})×Θ_{I_+}} ω^{(I_+,θ)}(Z_t) 1_{I_+}(ℓ) p^{(θ)}(x, ℓ|Z_t). (31)

The approximate posterior density (29) of the forward LMB filtering preserves the first-order moment of the posterior density (24). It is proved in [29,31] that the approximate posterior density (29) is the density in the LMB class that minimizes the Kullback-Leibler divergence (KLD) relative to the posterior density (24).
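The first-order moment matching of (29)-(30) amounts to summing, for each label, the weights of all posterior hypotheses whose label set contains it. A minimal sketch with illustrative hypothesis weights (the labels and weights are assumptions for the example):

```python
def lmb_from_hypotheses(hyps):
    """hyps: list of (weight, labels) pairs from a normalized
    delta-GLMB posterior. Returns {label: existence probability},
    i.e., r(l) = sum of weights of hypotheses containing label l."""
    r = {}
    for w, labels in hyps:
        for l in labels:
            r[l] = r.get(l, 0.0) + w
    return r

# Three hypotheses: {t1, t2} with weight 0.5, {t1} with 0.3, {} with 0.2.
hyps = [(0.5, ["t1", "t2"]), (0.3, ["t1"]), (0.2, [])]
r = lmb_from_hypotheses(hyps)
# r["t1"] == 0.8, r["t2"] == 0.5
```

The track densities p_{t|t}^{(ℓ)} would be mixed with the same weights, as in (31), and renormalized by r^{(ℓ)}.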

Backward LMB Smoothing
In this subsection, the backward LMB smoothing is derived. Our derivation relies directly on the multi-target backward smoothing recursion (13), and differs from the derivation of the backward GLMB smoothing [32], which uses the backward corrector recursion [18] as an intermediate step to avoid the set integral of the quotient of two GLMBs.
Proposition 1. Given that the multi-target posterior π_{k−1|k−1}(X) and the predicted multi-target density π_{k|k−1}(Y) are both LMBs, and that the backward smoothed density π_{k|t}(Y) from t to k (k ≤ t) is an LMB, the backward smoothed density π_{k−1|t}(X) from t to k − 1 is also an LMB, π_{k−1|t} = {(r_{k−1|t}^{(ℓ)}, p_{k−1|t}^{(ℓ)})}_{ℓ∈L_{1:k−1}}, whose parameters are given by (33) and (34). The proof of Proposition 1 is given in Appendix B. Observe that the smoothed density (r_{k−1|t}^{(ℓ)}, p_{k−1|t}^{(ℓ)}) of track ℓ has the same form as the smoothed Bernoulli density when newborn targets are not taken into account [20]. Therefore, the proposed LMB smoother can be viewed as an extension of the Bernoulli smoother [20] to multiple targets. From (33), the existence probability of track ℓ depends on r_{k−1|k−1}^{(ℓ)}, r_{k|k−1}^{(ℓ)} and r_{k|t}^{(ℓ)}. From (34), the density of track ℓ contains two terms: one depends only on p_{k−1|k−1}(x, ℓ) and α_{S,k|t}(x, ℓ), preserving the forward filtering state at k − 1, while the other accounts for the backward smoothing.
Remark 1. The proposed LMB smoother owes its computational efficiency to two properties: First, the newborn tracks are uncorrelated with the backward smoothing, since a newborn track cannot be alive prior to its birth time; this results in a simple label space. Second, the existence probabilities and the probability densities of different tracks in an LMB RFS are uncorrelated, so each track component can be computed separately, and the computational complexity of the backward smoothing is linear in the number of tracks.

Remark 2.
The proposed LMB smoother is also approximately Bayes optimal, because the LMB family is closed under the prediction operation and the backward smoothing operation. Although the LMB family is not closed under the update operation, the first-order moment approximation in the LMB class minimizes the KLD relative to the posterior density.

SMC Implementation and Algorithm Analysis
This section presents the SMC implementation and the state extraction, and analyzes the algorithm complexity of the proposed LMB smoother.

SMC Implementation
Prediction: Suppose the multi-target posterior at t − 1 is π_{t−1|t−1} = {(r_{t−1|t−1}^{(ℓ)}, p_{t−1|t−1}^{(ℓ)})}_{ℓ∈L_{1:t−1}}. In the SMC implementation, p_{t−1|t−1}^{(ℓ)} is represented by a set of weighted particles {(ω_{t−1|t−1}^{(i,ℓ)}, x_{t−1|t−1}^{(i,ℓ)})}_{i=1}^{J^{(ℓ)}}. The density of the newborn targets at t is {(r_{B,t}^{(ℓ)}, p_{B,t}^{(ℓ)})}_{ℓ∈L_t}, where p_{B,t}^{(ℓ)} is also represented by a set of weighted particles. Through the prediction of (18)-(22), the predicted multi-target density at t is π_{t|t−1} = {(r_{S,t|t−1}^{(ℓ)}, p_{S,t|t−1}^{(ℓ)})}_{ℓ∈L_{1:t−1}} ∪ {(r_{B,t}^{(ℓ)}, p_{B,t}^{(ℓ)})}_{ℓ∈L_t}, where r_{S,t|t−1}^{(ℓ)} is calculated by (20) and p_{S,t|t−1}^{(ℓ)} is represented by particles propagated with f_{t|t−1} chosen as the importance sampling density.
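For a single surviving track, the prediction step (20)-(22) has a simple particle form: the survival probability reweights the particles and scales the existence probability, while the particles are propagated through the transition kernel. A minimal one-dimensional sketch (the random-walk kernel, constant survival probability and particle count are assumptions for the example):

```python
import random

def predict_track(r, particles, p_surv, transition, rng):
    """particles: list of (weight, state) pairs. Returns the predicted
    existence probability r_S = eta_S * r, with eta_S = <p_S, p>, and
    the propagated, reweighted particle set."""
    eta_s = sum(w * p_surv(x) for w, x in particles)       # normalizer, cf. (21)
    r_pred = eta_s * r                                     # cf. (20)
    new = [(w * p_surv(x) / eta_s, transition(x, rng))     # importance sampling
           for w, x in particles]                          # with f_{t|t-1}, cf. (22)
    return r_pred, new

rng = random.Random(0)
particles = [(1.0 / 100, rng.gauss(0.0, 1.0)) for _ in range(100)]
r_pred, pred_particles = predict_track(
    r=0.9,
    particles=particles,
    p_surv=lambda x: 0.95,                               # constant survival prob.
    transition=lambda x, rng: x + rng.gauss(0.0, 0.5),   # random-walk kernel
    rng=rng)
```

With a constant survival probability, r_pred is simply 0.95 × 0.9 and the particle weights are unchanged up to renormalization.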
Update: Denoting the predicted density by π_{t|t−1} = {(r_{t|t−1}^{(ℓ)}, p_{t|t−1}^{(ℓ)})}_{ℓ∈L_{1:t}}, it can be written in the form of (6) as {(ω_{t|t−1}(I_+), p_{t|t−1}^X)}_{I_+⊆L_{1:t}}, where ω_{t|t−1}(I_+) is given by (7) and I_+ = L(X). p_{t|t−1}^{(ℓ)} is represented by a set of weighted particles. The K-shortest paths algorithm [14] is used to truncate the predicted multi-target density in order to reduce the number of hypotheses. In the multi-target posterior density (24), p_{t|t}^{(θ)}(x, ℓ) can also be represented by a set of weighted particles, and the weight ω_{t|t}^{(I_+,θ)} of the hypothesis (I_+, θ) is obtained from the hypothesis weight update of (24), where θ ∈ Θ_{I_+} is an association map and (I_+, θ) ∈ F(L_{1:t}) × Θ_{I_+}.
To avoid computing all the hypotheses and their weights, only a specified number T_h of the largest weights is retained, and the ranked optimal assignment algorithm [14] is applied to truncate the multi-target posterior density. The multi-target posterior density of the forward LMB filtering matches the first-order moment of (24), denoted as an LMB with the parameter set π_{t|t} ≈ {(r_{t|t}^{(ℓ)}, p_{t|t}^{(ℓ)})}_{ℓ∈L_{1:t}}, where r_{t|t}^{(ℓ)} is given by (30) and p_{t|t}^{(ℓ)} is represented by a set of weighted particles collected over the hypotheses (I_+, θ) ∈ F(L_{1:t}) × Θ_{I_+}. The number of particles for p_{t|t}^{(ℓ)} grows rapidly, so resampling [39] is needed. Backward smoothing: From the forward LMB filtering, the multi-target posterior density at k − 1 is π_{k−1|k−1} = {(r_{k−1|k−1}^{(ℓ)}, p_{k−1|k−1}^{(ℓ)})}_{ℓ∈L_{1:k−1}}, where p_{k−1|k−1}^{(ℓ)} is approximated by weighted particles. The predicted multi-target density from k − 1 to k is an LMB, and the existence probability of track ℓ is r_{k|k−1}^{(ℓ)}, ℓ ∈ L_{1:k}. The multi-target backward smoothed density from t to k (k ≤ t) is denoted as π_{k|t}(Y) = {(r_{k|t}^{(ℓ)}, p_{k|t}^{(ℓ)})}. Proposition 1 implies that the multi-target backward smoothed density from t to k − 1 is also an LMB, π_{k−1|t} = {(r_{k−1|t}^{(ℓ)}, p_{k−1|t}^{(ℓ)})}, where r_{k−1|t}^{(ℓ)} is calculated by (33) and p_{k−1|t}^{(ℓ)} is represented by a set of weighted particles whose weights are given by (48)-(49) for i = 1, ..., J^{(ℓ)}. Note that the predicted density (22) cannot be directly used as p_{k|k−1}(·, ℓ) in (49) for smoothing: because the forward LMB filtering performs the resampling procedure [39] at each filtering step, the particles of p_{k|t}(·, ℓ), which descend from p_{t|t}(·, ℓ) (the initial smoothed density), differ from those of p_{k|k−1}(·, ℓ).
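The per-track backward weight update, including the kernel estimate of p_{k|k−1} at the smoothed particles (cf. (48)-(51)), can be sketched as follows. This is standard particle backward-smoothing reweighting applied track by track; the Gaussian transition density and the particle sets are illustrative assumptions:

```python
import math

def backward_reweight(filt, smooth, trans_pdf):
    """filt: [(w, x)] particles of p_{k-1|k-1}; smooth: [(w, y)] particles
    of p_{k|t}. Returns smoothed weights for the filtered particles:
        w_i <- w_i * sum_q w_q * f(y_q | x_i) / p_hat(y_q),
    where p_hat(y) = sum_j w_j * f(y | x_j) estimates p_{k|k-1}(y)."""
    # kernel estimate of the predicted density at each smoothed particle
    p_pred = [sum(wj * trans_pdf(y, xj) for wj, xj in filt)   # O(L) each,
              for _, y in smooth]                             # O(L^2) total
    out = []
    for wi, xi in filt:
        s = sum(wq * trans_pdf(y, xi) / pp
                for (wq, y), pp in zip(smooth, p_pred))
        out.append(wi * s)
    z = sum(out)
    return [w / z for w in out]

# scalar Gaussian transition kernel (illustrative)
f = lambda y, x: math.exp(-0.5 * (y - x) ** 2) / math.sqrt(2 * math.pi)
filt = [(0.25, x) for x in (-1.0, 0.0, 1.0, 2.0)]
smooth = [(0.5, 0.2), (0.5, 1.1)]
w_smooth = backward_reweight(filt, smooth, f)
```

The two nested sums over the filtered and smoothed particle sets are the source of the per-track O(L²) cost discussed in the algorithm complexity analysis.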

Backward Smoothing and State Extraction
The pseudo code of the proposed backward smoothing algorithm is given in Algorithm 1. The forward filtering runs up to time t and the lag of the backward smoothing is L_d. We need to store L_d + 1 multi-target posterior densities from t − L_d to t and L_d multi-target predicted densities from t − L_d + 1 to t for the L_d-step backward smoothing recursions. The backward smoothed density π_{k|t}(Y) is initialized with π_{t|t}(Y). In the SMC implementation, pruning, truncation and track cleanup are required and the label set varies with time, so L_{1:i} is replaced with L_{i|j}, where L_{i|j} denotes the label set of the corresponding density π_{i|j}(·) and L_{i|j} ⊂ L_{1:i}. Therefore, we use L_{k−1|t} = L_{k−1|k−1} ∩ L_{k|t} to represent the label set of the backward smoothing at k − 1, which eliminates the labels of tracks born at k and the labels of tracks pruned in the forward filtering.
Note that, in our approach, resampling [39] can be applied either as the final step of the forward filtering or after the backward smoothing; the two choices lead to an insignificant difference in performance.
Target states are extracted from the output LMB parameter set. The number of targets is estimated as the maximum a posteriori cardinality

N̂ = argmax_n ρ(n), (52)

where ρ(n) is the cardinality distribution (9). Then the N̂ tracks with the largest existence probabilities are extracted, and the target states are taken as the means of the corresponding probability densities.

Algorithm 1 (fragment):
for each track ℓ_q in L_{k−1|t}
    compute r_{k−1|t}^{(ℓ_q)} according to (33);
    for j = 1 : Q^{(ℓ_q)}
        estimate p_{k|k−1}(y_{k|t}^{(j,ℓ_q)}, ℓ_q) according to (51);
    end
    for i = 1 : J^{(ℓ_q)}
        compute ω_{k−1|t}^{(i,ℓ_q)} according to (48)-(49);
    end
end
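The state-extraction step above can be sketched as follows: the MAP cardinality estimate N̂ is taken from the cardinality distribution (9), and the N̂ tracks with the largest existence probabilities are reported. The track labels, existence probabilities and point estimates below are illustrative assumptions:

```python
def extract_states(tracks):
    """tracks: {label: (r, mean)}. Returns the (label, mean) pairs of the
    N_hat tracks with the largest existence probabilities, where
    N_hat = argmax_n rho(n) and rho is the LMB cardinality distribution."""
    rho = [1.0]                            # coefficients of prod((1-r) + r*z)
    for r, _ in tracks.values():
        nxt = [0.0] * (len(rho) + 1)
        for n, c in enumerate(rho):
            nxt[n] += c * (1 - r)
            nxt[n + 1] += c * r
        rho = nxt
    n_hat = max(range(len(rho)), key=rho.__getitem__)          # MAP cardinality
    ranked = sorted(tracks, key=lambda l: tracks[l][0], reverse=True)
    return [(l, tracks[l][1]) for l in ranked[:n_hat]]

tracks = {"t1": (0.95, (120.0, 340.0)),
          "t2": (0.90, (800.0, 150.0)),
          "t3": (0.05, (400.0, 400.0))}
print(extract_states(tracks))  # two tracks extracted: t1 and t2
```

Here ρ peaks at n = 2, so only the two high-confidence tracks are reported even though a third Bernoulli component exists.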

Algorithm Complexity
The major computational cost of the LMB smoother is due to the backward LMB smoothing. The computational complexity of the backward LMB smoothing is O(L_d N L²), which can be read off from the four for-loops of Algorithm 1, where N is the number of tracks (or targets) and L is the number of particles per track. The computational complexity of the innermost two parallel for-loops is O(L²), and multiplying by the outermost two for-loops gives O(L_d N L²).
The structures of the backward smoothing in the PHD smoother [17] and the MeMBer smoother [27] are similar to that of the LMB smoother; the difference is that the particles of all tracks are used jointly for backward smoothing in the PHD and MeMBer smoothers, whereas in our LMB smoother the particles of each track are used only for that track's backward smoothing. The major computational costs of the PHD smoother (here we consider the classic SMC implementation [17] instead of the fast SMC implementation [23]) and the MeMBer smoother are therefore both approximately O(L_d N² L²). Hence, the computational complexity of the proposed LMB smoother is lower than those of the PHD smoother [17] and the MeMBer smoother [27].

Simulation Result
A nearly constant turn (NCT) model with varying turn rate is considered, together with noisy range and azimuth measurements [12,13]. The state of the target with label ℓ at t is x_t = [x̃_tᵀ, w_t]ᵀ, where x̃_t = [p_{x,t}, ṗ_{x,t}, p_{y,t}, ṗ_{y,t}]ᵀ denotes the planar position and velocity of the target and w_t denotes the turn rate. The NCT model can be written as

x̃_t = F(w_{t−1}) x̃_{t−1} + G W_{t−1},
w_t = w_{t−1} + T u_{t−1},

where W_{t−1} and u_{t−1} are the state noises of velocity and turn rate, respectively, with W_{t−1} ∼ N(W; 0, σ_W² I_2) and u_{t−1} ∼ N(u; 0, σ_u² I_1), where N(·; m_N, P_N) denotes a Gaussian density with mean m_N and covariance P_N, and I_i represents an identity matrix of order i. The velocity noise uses σ_W = 5 m/s² and the turn rate noise uses σ_u = π/180 rad/s². With sampling period T, the state transition matrix and the noise transition matrix are, respectively,

F(w) = [1, sin(wT)/w, 0, −(1 − cos(wT))/w; 0, cos(wT), 0, −sin(wT); 0, (1 − cos(wT))/w, 1, sin(wT)/w; 0, sin(wT), 0, cos(wT)],
G = [T²/2, 0; T, 0; 0, T²/2; 0, T].

The measurement model is

z_t = [ (p_{x,t}² + p_{y,t}²)^{1/2}, arctan(p_{y,t}/p_{x,t}) ]ᵀ + ε_t, (53)

where the measurement noise is ε_t ∼ N(·; 0, P_ε) with P_ε = diag(σ_r², σ_θ²), σ_r = 10 m and σ_θ = 2π/180 rad. The observation region is [0, 2000] m × [0, π] rad with detection probability p_D = 0.98. The intensity of the Poisson clutter is κ(·) = 10/(2000π), i.e., an average of 10 clutter measurements per scan.
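The coordinated-turn transition above can be sketched in code; note the w → 0 limit, where F(w) reduces to the constant-velocity model. The sampling period T = 1 s and the test turn rate are assumptions for the example:

```python
import math

def nct_F(w, T=1.0):
    """Nearly-constant-turn transition matrix for the state
    [px, vx, py, vy] with turn rate w (rad/s)."""
    if abs(w) < 1e-10:  # w -> 0 limit: constant-velocity model
        return [[1, T, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, T],
                [0, 0, 0, 1]]
    s, c = math.sin(w * T), math.cos(w * T)
    return [[1, s / w,       0, -(1 - c) / w],
            [0, c,           0, -s],
            [0, (1 - c) / w, 1, s / w],
            [0, s,           0, c]]

def step(F, x):
    """One noise-free prediction step x -> F x."""
    return [sum(F[i][j] * x[j] for j in range(4)) for i in range(4)]

# one step with w = 0.2 rad/s, starting at the origin moving along +x
x1 = step(nct_F(0.2), [0.0, 10.0, 0.0, 0.0])
```

The velocity part of F(w) is a rotation, so the speed is preserved over a noise-free step, and a positive turn rate curves the trajectory toward +y.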
The number of particles per hypothesized track is set to 1000. We prune the tracks with a weight smaller than P_T = 10⁻⁴. The lag of the smoother is L_d = 3. The OSPA metric [40] with cut-off parameter c = 100 m and order parameter p = 1 is used. We compare the proposed LMB smoother with the LMB filter [15], the PHD filter [9], the PHD smoother [17], the CBMeMBer filter [12] and the MeMBer smoother [27] over 100 Monte Carlo trials. Figures 3 and 4 show the results of the LMB smoother in one trial, in the plane and in the x and y coordinates over time, respectively. The number of targets changes over time due to target birth and death, and there are at most five targets in the scenario. The track of each target is accurately estimated and smoothed.
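The OSPA metric used here, with order p = 1 and cut-off c, can be computed for small sets by searching over assignments. A minimal sketch (brute-force over permutations, which is adequate for the at-most-five-target scenario; larger sets would call for an optimal assignment solver):

```python
import itertools
import math

def ospa(X, Y, c=100.0, p=1):
    """OSPA distance of order p with cut-off c between finite
    point sets X and Y (lists of (x, y) positions)."""
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| = m <= |Y| = n
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    d = lambda a, b: min(c, math.dist(a, b))   # cut-off distance
    best = min(sum(d(x, yy) ** p for x, yy in zip(X, perm))
               for perm in itertools.permutations(Y, m))
    return ((best + (n - m) * c ** p) / n) ** (1.0 / p)

# one perfectly localized target plus one missed target -> c/2 penalty
print(ospa([(0, 0)], [(0, 0), (30, 40)]))  # 50.0
```

The localization component comes from the optimally matched pairs, while each cardinality mismatch contributes the full cut-off c, which is what makes OSPA sensitive to both error sources discussed in Table 1.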
More specifically, Figures 5-7 show the cardinality estimation of the different methods over 100 Monte Carlo trials. Figure 5 shows that the estimated cardinality mean converges to the true cardinality most of the time for all methods. Figure 6 shows the error of the cardinality mean, given by the estimated cardinality mean minus the true cardinality. At k = 1 s, 10 s, 40 s and 60 s, one or two targets are born, and the cardinality errors of the LMB filter are negative because of newborn target detection delays. At k = 67 s, 81 s and 91 s, one target disappears, and the cardinality errors of the LMB filter are positive because of the delay in detecting target death. At k = 64-66 s, 78-80 s and 88-90 s, the cardinality mean errors of the PHD and MeMBer smoothers are negative because of premature target deaths. Figure 7 shows the standard deviation of the cardinality estimate for the different methods. The standard deviations of the PHD and MeMBer smoothers are larger than those of the LMB filter and the LMB smoother. In short, the LMB smoother estimates the cardinality accurately (except for a detection delay at k = 10 s) and yields the best accuracy. Figure 8 shows the average OSPA errors of the PHD smoother, the MeMBer smoother, the LMB filter and the LMB smoother over 100 Monte Carlo trials. The OSPA of the LMB smoother is smaller than those of the other methods almost all the time. Table 1 lists the average OSPA errors of the different methods. All three smoothers effectively reduce the OSPA localization component compared with the corresponding filters. However, the PHD smoother and the MeMBer smoother do not necessarily reduce the OSPA cardinality component, whereas the proposed LMB smoother improves the cardinality estimation significantly.

Conclusions
This paper derives a computationally efficient forward-backward LMB smoother that is closed under the backward smoothing operation and maintains the independence of different tracks and their track outputs. Both numerical and simulation analyses have demonstrated that the proposed LMB smoother can effectively improve the tracking performance compared with the PHD smoother, the MeMBer smoother and the LMB filter, and has a lower computational complexity than the PHD smoother and the MeMBer smoother. We should point out that our smoother cannot solve the problem of track fragmentation [34], which occurs when the label of a track changes before the track ends. It is our future work to investigate the curve/track fitting approach [19,35,36] for improving the continuity and smoothness of the estimated tracks.

Appendix A
Proof of Lemma 1. If X_a and X_b are both LMB RFSs and L_a ∩ L_b = ∅, then X = X_a ∪ X_b is an LMB RFS. This conclusion follows from the fundamental convolution theorem [2], and also from the independence of the existence probabilities and the densities of different tracks in LMB RFSs [31].
The product of π_a(X_a) and π_b(X_b) can be written as

π_a(X_a) π_b(X_b) = Δ(X_a) Δ(X_b) ω_a(L(X_a)) ω_b(L(X_b)) p_a^{X_a} p_b^{X_b} = Δ(X) ω(L(X)) p^X = π(X), (A1)

where L = L_a ∪ L_b represents the label space of X, ω(I) = ω_a(I ∩ L_a) ω_b(I ∩ L_b), and p(x, ℓ) = 1_{L_a}(ℓ) p_a(x, ℓ) + 1_{L_b}(ℓ) p_b(x, ℓ). Note that the computation of (A1) is reversible; that is, the union of multiple LMB RFSs on disjoint subspaces composes a single LMB RFS on the joint space and vice versa.
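Lemma 1 implies, in particular, that the cardinality distribution of the union of two LMB RFSs on disjoint label spaces is the convolution of their cardinality distributions, which is easy to check numerically (the existence probabilities are illustrative):

```python
def card(r):
    """Cardinality distribution of an LMB with existence probabilities r:
    coefficients of the polynomial prod_l ((1 - r_l) + r_l * z)."""
    rho = [1.0]
    for rl in r:
        nxt = [0.0] * (len(rho) + 1)
        for n, c in enumerate(rho):
            nxt[n] += c * (1 - rl)
            nxt[n + 1] += c * rl
        rho = nxt
    return rho

def conv(a, b):
    """Discrete convolution of two distributions."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

ra, rb = [0.9, 0.4], [0.7]
lhs = card(ra + rb)              # cardinality of the union LMB
rhs = conv(card(ra), card(rb))   # convolution of the two cardinalities
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))  # True
```

This is the fundamental convolution theorem at the level of cardinalities: the generating polynomial of the union is the product of the two generating polynomials.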

Appendix B
Proof of Proposition 1. The proof can be summarized as follows. First, we decompose the set integral of the backward smoothing recursion (13) into two parts as in (A4): one relates to the surviving targets, and the other relates to the newborn targets.
Second, the part for the newborn targets equals 1, as shown in (A7). We conclude that newborn targets cannot be propagated back to times before their birth, so the set integral of (13) equals the set integral of the part for the surviving targets.
Finally, we prove that the set integral of the part for the surviving targets yields an LMB. By the assumptions, π_{k|t}(Y), π_{k|k−1}(Y) and π_{k−1|k−1}(X) are all LMB distributions, with π_{k|t}(Y) parameterized as {(r_{k|t}^{(ℓ)}, p_{k|t}^{(ℓ)})}. The label space of π_{k|k−1}(Y) is L_{1:k}. π_{k|t}(Y) is initialized with π_{t|t}(Y), and its label space is L_{1:k} when k = t; π_{k|t}(Y) keeps the same label space as π_{k|k−1}(Y) at all other times, as explained after (A7). The newborn targets are denoted by an LMB RFS π_{B,k|k−1} = {(r_{B,k|k−1}^{(ℓ)}, p_{B,k|k−1}^{(ℓ)})}. The multi-target transition density f_{k|k−1}(Y|X) is given by (16).
Let Y = Y_+ ∪ Y_−, where Y_+ denotes the newborn targets at k with L(Y_+) ⊆ L_k, and Y_− denotes the surviving targets from k − 1 to k with L(Y_−) ⊆ L_{1:k−1}. The backward smoothing recursion (13) can then be reformulated as (A4). The decomposition of π_{k|t}(Y) by (A5), as well as that of π_{k|k−1}(Y) by (A6), complies with Lemma 1. The derivation from (A3) to (A4) also applies the proposition in Section 3.5.3 of [2] that a single set integral on a joint space can be written as a multiple set integral on the disjoint subspaces. Formula (A4) consists of two parts: one involves X and the smoothing of the surviving targets, and the other is the smoothing of the newborn targets. As f_{B,k|k−1}(Y_+) = π_{k|k−1}^{+}(Y_+), the second part of (A4) equals 1, which yields (A7). Formula (A7) can be explained as follows: newborn targets cannot be propagated back to times before birth; that is, if a target is born at k, it cannot be alive at k − 1 via backward LMB smoothing. Furthermore, π_{k|t}(Y) cannot contain targets born after k, so π_{k|t}(Y) and π_{k|k−1}(Y) share the same label space L_{1:k}, which is the union of the label space at k − 1 and the label space of the newborn targets at k.