Article

Uncertainty-Aware δ-GLMB Filtering for Multi-Target Tracking

Vision and Image Processing Laboratory, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(4), 84; https://doi.org/10.3390/bdcc9040084
Submission received: 23 February 2025 / Revised: 24 March 2025 / Accepted: 25 March 2025 / Published: 31 March 2025

Abstract

The δ-GLMB filter is an analytic solution to the multi-target Bayes recursion used in multi-target tracking. It extends the Generalised Labelled Multi-Bernoulli (GLMB) framework by providing an efficient and scalable implementation while preserving track identities, making it a widely used approach in the field. In theory, the δ-GLMB filter accounts for measurement uncertainty within its filtering procedure; in practice, however, degradation of measurement quality hurts its performance. In this paper, we discuss the effects of increasing measurement uncertainty on the δ-GLMB filter and propose two heuristic methods to improve its performance under such conditions. The core idea of the proposed methods is to exploit the information stored in the history of the filtering procedure to reduce the effect of measurement uncertainty on the filter. Since GLMB filters have shown good results in multi-target tracking, an uncertainty-immune δ-GLMB can serve as a strong tool in this area. Our results indicate that the proposed heuristics improve filtering performance in the presence of uncertain observations. Experimental evaluations demonstrate that the proposed methods enhance track continuity and robustness, particularly in scenarios with low detection rates and high clutter, while maintaining computational feasibility.

1. Introduction

Multi-target tracking (MTT) is the procedure of estimating a time-varying number of targets and their states from a sequence of observations. The number of targets is usually unknown, and the observations are noisy due to detection uncertainty and the presence of spurious measurements (clutter) [1,2,3,4,5]. The trajectory of a target is another important outcome of MTT; Ref. [6] proposes a Bayesian multi-target tracking solution that also provides estimates of target trajectories. MTT suffers from the core challenges of detection uncertainty and clutter, which mislead the tracker and reduce the accuracy of the estimated number of targets, their states, and their trajectories. There are many approaches to MTT, of which three paradigms are generally regarded as the main solutions: Multiple Hypotheses Tracking (MHT) [7,8,9], Joint Probabilistic Data Association (JPDA) [10,11,12], and Random Finite Set (RFS) [13,14,15,16]. The RFS approach provides a principled Bayesian formulation of MTT in which the collection of target states (the multi-target state) is treated as a finite set [17,18]. This framework is a very popular method for multi-target estimation, with applications in computer vision [19,20], robotics [21,22,23], sonar [24,25,26], monitoring and surveillance systems [27,28], sensor networks [29,30,31], cell biology [32,33], augmented reality [34,35], etc. In the RFS approach, the density of the multi-target state is propagated recursively forward in time using the Bayes multi-target filter [36]. In contrast to trackers in which target specifications are incorporated into individual states, the RFS method incorporates the states of all targets at a given time into a single set.
Since the Bayes multi-target filter is intractable due to its intense numerical complexity, the Probability Hypothesis Density (PHD) [37], Cardinalised PHD (CPHD) [38], and Multi-Bernoulli filters [39] were developed as approximations. In this work, we concentrate on an analytic solution to Bayes MTT known as the δ-Generalised Labelled Multi-Bernoulli (δ-GLMB) framework, which uses the labelled RFS notion of treating targets as distinguishable, unique entities. This filter maintains trajectories and admits conjugate priors that are closed under the Chapman–Kolmogorov equation. However, while the results in [40,41,42] show promising performance, the performance of the filter degrades when observation quality drops. The aim of this paper is to reduce the effect of a low detection rate and the existence of clutter on the δ-GLMB filtering procedure. To address this weakness, we propose two methods, N-scan δ-GLMB and refined δ-GLMB, which alleviate uncertainty in the observations using information extracted from the filtering history over a set time window, which we believe contains useful information for decreasing uncertainty. The key innovation is to detect components that are considered for truncation in the filtering procedure but whose truncation may cause targets to be lost due to a low detection rate. Such situations are detected by examining the filtering history at run time. The keyword N-scan is adopted from [43] and refers to the concept of using the history of information in a window of size N.
This paper is organised as follows. A brief overview of RFS, Bayesian multi-target filtering, and the GLMB filter is provided in Section 2. In Section 3, N-scan δ -GLMB and refined δ -GLMB are introduced and discussed. Results of numerical examples are discussed in Section 4, and concluding remarks and extensions are discussed in Section 5.

2. Background

This section gives a summary of the labelled RFS formulation of the MTT and the δ -GLMB filter proposed in [41], along with implementation details from [40]. For clarity, Table 1 provides a summary of the notation used throughout this paper.
The standard inner product is denoted $\langle f, g \rangle \triangleq \int f(x)\, g(x)\, dx$. For a real-valued function $h$, the multi-object exponential is denoted $h^X \triangleq \prod_{x \in X} h(x)$, where $h^{\emptyset} = 1$ by convention. For a given set $X$, the cardinality (element count) is denoted by $|X|$. The generalisation $\delta_Y(X)$ of the Kronecker delta allows the input argument to be of any arbitrary form, such as vectors, sets, etc. Similarly, the indicator function $1_Y(X)$ is generalised such that it returns a value of 1 when $X \subseteq Y$.

2.1. Labelled Random Finite Sets (RFS)

An RFS is a random finite set comprising a random number of unordered points [14]. Since the number of points is random, a cardinality distribution can be associated with the RFS. Treating the collection of target states as an RFS with a certain cardinality distribution enables MTT without knowing the number of targets in the system. A labelled RFS is an extension of the RFS that keeps track of each target's identity. For this purpose, each target state $x \in \mathbb{X}$ is augmented with a unique unobserved label drawn from a discrete countable space $\mathbb{L} = \{\alpha_i : i \in \mathbb{N}\}$, where the $\alpha_i$ are distinct and $\mathbb{N}$ denotes the set of positive integers.

2.1.1. Bernoulli RFS

The probability density of a Bernoulli RFS $X$ on $\mathbb{X}$ is given by [44]
$$\pi(X) = \begin{cases} 1 - r, & X = \emptyset \\ r \cdot p(x), & X = \{x\} \end{cases}$$
where $r$ is the probability that the RFS contains exactly one element and $1 - r$ is the probability that it is empty. The element, if it exists, is distributed according to the probability density $p(\cdot)$. The cardinality of a Bernoulli RFS is itself Bernoulli distributed with parameter $r$.
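As a concrete illustration, a Bernoulli RFS can be sampled directly from its two-case density. The sketch below is illustrative only; the function names and the sampler-based representation of $p(\cdot)$ are our own assumptions, not part of any filter implementation.

```python
import random

def sample_bernoulli_rfs(r, sample_p, rng=random):
    """Draw one realisation of a Bernoulli RFS: with probability 1 - r
    the set is empty, and with probability r it holds a single point
    drawn from the density p (represented here by a sampler)."""
    if rng.random() < r:
        return [sample_p()]
    return []
```

Averaged over many draws, the fraction of non-empty realisations approaches $r$, matching the Bernoulli cardinality distribution.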

2.1.2. Multi-Bernoulli RFS

A Multi-Bernoulli RFS $X$ is the union of $M$ independent Bernoulli RFSs $X^{(i)}$ with existence probabilities $r^{(i)} \in (0, 1)$ and densities $p^{(i)}$ for $i = 1, \ldots, M$, i.e., $X = \bigcup_{i=1}^{M} X^{(i)}$. In [41], a Multi-Bernoulli RFS is parameterised by $\pi = \{(r^{(i)}, p^{(i)})\}_{i=1}^{M}$.

2.1.3. Labelled Multi-Bernoulli RFS

In [41], a derivation shows that the following is an alternative form of the LMB density given in Appendix A:
$$\pi(X) = \Delta(X)\, 1_{\alpha(\Psi)}(\mathcal{L}(X))\, [\Phi(X; \cdot)]^{\Psi}$$
where $\Delta(X) \triangleq \delta_{|X|}(|\mathcal{L}(X)|)$ is the distinct label indicator and $\mathcal{L} : \mathbb{X} \times \mathbb{L} \to \mathbb{L}$ is the projection $\mathcal{L}((x, \ell)) = \ell$; hence $\mathcal{L}(X) = \{\mathcal{L}(x) : x \in X\}$ is the set of labels of $X$. The term $\Phi(X; \cdot)$ is defined as [41]
$$\Phi(X; \zeta) = \sum_{(x, \ell) \in X} \delta_{\alpha(\zeta)}(\ell)\, r^{(\zeta)}\, p^{(\zeta)}(x) + \big(1 - 1_{\mathcal{L}(X)}(\alpha(\zeta))\big)\big(1 - r^{(\zeta)}\big)$$
Notably, a labelled RFS and its unlabelled version have the same cardinality distribution (Proposition 2 in [41]).

2.2. Generalised Labelled Multi-Bernoulli RFS

The Generalised Labelled Multi-Bernoulli (GLMB) RFS is a family of labelled RFS distributions. The GLMB family is closed under the multi-object Chapman–Kolmogorov equation with respect to the multi-object transition kernel [41]. The probability density of a GLMB RFS $X$ on $\mathbb{X}$ is given by
$$\pi(X) = \Delta(X) \sum_{\xi \in \Xi} w^{(\xi)}(\mathcal{L}(X))\, [p^{(\xi)}]^{X}$$
where $\Xi$ is a discrete index set. The non-negative weights $w^{(\xi)}(L)$ and the densities $p^{(\xi)}$ satisfy
$$\sum_{L \subseteq \mathbb{L}} \sum_{\xi \in \Xi} w^{(\xi)}(L) = 1, \qquad \int p^{(\xi)}(x, \ell)\, dx = 1.$$
A GLMB density can be understood as a mixture of multi-target exponentials whose points are not statistically independent.

2.3. Bayesian Multi-Target Filtering

MTT jointly estimates the number of targets and their states from measured observations. Due to the dynamic nature of the problem, the number of targets changes over time. In MTT, the targets $X_k = \{x_{k,1}, \ldots, x_{k,N(k)}\}$ and observations $Z_k = \{z_{k,1}, \ldots, z_{k,M(k)}\}$ are modelled as random finite sets, where $k$ is the time index, $N(k)$ is the number of targets, and $M(k)$ is the number of observations. Some observations may be clutter, and others may be missed according to an underlying probability density. $g_k(\cdot|\cdot)$ is the multi-target likelihood function at time $k$, which encapsulates the underlying models for clutter and detection. $f_{k|k-1}(\cdot|\cdot)$ is the multi-target transition density at time $k$, which encapsulates the underlying models of births, spawns, deaths, and target motion. The multi-target Bayes filter propagates $\pi_k$ in time [44] according to the following update and prediction steps:
$$\pi_k(X_k | Z_k) = \frac{g_k(Z_k | X_k)\, \pi_{k|k-1}(X_k)}{\int g_k(Z_k | X)\, \pi_{k|k-1}(X)\, \delta X},$$
$$\pi_{k+1|k}(X_{k+1}) = \int f_{k+1|k}(X_{k+1} | X_k)\, \pi_k(X_k | Z_k)\, \delta X_k$$

2.4. Measurement Likelihood Function

The multi-object observation set $Z = \{z_1, \ldots, z_{|Z|}\}$ is a union of detections and clutter. Each state $(x, \ell) \in X$ is either detected with probability $p_D(x, \ell)$ or missed with probability $1 - p_D(x, \ell)$. If the state is detected, it generates an observation $z$ with likelihood $g(z | x, \ell)$. Assuming detections are independent conditioned on $X$ and clutter follows an independent Poisson process with intensity $\kappa$, the multi-object likelihood function is
$$g(Z | X) = e^{-\langle \kappa, 1 \rangle}\, \kappa^{Z} \sum_{\theta \in \Theta(\mathcal{L}(X))} [\psi_Z(\cdot; \theta)]^{X}$$
where
$$\psi_Z(x, \ell; \theta) = \begin{cases} \dfrac{p_D(x, \ell)\, g(z_{\theta(\ell)} | x, \ell)}{\kappa(z_{\theta(\ell)})}, & \theta(\ell) > 0 \\[4pt] 1 - p_D(x, \ell), & \theta(\ell) = 0 \end{cases}$$
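To make the two-case likelihood concrete, the sketch below evaluates $\psi_Z$ for a scalar state with a Gaussian measurement model and a constant clutter intensity; the Gaussian model, the argument names, and the scalar setting are illustrative assumptions, not the paper's implementation.

```python
import math

def gaussian_pdf(z, mean, var):
    """Scalar Gaussian density, standing in for the likelihood g(z | x, l)."""
    return math.exp(-0.5 * (z - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def psi_z(x, theta_ell, z_list, p_d, clutter_intensity, meas_var=1.0):
    """psi_Z(x, l; theta): theta_ell > 0 means label l is associated with
    measurement z_list[theta_ell - 1]; theta_ell == 0 means a miss."""
    if theta_ell > 0:
        z = z_list[theta_ell - 1]
        return p_d * gaussian_pdf(z, x, meas_var) / clutter_intensity
    return 1.0 - p_d
```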

2.5. Delta-Generalised Labelled Multi-Bernoulli

Delta-Generalised Labelled Multi-Bernoulli (δ-GLMB) is an analytic solution to Bayes multi-target filtering. A direct numerical implementation of the GLMB form is ambiguous; to remove this ambiguity, δ-GLMB offers the alternative form
$$\pi(X) = \Delta(X) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}) \times \Xi} w^{(I, \xi)}\, \delta_I(\mathcal{L}(X))\, [p^{(\xi)}]^{X}$$
Substituting $w^{(\xi)}(I)$ with $w^{(I, \xi)}$ in Equation (4) simplifies the numerical implementation [40].
The δ-GLMB filter, particularly the prediction step, is conditioned on measurements observed up to time $k$. The discrete space $\Xi$ contains the association map histories from the start of filtering up to the present time [40]. $\Theta_t$ denotes the association map space at time $t$; thus $\Xi$ can be defined as $\Theta_{0:k} \triangleq \Theta_0 \times \cdots \times \Theta_k$. Hence, the update and prediction densities of δ-GLMB are
$$\pi_k(X | Z_k) = \Delta(X) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}_{0:k}) \times \Theta_{0:k}} w_k^{(I, \xi)}\, \delta_I(\mathcal{L}(X))\, [p_k^{(\xi)}(\cdot | Z_k)]^{X}$$
$$\pi_{k+1|k}(X) = \Delta(X) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}_{0:k+1}) \times \Theta_{0:k}} w_{k+1|k}^{(I, \xi)}\, \delta_I(\mathcal{L}(X))\, [p_{k+1|k}^{(\xi)}]^{X}$$
where $\xi = (\theta_0, \ldots, \theta_k) \in \Theta_{0:k}$ is an association map history up to time $k$. Considering $I \in \mathcal{F}(\mathbb{L}_{0:k})$ as a set of track labels at time $k$, the pair $(I, \xi) \in \mathcal{F}(\mathbb{L}_{0:k}) \times \Theta_{0:k}$ is called a hypothesis, with weight $w_k^{(I, \xi)}$. $p^{(\xi)}$ is the density of the kinematic state of each track under $\xi$. Bayesian recursion propagates the densities forward in time via a closed-form update step [40]:
$$\pi(X | Z) = \Delta(X) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}) \times \Xi} \sum_{\theta \in \Theta(I)} w^{(I, \xi, \theta)}(Z)\, \delta_I(\mathcal{L}(X))\, [p^{(\xi, \theta)}(\cdot | Z)]^{X}$$
where $\Theta(I)$ denotes the association maps related to components with label set $I$, and
$$w^{(I, \xi, \theta)}(Z) \propto w^{(I, \xi)}\, [\eta_Z^{(\xi, \theta)}]^{I},$$
$$\eta_Z^{(\xi, \theta)}(\ell) = \langle p^{(\xi)}(\cdot, \ell),\, \psi_Z(\cdot, \ell; \theta) \rangle,$$
$$p^{(\xi, \theta)}(x, \ell | Z) = \frac{p^{(\xi)}(x, \ell)\, \psi_Z(x, \ell; \theta)}{\eta_Z^{(\xi, \theta)}(\ell)}$$
The prediction step is given by
$$\pi_{+}(X_{+}) = \Delta(X_{+}) \sum_{(I_{+}, \xi) \in \mathcal{F}(\mathbb{L}_{+}) \times \Xi} w_{+}^{(I_{+}, \xi)}\, \delta_{I_{+}}(\mathcal{L}(X_{+}))\, [p_{+}^{(\xi)}]^{X_{+}}$$
With $S$ denoting survival, $B$ the label space of newly born targets, $\supseteq$ the superset relation, and the subscript $+$ denoting time $k + 1$:
$$w_{+}^{(I_{+}, \xi)} = w_S^{(\xi)}(I_{+} \cap \mathbb{L})\, w_B(I_{+} \cap \mathbb{B})$$
$$w_S^{(\xi)}(L) = [\eta_S^{(\xi)}]^{L} \sum_{I \supseteq L} [1 - \eta_S^{(\xi)}]^{I - L}\, w^{(I, \xi)}$$
$$\eta_S^{(\xi)}(\ell) = \langle p_S(\cdot, \ell),\, p^{(\xi)}(\cdot, \ell) \rangle$$
$$p_{+}^{(\xi)}(x, \ell) = 1_{\mathbb{L}}(\ell)\, p_S^{(\xi)}(x, \ell) + 1_{\mathbb{B}}(\ell)\, p_B(x, \ell)$$
$$p_S^{(\xi)}(x, \ell) = \frac{\langle p_S(\cdot, \ell)\, f(x | \cdot, \ell),\, p^{(\xi)}(\cdot, \ell) \rangle}{\eta_S^{(\xi)}(\ell)}$$

2.6. N-Scan GM-PHD Filter

The GM-PHD filter was proposed in [45] as a closed-form solution to the PHD filter [37]. It models each target with a Gaussian component with an associated weight $w_k^i$, mean $m_k^i$, and covariance $S_k^i$. After the prediction and update, the filter extracts the Gaussian components whose weights exceed a predefined threshold to estimate the multi-target states at each step and discards the remaining components as non-informative [45]. This pruning is not always effective: if the measurement of a target is uncertain, the weight of the corresponding component drops significantly and the target estimate is lost. Ref. [43] proposes an N-scan approach which improves performance by exploiting the last $N$ steps of each component to avoid losing estimates of targets subject to different uncertainties. N-scan GM-PHD augments each Gaussian component with extra parameters: a binary confidence $\theta_k^i$ and a weight history $WH_k^i = [w_{k-N+1}^i, \ldots, w_{k-1}^i, w_k^i]$ containing the last $N$ weights. N-scan GM-PHD propagates the parameter set $x_k^i = \{w_k^i, m_k^i, S_k^i, \theta_k^i, WH_k^i\}$ in time, counts the number of weights in $WH_k^i$ that exceed a predefined threshold, and applies a new pruning policy that considers the whole $N$-step window of weights [43].
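The window-based pruning test of [43] reduces to counting how many stored weights exceed the threshold; the sketch below is a minimal illustration with assumed names, not the reference implementation.

```python
def n_scan_keep(weight_history, w_th, n_required):
    """Keep a component if at least n_required of its last N stored
    weights exceed the pruning threshold w_th."""
    n_exceed = sum(1 for w in weight_history if w > w_th)
    return n_exceed >= n_required
```

A component with history `[0.1, 0.6, 0.7]` survives a threshold of 0.5 when two exceedances are required, whereas one with a single recent spike does not.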

3. Proposed Methods

Each iteration of the δ-GLMB filter involves a prediction operation followed by an update operation. These operations generate weighted sums of multi-target exponentials [40] whose number of terms grows super-exponentially in time and quickly becomes intractable. Exhaustively retaining all terms in the computation is infeasible; hence, some terms must be pruned. Multi-object densities are truncated by keeping the most informative terms with respect to their weights and discarding the rest, whose insignificant weights mark them as non-informative (components with a weight lower than a predefined threshold are truncated). This section analyses truncation under uncertain conditions and shows that the pruning phase of δ-GLMB can remove terms that are informative over a time window but whose weights have momentarily decreased due to uncertainty conditions such as miss detection.
As a sequel to [46], which proposed the concept of N-scan δ-GLMB, this section details the N-scan δ-GLMB implementation and presents enhanced update and prediction phases based on the N-scan concept.

3.1. Uncertainty Effects on δ -GLMB

In the δ-GLMB filtering procedure, the pruning phase can cause weak performance in scenarios with a high level of uncertainty (e.g., low detection probability, high clutter ratio, etc.). Suppose that one or more targets are not detected due to miss detection. The weights $w^{(I, \xi)}$ of their associated hypotheses diminish; hence, hypotheses corresponding to true tracks are removed from the filtering procedure, causing the filtering performance to drop. Furthermore, from a mathematical point of view, as Equation (9) shows, a low detection rate decreases the parameter $\psi_Z$, and as a result the multi-target state density decreases. In such conditions, losing true tracks produces a less informative approximation of the posterior distribution in Equation (11), as can be inferred from Equations (13)–(15).

3.2. N-Scan δ -GLMB

In the δ-GLMB filtering procedure, a pruning operation discards the hypotheses $(I, \xi)$ whose weights are lower than an empirical predefined threshold $W_{Th}$:
$$\Omega_k = \{(I, \xi) \in \mathcal{F}(\mathbb{L}_k) \times \Theta_k \mid w_k^{(I, \xi)} \geq W_{Th}\}$$
where $\Omega_k$ is defined as the pool containing all hypotheses after pruning. All hypotheses in this pool continue in the filtering procedure, and those no longer present are discarded. As mentioned before, this operation prevents the number of hypotheses from growing exponentially over time and eventually making the filtering procedure intractable. This pruning and the subsequent state extraction are not always effective, as pointed out in Section 3.1: whenever the weight of a hypothesis degenerates below the threshold $W_{Th}$, the hypothesis is discarded, even though the devaluation of its weight may be a consequence of miss detection or another uncertainty.
In this paper, a simple and effective N-scan approach [43,46] is used for truncating the hypotheses instead of discarding those with weights lower than an empirical predefined threshold $W_{Th}$. The N-scan approach improves on the previous methods, especially in low-SNR scenarios, which are considered uncertain conditions. The suggested method draws on the last $N$ weights of each hypothesis when faced with possible uncertainty. A δ-GLMB is completely characterised by the parameter set $\{(w^{(I, \xi)}, p^{(\xi)}) : (I, \xi) \in \mathcal{F}(\mathbb{L}) \times \Xi\}$. This paper follows [40], which treats the δ-GLMB parameter set as an enumeration of all hypotheses together with their weights and track densities; this assumption eases the implementation. Under this assumption, the δ-GLMB parameter set is written as $\{I_k^{(h)}, \xi_k^{(h)}, w_k^{(h)}, p_k^{(h)}\}_{h=1}^{H}$, where $w^{(h)} \triangleq w^{(I^{(h)}, \xi^{(h)})}$ and $p^{(h)} \triangleq p^{(\xi^{(h)})}$. With respect to this presentation of the parameter set, the N-scan method adds two additional parameters, $\gamma^{(h)}$ and $WH^{(h)}$; the new parameter set is therefore
$$\{I_k^{(h)}, \xi_k^{(h)}, w_k^{(h)}, p_k^{(h)}, \gamma_k^{(h)}, WH_k^{(h)}\}_{h=1}^{H}$$
where $\gamma^{(h)}$ is a binary confidence indicator for each hypothesis $h$, indicating whether hypothesis $h$ has been extracted in a previous iteration. The value of $\gamma$ for each hypothesis is initialised to zero when the hypothesis is created; whenever a hypothesis satisfies the extraction conditions, $\gamma^{(h)}$ is set to one. $WH^{(h)}$ is the weight history of hypothesis $h$, which keeps its last $N$ weights, i.e.,
$$WH_k^{(h)} = [w_{k-N+1}^{(h)}, \ldots, w_{k-1}^{(h)}, w_k^{(h)}]$$
This extended version of the parameter set is propagated during the filtering procedure. The additional parameters are utilised to improve the tracking performance of the δ -GLMB filter in more uncertain conditions. The overall steps of the proposed N-scan δ -GLMB algorithm are summarised as follows.

3.2.1. Initialisation

In the δ-GLMB filter, each hypothesis is initialised with a proper setting. In addition to that setting, the new parameters $\gamma_0^{(h)}$ and $WH_0^{(h)}$ need to be initialised too. Since a newly born hypothesis has never been extracted, the initial value of $\gamma_0^{(h)}$ is zero. The first $N - 1$ elements of $WH_0^{(h)}$ are set to zero (no history is available before initialisation), and the $N$th element is set to the ordinary initialisation value of a hypothesis:
$$\gamma_0^{(h)} = 0,$$
$$WH_0^{(h)} = [0, 0, \ldots, 0, w_0^{(h)}]$$
where $|WH_k^{(h)}| = N$.
Note that the initialisation of a hypothesis may occur in the middle of the filtering time due to newborn targets or spawned targets. In this situation, the initialisation of γ and W H is the same as mentioned above.
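The initialisation above can be sketched in a few lines; the function name and the plain-list representation are our own assumptions, not the paper's data structures.

```python
def init_nscan_params(w0, n):
    """Initialise the extra N-scan parameters of a newborn hypothesis:
    gamma starts at 0 (never extracted) and the weight history holds
    N - 1 zeros followed by the initial weight w0."""
    gamma = 0
    wh = [0.0] * (n - 1) + [w0]
    return gamma, wh
```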

3.2.2. Prediction

In the prediction step of the δ-GLMB filter, the filtering density $\pi_{+}(X_{+})$ is estimated as explained in Section 2.5. The estimation of $\pi_{+}(X_{+})$ is not altered by the N-scan method; the newly defined parameters, which are used in later steps, are simply propagated. The following propagation scheme is appended to the prediction step of the δ-GLMB filter:
$$\gamma_{k|k-1}^{(h)} = \gamma_{k-1}^{(h)}, \qquad WH_{k|k-1}^{(h)} = WH_{k-1}^{(h)}$$

3.2.3. Update

In the update step of the δ-GLMB filter, the filtering density $\pi(X | Z)$ is estimated by Equation (11). In addition to the estimation of $\pi(X | Z)$ given the observed measurements, the weight history of each hypothesis is updated too:
$$WH_k^{(h)} = [WH_{k|k-1}^{(h)}[2],\, WH_{k|k-1}^{(h)}[3],\, \ldots,\, WH_{k|k-1}^{(h)}[N],\, w_k^{(h)}]$$
where $WH[\zeta]$ denotes the $\zeta$th element of the vector $WH$. Intuitively, updating $WH$ amounts to removing its first element (the oldest weight of hypothesis $h$) and appending the current weight. Note that the order of the weight history matters, as it indicates how old each weight is.
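The sliding-window update can be sketched as a one-line list operation (an assumed helper for illustration, not the paper's code):

```python
def update_weight_history(wh_pred, w_new):
    """Slide the N-length weight history: drop the oldest weight and
    append the current updated weight (order encodes age)."""
    return wh_pred[1:] + [w_new]
```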

3.2.4. Pruning and Extraction

In the δ-GLMB filtering recursion, the pruning phase aims to avoid exponential growth of the system by discarding hypotheses with insignificant weights. This paper uses the newly defined parameters $WH$ and $\gamma$ to alter the pruning procedure. The N-scan version of δ-GLMB does not immediately discard the hypotheses whose weights are lower than $W_{Th}$ at iteration $k$, because the degeneration of a hypothesis weight could be the result of miss detection or another uncertainty in the observations. Similar to the idea in [43], using the new parameters added to the hypothesis parameter set, this paper proposes a new discarding method whose results show that it outperforms the previous methods. The N-scan method extracts the hypotheses which satisfy either of the following conditions and prunes the others:
Condition 1: $\gamma_k^{(h)} = 0$ (i.e., hypothesis $h$ has not been extracted yet) and $N_{Tr} \geq N_{Init}$.
Condition 2: $\gamma_k^{(h)} = 1$ (i.e., hypothesis $h$ has already been extracted) and $N_{Tr} \geq N_{Surv}$, where $N_{Tr}$ is the number of weights in $WH_k^{(h)}$ that exceed $W_{Th}$, $N_{Init}$ is the predefined extraction threshold for newly born hypotheses, and $N_{Surv}$ is the threshold for surviving hypotheses.
Note that if the first condition is satisfied, the value of $\gamma_k^{(h)}$ changes to one because hypothesis $h$ is now extracted. Algorithm 1 summarises the N-scan method.
Algorithm 1: Summary of N-scan pruning algorithm.
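Conditions 1 and 2 above can be sketched as a single pruning pass; the dictionary-based hypothesis representation is an assumption for illustration, not the paper's data structure.

```python
def n_scan_prune(hypotheses, w_th, n_init, n_surv):
    """One N-scan pruning/extraction pass. Each hypothesis is a dict
    with 'gamma' (0/1 confidence flag) and 'wh' (last N weights).
    Returns the extracted hypotheses; Condition 1 raises gamma to 1."""
    kept = []
    for h in hypotheses:
        n_tr = sum(1 for w in h['wh'] if w >= w_th)
        if h['gamma'] == 0 and n_tr >= n_init:
            h['gamma'] = 1  # first extraction: mark as confident
            kept.append(h)
        elif h['gamma'] == 1 and n_tr >= n_surv:
            kept.append(h)
    return kept
```

A newborn hypothesis thus needs $N_{Init}$ above-threshold weights in its window to be extracted at all, while an already-confident one survives with only $N_{Surv}$.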
Intuitively, this method gives a second chance to hypotheses that are candidates for being discarded; depending on the value of $N$, this chance increases accordingly. The idea is to prevent truncation of informative hypotheses which belong to a true target but whose observations are momentarily uncertain at time $k$ (e.g., due to miss detection), so that their weights have dropped. Using the N-scan method for truncation adds only a slight burden to the filtering procedure while reducing the loss of valuable information due to uncertainty. The effect of N-scan on the filtering is detailed in the following.
Intuitively, the N-scan method treats hypotheses with weights lower than $W_{Th}$ optimistically: it regards the reduction of their weight as a result of an uncertainty. Within the window of size $N$ containing the weight history of a hypothesis, if the elements exceed $W_{Th}$ at least $\vartheta$ times, the hypothesis is extracted. Hence, the extraction distribution takes a form similar to a binomial tail:
$$\pi_{extraction}(x, \ell) = \sum_{i=0}^{N - \vartheta} \binom{N}{\vartheta + i}\, p^{\vartheta + i}(x, \ell)\, \big(1 - p(x, \ell)\big)^{N - \vartheta - i}$$
where $p(x, \ell)$ is the probability that $w(x, \ell)$ exceeds $W_{Th}$; this probability relates directly to $p_D(x, \ell)$, i.e., $p(x, \ell) \propto p_D(x, \ell)$ (and, more generally, to the amount of uncertainty); $\vartheta = N_{Surv}$ for surviving hypotheses and $\vartheta = N_{Init}$ for newly born hypotheses.
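Under this reading, the extraction probability is the upper tail of a Binomial($N$, $p$) distribution, which can be evaluated directly (an illustrative helper, with assumed argument names):

```python
from math import comb

def extraction_probability(p, n, theta):
    """P(at least theta of the N stored weights exceed W_Th) when each
    weight independently exceeds the threshold with probability p."""
    return sum(comb(n, theta + i) * p ** (theta + i) * (1.0 - p) ** (n - theta - i)
               for i in range(n - theta + 1))
```

For example, with $N = 2$, $\vartheta = 1$, and $p = 0.5$, the extraction probability is $0.75$; for fixed $\vartheta$ it grows with $N$, reflecting the increased immunity to momentary weight drops.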
When $P_D$ drops, $p$ falls at step $k$ too, but the other terms saved in $WH$ can still raise $\pi_{extraction}$; hence miss detection, or any cause of degeneration of $p$, can be tolerated for a limited period (relative to the size of $WH$). According to $\pi_{extraction}$, increasing the value of $N$ makes the filtering procedure more immune to uncertainty, in exchange for keeping more hypotheses, which increases the computational load of the system.
From another point of view, N-scan provides a superset of hypotheses $\mathbb{H} = \mathbb{T} \cup \mathbb{C}$, where $\mathbb{H}$ is the space of all hypotheses, $\mathbb{T}$ is the space of hypotheses that would be kept without the N-scan method, and $\mathbb{C}$ is the space of hypotheses that the N-scan method considers as candidates for being uncertain. The space $\mathbb{C}$ is the union of two spaces $\mathbb{C}^T$ and $\mathbb{C}^F$: $\mathbb{C}^T$ contains the hypotheses which are truly under an uncertain condition and which N-scan is able to detect (e.g., the uncertainty in the observations lasts at most $N$ iterations, so their weights are only temporarily degenerated), while $\mathbb{C}^F$ contains the hypotheses whose weight degeneration is not due to uncertainty (e.g., targets that have died and are permanently removed from observation), or which the N-scan method, owing to the value of the parameter $N$, cannot detect as members of $\mathbb{C}^T$. With these definitions, the N-scan method keeps $|\mathbb{C}_k|$ extra hypotheses at iteration $k$, discards the $\mathbb{C}^F$ hypotheses after at most $N - \vartheta$ iterations, and recognises the $\mathbb{C}^T$ hypotheses, uncertain in previous iterations, once they become certain again. Hence, the extracted hypotheses are $\mathbb{T} \cup \mathbb{C}^T$. In scenarios with certain observations, however, the N-scan method performs exactly like the traditional discarding method while adding an unnecessary overhead to the filtering procedure. The method is therefore useful in uncertain conditions despite this overhead: it trades a slight computational cost for better MTT accuracy. The $L_1$-error distance between the N-scan and traditional discarding methods is investigated in Appendix B.
In δ-GLMB filtering, computing all the hypotheses first and then discarding most of them is a computationally intense process. Ref. [40] introduces methods to truncate hypotheses without propagating them, using ranked assignment and K-shortest path algorithms incorporated into the update and prediction processes. The following sections propose an alternative implementation of the update and prediction phases based on these techniques, adapted to the N-scan concept, which tries to keep the hypotheses that are wrongly chosen as candidates for truncation due to a high uncertainty ratio. These enhanced update and prediction phases alleviate the implementation complexity of the δ-GLMB filter while remaining immune to the uncertainties.

3.3. Enhanced Update Phase

Ref. [40] proposes a tractable implementation of the δ-GLMB update which truncates the multi-target filtering density without computing all the hypotheses and their weights. This goal is achieved by generating the association maps $\theta \in \Theta(I)$ in decreasing order of $[\eta_Z^{(\xi, \theta)}]^{I}$ and selecting the hypotheses with the highest weights. (As Equation (11) shows, hypothesis $(I, \xi)$ generates a new set of hypotheses $(I, (\xi, \theta))$, $\theta \in \Theta(I)$, with weights $w^{(I, \xi, \theta)}(Z) \propto w^{(I, \xi)}\, [\eta_Z^{(\xi, \theta)}]^{I}$.) However, because the highest-weighted components are selected without computing all the possible hypotheses, the chance of discarding hypotheses that are momentarily uncertain rises. To address both issues simultaneously, the concept of scanning the history of each hypothesis over an $N$-sized time window is incorporated into the δ-GLMB update method proposed in [40].
Let the assignment matrix $S$ represent an association map $\theta \in \Theta(I)$, with dimension $|I| \times |Z|$, where $Z = \{z_1, \ldots, z_{|Z|}\}$ and $I = \{\ell_1, \ldots, \ell_{|I|}\}$. $S$ consists of 0/1 entries with $S_{i,j} = 1$ if and only if $\theta(\ell_i) = j$ (i.e., the $j$th measurement is assigned to track $\ell_i$); otherwise $S_{i,j} = 0$. The optimal assignment cost matrix is the $|I| \times |Z|$ matrix [40]
$$C_Z^{(I, \xi)} = \begin{pmatrix} C_{1,1} & C_{1,2} & C_{1,3} & \cdots & C_{1,|Z|} \\ C_{2,1} & C_{2,2} & C_{2,3} & \cdots & C_{2,|Z|} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C_{|I|,1} & C_{|I|,2} & C_{|I|,3} & \cdots & C_{|I|,|Z|} \end{pmatrix}$$
where
$$C_{i,j} = -\ln \frac{\langle p^{(\xi)}(\cdot, \ell_i),\, p_D(\cdot, \ell_i)\, g(z_j | \cdot, \ell_i) \rangle}{\langle p^{(\xi)}(\cdot, \ell_i),\, 1 - p_D(\cdot, \ell_i) \rangle\, \kappa(z_j)}$$
for natural numbers $i, j$ with $1 \leq i \leq |I|$ and $1 \leq j \leq |Z|$.
The value above can be interpreted as the cost of assigning the $j$th measurement to track $\ell_i$. The cost of an assignment of all measurements to the tracks can then be written as the Frobenius inner product of the cost matrix and the assignment matrix:
$$\operatorname{tr}(S^T C_Z^{(I, \xi)}) = \sum_{i=1}^{|I|} \sum_{j=1}^{|Z|} C_{i,j}\, S_{i,j}$$
To obtain an optimal assignment, an optimal assignment matrix $S^*$ is needed which minimises the cost $\operatorname{tr}(S^{*T} C_Z^{(I, \xi)})$. Note that each $S^*$ has a corresponding association map $\theta^*$. For this purpose, the ranked assignment problem [47] is used, which finds an enumeration of the least-cost assignment matrices in increasing order [48]. Hence, when finding the optimal assignment of measurements to tracks in the δ-GLMB filter with cost matrix $C_Z^{(I, \xi)}$, ranked optimal assignment generates an enumeration of association maps $\theta$ in decreasing order of $[\eta_Z^{(\xi, \theta)}]^{I}$. Since optimal assignment is a combinatorial problem and the $T$ least-cost assignments must be enumerated, Murty's algorithm [48] is used to solve it. Further details are available in [40].
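For small problems, the ranked assignment enumeration can be mimicked by brute force, which helps to see what Murty's algorithm computes efficiently; this sketch assumes every track is detected and $|I| \leq |Z|$, and is in no way a substitute for the polynomial-time ranked assignment algorithms of [47,48].

```python
from itertools import permutations

def ranked_assignments(cost, top_t):
    """Enumerate all one-to-one assignments of |I| tracks to |Z|
    measurements and return the top_t cheapest, each as
    (total_cost, tuple mapping track i -> measurement column)."""
    n_tracks, n_meas = len(cost), len(cost[0])
    results = []
    for perm in permutations(range(n_meas), n_tracks):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        results.append((total, perm))
    results.sort(key=lambda t: t[0])
    return results[:top_t]
```

For the cost matrix `[[1, 10], [10, 1]]`, the two ranked assignments are the diagonal pairing (cost 2) followed by the anti-diagonal pairing (cost 20).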
Despite the effectiveness of the mentioned method for ease of implementation, the uncertainty of observations affects the cost value, which is the base of ranking algorithms used for truncation. As is obvious in Equation (34), the growth of uncertainty causes a C i , j increase. With the reduction of the detection rate, and consequently, a reduction in the measurement likelihood function g ( z j | · , i ) or/and in the growth of the clutter rate, the value inside the l n ( · ) drops, hence the cost value increase. Growth of the C i , j causes the corresponding hypothesis rank drops among the hypotheses ranks and consequently obtains a higher chance to be a candidate for truncation. This behaviour may cause an informative hypothesis to get truncated due to existence of the momentary uncertainties and the track of the object to get lost. From a different aspect, by substituting Equation (9) in Equation (15), another form of [ η Z ( ξ , θ ) ] I can be written as follows:
$$[\eta_Z(\xi,\theta)]^I = \exp\!\big(-\mathrm{tr}\big(S^T C_Z(I,\xi)\big)\big) \prod_{\ell \in I} \big\langle p^{(\xi)}(\cdot,\ell),\, 1 - p_D(\cdot,\ell) \big\rangle$$
which shows that an increase in the cost matrix causes the value of $[\eta_Z(\xi,\theta)]^I$ to decrease, and consequently, the chance of discarding the corresponding hypothesis for track label set I gets higher.
In order to increase the immunity of the update phase to uncertainties, we apply the N-scan idea, with slight differences, to the selection of the hypotheses according to the value of $[\eta_Z(\xi,\theta)]^I$, as described in the following.
After sorting the association maps in decreasing order of $[\eta_Z(\xi,\theta)]^I$, the maps with the highest values (equivalently, those satisfying the condition $[\eta_Z(\xi,\theta)]^I \ge \eta_{Threshold}$) are considered as candidates for surviving. In contrast with the previously described update method, the remaining association maps are not truncated.
Let M, T, and $\bar T$ denote the set of all association maps, the association maps satisfying the condition $[\eta_Z(\xi,\theta)]^I \ge \eta_{Threshold}$, and the set of the remaining association maps, respectively ($M = T \cup \bar T$). The hypotheses in the set T propagate into the next (prediction) step. Under uncertain conditions, the set $\bar T$ may contain association maps whose $[\eta_Z(\xi,\theta)]^I$ has dropped only momentarily below $\eta_{Threshold}$ and will rise again in future filtering steps once the uncertainties vanish. The following scheme is presented in order to find such association maps.
Considering a δ-GLMB parameter set denoted by $\{I_k^{(h)}, \xi_k^{(h)}, w_k^{(h)}, p_k^{(h)}\}_{h=1}^{H}$, the parameter Υ is added in order to keep track of the value of
$$\frac{\sum_{j=1}^{|T_k^{(h)} \cup \bar T_k^{(h)}|} \big[\eta_Z\big(\xi_k^{(h)}, \theta_k^{(h,j)}\big)\big]^{I_k^{(h)}}}{\big|T_k^{(h)} \cup \bar T_k^{(h)}\big|}$$
as an indicator of the amount of cumulative information stored in the hypotheses $(I,(\xi,\theta))$, $\theta \in \Theta(I)$, which are generated by hypothesis $(I,\xi)$ during the filtering time. Therefore, the parameter set is denoted by
$$\big\{I_k^{(h)}, \xi_k^{(h)}, w_k^{(h)}, p_k^{(h)}, \Upsilon_k^{(h)}\big\}_{h=1}^{H}$$
$\Upsilon_k^{(h)}$ is a history vector which keeps the last N cumulative-information values stored for hypothesis (h):
$$\Upsilon_k^{(h)} = \big[\upsilon_{k-N+1}^{(h)}, \ldots, \upsilon_{k-1}^{(h)}, \upsilon_k^{(h)}\big]$$
where
$$\upsilon_k^{(h)} = \frac{\sum_{j=1}^{|T_k^{(h)} \cup \bar T_k^{(h)}|} \big[\eta_Z\big(\xi_k^{(h)}, \theta_k^{(h,j)}\big)\big]^{I_k^{(h)}}}{\big|T_k^{(h)} \cup \bar T_k^{(h)}\big|}.$$
In the rest of this section, according to the newly defined parameter Υ , a new truncation algorithm for the update phase of the δ -GLMB filter is presented which is more immune to uncertain conditions.
The overall steps of the proposed truncation method are summarised as follows. The initialisation and Υ-propagation steps are almost the same as in the N-scan method, with the history vector $\Upsilon_0^{(h)} = [0, 0, \ldots, 0, \upsilon_0^{(h)}]$, where $|\Upsilon_k^{(h)}| = N$. An ordered enumeration of association maps θ is obtained by solving the ranked optimal assignment problem with cost matrix $C_Z(I,\xi)$. This enumeration is sorted according to the value of $[\eta_Z(\xi,\theta)]^I$, which relates directly to the weight of a hypothesis, $w^{(I,\xi,\theta)}(Z) \propto w^{(I,\xi)}\, [\eta_Z(\xi,\theta)]^I$.
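The Υ bookkeeping above can be sketched as follows. This is a minimal sketch: the helper names and the use of a fixed-length deque as the history window are our own.

```python
from collections import deque

def init_history(upsilon0, N):
    """History vector initialised as [0, ..., 0, υ_0] of length N."""
    hist = deque([0.0] * N, maxlen=N)
    hist.append(upsilon0)        # oldest zero falls out; length stays N
    return hist

def upsilon(etas):
    """υ_k: average of [η_Z(ξ,θ)]^I over the association maps of a hypothesis."""
    return sum(etas) / len(etas)

def propagate_history(hist, etas):
    """Push the newest cumulative-information value into the window."""
    hist.append(upsilon(etas))
    return hist
```

Because the deque is bounded at N elements, appending the newest $\upsilon_k^{(h)}$ automatically discards $\upsilon_{k-N}^{(h)}$, which is exactly the sliding-window behaviour of $\Upsilon_k^{(h)}$.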
Analogous to [40], the hypotheses satisfying the condition $[\eta_Z(\xi,\theta)]^I \ge \eta_{Threshold}$ (the set T) are chosen to be propagated to the next prediction step. Although only the elements of the set T are selected, there might be hypotheses in the set $\bar T$ whose value of $[\eta_Z(\xi,\theta)]^I$ is below the threshold but whose selection would nonetheless benefit the filtering procedure, for uncertainty reasons.
In the following, utilising the parameter Υ, a method is proposed for detecting a candidate set $C \subseteq \bar T$ of hypotheses that are probable victims of uncertainty, i.e., whose value of $[\eta_Z(\xi,\theta)]^I$ has been driven down by it.
A hypothesis ( h ) is added to the candidate set C if it satisfies either one of the following conditions:
Condition 1:
$$\frac{\sum_{i=0}^{|\Upsilon_k^{(h)}|-1} \Upsilon_k^{(h)}[i]}{|\Upsilon_k^{(h)}|} \ge \upsilon_{Mean}$$
and
$$\mathrm{Var}\Big(\Upsilon_k^{(h)}\big[0 : |\Upsilon_k^{(h)}|-1\big]\Big) - \mathrm{Var}\Big(\Upsilon_k^{(h)}\big[0 : |\Upsilon_k^{(h)}|-2\big]\Big) \ge \upsilon_{Var}$$
and
$$\Upsilon_k^{(h)}\big[|\Upsilon_k^{(h)}|-2\big] \ge \Upsilon_k^{(h)}\big[|\Upsilon_k^{(h)}|-1\big]$$
and
$$[\eta_Z(\xi,\theta)]^I \ge \eta_{Relaxed\_Threshold}$$
where $\upsilon_{Mean}$ is a threshold for the mean of the information indicator over the window of Υ, $\mathrm{Var}(\cdot)$ is the variance operator, and $\Upsilon[\zeta:\zeta']$ stands for the subset of Υ consisting of its $\zeta$th through $\zeta'$th elements. $\upsilon_{Var}$ is a threshold for the difference between the variances of the vector Υ before and after adding its last element. $\eta_{Relaxed\_Threshold}$ is a threshold on η with $\eta_{Relaxed\_Threshold} \le \eta_{Threshold}$. Note that satisfying condition 1 means the hypothesis is suspected of being under uncertain circumstances; in practice, it is sometimes also beneficial to select some random hypotheses from the set $\bar T$ and put them in the set C.
If a hypothesis satisfies condition 1, then $C = C \cup \{h\}$ and its flag is set to 1; this flag can be stored among the other parameters of a hypothesis and records whether the hypothesis has satisfied condition 1. If a hypothesis has never satisfied condition 1, its flag is 0.
Condition 2:
$$\frac{\sum_{i=0}^{|\Upsilon_k^{(h)}|-1} \Upsilon_k^{(h)}[i]}{|\Upsilon_k^{(h)}|} \ge \upsilon_{Mean}$$
and
the flag of hypothesis (h) is equal to 1
and
$$\Big|\Upsilon_k^{(h)}\big[|\Upsilon_k^{(h)}|-2\big] - \Upsilon_k^{(h)}\big[|\Upsilon_k^{(h)}|-1\big]\Big| \le \upsilon_{Thresh}$$
and
$$[\eta_Z(\xi,\theta)]^I \ge \eta_{Relaxed\_Threshold}$$
where $|\zeta|$ is the absolute value of ζ and $\upsilon_{Thresh}$ is a threshold for the difference between the last two elements of the vector Υ. If a hypothesis satisfies condition 2, then $C = C \cup \{h\}$ and its flag is set to 1. The hypotheses inside the set C are considered candidates that are possibly influenced by uncertainties; intuitively, the above algorithm seeks those components whose $\upsilon_k^{(h)}$ drops momentarily. The obtained set C is given a second chance alongside the set T; hence, the truncated version of the filtering density is derived in the following. Equation (12) can be written as
$$\pi(X|Z) = \sum_{h=1}^{H} \pi^{(h)}(X|Z)$$
where
$$\pi^{(h)}(X|Z) = \Delta(X) \sum_{j=1}^{|\Theta(I^{(h)})|} w^{(I^{(h)},\xi^{(h)},\theta^{(h,j)})}(Z)\, \delta_{I^{(h)}}\big(\mathcal{L}(X)\big)\, \big[p^{(\xi^{(h)},\theta^{(h,j)})}(\cdot|Z)\big]^X$$
In order to preserve the parallelisability of the truncation, the density of each hypothesis (h) is truncated separately. Each hypothesis (h) generates $|\Theta(I^{(h)})|$ components of the filtering density; solving the ranked optimal assignment with the mentioned cost matrix yields $\theta^{(h,j)}$, $j = 1, \ldots, |T|$ (these hypotheses are sorted in decreasing order). Since the candidate set C also produces association maps $\theta^{(h,j)}$, $j \in \mathcal{J}(C)$ (the hypotheses in C are not ordered), the truncated version of $\pi^{(h)}(\cdot|Z)$ is
$$\pi^{(h)}(X|Z) = \Delta(X) \sum_{j \in \mathcal{J}(T \cup C)} w^{(I^{(h)},\xi^{(h)},\theta^{(h,j)})}(Z)\, \delta_{I^{(h)}}\big(\mathcal{L}(X)\big)\, \big[p^{(\xi^{(h)},\theta^{(h,j)})}(\cdot|Z)\big]^X$$
where $\mathcal{J}(\cdot)$ denotes the operator returning the association-map indices of a set of hypotheses, i.e., $\mathcal{J}(T) = \{1, \ldots, |T|\}$.
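Conditions 1 and 2 above can be sketched as a single test per hypothesis. This is a sketch: the combined flag handling and threshold names are our reading of the scheme, and `statistics.pvariance` stands in for the Var(·) operator.

```python
import statistics

def is_candidate(hist, eta, flag, v_mean, v_var, v_thresh, eta_relaxed):
    """Return (in_C, new_flag) for one hypothesis.

    hist : history vector Υ (the last N values of υ)
    eta  : current [η_Z(ξ,θ)]^I of the hypothesis
    flag : 1 if the hypothesis previously satisfied condition 1, else 0
    v_mean, v_var, v_thresh, eta_relaxed : the tuning thresholds
    υ_Mean, υ_Var, υ_Thresh, η_Relaxed_Threshold."""
    mean_ok = sum(hist) / len(hist) >= v_mean
    # variance jump caused by appending the newest element to the window
    var_jump = statistics.pvariance(hist) - statistics.pvariance(hist[:-1])
    cond1 = (mean_ok and var_jump >= v_var
             and hist[-2] >= hist[-1]          # newest value dropped
             and eta >= eta_relaxed)
    cond2 = (mean_ok and flag == 1
             and abs(hist[-2] - hist[-1]) <= v_thresh
             and eta >= eta_relaxed)
    if cond1 or cond2:
        return True, 1
    return False, flag
```

A hypothesis with a strong history and a sudden drop in its newest value is flagged as a candidate, while a uniformly weak history is not, which matches the intent of the conditions.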

3.4. Enhanced Predict Phase

Using the K-shortest path algorithm, Ref. [40] presents a δ-GLMB prediction implementation which truncates predicted hypotheses without computing all of them and their weights. Analogous to the update, a hypothesis $(I,\xi)$ generates a set of hypotheses $((J \cup L), \xi)$, where J is a set of surviving labels and L is a set of birth labels, with weights $w_S^{(I,\xi)}(J)$ and $w_B(L)$, respectively.
For a hypothesis $(I,\xi)$, the weight of a surviving label set $J \subseteq I$ is given as follows:
$$w_S^{(I,\xi)}(J) = w^{(I,\xi)} \big[1 - \eta_S^{(\xi)}\big]^{I} \left[\frac{\eta_S^{(\xi)}}{1 - \eta_S^{(\xi)}}\right]^{J}$$
In order to avoid exhaustively computing all the surviving hypotheses' weights, the surviving label sets are generated in decreasing order of $\big[\eta_S^{(\xi)}/(1-\eta_S^{(\xi)})\big]^J$ using the K-shortest path algorithm with a cost vector $C^{(I,\xi)} = [C^{(I,\xi)}(1), \ldots, C^{(I,\xi)}(|I|)]$ (more details are available in [40]), where
$$C^{(I,\xi)}(j) = -\ln\frac{\eta_S^{(\xi)}(j)}{1 - \eta_S^{(\xi)}(j)}.$$
After sorting the hypotheses in decreasing order, the highest-weighted survivals for $(I,\xi)$ are selected. This is analogous to selecting those for which $\big[\eta_S^{(\xi)}/(1-\eta_S^{(\xi)})\big]^J \ge \eta_{Threshold}$.
According to this setting, it is possible to truncate in the prediction phase analogously to the update phase, with slight differences. $\upsilon_k^{(h)}$, the indicator of the amount of information stored in a hypothesis, is altered as follows:
$$\upsilon_k^{(h)} = \left[\frac{\eta_S^{(\xi^{(h)})}}{1 - \eta_S^{(\xi^{(h)})}}\right]^{J^{(h)}}$$
where $J^{(h)} \subseteq I^{(h)}$ is the set of labels surviving from $I^{(h)}$. Satisfying the condition $\big[\eta_S^{(\xi)}/(1-\eta_S^{(\xi)})\big]^J \ge \eta_{Relaxed\_Threshold}$ is not essential.
Note that the enhanced prediction does not interfere with newborn-target truncation, owing to the small weights of newborn targets and the difficulty of keeping their history.
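The ranking of surviving label sets can be illustrated by brute force over subsets. This is our simplification for small $|I|$; the K-shortest-path algorithm of [40] obtains the same top-K without full enumeration.

```python
import itertools
import math

def best_survivor_sets(eta_s, K):
    """Rank subsets J of track indices by decreasing [η_S/(1-η_S)]^J,
    i.e. by increasing total cost sum_{j in J} -ln(η_S(j)/(1-η_S(j)))."""
    cost = [-math.log(e / (1.0 - e)) for e in eta_s]
    ranked = []
    for r in range(len(eta_s) + 1):
        for J in itertools.combinations(range(len(eta_s)), r):
            ranked.append((sum(cost[j] for j in J), J))
    ranked.sort()                      # increasing cost = decreasing weight
    return ranked[:K]
```

Labels with survival factor $\eta_S > 0.5$ have negative cost and are therefore included in the best subset, which matches the intuition that likely survivors should be kept.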

3.5. Refined δ -GLMB

Using a fixed value for the probability of survival is a prevalent choice in δ-GLMB filters, as in [40,49]. Adopting the idea in [50], this section introduces an adaptive probability of survival based on a predefined survival model, which can be defined for different scenarios; for example, a solid object in the scene (e.g., a wall, a door, or a window) can be encoded in the model. This survival model is then utilised to present a new method for detecting hypotheses whose measurements are likely uncertain.
In this work, a simple survival model $S(\cdot)$ is defined which can be altered according to the existing conditions. $S(\cdot)$ is utilised to adaptively calculate the survival probability of a hypothesis according to its position and its direction of movement. The probability of survival for a hypothesis (h) is denoted by $P_{S,k}^{(h)} = S(x_k^{(h)})$, where $x_k^{(h)}$ indicates the position that hypothesis (h) represents and is calculated as follows:
$$x_k^{(h)} = \int y\, p^{(\hat h,\hat j)}(y,\cdot)\, dy$$
$$(\hat h,\hat j) = \operatorname*{arg\,max}_{(h,j)}\; w^{(h,j)}\, \delta_{\hat N}\big(|I^{(h,j)}|\big)$$
$$\hat N = \operatorname*{arg\,max}_{n}\; \rho(n)$$
$$\rho(n) = \sum_{j=1}^{|T|} w^{(h,j)}\, \delta_n\big(|I^{(h,j)}|\big); \quad n = 0, \ldots, N_{Max}$$
More details are available in [40] in the state estimation section.
Each survival model is constructed from a set of boundary lines (linear, quadratic, or any desired form), denoted $b_1, b_2, \ldots, b_j, \ldots, b_{N_B}$, where $N_B$ is the number of boundaries in a survival model. Hence, the probability of survival for a hypothesis can be rewritten as
$$P_{S,k}^{(h)} = S\big(x_k^{(h)}\big) = \min_j \big(P_{S,k}^{(h,j)}\big)$$
where P S , k ( h , j ) is defined as
$$P_{S,k}^{(h,j)} = \begin{cases} 0, & \mathrm{dist}\big(x_k^{(h)}, b_j\big) < d_2 \ \text{and}\ D_h(b_j) > 0 \\[4pt] \dfrac{\mathrm{dist}\big(x_k^{(h)}, b_j\big) - d_2}{d_1 - d_2}\, P_{S_{Max}}, & d_2 \le \mathrm{dist}\big(x_k^{(h)}, b_j\big) < d_1 \ \text{and}\ D_h(b_j) > 0 \\[4pt] P_{S_{Max}}, & \text{otherwise} \end{cases}$$
where D h ( b j ) is positive if the corresponding position of the h is moving toward the boundary b j and is negative otherwise.
Note that the movement direction of a hypothesis can be inferred from the intrinsic kinematic state vector (by checking the sign of the velocity stored in the state vector along any existing axis X, Y, or Z). $d_1$ and $d_2$ stand for distance thresholds, and $\mathrm{dist}(\cdot,\cdot)$ is the positional distance between its arguments. $P_{S_{Max}}$ is the maximum value considered for the survival probability, which is at most equal to 1.
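The boundary-based survival model can be sketched as follows. This is a sketch: the linear ramp $(\mathrm{dist}-d_2)/(d_1-d_2)\cdot P_{S_{Max}}$ is our continuous reading of the piecewise equation, and the function names are our own.

```python
def boundary_survival(dist, moving_toward, d1, d2, ps_max=0.97):
    """P_S^{(h,j)} for one boundary b_j, with thresholds d2 < d1."""
    if moving_toward and dist < d2:
        return 0.0                                # about to leave the scene
    if moving_toward and d2 <= dist < d1:
        return (dist - d2) / (d1 - d2) * ps_max   # linear ramp toward ps_max
    return ps_max                                 # far away, or moving away

def survival_model(dists, toward_flags, d1, d2, ps_max=0.97):
    """S(x): minimum survival probability over all boundaries b_1..b_NB."""
    return min(boundary_survival(d, m, d1, d2, ps_max)
               for d, m in zip(dists, toward_flags))
```

Taking the minimum over boundaries means the nearest threatening boundary dominates, so a hypothesis heading into any wall receives a low survival probability regardless of the other boundaries.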
In the δ-GLMB filtering procedure, the weight $w^{(h)}$ denotes the probability of hypothesis (h). In practice, missing the measurement of a target lowers the weights of the corresponding hypotheses, and the estimate might therefore be lost. In this section, inspired by [50], a new parameter $P_C$ (an abbreviation of Probability of Confirmation) is added to the hypothesis parameter set. The parameter $P_C$ expresses the amount of confidence in a hypothesis during the filtering period. It is an adaptive parameter, calculated separately for each hypothesis based on a reward-and-punishment scheme, and it is utilised in the truncation step to decrease the effect of miss detection and similar uncertainties by preventing the truncation of informative hypotheses whose measurements are momentarily uncertain.
Considering the parameter set (with regard to the newly defined parameter $P_C$) as
$$\big\{I_k^{(h)}, \xi_k^{(h)}, w_k^{(h)}, p_k^{(h)}, P_{C,k}^{(h)}\big\}_{h=1}^{H},$$
$P_{C,k}^{(h)}$ shows the probability of confirmation of hypothesis (h) at the kth step of filtering. At the birth time of each hypothesis, the value of $P_C$ is set to a value less than the predefined threshold $P_{C_{Thresh}}$, i.e.,
$$P_{C,0}^{(h)} = P_{C_{Thresh}} - \epsilon,$$
where $\epsilon > 0$.
During the lifetime of each hypothesis (h), whenever the weight of the hypothesis drops below $P_{C_{Thresh}}$ (after performing the update/prediction phase), the hypothesis is considered to have a high chance of being affected by an uncertain condition such as miss detection; hence, it is punished by penalising $P_{C,k}^{(h)}$ with a penalty coefficient $\tau_p$:
$$P_{C,k}^{(h)} = P_{C,k-1}^{(h)} \times \tau_p$$
where $0 < \tau_p < 1$. In contrast, if the weight of a hypothesis exceeds $P_{C_{Thresh}}$, then the hypothesis is rewarded as
$$P_{C,k}^{(h)} = \min\big(1,\, P_{C,k-1}^{(h)} \times \tau_r\big)$$
where $\tau_r > 1$.
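The reward-and-punishment update of $P_C$ can be sketched in a few lines (the default coefficient values here are illustrative, not taken from the paper):

```python
def update_pc(pc_prev, weight, pc_thresh, tau_p=0.9, tau_r=1.1):
    """One step of the probability-of-confirmation update for a hypothesis.

    Punish (shrink by 0 < tau_p < 1) when the hypothesis weight falls
    below pc_thresh; otherwise reward (grow by tau_r > 1, capped at 1)."""
    if weight < pc_thresh:
        return pc_prev * tau_p          # suspected miss detection
    return min(1.0, pc_prev * tau_r)    # confidence grows, capped at 1
```

Because punishment is multiplicative, a long run of missed detections decays $P_C$ geometrically rather than discarding the hypothesis outright, which is what gives momentarily occluded tracks a chance to recover.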
If the measurement uncertainties continue over multiple consecutive time steps $(k-n, \ldots, k+m)$ (e.g., a long occlusion), most hypotheses related to the corresponding target might perish. In order to alleviate this flaw, a hypothesis-pool refinement method is proposed which utilises the concepts of the adaptive probability of survival and the adaptive probability of confirmation. The basic idea is that if a hypothesis exists at step k−1 and $P_{S,k}^{(h)} > 0$, then the corresponding track is expected to survive at the next time step k. Therefore, a refinement step is added to the truncation: hypotheses whose probability of survival is positive and whose probability of confirmation exceeds a predefined threshold $P_{C_{Refine}}$ are added back to the hypothesis pool and given a second chance against the probable occurrence of uncertainty.

4. Experimental Results and Discussion

In this section, the performance and effectiveness of the proposed methods, which aim to counter the measurement-uncertainty drawback of δ-GLMB filtering, are verified using both simulated and real-world datasets. The examples and scenarios are based on linear Gaussian models; hence, a Gaussian mixture implementation is used. The experimental results on the simulated scenario use a Monte Carlo simulation with 100 runs. Scenarios with different probabilities of detection and various subsequent miss-detection rates are employed to show the effectiveness of the proposed methods compared with the standard δ-GLMB filter, the GM-PHD filter, and the N-scan GM-PHD filter [43], i.e., against tracking methods designed to counter uncertainties in the measurements.

4.1. Simulated Dataset Results

A typical scenario is considered as a set of multi-target trajectories on a two-dimensional region [−1000,1000] m × [−1000,1000] m, as shown in Figure 1.
We consider a duration of K = 100 s. The number of targets may change over time as an outcome of births and deaths. The kinematic target state is a 4D vector containing the planar position and a constant velocity (target velocities are constant and may differ from each other), $x_k = [p_{x,k}, p_{y,k}, \dot p_{x,k}, \dot p_{y,k}]^T$. Noisy vectors of the 2D planar position, $z_k = [z_{x,k}, z_{y,k}]^T$, form the measurements. The single-target state-space model is linear Gaussian with transition density $f_{k|k-1}(x_k|x_{k-1}) = \mathcal{N}(x_k; F_k x_{k-1}, Q_k)$ and Gaussian likelihood $g_k(z_k|x_k) = \mathcal{N}(z_k; H_k x_k, R_k)$, where F and H are the state and observation matrices and Q and R are the process and measurement noise covariances, represented by
$$F_k = \begin{bmatrix} 1 & 0 & \Delta & 0 \\ 0 & 1 & 0 & \Delta \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad H_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \quad Q_k = \sigma_v^2 \begin{bmatrix} \frac{\Delta^4}{4} & 0 & \frac{\Delta^3}{2} & 0 \\ 0 & \frac{\Delta^4}{4} & 0 & \frac{\Delta^3}{2} \\ \frac{\Delta^3}{2} & 0 & \Delta^2 & 0 \\ 0 & \frac{\Delta^3}{2} & 0 & \Delta^2 \end{bmatrix}, \quad R_k = \sigma_\epsilon^2 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
where Δ = 1 s is the sampling period, and $\sigma_v = 5$ m/s² and $\sigma_\epsilon = 10$ m are the standard deviations of the process and measurement noise. The survival probability is $P_{S,k} = 0.97$ (except for the refined version, in which the probability of survival is adaptive), and the birth model is a Labelled Multi-Bernoulli RFS with parameters $\pi_B = \{r_B^{(i)}, p_B^{(i)}\}_{i=1}^{4}$, where $r_B^{(i)} = 0.03$ and $p_B^{(i)}(x) = \mathcal{N}(x; m_B^{(i)}, P_B)$ with $m_B^{(1)} = [0.1, 0, 0.1, 0]^T$, $m_B^{(2)} = [400, 0, 600, 0]^T$, $m_B^{(3)} = [800, 0, 200, 0]^T$, $m_B^{(4)} = [200, 0, 800, 0]^T$, and $P_B = \mathrm{diag}([10, 10, 10, 10]^T)^2$. Clutter follows a Poisson RFS, giving an average of 65 false alarms per scan. The δ-GLMB filter is capped at 12,000 components, and results are shown for 100 Monte Carlo trials. The comparison criterion for the methods is the Optimal Sub-Pattern Assignment (OSPA) distance [51,52] with cut-off c = 100 and order p = 1. The OSPA distance is calculated as
$$\mathrm{OSPA}_{c,p}\big(X_k, \hat X_k\big) = \left( \frac{1}{|\hat X_k|} \left( \min_{\pi \in \Pi_{|\hat X_k|}} \sum_{i=1}^{|X_k|} \big(d^{(c)}(x_i, \hat x_{\pi(i)})\big)^p + c^p \big(|\hat X_k| - |X_k|\big) \right) \right)^{1/p}, \quad |X_k| \le |\hat X_k|$$
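The OSPA distance can be computed by brute force for small sets, as a sanity check of the formula above. This is a sketch; production code would solve the inner minimisation with an optimal-assignment routine instead of enumerating permutations.

```python
import itertools

def ospa(X, Xhat, c=100.0, p=1):
    """OSPA distance between two finite sets of 2D points."""
    if len(X) > len(Xhat):               # arrange so that |X| <= |Xhat|
        X, Xhat = Xhat, X
    m, n = len(X), len(Xhat)
    if n == 0:
        return 0.0                       # both sets empty

    def d_cut(a, b):                     # cut-off base distance d^(c)
        return min(c, sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5)

    best = 0.0
    if m > 0:
        best = min(sum(d_cut(X[i], perm[i]) ** p for i in range(m))
                   for perm in itertools.permutations(Xhat, m))
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

With p = 1 the metric splits cleanly into a localisation term (the optimally matched, cut-off distances) and a cardinality term ($c$ per unmatched estimate), which is why OSPA penalises both position error and a wrong target count.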
The results shown for the N-scan method are obtained with fixed values for the N-scan parameters: the length of the history window is N = 5, with $N_{Init} = 3$ and $N_{Surv} = 2$.
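As a concrete reading of the linear Gaussian model above, the constant-velocity matrices can be constructed as follows (a sketch; the displayed block matrices are interpreted with 2×2 identity blocks for the state ordering $[p_x, p_y, \dot p_x, \dot p_y]$):

```python
def cv_model(delta=1.0, sigma_v=5.0, sigma_eps=10.0):
    """Constant-velocity state-space matrices F, H, Q, R as plain lists."""
    d, q, r = delta, sigma_v ** 2, sigma_eps ** 2
    F = [[1.0, 0.0, d,   0.0],          # position += velocity * delta
         [0.0, 1.0, 0.0, d  ],
         [0.0, 0.0, 1.0, 0.0],          # velocity is constant
         [0.0, 0.0, 0.0, 1.0]]
    H = [[1.0, 0.0, 0.0, 0.0],          # only the planar position is observed
         [0.0, 1.0, 0.0, 0.0]]
    Q = [[q * d**4 / 4, 0.0,          q * d**3 / 2, 0.0         ],
         [0.0,          q * d**4 / 4, 0.0,          q * d**3 / 2],
         [q * d**3 / 2, 0.0,          q * d**2,     0.0         ],
         [0.0,          q * d**3 / 2, 0.0,          q * d**2    ]]
    R = [[r, 0.0], [0.0, r]]
    return F, H, Q, R
```

This is the standard white-noise-acceleration model, so any linear Gaussian filter implementation can consume these matrices directly.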
Figure 2 shows the results for the standard δ -GLMB filter, N-scan δ -GLMB filter, N-scan δ -GLMB filter with enhanced update and prediction steps, and refined δ -GLMB. These results are obtained with a detection rate of 80 % , and the average OSPA in this situation is shown in Table 2.
Figure 3 shows the average OSPA over time for different detection probabilities. These results show that the effects of miss detection are decreased by the proposed methods. Notice that at higher miss-detection rates the gap between the proposed methods and the standard δ-GLMB widens, which shows that the proposed methods operate better in uncertain conditions.
Figure 4 analyses the methods under different clutter rates, and Figure 5 shows results for a scenario in which some targets are forced to be miss-detected for a certain number of consecutive frames. The results demonstrate that the proposed N-scan and refined δ-GLMB methods improve the estimation performance of the δ-GLMB in uncertain conditions, as their OSPA decreases relative to the other methods, especially when the uncertainty rate increases. For example, the OSPA gap between our proposed methods and the other methods is larger at lower detection rates, which indicates better performance in uncertain conditions. The proposed refined method also outperforms the other methods under sequenced miss detection. Notice that recently proposed variants, such as the Marginalised δ-GLMB [54], perform analogously to the δ-GLMB in terms of estimation under uncertain conditions. The methods of this paper focus on enhancing the performance of the δ-GLMB filter family; hence, these ideas can also be applied to other forms of the δ-GLMB.
Figure 6 shows another scenario with a markedly high amount of uncertainty in the measurements and dense occlusion along both the X and Y axes. The discussed methods are run on this scenario, and the results shown in Table 3 confirm the effectiveness of our proposed methods in heavily uncertain conditions.

4.2. Visual Dataset Results

Here, the PETS09-S2L1 and GRAM-RTM [55] datasets are utilised in order to verify the functionality of the proposed methods. We chose PETS09-S2L1 because it is crowded with people (presenting a proper multi-object tracking environment) and contains obstacles in the scene which cause momentary and continuous occlusions. The GRAM-RTM dataset is chosen because many vehicles cross the scene, causing a high amount of occlusion; it is also low-quality, which additionally causes miss detections. Hence, it is possible to verify the proposed methods' performance in a challenging environment with measurement uncertainties. Figure 7 shows the quality of the different methods on sub-samples of the PETS09-S2L1 and GRAM-RTM datasets. As is clear in Figure 7, the δ-GLMB fails to track the targets behind the obstacle in the middle of the scene, whereas the three proposed methods show progress in tracking. The scene shown in Figure 7 is a frame in which two different objects approach from different locations, come across each other, stay for some moments (during which they are mostly miss-detected because of the solid obstacles in the scene), and leave the scene in different directions. The qualitative results in Figure 7 show that the proposed methods are more successful in tracking targets than the original δ-GLMB method, since they track the targets in more frames and counter the miss-detection effect.
The quantitative results for PETS09-S2L1 and GRAM-RTM are shown in Table 4 and Table 5, respectively, where popular CLEAR MOT metrics are used to evaluate the performance of the trackers [56,57]. The MOT metrics used here are MOTA and MOTP. MOTA, or the Multiple Object Tracking Accuracy measure, combines three error sources, false positives, missed targets, and identity switches, calculated as
$$\mathrm{MOTA} = 1 - \frac{\sum_k \big(\mathrm{FalseNegative}_k + \mathrm{FalsePositive}_k + NS_k\big)}{\sum_k n_{g,k}}$$
where k is the frame number and $n_{b,k}$ is the number of estimated bounding boxes at frame k. MOTP, or the Multiple Object Tracking Precision measure, is a criterion for how the tracker performs under misalignment between the annotated and predicted bounding boxes, calculated as
$$\mathrm{MOTP} = \frac{\sum_{t,k} \mathrm{Area}\big(groundTruth_k^t \cap estimated_k^t\big) \big/ \mathrm{Area}\big(groundTruth_k^t \cup estimated_k^t\big)}{\sum_k n_{b,k}}$$
where $n_{g,k}$ denotes the number of ground truths at frame k and $NS_k$ is the number of target identity switches. The results also contain four other numerical metrics. The Recall Rate (ReR) verifies the filters' ability to produce true estimates. The False Alarm Rate (FAR) investigates the effect of the uncertainty. The Missed Trajectory Rate (MTR) and the Missed Occlusion Rate (MOR) evaluate the amount of improvement in handling occlusion. These criteria are calculated as
$$ReR = \frac{\#\ \text{true estimations}}{\#\ \text{total ground-truth targets}}, \qquad FAR = \frac{\#\ \text{false estimations}}{\#\ \text{total estimations}},$$
$$MTR = \frac{\#\ \text{missed trajectories}}{\#\ \text{total trajectories}}, \qquad MOR = \frac{\#\ \text{missed occlusions}}{\#\ \text{total occlusions}}$$
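The accumulation of MOTA and of the simple rates can be sketched as follows (a sketch; the argument names are ours, and per-frame error counts are assumed to be available from the evaluation tooling):

```python
def mota(false_neg, false_pos, id_switches, n_ground_truth):
    """MOTA from per-frame counts of the three error sources."""
    errors = sum(false_neg) + sum(false_pos) + sum(id_switches)
    return 1.0 - errors / sum(n_ground_truth)

def rate(count, total):
    """Generic ratio used for ReR, FAR, MTR, and MOR."""
    return count / total
```

Note that MOTA can go negative when the accumulated errors exceed the number of ground-truth objects, which is expected behaviour of the CLEAR MOT definition.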
As is clear from the results, our proposed methods improve the considered criteria. MOTA and MOTP improve, showing better tracking accuracy and precision than the other discussed trackers. ReR, FAR, MTR, and MOR, which verify the performance of the trackers in uncertain conditions, also improve; for example, the decrease in MTR shows that the number of missed trajectories is significantly reduced by our proposed methods compared with the other verified methods. In total, these results show that utilising the information stored in the history of the filtering procedure leads to better tracker performance. For example, when a person detected in the scene goes undetected for a while, knowing that this person existed and moved in a certain direction can help the tracker to expect the person to reappear in a rational area (given the history of appearance in the scene).

5. Conclusions

In practice, δ-GLMB filter performance may degrade in uncertain conditions due to the truncation of informative components that have momentarily lost information at the occurrence of an uncertainty. In this paper, we proposed methods that aim to improve filtering performance by identifying which components are in such circumstances and keeping them from being removed from the filtering procedure. The main idea is to utilise the information stored in the history of the filtering procedure: we benefit from the information stored in the last N iterations of filtering to identify components that are being truncated wrongly, according to their behaviour during their existence (with regard to the value of N). We then use a survival model and a reward-and-punishment scheme to confirm whether a component should be truncated or survive. The performance of the proposed methods was verified on a simulated scenario under multiple conditions, in terms of different probabilities of detection, different clutter rates, and a subsequent miss-detection scenario whose targets were forced to be miss-detected over consecutive frames. The proposed methods were compared with the standard δ-GLMB filter, and the results show that they outperform it in the presence of random uncertainties.

Author Contributions

Conceptualization, M.H.S.; methodology, M.H.S.; software, M.H.S.; validation, M.H.S., S.M., Z.A. and P.F.; formal analysis, M.H.S.; investigation, M.H.S., S.M., Z.A. and P.F.; resources, Z.A. and P.F.; data curation, M.H.S.; writing—original draft preparation, M.H.S.; writing—review and editing, S.M., Z.A. and P.F.; visualization, M.H.S.; supervision, Z.A. and P.F.; project administration, Z.A. and P.F.; funding acquisition, P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada via their Discovery program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and code are available upon reasonable request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Supplementary Background

A.1 The probability density of a Multi-Bernoulli RFS X on $\mathbb{X}$ is given by [44]
$$\pi(\{x_1, \ldots, x_n\}) = \prod_{j=1}^{M} \big(1 - r^{(j)}\big) \sum_{1 \le i_1 \ne \cdots \ne i_n \le M}\; \prod_{j=1}^{n} \frac{r^{(i_j)}\, p^{(i_j)}(x_j)}{1 - r^{(i_j)}}$$
A.2 The probability density of a labelled Multi-Bernoulli (LMB) RFS X on $\mathbb{X}$ with finite parameter set $\{(r^{(\zeta)}, p^{(\zeta)}) : \zeta \in \Psi\}$, augmented with the label space $\mathbb{L}$, is given by
$$\pi(\{(x_1, \ell_1), \ldots, (x_n, \ell_n)\}) = \delta_n\big(|\{\ell_1, \ldots, \ell_n\}|\big) \prod_{\zeta \in \Psi} \big(1 - r^{(\zeta)}\big) \prod_{j=1}^{n} \frac{1_{\alpha(\Psi)}(\ell_j)\, r^{(\alpha^{-1}(\ell_j))}\, p^{(\alpha^{-1}(\ell_j))}(x_j)}{1 - r^{(\alpha^{-1}(\ell_j))}}$$
A.3 The set integral is defined as
$$\int f(X)\, \delta X = \sum_{i=0}^{\infty} \frac{1}{i!} \int f(\{x_1, \ldots, x_i\})\, d(x_1, \ldots, x_i)$$

Appendix B. The L1-Error of N-Scan Method and Traditional Method of Discarding

In Proposition 5 of Ref. [40], it is proved that the $L_1$-error between a δ-GLMB density and its truncated (traditional truncation) version is given by
$$\|f_H - f_T\|_1 = \sum_{(I,\xi) \in H \setminus T} w^{(I,\xi)}$$
only if $T \subseteq H$, where $\|f\|_1 \triangleq \int |f(X)|\, \delta X$ denotes the $L_1$-norm and
$$f_H = \Delta(X) \sum_{(I,\xi) \in H} w^{(I,\xi)}\, \delta_I\big(\mathcal{L}(X)\big)\, \big[p^{(\xi)}\big]^X.$$
Now, let $\tilde T = T \cup C_F$ denote the hypothesis set kept by the N-scan truncation, where $C_F \subseteq C$ is the subset of candidates that are finally retained. Since $\tilde T \subseteq H$, then
$$\|f_H - f_{\tilde T}\|_1 = \sum_{(I,\xi) \in H \setminus \tilde T} w^{(I,\xi)} = \sum_{(I,\xi) \in (H \setminus T) \setminus C_F} w^{(I,\xi)}$$
Since $C_F \subseteq C$, $C \cap T = \emptyset$, $C \subseteq H$, and $T \subseteq H$, we have
$$C \subseteq H \setminus T \quad \Rightarrow \quad C_F \subseteq H \setminus T$$
and therefore, because the weights are non-negative,
$$\sum_{(I,\xi) \in (H \setminus T) \setminus C_F} w^{(I,\xi)} \le \sum_{(I,\xi) \in H \setminus T} w^{(I,\xi)}$$
so
$$\|f_H - f_{\tilde T}\|_1 \le \|f_H - f_T\|_1,$$
which shows that the $L_1$-error of the N-scan method is at most that of the traditional truncation method.
Potential extension: In recent years, self-supervised learning (SSL) [58,59] has demonstrated strong capabilities in representation learning without requiring labelled data. Techniques such as contrastive learning [60] and kernel-based dependency minimisation [61] have shown promise in learning robust embeddings. In the context of multi-target tracking, SSL could enhance feature learning by enabling the model to extract invariant representations of targets across different observations, reducing the impact of clutter, occlusion, and missing detections. By integrating it with the δ-GLMB filtering process, tracking robustness could be improved, particularly in scenarios where measurement uncertainty is high.

References

  1. Bar-Shalom, Y.; Fortmann, T.E. Tracking and Data Association; Academic Press: Cambridge, UK, 1988.
  2. Blackman, S.; Popoli, R. Design and Analysis of Modern Tracking Systems (Artech House Radar Library); Artech House: London, UK, 1999.
  3. Alhadhrami, E.; Seghrouchni, A.E.F.; Barbaresco, F.; Zitar, R.A. Testing Different Multi-Target/Multi-Sensor Drone Tracking Methods Under Complex Environment. In Proceedings of the 2024 International Radar Symposium (IRS), Wroclaw, Poland, 2–4 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 352–357.
  4. Hong, J.; Wang, T.; Han, Y.; Wei, T. Multi-Target Tracking for Satellite Videos Guided by Spatial-Temporal Proximity and Topological Relationships. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5614020.
  5. Wang, X.; Li, D.; Wang, J.; Tong, D.; Zhao, R.; Ma, Z.; Li, J.; Song, B. Continuous multi-target tracking across disjoint camera views for field transport productivity analysis. Autom. Constr. 2025, 171, 105984.
  6. Reuter, S.; Vo, B.T.; Vo, B.N.; Dietmayer, K. The Labeled Multi-Bernoulli Filter. IEEE Trans. Signal Process. 2014, 62, 3246–3260.
  7. Blackman, S.S. Multiple hypothesis tracking for multiple target tracking. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 5–18.
  8. Yang, Y.; Yan, T.; Shen, J.; Sun, G.; Tian, Z.; Ju, W. Multi-Hypothesis Tracking Algorithm for Missile Group Targets. In Proceedings of the 2024 IEEE International Conference on Unmanned Systems (ICUS), Nanjing, China, 18–20 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 745–750.
  9. Yang, Z.; Nie, H.; Liu, Y.; Bian, C. Robust Tracking Method for Small and Weak Multiple Targets Under Dynamic Interference Based on Q-IMM-MHT. Sensors 2025, 25, 1058.
  10. Fortmann, T.; Bar-Shalom, Y.; Scheffe, M. Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Ocean. Eng. 1983, 8, 173–184.
  11. Chen, Q.; Wang, P.; Wei, H. An algorithm for multi-target tracking in low-signal-to-clutter-ratio underwater acoustic scenes. AIP Adv. 2024, 14, 105121.
  12. Gu, Z.; Cheng, S.; Wang, C.; Wang, R.; Zhao, Y. Robust Visual Localization System With HD Map Based on Joint Probabilistic Data Association. IEEE Robot. Autom. Lett. 2024, 9, 9415–9422.
  13. Mahler, R. Random set theory for target tracking and identification. In Data Fusion Hand Book; CRC Press: Boca Raton, FL, USA, 2001.
  14. Daley, D.J.; Vere-Jones, D. An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
  15. Forti, N.; Millefiori, L.M.; Braca, P.; Willett, P. Random finite set tracking for anomaly detection in the presence of clutter. In Proceedings of the 2020 IEEE Radar Conference (RadarConf20), Florence, Italy, 21–25 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6.
  16. Yao, X.; Qi, B.; Wang, P.; Di, R.; Zhang, W. Novel Multi-Target Tracking Based on Poisson Multi-Bernoulli Mixture Filter for High-Clutter Maritime Communications. In Proceedings of the 2024 12th International Conference on Information Systems and Computing Technology (ISCTech), Xi'an, China, 8–11 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7.
  17. Si, W.; Wang, L.; Qu, Z. Multi-target tracking using an improved Gaussian mixture CPHD Filter. Sensors 2016, 16, 1964.
  18. Li, C.; Bao, Q.; Pan, J. Multi-target Tracking Method of Non-cooperative Bistatic Radar System Based on Improved PHD Filter. In Proceedings of the 2024 Photonics & Electromagnetics Research Symposium (PIERS), Chengdu, China, 21–25 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7.
  19. Hoseinnezhad, R.; Vo, B.N.; Vo, B.T. Visual tracking in background subtracted image sequences via multi-Bernoulli filtering. IEEE Trans. Signal Process. 2013, 61, 392–397.
  20. Wu, W.; Sun, H.; Zheng, M.; Huang, W. Target Tracking with Random Finite Sets; Springer: Berlin/Heidelberg, Germany, 2023.
  21. Lee, C.S.; Clark, D.E.; Salvi, J. SLAM with dynamic targets via single-cluster PHD filtering. IEEE J. Sel. Top. Signal Process. 2013, 7, 543–552.
  22. Zhang, Y.; Li, Y.; Li, S.; Zeng, J.; Wang, Y.; Yan, S. Multi-target tracking in multi-static networks with autonomous underwater vehicles using a robust multi-sensor labeled multi-Bernoulli filter. J. Mar. Sci. Eng. 2023, 11, 875.
  23. Chen, J.; Xie, Z.; Dames, P. The semantic PHD filter for multi-class target tracking: From theory to practice. Robot. Auton. Syst. 2022, 149, 103947.
  24. Jeong, T. Particle PHD filter multiple target tracking in sonar image. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 409–416.
  25. Zeng, Y.; Wang, J.; Wei, S.; Zhang, C.; Zhou, X.; Lin, Y. Gaussian mixture probability hypothesis density filter for heterogeneous multi-sensor registration. Mathematics 2024, 12, 886.
  26. Liang, G.; Zhang, B.; Qi, B. An augmented state Gaussian mixture probability hypothesis density filter for multitarget tracking of autonomous underwater vehicles. Ocean Eng. 2023, 287, 115727.
  27. Leach, M.J.; Sparks, E.P.; Robertson, N.M. Contextual anomaly detection in crowded surveillance scenes. Pattern Recognit. Lett. 2014, 44, 71–79. [Google Scholar]
  28. Blair, A.; Gostar, A.K.; Bab-Hadiashar, A.; Li, X.; Hoseinnezhad, R. Enhanced Multi-Target Tracking in Dynamic Environments: Distributed Control Methods Within the Random Finite Set Framework. arXiv 2024, arXiv:2401.14085. [Google Scholar]
  29. Meißner, D.A.; Reuter, S.; Strigel, E.; Dietmayer, K. Intersection-Based Road User Tracking Using a Classifying Multiple-Model PHD Filter. IEEE Intell. Transport. Syst. Mag. 2014, 6, 21–33. [Google Scholar]
  30. Zhang, Y.; Zhang, B.; Shen, C.; Liu, H.; Huang, J.; Tian, K.; Tang, Z. Review of the field environmental sensing methods based on multi-sensor information fusion technology. Int. J. Agric. Biol. Eng. 2024, 17, 1–13. [Google Scholar]
  31. Gruden, P.; White, P.R. Automated extraction of dolphin whistles—A sequential Monte Carlo probability hypothesis density approach. J. Acoust. Soc. Am. 2020, 148, 3014–3026. [Google Scholar] [CrossRef] [PubMed]
  32. Rezatofighi, S.H.; Gould, S.; Vo, B.N.; Mele, K.; Hughes, W.E.; Hartley, R. A multiple model probability hypothesis density tracker for time-lapse cell microscopy sequences. In Proceedings of the International Conference on Information Processing in Medical Imaging, Asilomar, CA, USA, 28 June–3 July 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 110–122. [Google Scholar]
  33. Ben-Haim, T.; Raviv, T.R. Graph neural network for cell tracking in microscopy videos. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 610–626. [Google Scholar]
  34. Kim, D.Y. Multi-Bernoulli filtering for keypoint-based visual tracking. In Proceedings of the 2016 International Conference on Control, Automation and Information Sciences (ICCAIS), Ansan, Republic of Korea, 27–29 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 37–41. [Google Scholar]
  35. Baker, L.; Ventura, J.; Langlotz, T.; Gul, S.; Mills, S.; Zollmann, S. Localization and tracking of stationary users for augmented reality. Vis. Comput. 2024, 40, 227–244. [Google Scholar] [CrossRef]
  36. Mahler, R.P. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: London, UK, 2014. [Google Scholar]
  37. Mahler, R.P. Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1152–1178. [Google Scholar] [CrossRef]
  38. Mahler, R. PHD filters of higher order in target number. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1523–1543. [Google Scholar] [CrossRef]
  39. Vo, B.N.; Vo, B.T.; Pham, N.T.; Suter, D. Joint detection and estimation of multiple objects from image observations. IEEE Trans. Signal Process. 2010, 58, 5129–5141. [Google Scholar] [CrossRef]
  40. Vo, B.N.; Vo, B.T.; Phung, D. Labeled random finite sets and the Bayes multi-target tracking filter. IEEE Trans. Signal Process. 2014, 62, 6554–6567. [Google Scholar] [CrossRef]
  41. Vo, B.T.; Vo, B.N. Labeled random finite sets and multi-object conjugate priors. IEEE Trans. Signal Process. 2013, 61, 3460–3475. [Google Scholar] [CrossRef]
  42. Liu, Z.; Zheng, D.; Yuan, J.; Chen, A.; Li, H.; Zhou, C.; Chen, W.; Liu, Q. δ-GLMB Filter Based on Multiple Model Multiple Hypothesis Tracking. In Proceedings of the 2022 14th International Conference on Signal Processing Systems (ICSPS), online, 18–20 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 249–258. [Google Scholar]
  43. Yazdian-Dehkordi, M.; Azimifar, Z. Novel N-scan GM-PHD-based approach for multi-target tracking. IET Signal Process. 2016, 10, 493–503. [Google Scholar] [CrossRef]
  44. Mahler, R.P. Statistical Multisource-Multitarget Information Fusion; Artech House, Inc.: London, UK, 2007. [Google Scholar]
  45. Vo, B.N.; Ma, W.K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 2006, 54, 4091. [Google Scholar] [CrossRef]
  46. Sepanj, M.H.; Azimifar, Z. N-scan δ-generalized labeled multi-bernoulli-based approach for multi-target tracking. In Proceedings of the Artificial Intelligence and Signal Processing Conference (AISP), Shiraz, Iran, 25–27 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 103–106. [Google Scholar]
  47. Miller, M.L.; Stone, H.S.; Cox, I.J. Optimizing Murty’s ranked assignment method. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 851–862. [Google Scholar] [CrossRef]
  48. Murty, K.G. Letter to the editor—An algorithm for ranking all the assignments in order of increasing cost. Oper. Res. 1968, 16, 682–687. [Google Scholar]
  49. Punchihewa, Y.G.; Vo, B.T.; Vo, B.N.; Kim, D.Y. Multiple Object Tracking in Unknown Backgrounds With Labeled Random Finite Sets. IEEE Trans. Signal Process. 2018, 66, 3040–3055. [Google Scholar] [CrossRef]
  50. Yazdian-Dehkordi, M.; Azimifar, Z. Refined GM-PHD tracker for tracking targets in possible subsequent missed detections. Signal Process. 2015, 116, 112–126. [Google Scholar]
  51. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457. [Google Scholar] [CrossRef]
  52. Tang, T.; Wang, P.; Zhao, P.; Zeng, H.; Chen, J. A novel multi-target TBD scheme for GNSS-based passive bistatic radar. IET Radar Sonar Navig. 2024, 18, 2497–2512. [Google Scholar] [CrossRef]
  53. Liu, J.; Wu, Z.; Zhao, J.; Han, X. Improved GM-PHD Filter for Multi-target Tracking with Dense Clutter and Low Detection Probability. In Proceedings of the 2024 43rd Chinese Control Conference (CCC), Kunming, China, 28–31 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 3505–3510. [Google Scholar]
  54. Vo, B.N.; Vo, B.T.; Hoang, H.G. An efficient implementation of the generalized labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2017, 65, 1975–1987. [Google Scholar] [CrossRef]
  55. Guerrero-Gomez-Olmedo, R.; Lopez-Sastre, R.J.; Maldonado-Bascon, S.; Fernandez-Caballero, A. Vehicle Tracking by Simultaneous Detection and Viewpoint Estimation. In Proceedings of the IWINAC 2013, Mallorca, Spain, 10–14 June 2013; Part II, LNCS 7931. pp. 306–316. [Google Scholar]
  56. Bernardin, K.; Stiefelhagen, R. Evaluating multiple object tracking performance: The CLEAR MOT metrics. J. Image Video Process. 2008, 2008, 246309. [Google Scholar] [CrossRef]
  57. Putra, H.; Nuha, H.H.; Irsan, M.; Putrada, A.G.; Hisham, S.B.I. Object Tracking in Surveillance System Using Particle Filter and ACF Detection. In Proceedings of the 2024 International Conference on Decision Aid Sciences and Applications (DASA), Hybrid, Bahrain, 11–12 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7. [Google Scholar]
  58. Sepanj, H.; Fieguth, P. Context-Aware Augmentation for Contrastive Self-Supervised Representation Learning. J. Comput. Vis. Imaging Syst. 2023, 9, 4–7. [Google Scholar]
  59. Sepanj, H.; Fieguth, P. Aligning Feature Distributions in VICReg Using Maximum Mean Discrepancy for Enhanced Manifold Awareness in Self-Supervised Representation Learning. J. Comput. Vis. Imaging Syst. 2024, 10, 13–18. [Google Scholar]
60. Sepanj, M.H.; Fieguth, P. SinSim: Sinkhorn-Regularized SimCLR. arXiv 2025, arXiv:2502.10478. [Google Scholar]
  61. Sepanj, M.H.; Ghojogh, B.; Fieguth, P. Self-Supervised Learning Using Nonlinear Dependence. arXiv 2025, arXiv:2501.18875. [Google Scholar]
Figure 1. Multiple trajectories on the XY plane used as input to the tracking simulation. The start and stop positions of each track are marked with o and Δ, respectively. Because the targets have different birth and death times, at most 10 targets are present at any one time. The targets occlude and cross one another at the locations indicated in the figure.
Figure 2. OSPA of the different methods over 100 filtering iterations (100 Monte Carlo trials), with P D = 0.80 and 65 clutter returns per frame. The areas under the curves of the proposed methods are smaller than that of the δ -GLMB method, yielding a lower time-averaged OSPA and hence better performance.
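The OSPA values reported in Figures 2–5 and Tables 2 and 3 use the optimal sub-pattern assignment distance of Schuhmacher et al. [51]. As a minimal illustrative sketch (not the authors' implementation; the cutoff c and order p shown are assumed, commonly used values), the distance between two sets of 2-D state estimates can be computed with NumPy and SciPy as:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=100.0, p=1.0):
    """OSPA distance between two sets of 2-D points (arrays of shape (m, 2))."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                        # ensure |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    # pairwise Euclidean distances, clipped at the cutoff c
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    D = np.minimum(D, c) ** p
    rows, cols = linear_sum_assignment(D)            # optimal assignment
    loc = D[rows, cols].sum()                        # localisation error term
    card = (c ** p) * (n - m)                        # cardinality penalty term
    return ((loc + card) / n) ** (1.0 / p)
```

The cardinality term penalises each missed or spurious track by the cutoff c, which is why the OSPA curves rise sharply when tracks are dropped under heavy clutter or missed detections.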
Figure 3. OSPA for different probabilities of detection (100 Monte Carlo trials). The proposed methods outperform the δ -GLMB method in the presence of missed detections. As the missed-detection rate increases (i.e., the probability of detection decreases), the gap between the methods widens, demonstrating the strength of the proposed methods under greater measurement uncertainty.
Figure 4. OSPA for different clutter rates per frame with a probability of detection of 0.95 (100 Monte Carlo trials). The proposed methods outperform the δ -GLMB method in the presence of clutter. As the clutter rate increases, the gap between the methods widens, demonstrating the strength of the proposed methods under greater measurement uncertainty.
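The experimental conditions in Figures 2–4 (a fixed probability of detection P D and a mean number of clutter returns per frame) follow the standard RFS measurement model: each target is detected with probability P D and observed with additive noise, while clutter is a Poisson point process distributed uniformly over the surveillance region. A sketch of generating one measurement scan under this model, with assumed noise level and region size, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scan(targets, p_d=0.95, clutter_rate=65,
                  region=((0.0, 1000.0), (0.0, 1000.0))):
    """Generate one scan: detections with probability p_d plus Poisson clutter."""
    meas = []
    for x in targets:                                 # targets: array (n, 2)
        if rng.random() < p_d:                        # otherwise: missed detection
            meas.append(x + rng.normal(0.0, 2.0, size=2))   # assumed noise std
    n_clutter = rng.poisson(clutter_rate)             # clutter count per frame
    for _ in range(n_clutter):                        # uniform over the region
        meas.append(np.array([rng.uniform(*region[0]),
                              rng.uniform(*region[1])]))
    return np.asarray(meas) if meas else np.empty((0, 2))
```

Sweeping p_d downward (Figure 3) or clutter_rate upward (Figure 4) in such a generator reproduces the increasingly uncertain observation conditions under which the filters are compared.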
Figure 5. OSPA for different forced consecutive missed-detection rates with P D = 0.95 (100 Monte Carlo trials). This experiment examines the proposed methods when missed detections occur in consecutive frames; the refined δ -GLMB outperforms the other methods.
Figure 6. Multiple trajectories in the X-Time and Y-Time planes. The trajectories are highly occluded, and the measurement uncertainty is severe.
Figure 7. Results on PETS09-S2L1 (first row) and GRAM-RTM (Urban1) (second row). For clarity, only two target trajectories are drawn in each image. (First column) δ -GLMB; (second column) N-scan δ -GLMB; (third column) enhanced N-scan δ -GLMB; (fourth column) refined N-scan δ -GLMB.
Table 1. Summary of notation.

| Notation | Description |
| --- | --- |
| x | Single-target state |
| X | Multi-target state (set of targets) |
| 𝕏 | State space |
| Z | Multi-object measurement set |
| 𝕃 | Label space |
| ℓ | Unique label assigned to a target |
| δ_Y(X) | Generalised Kronecker delta function |
| 1_Y(X) | Indicator function |
| P_D(x, ℓ) | Probability of detection for target (x, ℓ) |
| g(z \| x, ℓ) | Likelihood of measurement z given target (x, ℓ) |
| π(X) | Multi-target probability density |
| Δ(X) | Distinct label indicator function |
| w^(I,ξ) | Hypothesis weight |
| p^(ξ) | Probability density of target state |
Table 2. Average OSPA of different methods over 100 filtering iterations.

| Method (P_D = 0.8) | OSPA (AVG) (# of Targets) |
| --- | --- |
| GM-PHD [53] | 47.46 (5.17) |
| N-scan GM-PHD [43] | 39.71 (5.88) |
| δ-GLMB | 33.4375 (6.13) |
| N-scan δ-GLMB | 31.1589 (6.71) |
| Enhanced N-scan δ-GLMB | 29.4330 (6.94) |
| Refined N-scan δ-GLMB | 27.0017 (7.26) |
Table 3. Average OSPA of different methods over 100 filtering iterations for the scenario defined in Figure 6.

| Method | OSPA (AVG) |
| --- | --- |
| GM-PHD [53] | 39.03 |
| N-scan GM-PHD [43] | 34.51 |
| δ-GLMB [42] | 32.28 |
| N-scan δ-GLMB (ours) | 28.67 |
| Enhanced N-scan δ-GLMB (ours) | 26.44 |
| Refined N-scan δ-GLMB (ours) | 24.10 |
Table 4. Results of different methods on PETS09-S2L1.

| Method | MOTA | MOTP | Re | RFAR | MTR | MOR |
| --- | --- | --- | --- | --- | --- | --- |
| GM-PHD [53] | 42.62 | 54.02 | 0.45 | 0.12 | 0.57 | 0.71 |
| N-scan GM-PHD [43] | 46.78 | 58.49 | 0.51 | 0.10 | 0.46 | 0.57 |
| δ-GLMB [42] | 51.05 | 63.62 | 0.57 | 0.10 | 0.42 | 0.64 |
| N-scan δ-GLMB (ours) | 54.53 | 65.28 | 0.62 | 0.07 | 0.34 | 0.42 |
| Enhanced N-scan δ-GLMB (ours) | 55.60 | 65.91 | 0.62 | 0.07 | 0.34 | 0.35 |
| Refined N-scan δ-GLMB (ours) | 56.79 | 66.38 | 0.71 | 0.05 | 0.26 | 0.28 |
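MOTA and MOTP in Tables 4 and 5 are the CLEAR MOT metrics of Bernardin and Stiefelhagen [56]. As an illustrative sketch (the per-frame target-to-hypothesis matching that produces the error counts is omitted here), once the errors have been accumulated over a sequence the two scores reduce to:

```python
def mota(misses, false_positives, id_switches, num_gt):
    """CLEAR MOT accuracy: 1 - (FN + FP + IDSW) / GT, as a percentage."""
    return 100.0 * (1.0 - (misses + false_positives + id_switches) / num_gt)

def motp(total_dist, num_matches):
    """CLEAR MOT precision: mean distance over all matched
    target-hypothesis pairs across the sequence."""
    return total_dist / num_matches
```

MOTA aggregates all tracking failures into a single accuracy score, while MOTP measures only the localisation quality of the correctly matched tracks, which is why the two can rank methods differently.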
Table 5. Results of different methods on GRAM-RTM.

| Method | MOTA | MOTP | Re | RFAR | MTR | MOR |
| --- | --- | --- | --- | --- | --- | --- |
| GM-PHD [53] | 27.91 | 32.59 | 0.31 | 0.17 | 0.68 | 0.78 |
| N-scan GM-PHD [43] | 31.20 | 36.17 | 0.46 | 0.12 | 0.56 | 0.66 |
| δ-GLMB [42] | 30.83 | 37.55 | 0.48 | 0.14 | 0.53 | 0.64 |
| N-scan δ-GLMB (ours) | 35.41 | 40.06 | 0.58 | 0.07 | 0.46 | 0.56 |
| Enhanced N-scan δ-GLMB (ours) | 36.26 | 41.74 | 0.60 | 0.07 | 0.41 | 0.53 |
| Refined N-scan δ-GLMB (ours) | 39.32 | 43.45 | 0.68 | 0.04 | 0.34 | 0.43 |
Sepanj, M.H.; Moradi, S.; Azimifar, Z.; Fieguth, P. Uncertainty-Aware δ-GLMB Filtering for Multi-Target Tracking. Big Data Cogn. Comput. 2025, 9, 84. https://doi.org/10.3390/bdcc9040084