Article

Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection

Ministry of Education Key Laboratory for Intelligent Networks and Network Security (MOE KLINNS), College of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(2), 169; https://doi.org/10.3390/s16020169
Submission received: 18 November 2015 / Revised: 8 January 2016 / Accepted: 21 January 2016 / Published: 28 January 2016
(This article belongs to the Section Physical Sensors)

Abstract

The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities.

1. Introduction

The problem of joint detection and estimation (JDE) of multiple targets arises in many surveillance and defense applications [1], where the number of targets is unknown and the sensor may receive measurements generated randomly by either targets or clutter. There is no prior information about which measurements originate from targets of interest and which are clutter. The aim of multi-target JDE is to determine the number of targets and to estimate their states, if any exist, using prior information as well as a sequence of sensor measurements. In recent years, multi-target JDE has attracted extensive attention, and many approaches have been proposed for it [2,3,4,5,6,7,8,9,10].
It is therefore necessary to find an error (lower) bound to assess the achievable performance of multi-target JDE algorithms for a given sensor measurement setting. It is well known that Tichavsky et al. [11] proposed a recursive posterior Cramér-Rao lower bound (CRLB) for evaluating the performance of nonlinear filters when a target is assumed to exist and is observed by a sensor. The CRLB was then extended to cases in which clutter or missed detection is present in the sensor [12,13,14,15]. Nevertheless, these CRLBs [12,13,14,15] can hardly be applied to such a JDE problem, since the CRLB only considers the estimation error of a target state, but not the detection error of the target number (or the existence/non-existence of a target). Within the random finite set (RFS) framework [2,4], Rezaeian and Vo [16] derived static error bounds for JDE of a single target observed by a single sensor with clutter and missed detection. Tong et al. presented a recursive form of a single-sensor single-target error bound based on the CRLB when only missed detection, but not clutter, exists [17], and then extended the result of [17] to the single-sensor multi-target case under the more rigorous restriction that neither clutter nor missed detection exists [18]. Note that the bounds in [17,18] actually do not include the detection error generated by the uncertainty of the target number, since under the observation models of [17,18] the target number is completely determined by the measurement number.
This paper proposes an RFS-based single-sensor multi-target JDE error bound for the case in which clutter and missed detection may exist simultaneously in the sensor. In order to obtain the results of this paper, the multi-target Bayes recursion is approximated as a multi-Bernoulli process [2] or a Poisson process [2], and all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators. Since the JDE error is the average distance between the true and estimated state sets, the second-order optimal sub-pattern assignment (OSPA) distance [19], rather than the Euclidean distance, is used as the error metric. Finally, the simulation results show that clutter density and detection probability have significant impacts on the proposed bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) [4] and cardinalized PHD (CPHD) [5] filters for various clutter densities and detection probabilities.
The rest of the paper is organized as follows. Section 2 presents the background for deriving our results. In Section 3, we derive the proposed bound by using multi-Bernoulli or Poisson approximation. A numerical example is presented in Section 4. The conclusions and future work are given in Section 5. Relevant mathematical proofs are provided in Appendix A and Appendix B.

2. Background

  • Set integral: For any real-valued function φ ( X ) of a finite-set variable X, its set integral is [4]:
    $\int \varphi(X)\, \delta X = \sum_{n=0}^{\infty} \frac{1}{n!} \int_{\mathcal{X}^n} \varphi(X_n)\, dx^{(1)} \cdots dx^{(n)} = \varphi(\emptyset) + \sum_{n=1}^{\infty} \frac{1}{n!} \int_{\mathcal{X}^n} \varphi(X_n)\, dx^{(1)} \cdots dx^{(n)}$  (1)
    where $X_n = \{x^{(i)}\}_{i=1}^{n}$ denotes an n-point set (that is, the cardinality of the set $X_n$ is n) and $\mathcal{X}^n$ denotes the space of $X_n$. In this paper, we define $X_0 = \emptyset$.
  • Multi-Bernoulli RFS: A multi-Bernoulli RFS X is a union of M independent Bernoulli RFSs $X^{(i)}$, $X = \bigcup_{i=1}^{M} X^{(i)}$. Its density is completely described by the parameter $\Upsilon = \{(r^{(i)}, p^{(i)})\}_{i=1}^{M}$ as [6]:
    $f(X) = \pi(\emptyset) \sum_{1 \le j_1 \ne \cdots \ne j_{|X|} \le M} \prod_{i=1}^{|X|} \frac{r^{(j_i)}}{1 - r^{(j_i)}}\, p^{(j_i)}(x^{(i)}), \quad \text{with } \pi(\emptyset) = \prod_{i=1}^{M} \left(1 - r^{(i)}\right)$  (2)
    where $|\cdot|$ denotes the cardinality of a set, $r^{(i)} \in (0, 1)$ denotes the existence probability of the Bernoulli component $X^{(i)}$ and $p^{(i)}(x^{(i)})$ denotes the density of $x^{(i)}$.
  • Poisson RFS: An RFS X is Poisson if its density f ( X ) is:
    $f(X) = e^{-\eta} \prod_{x \in X} \upsilon(x), \quad \text{with } \eta = \int \upsilon(x)\, dx \text{ and } \upsilon(x) = \eta f(x)$  (3)
    where υ ( x ) denotes the intensity function of the Poisson RFS X, η is the average number of elements in X and f ( x ) is the density of single element x X .
  • Second-order OSPA distance: The OSPA distance of order p = 2 between set X and its estimate X ^ is [19]:
    $d^2(X, \hat{X}) = \begin{cases} 0, & |\hat{X}| = |X| = 0 \\[6pt] \dfrac{\min_{\tau \in \Pi_{\max(|\hat{X}|,|X|)}} \sum_{t=1}^{\min(|\hat{X}|,|X|)} \min\left(c^2,\; \|x^{(t)} - \hat{x}^{(\tau(t))}\|_2^2\right) + c^2 \left| |\hat{X}| - |X| \right|}{\max(|\hat{X}|, |X|)}, & |\hat{X}| + |X| > 0 \end{cases}$  (4)
    where $\Pi_n$ denotes the set of permutations on $\{1, 2, \ldots, n\}$, $c > 0$ denotes the cut-off parameter, $\max(\cdot)$ or $\min(\cdot)$ denotes the maximization or minimization operation and $\|\cdot\|_2$ denotes the two-norm. The OSPA metric comprises two components, separately accounting for the "localization" and "cardinality" errors between two sets. The localization error arises from the estimates paired with the nearest truths, while the cardinality error arises from the unpaired estimates. Schuhmacher et al. [19] have proven that the OSPA distance with $p \in [1, \infty)$ and $c > 0$ is indeed a metric, so it can be used as a principled performance measure (a small computational sketch of this metric and of the sensor observation model is given at the end of this section).
  • Information inequality and CRLB: Given a joint probability density $f(x, z)$ on $\mathcal{X} \times \mathcal{Z}$, under regularity conditions and the existence of $\partial^2 \log f(x, z) / \partial x_i \partial x_j$, the information inequality states that [20,21]:
    $\int_{\mathcal{Z}} \int_{\mathcal{X}} f(x, z) \left(x_l - \hat{x}_l(z)\right)^2 dx\, dz \ge \left[\frac{\partial}{\partial x_l} E_f\{\hat{x}_l(z)\}\right]^2 \cdot \left[J^{-1}\right]_{l,l}$  (5)
    where $\hat{x}(z)$ denotes an estimate of the L-dimensional vector x based on z, $x_l$ and $\hat{x}_l(z)$ are, respectively, the l-th components of x and $\hat{x}(z)$, $l = 1, \ldots, L$, the notation $E_f$ means the expectation with respect to the density f and J is the $L \times L$ Fisher information matrix:
    $[J]_{i,j} = -E_f\!\left[\frac{\partial^2 \log f(x, z)}{\partial x_i\, \partial x_j}\right] = -\int_{\mathcal{Z}} \int_{\mathcal{X}} f(x, z)\, \frac{\partial^2 \log f(x, z)}{\partial x_i\, \partial x_j}\, dx\, dz, \quad i, j = 1, 2, \ldots, L$  (6)
    where [ J ] i , j denotes the element on the i-th row and j-th column of matrix J.
    For the particular case in which the estimator $\hat{x}(z)$ is unbiased (that is, $E_f\{\hat{x}(z)\} = x$), the information inequality of Equation (5) reduces to:
    $\int_{\mathcal{Z}} \int_{\mathcal{X}} f(x, z) \left(x_l - \hat{x}_l(z)\right)^2 dx\, dz \ge \left[J^{-1}\right]_{l,l}$  (7)
    which is a result known as the CRLB. The Fisher information matrix J in Equation (7) is also computed by Equation (6).
    Note that the ordinary information inequality of Equation (5) holds without the unbiasedness requirement on the estimator x ^ ( z ) . However, unbiasedness is critical in the CRLB of Equation (7).
    Explanation: In the current set up of this paper, our attention is restricted to the unbiased estimator of multi-target states. Our future work will study the extension of the proposed bound to the biased estimator by using the ordinary information inequality of Equation (5).
    Moreover, Equation (5) or Equation (7) holds with equality only under a very restrictive condition. In [21], Poor concludes that, under regularity, the information lower bound is achieved (that is, the "=" in Equation (5) or Equation (7) holds) by $\hat{x}(z)$ if and only if $\hat{x}(z)$ is in a one-parameter exponential family (e.g., the linear Gaussian models for target dynamics and sensor observation described in [11] for achieving the CRLB). More details can be found in [21].
  • RFS-based multi-target dynamics and sensor observation models: Let $x_k \in \mathcal{X}_k$ denote the state vector of a target and $X_k$ the set of multi-target states at time k, where $\mathcal{X}_k$ is the single-target state space. The multi-target dynamics is modeled by:
    $X_k = \left[\bigcup_{x_{k-1} \in X_{k-1}} \Psi_{k|k-1}(x_{k-1})\right] \cup \Gamma_k$  (8)
    where $\Psi_{k|k-1}(x_{k-1})$ is the set evolved from the previous state $x_{k-1}$: $\Psi_{k|k-1}(x_{k-1}) = \{x_k\}$ with survival probability $p_{S,k}(x_{k-1})$ and transition density $f_{k|k-1}(x_k \mid x_{k-1})$, otherwise $\Psi_{k|k-1}(x_{k-1}) = \emptyset$ with probability $1 - p_{S,k}(x_{k-1})$; $\Gamma_k$ is the set of spontaneous births.
    Let $z_k \in \mathcal{Z}_k$ denote a measurement vector and $Z_k$ the set of measurements received by the sensor at time k, where $\mathcal{Z}_k$ is the sensor measurement space. The single-sensor multi-target observation is modeled by:
    $Z_k = \left[\bigcup_{x_k \in X_k} \Theta_k(x_k)\right] \cup K_k$  (9)
    where $\Theta_k(x_k)$ is the measurement set originating from state $x_k$: $\Theta_k(x_k) = \{z_k\}$ with sensor detection probability $p_{D,k}(x_k)$ and likelihood $g_k(z_k \mid x_k)$, otherwise $\Theta_k(x_k) = \emptyset$ with probability $1 - p_{D,k}(x_k)$; $K_k$ is the clutter set, which is modeled as a Poisson RFS with density:
    $f_{c,k}(K_k) = e^{-\lambda_k} \prod_{z_k \in K_k} \kappa_k(z_k), \quad \text{with } \lambda_k = \int \kappa_k(z_k)\, dz_k \text{ and } \kappa_k(z_k) = \lambda_k f_{c,k}(z_k)$  (10)
    where κ k z k is the clutter intensity, λ k is the average clutter number and f c , k z k is the density of a clutter.
    The transition model in Equation (8) jointly incorporates motion, birth and death for multiple targets, while the sensor observation model in Equation (9) jointly accounts for detection uncertainty and clutter. Assume that the RFSs constituting the unions in Equations (8) and (9) are mutually independent. The multi-target JDE at time k is to derive the estimated state set $\hat{X}_k(Z_{1:k})$ using the collection $Z_{1:k} = \{Z_1, \ldots, Z_k\}$ of all sensor observations up to time k. The paper aims to derive a performance limit for multi-target joint detectors-estimators under the observation of a single sensor with clutter and missed detection. The performance limit is measured by the bound on the average error between $X_k$ and $\hat{X}_k(Z_{1:k})$.
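The observation model of Equation (9), with Poisson clutter as in Equation (10), and the OSPA error metric of Equation (4) can be illustrated with a short, self-contained sketch. The code below is our own illustration, not part of the paper: it generates one scan of noisy position measurements with detection probability p_D and Poisson-distributed uniform clutter, and evaluates the squared second-order OSPA distance between a true state set and an estimated one. All numerical values (detection probability, clutter rate, noise level, cut-off) are arbitrary assumptions.

```python
# Minimal sketch of the single-sensor RFS observation model and the squared
# second-order OSPA distance; all parameter values here are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def observe(states, p_d, lam, region, meas_std):
    """One scan: each state is detected with probability p_d (noisy position
    measurement), and a Poisson number of uniform clutter points is added."""
    Z = [x[:2] + meas_std * rng.standard_normal(2) for x in states if rng.random() < p_d]
    lo, hi = region
    Z += [rng.uniform(lo, hi, size=2) for _ in range(rng.poisson(lam))]
    return Z

def ospa2(X, X_hat, c):
    """Squared OSPA distance of order p = 2 with cut-off c, as in Equation (4)."""
    n, m = len(X), len(X_hat)
    if n == 0 and m == 0:
        return 0.0
    if n == 0 or m == 0:
        return c ** 2
    cost = np.zeros((n, m))
    for i, x in enumerate(X):
        for j, y in enumerate(X_hat):
            cost[i, j] = min(c ** 2, np.sum((np.asarray(x) - np.asarray(y)) ** 2))
    row, col = linear_sum_assignment(cost)     # optimal pairing of the smaller set
    loc = cost[row, col].sum()                 # localization component
    card = c ** 2 * abs(n - m)                 # cardinality component
    return (loc + card) / max(n, m)

# toy example: two true targets, one measurement scan, a deliberately naive "estimate"
X_true = [np.array([10.0, -5.0]), np.array([-20.0, 30.0])]
Z = observe(X_true, p_d=0.9, lam=5, region=(-50.0, 50.0), meas_std=2.5)
X_est = Z[:2]
print("JDE error (squared OSPA):", ospa2(X_true, X_est, c=20.0))
```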

3. Single-Sensor Multi-Target JDE Error Bounds Using Multi-Bernoulli or Poisson Approximation

At time k, the RFS-based mean square error (MSE) between X k and X ^ k Z 1 : k is defined as:
$\sigma_k^2 = E\!\left[e_k^2\!\left(X_k, \hat{X}_k(Z_{1:k})\right)\right] = \int\!\!\int f_k(X_k, Z_k \mid Z_{1:k-1})\, e_k^2\!\left(X_k, \hat{X}_k(Z_{1:k})\right) \delta X_k\, \delta Z_k = \int\!\!\int \gamma_k(Z_k \mid X_k)\, f_{k|k-1}(X_k \mid Z_{1:k-1})\, e_k^2\!\left(X_k, \hat{X}_k(Z_{1:k})\right) \delta X_k\, \delta Z_k$  (11)
where e k X k , X ^ k Z 1 : k denotes the error metric between X k and X ^ k Z 1 : k , which is defined by the second-order OSPA distance in (4), f k X k , Z k Z 1 : k 1 denotes the density of ( X k , Z k ) given Z 1 : k 1 and γ k Z k X k = f k Z k X k denotes the likelihood for the total sensor measurement process.
At time k, given the multi-target state set $X_k^n$ and the sensor measurement set $Z_k^m$, all association hypotheses can be represented as functions from the target index set $\{1, \ldots, n\}$ to the sensor measurement index set $\{0, 1, \ldots, m\}$ [2]. Define:
$\theta_{n,m}: \{1, \ldots, n\} \to \{0, 1, \ldots, m\}$  (12)
as the association hypothesis function with clutter and missed detection. That is, the t-th target $x_k^{(t)}$ with $\theta_{n,m}(t) = 0$ generates no detection, while a target $x_k^{(t)}$ with $\theta_{n,m}(t) > 0$ generates the sensor measurement $z_k^{(\theta_{n,m}(t))}$, $t = 1, 2, \ldots, n$. $\theta_{n,m}$ satisfies the property that $\theta_{n,m}(t) = \theta_{n,m}(t') > 0$ implies $t = t'$.
Then, according to the sensor observation model in Equation (9), the likelihood γ k Z k m X k n with Poisson clutter and missed detection can be denoted as [2]:
$\gamma_k\!\left(Z_k^m \mid X_k^n\right) = e^{-\lambda_k}\, \kappa_k^{Z_k^m} \sum_{\theta_{n,m}} \prod_{t=1}^{n} G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (13)
where the summation is taken over all association hypotheses θ n , m , and G k z k ( θ n , m ( t ) ) x k ( t ) is defined as:
$G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right) = \begin{cases} \dfrac{p_{D,k}(x_k^{(t)})\, g_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)}{\kappa_k\!\left(z_k^{(\theta_{n,m}(t))}\right)}, & \theta_{n,m}(t) > 0 \\[8pt] 1 - p_{D,k}(x_k^{(t)}), & \theta_{n,m}(t) = 0 \end{cases}$  (14)
while the notation κ Z denotes:
$\kappa^Z = \begin{cases} \prod_{z \in Z} \kappa(z), & |Z| \ge 1 \\ 1, & Z = \emptyset \end{cases}$  (15)
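To make the association-hypothesis sum in Equation (13) concrete, the following sketch enumerates all admissible $\theta_{n,m}$ by brute force and accumulates the corresponding products of the factors in Equation (14). It is only an illustration under simplifying assumptions (a Gaussian positional likelihood for g_k and a uniform clutter intensity, both of which are our own choices rather than the paper's range-bearing model); the brute-force enumeration grows as (m+1)^n and is meant for small examples only.

```python
# Illustrative computation of the total likelihood gamma_k(Z | X) with Poisson
# clutter and missed detection by enumerating every association hypothesis.
import itertools
import math
import numpy as np

def gaussian_likelihood(z, x, std):
    d = np.asarray(z) - np.asarray(x)
    return math.exp(-0.5 * float(d @ d) / std**2) / (2 * math.pi * std**2)

def total_likelihood(Z, X, p_d, lam, clutter_density, std=2.5):
    n, m = len(X), len(Z)
    kappa = [lam * clutter_density for _ in Z]       # clutter intensity kappa_k(z)
    kappa_Z = math.prod(kappa) if m >= 1 else 1.0    # kappa^Z (empty product = 1)
    total = 0.0
    # theta maps each target to 0 (missed) or to a measurement index 1..m
    for theta in itertools.product(range(m + 1), repeat=n):
        positive = [j for j in theta if j > 0]
        if len(positive) != len(set(positive)):      # must be injective on j > 0
            continue
        prod = 1.0
        for t, j in enumerate(theta):                # per-target factor G_k
            if j > 0:
                prod *= p_d * gaussian_likelihood(Z[j - 1], X[t], std) / kappa[j - 1]
            else:
                prod *= 1.0 - p_d
        total += prod
    return math.exp(-lam) * kappa_Z * total

# toy usage: two targets (positions only), three received measurements
X = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
Z = [np.array([0.5, -0.3]), np.array([40.0, -20.0]), np.array([9.6, 10.4])]
print(total_likelihood(Z, X, p_d=0.9, lam=5, clutter_density=1e-4))
```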
For deriving the error bound for multi-target JDE, the following two conditions must be satisfied as in [16]:
  • MAP detection criterion: This is applied to determine the number of targets: given a measurement set Z k at time k, the cardinality of the estimated state set X ^ k Z k is obtained as the maximum of the posterior probabilities P k | X k | = n Z 1 : k :
    $\hat{n} = \arg\max_{n} P_k\!\left(|X_k| = n \mid Z_{1:k}\right)$  (16)
    The reason for the use of the MAP detection rule will be clearly explained later in Remark 1 after Theorems 1 and 2.
  • Unbiased estimation criterion: This is a necessary condition for applying the CRLB of Equation (7) in the proof of Theorems 1 and 2.
Next, we derive the proposed bound by using multi-Bernoulli or Poisson approximation for multi-target Bayes recursion, which are stated in Assumptions A.1 and A.2, respectively.
  • Assumption A.1: At time k, the set $\Gamma_k$ of spontaneous births is a multi-Bernoulli RFS with the parameter $\Upsilon_{\Gamma,k} = \{(r_{\Gamma,k}^{(i)}, p_{\Gamma,k}^{(i)})\}_{i=1}^{M_{\Gamma,k}}$ (in general, $\Upsilon_{\Gamma,k}$ is known a priori). Then, the predicted and posterior multi-target densities $f_{k|k-1}(X_k \mid Z_{1:k-1})$ and $f_k(X_k \mid Z_{1:k})$ are approximated as multi-Bernoulli densities with parameters $\Upsilon_{k|k-1} = \{(r_{k|k-1}^{(i)}, p_{k|k-1}^{(i)})\}_{i=1}^{M_{k|k-1}}$ and $\Upsilon_k = \{(r_k^{(i)}, p_k^{(i)})\}_{i=1}^{M_k}$, respectively. Specifically, the parameter of a multi-Bernoulli RFS that approximates the multi-target RFS is propagated under this assumption. The recursions for $\Upsilon_{k|k-1}$ and $\Upsilon_k$ have been presented in [6].
  • Assumption A.2: At time k, the set $\Gamma_k$ of spontaneous births is a Poisson RFS with the intensity $\upsilon_{\Gamma,k}(x_k)$ (in general, $\upsilon_{\Gamma,k}(x_k)$ is known a priori). Then, the predicted and posterior multi-target densities $f_{k|k-1}(X_k \mid Z_{1:k-1})$ and $f_k(X_k \mid Z_{1:k})$ are approximated as Poisson densities with intensities $\upsilon_{k|k-1}(x_k)$ and $\upsilon_k(x_k)$, respectively. Specifically, the intensity of a Poisson RFS that approximates the multi-target RFS is propagated under this assumption. The recursions for $\upsilon_{k|k-1}(x_k)$ and $\upsilon_k(x_k)$ have been presented in [4].
Theorem 1. 
Suppose that Assumption A.1 holds; at time k, given the predicted multi-target multi-Bernoulli parameter $\Upsilon_{k|k-1} = \{(r_{k|k-1}^{(i)}, p_{k|k-1}^{(i)})\}_{i=1}^{M_{k|k-1}}$, the error for joint MAP detection and unbiased estimation of multiple targets with the state model in Equation (8) and the sensor observation model in Equation (9) is bounded by:
$\sigma_k^2 \ge \sum_{m=0}^{\infty} \sum_{n=0}^{N} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{N} \frac{\Omega_k^{n,m}}{m! \cdot n! \cdot \max(n, \hat{n})} \cdot \left\{ \sum_{t=1}^{\min(n, \hat{n})} \min\!\left(c^2 \omega_k^{\hat{n},n,m},\; \sum_{l=1}^{L} \left[\left(J_k^{(t),\hat{n},n,m}\right)^{-1}\right]_{l,l}\right) + c^2 \omega_k^{\hat{n},n,m}\, |n - \hat{n}| \right\}$  (17)
where:
  • c is the cut-off of the second-order OSPA distance in Equation (4), L is the dimension of state x k and N is the maximum number of the targets observed by the sensor over the surveillance region;
  • $\Omega_k^{n,m}$ is a normalization factor of the density $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$; it actually denotes the probability of $|X_k| = n$ and $|Z_k| = m$ given $Z_{1:k-1}$,
    $\Omega_k^{n,m} = \int_{\mathcal{Z}_k^m} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (18)
  • $\omega_k^{\hat{n},n,m}$ is the integral of the density $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ over the region $\mathcal{X}_k^n \times \mathcal{Z}_k^{\hat{n},m}$,
    $\omega_k^{\hat{n},n,m} = \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (19)
    Note that the integration region $\mathcal{Z}_k^{\hat{n},m}$ in $\omega_k^{\hat{n},n,m}$ is the subspace of $\mathcal{Z}_k^m$ where the MAP detector assigns the estimated target number to be $\hat{n}$ ($\hat{n} = 0, 1, \ldots, N$). $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ are mutually disjoint and cover $\mathcal{Z}_k^m$. Therefore, $\omega_k^{\hat{n},n,m}$ actually denotes the probability of $|X_k| = n$ and $|Z_k| = m$ given $|\hat{X}_k| = \hat{n}$ and $Z_{1:k-1}$.
  • $J_k^{(t),\hat{n},n,m}$ is the Fisher information matrix of the t-th target given $|Z_k| = m$, $|X_k| = n$, $|\hat{X}_k| = \hat{n}$ and $Z_{1:k-1}$. $J_k^{(t),\hat{n},n,m}$, $\omega_k^{\hat{n},n,m}$ and $\Omega_k^{n,m}$ in Equation (17) are given by (assuming $J_k^{(t),\hat{n},n,m} = \infty$ for $\mathcal{Z}_k^{\hat{n},m} = \emptyset$, $\hat{n} = 0, 1, \ldots, N$):
    $\left[J_k^{(t),\hat{n},n,m}\right]_{i,j} = -\frac{1}{\omega_k^{\hat{n},n,m}} \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k} f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) \cdot \frac{\partial^2 \log f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right)}{\partial x_{k,i}^{(t)}\, \partial x_{k,j}^{(t)}}\, dx_k^{(t)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (20)
    $\omega_k^{\hat{n},n,m} = \frac{\pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \int_{\mathcal{Z}_k^{\hat{n},m}} \kappa_k^{Z_k^m}\, D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) dz_k^{(1)} \cdots dz_k^{(m)}$  (21)
    $\Omega_k^{n,m} = \pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \lambda_k^m \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, K_{k|k-1}^{(j_t)}$  (22)
    $D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) = \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right)$  (23)
    $H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right) = \int_{\mathcal{X}_k} p_{k|k-1}^{(j_t)}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right) dx_k^{(t)}$  (24)
    $K_{k|k-1}^{(j_t)} = \begin{cases} \dfrac{\int_{\mathcal{X}_k} p_{D,k}(x_k^{(t)})\, p_{k|k-1}^{(j_t)}(x_k^{(t)})\, dx_k^{(t)}}{\lambda_k}, & \theta_{n,m}(t) > 0 \\[8pt] \int_{\mathcal{X}_k} \left(1 - p_{D,k}(x_k^{(t)})\right) p_{k|k-1}^{(j_t)}(x_k^{(t)})\, dx_k^{(t)}, & \theta_{n,m}(t) = 0 \end{cases}$  (25)
    $\pi_{k|k-1}(\emptyset) = \prod_{t=1}^{M_{k|k-1}} \left(1 - r_{k|k-1}^{(t)}\right)$  (26)
    where $G_k(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)})$ is given by Equation (14), and $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ is the density of $(x_k^{(t)}, Z_k^m)$ conditioned on $Z_{1:k-1}$ and $|X_k| = n$. $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ in Equation (20), as well as the integration region $\mathcal{Z}_k^{\hat{n},m}$ in Equations (20) and (21), are given by:
    $f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) = \frac{\pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \frac{D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)}{H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right)}\, p_{k|k-1}^{(j_t)}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (27)
    $\mathcal{Z}_k^{\hat{n},m} = \left\{ Z_k^m \in \mathcal{Z}_k^m : \arg\max_{n} \xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right) = \hat{n} \right\}$  (28)
    $\xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right) = \left[\sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\right] \cdot \left[\sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)\right]$  (29)
    where ξ k n Z k m Z 1 : k 1 denotes a function of Z k m and n given Z 1 : k 1 .
Theorem 2. 
Suppose that Assumption A.2 holds; at time k, given the predicted multi-target Poisson intensity $\upsilon_{k|k-1}(x_k)$, the error bound for joint MAP detection and unbiased estimation of multiple targets with the state model in Equation (8) and the sensor observation model in Equation (9) takes the same form as in Theorem 1, except that $\omega_k^{\hat{n},n,m}$, $\Omega_k^{n,m}$, $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ and $\xi_k^n(Z_k^m \mid Z_{1:k-1})$ are changed to:
$\omega_k^{\hat{n},n,m} = \frac{e^{-\eta_{k|k-1} - \lambda_k}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \int_{\mathcal{Z}_k^{\hat{n},m}} \kappa_k^{Z_k^m}\, D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) dz_k^{(1)} \cdots dz_k^{(m)}$  (30)
$\Omega_k^{n,m} = e^{-\eta_{k|k-1} - \lambda_k}\, \lambda_k^m \sum_{\theta_{n,m}} \prod_{t=1}^{n} K_{k|k-1}^{(t)}$  (31)
$f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) = \frac{e^{-\eta_{k|k-1} - \lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \frac{D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)}{H_k\!\left(z_k^{(\theta_{n,m}(t))}\right)} \cdot \upsilon_{k|k-1}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (32)
$\xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right) = \eta_{k|k-1}^{n} \sum_{\theta_{n,m}} D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)$  (33)
where:
$D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) = \prod_{t=1}^{n} H_k\!\left(z_k^{(\theta_{n,m}(t))}\right)$  (34)
$H_k\!\left(z_k^{(\theta_{n,m}(t))}\right) = \int_{\mathcal{X}_k} \upsilon_{k|k-1}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right) dx_k^{(t)}$  (35)
$K_{k|k-1}^{(t)} = \begin{cases} \dfrac{\int_{\mathcal{X}_k} p_{D,k}(x_k^{(t)})\, \upsilon_{k|k-1}(x_k^{(t)})\, dx_k^{(t)}}{\lambda_k}, & \theta_{n,m}(t) > 0 \\[8pt] \int_{\mathcal{X}_k} \left(1 - p_{D,k}(x_k^{(t)})\right) \upsilon_{k|k-1}(x_k^{(t)})\, dx_k^{(t)}, & \theta_{n,m}(t) = 0 \end{cases}$  (36)
$\eta_{k|k-1} = \int \upsilon_{k|k-1}(x_k)\, dx_k$  (37)
The proofs of Theorems 1 and 2 can be found in Appendix A and Appendix B. In the following, we refer to the bound in Theorem 1 or 2 as the multi-Bernoulli approximated bound (MBA-B) or the Poisson approximated bound (PA-B), respectively.
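As a purely illustrative aid (not part of the original derivation), the sketch below shows how the right-hand side of Equation (17) can be assembled once the quantities $\Omega_k^{n,m}$, $\omega_k^{\hat{n},n,m}$ and $J_k^{(t),\hat{n},n,m}$ have been computed, e.g., by the Monte Carlo integration described in Section 4. The dictionaries `Omega`, `omega` and `J`, as well as the truncation limits, are assumed placeholders.

```python
# Hypothetical assembly of the MBA-B/PA-B lower bound of Equation (17).
# Omega[(n, m)], omega[(n_hat, n, m)] and J[(t, n_hat, n, m)] are assumed to be
# precomputed elsewhere; here they stand for the quantities of Theorems 1 and 2.
import math
import numpy as np

def jde_bound(Omega, omega, J, N, M_max, c):
    bound = 0.0
    for m in range(M_max + 1):                    # truncate the sum over |Z_k| = m
        for n in range(N + 1):
            for n_hat in range(N + 1):
                if n + n_hat == 0:
                    continue
                loc = 0.0
                for t in range(1, min(n, n_hat) + 1):
                    crlb = np.trace(np.linalg.inv(J[(t, n_hat, n, m)]))  # sum_l [J^{-1}]_{l,l}
                    loc += min(c**2 * omega[(n_hat, n, m)], crlb)
                card = c**2 * omega[(n_hat, n, m)] * abs(n - n_hat)
                bound += Omega[(n, m)] / (math.factorial(m) * math.factorial(n)
                                          * max(n, n_hat)) * (loc + card)
    return bound
```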
  • Remark 1: It is well known that a lower bound should be independent of the specific estimation method. However, the MAP detection rule is necessary for deriving the bounds in Theorems 1 and 2, for the following reasons.
    First, recall that the error metric $e_k(X_k, \hat{X}_k(Z_{1:k}))$ in Equation (11) is the second-order OSPA distance in Equation (4). Obviously, the estimated target number has to be considered in the OSPA distance. At time k, the estimated target number depends on the measurement set $Z_k$ received by the sensor. We assume that if $Z_k \in \mathcal{Z}_k^{\hat{n}}$, which is a subspace of the measurement space $\mathcal{Z}_k$, then the target number estimated by the detector is $\hat{n}$ ($\hat{n} = 0, 1, \ldots, N$). Therefore, to compute the MSE $\sigma_k^2$ in Equation (11), we have to partition the measurement space $\mathcal{Z}_k$ into the regions $\mathcal{Z}_k^0, \mathcal{Z}_k^1, \ldots, \mathcal{Z}_k^N$, which correspond to the possible estimated target numbers $\hat{n} = 0, \hat{n} = 1, \ldots, \hat{n} = N$, respectively. In addition, $\mathcal{Z}_k^0, \mathcal{Z}_k^1, \ldots, \mathcal{Z}_k^N$ are mutually disjoint and cover $\mathcal{Z}_k$.
    In the proof of Theorems 1 and 2, to obtain the bound on $\sigma_k^2$ in Equation (A13) (Equation (A13) is the extended form of the MSE $\sigma_k^2$ in Equation (11)), we need to find the best integration regions $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ that minimize Equation (A14). Nevertheless, it is very difficult to define $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ for a detector without the MAP criterion, because the minimization of Equation (A14) then depends on the estimator $\hat{X}_k(\cdot)$. This reflects the extreme complexity of defining the regions for a detector that minimizes $\sigma_k^2$ in Equation (11) and the intricate coupling of that detector with the estimator in jointly achieving a lower $\sigma_k^2$. A detailed analysis illustrating this complicated dependency between the detector and the estimator in minimizing the MSE $\sigma_k^2$ is presented in [16]. As a result, without the MAP detector restriction, it is nearly impossible to characterize the joint detector-estimator that minimizes the MSE $\sigma_k^2$ in Equation (11), due to their extremely complex interrelationship in determining the number of targets and estimating the states of the existing targets.
    In summary, with the MAP detection constraint, the estimated target number at time k can be determined just by the detector (that is, independent of the estimator). However, this may make the minimum MSE defined by Equation (11) unachievable. Therefore, imposing the MAP constraint can be regarded as an approximated method to obtain the proposed JDE bounds. In our future work, we will study the JDE error bound without the MAP detection constraint.
  • Remark 2: In general, the integration region $\mathcal{Z}_k^{\hat{n},m}$ for calculating $J_k^{(t),\hat{n},n,m}$ and $\omega_k^{\hat{n},n,m}$ at time k is different from the previous integration region $\mathcal{Z}_{k-1}^{\hat{n}',m'}$ for calculating $J_{k-1}^{(t'),\hat{n}',n',m'}$ and $\omega_{k-1}^{\hat{n}',n',m'}$ at time k−1, where the superscripts $(t, \hat{n}, n, m)$ and $(t', \hat{n}', n', m')$ denote the target indices, estimated target numbers, true target numbers and sensor measurement numbers at time k and time k−1, respectively. As a result, $J_k^{(t),\hat{n},n,m}$ cannot be derived directly from $J_{k-1}^{(t'),\hat{n}',n',m'}$ by using a closed-form recursion like the posterior CRLB (PCRLB) in [11]. The recursion of $J_k^{(t),\hat{n},n,m}$ depends on the propagation of the parameter $\Upsilon_{k|k-1}$ or the intensity $\upsilon_{k|k-1}(x_k)$ of the multi-Bernoulli or Poisson RFS that approximates the predicted multi-target RFS.
  • Remark 3: In the special case of no clutter or missed detection, we have $K_k = \emptyset$ and $p_{D,k}(\cdot) = 1$ for the sensor observation model in Equation (9). The numbers of estimated targets, true targets and measurements are obviously equal in this case, $|\hat{X}_k| = |X_k| = |Z_k|$. As a result, multi-target JDE reduces to multi-target state estimation only (that is, target detection no longer exists here, and so the restriction of MAP detection can be omitted) using the sensor measurements. Moreover, given the multi-target state set $X_k^n$, the total likelihood reduces to:
    $\gamma_k\!\left(Z_k^n \mid X_k^n\right) = \sum_{\tau \in \Pi_n} \prod_{t=1}^{n} g_k\!\left(z_k^{(\tau(t))} \mid x_k^{(t)}\right)$  (38)
    and the second-order OSPA distance reduces to:
    $d_k^2\!\left(X_k^n, \hat{X}_k^n\right) = \begin{cases} 0, & n = 0 \\[4pt] \dfrac{1}{n} \min_{\tau \in \Pi_n} \sum_{t=1}^{n} \left\|x_k^{(t)} - \hat{x}_k^{(\tau(t))}\right\|_2^2, & n > 0 \end{cases}$  (39)
    because there is no need to consider the cut-off c for cardinality mismatches here. Only for this special case can a theoretically rigorous (that is, without multi-Bernoulli or Poisson approximation to the multi-target Bayes recursion) single-sensor multi-target error bound be derived using a PCRLB-like recursion, as in [18].

4. Numerical Examples

A maximum of 10 targets appears in a two-dimensional region $S = [-50, 50] \times [-50, 50]$ (in m) with various births and deaths. The targets are observed by a single sensor with clutter and missed detection throughout a surveillance period of T = 25 time steps. The sensor sampling interval is $\Delta t = 1$ s. At time k, the state of a target is $x_k = [x_k, y_k, \dot{x}_k, \dot{y}_k, \ddot{x}_k, \ddot{y}_k]^T$, where $[x_k, y_k]^T$, $[\dot{x}_k, \dot{y}_k]^T$ and $[\ddot{x}_k, \ddot{y}_k]^T$ denote the position, velocity and acceleration vectors along the x axis and y axis, respectively. The state transition density $f_{k|k-1}(x_k \mid x_{k-1})$ is assumed to be:
$f_{k|k-1}(x_k \mid x_{k-1}) = \mathcal{N}(x_k; F_k x_{k-1}, Q_k)$  (40)
where N · ; m , Q denotes the density of a Gaussian distribution with mean m and covariance matrix Q and F k and Q k are the state evolution matrix and process noise covariance matrix at time k, respectively. Assuming that the kinematics of each target is governed by the constant acceleration (CA) model [22], we have:
$F_k = \begin{bmatrix} 1 & \Delta t & \Delta t^2/2 \\ 0 & 1 & \Delta t \\ 0 & 0 & 1 \end{bmatrix} \otimes I_2, \qquad Q_k = q_{CA}^2 \begin{bmatrix} \Delta t^4/4 & \Delta t^3/2 & \Delta t^2/2 \\ \Delta t^3/2 & \Delta t^2 & \Delta t \\ \Delta t^2/2 & \Delta t & 1 \end{bmatrix} \otimes I_2$  (41)
where $\otimes$ denotes the Kronecker product, $I_n$ is the identity matrix of dimension n and $q_{CA} = 0.01$ m/s² is the standard deviation of the process noise, i.e., the acceleration noise. Target births and deaths occur at random instances and states. The probability of target survival is $p_{S,k}(\cdot) = 0.9$. The state of a target birth satisfies one of the distributions $p_\Gamma^{(i)}(x_k) = \mathcal{N}(x_k; x_\Gamma^{(i)}, Q_\Gamma)$ (i = 1, ..., 4), with $x_\Gamma^{(1)} = [20, 20, 2, 2, 0.1, 0.1]^T$, $x_\Gamma^{(2)} = [20, 20, 2, 3, 0.1, 0.1]^T$, $x_\Gamma^{(3)} = [20, 20, 2, 3, 0.1, 0.1]^T$, $x_\Gamma^{(4)} = [20, 20, 2, 3, 0.1, 0.1]^T$ and $Q_\Gamma = \mathrm{diag}(25, 25, 0.25, 0.25, 0.0025, 0.0025)$, where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix. The sensor measurement model for state $x_k$ is:
$g_k(z_k \mid x_k) = \mathcal{N}\!\left(\begin{bmatrix} \rho_k \\ o_k \end{bmatrix}; \begin{bmatrix} \sqrt{x_k^2 + y_k^2} \\ \arctan(y_k / x_k) \end{bmatrix}, R_k\right)$  (42)
where $\rho_k$ and $o_k$ are, respectively, the range and bearing measurements of the target, and $R_k = \mathrm{diag}(\varsigma_\rho^2, \varsigma_o^2)$ is the sensor measurement noise covariance matrix. In this example, we assume that $\varsigma_\rho = 2.5$ m and $\varsigma_o = 0.1$ rad. The detection probability of the sensor is $p_{D,k}(\cdot) = p_D$. The average clutter number and the density of the clutter are $\lambda_k = \lambda$ and $f_{c,k}(z_k) = U(z_k; S)$, where $U(\cdot; S) = 1/10^4$ denotes the density of a uniform distribution over the region S.
For Assumption A.1, the parameter of the multi-Bernoulli set $\Gamma_k$ of spontaneous births is $\Upsilon_\Gamma = \{(0.1, p_\Gamma^{(i)})\}_{i=1}^{4}$. For Assumption A.2, the intensity of the Poisson set $\Gamma_k$ of spontaneous births is $\upsilon_\Gamma(x_k) = \sum_{i=1}^{4} 0.1\, p_\Gamma^{(i)}(x_k)$.
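For readers who wish to reproduce the scenario, the following sketch (our own illustration, not the authors' code) builds the CA-model matrices and draws one noisy range-bearing measurement according to the models described above; the numerical values mirror the parameters listed in this section, and arctan2 is used in place of arctan(y/x) for numerical robustness.

```python
# Constant-acceleration (CA) model matrices and the range-bearing measurement
# model of the numerical example; a minimal sketch with the quoted parameters
# (dt = 1 s, q_CA = 0.01, sigma_rho = 2.5 m, sigma_o = 0.1 rad).
import numpy as np

dt, q_ca = 1.0, 0.01
F1 = np.array([[1.0, dt, dt**2 / 2],
               [0.0, 1.0, dt],
               [0.0, 0.0, 1.0]])
Q1 = q_ca**2 * np.array([[dt**4 / 4, dt**3 / 2, dt**2 / 2],
                         [dt**3 / 2, dt**2,     dt],
                         [dt**2 / 2, dt,        1.0]])
F_k = np.kron(F1, np.eye(2))           # Kronecker product with I_2, state ordered [x, y, vx, vy, ax, ay]
Q_k = np.kron(Q1, np.eye(2))

R_k = np.diag([2.5**2, 0.1**2])        # range/bearing measurement noise covariance

rng = np.random.default_rng(1)

def propagate(x):
    """One step of the CA dynamics with additive process noise."""
    return F_k @ x + rng.multivariate_normal(np.zeros(6), Q_k)

def measure(x):
    """One noisy range-bearing measurement of state x = [x, y, vx, vy, ax, ay]^T."""
    z = np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
    return z + rng.multivariate_normal(np.zeros(2), R_k)

x = np.array([20.0, 20.0, 2.0, 2.0, 0.1, 0.1])   # one of the birth means listed above
x = propagate(x)
print("state:", x, "\nmeasurement:", measure(x))
```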
Then, the proposed bound (MBA-B or PA-B) in this example can be obtained by substituting these parameters into Theorem 1 or 2. The second partial derivative $\partial^2 \log f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n) / \partial x_{k,i}^{(t)} \partial x_{k,j}^{(t)}$ involved in Equation (20) can conveniently be obtained by using the software Mathematica 8.0.1. The Monte Carlo (MC) method [23] is used to numerically calculate $[J_k^{(t),\hat{n},n,m}]_{i,j}$ and $\omega_k^{\hat{n},n,m}$, because the involved integrals have no closed form.
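The kind of Monte Carlo integration referred to above can be sketched generically as follows. This is a simplified illustration of ours, not the authors' implementation; `integrand` stands for whichever integrand (e.g., the one inside Equation (20) or (21)) is being evaluated over a region of the measurement space selected by the MAP detector.

```python
# Generic Monte Carlo estimate of an integral restricted to a region of the
# measurement space: draw samples from a proposal density, keep those that fall
# in the region, and average the importance-weighted integrand values.
import numpy as np

def mc_region_integral(integrand, sampler, proposal_pdf, in_region, n_samples=100_000):
    """Estimate  int_{region} integrand(z) dz  by importance sampling.

    integrand    : callable, the function to integrate
    sampler      : callable returning one sample z from the proposal density
    proposal_pdf : callable, density of the proposal at z
    in_region    : callable, True if z lies in the region (e.g., a MAP-detector test)
    """
    total = 0.0
    for _ in range(n_samples):
        z = sampler()
        if in_region(z):
            total += integrand(z) / proposal_pdf(z)
    return total / n_samples

# toy usage: integrate exp(-|z|^2/2) over the half-plane z_1 > 0, sampling uniformly on [-5, 5]^2
rng = np.random.default_rng(2)
est = mc_region_integral(
    integrand=lambda z: np.exp(-0.5 * float(z @ z)),
    sampler=lambda: rng.uniform(-5.0, 5.0, size=2),
    proposal_pdf=lambda z: 1.0 / 100.0,
    in_region=lambda z: z[0] > 0.0,
)
print(est)   # should be close to pi (half of the full integral 2*pi)
```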
First, let us examine how the sensor measurement uncertainty affects the proposed bound. The measurement uncertainty of a sensor is mainly determined by its detection probability and clutter. Therefore, in Figure 1, the two proposed bounds for the multi-target position vectors are shown versus scan for three combinations of detection probability and clutter rate: (1) $p_D = 1$, $\lambda = 50$; (2) $p_D = 0.6$, $\lambda = 150$; and (3) $p_D = 0.2$, $\lambda = 250$, where the cut-off of the OSPA distance is $c^2 = 400$.
Figure 1. Proposed bounds for multi-target positions versus scan in the cases: $p_D = 1$, $\lambda = 50$ (black lines); $p_D = 0.6$, $\lambda = 150$ (green lines); $p_D = 0.2$, $\lambda = 250$ (red lines).
From Figure 1, it can be seen that both bounds are asymptotically convergent for the various $p_D$ and $\lambda$, and that they approach each other as the number of sensor measurement scans increases. The bounds for the case $p_D = 1$, $\lambda = 50$ are the smallest of the three cases. However, it is somewhat surprising that the bounds for the case $p_D = 0.2$, $\lambda = 250$ are lower than those for the case $p_D = 0.6$, $\lambda = 150$. Moreover, the larger $\lambda$ becomes for a given $p_D$, or the lower $p_D$ becomes for a given $\lambda$, the longer the convergence time of the bounds appears to be. Figure 1 indicates that the clutter density and detection probability of the sensor do have a significant impact on the proposed bound.
To verify the effectiveness of the proposed bounds, we compare the steady-state bounds with the JDE errors of the single-sensor PHD and CPHD filters, which are the average of 200 MC runs of their time-averaged OSPA distances between the true and estimated state sets. The comparison results are presented in Figure 2.
From Figure 2, we can obtain the following observations.
  • The proposed bound does not always increase with $\lambda$ for a given $p_D$, or decrease with $p_D$ for a given $\lambda$. This is because an increase of $\lambda$ or $p_D$ (when $p_D < 1$ or $\lambda > 0$) has two contrary effects: it reduces the possibility of missed targets and increases the possibility of false targets. If the bound is dominated by the former effect, it decreases with $\lambda$ or $p_D$; otherwise, it increases with $\lambda$ or $p_D$. Moreover, PA-B is a little higher than MBA-B when $\lambda$ is relatively large or $p_D$ is relatively small. However, they are very close in general. A possible reason is that the multi-Bernoulli assumption (Assumption A.1) slightly outperforms the Poisson assumption (Assumption A.2) in approximating the multi-target Bayes recursion under low signal-to-noise ratio (SNR) conditions.
  • Although the JDE errors of the single-sensor PHD and CPHD filters are a little higher than the proposed bound, all of them remain close for the various $\lambda$ and $p_D$. The extra errors of the two filters come from the first-order moment approximations of the posterior multi-target density and from the clustering processes involved in their particle implementations for state extraction. Figure 2 also shows that the CPHD filter outperforms the PHD filter, because the former propagates the cardinality distribution and, thus, provides a more stable target number estimate than the latter.
Figure 2. Comparisons of joint detection and estimation (JDE) errors of single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters with steady-state bounds for multi-target positions. (a) $p_D = 1$; (b) $p_D = 0.6$; (c) $p_D = 0.2$.
  • The larger $\lambda$ becomes for a given $p_D$, or the lower $p_D$ becomes for a given $\lambda$, the bigger the gaps between the errors of the two filters and the proposed bound will be. This is because the aforementioned approximation errors of the two filters grow as $\lambda$ becomes larger or $p_D$ becomes smaller. However, the maximum relative errors of the PHD and CPHD filters, which seem to appear in the case of $p_D = 0.2$ and $\lambda = 300$, do not exceed 15% and 8% of MBA-B, or 12% and 5% of PA-B, in any case, respectively. In fact, the total average relative errors of the two filters are about 7% and 4% of MBA-B, and about 6% and 3% of PA-B, over the various $\lambda$ and $p_D$, respectively.
Finally, the comparison results in Figure 2 show that for various clutter densities and detection probabilities of the sensor, the proposed bounds are able to provide an effective indication of performance limitations for the two single-sensor multi-target JDE algorithms.

5. Conclusions

Within the RFS framework, we develop two multi-target JDE error bounds using the measurements of a single sensor with clutter and missed detection. The multi-Bernoulli and Poisson approximations to the multi-target Bayes recursion are used in deriving the two results, respectively. The proposed bounds are based on the OSPA distance rather than the Euclidean distance. The simulation results show that the clutter density and detection probability of the sensor significantly affect the bounds, and verify the effectiveness of the bounds by indicating the performance limitations of the single-sensor PHD and CPHD filters in various sensor measurement environments.
Our future work will focus on the following four aspects:
  • Extending the results to the case of multiple sensors;
  • Extending the results to the case of the biased estimator by using the ordinary information inequality of Equation (5);
  • Studying the JDE error bounds without the MAP detection constraint;
  • Studying the sensor management strategies based on the results.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (61473217, 61174138), the National Key Fundamental Research & Development Programs (973) of China (2013CB329405) and the Provincial Natural Science Foundation Research Project of Shaanxi (2014JQ8333).

Author Contributions

Feng Lian contributed significantly to the conception of the study, analysis and manuscript preparation. Guanghua Zhang performed the data analyses. Zhansheng Duan revised and edited the manuscript. Chongzhao Han helped with performing the analysis with constructive discussions. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1. 
First, we determine the target number from the sequence of sensor measurements according to the MAP criterion. Given a measurement set $Z_k^m$ received by the sensor at time k, using the Bayes rule on the posterior probability $P_k(|X_k| = n \mid Z_{1:k-1}, Z_k^m)$, we get:
$P_k\!\left(|X_k| = n \mid Z_{1:k-1}, Z_k^m\right) = \frac{P_k\!\left(|X_k| = n \mid Z_{1:k-1}\right) P_k\!\left(Z_k^m \mid |X_k| = n\right)}{P_k\!\left(Z_k^m \mid Z_{1:k-1}\right)}$  (A1)
where $P_k(Z_k^m \mid Z_{1:k-1})$ is a normalizing factor, and $P_k(|X_k| = n \mid Z_{1:k-1})$ and $P_k(Z_k^m \mid |X_k| = n)$ can be obtained by:
$P_k\!\left(|X_k| = n \mid Z_{1:k-1}\right) = \int_{\mathcal{X}_k^n} f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) dx_k^{(1)} \cdots dx_k^{(n)}$  (A2)
$P_k\!\left(Z_k^m \mid |X_k| = n\right) = \int_{\mathcal{X}_k^n} f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) \gamma_k\!\left(Z_k^m \mid X_k^n\right) dx_k^{(1)} \cdots dx_k^{(n)}$  (A3)
where the likelihood $\gamma_k(Z_k^m \mid X_k^n)$ is given by Equation (13), and the predicted multi-target density $f_{k|k-1}(X_k^n \mid Z_{1:k-1})$ is a multi-Bernoulli density with the parameter $\Upsilon_{k|k-1} = \{(r_{k|k-1}^{(t)}, p_{k|k-1}^{(t)})\}_{t=1}^{M_{k|k-1}}$ according to Assumption A.1,
$f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) = \pi_{k|k-1}(\emptyset) \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, p_{k|k-1}^{(j_t)}(x_k^{(t)})$  (A4)
where:
$\pi_{k|k-1}(\emptyset) = \prod_{t=1}^{M_{k|k-1}} \left(1 - r_{k|k-1}^{(t)}\right)$  (A5)
Substituting Equations (A4) and (13) into Equations (A2) and (A3), respectively, and integrating out $x_k^{(1)}, \ldots, x_k^{(n)}$ in the two equations, we have:
$P_k\!\left(|X_k| = n \mid Z_{1:k-1}\right) = \pi_{k|k-1}(\emptyset) \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}$  (A6)
$P_k\!\left(Z_k^m \mid |X_k| = n\right) = \pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \kappa_k^{Z_k^m} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)$  (A7)
where:
$D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) = \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right)$  (A8)
$H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right) = \int_{\mathcal{X}_k} p_{k|k-1}^{(j_t)}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right) dx_k^{(t)}$  (A9)
and $G_k(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)})$ is given by Equation (14).
Substituting Equations (A6) and (A7) into Equation (A1), the posterior probability $P_k(|X_k| = n \mid Z_{1:k-1}, Z_k^m)$ becomes:
$P_k\!\left(|X_k| = n \mid Z_{1:k-1}, Z_k^m\right) = \frac{\pi_{k|k-1}^2(\emptyset)\, e^{-\lambda_k}\, \kappa_k^{Z_k^m}}{P_k\!\left(Z_k^m \mid Z_{1:k-1}\right)}\, \xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right)$  (A10)
where:
$\xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right) = \left[\sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\right] \cdot \left[\sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)\right]$  (A11)
denotes a function of Z k m and n given Z 1 : k 1 .
The MAP detector, upon observing $Z_k^m$ given $Z_{1:k-1}$, will assign $|\hat{X}_k(Z_{1:k-1}, Z_k^m)| = \hat{n}$ if:
$\hat{n} = \arg\max_{n} P_k\!\left(|X_k| = n \mid Z_{1:k-1}, Z_k^m\right) = \arg\max_{n} \xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right), \qquad Z_k^m \in \mathcal{Z}_k^{\hat{n},m}$  (A12)
where $\mathcal{Z}_k^{\hat{n},m}$ is a subspace of the m-point sensor measurement space $\mathcal{Z}_k^m$. Equation (A12) means that, at time k, the MAP detector assigns the estimated target number to be $\hat{n}$ ($\hat{n} = 0, 1, \ldots, N$, where N is the maximum number of targets observed by the sensor over the surveillance region) if the received sensor measurement satisfies $Z_k^m \in \mathcal{Z}_k^{\hat{n},m}$. $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ are mutually disjoint and cover $\mathcal{Z}_k^m$. For the m-point measurement space $\mathcal{Z}_k^m$, its partitions $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ correspond to all of the possible estimated target numbers $\hat{n} = 0, \hat{n} = 1, \ldots, \hat{n} = N$, respectively.
According to the set integral definition in Equation (1), the MSE in Equation (11) can be extended as:
$\sigma_k^2 = \sum_{m=0}^{\infty} \sum_{n=0}^{N} \frac{1}{m! \cdot n!} \int_{\mathcal{Z}_k^m} \int_{\mathcal{X}_k^n} \gamma_k\!\left(Z_k^m \mid X_k^n\right) f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) \cdot e_k^2\!\left(X_k^n, \hat{X}_k(Z_{1:k-1}, Z_k^m)\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A13)
By partitioning the integration region $\mathcal{Z}_k^m$ in Equation (A13) into the sub-regions $\mathcal{Z}_k^{0,m}, \mathcal{Z}_k^{1,m}, \ldots, \mathcal{Z}_k^{N,m}$ according to the MAP detector in Equation (A12), we have:
$\sigma_k^2 = \sum_{m=0}^{\infty} \sum_{n=0}^{N} \sum_{\hat{n}=0}^{N} \frac{1}{m! \cdot n!} \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} \gamma_k\!\left(Z_k^m \mid X_k^n\right) f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) \cdot e_k^2\!\left(X_k^n, \hat{X}_k^{\hat{n}}(Z_{1:k-1}, Z_k^m)\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A14)
Using the Bayes rule on the density $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$, we get:
$f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) = \frac{1}{\Omega_k^{n,m}}\, \gamma_k\!\left(Z_k^m \mid X_k^n\right) f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right)$  (A15)
where $\Omega_k^{n,m}$ is a normalization factor, and:
$\Omega_k^{n,m} = \int_{\mathcal{Z}_k^m} \int_{\mathcal{X}_k^n} f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) \gamma_k\!\left(Z_k^m \mid X_k^n\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A16)
actually denotes the probability of | X k | = n and | Z k | = m given Z 1 : k 1 .
Substituting Equations (A4) and (13) into Equations (A15) and (A16), respectively, and integrating out $z_k^{(1)}, \ldots, z_k^{(m)}$ in Equation (A16), $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ becomes:
$f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) = \frac{\pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, p_{k|k-1}^{(j_t)}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (A17)
and Ω k n , m is obtained as:
$\Omega_k^{n,m} = \pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \lambda_k^m \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \prod_{t=1}^{n} \frac{r_{k|k-1}^{(j_t)}}{1 - r_{k|k-1}^{(j_t)}}\, K_{k|k-1}^{(j_t)}$  (A18)
where:
$K_{k|k-1}^{(j_t)} = \begin{cases} \dfrac{\int_{\mathcal{X}_k} p_{D,k}(x_k^{(t)})\, p_{k|k-1}^{(j_t)}(x_k^{(t)})\, dx_k^{(t)}}{\lambda_k}, & \theta_{n,m}(t) > 0 \\[8pt] \int_{\mathcal{X}_k} \left(1 - p_{D,k}(x_k^{(t)})\right) p_{k|k-1}^{(j_t)}(x_k^{(t)})\, dx_k^{(t)}, & \theta_{n,m}(t) = 0 \end{cases}$  (A19)
For Equation (A14), replacing $\gamma_k(Z_k^m \mid X_k^n) f_{k|k-1}(X_k^n \mid Z_{1:k-1})$ with $\Omega_k^{n,m} f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ according to Equation (A15), and then replacing $e_k^2(X_k^n, \hat{X}_k^{\hat{n}}(Z_{1:k-1}, Z_k^m))$ with the second-order OSPA distance defined in Equation (4), we get:
$\sigma_k^2 = \sum_{m=0}^{\infty} \sum_{n=0}^{N} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{N} \frac{\Omega_k^{n,m}}{m! \cdot n! \cdot \max(\hat{n}, n)} \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) \cdot \left[ \min_{\tau \in \Pi_{\max(\hat{n},n)}} \sum_{t=1}^{\min(\hat{n},n)} \min\!\left(c^2, \left\|x_k^{(t)} - \hat{x}_k^{(\tau(t))}(Z_{1:k-1}, Z_k^m)\right\|_2^2\right) + c^2 |n - \hat{n}| \right] dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A20)
where c is the cut-off of the OSPA distance.
Let:
$\omega_k^{\hat{n},n,m} = \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A21)
denote the integral of the density $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ over the region $\mathcal{Z}_k^{\hat{n},m} \times \mathcal{X}_k^n$. $\omega_k^{\hat{n},n,m}$ actually denotes the probability of $|X_k| = n$ and $|Z_k| = m$ given $|\hat{X}_k| = \hat{n}$ and $Z_{1:k-1}$, where $|\hat{X}_k| = \hat{n}$ comes from the MAP detector of Equation (A12).
Replacing $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ in Equation (A21) with Equation (A17) and integrating out $x_k^{(1)}, \ldots, x_k^{(n)}$, $\omega_k^{\hat{n},n,m}$ can be rewritten as:
$\omega_k^{\hat{n},n,m} = \frac{\pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \int_{\mathcal{Z}_k^{\hat{n},m}} \kappa_k^{Z_k^m}\, D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) dz_k^{(1)} \cdots dz_k^{(m)}$  (A22)
Let:
$\tau^* = \arg\min_{\tau \in \Pi_{\max(\hat{n},n)}} \sum_{t=1}^{\min(\hat{n},n)} \min\!\left(c^2, \left\|x_k^{(t)} - \hat{x}_k^{(\tau(t))}(Z_{1:k-1}, Z_k^m)\right\|_2^2\right)$  (A23)
denote the permutation in $\Pi_{\max(\hat{n},n)}$ that minimizes $\sum_{t=1}^{\min(\hat{n},n)} \min(c^2, \|x_k^{(t)} - \hat{x}_k^{(\tau(t))}(Z_{1:k-1}, Z_k^m)\|_2^2)$. Using $\omega_k^{\hat{n},n,m}$ defined in Equation (A21) and $\tau^*$ defined in Equation (A23), Equation (A20) can be rewritten as:
$\sigma_k^2 = \sum_{m=0}^{\infty} \sum_{n=0}^{N} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{N} \frac{\Omega_k^{n,m}}{m! \cdot n! \cdot \max(\hat{n}, n)} \cdot \left\{ \sum_{t=1}^{\min(\hat{n},n)} \min\!\left( c^2 \omega_k^{\hat{n},n,m},\; \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) \left\|x_k^{(t)} - \hat{x}_k^{(\tau^*(t))}(Z_{1:k-1}, Z_k^m)\right\|_2^2 dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)} \right) + c^2 |n - \hat{n}|\, \omega_k^{\hat{n},n,m} \right\}$  (A24)
Let $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ denote the joint probability density of $(x_k^{(t)}, Z_k^m)$ conditioned on $Z_{1:k-1}$ and $|X_k| = n$. $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ can be obtained by:
$f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) = \int_{\mathcal{X}_k^{n-1}} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) dx_k^{(1)} \cdots dx_k^{(t-1)}\, dx_k^{(t+1)} \cdots dx_k^{(n)}$  (A25)
Replacing $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ in Equation (A25) with Equation (A17) and integrating out $x_k^{(1)}, \ldots, x_k^{(t-1)}, x_k^{(t+1)}, \ldots, x_k^{(n)}$, $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ can be rewritten as:
$f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) = \frac{\pi_{k|k-1}(\emptyset)\, e^{-\lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \sum_{1 \le j_1 \ne \cdots \ne j_n \le M_{k|k-1}} \frac{D_k^{j_1,\ldots,j_n}\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)}{H_k^{j_t}\!\left(z_k^{(\theta_{n,m}(t))}\right)}\, p_{k|k-1}^{(j_t)}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (A26)
According to the density $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ defined in Equation (A25), the integral $\int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k(X_k^n, Z_k^m \mid Z_{1:k-1}) \|x_k^{(t)} - \hat{x}_k^{(\tau^*(t))}(Z_{1:k-1}, Z_k^m)\|_2^2\, dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)}$ involved in Equation (A24) can be rewritten as:
$\int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k^n} f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) \left\|x_k^{(t)} - \hat{x}_k^{(\tau^*(t))}(Z_{1:k-1}, Z_k^m)\right\|_2^2 dx_k^{(1)} \cdots dx_k^{(n)}\, dz_k^{(1)} \cdots dz_k^{(m)} = \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k} f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) \sum_{l=1}^{L} \left(x_{k,l}^{(t)} - \hat{x}_{k,l}^{(\tau^*(t))}(Z_{1:k-1}, Z_k^m)\right)^2 dx_k^{(t)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A27)
where L is the dimension of the state x k .
Since the estimator has been assumed to be unbiased, we can apply the CRLB of Equation (7) to the density $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ in Equation (A27),
$\int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k} f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) \left(x_{k,l}^{(t)} - \hat{x}_{k,l}^{(\tau^*(t))}(Z_{1:k-1}, Z_k^m)\right)^2 dx_k^{(t)}\, dz_k^{(1)} \cdots dz_k^{(m)} \ge \left[\left(J_k^{(t),\hat{n},n,m}\right)^{-1}\right]_{l,l}, \quad l = 1, \ldots, L$  (A28)
where:
$\left[J_k^{(t),\hat{n},n,m}\right]_{i,j} = -\frac{1}{\omega_k^{(t),\hat{n},n,m}} \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k} f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) \cdot \frac{\partial^2 \log f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right)}{\partial x_{k,i}^{(t)}\, \partial x_{k,j}^{(t)}}\, dx_k^{(t)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A29)
$\omega_k^{(t),\hat{n},n,m} = \int_{\mathcal{Z}_k^{\hat{n},m}} \int_{\mathcal{X}_k} f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) dx_k^{(t)}\, dz_k^{(1)} \cdots dz_k^{(m)}$  (A30)
From Equations (A21), (A25) and (A30), it can be easily seen that:
$\omega_k^{(t),\hat{n},n,m} = \omega_k^{\hat{n},n,m}$  (A31)
Finally, by substituting Equation (A28) into Equation (A27) and then Equation (A24), we have Equation (17). This completes the proof.  ☐

Appendix B

Proof of Theorem 2. 
The proof of Theorem 2 is similar to that of Theorem 1. Therefore, only the main differences between them are presented next.
According to Assumption A.2, it follows that the predicted multi-target density $f_{k|k-1}(X_k^n \mid Z_{1:k-1})$ at time k is a Poisson density with intensity $\upsilon_{k|k-1}(x_k)$,
$f_{k|k-1}\!\left(X_k^n \mid Z_{1:k-1}\right) = e^{-\eta_{k|k-1}} \prod_{t=1}^{n} \upsilon_{k|k-1}(x_k^{(t)})$  (A32)
where:
$\eta_{k|k-1} = \int \upsilon_{k|k-1}(x_k)\, dx_k$  (A33)
denotes the expected number of predicted targets at time k.
Substituting Equations (A32) and (13) into Equations (A2) and (A3), respectively, and integrating out $x_k^{(1)}, \ldots, x_k^{(n)}$ in the two equations, the probabilities $P_k(|X_k| = n \mid Z_{1:k-1})$ and $P_k(Z_k^m \mid |X_k| = n)$ become:
$P_k\!\left(|X_k| = n \mid Z_{1:k-1}\right) = e^{-\eta_{k|k-1}}\, \eta_{k|k-1}^{n}$  (A34)
$P_k\!\left(Z_k^m \mid |X_k| = n\right) = e^{-\eta_{k|k-1} - \lambda_k}\, \kappa_k^{Z_k^m} \sum_{\theta_{n,m}} D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)$  (A35)
where:
$D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) = \prod_{t=1}^{n} H_k\!\left(z_k^{(\theta_{n,m}(t))}\right)$  (A36)
$H_k\!\left(z_k^{(\theta_{n,m}(t))}\right) = \int_{\mathcal{X}_k} \upsilon_{k|k-1}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right) dx_k^{(t)}$  (A37)
The posterior probability $P_k(|X_k| = n \mid Z_{1:k-1}, Z_k^m)$ can be obtained by substituting Equations (A34) and (A35) into Equation (A1), and hence, the function $\xi_k^n(Z_k^m \mid Z_{1:k-1})$ involved in the MAP detector of Equation (A12) becomes:
$\xi_k^n\!\left(Z_k^m \mid Z_{1:k-1}\right) = \eta_{k|k-1}^{n} \cdot \sum_{\theta_{n,m}} D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)$  (A38)
Substituting Equations (A32) and (13) into Equations (A15) and (A16), respectively, and integrating out $z_k^{(1)}, \ldots, z_k^{(m)}$ in Equation (A16), the density $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ becomes:
$f_k\!\left(X_k^n, Z_k^m \mid Z_{1:k-1}\right) = \frac{e^{-\eta_{k|k-1} - \lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \prod_{t=1}^{n} \upsilon_{k|k-1}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (A39)
where the normalization factor $\Omega_k^{n,m}$ is:
$\Omega_k^{n,m} = e^{-\eta_{k|k-1} - \lambda_k}\, \lambda_k^m \sum_{\theta_{n,m}} \prod_{t=1}^{n} K_{k|k-1}^{(t)}$  (A40)
with:
$K_{k|k-1}^{(t)} = \begin{cases} \dfrac{\int_{\mathcal{X}_k} p_{D,k}(x_k^{(t)})\, \upsilon_{k|k-1}(x_k^{(t)})\, dx_k^{(t)}}{\lambda_k}, & \theta_{n,m}(t) > 0 \\[8pt] \int_{\mathcal{X}_k} \left(1 - p_{D,k}(x_k^{(t)})\right) \upsilon_{k|k-1}(x_k^{(t)})\, dx_k^{(t)}, & \theta_{n,m}(t) = 0 \end{cases}$  (A41)
Replacing $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ in Equation (A21) with Equation (A39) and integrating out $x_k^{(1)}, \ldots, x_k^{(n)}$ in Equation (A21), $\omega_k^{\hat{n},n,m}$ becomes:
$\omega_k^{\hat{n},n,m} = \frac{e^{-\eta_{k|k-1} - \lambda_k}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \int_{\mathcal{Z}_k^{\hat{n},m}} \kappa_k^{Z_k^m}\, D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right) dz_k^{(1)} \cdots dz_k^{(m)}$  (A42)
Similarly, replacing $f_k(X_k^n, Z_k^m \mid Z_{1:k-1})$ in Equation (A25) with Equation (A39) and integrating out $x_k^{(1)}, \ldots, x_k^{(t-1)}, x_k^{(t+1)}, \ldots, x_k^{(n)}$ in Equation (A25), the density $f_k(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n)$ becomes:
$f_k\!\left(x_k^{(t)}, Z_k^m \mid Z_{1:k-1}, |X_k| = n\right) = \frac{e^{-\eta_{k|k-1} - \lambda_k}\, \kappa_k^{Z_k^m}}{\Omega_k^{n,m}} \sum_{\theta_{n,m}} \frac{D_k\!\left(z_k^{(\theta_{n,m}(1))}, \ldots, z_k^{(\theta_{n,m}(n))}\right)}{H_k\!\left(z_k^{(\theta_{n,m}(t))}\right)}\, \upsilon_{k|k-1}(x_k^{(t)})\, G_k\!\left(z_k^{(\theta_{n,m}(t))} \mid x_k^{(t)}\right)$  (A43)
The rest of the proof of Theorem 2 is completely the same as that of Theorem 1. This completes the proof.  ☐

References

  1. Bar-Shalom, Y.; Fortmann, T. Tracking and Data Association; Academic Press: San Diego, CA, USA, 1988. [Google Scholar]
  2. Mahler, R. Statistical Multisource Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2007; pp. 332–335. [Google Scholar]
  3. Blackman, S. Multiple hypothesis tracking for multiple target tracking. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 5–18. [Google Scholar] [CrossRef]
  4. Mahler, R. Multi-target Bayes filtering via first-order multi-target moments. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1152–1178. [Google Scholar] [CrossRef]
  5. Mahler, R. PHD filters of higher order in target number. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1523–1543. [Google Scholar] [CrossRef]
  6. Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 2009, 57, 409–423. [Google Scholar]
  7. Xu, Y.; Xu, H.; An, W.; Xu, D. FISST based method for multi-target tracking in the image plane of optical sensors. Sensors 2012, 12, 2920–2934. [Google Scholar] [CrossRef] [PubMed]
  8. Vo, B.T.; Vo, B.N.; Hoseinnezhad, R.; Mahler, R. Robust multi-bernoulli filtering. IEEE J. Sel. Top. Signal Process. 2013, 7, 399–409. [Google Scholar] [CrossRef]
  9. Vo, B.N.; Vo, B.T.; Phung, D. Labeled random finite sets and the bayes multi-target tracking filter. IEEE Trans. Signal Process. 2014, 62, 6554–6567. [Google Scholar] [CrossRef]
  10. Zhang, F.H.; Buckl, C.; Knoll, A. Multiple vehicle cooperative localization with spatial registration based on a probability hypothesis density filter. Sensors 2014, 14, 995–1009. [Google Scholar] [CrossRef] [PubMed]
  11. Tichavsky, P.; Muravchik, C.; Nehorai, A. Posterior Cramér-Rao bounds for discrete time nonlinear filtering. IEEE Trans. Signal Process. 1998, 46, 1701–1722. [Google Scholar] [CrossRef]
  12. Hernandez, M.; Farina, A.; Ristic, B. PCRLB for tracking in cluttered environments: Measurement sequence conditioning approach. IEEE Trans. Aerosp. Electr. Syst. 2006, 42, 680–704. [Google Scholar] [CrossRef]
  13. Hernandez, M.; Ristic, B.; Farina, A.; Timmoneri, L. A comparison of two Cramér-Rao bounds for nonlinear filtering with Pd < 1. IEEE Trans. Signal Process. 2004, 52, 2361–2370. [Google Scholar]
  14. Zhong, Z.W.; Meng, H.D.; Zhang, H.; Wang, X.Q. Performance bound for extended target tracking using high resolution sensors. Sensors 2010, 10, 11618–11632. [Google Scholar] [CrossRef] [PubMed]
  15. Tang, X.W.; Tang, J.; He, Q.; Wan, S.; Tang, B.; Sun, P.L.; Zhang, N. Cramér-Rao bounds and coherence performance analysis for next generation radar with pulse trains. Sensors 2013, 13, 5347–5367. [Google Scholar] [CrossRef] [PubMed]
  16. Rezaeian, M.; Vo, B.N. Error bounds for joint detection and estimation of a single object with random finite set observation. IEEE Trans. Signal Process. 2010, 58, 1493–1506. [Google Scholar] [CrossRef]
  17. Tong, H.S.; Zhang, H.; Meng, H.D.; Wang, X.Q. A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1. Sensors 2012, 12, 17390–17413. [Google Scholar] [PubMed]
  18. Tong, H.S.; Zhang, H.; Meng, H.D.; Wang, X.Q. The recursive form of error bounds for RFS state and observation with Pd < 1. IEEE Trans. Signal Process. 2013, 61, 2632–2646. [Google Scholar]
  19. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457. [Google Scholar] [CrossRef]
  20. Herath, S.C.K.; Pathirana, P.N. Optimal sensor arrangements in angle of arrival (AoA) and range based localization with linear sensor arrays. Sensors 2013, 13, 12277–12294. [Google Scholar] [CrossRef] [PubMed]
  21. Poor, H.V. An Introduction to Signal Detection and Estimation; Springer-Verlag: New York, NY, USA, 1994. [Google Scholar]
  22. Cho, T.; Lee, C.; Choi, S. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy. Sensors 2013, 13, 4122–4137. [Google Scholar] [CrossRef] [PubMed]
  23. Press, W.; Teukolsky, S.; Vetterling, W.; Flannery, B. Numerical Recipes in C; Cambridge: New York, NY, USA, 1992. [Google Scholar]
