Article

Sensor Selection for Decentralized Large-Scale Multi-Target Tracking Network

Ministry of Education Key Laboratory for Intelligent Networks and Network Security (MOE KLINNS), School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4115; https://doi.org/10.3390/s18124115
Submission received: 11 October 2018 / Revised: 20 November 2018 / Accepted: 20 November 2018 / Published: 23 November 2018
(This article belongs to the Special Issue Multiple Object Tracking: Making Sense of the Sensors)

Abstract

A new sensor selection optimization algorithm is proposed in this paper for a decentralized large-scale multi-target tracking (MTT) network within a labeled random finite set (RFS) framework. The method is based on a marginalized δ-generalized labeled multi-Bernoulli RFS. The weighted Kullback-Leibler average (KLA) rule is used to fuse local multi-target densities. A new metric, named the label assignment (LA) metric, is proposed to measure the distance between two labeled sets. The lower bound of the LA-metric-based mean square error between the labeled multi-target state set and its estimate is taken as the objective function of sensor selection. The proposed bound is obtained by applying the information inequality to RFS measurements. Then, we present sequential Monte Carlo and Gaussian mixture implementations for the bound. Another advantage of the bound is that it provides a basis for setting the weights of the KLA rule. The coordinate descent method is proposed to trade off the computational cost of sensor selection against the accuracy of MTT. Simulations verify the effectiveness of our method under different signal-to-noise ratio scenarios.

1. Introduction

With the development of communication and information fusion technologies, multi-target tracking (MTT) [1] based on sensor networks has become a new research focus. In general, sensor networks fall into two main categories according to their structure: centralized networks and decentralized networks. Compared with the centralized network, the decentralized network has attracted wider attention because of its parallelism, flexibility, robustness, scalability, anti-interference and fault tolerance. In most practical applications, due to the limitations of communication bandwidth, energy consumption, computational cost, storage space, etc., not all sensors in a network can be activated to observe targets at the same instant. As a result, the problem of sensor selection arises, which belongs to a branch of sensor management [2]. Although some research results [3,4,5,6,7,8] have been proposed for it, none of them jointly considers the uncertainty of target number and data association.
In the past two decades, random finite set (RFS) [9] based MTT has attracted extensive attention. By the use of RFSs, MTT is described as a Bayesian estimation problem over state and observation sets. RFS filtering has developed from the original probability hypothesis density (PHD) [10,11,12], cardinalized PHD [13,14] and multi-Bernoulli [15,16] filters to the latest δ-generalized labeled multi-Bernoulli (δ-GLMB) filter [17,18,19]. The advantages of the latter are its conjugacy and track formation. Nevertheless, the number of components involved in the GLMB density increases exponentially with the recursion. Therefore, two approximation methods, the labeled multi-Bernoulli (LMB) and marginalized δ-GLMB (Mδ-GLMB) filters [20,21,22], were subsequently proposed to reduce the computational cost of the GLMB. They are also more suitable for multi-sensor scenarios. Reference [23] has shown that the filtering accuracy of the Mδ-GLMB is close to that of the δ-GLMB, and that both are significantly superior to the LMB in scenarios of low signal-to-noise ratio (SNR).
Besides the δ-GLMB conjugate prior, the Poisson multi-Bernoulli mixture (PMBM) filter [24,25] and multi-Bernoulli mixture (MBM) filter [25] are also conjugate priors. Track formation in the (P)MBM formulation can also be attained using RFS of trajectories [26].
In recent years, although RFS-based methods have been used to control the position of one or several mobile sensors for MTT [27,28,29,30,31,32,33,34], none of them addresses the sensor network setting. Actually, in many cases the sensors in a network are immobile. Instead, the problems of structure, constraints, node selection and so forth become rather important, especially for a large-scale sensor network.
As a result, this paper focuses on the emerging problem of sensor selection for decentralized large-scale MTT networks. A new optimization algorithm for sensor selection is proposed based on the Mδ-GLMB filter. In the proposed method, the fusion of local multi-target posterior densities is carried out by using the rule of weighted Kullback-Leibler average (KLA) [23].
The main contributions of our method include four aspects. First, sensor selection is described as a constrained optimization problem within a Bayesian recursion of labeled multi-target RFSs. A new metric, named the label assignment (LA) metric, is proposed to measure the distance between two labeled sets. The lower bound of the LA-metric-based mean square error (MSE) between the labeled multi-target state set and its estimate is treated as the objective function of sensor selection. The bound is derived by applying the information inequality to RFS measurements [35]. The detailed proofs for the LA metric and its lower bound are presented in the appendices. Second, the normalized weights of the KLA rule are set according to the proposed bound. Third, both sequential Monte Carlo (SMC) [10,36] and Gaussian mixture (GM) [11,12,13,14,15,16] implementations of the bound are presented. Fourth, because the computational cost of selection optimization grows combinatorially with the number of sensors, a sub-optimization method based on coordinate descent [37] is proposed to trade off computational cost against tracking accuracy.
The simulation results show that when the sensors in a decentralized large-scale network have different observation performance, 1) the MTT accuracy of our method is much better than that of the Cauchy-Schwarz (CS) divergence based methods [31,32]; 2) compared with the genetic algorithm [38], the coordinate descent method significantly shortens the calculation time of sensor selection; 3) the GM implementation of the bound is obviously faster than its SMC implementation.

2. Mathematical Background

2.1. Labeled RFS and Mδ-GLMB

In this paper, unlabeled and labeled variables are represented by italics and bold, respectively. For example, the unlabeled state, measurement and their sets are denoted $x$, $z$, $X$ and $Z$; the labeled state and its set are denoted $\mathbf{x} = (x, \ell)$ and $\mathbf{X}$, where $\ell$ is the discrete label of $x$. Let $\mathcal{L}(\mathbf{X})$, $|\mathbf{X}|$ and $\mathbb{X} \times \mathbb{L}$ denote the label set, cardinality and space of $\mathbf{X}$, where $\mathbb{X}$ and $\mathbb{L}$ are the spaces of the unlabeled state and of the label.
The state estimates of a single target and of multiple targets derived from a measurement set $Z$ are both functions of $Z$. To make this clearer, they are denoted $\hat{x}(Z)$ and $\hat{X}(Z)$, respectively; $\hat{\mathbf{x}}(Z)$ and $\hat{\mathbf{X}}(Z)$ are their labeled versions.
Let $\delta_Y(X)$, $1_Y(X)$ and $p^X$ denote the generalized Kronecker delta, the inclusion indicator and the multi-object exponential,

$$\delta_Y(X) = \begin{cases} 1, & \text{if } X = Y \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

$$1_Y(X) = \begin{cases} 1, & \text{if } X \subseteq Y \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

$$p^X = \begin{cases} \prod_{x \in X} p(x), & X \neq \emptyset \\ 1, & X = \emptyset \end{cases} \tag{3}$$

where $1_Y(\{x\})$ is abbreviated as $1_Y(x)$. Furthermore, if $1_Y(X) = 1$, then let $Y \setminus X$ denote the complement of $X$ in $Y$; $Y \setminus \{x\}$ is abbreviated as $Y \setminus x$.
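As a quick sanity check on these conventions, the following Python sketch (all names hypothetical, not from the paper) evaluates (1)–(3) on small finite sets.

```python
from functools import reduce

def kron_delta(X, Y):
    # Generalized Kronecker delta of Eq. (1): 1 iff the two sets are identical.
    return 1 if set(X) == set(Y) else 0

def incl_indicator(X, Y):
    # Inclusion indicator 1_Y(X) of Eq. (2): 1 iff X is a subset of Y.
    return 1 if set(X) <= set(Y) else 0

def multi_obj_exp(p, X):
    # Multi-object exponential p^X of Eq. (3): product of p(x) over X (1 for empty X).
    return reduce(lambda acc, x: acc * p(x), X, 1.0)

p = lambda x: 0.5 ** x                    # a toy single-object density
print(multi_obj_exp(p, {1, 2, 3}))        # 0.5 * 0.25 * 0.125 = 0.015625
print(incl_indicator({1, 2}, {1, 2, 3}))  # 1
print(kron_delta({1, 2}, {2, 1}))         # 1 (sets are unordered)
```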
For any real-valued function $b(\mathbf{X})$ of $\mathbf{X}$, its set integral $\int b(\mathbf{X})\, \delta\mathbf{X}$ is defined as

$$\int b(\mathbf{X})\, \delta\mathbf{X} = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{X}^n} b(\mathbf{X}_n)\, dx_{1:n} \tag{4}$$

where $x_{1:n} = x_1, \ldots, x_n$ and $\ell_{1:n} = \ell_1, \ldots, \ell_n$, $\mathbf{X}_n = \{\mathbf{x}_{1:n}\}$ is an $n$-element labeled set, and $\mathbb{X}^n$ and $\mathbb{L}^n$ are the spaces of $x_{1:n}$ and $\ell_{1:n}$.
If $\mathbf{X}$ is a Mδ-GLMB RFS, then its density is described as [22]

$$\pi(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \delta_I(\mathcal{L}(\mathbf{X}))\, \omega^I \left[p^I\right]^{\mathbf{X}} \tag{5}$$

where $\Delta(\mathbf{X}) = \delta_{|\mathbf{X}|}(|\mathcal{L}(\mathbf{X})|)$ is the distinct-label indicator of $\mathbf{X}$, $I \in \mathcal{F}(\mathbb{L})$ is a label set in the collection $\mathcal{F}(\mathbb{L})$ of finite subsets of $\mathbb{L}$, the weight $\omega^I$ is the existence probability of the label set $I$, and $p^I(\mathbf{x})$ is the density of $\mathbf{x}$ with label in $I$. The Mδ-GLMB density is abbreviated as $\pi = \{(\omega^I, p^I)\}_{I \in \mathcal{F}(\mathbb{L})}$ and its cardinality distribution is

$$P(|\mathbf{X}| = n) = \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega^I \tag{6}$$

where $\mathcal{F}_n(\mathbb{L})$ is the collection of $n$-element subsets of $\mathbb{L}$.

2.2. Information Inequality to RFS Measurement

Let $\hat{x}(Z_m)$ be an unbiased estimate of $x$ derived from an $m$-element measurement set $Z_m$ and $f(x, Z_m)$ be a joint density over the space $\mathbb{X} \times \mathbb{Z}^m$. Assuming that the regularity conditions hold and $\partial^2 \log f(x, Z_m)/\partial x_i \partial x_j$ exists, the information inequality for RFS measurements is [35]

$$\int_{\mathbb{Z}^m} \int_{\mathbb{X}} f(x, Z_m)\, \big(x_l - \hat{x}_l(Z_m)\big)^2\, dx\, dz_{1:m} \geq \left[J_m^{-1}\right]_{l,l}, \quad l = 1, \ldots, L \tag{7}$$

where $z_{1:m} = z_1, \ldots, z_m$, $L$ is the dimension of $x$, $x_l$ and $\hat{x}_l(Z_m)$ are the $l$th components of the vectors $x$ and $\hat{x}(Z_m)$, and $J_m$ is the $L \times L$ Fisher information matrix (FIM) given $|Z| = m$,

$$\left[J_m\right]_{i,j} = -\mathbb{E}_f\!\left[\frac{\partial^2 \log f(x, Z_m)}{\partial x_i\, \partial x_j}\right] = -\int_{\mathbb{Z}^m} \int_{\mathbb{X}} f(x, Z_m)\, \frac{\partial^2 \log f(x, Z_m)}{\partial x_i\, \partial x_j}\, dx\, dz_{1:m}, \quad i, j = 1, \ldots, L \tag{8}$$

(7) holds with equality if and only if $f(x, Z_m)$ belongs to the exponential family.

2.3. A New Metric for Labeled RFS

It is well known that the optimal sub-pattern assignment (OSPA) metric [39,40,41] has been extensively used to measure the distance between two unlabeled sets. Although the OSPA metric can also measure differences in set labels, it may not be very appropriate for labeled RFSs in some application scenarios. As a result, a new metric between two labeled sets $\mathbf{X}$ and $\mathbf{Y}$ of order $1 \leq p < \infty$ with cut-off $c > 0$ is proposed as follows.
$$\bar{d}_p^{(c)}(\mathbf{X}, \mathbf{Y}) = \begin{cases} 0, & |\mathbf{X}| = |\mathbf{Y}| = 0 \\[1mm] \left( \dfrac{\sum_{\ell \in \mathcal{L}(\mathbf{X}) \cap \mathcal{L}(\mathbf{Y})} d^{(c)}(x_\ell, y_\ell)^p + c^p \big( |\mathcal{L}(\mathbf{X}) \cup \mathcal{L}(\mathbf{Y})| - |\mathcal{L}(\mathbf{X}) \cap \mathcal{L}(\mathbf{Y})| \big)}{|\mathcal{L}(\mathbf{X}) \cup \mathcal{L}(\mathbf{Y})|} \right)^{1/p}, & |\mathbf{X}| + |\mathbf{Y}| > 0 \end{cases} \tag{9}$$

where $\mathcal{L}(\mathbf{X})$ and $\mathcal{L}(\mathbf{Y})$ are the label sets of $\mathbf{X}$ and $\mathbf{Y}$, $x_\ell \in X$ and $y_\ell \in Y$ are the unlabeled elements corresponding to the label $\ell$, $|\mathcal{L}(\mathbf{X}) \cap \mathcal{L}(\mathbf{Y})|$ and $|\mathcal{L}(\mathbf{X}) \cup \mathcal{L}(\mathbf{Y})| - |\mathcal{L}(\mathbf{X}) \cap \mathcal{L}(\mathbf{Y})|$ respectively indicate the numbers of elements in $\mathbf{X}$ and $\mathbf{Y}$ that have the same and different labels, and

$$d^{(c)}(x, y) = \min(c, \|x - y\|) \tag{10}$$

denotes the 2-norm distance between $x$ and $y$ cut off at $c > 0$.
See Appendix A for the proof that the d ¯ p ( c ) ( X , Y ) is a metric.
The proposed metric, which we name the LA metric, is a different metric from the OSPA. Its physical meaning for two labeled sets $\mathbf{X}$ and $\mathbf{Y}$ is as follows: for all $\mathbf{x} \in \mathbf{X}$ and $\mathbf{y} \in \mathbf{Y}$, if $\mathbf{x}$ has the same label as $\mathbf{y}$, then $\mathbf{x}$ is paired with $\mathbf{y}$ and a 'location' error $d^{(c)}(x, y)$ between them enters the metric; otherwise, $\mathbf{x}$ is unpaired and a 'penalty' error $c$ enters the metric.
The most significant difference between the OSPA metric and the LA metric is the pairing rule: in the OSPA metric, the elements of $\mathbf{X}$ are paired with the elements of $\mathbf{Y}$ according to the optimal assignment distance of their unlabeled versions, whereas in the LA metric the pairing is determined entirely by the labels.
Take MTT with a labeled RFS state as an example. The OSPA error calculation may pair an estimate with a state that carries a different label, owing to the optimal assignment rule. The LA error calculation prohibits this kind of pairing even if the unlabeled states are arbitrarily close to each other. As a result, the LA metric involves not only the estimation error arising from the target number and the individual states, as the OSPA metric does, but also the additional estimation error arising from the labels. In this sense, the LA metric is more demanding than the OSPA metric for measuring the error between a labeled state set and its estimate.
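To make the pairing rule concrete, here is a minimal Python sketch of (9), assuming each labeled set is represented as a dictionary from label to unlabeled state vector (a representation chosen for illustration, not prescribed by the paper).

```python
import numpy as np

def la_metric(X, Y, c=1.0, p=2):
    """Label assignment (LA) metric of Eq. (9).

    X, Y: dicts mapping label -> unlabeled state vector (np.ndarray).
    c: cut-off; p: order with 1 <= p < inf.
    """
    if len(X) == 0 and len(Y) == 0:
        return 0.0
    common = set(X) & set(Y)                 # labels paired by identity
    union = set(X) | set(Y)                  # all distinct labels
    loc = sum(min(c, np.linalg.norm(X[l] - Y[l])) ** p for l in common)
    penalty = (c ** p) * (len(union) - len(common))  # each unpaired label costs c
    return ((loc + penalty) / len(union)) ** (1.0 / p)

# Example: identical position, mismatched label -> full penalty c is charged.
X = {(1, 1): np.array([0.0, 0.0]), (1, 2): np.array([5.0, 5.0])}
Y = {(1, 1): np.array([0.1, 0.0]), (2, 1): np.array([5.0, 5.0])}
print(la_metric(X, Y, c=1.0, p=2))
```

In the example, the two sets contain an identically located element whose labels differ, so it contributes the full penalty $c$ rather than a zero location error — exactly the behavior that distinguishes the LA metric from the OSPA.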

3. Problem Formulation

To simplify the formulas, the time index is omitted and the subscript ‘ + ’ is used to indicate the predicted density.
Multiple targets move independently in a region $A$ with random birth and death. The multi-target states are modeled as a labeled RFS $\mathbf{X}$. The dynamics of a single state $\mathbf{x} = (x, \ell) \in \mathbf{X}$ are described by the survival probability $p_s(\mathbf{x}')$ and the transition density $f(x|x')\,\delta_{\ell'}(\ell)$. The dynamics of the multi-target states are described by the transition density $f(\mathbf{X}|\mathbf{X}')$. Here $\mathbf{x}' = (x', \ell')$ and $\mathbf{X}'$ are, respectively, the state and the state set at the last time.
Targets are observed by the decentralized sensor network shown in Figure 1. The network is composed of sensor nodes (SNs) and local fusion centers (LFCs). Each SN receives measurements and communicates with its superior LFC. Each LFC receives the measurements, conducts data processing and storage, communicates with the other LFCs connected to it and manages its subordinate SNs.
The network structure is completely described by a topological graph with parameters $\{N, C, A\}$, where $N$ and $C$ are the label sets of the SNs and LFCs, and $A \subseteq C \times C$ is the set of directed connections between LFCs. If LFC $j$ can receive data from LFC $i$, then $(i, j) \in A$. Let $C^j = \{i \in C : (i, j) \in A\}$ be the label set of the LFCs connected to LFC $j$ (including itself) and $N^j$ be the label set of the SNs belonging to LFC $j$. Each SN belongs to exactly one LFC, which means $\bigcap_{j \in C} N^j = \emptyset$ and $\bigcup_{j \in C} N^j = N$.
The most significant difference between the decentralized network and the centralized or hierarchical network is that the former has no global fusion center connected to all SNs or all LFCs. The network structure remains unchanged and all measurements are synchronized during the monitoring period.
The SN $s \in N$ may receive clutter and target measurements or miss detections. Its measurements are modeled as an RFS $Z^s$ over the space $\mathbb{Z}^s$, and $z^s \in Z^s$ is a single measurement. Clutter is modeled as a Poisson RFS with intensity $\kappa^s(z^s)$, and

$$\lambda^s = \int_{\mathbb{Z}^s} \kappa^s(z^s)\, dz^s \tag{11}$$

is the clutter rate.
The multi-target likelihood of the SN $s$ is obtained from [9] as

$$g^s(Z^s|\mathbf{X}) = e^{-\lambda^s} [\kappa^s]^{Z^s} \sum_{\theta^s \in \Theta^s} \left[\psi^s_{Z^s}(\cdot\,; \theta^s)\right]^{\mathbf{X}} \tag{12}$$

where

$$\psi^s_{Z^s}(\mathbf{x}; \theta^s) = \delta_0(\theta^s(\ell))\,\big(1 - p_d^s(\mathbf{x})\big) + \big(1 - \delta_0(\theta^s(\ell))\big)\, \frac{p_d^s(\mathbf{x})\, g^s\big(z^s_{\theta^s(\ell)} \,\big|\, \mathbf{x}\big)}{\kappa^s\big(z^s_{\theta^s(\ell)}\big)} \tag{13}$$

where $p_d^s(\mathbf{x})$ and $g^s(z^s|\mathbf{x})$ are the single-target detection probability and likelihood, and $\Theta^s$ is the collection of association mappings $\theta^s : \mathcal{L}(\mathbf{X}) \to \{0, 1, \ldots, |Z^s|\}$. $\theta^s(\ell) > 0$ or $\theta^s(\ell) = 0$ indicates that the track $\ell \in \mathcal{L}(\mathbf{X})$ generates a measurement $z^s_{\theta^s(\ell)} \in Z^s$ or is missed, respectively. Each track generates at most one measurement and each measurement is generated by at most one track, which means that $\ell = \ell'$ if $\theta^s(\ell) = \theta^s(\ell') > 0$. The number of association hypotheses is [42]

$$\chi_{|\mathcal{L}(\mathbf{X})|, |Z^s|} = \sum_{i=0}^{\min(|\mathcal{L}(\mathbf{X})|, |Z^s|)} \frac{|\mathcal{L}(\mathbf{X})|!\; |Z^s|!}{\big(|\mathcal{L}(\mathbf{X})| - i\big)!\, \big(|Z^s| - i\big)!\; i!} \tag{14}$$
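For intuition about how quickly the association space grows, a direct Python evaluation of (14) follows (the function name is hypothetical):

```python
from math import factorial

def num_association_hypotheses(n_tracks, n_meas):
    """Number of association maps in Eq. (14): each track takes at most one
    measurement and each measurement is taken by at most one track."""
    total = 0
    for i in range(min(n_tracks, n_meas) + 1):
        total += (factorial(n_tracks) * factorial(n_meas)
                  // (factorial(n_tracks - i) * factorial(n_meas - i) * factorial(i)))
    return total

print(num_association_hypotheses(2, 2))    # 7: both missed, 4 single pairings, 2 full pairings
print(num_association_hypotheses(10, 20))  # grows combinatorially
```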
Due to various restrictions, only a subset of the SNs involved in the sub-network of each LFC can be activated to observe targets at each scan. Assume that the multi-target likelihoods of all SNs in the network are mutually independent given the labeled state set $\mathbf{X}$. Algorithm 1 presents the steps of sensor selection and MTT for the LFC $j \in C$ under a Bayesian framework. Note that the sequence of measurement sets up to the last time is omitted here to simplify the formulas of the Bayesian recursion.
Algorithm 1. Sensor selection and MTT for the LFC j .
1. Prediction: Calculate the current predicted density by $\pi_+^j(\mathbf{X}) = \int f(\mathbf{X}|\mathbf{X}')\, \bar{\pi}^j(\mathbf{X}')\, \delta\mathbf{X}'$, where $\bar{\pi}^j(\mathbf{X}')$ is the fused density at the last time;
2. SN selection: Select the SN subset $S^j \subseteq N^j$ and receive the collection of their measurement sets $Z^{S^j}$;
3. Update: Calculate the current posterior density by $\pi^j(\mathbf{X}|Z^{S^j}) = \dfrac{\prod_{s \in S^j} g^s(Z^s|\mathbf{X})\, \pi_+^j(\mathbf{X})}{\int \prod_{s \in S^j} g^s(Z^s|\mathbf{X})\, \pi_+^j(\mathbf{X})\, \delta\mathbf{X}}$ and then transmit the density to the LFCs connected to the LFC $j$;
4. Fusion: Receive the posterior densities from the LFC set $C^j$ and then calculate the fused density by the weighted KLA rule $\bar{\pi}^j(\mathbf{X}) = \dfrac{\prod_{i \in C^j} \big[\pi^i(\mathbf{X}|Z^{S^i})\big]^{\alpha_{j,i}}}{\int \prod_{i \in C^j} \big[\pi^i(\mathbf{X}|Z^{S^i})\big]^{\alpha_{j,i}}\, \delta\mathbf{X}}$, where $\alpha_{j,i}$ ($i \in C^j$) is the preset normalized weight;
5. State extraction: Extract the current state estimate from $\bar{\pi}^j(\mathbf{X})$ as the output. Go to Step 1.
Remark 1.
It can be seen from Steps 3 and 4 of Algorithm 1 that, in this network, each LFC communicates only once per recursion with the LFCs connected to it. Because of this, the consensus iteration [43,44] cannot be carried out, since it would require each LFC to communicate with the other LFCs more than once per recursion. Consequently, consensus fusion [23] over the entire decentralized sensor network cannot be achieved, and the multi-target estimates output by different LFCs may differ. In fact, this characteristic is consistent with most practical application systems.
The SN selection in Step 2 of Algorithm 1 is described as the following constrained optimization problem:

$$[S^j]^* = \arg\min_{S^j \subseteq N^j} / \arg\max_{S^j \subseteq N^j}\ \vartheta^j(S^j; \pi_+^j) \quad \text{s.t.} \quad \begin{cases} \gamma_i^j(S^j; \pi_+^j) \leq 0, & i = 1, \ldots, l \\ \nu_k^j(S^j; \pi_+^j) = 0, & k = 1, \ldots, m \end{cases} \tag{15}$$

where $\vartheta^j(S^j; \pi_+^j)$, $\gamma_i^j(S^j; \pi_+^j) \leq 0$ and $\nu_k^j(S^j; \pi_+^j) = 0$ are the objective function and the inequality and equality constraints on $S^j$ given the predicted density $\pi_+^j(\mathbf{X})$.
The ultimate goal of an MTT network is to optimize the tracking precision. As is well known, the MSE between the target state and its estimate is currently the most widely used evaluation indicator of tracking accuracy. Given the selected SN subset $S^j$ of the LFC $j \in C$, the MSE $[\sigma^j_{S^j}]^2$ between $\mathbf{X}$ and its Bayesian estimate $\hat{\mathbf{X}}^j(Z^{S^j})$ is

$$[\sigma^j_{S^j}]^2 = \mathbb{E}\big[e^2(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j}))\big] = \int_{\mathbb{Z}^{S^j}} \int_{\mathbb{X} \times \mathbb{L}} f(\mathbf{X}, Z^{S^j})\, e^2\big(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j})\big)\, \delta\mathbf{X}\, \delta Z^{S^j} = \int_{\mathbb{Z}^{S^j}} \int_{\mathbb{X} \times \mathbb{L}} \prod_{s \in S^j} g^s(Z^s|\mathbf{X})\, \pi_+^j(\mathbf{X})\, e^2\big(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j})\big)\, \delta\mathbf{X}\, \delta Z^{S^j} \tag{16}$$

where $\mathbb{Z}^{S^j}$ is the joint measurement space of the SN set $S^j$, $f(\mathbf{X}, Z^{S^j})$ is the joint density of $(\mathbf{X}, Z^{S^j})$, and $e(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j}))$ denotes the error distance between $\mathbf{X}$ and $\hat{\mathbf{X}}^j(Z^{S^j})$. In this paper, $e(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j}))$ is measured by the 2nd-order LA metric $\bar{d}_2^{(c)}(\mathbf{X}, \hat{\mathbf{X}}^j(Z^{S^j}))$ in (9).
Nevertheless, $[\sigma^j_{S^j}]^2$ cannot be used as the objective function of the sensor selection optimization in (15) because $\hat{\mathbf{X}}^j(Z^{S^j})$ is unknown before sensor selection. To solve this, we replace $[\sigma^j_{S^j}]^2$ with its lower bound $[\underline{\sigma}^j_{S^j}]^2$, which provides an online indication of the limit of MTT accuracy within the labeled RFS framework.
Treating $\pi_+^j(\mathbf{X})$ as a default condition, (15) is finally rewritten as

$$[S^j]^* = \arg\min_{S^j \subseteq N^j}\ [\underline{\sigma}^j_{S^j}]^2 \quad \text{s.t.} \quad \begin{cases} \gamma_i^j(S^j) \leq 0, & i = 1, \ldots, l \\ \nu_k^j(S^j) = 0, & k = 1, \ldots, m \end{cases} \tag{17}$$

4. Lower Bound For LA Metric Based MSE and Sub-Optimization For Sensor Selection

4.1. Derivation of LA Bound

In Section 4.1, Section 4.2, Section 4.3 and Appendix B, the superscript '$j$' indexing the LFC is omitted. For example, $(\underline{\sigma}^j_{S^j})^2$ is abbreviated as $\underline{\sigma}_S^2$.
In order to derive $\underline{\sigma}_S^2$, it is assumed that
A1: The multi-target Bayesian recursion is a Mδ-GLMB RFS [22]. As a result, the predicted density $\pi_+(\mathbf{X})$ and the posterior density $\pi(\mathbf{X}|Z^S)$ can be described as $\pi_+ = \{(\omega_+^I, p_+^I)\}_{I \in \mathcal{F}(\mathbb{L})}$ and $\pi(\cdot|Z^S) = \{(\omega^I(Z^S), p^I(\cdot|Z^S))\}_{I \in \mathcal{F}(\mathbb{L})}$.
A2: Although the optimal estimate of $\mathbf{X}$ can be extracted from the fused density $\bar{\pi}(\mathbf{X})$ by using the joint or marginal multi-target estimator [9], both estimators are very difficult to implement. Alternatively, the target number is first estimated according to the maximum a posteriori (MAP) criterion and then the individual states are estimated according to the unbiasedness criterion given the estimated target number. In fact, this suboptimal method is applied in almost all multi-target Bayesian filters.
Let $Z^S_{m_S} = Z^{s_1}_{m_{s_1}}, \ldots, Z^{s_{|S|}}_{m_{s_{|S|}}}$ be the collection of measurement sets from the SN set $S$ and $\mathbb{Z}^S_{m_S} = \mathbb{Z}^{s_1}_{m_{s_1}} \times \cdots \times \mathbb{Z}^{s_{|S|}}_{m_{s_{|S|}}}$ be the space of $Z^S_{m_S}$, where $m_{s_i}$ is the number of measurements received by the SN $s_i$. Let $q(\mathbf{X}_n, Z^S_{m_S})$ be the joint density over the space $(\mathbb{X} \times \mathbb{L})^n \times \mathbb{Z}^S_{m_S}$. According to the Bayesian formula, $q(\mathbf{X}_n, Z^S_{m_S})$ is written as

$$q(\mathbf{X}_n, Z^S_{m_S}) = \frac{1}{\Omega_{n, m_S}} \prod_{s \in S} g^s(Z^s_{m_s}|\mathbf{X}_n)\, \pi_+(\mathbf{X}_n) \tag{18}$$

where $\Omega_{n, m_S}$ is a normalization factor,

$$\Omega_{n, m_S} = \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{Z}^S_{m_S}} \int_{\mathbb{X}^n} \prod_{s \in S} g^s(Z^s_{m_s}|\mathbf{X}_n)\, \pi_+(\mathbf{X}_n)\, dx_{1:n}\, dz^S_{1:m_S} \tag{19}$$

where $\int_{\mathbb{Z}^S_{m_S}} dz^S_{1:m_S} = \int_{\mathbb{Z}^{s_{|S|}}_{m_{s_{|S|}}}} \cdots \int_{\mathbb{Z}^{s_1}_{m_{s_1}}} dz^{s_1}_{1:m_{s_1}} \cdots dz^{s_{|S|}}_{1:m_{s_{|S|}}}$. (19) shows that $\Omega_{n, m_S}/(m_S!\, n!)$ is actually the probability $P(|\mathbf{X}| = n, |Z^S| = m_S)$, where $m_S! = m_{s_1}! \cdots m_{s_{|S|}}!$ and $|Z^S| = m_S$ denotes $|Z^{s_1}| = m_{s_1}, \ldots, |Z^{s_{|S|}}| = m_{s_{|S|}}$. Substituting A1 and (12) into (19) and using Lemma 12 in [17], $\Omega_{n, m_S}$ is obtained as
$$\Omega_{n, m_S} = n! \left(\prod_{s \in S} e^{-\lambda^s} [\lambda^s]^{m_s}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega_+^I \sum_{\theta^S \in \Theta^S} \varphi^I(\theta^S) \tag{20}$$

where $\theta^S \in \Theta^S$ denotes $\theta^{s_1} \in \Theta^{s_1}, \ldots, \theta^{s_{|S|}} \in \Theta^{s_{|S|}}$,

$$\varphi^I(\theta^S) = \left[\left\langle p_+^I(\cdot, \ell),\ \prod_{s \in S} \left(\delta_0(\theta^s(\ell))\,\big(1 - p_d^s(\cdot, \ell)\big) + \big(1 - \delta_0(\theta^s(\ell))\big)\, \frac{p_d^s(\cdot, \ell)}{\lambda^s}\right)\right\rangle\right]^I \tag{21}$$

Assume that $\theta^s(\ell) > 0$ if $s \in Y$ and $\theta^s(\ell) = 0$ if $s \in (S \setminus Y)$. Then, (21) can be rewritten as

$$\varphi^I(\theta^S) = \left[\sum_{Y \subseteq S} \frac{\big\langle p_+^I(\cdot, \ell),\ p_d^{Y,S}(\cdot, \ell)\big\rangle}{\prod_{s \in Y} \lambda^s}\right]^I = \varphi^I(S) \tag{22}$$

where $\langle p_+^I(\cdot, \ell), p_d^{Y,S}(\cdot, \ell)\rangle = \int_{\mathbb{X}} p_+^I(x, \ell)\, p_d^{Y,S}(x, \ell)\, dx$ denotes the inner product with respect to $x$, and

$$p_d^{Y,S}(\mathbf{x}) = \prod_{s \in Y} p_d^s(\mathbf{x}) \prod_{s \in (S \setminus Y)} \big(1 - p_d^s(\mathbf{x})\big) \tag{23}$$

denotes the probability that only the SN subset $Y \subseteq S$ receives a measurement from the state $\mathbf{x}$ while the other SNs miss it.
(22) shows that $\varphi^I(\theta^S)$ is independent of the association mapping $\theta^S$. From (14), (20) and (22), $\Omega_{n, m_S}$ is finally obtained as

$$\Omega_{n, m_S} = n! \left(\prod_{s \in S} e^{-\lambda^s} [\lambda^s]^{m_s}\, \chi_{n, m_s}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega_+^I\, \varphi^I(S) \tag{24}$$
Since $q(\mathbf{X}_n, Z^S_{m_S})$ is permutation invariant over $\mathbf{x}_{1:n}$, its marginal density over any one of $\mathbf{x}_{1:n}$ is the same and is obtained by

$$q_n(\mathbf{x}, Z^S_{m_S}) = \int_{\mathbb{X}^{n-1}} q(\{\mathbf{x}, \mathbf{x}_{2:n}\}, Z^S_{m_S})\, dx_{2:n} \tag{25}$$

Substituting (18) into (25) and using A1 together with the identity $\delta_n(|\{\ell, \ell_{2:n}\}|) = \delta_{n-1}(|\{\ell_{2:n}\}|)\,(1 - 1_{\{\ell_{2:n}\}}(\ell))$, $q_n(\mathbf{x}, Z^S_{m_S})$ is written as

$$q_n(\mathbf{x}, Z^S_{m_S}) = \frac{1}{\Omega_{n, m_S}} \sum_{\ell_{2:n} \in \mathbb{L}^{n-1}} \delta_{n-1}(|\{\ell_{2:n}\}|)\,\big(1 - 1_{\{\ell_{2:n}\}}(\ell)\big) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega_+^I\, \delta_I(\{\ell, \ell_{2:n}\}) \int_{\mathbb{X}^{n-1}} \prod_{s \in S} g^s\big(Z^s_{m_s}\big|\{\mathbf{x}, \mathbf{x}_{2:n}\}\big)\, p_+^I(\mathbf{x}) \prod_{t=2}^{n} p_+^I(\mathbf{x}_t)\, dx_{2:n} \tag{26}$$
Substituting (12) into (26) and then simplifying the result, we get

$$q_n(\mathbf{x}, Z^S_{m_S}) = \frac{1}{\Omega_{n, m_S}} \left(\prod_{s \in S} e^{-\lambda^s} [\kappa^s]^{Z^s_{m_s}}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \sum_{\theta^S \in \Theta^S} 1_I(\ell)\, \omega_+^I\, \eta^I_{Z^S_{m_S}}(\ell; \theta^S)\, q^I(\mathbf{x}, Z^S_{m_S}; \theta^S) \tag{27}$$

where $q^I(\mathbf{x}, Z^S_{m_S}; \theta^S)$ is the marginal density of $q(\mathbf{X}, Z^S_{m_S})$ over any one of $\mathbf{x}_{1:n}$ given the label set $I$ and the association mapping $\theta^S$,

$$q^I(\mathbf{x}, Z^S_{m_S}; \theta^S) = \sum_{Y \subseteq S} p_d^{Y,S}(\mathbf{x})\, p_+^I(\mathbf{x}) \prod_{s \in Y} g^s\big(z^s_{\theta^s(\ell)}\big|\mathbf{x}\big) \tag{28}$$

$$\eta^I_{Z^S_{m_S}}(\ell, \theta^S) = \left[\left\langle p_+^I(\cdot, \cdot),\ \prod_{s \in S} \psi^s_{Z^s_{m_s}}\big(\cdot, \cdot\,;\, \theta^s \setminus \theta^s(\ell)\big)\right\rangle\right]^{I \setminus \{\ell\}} \tag{29}$$

where $\theta^s \setminus \theta^s(\ell)$ denotes the remaining association mapping in $\theta^s$ after removing $\theta^s(\ell)$.
The MAP detection criterion determines $|\hat{\mathbf{X}}(Z^S_{m_S})| = \hat{n}$ ($\hat{n} = 0, 1, \ldots, \infty$) if and only if $Z^S_{m_S} \in \mathbb{Z}^S_{\hat{n}, m_S}$,

$$\mathbb{Z}^S_{\hat{n}, m_S} = \left\{ Z^S_{m_S} \in \mathbb{Z}^S_{m_S} :\ \hat{n} = \arg\max_n P\big(|\mathbf{X}| = n \,\big|\, Z^S_{m_S}\big) \right\} \tag{30}$$

where $\mathbb{Z}^S_{\hat{n}, m_S} = \mathbb{Z}^{s_1}_{\hat{n}, m_{s_1}} \times \cdots \times \mathbb{Z}^{s_{|S|}}_{\hat{n}, m_{s_{|S|}}}$ is the joint measurement subspace of the SN set $S$ on which the target number is estimated as $\hat{n}$; $\mathbb{Z}^S_{0, m_S}, \mathbb{Z}^S_{1, m_S}, \ldots, \mathbb{Z}^S_{\infty, m_S}$ is a partition of $\mathbb{Z}^S_{m_S}$; and $P(|\mathbf{X}| = n | Z^S_{m_S})$ is the posterior probability of $|\mathbf{X}| = n$ given $Z^S_{m_S}$. From (6), $P(|\mathbf{X}| = n | Z^S_{m_S})$ is written as

$$P\big(|\mathbf{X}| = n \,\big|\, Z^S_{m_S}\big) = \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega^I(Z^S_{m_S}) \tag{31}$$

where $\omega^I(Z^S_{m_S})$ is the existence probability of the label set $I$ given $Z^S_{m_S}$. According to the update step of the Mδ-GLMB [22], $\omega^I(Z^S_{m_S})$ is obtained as

$$\omega^I(Z^S_{m_S}) = \frac{\sum_{\theta^S \in \Theta^S} \omega_+^I\, \beta^I_{Z^S_{m_S}}(\theta^S)}{\sum_{I' \in \mathcal{F}(\mathbb{L})} \sum_{\theta^S \in \Theta^S} \omega_+^{I'}\, \beta^{I'}_{Z^S_{m_S}}(\theta^S)} \tag{32}$$

$$\beta^I_{Z^S_{m_S}}(\theta^S) = \left[\left\langle p_+^I(\cdot, \cdot),\ \prod_{s \in S} \psi^s_{Z^s_{m_s}}(\cdot, \cdot\,; \theta^s)\right\rangle\right]^I \tag{33}$$

where, from (29) and (33), $\beta^I_{Z^S_{m_S}}(\theta^S) = \big\langle p_+^I(\cdot, \ell), \prod_{s \in S} \psi^s_{Z^s_{m_s}}(\cdot, \ell\,; \theta^s)\big\rangle\, \eta^I_{Z^S_{m_S}}(\ell, \theta^S)$ for any $\ell \in I$.
Let $\Psi_{\hat{n}, n, m_S}$ be the integral of $q(\mathbf{X}_n, Z^S_{m_S})$ over the space $(\mathbb{X} \times \mathbb{L})^n \times \mathbb{Z}^S_{\hat{n}, m_S}$. From (18), $\Psi_{\hat{n}, n, m_S}$ can be written as

$$\Psi_{\hat{n}, n, m_S} = \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{Z}^S_{\hat{n}, m_S}} \int_{\mathbb{X}^n} q(\mathbf{X}_n, Z^S_{m_S})\, dx_{1:n}\, dz^S_{1:m_S} = \frac{1}{\Omega_{n, m_S}} \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{Z}^S_{\hat{n}, m_S}} \int_{\mathbb{X}^n} \prod_{s \in S} g^s(Z^s_{m_s}|\mathbf{X}_n)\, \pi_+(\mathbf{X}_n)\, dx_{1:n}\, dz^S_{1:m_S} \tag{34}$$

(34) shows that $\Omega_{n, m_S} \Psi_{\hat{n}, n, m_S}/(m_S!\, n!)$ is actually the probability $P(|\hat{\mathbf{X}}(Z^S_{m_S})| = \hat{n}, |\mathbf{X}| = n, |Z^S| = m_S)$. Substituting A1 and (12) into (34) and using Lemma 12 in [17], $\Psi_{\hat{n}, n, m_S}$ is obtained as

$$\Psi_{\hat{n}, n, m_S} = \frac{n!}{\Omega_{n, m_S}} \left(\prod_{s \in S} e^{-\lambda^s} [\lambda^s_{\hat{n}}]^{m_s}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \sum_{\theta^S \in \Theta^S} \omega_+^I\, \phi^I_{\hat{n}}(\theta^S) \tag{35}$$
where

$$\lambda^s_{\hat{n}} = \int_{\mathbb{Z}^s_{\hat{n},1}} \kappa^s(z^s)\, dz^s \tag{36}$$

$$\phi^I_{\hat{n}}(\theta^S) = \left[\left\langle p_+^I(\cdot, \ell),\ \prod_{s \in S} \left(\delta_0(\theta^s(\ell))\,\big(1 - p_d^s(\cdot, \ell)\big) + \big(1 - \delta_0(\theta^s(\ell))\big)\, \frac{p_d^s(\cdot, \ell) \int_{\mathbb{Z}^s_{\hat{n},1}} g^s(z^s|\cdot, \ell)\, dz^s}{\lambda^s_{\hat{n}}}\right)\right\rangle\right]^I \tag{37}$$

Similar to $\varphi^I(\theta^S)$, $\phi^I_{\hat{n}}(\theta^S)$ can be rewritten as

$$\phi^I_{\hat{n}}(\theta^S) = \left[\sum_{Y \subseteq S} \frac{\left\langle p_+^I(\cdot, \ell) \prod_{s \in Y} \int_{\mathbb{Z}^s_{\hat{n},1}} g^s(z^s|\cdot, \ell)\, dz^s,\ p_d^{Y,S}(\cdot, \ell)\right\rangle}{\prod_{s \in Y} \lambda^s_{\hat{n}}}\right]^I = \phi^I_{\hat{n}}(S) \tag{38}$$

(38) shows that $\phi^I_{\hat{n}}(\theta^S)$ is independent of the association mapping $\theta^S$. From (14), (35) and (38), $\Psi_{\hat{n}, n, m_S}$ is finally obtained as

$$\Psi_{\hat{n}, n, m_S} = \frac{n!}{\Omega_{n, m_S}} \left(\prod_{s \in S} e^{-\lambda^s} [\lambda^s_{\hat{n}}]^{m_s}\, \chi_{n, m_s}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \omega_+^I\, \phi^I_{\hat{n}}(S) \tag{39}$$
Theorem 1: Given A1, A2 and the SN set $S$, the lower bound for the 2nd-order LA-metric-based MSE of (16) is

$$\underline{\sigma}_S^2 = \sum_{m_S = 0}^{\infty}\ \sum_{\substack{n = 0,\, \hat{n} = 0 \\ n + \hat{n} > 0}}^{\infty}\ \sum_{k = 0}^{\min(n, \hat{n})}\ \sum_{\ell \in \mathbb{L}} \frac{\Omega_{n, m_S}\, \Psi_{\hat{n}, n, m_S}}{m_S!\, (n - k)!} \left( \varepsilon_{k, \hat{n}, n} \min\!\left(c^2,\ \frac{1}{\Psi_{\hat{n}, n, m_S}} \sum_{l=1}^{L} \big[J^{-1}_{\hat{n}, n, m_S}(\ell)\big]_{l,l}\right) + \big(1 - \varepsilon_{k, \hat{n}, n}\big)\, c^2 \right) \tag{40}$$

where $c$ is the cut-off of the LA metric, $L$ is the dimension of $x$, and $\Omega_{n, m_S}$ and $\Psi_{\hat{n}, n, m_S}$ are given in (24) and (39),

$$\varepsilon_{k, \hat{n}, n} = \frac{k}{n + \hat{n} - k}, \quad k = 0, 1, \ldots, \min(n, \hat{n}) \tag{41}$$

is the possible ratio of the number of common labels to the number of all distinct labels of the multi-target states and their estimates given $|\hat{\mathbf{X}}(Z^S_{m_S})| = \hat{n}$, $|\mathbf{X}| = n$, $|Z^S| = m_S$ (that is, $|\mathcal{L}(\mathbf{X}_n) \cap \mathcal{L}(\hat{\mathbf{X}}_{\hat{n}}(Z^S_{m_S}))| \,/\, |\mathcal{L}(\mathbf{X}_n) \cup \mathcal{L}(\hat{\mathbf{X}}_{\hat{n}}(Z^S_{m_S}))|$), and $J_{\hat{n}, n, m_S}(\ell)$ is the $L \times L$ FIM for a single-target state with the label $\ell$ given $|\hat{\mathbf{X}}(Z^S_{m_S})| = \hat{n}$, $|\mathbf{X}| = n$, $|Z^S| = m_S$,

$$\big[J_{\hat{n}, n, m_S}(\ell)\big]_{i,j} = -\frac{1}{\Psi_{\hat{n}, n, m_S}} \int_{\mathbb{Z}^S_{\hat{n}, m_S}} \int_{\mathbb{X}} \Phi_n(x, \ell, Z^S_{m_S})\, dx\, dz^S_{1:m_S}, \quad i, j = 1, \ldots, L \tag{42}$$

where the integration region $\mathbb{Z}^S_{\hat{n}, m_S}$ for the measurements is given by (30), $J_{\hat{n}, n, m_S}(\ell) = \mathbf{0}$ if $\mathbb{Z}^S_{\hat{n}, m_S} = \emptyset$, and the integrand $\Phi_n(x, \ell, Z^S_{m_S})$ is

$$\Phi_n(x, \ell, Z^S_{m_S}) = q_n(x, \ell, Z^S_{m_S})\, \frac{\partial^2 \log q_n(x, \ell, Z^S_{m_S})}{\partial x_i\, \partial x_j} = -\frac{1}{q_n(x, \ell, Z^S_{m_S})}\, \frac{\partial q_n(x, \ell, Z^S_{m_S})}{\partial x_i}\, \frac{\partial q_n(x, \ell, Z^S_{m_S})}{\partial x_j} + \frac{\partial^2 q_n(x, \ell, Z^S_{m_S})}{\partial x_i\, \partial x_j} \tag{43}$$

where $q_n(x, \ell, Z^S_{m_S})$ is given in (27).
See Appendix B for proof of Theorem 1.
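As a small worked instance of (41): for $n = 3$ true targets and $\hat{n} = 2$ estimates, the union of the true and estimated label sets has $n + \hat{n} - k$ elements when $k$ labels are shared, so

$$\varepsilon_{0,2,3} = \frac{0}{5} = 0, \qquad \varepsilon_{1,2,3} = \frac{1}{4}, \qquad \varepsilon_{2,2,3} = \frac{2}{3}.$$

In (40), $\varepsilon_{k,\hat{n},n}$ is thus the fraction of label-matched pairs that contribute the Fisher-information term, while the remaining fraction $1 - \varepsilon_{k,\hat{n},n}$ contributes the penalty $c^2$.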
Remark 2.
The number of estimated targets is assumed to be unknown in the derivation of the proposed bound. Only the MAP rule, rather than the specific (or exact) number of estimated targets, is required to obtain the bound. The symbol $\hat{n}$ used for calculating the bound in Theorem 1 is just an index over all possible (or unknown) numbers of estimated targets. This is similar to the symbols $\ell$, $k$, $n$ and $m_S$ in Theorem 1, which are just indices for the labels, the number of labels common to the true targets and their estimates, the number of true targets and the number of sensor measurements, respectively.
Furthermore, the reason for imposing the MAP rule has been explained in A2. (30) has also shown that the measurement space $\mathbb{Z}^S_{m_S}$ can be divided into $\mathbb{Z}^S_{0, m_S}, \mathbb{Z}^S_{1, m_S}, \ldots, \mathbb{Z}^S_{\infty, m_S}$ according to the MAP rule. This partition is very helpful for the proof of Theorem 1.
Remark 3.
In general, the maximum numbers of targets and measurements can be preset from prior knowledge. Moreover, the label space $\mathbb{L} = \mathbb{L}' \cup \mathbb{B}$, where $\mathbb{L}'$ and $\mathbb{B}$ are the label spaces of the last time and of the new-born targets. In general, $\mathbb{B}$ can also be preset from prior knowledge. With these presets, the sum of infinitely many terms in (40) becomes a sum of finitely many terms.
Remark 4.
Once the specific forms of $p_+^I(\mathbf{x})$, $p_d^s(\mathbf{x})$ and $g^s(z^s|\mathbf{x})$ are given, $\partial q_n(x, \ell, Z^S_{m_S})/\partial x_i$ and $\partial^2 q_n(x, \ell, Z^S_{m_S})/\partial x_i \partial x_j$ in $\Phi_n(x, \ell, Z^S_{m_S})$ can be obtained from (27) and (28).
Remark 5.
The formulas for $\Psi_{\hat{n}, n, m_S}$ and $J_{\hat{n}, n, m_S}(\ell)$ contain integrals over the measurement subspace $\mathbb{Z}^S_{\hat{n}, m_S}$, which are calculated via MC integration [45]. To improve computational efficiency, the samples for the MC integration are selected as predicted ideal measurement sets (PIMS) [31]. The calculation steps are shown in Algorithm 2.
Algorithm 2. Steps for calculating $\Psi_{\hat{n}, n, m_S}$ and $J_{\hat{n}, n, m_S}(\ell)$.
1. Prediction sampling: Generate $M$ samples $\tilde{\mathbf{X}}^{(1)}_{n,+}, \ldots, \tilde{\mathbf{X}}^{(M)}_{n,+}$ of multi-target state sets from the predicted density $\pi_+(\mathbf{X})$;
2. PIMS generation: For $j = 1, \ldots, M$, generate the PIMS $\tilde{Z}^{S,(j)}_{m_S}$ of the SN set $S$ based on $\tilde{\mathbf{X}}^{(j)}_{n,+}$ [31];
3. PIMS partitioning: Divide the PIMS $\{\tilde{Z}^{S,(j)}_{m_S}\}_{j=1}^{M}$ into the measurement subspaces $\mathbb{Z}^S_{\hat{n}, m_S}$ ($\hat{n} = 0, 1, \ldots, \infty$) according to (30);
4. MC integration: Given the PIMS assigned to $\mathbb{Z}^S_{\hat{n}, m_S}$, obtain $\Psi_{\hat{n}, n, m_S}$ and $J_{\hat{n}, n, m_S}(\ell)$ by applying the MC integral formula [45] to (39) and (42).
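The following Python sketch illustrates the generic structure of Steps 3–4 — restricting a sample-based integral to one cell of the partition in (30) — under simplifying assumptions; the sampler, weights and integrand below are placeholders rather than the paper's PIMS machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integral(samples, weights, integrand, subspace_mask):
    """Monte Carlo estimate of an integral restricted to a measurement subspace.

    samples: (M, d) array of measurement-set samples (e.g., PIMS),
    weights: (M,) importance weights of the samples,
    integrand: callable evaluated at each sample,
    subspace_mask: boolean (M,) array, True where the sample falls in Z_{n_hat}.
    """
    vals = np.array([integrand(s) for s in samples])
    return np.sum(weights * subspace_mask * vals) / np.sum(weights)

# Toy usage: integrate f(z) = z over the part of N(0,1) samples in one partition cell.
samples = rng.normal(size=(1000, 1))
weights = np.ones(1000)
mask = (samples[:, 0] > 0.0)          # stand-in for the MAP partition by n_hat
print(mc_integral(samples, weights, lambda s: s[0], mask))  # ~ 1/sqrt(2*pi)
```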

4.2. SMC and GM Implementations for the Bound

In order to derive the SMC implementation of the bound in Theorem 1, it is assumed that
A3: Each $p_+^I(\mathbf{x})$ involved in the predicted Mδ-GLMB density $\pi_+ = \{(\omega_+^I, p_+^I)\}_{I \in \mathcal{F}(\mathbb{L})}$ is described by a set of weighted particles $\{(\tilde{\upsilon}^{I,i}_+(\ell), \tilde{x}^{I,i}_+(\ell))\}_{i=1}^{\tilde{G}^I_+(\ell)}$,

$$p_+^I(\mathbf{x}) = \sum_{i=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i}_+(\ell)\, \delta_{\tilde{x}^{I,i}_+(\ell)}(x) \tag{44}$$
Substituting (44) into (22), (28), (29), (33) and (38), $\varphi^I(S)$, $q^I(\mathbf{x}, Z^S_{m_S}; \theta^S)$, $\eta^I_{Z^S_{m_S}}(\ell, \theta^S)$, $\beta^I_{Z^S_{m_S}}(\theta^S)$ and $\phi^I_{\hat{n}}(S)$ are rewritten as

$$\varphi^I(S) = \left[\sum_{Y \subseteq S} \frac{\sum_{i=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i}_+(\ell)\, p_d^{Y,S}\big(\tilde{x}^{I,i}_+(\ell), \ell\big)}{\prod_{s \in Y} \lambda^s}\right]^I \tag{45}$$

$$q^I(\mathbf{x}, Z^S_{m_S}; \theta^S) = \sum_{Y \subseteq S} \sum_{i=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i}_+(\ell)\, p_d^{Y,S}\big(\tilde{x}^{I,i}_+(\ell), \ell\big) \prod_{s \in Y} g^s\big(z^s_{\theta^s(\ell)}\big|\tilde{x}^{I,i}_+(\ell), \ell\big) \tag{46}$$

$$\eta^I_{Z^S_{m_S}}(\ell, \theta^S) = \left[\sum_{i=1}^{\tilde{G}^I_+(\cdot)} \tilde{\upsilon}^{I,i}_+(\cdot) \prod_{s \in S} \psi^s_{Z^s_{m_s}}\big(\tilde{x}^{I,i}_+(\cdot), \cdot\,;\, \theta^s \setminus \theta^s(\ell)\big)\right]^{I \setminus \{\ell\}} \tag{47}$$

$$\beta^I_{Z^S_{m_S}}(\theta^S) = \left[\sum_{i=1}^{\tilde{G}^I_+(\cdot)} \tilde{\upsilon}^{I,i}_+(\cdot) \prod_{s \in S} \psi^s_{Z^s_{m_s}}\big(\tilde{x}^{I,i}_+(\cdot), \cdot\,;\, \theta^s\big)\right]^I \tag{48}$$

$$\phi^I_{\hat{n}}(S) = \left[\sum_{Y \subseteq S} \frac{\sum_{i=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i}_+(\ell)\, p_d^{Y,S}\big(\tilde{x}^{I,i}_+(\ell), \ell\big) \prod_{s \in Y} \int_{\mathbb{Z}^s_{\hat{n},1}} g^s\big(z^s\big|\tilde{x}^{I,i}_+(\ell), \ell\big)\, dz^s}{\prod_{s \in Y} \lambda^s_{\hat{n}}}\right]^I \tag{49}$$

Finally, the SMC forms of $\Omega_{n, m_S}$, $q_n(\mathbf{x}, Z^S_{m_S})$, $\Psi_{\hat{n}, n, m_S}$ and $\Phi_n(x, \ell, Z^S_{m_S})$ are respectively obtained by substituting (45)–(49) into (24), (27), (39) and (43), where $\partial q_n(x, \ell, Z^S_{m_S})/\partial x_i$ and $\partial^2 q_n(x, \ell, Z^S_{m_S})/\partial x_i \partial x_j$ involved in $\Phi_n(x, \ell, Z^S_{m_S})$ are

$$\begin{cases} \dfrac{\partial q_n(x, \ell, Z^S_{m_S})}{\partial x_i} = \displaystyle\sum_{i'=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i'}_+(\ell) \left.\dfrac{\partial q_n(x, \ell, Z^S_{m_S})}{\partial x_i}\right|_{x = \tilde{x}^{I,i'}_+(\ell)} \\[3mm] \dfrac{\partial^2 q_n(x, \ell, Z^S_{m_S})}{\partial x_i\, \partial x_j} = \displaystyle\sum_{i'=1}^{\tilde{G}^I_+(\ell)} \tilde{\upsilon}^{I,i'}_+(\ell) \left.\dfrac{\partial^2 q_n(x, \ell, Z^S_{m_S})}{\partial x_i\, \partial x_j}\right|_{x = \tilde{x}^{I,i'}_+(\ell)} \end{cases} \tag{50}$$
In order to derive the GM implementation of the bound in Theorem 1, it is assumed that
A4: Each $p_+^I(\mathbf{x})$ involved in the predicted Mδ-GLMB density $\pi_+ = \{(\omega_+^I, p_+^I)\}_{I \in \mathcal{F}(\mathbb{L})}$ is described by the GM form

$$p_+^I(\mathbf{x}) = \sum_{i=1}^{G^I_+(\ell)} \upsilon^{I,i}_+(\ell)\, \mathcal{N}\big(x;\, \mu^{I,i}_+(\ell),\, \Sigma^{I,i}_+(\ell)\big), \quad \text{with} \quad \sum_{i=1}^{G^I_+(\ell)} \upsilon^{I,i}_+(\ell) = 1 \tag{51}$$

where $\mathcal{N}(\cdot\,; \mu^{I,i}_+(\ell), \Sigma^{I,i}_+(\ell))$ denotes the Gaussian density with mean $\mu^{I,i}_+(\ell)$ and covariance matrix $\Sigma^{I,i}_+(\ell)$, and $\upsilon^{I,i}_+(\ell)$ and $G^I_+(\ell)$ are the weights and the number of GM terms.
A5: The detection probability $p_d^s(\mathbf{x})$ is independent of $\mathbf{x}$ and the likelihood function $g^s(z^s|\mathbf{x})$ is linear Gaussian,

$$p_d^s(\mathbf{x}) = p_d^s, \qquad g^s(z^s|\mathbf{x}) = \mathcal{N}(z^s; H^s x, R^s) \tag{52}$$

where $H^s$ and $R^s$ are the observation matrix and the measurement-noise covariance matrix.
From A5 and (23), we have

$$p_d^{Y,S} = \prod_{s \in Y} p_d^s \prod_{s \in (S \setminus Y)} (1 - p_d^s) \tag{53}$$

$$\prod_{s \in Y} g^s(z^s|\mathbf{x}) = \mathcal{N}(z^Y; H^Y x, R^Y) \tag{54}$$

where

$$z^Y = \begin{bmatrix} z^{s_1} \\ \vdots \\ z^{s_{|Y|}} \end{bmatrix}; \quad H^Y = \begin{bmatrix} H^{s_1} \\ \vdots \\ H^{s_{|Y|}} \end{bmatrix}; \quad R^Y = \begin{bmatrix} R^{s_1} & & \\ & \ddots & \\ & & R^{s_{|Y|}} \end{bmatrix} \tag{55}$$
Substituting (53) into (22), $\varphi^I(S)$ is rewritten as

$$\varphi^I(S) = \left[\sum_{Y \subseteq S} \frac{p_d^{Y,S}}{\prod_{s \in Y} \lambda^s}\right]^{|I|} \tag{56}$$

Substituting (51), (53) and (54) into (28) and using Lemma 2 in [11], $q^I(\mathbf{x}, Z^S_{m_S}; \theta^S)$ is rewritten as

$$q^I(\mathbf{x}, Z^S_{m_S}; \theta^S) = \sum_{Y \subseteq S} \sum_{i=1}^{G^I_+(\ell)} p_d^{Y,S}\, \upsilon^{I,i}_+(\ell)\, \mathcal{N}\big(x; \mu^{I,i}_+(\ell), \Sigma^{I,i}_+(\ell)\big)\, \mathcal{N}\big(z^Y_{\theta^Y(\ell)}; H^Y x, R^Y\big) = \sum_{Y \subseteq S} \sum_{i=1}^{G^I_+(\ell)} p_d^{Y,S}\, \upsilon^{I,i}_+(\ell)\, \mathcal{N}\big(z^Y_{\theta^Y(\ell)}; H^Y \mu^{I,i}_+(\ell), \Xi^{I,i,Y}_+(\ell)\big)\, \mathcal{N}\big(x; \mu^{I,i}_{Z^Y_{m_Y}}(\ell; \theta^Y), \Sigma^{I,i}(\ell; Y)\big) \tag{57}$$

where

$$\begin{cases} \Xi^{I,i,Y}_+(\ell) = H^Y \Sigma^{I,i}_+(\ell) [H^Y]^T + R^Y \\ \mu^{I,i}_{Z^Y_{m_Y}}(\ell; \theta^Y) = \mu^{I,i}_+(\ell) + \Sigma^{I,i}_+(\ell) [H^Y]^T \big[\Xi^{I,i,Y}_+(\ell)\big]^{-1} \big(z^Y_{\theta^Y(\ell)} - H^Y \mu^{I,i}_+(\ell)\big) \\ \Sigma^{I,i}(\ell; Y) = \Sigma^{I,i}_+(\ell) - \Sigma^{I,i}_+(\ell) [H^Y]^T \big[\Xi^{I,i,Y}_+(\ell)\big]^{-1} H^Y \Sigma^{I,i}_+(\ell) \end{cases} \tag{58}$$
Similarly, substituting (51), (53) and (54) into (29), (33) and (38), $\eta^I_{Z^S_{m_S}}(\ell; \theta^S)$, $\beta^I_{Z^S_{m_S}}(\theta^S)$ and $\phi^I_{\hat{n}}(S)$ are rewritten as

$$\eta^I_{Z^S_{m_S}}(\ell; \theta^S) = \left[\sum_{Y \subseteq S} \sum_{i=1}^{G^I_+(\cdot)} \frac{p_d^{Y,S}\, \upsilon^{I,i}_+(\cdot)}{\prod_{s \in Y} \kappa^s\big(z^s_{[\theta^s \setminus \theta^s(\ell)](\cdot)}\big)}\, \mathcal{N}\big(z^Y_{[\theta^Y \setminus \theta^Y(\ell)](\cdot)};\ H^Y \mu^{I,i}_+(\cdot),\ \Xi^{I,i,Y}_+(\cdot)\big)\right]^{I \setminus \{\ell\}} \tag{59}$$

$$\beta^I_{Z^S_{m_S}}(\theta^S) = \left[\sum_{Y \subseteq S} \sum_{i=1}^{G^I_+(\cdot)} \frac{p_d^{Y,S}\, \upsilon^{I,i}_+(\cdot)}{\prod_{s \in Y} \kappa^s\big(z^s_{\theta^s(\cdot)}\big)}\, \mathcal{N}\big(z^Y_{\theta^Y(\cdot)};\ H^Y \mu^{I,i}_+(\cdot),\ \Xi^{I,i,Y}_+(\cdot)\big)\right]^I \tag{60}$$

$$\phi^I_{\hat{n}}(S) = \left[\sum_{Y \subseteq S} \sum_{i=1}^{G^I_+(\cdot)} \frac{p_d^{Y,S}\, \upsilon^{I,i}_+(\cdot)}{\prod_{s \in Y} \lambda^s_{\hat{n}}} \int_{\mathbb{Z}^Y_{\hat{n},1}} \mathcal{N}\big(z^Y;\ H^Y \mu^{I,i}_+(\cdot),\ \Xi^{I,i,Y}_+(\cdot)\big)\, dz^Y\right]^I \tag{61}$$
Finally, the GM forms of $\Omega_{n, m_S}$, $q_n(\mathbf{x}, Z^S_{m_S})$, $\Psi_{\hat{n}, n, m_S}$ and $\Phi_n(x, \ell, Z^S_{m_S})$ are respectively obtained by substituting (56)–(61) into (24), (27), (39) and (43). Obviously, they no longer contain integrals over the state $x$ and have analytic forms, except for $\Psi_{\hat{n}, n, m_S}$. Here $\partial q_n(x, \ell, Z^S_{m_S})/\partial x_i$ and $\partial^2 q_n(x, \ell, Z^S_{m_S})/\partial x_i \partial x_j$ in $\Phi_n(x, \ell, Z^S_{m_S})$ are both linear functions of $\partial \mathcal{N}(x; \mu^{I,i}_{Z^Y_{m_Y}}(\ell; \theta^Y), \Sigma^{I,i}(\ell; Y))/\partial x_i$ and $\partial^2 \mathcal{N}(x; \mu^{I,i}_{Z^Y_{m_Y}}(\ell; \theta^Y), \Sigma^{I,i}(\ell; Y))/\partial x_i \partial x_j$, which can be obtained by the following formulas:

$$\begin{cases} \dfrac{\partial \mathcal{N}(x; \mu, \Sigma)}{\partial x_i} = -\big[\Sigma^{-1}(x - \mu)\big]_i\, \mathcal{N}(x; \mu, \Sigma) \\[2mm] \dfrac{\partial^2 \mathcal{N}(x; \mu, \Sigma)}{\partial x_i\, \partial x_j} = \big[\Sigma^{-1}(x - \mu)(x - \mu)^T [\Sigma^{-1}]^T - \Sigma^{-1}\big]_{i,j}\, \mathcal{N}(x; \mu, \Sigma) \end{cases} \tag{62}$$
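As a numerical check of (62), the following Python sketch evaluates the closed-form gradient and Hessian of a Gaussian density and compares one gradient component against a finite difference (a verification aid, not part of the paper's implementation).

```python
import numpy as np
from scipy.stats import multivariate_normal

def gauss_grad_hess(x, mu, Sigma):
    """Gradient and Hessian of N(x; mu, Sigma) via the closed forms in (62)."""
    Sinv = np.linalg.inv(Sigma)
    N = multivariate_normal.pdf(x, mean=mu, cov=Sigma)
    r = Sinv @ (x - mu)
    grad = -r * N                          # first line of (62)
    hess = (np.outer(r, r) - Sinv) * N     # second line of (62)
    return grad, hess

x = np.array([0.3, -0.2])
mu = np.zeros(2)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
grad, hess = gauss_grad_hess(x, mu, Sigma)

# Finite-difference check of the gradient's first component.
eps = 1e-6
fd = (multivariate_normal.pdf(x + [eps, 0], mu, Sigma)
      - multivariate_normal.pdf(x - [eps, 0], mu, Sigma)) / (2 * eps)
print(grad[0], fd)  # the two values should agree to high precision
```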
If the observation model is non-linear, that is,

$$g^s(z^s|\mathbf{x}) = \mathcal{N}\big(z^s; h^s(x), R^s\big) \tag{63}$$

where $h^s(x)$ is a nonlinear observation function of the state $x$, then the extended Kalman (EK) or unscented Kalman (UK) filter can be used to calculate the mean and covariance matrix of each GM term [46,47].

4.3. Sub-Optimization Based on Coordinate Descent

The computational cost of the proposed method is composed of three parts: sensor selection, Mδ-GLMB filtering and weighted KLA fusion. The latter two have been analyzed in References [22,23]. This paper only studies the computational cost of sensor selection and its approximate algorithm, because when the number of SNs is large, sensor selection has a much more significant effect on the total amount of computation than the other two parts.
As shown in (15) and (17), sensor selection is actually a constrained combinatorial optimization problem. To find the optimal solution by exhaustive search, the objective function needs to be evaluated $C_{|N|}^{|S|} = |N|! / (|S|!\, (|N| - |S|)!)$ times. Obviously, this computational cost grows combinatorially with the SN number $|N|$. In order to reduce the computational cost, heuristic optimization algorithms, such as the genetic algorithm [38], can be used to tackle this problem. However, the convergence of heuristic algorithms becomes rather slow when the objective function is relatively complex. As a result, to further improve the computational speed, the coordinate descent method [37] is proposed to find a sub-optimal solution of (17). Its computational cost grows approximately polynomially with the SN number $|N|$.
Set a binary switch variable $\varsigma^s \in \{0, 1\}$, $s = 1, \ldots, |N|$, for each SN: $\varsigma^s = 1$ indicates $s \in S$ while $\varsigma^s = 0$ indicates $s \notin S$. The vector $\varsigma = [\varsigma^1, \ldots, \varsigma^{|N|}]$ is composed of the switch variables of all SNs belonging to the same LFC. Clearly, the set $S$ is completely determined by $\varsigma$. Then, (17) is relaxed to an unconstrained optimization problem via the augmented objective function

$$F(\varsigma, \varpi) = \underline{\sigma}_\varsigma^2 + \varpi \sum_{i=1}^{l} \gamma_i^{-1}(\varsigma) + \frac{1}{\varpi} \sum_{j=1}^{m} \nu_j^2(\varsigma) \tag{64}$$

where $\varpi > 0$ is a barrier factor, and $\sum_{i=1}^{l} \gamma_i^{-1}(\varsigma)$ and $\sum_{j=1}^{m} \nu_j^2(\varsigma)$ are the inequality and equality penalty terms.
Algorithm 3 presents the iteration steps for handling the relaxed problem by using the coordinate descent method.
Algorithm 3. Coordinate descent method.
Step 1: Set the initial iteration number $i = 0$, the initial SN switch vector $\varsigma_{(0)}$, the initial barrier factor $\varpi$ and its reduction coefficient $0 < C < 1$;
Step 2: From $s = 1$ to $s = |N|$, calculate $\varsigma^s_{(i+1)} = \arg\min_{\varsigma^s \in \{0,1\}} F\big(\varsigma^1_{(i+1)}, \ldots, \varsigma^{s-1}_{(i+1)}, \varsigma^s, \varsigma^{s+1}_{(i)}, \ldots, \varsigma^{|N|}_{(i)}, \varpi\big)$, where $\varsigma^1_{(i+1)}, \ldots, \varsigma^{s-1}_{(i+1)}, \varsigma^{s+1}_{(i)}, \ldots, \varsigma^{|N|}_{(i)}$ are treated as constants;
Step 3: If $\varsigma_{(i+1)} = \varsigma_{(i)}$, then go to Step 4; otherwise, set $i = i + 1$ and go to Step 2;
Step 4: If $\varsigma_{(i+1)} = \varsigma_{(0)}$, then output $\varsigma_{(i+1)}$ as the solution of (17); otherwise, set $\varpi = C\varpi$, $\varsigma_{(0)} = \varsigma_{(i+1)}$, $i = 0$ and then go to Step 2.
In order to increase the probability of converging to the global optimum and to accelerate the convergence of the coordinate descent method, the initial barrier factor $\varpi$ and its reduction coefficient $C$ can be appropriately selected by the methods in Reference [48], and the initial SN switch vector is set as $\varsigma_{(0)} = \varsigma^*$, where $\varsigma^*$ is the switch vector output at the last time.
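A minimal Python sketch of the coordinate sweep in Algorithm 3 is given below. It keeps only the inner loop of Steps 2–3 with a fixed barrier factor and a toy augmented objective in the spirit of (64); the full algorithm would wrap this in the barrier-reduction loop of Step 4, and the objective here is invented for illustration.

```python
import numpy as np

def coordinate_descent_binary(F, n, init=None, max_sweeps=50):
    """Greedy coordinate descent over a binary switch vector (inner loop of
    Algorithm 3 with a fixed barrier factor; F is an augmented objective)."""
    s = np.zeros(n, dtype=int) if init is None else init.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):          # optimize one coordinate at a time
            best = min((0, 1), key=lambda v: F(np.concatenate([s[:i], [v], s[i+1:]])))
            if best != s[i]:
                s[i], changed = best, True
        if not changed:              # a full sweep with no change: local optimum
            return s
    return s

# Toy objective: select exactly 3 of 10 sensors, preferring low-cost ones.
cost = np.arange(10, dtype=float)
F = lambda s: cost @ s + 100.0 * (s.sum() - 3) ** 2   # quadratic penalty for |S| != 3
print(coordinate_descent_binary(F, 10))               # -> [1 1 1 0 0 0 0 0 0 0]
```

Each sweep costs $2|N|$ objective evaluations, which is the source of the approximately polynomial (rather than combinatorial) growth noted above.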

4.4. Weighted KLA Fusion

A1 indicates that the posterior density of each LFC is a Mδ-GLMB of the form $\pi^i(\mathbf{X}|Z^{S^i}) = \{(\omega^I_i(Z^{S^i}), p^I_i(x, \ell|Z^{S^i}))\}_{I \in \mathcal{F}(\mathbb{L})}$, $i \in C$. Then, for the LFC $j \in C$, given the LFC subset $C^j$ connected to it and the normalized nonnegative fusion weights $\alpha_{j,i}$ ($i \in C^j$), the fused density obtained by the weighted KLA rule is still a Mδ-GLMB of the form $\bar{\pi}^j(\mathbf{X}) = \{(\bar{\omega}^L_j, \bar{p}^L_j(x, \ell))\}_{L \in \mathcal{F}(\mathbb{L})}$ [23],

$$\bar{\omega}^L_j = \frac{\prod_{i \in C^j} \big(\omega^L_i(Z^{S^i})\big)^{\alpha_{j,i}} \left[\int \prod_{i \in C^j} \big(p^L_i(x, \cdot|Z^{S^i})\big)^{\alpha_{j,i}}\, dx\right]^L}{\sum_{L' \in \mathcal{F}(\mathbb{L})} \prod_{i \in C^j} \big(\omega^{L'}_i(Z^{S^i})\big)^{\alpha_{j,i}} \left[\int \prod_{i \in C^j} \big(p^{L'}_i(x, \cdot|Z^{S^i})\big)^{\alpha_{j,i}}\, dx\right]^{L'}} \tag{65}$$

$$\bar{p}^L_j(x, \ell) = \frac{\prod_{i \in C^j} \big(p^L_i(x, \ell|Z^{S^i})\big)^{\alpha_{j,i}}}{\int \prod_{i \in C^j} \big(p^L_i(x, \ell|Z^{S^i})\big)^{\alpha_{j,i}}\, dx} \tag{66}$$
where the weight $\alpha_{j,i}$ reflects the effect of the local posterior density $\pi^i(\mathbf{X}|Z^{S^i})$ on the fusion of the LFC $j$: the larger the weight, the greater its impact on the KLA fusion.
The bound in Theorem 1 reflects the optimal MTT accuracy that an LFC can potentially achieve after sensor selection; the larger the proposed bound of an LFC, the worse the precision limit it can achieve. Therefore, the normalized weight $\alpha_{j,i}$ in the KLA fusion of the LFC $j$ should be set inversely proportional to the proposed bound $(\underline{\sigma}^i_{S^i})^2$,

$$\alpha_{j,i} = \frac{\big(\underline{\sigma}^i_{S^i}\big)^{-2}}{\sum_{i' \in C^j} \big(\underline{\sigma}^{i'}_{S^{i'}}\big)^{-2}}, \quad i \in C^j \tag{67}$$

which indicates that the larger the proposed bound of an LFC, the smaller the proportion of its posterior density in the KLA fusion, and vice versa.
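A short Python illustration of (67): the weights are the normalized reciprocals of the connected LFCs' bounds (the bound values below are invented for the example).

```python
import numpy as np

def kla_weights(bounds):
    """Fusion weights of Eq. (67): inversely proportional to each LFC's LA bound."""
    inv = 1.0 / np.asarray(bounds, dtype=float)
    return inv / inv.sum()

# Three connected LFCs whose selected SN subsets achieve bounds 0.2, 0.5, 1.0:
print(kla_weights([0.2, 0.5, 1.0]))  # [0.625, 0.25, 0.125] -- the best LFC dominates
```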

5. Simulations

The main goal of the simulations is to verify the following two points under different SNR conditions. First, our method conducts sensor selection more effectively than the CS divergence based methods for the decentralized large-scale MTT network; this advantage is much more obvious when the sensors have different observation performance. Second, the coordinate descent method significantly shortens the calculation time of the genetic algorithm at the expense of a slight loss in tracking accuracy. To highlight these points, the specific scenarios, including the multi-target dynamic model, the sensor network architecture, the observation models of the SNs and so forth, are designed as follows.
Multiple targets move according to a constant velocity (CV) model [49] over the two-dimensional region $A = [0, 50] \times [0, 50]$ km², and the number of targets is unknown and changes over time. The label of a state $\mathbf{x}$ is denoted $\ell = (k_b, i_b)$, where $k_b$ is the birth time and $i_b$ is an index distinguishing the targets born at the same time. The unlabeled state is denoted $x = [p_x, \dot{p}_x, p_y, \dot{p}_y]^T$, where $(p_x, p_y)$ and $(\dot{p}_x, \dot{p}_y)$ are the positions and velocities in the X and Y directions. The single-target transition density is the Gaussian form

$$f(x, \ell\,|\,x', \ell') = \mathcal{N}(x; F_{CV}\, x', Q)\, \delta_{\ell'}(\ell) \tag{68}$$

where $F_{CV}$ and $Q$ are the transition matrix and the process-noise covariance matrix of the unlabeled state,

$$F_{CV} = \begin{bmatrix} 1 & \Delta & & \\ & 1 & & \\ & & 1 & \Delta \\ & & & 1 \end{bmatrix}, \quad Q = q_Q^2 \begin{bmatrix} \frac{\Delta^4}{4} & \frac{\Delta^3}{2} & & \\ \frac{\Delta^3}{2} & \Delta^2 & & \\ & & \frac{\Delta^4}{4} & \frac{\Delta^3}{2} \\ & & \frac{\Delta^3}{2} & \Delta^2 \end{bmatrix} \tag{69}$$

where $\Delta$ is the sampling interval and $q_Q$ is the process-noise standard deviation. In this example, $\Delta = 10$ s, $q_Q = 0.002$ km/s² and the survival probability is $p_s(\mathbf{x}) = 0.95$.
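For concreteness, a short Python sketch builds $F_{CV}$ and $Q$ of (69) for the state ordering $[p_x, \dot{p}_x, p_y, \dot{p}_y]$ (the helper name and the toy state vector are illustrative only).

```python
import numpy as np

def cv_model(dt, q):
    """CV transition matrix F and process-noise covariance Q of Eq. (69)."""
    F1 = np.array([[1.0, dt], [0.0, 1.0]])
    Q1 = q**2 * np.array([[dt**4 / 4, dt**3 / 2],
                          [dt**3 / 2, dt**2]])
    Z = np.zeros((2, 2))
    F = np.block([[F1, Z], [Z, F1]])   # block-diagonal over the X and Y axes
    Q = np.block([[Q1, Z], [Z, Q1]])
    return F, Q

F, Q = cv_model(dt=10.0, q=0.002)       # the paper's sampling interval and noise level
x_toy = np.array([10.0, 0.15, 40.0, -0.15])  # a toy state (km, km/s)
print(F @ x_toy)                        # one-step predicted mean
```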
The target birth is modeled as an LMB RFS with density $\pi_b = \{(\omega_b^\ell, p_b^\ell)\}_{\ell \in \{1, \ldots, 12\}}$, where $\omega_b^\ell$ and $p_b^\ell(x)$ are the existence probability and density of the new-born target with label $\ell$,

$$p_b^\ell(x) = \mathcal{N}(x; \bar{x}_b^\ell, Q_b) \tag{70}$$

where, in this example, $\omega_b^\ell = 0.05$, $Q_b = \mathrm{diag}(25, 10^{-2}, 25, 10^{-2})$ and $\bar{x}_b^1 \sim \bar{x}_b^{12}$ are $[10, 0.15, 40, 0.15]^T$, $[20, 0.1, 40, 0.15]^T$, $[30, 0.1, 40, 0.15]^T$, $[40, 0.15, 40, 0.15]^T$, $[40, 0.15, 30, 0.1]^T$, $[40, 0.15, 20, 0.1]^T$, $[40, 0.15, 10, 0.15]^T$, $[30, 0.1, 10, 0.15]^T$, $[20, 0.1, 10, 0.15]^T$, $[10, 0.15, 10, 0.15]^T$, $[10, 0.15, 20, 0.1]^T$, $[10, 0.15, 30, 0.1]^T$ in turn, where the units of position and speed are km and km/s.
The decentralized sensor network is composed of $|C| = 8$ LFCs, denoted LFC1~LFC8. The positions (unit: km) of the LFCs are $[10, 40]^T$, $[25, 40]^T$, $[40, 40]^T$, $[40, 25]^T$, $[40, 10]^T$, $[25, 10]^T$, $[10, 10]^T$, $[10, 25]^T$ in turn. Each LFC has $|N^j| = 50$ subordinate SNs ($j = 1, \ldots, 8$), so there are in total $|N| = 400$ SNs in the entire network. Each LFC can communicate with the other LFCs within 25 km of it. Finally, the directed connection set $A$ of the LFCs and the locations of all SNs are shown in Figure 2.
The clutter is a uniformly distributed Poisson RFS over the region $A$. In this example, the clutter rate and detection probability of each SN are first set as $\lambda^s = \lambda = 20$ and $p_d^s(\mathbf{x}) = p_d = 0.95$, $s = 1, \ldots, 400$.
To embody the performance variation of the sensor observations, the network is assumed to consist of different types of SNs with distinct observation functions and noise covariances. The single-target likelihoods of the SNs are all Gaussian, as shown in (63). Each SN of LFC1 and LFC5 receives range and bearing measurements of a target, so its observation function $h^s(x)$ is

$$h^s(x) = \left[\|x - u^s\|,\ \arctan\frac{p_y - u_y^s}{p_x - u_x^s}\right]^T, \quad s \in N^1 \text{ or } s \in N^5 \tag{71}$$

where $u^s = [u_x^s, u_y^s]^T$ is the known position of the SN $s$ and $\|x - u^s\| = \sqrt{(p_x - u_x^s)^2 + (p_y - u_y^s)^2}$ is the distance between the SN $s$ and the target. The measurement-noise covariance matrix is also modeled as a nonlinear function of the state $x$,

$$R^s(x) = \begin{cases} \mathrm{diag}\big(([0.2 + 0.05\|x - u^s\|]\ \mathrm{km})^2,\ ([0.02 + 0.001\|x - u^s\|]\ \mathrm{rad})^2\big), & s \in N^1 \\ \mathrm{diag}\big(([0.4 + 0.04\|x - u^s\|]\ \mathrm{km})^2,\ ([0.04 + 0.0005\|x - u^s\|]\ \mathrm{rad})^2\big), & s \in N^5 \end{cases} \tag{72}$$
Each SN of LFC2 and LFC6 only receives the range measurement of a target. Its $h^s(x)$ and $R^s(x)$ are

$$h^s(x) = \|x - u^s\|, \quad s \in N^2 \text{ or } s \in N^6 \tag{73}$$

$$R^s(x) = \begin{cases} ([0.1 + 0.02\|x - u^s\|]\ \mathrm{km})^2, & s \in N^2 \\ ([0.2 + 0.01\|x - u^s\|]\ \mathrm{km})^2, & s \in N^6 \end{cases} \tag{74}$$

Each SN of LFC3 and LFC7 only receives the bearing measurement of a target. Its $h^s(x)$ and $R^s(x)$ are

$$h^s(x) = \arctan\frac{p_y - u_y^s}{p_x - u_x^s}, \quad s \in N^3 \text{ or } s \in N^7 \tag{75}$$

$$R^s(x) = \begin{cases} ([0.01 + 0.001\|x - u^s\|]\ \mathrm{rad})^2, & s \in N^3 \\ ([0.02 + 0.0005\|x - u^s\|]\ \mathrm{rad})^2, & s \in N^7 \end{cases} \tag{76}$$

Each SN of LFC4 and LFC8 receives range and Doppler measurements of a target. Its $h^s(x)$ and $R^s(x)$ are

$$h^s(x) = \left[\|x - u^s\|,\ \frac{(p_x - u_x^s)\dot{p}_x + (p_y - u_y^s)\dot{p}_y}{\|x - u^s\|}\right]^T, \quad s \in N^4 \text{ or } s \in N^8 \tag{77}$$

$$R^s(x) = \begin{cases} \mathrm{diag}\big(([0.2 + 0.05\|x - u^s\|]\ \mathrm{km})^2,\ ([0.02 + 0.001\|x - u^s\|]\ \mathrm{km/s})^2\big), & s \in N^4 \\ \mathrm{diag}\big(([0.4 + 0.04\|x - u^s\|]\ \mathrm{km})^2,\ ([0.04 + 0.0005\|x - u^s\|]\ \mathrm{km/s})^2\big), & s \in N^8 \end{cases} \tag{78}$$
In this example, there are three constraints for the sensor selection optimization of (17), which are
C1: Due to the limitations of communication bandwidth, energy consumption, computation capacity and storage space, the LFC $j$ can only select $K^j$ SNs at each scan ($K^j \leq |N^j|$),

$$K^j - |S^j| = 0 \tag{79}$$

In this example, $K^j = 5$ if $j = 1, 4, 5, 8$; otherwise $K^j = 8$.
C2: The field of view (FoV) of the SN $s$ is modeled as a circular area with center $u^s$ and radius $\rho^s$, $A^s(\rho^s) = \{p \in A : \|p - u^s\| \leq \rho^s\}$, $A^s \subseteq A$. The FoVs of different SNs may overlap. To ensure that the FoVs of the SN set $S^j$ totally cover the region $A$, it is required that

$$A \setminus \bigcup_{s \in S^j} A^s(\rho^s) = \emptyset \tag{80}$$

In this example, $\rho^s = 30$ km if $j = 1, 4, 5, 8$; otherwise $\rho^s = 20$ km.
C3: To avoid mutual interference between the homogeneous SNs belonging to the same LFC, the distance between any two SNs in $S^j$ must be no smaller than the threshold $D^j$,

$$\min_{s, s' \in S^j,\, s \neq s'} \|u^s - u^{s'}\| - D^j \geq 0 \tag{81}$$

In this example, $D^j = 5$ km for $j = 1, \ldots, 8$.
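A small Python sketch of a feasibility test for C1–C3 follows; as a simplifying assumption, the coverage constraint C2 is checked on a finite grid of points in $A$ instead of the continuous region.

```python
import numpy as np

def feasible(idx, positions, K, rho, D, region_pts):
    """Check constraints C1-C3 for a candidate SN subset (indices idx).
    Coverage is tested on the finite grid region_pts rather than all of A."""
    if len(idx) != K:                                    # C1: cardinality (79)
        return False
    P = positions[idx]
    dist = np.linalg.norm(region_pts[:, None, :] - P[None, :, :], axis=2)
    if not np.all(dist.min(axis=1) <= rho):              # C2: FoV coverage (80)
        return False
    pair = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    np.fill_diagonal(pair, np.inf)
    return pair.min() >= D                               # C3: mutual separation (81)

rng = np.random.default_rng(1)
pos = rng.uniform(0, 50, size=(50, 2))                   # 50 SNs of one LFC (toy layout)
grid = np.stack(np.meshgrid(np.linspace(0, 50, 11),
                            np.linspace(0, 50, 11)), -1).reshape(-1, 2)
print(feasible(np.arange(5), pos, K=5, rho=30.0, D=5.0, region_pts=grid))
```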
According to the objective function and the calculation method used for sensor selection, our algorithm is abbreviated as LA bound with coordinate descent. It is first implemented by the SMC technique and coded in MATLAB R2018a. Each $p^I(\cdot, \cdot)$ involved in the Mδ-GLMB density is approximated by 500 particles on average. In this example, the maximum numbers of targets and of measurements of each SN per scan are set as 25 and 200, and the cut-off is set as $c = 1000$ m. The algorithm is tested on a desktop with an AMD Ryzen 7 2700X CPU and 64 GB RAM. We conduct 500 MC simulations, each of which includes $T = 25$ scans (a total of 250 s). In these simulations, the target tracks (including the instants of birth and death), the clutter and the measurements originating from targets are generated independently according to the aforementioned models.
We first present the result of sensor selection obtained by the algorithm in one simulation. Figure 3 shows the target trajectories in the simulation, where a total of 15 targets are generated at different instants and locations. For ease of description, the targets are named T1~T15 in turn. The name and survival period of each target are marked at its start point. During the surveillance period, the number of targets is smallest at the initial time (3 targets) and largest at the 15th~18th scans (13 targets). The target T1 intersects with the target T10 at the 15th scan.
Figure 4a~f show the results of sensor selection at the 1st, 5th, 10th, 15th, 20th and 25th scans. It can be seen that the SNs selected by each LFC change adaptively with the multi-target movement over time. Specifically, in order to minimize the bound in Theorem 1, most of the selected SNs are located in the regions closer to the surviving targets at each scan. A few SNs far from the surviving targets are selected to satisfy Constraint C2. Moreover, due to Constraint C3, the homogeneous SNs of each LFC cannot be excessively concentrated in a small region.
In order to further verify the performance of our method in tracking accuracy and computational time, it is compared with the methods of LA bound with genetic algorithm, CS divergence with genetic algorithm and Random selection on the same test platform. In the genetic algorithm, the population size is 50, the crossover rate is 0.9, the mutation rate is 0.001, the elite rate is 0.04 and the maximum number of iterations is 500. In the CS divergence method, the objective function of the LFC $j$ is

$$[S^j]^* = \arg\max_{S^j \subseteq N^j}\ \mathbb{E}\left[D_{CS}\big(\pi_+^j,\ \pi^j(\cdot|Z^{S^j})\big)\right] \tag{82}$$

where $D_{CS}(\phi, \varphi)$ denotes the CS divergence between the densities $\phi$ and $\varphi$. Reference [31] presents the specific form of the CS divergence when $\phi$ and $\varphi$ are both GLMB densities. Since the posterior density $\pi^j(\mathbf{X}|Z^{S^j})$ is unknown before sensor selection, the expected value rather than the realized value of the CS divergence is applied in (82). In order to calculate the expected value, the MC integration based on PIMS also needs to be used here.
For comparison, both the OSPA and LA metrics are used to measure the error of multi-target position estimates.
Figure 5a,b present the 500 MC averages of the OSPA and LA errors for the four methods. Note that the error here is selected as the average of all LFCs because of Remark 1.
Figure 5 shows that both the averaged OSPA and LA errors of all four methods decrease with time. Furthermore, the LA errors are always larger than the corresponding OSPA errors; the reason for this has been explained in Section 2.3. In both Figure 5a,b, the errors of Random selection are always the largest, followed by those of CS divergence with genetic algorithm. The errors of LA bound with genetic algorithm are always the smallest, and the errors of LA bound with coordinate descent are slightly larger than those of LA bound with genetic algorithm. Compared with Random selection, the errors of the other three algorithms are reduced by approximately 40%, 60% and 55%, respectively. Obviously, the two LA bound based methods outperform the CS divergence based method in MTT accuracy. There are three reasons for this:
1) The LA bound has a clearer physical meaning than the CS divergence, because the former indicates the achievable optimal MTT accuracy with a labeled RFS state. In contrast, the latter is not directly related to the MTT accuracy, since maximizing the CS divergence in (82) does not guarantee minimizing the OSPA or LA error.
2) The CS divergence cannot provide a basis for setting the weights of the KLA rule as the LA bound does. Therefore, in the CS divergence based method, the KLA weights can only be set equal by convention [32]. However, in this example the MTT accuracy of different LFCs is probably not the same because of the distinct observation performance of the diverse SNs. If the KLA weights are set equal without discrimination, the fusion efficiency declines dramatically in this case.
3) In the step of sensor selection optimization, the coordinate descent method may get trapped in a local optimum. In contrast, the genetic algorithm may jump out of local optima with a certain probability thanks to its randomness. This is why the MTT accuracy of the former is a little worse than that of the latter.
On the other hand, the computational cost of a method is in general measured by its CPU run time. The averaged CPU run times per scan of LA bound with coordinate descent, LA bound with genetic algorithm and CS divergence with genetic algorithm are 0.62 s, 6.18 s and 5.69 s, respectively. Note that the run time here is also averaged over all LFCs because of Remark 1. Random selection obviously does not need the sensor selection optimization, so it consumes no time in this step. It can be seen that although the coordinate descent method is slightly worse than the genetic algorithm in tracking accuracy, it significantly shortens the calculation time of the sensor selection optimization. In addition, the time consumption of CS divergence with genetic algorithm is slightly less than that of LA bound with genetic algorithm, which indicates that the computational cost of the proposed bound is a little larger than that of the CS divergence expectation in (82).
In order to show the influence of different SNRs on our method, the clutter rate and detection probability of each SN are changed to $(\lambda = 40, p_d = 0.85)$, $(\lambda = 60, p_d = 0.75)$, $(\lambda = 80, p_d = 0.65)$ and $(\lambda = 100, p_d = 0.55)$. Table 1 and Table 2 present the final values of the OSPA and LA errors of the four methods in each scenario, averaged over 500 MC runs.
Moreover, the CPU times consumed by the sensor selection optimization of the first three methods are nearly the same in all the SNR scenarios. This is because the sensor selection is independent of the specific measurement realizations, since it must be completed before the measurements are received.
It can be seen from Tables 1 and 2 that, as the clutter density increases and the detection probability decreases,
1) The OSPA and LA errors of all four methods increase to different degrees, but their ordering is always the same as in Scenario 1;
2) Taking the error of Random selection as the benchmark, the improvement ratio of CS divergence with genetic algorithm is gradually reduced from about 40% to about 20%. Meanwhile, the improvement ratios of the two LA bound based methods are maintained at about 60% and 55%, respectively.
These two points reflect that the lower the SNR is, the worse the sensor selection efficiency and tracking accuracy of the CS divergence based methods become. By contrast, the tracking accuracy of the LA bound based methods always maintains a good improvement ratio in all the SNR scenarios.
Assuming that the simulation scenarios remain unchanged, the above four methods are re-implemented by the GM technique, where the non-linear likelihood of each SN is approximated by the EK filter,

$$\hat{H}^s \approx \left.\frac{\partial h^s(x)}{\partial x}\right|_{x = \hat{x}_+}; \qquad \hat{R}^s \approx R^s(\hat{x}_+) \tag{83}$$

The pruning and merging technology [11] is used to manage the GM terms. The number of GM terms approximating each $p^I(\cdot, \cdot)$ is limited to no more than 30. The thresholds for merging, pruning and state extraction are set to 4, $10^{-4}$ and 0.5, respectively. The simulation results of the GM implementation are basically consistent with those of the SMC implementation, except that the OSPA or LA error increases by about 8%, while the calculation time for sensor selection decreases by about 70%.
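As an illustration of the EK linearization in (83) for the range–bearing sensors of (71), the following Python sketch evaluates $h^s$ and its analytic Jacobian at a predicted mean (the quadrant-safe arctan2 replaces the printed arctan; all numeric values are toy inputs).

```python
import numpy as np

def h_range_bearing(x, u):
    """Range-bearing measurement of Eq. (71) for state x = [px, vx, py, vy]
    and sensor position u = [ux, uy]."""
    dx, dy = x[0] - u[0], x[2] - u[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def jacobian_h(x, u):
    """Analytic Jacobian used for the EK linearization of Eq. (83)."""
    dx, dy = x[0] - u[0], x[2] - u[1]
    r = np.hypot(dx, dy)
    return np.array([[dx / r, 0.0, dy / r, 0.0],
                     [-dy / r**2, 0.0, dx / r**2, 0.0]])

x_pred = np.array([12.0, 0.1, 35.0, -0.1])   # a toy predicted GM mean (km, km/s)
u = np.array([10.0, 40.0])                    # a toy SN position (km)
H_hat = jacobian_h(x_pred, u)                 # evaluated at the predicted mean
print(h_range_bearing(x_pred, u), H_hat.shape)  # -> measurement, (2, 4)
```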

6. Conclusions and Future Work

A sensor selection optimization algorithm is proposed for the decentralized large-scale MTT network under the labeled RFS framework. The LA metric defined in this paper is used to measure the error between the labeled multi-target states and their estimates. The lower bound of the LA-metric-based MSE is taken as the cost function of sensor selection. The bound is derived by the information inequality and then implemented by the SMC or GM technique. The coordinate descent method is then used to reduce the computational cost of sensor selection. Simulation results show that when the sensors of the decentralized network have different observation performance, our method outperforms the CS divergence based sensor selection algorithm in MTT accuracy.
Our future work will focus on the following two aspects:
1) Extend the proposed method to the cases of asynchronous measurement or correlated measurement noise;
2) Reference [50] has presented a very efficient implementation of the GLMB filter with linear complexity in the number of measurements, and this filter has been demonstrated to handle over one million tracks simultaneously [51]. Therefore, it would be very helpful to improve our current study by using the methods proposed in References [50,51].

Author Contributions

Conceptualization, F.L.; Methodology, F.L.; Software, L.H.; Validation, F.L., L.H., B.W. and C.H.; Formal Analysis, B.W.; Investigation, C.H.; Resources, B.W.; Data Curation, F.L.; Writing-Original Draft Preparation, F.L.; Writing-Review & Editing, L.H.; Visualization, B.W.; Supervision, C.H.; Project Administration, F.L.; Funding Acquisition, F.L.

Funding

This research was supported by the National Natural Science Foundation of China (61473217) and the National Key Fundamental Research & Development Programs (973) of China (2013CB329405).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A.

Proof that d ¯ p ( c ) ( X , Y ) is a metric
A mapping $d: \mathcal{X} \times \mathcal{X} \to [0,\infty)$ is called a metric if it obeys the following three properties:
1) Identity: $d(x,y) = 0$ if and only if $x = y$;
2) Symmetry: $d(x,y) = d(y,x)$ for all $x, y \in \mathcal{X}$;
3) Triangle inequality: $d(x,y) \le d(x,z) + d(z,y)$ for all $x, y, z \in \mathcal{X}$.
It follows directly from the definition of $\bar{d}_p^{(c)}(X,Y)$ in (9) that $0 \le \bar{d}_p^{(c)}(X,Y) \le c$ and that $\bar{d}_p^{(c)}(X,Y)$ obeys the properties of identity and symmetry. Next, we prove the triangle inequality
$$\bar{d}_p^{(c)}(X,Y) \le \bar{d}_p^{(c)}(X,Z) + \bar{d}_p^{(c)}(Z,Y) \tag{A1}$$
Let $x_\ell, y_\ell, z_\ell$ and $\mathcal{L}(X), \mathcal{L}(Y), \mathcal{L}(Z)$ denote the individual elements and the label sets of $X, Y, Z$, respectively. We first set
$$x_\ell := u_\ell \quad \text{for } \ell \in \big(\mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)\big)\setminus\mathcal{L}(X) \tag{A2}$$
$$y_\ell := v_\ell \quad \text{for } \ell \in \big(\mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)\big)\setminus\mathcal{L}(Y) \tag{A3}$$
$$\begin{cases} z_\ell := u_\ell & \text{for } \ell \in \big(\mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)\big)\setminus\big(\mathcal{L}(X)\cup\mathcal{L}(Z)\big) \\ z_\ell := v_\ell & \text{for } \ell \in \big(\mathcal{L}(X)\cup\mathcal{L}(Z)\big)\setminus\mathcal{L}(Z) \end{cases} \tag{A4}$$
where $u_\ell$ and $v_\ell$ satisfy
$$\|u_\ell - w\| \ge c, \quad \|v_\ell - w\| \ge c \quad \text{and} \quad \|u_\ell - v_{\ell'}\| \ge c \tag{A5}$$
for all $w \in X \cup Y \cup Z$ and all $\ell, \ell' \in \mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)$.
Based on the settings (A2)–(A5), the definition of $\bar{d}_p^{(c)}(X,Y)$ in (9) can be rewritten as
$$\bar{d}_p^{(c)}(X,Y) = \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(x_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \tag{A6}$$
We now prove that the triangle inequality (A1) holds for all six possible orderings of $|\mathcal{L}(X)\cup\mathcal{L}(Y)|$, $|\mathcal{L}(X)\cup\mathcal{L}(Z)|$ and $|\mathcal{L}(Z)\cup\mathcal{L}(Y)|$.
Case 1 ($|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)|$): Since the cut-off distance satisfies $d^{(c)}(x_\ell, y_\ell) \le c$ for all $\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)$ and $|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)|$, (A6) can be enlarged to
$$\bar{d}_p^{(c)}(X,Y) \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(x_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} \big(d^{(c)}(x_\ell, z_\ell) + d^{(c)}(z_\ell, y_\ell)\big)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} \tag{A7}$$
where the last inequality holds because $d^{(c)}(\cdot,\cdot)$ obeys the triangle inequality.
Applying Minkowski's inequality to (A7), we get
$$\bar{d}_p^{(c)}(X,Y) \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} + \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} = \bar{d}_p^{(c)}(X,Z) + \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} \tag{A8}$$
Similar to the derivation of the first inequality in (A7), the second term on the right-hand side of '=' in (A8) can be enlarged to
$$\left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(Y)\cup\mathcal{L}(Z)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(Y)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} = \bar{d}_p^{(c)}(Y,Z) \tag{A9}$$
because $d^{(c)}(\cdot,\cdot) \le c$ and $|\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)|$.
Finally, the triangle inequality of (A1) for Case 1 is obtained by substituting (A9) into (A8).
Case 2 ($|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)|$):
The proof of Case 2 is similar to that of Case 1, except that its first enlargement uses $|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)|$ and its last enlargement uses $|\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)|$.
Case 3 ($|\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)|$):
Using the triangle inequality of $d^{(c)}(\cdot,\cdot)$, (A6) can be enlarged to
$$\bar{d}_p^{(c)}(X,Y) \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} \big(d^{(c)}(x_\ell, z_\ell) + d^{(c)}(z_\ell, y_\ell)\big)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} + \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \tag{A10}$$
where the last inequality is derived from Minkowski's inequality.
Because $|\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)|$, (A10) can be further enlarged to
$$\bar{d}_p^{(c)}(X,Y) \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} + \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(Z)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \tag{A11}$$
Then, based on the settings (A2)–(A5), we have
$$\begin{cases} \sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(x_\ell, z_\ell)^p \le \sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(x_\ell, z_\ell)^p \\ \sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p \le \sum_{\ell \in \mathcal{L}(Z)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p \end{cases} \tag{A12}$$
Substituting (A12) into (A11) yields
$$\bar{d}_p^{(c)}(X,Y) \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} + \left( \frac{\sum_{\ell \in \mathcal{L}(Z)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(Z)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} = \bar{d}_p^{(c)}(X,Z) + \bar{d}_p^{(c)}(Z,Y) \tag{A13}$$
which is just the triangle inequality of (A1) for Case 3.
Case 4 ($|\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)|$):
The proof of Case 4 is exactly the same as that of Case 3.
Case 5 ($|\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)|$):
First, (A10) still holds for Case 5 by the triangle inequality of $d^{(c)}(\cdot,\cdot)$ and Minkowski's inequality.
Since $d^{(c)}(\cdot,\cdot) \le c$ for all $\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)\cup\mathcal{L}(Z)$ and $|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(X)\cup\mathcal{L}(Z)|$, we have
$$\left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Z)} d^{(c)}(x_\ell, z_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Z)|} \right)^{\frac{1}{p}} = \bar{d}_p^{(c)}(X,Z) \tag{A14}$$
From $|\mathcal{L}(Y)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)|$, we have
$$\left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(X)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(X)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(Z)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} \le \left( \frac{\sum_{\ell \in \mathcal{L}(Z)\cup\mathcal{L}(Y)} d^{(c)}(z_\ell, y_\ell)^p}{|\mathcal{L}(Z)\cup\mathcal{L}(Y)|} \right)^{\frac{1}{p}} = \bar{d}_p^{(c)}(Z,Y) \tag{A15}$$
where the second inequality follows from (A12).
Substituting (A14) and (A15) into (A10), we finally obtain the triangle inequality of (A1) for Case 5.
Case 6 ($|\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)|$):
The proof of Case 6 is similar to that of Case 5, except that (A14) is derived from $|\mathcal{L}(X)\cup\mathcal{L}(Z)| \le |\mathcal{L}(X)\cup\mathcal{L}(Y)|$ and (A12), while (A15) is derived from $d^{(c)}(\cdot,\cdot) \le c$ and $|\mathcal{L}(X)\cup\mathcal{L}(Y)| \le |\mathcal{L}(Y)\cup\mathcal{L}(Z)|$.
This completes the proof that $\bar{d}_p^{(c)}(X,Y)$ in (9) is a metric. □
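To make the metric concrete, the short Python sketch below (our own illustration, with an assumed data layout) evaluates $\bar{d}_p^{(c)}(X,Y)$ for labeled sets stored as dictionaries mapping labels to state vectors. Following the expansion used later in (A18), each matched label contributes the cut-off distance, each unmatched label contributes $c$, and the sum is normalized by the size of the label union before taking the $1/p$ power.

```python
import numpy as np

def la_metric(X, Y, c=1000.0, p=2):
    """LA metric between labeled sets X, Y given as {label: state_vector}.

    Matched labels contribute min(c, ||x - y||)^p, each unmatched label
    contributes c^p; the sum is divided by |L(X) u L(Y)| before the 1/p root.
    """
    common = X.keys() & Y.keys()
    union = X.keys() | Y.keys()
    if not union:                      # both sets empty: distance is zero
        return 0.0
    s = sum(min(c, np.linalg.norm(np.asarray(X[l]) - np.asarray(Y[l]))) ** p
            for l in common)
    s += c ** p * (len(union) - len(common))
    return (s / len(union)) ** (1.0 / p)

# example: one matched label plus one missed and one false track
X = {("t1",): [0.0, 0.0], ("t2",): [500.0, 0.0]}
Y = {("t1",): [30.0, 40.0], ("t3",): [900.0, 100.0]}
print(la_metric(X, Y))   # matched pair at distance 50, two label mismatches
```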

Appendix B.

Proof of Theorem 1
From (4) and (18), $\sigma_S^2$ in (16) can be written as
$$\sigma_S^2 = \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \frac{\Omega_{n,m_S}}{m_S!\, n!} \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{Z}_S^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S)\, e^2\big(X_n, \hat{X}(Z_{m_S}^S)\big)\, dx_{1:n}\, dz_{1:m_S}^S \tag{A16}$$
where $q(X_n, Z_{m_S}^S)$ and $\Omega_{n,m_S}$ are defined in (18) and (19), and $\Omega_{n,m_S}$ is finally given by (24).
Dividing the integration region $\mathbb{Z}_S^{m_S}$ of (A16) into $\mathbb{Z}_{0,S}^{m_S}, \mathbb{Z}_{1,S}^{m_S}, \ldots, \mathbb{Z}_{\infty,S}^{m_S}$ according to (30), $\sigma_S^2$ is rewritten as
$$\sigma_S^2 = \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \frac{\Omega_{n,m_S}}{m_S!\, n!} \sum_{\hat{n}=0}^{\infty} \sum_{\ell_{1:n} \in \mathbb{L}^n} \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S)\, e^2\big(X_n, \hat{X}_{\hat{n}}(Z_{m_S}^S)\big)\, dx_{1:n}\, dz_{1:m_S}^S \tag{A17}$$
where the error $e\big(X_n, \hat{X}_{\hat{n}}(Z_{m_S}^S)\big)$ is measured by the 2nd-order LA metric $\bar{d}_2^{(c)}\big(X_n, \hat{X}_{\hat{n}}(Z_{m_S}^S)\big)$ defined in (9). Substituting $\bar{d}_2^{(c)}\big(X_n, \hat{X}_{\hat{n}}(Z_{m_S}^S)\big)$ into (A17) and then using the identity $|\mathcal{L}(X)\cup\mathcal{L}(Y)| = |\mathcal{L}(X)| + |\mathcal{L}(Y)| - |\mathcal{L}(X)\cap\mathcal{L}(Y)|$, we get
$$\sigma_S^2 = \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \frac{\Omega_{n,m_S}}{m_S!\, n!} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{\infty} \sum_{\ell_{1:n} \in \mathbb{L}^n} \frac{1}{n + \hat{n} - |\mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S))|} \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S) \Bigg( \sum_{\ell \in \mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S))} \min\big(c^2, \|x_\ell - \hat{x}_\ell(Z_{m_S}^S)\|^2\big) + c^2\big(n + \hat{n} - 2|\mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S))|\big) \Bigg) dx_{1:n}\, dz_{1:m_S}^S \tag{A18}$$
where $\ell_{1:n}$ are the labels of $X_n$. Obviously, $\mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S)) \subseteq \{\ell_{1:n}\}$.
Let $k = |\mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S))|$, so that $0 \le k \le \min(n,\hat{n})$. Given $k$ and $\ell_{1:n}$, there are $C_n^k = n!/(k!(n-k)!)$ possible choices from $\ell_{1:n}$ for the elements of $\{\ell_{1:k}\} = \mathcal{L}(X_n)\cap\mathcal{L}(\hat{X}_{\hat{n}}(Z_{m_S}^S))$. Then, (A18) can be rewritten as
$$\begin{aligned} \sigma_S^2 &= \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{\infty} \sum_{\ell_{1:n} \in \mathbb{L}^n} \sum_{k=0}^{\min(n,\hat{n})} \sum_{\{\ell_{1:k}\} \subseteq \{\ell_{1:n}\}} \frac{\Omega_{n,m_S}}{m_S!\, n!} \frac{1}{n+\hat{n}-k} \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S) \Bigg( \sum_{\ell \in \{\ell_{1:k}\}} \min\big(c^2, \|x_\ell - \hat{x}_\ell(Z_{m_S}^S)\|^2\big) + c^2(n+\hat{n}-2k) \Bigg) dx_{1:n}\, dz_{1:m_S}^S \\ &= \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{\infty} \sum_{k=0}^{\min(n,\hat{n})} \sum_{\ell_{1:k} \in \mathbb{L}^k} \frac{\Omega_{n,m_S} \Psi_{\hat{n},n,m_S}}{m_S!\, k!\, (n-k)!} \frac{1}{n+\hat{n}-k} \Bigg( \sum_{\ell \in \{\ell_{1:k}\}} \min\bigg(c^2, \frac{1}{\Psi_{\hat{n},n,m_S}} \sum_{\ell_{1:n}^{-\ell} \in \mathbb{L}^{n-1}} \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S)\, \|x_\ell - \hat{x}_\ell(Z_{m_S}^S)\|^2\, dx_{1:n}\, dz_{1:m_S}^S \bigg) + c^2(n+\hat{n}-2k) \Bigg) \end{aligned} \tag{A19}$$
where $\Psi_{\hat{n},n,m_S}$ is defined in (34) and finally given by (39), and $\ell_{1:n}^{-\ell}$ denotes the residual label sequence of $\ell_{1:n}$ after the label $\ell$ is excluded from $\ell_{1:n}$.
Note that the estimate $\hat{x}_\ell(Z_{m_S}^S)$ involved in (A19) is independent of $X_n$. Hence, the integral term in the last line of (A19) becomes
$$\begin{aligned} &\sum_{\ell_{1:n}^{-\ell} \in \mathbb{L}^{n-1}} \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^n} q(X_n, Z_{m_S}^S)\, \|x_\ell - \hat{x}_\ell(Z_{m_S}^S)\|^2\, dx_{1:n}\, dz_{1:m_S}^S \\ &= \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^1} \Bigg[ \sum_{\ell_{1:n}^{-\ell} \in \mathbb{L}^{n-1}} \int_{\mathbb{X}^{n-1}} q(X_n, Z_{m_S}^S)\, d(x_{1:n} \setminus x_\ell) \Bigg] \|x_\ell - \hat{x}_\ell(Z_{m_S}^S)\|^2\, dx_\ell\, dz_{1:m_S}^S \\ &= \int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^1} q_n(x, \ell, Z_{m_S}^S) \sum_{l=1}^{L} \big(x_l - \hat{x}_l(Z_{m_S}^S)\big)^2\, dx\, dz_{1:m_S}^S \end{aligned} \tag{A20}$$
where the last line follows from the marginal density $q_n(x, \ell, Z_{m_S}^S)$ of (25) and the definition of the 2-norm; $L$ is the dimension of $x$, and $\int_{\mathbb{X}^{n-1}} d(x_{1:n} \setminus x_\ell)$ denotes the residual integral of $\int_{\mathbb{X}^n} dx_{1:n}$ after the term $\int_{\mathbb{X}^1} dx_\ell$ is excluded.
Since Assumption A2 states that the estimator $\hat{x}_\ell(Z_{m_S}^S)$ is unbiased, the information inequality (7) can be applied to (A20):
$$\int_{\mathbb{Z}_{\hat{n},S}^{m_S}} \int_{\mathbb{X}^1} q_n(x, \ell, Z_{m_S}^S)\, \big(x_l - \hat{x}_l(Z_{m_S}^S)\big)^2\, dx\, dz_{1:m_S}^S \ge \big[J_{\hat{n},n,m_S}^{-1}(\ell)\big]_{l,l}, \qquad l = 1, \ldots, L \tag{A21}$$
where the FIM $J_{\hat{n},n,m_S}(\ell)$ is given in (42). (A21) holds with equality if and only if $q_n(x, \ell, Z_{m_S}^S)$ belongs to the exponential family.
Substituting (A21) into (A20) and then into (A19), we get
$$\begin{aligned} \sigma_S^2 &\ge \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{\infty} \sum_{k=0}^{\min(n,\hat{n})} \sum_{\ell_{1:k} \in \mathbb{L}^k} \frac{\Omega_{n,m_S} \Psi_{\hat{n},n,m_S}}{m_S!\, k!\, (n-k)!} \frac{1}{n+\hat{n}-k} \Bigg( \sum_{\ell \in \{\ell_{1:k}\}} \min\bigg(c^2, \frac{1}{\Psi_{\hat{n},n,m_S}} \sum_{l=1}^{L} \big[J_{\hat{n},n,m_S}^{-1}(\ell)\big]_{l,l}\bigg) + c^2(n+\hat{n}-2k) \Bigg) \\ &= \sum_{m_S=0}^{\infty} \sum_{n=0}^{\infty} \sum_{\substack{\hat{n}=0 \\ n+\hat{n}>0}}^{\infty} \sum_{k=0}^{\min(n,\hat{n})} \sum_{\ell_1 \in \mathbb{L}^1} \frac{\Omega_{n,m_S} \Psi_{\hat{n},n,m_S}}{m_S!\, (n-k)!} \frac{1}{n+\hat{n}-k} \Bigg( k \min\bigg(c^2, \frac{1}{\Psi_{\hat{n},n,m_S}} \sum_{l=1}^{L} \big[J_{\hat{n},n,m_S}^{-1}(\ell_1)\big]_{l,l}\bigg) + c^2(n+\hat{n}-2k) \Bigg) \end{aligned} \tag{A22}$$
where the second line follows from the fact that the FIM $J_{\hat{n},n,m_S}(\ell)$ is the same for all $\ell \in \{\ell_{1:k}\}$ given $\ell_{1:k} \in \mathbb{L}^k$.
Finally, (40) is obtained by substituting the notation $\varepsilon_{k,\hat{n},n}$ defined in (41) into (A22). □
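As a numerical illustration of how (A21) and (A22) combine, the Python sketch below clamps the per-target CRLB (the trace of the inverse FIM) at $c^2$ and assembles the bracketed term of (A22) for a single $(k, \hat{n}, n, m_S)$ combination; the outer weights $\Omega_{n,m_S}\Psi_{\hat{n},n,m_S}/(m_S!(n-k)!)$ are omitted. All names are assumptions for illustration, not code from the paper.

```python
import numpy as np

def bound_term(J, k, n, n_hat, psi, c=1000.0):
    """Bracketed term of (A22) for one (k, n_hat, n, m_S) combination.

    J: Fisher information matrix of one matched target (equation (42));
    psi: the normalizer Psi_{n_hat,n,m_S} of (39). The localized error of a
    matched label is min(c^2, trace(J^{-1})/psi); each of the n+n_hat-2k
    label mismatches costs c^2, and the sum is scaled by 1/(n+n_hat-k).
    """
    crlb = np.trace(np.linalg.inv(J))          # sum_l [J^{-1}]_{l,l} of (A21)
    matched = k * min(c ** 2, crlb / psi)      # k identical matched labels
    mismatched = c ** 2 * (n + n_hat - 2 * k)  # cardinality/label errors
    return (matched + mismatched) / (n + n_hat - k)

# example: a well-observed 4-dimensional state (large information, small CRLB)
J = np.diag([1e-2, 1e-2, 1e-3, 1e-3])
print(bound_term(J, k=2, n=3, n_hat=2, psi=1.0))
```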

References

  1. Blackman, S.; Popoli, R. Design and Analysis of Modern Tracking Systems; Artech House: Norwood, MA, USA, 1999.
  2. Hero, A.O.; Castanón, D.A.; Cochran, D.; Kastella, K. Foundations and Applications of Sensor Management; Springer: New York, NY, USA, 2008.
  3. Tharmarasa, R.; Kirubarajan, T.; Hernandez, M.L.; Sinha, A. PCRLB-based multisensor array management for multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 539–555.
  4. Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Lang, T. Decentralized sensor selection for large-scale multisensor-multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1307–1324.
  5. Fu, Y.; Ling, Q.; Tian, Z. Distributed sensor allocation for multi-target tracking in wireless sensor networks. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 3538–3553.
  6. Mohammadi, A.; Asif, A. Decentralized computation of the conditional posterior Cramér-Rao lower bound: Application to adaptive sensor selection. In Proceedings of the 38th IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013.
  7. Herath, S.C.K.; Pathirana, P.N. Optimal sensor arrangements in angle of arrival (AoA) and range based localization with linear sensor arrays. Sensors 2013, 13, 12277–12294.
  8. Wang, Z.; Shen, X.; Wang, P.; Zhu, Y. The Cramér-Rao bounds and sensor selection for nonlinear systems with uncertain observations. Sensors 2018, 18, 1103.
  9. Mahler, R. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2014.
  10. Vo, B.N.; Singh, S.; Doucet, A. Sequential Monte Carlo methods for multi-target filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1224–1245.
  11. Vo, B.N.; Ma, W.K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 2006, 54, 4091–4104.
  12. Zhang, Q.; Song, T.L. Improved bearings-only multi-target tracking with GM-PHD filtering. Sensors 2016, 16, 1469.
  13. Vo, B.T.; Vo, B.N.; Cantoni, A. Analytic implementations of the cardinalized probability hypothesis density filter. IEEE Trans. Signal Process. 2007, 55, 3553–3567.
  14. Si, W.; Wang, L.; Qu, Z. Multi-target tracking using an improved Gaussian mixture CPHD filter. Sensors 2016, 16, 1964.
  15. Vo, B.T.; Vo, B.N.; Cantoni, A. The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 2009, 57, 409–423.
  16. He, X.; Liu, G. Cardinality balanced multi-target multi-Bernoulli filter with error compensation. Sensors 2016, 16, 1399.
  17. Vo, B.T.; Vo, B.N. Labeled random finite sets and multi-object conjugate priors. IEEE Trans. Signal Process. 2013, 61, 3460–3475.
  18. Vo, B.N.; Vo, B.T.; Phung, D. Labeled random finite sets and the Bayes multi-target tracking filter. IEEE Trans. Signal Process. 2014, 62, 6554–6567.
  19. Liu, C.; Sun, J.; Lei, P.; Qi, Y. δ-generalized labeled multi-Bernoulli filter using amplitude information of neighboring cells. Sensors 2018, 18, 1153.
  20. Reuter, S.; Vo, B.T.; Vo, B.N.; Dietmayer, K. The labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2014, 62, 3246–3260.
  21. Papi, F.; Vo, B.N.; Vo, B.T.; Fantacci, C.; Beard, M. Generalized labeled multi-Bernoulli approximation of multi-object densities. IEEE Trans. Signal Process. 2015, 63, 5487–5497.
  22. Fantacci, C.; Papi, F. Scalable multisensor multitarget tracking using the marginalized δ-GLMB density. IEEE Signal Process. Lett. 2016, 23, 863–867.
  23. Fantacci, C.; Vo, B.N.; Vo, B.T.; Battistelli, G.; Chisci, L. Robust fusion for multisensor multiobject tracking. IEEE Signal Process. Lett. 2018, 25, 640–644.
  24. Williams, J.L. Marginal multi-Bernoulli filters: RFS derivation of MHT, JIPDA and association-based MeMBer. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1664–1687.
  25. García-Fernández, Á.F.; Williams, J.L.; Granström, K.; Svensson, L. Poisson multi-Bernoulli mixture filter: Direct derivation and implementation. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1883–1901.
  26. Granström, K.; Svensson, L.; Xia, Y.; Williams, J.; García-Fernández, Á.F. Poisson multi-Bernoulli mixture trackers: Continuity through random finite sets of trajectories. In Proceedings of the 21st International Conference on Information Fusion, Cambridge, UK, 10–13 July 2018.
  27. Ristic, B.; Vo, B.N. Sensor control for multi-object state-space estimation using random finite sets. Automatica 2010, 46, 1812–1818.
  28. Ristic, B.; Vo, B.N.; Clark, D. A note on the reward function for PHD filters with sensor control. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1521–1529.
  29. Hoang, H.G.; Vo, B.T. Sensor management for multi-target tracking via multi-Bernoulli filtering. Automatica 2014, 50, 1135–1142.
  30. Gostar, A.K.; Hoseinnezhad, R.; Bab-Hadiashar, A. Multi-Bernoulli sensor control via minimization of expected estimation errors. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1762–1773.
  31. Beard, M.; Vo, B.T.; Vo, B.N.; Arulampalam, S. Sensor control for multi-target tracking using Cauchy-Schwarz divergence. In Proceedings of the 18th International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015.
  32. Jiang, M.; Yi, W.; Kong, L. Multi-sensor control for multi-target tracking using Cauchy-Schwarz divergence. In Proceedings of the 19th International Conference on Information Fusion, Heidelberg, Germany, 5–8 July 2016.
  33. Gostar, A.K.; Hoseinnezhad, R.; Rathnayake, T.; Wang, X.; Bab-Hadiashar, A. Constrained sensor control for labeled multi-Bernoulli filter using Cauchy-Schwarz divergence. IEEE Signal Process. Lett. 2017, 24, 1313–1317.
  34. Wang, X.; Hoseinnezhad, R.; Gostar, A.K.; Rathnayake, T.; Xu, B.; Bab-Hadiashar, A. Multi-sensor control for multi-object Bayes filters. Signal Process. 2018, 142, 260–270.
  35. Rezaeian, M.; Vo, B.N. Error bounds for joint detection and estimation of a single object with random finite set observation. IEEE Trans. Signal Process. 2010, 58, 1493–1506.
  36. Arulampalam, S.; Maskell, S.; Gordon, N.J.; Clapp, T. A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  37. Wright, S.J. Coordinate descent algorithms. Math. Program. 2015, 151, 3–34.
  38. Hristakeva, M.; Shrestha, D. Solving the 0-1 knapsack problem with genetic algorithms. In Proceedings of the 37th Midwest Instruction and Computing Symposium, Morris, MN, USA, 16–17 April 2004.
  39. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457.
  40. Ristic, B.; Vo, B.N.; Clark, D.; Vo, B.T. A metric for performance evaluation of multi-target tracking algorithms. IEEE Trans. Signal Process. 2011, 59, 3452–3457.
  41. Rahmathullah, A.S.; García-Fernández, Á.F.; Svensson, L. Generalized optimal sub-pattern assignment metric. In Proceedings of the 20th International Conference on Information Fusion, Xi'an, China, 10–13 July 2017.
  42. Zhou, B.; Bose, N.K. Multitarget tracking in clutter: Fast algorithms for data association. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 352–363.
  43. Battistelli, G.; Chisci, L.; Fantacci, C.; Farina, A.; Graziano, A. Consensus CPHD filter for distributed multitarget tracking. IEEE J. Sel. Top. Signal Process. 2013, 7, 508–520.
  44. Battistelli, G.; Chisci, L. Kullback-Leibler average, consensus on probability densities and distributed state estimation with guaranteed stability. Automatica 2014, 50, 707–718.
  45. Davis, P.J.; Rabinowitz, P.; Rheinbolt, W. Methods of Numerical Integration, 2nd ed.; Dover Publications: Mineola, NY, USA, 2007.
  46. Anderson, B.D.; Moore, J.B. Optimal Filtering; Prentice-Hall: Englewood Cliffs, NJ, USA, 1979.
  47. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
  48. Luenberger, D.G.; Ye, Y. Linear and Nonlinear Programming, 4th ed.; Springer: New York, NY, USA, 2015.
  49. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking, part V: Multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321.
  50. Vo, B.N.; Vo, B.T.; Hoang, H.G. An efficient implementation of the generalized labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2017, 65, 1975–1987.
  51. Beard, M.; Vo, B.T.; Vo, B.N. A solution for large-scale multi-object tracking. arXiv 2018. Available online: https://arxiv.org/abs/1804.06622 (accessed on 22 November 2018).
Figure 1. Diagram of the decentralized sensor network. ○ and ■ denote SNs and LFCs, solid lines with arrows denote directed connections between LFCs, and dotted lines denote connections between an LFC and its subordinate SNs.
Figure 2. Locations of LFCs and SNs in the decentralized sensor network. ■ denotes an LFC, ● denotes an SN, and different colors correspond to different LFCs and their subordinate SNs. A solid line with a directed arrow indicates a communication connection between two LFCs. Adjacent SNs are 2.4 km apart.
Figure 3. Target trajectories in a simulation. ○, △ and ★ denote the start point, end point and remaining positions of a target, solid lines are the target tracks, and different colors correspond to different targets.
Figure 4. Sensor selection using the LA bound with coordinate descent at (a) the 1st scan; (b) the 5th scan; (c) the 10th scan; (d) the 15th scan; (e) the 20th scan; (f) the 25th scan. ★ and ☉ denote the targets and the selected SNs, and different colors correspond to different targets and to SNs of different LFCs.
Figure 5. 500 MC averages of (a) OSPA and (b) LA errors versus time with $c = 1000$ m.
Table 1. Final value of OSPA error (unit: m).

Sensor Selection Method | λ = 20, p_d = 0.95 | λ = 40, p_d = 0.85 | λ = 60, p_d = 0.75 | λ = 80, p_d = 0.65 | λ = 100, p_d = 0.55
LA bound with coordinate descent | 171.3 | 193.0 | 218.9 | 249.1 | 283.7
LA bound with genetic algorithm | 151.9 | 170.8 | 193.6 | 220.5 | 251.6
CS divergence with genetic algorithm | 238.6 | 278.5 | 333.7 | 405.3 | 494.4
Random selection | 386.7 | 436.0 | 490.5 | 550.1 | 615.2
Table 2. Final value of LA error (unit: m).

Sensor Selection Method | λ = 20, p_d = 0.95 | λ = 40, p_d = 0.85 | λ = 60, p_d = 0.75 | λ = 80, p_d = 0.65 | λ = 100, p_d = 0.55
LA bound with coordinate descent | 188.8 | 223.9 | 255.0 | 288.6 | 323.9
LA bound with genetic algorithm | 165.7 | 199.6 | 229.5 | 262.9 | 292.3
CS divergence with genetic algorithm | 260.1 | 321.2 | 383.9 | 463.9 | 562.8
Random selection | 426.7 | 500.5 | 564.3 | 634.2 | 710.9
