An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network

We consider the joint sparsity model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search for the innovation support set using an orthogonal matching pursuit (OMP) algorithm, based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set has no impact on estimating the innovation support set. It is proven that, under the condition that the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly.
Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.


Introduction
Compressed sensing has received considerable attention recently and has been applied successfully in diverse fields, e.g., image processing [1], speech enhancement [2], sensor networks [3,4] and radar systems [5]. As an important branch of compressed sensing, distributed compressed sensing (DCS) theory [6,7] rests on a new concept called the joint sparsity of a signal ensemble. A signal ensemble is composed of different signals from the various sensors of the same scene. Three joint sparsity models (JSM) are presented in [6]: JSM-1, JSM-2 and JSM-3. In JSM-1, each signal consists of a sum of two components: a common component that is present in all of the signals and an innovation component that is unique to each signal. In JSM-2, all signals are constructed from the same sparse set of basis vectors, but with different coefficient values. JSM-3 extends JSM-1 so that the common component need no longer be sparse. In the proposed algorithm, the measurement vector is projected into a subspace perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove its impact. We can then search for the innovation support set using an OMP algorithm based on the projected measurement vector and projected sensing matrix.
The main contribution of the paper is threefold. First, in contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed DCSMP algorithm focuses on deterministic sensing matrices built directly on the real acquisition systems, which eliminates the need for additional measurement matrices, reduces memory storage and accelerates the reconstruction algorithm. Secondly, in most algorithms addressing the JSM-1, the process of estimating the common component/support set is coupled with that of estimating the innovation component/support set; thus, an inaccurately estimated common support set will lead to failure in estimating the innovation support set. In this paper, the two processes are decoupled by projecting the measurement vector into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove its impact. It is proven that, under the condition that the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly. Thirdly, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
The paper is organized as follows. Section 2 introduces the JSM-1 model in distributed compressed sensing. The proposed DCSMP algorithm is introduced in Section 3, which is the main contribution of this paper. The complexity and scalability analysis of the proposed DCSMP algorithm is in Section 4. In Section 5, we consider a simulation example of identifying multiple targets in a multistatic radar system, where different individual receivers observe the same surveillance region with different detection probabilities. A JSM-1 model is constructed based on the multistatic radar system, and sparse recovery is carried out in a decentralized manner across various receivers. Finally, the paper is summarized in Section 6.

Notation
For a set T ⊂ {1, 2, · · · , n}, we use |T| to denote its cardinality, i.e., the number of elements in T. We use T^c to denote its complement w.r.t. {1, 2, · · · , n}, i.e., T^c := {i ∈ {1, 2, · · · , n} : i ∉ T}. For a vector v, v(i) denotes the i-th entry of v, and v_T denotes the vector consisting of the entries of v indexed by T. We use ‖v‖_p to denote the ℓ_p norm of v. The support set of v, supp(v), is the set of indices at which v is nonzero, supp(v) := {i : v(i) ≠ 0}. We say that v is s-sparse if |supp(v)| ≤ s.
For a matrix B, B^* denotes its conjugate transpose and B^† its pseudo-inverse. For a matrix with linearly-independent columns, B^† = (B^* B)^{−1} B^*.
We use I to denote an identity matrix of appropriate size. For an index set T and a matrix B, B_T is the sub-matrix of B containing the columns with indices in the set T. Notice that B_T = B I_T. For a tall matrix P, span(P) denotes the subspace spanned by the column vectors of P.
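As a quick illustration, the notation above can be exercised on a toy NumPy example (a minimal sketch; note that the indexing below is NumPy's 0-based convention, whereas the paper indexes entries from 1):

```python
import numpy as np

# a toy vector and index set
v = np.array([0.0, 3.0, 0.0, -1.0, 0.0])
T = [1, 3]

# supp(v): indices at which v is nonzero; here v is 2-sparse
supp_v = [i for i in range(len(v)) if v[i] != 0]
assert supp_v == T

B = np.arange(12.0).reshape(3, 4)
B_T = B[:, T]                 # sub-matrix with the columns indexed by T

# B_T = B I_T, where I_T keeps only the columns of I indexed by T
I_T = np.eye(4)[:, T]
assert np.allclose(B_T, B @ I_T)
```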

Problem Formulation in the JSM-1 Framework
Consider a decentralized sensor network where a number of sensors acquire signals and communicate with neighbor nodes to reconstruct the original signals. Each individual sensor (e.g., the p-th sensor node) is not aware of the full network topology; instead, it knows two sets of local neighbors: the incoming neighbor connections L_p^in and the outgoing neighbor connections L_p^out. Here, incoming and outgoing connections correspond to communication links on which a node can receive or send information, respectively. In particular, it is assumed that the signals sensed by these sensors exhibit both intra-sensor correlation and inter-sensor correlation. This correlated sensing model corresponds to the JSM-1 model, which is described as follows.
The p-th sensor monitors a discrete signal x_p ∈ ℝ^N according to the following relation:

y_p = Φ_p x_p + e_p, p ∈ Γ, (1)

where y_p ∈ ℝ^M is the measurement vector, Φ_p ∈ ℝ^{M×N} is the sensing matrix, e_p ∈ ℝ^M is the measurement noise and Γ is a global set containing all nodes in the network. This setup describes an underdetermined system, where M < N. The signal vector x_p is K-sparse, and its support set is defined as S_p. The goal of the proposed algorithm is to reconstruct the original signal observed at each sensor node, i.e., to reconstruct x_p at the p-th node (p ∈ Γ).
In the JSM-1, each sensor has its own signal; that is, the signals across sensors are not identical, but are correlated. The sparse signal x_p can be represented as:

x_p = z_c + z_p, p ∈ Γ, (2)

where z_c ∈ ℝ^N denotes the common component of the sparse signal x_p, which captures the inter-signal correlation and is common to all signals, and z_p ∈ ℝ^N (p ∈ Γ) denotes the innovation component of the sparse signal x_p, which captures the intra-signal correlation and is specific to the p-th sparse vector x_p. We further define J and I_p as the support sets of z_c and z_p, respectively, and have:

S_p = J ∪ I_p. (3)

Here, the partial support set J is common to all sparse signals and is defined as the common support set. The partial support set I_p is specific to the p-th node and is defined as the innovation support set.
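For concreteness, a JSM-1 ensemble of the form (2) and (3) can be generated as follows (a minimal sketch with toy dimensions; the common support set and the innovation sparsity are assumed values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_sensors = 50, 3
J = [4, 17, 30]                       # common support set (toy assumption)

z_c = np.zeros(N)                     # common component, shared by all nodes
z_c[J] = rng.standard_normal(len(J))

signals, innovations = [], []
for p in range(n_sensors):
    # innovation support I_p, drawn disjoint from J for this toy example
    I_p = rng.choice([i for i in range(N) if i not in J], size=2, replace=False)
    z_p = np.zeros(N)                 # innovation component, unique to node p
    z_p[I_p] = rng.standard_normal(2)
    signals.append(z_c + z_p)         # x_p = z_c + z_p, Equation (2)
    innovations.append(set(I_p.tolist()))

# the support of each x_p is S_p = J ∪ I_p, Equation (3)
for x_p, I_p in zip(signals, innovations):
    assert set(np.flatnonzero(x_p)) == set(J) | I_p
```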

Distributed Compact Sensing Matrix Pursuit Algorithm
The goal of this section is to introduce the proposed DCSMP algorithm. A block diagram of the DCSMP algorithm is shown in Figure 1, which comprises four stages: (1) rough estimation, (2) data fusion, (3) innovation support set estimation and (4) final estimation. The estimated common support set is generated at the first and second stage; the estimated innovation support set is obtained at the third stage; and finally, we can obtain the final estimates of the support set and the original sparse signal at the fourth stage.

Rough Estimation
The goal of this section is to calculate the candidate support set for each individual sensor. In order to tackle the high coherence problem due to the high resolution of the sensors, we construct a compact sensing matrix for the sensing matrix at each node, which is a compact version of the original sensing matrix and has low coherence. Readers can refer to [19] for details of the construction process of the compact sensing matrix. An OMP algorithm is then utilized to calculate a rough estimate of the true support set, based on which we can obtain the candidate support set F̂_p in the original sensing matrix. Finally, we prove that the candidate support set contains the true support set for each individual sensor. The detailed procedures of the candidate support set estimation (CSSE) algorithm are presented in Algorithm 1.
Input: Φ_p, y_p. Output: F̂_p.

The candidate support set estimation algorithm consists of four steps. The first step uses the "ConstructCompact" function, which constructs the compact sensing matrix Ψ_p based on the original sensing matrix Φ_p; the process is the same as that described in Section 2.3 in [19]. At the second step, the OMP algorithm is used to find an estimate of the true support set, which is represented as â_p = {â_p^1, · · · , â_p^k, · · · , â_p^K̂}, K̂ ≤ K, where â_p^k denotes the k-th element of â_p. At the third step, in the "MapToSubspace" function, each element in the estimated support set â_p corresponds to a condensed column of the compact sensing matrix. For example, â_p^k(β_p^j) indicates that the k-th element of â_p corresponds to the j-th condensed column β_p^j. This column is defined as a contributing column. We can then obtain an initial estimate of the correct subspace, Ξ̂_p^ini, spanned by the K̂ contributing columns {β_p^i, · · · , β_p^j, · · · , β_p^l}, as Ξ̂_p^ini = span(β_p^i, · · · , β_p^j, · · · , β_p^l). The final step uses the function "FindCandidateSupportSet", which finds the candidate support set in the original sensing matrix Φ_p. Each contributing column corresponds to a similar column group in Φ_p, which is named the contributing similar column group. We can then obtain a set Λ̂_p containing the indices of the K̂ contributing similar column groups. All of the columns in each contributing similar column group from Λ̂_p are listed out, and their indices form the candidate support set F̂_p.

Proposition 1. The true support set S_p is a subset of the candidate support set F̂_p, i.e., S_p ⊂ F̂_p.
Proof. According to Proposition 2 in [19], the vectors spanning the true subspace are contained in the K̂ contributing similar column groups. Thus, the true support set is contained in the candidate support set F̂_p, which is the union of the indices of the columns contained in the K̂ contributing similar column groups.
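The OMP step at the core of Algorithm 1 can be sketched as follows. This is a plain textbook OMP, not the exact implementation of [19]; the orthonormal test matrix below is the extreme low-coherence case that the compact sensing matrix is designed to approximate, which makes exact recovery guaranteed in the toy check:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Plain orthogonal matching pursuit: greedily pick the column of A
    most correlated with the residual, then re-fit by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

# toy check: with orthonormal columns, OMP recovers a 3-sparse support exactly
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))
x = np.zeros(40); x[[3, 11, 25]] = [1.0, -2.0, 1.5]
assert omp(Q, Q @ x, 3) == [3, 11, 25]
```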
At the end of the rough estimation stage, the p-th node sends its own candidate support set to its neighboring nodes, as well as receives the candidate support sets from its neighboring nodes. The candidate support sets from both the p-th node and its neighboring nodes are sent to the data fusion stage, where the estimated common support set is generated by using some fusion strategy.

Data Fusion
For the p-th node, its true support set S_p (including the common support set J and the innovation support set I_p) is contained in the candidate support set F̂_p, according to Proposition 1. Considering that the common support set is shared by the p-th node and its connected neighboring nodes, it can be obtained by fusing the candidate support sets from both the p-th node and its neighboring nodes in the network. For fusion, we use a democratic voting strategy [12].
The data fusion algorithm is presented in Algorithm 2. The p-th node has access to the candidate support sets {F̂_q}, q ∈ L_p^in, from its neighbors and the local estimate F̂_p. The estimated common support set Ĵ is formed (Step 5) such that each index in the estimated common support set is present in at least two of the candidate support sets {{F̂_q}, q ∈ L_p^in; F̂_p}. Having more votes for a certain index increases the probability of this index being correct.
In the above algorithm, "vote" denotes the voting procedure [12]. Since an index present in two nodes' candidate support sets will be treated as an element of the estimated common support set, the probability of the event that the estimated common support set contains the true common support set is very high. Thus, it is reasonable to assume that the true common support set is a subset of the estimated common support set.
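The democratic voting rule can be sketched in a few lines (an illustrative implementation; the threshold of two votes follows the description above, and the candidate sets are toy values):

```python
from collections import Counter

def fuse_common_support(candidate_sets, min_votes=2):
    """Democratic voting fusion: an index joins the estimated common
    support set if it appears in at least `min_votes` candidate sets."""
    votes = Counter()
    for F in candidate_sets:
        votes.update(set(F))            # one vote per node per index
    return {i for i, v in votes.items() if v >= min_votes}

# candidate sets from the p-th node and two incoming neighbors (toy values)
F_p  = {2, 5, 9, 14}
F_q1 = {2, 5, 7, 20}
F_q2 = {2, 5, 9, 31}
assert fuse_common_support([F_p, F_q1, F_q2]) == {2, 5, 9}
```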

Innovation Support Set Estimation
We consider a general condition in which the true common support set is a subset of the estimated common support set. The related assumptions are listed as follows.

Assumption 1. The true common support set J and the innovation support set I_p satisfy the following conditions: 1. The true common support set J is a subset of the estimated common support set Ĵ, i.e., J ⊆ Ĵ. 2. The innovation support set I_p is a proper subset of the complement of Ĵ, i.e., I_p ⊂ Ĵ^c.

The second condition holds considering that the cardinality of I_p is far less than that of Ĵ^c. The goal of the innovation support set estimation stage is to calculate I_p. We prove Lemma 1 before we proceed to the detailed procedures of the proposed algorithm.

Lemma 1.
Assuming that |Ĵ| ≤ 2K, define the projection matrix P_p as:

P_p = I − (Φ_p)_Ĵ ((Φ_p)_Ĵ)^† (4)

and the projected measurement vector ỹ_p as:

ỹ_p = P_p y_p. (5)

The projected measurement vector ỹ_p can be represented in a standard equation in compressed sensing as:

ỹ_p = A_p (x_p)_{Ĵ^c} + ẽ_p, (6)

where A_p = P_p (Φ_p)_{Ĵ^c} is the projected sensing matrix and ẽ_p = P_p e_p is the equivalent noise. Moreover, the innovation support set I_p can be calculated using an OMP algorithm, based on the projected measurement vector ỹ_p and projected sensing matrix A_p.
Proof. The proof consists of three parts. First, we prove that (Φ_p)_Ĵ has full column rank under the condition that the cardinality of the estimated common support set Ĵ is less than or equal to 2K, i.e., |Ĵ| ≤ 2K, where K is the sparsity level of x_p. According to Theorem 2.13 in [20], a unique s-sparse solution of the system y = Ax implies that every set of 2s columns of A is linearly independent, where s indicates the sparsity level of x. Theorem 2.13 applies to the deterministic sensing matrix in this work, since the system has a unique K-sparse solution. Thus, every set of 2K columns of Φ_p is linearly independent. Under the condition that |Ĵ| ≤ 2K, the submatrix (Φ_p)_Ĵ contains at most 2K columns. Since every set of 2K columns of Φ_p is linearly independent, (Φ_p)_Ĵ has full column rank.
Secondly, we prove that the projected measurement vector ỹ_p can be represented in a standard equation in compressed sensing:

ỹ_p = P_p y_p = P_p (Φ_p x_p + e_p) = P_p ((Φ_p)_Ĵ (x_p)_Ĵ + (Φ_p)_{Ĵ^c} (x_p)_{Ĵ^c} + e_p) = P_p (Φ_p)_{Ĵ^c} (x_p)_{Ĵ^c} + P_p e_p = A_p (x_p)_{Ĵ^c} + ẽ_p. (7)

The fourth equality of (7) holds considering that the contribution of Ĵ to y_p is nullified by projecting y_p into the perpendicular subspace using the projection matrix P_p.
Thus, we can obtain (6), a standard equation in compressed sensing. Finally, we prove that I_p is the support set of the sparse vector (x_p)_{Ĵ^c}. The support set of x_p can be represented as:

S_p = J ∪ I_p.

According to Assumption 1, J is a subset of Ĵ and, thus, is the set of indices at which (x_p)_Ĵ is nonzero. Similarly, I_p is a proper subset of Ĵ^c and is the set of indices at which (x_p)_{Ĵ^c} is nonzero. Thus, I_p is the support set of the sparse vector (x_p)_{Ĵ^c}.
In summary, (6) is a standard equation in compressed sensing, and we can calculate I_p using an OMP algorithm based on the projected measurement vector ỹ_p and projected sensing matrix A_p.

Algorithm 3: Innovation support set estimation (ISSE).
In the above algorithm (Algorithm 3), the first step is to construct the projection matrix P_p, while the second and third steps construct the projected measurement vector ỹ_p and the projected sensing matrix A_p. Finally, the innovation support set Î_p can be calculated using an OMP algorithm, based on ỹ_p and A_p.
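A compact sketch of Algorithm 3 follows (a minimal illustration, not the paper's implementation; the helper `omp` is the standard greedy routine, and the orthonormal test matrix makes the toy check exact):

```python
import numpy as np

def omp(A, y, k):
    """Plain OMP helper (same greedy scheme as in the rough-estimation stage)."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support

def estimate_innovation_support(Phi, y, J_hat, k):
    """Sketch of ISSE: project y onto the orthogonal complement of
    span(Phi_Jhat), then run OMP on the projected system."""
    Phi_J = Phi[:, sorted(J_hat)]
    P = np.eye(Phi.shape[0]) - Phi_J @ np.linalg.pinv(Phi_J)   # projector P_p
    Jc = [i for i in range(Phi.shape[1]) if i not in J_hat]    # complement of J_hat
    y_tilde = P @ y               # projected measurement vector
    A = P @ Phi[:, Jc]            # projected sensing matrix A_p
    return sorted(Jc[j] for j in omp(A, y_tilde, k))

# toy check with an orthonormal Phi: the common part is nullified exactly
rng = np.random.default_rng(2)
Phi, _ = np.linalg.qr(rng.standard_normal((40, 40)))
x = np.zeros(40); x[[3, 11, 25]] = [1.0, -2.0, 0.7]   # S_p = {3, 11} ∪ {25}
y = Phi @ x
assert estimate_innovation_support(Phi, y, {3, 11}, k=1) == [25]
```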

Final Estimation
The goal of this section is to estimate the support set S_p and the sparse vector x_p, given the estimated common support set Ĵ and the estimated innovation support set Î_p. First, a combinational search is performed in Ĵ to find the true common support set J. Each combination together with Î_p forms an estimate of the true support set, which is denoted as Ŝ_p^j, j = 1, 2, · · · , N_co, where N_co indicates the number of combinations. Based on each estimated support set, we can obtain an estimate of the sparse vector x_p (denoted as x̂_p^j) using the pseudo-inverse operation. Among the obtained estimated sparse vectors x̂_p^j, j = 1, 2, · · · , N_co, we find the one with the least residual, which is taken as the final estimate of the sparse vector x_p. The detailed procedures are as follows.
The combinational search algorithm consists of six steps. The first step uses the "ListCombinations" function, which lists the C_E^1, C_E^2, · · · , C_E^E combinations based on the indices in Ĵ, where E = |Ĵ|. Each combination is represented as J_j, j = 1, 2, · · · , N_co. At the second step, each combination together with Î_p forms an estimate of the true support set, i.e., Ŝ_p^j = J_j ∪ Î_p, j = 1, 2, · · · , N_co. At the third step, the proposed algorithm solves a least squares problem to approximate the nonzero entries indexed by Ŝ_p^j, resulting in an estimate of the sparse vector, x̂_p^j, j = 1, 2, · · · , N_co. The fourth step is to calculate the residual r_p^j (j = 1, · · · , N_co); the ℓ_2 norm of r_p^j is denoted ‖r_p^j‖_2. The fifth step uses the "MinimumResidual" function, which finds the residual with the least ℓ_2 norm among the residuals and denotes it as r_p^min. Concurrently, we can find its associated sparse signal and support set, denoted as x̂_p^min and Ŝ_p^min, respectively. Finally, we obtain the final estimates of the sparse signal and the true support set by setting x̂_p ← x̂_p^min and Ŝ_p ← Ŝ_p^min.
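The six steps above can be condensed into the following sketch (an illustrative implementation with toy data; the small tolerance when comparing residuals, which makes ties prefer the support found first, is an added assumption not stated in the paper):

```python
import numpy as np
from itertools import combinations

def combinational_search(Phi, y, J_hat, I_hat):
    """Sketch of the final-estimation stage: enumerate every non-empty
    combination of indices in J_hat, fit least squares on the support
    (combination ∪ I_hat), and keep the support with the least residual."""
    best_res, best_S, best_coef = np.inf, None, None
    for r in range(1, len(J_hat) + 1):
        for combo in combinations(sorted(J_hat), r):
            S = sorted(set(combo) | set(I_hat))         # S_hat^j = J_j ∪ I_hat
            coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
            res = np.linalg.norm(y - Phi[:, S] @ coef)
            if res < best_res - 1e-9:   # tolerance: ties keep the first support
                best_res, best_S, best_coef = res, S, coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[best_S] = best_coef
    return best_S, x_hat

# toy check: J_hat over-estimates the true common support J = {1, 7}
rng = np.random.default_rng(3)
Phi, _ = np.linalg.qr(rng.standard_normal((12, 12)))    # orthonormal toy Phi
x = np.zeros(12); x[[1, 7, 10]] = [2.0, -1.0, 0.5]      # I_p = {10}
S_hat, x_hat = combinational_search(Phi, Phi @ x, J_hat={1, 4, 7}, I_hat={10})
assert S_hat == [1, 7, 10]
assert np.allclose(x_hat, x)
```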

Proposition 2.
The proposed combinational search algorithm (Algorithm 4) can find the true support set S_p, provided that Î_p is correctly estimated.

Proof. In Algorithm 4, the C_E^1, C_E^2, · · · , C_E^E combinations are listed based on the indices in Ĵ, where E = |Ĵ|. Each combination is represented as J_j, j = 1, 2, · · · , N_co, where N_co indicates the number of combinations. Since J ⊆ Ĵ and every non-empty subset of Ĵ is enumerated, there exists some j* such that J_{j*} = J. Thus, S_p = J ∪ I_p coincides with Ŝ_p^{j*} = J_{j*} ∪ Î_p, provided that Î_p is correctly estimated. Therefore, we can find the true support set S_p via the combinational search algorithm.

DCSMP Algorithm
Using Algorithms 1-4, we now develop the DCSMP algorithm presented in Algorithm 5. The inputs to Algorithm 5 for the p-th node are the measurement signal y_p and the sensing matrix Φ_p. Furthermore, Algorithm 5 knows L_p^in and L_p^out. We assume that some underlying communication scheme provides the transmit and receive functionality.
In Algorithm 5, local candidate support set estimates are exchanged over the network (Steps 2 and 3). The data fusion algorithm merges the local and neighboring candidate support set estimates to produce the estimated common support set Ĵ (Step 4). At Step 5, the contribution of Ĵ to y_p is nullified by projecting y_p into a subspace perpendicular to the space spanned by the columns of Φ_p indexed by Ĵ. We can then calculate the estimated innovation support set Î_p using an OMP algorithm based on the projected measurement vector and projected sensing matrix. Finally, a combinational search is performed in Ĵ to find J, and thus, we can obtain the final estimates of the support set and the original sparse vector.
Discussion: We consider two additional settings in which information from all sensors can be fused. In the first setting, each sensor is capable of communicating with all other sensors in the network; in the second, the measurements of all sensors are sent to a fusion center, which generates an estimate of the original signal using information from all sensors.

It is straightforward to extend the proposed DCSMP algorithm to the above two settings. To deal with the first setting, we need only change the inputs of the data fusion algorithm (Algorithm 2). For instance, for the p-th node, change the original inputs of the data fusion algorithm, the candidate support sets from its neighbors, {F̂_q}, q ∈ L_p^in, to the candidate support sets from all other sensors in the network, {F̂_q}, q ≠ p. For the second setting, when the measurements from all of the sensors in the network are sent to the fusion center, we need only change the inputs of the data fusion algorithm (Algorithm 2) to the candidate support sets from all of the sensors in the network.

Complexity Analysis
The proposed DCSMP algorithm consists of two main parts: offline processing and online processing. The offline processing transforms the individual sensing matrix Φ_p into a compact sensing matrix Ψ_p using similarity analysis. The computational complexity of the offline processing is dominated by the computation of the similarity between any two columns of the individual sensing matrix, which is of the order of O(MN^2) [19], where M and N are the numbers of rows and columns of Φ_p, respectively. The online processing procedure consists of four parts: (1) rough estimation; (2) data fusion; (3) innovation support set estimation; and (4) final estimation. First, for the rough estimation process, an OMP algorithm is used to find a rough estimate of the true support set for each individual signal. The computational complexity is of the order of O(M D_p) for the p-th node, where D_p is the number of columns of the compact sensing matrix Ψ_p [19].
Secondly, for the data fusion process, we use a democratic voting strategy [12], which is very simple and has negligible complexity compared with the other three processes.
In the innovation support set estimation process, first, the measurement vector y_p is projected into the subspace that is perpendicular to the space spanned by the columns of Φ_p indexed by the estimated common support set Ĵ, as:

ỹ_p = P_p y_p = y_p − (Φ_p)_Ĵ ((Φ_p)_Ĵ)^† y_p.

The complexity of this step concentrates on the pseudo-inverse operation, i.e., computing ((Φ_p)_Ĵ)^† y_p using the least squares algorithm. Thus, the computational cost focuses on the least squares estimation and is of the order of O(|Ĵ| · M), according to [21]. Secondly, the estimated innovation support set Î_p is calculated using an OMP algorithm, and the complexity of this step is O(M · |Ĵ^c|). Thus, the complexity of the entire innovation support set estimation is O(|Ĵ| · M) + O(M · |Ĵ^c|).
In the final estimation process, the C_E^1, C_E^2, · · · , C_E^E combinations are listed out based on the indices in Ĵ, where E = |Ĵ|. Each combination is represented as J_j, j = 1, 2, · · · , N_co, where N_co indicates the number of combinations. Each combination together with Î_p forms an estimate of the true support set, i.e., Ŝ_p^j = J_j ∪ Î_p, j = 1, 2, · · · , N_co. Based on each estimated support set, the nonzero entries of the estimated sparse vector are calculated using the least squares algorithm, at a cost of the order of O(|Ŝ_p^j| · M) for the j-th estimate [21]. Furthermore, since max(|Ŝ_p^1|, |Ŝ_p^2|, · · · , |Ŝ_p^{N_co}|) ≤ K, we have:

O(|Ŝ_p^1| · M) + · · · + O(|Ŝ_p^{N_co}| · M) ≤ C_E^1 · O(KM) + · · · + C_E^E · O(KM).

Considering that C_E^1 · O(KM) + · · · + C_E^E · O(KM) = N_co · O(KM), the computational cost of the final estimation process can be approximated as being of the order of N_co · O(KM).
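Since all non-empty combinations of the E indices are enumerated, N_co = C_E^1 + · · · + C_E^E = 2^E − 1, so the final-estimation cost N_co · O(KM) grows exponentially with |Ĵ|. A quick numeric check with toy values (E, K and M below are illustrative, not from the paper):

```python
from math import comb

E = 6                                       # toy value for E = |J_hat|
N_co = sum(comb(E, r) for r in range(1, E + 1))

# C(E,1) + C(E,2) + ... + C(E,E) = 2^E - 1
assert N_co == 2**E - 1 == 63

# cost model from the text: roughly N_co * K * M operations
K, M = 5, 100
cost = N_co * K * M
assert cost == 31500
```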
In summary, the complexity analysis for online processing is listed in Table 1, and the computational complexity of the whole DCSMP algorithm is listed in Table 2.

Table 1. Complexity analysis for online processing.

Scalability Analysis
In a typical wireless sensor network (WSN) scenario, signals are sampled at source nodes and aggregated at sink nodes. The correlation between source nodes causes redundancy. Many methods have been proposed to reduce this redundancy, such as the structure fidelity data collection approach [11] and distributed compressed sensing [22]. This paper considers a decentralized sensor network where a number of densely-placed sensors acquire signals and communicate with neighbor nodes to reconstruct the original signals. All of the sensor nodes are equivalent. Each individual sensor (e.g., the p-th sensor node) is not aware of the full network topology; instead, it knows two sets of local neighbors: the incoming neighbor connections L_p^in and the outgoing neighbor connections L_p^out. Here, incoming and outgoing connections correspond to communication links on which a node can receive or send information, respectively. The sparse signal is sampled and reconstructed at each sensor node, by exploiting the correlation between the sensor nodes. In particular, at the p-th node, the DCSMP algorithm fuses the candidate support set estimates from both the p-th node and its neighboring nodes, to enhance the reconstruction performance.
In the conventional sensor network, as the size of the network expands, the transmission burden and the computational cost of the data fusion process increase dramatically at each node. In the DCSMP algorithm-based sensor network, however, the candidate support sets are transmitted over the network, rather than the measurements themselves as in the conventional network. This significantly reduces the transmission burden on the links. Moreover, the computational cost of the data fusion process grows linearly with the number of incoming neighbor connections and is negligible, since a very simple democratic voting strategy [12] is adopted in this work. Thus, the DCSMP algorithm provides desirable structural scalability.

Simulation Results and Analysis
In this section, we consider a distributed multistatic radar system, which consists of a transmitter and a number of receivers (Figure 2). The transmitter emits the transmitted signal; the receivers receive the echoes from the targets. It is assumed that different individual receivers observe the same surveillance region with different detection probabilities. A JSM-1 model is constructed based on the multistatic radar system, and sparse recovery is carried out in a decentralized manner across the various receivers.

Sparse Representation in State Space
We consider N_T targets moving within the surveillance region. The state vector of the d-th target (d = 1, · · · , N_T) at the k-th scan is defined as [px_k^d, vx_k^d, py_k^d, vy_k^d, pz_k^d, vz_k^d]^T, where px_k^d and vx_k^d denote, respectively, the position and velocity of the d-th target along the x axis of the Cartesian frame at scan k; py_k^d and vy_k^d those along the y axis; and pz_k^d and vz_k^d those along the z axis. In the multistatic radar system, a transmitter Tr is at a known position tr = [x_0, y_0, z_0]^T; a number of receivers R_i (i = 1, · · · , N_R) are placed at known locations r_i = [x_i, y_i, z_i]^T, i = 1, · · · , N_R, in a Cartesian coordinate system, where N_R denotes the number of receivers.
In practice, the number, locations and velocities of the targets are unknown during the tracking process. The state space at scan k is divided into N g grids (possible values), listed as g l k , l = 1, · · · , N g . An auxiliary parameter, namely grid reflection, is attached to each grid. If a grid is occupied by a target, its grid reflection parameter is set as the reflection coefficient of the target; otherwise, it is set as zero. All of the grid reflection parameters are mapped into a grid reflection vector ξ k , which is an indicator vector that contains the true reflectivity of targets at each grid location. Considering that the number of grids occupied by targets is much smaller than that of the total grids in state space, ξ k is a sparse vector.
Each grid in the state space represents the state vector of a potential target. The l-th grid g_k^l is transformed to a delay-Doppler set (τ_k^{i,l}, f_k^{i,l}), according to Equations (12) and (13), under the condition of receiver R_i, where P_k^l = [px_k^l, py_k^l, pz_k^l]^T and V_k^l = [vx_k^l, vy_k^l, vz_k^l]^T denote the position and velocity of the l-th grid at scan k, respectively; u_k^{tr,l} and u_k^{i,l} denote the unit vector from the transmitter to the l-th grid and the unit vector from the l-th grid to the i-th receiver, respectively.
Note: From (12) and (13), it can be seen that a grid in the state space, g_k^l, corresponds to different delay-Doppler sets (τ_k^{i,l}, f_k^{i,l}) under the conditions of the different receivers R_i, i = 1, · · · , N_R.
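Since Equations (12) and (13) are not reproduced in this excerpt, the following sketch uses one common bistatic delay/Doppler convention (the sign and normalization choices are assumptions), with the transmitter and receiver R_1 positions taken from the simulation setup of Section 5:

```python
import numpy as np

C = 3.0e8   # propagation speed (m/s)

def delay_doppler(P, V, tr, r_i, f_c):
    """Bistatic delay and Doppler for one grid point (a hedged sketch;
    the paper's exact (12)-(13) may differ in sign/normalization).
    u_tr: unit vector transmitter -> grid; u_i: unit vector grid -> receiver."""
    u_tr = (P - tr) / np.linalg.norm(P - tr)
    u_i = (r_i - P) / np.linalg.norm(r_i - P)
    tau = (np.linalg.norm(P - tr) + np.linalg.norm(r_i - P)) / C   # bistatic delay
    f = (f_c / C) * (V @ u_tr + V @ u_i)                           # bistatic Doppler
    return tau, f

tr = np.array([0.0, 0.0, 0.0])          # transmitter, as in Section 5
r1 = np.array([0.0, 8000.0, 0.0])       # receiver R_1, in metres
P = np.array([500.0, 4000.0, 500.0])    # toy grid position
V = np.array([20.0, 0.0, 0.0])          # toy grid velocity
tau, f = delay_doppler(P, V, tr, r1, f_c=10e9)
```

Note that for this (deliberately symmetric) toy geometry the two velocity projections cancel, giving zero bistatic Doppler, while the delay is simply the two-leg path length over the propagation speed.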

Compressed Sensing Model for an Individual Receiver
At the i-th receiver R_i, the received measurement signal can be represented via the grids in the target state space, as:

r_k^i(t) = Σ_{l=1}^{N_g} α_k^{i,l} s(t − τ_k^{i,l}) e^{j2π f_k^{i,l} t} + w_k^i(t), (14)

where s(t) denotes the complex envelope of the transmitted signal; α_k^{i,l} denotes the reflection coefficient corresponding to the l-th grid between the transmitter and the i-th receiver; τ_k^{i,l} is the delay originated by the l-th grid, and f_k^{i,l} is the Doppler shift frequency of the l-th grid, both measured at the i-th receiver; w_k^i(t) is the complex envelope of the overall disturbance at the i-th receiver.
The state vector corresponding to the l-th grid, g_k^l, contributes to the received signal if the grid is occupied by a target. We define ϕ_k^{i,l}(t) as the l-th grid's contribution to the received signal:

ϕ_k^{i,l}(t) = s(t − τ_k^{i,l}) e^{j2π f_k^{i,l} t}. (15)

At the i-th receiver, a sequence of discrete outputs of the received signal is sampled, which forms a measurement vector y_k^i, as:

y_k^i = [r_k^i(1), r_k^i(2), · · · , r_k^i(W)]^T, (16)

where r_k^i(n), n = 1, · · · , W, are the discrete output samples collected by the i-th receiver at scan k. The measurement vector y_k^i can be represented in a compressed sensing framework, as in (17), where Φ_k^i is the sensing matrix and ξ_k^i is the sparse grid reflection vector at the i-th receiver. We have:

y_k^i = Φ_k^i ξ_k^i + e_k^i, (17)

where ϕ_k^{i,l} is the l-th column of the sensing matrix, i.e., ϕ_k^{i,l} = [ϕ_k^{i,l}(1), ϕ_k^{i,l}(2), · · · , ϕ_k^{i,l}(W)]^T. It can be seen from (15) that ϕ_k^{i,l} is the l-th grid's contribution to the received signal and is deterministic under the condition of fixed grids. Since the division of the state space into grids is assumed fixed a priori in this work, the sensing matrix is deterministic in nature.
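A column of the deterministic sensing matrix in (17) can be sketched as W samples of a delayed, Doppler-shifted envelope. The sample rate and the rectangular envelope below are illustrative assumptions; the point is only the structure of a column and the fact that, once the grids are fixed, rebuilding the matrix reproduces it exactly:

```python
import numpy as np

def sensing_column(tau, f_dopp, W, fs, s):
    """One column of the deterministic sensing matrix: W samples of the
    delayed, Doppler-shifted transmitted envelope s(t)."""
    t = np.arange(W) / fs
    return s(t - tau) * np.exp(2j * np.pi * f_dopp * t)

# toy rectangular pulse envelope of 4 us duration (an assumed waveform)
s = lambda t: ((t >= 0) & (t < 4e-6)).astype(complex)

W, fs = 80, 10e6   # W = 80 samples as in the simulation setup; fs assumed
grid_params = [(0.0, 0.0), (1e-6, 5e3), (2e-6, -5e3)]   # (tau, f) per grid
Phi = np.column_stack([sensing_column(tau, f, W, fs, s) for tau, f in grid_params])
assert Phi.shape == (W, len(grid_params))

# deterministic under fixed grids: rebuilding gives identical columns
Phi2 = np.column_stack([sensing_column(tau, f, W, fs, s) for tau, f in grid_params])
assert np.array_equal(Phi, Phi2)
```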

General JSM-1 in a Multistatic Radar System
In the multistatic radar system, assume that the i-th receiver R_i has bi-directional communication links with a number of neighboring nodes (receivers), which are denoted as U_i^j, j = 1, · · · , NE_i, i = 1, · · · , N_R, where NE_i denotes the number of neighboring nodes of the i-th receiver. For each neighboring node U_i^j, we can obtain a standard equation in compressed sensing, as:

y_k^j = Φ_k^j ξ_k^j + e_k^j, (18)

where y_k^j, Φ_k^j, ξ_k^j and e_k^j denote the measurement vector, sensing matrix, sparse grid reflection vector and noise vector, respectively, for the j-th neighboring node of the i-th receiver.
It is assumed that the i-th receiver and its NE_i neighboring nodes observe the same surveillance area. An individual receiver cannot "see" all of the targets at once. This is reasonable in practice, considering that the receivers are located at different positions with different viewing angles, and each receiver has a detection probability less than one. Therefore, some targets are observed by all of the receivers, named common targets; the others are observed only by individual receivers, named innovation targets.
The common targets' states are the same for different observers (receivers) located at different positions, as long as the targets' states and the receivers' positions are defined in the same coordinate system. Thus, for different receivers, the corresponding sparse grid reflection vectors share the same locations for part of their nonzero reflection coefficients (those corresponding to the common targets), i.e., they have the same common support set. However, the reflection coefficients of a common target observed by different receivers are not the same, due to the different parameters of each individual receiver; thus, the common components of different sparse grid reflection vectors differ, which does not fit the standard JSM-1 (2) presented in Section 2.
A general JSM-1 is therefore adopted in this work, which relies only on Equation (3) and focuses on the common support set. Under this model, different sparse grid reflection vectors are assumed to share a common support set while having different reflection coefficients. Since the proposed DCSMP algorithm focuses on support set estimation instead of estimating the common component (the reflection coefficients), it can cope with the general JSM-1 efficiently.
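The general JSM-1 described above can be illustrated with a small sketch that generates signals for three receivers: all share the same common support but draw independent coefficients, and each has its own innovation support. The support locations and signal length are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_g = 200                               # signal length (toy value)
common = [12, 57, 140]                  # common support set, shared by all nodes
innov = {0: [3], 1: [99], 2: [170]}     # per-node innovation supports

signals = []
for p in range(3):                      # three receivers (toy example)
    xi = np.zeros(N_g)
    # Common support: same nonzero locations at every receiver...
    # ...but independently drawn coefficients, as in the general JSM-1.
    xi[common] = rng.uniform(0.5, 1.5, size=len(common))
    # Innovation support: nonzeros seen only by this receiver.
    xi[innov[p]] = rng.uniform(0.5, 1.5, size=len(innov[p]))
    signals.append(xi)

# Every receiver's support contains the common support, yet the
# common coefficients themselves differ across receivers.
supports = [set(np.flatnonzero(x)) for x in signals]
```

This is exactly the structure DCSMP exploits: only the shared support locations matter, not the shared coefficient values.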

Simulation Environment Setup
A multistatic radar system consisting of a transmitter and three receivers is considered in this simulation example. The transmitter is located at tr = [0, 0, 0]^T km; the receivers are located at r_1 = [0, 8, 0]^T km, r_2 = [8, 0, 0]^T km and r_3 = [8, 8, 0]^T km, respectively, in a 3D Cartesian coordinate system. The three receivers are connected with bi-directional communication links. The carrier frequency f_c of each receiver is 10 GHz. The number of discrete samples collected at each receiver is W = 80. The surveillance position space (spanned by px_k, py_k, pz_k) has a volume of 10^3 × 10^3 × 10^3 m^3 and is divided into 10 × 10 × 10 grid points; the surveillance velocity space (spanned by vx_k, vy_k, vz_k) has a volume of 60 × 60 × 60 (m/s)^3 and is divided into 2 × 2 × 2 grid points. Therefore, the total number of grids in the state space is N_g = 8 × 10^3.
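The grid count follows directly from the division of the two subspaces, since each state-space grid pairs one position grid point with one velocity grid point:

```python
# Position space: 10 x 10 x 10 grid points; velocity space: 2 x 2 x 2.
pos_grids = 10 * 10 * 10   # 1000 position grid points
vel_grids = 2 * 2 * 2      # 8 velocity grid points
N_g = pos_grids * vel_grids  # total number of grids in state space
```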

Identification of Multiple Targets' States
The proposed DCSMP algorithm is used to identify multiple targets (including common and innovation targets) in the state space. The achievable resolution of the sparse vector in the state space obtained by the DCSMP algorithm is evaluated and compared with that of the DIPP algorithm. For clarity of presentation, we focus on differences in position and assume that all of the targets have the same velocity.
Three cases are considered in the simulation example: (a) separately-distributed targets; (b) closely-spaced common targets; (c) closely-spaced innovation targets observed by receiver R1. The addressed scenarios are characterized by the signal-to-noise ratio (SNR), which is set to 20 dB. The simulation parameters for the three cases are listed in Tables 3-5, and the simulation results are shown in Figures 3-8. The estimated positions shown in each figure combine the results from the three receivers, including the common targets observed by all of the receivers and the innovation targets observed by individual receivers. This is reasonable since, in practice, at the final stage of estimation each receiver sends its estimated locations of the innovation targets to its neighboring nodes, so that all of the connected receivers share a common picture of the surveillance area. Figures 3 and 4 show the identification of multiple separately-distributed targets in the state space, using the DCSMP and DIPP algorithms, respectively. In Figure 3, the true target positions are denoted by stars ("*"), with text labels "C1" and "C2" for common targets and "I1", "I2", "I3" for innovation targets. The estimated positions are denoted by circles ("o"), with labels "C1-estimate" and "C2-estimate" for estimated common targets and "I1-estimate", "I2-estimate", "I3-estimate" for estimated innovation targets. It can be seen from Figure 3 that the multiple separately-distributed targets (both common and innovation) are accurately identified in the state space by the DCSMP algorithm; similar results appear in Figure 4 for the DIPP algorithm.

Closely-Spaced Common Targets
The main challenge of this scenario arises from the small separation between the two closely-spaced common targets C1 and C2. Figure 6 shows that the DIPP algorithm fails to distinguish C1 and C2. This is because the DIPP algorithm cannot efficiently cope with a sensing matrix of high coherence, which results from the high resolution of the state space. Moreover, since the estimated common support set (i.e., the estimated positions of the common targets) is used as side information to calculate the innovation support set in the DIPP algorithm, the wrongly-estimated common support set leads to failure in estimating the innovation support set at each receiver. Therefore, the positions of the innovation targets cannot be identified accurately (Figure 6).
In Figure 5, the common targets are accurately identified, which verifies that the proposed DCSMP algorithm is capable of dealing with a sensing matrix of high coherence. As a consequence, the innovation targets are accurately identified based on the correctly-estimated common support set.
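The decoupling behind this robustness is the projection step described in the introduction: the measurement vector and sensing matrix are projected onto the subspace perpendicular to the columns indexed by the estimated common support set, and OMP then searches for the innovation support on the projected quantities. The following is our own simplified sketch of that idea, not the paper's exact pseudocode:

```python
import numpy as np

def project_out(Phi, y, common_support):
    """Project y and Phi onto the orthogonal complement of the subspace
    spanned by the columns indexed by the estimated common support set."""
    A = Phi[:, common_support]
    P = A @ np.linalg.pinv(A)           # projector onto span(A)
    Q = np.eye(Phi.shape[0]) - P        # projector onto its orthogonal complement
    return Q @ Phi, Q @ y

def omp(Phi, y, n_iter):
    """Plain OMP; run on the projected quantities, it recovers the
    innovation support without interference from the common component."""
    support, r = [], y.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(Phi.T @ r)))    # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef           # update residual
    return sorted(support)
```

Because the common-support columns are annihilated by the projection, OMP cannot reselect them, and any coefficient error in the common component has no effect on the innovation search.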

Closely-Spaced Innovation Targets
The main challenge in this scenario arises from the small separation between the two closely-spaced innovation targets I11 and I12 observed by receiver R1. Figure 8 shows that the DIPP algorithm succeeds in estimating the positions of the two separately-distributed common targets. The estimated common support set is then input as side information to calculate the innovation support set at each of the three receivers. Figure 8 also shows that the first receiver R1 fails to distinguish the two closely-spaced innovation targets I11 and I12, while the other two receivers R2 and R3 succeed in identifying their corresponding innovation targets I2 and I3, respectively. In Figure 7, it can be seen that the proposed DCSMP algorithm succeeds in identifying both the common and innovation targets.

Random Testing Cases
A sequence of random cases is tested in which 20 targets (including common and innovation targets) are randomly distributed in the three-dimensional position space. Five hundred Monte Carlo trials are performed. Considering that no individual receiver can "see" all of the targets at a time, the numbers of common and innovation targets are set to 15 and 5, respectively. Thus, we have the following parameter setup for the simulation: signal dimensionality N = 8 × 10^3, measurement dimensionality M = 80, sparsity level K = 20, common support set cardinality |J| = 15, innovation support set cardinality |I_p| = 5, number of nodes (receivers) N_R = 3 and number of Monte Carlo trials N_MC = 500.
Two metrics are utilized to evaluate the performance of the proposed algorithm. The first metric is the distributed reconstruction error (DRE), defined as the relative reconstruction error averaged over all receivers and Monte Carlo trials, DRE = (1 / (N_MC N_R)) Σ_{i=1}^{N_MC} Σ_{p=1}^{N_R} ||x_p^i − x̂_p^i||_2 / ||x_p^i||_2, where x_p^i represents the true signal for the p-th receiver in the i-th Monte Carlo trial and x̂_p^i represents the corresponding estimated signal. Our objective is to achieve a low DRE over the whole decentralized network. We also adopt the average support-set cardinality error (ASCE) as a direct evaluation of the support-set recovery performance [12]. Note that the ASCE has the range [0, 1], and our objective is to achieve a low ASCE.
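The two metrics can be sketched as follows. The DRE here uses one plausible normalization (mean relative l2 error over all node/trial pairs) and the ASCE is rendered as one minus the average fraction of the true support recovered; both are our reading of the text, not verbatim definitions from [12].

```python
import numpy as np

def dre(true_signals, est_signals):
    """Distributed reconstruction error: average relative l2 error over
    all (node, trial) pairs. One plausible form of the DRE; the paper's
    exact normalization may differ."""
    errs = [np.linalg.norm(x - xh) / np.linalg.norm(x)
            for x, xh in zip(true_signals, est_signals)]
    return float(np.mean(errs))

def asce(true_supports, est_supports):
    """Average support-set cardinality error, in [0, 1]: one minus the
    average fraction of the true support recovered at each node/trial."""
    hits = [len(set(t) & set(e)) / len(t)
            for t, e in zip(true_supports, est_supports)]
    return 1.0 - float(np.mean(hits))
```

For example, recovering half of a true support of size two gives an ASCE of 0.5, and perfect recovery gives 0 for both metrics.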
We now provide the average performance results in terms of DRE and ASCE, which are shown in Table 6. The performance of the OMP and SP algorithms is included in the simulation as a benchmark characterizing a single-sensor (disconnected) scenario. From Table 6, it can be seen that the proposed DCSMP algorithm obtains the lowest DRE and ASCE, which verifies that it outperforms the three other algorithms (DIPP, SP and OMP) in reconstructing the sparse vector with high resolution in a decentralized network.

Simulation Results on a Large-Scale Sensor Network
This simulation validates the performance of the proposed DCSMP algorithm on a large-scale sensor network. The network topology is built according to the random geometric graph model presented by Penrose in [23], where a number of nodes are randomly distributed in an area and each node connects with the neighboring nodes located within a certain distance of it. Figure 9 shows a typical setup of such a network consisting of 105 nodes. The proposed DCSMP algorithm is compared with compressed sensing-based algorithms, i.e., the SP, OMP and DIPP algorithms, as well as non-compressed sensing-based algorithms, i.e., principal component analysis (PCA) [24] and the distributed wavelet compression (DWC) algorithm [25]. The signal model is constructed based on the multistatic radar system, and the sparse grid reflection vector is chosen as the original signal. For the conventional compressed sensing-based algorithms, i.e., the SP and OMP algorithms, the measurements are compressed by a down-sampling measurement matrix before being transmitted to the neighboring nodes. For the non-compressed sensing-based approaches, i.e., the PCA and DWC algorithms, data compression is achieved after aggregating the signals from neighboring nodes. For both kinds of algorithms, the measurements themselves are transmitted in the network. In contrast, in the DCSMP and DIPP algorithms, only the estimated candidate support sets are transmitted between nodes, which significantly reduces the transmission burden.
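A Penrose-style topology can be generated in a few lines: nodes are placed uniformly at random and edges connect every pair within the connection radius. The unit square, radius and node count below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def random_geometric_graph(n_nodes, radius, rng):
    """Random geometric graph: nodes uniform in the unit square,
    undirected edges between pairs within the connection radius."""
    pts = rng.random((n_nodes, 2))
    # Pairwise Euclidean distance matrix via broadcasting.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (d <= radius) & ~np.eye(n_nodes, dtype=bool)  # no self loops
    return pts, adj

pts, adj = random_geometric_graph(105, 0.2, np.random.default_rng(2))
neighbors_of_0 = np.flatnonzero(adj[0])  # nodes that exchange support sets with node 0
```

In the DCSMP setting, each node would exchange candidate support sets only along the edges of adj, rather than broadcasting raw measurements.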
A sequence of random cases is tested in which a number of targets (including common and innovation targets) are randomly distributed in the three-dimensional position space. The simulation parameters are the same as those in Section 5.6. Two metrics, DRE and ASCE, are utilized to evaluate the reconstruction performance of the different algorithms. Figure 10a shows the variation of DRE with the number of nodes for the different algorithms. It can be seen from Figure 10a that the proposed DCSMP algorithm achieves the lowest DRE. The non-compressed sensing-based algorithms (PCA and DWC) cannot reconstruct the original signal perfectly at a down-sampling rate; thus, they yield large DREs. Although the conventional compressed sensing algorithms (SP and OMP) and the DIPP algorithm can reconstruct the original signal accurately at a down-sampling rate, they cannot cope with a sensing matrix of high coherence, and thus achieve only moderate DREs compared with the proposed DCSMP algorithm. Moreover, it can be seen from Figure 10a that the DREs of the DCSMP and DIPP algorithms decrease slightly as the network size increases. This is because the information (the candidate support sets or the measurements) received by each node grows as the network expands, resulting in a more accurate estimated signal. Figure 10b shows the variation of ASCE with the number of nodes. The proposed DCSMP algorithm achieves the lowest ASCE, which verifies that it outperforms the three other algorithms (DIPP, SP and OMP) in dealing with a sensing matrix of high coherence.

Conclusions
A novel DCSMP algorithm is proposed to tackle JSM-1 in a decentralized scenario. The proposed algorithm adopts deterministic sensing matrices built directly on real acquisition systems. In the proposed algorithm, the process of estimating the common support set is decoupled from that of estimating the innovation support set; thus, an inaccurately estimated common support set has no impact on estimating the innovation support set. The simulation results show that the proposed algorithm can perform successful sparse recovery in a decentralized manner across the various receivers in a multistatic radar system.