Article

Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation

1 State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China
2 Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China
3 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
4 Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
5 Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
* Author to whom correspondence should be addressed.
Sensors 2016, 16(5), 686; https://doi.org/10.3390/s16050686
Submission received: 17 March 2016 / Revised: 28 April 2016 / Accepted: 29 April 2016 / Published: 23 May 2016
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)

Abstract

A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound on the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér–Rao bound of the CSJSR-DoA estimator, which quantifies the theoretical DoA estimation performance, is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

1. Introduction

Direction of arrival (DoA) estimation using acoustic sensor arrays has attracted significant interest due to its wide range of applications [1,2,3,4]. Traditionally, DoA estimation is realized using sensor arrays, such as passive towed-array sonar systems, where all sensors are wire-connected to a fusion center (FC) [5]. For wireless sensor array networks (WSAN) [6,7], arrays (locally wire- or wireless-connected clusters) of dispersed sensors are deployed over large sensor fields and communicate wirelessly with the FC. However, remote sensor arrays often suffer from restricted resources, such as local power supply, local computational capacity and wireless transmission bandwidth. Therefore, data samples at remote sensor arrays need to be compressed before being transmitted to the FC. On the other hand, executing sophisticated data compression algorithms on remote sensors is also restricted by the local power budget and computational capability.
To meet these resource requirements of the WSAN, the newly-emerged compressive sensing (CS) technology [8,9,10] has great potential because of its ability to reconstruct the raw signal from only a small number of random measurements. In this work, we propose a novel DoA estimation approach that accomplishes low-power, robust DoA estimation over a WSAN platform and reduces the volume of transmitted data without complicated local data compression operations. Incorporating a CS-based formulation, this approach is able to directly estimate the incident angles of acoustic sources from randomly-sampled acoustic signals. This is made possible by fully exploiting the joint spatial and spectral sparse structure of the acoustic signals acquired by the sensor array. Hence, we refer to this approach as the compressive sensing joint sparse representation direction of arrival (CSJSR-DoA) estimation method.
Existing CS-based source DoA estimation methods can be categorized into two different approaches: compressive bearing estimation (COBE) [11,12,13] and compressive sensing array DoA estimation (CSA-DoA) [14]. With COBE, the incident angles (DoAs) are modeled as a sparse angle vector, with non-zero entries indicating the presence of (a few well separated) acoustic sources. The acoustic signals received at each sensor can then be represented as the product of a redundant steering matrix and a sparse angle vector. The steering matrix, consisting of steering vectors of all possible incident angles, must be estimated using the received noisy acoustic signal at a reference node. Thus, the raw data at the reference node and randomly-projected measurements of the non-reference nodes need to be transmitted to the remote FC. This imposes a heavy communication cost and excessive energy consumption at remote sensor arrays. With CSJSR-DoA, individual sensor nodes only perform data acquisition and random subsampling, while the DoA estimation is performed at the FC. Therefore, no local computation is required at the remote sensor nodes, and the CSJSR-DoA approach is more suitable for a resource-restricted WSAN system.
With the CSA-DoA [15,16,17] method, analog sensor array data are randomly projected onto a lower dimensional subspace in the analog domain before being converted into digital samples using an analog-to-digital converter (ADC). As such, fewer ADCs are required. This projection is similar to that of the CS camera originally proposed in [18]. The steering vectors of different incident angles at a certain frequency are used to form the reconstruction dictionary. Further work extends this type of array processing method to broadband scenarios [19,20]. Since the data volume reduction is realized in the analog domain, special analog electronic circuits are required to implement the CSA-DoA approach. The proposed CSJSR-DoA approach, on the other hand, requires no special-purpose analog hardware. Related work [21] exploits the angle domain sparsity of sources and formulates the narrowband signal of an antenna array within the Bayesian compressive sensing (BCS) [22,23] framework. However, it is intended as an alternative DoA estimation approach, and no data compression is considered.
Another distinction between the proposed CSJSR-DoA approach and these existing CS-based DoA estimation approaches is that the CSJSR-DoA approach exploits both the spatial and the spectral domain structure of the acoustic sensor array signals. We argue that many practical broadband acoustic signals can be characterized by a few dominant frequency entries. For the purpose of DoA estimation, it is sufficient to focus on these few dominant frequency entries and to exploit their frequency sparsity to enhance performance. For example, Figure 1 shows the spectrograms of two types of broadband sources: the engine sound of a Porsche vehicle and a bird chirping [24]. It is clear that these broadband acoustic signals are dominated by a few frequency entries. The frequency domain sparse structure [25] illustrated in these figures may be exploited to realize efficient compressive sensing.
The main contributions are:
  • A joint sparse representation-based DoA estimation approach is proposed that exploits the joint spatial and spectral domain sparse structure of array signals and incorporates the multiple measurement vector (MMV) [26] approach to solve the DoA estimation problem;
  • A theoretical mutual coherence bound of a uniform linear sensor array is provided, which defines the minimum angular separation of the sources that is required for the CSJSR-DoA approach to yield a reliable solution with high probability;
  • The Cramér–Rao bound of the CSJSR-DoA estimator is derived to quantify the theoretical DoA estimation performance;
  • A prototype acoustic WSAN platform is developed to validate the effectiveness of the proposed CSJSR-DoA approach.
The remainder of this paper is organized as follows. In Section 2, the array signal model, the sparsity model of narrowband array processing and the CS theory are briefly reviewed. The CSJSR-DoA approach is derived in Section 3. In Section 4, the theoretical analysis of the DoA estimation performance, as well as the Cramér–Rao bound, are presented. In Section 5, the performance of the CSJSR-DoA approach is evaluated by simulations and field experiments using a prototype wireless sensor array platform. Finally, conclusions are drawn in Section 6.

2. Background

In this paper, the transposition and complex-conjugate transposition operations are denoted by the superscripts $(\cdot)^T$ and $(\cdot)^H$, respectively. Lowercase and uppercase bold symbols denote vectors and matrices, respectively. The major symbols used in this paper are summarized in Table 1.

2.1. Signal Model and Joint Spatial-Spectral Sparse Structure

A WSAN consists of one or more sensor arrays and one FC. Sensors within the same sensor array are arbitrarily deployed and connected to the FC via wireless channels. Sensor arrays are battery powered and have limited processing capabilities. The FC is connected to the infrastructure and has stronger processing capabilities. To conserve energy at remote sensor arrays, it is desirable to reduce the volume of data to be transmitted via wireless channels and to move as much of the processing tasks to the FC as possible.
For simplicity of notation, we assume in this work a single sensor array consisting of H acoustic sensor nodes that communicates wirelessly via radio channels with a single FC. Each sensor node is equipped with an omni-directional microphone. The received acoustic signals are sampled at a sampling frequency of $f_s$ Hz. N samples are grouped into a frame (a snapshot) from which a DoA estimate is made. We assume that state-of-the-art wireless synchronization protocols, such as Reference Broadcast Synchronization (RBS) [7], are applied so that the clocks at the sensor nodes are synchronized to a precision on the order of microseconds, which is sufficiently accurate for acoustic DoA estimation.
We assume there are Q targets. The q-th source emits an acoustic signal that contains no more than $R_q$ ($N > R_q$) dominant frequency entries. Denote by $\mathbf{s}_q$ the $N \times 1$ frame vector of the q-th source received at the reference node of the sensor array. Its N-point DFT (discrete Fourier transform) may be decomposed into two components:
$$\mathbf{s}_q = \boldsymbol{\Psi}\left(\mathbf{r}_q + \boldsymbol{\nu}_q\right)$$
where $\boldsymbol{\Psi}$ is the $N \times N$ inverse DFT matrix, $\mathbf{r}_q$ is an $N \times 1$, $R_q$-sparse vector containing the no more than $R_q$ DFT coefficients with the largest magnitudes, and $\boldsymbol{\nu}_q$ contains the remaining smaller DFT coefficients.
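As a concrete illustration of this decomposition, the following minimal Python sketch splits a frame into its dominant and residual DFT coefficients; the function name and the synthetic two-tone frame are our own assumptions, not taken from the paper.

import numpy as np

def split_dominant_spectrum(s_q, R_q):
    # Decompose a frame s_q into its R_q largest-magnitude DFT coefficients (r_q)
    # and the residual small coefficients (nu_q), mirroring s_q = Psi (r_q + nu_q).
    N = len(s_q)
    spectrum = np.fft.fft(s_q)                      # Psi^{-1} s_q (numpy scaling convention)
    r_q = np.zeros(N, dtype=complex)
    nu_q = spectrum.copy()
    idx = np.argsort(np.abs(spectrum))[-R_q:]       # indices of the R_q dominant entries
    r_q[idx] = spectrum[idx]
    nu_q[idx] = 0.0
    return r_q, nu_q

fs, N = 2000, 250                                   # 125 ms frame at 2 ksamples/s
t = np.arange(N) / fs
s = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)
r, nu = split_dominant_spectrum(s, R_q=4)
print(np.allclose(np.fft.ifft(r + nu), s))          # True: the two parts add back to the frame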
The h-th sensor node of the same sensor array will receive the same $s_q$ with a relative delay $\tau_{h,q}$. Thus, the data received at the h-th sensor are the time-delayed summation of the Q sources. That is:
$$x_h(t) = \sum_{q=1}^{Q} s_q\left(t - \tau_{h,q}\right) + \omega_h(t)$$
where $\omega_h(t)$ is the zero-mean white Gaussian noise at the h-th sensor.
Figure 2 shows the time delay model of a sensor array. Define $\theta_q$ as the incident angle of the q-th source, c as the propagation speed of the acoustic signal and $[u_h, v_h]$ as the position of the h-th sensor with respect to the array centroid ($\sum_{h=1}^{H}[u_h, v_h] = [0, 0]$). The time delay $\tau_{h,q}$ is given by:
$$\tau_{h,q} = d_h\cos\left(\theta_q - \rho_h\right)/c = d_h\left[\cos(\theta_q)\cos(\rho_h) + \sin(\theta_q)\sin(\rho_h)\right]/c = \frac{1}{c}\left[\cos(\theta_q)\,u_h + \sin(\theta_q)\,v_h\right]$$
where $\rho_h$ is the direction of the h-th sensor with respect to the array centroid and $d_h$ is its distance from the centroid. Applying the N-point DFT to $\mathbf{x}_h = [x_h(1), \ldots, x_h(N)]^T$, its frequency domain expression is given by:
$$\tilde{\mathbf{d}}_h = \boldsymbol{\Psi}^{-1}\mathbf{x}_h$$
where $\tilde{\mathbf{d}}_h = \left[d_h(0), d_h(1), \ldots, d_h(N-1)\right]^T$, and each component $d_h(k)$ is obtained by multiplying the k-th entry of each source spectrum $r_q(k) + \nu_q(k)$ by a phase shift $\exp(-j\omega_k\tau_{h,q})$. That is:
$$d_h(k) = \sum_{q=1}^{Q}\exp\left(-j\omega_k\tau_{h,q}\right)\left(r_q(k) + \nu_q(k)\right) + w_h(k)$$
where $w_h(k)$ is the frequency domain expression of $\omega_h(t)$. Since $\mathbf{x}_h$ contains all of the acoustic signals, the $N \times 1$ vector $\tilde{\mathbf{d}}_h$ will be R-sparse ($\max_q(R_q) \le R \le \sum_{q=1}^{Q} R_q$). This is because the dominant components of the Q sources may fall within overlapping or different spectral bands.
Consider the array data spectrum $\mathbf{d}(k) = \left[d_1(k), d_2(k), \ldots, d_H(k)\right]^T$ of the H sensors at the k-th frequency; one may write:
$$\mathbf{d}(k) = \sum_{q=1}^{Q}\mathbf{a}_q(k)\,r_q(k) + \mathbf{w}(k)$$
where $\mathbf{a}_q(k) = \left[\exp(-j\omega_k\tau_{1,q}), \ldots, \exp(-j\omega_k\tau_{H,q})\right]^T$ is the steering vector of the q-th source, $\mathbf{w}(k) = \left[w'_1(k), w'_2(k), \ldots, w'_H(k)\right]^T$, and $w'_h(k) = \sum_{q=1}^{Q}\exp(-j\omega_k\tau_{h,q})\,\nu_q(k) + w_h(k)$ collects the residual spectral components and the sensor noise.
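The steering vectors above are straightforward to generate from the sensor geometry; the sketch below (our own helper, with an illustrative uniform linear array) builds one steering vector from the time delay model above.

import numpy as np

def steering_vector(positions, theta_deg, f_k, c=343.0):
    # a(k) for incident angle theta and frequency f_k, using
    # tau_h = (u_h*cos(theta) + v_h*sin(theta)) / c from the time delay model.
    theta = np.deg2rad(theta_deg)
    tau = (positions[:, 0] * np.cos(theta) + positions[:, 1] * np.sin(theta)) / c
    return np.exp(-1j * 2 * np.pi * f_k * tau)      # shape (H,)

H, d = 6, 0.2                                       # uniform linear array along the x-axis
u = (np.arange(H) - (H - 1) / 2) * d                # centred so the centroid is at the origin
positions = np.column_stack([u, np.zeros(H)])
a = steering_vector(positions, theta_deg=30.0, f_k=800.0)
print(a.shape, np.allclose(np.abs(a), 1.0))         # (6,) True: unit-modulus entries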
The relative delay $\tau_{h,q}$ is the time needed for the acoustic wave of the q-th source to travel from the array centroid to the h-th node of the array along the incident angle $\theta_q$. Suppose that the entire range of incident angles is divided into $L$ ($\gg Q$) divisions, each division corresponding to a quantized incident angle $\theta_\ell$, $0 \le \ell \le L-1$. We can construct a redundant $H \times L$ steering matrix $\mathbf{A}_L(k) = \left[\mathbf{a}_1(k), \mathbf{a}_2(k), \ldots, \mathbf{a}_L(k)\right]$ that includes the L steering vectors. Here, $\mathbf{a}_\ell(k) = \left[\exp(-j\omega_k\tau_{1,\ell}), \exp(-j\omega_k\tau_{2,\ell}), \ldots, \exp(-j\omega_k\tau_{H,\ell})\right]^T$ is defined as the steering vector corresponding to the quantized incident angle $\theta_\ell$. Furthermore, define an $L \times 1$, Q-sparse source energy vector $\mathbf{r}(k)$ such that:
$$r_\ell(k) = \begin{cases} r_q(k), & \theta_\ell \le \theta_q < \theta_{\ell+1} \\ 0, & \text{otherwise} \end{cases}$$
If L is sufficiently large, it is safe to assume that different $\theta_q$ will fall into different angle bins. With this change of notation, Equation (6) can be rewritten as:
$$\mathbf{d}(k) = \mathbf{A}_L(k)\,\mathbf{r}(k) + \mathbf{w}(k)$$
where $\mathbf{r}(k) = \left[r_1(k), r_2(k), \ldots, r_L(k)\right]^T$. With Equations (4) and (6), one may construct an $H \times N$ matrix:
$$\mathbf{D} = \left[\mathbf{d}(0), \mathbf{d}(1), \ldots, \mathbf{d}(N-1)\right] = \left[\tilde{\mathbf{d}}_1, \tilde{\mathbf{d}}_2, \ldots, \tilde{\mathbf{d}}_H\right]^T$$
By performing row-vectorization of the matrix $\mathbf{D}$, we have an $HN \times 1$ vector:
$$\tilde{\mathbf{d}} = \left[\tilde{\mathbf{d}}_1^T, \tilde{\mathbf{d}}_2^T, \ldots, \tilde{\mathbf{d}}_H^T\right]^T$$
Similarly, by performing column-vectorization of the matrix $\mathbf{D}$, we have another $HN \times 1$ vector:
$$\bar{\mathbf{d}} = \left[\mathbf{d}^T(0), \mathbf{d}^T(1), \ldots, \mathbf{d}^T(N-1)\right]^T$$
Since the entries of both $\tilde{\mathbf{d}}$ and $\bar{\mathbf{d}}$ are associated with the same matrix $\mathbf{D}$, there exists an $HN \times HN$ permutation matrix $\mathbf{M}$ such that:
$$\tilde{\mathbf{d}} = \mathbf{M}\,\bar{\mathbf{d}}$$
Equation (12) provides a link between $\tilde{\mathbf{d}}$, which exhibits the R-sparsity in the frequency domain, and $\bar{\mathbf{d}}$, which exhibits the Q-sparsity in the spatial (incident angle) domain. In other words, the array samples, as summarized in the matrix $\mathbf{D}$, exhibit a joint frequency and spatial (angle) domain sparsity that will be exploited later in this paper.
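The permutation matrix $\mathbf{M}$ in Equation (12) is simply the reordering between row-wise and column-wise vectorization of $\mathbf{D}$; a small sketch (hypothetical helper, not from the paper) makes this explicit.

import numpy as np

def row_col_permutation(H, N):
    # M such that d_tilde = M @ d_bar, where d_tilde stacks the rows of an H x N matrix D
    # and d_bar stacks its columns.
    M = np.zeros((H * N, H * N))
    for h in range(H):
        for n in range(N):
            # entry D[h, n] sits at index h*N + n of d_tilde and at index n*H + h of d_bar
            M[h * N + n, n * H + h] = 1.0
    return M

H, N = 3, 4
D = np.arange(H * N).reshape(H, N)
d_tilde = D.reshape(-1)                 # row-vectorization
d_bar = D.T.reshape(-1)                 # column-vectorization
print(np.allclose(d_tilde, row_col_permutation(H, N) @ d_bar))   # True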

2.2. Compressive Sensing and Random Sub-Sampled Measurements

The presence of sparsity, as described in the models above, promises great potential for introducing CS to reduce the amount of transmitted sensor data.
The compressive sensing theory [27] states that a signal $\mathbf{x}$ may be reconstructed perfectly from a dimension-reduced measurement $\mathbf{y} \in \mathbb{R}^{M \times 1}$ obtained through a linear measurement system $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}$ when $\boldsymbol{\alpha} = \boldsymbol{\Psi}^{-1}\mathbf{x}$ is a K-sparse vector. This is often accomplished by solving a constrained optimization problem:
$$\min \|\boldsymbol{\alpha}\|_1 \quad \text{s.t.} \quad \|\mathbf{y} - \boldsymbol{\Phi}\boldsymbol{\Psi}\boldsymbol{\alpha}\| < \sigma$$
where $\|\boldsymbol{\alpha}\|_1$ is the $\ell_1$ norm of the vector $\boldsymbol{\alpha}$, $\boldsymbol{\Phi}$ is the measurement matrix, $\boldsymbol{\Psi}$ is the sparsifying matrix and σ is a noise threshold. To ensure stable reconstruction, the measurement matrix $\boldsymbol{\Phi}$ is usually chosen to be a random Gaussian matrix or a random binary matrix.
The measurement vector in traditional compressive sensing settings is often computed as a weighted linear combination of the observed high dimensional signals. For example, in COBE [13], the $N \times 1$ sensor data vector in a frame is multiplied by an $M \times N$ measurement matrix ($M < N$) digitally to yield a measurement vector $\mathbf{y}$, which is then transmitted via a wireless channel to the FC. This requires considerable computation and is likely to further deplete the energy reserve on the sensor nodes. In CSA-DoA [15], an analog filter needs to be inserted before the ADC to realize the measurement operation. This approach requires special-purpose hardware and can be quite expensive.
In this work, time domain sensor data will be purposely discarded and will not be transmitted to the FC. By doing so, data reduction may be achieved without incurring expensive compression computations on the sensor nodes. Specifically, we will use a random sub-sampling matrix [28] defined on a set of non-uniform, yet known random sampling intervals $\{\tau_m;\ 1 \le m \le M\}$. Let $\mathbf{u} = \{u(1), u(2), \ldots, u(M)\}$ be a sequence such that $u(1) = 1$ and:
$$u(m) = u(m-1) + r(\tau_m), \quad 2 \le m \le M$$
where $r(\cdot)$ is the rounding operation. Then, the $(m, n)$-th element of the proposed random subsampling matrix is given by:
$$\Phi(m, n) = \begin{cases} 1, & n = u(m) \le N \\ 0, & \text{otherwise} \end{cases}$$
The proposed random subsampling matrix is chosen because the product $\boldsymbol{\Phi}\boldsymbol{\Psi}$ is a partial Fourier matrix, which has been proven to satisfy the restricted isometry property (RIP) with high probability [29].
Furthermore, in this work, the lossy nature of wireless channels is exploited by modeling the packet loss during wireless transmission over noisy channels as a form of (involuntary) random data sub-sampling. It is well known that, due to different link types (amplify-and-forward and direct link) [30], power allocation [31] and time-varying channel conditions, wireless transmissions are likely to suffer from packet loss, and the identities of the lost data packets are known at the FC. Hence, this type of packet loss can be modeled as another form of random sub-sampling. In our recent work [32], the data packet loss is modeled with a random selection matrix.
In this work, both the random sub-sampling matrix and the random selection matrix will be incorporated into a combined measurement matrix that requires no computation on the sensor nodes while achieving the desired data reduction. Denote the random sampling matrix at the h-th sensor node as $\boldsymbol{\Phi}^h$ and the random selection matrix of the wireless channel between the h-th sensor node and the FC as $\boldsymbol{\Phi}_{loss}^h$. The combined equivalent measurement matrix may then be obtained as:
$$\boldsymbol{\Phi}_e^h = \boldsymbol{\Phi}_{loss}^h \cdot \boldsymbol{\Phi}^h$$
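A sketch of how these two matrices could be generated in software; the sample-index construction below simply draws M indices at random, a slight simplification of the interval-based rule in Equation (14), and the packet-loss pattern is simulated here, whereas in practice it is observed at the FC.

import numpy as np
rng = np.random.default_rng(0)

def random_subsampling_matrix(N, M):
    # M x N binary matrix with Phi(m, n) = 1 iff n = u(m), as in Equation (15)
    u = np.sort(rng.choice(N, size=M, replace=False))
    Phi = np.zeros((M, N))
    Phi[np.arange(M), u] = 1.0
    return Phi

def packet_loss_matrix(M, r_loss):
    # selection matrix Phi_loss that keeps only the rows of the surviving samples
    keep = np.sort(rng.choice(M, size=int(round(M * (1 - r_loss))), replace=False))
    return np.eye(M)[keep]

N, M = 250, 90
Phi_h = random_subsampling_matrix(N, M)
Phi_loss_h = packet_loss_matrix(M, r_loss=0.2)
Phi_e_h = Phi_loss_h @ Phi_h                         # combined measurement matrix defined above
print(Phi_e_h.shape)                                 # (72, 250)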

3. CSJSR-DoA Formulation

The proposed CSJSR-DoA algorithm consists of three major steps: (1) at the h-th wireless sensor node, the received and digitized acoustic signal $\mathbf{x}_h$ is sub-sampled using random compressive sampling as described in Equation (15) to yield a node measurement vector $\boldsymbol{\Phi}^h\mathbf{x}_h$; (2) these node measurement vectors are then transmitted through lossy wireless channels to the FC; (3) the FC receives an overall measurement vector $\mathbf{y} = \left[\mathbf{y}_1^T, \mathbf{y}_2^T, \ldots, \mathbf{y}_H^T\right]^T$, which is then processed by the CSJSR-DoA approach to directly estimate a sparse DoA indicator vector. The schematic diagram of the proposed CSJSR-DoA approach is summarized in Figure 3.
Note that at the FC, the received compressive measurement of the h-th sensor node may be expressed as:
$$\mathbf{y}_h = \boldsymbol{\Phi}_{loss}^h \cdot \boldsymbol{\Phi}^h\mathbf{x}_h = \boldsymbol{\Phi}_e^h\boldsymbol{\Psi}\tilde{\mathbf{d}}_h$$
Hence, the overall measurement vector $\mathbf{y}$ may be expressed as:
$$\begin{bmatrix}\mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_H\end{bmatrix} = \begin{bmatrix}\boldsymbol{\Phi}_e^1\boldsymbol{\Psi} & & & \\ & \boldsymbol{\Phi}_e^2\boldsymbol{\Psi} & & \\ & & \ddots & \\ & & & \boldsymbol{\Phi}_e^H\boldsymbol{\Psi}\end{bmatrix}\begin{bmatrix}\tilde{\mathbf{d}}_1 \\ \tilde{\mathbf{d}}_2 \\ \vdots \\ \tilde{\mathbf{d}}_H\end{bmatrix}$$
or, in matrix form,
$$\mathbf{y} = \boldsymbol{\Theta}\tilde{\mathbf{d}}$$
where $\boldsymbol{\Theta} = \mathrm{diag}\left(\boldsymbol{\Phi}_e^1\boldsymbol{\Psi}, \boldsymbol{\Phi}_e^2\boldsymbol{\Psi}, \ldots, \boldsymbol{\Phi}_e^H\boldsymbol{\Psi}\right)$ is the joint sparse representation matrix. As discussed earlier, $\{\tilde{\mathbf{d}}_h,\ 1 \le h \le H\}$ share a common R-sparse structure, since the signal measured at one sensor node is a time-delayed version of those at the other sensors.
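At the FC, $\boldsymbol{\Theta}$ is just a block-diagonal stack of the per-sensor matrices $\boldsymbol{\Phi}_e^h\boldsymbol{\Psi}$; the sketch below (with illustrative sizes and randomly generated selection matrices) shows one way to assemble it.

import numpy as np
from scipy.linalg import block_diag
rng = np.random.default_rng(0)

def build_theta(Phi_e_list, N):
    # Theta = diag(Phi_e^1 Psi, ..., Phi_e^H Psi), with Psi the N x N inverse DFT matrix
    Psi = np.conj(np.fft.fft(np.eye(N))) / N        # inverse DFT matrix: Psi @ fft(x) = x
    return block_diag(*[Phi_e @ Psi for Phi_e in Phi_e_list])

H, N, M = 3, 8, 4
Phi_e_list = [np.eye(N)[np.sort(rng.choice(N, size=M, replace=False))] for _ in range(H)]
Theta = build_theta(Phi_e_list, N)
print(Theta.shape)                                  # (12, 24): (H*M) x (H*N)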
Next, from Equations (8) and (11), one may write:
$$\bar{\mathbf{d}} = \mathbf{A}_L\mathbf{r}_L + \mathbf{w}$$
where $\mathbf{r}_L = \left[\mathbf{r}_L^T(0), \mathbf{r}_L^T(1), \ldots, \mathbf{r}_L^T(N-1)\right]^T$, $\mathbf{w} = \left[\mathbf{w}^T(0), \mathbf{w}^T(1), \ldots, \mathbf{w}^T(N-1)\right]^T$ and $\mathbf{A}_L$ is the $HN \times NL$ joint steering matrix:
$$\mathbf{A}_L = \begin{bmatrix}\mathbf{A}_L(0) & & & \\ & \mathbf{A}_L(1) & & \\ & & \ddots & \\ & & & \mathbf{A}_L(N-1)\end{bmatrix}$$
Substituting Equations (12) and (20) into Equation (19) leads to:
$$\mathbf{y} = \boldsymbol{\Theta}\tilde{\mathbf{d}} = \boldsymbol{\Theta}\mathbf{M}\bar{\mathbf{d}} = \boldsymbol{\Theta}\mathbf{M}\left(\mathbf{A}_L\mathbf{r}_L + \mathbf{w}\right) = \boldsymbol{\Upsilon}\mathbf{r}_L + \mathbf{w}_c$$
where $\mathbf{w}_c = \boldsymbol{\Theta}\mathbf{M}\mathbf{w}$ is the subsampled Gaussian white noise received at the FC. Here, $\boldsymbol{\Upsilon} = \boldsymbol{\Theta}\mathbf{M}\mathbf{A}_L$ is the joint sparse representation matrix that combines the joint sparse matrices $\boldsymbol{\Theta}$ and $\mathbf{A}_L$. The details of this joint sparse representation matrix are:
$$\boldsymbol{\Upsilon} = \left[\boldsymbol{\Upsilon}[1], \boldsymbol{\Upsilon}[2], \ldots, \boldsymbol{\Upsilon}[N]\right], \qquad \boldsymbol{\Upsilon}[n] = \begin{bmatrix}\boldsymbol{\phi}_{e,n}^1\,\mathbf{a}_{L,1}^T(n-1) \\ \boldsymbol{\phi}_{e,n}^2\,\mathbf{a}_{L,2}^T(n-1) \\ \vdots \\ \boldsymbol{\phi}_{e,n}^H\,\mathbf{a}_{L,H}^T(n-1)\end{bmatrix}$$
where $\boldsymbol{\phi}_{e,n}^h$ is the n-th column of $\boldsymbol{\Phi}_e^h\boldsymbol{\Psi}$ and $\mathbf{a}_{L,h}(n)$ is the h-th row of $\mathbf{A}_L(n)$. The significance of Equation (22) is that one may reconstruct the joint sparse indicating vector $\mathbf{r}_L$ from the joint measurement vector $\mathbf{y}$ directly, without reconstructing the raw data $\mathbf{x}_h$, $1 \le h \le H$, at the individual sensor nodes. $\mathbf{r}_L$ is a joint sparse vector with no more than R non-zero blocks, and there are at most Q non-zero entries in each of the R non-zero blocks.
With these relations, we may now formulate the joint sparse representation-based DoA estimation problem (group sparse reconstruction [33]) in a multiple measurement vector (MMV) formulation [26,34,35]:
$$\min_{\mathbf{r}_L} \sum_{\ell=1}^{L}\left|s(\ell)\right| \quad \text{subject to} \quad \left\|\mathbf{y} - \boldsymbol{\Upsilon}\mathbf{r}_L\right\|_2 \le \gamma$$
where $s(\ell) = \sqrt{\sum_{n=0}^{N-1}\left|r_\ell(n)\right|^2}$ and γ is a noise threshold. To solve this optimization problem, a second-order cone programming (SOCP) [36] approach has been proposed. The joint sparse representation of the wideband array signal in Equation (23) helps to improve the DoA estimation performance for wideband signals, as long as the different frequency components of a signal share the same DoA [37]. Note that the group sparsity-based DoA estimation for wideband signals is advantageous compared to traditional wideband DoA estimation methods, because it achieves near-coherent DoA estimation performance without requiring coherent signal projection through, e.g., the well-known focusing technique [38].
However, the computational cost is rather expensive. In [39], an MMV-Prox method has been proposed, and we adopt this method for the underlying problem. In particular, we solve Equation (23) by a two-step process (a code sketch is given after Algorithm 1):
  • Estimate the frequency domain support T of the array signals by reconstructing the spectral sparse indicative vector $\tilde{\mathbf{d}}$ from the joint measurements $\mathbf{y}$, and prune the joint reconstruction matrix $\boldsymbol{\Upsilon}$ by selecting the blocks $\boldsymbol{\Upsilon}[n]$ associated with non-zero frequency bins;
  • Reconstruct the DoA indicative vector $\mathbf{s}$ directly from the joint measurements $\mathbf{y}$ using the pruned joint reconstruction matrix.
In summary, the first step is to estimate the non-zero blocks $\mathbf{r}_L(k)$ and to simplify the joint sparse matrix by keeping the $\boldsymbol{\Upsilon}[n]$ with non-zero $\mathbf{r}_L(n)$. The second step is to reconstruct the DoA using the pruned joint sparse representation matrix $\tilde{\boldsymbol{\Upsilon}}$. These steps are summarized in the listing of Algorithm 1.
The computational complexity of solving Equation (23) in the SOCP framework using an interior point implementation is $O(L^3N^3)$ [39]. If we decouple the original problem into two sub-problems (Steps 2, 3 and 4 in the algorithm below), the computational complexity can be reduced to $O(L^3R^3 + N^3H^3)$.
Algorithm 1 Decoupled joint sparse reconstruction.
Input: received joint random measurement vector $\mathbf{y}$; equivalent random measurement matrices $\boldsymbol{\Phi}_e^h$.
Output: DoA indicating vector $\mathbf{s}$.
1. Estimate the noise level by random sampling in a source-free scenario, $\gamma = \|\mathbf{y}\|_2^2$;
2. Estimate the common support $T = \mathrm{supp}(\tilde{\mathbf{d}})$ by solving:
  $\min_{\tilde{\mathbf{d}}} \sum_{n=0}^{N-1}\sqrt{\sum_{h=1}^{H}\left|d_h(n)\right|^2}$ subject to $\left\|\mathbf{y} - \boldsymbol{\Theta}\tilde{\mathbf{d}}\right\|_2^2 \le \gamma$;
3. Construct the pruned joint reconstruction matrix $\tilde{\boldsymbol{\Upsilon}} = \left[\boldsymbol{\Upsilon}[n_1], \boldsymbol{\Upsilon}[n_2], \ldots, \boldsymbol{\Upsilon}[n_{|T|}]\right]$, $n_i \in T$;
4. Solve the pruned reconstruction problem:
  $\min_{\bar{\mathbf{r}}_L} \|\mathbf{s}\|_1$ subject to $\left\|\mathbf{y} - \tilde{\boldsymbol{\Upsilon}}\bar{\mathbf{r}}\right\|_2^2 \le \gamma$, where $s(\ell) = \sqrt{\sum_{n \in T}\left|r_\ell(n)\right|^2}$ and $\bar{\mathbf{r}} = \left[\mathbf{r}^T(n_1), \ldots, \mathbf{r}^T(n_{|T|})\right]^T$;
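The listing below is a compact Python sketch of Algorithm 1, using the cvxpy modelling package as a stand-in for the SOCP solver (SeDuMi) used in the paper; the support-pruning threshold and all variable names are our own illustrative choices.

import numpy as np
import cvxpy as cp

def decoupled_jsr_doa(y, Theta, Upsilon_blocks, gamma):
    # Sketch of Algorithm 1: (i) estimate the common frequency support T,
    # (ii) recover the group-sparse DoA indicator from the pruned dictionary.
    # y: joint measurement vector; Theta: (len(y), H*N); Upsilon_blocks: list of N
    # matrices Upsilon[n], each (len(y), L); gamma: squared-norm noise threshold.
    N = len(Upsilon_blocks)
    L = Upsilon_blocks[0].shape[1]
    H = Theta.shape[1] // N

    # Step 2: joint spectral reconstruction to find T = supp(d_tilde)
    D = cp.Variable((H, N), complex=True)                       # row h holds d_tilde_h
    d_tilde = cp.hstack([D[h, :] for h in range(H)])
    obj = cp.sum(cp.hstack([cp.norm(D[:, n], 2) for n in range(N)]))
    cp.Problem(cp.Minimize(obj),
               [cp.sum_squares(y - Theta @ d_tilde) <= gamma]).solve()
    energy = np.linalg.norm(D.value, axis=0)
    T = np.nonzero(energy > 1e-3 * energy.max())[0]             # heuristic support pruning

    # Steps 3-4: prune Upsilon and solve the reduced group-sparse problem
    Ups_T = np.hstack([Upsilon_blocks[n] for n in T])
    R = cp.Variable((len(T), L), complex=True)                  # row i holds r(n_i)^T
    r_bar = cp.hstack([R[i, :] for i in range(len(T))])
    s = cp.hstack([cp.norm(R[:, ell], 2) for ell in range(L)])  # per-angle group norms
    cp.Problem(cp.Minimize(cp.sum(s)),
               [cp.sum_squares(y - Ups_T @ r_bar) <= gamma]).solve()
    return np.linalg.norm(R.value, axis=0)                      # peaks indicate the DoA bins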

4. Performance Analysis

In this section, the performance of the proposed CSJSR-DoA approach is investigated. First, the relevant coherence, its upper bound and the angular separation problem are studied. Second, the Cramér–Rao bound of the proposed CSJSR-DoA approach is derived.

4.1. Sparse Reconstruction Analysis and Angle Separation

The existence of a solution to Equation (23) depends on the properties of the reconstruction dictionary $\boldsymbol{\Upsilon}$. Usually, the mutual coherence and the restricted isometry property (RIP) of a sparse representation matrix are used as criteria indicating the reconstruction performance. Since verifying the RIP requires combinatorial computational complexity [40], it is preferable to use the coherence, which is easily computable and provides more concrete recovery guarantees. In this paper, we therefore analyze the coherence of the CSJSR-DoA representation matrix.
In CS theory, the coherence of a matrix is the worst-case linear dependence of any pair of its column vectors. Similarly, for the block sparse signal case, in which a signal is composed of several blocks and only a small number of these blocks are non-zero, the block-coherence [41] has been proposed.
Recall from Equation (21):
$$\boldsymbol{\Upsilon} = \boldsymbol{\Theta}\mathbf{M}\mathbf{A}_L$$
where $\boldsymbol{\Theta}$ is a block diagonal joint sparse sampling matrix, $\mathbf{M}$ is a full rank, unitary permutation matrix and $\mathbf{A}_L$ is a block diagonal steering matrix. To study the coherence of $\boldsymbol{\Upsilon}$, we first study its block-coherence.
Theorem 1. 
Assume that H sensor nodes form a sensor array, and the data received at the fusion center are $\mathbf{y}_h = \boldsymbol{\Phi}_e^h\mathbf{x}_h$ ($\boldsymbol{\Phi}_e^h \in \mathbb{R}^{M \times N}$), $h = 1, 2, \ldots, H$. Then, the block-coherence of the joint sparse representation matrix $\boldsymbol{\Upsilon}$ satisfies:
$$\mu_B(\boldsymbol{\Upsilon}) \le \sqrt{\frac{N-M}{(N-1)M}}$$
Proof. 
See Appendix A. ☐
A small block-coherence $\mu_B$ guarantees stable reconstruction of the block sparse signal $\mathbf{r}_L$ with high probability. However, it is the entries within each block $\mathbf{r}_L(n)$ that indicate the DoAs of the sources. Thus, the coherence of each sub-matrix $\boldsymbol{\Upsilon}[n]$ should also be considered, and in this paper we derive it. This result provides a lower bound on the source separation (in terms of DoA angle) that ensures reliable DoA estimation. However, the coherence of $\boldsymbol{\Upsilon}[n]$ depends on the array geometry, and a general analytical bound is difficult to obtain.
In this paper, we consider the typical uniform linear array geometry and obtain an interesting coherence bound. The derived result is similar to the array pattern analysis [42] and is valid for both narrowband and broadband acoustic sources. For broadband sources, the highest dominant frequency may be used.
Theorem 2. 
Assume that H sensor nodes form a uniformly-spaced linear array with $d$ ($\le \lambda/2$) being the spacing between adjacent sensors, where λ is the wavelength of the acoustic signal. If the difference between the incident angles of any pair of sources satisfies:
$$\Delta\theta \ge \frac{c}{Hdf}$$
then the coherence of $\boldsymbol{\Upsilon}[n]$ is bounded by:
$$\mu\left(\boldsymbol{\Upsilon}[n]\right) \le \frac{1}{H\sin(\pi/H)}\sqrt{\frac{N-M}{(N-1)M}}$$
Proof. 
See Appendix B. ☐
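For the array used later in the experiments (H = 6 sensors, 0.2 m spacing, highest dominant frequency 800 Hz), the two quantities in Theorem 2 are easy to evaluate numerically; a small check follows, in which the values of N and M are illustrative.

import numpy as np

def min_angle_separation_deg(H, d, f, c=343.0):
    # minimum DoA separation c/(H*d*f) of Theorem 2, converted to degrees
    return np.rad2deg(c / (H * d * f))

def coherence_bound(H, N, M):
    # right-hand side of Theorem 2: sqrt((N-M)/((N-1)*M)) / (H*sin(pi/H))
    return np.sqrt((N - M) / ((N - 1) * M)) / (H * np.sin(np.pi / H))

print(round(min_angle_separation_deg(H=6, d=0.2, f=800.0), 1))   # 20.5 degrees
print(round(coherence_bound(H=6, N=250, M=90), 3))               # about 0.028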

4.2. Cramér–Rao Bound of CSJSR-DoA

In this subsection, we derive the Cramér–Rao bound (CRB) of the proposed CSJSR-DoA method and study the impact on the DoA estimation accuracy of factors such as the number of measurements, the number of sensors and the noise level $\sigma^2$ defined in Equation (13).
Recall that the CRB is defined as the inverse of the Fisher information matrix (FIM) [43],
$$\mathrm{CRB}(\boldsymbol{\Lambda}) = \mathbf{F}^{-1}(\boldsymbol{\Lambda})$$
where $\boldsymbol{\Lambda}$ is the unknown parameter set. In this paper, we only derive the CRB for the single source case. Thus, the unknown parameter set is $\boldsymbol{\Lambda} = [\theta, \mathbf{r}^T, \mathbf{f}^T]^T$, where $\mathbf{f} = [f_1, f_2, \ldots, f_R]^T$ and $\mathbf{r} = [r_0(k_1), r_0(k_2), \ldots, r_0(k_R)]^T$ are the sparse frequency bins and their corresponding source spectrum, respectively.
In the case of Gaussian white noise, the FIM is:
$$\mathbf{F}(\boldsymbol{\Lambda}) = \frac{2}{\sigma^2}\,\mathrm{Re}\left[\mathbf{G}^H\mathbf{G}\right]$$
where $\mathbf{G} = \partial\mathbf{y}/\partial\boldsymbol{\Lambda}^T$. For simplicity, we assume that all H sensors have the same $\boldsymbol{\Phi}_e$ ($\boldsymbol{\Phi}_e^1 = \cdots = \boldsymbol{\Phi}_e^H = \boldsymbol{\Phi}_e$). Recalling Equations (4), (6) and (17), the measurement vector of the CSJSR-DoA approach in the single source case is:
$$\mathbf{y}_{\boldsymbol{\Lambda}} = \sum_{i=1}^{R} r_0(k_i)\,\mathbf{a}_0(k_i)\otimes\boldsymbol{\phi}_{e,k_i} + \mathbf{w}_c$$
where $\boldsymbol{\phi}_{e,k_i}$ is the $k_i$-th column of $\boldsymbol{\Phi}_e\boldsymbol{\Psi}$ and $\mathbf{a}_0(k_i)$ is the steering vector of the source at frequency bin $k_i$. After some calculations, the CRB of the CSJSR-DoA approach is given by:
$$\mathrm{CRB}(\theta) = \frac{\sigma^2}{2M\alpha\left\|\mathbf{e}\right\|_2^2}$$
where $\mathbf{e} = \left[f_1 r_0(k_1), f_2 r_0(k_2), \ldots, f_R r_0(k_R)\right]^T$ and $\alpha = \frac{4\pi^2}{c^2}\sum_{h=0}^{H-1}\left[u_h\cos(\theta) - v_h\sin(\theta)\right]^2$.
Proof. 
See Appendix C. ☐
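The closed-form CRB above is straightforward to evaluate; the sketch below computes it for a uniform linear array, with the amplitudes, noise level and measurement count chosen purely for illustration.

import numpy as np

def crb_theta(sigma2, M, positions, theta_deg, freqs, r0, c=343.0):
    # CRB(theta) = sigma^2 / (2 * M * alpha * ||e||_2^2) for a single source
    theta = np.deg2rad(theta_deg)
    u, v = positions[:, 0], positions[:, 1]
    alpha = (4 * np.pi ** 2 / c ** 2) * np.sum((u * np.cos(theta) - v * np.sin(theta)) ** 2)
    e = np.asarray(freqs, dtype=float) * np.abs(np.asarray(r0))      # e = [f_i * r_0(k_i)]
    return sigma2 / (2 * M * alpha * np.sum(e ** 2))

H, d = 6, 0.2
u = (np.arange(H) - (H - 1) / 2) * d
positions = np.column_stack([u, np.zeros(H)])
crb = crb_theta(sigma2=1.0, M=90, positions=positions, theta_deg=30.0,
                freqs=[300, 500, 600, 800], r0=[1.0, 1.0, 1.0, 1.0])
print(np.sqrt(crb))          # standard-deviation bound on the DoA estimate, in radians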

5. Performance Evaluation

In this section, extensive simulations are carried out to: (1) study the performance of the CSJSR-DoA approach; and (2) compare the performance of the CSJSR-DoA approach against COBE and CSA-DoA, using the L1-SVD [26] algorithm as the baseline. In addition, we also developed a hardware prototype platform and collected data from outdoor experiments to validate the practical applicability of the CSJSR-DoA approach.

5.1. Simulation Settings

We synthesized acoustic signals based on Equation (1). The synthesized signals contain four dominant frequency entries at 300 Hz, 500 Hz, 600 Hz and 800 Hz. These correspond to wavelengths of 1.14 m, 0.7 m, 0.57 m and 0.43 m, respectively, assuming a sound speed of 343 m/s. The received acoustic signal also contains additive zero-mean Gaussian white noise. The noise variances are set so that the resulting SNR ranges from -10 dB to 10 dB in 5-dB increments. Such an acoustic signal resembles the dominant-entry distribution of some practical acoustic sources, such as truck engine sounds. The sampling rate is 2 ksamples/s, and a frame length of 125 ms is selected. The sampling rate is chosen to satisfy the Nyquist criterion and thus avoid frequency aliasing.
We assume a single uniform linear array consisting of six (H = 6) acoustic sensor nodes deployed in a sensor field. The spacing between adjacent sensors is 0.2 m, which is smaller than half of the wavelength (0.43 m) of the 800-Hz entry. All acoustic sources are located on the broad side of this linear array, so that the incident angles (directions of arrival) are constrained within a -90° to 90° range. We divide this angle range into L = 360 partitions, so that each partition equals 0.5°.
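A sketch of how such a synthetic frame and the corresponding noisy array data can be generated; the unit amplitudes, the single 30° source and the use of a circular frequency-domain delay are assumptions made here for illustration.

import numpy as np
rng = np.random.default_rng(1)

fs, c = 2000, 343.0
N = int(fs * 0.125)                               # 250 samples per 125 ms frame
t = np.arange(N) / fs
source = sum(np.sin(2 * np.pi * f * t) for f in (300, 500, 600, 800))

H, d, theta = 6, 0.2, np.deg2rad(30.0)            # ULA and a single source at 30 degrees
u = (np.arange(H) - (H - 1) / 2) * d
delays = u * np.sin(theta) / c                    # per-sensor delays (broadside convention)

def awgn(signal, snr_db):
    # white Gaussian noise scaled to the requested SNR
    p = np.mean(signal ** 2)
    return rng.normal(scale=np.sqrt(p / 10 ** (snr_db / 10)), size=signal.shape)

S = np.fft.rfft(source)
freqs = np.fft.rfftfreq(N, 1 / fs)
X = np.stack([np.fft.irfft(S * np.exp(-2j * np.pi * freqs * tau), n=N) for tau in delays])
X = X + np.stack([awgn(source, 0.0) for _ in range(H)])
print(X.shape)                                    # (6, 250): H sensors x N samples per frame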
Data packets transmitted from the sensor array to the FC may suffer from packet loss. We will simulate packet loss rates, denoted by $r_{loss}$, of 0, 5%, 10%, 20% and 30%, respectively. To mitigate burst data transmission loss, the acoustic data stream is interleaved while being assembled into the data packets for transmission. However, other than packet headers, no additional channel coding bits are appended. The data loss rate is computed as the percentage of packets lost during transmission over the wireless channels versus the total number of packets sent from the sensor array.
With the CSJSR-DoA approach, both random subsampling and wireless channel packet loss reduce the amount of received acoustic samples compared to what was originally sampled at the sensor array. In particular, at each sensor node, the acoustic data samples will be randomly subsampled according to Equations (14) and (15). The ratio $M/N$ in these equations can be regarded, in the context of CS, as the ratio of the number of measurements versus the total number of data samples. With channel data packet loss taken into account, we instead define $r_{dc}$ ($\le 1$) to be the ratio of the number of acoustic samples successfully received at the fusion center versus the total number of samples acquired by the H acoustic sensors. Ignoring the overhead of the data packet header, we have:
$$r_{dc} \approx (1 - r_{loss}) \times M/N$$
In this section, the DoA performance will be reported against different values of $r_{dc}$. In practice, if $r_{loss}$ can be estimated in real time, we may adjust M to achieve the desired DoA accuracy.
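As a worked example of this relation (numbers chosen purely for illustration), a subsampling ratio of $M/N = 0.4$ combined with a packet loss rate of $r_{loss} = 20\%$ gives $r_{dc} \approx 0.8 \times 0.4 = 0.32$, i.e., roughly one third of the raw samples reach the FC.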
Four algorithms are implemented: L1-SVD, CSJSR-DoA, CSA-DoA and COBE. The L1-SVD algorithm [26] is applied to the entire set of acoustic samples without random subsampling and is used as a baseline for both simulation and experiment. The COBE algorithm [13] requires a specific reference node to be sampled at the Nyquist rate without any subsampling in order to provide a reference source signal. To have fair comparisons, we performed trial runs to empirically obtain the best parameters for the CSA-DoA and COBE algorithms. All four algorithms are implemented using MATLAB Version 7.9.0. The optimization is performed using the Sedumi 1.2.1 toolbox [44].

5.2. CSJSR-DoA Performance Evaluation

In this simulation, two (Q = 2) stationary acoustic sources are placed in the far field of the sensor array at $[-60°, 30°]$. The noise levels of the acoustic signal correspond to SNR = -10 dB, -5 dB, 0 dB, 5 dB and 10 dB, and it is assumed that $r_{loss} = 0\%$. Five hundred independent trials are performed. Since sparse reconstruction is a probabilistic event, a trial is counted as a successful detection if a sparse indicative vector can be reconstructed successfully; in other words, some trials fail to produce a solution for the sparse DoA indicative vector. Based on the results of the detected trials, the number of impinging signals, Q, and the corresponding DoAs can be estimated. Similar to some spatial spectrum search-based approaches, the spatial spectrum is calculated as $r_{ss} = 20\log_{10}\left(\mathbf{s}/\max(\mathbf{s})\right)$, and conventional peak-finding approaches [45,46] are used to find $\tilde{Q}$ peaks of the obtained spatial spectrum. The estimated number of directions $\tilde{Q}$ can fall into three categories, $\tilde{Q} < Q$, $\tilde{Q} = Q$ and $\tilde{Q} > Q$, which makes a proper performance criterion hard to choose. Following similar work [22], which takes into account both the error in estimating the signal number $\tilde{Q}$ and the errors of the corresponding DoAs, the root mean square error (RMSE) is reported using the formulas:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{n=1}^{N_e}\mathrm{RMSE}(n)^2}{N_e}}$$
$$\mathrm{RMSE}(n) = \begin{cases}\sqrt{\dfrac{\sum_{q=1}^{\tilde{Q}(n)}\left(\theta_q - \tilde{\theta}_q(n)\right)^2 + \left(Q - \tilde{Q}(n)\right)\Delta\theta_{\max}^2}{Q}}, & \tilde{Q}(n) \le Q \\[2ex] \sqrt{\dfrac{\sum_{q=1}^{Q}\left(\theta_q - \tilde{\theta}_q(n)\right)^2 + \sum_{j=Q+1}^{\tilde{Q}(n)}\left(\tilde{\theta}_j(n) - \bar{\theta}_j(n)\right)^2}{Q}}, & \tilde{Q}(n) > Q\end{cases}$$
where $N_e$ is the number of detected trials and $\bar{\theta}_j(n) = \arg\min_{\theta_q,\ q\in[1,Q]}\left|\theta_q - \tilde{\theta}_j(n)\right|$. Here, $\Delta\theta_{\max}$ is a penalty term equal to the maximum admissible DoA error (i.e., 180° for a linear array), and $\bar{\theta}_j(n)$ is the true DoA closest to the additional false DoA $\tilde{\theta}_j(n)$.
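The detection metric above can be computed directly from the true and estimated angle lists; a small sketch follows (the pairing of estimates with true DoAs follows the index convention of the equation, and the peak-finding step that produces the estimates is assumed to exist elsewhere).

import numpy as np

def rmse_trial(theta_true, theta_est, d_theta_max=180.0):
    # per-trial RMSE with penalties for missed and spurious DoAs
    Q, Q_est = len(theta_true), len(theta_est)
    if Q_est <= Q:
        err = sum((theta_true[q] - theta_est[q]) ** 2 for q in range(Q_est))
        err += (Q - Q_est) * d_theta_max ** 2            # penalty for each missed source
    else:
        err = sum((theta_true[q] - theta_est[q]) ** 2 for q in range(Q))
        for j in range(Q, Q_est):                        # each extra estimate is charged
            err += min((theta_est[j] - th) ** 2 for th in theta_true)
    return np.sqrt(err / Q)

def rmse_overall(per_trial):
    # aggregate RMSE over the detected trials
    r = np.asarray(per_trial)
    return np.sqrt(np.mean(r ** 2))

print(round(rmse_trial([-60.0, 30.0], [-59.0, 31.5]), 2))    # 1.27 degrees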
The averaged detection rate and the RMSE are summarized in Figure 4. It may be noted that when the SNR is greater than or equal to 0 dB, $r_{dc} \ge 0.25$ yields very satisfactory DoA performance. To put this into context, $r_{dc} = 0.25$ roughly corresponds to transmitting 500 samples per second per node without data loss, whereas the corresponding data volume at the Nyquist rate is 1600.
It is also interesting to examine how the RMSE obtained in simulation compares against the theoretical Cramér–Rao bound. In Figure 5, it can be seen that for SNR > 0 dB, the RMSE is very close to the theoretically-computed CRB.
Next, we investigate the impact of varying the data packet loss rate for different values of the random subsampling ratio $M/N$. The noise level of the acoustic signal is set at SNR = 0 dB. Two sets of plots are provided in Figure 6. On the top, the averaged probability of successful detection and the RMSE are plotted against $M/N$. The performance degradation due to data packet loss can be seen clearly. On the bottom, the averaged probability of successful detection and the RMSE are plotted against $r_{dc}$, which, according to Equation (32), already takes into account the impact of $r_{loss}$. Hence, the different curves on the bottom collapse onto a single curve. This is because random data loss is equivalent to random subsampling of the raw data.
Our next goal is to investigate the DoA resolution of the proposed CSJSR-DoA approach when two acoustic sources become closer in terms of DoAs. Under the same simulation configuration, we fix one source at 0° and vary the incident angle of the second source from 2° to 40° in 2° increments. We perform 300 independent trials for each setting. The SNR is 5 dB, and the peaks of the 360 × 1 sparse vector indicate the DoAs of the two sources.
We report the averaged angle estimation error versus the target DoA separation angle in Figure 7. Based on the discussion of Theorem 2, the steering vectors of two nearby sources are strongly correlated, i.e., have larger coherence. Therefore, the reconstruction of a correct sparse DoA indicative vector does not perform well in this regime. In the simulations, the probability (false DoA probability) that only one source is found when two DoAs exist and the RMSE of each DoA estimate are shown versus the target DoA separation angle in Figure 8. Note that when the separation angles are small, the averaged estimation errors, the RMSEs and the probability of false DoA estimation are rather large. However, when the targets are well separated ($\ge 20°$), the DoAs of both targets are accurately estimated (the averaged estimation error and the false DoA probability are close to zero, and the RMSEs decrease to a constant). This validates the theoretical angle separation result of $\frac{c}{Hdf} = \frac{343}{6 \times 0.2 \times 800}\,\text{rad} \approx 20.5°$.
Finally, we use simulation to compare the proposed CSJSR-DoA approach against two other CS-based DoA methods: COBE and CSA-DoA. The simulation conditions are identical to those used in Figure 4. While these three methods differ in how the measurements are made, the definition of r dc would still be applicable to all of them; namely, the number of samples received at the FC versus the total amount of raw data samples at the sensor array. Specifically, the analog projection of the CSA-DoA approach is emulated by projecting the digitized signal. The results are summarized in Figure 9. It shows that the CSJSR-DoA method has favorable performance over others in a number of situations. Additional comparison of these methods using data collected from a prototype WSAN platform will be discussed next.

5.3. Prototype WSAN Platform and Field Experiment

To further validate the capability of the CSJSR-DoA approach, a WSAN [47] was developed. Each sensor node (shown in Figure 10a) is equipped with an omni-directional microphone, a 16-bit ADC and a wireless transceiver operating at a rate of 31.25 kB/s using the ZigBee protocol [48]. Randomly sub-sampled acoustic data samples are transmitted through the wireless transceiver to the FC (a laptop). To guarantee time synchronization within the sensor array, the RBS time synchronization scheme is used. Figure 10b shows the experiment configuration, in which a laptop equipped with a wireless receiver was used as the FC. The sensor array is shown in the background close to the upper right corner.
In the first experiment, the task is to track the DoAs of a single moving acoustic source (a digital acoustic signal played through a speaker). The distance between the source and the array varies from 20 m to 40 m, and the measured SNR at the array was about 6 dB. Each sensor node took samples over a period of 125 ms at a 2 ksample/s rate and transmitted them to the FC using the IEEE 802.15.4 protocol. At the FC, the DoA estimate was updated every second. The measured data loss ratio $r_{loss}$ was 0.27, and the corresponding $r_{dc}$ of the data received at the FC was 0.23. The result is shown in Figure 11. The red line is the ground truth, and the green circles are the CSJSR-DoA estimates. The averaged probability of successful detection is 90%, and the corresponding RMSE is 2.76°. This experiment provides an example of the practical use of the CSJSR-DoA approach.
In the second experiment, two stationary acoustic sources are used. They are placed at distances of 12.94 m and 14.66 m with (ground truth) DoA angles of -6.5° and 31.0°, respectively. Six sensors were incorporated into the array with an inter-sensor spacing of 0.2 m. Using the data collected in this experiment, the performance of the CSJSR-DoA approach is compared against COBE, CSA-DoA and the baseline algorithm L1-SVD. The results are shown in Figure 12 and Table 2. The data compression ratio of all of these algorithms is fixed at $r_{dc} = 0.3$. To facilitate the comparison, the corresponding $L \times 1$ sparse DoA angle vector $\mathbf{s}$ is normalized and expressed in units of dB. Figure 12 shows that the CSJSR-DoA approach yields sharper DoA estimates than the other three methods.

6. Conclusions

In this paper, a compressive sensing joint sparse representation approach (CSJSR-DoA) is presented for DoA estimation on a WSAN platform. By exploiting both frequency domain and spatial domain sparsity, the CSJSR-DoA approach provides a direct DoA angle estimation at the FC, while requiring almost no computation at power-constrained remote sensor nodes. We provided performance analysis in terms of DoA angle resolution and the Cramér–Rao bound of the estimates. We further conducted extensive simulation and built a prototype experimental WSAN platform to investigate the impacts of various parameters on the DoA performance. We also compared the performance of the proposed CSJSR-DoA approach against two other compressive sensing DoA estimation methods and showed that the CSJSR-DoA approach provides superior performance in both simulation runs and real-world experiments.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. NSFC61273079), in part by the National Natural Science Foundation of China Key Projects (U1509215), in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA06020201) and in part by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University (No. ICT1600208, No. ICT1600199, No. ICT1600213).

Author Contributions

Kai Yu, Zhi Wang and Yu-Hen Hu have contributed to the scientific part of this work. All of the authors have contributed to the writing of this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A.

Let $\mathbf{C} = \left[\underbrace{\mathbf{c}_1 \cdots \mathbf{c}_d}_{\mathbf{C}[1]}, \underbrace{\mathbf{c}_{d+1} \cdots \mathbf{c}_{2d}}_{\mathbf{C}[2]}, \ldots, \underbrace{\mathbf{c}_{N-d+1} \cdots \mathbf{c}_N}_{\mathbf{C}[M]}\right]$ be a concatenation of column blocks $\mathbf{C}[m]$ of size $L \times d$. Then, the block-coherence is defined as:
$$\mu_B = \max_{r,\,l \neq r}\frac{1}{d}\,\rho\left(\mathbf{C}[r]^H\mathbf{C}[l]\right)$$
where $\rho(\mathbf{X})$ is the spectral norm of the matrix $\mathbf{X}$. For simplicity, we assume all of the H sensors have the same measurement matrix ($\boldsymbol{\Phi}_e = \boldsymbol{\Phi}_e^1 = \cdots = \boldsymbol{\Phi}_e^H$); therefore:
$$\boldsymbol{\Upsilon}[l]^H\boldsymbol{\Upsilon}[r] = \frac{1}{MH}\left(\mathbf{A}_L(l)\otimes\boldsymbol{\phi}_{e,l}\right)^H\left(\mathbf{A}_L(r)\otimes\boldsymbol{\phi}_{e,r}\right) = \frac{\boldsymbol{\phi}_{e,l}^H\boldsymbol{\phi}_{e,r}}{MH}\,\mathbf{A}_L(l)^H\mathbf{A}_L(r)$$
Note the structure of $\boldsymbol{\phi}_{e,r}$: the product $\boldsymbol{\Phi}_e\boldsymbol{\Psi}$ is equivalent to selecting some rows from the discrete Fourier transform (DFT) matrix $\boldsymbol{\Psi}$. Hence, $\boldsymbol{\Phi}_e\boldsymbol{\Psi}$ is a partial Fourier matrix (a submatrix of the full DFT matrix). For the underlying random non-uniform sampling, the selection of these M rows is determined by $\mathbf{u}$. Applying the Welch bound inequality [49], when $\mathbf{u}$ is randomly selected from $\{1, 2, \ldots, N\}$,
$$\left|\boldsymbol{\phi}_{e,l}^H\boldsymbol{\phi}_{e,r}\right| \le \sqrt{\frac{N-M}{M(N-1)}}$$
According to the Gershgorin circle theorem, $\mu_B$ is bounded by:
$$\mu_B = \max_{l,\,r \neq l}\frac{1}{L}\,\rho\left(\boldsymbol{\Upsilon}[l]^H\boldsymbol{\Upsilon}[r]\right) \le \max_{l,\,r \neq l}\frac{1}{LH}\sqrt{\frac{N-M}{(N-1)M}}\,\lambda_{\max}\left(\mathbf{A}_L(l)^H\mathbf{A}_L(r)\right) = \sqrt{\frac{N-M}{(N-1)M}}$$

Appendix B.

Assume that a uniform linear array consists of H sensors and that the distance between adjacent sensors is d. The orientation of the linear array is set to be orthogonal to the y-axis, and the array centroid is at $[0, 0]$. The sensor positions satisfy $u_h - u_{h-1} = d$ ($h = 2, 3, \ldots, H$), $v_h = 0$ ($h = 1, 2, \ldots, H$) and $\tau_{h,q} = u_h\sin(\theta_q)/c$, $\theta_q \in (-90°, 90°]$. In this case, the corresponding steering vector is given by:
$$\mathbf{a}_\ell(k) = \left[e^{-j\omega_k u_1\sin(\theta_\ell)/c}, e^{-j\omega_k u_2\sin(\theta_\ell)/c}, \ldots, e^{-j\omega_k u_H\sin(\theta_\ell)/c}\right]^T = \chi\left[1, e^{-j\omega_k d\sin(\theta_\ell)/c}, \ldots, e^{-j\omega_k (H-1)d\sin(\theta_\ell)/c}\right]^T$$
where $\chi = e^{-j\omega_k u_1\sin(\theta_\ell)/c}$ is a constant. Recalling Equation (A2), the block sub-matrix is given by:
$$\boldsymbol{\Upsilon}[k] = \mathbf{A}_L(k)\otimes\boldsymbol{\phi}_{e,k}$$
The sparse representation matrix in Equation (8) that compactly expresses the spectrum of the H received signals is:
$$\mathbf{A}_L(k) = \left[\mathbf{a}_1(k), \mathbf{a}_2(k), \ldots, \mathbf{a}_L(k)\right]$$
When $\omega_k d/c > \pi$ ($d > \lambda/2$) [37,50], there are at least two different angles $\theta_1$ and $\theta_2$ that satisfy:
$$e^{-j\omega_k d\sin(\theta_1)/c} = e^{-j\omega_k d\sin(\theta_2)/c}$$
with $\theta_1 \neq \theta_2 \in (-\pi/2, \pi/2]$. This implies that there will be multiple identical columns in $\mathbf{A}_L(k)$, and hence, the coherence of $\mathbf{A}_L(k)$ will be one. However, in CS theory, smaller coherence means better reconstruction performance. To guarantee the uniqueness of the columns of $\mathbf{A}_L(k)$ (no identical columns) and, hence, a stable reconstruction of the sparse indicating vector, a small coherence is desirable.
On the condition that $d \le \lambda/2$, the coherence of $\boldsymbol{\Upsilon}[k]$ is given by:
$$\mu\left(\boldsymbol{\Upsilon}[k]\right) = \frac{1}{MH}\max_{\ell_1 \neq \ell_2}\left|\left(\mathbf{a}_{\ell_1}(k)\otimes\boldsymbol{\phi}_{e,k}\right)^H\left(\mathbf{a}_{\ell_2}(k)\otimes\boldsymbol{\phi}_{e,k}\right)\right| \le \frac{1}{H}\sqrt{\frac{N-M}{M(N-1)}}\max_{\ell_1 \neq \ell_2}\left|\mathbf{a}_{\ell_1}^H(k)\,\mathbf{a}_{\ell_2}(k)\right| = \frac{1}{H}\sqrt{\frac{N-M}{M(N-1)}}\max_{\ell_1 \neq \ell_2}\left|\sum_{h=0}^{H-1}\exp\left(-j\,\frac{h\,\omega_k d\,p(\theta)}{c}\right)\right| = \sqrt{\frac{N-M}{M(N-1)}}\,\mu_a$$
where $\mu_a = \frac{1}{H}\max_{\ell_1 \neq \ell_2}\left|\dfrac{1 - \exp\left(-jH\omega_k d\,p(\theta)/c\right)}{1 - \exp\left(-j\omega_k d\,p(\theta)/c\right)}\right|$ is determined by the redundant array manifold matrix, $p(\theta) = \sin(\theta_1) - \sin(\theta_2) \approx \cos(\theta_1)\,\Delta\theta$ and $\Delta\theta = \theta_1 - \theta_2$.
Note that $\lim_{\Delta\theta\to 0}\mu_a = 1$; that is, when the incident angles of two sources are close enough, the coherence approaches $\mu_B$, and hence, their DoAs become unresolvable. If the angle difference between the two sources is larger than a certain value, a small coherence value can be guaranteed. Based on this observation, an upper bound on $\mu_a$ can be established as $\mu_b = 2/\left(H\left|1 - \exp\left(-j\omega_k d\,p(\theta)/c\right)\right|\right)$. In Figure B1, the coherence $\mu_a$ and its upper bound $\mu_b$ are plotted against $p(\theta)$. It is observed that $\mu_b$ decreases sharply for $p(\theta)$ smaller than the first side lobe ($H\omega_k d\,p(\theta)/c = 2\pi$). It is not difficult to verify that when:
$$|\Delta\theta| \ge \frac{c}{Hfd\cos(\theta_1)} \ge \frac{c}{Hfd}$$
we have $\mu_a \le \mu_b(\Delta\theta) = \frac{1}{H}\cdot\frac{2}{\left|1 - e^{-j2\pi/H}\right|} = \frac{1}{H\sin(\pi/H)}$. This means that a small coherence can be guaranteed when the angle difference between the two sources is larger than a certain threshold.
Figure B1. Mutual coherence of the array manifold matrix and its upper bound.

Appendix C.

Recall from Equation (29) that the details of the FIM matrix are:
$$\mathbf{F}(\boldsymbol{\Lambda}) = \frac{2}{\sigma^2}\,\mathrm{Re}\begin{bmatrix}\frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\theta} & \frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\mathbf{r}^T} & \frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} \\ \frac{\partial\mathbf{y}^H}{\partial\mathbf{r}}\frac{\partial\mathbf{y}}{\partial\theta} & \frac{\partial\mathbf{y}^H}{\partial\mathbf{r}}\frac{\partial\mathbf{y}}{\partial\mathbf{r}^T} & \frac{\partial\mathbf{y}^H}{\partial\mathbf{r}}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} \\ \frac{\partial\mathbf{y}^H}{\partial\mathbf{f}}\frac{\partial\mathbf{y}}{\partial\theta} & \frac{\partial\mathbf{y}^H}{\partial\mathbf{f}}\frac{\partial\mathbf{y}}{\partial\mathbf{r}^T} & \frac{\partial\mathbf{y}^H}{\partial\mathbf{f}}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T}\end{bmatrix}$$
Additionally, each term of the FIM is:
$$\frac{\partial\mathbf{y}}{\partial\theta} = \sum_{i=1}^{R} r_0(k_i)\left[\mathbf{b}(k_i)\odot\mathbf{a}_0(k_i)\right]\otimes\boldsymbol{\phi}_{e,k_i}, \quad \frac{\partial\mathbf{y}}{\partial\mathbf{r}_0^T} = \left[\mathbf{a}_0(k_1)\otimes\boldsymbol{\phi}_{e,k_1}, \mathbf{a}_0(k_2)\otimes\boldsymbol{\phi}_{e,k_2}, \ldots, \mathbf{a}_0(k_R)\otimes\boldsymbol{\phi}_{e,k_R}\right], \quad \frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} = \left[\frac{\partial\mathbf{y}}{\partial f_1}, \frac{\partial\mathbf{y}}{\partial f_2}, \ldots, \frac{\partial\mathbf{y}}{\partial f_R}\right]$$
where $\mathbf{b}(k_i) = -j\frac{2\pi f_{k_i}}{c}\left[u_1\cos(\theta) - v_1\sin(\theta), u_2\cos(\theta) - v_2\sin(\theta), \ldots, u_H\cos(\theta) - v_H\sin(\theta)\right]^T$, $\frac{\partial\mathbf{y}}{\partial f_i} = r_0(k_i)\,\mathbf{a}_0(k_i)\otimes\left(\mathbf{v}_{k_i}\odot\boldsymbol{\phi}_{e,k_i}\right)$ and $\mathbf{v}_{k_i} = -j2\pi(\mathbf{u}-1)/N$, with $\odot$ denoting the element-wise product. We assume the coherence of $\boldsymbol{\Phi}^h\boldsymbol{\Psi}$ is very small and can be neglected ($\boldsymbol{\phi}_{e,k_l}^H\boldsymbol{\phi}_{e,k_n} \approx 0$, $l \neq n$). Based on this approximation, each component of the FIM is given by:
$$\frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\theta} = M\alpha\|\mathbf{e}\|_2^2, \quad \frac{\partial\mathbf{y}^H}{\partial\mathbf{r}}\frac{\partial\mathbf{y}}{\partial\mathbf{r}^T} = MH\,\mathbf{I}, \quad \frac{\partial\mathbf{y}^H}{\partial\mathbf{f}}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} = \varepsilon\,\mathrm{diag}\left(r_0(k_1)^2, \ldots, r_0(k_R)^2\right), \quad \frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\mathbf{r}^T} = \mathbf{0}, \quad \frac{\partial\mathbf{y}^H}{\partial\theta}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} = \mathbf{0}, \quad \frac{\partial\mathbf{y}^H}{\partial\mathbf{r}}\frac{\partial\mathbf{y}}{\partial\mathbf{f}^T} = \eta\,\mathrm{diag}\left(r_0(k_1), \ldots, r_0(k_R)\right)$$
where $\alpha = \frac{4\pi^2}{c^2}\sum_{h=0}^{H-1}\left[u_h\cos(\theta) - v_h\sin(\theta)\right]^2$, $\mathbf{e} = \left[k_1 r_0(k_1), \ldots, k_R r_0(k_R)\right]^T$, $\eta = -j\pi(1+M)$ and $\varepsilon = (4\pi^2/N^2)\|\mathbf{u}-1\|_2^2$. Applying the well-known matrix inversion lemma, the CRB of CSJSR-DoA is:
$$\mathrm{CRB}(\theta) = \frac{\sigma^2}{2M\alpha\|\mathbf{e}\|_2^2}$$

References

  1. Ilyas, P.; Chen, H.; Tremoulis, G. Tracking of multiple moving speakers with multiple microphone arrays. IEEE Trans. Speech Audio Process. 2004, 12, 520–529. [Google Scholar]
  2. Wang, Z.; Luo, J.; Zhang, X. A novel location-penalized maximum likelihood estimator for bearing-only target localization. IEEE Trans. Speech Audio Process. 2012, 60, 6166–6181. [Google Scholar]
  3. Wang, Z.; Liao, J.; Cao, Q.; Qi, H.; Wang, Z. Achieving k-barrier Coverage in Hybrid Directional Sensor Networks. IEEE Trans. Mob. Comput. 2014, 13, 1443–1455. [Google Scholar] [CrossRef]
  4. Krim, H.; Viberg, M. Two decades of array signal processing research: the parametric approach. IEEE Signal Process. Mag. 1996, 13, 67–94. [Google Scholar] [CrossRef]
  5. Winder, A. Sonar system technology. IEEE Trans. Sonics Ultrason. 1975, 22, 291–332. [Google Scholar] [CrossRef]
  6. Chen, J.C.; Yao, K.; Hudson, R.E.; Tung, T.; Reed, C.; Chen, D. Source localization and tracking of a wideband source using a randomly distributed beam-forming sensor array. Int. J. High Perform. Comput. Appl. 2002, 16, 259–272. [Google Scholar] [CrossRef]
  7. Chen, J.C.; Yip, L.; Elson, J.; Wang, H.; Maniezzo, D.; Hudson, R.E.; Yao, K.; Estrin, D. Coherent acoustic array processing and localization on wireless sensor networks. Proc. IEEE 2003, 91, 1154–1162. [Google Scholar] [CrossRef]
  8. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  9. Baraniuk, R. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–120. [Google Scholar] [CrossRef]
  10. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar]
  11. Gurbuz, A.C.; Mcclellan, J.H.; Cevher, V. A compressive beamforming method. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 31 March–4 April 2008; pp. 2617–2620.
  12. Cevher, V.; Gurbuz, A.C.; Mcclellan, J.H.; Chellappa, R. Compressive wireless arrays for bearing estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 31 March–4 April 2008; pp. 2497–2500.
  13. Gurbuz, A.C.; Cevher, V.; Mcclellan, J.H. Bearing estimation via spatial sparsity using compressive sensing. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1358–1369. [Google Scholar] [CrossRef]
  14. Wang, Y.; Leus, G.; Pandharipande, A. Direction estimation using compressive sampling array processing. In Proceedings of the 15th IEEE Workshop on Statistical Signal Process, Cardiff, UK, 31 August–3 September 2009; pp. 626–629.
  15. Li, B.; Zou, Y.; Zhu, Y. Direction estimation under compressive sensing framework: A review and experimental results. In Proceedings of the International Conference on Information and Automation (ICIA), Shenzhen, China, 6–8 June 2011; pp. 63–68.
  16. Zhang, J.; Bao, M.; Li, X. Wideband DOA estimation of frequency sparse sources with one receiver. In Proceedings of the International Conference on Mobile Ad-hoc and Sensor System (IEEE MASS), Hangzhou, China, 14–16 October 2013; pp. 609–613.
  17. Gu, J.; Zhu, W.; Swamy, M. Compressed sensing for DOA estimation with fewer receivers than sensors. In Proceedings of the International Symposium on Circuits and Systems (ISCAS), Rio de Janeiro, Brazil, 15–18 May 2011; pp. 1752–1755.
  18. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Kelly, K.F.; Sun, T.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2007, 24, 83–91. [Google Scholar] [CrossRef]
  19. Luo, J.; Zhang, X.; Wang, Z. Direction-of-arrival estimation using sparse variable projection optimization. In Proceedings of the International Symposium on Circuits and Systems (ISCAS), Seoul, Korea, 20–23 May 2012; pp. 3106–3109.
  20. Luo, J.; Zhang, X.; Wang, Z. A new subband information fusion method for wideband DOA estimation using sparse signal representation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 4016–4020.
  21. Carlin, M.; Rocca, P.; Oliveri, G.; Viani, F. Directions-of-Arrival Estimation Through Bayesian Compressive Sensing Strategies. IEEE Trans. Antennas Propag. 2013, 61, 3828–3838. [Google Scholar] [CrossRef]
  22. Ji, S.; Xue, Y.; Carin, L. Bayesian Compressive Sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  23. Yu, L.; Sun, H.; Barbot, J.P.; Zheng, G. Bayesian compressive sensing for cluster structured sparse signals. Signal Process. 2012, 92, 259–269. [Google Scholar]
  24. Zhang, J.; Kossan, G.; Hedley, R.W.; Hudson, R.E.; Taylor, C.E.; Yao, K.; Bao, M. Fast 3D AML-based bird song estimation. Unmanned Syst. 2014, 2, 249–259. [Google Scholar] [CrossRef]
  25. Duarte, M.F.; Davenport, M.A.; Wakin, M.B.; Baraniuk, R.G. Sparse signal detection from incoherent projections. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 14–19 May 2006; pp. 305–308.
  26. Malioutov, D.; Cetin, M.; Willsky, A.S. A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 2005, 53, 3010–3022. [Google Scholar] [CrossRef]
  27. Candès, E.J. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Math. 2008, 346, 589–592. [Google Scholar]
  28. Yu, K.; Yin, M.; Hu, Y.-H.; Wang, Z. CoSCoR framework for DoA estimation in wireless array sensor network. In Proceedings of the IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), Beijing, China, 6–10 July 2013; pp. 215–219.
  29. Candès, E.J.; Romberg, J.K.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2005, 19, 410–412. [Google Scholar]
  30. Viswanathan, R. Signal-to-noise ratio comparison of amplify-forward and direct link in wireless sensor networks. In Proceedings of the Communication Systems Software and Middleware, Bangalore, India, 7–12 January 2007; pp. 1–4.
  31. Alirezaei, G. Optimizing Power Allocation in Sensor Networks with Application in Target Classification; Shaker Verlag: Aachen, Germany, 2014. [Google Scholar]
  32. Wu, L.; Yu, K.; Cao, D.; Hu, Y.-H.; Wang, Z. Efficient sparse signal transmission over lossy link using compressive sensing. Sensors 2015, 15, 880–911. [Google Scholar] [CrossRef] [PubMed]
  33. Lv, X.; Bi, G.; Chen, W. The group lasso for stable recovery of block-sparse signal representations. IEEE Trans. Signal Process. 2011, 59, 1371–1382. [Google Scholar] [CrossRef]
  34. Jie, C.; Huo, X. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process. 2006, 54, 4634–4643. [Google Scholar]
  35. Cotter, S.F.; Rao, B.D.; Engan, K.; Kreutz-Delgado, K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 2005, 53, 2477–2488. [Google Scholar] [CrossRef]
  36. Lobo, M.S.; Vandenberghe, L.; Boyd, S.; Lebret, H. Applications of second-order cone programming. Linear Algebra Its Appl. 1998, 284, 193–228. [Google Scholar] [CrossRef]
  37. Shen, Q.; Liu, W.; Cui, W.; Wu, S.; Zhang, Y.D.; Amin, M.G. Low complexity direction of arrival estimation based on wideband co-prime arrays. IEEE Trans. Audio Speech Lang. Process. 2015, 23, 1445–1456. [Google Scholar] [CrossRef]
  38. Wang, H.; Kaveh, M. Coherent signal-subspace processing for the detection and estimation of angles of arrival of multiple wide-band sources. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 823–831. [Google Scholar] [CrossRef]
  39. Sun, L.; Liu, J.; Chen, J.; Ye, J. Efficient recovery of jointly sparse vectors. In Proceedings of the 2009 Conference on Advances in Neural Information Processing System, Vancouver, BC, Canada, 7–10 December 2009; pp. 1812–1820.
  40. Davenport, M.; Duarte, M.F.; Eldar, Y.C.; Kutyniok, G. Introduction to Compressed Sensing, Chapter in Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  41. Eldar, Y.; Kuppinger, P.; Bolcskei, H. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 2010, 58, 3042–3054. [Google Scholar] [CrossRef]
  42. Hansen, R. Array pattern control and synthesis. Proc. IEEE 1992, 80, 141–151. [Google Scholar] [CrossRef]
  43. Sengijpta, S.K. Fundamentals of statistical signal processing: Estimation theory. Technometrics 1995, 37, 465–466. [Google Scholar] [CrossRef]
  44. Sedumi. Available online: http://sedumi.ie.lehigh.edu/ (accessed on 9 May 2016).
  45. Likar, A.; Vidmar, T. A peak-search method based on spectrum convolution. J. Phys. D Appl. Phys. 2003, 36, 1903–1909. [Google Scholar] [CrossRef]
46. Guo, Q.; Liao, G. Fast MUSIC spectrum peak search via Metropolis–Hastings sampler. J. Electron. 2005, 22, 599–604. [Google Scholar] [CrossRef]
47. Yu, K.; Yin, M.; Du, T.; Wu, L.; Wang, Z. Demo: AWSAN: A real-time wireless sensor array platform. In Proceedings of the International Conference on Mobile Ad-hoc and Sensor Systems (IEEE MASS), Hangzhou, China, 14–16 October 2013; pp. 441–442.
  48. ZigBee Specifications. Available online: http://www.zigbee.org (accessed on 9 May 2016).
49. Xia, P.; Zhou, S.; Giannakis, G.B. Achieving the Welch bound with difference sets. IEEE Trans. Inf. Theory 2005, 51, 1900–1907. [Google Scholar] [CrossRef]
  50. Tang, Z.; Blacquiere, G.; Leus, G. Aliasing-free wideband beamforming using sparse signal representation. IEEE Trans. Signal Process. 2011, 59, 3464–3469. [Google Scholar] [CrossRef]
Figure 1. Spectrogram of (a) a Porsche engine and (b) a bird chirping.
Figure 2. Array time delay model.
Figure 3. Schematic diagram of the CSJSR-DoA approach.
Figure 4. Performance comparison of the CSJSR-DoA approach under different data volumes: (left) detection rate; (right) RMSE.
Figure 5. Comparison between the CSJSR-DoA result and the Cramér–Rao bound (CRB): (a) CRB comparison with r_dc = 23%; (b) CRB comparison with r_dc = 31%; (c) CRB comparison with r_dc = 39%; (d) CRB comparison with r_dc = 47%.
Figure 6. Comparison of different lossy transmissions: (a) Detection rate versus M/N; (b) RMSE versus M/N; (c) Detection rate versus r_dc; (d) RMSE versus r_dc.
Figure 7. Comparison of DoA estimation error under different angle separations.
Figure 8. Angle separation comparison: (a) RMSE versus angle; (b) false probability versus angle.
Figure 9. Performance comparison among CS-based methods under the same SNR: (a) Detection rate with SNR = 10 dB; (b) RMSE with SNR = 10 dB; (c) Detection rate with SNR = 5 dB; (d) RMSE with SNR = 5 dB; (e) Detection rate with SNR = 0 dB; (f) RMSE with SNR = 0 dB.
Figure 10. Wireless sensor array network. (a) Sensor node; (b) fusion center and array.
Figure 11. Experimental result of the CSJSR-DoA approach.
Figure 12. Prototype system experiment of the CSJSR-DoA approach.
Table 1. Symbols and notations. CSJSR, compressive sensing joint sparse representation.
Symbol    Explanation
H    number of sensors within an array
Q    number of sources
R_q    sparsity of the q-th source signal
s_q    the q-th source signal in the time domain
r_q    dominant sparse vector in the frequency domain with R_q-sparsity
ν_q    less prominent components of s_q in the frequency domain
τ_{h,q}    time delay of the q-th source between the h-th sensor and the array centroid
x_h(t)    received time-domain signal at the h-th sensor at time t
Ψ    N × N inverse DFT matrix
x_h    received time-domain signal at the h-th sensor, x_h = [x_h(1), ..., x_h(N)]^T
d̃_h    signal spectrum of x_h, d̃_h = [d_h(0), d_h(1), ..., d_h(N-1)]^T
d(k)    array data spectrum at the k-th frequency, d(k) = [d_1(k), d_2(k), ..., d_H(k)]^T
a(k)    steering vector of θ at the k-th frequency, a(k) = [exp(-jω_k τ_{1,q}), ..., exp(-jω_k τ_{H,q})]^T
r(k)    source signal vector of L directions, r(k) = [r_1(k), r_2(k), ..., r_L(k)]^T
A_L(k)    steering matrix of L directions at the k-th frequency, A_L(k) = [a_1(k), a_2(k), ..., a_L(k)]
D    array data spectrum matrix
M    permutation matrix
d̄    wideband array data spectrum, d̄ = [d^T(0), d^T(1), ..., d^T(N-1)]^T
d̃    array spectrum of H sensors, d̃ = [d̃_1^T, d̃_2^T, ..., d̃_H^T]^T
u    sample intervals, u = [u(1), u(2), ..., u(M)]^T
r(·)    rounding operation
Φ    random sub-sampling matrix
Φ_loss^h    channel loss matrix
y_h    received measurement of the h-th sensor at the fusion center
y    joint measurement vector of H sensors, y = [y_1^T, ..., y_H^T]^T
w    joint noise vector of N frequencies, w = [w(0), w(1), ..., w(N-1)]^T
Θ    joint measurement matrix of H sensors, Θ = diag(Φ_{e_1}Ψ, Φ_{e_2}Ψ, ..., Φ_{e_H}Ψ)
diag(·)    block diagonal matrix operation
Υ    joint sparse matrix
s    direction-indicative vector, s(ℓ) = Σ_{n=0}^{N-1} |r_ℓ(n)|^2
supp(·)    set of nonzero indices of a vector, supp(x) = {n | x(n) > 0}
Υ̃    pruned joint sparse matrix, Υ̃ = [Υ[n_1], Υ[n_2], ..., Υ[n_|T|]], n_r ∈ T
F(Λ)    Fisher information matrix of the parameter Λ
CRB    Cramér–Rao bound of the CSJSR algorithm
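To make the notation above concrete, the following minimal NumPy sketch builds the joint measurement matrix Θ = diag(Φ_1Ψ, ..., Φ_HΨ) from per-sensor random sub-sampling matrices and the N × N inverse DFT matrix Ψ, and stacks the per-sensor measurements into y. It is an illustration of the notation only: the frame length N, the number of retained samples M, and the sensor count H are assumed values rather than the experimental settings of the paper, and the spectra d̃_h are random placeholders.

import numpy as np
from scipy.linalg import block_diag

# Illustrative sizes (assumptions for this sketch, not the paper's experimental settings)
N = 256   # time-domain samples per sensor frame
M = 64    # randomly retained samples per sensor (M < N)
H = 4     # sensors in the array
rng = np.random.default_rng(0)

# Psi: N x N inverse DFT matrix (numpy's 1/N-normalized convention),
# so that Psi @ d equals np.fft.ifft(d) and x_h = Psi @ d_h
Psi = np.fft.ifft(np.eye(N), axis=0)

def random_subsampling_matrix(n, m, rng):
    # m x n selection matrix: m randomly chosen rows of the n x n identity
    rows = np.sort(rng.choice(n, size=m, replace=False))
    return np.eye(n)[rows, :]

Phi = [random_subsampling_matrix(N, M, rng) for _ in range(H)]

# Theta = diag(Phi_1 Psi, ..., Phi_H Psi): joint measurement matrix of the H sensors
Theta = block_diag(*[Phi[h] @ Psi for h in range(H)])

# Stacked per-sensor spectra d_tilde = [d_1^T, ..., d_H^T]^T (random placeholders here)
d_tilde = rng.standard_normal(H * N) + 1j * rng.standard_normal(H * N)

# y = [y_1^T, ..., y_H^T]^T: the compressed data actually transmitted to the fusion center
y = Theta @ d_tilde
print(Theta.shape, y.shape)   # (H*M, H*N) and (H*M,)

At the fusion center, d̃ would in turn be expressed through the steering matrices A_L(k), and the jointly sparse matrix Υ recovered by an ℓ2/ℓ1-type program (solvable as a second-order cone program, e.g., with SeDuMi [44]); the peaks of the direction-indicative vector s then yield the DoA estimates.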
Table 2. Result of the prototype system experiment. CSA, compressive sensing array; COBE, compressive bearing estimation.
Method    DoA (°)
L1-SVD    [-7, 29]
CSJSR-DoA    [-7.5, 27.5]
CSA-DoA    [-2.5, 30]
COBE    [-9.5, 25]
