Article

Efficient Data Gathering Methods in Wireless Sensor Networks Using GBTR Matrix Completion

School of Instrumentation Science and Opto-Electronics Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(9), 1532; https://doi.org/10.3390/s16091532
Submission received: 21 July 2016 / Revised: 8 September 2016 / Accepted: 13 September 2016 / Published: 21 September 2016
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)

Abstract

To obtain efficient data gathering methods for wireless sensor networks (WSNs), a novel graph based transform regularized (GBTR) matrix completion algorithm is proposed. The graph based transform sparsity of the sensed data is explored and incorporated as a penalty term in the matrix completion problem. The proposed GBTR-ADMM algorithm applies the alternating direction method of multipliers (ADMM) in an iterative procedure to solve the constrained optimization problem. Since the performance of the ADMM method is sensitive to the number of constraints, the GBTR-A2DM2 algorithm is proposed to accelerate the convergence of GBTR-ADMM. GBTR-A2DM2 benefits from merging the two constraint conditions into one as well as from a restart rule. Theoretical analysis shows that the proposed algorithms achieve satisfactory time complexity. Extensive simulation results verify that our proposed algorithms outperform state-of-the-art algorithms for data collection problems in WSNs with respect to recovery accuracy, convergence rate, and energy consumption.

1. Introduction

Wireless sensor networks (WSNs) are composed of large numbers of self-organized sensor nodes capable of sensing, data storage, and communication. WSNs have many applications, such as remote environment sensing, industrial automation, smart cities, and military monitoring. In practice, many ordinary nodes are deployed in an unattended mode. These ordinary nodes perform data collection tasks individually and transmit the raw data to the sink node over multi-hop links. Since it is difficult to recharge or replace the limited power supply of ordinary nodes, developing energy-efficient data gathering methods is crucial.
A large number of data collection methods have been proposed in the literature to reduce energy consumption at different levels of data reconstruction precision [1,2,3]. The data obtained in WSNs possess spatial and temporal correlations, which are intrinsic characteristics of a physical environment. A previous article [1] proposed a clustered aggregation (CAG) algorithm for data collection, which exploits the spatial and temporal correlations of the sensed data. Pham et al. [2] presented a divide and conquer approximating (DCA) algorithm to reduce power consumption. Since the sensed data need to be transmitted to the sink node over multi-hop links, Lachowski et al. [3] proposed a novel algorithm to construct spanning trees for efficient data gathering in wireless sensor networks. Unfortunately, these traditional data gathering methods have limitations. Firstly, the clustering methods (or the spanning tree construction methods) incur high computational cost, as does the dynamic maintenance of clusters (or trees). Secondly, the energy consumption is not balanced, since the nodes close to the sink consume more energy.
Compressive sensing (CS) [4,5,6,7] theory has brought about a new approach for efficient data gathering in WSNs. Since the sensed data have temporal and spatial correlations, they can be sparsely represented in an appropriate transform basis. CS theory states that a small number of linear measurements can accurately reconstruct a sparse signal when the sensing matrix satisfies the restricted isometry property (RIP). Thus, the number of data transmissions per measurement is greatly reduced. Since the computation-intensive reconstruction algorithm is implemented at the sink node, the computational cost at the ordinary nodes is quite low. Benefiting from these merits of CS, the energy consumption is balanced and reduced for the data gathering problem in WSNs. Thus, many papers [8,9,10,11] on efficient CS-based data gathering methods have been published in recent years. Luo et al. [8] first proposed to apply compressive sensing to data gathering in WSNs. The idea of the proposed compressive data gathering (CDG) is that intermediate nodes transmit weighted sums of the data received from their child nodes together with their own data. In [9], a distributed compressive sampling method was presented; it is quite efficient since in-network compression is employed and each node individually determines its transmission scheme to minimize the total number of transmissions. Xiang et al. [10] presented the compressed data aggregation (CDA) method to reconstruct the original signals with high precision; meanwhile, its energy consumption is reduced in comparison with the CDG method. In [11], the authors proposed the compressive data collection (CDC) method to collect data in wireless sensor networks. The scheme reduces the necessary number of measurements, and thus the network lifetime is prolonged.
However, applying CS in real WSNs is difficult. Firstly, CS assumes the data are sparse or can be sparsely represented in a transform basis, but an appropriate sparse basis is not always available. Secondly, the spatial and temporal correlations cannot be exploited jointly, since the sensed data are expressed in vector form.
As a more efficient data gathering approach, matrix completion (MC) [12] recovers an incomplete data matrix by observing a small fraction of its entries. MC is an extension of CS theory: in CS, the signals are represented in vector form, while MC formulates the signals in matrix form. Sensor data are commonly arranged as matrices, as are image signals and video samples; such two-dimensional signals can be processed more efficiently in matrix form, even though they could be transformed into vectors. In comparison with CS theory, MC does not require a priori knowledge of a sparse basis, and the necessary sampling ratio can be even lower. Since the sensed data collected in WSNs have spatial and temporal correlations, they exhibit low-rank properties. In [13], the singular value thresholding (SVT) algorithm was proposed to approximate the low-rank matrix via nuclear norm minimization. To measure large-scale traffic datasets, Roughan et al. [14] proposed a spatio-temporal matrix completion algorithm called sparsity regularized matrix factorization (SRMF). In [14], intensive analysis of massive traffic data yielded the optimal choice of the spatial and temporal constraint matrices. SRMF can be extended to various matrix-completion-based problems, such as data interpolation and missing-entry inference. To further exploit the low-rank feature and the short-term stability of the sensed data, Cheng et al. [15] proposed the STCDG method; by applying STCDG to data gathering in WSNs, the recovery accuracy is improved and the power consumption is reduced.
In practice, the sensor nodes are deployed in a finite area, so the features of the sensed data are coupled with the network topology. In our analysis, the sensed data are found to be sparse under the graph based transform (GBT) basis. The GBT basis is composed of the eigenvectors of the Laplacian matrix when the whole network is represented as a graph. To the best of our knowledge, this is the first time GBT sparsity has been applied to a matrix completion problem. Considering both the GBT sparsity and the low-rank feature of the sensed data, the GBTR-ADMM and GBTR-A2DM2 algorithms are proposed. The time complexity of our proposed algorithms is also analyzed and shown to be low. Simulation results show that our proposed algorithms outperform state-of-the-art algorithms for data collection problems with respect to recovery accuracy, convergence rate, and energy consumption.
The main contributions of the paper are as follows:
(1)
The features of sensor datasets are analyzed in consideration of their topology information, which reveals that the data matrix is sparse under the graph based transform.
(2)
The graph based transform regularized (GBTR) matrix completion problem is formulated. To reconstruct the missing values efficiently, the GBTR by Alternating Direction Method of Multipliers (GBTR-ADMM) algorithm is proposed. Simulation results reveal that GBTR-ADMM outperforms state-of-the-art algorithms in terms of recovery accuracy and energy consumption.
(3)
To accelerate the convergence of GBTR-ADMM, the GBTR-A2DM2 algorithm is proposed, which benefits from a restart rule and the fusion of multiple constraints.
(4)
The time complexity of our proposed algorithms is analyzed, which shows that the complexity is low.
The remainder of the paper is organized as follows: In Section 2, the problem formulation for matrix completion is given. Section 3 presents the features of the real datasets and the synthesized dataset. The proposed GBTR-ADMM and GBTR-A2DM2 algorithms are detailed in Section 4 and Section 5, respectively. Section 6 analyzes the time complexity of the proposed algorithms. In Section 7, the performance of the proposed algorithms is studied. The conclusions and future work are summarized in Section 8.

2. Problem Formulation

In this section, we introduce the related issues of matrix completion theory. The main notations of the paper are summarized in Table 1.
Suppose there are $N$ sensor nodes in the WSN. Let $\{x_i\}_{i=1}^{N}$ denote the sensor data, where $x_i \in \mathbb{R}^{M}$ represents the data vector collected by node $i$ in time slots $t_1, t_2, \ldots, t_M$. The sample interval is assumed to be constant. Thus, the data matrix $X \in \mathbb{R}^{M \times N}$ represents the sensor data gathered by $N$ sensor nodes in $M$ time slots.
In order to reduce energy consumption in resource-constrained WSNs, only a small amount of sensor data is transmitted to the sink node. Let $\Omega \subseteq \{1, 2, \ldots, M\} \times \{1, 2, \ldots, N\}$ denote the indices of the observed entries of $X$. Similarly, let $\Omega^c$ denote the indices of the omitted entries. Let $\pi_\Omega$ be the linear projection operator that keeps the entries in $\Omega$ invariant and sets the entries in $\Omega^c$ to zero, that is:
$(\pi_\Omega(X))_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega \\ 0, & (i,j) \in \Omega^c \end{cases}$
Suppose matrix $M \in \mathbb{R}^{M \times N}$ is the observed data, i.e., the incomplete version of matrix $X$ whose entries outside $\Omega$ are zero; that is, $\pi_\Omega(X) = \pi_\Omega(M)$.
Our goal is to reduce the amount of data transmitted to the sink node, and to design a matrix completion algorithm that reconstructs the original data matrix $X$ as closely as possible. The observed ratio is defined as:
$\tau = \frac{|\Omega|}{MN}$
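For concreteness, the sampling operator and the observed ratio can be sketched in a few lines of NumPy; the toy dimensions, random seed, and variable names here are illustrative, not from the paper:

```python
import numpy as np

def project_omega(X, mask):
    """pi_Omega: keep the entries indexed by Omega, zero the rest."""
    return np.where(mask, X, 0.0)

# Toy example: 4 time slots x 5 nodes, roughly 60% of entries observed.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
mask = rng.random((4, 5)) < 0.6       # boolean stand-in for Omega
M_obs = project_omega(X, mask)        # observed matrix M
tau = mask.sum() / mask.size          # observed ratio |Omega| / (M*N)
```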
Next, the features of the datasets are studied in detail; these features will be utilized in our designed algorithms.

3. Exploring the Features of Datasets

In this section, three datasets are utilized for analysis. The first two datasets are gathered by the GreenOrbs [16] system, which is deployed in a forest environment with up to 330 nodes. Its topology is exhibited in Figure 1. We mainly consider the temperature and humidity data collected between 3 and 5 August 2011. The third dataset is smooth data generated with a second-order autoregressive (AR) model. In detail, the AR filter $H(z) = \frac{1}{1 + a_1 z^{-1} + a_2 z^{-2}}$ is used, where $a_1$ and $a_2$ are set to −0.1 and −0.8, respectively. Five hundred nodes assigned with the generated data are randomly deployed in a 1000 m × 1000 m area, as shown in Figure 2. Detailed information about these datasets is given in Table 2.
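The synthesized dataset can be sketched as follows; the white Gaussian driving noise and the random seed are our assumptions, since the paper does not specify the filter input:

```python
import numpy as np
from scipy.signal import lfilter

# AR(2) filter H(z) = 1 / (1 + a1*z^-1 + a2*z^-2) with the paper's
# coefficients a1 = -0.1, a2 = -0.8 (a stable filter).
a1, a2 = -0.1, -0.8
rng = np.random.default_rng(1)
noise = rng.normal(size=(500, 500))            # one column per node
# lfilter computes b(z)/a(z); here b = [1], a = [1, a1, a2],
# filtering each node's column over the time axis.
data = lfilter([1.0], [1.0, a1, a2], noise, axis=0)
```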

3.1. Low-Rank Property

Since sensor readings have spatial and temporal correlations, the rank $r$ of the data matrix $X$ is small, i.e., $r \ll \min(M, N)$. This low-rank property of sensor data has been studied in previous papers [14,15,17]. Let $\mathrm{rank}(X)$ denote the rank of matrix $X$. Candes et al. [18] proposed to solve the matrix completion problem by minimizing $\mathrm{rank}(X)$ under suitable constraints. However, the rank minimization problem cannot be solved globally in polynomial time because of its non-convexity. Fortunately, the nuclear norm $\|X\|_*$, which can be minimized by various convex programming methods, is the tightest convex relaxation of the rank function [12]. The relationship between the rank function and the nuclear norm in matrix completion is analogous to the relation between the $\ell_0$ norm and the convex $\ell_1$ norm in compressive sensing.

3.2. GBT Sparsity

Since these datasets are coupled with their topologies, we construct the graph-based transform (GBT) to sparsely represent them. The sensor network is represented by a graph $G = (V, E)$, which consists of a vertex set $V$ (sensor nodes) and an edge set $E$ (sensor links). The link $e(i, j)$ exists if the Euclidean distance between nodes $i$ and $j$ is smaller than the communication range. The topological graph is mathematically described by the adjacency matrix $A$:
$A(i,j) = \begin{cases} 1, & \text{if } e(i,j) \in E \\ 0, & \text{otherwise} \end{cases}$
The degree matrix $D$ is a diagonal matrix whose diagonal element $D(i,i)$ is the number of links connected to node $i$, with $D(i,j) = 0$ for $i \neq j$. Thus, the Laplacian matrix is given by:
$L = D - A$
Since matrix $L$ is symmetric, its eigenvalue decomposition exists. The eigenvectors of the Laplacian matrix $L$ constitute the columns of the graph-based transform matrix, denoted $\Psi$. Clearly, the GBT matrix $\Psi$ is orthogonal. Detailed information about the GBT basis can be found in [19].
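As an illustration, a minimal NumPy sketch of this construction is given below; the function name and the coordinate format are our own assumptions, while the distance-threshold adjacency rule follows the description above:

```python
import numpy as np

def gbt_basis(coords, comm_range):
    """Build the GBT basis Psi from node coordinates (an n x 2 array).

    Nodes within the communication range are linked; Psi's columns are
    the eigenvectors of the graph Laplacian L = D - A.
    """
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = ((dist > 0) & (dist < comm_range)).astype(float)  # adjacency matrix
    D = np.diag(A.sum(axis=1))                            # degree matrix
    L = D - A                                             # Laplacian matrix
    _, Psi = np.linalg.eigh(L)   # L is symmetric, so Psi is orthogonal
    return Psi
```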
The performance of the GBT matrix as a sparse basis is demonstrated in Figure 3. As can be seen, nearly 10% of the sorted GBT coefficients account for more than 99% of the energy. Therefore, the matrix $\Psi^{-1}X$ is extremely sparse; in other words, the $\ell_1$ norm of $\Psi^{-1}X$, denoted $\|\Psi^{-1}X\|_1$, is very small.

4. The Proposed Optimization Algorithm

Previous matrix completion algorithms are prone to overfitting when the sampling rate is low; thus, the recovery error can be large. To obtain an exact reconstruction of the missing values, the GBT sparsity and the low-rank property of the data matrix are exploited jointly, leading to the following optimization problem:
$\min_X \left\{ \|X\|_* + \lambda \|\Psi^{-1}X\|_1 \right\}, \quad \text{subject to } \pi_\Omega(X) = \pi_\Omega(M)$
where λ is the GBT sparsity regularization parameter.
The ADMM algorithm [20] blends the decomposability of dual ascent with the superior convergence of the method of multipliers. With an introduced auxiliary variable $W \in \mathbb{R}^{M \times N}$, the above problem is rewritten as follows:
$\min_{X,W} \left\{ \|X\|_* + \lambda \|\Psi^{-1}W\|_1 \right\}, \quad \text{subject to } X = W, \; \pi_\Omega(W) = \pi_\Omega(M)$
In the following, some prerequisite properties are presented to serve as the foundation for solving the above problem.
Definition 1.
The soft-thresholding operator is defined as:
$\mathcal{S}_\varepsilon(\sigma) = \begin{cases} \sigma - \varepsilon, & \text{if } \sigma > \varepsilon \\ \sigma + \varepsilon, & \text{if } \sigma < -\varepsilon \\ 0, & \text{otherwise} \end{cases}$
where $\varepsilon > 0$, and the operator can be applied to vectors or matrices element-wise.
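In NumPy, Definition 1 admits the usual one-line closed form (a sketch; the function name is ours):

```python
import numpy as np

def soft_threshold(sigma, eps):
    """Element-wise soft-thresholding operator S_eps of Definition 1."""
    return np.sign(sigma) * np.maximum(np.abs(sigma) - eps, 0.0)
```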
In consideration of Definition 1, the following helpful theorem is given, as stated in [13].
Theorem 1.
Suppose the singular value decomposition (SVD) of a matrix $Y \in \mathbb{R}^{M \times N}$ is $Y = U \Sigma V^T$, $\Sigma = \mathrm{diag}(\{\sigma_i\}_{1 \le i \le \min(M,N)})$, where $U \in \mathbb{R}^{M \times r}$ and $V \in \mathbb{R}^{N \times r}$ are orthonormal matrices and $r = \mathrm{rank}(Y)$. Then, for any matrix $Y \in \mathbb{R}^{M \times N}$ and $\lambda \ge 0$, the following equations hold:
$\mathcal{S}_\lambda(Y) = \arg\min_{X \in \mathbb{R}^{M \times N}} \left\{ \frac{1}{2} \|X - Y\|_F^2 + \lambda \|X\|_1 \right\}$
and
$U \mathcal{S}_\lambda(\Sigma) V^T = \arg\min_{X \in \mathbb{R}^{M \times N}} \left\{ \frac{1}{2} \|X - Y\|_F^2 + \lambda \|X\|_* \right\}$
where $U \Sigma V^T$ is the SVD of matrix $Y$.
Lemma 1.
Suppose $\Psi \in \mathbb{R}^{N \times N}$ is an orthogonal real matrix, and $\Psi^T$ denotes the transpose of $\Psi$. Then the Frobenius norm of any matrix $A$ is invariant under the unitary transformation, that is:
$\|A\Psi\|_F^2 = \|\Psi A\|_F^2 = \|A\|_F^2$
Proof. 
Since matrix $\Psi$ is orthogonal, we have $\Psi^T \Psi = \Psi \Psi^T = I_N$, so the inverse of $\Psi$ equals $\Psi^T$. Then, following the definition of the trace, we have:
$\|A\Psi\|_F^2 = \mathrm{Tr}(\Psi^T A^T A \Psi) = \mathrm{Tr}(A^T A \Psi \Psi^T) = \mathrm{Tr}(A^T A) = \|A\|_F^2$
and
$\|\Psi A\|_F^2 = \mathrm{Tr}(A^T \Psi^T \Psi A) = \mathrm{Tr}(A^T A) = \|A\|_F^2$ □
Then, the GBT Regularization by Alternating Direction Method of Multipliers (GBTR-ADMM) algorithm is proposed to solve Problem (6). To handle the linear constraint in Problem (6), the augmented Lagrangian function is formulated as:
$L(X, Z, W, \beta) = \|X\|_* + \lambda \|\Psi^{-1}W\|_1 + \langle Z, X - W \rangle + \frac{\beta}{2} \|X - W\|_F^2$
where $Z$ is the Lagrange multiplier and $\beta > 0$ is the penalty parameter. For a comparative analysis, both a fixed parameter $\beta$ and an adaptive update strategy for $\beta$ are studied in the experimental analysis section. GBTR-ADMM updates the variables in a three-step iterative approach with $\beta$ fixed. The augmented Lagrangian function $L(X, Z, W, \beta)$ is minimized with respect to the variables in a Gauss-Seidel manner: in each step, a single variable is updated while the remaining variables are fixed. By updating the variables alternately, each subproblem is solved with a closed form solution. More specifically, the iterations of GBTR-ADMM are formulated as follows:
Firstly, the variable $X^{k+1}$ is computed with fixed values of $Z^k$ and $W^k$. Then $L(X, Z, W, \beta)$ is minimized as follows:
$X^{k+1} = \arg\min_X L(X, Z^k, W^k, \beta) = \arg\min_X \|X\|_* + \lambda \|\Psi^{-1}W^k\|_1 + \langle Z^k, X - W^k \rangle + \frac{\beta}{2} \|X - W^k\|_F^2$
Removing the constant terms in Function (14), it can be rewritten as:
$X^{k+1} = \arg\min_X \|X\|_* + \langle Z^k, X - W^k \rangle + \frac{\beta}{2} \|X - W^k\|_F^2 = \arg\min_X \|X\|_* + \frac{\beta}{2} \left\| X - \left( W^k - \frac{1}{\beta} Z^k \right) \right\|_F^2$
Obviously, the above optimization problem has the same form as defined in Theorem 1. Thus, a closed form solution is obtained as follows:
$X^{k+1} = U \mathcal{S}_{1/\beta}(\Sigma_P) V^T$
where $U$, $V$, and $\Sigma_P$ are obtained from the SVD of matrix $P = W^k - \frac{1}{\beta} Z^k$.
Secondly, the variable $W^{k+1}$ is updated with $X^{k+1}$ and $Z^k$ fixed. The minimization of $L(X, Z, W, \beta)$ proceeds as follows:
$W^{k+1} = \arg\min_W L(X^{k+1}, Z^k, W, \beta) = \arg\min_W \|X^{k+1}\|_* + \lambda \|\Psi^{-1}W\|_1 + \langle Z^k, X^{k+1} - W \rangle + \frac{\beta}{2} \|X^{k+1} - W\|_F^2$
Ignoring the constant term in this step, we can obtain the following optimization problem:
$W^{k+1} = \arg\min_W \lambda \|\Psi^{-1}W\|_1 + \langle Z^k, X^{k+1} - W \rangle + \frac{\beta}{2} \|X^{k+1} - W\|_F^2 = \arg\min_W \lambda \|\Psi^{-1}W\|_1 + \frac{\beta}{2} \left\| W - \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right\|_F^2$
Taking into consideration the orthogonal invariance of the Frobenius norm, as stated in Lemma 1, we obtain the following theorem.
Theorem 2.
The closed form solution of Problem (18) is defined as follows:
$W^{k+1} = \arg\min_W L(X^{k+1}, Z^k, W, \beta) = \Psi \mathcal{S}_{\lambda/\beta}\left( \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right)$
Proof. 
Since matrix $\Psi^{-1}$ is orthogonal, the following equation follows from Lemma 1:
$\frac{\beta}{2} \left\| W - \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right\|_F^2 = \frac{\beta}{2} \left\| \Psi^{-1} W - \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right\|_F^2$
Defining $Q = \Psi^{-1}W$, we have:
$Q^{k+1} = \arg\min_Q L(X^{k+1}, Z^k, Q, \beta) = \arg\min_Q \lambda \|Q\|_1 + \frac{\beta}{2} \left\| Q - \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right\|_F^2$
By Theorem 1, the closed form solution of the above Problem (21) is obtained as follows:
$Q^{k+1} = \arg\min_Q L(X^{k+1}, Z^k, Q, \beta) = \mathcal{S}_{\lambda/\beta}\left( \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right)$
Substituting $Q = \Psi^{-1}W$ into Problem (22), the following closed form solution is available:
$W^{k+1} = \arg\min_W L(X^{k+1}, Z^k, W, \beta) = \Psi \mathcal{S}_{\lambda/\beta}\left( \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right)$ □
In view of the second constraint term in Problem (6), the final form of W k + 1 is defined as:
$W^{k+1} = \pi_\Omega(M) + \pi_{\Omega^c}(W^{k+1})$
Thirdly, with the values of $X^{k+1}$ and $W^{k+1}$ derived in the above two steps, the Lagrange multiplier is updated as:
$Z^{k+1} = Z^k + \beta (X^{k+1} - W^{k+1})$
The main procedure of GBTR-ADMM is shown in Algorithm 1. Note that the choice of the penalty parameter $\beta$ strongly influences the performance of the ADMM algorithm. As it is difficult to choose an optimal value, an adaptive renewal mechanism is preferred in practical applications. The performance of GBTR-ADMM with varying penalty values is studied in Section 7.1. Moreover, the convergence of the ADMM-based method is theoretically demonstrated in [20].
Algorithm 1: The proposed GBTR-ADMM algorithm.
Initialization: $X^1 = \pi_\Omega(M)$, $W^1 = X^1$, $Z^1 = X^1$, $\beta$, $\lambda$
While $\|X^{k+1} - X^k\|_F > \xi$ do
1: $X^{k+1} = U \mathcal{S}_{1/\beta}(\Sigma_P) V^T$
2: $W^{k+1} = \Psi \mathcal{S}_{\lambda/\beta}\left( \Psi^{-1} \left( X^{k+1} + \frac{1}{\beta} Z^k \right) \right)$
   In consideration of the constraint in Problem (6): $W^{k+1} = \pi_\Omega(M) + \pi_{\Omega^c}(W^{k+1})$
3: $Z^{k+1} = Z^k + \beta (X^{k+1} - W^{k+1})$
End While
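For illustration, a minimal NumPy sketch of Algorithm 1 with a fixed penalty β is given below. It assumes the data matrix is arranged so that Ψ acts along its rows (consistent with the node-indexed matrices of Table 2) and uses Ψᵀ in place of Ψ⁻¹; the parameter defaults are illustrative, not from the paper:

```python
import numpy as np

def gbtr_admm(M_obs, mask, Psi, lam=0.01, beta=0.1, xi=1e-4, max_iter=500):
    """Sketch of Algorithm 1 (GBTR-ADMM) with a fixed penalty beta.

    M_obs: observed data (zeros outside Omega); mask: boolean Omega;
    Psi: orthogonal GBT basis, so Psi^{-1} = Psi^T.
    """
    X = M_obs.copy()
    W = X.copy()
    Z = X.copy()
    for _ in range(max_iter):
        # Step 1: X-update by singular value thresholding.
        U, s, Vt = np.linalg.svd(W - Z / beta, full_matrices=False)
        X_new = U @ np.diag(np.maximum(s - 1.0 / beta, 0.0)) @ Vt
        # Step 2: W-update by soft-thresholding in the GBT domain.
        P = Psi.T @ (X_new + Z / beta)
        W = Psi @ (np.sign(P) * np.maximum(np.abs(P) - lam / beta, 0.0))
        # Re-impose the observed entries (constraint of Problem (6)).
        W[mask] = M_obs[mask]
        # Step 3: dual ascent update of the Lagrange multiplier.
        Z = Z + beta * (X_new - W)
        if np.linalg.norm(X_new - X, 'fro') < xi:
            X = X_new
            break
        X = X_new
    return X
```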

5. The Proposed Method for Accelerated Convergence

The performance of ADMM is highly sensitive to the numbers of variables and constraints. As stated in [21,22], with multiple variables and constraints, more memory is required and the rate of convergence is reduced. Moreover, the convergence is not theoretically proven when the number of variables is greater than or equal to three. In optimization Problem (6), the two constraints are handled separately, as shown in Algorithm 1, which may slow down the convergence.
In this section, a new approach is proposed to solve Problem (6) with a faster convergence speed. Firstly, the two constraints in Problem (6) are merged into a single linear operator constraint, which accelerates convergence. Then, we introduce the GBT Regularization by Accelerated Alternating Direction Method of Multipliers (GBTR-A2DM2). The convergence rate of the A2DM2 [23,24] algorithm is $O(1/k^2)$, while the convergence rate of ADMM (as in Algorithm 1) is $O(1/k)$.

5.1. The Fusion of Two Constraints

In consideration of the two constraints in Problem (6), $X = W$ and $\pi_\Omega(W) = \pi_\Omega(M)$, two linear operators $\mathcal{A}, \mathcal{B}: \mathbb{R}^{M \times N} \to \mathbb{R}^{2M \times 2N}$ are defined as follows:
$\mathcal{A}(X) = \begin{pmatrix} X & 0 \\ 0 & 0 \end{pmatrix}, \quad \mathcal{B}(W) = \begin{pmatrix} -W & 0 \\ 0 & \pi_\Omega(W) \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 0 \\ 0 & \pi_\Omega(M) \end{pmatrix}$
where $C \in \mathbb{R}^{2M \times 2N}$ is a constant matrix.
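A literal sketch of these block operators follows; note the minus sign on the top-left block of $\mathcal{B}(W)$ is inferred (it is lost in the extracted equation) so that the single constraint $\mathcal{A}(X) + \mathcal{B}(W) = C$ reproduces both $X = W$ and $\pi_\Omega(W) = \pi_\Omega(M)$:

```python
import numpy as np

def op_A(X):
    """A(X): embed X in the top-left block of a 2M x 2N matrix."""
    M, N = X.shape
    out = np.zeros((2 * M, 2 * N))
    out[:M, :N] = X
    return out

def op_B(W, mask):
    """B(W): -W top-left, pi_Omega(W) bottom-right (sign inferred)."""
    M, N = W.shape
    out = np.zeros((2 * M, 2 * N))
    out[:M, :N] = -W
    out[M:, N:] = np.where(mask, W, 0.0)
    return out

def const_C(M_obs, mask):
    """C: pi_Omega(M) in the bottom-right block."""
    M, N = M_obs.shape
    out = np.zeros((2 * M, 2 * N))
    out[M:, N:] = np.where(mask, M_obs, 0.0)
    return out
```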
Thus, Problem (6) is reformulated as follows:
$\min_{X,W} \left\{ \|X\|_* + \lambda \|\Psi^{-1}W\|_1 \right\}, \quad \text{subject to } \mathcal{A}(X) + \mathcal{B}(W) = C$
Also, the Lagrange function for the above optimization problem is:
$L(X, Z, W, \beta) = \|X\|_* + \lambda \|\Psi^{-1}W\|_1 + \langle Z, \mathcal{A}(X) + \mathcal{B}(W) - C \rangle + \frac{\beta}{2} \|\mathcal{A}(X) + \mathcal{B}(W) - C\|_F^2$
Similar to Algorithm 1, GBTR-A2DM2 decomposes the minimization of $L(X, Z, W, \beta)$ into several subproblems. In each subproblem, GBTR-A2DM2 updates one variable while keeping the other variables fixed. Specifically, the optimization scheme of GBTR-A2DM2 for Problem (27) proceeds in the following steps:
$X^{k+1} = \arg\min_X L(X, Z^k, W^k, \beta) = \arg\min_X \|X\|_* + \frac{\beta}{2} \left\| \mathcal{A}(X) + \mathcal{B}(W^k) - C + \frac{1}{\beta} Z^k \right\|_F^2$
$W^{k+1} = \arg\min_W L(X^{k+1}, Z^k, W, \beta) = \arg\min_W \lambda \|\Psi^{-1}W\|_1 + \frac{\beta}{2} \left\| \mathcal{A}(X^{k+1}) + \mathcal{B}(W) - C + \frac{1}{\beta} Z^k \right\|_F^2$
$Z^{k+1} = Z^k + \beta \left( \mathcal{A}(X^{k+1}) + \mathcal{B}(W^{k+1}) - C \right)$
The pseudocode of GBTR-A2DM2 algorithm is shown in Algorithm 2. Next, we will discuss the accelerated technique of GBTR-A2DM2 with a restarting rule.
Algorithm 2: GBTR-A2DM2 algorithm using the restarting rule.
Initialization: $W^0 = \hat{W}^0 \in \mathbb{R}^{M \times N}$, $Z^0 = \hat{Z}^0 \in \mathbb{R}^{M \times N}$, $\tau > 0$, $a_0 = 1$, $\eta = 0.999$
While $\|X^{k+1} - X^k\|_F > \xi$ do
1: Update $X^k$ by Equation (29)
2: Update $W^k$ by Equation (30)
3: Update $Z^k$ by Equation (31)
4: $m_k \leftarrow \frac{1}{\tau} \|Z^k - \hat{Z}^k\|_F^2 + \tau \|\mathcal{B}(W^k - \hat{W}^k)\|_F^2$
If $m_k < \eta m_{k-1}$ Then
5: $a_{k+1} = \frac{1 + \sqrt{1 + 4 a_k^2}}{2}$
6: $\hat{W}^{k+1} = W^k + \frac{a_k - 1}{a_{k+1}} (W^k - W^{k-1})$
7: $\hat{Z}^{k+1} = Z^k + \frac{a_k - 1}{a_{k+1}} (Z^k - Z^{k-1})$
Else
8: $a_{k+1} = 1$, $\hat{W}^{k+1} = W^k$, $\hat{Z}^{k+1} = Z^k$
9: $m_k = \eta^{-1} m_{k-1}$
End If
End While

5.2. The Accelerated Technique

Since the objective function in Problem (27) is not strongly convex, the accelerated ADMM method with a restart rule is employed. To determine when to restart the value assignment, the primal error and the dual error are combined:
$m_k \leftarrow \frac{1}{\tau} \|Z^k - \hat{Z}^k\|_F^2 + \tau \|\mathcal{B}(W^k - \hat{W}^k)\|_F^2$
where $\hat{Z}^k$ and $\hat{W}^k$ denote the accelerated values computed in steps 6–7 of Algorithm 2. At each iteration, $m_k$ is compared with $m_{k-1}$; if $m_k < \eta m_{k-1}$, where $\eta$ is set to 0.999, the algorithm is accelerated via steps 5–7. Otherwise, the method is restarted via steps 8–9. In comparison with GBTR-ADMM, Algorithm 2 has a higher convergence rate. Also, the convergence of Algorithm 2 is guaranteed by A2DM2 with a restarting rule [23].
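The acceleration/restart decision (steps 5–9 of Algorithm 2) can be sketched as a small helper; the function name and return convention are ours:

```python
import numpy as np

def restart_step(m_k, m_prev, a_k, W_k, W_prev, Z_k, Z_prev, eta=0.999):
    """One acceleration/restart decision of Algorithm 2 (steps 5-9).

    Returns (a_next, W_hat, Z_hat, m_k) for the next iteration.
    """
    if m_k < eta * m_prev:
        # Combined error decreased enough: take a Nesterov-style step.
        a_next = (1.0 + np.sqrt(1.0 + 4.0 * a_k ** 2)) / 2.0
        gamma = (a_k - 1.0) / a_next
        return a_next, W_k + gamma * (W_k - W_prev), Z_k + gamma * (Z_k - Z_prev), m_k
    # Otherwise restart: reset the momentum and inflate the error term.
    return 1.0, W_k, Z_k, m_prev / eta
```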

6. Time Complexity Analysis

In this part, the computational complexity of the proposed algorithms is discussed. The calculation of a matrix inverse is expensive, with a time complexity of O(n³) (where n is the dimension of the invertible matrix). Since matrix Ψ is orthogonal, the expensive matrix inversion in our implementation can be replaced by its transposition. Thus, the dominant computational cost of GBTR-ADMM and GBTR-A2DM2 is the SVD executed in each iteration. As pointed out in [25], the time complexity of a full SVD is O(MN²). In our implementation, the well-known PROPACK [26] package is utilized to perform a partial SVD for the proposed algorithms. Owing to the low-rank property of the objective matrix, it is inefficient to compute the full SVD; to capture the dominant energy of the objective matrix, only the singular values exceeding a certain threshold are necessary. A limitation of PROPACK is that it cannot automatically determine how many singular values to compute; the number must be predefined. Thus, we estimate the number of singular values and pass it to PROPACK in each iteration.
Suppose $svp_k$ is the number of positive singular values of $X^k$, and $sv_k$ is the number of singular values to be computed at the $k$-th iteration. Then, the following update strategy [27] is used:
$sv_{k+1} = \begin{cases} svp_k + 1, & \text{if } svp_k < sv_k \\ svp_k + 5, & \text{if } svp_k = sv_k \end{cases}$
where the initial estimate $sv_0$ is set to 10. Benefiting from the software package PROPACK, the time complexity for an $M \times N$ matrix with rank $r$ is O(rMN) per iteration. Hence, the total time complexity of our proposed algorithms is O(rMN). In contrast, the state-of-the-art algorithms for the matrix completion problem [15,17] demand a complexity of O(r²MN) for each iteration.
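The update strategy amounts to a one-line rule (a sketch):

```python
def next_sv(svp_k, sv_k):
    """Predict how many singular values to ask PROPACK for at the next
    iteration, per the update strategy above (sv_0 = 10 initially)."""
    return svp_k + 1 if svp_k < sv_k else svp_k + 5
```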

7. Performance Evaluation

In this section, we evaluate the performance of GBTR-ADMM and GBTR-A2DM2. The experimental datasets and their topological structures are described in Section 3. Since the proposed algorithms are heavily influenced by several input parameters, it is necessary to choose optimal parameters to maximize performance. With the optimal parameters for GBTR-ADMM and GBTR-A2DM2, the recovery accuracy and the convergence properties are compared with state-of-the-art algorithms. Finally, the energy consumption of the proposed algorithms is compared with state-of-the-art data gathering methods for WSNs. Simulation results show that GBTR-ADMM and GBTR-A2DM2 can greatly reduce energy consumption in WSNs; thus, the network lifetime is prolonged.
To measure the performance of the proposed algorithms, the reconstructed data matrix $\hat{X}$ is obtained. The recovery performance is estimated by the normalized mean absolute error (NMAE):
$NMAE = \frac{\sum_{(i,j) \in \Omega^c} |\hat{X}(i,j) - X(i,j)|}{\sum_{(i,j) \in \Omega^c} |X(i,j)|}$
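In code, the metric reads as follows (a sketch; `mask` marks the observed set Ω, so its complement is Ω^c, and the denominator is taken as the sum of |X(i,j)| over Ω^c, the usual normalization):

```python
import numpy as np

def nmae(X_hat, X, mask):
    """NMAE over the unobserved entries Omega^c (the complement of mask)."""
    omega_c = ~mask
    return (np.abs(X_hat[omega_c] - X[omega_c]).sum()
            / np.abs(X[omega_c]).sum())
```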

7.1. Parameter Setting

In this subsection, the choice of optimal parameters for GBTR-ADMM is discussed. Previous studies have shown that global convergence of the ADMM algorithm holds for any fixed β > 0; however, different parameter values result in different convergence speeds. Thus, the input values to Algorithm 1, listed in Table 3, are selected by experience to obtain the best performance. The performance of GBTR-ADMM is also studied with different values of β. The variation of the objective function value of Problem (13) with the number of iterations is shown in Figure 4. As we can see, β = β₀ achieves the best performance, and the descent becomes slower when β is too large or too small. This is because the penalty parameter β trades off minimizing the primal residual against minimizing the dual residual: a large penalty value may reduce the primal residual, but at the expense of an increased dual residual, and vice versa.
Figure 4 demonstrates the results on the synthesized dataset, and the optimal value of β varies across datasets. Therefore, an adaptive penalty update method is preferred, based on a previous study [20]. The update strategy is formulated as follows:
$\beta_{k+1} = \min(\beta_{\max}, \rho \beta_k)$
where $\beta_{\max}$ denotes the maximum value of $\beta_k$. The value of the variable $\rho$ is updated as:
$\rho = \begin{cases} \rho_0, & \text{if } \beta_k \max\left\{ \|X^{k+1} - X^k\|_F^2, \|W^{k+1} - W^k\|_F^2 \right\} / \|C\|_F < \kappa \\ 1, & \text{otherwise} \end{cases}$
where $\rho_0 > 0$ is a constant and $\kappa$ is a predefined threshold. Clearly, when the normalized maximum of the residuals $\|X^{k+1} - X^k\|_F^2$ and $\|W^{k+1} - W^k\|_F^2$ falls below the threshold, the value of $\beta_{k+1}$ increases to $\rho_0 \beta_k$. Thus, the convergence speed is improved.
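A sketch of this adaptive update follows; the values of β_max, ρ₀, and κ are illustrative, as the paper does not report them:

```python
import numpy as np

def update_beta(beta_k, X_new, X_old, W_new, W_old, C_norm,
                beta_max=1e6, rho0=1.5, kappa=1e-3):
    """Adaptive penalty update per the strategy above; the default
    values of beta_max, rho0, and kappa are our own assumptions."""
    res = beta_k * max(np.linalg.norm(X_new - X_old, 'fro') ** 2,
                       np.linalg.norm(W_new - W_old, 'fro') ** 2) / C_norm
    rho = rho0 if res < kappa else 1.0
    return min(beta_max, rho * beta_k)
```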
The effect of the sparsity regularization parameter λ is also analyzed. Figure 5 shows the variation of the recovery error with different parameter values. As can be seen, the recovery error is quite large for small values of λ. As λ increases, the recovery error declines rapidly and remains stable for λ > 0.01. Therefore, the optimal value of the sparsity regularization parameter λ is set to 0.01 in our experiments. Since similar trends are obtained for GBTR-A2DM2, they are omitted here.

7.2. Recovery Accuracy

In this subsection, we compare the recovery accuracy of GBTR-ADMM and GBTR-A2DM2 with state-of-the-art algorithms for matrix completion. The first chosen method is the Spatio-Temporal Compressive Data Collection (STCDG) [15]. The second is the Compressive Data Collection (CDC) method [11]. GBTR-ADMM is used to solve the optimization Problem (6), while GBTR-A2DM2 is used for the optimization Problem (27) with only one constraint.
Simulations are executed on both the real datasets and the synthesized dataset, which are described in detail in Section 3. For each parameter setting of the simulation, the results are averaged over 50 independent trials.
Figure 6, Figure 7 and Figure 8 show that our GBTR-based methods can reconstruct the missing values with high accuracy. In general, the recovery errors of all reconstruction algorithms decrease rapidly as the sampling ratio increases; when the sampling ratio is high enough, all reconstruction methods achieve small recovery errors. Since our two proposed GBTR-based methods solve the same matrix completion problem, their performances are nearly the same. Figure 6 shows the recovery errors on the GreenOrbs temperature data. As can be seen, when the sampling ratio is 1%, our proposed methods achieve about 25% recovery error while the error of the other two algorithms exceeds 80%.
Similar results can be obtained from Figure 7, which is simulated on the humidity dataset. The recovery error of the GBTR-based methods is still much lower than that of STCDG and CDC when the sampling ratio is small. When the sampling ratio is 1%, GBTR-ADMM and GBTR-A2DM2 can reconstruct the missing values with recovery errors of less than 20%, while the recovery errors of STCDG and CDC are nearly 100%.
In Figure 8, the experimental results show that the GBTR-based methods outperform STCDG and CDC by an even larger margin. Compared with Figure 6 and Figure 7, Figure 8 shows that GBTR-ADMM and GBTR-A2DM2 achieve their best performance on the synthesized dataset: the recovery error on the synthesized dataset is smaller than on the real datasets at the same sampling ratio. The reason is that the synthesized data have much better sparsity under the GBT basis than the two real datasets.

7.3. Convergence Behavior Analysis

In this subsection, the convergence performance is studied on the synthesized dataset. The compared methods are SVD and STCDG. For each method, we set the same stopping condition, with tolerance error ξ = 10⁻⁴. Figure 9 shows the number of iterations necessary to obtain an accurate reconstruction at different sampling ratios. As can be seen from Figure 9, the convergence speed of our two proposed methods surpasses that of SVD and STCDG; SVD has the slowest convergence rate of the four methods, and STCDG converges faster than SVD. Although the recovery accuracies of GBTR-A2DM2 and GBTR-ADMM are similar, as shown in Figure 6, Figure 7 and Figure 8, GBTR-A2DM2 converges much faster than GBTR-ADMM. Note that GBTR-ADMM needs nearly 160 iterations to converge at a sampling ratio of 0.9, while only about 40 iterations are needed for GBTR-A2DM2. Even when the sampling ratio is 0.5, the necessary number of iterations for GBTR-A2DM2 is about 125, less than half that of GBTR-ADMM.
Next, with the sampling ratio fixed at 0.6, the relative recovery errors of all the compared methods are analyzed. As can be seen from Figure 10, compared to SVD and STCDG, both GBTR-ADMM and GBTR-A2DM2 attain lower error within a few iterations. Clearly, GBTR-A2DM2 converges much faster, converging in about 100 iterations. Also, within no more than 250 iterations, both GBTR-ADMM and GBTR-A2DM2 terminate as the relative recovery errors drop below the tolerance error. Meanwhile, the relative errors of SVD and STCDG are about one order of magnitude larger than those of GBTR-ADMM and GBTR-A2DM2. In general, compared with the state-of-the-art methods for matrix completion, our proposed methods achieve a smaller recovery error at the same number of iterations.

7.4. Energy Consumption and Network Lifetime

In this subsection, the energy consumption of the proposed algorithms for data gathering is analyzed. Five hundred nodes are randomly deployed in a 1000 m × 1000 m area. The topology is shown in Figure 2, where the sink node is deployed in the center. The data transmission and recovery process is fulfilled in three steps. First, the sink node broadcasts the sampling ratio through the whole network; in our setting, the sampling ratio determines the probability that a node gathers data. In the second step, the selected nodes transmit the gathered data to the sink node. Finally, the missing data are reconstructed by running GBTR-ADMM and GBTR-A2DM2 at the sink node. The compared methods are CDC and STCDG. In the traditional data gathering method, all sensor nodes are required to transmit their sampled data to the sink node; thus, the traditional method is selected as the baseline for comparison.
The energy consumption model of [28] is employed in our simulation. The detailed simulation parameters are presented in Table 4. The initial energy of every sensor node is 2 J, and each packet contains 64 bits. The network is assumed to be symmetrical. The energy consumption for transmitting one bit, defined as E_Tx, is 100 nJ; meanwhile, receiving one bit consumes E_Rx = 120 nJ. E_Amp is the unit energy consumption of the power amplifying circuit.
The synthesized dataset is used in this experiment. The network lifetimes of CDC, STCDG, and our proposed methods are evaluated; detailed results are shown in Figure 11. Note that the total energy consumption of the baseline method is essentially constant, because the baseline method transmits all the sensor data to the sink node no matter how the sampling ratio varies. In contrast, the total power consumption of CDC, STCDG, and the GBTR-based methods increases with the sampling ratio, because more sensor data need to be transmitted when the sampling ratio increases; thus, the network lifetime decreases. GBTR-ADMM and GBTR-A2DM2 outperform CDC at the same sampling ratio; however, the energy consumption of our proposed methods equals that of the baseline method when the sampling ratio is exactly 1. Note that the lifetime of CDC is smaller than the baseline when the sampling ratio is higher than 75%. This phenomenon can be explained as follows: in CDC, all sensor nodes transmit the data M times, which is the number necessary to reconstruct the original signals, whereas in the baseline method the ordinary nodes need only one data transmission plus the necessary relay transmissions for each sampling. Thus, when the sampling ratio is higher than a specific threshold, the total number of transmissions for CDC is larger than that of the baseline method. In addition, since STCDG and our proposed methods are all based on matrix completion theory, their lifetime curves coincide with each other.

8. Conclusions and Future Works

In this paper, the data gathering problem based on matrix completion theory is studied. In addition to the low-rank property, the sensed data are observed to be sparse under the graph based transform. By taking full advantage of these features, two novel reconstruction algorithms (named GBTR-ADMM and GBTR-A2DM2) are proposed. The time complexity is also analyzed, which shows that their complexity is low. Several experiments on both real and synthesized datasets are carried out. The experimental results show that our proposed methods outperform the state-of-the-art algorithms for data gathering problems in WSNs. Furthermore, GBTR-A2DM2 is observed to converge much faster than GBTR-ADMM. Future work will focus on applying our proposed algorithms to datasets from real networks that may exhibit topological structure beyond random networks, such as scale-free or small-world networks.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under Grant No. 61371135 and by Beihang University Innovation & Practice Fund for Graduate under Grant YCSJ-02-2016-04. The authors are thankful to the anonymous reviewers for their earnest reviews and helpful suggestions.

Author Contributions

Donghao Wang deduced the original optimization methods and implemented the detailed algorithm design. Jiangwen Wan revised the manuscript. Zhipeng Nie performed the experiments. Qiang Zhang and Zhijie Fei analyzed the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yoon, S.; Shahabi, C. The Clustered AGgregation (CAG) technique leveraging spatial and temporal correlations in wireless sensor networks. ACM Trans. Sens. Netw. 2007, 3, 3. [Google Scholar] [CrossRef]
  2. Pham, N.D.; Le, T.D.; Park, K.; Choo, H. SCCS: Spatiotemporal clustering and compressing schemes for efficient data collection applications in WSNs. Int. J. Commun. Syst. 2010, 23, 1311–1333. [Google Scholar] [CrossRef]
  3. Lachowski, R.; Pellenz, M.E.; Penna, M.C.; Jamhour, E.; Souza, R.D. An efficient distributed algorithm for constructing spanning trees in wireless sensor networks. Sensors 2015, 15, 1518–1536. [Google Scholar] [CrossRef] [PubMed]
  4. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  5. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  6. Gleichman, S.; Eldar, Y.C. Blind compressed sensing. IEEE Trans. Inf. Theory 2011, 57, 6958–6975. [Google Scholar] [CrossRef]
  7. Li, S.X.; Gao, F.; Ge, G.N.; Zhang, S.Y. Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 2012, 58, 5035–5041. [Google Scholar] [CrossRef]
  8. Luo, C.; Wu, F.; Sun, J.; Chen, C.W. Compressive data gathering for large-scale wireless sensor networks. In Proceedings of the 15th ACM International Conference on Mobile Computing and Networking, Beijing, China, 20–25 September 2009; pp. 145–156.
  9. Caione, C.; Brunelli, D.; Benini, L. Distributed compressive sampling for lifetime optimization in dense wireless sensor networks. IEEE Trans. Ind. Inf. 2012, 8, 30–40. [Google Scholar] [CrossRef]
  10. Xiang, L.; Luo, J.; Rosenberg, C. Compressed data aggregation: Energy-efficient and high-fidelity data collection. IEEE ACM Trans. Netw. 2013, 21, 1722–1735. [Google Scholar] [CrossRef]
  11. Liu, X.Y.; Zhu, Y.; Kong, L.; Liu, C.; Gu, Y.; Vasilakos, A.V.; Wu, M.Y. CDC: Compressive data collection for wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 2188–2197. [Google Scholar] [CrossRef]
  12. Candes, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef]
  13. Cai, J.F.; Candes, E.J.; Shen, Z.W. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  14. Roughan, M.; Zhang, Y.; Willinger, W.; Qiu, L.L. Spatio-temporal compressive sensing and internet traffic matrices. IEEE ACM Trans. Netw. 2012, 20, 662–676. [Google Scholar] [CrossRef]
  15. Cheng, J.; Ye, Q.; Jiang, H.; Wang, D.; Wang, C. STCDG: An efficient data gathering algorithm based on matrix completion for wireless sensor networks. IEEE Trans. Wirel. Commun. 2013, 12, 850–861. [Google Scholar] [CrossRef]
  16. Liu, Y.; He, Y.; Li, M.; Wang, J.; Liu, K.; Li, X. Does wireless sensor network scale? A measurement study on GreenOrbs. IEEE Trans. Parallel Distrib. Syst. 2013, 24, 1983–1993. [Google Scholar] [CrossRef]
  17. Kong, L.; Xia, M.; Liu, X.Y.; Chen, G.; Gu, Y.; Wu, M.Y.; Liu, X. Data loss and reconstruction in wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 2014, 25, 2818–2828. [Google Scholar] [CrossRef]
  18. Candes, E.; Recht, B. Exact matrix completion via convex optimization. Commun. ACM 2012, 55, 111–119. [Google Scholar] [CrossRef]
  19. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98. [Google Scholar] [CrossRef]
  20. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  21. He, B.; Tao, M.; Yuan, X. Alternating direction method with Gaussian back substitution for separable convex programming. SIAM J. Optim. 2012, 22, 313–340. [Google Scholar] [CrossRef]
  22. Hu, Y.; Zhang, D.; Ye, J.; Li, X.; He, X. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2117–2130. [Google Scholar] [CrossRef] [PubMed]
  23. Goldstein, T.; O′Donoghue, B.; Setzer, S.; Baraniuk, R. Fast alternating direction optimization methods. SIAM J. Imaging Sci. 2014, 7, 1588–1623. [Google Scholar] [CrossRef]
  24. Kadkhodaie, M.; Christakopoulou, K.; Sanjabi, M.; Banerjee, A. Accelerated alternating direction method of multipliers. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 497–506.
  25. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2012. [Google Scholar]
  26. Larsen, R.M. PROPACK-Software for Large and Sparse SVD Calculations. Available online: http://sun.stanford.edu/~rmunk/PROPACK (accessed on 19 September 2016).
  27. Toh, K.C.; Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 2010, 6, 615–640. [Google Scholar]
  28. Heinzelman, W.R.; Chandrakasan, A.; Balakrishnan, H. Energy-efficient communication protocol for wireless microsensor networks. In Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2000; p. 223.
Figure 1. The real deployment topology of GreenOrbs.
Figure 2. The random topology of synthesized data with 500 nodes in a 1000 m × 1000 m area.
Figure 3. The sorted GBT coefficients of the datasets.
Figure 4. The performance of GBTR-ADMM with respect to different β.
Figure 5. The effect of the sparsity regularization parameter λ.
Figure 6. Recovery errors on the temperature dataset.
Figure 7. Recovery errors on the humidity dataset.
Figure 8. Recovery errors on the synthesized dataset.
Figure 9. Necessary number of iterations for different algorithms.
Figure 10. Variation of recovery errors with respect to iteration numbers for different algorithms.
Figure 11. Network lifetime comparison.
Table 1. Summary of notations.
M: Number of time slots
N: Number of sensor nodes
τ: The observed ratio
r: The matrix rank
λ: The GBT sparsity regularization parameter
β: The Lagrange penalty parameter
X: The original data matrix
X̂: The reconstructed data matrix
M: The observed data matrix
D: The degree matrix
A: The adjacency matrix
L: The Laplacian matrix
Ψ: The GBT matrix
W: The introduced auxiliary variable
Z: The Lagrange multiplier
Table 2. The experimental datasets.
Data Name | Data Type | Selected Data Matrix | Time Interval
GreenOrbs | Temperature | 326 × 500 | 5 min
GreenOrbs | Humidity | 326 × 500 | 5 min
Synthesized | AR model | 500 × 500 | -
Table 3. Input values to Algorithm 1.
Parameter Name | λ | β
Set Value | 0.01 | $\beta_0 = 3.0/\min(M, N)$
Table 4. Experimental parameters.
Parameter Name | Value
Nodes number | 500
Transmission range | 100 m
Initial energy | 2 J
Data size | 64 bits
E_Tx | 100 nJ/bit
E_Rx | 120 nJ/bit
E_Amp | 0.1 nJ/(bit·m²)
