A Compressed Sensing Measurement Matrix Construction Method Based on TDMA for Wireless Sensor Networks

Compressed sensing (CS) theory has been widely used for data aggregation in wireless sensor networks (WSNs) because it captures a large amount of information with a light transmission load. However, some issues remain unsolved. For instance, the measurement matrix is complex to construct, difficult to implement in hardware, and therefore unsuitable for WSNs with limited node energy. To address this problem, a random measurement matrix construction method based on Time Division Multiple Access (TDMA) is proposed, which combines the sparse random measurement matrix with the TDMA data transmission scheme of the nodes in a cluster. The reconstruction performance of this construction for different numbers of non-zero elements per column and for different signals was compared and analyzed through extensive experiments. It is demonstrated that the proposed matrix not only accurately reconstructs the original signal, but also reduces the construction complexity from O(MN) to O(d²N) (d ≪ M) while achieving the same reconstruction quality as the sparse random measurement matrix. Moreover, the construction method is further optimized by utilizing the theory of nested matrices: a TDMA-based semi-random and semi-deterministic measurement matrix construction method is also proposed, which reduces the construction complexity from O(d²N) to O(dN) and improves construction efficiency. The findings in this work allow more flexible and efficient compressed sensing for data aggregation in WSNs.


Introduction
The wireless sensor network (WSN) is a distributed sensing network containing a certain number of endpoints (i.e., various sensors), which can be used to sense and detect the outside world [1]. Since WSNs can obtain massive amounts of objective physical information, they have been widely applied in different areas, including military defense, industrial and agricultural control, urban management, biomedicine, environmental monitoring, rescue and disaster relief, and remote control of hazardous areas [2]. Due to the wireless nature of the sensors, they do not have a continuous supply of energy; as a result, energy shortage and recharging have become vital problems for the sensor nodes. Data aggregation is an appropriate technology to address the energy consumption problem of the nodes [3][4][5][6][7]. By utilizing data aggregation technology, the information obtained from different nodes can be aggregated directly, which can significantly save the nodes' energy.
In particular, compressed sensing (CS) theory [8] is widely used for WSN data aggregation. The CS method can recover a large amount of original information at the cost of transmitting a rather small amount of data, which matches the goal of WSN data aggregation. It is worth stressing that the data collected by the WSN should be temporally and spatially correlated in order to meet the compressibility requirement of CS theory. According to CS theory, sparse or compressible signals [22] can be sampled at a frequency much lower than the Nyquist sampling frequency and can be perfectly reconstructed by a nonlinear reconstruction algorithm. That is, an N-dimensional signal x can be sparsely decomposed under an N × N-dimensional sparse transformation matrix Ψ as

x = Ψθ, (1)

where θ is a k-sparse N-dimensional column vector; that is, θ has only k non-zero entries and k is much smaller than N. Then, projecting under the M × N-dimensional measurement matrix Φ, an M-dimensional observation y can be obtained:

y = Φx = ΦΨθ = Tθ, (2)

where M is much smaller than N and T = ΦΨ is called the sensing matrix. Candès et al. [23] gave a sufficient condition for the existence of a deterministic solution to Equation (2), which requires that T satisfy the Restricted Isometry Property (RIP). Specifically, a matrix T is said to satisfy the RIP if there exists δ ∈ (0, 1) such that every k-sparse signal θ satisfies Equation (3):

(1 − δ)‖θ‖₂² ≤ ‖Tθ‖₂² ≤ (1 + δ)‖θ‖₂². (3)

Thus, the reconstructed signal can be obtained by minimizing the l0 norm or l1 norm of θ:

θ̂ = argmin ‖θ‖₀ or θ̂ = argmin ‖θ‖₁, subject to y = Tθ. (4)

This problem can be solved approximately using linear programming or other convex optimization algorithms [24], which enables accurate reconstruction of the k-sparse signal θ with high probability.
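The pipeline above can be sketched end to end in a few lines of Python. This is a minimal illustration under simplifying assumptions (the sparse basis Ψ is taken as the identity, so x = θ, and a textbook OMP routine stands in for the reconstruction algorithm), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 256, 80, 8          # signal length, measurements (M << N), sparsity

# A k-sparse signal theta (sparse basis Psi taken as the identity here).
theta = np.zeros(N)
idx = rng.choice(N, size=k, replace=False)
theta[idx] = rng.normal(size=k)

# Gaussian random measurement matrix, entries ~ N(0, 1/M); observation y = Phi @ theta.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
y = Phi @ theta

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

theta_hat = omp(Phi, y, k)
print(np.max(np.abs(theta_hat - theta)))  # near machine precision on success
```

With M well above the k·log(N/k) threshold quoted later in the text, this recovery succeeds with very high probability.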
Although the RIP is widely applied in constructing measurement matrices, proving that a given matrix satisfies the RIP is very difficult within the RIP theoretical framework. To simplify matters, Donoho et al. [22] optimized the measurement matrix design using the column coherence derived from the exact reconstruction condition of the compressed sampled signal. Tropp et al. [24] pointed out that pursuit algorithms [e.g., basis pursuit (BP) and orthogonal matching pursuit (OMP)] can achieve accurate sparse approximation of the original signal when the redundant dictionary satisfies specific column coherence conditions. On this basis, Li et al. [25] proposed the column coherence-based theory for measurement matrix construction.
For the sake of completeness, a brief introduction to the column coherence method is given as follows. The interdependence (mutual coherence) coefficient µ of an arbitrary matrix U is first defined as

µ{U} = max_{i≠j} |⟨u_i, u_j⟩| / (‖u_i‖₂ ‖u_j‖₂), (5)

where u_i denotes the i-th column of U. For any M × N-dimensional real matrix with M much smaller than N, the lower bound is given as

µ ≥ √((N − M) / (M(N − 1))). (6)

According to the uncertainty principle [26], the l1 and l0 regularization problems are equivalent (i.e., the solution is unique) when the sparsity k of the sparse signal θ satisfies

k < (1/2)(1 + 1/µ{T}), (7)

where µ{T} is the interdependence coefficient of the sensing matrix T.
In order to satisfy (7), the measurement matrix should be designed so that the sensing matrix T has the smallest possible interdependence coefficient, since the sparsity on the left-hand side of (7) is determined only by the signal itself. Compared with the RIP, the column coherence condition is much simpler and more efficient to verify, and it can be proved that a sensing matrix satisfying the column coherence condition also satisfies the RIP.
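The interdependence coefficient is straightforward to compute numerically. The sketch below (an illustration with hypothetical sizes) also checks it against the lower bound and the sparsity condition quoted above:

```python
import numpy as np

def mutual_coherence(U):
    """Largest absolute normalized inner product between distinct columns of U."""
    Un = U / np.linalg.norm(U, axis=0)   # normalize every column
    G = np.abs(Un.T @ Un)                # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)             # ignore each column's self-correlation
    return float(G.max())

# Hypothetical sizes for illustration.
rng = np.random.default_rng(1)
M, N = 64, 256
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

mu = mutual_coherence(Phi)
welch = np.sqrt((N - M) / (M * (N - 1)))   # the lower bound quoted above
k_max = 0.5 * (1.0 + 1.0 / mu)             # sparsity bound from condition (7)
print(welch <= mu < 1.0)                   # True
```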

Gaussian Random Measurement Matrix
The Gaussian random measurement matrix is the most widely applied matrix in CS. For a Gaussian random measurement matrix Φ of dimension M × N, each element of Φ independently obeys a Gaussian distribution with zero mean and variance 1/M:

Φ_ij ~ N(0, 1/M). (8)

The Gaussian random measurement matrix is totally random. The RIP condition is satisfied with high probability when the number of measurements of the Gaussian random measurement matrix satisfies

M ≥ c·k·log(N/k), (9)

where c is a small constant and k is the sparsity of the measured signal. The Gaussian random matrix is uncorrelated with most sparse bases and can therefore be used as a universal measurement matrix.

Bernoulli Random Measurement Matrix
The Bernoulli random measurement matrix has properties similar to those of the Gaussian random measurement matrix. For a Bernoulli random measurement matrix Φ of dimension M × N, each element of Φ independently obeys a Bernoulli distribution. Specifically, each element of the matrix can be represented as

Φ_ij = +1/√M with probability 1/2, −1/√M with probability 1/2. (10)

As with the Gaussian random measurement matrix, the RIP condition is likely to be satisfied when the number of measurements of the Bernoulli random measurement matrix satisfies (9). Each element of this matrix takes only two values, so it is easier to construct than the Gaussian random measurement matrix.
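Both matrices are one-liners to generate; a small sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 64, 256

# Gaussian: entries i.i.d. with zero mean and variance 1/M.
Phi_gauss = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# Bernoulli: entries +/- 1/sqrt(M), each with probability 1/2.
Phi_bern = rng.choice([1.0, -1.0], size=(M, N)) / np.sqrt(M)

# Bernoulli columns have exactly unit l2 norm; Gaussian columns only on average.
print(np.linalg.norm(Phi_bern[:, 0]))  # 1.0
```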

Sparse Random Measurement Matrix
Both the Gaussian and Bernoulli random matrices are dense; however, Zhao et al. [27] pointed out that a small number of sparse projection values retain most of the information of the original signal, based on which a sparse random measurement matrix was proposed. The elements of the sparse random matrix are defined as

Φ_ij = √α × { +1, with probability 1/(2α); 0, with probability 1 − 1/α; −1, with probability 1/(2α) }, (11)

where α is a constant that determines the sparsity of the matrix. Obviously, as α increases, the number of zero elements in Φ rises and the matrix becomes sparser. In practice, this matrix has been shown to achieve high data reconstruction accuracy for the choices of α given in Equation (12); in the experiments in this paper, α = √N is used.
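A sketch of the generator, assuming the standard three-valued distribution above (sizes are illustrative):

```python
import numpy as np

def sparse_random_matrix(M, N, alpha, rng):
    """Sparse random matrix: sqrt(alpha) * (+1 w.p. 1/(2*alpha),
    0 w.p. 1 - 1/alpha, -1 w.p. 1/(2*alpha)), independently per entry."""
    p = 1.0 / (2.0 * alpha)
    vals = rng.choice([1.0, 0.0, -1.0], size=(M, N), p=[p, 1.0 - 2.0 * p, p])
    return np.sqrt(alpha) * vals

rng = np.random.default_rng(3)
M, N = 64, 256
Phi = sparse_random_matrix(M, N, alpha=np.sqrt(N), rng=rng)

# With alpha = sqrt(256) = 16, roughly 1 - 1/16 of the entries are zero.
print(np.mean(Phi == 0))
```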

Toeplitz and Circulant Measurement Matrix
The general Toeplitz and circulant matrices [28] are generated from a single row vector (t_1, t_2, ..., t_N): the Toeplitz matrix T is constant along each diagonal (its entry T_ij depends only on i − j), and the circulant matrix C is the special Toeplitz matrix in which each row is a cyclic shift of the row above it. The Toeplitz and circulant measurement matrix is thus generated by cyclic displacement of a row vector. Cyclic displacement is easy to implement in hardware, meaning that the general Toeplitz and circulant matrix is promising in related areas.
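A partial circulant measurement matrix can be sketched by cyclically shifting one random seed row (an illustration; the exact generator used in the experiments is not specified here):

```python
import numpy as np

def circulant_measurement_matrix(M, N, rng):
    """Partial circulant matrix: every row is a cyclic shift of one seed row."""
    c = rng.choice([1.0, -1.0], size=N) / np.sqrt(M)    # random seed vector
    return np.stack([np.roll(c, i) for i in range(M)])  # M cyclic shifts

rng = np.random.default_rng(4)
Phi = circulant_measurement_matrix(8, 16, rng)
print(np.allclose(Phi[3], np.roll(Phi[0], 3)))  # True: rows are cyclic shifts
```

Only the N seed values need to be stored or transmitted, which is why this family is attractive for hardware implementation.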

Reconstructing Performance Validation
Extensive simulations were designed to compare and analyze the performance of the common measurement matrices introduced above. The grayscale image file lena.bmp in Figure 1a, which is commonly used in signal processing, was selected as the original data.
The cosine sparse matrix was used as the sparse basis; the grayscale image has high sparsity under the cosine sparse representation, which satisfies the conditions for applying CS theory. The four matrices introduced above were used as measurement matrices, and data reconstruction was performed using the OMP algorithm.
Data reconstruction accuracy can be evaluated by the mean absolute error (MAE) [29]:

MAE = (1/N) Σ_{i=1}^{N} |x_i − x_{r,i}|,

where x denotes the original data and x_r denotes the reconstructed data. The data reconstruction accuracy of the four measurement matrices at different sampling rates is shown in Figure 2. It is clear that all four measurement matrices can ultimately reconstruct the original data accurately, and that the reconstruction error of all four matrices decreases significantly once the sampling rate exceeds 0.2.

The column interdependence coefficients of the four measurement matrices as functions of the sampling rate are shown in Figure 3. The coefficients of the Toeplitz and circulant matrix are slightly higher than those of the other three matrices. Since the interdependence coefficient of a measurement matrix should be as small as possible, it is natural for this matrix to have a higher reconstruction error, which is verified in Figure 2: its reconstruction error remains slightly higher than that of the other three matrices until the sampling rate reaches 0.5.

The amount of data that the four measurement matrices require to be transmitted for reconstruction, as a function of the sampling rate, is shown in Figure 4. The sparse random matrix requires the smallest amount of data because it is the sparsest.

Combining the analyses of the simulation results above, it can be concluded that, among these four matrices, the sparse random matrix provides reconstruction accuracy comparable to the others at the cost of a smaller amount of data. However, due to its completely random nature, the sparse random matrix is harder to implement in hardware than the Toeplitz and circulant matrices. Nevertheless, the performance of sparse random matrices in WSN data aggregation can still be improved.
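The MAE criterion is straightforward to implement; a minimal sketch:

```python
import numpy as np

def mae(x, x_r):
    """Mean absolute error between the original data x and reconstruction x_r."""
    x, x_r = np.asarray(x, dtype=float), np.asarray(x_r, dtype=float)
    return float(np.mean(np.abs(x - x_r)))

# Toy check: absolute errors 0, 0.5, 0 averaged over 3 samples.
print(mae([1.0, 2.0, 3.0], [1.0, 2.5, 3.0]))  # 0.1666...
```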

Random Measurement Matrix Construction Method Based on TDMA
The sparse random measurement matrix exhibits good performance in terms of data reconstruction. Note, however, that each element of this matrix is assigned a random value according to a certain probability; the construction complexity of the matrix is therefore O(MN), where M is the number of rows and N the number of columns, making it difficult to implement on sensor nodes with limited energy. To ease hardware implementation, a TDMA-based method for constructing a random measurement matrix is proposed by modifying the sparse random measurement matrix. The construction proceeds as follows: first, the nodes generate projection vectors within their own TDMA time slots; then, the measurement matrix corresponding to these nodes is assembled using the projection vectors as column vectors. Figure 5 shows a schematic diagram of the TDMA time slots of the nodes in a cluster, where the number indicates the time slot assigned to each node. It is assumed that the data collected by sensor node i in a time slot is [x_1, x_2, ..., x_n]^T; these samples have a strong temporal correlation since they are collected by one node in one time slot. The data collected by the nodes in the cluster of Figure 5 can then be arranged in matrix form [Equation (16)], where the data collected by each node within a time slot constitutes one column vector of the matrix. There is a strong temporal correlation within each column of this matrix, and a strong spatial correlation between different columns, since all columns come from nodes within one cluster. To take full advantage of this temporal and spatial correlation, x is reshaped into a column vector for CS data aggregation, and the observation y is obtained by projecting it under the measurement matrix Φ. In this representation, each element of x is associated with an encoding vector, namely the corresponding column of Φ; as shown in Equation (16), the element x_11 has its own encoding vector.
This coded vector is constructed as follows: for column j, a set of d non-zero positions D_j and the corresponding signs I_j are generated at random within the node's time slot. The elements of the constructed measurement matrix are then

Φ_ij = { I_j[k], if i = D_j[k] for some k ∈ {1, ..., d}; 0, otherwise }, (17)

where Φ_ij denotes the element of the i-th row and j-th column of the measurement matrix Φ, I_j[k] indicates whether the k-th non-zero element of the j-th column is positive or negative, and D_j[k] represents the position of the k-th non-zero element of the j-th column. Expressed in terms of probabilities, Equation (17) can be represented as

Φ_ij = { +1, with probability d/(2M); 0, with probability 1 − d/M; −1, with probability d/(2M) }. (18)

Comparing Equations (11) and (18), if we denote α = M/d [Equation (19)] and multiply each element of the list I by √(M/d), then we can obtain

Φ_ij = √(M/d) × { +1, with probability d/(2M); 0, with probability 1 − d/M; −1, with probability d/(2M) }. (20)

Obviously, comparing (20) and (11), the proposed matrix has the same probability representation as the sparse random matrix with α = M/d.
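The per-column construction described above — choose d positions D_j, d signs I_j, and scale by √(M/d) — can be sketched as follows (an illustrative implementation, not the authors' code; sizes are hypothetical):

```python
import numpy as np

def tdma_random_matrix(M, N, d, rng):
    """Column j gets exactly d non-zero entries: random positions D_j, random
    signs I_j, scaled by sqrt(M/d) to match the sparse random matrix form."""
    Phi = np.zeros((M, N))
    for j in range(N):
        rows = rng.choice(M, size=d, replace=False)   # positions D_j[1..d]
        signs = rng.choice([1.0, -1.0], size=d)       # signs I_j[1..d]
        Phi[rows, j] = np.sqrt(M / d) * signs
    return Phi

rng = np.random.default_rng(5)
M, N, d = 64, 256, 4
Phi = tdma_random_matrix(M, N, d, rng)
print(np.all(np.count_nonzero(Phi, axis=0) == d))  # True: d non-zeros per column
```

Each node only draws d positions and d signs per column, which is the source of the complexity reduction discussed in Remark 1.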

Remark 1.
It should be noted that the proposed method is quite different from the sparse random measurement matrix. Whereas the sparse random measurement matrix is generated by drawing every element from a given distribution [i.e., Equation (11)] at a complexity of O(MN), the proposed TDMA-based random measurement matrix only needs to generate d random values with one random ordering per column, leading to a better construction complexity of O(d²N) (where d ≪ M ≪ N). The complexity of constructing the measurement matrix is thus greatly reduced.

Semi-Random Semi-Deterministic Measurement Matrix Construction Method Based on TDMA
Aiming to further reduce the complexity of the measurement matrix implementation and simplify the implementation in hardware, a semi-random semi-deterministic measurement matrix construction method based on TDMA is proposed, which uses a nested matrix to nest the above random matrix with the deterministic matrix.
Ullah et al. presented the following theorem on the coherence of nested matrices [30,31]. Assume that the number of non-zero elements in each column of the matrix V (of size M × N) is P and its column coherence coefficient is µ(V). Let W be a P × K matrix whose elements all have the same absolute value. Then a nested matrix Z of size M × NK can be constructed whose column coherence coefficient satisfies

µ(Z) ≤ max{µ(V), µ(W)}. (21)

The construction process of the matrix Z is shown in Figure 6 [32,33]: the i-th non-zero element in each column of V is replaced by the i-th row of W, and the zero elements are expanded to the same dimension, until all non-zero elements in the matrix have been replaced. According to the above theorem, the TDMA-based random measurement matrix construction method proposed in this paper can be optimized using the following procedure:


• The number of non-zero elements in each column of the measurement matrix Φ is d, which satisfies the requirement of the above theorem that the number of non-zero elements in each column of V be equal. Therefore, Φ is used as V in Equation (21).
• An orthogonal matrix of size d × d is used as W to generate the nested matrix, because an orthogonal matrix has the smallest column coherence coefficient. Furthermore, to ensure that the nested matrix satisfies Equation (20), the elements of W are set to ±√(M/d), which also satisfies the requirement of the above theorem that all elements of W have the same absolute value.
• The final measurement matrix Φ is constructed in the manner of Figure 6.
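The nesting step of Figure 6 can be sketched as follows (a minimal illustration with a hypothetical scaled 2 × 2 Hadamard matrix as W; the helper name `nest` is invented for this sketch):

```python
import numpy as np

def nest(V, W):
    """Build the nested matrix Z (M x N*K): the i-th non-zero entry in each
    column of V is replaced by the i-th row of W; zeros expand to zeros."""
    M, N = V.shape
    P, K = W.shape
    Z = np.zeros((M, N * K))
    for j in range(N):
        rows = np.flatnonzero(V[:, j])         # positions of the P non-zeros
        for i, r in enumerate(rows):
            Z[r, j * K:(j + 1) * K] = W[i]     # i-th non-zero -> i-th row of W
    return Z

# Hypothetical small example: d = 2 non-zeros per column, W a scaled 2 x 2
# Hadamard matrix (orthogonal, all entries of equal magnitude sqrt(M/d)).
rng = np.random.default_rng(6)
M, n_cols, d = 8, 4, 2
V = np.zeros((M, n_cols))
for j in range(n_cols):
    V[rng.choice(M, size=d, replace=False), j] = np.sqrt(M / d)
W = np.sqrt(M / d) * np.array([[1.0, 1.0], [1.0, -1.0]])
Z = nest(V, W)
print(Z.shape)  # (8, 8)
```

Each column of V expands into K = d columns of Z, so only N/d columns of V need to be drawn at random to reach a final width of N.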

Remark 2.
Compared with the measurement matrix constructed by the original method, the reconstruction quality of the semi-random semi-deterministic matrix is theoretically essentially the same, because its column coherence coefficient does not increase. In terms of construction complexity, the nesting generates d columns from one column of matrix Φ, so only N/d columns need to be constructed instead of N. Therefore, the construction complexity is reduced from O(d²N) to O(dN), making the hardware implementation simpler.

Comparative Analysis of Different d Values on the Reconstruction Performance
In order to verify the influence of the parameter d on the reconstruction performance of the measurement matrix, extensive simulations were performed, in which different values of d were compared and analyzed on different signals. The original signal is an artificially generated sparse random signal, which is itself sparse and does not need a sparse representation; it has length N and sparsity k, and its non-zero elements take random values from 1 to 100. The OMP algorithm is used to reconstruct the signal, and the MAE is adopted to evaluate the reconstruction accuracy. The results are shown in Figure 7b. From this figure, it can be seen that the proposed measurement matrix has a certain degree of universality. When the compression rate is greater than a certain value (in this case around 0.5), the original signal can be accurately reconstructed. In this experiment, the reconstruction accuracy of the measurement matrix with d = 2 is poor, whereas the reconstruction accuracies for the remaining d values are generally consistent. In addition, MAE values greater than 1 appear at small sampling rates. This may seem abnormal, but the reason is as follows: the randomly generated original signals in this paper are all positive, yet some elements of the reconstructed signal become negative when the sampling rate is too low, making the error greater than 1.
Furthermore, this experiment was repeated independently 1000 times to count the number of accurate reconstructions of the original signal for different values of d. The corresponding results are shown in Figure 8. The threshold value of the MAE is set to 0.1; an experiment with an MAE below this threshold is declared an accurate reconstruction of the original signal. Clearly, the reconstruction effect is worst when d = 2: even with a sampling rate of 1, the original signal is accurately reconstructed only 844 times. The reconstruction results for the other d values are essentially equivalent, and the original signal can be accurately reconstructed with high probability at a sampling rate of 0.5.

The results for different sparsity levels are shown in Figure 9. It can be seen that when d = 2, the original signal can eventually be reconstructed accurately with high probability only at a sparsity of 5 as the sampling rate increases; at all other sparsity levels it cannot be accurately reconstructed with high probability.
For the remaining d values, the measurement matrix reconstruction performances are comparable, and all matrices can accurately reconstruct the original signal with high probability as the sampling rate increases. Furthermore, the sampling rate required to accurately reconstruct the original signal with high probability increases with the sparsity (around 0.2 at k = 5 and around 0.5 at k = 20). An increase in sampling rate implies an increase in the number of rows M of the measurement matrix Φ; that is, an increase in M leads to better reconstruction. This is probably because the proposed construction places only d non-zero elements in each column, at positions chosen randomly among the M rows. An increase in M increases the randomness of the d selectable positions, which in turn decreases the column coherence of the resulting measurement matrix, so a higher probability of accurate reconstruction can be achieved. To highlight the fact that d ≪ M in the proposed method, the original signal length was set to 500, 1000, and 1500, the sparsity to 60, and the values of d to 2, 4, 6, 8, and 10. The above simulation was repeated and the results are shown in Figure 10. As can be seen from the figure, the sampling rate at which the original signal is accurately reconstructed is roughly 0.6, 0.3, and 0.2 for N = 500, 1000, and 1500, respectively; the required number of samples M is about 250 for all signal lengths. This indicates that the signal length has essentially no effect on the reconstruction performance for signals of the same sparsity.
Furthermore, the required M is approximately proportional to the sparsity of the signal, at roughly 4 times the sparsity. To reconstruct the original data accurately, M is taken to be around 250, while d is 4 or 6. This means that 250 × N random values must be generated to construct the sparse random matrix, whereas only 6 × N are needed for the proposed matrix, which greatly reduces the complexity of matrix construction.
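The comparison above reduces to simple arithmetic; a quick check using the quoted values (M ≈ 250, d = 6):

```python
N = 1000                       # signal length used in this experiment
M, d = 250, 6                  # measurements and non-zeros per column (from the text)

sparse_random_values = M * N   # sparse random matrix: one random value per entry
proposed_values = d * N        # proposed matrix: d random values per column

print(sparse_random_values // proposed_values)  # 41 -> over 40x fewer values
```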


Simulation Validation Based on Realistic Scenario
The source of the experimental samples was a set of sensors located in a laboratory [34]. Specifically, a total of approximately 2 million samples collected by 50 sensor nodes were utilized, containing information on temperature, humidity, light, voltage, etc. For the sake of convenience and without loss of generality, the temperature samples over a certain period of time were used for the simulation. That is, 20 temperature samples collected by each of the sensor nodes over the specific period (20 × 50 = 1000 samples) were adopted as the original data. The cosine sparse basis was used as the sparse matrix, and the data were reconstructed using the OMP algorithm. The sparse random matrix, the TDMA-based random measurement matrix, and the optimized semi-random semi-deterministic matrix were used as measurement matrices, respectively. For the sparse random matrix, the parameter α in Equation (12) was set to √N to obtain high reconstruction accuracy; for the TDMA-based random measurement matrix, the parameter d was set to 4; for the semi-random semi-deterministic matrix, W was chosen as a 4 × 4 orthogonal matrix whose entries are all ±√(M/d), as required by the construction above, where M is the number of rows of the measurement matrix Φ.
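The exact W used in the experiment is not reproduced here; one choice consistent with the construction above — a 4 × 4 orthogonal matrix whose entries are all ±√(M/d), e.g., a scaled Hadamard matrix built via a Kronecker product — can be sketched as follows (M = 200 is an arbitrary illustrative value):

```python
import numpy as np

M, d = 200, 4                                  # illustrative sizes (d = 4 as in the text)
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H4 = np.kron(H2, H2)                           # 4 x 4 Hadamard matrix
W = np.sqrt(M / d) * H4                        # entries are all +/- sqrt(M/d)

# W is orthogonal up to scale (W^T W = M * I) and all entries have equal magnitude.
print(np.allclose(W.T @ W, M * np.eye(d)))     # True
```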
The reconstruction performances of different measurement matrices were compared in terms of the reconstruction accuracy, the number of exact reconstructions in 100 experiments, the measurement matrix column coherence coefficient, and the amount of data required for reconstruction, as shown in Figure 11.
As can be seen from the figure, the two proposed measurement matrices have performances similar to that of the sparse random matrix in terms of reconstruction accuracy, number of exact reconstructions, and matrix column coherence coefficient. However, the construction complexity of the proposed methods is much lower than that of the sparse random matrix, as can be seen from Figure 11d.
This indicates that the proposed TDMA-based random measurement matrix and the TDMA-based semi-random semi-deterministic matrix have comparable reconstruction performance as that of the sparse random measurement matrix, but at the cost of much lower complexity. They require a constant amount of data, which does not increase with the sampling rate. This is a large advantage over the sparse random measurement matrix, which becomes more apparent as the sampling rate increases. Furthermore, the TDMA-based semi-random semi-deterministic matrix is the least complex and the easiest to construct in terms of implementation complexity.

Conclusions
This study considered the construction of the measurement matrix in CS-based data aggregation for WSNs. To reduce the complexity and improve the reconstruction accuracy, a TDMA-based random measurement matrix construction method is proposed on the basis of the sparse random measurement matrix. The construction complexity is reduced from O(MN) to O(d²N) (d ≪ M) compared with the sparse random matrix. To further reduce the complexity, the method is optimized using a nested matrix, leading to a TDMA-based semi-random and semi-deterministic measurement matrix whose complexity is further reduced to O(dN). Finally, simulation experiments were performed to verify the measurement matrix construction complexity and reconstruction accuracy. The simulation results show that the two proposed measurement matrices have reconstruction performance comparable to that of the sparse random measurement matrix, but at much lower complexity. Moreover, as the degree of sparsity increases, the reduction in matrix construction complexity achieved by the proposed method becomes more obvious, so its superiority grows.
It should be noted that communication energy is consumed when the measurement matrices are generated and distributed. However, this does not influence the complexity of the measurement matrix construction, which is the focus of this paper, because the energy is spent regardless of whether the matrix is generated by the sensor nodes and transmitted to the sink node or generated by the sink node and transmitted to the sensor nodes. We will consider this energy loss in future work.