Overview of Compressed Sensing: Sensing Model, Reconstruction Algorithm, and Its Applications

Abstract: With the development of intelligent networks such as the Internet of Things, network scales are becoming increasingly large and network environments increasingly complex, which brings great challenges to network communication. The issues of energy saving, transmission efficiency, and security have gradually been highlighted. Compressed sensing (CS) helps to simultaneously solve those three problems in the communication of intelligent networks. In CS, fewer samples are required to reconstruct sparse or compressible signals, which breaks the restrictive condition of the traditional Nyquist–Shannon sampling theorem. Here, we give an overview of recent CS studies, along the issues of sensing models, reconstruction algorithms, and their applications. First, we introduce several common sensing methods for CS, like sparse dictionary sensing, block-compressed sensing, and chaotic compressed sensing. We also present several state-of-the-art reconstruction algorithms of CS, including the convex-optimization, greedy, and Bayesian algorithms. Lastly, we offer recommendations for broad CS applications, such as data compression, image processing, cryptography, and the reconstruction of complex networks. We discuss works related to CS technology and some CS essentials.


Introduction
With the expansion of some traditional networks and the advent of the Internet of Things in recent years, network structures have become more complex, and the volume of data transmitted in networks has grown, as is shown in Figure 1. The numbers of smart sensors and connected devices continue to grow in many practical network applications. This poses a huge challenge for network communication, for example, with regard to transmission efficiency and network security. Compressed sensing (CS) emerged as a technique able to satisfy the needs of transmission efficiency and security at the same time.
Compressed sensing is an advanced method of acquiring and processing signals, first proposed by Donoho [1,2]. It can accurately recover the original signal from a few incoherent measurements. In CS, fewer samples are required to reconstruct sparse or compressible signals, which breaks through the traditional Nyquist–Shannon sampling theorem. Suppose that x is a discrete signal of dimension N, which is compressed into vector y by a matrix Φ with M × N dimensions. The CS process can be expressed as

y = Φx, (1)

where M < N, y ∈ R^M, and Φ is the sensing or measurement matrix. From Equation (1), the N-dimensional signal x is compressed into the M-dimensional signal y. x cannot be solved from y by Equation (1) alone, because the number of equations is less than the number of unknowns. The precondition of solvability for x is that x be sparse, or that x be sparse on some orthogonal basis, that is,

x = Ψs, (2)

where Ψ is an orthogonal matrix with N × N dimensions, which satisfies ΨΨ^T = Ψ^T Ψ = I. Here, Ψ is the sparsity matrix, and s is a sparse vector. When K values of s are nonzero, and the other N − K values are zero (K ≪ N), we call the vector s K-sparse. Common sparsity matrices are discrete Fourier transform (DFT) [3], discrete wavelet transform (DWT) [4], and discrete cosine transform (DCT) [5] matrices. As is shown in Figure 2, on the basis of Equations (1) and (2), we have

y = ΦΨs = Θs, (3)

where Θ = ΦΨ is the sensing matrix. To reconstruct x from y, the sensing matrix Θ must satisfy the restricted isometry property (RIP) [6]:

(1 − δ_k)‖v‖_2^2 ≤ ‖Θv‖_2^2 ≤ (1 + δ_k)‖v‖_2^2, (4)

where δ_k ∈ (0, 1), and v is an arbitrary k-sparse signal. The process of reconstruction can be described as

min ‖s‖_1 s.t. y = Θs, (5)

which is a convex-optimization problem. A large amount of work has been done on CS theory and applications [7,8]. Based on the CS introduction above, CS is principally composed of two important parts, sensing and reconstruction. In the sensing part, we use a sensing matrix that satisfies certain conditions to obtain a compressed measurement of a sparse signal. There
are many classical sensing matrices, such as the random, deterministic, and structured random matrices. The Gaussian and Bernoulli matrices are typical random matrices. Common deterministic matrices are the polynomial and chaotic matrices. The Toeplitz and Hadamard matrices are structured random matrices. In the reconstruction step, we use a measurement vector and CS algorithms to reconstruct the original signal. There are many kinds of CS reconstruction algorithms, such as the convex-optimization, greedy, and Bayesian algorithms. In addition to theoretical research, CS has also been utilized in many different domains, such as data compression, image encryption, and cryptography.
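To make the pipeline concrete, the following Python sketch senses a K-sparse signal with a Gaussian matrix as in Equation (1) and recovers it with orthogonal matching pursuit; the problem sizes and the choice of OMP as the solver are illustrative, not a prescription from any cited work.

```python
import numpy as np

def omp(Theta, y, k):
    """Recover a k-sparse vector s from y = Theta @ s via
    orthogonal matching pursuit (greedy support selection)."""
    residual = y.copy()
    support = []
    s_hat = np.zeros(Theta.shape[1])
    for _ in range(k):
        # pick the column most correlated with the residual
        idx = int(np.argmax(np.abs(Theta.T @ residual)))
        support.append(idx)
        # least-squares fit of y on the currently selected columns
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = Phi @ s                                      # M << N incoherent measurements
s_hat = omp(Phi, y, K)
print(np.linalg.norm(s - s_hat))                 # near-zero with high probability
```

With M = 64 Gaussian measurements of a 5-sparse length-256 signal, recovery is exact up to numerical precision with overwhelming probability, illustrating how far below the Nyquist rate CS can sample.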
This paper is divided into five sections. Section 2 introduces several common CS methods, including sparse dictionary, block CS, chaotic CS, deep-learning CS, and semitensor-product CS. Section 3 provides CS reconstruction algorithms like the convex-optimization, greedy, Bayesian, and noniterative-reconstruction algorithms. Section 4 briefly presents compressed-sensing applications. Lastly, conclusions are presented in Section 5.

Sensing Methods
Sensing methods have always been a hotspot in CS research. The sensing process affects both signal sampling and the accuracy of signal reconstruction. The main operation of the sensing step is to correlate a sparse signal with a proper sensing matrix without any prerequisites. In this section, we briefly introduce several sensing methods.

Sparse Dictionary Sensing
An important topic in sparse-representation research is the sparse representation of signals under a redundant dictionary. Current work on this topic focuses on the construction of sparse dictionaries, and the design of fast and efficient sparse-decomposition methods. Conventionally, sparsifying-dictionary learning aims to construct a proper dictionary Ψ and a sparse representation s that minimize the sparse-representation error. On the basis of Equation (2), the sparse-representation error is defined as follows:

e = ‖x − Ψs‖_2^2 s.t. ‖s‖_0 ≤ K, (6)

where K is the sparsity of s. In the design of CS systems, besides dictionary Ψ, another important aspect is to choose a suitable sensing matrix Φ that can accurately reconstruct original signal x from measurement y. Θ = ΦΨ in Equation (3) shows that the sparsity of measurement y is also an important aspect influencing the reconstruction accuracy of x. Bai et al. embedded the sensing matrix into the problem of sparse dictionary learning, and proposed an alternative optimization strategy [9]. Previously, Duarte-Carvajalino et al. had proposed a similar framework [10]. In [9], Bai et al. optimized the problem of sparse dictionary learning by embedding a measurement matrix; the optimization alternates between two auxiliary matrices A and B, both of which are independent of s. Sensing matrix Φ was denoted as Φ = f(Ψ) because it is determined by a given sparsity matrix Ψ, which is also called dictionary Ψ. The authors further proposed an optimized measurement matrix and a new algorithm to solve the corresponding optimization problem.
Another novel dictionary-based approach was proposed and applied in diffusion-tensor imaging (DTI) [11]. It combined adaptive dictionaries and T2-weighting correction to form a compressed-sensing framework for reconstructing undersampled DTI data. This method could improve spatial resolution, the flexibility of the diffusion protocol, and application feasibility.

Block-Compressed Sensing
Block-compressed sensing (BCS) completes data acquisition and compression with lightweight measurements. When dealing with high-dimensional images and videos, BCS is a particularly appropriate approach and shows its biggest advantage. This method divides the image into many small patches, and operates on each image patch separately during measurement and reconstruction, which reduces computational complexity and greatly saves sensing-matrix storage space. In BCS, the measurement matrix is small, which is conducive to storage. The measurement value of each image patch can be sent independently after being obtained. The receiver can also independently reconstruct each image patch according to the data, realizing real-time performance. Consider an I_r × I_c image with a total of N = I_r I_c pixels. We divide the image into sub-blocks with a size of B × B, and sample them with the same sensing matrix. The vectorized signal of the i-th block is denoted as x_i. The corresponding output CS vector y_i is

y_i = Φ_B x_i, (9)

where Φ_B is an n_B × B^2 matrix and n_B = nB^2/N. Φ_B can be an orthonormalized random matrix, e.g., a Gaussian or Bernoulli matrix. Then, measurement matrix Φ in Equation (1) can be represented as follows:

Φ = diag(Φ_B, Φ_B, ..., Φ_B), (10)

where Φ is a block-diagonal matrix. From Equation (10), we can see that BCS is storage-saving, as it only requires storing an n_B × B^2 matrix Φ_B rather than an n × N matrix. Computational complexity and recovery performance are key concerns at both encoder and decoder. To address these two problems, Zhang et al. introduced and investigated a BCS strategy with matrix permutation that is applied before sampling to reduce the maximal level of block sparsity [12]. The matrix-permutation procedure first reshapes the signal into a two-dimensional form, and then processes it with an appropriate permutation matrix to obtain the permuted 2D sparse signal.
After the matrix-permutation procedure, the block-sampling process can be performed as

y_i = Φ_B x_i,

where x_i ∈ R^n represents the vectorized signal of the i-th block of the permuted signal, and y_i ∈ R^m is the measurement vector of x_i. Compared with traditional BCS approaches, the matrix-permutation-based BCS method has an advantage in the peak signal-to-noise ratio (PSNR) of the recovered images.
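As a concrete illustration, the block-wise sampling of Equation (9) can be sketched in a few lines of Python; the image size, block size, and number of measurements per block below are arbitrary choices.

```python
import numpy as np

def block_sense(img, Phi_B, B):
    """Sample an image block-by-block with a single shared n_B x B^2
    matrix -- the core storage saving of block-compressed sensing."""
    Ir, Ic = img.shape
    measurements = []
    for r in range(0, Ir, B):
        for c in range(0, Ic, B):
            x_i = img[r:r+B, c:c+B].reshape(-1)   # vectorize i-th block
            measurements.append(Phi_B @ x_i)      # y_i = Phi_B x_i
    return np.array(measurements)

rng = np.random.default_rng(1)
B, n_B = 8, 16                      # 8x8 blocks, 16 measurements each
img = rng.random((32, 32))
Phi_B = rng.standard_normal((n_B, B * B)) / np.sqrt(n_B)
Y = block_sense(img, Phi_B, B)
print(Y.shape)                      # one measurement row per block
```

Only the small n_B × B^2 matrix Φ_B is stored; the full block-diagonal Φ of Equation (10) is never materialized, and each row of Y can be transmitted and reconstructed independently.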
On the basis of BCS, Bigot et al. presented a random-sampling approach that projects the signal onto blocks of measurement vectors [13]. A typical example is when each block consists of horizontal lines in the two-dimensional Fourier plane. They theoretically derived the number of blocks required to accurately reconstruct sparse signals. Matrices constructed by stacking random measurement blocks are significant in applications because they can easily be formed on many imaging devices.
Traditional BCS methods rely on independent block image acquisition and independent block reconstruction. In order to enforce smoothness across block borders in BCS, Coluccia et al. proposed a method that uses partially overlapping blocks to modify the sensing and reconstruction process [14]. They computed a fast preview from the blocks, which imposed the similarity of block borders and was used as a predictor of the entire block.

Chaotic Compressed Sensing
Since chaotic sequences generated by chaotic systems are pseudorandom, they are well-suited for use as measurement matrices. In chaotic compressed sensing (CCS), chaotic systems generate pseudorandom sequences by certain methods, which simplifies the construction of sensing matrices compared to random sensing matrices. We take the Chebyshev chaotic system as an example [15]:

z_{k+1} = cos(w · arccos(z_k)), z_k ∈ [−1, 1], (13)

where w is the system parameter. If parameter w and initial value z_0 are given in advance, sequence z_k, k = 1, 2, 3, ..., can then be generated on the basis of Equation (13). After obtaining this sequence, we use sampling distance d and sampling initial position n_0 to obtain the sampled sequence

ẑ_k = z_{n_0 + kd}, k = 0, 1, 2, ....

Chaotic systems are highly sensitive to the starting value and the chaotic parameter. A completely different sequence is obtained by slightly disturbing the starting value or a system parameter, which gives chaotic systems high security.
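As a sketch, a Chebyshev-based sensing matrix can be generated from only a few "seed" values. The parameter values w, z0, d, and n0 below are illustrative, and the final normalization is one common convention, not one prescribed in [15].

```python
import numpy as np

def chebyshev_sensing_matrix(M, N, w=4.0, z0=0.3, d=5, n0=100):
    """Build an M x N sensing matrix from a downsampled Chebyshev
    chaotic sequence z_{k+1} = cos(w * arccos(z_k)).
    The tuple (w, z0, d, n0) acts as the secret 'seed' of the matrix,
    so only these scalars need to be stored or transmitted."""
    length = n0 + d * M * N
    z = np.empty(length + 1)
    z[0] = z0
    for k in range(length):
        z[k + 1] = np.cos(w * np.arccos(z[k]))
    sampled = z[n0::d][:M * N]          # keep every d-th value from n0 on
    Phi = sampled.reshape(M, N)
    return Phi * np.sqrt(2.0 / M)       # simple column-energy normalization

Phi = chebyshev_sensing_matrix(32, 128)
print(Phi.shape)
```

Because the whole matrix is regenerated from four scalars, sender and receiver never need to exchange or store the full M × N matrix, which is the storage and security advantage CCS claims over purely random matrices.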
Gan et al. proposed CCS using the chaotic system of the T-way Bernoulli shift [16], and applied it to data transmission to achieve security. The CCS-based secure-data-transmission scheme has inherent encryption attributes at no additional cost. In this scheme, they used the Bernoulli chaotic system to generate the Bernoulli-shift chaotic sequence, which constructs the Bernoulli chaotic-sensing matrix (BCsM).
To guarantee transmission security, Peng et al. improved the generation of the chaotic measurement matrix, including chaotic parameters, sampling rate, matrix mapping functions, etc. [17]. Only the matrix seeds, such as the initial value, chaos parameters, sampling start position, and sampling step, need to be saved, instead of the entire sensing matrix. The chaotic sensing matrix can be given as

Φ = T(S(C(z_0, ε))),

where z_0 is the initial value; ε, the chaotic parameter; C, the chaotic system; S, the sampler; and T, the mapping function.
According to sampling initial position n_0 and sampling step d, the chaotic sequence is obtained after sampling. The required sensing matrix for chaotic compressed sensing can then be generated with the mapping function. Compared with traditional CCS, the improved CCS simultaneously solves the problems of energy efficiency and security, and performs very well in image encryption. Yao et al. presented the incoherence-rotated-chaotic (IRC) matrix as a measurement matrix [18]. They used the pseudorandomness of chaotic sequences, the concept of incoherent factors, and rotation to obtain the IRC sensing matrix. The obtained IRC matrix is suitable for sparse reconstruction, satisfying the RIP criterion during the sparse-reconstruction phase with a smaller RIP constant. Simulation results showed that the IRC matrix performed better than classical random sensing matrices did.

Deep-Learning Compressed Sensing
The combination of deep learning and compressed sensing has attracted much attention. Adler et al. presented a deep-learning approach for block CS, deploying a fully connected network to perform both the block-based linear-sensing and the nonlinear-reconstruction stages [19]. The network performs BCS by independently processing each block as per Equation (9). The proposed fully connected network has four layers:

1. an input layer with B^2 nodes (B is the block size);
2. a compressed-sensing layer with B^2 R nodes, R ≪ 1 (its weights form the sensing matrix);
3. K ≥ 1 reconstruction layers with B^2 T nodes, each followed by a rectified linear unit (ReLU) [20] activation, where T > 1 is the redundancy factor; and
4. an output layer with B^2 nodes.
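A minimal numpy forward-pass sketch of this layer structure follows; the weights here are random stand-ins, whereas in [19] all weights, including those of the sensing layer, are learned end to end.

```python
import numpy as np

def bcs_net_forward(x_block, W_sense, hidden_Ws, W_out):
    """Forward pass of the four-layer BCS network: a linear 'sensing'
    layer, ReLU reconstruction layers, and a linear output layer
    mapping back to B^2 pixels."""
    y = W_sense @ x_block                  # compressed measurements of the block
    h = y
    for W in hidden_Ws:
        h = np.maximum(W @ h, 0.0)         # ReLU reconstruction layer
    return W_out @ h                       # reconstructed block (B^2 values)

rng = np.random.default_rng(2)
B, R, T = 8, 0.25, 2                       # 8x8 blocks, 4:1 compression, redundancy 2
n_in, n_cs, n_hid = B * B, int(B * B * R), B * B * T
W_sense = rng.standard_normal((n_cs, n_in)) * 0.1     # its rows act as Phi_B
hidden_Ws = [rng.standard_normal((n_hid, n_cs)) * 0.1,
             rng.standard_normal((n_hid, n_hid)) * 0.1]
W_out = rng.standard_normal((n_in, n_hid)) * 0.1
x_hat = bcs_net_forward(rng.random(n_in), W_sense, hidden_Ws, W_out)
print(x_hat.shape)
```

The point of the architecture is that sensing and reconstruction are trained jointly, so the learned sensing layer adapts to the data rather than being a fixed random matrix.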
Learning from convolutional networks, a deep-learning-based sparse measurement matrix was presented to reduce sampling computational complexity and improve CS reconstruction quality [24]. The method had two subnetworks, the sample and reconstruction subnetworks. They assumed that the block size in block CS was B × B, so each block contained N_B = B^2 pixels, and the measurement size for every block was N_b = (M/N) N_B. The k-th row of sensing matrix Φ was denoted as φ_k. The sparse degree is ν/(N_b N_B), where ν is the number of nonzero elements in Φ, and N_b N_B is the total number of elements in Φ. To generate the target sample matrix, a sparsity constraint was added on the k-th kernel of the convolutional layer in the sample subnetwork, where a_{k,i} is the i-th value of this kernel; a normalization constraint for the k-th kernel was formulated through s_{k,j} = S(a_{k,j}) and its derivative. On the basis of the normalization constraint, the normalized sampling matrix was obtained. Sun et al. presented a deep-learning method for quantized CS called BW-NQ-DNN [25]. The BW-NQ-DNN framework consists of three parts: a nonuniform quantizer, a binary sensing matrix, and a noniterative recovery solver. These three parts are jointly optimized through BW-NQ-DNN training. BW-NQ-DNN not only saves a lot of storage and energy, but also surpasses the most advanced CS-based approaches. Even when the compression ratio is very high, this method still performs well in recovery performance and classification accuracy.

Semitensor-Product Compressed Sensing
Cheng et al. presented the semitensor product (STP) of matrices, which breaks through the limitation of conventional matrix operations, and further proposed it as an evolution of the traditional matrix product [26][27][28][29]. Traditional matrix multiplication must meet a dimension constraint: the column number of matrix A must equal the row number of matrix x, as is shown in Figure 3. STP theory breaks through this limitation and can execute matrix multiplication when the dimensions of two matrices are unmatched. Moreover, STP maintains the main properties of ordinary matrix multiplication.
Suppose that u is a row vector of dimension np, and v is a column vector of dimension p. Splitting u into p equal blocks u^1, ..., u^p, each block u^i is a row vector of dimension n. The STP of u and v, denoted by ⋉, is defined as

u ⋉ v = Σ_{i=1}^{p} v_i u^i ∈ R^n,

and, similarly, v^T ⋉ u^T = Σ_{i=1}^{p} v_i (u^i)^T. Let A ∈ R^{m×n} and B ∈ R^{p×q}. If either n is a factor of p or p is a factor of n, the semitensor product of A and B is defined row-by-column through A_i ⋉ B_j, where A_i is the i-th row of A, and B_j is the j-th column of B. Equivalently, we can also define the STP of A and B by using the Kronecker product:

A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}),

where ⊗ denotes the Kronecker product and t is the least common multiple of n and p, i.e., t = lcm(n, p). Xie et al. proposed semitensor-product compressed sensing (STP-CS), which combines the semitensor product and compressed sensing [30]. They analyzed STP-CS from a theoretical perspective to demonstrate that the sparse solution is unique with regard to spark and coherence, and that the RIP criterion is satisfied in the STP-CS model. Many classical sensing matrices, such as the Gaussian, Bernoulli, and chaotic matrices, can be used in STP-CS because the RIP condition of order k in STP-CS is equivalent to that in conventional CS. On the basis of the semitensor product, STP-CS uses a low-dimensional sensing matrix to compress high-dimensional signals, so its storage requirement is greatly reduced compared with that of block-compressed sensing (BCS) with small block sizes. The semitensor product can also be used to improve the reconstruction algorithm to realize parallel reconstruction, performing signal reconstruction simultaneously in multiple CS decoders and thus reducing total reconstruction time [31].
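The Kronecker-product definition translates directly into code; below is a small sketch in which the example matrix and vector are arbitrary.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semitensor product A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}),
    with t = lcm(n, p); it reduces to ordinary A @ B when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# A 2x2 matrix can multiply a 4-dimensional column vector via STP,
# which is how STP-CS senses a high-dimensional signal with a
# low-dimensional matrix:
A = np.arange(4.0).reshape(2, 2)
x = np.arange(4.0).reshape(4, 1)
y = stp(A, x)
print(y.shape)
```

Note that when the inner dimensions already match, `stp` coincides with the ordinary matrix product, so STP is a strict generalization rather than a different operation.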
An application of STP-CS was presented to reduce calculation energy consumption in the communication of wireless sensor networks (WSNs) [31]. In terms of recovery quality, STP-CS is almost equal to conventional CS and CCS. Wang et al. proposed a random-sampling method based on the STP-CS framework [32]. They used an improved iteratively reweighted least-squares (IRLS) algorithm to obtain the values of the sparse vector. Simulation results showed that their method could save at least one-quarter of the storage resources while ensuring reconstruction performance.
The P-tensor product (PTP) was proposed on the basis of STP. It not only solves the dimension-matching problem in matrix multiplication, but also provides a new method for angle calculation between vectors of different dimensions [33]. For example, we can calculate the angle between a one- and a three-dimensional vector by PTP. PTP compensates for the limitations of STP when performing operations on vectors with different dimensions. In PTP, the smaller matrix is enlarged to conform to dimension matching by a tensor operation with matrix P. The choice of matrix P is unrestricted; P can be any kind of matrix. When PTP is combined with CS, a high-dimensional signal can be measured by low-dimensional sampling, so storage space is significantly reduced.

Other Sensing Methods
Traditional compressed sensing associates sparse signals with a common sensing matrix regardless of the sparse domain. However, the performance of such a sensing matrix can be problematic. In particular, when the sensing matrix is a partial orthogonal sensing matrix, sensing may fail because the signal is sparse in some transform domains. This problem is mainly caused by the coherence between the sensing matrix and the sparsity matrix. Nouasria et al. proposed a robust sensing approach that multiplies the sensing matrix by the inverse of the sparsity matrix in the sensing step [34]. This improves the operation of random sensing matrices, especially partial orthogonal sensing matrices.
So far, the sensing schemes of common CS theoretical models have consisted of isolated random measurements whose elements are randomly generated variables. Boyer et al. introduced the concept of measurement blocks [35]. In their scheme, the measurements are no longer a set of isolated measurements, but a set of measurement blocks that may represent any shape (for example, parallel or radial lines).
Ishikawa et al. proposed another CS construction approach without randomness [36]. Their matrices have low coherence. In order to obtain a CS matrix with low coherence, an identity matrix, whose coherence is 0, can be used as part of the CS matrix; vectors with low coherence are then appended. Their CS matrix was given by the following:

A = [E v_1 v_2 ⋯ v_{n−m}],

where E is the m × m identity matrix; v_i is an m-dimensional appended vector with low coherence; and the dimension of A is m × n, where m < n. Compared to existing random matrices, the matrix constructed by this approach can achieve higher recovery accuracy. However, there are still two problems to be solved: failure in some cases and the increase in compression ratio.

Reconstruction Algorithm
A large amount of work has been done to study the recovery algorithms of compressed sensing. These studies focus on stable reconstruction, low computational cost, and reconstruction accuracy, especially with few measurements. This section introduces reconstruction algorithms such as the convex-optimization, greedy, and Bayesian algorithms.

Convex-Optimization Algorithm
The convex-optimization algorithm converts a nonconvex problem into a convex one to solve the signal-approximation problem. Suppose that J(x) is a convex cost that promotes sparsity; that is to say, the value of J(x) is small when signal x is highly sparse. On the basis of Equation (5), the reconstruction of signal x without noise can be described as

min J(x) s.t. y = Φx. (24)

Similarly, when there is noise, the reconstruction process is as follows:

min J(x) s.t. H(Φx, y) ≤ ε, (25)

where H is a cost function that penalizes the distance between Φx and y. Equation (25) can be expressed in an unconstrained form as follows:

min J(x) + λ H(Φx, y), (26)

where λ is a penalty factor. In a convex-optimization algorithm, J is usually chosen as the l_1-norm of sparse signal x, J(x) = ‖x‖_1, and H is chosen as

H(Φx, y) = (1/2) ‖y − Φx‖_2^2, (27)

which is the l_2-norm of the error between y and Φx. The most common convex-optimization algorithm is basis pursuit (BP), which uses the l_1-norm to solve the optimization problem via linear programming [37].
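A minimal sketch of solving the unconstrained form of Equation (26) with iterative soft-thresholding (ISTA); the penalty λ, the step size, the iteration count, and the noiseless Gaussian test problem are all illustrative choices.

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for
    min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L                     # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft shrinkage
    return x

rng = np.random.default_rng(3)
N, M = 128, 48
x_true = np.zeros(N)
x_true[rng.choice(N, 4, replace=False)] = [1.0, -2.0, 1.5, 0.8]
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = ista(Phi, Phi @ x_true)
print(np.linalg.norm(x_hat - x_true))        # small recovery error expected
```

The shrinkage step is exactly the proximal operator of λ‖x‖_1, which is why the l_1 penalty drives most coefficients to exact zeros rather than merely small values.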
On the basis of a pulse dictionary, an adaptive BP algorithm was introduced for vibration-signal processing and the fault diagnosis of rolling bearings [38]. This approach established a functional model of the impulse dictionary by using the characteristics of bearing fault signals. Simulation results proved that the method fundamentally reduced dictionary redundancy, and the low-redundancy BP algorithm could make full use of this advantage in the fault diagnosis of rolling bearings.
Another algorithm is the focal underdetermined system solver (FOCUSS) [39], which uses the l_p norm (p ≤ 1) to solve optimization problems.
Yan et al. presented an improved multiple-measurement-vector focal underdetermined system solver, and applied it to synthesize mode-reconfigurable sparse arrays [40]. They used sparse-recovery theory to establish a multiple-measurement-vector collaborative sparse-recovery model for synthesizing mode-reconfigurable sparse arrays [41][42][43]. In addition, there are the smoothed-l_0 (SL0) method, the gradient-projection-for-sparse-reconstruction algorithm [44], and sparse reconstruction by separable approximation [45].

Greedy Algorithm
The greedy iterative-reconstruction algorithm targets the combinatorial optimization problem, indirectly solving sparse-signal reconstruction by sparse approximation. Its basic principle is to find the support set of the sparse vector in an iterative manner, and to reconstruct the signal by using constrained least-squares estimation. In other words, sparse-signal reconstruction seeks the sparsest signal consistent with the linear measurements y, which is expressed as follows:

min |I| s.t. y = Σ_{i∈I} x_i φ_i, (28)

where I ⊆ {1, ..., N} represents an index set, and φ_i is the i-th column of matrix Φ.
Orthogonal matching pursuit (OMP) is a representative greedy algorithm, widely used because of its simplicity and good performance. Noise affects the accurate reconstruction of sparse signals. For this reason, Wen et al. studied sufficient conditions for accurate OMP support recovery in the presence of noise [52]. Their analysis showed that, for any k-sparse signal, the OMP algorithm can accurately recover the signal provided that the sensing matrix satisfies the RIP criterion.
In contrast to algorithms performing a deliberate refinement of the identification step, a recently proposed extension of OMP, referred to as generalized OMP (gOMP) [53] (also known as OSGA or OMMP [54,55]), simply chooses columns that are most correlated with the residual.
A new analysis of the improved gOMP algorithm was presented by using the restricted isometry property (RIP) [56]. It showed that the gOMP algorithm can perform high-quality signal reconstruction from noisy measurements under the RIP.
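A compact sketch of gOMP's selection rule, which picks S columns per iteration instead of one; the choice S = 3 and the test signal below are arbitrary.

```python
import numpy as np

def gomp(Phi, y, k, S=3):
    """Generalized OMP: at each iteration, select the S columns most
    correlated with the residual, then refit by least squares."""
    support = []
    residual = y.copy()
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0                      # never re-select a column
        support.extend(np.argsort(corr)[-S:].tolist())
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < 1e-10:     # support found, stop early
            break
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(4)
N, M, K = 256, 64, 6
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = gomp(Phi, Phi @ x, K)
print(np.linalg.norm(x - x_hat))
```

Because several columns are accepted per iteration, gOMP typically needs fewer iterations than OMP; the spurious columns it picks up receive zero coefficients in the final least-squares fit when the true support has been captured.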

Bayesian Algorithm
The Bayesian reconstruction algorithm considers the time correlation of signals to provide better reconstruction accuracy than that of other reconstruction algorithms, especially when signal time correlation is strong.
Common Bayesian algorithms include the expectation-maximization [57], Bayesian compressive-sensing [58], sparse Bayesian learning (SBL) [59], and multiple SBL (MSBL) [60] algorithms. The SBL and MSBL algorithms differ from the l_1-norm convex-optimization algorithm: the global minimum of l_1-norm convex optimization is usually not the sparsest solution, while the global minimum of the SBL or MSBL cost is the sparsest solution, and these costs have fewer local minima than those of typical algorithms (for example, FOCUSS). In the conventional SBL framework, x is given a Gaussian prior distribution:

p(x; α) = Π_{i=1}^{N} N(x_i; 0, α_i^{−1}), (29)

where α = {α_i} are non-negative hyperparameters. Equation (29) shows that, when α_i tends to infinity, the corresponding coefficient x_i turns to zero.
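A toy EM-style SBL sketch under the prior of Equation (29); the fixed noise precision beta, the iteration count, and the test problem are illustrative simplifications of the full algorithm in [59].

```python
import numpy as np

def sbl(Phi, y, beta=1e4, n_iter=100):
    """Basic sparse Bayesian learning: each x_i has a zero-mean Gaussian
    prior with precision alpha_i. EM updates drive the alphas of
    irrelevant coefficients toward infinity, pruning those x_i to zero."""
    M, N = Phi.shape
    alpha = np.ones(N)
    for _ in range(n_iter):
        # posterior covariance and mean of x given current hyperparameters
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ y
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))   # EM update of precisions
    return mu

rng = np.random.default_rng(5)
N, M = 64, 32
x = np.zeros(N)
x[[3, 17, 40]] = [2.0, -1.5, 1.0]
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = sbl(Phi, Phi @ x)
print(np.linalg.norm(x - x_hat))
```

The posterior mean mu plays the role of the point estimate; unlike l_1 shrinkage, the pruning here comes from hyperparameter learning, which is why the SBL cost's global minimum coincides with the sparsest solution.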
Following the traditional SBL algorithm, Fang et al. demonstrated a new method to recover block-sparse signals whose block-sparse structure is completely unknown [61]. They introduced a pattern-coupled hierarchical Gaussian prior model that characterizes not only coefficient sparsity, but also the statistical dependence between adjacent signal coefficients. As discussed in [62], the two-layer Gaussian-inverse-gamma hierarchical prior leads to a learning process that tends to drive most coefficients toward zero, retaining only very few significant coefficients to interpret the data. The prior of each coefficient involves hyperparameters of its own and of its immediate neighbors.

Noniterative Reconstruction Algorithm
The reconstruction of compressed sensing faces two significant challenges: recovery-algorithm efficiency (real-time performance) and signal sparsity in some transform domain, especially when the signal is very large. Some researchers combined deep learning and CS for signal reconstruction, and their schemes performed better with respect to recovery time and peak signal-to-noise ratio (PSNR) [63][64][65][66].
The approximate-message-passing (AMP) algorithm updates a tentative solution at each iteration to find a feasible solution [63]. A recovery algorithm was developed in which each hidden layer of a neural network corresponds to one AMP iteration, so the network depth matches the number of AMP iterations [67]. The trained weights of the neural network provide the parameters for the AMP algorithm. Another novel neural-network architecture, learned vector AMP (LVAMP) [68], was proposed, inspired by vector AMP (VAMP) [69]. LVAMP was developed by extending the VAMP algorithm to a deep network and training its parameters with similar methods. The resulting LVAMP improves robustness to deviations of the measurement matrix from the independent and identically distributed (i.i.d.) Gaussian case.

Deep-Learning Algorithm
To address the problem that the sparse hypothesis model in traditional compressed sensing cannot fully meet application requirements, deep learning uses a data-driven method to learn signal features and design signal reconstruction in an end-to-end manner. The multiple iterations of traditional compressed-sensing reconstruction can be replaced by the forward computation of deep neural networks, enabling real-time reconstruction [70,71].
Zhang et al. presented a structured deep network called ISTA-Net that uses the iterative shrinkage-thresholding algorithm (ISTA) to optimize a general l_1-norm CS reconstruction model [72]. They converted ISTA into a deep-network form, and used nonlinear transformations to solve the proximal mapping problem associated with the sparsity-inducing regularizer. The reconstruction performance of ISTA-Net was much better than that of existing optimization- and network-based CS methods while maintaining fast calculation speed.
Existing deep-learning-based image CS methods need to train different models for different sampling ratios, which increases encoder and decoder complexity. A scalable convolutional neural network, called SCSNet, was proposed to achieve scalable sampling and scalable reconstruction with a single model [73]. The hierarchical reconstruction network in SCSNet contains a base layer that provides basic reconstruction quality, and enhancement layers that reference the lower reconstruction layers and gradually improve reconstruction quality.

Compressed-Sensing Applications
Compressed sensing has been widely applied in data compression, image encryption, cryptography, complex-network reconstruction, channel estimation, analog-to-information conversion, channel coding, radar reconstruction, radar remote sensing, and digital virtual-asset security and management. Figure 4 presents an example of a data-encryption transmission system based on compressed sensing. CS is often used as a data-encryption and -compression method in networks with energy constraints and open links, such as sensor [74] and body-area [75] networks, and the Internet of Things (IoT) [76]. CS has a natural advantage in image encryption due to the sparsity of image data under specific bases or dictionaries. Orsdemir et al. verified that an image-encryption scheme based on CS was robust against noise [77]; they analyzed the security of the model against brute-force and structured attacks. In addition, CS is used in the construction of various cryptography schemes. Considering the three main problems of image authentication, i.e., tamper detection, location, and recovery, Du et al. proposed semifragile image authentication based on CS [78]. Hu et al. proposed an image-reconstruction and identity-authentication scheme based on CS in cloud computing. Their scheme outsources complex reconstruction calculations to the cloud server without revealing the image's private information [79].
Xie et al. made use of compressed sensing to provide a perspective on the solution of parameter-identification problems in coupled map lattices (CMLs) [80]. They used the sparse-recovery method for underdetermined linear systems to solve the CML parameter-identification problem. Generally speaking, widely used CMLs include the diffusive CML (DCML) and global CML (GCML) models. The GCML model is given as

x_{t+1}(i) = (1 − ε) f(x_t(i)) + ε Σ_{j=1}^{N} c_{ij} g(x_t(j)), (30)

where x_t(i) is the state of lattice element i at discrete time step t; ε is the coupling parameter; f and g are the maps describing the local dynamics and the nonlocal coupling; and c_i = (c_{i1}, c_{i2}, ..., c_{iN}) is the weight vector of element i. From Equation (30), the coupling term can be isolated and, by collecting M time samples of each element i, the identification problem can be written as an underdetermined linear system Y = BC. Thus, GCML identification is equivalent to the reconstruction problem of compressed sensing. In this way, all weighting parameters can be recovered by utilizing M samples, where M is much smaller than the number N of lattice elements. This method still performs well when various kinds of noise affect the original data.
Li et al. proposed an approach combining QR decomposition and compressed sensing to recover complex networks with the help of input noise [81], which is shown in Figure 5. The linear network system is defined as

Ẋ(t) = A X(t) + ξ(t),

where matrix A with dimensions N × N describes the structure of the network nodes, vector X(t) is the state of the N nodes in the network system at time t, and ξ(t) is the injected input noise. As is shown in Figure 5, they transformed the linear system model into a compressed-featured equation, and the dynamic structure could be reconstructed by CS. CS also provides a new perspective for channel estimation by exploiting channel sparsity. Fang et al. proposed a novel spectrum-sensing algorithm based on STP-CS to judge the state of channel occupancy in wireless networks [82], which is a generalization of a traditional spectrum-sensing algorithm. They took advantage of the sparsity of channel energy in wireless networks, and only needed to reconstruct the energy vector of the occupied channels instead of recovering the entire spectral signals. He et al. addressed the sparse channel-estimation problem in multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing systems with the help of distributed CS [83]. There is also a spatiotemporal joint channel-estimation algorithm based on structured compressive sensing that reduces the required pilot overhead; this method utilizes the common sparse spatiotemporal characteristics of delay-domain MIMO channels [84].
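The spectrum-sensing idea above — recovering only the sparse channel-energy vector rather than the entire spectrum — can be illustrated with a small l1 (convex-optimization) sketch solved by iterative soft thresholding. The dimensions, the Gaussian measurement matrix, and the ISTA solver are assumptions for illustration, not the STP-CS construction of [82].

```python
import numpy as np

def ista(Phi, y, lam=1e-2, iters=2000):
    """Iterative soft thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        v = x + Phi.T @ (y - Phi @ x) / L  # gradient step on the data-fit term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x

N, M, K = 128, 40, 4                       # N channels, M measurements, K occupied
rng = np.random.default_rng(7)
occupied = np.sort(rng.choice(N, K, replace=False))
e = np.zeros(N)                            # sparse channel-energy vector
e[occupied] = rng.uniform(1.0, 2.0, K)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ e                                # compressive measurements of the energies
e_hat = ista(Phi, y)
detected = np.flatnonzero(e_hat > 0.5)     # occupancy decision by thresholding
print(detected, occupied)
```

Only M = 40 measurements are needed to decide the occupancy of all 128 channels, because the decision requires the support of the energy vector rather than the full spectral signal.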
With the help of compressed sensing, Vaquer et al. proposed a method to reduce the memory footprint of a Monte Carlo simulation in which the scalar flux over the entire problem is desired [85]. They randomly selected Monte Carlo particle tallies that were not contiguous in space, and used this small number of tallies for partial reconstruction by minimizing the total-variation norm of the reconstructed flux. Results for a TRIGA reactor simulation indicated that their method could produce accurate flux maps for thermal and fast fluxes by using only about 10% of the total number of tallies.
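A toy 1D analogue of this idea — reconstructing a gradient-sparse "flux" profile from a few measurements by minimizing total variation — can be posed as a linear program. The piecewise-constant profile, the Gaussian measurement ensemble (in place of tally subsampling), and all sizes are illustrative assumptions, not the setup of [85].

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m = 100, 40
# piecewise-constant "flux" profile: its discrete gradient is sparse (two jumps)
x_true = np.concatenate([np.full(30, 1.0), np.full(40, 3.0), np.full(30, 2.0)])

Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true                           # m << n compressive measurements

# LP: min sum(t)  s.t.  -t <= D x <= t  and  Phi x = y   (t majorizes |D x|, the TV)
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # forward-difference operator
c = np.concatenate([np.zeros(n), np.ones(n - 1)])
A_ub = np.block([[D, -np.eye(n - 1)], [-D, -np.eye(n - 1)]])
b_ub = np.zeros(2 * (n - 1))
A_eq = np.hstack([Phi, np.zeros((m, n - 1))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * n - 1))
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because the gradient of the profile is sparse, the TV-minimizing solution consistent with only 40 of 100 degrees of freedom recovers the full profile, echoing the ~10%-of-tallies result reported above.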
In addition, other CS applications include analog-to-information conversion [86], channel coding [87], radar reconstruction [88], and radar remote sensing [89,90]. There are still many CS application fields waiting to be explored.

Conclusions
In this article, we gave an overview of compressed sensing with three key aspects: sensing models, reconstruction algorithms, and applications. We first introduced several sensing models, including sparse-dictionary, block-compressed, chaotic-compressed, deep-learning compressed, and semitensor-product compressed-sensing methods. We then presented a detailed introduction of reconstruction algorithms, such as the convex-optimization, greedy, Bayesian, and noniterative-reconstruction algorithms. Lastly, we provided a brief introduction to CS applications covering many fields, such as data compression, image encryption, cryptography, channel estimation, analog-to-information conversion, channel coding, radar reconstruction, and radar remote sensing. The approaches discussed in this paper provide a theoretical basis for the improvement and new applications of CS.

Figure 1 .
Figure 1. Network architecture is increasingly complex, and the volume of transmitted network data is increasingly large.

Figure 2 .
Figure 2. Process of simplified compressed sensing (CS). Note: s, sparse vector of x; y, measurement vector; ΦΨ, sensing or measurement matrix; and M < N.

Figure 3 .
Figure 3. Difference between traditional matrix multiplication and semitensor matrix multiplication. Traditional matrix multiplication must meet limitations on matrix dimensions: the column number of matrix A must be equal to the row number of matrix x. The theory of the semitensor product (STP) breaks through this limitation, making it possible to perform matrix multiplication when the two matrices do not meet the dimension-matching condition [31].

Figure 4 .
Figure 4. Data-encryption transmission system based on compressed sensing, which can simultaneously realize data encryption and compression.

Figure 5 .
Figure 5. Identification of complex network model based on compressed sensing and QR decomposition [81].