
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Many practical compressible signals like image signals or the networked data in wireless sensor networks have a non-uniform support distribution in their sparse representation domain. Utilizing this prior information, a novel compressed sensing (CS) scheme with unequal protection capability is proposed in this paper by introducing a windowing strategy called expanding window compressed sensing (EW-CS). According to the importance of different parts of the signal, the signal is divided into several nested subsets called windows, and the more important windows are allocated more measurements.

Compressed sensing (CS) [

The compressible signal can be represented by the indices and the values of its non-zero components, where the set of indices of the non-zero components is called the sparse support. Ordinary CS algorithms are designed without any constraint on the distribution of the support of the sparse signal. Actually, many natural signals do have special support structures. For example, the image signal is compressible in the discrete cosine transform (DCT) basis or discrete wavelet transform (DWT) basis. The transform coefficients constitute a non-uniform compressible signal where the significant coefficients mostly appear at lower frequencies, as shown in

Considering these widely existing non-uniform compressible signals, if we can utilize the prior knowledge of the support distribution to provide unequal protection for different parts of the signal, we can intuitively achieve better performance. In this paper, such a CS algorithm is proposed to improve the sensing and recovery efficiency for non-uniform compressible signals, called the expanding window compressed sensing (EW-CS). The windowing method is inspired by the expanding window fountain codes [

The proposed EW-CS scheme treats the different parts of the compressible signal discriminately to pursue improved overall performance. The extra attention on the more important parts comes at the expense of the less important parts. Nevertheless, this trade-off is worthwhile, because most of the energy or useful information is contained in the more important parts. The overall performance improvement also has a prerequisite: prior knowledge of the rough non-uniform distribution of the significant elements. This prior information is not difficult to get in most cases. For image signals, the significant transform coefficients mostly appear at lower frequencies. For networked data, the significant data surrounds the emerging event. The above statements about the EW-CS are presented and proved from theoretical and numerical perspectives in our paper. Moreover, the EW-CS is applied to image signals and networked data, and shown to provide better performance than the ordinary CS and the existing unequal protection CS schemes.

The rest of this paper is organized as follows: in Section 2, the key issues of CS are briefly reviewed, and the related existing CS algorithms considering non-uniform compressible signals are presented and compared with our EW-CS algorithm. The details of the EW-CS scheme are introduced in Section 3. Section 4 is devoted to analyzing the recovery error upper bounds of the proposed EW-CS in comparison with the ordinary CS. In Section 5, the variations of the basic EW-CS algorithm are discussed. The experimental results are shown and analyzed in Section 6. Section 7 concludes this paper and gives some directions for future work.

A signal X = (x_{1}, …, x_{N})^{T} ∈ ℝ^{N} is sparse or compressible in an orthonormal basis Ψ ∈ ℝ^{N×N} if its transform coefficient vector contains only a few significant elements. In CS, such a signal is acquired through M (M ≪ N) random projections y = ΦX, where Φ ∈ ℝ^{M×N} is the sensing matrix, and the recovery is posed as searching for the sparsest coefficient vector in ℝ^{N} that is consistent with the measurements y.
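As a concrete sketch of the sensing and recovery pipeline just described, the toy example below senses a K-sparse signal with a Gaussian matrix and recovers it with orthogonal matching pursuit (OMP); OMP stands in here for the ℓ0/ℓ1 solvers discussed next, and all dimensions are illustrative choices, not the paper's settings.

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, K = 256, 100, 10
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # entries ~ N(0, 1/M)
y = Phi @ x                                      # M random projections
x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat))                 # recovery error (near zero when the support is found)
```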

Although the ℓ_{0}-minimization problem (P_{0}) can give the unique solution with only a few measurements, it is an NP-hard combinatorial problem. Its convex relaxation (P_{1}), which replaces the ℓ_{0}-norm with the ℓ_{1}-norm, is a linear programming problem, also known as basis pursuit [

The restricted isometry property (RIP) [ ] guarantees the equivalence of the (P_{0}) and (P_{1}) solutions with overwhelming probability. Let X_{k} denote the best k-term approximation of X. When the measurements y ∈ ℝ^{M} are contaminated by noise, the recovery via (P_{1}) can be further improved by solving the Lasso [

The investigation of utilizing compressible signals' statistical characteristics has already attracted a lot of interest. The most influential one is the Bayesian compressive sensing (BCS) [

However, these compressible signal probability density functions are still based on the assumption that the significant elements are uniformly distributed. Considering image signals with non-uniform distribution, the wavelet tree model is introduced into BCS in [ ], where the ℓ_{1}-norm minimization algorithm is integrated by exploring the hidden Markov tree model of the wavelet coefficients. In [

Compared with the unequal protection methods like the hidden Markov tree model in [

The proposed EW-CS scheme is detailed in this section. First, the non-uniform compressible signal is formulated. Then, the general EW-CS encoding and decoding algorithms are presented. Finally, we simplify the EW-CS to a two-window case which is quite important for applications and analysis.

In the proposed EW-CS scheme, for clearer explanation, we assume that the signal X = (x_{1}, …, x_{N})^{T} is compressible in the identity basis Ψ = I_{N×N}, i.e., its sorted magnitudes obey the power law |x|_{(k)} ≤ C_{p}·k^{−1/p}, where |x|_{(k)} denotes the k-th largest magnitude, C_{p} is a constant, and the exponent p > 0 controls the decay speed (smaller p means faster decay).
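A synthetic signal obeying this power-law decay can be generated as follows; the constant C, the exponent p, and the random signs and permutation are illustrative choices, not part of the original formulation.

```python
import numpy as np

def compressible_signal(N, p=0.5, C=1.0, seed=0):
    """Signal whose sorted magnitudes decay as C * k**(-1/p) (power law)."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, N + 1)
    mags = C * k ** (-1.0 / p)        # |x|_(k) = C k^{-1/p}
    signs = rng.choice([-1.0, 1.0], N)
    x = signs * mags
    rng.shuffle(x)                    # uniform support here; EW-CS targets non-uniform support
    return x

x = compressible_signal(1024)
mags = np.sort(np.abs(x))[::-1]
assert np.all(np.diff(mags) <= 0)     # sorted magnitudes indeed decay
```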

The non-uniform compressible signal means that the distribution of the significant elements in the sparse support is non-uniform, that is, the support distribution has some structured characteristics. We use importance classes to describe this distribution. From the sparsity perspective, the importance of a class corresponds to how many significant elements it contains. The compressible signal X is divided into W importance classes, where the k-th class contains N_{k} elements and holds a fraction Λ_{k} of the significant elements, with the classes indexed in decreasing order of importance.

The importance classes determine a sequence of strictly increasing subsets of the signal's elements, which we call windows. The w-th window consists of the first w importance classes, so the W-th window is the entire signal.

In the classical CS algorithm, one method to generate the random sensing matrix is to pick the entries from the Gaussian distribution N(0, 1/M). In the EW-CS, each class of measurements is generated from one window: a measurement belonging to window w is a random projection whose sensing sub-matrix Φ_{(w)} has Gaussian entries only over the elements X_{(w)} of the w-th window, with zeros elsewhere. Each measurement selects its window according to the MP Ω(x) = Σ_{w} Ω_{w}x^{w}.
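As an illustrative sketch of this construction (the window sizes and measurement counts below are assumptions, not the paper's settings), the window-structured joint sensing matrix can be built as follows:

```python
import numpy as np

def ew_sensing_matrix(window_sizes, window_meas, seed=0):
    """Joint EW-CS sensing matrix: rows for window w carry Gaussian entries
    over the first window_sizes[w] coordinates and zeros elsewhere."""
    rng = np.random.default_rng(seed)
    N, M = window_sizes[-1], sum(window_meas)
    Phi = np.zeros((M, N))
    row = 0
    for n_w, m_w in zip(window_sizes, window_meas):
        # Entries ~ N(0, 1/M) inside the window, zeros outside it.
        Phi[row:row + m_w, :n_w] = rng.normal(0.0, 1.0 / np.sqrt(M), (m_w, n_w))
        row += m_w
    return Phi

# Two-window example: first window = first 128 of 512 elements,
# half of the M = 200 measurements devoted to it (Omega_1 = 0.5).
Phi = ew_sensing_matrix(window_sizes=[128, 512], window_meas=[100, 100])
assert Phi.shape == (200, 512)
assert np.all(Phi[:100, 128:] == 0)   # first-window rows see only window 1
```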

It can be noticed from this construction that the w-th class contains Ω_{w}M measurements.

Although the classes of measurements are described as being generated sequentially, window by window, in practice each measurement can independently choose its window w with probability Ω_{w}, which yields Ω_{w}M measurements for window w on average; the size of the k-th class is likewise determined by the GP coefficient Π_{k}.

The encoding strategy of the EW-CS is presented class by class. This is sometimes necessary for constrained encoders or distributed coding scenarios, but it is usually more efficient to use a joint decoder, which combines the measurements together and recovers the original signal as a whole. Stacking the zero-padded sensing sub-matrices Φ_{(w)} of the W windows yields an M × N joint sensing matrix.

Thus, the joint decoding can be implemented using this joint sensing matrix. At the decoder side, all the received random projections compose a measurement vector y = Φ^{EW}X, where Φ^{EW} denotes the M × N joint sensing matrix applied to the signal X.

In the previous subsection, we described the proposed EW-CS scheme in a general format. Nevertheless, the most common situation is that the significant elements of a natural compressible signal are concentrated in a certain part of the signal. Therefore, two importance classes are enough for the usual non-uniform compressible signal. The first importance class contains more of the large elements than the second importance class. The first importance class is also the first window, and the entire signal is the second window. The two-window EW-CS has wider application prospects; thus, this special case of the EW-CS is discussed in detail below.

In the two-window EW-CS, the GP is Π(x) = Π_{1}x + (1 − Π_{1})x^{2}, which is determined by only one coefficient Π_{1} ∈ [0,1]. Similarly, the sparsity distribution can be written as Λ(x) = Λ_{1}x + (1 − Λ_{1})x^{2}. If the first class really has more significant elements, 0.5 < Λ_{1} ≤ 1; otherwise 0 < Λ_{1} ≤ 0.5. For clarification, we adopt the sequential generation with the MP Ω(x) = Ω_{1}x + (1 − Ω_{1})x^{2}. By increasing Ω_{1}, we progressively increase the protection of the first importance class. The extreme cases Π_{1} = Ω_{1} = 0 and Π_{1} = Ω_{1} = 1 reduce to the ordinary CS problem.
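A minimal sketch of how the two coefficients translate into concrete class sizes and measurement allocations follows; the rounding rule is an illustrative assumption.

```python
import numpy as np

def two_window_design(N, M, pi1, omega1):
    """Class sizes and measurement split implied by the degree-2 polynomials
    Pi(x) = pi1*x + (1-pi1)*x**2 and Omega(x) = omega1*x + (1-omega1)*x**2."""
    N1 = int(round(pi1 * N))       # first window (= first class) size
    M1 = int(round(omega1 * M))    # measurements drawn from window 1
    return (N1, N - N1), (M1, M - M1)

sizes, meas = two_window_design(N=512, M=200, pi1=0.3, omega1=0.5)
print(sizes, meas)    # (154, 358) (100, 100)
```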

Based on the above definitions, the joint sensing matrix is given as:

Φ^{EW} = ( Φ_{1,1} 0 ; Φ_{2,1} Φ_{2,2} ),

where the first M_{1} rows sense only the first window (written Φ_{1} = (Φ_{1,1} 0) for short), and the remaining M_{2} rows sense the entire signal (written Φ_{2} = (Φ_{2,1} Φ_{2,2}) for short). Then the encoding procedure can be given as:

y_{1} = Φ_{1,1}X_{1}, y_{2} = Φ_{2,1}X_{1} + Φ_{2,2}X_{2},

where y_{1} and y_{2} denote the measurement vectors for the two windows, respectively. The encoding can also be written in the joint coding format y = Φ^{EW}X.
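The equivalence between the per-class encoding and the joint coding format can be checked numerically; the dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N, M1, M2 = 128, 512, 100, 100   # first class size, signal length, per-window measurements
X = rng.standard_normal(N)

Phi11 = rng.standard_normal((M1, N1))   # Phi_{1,1}: senses the first window only
Phi2 = rng.standard_normal((M2, N))     # Phi_2 = (Phi_{2,1} Phi_{2,2}): senses the whole signal

# Per-class encoding.
y1 = Phi11 @ X[:N1]                     # y_1 = Phi_{1,1} X_1
y2 = Phi2 @ X                           # y_2 = Phi_{2,1} X_1 + Phi_{2,2} X_2

# Joint coding format: Phi^EW stacks Phi_1 = (Phi_{1,1} 0) on top of Phi_2.
Phi_ew = np.vstack([np.hstack([Phi11, np.zeros((M1, N - N1))]), Phi2])
y = Phi_ew @ X

assert np.allclose(y, np.concatenate([y1, y2]))   # the two formats agree
```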

It must be noted that X_{1} and X_{2} denote the signal elements of the two importance classes, not the two windows, and are thus different from X_{(1)} and X_{(2)}. The joint coding format is necessary for facilitating the performance analysis of the EW-CS and the contrast with the ordinary CS in the next section.

In this section, the reconstruction error upper bounds of the EW-CS are analyzed and contrasted with the ordinary CS scheme (CS for short). The simple two-window EW-CS is discussed first, and then the results are extended to the general EW-CS.

The ℓ_{2}-norm recovery error, defined as ‖X − X̂‖_{ℓ2}, is adopted as the performance measure.

The most direct relationship between the CS and the EW-CS can be obtained by replacing the sub-matrix Φ_{1,2} with an all-zero matrix of the same size; then the CS problem becomes the EW-CS as shown in

It can be found that the only difference between the CS and the EW-CS lies in the generation of the first class of measurements. The measurements y_{1}^{CS} sense all the elements of the signal, so for the first importance class the remaining (N_{2} − N_{1}) elements act as interference; the measurements y_{1}^{EW} sense only the first N_{1} elements, and the remaining elements of the signal do not affect them.

For analyzing the discriminative protection of the different importance classes, we should consider X_{1} and X_{2} separately. This is equal to considering

The first terms on the right hand of

This result can also be proved with the aid of mathematical formulae. From Theorem 2 in [ ], for a sensing matrix satisfying the RIP, the ℓ_{2}-norm recovery error from inaccurate measurements has the upper bound:

‖X̂ − X‖_{ℓ2} ≤ C_{3,S}·ε + C_{4,S}·S^{−1/2}·‖X − X_{S}‖_{ℓ1},

where X_{S} is the best S-term approximation of X, the ℓ_{2}-norm of the noise obeys ‖z‖_{ℓ2} ≤ ε, and the constants C_{3,S} and C_{4,S} depend on the restricted isometry constants.

For the

Note that the second terms on the right side of the above bounds vanish when X_{2} is an all-zero vector. Based on this observation, the recovery error bound for y_{1}^{EW} is no larger than that for y_{1}^{CS}.

Compared with the full measurement set, the first class contains only an Ω_{1} fraction of all the measurements, so these measurements alone may not guarantee the best recovery. Other looser bounds for the approximation error are introduced here for analyzing

For compressible signals obeying the power law, there is an earlier result in [ ]: the ℓ_{2}-norm error of the recovery from ℓ_{1}-norm minimization obeys the following with overwhelming probability:

‖X − X̂‖_{ℓ2} ≤ C_{p}·(M/log N)^{−r},

where the constant C_{p} depends on p and r = 1/p − 1/2.

This is a looser bound than the previous one. Here the number of effectual measurements for the second window is all M measurements for the CS, but only (1 − Ω_{1})M for the EW-CS, where Ω_{1} is defined in the previous section as the ratio of the number of measurements assigned to the first window (M_{1}). Obviously, the comparison depends on the power-law exponent p_{2} of the second window: when 0 < p_{2} < 2, the CS's bound is lower than the EW-CS's; when p_{2} = 2, they are equal; when p_{2} > 2, the EW-CS's bound is lower in return. However, the error bound is well behaved for compressible signals usually with 0 < p_{2} < 1 or with 0 < p_{2} < 2 [

Now, the comparison between the CS and the EW-CS for the entire signal

Thus, the overall recovery error upper bounds for

We assume comparable constants C_{1} and C_{2}, which are determined by the sparsity distribution and the MP, i.e., by Λ_{1} and Ω_{1}. However, if there really are more significant elements placed in the first importance class, i.e., 0.5 < Λ_{1} ≤ 1, the first term will dominate the overall error upper bounds; otherwise, the second term will do so. Thus, the EW-CS is superior to the CS for a non-uniform compressible signal with a proper importance class assignment.

Based on the analysis of the two-window EW-CS, the results are easily extended to

Note that the EW-CS's sensing matrix Φ^{EW}

Comparing with the conventional CS, only for the first importance class vector

During the introduction of the EW-CS scheme in Section 3, a natural question is what happens if the importance of the classes does not decrease with the class index as we assumed. From the analysis in the last section, we know that the performance of the EW-CS degrades badly when the more important class does not contain more significant elements, i.e., when 0 < Λ_{1} ≤ 0.5 in the two-window case. Thus, an extension of the EW-CS is necessary for this case. The two basic variations are shown in

This does not mean that the EW-CS can only be used in the simple cases where the significant elements concentrate at the front, in the middle, or at the back of the signal. It has an even more complicated variation in a certain case. Recall that we have designed a window selection distribution such that each measurement can select the window it belongs to. Similarly, a signal element can also choose the window it belongs to. This looks a little unusual in practice, since a signal element may not know its own importance, let alone make the selection. However, it can be realized in certain cases, such as wireless sensor network scenarios.

Let us consider the CWS [ ] scenario, in which each sensor i holds a reading x_{i} and locally generates random coefficients φ_{ji}; the j-th measurement is formed over the air as y_{j} = Σ_{i} φ_{ji}x_{i}. Each sensor can then compare its own reading with a threshold to decide which window it belongs to.

In our simulation, the ℓ_{2}-norm of the recovery error, ‖X − X̂‖_{ℓ2}, is used to evaluate the performance of the proposed EW-CS algorithm. A compressible signal X ∈ ℝ^{N} obeying the power law with constant C_{p} is generated, with its significant elements concentrated in the first importance class, and the two-window EW-CS is applied with Π_{1} = 0.3 and Ω_{1} = 0.5.
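An end-to-end sketch of such an experiment is given below; it replaces the convex-programming package with a simple iterative soft-thresholding (ISTA) solver for the Lasso, and the signal model, the regularization weight, and the dimensions are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def ista(Phi, y, lam=0.05, iters=1000):
    """Iterative soft-thresholding for the Lasso: min 0.5||y - Phi x||^2 + lam||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x - (Phi.T @ (Phi @ x - y)) / L   # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
N, M = 256, 120
# Non-uniform compressible signal: the large entries are concentrated at the front.
x = np.zeros(N)
x[:20] = rng.standard_normal(20) * 5
x[20:] = rng.standard_normal(N - 20) * 0.01
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = ista(Phi, Phi @ x)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)   # relative l2 recovery error
```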

The overall sensing rate r = M/N is set identical for the EW-CS and the CS so that the comparison is fair. The recovery adopts an ℓ_{1}-based method which is implemented by using a package for specifying and solving convex programs [

In the first example, we consider a compressible signal with Λ_{1} = 0.8. The ℓ_{2}-norm error for the component signals
The ℓ_{2}-norm error for the signal

For various choices of Λ_{1}, the ℓ_{2}-norm error of the signal is examined. The EW-CS is sensitive to Λ_{1}, while the ordinary CS is not impacted by Λ_{1}. With the increase of Λ_{1}, the recovery error of the EW-CS decreases. Roughly speaking, when 0 ≤ Λ_{1} ≤ 0.5, the CS outperforms the EW-CS. This is because the first importance class is not actually the most important one, which means that using the EW-CS with mismatched window settings introduces more errors.

Nevertheless, when 0.5 ≤ Λ_{1} ≤ 1, the EW-CS begins to outperform the CS. In this case, the window settings match the signal's features, so the superiority of the EW-CS is demonstrated. It can also be noticed that the performance for the component signal

For testing the practical performance of the proposed EW-CS, instead of using artificially generated signals, we use real-world non-uniform compressible signals. The DCT or DWT coefficients of image signals constitute non-uniform compressible signals. With this prior knowledge, random sensing with the EW-CS can be accomplished. The 512 × 512 grayscale test images with different features,

The M-CS scheme in [ ] used the ℓ_{1}-based method to recover the wavelet coefficients, and then applied the inverse wavelet transform to recover the original image. In our simulation, the three-level DWT is used in M-CS with the measurement allocation rule in [

Although the importance class design in the EW-CS can also directly use the sub-bands of the 2-D DWT coefficients, we prefer to avoid the additional transform process at the encoder. Thus, we choose the 1-D DCT as the sparse representation basis, which is used only at the decoder, so that a low-complexity encoder is guaranteed. The ordinary CS is also simulated with the 1-D DCT.

Firstly, we use the two-window EW-CS with the parameters Π_{1} = 0.3 and Ω_{1} = 0.5. The visual quality comparison for “Lena” is given in

In M-CS, the significant coefficients are only sampled by the measurements allocated to the sub-band they belong to, not by all the measurements, so its performance is worse than the EW-CS at most sensing rates. At low sensing rates, the number of measurements for the most significant coefficients in the M-CS is fixed to guarantee successful recovery, and these measurements are noiseless. However, the EW-CS cannot recover the elements in the first importance class from the noisy measurements even though there are more of them. When the sensing rate increases, this problem disappears.

In the second simulation, we reconstruct the images from the noisy measurements which are transmitted through an AWGN channel. The sensing rate is fixed, and the power of the zeroed sub-matrix Φ_{1,2} is shifted to Φ_{1,1} to enlarge the signal power of the most significant class. The PSNR performances with different signal-to-noise ratios (SNR) for the four images are shown in

We can observe that the EW-CS is worse than the CS at lower SNR, but outperforms the CS when the SNR increases. The M-CS's PSNR is the lowest at the same SNR, but its decay speed is similar to the CS's and slower than the EW-CS's. As the channel noise energy grows, the advantage of the EW-CS in recovering the first importance class is weakened, and the recovery quality of the second importance class degrades further. Thus, the EW-CS algorithm is more applicable to mild transmission environments.

Then, we investigate the impact of changing the window size on the recovery quality of the two-window EW-CS. The PSNR performance of "Lena" with different Π_{1} is examined. The available range of Π_{1} for different measurement allocation parameters Ω_{1} will be different; that is why the x axis differs for each curve. For each Ω_{1}, there is an optimal first window size, but the general trend is that the first window should not be too large.

We also evaluate the PSNR performance of the EW-CS with different Ω_{1} when Π_{1} = 0.2, Π_{1} = 0.3, and Π_{1} = 0.4. The available ranges of Ω_{1} for different Π_{1} differ according to the design rule of the joint sensing matrix. The simulation results show that for Π_{1} = 0.2 and Π_{1} = 0.3, there is an optimal number of measurements that should be assigned to the first window. For Π_{1} = 0.4, however, the more measurements, the better.

Finally, we design the EW-CS with more than two windows. The rank law should also be obeyed. So the three-window EW-CS is designed with the GP Π(^{2} + 0.5^{3} and MP Ω(x) = 0.5^{2} + 0.3^{2} + 0.3^{3} + 0.4^{4} and Ω(x) = 0.1^{2} + 0.3^{3} + 0.4^{4}. The five-window EW-CS is designed with Π(^{2} + 0.1^{3} + 0.2^{4} + 0.5^{5} and Ω(^{2} + 0.3^{3} + 0.2^{4} + 0.3^{5}. The PSNR performance for different window designs for image “Lena” is shown in

The variation of the EW-CS applied in wireless sensor networks is implemented here. We define a 500 m × 500 m sensing area, in which the readings of the sensors are correlated with their distances to the emerging events.
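A sketch of how such networked data can be synthesized is given below; the exponential decay model, the decay constant, and the threshold are illustrative assumptions, not the paper's exact data model.

```python
import numpy as np

rng = np.random.default_rng(3)
AREA, N_SENSORS, N_EVENTS = 500.0, 200, 3

sensors = rng.uniform(0, AREA, (N_SENSORS, 2))   # random sensor placement
events = rng.uniform(0, AREA, (N_EVENTS, 2))     # emerging events

# Reading decays with distance to the nearest event, so the networked data
# is non-uniformly compressible: only sensors near an event hold
# significant values.
dists = np.linalg.norm(sensors[:, None, :] - events[None, :, :], axis=2)
readings = np.exp(-dists.min(axis=1) / 30.0)

# Each sensor compares its own reading against a broadcast threshold to
# decide whether it belongs to the first window.
first_window = readings > 0.5
print(first_window.sum(), "of", N_SENSORS, "sensors fall in the first window")
```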

For the two-window EW-CS, _{1} = 0.3. The first window size is determined by the threshold given to each sensor from the collection centre (not shown in _{1} = 0.5. The _{2}-norm recovery error of this example is 0.2401. More simulation results with different transmission rates are plotted in _{2}-norm recovery error with different SNR is given in

Finally, the performance of the EW-CS with a changing first window size is shown for Ω_{1} = 0.5 and a noiseless transmission channel. The EW-CS in the CWS case has the feature that the first window size Π_{1} is set by the threshold rather than in advance; for a fixed Ω_{1}, the smaller the first window is, the more measurements are assigned to the largest or most significant elements, so the overall performance is inversely proportional to the size of the first window as shown in

A novel compressed sensing scheme called expanding window compressed sensing is proposed in this paper to provide unequal protection for non-uniform compressible signals. The efficiency of the proposed scheme is analyzed from the perspective of recovery error upper bounds by comparison with ordinary compressed sensing. Different from weighted methods, a windowing technique is adopted to make the scheme more flexible and efficient. Compared with blocked sensing methods, the nested window design gains more benefit from the joint recovery algorithm. The scheme is further applied to practical non-uniform compressible signals, namely image signals and networked data.

This work is partially supported by the National Natural Science Foundation of China (No. 61201149), the 111 Project (No. B08004), and the Fundamental Research Funds for the Central Universities. This work is also supported (in part) by the Korea Evaluation Institute of Industrial Technology (KEIT), under the R&D support program of the Ministry of Knowledge Economy, Korea.

Examples of non-uniform compressible signals. (

Expanding window compressed sensing.

Joint sensing matrix for the general EW-CS.

The variations of the expanding windows. (

The non-uniform compressible signal

The ℓ_{2}-norm recovery error with different Λ_{1}. (

Visual quality comparisons at sensing rate

The PSNR performances with different sensing rates. (

The PSNR performances with different SNR (dB). (

The PSNR performance with different first window size.

The PSNR performance with different number of measurements for the first window.

The PSNR performances for different number of windows.

The random sensor network with three events (denoted by red stars) and two windows. The sensors in the first window are denoted by blue circles, and the other sensors are denoted by green nodes.

The networked data and its recovery signal.

The ℓ_{2}-norm recovery error with different transmission rates.

The ℓ_{2}-norm recovery error with different SNR.

The ℓ_{2}-norm recovery error with the size of the first window.