Article

Conjugate Gradient Hard Thresholding Pursuit Algorithm for Sparse Signal Recovery

1 Provincial Laboratory of Computer Integrated Manufacturing, Guangdong University of Technology, Guangzhou 510006, China
2 State Key Laboratory of Precision Electronic Manufacturing Technology and Equipment, Guangdong University of Technology, Guangzhou 510006, China
3 School of Physics and Mechanical & Electrical Engineering, Shaoguan University, Shaoguan 512005, Guangdong, China
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(2), 36; https://doi.org/10.3390/a12020036
Submission received: 25 December 2018 / Revised: 29 January 2019 / Accepted: 30 January 2019 / Published: 13 February 2019

Abstract
We propose a new iterative greedy algorithm to reconstruct sparse signals in Compressed Sensing. The algorithm, called Conjugate Gradient Hard Thresholding Pursuit (CGHTP), is a simple combination of Hard Thresholding Pursuit (HTP) and Conjugate Gradient Iterative Hard Thresholding (CGIHT). The conjugate gradient method, with its fast asymptotic convergence rate, is integrated into the HTP scheme, which otherwise uses only a simple line search; this accelerates the convergence of the iterative process. Moreover, an adaptive step size selection strategy, which repeatedly shrinks the step size until a convergence criterion is met, gives the algorithm a stable and fast convergence rate without manual tuning of the step size. Finally, experiments on both Gaussian signals and real-world images demonstrate the advantages of the proposed algorithm in convergence rate and reconstruction performance.

1. Introduction

As a new sampling method, Compressed Sensing (CS) has received broad research interest in signal processing, image processing, biomedical engineering, electronic engineering and other fields [1,2,3,4,5,6]. In particular, in magnetic resonance imaging (MRI), CS technology greatly improves the efficiency of MRI processing (Figure 1). By exploiting the sparse characteristics of signals, CS can accurately reconstruct sparse signals from significantly fewer samples than required by the Shannon-Nyquist sampling theorem [7,8].
Assume that a signal f ∈ R^N is s-sparse in some domain ψ. This means f = ψx, where x has at most s (s ≪ N) nonzero entries. This system can be measured by a sampling matrix Φ ∈ R^{M×N} (M < N): y = Φf = Φψx = Ax, where y is called the measurement vector and A is the so-called measurement matrix. The CS model is shown in Figure 2. If Φ is incoherent with ψ, the coefficient vector x can be reconstructed exactly from a few measurements by solving the underdetermined linear system y = Ax under the constraint ‖x‖_0 ≤ s, i.e., by solving the following ℓ_0 norm minimization problem:
min ‖x‖_0   s.t.   y = Ax. (1)
As a combinatorial optimization problem, the above ℓ_0 norm optimization is NP-hard [8]. One way to solve such a problem is to relax it into an ℓ_1 norm optimization problem:
min ‖x‖_1   s.t.   y = Ax. (2)
Since the ℓ_1 norm problem is convex, methods such as basis pursuit (BP) [7] and LASSO [9] are usually employed to solve the ℓ_1 norm minimization in polynomial time. The solutions obtained by these algorithms approximate the exact solution of (1) well. However, their computational complexity is too high to be practical for many applications. As a relaxation of the ℓ_0 norm optimization, the ℓ_p norm for CS reconstruction has attracted extensive research interest, and the problem can be formulated as the following optimization problem:
min ‖x‖_p   s.t.   y = Ax, (3)
where ‖x‖_p = (Σ_{i=1}^N |x_i|^p)^{1/p} denotes the ℓ_p norm of x. Many methods have been developed to solve this problem [10,11,12,13]. Many studies have shown that using the ℓ_p norm with 0 < p < 1 requires fewer measurements and gives much better recovery performance than using the ℓ_1 norm [14,15]. However, the ℓ_p norm leads to a non-convex optimization problem that is difficult to solve efficiently.
Sparse signals can be recovered quickly by these algorithms provided the measurement matrix satisfies the so-called restricted isometry property (RIP) with a suitable constant. A measurement matrix A is said to satisfy the s-order RIP if, for any s-sparse signal x ∈ R^N,
(1 − δ_s) ≤ ‖Ax‖_2^2 / ‖x‖_2^2 ≤ (1 + δ_s), (4)
where 0 < δ_s < 1. Commonly used sampling matrices include Gaussian matrices, random Bernoulli matrices, partial orthogonal matrices, etc. Measurement matrices formed from these sampling matrices and an orthogonal basis usually satisfy the RIP condition with high probability [16].
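As a concrete illustration of this measurement model, the following NumPy sketch (with example dimensions of our own choosing, not the ones used in the experiments of Section 5) builds a Gaussian sampling matrix, an s-sparse coefficient vector, and the corresponding measurement vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, s = 256, 128, 10                        # example sizes; sampling ratio tau = M/N = 0.5

# Gaussian sampling matrix; such matrices satisfy the RIP with high probability
A = rng.standard_normal((M, N)) / np.sqrt(M)

# s-sparse coefficient vector: random support, standard Gaussian nonzero values
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

y = A @ x                                     # M compressed measurements of the length-N signal
```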
The core contributions of this paper are: (1) by combining the steps of HTP and CGIHT in each iteration, a new algorithm called Conjugate Gradient Hard Thresholding Pursuit (CGHTP) is presented; (2) the search direction is selected alternately from the gradient direction and the conjugate direction, which improves the convergence rate of the iterative procedure; (3) furthermore, an adaptive step size selection strategy similar to that of normalized iterative hard thresholding (NIHT) is adopted in the CGHTP algorithm, which eliminates the effect of the step size on the convergence of the HTP algorithm.
The remainder of the paper is organized as follows. Section 2 reviews related work on iterative greedy algorithms for CS. Section 3 presents the key ideas and the description of the proposed algorithm. The convergence analysis of the CGHTP algorithm is given in Section 4. Simulation experiments verifying the empirical performance of the CGHTP algorithm are carried out in Section 5. Finally, the conclusion of this paper is presented in Section 6.
Notations: in this paper, ∅ denotes the empty set. ⟨a, b⟩ denotes the inner product of the vectors a and b. H_s(x) denotes the hard thresholding operator that keeps the s largest-magnitude entries of x and sets all other entries to zero. supp(x) denotes the index set of the nonzero entries of x. Let T, T̃ ⊆ {1, 2, …, N}. x_T denotes the subvector consisting of the elements of x indexed by T. A_T denotes the submatrix consisting of the columns of A with indices i ∈ T. A^T denotes the transpose of A. I stands for an identity matrix whose size depends on the context. In addition, let τ = M/N denote the sampling ratio.
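The hard thresholding operator H_s and the support map supp are used repeatedly below; a minimal NumPy version (our own helper functions, not part of any library) is:

```python
import numpy as np

def hard_threshold(x, s):
    """H_s(x): keep the s largest-magnitude entries of x and set the rest to zero."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]          # indices of the s largest |x_i|
    out[keep] = x[keep]
    return out

def supp(x):
    """supp(x): index set of the nonzero entries of x."""
    return np.flatnonzero(x)
```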

2. Literature Review

In the past decade, a series of iterative greedy algorithms has been proposed to directly solve Equation (1). These algorithms have attracted increasing attention due to their low computational complexity and good reconstruction performance. According to the support set selection strategy used in the iteration process, we can roughly classify the family of iterative greedy algorithms into three categories.
(1) OMP-like algorithms based on orthogonal matching pursuit (OMP) [17], such as compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP) [18], regularized orthogonal matching pursuit (ROMP) [19], generalized orthogonal matching pursuit (GOMP) [20], sparsity adaptive matching pursuit (SAMP) [21], stabilized orthogonal matching pursuit (SOMP) [22], perturbed block orthogonal matching pursuit (PBOMP) [23] and forward backward pursuit (FBP) [24]. A common feature of these algorithms is that, in each iteration, a support set approximating the correct support set is determined according to the correlations between the measurement vector y and the columns of A.
(2) IHT-like algorithms based on iterative hard thresholding (IHT), such as NIHT, accelerated iterative hard thresholding (AIHT) [25] and conjugate gradient iterative hard thresholding (CGIHT) [26,27]. Unlike the OMP-like algorithms, the approximate support set for the nth iteration of an IHT-like algorithm is determined by the values of A^T y + (I − A^T A)x^{n−1}, which is closer to the correct support set than using the values of A^T y alone [28] (the two selection rules are contrasted in the short sketch after this list).
(3) algorithms composed of OMP-like and IHT-like steps, such as Hard Thresholding Pursuit (HTP) [28,29], generalized hard thresholding pursuit (GHTP) [30], ℓ_0 regularized HTP [31], partial hard thresholding (PHT) [32] and subspace thresholding pursuit (STP) [33]. OMP-like algorithms use least squares to update the sparse coefficients in each iteration, which acts as a debiasing step and makes the iterate approach the exact solution quickly. However, the least squares step is well known to be time-consuming, which leads to a high per-iteration complexity. The advantages of IHT-like algorithms are their simple iteration and good support set selection strategy, but they need more iterations. By combining the OMP-like and IHT-like approaches, these algorithms absorb the advantages of both kinds of algorithms, and have strong theoretical guarantees and good empirical performance [29].
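To make the two support selection rules concrete, the short sketch below (reusing A, y, s and the helpers defined above, with x_prev standing in for an assumed current estimate) computes one candidate support of size s under each rule; it is purely illustrative:

```python
x_prev = hard_threshold(A.T @ y, s)            # an assumed current estimate

# OMP-like rule: rank the columns of A by correlation with the current residual
corr = A.T @ (y - A @ x_prev)
T_omp_like = np.argsort(np.abs(corr))[-s:]

# IHT-like rule: take a gradient step first, then keep the s largest entries
grad_step = x_prev + A.T @ (y - A @ x_prev)    # equals A^T y + (I - A^T A) x_prev
T_iht_like = supp(hard_threshold(grad_step, s))
```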
Among the above algorithms, CoSaMP and SP do not provide better theoretical guarantees than the IHT algorithm, but their empirical performance is better than that of IHT. A simple combination of CoSaMP and IHT leads to the HTP algorithm, whose convergence requires fewer iterations than IHT. However, the reconstruction performance of HTP is greatly affected by the step size, and an optimal step size is difficult to determine in practical applications, so HTP may converge slowly when an inappropriate step size is selected. Furthermore, although the HTP algorithm requires only a small number of iterations to converge to the exact vector x, it uses only the gradient descent direction as the search direction in the iterative process, which may not be the best choice for general optimization problems. In [26], the conjugate gradient method was used to accelerate the convergence of the IHT algorithm owing to its fast convergence rate. In this paper, inspired by the HTP algorithm and the CGIHT algorithm, the operations of these two algorithms in each iteration are combined to take advantage of both.
In this paper, we first introduce the idea of the CGHTP algorithm, then analyze the convergence of the proposed algorithm, and finally validate the proposed algorithm with synthetic experiments and real-world image reconstruction experiments.

3. Conjugate Gradient Hard Thresholding Pursuit Algorithm

Firstly, we give a brief summary of the HTP algorithm and the CGIHT algorithm to facilitate an intuitive understanding of the proposed algorithm. The pseudo-codes of HTP and CGIHT are shown in Algorithms 1 and 2, respectively. In [28], the debiasing step of the OMP-like algorithms and the estimation step of the IHT-like algorithms are coupled in one algorithm to form the HTP algorithm. The HTP algorithm has excellent reconstruction performance as well as a strong theoretical guarantee in terms of the RIP. Optimizing the search direction in each iteration is the main innovation of the CGIHT algorithm. It exploits the fact that, in the iterative process, when the subspace determined by the support set stays the same, the conjugate gradient (CG) method [34] can be employed to accelerate the convergence rate. Two versions of CGIHT for CS are provided in [27]; one of them, called CGIHT restarted for CS, is summarized in Algorithm 2. The iteration procedures of CGIHT and CGIHT restarted differ in their selection of search directions. CGIHT uses the conjugate gradient direction with respect to A_T^T A_T as the search direction in every iteration, while CGIHT restarted determines the search direction according to whether the support set of the current iteration is equal to that of the previous iteration.
Inspired by the respective advantages of the HTP and CGIHT algorithms, we present the CGHTP algorithm in this section. A simple block diagram of the CGHTP algorithm is shown in Figure 3. The algorithm enters an iterative loop after the initialized data are given. The loop updates x^n and the support set T_n in one of two ways: the first is the combination of the gradient descent method and the adaptive step size selection method, and the second is the CG method. The selection criterion between these two ways is whether the support sets of two adjacent iterations are equal. If the support set of the current iteration is equal to that of the previous iteration, the CG method is used [34]; if not, the gradient descent method is used. All candidate solutions and candidate support sets obtained in these ways are finally refined by the least squares method.
The main characteristic of the proposed CGHTP algorithm is that the support set selection strategy of HTP is combined with the acceleration strategy of the CGIHT restarted algorithm in one algorithm. The main steps of CGHTP are listed in Algorithm 3. Similar to CGIHT, CGHTP is initialized with x^0 = 0 and T_0 = supp(H_s(A^T y)). The main body of CGHTP includes the following steps. In Step 1), the gradient direction of the cost function ‖y − Ax^n‖_2^2 [16] at the current iterate is computed and used to calculate the subsequent step size α_n. In Step 2), if x̃^{n+1} has a different support from the previous estimate, i.e., T_n ≠ T_{n−1}, then, similar to the convergence criterion used in the NIHT algorithm, the quantity c‖x̃^{n+1} − x^n‖_2^2 / ‖A(x̃^{n+1} − x^n)‖_2^2 is used to check whether the step size is small enough, where c is a small fixed constant with 0 < c < 1. If this holds, the algorithm continues to the next steps; otherwise the step size is repeatedly shrunk by α_n ← ηα_n until the convergence criterion is met, where 0 < η < 1. In Step 3), if T_n = T_{n−1}, the CG direction is used as the search direction. Finally, Step 4) updates x^{n+1} by solving a least squares problem.
Compared to the HTP algorithm, the CGHTP algorithm provides two search directions in the iterative process: the gradient direction g^n of the cost function ‖y − Ax^n‖_2^2 and the CG direction d^n. According to whether the support sets of the previous and current iterations are equal, one of the two directions is selected as the search direction, and the estimate is then refined on the updated support set by solving a least squares problem. In addition, CGHTP provides a way to choose a suitable step size while ensuring convergence.
The HTP algorithm uses the gradient direction as the search direction, which has a linear asymptotic convergence rate governed by (κ − 1)/(κ + 1), where κ is the condition number of the matrix A_{supp(x^n)}^T A_{supp(x^n)}. Owing to its faster asymptotic convergence with a rate governed by (√κ − 1)/(√κ + 1) [34], the conjugate gradient method has been applied to accelerate convergence in the CGIHT algorithm. CGHTP draws on this advantage of the CGIHT algorithm. If T_n = T_{n−1}, the correct support set has been identified and the submatrix A_{T_n} no longer changes; the CG method is then applied to solve the system A_{T_n}^T A_{T_n} x = A_{T_n}^T y, and the convergence rate is governed by (√κ − 1)/(√κ + 1).
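When the support has stabilized, this inner problem is just a small positive definite linear system; a bare-bones CG solver for the normal equations A_T^T A_T z = A_T^T y (a standard textbook CG loop, written here only to illustrate the acceleration being exploited) could look like:

```python
import numpy as np

def cg_on_support(A, y, T, iters=20, tol=1e-10):
    """Solve (A_T^T A_T) z = A_T^T y with the conjugate gradient method."""
    A_T = A[:, T]
    b = A_T.T @ y
    z = np.zeros(len(T))
    r = b.copy()                      # residual of the normal equations (z starts at 0)
    d = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ad = A_T.T @ (A_T @ d)
        alpha = rs / (d @ Ad)         # exact line search step
        z = z + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d     # new direction, conjugate to the previous ones
        rs = rs_new
    return z
```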
Algorithm 1 [28] Hard Thresholding Pursuit
Input: A, y, s
Initialization: x^0 = 0, T_0 = ∅.
for each iteration n ≥ 1 do
  1) x̃^n = x^{n−1} + A^T(y − Ax^{n−1})
  2) T_n = supp(H_s(x̃^n))
  3) x^n = arg min_{z ∈ R^N} {‖y − Az‖_2 : supp(z) ⊆ T_n}
  4) update the residual r^n = y − Ax^n
  until the stopping criterion is met
end for
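For concreteness, a minimal NumPy sketch of Algorithm 1 follows (our own rendering; the least squares step uses np.linalg.lstsq, and the step size alpha and the residual-based stopping rule are choices of ours rather than part of the original pseudo-code):

```python
import numpy as np

def htp(A, y, s, alpha=1.0, max_iter=200, tol=1e-6):
    """Hard Thresholding Pursuit with a fixed step size alpha (sketch of Algorithm 1)."""
    M, N = A.shape
    x = np.zeros(N)
    for _ in range(max_iter):
        x_tilde = x + alpha * (A.T @ (y - A @ x))          # gradient step
        T = np.argsort(np.abs(x_tilde))[-s:]               # T_n = supp(H_s(x_tilde))
        x = np.zeros(N)
        sol, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # debias: least squares on T_n
        x[T] = sol
        if np.linalg.norm(y - A @ x) <= tol * np.linalg.norm(y):
            break
    return x
```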
Algorithm 2 [27] Conjugate Gradient Iterative Hard Thresholding restarted for Compressed Sensing
Input: A, y, s
Initialization: x^0 = 0, T_{−1} = ∅, d^{−1} = 0, T_0 = supp(H_s(A^T y)).
for each iteration n ≥ 0 do
  1) g^n = A^T(y − Ax^n)  (compute the gradient direction)
  2) if T_n ≠ T_{n−1} then
       β_n = 0
     else
       β_n = −⟨A g^n_{T_n}, A d^{n−1}_{T_n}⟩ / ⟨A d^{n−1}_{T_n}, A d^{n−1}_{T_n}⟩  (compute orthogonalization weight)
     end
  3) d^n = g^n + β_n d^{n−1}  (compute conjugate gradient direction)
  4) α_n = ⟨g^n_{T_n}, g^n_{T_n}⟩ / ⟨A_{T_n} d^n_{T_n}, A_{T_n} d^n_{T_n}⟩  (compute step size)
  5) x^{n+1} = H_s(x^n + α_n d^n), T_{n+1} = supp(x^{n+1})
  until the stopping criterion is met
end for
Algorithm 3 Conjugate Gradient Hard Thresholding Pursuit
Input: A, y, s
Initialization: x^0 = 0, T_{−1} = ∅, d^{−1} = 0, T_0 = supp(H_s(A^T y)).
for each iteration n ≥ 0 do
  1) g^n = A^T(y − Ax^n)  (compute gradient direction)
  2) if T_n ≠ T_{n−1} then
       α_n = ⟨g^n_{T_n}, g^n_{T_n}⟩ / ⟨A_{T_n} g^n_{T_n}, A_{T_n} g^n_{T_n}⟩
       x̃^{n+1} = H_s(x^n + α_n g^n), T_{n+1} = supp(x̃^{n+1})
       while α_n > c‖x̃^{n+1} − x^n‖_2^2 / ‖A(x̃^{n+1} − x^n)‖_2^2 do  (adaptively shrink the step size)
         α_n ← ηα_n
         x̃^{n+1} = H_s(x^n + α_n g^n), T_{n+1} = supp(x̃^{n+1})
       end
       d^n = g^n  (restart the conjugate direction)
     end
  3) if T_n = T_{n−1} then
       β_n = −⟨A_{T_n} g^n_{T_n}, A_{T_n} d^{n−1}_{T_n}⟩ / ⟨A_{T_n} d^{n−1}_{T_n}, A_{T_n} d^{n−1}_{T_n}⟩
       d^n = g^n + β_n d^{n−1}  (compute conjugate gradient direction)
       α_n = ⟨g^n_{T_n}, g^n_{T_n}⟩ / ⟨A_{T_n} d^n_{T_n}, A_{T_n} d^n_{T_n}⟩  (compute optimal step size)
       x̃^{n+1} = H_s(x^n + α_n d^n), T_{n+1} = supp(x̃^{n+1})
     end
  4) x^{n+1} = arg min_z {‖y − Az‖_2 : supp(z) ⊆ T_{n+1}}
end for
Output: x^{n+1}, T_{n+1}
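The NumPy sketch below mirrors our reading of Algorithm 3 (reusing the hard_threshold helper from Section 1). The shrink update α_n ← ηα_n, the acceptance test against c‖x̃^{n+1} − x^n‖_2^2/‖A(x̃^{n+1} − x^n)‖_2^2, and the restart of the conjugate direction are reconstructions of the steps above, so the details should be read as assumptions rather than the authors' exact implementation:

```python
import numpy as np

def cghtp(A, y, s, c=0.5, eta=0.9, max_iter=200, tol=1e-6):
    """Conjugate Gradient Hard Thresholding Pursuit (sketch of Algorithm 3)."""
    M, N = A.shape
    x = np.zeros(N)
    d = np.zeros(N)
    T_prev = np.array([], dtype=int)
    T = np.sort(np.argsort(np.abs(A.T @ y))[-s:])        # T_0 = supp(H_s(A^T y))
    for _ in range(max_iter):
        g = A.T @ (y - A @ x)                            # Step 1: gradient direction
        if not np.array_equal(T, T_prev):
            # Step 2: gradient branch with NIHT-style adaptive step size
            Ag = A[:, T] @ g[T]
            alpha = (g[T] @ g[T]) / (Ag @ Ag)
            x_tilde = hard_threshold(x + alpha * g, s)
            diff = x_tilde - x
            while np.any(diff) and alpha > c * (diff @ diff) / ((A @ diff) @ (A @ diff)):
                alpha *= eta                             # assumed shrink rule
                x_tilde = hard_threshold(x + alpha * g, s)
                diff = x_tilde - x
            d = g.copy()                                 # restart the conjugate direction
        else:
            # Step 3: conjugate gradient branch on the fixed support
            AT = A[:, T]
            Ad_prev = AT @ d[T]
            beta = -((AT @ g[T]) @ Ad_prev) / (Ad_prev @ Ad_prev)
            d = g + beta * d
            Ad = AT @ d[T]
            alpha = (g[T] @ g[T]) / (Ad @ Ad)
            x_tilde = hard_threshold(x + alpha * d, s)
        T_prev, T = T, np.sort(np.flatnonzero(x_tilde))  # T_{n+1} = supp(x_tilde)
        x = np.zeros(N)                                  # Step 4: least squares on T_{n+1}
        sol, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        x[T] = sol
        if np.linalg.norm(y - A @ x) <= tol * np.linalg.norm(y):
            break
    return x
```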

4. Convergence Analysis of CGHTP

In this section, we analyze the convergence of the proposed algorithm. We give a simple proof that the CGHTP algorithm converges to the exact solution of (1) after a finite number of iterations when the measurement matrix A satisfies certain conditions.
In Algorithm 3, when T_n ≠ T_{n−1} and α_n ≤ c‖x̃^{n+1} − x^n‖_2^2 / ‖A(x̃^{n+1} − x^n)‖_2^2, the iterative process of the CGHTP algorithm is equivalent to that of the HTP algorithm. Similar to Lemma 3.1 in [28], we obtain the following lemma:
Lemma 1.
The sequence (x^n) defined by CGHTP is eventually periodic.
Theorem 1.
The sequence (x^n) generated by the CGHTP algorithm converges to x* in a finite number of iterations, where x* denotes the exact solution of (1).
The proof of Theorem 1 is partitioned into two parts. First, when the current support set differs from the previous support set, the orthogonalization weight β is equal to 0; in this case the CGHTP iteration reduces to an HTP iteration, so the convergence argument is the same. Second, when the two support sets are equal, the step size is determined by the orthogonalization weight and the CG direction. The detailed proof is given below.
Proof of Theorem 1.
When T_n ≠ T_{n−1}, according to Algorithm 3, x̃^{n+1} = H_s(x^n + α_n g^n) and T_{n+1} = supp(x̃^{n+1}). Since x^{n+1} = arg min{‖y − Az‖_2 : supp(z) ⊆ T_{n+1}}, one has
‖y − Ax^{n+1}‖_2^2 − ‖y − Ax^n‖_2^2 ≤ ‖y − Ax̃^{n+1}‖_2^2 − ‖y − Ax^n‖_2^2 = ‖y − Ax^n + A(x^n − x̃^{n+1})‖_2^2 − ‖y − Ax^n‖_2^2 = 2⟨g^n, x^n − x̃^{n+1}⟩ + ‖A(x^n − x̃^{n+1})‖_2^2. (5)
It is clear that x̃^{n+1} is a better approximation of x^n + α_n g^n than x^n is (since x̃^{n+1} = H_s(x^n + α_n g^n) is its best s-sparse approximation and x^n is itself s-sparse), so ‖x^n + α_n g^n − x̃^{n+1}‖_2^2 ≤ ‖α_n g^n‖_2^2. By expanding the squares, we obtain ‖x^n − x̃^{n+1}‖_2^2 + 2α_n⟨x^n − x̃^{n+1}, g^n⟩ ≤ 0. Substituting this into (5), one has
‖y − Ax^{n+1}‖_2^2 − ‖y − Ax^n‖_2^2 ≤ −(1/α_n)‖x^n − x̃^{n+1}‖_2^2 + ‖A(x^n − x̃^{n+1})‖_2^2. (6)
Substituting α_n ≤ c‖x^n − x̃^{n+1}‖_2^2 / ‖A(x^n − x̃^{n+1})‖_2^2 into inequality (6), one can show that
‖y − Ax^{n+1}‖_2^2 − ‖y − Ax^n‖_2^2 ≤ (1 − 1/c)‖A(x^n − x̃^{n+1})‖_2^2. (7)
Since 0 < c < 1, the right-hand side of (7) is less than 0. This implies that the cost function is nonincreasing when T_n ≠ T_{n−1}; hence it is convergent.
In the case of T_n = T_{n−1}, consider the system A_{T_n}^T A_{T_n} x = A_{T_n}^T y, i.e., the normal equations of the overdetermined system A_{T_n} x = y. Since A_{T_n}^T A_{T_n} is a positive definite matrix, the CG method for solving this system has the convergence rate:
‖x^n − x*‖_{A_{T_n}^T A_{T_n}} ≤ 2((√κ − 1)/(√κ + 1))^n ‖x^0 − x*‖_{A_{T_n}^T A_{T_n}}, (8)
where ‖x^n − x*‖_{A_{T_n}^T A_{T_n}}^2 = (x^n − x*)^T A_{T_n}^T A_{T_n}(x^n − x*). The submatrix A_{T_n} satisfies the RIP condition: (1 − δ_s)‖x^n − x*‖_2^2 ≤ ‖A_{T_n}(x^n − x*)‖_2^2 ≤ (1 + δ_s)‖x^n − x*‖_2^2. It follows that κ ≤ (1 + δ_s)/(1 − δ_s). Then (8) can be rewritten as
‖x^n − x*‖_{A_{T_n}^T A_{T_n}} ≤ 2ρ^n ‖x^0 − x*‖_{A_{T_n}^T A_{T_n}}, (9)
where ρ = (√(1 + δ_s) − √(1 − δ_s)) / (√(1 + δ_s) + √(1 − δ_s)) < 1. From Lemma 1, since the sequence (x^n) is eventually periodic, it must be eventually constant, which implies that T_{n+1} = T_n and x^{n+1} = x^n for n large enough. In summary, the CGHTP algorithm is convergent. □
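For intuition on how the RIP constant controls this contraction factor, a quick numerical check (with example values of δ_s chosen arbitrarily) is:

```python
import numpy as np

for delta_s in (0.1, 0.3, 0.5):
    kappa_bound = (1 + delta_s) / (1 - delta_s)                # bound on the condition number
    rho = (np.sqrt(1 + delta_s) - np.sqrt(1 - delta_s)) / \
          (np.sqrt(1 + delta_s) + np.sqrt(1 - delta_s))        # contraction factor in (9)
    print(f"delta_s = {delta_s:.1f}: kappa <= {kappa_bound:.2f}, rho = {rho:.3f}")
```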

5. Numerical Experiments and Discussions

In this section, we verify the performance of the proposed algorithm through Gaussian-signal reconstruction experiments and real-world image reconstruction experiments. We compare the reconstruction performance of HTP (with several different step sizes), CGIHT, and CGHTP with parameter η = 0.9. All experiments are run in MATLAB R2014b on a Windows 10 machine with an Intel i5-7500 CPU at 3.4 GHz and 8 GB RAM.

5.1. Gaussian-Signal Reconstruction

In this subsection, we construct an M × N random Gaussian matrix as the measurement matrix Φ, with entries drawn independently from a Gaussian distribution. In addition, we generate an s-sparse signal of length N = 256; the positions of its nonzero elements are chosen at random and their values are drawn from the standard Gaussian distribution. Here, we define the sampling rate as τ = M/N. In each experiment, we let HTP run with four different step sizes: α = 1, α = α* − 0.5, α = α* and α = α* + 0.5, where α* denotes the step size with which HTP has the best empirical performance. In all tests in this section, each experiment is repeated 500 times independently. To evaluate the performance of the tested algorithms, we use the relative reconstruction error, defined as ‖y − Ax^n‖_2/‖y‖_2, as the criterion of reconstruction accuracy for sparse signals. For all tested algorithms, we set a common stopping criterion: n > 200 or relative reconstruction error less than 10^{−6}.
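The experiments themselves are run in MATLAB; purely to illustrate the protocol described above, one trial of it in NumPy (using the cghtp sketch from Section 3 and the success test defined later in this section) could be written as:

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau, s = 256, 0.5, 10
M = int(tau * N)

A = rng.standard_normal((M, N)) / np.sqrt(M)      # Gaussian measurement matrix
x_true = np.zeros(N)
idx = rng.choice(N, size=s, replace=False)
x_true[idx] = rng.standard_normal(s)
y = A @ x_true

x_hat = cghtp(A, y, s)                            # sketch from Section 3
rel_err = np.linalg.norm(y - A @ x_hat) / np.linalg.norm(y)
exact = np.linalg.norm(x_hat - x_true) < 1e-4 * np.linalg.norm(x_true)
print(f"relative reconstruction error {rel_err:.2e}, exact recovery: {exact}")
```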
The first issue to be verified concerns the number of iterations required by the CGHTP algorithm in comparison with the HTP and CGIHT algorithms. Here we choose the sampling rate τ = 0.5. For each 1 ≤ s ≤ 80, we compute the average number of iterations of the different algorithms and plot the curves in Figure 4, where we discriminate between successful reconstructions (mostly occurring for 1 ≤ s ≤ 40) and unsuccessful reconstructions (mostly occurring for 40 < s ≤ 80). As can be seen from Figure 4, in the successful case, the CGHTP algorithm requires fewer iterations than the CGIHT algorithm and the HTP algorithm with α* (here α* = 3.5). In the unsuccessful case, the number of iterations of CGHTP is comparable to that of HTP with α*. This is because, for the same measurement matrix Φ, when the sparsity level is small, the columns of A_{T_n} are less correlated, so the condition number of A_{T_n}^T A_{T_n} is small and the advantage of using the CG method is obvious; as the sparsity level increases, the condition number of A_{T_n}^T A_{T_n} becomes larger, which adversely affects the convergence rate of CGHTP. The average time consumed by the different algorithms is shown in Figure 5. It can be seen that CGHTP requires the least average reconstruction time among the compared algorithms, because it requires the fewest iterations.
At the same time, to verify the advantage of the proposed algorithm in convergence speed, we select a fixed sparse signal and record the relative reconstruction errors of the different algorithms as the number of iterations increases. As can be seen from Figure 6 and Figure 7, as the number of iterations increases, the relative reconstruction errors of the signals reconstructed by HTP and CGHTP decrease and then stabilize at a certain error value where convergence is reached. However, the relative reconstruction error of the CGIHT algorithm decreases very slowly with the number of iterations. Figure 6 and Figure 7 show that the number of iterations needed for CGHTP to converge is smaller than that of HTP with the optimal step size and of CGIHT, which reflects that CGHTP converges faster than both. The strategy of alternating between the gradient descent method and the conjugate gradient method gives CGHTP its better convergence rate.
The recovery ability of the proposed algorithm is evaluated for different sparsity levels and different sampling rates. In each experiment, a reconstruction is recorded as successful if ‖x^n − x‖_2 < 10^{−4}‖x‖_2. Figure 8 shows the exact reconstruction rate of the different algorithms at different sparsity levels for a fixed sampling rate of τ = 0.5, while Figure 9 depicts the recovery performance of these algorithms at different sampling rates for a fixed sparsity level s = 10. Figure 8 and Figure 9 show that CGHTP outperforms the other algorithms. In addition, for the same exact reconstruction rate, CGHTP requires fewer samples than the other algorithms.

5.2. Real-World Image Reconstruction

In this subsection, we investigate the performance of CGHTP in reconstructing natural images. The test images consist of four natural images of size 512 × 512, shown in Figure 10a, Figure 11a, Figure 12a and Figure 13a, respectively. We choose the discrete wavelet transform (DWT) matrix as the sparse transform basis and a Gaussian random matrix as the measurement matrix. The sparsity level is set to one sixth of the number of samples, i.e., s = M/6. In all tests, each experiment is repeated 50 times independently, and we set the stopping criterion: n > 200 or ‖x^n − x^{n−1}‖_2 < 10^{−4}‖x^n‖_2. The reconstruction quality of images is usually evaluated by the Peak Signal-to-Noise Ratio (PSNR), which is defined as:
PSNR_dB = 10 log_10(255^2 / MSE), (10)
where MSE denotes the mean square error, and it can be calculated by:
MSE = (1/(ab)) Σ_{i=1}^{a} Σ_{j=1}^{b} (X(i,j) − X̂(i,j))^2, (11)
where the matrices X and X̂ of size a × b represent the original image and the reconstructed image, respectively. Meanwhile, we also use the relative error (abbreviated as Rerr) to evaluate the quality of image reconstruction:
Rerr = ‖X − X̂‖_2 / ‖X̂‖_2. (12)
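Both image quality metrics translate directly into code; a small helper (assuming 8-bit grayscale images stored as NumPy arrays) is:

```python
import numpy as np

def psnr_db(X, X_hat):
    """PSNR in dB for 8-bit images, following Equations (10) and (11)."""
    mse = np.mean((X.astype(float) - X_hat.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def rerr(X, X_hat):
    """Relative reconstruction error as defined in Equation (12)."""
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X_hat)
```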
Table 1 shows the average reconstruction PSNR, Rerr, and reconstruction time of the four reconstructed images for the different algorithms at different sampling rates. Figure 10, Figure 11, Figure 12 and Figure 13 visually compare the original images and the reconstructed images, where subfigures (b–d) show the reconstructions obtained by HTP (α = α*), CGIHT, and CGHTP, respectively.
It can be seen from Table 1 that the reconstruction performance of the different algorithms differs across the four test images. In terms of PSNR, the lower the sampling rate, the more obvious the advantage of the CGHTP algorithm over the HTP and CGIHT algorithms. In particular, at the low sampling rate (τ = 0.2), the PSNR of the four test images obtained by CGHTP is about 2 dB higher than that obtained by HTP with the best step size. Similarly, the relative reconstruction error of CGHTP is slightly lower than that of the other algorithms. In addition, the average reconstruction time of CGHTP is comparable to that of HTP with the optimal step size. Owing to the calculation of the step size α and the orthogonalization weight β, each CGHTP iteration requires slightly more computation than an HTP iteration, by approximately O(Ns). However, since the conjugate direction may be used as the search direction to accelerate convergence, CGHTP needs fewer iterations than HTP. Overall, CGHTP has empirical performance comparable to HTP with an optimal step size in natural image reconstruction.

6. Conclusions

In this paper, we introduced a new greedy iterative algorithm, termed CGHTP. Using the conjugate gradient method to accelerate the convergence of the original HTP algorithm is one of the key features of the algorithm, and it makes CGHTP faster than HTP. Reconstruction experiments on Gaussian sparse signals and natural images illustrate the advantages of the proposed algorithm in convergence rate and reconstruction performance; in particular, at low sparsity levels, CGHTP requires fewer iterations than HTP with the best step size and than CGIHT. Although the proposed algorithm has better reconstruction performance, its theoretical guarantee is not very strong, so our future work will focus on establishing stronger theoretical guarantees for this algorithm. In addition, we will study how to apply the method in more practical applications.

Author Contributions

Conceptualization, Y.Z.; Data curation, Y.Z. and X.F.; Funding acquisition, Y.H.; Methodology, Y.Z.; Project administration, H.L.; Software, P.L. and X.F.; Validation, H.L., Y.H., P.L. and X.F.; Writing—original draft, Y.Z.; Writing—review & editing, Y.Z. and H.L.

Funding

This work was funded by the National Natural Science Foundation of China under Grant Nos. 51775116, 51374987 and 51405177, and by NSAF Grant U1430124.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 4, 1289–1306. [Google Scholar] [CrossRef]
  2. Eldar, Y.; Kutinyok, G. Compressed Sensing: Theory and Application; Cambridge University Press: Cambridge, UK, 2011; Volume 4, pp. 1289–1306. [Google Scholar] [CrossRef]
  3. Zhang, J.; Xiang, Q.; Yin, Y.; Chen, C.; Xin, L. Adaptive compressed sensing for wireless image sensor networks. Multimed. Tools Appl. 2017, 3, 4227–4242. [Google Scholar] [CrossRef]
  4. Chen, Z.; Fu, Y.; Xiang, Y.; Rong, R. A novel iterative shrinkage algorithm for CS-MRI via adaptive regularization. IEEE Signal Process. Lett. 2017, 10, 1443–1447. [Google Scholar] [CrossRef]
  5. Bu, H.; Tao, R.; Bai, X.; Zhao, J. A novel SAR imaging algorithm based on compressed sensing. IEEE Geosci. Remote Sens. Lett. 2015, 5, 1003–1007. [Google Scholar] [CrossRef]
  6. Craven, D.; McGinley, B.; Kilmartin, L.; Glavin, M.; Jones, E. Compressed sensing for bioelectric signals: A review. IEEE J. Biomed. Health Inform. 2015, 2, 529–540. [Google Scholar] [CrossRef] [PubMed]
  7. Candès, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 12, 4203–4215. [Google Scholar] [CrossRef]
  8. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 2, 489–509. [Google Scholar] [CrossRef]
  9. Vujović, S.; Stanković, I.; Daković, M.; Stanković, L. Comparison of a gradient-based and LASSO (ISTA) algorithm for sparse signal reconstruction. In Proceedings of the 5th Mediterranean Conference on Embedded Computing, Bar, Montenegro, 12–16 June 2016; pp. 377–380. [Google Scholar] [CrossRef]
  10. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 1, 183–202. [Google Scholar] [CrossRef]
  11. Xie, K.; He, Z.; Cichocki, A. Convergence analysis of the FOCUSS algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2015, 3, 601–613. [Google Scholar] [CrossRef]
  12. Zibetti, M.V.; Helou, E.S.; Pipa, D.R. Accelerating overrelaxed and monotone fast iterative shrinkage-thresholding algorithms with line search for sparse reconstructions. IEEE Trans. Image Process. 2017, 7, 3569–3578. [Google Scholar] [CrossRef]
  13. Cui, A.; Peng, J.; Li, H.; Wen, M.; Jia, J. Iterative thresholding algorithm based on non-convex method for modified p-norm regularization minimization. J. Comput. Appl. Math. 2019, 347, 173–180. [Google Scholar] [CrossRef]
  14. Fei, W.; Liu, P.; Liu, Y.; Qiu, R.C.; Yu, W. Robust sparse recovery for compressive sensing in impulsive noise using p-norm model fitting. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016. [Google Scholar] [CrossRef]
  15. Ghayem, F.; Sadeghi, M.; Babaie-Zadeh, M.; Chatterjee, S.; Skoglund, M.; Jutten, C. Sparse signal recovery using iterative proximal projection. IEEE Trans. Signal Process. 2018, 4, 879–894. [Google Scholar] [CrossRef]
  16. Dirksen, S.; Lecué, G.; Rauhut, H. On the gap between restricted isometry properties and sparse recovery conditions. IEEE Trans. Inf. Theory 2018, 8, 5478–5487. [Google Scholar] [CrossRef]
  17. Cohen, A.; Dahmen, W.; DeVore, R. Orthogonal matching pursuit under the restricted isometry property. Constr. Approx. 2017, 1, 113–127. [Google Scholar] [CrossRef]
  18. Satpathi, S.; Chakraborty, M. On the number of iterations for convergence of cosamp and subspace pursuit algorithms. Appl. Comput. Harmon. Anal. 2017, 3, 568–576. [Google Scholar] [CrossRef]
  19. Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 2, 310–316. [Google Scholar] [CrossRef]
  20. Wang, J.; Kwon, S.; Li, P.; Shim, B. Recovery of sparse signals via generalized orthogonal matching pursuit: A new analysis. IEEE Trans. Signal Process. 2016, 4, 1076–1089. [Google Scholar] [CrossRef]
  21. Yao, S.; Wang, T.; Chong, Y.; Pan, S. Sparsity estimation based adaptive matching pursuit algorithm. Multimed. Tools Appl. 2018, 4, 4095–4112. [Google Scholar] [CrossRef]
  22. Saadat, S.A.; Safari, A.; Needell, D. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP). Pure Appl. Geophys. 2016, 6, 2087–2099. [Google Scholar] [CrossRef]
  23. Cui, Y.; Xu, W.; Tian, Y.; Lin, J. Perturbed block orthogonal matching pursuit. Electron. Lett. 2018, 22, 1300–1302. [Google Scholar] [CrossRef]
  24. Shalaby, W.A.; Saad, W.; Shokair, M.; Dessouky, M.I. Forward-backward hard thresholding algorithm for compressed sensing. In Proceedings of the 2017 34th National Radio Science Conference (NRSC), Alexandria, Egypt, 13–16 March 2017; pp. 142–151. [Google Scholar] [CrossRef]
  25. Blumensath, T. Accelerated iterative hard thresholding. Signal Process. 2012, 3, 752–756. [Google Scholar] [CrossRef]
  26. Blanchard, J.D.; Tanner, J.; Wei, K. Conjugate gradient iterative hard thresholding: Observed noise stability for compressed sensing. IEEE Trans. Signal Process. 2015, 2, 528–537. [Google Scholar] [CrossRef]
  27. Blanchard, J.D.; Tanner, J.; Wei, K. CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference J. IMA 2015, 4, 289–327. [Google Scholar] [CrossRef]
  28. Foucart, S. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM J. Numer. Anal. 2011, 6, 2543–2563. [Google Scholar] [CrossRef]
  29. Bouchot, J.L.; Foucart, S.; Hitczenko, P. Hard thresholding pursuit algorithms: Number of iterations. Appl. Comput. Harmon. Anal. 2016, 2, 412–435. [Google Scholar] [CrossRef]
  30. Li, H.; Fu, Y.; Zhang, Q.; Rong, R. A generalized hard thresholding pursuit algorithm. Circuits Syst. Signal Process. 2014, 4, 1313–1323. [Google Scholar] [CrossRef]
  31. Sun, T.; Jiang, H.; Cheng, L. Hard thresholding pursuit with continuation for 0 regularized minimizations. Math. Meth. Appl. Sci. 2018, 16, 6195–6209. [Google Scholar] [CrossRef]
  32. Jain, P.; Tewari, A.; Dhillon, I.S. Partial hard thresholding. IEEE Trans. Inf. Theory 2017, 5, 3029–3038. [Google Scholar] [CrossRef]
  33. Song, C.B.; Xia, S.T.; Liu, X.J. Subspace thresholding pursuit: A reconstruction algorithm for compressed sensing. In Proceedings of the 2015 IEEE International Symposium on Information Theory (ISIT), Hong Kong, China, 14–19 June 2015. [Google Scholar] [CrossRef]
  34. Powell, M.J.D. Some convergence properties of the conjugate gradient method. Math. Program. 1976, 1, 42–49. [Google Scholar] [CrossRef]
Figure 1. Application of Compressed Sensing in Magnetic Resonance Images (MRI) [4].
Figure 2. Compressed Sensing model with s = 4 .
Figure 3. Block diagram of CGHTP algorithm. T n is the support set of x in the nth iteration, “LSM” denotes least square method.
Figure 4. Number of iterations for HTP with different step size, CGIHT, and CGHTP algorithms.
Figure 5. Time consumed by HTP with different step size, CGIHT, and CGHTP algorithms.
Figure 6. Relative reconstruction error vs. iteration number, ( s = 10 , M = 120 ).
Figure 7. Relative reconstruction error vs. iteration number, ( s = 20 , M = 80 ).
Figure 8. Exact reconstruction rate vs. sparsity level for a Gaussian signal.
Figure 9. Exact reconstruction rate vs. sampling rate for a Gaussian signal.
Figure 10. The reconstruction results of the standard test image Lenna at τ = 0.4 . (a) original image; (b) reconstructed by HTP with α = 5.5 , (PSNR: 28.256 dB); (c) reconstructed by CGIHT (PSNR: 20.603 dB); (d) reconstructed by CGHTP (PSNR: 28.467 dB).
Figure 11. The reconstruction results of the standard test image Pens at τ = 0.4 . (a) original image; (b) reconstructed by HTP with α = 5 , (PSNR: 24.655 dB); (c) reconstructed by CGIHT (PSNR: 18.597 dB); (d) reconstructed by CGHTP (PSNR: 25.110 dB).
Figure 12. The reconstruction results of the standard test image Pepper at τ = 0.4 . (a) original image; (b) reconstructed by HTP with α = 6 , (PSNR: 28.471 dB); (c) reconstructed by CGIHT (PSNR: 20.363 dB); (d) reconstructed by CGHTP (PSNR: 28.537 dB).
Figure 13. The reconstruction results of the standard test image Baboon at τ = 0.4 . (a) original image; (b) reconstructed by HTP with α = 4 , (PSNR: 18.333 dB); (c) reconstructed by CGIHT (PSNR: 14.621 dB); (d) reconstructed by CGHTP (PSNR: 18.364 dB).
Table 1. Reconstruction results for the Lenna, Pens, Pepper, and Baboon images at different sampling rates by HTP, CGIHT, and CGHTP. The optimal step sizes α* for reconstructing the Lenna, Pens, Pepper and Baboon images with the HTP algorithm are α* = 5.5, α* = 5, α* = 6 and α* = 4, respectively. The abbreviations PSNR and Rerr represent the Peak Signal-to-Noise Ratio and the relative reconstruction error of the reconstructed image, respectively.
τ = 0.2
Algorithms                     HTP      HTP(α*−0.5)  HTP(α*)  HTP(α*+0.5)  CGIHT    CGHTP
PSNR (dB)        Lenna       12.814    16.783       18.471   17.516        8.584   21.708
                 Pens        13.491    14.512       15.450   14.926        9.843   17.471
                 Pepper      11.092    17.266       18.293   17.566        8.273   20.492
                 Baboon      10.344    12.605       13.313   12.298        9.120   14.863
Rerr             Lenna        0.877     0.281        0.217    0.244        0.610    0.102
                 Pens         0.810     0.431        0.330    0.382        0.532    0.225
                 Pepper       0.642     0.227        0.175    0.206        0.488    0.104
                 Baboon       0.932     0.644        0.528    0.632        0.874    0.222
Avg. recovery    Lenna        0.153     0.216        0.223    0.237       17.632    0.216
time (s)         Pens         0.144     0.197        0.212    0.228       17.537    0.205
                 Pepper       0.145     0.221        0.240    0.246       17.682    0.231
                 Baboon       0.140     0.192        0.211    0.196       17.45     0.203

τ = 0.4
Algorithms                     HTP      HTP(α*−0.5)  HTP(α*)  HTP(α*+0.5)  CGIHT    CGHTP
PSNR (dB)        Lenna       26.943    28.171       28.256   28.139       20.603   28.467
                 Pens        23.042    24.004       24.655   24.142       18.597   25.110
                 Pepper      25.604    27.216       28.471   27.318       20.363   28.537
                 Baboon      17.614    18.237       18.333   18.249       14.621   18.364
Rerr             Lenna        0.021     0.017        0.016    0.017        0.045    0.015
                 Pens         0.057     0.046        0.042    0.044        0.074    0.040
                 Pepper       0.029     0.022        0.021    0.022        0.048    0.021
                 Baboon       0.052     0.047        0.045    0.046        0.104    0.044
Avg. recovery    Lenna        0.468     0.697        0.998    3.394       23.448    0.813
time (s)         Pens         0.446     0.556        0.641    0.695       23.454    0.640
                 Pepper       0.459     0.725        1.110    4.321       23.637    0.812
                 Baboon       0.409     0.556        0.553    0.599       21.726    0.531

τ = 0.6
Algorithms                     HTP      HTP(α*−0.5)  HTP(α*)  HTP(α*+0.5)  CGIHT    CGHTP
PSNR (dB)        Lenna       29.924    31.133       31.164   31.145       27.376   31.392
                 Pens        26.566    27.957       28.319   28.273       24.331   28.633
                 Pepper      29.207    30.867       31.028   30.891       27.363   31.297
                 Baboon      19.527    19.544       19.584   19.496       17.351   20.257
Rerr             Lenna        0.013     0.010        0.010    0.010        0.019    0.009
                 Pens         0.032     0.024        0.023    0.024        0.045    0.022
                 Pepper       0.016     0.011        0.010    0.011        0.023    0.009
                 Baboon       0.040     0.039        0.038    0.039        0.048    0.038
Avg. recovery    Lenna        1.451    22.223       54.121   61.064      102.059    3.064
time (s)         Pens         1.436     1.823        2.980   19.836      109.281    3.186
                 Pepper       1.559    24.761       56.593   62.299       94.708    3.140
                 Baboon       1.342     1.753        1.896    2.135      102.794    1.814
