Article

A New Hybrid CQ Algorithm for the Split Feasibility Problem in Hilbert Spaces and Its Applications to Compressed Sensing

by Suthep Suantai ¹, Suparat Kesornprom ²,* and Prasit Cholamjiak ²,*
¹ Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
² School of Science, University of Phayao, Phayao 56000, Thailand
* Authors to whom correspondence should be addressed.
Mathematics 2019, 7(9), 789; https://doi.org/10.3390/math7090789
Submission received: 24 June 2019 / Revised: 31 July 2019 / Accepted: 23 August 2019 / Published: 27 August 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract: In this paper, we study the split feasibility problem (SFP), which has many applications in signal processing and image reconstruction. A popular technique for solving it is the iterative method known as the relaxed CQ algorithm; however, the speed of convergence usually depends on how the step size of such algorithms is selected. We suggest a new hybrid CQ algorithm for the SFP that uses self-adaptive and line-search techniques and involves no computation of matrix inverses or of the spectral radius of a matrix. We then prove a weak convergence theorem under mild conditions. Numerical experiments are included to illustrate the performance of the algorithm in compressed sensing, and comparisons with other CQ methods in the literature demonstrate its efficiency.

1. Introduction

In the present work, we aim to study the split feasibility problem (SFP), which is to find a point
$$x^* \in C \quad \text{such that} \quad Ax^* \in Q,$$
where C and Q are nonempty, closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. The SFP was first investigated in 1994 by Censor and Elfving [1] in finite-dimensional Hilbert spaces. It has found real-world applications such as image processing and signal recovery (see [2,3]). Byrne [4,5] introduced the following recursive procedure for solving the SFP:
$$x_{n+1} = P_C(x_n - \alpha_n A^*(I - P_Q)Ax_n),$$
where $\{\alpha_n\} \subset (0, 2/\|A\|^2)$, $P_C$ and $P_Q$ are the projections onto C and Q, respectively, and $A^*$ is the adjoint of A. This projection algorithm is usually called the CQ algorithm. Subsequently, Yang [6] introduced the relaxed CQ algorithm. In this case, the projections $P_C$ and $P_Q$ are, respectively, replaced by $P_{C_n}$ and $P_{Q_n}$, where
$$C_n = \{x \in H_1 : c(x_n) + \langle \xi_n, x - x_n \rangle \le 0\},$$
where $c : H_1 \to \mathbb{R}$ is convex and lower semicontinuous, and $\xi_n \in \partial c(x_n)$, and
$$Q_n = \{y \in H_2 : q(Ax_n) + \langle \eta_n, y - Ax_n \rangle \le 0\},$$
where $q : H_2 \to \mathbb{R}$ is convex and lower semicontinuous, and $\eta_n \in \partial q(Ax_n)$. In what follows, we define
$$f_n(x) = \frac{1}{2}\|(I - P_{Q_n})Ax\|^2, \quad n \ge 1,$$
and
$$\nabla f_n(x) = A^*(I - P_{Q_n})Ax.$$
Precisely, Yang [6] proposed the relaxed CQ algorithm in a finite-dimensional Hilbert space as follows:
Algorithm 1.
Let $x_1 \in H_1$. For $n \ge 1$, define
$$x_{n+1} = P_{C_n}(x_n - \alpha_n \nabla f_n(x_n)),$$
where $\{\alpha_n\} \subset (0, 2/\|A\|^2)$.
It is seen that, since the sets $C_n$ and $Q_n$ are half-spaces, the projections onto them are easy to compute. However, the step size $\{\alpha_n\}$ still depends on the norm of A.
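Indeed, the projection onto a half-space has a closed form. The following is a minimal NumPy sketch of $P_{C_n}$; the function name and signature are our own, not from the paper.

```python
import numpy as np

def project_halfspace(x, xi, x_n, c_xn):
    """Project x onto C_n = {u : c(x_n) + <xi, u - x_n> <= 0}.

    Here xi is a subgradient of c at x_n and c_xn = c(x_n). If the affine
    minorant is already nonpositive at x, then x lies in C_n and is returned
    unchanged. (Assumes xi != 0 whenever the constraint is violated.)
    """
    g = c_xn + xi @ (x - x_n)        # value of the affine minorant at x
    if g <= 0:
        return x                     # x already belongs to the half-space
    return x - (g / (xi @ xi)) * xi  # move along xi onto the boundary
```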
To eliminate this difficulty, in 2012, López et al. [7] suggested a new way to select the step size $\alpha_n$ as follows:
$$\alpha_n = \frac{\beta_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2},$$
where $\{\beta_n\}$ is a sequence in $(0, 4)$ such that $0 < a \le \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n \le b < 4$ for some $a, b \in (0, 4)$. They established the weak convergence of the CQ algorithm (Equation (2)) and the relaxed CQ algorithm (Equation (7)) with the step size defined by Equation (8) in real Hilbert spaces.
Qu and Xiu [8] adopted the line-search technique to construct the step size in Euclidean spaces as follows:
Algorithm 2.
Choose $\sigma > 0$, $\rho \in (0, 1)$ and $\mu \in (0, 1)$. Let $x_1$ be a point in $H_1$. For $n \ge 1$, let
$$y_n = P_{C_n}(x_n - \alpha_n \nabla f_n(x_n)),$$
where $\alpha_n = \sigma \rho^{m_n}$ and $m_n$ is the smallest nonnegative integer such that
$$\alpha_n \|\nabla f_n(x_n) - \nabla f_n(y_n)\| \le \mu \|x_n - y_n\|.$$
Set
$$x_{n+1} = P_{C_n}(x_n - \alpha_n \nabla f_n(y_n)).$$
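A minimal sketch of this Armijo-type backtracking search (function names and default parameter values are ours):

```python
import numpy as np

def armijo_search(x_n, grad_f_n, proj_Cn, sigma=1.0, rho=0.5, mu=0.3):
    """Find alpha_n = sigma * rho**m_n with the smallest m_n >= 0 such that
    alpha_n * ||grad f_n(x_n) - grad f_n(y_n)|| <= mu * ||x_n - y_n||,
    where y_n = P_{C_n}(x_n - alpha_n * grad f_n(x_n)).

    grad_f_n and proj_Cn are callables; termination after finitely many
    steps is guaranteed (see Lemma 2 below, taken from [8]).
    """
    g_x = grad_f_n(x_n)
    alpha = sigma
    while True:
        y = proj_Cn(x_n - alpha * g_x)
        if alpha * np.linalg.norm(g_x - grad_f_n(y)) <= mu * np.linalg.norm(x_n - y):
            return alpha, y
        alpha *= rho
```

Algorithm 2 then completes the iteration with `x_next = proj_Cn(x_n - alpha * grad_f_n(y))`.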
In 2012, Bnouhachem et al. [9] proposed the following projection method for solving the SFP.
Algorithm 3.
For a given $x_1 \in \mathbb{R}^n$, let
$$y_n = P_{C_n}(x_n - \alpha_n \nabla f_n(x_n)),$$
where $\alpha_n > 0$ satisfies
$$\alpha_n \|\nabla f_n(x_n) - \nabla f_n(y_n)\| \le \mu \|x_n - y_n\|, \quad 0 < \mu < 1.$$
Define
$$x_{n+1} = P_{C_n}(x_n - \varphi_n d(x_n, \alpha_n)),$$
where
$$\begin{aligned}
d(x_n, \alpha_n) &= x_n - y_n + \alpha_n \nabla f_n(y_n), \\
\varepsilon_n &= \alpha_n (\nabla f_n(y_n) - \nabla f_n(x_n)), \\
D(x_n, \alpha_n) &= x_n - y_n - \varepsilon_n, \\
\phi(x_n, \alpha_n) &= \langle x_n - y_n, D(x_n, \alpha_n) \rangle,
\end{aligned}$$
and
$$\varphi_n = \frac{\phi(x_n, \alpha_n)}{\|d(x_n, \alpha_n)\|^2}.$$
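Under the reconstruction above, one iteration of Algorithm 3 can be sketched as follows, assuming $y_n$ and $\alpha_n$ come from the same backtracking search as in Algorithm 2 (all names are ours):

```python
import numpy as np

def descent_projection_update(x_n, y_n, alpha_n, grad_f_n, proj_Cn):
    """One update of the descent-projection method of Bnouhachem et al. [9]."""
    g_x, g_y = grad_f_n(x_n), grad_f_n(y_n)
    d = x_n - y_n + alpha_n * g_y        # direction d(x_n, alpha_n)
    eps = alpha_n * (g_y - g_x)          # epsilon_n
    D = x_n - y_n - eps                  # D(x_n, alpha_n)
    phi = (x_n - y_n) @ D                # phi(x_n, alpha_n)
    varphi = phi / (d @ d)               # step length varphi_n
    return proj_Cn(x_n - varphi * d)
```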
Recently, many authors have established weak and strong convergence theorems for the SFP (see also [10,11]).
In this work, combining the works of Bnouhachem et al. [9] and López et al. [7], we suggest a new hybrid CQ algorithm for solving the split feasibility problem and establish a weak convergence theorem in Hilbert spaces. Finally, numerical results are given to support our main results. Comparisons with the algorithms of Qu and Xiu [8] and Bnouhachem et al. [9] are also provided; the numerical examples show that our method has better convergence behavior than these CQ algorithms.

2. Preliminaries

We next recall some useful basic concepts that will be used in our proof. Let H be a real Hilbert space equipped with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$. Let $T : H \to H$ be a nonlinear mapping. Then, T is called firmly nonexpansive if, for all $x, y \in H$,
$$\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2.$$
In a real Hilbert space H, we have the following equality:
$$\langle x, y \rangle = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y\|^2 - \frac{1}{2}\|x - y\|^2.$$
A differentiable function $f : H \to \mathbb{R}$ is convex if and only if
$$f(z) \ge f(x) + \langle \nabla f(x), z - x \rangle$$
for all $x, z \in H$.
A function $f : H \to \mathbb{R}$ is said to be weakly lower semicontinuous (w-lsc) at x if $x_n \rightharpoonup x$ implies
$$f(x) \le \liminf_{n\to\infty} f(x_n).$$
The projection onto a nonempty, closed and convex subset C of H is defined by
$$P_C x := \arg\min_{y \in C} \|x - y\|^2, \quad x \in H.$$
We note that $P_C$ and $I - P_C$ are firmly nonexpansive. From [5], we know that, if
$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2,$$
then $\nabla f$ is $\|A\|^2$-Lipschitz continuous. Moreover, in real Hilbert spaces, we know that [12]:
(i) $\langle x - P_C x, z - P_C x \rangle \le 0$ for all $z \in C$;
(ii) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$ for all $x, y \in H$; and
(iii) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|P_C x - x\|^2$ for all $z \in C$.
Lemma 1.
[12] Let S be a nonempty, closed and convex subset of a real Hilbert space H and $\{x_n\}$ be a sequence in H that satisfies the following assumptions:
(i) $\lim_{n\to\infty} \|x_n - x\|$ exists for each $x \in S$; and
(ii) $\omega_w(x_n) \subset S$, where $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$.
Then, $\{x_n\}$ converges weakly to a point in S.

3. Main Results

Throughout this paper, let S denote the solution set of the SFP and suppose that S is nonempty. Let C and Q be nonempty sets that satisfy the following assumptions:
(A1) The set C is defined by
$$C = \{x \in H_1 : c(x) \le 0\},$$
where $c : H_1 \to \mathbb{R}$ is convex, subdifferentiable on C and bounded on bounded sets, and the set Q is defined by
$$Q = \{y \in H_2 : q(y) \le 0\},$$
where $q : H_2 \to \mathbb{R}$ is convex, subdifferentiable on Q and bounded on bounded sets.
(A2) For each $x \in H_1$, at least one subgradient $\xi \in \partial c(x)$ can be computed, where
$$\partial c(x) = \{z \in H_1 : c(u) \ge c(x) + \langle u - x, z \rangle, \ \forall u \in H_1\}.$$
(A3) For each $y \in H_2$, at least one subgradient $\eta \in \partial q(y)$ can be computed, where
$$\partial q(y) = \{w \in H_2 : q(v) \ge q(y) + \langle v - y, w \rangle, \ \forall v \in H_2\}.$$
Next, we propose our new relaxed CQ algorithm in real Hilbert spaces.
Algorithm 4.
Let $x_1 \in H_1$, and choose $\sigma > 0$, $\rho \in (0, 1)$ and $\mu \in (0, \frac{1}{2})$. Given the current iterate $x_n$, compute $x_{n+1}$ as follows. Let
$$y_n = P_{C_n}(x_n - \alpha_n \nabla f_n(x_n)),$$
where $\alpha_n = \sigma \rho^{m_n}$ and $m_n$ is the smallest nonnegative integer such that
$$\alpha_n \|\nabla f_n(x_n) - \nabla f_n(y_n)\| \le \mu \|x_n - y_n\|.$$
Define
$$x_{n+1} = y_n - \tau_n \nabla f_n(y_n),$$
where
$$\tau_n = \frac{\beta_n f_n(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n}, \quad 0 < \beta_n < 4, \quad 0 < \theta_n < 1.$$
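For concreteness, here is a minimal sketch of one iteration of Algorithm 4. The callables `f_n`, `grad_f_n` and `proj_Cn`, the default parameter values, and the choice of $\theta_n$ (taken from the experiments in Section 4) are our own:

```python
import numpy as np

def hybrid_cq_step(x_n, n, f_n, grad_f_n, proj_Cn,
                   sigma=0.2, rho=0.4, mu=0.3, beta_n=1.9):
    """One iteration of the proposed hybrid CQ method (Algorithm 4).

    f_n(x) = 0.5 * ||(I - P_{Q_n}) A x||^2, grad_f_n(x) = A^T (I - P_{Q_n}) A x,
    and proj_Cn projects onto the half-space C_n.
    """
    theta_n = 1.0 / (200 * n + 1)          # 0 < theta_n < 1, theta_n -> 0
    # Line search (Equation (27)): alpha_n = sigma * rho**m_n
    g_x = grad_f_n(x_n)
    alpha = sigma
    y = proj_Cn(x_n - alpha * g_x)
    while alpha * np.linalg.norm(g_x - grad_f_n(y)) > mu * np.linalg.norm(x_n - y):
        alpha *= rho
        y = proj_Cn(x_n - alpha * g_x)
    # Self-adaptive step (Equation (29))
    g_y = grad_f_n(y)
    tau = beta_n * f_n(y) / (g_y @ g_y + theta_n)
    return y - tau * g_y
```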
Lemma 2.
[8] The line search in Equation (27) terminates after a finite number of steps. In addition, we have
$$\frac{\mu\rho}{L} < \alpha_n \le \sigma$$
for all $n \ge 1$, where $L = \|A\|^2$.
Next, we state our main theorem in this paper.
Theorem 1.
Assume that $\{\theta_n\}$ and $\{\beta_n\}$ satisfy the following assumptions:
(a1) $\lim_{n\to\infty} \theta_n = 0$; and
(a2) $\liminf_{n\to\infty} \beta_n(4 - \beta_n) > 0$.
Then, $\{x_n\}$ defined by Algorithm 4 converges weakly to a solution of the SFP.
Proof. 
Let $z \in S$. Then, we have $z = P_{C_n}(z)$ and $Az = P_{Q_n}(Az)$. It follows that $\nabla f_n(z) = 0$. We see that
$$\|x_{n+1} - z\|^2 = \|y_n - \tau_n \nabla f_n(y_n) - z\|^2 = \|y_n - z\|^2 + \tau_n^2 \|\nabla f_n(y_n)\|^2 - 2\tau_n \langle y_n - z, \nabla f_n(y_n) \rangle.$$
Since $I - P_{Q_n}$ is firmly nonexpansive and $\nabla f_n(z) = 0$, we get
$$\begin{aligned}
\langle y_n - z, \nabla f_n(y_n) \rangle &= \langle y_n - z, \nabla f_n(y_n) - \nabla f_n(z) \rangle \\
&= \langle y_n - z, A^*(I - P_{Q_n})Ay_n - A^*(I - P_{Q_n})Az \rangle \\
&= \langle Ay_n - Az, (I - P_{Q_n})Ay_n - (I - P_{Q_n})Az \rangle \\
&\ge \|(I - P_{Q_n})Ay_n\|^2 = 2 f_n(y_n).
\end{aligned}$$
It also follows that
$$\langle x_n - z, \nabla f_n(x_n) \rangle \ge 2 f_n(x_n).$$
From Equation (19), we see that
$$\begin{aligned}
2\alpha_n \langle y_n - x_n, \nabla f_n(x_n) \rangle &= 2\alpha_n \langle y_n - x_n, \nabla f_n(x_n) - \nabla f_n(y_n) \rangle + 2\alpha_n \langle y_n - x_n, \nabla f_n(y_n) \rangle \\
&\ge -2\alpha_n \|y_n - x_n\| \|\nabla f_n(x_n) - \nabla f_n(y_n)\| + 2\alpha_n \cdot \tfrac{1}{2}\big( \|(I - P_{Q_n})Ay_n\|^2 - \|(I - P_{Q_n})Ax_n\|^2 \big) \\
&\ge -2\alpha_n \|y_n - x_n\| \|\nabla f_n(x_n) - \nabla f_n(y_n)\| - 2\alpha_n f_n(x_n).
\end{aligned}$$
From Equations (33) and (34), we obtain
$$\begin{aligned}
\|y_n - z\|^2 &= \|P_{C_n}(x_n - \alpha_n \nabla f_n(x_n)) - z\|^2 \\
&\le \|x_n - \alpha_n \nabla f_n(x_n) - z\|^2 - \|y_n - x_n + \alpha_n \nabla f_n(x_n)\|^2 \\
&= \|x_n - z\|^2 + \alpha_n^2 \|\nabla f_n(x_n)\|^2 - 2\alpha_n \langle x_n - z, \nabla f_n(x_n) \rangle - \|y_n - x_n\|^2 - \alpha_n^2 \|\nabla f_n(x_n)\|^2 - 2\alpha_n \langle y_n - x_n, \nabla f_n(x_n) \rangle \\
&= \|x_n - z\|^2 - 2\alpha_n \langle x_n - z, \nabla f_n(x_n) \rangle - \|y_n - x_n\|^2 - 2\alpha_n \langle y_n - x_n, \nabla f_n(x_n) \rangle \\
&\le \|x_n - z\|^2 - 4\alpha_n f_n(x_n) - \|y_n - x_n\|^2 + 2\alpha_n \|y_n - x_n\| \|\nabla f_n(x_n) - \nabla f_n(y_n)\| + 2\alpha_n f_n(x_n) \\
&\le \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - \|y_n - x_n\|^2 + 2\mu \|y_n - x_n\|^2 \\
&= \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - (1 - 2\mu) \|y_n - x_n\|^2.
\end{aligned}$$
Combining Equations (31), (32) and (35), we get
$$\begin{aligned}
\|x_{n+1} - z\|^2 &\le \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - (1 - 2\mu)\|y_n - x_n\|^2 + \tau_n^2 \|\nabla f_n(y_n)\|^2 - 4\tau_n f_n(y_n) \\
&= \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - (1 - 2\mu)\|y_n - x_n\|^2 + \frac{\beta_n^2 f_n^2(y_n)}{(\|\nabla f_n(y_n)\|^2 + \theta_n)^2} \|\nabla f_n(y_n)\|^2 - \frac{4\beta_n f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n} \\
&\le \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - (1 - 2\mu)\|y_n - x_n\|^2 + \frac{\beta_n^2 f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n} - \frac{4\beta_n f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n} \\
&= \|x_n - z\|^2 - 2\alpha_n f_n(x_n) - (1 - 2\mu)\|y_n - x_n\|^2 - \frac{\beta_n(4 - \beta_n) f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n} \\
&\le \|x_n - z\|^2 - \frac{2\mu\rho}{L} f_n(x_n) - (1 - 2\mu)\|y_n - x_n\|^2 - \frac{\beta_n(4 - \beta_n) f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n},
\end{aligned}$$
where the last inequality follows from Lemma 2. Since $0 < \beta_n < 4$ and $0 < \mu < \frac{1}{2}$, it follows that
$$\|x_{n+1} - z\| \le \|x_n - z\|.$$
Thus, $\lim_{n\to\infty} \|x_n - z\|$ exists and hence $\{x_n\}$ is bounded.
From Equation (36) and Assumption (a2), it also follows that
$$\lim_{n\to\infty} \frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2 + \theta_n} = 0.$$
By Assumption (a1), we have
$$\lim_{n\to\infty} \frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} = 0.$$
It follows that
$$\lim_{n\to\infty} f_n(y_n) = \lim_{n\to\infty} \|(I - P_{Q_n})Ay_n\| = 0,$$
and
$$\lim_{n\to\infty} f_n(x_n) = \lim_{n\to\infty} \|(I - P_{Q_n})Ax_n\| = 0.$$
From Equation (36), we have
$$\lim_{n\to\infty} \|y_n - x_n\| = 0.$$
Using Equations (40) and (42), we have
$$\|Ax_n - P_{Q_n}Ay_n\| = \|Ax_n - Ay_n + Ay_n - P_{Q_n}Ay_n\| \le \|Ax_n - Ay_n\| + \|Ay_n - P_{Q_n}Ay_n\| \le \|A\| \|x_n - y_n\| + \|Ay_n - P_{Q_n}Ay_n\| \to 0 \ \text{as} \ n \to \infty.$$
Let $x^*$ be a weak cluster point of $\{x_n\}$, with a subsequence $\{x_{n_k}\}$ converging weakly to $x^*$. From Equation (42), we see that $\{y_{n_k}\}$ also converges weakly to $x^*$. We next show that $x^*$ is in S. Since $y_{n_k} \in C_{n_k}$, by the definition of $C_{n_k}$, we have
$$c(x_{n_k}) + \langle \xi_{n_k}, y_{n_k} - x_{n_k} \rangle \le 0,$$
where $\xi_{n_k} \in \partial c(x_{n_k})$. By the assumption that $\{\xi_{n_k}\}$ is bounded and Equation (42), we get
$$c(x_{n_k}) \le \langle \xi_{n_k}, x_{n_k} - y_{n_k} \rangle \le \|\xi_{n_k}\| \|x_{n_k} - y_{n_k}\| \to 0 \ \text{as} \ k \to \infty,$$
which, by the weak lower semicontinuity of c, implies $c(x^*) \le 0$. Hence, $x^* \in C$. Since $P_{Q_{n_k}}(Ay_{n_k}) \in Q_{n_k}$, we obtain
$$q(Ax_{n_k}) + \langle \eta_{n_k}, P_{Q_{n_k}}Ay_{n_k} - Ax_{n_k} \rangle \le 0,$$
where $\eta_{n_k} \in \partial q(Ax_{n_k})$. By the boundedness of $\{\eta_{n_k}\}$ and Equation (43), it follows that
$$q(Ax_{n_k}) \le \langle \eta_{n_k}, Ax_{n_k} - P_{Q_{n_k}}Ay_{n_k} \rangle \le \|\eta_{n_k}\| \|Ax_{n_k} - P_{Q_{n_k}}Ay_{n_k}\| \to 0 \ \text{as} \ k \to \infty.$$
By the weak lower semicontinuity of q, we conclude that $q(Ax^*) \le 0$. Thus, $Ax^* \in Q$, and hence $x^*$ is a solution of the SFP.
Hence, by Lemma 1, we conclude that the sequence $\{x_n\}$ converges weakly to a point in S. This completes the proof. □

4. Numerical Experiments

In this section, we provide numerical experiments in compressed sensing. We compare the performance of Algorithm 4 with Algorithm 1 of Yang [6], Algorithm 2 of Qu and Xiu [8], and Algorithm 3 of Bnouhachem et al. [9]. In signal processing, compressed sensing can be modeled as the following linear equation:
$$y = Ax + \varepsilon,$$
where $x \in \mathbb{R}^N$ is the vector to be recovered, with m nonzero components, $y \in \mathbb{R}^M$ is the observed data, $\varepsilon$ is the noise, and A is an $M \times N$ matrix with $M < N$. The problem in Equation (48) can be treated as the LASSO problem:
$$\min_{x \in \mathbb{R}^N} \frac{1}{2}\|y - Ax\|^2 \quad \text{subject to} \quad \|x\|_1 \le t,$$
where $t > 0$ is a given constant. In particular, if $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ and $Q = \{y\}$, then the LASSO problem can be considered as an SFP. From this connection, we can apply the CQ algorithm to solve Equation (49).
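Since $Q = \{y\}$ is a singleton, $P_Q(Ax) = y$, so $f(x) = \frac{1}{2}\|Ax - y\|^2$ and $\nabla f(x) = A^T(Ax - y)$. The relaxed algorithms above only require a subgradient of $c(x) = \|x\|_1 - t$ to build the half-space $C_n$; if instead one wishes to project onto the $\ell_1$ ball C directly, the standard sort-based projection of Duchi et al. can be used. A sketch (not part of the paper):

```python
import numpy as np

def project_l1_ball(x, t):
    """Euclidean projection of x onto {u : ||u||_1 <= t}."""
    if np.abs(x).sum() <= t:
        return x                                    # already feasible
    u = np.sort(np.abs(x))[::-1]                    # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - t)[0][-1]
    tau = (css[k] - t) / (k + 1.0)                  # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```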
In this example, the sparse vector $x \in \mathbb{R}^N$ is generated from the uniform distribution on $[-2, 2]$ with m nonzero elements. The matrix A is generated from the normal distribution with mean zero and variance one. The observation y is corrupted by white Gaussian noise with SNR = 40. The process is started with $t = m$ and initial point $x_1 = \mathrm{ones}(N, 1)$.
The stopping criterion is defined by the mean squared error (MSE):
$$E_n = \frac{1}{N}\|x_n - x^*\|^2 < \kappa,$$
where $x_n$ is an estimate of the signal $x^*$ and $\kappa$ is a given tolerance.
In what follows, let $\mu = 0.3$, $\sigma = 0.2$, $\rho = 0.4$, $\beta_n = 1.9$ and $\theta_n = \frac{1}{200n + 1}$. The numerical results are reported as follows.
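A sketch of this experimental setup in NumPy (we substitute an explicit Gaussian noise scaling for an SNR of 40 dB; the random seed and variable names are our own):

```python
import numpy as np

N, M, m = 1024, 512, 20
rng = np.random.default_rng(0)

A = rng.standard_normal((M, N))                   # entries ~ N(0, 1)
x_star = np.zeros(N)
support = rng.choice(N, size=m, replace=False)
x_star[support] = rng.uniform(-2.0, 2.0, size=m)  # m-sparse, uniform on [-2, 2]

y_clean = A @ x_star
noise = rng.standard_normal(M)
noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10.0 ** (40.0 / 20.0))
y = y_clean + noise                               # observation with SNR = 40 dB

x1 = np.ones(N)                                   # initial point ones(N, 1)
t = m                                             # l1-ball radius t = m
mu, sigma, rho, beta_n = 0.3, 0.2, 0.4, 1.9       # parameters of this section
theta = lambda n: 1.0 / (200 * n + 1)
```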
In Table 1, we observe that Algorithm 4 outperforms the other algorithms in terms of CPU time and number of iterations as the number of spikes of the sparse vector is varied from 10 to 30. The example also shows that Algorithm 1 of Yang [6], whose step size depends on the norm of A, converges more slowly than the other algorithms in terms of CPU time.
Next, we provide Figure 1, Figure 2 and Figure 3 to illustrate the convergence behavior: the recovered signals, the MSE and the objective function values versus the number of iterations when $N = 1024$, $M = 512$, $m = 20$ and $\kappa = 10^{-5}$.
From Figure 1, Figure 2 and Figure 3, we can conclude that our proposed algorithm is more efficient and faster than the algorithms of Yang [6], Qu and Xiu [8] and Bnouhachem et al. [9].
In Table 2, we observe that Algorithm 4 is effective and converges more quickly than Algorithm 1 of Yang [6], Algorithm 2 of Qu and Xiu [8] and Algorithm 3 of Bnouhachem et al. [9]. Moreover, Algorithm 1 of Yang [6] has the highest CPU time, and it takes more CPU time here than in the first case (see Table 1). Therefore, we can conclude that our proposed method has an advantage over the other methods, especially Algorithm 1, which requires computing the norm of A.
We next provide Figure 4, Figure 5 and Figure 6 to illustrate the convergence behavior, MSE and objective function values versus the number of iterations when $N = 4096$, $M = 2048$, $m = 60$ and $\kappa = 10^{-5}$.
From Figure 4, Figure 5 and Figure 6, we observe that the MSE and objective function values of Algorithm 4 decrease faster than those of Algorithms 1–3 in each case.

5. Comparative Analysis

In this section, we present a comparative analysis of the effects of the step-size parameters $\alpha_n$ and $\beta_n$ in Algorithm 4.
We begin by studying the effect of the parameter $\beta_n$ in terms of the number of iterations and the CPU time in several cases.
Choose $\mu = 0.3$, $\sigma = 0.2$, $\rho = 0.4$ and $\theta_n = \frac{1}{200n + 1}$. Let $x_1$ and A be as in the previous example. The stopping criterion is defined by Equation (50) with $\kappa = 10^{-5}$.
In Table 3, it is observed that the number of iterations and the CPU time decrease slightly as the step size $\beta_n$ tends to 4. The numerical results for each case of $\beta_n$ are shown in Figure 7 and Figure 8, respectively.
Next, we discuss the effect of the step size $\alpha_n$ in Algorithm 4. We note that $\alpha_n$ depends on the parameters ρ and σ. Thus, we vary these parameters and study the resulting convergence behavior.
Choose $\mu = 0.3$, $\sigma = 0.2$, $\beta_n = 3.9$ and $\theta_n = \frac{1}{200n + 1}$. Let $x_1$ and A be as in the previous example. The stopping criterion is defined by Equation (50) with $\kappa = 10^{-5}$. The numerical results are reported in Table 4.
In Table 4, we see that the CPU time decreases significantly as the parameter ρ decreases, whereas the choice of ρ has almost no effect on the number of iterations.
Next, we discuss the effect of σ in Algorithm 4. In this experiment, choose $\mu = 0.3$, $\beta_n = 3.9$, $\rho = 0.5$ and $\theta_n = \frac{1}{200n + 1}$. The error $E_n$ is defined by Equation (50) with $\kappa = 10^{-5}$. The numerical results are reported in Table 5.
In Table 5, we observe that the choice of σ has only a small effect on both the CPU time and the number of iterations.
Finally, we discuss the convergence of Algorithm 4 for different choices of M and N. In this case, we set $\sigma = 1$, $\rho = 0.5$, $\mu = 0.3$, $\beta_n = 3.9$ and $\theta_n = \frac{1}{200n + 1}$. The stopping criterion is defined by Equation (50).
In Table 6, it is shown that, as M and N increase, the number of iterations decreases while the CPU time increases.

6. Conclusions

In this work, we introduced a new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces by combining self-adaptive and line-search techniques. This method can be viewed as a refinement and improvement of existing CQ algorithms. A weak convergence theorem for the proposed method was proved under suitable conditions. The numerical results show that our algorithm has better convergence behavior than the algorithms of Yang [6], Qu and Xiu [8] and Bnouhachem et al. [9]. A comparative analysis was also performed to show the effects of the step sizes in our algorithm.

Author Contributions

S.S.: supervision and investigation; S.K.: writing, original draft; P.C.: formal analysis and methodology.

Funding

This research was funded by Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
3. Stark, H. Image Recovery: Theory and Application; Elsevier: Amsterdam, The Netherlands, 2013.
4. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
5. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
6. Yang, Q. The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266.
7. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
8. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665.
9. Bnouhachem, A.; Noor, M.A.; Khalfaoui, M.; Zhaohan, S. On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54, 627–639.
10. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360.
11. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2017, 12, 1–14.
12. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2011.
Figure 1. From top to bottom: original signal, observation data and recovered signals by Algorithms 1–4, respectively.
Figure 2. MSE versus number of iterations when $N = 1024$, $M = 512$ and $\kappa = 10^{-5}$.
Figure 3. Objective function value versus number of iterations when $N = 1024$, $M = 512$ and $\kappa = 10^{-5}$.
Figure 4. From top to bottom: original signal, observation data and recovered signals by Algorithms 1–4, respectively.
Figure 5. MSE versus number of iterations when $N = 4096$, $M = 2048$ and $\kappa = 10^{-5}$.
Figure 6. Objective function value versus number of iterations when $N = 4096$, $M = 2048$ and $\kappa = 10^{-5}$.
Figure 7. Number of iterations versus $E_n$ in the case $N = 1024$ and $M = 512$.
Figure 8. Number of iterations versus $E_n$ in the case $N = 4096$ and $M = 2048$.
Table 1. Numerical results ($M = 512$ and $N = 1024$).

| m-Sparse | Method | CPU ($\kappa = 10^{-4}$) | Iter ($\kappa = 10^{-4}$) | CPU ($\kappa = 10^{-5}$) | Iter ($\kappa = 10^{-5}$) |
|---|---|---|---|---|---|
| m = 10 | Algorithm 1 | 0.7801 | 93 | 0.5931 | 83 |
| | Algorithm 2 | 0.0962 | 187 | 0.1000 | 158 |
| | Algorithm 3 | 0.1416 | 257 | 0.0605 | 74 |
| | Algorithm 4 | 0.0271 | 33 | 0.0592 | 39 |
| m = 15 | Algorithm 1 | 0.6345 | 93 | 0.6778 | 101 |
| | Algorithm 2 | 0.1020 | 196 | 0.1001 | 195 |
| | Algorithm 3 | 0.1087 | 170 | 0.0823 | 97 |
| | Algorithm 4 | 0.0251 | 35 | 0.0430 | 51 |
| m = 20 | Algorithm 1 | 1.1535 | 161 | 1.1177 | 156 |
| | Algorithm 2 | 0.1661 | 308 | 0.1573 | 296 |
| | Algorithm 3 | 0.3557 | 500 | 0.1139 | 134 |
| | Algorithm 4 | 0.0516 | 55 | 0.0695 | 78 |
| m = 25 | Algorithm 1 | 0.7380 | 103 | 2.9774 | 443 |
| | Algorithm 2 | 0.0990 | 196 | 0.4746 | 940 |
| | Algorithm 3 | 0.0623 | 115 | 0.7258 | 1308 |
| | Algorithm 4 | 0.0354 | 42 | 0.0922 | 165 |
| m = 30 | Algorithm 1 | 1.1423 | 168 | 3.7280 | 92 |
| | Algorithm 2 | 0.1568 | 321 | 1.7980 | 666 |
| | Algorithm 3 | 0.1219 | 164 | 0.4119 | 111 |
| | Algorithm 4 | 0.0704 | 70 | 0.1335 | 38 |
Table 2. Numerical results ($M = 2048$ and $N = 4096$).

| m-Sparse | Method | CPU ($\kappa = 10^{-4}$) | Iter ($\kappa = 10^{-4}$) | CPU ($\kappa = 10^{-5}$) | Iter ($\kappa = 10^{-5}$) |
|---|---|---|---|---|---|
| m = 20 | Algorithm 1 | 53.4863 | 28 | 77.8192 | 40 |
| | Algorithm 2 | 3.1953 | 43 | 4.7627 | 62 |
| | Algorithm 3 | 1.5285 | 19 | 2.3102 | 28 |
| | Algorithm 4 | 1.0771 | 13 | 1.6199 | 20 |
| m = 40 | Algorithm 1 | 74.6456 | 38 | 106.3420 | 54 |
| | Algorithm 2 | 4.5607 | 60 | 6.1862 | 83 |
| | Algorithm 3 | 2.0701 | 26 | 2.9406 | 37 |
| | Algorithm 4 | 1.4418 | 18 | 2.1713 | 27 |
| m = 60 | Algorithm 1 | 86.1752 | 45 | 137.6885 | 70 |
| | Algorithm 2 | 5.2204 | 70 | 8.1821 | 110 |
| | Algorithm 3 | 2.3965 | 30 | 3.6434 | 46 |
| | Algorithm 4 | 1.7580 | 22 | 2.6908 | 34 |
| m = 80 | Algorithm 1 | 133.5504 | 67 | 219.4587 | 112 |
| | Algorithm 2 | 7.8185 | 104 | 13.3599 | 178 |
| | Algorithm 3 | 3.4220 | 43 | 5.9392 | 75 |
| | Algorithm 4 | 2.4207 | 30 | 3.7902 | 47 |
| m = 100 | Algorithm 1 | 148.3098 | 75 | 327.4775 | 163 |
| | Algorithm 2 | 8.7840 | 118 | 19.7221 | 258 |
| | Algorithm 3 | 3.8024 | 48 | 16.0518 | 202 |
| | Algorithm 4 | 2.6962 | 34 | 5.3538 | 66 |
Table 3. The convergence behavior of Algorithm 4 with different cases of $\beta_n$.

| Setting | $\beta_n$ | CPU | Iter |
|---|---|---|---|
| N = 1024, M = 512, m = 20 | 0.1 | 0.4585 | 139 |
| | 0.5 | 0.1976 | 73 |
| | 1.0 | 0.1632 | 55 |
| | 1.5 | 0.1272 | 44 |
| | 2.0 | 0.1187 | 38 |
| | 2.5 | 0.1048 | 35 |
| | 3.0 | 0.1065 | 32 |
| | 3.5 | 0.1298 | 29 |
| | 3.9 | 0.0954 | 28 |
| N = 4096, M = 2048, m = 20 | 0.1 | 4.4547 | 58 |
| | 0.5 | 3.6075 | 39 |
| | 1.0 | 2.2021 | 29 |
| | 1.5 | 1.8119 | 24 |
| | 2.0 | 1.6024 | 21 |
| | 2.5 | 1.5748 | 29 |
| | 3.0 | 1.4055 | 17 |
| | 3.5 | 1.3297 | 16 |
| | 3.9 | 1.3172 | 15 |
Table 4. The convergence behavior of Algorithm 4 with different cases of ρ.

| Setting | ρ | CPU | Iter |
|---|---|---|---|
| N = 1024, M = 512, m = 20 | 0.1 | 0.0634 | 27 |
| | 0.3 | 0.0981 | 26 |
| | 0.5 | 0.1065 | 26 |
| | 0.7 | 0.1773 | 27 |
| | 0.9 | 0.5421 | 27 |
| N = 4096, M = 2048, m = 20 | 0.1 | 0.7554 | 17 |
| | 0.3 | 1.2094 | 17 |
| | 0.5 | 1.7697 | 17 |
| | 0.7 | 3.1876 | 17 |
| | 0.9 | 10.1536 | 18 |
Table 5. The convergence behavior of Algorithm 4 with different cases of σ.

| Setting | σ | CPU | Iter |
|---|---|---|---|
| N = 1024, M = 512, m = 20 | 1 | 0.2985 | 53 |
| | 2 | 0.2974 | 53 |
| | 3 | 0.2636 | 56 |
| | 4 | 0.2478 | 53 |
| | 5 | 0.2584 | 52 |
| | 6 | 0.2816 | 56 |
| N = 4096, M = 2048, m = 20 | 1 | 1.9105 | 16 |
| | 2 | 1.9990 | 16 |
| | 3 | 2.0937 | 16 |
| | 4 | 2.1371 | 16 |
| | 5 | 2.2449 | 17 |
| | 6 | 2.3816 | 16 |
Table 6. The convergence behavior of Algorithm 4 with different cases of M and N.

| Problem size | CPU ($\kappa = 10^{-4}$) | Iter ($\kappa = 10^{-4}$) | CPU ($\kappa = 10^{-5}$) | Iter ($\kappa = 10^{-5}$) |
|---|---|---|---|---|
| M = 1024, N = 2048 | 0.9967 | 13 | 1.4998 | 21 |
| M = 2048, N = 4096 | 3.8625 | 11 | 5.6119 | 16 |
| M = 3072, N = 6144 | 5.0449 | 6 | 6.5788 | 8 |
| M = 4096, N = 8192 | 7.3689 | 5 | 10.1838 | 7 |
