Half-Space Relaxation Projection Method for Solving Multiple-Set Split Feasibility Problem

Abstract: In this paper, we study an iterative method for solving the multiple-set split feasibility problem: find a point in the intersection of a finite family of closed convex sets in one space whose image under a linear transformation belongs to the intersection of another finite family of closed convex sets in the image space. We obtain a strongly convergent algorithm by relaxing the closed convex sets to half-spaces, using projections onto those half-spaces, and by introducing an extended form of the step-size selection used in a relaxed CQ algorithm for solving the split feasibility problem. We also give several numerical examples illustrating the efficiency and implementation of our algorithm in comparison with existing algorithms in the literature.


Split Inverse Problem
The Split Inverse Problem (SIP) is an archetypal model presented in ([1], Section 2), and it is stated as follows: find x* ∈ X that solves IP1 such that y* = Ax* ∈ Y solves IP2, where A is a bounded linear operator from a space X to another space Y, and IP1 and IP2 are two inverse problems posed in X and Y, respectively. Real-world inverse problems can be cast into this framework by making different choices of the spaces X and Y (including the case X = Y) and by choosing appropriate inverse problems for IP1 and IP2. For example, image restoration, computed tomography and intensity-modulated radiation therapy (IMRT) treatment planning give rise to problems that can be formulated in the SIP model; see [2–5]. The split feasibility problem [6] and the multiple-set split feasibility problem [7] are the first instances of the SIP, where the two problems IP1 and IP2 are of the Convex Feasibility Problem (CFP) type [8]. In the SIP framework, many authors have studied cases in which IP1 and IP2 are convex feasibility problems, minimization problems, equilibrium problems, fixed point problems, null point problems and so on; see, for example, [2,3,5,9–20].

Split Feasibility Problem and Multiple-Set Split Feasibility Problem
Let H be a real Hilbert space and let T : H → H be an operator. We say that T is ρ-strongly quasi-nonexpansive, where ρ ≥ 0, if Fix T = {x ∈ H : Tx = x} ≠ ∅ and

‖Tx − z‖² ≤ ‖x − z‖² − ρ‖Tx − x‖² for all x ∈ H and z ∈ Fix T. (1)

If ρ = 0 in (1), then T is called a quasi-nonexpansive operator. If ρ > 0 in (1), then we say that T is strongly quasi-nonexpansive. Obviously, a nonexpansive operator having a fixed point is quasi-nonexpansive. If T is quasi-nonexpansive, then Fix T is closed and convex. For ν ≥ 0 denote by T_ν := (1 − ν)I + νT the ν-relaxation of T, where I is the identity operator and ν is called the relaxation parameter. If T is quasi-nonexpansive, then usually one applies a relaxation parameter ν ∈ [0, 1].
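As a small illustration (a sketch only; the handle names relax and projBall are ours, not from the paper), the ν-relaxation of an operator supplied as a MATLAB function handle can be formed and applied as follows.

    % Form the nu-relaxation T_nu = (1 - nu)*I + nu*T of an operator T
    % supplied as a function handle; nu is the relaxation parameter.
    relax = @(T, nu) @(x) (1 - nu)*x + nu*T(x);

    % Example: relaxing the projection onto the closed unit ball with nu = 0.5.
    projBall = @(x) x / max(1, norm(x));
    Thalf = relax(projBall, 0.5);
    y = Thalf([3; 4]);    % = 0.5*[3; 4] + 0.5*[0.6; 0.8] = [1.8; 2.4]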
Let H_1 and H_2 be real Hilbert spaces and let A : H_1 → H_2 be a bounded linear operator. Given nonempty closed convex subsets {C_1, . . . , C_N} and {Q_1, . . . , Q_M} of H_1 and H_2, respectively, the Multiple-Set Split Feasibility Problem (MSSFP), introduced by Censor et al. [7], is formulated as finding a point

x̄ ∈ ∩_{i=1}^{N} C_i such that Ax̄ ∈ ∩_{j=1}^{M} Q_j. (2)

Denote by Ω the set of solutions of (2). The MSSFP (2) with N = M = 1 is known as the Split Feasibility Problem (SFP), which is formulated as finding a point

x̄ ∈ C such that Ax̄ ∈ Q, (3)

where C and Q are nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively. The SFP was first introduced in 1994 by Censor and Elfving [6] in finite-dimensional Hilbert spaces for modeling inverse problems that arise from phase retrieval and medical image reconstruction. The SFP plays an important role in the study of signal processing, image reconstruction, intensity-modulated radiation therapy, etc. [2,3,5,11]. Several iterative algorithms have been presented to solve the SFP and the MSSFP, provided a solution exists; see, for example, [3,6,21–32]. The algorithm proposed by Censor and Elfving [6] for solving the SFP involves the computation of the inverse of A at each iteration, assuming this inverse exists, a fact that makes the algorithm inapplicable in practice. Most methods employ Landweber operators; see the definition and its properties in [33]. In general, all of these methods produce sequences that converge weakly to a solution.

Byrne [3] proposed the following iteration for solving the SFP, called the CQ-algorithm or the projected Landweber method:

x_{n+1} = P_C V_ν x_n, (4)

where x_1 ∈ H_1 is arbitrary and V_ν is the ν-relaxation of the Landweber operator V (corresponding to P_Q), i.e., V := I − (1/‖A‖²) A*(I − P_Q)A, where ν ∈ (0, 2), A* denotes the adjoint of A, and ‖A‖² is the spectral norm of AA*. It is well known that the CQ algorithm (4) does not necessarily converge strongly to a solution of the SFP in infinite-dimensional Hilbert spaces. An important advantage of the CQ algorithm of Byrne [3,21] is that computation of the inverse of A (matrix inversion) is not necessary. The Landweber operator is also used for a more general type of problem, the split common fixed point problem for quasi-nonexpansive operators (see, for example, [10,34,35]), but the implementation of the resulting Landweber-type algorithms requires prior knowledge of the operator norm. However, the operator norm is a global invariant and is often difficult to estimate; see, for example, the theorem of Hendrickx and Olshevsky in [36]. To overcome this difficulty, Lopez et al. [2] introduced a new way of selecting the step sizes for solving the SFP (3) such that information about the operator norm is not necessary. To be precise, with f(x) = ½‖(I − P_Q)Ax‖², Lopez et al. [2] proposed step sizes of the form τ_n = ρ_n f(x_n)/‖∇f(x_n)‖² with 0 < ρ_n < 4.

In addition, the computation of the projection onto a general closed convex set is not easy. In order to overcome this drawback, Yang [37] considered SFPs in which the involved sets C and Q are given as sub-level sets of convex functions, i.e.,

C = {x ∈ H_1 : c(x) ≤ 0} and Q = {y ∈ H_2 : q(y) ≤ 0}, (5)

where c : H_1 → R and q : H_2 → R are convex and subdifferentiable functions on H_1 and H_2, respectively, and ∂c and ∂q are bounded operators (i.e., bounded on bounded sets). It is known that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see [38]).
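For illustration, the following MATLAB sketch implements one CQ-type iteration with an adaptive step size of the kind described above; it assumes that the projections onto C and Q are available as function handles projC and projQ (our own naming), and it is a sketch rather than the authors' code.

    % One CQ-type iteration with an adaptive (Lopez-type) stepsize:
    % no knowledge of ||A|| is required.
    function x = cq_adaptive_step(x, A, projC, projQ, rho)
        Ax   = A*x;
        res  = Ax - projQ(Ax);              % (I - P_Q)Ax
        grad = A'*res;                      % gradient of f(x) = 0.5*||(I - P_Q)Ax||^2
        fval = 0.5*norm(res)^2;
        if norm(grad) > 0
            tau = rho*fval/norm(grad)^2;    % adaptive stepsize, rho in (0, 4)
            x   = projC(x - tau*grad);
        end
    end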
When C and Q are given as such sub-level sets, the efficiency of the CQ method is severely affected because, in general, the computation of projections onto such sets is still very difficult. Motivated by Fukushima's relaxed projection method in [39], Yang [37] suggested computing the projection onto a half-space containing the original set instead of onto the set itself. More precisely, Yang introduced a relaxed CQ algorithm using a half-space relaxation projection method for solving the SFP. The algorithm proposed by Yang is given as follows:

x_{n+1} = P_{C_n}(x_n − γ f_n x_n), γ ∈ (0, 2/‖A‖²),

where f_n = A*(I − P_{Q_n})A and, for each n ∈ N, the set C_n is given by

C_n = {x ∈ H_1 : c(x_n) ≤ ⟨ξ_n, x_n − x⟩}, (6)

where ξ_n ∈ ∂c(x_n), and the set Q_n is given by

Q_n = {y ∈ H_2 : q(Ax_n) ≤ ⟨ε_n, Ax_n − y⟩}, (7)

where ε_n ∈ ∂q(Ax_n). Obviously, C_n and Q_n are half-spaces, and C ⊂ C_n and Q ⊂ Q_n for every n ≥ 1. More importantly, since the projections onto C_n and Q_n have closed forms, the relaxed CQ algorithm is easily implemented. The specific form of the metric projections onto C_n and Q_n can be found in [38,40,41].

For solving the MSSFP (2), many methods have been developed; see, for example, [7,26,42–49] and the references therein. We aim to propose a strongly convergent algorithm with high efficiency that is easy to implement for solving the MSSFP. Motivated by Yang [37], we are interested in solving the MSSFP (2) in which the involved sets C_i (i ∈ {1, . . . , N}) and Q_j (j ∈ {1, . . . , M}) are given as sub-level sets of convex functions, i.e.,

C_i = {x ∈ H_1 : c_i(x) ≤ 0} and Q_j = {y ∈ H_2 : q_j(y) ≤ 0}, (8)

where c_i : H_1 → R and q_j : H_2 → R are convex functions for all i ∈ {1, . . . , N}, j ∈ {1, . . . , M}. We assume that each c_i and q_j is subdifferentiable on H_1 and H_2, respectively, and that ∂c_i and ∂q_j are bounded operators (i.e., bounded on bounded sets). In what follows, we define N + M half-spaces at the point x_n by

C_{i,n} = {x ∈ H_1 : c_i(x_n) ≤ ⟨ξ_{i,n}, x_n − x⟩}, (9)

where ξ_{i,n} ∈ ∂c_i(x_n), and

Q_{j,n} = {y ∈ H_2 : q_j(Ax_n) ≤ ⟨ε_{j,n}, Ax_n − y⟩}, (10)

where ε_{j,n} ∈ ∂q_j(Ax_n). The paper contributes to developing an algorithm for the MSSFP in the direction of half-space relaxation (assuming that C_i and Q_j are given as sub-level sets of convex functions (8)) and parallel computation of the projections onto the half-spaces (9) and (10), without prior knowledge of the operator norm.
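Since C_n and Q_n in (6) and (7) are half-spaces, the projections onto them have a closed form. The MATLAB sketch below (our own helper with our own variable names, assuming ξ_n is a subgradient of c at x_n) shows the projection onto C_n; the projection onto Q_n is analogous.

    % Closed-form projection onto the half-space
    % C_n = {x : c(x_n) <= <xi_n, x_n - x>} = {x : <xi_n, x> <= <xi_n, x_n> - c(x_n)}.
    function p = proj_halfspace(x, xi_n, x_n, c_xn)
        b    = xi_n'*x_n - c_xn;                 % right-hand side of <xi_n, x> <= b
        viol = xi_n'*x - b;                      % constraint violation at x
        if viol > 0 && any(xi_n)
            p = x - (viol/(xi_n'*xi_n))*xi_n;    % project onto the bounding hyperplane
        else
            p = x;                               % x already lies in the half-space
        end
    end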
This paper is organized in the following way. In Section 2, we recall some basic and useful facts that will be used in the proofs of our results. In Section 3, we extend the step-size selection used in the relaxed CQ algorithm for solving the SFP by Yang [37] and Lopez et al. [2] to the MSSFP framework, and we analyze the strong convergence of our proposed algorithm. In Section 4, we give some numerical examples to discuss the performance of the proposed algorithm. Finally, we give some conclusions.

Preliminary
In this section, in order to prove our results, we recall some basic notions and useful facts in a real Hilbert space H. The symbols "⇀" and "→" denote weak and strong convergence, respectively. Let C be a nonempty closed convex subset of H. The metric projection onto C is the mapping P_C : H → C defined by P_C(x) = arg min{‖y − x‖ : y ∈ C}, x ∈ H.

Lemma 1. [50] Let C be a closed convex subset of H. Given x ∈ H and a point z ∈ C, then z = P_C(x) if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.
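As a quick numerical illustration of Lemma 1 (a sketch with our own example data, not taken from the paper), consider the projection onto a box in R², which is a componentwise clipping.

    % Metric projection onto the box C = {x : lo <= x <= hi} and a numerical
    % check of the characterization in Lemma 1: <x - z, y - z> <= 0 for y in C.
    lo = [0; 0];  hi = [1; 1];
    projBox = @(x) min(max(x, lo), hi);
    x = [2; -0.5];
    z = projBox(x);                 % z = [1; 0]
    y = [0.3; 0.7];                 % an arbitrary point of C
    check = (x - z)'*(y - z);       % = -1.05 <= 0, consistent with Lemma 1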
The mapping T : H → H is firmly nonexpansive if

⟨Tx − Ty, x − y⟩ ≥ ‖Tx − Ty‖² for all x, y ∈ H,

which is equivalent to

‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖² for all x, y ∈ H.

If T is firmly nonexpansive, then I − T is also firmly nonexpansive. The metric projection P_C onto a closed convex subset C of H is firmly nonexpansive.

Definition 1. The subdifferential of a convex function f : H → R at x ∈ H is the set ∂f(x) = {z ∈ H : f(y) ≥ f(x) + ⟨z, y − x⟩ for all y ∈ H}. If ∂f(x) ≠ ∅, f is said to be subdifferentiable at x. If the function f is continuously differentiable, then ∂f(x) = {∇f(x)}. A function f is weakly lower semi-continuous at x ∈ H if x_n ⇀ x implies f(x) ≤ lim inf_{n→∞} f(x_n); a function that is weakly lower semi-continuous at each point of H is called weakly lower semi-continuous on H.

Lemma 2. [3,51] Let H_1 and H_2 be real Hilbert spaces and let f : H_1 → R be given by f(x) = ½‖(I − P_Q)Ax‖², where Q is a closed convex subset of H_2 and A : H_1 → H_2 is a bounded linear operator. Then

(i) the function f is convex and weakly lower semi-continuous on H_1;
(ii) ∇f(x) = A*(I − P_Q)Ax for x ∈ H_1.

Lemma 3. [27,52] Let C and Q be closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively, and let f : H_1 → R be given by f(x) = ½‖(I − P_Q)Ax‖², where A : H_1 → H_2 is a bounded linear operator. Then for λ > 0 and x̄ ∈ H_1 the following statements are equivalent:
(i) x̄ solves the SFP (3);
(ii) x̄ = P_C(x̄ − λA*(I − P_Q)Ax̄).

Lemma 5. [54] Let {a_n} be a sequence of nonnegative numbers such that a_{n+1} ≤ (1 − α_n)a_n + α_n δ_n, where {δ_n} is a sequence of real numbers bounded from above, 0 ≤ α_n ≤ 1 and ∑_{n=1}^{∞} α_n = ∞. Then lim sup_{n→∞} a_n ≤ lim sup_{n→∞} δ_n.

Half-Space Relaxation Projection Algorithm
In this section, we propose an iterative algorithm to solve the MSSFP (2). To make our algorithm more efficient and its implementation easier, we assume that the convex sets C_i and Q_j are given in the form of (8), and we use projections onto the half-spaces C_{i,n} and Q_{j,n} defined in (9) and (10), respectively, instead of onto C_i and Q_j, just as in the relaxed or inexact methods in [5,37,39,55]. Moreover, in order to remove the requirement of an estimate of the operator norm and to solve the MSSFP when computing the operator norm is not easy, we now introduce a new way of selecting the step sizes for solving the MSSFP (2), given as follows for x ∈ H_1, where C_{i,n} and Q_{j,n} are the half-spaces defined in (9) and (10).
(i) For each i ∈ {1, . . . , N} and n ≥ 1, define g_{i,n}(x) = ½‖(I − P_{C_{i,n}})x‖² and ∇g_{i,n}(x) = (I − P_{C_{i,n}})x.
(ii) g_n(x) and ∇g_n(x) are defined as g_n(x) = g_{i_n^x,n}(x) and so ∇g_n(x) = ∇g_{i_n^x,n}(x), where i_n^x ∈ {1, . . . , N} is such that, for each n ≥ 1, g_{i_n^x,n}(x) = max{g_{i,n}(x) : i ∈ {1, . . . , N}}.
(iii) For each j ∈ {1, . . . , M} and n ≥ 1, define f_{j,n}(x) = ½‖(I − P_{Q_{j,n}})Ax‖² and ∇f_{j,n}(x) = A*(I − P_{Q_{j,n}})Ax.

From Aubin [51], g_{i,n} and f_{j,n} are convex, weakly lower semi-continuous and differentiable for each i ∈ {1, . . . , N} and j ∈ {1, . . . , M}. Now, using ∇g_{i,n}, g_{i,n}, g_n, ∇g_n, f_{j,n} and ∇f_{j,n} given in (i)-(iii) above, and assuming that the solution set Ω of the MSSFP (2) is nonempty, we propose and analyze the strong convergence of our algorithm, called the Half-Space Relaxation Projection Algorithm (HSRPA).
Note that the iterative scheme in Algorithm 1 (HSRPA) is established in a way that ∇g_n(z_n) and ∇f_{j,n}(z_n), i.e., the projections P_{C_{i,n}} and P_{Q_{j,n}}, are computed in a parallel setting under simple assumptions on the step sizes.
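To indicate how the quantities in (i)-(iii) can be evaluated in such a parallel fashion, the following MATLAB sketch (our own helper; projC and projQ are assumed to be cell arrays of function handles for the closed-form half-space projections at the current iterate) computes g_n, ∇g_n and all f_{j,n}, ∇f_{j,n} at a point z; the two loops are independent across indices and can be replaced by parfor.

    % Evaluate g_n, grad g_n and all f_{j,n}, grad f_{j,n} at a point z.
    function [gmax, gradg, f, gradf] = hsrpa_evaluate(z, A, projC, projQ)
        N = numel(projC);  M = numel(projQ);
        gvals = zeros(N,1);  resC = cell(N,1);
        for i = 1:N                           % independent: can be parfor
            resC{i}  = z - projC{i}(z);       % (I - P_{C_{i,n}}) z
            gvals(i) = 0.5*norm(resC{i})^2;   % g_{i,n}(z)
        end
        [gmax, imax] = max(gvals);            % g_n(z) = max_i g_{i,n}(z)
        gradg = resC{imax};                   % grad g_n(z)
        Az = A*z;
        f = zeros(M,1);  gradf = cell(M,1);
        for j = 1:M                           % independent: can be parfor
            resQ     = Az - projQ{j}(Az);     % (I - P_{Q_{j,n}}) A z
            f(j)     = 0.5*norm(resQ)^2;      % f_{j,n}(z)
            gradf{j} = A'*resQ;               % grad f_{j,n}(z)
        end
    end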
Thus, one can get P_{C_{i,n}}(z_n − λ∇f_{j,n}(z_n)) = z_n for all i ∈ {1, . . . , N} and j ∈ {1, . . . , M}. Since C_i ⊂ C_{i,n}, we get z_n ∈ C_{i,n} for all i ∈ {1, . . . , N}. Combined with the fixed point relation in Lemma 3 (ii), we also get that Az_n ∈ Q_{j,n} for all j ∈ {1, . . . , M}. Following the representations of the sets C_{i,n} and Q_{j,n} in (9) and (10), we obtain that c_i(z_n) ≤ 0 for all i ∈ {1, . . . , N} and q_j(Az_n) ≤ 0 for all j ∈ {1, . . . , M}, and this implies that z_n ∈ C_i for all i ∈ {1, . . . , N} and Az_n ∈ Q_j for all j ∈ {1, . . . , M}, which completes the proof.
By Lemma 6, we can conclude that the HSRPA terminates at some iterate n when {j ∈ {1, . . . , M} : ‖∇g_n(z_n)‖² + ‖∇f_{j,n}(z_n)‖² = 0} ≠ ∅. Otherwise, if the HSRPA does not stop, then we have the following strong convergence theorem for the approximation of a solution of the MSSFP (2).

Theorem 1. The sequence {x_n} generated by the HSRPA converges strongly to the solution point x̄ of the MSSFP (2) (x_n → x̄ ∈ Ω), where x̄ = P_Ω u.
Proof. Let x̄ ∈ Ω. Since I − P_{C_{i,n}} and I − P_{Q_{j,n}} are firmly nonexpansive, and since x̄ verifies (2), we obtain, for all i and j, the estimates (11) and (12). Using the definition of y_n and Lemma 4 (ii), we obtain (13); using the convexity of ‖·‖², we obtain (14); and from (11) and (12), we obtain (15). In view of (13), (14) and (15), we obtain (16). From (16) and (C5), we have (17). Using (17), Lemma 4 (i) and the definition of x_{n+1}, we get (18). From (18) and the definition of z_n, we get (19) and (20). Using (18) and (19), we have (21), and from the definition of z_n, we have (22). Thus, (21) and (22) give (23), that is, (24), where Γ_n is the quantity defined there. We know that {x_n} is bounded, and so it is bounded below; hence, Γ_n is bounded below. Furthermore, using Lemma 5 and (C3), we see that lim inf_{n→∞} Γ_n is a finite real number and, by (C3), we have (26). Since {x_n} is bounded, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ p for some p ∈ H_1 and lim inf_{n→∞} Γ_n = lim_{k→∞} Γ_{n_k}. Since {x_n} is bounded and lim inf_{n→∞} Γ_n is finite, the sequence ((1 − β_{n_k})/(α_{n_k}β_{n_k}))‖x_{n_k+1} − z_{n_k}‖² is bounded. Also, by (C4), we have (1 − β_n)/(α_n β_n) ≥ (1 − b)/(α_n β_n) > 0, and so (1/(α_{n_k}β_{n_k}))‖x_{n_k+1} − z_{n_k}‖² is bounded.
Observing (C3) and (C4), and then using (20), the definition of x_{n+1} and (16), we obtain (27), (28) and (29). Therefore, (27), (29) and (C5) give (30). Again, using (C5) together with (30) yields (31). Hence, in view of (31) and the restriction condition (C2), we obtain the corresponding limits. From the definition of g_{n_k}(z_{n_k}), we have g_{i,n_k}(z_{n_k}) ≤ g_{n_k}(z_{n_k}) for all i ∈ {1, . . . , N}.
Similarly, from the boundedness of {ξ_{i,n}}_{n=1}^{∞} and (35), for all i ∈ {1, . . . , N} we obtain (37). Since x_{n_k} ⇀ p and using (28), we have z_{n_k} ⇀ p and hence Az_{n_k} ⇀ Ap. The weak lower semi-continuity of q_j(·) and (36) imply that q_j(Ap) ≤ 0; that is, Ap ∈ Q_j for all j ∈ {1, . . . , M}. Likewise, the weak lower semi-continuity of c_i(·) and (37) imply that c_i(p) ≤ 0; that is, p ∈ C_i for all i ∈ {1, . . . , N}. Hence, p ∈ Ω. Take x̄ = P_Ω u. Then, we obtain from (26) and Lemma 1 the required estimate; therefore, ‖x_n − x̄‖ → 0, which implies that {x_n} converges strongly to x̄. This completes the proof.
When the point u in the HSRPA is taken to be 0, we see from Theorem 1 that the limit point x̄ of the sequence {x_n} is the unique minimum-norm solution of the MSSFP, i.e., ‖x̄‖ = min{‖x‖ : x ∈ Ω}.
ii. In the algorithm (HSRPA), the stepsize τ_{j,n} can also be replaced by the stepsize τ*_{j,n} given in (38). The proof of the strong convergence of the HSRPA using the stepsize τ*_{j,n} defined in (38) is almost the same as the proof of Theorem 1; to be precise, only a slight rearrangement in (14) is required in the proof of Theorem 1.
If M = N = 1, we have the following algorithm (Algorithm 2) as a consequence of the HSRPA, concerning the SFP (3), assuming that C and Q are given as sub-level sets of convex functions (5), constructing the half-spaces (6) and (7), and defining g_n(x) = ½‖(I − P_{C_n})x‖², ∇g_n(x) = (I − P_{C_n})x, f_n(x) = ½‖(I − P_{Q_n})Ax‖² and ∇f_n(x) = A*(I − P_{Q_n})Ax.
Step: Proceed with the following computations.

Preliminary Numerical Results and Applications
In this section, we illustrate the numerical performance and applicability of the HSRPA by solving some problems. In the first example, we investigate the numerical performance of the HSRPA with regard to different choices of the control parameters α_n, β_n and ρ_n. In Example 2, we illustrate the numerical properties of the HSRPA in comparison with three other strongly convergent algorithms, namely the gradient projection method (GPM) by Censor et al. ([7], Algorithm 1), the perturbed projection method (PPM) by Censor et al. ([43], Algorithm 5), and the self-adaptive projection method (SAPM) by Zhao and Yang ([46], Algorithm 3.2). As mentioned in Remark 1 (ii), the stepsize in the HSRPA can be replaced by the stepsize (38); therefore, in Example 3, we analyze the effect of the two stepsizes in the HSRPA for different choices of λ_1 and λ_2, and we also compare the HSRPA with the algorithm of Zhao et al. ([48], Algorithm 2.1). A comparison of Algorithm 2 with the strong convergence result for the SFP proposed by Shehu et al. [56] is given in Example 4. Finally, in Section 4.1, we present a sparse signal recovery experiment to illustrate the efficiency of Algorithm 2 by comparing it with the algorithms proposed by Lopez et al. [2] and Yang [37]. The numerical results are computed on a standard TOSHIBA laptop with an Intel(R) Core(TM) i5-2450M CPU @ 2.50 GHz and 4 GB of memory. The programme is implemented in MATLAB R2020a.
and ∂q_j(Az_n) = {(a_{1,j}, . . . , a_{t,j})^T}. Note that computing the projection P_{C_{i,n}}(z_n), where C_{i,n} = {x ∈ H_1 : c_i(z_n) ≤ ⟨ξ_{i,n}, z_n − x⟩}, amounts to solving the following quadratic programming problem with an inequality constraint:

minimize ½ x^T H̄ x + B̄_n^T x + c̄ subject to D̄_{i,n} x ≤ F̄_i. (39)

Moreover, computing the projection P_{Q_{j,n}}(Az_n), where Q_{j,n} = {y ∈ H_2 : q_j(Az_n) ≤ ⟨ε_{j,n}, Az_n − y⟩}, amounts to solving the following quadratic programming problem with an inequality constraint:

minimize ½ w^T Ĥ w + B̂_n^T w + ĉ subject to D̂_{j,n} w ≤ F̂_j, (40)

where Ĥ = 2I_{t×t}, B̂_n = −2Az_n, ĉ = ‖Az_n‖², D̂_{j,n} = ε_{j,n}, and F̂_j = b_j + ⟨ε_{j,n}, Az_n⟩ + ⟨a_j, y_j^0 − Az_n⟩ for a_j = (a_{1,j}, . . . , a_{t,j}). The problems (39) and (40) can be effectively solved by an appropriate solver in MATLAB. In the following experiments we took ε_{j,n} = (a_{1,j}, . . . , a_{t,j})^T. We study the numerical behavior of the HSRPA for different parameters α_n, β_n, ρ_n and for different dimensions s = t, where r = 2 (i.e., N = 5), λ_1 = λ_2 = 1/2 and δ_j = j/4 for all j ∈ {1, 2, 3, 4} are fixed. Notice that, for the choice λ_1 = λ_2 = 1/2, the parameter ρ_n is chosen in such a way that 0 < ρ_n < 4 and lim inf_{n→∞} ρ_n(4 − ρ_n) > 0. The numerical results are shown in Figures 1 and 2.

Figure 1. Results for t = s = 100, ρ_n = 2, θ = 10 and randomly generated starting points x_1 and u.

In view of Figures 1 and 2, we see that our algorithm works better for (i) a sequence {α_n} whose terms are very close to zero; (ii) a sequence {β_n} whose terms are very close to 1; (iii) a sequence {ρ_n} whose terms are larger but do not exceed 4.
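For instance, the projection in (40) can be computed with MATLAB's quadprog (Optimization Toolbox). The sketch below uses our own placeholder names (eps_jn for ε_{j,n} and rhs_jn for the right-hand side F̂_j) and is meant only to indicate how (40) maps onto the solver.

    % Projection of A*z_n onto the half-space Q_{j,n} via the QP (40).
    function w = proj_Qjn_quadprog(Azn, eps_jn, rhs_jn)
        t    = numel(Azn);
        Hhat = 2*eye(t);                      % quadratic term of (40)
        Bhat = -2*Azn;                        % linear term of (40)
        Dhat = eps_jn(:)';                    % single inequality Dhat*w <= rhs_jn
        w    = quadprog(Hhat, Bhat, Dhat, rhs_jn);
    end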
We choose GPM, PPM and SAPM because the problem under consideration is the same and their approaches share some common features with ours.
GPM: The proposed iterative algorithm for solving the MSSFP reduces to (41), where the stepsize lies in the interval (0, 2/L). PPM: The proposed algorithm is obtained by replacing the projections onto the closed convex subsets C_i and Q_j in (41) by the half-spaces C_{i,n} and Q_{j,n}.
SAPM: The proposed iterative algorithm for solving the MSSFP reduces to the scheme given in ([46], Algorithm 3.2). In our algorithm we took ξ_{i,n} = (−1)^i x_i^0 for each i ∈ {1, . . . , N} and ε_{j,n} = (−1)^j z_j^0 for each j ∈ {2, . . . , M}. For the purpose of comparison we took the following data. The numerical results are shown in Table 1 and Figure 3. To permit comparisons between the algorithms in terms of the number of iterations (Iter(n)) and the execution time in seconds (CPUt(s)), we used the relative difference with respect to ‖x_1 − u‖, required to fall below a prescribed tolerance, as the stopping criterion in Table 1. From the numerical results in Table 1 and Figure 3 of Example 2, we can see that our algorithm (HSRPA) performs better than GPM, PPM and SAPM. To be specific, the HSRPA converges faster and requires fewer iterations than GPM, PPM and SAPM. In terms of CPU execution time, our algorithm has comparable performance with GPM, PPM and SAPM.

In Example 3, we take Q_j = {y ∈ R^t : (j − M)e_1 ≤ y ≤ j e_1} (j ∈ {1, . . . , M}), with different numbers of feasible sets N and M and dimensions s and t. We randomly generated the operator A = (a_{ij})_{t×s} with a_{ij} ∈ [0, 10]. In this example, we examine the effect of different choices of λ_1 and λ_2 on the numerical results of the HSRPA using both stepsizes τ_{j,n} and τ*_{j,n} (given in (38)), and we also compare the HSRPA with the algorithm proposed by Zhao et al. ([48], Algorithm 2.1). For the HSRPA we take u = (1, 1, 2)^T, x_1 = rand[−100, 100], N = 3, M = 10, s = t = 3, ρ_n = 3, α_n = 1/(n + 1), β_n = (n + 5)/(2n + 6) and δ_j = j/(1 + · · · + M), j ∈ {1, . . . , M}. For ([48], Algorithm 2.1), we take x_0 = rand[−100, 100], t = 3, r = 10, N = M = 3, ω_k = 1, α_i = (1013/3000)^i, i = 1, 2, 3, and β_j = (1013/3000)^j, j = 1, 2, . . . , 10. We use ‖x_{n+1} − x_n‖ < 10^{−4} as the stopping criterion. The results are presented in Table 2 below.
Interestingly, it can be observed from Table 2 that, for λ_1 ≤ λ_2, the HSRPA with the stepsize τ_{j,n} is faster, in terms of both the number of iterations and the CPU run time, than the HSRPA with the stepsize τ*_{j,n}. On the contrary, HSRPA (τ*_{j,n}) performs better for λ_1 > λ_2, and the HSRPA with either of the stepsizes converges faster and requires fewer iterations than the compared algorithm ([48], Algorithm 2.1).
if ⟨w(t), −t³⟩ ≥ −1. We consider the following problem: find x* ∈ C such that Ax* ∈ Q. (42)
It is clear that Problem (42) has a nonempty solution set Ω since 0 ∈ Ω. In this case, the iterative scheme in Algorithm 2, for u, x_1 ∈ C, with λ_1 = λ_2 = 1/2, ρ_n = 7/2, α_n = 1/(n + 1) and β_n = (n + 2)/(2n + 6), takes an explicit form. In this example, we compare Algorithm 2 with the strong convergence result for the SFP proposed by Shehu et al. [56]. The iterative scheme (27) in [56], for u, x_1 ∈ C, with α_n = 1/(n + 1), β_n = n/(2(n + 1)) = γ_n and t_n = 1/‖A‖², also reduces to an explicit form. We see here that our iterative scheme can be implemented to solve the problem (42) considered in this example. We use ‖x_{n+1} − x_n‖ < 10^{−3} as the stopping criterion for both algorithms, and the outcome of the numerical experiment is reported in Figure 4. It can be observed from Figure 4 that, for different choices of u and x_1, Algorithm 2 is faster, in terms of both the number of iterations and the CPU run time, than the algorithm proposed by Shehu et al. [56].

Application to Signal Recovery
In this part, we consider the problem of recovering a noisy sparse signal. Compressed sensing can be modeled as the following linear equation:

b = Ax + ε, (44)

where x ∈ R^N is a vector with L nonzero components to be recovered, b ∈ R^M is the measured data contaminated with noise ε (when ε = 0, there is no noise in the observed data), and A is an M × N bounded linear observation operator with N > M. The problem in Equation (44) can be seen as the LASSO problem, which has wide applications in signal processing theory [58].
The LASSO problem can be formulated as

minimize ½‖Ax − b‖² subject to ‖x‖_1 ≤ t, (45)

where t > 0 is a given constant. It can be observed that (45) indicates the potential of finding a sparse solution of the SFP (3) due to the ℓ_1 constraint. Thus, we apply Algorithm 2 to solve problem (45). Let C := {x : ‖x‖_1 ≤ t} and Q := {b}; then the minimization problem (45) can be seen as an SFP (3). Denote the level set C_n by C_n = {x : ω(x_n) + ⟨ς_n, x − x_n⟩ ≤ t}, where ς_n ∈ ∂ω(x_n) with the convex function ω(x) = ‖x‖_1. For the special case where Q = Q_n = {b}, Algorithm 2 converges to a solution of (45); moreover, since the projection onto this level set has an explicit formula, Algorithm 2 can be easily implemented. In the sequel, we present a sparse signal recovery experiment to illustrate the efficiency of Algorithm 2 by comparing it with the algorithms proposed by Lopez et al. [2] and Yang [37].
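The explicit formula for the projection onto this relaxed level set is again a half-space projection. A minimal MATLAB sketch (with our own function name, taking ς_n = sign(x_n) as a subgradient of the ℓ_1 norm at x_n) is as follows.

    % Projection onto C_n = {x : ||x_n||_1 + <s_n, x - x_n> <= t}, with
    % s_n = sign(x_n) a subgradient of the l1 norm at x_n and t the l1 radius.
    function p = proj_l1_levelset(x, xn, t)
        sn   = sign(xn);                          % subgradient of ||.||_1 at xn
        rhs  = t - norm(xn, 1) + sn'*xn;          % constraint <s_n, x> <= rhs
        viol = sn'*x - rhs;
        if viol > 0 && any(sn)
            p = x - (viol/(sn'*sn))*sn;           % closed-form half-space projection
        else
            p = x;                                % x already satisfies the constraint
        end
    end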
The vector x is an L-sparse signal with L nonzero elements generated from the uniform distribution on the interval [−2, 2]. The matrix A is generated from a normal distribution with zero mean and unit variance. The observation b is contaminated by white Gaussian noise with signal-to-noise ratio SNR = 40. The process is started with t = L; x_0 and u are randomly generated N × 1 vectors. The goal is then to recover the L-sparse signal x by solving (45). The restoration accuracy is measured by the mean squared error MSE = (1/N)‖x_n − x‖², where x_n is an estimated signal of x, and the iterations are stopped once the MSE falls below a given small constant ε > 0. We take the stopping tolerance ε = 10^{−6}, and we choose the corresponding parameters ρ_n = 3.5 and λ_1 = λ_2 = 0.5 for Algorithm 2. We also choose γ = 1/‖A‖² and ρ_n = 2 for the algorithm by Yang [37] and the algorithm by Lopez et al. [2], respectively.
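A sketch of this experimental setup in MATLAB (all variable names are ours, and the noise is scaled directly to obtain the stated SNR rather than via any toolbox routine) could look as follows.

    % Generate an L-sparse test signal, a Gaussian sensing matrix and a noisy
    % observation with SNR = 40 dB, as in the experiment described above.
    N = 512;  M = 120;  L = 30;
    xtrue = zeros(N, 1);
    idx = randperm(N, L);
    xtrue(idx) = -2 + 4*rand(L, 1);            % L nonzeros, uniform on [-2, 2]
    A = randn(M, N);                           % zero-mean, unit-variance entries
    signal = A*xtrue;
    noise  = randn(M, 1);
    noise  = noise*(norm(signal)/norm(noise))*10^(-40/20);   % 40 dB SNR
    b = signal + noise;
    t = L;                                     % l1 radius used in (45)
    mse = @(xn) norm(xn - xtrue)^2 / N;        % restoration accuracy measure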
It can be observed from Figures 5-8 that the recovery by the proposed algorithm requires fewer iterations and attains a smaller MSE. Therefore, the quality of the signal recovered by the proposed algorithm is better than that of the compared algorithms. Figure 5. The original L-sparse signal versus the recovered sparse signal by Algorithm 2 and by the algorithms of Lopez et al. [2] and Yang [37] when N = 512, M = 120 and L = 30.

Conclusions
In this paper, we presented a strongly convergent iterative algorithm for solving the MSSFP with a way of selecting the stepsizes such that the implementation of the algorithm does not require any prior information on the operator norms. Preliminary numerical results are reported to illustrate the numerical behavior of our algorithm (HSRPA) and to compare it with well-known algorithms in the literature, including Censor et al. ([7], Algorithm 1), Censor et al. ([43], Algorithm 5), Zhao and Yang ([46], Algorithm 3.2) and Zhao et al. ([48], Algorithm 2.1). The numerical results show that our proposed algorithm is practical and promising for solving the MSSFP. Algorithm 2 is applied to signal recovery; the experimental results show that Algorithm 2 requires fewer iterations than the algorithms proposed by Yang [37], Lopez et al. [2] and Shehu et al. [56].