Bounded Perturbation Resilience of Two Modified Relaxed CQ Algorithms for the Multiple-Sets Split Feasibility Problem

Abstract: In this paper, we present some modified relaxed CQ algorithms with different kinds of step size and perturbation to solve the Multiple-sets Split Feasibility Problem (MSSFP). Under mild assumptions, we establish weak convergence and prove the bounded perturbation resilience of the proposed algorithms in Hilbert spaces. Treating appropriate inertial terms as bounded perturbations, we construct inertial acceleration versions of the corresponding algorithms. Finally, for the LASSO problem and three experimental examples, numerical computations are given to demonstrate the efficiency of the proposed algorithms and the validity of the inertial perturbation.


Introduction
In this paper, we focus on the Multiple-sets Split Feasibility Problem (MSSFP), which is formulated as follows: find x ∈ C := ∩_{i=1}^t C_i such that Ax ∈ Q := ∩_{j=1}^r Q_j, (1)
where A : H_1 → H_2 is a bounded linear operator, C_i ⊂ H_1, i = 1, · · · , t, and Q_j ⊂ H_2, j = 1, · · · , r, are nonempty closed convex sets, and H_1 and H_2 are Hilbert spaces. When t = 1 and r = 1, it reduces to the Split Feasibility Problem (SFP). Byrne [1,2] introduced the following CQ algorithm to solve the SFP:
x^{k+1} = P_C(x^k − α_k A^*(I − P_Q)Ax^k), (2)
where α_k ∈ (0, 2/‖A‖²). It is proven that the iterates {x^k} converge to a solution of the SFP. When P_C and P_Q have explicit expressions, the CQ algorithm is easy to carry out. However, P_C and P_Q have no explicit formulas in general; thus the computation of P_C and P_Q is itself an optimization problem.
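For intuition, the CQ iteration can be sketched numerically. The following Python snippet is a minimal illustration, not the paper's exact setting: it solves a toy SFP in R² where C is a box and Q is a ball, two hypothetical sets whose metric projections have closed forms.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=100):
    """Byrne's CQ iteration x^{k+1} = P_C(x^k - a A^T (I - P_Q) A x^k)
    with a fixed step a in (0, 2/||A||^2), as the convergence theory requires."""
    a = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - a * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP: C = [0,1]^2 (box), Q = ball of radius 3, A = diag(1, 2).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda z: np.clip(z, 0.0, 1.0)
proj_Q = lambda y: y if np.linalg.norm(y) <= 3.0 else 3.0 * y / np.linalg.norm(y)

x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, 5.0]))
# x lies in C and Ax lies in Q, i.e., x solves this toy SFP.
```

In this particular instance the iterate reaches a feasible point after very few steps, because once x lands in the box [0,1]², the image Ax already lies inside the ball of radius 3 and the gradient term vanishes.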
To avoid the computation of P_C and P_Q, Yang [3] proposed the relaxed CQ algorithm in finite dimensional spaces. The algorithm is
x^{k+1} = P_{C_k}(x^k − α_k A^T(I − P_{Q_k})Ax^k), (3)
where α_k ∈ (0, 2/‖A‖²), and C_k and Q_k are sequences of closed half-spaces containing C and Q, respectively.
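The point of the relaxation is that each C_k and Q_k is a half-space, whose projection has a closed form. A hedged Python sketch: here C is given as the level set of c(x) = ‖x‖² − 4 (a ball of radius 2, an illustrative choice), and Q is deliberately chosen so large that its half-space constraint stays inactive.

```python
import numpy as np

def proj_halfspace(y, val, g, anchor):
    """Project y onto the half-space {z : val + <g, z - anchor> <= 0} in closed form."""
    viol = val + g @ (y - anchor)
    return y if viol <= 0.0 else y - (viol / (g @ g)) * g

# Level-set data: C = {x : c(x) <= 0} with c(x) = ||x||^2 - 4, subgradient 2x;
# Q = {y : ||y||^2 - 100 <= 0} remains inactive throughout this toy run.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # fixed step in (0, 2/||A||^2)

x = np.array([3.0, 3.0])
for _ in range(40):
    q_val, q_grad = np.linalg.norm(A @ x) ** 2 - 100.0, 2.0 * (A @ x)
    # (I - P_{Q_k}) A x^k: zero whenever A x^k already lies in the half-space Q_k
    res = A @ x - proj_halfspace(A @ x, q_val, q_grad, A @ x)
    y = x - alpha * (A.T @ res)
    c_val, c_grad = np.linalg.norm(x) ** 2 - 4.0, 2.0 * x
    x = proj_halfspace(y, c_val, c_grad, x)   # project onto the half-space C_k
```

With c(x) = ‖x‖² − 4, the half-space update reduces to a Newton-type iteration on the radius, so the iterate converges quickly to the boundary of C.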
As for the MSSFP (1), Censor et al. [4] proposed the following algorithm:
x^{k+1} = P_Ω(x^k − α∇p(x^k)), (4)
where Ω is an auxiliary closed convex set, and p(x) is a proximity function measuring the distance from a point to all the sets C_i and Q_j,
p(x) = (1/2)∑_{i=1}^t λ_i‖x − P_{C_i}x‖² + (1/2)∑_{j=1}^r β_j‖Ax − P_{Q_j}Ax‖²,
where λ_i > 0, β_j > 0 for every i and j, ∑_{i=1}^t λ_i + ∑_{j=1}^r β_j = 1, and 0 < α < 2/L with L the Lipschitz constant of ∇p. The convergence of algorithm (4) is proved in finite dimensional spaces.
Later, He et al. [5] introduced a relaxed self-adaptive CQ algorithm,
x^{k+1} = τ_k µ + (1 − τ_k)(x^k − α_k ∇p_k(x^k)), (6)
where the sequence {τ_k} ⊂ (0, 1), µ ∈ H_1, and
p_k(x) = (1/2)∑_{i=1}^t λ_i‖(I − P_{C_i^k})x‖² + (1/2)∑_{j=1}^r β_j‖(I − P_{Q_j^k})Ax‖²,
where the closed convex sets C_i^k and Q_j^k are level sets of some convex functions containing C_i and Q_j, and the self-adaptive step size is α_k = ρ_k p_k(x^k)/‖∇p_k(x^k)‖² with 0 < ρ_k < 4. They proved that the sequence {x^k} generated by algorithm (6) converges in norm to P_S(µ), where S is the solution set of the MSSFP.
In order to improve the rate of convergence, many scholars have investigated the choice of the step size of the algorithms. Based on the CQ algorithm (2), Yang [6] proposed the step size α_k = ρ_k/‖∇f(x^k)‖, where {ρ_k} is a sequence of positive real numbers satisfying ∑_{k=0}^∞ ρ_k = ∞ and ∑_{k=0}^∞ ρ_k² < +∞, and f(x) = (1/2)‖(I − P_Q)Ax‖². Assuming that Q is bounded and A is a matrix with full column rank, Yang proved the convergence of the underlying algorithm in finite dimensional spaces. In 2012, López et al. [7] introduced another choice of the step size sequence {α_k} in algorithm (3), namely
α_k = ρ_k f_k(x^k)/‖∇f_k(x^k)‖²,
where 0 < ρ_k < 4 and f_k(x) = (1/2)‖(I − P_{Q_k})Ax‖², and they proved the weak convergence of the iteration sequence in Hilbert spaces. The advantage of this choice of the step size lies in the fact that neither prior information about the operator norm ‖A‖ nor any other conditions on Q and A are required. Recently, Gibali et al. [8] and Chen et al. [9] used step sizes determined by an Armijo-line search and proved the convergence of the resulting algorithms. For more information on the relaxed CQ algorithm and the selection of the step size, please refer to references [10][11][12].
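The self-adaptive rule of López et al. can be sketched as follows. The toy problem (a singleton Q = {b} and a box C, both hypothetical choices) only serves to illustrate that the step requires no estimate of ‖A‖.

```python
import numpy as np

def adaptive_cq(A, proj_C, proj_Q, x0, rho=2.0, n_iter=1000):
    """CQ iteration with the self-adaptive step a_k = rho * f_k(x^k)/||grad f_k(x^k)||^2,
    0 < rho < 4, where f(x) = (1/2)||(I - P_Q)Ax||^2; no matrix norm is needed."""
    x = x0.astype(float)
    for _ in range(n_iter):
        res = A @ x - proj_Q(A @ x)        # (I - P_Q)Ax
        grad = A.T @ res                    # gradient of f at x
        g2 = grad @ grad
        if g2 == 0.0:                       # Ax already in Q: f is minimized
            break
        x = proj_C(x - rho * (0.5 * (res @ res) / g2) * grad)
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([0.5, 1.0])                    # Q = {b}; the solution is x* = (0.5, 0.5)
x = adaptive_cq(A, lambda z: np.clip(z, 0.0, 1.0), lambda y: b, np.zeros(2))
```

With rho = 2 the step behaves like a steepest-descent-type step for the residual, and the iterates converge linearly on this quadratic toy problem.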
On the other hand, in order to make the algorithms converge faster, specific perturbations have been introduced into the iterative scheme, since the perturbations guide the iteration to a lower objective function value without losing overall convergence. So far, bounded perturbation resilience has been exploited in many problems.
Consider the usage of bounded perturbations for the non-smooth optimization problem min_{x∈H} φ(x) = f(x) + g(x), where f and g are proper lower semicontinuous convex functions in real Hilbert spaces, f is differentiable, g is not necessarily differentiable, and ∇f is L-Lipschitz continuous. One of the classic algorithms is the proximal gradient (PG) algorithm, based on which Guo et al. [13] proposed a PG algorithm with perturbations. Assume that (i) D is a bounded linear operator, together with conditions (ii)–(iv) on the step sizes and error terms; they asserted that the generated sequence {x^k} converges weakly to a solution. Later, Guo and Cui [14] proposed a modified PG algorithm for solving this problem, where {τ_k} ⊂ [0, 1] and h is a ρ-contraction with ρ ∈ (0, 1). They proved that the sequence {x^k} generated by algorithm (8) converges strongly to a solution x^*. In 2020, Pakkaranang et al. [15] considered the PG algorithm combined with an inertial technique and proved its strong convergence under suitable conditions. For the convex minimization problem min_{x∈Ω} f(x), where Ω is a nonempty closed convex subset of a finite dimensional space and the objective function f is convex, Jin et al. [16] presented a projected scaled gradient (PSG) algorithm with errors. Assume that (i) {D(x^k)}_{k=0}^∞ is a sequence of diagonal scaling matrices, and that (ii)–(iv) are the same as the conditions in algorithm (7); then the generated sequence {x^k} converges weakly to a solution.
In 2017, Xu [17] applied superiorization techniques to the relaxed PSG. In the resulting iterative scheme, τ_k is a sequence in [0, 1] and D(x^k) is a diagonal scaling matrix. He established weak convergence of the algorithm under appropriate conditions imposed on {τ_k} and {λ_k}. For the variational inequality problem (VIP for short) ⟨F(x^*), x − x^*⟩ ≥ 0, ∀x ∈ C, where F is a nonlinear operator, Dong et al. [18] considered the extragradient algorithm with perturbations, where α_k = γl^{m_k} with m_k the smallest non-negative integer satisfying the line-search condition. Assuming that F is monotone and L-Lipschitz continuous and that the error sequence is summable, the sequence {x^k} generated by the algorithm converges weakly to a solution of the VIP.
For the split variational inclusion problem, Duan and Zheng [19] in 2020 proposed an algorithm in which A is a bounded linear operator and B_1 and B_2 are maximal monotone operators. Assuming that lim_{k→∞} τ_k = 0, ∑_{k=0}^∞ τ_k = ∞, 0 < inf_k λ_k ≤ sup_k λ_k < 2/L with L = ‖A‖², and ∑_{k=0}^∞ ‖e(x^k)‖ < +∞, they proved that the sequence {x^k} converges strongly to a solution of the split variational inclusion problem, which is also the unique solution of some variational inequality problem. For the convex feasibility problem, Censor and Zaslavski [20] considered the perturbation resilience and convergence of the dynamic string-averaging projection method.
Adding an inertial term can improve the convergence rate, which is also a perturbation. Recently, for a common solution of the split minimization problem and the fixed point problem, Kaewyong and Sitthithakerngkiet [21] combined the proximal algorithm and a modified Mann's iterative method with the inertial extrapolation and improved related results. Shehu et al. [22] and Li et al. [23] added alternated inertial perturbation to the algorithms for solving the SFP and improved the convergence rate.
At present, the (multiple-sets) split feasibility problem is widely used in application fields such as CT tomography, image restoration, and image reconstruction. There is a large literature on iterative algorithms for solving the (multiple-sets) split feasibility problem. However, relatively few works study algorithms for the (multiple-sets) split feasibility problem with perturbations, especially with self-adaptive step size. In fact, the latter also enjoys bounded perturbation resilience. Motivated by [9,18], we focus on modified relaxed CQ algorithms to solve the MSSFP (1) in real Hilbert spaces and assert that the proposed algorithms are also bounded perturbation resilient.
The rest of the paper is arranged as follows. In Section 2, definitions and notions that will be useful for our analysis are presented. In Section 3, we present our algorithms and prove their weak convergence. In Section 4, we prove that the proposed algorithms have bounded perturbation resilience and construct inertial modifications of the algorithms. Finally, in Section 5, we present some numerical simulations to show the validity of the proposed algorithms.

Preliminaries
In this section, we first define some symbols and then review some definitions and basic results that will be used in this paper.
Throughout this paper, H denotes a real Hilbert space endowed with an inner product ⟨·, ·⟩ and the induced norm ‖·‖, and I is the identity operator on H. We denote by S the solution set of the MSSFP (1). Moreover, x^k → x (x^k ⇀ x) means that the sequence {x^k} converges strongly (weakly) to x. Finally, we denote by ω_w(x^k) the set of all weak cluster points of {x^k}.
An operator T : H → H is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ H; T is said to be firmly nonexpansive if ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ H. It is well known that T is firmly nonexpansive if and only if I − T is firmly nonexpansive.
Let C be a nonempty closed convex subset of H. The metric projection P_C from H onto C is defined as P_C(x) = argmin_{y∈C} ‖x − y‖, x ∈ H. The metric projection P_C is a firmly nonexpansive operator.

Definition 1 ([24]). A function f : H → R is said to be weakly lower semicontinuous at x̄ if x^k ⇀ x̄ implies f(x̄) ≤ lim inf_{k→∞} f(x^k).
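For a few common sets the metric projection has a closed form, which the later algorithms rely on. A short illustrative sketch (the sets chosen here are examples, not the paper's):

```python
import numpy as np

# Closed-form metric projections onto three common convex sets.
def proj_box(x, lo, hi):          # C = {z : lo <= z <= hi componentwise}
    return np.clip(x, lo, hi)

def proj_ball(x, r):              # C = {z : ||z|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(x, a, beta):   # C = {z : <a, z> <= beta}
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

# Firm nonexpansiveness ||Px - Py||^2 <= <Px - Py, x - y>, checked numerically.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
Px, Py = proj_ball(x, 1.0), proj_ball(y, 1.0)
assert np.linalg.norm(Px - Py) ** 2 <= (Px - Py) @ (x - y) + 1e-12
```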

Definition 2. If ϕ : H → R is a convex function, the subdifferential of ϕ at x is defined as ∂ϕ(x) = {ξ ∈ H : ϕ(y) ≥ ϕ(x) + ⟨ξ, y − x⟩, ∀y ∈ H}.

Lemma 1 ([24]). Let C be a nonempty closed and convex subset of H; then for any x, y ∈ H and z ∈ C, the following assertions hold:
(i) ⟨x − P_C x, z − P_C x⟩ ≤ 0;
(ii) ‖P_C x − P_C y‖² ≤ ⟨P_C x − P_C y, x − y⟩;
(iii) ‖P_C x − z‖² ≤ ‖x − z‖² − ‖P_C x − x‖²;
(iv) ‖(I − P_C)x − (I − P_C)y‖² ≤ ⟨(I − P_C)x − (I − P_C)y, x − y⟩.

Lemma 2. Assume that {a_k}_{k=0}^∞ is a sequence of nonnegative real numbers such that a_{k+1} ≤ (1 + σ_k)a_k + δ_k, where the nonnegative sequences {σ_k}_{k=0}^∞ and {δ_k}_{k=0}^∞ satisfy ∑_{k=0}^∞ σ_k < +∞ and ∑_{k=0}^∞ δ_k < +∞, respectively. Then lim_{k→∞} a_k exists.

Lemma 3 ([25]). Let S be a nonempty closed and convex subset of H and {x^k} be a sequence in H that satisfies the following properties:
(i) lim_{k→∞} ‖x^k − x‖ exists for each x ∈ S;
(ii) ω_w(x^k) ⊂ S.
Then {x^k} converges weakly to a point in S.

Definition 3. An algorithmic operator P is said to be bounded perturbation resilient if the iterations x^{k+1} = P(x^k) and x^{k+1} = P(x^k + λ_k ν^k) both converge, where {λ_k} is a sequence of nonnegative real numbers with ∑_{k=0}^∞ λ_k < +∞, and {ν^k} is a sequence in H that is bounded, i.e., there exists M ∈ R such that ‖ν^k‖ ≤ M for all k ≥ 0.
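Definition 3 can be illustrated on a trivial algorithmic operator. In the sketch below (an illustrative construction, not from the paper), P is the projection onto a line in R²; the unperturbed iteration and the iteration perturbed by summable λ_k = 0.5^k with bounded ν^k both converge, though possibly to different points of the line.

```python
import numpy as np

def P(x):
    """Metric projection onto the line C = {z in R^2 : z[0] = z[1]}."""
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

x = np.array([3.0, -1.0])
y = x.copy()
for k in range(60):
    x = P(x)                                  # basic algorithm x^{k+1} = P(x^k)
    nu = np.array([np.cos(k), np.sin(k)])     # bounded perturbation, ||nu^k|| = 1
    y = P(y + 0.5 ** k * nu)                  # perturbed algorithm, sum(0.5^k) < inf
# Both final iterates lie (numerically) on C, i.e., both iterations converge.
```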

Algorithms and Their Convergence
In this section, we introduce two algorithms for the MSSFP (1) and prove their weak convergence. First, assume that the following four assumptions hold.
(A1) The solution set S of the MSSFP (1) is nonempty.
(A2) The sets C_i and Q_j are given as level sets of convex functions, C_i = {x ∈ H_1 : c_i(x) ≤ 0} and Q_j = {y ∈ H_2 : q_j(y) ≤ 0}, where c_i : H_1 → R and q_j : H_2 → R are convex and weakly lower semicontinuous.
(A3) For any x ∈ H_1 and y ∈ H_2, at least one subgradient ξ_i ∈ ∂c_i(x) and η_j ∈ ∂q_j(y) can be calculated. The subdifferentials ∂c_i and ∂q_j are bounded on bounded sets.
(A4) The sequences of perturbations {e_i(x^k)} are summable, i.e., ∑_{k=0}^∞ ‖e_i(x^k)‖ < +∞.
Define two relaxed sets at the point x^k by
C_i^k = {x ∈ H_1 : c_i(x^k) + ⟨ξ_i^k, x − x^k⟩ ≤ 0}, ξ_i^k ∈ ∂c_i(x^k),
Q_j^k = {y ∈ H_2 : q_j(Ax^k) + ⟨η_j^k, y − Ax^k⟩ ≤ 0}, η_j^k ∈ ∂q_j(Ax^k),
and set f_k(x) = (1/2)∑_{j=1}^r β_j‖(I − P_{Q_j^k})Ax‖², where β_j > 0. Then it is easy to verify that the function f_k(x) is convex and differentiable with gradient ∇f_k(x) = ∑_{j=1}^r β_j A^*(I − P_{Q_j^k})Ax. We now present Algorithm 1 with Armijo-line search step size.
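The gradient formula for f_k can be verified numerically. The following sketch (with r = 2 ball-shaped sets, an illustrative choice) compares ∇f_k(x) = ∑_j β_j A^T(I − P_{Q_j})Ax against central finite differences:

```python
import numpy as np

# f(x) = (1/2) sum_j beta_j ||(I - P_{Q_j}) A x||^2 and its gradient
# grad f(x) = sum_j beta_j A^T (I - P_{Q_j}) A x, checked by finite differences.
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])  # maps R^2 -> R^3
betas = [0.3, 0.7]
radii = [0.5, 1.0]                                   # Q_j = balls of these radii

def proj_ball(y, r):
    n = np.linalg.norm(y)
    return y if n <= r else (r / n) * y

def f(x):
    return sum(0.5 * b * np.sum((A @ x - proj_ball(A @ x, r)) ** 2)
               for b, r in zip(betas, radii))

def grad_f(x):
    return sum(b * (A.T @ (A @ x - proj_ball(A @ x, r)))
               for b, r in zip(betas, radii))

x = np.array([1.5, -2.0])
g = grad_f(x)
eps = 1e-6                                           # central finite differences
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(2)])
```

Since the squared distance to a convex set is continuously differentiable, the analytic and numerical gradients agree to finite-difference accuracy.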

Lemma 4 ([6]). The Armijo-line search terminates after a finite number of steps. In addition, the resulting step sizes {α_k} are bounded away from zero.

The weak convergence of Algorithm 1 is established below.
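An Armijo-type backtracking search can be sketched as follows. This is a common variant of such line searches, stated under the assumption that ∇f is Lipschitz; the exact inequality used in Algorithm 1 may differ in its details.

```python
import numpy as np

def armijo_step(x, grad_f, gamma=1.0, l=0.5, mu=0.5, max_m=60):
    """Armijo-type backtracking: return a_k = gamma * l**m for the smallest
    m >= 0 with ||grad f(x) - grad f(x - a_k grad f(x))|| <= mu * ||grad f(x)||.
    If grad f is L-Lipschitz, the test holds once gamma * l**m <= mu / L,
    so the search terminates after finitely many steps (cf. Lemma 4)."""
    g = grad_f(x)
    gn = np.linalg.norm(g)
    for m in range(max_m + 1):
        a = gamma * l ** m
        if np.linalg.norm(g - grad_f(x - a * g)) <= mu * gn:
            return a
    return gamma * l ** max_m

# Quadratic example f(x) = (1/2) x^T M x with grad f(x) = M x and L = 10.
M = np.diag([1.0, 10.0])
a = armijo_step(np.array([1.0, 1.0]), lambda z: M @ z)
```

For this quadratic the condition reads a‖Mg‖ ≤ µ‖g‖, so the search backtracks until a drops below roughly µ/L, here returning a = 0.5⁵.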
Theorem 1. Let {x k } be the sequence generated by Algorithm 1, and the assumptions (A1)∼(A4) hold. Then {x k } converges weakly to a solution of the MSSFP (1).
Proof. Let x^* be a solution of the MSSFP. First, we prove that {x^k} is bounded. Following Lemma 1 (ii), we have From Lemma 1 (iii), we have that Since I − P_C is firmly nonexpansive, ∇f_k(x^*) = 0, and Lemma 4, we get that Based on the definition of x̄^k and Lemma 1 (i), we know that Note that (17), (23), and Lemma 1 (iii) yield that From assumption (A4), we know that lim_{k→∞} ‖e_i(x^k)‖ = 0, i = 1, 2; thus for every ε > 0 there exists K such that ‖e_i(x^k)‖ < ε for all k > K. Substituting (21), (22), and (25) into (20) yields Rearranging the above formula, we know that Since This, together with (27), shows that Using Lemma 2 and assumption (A4), we know the existence of lim_{k→∞} ‖x^k − x^*‖² and the boundedness of {x^k}_{k=0}^∞. From (29), it follows which means that We therefore have Thus, by taking k → ∞ in the above inequality. From (30), we also know Hence for every j = 1, 2, · · · , r, we have Since {x^k} is bounded, the set ω_w(x^k) is nonempty. Let x̂ ∈ ω_w(x^k); then there exists a subsequence {x^{k_n}} of {x^k} such that x^{k_n} ⇀ x̂. Next, we show that x̂ is a solution of the MSSFP (1), which will show that ω_w(x^k) ⊂ S. In fact, since x^{k_n+1} ∈ C^{k_n}_{[k_n]}, by the definition of C^{k_n}_{[k_n]} we have where Following the assumption (A3) on the boundedness of ∂c_i and (32), there exists M_1 such that From the weak lower semicontinuity of the convex function c_i, we deduce from (37) that c_i(x̂) ≤ lim inf_{s→∞} c_i(x^{k_{n_s}}) ≤ 0, i.e., x̂ ∈ C = ∩_{i=1}^t C_i. Noting the fact that I − P_{Q_j^{k_n}} is nonexpansive, together with (31), (34), and A being a bounded linear operator, we get that Since P_{Q_j^{k_n}}(Ax^{k_n}) ∈ Q_j^{k_n}, we have where η_j^{k_n} ∈ ∂q_j(Ax^{k_n}). From the boundedness assumption (A3), (38), and (39), there exists M_2 such that Then q_j(Ax̂) ≤ lim inf_{n→∞} q_j(Ax^{k_n}) ≤ 0; thus Ax̂ ∈ Q = ∩_{j=1}^r Q_j, and therefore x̂ ∈ S. Using Lemma 3, we conclude that the sequence {x^k} converges weakly to a solution of the MSSFP (1).
Now, we present Algorithm 2 in which the step size is given by the self-adaptive method and prove its weak convergence.

Algorithm 2 (The relaxed CQ algorithm with self-adaptive step size and perturbation)
Take arbitrarily the initial guess x^0 and calculate
x^{k+1} = P_{C^k_{[k]}}(x^k − α_k ∇f_k(x^k) + e_3(x^k)), [k] = k mod t,
where the self-adaptive step size is α_k = ρ_k f_k(x^k)/‖∇f_k(x^k)‖² with 0 < ρ_k < 4, and f_k, ∇f_k, and the sets C_i^k, Q_j^k were defined at the beginning of this section.
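The structure of Algorithm 2, cyclic projection over the C-sets with a self-adaptive step and a summable perturbation, can be sketched numerically. The two constraint sets, the ball Q, and the 0.5^k perturbation below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def proj_box(z):                            # C_1 = box [-1, 1]^2
    return np.clip(z, -1.0, 1.0)

def proj_ball12(z):                         # C_2 = ball of radius 1.2
    n = np.linalg.norm(z)
    return z if n <= 1.2 else 1.2 * z / n

def proj_Q(y):                              # Q = ball of radius 5
    n = np.linalg.norm(y)
    return y if n <= 5.0 else 5.0 * y / n

A = np.array([[1.0, 0.0], [0.0, 2.0]])
projs_C = [proj_box, proj_ball12]           # t = 2, cyclic control [k] = k mod t
rho = 2.0                                   # self-adaptive rule needs 0 < rho < 4

x = np.array([4.0, -4.0])
for k in range(300):
    res = A @ x - proj_Q(A @ x)             # (I - P_{Q^k}) A x^k
    grad = A.T @ res
    g2 = grad @ grad
    step = rho * (0.5 * (res @ res)) / g2 if g2 > 0 else 0.0
    e_k = (0.5 ** k) * np.ones_like(x)      # summable perturbation, as in (A4)
    x = projs_C[k % 2](x - step * grad + e_k)
```

The limit point is feasible for all three constraints even though each iteration projects onto only one C-set, which is the point of the cyclic control.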
The convergence result of Algorithm 2 is stated in the next theorem.

Theorem 2. Let {x^k} be the sequence generated by Algorithm 2, let the assumptions (A1)∼(A4) hold, and assume inf_k ρ_k(4 − ρ_k) > 0. Then {x^k} converges weakly to a solution of the MSSFP (1).

Proof. First, we prove that {x^k} is bounded. Let x^* ∈ S. Following Lemma 1 (ii), we have From Lemma 1 (iii), it follows Similarly to (22), it holds that From Lemma 1 (iv), one has Substituting (43)–(45) into (42), we get that Rearranging the above formula, we obtain that From assumption (A4), we know that lim_{k→∞} ‖e_3(x^k)‖ = 0, so we may assume without loss of generality that ‖e_3(x^k)‖ ∈ [0, 1/2) for all k ≥ 0; then So (47) can be reduced to Using Lemma 2, we get the existence of lim_{k→∞} ‖x^k − x^*‖² and the boundedness of {x^k}_{k=0}^∞. From (47), we know and then the fact that inf_k ρ_k(4 − ρ_k) > 0 asserts that This implies that ∇f_k(x^k) is bounded, and thus (50) yields f_k(x^k) → 0. Hence for every j = 1, 2, · · · , r, we have Let {x^{k_n}} be a subsequence of {x^k} such that x^{k_n} ⇀ x̂ ∈ ω_w(x^k), and let {k_{n_s}} be a subsequence of {k_n} such that [k_{n_s}] = i. Similarly to the proof of Theorem 1, we know that c_i(x̂) ≤ lim inf_{s→∞} c_i(x^{k_{n_s}}) ≤ 0, i.e., x̂ ∈ C = ∩_{i=1}^t C_i. Since (52) indicates that q_j(Ax̂) ≤ lim inf_{n→∞} q_j(Ax^{k_n}) ≤ 0, we have Ax̂ ∈ Q = ∩_{j=1}^r Q_j. Therefore x̂ ∈ S. Using Lemma 3, we conclude that the sequence {x^k} converges weakly to a solution of the MSSFP (1).

Bounded Perturbation Resilience of the Algorithms
In this subsection, we consider the bounded perturbation versions of Algorithms 1 and 2. Based on Definition 3, in Algorithm 1, let e_i(x^k) = 0, i = 1, 2. The original algorithm is where α_k is obtained by the Armijo-line search such that where µ ∈ (0, 1). The generated iteration sequence is weakly convergent, which was proved as a special case in Section 3. The algorithm with bounded perturbations of (53) is where [k] = k mod t and α_k = γl^{m_k}, with m_k the smallest non-negative integer such that The following theorem shows that algorithm (53) is bounded perturbation resilient.

Theorem 3. Assume that the assumptions (A1)∼(A3) hold, ∑_{k=0}^∞ λ_k < +∞, and the sequence {ν^k}_{k=0}^∞ is bounded. Then the sequence {x^k} generated by the perturbed algorithm (55) converges weakly to a solution of the MSSFP (1); i.e., algorithm (53) is bounded perturbation resilient.
Proof. Let x^* ∈ S. Since ∑_{k=0}^∞ λ_k < +∞ and the sequence {ν^k}_{k=0}^∞ is bounded, we have lim_{k→∞} ‖λ_k ν^k‖ = 0; so for every ε > 0 we may assume, without loss of generality, that ‖λ_k ν^k‖ < ε for all k. Replacing e_2(x^k) with λ_k ν^k in (20) and using Lemma 1 (iii) show Since I − P_C is firmly nonexpansive, ∇f_k(x^*) = 0, and Lemma 4, we get that Based on the definition of x̄^k and Lemma 1 (i), we know that Based on (55), the following formulas hold Substituting (60)–(62) into the fifth term of (58), we get Substituting (59) and (63) into (58), we get Since This, together with (64), shows that Using Lemma 2, we know the existence of lim_{k→∞} ‖x^k − x^*‖² and the boundedness of {x^k}_{k=0}^∞. From (64), it follows that Thus, we have lim_{k→∞} ‖x^k − x̄^k‖ = 0, lim_{k→∞} ‖x^{k+1} − x̄^k‖ = 0, and lim_{k→∞} ∑_{j=1}^r β_j‖(I − P_{Q_j^k})Ax^k‖² = 0. Hence, and for every j = 1, 2, · · · , r, Similarly to Theorem 1, we conclude that the sequence {x^k} converges weakly to a solution of the MSSFP (1).
Remark 1. When t = 1 and r = 1, the MSSFP reduces to the SFP; thus Theorems 1 and 3 guarantee that algorithm (53), with the Armijo-line search step size, is bounded perturbation resilient for the SFP.
where ∇f_k(x) = ∑_{j=1}^r β_j A^*(I − P_{Q_j^k})Ax, [k] = k mod r. The corresponding algorithm is also bounded perturbation resilient.
Then (71) can be rewritten in the form of Algorithm 2, where O(λ_k ν^k) denotes an infinitesimal of the same order as λ_k ν^k. From the expression of e_3(x^k), we obtain Since {λ_k ν^k} is summable, we know that {e_3(x^k)} is summable as well, i.e., ∑_{k=0}^∞ ‖e_3(x^k)‖ < +∞. Thus, we conclude that the sequence {x^k} converges weakly to a solution of the MSSFP (1); i.e., algorithm (70) is bounded perturbation resilient.

Remark 3.
When t = 1 and r = 1, the MSSFP reduces to the SFP; thus Theorems 2 and 4 guarantee that algorithm (70), with the self-adaptive step size, is bounded perturbation resilient for the SFP.

Construction of the Inertial Algorithms by Bounded Perturbation Resilience
In this subsection, we consider algorithms with inertial terms as special cases of Algorithms 1 and 2. In Algorithm 1, letting e_i(x^k) = θ_k^{(i)}(x^k − x^{k−1}), i = 1, 2, we obtain the iterative scheme (74), where the step size α_k is obtained by the Armijo-line search and

Theorem 5. Assume that the assumptions (A1)∼(A3) hold, and the sequences {λ_k^{(i)}}_{k=0}^∞ satisfy λ_k^{(i)} ≥ 0 and ∑_{k=0}^∞ λ_k^{(i)} < +∞, i = 1, 2. Then the sequence {x^k}_{k=0}^∞ generated by the iterative scheme (74) converges weakly to a solution of the MSSFP (1).
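The mechanism of treating inertia as a bounded perturbation can be sketched on a toy problem. Below, θ_k = min(θ, 1/(k²‖x^k − x^{k−1}‖)), a standard choice that makes λ_k = θ_k‖x^k − x^{k−1}‖ summable; the projected-gradient problem itself (a box-constrained least squares with known solution) is an illustrative stand-in, not the paper's scheme (74).

```python
import numpy as np

# Inertial term e(x^k) = theta_k (x^k - x^{k-1}) as a bounded perturbation:
# theta_k = min(0.3, 1/(k^2 ||x^k - x^{k-1}||)) keeps lambda_k summable.
# Toy problem: min (1/2)||Ax - b||^2 over the box [0, 1]^2, solution x* = (0.5, 0.5).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([0.5, 1.0])
alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # gradient step 1/L

x_prev = np.zeros(2)
x = np.zeros(2)
for k in range(1, 400):
    d = np.linalg.norm(x - x_prev)
    theta_k = min(0.3, 1.0 / (k ** 2 * d)) if d > 0 else 0.0
    y = x + theta_k * (x - x_prev)           # inertial (perturbed) point
    x_prev, x = x, np.clip(y - alpha * (A.T @ (A @ y - b)), 0.0, 1.0)
```

The iterates converge to x* = (0.5, 0.5); the inertial extrapolation decays fast enough that it never destroys convergence, which is exactly the bounded-perturbation viewpoint.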
Thus, we know that ‖ν^k‖ ≤ 1 and that {e_i(x^k)}_{k=0}^∞ satisfies assumption (A4). According to Theorem 1, we conclude that the sequence {x^k} converges weakly to a solution of the MSSFP (1).

Considering the algorithm with inertial bounded perturbation
where According to Theorem 3, it is easy to see that the sequence {x^k} converges weakly to a solution of the MSSFP (1). More related results can be found in reference [27].
Similarly, we obtain Theorem 6, which asserts that Algorithm 2 with the inertial perturbation is weakly convergent.

Theorem 6. Assume that (A1)∼(A3) hold, the scalar sequence {λ_k}_{k=0}^∞ satisfies λ_k ≥ 0 and ∑_{k=0}^∞ λ_k < +∞, and ρ_k satisfies inf_k ρ_k(4 − ρ_k) > 0. Then the sequence {x^k}_{k=0}^∞ generated by the following iterative scheme, where θ_k is the same as in (78) and α_k is the self-adaptive step size of Algorithm 2, converges weakly to a solution of the MSSFP (1).

LASSO Problem
Let us consider the following LASSO problem [28]:
min_x (1/2)‖Ax − b‖² subject to ‖x‖₁ ≤ ε,
where A ∈ R^{m×n}, m < n, b ∈ R^m, and ε > 0. The matrix A is generated from a standard normal distribution with mean zero and unit variance. The true sparse signal x^* is generated from a uniform distribution on the interval [−2, 2], with p randomly placed nonzero entries, while the rest are kept zero. The sample data are b = Ax^*. For the corresponding MSSFP, let r = t = 1, C = {x | ‖x‖₁ ≤ ε}, and Q = {b}. We report the final error between the reconstructed signal and the true signal, taking ‖x^k − x^*‖ < 10^{−4} as the stopping criterion, where x^* is the true signal. We compare the algorithms NP1, HP1, NP2, and HP2 with Yang's algorithm [3]. Let α_k = γl^{m_k} for all k ≥ 1, γ = 1, l = 1/2, µ = 1/2, θ_k = 1/4, and ρ_k = 0.1; the step size of Yang's algorithm [3] is α_k = 0.1 · 1/‖A‖². The results are reported in Table 1. Figure 1 shows the objective function value versus the iteration number when m = 240, n = 1024, p = 30.
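For the set C = {x : ‖x‖₁ ≤ ε} appearing here, the metric projection is computable by the standard sort-based method of Duchi et al.; a sketch:

```python
import numpy as np

def proj_l1_ball(v, eps):
    """Euclidean projection onto {x : ||x||_1 <= eps} via the sort-based
    soft-thresholding method (Duchi et al.)."""
    if np.abs(v).sum() <= eps:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, len(v) + 1) > css - eps)[0][-1]
    theta = (css[idx] - eps) / (idx + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

Combined with the trivial projection P_Q(y) = b onto the singleton Q = {b}, this makes every iteration of the compared algorithms explicitly computable for the LASSO experiment.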
From Table 1 and Figure 1, we know that the inertial perturbation can improve the convergence of the algorithms and that the algorithms with Armijo-line search or selfadaptive step size perform better than Yang's algorithm [3].
We also measure the restoration accuracy by the mean squared error, MSE = (1/k)‖x^* − x^k‖, where x^k is the estimate of the true signal x^*. Figure 2 shows a comparison of the accuracy of the recovered signals when m = 1440, n = 6144, p = 180. Given the same number of iterations, the recovered signals generated by the algorithms in this paper outperform the one generated by Yang's algorithm; NP1 needs more CPU time and attains lower accuracy; the algorithms with self-adaptive step size outperform those with the Armijo-line search step size in CPU time; and imposing the inertial perturbation accelerates the convergence rate and improves the accuracy of signal recovery.
We use the inertial perturbation to accelerate the convergence of the algorithms. For the convenience of comparison, the initial values of the two inertial algorithms are set to be the same, with x^0 = x^1. We use E_k = ‖x^{k+1} − x^k‖/‖x^k‖ to measure the error of the k-th iterate; if E_k < 10^{−5}, the iteration process stops. We compare our proposed iteration methods HP1 and HP2 with NP1, NP2, and Liu and Tang's Algorithm 2 in [29], with T_j = P_{Q_j^k} and α_k = 0.2 · 1/‖A‖²; this algorithm is referred to as LT alg. The convergence results and the CPU time of the five algorithms are shown in Table 2 and Figure 3. The errors are shown in Figure 4.
The results show that (80) (HP2) outperforms (77) (HP1) for certain initial values. The main reason may be that the self-adaptive step size is more efficient than the step size determined by the Armijo-line search. The comparison of the five algorithms and their convergence behavior shows that, in most cases, the convergence rate of an algorithm can be improved by adding an appropriate perturbation.
We consider using the inertial perturbation to accelerate the convergence of the algorithms. If E_k = ‖x^{k+1} − x^k‖/‖x^k‖ < 10^{−4}, the iteration process stops. Let x^0 = x^1. We arbitrarily choose three different initial points and consider the iterative steps of the four algorithms for different values of m, n, r, and t. See Table 3 for details.