A Self-Adaptive Algorithm for the Common Solution of the Split Minimization Problem and the Fixed Point Problem

In this paper, we construct a new self-adaptive step size algorithm that approximates a common solution of the split minimization problem and the fixed point problem of a nonexpansive mapping, combining the proximal algorithm with a modified Mann iteration and an inertial extrapolation. A strong convergence theorem is established in the framework of Hilbert spaces and proven under suitable conditions. Our result improves related results in the literature. Moreover, numerical experiments are provided to illustrate the consistency, accuracy, and performance of our algorithm compared with existing algorithms in the literature.


Introduction
Throughout this paper, we denote two nonempty closed convex subsets of two real Hilbert spaces H_1 and H_2 by C and Q, respectively. We denote the orthogonal projection onto the set C by P_C, and we let A* : H_2 → H_1 be the adjoint operator of A : H_1 → H_2, where A is a bounded linear operator.
Over the past decade, inverse problems have been widely studied since they stand at the core of image reconstruction and signal processing. The split feasibility problem (SFP) is one of the most popular inverse problems and has attracted the attention of many researchers. Censor and Elfving first considered the SFP in 1994 [1]. The SFP can mathematically be expressed as follows: find an element x with:

x ∈ C such that Ax ∈ Q. (1)

As mentioned above, the SFP (1) has received much attention from many researchers because it can be applied to various branches of science. Several practical algorithms for solving the SFP (1) have been presented in recent years; see [2][3][4][5][6][7]. It is important to note that the SFP (1) is equivalent to the following minimization formulation:

min_{x ∈ C} (1/2)‖(I − P_Q)Ax‖².

In 2002, Byrne [2] introduced a practical method called the CQ algorithm for solving the SFP, which is defined as follows:

x_{n+1} = P_C(x_n − τ_n A*(I − P_Q)Ax_n),

for all n ≥ 1, where x_1 ∈ H_1 is arbitrarily chosen and the step size satisfies τ_n ∈ (0, 2/‖A‖²). The advantage of the CQ algorithm is that there is no need to compute the inverse of a matrix because it only deals with an orthogonal projection. However, the CQ algorithm still needs to compute the operator norm of A.
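To make the CQ iteration concrete, here is a minimal NumPy sketch (not the authors' code) for the illustrative case where C and Q are Euclidean unit balls, so both projections have closed forms; the matrix A, the random seed, and the iteration count are assumptions chosen for this example:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Orthogonal projection onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def cq_algorithm(A, x0, n_iter=1000):
    """Byrne-style CQ iteration x_{n+1} = P_C(x_n - tau*A^T (I - P_Q) A x_n)
    with C and Q taken as unit balls and a fixed step tau in (0, 2/||A||^2)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # needs the operator norm of A
    x = np.asarray(x0, float).copy()
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - project_ball(Ax))   # gradient of (1/2)||(I - P_Q)Ax||^2
        x = project_ball(x - tau * grad)       # projected-gradient step onto C
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x = cq_algorithm(A, 5 * rng.standard_normal(10))
```

Note how the fixed step requires ‖A‖², computed here with `np.linalg.norm(A, 2)`; this is exactly the requirement that the self-adaptive step sizes below remove.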
A self-adaptive step size was then introduced by Yang [8] to avoid computing the operator norm of A. Yang designed the step size as follows:

τ_n = ρ_n / ‖∇f(x_n)‖, where f(x) = (1/2)‖(I − P_Q)Ax‖²,

and where {ρ_n} is a positive sequence satisfying ∑_{n=0}^∞ ρ_n = ∞ and ∑_{n=0}^∞ ρ_n² < ∞. Moreover, this self-adaptive step size requires two additional conditions: (1) Q must be a bounded subset; (2) A must be a full-column-rank matrix.
After that, López [9] modified the self-adaptive step size to remove the two additional conditions of Yang [8]. López then obtained a practical self-adaptive step size given by:

τ_n = ρ_n f(x_n) / ‖∇f(x_n)‖², where f(x) = (1/2)‖(I − P_Q)Ax‖²,

and where {ρ_n} is a positive sequence with 0 < ρ_n < 4.
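The practical appeal of this step size is that it needs only f(x_n) and ∇f(x_n), both available from matrix–vector products. The following NumPy sketch (an illustration with C and Q again taken as unit balls and ρ_n ≡ 2; these choices are assumptions, not the authors' code) plugs it into the projected-gradient iteration:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Orthogonal projection onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def sfp_self_adaptive(A, x0, rho=2.0, n_iter=1000):
    """SFP iteration with a Lopez-style self-adaptive step
    tau_n = rho_n * f(x_n)/||grad f(x_n)||^2, f(x) = (1/2)||(I - P_Q)Ax||^2,
    rho_n in (0, 4).  Only matrix-vector products are needed; no norm of A."""
    x = np.asarray(x0, float).copy()
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - project_ball(Ax)          # (I - P_Q)Ax
        grad = A.T @ r                     # grad f(x)
        g2 = float(np.dot(grad, grad))
        if g2 < 1e-16:                     # Ax is already (nearly) in Q
            break
        tau = rho * (0.5 * np.dot(r, r)) / g2
        x = project_ball(x - tau * grad)   # P_C step, C the unit ball
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))
x = sfp_self_adaptive(A, 5 * rng.standard_normal(10))
```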
The split minimization problem is presented below. Let f and g be two proper lower semicontinuous convex functions on H_1 and H_2, respectively. Moudafi and Thakur [10] considered the interesting problem called the proximal split feasibility problem, which is to find a minimizer of:

min_{x ∈ H_1} f(x) + g_λ(Ax), (6)

where λ > 0 and g_λ(Ax) is the following Moreau–Yosida approximation:

g_λ(y) = min_{u ∈ H_2} { g(u) + (1/2λ)‖u − y‖² }. (7)

Observe that, in the case C ∩ A^{-1}(Q) ≠ ∅, the minimization problem (6) reduces to the SFP (1) when we set f = δ_C and g = δ_Q, where δ_C and δ_Q are the indicator functions of the subsets C and Q, respectively. The reader can refer to [11] for details. By using the relation (7), we can then define the proximity operator of a function g of order λ in the following form:

prox_{λg}(y) = argmin_{u ∈ H_2} { g(u) + (1/2λ)‖u − y‖² }.

Moreover, the subdifferential of the function f at the point x is given by:

∂f(x) = { u ∈ H_1 : f(y) ≥ f(x) + ⟨u, y − x⟩ for all y ∈ H_1 }.

Recall that arg min f := { x ∈ H_1 : f(x) ≤ f(y) for all y ∈ H_1 }. In the case of (arg min f) ∩ (A^{-1} arg min g) ≠ ∅, Moudafi and Thakur [10] also considered a generalization of the minimization problem (6), named the split minimization problem (SMP), which consists of finding:

x ∈ arg min f such that Ax ∈ arg min g. (10)
Besides considering the SMP (10), they also introduced an algorithm to solve it, defined as follows:

x_{n+1} = prox_{λτ_n f}(x_n − τ_n A*(I − prox_{λg})Ax_n), (11)

where x_1 ∈ H_1 is arbitrarily chosen and τ_n is a self-adaptive step size. In addition, Moudafi and Thakur [10] proved a weak convergence result under suitable conditions imposed on the parameters. Recently, Abbas [12] constructed and introduced two iterative algorithms, (12) and (13), to solve the SMP (10), where x_1 is arbitrarily chosen, the step size is

τ_n = ρ_n (h(x_n) + l(x_n)) / (‖∇h(x_n)‖² + ‖∇l(x_n)‖²)

with ρ_n ∈ (0, 4), and the functions h, l, ∇h, and ∇l are defined in Section 3. Abbas [12] proved that the sequences generated by Algorithms (12) and (13) converge strongly to a solution.
Furthermore, fixed point problems of a nonexpansive mapping are still extensively studied by many researchers, since they are at the core of several real-world problems such as signal processing and image recovery. One of the famous algorithms for solving the fixed point problem of a nonexpansive mapping is the following:

x_{n+1} = (1 − t_n)x_n + t_n S x_n, (14)

where S : C → C is a nonexpansive mapping, the initial point x_1 is chosen in C, and {t_n} ⊂ [0, 1]. The algorithm (14) is known as Mann's algorithm [13]. It is well known that a Mann-type algorithm gives weak convergence provided the underlying space is smooth enough. There are many works in this direction; the reader can refer to [14][15][16] for details. Apart from studying the above problems, speeding up the convergence rate of algorithms has often been studied by many authors. Polyak [17] introduced a helpful technique to accelerate the rate of convergence called the heavy ball method. After that, many researchers modified the heavy ball method for use with their algorithms. Nesterov [18] modified the heavy ball method to improve the rate of convergence; this algorithm is known as the modified heavy ball method:

y_n = z_n + θ_n(z_n − z_{n−1}), z_{n+1} = y_n − τ_n ∇F(y_n),

where F is the objective function to be minimized, z_1, z_2 ∈ H_1 are arbitrarily chosen, τ_n > 0, 0 ≤ θ_n < 1 is an extrapolation factor, and the term θ_n(z_n − z_{n−1}) is called the inertia. For more details, the reader is directed to [19][20][21]. Based on the above ideas, the aims of this work were: (1) to construct a new self-adaptive step size algorithm, combining the proximal algorithm and the modified Mann method with inertial extrapolation, to solve the split minimization problem (SMP) (10) and the fixed point problem of a nonexpansive mapping; (2) to establish strong convergence results for the SMP and fixed point problems using the proposed algorithm; (3) to give numerical examples presenting our algorithm's consistency, accuracy, and performance compared with existing algorithms in the literature.
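The effect of the inertial term θ_n(z_n − z_{n−1}) can be isolated on a toy strongly convex quadratic. The sketch below (the objective, θ, and τ are illustrative assumptions, not taken from the paper) compares plain gradient descent (θ = 0) with the inertial iteration:

```python
import numpy as np

def inertial_gradient(grad, x0, tau, theta, n_iter=300):
    """Inertial (modified heavy ball) iteration:
       u_n = x_n + theta*(x_n - x_{n-1});  x_{n+1} = u_n - tau*grad(u_n)."""
    x_prev = np.asarray(x0, float).copy()
    x = x_prev.copy()
    for _ in range(n_iter):
        u = x + theta * (x - x_prev)       # inertial extrapolation
        x_prev, x = x, u - tau * grad(u)   # gradient step at the extrapolated point
    return x

# Illustrative strongly convex quadratic F(x) = (1/2) * sum_i d_i * x_i^2.
d = np.array([1.0, 10.0, 100.0])
grad = lambda x: d * x
x0 = np.ones(3)
tau = 1.0 / d.max()                        # step size 1/L
x_plain = inertial_gradient(grad, x0, tau, theta=0.0)   # no inertia
x_inert = inertial_gradient(grad, x0, tau, theta=0.9)   # with inertia
```

With these choices, the inertial run reaches a markedly smaller residual in the same number of iterations, which is exactly the acceleration the heavy ball and Nesterov methods aim for.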

Preliminaries
Some notations used throughout this paper are presented in this section. For a sequence {x_n} in a Hilbert space, x_n → x and x_n ⇀ x denote strong convergence and weak convergence to x, respectively.

Lemma 1. For every u and v in a real Hilbert space H and every κ ∈ [0, 1],

‖κu + (1 − κ)v‖² = κ‖u‖² + (1 − κ)‖v‖² − κ(1 − κ)‖u − v‖².
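Lemma 1 appears to be the standard convex-combination identity stated above; a quick numerical check of it (purely illustrative, with random test vectors):

```python
import numpy as np

# Numerical check of the identity
# ||k*u + (1-k)*v||^2 = k*||u||^2 + (1-k)*||v||^2 - k*(1-k)*||u-v||^2
rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
for k in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.norm(k * u + (1 - k) * v) ** 2
    rhs = (k * np.linalg.norm(u) ** 2 + (1 - k) * np.linalg.norm(v) ** 2
           - k * (1 - k) * np.linalg.norm(u - v) ** 2)
    assert abs(lhs - rhs) < 1e-9
```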

Proposition 1.
Let S : C → H_1 be a mapping with C ⊂ H_1, and let u and v be elements of C. The mapping S is called: (i) nonexpansive if ‖Su − Sv‖ ≤ ‖u − v‖; (ii) firmly nonexpansive if ‖Su − Sv‖² ≤ ⟨u − v, Su − Sv⟩. It is well known that the metric projection P_C of H_1 onto a nonempty closed convex subset C ⊆ H_1 is a nonexpansive mapping, and it satisfies ‖P_C u − P_C v‖² ≤ ⟨u − v, P_C u − P_C v⟩ for all u, v ∈ H_1. Moreover, P_C u is characterized by the following properties:

⟨u − P_C u, v − P_C u⟩ ≤ 0

and:

‖u − v‖² ≥ ‖u − P_C u‖² + ‖v − P_C u‖²

for all u ∈ H_1 and v ∈ C. We denote by Γ(H_2) the collection of proper convex lower semicontinuous functions on H_2.

Definition 1.
Ref. [22,23]: Let g ∈ Γ(H_2) and x ∈ H_2. Define the proximal operator of g by:

prox_g(x) = argmin_{u ∈ H_2} { g(u) + (1/2)‖u − x‖² }.

The proximal operator of g of order λ (λ > 0) is given by:

prox_{λg}(x) = argmin_{u ∈ H_2} { g(u) + (1/2λ)‖u − x‖² }.

Below are some valuable properties of proximal operators.

1.
If g = δ_Q, where δ_Q is the indicator function of Q, then prox_{λg} = P_Q for all λ > 0; 2.
x = prox g (x + y) if and only if y ∈ ∂g(x).
Let g ∈ Γ(H 2 ). In [26], it was shown that Fix(prox g ) = arg min H 2 g. Moreover, they showed that prox g and I − prox g are both firmly nonexpansive.
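For g = ‖·‖₂, whose prox is the block soft thresholding used later in the numerical section, these facts are easy to check numerically: 0 = argmin g is a fixed point of prox_g, and x = prox_g(x + y) holds when y ∈ ∂g(x) (here y = x/‖x‖ for x ≠ 0). A small illustrative sketch (the example vectors are assumptions):

```python
import numpy as np

def prox_norm2(x, lam=1.0):
    """Proximal operator of lam*||.||_2: shrink x toward 0 by lam
    (block soft thresholding)."""
    nrm = np.linalg.norm(x)
    return (1 - lam / nrm) * x if nrm > lam else np.zeros_like(x)

x = np.array([3.0, 4.0])             # ||x|| = 5
y = x / np.linalg.norm(x)            # y lies in the subdifferential of ||.||_2 at x
fixed = prox_norm2(np.zeros(2))      # Fix(prox_g) contains 0 = argmin ||.||_2
recovered = prox_norm2(x + y)        # property 2: prox_g(x + y) = x since y in dg(x)
shrunk = prox_norm2(x)               # a generic point is shrunk by lam toward 0
```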

Lemma 2.
Ref. [27]: Any sequence {υ_n} in a Hilbert space H_1 satisfies Opial's condition; that is, υ_n ⇀ v implies the following inequality:

lim inf_{n→∞} ‖υ_n − v‖ < lim inf_{n→∞} ‖υ_n − u‖ for every u ∈ H_1 with u ≠ v.

Lemma 3. Ref. [28]: Let {a_n} be a sequence of nonnegative real numbers satisfying the relation:

a_{n+1} ≤ (1 − γ_n)a_n + γ_n σ_n + ε_n, n ≥ 0,

where the following three conditions hold: (i) {γ_n} ⊂ [0, 1] with ∑_{n=0}^∞ γ_n = ∞; (ii) lim sup_{n→∞} σ_n ≤ 0; (iii) ε_n ≥ 0 with ∑_{n=0}^∞ ε_n < ∞. Then, lim_{n→∞} a_n = 0.

Results
This section proposes an iterative algorithm generating a sequence that converges strongly to a common solution of the split minimization problem (10) and the fixed point problem of a nonexpansive mapping. We establish the convergence theorem of the proposed algorithm in the following setting. Let S : H_1 → H_1 be a nonexpansive mapping. Denote the set of all solutions of the split minimization problem (10) by Γ and the set of all fixed points of the mapping S by Fix(S). Let Ω = Γ ∩ Fix(S), and suppose that Ω ≠ ∅. Define:

h(x) := (1/2)‖(I − prox_{λg})Ax‖² and l(x) := (1/2)‖(I − prox_{λf})x‖². (21)

Then, the gradients of the functions h and l are as follows:

∇h(u) = A*(I − prox_{λg})Au and ∇l(u) = (I − prox_{λf})u.

Lemma 5. Let h : H_1 → R and l : H_1 → R be the two functions defined in (21). Then, the gradients ∇h and ∇l are Lipschitz continuous.
Proof. By the notation ∇h(u) := A*(I − prox_{λg})Au and the firm nonexpansiveness of I − prox_{λg}, combining (22) with (23), we obtain that ∇h is (1/L)-inverse strongly monotone with L = ‖A‖². Consequently, ∇h is L-Lipschitz continuous. Similarly, one can prove that ∇l is also Lipschitz continuous. This completes the proof.
A valuable assumption for analyzing our main theorem is given as follows.

Algorithm 1 A split minimization algorithm.
Initialization: Let λ > 0 and x_0, x_1 ∈ H_1 be arbitrarily chosen. Choose positive sequences {ρ_n}, {δ_n}, and {α_n} satisfying Assumption 1. Set n = 1. Iterative step: Given the current iterate x_n, calculate the next iterate as follows: where: Stopping criterion: If x_{n+1} = y_n = u_n = x_n, stop.
Otherwise, put n = n + 1, and go to Iterative step.
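Because the displayed formulas of Algorithm 1 did not survive extraction, the following Python sketch is a speculative reconstruction based on the reductions stated in Remark 1 and on the quantities u_n, y_n, w_n appearing in the convergence proof: an inertial step u_n = x_n + θ_n(x_n − x_{n−1}), a proximal-gradient step with the self-adaptive τ_n, a Halpern-type anchoring w_n = δ_n v + (1 − δ_n)y_n, and a Mann-type step x_{n+1} = α_n x_n + (1 − α_n)Sw_n. It is instantiated for f = g = ‖·‖₂ and A = I; every parameter value is an illustrative assumption, not the authors' choice.

```python
import numpy as np

def prox_norm2(x, lam):
    """prox of lam*||.||_2 (block soft thresholding)."""
    nrm = np.linalg.norm(x)
    return (1 - lam / nrm) * x if nrm > lam else np.zeros_like(x)

def algorithm1_sketch(x0, x1, v, S, lam=1.0, rho=2.0, theta=0.5,
                      alpha=1.0 / 4000.0, n_iter=200):
    """Speculative reconstruction of Algorithm 1 for f = g = ||.||_2 and A = I,
    so prox_{lam f} = prox_{lam g} = block soft thresholding and
    grad h(u) = grad l(u) = (I - prox_{lam g})u."""
    x_prev, x = np.asarray(x0, float).copy(), np.asarray(x1, float).copy()
    for n in range(1, n_iter + 1):
        delta = 1.0 / (n + 1)                  # anchoring parameter, delta_n -> 0
        u = x + theta * (x - x_prev)           # inertial extrapolation u_n
        r = u - prox_norm2(u, lam)             # grad h(u) = grad l(u) here
        g2 = float(np.dot(r, r))               # ||grad h||^2 = ||grad l||^2
        h = 0.5 * g2                           # h(u) = l(u) in this instance
        tau = rho * (h + h) / (g2 + g2) if g2 > 1e-16 else 0.0
        y = prox_norm2(u - tau * r, lam * tau)          # proximal-gradient step y_n
        w = delta * v + (1 - delta) * y                 # Halpern-type anchoring w_n
        x_prev, x = x, alpha * x + (1 - alpha) * S(w)   # Mann-type step x_{n+1}
    return x

x = algorithm1_sketch(np.full(5, 10.0), np.full(5, 10.0),
                      v=np.zeros(5), S=lambda w: w)
```

In this instance Ω = {0}, and the iterates are driven to the anchor point P_Ω(v) = 0.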
Proof. Assume that z = P_Ω(v) ∈ Ω. By using the firm nonexpansiveness of (I − prox_{λg}) (see [30,31] for details), we find that: and: This implies: Next, we set w_n = δ_n v + (1 − δ_n)y_n. For fixed v ∈ C, we obtain that: and: Since S is nonexpansive, we find that: Thus, {x_n} is bounded, and this implies that {w_n}, {y_n}, and {u_n} are also bounded. Next, we observe that: Next, we claim that ‖x_{n+1} − x_n‖ → 0 and x_n → z. Consider: and: Moreover, consider: Therefore, we obtain: We next show that ‖x_n − z‖ → 0 by distinguishing two possible cases.

Case 1.
Assume that {‖x_n − z‖²} is non-increasing. Then there exists n_0 ∈ N such that ‖x_{n+1} − z‖² ≤ ‖x_n − z‖² for each n ≥ n_0, and hence the sequence {‖x_n − z‖} converges. Since lim_{n→∞} δ_n = 0, we obtain by using (36) that: We then obtain, by using lim inf α_n(1 − α_n) > 0, lim inf ρ_n(4 − ρ_n) > 0, and the boundedness of ‖∇h(u_n)‖² + ‖∇l(u_n)‖², that: and: Thus, we obtain by using (33) that: Moreover, it is easy to see that: By applying (36) and (39) in Formula (32), we find that: We next observe that: By using the fact that δ_n → 0, we find that: Moreover, we observe that: We then obtain by using (38), (41), and (44) that: Next, we observe that: ‖y_n − prox_{λτ_n f} y_n‖ ≤ ‖u_n − τ_n A*(I − prox_{λg})Au_n − y_n‖ ≤ ‖u_n − y_n‖ + |τ_n| ‖∇h(u_n)‖.
Thus, we obtain immediately that: lim_{n→∞} ‖y_n − prox_{λτ_n f} y_n‖ = 0.
We next show that lim sup_{n→∞} ⟨v − z, z − Sw_n⟩ ≥ 0, where z = P_Ω(v). To prove this, we can choose a subsequence {w_{n_i}} of {w_n} with: Since {w_{n_i}} is bounded, we can take a weakly convergent subsequence of {w_{n_i}}, still denoted by {w_{n_i}}, that converges weakly to some w ∈ H_1; that is, w_{n_i} ⇀ w. By using the fact that ‖Sw_n − w_n‖ → 0, we find that Sw_{n_i} ⇀ w. We next show that w ∈ Ω in two steps. First, we show that w is a fixed point of S. By contradiction, assume that w ∉ Fix(S). Since w_{n_i} ⇀ w and Sw ≠ w, Opial's condition yields: This is a contradiction, which implies w ∈ Fix(S). Second, we show that w ∈ Γ. Since w is a weak limit point of {w_n}, there is a subsequence {w_{n_i}} ⊆ {w_n} such that w_{n_i} ⇀ w. Since h is lower semicontinuous, we find that: This implies: Then, Aw = prox_{λg}Aw, and so, 0 ∈ ∂g(Aw). This means that Aw is a minimizer of the operator g.
Similarly, since l is lower semicontinuous, we find that: This implies: Thus, we obtain by using Lemma 3, Assumption (A4), Inequality (50), and the boundedness of {u n } that x n → z = P Ω (v).
Case 2. Suppose that {‖x_n − z‖²} is not eventually non-increasing. Set Λ_n = ‖x_n − z‖², and for each n ≥ n_0 (with n_0 large enough), define a mapping η : N → N as follows: Then, η(n) → ∞ as n tends to infinity, and for each n ≥ n_0: We then obtain by using Inequality (36) that: Since δ_{η(n)} → 0 as n → ∞ and (θ_{η(n)}/δ_{η(n)})‖x_{η(n)} − x_{η(n)−1}‖ → 0 as n tends to infinity, we observe that: and: lim sup Moreover, we obtain that: This implies that: Thus, we obtain: We now obtain by using Lemma 4 that: as n → ∞. This implies that x_n → z with z = P_Ω(v). The proof is complete.

Remark 1.
(a) If we put θ_n = 0, S ≡ I, α_n = 0, and δ_n = 0 for all n ≥ 2 in our proposed algorithm, then the algorithm (11) of Moudafi and Thakur is recovered. Moreover, we obtained a strong convergence theorem, while Moudafi and Thakur [10] only obtained a weak convergence theorem; (b) If we put A ≡ I, S ≡ I, f ≡ g ≡ 0, and δ_n = 0 for all n ≥ 2 in our proposed algorithm, then Algorithm (1.2) in [32] is recovered; (c) If we put θ_n = 0, A ≡ I, f ≡ g ≡ 0, and δ_n = 0 for all n ≥ 2 in our proposed algorithm, then the Mann iteration algorithm in [13] is recovered. Moreover, we obtained a strong convergence theorem, while Mann [13] only obtained a weak convergence theorem; (d) As a particular choice, the extrapolation factor θ_n in our proposed algorithm can be chosen with 0 ≤ θ_n ≤ θ̄_n for each n ≥ 3, where θ̄_n := min{θ, ϵ_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1} and θ̄_n := θ otherwise, for some θ ∈ [0, 1) and a positive sequence {ϵ_n} with ϵ_n/δ_n → 0 as n → ∞. This choice was recently derived in [33,34] as an inertial extrapolated step.

Applications and Numerical Results
This section provides numerical experiments that illustrate the performance of Algorithm 1 with and without the inertial term. Moreover, we present an experiment comparing our scheme with the Abbas algorithms [12]. All code was written in MATLAB 2017b and run on a MacBook Pro 2012 with a 2.5 GHz Intel Core i5.
First, we illustrate the performance of our proposed algorithm by comparing it with and without the inertial term in the following experiment. Example 1. Suppose C = Q = {x ∈ R^100 : ‖x‖₂ ≤ 1}, and let Ax = x. In problem (10), assume that f = δ_C and g = δ_Q, where δ is the indicator function. Then, the problem (10) becomes the SFP (1). We next took the parameters ρ_n = 2, α_n = 1/4000, and δ_n = 1/(n + 1). Thus, by Algorithm 1, we obtained that: x_{n+1} = (1/4000)x_n + (1 − 1/4000) We then provide a comparison of the convergence of Algorithm 1 with: and Algorithm 1 with θ_n = 0 in terms of the number of iterations, with the stopping criterion ‖A*(I − P_Q)Ax_n‖² + ‖(I − P_C)x_n‖² < 10⁻². The result of this experiment is reported in Figure 1.

Remark 2.
By observing the result of Example 1, we found that our proposed algorithm with inertia was faster and more efficient than our proposed algorithm without inertia (θ n = 0).
Second, we used the example in Abbas [12] to show the performance of our algorithm by comparing our proposed algorithm with Algorithms (12) and (13) in terms of CPU time in the following experiment. Example 2. Let H_1 = H_2 = R^N and g = ‖·‖₂ be the Euclidean norm on R^N. The metric projection onto the Euclidean unit ball B is defined by:

P_B(x) = x / max{‖x‖₂, 1}.

Thus, the proximal operator (the block soft thresholding) [24] is given by:

prox_{λg}(x) = (1 − λ/‖x‖₂)x if ‖x‖₂ ≥ λ, and prox_{λg}(x) = 0 otherwise.

The corresponding expressions of h, l, ∇h, and ∇l for all x ∈ R^N then follow (see [35]). Assume that Ax = x, and let us consider the split minimization problem (SMP) (10) as follows: z* ∈ arg min f and Az* ∈ arg min g. (64)
It is easy to check that x = (0, 0, ..., 0) is in the set of solutions of Problem (64). We now took δ_n = 1/(n + 1) and: for all n ≥ 1. We next took S ≡ I; then, we obtained by Algorithm 1 that: The iterative schemes (12) and (13) are: and: respectively, where γ_n was given in [12]. We now provide a comparison of the convergence of the iterative schemes (12) and (13) in Abbas's work [12] with our proposed algorithm with S ≡ I in terms of CPU time, where the initial points x_1, x_2 were randomly generated vectors in R^N. We tested this experiment with different choices of N as follows: N = 100, 500, 1000, 2000.
We used ‖x_{n+1} − x_n‖ / ‖x_2 − x_1‖ < 10⁻² as the stopping criterion. The result of this experiment is reported in Table 1.
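A self-contained Python rendition of this experiment's setting (f = g = ‖·‖₂, A = I, S ≡ I) can be sketched with a plain proximal step using the self-adaptive τ_n and the relative-change stopping criterion; this is an illustration under these assumptions, not the authors' MATLAB code:

```python
import numpy as np

def prox_norm2(x, lam):
    """prox of lam*||.||_2 (block soft thresholding)."""
    nrm = np.linalg.norm(x)
    return (1 - lam / nrm) * x if nrm > lam else np.zeros_like(x)

def smp_step(x, lam=1.0, rho=2.0):
    """One proximal step for the SMP with f = g = ||.||_2 and A = I:
    tau_n = rho*(h + l)/(||grad h||^2 + ||grad l||^2), then a prox-gradient update."""
    r = x - prox_norm2(x, lam)        # grad h(x) = grad l(x) in this instance
    g2 = float(np.dot(r, r))
    if g2 < 1e-16:                    # already at the solution 0
        return x
    h = 0.5 * g2                      # h(x) = l(x) here
    tau = rho * (h + h) / (g2 + g2)
    return prox_norm2(x - tau * r, lam * tau)

rng = np.random.default_rng(7)
x1 = 10 * rng.standard_normal(100)
x2 = smp_step(x1)
denom = np.linalg.norm(x2 - x1)       # ||x_2 - x_1|| for the stopping criterion
x_prev, x, iters = x1, x2, 1
while np.linalg.norm(x - x_prev) / denom >= 1e-2 and iters < 10_000:
    x_prev, x = x, smp_step(x)
    iters += 1
```

The loop drives the iterate to the solution 0 of Problem (64) and terminates once the relative change ‖x_{n+1} − x_n‖/‖x_2 − x_1‖ falls below 10⁻².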

Remark 3.
By observing the result of Example 2, we found that our proposed algorithm was more efficient than Abbas's Algorithms (12) and (13) regarding the CPU time.
The mean error over 20 random initial points is given by: Error(x_n) := (1/20) ∑_{i=1}^{20} … We used Error(x_n) < 10⁻² as the stopping criterion of this experiment. We then observed that the sequence {x_n} generated by Algorithm 1 converged to a solution when Error(x_n) converged to zero. Figure 2 shows the average error of our method for three groups of 20 initial points.

Remark 4.
By observing the result of Example 3, we found that the choice of the initial value did not affect the ability of our algorithm to reach a solution.

Conclusions
This paper discussed split minimization problems and fixed point problems of a nonexpansive mapping in the framework of Hilbert spaces. We introduced a new iterative scheme that combines the proximal algorithm and the modified Mann method with an inertial extrapolation and a self-adaptive step size. The main advantage of the proposed algorithm is that there is no need to compute the operator norm of A. Moreover, we illustrated the performance of our proposed algorithm by comparing it with other existing methods in terms of CPU time. The obtained results improve and extend various existing results in the literature.