Iterative Methods for Finding Solutions of a Class of Split Feasibility Problems over Fixed Point Sets in Hilbert Spaces

Abstract: We consider the split feasibility problem in Hilbert spaces when the hard constraint is the set of common solutions of zeros of the sum of monotone operators and fixed point sets of a finite family of nonexpansive mappings, while the soft constraint is the inverse image of a fixed point set of a nonexpansive mapping. We introduce iterative algorithms and prove weak and strong convergence theorems for the constructed sequences. Some numerical experiments for the introduced algorithms are also discussed.


Introduction
The split feasibility problem (SFP), which was introduced by Censor and Elfving [1], is the problem of finding a point x* ∈ R^n such that

x* ∈ C and Lx* ∈ Q, (1)

where C and Q are nonempty closed convex subsets of R^n, and L is an n × n matrix. The SFP has applications in many fields of science and technology, such as signal processing, image reconstruction, and intensity-modulated radiation therapy; for more information, the reader may see [1–4] and the references therein. In [1], Censor and Elfving proposed the following algorithm: for arbitrary x_1 ∈ R^n,

x_{n+1} = L^{-1}P_Q P_{L(C)}(Lx_n), ∀n ∈ N,

where L(C) = {y ∈ R^n : y = Lx for some x ∈ C}, and P_Q and P_{L(C)} are the metric projections onto Q and L(C), respectively. Observe that this algorithm requires the computation of matrix inverses, which may be expensive. To overcome this drawback, Byrne [2] suggested the following so-called CQ algorithm: for arbitrary x_1 ∈ R^n,

x_{n+1} = P_C(x_n + γL^T(P_Q − I)Lx_n), ∀n ∈ N, (2)

where γ ∈ (0, 2/‖L‖²) and L^T is the transpose of the matrix L. Notice that Algorithm (2) generates the sequence {x_n} by relying on the transpose instead of the inverse of the matrix L. Later on, in 2010, Xu [5] considered the SFP in the setting of infinite-dimensional Hilbert spaces. That is, for two real Hilbert spaces H1 and H2, nonempty closed convex subsets C and Q of H1 and H2, respectively, and a bounded linear operator L : H1 → H2: for a given x_1 ∈ H1, the sequence {x_n} is constructed by

x_{n+1} = P_C(x_n + γL*(P_Q − I)Lx_n), ∀n ∈ N, (3)

where γ ∈ (0, 2/‖L‖²) and L* is the adjoint operator of L. In [5], conditions guaranteeing the weak convergence of the sequence {x_n} to a solution of the SFP were given.
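To make the mechanics of the CQ algorithm concrete, here is a minimal numerical sketch of iteration (2); the sets C (the nonnegative orthant), Q (a box), and the matrix L are illustrative choices, not taken from the paper.

```python
import numpy as np

# CQ algorithm: x_{n+1} = P_C(x_n + gamma * L^T (P_Q - I) L x_n)
# Illustrative data: C = nonnegative orthant, Q = box [-1, 1]^2.

def proj_C(x):
    return np.maximum(x, 0.0)          # projection onto C = {x : x >= 0}

def proj_Q(y):
    return np.clip(y, -1.0, 1.0)       # projection onto Q = [-1, 1]^2

L = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 1.0 / np.linalg.norm(L, 2) ** 2    # gamma in (0, 2/||L||^2)

x = np.array([2.0, 3.0])                   # arbitrary starting point
for _ in range(200):
    x = proj_C(x + gamma * L.T @ (proj_Q(L @ x) - L @ x))
```

The iterates approach a point of C whose image under L lies in Q; no matrix inverse is ever formed, in contrast to the Censor–Elfving scheme.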
On the other hand, for a Hilbert space H, the variational inclusion problem (VIP), which was initially considered by Martinet [6], has the following formal form: find x* ∈ H such that

0 ∈ B(x*), (4)

where B : H → 2^H is a set-valued operator. A popular iteration method for finding a solution of problem (4) is the so-called proximal point algorithm: for a given x_1 ∈ H,

x_{n+1} = J^B_{λ_n}x_n, ∀n ∈ N,

where {λ_n} ⊂ (0, ∞) and J^B_{λ_n} = (I + λ_nB)^{-1} is the resolvent of the maximal monotone operator B corresponding to λ_n; see [7–10]. Subsequently, for set-valued mappings B1 : H1 → 2^{H1} and B2 : H2 → 2^{H2}, and a bounded linear operator L : H1 → H2, by using the concept of the SFP, Byrne et al. [11] proposed the following so-called split null point problem (SNPP): finding a point x* ∈ H1 such that

0 ∈ B1(x*) and 0 ∈ B2(Lx*). (5)

In [11], the following iterative algorithm was suggested: for λ > 0 and an arbitrary x_1 ∈ H1,

x_{n+1} = J^{B1}_λ(x_n + γL*(J^{B2}_λ − I)Lx_n), ∀n ∈ N,

where γ ∈ (0, 2/‖L‖²), and J^{B1}_λ and J^{B2}_λ are the resolvents of the maximal monotone operators B1 and B2, respectively. They showed that, under suitable control conditions, the sequence {x_n} converges weakly to a solution of problem (5).
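As a toy illustration of the proximal point algorithm, consider the maximal monotone operator B(x) = Mx for a monotone matrix M; the matrix, step size, and starting point below are illustrative assumptions. The resolvent is then a linear map and the iterates converge to the unique zero of B.

```python
import numpy as np

# Proximal point algorithm x_{n+1} = J^B_lambda(x_n) for B(x) = Mx.
M = np.array([[1.0, -1.0],
              [1.0,  1.0]])               # monotone: x^T M x = ||x||^2 >= 0
lam = 1.0
J = np.linalg.inv(np.eye(2) + lam * M)    # resolvent (I + lam*B)^{-1}

x = np.array([5.0, -3.0])
for _ in range(100):
    x = J @ x                             # proximal point iteration
```

The iterates converge to 0, the unique zero of B in this example.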
Due to the importance of the two concepts above, many authors have studied the approximation of common solutions of fixed point problems of nonlinear mappings and VIPs; see [12–14] for example. In 2015, Takahashi et al. [15] considered the problem of finding a point

x* ∈ H1 such that 0 ∈ B(x*) and Lx* ∈ F(T), (6)

where B : H1 → 2^{H1} is a maximal monotone operator, L : H1 → H2 is a bounded linear operator, and T : H2 → H2 is a nonexpansive mapping. They suggested the following iterative algorithm: for any x_1 ∈ H1,

x_{n+1} = J^B_{λ_n}(x_n − γ_nL*(I − T)Lx_n), ∀n ∈ N, (7)

where {λ_n} and {γ_n} satisfy some suitable control conditions, and J^B_{λ_n} is the resolvent of the maximal monotone operator B associated with λ_n. They discussed the weak convergence of Algorithm (7) to a point in the solution set of problem (6). Moreover, in [15], Takahashi et al. also considered the problem of finding a point

x* ∈ F(S) such that Lx* ∈ F(T), (8)

where S : H1 → H1 is a nonexpansive mapping. They suggested the following iterative algorithm: for any x_1 ∈ H1,

x_{n+1} = α_nx_n + (1 − α_n)S(x_n − λ_nL*(I − T)Lx_n), ∀n ∈ N, (9)

where {α_n} and {λ_n} satisfy some suitable control conditions, and provided the weak convergence of Algorithm (9) to a solution point of problem (8). Now, let us consider a generalized version of problem (4): finding a point x* ∈ H such that

0 ∈ A(x*) + B(x*), (10)

where A : H → H and B : H → 2^H. If A and B are monotone operators on H, then the elements of the solution set of problem (10) are called zeros of the sum of the monotone operators. It is well known that a number of real-world problems arise in the form of problem (10); see [16–19] for example and the references therein. By considering the VIP problem (10), Suwannaprapa et al. [20] extended problem (6) to the following problem setting: finding a point

x* ∈ H1 such that 0 ∈ A(x*) + B(x*) and Lx* ∈ F(T), (11)

where A : H1 → H1 is a monotone operator and B : H1 → 2^{H1} is a maximal monotone operator. They proposed the algorithm

x_{n+1} = J^B_{λ_n}(I − λ_nA)(x_n − γ_nL*(I − T)Lx_n), ∀n ∈ N, (12)

and showed a weak convergence theorem for Algorithm (12). Later, in 2018, Zhu et al.
[21] considered the problem of finding a point

x* ∈ F(S) such that Lx* ∈ F(T), (13)

where S : H1 → H1 and T : H2 → H2 are nonexpansive mappings, and proposed the following iterative algorithm: for any x_1 ∈ H1,

x_{n+1} = α_nf(x_n) + (1 − α_n)S(x_n − γ_nL*(I − T)Lx_n), ∀n ∈ N, (14)

where f : H1 → H1 is a contraction mapping. They showed that, under suitable control conditions, the generated sequence {x_n} converges strongly to a point z ∈ F, where F is the solution set of problem (13) and z = P_F f(z). In this paper, motivated by the above literature, we consider the problem of finding a point

x* ∈ ∩_{i=1}^N F(S_i) ∩ (A + B)^{-1}0 such that Lx* ∈ F(T), (15)

where S_i : H1 → H1, i = 1, . . . , N, and T : H2 → H2 are nonexpansive mappings. We denote by Γ the solution set of problem (15). We aim to suggest algorithms for finding a common solution of problem (15) and to provide suitable conditions guaranteeing that the sequence {x_n} constructed by each algorithm converges to a point in Γ.

Preliminaries
Throughout this paper, we denote by R and N the sets of real numbers and natural numbers, respectively. A real Hilbert space H is equipped with the inner product ⟨·, ·⟩ and the induced norm ‖·‖. For a sequence {x_n} in H, we denote the strong convergence and weak convergence of {x_n} to x ∈ H by x_n → x and x_n ⇀ x, respectively. Let T : H → H be a mapping. Then, T is said to be:
(i) K-Lipschitz if there is K ≥ 0 such that ‖Tx − Ty‖ ≤ K‖x − y‖ for all x, y ∈ H;
(ii) nonexpansive if T is 1-Lipschitz;
(iii) firmly nonexpansive if ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩ for all x, y ∈ H.
The number K is called a Lipschitz constant. Moreover, if K ∈ [0, 1), we say that T is a contraction.
(iv) averaged if there is α ∈ (0, 1) such that

T = (1 − α)I + αS, (16)

where I is the identity operator on H and S : H → H is a nonexpansive mapping. In the case of (16), we say that T is α-averaged.
(v) β-inverse strongly monotone (β-ism) if, for a positive real number β,

⟨x − y, Tx − Ty⟩ ≥ β‖Tx − Ty‖² for all x, y ∈ H.

For a mapping T : H → H, the notation F(T) stands for the set of fixed points of T, that is, F(T) = {x ∈ H : Tx = x}. It is well known that, if T is a nonexpansive mapping, then F(T) is closed and convex. Furthermore, it should be observed that firmly nonexpansive mappings are 1/2-averaged mappings.
Next, we collect the important properties that are needed in this work.

Lemma 1.
The following are true [16,22]:
(i) The composite of finitely many averaged mappings is averaged. In particular, if T_i is α_i-averaged for α_i ∈ (0, 1), i = 1, 2, then T_1T_2 is (α_1 + α_2 − α_1α_2)-averaged.
(ii) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then ∩_{i=1}^N F(T_i) = F(T_1 ⋯ T_N).
(iii) If A is β-ism and λ ∈ (0, β], then T := I − λA is firmly nonexpansive.
(iv) A mapping T : H → H is nonexpansive if and only if I − T is 1/2-ism.
Let B : H → 2^H be a set-valued mapping. We denote by D(B) the effective domain of B, that is, D(B) = {x ∈ H : Bx ≠ ∅}. A set-valued mapping B is called monotone if ⟨x − y, u − v⟩ ≥ 0 whenever u ∈ Bx and v ∈ By. A monotone mapping B is said to be maximal when its graph is not properly contained in the graph of any other monotone operator. For a maximal monotone operator B : H → 2^H and λ > 0, we define the resolvent J^B_λ by J^B_λ := (I + λB)^{-1} : H → D(B). It is well known that, under these settings, the resolvent J^B_λ is a single-valued and firmly nonexpansive mapping. Moreover, F(J^B_λ) = B^{-1}0 ≡ {x ∈ H : 0 ∈ Bx} for all λ > 0; see [15,23]. The following lemma is a useful fact for obtaining our main results.

Lemma 2 ([24]
). Let C be a nonempty closed and convex subset of a real Hilbert space H, and let A : C → H be an operator. If B : H → 2^H is a maximal monotone operator, then F(J^B_λ(I − λA)) = (A + B)^{-1}0 for all λ > 0.
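Lemma 2 underlies forward–backward splitting: the fixed points of J^B_λ(I − λA) are exactly the zeros of A + B. A minimal numerical sketch, assuming B := N_C for a box C (so J^B_λ = P_C) and A := ∇f with f(x) = ‖x − a‖²/2; the box C, the point a, and the step λ are illustrative.

```python
import numpy as np

a = np.array([2.0, -3.0])
lo, hi = np.zeros(2), np.ones(2)          # C = [0, 1]^2

def P_C(x):                               # resolvent of N_C is the projection
    return np.clip(x, lo, hi)

lam = 0.5
x = np.array([0.3, 0.7])
for _ in range(200):
    x = P_C(x - lam * (x - a))            # iterate J^B_lam(I - lam*A)
```

The limit is the minimizer of f over C, i.e. the projection of a onto C, as Lemma 2 predicts.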
We also use the following lemmas for proving the main results.

Lemma 3. Let H1 and H2 be Hilbert spaces, let T : H2 → H2 be a nonexpansive mapping, and let L : H1 → H2 be a bounded linear operator with L ≠ 0. Then, for each γ ∈ (0, 1/‖L‖²), the mapping I − γL*(I − T)L is γ‖L‖²-averaged.

Lemma 4 ([25]
). Let C be a closed convex subset of a Hilbert space H and T : C → C be a nonexpansive mapping. Then, U := I − T is demiclosed; that is, x_n ⇀ x_0 and Ux_n → y_0 imply Ux_0 = y_0.
The following fundamental results are needed in our proof.
For each x, y ∈ H and λ ∈ R, we know that

‖λx + (1 − λ)y‖² = λ‖x‖² + (1 − λ)‖y‖² − λ(1 − λ)‖x − y‖²;

see [23]. Let C be a nonempty closed and convex subset of a Hilbert space H. For each point x ∈ H, there exists a unique nearest point in C, denoted by P_Cx. That is,

‖x − P_Cx‖ ≤ ‖x − y‖ for all y ∈ C.

The operator P_C is called the metric projection of H onto C; see [26]. The following property of P_C is well known:

⟨x − P_Cx, y − P_Cx⟩ ≤ 0, ∀x ∈ H, y ∈ C.
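The variational characterization ⟨x − P_Cx, y − P_Cx⟩ ≤ 0 can be checked numerically; the sketch below uses the closed unit ball as C and an illustrative point x outside it.

```python
import numpy as np

def P_ball(x):                            # projection onto {y : ||y|| <= 1}
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([3.0, -4.0])
px = P_ball(x)                            # nearest point of the ball to x

rng = np.random.default_rng(1)
ys = [P_ball(rng.standard_normal(2)) for _ in range(100)]   # sample points of C
worst = max(np.dot(x - px, y - px) for y in ys)
```

Every sampled inner product is nonpositive, consistent with the characterization; equality is approached as y moves toward P_Cx.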
The following lemmas are important for proving the convergence theorems in this work.

Lemma 5 ([15]
). Let H be a Hilbert space and let {x_n} be a sequence in H. Assume that C is a nonempty closed convex subset of H satisfying the following properties:
(i) for every x* ∈ C, lim_{n→∞}‖x_n − x*‖ exists;
(ii) if a subsequence {x_{n_j}} of {x_n} converges weakly to x*, then x* ∈ C.
Then, there exists x_0 ∈ C such that x_n ⇀ x_0.
Lemma 6 ([9,27]). Assume that {a_n} is a sequence of nonnegative real numbers satisfying the following relation:

a_{n+1} ≤ (1 − α_n)a_n + α_nσ_n + δ_n, ∀n ∈ N,

where {α_n}, {σ_n} and {δ_n} are sequences of real numbers satisfying:
(i) {α_n} ⊂ [0, 1] and Σ_{n=1}^∞ α_n = ∞;
(ii) lim sup_{n→∞} σ_n ≤ 0;
(iii) δ_n ≥ 0 for all n ∈ N and Σ_{n=1}^∞ δ_n < ∞.
Then, a_n → 0 as n → ∞.
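A quick numerical illustration of Lemma 6, with illustrative sequences: take α_n = 1/(n + 1), so Σα_n = ∞; σ_n = 1/(n + 1), so lim sup σ_n ≤ 0; and δ_n = 1/(n + 1)², which is summable. Running the recursion with equality drives a_n to 0.

```python
# a_{n+1} = (1 - alpha_n) a_n + alpha_n * sigma_n + delta_n
a = 1.0
for n in range(1, 200_000):
    alpha = 1.0 / (n + 1)          # sum alpha_n = infinity
    sigma = 1.0 / (n + 1)          # lim sup sigma_n <= 0
    delta = 1.0 / (n + 1) ** 2     # sum delta_n < infinity
    a = (1 - alpha) * a + alpha * sigma + delta
```

After 2·10^5 steps, a_n has fallen from 1 to below 10^-3, in line with the conclusion a_n → 0.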

Main Results
In our main results, the following assumptions are imposed in order to establish the convergence of the introduced algorithms to a solution of problem (15):
(A1) A : H1 → H1 is a β-ism operator;
(A2) B : H1 → 2^{H1} is a maximal monotone operator;
(A3) L : H1 → H2 is a bounded linear operator;
(A4) T : H2 → H2 is a nonexpansive mapping;
(A5) S_i : H1 → H1, i = 1, . . . , N, are nonexpansive mappings.
Now, we provide the main algorithm and its convergence theorems.

Weak Convergence Theorems
Theorem 1. Let H1 and H2 be Hilbert spaces. For any x_1 ∈ H1, define

y_n = J^B_{λ_n}(I − λ_nA)(I − γ_nL*(I − T)L)x_n,
x_{n+1} = α_nx_n + (1 − α_n)U_Ny_n, ∀n ∈ N, (18)

where, for i = 1, . . . , N, U_i = (1 − κ_i)I + κ_iS_i with κ_i ∈ (0, 1), U_Ny_n abbreviates the composition U_NU_{N−1} ⋯ U_1y_n, and the sequences {λ_n}, {γ_n} and {α_n} satisfy the following conditions:
(i) 0 < a ≤ λ_n < β for some a ∈ R;
(ii) 0 < b_1 ≤ γ_n ≤ b_2 < 1/‖L‖² for some b_1, b_2 ∈ R;
(iii) 0 < lim inf_{n→∞} α_n ≤ lim sup_{n→∞} α_n < 1.
Suppose that the assumptions (A1)–(A5) hold and Γ ≠ ∅. Then, the sequence {x_n} converges weakly to an element in Γ.
Proof. Firstly, we set T_n := J^B_{λ_n}(I − λ_nA)(I − γ_nL*(I − T)L) for each n ∈ N. We note that J^B_{λ_n} is 1/2-averaged. Since A is β-ism, in view of Lemma 1(iii), for each λ_n ∈ (0, β), we have that I − λ_nA is firmly nonexpansive and hence 1/2-averaged. Subsequently, by Lemma 1(i), we get that J^B_{λ_n}(I − λ_nA) is 3/4-averaged. Moreover, by Lemma 3, for each γ_n ∈ (0, 1/‖L‖²), we know that I − γ_nL*(I − T)L is γ_n‖L‖²-averaged. Consequently, by Lemma 1(i), we get that T_n is δ_n-averaged, where δ_n = (3 + γ_n‖L‖²)/4 for each n ∈ N. Now, for each n ∈ N, we can write T_n = (1 − δ_n)I + δ_nV_n, where δ_n := (3 + γ_n‖L‖²)/4 and V_n is a nonexpansive mapping.
By condition (ii), we know that δ_n ∈ (3/4, 1). By the definition of x_{n+1} and the relation (19), we obtain ‖x_{n+1} − z‖ ≤ ‖x_n − z‖ for each n ∈ N. Therefore, for all z ∈ Γ, lim_{n→∞}‖x_n − z‖ exists. Now, from the relation (20), by the boundedness of {x_n} and the conditions (ii) and (iii), we get lim_{n→∞}‖x_n − V_nx_n‖ = 0.
In addition, from the relation (20), by the boundedness of {x_n} and the condition (iii), we get lim_{n→∞}‖x_n − U_Ny_n‖ = 0. Combining this with the fact (21), we obtain lim_{n→∞}‖x_n − y_n‖ = 0.
Next, since Lz ∈ F(T), we have (I − T)Lz = 0. Note that I − T is 1/2-ism. Using this fact together with the facts (22) and (23), the relation (26), z ∈ Γ, and the condition (ii), we get lim_{n→∞}‖(I − T)Lx_n‖ = 0. (28)
Next, we will prove the weak convergence of {x_n} by using Lemma 5. Recall that lim_{n→∞}‖x_n − z‖ exists for all z ∈ Γ. Thus, it remains to prove that, if a subsequence {x_{n_j}} of {x_n} converges weakly to a point x* ∈ H1, then x* ∈ Γ.
Assume that x_{n_j} ⇀ x*; we first show that x* ∈ L^{-1}F(T). Since L is a bounded linear operator, we have Lx_{n_j} ⇀ Lx*. Combining this with the fact (28) and the demiclosedness of I − T (Lemma 4), we obtain TLx* = Lx*. Hence, Lx* ∈ F(T), that is, x* ∈ L^{-1}F(T).
Next, we will show that x* ∈ (A + B)^{-1}0. Applying the fact (28) to the inequality (31), and then using the facts (23) and (32), we obtain the limit (33). Thus, from the inequality (30), by using the fact (33) together with x_{n_j} ⇀ x*, we obtain x* = J^B_{λ_n}(I − λ_nA)x*, and hence, in view of Lemma 2, x* ∈ (A + B)^{-1}0. Finally, we will show that x* ∈ ∩_{i=1}^N F(S_i). Consider ‖y_n − U_Ny_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − U_Ny_n‖ for each n ∈ N. By using the facts (22) and (23), we obtain lim_{n→∞}‖y_n − U_Ny_n‖ = 0. (35)
By using the fact (35) and y_{n_j} ⇀ x*, we obtain from Lemma 4 that x* ∈ F(U_N). Since U_i, i = 1, . . . , N, are averaged mappings, by Lemma 1(ii), we have F(U_NU_{N−1} ⋯ U_1) = ∩_{i=1}^N F(U_i) = ∩_{i=1}^N F(S_i). That is, x* ∈ Γ. Finally, by Lemma 5, we conclude that {x_n} converges weakly to a point in Γ. Hence, the proof is completed.
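As a finite-dimensional sanity check, the scheme of Theorem 1 can be run on toy data. Everything below is an illustrative assumption, not the paper's experiment: H1 = H2 = R², L = I, A(x) = x − a with a = (3, 3) (a 1-ism operator), B = N_C with C = [0, 1]² (so the resolvent is a clipping), T the projection onto the ball of radius 2, and N = 1 with S_1 the projection onto the diagonal {x_1 = x_2}; the solution set is then Γ = {(1, 1)}.

```python
import numpy as np

a = np.array([3.0, 3.0])

def J_B(x):                              # resolvent of B = N_C, C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def A(x):                                # gradient of ||x - a||^2 / 2 (1-ism)
    return x - a

def T(y):                                # projection onto the radius-2 ball
    n = np.linalg.norm(y)
    return y if n <= 2.0 else 2.0 * y / n

def U1(x, kappa=0.5):                    # U_1 = (1 - kappa)I + kappa S_1,
    m = np.full(2, x.mean())             # S_1 = projection onto {x_1 = x_2}
    return (1 - kappa) * x + kappa * m

lam, gamma, alpha = 0.9, 0.5, 0.5        # L = I here, so ||L||^2 = 1

x = np.array([-5.0, 4.0])
for _ in range(200):
    z = x - gamma * (x - T(x))           # (I - gamma L*(I - T)L)x with L = I
    y = J_B(z - lam * A(z))              # forward-backward step
    x = alpha * x + (1 - alpha) * U1(y)  # Mann step
```

The iterates converge to (1, 1), the unique point of Γ in this toy setting.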
Theorem 2. Let H1 and H2 be Hilbert spaces and let f : H1 → H1 be a contraction mapping. For any x_1 ∈ H1, define

y_n = J^B_λ(I − λA)(I − γL*(I − T)L)x_n,
x_{n+1} = α_nf(x_n) + (1 − α_n)U_Ny_n, ∀n ∈ N, (36)

where λ ∈ (0, β), γ ∈ (0, 1/‖L‖²), U_i = (1 − κ_i)I + κ_iS_i with κ_i ∈ (0, 1) for i = 1, . . . , N, and {α_n} ⊂ (0, 1) satisfies:
(i) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞;
(ii) Σ_{n=1}^∞ |α_{n+1} − α_n| < ∞.
Suppose that the assumptions (A1)–(A5) hold and Γ ≠ ∅. Then, the sequence {x_n} converges strongly to x̄ = P_Γf(x̄).

Proof. Firstly, we will show the boundedness of {x_n}. Let z ∈ Γ. Following the lines of the proof of the inequality (19), we obtain ‖y_n − z‖ ≤ ‖x_n − z‖ for each n ∈ N. Moreover, by the definition of x_{n+1} and U_Nz = z, we obtain that {‖x_n − z‖} is a bounded sequence. Consequently, {‖y_n − z‖} is also bounded. These imply that {x_n} and {y_n} are bounded. Next, we note that P_Γf(·) is a contraction mapping; let x̄ be its unique fixed point. Estimating ‖x_{n+1} − x̄‖² and using the convexity of ‖·‖², we arrive at the inequality (38) for each n ∈ N.
Next, we will show that lim_{n→∞}‖x_{n+1} − x_n‖ = 0. Consider ‖x_{n+1} − x_n‖ for each n ∈ N, where we set M := sup_n(‖f(x_n)‖ + ‖U_Ny_n‖) < ∞. For the second term of the inequality (39), by the definition of y_n and the nonexpansiveness of J^B_λ(I − λA)(I − γL*(I − T)L), it follows that ‖y_{n+1} − y_n‖ ≤ ‖x_{n+1} − x_n‖ for each n ∈ N. Substituting the inequality (40) into the inequality (39) and applying Lemma 6, we have lim_{n→∞}‖x_{n+1} − x_n‖ = 0. (41)
Furthermore, by the definition of x_{n+1} and the relation (19) in Theorem 1, using the fact (41), the condition (i), and δ ∈ (3/4, 1), we get lim_{n→∞}‖x_n − Vx_n‖ = 0. (43)
Subsequently, since ‖y_n − x_n‖ = δ‖x_n − Vx_n‖ for each n ∈ N, by the fact (43), we obtain lim_{n→∞}‖y_n − x_n‖ = 0. (45)
Moreover, by the same arguments as in the proof of Theorem 1, we also have lim_{n→∞}‖(I − T)Lx_n‖ = 0.
Next, since {x_n} is bounded in H1, there exists a subsequence {x_{n_j}} of {x_n} that converges weakly to some x* ∈ H1. We will show that x* ∈ Γ. As in the proof of Theorem 1, we know that x* ∈ L^{-1}F(T) and x* ∈ (A + B)^{-1}0. It remains to show that x* ∈ ∩_{i=1}^N F(S_i). By the condition (i), we obtain lim_{n→∞}‖x_{n+1} − U_Ny_n‖ = 0. (48)
Since ‖y_n − U_Ny_n‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n+1}‖ + ‖x_{n+1} − U_Ny_n‖ for each n ∈ N, by using the facts (41), (45) and (48), we have lim_{n→∞}‖y_n − U_Ny_n‖ = 0. (50)
By using the relation (50) and y_{n_j} ⇀ x*, we obtain from Lemma 4 that x* ∈ F(U_N). From the above results, we obtain that x* ∈ Γ. Finally, we will prove that {x_n} converges strongly to x̄ = P_Γf(x̄). We know that {x_n} is bounded and, from the relation (41), ‖x_{n+1} − x_n‖ → 0 as n → ∞. Without loss of generality, by passing to a subsequence if necessary, we may assume that a subsequence {x_{n_j+1}} of {x_{n+1}} converges weakly to x* ∈ H1. Thus, since x* ∈ Γ and x̄ = P_Γf(x̄), the characterization of the metric projection yields lim sup_{n→∞}⟨f(x̄) − x̄, x_n − x̄⟩ ≤ 0. From the inequality (38), by using Lemma 6, we can conclude that ‖x_n − x̄‖ → 0 as n → ∞; that is, x_n → x̄. Since ‖y_n − x_n‖ → 0 as n → ∞, we also conclude that y_n → x̄ as n → ∞. This completes the proof.
If A = 0 and L = I, then problem (15) reduces to a type of common fixed point problem for nonexpansive mappings; see [28]. That is, in this case, we consider the problem of finding a point x* ∈ ∩_{i=1}^N F(S_i) ∩ F(T) ∩ B^{-1}0. In addition, the following results can be obtained from the main Theorems 1 and 2, respectively.

Applications
In this section, we discuss the applications of problem (15) via Theorems 1 and 2, respectively.

Variational Inequality Problem
Let the normal cone to C at u ∈ C be defined by N_C(u) := {z ∈ H : ⟨z, y − u⟩ ≤ 0, ∀y ∈ C}. It is well known that N_C is a maximal monotone operator. By considering B := N_C : H → 2^H, we see that problem (10) reduces to the problem of finding a point x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0, ∀y ∈ C. (59)

Let VIP(C, A) denote the solution set of problem (59). Notice that, in this case, we have J^B_λ = P_C. With these settings, problem (15) reduces to the problem of finding a point x* ∈ ∩_{i=1}^N F(S_i) ∩ VIP(C, A) such that Lx* ∈ F(T). Subsequently, by applying Theorems 1 and 2, we obtain the following convergence theorems.
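Since J^{N_C}_λ = P_C, problem (59) can be attacked by the fixed-point iteration x ← P_C(x − λAx). A minimal sketch with illustrative data: A(x) = Mx + q for a monotone, nonsymmetric M, and C = [0, 1]².

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])              # symmetric part 2I, so A is monotone
q = np.array([-2.0, 1.0])

def P_C(x):                              # C = [0, 1]^2, J^{N_C}_lam = P_C
    return np.clip(x, 0.0, 1.0)

lam = 0.3
x = np.zeros(2)
for _ in range(500):
    x = P_C(x - lam * (M @ x + q))       # projected fixed-point step

# check <Ax*, y - x*> >= 0 at the corners of the box (enough for affine A)
corners = [np.array(c, dtype=float) for c in [(0, 0), (0, 1), (1, 0), (1, 1)]]
violation = min(np.dot(M @ x + q, c - x) for c in corners)
```

Because ⟨Ax*, y − x*⟩ is affine in y, checking the four corners of the box certifies the variational inequality over all of C.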

Convex Minimization Problem
We consider a convex function g : H → R which is Fréchet differentiable, and let C be a given nonempty closed convex subset of H. By setting A := ∇g (the gradient of g) and B := N_C, we see that the problem of finding a point x* ∈ (A + B)^{-1}0 is equivalent to the following problem: find a point x* ∈ C such that

⟨∇g(x*), y − x*⟩ ≥ 0, ∀y ∈ C. (63)

It is well known that (63) is equivalent to the minimization problem of finding x* ∈ C such that x* ∈ arg min_{x∈C} g(x).
Therefore, in this case, problem (15) reduces to the problem of finding a point x* ∈ ∩_{i=1}^N F(S_i) ∩ arg min_{x∈C} g(x) such that Lx* ∈ F(T); we denote its solution set by Γ_{g,S,T}. Then, by applying Theorems 1 and 2, we obtain the following results.

Theorem 5.
Let H1 and H2 be Hilbert spaces and C be a nonempty closed convex subset of H1. Let g : H1 → R be convex and Fréchet differentiable such that ∇g is ν-Lipschitz continuous. For any x_1 ∈ H1, define {x_n} by Algorithm (18) with A := ∇g and B := N_C (so that J^B_{λ_n} = P_C), where the sequences {λ_n}, {γ_n} and {α_n} satisfy the conditions of Theorem 1 with β = 1/ν, for some a, b_1, b_2 ∈ R, and U_i = (1 − κ_i)I + κ_iS_i for κ_i ∈ (0, 1), i = 1, . . . , N. Suppose that the assumptions (A3)–(A5) hold and Γ_{g,S,T} ≠ ∅. Then, the sequence {x_n} converges weakly to an element in Γ_{g,S,T}.
Proof. Notice that, by the convexity of g together with the ν-Lipschitz continuity of ∇g, the gradient ∇g is (1/ν)-ism (see [29]). Thus, the conclusion follows immediately from Theorem 1.

Split Common Fixed Point Problem
Consider a nonexpansive mapping V : H1 → H1. By Lemma 1(iv), we know that A := I − V is 1/2-ism, and Ax* = 0 if and only if x* ∈ F(V). Thus, in the case that B := 0 (the zero operator), we see that problem (11) reduces to the problem of finding a point

x* ∈ F(V) such that Lx* ∈ F(T). (67)

Problem (67) is called the split common fixed point problem (SCFP), and it has been studied by many authors; see [30–33] for example. Then, problem (15) reduces to the problem of finding a point x* ∈ ∩_{i=1}^N F(S_i) ∩ F(V) such that Lx* ∈ F(T); we denote its solution set by Γ_{V,S,T}. By applying Theorems 1 and 2, we can obtain the following results.
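Specializing Algorithm (18) with A := I − V and B := 0 (the resolvent of B = 0 being the identity) gives a concrete SCFP iteration. The data below are illustrative assumptions: V projects onto the box [0, 1]², L = (1 1), and T projects onto [1.5, ∞), so the iteration seeks x ∈ [0, 1]² with x_1 + x_2 ≥ 1.5 (here N = 1 and S_1 = I).

```python
import numpy as np

L = np.array([[1.0, 1.0]])               # L : R^2 -> R, ||L||^2 = 2

def V(x):                                # nonexpansive, F(V) = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def T(y):                                # nonexpansive on R, F(T) = [1.5, inf)
    return np.maximum(y, 1.5)

lam, gamma, alpha = 0.4, 0.2, 0.5        # lam in (0, 1/2), gamma in (0, 1/||L||^2)

x = np.zeros(2)
for _ in range(300):
    u = x - gamma * (L.T @ (L @ x - T(L @ x)))   # soft-constraint correction
    y = u - lam * (u - V(u))                     # (I - lam A)u with A = I - V, B = 0
    x = alpha * x + (1 - alpha) * y              # Mann step (N = 1, S_1 = I)
```

Starting from the origin, the iterates move along the diagonal and settle at (0.75, 0.75), a point of the box whose image under L lies in [1.5, ∞).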
Let H1 and H2 be Hilbert spaces and let V : H1 → H1 be a nonexpansive mapping. For any x_1 ∈ H1, define {x_n} by Algorithm (69), obtained from Algorithm (18) by setting A := I − V and B := 0, where the sequences {λ_n}, {γ_n} and {α_n} satisfy the conditions of Theorem 1 with β = 1/2, for some a, b_1, b_2 ∈ R, and U_i = (1 − κ_i)I + κ_iS_i for κ_i ∈ (0, 1), i = 1, . . . , N. Suppose that the assumptions (A3)–(A5) hold and Γ_{V,S,T} ≠ ∅. Then, the sequence {x_n} converges weakly to an element in Γ_{V,S,T}.
Proof. Algorithm (18) reduces to Algorithm (69) by setting A := I − V and B := 0. Remember that the zero operator is monotone and continuous; consequently, it is a maximal monotone operator. Moreover, its resolvent is nothing but the identity operator on H1. Using these facts, the result follows immediately from Theorem 1.
Proof. We obtain the above result by setting A := I − V and B := 0 in Algorithm (36) and applying Theorem 2.

Numerical Experiments
In this section, we consider numerical experiments illustrating Theorems 1 and 2.

Example 1. Let H1 = R² and H2 = R³, and let x := (1, −4)ᵀ be a fixed vector in H1. We consider the operators P_{C1} and P_{C2}, where C1 and C2 are nonempty closed convex subsets of H1. Now, we notice that F(P_{C1}) ∩ F(P_{C2}) = C1 ∩ C2.
Next, for each x := (x1, x2)ᵀ ∈ H1, we consider the norms ‖x‖1 = |x1| + |x2| and ‖x‖ = (x1² + x2²)^{1/2}. We consider a convex function g : H1 → R. Since g is convex, its subdifferential ∂g(·) is a maximal monotone operator. Moreover, for each λ > 0, the resolvent J^{∂g}_λ can be written explicitly in terms of the signum function sgn(·).
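For illustration, assume g is the ℓ1-norm, g(x) = ‖x‖1 (an assumption consistent with the appearance of sgn(·) in the resolvent); the resolvent J^{∂g}_λ is then the componentwise soft-thresholding operator.

```python
import numpy as np

def soft_threshold(x, lam):
    # resolvent of lam * dg for g = l1-norm, written via the signum function
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.2, 0.5, -2.0])
y = soft_threshold(x, 0.5)
```

Components with magnitude at most λ are set to zero; the rest are shrunk toward zero by λ.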
Furthermore, we consider a nonexpansive single-valued mapping P_{Q2} on H2, where Q2 is a nonempty closed convex subset of H2. We also notice that, since Q2 is nonempty, we have F(P_{Q2}) = Q2. Under the above settings, we discuss some numerical experiments for the constructed Algorithm (18). In fact, in this situation, Algorithm (18) converges to a point x* ∈ H1 solving problem (71). Notice that the solution set of problem (71) is {(x, (3x − 1)/4)ᵀ ∈ H1 : 1 ≤ x ≤ 2.5358}. We consider the experiments by using a stopping criterion on ‖x_{n+1} − x_n‖. We first consider Algorithm (18) with five cases of the stepsize parameters α_n and λ_n, with the initial vectors (0, 0)ᵀ, (1, −1)ᵀ, (−1, 1)ᵀ and (10, −10)ᵀ in H1. The results are shown in Table 1, with fixed values γ_n = 0.5/‖L‖² and κ1 = κ2 = 0.5. From Table 1, we see that, for each initial point, the case of stepsize parameters α_n = 0.1, λ_n = 0.9 shows a better convergence rate than the other cases.
Next, in Table 2, we set the stepsize parameters α_n = 0.1 and λ_n = 0.9 and consider three different cases of γ_n, namely γ_n = 0.1/‖L‖², 0.5/‖L‖², 0.9/‖L‖². From the results presented in Table 2, we may suggest that a larger stepsize parameter γ_n provides faster convergence. Table 1. Numerical experiments for the different stepsize parameters α_n and λ_n in Algorithm (18) with some initial points.

Example 2.
Let H1 = R² and H2 = R³. We consider the operators and the function of Example 1, namely P_{C1}, P_{Q1}, P_{Q2}, L and g. Furthermore, we consider the contraction mapping f := diag(1/10, 1/20). This means that, in this situation, we are considering problem (72). We notice that the solution set of problem (72) is {(x, (3x − 1)/4)ᵀ ∈ H1 : 1/3 ≤ x ≤ 2.5358}.
In Table 3, we compare the iteration numbers of Algorithm (14) and Algorithm (36) under different initial points. We use α_n = 0.1, λ_n = λ = 0.9 and γ_n = γ = 0.9/‖L‖² in both experiments. From Table 3, one may see that Algorithm (36) converges faster than Algorithm (14).

Conclusions
In this work, we focus on the problem of finding a common solution of a class of split feasibility problems and the common fixed points of nonexpansive mappings, namely problem (15), which is a generalization of problems (8) and (11). By providing suitable control conditions, in Theorem 1 we guarantee that the proposed algorithm converges weakly to a solution. Furthermore, a strong convergence theorem for the proposed algorithm (Theorem 2) is also established. Some important applications and numerical experiments of the considered problems are also discussed. We point out that the main motivation of the introduced algorithm is to avoid the computational complexity of the resolvent operator when dealing with problems that occur in the form of the sum of two maximal monotone operators.