Abstract
We consider the split feasibility problem in Hilbert spaces in which the hard constraint is the set of common elements of the zeros of the sum of monotone operators and the fixed point sets of a finite family of nonexpansive mappings, while the soft constraint is the inverse image of the fixed point set of a nonexpansive mapping. We introduce iterative algorithms and establish weak and strong convergence theorems for the constructed sequences. Some numerical experiments with the introduced algorithms are also discussed.
Keywords:
split feasibility problem; fixed point problem; inverse strongly monotone operator; maximal monotone operator; iterative methods
MSC:
26A18; 47H05; 49J53; 54H25
1. Introduction
The split feasibility problem (SFP), which was introduced by Censor and Elfving [1], is the problem of finding a point such that
where C and Q are nonempty closed convex subsets of , and L is an matrix. The SFP has applications in many fields of science and technology, such as signal processing, image reconstruction, and intensity-modulated radiation therapy; for more information, the reader may see [1,2,3,4] and the references therein. In [1], Censor and Elfving proposed the following algorithm: for arbitrary ,
where , and and are the metric projections onto Q and , respectively. Observe that this algorithm requires the computation of matrix inverses, which may be computationally expensive. To overcome this drawback, Byrne [2] suggested the following so-called CQ algorithm: for arbitrary ,
where , and is the transpose of the matrix L. Notice that Algorithm (2) generates a sequence relying on the transpose of the matrix L rather than its inverse. Later, in 2010, Xu [5] considered the SFP in the setting of infinite-dimensional Hilbert spaces: for two real Hilbert spaces and , nonempty closed convex subsets C and Q of and , respectively, and a bounded linear operator , given , the sequence is constructed by
where and is the adjoint operator of L. In [5], conditions guaranteeing weak convergence of the sequence to a solution of the SFP were given.
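For the reader's convenience, we recall the standard forms of the two schemes discussed above; the displays below follow the literature [2,5] and may differ in notation from the original statements.

```latex
% Byrne's CQ algorithm (finite-dimensional): for arbitrary x_0,
x_{n+1} = P_C\bigl(x_n - \gamma L^{T}(I - P_Q)Lx_n\bigr),
\qquad 0 < \gamma < \tfrac{2}{\rho(L^{T}L)},
% where \rho(L^{T}L) denotes the spectral radius of L^{T}L.

% Xu's variant in Hilbert spaces: for a given x_1 \in H_1,
x_{n+1} = P_C\bigl(x_n - \gamma_n L^{*}(I - P_Q)Lx_n\bigr),
\qquad 0 < \gamma_n < \tfrac{2}{\|L\|^{2}},
% where L^{*} is the adjoint of L.
```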
On the other hand, for a Hilbert space H, the variational inclusion problem (VIP), which was initially considered by Martinet [6], has the following formal form: find such that
where is a set-valued operator. A popular iterative method for finding a solution of problem (4) is the following so-called proximal point algorithm: for a given ,
where and is the resolvent of the maximal monotone operator B corresponding to ; see [7,8,9,10]. Subsequently, for set-valued mappings and , and a bounded linear operator , by using the concept of SFP, Byrne et al. [11] proposed the following so-called split null point problem (SNPP): finding a point such that
In [11], the following iterative algorithm was suggested: for and an arbitrary ,
where , and and are the resolvents of the maximal monotone operators and , respectively. They showed that, under suitable control conditions, the sequence converges weakly to a solution of problem (5).
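For reference, the proximal point algorithm and the scheme of Byrne et al. for the SNPP take the following standard forms in the literature [6,11]; notation may differ from the original displays.

```latex
% Proximal point algorithm for finding x with 0 \in B(x):
x_{n+1} = J^{B}_{\lambda_n} x_n, \qquad
J^{B}_{\lambda} := (I + \lambda B)^{-1}, \quad \lambda_n > 0.

% Byrne et al.'s iteration for the split null point problem:
x_{n+1} = J^{B_1}_{\lambda}\bigl(x_n + \gamma L^{*}(J^{B_2}_{\lambda} - I)Lx_n\bigr),
\qquad \lambda > 0,\ \ 0 < \gamma < \tfrac{2}{\|L\|^{2}}.
```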
Due to the importance of the two concepts above, many authors have studied the approximation of common solutions of fixed point problems for nonlinear mappings and VIPs; see [12,13,14] for example. In 2015, Takahashi et al. [15] considered the problem of finding a point
where is a maximal monotone operator, is a bounded linear operator, and is a nonexpansive mapping. They suggested the following iterative algorithm: for any ,
where and satisfy some suitable control conditions, and is the resolvent of a maximal monotone operator B associated with . They discussed the weak convergence theorem of Algorithm (7) for the solution set of problem (6). Moreover, in [15], Takahashi et al. also considered the problem of finding a point
where is a nonexpansive mapping. They suggested the following iterative algorithm: for any ,
where and satisfy some suitable control conditions and provided the weak convergence theorem of Algorithm (9) to a solution point of problem (8).
Now, let us consider a generalization of problem (4): finding a point such that
where , and . If A and B are monotone operators on H, then the elements of the solution set of problem (10) are called zeros of the sum of monotone operators. It is well known that a number of real-world problems arise in the form of problem (10); see [16,17,18,19] and the references therein. Considering the VIP (10), Suwannaprapa et al. [20] extended problem (6) to the following setting: finding a point
where is a monotone operator and is a maximal monotone operator. They proposed the following algorithm:
and showed the weak convergence theorem of Algorithm (12). Later, in 2018, Zhu et al. [21] considered the problem of finding a point and such that
where and are nonexpansive mappings, and proposed the following iterative algorithm: for any ,
where is a contraction mapping. They showed that, under suitable control conditions, the generated sequence converges strongly to a point , where .
In this paper, motivated by the above literature, we will consider a problem of finding a point such that
where , and are nonexpansive mappings. We will denote by Γ the solution set of problem (15). We aim to suggest algorithms for finding a common solution of problem (15) and to provide suitable conditions guaranteeing that the sequence constructed by each algorithm converges to a point in Γ.
2. Preliminaries
Throughout this paper, we denote by ℝ and ℕ the sets of real numbers and natural numbers, respectively. A real Hilbert space H will be equipped with an inner product and its induced norm . For a sequence in H, we denote the strong convergence and weak convergence of to x in H by and , respectively.
Let be a mapping. Then, T is said to be
- (i)
- Lipschitz if there exists such that . The number K is called the Lipschitz constant. Moreover, if , we say that T is a contraction.
- (ii)
- Nonexpansive if
- (iii)
- Firmly nonexpansive if
- (iv)
- Averaged if there is such that , where I is the identity operator on H and is a nonexpansive mapping. In the case (16), we say that T is -averaged.
- (v)
- β-inverse strongly monotone (β-ism) if, for a positive real number ,
For a mapping , the notation will stand for the set of fixed points of T, that is, the set of points satisfying . It is well known that, if T is a nonexpansive mapping, then is closed and convex. Furthermore, it should be observed that firmly nonexpansive mappings are 1/2-averaged mappings.
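As a numerical illustration of the last remark (not part of the paper's formal development), the following sketch checks that the metric projection onto the interval [0, 1], a prototypical firmly nonexpansive mapping, is 1/2-averaged: P = (1/2)I + (1/2)N with N = 2P − I nonexpansive. The interval and sample points are illustrative assumptions.

```python
# Sanity check: the projection P onto C = [0, 1] is firmly nonexpansive,
# hence 1/2-averaged, i.e. P = (1/2) I + (1/2) N with N = 2P - I nonexpansive.
import random

def proj(x):                      # metric projection of a real x onto C = [0, 1]
    return min(1.0, max(0.0, x))

def reflect(x):                   # N = 2P - I, the reflection through C
    return 2.0 * proj(x) - x

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d = proj(x) - proj(y)
    # firm nonexpansiveness: |Px - Py|^2 <= (Px - Py)(x - y)
    assert d * d <= d * (x - y) + 1e-12
    # nonexpansiveness of N = 2P - I
    assert abs(reflect(x) - reflect(y)) <= abs(x - y) + 1e-12
print("projection onto [0,1] is firmly nonexpansive and 1/2-averaged")
```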
Next, we collect the important properties that are needed in this work.
Lemma 1.
The following are true [16,22]:
- (i)
- The composite of finitely many averaged mappings is averaged. In particular, if is -averaged for , , then is α-averaged, where .
- (ii)
- If the mappings are averaged and have a common fixed point, then
- (iii)
- If A is β-ism and , then is firmly nonexpansive.
- (iv)
- A mapping is nonexpansive if and only if is -ism.
Let be a set-valued mapping. We denote by the effective domain of B, that is, . The set-valued mapping B is said to be monotone if
A monotone mapping B is said to be maximal when its graph is not properly contained in the graph of any other monotone operator. For a maximal monotone operator and , we define the resolvent by
It is well known that, under these settings, the resolvent is a single-valued and firmly nonexpansive mapping. Moreover, its fixed point set coincides with the set of zeros of B, for every ; see [15,23].
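For the reader's convenience, we recall the standard form of the resolvent and its basic properties as stated in the literature [15,23]; the notation below may differ from the original display.

```latex
J^{B}_{\lambda} := (I + \lambda B)^{-1}, \qquad \lambda > 0,
% which is single-valued and firmly nonexpansive on H, with
\operatorname{Fix}\bigl(J^{B}_{\lambda}\bigr)
  = B^{-1}0 := \{x \in H : 0 \in Bx\}
  \quad \text{for all } \lambda > 0.
```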
The following lemma is a useful fact for obtaining our main results.
Lemma 2
([24]). Let C be a nonempty closed and convex subset of a real Hilbert space H, and be an operator. If is a maximal monotone operator, then .
We also use the following lemmas for proving the main result.
Lemma 3
([15]). Let and be Hilbert spaces. Let be a nonzero bounded linear operator, and be a nonexpansive mapping. Then, for , is -averaged.
Lemma 4
([25]). Let C be a closed convex subset of a Hilbert space H and be a nonexpansive mapping. Then, is demiclosed, that is, and imply .
The following fundamental results are needed in our proof.
For each and , we know that
see [23].
Let C be a nonempty closed and convex subset of a Hilbert space H. For each point , there exists a unique nearest point in C, denoted by . That is,
The operator is called the metric projection of H onto C; see [26]. The following property of is well known:
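The well-known property referred to here is the variational characterization of the metric projection; we restate it from the literature [26] for reference.

```latex
z = P_C x \iff
\langle x - z,\; y - z\rangle \le 0 \quad \text{for all } y \in C,
% which in particular yields the firm nonexpansiveness of P_C:
\|P_C x - P_C y\|^{2} \le \langle P_C x - P_C y,\; x - y\rangle,
\qquad x, y \in H.
```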
The following lemmas are important for proving the convergence theorems in this work.
Lemma 5
([15]). Let H be a Hilbert space and let be a sequence in H. Assume that C is a nonempty closed convex subset of H satisfying the following properties:
- (i)
- for every , exists;
- (ii)
- if a subsequence converges weakly to , then .
Then, there exists such that .
Lemma 6
([9,27]). Assume that is a sequence of nonnegative real numbers satisfying the following relation:
where , and are sequences of real numbers satisfying
- (i)
- , ;
- (ii)
- ;
- (iii)
- , .
Then, as .
3. Main Results
In our main results, the following assumptions will be imposed in order to establish convergence theorems for the introduced algorithm to a solution of problem (15).
- (A1)
- is a -inverse strongly monotone operator;
- (A2)
- is a maximal monotone operator;
- (A3)
- is a bounded linear operator;
- (A4)
- is a nonexpansive mapping;
- (A5)
- , are nonexpansive mappings;
- (A6)
- is a contraction mapping with coefficient .
Now, we provide the main algorithm and its convergence theorems.
3.1. Weak Convergence Theorems
Theorem 1.
Let and be Hilbert spaces. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , , and for , . Suppose that the assumptions (A1)–(A5) hold and . Then, the sequence converges weakly to an element in Γ.
Proof.
Firstly, we set
and , for each . It follows that
for each . We note that is -averaged. Since A is -ism, in view of Lemma 1(iii), for each , we have that is -averaged. Subsequently, by Lemma 1(i), we get is -averaged. Moreover, by Lemma 3, for each , we know that is -averaged. Consequently, by Lemma 1(i), we get is -averaged, where , for each . Now, for each , we can write
where and is a nonexpansive mapping.
Next, we let . Then, and , imply and . Subsequently, we have
and hence . Consider,
for each . By condition (ii), we know that , so we have
for each . Thus,
for each .
Furthermore, since , we also have ; this implies , for each . It follows that . We denote for the operator . From above, we get .
By the definition of and the relation (19), we obtain
for each . Thus,
for each . Therefore, for all , exists.
Now, from the relation (20), we see that
for each . By the existence of , and the conditions (ii) and (iii), we get
In addition, from the relation (20), we obtain
for each . By the existence of and the condition (iii), we get
Next, consider
for each . Then, by using the fact (22), we have
Next, since , we have . Note that is -ism. Then, we have the following relation
for each . By above, we obtain
for each .
By the relation (26) and , we have
for each . Then,
for each . By the condition (ii), for each , we have
By using the fact (23), we get
Next, we will prove the weak convergence of by using Lemma 5. Recall that exists for all . Thus, it remains to prove that, if there is a subsequence of that converges weakly to a point , then .
Assume that ; we first show that . Consider
for each . Since L is a bounded linear operator, we have . Using this together with the fact (28) in the equality (29), we have . Hence, or .
Next, we will show that . Consider
for each . Observe that
for each . By using the fact (28) to the inequality (31), we obtain
Since
for each , by the facts (23) and (32), we have
Thus, from the inequality (30), by using the fact (33) together with , we obtain
Therefore, and hence .
Finally, we will show that . Consider
for each . By using the facts (22) and (23), we obtain
By using the fact (35) and , for each , we obtain from Lemma 4 that . Since , are averaged mappings, by Lemma 1(ii), we have . This implies that . From the above results, we have that . That is, . Finally, by Lemma 5, we can conclude that converges weakly to a point in . Hence, the proof is completed. □
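As an illustration of the type of iteration studied in this section, the following is a minimal numerical sketch of a generic forward-backward split-type step. The exact formula of Algorithm (18) involves data not reproduced here, so all operator choices below (A, B, L, S) are toy assumptions, not the paper's algorithm.

```python
# Toy assumptions: H1 = H2 = R, A(x) = x - 1 (1-inverse strongly monotone),
# B = the normal cone of [0, 2] (so its resolvent is the projection onto [0, 2]),
# L the identity, and S the identity (nonexpansive).  The unique point of
# problem-(15) type here is x* = 1, the zero of A + B inside [0, 2].
def A(x):
    return x - 1.0

def J_B(x):                    # resolvent of N_[0,2]: metric projection onto [0, 2]
    return min(2.0, max(0.0, x))

def S(y):                      # soft-constraint mapping on H2 (identity)
    return y

L = lambda x: x                # bounded linear operator, ||L|| = 1
Lstar = lambda y: y            # its adjoint

x, lam, gamma = 5.0, 0.5, 0.5
for n in range(200):
    u = x + gamma * Lstar(S(L(x)) - L(x))   # split / soft-constraint correction
    x_next = J_B(u - lam * A(u))            # forward-backward step for A + B
    if abs(x_next - x) < 1e-10:
        x = x_next
        break
    x = x_next

print(round(x, 6))             # converges to 1.0, the zero of A + B in [0, 2]
```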
3.2. Strong Convergence Theorems
Theorem 2.
Let and be Hilbert spaces. For any , define
where , , and for , . Suppose that the assumptions (A1)–(A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
Proof.
Firstly, we will show the boundedness of . Let ; following the lines of the proof of inequality (19), we can obtain
for each . Moreover, by the definition of and , we obtain
for each . This implies that is a bounded sequence. Consequently, is also a bounded sequence. These imply that and are bounded.
Next, we note that is a contraction mapping. We now let be the unique fixed point of . We consider
for each . This gives
for each .
Next, we will show that . Consider, for each ,
where . In the second term of the inequality (39), by the definition of and being a nonexpansive mapping, it follows that
for each . Substituting the inequality (40) into the inequality (39), we get
for each . Thus, by Lemma 6, we have
Furthermore, by the definition of and the relation (19) in Theorem 1, we get
for each . Then, we have that
for each . By using the fact (41), the condition (i) and , we get
Subsequently, we have
for each . Thus, by the fact (43), we obtain
Moreover, by the same proof in Theorem 1, we also have
Next, since is bounded on , there exists a subsequence of that converges weakly to . We will show that . Now, we know from Theorem 1 that and . It remains to show that . Consider, for each ,
Thus, by condition (i), we obtain
Since
for each , by using the facts (41), (45) and (48), we have
By using the relation (50) and , for each , we obtain from Lemma 4 that . From the above results, we obtain that .
Finally, we will prove that converges strongly to . Now, we know that is bounded and from the relation (41) we have , as . Without loss of generality, by passing to a subsequence if necessary, we may assume that a subsequence of converges weakly to . Thus, we obtain
From the inequality (38), by using Lemma 6, we can conclude that , as . Thus, , as . Since , as , we conclude that , as . This completes the proof. □
4. Some Deduced Results
If (the identity operator), we see that problem (15) reduces to problem (11). Thus, we have the following results.
Corollary 1.
Let and be Hilbert spaces. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , . Suppose that the assumptions (A1)–(A4) hold and . Then, the sequence converges weakly to an element in .
Corollary 2.
Let and be Hilbert spaces. For any , define
where and . Suppose that the assumptions (A1)–(A4) and (A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
If (the zero operator) and , then problem (15) is reduced to problem (8). Thus, we also have the following results.
Corollary 3.
Let and be Hilbert spaces. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , , and for , and S is a nonexpansive mapping. Suppose that the assumptions (A2)–(A4) hold and . Then, the sequence converges weakly to an element in .
Corollary 4.
Let and be Hilbert spaces. For any , define
where , , and for and S is a nonexpansive mapping. Suppose that the assumptions (A2)–(A4), (A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
If and , then problem (15) is reduced to a type of the common fixed points of nonexpansive mappings; see [28]. That is, in this case, we will consider a problem of finding a point
In addition, the following results can be obtained from the main Theorems 1 and 2, respectively.
Corollary 5.
Let H be a Hilbert space. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, b , and for , . Suppose that the assumptions (A2), (A4), and (A5) hold and . Then, the sequence converges weakly to an element in Ω.
Corollary 6.
Let and be Hilbert spaces. For any , define
where , , and for , . Suppose that the assumptions (A2), (A4)–(A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
5. Applications
In this section, we discuss the applications of problem (15) via Theorems 1 and 2, respectively.
5.1. Variational Inequality Problem
Let the normal cone to C at be defined by
It is well known that is a maximal monotone operator. Considering , we see that problem (10) reduces to the problem of finding a point such that
Let be denoted for the solution set of problem (59). Notice that, in this case, we have . By these settings, problem (15) is reduced to a problem of finding a point
Subsequently, by applying Theorems 1 and 2, we obtain the following convergence theorems.
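For the reader's convenience, we recall the standard normal-cone facts underlying this reduction; the notation follows the literature and may differ from the original display.

```latex
N_C(x) := \{z \in H : \langle z,\; y - x\rangle \le 0 \ \ \forall y \in C\},
\qquad x \in C,
% N_C is maximal monotone and its resolvent is the metric projection:
J^{N_C}_{\lambda} = P_C \quad \text{for all } \lambda > 0,
% so zeros of A + N_C are exactly the solutions of the variational inequality:
0 \in Ax^{*} + N_C(x^{*}) \iff
\langle Ax^{*},\; y - x^{*}\rangle \ge 0 \ \ \forall y \in C.
```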
Theorem 3.
Let and be Hilbert spaces and C be a nonempty closed convex subset of . For any , define
where the sequences , , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , , , and for , . Suppose that the assumptions (A1), (A3)–(A5) hold and . Then, the sequence converges weakly to an element in .
Theorem 4.
Let and be Hilbert spaces and C be a nonempty closed convex subset of . For any , define
where , , and for , . Suppose that the assumptions (A1), (A3)–(A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
5.2. Convex Minimization Problem
We consider a convex function , which is Fréchet differentiable. Let C be a given closed convex subset of H. By setting (the gradient of g) and , we see that the problem of finding a point is equivalent to the following problem: find a point such that
It is well known that equation (63) is equivalent to the minimization problem of finding such that
Therefore, in this case, problem (15) reduces to a problem of finding a point
Then, by applying Theorems 1 and 2, we obtain the following results.
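The chain of equivalences invoked in this reduction consists of standard convex-optimization facts, restated here for reference in notation that may differ from the original display.

```latex
0 \in \nabla g(x^{*}) + N_C(x^{*})
\iff \langle \nabla g(x^{*}),\; y - x^{*}\rangle \ge 0 \ \ \forall y \in C
\iff g(x^{*}) = \min_{x \in C} g(x)
\iff x^{*} = P_C\bigl(x^{*} - \lambda \nabla g(x^{*})\bigr)
  \quad \text{for any } \lambda > 0.
```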
Theorem 5.
Let and be Hilbert spaces and C be a nonempty closed convex subset of . Let be convex and Fréchet differentiable such that is ν-Lipschitz continuous. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , , , and for , . Suppose that the assumptions (A3)–(A5) hold and . Then, the sequence converges weakly to an element in .
Proof.
Notice that, by the convexity assumption on g together with the ν-Lipschitz continuity of , we have that is -ism (see [29]). Thus, the conclusion follows immediately from Theorem 1. □
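The Baillon–Haddad fact used in this proof can be observed numerically. The function below is an illustrative assumption, not the paper's g: g(x) = log(1 + eˣ), whose derivative (the sigmoid) is Lipschitz with ν = 1/4, so the gradient should be (1/ν)-ism with 1/ν = 4.

```python
# Numerical illustration: if g is convex with nu-Lipschitz gradient,
# then grad g is (1/nu)-inverse strongly monotone (Baillon-Haddad).
# Toy choice: g(x) = log(1 + e^x), g'(x) = sigmoid(x), nu = 1/4.
import math, random

def grad_g(x):
    return 1.0 / (1.0 + math.exp(-x))   # sigmoid = g'(x)

nu = 0.25
random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    d = grad_g(x) - grad_g(y)
    # (1/nu)-ism inequality: <g'(x) - g'(y), x - y> >= (1/nu) |g'(x) - g'(y)|^2
    assert d * (x - y) >= (1.0 / nu) * d * d - 1e-12
print("grad g is (1/nu)-inverse strongly monotone with 1/nu = 4")
```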
Theorem 6.
Let and be Hilbert spaces and C be a nonempty closed convex subset of . Let be convex and Fréchet differentiable such that is ν-Lipschitz continuous. For any , define
where , , and for , . Suppose that the assumptions (A3)–(A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
5.3. Split Common Fixed Point Problem
Consider a nonexpansive mapping . By Lemma 1(iv), we know that is a -ism, and if and only if . Thus, in the case that (the zero operator), we see that problem (11) is reduced to the problem of finding a point
Problem (67) is called the split common fixed point problem (SCFP), and it has been studied by many authors; see [30,31,32,33] for example. Then, problem (15) is reduced to a problem of finding a point
By applying Theorems 1 and 2, we can obtain the following results.
Theorem 7.
Let and be Hilbert spaces. Let be a nonexpansive mapping. For any , define
where the sequences , and satisfy the following conditions:
- (i)
- ,
- (ii)
- ,
- (iii)
- ,
for some a, , , , and for , . Suppose that the assumptions (A3)–(A5) hold and . Then, the sequence converges weakly to an element in .
Proof.
Observe that Algorithm (18) reduces to Algorithm (69) by setting and . Recall that the zero operator is monotone and continuous; consequently, it is a maximal monotone operator. Moreover, its resolvent is nothing but the identity operator on . Using these facts, the result follows immediately. □
Theorem 8.
Let and be Hilbert spaces. Let be a nonexpansive mapping. For any , define
where , , and for , . Suppose that the assumptions (A3)–(A6) hold, , and the sequence satisfies the following conditions:
- (i)
- ;
- (ii)
- and .
Then, and both converge strongly to , where .
Proof.
We obtain the above result by setting and into Algorithm (36). □
6. Numerical Experiments
In this section, we will consider the numerical experiments of Theorems 1 and 2.
Example 1.
Let and be equipped with the Euclidean norm. Let and be two fixed vectors in . We consider the operators and , where and are the following nonempty convex subsets of :
Now, we notice that .
Next, for each , we will consider the following two norms:
For a function , defined by
We know that g is a convex function and its subdifferential operator is
Furthermore, since g is a convex function, we know that is a maximal monotone operator. Moreover, for each , we have
where stands for the signum function.
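The componentwise signum formula referred to here is, in the standard ℓ1 case, the soft-thresholding operator, i.e. the resolvent of the subdifferential of the ℓ1-norm. The exact g of this example is not fully reproduced here, so the following sketch shows the standard ℓ1 case only; the test vector and threshold are illustrative assumptions.

```python
# Resolvent of B = subdifferential of lam*||.||_1 acts componentwise as
# soft-thresholding, written through the signum function:
#   (J_lam^B x)_i = sign(x_i) * max(|x_i| - lam, 0).
def sign(t):
    return (t > 0) - (t < 0)       # signum: -1, 0, or 1

def soft_threshold(x, lam):
    return [sign(t) * max(abs(t) - lam, 0.0) for t in x]

v = soft_threshold([3.0, -0.5, 1.5, 0.0], 1.0)
print(v)   # components with |x_i| <= lam are set to 0; the rest shrink by lam
```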
On the other hand, we let and be other fixed vectors. We consider 1-ism operators , where is the following convex subset of :
Furthermore, we consider a single-valued nonexpansive mapping on , , where are the following convex subsets of :
We also notice that, since is a nonempty set, we have .
Now, let us consider a matrix . We can check that with .
Under the above settings, we discuss some numerical experiments with the constructed Algorithm (18). In fact, in this situation, Algorithm (18) converges to a point such that
Notice that the solution set of problem (71) is . In the experiments, we use the stopping criterion .
We first consider Algorithm (18) with five cases of the stepsize parameters and , with the initial vectors , , and in . The results are shown in Table 1, with fixed values of and . From Table 1, we see that, for each initial point, the case of stepsize parameters , yields a better convergence rate than the other cases.
Table 1.
Numerical experiments for the different stepsize parameters of and to Algorithm (18) with some initial points.
Next, in Table 2, we set the stepsize parameters and consider three different cases of , namely . The results presented in Table 2 suggest that a larger stepsize parameter provides faster convergence.
Table 2.
Influence of the stepsize parameter of Algorithm (18) for different initial points.
Example 2.
Let and . We consider the operators and the function from Example 1, namely , , , L and g. Furthermore, we consider a contraction mapping .
This means that, in this situation, we are considering the problem
We notice that the solution set of problem (72) is .
7. Conclusions
In this work, we focus on the problem of finding a common solution of a class of split feasibility problems and the common fixed points of nonexpansive mappings, namely problem (15), which generalizes problems (8) and (11). By imposing suitable control conditions, Theorem 1 guarantees that the proposed algorithm converges weakly to a solution. Furthermore, a strong convergence theorem for the proposed algorithm (Theorem 2) is also established. Some important applications and numerical experiments for the considered problems are discussed. We point out that the main motivation of the introduced algorithm is to avoid the computational cost of evaluating the resolvent operator when dealing with problems that arise as the sum of two maximal monotone operators.
Author Contributions
Conceptualization, S.S., N.P. and M.S.; methodology, M.S.; writing—original draft preparation, M.S.; writing—review and editing, S.S. and N.P.
Funding
This research received no external funding.
Acknowledgments
This research was supported by Chiang Mai University, Chiang Mai, Thailand.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
- Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
- Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 17. [Google Scholar] [CrossRef]
- Martinet, B. Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. D’Informatique Rech. OpéRationnelle 1970, 3, 154–158. [Google Scholar]
- Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
- Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3, 791–808. [Google Scholar]
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
- Yao, Y.; Noor, M.A. On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 2008, 217, 46–55. [Google Scholar] [CrossRef]
- Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775. [Google Scholar]
- Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces, Lecture Notes in Mathematic 2057; Springer: Heidelberg, Germany, 2012; pp. 154–196. [Google Scholar]
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. Siam J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
- Zhang, L.; Hao, Y. Fixed point methods for solving solutions of a generalized equilibrium problem. J. Nonlinear Sci. Appl. 2016, 9, 149–159. [Google Scholar] [CrossRef]
- Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set Valued Var. Anal. 2015, 23, 205–221. [Google Scholar] [CrossRef]
- Boikanyo, O.A. The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, 2016, 10. [Google Scholar] [CrossRef]
- Moudafi, A.; Thera, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997, 94, 425–448. [Google Scholar] [CrossRef]
- Qin, X.; Cho, S.Y.; Wang, L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory A 2014, 2014, 10. [Google Scholar] [CrossRef]
- Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. Siam J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
- Suwannaprapa, M.; Petrot, N.; Suantai, S. Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory A 2017, 2017, 17. [Google Scholar] [CrossRef]
- Zhu, J.; Tang, J.; Chang, S. Strong convergence theorems for a class of split feasibility problems and fixed point problem in Hilbert spaces. J. Inequal. Appl. 2018, 2018, 15. [Google Scholar] [CrossRef]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009; pp. 82–163. [Google Scholar]
- Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8, 471–489. [Google Scholar]
- Takahashi, W. Nonlinear Functional Analysis: Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000; pp. 55–92. [Google Scholar]
- Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428. [Google Scholar] [CrossRef]
- Liu, L.S. Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194, 114–125. [Google Scholar] [CrossRef]
- Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479. [Google Scholar] [CrossRef]
- Baillon, J.B.; Haddad, G. Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26, 137–150. [Google Scholar] [CrossRef]
- Cui, H.; Wang, F. Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory A 2014, 2014, 8. [Google Scholar] [CrossRef]
- Moudafi, A. A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. Theor. 2011, 74, 4083–4087. [Google Scholar] [CrossRef]
- Shimizu, T.; Takahashi, W. Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 1997, 211, 71–83. [Google Scholar] [CrossRef]
- Zhao, J.; He, S. Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, 2012, 12. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).