Abstract
We introduce an iterative algorithm which converges strongly to a common element of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. Our iterative method is quite general and includes a large number of iterative methods considered in the recent literature as special cases. In particular, we apply our algorithm to solve a general system of variational inequalities, the convex feasibility problem, the zero point problem of inverse strongly monotone and maximal monotone mappings, the split common null point problem, the split feasibility problem, the split monotone variational inclusion problem and the split variational inequality problem. Under relaxed conditions on the parameters, we derive some algorithms and strong convergence results to solve these problems. Our results improve and generalize several known results in the recent literature.
1. Introduction
Fixed point theory has proved to be a very powerful and effective tool for solving a large number of problems which emerge from real world applications and can be translated into equivalent fixed point problems. In order to obtain approximate solutions of fixed point problems, various iterative methods have been proposed (see, e.g., [1,2,3,4,5,6,7,8,9,10] and the references therein). One important instance of a fixed point problem is the problem of finding zeros of nonlinear operators. The most popular method for finding zeros of a maximal monotone operator is the proximal point algorithm (PPA). Rockafellar [11] proved the weak convergence of the PPA, but in general it fails to converge strongly (see [12]). To obtain strong convergence, several authors proposed modifications of the PPA (see Kamimura and Takahashi [13], Iiduka and Takahashi [14] and the references therein). In [15], Lehdili and Moudafi introduced the prox-Tikhonov regularization method, which combines the Tikhonov method with the PPA to obtain a strongly convergent sequence.
In 2012, Censor, Gibali and Reich [16] (see also [17,18]) introduced a new variational inequality problem, called the common solutions to variational inequality problem (CSVIP), which consists of finding common solutions to unrelated variational inequalities. The significance of studying the CSVIP lies in the fact that it includes the well-known convex feasibility problem (CFP) as a special case. The CFP, which lies at the center of many problems in the physical sciences, such as sensor networking [19], radiation therapy treatment planning [20], computerized tomography [21] and image restoration [22], is to find a point in the intersection of a family of closed convex sets in a Hilbert space.
A special case of the CFP is the split feasibility problem (SFP). In 1994, Censor and Elfving [23] introduced the SFP for modeling phase retrieval problems. This problem has a large number of applications in optimization, signal processing, image reconstruction and intensity-modulated radiation therapy (IMRT). Starting from the SFP, various important split-type problems have been introduced and studied in recent years, for example, the split common null point problem (SCNPP), the split monotone variational inclusion problem (SMVIP) and the split variational inequality problem (SVIP).
Motivated and inspired by the above work, we propose an iterative algorithm for finding a common element of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. As applications, we solve all the problems discussed above under weaker conditions.
2. Preliminaries
Throughout the paper, we assume that $H$ is a real Hilbert space with inner product $\langle \cdot,\cdot\rangle$ and norm $\|\cdot\|$, and we let $I$ be the identity mapping on $H$. We denote by $\mathrm{Fix}(T)$ the set of all fixed points of a mapping $T$. A sequence $\{x_n\}$ in $H$ converges to $x$ strongly if $\|x_n - x\|$ converges to 0, and weakly if $\langle x_n - x, y\rangle$ converges to 0 for every $y \in H$. We shall use the notations $\to$ and $\rightharpoonup$ to indicate strong and weak convergence, respectively. It is important to note that strong convergence always implies weak convergence, but the converse is not true (see [24]). Let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ denote the nearest point projection (metric projection) from $H$ onto $C$, that is, $\|x - P_C x\| \le \|x - y\|$ for each $x \in H$ and all $y \in C$. Furthermore, $P_C x$ is characterized by the fact that $P_C x \in C$ and $\langle x - P_C x, y - P_C x\rangle \le 0$ for all $y \in C$.
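To make the variational characterization of $P_C$ concrete, the following is a minimal numerical sketch (our own illustration, not part of the original development), taking $C$ to be a box in $\mathbb{R}^3$, for which the metric projection is componentwise clipping:

```python
import numpy as np

# A minimal numerical sketch (our own illustration, not from the paper): the metric projection
# onto a box C = [lo, hi] in R^3 is componentwise clipping, and it satisfies the variational
# characterization <x - P_C x, y - P_C x> <= 0 for every y in C.

def proj_box(x, lo, hi):
    """Nearest-point (metric) projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
lo, hi = -np.ones(3), np.ones(3)
x = rng.normal(scale=3.0, size=3)
px = proj_box(x, lo, hi)

# The characterization must hold for every y in C; here we test it on random samples from C.
for _ in range(1000):
    y = rng.uniform(lo, hi)
    assert np.dot(x - px, y - px) <= 1e-12
```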
Next, we recall some definitions of well known operators, which we will use in our paper.
Definition 1.
An operator $T: H \to H$ is said to be
- 1. nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$;
- 2. a contraction if there exists a constant $\rho \in [0, 1)$ such that $\|Tx - Ty\| \le \rho\|x - y\|$ for all $x, y \in H$;
- 3. α-averaged if there exist a constant $\alpha \in (0, 1)$ and a nonexpansive mapping $V$ such that $T = (1 - \alpha)I + \alpha V$;
- 4. β-inverse strongly monotone (for short, β-ism) if there exists $\beta > 0$ such that $\langle Tx - Ty, x - y\rangle \ge \beta\|Tx - Ty\|^2$ for all $x, y \in H$;
- 5. firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y\rangle$ for all $x, y \in H$.
It is known that the metric projection $P_C$ is firmly nonexpansive and that every firmly nonexpansive mapping is $\tfrac{1}{2}$-averaged.
A set-valued operator $B: H \to 2^H$ is called maximal monotone on $H$ if $B$ is monotone, i.e., $\langle u - v, x - y\rangle \ge 0$ whenever $u \in Bx$ and $v \in By$, and there is no other monotone operator whose graph properly contains the graph of $B$. Further, the resolvent associated with a maximal monotone operator $B$ and $\lambda > 0$ is the single-valued operator $J_\lambda^B$ defined as $J_\lambda^B = (I + \lambda B)^{-1}$.
It is well known [24] that if $B$ is a maximal monotone operator and $\lambda > 0$, then $J_\lambda^B$ is firmly nonexpansive and $\mathrm{Fix}(J_\lambda^B) = B^{-1}(0)$.
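As a concrete illustration (ours, under the stated assumptions, not taken from the paper), for $B = \partial|\cdot|$ on $\mathbb{R}$ the resolvent $J_\lambda^B$ is the soft-thresholding operator, its only fixed point is the unique zero of $B$, and the proximal point iteration $x_{n+1} = J_\lambda^B x_n$ drives the iterate to that zero:

```python
import numpy as np

# A minimal sketch (ours, not the paper's): for B = the subdifferential of f(x) = |x| on R,
# the resolvent J_lambda^B = (I + lambda*B)^{-1} is soft-thresholding, it is firmly
# nonexpansive, and Fix(J_lambda^B) = B^{-1}(0) = {0}.

def resolvent_abs(x, lam):
    """Resolvent of B = d|.|: argmin_y |y| + (1/(2*lam))*(y - x)**2 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Proximal point iteration x_{n+1} = J_lambda^B(x_n).
x, lam = 5.0, 0.7
for _ in range(20):
    x = resolvent_abs(x, lam)
print(x)  # 0.0, the unique zero of the subdifferential of |.|
```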
A sequence $\{T_n\}$ of mappings of $C$ into $H$ is said to be a strongly nonexpansive sequence [25] if each $T_n$ is nonexpansive and
$x_n - y_n - (T_n x_n - T_n y_n) \to 0$
whenever $\{x_n\}$ and $\{y_n\}$ are sequences in $C$ such that $\{x_n - y_n\}$ is bounded and $\|x_n - y_n\| - \|T_n x_n - T_n y_n\| \to 0$. Note that if we put $T_n = T$ for all $n$, then we recover the definition of a strongly nonexpansive mapping given in [26].
In order to establish our results, we collect several lemmas.
Lemma 1.
Let be a β-ism operator on . Then is nonexpansive.
Proof.
Thus is nonexpansive. □
Lemma 2.
For all $x, y \in H$, the following inequality holds: $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$.
Lemma 3
([27]). Suppose $\{a_n\}$, $\{\gamma_n\}$ and $\{\delta_n\}$ are three real number sequences satisfying $a_n \ge 0$, $\{\gamma_n\} \subset (0, 1)$ and $a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n \delta_n$. Assume that $\sum_{n=1}^{\infty} \gamma_n = \infty$ and $\limsup_{n \to \infty} \delta_n \le 0$. Then $\lim_{n \to \infty} a_n = 0$.
Lemma 4
([25]). Let be a sequence of nonexpansive mappings of into , where is a nonempty subset of a Hilbert space . Assume that satisfy the condition . Then a sequence of mappings of into defined by is a strongly nonexpansive sequence, where I is the identity mapping on .
Lemma 5
([25]). Let be a sequence of firmly nonexpansive mappings of into , where is a nonempty subset of . Then is a strongly nonexpansive sequence. In particular, , resolvent of a maximal monotone operator is a strongly nonexpansive sequence.
Lemma 6
([25]). Let C and D be two nonempty subsets of a Hilbert space . Let be a sequence of mappings of C into and a sequence of mappings of D into . Suppose that both and are strongly nonexpansive sequences such that , for each . Then is a strongly nonexpansive sequence.
Lemma 7
([26]). If $S$ and $T$ are strongly nonexpansive mappings and $\mathrm{Fix}(S) \cap \mathrm{Fix}(T) \ne \emptyset$, then $\mathrm{Fix}(ST) = \mathrm{Fix}(S) \cap \mathrm{Fix}(T)$.
Lemma 8
([28]). The composition of finitely many averaged mappings is averaged. That is, if $T_1, \ldots, T_N$ are averaged mappings, then so is the composition $T_1 \cdots T_N$. Furthermore, if $\bigcap_{i=1}^{N} \mathrm{Fix}(T_i) \ne \emptyset$, then $\mathrm{Fix}(T_1 \cdots T_N) = \bigcap_{i=1}^{N} \mathrm{Fix}(T_i)$.
Lemma 9
([29]). Let T be a firmly nonexpansive self-mapping on with . Then, for any , one has , for all .
Lemma 10
([30]). Let $C$ be a nonempty closed convex set and let $T: C \to C$ be a nonexpansive mapping. Then $I - T$ is demiclosed at 0, that is, if $\{x_n\}$ is a sequence in $C$ with $x_n \rightharpoonup x$ and $x_n - Tx_n \to 0$, then $x \in \mathrm{Fix}(T)$.
Lemma 11
(The Resolvent Identity; [31]). For each $\lambda, \mu > 0$ and $x \in H$, $J_\lambda x = J_\mu\!\left(\tfrac{\mu}{\lambda}x + \left(1 - \tfrac{\mu}{\lambda}\right)J_\lambda x\right)$.
Lemma 12
([32]). Let $\{\Gamma_n\}$ be a sequence of real numbers such that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ with $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j$. Then there exists a nondecreasing sequence $\{m_k\}$ of integers such that $m_k \to \infty$ and the following properties are satisfied by all (sufficiently large) numbers $k$: $\Gamma_{m_k} \le \Gamma_{m_k + 1}$ and $\Gamma_k \le \Gamma_{m_k + 1}$.
In fact, $m_k = \max\{j \le k : \Gamma_j < \Gamma_{j+1}\}$.
3. Main Results
Theorem 1.
Let be a real Hilbert space. Let and V be nonexpansive self-mappings on and be maximal monotone mappings such that
Let be a contraction with coefficient and a sequence defined by and
for all , where and , for . Suppose that , and are sequences in and and are sequences of positive real numbers satisfying the following conditions:
- 1.
- , ;
- 2.
- ;
- 3.
- , for all ;
- 4.
- for all sufficiently large for some .
Then the sequence converges strongly to , where is the unique fixed point of the contraction .
Proof.
Set and . Clearly, each and are nonexpansive mappings for each . By Lemmas 4 and 5, for each , and are compositions of strongly nonexpansive mappings. Therefore, from Lemma 7, we get
First, we claim that is bounded. Take an arbitrary element .
By induction, we have
which proves the boundedness of and so we have and . It is well known that the fixed point set of a nonexpansive mapping is closed and convex, and so is their intersection. Hence, the metric projection is well defined. In addition, since is a contraction mapping, there exists such that . In order to prove as , we examine two possible cases:
Case I. Assume that there exists such that the real sequence is nonincreasing for all . Since is bounded, is convergent. We first show that . Using the nonexpansiveness of and (2), we obtain
since is bounded, and is convergent, we obtain
Also is strongly nonexpansive sequence so we conclude that
We next show that . From (2), we obtain
Now, from the nonexpansiveness of and (5), we observe
since is bounded, and is convergent, we obtain
As is strongly nonexpansive sequence, we have
Again from (2), we observe
Using nonexpansiveness of and (8), we observe
so that by boundedness of sequence , and convergent sequence . By Lemma 4, is strongly nonexpansive sequence, so we have
Also, notice that . Condition (ii) together with (10) implies that
Notice that . This together with given condition and (7) implies that
On the other hand, we observe
Using nonexpansiveness of for each and (15), we obtain
in view of the fact that is convergent and using (13), we obtain
Also by using Lemma 6, is strongly nonexpansive sequence for each . Therefore, we have
Now consider
Choose a fixed number s such that and using Lemma 11, for all sufficiently large n, we have
Using (18), we obtain
Similarly, using (12) and Lemma 11, we can obtain
Next, we show that
Observe that . Condition and (21) implies that
Put . Clearly, U is a convex combination of nonexpansive mappings, so is itself nonexpansive and
We observe
Observe that
Since is bounded, it has a subsequence such that converges weakly to some . Further, Lemma 10 and (24) imply that , and it follows that
where the last inequality follows from (1).
Using Lemma 2, we obtain
where .
It turns out that
Next, we have
that is,
where , , . Using (25), the condition and boundedness of , we obtain . Using condition , it can be easily proven that . Finally, we apply Lemma 3 to (26) to conclude that as .
Case II. Assume that there exists a subsequence of such that
Then, by Lemma 12, there exists a nondecreasing sequence of integers such that as and
As is a strongly nonexpansive sequence, we have as .
Following similar arguments as in Case I, we have
Using the fact that , we obtain , that is,
Since is bounded, , it follows from (29) that as .
This together with (30) implies that as . But , for all , which gives that as . □
Remark 1.
A similar approach has been adopted in the study of consensus problems (see the seminal work [33]).
4. Applications
In this section, we utilize the main result presented in this paper to study many problems in Hilbert spaces.
4.1. Application to a General System of Variational Inequalities
Let be a real Hilbert space and let there be given for each , an operator and a nonempty closed convex subset . First, we introduce the following general system of variational inequalities in Hilbert space, which aims to find such that
where for all . Here, Ω will be used to denote the solution set of (31). In particular, if and , then problem (31) can be reduced to finding such that
which was considered and studied by Ceng et al. [34]. In particular, if and , then the problem (32) reduces to the variational inequality problem for finding such that
Variational inequalities provide an effective framework for solving several important problems arising in finance, optimization theory, game theory, mechanics and economics.
Another motivation for introducing (31) is that if we choose and for all , then (31) reduces to an important problem, called the common solutions to variational inequality problem (CSVIP) introduced by Censor, Gibali and Reich [16,17].
Lemma 13.
Let be a finite family of closed convex subsets of a real Hilbert space . Let be nonlinear mappings, where . For given , , is a solution of problem (31) if and only if
That is
Lemma 14.
Let be a finite family of closed convex subsets of a real Hilbert space . Let be -ism self-mappings on , where . Let be a mapping defined by
If , , then T is averaged.
Proof.
We first prove that is averaged for each .
Note that and . Thus, applying Lemma 1, is nonexpansive and therefore, is averaged for , . Also, it is well known that is averaged, so is the composition (see Lemma 8). Hence, again applying Lemma 8, the mapping T is averaged. □
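The following minimal numerical sketch is our own illustration; it assumes, as suggested by Lemmas 13 and 14, that $T$ is the composition of the projected maps $P_{C_i}(I - \lambda_i A_i)$, and the sets, mappings and step sizes are hypothetical choices made only for this demonstration:

```python
import numpy as np

# Hedged sketch of the fixed-point characterization in Lemmas 13 and 14. We assume T is the
# composition of the projected maps P_{C_i}(I - lam_i * A_i); the data below are our own.

def proj_ball(x, center, radius):
    """Metric projection onto a closed ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# A_i(x) = x - b_i is 1-ism (the gradient of (1/2)||x - b_i||^2).
b1, b2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lam1, lam2 = 0.5, 0.5  # assumed step sizes inside (0, 2*beta_i)

def T(x):
    y = proj_ball(x - lam1 * (x - b1), np.zeros(2), 2.0)    # P_{C1}(I - lam1*A1)
    return proj_ball(y - lam2 * (y - b2), np.ones(2), 2.0)  # P_{C2}(I - lam2*A2)

# Picard iteration on the averaged mapping T (Lemma 14) converges to a point of Fix(T),
# which by Lemma 13 solves the corresponding system (31) for this data.
x = np.array([5.0, -3.0])
for _ in range(200):
    x = T(x)
print(x)
```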
Theorem 2.
Let be a finite family of closed convex subsets of a real Hilbert space . Let be -ism self-mappings on , where . Assume that , where T is defined in Lemma 14. Let be a sequence defined by and
where . Suppose satisfying the conditions and . Then the sequence converges strongly to a point .
Proof.
Applying Lemma 14, we have that T is an averaged mapping on . Therefore, by definition, , for some and a nonexpansive mapping , where . Letting , , and in Theorem 1, the conclusion of Theorem 2 is obtained. □
Remark 2.
In [17], Censor, Gibali and Reich proved a weak convergence theorem for solving the CSVIP. If we take and , for all in (31), then problem (31) reduces to the CSVIP and, through algorithm (35), we obtain a modification of Algorithm 4.1 in [17] with strong convergence, which is often much more desirable than weak convergence.
4.2. Convex Feasibility Problem
Let $C_1, C_2, \ldots, C_m$ be nonempty closed convex subsets of a real Hilbert space $H$ with $\bigcap_{i=1}^{m} C_i \ne \emptyset$. The convex feasibility problem (CFP) is to find $x^* \in H$ such that $x^* \in \bigcap_{i=1}^{m} C_i$.
The most common methods for solving the CFP are projection and reflection methods, which include some well-known schemes such as the alternating projection method [35,36,37], the Douglas–Rachford (DR) algorithm [38,39,40] and many extensions [41,42,43]. Most projection and reflection methods can be extended to solve convex feasibility problems involving any finite number of sets. An exception is the Douglas–Rachford method, for which only the theory of two-set feasibility problems had been investigated. Motivated by this fact, Borwein and Tam [43] introduced the following cyclic Douglas–Rachford method, which can be applied directly to the many-set convex feasibility problem in a Hilbert space.
For any $x_0 \in H$, the cyclic Douglas–Rachford method defines a sequence $\{x_n\}$ by setting
$x_{n+1} = T_{[C_1, C_2, \ldots, C_m]} x_n$.
Here, $T_{[C_1, C_2, \ldots, C_m]}$ is the m-set cyclic Douglas–Rachford operator defined as
$T_{[C_1, C_2, \ldots, C_m]} = T_{C_m, C_1} T_{C_{m-1}, C_m} \cdots T_{C_2, C_3} T_{C_1, C_2},$
where each $T_{C_i, C_{i+1}} = \tfrac{I + R_{C_{i+1}} R_{C_i}}{2}$ is a two-set Douglas–Rachford operator and $R_{C_i} = 2P_{C_i} - I$ and $R_{C_{i+1}} = 2P_{C_{i+1}} - I$ are the reflection operators onto $C_i$ and $C_{i+1}$, respectively. However, it is known that the cyclic Douglas–Rachford method may fail to converge strongly (see [44]). We introduce a modification of the cyclic Douglas–Rachford method in which strong convergence is guaranteed.
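For readers who want to experiment, here is a hedged finite-dimensional sketch of the cyclic Douglas–Rachford operator of Borwein and Tam [43]; the three balls below are our own illustrative data:

```python
import numpy as np

# Hedged sketch of the cyclic Douglas-Rachford operator from [43] for three balls in R^2
# with a common point (the origin); the set data are our own illustrative choices.

def proj_ball(x, c, r):
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def reflect(x, c, r):
    """Reflection R_C = 2*P_C - I with respect to the ball C = B(c, r)."""
    return 2.0 * proj_ball(x, c, r) - x

def dr_two_sets(x, A, B):
    """Two-set Douglas-Rachford operator T_{A,B} = (I + R_B R_A) / 2."""
    return 0.5 * (x + reflect(reflect(x, *A), *B))

balls = [(np.array([0.5, 0.0]), 1.0),
         (np.array([0.0, 0.5]), 1.0),
         (np.array([-0.4, -0.4]), 1.0)]   # the origin lies in all three balls

def cyclic_dr(x):
    """m-set cyclic operator: apply T_{C1,C2}, then T_{C2,C3}, ..., then T_{Cm,C1}."""
    m = len(balls)
    for i in range(m):
        x = dr_two_sets(x, balls[i], balls[(i + 1) % m])
    return x

x = np.array([3.0, -2.0])
for _ in range(500):
    x = cyclic_dr(x)
print(proj_ball(x, *balls[0]))  # by [43], this shadow point lies in the intersection
```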
Theorem 3.
Let be closed and convex sets with nonempty intersection and let be a sequence defined by and
where , for and . Suppose and satisfying
- (i)
- ;
- (ii)
- , for all .
Then the sequence converges strongly to a point such that for .
Proof.
Set , for . By Proposition 4.2 in [24], and are nonexpansive. Therefore, their combination is nonexpansive.
Further . Put and in Theorem 1, the sequence converges strongly to a point in . By Corollary 4.3.17 (iii) in [45], , for each . So, for each . Further, using inequality (1), we have
Thus, , for each i and therefore, for each i. □
Remark 3.
By taking , for all in the operator , we obtain the cyclic Douglas–Rachford operator.
4.3. Zeros of Ism and Maximal Monotone
Very recently, based on Yamada’s hybrid steepest descent method, Tian and Jiang [46] introduced an iterative algorithm and proved a weak convergence theorem for zero points of an ism mapping and fixed points of a nonexpansive mapping in a Hilbert space. Moreover, using this algorithm, they also constructed the following algorithm to obtain a weak convergence theorem for common zeros of an ism mapping and a maximal monotone mapping:
Now, we combine the hybrid steepest descent method, the proximal point algorithm and the viscosity approximation method to obtain the following strong convergence result.
Theorem 4.
Let be a maximal monotone mapping and F be an θ-ism of into itself such that . Let be a contraction with coefficient and let be a sequence defined by and
Suppose that , and satisfying
- (i)
- , ;
- (ii)
- ;
- (iii)
- .
Then the sequence converges strongly to a point .
Proof.
First, we rewrite as
Using Lemma 1, is nonexpansive. Also, it can be easily proven that .
Further, we observe that
By Proposition 4.2 in [24], is nonexpansive. Also note that . Now, take , , , , , and in Theorem 1, which yields the conclusion of Theorem 4. □
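The following is a hedged sketch of a viscosity-type forward–backward step of the kind combined in this section; it is our own illustration, not a restatement of the scheme in Theorem 4, whose exact form involves additional parameters:

```python
import numpy as np

# Hedged sketch of a viscosity-type forward-backward step (our own illustration).
# Setting: F(x) = x - a is 1-ism (gradient of (1/2)||x - a||^2), B is the normal cone of the
# box C, so J_lambda^B = P_C, and the common zero set of F and B is {a} when a lies in C.

a = np.array([0.3, -0.2])
lo, hi = -np.ones(2), np.ones(2)          # box C = [-1, 1]^2, which contains a

def proj_C(x):
    return np.clip(x, lo, hi)

def f(x):                                  # contraction used as the viscosity term
    return 0.5 * x

x, lam = np.array([4.0, 4.0]), 0.8
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)                  # alpha_n -> 0 and sum alpha_n = infinity
    x = alpha * f(x) + (1 - alpha) * proj_C(x - lam * (x - a))
print(x)                                   # approaches a, the unique common zero of F and B
```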
Remark 4.
Theorem 4 improves Tian and Jiang’s result ([46] Theorem 4.4) from a weak to a strong convergence theorem. Also, is bounded in in ([46] Theorem 4.4), but in Theorem 4, we relax to .
Theorem 5.
Let S be an θ-ism of into itself and let be maximal monotone mappings such that . Let be a contraction with coefficient and let be a sequence defined by and
Suppose that , and satisfying
- (i)
- , ;
- (ii)
- ;
- (iii)
- for all sufficiently large for some .
Then the sequence converges strongly to a point .
Proof.
First, we rewrite that
By using Lemma 1, is nonexpansive and it can be easily proven that . Putting , , , for all , in Theorem 1, the conclusion of Theorem 5 is obtained. □
Remark 5.
- 1.
- Theorem 5 improves and extends Iiduka–Takahashi’s result ([14] Theorem 4.3). By taking , , , in Theorem 5, we obtain ([14] Theorem 4.3) without assuming extra conditions and assumed in ([14] Theorem 4.3).
- 2.
- If we take , in Theorem 5, we obtain Kamimura and Takahashi’s result ([13] Theorem 1). Also we remove the superfluous condition assumed in ([13] Theorem 1). Hence our result improves the result of Kamimura and Takahashi.
- 3.
- The alternating resolvent method studied in Bauschke et al. [47] deals essentially with a special case of algorithm (38). In fact, if we take , then (38) reduces to the alternating resolvent scheme (39). We can rewrite (39) as an iteration in which is the Tikhonov regularization of . Thus, Theorem 5 extends and improves the result of Bauschke et al. [47] from a weak to a strong convergence theorem by using the prox-Tikhonov method.
4.4. Split Common Null Point Problem
Let and be two real Hilbert spaces. Given two set-valued operators and and a bounded linear operator , the split common null point problem (SCNPP) is the problem of finding
In [48], Byrne et al. introduced this problem for finding such a solution when and are maximal monotone.
Using the fact if and only if , the problem (42) is equivalent to the problem of finding
where . Here, Ψ will be used to denote the solution set of (42).
Lemma 15.
Let and be two real Hilbert spaces. Let be a bounded linear operator and be a firmly nonexpansive mapping. Then is -ism.
Proof.
Since S is firmly nonexpansive, using Proposition 4.2 in [24], is firmly nonexpansive. Therefore, for all , we obtain
Also,
Combining the above inequalities, we obtain
Thus is -ism. □
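A hedged numerical check of Lemma 15 (a finite-dimensional illustration of our own, assuming the ism constant is $1/L$ with $L$ the spectral radius of $U^*U$):

```python
import numpy as np

# Hedged numerical check of Lemma 15 (our own finite-dimensional illustration): with S firmly
# nonexpansive (here a projection onto a box) and U a bounded linear operator (a matrix),
# G = U^*(I - S)U satisfies <Gx - Gy, x - y> >= (1/L)*||Gx - Gy||^2 with L = ||U||^2.

rng = np.random.default_rng(1)
U = rng.normal(size=(2, 3))                  # bounded linear operator from R^3 to R^2
L = np.linalg.norm(U, 2) ** 2                # spectral radius of U^T U

def S(z):                                    # projection onto [-1, 1]^2, firmly nonexpansive
    return np.clip(z, -1.0, 1.0)

def G(x):
    Ux = U @ x
    return U.T @ (Ux - S(Ux))                # G = U^*(I - S)U

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.dot(G(x) - G(y), x - y)
    rhs = np.linalg.norm(G(x) - G(y)) ** 2 / L
    assert lhs >= rhs - 1e-10
```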
Theorem 6.
Let and be two real Hilbert spaces. Let and be two set-valued maximal monotone operators. Let be a bounded linear operator and be a contraction with coefficient . Let and let be a sequence defined by and
Suppose that and satisfying
- (i)
- , ;
- (ii)
- .
Then the sequence converges strongly to a point in Ψ.
Proof.
Let solve the SCNPP, i.e., ; then we have such that and . Note that if and only if .
Therefore, and so , which means . Thus .
Now let , which implies
Choose . Therefore, . An application of Lemma 9 yields
Therefore, i.e., . Thus . Hence .
Also, using Lemma 15, is -ism.
Now, putting , , and in Theorem 5, the conclusion of Theorem 6 is obtained. □
Remark 6.
- 1.
- Theorem 6 generalizes and improves the result in ([49] Theorem 5.1). Indeed, the result in ([49] Theorem 5.1) considers the special case , for all n. Moreover, we assume that , while in ([49] Theorem 5.1), was assumed to be in , which is a more restrictive condition.
- 2.
- If we take and in Theorem 6, we obtain the result of Byrne et al. ([48] Theorem 4.5).
4.5. Split Feasibility Problem
Let C and Q be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) [23] is defined as finding a point $x^*$ satisfying $x^* \in C$ and $Ux^* \in Q$ (45), where $U: H_1 \to H_2$ is a bounded linear operator.
In [50], Byrne gave the following algorithm, called the CQ algorithm, for solving the SFP (45): $x_{n+1} = P_C\big(x_n + \gamma U^*(P_Q - I)Ux_n\big)$, where $\gamma \in (0, 2/L)$ and $L$ is the spectral radius of $U^*U$.
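For illustration only, here is a hedged finite-dimensional sketch of the CQ algorithm; the sets, the matrix playing the role of $U$ and the step size are our own choices:

```python
import numpy as np

# Hedged finite-dimensional sketch of Byrne's CQ algorithm for the SFP: C and Q are boxes,
# U is a matrix, and gamma is chosen in (0, 2/L) with L the spectral radius of U^*U.
# The data are our own (0 is in C and U0 = 0 is in Q, so this SFP instance is solvable).

rng = np.random.default_rng(2)
U = rng.normal(size=(2, 3))
L = np.linalg.norm(U, 2) ** 2
gamma = 1.0 / L                                          # gamma in (0, 2/L)

P_C = lambda x: np.clip(x, -1.0, 1.0)                    # projection onto C = [-1, 1]^3
P_Q = lambda y: np.clip(y, -0.5, 0.5)                    # projection onto Q = [-0.5, 0.5]^2

x = rng.normal(scale=5.0, size=3)
for _ in range(2000):
    Ux = U @ x
    x = P_C(x + gamma * (U.T @ (P_Q(Ux) - Ux)))          # x_{n+1} = P_C(x_n + gamma*U^*(P_Q - I)Ux_n)
print(x, np.linalg.norm(P_Q(U @ x) - U @ x))             # feasibility residual tends to 0
```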
Let $h: H_1 \to (-\infty, \infty]$ be a proper lower semicontinuous convex function.
Then the subdifferential of $h$ is defined as $\partial h(x) = \{ z \in H_1 : h(y) \ge h(x) + \langle z, y - x \rangle \ \text{for all}\ y \in H_1 \}$.
By Rockafellar’s theorem [51], $\partial h$ is a maximal monotone operator on $H_1$. For a closed convex subset C of $H_1$, the indicator function is defined as $\iota_C(x) = 0$ if $x \in C$ and $\iota_C(x) = +\infty$ otherwise.
Also recall that the normal cone of C at a point $u \in C$ is defined as $N_C(u) = \{ z \in H_1 : \langle z, y - u \rangle \le 0 \ \text{for all}\ y \in C \}$.
Since $\iota_C$ is a proper lower semicontinuous convex function, $\partial \iota_C$ is a maximal monotone operator. Also, it is known that $\partial \iota_C = N_C$ (see [24] Ex. 16.12). Using Theorem 1 and the equality $J_\lambda^{\partial \iota_C} = P_C$, which holds for every closed convex subset C of $H_1$ and every $\lambda > 0$, we solve the SFP as follows:
Theorem 7.
Assume that the solution set of the SFP (45) is nonempty. Let be a contraction with coefficient and let be a sequence defined by and
Suppose that and satisfying
- (i)
- , ;
- (ii)
- .
Then the sequence converges strongly to a point in the solution set of SFP (45).
Proof.
Put and in Theorem 6, which yields the conclusion of Theorem 7. □
Remark 7.
- 1.
- Theorem 7 extends and improves the result in ([52] Corollary 3.7). In fact, in Theorem 7 taking (constant) and , for all n, we obtain the result in ([52] Corollary 3.7) without assuming an extra condition which was assumed in ([52] Corollary 3.7).
- 2.
- Theorem 7 also improves the result in ([53] Theorem 1).
4.6. Split Monotone Variational Inclusion Problem and Fixed Point Problem for Strictly Pseudocontractive Maps
Let and be two real Hilbert spaces and let and be two set-valued maximal monotone operators.
Let be a bounded linear operator and and be two ism mappings. The split monotone variational inclusion problem (SMVIP) is to find such that
and
Also, it can be easily proven that (see, e.g., Moudafi [54])
and
Let K be a nonempty closed convex subset of a Hilbert space $H$. A mapping $V: K \to K$ is said to be θ-strictly pseudocontractive if there exists θ with $0 \le \theta < 1$ such that $\|Vx - Vy\|^2 \le \|x - y\|^2 + \theta\|(I - V)x - (I - V)y\|^2$ for all $x, y \in K$.
It can be observed that $I - V$ is $\tfrac{1-\theta}{2}$-ism. In fact, in a Hilbert space, we have $\|Vx - Vy\|^2 = \|x - y\|^2 - 2\langle (I - V)x - (I - V)y, x - y\rangle + \|(I - V)x - (I - V)y\|^2$.
Hence, combining this identity with the inequality above, we have $\langle (I - V)x - (I - V)y, x - y\rangle \ge \tfrac{1-\theta}{2}\|(I - V)x - (I - V)y\|^2$.
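A quick numerical check of this observation (our own hypothetical example, not from the paper): the mapping $V(x) = -3x$ is $\theta$-strictly pseudocontractive with $\theta = 1/2$, and $I - V$ is $\tfrac{1-\theta}{2}$-ism:

```python
import numpy as np

# Hedged numerical check (our own example): V(x) = -3x is theta-strictly pseudocontractive
# with theta = 1/2, and I - V is ((1 - theta)/2)-ism, as claimed above.

theta = 0.5
V = lambda x: -3.0 * x

rng = np.random.default_rng(3)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    r = (x - V(x)) - (y - V(y))              # (I - V)x - (I - V)y
    # strict pseudocontractivity of V:
    assert (np.linalg.norm(V(x) - V(y)) ** 2
            <= np.linalg.norm(x - y) ** 2 + theta * np.linalg.norm(r) ** 2 + 1e-9)
    # inverse strong monotonicity of I - V with constant (1 - theta)/2:
    assert np.dot(r, x - y) >= (1 - theta) / 2 * np.linalg.norm(r) ** 2 - 1e-9
```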
Moudafi [54] introduced the SMVIP (46) and (47) and gave an iterative algorithm for solving this problem. Very recently, Shehu and Ogbuisi [55] proposed an iterative algorithm for solving SMVIP which also solves a fixed point problem for strictly pseudocontractive maps in a real Hilbert space.
The following result of Shehu and Ogbuisi [55] is a consequence of our Theorem 1.
Theorem 8.
Let and be two real Hilbert spaces and let and be two set-valued maximal monotone operators. Let be a bounded linear operator. Let be -ism and be -ism. Let be a θ-strictly pseudocontractive mapping and , where Λ is a solution set of (46) and (47). Let be a sequence defined by and
where , and with L being the spectral radius of the operator and is the adjoint of U. Suppose and satisfying
- (i)
- , ;
- (ii)
- .
Then the sequence converges strongly to a point in .
Proof.
By arguments similar to those in the proof of Lemma 14, we can easily show that and are averaged mappings on and , respectively. Further, in view of Lemma 3.3 in [56], is an averaged mapping on . Also, applying Lemma 8, the operator is averaged. Therefore, the composition R is averaged, where . Thus, by definition, for some and a nonexpansive mapping , where .
Also, we note that
where and .
Note that is -ism. Therefore, using Lemma 1, is nonexpansive. Also, it can be easily proven that .
Now let , then we have and .
It is obvious that implies . Therefore, . Using Lemma 8, . Thus .
Now let . Using Lemma 8, . It follows from Lemma 3.3 in [57] that
Therefore, . Hence . Thus .
Now, taking , , , and in Theorem 1 yields the desired result. □
4.7. Split Variational Inequality Problem (SVIP)
The SVIP [16] can be formulated as follows:
and such that
where C and Q are nonempty closed convex subsets of real Hilbert spaces and , respectively, is a bounded linear operator, and and are two given operators. If we denote the solution sets of the VIPs in (48) and (49) by and , respectively, then the solution set of the SVIP can be written as:
As mentioned in [54], if we choose and in the SMVIP (46) and (47), respectively, then we recover the SVIP (48) and (49), where and are the normal cones of the closed convex sets C and Q, respectively.
Theorem 9.
Let and be two real Hilbert spaces and let be a bounded linear operator. Let be -ism and be -ism. Assume that and let be a sequence defined by and
where , and with L being the spectral radius of the operator and is the adjoint of U. Suppose is a real sequence in satisfying the conditions and . Then the sequence converges strongly to a point in Φ.
Proof.
Put , and in Theorem 8, which yields the desired result. □
Remark 8.
Theorem 9 improves and extends Censor et al.’s result ([16] Theorem 6.3), where it was assumed that for all ,
We drop this assumption in our result. Furthermore, our result extends Censor et al.’s result ([16] Theorem 6.3) from weak to strong convergence.
5. Concluding Remarks
In this article, we presented a new iterative algorithm for finding a common point of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. Further, we introduced a new general system of variational inequalities which includes some existing general systems of variational inequalities as special cases, and we showed that our algorithm converges strongly to a solution of this system. We also gave a modification of the cyclic Douglas–Rachford method for solving the convex feasibility problem in such a way that strong convergence is guaranteed. In addition, we combined the hybrid steepest descent method, the proximal point algorithm and the viscosity approximation method to obtain a common zero of maximal monotone and inverse strongly monotone mappings. Further, we improved and extended many results related to different split-type problems, such as the split common null point problem, the split feasibility problem, the split monotone variational inclusion problem and the split variational inequality problem. The applicability of our algorithm is not limited to the problems discussed above; it can further be used to solve many other important problems, for instance, the quasi-variational inclusion problem, the convex minimization problem, the lasso problem and the equilibrium problem. Since we have worked in Hilbert spaces throughout, a natural question for future research is to extend our results to Banach spaces.
Author Contributions
Each author contributed to conceptualization, validation, formal analysis, investigation, writing–original draft preparation, and writing–review and editing.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Chugh, R.; Malik, P.; Kumar, V. On a new faster implicit fixed point iterative scheme in convex metric spaces. J. Funct. Spaces 2015, 2015.
- Khan, A.R.; Kumar, V.; Narwal, S.; Chugh, R. Random iterative algorithms and almost sure stability in Banach spaces. Filomat 2017, 31, 3611–3626.
- Kumar, V.; Hussain, N.; Malik, P.; Chugh, R. Jungck-type implicit iterative algorithms with numerical examples. Filomat 2017, 31, 2303–2320.
- Yao, Y.; Agarwal, R.P.; Postolache, M.; Liou, Y.C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 183.
- Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
- Yao, Y.; Liou, Y.C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
- Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019, 1–11.
- Yao, Y.; Postolache, M.; Yao, J.C. An iterative algorithm for solving generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
- Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019.
- Yao, Y.; Noor, M.A.; Liou, Y.C.; Kang, S.M. Iterative algorithms for generalized variational inequalities. Abstr. Appl. Anal. 2012, 1–10.
- Rockafellar, R.T. Monotone operators and proximal point algorithm. SIAM J. Control Optim. 1976, 14, 887–897.
- Güler, O. On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29, 403–419.
- Kamimura, S.; Takahashi, W. Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106, 226–240.
- Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive nonself-mappings and inverse-strongly-monotone mappings. J. Convex Anal. 2004, 11, 69–79.
- Lehdili, N.; Moudafi, A. Combining the proximal algorithm and Tikhonov regularization. Optimization 1996, 37, 239–252.
- Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
- Censor, Y.; Gibali, A.; Reich, S. A von Neumann alternating method for finding common solutions to variational inequalities. Nonlinear Anal. 2012, 75, 4596–4603.
- Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
- Blatt, D.; Hero III, A.O. Energy based sensor network source localization via projection onto convex sets (POCS). IEEE Trans. Signal Process. 2006, 54, 3614–3619.
- Censor, Y.; Altschuler, M.D.; Powlis, W.D. On the use of Cimmino’s simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning. Inverse Probl. 1988, 4, 607–623.
- Herman, G.T. Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed.; Springer: London, UK, 2009.
- Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270.
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
- Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8, 471–489.
- Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 1977, 3, 459–470.
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
- Huang, Y.Y.; Hong, C.C. A unified iterative treatment for solutions of problems of split feasibility and equilibrium in Hilbert spaces. Abstr. Appl. Anal. 2013, 613928.
- Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: London, UK, 1990.
- Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Noordhoff: Groningen, The Netherlands, 1976.
- Mainge, P.-E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
- Shang, Y. Resilient consensus of switched multi-agent systems. Syst. Control Lett. 2018, 122, 12–18.
- Ceng, L.C.; Wang, C.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
- Bauschke, H.H.; Borwein, J.M. On the convergence of von Neumann’s alternating projection algorithm. Set-Valued Anal. 1993, 1, 185–212.
- Borwein, J.M.; Li, G.; Yao, L. Analysis of the convergence rate for the cyclic projection algorithm applied to basic semi-algebraic convex sets. SIAM J. Optim. 2014, 24, 498–527.
- Gubin, L.G.; Polyak, B.T.; Raik, E.V. The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7, 1–24.
- Douglas, J.; Rachford, H.H. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Trans. Am. Math. Soc. 1956, 82, 421–439.
- Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
- Aragón Artacho, F.J.; Censor, Y.; Gibali, A. The cyclic Douglas-Rachford algorithm with r-sets-Douglas-Rachford operators. Optim. Methods Softw. 2018.
- Bauschke, H.H.; Noll, D.; Phan, H.M. Linear and strong convergence of algorithms involving averaged nonexpansive operators. J. Math. Anal. Appl. 2015, 421, 1–20.
- Borwein, J.M.; Tam, M.K. A cyclic Douglas–Rachford iteration scheme. J. Optim. Theory Appl. 2014, 160, 1–29.
- Aragón Artacho, F.J.; Borwein, J.M.; Tam, M.K. Recent results on Douglas-Rachford methods for combinatorial optimization problems. J. Optim. Theory Appl. 2014, 163, 1–30.
- Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics; Springer: Heidelberg, Germany, 2012; Volume 2057.
- Tian, M.; Jiang, B.N. Weak convergence theorem for zero points of inverse strongly monotone mapping and fixed points of nonexpansive mapping in Hilbert space. Optimization 2017, 66, 1689–1698.
- Bauschke, H.H.; Combettes, P.L.; Reich, S. The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 2005, 60, 283–301.
- Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
- Thong, D.V. Viscosity approximation methods for solving fixed point problems and split common fixed point problems. J. Fixed Point Theory Appl. 2017, 19, 1481–1499.
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
- Rockafellar, R.T. Characterization of the subdifferentials of convex functions. Pac. J. Math. 1966, 17, 497–510.
- Xu, H.K. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
- Deepho, J.; Kumam, P. A viscosity approximation method for the split feasibility problem. Trans. Engng. Tech. 2015, 69–77.
- Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
- Shehu, Y.; Ogbuisi, F.U. An iterative method for solving split monotone variational inclusion and fixed point problems. RACSAM 2016, 110, 503–518.
- Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2015, 23, 205–221.
- Kraikaew, R.; Saejung, S. On split common fixed point problems. J. Math. Anal. Appl. 2014, 415, 513–524.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).