Abstract
Let VIP indicate the variational inequality problem with a Lipschitzian and pseudomonotone operator, and let CFPP denote the common fixed-point problem of an asymptotically nonexpansive mapping and a strictly pseudocontractive mapping in a real Hilbert space. Our aim in this article is to establish strong convergence results for solving the VIP and CFPP by utilizing an inertial-like subgradient-like extragradient method with a line-search process. Under suitable assumptions, it is shown that the sequences generated by such a method converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).
Keywords:
inertial-like subgradient-like extragradient method with line-search process; pseudomonotone variational inequality problem; asymptotically nonexpansive mapping; strictly pseudocontractive mapping; sequential weak continuity
MSC:
47H05; 47H09; 47H10; 90C52
1. Introduction
Throughout this paper we assume that C is a nonempty, closed and convex subset of a real Hilbert space H, whose inner product and induced norm are denoted by ⟨·,·⟩ and ∥·∥, respectively. Moreover, let P_C denote the metric projection of H onto C.
Suppose that A : H → H is a mapping. In this paper, we consider the following variational inequality (VI) of finding x* ∈ C such that
⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C. (1)
The set of solutions to Equation (1) is denoted by VI(C, A). In 1976, Korpelevich [] first introduced the extragradient method, which remains one of the most popular approximation methods for solving Equation (1). That is, for any initial x_0 ∈ C, the sequence {x_n} is generated by
y_n = P_C(x_n − τA x_n), x_{n+1} = P_C(x_n − τA y_n), (2)
where τ is a constant in (0, 1/L) and L is the Lipschitz constant of the mapping A. In the case where VI(C, A) ≠ ∅, the sequence constructed by Equation (2) converges weakly to a point in VI(C, A). Recently, light has been shed on approximation methods for solving Equation (1) by many researchers; see, e.g., [,,,,,,,,,] and the references therein.
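To make the mechanics of Equation (2) concrete, the following sketch implements the two-projection update in finite dimensions; the box constraint, the skew-linear monotone operator, and the step size below are illustrative choices, not taken from the paper.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^n (closed and convex).
    return np.clip(x, lo, hi)

def extragradient(A, proj, x0, tau, iters=2000):
    # Korpelevich's extragradient method:
    #   y_n     = P_C(x_n - tau * A(x_n))
    #   x_{n+1} = P_C(x_n - tau * A(y_n))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - tau * A(x))
        x = proj(x - tau * A(y))
    return x

# A monotone (but not strongly monotone) operator A(x) = M x + q, M skew-symmetric.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-0.5, -0.25])
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)      # Lipschitz constant of A (here L = 1)
x_star = extragradient(A, proj_box, np.array([1.0, 0.0]), tau=0.9 / L)

# Fixed-point test: x solves Equation (1) iff x = P_C(x - tau * A(x)).
residual = np.linalg.norm(x_star - proj_box(x_star - A(x_star)))
print(residual)  # ≈ 0
```

For this toy problem the iterates settle at the corner of the box where the VI conditions hold, and the fixed-point residual vanishes.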
Let T : H → H be a mapping. We denote by Fix(T) the set of fixed points of T, i.e., Fix(T) = {x ∈ H : Tx = x}. T is said to be asymptotically nonexpansive if there exists a sequence {θ_n} ⊂ [0, +∞) with lim_{n→∞} θ_n = 0 such that ∥T^n x − T^n y∥ ≤ (1 + θ_n)∥x − y∥ for all x, y ∈ H and n ≥ 1. If θ_n ≡ 0, then T is nonexpansive. Also, T is said to be strictly pseudocontractive if there exists ζ ∈ [0, 1) such that ∥Tx − Ty∥² ≤ ∥x − y∥² + ζ∥(I − T)x − (I − T)y∥² for all x, y ∈ H. If ζ = 0, then T reduces to a nonexpansive mapping. It is known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings. Both strict pseudocontractions and nonexpansive mappings have been studied extensively by a large number of authors via iterative approximation methods; see, e.g., [,,,,,,] and the references therein.
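As a tiny illustration of a strict pseudocontraction that is not nonexpansive, take Tx = −2x on R^n: then ∥Tx − Ty∥² = 4∥x − y∥² = ∥x − y∥² + (1/3)∥(I − T)x − (I − T)y∥², so T is a (1/3)-strict pseudocontraction with Fix(T) = {0}, yet it clearly fails to be nonexpansive. A Mann-type averaged iteration still recovers the fixed point; the weight below is an illustrative choice in (0, 1 − ζ).

```python
import numpy as np

T = lambda x: -2.0 * x          # (1/3)-strict pseudocontraction, Fix(T) = {0}
beta = 0.4                      # Mann weight in (0, 1 - zeta) = (0, 2/3)
x = np.array([4.0, -1.0])
for _ in range(100):
    x = (1 - beta) * x + beta * T(x)   # x_{n+1} = (1 - b) x_n + b T x_n
print(np.linalg.norm(x))        # tends to 0: the iterates approach Fix(T)
```

Here each averaged step multiplies the error by 1 − 3β = −0.2, so the iterates contract toward 0 even though T itself expands distances.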
Let the mappings A and B be both inverse-strongly monotone and let the mapping T be asymptotically nonexpansive with a sequence {θ_n}. Let f be a δ-contraction with δ ∈ [0, 1). By using a modified extragradient method, Cai et al. [] designed a viscosity implicit rule for finding a point in the common solution set of the VIs for A and B and the FPP of T. Under appropriate conditions on the parameter sequences, they proved that the generated sequence converges strongly to an element of this common solution set.
In the context of extragradient techniques, one has to compute two metric projections in each iteration. Without doubt, if C is a general closed and convex set, the computation of the projection onto C might be quite time-consuming. In 2011, inspired by Korpelevich’s extragradient method, Censor et al. [] first designed the subgradient extragradient method, in which a projection onto a half-space is used in place of the second projection onto C. In 2014, Kraikaew and Saejung [] proposed the Halpern subgradient extragradient method for solving Equation (1) and proved strong convergence of the proposed method to a solution of Equation (1).
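The computational appeal of the half-space replacement is that such a projection has a closed form: for H = {z : ⟨a, z⟩ ≤ b}, P_H(x) = x − max{0, (⟨a, x⟩ − b)/∥a∥²} a. A minimal sketch, with illustrative vectors:

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Closed-form projection onto {z : <a, z> <= b}; no iterative
    # subproblem is needed, unlike projecting onto a general convex C.
    excess = a @ x - b
    if excess <= 0.0:
        return x.copy()        # x already lies in the half-space
    return x - (excess / (a @ a)) * a

a = np.array([1.0, 2.0])
p = proj_halfspace(np.array([3.0, 3.0]), a, b=4.0)
print(p)  # [2. 1.]: the projection lands on the bounding hyperplane <a, z> = 4
```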
In 2018, via the inertial technique, Thong and Hieu [] studied the inertial subgradient extragradient method and proved weak convergence of their method to a solution of Equation (1). Very recently, they [] constructed two inertial subgradient extragradient algorithms with a line-search process for finding a common solution of problem Equation (1) with operator A and the FPP of an operator T with the demiclosedness property in a real Hilbert space, where A is Lipschitzian and monotone and T is quasi-nonexpansive. The constructed inertial subgradient extragradient algorithms (Algorithms 1 and 2) are as follows:
| Algorithm 1: Inertial subgradient extragradient algorithm (I) (see [], Algorithm 1). |
| Initialization: Given arbitrarily. Let . Iterative Steps: Compute in what follows: Step 1. Put and calculate , where is chosen to be the largest satisfying . Step 2. Calculate with . Step 3. Calculate . If then . Set and go to Step 1. |
| Algorithm 2: Inertial subgradient extragradient algorithm (II) (see [], Algorithm 2). |
| Initialization: Given arbitrarily. Let . Iterative Steps: Calculate as follows: Step 1. Put and calculate , where is chosen to be the largest satisfying . Step 2. Calculate with . Step 3. Calculate . If then . Set and go to Step 1. |
Under mild assumptions, they proved that the sequences generated by the proposed algorithms are weakly convergent to a point in . Recently, gradient-like methods have been studied extensively by many authors; see, e.g., [,,,,,,,,,,,,,,].
Inspired by the research work of [], we introduce two inertial-like subgradient-like extragradient algorithms with a line-search process for solving Equation (1) with a Lipschitzian and pseudomonotone operator and the common fixed-point problem (CFPP) of an asymptotically nonexpansive operator and a strictly pseudocontractive operator in H. The proposed algorithms combine the inertial subgradient extragradient method with a line-search process, the viscosity approximation method, the Mann iteration method, and techniques for asymptotically nonexpansive mappings. Under suitable assumptions, it is shown that the sequences generated by the suggested algorithms converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).
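Among the building blocks just listed, the viscosity approximation step x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n (contraction f, nonexpansive T, α_n → 0, Σ α_n = ∞) is the one responsible for strong rather than weak convergence. A hedged toy sketch, with an illustrative rotation as T and an illustrative contraction as f:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x            # nonexpansive (a rotation), Fix(T) = {0}
f = lambda x: 0.5 * x + 0.1    # an illustrative 0.5-contraction

x = np.array([5.0, -3.0])
for n in range(1, 10001):
    alpha = 1.0 / np.sqrt(n + 1)           # alpha_n -> 0, sum alpha_n = inf
    x = alpha * f(x) + (1 - alpha) * T(x)  # viscosity approximation step
print(np.linalg.norm(x))  # small: x_n approaches the fixed point 0 of T
```

Since Fix(T) is the singleton {0}, the viscosity limit is 0 regardless of f; the contraction only decides which fixed point is selected when Fix(T) has more than one element.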
2. Preliminaries
Let {x_n} ⊂ H and x ∈ H. We use the notation x_n → x (resp., x_n ⇀ x) to indicate the strong (resp., weak) convergence of {x_n} to x. Recall that a mapping T : H → H is said to be:
- (i)
- L-Lipschitzian (or L-Lipschitz continuous) if there exists L > 0 such that ∥Tx − Ty∥ ≤ L∥x − y∥ for all x, y ∈ H;
- (ii)
- monotone if ⟨Tx − Ty, x − y⟩ ≥ 0 for all x, y ∈ H;
- (iii)
- pseudomonotone if, for all x, y ∈ H, ⟨Tx, y − x⟩ ≥ 0 implies ⟨Ty, y − x⟩ ≥ 0;
- (iv)
- α-strongly monotone if there exists α > 0 such that ⟨Tx − Ty, x − y⟩ ≥ α∥x − y∥² for all x, y ∈ H;
- (v)
- sequentially weakly continuous if, for each sequence {x_n} ⊂ H, the relation holds: x_n ⇀ x implies Tx_n ⇀ Tx.
For the metric projection P_C, it is well known that the following assertions hold:
- (i)
- given x ∈ H and z ∈ C, z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C;
- (ii)
- ∥P_C x − P_C y∥ ≤ ∥x − y∥ for all x, y ∈ H;
- (iii)
- ⟨P_C x − P_C y, x − y⟩ ≥ ∥P_C x − P_C y∥² for all x, y ∈ H;
- (iv)
- ∥x − y∥² ≥ ∥x − P_C x∥² + ∥y − P_C x∥² for all x ∈ H, y ∈ C;
- (v)
- ∥x − P_C x∥ ≤ ∥x − y∥ for all x ∈ H, y ∈ C.
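The variational characterization of the metric projection, z = P_C x iff ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C, underlies most of the projection arguments below; it can be checked numerically for a box in R^5 (the set C, sample size, and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
proj = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto C = [0,1]^5

x = 3.0 * rng.normal(size=5)            # a point typically outside C
z = proj(x)
# <x - z, y - z> should be <= 0 for every y in C; sample many y to check.
worst = max((x - z) @ (y - z) for y in rng.uniform(size=(1000, 5)))
print(worst <= 0.0)  # True
```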
Lemma 1.
[] Assume that A : C → H is a continuous pseudomonotone mapping. Then x* ∈ C is a solution to the VI (1), i.e., ⟨Ax*, y − x*⟩ ≥ 0 for all y ∈ C, iff ⟨Ay, y − x*⟩ ≥ 0 for all y ∈ C.
Lemma 2.
[] Let the real sequence {a_n} ⊂ [0, +∞) satisfy the conditions: a_{n+1} ≤ (1 − λ_n)a_n + λ_n γ_n for all n ≥ 1, where {λ_n} and {γ_n} are real sequences such that (i) {λ_n} ⊂ [0, 1] and Σ_{n=1}^∞ λ_n = ∞, and (ii) limsup_{n→∞} γ_n ≤ 0 or Σ_{n=1}^∞ |λ_n γ_n| < ∞. Then lim_{n→∞} a_n = 0.
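Convergence lemmas of this recursive type are the workhorse of Step 4 in the main proofs: a recursion of the form a_{n+1} ≤ (1 − λ_n)a_n + λ_n γ_n with Σ λ_n = ∞ and γ_n → 0 forces a_n → 0. A numerical illustration with illustrative sequences λ_n = 1/(n+1) and γ_n = 1/√n:

```python
import numpy as np

a = 10.0
for n in range(1, 200001):
    lam = 1.0 / (n + 1)        # sum lam_n = infinity, lam_n in [0, 1]
    gamma = 1.0 / np.sqrt(n)   # gamma_n -> 0, so limsup gamma_n <= 0
    a = (1 - lam) * a + lam * gamma
print(a)  # small: a_n is driven to 0 despite the slowly vanishing gamma_n
```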
Lemma 3.
[] Let S : H → H be a ζ-strict pseudocontraction. If the sequence {x_n} ⊂ H satisfies x_n ⇀ x and (I − S)x_n → 0, then x = Sx, where I is the identity operator of H.
Lemma 4.
[] Let S : H → H be a ζ-strictly pseudocontractive mapping. Let the real numbers γ, δ ≥ 0 satisfy (γ + δ)ζ ≤ γ. Then ∥γ(x − y) + δ(Sx − Sy)∥ ≤ (γ + δ)∥x − y∥ for all x, y ∈ H.
Lemma 5.
[] Let the Banach space X admit a weakly continuous duality mapping, let the subset C ⊂ X be nonempty, convex and closed, and let the asymptotically nonexpansive mapping T : C → C have a fixed point, i.e., Fix(T) ≠ ∅. Then I − T is demiclosed at zero, i.e., if the sequence {x_n} ⊂ C satisfies x_n ⇀ x and (I − T)x_n → 0, then x ∈ Fix(T), where I is the identity mapping of X.
3. Main Results
Unless otherwise stated, we suppose the following.
- T : H → H is an asymptotically nonexpansive operator with a sequence {θ_n}, and S : H → H is a ζ-strictly pseudocontractive mapping.
- is sequentially weakly continuous on C, L-Lipschitzian pseudomonotone on H, and is bounded.
- f : H → H is a δ-contraction with δ ∈ [0, 1).
- Ω = Fix(T) ∩ Fix(S) ∩ VI(C, A) ≠ ∅.
- and such that
- (i)
- and ;
- (ii)
- ;
- (iii)
- and ;
- (iv)
- , and .
We first introduce an inertial-like subgradient extragradient algorithm (Algorithm 3) with line-search process as follows:
| Algorithm 3: Inertial-like subgradient extragradient algorithm (I). |
| Initialization: Given arbitrarily. Let . Iterative Steps: Compute in what follows: Step 1. Put and calculate , where is chosen to be the largest such that Step 2. Calculate with . Step 3. Calculate Again set and return to Step 1. |
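A hedged finite-dimensional sketch of this kind of scheme is given below; it combines an inertial extrapolation, one projection onto C, one closed-form projection onto the half-space cut at y_n, and a viscosity step. The inertial weight 0.3, the viscosity weights, the fixed step size, and the test problem are illustrative stand-ins, not the exact choices of Algorithm 3.

```python
import numpy as np

def inertial_sub_extragradient(A, proj_C, f, x0, x1, L, iters=3000):
    # Illustrative inertial subgradient-extragradient scheme with a
    # viscosity step (a sketch, not the paper's exact Algorithm 3).
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    tau = 0.5 / L                        # fixed step size in (0, 1/L)
    for n in range(1, iters + 1):
        alpha = 1.0 / (n + 1)            # viscosity weight, alpha_n -> 0
        w = x + 0.3 * (x - x_prev)       # inertial extrapolation
        y = proj_C(w - tau * A(w))       # the only projection onto C itself
        h = w - tau * A(w) - y           # outward normal of the half-space at y
        v = w - tau * A(y)
        if h @ h > 0:                    # cheap projection onto the half-space
            z = v - max(0.0, h @ (v - y)) / (h @ h) * h
        else:
            z = v
        x_prev, x = x, alpha * f(x) + (1 - alpha) * z
    return x

# Usage on a toy strongly monotone VI over C = [0,1]^2 with A(x) = x - b,
# whose unique solution is P_C(b) = [0.3, 1.0]; f is an illustrative contraction.
b = np.array([0.3, 1.5])
x = inertial_sub_extragradient(lambda u: u - b,
                               lambda u: np.clip(u, 0.0, 1.0),
                               lambda u: 0.5 * u,
                               np.zeros(2), np.ones(2), L=1.0)
print(x)  # close to [0.3, 1.0]
```

Note that z is allowed to leave C: only the first projection is onto C, which is exactly the computational saving of the subgradient extragradient idea.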
Lemma 6.
In Step 1 of Algorithm 3, the Armijo-like search rule
is well defined, and the inequality holds: .
Proof.
Since A is L-Lipschitzian, we know that Equation (3) holds for all and so is well defined. It is clear that . Next we discuss two cases. In the case where , the inequality is valid. In the case where , from Equation (3) we derive . Also, since A is L-Lipschitzian, we get . Therefore the inequality is true. □
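The Armijo-like rule of Step 1 can be sketched as a simple backtracking loop: starting from γ, the step size is multiplied by l ∈ (0, 1) until τ∥A(w) − A(y)∥ ≤ μ∥w − y∥ with y = P_C(w − τA(w)); as in Lemma 6, L-Lipschitz continuity of A guarantees termination, since any τ ≤ μ/L is accepted. The parameters γ, l, μ and the test operator below are illustrative.

```python
import numpy as np

def armijo_step(A, proj_C, w, gamma=1.0, l=0.5, mu=0.4, max_back=60):
    # Backtracking line search for the Armijo-like rule of Step 1.
    tau = gamma
    for _ in range(max_back):
        y = proj_C(w - tau * A(w))
        if tau * np.linalg.norm(A(w) - A(y)) <= mu * np.linalg.norm(w - y):
            return tau, y
        tau *= l
    return tau, proj_C(w - tau * A(w))

A = lambda x: 4.0 * x                      # Lipschitz with constant L = 4
proj_C = lambda x: np.clip(x, -1.0, 1.0)
tau, y = armijo_step(A, proj_C, np.array([0.9, -0.2]))
print(tau)  # 0.0625: the first step size in {1, 0.5, ...} with 4*tau <= 0.4
```

Because A here is linear, the acceptance test reduces to 4τ ≤ μ, so backtracking stops at the first τ below μ/L = 0.1, consistent with the lower bound τ ≥ lμ/L noted in Lemma 6.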
Lemma 7.
Assume that are the sequences constructed by Algorithm 3. Then
where and for all .
Proof.
We observe that
Since with , we have , which together with Equation (3), implies that
Also, from we get
where . Therefore, substituting the last two inequalities into Equation (5), we infer that
In addition, from Algorithm 3 we have
Since the function is convex, from Equation (6) we have
This completes the proof. □
Lemma 8.
Assume that are bounded vector sequences constructed by Algorithm 3. If and such that , then .
Proof.
In terms of Algorithm 3, we deduce , and hence . Using the conditions and , we get
Combining the assumptions and yields
Thus as ,
Furthermore, using Algorithm 3 we have , which hence implies
Note that and . So we obtain
Noticing , we have , and hence
Since A is Lipschitzian, we infer from the boundedness of that is bounded. From , we get the boundedness of . Taking into account , from Equation (9) we have . Moreover, note that . Since A is L-Lipschitzian, from we get . According to Equation (9) we have .
We claim below. Indeed, note that
Hence from Equation (7) and the assumption we get
We now choose a sequence such that as . For each , we denote by the smallest natural number satisfying
From the decreasing property of , it is easy to see that is increasing. Considering that implies , we put
Consequently,
We show . In fact, since and , we get . So, guarantees . Also, since A is sequentially weakly continuous on C, we deduce that . So, we get . It follows that . Since and as , we obtain that
Thus .
The last step is to show . Indeed, we have . From Equation (10) we also have . Note that Lemma 5 yields the demiclosedness of at zero. Thus . Moreover, since and , we have . From Equation (8) we get . By Lemma 5 we know that is demiclosed at zero, and hence we have , i.e., . In addition, taking , we infer that the right hand side of Equation (11) converges to zero by the Lipschitzian property of A, the boundedness of , and the limit . Therefore, . From Lemma 3 we get , and hence . This completes the proof. □
Theorem 1.
Let be the sequence constructed by Algorithm 3. Suppose that . Then
where x* is the unique solution of the HVI: ⟨(I − f)x*, x − x*⟩ ≥ 0 for all x in the common solution set of the VIP and CFPP.
Proof.
Without loss of generality, we may assume that . One can verify that the composition of the metric projection onto the common solution set with f is a contractive map, and Banach’s Contraction Principle ensures that it has a unique fixed point. So, there exists a unique solution to the HVI
It is clear that the necessity of the theorem is valid. In fact, if , then as , we obtain that , , and
We now assume that and , and prove the sufficiency by the following steps.
Step 1. We claim the boundedness of . In fact, take a fixed arbitrarily. From Equation (6) we get
which hence yields
By the definition of , we have
From and , we deduce that , which immediately implies that s.t.
Note that is bounded, and . Hence, we know that is a bounded sequence. So, from , it follows that
where for some . Taking into account , we know that such that
So, from Algorithm 3 and Equation (17) it follows that for all ,
which together with Lemma 4 and , implies that for all ,
By induction, we obtain . Therefore, we derive the boundedness of and hence that of the sequences .
Step 2. We claim that s.t.
In fact, using Lemmas 4 and 7 and the convexity of , we get
where
and
for some . Also, from Equation (16) we have
where
for some . Note that
Substituting Equation (19) into Equation (18), we obtain that for all ,
where . This immediately implies that for all ,
Step 3. We claim that s.t.
In fact, we get
where for some . From Algorithm 3 and the convexity of , we have
which together with Lemma 4, leads to
Hence, we have
which immediately yields
Step 4. We claim the strong convergence of to a unique solution to the HVI Equation (12). In fact, setting , from Equation (22) we know that
According to Lemma 2, it is sufficient to prove that . Since and , from Equation (20) we get
which hence leads to
Obviously, the assumptions and guarantee that . Thus,
Since is bounded, we know that s.t.
Next, we may suppose that . Hence from Equation (25) we get
From and it follows that .
Note that
It is clear that
Consequently, all conditions of Lemma 2 are satisfied, and hence we immediately deduce that . This completes the proof. □
Next, we introduce another inertial-like subgradient extragradient algorithm (Algorithm 4) with a line-search process as follows. It is worth pointing out that Lemmas 6–8 remain valid for Algorithm 4.
| Algorithm 4: Inertial-like subgradient extragradient algorithm (II). |
| Initialization: Given arbitrarily. Let . Iterative Steps: Compute in what follows: Step 1. Put and calculate , where is chosen to be the largest such that Step 2. Calculate with . Step 3. Calculate Again set and return to Step 1. |
Theorem 2.
Let be the sequence constructed by Algorithm 4. Suppose that . Then
where x* is the unique solution of the HVI: ⟨(I − f)x*, x − x*⟩ ≥ 0 for all x in the common solution set of the VIP and CFPP.
Proof.
Using the same reasoning as in the proof of Theorem 1, we know that there is a unique solution of Equation (12), and that the necessity of the theorem holds.
We next prove the sufficiency of the theorem. To this end, we suppose that and , and proceed by the following steps.
Step 1. We claim the boundedness of . In fact, using the same reasoning as in Step 1 of the proof of Theorem 1, we obtain that inequalities Equations (13)–(17) hold. Noticing , we infer that s.t.
So, from Algorithm 4 and Equation (17) it follows that for all ,
which together with Lemma 4 and , implies that for all ,
Hence,
Thus, sequence is bounded.
Step 2. We claim that for all ,
with constant . Indeed, utilizing Lemmas 4 and 7 and the convexity of , one reaches
where , and for some . Also, from Equation (16) we have
where for some . Note that . Substituting Equation (28) into Equation (27), we obtain that for all ,
where . This immediately implies that for all ,
Step 3. We claim that s.t.
In fact, we get
where s.t. . From Algorithm 4 and the convexity of , we have
which together with Lemma 4, leads to
By Step 3 of Algorithm 4, and from Equation (30) we know that . Hence, we have
which immediately yields Equation (29).
Step 4. We claim the strong convergence of to a unique solution of HVI Equation (12). In fact, using the same reasoning as in Step 4 of the proof of Theorem 1, we derive the desired conclusion. This completes the proof. □
Next, we show how to solve the VIP and CFPP in the following illustrative example.
The initial point is randomly chosen in . Take , and . Then we know that and .
We first provide an example of a Lipschitz continuous and pseudomonotone mapping A, an asymptotically nonexpansive mapping T and a strictly pseudocontractive mapping S with . Let and with the inner product and induced norm . Let be defined as , and for all . We now show that A is pseudomonotone and Lipschitz continuous with such that is bounded. Indeed, it is clear that is bounded. Moreover, for all we have
This implies that A is Lipschitz continuous with . Next, we show that A is pseudomonotone. For any given , it is clear that the relation holds:
Furthermore, it is easy to see that T is asymptotically nonexpansive with , such that as . Indeed, we observe that
and
It is clear that and
Moreover, it is readily seen that . In addition, it is clear that S is strictly pseudocontractive with constant . Indeed, we observe that for all ,
It is clear that for all . Therefore, . In this case, Algorithm 3 can be rewritten as follows:
where and are chosen as in Algorithm 3. Thus, by Theorem 1, we know that converges to if and only if .
On the other hand, Algorithm 4 can be rewritten as follows:
where and are chosen as in Algorithm 4. Thus, by Theorem 2, we know that converges to if and only if .
Author Contributions
The authors made equal contributions to this paper. Conceptualization, methodology, formal analysis and investigation: L.-C.C., A.P., C.-F.W. and J.-C.Y.; writing—original draft preparation: L.-C.C. and A.P.; writing—review and editing: C.-F.W. and J.-C.Y.
Funding
This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). This research was also supported by the Ministry of Science and Technology, Taiwan [grant number: 107-2115-M-037-001].
Conflicts of Interest
The authors declare no conflict of interest.
References
- Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756. [Google Scholar]
- Bin Dehaish, B.A. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336. [Google Scholar]
- Bin Dehaish, B.A. A regularization projection algorithm for various problems with nonlinear mappings in Hilbert spaces. J. Inequal. Appl. 2015, 2015, 51. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302. [Google Scholar] [CrossRef]
- Ceng, L.C.; Guu, S.M.; Yao, J.C. Finding common solutions of a variational inequality, a general system of variational inequalities, and a fixed-point problem via a hybrid extragradient method. Fixed Point Theory Appl. 2011, 2011, 626159. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Wong, N.C.; Yao, J.C. An extragradient-like approximation method for variational inequalities and fixed point problems. Fixed Point Theory Appl. 2011, 2011, 22. [Google Scholar] [CrossRef]
- Liu, L.; Qin, X. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019. [Google Scholar] [CrossRef]
- Nguyen, L.V.; Qin, X. Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. 2019. [Google Scholar] [CrossRef]
- Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25. [Google Scholar] [CrossRef]
- Ceng, L.C.; Guu, S.M.; Yao, J.C. Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012, 2012, 92. [Google Scholar] [CrossRef]
- Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118. [Google Scholar] [CrossRef]
- Zhao, X.; Ng, K.F.; Li, C.; Yao, J.C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641. [Google Scholar] [CrossRef]
- Latif, A.; Ceng, L.C.; Ansari, Q.H. Multi-step hybrid viscosity method for systems of variational inequalities defined over sets of solutions of an equilibrium problem and fixed point problems. Fixed Point Theory Appl. 2012, 2012, 186. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75, 2116–2125. [Google Scholar] [CrossRef]
- Qin, X.; Cho, S.Y.; Wang, L. Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 2018, 67, 1377–1388. [Google Scholar] [CrossRef]
- Cai, G.; Shehu, Y.; Iyiola, O.S. Strong convergence results for variational inequalities and fixed point problems using modified viscosity implicit rules. Numer. Algorithms 2018, 77, 535–558. [Google Scholar] [CrossRef]
- Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
- Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
- Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610. [Google Scholar] [CrossRef]
- Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307. [Google Scholar] [CrossRef]
- Cho, S.Y.; Qin, X. On the strong convergence of an iterative process for asymptotically strict pseudocontractions and equilibrium problems. Appl. Math. Comput. 2014, 235, 430–438. [Google Scholar] [CrossRef]
- Ceng, L.C.; Yao, J.C. Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl. 2013, 2013, 43. [Google Scholar] [CrossRef]
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502. [Google Scholar] [CrossRef]
- Ceng, L.C.; Yuan, Q. Hybrid Mann viscosity implicit iteration methods for triple hierarchical variational inequalities, systems of variational inequalities and fixed point problems. Mathematics 2019, 7, 338. [Google Scholar] [CrossRef]
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134. [Google Scholar] [CrossRef]
- Ceng, L.C.; Latif, A.; Ansari, Q.H.; Yao, J.C. Hybrid extragradient method for hierarchical variational inequalities. Fixed Point Theory Appl. 2014, 2014, 222. [Google Scholar] [CrossRef]
- Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
- Ceng, L.C.; Latif, A.; Yao, J.C. On solutions of a system of variational inequalities and fixed point problems in Banach spaces. Fixed Point Theory Appl. 2013, 2013, 176. [Google Scholar] [CrossRef]
- Ceng, L.C.; Shang, M. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2019. [Google Scholar] [CrossRef]
- Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480. [Google Scholar] [CrossRef]
- Ceng, L.C.; Petruşel, A.; Yao, J.C. Composite viscosity approximation methods for equilibrium problem, variational inequality and common fixed points. J. Nonlinear Convex Anal. 2014, 15, 219–240. [Google Scholar]
- Ceng, L.C.; Kong, Z.R.; Wen, C.F. On general systems of variational inequalities. Comput. Math. Appl. 2013, 66, 1514–1532. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218, 1112–1123. [Google Scholar] [CrossRef]
- Ceng, L.C.; Wen, C.F.; Yao, Y. Iterative approaches to hierarchical variational inequalities for infinite nonexpansive mappings and finding zero points of m-accretive operators. J. Nonlinear Var. Anal. 2017, 1, 213–235. [Google Scholar]
- Zaslavski, A.J. Numerical Optimization with Computational Errors; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
- Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201. [Google Scholar] [CrossRef]
- Ceng, L.C.; Xu, H.K.; Yao, J.C. The viscosity approximation method for asymptotically nonexpansive mappings in Banach spaces. Nonlinear Anal. 2008, 69, 1402–1412. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).