Abstract
In this paper, we study the split problem of fixed points of two pseudocontractive operators and variational inequalities of two pseudomonotone operators in Hilbert spaces. We present a Tseng-type iterative algorithm for solving the split problem by using self-adaptive techniques. Under certain assumptions, we show that the proposed algorithm converges weakly to a solution of the split problem. An application is included.
1. Introduction
In this paper, we study the variational inequality (in short, VI) of seeking an element $x^* \in C$ such that
$$\langle f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \qquad (1)$$
where C is a nonempty closed convex set in a real Hilbert space H, $\langle \cdot, \cdot \rangle$ denotes the inner product of H, and $f : C \to H$ is a nonlinear operator. Denote by $\operatorname{VI}(C, f)$ the solution set of the variational inequality (1).
A host of problems, such as optimization problems, saddle point problems, equilibrium problems and fixed point problems, can be converted into the form of variational inequality (1); see [1,2,3,4,5,6,7,8,9,10,11,12]. Many numerical algorithms have been proposed and developed for solving (1) and related problems; see [13,14,15,16,17,18,19,20,21,22,23,24,25] and the references therein. Generally speaking, f should satisfy the following assumptions:
- f is strongly monotone, i.e., there exists a positive constant $\eta$ such that $\langle f(x) - f(y), x - y \rangle \ge \eta \|x - y\|^2$ for all $x, y \in C$; (2)
- f is Lipschitz continuous, i.e., there exists a positive constant L such that $\|f(x) - f(y)\| \le L \|x - y\|$ for all $x, y \in C$. (3)
In order to relax the restriction (2), Korpelevich ([26]) proposed the following extragradient algorithm in 1976:
$$y_k = P_C(x_k - \lambda f(x_k)), \quad x_{k+1} = P_C(x_k - \lambda f(y_k)), \qquad (4)$$
where $P_C$ denotes the orthogonal projection from H onto C and the step-size $\lambda$ lies in $(0, 1/L)$.
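As a concrete illustration of iteration (4), the following Python sketch (the names and the toy operator are our own, not from the paper) runs the extragradient method on the skew rotation F(x) = (x2, -x1), which is monotone and 1-Lipschitz but not strongly monotone. For this operator the plain projected-gradient step increases the norm at every iteration, while (4) contracts toward the unique solution 0.

```python
import numpy as np

def extragradient(F, proj, x0, lam=0.5, iters=200):
    """Korpelevich's extragradient iteration (4):
    y_k = P_C(x_k - lam*F(x_k)),  x_{k+1} = P_C(x_k - lam*F(y_k))."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * F(x))
        x = proj(x - lam * F(y))
    return x

# Toy problem: C is the whole space (so P_C is the identity) and
# F(x) = (x2, -x1) is a skew rotation: monotone, 1-Lipschitz, not
# strongly monotone; its unique zero (the VI solution) is the origin.
F = lambda x: np.array([x[1], -x[0]])
proj = lambda x: x  # P_C = identity since C is the whole space
x = extragradient(F, proj, np.array([1.0, 0.0]))
print(np.linalg.norm(x))  # contracts toward 0; the plain step would diverge
```

With step-size 0.5 each extragradient iteration shrinks the norm by the factor $\sqrt{1 - \lambda^2 + \lambda^4} \approx 0.90$, whereas the single forward step multiplies it by $\sqrt{1 + \lambda^2} > 1$.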
The extragradient algorithm (4) guarantees the convergence of the generated sequence provided f is merely monotone. The extragradient algorithm and its variants have been investigated extensively; see [27,28,29,30,31]. However, each iteration requires us to compute (i) the value of f at two different points and (ii) two projections onto C. Two important modifications of the extragradient algorithm have been made. One was proposed in [32] by Censor, Gibali and Reich, and another is the following remarkable algorithm proposed in [33] by Tseng:
$$y_k = P_C(x_k - \lambda f(x_k)), \quad x_{k+1} = y_k - \lambda (f(y_k) - f(x_k)), \qquad (5)$$
where $\lambda \in (0, 1/L)$; note that (5) needs only one projection onto C per iteration.
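A minimal sketch of Tseng's iteration (5), again with illustrative names and a toy affine operator of our own choosing. The fixed-point residual $\|x - P_C(x - \lambda f(x))\|$ vanishes exactly at solutions of the VI, so it serves as a natural stopping measure.

```python
import numpy as np

def tseng(F, proj, x0, lam, iters=2000):
    """Tseng's forward-backward-forward iteration (5):
    y_k = P_C(x_k - lam*F(x_k)),  x_{k+1} = y_k - lam*(F(y_k) - F(x_k)).
    Only one projection per iteration, versus two in (4)."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * F(x))
        x = y - lam * (F(y) - F(x))
    return x

# Toy VI: C is the closed unit ball and F(x) = M x + q is monotone
# (the symmetric part of M is the identity), with ||M|| = sqrt(2).
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
q = np.array([-3.0, 0.0])
F = lambda x: M @ x + q
proj = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
x = tseng(F, proj, np.zeros(2), lam=0.3)          # lam < 1/||M||
residual = np.linalg.norm(x - proj(x - 0.3 * F(x)))  # 0 iff x solves the VI
```

Since the unconstrained zero of F lies outside the unit ball, the solution sits on the boundary; the residual decays to machine precision.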
On the other hand, if f is not Lipschitz continuous, or its Lipschitz constant is difficult to estimate, then algorithms (4) and (5) are inapplicable. To overcome this obstacle, Iusem [34] used a self-adaptive technique, which requires no prior knowledge of the Lipschitz constant of f, for solving (1). For related work on self-adaptive methods for solving (1), please refer to [35,36,37,38].
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let C and Q be two nonempty closed and convex subsets of $H_1$ and $H_2$, respectively. Let S, T, f and g be four nonlinear operators, and let $A : H_1 \to H_2$ be a bounded linear operator. We consider the classical split problem which is to find a point $x^*$ such that
$$x^* \in \operatorname{Fix}(S) \cap \operatorname{VI}(C, f) \quad \text{and} \quad Ax^* \in \operatorname{Fix}(T) \cap \operatorname{VI}(Q, g), \qquad (6)$$
where $\operatorname{Fix}(S)$ and $\operatorname{Fix}(T)$ are the fixed point sets of S and T, respectively.
The solution set of (6) is denoted by Γ, i.e.,
$$\Gamma = \{x^* \in \operatorname{Fix}(S) \cap \operatorname{VI}(C, f) : Ax^* \in \operatorname{Fix}(T) \cap \operatorname{VI}(Q, g)\}.$$
Let f and g be the null operators on C and Q, respectively. Then, the split problem (6) reduces to the split fixed point problem studied in [39,40], which is to find a point $x^*$ such that
$$x^* \in \operatorname{Fix}(S) \quad \text{and} \quad Ax^* \in \operatorname{Fix}(T). \qquad (7)$$
Let S and T be the identity operators on C and Q, respectively. Then, the split problem (6) reduces to the split variational inequality problem studied in [41], which is to find a point $x^*$ such that
$$x^* \in \operatorname{VI}(C, f) \quad \text{and} \quad Ax^* \in \operatorname{VI}(Q, g). \qquad (8)$$
The solution set of (8) is denoted by , i.e.,
The split problems (6)–(8) have a common prototype, namely the split feasibility problem ([42]) of finding a point $x^* \in C$ such that $Ax^* \in Q$. (9)
The split problems have found powerful applications in image recovery and signal processing, control theory, biomedical engineering and geophysics. Iterative algorithms for solving split problems have been studied and extended by many scholars; see [43,44,45,46,47].
Motivated by the work in this direction, in this paper we further study the split problem (6), in which S and T are two pseudocontractive operators and f and g are two pseudomonotone operators. We present a Tseng-type iterative algorithm for solving the split problem (6) by using self-adaptive techniques. Under certain conditions, we show that the proposed algorithm converges weakly to a solution of the split problem (6).
2. Preliminaries
Let H be a real Hilbert space equipped with inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|x\| = \sqrt{\langle x, x \rangle}$. For any $x, y \in H$ and constant $\lambda \in [0, 1]$, we have
$$\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda) \|y\|^2 - \lambda (1 - \lambda) \|x - y\|^2. \qquad (10)$$
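The standard Hilbert-space identity $\|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2$ can be checked numerically in $\mathbb{R}^3$ (an illustrative sanity check, not part of the paper):

```python
import numpy as np

# Numerical check of the Hilbert-space identity
#   ||lam*x + (1-lam)*y||^2
#     = lam*||x||^2 + (1-lam)*||y||^2 - lam*(1-lam)*||x - y||^2
x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 4.0, -1.0])
for lam in (0.0, 0.25, 0.7, 1.0):
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-12
```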
The symbol $\rightharpoonup$ denotes weak convergence and the symbol $\to$ denotes strong convergence. We use $\omega_w(x_k)$ to denote the set of all weak cluster points of the sequence $\{x_k\}$, namely, $\omega_w(x_k) = \{x : x_{k_i} \rightharpoonup x \text{ for some subsequence } \{x_{k_i}\} \text{ of } \{x_k\}\}$.
Recall that an operator $f : H \to H$ is said to be
- Pseudomonotone, if $\langle f(y), x - y \rangle \ge 0$ implies $\langle f(x), x - y \rangle \ge 0$ for all $x, y \in H$;
- Weakly sequentially continuous, if $x_k \rightharpoonup x$ implies that $f(x_k) \rightharpoonup f(x)$.
Let C be a nonempty closed convex subset of a real Hilbert space H. Recall that an operator $S : C \to C$ is said to be pseudocontractive if
$$\|Sx - Sy\|^2 \le \|x - y\|^2 + \|(I - S)x - (I - S)y\|^2, \quad \forall x, y \in C.$$
For given $x \in H$, there exists a unique point in C, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
It is known that $P_C$ is firmly nonexpansive, that is, $P_C$ satisfies
$$\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H.$$
It is obvious that $P_C$ is nonexpansive, i.e., $\|P_C x - P_C y\| \le \|x - y\|$ for all $x, y \in H$. Moreover, $P_C$ satisfies the following inequality ([48]):
$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall x \in H,\ \forall y \in C. \qquad (11)$$
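Both projection properties are easy to verify numerically for a simple C. The sketch below (our own illustration) uses a coordinate box, whose projection is a componentwise clamp, and checks firm nonexpansiveness together with the characterization (11):

```python
import numpy as np

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
P = lambda x: np.minimum(np.maximum(x, lo), hi)  # projection onto the box C

x, y = np.array([3.0, -1.5]), np.array([-2.0, 4.0])
Px, Py = P(x), P(y)  # Px = (1, 0), Py = (-1, 2)

# Firm nonexpansiveness: <Px - Py, x - y> >= ||Px - Py||^2.
assert (Px - Py) @ (x - y) >= np.linalg.norm(Px - Py) ** 2 - 1e-12

# Characterization (11): <x - Px, z - Px> <= 0 for every z in C.
for z in (np.array([0.0, 1.0]), lo, hi):
    assert (x - Px) @ (z - Px) <= 1e-12
```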
Lemma 1 ([49]).
Let C be a nonempty, convex and closed subset of a Hilbert space H. Assume that the operator is pseudocontractive and κ-Lipschitz continuous. Then, for all and , we have
where α is a constant in .
Lemma 2 ([50]).
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $f : C \to H$ be a continuous and pseudomonotone operator. Then $x^* \in \operatorname{VI}(C, f)$ if and only if $x^*$ solves the following variational inequality:
$$\langle f(x), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
Lemma 3 ([51]).
Let C be a nonempty, convex and closed subset of a Hilbert space H. Let the operator $S : C \to C$ be continuous and pseudocontractive. Then S is demiclosed, i.e., $x_k \rightharpoonup x$ and $x_k - Sx_k \to 0$ as $k \to \infty$ imply that $x = Sx$.
Lemma 4 ([52]).
Let Γ be a nonempty closed convex subset of a real Hilbert space H. Let $\{x_k\}$ be a sequence in H. If the following assumptions are satisfied:
- for every $z \in \Gamma$, $\lim_{k \to \infty} \|x_k - z\|$ exists;
- $\omega_w(x_k) \subset \Gamma$;
then the sequence $\{x_k\}$ converges weakly to some point in Γ.
3. Main Results
In this section, we present our main results.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C \subset H_1$ and $Q \subset H_2$ be two nonempty closed convex sets. Let S, T, f and g be four nonlinear operators. Let $A : H_1 \to H_2$ be a bounded linear operator with its adjoint $A^*$.
Let , , and be four real number sequences. Let , , , and be five constants. Let and be two positive constants.
Next, we introduce an iterative algorithm for solving the split problem (6).
In order to demonstrate the convergence analysis of Algorithm 1, we add some conditions on the operators and the parameters.
Algorithm 1: Select an initial point. Set k = 0.
Step 1. Assume that the present iterate and the two step-sizes are given. Compute
Step 2. Compute the next iterate by the following form
Step 3. Increase k by 1 and go back to Step 1. Meanwhile, update
Suppose that
- (c1):
- S and T are two pseudocontractive operators with Lipschitz constants and , respectively;
- (c2):
- the operator f is pseudomonotone on , weakly sequentially continuous and -Lipschitz continuous on C;
- (c3):
- the operator g is pseudomonotone on , weakly sequentially continuous and -Lipschitz continuous on Q.
- (r1):
- and ;
- (r2):
- and ;
- (r3):
- , , , and .
Remark 1.
According to (19), the sequence is monotonically decreasing. Moreover, by the -Lipschitz continuity of f, we obtain that . Thus, has a lower bound . Therefore, the limit exists. Similarly, the sequence is monotonically decreasing and has a lower bound . So, the limit exists.
Now, we prove our main theorem.
Theorem 1.
Suppose that $\Gamma \neq \emptyset$. Then the sequence generated by Algorithm 1 converges weakly to some point in Γ.
Proof.
Let . Then, and . By (10) and (12), we have
Using Lemma 1, we obtain
Similarly, according to (10), Lemma 1 and (17), we have the following estimate
Applying the inequality (11) to (13), we obtain
Since and , . This together with the pseudomonotonicity of f implies that
It follows that
which yields
By (14), we have
From (10), we obtain
Thanks to (19), . It follows from (30) that
By Remark 1, . So,
Then, there exists and such that when . In combination with (31), we get
This together with (23) implies that
By the property (11) of and (15), we have
Since and , . By the pseudomonotonicity of g, we obtain
It follows that
From (14), we obtain
By virtue of (10), we deduce
Due to (20), we have
This together with (38) implies that
By Remark 1, and hence
So, there exists and such that
when .
In the light of (39), we have
Observe that
In view of (18), we have
So, the sequences , and are all bounded.
By the -Lipschitz continuity of S, we have
It follows that
This together with (47) implies that
From (12) and (47), we conclude that .
Next, we show that . Pick any . Then, there exists a subsequence of such that as . In addition, and as .
First, we prove that . In view of (11) and , we obtain
It follows that
Noting that from (49), we have . Meanwhile, and are bounded. Then, by (52), we deduce
Let be a sequence of positive real numbers satisfying . On account of (53), for each , there exists a smallest positive integer such that
Moreover, for each , . Setting , we have . From (54), we have
By the pseudomonotonicity of f, we get
which implies that
Because of , we have
Then,
This together with (55) implies that
By Lemma 2 and (56), we conclude that .
On the other hand, by (51), as . This together with and Lemma 3 implies that . Therefore, .
Next, we show that . Observe that
It follows that
This together with (48) implies that
From (14), as . Thanks to (17) and (48), we have as . Combining with (46), we deduce that . Applying Lemma 3 to (57), we obtain that .
Next, we show that . In view of (10) and , we obtain
It follows that
Noting that from (r3), we have . Then, by (58), we deduce
Choose a sequence of positive real numbers such that . In terms of (59), for each , there exists a smallest positive integer such that
Moreover, for each , . Setting , we have . From (60), we have
By the pseudomonotonicity of g, we get
which implies that
Because of , we have
Then,
This together with (61) implies that
By Lemma 2 and (62), we conclude that . So, and .
Finally, we show that the entire sequence converges weakly to . As a matter of fact, we have the following facts:
- (i)
- , exists;
- (ii)
- ;
- (iii)
- .
Thus, by Lemma 4, we deduce that the sequence weakly converges to . This completes the proof. □
Corollary 1.
Suppose that . Then the sequence generated by Algorithm 2 converges weakly to some point .
Algorithm 2: Select an initial point. Set k = 0.
Step 1. Assume that the present iterate and the two step-sizes are given. Compute
Step 2. Compute the next iterate by the following form
Step 3. Increase k by 1 and go back to Step 1. Meanwhile, update
4. Application to Split Pseudoconvex Optimization Problems and Fixed Point Problems
In this section, we apply Algorithm 1 to solve split pseudoconvex optimization problems and fixed point problems.
Let $\mathbb{R}^n$ be the n-dimensional Euclidean space and let C be a closed convex set in $\mathbb{R}^n$. Recall that a differentiable function $h : \mathbb{R}^n \to \mathbb{R}$ is said to be pseudoconvex on C if for every pair of distinct points $x, y \in C$,
$$\langle \nabla h(y), x - y \rangle \ge 0 \implies h(x) \ge h(y).$$
Now, we consider the following optimization problem:
$$\min_{x \in C} h(x), \qquad (70)$$
where $h : \mathbb{R}^n \to \mathbb{R}$ is pseudoconvex and twice continuously differentiable.
Denote by the solution set of optimization problem (70).
The following lemma reveals the relationship between the variational inequality and the pseudoconvex optimization problem.
Lemma 5
([53]). Suppose that $h$ is differentiable and pseudoconvex on C. Then $x^*$ satisfies
$$\langle \nabla h(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C,$$
if and only if $x^*$ is a minimizer of $h$ on C.
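Lemma 5 turns pseudoconvex minimization into a variational inequality for the gradient. As a toy illustration (our own example, not from the paper), h(x) = x + x³ is pseudoconvex on ℝ — its derivative 1 + 3x² is positive, so h is strictly increasing — yet not convex, since h''(x) = 6x < 0 for x < 0. Minimizing h over C = [-2, 2] by applying the extragradient iteration (4) to F = h' recovers the minimizer x* = -2:

```python
# h(x) = x + x**3 has h'(x) = 1 + 3x**2 > 0, so h is strictly increasing:
# it is pseudoconvex on R (but not convex, since h'' < 0 for x < 0).
# By Lemma 5, minimizing h over C = [-2, 2] is equivalent to the VI for
# F = h', whose solution is the left endpoint x* = -2.
F = lambda x: 1.0 + 3.0 * x * x           # gradient of h
proj = lambda x: min(max(x, -2.0), 2.0)   # projection onto C = [-2, 2]

x, lam = 1.0, 0.05                        # lam < 1/L with L = 12 on C
for _ in range(500):
    y = proj(x - lam * F(x))              # extragradient step (4)
    x = proj(x - lam * F(y))
print(x)  # -2.0
```

Since F ≥ 1 on C, every iteration moves x left by at least lam until the projection clamps it at the boundary, where it stays.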
Let and be two Euclidean spaces. Let and be two nonempty closed convex sets. Let A be a given real matrix. Let and be two pseudocontractive operators with Lipschitz constants and , respectively. Let be a differentiable function with -Lipschitz continuous gradient which is also pseudoconvex on C. Let be a differentiable function with -Lipschitz continuous gradient which is also pseudoconvex on Q.
Consider the following split problem of finding a point such that
The solution set of (71) is denoted by , i.e.,
Next, we introduce an iterative algorithm for solving the split problem (71).
Let , , and be four real number sequences. Let , , , and be five constants. Let and be two positive constants.
Theorem 2.
Suppose that and the conditions (r1)–(r3) hold. Then the sequence generated by Algorithm 3 converges to some point .
Algorithm 3: Select an initial point. Set k = 0.
Step 1. Assume that the present iterate and the two step-sizes are given. Compute
Step 2. Compute the next iterate by the following form
Step 3. Increase k by 1 and go back to Step 1. Meanwhile, update
5. Concluding Remarks
In this paper, we studied the split problem of fixed points of two pseudocontractive operators and variational inequalities of two pseudomonotone operators in Hilbert spaces. By using self-adaptive techniques, we constructed a Tseng-type iterative algorithm for solving this split problem. We proved that the proposed algorithm converges weakly to a solution of the split problem under some additional conditions imposed on the operators and the parameters. Finally, we applied our algorithm to solve split pseudoconvex optimization problems and fixed point problems.
Author Contributions
Both the authors have contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.
Funding
Li-Jun Zhu was supported by the National Natural Science Foundation of China [grant number 11861003], the Natural Science Foundation of Ningxia province [grant numbers NZ17015, NXYLXK2017B09]. Yeong-Cheng Liou was partially supported by MOST 109-2410-H-037-010 and Kaohsiung Medical University Research Foundation.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Glowinski, R. Numerical Methods for Nonlinear Variational Problems; Springer: New York, NY, USA, 1984.
- Berinde, V.; Păcurar, M. Kannan’s fixed point approximation for solving split feasibility and variational inequality problems. J. Comput. Appl. Math. 2021, 386, 113217.
- Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–711.
- Ceng, L.-C.; Petruşel, A.; Yao, J.-C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
- Zhao, X.; Köbis, M.A.; Yao, Y.; Yao, J.-C. A Projected Subgradient Method for Nondifferentiable Quasiconvex Multiobjective Optimization Problems. J. Optim. Theory Appl. 2021, in press.
- Cho, S.Y.; Qin, X.; Yao, J.C.; Yao, Y. Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 251–264.
- Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
- Ceng, L.-C.; Petruşel, A.; Yao, J.-C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
- Dong, Q.L.; Peng, Y.; Yao, Y. Alternated inertial projection methods for the split equality problem. J. Nonlinear Convex Anal. 2021, 22, 53–67.
- Yao, Y.; Li, H.; Postolache, M. Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization 2020, 1–19.
- Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416.
- Zegeye, H.; Shahzad, N.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2013, 64, 453–471.
- Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
- Chen, C.; Ma, S.; Yang, J. A General Inertial Proximal Point Algorithm for Mixed Variational Inequality Problem. SIAM J. Optim. 2015, 25, 2120–2142.
- Yao, Y.; Postolache, M.; Yao, J.C. Iterative algorithms for the generalized variational inequalities. UPB Sci. Bull. Ser. A 2019, 81, 3–16.
- Bao, T.Q.; Khanh, P.Q. A Projection-Type Algorithm for Pseudomonotone Nonlipschitzian Multivalued Variational Inequalities. Nonconvex Optim. Appl. 2006, 77, 113–129.
- Wang, X.; Li, S.; Kou, X. An Extension of Subgradient Method for Variational Inequality Problems in Hilbert Space. Abstr. Appl. Anal. 2013, 2013, 1–7.
- Zhang, C.; Zhu, Z.; Yao, Y.; Liu, Q. Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 2019, 68, 2297–2316.
- Maingé, P.-E. Strong convergence of projected reflected gradient methods for variational inequalities. Fixed Point Theory 2018, 19, 659–680.
- Malitsky, Y. Proximal extrapolated gradient methods for variational inequalities. Optim. Methods Softw. 2018, 33, 140–164.
- Yao, Y.; Postolache, M.; Yao, J.C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
- Abbas, M.; Ibrahim, Y.; Khan, A.R.; De La Sen, M. Strong Convergence of a System of Generalized Mixed Equilibrium Problem, Split Variational Inclusion Problem and Fixed Point Problem in Banach Spaces. Symmetry 2019, 11, 722.
- Hammad, H.A.; Rehman, H.U.; De La Sen, M. Shrinking Projection Methods for Accelerating Relaxed Inertial Tseng-Type Algorithm with Applications. Math. Probl. Eng. 2020, 2020, 7487383.
- Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
- Moudafi, A. Viscosity Approximation Methods for Fixed-Points Problems. J. Math. Anal. Appl. 2000, 241, 46–55.
- Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756.
- Zhao, X.; Yao, Y. Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems. Optimization 2020, 69, 1987–2002.
- Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified extragradient-like algorithms with new stepsizes for variational inequalities. Comput. Optim. Appl. 2019, 73, 913–932.
- Vuong, P.T. On the Weak Convergence of the Extragradient Method for Solving Pseudo-Monotone Variational Inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
- Thong, D.V.; Gibali, A. Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities. J. Fixed Point Theory Appl. 2019, 21, 20.
- Yao, Y.; Postolache, M.; Yao, J.C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12.
- Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
- Tseng, P. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings. SIAM J. Control Optim. 2000, 38, 431–446.
- Iusem, A.N. An iterative algorithm for the variational inequality problem. Comput. Appl. Math. 1994, 13, 103–114.
- He, B.; He, X.-Z.; Liu, H.X.; Wu, T. Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196, 43–48.
- He, B.S.; Yang, H.; Wang, S.L. Alternating Direction Method with Self-Adaptive Penalty Parameters for Monotone Variational Inequalities. J. Optim. Theory Appl. 2000, 106, 337–356.
- Yusuf, S.; Ur Rehman, H.; Gibali, A. A self-adaptive extragradient-CQ method for a class of bilevel split equilibrium problem with application to Nash Cournot oligopolistic electricity market models. Comput. Appl. Math. 2020, 39, 293.
- Yang, J.; Liu, H. A Modified Projected Gradient Method for Monotone Variational Inequalities. J. Optim. Theory Appl. 2018, 179, 197–211.
- Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
- Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007.
- Censor, Y.; Gibali, A.; Reich, S. Algorithms for the Split Variational Inequality Problem. Numer. Algorithms 2012, 59, 301–323.
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
- He, Z.; Du, W.S. Nonlinear algorithms approach to split common solution problems. Fixed Point Theory Appl. 2012, 2012, 130.
- Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2020, 69, 269–281.
- Xu, H.-K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
- Yao, Y.; Shehu, Y.; Li, X.-H.; Dong, Q.-L. A method with inertial extrapolation step for split monotone inclusion problems. Optimization 2020, 70, 741–761.
- Zhao, X.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, 82, 43–52.
- Yao, Y.; Qin, X.; Yao, J.C. Projection methods for firmly type nonexpansive operators. J. Nonlinear Convex Anal. 2018, 19, 407–415.
- Yao, Y.; Shahzad, N.; Yao, J.C. Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpathian J. Math. 2021, in press.
- Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
- Zhou, H. Strong convergence of an explicit iterative algorithm for continuous pseudo-contractions in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2009, 70, 4039–4046.
- Abbas, B.; Attouch, H.; Svaiter, B.F. Newton-like Dynamics and Forward-Backward Methods for Structured Monotone Inclusions in Hilbert Spaces. J. Optim. Theory Appl. 2013, 161, 331–360.
- Harker, P.T.; Pang, J.-S. Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Math. Program. 1990, 48, 161–220.