Abstract
Projected subgradient algorithms can be viewed as an improvement of projected algorithms and subgradient algorithms for equilibrium problems involving monotone and Lipschitz continuous bifunctions. In this paper, we present and analyze an iterative algorithm for finding a common element of the set of fixed points of a pseudocontractive operator and the solution set of a pseudomonotone equilibrium problem in Hilbert spaces. The suggested algorithm combines the projection method and the subgradient method with a linesearch technique. We prove a strong convergence result for the iterative sequence generated by this algorithm. Some applications are also included. Our result improves and extends some existing results in the literature.
1. Introduction
Throughout, let $H$ be a real Hilbert space endowed with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C\subset H$ be a nonempty closed and convex set. Let $f:C\times C\to\mathbb{R}$ be a bifunction. Recall that $f$ is said to be monotone if
$$f(x,y)+f(y,x)\le 0,\quad\forall x,y\in C.\tag{1}$$
$f$ is said to be pseudomonotone if
$$f(x,y)\ge 0\ \Longrightarrow\ f(y,x)\le 0,\quad\forall x,y\in C.\tag{2}$$
Clearly, we have the inclusion relation: every monotone bifunction is pseudomonotone.
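The inclusion is strict. For instance (an illustrative example, not taken from the cited references), on $C=\mathbb{R}$ the bifunction $f(x,y)=\dfrac{y-x}{1+x^{2}}$ is pseudomonotone but not monotone:
$$f(x,y)\ge 0\ \Rightarrow\ y\ge x\ \Rightarrow\ f(y,x)=\frac{x-y}{1+y^{2}}\le 0,\qquad\text{while}\qquad f(0,1)+f(1,0)=1-\tfrac12>0.$$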
In this paper, our research is associated with the equilibrium problem [1] of seeking an element $x^{*}\in C$ such that
$$f(x^{*},y)\ge 0,\quad\forall y\in C.\tag{3}$$
The solution set of the equilibrium problem in Equation (3) is denoted by $EP(f,C)$.
Equilibrium problems have been studied extensively in the literature (see, e.g., [2,3,4,5]). Many problems, such as variational inequalities [6,7,8,9,10,11,12,13,14,15], fixed point problems [16,17,18,19,20,21], and Nash equilibrium in noncooperative games theory [1,22], can be formulated in the form of Equation (3). An important method for solving Equation (3) is the proximal point method, which was originally introduced by Martinet [23] and further developed by Rockafellar [24] for finding a zero of maximal monotone operators. In 2000, Konnov [25] extended the proximal point method to the monotone equilibrium problem. However, the proximal point method cannot be applied for solving the pseudomonotone equilibrium problem [26].
Another basic algorithm for solving the equilibrium problem is the projection algorithm [27]. However, the projection algorithm may fail to converge for the pseudomonotone equilibrium problem. To overcome this disadvantage, the extragradient algorithm [4] can be applied to solve the pseudomonotone equilibrium problem. More precisely, for a suitable stepsize $\lambda>0$, the extragradient algorithm generates a sequence $\{x_n\}$ iteratively as follows
$$\begin{cases}y_n=\arg\min\big\{\lambda f(x_n,y)+\tfrac12\|y-x_n\|^{2}:y\in C\big\},\\[1mm] x_{n+1}=\arg\min\big\{\lambda f(y_n,y)+\tfrac12\|y-x_n\|^{2}:y\in C\big\}.\end{cases}\tag{4}$$
However, the main difficulty of the extragradient algorithm in Equation (4) is that, at each iterative step, it requires solving two strongly convex programs. To reduce this cost, subgradient algorithms [28,29] have been proposed and developed for solving a large class of equilibrium problems; they solve only one strongly convex program per iteration, rather than two as in the extragradient algorithm, and the convergence results show the efficiency of these algorithms.
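To illustrate this reduction in cost, the basic subgradient step (a sketch in the spirit of [28,29]; the stepsize rules used there are more elaborate) reads
$$g_n\in\partial f(x_n,\cdot)(x_n),\qquad x_{n+1}=P_C\big(x_n-\beta_n g_n\big),\qquad\beta_n>0,$$
where $g_n$ is a subgradient of the convex function $f(x_n,\cdot)$ at $x_n$, so that each iteration requires a single projection (respectively, a single strongly convex program) instead of two.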
At the same time, to solve the equilibrium problem in Equation (3), the bifunction $f$ is usually assumed to satisfy the following Lipschitz-type condition [30]:
$$f(x,y)+f(y,z)\ge f(x,z)-c_1\|x-y\|^{2}-c_2\|y-z\|^{2},\quad\forall x,y,z\in C,\tag{5}$$
where $c_1$ and $c_2$ are two positive constants.
It should be pointed out that the condition in Equation (5), in general, is not satisfied. Moreover, even if the condition in Equation (5) holds, finding the constants $c_1$ and $c_2$ is not an easy task. To avoid this difficulty, one can incorporate a linesearch procedure into the iterative step of the algorithm. The current study continues developing subgradient algorithms without the Lipschitz-type condition for solving the equilibrium problem.
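A typical Armijo-type linesearch rule of this kind (cf. the linesearch algorithms in [43]; the constants in the rule used below may differ) performs backtracking along the segment joining $x_n$ and $y_n$: with $\gamma\in(0,1)$ and $\alpha>0$, find the smallest $m\in\mathbb{N}$ such that
$$z_{n,m}=(1-\gamma^{m})x_n+\gamma^{m}y_n\quad\text{satisfies}\quad f(z_{n,m},x_n)-f(z_{n,m},y_n)\ge\frac{\alpha}{2}\|x_n-y_n\|^{2}.$$
No knowledge of the constants $c_1,c_2$ in Equation (5) is required.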
Another problem of interest is the fixed point problem of nonlinear operators. Recall that an operator $S:C\to C$ is said to be pseudocontractive if
$$\langle Sx-Sy,\,x-y\rangle\le\|x-y\|^{2},\quad\forall x,y\in C,$$
and $S$ is called $L$-Lipschitz if
$$\|Sx-Sy\|\le L\|x-y\|$$
for some $L>0$ and for all $x,y\in C$. If $L=1$, then $S$ is said to be nonexpansive.
It is easy to see that the class of pseudocontractive operators includes the class of nonexpansive operators. The interest in pseudocontractive operators [2,31] is due mainly to their connection with the important class of nonlinear monotone (accretive) operators.
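Indeed, $S$ is pseudocontractive precisely when $A:=I-S$ is monotone, since
$$\langle Ax-Ay,\,x-y\rangle=\|x-y\|^{2}-\langle Sx-Sy,\,x-y\rangle\ge 0\iff\langle Sx-Sy,\,x-y\rangle\le\|x-y\|^{2}.$$
The inclusion is strict: on $H=\mathbb{R}$, the map $Sx=-3x$ satisfies $\langle Sx-Sy,\,x-y\rangle=-3|x-y|^{2}\le|x-y|^{2}$ and is thus ($3$-Lipschitz) pseudocontractive, but it is clearly not nonexpansive.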
The fixed point problem has numerous applications in science and engineering, and it includes the optimization problem [32], the convex feasibility problem [2], the variational inequality problem [33], and so on. The fixed point problem can be solved by using iterative methods, such as the Mann method [34], the Halpern method [35], and the hybrid method [36].
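For a nonexpansive operator $S$, a starting point $x_0$, an anchor $u\in C$, and parameters $\{\alpha_n\}\subset(0,1)$, these classical schemes take the form
$$x_{n+1}=(1-\alpha_n)x_n+\alpha_n Sx_n\quad\text{(Mann [34])},\qquad x_{n+1}=\alpha_n u+(1-\alpha_n)Sx_n\quad\text{(Halpern [35])}.$$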
In this paper, we are devoted to studying iterative algorithms for finding a common element of the set of solutions of the equilibrium problem and the set of fixed points of a wide class of nonlinear operators. The main motivation for considering such a common problem is due to its possible applications in network resource allocation, signal processing, and image recovery [28,37]. Recently, iterative algorithms for solving a common problem of the equilibrium problem and the fixed point problem have been investigated by many researchers [28,38,39,40]. In particular, Nguyen, Strodiot, and Nguyen [41] (Algorithm 3) presented the following hybrid self-adaptive method for solving the equilibrium and the fixed point problem:
Let and . Let and . Let and set .
Step 1. Compute and where m is the smallest nonnegative integer such that
Step 2. Calculate , where and if and otherwise.
Step 3. Calculate , where is nonexpansive.
Step 4. Compute , where
Step 5. Set and return to Step 1.
We observe that, in the above algorithm, f is assumed to be monotone, the involved operator is nonexpansive, and the construction of the half-space is complicated.
The purpose of this paper is to improve and extend the main result in [41] to a general case: (i) We consider the pseudomonotone equilibrium problem, that is, f is assumed to be pseudomonotone. (ii) We extend the result from nonexpansive operators to pseudocontractive operators, which include nonexpansive operators as a special case. (iii) We adopt a half-space of a simpler form. We propose an iterative algorithm for seeking a common solution of the pseudomonotone equilibrium problem and a fixed point of a pseudocontractive operator. The suggested iterative algorithm is based on the projection method and the subgradient method with a linesearch technique. We prove a strong convergence result for the iterative sequence generated by this algorithm.
2. Notations and Lemmas
Throughout, we assume that $C$ is a convex and closed subset of a real Hilbert space $H$. The following symbols are needed in the paper.
- $x_n\rightharpoonup x$ indicates the weak convergence of $\{x_n\}$ to $x$ as $n\to\infty$.
- $x_n\to x$ indicates the strong convergence of $\{x_n\}$ to $x$ as $n\to\infty$.
- $Fix(S)$ means the set of fixed points of $S$.
- $\omega_w(x_n):=\{x:\exists\,\{x_{n_j}\}\subset\{x_n\}\ \text{such that}\ x_{n_j}\rightharpoonup x\}$ denotes the set of weak cluster points of $\{x_n\}$.
Let $g:H\to(-\infty,+\infty]$ be a function.
- $g$ is said to be proper if the set $\{x\in H:g(x)<+\infty\}$ is nonempty.
- $g$ is said to be lower semicontinuous if the level set $\{x\in H:g(x)\le a\}$ is closed for each $a\in\mathbb{R}$.
- $g$ is said to be convex if $g(\lambda x+(1-\lambda)y)\le\lambda g(x)+(1-\lambda)g(y)$ for every $x,y\in H$ and $\lambda\in[0,1]$.
- $g$ is said to be $\beta$-strongly convex ($\beta>0$) if $g(\lambda x+(1-\lambda)y)\le\lambda g(x)+(1-\lambda)g(y)-\tfrac{\beta}{2}\lambda(1-\lambda)\|x-y\|^{2}$ for every $x,y\in H$ and $\lambda\in(0,1)$.
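For example, $g(x)=\tfrac12\|x\|^{2}$ is $1$-strongly convex, since for all $x,y\in H$ and $\lambda\in(0,1)$,
$$\lambda\cdot\tfrac12\|x\|^{2}+(1-\lambda)\cdot\tfrac12\|y\|^{2}-\tfrac12\|\lambda x+(1-\lambda)y\|^{2}=\tfrac12\lambda(1-\lambda)\|x-y\|^{2}.$$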
Let $g:H\to(-\infty,+\infty]$ be a proper, lower semicontinuous, and convex function. Then, the subdifferential of $g$ is defined by
$$\partial g(x)=\{u\in H:g(y)\ge g(x)+\langle u,\,y-x\rangle,\ \forall y\in H\}$$
for each $x\in H$.
It is known that $\partial g$ possesses the following properties:
- (i) $\partial g$ is a set-valued maximal monotone operator.
- (ii) If $g$ is $\beta$-strongly convex ($\beta>0$), then $\partial g$ is $\beta$-strongly monotone (i.e., $\langle u-v,\,x-y\rangle\ge\beta\|x-y\|^{2}$ for all $x,y\in H$, $u\in\partial g(x)$, and $v\in\partial g(y)$).
- (iii) $x^{*}\in C$ is a solution to the optimization problem $\min_{x\in C}g(x)$ if and only if $0\in\partial g(x^{*})+N_C(x^{*})$, where $N_C(x^{*})$ means the normal cone of $C$ at $x^{*}$ defined by $N_C(x^{*})=\{w\in H:\langle w,\,y-x^{*}\rangle\le 0,\ \forall y\in C\}$.
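As a worked special case of (iii), fix $x\in H$ and take $g(y)=\tfrac12\|y-x\|^{2}$, so that $\partial g(y)=\{y-x\}$. Then $y^{*}$ minimizes $g$ over $C$ if and only if
$$0\in(y^{*}-x)+N_C(y^{*})\iff\langle x-y^{*},\,y-y^{*}\rangle\le 0,\quad\forall y\in C,$$
which is exactly the characterization of the metric projection $y^{*}=P_Cx$ given in Equation (7) below.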
Let $f$ be a bifunction satisfying the following assumptions:
- (f1): $f(x,x)=0$ for all $x\in C$;
- (f2): $f$ is pseudomonotone on $C$;
- (f3): $f$ is jointly sequentially weakly continuous on $\Delta\times\Delta$, where $\Delta$ is an open convex set containing $C$ (recall that $f$ is called jointly sequentially weakly continuous on $\Delta\times\Delta$ if $x_n\rightharpoonup x$ and $y_n\rightharpoonup y$ imply $f(x_n,y_n)\to f(x,y)$); and
- (f4): $f(x,\cdot)$ is convex and subdifferentiable on $C$ for all $x\in C$.
For each $x\in C$, we use $\partial_2 f(x,x)$ to denote the subdifferential of the convex function $f(x,\cdot)$ at $x$.
Recall that the metric projection $P_C$ is the orthogonal projection from $H$ onto $C$, which possesses the following characteristic: for given $x\in H$,
$$\langle x-P_Cx,\,y-P_Cx\rangle\le 0,\quad\forall y\in C.\tag{7}$$
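As a concrete illustration (a minimal sketch, not part of the original analysis), the following code computes the projection onto a half-space $C=\{z:\langle a,z\rangle\le b\}$, for which $P_Cx$ has the closed form $x-\frac{\max\{\langle a,x\rangle-b,\,0\}}{\|a\|^{2}}\,a$, and numerically checks the characterization in Equation (7).

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection of x onto the half-space C = {z : <a, z> <= b} (closed form)."""
    viol = np.dot(a, x) - b
    if viol <= 0:          # x already belongs to C
        return x.copy()
    return x - (viol / np.dot(a, a)) * a

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), 1.0
x = 3.0 * rng.normal(size=5)
px = proj_halfspace(x, a, b)

# Characterization (7): <x - Px, y - Px> <= 0 for every feasible y in C.
for _ in range(1000):
    y = 3.0 * rng.normal(size=5)
    if np.dot(a, y) <= b:
        assert np.dot(x - px, y - px) <= 1e-10
print("projection:", px)
```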
The following lemmas are used in the next section.
Lemma 1
([42]). In a Hilbert space H, we have
$$\|x+y\|^{2}=\|x\|^{2}+2\langle x,\,y\rangle+\|y\|^{2},\quad\forall x,y\in H,$$
and
$$\|\lambda x+(1-\lambda)y\|^{2}=\lambda\|x\|^{2}+(1-\lambda)\|y\|^{2}-\lambda(1-\lambda)\|x-y\|^{2},\quad\forall x,y\in H,\ \lambda\in[0,1].$$
Lemma 2
([31]). Assume that the operator $S:C\to C$ is $L$-Lipschitz pseudocontractive. Then, for all $x\in C$ and $p\in Fix(S)$, we have
$$\|S((1-\zeta)x+\zeta Sx)-p\|^{2}\le\|x-p\|^{2}+(1-\zeta)\|x-S((1-\zeta)x+\zeta Sx)\|^{2},$$
where $0<\zeta<\dfrac{1}{\sqrt{1+L^{2}}+1}$.
The next lemma plays a critical role; it can be considered as an infinite-dimensional version of Theorem 24.5 in [43]. The proof can be found in [44].
Lemma 3.
Assume that the bifunction $f$ satisfies Assumptions (f3) and (f4). For given two points $\bar{x},\bar{y}\in C$ and two sequences $\{x_n\}\subset C$ and $\{y_n\}\subset C$, if $x_n\rightharpoonup\bar{x}$ and $y_n\rightharpoonup\bar{y}$, respectively, then, for any $\epsilon>0$, there exist $\eta>0$ and $n_\epsilon\in\mathbb{N}$ verifying
$$\partial_2 f(x_n,y_n)\subset\partial_2 f(\bar{x},\bar{y})+\frac{\epsilon}{\eta}B$$
for every $n\ge n_\epsilon$, where $B$ denotes the closed unit ball of $H$.
The following lemma is the demiclosedness principle for pseudocontractive operators.
Lemma 4
([45]). If the operator $S:C\to C$ is continuous pseudocontractive, then:
- (i) the fixed point set $Fix(S)$ is closed and convex; and
- (ii) $S$ satisfies demiclosedness, i.e., $x_n\rightharpoonup x$ and $(I-S)x_n\to 0$ as $n\to\infty$ imply that $x\in Fix(S)$.
Lemma 5
([46]). For a given sequence $\{x_n\}\subset H$ and $u\in H$, let $q=P_Cu$. If $\omega_w(x_n)\subset C$ and $\|x_n-u\|\le\|u-q\|$ for all $n$, then $x_n\to q$.
3. Main Results
In this section, we first present our algorithm for solving the pseudomonotone equilibrium problem and the fixed point problem (Algorithm 1) and then prove the convergence of the suggested algorithm. We begin by stating several assumptions on the underlying spaces, the involved operators, and the control parameters.
Assumptions:
- (A1): $C\subset H$ is closed and convex and $\Delta$ is a given open set which contains $C$;
- (A2): the bifunction $f$ satisfies Assumptions (f1)–(f4) stated in Section 2 (under this condition, $EP(f,C)$ is closed and convex [3]);
- (A3): the operator $S:C\to C$ is Lipschitz pseudocontractive with Lipschitz constant $L$;
- (A4): the intersection $EP(f,C)\cap Fix(S)$ is nonempty;
- (C1): the sequence satisfies: with for all ;
- (C2): the sequences and satisfy: ; and
- (C3): and are two constants.
Algorithm 1: Let $x_0\in C$ be an initial guess.
Proposition 1.
For each , we have
Proof.
According to Equation (8), by the definition of , we have
It follows from Equation (14) that there exists verifying it yields that
By the definition of subgradient of at , we obtain
Combine Equations (15) and (16) to conclude the desired result. □
Remark 1.
The search rule in Equation (10) is well-defined, i.e., there exists such that Equation (10) holds.
Proof.
Case 1. . In this case, . Consequently, because of (f1). Thus, Equation (10) holds and .
Case 2. . Suppose that the search rule in Equation (10) is not well-defined. Hence, must violate the inequality in Equation (10), i.e., for every , we have
Noting that and letting , we conclude that as . Thanks to Condition (f3), we deduce that and . This, together with Equation (17), implies that
Letting in Equation (13) and noting that , we deduce
Combine the above inequality and Equation (18) to derive that . Hence, , which is incompatible with the assumption. Consequently, the search rule in Equation (10) is well-defined. □
Remark 2.
If , then and thus and is well-defined.
Proof.
Suppose that . Since , from Equation (9), . By using the convexity of , we have
Substituting Equation (10) into the last inequality, we get . On the other hand, by the assumption and the definition of the subdifferential, we deduce . Hence, , which is a contradiction. □
Proposition 2.
The sequence generated by Equation (12) is well-defined.
Proof.
Firstly, we prove by induction that for all . is obvious. Suppose that for some . Pick up . In the light of Equation (12) and Lemmas 1 and 2, we obtain
Since f is pseudomonotone and , . According to , by the subdifferential inequality, we have . It follows that .
Case 1. . In terms of Equation (11), we get
Combining Equations (19) and (20), we obtain
and hence .
Case 2. . In this case, and is obvious. Thus, for all .
Secondly, we show that is closed and convex for all . It is obvious that is closed and convex. Suppose that is closed and convex for some . For , note that is equivalent to . It is obvious that is nonempty, closed, and convex. Therefore, the sequence is well-defined. □
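If, as is standard in hybrid (CQ-type) methods, the set $C_{n+1}$ in Equation (12) is cut out of $C_n$ by an inequality of the form $\|z_n-z\|\le\|x_n-z\|$ (an assumed form, stated here only for illustration), then closedness and convexity are immediate, because
$$\|z_n-z\|\le\|x_n-z\|\iff\langle z,\,x_n-z_n\rangle\le\tfrac12\big(\|x_n\|^{2}-\|z_n\|^{2}\big),$$
i.e., $C_{n+1}$ is the intersection of $C_n$ with a half-space.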
Proposition 3.
and .
Proof.
Since , by the property in Equation (7) of the metric projection, for any , we have
Then,
It yields
which, by selecting , implies that the sequence is bounded.
In terms of Equation (22), we have due to . Thus,
From Equation (23), we deduce . Thus, the limit exists, denoted by q. This, together with Equation (24), implies that . Thanks to the definition of and , we derive . Hence,
By Equation (21), we obtain
Therefore,
and
On the other hand,
It follows from Equation (26) that
□
Proposition 4.
.
Proof.
Selecting any , there exists a subsequence such that . Set for each . Noting that , then there exists such that
Observe that is -strongly monotone because is -strongly convex due to the convexity of . Thus, we have
where .
Taking into account Equations (28) and (29), we obtain
It follows that
Since , by Lemma 3, for any , there exist and such that
The above inclusion and Equation (30) yield that there exists such that for all . This indicates that the sequence is bounded owing to the boundedness of . Then, there exists a subsequence of , again denoted by , such that . Consequently, by the definition of , it is also bounded. Thus, there exists a subsequence of , without loss of generality still denoted by , that converges to . Applying Lemma 3, for any , there exist and such that
Thus, is bounded. This, together with Equation (25), implies
Next, we show . We consider two cases. Case 1: . According to Equation (13), we have
Since is sequentially weakly continuous on the open set , letting in Equation (32), we deduce that , i.e., .
Case 2: . By the convexity of , we get
which implies that . Furthermore, from Equation (10), we have . Hence,
If , then there exists a subsequence of , still denoted by , such that . In the light of Equations (31) and (33), we conclude that . In the case where as , let be the smallest positive integer such that, for each i,
where .
Consequently, must violate the above search rule in Equation (34), i.e.,
where .
At the same time, by Equation (13), we obtain
From Equations (35) and (36), we have
Letting in Equation (37) and noting that , and , we deduce
It yields that . This, together with Equation (36), implies that . Consequently, . Again, applying Equation (13), we conclude that for all , i.e., .
Next, we show . Observe that by Equation (25) and thus . This, together with Lemma 4 and Equation (27), implies that . Thus, . □
Theorem 1.
The iterate defined by Algorithm 1 converges strongly to .
Proof.
First, by Conditions (A2) and (A4) and Lemma 4, is nonempty, closed and convex. Hence, is well-defined. Thanks to Equation (23), we deduce
By Proposition 4, we obtain . Hence, all conditions of Lemma 5 are fulfilled. Consequently, we conclude that by the conclusion of Lemma 5. □
Remark 3.
In Algorithm 1, if S is nonexpansive, then the conclusion still holds. The construction of half-space in Algorithm 1 is simpler than that in [41]. Our result improves and extends the corresponding result in [41].
4. Applications
In Equation (3), setting $f(x,y)=\langle Ax,\,y-x\rangle$, where $A:C\to H$ is an operator, the EP in Equation (3) reduces to the following variational inequality (VI) of seeking $x^{*}\in C$ verifying
$$\langle Ax^{*},\,y-x^{*}\rangle\ge 0,\quad\forall y\in C.\tag{38}$$
The solution set of the variational inequality in Equation (38) is denoted by $VI(A,C)$.
In this case, solving the strongly convex program in the algorithm reduces to computing a projection of the form $P_C(x-\lambda Ax)$. The Armijo-like assumption
can be expressed as
Consequently, we obtain the following algorithm for solving the common problem of the VI and the fixed point problem (see Algorithm 2).
Theorem 2.
Let $C\subset H$ be closed and convex and Δ be a given open set which contains C. Let $A:\Delta\to H$ be a pseudomonotone and sequentially weakly continuous operator. Let the operator $S:C\to C$ be Lipschitz pseudocontractive with Lipschitz constant $L$. Suppose that the intersection $VI(A,C)\cap Fix(S)$ is nonempty. Assume that Conditions (C1)–(C3) are satisfied. Then, the iterate defined by Algorithm 2 converges strongly to .
In Algorithm 2, setting $S=I$, the identity operator, then and Condition (C2) reduces to Condition (C4): . In this case, we have the following Algorithm 3 and corollary for solving the VI.
Algorithm 2: Let $x_0\in C$ be an initial guess.
Algorithm 3: Let $x_0\in C$ be an initial guess.
Corollary 1.
Let $C\subset H$ be closed and convex and Δ be a given open set which contains C. Let $A:\Delta\to H$ be a pseudomonotone and sequentially weakly continuous operator. Suppose that $VI(A,C)$ is nonempty. Assume that Conditions (C1), (C3), and (C4) are satisfied. Then, the iterate defined by Algorithm 3 converges strongly to .
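For completeness, the following minimal sketch illustrates a projection-type method in the variational-inequality setting of Corollary 1. It implements the classical extragradient iteration $y_n=P_C(x_n-\lambda Ax_n)$, $x_{n+1}=P_C(x_n-\lambda Ay_n)$ on a toy monotone problem; it is only an illustration of the class of projection methods discussed above, not an implementation of Algorithm 3, whose linesearch and half-space steps differ.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient_vi(A, x0, lam, lo, hi, iters=500):
    """Classical (Korpelevich) extragradient method for VI(A, C) on a box C."""
    x = x0.copy()
    for _ in range(iters):
        y = proj_box(x - lam * A(x), lo, hi)   # prediction step
        x = proj_box(x - lam * A(y), lo, hi)   # extragradient correction step
    return x

# Toy monotone operator A(x) = M x + q with M = skew-symmetric part + small identity.
rng = np.random.default_rng(1)
n = 4
B = rng.normal(size=(n, n))
M = B - B.T + 0.1 * np.eye(n)   # monotone: its symmetric part is positive semidefinite
q = rng.normal(size=n)
A = lambda x: M @ x + q

x_star = extragradient_vi(A, x0=np.zeros(n), lam=0.05, lo=-1.0, hi=1.0)
# Residual of the natural map x - P_C(x - A(x)): (near) zero at a solution of the VI.
print(np.linalg.norm(x_star - proj_box(x_star - A(x_star), -1.0, 1.0)))
```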
5. Conclusions
In this paper, we investigate pseudomonotone equilibrium problems and fixed point problems in Hilbert spaces. We present an iterative algorithm for finding a common element of the set of fixed points of a pseudocontractive operator and the solution set of a pseudomonotone equilibrium problem without Lipschitz-type continuity. We prove the strong convergence of the suggested algorithm under some additional assumptions. Since, in our suggested Algorithm 1, the involved bifunction f is assumed to be pseudomonotone, a natural problem arises: how to weaken this assumption to the nonmonotone case.
Author Contributions
Conceptualization, Y.Y. and N.S.; Data curation, Y.Y.; Formal analysis, Y.Y. and J.-C.Y.; Funding acquisition, N.S.; Investigation, Y.Y., N.S. and J.-C.Y.; Methodology, Y.Y. and N.S.; Project administration, J.-C.Y.; Supervision, J.-C.Y. All the authors have contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.
Funding
Yonghong Yao was supported in part by the grant TD13-5033. Jen-Chih Yao was partially supported by the Grant MOST 106-2923-E-039-001-MY3.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
- Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
- Quoc, T.D.; Muu, L.D.; Hien, N.V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
- Tada, A.; Takahashi, W. Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133, 359–370. [Google Scholar] [CrossRef]
- Bello-Cruz, J.-Y.; Iusem, A.-N. Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46, 247–263. [Google Scholar] [CrossRef]
- Malitsky, Y. Proximal extrapolated gradient methods for variational inequalities. Optim. Meth. Softw. 2018, 33, 140–164. [Google Scholar] [CrossRef]
- Yao, Y.; Postolache, M.; Yao, J.-C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Politeh. Buch. Ser. A 2020, 83, 3–12. [Google Scholar]
- Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified extragradient-like algorithms with new stepsizes for variational inequalities. Comput. Optim. Appl. 2019, 73, 913–932. [Google Scholar] [CrossRef]
- Thong, D.V.; Gibali, A. Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities. J. Fixed Point Theory Appl. 2019, 21, 20. [Google Scholar] [CrossRef]
- Zhang, C.; Zhu, Z.; Yao, Y.; Liu, Q. Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 2019, 68, 2293–2312. [Google Scholar] [CrossRef]
- Yang, J.; Liu, H.W. A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 2018, 179, 197–211. [Google Scholar] [CrossRef]
- Yao, Y.; Postolache, M.; Liou, Y.-C.; Yao, Z.-S. Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 2016, 10, 1519–1528. [Google Scholar] [CrossRef]
- Thakur, B.S.; Postolache, M. Existence and approximation of solutions for generalized extended nonlinear variational inequalities. J. Inequal. Appl. 2013, 2013, 590. [Google Scholar] [CrossRef]
- Zhao, X.P.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, in press. [Google Scholar]
- Zegeye, H.; Shahzad, N.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2015, 64, 453–471. [Google Scholar] [CrossRef]
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134. [Google Scholar] [CrossRef]
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed Points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502. [Google Scholar] [CrossRef]
- Yao, Y.; Postolache, M.; Yao, J.-C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61. [Google Scholar] [CrossRef]
- Yao, Y.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628. [Google Scholar] [CrossRef]
- Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882. [Google Scholar]
- Muu, L.-D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
- Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Autom. Inform. Rech. Opér. Anal. Numér. 1970, 4, 154–159. [Google Scholar]
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
- Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin, Germany, 2001. [Google Scholar]
- Dang, V.H. Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 2018, 77, 983–1001. [Google Scholar]
- Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003. [Google Scholar]
- Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261. [Google Scholar] [CrossRef]
- Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107. [Google Scholar]
- Mastroeni, G. Gap functions for equilibrium problems. J. Glob. Optim. 2003, 27, 411–426. [Google Scholar] [CrossRef]
- Yao, Y.; Liou, Y.C.; Yao, J.C. Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 2015, 127. [Google Scholar] [CrossRef]
- Iiduka, H. Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 2013, 23, 1–26. [Google Scholar] [CrossRef]
- Zhao, X.P.; Yao, Y.H. Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems. Optimization 2020, in press. [Google Scholar] [CrossRef]
- Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
- Halpern, B. Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
- Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379. [Google Scholar] [CrossRef]
- Iiduka, H.; Uchida, M. Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Commun. Lett. 2011, 15, 596–598. [Google Scholar] [CrossRef]
- Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374. [Google Scholar] [CrossRef]
- Hieu, D.V.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 2016, 73, 197–217. [Google Scholar] [CrossRef]
- Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 2015, 64, 429–451. [Google Scholar] [CrossRef]
- Nguyen, T.T.V.; Strodiot, J.J.; Nguyen, V.H. Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 2014, 160, 809–831. [Google Scholar] [CrossRef]
- Dang, V.-H. An extension of hybrid method without extrapolation step to equilibrium problems. JIMO 2017, 13, 1723–1741. [Google Scholar]
- Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
- Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 2012, 155, 605–627. [Google Scholar] [CrossRef]
- Zhou, H. Strong convergence of an explicit iterative algorithm for continuous pseudocontractions in Banach spaces. Nonlinear Anal. 2009, 70, 4039–4046. [Google Scholar] [CrossRef]
- Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).