Abstract
Herein, we present a new parallel extragradient method for solving systems of variational inequalities and common fixed point problems for demicontractive mappings in real Hilbert spaces. The algorithm determines the next iterate by computing a computationally inexpensive projection onto a sub-level set, which is constructed using a convex combination of finitely many functions and an Armijo line-search procedure. A strong convergence result is proved without assuming Lipschitz continuity of the cost operators of the variational inequalities. Finally, some numerical experiments are performed to illustrate the performance of the proposed method.
Keywords:
extragradient method; variational inequalities; common solution; common fixed point; pseudomonotone; demicontractive
MSC:
65K15; 47J20; 65J15; 90C33
1. Introduction
Let H be a real Hilbert space and C be a nonempty, closed, and convex subset of H. Let A : H → H be an operator. The Variational Inequality (VI) problem is to find x* ∈ C such that
⟨A(x*), x − x*⟩ ≥ 0 for all x ∈ C.(1)
The solution set of the VI (1) is denoted by . The VI is a powerful tool for studying many nonlinear problems arising in mechanics, optimization, network control, equilibrium problems, etc.; see [,,]. The problem of finding a common solution of a system of VIs has recently received a lot of attention from many authors; see, e.g., [,,,,,,] and the references therein. This problem covers, as special cases, the convex feasibility problem, the common equilibrium problem, etc. In this paper, we consider the following common problem.
Problem 1.
Find an element such that
where, for , are pseudomonotone operators, , are -demicontractive mappings, and denotes the fixed point set of
The motivation for considering Problem 1 lies in its possible applications to mathematical models whose constraints can be expressed as common variational inequalities and common fixed point problems. This happens, in particular, in network resource allocation, image processing, Nash equilibrium problems, etc.; see, e.g., [,,,].
The simplest method for solving the VI (1) is the projection method of Goldstein [], which is a natural extension of the gradient projection method; for λ > 0, it is given by
x_{n+1} = P_C(x_n − λA(x_n)).(3)
The projection method (3) converges weakly to a solution of VI (1) only if A satisfies rather strong conditions, such as being β-strongly monotone and L-Lipschitz continuous. When these conditions are relaxed, the method may fail to converge to a solution of the VI (1). Korpelevich [] later introduced an Extragradient Method (EgM) for solving the VI when A is monotone and L-Lipschitz continuous: for λ ∈ (0, 1/L),
y_n = P_C(x_n − λA(x_n)), x_{n+1} = P_C(x_n − λA(y_n)).
The EgM has been extended to infinite-dimensional spaces by many authors; see, for instance, [,,,,,,,]. Moreover, several modifications of the EgM have been introduced recently; see [,,,,,,]. For finding a common element of the set of solutions of a monotone variational inequality and the set of fixed points of a k-demicontractive mapping, Maingé [] introduced the following extragradient method (4):
where A is a monotone and L-Lipschitz continuous operator, S is a k-demicontractive mapping, and B is a strongly monotone operator. The author proved strong convergence of the sequence generated by (4), provided the step size satisfies
Recently, Hieu et al. [] modified (4) and introduced the following extragradient method for approximating a common solution of a VI and a fixed point problem; given an initial point, compute the iterates via
where A, S, and B are as defined for Algorithm (4). They also proved strong convergence of the sequence generated by (6) with the aid of (5). An obvious disadvantage of (4) and (6), which impedes their wide usage, is the assumption that the Lipschitz constant of A admits a simple estimate. Moreover, in many practical problems, the cost operator may not satisfy a Lipschitz condition.
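To make the contrast between the projection method (3) and the EgM concrete, here is a minimal numerical sketch (ours, not from the paper; the ball constraint, the rotation operator, and the step size λ = 0.4 are illustrative choices):

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto C = {x : ||x|| <= radius}.
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def extragradient(A, x0, lam, proj, n_iters=1000):
    # Korpelevich's EgM: y_n = P_C(x_n - lam*A(x_n)),
    # x_{n+1} = P_C(x_n - lam*A(y_n)), with lam in (0, 1/L).
    x = x0
    for _ in range(n_iters):
        y = proj(x - lam * A(x))
        x = proj(x - lam * A(y))
    return x

# A(x) = Mx with M skew-symmetric: monotone and 1-Lipschitz, but not
# strongly monotone; the unique solution of the VI over the ball is 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
sol = extragradient(A, np.array([1.0, 0.5]), lam=0.4, proj=project_ball)
```

For this rotation operator, the plain projection iteration (3) pushes the iterates onto the boundary of the ball and cycles there without ever approaching the solution, which is precisely the failure mode the extragradient correction repairs.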
On the other hand, for finding a common fixed point of quasi-nonexpansive mappings, Anh and Hieu [,] proposed a parallel hybrid algorithm as follows,
Furthermore, Censor et al. [] proposed a parallel hybrid-extragradient method for a finite family of variational inequalities as follows: choose an initial point and compute
Motivated by (7) and (8), Anh and Phuong [] recently introduced the following parallel hybrid-extragradient method (Algorithm 1) for solving variational inequalities and fixed point problems of nonexpansive mappings.
| Algorithm 1: PHEM |
| Initialization: Given where are the Lipschitz constant of , , Iterative steps: Compute in parallel |
Meanwhile, Hieu [] introduced a parallel hybrid-subgradient extragradient method, which also requires finding the farthest element from the iterate, as follows.
The author proved that the sequence generated by Algorithm 2 converges strongly to a solution of the systems of VI.
| Algorithm 2: PHSEM |
| Initialization: Choose Set Step 1: Find N projections on in parallel, i.e., Step 2: Find N projections on half-spaces in parallel, i.e., Step 3: Find the farthest element from among i.e., Step 4: Construct the half-spaces and such that Step 5: Find the next iterate via |
However, it should be observed that, at each step of the parallel hybrid-extragradient methods mentioned above, one needs to calculate a projection onto the intersection of two sets. This can be computationally expensive when the feasible set is not simple. Moreover, the convergence of these algorithms requires prior knowledge of the Lipschitz constants of the cost operators, which are very difficult to estimate in practice.
Motivated by these results, in this paper we introduce an efficient parallel extragradient method which requires neither the computation of a projection onto the intersection of sets nor prior estimates of the Lipschitz constants of the cost operators. In particular, we highlight some contributions of this paper as follows.
- In our method, the involved cost operators do not need to satisfy a Lipschitz condition. Instead, we assume that they are pseudomonotone and weakly sequentially continuous, which is more general than the monotonicity and Lipschitz continuity assumed in previous results.
- The sequence generated by our method converges strongly to a solution of (2) without the aid of a prior estimate of any Lipschitz constant.
- Furthermore, we perform only a single projection onto C in parallel, and our algorithm does not need to find the farthest element from the iterate .
- Moreover, our algorithm does not require the computation of a projection onto the intersection of sets, which makes it computationally simpler.
2. Preliminaries
In this section, we give some definitions and basic results that will be used in our subsequent analysis. Let H be a real Hilbert space. The weak and strong convergence of a sequence {x_n} to x are denoted by x_n ⇀ x and x_n → x as n → ∞, respectively. Let C be a nonempty, closed, and convex subset of H. The metric projection P_C of H onto C assigns to each x ∈ H the unique vector P_C(x) ∈ C satisfying ‖x − P_C(x)‖ ≤ ‖x − y‖ for all y ∈ C.
It is well known that P_C has the following properties (see, e.g., in [,]).
- (i)
- For each x ∈ H and z ∈ C, z = P_C(x) if and only if ⟨x − z, z − y⟩ ≥ 0 for all y ∈ C;
- (ii)
- For any x, y ∈ H, ‖P_C(x) − P_C(y)‖² ≤ ⟨P_C(x) − P_C(y), x − y⟩;
- (iii)
- For any x ∈ H and y ∈ C, ‖P_C(x) − y‖² ≤ ‖x − y‖² − ‖x − P_C(x)‖².
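Since the metric projection is central to all of the algorithms in this paper, the following sketch (illustrative, not from the paper; the constraint data are arbitrary) computes the standard closed-form projection onto a half-space and checks the characterization and nonexpansiveness properties numerically:

```python
import numpy as np

def project_halfspace(x, a, b):
    # Closed-form projection onto T = {y : <a, y> <= b}:
    # move x along a only if it violates the constraint.
    viol = max(0.0, a @ x - b)
    return x - (viol / (a @ a)) * a

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), 0.3
x, y = rng.standard_normal(5), rng.standard_normal(5)
px, py = project_halfspace(x, a, b), project_halfspace(y, a, b)
z = px - a   # a point lying strictly inside the half-space T

# Characterization of the projection: <x - P_T(x), z - P_T(x)> <= 0
# for every z in T; nonexpansiveness: ||P_T(x)-P_T(y)|| <= ||x-y||.
char = (x - px) @ (z - px)
```

Because half-space projections are explicit, algorithms that replace the feasible set by half-spaces or sub-level sets avoid the expensive inner projection subproblem discussed in the Introduction.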
Next, we state some classes of functions that play essential roles in our convergence analysis.
Definition 1.
The operator A : H → H is said to be
- 1.
- β-strongly monotone if there exists β > 0 such that ⟨Ax − Ay, x − y⟩ ≥ β‖x − y‖² for all x, y ∈ H;
- 2.
- monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ H;
- 3.
- η-strongly pseudomonotone if there exists η > 0 such that ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ η‖x − y‖² for all x, y ∈ H;
- 4.
- pseudomonotone if ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0 for all x, y ∈ H;
- 5.
- L-Lipschitz continuous if there exists a constant L > 0 such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ H; when L ∈ [0, 1), A is called a contraction;
- 6.
- weakly sequentially continuous if, for any sequence {x_n} ⊂ H such that x_n ⇀ x, it follows that Ax_n ⇀ Ax.
It is easy to see that (1) ⇒ (2) ⇒ (4) and (1) ⇒ (3) ⇒ (4), but the converse implications do not hold in general, see, for instance, in [].
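A simple one-dimensional illustration of the gap between (2) and (4) (our own example, not taken from the paper): any strictly positive operator on the real line is pseudomonotone, even when it is decreasing and hence not monotone.

```python
import math

def A(x):
    # A(x) = exp(-x): strictly positive and strictly decreasing on R.
    return math.exp(-x)

# Not monotone: (A(x) - A(y))*(x - y) < 0 for x != y, since A decreases.
not_monotone_witness = (A(1.0) - A(0.0)) * (1.0 - 0.0)

# Pseudomonotone: A(y)*(x - y) >= 0 forces x >= y (as A(y) > 0),
# and then A(x)*(x - y) >= 0 because A(x) > 0 as well.
pairs = [(-2.0, 1.0), (0.5, 3.0), (1.0, 1.0)]
pseudo_ok = all(A(x) * (x - y) >= 0
                for y, x in pairs if A(y) * (x - y) >= 0)
```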
Lemma 1
([] Lemma 2.1). Consider the VI (1) with C a nonempty closed convex subset of H and A : C → H pseudomonotone and continuous. Then, x* is a solution of VI (1) if and only if ⟨A(x), x − x*⟩ ≥ 0 for all x ∈ C.
Definition 2
([]). A mapping S : H → H is called
- (i)
- nonexpansive if ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ H;
- (ii)
- quasi-nonexpansive if Fix(S) ≠ ∅ and ‖Sx − p‖ ≤ ‖x − p‖ for all x ∈ H and p ∈ Fix(S);
- (iii)
- μ-strictly pseudocontractive if there exists a constant μ ∈ [0, 1) such that ‖Sx − Sy‖² ≤ ‖x − y‖² + μ‖(I − S)x − (I − S)y‖² for all x, y ∈ H;
- (iv)
- κ-demicontractive if Fix(S) ≠ ∅ and there exists κ ∈ [0, 1) such that ‖Sx − p‖² ≤ ‖x − p‖² + κ‖x − Sx‖² for all x ∈ H and p ∈ Fix(S), where Fix(S) denotes the fixed point set of S.
It is easy to see that the class of demicontractive mappings contains the class of quasi-nonexpansive and strictly pseudocontractive mappings. Due to this generality and its possible applications in applied analysis, the class of demicontractive mappings has continued to attract the attention of many authors in recent years.
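To see that these inclusions are strict, consider the standard toy example S(x) = −(3/2)x on the real line (ours, not from the paper): it is κ-demicontractive with κ = 1/5 but not quasi-nonexpansive.

```python
def S(x):
    # S(x) = -1.5*x on R; its only fixed point is p = 0.
    return -1.5 * x

p, kappa = 0.0, 0.2
x = 2.0
lhs = (S(x) - p) ** 2                                  # |Sx - p|^2 = 9.0
quasi_bound = (x - p) ** 2                             # |x - p|^2  = 4.0
demi_bound = (x - p) ** 2 + kappa * (x - S(x)) ** 2    # 4 + 0.2*25 = 9.0
# lhs > quasi_bound: S is NOT quasi-nonexpansive;
# lhs <= demi_bound: S IS 0.2-demicontractive.
```

In fact the demicontractive inequality holds with equality for every x here, so κ = 1/5 is the smallest admissible constant for this mapping.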
A bounded linear operator A on H is said to be strongly positive if there exists a constant γ̄ > 0 such that ⟨Ax, x⟩ ≥ γ̄‖x‖² for all x ∈ H.
It is known that when A is a strongly positive bounded linear operator with coefficient γ̄ and 0 < ρ ≤ ‖A‖⁻¹, then ‖I − ρA‖ ≤ 1 − ργ̄.
For any real Hilbert space it is known that the following identities hold (see, e.g., in []).
Lemma 2.
For all x, y ∈ H and α ∈ [0, 1], the following hold:
- (i)
- ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖²;
- (ii)
- ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩;
- (iii)
- ‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖².
Lemma 3
([]). Let C be a nonempty closed convex subset of a real Hilbert space H and h be a real-valued function on H. Suppose T = {x ∈ C : h(x) ≤ 0} is nonempty and h is Lipschitz continuous on C with modulus θ > 0. Then dist(x, T) ≥ θ⁻¹ max{h(x), 0} for all x ∈ C, where dist(x, T) denotes the distance from x to T.
Lemma 4
([]). Let {a_n} be a non-negative real sequence satisfying a_{n+1} ≤ (1 − α_n)a_n + α_n σ_n, where {α_n} ⊂ (0, 1) with Σ_n α_n = ∞, and {σ_n} is a sequence such that lim sup_{n→∞} σ_n ≤ 0. Then lim_{n→∞} a_n = 0.
Lemma 5
((Lemma 3.1) []). Let {a_n} be a sequence of real numbers such that there exists a subsequence {a_{n_j}} of {a_n} with a_{n_j} < a_{n_j+1} for all j. Consider the integer τ(n), defined for all n ≥ n_0 (for some n_0 large enough) by τ(n) = max{k ≤ n : a_k < a_{k+1}}.
Then {τ(n)} is a non-decreasing sequence verifying τ(n) → ∞ as n → ∞, and for all n ≥ n_0 the following estimates hold: a_{τ(n)} ≤ a_{τ(n)+1} and a_n ≤ a_{τ(n)+1}.
3. Algorithm and Convergence Analysis
In this section, we describe our algorithm and prove its convergence under suitable conditions. Let H be a real Hilbert space and C be a nonempty, closed, and convex subset of H. We suppose that the following assumptions hold.
Assumption 1.
- (A1)
- For , are pseudomonotone, uniformly continuous and weakly sequentially continuous operators;
- (A2)
- For , are -demicontractive mappings with such that are demiclosed at zero;
- (A3)
- is an α-contraction mapping with
- (A4)
- For , are strongly positive bounded linear operators with coefficients , where and
- (A5)
- The solution set is nonempty.
We now present our method as follows.
Remark 1.
Observe that we are at a solution of Problem (2) if the stopping criterion of Algorithm 3 is satisfied. We will implicitly assume that this does not occur after finitely many iterations, so that Algorithm 3 generates an infinite sequence for our analysis.
| Algorithm 3: EFEM |
| Initialization: Choose Let be given arbitrarily and set Iterative step: Step 1: For compute in parallel Step 2. Compute where is the smallest non-negative integer satisfying Step 3. Compute Stopping criterion: If then stop; otherwise, set and go back to Step 1. |
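The elided details of Step 2 aside, the pattern "smallest non-negative integer m such that an acceptance condition holds" is the classical Armijo backtracking search. A generic sketch follows (illustrative only; `accept` is a hypothetical stand-in for the paper's actual acceptance test, which involves the cost operators):

```python
def armijo_backtrack(accept, gamma=0.5, lam0=1.0, max_trials=50):
    # Return the smallest non-negative integer m (and the step
    # lam0*gamma**m) for which the acceptance test holds.
    for m in range(max_trials):
        lam = lam0 * gamma ** m
        if accept(lam):
            return m, lam
    raise RuntimeError("line search did not terminate")

# Hypothetical acceptance test: accept any step no larger than 0.1.
m, lam = armijo_backtrack(lambda lam: lam <= 0.1)
# m == 4 and lam == 1.0 * 0.5**4 == 0.0625
```

Because the step size is found by trial rather than computed from a modulus of continuity, no Lipschitz constant of the cost operators ever enters the method, which is the key practical advantage claimed above.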
In order to prove the convergence of our algorithm, we assume that the control parameters satisfy the following conditions.
Assumption 2.
- (B1)
- and
- (B2)
We begin the convergence analysis of Algorithm 3 by proving some useful lemmas.
Lemma 6.
Let and be as defined in Algorithm 3. Then and In particular, if then for all
Proof.
As for then
Furthermore, if for then As and are pseudomonotone, then
Therefore,
Therefore,
□
Remark 2.
Lemma 6 shows that and so is well defined. Consequently, Algorithm 3 is well defined.
Now we show that the sequence generated by Algorithm 3 is bounded.
Lemma 7.
Let be the sequence generated by Algorithm 3. Then is bounded.
Proof.
Let then from (11), we have
Moreover, from Lemma 2 (iii), we get
This implies that
Then from (13), we obtain
By induction, we have
This implies that is bounded. □
Lemma 8.
Let and be the sequence generated by Algorithm 3. Then satisfies the following estimates.
- (i)
- for some
- (ii)
- wherefor some
Proof.
As is bounded and are continuous on bounded subsets of H, then are bounded, and thus there exist constants such that
Consequently,
Therefore from Lemma 3, we have
Hence from Lemma 6, we obtain
This establishes (i).
Moreover, we have from Algorithm 3 that
Therefore, we obtain
where the existence of M follows from the boundedness of the sequences involved. This completes the proof. □
Lemma 9.
Let be a subsequence of generated by Algorithm 3 such that converges weakly to and for all Then
- (i)
- for all ,
- (ii)
Proof.
(i) From the definition of and (10), we have
Thus,
This implies that
Fix and let in (19), since then
(ii) Suppose is a decreasing sequence of non-negative numbers such that as For each we denote by the smallest positive integer such that
where the existence of follows from (i). This means that
for some satisfying (since for ). As are pseudomonotone, it follows from (i) that
Furthermore, as and are weakly sequentially continuous, then for Therefore,
Using Lemma 1, we have for all Therefore This completes the proof. □
We now present our main result.
Theorem 1.
Suppose is generated by Algorithm 3. Then converges strongly to a point
Proof.
Let and put We consider the following possible cases.
Case A:
Assume that there exists such that is monotonically decreasing for Since is bounded, then
Moreover, from Lemma 8 (i), we have
Therefore
As as and from (21), we have
Furthermore, from (17), we obtain
This implies that
Therefore,
Using condition (B2), we obtain
Consequently,
Furthermore, from (11), we have
Then, from (22), we have
Moreover, as as then
Therefore,
Now, we show that where is the set of weak subsequential limits of Let then there exists a subsequence of such that as Let be subsequences of for From (23), we have
Now we claim that
Indeed, we consider two distinct cases depending on the behavior of the subsequence
(i) If then
This implies that
Therefore,
(ii) Suppose Then we may assume without loss of generality that and Let us define for This implies that for Since are bounded and then
As are uniformly continuous, then
Using (12) and the definition of for we know that
Therefore,
Putting for all we obtain
Moreover, from Lemma 2 (i), we have
Passing limit to the last inequality as and using (28), we get
For there exists such that
for all Therefore
This contradicts the definition of the metric projection as Thus Therefore, we obtain
Consequently, from Lemma 9, we have Furthermore, as and , then by the demiclosedness of , we have that for each This means that Therefore, which shows that . We now show that converges strongly to a point As and as then Therefore,
Therefore, using Lemma 4 and Lemma 8 (ii), we have that This implies that converges strongly to
Case B: Suppose is not monotonically decreasing. Let for all (for some large enough) be defined by
Clearly is non-decreasing, as and
As is bounded, there exists a subsequence of still denoted by which converges weakly to Following similar arguments as in Case A, we get
and where is the set of weak subsequential limits of Furthermore,
From Lemma 8 (ii), we have
for some As then we get
Then from (33) and the fact that , we have
Furthermore, for it is easy to see that As a consequence, we get for all sufficiently large n that Thus, Therefore, converges strongly to Consequently, and converges strongly to This completes the proof. □
Remark 3.
- (i)
- Instead of finding the farthest element to the iterate we construct a sub-level set using the convex combination of the finite functions and perform a single projection onto the sub-level set. Note that this projection can be calculated explicitly irrespective of the feasible set C.
- (ii)
- We emphasize that the convergence of our Algorithm 3 is proved without using a prior estimate of any Lipschitz constant. Moreover, the cost operators do not even need to satisfy the Lipschitz condition. Note that the previous results of [,,] and references therein cannot be applied in this situation.
- (iii)
- We give an example of a finite family of which does not satisfy Lipschitz condition.
Example 1.
Let defined by with norm defined by for arbitrary Let and for let be defined by
It is clear that as for each First, we show that is pseudomonotone and not Lipschitz continuous for Let be such that This means that Thus,
Therefore, is pseudomonotone for To see that is not L-Lipschitz continuous for let and Then,
Moreover, implies that
Thus, which is a contradiction. Therefore, is not Lipschitz continuous for
4. Numerical Experiments
In this section, we present some numerical experiments to illustrate the performance of the proposed algorithm. We compare our Algorithm 3 with Algorithm 1 of Anh and Phuong [], Algorithm 2 of Hieu [], Algorithm 1 of Suantai et al. [], and other algorithms in the literature. The projections onto are computed explicitly. All codes were written in MATLAB and run on an HP PC with the following specifications: Intel(R) Core i7-9700 CPU, 3.00 GHz, 4.0 GB RAM, MATLAB version 9.9 (R2020b).
Example 2.
First, we consider the variational inequalities with operators for defined by , where
such that for each , is a matrix, is a skew-symmetric matrix, is a diagonal matrix, whose diagonal entries are non-negative (so is positive definite) and is a vector in . The feasible set C is given by , where is generated randomly and c is a positive real number randomly in . It is clear that for each i, is monotone (hence pseudomonotone) and Lipschitz continuous with Lipschitz constant . The entries of matrices are generated randomly and uniformly in , diagonal entries of are in and is equal to the zero vector. In this case, it is easy to see that the . For , let be defined by , for . Then is a 0-demicontractive mapping, and is demiclosed at 0. Also, for , let , (I being the identity operator on H), we choose , , , , for all . We compare Algorithm 3 with Algorithm 1 of Anh and Phuong [], Algorithm 2 of Hieu [], and Algorithm 1 of Suantai et al. []. We test the algorithms using the following parameters.
- Anh and Phuong alg.:
- Hieu alg.:
- Suantai et al. alg.:
and
- Case I:
- Case II:
- Case III:
- Case IV:
We also use as the stopping criterion for each algorithm and plot the graphs of against the number of iterations. The computational results are shown in Table 1 and Figure 1.
Table 1.
Computational result for Example 2.
Figure 1.
Example 2, From Top to Bottom: Case I, Case II, Case III, and Case IV.
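The random test operators of Example 2 (affine operators A_i(x) = M_i x + q_i, with M_i built from a random matrix, a skew-symmetric matrix, and a non-negative diagonal matrix) can be generated as in the following sketch; the sampling ranges and the dimension are illustrative guesses, since the exact values are not reproduced here:

```python
import numpy as np

def random_monotone_affine(n, rng):
    # M = N N^T + S + D with S skew-symmetric and D a non-negative
    # diagonal, so x^T M x = x^T N N^T x + x^T D x >= 0 and
    # A(x) = M x + q is monotone (hence pseudomonotone).
    N = rng.uniform(-5.0, 5.0, (n, n))
    S0 = rng.uniform(-5.0, 5.0, (n, n))
    S = S0 - S0.T                          # skew-symmetric part
    D = np.diag(rng.uniform(0.0, 5.0, n))  # non-negative diagonal
    M = N @ N.T + S + D
    q = np.zeros(n)                        # q = 0, as in the example
    return (lambda x: M @ x + q), M

rng = np.random.default_rng(1)
A, M = random_monotone_affine(10, rng)
u, v = rng.standard_normal(10), rng.standard_normal(10)
mono_gap = (A(u) - A(v)) @ (u - v)   # monotonicity: always >= 0
```

The skew-symmetric part contributes nothing to the quadratic form, so monotonicity is guaranteed by the positive semidefinite symmetric part N Nᵀ + D alone.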
Now, we consider the case when , with a finite family of demicontractive mappings in an infinite-dimensional space. In this example, we compare our algorithm with Algorithm 1 of Anh et al. [] and Algorithm 2.1 of Hieu [].
Example 3.
Let and define by . It is easy to see that A is strongly pseudomonotone and Lipschitz continuous with . We define the feasible set and for , is defined by , for . Then is a demicontractive mapping with , and is demiclosed at 0. We choose , , , , , , . For Anh et al. alg., we take , ; and for Hieu alg., we take (where in this context). We test the algorithms for and study the behavior of the sequence generated by the algorithms using as stopping criterion. The numerical results are shown in Table 2 and Figure 2.
Table 2.
Computational result for Example 3.
Figure 2.
Example 3, From Top to Bottom: .
5. Conclusions
In this paper, we introduced a new efficient parallel extragradient method for solving systems of variational inequalities and common fixed point problems of demicontractive mappings in real Hilbert spaces. The algorithm is designed such that its step size is determined by an Armijo line-search technique, and a projection onto a sub-level set is computed to determine the next iterate. A strong convergence result is proved under suitable conditions on the control parameters. Finally, some numerical results were reported to show the performance of the proposed method relative to some other methods in the literature.
Author Contributions
Conceptualization, L.O.J.; Formal analysis, L.O.J.; Funding acquisition, M.A.; Investigation, L.O.J.; Methodology, L.O.J.; Project administration, L.O.J., M.A.; Resources, M.A.; Software, L.O.J.; Supervision, M.A.; Validation, L.O.J.; Visualization, L.O.J.; Writing—original draft, L.O.J.; Writing—review and editing, M.A. All authors have read and agreed to the published version of the manuscript.
Funding
L.O. Jolaoso is supported by the Postdoctoral research grant from the Sefako Makgatho Health Sciences University, South Africa.
Acknowledgments
The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Glowinski, R.; Lions, J.L.; Trémoliéres, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981. [Google Scholar]
- Kinderlehrer, D.; Stampachia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar]
- Marcotte, P. Applications of Khobotov’s algorithm to variational and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 255–270. [Google Scholar] [CrossRef]
- Facchinei, F.; Pang, J. Finite-Dimensional Variational Inequalities and Complementary Problems; Springer: New York, NY, USA, 2003. [Google Scholar]
- Reich, S.; Zalas, R. A modular string averaging procedure for solving the common fixed point problem for quasi-nonexpansive mappings in Hilbert space. Numer. Algorithm 2016, 72, 297–323. [Google Scholar] [CrossRef]
- Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set Valued Var. Anal. 2012, 20, 229–247. [Google Scholar] [CrossRef]
- Nadezhkina, N.; Takahashi, W. Strong convergence Theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM Optim. 2006, 16, 1230–1241. [Google Scholar] [CrossRef]
- Anh, P.N.; Phuong, N.X. A parallel extragradient-like projection method for unrelated variational inequalities and fixed point problem. J. Fixed Point Theory Appl. 2018, 20, 74. [Google Scholar] [CrossRef]
- Anh, P.N.; Phuong, N.X. Linesearch methods for variational inequalities involving strict pseudocontractions. Optimization 2015, 64, 1841–1854. [Google Scholar] [CrossRef]
- Cholamjiak, P.; Suantai, S.; Sunthrayuth, P. An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach spaces. Banach J. Math. Anal. 2020, 14, 20–40. [Google Scholar] [CrossRef]
- Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374. [Google Scholar] [CrossRef]
- Iiduka, H. A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59, 873–885. [Google Scholar] [CrossRef]
- Iiduka, H.; Yamada, I. A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 19, 1881–1893. [Google Scholar] [CrossRef]
- Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515. [Google Scholar] [CrossRef]
- Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710. [Google Scholar] [CrossRef]
- Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 1976, 12, 747–756. (In Russian) [Google Scholar]
- Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409. [Google Scholar] [CrossRef] [PubMed]
- Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for variational inequality problems in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
- Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
- Ceng, L.C.; Hadjisavas, N.; Weng, N.C. Strong convergence Theorems by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646. [Google Scholar] [CrossRef]
- Jolaoso, L.O.; Aphane, M. Weak and strong convergence Bregman extragradient schemes for solving pseudo-monotone and non-Lipschitz variational inequalities. J. Ineq. Appl. 2020, 2020, 195. [Google Scholar] [CrossRef]
- Jolaoso, L.O.; Aphane, M. A generalized viscosity inertial projection and contraction method for pseudomonotone variational inequality and fixed point problems. Mathematics 2020, 8, 2039. [Google Scholar] [CrossRef]
- Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A strong convergence Theorem for solving pseudo-monotone variational inequalities using projection methods in a reflexive Banach space. J. Optim. Theory Appl. 2020, 185, 744–766. [Google Scholar] [CrossRef]
- He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76. [Google Scholar] [CrossRef]
- Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
- Migorski, S.; Fang, C.; Zeng, S. A new modified subgradient extragradient method for solving variational inequalities. Appl. Anal. 2019, 1–10. [Google Scholar] [CrossRef]
- Hieu, D.V.; Thong, D.V. New extragradient-like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 2018, 70, 385–399. [Google Scholar] [CrossRef]
- Dong, Q.-L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
- Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problem. Acta Appl. Math. 2020, 169, 217–245. [Google Scholar] [CrossRef]
- Yamada, I. The hybrid steepest-descent method for variational inequalities problems over the intersection of the fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; pp. 473–504. [Google Scholar]
- Hieu, D.V.; Son, D.X.; Anh, P.K.; Muu, L.D. A Two-Step Extragradient-Viscosity Method for Variational Inequalities and Fixed Point Problems. Acta Math. Vietnam. 2018, 2, 531–552. [Google Scholar]
- Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpensive mappings. J. Appl. Math. Comput. 2015, 48, 241–263. [Google Scholar] [CrossRef]
- Hieu, D.V. Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Math. 2017, 28, 677–692. [Google Scholar] [CrossRef]
- Rudin, W. Functional Analysis; McGraw-Hill Series in Higher Mathematics: New York, NY, USA, 1991. [Google Scholar]
- Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984. [Google Scholar]
- Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
- Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
- Marino, G.; Xu, H.K. Weak and strong convergence Theorems for strict pseudo-contraction in Hilbert spaces. J. Math. Anal. Appl. 2007, 329, 336–346. [Google Scholar] [CrossRef]
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
- Suantai, S.; Peeyada, P.; Yambangwai, D.; Cholamjiak, W. A parallel-viscosity-type subgradient extragradient-line method for finding the common solution of variational inequality problems applied to image restoration problems. Mathematics 2020, 8, 248. [Google Scholar] [CrossRef]
- Anh, T.V.; Muu, L.D.; Son, D.X. Parallel algorithms for solving a class of variational inequalities over the common fixed points set of a finite family of demicontractive mappings. Numer. Funct. Anal. Optim. 2018, 39, 1477–1494. [Google Scholar] [CrossRef]
- Hieu, D.V. An explicit parallel algorithm for variational inequalities. Bull. Malays. Math. Sci. Soc 2019, 42, 201–221. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).