Abstract
A number of applications from mathematical programming, such as minimax problems, penalization methods and fixed-point problems, can be formulated as a variational inequality model. Most of the techniques used to solve such problems involve iterative algorithms, and that is why, in this paper, we introduce a new extragradient-like method to solve variational inequality problems in real Hilbert spaces involving pseudomonotone operators. The method has a clear advantage because of a variable stepsize formula that is updated at each iteration based on the previous iterations. The key advantage of the method is that it works without prior knowledge of the Lipschitz constant. Strong convergence of the method is proved under mild conditions. Several numerical experiments are reported to show the numerical behaviour of the method.
1. Introduction
In this article, we consider the classical variational inequality problem (VIP) [1,2] for an operator $F : \mathcal{H} \to \mathcal{H}$, formulated in the following way: find $u^* \in \mathcal{K}$ such that
$$\langle F(u^*), v - u^* \rangle \geq 0 \quad \text{for all } v \in \mathcal{K}, \tag{1}$$
where $\mathcal{K}$ is a nonempty, convex and closed subset of a real Hilbert space $\mathcal{H}$. The inner product and induced norm on $\mathcal{H}$ are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Moreover, the sets of real and natural numbers are denoted by $\mathbb{R}$ and $\mathbb{N}$, respectively. It is important to note that solving problem (1) is equivalent to solving the following fixed-point problem: find $u^* \in \mathcal{K}$ such that
$$u^* = P_{\mathcal{K}}(u^* - \lambda F(u^*)) \quad \text{for any } \lambda > 0.$$
We assume that the following requirements have been fulfilled:
- (B1) The solution set of problem (1), denoted by SVIP, is nonempty.
- (B2) The mapping $F : \mathcal{H} \to \mathcal{H}$ is pseudomonotone, i.e.,
$$\langle F(u), v - u \rangle \geq 0 \implies \langle F(v), v - u \rangle \geq 0 \quad \text{for all } u, v \in \mathcal{K}$$
(see the example after this list).
- (B3) The mapping $F : \mathcal{H} \to \mathcal{H}$ is Lipschitz continuous, i.e., there exists $L > 0$ such that
$$\|F(u) - F(v)\| \leq L \|u - v\| \quad \text{for all } u, v \in \mathcal{H}.$$
- (B4) The mapping $F : \mathcal{H} \to \mathcal{H}$ is sequentially weakly continuous, i.e., $\{F(u_n)\}$ converges weakly to $F(u)$ whenever $\{u_n\}$ converges weakly to $u$.
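To see what (B2) allows beyond monotonicity, consider the following one-dimensional example (ours, added for illustration; it is not taken from the paper): multiplying a monotone map by a positive scalar factor preserves pseudomonotonicity but can destroy monotonicity.

```latex
\[
  \mathcal{K} = [0, \infty), \qquad F(u) = e^{-u} u .
\]
% Since e^{-u} > 0, the inequality <F(u), v - u> = e^{-u} u (v - u) >= 0 holds
% exactly when u(v - u) >= 0, so F inherits pseudomonotonicity from the
% monotone map G(u) = u.  F itself is not monotone:
%   (F(3) - F(2))(3 - 2) = 3e^{-3} - 2e^{-2} \approx -0.121 < 0 .
```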
The concept of variational inequalities has been used as a powerful tool to study different subjects, e.g., physics, engineering, economics and optimization theory. Problem (1) was first introduced by Stampacchia [1] in 1964, who also showed that it is a crucial problem in nonlinear analysis. It is an efficient mathematical framework that unifies several key topics of applied mathematics, e.g., network equilibrium problems, necessary optimality conditions, complementarity problems and systems of non-linear equations (for more details, see [3,4,5,6,7,8,9]). On the other hand, projection methods are important for finding numerical solutions of variational inequalities. Many authors have proposed and studied different projection methods to solve variational inequality problems (see for more details [10,11,12,13,14,15,16,17,18,19,20] and others in [21,22,23,24,25,26,27,28,29,30,31,32]). In particular, Korpelevich [10] and Antipin [33] introduced the following extragradient method:
$$y_n = P_{\mathcal{K}}(u_n - \lambda F(u_n)), \qquad u_{n+1} = P_{\mathcal{K}}(u_n - \lambda F(y_n)),$$
where $0 < \lambda < \frac{1}{L}$.
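For intuition, here is a minimal Python sketch of this extragradient iteration on a box-constrained problem; the operator, feasible set, stepsize and tolerance are illustrative choices of ours, not the paper's data.

```python
import numpy as np

def project_box(u, lo, hi):
    """Euclidean projection onto the box {v : lo <= v <= hi} (closed form)."""
    return np.clip(u, lo, hi)

def extragradient(F, u0, lam, lo, hi, tol=1e-8, max_iter=10_000):
    """Korpelevich's extragradient method: two projections per iteration,
    with a fixed stepsize lam in (0, 1/L)."""
    u = u0.astype(float)
    for _ in range(max_iter):
        y = project_box(u - lam * F(u), lo, hi)   # predictor step
        if np.linalg.norm(u - y) <= tol:          # y = u means u solves the VIP
            break
        u = project_box(u - lam * F(y), lo, hi)   # corrector step
    return u

# Illustrative affine operator F(u) = Mu + q (monotone, hence pseudomonotone).
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, -2.0])
solution = extragradient(lambda u: M @ u + q, np.zeros(2), lam=0.1, lo=0.0, hi=5.0)
```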
Recently, the subgradient extragradient algorithm was established by Censor et al. [12] for solving problem (1) in real Hilbert space. Their method has the form
$$y_n = P_{\mathcal{K}}(u_n - \lambda F(u_n)), \qquad u_{n+1} = P_{H_n}(u_n - \lambda F(y_n)),$$
where $0 < \lambda < \frac{1}{L}$ and
$$H_n = \{ z \in \mathcal{H} : \langle u_n - \lambda F(u_n) - y_n, \, z - y_n \rangle \leq 0 \}.$$
Migórski et al. [34] proposed a viscosity-type subgradient extragradient method to solve monotone variational inequality problems. The main contribution is the presence of a viscosity scheme in the algorithm, which is used to improve the convergence rate of the iterative sequence and to obtain a strong convergence theorem. The iterative sequence is generated in the following way: (i) Let $u_1 \in \mathcal{H}$, $\lambda_1 > 0$, $\mu \in (0,1)$ and choose a sequence $\{\alpha_n\} \subset (0,1)$ with $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = +\infty$. (ii) Compute
$$y_n = P_{\mathcal{K}}(u_n - \lambda_n F(u_n)), \qquad z_n = P_{H_n}(u_n - \lambda_n F(y_n)), \qquad u_{n+1} = \alpha_n f(u_n) + (1 - \alpha_n) z_n,$$
where $f$ is a contraction and
$$H_n = \{ z \in \mathcal{H} : \langle u_n - \lambda_n F(u_n) - y_n, \, z - y_n \rangle \leq 0 \}.$$
(iii) Revise the stepsize in the following way:
$$\lambda_{n+1} = \begin{cases} \min \Big\{ \lambda_n, \, \dfrac{\mu \|u_n - y_n\|}{\|F(u_n) - F(y_n)\|} \Big\} & \text{if } F(u_n) \neq F(y_n), \\[1ex] \lambda_n & \text{otherwise.} \end{cases}$$
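The practical appeal of the subgradient extragradient family is that the second projection is onto a half-space rather than onto $\mathcal{K}$, and that projection has a closed form. A short Python sketch (variable names are ours):

```python
import numpy as np

def project_halfspace(z, a, b):
    """Euclidean projection onto the half-space {x : <a, x> <= b}.
    This is the only projection the second step of the subgradient
    extragradient method needs, and it is available in closed form."""
    violation = a @ z - b
    if violation <= 0:                       # z already lies in the half-space
        return z
    return z - (violation / (a @ a)) * a     # step back along the normal a

# For H_n = {z : <u - lam*F(u) - y, z - y> <= 0}, take
#   a = u - lam*F(u) - y   and   b = <a, y>.
```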
In this paper, inspired by the iterative methods in [12,16,35,36], a modified subgradient extragradient algorithm is proposed for solving variational inequality problems involving pseudomonotone mappings in real Hilbert space. It is important to note that our proposed scheme is effective. In particular, compared with the results of Migórski et al. [34], our algorithm can solve pseudomonotone variational inequalities. Similar to the results of Migórski et al. [34], the strong convergence of the proposed algorithm is proved without knowing the Lipschitz constant of the operator $F$. The proposed algorithm can be seen as a modification of the methods that appear in [10,12,34,35,36]. Under mild conditions, a strong convergence theorem is proved. Numerical experiments show that the new approach tends to be more successful than the existing one [34].
The rest of this article has been arranged as follows: Section 2 contains some definitions and basic results that have been used throughout the paper. Section 3 contains our main algorithm and a strong convergence theorem. Section 4 presents the numerical results showing the algorithmic efficacy of the proposed method.
2. Preliminaries
This section contains useful lemmas and basic identities that are used throughout the article. The metric projection $P_{\mathcal{K}}(u)$ of $u \in \mathcal{H}$ onto a closed and convex subset $\mathcal{K}$ of $\mathcal{H}$ is defined by
$$P_{\mathcal{K}}(u) = \arg\min_{v \in \mathcal{K}} \|u - v\|.$$
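For common feasible sets the metric projection is available in closed form, which is what makes projection methods cheap in practice. Two standard instances in Python (our illustration):

```python
import numpy as np

def project_box(u, lo, hi):
    """Projection onto the box {v : lo <= v <= hi}: clip componentwise."""
    return np.clip(u, lo, hi)

def project_ball(u, center, radius):
    """Projection onto the ball {v : ||v - center|| <= radius}:
    rescale the offset from the center if it is too long."""
    d = u - center
    dist = np.linalg.norm(d)
    return u if dist <= radius else center + (radius / dist) * d
```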
Lemma 1.
[37,38] Assume $\mathcal{K}$ is a nonempty, convex and closed subset of a real Hilbert space $\mathcal{H}$ and $P_{\mathcal{K}}$ is the metric projection from $\mathcal{H}$ onto $\mathcal{K}$.
(i) Let $u \in \mathcal{H}$ and $v \in \mathcal{K}$; we have
$$\|P_{\mathcal{K}}(u) - v\|^2 \leq \|u - v\|^2 - \|u - P_{\mathcal{K}}(u)\|^2.$$
(ii) $z = P_{\mathcal{K}}(u)$ if and only if
$$\langle u - z, v - z \rangle \leq 0 \quad \text{for all } v \in \mathcal{K}.$$
(iii) For $u \in \mathcal{H}$ and $v \in \mathcal{K}$,
$$\|u - P_{\mathcal{K}}(u)\| \leq \|u - v\|.$$
Lemma 2.
[37] Let $u, v \in \mathcal{H}$ and $\alpha \in \mathbb{R}$.
(i) $\|u + v\|^2 \leq \|u\|^2 + 2 \langle v, u + v \rangle$;
(ii) $\|\alpha u + (1 - \alpha) v\|^2 = \alpha \|u\|^2 + (1 - \alpha) \|v\|^2 - \alpha (1 - \alpha) \|u - v\|^2$.
Lemma 3.
[39] Assume that $\{a_n\}$ is a sequence of non-negative real numbers satisfying
$$a_{n+1} \leq (1 - \alpha_n) a_n + \alpha_n b_n \quad \text{for all } n \in \mathbb{N},$$
where $\{\alpha_n\} \subset (0,1)$ and $\{b_n\} \subset \mathbb{R}$ satisfy the following conditions:
$$\lim_{n \to \infty} \alpha_n = 0, \qquad \sum_{n=1}^{\infty} \alpha_n = +\infty, \qquad \limsup_{n \to \infty} b_n \leq 0.$$
Then, $\lim_{n \to \infty} a_n = 0$.
Lemma 4.
[40] Assume that $\{a_n\}$ is a sequence of real numbers such that there exists a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ such that $a_{n_j} < a_{n_j + 1}$ for all $j \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to \infty$ as $k \to \infty$, and the following conditions are fulfilled by all (sufficiently large) numbers $k \in \mathbb{N}$:
$$a_{m_k} \leq a_{m_k + 1} \quad \text{and} \quad a_k \leq a_{m_k + 1}.$$
In fact, $m_k = \max \{ j \leq k : a_j < a_{j+1} \}$.
Lemma 5.
[41] Assume that $F : \mathcal{K} \to \mathcal{H}$ is a pseudomonotone and continuous mapping. Then, $u^*$ is a solution of problem (1) if and only if $u^*$ is a solution of the following (Minty) problem: find $u^* \in \mathcal{K}$ such that
$$\langle F(v), v - u^* \rangle \geq 0 \quad \text{for all } v \in \mathcal{K}.$$
3. Main Results
We provide a method consisting of two convex minimization (projection) steps combined with a viscosity scheme and an explicit stepsize formula, which is used to improve the convergence rate of the iterative sequence and to make the method independent of the Lipschitz constant. The detailed method is provided in Algorithm 1.
| Algorithm 1 (Explicit method for pseudomonotone variational inequality problems). |
| Step 0: Let $u_1 \in \mathcal{H}$, $\lambda_1 > 0$, $\mu \in (0,1)$, let $f : \mathcal{H} \to \mathcal{H}$ be a contraction, and choose a sequence $\{\alpha_n\} \subset (0,1)$ satisfying $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = +\infty$. |
| Step 1: Evaluate $y_n = P_{\mathcal{K}}(u_n - \lambda_n F(u_n))$. |
| If $u_n = y_n$, STOP. Otherwise, go to Step 2. |
| Step 2: Evaluate $z_n = P_{H_n}(u_n - \lambda_n F(y_n))$, |
| where $H_n = \{ z \in \mathcal{H} : \langle u_n - \lambda_n F(u_n) - y_n, \, z - y_n \rangle \leq 0 \}$. |
| Step 3: Compute $u_{n+1} = \alpha_n f(u_n) + (1 - \alpha_n) z_n$. |
| Step 4: Evaluate $\lambda_{n+1} = \min \Big\{ \lambda_n, \, \dfrac{\mu \|u_n - y_n\|}{\|F(u_n) - F(y_n)\|} \Big\}$ if $F(u_n) \neq F(y_n)$, and $\lambda_{n+1} = \lambda_n$ otherwise. Set $n := n + 1$ and go back to Step 1. |
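A compact Python sketch of Algorithm 1 as stated above; this is a sketch under illustrative assumptions (box feasible set, affine operator, our choices of the contraction $f$ and the parameters $\mu$, $\lambda_1$, $\alpha_n$), not a definitive implementation:

```python
import numpy as np

def project_box(u, lo, hi):
    return np.clip(u, lo, hi)

def project_halfspace(z, a, b):
    """Closed-form projection onto {x : <a, x> <= b}."""
    violation = a @ z - b
    return z if violation <= 0 else z - (violation / (a @ a)) * a

def algorithm1(F, f, u1, lam1, mu, alpha, lo, hi, tol=1e-8, max_iter=5000):
    """Viscosity subgradient extragradient method with adaptive stepsize.
    F: operator; f: contraction (viscosity map); alpha(n): sequence in (0,1)."""
    u, lam = u1.astype(float), lam1
    for n in range(1, max_iter + 1):
        Fu = F(u)
        y = project_box(u - lam * Fu, lo, hi)              # Step 1
        if np.linalg.norm(u - y) <= tol:                   # u_n = y_n: stop
            break
        a = u - lam * Fu - y                               # normal of H_n
        z = project_halfspace(u - lam * F(y), a, a @ y)    # Step 2
        u_next = alpha(n) * f(u) + (1 - alpha(n)) * z      # Step 3 (viscosity)
        denom = np.linalg.norm(Fu - F(y))                  # Step 4 (stepsize)
        if denom > 0:
            lam = min(lam, mu * np.linalg.norm(u - y) / denom)
        u = u_next
    return u

# Illustrative run: affine monotone (hence pseudomonotone) operator on a box.
M = np.array([[4.0, -1.0], [-1.0, 4.0]]); q = np.array([-1.0, -2.0])
u_star = algorithm1(F=lambda u: M @ u + q,
                    f=lambda u: 0.5 * u,            # contraction with constant 0.5
                    u1=np.ones(2), lam1=0.5, mu=0.5,
                    alpha=lambda n: 1.0 / (n + 1),
                    lo=0.0, hi=5.0)
```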
Lemma 6.
The stepsize sequence $\{\lambda_n\}$ generated by Algorithm 1 is monotonically decreasing with lower bound $\min \{ \lambda_1, \mu / L \}$ and converges to some fixed $\lambda > 0$.
Proof.
Let $F(u_n) \neq F(y_n)$, such that, by the Lipschitz continuity of $F$,
$$\frac{\mu \|u_n - y_n\|}{\|F(u_n) - F(y_n)\|} \geq \frac{\mu \|u_n - y_n\|}{L \|u_n - y_n\|} = \frac{\mu}{L}.$$
Clearly, from the above we can conclude that $\{\lambda_n\}$ has the lower bound $\min \{ \lambda_1, \mu / L \}$. Moreover, since $\{\lambda_n\}$ is nonincreasing and bounded below, there exists a real number $\lambda \geq \min \{ \lambda_1, \mu / L \}$ such that $\lim_{n \to \infty} \lambda_n = \lambda$. □
Lemma 7.
Assume that $F$ satisfies the conditions (B1)–(B4). For a given $u^* \in \mathrm{SVIP}$, we have
$$\|z_n - u^*\|^2 \leq \|u_n - u^*\|^2 - \Big( 1 - \frac{\mu \lambda_n}{\lambda_{n+1}} \Big) \|u_n - y_n\|^2 - \Big( 1 - \frac{\mu \lambda_n}{\lambda_{n+1}} \Big) \|z_n - y_n\|^2.$$
Proof.
Consider that
Given that we get
which implies that
Since $u^*$ is a solution of problem (1), we have
Due to the pseudomonotonicity of $F$ on $\mathcal{K}$, we get
By substituting we get
Thus, we have
Note that and by the definition of , we have
Lemma 8.
Suppose that conditions (B1)–(B4) hold. Let $\{u_n\}$ be a sequence generated by Algorithm 1. If there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ that converges weakly to $\hat{u} \in \mathcal{K}$ and $\lim_{k \to \infty} \|u_{n_k} - y_{n_k}\| = 0$, then $\hat{u} \in \mathrm{SVIP}$.
Proof.
We have
which is equivalent to
From expression (15), we can write
Therefore, we get
Since the sequence $\{u_{n_k}\}$ is bounded, so is $\{y_{n_k}\}$. Using these facts and taking the limit as $k \to \infty$ in (17), we get
Moreover, we have
Since $\lim_{k \to \infty} \|u_{n_k} - y_{n_k}\| = 0$ and $F$ is $L$-Lipschitz continuous on $\mathcal{H}$, we get
From (19) and (20), we obtain
Next, we show that $\hat{u} \in \mathrm{SVIP}$. We choose a decreasing sequence $\{\epsilon_k\}$ of positive numbers tending to 0. For each $k$, we denote by $m_k$ the smallest positive integer such that
Since $\{\epsilon_k\}$ is decreasing, the sequence $\{m_k\}$ is increasing.
Case 1: If there is a subsequence of such that (). Letting we obtain
Hence , therefore we have .
Case 2: If there exists $k_0 \in \mathbb{N}$ such that for all $k \geq k_0$. Suppose that
Due to the above definition, we obtain
From (18) and (25), for all $k$ we have
Due to the pseudomonotonicity of $F$, for all $k$ we obtain
For all $k$, we have
Since $\{u_{n_k}\}$ converges weakly to $\hat{u}$ and $F$ is sequentially weakly continuous on $\mathcal{K}$, the sequence $\{F(u_{n_k})\}$ converges weakly to $F(\hat{u})$. We can suppose that $F(\hat{u}) \neq 0$ (otherwise, $\hat{u}$ is already a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have
Since and we have
Now, letting $k \to \infty$ in (28), we obtain
Applying the well-known Lemma 5, we deduce that $\hat{u} \in \mathrm{SVIP}$. □
Theorem 1.
Assume that $F$ satisfies the conditions (B1)–(B4). Moreover, assume that $u^*$ belongs to the solution set SVIP and satisfies $u^* = P_{\mathrm{SVIP}}(f(u^*))$. Then, the sequences $\{u_n\}$ and $\{y_n\}$ generated by Algorithm 1 converge strongly to $u^*$.
Proof.
By using Lemma 7, we have
Since $\lambda_n \to \lambda > 0$, there exists a fixed number $\epsilon > 0$ such that
Then, there exists a finite number such that
Hence, we obtain
From the definition of the sequence $\{u_n\}$ and the fact that $f$ is a contraction with constant $\rho \in [0, 1)$, we obtain
From expressions (34) and (35), we obtain
Hence, we conclude that the sequence $\{u_n\}$ is bounded. Next, the reflexivity of $\mathcal{H}$ and the boundedness of $\{u_n\}$ guarantee that there exists a subsequence $\{u_{n_k}\}$ such that $u_{n_k} \rightharpoonup \hat{u} \in \mathcal{H}$ as $k \to \infty$. Now, we prove the strong convergence of the iterative sequence generated by Algorithm 1. The continuity and pseudomonotonicity of the operator $F$ imply that the solution set SVIP is closed and convex (for more details, see [42,43]). Since the mapping $f$ is a contraction, $P_{\mathrm{SVIP}} \circ f$ is a contraction. The Banach contraction theorem guarantees the existence of a fixed point $u^*$ of $P_{\mathrm{SVIP}} \circ f$, such that
By using Lemma 1 (ii), we have
Using Lemma 2 (i) and Lemma 7, we have
The rest of the proof shall be divided into the following two parts:
Case 1: Assume that there exists a fixed number such that
Thus, $\lim_{n \to \infty} \|u_n - u^*\|$ exists. From expression (38), we have
Due to the existence of and , we deduce that
From expression (41), we have
It follows that
Thus, the sequences $\{u_n\}$, $\{y_n\}$ and $\{z_n\}$ are bounded. Hence, we can take a subsequence of $\{u_n\}$ that converges weakly to some $\hat{u} \in \mathcal{H}$. Moreover, due to $\lim_{k \to \infty} \|u_{n_k} - y_{n_k}\| = 0$ and using Lemma 8, we have $\hat{u} \in \mathrm{SVIP}$. Following expression (37), we consider that
It follows from (44) that
From Lemma 2 (ii) and Lemma 7 for all , we get
It follows from expressions (45) and (46) that
Choose $n$ large enough such that
Now, using expressions (46) and (47) and applying Lemma 3, we conclude that $u_n \to u^*$ as $n \to \infty$.
Case 2: Suppose that there exists a subsequence of such that
Thus, by Lemma 4, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to \infty$ such that
Similar to Case 1, using (38), we have
Due to and we can deduce the following:
From expression (50), we have
Hence, we obtain
Using the same justification as in Case 1, we obtain
Using (46) and (48), we have
It follows that
Since $\alpha_{m_k} \to 0$ and $\{u_{m_k}\}$ is a bounded sequence, expressions (53) and (55) imply that
From the inequality (48), we have
Consequently, $u_n \to u^*$ as $n \to \infty$. This completes the proof of the theorem. □
4. Numerical Experiments
Numerical results are presented in this section to demonstrate the efficiency of the proposed Algorithm 1 on four test problems, all of which are pseudomonotone. The MATLAB programs were run on a PC (Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz, 4.00 GB RAM) in MATLAB version 9.5 (R2018b). We use the built-in MATLAB quadratic programming solver for the projection (minimization) subproblems; an analogous Python sketch is given below.
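When $\mathcal{K}$ has no closed-form projection, each projection step is itself a small quadratic program, which is what the paper solves with MATLAB's built-in quadratic programming routine. An analogous Python sketch using SciPy (our illustration; the constraint data are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def project_qp(u, A_eq, b_eq, lb=0.0):
    """Projection onto {v : A_eq v = b_eq, v >= lb}, posed as the QP
    minimize ||v - u||^2 subject to the constraints."""
    cons = [{"type": "eq", "fun": lambda v: A_eq @ v - b_eq}]
    bounds = [(lb, None)] * u.size
    res = minimize(lambda v: np.sum((v - u) ** 2),
                   x0=np.maximum(u, lb),      # cheap feasible-ish starting point
                   bounds=bounds, constraints=cons)
    return res.x

# Example: a simplex-type set {v >= 0, sum(v) = 4}.
u = np.array([2.0, -1.0, 3.0, 1.5])
p = project_qp(u, A_eq=np.ones((1, 4)), b_eq=np.array([4.0]))
```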
Example 1.
Consider the non-linear complementarity problem of Kojima–Shindo, where the feasible set $\mathcal{K}$ is defined by
The mapping $F$ is defined by
It is easy to see that $F$ is not monotone on the set $\mathcal{K}$. By using the Monte Carlo approach [44], it can be shown that $F$ is pseudomonotone on $\mathcal{K}$: generate many pairs of points $u$ and $v$ uniformly in $\mathcal{K}$ satisfying $\langle F(u), v - u \rangle \geq 0$, and then check whether $\langle F(v), v - u \rangle \geq 0$ (a sketch of this check is given after Table 1). This problem has a unique solution. In this experiment, we take different initial points and suitable control parameters for Algorithm 1. The numerical results for the first example are shown in Table 1.
Table 1.
Numerical behaviour of Algorithm 1 using different starting points for Example 1.
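A small Python sketch of the Monte Carlo pseudomonotonicity check described above; the mapping F and the uniform sampler for $\mathcal{K}$ are placeholders to be supplied, and passing all trials is numerical evidence rather than a proof:

```python
import numpy as np

def check_pseudomonotone(F, sample_K, trials=100_000, seed=0):
    """Monte Carlo test of pseudomonotonicity: whenever <F(u), v-u> >= 0,
    verify <F(v), v-u> >= 0. A single counterexample disproves the property."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, v = sample_K(rng), sample_K(rng)
        d = v - u
        if F(u) @ d >= 0 and F(v) @ d < 0:
            return False, (u, v)     # counterexample found
    return True, None

# sample_K(rng) should return a uniform point of K, e.g., a scaled
# Dirichlet draw for a simplex-type set: 4 * rng.dirichlet(np.ones(4)).
```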
Example 2.
Consider the quadratic fractional programming problem in the following form [44]:
where
It is easy to verify that $Q$ is symmetric and positive definite on $\mathcal{K}$ and, consequently, $f$ is pseudoconvex on $\mathcal{K}$. Therefore, $\nabla f$ is pseudomonotone. Using the quotient rule, we obtain the gradient (its generic form is recorded after Table 2).
From this point of view, we can set $F = \nabla f$ in Theorem 1. We minimize $f$ over $\mathcal{K}$. This problem has a unique solution. In this experiment, we take different initial points and suitable control parameters for Algorithm 1. The numerical results for the second example are shown in Table 2.
Table 2.
Numerical behaviour of Algorithm 1 using different starting points for Example 2.
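For a quadratic fractional objective of the standard form below (our generic notation; the specific $Q$, $a$, $a_0$, $b$, $b_0$ of this example are those of [44]), the quotient rule gives:

```latex
f(u) = \frac{u^{\top} Q u + a^{\top} u + a_0}{b^{\top} u + b_0},
\qquad
\nabla f(u)
  = \frac{(b^{\top} u + b_0)\,(2 Q u + a) \;-\; b\,\bigl(u^{\top} Q u + a^{\top} u + a_0\bigr)}
         {(b^{\top} u + b_0)^{2}} .
```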
Example 3.
The third example is taken from [45], where $F$ is defined by
on $\mathcal{K}$. It is easy to see that $F$ is Lipschitz continuous and is not monotone on $\mathcal{K}$, but it is pseudomonotone. The above problem has a unique solution. In this experiment, we take different initial points and suitable control parameters for Algorithm 1. The numerical results for the third example are shown in Table 3.
Table 3.
Numerical behaviour of Algorithm 1 using different starting points for Example 3.
Example 4.
The fourth example is taken from [45], where $F$ is defined by
where the data are given in [45]. It is easy to see that $F$ is Lipschitz continuous and is not monotone, but it is pseudomonotone. In this experiment, we take different initial points and suitable control parameters for Algorithm 1. The numerical results for the fourth example are shown in Table 4.
Table 4.
Numerical behaviour of Algorithm 1 using different starting points for Example 4.
5. Conclusions
We have developed an extragradient-like method to solve pseudomonotone variational inequalities in real Hilbert spaces. The method uses an explicit formula for an appropriate and effective stepsize at each step; the stepsize is updated at each iteration based on the previous iterates. Numerical results were presented to illustrate the effectiveness of our algorithm relative to other methods. These numerical studies suggest that viscosity schemes of this kind generally improve the effectiveness of the iterative sequence.
Author Contributions
Data curation, N.W.; formal analysis, T.K.; funding acquisition, N.P. (Nuttapol Pakkaranang), N.P. (Nattawut Pholasa) and T.K.; investigation, N.W., N.P. (Nuttapol Pakkaranang) and T.K.; methodology, T.K.; project administration, H.u.R., N.P. (Nattawut Pholasa) and T.K.; resources, N.P. (Nattawut Pholasa) and T.K.; software, H.u.R.; supervision, H.u.R. and N.P. (Nattawut Pholasa); Writing—original draft, N.W. and H.u.R.; Writing—review and editing, N.P. (Nuttapol Pakkaranang). All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by University of Phayao and Phetchabun Rajabhat University.
Acknowledgments
We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work. N. Wairojjana would like to thank Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU). N. Pholasa would like to thank the University of Phayao. T. Khanpanuk would like to thank Phetchabun Rajabhat University.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebd. Seances Acad. Sci. 1964, 258, 4413.
- Konnov, I.V. On systems of variational inequalities. Russ. Math. (Izv. Vyssh. Uchebn. Zaved. Mat.) 1997, 41, 77–86.
- Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279.
- Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389.
- Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
- Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
- Elliott, C.M. Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems (Claudio Baiocchi and António Capelo). SIAM Rev. 1987, 29, 314–315.
- Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Dordrecht, The Netherlands, 1999.
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
- Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
- Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108.
- Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
- Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
- Malitsky, Y.V.; Semenov, V.V. An Extragradient Algorithm for Monotone Variational Inequalities. Cybern. Syst. Anal. 2014, 50, 271–277.
- Tseng, P. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings. SIAM J. Control Optim. 2000, 38, 431–446.
- Moudafi, A. Viscosity Approximation Methods for Fixed-Points Problems. J. Math. Anal. Appl. 2000, 241, 46–55.
- Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956.
- Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321.
- Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610.
- Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060.
- Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845.
- Gibali, A.; Reich, S.; Zalas, R. Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 2017, 66, 417–437.
- Ogbuisi, F.U.; Shehu, Y. A projected subgradient-proximal method for split equality equilibrium problems of pseudomonotone bifunctions in Banach spaces. J. Nonlinear Var. Anal. 2019, 3, 205–224.
- Ceng, L.C. Asymptotic inertial subgradient extragradient approach for pseudomonotone variational inequalities with fixed point constraints of asymptotically nonexpansive mappings. Commun. Optim. Theory 2020, 2020, 2.
- Wang, L.; Yu, L.; Li, T. Parallel extragradient algorithms for a family of pseudomonotone equilibrium problems and fixed point problems of nonself-nonexpansive mappings in Hilbert space. J. Nonlinear Funct. Anal. 2020, 2020, 13.
- Ceng, L.C. Two inertial linesearch extragradient algorithms for the bilevel split pseudomonotone variational inequality with constraints. J. Appl. Numer. Optim. 2020, 2, 213–233.
- Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
- Ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 2020, 12, 463.
- Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 2020, 12, 503.
- Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A self-adaptive extra-gradient methods for a family of pseudomonotone equilibrium programming with application in different classes of variational inequality problems. Symmetry 2020, 12, 523.
- Ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 2020, 13, 3292.
- Ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39, 100.
- Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekonomika Matematicheskie Metody 1976, 12, 1164–1173.
- Migórski, S.; Fang, C.; Zeng, S. A new modified subgradient extragradient method for solving variational inequalities. Appl. Anal. 2019, 1–10.
- Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
- Kraikaew, R.; Saejung, S. Strong Convergence of the Halpern Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Spaces. J. Optim. Theory Appl. 2013, 163, 399–412.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: Cham, Switzerland, 2017.
- Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
- Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
- Maingé, P.E. Strong Convergence of Projected Subgradient Methods for Nonsmooth and Nonstrictly Convex Minimization. Set-Valued Anal. 2008, 16, 899–912.
- Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
- Liu, Z.; Zeng, S.; Motreanu, D. Evolutionary problems driven by variational inequalities. J. Differ. Equ. 2016, 260, 6787–6799.
- Liu, Z.; Migórski, S.; Zeng, S. Partial differential variational inequalities involving nonlocal boundary conditions in Banach spaces. J. Differ. Equ. 2017, 263, 3989–4006.
- Hu, X.; Wang, J. Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network. IEEE Trans. Neural Netw. 2006, 17, 1487–1499.
- Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2018, 68, 385–409.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).