Article

A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems

by
Habib ur Rehman
1,†,
Poom Kumam
1,2,3,*,†,
Ioannis K. Argyros
4,†,
Nasser Aedh Alreshidi
5,†,
Wiyada Kumam
6,*,† and
Wachirapong Jirakitpuwapat
1,†
1
KMUTTFixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
5
Department of Mathematics, College of Science, Northern Border University, Arar 73222, Saudi Arabia
6
Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2020, 12(4), 523; https://doi.org/10.3390/sym12040523
Submission received: 19 February 2020 / Revised: 8 March 2020 / Accepted: 24 March 2020 / Published: 2 April 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract: The main objective of this article is to propose a new method that extends Popov's extragradient method by replacing the two metric projections with two convex optimization problems. We also prove the weak convergence of the designed method under mild assumptions on the cost bifunction. The method evaluates only one value of the bifunction per iteration and uses an explicit formula to identify the appropriate stepsize for each iteration. This variable stepsize, updated at each iteration on the basis of the previous iterations, is effective in enhancing the performance of the iterative algorithm. From the numerical examples, we conclude that the inertial term and the variable stepsize yield a significant improvement in processing time and number of iterations.

1. Introduction

Let $C$ be a nonempty, closed and convex subset of a Hilbert space $E$ and let $f : E \times E \to \mathbb{R}$ be a bifunction with $f(u,u) = 0$ for each $u \in C$. The equilibrium problem for $f$ on $C$ is defined as follows:
Find $p^* \in C$ such that $f(p^*, y) \ge 0, \ \forall y \in C.$ (1)
The equilibrium problem (EP) contains many mathematical problems as particular cases, for example, fixed point problems, complementarity problems, variational inequality problems (VIP), minimization problems, Nash equilibria of noncooperative games, saddle point problems and vector minimization problems (see [1,2,3,4]). The unified formulation of an equilibrium problem was specifically defined in 1992 by Muu and Oettli [5] and further developed by Blum and Oettli [1]. An equilibrium problem is also known as the Ky Fan inequality problem. Fan [6] presented a review and gave specific conditions on a bifunction for the existence of an equilibrium point. Many researchers have provided and generalized results on the existence of a solution for the equilibrium problem (see [7,8,9,10]). A considerable number of methods have been set up over the last few years for the different classes of equilibrium problems and other particular forms of equilibrium problems in abstract spaces (see [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]).
The extragradient methods of Korpelevich and Antipin [30,31] are efficient two-step methods. The authors of [12,20] employed the auxiliary problem principle to set up the extragradient method for monotone equilibrium problems. The extragradient method requires computing two metric projections onto $C$ to obtain the next iteration. If the projection onto the feasible set $C$ is hard to compute, solving two minimal distance problems per iteration becomes a challenge and may affect the method's performance and efficiency. To overcome this, Censor et al. initiated a subgradient extragradient method [32] in which the second projection is replaced by a projection onto a half-space, which can be computed effectively. Iterative sequences set up with the above-mentioned extragradient-like methods need a certain stepsize constant based on the Lipschitz-type constants of the cost bifunction. Prior knowledge of these constants imposes restrictions on developing an iterative sequence, because Lipschitz-type constants are normally unknown or hard to compute.
In 2016, Lyashko et al. [33] developed an extragradient method for solving pseudomonotone equilibrium problems in a real Hilbert space. It requires solving two optimization problems on a closed convex set for each iteration, with a fixed stepsize that depends on the Lipschitz-type constants. The advantage of the Lyashko et al. [33] method over the Tran et al. [20] extragradient method is that the value of the bifunction $f$ is evaluated only once per iteration. Inertial-type methods are based on a discrete variant of a second-order dissipative dynamical system. To numerically solve a smooth convex minimization problem, Polyak [34] proposed an iterative scheme that uses inertial extrapolation as a boost ingredient to improve the convergence rate of the iterative sequence. An inertial method is commonly a two-step iterative scheme in which the next iteration is computed from the previous two iterations, and it may be regarded as a means of speeding up the iterative sequence (see [34,35]). In the case of equilibrium problems, Moudafi established a second-order differential proximal method [36]. Such inertial methods are employed to accelerate the iterative process toward the desired solution. Numerical studies indicate that inertial effects generally enhance the performance of a method in terms of the number of iterations and execution time in this context. Many methods have been established for the different classes of variational inequality problems (for more details see [37,38,39,40,41]).
In this study, we considered the extragradient methods of Lyashko et al. [33] and Liu et al. [42] and present an improvement of them by employing an inertial scheme. We also incorporated an improved stepsize into the second step. The stepsize is not fixed in our proposed method; instead, it is set by an explicit formula based on previous iterations. We formulated a weak convergence theorem for our proposed method for equilibrium problems involving pseudomonotone bifunctions under specific conditions. We also examined how our results apply to variational inequality problems. Apart from this, we considered the well-known Nash-Cournot equilibrium model as a test problem to support the validity of our results. Some applications to variational inequality problems were considered, and further numerical examples were presented to support the appropriateness of our designed results.
The rest of the article is organized as follows: In Section 2 we give a few definitions and significant results to be utilized in this paper. Section 3 presents our first algorithm involving a pseudomonotone bifunction and gives the weak convergence result. Section 4 illustrates some applications of our results to variational inequality problems. Section 5 sets out numerical experiments that describe the numerical performance.

2. Preliminaries

In this part we cover some relevant lemmas, definitions and other notions that will be employed throughout the convergence analysis and the numerical part. The notations $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ stand for the inner product and the norm on the Hilbert space $E$. Let $G : E \to E$ be a well-defined operator and let $VI(G, C)$ be the solution set of the variational inequality problem corresponding to the operator $G$ over the set $C$. Moreover, $EP(f, C)$ stands for the solution set of the equilibrium problem over the set $C$, and $p^*$ is an arbitrary element of $EP(f, C)$ or $VI(G, C)$.
Let $g : C \to \mathbb{R}$ be a convex function. The subdifferential of $g$ at $u \in C$ is defined as:
$\partial g(u) = \{ z \in E : g(v) - g(u) \ge \langle z, v - u \rangle, \ \forall v \in C \}.$
The normal cone of $C$ at $u \in C$ is given as
$N_C(u) = \{ z \in E : \langle z, v - u \rangle \le 0, \ \forall v \in C \}.$
We recall various concepts of monotonicity of a bifunction (see [1,43] for details).
Definition 1.
A bifunction $f : E \times E \to \mathbb{R}$ is, on $C$, for $\gamma > 0$:
(i) strongly monotone if $f(u,v) + f(v,u) \le -\gamma\|u-v\|^2, \ \forall u,v \in C$;
(ii) monotone if $f(u,v) + f(v,u) \le 0, \ \forall u,v \in C$;
(iii) strongly pseudomonotone if $f(u,v) \ge 0 \Rightarrow f(v,u) \le -\gamma\|u-v\|^2, \ \forall u,v \in C$;
(iv) pseudomonotone if $f(u,v) \ge 0 \Rightarrow f(v,u) \le 0, \ \forall u,v \in C$;
(v) satisfying the Lipschitz-type condition on $C$ if there are two real numbers $c_1, c_2 > 0$ such that
$f(u,w) \le f(u,v) + f(v,w) + c_1\|u-v\|^2 + c_2\|v-w\|^2, \ \forall u,v,w \in C,$
holds.
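The Lipschitz-type condition in (v) can be illustrated numerically. Below is a minimal sketch (our own example, not from the paper): for $f(u,v) = \langle G(u), v-u \rangle$ with an $L$-Lipschitz operator $G$, the condition holds with $c_1 = c_2 = L/2$, since $f(u,w) - f(u,v) - f(v,w) = \langle G(u)-G(v), w-v \rangle \le L\|u-v\|\|w-v\| \le \frac{L}{2}\|u-v\|^2 + \frac{L}{2}\|w-v\|^2$. The affine operator chosen here is an assumption for the demo.

```python
# Numerical sanity check (our own illustration) of the Lipschitz-type
# condition (v) for f(u, v) = <G(u), v - u> with an L-Lipschitz operator G.
import random

def G(x):
    # affine operator with Lipschitz constant L = 3 (spectral norm of A = [[2,1],[1,2]])
    return [2*x[0] + x[1], x[0] + 2*x[1]]

def f(u, v):
    # bifunction f(u, v) = <G(u), v - u>
    gu = G(u)
    return gu[0]*(v[0]-u[0]) + gu[1]*(v[1]-u[1])

def n2(a, b):
    # squared Euclidean distance ||a - b||^2
    return (a[0]-b[0])**2 + (a[1]-b[1])**2

L, c1, c2 = 3.0, 1.5, 1.5   # c1 = c2 = L/2
random.seed(0)
for _ in range(1000):
    u, v, w = ([random.uniform(-5, 5) for _ in range(2)] for _ in range(3))
    # condition (v): f(u,w) <= f(u,v) + f(v,w) + c1||u-v||^2 + c2||v-w||^2
    assert f(u, w) <= f(u, v) + f(v, w) + c1*n2(u, v) + c2*n2(v, w) + 1e-9
print("Lipschitz-type condition verified with c1 = c2 = L/2")
```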
Definition 2.
[44] The metric projection $P_C(u)$ of $u$ onto a closed, convex subset $C$ of $E$ is defined as follows:
$P_C(u) = \arg\min_{v \in C} \|v - u\|.$
Lemma 1.
[45] Let $P_C : E \to C$ be the metric projection from $E$ onto $C$. Then:
(i) for each $u \in C$, $v \in E$,
$\|u - P_C(v)\|^2 + \|P_C(v) - v\|^2 \le \|u - v\|^2;$
(ii) $w = P_C(u)$ if and only if
$\langle u - w, v - w \rangle \le 0, \ \forall v \in C.$
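The projection properties of Lemma 1 can be checked numerically on a simple set. The following sketch (our own assumption-free illustration, not part of the paper) uses the box $C = [0,1]^2$ in $\mathbb{R}^2$, whose metric projection is coordinate-wise clipping.

```python
# Numerical illustration of Definition 2 and Lemma 1 on the box C = [0,1]^2,
# where the metric projection is computed by coordinate-wise clipping.

def proj_box(u, lo=0.0, hi=1.0):
    """Metric projection P_C(u) onto the box [lo, hi]^n."""
    return [min(max(x, lo), hi) for x in u]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

v = [1.7, -0.4]      # a point outside C
w = proj_box(v)      # P_C(v) = [1.0, 0.0]
u = [0.3, 0.8]       # any point of C

# Lemma 1 (i): ||u - P_C(v)||^2 + ||P_C(v) - v||^2 <= ||u - v||^2
assert dist2(u, w) + dist2(w, v) <= dist2(u, v) + 1e-12

# Lemma 1 (ii): <v - w, y - w> <= 0 for all y in C, where w = P_C(v);
# checked on a grid of points of C.
grid = [[i / 4, j / 4] for i in range(5) for j in range(5)]
assert all(dot([v[0] - w[0], v[1] - w[1]], [y[0] - w[0], y[1] - w[1]]) <= 1e-12
           for y in grid)
print("Lemma 1 verified on the box example")
```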
This section concludes with a few crucial lemmas that are useful in investigating the convergence of our proposed results.
Lemma 2.
[46] Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $E$ and let $h : C \to \mathbb{R}$ be a convex, subdifferentiable and lower semi-continuous function on $C$. Then $x \in C$ is a minimizer of $h$ if and only if $0 \in \partial h(x) + N_C(x)$, where $\partial h(x)$ and $N_C(x)$ stand for the subdifferential of $h$ at $x$ and the normal cone of $C$ at $x$, respectively.
Lemma 3
([47], page 31). For every $a, b \in E$ and $\xi \in \mathbb{R}$, the following relation is true:
$\|\xi a + (1-\xi) b\|^2 = \xi\|a\|^2 + (1-\xi)\|b\|^2 - \xi(1-\xi)\|a-b\|^2.$
Lemma 4.
[48] Let $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n\}$ be sequences in $[0, +\infty)$ such that
$\alpha_{n+1} \le \alpha_n + \beta_n(\alpha_n - \alpha_{n-1}) + \gamma_n, \ \forall n \ge 1, \quad \text{with} \ \sum_{n=1}^{+\infty}\gamma_n < +\infty,$
and suppose there exists $\beta > 0$ with $0 \le \beta_n \le \beta < 1$ for all $n \in \mathbb{N}$. Then the following items are true:
(i) $\sum_{n=1}^{+\infty}[\alpha_n - \alpha_{n-1}]_+ < +\infty$, with $[p]_+ := \max\{p, 0\}$;
(ii) $\lim_{n\to+\infty}\alpha_n = \alpha^* \in [0, \infty)$.
Lemma 5.
[49] Let $\{\eta_n\}$ be a sequence in $E$ and $C \subseteq E$ such that:
(i) for each $\eta \in C$, $\lim_{n\to\infty}\|\eta_n - \eta\|$ exists;
(ii) every sequentially weak cluster point of $\{\eta_n\}$ lies in $C$.
Then $\{\eta_n\}$ converges weakly to an element of $C$.
Lemma 6.
[50] Assume $\{a_n\}$ and $\{b_n\}$ are real sequences such that $a_n \le b_n$ for all $n \in \mathbb{N}$. Take $\varrho, \sigma \in (0,1)$ and $\mu \in (0, \sigma)$. Then there is a sequence $\{\lambda_n\}$ such that $\lambda_n a_n \le \mu b_n$ and $\lambda_n \in (\varrho\mu, \sigma)$.
Due to the Lipschitz-type condition on the bifunction $f$, the above lemma yields the following inequality.
Corollary 1.
Assume that the bifunction $f$ satisfies the Lipschitz-type condition on $C$ with positive constants $c_1$ and $c_2$. Let $\varrho \in (0,1)$, $\sigma < \min\left\{\frac{1-3\theta}{(1-\theta)^2 + 4c_1(\theta+\theta^2)}, \frac{1}{2c_2 + 4c_1(1+\theta)}\right\}$ where $\theta \in [0, \frac{1}{3})$, and $\mu \in (0, \sigma)$. Then there exists a positive real number $\lambda$ such that
$\lambda\big(f(u,w) - f(u,v) - c_1\|u-v\|^2 - c_2\|v-w\|^2\big) \le \mu f(v,w)$
and $\varrho\mu < \lambda < \sigma$, where $u, v, w \in C$.
Assumption 1.
Let the bifunction $f : E \times E \to \mathbb{R}$ satisfy:
(f1) $f(v,v) = 0$ for all $v \in C$, and $f$ is pseudomonotone on the feasible set $C$;
(f2) $f$ satisfies the Lipschitz-type condition on $E$ with constants $c_1$ and $c_2$;
(f3) $\limsup_{n\to\infty} f(x_n, v) \le f(x^*, v)$ for all $v \in C$ and every $\{x_n\} \subset C$ satisfying $x_n \rightharpoonup x^*$;
(f4) $f(u, \cdot)$ is convex and subdifferentiable over $E$ for every fixed $u \in E$.
Since $f(u, \cdot)$ is convex and subdifferentiable on $E$ for each fixed $u \in E$, the subdifferential of $f(u, \cdot)$ at $x \in E$ is defined as:
$\partial_2 f(u, \cdot)(x) = \partial_2 f(u, x) = \{ z \in E : f(u,v) - f(u,x) \ge \langle z, v-x \rangle, \ \forall v \in E \}.$

3. An Algorithm and Its Convergence Analysis

We now develop a method and provide a weak convergence result for it. We consider a bifunction $f$ that satisfies the conditions of Assumption 1 and assume that $EP(f,C)$ is nonempty. The detailed method is written below.
Lemma 7.
Let the sequence $\{u_n\}$ be generated by Algorithm 1. Then the following relationship holds:
$\mu\lambda_n f(v_n, y) - \mu\lambda_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, y - u_{n+1} \rangle, \ \forall y \in E_n.$
Proof. 
By the definition of $u_{n+1}$ we have
$u_{n+1} = \arg\min_{y \in E_n}\{\mu\lambda_n f(v_n, y) + \tfrac{1}{2}\|w_n - y\|^2\}.$
By using Lemma 2, we obtain
$0 \in \partial_2\big(\mu\lambda_n f(v_n, y) + \tfrac{1}{2}\|w_n - y\|^2\big)(u_{n+1}) + N_{E_n}(u_{n+1}).$
From the above expression, there are $\omega \in \partial_2 f(v_n, u_{n+1})$ and $\bar\omega \in N_{E_n}(u_{n+1})$ such that
$\mu\lambda_n\omega + u_{n+1} - w_n + \bar\omega = 0.$
Thus, we have
$\langle w_n - u_{n+1}, y - u_{n+1} \rangle = \mu\lambda_n\langle \omega, y - u_{n+1} \rangle + \langle \bar\omega, y - u_{n+1} \rangle, \ \forall y \in E_n.$
Since $\bar\omega \in N_{E_n}(u_{n+1})$, we have $\langle \bar\omega, y - u_{n+1} \rangle \le 0$ for all $y \in E_n$. Thus,
$\mu\lambda_n\langle \omega, y - u_{n+1} \rangle \ge \langle w_n - u_{n+1}, y - u_{n+1} \rangle, \ \forall y \in E_n.$ (2)
Since $\omega \in \partial_2 f(v_n, u_{n+1})$, we obtain
$f(v_n, y) - f(v_n, u_{n+1}) \ge \langle \omega, y - u_{n+1} \rangle, \ \forall y \in E.$ (3)
Combining the expressions in Equations (2) and (3), we get
$\mu\lambda_n f(v_n, y) - \mu\lambda_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, y - u_{n+1} \rangle, \ \forall y \in E_n.$
Lemma 8.
Let the sequence $\{v_n\}$ be generated by Algorithm 1. Then the following inequality holds:
$\lambda_{n+1} f(v_n, y) - \lambda_{n+1} f(v_n, v_{n+1}) \ge \langle w_{n+1} - v_{n+1}, y - v_{n+1} \rangle, \ \forall y \in C.$
Proof. 
By the definition of $v_{n+1}$, we have
$0 \in \partial_2\big(\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|w_{n+1} - y\|^2\big)(v_{n+1}) + N_C(v_{n+1}).$
Thus, there are $\omega \in \partial_2 f(v_n, v_{n+1})$ and $\bar\omega \in N_C(v_{n+1})$ such that
$\lambda_{n+1}\omega + v_{n+1} - w_{n+1} + \bar\omega = 0.$
The above expression implies that
$\langle w_{n+1} - v_{n+1}, y - v_{n+1} \rangle = \lambda_{n+1}\langle \omega, y - v_{n+1} \rangle + \langle \bar\omega, y - v_{n+1} \rangle, \ \forall y \in C.$
Since $\bar\omega \in N_C(v_{n+1})$, we have $\langle \bar\omega, y - v_{n+1} \rangle \le 0$ for all $y \in C$. This implies that
$\lambda_{n+1}\langle \omega, y - v_{n+1} \rangle \ge \langle w_{n+1} - v_{n+1}, y - v_{n+1} \rangle, \ \forall y \in C.$ (4)
By $\omega \in \partial_2 f(v_n, v_{n+1})$, we can obtain
$f(v_n, y) - f(v_n, v_{n+1}) \ge \langle \omega, y - v_{n+1} \rangle, \ \forall y \in E.$ (5)
Combining the expressions in Equations (4) and (5), we get
$\lambda_{n+1} f(v_n, y) - \lambda_{n+1} f(v_n, v_{n+1}) \ge \langle w_{n+1} - v_{n+1}, y - v_{n+1} \rangle, \ \forall y \in C.$
Algorithm 1 (The modified Popov's subgradient extragradient method for pseudomonotone EP)
  • Initialization: Choose $u_{-1}, v_{-1}, u_0, v_0 \in E$, $\varrho \in (0,1)$, $\mu \in (0,\sigma)$ and $\lambda_0 > 0$, where $\sigma < \min\left\{\frac{1-3\theta}{(1-\theta)^2 + 4c_1(\theta+\theta^2)}, \frac{1}{2c_2 + 4c_1(1+\theta)}\right\}$ for a nondecreasing sequence $\{\theta_n\}$ such that $0 \le \theta_n \le \theta < \frac{1}{3}$.
  • Iterative steps: Let $u_{n-1}, v_{n-1}, u_n$ and $v_n$ be known for $n \ge 0$. Construct a half-space
    $E_n = \{ z \in E : \langle w_n - \lambda_n\omega_{n-1} - v_n, z - v_n \rangle \le 0 \},$
    where $\omega_{n-1} \in \partial_2 f(v_{n-1}, v_n)$ and $w_n = u_n + \theta_n(u_n - u_{n-1})$.
  • Step 1: Compute
    $u_{n+1} = \arg\min_{y \in E_n}\{\mu\lambda_n f(v_n, y) + \tfrac{1}{2}\|w_n - y\|^2\}.$
  • Step 2: Update the stepsize as follows:
    $\lambda_{n+1} = \min\left\{\sigma, \ \frac{\mu f(v_n, u_{n+1})}{f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1}-v_n\|^2 - c_2\|u_{n+1}-v_n\|^2 + 1}\right\}$ (6)
    and compute
    $v_{n+1} = \arg\min_{y \in C}\{\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|w_{n+1} - y\|^2\},$
    where $w_{n+1} = u_{n+1} + \theta_{n+1}(u_{n+1} - u_n)$.
  • Step 3: If $v_n = v_{n-1}$ and $u_{n+1} = w_n$, then stop. Otherwise, set $n := n+1$ and return to Iterative steps.
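To make the scheme above concrete, the following minimal Python sketch (our own illustration, not the paper's experiment) specializes Algorithm 1 to the bifunction $f(u,v) = \langle G(u), v-u \rangle$, for which both argmin steps reduce to projections, as derived in Section 4. The affine operator $G$, the box constraint $C = [0,5]^2$, the solution $x^* = (2,0)$ and all parameter values are assumptions chosen for the demo; clamping the stepsize from below by $\varrho\mu$ follows the range $\varrho\mu < \lambda < \sigma$ of Corollary 1 and the fallback to $\sigma$ is a practical safeguard of ours, not part of the paper's formula.

```python
import math

def G(x):
    # monotone affine operator G(x) = A x + b with A = [[2,1],[1,2]], b = [-4,-2];
    # G vanishes at x* = (2, 0), which lies in C, so x* solves VI(G, C)
    return [2*x[0] + x[1] - 4, x[0] + 2*x[1] - 2]

def sub(a, b): return [a[0]-b[0], a[1]-b[1]]
def add2(a, b): return [a[0]+b[0], a[1]+b[1]]
def scal(t, a): return [t*a[0], t*a[1]]
def dot(a, b): return a[0]*b[0] + a[1]*b[1]
def norm2(a): return dot(a, a)

def proj_box(x, lo=0.0, hi=5.0):
    # metric projection onto the box C = [lo, hi]^2
    return [min(max(t, lo), hi) for t in x]

def proj_halfspace(x, a, v):
    # projection onto the half-space E = { z : <a, z - v> <= 0 }
    s = dot(a, sub(x, v))
    if s <= 0 or norm2(a) < 1e-18:
        return x
    return sub(x, scal(s / norm2(a), a))

L = 3.0          # Lipschitz constant of G (largest eigenvalue of A)
theta = 0.2      # inertial parameter, theta < 1/3
sigma = 0.09     # sigma < min{(1-3t)/((1-t)^2+2L(t+t^2)), 1/(3L+2tL)} ~ 0.098
rho, mu = 0.5, 0.08
lam = sigma

u_prev, v_prev = [4.0, 4.0], [4.0, 4.0]
u, v = [4.0, 4.0], [4.0, 4.0]
for n in range(3000):
    w = add2(u, scal(theta, sub(u, u_prev)))                 # inertial point w_n
    a = sub(sub(w, scal(lam, G(v_prev))), v)                 # normal of half-space E_n
    u_next = proj_halfspace(sub(w, scal(mu * lam, G(v))), a, v)
    # self-adaptive stepsize (Equation (6) specialized to f(u,v) = <G(u), v-u>);
    # the clamp into [rho*mu, sigma] is our safeguard, guided by Corollary 1
    num = mu * dot(G(v), sub(u_next, v))
    den = dot(G(v_prev), sub(u_next, v)) - 0.5*L*norm2(sub(v_prev, v)) \
          - 0.5*L*norm2(sub(u_next, v)) + 1.0
    lam_next = min(sigma, max(rho*mu, num/den)) if den > 0 and num > 0 else sigma
    w_next = add2(u_next, scal(theta, sub(u_next, u)))
    v_next = proj_box(sub(w_next, scal(lam_next, G(v))))
    u_prev, v_prev, u, v, lam = u, v, u_next, v_next, lam_next

err = math.sqrt(norm2(sub(u, [2.0, 0.0])))
print("distance to the solution (2, 0):", err)
assert err < 0.05
```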
Lemma 9.
Let $\{u_n\}$ and $\{v_n\}$ be sequences generated by Algorithm 1. Then the following inequality is true:
$\lambda_n\big(f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n)\big) \ge \langle w_n - v_n, u_{n+1} - v_n \rangle.$
Proof. 
Since $u_{n+1} \in E_n$, the definition of $E_n$ gives
$\langle w_n - \lambda_n\omega_{n-1} - v_n, u_{n+1} - v_n \rangle \le 0.$
The above implies that
$\lambda_n\langle \omega_{n-1}, u_{n+1} - v_n \rangle \ge \langle w_n - v_n, u_{n+1} - v_n \rangle.$ (7)
By $\omega_{n-1} \in \partial_2 f(v_{n-1}, v_n)$ with $y = u_{n+1}$, we reach the following:
$f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) \ge \langle \omega_{n-1}, u_{n+1} - v_n \rangle.$ (8)
By combining Equations (7) and (8), we obtain
$\lambda_n\big(f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n)\big) \ge \langle w_n - v_n, u_{n+1} - v_n \rangle.$
Lemma 10.
If $u_{n+1} = w_n$ and $v_n = v_{n-1}$ in Algorithm 1, then $v_n$ is a solution of Equation (1).
Proof. 
Setting $u_{n+1} = w_n$ and $v_n = v_{n-1}$ in Lemma 9 and using $f(v_n, v_n) = 0$, we get
$\lambda_n f(v_n, u_{n+1}) \ge \|u_{n+1} - v_n\|^2 \ge 0.$ (9)
By means of $u_{n+1} = w_n$ in Lemma 7, we get
$\mu\lambda_n f(v_n, y) - \mu\lambda_n f(v_n, u_{n+1}) \ge 0, \ \forall y \in E_n.$ (10)
Since $\mu \in (0,1)$ and $\lambda_n \in (0, \infty)$, Equations (9) and (10) give $f(v_n, y) \ge 0$ for all $y \in C \subseteq E_n$, i.e., $v_n$ solves Equation (1).
Remark 1.
(i) If $u_{n+1} = v_n = w_n$ in Algorithm 1, then $v_n \in EP(f,C)$. This is obvious from Lemma 7.
(ii) If $w_{n+1} = v_{n+1} = v_n$ in Algorithm 1, then $v_n \in EP(f,C)$. This is obvious from Lemma 8.
Lemma 11.
Let the bifunction $f : E \times E \to \mathbb{R}$ satisfy assumptions (f1)-(f4). Then, for each $p^* \in EP(f,C)$, we have
$\|u_{n+1}-p^*\|^2 \le \|w_n-p^*\|^2 - (1-\lambda_{n+1})\|u_{n+1}-w_n\|^2 + 4c_1\lambda_{n+1}\lambda_n\|w_n-v_{n-1}\|^2 - \lambda_{n+1}(1-4c_1\lambda_n)\|w_n-v_n\|^2 - \lambda_{n+1}(1-2c_2\lambda_n)\|u_{n+1}-v_n\|^2.$
Proof. 
By substituting $y = p^*$ into Lemma 7, we get
$\mu\lambda_n f(v_n, p^*) - \mu\lambda_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, p^* - u_{n+1} \rangle.$ (11)
Since $p^* \in EP(f,C)$, we have $f(p^*, v_n) \ge 0$, and the pseudomonotonicity of the bifunction $f$ gives $f(v_n, p^*) \le 0$. Therefore, from Equation (11) we get
$\langle w_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \mu\lambda_n f(v_n, u_{n+1}).$ (12)
Corollary 1 implies that $\lambda_{n+1}$ in Equation (6) is well-defined and
$\mu f(v_n, u_{n+1}) \ge \lambda_{n+1}\big(f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1}-v_n\|^2 - c_2\|v_n-u_{n+1}\|^2\big).$ (13)
The expressions in Equations (12) and (13) imply that
$\langle w_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \lambda_{n+1}\big[\lambda_n f(v_{n-1}, u_{n+1}) - \lambda_n f(v_{n-1}, v_n) - c_1\lambda_n\|v_{n-1}-v_n\|^2 - c_2\lambda_n\|u_{n+1}-v_n\|^2\big].$ (14)
Since $u_{n+1} \in E_n$, using Lemma 9 we have
$\lambda_n\big(f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n)\big) \ge \langle w_n - v_n, u_{n+1} - v_n \rangle.$ (15)
Combining the expressions in Equations (14) and (15), we get
$\langle w_n - u_{n+1}, u_{n+1} - p^* \rangle \ge \lambda_{n+1}\big[\langle w_n - v_n, u_{n+1} - v_n \rangle - c_1\lambda_n\|v_{n-1}-v_n\|^2 - c_2\lambda_n\|u_{n+1}-v_n\|^2\big].$ (16)
By vector algebra we have the following identities:
$2\langle w_n - u_{n+1}, u_{n+1} - p^* \rangle = \|w_n-p^*\|^2 - \|u_{n+1}-w_n\|^2 - \|u_{n+1}-p^*\|^2;$
$2\langle w_n - v_n, u_{n+1} - v_n \rangle = \|w_n-v_n\|^2 + \|u_{n+1}-v_n\|^2 - \|w_n-u_{n+1}\|^2.$
From the last two identities and Equation (16) we obtain
$\|u_{n+1}-p^*\|^2 \le \|w_n-p^*\|^2 - (1-\lambda_{n+1})\|u_{n+1}-w_n\|^2 - \lambda_{n+1}(1-2c_2\lambda_n)\|u_{n+1}-v_n\|^2 - \lambda_{n+1}\|w_n-v_n\|^2 + 2c_1\lambda_n\lambda_{n+1}\|v_{n-1}-v_n\|^2.$
The triangle inequality and elementary algebra give
$\|v_{n-1}-v_n\|^2 \le \big(\|v_{n-1}-w_n\| + \|w_n-v_n\|\big)^2 \le 2\|v_{n-1}-w_n\|^2 + 2\|w_n-v_n\|^2.$
From the above two inequalities we have the desired result:
$\|u_{n+1}-p^*\|^2 \le \|w_n-p^*\|^2 - (1-\lambda_{n+1})\|u_{n+1}-w_n\|^2 + 4c_1\lambda_n\lambda_{n+1}\|w_n-v_{n-1}\|^2 - \lambda_{n+1}(1-4c_1\lambda_n)\|w_n-v_n\|^2 - \lambda_{n+1}(1-2c_2\lambda_n)\|u_{n+1}-v_n\|^2.$
Theorem 1.
Suppose the bifunction $f : E \times E \to \mathbb{R}$ satisfies Assumption 1. Then, for any $p^* \in EP(f,C)$, the sequences $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ generated by Algorithm 1 converge weakly to $p^* \in EP(f,C)$.
Proof. 
From Lemma 11 we have
$\|u_{n+1}-p^*\|^2 \le \|w_n-p^*\|^2 - (1-\lambda_{n+1})\|u_{n+1}-w_n\|^2 + 4c_1\lambda_n\lambda_{n+1}\|w_n-v_{n-1}\|^2 - \lambda_{n+1}(1-4c_1\lambda_n)\|w_n-v_n\|^2 - \lambda_{n+1}(1-2c_2\lambda_n)\|u_{n+1}-v_n\|^2.$ (17)
By the definition of $w_n$ in Algorithm 1 and Lemma 3, we may write
$\|w_n-v_{n-1}\|^2 = \|(1+\theta_n)(u_n-v_{n-1}) - \theta_n(u_{n-1}-v_{n-1})\|^2 = (1+\theta_n)\|u_n-v_{n-1}\|^2 - \theta_n\|u_{n-1}-v_{n-1}\|^2 + \theta_n(1+\theta_n)\|u_n-u_{n-1}\|^2 \le (1+\theta)\|u_n-v_{n-1}\|^2 + \theta(1+\theta)\|u_n-u_{n-1}\|^2.$ (18)
Adding $4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2$ to both sides of Equation (17) and using $\lambda_n \le \sigma$, for each $n \ge 1$ we obtain
$\|u_{n+1}-p^*\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 \le \|w_n-p^*\|^2 - (1-\sigma)\|u_{n+1}-w_n\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 + 4c_1\sigma\lambda_n\big[(1+\theta)\|u_n-v_{n-1}\|^2 + \theta(1+\theta)\|u_n-u_{n-1}\|^2\big] - \lambda_{n+1}(1-4c_1\sigma)\|w_n-v_n\|^2 - \lambda_{n+1}(1-2c_2\sigma)\|u_{n+1}-v_n\|^2$ (19)
$\le \|w_n-p^*\|^2 - (1-\sigma)\|u_{n+1}-w_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2 - \lambda_{n+1}(1-4c_1\sigma)\|w_n-v_n\|^2 - \lambda_{n+1}\big(1-2c_2\sigma-4c_1\sigma(1+\theta)\big)\|u_{n+1}-v_n\|^2$ (20)
$\le \|w_n-p^*\|^2 - (1-\sigma)\|u_{n+1}-w_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2 - \frac{\lambda_{n+1}}{2}\big(1-2c_2\sigma-4c_1\sigma(1+\theta)\big)\big[2\|u_{n+1}-v_n\|^2 + 2\|w_n-v_n\|^2\big]$ (21)
$\le \|w_n-p^*\|^2 - (1-\sigma)\|u_{n+1}-w_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2 - \frac{\lambda_{n+1}}{2}\big(1-2c_2\sigma-4c_1\sigma(1+\theta)\big)\|u_{n+1}-w_n\|^2.$ (22)
By Algorithm 1, $0 < \lambda_n \le \sigma < \frac{1}{2c_2+4c_1(1+\theta)}$, and the above inequality ensures
$\|u_{n+1}-p^*\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 \le \|w_n-p^*\|^2 - (1-\sigma)\|u_{n+1}-w_n\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2.$ (23)
From the definition of $w_n$ in Algorithm 1 and Lemma 3 we obtain
$\|w_n-p^*\|^2 = \|(1+\theta_n)(u_n-p^*) - \theta_n(u_{n-1}-p^*)\|^2 = (1+\theta_n)\|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + \theta_n(1+\theta_n)\|u_n-u_{n-1}\|^2.$ (24)
By the definition of $w_{n}$ and the Cauchy inequality, we achieve
$\|u_{n+1}-w_n\|^2 = \|u_{n+1}-u_n-\theta_n(u_n-u_{n-1})\|^2 = \|u_{n+1}-u_n\|^2 + \theta_n^2\|u_n-u_{n-1}\|^2 - 2\theta_n\langle u_{n+1}-u_n, u_n-u_{n-1}\rangle$ (25)
$\ge \|u_{n+1}-u_n\|^2 + \theta_n^2\|u_n-u_{n-1}\|^2 - 2\theta_n\|u_{n+1}-u_n\|\|u_n-u_{n-1}\| \ge \|u_{n+1}-u_n\|^2 + \theta_n^2\|u_n-u_{n-1}\|^2 - \theta_n\|u_{n+1}-u_n\|^2 - \theta_n\|u_n-u_{n-1}\|^2 = (1-\theta_n)\|u_{n+1}-u_n\|^2 + (\theta_n^2-\theta_n)\|u_n-u_{n-1}\|^2.$ (26)
By combining the expressions in Equations (23), (24) and (26), we have
$\|u_{n+1}-p^*\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 \le (1+\theta_n)\|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + \theta_n(1+\theta_n)\|u_n-u_{n-1}\|^2 - (1-\sigma)\big[(1-\theta_n)\|u_{n+1}-u_n\|^2 + (\theta_n^2-\theta_n)\|u_n-u_{n-1}\|^2\big] + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2$ (27)
$\le (1+\theta_n)\|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + \big[\theta(1+\theta) - (1-\sigma)(\theta_n^2-\theta_n) + 4c_1\sigma(\theta+\theta^2)\big]\|u_n-u_{n-1}\|^2 - (1-\sigma)(1-\theta_n)\|u_{n+1}-u_n\|^2$
$= (1+\theta_n)\|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 + \phi_n\|u_n-u_{n-1}\|^2 - \psi_n\|u_{n+1}-u_n\|^2,$ (28)
where
$\phi_n = \theta(1+\theta) - (1-\sigma)(\theta_n^2-\theta_n) + 4c_1\sigma(\theta+\theta^2);$
$\psi_n = (1-\sigma)(1-\theta_n).$
Suppose that
$\Psi_n = \Phi_n + \phi_n\|u_n-u_{n-1}\|^2,$ (29)
where $\Phi_n = \|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2$. Using Equation (29) together with Equation (28) and $\theta_{n+1} \ge \theta_n$, we obtain
$\Psi_{n+1} - \Psi_n = \|u_{n+1}-p^*\|^2 - \theta_{n+1}\|u_n-p^*\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 + \phi_{n+1}\|u_{n+1}-u_n\|^2 - \|u_n-p^*\|^2 + \theta_n\|u_{n-1}-p^*\|^2 - 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 - \phi_n\|u_n-u_{n-1}\|^2 \le \|u_{n+1}-p^*\|^2 - (1+\theta_n)\|u_n-p^*\|^2 + \theta_n\|u_{n-1}-p^*\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 + \phi_{n+1}\|u_{n+1}-u_n\|^2 - 4c_1\sigma\lambda_n(1+\theta)\|u_n-v_{n-1}\|^2 - \phi_n\|u_n-u_{n-1}\|^2 \le -(\psi_n - \phi_{n+1})\|u_{n+1}-u_n\|^2.$ (30)
Next, we compute
$\psi_n - \phi_{n+1} = (1-\sigma)(1-\theta_n) - \theta(1+\theta) + (1-\sigma)(\theta_{n+1}^2-\theta_{n+1}) - 4c_1\sigma(\theta+\theta^2) \ge (1-\sigma)(1-\theta)^2 - \theta(1+\theta) - 4c_1\sigma(\theta+\theta^2) = (1-\theta)^2 - \theta(1+\theta) - \sigma(1-\theta)^2 - 4c_1\sigma(\theta+\theta^2) = 1-3\theta - \sigma\big[(1-\theta)^2 + 4c_1(\theta+\theta^2)\big] \ge 0.$ (31)
Equations (30) and (31), with $\delta := 1-3\theta - \sigma[(1-\theta)^2 + 4c_1(\theta+\theta^2)] \ge 0$, imply that
$\Psi_{n+1} - \Psi_n \le -(\psi_n - \phi_{n+1})\|u_{n+1}-u_n\|^2 \le -\delta\|u_{n+1}-u_n\|^2 \le 0.$ (32)
The relationship in Equation (32) implies that the sequence $\{\Psi_n\}$ is nonincreasing. Furthermore, by the definition of $\Psi_{n+1}$ we have
$\Psi_{n+1} = \|u_{n+1}-p^*\|^2 - \theta_{n+1}\|u_n-p^*\|^2 + \phi_{n+1}\|u_{n+1}-u_n\|^2 + 4c_1\sigma\lambda_{n+1}(1+\theta)\|u_{n+1}-v_n\|^2 \ge -\theta_{n+1}\|u_n-p^*\|^2.$ (33)
Additionally, by the definition of $\Psi_n$ we have
$\|u_n-p^*\|^2 \le \Psi_n + \theta_n\|u_{n-1}-p^*\|^2 \le \Psi_1 + \theta\|u_{n-1}-p^*\|^2 \le \cdots \le \Psi_1(\theta^{n-1}+\cdots+1) + \theta^n\|u_0-p^*\|^2 \le \frac{\Psi_1}{1-\theta} + \theta^n\|u_0-p^*\|^2.$ (34)
On the basis of Equations (33) and (34) we have
$-\Psi_{n+1} \le \theta_{n+1}\|u_n-p^*\|^2 \le \theta\|u_n-p^*\|^2 \le \frac{\theta\Psi_1}{1-\theta} + \theta^{n+1}\|u_0-p^*\|^2.$ (35)
It follows from the expressions in Equations (32) and (35) that
$\delta\sum_{n=1}^{k}\|u_{n+1}-u_n\|^2 \le \Psi_1 - \Psi_{k+1} \le \Psi_1 + \frac{\theta\Psi_1}{1-\theta} + \theta^{k+1}\|u_0-p^*\|^2 \le \frac{\Psi_1}{1-\theta} + \|u_0-p^*\|^2.$ (36)
Letting $k \to \infty$ in Equation (36), we have
$\sum_{n=1}^{\infty}\|u_{n+1}-u_n\|^2 < +\infty, \quad \text{which implies} \quad \lim_{n\to\infty}\|u_{n+1}-u_n\| = 0.$ (37)
By the relationship in Equation (25) together with Equation (37), we have
$\|u_{n+1}-w_n\| \le \|u_{n+1}-u_n\| + \theta_n\|u_n-u_{n-1}\| \to 0 \quad \text{as} \ n \to \infty.$ (38)
The expression in Equation (35) implies that
$-\Phi_{n+1} \le \frac{\theta\Psi_1}{1-\theta} + \theta^{n+1}\|u_0-p^*\|^2 + \phi_{n+1}\|u_{n+1}-u_n\|^2.$ (39)
By Equation (21) we have
$\lambda_{n+1}\big(1-2c_2\sigma-4c_1\sigma(1+\theta)\big)\big[\|u_{n+1}-v_n\|^2 + \|w_n-v_n\|^2\big] \le \Phi_n - \Phi_{n+1} + \theta(1+\theta)\|u_n-u_{n-1}\|^2 + 4c_1\sigma\theta(1+\theta)\|u_{n+1}-u_n\|^2.$ (40)
Fix $k \in \mathbb{N}$ and apply the above inequality for $n = 1, 2, \ldots, k$. Summing up, we get
$\lambda_{n+1}\big(1-2c_2\sigma-4c_1\sigma(1+\theta)\big)\sum_{n=1}^{k}\big[\|u_{n+1}-v_n\|^2 + \|w_n-v_n\|^2\big] \le \Phi_0 - \Phi_{k+1} + \theta(1+\theta)\sum_{n=1}^{k}\|u_n-u_{n-1}\|^2 + 4c_1\sigma\theta(1+\theta)\sum_{n=1}^{k}\|u_{n+1}-u_n\|^2 \le \Phi_0 + \frac{\theta\Psi_1}{1-\theta} + \theta^{k+1}\|u_0-p^*\|^2 + \phi_{k+1}\|u_{k+1}-u_k\|^2 + \theta(1+\theta)\sum_{n=1}^{k}\|u_n-u_{n-1}\|^2 + 4c_1\sigma\theta(1+\theta)\sum_{n=1}^{k}\|u_{n+1}-u_n\|^2.$ (41)
Letting $k \to \infty$ in the above expression, we have
$\sum_{n}\|u_{n+1}-v_n\|^2 < +\infty \quad \text{and} \quad \sum_{n}\|w_n-v_n\|^2 < +\infty,$ (42)
and hence
$\lim_{n\to\infty}\|u_{n+1}-v_n\| = \lim_{n\to\infty}\|w_n-v_n\| = 0.$ (43)
By the triangle inequality, we can easily derive the following from the above-mentioned expressions:
$\lim_{n\to\infty}\|u_n-v_n\| = \lim_{n\to\infty}\|u_n-w_n\| = \lim_{n\to\infty}\|v_{n-1}-v_n\| = 0.$ (44)
Moreover, from the relationship in Equation (27) we have
$\|u_{n+1}-p^*\|^2 \le (1+\theta_n)\|u_n-p^*\|^2 - \theta_n\|u_{n-1}-p^*\|^2 + \theta(1+\theta)\|u_n-u_{n-1}\|^2 + 4c_1\sigma(1+\theta)\|u_n-v_{n-1}\|^2 + 4c_1\sigma(\theta+\theta^2)\|u_n-u_{n-1}\|^2.$
The above expression, together with Equations (37) and (42) and Lemma 4, implies that the limits of $\|u_n-p^*\|$, $\|w_n-p^*\|$ and $\|v_n-p^*\|$ exist for each $p^* \in EP(f,C)$, so the sequences $\{u_n\}$, $\{w_n\}$ and $\{v_n\}$ are bounded. It remains to establish that every weak sequential cluster point of the sequence $\{u_n\}$ lies in $EP(f,C)$. Let $z$ be any sequential weak cluster point of $\{u_n\}$, i.e., there exists a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ converging weakly to $z$; by Equation (44), $\{v_{n_k}\}$ also converges weakly to $z$. Our purpose is to prove that $z \in EP(f,C)$. Using Lemma 7 together with Equations (13) and (15), we obtain
$\mu\lambda_{n_k} f(v_{n_k}, y) \ge \mu\lambda_{n_k} f(v_{n_k}, u_{n_k+1}) + \langle w_{n_k}-u_{n_k+1}, y-u_{n_k+1}\rangle \ge \lambda_{n_k}\lambda_{n_k+1}\big[f(v_{n_k-1}, u_{n_k+1}) - f(v_{n_k-1}, v_{n_k})\big] - c_1\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k-1}-v_{n_k}\|^2 - c_2\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k}-u_{n_k+1}\|^2 + \langle w_{n_k}-u_{n_k+1}, y-u_{n_k+1}\rangle \ge \lambda_{n_k+1}\langle w_{n_k}-v_{n_k}, u_{n_k+1}-v_{n_k}\rangle - c_1\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k-1}-v_{n_k}\|^2 - c_2\lambda_{n_k}\lambda_{n_k+1}\|v_{n_k}-u_{n_k+1}\|^2 + \langle w_{n_k}-u_{n_k+1}, y-u_{n_k+1}\rangle$ (45)
for any $y \in C \subseteq E_n$. The expressions in Equations (38), (43) and (44), together with the boundedness of the sequence $\{u_n\}$, imply that the right-hand side of the above inequality tends to zero. Using $\mu, \lambda_n > 0$, condition (f3) of Assumption 1 and $v_{n_k} \rightharpoonup z$, we obtain
$0 \le \limsup_{k\to\infty} f(v_{n_k}, y) \le f(z, y), \ \forall y \in E_n.$ (46)
Then $z \in C$ implies $f(z, y) \ge 0$ for all $y \in C \subseteq E_n$. This determines that $z \in EP(f,C)$. By Lemma 5, the sequences $\{u_n\}$, $\{v_n\}$ and $\{w_n\}$ converge weakly to $p^* \in EP(f,C)$.
Setting $\theta_n = 0$ in Algorithm 1 and following Theorem 1, we obtain an improved variant, in terms of the stepsize, of the Liu et al. [42] extragradient method.
Corollary 2.
Let the bifunction $f : E \times E \to \mathbb{R}$ satisfy Assumption 1, and let $p^* \in EP(f,C)$. Let the sequences $\{u_n\}$ and $\{v_n\}$ be set up in the subsequent manner:
  • Initialization: Given $u_0, v_{-1}, v_0 \in E$, $\varrho \in (0,1)$, $\sigma < \min\left\{1, \frac{1}{2c_2+4c_1}\right\}$, $\mu \in (0, \sigma)$ and $\lambda_0 > 0$.
  • Iterative steps: For given $u_n, v_{n-1}$ and $v_n$, construct a half-space
    $E_n = \{ z \in E : \langle u_n - \lambda_n\omega_{n-1} - v_n, z - v_n \rangle \le 0 \},$
    where $\omega_{n-1} \in \partial_2 f(v_{n-1}, v_n)$.
  • Step 1: Compute
    $u_{n+1} = \arg\min_{y \in E_n}\{\mu\lambda_n f(v_n, y) + \tfrac{1}{2}\|u_n - y\|^2\}.$
  • Step 2: Update the stepsize as follows:
    $\lambda_{n+1} = \min\left\{\sigma, \ \frac{\mu f(v_n, u_{n+1})}{f(v_{n-1}, u_{n+1}) - f(v_{n-1}, v_n) - c_1\|v_{n-1}-v_n\|^2 - c_2\|u_{n+1}-v_n\|^2 + 1}\right\}$
    and compute
    $v_{n+1} = \arg\min_{y \in C}\{\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|u_{n+1} - y\|^2\}.$
  • Then $\{u_n\}$ and $\{v_n\}$ converge weakly to the solution $p^* \in EP(f,C)$.

4. Solving Variational Inequality Problems with New Self-Adaptive Methods

We consider the application of our above-mentioned results to solve variational inequality problems involving pseudomonotone and Lipschitz-type continuous operator. The variational inequality problem is written in the following way:
find p * C so that G ( p * ) , v p * 0 , v C .
An operator $G : E \to E$ is:
(i) monotone on $C$ if $\langle G(u) - G(v), u - v \rangle \ge 0, \ \forall u,v \in C$;
(ii) $L$-Lipschitz continuous on $C$ if $\|G(u) - G(v)\| \le L\|u - v\|, \ \forall u,v \in C$;
(iii) pseudomonotone on $C$ if $\langle G(u), v - u \rangle \ge 0 \Rightarrow \langle G(v), u - v \rangle \le 0, \ \forall u,v \in C$.
Note: if we choose the bifunction $f(u,v) := \langle G(u), v - u \rangle$ for all $u,v \in C$, then the equilibrium problem transforms into the above variational inequality problem with $L = 2c_1 = 2c_2$. This means that from the definition of $v_{n+1}$ in Algorithm 1 and the above definition of the bifunction $f$ we have
$v_{n+1} = \arg\min_{y \in C}\{\lambda_{n+1} f(v_n, y) + \tfrac{1}{2}\|w_{n+1}-y\|^2\} = \arg\min_{y \in C}\{\lambda_{n+1}\langle G(v_n), y-v_n\rangle + \tfrac{1}{2}\|w_{n+1}-y\|^2\} = \arg\min_{y \in C}\{\lambda_{n+1}\langle G(v_n), y-w_{n+1}\rangle + \tfrac{1}{2}\|w_{n+1}-y\|^2 + \lambda_{n+1}\langle G(v_n), w_{n+1}-v_n\rangle\} = \arg\min_{y \in C}\{\tfrac{1}{2}\|y - (w_{n+1}-\lambda_{n+1}G(v_n))\|^2 - \tfrac{\lambda_{n+1}^2}{2}\|G(v_n)\|^2\} = P_C(w_{n+1} - \lambda_{n+1}G(v_n)).$ (48)
Similarly to the expression in Equation (48), the value $u_{n+1}$ from Algorithm 1 converts into
$u_{n+1} = P_{E_n}(w_n - \mu\lambda_n G(v_n)).$ (49)
Due to $\omega_{n-1} \in \partial_2 f(v_{n-1}, v_n)$ and by the subdifferential definition, we obtain
$\langle \omega_{n-1}, z-v_n \rangle \le \langle G(v_{n-1}), z-v_{n-1} \rangle - \langle G(v_{n-1}), v_n-v_{n-1} \rangle = \langle G(v_{n-1}), z-v_n \rangle, \ \forall z \in E,$ (50)
and consequently $0 \le \langle G(v_{n-1}) - \omega_{n-1}, z-v_n \rangle$ for all $z \in E$. This implies that
$\langle w_n - \lambda_n G(v_{n-1}) - v_n, z-v_n \rangle \le \langle w_n - \lambda_n G(v_{n-1}) - v_n, z-v_n \rangle + \lambda_n\langle G(v_{n-1}) - \omega_{n-1}, z-v_n \rangle = \langle w_n - \lambda_n\omega_{n-1} - v_n, z-v_n \rangle.$
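The identity in Equation (48) can be checked numerically. Below is a small sketch (our own illustration, with an assumed affine operator $G$ and $C = [0,1]^2$): the proximal step $\arg\min_{y \in C}\{\lambda f(v,y) + \tfrac{1}{2}\|w-y\|^2\}$ for $f(u,v) = \langle G(u), v-u \rangle$ is approximated by brute force over a fine grid of $C$ and compared with the projection $P_C(w - \lambda G(v))$.

```python
# Numerical check (our own example) of Equation (48): for f(u,v) = <G(u), v-u>,
# argmin_{y in C} { lam*f(v,y) + 0.5*||w-y||^2 } equals P_C(w - lam*G(v)).

def G(x):
    # any operator works for this identity; an affine example is assumed here
    return [2*x[0] - 1, x[1] + 0.5]

def obj(y, v, w, lam):
    # the proximal objective lam*<G(v), y - v> + 0.5*||w - y||^2
    gv = G(v)
    return lam*(gv[0]*(y[0]-v[0]) + gv[1]*(y[1]-v[1])) \
           + 0.5*((w[0]-y[0])**2 + (w[1]-y[1])**2)

def proj_box(x):
    # metric projection onto C = [0, 1]^2
    return [min(max(t, 0.0), 1.0) for t in x]

v, w, lam = [0.3, 0.9], [0.8, 0.2], 0.25
N = 200
grid = [[i/N, j/N] for i in range(N+1) for j in range(N+1)]
y_star = min(grid, key=lambda y: obj(y, v, w, lam))   # brute-force argmin over C

gv = G(v)
p = proj_box([w[0] - lam*gv[0], w[1] - lam*gv[1]])    # closed form P_C(w - lam*G(v))
assert abs(y_star[0]-p[0]) < 1/N + 1e-9 and abs(y_star[1]-p[1]) < 1/N + 1e-9
print("grid argmin", y_star, "matches P_C(w - lam*G(v))", p)
```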
Assumption 2.
We assume that $G$ satisfies the following conditions:
(G1*) $G$ is monotone on $C$ and $VI(G,C)$ is nonempty;
(G1) $G$ is pseudomonotone on $C$ and $VI(G,C)$ is nonempty;
(G2) $G$ is $L$-Lipschitz continuous on $C$ with a positive parameter $L > 0$;
(G3) $\limsup_{n\to\infty}\langle G(u_n), v-u_n \rangle \le \langle G(x^*), v-x^* \rangle$ for every $v \in C$ and every $\{u_n\} \subset C$ satisfying $u_n \rightharpoonup x^*$.
The following results, deduced from our main results, are applicable to solving variational inequality problems.
Corollary 3.
Assume that $G : C \to E$ satisfies conditions (G1), (G2) and (G3) of Assumption 2. Let $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ be the sequences obtained as follows:
  • Initialization: Choose $u_{-1}, v_{-1}, u_0, v_0 \in E$, $\varrho \in (0,1)$, $\mu \in (0,\sigma)$ and $\lambda_0 > 0$, where $\sigma < \min\left\{\frac{1-3\theta}{(1-\theta)^2 + 2L(\theta+\theta^2)}, \frac{1}{3L+2\theta L}\right\}$ for a nondecreasing sequence $\{\theta_n\}$ such that $0 \le \theta_n \le \theta < \frac{1}{3}$.
  • Iterative steps: For given $u_{n-1}, v_{n-1}, u_n$ and $v_n$, construct a half-space
    $E_n = \{ z \in E : \langle w_n - \lambda_n G(v_{n-1}) - v_n, z - v_n \rangle \le 0 \},$
    where $w_n = u_n + \theta_n(u_n - u_{n-1})$.
  • Step 1: Compute
    $u_{n+1} = P_{E_n}(w_n - \mu\lambda_n G(v_n)).$
  • Step 2: The stepsize $\lambda_{n+1}$ is updated as follows:
    $\lambda_{n+1} = \min\left\{\sigma, \ \frac{\mu\langle G(v_n), u_{n+1}-v_n\rangle}{\langle G(v_{n-1}), u_{n+1}-v_n\rangle - \frac{L}{2}\|v_{n-1}-v_n\|^2 - \frac{L}{2}\|u_{n+1}-v_n\|^2 + 1}\right\}$
    and compute
    $v_{n+1} = P_C(w_{n+1} - \lambda_{n+1}G(v_n)), \quad \text{where} \ w_{n+1} = u_{n+1} + \theta_{n+1}(u_{n+1} - u_n).$
  • Then the sequences $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ converge weakly to $p^* \in VI(G,C)$.
Corollary 4.
Assume that $G : C \to E$ satisfies conditions (G1), (G2) and (G3) of Assumption 2. Let $\{u_n\}$ and $\{v_n\}$ be the sequences obtained as follows:
  • Initialization: Choose $v_{-1}, u_0, v_0 \in E$, $\varrho \in (0,1)$, $\sigma < \min\left\{1, \frac{1}{3L}\right\}$, $\mu \in (0, \sigma)$ and $\lambda_0 > 0$.
  • Iterative steps: For given $v_{n-1}, u_n$ and $v_n$, construct a half-space
    $E_n = \{ z \in E : \langle u_n - \lambda_n G(v_{n-1}) - v_n, z - v_n \rangle \le 0 \}.$
  • Step 1: Compute
    $u_{n+1} = P_{E_n}(u_n - \mu\lambda_n G(v_n)).$
  • Step 2: The stepsize $\lambda_{n+1}$ is updated as follows:
    $\lambda_{n+1} = \min\left\{\sigma, \ \frac{\mu\langle G(v_n), u_{n+1}-v_n\rangle}{\langle G(v_{n-1}), u_{n+1}-v_n\rangle - \frac{L}{2}\|v_{n-1}-v_n\|^2 - \frac{L}{2}\|u_{n+1}-v_n\|^2 + 1}\right\}$
    and compute
    $v_{n+1} = P_C(u_{n+1} - \lambda_{n+1}G(v_n)).$
  • Then $\{u_n\}$ and $\{v_n\}$ converge weakly to the solution $p^* \in VI(G,C)$.
We examine that if G is monotone then condition ( G 3 ) can be removed. The assumption ( G 3 ) is required to specify f ( u , v ) = G ( u ) , v u complies with the condition ( f 3 ). In addition, condition ( f 3 ) is required to show z E P ( f , C ) after the inequality in Equation (47). This implies that the condition ( G 3 ) is used to prove z V I ( G , C ) . Now we will prove that z V I ( G , C ) by using the monotonicity of operator G . Since G is monotone, we have
$\langle G(y),\; y - v_n \rangle \ge \langle G(v_n),\; y - v_n \rangle, \quad \forall y \in E.$
By $f(u, v) = \langle G(u), v - u \rangle$ and Equation (46), we have
$\limsup_{k \to \infty} \langle G(v_{n_k}),\; y - v_{n_k} \rangle \ge 0, \quad \forall y \in E_n.$
Combining Equation (51) with Equation (52), we deduce that
$\limsup_{k \to \infty} \langle G(y),\; y - v_{n_k} \rangle \ge 0, \quad \forall y \in E_n.$
Since $v_{n_k} \rightharpoonup z \in C$, it follows that $\langle G(y), y - z \rangle \ge 0$ for all $y \in C$. Let $v_t = (1 - t) z + t y$ for all $t \in [0, 1]$. Due to the convexity of $C$, we have $v_t \in C$ for each $t \in (0, 1)$. We obtain
$0 \le \langle G(v_t),\; v_t - z \rangle = t \langle G(v_t),\; y - z \rangle.$
That is, $\langle G(v_t), y - z \rangle \ge 0$ for all $t \in (0, 1)$. Since $v_t \to z$ as $t \to 0$, the continuity of $G$ gives $\langle G(z), y - z \rangle \ge 0$ for each $y \in C$, which implies that $z \in VI(G, C)$.
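The passage above is the classical Minty-type argument: for a continuous operator $G$ on a convex set $C$, it upgrades the weak (Minty) formulation to the strong one. Schematically (our summary, not a displayed equation of the paper):

```latex
\text{Minty VI: } \langle G(y),\, y - z \rangle \ge 0 \;\; \forall y \in C
\quad\Longrightarrow\quad
\text{VI: } \langle G(z),\, y - z \rangle \ge 0 \;\; \forall y \in C,
```

obtained by testing with $v_t = (1-t)z + ty$ and letting $t \to 0$.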
Corollary 5.
Assume that $G : C \to E$ satisfies conditions $(G1^*)$ and $(G2)$ of Assumption 2. Let $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ be the sequences obtained as follows:
  • Initialization: Choose $u_{-1}, v_{-1}, u_0, v_0 \in E$, $\varrho \in (0, 1)$, $\sigma < \min\left\{ \frac{1 - 3\theta}{(1-\theta)^2 + 2L(\theta + \theta^2)},\; \frac{1}{3L + 2\theta L} \right\}$ for a nondecreasing sequence $\{\theta_n\}$ with $0 \le \theta_n \le \theta < \frac{1}{3}$, and $\lambda_0 > 0$.
  • Iterative steps: Given $u_{n-1}$, $v_{n-1}$, $u_n$ and $v_n$, construct the half space
    $E_n = \{ z \in E : \langle w_n - \lambda_n G(v_{n-1}) - v_n,\; z - v_n \rangle \le 0 \}$,
    where $w_n = u_n + \theta_n (u_n - u_{n-1})$.
  • Step 1: Compute
    $u_{n+1} = P_{E_n}\big( w_n - \mu \lambda_n G(v_n) \big)$.
  • Step 2: The stepsize $\lambda_{n+1}$ is updated as follows:
    $\lambda_{n+1} = \min\left\{ \sigma,\; \dfrac{\mu}{1 + \dfrac{\langle G(v_n) - G(v_{n-1}),\, u_{n+1} - v_n \rangle}{\frac{L}{2}\|v_{n-1} - v_n\|^2 + \frac{L}{2}\|u_{n+1} - v_n\|^2}} \right\}$
  • and compute
    $v_{n+1} = P_C\big( w_{n+1} - \lambda_{n+1} G(v_n) \big)$, where $w_{n+1} = u_{n+1} + \theta_{n+1} (u_{n+1} - u_n)$.
  • Then the sequences $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ converge weakly to a solution $p^*$ of $VI(G, C)$.
Corollary 6.
Assume that $G : C \to E$ satisfies conditions $(G1^*)$ and $(G2)$ of Assumption 2. Let $\{u_n\}$ and $\{v_n\}$ be the sequences obtained as follows:
  • Initialization: Choose $v_{-1}, u_0, v_0 \in E$, $\varrho \in (0, 1)$, $\sigma < \min\left\{ 1, \frac{1}{3L} \right\}$ and $\lambda_0 > 0$.
  • Iterative steps: Given $v_{n-1}$, $u_n$ and $v_n$, construct the half space
    $E_n = \{ z \in E : \langle u_n - \lambda_n G(v_{n-1}) - v_n,\; z - v_n \rangle \le 0 \}$.
  • Step 1: Compute
    $u_{n+1} = P_{E_n}\big( u_n - \mu \lambda_n G(v_n) \big)$.
  • Step 2: The stepsize $\lambda_{n+1}$ is updated as follows:
    $\lambda_{n+1} = \min\left\{ \sigma,\; \dfrac{\mu}{1 + \dfrac{\langle G(v_n) - G(v_{n-1}),\, u_{n+1} - v_n \rangle}{\frac{L}{2}\|v_{n-1} - v_n\|^2 + \frac{L}{2}\|u_{n+1} - v_n\|^2}} \right\}$
  • and compute
    $v_{n+1} = P_C\big( u_{n+1} - \lambda_{n+1} G(v_n) \big)$.
  • Then $\{u_n\}$ and $\{v_n\}$ converge weakly to a solution $p^*$ of $VI(G, C)$.

5. Computational Experiment

The numerical results in this section illustrate the performance of the proposed methods. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz (2.40 GHz) and 8.00 GB of RAM. In these examples, the x-axis indicates the number of iterations or the execution time (in seconds), and the y-axis represents the value $D_n = \|u_{n+1} - u_n\|$. We compare Algorithm 1 (Algo3) with the methods of Lyashko et al. [33] (Algo1) and Liu et al. [42] (Algo2).
Example 1.
Suppose that $f : C \times C \to \mathbb{R}$ is defined by
$f(u, v) = \langle A u + B v + d,\; v - u \rangle,$
where $d \in \mathbb{R}^n$ and $A$, $B$ are matrices of order $n$ such that $B$ is symmetric positive semidefinite and $B - A$ is symmetric negative definite, with Lipschitz constants $c_1 = c_2 = \frac{1}{2}\|A - B\|$ (for more details see [20]). In Example 1 the matrices $A, B$ are randomly generated (two matrices $E$ and $F$ are generated with entries from $[-1, 1]$; then $B = E^T E$, $S = F^T F$ and $A = S + B$), and the entries of $d$ are drawn randomly from $[-1, 1]$. The constraint set $C \subset \mathbb{R}^n$ is
$C := \{ u \in \mathbb{R}^n : -10 \le u_i \le 10 \}.$
The numerical findings are shown in Figures 1–6 and Table 1 with $v_{-1} = (4, \ldots, 4)$, $u_{-1} = (3, \ldots, 3)$, $u_0 = (1, \ldots, 1)$, $v_0 = (2, \ldots, 2)$, $\lambda = \frac{1}{12 c_1}$, $\sigma = \frac{5}{42 c_1}$, $\mu = \frac{5}{44 c_1}$, $\theta_n = 0.12$ and $\lambda_0 = \frac{1}{4 c_1}$.
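The random data generation described above can be reproduced as follows; the seed and the function name are ours, but the construction mirrors the description:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example1_data(n):
    """Random data as described for Example 1 (function name is ours)."""
    E = rng.uniform(-1.0, 1.0, size=(n, n))
    F = rng.uniform(-1.0, 1.0, size=(n, n))
    B = E.T @ E                           # symmetric positive semidefinite
    S = F.T @ F
    A = S + B                             # then B - A = -S is negative (semi)definite
    d = rng.uniform(-1.0, 1.0, size=n)
    c1 = 0.5 * np.linalg.norm(A - B, 2)   # Lipschitz constant c1 = c2 = 0.5*||A - B||
    return A, B, d, c1

A, B, d, c1 = make_example1_data(5)
```

A quick eigenvalue check confirms the definiteness properties required of $B$ and $B - A$.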
Example 2.
Let $f : C \times C \to \mathbb{R}$ be defined as
$f(u, v) = (u_1 + u_2 - 1)(v_1 - u_1) + (u_1 + u_2 - 1)(v_2 - u_2),$
where $C = [-2, 5] \times [-2, 5]$. We see that
$f(u, v) + f(v, u) = -(u_1 - v_1 + u_2 - v_2)^2 \le 0,$
which shows that the bifunction $f$ is monotone. The numerical findings are shown in Figures 7–14 and Table 2 with $v_{-1} = u_{-1} = u_0 = (1, 1)$, $\lambda = 0.03$, $\sigma = 0.476$, $\mu = 0.455$, $\theta_n = 0.15$ and $\lambda_0 = 0.1$.
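The monotonicity identity above can be checked numerically. This is only a sanity check of the algebra (the reconstruction of $C$ as $[-2, 5] \times [-2, 5]$ is our reading of the garbled source):

```python
import numpy as np

def f(u, v):
    """Bifunction of Example 2."""
    return (u[0] + u[1] - 1.0) * (v[0] - u[0]) + (u[0] + u[1] - 1.0) * (v[1] - u[1])

rng = np.random.default_rng(1)
max_err = 0.0
for _ in range(1000):
    u = rng.uniform(-2.0, 5.0, 2)   # sample points of C = [-2, 5] x [-2, 5]
    v = rng.uniform(-2.0, 5.0, 2)
    lhs = f(u, v) + f(v, u)
    rhs = -(u[0] - v[0] + u[1] - v[1]) ** 2
    max_err = max(max_err, abs(lhs - rhs))
```

Since `lhs` equals `rhs` identically, `max_err` stays at floating-point noise, and `rhs` is never positive, confirming monotonicity on the sampled points.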
Example 3.
Let $G : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$G(u) = \begin{pmatrix} 0.5\, u_1 u_2 - 2 u_2 - 10^7 \\ -4 u_1 + 0.1\, u_2^2 - 10^7 \end{pmatrix}$
and let $C = \{ u \in \mathbb{R}^2 : (u_1 - 2)^2 + (u_2 - 2)^2 \le 1 \}$. The operator $G$ is Lipschitz continuous with $L = 5$ and pseudomonotone. In this experiment we use $u_{-1} = (1, 1)$, $v_{-1} = (2, 2)$, $u_0 = (3, 4)^T$ with stepsize $\lambda = 10^{-8}$ for the methods of Lyashko et al. [33] and Liu et al. [42]. We take $\lambda_0 = 0.1$, $\sigma = 0.0392$ and $\mu = 0.0377$. The experimental results are shown in Table 3 and Figures 15–18.
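The two ingredients needed to run any of the compared methods on Example 3 are the operator $G$ and the projection onto the ball $C$, which has a closed form. A minimal sketch (our helper names; $G$ is our reconstruction of the garbled display):

```python
import numpy as np

def G(u):
    """Operator of Example 3."""
    return np.array([0.5 * u[0] * u[1] - 2.0 * u[1] - 1e7,
                     -4.0 * u[0] + 0.1 * u[1] ** 2 - 1e7])

def proj_C(u, center=(2.0, 2.0), r=1.0):
    """Euclidean projection onto the ball C = {u : ||u - center|| <= r}."""
    c = np.asarray(center)
    d = u - c
    nd = np.linalg.norm(d)
    return u.copy() if nd <= r else c + (r / nd) * d

p = proj_C(np.array([4.0, 2.0]))   # -> lands on the boundary at (3, 2)
```

Points inside the ball are left unchanged; exterior points are pulled radially onto the boundary.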
Example 4.
Take $G : \mathbb{R}^n \to \mathbb{R}^n$ to be defined by
$G(u) = A u + B(u),$
where $A$ is an $n \times n$ symmetric positive semidefinite matrix and $B(u)$ is the proximal mapping of the function $h(u) = \frac{1}{4}\|u\|^4$, i.e.,
$B(u) = \arg\min_{v \in \mathbb{R}^n} \left\{ \frac{\|v\|^4}{4} + \frac{1}{2}\|v - u\|^2 \right\}.$
The properties of $A$ and of the proximal mapping $B$ imply that $G$ is monotone on $C$ [45]. The feasible set is
$C := \{ u \in \mathbb{R}^5 : -2 \le u_i \le 5 \}.$
The numerical results are shown in Table 4 and Figure 19.
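The proximal mapping $B(u)$ above reduces to a scalar equation: the minimizer lies along $u$, say $v = \alpha u$, and the optimality condition $\|v\|^2 v + v - u = 0$ becomes $\|u\|^2 \alpha^3 + \alpha - 1 = 0$, a strictly increasing cubic with a unique root $\alpha \in (0, 1]$. A small sketch of this computation (the derivation and function name are ours):

```python
import numpy as np

def prox_quartic(u, tol=1e-12):
    """B(u) = argmin_v ||v||^4/4 + 0.5*||v - u||^2.

    The minimizer is v = alpha*u, where alpha in (0, 1] is the unique root of
    ||u||^2 * alpha^3 + alpha - 1 = 0, found here by bisection.
    """
    s = float(u @ u)          # ||u||^2
    if s == 0.0:
        return np.zeros_like(u)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if s * mid ** 3 + mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * u

v = prox_quartic(np.array([3.0, 4.0]))   # ||u|| = 5
```

The result can be verified against the first-order optimality condition $\|v\|^2 v + v - u = 0$.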

6. Conclusions

We have developed extragradient-like methods to solve pseudomonotone equilibrium problems and several classes of variational inequality problems in real Hilbert spaces. The advantage of our methods lies in the explicit formula used to evaluate the stepsize, which is updated at each iteration on the basis of the previous iterates. Numerical results were reported to demonstrate the effectiveness of our methods relative to other methods; these experiments also suggest that the inertial term generally improves the efficiency of the iterative sequence.

Author Contributions

The authors contributed equally to writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was financially supported by King Mongkut’s University of Technology Thonburi through the ‘KMUTT 55th Anniversary Commemorative Fund’. Moreover, this project was supported by Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart research Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. In particular, Habib ur Rehman was financed by the Petchra Pra Jom Doctoral Scholarship Academic for Ph.D. Program at KMUTT [grant number 39/2560]. Furthermore, Wiyada Kumam was financially supported by the Rajamangala University of Technology Thanyaburi (RMUTTT) (Grant No. NSF62D0604).

Acknowledgments

The first author would like to thank the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”. We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped to improve the quality of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  2. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: New York, NY, USA, 2007.
  3. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
  4. Yang, Q.; Bian, X.; Stark, R.; Fresemann, C.; Song, F. Configuration Equilibrium Model of Product Variant Design Driven by Customer Requirements. Symmetry 2019, 11, 508.
  5. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
  6. Fan, K. A Minimax Inequality and Applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  7. Yuan, G.X.Z. KKM Theory and Applications in Nonlinear Analysis; CRC Press: Boca Raton, FL, USA, 1999; Volume 218.
  8. Brézis, H.; Nirenberg, L.; Stampacchia, G. A remark on Ky Fan’s minimax principle. Boll. Dell Unione Mat. Ital. 2008, 1, 257–264.
  9. Rehman, H.U.; Kumam, P.; Sompong, D. Existence of tripled fixed points and solution of functional integral equations through a measure of noncompactness. Carpathian J. Math. 2019, 35, 193–208.
  10. Rehman, H.U.; Gopal, G.; Kumam, P. Generalizations of Darbo’s fixed point theorem for new condensing operators with application to a functional integral equation. Demonstr. Math. 2019, 52, 166–182.
  11. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  12. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41.
  13. Van Hieu, D.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 2016, 73, 197–217.
  14. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96.
  15. Van Hieu, D. Halpern subgradient extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2017, 111, 823–840.
  16. Hieu, D.V. Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 2016, 21, 478–501.
  17. Konnov, I. Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 2003, 119, 317–333.
  18. Duc, P.M.; Muu, L.D.; Quy, N.V. Solution-existence and algorithms with their convergence rate for strongly pseudomonotone equilibrium problems. Pac. J. Optim. 2016, 12, 833–845.
  19. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
  20. Quoc Tran, D.; Le Dung, M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  21. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107.
  22. Tada, A.; Takahashi, W. Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133, 359–370.
  23. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  24. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 2019, 1–25.
  25. Rehman, H.U.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
  26. Ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
  27. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013.
  28. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
  29. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; CRC Press: Boca Raton, FL, USA, 2012.
  30. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  31. Antipin, A. Convex programming method using a symmetric modification of the Lagrangian functional. Ekon. Mat. Metod. 1976, 12, 1164–1173.
  32. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  33. Lyashko, S.I.; Semenov, V.V. A new two-step proximal algorithm of solving the problem of equilibrium programming. In Optimization and Its Applications in Control and Data Sciences; Springer: Cham, Switzerland, 2016; pp. 315–325.
  34. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  35. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  36. Moudafi, A. Second-order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 2003, 4, 1–7.
  37. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  38. Thong, D.V.; Van Hieu, D. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
  39. Dong, Q.; Cho, Y.; Zhong, L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  40. Yang, J. Self-adaptive inertial subgradient extragradient algorithm for solving pseudomonotone variational inequalities. Appl. Anal. 2019.
  41. Thong, D.V.; Van Hieu, D.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2020, 14, 115–144.
  42. Liu, Y.; Kong, H. The new extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2019, 113, 2113–2126.
  43. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  44. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker, Inc.: New York, NY, USA, 1984.
  45. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley: New York, NY, USA, 1978.
  46. Tiel, J.V. Convex Analysis; John Wiley: New York, NY, USA, 1984.
  47. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
  48. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  49. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  50. Dadashi, V.; Iyiola, O.S.; Shehu, Y. The subgradient extragradient method for pseudomonotone equilibrium problems. Optimization 2019.
Figure 1. Example 1 when n = 5.
Figure 2. Example 1 when n = 5.
Figure 3. Example 1 when n = 10.
Figure 4. Example 1 when n = 10.
Figure 5. Example 1 when n = 20.
Figure 6. Example 1 when n = 20.
Figure 7. Example 2 when v0 = (−1.0, 2.0).
Figure 8. Example 2 when v0 = (−1.0, 2.0).
Figure 9. Example 2 when v0 = (1.5, 1.7).
Figure 10. Example 2 when v0 = (1.5, 1.7).
Figure 11. Example 2 when v0 = (2.7, 4.6).
Figure 12. Example 2 when v0 = (2.7, 4.6).
Figure 13. Example 2 when v0 = (2.0, 3.0).
Figure 14. Example 2 when v0 = (2.0, 3.0).
Figure 15. Example 3 when u0 = (1.5, 1.7).
Figure 16. Example 3 when u0 = (2.0, 3.0).
Figure 17. Example 3 when u0 = (1.0, 2.0).
Figure 18. Example 3 when u0 = (2.7, 2.6).
Figure 19. Example 4 when n = 5.
Table 1. Example 1: The numerical results for Figures 1–6.

        Algo1              Algo2              Algo3
n       Iter.  Exec. time  Iter.  Exec. time  Iter.  Exec. time
5       287    5.9342      281    3.5302      12     0.1204
10      727    19.8789     960    12.8186     16     0.1584
20      2997   72.7622     3510   35.10       14     0.1624
Table 2. Example 2: The numerical results for Figures 7–14.

             Algo1              Algo2              Algo3
v0           Iter.  Exec. time  Iter.  Exec. time  Iter.  Exec. time
(−1.0, 2.0)  180    1.7844      172    0.7740      20     0.1025
(1.5, 1.7)   187    2.1016      181    0.8069      23     0.1125
(2.7, 4.6)   190    1.9044      184    0.7979      17     0.0881
(2.0, 3.0)   188    1.8635      182    0.7792      20     0.1063
Table 3. Example 3: The numerical results for Figures 15–18.

             Algo1              Algo2              Algo3
v0           Iter.  Exec. time  Iter.  Exec. time  Iter.  Exec. time
(1.5, 1.7)   82     2.6525      81     1.3557      47     0.9015
(2.0, 3.0)   82     2.7698      81     1.3698      50     1.4948
(1.0, 2.0)   85     2.9042      84     1.4026      43     1.2657
(2.7, 2.6)   86     2.8937      81     1.3990      48     1.4540
Table 4. Example 4: The numerical results for Figure 19.

        Algo1              Algo3
n       Iter.  Exec. time  Iter.  Exec. time
5       338    12.6364     112    8.8393
