Article

Approximation Results for Equilibrium Problems Involving Strongly Pseudomonotone Bifunction in Real Hilbert Spaces

1 Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
2 Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381 Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Axioms 2020, 9(4), 137; https://doi.org/10.3390/axioms9040137
Submission received: 26 September 2020 / Revised: 15 November 2020 / Accepted: 21 November 2020 / Published: 26 November 2020
(This article belongs to the Special Issue Numerical Analysis and Computational Mathematics)

Abstract: Many problems in non-linear analysis, including minimax problems, mathematical programming, fixed-point problems, saddle-point problems, penalization, and complementarity problems, may be framed as equilibrium problems. Most solution techniques for equilibrium problems are iterative, and the aim of this article is to establish a new iterative method by incorporating an inertial term into a subgradient extragradient method for solving equilibrium problems whose bifunction is strongly pseudomonotone and satisfies a Lipschitz-type condition in a real Hilbert space. Under certain mild conditions, a strong convergence theorem is proved, and the required sequence is generated without prior knowledge of the Lipschitz-type constants of the cost bifunction; the method instead operates with a diminishing, non-summable step size sequence. In the numerical part, we consider various equilibrium test problems to validate the proposed results.

1. Background

Assume that a bifunction $f : H \times H \to \mathbb{R}$ satisfies $f(v, v) = 0$ for each $v \in K$. The equilibrium problem [1,2] for $f$ on $K$ is:
Find $v^* \in K$ such that $f(v^*, v) \geq 0$ for all $v \in K$,     (1)
where $K$ is a non-empty, closed, and convex subset of a real Hilbert space $H$. Next, we recall the standard classification of equilibrium problems [1,3]. A bifunction $f : H \times H \to \mathbb{R}$ on $K$, for $\gamma > 0$, is said to be
(i) strongly monotone if
$$f(v_1, v_2) + f(v_2, v_1) \leq -\gamma \|v_1 - v_2\|^2, \quad \forall\, v_1, v_2 \in K;$$
(ii) monotone if
$$f(v_1, v_2) + f(v_2, v_1) \leq 0, \quad \forall\, v_1, v_2 \in K;$$
(iii) $\gamma$-strongly pseudo-monotone if
$$f(v_1, v_2) \geq 0 \;\Longrightarrow\; f(v_2, v_1) \leq -\gamma \|v_1 - v_2\|^2, \quad \forall\, v_1, v_2 \in K;$$
(iv) pseudo-monotone if
$$f(v_1, v_2) \geq 0 \;\Longrightarrow\; f(v_2, v_1) \leq 0, \quad \forall\, v_1, v_2 \in K;$$
and
(v) satisfying the Lipschitz-type condition on $K$ if there exist $L_1, L_2 > 0$ such that
$$f(v_1, v_3) - L_1 \|v_1 - v_2\|^2 - L_2 \|v_2 - v_3\|^2 \leq f(v_1, v_2) + f(v_2, v_3), \quad \forall\, v_1, v_2, v_3 \in K.$$
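To make condition (v) concrete, the following minimal Python sketch (our own illustration, not part of the original paper) samples random triples and checks the Lipschitz-type inequality for an affine bifunction $f(u, v) = \langle Au + Bv + c, v - u \rangle$ with the commonly used constants $L_1 = L_2 = \tfrac{1}{2}\|A - B\|$; the matrices, vector $c$, and tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.diag(rng.uniform(3.0, 4.0, n))       # illustrative symmetric matrices
B = np.diag(rng.uniform(1.0, 2.0, n))
c = rng.standard_normal(n)

def f(u, v):
    """Affine bifunction f(u, v) = <A u + B v + c, v - u>."""
    return (A @ u + B @ v + c) @ (v - u)

L1 = L2 = 0.5 * np.linalg.norm(A - B, 2)    # Lipschitz-type constants

# Check f(v1, v3) - L1||v1 - v2||^2 - L2||v2 - v3||^2 <= f(v1, v2) + f(v2, v3)
# on many random triples.
ok = True
for _ in range(10_000):
    v1, v2, v3 = rng.uniform(-5, 5, (3, n))
    lhs = f(v1, v3) - L1 * np.sum((v1 - v2) ** 2) - L2 * np.sum((v2 - v3) ** 2)
    rhs = f(v1, v2) + f(v2, v3)
    ok &= lhs <= rhs + 1e-9
print("Lipschitz-type condition holds on all samples:", ok)
```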
The simple problem (1) above includes many problems in the mathematical and applied sciences as special cases, among them fixed point problems, vector and scalar minimization problems, variational inequality problems (VIP), complementarity problems, Nash equilibrium problems in non-cooperative games, and inverse optimization problems [1,4,5]. This problem is also known as the Ky Fan inequality, after his initial contribution [2]. Several researchers have developed and generalized numerous results on the existence and nature of solutions to equilibrium problems (e.g., see [2,4,6,7]). Due to the generality of problem (1) and its applications in both the theoretical and applied sciences, it has been studied extensively in recent years by several authors [8,9] (see also [10,11,12,13,14,15,16]).
Many methods have been established, together with their convergence analysis, to deal with problem (1). An impressive number of numerical methods, with well-studied convergence properties, have been designed to solve problem (1) in different settings [17,18,19,20,21,22]. Regularization is one of the most significant techniques for handling ill-posed problems in many fields of pure and applied mathematics. Its prominent feature, when applied to monotone equilibrium problems, is that the initial problem is converted into a sequence of strongly monotone equilibrium sub-problems; each sub-problem is therefore strongly monotone, computationally tractable, and admits a unique solution.
The proximal method is another approach to equilibrium problems; it relies on solving numerical minimization problems [23]. This method was later interpreted as an extragradient method [24], building on Korpelevich's original method [25] for saddle point problems. Hieu [26] established the following algorithmic sequence $\{u_n\}$:
$$u_0 \in K, \qquad v_n = \arg\min_{v \in K}\Big\{ \zeta_n f(u_n, v) + \frac{1}{2}\|u_n - v\|^2 \Big\}, \qquad u_{n+1} = \arg\min_{v \in K}\Big\{ \zeta_n f(v_n, v) + \frac{1}{2}\|u_n - v\|^2 \Big\},     (2)$$
where $\{\zeta_n\}$ satisfies the conditions
$$\mathrm{C1}: \lim_{n \to +\infty} \zeta_n = 0 \quad \text{and} \quad \mathrm{C2}: \sum_{n=1}^{+\infty} \zeta_n = +\infty.     (3)$$
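For readers who want to experiment with the scheme (2)–(3), the following Python sketch (our illustration, not code from the paper) runs the two-step iteration on a small box-constrained toy problem with an affine bifunction; each argmin sub-problem is solved numerically with SciPy, and all data (matrices, box, step size rule, stopping tolerance) are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: f(u, v) = <A u + B v + c, v - u> on the box K = [-5, 5]^n.
n = 5
A = np.diag([3.1, 3.6, 3.5, 3.3, 3.0])
B = np.diag([1.6, 1.6, 1.5, 1.5, 2.0])
c = np.array([1.0, 2.0, 1.0, 2.0, 1.0])
bounds = [(-5.0, 5.0)] * n

def f(u, v):
    return (A @ u + B @ v + c) @ (v - u)

def prox_step(u_ref, u_lin, zeta):
    """Solve argmin_{v in K} { zeta * f(u_lin, v) + 0.5 * ||u_ref - v||^2 }."""
    obj = lambda v: zeta * f(u_lin, v) + 0.5 * np.sum((u_ref - v) ** 2)
    return minimize(obj, u_ref, bounds=bounds, method="L-BFGS-B").x

u = np.ones(n)                         # u_0
for k in range(200):
    zeta = 1.0 / (k + 2)               # diminishing, non-summable step size
    v = prox_step(u, u, zeta)          # first prox step, uses f(u_n, .)
    if np.linalg.norm(u - v) < 1e-8:
        break
    u = prox_step(u, v, zeta)          # second prox step, uses f(v_n, .)
print("approximate solution:", np.round(u, 4))
```

Since the bifunction is affine and $B$ is positive semidefinite, each sub-problem is a convex quadratic program, so a dedicated QP or projection-based solver could replace the generic minimizer.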
Inertial-type methods are two-step iterative methods in which the next iterate is computed from the previous two iterates [27,28]. The inertial extrapolation term accelerates the generated sequence and improves its convergence rate. Numerical experience shows that inertial effects also enhance practical performance, and these attractive features have increased researchers' interest in designing inertial methods. Recently, various inertial methods have been established for particular classes of equilibrium problems [29,30,31,32].
In this paper, we use a projection-type method that is simple to carry out, with low cost and efficient numerical computations. Inspired by the works of Fan et al. [33], Thong and Hieu [34], and Censor et al. [35], we set up an accelerated extragradient-like algorithm to solve problem (1) and related special classes of equilibrium problems, such as variational inequalities. We prove a strong convergence theorem for the generated sequence under certain mild conditions. Finally, computational tests show that the algorithm is more efficient than existing ones [26,29,36,37,38].
The rest of the article is organized as follows. Section 2 consists of basic results that are used throughout the article. Section 3 includes the proposed method and its convergence analysis. Section 4 contains numerical experiments that demonstrate its practical effectiveness.

2. Preliminaries

Assume that $g : K \to \mathbb{R}$ is a convex function. The subdifferential of $g$ at $v_1 \in K$ is defined as
$$\partial g(v_1) = \{ v_3 \in H : g(v_2) - g(v_1) \geq \langle v_3, v_2 - v_1 \rangle, \; \forall\, v_2 \in K \}.$$
The normal cone of $K$ at $v_1 \in K$ is defined as
$$N_K(v_1) = \{ v_3 \in H : \langle v_3, v_2 - v_1 \rangle \leq 0, \; \forall\, v_2 \in K \}.$$
Lemma 1 ([39]). Assume that $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are sequences in $[0, +\infty)$ such that
$$\alpha_{n+1} \leq \alpha_n + \beta_n(\alpha_n - \alpha_{n-1}) + \gamma_n, \quad \text{for all } n \geq 1, \quad \text{with } \sum_{n=1}^{+\infty} \gamma_n < +\infty,$$
where $0 \leq \beta_n \leq \beta < 1$ for each $n \in \mathbb{N}$. Then, we have
(i) $\sum_{n=1}^{+\infty} [\alpha_n - \alpha_{n-1}]_+ < +\infty$, where $[q]_+ := \max\{q, 0\}$;
(ii) $\lim_{n \to +\infty} \alpha_n = \alpha^* \in [0, +\infty)$.
Lemma 2 ([40]). For each $v_1, v_2 \in H$ and $r \in \mathbb{R}$, the following equality holds:
$$\|r v_1 + (1 - r) v_2\|^2 = r \|v_1\|^2 + (1 - r)\|v_2\|^2 - r(1 - r)\|v_1 - v_2\|^2.$$
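As a quick sanity check (our own illustration, not part of the original text), the identity in Lemma 2 can be verified numerically on random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v2 = rng.standard_normal((2, 7))   # arbitrary vectors in R^7
r = rng.uniform(-2.0, 2.0)             # r may be any real number

lhs = np.linalg.norm(r * v1 + (1 - r) * v2) ** 2
rhs = (r * np.linalg.norm(v1) ** 2
       + (1 - r) * np.linalg.norm(v2) ** 2
       - r * (1 - r) * np.linalg.norm(v1 - v2) ** 2)
print(abs(lhs - rhs) < 1e-10)          # True: both sides agree
```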
Lemma 3 ([41]). Let $\{p_n\}$ and $\{q_n\}$ be two sequences in $[0, +\infty)$ such that
$$\sum_{n=1}^{+\infty} p_n = +\infty \quad \text{and} \quad \sum_{n=1}^{+\infty} p_n q_n < +\infty.$$
Then, $\liminf_{n \to +\infty} q_n = 0$.
Lemma 4 ([42]). Assume that $h : K \to \mathbb{R}$ is a subdifferentiable, convex, and lower semi-continuous function on $K$. Then, $v_1 \in K$ is a minimizer of $h$ if and only if $0 \in \partial h(v_1) + N_K(v_1)$, where $\partial h(v_1)$ and $N_K(v_1)$ denote the subdifferential of $h$ at $v_1 \in K$ and the normal cone of $K$ at $v_1$, respectively.
Suppose that $f : H \times H \to \mathbb{R}$ satisfies the following conditions:
(C1) $f(v_1, v_1) = 0$ for all $v_1 \in K$, and $f$ is strongly pseudomonotone on $K$;
(C2) $f$ satisfies the Lipschitz-type condition with constants $L_1$ and $L_2$; and
(C3) $f(v_1, \cdot)$ is convex and subdifferentiable on $H$ for each fixed $v_1 \in H$.

3. Main Results

The following is the main method (Algorithm 1) in more detail.
Algorithm 1. Modified subgradient extragradient method for equilibrium problems.
  • Step 0: Choose $u_{-1}, u_0 \in H$ arbitrarily, let $\{\zeta_n\}$ satisfy the conditions (3), and let $\{\theta_n\}$ and $\{\vartheta_n\}$ be control parameter sequences.
  • Step 1: Compute
    $$v_n = \arg\min_{v \in K}\Big\{ \zeta_n f(w_n, v) + \frac{1}{2}\|w_n - v\|^2 \Big\},$$
    where $w_n = u_n + \theta_n(u_n - u_{n-1})$. If $v_n = w_n$, then STOP and $w_n \in EP(f, K)$.
  • Step 2: Compute the half-space
    $$H_n = \{ z \in H : \langle w_n - \zeta_n t_n - v_n, z - v_n \rangle \leq 0 \},$$
    where $t_n \in \partial_2 f(w_n, v_n)$.
  • Step 3: Compute
    $$\eta_n = \arg\min_{v \in H_n}\Big\{ \zeta_n f(v_n, v) + \frac{1}{2}\|w_n - v\|^2 \Big\}.$$
  • Step 4: Compute
    $$u_{n+1} = (1 - \vartheta_n) w_n + \vartheta_n \eta_n,$$
    where $\{\vartheta_n\}$ and $\{\theta_n\}$ are real sequences satisfying the conditions:
    (i) $\{\theta_n\}$ is non-decreasing with $0 \leq \theta_n \leq \theta < 1$ for each $n \geq 1$;
    (ii) there exist $\vartheta, \delta, \sigma > 0$ such that
    $$\delta > \frac{4\theta\big[\theta(1 + \theta) + \sigma\big]}{1 - \theta^2}     (4)$$
    and
    $$0 < \vartheta \leq \vartheta_n \leq \frac{\delta - 4\theta\big[\theta(1 + \theta) + \sigma + \frac{1}{4}\theta\delta\big]}{4\delta\big[\theta(1 + \theta) + \sigma + \frac{1}{4}\theta\delta\big]}     (5)$$
    (an admissible parameter choice is sketched immediately after the algorithm).
    Set $n := n + 1$ and return to Step 1.
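To make the parameter conditions (i)–(ii) concrete, the following Python sketch (our own illustration; the particular numbers are arbitrary) picks $\theta$ and $\sigma$, selects a $\delta$ satisfying (4), and then reports the admissible range for $\vartheta_n$ from (5):

```python
# Check the parameter conditions (4)-(5) of Algorithm 1 for a trial choice.
theta = 0.20                     # upper bound for the inertial parameters theta_n
sigma = 0.01                     # the constant sigma > 0 in conditions (4)-(5)

delta_min = 4 * theta * (theta * (1 + theta) + sigma) / (1 - theta ** 2)
delta = 2.0 * delta_min + 1.0    # any delta strictly above the bound (4)

gamma_bar = theta * (1 + theta) + sigma + 0.25 * theta * delta
upsilon_max = (delta - 4 * theta * gamma_bar) / (4 * delta * gamma_bar)

print(f"delta must exceed {delta_min:.4f}; chosen delta = {delta:.4f}")
print(f"admissible relaxation parameters: 0 < vartheta_n <= {upsilon_max:.4f}")
```

Condition (4) guarantees that the numerator in (5) is positive, so the reported interval is always non-empty under these choices.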
Lemma 5. Suppose that $f : H \times H \to \mathbb{R}$ satisfies the conditions (C1)–(C3). For $v^* \in EP(f, K)$, we have
$$\|\eta_n - v^*\|^2 \leq \|w_n - v^*\|^2 - (1 - 2L_1\zeta_n)\|w_n - v_n\|^2 - (1 - 2L_2\zeta_n)\|\eta_n - v_n\|^2 - 2\gamma\zeta_n\|v_n - v^*\|^2.$$
Proof. 
By the definition of $\eta_n$ and Lemma 4, we have
$$0 \in \partial_2 \Big\{ \zeta_n f(v_n, v) + \frac{1}{2}\|w_n - v\|^2 \Big\}(\eta_n) + N_{H_n}(\eta_n).$$
Thus, there exist $\omega \in \partial_2 f(v_n, \eta_n)$ and $\overline{\omega} \in N_{H_n}(\eta_n)$ such that
$$\zeta_n \omega + \eta_n - w_n + \overline{\omega} = 0.$$
This implies that
$$\langle w_n - \eta_n, v - \eta_n \rangle = \zeta_n \langle \omega, v - \eta_n \rangle + \langle \overline{\omega}, v - \eta_n \rangle, \quad \forall\, v \in H_n.$$
Since $\overline{\omega} \in N_{H_n}(\eta_n)$, it follows that $\langle \overline{\omega}, v - \eta_n \rangle \leq 0$ for all $v \in H_n$. This gives
$$\zeta_n \langle \omega, v - \eta_n \rangle \geq \langle w_n - \eta_n, v - \eta_n \rangle, \quad \forall\, v \in H_n.$$
Since $\omega \in \partial_2 f(v_n, \eta_n)$, we have
$$f(v_n, v) - f(v_n, \eta_n) \geq \langle \omega, v - \eta_n \rangle, \quad \forall\, v \in H.$$
From (6) and (7), we obtain
$$\zeta_n f(v_n, v) - \zeta_n f(v_n, \eta_n) \geq \langle w_n - \eta_n, v - \eta_n \rangle, \quad \forall\, v \in H_n.$$
Setting $v = v^*$, we get
$$\zeta_n f(v_n, v^*) - \zeta_n f(v_n, \eta_n) \geq \langle w_n - \eta_n, v^* - \eta_n \rangle.$$
Since $v^* \in EP(f, K)$, we have $f(v^*, v_n) \geq 0$, and strong pseudomonotonicity implies $f(v_n, v^*) \leq -\gamma\|v_n - v^*\|^2$. From expression (9), we obtain
$$\langle w_n - \eta_n, \eta_n - v^* \rangle \geq \zeta_n f(v_n, \eta_n) + \gamma\zeta_n\|v_n - v^*\|^2.$$
Due to the Lipschitz-type continuity of the bifunction $f$,
$$f(w_n, \eta_n) \leq f(w_n, v_n) + f(v_n, \eta_n) + L_1\|w_n - v_n\|^2 + L_2\|v_n - \eta_n\|^2.$$
Expressions (10) and (11) give
$$\langle w_n - \eta_n, \eta_n - v^* \rangle \geq \zeta_n\big\{f(w_n, \eta_n) - f(w_n, v_n)\big\} - L_1\zeta_n\|w_n - v_n\|^2 - L_2\zeta_n\|v_n - \eta_n\|^2 + \gamma\zeta_n\|v_n - v^*\|^2.$$
Since $\eta_n \in H_n$, we have
$$\langle w_n - \zeta_n t_n - v_n, \eta_n - v_n \rangle \leq 0,$$
which implies that
$$\langle w_n - v_n, \eta_n - v_n \rangle \leq \zeta_n \langle t_n, \eta_n - v_n \rangle.$$
Moreover, $t_n \in \partial_2 f(w_n, v_n)$ gives
$$f(w_n, v) - f(w_n, v_n) \geq \langle t_n, v - v_n \rangle, \quad \forall\, v \in H.$$
Substituting $v = \eta_n$ into the above expression,
$$f(w_n, \eta_n) - f(w_n, v_n) \geq \langle t_n, \eta_n - v_n \rangle.$$
Expressions (13) and (14) imply that
$$\zeta_n\big\{f(w_n, \eta_n) - f(w_n, v_n)\big\} \geq \langle w_n - v_n, \eta_n - v_n \rangle.$$
Combining expressions (12) and (15) implies that
$$\langle w_n - \eta_n, \eta_n - v^* \rangle \geq \langle w_n - v_n, \eta_n - v_n \rangle - L_1\zeta_n\|w_n - v_n\|^2 - L_2\zeta_n\|v_n - \eta_n\|^2 + \gamma\zeta_n\|v_n - v^*\|^2.$$
We also have the following identities:
$$2\langle w_n - \eta_n, \eta_n - v^* \rangle = \|w_n - v^*\|^2 - \|\eta_n - w_n\|^2 - \|\eta_n - v^*\|^2,$$
$$2\langle v_n - w_n, v_n - \eta_n \rangle = \|w_n - v_n\|^2 + \|\eta_n - v_n\|^2 - \|w_n - \eta_n\|^2.$$
Thus, we finally obtain
$$\|\eta_n - v^*\|^2 \leq \|w_n - v^*\|^2 - (1 - 2L_1\zeta_n)\|w_n - v_n\|^2 - (1 - 2L_2\zeta_n)\|\eta_n - v_n\|^2 - 2\gamma\zeta_n\|v_n - v^*\|^2. \qquad \square$$
Theorem 1. The sequences $\{w_n\}$, $\{v_n\}$, $\{\eta_n\}$, and $\{u_n\}$ generated by Algorithm 1 strongly converge to the solution $v^* \in EP(f, K)$.
Proof. 
By the definition of $u_{n+1}$, we have
$$\begin{aligned} \|u_{n+1} - v^*\|^2 &= \|(1 - \vartheta_n)w_n + \vartheta_n\eta_n - v^*\|^2 = \|(1 - \vartheta_n)(w_n - v^*) + \vartheta_n(\eta_n - v^*)\|^2 \\ &= (1 - \vartheta_n)\|w_n - v^*\|^2 + \vartheta_n\|\eta_n - v^*\|^2 - \vartheta_n(1 - \vartheta_n)\|w_n - \eta_n\|^2 \\ &\leq (1 - \vartheta_n)\|w_n - v^*\|^2 + \vartheta_n\|\eta_n - v^*\|^2. \end{aligned}$$
From Lemma 5, we obtain
$$\|\eta_n - v^*\|^2 \leq \|w_n - v^*\|^2 - (1 - 2L_1\zeta_n)\|w_n - v_n\|^2 - (1 - 2L_2\zeta_n)\|\eta_n - v_n\|^2 - 2\gamma\zeta_n\|v_n - v^*\|^2.$$
By combining expressions (17) and (18), we get
$$\begin{aligned} \|u_{n+1} - v^*\|^2 &\leq (1 - \vartheta_n)\|w_n - v^*\|^2 + \vartheta_n\|w_n - v^*\|^2 - 2\gamma\vartheta_n\zeta_n\|v_n - v^*\|^2 \\ &\quad - \vartheta_n(1 - 2L_1\zeta_n)\|w_n - v_n\|^2 - \vartheta_n(1 - 2L_2\zeta_n)\|\eta_n - v_n\|^2 \\ &\leq \|w_n - v^*\|^2 - \vartheta_n(1 - b\zeta_n)\big[\|w_n - v_n\|^2 + \|\eta_n - v_n\|^2\big] \\ &= \|w_n - v^*\|^2 - \frac{\vartheta_n(1 - b\zeta_n)}{2}\big[2\|w_n - v_n\|^2 + 2\|\eta_n - v_n\|^2\big] \\ &\leq \|w_n - v^*\|^2 - \frac{\vartheta_n(1 - b\zeta_n)}{2}\big[\|w_n - v_n\| + \|\eta_n - v_n\|\big]^2 \\ &\leq \|w_n - v^*\|^2 - \frac{\vartheta_n(1 - b\zeta_n)}{2}\|\eta_n - w_n\|^2, \end{aligned}$$
where $b = \max\{2L_1, 2L_2\}$. It follows from the definition of $u_{n+1}$ that
$$\|u_{n+1} - w_n\| = \|(1 - \vartheta_n)w_n + \vartheta_n\eta_n - w_n\| = \vartheta_n\|\eta_n - w_n\|.$$
Combining (21) and (22), we have
$$\|u_{n+1} - v^*\|^2 \leq \|w_n - v^*\|^2 - \frac{(1 - b\zeta_n)}{2\vartheta_n}\|u_{n+1} - w_n\|^2.$$
Since $\zeta_n \to 0$, there exists $n_0 > 0$ such that $\zeta_n \leq \frac{1}{2b}$ for each $n \geq n_0$. This implies $\frac{1 - b\zeta_n}{2} \geq \frac{1}{4}$ for every $n \geq n_0$, so expression (23) for $n \geq n_0$ becomes
$$\|u_{n+1} - v^*\|^2 \leq \|w_n - v^*\|^2 - \frac{1}{4\vartheta_n}\|u_{n+1} - w_n\|^2.$$
By the definition of $w_n$, we have
$$\begin{aligned} \|w_n - v^*\|^2 &= \|u_n + \theta_n(u_n - u_{n-1}) - v^*\|^2 = \|(1 + \theta_n)(u_n - v^*) - \theta_n(u_{n-1} - v^*)\|^2 \\ &= (1 + \theta_n)\|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2. \end{aligned}$$
By the definition of $w_n$, we also have
$$\begin{aligned} \|u_{n+1} - w_n\|^2 &= \|u_{n+1} - u_n - \theta_n(u_n - u_{n-1})\|^2 \\ &= \|u_{n+1} - u_n\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 + 2\theta_n\langle u_n - u_{n+1}, u_n - u_{n-1}\rangle \\ &\geq \|u_{n+1} - u_n\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 - \rho_n\theta_n\|u_{n+1} - u_n\|^2 - \frac{\theta_n}{\rho_n}\|u_n - u_{n-1}\|^2 \\ &= (1 - \rho_n\theta_n)\|u_{n+1} - u_n\|^2 + \Big(\theta_n^2 - \frac{\theta_n}{\rho_n}\Big)\|u_n - u_{n-1}\|^2, \end{aligned}$$
where $\rho_n = \frac{1}{\delta\vartheta_n + \theta_n}$. Combining (24), (25), and (27) gives
$$\begin{aligned} \|u_{n+1} - v^*\|^2 &\leq (1 + \theta_n)\|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2 \\ &\quad - \frac{1}{4\vartheta_n}\Big[(1 - \rho_n\theta_n)\|u_{n+1} - u_n\|^2 + \Big(\theta_n^2 - \frac{\theta_n}{\rho_n}\Big)\|u_n - u_{n-1}\|^2\Big] \\ &= (1 + \theta_n)\|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 - \frac{1}{4\vartheta_n}(1 - \rho_n\theta_n)\|u_{n+1} - u_n\|^2 \\ &\quad + \Big[\theta_n(1 + \theta_n) - \frac{1}{4\vartheta_n}\Big(\theta_n^2 - \frac{\theta_n}{\rho_n}\Big)\Big]\|u_n - u_{n-1}\|^2 \\ &= (1 + \theta_n)\|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 - \frac{1}{4\vartheta_n}(1 - \rho_n\theta_n)\|u_{n+1} - u_n\|^2 + \gamma_n\|u_n - u_{n-1}\|^2, \end{aligned}$$
where
$$\gamma_n = \theta_n(1 + \theta_n) - \frac{1}{4\vartheta_n}\Big(\theta_n^2 - \frac{\theta_n}{\rho_n}\Big) = \theta_n(1 + \theta_n) + \frac{1}{4\vartheta_n}\Big(\frac{\theta_n}{\rho_n} - \theta_n^2\Big) > 0.$$
By the above expression and the choice of $\{\rho_n\}$, we have
$$\gamma_n = \theta_n(1 + \theta_n) + \frac{1}{4\vartheta_n}\Big(\frac{\theta_n}{\rho_n} - \theta_n^2\Big) \leq \theta(1 + \theta) + \frac{1}{4}\theta\delta.$$
We set
$$\Psi_n = \|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 + \gamma_n\|u_n - u_{n-1}\|^2.$$
It follows from (29) that
$$\begin{aligned} \Psi_{n+1} - \Psi_n &= \|u_{n+1} - v^*\|^2 - \theta_{n+1}\|u_n - v^*\|^2 + \gamma_{n+1}\|u_{n+1} - u_n\|^2 - \|u_n - v^*\|^2 + \theta_n\|u_{n-1} - v^*\|^2 - \gamma_n\|u_n - u_{n-1}\|^2 \\ &\leq \|u_{n+1} - v^*\|^2 - (1 + \theta_n)\|u_n - v^*\|^2 + \theta_n\|u_{n-1} - v^*\|^2 + \gamma_{n+1}\|u_{n+1} - u_n\|^2 - \gamma_n\|u_n - u_{n-1}\|^2 \\ &\leq -\Big[\frac{1}{4\vartheta_n}(1 - \rho_n\theta_n) - \gamma_{n+1}\Big]\|u_{n+1} - u_n\|^2. \end{aligned}$$
We claim that
$$\frac{1}{4\vartheta_n}(1 - \rho_n\theta_n) - \gamma_{n+1} \geq \sigma.$$
Indeed, this inequality holds because of the following equivalences:
$$\begin{aligned} \frac{1}{4\vartheta_n}(1 - \rho_n\theta_n) - \gamma_{n+1} \geq \sigma \;&\Longleftrightarrow\; (1 - \rho_n\theta_n) \geq 4\vartheta_n\gamma_{n+1} + 4\vartheta_n\sigma \\ &\Longleftrightarrow\; (1 - \rho_n\theta_n) - 4\vartheta_n(\gamma_{n+1} + \sigma) \geq 0 \\ &\Longleftrightarrow\; \frac{\delta\vartheta_n}{\delta\vartheta_n + \theta_n} - 4\vartheta_n(\gamma_{n+1} + \sigma) \geq 0 \\ &\Longleftrightarrow\; 4(\gamma_{n+1} + \sigma)(\delta\vartheta_n + \theta_n) \leq \delta. \end{aligned}$$
Expressions (31) and (5) give that
$$4(\gamma_{n+1} + \sigma)(\delta\vartheta_n + \theta_n) \leq 4\Big[\theta(1 + \theta) + \frac{1}{4}\theta\delta + \sigma\Big](\delta\vartheta_n + \theta_n) \leq \delta.$$
Expression (32) implies that
$$\Psi_{n+1} - \Psi_n \leq -\sigma\|u_{n+1} - u_n\|^2 \leq 0, \quad \text{for all } n \geq n_0.$$
Thus, the sequence $\{\Psi_n\}$ is non-increasing for $n \geq n_0$. From the definition of $\Psi_{n+1}$, we have
$$\Psi_{n+1} = \|u_{n+1} - v^*\|^2 - \theta_{n+1}\|u_n - v^*\|^2 + \gamma_{n+1}\|u_{n+1} - u_n\|^2 \geq -\theta_{n+1}\|u_n - v^*\|^2.$$
From the definition of $\Psi_n$, we also have
$$\Psi_n = \|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 + \gamma_n\|u_n - u_{n-1}\|^2 \geq \|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2.$$
Thus, expression (37) for $n \geq n_0$ gives
$$\begin{aligned} \|u_n - v^*\|^2 &\leq \Psi_n + \theta_n\|u_{n-1} - v^*\|^2 \leq \Psi_{n_0} + \theta\|u_{n-1} - v^*\|^2 \\ &\leq \dots \leq \Psi_{n_0}\big(\theta^{n - n_0} + \dots + 1\big) + \theta^{n - n_0}\|u_{n_0} - v^*\|^2 \leq \frac{\Psi_{n_0}}{1 - \theta} + \theta^{n - n_0}\|u_{n_0} - v^*\|^2. \end{aligned}$$
By (36) and (38), for all $n \geq n_0$ we get
$$-\Psi_{n+1} \leq \theta_{n+1}\|u_n - v^*\|^2 \leq \theta\|u_n - v^*\|^2 \leq \theta\frac{\Psi_{n_0}}{1 - \theta} + \theta^{n - n_0 + 1}\|u_{n_0} - v^*\|^2.$$
It follows from (35) and (39) that
$$\begin{aligned} \sigma\sum_{n=n_0}^{k}\|u_{n+1} - u_n\|^2 &\leq \Psi_{n_0} - \Psi_{k+1} \leq \Psi_{n_0} + \theta\frac{\Psi_{n_0}}{1 - \theta} + \theta^{k - n_0 + 1}\|u_{n_0} - v^*\|^2 \\ &\leq \frac{\Psi_{n_0}}{1 - \theta} + \|u_{n_0} - v^*\|^2. \end{aligned}$$
Letting $k \to +\infty$, we obtain
$$\sum_{n=1}^{+\infty}\|u_{n+1} - u_n\|^2 < +\infty,$$
and consequently
$$\lim_{n \to +\infty}\|u_{n+1} - u_n\| = 0.$$
Equations (26) and (42) provide that
$$\lim_{n \to +\infty}\|u_{n+1} - w_n\| = 0.$$
By the definition of $u_{n+1}$, we have
$$\|u_{n+1} - w_n\| = \|(1 - \vartheta_n)w_n + \vartheta_n\eta_n - w_n\| = \vartheta_n\|\eta_n - w_n\|.$$
By Equations (43) and (44), we obtain
$$\lim_{n \to +\infty}\|\eta_n - w_n\| = 0.$$
Using the triangle inequality together with (42) and (43), we obtain
$$\lim_{n \to +\infty}\|u_n - w_n\| \leq \lim_{n \to +\infty}\|u_n - u_{n+1}\| + \lim_{n \to +\infty}\|u_{n+1} - w_n\| = 0$$
and
$$\lim_{n \to +\infty}\|u_n - \eta_n\| \leq \lim_{n \to +\infty}\|u_n - w_n\| + \lim_{n \to +\infty}\|w_n - \eta_n\| = 0.$$
Expressions (28) and (41) with Lemma 1 imply that
$$\lim_{n \to +\infty}\|u_n - v^*\|^2 = b \quad \text{for some } b \geq 0.$$
Expressions (46) and (47) imply that
$$\lim_{n \to +\infty}\|w_n - v^*\|^2 = \lim_{n \to +\infty}\|\eta_n - v^*\|^2 = b.$$
Thus, Lemma 5 implies that
$$(1 - 2L_1\zeta_n)\|w_n - v_n\|^2 \leq \|w_n - v^*\|^2 - \|\eta_n - v^*\|^2.$$
The above expression with (48) and (49) gives
$$\lim_{n \to +\infty}\|w_n - v_n\| = 0 \quad \text{and} \quad \lim_{n \to +\infty}\|v_n - v^*\|^2 = b.$$
The arguments above show that the sequences $\{w_n\}$, $\{v_n\}$, $\{\eta_n\}$, and $\{u_n\}$ are bounded and that, for each $v^* \in EP(f, K)$, the limit $\lim_{n \to +\infty}\|u_n - v^*\|^2$ exists. It follows from (19) and (25) that
$$\begin{aligned} 2\gamma\vartheta_n\zeta_n\|v_n - v^*\|^2 &\leq -\|u_{n+1} - v^*\|^2 + (1 + \theta_n)\|u_n - v^*\|^2 - \theta_n\|u_{n-1} - v^*\|^2 + \theta_n(1 + \theta_n)\|u_n - u_{n-1}\|^2 \\ &\leq \big(\|u_n - v^*\|^2 - \|u_{n+1} - v^*\|^2\big) + 2\theta\|u_n - u_{n-1}\|^2 + \big(\theta_n\|u_n - v^*\|^2 - \theta_{n-1}\|u_{n-1} - v^*\|^2\big). \end{aligned}$$
Summing the above expression from $n = n_0$ up to $k \geq n_0$ gives
$$\begin{aligned} \sum_{n=n_0}^{k}2\gamma\vartheta_n\zeta_n\|v_n - v^*\|^2 &\leq \big(\|u_{n_0} - v^*\|^2 - \|u_{k+1} - v^*\|^2\big) + 2\theta\sum_{n=n_0}^{k}\|u_n - u_{n-1}\|^2 + \big(\theta_k\|u_k - v^*\|^2 - \theta_{n_0-1}\|u_{n_0-1} - v^*\|^2\big) \\ &\leq \|u_{n_0} - v^*\|^2 + \theta\|u_k - v^*\|^2 + 2\theta\sum_{n=n_0}^{k}\|u_n - u_{n-1}\|^2. \end{aligned}$$
Letting $k \to +\infty$ in (53), we obtain
$$\sum_{n=n_0}^{+\infty}2\gamma\vartheta_n\zeta_n\|v_n - v^*\|^2 < +\infty.$$
Since $\vartheta_n \geq \vartheta > 0$ and $\sum_{n=1}^{+\infty}\zeta_n = +\infty$, Lemma 3 together with (54) yields
$$\liminf_{n \to +\infty}\|v_n - v^*\| = 0.$$
Combining this with expressions (46), (47), (49), and (51), we conclude that
$$\lim_{n \to +\infty}\|v_n - v^*\| = \lim_{n \to +\infty}\|w_n - v^*\| = \lim_{n \to +\infty}\|\eta_n - v^*\| = \lim_{n \to +\infty}\|u_n - v^*\| = 0.$$
This completes the proof. □
Next, we consider the application of our results to variational inequality problems. An operator $G : H \to H$ is said to be
(G1) strongly pseudo-monotone on $K$ for $\gamma > 0$ if
$$\langle G(v_1), v_2 - v_1 \rangle \geq 0 \;\Longrightarrow\; \langle G(v_2), v_1 - v_2 \rangle \leq -\gamma\|v_1 - v_2\|^2, \quad \forall\, v_1, v_2 \in K;$$
and
(G2) $L$-Lipschitz continuous on $K$ if
$$\|G(v_1) - G(v_2)\| \leq L\|v_1 - v_2\|, \quad \forall\, v_1, v_2 \in K.$$
Let the bifunction be $f(v_1, v_2) := \langle G(v_1), v_2 - v_1 \rangle$ for all $v_1, v_2 \in K$. Then the equilibrium problem becomes a variational inequality problem with $L = 2L_1 = 2L_2$. From the definition of $v_n$,
$$\begin{aligned} v_n &= \arg\min_{v \in K}\Big\{ \zeta_n f(w_n, v) + \frac{1}{2}\|w_n - v\|^2 \Big\} = \arg\min_{v \in K}\Big\{ \zeta_n\langle G(w_n), v - w_n \rangle + \frac{1}{2}\|w_n - v\|^2 \Big\} \\ &= \arg\min_{v \in K}\Big\{ \zeta_n\langle G(w_n), v - w_n \rangle + \frac{1}{2}\|w_n - v\|^2 + \frac{\zeta_n^2}{2}\|G(w_n)\|^2 - \frac{\zeta_n^2}{2}\|G(w_n)\|^2 \Big\} \\ &= \arg\min_{v \in K}\Big\{ \frac{1}{2}\big\|v - \big(w_n - \zeta_n G(w_n)\big)\big\|^2 \Big\} = P_K\big(w_n - \zeta_n G(w_n)\big), \end{aligned}$$
since the term $-\frac{\zeta_n^2}{2}\|G(w_n)\|^2$ does not depend on $v$. Similarly, the value of $\eta_n$ becomes
$$\eta_n = P_{H_n}\big(w_n - \zeta_n G(v_n)\big).$$
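Since $H_n$ is a half-space, the projection in the last formula has a closed form. The following small Python sketch (our own illustration; all numerical values are placeholders) computes $P_{H_n}$ for $H_n = \{z : \langle a, z - v_n \rangle \leq 0\}$ with $a = w_n - \zeta_n G(w_n) - v_n$:

```python
import numpy as np

def project_halfspace(x, a, v):
    """Project x onto {z : <a, z - v> <= 0}; if x already satisfies the
    constraint it is returned unchanged, otherwise it is shifted along a."""
    viol = a @ (x - v)
    if viol <= 0.0:
        return x
    return x - (viol / (a @ a)) * a

# Placeholder values standing in for w_n, v_n, G(w_n), G(v_n), and zeta_n.
w, v = np.array([2.0, 1.0]), np.array([1.0, 0.5])
Gw, Gv, zeta = np.array([0.4, 0.1]), np.array([0.3, -0.2]), 0.5

a = w - zeta * Gw - v                       # normal vector of the half-space H_n
eta = project_halfspace(w - zeta * Gv, a, v)
print(eta)
```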
Corollary 1. Assume that an operator $G : K \to H$ satisfies conditions (G1)–(G2). Let $\{w_n\}$, $\{v_n\}$, $\{\eta_n\}$, and $\{u_n\}$ be the sequences generated as follows:
(S1) Choose $u_{-1}, u_0 \in H$ arbitrarily.
(S2) Choose $\{\zeta_n\}$ satisfying condition (3), and let $\{\theta_n\}$ and $\{\vartheta_n\}$ be control parameter sequences.
(S3) Compute
$$v_n = P_K\big(w_n - \zeta_n G(w_n)\big),$$
where $w_n = u_n + \theta_n(u_n - u_{n-1})$. If $v_n = w_n$, then STOP.
(S4) First construct the half-space $H_n = \{ z \in H : \langle w_n - \zeta_n G(w_n) - v_n, z - v_n \rangle \leq 0 \}$ and then evaluate
$$\eta_n = P_{H_n}\big(w_n - \zeta_n G(v_n)\big).$$
(S5) Compute
$$u_{n+1} = (1 - \vartheta_n)w_n + \vartheta_n\eta_n,$$
where $\{\theta_n\}$ and $\{\vartheta_n\}$ satisfy the following conditions:
(i) $\{\theta_n\}$ is non-decreasing with $0 \leq \theta_n \leq \theta < 1$ for each $n \geq 1$; and
(ii) there exist $\vartheta, \delta, \sigma > 0$ such that
$$\delta > \frac{4\theta\big[\theta(1 + \theta) + \sigma\big]}{1 - \theta^2}$$
and
$$0 < \vartheta \leq \vartheta_n \leq \frac{\delta - 4\theta\big[\theta(1 + \theta) + \sigma + \frac{1}{4}\theta\delta\big]}{4\delta\big[\theta(1 + \theta) + \sigma + \frac{1}{4}\theta\delta\big]}.$$
Then, $\{w_n\}$, $\{v_n\}$, $\{\eta_n\}$, and $\{u_n\}$ strongly converge to $v^* \in VI(G, K)$.

4. Numerical Illustration

Numerical findings are summarized in this section to demonstrate the effectiveness of the proposed method. The following control parameters are used throughout this section.
(1) For Hieu et al. [26] (Hieu-EgA), we use $D_n = \|u_n - v_n\|^2$.
(2) For Hieu et al. [29] (Hieu-mEgA), we use $\theta = 0.5$ and $D_n = \max\{\|u_{n+1} - v_n\|^2, \|u_{n+1} - w_n\|^2\}$.
(3) For Algorithm 1 (iEgA), we use $\alpha_n = 0.50$, $\beta_n = 0.80$, and $D_n = \|w_n - v_n\|^2$.
Example 1. Let the bifunction $f$ have the form
$$f(u, v) = \langle Au + Bv + c, v - u \rangle,$$
where $c \in \mathbb{R}^5$ and the matrices $A$ and $B$ are
$$A = \begin{pmatrix} 3.1 & 2 & 0 & 0 & 0 \\ 2 & 3.6 & 0 & 0 & 0 \\ 0 & 0 & 3.5 & 2 & 0 \\ 0 & 0 & 2 & 3.3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix}, \qquad B = \begin{pmatrix} 1.6 & 1 & 0 & 0 & 0 \\ 1 & 1.6 & 0 & 0 & 0 \\ 0 & 0 & 1.5 & 1 & 0 \\ 0 & 0 & 1 & 1.5 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix},$$
and
$$c = (1, -2, -1, 2, -1)^T,$$
with Lipschitz-type constants $L_1 = L_2 = \frac{1}{2}\|A - B\|$ [26]. The feasible set $K \subset \mathbb{R}^5$ is
$$K := \{u \in \mathbb{R}^5 : -5 \leq u_i \leq 5\}.$$
Table 1 and Figures 1–3 show the numerical results obtained with $u_{-1} = u_0 = v_0 = (1, \dots, 1)^T$ and $TOL = 10^{-12}$.
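A small Python sketch (ours, not from the paper) of the data in Example 1, including the Lipschitz-type constants $L_1 = L_2 = \frac{1}{2}\|A - B\|$; the sign pattern of $c$ follows our reconstruction above and should be treated as an assumption.

```python
import numpy as np

# Block-diagonal matrices of Example 1 (5 x 5).
A = np.array([[3.1, 2.0, 0, 0, 0],
              [2.0, 3.6, 0, 0, 0],
              [0, 0, 3.5, 2.0, 0],
              [0, 0, 2.0, 3.3, 0],
              [0, 0, 0, 0, 3.0]])
B = np.array([[1.6, 1.0, 0, 0, 0],
              [1.0, 1.6, 0, 0, 0],
              [0, 0, 1.5, 1.0, 0],
              [0, 0, 1.0, 1.5, 0],
              [0, 0, 0, 0, 2.0]])
c = np.array([1.0, -2.0, -1.0, 2.0, -1.0])   # reconstructed signs (assumption)

def f(u, v):
    """Bifunction f(u, v) = <A u + B v + c, v - u> of Example 1."""
    return (A @ u + B @ v + c) @ (v - u)

# Lipschitz-type constants used by the compared methods.
L1 = L2 = 0.5 * np.linalg.norm(A - B, 2)
print("L1 = L2 =", L1)
```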
Example 2. Let the bifunction $f$ be defined on the convex set $K$ as
$$f(u, v) = \langle (BB^T + S + D)u, v - u \rangle,$$
where $B$ is a $50 \times 50$ matrix, $S$ is a $50 \times 50$ skew-symmetric matrix, and $D$ is a $50 \times 50$ diagonal matrix. The set $K \subset \mathbb{R}^{50}$ is defined by
$$K := \{u \in \mathbb{R}^{50} : Au \leq b\},$$
where $A$ is a $100 \times 50$ matrix and $b$ is a non-negative vector. Observe that $f$ is monotone and the Lipschitz-type constants are $c_1 = c_2 = \frac{1}{2}\|BB^T + S + D\|$. In our experiments the matrices are generated randomly as $B = \mathrm{rand}(n)$, $C = \mathrm{rand}(n)$, $S = 0.5C - 0.5C^T$, $D = \mathrm{diag}(\mathrm{rand}(n, 1))$, and the numerical findings for Example 2 are shown in Figures 4–7 with $u_{-1} = u_0 = v_0 = (1, \dots, 1)^T$ and $TOL = 10^{-12}$.
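The MATLAB-style data generation above can be reproduced, for instance, with NumPy as in the following sketch (our translation; the seed and the constraint data $A$, $b$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2024)
n, m = 50, 100

B = rng.random((n, n))                     # B = rand(n)
C = rng.random((n, n))                     # C = rand(n)
S = 0.5 * C - 0.5 * C.T                    # skew-symmetric part
D = np.diag(rng.random(n))                 # D = diag(rand(n, 1))
M = B @ B.T + S + D                        # f(u, v) = <M u, v - u>

A = rng.random((m, n))                     # constraint matrix (illustrative)
b = rng.random(m) * n                      # non-negative right-hand side (illustrative)

c1 = c2 = 0.5 * np.linalg.norm(M, 2)       # Lipschitz-type constants
# The symmetric part B B^T + D is positive semidefinite, so f is monotone.
eig_min = np.linalg.eigvalsh(B @ B.T + D).min()
print("c1 = c2 =", round(c1, 4), "| min eigenvalue of symmetric part:", round(eig_min, 6))
```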
Example 3. Let $G : \mathbb{R}^5 \to \mathbb{R}^5$ be defined by
$$G(u) = Au + B(u) + c,$$
where $A$ is an $n \times n$ symmetric positive semi-definite matrix and $B(u)$ is the proximal operator [43] associated with $h(u) = \frac{1}{4}\|u\|^4$, that is,
$$B(u) = \arg\min_{v \in \mathbb{R}^n}\Big\{ \frac{\|v\|^4}{4} + \frac{1}{2}\|v - u\|^2 \Big\}.$$
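Because $h$ is radially symmetric, $B(u)$ can be computed by solving a scalar cubic: writing $B(u) = t\,u/\|u\|$ with $t \geq 0$, the optimality condition $\|v\|^2 v + v = u$ reduces to $t^3 + t = \|u\|$. A short Python sketch of this evaluation (our own illustration, with an arbitrary test point) follows.

```python
import numpy as np

def prox_quartic(u):
    """Proximal operator of h(u) = 0.25 * ||u||^4: solve v + ||v||^2 v = u."""
    r = np.linalg.norm(u)
    if r == 0.0:
        return np.zeros_like(u)
    # Unique real root t >= 0 of t^3 + t - r = 0 (the cubic is strictly increasing).
    roots = np.roots([1.0, 0.0, 1.0, -r])
    t = max(root.real for root in roots if abs(root.imag) < 1e-9)
    return (t / r) * u

u = np.array([1.0, -2.0, 0.5, 3.0, -1.0])      # arbitrary test point
v = prox_quartic(u)
# Check the first-order optimality condition v + ||v||^2 v = u.
print(np.allclose(v + np.linalg.norm(v) ** 2 * v, u))
```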
The feasible set $K$ is taken as
$$K := \{u \in \mathbb{R}^5 : -2 \leq u_i \leq 5\}.$$
The entries of $A$ and $c$ are as follows:
$$A = \begin{pmatrix} 3 & 1 & 0 & 1 & 2 \\ 1 & 5 & 1 & 0 & 1 \\ 0 & 1 & 4 & 2 & 2 \\ 1 & 0 & 2 & 6 & 1 \\ 2 & 1 & 2 & 1 & 4 \end{pmatrix}, \qquad c = (1, -2, -1, 2, -1)^T.$$
Figures 8–11 and Table 2 show the numerical results obtained using $u_{-1} = u_0 = v_0 = (1, \dots, 1)^T$ and $TOL = 10^{-12}$.
Example 4. Let $G : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by
$$G\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1 + v_2 + \sin(v_1) \\ -v_1 + v_2 + \sin(v_2) \end{pmatrix}, \quad \text{for all } (v_1, v_2) \in \mathbb{R}^2,$$
where $K = [-5, 5] \times [-5, 5]$. It is easy to see that $G$ is Lipschitz continuous and strongly pseudomonotone. Figures 12–15 show the numerical results with $u_{-1} = u_0 = v_0$ and $TOL = 10^{-10}$.
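To illustrate how Corollary 1 can be run on this example, here is a compact Python sketch (our own illustration, not the authors' code). The operator's sign pattern follows our reconstruction above, and all parameter values ($\theta$, $\vartheta$, $\zeta_n$, the stopping tolerance) are assumptions chosen only so that the iteration is well defined; they are not tuned to reproduce the reported figures.

```python
import numpy as np

def G(v):
    # Assumed reconstruction of the Example 4 operator.
    return np.array([v[0] + v[1] + np.sin(v[0]),
                     -v[0] + v[1] + np.sin(v[1])])

def proj_K(x):
    # Projection onto the box K = [-5, 5] x [-5, 5].
    return np.clip(x, -5.0, 5.0)

def proj_halfspace(x, a, v):
    # Projection onto H_n = {z : <a, z - v> <= 0}.
    viol = a @ (x - v)
    return x if viol <= 0.0 else x - (viol / (a @ a)) * a

theta, vartheta = 0.2, 0.5            # illustrative inertial / relaxation parameters
u_prev = u = np.array([1.0, 1.0])     # u_{-1} = u_0

for n in range(1, 2000):
    zeta = 1.0 / (n + 1)              # diminishing, non-summable step size
    w = u + theta * (u - u_prev)      # inertial point w_n
    v = proj_K(w - zeta * G(w))       # step (S3)
    if np.linalg.norm(v - w) < 1e-10:
        break
    a = w - zeta * G(w) - v           # normal of the half-space H_n
    eta = proj_halfspace(w - zeta * G(v), a, v)           # step (S4)
    u_prev, u = u, (1 - vartheta) * w + vartheta * eta    # step (S5)

print("approximate solution:", np.round(u, 6))
```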

5. Conclusions

In this paper, we set up a new method by combining an inertial term with a subgradient extragradient method for solving a family of strongly pseudomonotone equilibrium problems. The introduced method uses a diminishing, non-summable step size rule and operates without prior knowledge of the Lipschitz-type constants. Four numerical examples are presented to show the computational performance of the proposed method relative to other existing methods. The numerical experiments clearly indicate that the method with an inertial term performs better than those without one.

Author Contributions

Conceptualization, W.K. and K.M.; methodology, W.K. and K.M.; writing—original draft preparation, W.K. and K.M.; writing—review and editing, W.K. and K.M.; software, W.K. and K.M.; supervision, W.K.; and project administration and funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by Rajamangala University of Technology Phra Nakhon (RMUTP).

Acknowledgments

The first author thanks the Rajamangala University of Technology Thanyaburi (RMUTT) (Grant No. NSF62D0604). The second author thanks the Rajamangala University of Technology Phra Nakhon (RMUTP).

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Fan, K. A Minimax Inequality and Applications, Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  3. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  4. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11. [Google Scholar] [CrossRef] [Green Version]
  5. Muu, L.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
  6. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  7. Antipin, A. Equilibrium programming: Proximal methods. Comput. Math. Math. Phys. 1997, 37, 1285–1296. [Google Scholar]
  8. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 58. [Google Scholar]
  9. Dafermos, S. Traffic Equilibrium and Variational Inequalities. Transp. Sci. 1980, 14, 42–54. [Google Scholar] [CrossRef] [Green Version]
  10. Todorčević, V. Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics; Springer International Publishing: Berlin/Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
  11. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequalities Appl. 2019, 2019. [Google Scholar] [CrossRef]
  12. ur Rehman, H.; Kumam, P.; Je Cho, Y.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32. [Google Scholar] [CrossRef]
  13. Todorčević, V. Subharmonic behavior and quasiconformal mappings. Anal. Math. Phys. 2019, 9, 1211–1225. [Google Scholar] [CrossRef]
  14. Koskela, P.; Manojlović, V. Quasi-Nearly Subharmonic Functions and Quasiconformal Mappings. Potential Anal. 2011, 37, 187–196. [Google Scholar] [CrossRef] [Green Version]
  15. ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39. [Google Scholar] [CrossRef]
  16. Hammad, H.A.; ur Rehman, H.; la Sen, M.D. Advanced Algorithms and Common Solutions to Variational Inequalities. Symmetry 2020, 12, 1198. [Google Scholar] [CrossRef]
  17. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56. [Google Scholar] [CrossRef]
  18. Hieu, D.V. New inertial algorithm for a class of equilibrium problems. Numer. Algorithms 2018, 80, 1413–1436. [Google Scholar] [CrossRef] [Green Version]
  19. Anh, P.K.; Hai, T.N. Novel self-adaptive algorithms for non-Lipschitz equilibrium problems with applications. J. Glob. Optim. 2018, 73, 637–657. [Google Scholar] [CrossRef]
  20. Anh, P.N.; Anh, T.T.H.; Hien, N.D. Modified basic projection methods for a class of equilibrium problems. Numer. Algorithms 2017, 79, 139–152. [Google Scholar] [CrossRef]
  21. ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463. [Google Scholar] [CrossRef] [Green Version]
  22. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503. [Google Scholar] [CrossRef] [Green Version]
  23. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41. [Google Scholar] [CrossRef]
  24. Tran, D.Q.; Dung, M.L.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  25. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  26. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824. [Google Scholar] [CrossRef]
  27. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  28. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  29. Hieu, D.V.; Cho, Y.J.; bin Xiao, Y. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029. [Google Scholar] [CrossRef]
  30. Rehman, H.U.; Kumam, P.; Dong, Q.L.; Peng, Y.; Deebani, W. A new Popov’s subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2020, 1–36. [Google Scholar] [CrossRef]
  31. Yordsorn, P.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. A Weak Convergence Self-Adaptive Method for Solving Pseudomonotone Equilibrium Problems in a Real Hilbert Space. Mathematics 2020, 8, 1165. [Google Scholar] [CrossRef]
  32. Yordsorn, P.; Kumam, P.; Rehman, H.U. Modified two-step extragradient method for solving the pseudomonotone equilibrium programming in a real Hilbert space. Carpathian J. Math. 2020, 36, 313–330. [Google Scholar]
  33. Fan, J.; Liu, L.; Qin, X. A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization 2019, 1–17. [Google Scholar] [CrossRef]
  34. Thong, D.V.; Hieu, D.V. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98. [Google Scholar] [CrossRef]
  35. Censor, Y.; Gibali, A.; Reich, S. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space. J. Optim. Theory Appl. 2010, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523. [Google Scholar] [CrossRef] [Green Version]
  37. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822. [Google Scholar] [CrossRef]
  38. ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models. Energies 2020, 13, 3292. [Google Scholar] [CrossRef]
  39. Alvarez, F.; Attouch, H. An Inertial Proximal Method for Maximal Monotone Operators via Discretization of a Nonlinear Oscillator with Damping. Set-Valued Var. Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  40. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  41. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728. [Google Scholar] [CrossRef] [Green Version]
  42. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984. [Google Scholar]
  43. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley: New York, NY, USA, 1978. [Google Scholar]
Figure 1. Example 1: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{(n+1)\log(n+3)}$.
Figure 2. Example 1: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{n+1}$.
Figure 3. Example 1: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{\log(n+3)}{n+1}$.
Figure 4. Example 2: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{n+1}$.
Figure 5. Example 2: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{n+1}$.
Figure 6. Example 2: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{\log(n+3)}{n+1}$.
Figure 7. Example 2: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{\log(n+3)}{n+1}$.
Figure 8. Example 3: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{(n+1)\log(n+3)}$.
Figure 9. Example 3: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{n+1}$.
Figure 10. Example 3: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{\log(n+3)}{n+1}$.
Figure 11. Example 3: Numerical comparison for Algorithm 1 with $\zeta_n = \frac{1}{n+1}$.
Figure 12. Example 4: Numerical comparison for Algorithm 1 with $u_0 = (1, 1)$ and $\zeta_n = \frac{1}{n+1}$.
Figure 13. Example 4: Numerical comparison for Algorithm 1 with $u_0 = (4, 4)$ and $\zeta_n = \frac{1}{n+1}$.
Figure 14. Example 4: Numerical comparison for Algorithm 1 with $u_0 = (1, 1)$ and $\zeta_n = \frac{1}{n+1}$.
Figure 15. Example 4: Numerical comparison for Algorithm 1 with $u_0 = (2, 2)$ and $\zeta_n = \frac{1}{n+1}$.
Table 1. Example 1: Numerical values for Figures 1, 2 and 3.

                                           Hieu-EgA [26]     Hieu-mEgA [29]    iEgA (Algorithm 1)
n    TOL        ζ_n                        Iter.   Time      Iter.   Time      Iter.   Time
5    10^-12     1/[(n+1)log(n+3)]          320     5.8584    59      0.5979    64      0.2830
5    10^-12     1/(n+1)                    222     3.1116    43      0.4158    39      0.1696
5    10^-12     log(n+3)/(n+1)             122     1.5466    40      0.3732    33      0.1581
Table 2. Example 3: Numerical results for Figures 8, 9, 10 and 11.

                                           Hieu-EgA [26]     Hieu-mEgA [29]    iEgA (Algorithm 1)
n    TOL        ζ_n                        Iter.   Time      Iter.   Time      Iter.   Time
5    10^-10     1/[(n+1)log(n+3)]          440     29.7625   190     16.2712   247     10.8531
5    10^-10     1/(n+1)                    198     13.8482   104     11.8096   145     5.8483
5    10^-10     log(n+3)/(n+1)             178     12.2979   98      7.8478    120     5.2870
5    10^-10     1/(n+1)                    251     16.7337   110     9.6097    148     6.0004