Article

Inertial Extragradient Methods for Solving Split Equilibrium Problems

by
Suthep Suantai
1,2,
Narin Petrot
3,4 and
Manatchanok Khonchaliew
5,*
1
Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2
Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3
Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
4
Centre of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
5
Department of Mathematics, Faculty of Science, Lampang Rajabhat University, Lampang 52100, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1884; https://doi.org/10.3390/math9161884
Submission received: 15 May 2021 / Revised: 22 July 2021 / Accepted: 3 August 2021 / Published: 8 August 2021
(This article belongs to the Special Issue Nonlinear Problems and Applications of Fixed Point Theory)

Abstract: This paper presents two inertial extragradient algorithms for finding a solution of split pseudomonotone equilibrium problems in the setting of real Hilbert spaces. Weak and strong convergence theorems for the introduced algorithms are established under suitable constraint qualifications on the scalar sequences. Numerical experiments are also provided to demonstrate the effectiveness of the proposed algorithms.

1. Introduction

The equilibrium problem is the problem of finding a point x* ∈ C such that

f(x*, y) ≥ 0, ∀y ∈ C,   (1)
where C is a nonempty closed convex subset of a real Hilbert space H, and f : H × H → ℝ is a bifunction. The solution set of the equilibrium problem (1) will be denoted by EP(f, C). It is well known that the equilibrium problem (1) covers many mathematical problems, such as optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems, saddle point problems, and fixed point problems (see [1,2,3,4] and the references therein). One of the most popular methods for solving the equilibrium problem (1), when f is a monotone bifunction, is the proximal point method (see [5]). However, the convergence of the proximal point method cannot be guaranteed under weaker assumptions, such as f being merely pseudomonotone. To overcome this drawback, Tran et al. [6] proposed the following so-called extragradient method for solving the equilibrium problem, when the bifunction f is pseudomonotone and satisfies Lipschitz-type continuity conditions with positive constants c_1 and c_2:
x^0 ∈ C,
y^k = argmin{ λ f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ C },
x^{k+1} = argmin{ λ f(y^k, y) + (1/2)‖y − x^k‖² : y ∈ C },   (2)

where 0 < λ < min{1/(2c_1), 1/(2c_2)}. They proved that the sequence {x^k} generated by (2) converges weakly to a solution of the equilibrium problem (1).
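The two strongly convex subproblems in (2) admit a closed form in the common special case f(x, y) = ⟨F(x), y − x⟩ with an operator F: each argmin reduces to a single projection, since argmin{λf(x, y) + (1/2)‖y − x‖² : y ∈ C} = P_C(x − λF(x)). The following is a minimal sketch of (2) under this assumption; the operator M, the box C, and all parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clip)."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, lam, proj, tol=1e-10, max_iter=10_000):
    """Extragradient method (2) for the bifunction f(x, y) = <F(x), y - x>.

    Both argmin subproblems reduce to projections:
        argmin{ lam*f(x, y) + 0.5*||y - x||^2 : y in C } = P_C(x - lam*F(x)).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj(x - lam * F(x))        # first (extrapolation) step
        x_new = proj(x - lam * F(y))    # second step uses F(y) but the same anchor x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative monotone example: F(x) = M x, unique equilibrium at the origin
M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
sol = extragradient(lambda s: M @ s, [3.0, -2.0], lam=0.1,
                    proj=lambda v: proj_box(v, -5.0, 5.0))
```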
Meanwhile, inertial-type methods have received a lot of attention from many researchers. This technique originates from an implicit discretization (the heavy ball method) of a second-order dynamical system in time [7,8] and can be regarded as a means of accelerating convergence. In general, the main feature of inertial-type methods is that the next iterate is constructed from the two previous iterates. Inertial techniques have been proposed for solving equilibrium problems; see, for instance, [9,10] and the references therein. In 2019, by combining the ideas of the inertial and extragradient methods, Vinh and Muu [11] proposed the following method for solving the equilibrium problem, when the bifunction f is pseudomonotone and satisfies Lipschitz-type continuity conditions with positive constants c_1 and c_2:
x^0, x^1 ∈ C,
w^k = x^k + θ_k(x^k − x^{k−1}),
y^k = argmin{ λ f(w^k, y) + (1/2)‖y − w^k‖² : y ∈ C },
x^{k+1} = argmin{ λ f(y^k, y) + (1/2)‖y − w^k‖² : y ∈ C },   (3)

where 0 < λ < min{1/(2c_1), 1/(2c_2)} and θ_k is a suitable parameter. They proved that the sequence {x^k} generated by (3) converges weakly to a solution of the equilibrium problem (1). Observe that, in the case θ_k = 0 for all k ∈ ℕ, algorithm (3) is nothing but algorithm (2). Moreover, in [11], the authors proposed the following method, when the bifunction f is pseudomonotone and satisfies Lipschitz-type continuity conditions with positive constants c_1 and c_2:
x^0, x^1 ∈ C,
w^k = x^k + θ_k(x^k − x^{k−1}),
y^k = argmin{ λ f(w^k, y) + (1/2)‖y − w^k‖² : y ∈ C },
z^k = argmin{ λ f(y^k, y) + (1/2)‖y − w^k‖² : y ∈ C },
x^{k+1} = (1 − β_k − γ_k)w^k + β_k z^k,   (4)

where 0 < λ < min{1/(2c_1), 1/(2c_2)}, {β_k}, {γ_k} ⊂ (0, 1) are such that Σ_{k=0}^{∞} γ_k = ∞, lim_{k→∞} γ_k = 0, inf_k β_k(1 − β_k − γ_k) > 0, and θ_k is a suitable parameter. They proved that the sequence {x^k} generated by (4) converges strongly to the minimum-norm element of the solution set of the equilibrium problem (1).
On the other hand, Censor and Elfving [12] proposed the following split feasibility problem:

Find x* ∈ C such that Ax* ∈ Q,   (5)

where C and Q are nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator. Many important real-world problems can be formulated as split feasibility problems, which have been used in the study of signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, and data compression (see [12,13,14,15] and the references therein).
In 2012, He [16] (see also Moudafi [17]) introduced the split equilibrium problems, as a generalization of the split feasibility problem (5), as follows:

Find x* ∈ C such that f(x*, y) ≥ 0, ∀y ∈ C, and such that u* := Ax* ∈ Q solves g(u*, v) ≥ 0, ∀v ∈ Q,   (6)

where C, Q are nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively, f : C × C → ℝ and g : Q × Q → ℝ are bifunctions, and A : H_1 → H_2 is a bounded linear operator. To solve the split equilibrium problems (6), He [16] proposed the following proximal point method, when the bifunctions f and g are monotone:
x^0 ∈ C,
y^k ∈ C such that f(y^k, y) + (1/r_k)⟨y − y^k, y^k − x^k⟩ ≥ 0, ∀y ∈ C,
u^k ∈ Q such that g(u^k, v) + (1/r_k)⟨v − u^k, u^k − Ay^k⟩ ≥ 0, ∀v ∈ Q,
x^{k+1} = P_C(y^k + ηA*(u^k − Ay^k)),   (7)

where η ∈ (0, 1/‖A‖²), {r_k} ⊂ (0, +∞) with lim inf_{k→∞} r_k > 0, and A* is the adjoint operator of A. He proved that the sequence {x^k} generated by (7) converges weakly to a solution of the split equilibrium problems (6). Here, algorithm (7) will be called the PPA Algorithm. After that, under the setting f : H_1 × H_1 → ℝ and g : H_2 × H_2 → ℝ, Kim and Dinh [18] proposed the following extragradient method for finding a solution of the split equilibrium problems, when the bifunctions f and g are pseudomonotone and satisfy Lipschitz-type continuity conditions with positive constants c_1 and c_2:
x^0 ∈ C,
y^k = argmin{ λ_k f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ C },
z^k = argmin{ λ_k f(y^k, y) + (1/2)‖y − x^k‖² : y ∈ C },
u^k = argmin{ μ_k g(Az^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q },
v^k = argmin{ μ_k g(u^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q },
x^{k+1} = P_C(z^k + ηA*(v^k − Az^k)),   (8)

where η ∈ (0, 1/‖A‖²), and {λ_k}, {μ_k} ⊂ [ρ̲, ρ̄] with 0 < ρ̲ ≤ ρ̄ < min{1/(2c_1), 1/(2c_2)}. They proved that the sequence {x^k} generated by (8) converges weakly to a solution of the split equilibrium problems. Here, algorithm (8) will be called the PEA Algorithm. We point out that algorithm (8) cannot be applied to problem (6) under the setting g : Q × Q → ℝ, since we cannot guarantee that Az^k belongs to the considered closed convex set Q.
In this paper, we continue developing methods for solving the split equilibrium problems (6). That is, new iterative algorithms will be introduced for finding solutions of the split equilibrium problems when the considered bifunctions are pseudomonotone. Numerical examples and comparisons of the introduced methods with the aforementioned algorithms will also be discussed.
This paper is organized as follows: In Section 2, some definitions and properties are reviewed for use in subsequent sections. Section 3 presents two inertial extragradient algorithms and proves their convergence theorems. In Section 4, we discuss the performance of the two introduced algorithms by comparing them with the well-known algorithms.

2. Preliminaries

This section presents the definitions and some basic properties that will be used in this paper. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖. The symbols → and ⇀ will denote strong convergence and weak convergence in H, respectively.
First, we recall some definitions and facts concerning equilibrium problems.
Definition 1
([1,3,19]). Let C be a nonempty closed convex subset of H. A bifunction f : H × H → ℝ is said to be:

(i) 
monotone on C if

f(x, y) + f(y, x) ≤ 0, ∀x, y ∈ C;

(ii) 
pseudomonotone on C if

f(x, y) ≥ 0 ⟹ f(y, x) ≤ 0, ∀x, y ∈ C;

(iii) 
Lipschitz-type continuous on H with constants L_1 > 0 and L_2 > 0 if

f(x, y) + f(y, z) ≥ f(x, z) − L_1‖x − y‖² − L_2‖y − z‖², ∀x, y, z ∈ H.
Remark 1.
A monotone bifunction is a pseudomonotone bifunction, but the converse is not true in general; see, for instance, [20].
For a nonempty closed convex subset C of H and a bifunction f : H × H → ℝ, we are concerned with the following assumptions in this paper:
Assumption 1.
f is weakly continuous on C × C in the sense that, if x ∈ C, y ∈ C, and {x^k} ⊂ C, {y^k} ⊂ C are two sequences converging weakly to x and y, respectively, then f(x^k, y^k) converges to f(x, y).
Assumption 2.
f(x, ·) is convex and subdifferentiable on C, for each fixed x ∈ C.
Assumption 3.
f is pseudomonotone on C and f(x, x) = 0, for each x ∈ C.
Assumption 4.
f is Lipschitz-type continuous on H with constants L_1 > 0 and L_2 > 0.
Remark 2.
We note that the solution set EP(f, C) is closed and convex when the bifunction f satisfies Assumptions 1–3 (see [6,21,22] for more details).
The following lemma is important in order to obtain the main results of this paper.
Lemma 1.
([23]) Let f : H × H → ℝ satisfy Assumptions 2–4. Assume that EP(f, C) is nonempty and 0 < λ_0 < min{1/(2L_1), 1/(2L_2)}. Let x_0 ∈ H. If y_0 and z_0 are constructed by

y_0 = argmin{ λ_0 f(x_0, w) + (1/2)‖w − x_0‖² : w ∈ C },
z_0 = argmin{ λ_0 f(y_0, w) + (1/2)‖w − x_0‖² : w ∈ C },

then,

(i) 
λ_0[f(x_0, w) − f(x_0, y_0)] ≥ ⟨y_0 − x_0, y_0 − w⟩, ∀w ∈ C;

(ii) 
‖z_0 − q‖² ≤ ‖x_0 − q‖² − (1 − 2λ_0 L_1)‖x_0 − y_0‖² − (1 − 2λ_0 L_2)‖y_0 − z_0‖², ∀q ∈ EP(f, C).
Next, we recall some basic facts from functional analysis which will be used in the sequel. For a Hilbert space H, we know that

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,   (9)

and

‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − βγ‖y − z‖² − αγ‖x − z‖²,   (10)

for each x, y, z ∈ H, and for each α, β, γ ∈ [0, 1] with α + β + γ = 1 (see [11]).
For each x ∈ H, we denote the metric projection of x onto a nonempty closed convex subset C of H by P_C(x); that is,

‖x − P_C(x)‖ ≤ ‖y − x‖, ∀y ∈ C.
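For box constraints, which appear in the numerical experiments of Section 4, the metric projection is simply a componentwise clip, and the characterization and nonexpansivity properties stated in Lemma 2 below can be checked numerically. A small illustration with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    # Metric projection P_C onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

x = 3.0 * rng.normal(size=5)
y = 3.0 * rng.normal(size=5)
z = proj_box(x)

# Characterization of the projection: <x - z, c - z> <= 0 for all c in C
for _ in range(100):
    c = rng.uniform(-1.0, 1.0, size=5)
    assert np.dot(x - z, c - z) <= 1e-12

# Nonexpansivity: ||P_C(x) - P_C(y)|| <= ||x - y||
assert np.linalg.norm(proj_box(x) - proj_box(y)) <= np.linalg.norm(x - y) + 1e-12
```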
Lemma 2.
(see [24,25]) Let C be a nonempty closed convex subset of H. Then,

(i) 
P_C(x) is well defined and single valued for each x ∈ H;

(ii) 
z = P_C(x) if and only if ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C;

(iii) 
P_C is a nonexpansive operator, that is,

‖P_C(x) − P_C(y)‖ ≤ ‖x − y‖, ∀x, y ∈ H.
For a function g : H → ℝ, the subdifferential of g at z ∈ H is defined by

∂g(z) = { w ∈ H : g(y) − g(z) ≥ ⟨w, y − z⟩, ∀y ∈ H }.

The function g is said to be subdifferentiable at z if ∂g(z) ≠ ∅.
Lemma 3.
(see [24]) For any z ∈ H, the subdifferential ∂g(z) of a continuous convex function g is a nonempty, weakly closed, and bounded convex set.
We end this section by recalling some auxiliary results for obtaining the convergence theorems.
Lemma 4.
([26]) Let H be a Hilbert space and {x^k} a sequence in H such that there exists a nonempty set S ⊂ H satisfying:

(i) 
For each z ∈ S, lim_{k→∞} ‖x^k − z‖ exists;

(ii) 
ω_w(x^k) ⊂ S, where ω_w(x^k) = { x ∈ H : there is a subsequence {x^{k_n}} of {x^k} such that x^{k_n} ⇀ x }.

Then, there exists x* ∈ S such that the sequence {x^k} converges weakly to x*.
Lemma 5.
([27]) Let {a_k} and {b_k} be sequences of non-negative real numbers such that a_{k+1} ≤ a_k + b_k, ∀k ∈ ℕ. If Σ_{k=1}^{∞} b_k < ∞, then lim_{k→∞} a_k exists.
Lemma 6.
([28,29]) Let {a_k} and {c_k} be sequences of non-negative real numbers such that

a_{k+1} ≤ (1 − δ_k)a_k + b_k + c_k, ∀k ∈ ℕ ∪ {0},

where {δ_k} is a sequence in (0, 1) and {b_k} is a sequence in ℝ. Assume Σ_{k=0}^{∞} c_k < ∞. Then, the following results hold:

(i) 
If there is M > 0 such that b_k ≤ δ_k M, for all k ∈ ℕ ∪ {0}, then {a_k} is a bounded sequence;

(ii) 
If Σ_{k=0}^{∞} δ_k = ∞ and lim sup_{k→∞} (b_k/δ_k) ≤ 0, then lim_{k→∞} a_k = 0.
Lemma 7.
([30]) Let {a_k} be a sequence of real numbers for which there exists a subsequence {a_{k_i}} of {a_k} such that a_{k_i} < a_{k_i+1}, for all i ∈ ℕ. Then, there exists a nondecreasing sequence {m_n} of positive integers such that lim_{n→∞} m_n = ∞ and the following properties hold:

a_{m_n} ≤ a_{m_n+1} and a_n ≤ a_{m_n+1},

for all (sufficiently large) n ∈ ℕ. Indeed, m_n is the largest number k in the set {1, 2, …, n} such that

a_k < a_{k+1}.

3. Main Results

Let H_1 and H_2 be two real Hilbert spaces and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. Suppose that f : H_1 × H_1 → ℝ and g : H_2 × H_2 → ℝ are bifunctions satisfying Assumptions 1–4 with positive constants {c_1, c_2} and {d_1, d_2}, respectively. Let us recall the split equilibrium problems:

Find x* ∈ C such that f(x*, y) ≥ 0, ∀y ∈ C, and such that u* := Ax* ∈ Q solves g(u*, v) ≥ 0, ∀v ∈ Q,   (11)

where A : H_1 → H_2 is a bounded linear operator with adjoint operator A*. From now on, the solution set of problem (11) will be denoted by Ω; that is,

Ω := EP(f, C) ∩ A^{−1}(EP(g, Q)).

3.1. Inertial Extragradient Method

Now, we introduce Algorithm 1 for solving the split equilibrium problems (11).
Algorithm 1: Inertial Extragradient Method (IEM)
Initialization. Choose parameters α ∈ [0, 1), η ∈ (0, 1/‖A‖²), {λ_k} with 0 < inf λ_k ≤ sup λ_k < min{1/(2c_1), 1/(2c_2)}, {μ_k} with 0 < inf μ_k ≤ sup μ_k < min{1/(2d_1), 1/(2d_2)}, and {ϵ_k} ⊂ [0, ∞) such that Σ_{k=0}^{∞} ϵ_k < ∞. Pick x^0, x^1 ∈ C and set k = 1.
Step 1. Choose θ_k such that 0 ≤ θ_k ≤ θ̄_k, where

θ̄_k = min{ α, ϵ_k/‖x^k − x^{k−1}‖ } if x^k ≠ x^{k−1}, and θ̄_k = α otherwise,

and compute

w^k = x^k + θ_k(x^k − x^{k−1}).
Step 2. Solve the strongly convex program
y^k = argmin{ λ_k f(w^k, y) + (1/2)‖y − w^k‖² : y ∈ C }.
Step 3. Solve the strongly convex program
z^k = argmin{ λ_k f(y^k, y) + (1/2)‖y − w^k‖² : y ∈ C }.
Step 4. Solve the strongly convex program
u^k = argmin{ μ_k g(Az^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q }.
Step 5. Solve the strongly convex program
v^k = argmin{ μ_k g(u^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q }.
Step 6. The next approximation x k + 1 is defined by
x^{k+1} = P_C(z^k + ηA*(v^k − Az^k)).
Step 7. Put k : = k + 1 and go to Step 1.
Remark 3.
We point out that the term θ_k(x^k − x^{k−1}) included in the IEM Algorithm is intended to speed up convergence and is called the inertial effect. We emphasize that a suitable choice of the parameter θ_k may lead to superior numerical behavior of the IEM Algorithm. Moreover, we observe that if θ_k = 0 for each k ∈ ℕ, then the IEM Algorithm reduces to the PEA Algorithm (8), which was presented in [18].
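To make the procedure concrete, the following is a minimal sketch of the IEM Algorithm for the special case f(x, y) = ⟨F(x), y − x⟩ and g(u, v) = ⟨G(u), v − u⟩, in which every strongly convex subproblem in Steps 2–5 reduces to a projection. The operators M1 and M2, the boxes, and the parameter choices below are hypothetical test data, not values used in the paper.

```python
import numpy as np

def iem(F, G, A, proj_C, proj_Q, lam, mu, eta, alpha=0.5,
        x0=None, max_iter=5000, tol=1e-10):
    """Sketch of the IEM Algorithm for f(x,y) = <F(x), y-x>, g(u,v) = <G(u), v-u>."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        eps_k = 1.0 / (k + 1) ** 2                          # summable sequence {eps_k}
        d = np.linalg.norm(x - x_prev)
        theta = alpha if d == 0 else min(alpha, eps_k / d)  # Step 1
        w = x + theta * (x - x_prev)                        # inertial point
        y = proj_C(w - lam * F(w))                          # Step 2
        z = proj_C(w - lam * F(y))                          # Step 3
        Az = A @ z
        u = proj_Q(Az - mu * G(Az))                         # Step 4
        v = proj_Q(Az - mu * G(u))                          # Step 5
        x_prev, x = x, proj_C(z + eta * (A.T @ (v - Az)))   # Step 6
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Hypothetical split problem whose unique solution is the origin
M1 = np.array([[2.0, 1.0], [-1.0, 2.0]])
M2 = 3.0 * np.eye(3)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])          # bounded linear operator
eta = 0.9 / np.linalg.norm(A, 2) ** 2                       # eta in (0, 1/||A||^2)
box = lambda lo, hi: (lambda t: np.clip(t, lo, hi))
sol = iem(lambda s: M1 @ s, lambda s: M2 @ s, A,
          box(-5.0, 5.0), box(-20.0, 20.0),
          lam=0.1, mu=0.1, eta=eta, x0=[4.0, -3.0])
```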
Theorem 1.
Suppose that the solution set Ω is nonempty. Then, the sequence {x^k} generated by the IEM Algorithm converges weakly to an element of Ω.
Proof. 
Let p ∈ Ω; that is, p ∈ EP(f, C) and Ap ∈ EP(g, Q). Then, by Lemma 1 (ii), we have

‖z^k − p‖² ≤ ‖w^k − p‖² − (1 − 2λ_k c_1)‖w^k − y^k‖² − (1 − 2λ_k c_2)‖y^k − z^k‖².   (12)

This implies that

‖z^k − p‖ ≤ ‖w^k − p‖.   (13)

By the definition of w^k, we have

‖w^k − p‖² = ‖(1 + θ_k)(x^k − p) − θ_k(x^{k−1} − p)‖² = (1 + θ_k)‖x^k − p‖² − θ_k‖x^{k−1} − p‖² + θ_k(1 + θ_k)‖x^k − x^{k−1}‖² ≤ (1 + θ_k)‖x^k − p‖² − θ_k‖x^{k−1} − p‖² + 2θ_k‖x^k − x^{k−1}‖².   (14)

Thus, in view of (12) and (14), we obtain

‖z^k − p‖² ≤ ‖x^k − p‖² + θ_k(‖x^k − p‖² − ‖x^{k−1} − p‖²) + 2θ_k‖x^k − x^{k−1}‖² − (1 − 2λ_k c_1)‖w^k − y^k‖² − (1 − 2λ_k c_2)‖y^k − z^k‖².   (15)
On the other hand, by Lemma 1 (ii), we obtain

‖v^k − Ap‖² ≤ ‖Az^k − Ap‖² − (1 − 2μ_k d_1)‖Az^k − u^k‖² − (1 − 2μ_k d_2)‖u^k − v^k‖².   (16)

This implies that

‖v^k − Ap‖ ≤ ‖Az^k − Ap‖.   (17)

By the definition of x^{k+1} and the nonexpansivity of P_C, we have

‖x^{k+1} − p‖² ≤ ‖(z^k − p) + ηA*(v^k − Az^k)‖² ≤ ‖z^k − p‖² + η²‖A‖²‖v^k − Az^k‖² + 2η⟨A(z^k − p), v^k − Az^k⟩.   (18)

Consider

2⟨A(z^k − p), v^k − Az^k⟩ = 2⟨v^k − Ap, v^k − Az^k⟩ − 2‖v^k − Az^k‖² = ‖v^k − Ap‖² − ‖v^k − Az^k‖² − ‖Az^k − Ap‖².   (19)

Using this together with (18), we obtain

‖x^{k+1} − p‖² ≤ ‖z^k − p‖² − η(1 − η‖A‖²)‖v^k − Az^k‖² + η(‖v^k − Ap‖² − ‖Az^k − Ap‖²).   (20)

Combining this with (17) implies that

‖x^{k+1} − p‖² ≤ ‖z^k − p‖² − η(1 − η‖A‖²)‖v^k − Az^k‖².   (21)

Thus, by the choice of η, we have

‖x^{k+1} − p‖ ≤ ‖z^k − p‖.   (22)

Now, the relations (13) and (22) imply that

‖x^{k+1} − p‖ ≤ ‖w^k − p‖.   (23)

So, it follows from the definition of w^k that

‖x^{k+1} − p‖ ≤ ‖x^k − p‖ + θ_k‖x^k − x^{k−1}‖.   (24)

Due to the properties of the sequences {θ_k} and {ϵ_k}, we observe that

Σ_{k=1}^{∞} θ_k‖x^k − x^{k−1}‖ ≤ Σ_{k=1}^{∞} θ̄_k‖x^k − x^{k−1}‖ ≤ Σ_{k=1}^{∞} ϵ_k < ∞.   (25)
Then, by (24), (25), and Lemma 5, lim_{k→∞} ‖x^k − p‖ exists. Consequently, the sequence {x^k} is bounded. In addition, in view of (15) and (22), we see that

(1 − 2λ_k c_1)‖w^k − y^k‖² + (1 − 2λ_k c_2)‖y^k − z^k‖² ≤ ‖x^k − p‖² − ‖x^{k+1} − p‖² + θ_k(‖x^k − p‖² − ‖x^{k−1} − p‖²) + 2θ_k‖x^k − x^{k−1}‖².   (26)

Thus, by the choice of the control sequence {λ_k}, together with the existence of lim_{k→∞} ‖x^k − p‖ and lim_{k→∞} θ_k‖x^k − x^{k−1}‖ = 0, we have

lim_{k→∞} ‖w^k − y^k‖ = 0,   (27)

and

lim_{k→∞} ‖y^k − z^k‖ = 0.   (28)

These imply that

lim_{k→∞} ‖w^k − z^k‖ = 0.   (29)

Moreover, by using lim_{k→∞} θ_k‖x^k − x^{k−1}‖ = 0, we have

lim_{k→∞} ‖w^k − x^k‖ = 0.   (30)
Using this together with (27), we obtain

lim_{k→∞} ‖x^k − y^k‖ = 0.   (31)

Thus, it follows from (28) that

lim_{k→∞} ‖x^k − z^k‖ = 0.   (32)

On the other hand, by using (21), we see that

η(1 − η‖A‖²)‖v^k − Az^k‖² ≤ ‖z^k − p‖² − ‖x^{k+1} − p‖² ≤ (‖z^k − x^k‖ + ‖x^k − p‖ − ‖x^{k+1} − p‖)(‖z^k − p‖ + ‖x^{k+1} − p‖).   (33)

Thus, by the existence of lim_{k→∞} ‖x^k − p‖ and (32), we have

lim_{k→∞} ‖v^k − Az^k‖ = 0.   (34)

Furthermore, in view of (16), we obtain

(1 − 2μ_k d_1)‖Az^k − u^k‖² + (1 − 2μ_k d_2)‖u^k − v^k‖² ≤ ‖Az^k − Ap‖² − ‖v^k − Ap‖² ≤ ‖Az^k − v^k‖(‖Az^k − Ap‖ + ‖v^k − Ap‖).

Thus, applying (34) to the above inequality, we have

lim_{k→∞} ‖Az^k − u^k‖ = 0,   (35)

and

lim_{k→∞} ‖u^k − v^k‖ = 0.   (36)
Now, we will complete the proof by using Lemma 4. Note that it only remains to show that ω_w(x^k) ⊂ Ω. Let x* ∈ ω_w(x^k) and let {x^{k_n}} be a subsequence of {x^k} such that x^{k_n} ⇀ x* as n → ∞.
By (30)–(32), we also have w^{k_n} ⇀ x*, y^{k_n} ⇀ x*, and z^{k_n} ⇀ x*, as n → ∞. The latter fact implies that Az^{k_n} ⇀ Ax* as n → ∞. Using this together with (35), we obtain u^{k_n} ⇀ Ax* as n → ∞. Since C and Q are closed and convex, they are weakly closed; therefore, x* ∈ C and Ax* ∈ Q. By Lemma 1 (i), we have

λ_{k_n}[f(w^{k_n}, y) − f(w^{k_n}, y^{k_n})] ≥ ⟨y^{k_n} − w^{k_n}, y^{k_n} − y⟩, ∀y ∈ C,

and

μ_{k_n}[g(Az^{k_n}, u) − g(Az^{k_n}, u^{k_n})] ≥ ⟨u^{k_n} − Az^{k_n}, u^{k_n} − u⟩, ∀u ∈ Q.

These imply that

f(w^{k_n}, y) ≥ f(w^{k_n}, y^{k_n}) − (1/λ_{k_n})‖y^{k_n} − w^{k_n}‖‖y^{k_n} − y‖, ∀y ∈ C,

and

g(Az^{k_n}, u) ≥ g(Az^{k_n}, u^{k_n}) − (1/μ_{k_n})‖u^{k_n} − Az^{k_n}‖‖u^{k_n} − u‖, ∀u ∈ Q.

Thus, by using (27), (35), and the weak continuity of f and g, we have

f(x*, y) ≥ 0, ∀y ∈ C,

and

g(Ax*, u) ≥ 0, ∀u ∈ Q.

This shows that x* ∈ Ω, and hence ω_w(x^k) ⊂ Ω. Therefore, by Lemma 4, we conclude that the sequence {x^k} converges weakly to an element of Ω. This completes the proof. □

3.2. Mann-Type Inertial Extragradient Method

In order to obtain a strong convergence result, we propose Algorithm 2 by using Mann-type techniques (see [11,31]).
Algorithm 2: Mann-type Inertial Extragradient Method (MIEM)
Initialization. Choose parameters α ∈ [0, 1), η ∈ (0, 1/‖A‖²), {λ_k} with 0 < inf λ_k ≤ sup λ_k < min{1/(2c_1), 1/(2c_2)}, {μ_k} with 0 < inf μ_k ≤ sup μ_k < min{1/(2d_1), 1/(2d_2)}, and {ϵ_k} ⊂ [0, ∞), {β_k} ⊂ (0, 1), {γ_k} ⊂ (0, 1) such that inf_k β_k(1 − β_k − γ_k) > 0, Σ_{k=0}^{∞} γ_k = ∞, lim_{k→∞} γ_k = 0, Σ_{k=0}^{∞} ϵ_k < ∞, and ϵ_k = o(γ_k), where ϵ_k = o(γ_k) means that the sequence {ϵ_k} is an infinitesimal of higher order than {γ_k}. Pick x^0, x^1 ∈ C and set k = 1.
Step 1. Choose θ_k such that 0 ≤ θ_k ≤ θ̄_k, where

θ̄_k = min{ α, ϵ_k/‖x^k − x^{k−1}‖ } if x^k ≠ x^{k−1}, and θ̄_k = α otherwise,

and compute

w^k = x^k + θ_k(x^k − x^{k−1}).
Step 2. Solve the strongly convex program
y^k = argmin{ λ_k f(w^k, y) + (1/2)‖y − w^k‖² : y ∈ C }.
Step 3. Solve the strongly convex program
z^k = argmin{ λ_k f(y^k, y) + (1/2)‖y − w^k‖² : y ∈ C }.
Step 4. Solve the strongly convex program
u^k = argmin{ μ_k g(Az^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q }.
Step 5. Solve the strongly convex program
v^k = argmin{ μ_k g(u^k, u) + (1/2)‖u − Az^k‖² : u ∈ Q }.
Step 6. Compute

t^k = P_C(z^k + ηA*(v^k − Az^k)).
Step 7. The next approximation x^{k+1} is defined by

x^{k+1} = (1 − β_k − γ_k)w^k + β_k t^k.
Step 8. Put k : = k + 1 and go to Step 1.
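The only difference from the IEM Algorithm lies in Steps 6 and 7: the projected point t^k is no longer taken directly as the next iterate but is averaged with w^k, and the missing weight γ_k implicitly anchors the iteration at 0, which is what produces strong convergence to the minimum-norm solution. A sketch of this combination step, assuming the parameter choices γ_k = 1/(k + 1) and β_k = 0.5(1 − γ_k) used later in Section 4:

```python
import numpy as np

def miem_step(w, t, k):
    """Mann-type combination (Step 7 of MIEM):
        x_{k+1} = (1 - beta_k - gamma_k) * w + beta_k * t.

    The coefficients sum to 1 - gamma_k < 1, so the update implicitly mixes in
    gamma_k * 0, pulling the iterates toward the minimum-norm element of the
    solution set.
    """
    gamma = 1.0 / (k + 1)
    beta = 0.5 * (1.0 - gamma)
    return (1.0 - beta - gamma) * w + beta * t

# For k = 1: gamma = 0.5, beta = 0.25, so the result is 0.25*w + 0.25*t
step = miem_step(np.array([2.0]), np.array([0.0]), 1)
```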
Theorem 2.
Suppose that the solution set Ω is nonempty. Then, the sequence {x^k} generated by the MIEM Algorithm converges strongly to the minimum-norm element of Ω.
Proof. 
Let p ∈ Ω; that is, p ∈ EP(f, C) and Ap ∈ EP(g, Q). Following the proof of Theorem 1, we can check that

‖z^k − p‖ ≤ ‖w^k − p‖,   (37)

‖t^k − p‖ ≤ ‖z^k − p‖,   (38)

‖v^k − Ap‖ ≤ ‖Az^k − Ap‖,   (39)

‖w^k − p‖² ≤ ‖x^k − p‖² + θ_k(‖x^k − p‖² − ‖x^{k−1} − p‖²) + 2θ_k‖x^k − x^{k−1}‖²,   (40)

and

‖t^k − p‖² ≤ ‖z^k − p‖² − η(1 − η‖A‖²)‖v^k − Az^k‖².   (41)
By the definition of x^{k+1}, we obtain

‖x^{k+1} − p‖ = ‖(1 − β_k − γ_k)(w^k − p) + β_k(t^k − p) − γ_k p‖ ≤ (1 − β_k − γ_k)‖w^k − p‖ + β_k‖t^k − p‖ + γ_k‖p‖.

It follows from (37) and (38) that

‖x^{k+1} − p‖ ≤ (1 − γ_k)‖w^k − p‖ + γ_k‖p‖.

Thus, by the definition of w^k, we have

‖x^{k+1} − p‖ ≤ (1 − γ_k)‖x^k − p‖ + (1 − γ_k)θ_k‖x^k − x^{k−1}‖ + γ_k‖p‖ = (1 − γ_k)‖x^k − p‖ + γ_k(σ_k + ‖p‖),   (42)

where σ_k = ((1 − γ_k)θ_k/γ_k)‖x^k − x^{k−1}‖. Due to the choice of the sequence {θ_k}, we obtain

σ_k = ((1 − γ_k)θ_k/γ_k)‖x^k − x^{k−1}‖ ≤ (1 − γ_k)ϵ_k/γ_k.

Thus, by the properties ϵ_k = o(γ_k) and lim_{k→∞} γ_k = 0, we have

lim_{k→∞} σ_k = 0.   (43)

This implies that {σ_k} is a null sequence. Put M = max{‖p‖, sup_{k∈ℕ} σ_k}. Then, by (42) and Lemma 6 (i), the sequence {‖x^k − p‖} is bounded. Consequently, {x^k} is a bounded sequence.
In addition, by the definition of x^{k+1} and (10), we have

‖x^{k+1} − p‖² = ‖(1 − β_k − γ_k)(w^k − p) + β_k(t^k − p) + γ_k(−p)‖² ≤ (1 − β_k − γ_k)‖w^k − p‖² + β_k‖t^k − p‖² + γ_k‖p‖² − β_k(1 − β_k − γ_k)‖w^k − t^k‖².   (44)

Thus, by using (38), (40), and Lemma 1 (ii), we obtain

‖x^{k+1} − p‖² ≤ (1 − β_k − γ_k)‖w^k − p‖² + β_k‖z^k − p‖² + γ_k‖p‖² − β_k(1 − β_k − γ_k)‖w^k − t^k‖² ≤ (1 − γ_k)‖w^k − p‖² − β_k(1 − β_k − γ_k)‖w^k − t^k‖² + γ_k‖p‖² − β_k(1 − 2λ_k c_1)‖w^k − y^k‖² − β_k(1 − 2λ_k c_2)‖y^k − z^k‖² ≤ (1 − γ_k)‖x^k − p‖² + θ_k(1 − γ_k)(‖x^k − p‖² − ‖x^{k−1} − p‖²) + 2θ_k(1 − γ_k)‖x^k − x^{k−1}‖² − β_k(1 − β_k − γ_k)‖w^k − t^k‖² + γ_k‖p‖² − β_k(1 − 2λ_k c_1)‖w^k − y^k‖² − β_k(1 − 2λ_k c_2)‖y^k − z^k‖².

This implies that

β_k(1 − β_k − γ_k)‖w^k − t^k‖² + β_k(1 − 2λ_k c_1)‖w^k − y^k‖² + β_k(1 − 2λ_k c_2)‖y^k − z^k‖² ≤ ‖x^k − p‖² − ‖x^{k+1} − p‖² + θ_k(1 − γ_k)(‖x^k − p‖² − ‖x^{k−1} − p‖²) + 2θ_k(1 − γ_k)‖x^k − x^{k−1}‖² + γ_k‖p‖².   (45)
Next, we will show that {x^k} converges strongly to p̃ := P_Ω(0). We consider the following two possible cases.
Case 1. Suppose that there exists k_0 ∈ ℕ such that ‖x^{k+1} − p̃‖ ≤ ‖x^k − p̃‖, for all k ≥ k_0. This means that {‖x^k − p̃‖}_{k≥k_0} is a nonincreasing sequence. Consequently, together with the boundedness of {‖x^k − p̃‖}, this implies that the limit of ‖x^k − p̃‖ exists. Since lim_{k→∞} θ_k‖x^k − x^{k−1}‖ = 0, by the properties of the control sequences {β_k}, {γ_k}, {λ_k}, {θ_k}, it follows from (45) that

lim_{k→∞} ‖w^k − t^k‖ = 0,   (46)

lim_{k→∞} ‖w^k − y^k‖ = 0,   (47)

and

lim_{k→∞} ‖y^k − z^k‖ = 0.   (48)

These imply that

lim_{k→∞} ‖z^k − t^k‖ = 0.   (49)

Moreover, since lim_{k→∞} θ_k‖x^k − x^{k−1}‖ = 0, we obtain

lim_{k→∞} ‖w^k − x^k‖ = 0.   (50)

Using this together with (47), we obtain

lim_{k→∞} ‖x^k − y^k‖ = 0.   (51)

It follows from (48) that

lim_{k→∞} ‖x^k − z^k‖ = 0.   (52)
On the other hand, in view of (41), we see that

η(1 − η‖A‖²)‖v^k − Az^k‖² ≤ ‖z^k − p̃‖² − ‖t^k − p̃‖² ≤ ‖z^k − t^k‖(‖z^k − p̃‖ + ‖t^k − p̃‖).

Thus, by using (49), we have

lim_{k→∞} ‖v^k − Az^k‖ = 0.   (53)

Furthermore, by Lemma 1 (ii), we obtain

(1 − 2μ_k d_1)‖Az^k − u^k‖² + (1 − 2μ_k d_2)‖u^k − v^k‖² ≤ ‖Az^k − Ap̃‖² − ‖v^k − Ap̃‖² ≤ ‖Az^k − v^k‖(‖Az^k − Ap̃‖ + ‖v^k − Ap̃‖).

Then, applying (53) to the above inequality, we have

lim_{k→∞} ‖Az^k − u^k‖ = 0,   (54)

and

lim_{k→∞} ‖u^k − v^k‖ = 0.   (55)
Now, let x* ∈ ω_w(x^k) and let {x^{k_n}} be a subsequence of {x^k} such that x^{k_n} ⇀ x* as n → ∞. Following the lines of the proof of Theorem 1, we can show that x* ∈ Ω. This means that ω_w(x^k) ⊂ Ω. Put s^k = (1 − β_k)w^k + β_k t^k. The relations (37) and (38) imply that

‖s^k − p̃‖ ≤ (1 − β_k)‖w^k − p̃‖ + β_k‖t^k − p̃‖ ≤ ‖w^k − p̃‖.   (56)

By the definition of x^{k+1}, we see that

x^{k+1} = s^k − γ_k w^k = (1 − γ_k)s^k − γ_k(w^k − s^k) = (1 − γ_k)s^k − γ_k β_k(w^k − t^k).

It follows from (9) that

‖x^{k+1} − p̃‖² = ‖(1 − γ_k)(s^k − p̃) − γ_k β_k(w^k − t^k) − γ_k p̃‖² ≤ (1 − γ_k)‖s^k − p̃‖² − 2γ_k β_k⟨w^k − t^k, x^{k+1} − p̃⟩ − 2γ_k⟨p̃, x^{k+1} − p̃⟩.   (57)

Thus, by using (56), we have

‖x^{k+1} − p̃‖² ≤ (1 − γ_k)‖w^k − p̃‖² + γ_k[−2β_k⟨w^k − t^k, x^{k+1} − p̃⟩ − 2⟨p̃, x^{k+1} − p̃⟩].   (58)

Consider

‖w^k − p̃‖² ≤ (‖x^k − p̃‖ + θ_k‖x^k − x^{k−1}‖)² = ‖x^k − p̃‖² + 2θ_k‖x^k − p̃‖‖x^k − x^{k−1}‖ + θ_k²‖x^k − x^{k−1}‖² ≤ ‖x^k − p̃‖² + 3Mθ_k‖x^k − x^{k−1}‖,   (59)

where M = sup_{k∈ℕ}{‖x^k − p̃‖, ‖x^k − x^{k−1}‖}. This, together with (58), implies that

‖x^{k+1} − p̃‖² ≤ (1 − γ_k)‖x^k − p̃‖² + 3Mθ_k(1 − γ_k)‖x^k − x^{k−1}‖ + γ_k[−2β_k⟨w^k − t^k, x^{k+1} − p̃⟩ − 2⟨p̃, x^{k+1} − p̃⟩] = (1 − γ_k)‖x^k − p̃‖² + γ_k[3M((1 − γ_k)θ_k/γ_k)‖x^k − x^{k−1}‖ − 2β_k⟨w^k − t^k, x^{k+1} − p̃⟩ − 2⟨p̃, x^{k+1} − p̃⟩].   (60)

Thus, by the property of p̃ := P_Ω(0) and the fact that x* ∈ ω_w(x^k) ⊂ Ω, we obtain

lim sup_{k→∞} ⟨−p̃, x^{k+1} − p̃⟩ = lim_{n→∞} ⟨−p̃, x^{k_n+1} − p̃⟩ = ⟨−p̃, x* − p̃⟩ ≤ 0.   (61)

Hence, by (43), (46), (60), (61), and Lemma 6 (ii), we have

lim_{k→∞} ‖x^k − p̃‖ = 0.
This completes the proof for the first case.
Case 2. Suppose that there exists a subsequence {‖x^{k_i} − p̃‖} of {‖x^k − p̃‖} such that

‖x^{k_i} − p̃‖ < ‖x^{k_i+1} − p̃‖, ∀i ∈ ℕ.   (62)

According to Lemma 7, there exists a nondecreasing sequence {m_n} ⊂ ℕ such that lim_{n→∞} m_n = ∞, and

‖x^{m_n} − p̃‖ ≤ ‖x^{m_n+1} − p̃‖ and ‖x^n − p̃‖ ≤ ‖x^{m_n+1} − p̃‖, ∀n ∈ ℕ.   (63)

It follows from (45) that

β_{m_n}(1 − β_{m_n} − γ_{m_n})‖w^{m_n} − t^{m_n}‖² + β_{m_n}(1 − 2λ_{m_n}c_1)‖w^{m_n} − y^{m_n}‖² + β_{m_n}(1 − 2λ_{m_n}c_2)‖y^{m_n} − z^{m_n}‖² ≤ ‖x^{m_n} − p̃‖² − ‖x^{m_n+1} − p̃‖² + θ_{m_n}(1 − γ_{m_n})(‖x^{m_n} − p̃‖² − ‖x^{m_n−1} − p̃‖²) + 2θ_{m_n}(1 − γ_{m_n})‖x^{m_n} − x^{m_n−1}‖² + γ_{m_n}‖p̃‖² ≤ θ_{m_n}(1 − γ_{m_n})‖x^{m_n} − x^{m_n−1}‖(‖x^{m_n} − p̃‖ + ‖x^{m_n−1} − p̃‖) + 2θ_{m_n}(1 − γ_{m_n})‖x^{m_n} − x^{m_n−1}‖² + γ_{m_n}‖p̃‖².   (64)

Following the lines of the proof of Case 1, we can show that

lim_{n→∞} ‖w^{m_n} − t^{m_n}‖ = 0, lim_{n→∞} ‖x^{m_n} − y^{m_n}‖ = 0, lim_{n→∞} ‖x^{m_n} − z^{m_n}‖ = 0,   (65)

lim_{n→∞} ‖v^{m_n} − Az^{m_n}‖ = 0, lim_{n→∞} ‖Az^{m_n} − u^{m_n}‖ = 0, lim_{n→∞} ‖u^{m_n} − v^{m_n}‖ = 0,   (66)

lim sup_{n→∞} ⟨−p̃, x^{m_n+1} − p̃⟩ ≤ 0,   (67)

and

‖x^{m_n+1} − p̃‖² ≤ (1 − γ_{m_n})‖x^{m_n} − p̃‖² + γ_{m_n}[3M((1 − γ_{m_n})θ_{m_n}/γ_{m_n})‖x^{m_n} − x^{m_n−1}‖ − 2β_{m_n}⟨w^{m_n} − t^{m_n}, x^{m_n+1} − p̃⟩ − 2⟨p̃, x^{m_n+1} − p̃⟩],   (68)

where M = sup_{n∈ℕ}{‖x^{m_n} − p̃‖, ‖x^{m_n} − x^{m_n−1}‖}. This, together with (63), implies that

‖x^{m_n+1} − p̃‖² ≤ (1 − γ_{m_n})‖x^{m_n+1} − p̃‖² + γ_{m_n}[3M((1 − γ_{m_n})θ_{m_n}/γ_{m_n})‖x^{m_n} − x^{m_n−1}‖ − 2β_{m_n}⟨w^{m_n} − t^{m_n}, x^{m_n+1} − p̃⟩ − 2⟨p̃, x^{m_n+1} − p̃⟩].   (69)

Using this together with (63) again, we obtain

‖x^n − p̃‖² ≤ 3M((1 − γ_{m_n})θ_{m_n}/γ_{m_n})‖x^{m_n} − x^{m_n−1}‖ − 2β_{m_n}⟨w^{m_n} − t^{m_n}, x^{m_n+1} − p̃⟩ − 2⟨p̃, x^{m_n+1} − p̃⟩.   (70)

Thus, by using (43), (65), and (67), we have

lim sup_{n→∞} ‖x^n − p̃‖² ≤ 0.

Hence, we can conclude that the sequence {x^n} converges strongly to p̃ := P_Ω(0). This completes the proof. □

4. Numerical Experiments

This section presents some examples and numerical results supporting Theorems 1 and 2. We compare the introduced algorithms, IEM and MIEM, with the PPA Algorithm (7) in Example 1 and with the PEA Algorithm (8) in Example 2. The numerical experiments were written in Matlab R2015b and performed on a laptop with an AMD Dual Core R3-2200U CPU @ 2.50 GHz and 4.00 GB RAM. In both Examples 1 and 2, for each matrix considered, ‖·‖ denotes the spectral norm.
Example 1.
Let H_1 = ℝ^n and H_2 = ℝ^m be two real Hilbert spaces with the Euclidean norm. We consider the bifunctions f̃ and g̃ generated from the Nash–Cournot oligopolistic equilibrium models of electricity markets (see [22,32]):

f̃(x, y) = ⟨P_1 x + Q_1 y, y − x⟩, ∀x, y ∈ ℝ^n,

g̃(u, v) = ⟨P_2 u + Q_2 v, v − u⟩, ∀u, v ∈ ℝ^m,

where P_1, Q_1 ∈ ℝ^{n×n} and P_2, Q_2 ∈ ℝ^{m×m} are matrices such that Q_1, Q_2 are symmetric positive semidefinite and Q_1 − P_1, Q_2 − P_2 are negative semidefinite. Observe that f̃(x, y) + f̃(y, x) = (x − y)^T(Q_1 − P_1)(x − y) ≤ 0, ∀x, y ∈ ℝ^n. Hence, from the property of Q_1 − P_1, the bifunction f̃ is monotone. Similarly, g̃ is monotone.
Next, we consider the two bifunctions f and g given by

f(x, y) = f̃(x, y) if (x, y) ∈ C × C, and f(x, y) = 0 otherwise,

and

g(u, v) = g̃(u, v) if (u, v) ∈ Q × Q, and g(u, v) = 0 otherwise,

where C = Π_{i=1}^{n} [−5, 5] and Q = Π_{j=1}^{m} [−20, 20] are constraint boxes. We note that f and g are Lipschitz-type continuous with constants c_1 = c_2 = (1/2)‖P_1 − Q_1‖ and d_1 = d_2 = (1/2)‖P_2 − Q_2‖, respectively (see [6]). Choose b_1 = max{c_1, d_1} and b_2 = max{c_2, d_2}. Then, both bifunctions f and g are Lipschitz-type continuous with constants b_1 and b_2.
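One possible way to generate a pair (P_1, Q_1) with the stated properties, and to compute the resulting Lipschitz-type constants and step size, is sketched below; the construction P = Q + NNᵀ/n is our illustrative choice (any PSD perturbation of Q works), not necessarily the generator used for Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pair(n):
    """Generate (P, Q): Q symmetric positive semidefinite and Q - P negative
    semidefinite, as required in Example 1 (illustrative construction)."""
    B = rng.uniform(-5.0, 5.0, size=(n, n))
    Q = B @ B.T / n                  # symmetric PSD
    N = rng.uniform(-5.0, 5.0, size=(n, n))
    P = Q + N @ N.T / n              # then Q - P = -N N^T / n is NSD
    return P, Q

P1, Q1 = random_pair(10)
# Lipschitz-type constants of f (see [6]); the norm is the spectral norm
c1 = c2 = 0.5 * np.linalg.norm(P1 - Q1, 2)
lam = 1.0 / (4.0 * max(c1, c2))      # satisfies lam < min{1/(2 c1), 1/(2 c2)}
```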
For this numerical experiment, the matrices P_1, Q_1, P_2, and Q_2 are randomly generated with entries in the interval [−5, 5] such that they satisfy the required properties above, and the linear operator A : ℝ^n → ℝ^m is an m × n matrix whose entries are randomly generated in the interval [−2, 2]. Note that the solution set Ω is nonempty because 0 ∈ Ω. We work with the following control parameters: α = 0.5, η = 1/(2‖A‖²), λ_k = μ_k = 1/(4 max{b_1, b_2}), ϵ_k = 1/(k + 1)², γ_k = 1/(k + 1), and β_k = 0.5(1 − γ_k). The following five cases of the parameter θ_k are considered:
Case 1. θ k = 0 .
Case 2. θ k = 0.25 θ ¯ k .
Case 3. θ k = 0.5 θ ¯ k .
Case 4. θ k = 0.75 θ ¯ k .
Case 5. θ k = θ ¯ k .
The function quadprog in the Matlab Optimization Toolbox was used to compute the vectors y^k, z^k, u^k, and v^k. We randomly generated starting points x^0 = x^1 ∈ ℝ^n with entries in the interval [−5, 5]. The IEM and MIEM Algorithms were tested against the PPA Algorithm (7) using the stopping criterion ‖x^{k+1} − x^k‖ < 10^{−8}. We randomly generated 10 starting points and report the averaged results, where n = 10 and m = 20.
From Table 1, we may conclude that the parameter θ_k = θ̄_k provides better CPU times and iteration numbers than the other cases. Moreover, the iteration numbers of the IEM and MIEM Algorithms with a nonzero parameter θ_k are mostly better than those of the PPA Algorithm, while the CPU times of the PPA Algorithm are better than those of the IEM and MIEM Algorithms. However, we recall that, by Remark 1, the class of pseudomonotone bifunctions is strictly larger than the class of monotone bifunctions, and both the IEM and MIEM Algorithms can solve split equilibrium problems with pseudomonotone bifunctions, to which the PPA Algorithm may not be applicable.
Example 2.
Let H_1 = ℝ^n and H_2 = ℝ^m be two real Hilbert spaces with the Euclidean norm. We consider a classical form of bifunction given by the Cournot–Nash models (see [33]):

f̃(x, y) = ⟨A_1 x + a_1 I_n(y + x), y − x⟩, ∀x, y ∈ ℝ^n,

g̃(u, v) = ⟨A_2 u + a_2 I_m(v + u), v − u⟩, ∀u, v ∈ ℝ^m,
where
A 1 = 0 a 1 a 1 a 1 a 1 0 a 1 a 1 a 1 a 1 0 a 1 a 1 a 1 0 n × n , A 2 = 0 a 2 a 2 a 2 a 2 0 a 2 a 2 a 2 a 2 0 a 2 a 2 a 2 0 m × m ,
where a 1 and a 2 are positive real numbers. We know that the bifunctions f ˜ and g ˜ are pseudomonotone and they are not monotone on C and Q, respectively (see [34]).
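A small Python sketch of this construction may be helpful; it builds the matrix A_1 (zero diagonal, a_1 elsewhere) and evaluates the bifunction, assuming the reading f̃(x, y) = ⟨A_1 x + (a_1/n)(y + x), y − x⟩ of the displayed formula. Function names are ours, not from the paper.

```python
import numpy as np

def cournot_matrix(size, a):
    """Square matrix with 0 on the diagonal and the scalar a in every
    off-diagonal entry, as in the definition of A1 and A2."""
    return a * (np.ones((size, size)) - np.eye(size))

def f_tilde(A1, a1, x, y):
    """Cournot-Nash bifunction, assumed to read
    f~(x, y) = <A1 x + (a1/n)(y + x), y - x>."""
    n = x.shape[0]
    return float(np.dot(A1 @ x + (a1 / n) * (y + x), y - x))

n, a1 = 4, 1.5
A1 = cournot_matrix(n, a1)
x = np.ones(n)

# An equilibrium bifunction vanishes on the diagonal: f~(x, x) = 0.
diag_value = f_tilde(A1, a1, x, x)
```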
Here, the numerical experiment is considered under the following setting: the boxes C and Q, the bifunctions f and g, the linear operator A, and the control parameters are given as in Example 1. Notice that the solution set Ω is nonempty because 0 ∈ Ω. We observe that f and g are Lipschitz-type continuous with constants c_1 = c_2 = ‖A_1‖/2 and d_1 = d_2 = ‖A_2‖/2, respectively. Choose b_1 = max{c_1, d_1} and b_2 = max{c_2, d_2}. Then, both bifunctions f and g are Lipschitz-type continuous with constants b_1 and b_2. In addition, the positive real numbers a_1 and a_2 are randomly generated in the intervals (1, 2) and (3, 4), respectively. The following four cases of the parameter θ_k are considered:
Case 1. θ_k = 0.25 θ̄_k.
Case 2. θ_k = 0.5 θ̄_k.
Case 3. θ_k = 0.75 θ̄_k.
Case 4. θ_k = θ̄_k.
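The Lipschitz-type constants and the resulting step size can be computed directly. A Python sketch (spectral norm via NumPy; the step size rule λ_k = μ_k = 1/(4 max{b_1, b_2}) is taken from the control parameters of Example 1, and the function name is ours):

```python
import numpy as np

def lipschitz_params(A1, A2):
    """Lipschitz-type constants c1 = c2 = ||A1||/2 and d1 = d2 = ||A2||/2
    (spectral norm), combined as b1 = max{c1, d1}, b2 = max{c2, d2},
    with step size lambda_k = mu_k = 1/(4 max{b1, b2})."""
    c = np.linalg.norm(A1, 2) / 2.0  # c1 = c2
    d = np.linalg.norm(A2, 2) / 2.0  # d1 = d2
    b1 = max(c, d)
    b2 = max(c, d)
    lam = 1.0 / (4.0 * max(b1, b2))
    return b1, b2, lam
```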
The function quadprog in the Matlab Optimization Toolbox was used to compute the vectors y_k, z_k, u_k, and v_k. The starting points x_0 = x_1 ∈ ℝⁿ are randomly generated in the interval [−5, 5]. We compare the IEM and MIEM Algorithms with the PEA Algorithm (8) using the stopping criterion ‖x_{k+1} − x_k‖ < 10⁻⁸. We randomly generated 10 starting points, and the presented results are the averages, with n = 10 and m = 20.
Table 2 shows that the parameter θ_k from Case 4, that is, θ_k = θ̄_k, yields better CPU times and iteration numbers than the other cases. In addition, the CPU times and iteration numbers of the IEM and MIEM Algorithms are mostly better than those of the PEA Algorithm.

5. Conclusions

We present two algorithms for solving split equilibrium problems when the bifunctions are pseudomonotone and Lipschitz-type continuous, in the framework of real Hilbert spaces. We combine inertial and extragradient methods to construct a sequence that converges to a solution of the considered problem. Numerical experiments, in which the bifunctions are generated from the Nash–Cournot oligopolistic equilibrium models of electricity markets and the Cournot–Nash models, respectively, are performed to illustrate the convergence of the introduced algorithms and to compare them with some existing algorithms.

In the numerical experiment of Example 1, one may observe that the CPU time per iteration of IEM and MIEM is around 0.018 s, while it is around 0.011 s for PPA. This means that both the IEM and MIEM Algorithms take approximately 1.63 times the CPU time of the PPA Algorithm per iteration when solving the considered monotone-type problem. When we consider the pseudomonotone-type problem in Example 2, the CPU times per iteration of IEM, MIEM, and PEA are about the same, at 0.024 s. This suggests that a key advantage of the inertial technique is reducing the number of iterations rather than the CPU time per iteration.

On the other hand, we emphasize that the exact Lipschitz-type constants of the bifunctions are needed in order to control the input parameters of the two introduced algorithms. However, the Lipschitz-type constants are often unknown, and even in nonlinear problems they are difficult to approximate. For a future research direction, it would be very interesting to develop the algorithms without prior knowledge of the Lipschitz-type constants of the bifunctions.

Author Contributions

Conceptualization, S.S., N.P., and M.K.; methodology, S.S., N.P., and M.K.; writing—original draft preparation, M.K.; writing—review and editing, S.S., N.P., and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Thailand Science Research and Innovation under the project IRN62W0007, the revenue budget in 2021, and Chiang Mai University.

Acknowledgments

The authors are thankful to the Editor and two anonymous referees for comments and remarks which improved the quality and presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 127–149.
2. Buakird, A.; Nimana, N.; Petrot, N. A mean extragradient method for solving variational inequalities. Symmetry 2021, 13, 462.
3. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
4. Petrot, N.; Tangkhawiwetkul, J. The stability of dynamical system for the quasi mixed equilibrium problem in Hilbert spaces. Thai J. Math. 2020, 18, 1433–1446.
5. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100.
6. Tran, D.Q.; Dung, L.M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
7. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 2004, 9, 773–782.
8. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
9. Hieu, D.V. An inertial-like proximal algorithm for equilibrium problems. Math. Meth. Oper. Res. 2018, 88, 399–415.
10. Moudafi, A. Second-order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 2003, 4, 18.
11. Vinh, N.T.; Muu, L.D. Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 2019, 44, 639–663.
12. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
13. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
14. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
15. Suantai, S.; Petrot, N.; Suwannaprapa, M. Iterative methods for finding solutions of a class of split feasibility problems over fixed point sets in Hilbert spaces. Mathematics 2019, 7, 1012.
16. He, Z. The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012.
17. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
18. Kim, D.S.; Dinh, B.V. Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces. Numer. Algorithms 2018, 77, 741–761.
19. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Daniele, P., Giannessi, F., Maugeri, A., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003; pp. 289–298.
20. Karamardian, S.; Schaible, S.; Crouzeix, J.P. Characterizations of generalized monotone maps. J. Optim. Theory Appl. 1993, 76, 399–413.
21. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
22. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
23. Anh, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 62, 271–283.
24. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2012; Volume 2057.
25. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
26. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
27. Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
28. Mainge, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
29. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
30. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
31. Hieu, D.V. Strong convergence of a new hybrid algorithm for fixed point problems and equilibrium problems. Math. Model. Anal. 2019, 24, 1–19.
32. Contreras, J.; Klusch, M.; Krawczyk, J.B. Numerical solution to Nash-Cournot equilibria in coupled constraint electricity markets. IEEE Trans. Power Syst. 2004, 19, 195–206.
33. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin/Heidelberg, Germany, 2000.
34. Anh, P.N.; Le Thi, H.A. An Armijo-type method for pseudomonotone equilibrium problems and its applications. J. Glob. Optim. 2013, 57, 803–820.
Table 1. The numerical results of Example 1.

        Average CPU times (s)          Average iterations
Cases   IEM      MIEM     PPA          IEM      MIEM     PPA
1       3.1609   4.1125   1.7781       179.6    237.5    177.6
2       2.7578   3.4859   –            164.6    207.9    –
3       2.5641   2.9828   –            148.7    177.9    –
4       2.3875   2.4531   –            131.8    146.7    –
5       2.0031   1.8766   –            110.9    112.6    –
Table 2. The numerical results of Example 2.

        Average CPU times (s)          Average iterations
Cases   IEM      MIEM     PEA          IEM      MIEM     PEA
1       1.9609   2.5922   2.1891       79.7     111.4    88.2
2       1.7250   2.2891   –            71.5     94.6     –
3       1.5281   1.8844   –            64.4     78.4     –
4       1.4094   1.5313   –            59.2     63.6     –