Inertial Extragradient Methods for Solving Split Equilibrium Problems

Abstract: This paper presents two inertial extragradient algorithms for finding a solution of split pseudomonotone equilibrium problems in the setting of real Hilbert spaces. Weak and strong convergence theorems for the introduced algorithms are established under suitable constraint qualifications on the scalar sequences. Numerical experiments are also discussed to demonstrate the effectiveness of the proposed algorithms.


Introduction
The equilibrium problem is the problem of finding a point x* ∈ C such that

f(x*, y) ≥ 0, ∀y ∈ C, (1)

where C is a nonempty closed convex subset of a real Hilbert space H, and f : H × H → R is a bifunction. The solution set of the equilibrium problem (1) will be denoted by EP(f, C). It is well known that the equilibrium problem (1) covers many mathematical problems, such as optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems, saddle point problems, and fixed point problems (see [1-4] and the references therein). One of the most popular methods for solving the equilibrium problem (1), when f is a monotone bifunction, is the proximal point method (see [5]). However, the convergence of the proximal point method cannot be guaranteed under weaker assumptions, such as pseudomonotonicity of f. To overcome this drawback, Tran et al. [6] proposed the following so-called extragradient method for solving the equilibrium problem, when the bifunction f is pseudomonotone and satisfies the Lipschitz-type continuity condition with positive constants c_1 and c_2:

x_0 ∈ C,
y_k = arg min{ λ f(x_k, y) + (1/2)‖y − x_k‖² : y ∈ C },
x_{k+1} = arg min{ λ f(y_k, y) + (1/2)‖y − x_k‖² : y ∈ C }, (2)

where 0 < λ < min{1/(2c_1), 1/(2c_2)}. They proved that the sequence {x_k} generated by (2) converges weakly to a solution of the equilibrium problem (1).
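For intuition, when the bifunction has the special form f(x, y) = ⟨F(x), y − x⟩ for an operator F, each strongly convex program in (2) reduces to a metric projection, y_k = P_C(x_k − λF(x_k)). The following Python sketch (an illustration under that assumption, not code from the paper) runs the two-step scheme on a box-constrained instance with the monotone operator F(x) = Mx for a skew-symmetric M, whose unique equilibrium point is the origin.

```python
import numpy as np

def extragradient(F, project, x0, lam, iters=2000):
    """Two-step extragradient method of Tran et al. for f(x,y) = <F(x), y-x>.

    With this bifunction, each strongly convex argmin subproblem reduces to
    a projection: argmin_y lam*<F(x), y-x> + 0.5*||y-x||^2 = P_C(x - lam*F(x)).
    """
    x = x0
    for _ in range(iters):
        y = project(x - lam * F(x))   # prediction step, uses f(x_k, .)
        x = project(x - lam * F(y))   # correction step, uses f(y_k, .)
    return x

# Box C = [-1, 1]^2 and the monotone (skew-symmetric) operator F(x) = Mx.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: M @ x
project = lambda x: np.clip(x, -1.0, 1.0)   # projection onto the box

# The Lipschitz constant of F is ||M|| = 1, so lam = 0.3 satisfies the step rule.
sol = extragradient(F, project, np.array([0.5, 0.5]), lam=0.3)
print(np.linalg.norm(sol))  # close to 0, the unique equilibrium point
```

Note that the closed-form projection step is what makes this sketch cheap; for a general bifunction each step is a genuine strongly convex program.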
Meanwhile, inertial-type methods have received a lot of attention from many researchers. This approach originates from an implicit discretization (the heavy ball method) of a second-order dynamical system in time [7,8] and can be regarded as a way of speeding up convergence. In general, the main feature of inertial-type methods is that the next iterate is constructed from the two previous iterates. Inertial techniques have been proposed for solving equilibrium problems, for instance, in [9,10] and the references therein. In 2019, by combining the ideas of the inertial and extragradient methods, Vinh and Muu [11] proposed the following method for solving the equilibrium problem, when the bifunction f is pseudomonotone and satisfies the Lipschitz-type continuity condition with positive constants c_1 and c_2:

w_k = x_k + θ_k(x_k − x_{k−1}),
y_k = arg min{ λ f(w_k, y) + (1/2)‖y − w_k‖² : y ∈ C },
x_{k+1} = arg min{ λ f(y_k, y) + (1/2)‖y − w_k‖² : y ∈ C }, (3)

where 0 < λ < min{1/(2c_1), 1/(2c_2)} and θ_k is a suitable parameter. They proved that the sequence {x_k} generated by (3) converges weakly to a solution of the equilibrium problem (1). Observe that, in the case θ_k = 0 for all k ∈ N, the algorithm (3) is nothing but the algorithm (2). Moreover, in [11], the authors proposed a strongly convergent variant (4) under the same assumptions, where 0 < λ < min{1/(2c_1), 1/(2c_2)} and θ_k is a suitable parameter. They proved that the sequence {x_k} generated by (4) converges strongly to the minimum-norm element of the solution set of the equilibrium problem (1).
On the other hand, Censor and Elfving [12] proposed the following split feasibility problem:

find x* ∈ C such that Ax* ∈ Q, (5)

where C and Q are two nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator. Many important real-world problems can be formulated as split feasibility problems, which have been used in studying signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, and data compression (see [12-15] and the references therein).
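A standard solver for the split feasibility problem is Byrne's CQ algorithm, which iterates x_{k+1} = P_C(x_k − ηA*(I − P_Q)Ax_k) with η ∈ (0, 2/‖A‖²). The sketch below is an illustrative Python implementation for box sets; the particular matrix A and the boxes are arbitrary choices for demonstration, not data from the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Byrne's CQ algorithm for the split feasibility problem:
    find x in C with A x in Q.  The step size must lie in (0, 2/||A||^2)."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = proj_C(x0)
    for _ in range(iters):
        residual = A @ x - proj_Q(A @ x)       # (I - P_Q) A x
        x = proj_C(x - eta * A.T @ residual)   # gradient-projection step
    return x

# Illustrative data: C = [-1, 1]^2 and Q = [-10, 10]^2.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
proj_C = lambda x: np.clip(x, -1.0, 1.0)
proj_Q = lambda u: np.clip(u, -10.0, 10.0)

x = cq_algorithm(A, proj_C, proj_Q, np.array([3.0, 3.0]))
# x lies in C and A x lies in Q (up to numerical tolerance)
```

The residual ‖Ax − P_Q(Ax)‖ measures how far Ax is from Q, so it doubles as a natural stopping criterion.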
In 2012, He [16] (see also Moudafi [17]) introduced the split equilibrium problems, as a generalization of the split feasibility problem (5), as follows: Find x* ∈ C such that f(x*, y) ≥ 0, ∀y ∈ C, and such that u* := Ax* ∈ Q solves g(u*, v) ≥ 0, ∀v ∈ Q, (6) where C, Q are two nonempty closed convex subsets of the real Hilbert spaces H_1 and H_2, respectively, f : C × C → R and g : Q × Q → R are bifunctions, and A : H_1 → H_2 is a bounded linear operator. To solve the split equilibrium problems (6), He [16] proposed the following proximal point method, when the bifunctions f and g are monotone: where η ∈ (0, 1/‖A‖²), {r_k} ⊂ (0, +∞) with lim inf_{k→∞} r_k > 0, and A* is the adjoint operator of A. He proved that the sequence {x_k} generated by (7) converges weakly to a solution of the split equilibrium problems (6). Here, the algorithm (7) will be called the PPA Algorithm. After that, under the setting of f : H_1 × H_1 → R and g : H_2 × H_2 → R, Kim and Dinh [18] proposed the following extragradient method for finding a solution of the split equilibrium problems, when the bifunctions f and g are pseudomonotone and satisfy the Lipschitz-type continuity condition with positive constants c_1 and c_2: where η ∈ (0, 1/‖A‖²), and {λ_k}, {µ_k} ⊂ [ρ, ρ̄] with 0 < ρ ≤ ρ̄ < min{1/(2c_1), 1/(2c_2)}. They proved that the sequence {x_k} generated by (8) converges weakly to a solution of the split equilibrium problems. Here, the algorithm (8) will be called the PEA Algorithm. We point out that the algorithm (8) cannot be applied for solving the problem (6) under the setting of g : Q × Q → R, since we cannot guarantee that Az_k belongs to the considered closed convex set Q.
In this paper, we continue developing methods for solving the split equilibrium problems (6). That is, some new iterative algorithms will be introduced for finding solutions of the split equilibrium problems when the considered bifunctions are pseudomonotone. Some numerical examples and a comparison of the introduced methods with the aforementioned algorithms will be discussed. This paper is organized as follows: In Section 2, some definitions and properties will be reviewed for use in subsequent sections. Section 3 will present two inertial extragradient algorithms and prove their convergence theorems. In Section 4, we will discuss the performance of the two introduced algorithms by comparing them with the well-known algorithms.

Preliminaries
This section presents the definitions and some important basic properties that will be used in this paper. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and its induced norm ‖·‖. The symbols → and ⇀ denote strong convergence and weak convergence in H, respectively.
First, we recall some definitions and facts concerning equilibrium problems.
Definition 1 ([1,3,19]). Let C be a nonempty closed convex subset of H. A bifunction f : H × H → R is said to be:
(i) monotone on C if f(x, y) + f(y, x) ≤ 0, for all x, y ∈ C;
(ii) pseudomonotone on C if f(x, y) ≥ 0 implies f(y, x) ≤ 0, for all x, y ∈ C;
(iii) Lipschitz-type continuous on H with constants L_1 > 0 and L_2 > 0 if
f(x, y) + f(y, z) ≥ f(x, z) − L_1‖x − y‖² − L_2‖y − z‖², for all x, y, z ∈ H.

Remark 1.
A monotone bifunction is a pseudomonotone bifunction, but the converse is not true in general, for instance, see [20].
For a nonempty closed convex subset C of H and a bifunction f : H × H → R, we are concerned with the following assumptions in this paper: Assumption 1. f is weakly continuous on C × C in the sense that, if x ∈ C, y ∈ C, and {x_k} ⊂ C, {y_k} ⊂ C are two sequences that converge weakly to x and y, respectively, then f(x_k, y_k) converges to f(x, y).
Assumption 2. f (x, · ) is convex and subdifferentiable on C, for each fixed x ∈ C.
Assumption 3. f is pseudomonotone on C and f(x, x) = 0, for each x ∈ C.
Assumption 4. f is Lipschitz-type continuous on H with constants L_1 > 0 and L_2 > 0.

Remark 2.
We note that the solution set EP(f, C) is closed and convex when the bifunction f satisfies Assumptions 1-3 (see [6,21,22] for more details).
The following lemma is important in order to obtain the main results of this paper.

Lemma 1 ([23]). Let f : H × H → R satisfy Assumptions 2-4, let EP(f, C) be nonempty, and let 0 < λ < min{1/(2L_1), 1/(2L_2)}. For x ∈ C, define
y = arg min{ λ f(x, t) + (1/2)‖t − x‖² : t ∈ C } and z = arg min{ λ f(y, t) + (1/2)‖t − x‖² : t ∈ C }.
Then:
(i) λ[f(x, t) − f(x, y)] ≥ ⟨y − x, y − t⟩, for all t ∈ C;
(ii) ‖z − p‖² ≤ ‖x − p‖² − (1 − 2λL_1)‖y − x‖² − (1 − 2λL_2)‖z − y‖², for all p ∈ EP(f, C).

Next, we recall some basic facts from functional analysis which will be used in the sequel. For a Hilbert space H, we know that

‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖² (9)

and

‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖² (10)

for each x, y, z ∈ H and for each α, β, γ ∈ [0, 1] with α + β + γ = 1 (see [11]). For each x ∈ H, we denote by P_C(x) the metric projection of x onto a nonempty closed convex subset C of H, that is, P_C(x) = arg min{‖x − y‖ : y ∈ C}.

Lemma 2 (see [24,25]). Let C be a nonempty closed convex subset of H. Then:
(i) P_C(x) is well-defined and single-valued for each x ∈ H;
(ii) z = P_C(x) if and only if ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C;
(iii) P_C is a nonexpansive operator, that is, ‖P_C(x) − P_C(y)‖ ≤ ‖x − y‖, for all x, y ∈ H.

For a function g : H → R, the subdifferential of g at z ∈ H is defined by ∂g(z) = {w ∈ H : g(y) − g(z) ≥ ⟨w, y − z⟩, ∀y ∈ H}. The function g is said to be subdifferentiable at z if ∂g(z) ≠ ∅.
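As a quick numerical illustration of Lemma 2 (not part of the original paper), when C is a box the metric projection is the componentwise clip, and both the variational characterization (ii) and the nonexpansiveness (iii) can be checked directly on random samples:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
P = lambda x: np.clip(x, lo, hi)  # metric projection onto the box C = [lo, hi]^n

x = rng.normal(size=5) * 3.0
z = P(x)

# Lemma 2 (ii): <x - z, y - z> <= 0 for every y in C.
for _ in range(100):
    y = rng.uniform(lo, hi, size=5)
    assert np.dot(x - z, y - z) <= 1e-12

# Lemma 2 (iii): ||P(a) - P(b)|| <= ||a - b|| (nonexpansiveness).
for _ in range(100):
    a, b = rng.normal(size=5) * 3.0, rng.normal(size=5) * 3.0
    assert np.linalg.norm(P(a) - P(b)) <= np.linalg.norm(a - b) + 1e-12
print("Lemma 2 (ii) and (iii) verified on random samples")
```

The componentwise argument behind the check: whenever x_i exceeds the box, (x − z)_i and (y − z)_i have opposite signs, and inside the box the term vanishes.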
Lemma 3 (see [24]). For any z ∈ H, the subdifferential ∂g(z) of a continuous convex function g is a nonempty, weakly closed, and bounded convex set.
We end this section by recalling some auxiliary results for obtaining the convergence theorems.

Lemma 4 ([26]). Let H be a Hilbert space, let S be a nonempty subset of H, and let {x_k} be a sequence in H such that (i) lim_{k→∞} ‖x_k − p‖ exists for each p ∈ S, and (ii) every weak cluster point of {x_k} belongs to S. Then, there exists x* ∈ S such that the sequence {x_k} converges weakly to x*.

Lemma 5 ([27]). Let {a_k} and {b_k} be sequences of non-negative real numbers such that a_{k+1} ≤ a_k + b_k, for all k ∈ N, and Σ_{k=1}^∞ b_k < ∞. Then, lim_{k→∞} a_k exists.

Lemma 6 ([28,29]). Let {a_k} and {c_k} be sequences of non-negative real numbers such that a_{k+1} ≤ (1 − γ_k)a_k + γ_k c_k, for all k ∈ N, where {γ_k} ⊂ (0, 1) with Σ_{k=1}^∞ γ_k = ∞. Then the following results hold:
(i) if c_k ≤ M for some M ≥ 0 and all k ∈ N, then {a_k} is bounded;
(ii) if lim sup_{k→∞} c_k ≤ 0, then lim_{k→∞} a_k = 0.

Lemma 7. Let {a_k} be a sequence of real numbers such that there exists a subsequence {a_{k_j}} of {a_k} with a_{k_j} < a_{k_j+1} for all j ∈ N. Then, there exists a nondecreasing sequence {m_n} of positive integers such that lim_{n→∞} m_n = ∞ and the following properties hold: a_{m_n} ≤ a_{m_n+1} and a_n ≤ a_{m_n+1}, for all (sufficiently large) numbers n ∈ N. Indeed, m_n is the largest number k in the set {1, 2, . . . , n} such that a_k < a_{k+1}.

Main Results
Let H_1 and H_2 be two real Hilbert spaces and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. Suppose that f : H_1 × H_1 → R and g : H_2 × H_2 → R are bifunctions which satisfy Assumptions 1-4 with Lipschitz-type constants {c_1, c_2} and {d_1, d_2}, respectively. Let us recall the split equilibrium problems: find x* ∈ C such that f(x*, y) ≥ 0, ∀y ∈ C, and such that u* := Ax* ∈ Q solves g(u*, v) ≥ 0, ∀v ∈ Q, (11) where A : H_1 → H_2 is a bounded linear operator with adjoint operator A*. From now on, the solution set of problem (11) will be denoted by Ω. That is, Ω = {x* ∈ EP(f, C) : Ax* ∈ EP(g, Q)}.

Inertial Extragradient Method
Now, we introduce Algorithm 1 (IEM) for solving the split equilibrium problems (11).

Initialization. Choose parameters η ∈ (0, 1/‖A‖²), {λ_k}, {µ_k}, {θ_k}, and {ε_k}. Pick x_0, x_1 ∈ C and set k = 1.
Step 1. Given x_{k−1} and x_k, compute w_k = x_k + θ_k(x_k − x_{k−1}).
Step 2. Solve the strongly convex program y_k = arg min{ λ_k f(w_k, y) + (1/2)‖y − w_k‖² : y ∈ C }.
Step 3. Solve the strongly convex program z_k = arg min{ λ_k f(y_k, y) + (1/2)‖y − w_k‖² : y ∈ C }.
Step 4. Solve the strongly convex program u_k = arg min{ µ_k g(Az_k, u) + (1/2)‖u − Az_k‖² : u ∈ Q }.
Step 5. Solve the strongly convex program v_k = arg min{ µ_k g(u_k, u) + (1/2)‖u − Az_k‖² : u ∈ Q }.
Step 6. The next approximation is defined by x_{k+1} = P_C(z_k + ηA*(v_k − Az_k)).
Step 7. Put k := k + 1 and go to Step 1.
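To make the scheme concrete, the following Python sketch runs the inertial extragradient iteration on a small illustrative instance in which f(x, y) = ⟨M1 x, y − x⟩ and g(u, v) = ⟨M2 u, v − u⟩ on boxes, so every strongly convex program reduces to a projection. All matrices, step sizes, and the constant inertial parameter here are hypothetical choices for demonstration, not data from the paper, and the step pattern follows the PEA method with an added inertial extrapolation.

```python
import numpy as np

def iem(F, G, A, proj_C, proj_Q, x0, lam, mu, theta=0.2, iters=5000):
    """Inertial extragradient method (IEM) sketch for the split equilibrium
    problem with f(x,y) = <F(x), y-x> and g(u,v) = <G(u), v-u>, so each
    argmin step becomes a projection.  eta is chosen in (0, 1/||A||^2)."""
    eta = 0.5 / np.linalg.norm(A, 2) ** 2
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)           # inertial extrapolation
        y = proj_C(w - lam * F(w))             # first extragradient step on f
        z = proj_C(w - lam * F(y))             # second extragradient step on f
        Az = A @ z
        u = proj_Q(Az - mu * G(Az))            # first extragradient step on g
        v = proj_Q(Az - mu * G(u))             # second extragradient step on g
        x_prev, x = x, proj_C(z + eta * A.T @ (v - Az))  # split correction
    return x

# Illustrative data: strongly monotone operators, so 0 is the unique solution.
M1 = np.array([[2.0, 1.0], [1.0, 2.0]])
M2 = np.eye(2)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
proj_C = lambda x: np.clip(x, -5.0, 5.0)
proj_Q = lambda u: np.clip(u, -50.0, 50.0)

x = iem(lambda x: M1 @ x, lambda u: M2 @ u, A, proj_C, proj_Q,
        np.array([1.0, -1.0]), lam=0.2, mu=0.3)
# x is close to 0, the solution of this particular instance
```

The step sizes respect the Lipschitz rule of the paper: ‖M1‖ = 3 gives λ < 1/3, and ‖M2‖ = 1 gives µ < 1.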

Remark 3.
We point out that the term θ_k(x_k − x_{k−1}), which is included in the IEM Algorithm, is intended to speed up convergence and is called the inertial effect. We emphasize that the choice of the parameter θ_k may lead to superior numerical behavior of the IEM Algorithm. Moreover, we observe that if θ_k = 0 for each k ∈ N, then the IEM Algorithm reduces to the PEA Algorithm (8), which was presented in [18].
Theorem 1. Suppose that the solution set Ω is nonempty. Then, the sequence {x_k} generated by the IEM Algorithm converges weakly to an element of Ω.
Proof. Let p ∈ Ω. That is, p ∈ EP(f, C) and Ap ∈ EP(g, Q). Then, by Lemma 1 (ii), we have This implies that By the definition of w_k, we have Thus, in view of (12) and (14), we obtain On the other hand, by Lemma 1 (ii), we obtain By the definition of x_{k+1} and the nonexpansivity of P_C, we have Using this together with (18), we obtain Combining this with (17) implies that Thus, by the choice of η, we have Now, the relations (13) and (22) imply that So, it follows from the definition of w_k that Due to the properties of the sequences {θ_k} and {ε_k}, we observe that Then, by (24), (25), and Lemma 5, we obtain that lim_{k→∞} ‖x_k − p‖ exists. Consequently, the sequence {x_k} is bounded. In addition, in view of (15) and (22), we see that Thus, by the choice of the control sequence {λ_k} together with the existence of lim_{k→∞} ‖x_k − p‖, we have lim_{k→∞} ‖w_k − y_k‖ = 0 and lim_{k→∞} ‖y_k − z_k‖ = 0.
These imply that lim Moreover, by using lim Using this together with (27), we obtain Thus, it follows from (28) that On the other hand, by using (21), we see that Thus, by the existence of lim_{k→∞} ‖x_k − p‖ and (32), we have Furthermore, in view of (16), we obtain Thus, applying (34) to the above inequality, we have and lim Now, we will complete the proof of this theorem by using Lemma 4. Notice that it remains to show ω_w(x_k) ⊂ Ω. Let x* ∈ ω_w(x_k) and let {x_{k_n}} be a subsequence of {x_k} such that x_{k_n} ⇀ x*, as n → ∞. By using (30)-(32), we also have w_{k_n} ⇀ x*, y_{k_n} ⇀ x*, and z_{k_n} ⇀ x*, as n → ∞. The latter fact also implies that Az_{k_n} ⇀ Ax*, as n → ∞. Using this together with (35), we obtain u_{k_n} ⇀ Ax*, as n → ∞. Since C and Q are closed and convex sets, C and Q are weakly closed; therefore, x* ∈ C and Ax* ∈ Q. By Lemma 1 (i), we have

λ_{k_n}[f(w_{k_n}, y) − f(w_{k_n}, y_{k_n})] ≥ ⟨y_{k_n} − w_{k_n}, y_{k_n} − y⟩, ∀y ∈ C,

and

µ_{k_n}[g(Az_{k_n}, u) − g(Az_{k_n}, u_{k_n})] ≥ ⟨u_{k_n} − Az_{k_n}, u_{k_n} − u⟩, ∀u ∈ Q.

These imply that

f(w_{k_n}, y) − f(w_{k_n}, y_{k_n}) ≥ −(1/λ_{k_n})‖y_{k_n} − w_{k_n}‖ ‖y_{k_n} − y‖, ∀y ∈ C,

and

g(Az_{k_n}, u) − g(Az_{k_n}, u_{k_n}) ≥ −(1/µ_{k_n})‖u_{k_n} − Az_{k_n}‖ ‖u_{k_n} − u‖, ∀u ∈ Q.

Then, letting n → ∞ and using Assumption 1, we obtain that x* ∈ Ω. This shows that ω_w(x_k) ⊂ Ω. Hence, by Lemma 4, we can conclude that the sequence {x_k} converges weakly to an element of Ω. This completes the proof.

Mann-Type Inertial Extragradient Method
In order to obtain a strong convergence result, we propose Algorithm 2 by using the Mann-type techniques (see [11,31]).

Initialization. Choose parameters {γ_k} ⊂ (0, 1), {β_k} ⊂ (0, 1), η ∈ (0, 1/‖A‖²), {λ_k}, {µ_k}, {θ_k}, and {ε_k} with ε_k = o(γ_k), which means that the sequence {ε_k} is an infinitesimal of higher order than {γ_k}. Pick x_0, x_1 ∈ C and set k = 1.
Step 1. Given x_{k−1} and x_k, compute w_k = x_k + θ_k(x_k − x_{k−1}).
Step 2. Solve the strongly convex program y_k = arg min{ λ_k f(w_k, y) + (1/2)‖y − w_k‖² : y ∈ C }.
Step 3. Solve the strongly convex program z_k = arg min{ λ_k f(y_k, y) + (1/2)‖y − w_k‖² : y ∈ C }.
Step 4. Solve the strongly convex program u_k = arg min{ µ_k g(Az_k, u) + (1/2)‖u − Az_k‖² : u ∈ Q }.
Step 5. Solve the strongly convex program v_k = arg min{ µ_k g(u_k, u) + (1/2)‖u − Az_k‖² : u ∈ Q }.
Step 6. Compute t_k = P_C(z_k + ηA*(v_k − Az_k)).
Step 7. The next approximation is defined by x_{k+1} = (1 − γ_k)[(1 − β_k)w_k + β_k t_k].
Step 8. Put k := k + 1 and go to Step 1.

Theorem 2.
Suppose that the solution set Ω is nonempty. Then, the sequence {x_k} generated by the MIEM Algorithm converges strongly to the minimum-norm element of Ω.
Proof. Let p ∈ Ω. That is, p ∈ EP(f, C) and Ap ∈ EP(g, Q). Following the proof of Theorem 1, we can check that By the definition of x_{k+1}, we obtain It follows from (37) and (38) that Thus, by the definition of w_k, we have where Due to the choice of the sequence {θ_k}, we obtain that Thus, by the property ε_k = o(γ_k) and lim_{k→∞} γ_k = 0, we have This implies that the sequence {σ_k} is a null sequence. Put M = max{‖p‖, sup_{k∈N} σ_k}.
Then, by (42) and Lemma 6 (i), the sequence {‖x_k − p‖} is bounded. Consequently, {x_k} is a bounded sequence. In addition, by the definition of x_{k+1} and (10), we have Thus, by using (38), (40), and Lemma 1 (ii), we obtain that This implies that Next, we will show that {x_k} converges strongly to p̄ := P_Ω(0). We consider the following two possible cases.

Case 1. Suppose that there exists k_0 ∈ N such that ‖x_{k+1} − p̄‖ ≤ ‖x_k − p̄‖, for all k ≥ k_0. This means that {‖x_k − p̄‖}_{k≥k_0} is a nonincreasing sequence. Consequently, by using this together with the boundedness of {‖x_k − p̄‖}, we have that the limit of ‖x_k − p̄‖ exists. Since lim_{k→∞} θ_k‖x_k − x_{k−1}‖ = 0 and by the properties of the control sequences, we have and lim_{k→∞} ‖y_k − z_k‖ = 0.

These imply that lim Using this together with (47), we obtain It follows from (48) that lim On the other hand, in view of (41), we see that Thus, by using (49), we have Furthermore, by Lemma 1 (ii), we obtain that Then, applying (53) to the above inequality, we have and lim Now, let x* ∈ ω_w(x_k) and let {x_{k_n}} be a subsequence of {x_k} such that x_{k_n} ⇀ x*, as n → ∞. Following the lines of the proof of Theorem 1, we can show that x* ∈ Ω. This means that ω_w(x_k) ⊂ Ω. Put s_k = (1 − β_k)w_k + β_k t_k. The relations (37) and (38) imply that By the definition of x_{k+1}, we see that It follows from (9) that Thus, by using (56), we have Consider, where This, together with (58), implies that Thus, by the properties of p̄ := P_Ω(0) and Hence, by (43), (46), (60), (61), and Lemma 6 (ii), we have This completes the proof for the first case.

Case 2. Suppose that there exists a subsequence {x_{k_j}} of {x_k} such that ‖x_{k_j} − p̄‖ < ‖x_{k_j+1} − p̄‖, for all j ∈ N.
According to Lemma 7, there exists a nondecreasing sequence {m_n} ⊂ N such that lim_{n→∞} m_n = ∞, and

‖x_{m_n} − p̄‖ ≤ ‖x_{m_n+1} − p̄‖ and ‖x_n − p̄‖ ≤ ‖x_{m_n+1} − p̄‖, ∀n ∈ N.

It follows from (45) that Following the lines of the proof of Case 1, we can show that and where M = sup_{n∈N} {‖x_{m_n} − p̄‖, ‖x_{m_n} − x_{m_n−1}‖}. This, together with (63), implies that Using this together with (63) again, we obtain Thus, by using (43), (65), and (67), we have lim sup_{n→∞} ‖x_n − p̄‖² ≤ 0.
Hence, we can conclude that the sequence {x_n} converges strongly to p̄ := P_Ω(0). This completes the proof.

Numerical Experiments
This section presents some examples and numerical results to support Theorems 1 and 2. We compare the introduced algorithms, IEM and MIEM, with the PPA Algorithm (7) in Example 1 and with the PEA Algorithm (8) in Example 2. The numerical experiments were written in Matlab R2015b and performed on a laptop with an AMD Dual Core R3-2200U CPU @ 2.50 GHz and 4.00 GB RAM. In both Examples 1 and 2, for each considered matrix, ‖·‖ means the spectral norm.

Example 1. Let H_1 = R^n and H_2 = R^m be two real Hilbert spaces with the Euclidean norm. We consider the bifunctions f̃ and g̃ which are generated from the Nash-Cournot oligopolistic equilibrium models of electricity markets (see [22,32]), where P_1, Q_1 ∈ R^{n×n} and P_2, Q_2 ∈ R^{m×m} are matrices such that Q_1, Q_2 are symmetric positive semidefinite and Q_1 − P_1, Q_2 − P_2 are negative semidefinite. Observe that

f̃(x, y) + f̃(y, x) = (x − y)^T (Q_1 − P_1)(x − y), ∀x, y ∈ R^n.

Moreover, from the property of Q_1 − P_1, we have that f̃ is monotone. Similarly, g̃ is also monotone. Next, we consider the two bifunctions f and g, which are given by the restrictions of f̃ and g̃ to the constraint boxes C = [−20, 20]^n and Q = [−20, 20]^m, respectively. We note that f and g are Lipschitz-type continuous with constants c_1 = c_2 = (1/2)‖P_1 − Q_1‖ and d_1 = d_2 = (1/2)‖P_2 − Q_2‖, respectively (see [6]). Choose b_1 = max{c_1, d_1} and b_2 = max{c_2, d_2}. Then, both bifunctions f and g are Lipschitz-type continuous with constants b_1 and b_2.
For this numerical experiment, the matrices P_1, Q_1, P_2, and Q_2 are randomly generated with entries in the interval [−5, 5] such that they satisfy the required properties above, and the linear operator A : R^n → R^m is an m × n matrix in which each entry is randomly generated in the interval [−2, 2]. Note that the solution set Ω is nonempty because 0 ∈ Ω. We work with the following control parameters: α = 0.5, η = 1/(2‖A‖²), λ_k = µ_k = 1/(4 max{b_1, b_2}), ε_k = 1/(k+1)², γ_k = 1/(k+1), and β_k = 0.5(1 − γ_k). The following five cases of the parameter θ_k are considered: The function quadprog in the Matlab Optimization Toolbox was used to solve for the vectors y_k, z_k, u_k, and v_k. We randomly generated starting points x_0 = x_1 ∈ R^n in the interval [−5, 5]. The IEM and MIEM Algorithms were tested along with the PPA Algorithm (7) using the stopping criterion ‖x_{k+1} − x_k‖ < 10^{−8}. We randomly generated 10 starting points and the presented results are the averages, where n = 10 and m = 20.
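The random matrix generation described above can be sketched as follows (hypothetical Python mirroring the Matlab setup, and one simple construction that does not enforce the entrywise range used in the paper): writing P = Q + S with S symmetric positive semidefinite guarantees that Q is symmetric positive semidefinite and Q − P is negative semidefinite, which is exactly what monotonicity of f̃ requires.

```python
import numpy as np

def random_pair(n, rng):
    """Generate (P, Q) with Q symmetric PSD and Q - P negative semidefinite."""
    B = rng.uniform(-5.0, 5.0, size=(n, n))
    Q = B.T @ B / n              # Gram matrix: symmetric positive semidefinite
    C = rng.uniform(-5.0, 5.0, size=(n, n))
    S = C.T @ C / n              # another symmetric PSD matrix
    P = Q + S                    # then Q - P = -S is negative semidefinite
    return P, Q

rng = np.random.default_rng(42)
P1, Q1 = random_pair(10, rng)

# Check the required properties via eigenvalues of the symmetric matrices.
assert np.min(np.linalg.eigvalsh(Q1)) >= -1e-10          # Q1 is PSD
assert np.max(np.linalg.eigvalsh(Q1 - P1)) <= 1e-10      # Q1 - P1 is NSD
```

With such a pair, f̃(x, y) + f̃(y, x) = (x − y)^T (Q1 − P1)(x − y) ≤ 0 for all x, y, so the bifunction is monotone as claimed.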
From Table 1, we may suggest that the parameter θ_k = θ̄_k provides better CPU times and iteration numbers than the other cases. Moreover, the iteration numbers of the IEM and MIEM Algorithms with the parameter θ_k = 0 are mostly better than those of the PPA Algorithm. Meanwhile, the CPU times of the PPA Algorithm are better than those of the IEM and MIEM Algorithms. However, we recall that, by Remark 1, the class of pseudomonotone bifunctions is strictly larger than the class of monotone bifunctions, and both the IEM and MIEM Algorithms can solve the split equilibrium problems for pseudomonotone bifunctions, while the PPA Algorithm may not be applicable.

Example 2. We consider a classical form of the bifunctions given by the Cournot-Nash models (see [33]), built from square matrices A_1 ∈ R^{n×n} and A_2 ∈ R^{m×m} whose diagonal entries are 0 and whose off-diagonal entries are the positive constants a_1 and a_2, respectively; for instance,

A_2 =
[ 0    a_2  a_2  · · ·  a_2 ]
[ a_2  0    a_2  · · ·  a_2 ]
[ a_2  a_2  0    · · ·  a_2 ]
[ · · ·                 · · · ]
[ a_2  a_2  · · ·  a_2  0   ]

where a_1 and a_2 are positive real numbers. We know that the bifunctions f̃ and g̃ are pseudomonotone and that they are not monotone on C and Q, respectively (see [34]).
Here, the numerical experiment is considered under the following setting: the boxes C and Q, the bifunctions f and g, the linear operator A, and the control parameters are given as in Example 1. Notice that the solution set Ω is nonempty because 0 ∈ Ω. We observe that f and g are Lipschitz-type continuous with constants c_1 = c_2 = (1/2)‖A_1‖ and d_1 = d_2 = (1/2)‖A_2‖, respectively. Choose b_1 = max{c_1, d_1} and b_2 = max{c_2, d_2}. Then, both bifunctions f and g are Lipschitz-type continuous with constants b_1 and b_2. In addition, the positive real numbers a_1 and a_2 are randomly generated in the intervals (1, 2) and (3, 4), respectively. The following four cases of the parameter θ_k are considered: Case 1. θ_k = 0.25θ̄_k. Case 2. θ_k = 0.5θ̄_k. Case 3. θ_k = 0.75θ̄_k. Case 4. θ_k = θ̄_k. The function quadprog in the Matlab Optimization Toolbox was used to solve for the vectors y_k, z_k, u_k, and v_k. The starting points x_0 = x_1 ∈ R^n were randomly generated in the interval [−5, 5]. We compare the IEM and MIEM Algorithms with the PEA Algorithm (8) using the stopping criterion ‖x_{k+1} − x_k‖ < 10^{−8}. We randomly generated 10 starting points and the presented results are the averages, where n = 10 and m = 20. Table 2 shows that the parameter θ_k from Case 4, that is, θ_k = θ̄_k, yields better CPU times and iteration numbers than the other cases. In addition, we see that the CPU times and iteration numbers of the IEM and MIEM Algorithms are mostly better than those of the PEA Algorithm.
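The matrix with zero diagonal and constant off-diagonal entries displayed in this example can be built in one line. As an illustrative sanity check (not code from the paper), its spectrum shows that the induced quadratic form changes sign, which is consistent with the claim that the resulting bifunctions fail to be monotone while remaining pseudomonotone on the constraint boxes.

```python
import numpy as np

def cournot_matrix(n, a):
    """Square matrix with zero diagonal and constant off-diagonal entries a,
    as displayed in Example 2."""
    return a * (np.ones((n, n)) - np.eye(n))

A2 = cournot_matrix(5, a=3.5)

# The matrix is symmetric but indefinite: its eigenvalues are a*(n-1) once
# (eigenvector of all ones) and -a with multiplicity n-1, so the induced
# quadratic form takes both signs.
eigs = np.sort(np.linalg.eigvalsh(A2))
print(eigs)  # [-3.5 -3.5 -3.5 -3.5 14. ]
```

Here the largest eigenvalue is a(n − 1) = 3.5 · 4 = 14, while every other eigenvalue equals −a = −3.5.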

Conclusions
We presented two algorithms for solving the split equilibrium problems, when the bifunctions are pseudomonotone and Lipschitz-type continuous, in the framework of real Hilbert spaces. We combined the inertial and extragradient methods to introduce sequences which converge to a solution of the considered problem. Some numerical experiments, in which the bifunctions are generated from the Nash-Cournot oligopolistic equilibrium models of electricity markets and from the Cournot-Nash models, respectively, were performed to illustrate the convergence of the introduced algorithms and to compare them with some existing algorithms. In the numerical experiment of Example 1, one may observe that the time per iteration of IEM and MIEM is around 0.018, while it is around 0.011 for PPA. This means that both the IEM and MIEM Algorithms take approximately 1.63 times the CPU time of the PPA Algorithm in each iteration for solving the considered monotone-type problem. When we consider the pseudomonotone-type problem in Example 2, the times per iteration of IEM, MIEM, and PEA are about the same, at 0.024. This information may lead to the conclusion that a key advantage of the inertial technique is that it reduces the number of iterations rather than the CPU time per iteration. On the other hand, we emphasize that the exact Lipschitz-type constants of the bifunctions are needed in order to set the input parameters of the two introduced algorithms. However, the Lipschitz-type constants of the bifunctions are often unknown, and even in nonlinear problems they are difficult to approximate. For a future research direction, it would be very interesting to develop the algorithms without prior knowledge of the Lipschitz-type constants of the bifunctions.