1. Introduction
The equilibrium problem started to gain interest after the publication of a paper by Blum and Oettli [1], which discussed the problem of finding a point  such that
      
      where C is a nonempty closed convex subset of a real Hilbert space H, and  is a bifunction. This well-known equilibrium model (1) has been used for studying a variety of mathematical models in physics, chemistry, engineering, and economics. In addition, the equilibrium problem (1) can be applied to many mathematical problems, such as optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems, saddle point problems, and fixed point problems; see [1,2,3,4] and the references therein.
In order to solve the equilibrium problem (1) when f is a monotone bifunction, approximate solutions are frequently based on the proximal point method. That is, given , at each step, the next iterate  can be found by solving the following regularized equilibrium problem: find  such that
      
      where . Note that the existence of each  is guaranteed provided that the subproblem (2) is strongly monotone (see [5,6]). However, if f is a pseudomonotone bifunction (a property weaker than monotonicity), the strong monotonicity of problem (2) cannot be guaranteed. Therefore, the sequence  may not be well-defined. To overcome this drawback, Tran et al. [7] proposed the following extragradient method for solving the equilibrium problem when the bifunction f is pseudomonotone and Lipschitz-type continuous with positive constants  and :
      where . Tran et al. guaranteed that the sequence  generated by (3) converges weakly to a solution of the equilibrium problem (1).
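To make the two-step structure of (3) concrete: in the special case f(x, y) = ⟨F(x), y − x⟩ for an operator F, each regularized argmin in the extragradient scheme reduces to a metric projection. The following one-dimensional sketch uses an illustrative operator, set, and step size (all chosen here for demonstration, not taken from [7]):

```python
# Extragradient sketch for f(x, y) = <F(x), y - x> over C = [-1, 5].
# In this special case each regularized argmin reduces to a projection.
def proj_C(x, lo=-1.0, hi=5.0):
    return min(max(x, lo), hi)

def F(x):          # monotone (hence pseudomonotone) operator; root at x = 2
    return x - 2.0

lam = 0.5          # step size; in theory constrained by the Lipschitz-type constants
x = 0.0            # starting point x_0 in C
for _ in range(200):
    y = proj_C(x - lam * F(x))      # first argmin: y_k
    x = proj_C(x - lam * F(y))      # second argmin: x_{k+1}
print(x)  # approaches the solution x* = 2
```

With these choices the iteration contracts toward the unique solution of the associated variational inequality.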
On the other hand, for a nonempty closed convex subset C of H and a mapping , the fixed point problem is the problem of finding a point  such that . This fixed point problem has many important applications, such as optimization problems, variational inequality problems, minimax problems, and saddle point problems; see [8,9,10,11] and the references therein. The set of fixed points of a mapping T will be denoted by .
An iteration method for finding fixed points of the mapping T was proposed by Mann [12] as follows:
      where  and . If T is a nonexpansive mapping and has a fixed point, then the sequence  generated by (4) converges weakly to a fixed point of T. In addition, in 1994, Park and Jeong [13] showed that if T is a quasi-nonexpansive mapping with  demiclosed at 0, then the sequence generated by the Mann iteration method converges weakly to a fixed point of T.
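As an illustration, the Mann scheme x_{n+1} = α_n x_n + (1 − α_n) T x_n can be sketched for a simple nonexpansive map on the real line (the map T = cos and the constant choice α_n = 1/2 are illustrative assumptions, not part of Mann's original setting):

```python
# Mann iteration sketch: x_{n+1} = alpha * x_n + (1 - alpha) * T(x_n).
import math

T = math.cos        # nonexpansive on R (|cos'| <= 1), with a unique fixed point
alpha = 0.5         # constant relaxation parameter, for simplicity
x = 1.0
for _ in range(100):
    x = alpha * x + (1.0 - alpha) * T(x)
print(x)  # close to the fixed point of cos, about 0.739085
```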
Furthermore, in order to obtain a strong convergence result for the Mann iteration method, Nakajo and Takahashi [14] proposed the following hybrid method:
     where  such that  for some , and  is the metric projection onto . Nakajo and Takahashi proved that if T is a nonexpansive mapping, then the sequence  generated by (5) converges strongly to .
In addition, in 1974, Ishikawa [15] proposed the following method for finding fixed points of a Lipschitz pseudocontractive mapping T:
      where ,  and . If C is a convex compact subset of H, then the sequence  generated by (6) converges strongly to a fixed point of T. It has been shown that the Mann iteration method is generally not applicable for finding fixed points of a Lipschitz pseudocontractive mapping in a Hilbert space; see, for example, [16].
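The two-step pattern of (6), an auxiliary step y_n = β_n x_n + (1 − β_n) T x_n followed by x_{n+1} = α_n x_n + (1 − α_n) T y_n, can be sketched as follows (with an illustrative nonexpansive T and constant parameters, rather than the Lipschitz pseudocontractive, compact setting of Ishikawa's theorem):

```python
# Ishikawa iteration sketch with constant parameters alpha, beta.
import math

T = math.cos        # illustrative nonexpansive map with a unique fixed point
alpha, beta = 0.5, 0.5
x = 1.0
for _ in range(100):
    y = beta * x + (1.0 - beta) * T(x)    # inner (auxiliary) step
    x = alpha * x + (1.0 - alpha) * T(y)  # outer step uses T(y), not T(x)
print(x)  # again approaches the fixed point of cos
```

Note how the outer step evaluates T at the auxiliary point y rather than at x, which is what distinguishes Ishikawa's scheme from Mann's.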
In 2008, by using Ishikawa’s iteration concept, Takahashi et al. [17] proposed the following hybrid method, called the shrinking projection method, which is different from Nakajo and Takahashi’s method [14]:
     where  with , and  for some . Takahashi et al. proved that if T is a nonexpansive mapping, then the sequence  generated by (7) converges strongly to .
In recent years, many algorithms have been proposed for finding a common element of the set of solutions of the equilibrium problem and the set of solutions of the fixed point problem; see, for instance, [8,11,18,19,20,21,22,23] and the references therein. In 2016, by using both hybrid and extragradient methods together with Ishikawa’s iteration concept, Dinh and Kim [24] proposed the following iteration method for finding a common element of the set of fixed points of a symmetric generalized hybrid mapping T and the set of solutions of the equilibrium problem, when the bifunction f is pseudomonotone and Lipschitz-type continuous with positive constants  and :
      where  with ,  such that , and  for some . Dinh and Kim proved that the sequence  generated by (8) converges strongly to , where  is the solution set of the equilibrium problem.
Now, let us consider the problem of finding a common solution of a finite family of equilibrium problems (CSEP). Let C be a nonempty closed convex subset of H and let , , be bifunctions satisfying  for each . The problem CSEP is to find  such that
      
The solution set of the problem CSEP will be denoted by . It is worth pointing out that the problem CSEP is a generalization of many mathematical models, such as common solutions of variational inequality problems, convex feasibility problems, and common fixed point problems; see [1,25,26,27] for more details. In 2016, Hieu et al. [28] considered the following problem:
      where C is a nonempty closed convex subset of H, , , are mappings, and , , are bifunctions satisfying  for each . From now on, the solution set of problem (10) will be denoted by S. That is:
By using both hybrid and extragradient methods together with Mann’s iteration concept and parallel splitting-up techniques (see [25,29]), they proposed the following algorithm for solving problem (10), when the mappings are nonexpansive and the bifunctions are pseudomonotone and Lipschitz-type continuous with positive constants  and :
      where , and  such that . Hieu et al. proved that the sequence  generated by (11) converges strongly to . Algorithm (11) is called the PHMEM method.
The current study continues the development of methods for solving problem (10). Roughly speaking, some new iterative algorithms will be introduced for finding a solution of problem (10), and numerical examples will be considered in which the introduced methods are discussed and compared with the PHMEM algorithm.
This paper is organized as follows: In Section 2, some relevant definitions and properties are reviewed for use in subsequent sections. Section 3 presents two shrinking extragradient algorithms and proves their convergence. Finally, in Section 4, the performance of the introduced algorithms is compared with that of the PHMEM algorithm and discussed.
  2. Preliminaries
This section presents some definitions and properties that will be used subsequently. First, let H be a real Hilbert space with inner product  and induced norm . The symbols → and ⇀ will be used to denote strong convergence and weak convergence in H, respectively.
We now recall definitions of nonlinear mappings related to this work.
Definition 1 ([30,31]). Let C be a nonempty closed convex subset of H. A mapping  is said to be:
- (i) pseudocontractive if  where I denotes the identity operator on H.
- (ii) Lipschitzian if there exists  such that  In particular, if , then T is said to be nonexpansive.
- (iii) quasi-nonexpansive if  is nonempty and
- (iv) -symmetric generalized hybrid if there exists  such that
Definition 2 (see [32]). Let C be a nonempty closed convex subset of H and let  be a mapping. The mapping T is said to be demiclosed at  if for any sequence  with  and , it follows that .
Note that the class of pseudocontractive mappings includes the class of nonexpansive mappings. In addition, a nonexpansive mapping with at least one fixed point is a quasi-nonexpansive mapping, but the converse is not true; see, for example, [33]. Moreover, if a -symmetric generalized hybrid mapping satisfies ,  and , then T is quasi-nonexpansive and  is demiclosed at 0 (see [34,35]). In addition,  is closed and convex when T is a quasi-nonexpansive mapping (see [36]).
Next, we recall definitions and facts concerning equilibrium problems.
Definition 3 ([1,4,37]). Let C be a nonempty closed convex subset of H and let  be a bifunction. The bifunction f is said to be:
- (i) strongly monotone on C if there exists a constant  such that
- (ii)
- (iii)
- (iv) Lipschitz-type continuous on C with constants  and  if
 Remark 1. From Definition 3, we observe that (i) ⇒ (ii) ⇒ (iii). However, if f is pseudomonotone, f might not be monotone on C; see, for example, [38]. Let C be a nonempty closed convex subset of H and let  be a bifunction satisfying  for each . In this paper, we are concerned with the following assumptions:
- (A1)
- f is weakly continuous on  in the sense that, if  and ,  are two sequences in C converge weakly to x and y respectively, then  converges to ; 
- (A2)
-  is convex and subdifferentiable on C for each fixed ; 
- (A3)
- f is pseudomonotone on C; 
- (A4)
- f is Lipschitz-type continuous on C with constants  and . 
It is well-known that the solution set  is closed and convex when the bifunction f satisfies the assumptions ; see, for instance, [7,39,40].
The following facts are very important in order to obtain our main results.
Lemma 1 ([18]). Let  satisfy . Suppose that  is a nonempty set and . Let . If  and  are defined by then:
- (i) , for all ;
- (ii) , for all .
 This section will be closed by recalling the projection mapping and calculus concepts in Hilbert space.
Let C be a nonempty closed convex subset of H. For each , we denote the metric projection of x onto C by , that is
            
The following facts will also be used in this paper.
Lemma 2 (see, for instance, [41,42]). Let C be a nonempty closed convex subset of H. Then:
- (i)  is well-defined and single-valued for each ;
- (ii)  if and only if , ;
- (iii) , .
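Property (ii) of Lemma 2 characterizes the projection by the inequality ⟨x − P_C x, y − P_C x⟩ ≤ 0 for all y ∈ C. For a box in R², the projection is coordinate-wise clipping, and the characterization can be checked numerically (the box and the test point below are illustrative choices):

```python
# Metric projection onto the box C = [0,1]^2 (coordinate-wise clipping),
# plus a numerical check of the characterization in Lemma 2 (ii):
#   z = P_C(x)  iff  <x - z, y - z> <= 0 for all y in C.
import random

def proj_box(x, lo=0.0, hi=1.0):
    return [min(max(t, lo), hi) for t in x]

x = [2.0, -0.5]
z = proj_box(x)                      # z = [1.0, 0.0]
for _ in range(1000):
    y = [random.uniform(0.0, 1.0) for _ in range(2)]
    inner = sum((xi - zi) * (yi - zi) for xi, zi, yi in zip(x, z, y))
    assert inner <= 1e-12            # variational characterization holds
print(z)
```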
For a nonempty closed convex subset C of H and a convex function , the subdifferential of g at  is defined by
         
The function g is said to be subdifferentiable at z if .
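For example, for g(x) = |x| at z = 0 the subdifferential is the interval [−1, 1]: every w with |w| ≤ 1 satisfies the defining inequality g(y) ≥ g(z) + w(y − z) for all y. A quick numerical check:

```python
# Subgradient inequality check for g(x) = |x| at z = 0:
# every w in [-1, 1] satisfies g(y) >= g(0) + w * (y - 0) for all y.
def g(x):
    return abs(x)

for w in (-1.0, -0.3, 0.0, 0.7, 1.0):        # candidate subgradients at 0
    for y in (-2.0, -0.1, 0.0, 0.1, 2.0):
        assert g(y) >= g(0.0) + w * y - 1e-12
print("subgradient inequality verified")
```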
  3. Main Result
In this section, we propose two shrinking extragradient algorithms for finding a solution of problem (10), when each mapping , , is quasi-nonexpansive with  demiclosed at 0, and each bifunction , , satisfies all the assumptions . We start by observing that if each bifunction , , is Lipschitz-type continuous on C with constants  and , then
      
      where  and . This means that the bifunctions , , are Lipschitz-type continuous on C with constants  and . We will use this notation throughout the paper. Moreover, for each  and , we denote by  the modulo function at k with respect to N, that is
      
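For instance, with N = 3 the cyclic index sweeps the families periodically (here the indices are taken in {0, …, N − 1}; the paper's convention may instead start at 1):

```python
# Cyclic (modulo) index used to pick one bifunction/mapping per iteration.
N = 3
indices = [k % N for k in range(7)]
print(indices)  # [0, 1, 2, 0, 1, 2, 0]
```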
Now, we propose the following cyclic algorithm.
CSEM Algorithm (Cyclic Shrinking Extragradient Method)
Initialization. Pick , choose parameters  with ,  such that , and  with .
Step 1. Solve the strongly convex program
      
 Step 2. Solve the strongly convex program
      
 Step 4. Construct a closed convex subset of C:
 Step 5. The next approximation 
 is defined as the projection of 
 onto 
, i.e.,
      
 Step 6. Put  and go to Step 1.
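The control flow of the CSEM Algorithm can be sketched in one dimension, where every set involved is an interval and the shrinking step is cheap. Everything concrete below, the bifunction f(x, y) = (x − 2)(y − x), the mapping T, the interval C, the parameters, and the exact form of the Ishikawa-type step, is an illustrative assumption rather than the paper's verbatim update rules:

```python
# Hypothetical 1-D instantiation of the CSEM template (update formulas
# are assumed for illustration, not quoted from the paper).
def proj(x, lo, hi):
    return min(max(x, lo), hi)

F = lambda x: x - 2.0            # f(x, y) = (x - 2) * (y - x) = <F(x), y - x>
T = lambda x: (x + 2.0) / 2.0    # quasi-nonexpansive with Fix(T) = {2}
a, b = -1.0, 5.0                 # the feasible set C = [a, b]
lam, alpha, beta = 0.5, 0.5, 0.5
lo, hi = a, b                    # C_0 = C, kept as an interval [lo, hi]
x0 = x = 5.0
for _ in range(100):
    y = proj(x - lam * F(x), a, b)         # first strongly convex program
    z = proj(x - lam * F(y), a, b)         # second strongly convex program
    w = beta * x + (1.0 - beta) * T(z)     # assumed Ishikawa-type inner step
    u = alpha * x + (1.0 - alpha) * T(w)   # assumed Ishikawa-type outer step
    # Shrink: {v : |u - v| <= |x - v|} is a half-line with endpoint (u + x)/2
    mid = (u + x) / 2.0
    if u <= x:
        hi = min(hi, mid)
    else:
        lo = max(lo, mid)
    x = proj(x0, lo, hi)                   # next iterate: P_{C_{k+1}}(x_0)
print(x)  # approaches the common solution x* = 2
```

The key design point visible here is that each iterate is the projection of the fixed anchor x_0 onto the shrinking set, which is what yields strong (not merely weak) convergence.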
Before proving the strong convergence of the CSEM Algorithm, we need the following lemma.
Lemma 3. Suppose that the solution set S is nonempty. Then, the sequence  generated by the CSEM Algorithm is well-defined.
 Proof.  To prove the lemma, it suffices to show that  is a nonempty, closed, and convex subset of H for each . First, we show non-emptiness by proving that  for each . Obviously, .
Now, let 
. Then, by Lemma 1 (ii), we have
        
        for each 
. This implies that
        
        for each 
. On the other hand, since 
, it follows from the quasi-nonexpansivity of each 
 (
) and the definitions of 
, 
 that
        
        and
        
        for each 
. The relations (
12) and (
13) imply that
        
        for each 
. Now, suppose that 
. Thus, by using (
14), we see that 
. So, by induction, we have 
, for each 
. Since 
S is a nonempty set, we obtain that 
 is a nonempty set, for each 
.
Next, we show that 
 is a closed and convex subset, for each 
. Note that we already have that 
 is a closed and convex subset. Now, suppose that 
 is a closed and convex subset, we will show that 
 is likewise. To do this, let us consider a set 
. We see that
        
This means that  is a halfspace and . Hence,  is a closed and convex subset. By induction, we conclude that  is a closed and convex subset for each . Consequently,  is well-defined.  □
 Theorem 1. Suppose that the solution set S is nonempty. Then, the sequence  generated by the CSEM Algorithm converges strongly to .
 Proof.  Let 
. By the definition of 
, we observe that 
, for each 
. Since 
 and 
, we have
        
        for each 
. This means that 
 is a nondecreasing sequence. Similarly, for each 
, we obtain that
        
        for each 
. By the above inequalities, we get
        
       for each 
. So 
 is a bounded sequence. Consequently, we can conclude that 
 is a convergent sequence. Moreover, we see that 
 is bounded. Thus, in view of (
13) and (
14), we get that 
 and 
 are also bounded. Suppose 
 such that 
. It follows that 
. Then, by Lemma 2 (iii), we have
        
Thus, by using the existence of 
, we get
        
That is,  is a Cauchy sequence in C. Since C is closed, there exists  such that
        
By the definition of 
 and 
, we see that
          
		  for each 
. It follows that
        
        for each 
. Since 
 and 
, as 
, we obtain that
        
This, together with (17), implies that
        
From  and the quasi-nonexpansivity of each  (), it follows that
        
Consider,
        
        for each 
. By using (
13) and the quasi-nonexpansivity of each 
 (
), we obtain
        
        for each 
. Then, by Lemma 1 (ii), we have
        
        for each 
. It follows that
        
       for each 
. Thus, by using (
18) and the choices of 
, 
, we have
        
        and
        
Then, by 
, we also have
        
        and
        
Next, we claim that 
. From the definition of 
, we see that
        
       for each 
. Then, by using (
18), (
19) and (
23), we have
        
Furthermore, for each fixed 
, we observe that
        
      for each 
. Thus, it follows from (
26) that
        
        for each 
. Since 
, as 
, then for each 
, we get 
, as 
. Combining with (
27), by the demiclosedness at 0 of 
, implies that
        
        for each 
.
Similarly, for each fixed 
, we note that
        
       for each 
. Since 
 and 
, as 
, then for each 
, we have 
 and 
, as 
. By Lemma 1 (i), for each 
, we obtain
        
It follows that, for each 
, we have
        
By using (
21) and weak continuity of each 
 (
), we get that
        
       for each 
. Thus, we have shown that .
Finally, we will show that 
. In fact, since 
, it follows from (
15) that
        
       for each 
. Then, by using the continuity of the norm and 
, we see that
        
Thus, by the definition of  and , we obtain that . This completes the proof.  □
 Next, by replacing the cyclic scheme with a parallel one, we propose the following algorithm.
PSEM Algorithm (Parallel Shrinking Extragradient Method)
Initialization. Pick , choose parameters  with ,  such that , and  with .
Step 1. Solve 
N strongly convex programs
      
 Step 2. Solve 
N strongly convex programs
      
 Step 3. Find the farthest element from 
 among 
, 
, i.e.,
      
 Step 5. Find the farthest element from 
 among 
, 
, i.e.,
      
 Step 6. Construct a closed convex subset of C:
 Step 7. The next approximation 
 is defined as the projection of 
 onto 
, i.e.,
      
.
 Step 8. Put  and go to Step 1.
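The distinctive parallel ingredient, solving all N subproblems from the same iterate and then keeping the farthest candidate, can be sketched as follows (the two operators F_i, the interval, and the step size are illustrative assumptions):

```python
# Parallel step sketch: run all N extragradient pairs from the same x_k,
# then keep the farthest candidate (illustrative operators F_i assumed).
def proj(x, lo=-1.0, hi=5.0):
    return min(max(x, lo), hi)

Fs = [lambda x: x - 2.0, lambda x: 0.5 * (x - 2.0)]   # both solved by x* = 2
lam, x = 0.5, 5.0
zs = []
for F in Fs:
    y = proj(x - lam * F(x))
    zs.append(proj(x - lam * F(y)))
z_bar = max(zs, key=lambda z: abs(z - x))   # farthest element from x_k
print(z_bar)
```

Selecting the farthest element lets a single shrinking set account for all N subproblems at once, which is what makes the parallel variant work with one projection per iteration.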
Theorem 2. Suppose that the solution set S is nonempty. Then, the sequence  generated by the PSEM Algorithm converges strongly to .
 Proof.  Let 
. By the definition of 
, we suppose that 
 such that 
. Then, by Lemma 1 (ii), we have
        
       for each 
. This implies that
        
        for each 
. On the other hand, by the definition of 
 and the quasi-nonexpansivity of each 
 (
), we have
        
        for each 
. Additionally, by the definition of 
, we suppose that 
 such that 
. It follows from the quasi-nonexpansivity of each 
 (
) that
        
        for each 
. The relations (
28) and (
29) imply that
        
        for each 
. Following the proof of Lemma 3 and Theorem 1, we can show that 
 is a closed convex subset of 
H and 
, for each 
. Moreover, we can check that the sequence 
 is a convergent sequence, say
        
        for some 
.
By the definition of 
 and 
, we see that
          
        for each 
. It follows that
        
        for each 
. Since 
 and 
, as 
, we obtain that
        
This, together with (32), implies that
        
Then, by the definition of 
, we have
        
        for each 
. Since 
 and the quasi-nonexpansivity of each 
 (
), it follows that
        
       for each 
. Besides, by the definition of 
, for each 
, we see that
        
        for each 
. Thus, by using (
29) and the quasi-nonexpansivity of each 
 (
), we have
        
       for 
. So, by Lemma 1 (ii), for each 
, we get that
        
       for each 
. It follows that, for each 
, we have
        
       for each 
. Thus, by using (
33) and the choices of 
, 
, we see that
        
        and
        
Then, by the definition of 
, we have
        
        for each 
. Moreover, by Lemma 1 (ii), for each 
, we get that
        
        for each 
. It follows that, for each 
, we have
        
       for each 
. Combining with (
39) implies that
        
        and
        
        for each 
. Thus, by using (
38), (
40) and 
, we have
        
        and
        
       for each 
.
Next, we claim that 
. From the definition of 
, for each 
, we see that
        
       for each 
. Thus, in view of (
33), (
34), and (
38), we get that
        
       for each 
. Combining with (
42), by the demiclosedness at 0 of 
, implies that
        
        for each 
.
On the other hand, by Lemma 1 (i), for each 
, we see that
        
It follows that, for each 
, we get
        
By using (
31), (
40), (
43) and weak continuity of each 
 (
), we have
        
        for each 
. Thus, we can conclude that 
. The rest of the proof is similar to the arguments in the proof of Theorem 1, and it leads to the conclusion that the sequence 
 converges strongly to 
.  □
 Remark 2. We note that in the PSEM algorithm we solve , , , by using N bifunctions and compute , , , by using M mappings; the farthest elements from  among all  and  are then chosen for the next step of the calculation. In the CSEM algorithm, by contrast, we solve only , , by using one bifunction and compute only , , by using one mapping. After that, in both algorithms, we construct the closed convex subset , and the approximation  is the projection of  onto . We expect the number of iterations of the PSEM algorithm to be smaller than that of the CSEM algorithm, whereas the computational time of the CSEM algorithm should be shorter than that of the PSEM algorithm for sufficiently large . On the other hand, in the PHMEM algorithm one solves , , , by using N bifunctions and computes , , by using M mappings; the farthest elements from  among all  and  are chosen as in the PSEM algorithm. However, PHMEM constructs two closed convex subsets , , and the approximation  is the projection of  onto , which is difficult to compute. We will focus on these observations in the next section.
   4. A Numerical Experiment
This section compares the two introduced algorithms, CSEM and PSEM, with the PHMEM algorithm presented in [28]. The following setting is taken from Hieu et al. [28]. Let  be a Hilbert space with the standard inner product  and the norm , for each . We consider the nonexpansive self-mappings , , and the bifunctions , , given on  by
      
      and
      
      where  if , and  if . Moreover, . Then, the bifunctions , , satisfy conditions  (see [28]). Indeed, the bifunctions , , are Lipschitz-type continuous with constants . Note that the solution set S is nonempty because .
The following numerical experiment is considered with these parameters: ,  for the CSEM algorithm; , ,  for the PSEM algorithm, when  and . The following six cases of the parameters  and  are considered:
Case 1. , .
Case 2. , .
Case 3. , .
Case 4. , .
Case 5. , .
Case 6. , .
The experiment was written in Matlab R2015b and performed on a desktop PC with an Intel(R) Core(TM) i3-3240 CPU @ 3.40 GHz and 4.00 GB of RAM. The function 
 in Matlab Optimization Toolbox was used to compute the vectors ,  for the CSEM algorithm, and , , , for the PSEM algorithm. The set 
 was computed by using the function  in Matlab Symbolic Math Toolbox. One can see that the set  is the interval , where , . Consequently, the metric projection of a point  onto the set  was computed by using the form
      
      see [41]. The CSEM and PSEM algorithms were tested along with the PHMEM algorithm using the stopping criterion , and the results below are presented as averages over four starting points:  at , ,  and 1.
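Since the solution set here is an interval, its metric projection has the standard closed form P_[a,b](x) = min(max(x, a), b); a quick sketch with placeholder endpoints (not the interval actually computed in the experiment):

```python
# Projection of a real number onto a closed interval [a, b] by clipping.
def proj_interval(x, a, b):
    return min(max(x, a), b)

a, b = -0.7, 1.3   # hypothetical endpoints, for illustration only
print([proj_interval(x, a, b) for x in (-2.0, 0.5, 4.0)])  # [-0.7, 0.5, 1.3]
```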
Table 1 shows that the parameter  yields faster computational times and fewer iterations than the other cases (compare cases 1–3 with each other and cases 4–6 with each other). Meanwhile, the parameter , for which the Ishikawa iteration reduces to the Mann iteration, yields slower computational times and more iterations than the other cases (compare case 1 with case 4, case 2 with case 5, and case 3 with case 6). Moreover, the computational times of the CSEM algorithm are shorter than those of the other algorithms, while the iteration counts of the PSEM algorithm are less than or equal to those of the other algorithms. Finally, both the computational times and the iteration counts of the CSEM and PSEM algorithms are better than or equal to those of the PHMEM algorithm.
 Remark 3. Let us consider the case of parameters  and , in which the Ishikawa iteration reduces to the Picard iteration. We note that the convergence of the PHMEM algorithm cannot be guaranteed in this setting. The computational results of the CSEM and PSEM algorithms are shown below.
 From Table 2, we see that both the computational times and the iteration counts are better than in all the cases presented in Table 1. However, it should be noted that the Picard iteration method may fail to converge to a fixed point of a nonexpansive mapping in general; see, for example, [43].