A New Alternative Regularization Method for Solving Generalized Equilibrium Problems

The purpose of this paper is to present a numerical method for solving a generalized equilibrium problem involving a Lipschitz continuous and monotone mapping in a Hilbert space. The proposed method can be viewed as an improvement of Tseng's extragradient method and the regularization method. We show that the iterative process constructed by the proposed method converges strongly to the smallest norm solution of the generalized equilibrium problem. Several numerical experiments are also given to illustrate the performance of the proposed method. One advantage of the proposed method is that it requires no knowledge of Lipschitz-type constants.


Introduction
Let C be a closed, convex and nonempty subset of a real Hilbert space H. Let F : C × C → R be a bifunction and A : C → H be a mapping. The generalized equilibrium problem (GEP) is defined as: Find a point u* ∈ C such that F(u*, v) + ⟨A u*, v − u*⟩ ≥ 0, ∀v ∈ C. (1) Denote by GEP(F, A) the set of solutions of the GEP. If A = 0, then the GEP (1) becomes the equilibrium problem (EP): Find a point u* ∈ C such that F(u*, v) ≥ 0, ∀v ∈ C. (2) The solution set of (2) is denoted by EP(F).
In the oligopolistic market equilibrium model [1], it is assumed that the cost functions h_i (i = 1, . . . , n) are increasing piecewise-linear concave functions and that the price function p(∑_{j=1}^{n} x_j) can change firm by firm and takes a specific parametric form, with F(u, v) = φ(u) − φ(v) (B_1, B are two corresponding matrices). Then the problem of finding a Nash equilibrium point becomes the GEP (1). The GEP is very general in the sense that it includes, as particular cases, optimization problems, Nash equilibrium problems, variational inequalities, and saddle point problems. Many problems of practical interest in economics and engineering involve equilibria in their description; see [2][3][4][5][6][7][8][9][10][11][12][13][14][15] for examples.
If F(u, v) = 0 for all u, v ∈ C, then the GEP (1) becomes the variational inequality problem (VIP): Find a point u* ∈ C such that ⟨A u*, v − u*⟩ ≥ 0, ∀v ∈ C, (3) for which the solution set is denoted by VI(C, A). The VIP (3) was introduced by Stampacchia [16] in 1964. It provides a convenient, natural, and unified framework for the study of many problems in operations research, engineering and economics. It includes, as special cases, such well-known problems in mathematical programming as systems of optimization and control problems, traffic network problems, and fixed point problems; see [6,7,17]. Many iterative methods for solving VIPs have been proposed and studied; see [4,[6][7][8]. Among them, two notable and general directions for solving VIPs are the projection method and the regularization method. In order to solve monotone variational inequality problems, Thong and Hieu [5] recently introduced the following Tseng's extragradient method (TEGM), assuming A : H → H is monotone and Lipschitz continuous. They proved that the sequence {x_n} generated by Algorithm 1 converges weakly to some solution of the VIP (3) under appropriate conditions. Based on Tseng's extragradient method and the viscosity method, they also introduced the following Tseng-type viscosity algorithm (TEGMV):
Step 1. Given x_n (n ≥ 0), compute y_n = P_C(x_n − σ_n A x_n), where σ_n is chosen to be the largest σ ∈ {ς, ςκ, ςκ², · · ·} satisfying σ‖A x_n − A y_n‖ ≤ δ‖x_n − y_n‖. If x_n = y_n, then stop and x_n is the solution of the VIP (3). Otherwise, go to Step 2.
Step 2. Compute x n+1 = y n − σ n (A y n − A x n ).
Set n := n + 1 and return to Step 1.
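For readers who wish to experiment, the two steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feasible set C = [0, 1]^m, the choice A(x) = x, and all parameter values are assumptions made for the demo; the projection P_C and the Armijo-type rule follow the description in Steps 1-2.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^m (an illustrative choice of C)."""
    return np.clip(x, lo, hi)

def tseng_egm(A, x0, proj, varsigma=1.0, kappa=0.5, delta=0.5,
              max_iter=500, tol=1e-10):
    """Tseng's extragradient method with an Armijo-type step search:
    sigma_n is the largest sigma in {varsigma, varsigma*kappa, ...} with
    sigma * ||A(x_n) - A(y_n)|| <= delta * ||x_n - y_n||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        sigma = varsigma
        while True:
            y = proj(x - sigma * A(x))          # Step 1: forward step + projection
            if sigma * np.linalg.norm(A(x) - A(y)) <= delta * np.linalg.norm(x - y):
                break
            sigma *= kappa
        if np.linalg.norm(x - y) <= tol:        # x_n = y_n: stop
            return y
        x = y - sigma * (A(y) - A(x))           # Step 2: Tseng correction
    return x

# With A(x) = x on C = [0, 1]^m, the unique solution of VI(C, A) is the origin.
sol = tseng_egm(lambda v: v, np.ones(5), project_box)
```

Because the linesearch only requires evaluations of A, no Lipschitz constant is supplied; the iterates contract toward the solution.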
The mapping f in Algorithm 2 is a contraction on H. By adding this viscosity term, they proved that the process {x_n} constructed by Algorithm 2 converges strongly to x* = P_{VI(C,A)} f(x*) under suitable conditions, where P_{VI(C,A)} denotes the metric projection from H onto the solution set VI(C, A).
Most recently, inspired by the extragradient method and the regularization method, Hieu et al. [18] introduced the following double projection method (DPM), for each n ≥ 1, where σ_n ∈ (0, 1/L). This method converges if A is L-Lipschitz continuous and monotone. Motivated by Thong and Hieu [5], Hieu et al. [18] and Tseng [19], we introduce a new numerical algorithm for solving a generalized equilibrium problem involving a monotone and Lipschitz continuous mapping. This method can be viewed as a combination of the regularization method and Tseng's extragradient method. We prove that the sequences constructed by the proposed method converge in norm to the smallest norm solution of the generalized equilibrium problem. Finally, we provide several numerical experiments supporting the proposed method.
Step 1. Given x_n (n ≥ 0), compute y_n, where σ_n is chosen to be the largest σ ∈ {ς, ςκ, ςκ², · · ·} satisfying the linesearch condition. If x_n = y_n, then stop and x_n is the solution of the VIP. Otherwise, go to Step 2.
Step 2. Compute x_{n+1}. Set n := n + 1 and return to Step 1.
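Since the displayed formulas of this scheme are not reproduced above, the following sketch shows one natural reading of the regularized variant in the special case F = 0 (so that the resolvent reduces to the metric projection P_C): each iteration applies the Tseng-type update to the regularized operator A + α_n I. The box constraint, the operator A(x) = x, and the parameter values are assumptions made for illustration.

```python
import numpy as np

def regularized_tseng(A, x0, alphas, proj, varsigma=1.0, kappa=0.5, delta=0.5):
    """Tseng-type iteration applied to the Tikhonov-regularized operator
    A + alpha_n * I (one plausible reading of the scheme; F = 0 here)."""
    x = np.asarray(x0, dtype=float)
    for alpha in alphas:
        A_reg = lambda v, a=alpha: A(v) + a * v     # regularized operator
        sigma = varsigma
        while True:                                 # Armijo-type linesearch
            y = proj(x - sigma * A_reg(x))
            if sigma * np.linalg.norm(A_reg(x) - A_reg(y)) <= delta * np.linalg.norm(x - y):
                break
            sigma *= kappa
        x = y - sigma * (A_reg(y) - A_reg(x))       # Tseng correction step
    return x

# A(x) = x on C = [0, 1]^m: the smallest norm solution of VI(C, A) is the origin.
alphas = [(n + 1) ** -0.5 for n in range(400)]
sol = regularized_tseng(lambda v: v, np.ones(4), alphas,
                        lambda v: np.clip(v, 0.0, 1.0))
```

The vanishing regularization weights α_n steer the iterates toward the smallest norm element of the solution set, which is the effect the paper's convergence theorem formalizes.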

Preliminaries
In this section, we use x_n → x (respectively, x_n ⇀ x) to denote the strong (respectively, weak) convergence of the sequence {x_n} to x as n → ∞. We denote by Fix(T) the set of fixed points of the mapping T, that is, Fix(T) = {x ∈ C : x = Tx}. Let R stand for the set of real numbers, and let C denote a nonempty, convex and closed subset of a Hilbert space H. Definition 1. The equilibrium bifunction F : C × C → R is said to be monotone if F(u, v) + F(v, u) ≤ 0, ∀u, v ∈ C.

Definition 2. A mapping A : C → H is said to be: (1) monotone on C, if ⟨A u − A v, u − v⟩ ≥ 0 for all u, v ∈ C; (2) L-Lipschitz continuous on C, if there exists L > 0 such that ‖A u − A v‖ ≤ L‖u − v‖ for all u, v ∈ C. Assumption 1. Let C be a nonempty, convex and closed subset of a Hilbert space H and F : C × C → R be a bifunction satisfying the following restrictions: (A1) F(u, u) = 0, ∀u ∈ C; (A2) F is monotone; (A3) for all u ∈ C, F(u, ·) is convex and lower semicontinuous; (A4) for every v ∈ C and every {u_n} ⊂ C with u_n ⇀ u*, lim sup_{n→∞} F(u_n, v) ≤ F(u*, v); (A4′) F is jointly weakly upper semicontinuous on C × C in the sense that, if x, y ∈ C and {x_n}, {y_n} ⊂ C converge weakly to x and y, respectively, then F(x_n, y_n) → F(x, y) as n → +∞ (see, e.g., [20]).

Lemma 1 ([2,3]). Let F : C × C → R be a bifunction satisfying Assumption 1 (A1)-(A4). For u ∈ H and r > 0, define a mapping T^F_r : H → C by: T^F_r(u) = {w ∈ C : F(w, v) + (1/r)⟨v − w, w − u⟩ ≥ 0, ∀v ∈ C}. Then it holds that T^F_r is single-valued and firmly nonexpansive, Fix(T^F_r) = EP(F), and EP(F) is closed and convex. Hence, from Lemma 1, we find that T^F_r is well defined. We shall also use the following Kadec-Klee property of Hilbert spaces (Lemma 2): if x_n ⇀ x and ‖x_n‖ → ‖x‖, then x_n → x.
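As a concrete sanity check of the resolvent definition, consider the hypothetical bifunction F(u, v) = φ(v) − φ(u) with φ(x) = ‖x‖² (this choice is an illustration, not taken from the paper). For such F the resolvent coincides with the proximal map of rφ, which here has the closed form u/(1 + 2r), and the defining inequality can be verified numerically:

```python
import numpy as np

# Hypothetical choice (for illustration only): F(u, v) = phi(v) - phi(u) with
# phi(x) = ||x||^2.  For such F, the resolvent T^F_r reduces to the proximal
# map of r*phi, which has the closed form T^F_r(u) = u / (1 + 2r).
def resolvent(u, r):
    return u / (1.0 + 2.0 * r)

def F(u, v):
    return float(np.dot(v, v) - np.dot(u, u))

u, r = np.array([2.0, -1.0]), 0.5
w = resolvent(u, r)

# Verify the defining inequality F(w, v) + (1/r)<v - w, w - u> >= 0 on samples.
rng = np.random.default_rng(0)
violations = sum(F(w, v) + (1.0 / r) * np.dot(v - w, w - u) < -1e-9
                 for v in rng.normal(size=(200, 2)))
```

For this φ the left-hand side simplifies to ‖v − w‖², so the inequality holds with no violations, as the random sampling confirms.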

Main Results
In this section, we focus on the strong convergence analysis for the smallest norm solution of the GEP (1) by using the Tikhonov-type regularization technique. As is well known, the Tikhonov-type regularization technique has been effectively applied to ill-posed convex optimization problems.
In the sequel, we assume that F : C × C → R is a bifunction satisfying (A1)-(A3) and (A4′), and that A : H → H is monotone and L-Lipschitz continuous. For each α > 0, we associate the GEP (1) with the so-called regularized generalized equilibrium problem (RGEP): Find a point x ∈ C such that F(x, y) + ⟨A x + αx, y − x⟩ ≥ 0, ∀y ∈ C. We deduce from Lemma 4 below that the RGEP has a unique solution x_α for each α > 0. On the other hand, noticing Remark 1 (iv), one finds that GEP(F, A) is nonempty, closed and convex. Hence there exists a unique point x† ∈ GEP(F, A) which has the smallest norm in the solution set GEP(F, A). The relationship between x_α and x† is described in the following lemma. Lemma 4. (i) for each α > 0, the RGEP has a unique solution x_α; (ii) ‖x_α‖ ≤ ‖x†‖ for all α > 0 and x_α → x† as α → 0+; (iii) ‖x_α − x_β‖ ≤ (|α − β|/α)‖x†‖ for all α, β > 0.

Proof. (i) Since A is monotone and Lipschitz continuous, A + αI is also monotone and Lipschitz continuous. From Remark 1 (iv), we find that the solution set of the RGEP is nonempty. For α > 0, if x* and y* are two solutions of the RGEP, then one has: F(x*, y*) + ⟨A x* + αx*, y* − x*⟩ ≥ 0, (6) and F(y*, x*) + ⟨A y* + αy*, x* − y*⟩ ≥ 0. (7) Adding up (6) and (7), we have: F(x*, y*) + F(y*, x*) + ⟨A x* − A y* + α(x* − y*), y* − x*⟩ ≥ 0. (8) In view of (8) and the monotone property of F and A, we obtain α‖x* − y*‖² ≤ 0, which implies x* = y*. This completes the proof of (i).
(ii) Now we prove that ‖x_α‖ ≤ ‖x†‖ and that x_α → x† as α → 0+. Taking any w ∈ GEP(F, A), we have: Since x_α is the solution of the RGEP, we then find: Substituting y = w ∈ C into (11), we obtain: Summing up inequalities (10) and (12), we get: Noticing (13) and using the monotone property of F and A, we obtain: from which we also obtain ‖x_α‖ ≤ ‖x†‖. Therefore, {x_α} is bounded. Since C is closed and convex, C is weakly closed. Hence there exist a subsequence {x_{α_j}} of {x_α} and a point x* ∈ C such that x_{α_j} ⇀ x*. In view of the monotone property of A, we deduce, for all v ∈ C, that: Due to the fact that F(x_{α_j}, y) + ⟨A x_{α_j} + α_j x_{α_j}, y − x_{α_j}⟩ ≥ 0 and noticing (14), we infer that: Letting j → ∞ and noticing (A4′), we obtain: For any x ∈ C and t ∈ [0, 1], substituting y = (1 − t)x* + tx into the above inequality, we have: In view of (A3), we get: By (A1), we have: Since A is L-Lipschitz continuous on C, by taking t → 0, we have: which implies x* ∈ GEP(F, A). From Lemma 4 (ii) and the weak lower semicontinuity of the norm, we obtain ‖x*‖ ≤ lim inf_{j→∞} ‖x_{α_j}‖ ≤ ‖x†‖. Further, due to the fact that x† is the unique smallest norm solution in GEP(F, A), we derive x* = x†. This means x_{α_j} ⇀ x† as j → +∞. By a similar argument, we deduce that the whole sequence {x_α} converges weakly to x† as α → 0+.
Next we show that lim_{α→0+} ‖x_α − x†‖ = 0. Indeed, noticing the weak lower semicontinuity of the norm, Lemma 4 (ii) and (15), we obtain lim_{j→∞} ‖x_{α_j}‖ = ‖x*‖. In view of Lemma 2 and the fact that x_{α_j} ⇀ x*, we derive that lim_{j→∞} x_{α_j} = x* = x†. By following the lines of the proof above, we obtain that the whole sequence {x_α} converges strongly to x†.
(iii) Assume that x_α and x_β are the solutions of the RGEP with parameters α and β, respectively. Then we have: From the above two inequalities and using the monotonicity of A and F, one obtains: It follows that: Simplifying and noticing Lemma 4 (ii), we find: This completes the proof.
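The behavior described in Lemma 4 can be observed on a small example. Take the monotone affine operator A(x) = Mx − c on H = R² with M positive semidefinite and singular, and F = 0 (this operator is an illustrative assumption, not the paper's data): the regularized solutions x_α then trace a path converging to the smallest norm solution x†.

```python
import numpy as np

# Illustrative data (not from the paper): the monotone affine operator
# A(x) = M x - c on H = R^2, with M positive semidefinite and singular,
# and F = 0.  The solution set of A(x) = 0 is the line x1 - x2 = 1, whose
# smallest norm element is x_dagger = (0.5, -0.5).
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
c = np.array([1.0, -1.0])
x_dagger = np.array([0.5, -0.5])

def x_alpha(alpha):
    # Unique solution of the regularized equation (M + alpha*I) x = c.
    return np.linalg.solve(M + alpha * np.eye(2), c)

# The regularized solutions stay inside the ball ||x|| <= ||x_dagger|| and
# approach x_dagger as alpha -> 0+, as Lemma 4 (ii) predicts.
path = [x_alpha(a) for a in (1.0, 0.1, 1e-3, 1e-6)]
```

Here x_α = (1/(2+α), −1/(2+α)), so the norm bound ‖x_α‖ ≤ ‖x†‖ and the limit x_α → x† can be read off directly.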
In the following, combining Tseng's extragradient method with the regularization method, we propose a new numerical algorithm for solving the GEP. Assume that the following two conditions are satisfied: An example of a sequence {α_n} satisfying conditions (C1) and (C2) is α_n = (n + 1)^{−p} with 0 < p < 1. We now introduce the following Algorithm 3:
Thus, we get from (23) that: For each n ≥ n_0, from the Cauchy-Schwarz inequality and Lemma 4 (iii), we infer: which implies: Therefore, it follows from (24) and (25) that: We deduce from (C1), (C2) and Lemma 3 that lim_{n→∞} ‖x_n − x_{α_n}‖² = 0, which, together with x_{α_n} → x† (Lemma 4 (ii)), means lim_{n→∞} x_n = x†. This completes the proof.
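The step-size conditions (C1)-(C2) used in the proof are not reproduced in this extraction; regularization schemes of this kind typically require at least α_n → 0 together with ∑ α_n = ∞ (this reading is an assumption). Both properties hold for α_n = (n + 1)^{−p} with 0 < p < 1, as a quick numerical check with the integral-test lower bound illustrates:

```python
# Quick check (assuming the usual requirements alpha_n -> 0 and
# sum(alpha_n) = infinity) for alpha_n = (n + 1)^(-p) with 0 < p < 1.
p, N = 0.5, 10_000
alpha = [(n + 1) ** (-p) for n in range(N)]   # alpha_n decays like n^{-p} -> 0

partial_sum = sum(alpha)
# Integral test: sum_{n=1}^{N} n^{-p} >= ((N + 1)^{1-p} - 1) / (1 - p),
# a bound that grows without limit as N -> infinity, so the series diverges.
lower_bound = ((N + 1) ** (1 - p) - 1) / (1 - p)
```

The slow decay (p < 1) is what keeps the partial sums divergent while the regularization weight still vanishes.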

Application to Split Minimization Problems
Let C be a nonempty closed convex subset of a real Hilbert space H, and let ψ : C → R be a convex and continuously differentiable function. Consider the constrained convex minimization problem: min_{x∈C} ψ(x). (26) The monotonicity of ∇ψ is ensured by the convexity of ψ. A point v* ∈ C is a solution of the minimization problem (26) if and only if it is a solution of the following variational inequality: ⟨∇ψ(v*), v − v*⟩ ≥ 0, ∀v ∈ C. Setting F(u, v) = 0, it is not difficult to check that GEP(F, ∇ψ) = VI(C, ∇ψ) = arg min_C ψ and T^F_σ = P_C. From Theorem 1, we have the following result.

Theorem 2.
Let ψ : C → R be a convex and continuously differentiable function whose gradient ∇ψ is L-Lipschitz continuous. Suppose that the optimization problem min_{x∈C} ψ(x) is consistent, i.e., its solution set is nonempty. Then the sequence {x_n} constructed by Algorithm 4 converges to the unique minimal norm solution of the minimization problem (26) under conditions (C1) and (C2).
Set n := n + 1 and return to Step 1.
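As an illustration of this specialization, with F = 0 the scheme reduces to a projected Tseng iteration on ∇ψ. The quadratic ψ(x) = ½‖x − b‖² and the box C = [0, 1]^m below are assumptions chosen so that the minimizer is known in closed form (the projection of b onto C); a fixed step σ < 1/L is used instead of the linesearch for brevity.

```python
import numpy as np

# Illustrative instance (not from the paper): psi(x) = 0.5*||x - b||^2 over
# the box C = [0, 1]^m, so grad psi(x) = x - b is 1-Lipschitz and the
# minimizer is the projection of b onto C.
def grad_psi(x, b):
    return x - b

def minimize_over_box(b, iters=200, sigma=0.5):
    """Projected Tseng iteration for min_{x in C} psi(x) (fixed step sigma < 1/L)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        y = np.clip(x - sigma * grad_psi(x, b), 0.0, 1.0)  # forward step + P_C
        x = y - sigma * (grad_psi(y, b) - grad_psi(x, b))  # Tseng correction
    return x

b = np.array([2.0, -0.5, 0.3])
x_star = minimize_over_box(b)   # should approach clip(b, 0, 1)
```

Coordinates of b inside the box are recovered exactly in the limit, while those outside are clipped to the nearest face, matching VI(C, ∇ψ) = arg min_C ψ.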
In this subsection, we provide some numerical examples to illustrate the behavior and performance of our Algorithm 3 (TEGMR), as well as to compare it with Algorithm (4) (the DPM of Hieu et al. [18]), Algorithm 1 (the TEGM of Thong and Hieu [5]) and Algorithm 2 (the TEGMV of Thong and Hieu [5]).

Example 2.
Let F : R^m × R^m → R be given by F(x, y) = ⟨x, y⟩ − ‖x‖² for all x, y ∈ R^m, let A : R^m → R^m be given by A x = x, and let f : R^m → R^m be given by f(x) = 2x/3 for all x ∈ R^m. The feasible set is C = {x ∈ R^m : 0 ≤ x_i ≤ 1, i = 1, . . . , m}. The maximum number of iterations, 300, is used as the stopping criterion, and the initial values x_0 are randomly generated by rand(0, 1) in MATLAB. We choose α_n = 1/√(n + 1), σ_n = 1/2, ς = 1 and δ = l = 1/2. Figures 3 and 4 describe the numerical results for Example 2 in R^5 and R^10, respectively. According to all graphical representations in Figures 1-4, one finds that Algorithm 3 performs better than Algorithm (4) and Algorithms 1 and 2 in terms of the number of iterations and the CPU time taken for computation.

Conclusions
This paper presents an alternative regularization method for finding the smallest norm solution of a generalized equilibrium problem. This method can be considered an improvement of Tseng's extragradient method and the regularization method. We prove that the iterative process constructed by the proposed method converges strongly to the smallest norm solution of the generalized equilibrium problem. Several numerical experiments are also given to demonstrate the competitive advantage of the suggested method over other known methods.