Article

A New Alternative Regularization Method for Solving Generalized Equilibrium Problems

by Yanlai Song 1,*,† and Omar Bazighifan 2,3,*,†

1 College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
2 Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
3 Department of Mathematics, Faculty of Science, Hadhramout University, Mukalla 50512, Yemen
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(8), 1350; https://doi.org/10.3390/math10081350
Submission received: 16 March 2022 / Revised: 15 April 2022 / Accepted: 15 April 2022 / Published: 18 April 2022

Abstract:
The purpose of this paper is to present a numerical method for solving a generalized equilibrium problem involving a Lipschitz continuous and monotone mapping in a Hilbert space. The proposed method can be viewed as an improvement of Tseng's extragradient method and of the regularization method. We show that the iterative process constructed by the proposed method converges strongly to the smallest norm solution of the generalized equilibrium problem. Several numerical experiments are also given to illustrate the performance of the proposed method. One advantage of the proposed method is that it requires no knowledge of Lipschitz-type constants.

1. Introduction

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $F : C \times C \to \mathbb{R}$ be a bifunction and $A : C \to H$ a mapping. The generalized equilibrium problem (GEP) is defined as:

Find a point $u^* \in C$ such that $F(u^*, v) + \langle A u^*, v - u^* \rangle \ge 0$, $\forall v \in C$. (1)

Denote by $\mathrm{GEP}(F, A)$ the set of solutions of the GEP. If $A = 0$, then the GEP (1) reduces to the equilibrium problem (EP):

Find a point $u^* \in C$ such that $F(u^*, v) \ge 0$, $\forall v \in C$. (2)

The solution set of (2) is denoted by $\mathrm{EP}(F)$.
In the oligopolistic market equilibrium model [1], it is assumed that the cost functions $h_i$ ($i = 1, \ldots, n$) are increasing piecewise-linear concave functions and that the price function $p(\sum_{j=1}^{n} x_j)$ can change from firm to firm; namely, the price has the form $p_i(\sigma) := \alpha_i - \beta_i \sum_{j=1}^{n} x_j$. Take $h(x) = \sum_{i=1}^{n} h_i(x_i)$, $A(x) = B_1 x - \alpha$, $\phi(x) = x^{T} B x + h(x)$, and $F(u, v) = \phi(u) - \phi(v)$, where $B_1$ and $B$ are two corresponding matrices. Then the problem of finding a Nash equilibrium point becomes the GEP (1). The GEP is very general in the sense that it includes, as particular cases, optimization problems, Nash equilibrium problems, variational inequalities, and saddle point problems. Many problems of practical interest in economics and engineering involve equilibria in their description; see [2,3,4,5,6,7,8,9,10,11,12,13,14,15] for examples.
If $F(u, v) = 0$ for all $u, v \in C$, then the GEP (1) reduces to the variational inequality problem (VIP):

Find a point $u^* \in C$ such that $\langle A u^*, v - u^* \rangle \ge 0$, $\forall v \in C$, (3)

whose solution set is denoted by $\mathrm{VI}(C, A)$. The VIP (3) was introduced by Stampacchia [16] in 1964. It provides a convenient, natural, and unified framework for the study of many problems in operations research, engineering, and economics. It includes, as special cases, such well-known problems in mathematical programming as optimization and control problems, traffic network problems, and fixed point problems; see [6,7,17].
Many iterative methods for solving VIPs have been proposed and studied; see [4,6,7,8]. Among them, two notable and general directions are the projection method and the regularization method. In order to solve monotone variational inequality problems, Thong and Hieu [5] recently introduced the following Tseng's extragradient method (TEGM):
Assume $A : H \to H$ is monotone and Lipschitz continuous. They proved that the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to a solution of the VIP (3) under appropriate conditions. Based on Tseng's extragradient method and the viscosity method, they also introduced the following Tseng-type viscosity algorithm (TEGMV), stated in Algorithm 2 below:
Algorithm 1: Tseng's extragradient method (TEGM)
Initialization: Set $\varsigma > 0$, $\kappa, \delta \in (0, 1)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C(x_n - \sigma_n A x_n),$$
where $\sigma_n$ is chosen to be the largest $\sigma \in \{\varsigma, \varsigma\kappa, \varsigma\kappa^2, \ldots\}$ satisfying
$$\sigma \|A y_n - A x_n\| \le \delta \|y_n - x_n\|.$$
If $x_n = y_n$, then stop: $x_n$ is a solution of the VIP (3). Otherwise, go to Step 2.
Step 2. Compute
$$x_{n+1} = y_n - \sigma_n (A y_n - A x_n).$$
Set $n := n + 1$ and return to Step 1.
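To make the scheme concrete, the following NumPy sketch implements Algorithm 1. It is a minimal illustration of ours: the operator $A$, the feasible set $C$ (a box, so that $P_C$ is a componentwise clip), the parameter values, and the stopping tolerance are assumed choices for the demo, not specifications from the paper.

import numpy as np

def tegm(A, proj_C, x0, varsigma=1.0, kappa=0.5, delta=0.5, tol=1e-8, max_iter=1000):
    # Tseng's extragradient method (Algorithm 1) with Armijo-type backtracking.
    x = x0.astype(float)
    for _ in range(max_iter):
        sigma = varsigma
        while True:
            # largest sigma in {varsigma, varsigma*kappa, ...} with
            # sigma*||A(y) - A(x)|| <= delta*||y - x||
            y = proj_C(x - sigma * A(x))
            if sigma * np.linalg.norm(A(y) - A(x)) <= delta * np.linalg.norm(y - x):
                break
            sigma *= kappa
        if np.linalg.norm(x - y) <= tol:   # x_n = y_n: x_n solves the VIP
            return x
        x = y - sigma * (A(y) - A(x))      # forward-backward-forward correction
    return x

# Demo (assumed data): A(x) = x is monotone and 1-Lipschitz, C = [0, 1]^5,
# so VI(C, A) = {0}; the iterates shrink geometrically toward 0.
print(tegm(lambda x: x, lambda x: np.clip(x, 0.0, 1.0), np.full(5, 0.7)))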
The mapping $f$ in Algorithm 2 is a contraction on $H$. By adding this viscosity term, they proved that the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to $x^* = P_{\mathrm{VI}(C, A)} f(x^*)$ under suitable conditions, where $P_{\mathrm{VI}(C, A)}$ denotes the metric projection from $H$ onto the solution set $\mathrm{VI}(C, A)$.
Most recently, inspired by the extragradient method and the regularization method, Hieu et al. [18] introduced the following double projection method (DPM):
$$y_n = P_C\big(x_n - \sigma_n (A x_n + \alpha_n x_n)\big), \qquad x_{n+1} = P_C\big(x_n - \sigma_n (A y_n + \alpha_n x_n)\big), \qquad (4)$$
for each $n \ge 1$, where $\sigma_n \in (0, 1/L)$. This method converges strongly if $A$ is $L$-Lipschitz continuous and monotone.
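For comparison with the sketch above, here is a minimal NumPy rendering of the DPM (4); the choices $A(x) = x$, $C = [0,1]^5$, the fixed step $\sigma = 0.5$, and the regularization sequence $\alpha_n = 1/(n+1)$ are illustrative assumptions of ours.

import numpy as np

def dpm(A, proj_C, x0, sigma=0.5, n_iter=200):
    # Double projection method (4): two projections per iteration, with a
    # Tikhonov term alpha_n * x_n added inside both projections.
    x = x0.astype(float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)                       # assumed regularization sequence
        y = proj_C(x - sigma * (A(x) + alpha * x))  # first projection
        x = proj_C(x - sigma * (A(y) + alpha * x))  # second projection
    return x

print(dpm(lambda x: x, lambda x: np.clip(x, 0.0, 1.0), np.full(5, 0.7)))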
Motivated by Thong and Hieu [5], Hieu et al. [18] and Tseng [19], we introduce a new numerical algorithm for solving a generalized equilibrium problem involving a monotone and Lipschitz continuous mapping. This method can be viewed as a combination of the regularization method and Tseng's extragradient method. We prove that the sequence constructed by the proposed method converges in norm to the smallest norm solution of the generalized equilibrium problem. Finally, we provide several numerical experiments supporting the proposed method.
Algorithm 2: Tseng's extragradient method with viscosity technique (TEGMV)
Initialization: Set $\varsigma > 0$, $\kappa, \delta \in (0, 1)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C(x_n - \sigma_n A x_n),$$
where $\sigma_n$ is chosen to be the largest $\sigma \in \{\varsigma, \varsigma\kappa, \varsigma\kappa^2, \ldots\}$ satisfying
$$\sigma \|A y_n - A x_n\| \le \delta \|y_n - x_n\|.$$
If $x_n = y_n$, then stop: $x_n$ is a solution of the VIP. Otherwise, go to Step 2.
Step 2. Compute
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n,$$
where
$$z_n = y_n - \sigma_n (A y_n - A x_n).$$
Set $n := n + 1$ and return to Step 1.
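Relative to the tegm sketch above, Algorithm 2 changes only Step 2: the Tseng update $z_n$ is averaged with a contraction $f$. Below is a sketch of ours of that single step, reusing the notation of the earlier sketch; the contraction $f(x) = 2x/3$ used in the examples of Section 4 is one admissible choice.

def tegmv_step2(x_n, y_n, sigma_n, A, f, alpha_n):
    # Viscosity variant of Step 2: anchor the Tseng update z_n toward f(x_n).
    z_n = y_n - sigma_n * (A(y_n) - A(x_n))        # plain Tseng correction
    return alpha_n * f(x_n) + (1 - alpha_n) * z_n  # viscosity averaging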

2. Preliminaries

In this section, we use $x_n \to x$ (respectively, $x_n \rightharpoonup x$) to denote the strong (respectively, weak) convergence of the sequence $\{x_n\}$ to $x$ as $n \to \infty$. We denote by $\mathrm{Fix}(T)$ the set of fixed points of the mapping $T$, that is, $\mathrm{Fix}(T) = \{x \in C : x = Tx\}$. Let $\mathbb{R}$ stand for the set of real numbers, and let $C$ denote a nonempty, convex, and closed subset of a Hilbert space $H$.
Definition 1.
The equilibrium bifunction $F : C \times C \to \mathbb{R}$ is said to be monotone if
$$F(x, y) + F(y, x) \le 0, \quad \forall x, y \in C.$$
Definition 2.
A mapping $A : C \to H$ is said to be:
(1) monotone on $C$ if
$$\langle A x - A y, x - y \rangle \ge 0, \quad \forall x, y \in C;$$
(2) $L$-Lipschitz continuous on $C$ if there exists $L > 0$ such that
$$\|A x - A y\| \le L \|x - y\|, \quad \forall x, y \in C.$$
Assumption 1.
Let $C$ be a nonempty, convex, and closed subset of a Hilbert space $H$ and let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying the following restrictions:
(A1) $F(u, u) = 0$, $\forall u \in C$;
(A2) $F$ is monotone;
(A3) for all $u \in C$, $F(u, \cdot)$ is convex and lower semicontinuous;
(A4) for all $u, v, w \in C$, $\limsup_{t \to 0^+} F(t w + (1 - t) u, v) \le F(u, v)$;
(A4′) $\limsup_{n \to \infty} F(u_n, v) \le F(u^*, v)$ for every $v \in C$ and every $\{u_n\} \subset C$ satisfying $u_n \rightharpoonup u^*$;
(A4″) $F$ is jointly weakly upper semicontinuous on $C \times C$ in the sense that, if $x, y \in C$ and $\{x_n\}, \{y_n\} \subset C$ converge weakly to $x$ and $y$, respectively, then $\limsup_{n \to +\infty} F(x_n, y_n) \le F(x, y)$ (see, e.g., [20]).
Obviously, condition (A4″) implies (A4′), and condition (A4′) implies (A4) (see, e.g., [21] for more details).
Lemma 1
([2,3]). Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying Assumption 1 (A1)–(A4). For $u \in H$ and $r > 0$, define a mapping $T_r^F : H \to C$ by
$$T_r^F u = \Big\{ w \in C : F(w, v) + \frac{1}{r} \langle v - w, w - u \rangle \ge 0, \ \forall v \in C \Big\}.$$
Then, it holds that:
(i) $T_r^F$ is single-valued;
(ii) $T_r^F$ is a firmly nonexpansive mapping, i.e., for all $u, v \in H$, $\|T_r^F u - T_r^F v\|^2 \le \langle T_r^F u - T_r^F v, u - v \rangle$;
(iii) $\mathrm{Fix}(T_r^F) = \mathrm{EP}(F)$;
(iv) $\mathrm{EP}(F)$ is nonempty, closed, and convex.
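As a concrete illustration (ours, not from the paper): when $F(u, v) = \psi(v) - \psi(u)$ for a convex differentiable function $\psi$, the defining inequality of $T_r^F$ is exactly the optimality condition of a proximal step, so $T_r^F = \mathrm{prox}_{r\psi}$. For instance, with $H = C = \mathbb{R}$ and $\psi(v) = v^2$:
$$T_r^F u = \arg\min_{v \in \mathbb{R}} \Big\{ r v^2 + \tfrac{1}{2}(v - u)^2 \Big\} = \frac{u}{1 + 2r}.$$
Indeed, $w = u/(1 + 2r)$ satisfies $\psi'(w) + (w - u)/r = 0$, and convexity of $\psi$ then gives $F(w, v) + \frac{1}{r}(v - w)(w - u) \ge \psi'(w)(v - w) + \frac{1}{r}(v - w)(w - u) = 0$ for all $v$, which is the defining inequality.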
Remark 1.
Suppose $A : C \to H$ is monotone and Lipschitz continuous and $F : C \times C \to \mathbb{R}$ is a bifunction satisfying Assumption 1 (A1)–(A4). It is easy to check that the mapping $\tilde{F}(u, v) := F(u, v) + \langle A u, v - u \rangle$ satisfies Assumption 1. Hence, from Lemma 1, we find that:
(i) $T_r^{\tilde{F}}$ is single-valued;
(ii) $T_r^{\tilde{F}}$ is a firmly nonexpansive mapping;
(iii) $\mathrm{Fix}(T_r^{\tilde{F}}) = \mathrm{EP}(\tilde{F}) = \mathrm{GEP}(F, A)$;
(iv) $\mathrm{EP}(\tilde{F}) = \mathrm{GEP}(F, A)$ is nonempty, closed, and convex.
Lemma 2
([22]). Let $\{x_n\}$ be a sequence in $H$. If $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$, then $x_n \to x$.
Lemma 3
([23,24]). Assume $\{a_n\}$ is a sequence of nonnegative numbers satisfying the inequality
$$a_{n+1} \le (1 - \beta_n) a_n + b_n + \beta_n c_n, \quad n \in \mathbb{N},$$
where $\{\beta_n\}$, $\{b_n\}$, $\{c_n\}$ satisfy the conditions:
(i) $\sum_{n=1}^{\infty} \beta_n = \infty$, $\lim_{n \to \infty} \beta_n = 0$;
(ii) $b_n \ge 0$, $\sum_{n=1}^{\infty} b_n < \infty$;
(iii) $\limsup_{n \to \infty} c_n \le 0$.
Then, $\lim_{n \to \infty} a_n = 0$.

3. Main Results

In this section, we focus on the strong convergence analysis for the smallest norm solution of the GEP (1) by using the Tikhonov-type regularization technique. As is well known, the Tikhonov-type regularization technique has been applied effectively in convex optimization to solve ill-posed problems.
In the sequel, we assume that $F : C \times C \to \mathbb{R}$ is a bifunction satisfying (A1)–(A3) and (A4′) and that $A : H \to H$ is monotone and $L$-Lipschitz continuous. For each $\alpha > 0$, we associate the GEP (1) with the so-called regularized generalized equilibrium problem (RGEP):

Find a point $x \in C$ such that $F(x, y) + \langle A x + \alpha x, y - x \rangle \ge 0$, $\forall y \in C$. (5)

We deduce from Lemma 4 below that the RGEP has a unique solution $x_\alpha$ for each $\alpha > 0$. On the other hand, by Remark 1 (iv), $\mathrm{GEP}(F, A)$ is nonempty, closed, and convex; hence there exists a unique point of smallest norm in the solution set $\mathrm{GEP}(F, A)$, which we denote throughout by $x^\dagger$. The relationship between $x_\alpha$ and $x^\dagger$ is described in the following lemma.
Lemma 4.
Let $A : H \to H$ be monotone and $L$-Lipschitz continuous, and let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying Assumption 1 (A1)–(A3) and (A4′). Then it holds that:
(i) for each $\alpha > 0$, the RGEP has a unique solution $x_\alpha$;
(ii) $\lim_{\alpha \to 0^+} \|x_\alpha - x^\dagger\| = 0$, and $\|x_\alpha\| \le \|x^\dagger\|$, $\forall \alpha > 0$;
(iii) $\|x_\alpha - x_\beta\| \le \frac{|\alpha - \beta|}{\alpha} \|x^\dagger\|$, $\forall \alpha, \beta > 0$.
Proof. 
(i) Since $A$ is monotone and Lipschitz continuous, $A + \alpha I$ is also monotone and Lipschitz continuous. From Remark 1 (iv), we find that the solution set of the RGEP is nonempty. For $\alpha > 0$, if $x^*, y^*$ are two solutions of the RGEP, then one has
$$F(x^*, y^*) + \langle A x^* + \alpha x^*, y^* - x^* \rangle \ge 0 \qquad (6)$$
and
$$F(y^*, x^*) + \langle A y^* + \alpha y^*, x^* - y^* \rangle \ge 0. \qquad (7)$$
Adding up (6) and (7), we have
$$F(x^*, y^*) + F(y^*, x^*) + \langle A y^* - A x^*, x^* - y^* \rangle - \alpha \|x^* - y^*\|^2 \ge 0. \qquad (8)$$
In view of (8) and the monotonicity of $F$ and $A$, we obtain
$$\|x^* - y^*\|^2 \le 0,$$
which implies $x^* = y^*$. This completes the proof of (i).
(ii) We first prove that
$$\|x_\alpha\| \le \|w\|, \quad \forall w \in \mathrm{GEP}(F, A). \qquad (9)$$
Taking any $w \in \mathrm{GEP}(F, A)$, we have $F(w, y) + \langle A w, y - w \rangle \ge 0$ for all $y \in C$, which with $y = x_\alpha \in C$ implies
$$F(w, x_\alpha) + \langle A w, x_\alpha - w \rangle \ge 0. \qquad (10)$$
Since $x_\alpha$ is the solution of the RGEP, we then find
$$F(x_\alpha, y) + \langle A x_\alpha + \alpha x_\alpha, y - x_\alpha \rangle \ge 0, \quad \forall y \in C. \qquad (11)$$
Substituting $y = w \in C$ into (11), we obtain
$$F(x_\alpha, w) + \langle A x_\alpha + \alpha x_\alpha, w - x_\alpha \rangle \ge 0. \qquad (12)$$
Summing up inequalities (10) and (12), we get
$$F(w, x_\alpha) + F(x_\alpha, w) + \langle A x_\alpha - A w, w - x_\alpha \rangle + \alpha \langle x_\alpha, w - x_\alpha \rangle \ge 0. \qquad (13)$$
Noticing (13) and using the monotonicity of $F$ and $A$, we obtain
$$\langle x_\alpha, w - x_\alpha \rangle \ge 0,$$
which implies $\|x_\alpha\|^2 \le \langle x_\alpha, w \rangle$. Thus, we have $\|x_\alpha\| \le \|w\|$ for all $w \in \mathrm{GEP}(F, A)$; in particular, $\|x_\alpha\| \le \|x^\dagger\|$. Therefore, $\{x_\alpha\}$ is bounded.
Since $C$ is closed and convex, $C$ is weakly closed. Hence there exist a subsequence $\{x_{\alpha_j}\}$ of $\{x_\alpha\}$ and a point $x^* \in C$ such that $x_{\alpha_j} \rightharpoonup x^*$. In view of the monotonicity of $A$, we deduce, for all $y \in C$, that
$$\begin{aligned} \big(F(x_{\alpha_j}, y) + \langle A y, y - x_{\alpha_j} \rangle\big) - \big(F(x_{\alpha_j}, y) + \langle A x_{\alpha_j} + \alpha_j x_{\alpha_j}, y - x_{\alpha_j} \rangle\big) &= \langle A y - A x_{\alpha_j}, y - x_{\alpha_j} \rangle - \alpha_j \langle x_{\alpha_j}, y - x_{\alpha_j} \rangle \\ &\ge -\alpha_j \langle x_{\alpha_j}, y - x_{\alpha_j} \rangle. \end{aligned} \qquad (14)$$
Due to the fact that $F(x_{\alpha_j}, y) + \langle A x_{\alpha_j} + \alpha_j x_{\alpha_j}, y - x_{\alpha_j} \rangle \ge 0$ and noticing (14), we infer that
$$F(x_{\alpha_j}, y) + \langle A y, y - x_{\alpha_j} \rangle \ge -\alpha_j \langle x_{\alpha_j}, y - x_{\alpha_j} \rangle, \quad \forall y \in C.$$
Letting $j \to \infty$ (the right-hand side tends to $0$, since $\alpha_j \to 0$ and $\{x_{\alpha_j}\}$ is bounded) and noticing (A4′), we obtain
$$F(x^*, y) + \langle A y, y - x^* \rangle \ge 0, \quad \forall y \in C.$$
For $x \in C$ and $t \in (0, 1]$, substituting $y = (1 - t) x^* + t x$ into the above inequality, we have
$$F(x^*, (1 - t) x^* + t x) + t \langle A((1 - t) x^* + t x), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
In view of (A3), we get
$$(1 - t) F(x^*, x^*) + t F(x^*, x) + t \langle A((1 - t) x^* + t x), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
By (A1), after dividing by $t$, we have
$$F(x^*, x) + \langle A((1 - t) x^* + t x), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
Since $A$ is $L$-Lipschitz continuous on $C$, by taking $t \to 0^+$, we have
$$F(x^*, x) + \langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C,$$
which implies
$$x^* \in \mathrm{GEP}(F, A).$$
From (9) and the weak lower semicontinuity of the norm, we obtain
$$\|x^*\| \le \liminf_{j \to \infty} \|x_{\alpha_j}\| \le \|w\|, \quad \forall w \in \mathrm{GEP}(F, A). \qquad (15)$$
Further, due to the fact that $x^\dagger$ is the unique smallest norm solution in $\mathrm{GEP}(F, A)$, we derive $x^* = x^\dagger$. This means $x_{\alpha_j} \rightharpoonup x^\dagger$ as $j \to +\infty$. By following a similar argument to that above, we deduce that the whole net $\{x_\alpha\}$ converges weakly to $x^\dagger$ as $\alpha \to 0^+$.
Next we show that $\lim_{\alpha \to 0^+} \|x_\alpha - x^\dagger\| = 0$. Indeed, noticing the weak lower semicontinuity of the norm, (9) and (15), we obtain
$$\|x^\dagger\| = \|x^*\| \le \liminf_{j \to \infty} \|x_{\alpha_j}\| \le \limsup_{j \to \infty} \|x_{\alpha_j}\| \le \|x^\dagger\|,$$
which means
$$\lim_{j \to \infty} \|x_{\alpha_j}\| = \|x^*\|.$$
In view of Lemma 2 and the fact that $x_{\alpha_j} \rightharpoonup x^*$, we derive that $\lim_{j \to \infty} x_{\alpha_j} = x^* = x^\dagger$. By following the lines of the proof above, we obtain that the whole net $\{x_\alpha\}$ converges strongly to $x^\dagger$.
(iii) Assume that $x_\alpha, x_\beta$ are the solutions of the RGEP with parameters $\alpha$ and $\beta$, respectively. Then we have
$$F(x_\alpha, x_\beta) + \langle A x_\alpha + \alpha x_\alpha, x_\beta - x_\alpha \rangle \ge 0$$
and
$$F(x_\beta, x_\alpha) + \langle A x_\beta + \beta x_\beta, x_\alpha - x_\beta \rangle \ge 0.$$
From the above two inequalities and using the monotonicity of $A$ and $F$, one obtains
$$\alpha \langle x_\alpha, x_\beta - x_\alpha \rangle + \beta \langle x_\beta, x_\alpha - x_\beta \rangle \ge 0.$$
It follows that
$$\alpha \|x_\alpha - x_\beta\|^2 \le (\beta - \alpha) \langle x_\beta, x_\alpha - x_\beta \rangle \le |\beta - \alpha| \|x_\beta\| \|x_\alpha - x_\beta\|.$$
Simplifying and noticing Lemma 4 (ii), we find
$$\|x_\alpha - x_\beta\| \le \frac{|\alpha - \beta|}{\alpha} \|x_\beta\| \le \frac{|\alpha - \beta|}{\alpha} \|x^\dagger\|.$$
This completes the proof. □
In the following, combining Tseng's extragradient method with the regularization method, we propose a new numerical algorithm for solving GEPs. Assume that the following two conditions are satisfied:
(C1) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(C2) $\lim_{n \to \infty} \frac{\alpha_n - \alpha_{n+1}}{\alpha_n^2} = 0$.
An example of a sequence $\{\alpha_n\}$ satisfying conditions (C1) and (C2) is $\alpha_n = (n + 1)^{-p}$ with $0 < p < 1$: indeed, by the mean value theorem, $\alpha_n - \alpha_{n+1} \le p (n + 1)^{-p-1}$, so $(\alpha_n - \alpha_{n+1})/\alpha_n^2 \le p (n + 1)^{p-1} \to 0$ since $p < 1$. We now introduce the following Algorithm 3:
Algorithm 3: Tseng's extragradient method with regularization (TEGMR)
Initialization: Set $\varsigma > 0$, $\kappa, \delta \in (0, 1)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = T_{\sigma_n}^F \big( x_n - \sigma_n (A x_n + \alpha_n x_n) \big),$$
where $\sigma_n$ is chosen to be the largest $\sigma \in \{\varsigma, \varsigma\kappa, \varsigma\kappa^2, \ldots\}$ satisfying
$$\sigma \|A y_n - A x_n\| \le \delta \|y_n - x_n\|. \qquad (17)$$
Step 2. Compute
$$x_{n+1} = y_n - \sigma_n (A y_n - A x_n).$$
Set $n := n + 1$ and return to Step 1.
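The following NumPy sketch of Algorithm 3 is ours and is meant only to fix ideas: the resolvent $T_\sigma^F$ must be supplied by the caller (the resolvent argument below is a placeholder for it), and the operator, the regularization sequence, and the iteration count are illustrative assumptions.

import numpy as np

def tegmr(A, resolvent, x0, varsigma=1.0, kappa=0.5, delta=0.5, n_iter=300):
    # Algorithm 3 (TEGMR): regularized resolvent step with the Armijo-like
    # rule (17), followed by Tseng's correction step.
    x = x0.astype(float)
    for n in range(n_iter):
        alpha = 1.0 / (n + 1)            # the sequence used in the experiments of Section 4
        sigma = varsigma
        while True:                      # backtracking for the rule (17)
            y = resolvent(x - sigma * (A(x) + alpha * x), sigma)
            if sigma * np.linalg.norm(A(y) - A(x)) <= delta * np.linalg.norm(y - x):
                break
            sigma *= kappa
        x = y - sigma * (A(y) - A(x))    # Tseng correction
    return x

# Demo with the data of Example 1 below: A(x) = x and T_sigma^F u = u / (1 + 2*sigma).
print(tegmr(lambda x: x, lambda u, s: u / (1.0 + 2.0 * s), np.array([10.0])))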
Lemma 5
([5]). The Armijo-like search rule (17) is well defined and $\min\{\varsigma, \delta\kappa / L\} \le \sigma_n \le \varsigma$.
Theorem 1.
Let $C$ be a nonempty, convex, and closed subset of a real Hilbert space $H$, let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)–(A3) and (A4′), and let $A : H \to H$ be monotone and $L$-Lipschitz continuous. Then the sequence $\{x_n\}$ constructed by Algorithm 3 converges in norm to the minimal norm solution $x^\dagger$ of the GEP (1) under conditions (C1) and (C2).
Proof. 
By (C1) and Lemma 4 (ii), we obtain $x_{\alpha_n} \to x^\dagger$ as $n \to \infty$. Therefore, it is sufficient to prove that
$$\lim_{n \to \infty} \|x_n - x_{\alpha_n}\| = 0.$$
It follows that
$$\begin{aligned} \|x_{n+1} - x_{\alpha_n}\|^2 &= \|y_n - \sigma_n (A y_n - A x_n) - x_{\alpha_n}\|^2 \\ &= \|y_n - x_{\alpha_n}\|^2 + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_n \rangle \\ &= \|y_n - x_n\|^2 + \|x_n - x_{\alpha_n}\|^2 + 2 \langle y_n - x_n, x_n - x_{\alpha_n} \rangle + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_n \rangle \\ &= \|y_n - x_n\|^2 + \|x_n - x_{\alpha_n}\|^2 + 2 \langle y_n - x_n, x_n - y_n \rangle + 2 \langle y_n - x_n, y_n - x_{\alpha_n} \rangle + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_n \rangle \\ &= \|x_n - x_{\alpha_n}\|^2 - \|y_n - x_n\|^2 + 2 \langle y_n - x_n, y_n - x_{\alpha_n} \rangle + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_n \rangle. \end{aligned} \qquad (18)$$
Since $x_\alpha$ is a solution of the RGEP for all $\alpha > 0$, we obtain that
$$x_{\alpha_n} = T_{\sigma_n}^F \big( x_{\alpha_n} - \sigma_n (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big).$$
Using Lemma 1 (ii), we derive
$$\begin{aligned} \|y_n - x_{\alpha_n}\|^2 &= \big\| T_{\sigma_n}^F \big( x_n - \sigma_n (A x_n + \alpha_n x_n) \big) - T_{\sigma_n}^F \big( x_{\alpha_n} - \sigma_n (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big) \big\|^2 \\ &\le \big\langle y_n - x_{\alpha_n}, \ \big( x_n - \sigma_n (A x_n + \alpha_n x_n) \big) - \big( x_{\alpha_n} - \sigma_n (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big) \big\rangle, \end{aligned}$$
which implies
$$\big\langle y_n - x_{\alpha_n}, \ \big( x_n - \sigma_n (A x_n + \alpha_n x_n) \big) - \big( x_{\alpha_n} - \sigma_n (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big) \big\rangle - \|y_n - x_{\alpha_n}\|^2 \ge 0.$$
It follows that
$$\big\langle y_n - x_{\alpha_n}, \ x_n - y_n - \sigma_n (A x_n + \alpha_n x_n) + \sigma_n (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big\rangle \ge 0,$$
or equivalently,
$$\langle y_n - x_{\alpha_n}, x_n - y_n \rangle \ge \sigma_n \big\langle y_n - x_{\alpha_n}, \ (A x_n + \alpha_n x_n) - (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big\rangle. \qquad (20)$$
Substituting (20) into (18) and noticing the monotonicity of $A$ and (17), we derive
$$\begin{aligned} \|x_{n+1} - x_{\alpha_n}\|^2 &\le \|x_n - x_{\alpha_n}\|^2 - \|y_n - x_n\|^2 - 2 \sigma_n \big\langle y_n - x_{\alpha_n}, (A x_n + \alpha_n x_n) - (A x_{\alpha_n} + \alpha_n x_{\alpha_n}) \big\rangle \\ &\quad + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_n \rangle \\ &= \|x_n - x_{\alpha_n}\|^2 - \|y_n - x_n\|^2 + \sigma_n^2 \|A y_n - A x_n\|^2 - 2 \sigma_n \langle y_n - x_{\alpha_n}, A y_n - A x_{\alpha_n} \rangle - 2 \alpha_n \sigma_n \langle y_n - x_{\alpha_n}, x_n - x_{\alpha_n} \rangle \\ &\le \|x_n - x_{\alpha_n}\|^2 - \|y_n - x_n\|^2 + \sigma_n^2 \|A y_n - A x_n\|^2 + 2 \alpha_n \sigma_n \langle y_n - x_{\alpha_n}, x_{\alpha_n} - x_n \rangle \\ &\le \|x_n - x_{\alpha_n}\|^2 - (1 - \delta^2) \|y_n - x_n\|^2 + 2 \alpha_n \sigma_n \langle y_n - x_{\alpha_n}, x_{\alpha_n} - x_n \rangle. \end{aligned} \qquad (21)$$
The last term in (21) is estimated as follows:
$$\begin{aligned} 2 \langle y_n - x_{\alpha_n}, x_{\alpha_n} - x_n \rangle &= 2 \langle y_n - x_n, x_{\alpha_n} - x_n \rangle + 2 \langle x_n - x_{\alpha_n}, x_{\alpha_n} - x_n \rangle \\ &\le 2 \|y_n - x_n\| \|x_{\alpha_n} - x_n\| - 2 \|x_n - x_{\alpha_n}\|^2 \\ &\le \|y_n - x_n\|^2 + \|x_{\alpha_n} - x_n\|^2 - 2 \|x_n - x_{\alpha_n}\|^2 \\ &= \|y_n - x_n\|^2 - \|x_{\alpha_n} - x_n\|^2. \end{aligned} \qquad (22)$$
Substituting (22) into (21), one finds
$$\begin{aligned} \|x_{n+1} - x_{\alpha_n}\|^2 &\le \|x_n - x_{\alpha_n}\|^2 - (1 - \delta^2) \|y_n - x_n\|^2 + \alpha_n \sigma_n \big( \|y_n - x_n\|^2 - \|x_{\alpha_n} - x_n\|^2 \big) \\ &= (1 - \alpha_n \sigma_n) \|x_n - x_{\alpha_n}\|^2 - (1 - \delta^2 - \alpha_n \sigma_n) \|y_n - x_n\|^2. \end{aligned} \qquad (23)$$
Since $0 < \delta < 1$, $\alpha_n \to 0$ and $\min\{\varsigma, \delta\kappa / L\} \le \sigma_n \le \varsigma$ (by Lemma 5), there exists $n_0 \ge 1$ such that
$$1 - \delta^2 - \alpha_n \sigma_n > 0, \quad \sigma_n \alpha_n \in (0, 2), \quad \forall n \ge n_0.$$
Thus, we get from (23) that
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le (1 - \alpha_n \sigma_n) \|x_n - x_{\alpha_n}\|^2. \qquad (24)$$
For each $n \ge n_0$, from the Cauchy–Schwarz inequality and Lemma 4 (iii), we infer
$$\begin{aligned} \|x_{n+1} - x_{\alpha_n}\|^2 &= \|x_{n+1} - x_{\alpha_{n+1}}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + 2 \langle x_{n+1} - x_{\alpha_{n+1}}, x_{\alpha_{n+1}} - x_{\alpha_n} \rangle \\ &\ge \|x_{n+1} - x_{\alpha_{n+1}}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - 2 \|x_{n+1} - x_{\alpha_{n+1}}\| \|x_{\alpha_{n+1}} - x_{\alpha_n}\| \\ &\ge \|x_{n+1} - x_{\alpha_{n+1}}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - \frac{2}{\sigma_n \alpha_n} \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - \frac{\sigma_n \alpha_n}{2} \|x_{n+1} - x_{\alpha_{n+1}}\|^2 \\ &= \frac{2 - \sigma_n \alpha_n}{2} \|x_{n+1} - x_{\alpha_{n+1}}\|^2 - \frac{2 - \sigma_n \alpha_n}{\sigma_n \alpha_n} \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 \\ &\ge \frac{2 - \sigma_n \alpha_n}{2} \|x_{n+1} - x_{\alpha_{n+1}}\|^2 - \frac{2 - \sigma_n \alpha_n}{\sigma_n \alpha_n} \cdot \frac{(\alpha_{n+1} - \alpha_n)^2}{\alpha_n^2} \|x^\dagger\|^2 \\ &= \frac{2 - \sigma_n \alpha_n}{2} \|x_{n+1} - x_{\alpha_{n+1}}\|^2 - \frac{(2 - \sigma_n \alpha_n)(\alpha_{n+1} - \alpha_n)^2}{\sigma_n \alpha_n^3} \|x^\dagger\|^2, \end{aligned}$$
which implies
$$\|x_{n+1} - x_{\alpha_{n+1}}\|^2 \le \frac{2}{2 - \sigma_n \alpha_n} \|x_{n+1} - x_{\alpha_n}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\sigma_n \alpha_n^3} \|x^\dagger\|^2. \qquad (25)$$
Therefore, it follows from (24) and (25) that
$$\begin{aligned} \|x_{n+1} - x_{\alpha_{n+1}}\|^2 &\le \frac{2 - 2 \sigma_n \alpha_n}{2 - \sigma_n \alpha_n} \|x_n - x_{\alpha_n}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\sigma_n \alpha_n^3} \|x^\dagger\|^2 \\ &= \Big( 1 - \frac{\sigma_n \alpha_n}{2 - \sigma_n \alpha_n} \Big) \|x_n - x_{\alpha_n}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\sigma_n \alpha_n^3} \|x^\dagger\|^2. \end{aligned}$$
We deduce from (C1), (C2) and Lemma 3 that $\lim_{n \to \infty} \|x_n - x_{\alpha_n}\|^2 = 0$, which means $\lim_{n \to \infty} x_n = x^\dagger$. This completes the proof. □

4. Application to Split Minimization Problems

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, and let $\psi : C \to \mathbb{R}$ be a convex and continuously differentiable function. Consider the constrained convex minimization problem:

Find a point $v^* \in C$ such that $\psi(v^*) = \min_{v \in C} \psi(v)$. (26)

The convexity of $\psi$ is equivalent to the monotonicity of its gradient $\nabla \psi$. A point $v^* \in C$ is a solution of the minimization problem (26) if and only if it is a solution of the following variational inequality:
$$\langle \nabla \psi(v^*), u - v^* \rangle \ge 0, \quad \forall u \in C.$$
Setting $F(u, v) \equiv 0$, it is not difficult to check that $\mathrm{GEP}(F, \nabla \psi) = \mathrm{VI}(C, \nabla \psi) = \arg\min_{C} \psi$ and $T_\sigma^F = P_C$. From Theorem 1, we have the following result.
Theorem 2.
Let $\psi : C \to \mathbb{R}$ be a convex and continuously differentiable function whose gradient $\nabla \psi$ is $L$-Lipschitz continuous. Suppose that the optimization problem $\min_{x \in C} \psi(x)$ is consistent, i.e., its solution set is nonempty. Then the sequence $\{x_n\}$ constructed by Algorithm 4 converges to the unique minimal norm solution of the minimization problem (26) under conditions (C1) and (C2).
Algorithm 4: Tseng's extragradient method with regularization for minimization problems
Initialization: Set $\varsigma > 0$, $l, \delta \in (0, 1)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C \big( x_n - \sigma_n (\nabla \psi(x_n) + \alpha_n x_n) \big),$$
where $\sigma_n$ is chosen to be the largest $\gamma \in \{\varsigma, \varsigma l, \varsigma l^2, \ldots\}$ satisfying
$$\gamma \|\nabla \psi(y_n) - \nabla \psi(x_n)\| \le \delta \|y_n - x_n\|.$$
Step 2. Compute
$$x_{n+1} = y_n - \sigma_n (\nabla \psi(y_n) - \nabla \psi(x_n)).$$
Set $n := n + 1$ and return to Step 1.
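Since Algorithm 4 is Algorithm 3 with $T_\sigma^F = P_C$ and $A = \nabla \psi$, the tegmr sketch given after Algorithm 3 specializes directly. Below is an illustrative run of ours on the assumed toy problem $\min_{x \in C} \frac{1}{2}\|x - b\|^2$ with $C = [0, 1]^3$; the vector $b$ and all parameter values are our choices, not data from the paper.

import numpy as np

# grad psi(x) = x - b is monotone and 1-Lipschitz; P_C is a componentwise clip.
b = np.array([2.0, -1.0, 0.5])
grad = lambda x: x - b
proj = lambda x: np.clip(x, 0.0, 1.0)

# Reusing tegmr from the sketch after Algorithm 3, with resolvent = P_C
# (the resolvent argument ignores sigma because T_sigma^F = P_C here).
x_min = tegmr(grad, lambda u, s: proj(u), np.zeros(3), n_iter=300)
print(x_min)   # should approach P_C(b) = [1.0, 0.0, 0.5], the constrained minimizer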
We now provide some numerical examples to illustrate the behavior and performance of our Algorithm 3 (TEGMR) and to compare it with the DPM (4) of Hieu et al. [18], Algorithm 1 (TEGM of Thong and Hieu [5]), and Algorithm 2 (TEGMV of Thong and Hieu [5]).
Example 1.
Let $H = \mathbb{R}$ be the set of real numbers. Define the bifunction $F(u, v) = u v - u^2$ for all $u, v \in H$. Let $A : H \to H$ be given by $A x = x$ and $f : H \to H$ be given by $f x = \frac{2x}{3}$ for all $x \in H$. Then $A$ is monotone and $1$-Lipschitz continuous. It is easy to check that $T_{\sigma_n}^F x = \frac{x}{1 + 2 \sigma_n}$. It is also not difficult to check that $\mathrm{GEP}(F, A) = \{0\}$. Let us choose $\alpha_n = \frac{1}{n + 1}$, $\sigma_n = \frac{1}{2}$, $\varsigma = 1$ and $\delta = l = \frac{1}{2}$. We test our Algorithm 3 for different values of $x_0$; see Figure 1, Figure 2 and Figure 3.
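For the reader's convenience, Example 1 can be run with the tegmr sketch from Section 3. This instantiation is ours (in particular, the iteration count), while the data $A x = x$, $T_{\sigma}^F x = x/(1 + 2\sigma)$, and the initial points are those stated above.

import numpy as np

# Reusing tegmr from the sketch after Algorithm 3 with the data of Example 1.
for x0 in (1.0, 5.0, 10.0, 15.0):      # the initial points used in Figures 1-3
    x_end = tegmr(lambda x: x, lambda u, s: u / (1.0 + 2.0 * s),
                  np.array([x0]), n_iter=50)
    print(x0, float(x_end))            # the iterates approach the solution 0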
Example 2.
Let $F : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ be given by $F(x, y) = \langle x, y \rangle - \|x\|^2$ for all $x, y \in \mathbb{R}^m$, $A : \mathbb{R}^m \to \mathbb{R}^m$ be given by $A x = x$, and $f : \mathbb{R}^m \to \mathbb{R}^m$ be given by $f x = \frac{2x}{3}$ for all $x \in \mathbb{R}^m$. The feasible set is $C = \{x \in \mathbb{R}^m : 0 \le x_i \le 1, \ i = 1, \ldots, m\}$. A maximum of 300 iterations is used as the stopping criterion, and the initial values $x_0$ are randomly generated in $(0, 1)$ by rand in MATLAB. Let us choose $\alpha_n = \frac{1}{n + 1}$, $\sigma_n = \frac{1}{2}$, $\varsigma = 1$ and $\delta = l = \frac{1}{2}$. Figure 4 describes the numerical results for Example 2 in $\mathbb{R}^5$ and $\mathbb{R}^{10}$, respectively.
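A sketch of ours for Example 2 with the tegmr function above. For this bifunction, $F(x, y) = \langle x, y - x \rangle$ is linear in $y$, so the defining inequality of the resolvent over the box $C$ reduces to the variational characterization of a projection; we take $T_\sigma^F u = P_C(u / (1 + 2\sigma))$ in line with the closed form stated in Example 1. The random seed and the error printout are our additions.

import numpy as np

resolvent = lambda u, s: np.clip(u / (1.0 + 2.0 * s), 0.0, 1.0)  # projected scaling

np.random.seed(0)                       # for reproducibility (our addition)
for m in (5, 10):                       # the dimensions reported in Figure 4
    x0 = np.random.rand(m)              # x_0 randomly generated in (0, 1)
    x_end = tegmr(lambda x: x, resolvent, x0, n_iter=300)
    print(m, np.linalg.norm(x_end))     # distance to the solution 0 of the GEP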
According to all the graphical representations in Figure 1, Figure 2, Figure 3 and Figure 4, one finds that Algorithm 3 performs better than the DPM (4) and Algorithms 1 and 2 in terms of the number of iterations and the CPU time taken for computation.

5. Conclusions

This paper presents an alternative regularization method for finding the smallest norm solution of a generalized equilibrium problem. The method can be considered an improvement of Tseng's extragradient method and of the regularization method. We prove that the iterative process constructed by the proposed method converges strongly to the smallest norm solution of the generalized equilibrium problem. Several numerical experiments are also given to demonstrate the competitive advantage of the suggested method over other known methods.

Author Contributions

Conceptualization, Y.S. and O.B.; Formal analysis, Y.S. and O.B.; Funding acquisition, Y.S.; Investigation, Y.S.; Methodology, Y.S. and O.B.; Writing—original draft, Y.S. and O.B.; Writing—review & editing, Y.S. and O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Scientific Research Project for Colleges and Universities in Henan Province (grant number 20A110038).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable comments, which improved the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Muu, L.D.; Nguyen, V.H.; Quy, N.V. On Nash–Cournot oligopolistic market equilibrium models with concave cost functions. J. Glob. Optim. 2008, 41, 351–364.
  2. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  3. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  4. Chidume, C.E.; Mărușter, Ș. Iterative methods for the computation of fixed points of demicontractive mappings. J. Comput. Appl. Math. 2010, 234, 861–882.
  5. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2018, 78, 1045–1060.
  6. Alakoya, T.; Jolaoso, L.O.; Mewomo, O.T. A general iterative method for finding common fixed point of finite family of demicontractive mappings with accretive variational inequality problems in Banach spaces. Nonlinear Stud. 2020, 27, 213–236.
  7. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 2021, 88, 1419–1456.
  8. Cai, G.; Dong, Q.-L.; Peng, Y. Strong convergence theorems for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators. J. Optim. Theory Appl. 2021, 188, 447–472.
  9. Yao, Y.; Cho, Y.J.; Liou, Y.C. Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212, 242–250.
  10. Tan, B.; Qin, X.; Yao, J.C. Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 2021, 87, 20.
  11. Song, Y.L.; Ceng, L.C. Strong convergence of a general iterative algorithm for a finite family of accretive operators in Banach spaces. Fixed Point Theory Appl. 2015, 2015, 90.
  12. Jolaoso, L.O.; Karahan, I. A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert spaces. Comput. Appl. Math. 2020, 39, 150.
  13. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Matecon 1976, 12, 747–756.
  14. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  15. Tan, B.; Qin, X. Self adaptive viscosity-type inertial extragradient algorithms for solving variational inequalities with applications. Math. Model. Anal. 2022, 27, 41–58.
  16. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
  17. Bnouhachem, A.; Chen, Y. An iterative method for a common solution of generalized mixed equilibrium problem, variational inequalities and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014, 155.
  18. Hieu, D.V.; Quy, P.K.; Duong, H.N. Strong convergence of double-projection method for variational inequality problems. Comput. Appl. Math. 2021, 40, 73.
  19. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  20. Dadashi, V.; Iyiola, O.S.; Shehu, Y. The subgradient extragradient method for pseudomonotone equilibrium problems. Optimization 2020, 69, 901–923.
  21. Rehman, H.U.; Kumam, P.; Dong, Q.L.; Peng, Y.; Deebani, W. A new Popov's subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2021, 70, 2675–2710.
  22. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Univ. Press: Cambridge, UK, 1990.
  23. Xu, H.K. Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2002, 2, 1–17.
  24. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
Figure 1. Example 1, left: $x_0 = 1$; right: $x_0 = 5$.
Figure 2. Example 1, left: $x_0 = 10$; right: $x_0 = 15$.
Figure 3. Example 1, top left: $x_0 = 1$; top right: $x_0 = 5$; bottom left: $x_0 = 10$; bottom right: $x_0 = 15$.
Figure 4. Example 2, left: $m = 5$; right: $m = 10$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
