Article

A General Inertial Projection-Type Algorithm for Solving Equilibrium Problem in Hilbert Spaces with Applications in Fixed-Point Problems

by Nopparat Wairojjana 1, Habib ur Rehman 2, Manuel De la Sen 3,* and Nuttapol Pakkaranang 2

1 Applied Mathematics Program, Faculty of Science and Technology, Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU), 1 Moo 20 Phaholyothin Road, Klong Neung, Klong Luang, Pathumthani 13180, Thailand
2 KMUTT Fixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3 Institute of Research and Development of Processes IIDP, University of the Basque Country, 48940 Leioa, Spain
* Author to whom correspondence should be addressed.
Axioms 2020, 9(3), 101; https://doi.org/10.3390/axioms9030101
Submission received: 30 July 2020 / Revised: 21 August 2020 / Accepted: 28 August 2020 / Published: 31 August 2020
(This article belongs to the Special Issue Fixed Point Theory and Its Related Topics II)

Abstract

A plethora of applications from mathematical programming, such as minimax problems, penalization, and fixed-point problems, can be framed as equilibrium problems. Most of the techniques for solving such problems are iterative; that is why, in this paper, we introduce a new extragradient-like method to solve equilibrium problems in real Hilbert spaces with a Lipschitz-type condition on the bifunction. The advantage of the method is a variable stepsize formula that is updated at each iteration based on the previous iterations. The method also operates without prior knowledge of the Lipschitz-type constants. The weak convergence of the method is established under mild conditions on the bifunction. As applications, fixed-point theorems involving strict pseudocontractions and results for pseudomonotone variational inequalities are studied. We report various numerical results to illustrate the behaviour of the proposed method and to compare it with existing ones.

1. Introduction

For a nonempty, closed and convex subset $K$ of a real Hilbert space $E$, let $f : E \times E \to \mathbb{R}$ be a bifunction with $f(p_1, p_1) = 0$ for each $p_1 \in K$. An equilibrium problem [1,2] for $f$ on the set $K$ is defined in the following way:

$$\text{Find } u^* \in K \text{ such that } f(u^*, p_1) \geq 0, \quad \forall\, p_1 \in K. \tag{1}$$

The solution set of the problem (1) is denoted by $EP(f, K)$.
The problem (1) is very general: it includes many problems, such as fixed-point problems, variational inequality problems, optimization problems, the Nash equilibrium of non-cooperative games, complementarity problems, saddle point problems, and vector optimization problems (for further details, see [1,3,4]). The equilibrium problem is also known as the famous Ky Fan inequality [2]. The particular format of the equilibrium problem defined in (1) was initiated by Muu and Oettli [5] in 1992, and its theoretical properties were further investigated by Blum and Oettli [1]. The construction of new optimization-based methods, the modification and extension of existing methods, and the examination of their convergence analysis form an important research direction in equilibrium problem theory. Many methods have been developed over the last few years to numerically solve equilibrium problems in both finite- and infinite-dimensional Hilbert spaces, e.g., extragradient algorithms [6,7,8,9,10,11,12,13,14], subgradient algorithms [15,16,17,18,19,20,21], inertial methods [22,23,24,25], and others in [26,27,28,29,30,31,32,33,34].
In particular, the proximal method [35] is an efficient way to solve equilibrium problems; it amounts to solving a minimization problem at each step. This approach is also considered as the two-step extragradient-like method in [6], owing to the early contribution of Korpelevich's extragradient method [36] for solving saddle point problems. More precisely, Tran et al. introduced a method in [6] in which an iterative sequence $\{u_{n+1}\}$ was generated in the following manner:
$$u_0 \in K, \quad v_n = \arg\min_{y \in K} \Big\{ \xi f(u_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\}, \quad u_{n+1} = \arg\min_{y \in K} \Big\{ \xi f(v_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\},$$

where $0 < \xi < \min\big\{ \frac{1}{2k_1}, \frac{1}{2k_2} \big\}$ and $k_1, k_2$ are Lipschitz-type constants. Moreover, $\arg\min_{x \in K} f(x)$ is the value of $x$ in the set $K$ for which $f(x)$ attains its minimum. The iterative sequence generated by the above method converges weakly, but in order to operate the method, prior knowledge of the Lipschitz-type constants is required. These constants are normally unknown or hard to evaluate. In order to overcome this situation, Hieu et al. [12] introduced an extension of the method in [37] to solve equilibrium problems in the following manner: let $[t]_+ := \max\{t, 0\}$, and choose $u_0 \in K$, $\mu \in (0,1)$ and $\xi_0 > 0$, such that
$$v_n = \arg\min_{y \in K} \Big\{ \xi_n f(u_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\}, \quad u_{n+1} = \arg\min_{y \in K} \Big\{ \xi_n f(v_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\},$$

where the stepsize sequence $\{\xi_n\}$ is updated in the following way:

$$\xi_{n+1} = \min \left\{ \xi_n, \; \frac{\mu \big( \|u_n - v_n\|^2 + \|u_{n+1} - v_n\|^2 \big)}{2 \big[ f(u_n, u_{n+1}) - f(u_n, v_n) - f(v_n, u_{n+1}) \big]_+} \right\}.$$
Recently, Vinh and Muu proposed an inertial iterative algorithm in [38] to solve pseudomonotone equilibrium problems. The key contribution is an inertial factor that is used to enhance the convergence speed of the iterative sequence. The iterative sequence $\{u_n\}$ was defined in the following manner:
(i)
Choose $u_{-1}, u_0 \in K$, $\theta \in [0,1)$, $0 < \xi < \min\big\{ \frac{1}{2k_1}, \frac{1}{2k_2} \big\}$, where the sequence $\{\rho_n\} \subset [0, +\infty)$ satisfies the following condition:

$$\sum_{n=0}^{+\infty} \rho_n < +\infty.$$
(ii)
Choose $\theta_n$ satisfying $0 \leq \theta_n \leq \bar{\theta}_n$ and

$$\bar{\theta}_n = \begin{cases} \min \Big\{ \theta, \frac{\rho_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \neq u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$
(iii)
Compute
$$\varrho_n = u_n + \theta_n (u_n - u_{n-1}), \quad v_n = \arg\min_{y \in K} \Big\{ \xi f(\varrho_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\}, \quad u_{n+1} = \arg\min_{y \in K} \Big\{ \xi f(v_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\}.$$
Recently, another efficient inertial algorithm was proposed by Hieu et al. in [39] as follows: let $u_{n-1}, u_n, v_n \in K$, $\theta \in [0,1)$, $0 < \xi \leq \frac{1}{2k_2 + 8k_1}$, and define the sequence $\{u_n\}$ in the following manner:

$$\varrho_n = u_n + \theta (u_n - u_{n-1}), \quad u_{n+1} = \arg\min_{y \in K} \Big\{ \xi f(v_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\},$$
$$\varrho_{n+1} = u_{n+1} + \theta (u_{n+1} - u_n), \quad v_{n+1} = \arg\min_{y \in K} \Big\{ \xi f(v_n, y) + \frac{1}{2} \|\varrho_{n+1} - y\|^2 \Big\}.$$
In this article, we concentrate on projection methods, which are normally well established and easy to execute due to their efficient numerical computation. Motivated by the works of [12,38], we formulate an inertial explicit subgradient extragradient method to solve the pseudomonotone equilibrium problem. These results can be seen as modifications of the methods that appeared in [6,12,38,39]. Under certain mild conditions, a weak convergence theorem is proved for the iterative sequence of the algorithm. Moreover, experimental studies document that the designed method tends to be more efficient than the existing methods presented in [38,39].
The remainder of the paper is arranged as follows: Section 2 contains the elementary results used in this paper. Section 3 presents our main algorithm and proves its convergence. Section 4 and Section 5 incorporate applications of our main results. Section 6 presents numerical results that demonstrate the computational effectiveness of the suggested method.

2. Preliminaries

Let $h : K \to \mathbb{R}$ be a convex function on a nonempty, closed and convex subset $K$ of a real Hilbert space $E$. The subdifferential of $h$ at $p_1 \in K$ is defined by

$$\partial h(p_1) = \{ p_3 \in E : h(p_2) - h(p_1) \geq \langle p_3, p_2 - p_1 \rangle, \; \forall p_2 \in K \}.$$

Let $K$ be a nonempty, closed and convex subset of a real Hilbert space $E$. The normal cone of $K$ at $p_1 \in K$ is defined by

$$N_K(p_1) = \{ p_3 \in E : \langle p_3, p_2 - p_1 \rangle \leq 0, \; \forall p_2 \in K \}.$$

The metric projection $P_K(p_1)$ of $p_1 \in E$ onto the closed and convex subset $K$ of $E$ is defined by

$$P_K(p_1) = \arg\min \{ \|p_2 - p_1\| : p_2 \in K \}.$$
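For a concrete instance, when $K$ is a box constraint (as in the numerical experiments of Section 6), the metric projection reduces to componentwise clipping. The following small Python sketch illustrates this; the box bounds and the test point are illustrative choices, not taken from the paper:

```python
import numpy as np

def project_box(p, lo=-1.0, hi=1.0):
    """Metric projection of p onto the box K = {u : lo <= u_i <= hi}."""
    return np.clip(p, lo, hi)

p = np.array([2.0, -3.0, 0.5])
print(project_box(p))  # componentwise closest point of K to p
```

For more general closed convex sets the projection is itself a small optimization problem, which is why projection-friendly feasible sets are preferred in practice.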
Now, consider the following notions of monotonicity of a bifunction (see [1,40] for details). A bifunction $f : E \times E \to \mathbb{R}$ on $K$, for $\gamma > 0$, is said to be:

(1) γ-strongly monotone if
$$f(p_1, p_2) + f(p_2, p_1) \leq -\gamma \|p_1 - p_2\|^2, \quad \forall p_1, p_2 \in K;$$

(2) monotone if
$$f(p_1, p_2) + f(p_2, p_1) \leq 0, \quad \forall p_1, p_2 \in K;$$

(3) γ-strongly pseudomonotone if
$$f(p_1, p_2) \geq 0 \Longrightarrow f(p_2, p_1) \leq -\gamma \|p_1 - p_2\|^2, \quad \forall p_1, p_2 \in K;$$

(4) pseudomonotone if
$$f(p_1, p_2) \geq 0 \Longrightarrow f(p_2, p_1) \leq 0, \quad \forall p_1, p_2 \in K.$$

We have the following implications from the above definitions:

$$(1) \Longrightarrow (2) \Longrightarrow (4) \quad \text{and} \quad (1) \Longrightarrow (3) \Longrightarrow (4).$$
In general, the converse implications are not true. A bifunction $f : E \times E \to \mathbb{R}$ satisfies the Lipschitz-type condition [41] on a set $K$ if there exist two constants $k_1, k_2 > 0$ such that

$$f(p_1, p_2) + f(p_2, p_3) + k_1 \|p_1 - p_2\|^2 + k_2 \|p_2 - p_3\|^2 \geq f(p_1, p_3), \quad \forall p_1, p_2, p_3 \in K.$$
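As a quick sanity check, a bifunction of the affine form $f(p_1, p_2) = \langle M p_1 + c, p_2 - p_1 \rangle$ (an illustrative assumption, not an example from the paper) satisfies this Lipschitz-type condition with $k_1 = k_2 = \|M\|/2$, because $f(p_1, p_3) - f(p_1, p_2) - f(p_2, p_3) = \langle M(p_1 - p_2), p_3 - p_2 \rangle$, which is bounded via Cauchy-Schwarz and the AM-GM inequality. The sketch below verifies the inequality on random triples:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
M = rng.normal(size=(m, m))
c = rng.normal(size=m)
f = lambda p, q: (M @ p + c) @ (q - p)   # f(p, q) = <M p + c, q - p>
k = np.linalg.norm(M, 2) / 2.0           # k1 = k2 = ||M|| / 2 (spectral norm)

ok = True
for _ in range(1000):
    p1, p2, p3 = rng.normal(size=(3, m))
    lhs = f(p1, p3)
    rhs = (f(p1, p2) + f(p2, p3)
           + k * np.linalg.norm(p1 - p2)**2 + k * np.linalg.norm(p2 - p3)**2)
    ok &= lhs <= rhs + 1e-9               # small slack for floating point
print(bool(ok))
```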
Lemma 1
([42]). Let $K$ be a nonempty, closed and convex subset of $E$ and $P_K : E \to K$ the metric projection from $E$ onto $K$.

(i) For $p_1 \in K$ and $p_2 \in E$, we have
$$\|p_1 - P_K(p_2)\|^2 + \|P_K(p_2) - p_2\|^2 \leq \|p_1 - p_2\|^2.$$

(ii) $p_3 = P_K(p_1)$ if and only if
$$\langle p_1 - p_3, p_2 - p_3 \rangle \leq 0, \quad \forall p_2 \in K.$$

(iii) For any $p_2 \in K$ and $p_1 \in E$,
$$\|p_1 - P_K(p_1)\| \leq \|p_1 - p_2\|.$$
Lemma 2
([43,44]). Let $h : K \to \mathbb{R}$ be a convex, lower semicontinuous and subdifferentiable function on $K$, where $K$ is a nonempty, convex and closed subset of a Hilbert space $E$. Then, $p_1 \in K$ is a minimizer of $h$ if and only if $0 \in \partial h(p_1) + N_K(p_1)$, where $\partial h(p_1)$ and $N_K(p_1)$ denote the subdifferential of $h$ at $p_1 \in K$ and the normal cone of $K$ at $p_1$, respectively.
Lemma 3
([45]). Let $\{u_n\}$ be a sequence in $E$ and $K \subset E$ such that the following conditions are satisfied:

(i) for every $u \in K$, $\lim_{n \to \infty} \|u_n - u\|$ exists;

(ii) each sequentially weak cluster point of the sequence $\{u_n\}$ belongs to $K$.

Then, $\{u_n\}$ converges weakly to some element in $K$.
Lemma 4
([46]). Let $\{q_n\}$ and $\{p_n\}$ be sequences of non-negative real numbers satisfying $q_{n+1} \leq q_n + p_n$ for each $n \in \mathbb{N}$. If $\sum p_n < \infty$, then $\lim_{n \to \infty} q_n$ exists.
Lemma 5
([47]). For every $p_1, p_2 \in E$ and $\zeta \in \mathbb{R}$,

$$\|\zeta p_1 + (1 - \zeta) p_2\|^2 = \zeta \|p_1\|^2 + (1 - \zeta) \|p_2\|^2 - \zeta (1 - \zeta) \|p_1 - p_2\|^2.$$
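Lemma 5 is a standard Hilbert-space identity and can be checked numerically; the following sketch verifies it in $\mathbb{R}^3$ (the specific vectors and the value of $\zeta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=3), rng.normal(size=3)
zeta = 0.37  # any real number, including values outside [0, 1]

lhs = np.linalg.norm(zeta * p1 + (1 - zeta) * p2) ** 2
rhs = (zeta * np.linalg.norm(p1) ** 2
       + (1 - zeta) * np.linalg.norm(p2) ** 2
       - zeta * (1 - zeta) * np.linalg.norm(p1 - p2) ** 2)
print(abs(lhs - rhs))  # zero up to floating-point rounding
```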
Suppose that the bifunction $f$ satisfies the following conditions:

(f1) $f$ is pseudomonotone on $K$ and $f(p_2, p_2) = 0$ for every $p_2 \in K$;

(f2) $f$ satisfies the Lipschitz-type condition on $E$ with constants $k_1 > 0$ and $k_2 > 0$;

(f3) $\limsup_{n \to \infty} f(p_n, v) \leq f(p^*, v)$ for every $v \in K$ and every $\{p_n\} \subset K$ satisfying $p_n \rightharpoonup p^*$;

(f4) $f(p_1, \cdot)$ is convex and subdifferentiable on $E$ for all $p_1 \in E$.

3. The Modified Extragradient Algorithm for the Problem (1) and Its Convergence Analysis

We provide a method consisting of two strongly convex minimization problems, with an inertial term and an explicit stepsize formula that are used to enhance the convergence rate of the iterative sequence and to make the algorithm independent of the Lipschitz-type constants. For the sake of simplicity in the presentation, we use the notation $[t]_+ = \max\{0, t\}$ and follow the conventions $\frac{0}{0} = +\infty$ and $\frac{a}{0} = +\infty$ $(a \neq 0)$. The detailed method is provided below (Algorithm 1):
Algorithm 1 (Modified Extragradient Algorithm for the Problem (1))

• Initialization: Choose $u_{-1}, u_0 \in K$, $\xi_0 > 0$, $\mu \in (0,1)$, $\{\beta_n\} \subset (0,1]$, $\theta \in [0,1)$ and $\{\rho_n\} \subset [0, +\infty)$ satisfying
$$\sum_{n=0}^{+\infty} \rho_n < +\infty.$$

• Iterative steps: Choose $\theta_n$ satisfying $0 \leq \theta_n \leq \bar{\theta}_n$ and
$$\bar{\theta}_n = \begin{cases} \min \Big\{ \theta, \frac{\rho_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \neq u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$

• Step 1: Compute
$$v_n = \arg\min_{y \in K} \Big\{ \xi_n f(\varrho_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\},$$
where $\varrho_n = u_n + \theta_n (u_n - u_{n-1})$. If $\varrho_n = v_n$, STOP. Else, go to the next step.

• Step 2: Compute $u_{n+1} = (1 - \beta_n) \varrho_n + \beta_n z_n$, where
$$z_n = \arg\min_{y \in K} \Big\{ \xi_n f(v_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\}.$$

• Step 3: Update the stepsize in the following manner:
$$\xi_{n+1} = \min \left\{ \xi_n, \; \frac{\mu \|\varrho_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \big[ f(\varrho_n, z_n) - f(\varrho_n, v_n) - f(v_n, z_n) \big]_+} \right\}.$$
Set $n := n + 1$ and return to Iterative steps.
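To make the steps concrete, the following Python sketch runs Algorithm 1 on an illustrative instance with $f(p, q) = \langle L(p), q - p \rangle$ for a monotone affine operator $L$ on a box, so that both argmin subproblems reduce to projections (see Section 5). The operator, feasible set and parameter values are assumptions made for this demonstration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
M = A @ A.T + np.eye(n)                  # symmetric positive definite, so L is monotone
c = rng.normal(size=n)
L = lambda p: M @ p + c                  # toy affine operator
f = lambda p, q: L(p) @ (q - p)          # bifunction f(p, q) = <L(p), q - p>
proj = lambda p: np.clip(p, -5.0, 5.0)   # P_K for the box K = [-5, 5]^n

mu, theta, beta, xi = 0.5, 0.5, 0.8, 0.5
u_prev, u = rng.normal(size=n), rng.normal(size=n)

for k in range(1, 2000):
    d = np.linalg.norm(u - u_prev)
    theta_k = min(theta, (1.0 / k**2) / d) if d > 0 else theta  # 0 <= theta_k <= theta_bar
    varrho = u + theta_k * (u - u_prev)                         # inertial extrapolation
    v = proj(varrho - xi * L(varrho))    # Step 1: argmin reduces to a projection here
    if np.linalg.norm(varrho - v) < 1e-12:
        break                            # varrho is (numerically) a solution
    z = proj(varrho - xi * L(v))         # Step 2
    u_prev, u = u, (1 - beta) * varrho + beta * z
    denom = f(varrho, z) - f(varrho, v) - f(v, z)               # Step 3: stepsize update
    if denom > 0:                                               # convention a/0 = +infinity
        num = mu * (np.linalg.norm(varrho - v)**2 + np.linalg.norm(z - v)**2)
        xi = min(xi, num / (2 * denom))

residual = np.linalg.norm(u - proj(u - L(u)))  # natural residual of the VI form
print(residual)
```

The residual printed at the end is only a convergence monitor for the sketch; note that the stepsize never needs the Lipschitz-type constants and stays bounded away from zero, as guaranteed by Lemma 6.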
Lemma 6.
The sequence $\{\xi_n\}$ is monotonically decreasing with lower bound $\min \big\{ \frac{\mu}{2 \max\{k_1, k_2\}}, \xi_0 \big\}$, and it converges to some $\xi > 0$.

Proof. 
The definition of the sequence $\{\xi_n\}$ implies that it is monotonically decreasing. It is given that $f$ satisfies the Lipschitz-type condition with constants $k_1$ and $k_2$. If $f(\varrho_n, z_n) - f(\varrho_n, v_n) - f(v_n, z_n) > 0$, then

$$\frac{\mu \big( \|\varrho_n - v_n\|^2 + \|z_n - v_n\|^2 \big)}{2 \big[ f(\varrho_n, z_n) - f(\varrho_n, v_n) - f(v_n, z_n) \big]} \geq \frac{\mu \big( \|\varrho_n - v_n\|^2 + \|z_n - v_n\|^2 \big)}{2 \big[ k_1 \|\varrho_n - v_n\|^2 + k_2 \|z_n - v_n\|^2 \big]} \geq \frac{\mu}{2 \max\{k_1, k_2\}}.$$

The above implies that $\{\xi_n\}$ has the lower bound $\min \big\{ \frac{\mu}{2 \max\{k_1, k_2\}}, \xi_0 \big\}$. Moreover, there exists a fixed real number $\xi > 0$ such that $\lim_{n \to \infty} \xi_n = \xi$. □
Remark 1.
The summability of $\sum_{n=0}^{+\infty} \rho_n$ and the expression (5) imply that

$$\sum_{n=1}^{\infty} \theta_n \|u_n - u_{n-1}\| \leq \sum_{n=1}^{\infty} \bar{\theta}_n \|u_n - u_{n-1}\| \leq \sum_{n=1}^{\infty} \rho_n < \infty, \tag{7}$$

which implies that

$$\lim_{n \to \infty} \theta_n \|u_n - u_{n-1}\| = 0. \tag{8}$$
Lemma 7.
Let $f : E \times E \to \mathbb{R}$ be a bifunction satisfying the conditions (f1)–(f4). For each $u^* \in EP(f, K)$, we have

$$\|z_n - u^*\|^2 \leq \|\varrho_n - u^*\|^2 - \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|\varrho_n - v_n\|^2 - \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|z_n - v_n\|^2.$$
Proof. 
From the definition of $z_n$ and Lemma 2, we have

$$0 \in \partial_2 \Big\{ \xi_n f(v_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\} (z_n) + N_K(z_n).$$

Thus, there exist $\omega \in \partial f(v_n, z_n)$ and $\bar{\omega} \in N_K(z_n)$ such that

$$\xi_n \omega + z_n - \varrho_n + \bar{\omega} = 0.$$

The above expression implies that

$$\langle \varrho_n - z_n, y - z_n \rangle = \xi_n \langle \omega, y - z_n \rangle + \langle \bar{\omega}, y - z_n \rangle, \quad \forall y \in K.$$

Since $\bar{\omega} \in N_K(z_n)$, we have $\langle \bar{\omega}, y - z_n \rangle \leq 0$ for all $y \in K$. It follows that

$$\langle \varrho_n - z_n, y - z_n \rangle \leq \xi_n \langle \omega, y - z_n \rangle, \quad \forall y \in K. \tag{9}$$

From $\omega \in \partial f(v_n, z_n)$, we have

$$f(v_n, y) - f(v_n, z_n) \geq \langle \omega, y - z_n \rangle, \quad \forall y \in E. \tag{10}$$

Combining the expressions (9) and (10), we obtain

$$\xi_n f(v_n, y) - \xi_n f(v_n, z_n) \geq \langle \varrho_n - z_n, y - z_n \rangle, \quad \forall y \in K. \tag{11}$$

Substituting $y = u^*$ in (11) gives

$$\xi_n f(v_n, u^*) - \xi_n f(v_n, z_n) \geq \langle \varrho_n - z_n, u^* - z_n \rangle. \tag{12}$$

Since $f(u^*, v_n) \geq 0$, the pseudomonotonicity of $f$ gives $f(v_n, u^*) \leq 0$, and hence

$$\langle \varrho_n - z_n, z_n - u^* \rangle \geq \xi_n f(v_n, z_n). \tag{13}$$

From the formula for $\xi_{n+1}$, we obtain

$$f(\varrho_n, z_n) - f(\varrho_n, v_n) - f(v_n, z_n) \leq \frac{\mu \|\varrho_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \xi_{n+1}}. \tag{14}$$

From the expressions (13) and (14), we have

$$\langle \varrho_n - z_n, z_n - u^* \rangle \geq \xi_n \big\{ f(\varrho_n, z_n) - f(\varrho_n, v_n) \big\} - \frac{\mu \xi_n}{2 \xi_{n+1}} \|\varrho_n - v_n\|^2 - \frac{\mu \xi_n}{2 \xi_{n+1}} \|z_n - v_n\|^2. \tag{15}$$

Similarly to expression (11), the definition of $v_n$ gives

$$\xi_n f(\varrho_n, y) - \xi_n f(\varrho_n, v_n) \geq \langle \varrho_n - v_n, y - v_n \rangle, \quad \forall y \in K. \tag{16}$$

Substituting $y = z_n$ in the above expression, we have

$$\xi_n \big\{ f(\varrho_n, z_n) - f(\varrho_n, v_n) \big\} \geq \langle \varrho_n - v_n, z_n - v_n \rangle. \tag{17}$$

Combining the expressions (15) and (17), we obtain

$$\langle \varrho_n - z_n, z_n - u^* \rangle \geq \langle \varrho_n - v_n, z_n - v_n \rangle - \frac{\mu \xi_n}{2 \xi_{n+1}} \|\varrho_n - v_n\|^2 - \frac{\mu \xi_n}{2 \xi_{n+1}} \|z_n - v_n\|^2. \tag{18}$$

We have the identities

$$2 \langle \varrho_n - z_n, z_n - u^* \rangle = \|\varrho_n - u^*\|^2 - \|z_n - \varrho_n\|^2 - \|z_n - u^*\|^2,$$
$$2 \langle v_n - \varrho_n, v_n - z_n \rangle = \|\varrho_n - v_n\|^2 + \|z_n - v_n\|^2 - \|\varrho_n - z_n\|^2.$$

Combining the above identities with (18), we obtain

$$\|z_n - u^*\|^2 \leq \|\varrho_n - u^*\|^2 - \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|\varrho_n - v_n\|^2 - \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|z_n - v_n\|^2. \quad \square$$
Theorem 1.
Let $f : E \times E \to \mathbb{R}$ be a bifunction satisfying the conditions (f1)–(f4), and let $u^*$ belong to the solution set $EP(f, K)$. Then, the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ generated through Algorithm 1 converge weakly to $u^*$. In addition, $\lim_{n \to \infty} P_{EP(f,K)}(u_n) = u^*$.
Proof. 
By the definition of $u_{n+1}$ and Lemma 5, we obtain

$$\|u_{n+1} - u^*\|^2 = \|(1 - \beta_n) \varrho_n + \beta_n z_n - u^*\|^2 = \|(1 - \beta_n)(\varrho_n - u^*) + \beta_n (z_n - u^*)\|^2$$
$$= (1 - \beta_n) \|\varrho_n - u^*\|^2 + \beta_n \|z_n - u^*\|^2 - \beta_n (1 - \beta_n) \|\varrho_n - z_n\|^2$$
$$\leq (1 - \beta_n) \|\varrho_n - u^*\|^2 + \beta_n \|z_n - u^*\|^2. \tag{19}$$

By Lemma 7 and expression (19), we obtain

$$\|u_{n+1} - u^*\|^2 \leq \|\varrho_n - u^*\|^2 - \beta_n \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|\varrho_n - v_n\|^2 - \beta_n \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|z_n - v_n\|^2. \tag{20}$$

Since $\xi_n \to \xi$, there exists a fixed number $\epsilon \in (0, 1 - \mu)$ such that

$$\lim_{n \to \infty} \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) = 1 - \mu > \epsilon > 0.$$

Consequently, there exists $N_1 \in \mathbb{N}$ such that

$$1 - \frac{\mu \xi_n}{\xi_{n+1}} > \epsilon > 0, \quad \forall n \geq N_1. \tag{21}$$

Combining the expressions (20) and (21), we obtain

$$\|u_{n+1} - u^*\|^2 \leq \|\varrho_n - u^*\|^2, \quad \forall n \geq N_1. \tag{22}$$

By the definition of $\varrho_n$, we have

$$\|\varrho_n - u^*\| = \|u_n + \theta_n (u_n - u_{n-1}) - u^*\| \leq \|u_n - u^*\| + \theta_n \|u_n - u_{n-1}\|. \tag{23}$$

From the definition of $\varrho_n$ in Algorithm 1, we also obtain

$$\|\varrho_n - u^*\|^2 = \|(1 + \theta_n)(u_n - u^*) - \theta_n (u_{n-1} - u^*)\|^2$$
$$= (1 + \theta_n) \|u_n - u^*\|^2 - \theta_n \|u_{n-1} - u^*\|^2 + \theta_n (1 + \theta_n) \|u_n - u_{n-1}\|^2 \tag{24}$$
$$\leq (1 + \theta_n) \|u_n - u^*\|^2 - \theta_n \|u_{n-1} - u^*\|^2 + 2 \theta_n \|u_n - u_{n-1}\|^2. \tag{25}$$
Due to (23), the expression (22) can also be written as

$$\|u_{n+1} - u^*\| \leq \|u_n - u^*\| + \theta_n \|u_n - u_{n-1}\|, \quad \forall n \geq N_1. \tag{26}$$

By using Lemma 4 with the expressions (7) and (26), we have

$$\lim_{n \to \infty} \|u_n - u^*\| = l, \quad \text{for some finite } l \geq 0. \tag{27}$$

The equality (8) implies that

$$\lim_{n \to \infty} \theta_n \|u_n - u_{n-1}\| = 0. \tag{28}$$

By letting $n \to \infty$ in (24), we obtain

$$\lim_{n \to \infty} \|\varrho_n - u^*\| = l. \tag{29}$$
From the expressions (20) and (25), we have

$$\|u_{n+1} - u^*\|^2 \leq (1 + \theta_n) \|u_n - u^*\|^2 - \theta_n \|u_{n-1} - u^*\|^2 + 2 \theta_n \|u_n - u_{n-1}\|^2$$
$$- \beta_n \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|\varrho_n - v_n\|^2 - \beta_n \Big( 1 - \frac{\mu \xi_n}{\xi_{n+1}} \Big) \|z_n - v_n\|^2, \tag{30}$$

which further implies that (for $n \geq N_1$)

$$\epsilon \beta_n \|\varrho_n - v_n\|^2 + \epsilon \beta_n \|v_n - z_n\|^2 \leq \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2 + \theta_n \big( \|u_n - u^*\|^2 - \|u_{n-1} - u^*\|^2 \big) + 2 \theta_n \|u_n - u_{n-1}\|^2. \tag{31}$$

By letting $n \to \infty$ in (31), we obtain

$$\lim_{n \to \infty} \|\varrho_n - v_n\| = \lim_{n \to \infty} \|v_n - z_n\| = 0. \tag{32}$$

By using the triangle inequality and expression (32), we obtain

$$\lim_{n \to \infty} \|\varrho_n - z_n\| \leq \lim_{n \to \infty} \|\varrho_n - v_n\| + \lim_{n \to \infty} \|z_n - v_n\| = 0. \tag{33}$$

The expressions (29) and (32) imply that

$$\lim_{n \to \infty} \|v_n - u^*\| = \lim_{n \to \infty} \|z_n - u^*\| = l. \tag{34}$$
It follows from the expressions (27), (29) and (34) that the sequences $\{\varrho_n\}$, $\{u_n\}$, $\{v_n\}$ and $\{z_n\}$ are bounded. Now we apply Lemma 3, for which it is necessary to show that every weak sequential limit point of $\{u_n\}$ lies in the set $EP(f, K)$. Consider $z$ to be a weak limit point of $\{u_n\}$, i.e., there is a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ that converges weakly to $z$. Since $\|u_n - \varrho_n\| = \theta_n \|u_n - u_{n-1}\| \to 0$ by (28) and $\|\varrho_n - v_n\| \to 0$ by (32), we have $\|u_n - v_n\| \to 0$; hence $\{v_{n_k}\}$ also converges weakly to $z$, and so $z \in K$. It remains to show that $z \in EP(f, K)$. From the relation (11), the definition of $\xi_{n+1}$ and (17), we have

$$\xi_{n_k} f(v_{n_k}, y) \geq \xi_{n_k} f(v_{n_k}, z_{n_k}) + \langle \varrho_{n_k} - z_{n_k}, y - z_{n_k} \rangle$$
$$\geq \xi_{n_k} f(\varrho_{n_k}, z_{n_k}) - \xi_{n_k} f(\varrho_{n_k}, v_{n_k}) - \frac{\mu \xi_{n_k}}{2 \xi_{n_k + 1}} \|\varrho_{n_k} - v_{n_k}\|^2 - \frac{\mu \xi_{n_k}}{2 \xi_{n_k + 1}} \|v_{n_k} - z_{n_k}\|^2 + \langle \varrho_{n_k} - z_{n_k}, y - z_{n_k} \rangle$$
$$\geq \langle \varrho_{n_k} - v_{n_k}, z_{n_k} - v_{n_k} \rangle - \frac{\mu \xi_{n_k}}{2 \xi_{n_k + 1}} \|\varrho_{n_k} - v_{n_k}\|^2 - \frac{\mu \xi_{n_k}}{2 \xi_{n_k + 1}} \|v_{n_k} - z_{n_k}\|^2 + \langle \varrho_{n_k} - z_{n_k}, y - z_{n_k} \rangle, \tag{35}$$

where $y \in K$. It follows from (28), (32), (33) and the boundedness of $\{u_n\}$ that the right-hand side tends to zero. Since $\xi_{n_k} > 0$, condition (f3) and $v_{n_k} \rightharpoonup z$ imply

$$0 \leq \limsup_{k \to \infty} f(v_{n_k}, y) \leq f(z, y), \quad \forall y \in K. \tag{36}$$

Hence $f(z, y) \geq 0$ for all $y \in K$, which proves that $z \in EP(f, K)$. Lemma 3 then provides that $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ converge weakly to $u^*$ as $n \to \infty$.
Finally, we prove that $\lim_{n \to \infty} P_{EP(f,K)}(u_n) = u^*$. Let $q_n := P_{EP(f,K)}(u_n)$ for $n \in \mathbb{N}$. For any $u^* \in EP(f, K)$, we have

$$\|q_n\| \leq \|q_n - u_n\| + \|u_n\| \leq \|u^* - u_n\| + \|u_n\|.$$

Clearly, the above implies that the sequence $\{q_n\}$ is bounded. Next, we show that $\{q_n\}$ is a Cauchy sequence. By using Lemma 1(iii) together with (22) and (23), we have

$$\|u_{n+1} - q_{n+1}\| \leq \|u_{n+1} - q_n\| \leq \|u_n - q_n\| + \theta_n \|u_n - u_{n-1}\|, \quad \forall n \geq N_1.$$

Thus, Lemma 4 provides the existence of $\lim_{n \to \infty} \|u_n - q_n\|$. Next, applying (22) and (23) repeatedly, for all $m > n \geq N_1$ we have

$$\|q_n - u_m\| \leq \|q_n - u_{m-1}\| + \theta_{m-1} \|u_{m-1} - u_{m-2}\| \leq \cdots \leq \|q_n - u_n\| + \sum_{k=n}^{m-1} \theta_k \|u_k - u_{k-1}\|. \tag{39}$$

Since $q_m, q_n \in EP(f, K)$ for $m > n \geq N_1$, Lemma 1(i) and (39) give

$$\|q_n - q_m\|^2 \leq \|q_n - u_m\|^2 - \|q_m - u_m\|^2$$
$$\leq \|q_n - u_n\|^2 + \Big( \sum_{k=n}^{m-1} \theta_k \|u_k - u_{k-1}\| \Big)^2 + 2 \|q_n - u_n\| \sum_{k=n}^{m-1} \theta_k \|u_k - u_{k-1}\| - \|q_m - u_m\|^2. \tag{40}$$

The existence of $\lim_{n \to \infty} \|u_n - q_n\|$ and the summability of the series $\sum_n \theta_n \|u_n - u_{n-1}\| < +\infty$ imply that $\|q_n - q_m\| \to 0$ as $m > n \to \infty$. As a result, $\{q_n\}$ is a Cauchy sequence, and due to the closedness of the set $EP(f, K)$, the sequence $\{q_n\}$ converges strongly to some $q^* \in EP(f, K)$. It remains to show that $q^* = u^*$. From Lemma 1(ii) and $u^*, q^* \in EP(f, K)$, we have

$$\langle u_n - q_n, u^* - q_n \rangle \leq 0.$$

Since $q_n \to q^*$ and $u_n \rightharpoonup u^*$, we obtain

$$\langle u^* - q^*, u^* - q^* \rangle \leq 0,$$

which implies that $u^* = q^* = \lim_{n \to \infty} P_{EP(f,K)}(u_n)$. □

4. Applications to Solve Fixed Point Problems

Now, we consider applications of the results discussed in Section 3 to solve fixed-point problems involving κ-strict pseudocontractions. Let $T : K \to K$ be a mapping; the fixed-point problem is formulated in the following manner:

$$\text{Find } u^* \in K \text{ such that } T(u^*) = u^*.$$

The set of such fixed points is denoted by $Fix(T)$.
A mapping $T : K \to K$ is said to be:

(i) sequentially weakly continuous on $K$ if $T(p_n) \rightharpoonup T(p)$ for every sequence $\{p_n\}$ in $K$ satisfying $p_n \rightharpoonup p$ (weak convergence);

(ii) a κ-strict pseudocontraction [48] on $K$ if

$$\|T p_1 - T p_2\|^2 \leq \|p_1 - p_2\|^2 + \kappa \|(p_1 - T p_1) - (p_2 - T p_2)\|^2, \quad \forall p_1, p_2 \in K,$$

which is equivalent to

$$\langle T p_1 - T p_2, p_1 - p_2 \rangle \leq \|p_1 - p_2\|^2 - \frac{1 - \kappa}{2} \|(p_1 - T p_1) - (p_2 - T p_2)\|^2, \quad \forall p_1, p_2 \in K.$$
Note: if we define $f(p_1, p_2) = \langle p_1 - T p_1, p_2 - p_1 \rangle$ for all $p_1, p_2 \in K$, then the problem (1) converts into the fixed-point problem with $2 k_1 = 2 k_2 = \frac{3 - 2\kappa}{1 - \kappa}$. The value of $v_n$ in Algorithm 1 converts into the following:

$$v_n = \arg\min_{y \in K} \Big\{ \xi_n f(\varrho_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\}$$
$$= \arg\min_{y \in K} \Big\{ \xi_n \langle \varrho_n - T(\varrho_n), y - \varrho_n \rangle + \frac{1}{2} \|\varrho_n - y\|^2 + \frac{\xi_n^2}{2} \|\varrho_n - T(\varrho_n)\|^2 - \frac{\xi_n^2}{2} \|\varrho_n - T(\varrho_n)\|^2 \Big\}$$
$$= \arg\min_{y \in K} \Big\{ \frac{1}{2} \big\| y - \varrho_n + \xi_n \big( \varrho_n - T(\varrho_n) \big) \big\|^2 \Big\}$$
$$= P_K \big[ \varrho_n - \xi_n \big( \varrho_n - T(\varrho_n) \big) \big] = P_K \big[ (1 - \xi_n) \varrho_n + \xi_n T(\varrho_n) \big].$$
In a similar way to the expression (44), we obtain

$$z_n = P_K \big[ \varrho_n - \xi_n \big( v_n - T(v_n) \big) \big].$$
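The resulting projection-based fixed-point iteration can be sketched as follows. The mapping $T(x) = c - x$ is an illustrative assumption: it is an isometry, hence a 0-strict pseudocontraction, with the unique fixed point $c/2$; the box $K$, starting points and parameter values are likewise assumptions for this demonstration:

```python
import numpy as np

c = np.array([0.4, -0.6, 0.8])
T = lambda x: c - x                      # nonexpansive, hence 0-strict pseudocontraction
proj = lambda x: np.clip(x, -1.0, 1.0)   # P_K for K = [-1, 1]^3

mu, theta, beta, xi = 0.5, 0.5, 0.8, 0.3
u_prev = np.array([1.0, 1.0, 1.0])
u = np.array([-0.5, 0.9, 0.0])

for k in range(1, 500):
    d = np.linalg.norm(u - u_prev)
    theta_k = min(theta, (1.0 / k**2) / d) if d > 0 else theta
    varrho = u + theta_k * (u - u_prev)              # inertial step
    v = proj(varrho - xi * (varrho - T(varrho)))     # v_n = P_K[(1 - xi) varrho + xi T(varrho)]
    z = proj(varrho - xi * (v - T(v)))               # z_n
    u_prev, u = u, (1 - beta) * varrho + beta * z
    denom = ((varrho - v) - (T(varrho) - T(v))) @ (z - v)  # stepsize denominator
    if denom > 0:
        num = mu * (np.linalg.norm(varrho - v)**2 + np.linalg.norm(z - v)**2)
        xi = min(xi, num / (2 * denom))

print(u)  # approaches the fixed point c / 2
```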
As a consequence of the results in Section 3, we have the following fixed point theorem:
Corollary 1.
Let $T : K \to K$ be a sequentially weakly continuous κ-strict pseudocontraction with $Fix(T) \neq \emptyset$. Let the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ be generated in the following way:

(i) Choose $u_{-1}, u_0 \in K$, $\xi_0 > 0$, $\mu \in (0,1)$, $\{\beta_n\} \subset (0,1]$, $\theta \in [0,1)$ and $\{\rho_n\} \subset [0, +\infty)$ satisfying the condition
$$\sum_{n=0}^{+\infty} \rho_n < +\infty.$$

(ii) Choose $\theta_n$ satisfying $0 \leq \theta_n \leq \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min \Big\{ \theta, \frac{\rho_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \neq u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$

(iii) Compute $u_{n+1} = (1 - \beta_n) \varrho_n + \beta_n z_n$, where
$$\varrho_n = u_n + \theta_n (u_n - u_{n-1}), \quad v_n = P_K \big[ \varrho_n - \xi_n \big( \varrho_n - T(\varrho_n) \big) \big], \quad z_n = P_K \big[ \varrho_n - \xi_n \big( v_n - T(v_n) \big) \big].$$

(iv) Revise the stepsize $\xi_{n+1}$ in the following way:
$$\xi_{n+1} = \min \left\{ \xi_n, \; \frac{\mu \|\varrho_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \big[ \big\langle (\varrho_n - v_n) - \big( T(\varrho_n) - T(v_n) \big), z_n - v_n \big\rangle \big]_+} \right\}.$$

Then, the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ converge weakly to $u^* \in Fix(T)$.

5. Application to Solve Variational Inequality Problems

Now, we consider applications of the results discussed in Section 3 to solve variational inequality problems involving a pseudomonotone and Lipschitz-type continuous operator. Let $L : K \to E$ be an operator; the variational inequality problem is formulated as follows:

$$\text{Find } u^* \in K \text{ such that } \langle L(u^*), y - u^* \rangle \geq 0, \quad \forall y \in K. \tag{VIP}$$

The solution set of (VIP) is denoted by $VI(L, K)$.
A mapping $L : E \to E$ is said to be:

(i) L-Lipschitz continuous on $K$ if
$$\|L(p_1) - L(p_2)\| \leq L \|p_1 - p_2\|, \quad \forall p_1, p_2 \in K;$$

(ii) monotone on $K$ if
$$\langle L(p_1) - L(p_2), p_1 - p_2 \rangle \geq 0, \quad \forall p_1, p_2 \in K;$$

(iii) pseudomonotone on $K$ if
$$\langle L(p_1), p_2 - p_1 \rangle \geq 0 \Longrightarrow \langle L(p_2), p_1 - p_2 \rangle \leq 0, \quad \forall p_1, p_2 \in K.$$
Note: let $f(p_1, p_2) := \langle L(p_1), p_2 - p_1 \rangle$ for all $p_1, p_2 \in K$. Then, the problem (1) translates into the problem (VIP) with $L = 2 k_1 = 2 k_2$. From the value of $v_n$, we have

$$v_n = \arg\min_{y \in K} \Big\{ \xi_n f(\varrho_n, y) + \frac{1}{2} \|\varrho_n - y\|^2 \Big\}$$
$$= \arg\min_{y \in K} \Big\{ \xi_n \langle L(\varrho_n), y - \varrho_n \rangle + \frac{1}{2} \|\varrho_n - y\|^2 + \frac{\xi_n^2}{2} \|L(\varrho_n)\|^2 - \frac{\xi_n^2}{2} \|L(\varrho_n)\|^2 \Big\}$$
$$= \arg\min_{y \in K} \Big\{ \frac{1}{2} \big\| y - \big( \varrho_n - \xi_n L(\varrho_n) \big) \big\|^2 \Big\} = P_K \big[ \varrho_n - \xi_n L(\varrho_n) \big].$$

In a similar way to the expression (49), we obtain

$$z_n = P_K \big[ \varrho_n - \xi_n L(v_n) \big].$$
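The reduction of the argmin step to a projection can be verified numerically. The one-dimensional sketch below uses an assumed toy operator $L(p) = 2p - 1$ on $K = [0, 1]$ and compares a brute-force grid minimization of the subproblem with the closed-form projection:

```python
import numpy as np

K = np.linspace(0.0, 1.0, 100001)       # dense grid over K = [0, 1]
L = lambda p: 2.0 * p - 1.0             # toy operator (illustrative assumption)
varrho, xi = 0.8, 0.4

# subproblem objective: xi <L(varrho), y - varrho> + (1/2)(varrho - y)^2
obj = xi * L(varrho) * (K - varrho) + 0.5 * (varrho - K) ** 2
y_grid = K[np.argmin(obj)]                           # argmin by brute force
y_proj = np.clip(varrho - xi * L(varrho), 0.0, 1.0)  # closed form P_K[varrho - xi L(varrho)]
print(abs(y_grid - y_proj))  # the two agree up to the grid resolution
```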
Suppose that the mapping $L$ satisfies the following conditions:

(L1) $L$ is monotone on $K$ with $VI(L, K) \neq \emptyset$;

(L2) $L$ is L-Lipschitz continuous on $K$ with $L > 0$;

(L3) $L$ is pseudomonotone on $K$ with $VI(L, K) \neq \emptyset$; and,

(L4) $\limsup_{n \to \infty} \langle L(p_n), y - p_n \rangle \leq \langle L(p), y - p \rangle$ for every $y \in K$ and every $\{p_n\} \subset K$ satisfying $p_n \rightharpoonup p$.
Next, if $L$ is monotone, the condition (L4) can be removed. Indeed, (L4) is used so that $f(u, v) = \langle L(u), v - u \rangle$ satisfies the condition (f3), and (f3) is required to show $z \in EP(f, K)$, i.e., $z \in VI(L, K)$ (see (36)). When $L$ is monotone, $z \in VI(L, K)$ can be shown directly. By the monotonicity of the operator $L$, we have

$$\langle L(y), y - v_n \rangle \geq \langle L(v_n), y - v_n \rangle, \quad \forall y \in K. \tag{50}$$

By letting $f(u, v) = \langle L(u), v - u \rangle$ in expression (35), it follows that

$$\limsup_{k \to \infty} \langle L(v_{n_k}), y - v_{n_k} \rangle \geq 0, \quad \forall y \in K. \tag{51}$$

Combining (50) with (51), we deduce that

$$\limsup_{k \to \infty} \langle L(y), y - v_{n_k} \rangle \geq 0, \quad \forall y \in K. \tag{52}$$

Since $v_{n_k} \rightharpoonup z \in K$, this provides $\langle L(y), y - z \rangle \geq 0$ for all $y \in K$. Let $v_t = (1 - t) z + t y$ for $t \in [0, 1]$. Since $v_t \in K$ for $t \in (0, 1)$, we have

$$0 \leq \langle L(v_t), v_t - z \rangle = t \langle L(v_t), y - z \rangle.$$

That is, $\langle L(v_t), y - z \rangle \geq 0$ for every $t \in (0, 1)$. Since $v_t \to z$ as $t \to 0$, the continuity of $L$ gives $\langle L(z), y - z \rangle \geq 0$ for all $y \in K$; consequently, $z \in VI(L, K)$.
Corollary 2.
Let $L : K \to E$ be a mapping satisfying the conditions (L1)–(L2). Assume that the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ are generated in the following manner:

(i) Choose $u_{-1}, u_0 \in K$, $\xi_0 > 0$, $\mu \in (0,1)$, $\{\beta_n\} \subset (0,1]$, $\theta \in [0,1)$ and $\{\rho_n\} \subset [0, +\infty)$ such that
$$\sum_{n=0}^{+\infty} \rho_n < +\infty.$$

(ii) Let $\theta_n$ satisfy $0 \leq \theta_n \leq \bar{\theta}_n$ and
$$\bar{\theta}_n = \begin{cases} \min \Big\{ \theta, \frac{\rho_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \neq u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$

(iii) Compute $u_{n+1} = (1 - \beta_n) \varrho_n + \beta_n z_n$, where
$$\varrho_n = u_n + \theta_n (u_n - u_{n-1}), \quad v_n = P_K \big[ \varrho_n - \xi_n L(\varrho_n) \big], \quad z_n = P_K \big[ \varrho_n - \xi_n L(v_n) \big].$$

(iv) The stepsize $\xi_{n+1}$ is revised in the following way:
$$\xi_{n+1} = \min \left\{ \xi_n, \; \frac{\mu \|\varrho_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \big[ \langle L(\varrho_n) - L(v_n), z_n - v_n \rangle \big]_+} \right\}.$$

Then, the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ converge weakly to $u^* \in VI(L, K)$.
Corollary 3.
Let $L : K \to E$ be a mapping satisfying the conditions (L2)–(L4). Assume that the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ are generated in the following manner:

(i) Choose $u_{-1}, u_0 \in K$, $\xi_0 > 0$, $\mu \in (0,1)$, $\{\beta_n\} \subset (0,1]$, $\theta \in [0,1)$ and $\{\rho_n\} \subset [0, +\infty)$ such that
$$\sum_{n=0}^{+\infty} \rho_n < +\infty.$$

(ii) Choose $\theta_n$ satisfying $0 \leq \theta_n \leq \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min \Big\{ \theta, \frac{\rho_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \neq u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$

(iii) Compute $u_{n+1} = (1 - \beta_n) \varrho_n + \beta_n z_n$, where
$$\varrho_n = u_n + \theta_n (u_n - u_{n-1}), \quad v_n = P_K \big[ \varrho_n - \xi_n L(\varrho_n) \big], \quad z_n = P_K \big[ \varrho_n - \xi_n L(v_n) \big].$$

(iv) The stepsize $\xi_{n+1}$ is updated in the following way:
$$\xi_{n+1} = \min \left\{ \xi_n, \; \frac{\mu \|\varrho_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \big[ \langle L(\varrho_n) - L(v_n), z_n - v_n \rangle \big]_+} \right\}.$$

Then, the sequences $\{\varrho_n\}$, $\{v_n\}$, $\{z_n\}$ and $\{u_n\}$ converge weakly to $u^* \in VI(L, K)$.

6. Numerical Experiments

This section presents computational results that demonstrate the effectiveness of Algorithm 1 in comparison with Algorithm 3.1 in [39] and Algorithm 1 in [38]. The control parameters are chosen as follows:
(i) For Algorithm 3.1 (Alg3.1) in [39]:
$$\xi = \frac{1}{10 \max\{k_1, k_2\}}, \quad \theta = \frac{1}{2}, \quad \text{error term } D_n = \max \big\{ \|u_{n+1} - v_n\|^2, \|u_{n+1} - \varrho_n\|^2 \big\}.$$

(ii) For Algorithm 1 (Alg1) in [38]:
$$\xi = \frac{1}{4 \max\{k_1, k_2\}}, \quad \theta = \frac{1}{2}, \quad \rho_n = \frac{1}{n^2}, \quad \text{error term } D_n = \|\varrho_n - v_n\|^2.$$

(iii) For our Algorithm 1 (mAlg1):
$$\xi = \frac{1}{2}, \quad \theta = \frac{1}{2}, \quad \mu = \frac{1}{3}, \quad \rho_n = \frac{1}{n^2}, \quad \beta_n = \frac{8}{10}, \quad \text{error term } D_n = \|\varrho_n - v_n\|^2.$$
Example 1.
Consider the Nash–Cournot equilibrium model found in [6]. The bifunction $f$ takes the following form:

$$f(p_1, p_2) = \langle P p_1 + Q p_2 + q, p_2 - p_1 \rangle,$$

where $q \in \mathbb{R}^m$, $P$ and $Q$ are matrices of order $m$, and the Lipschitz constants are $k_1 = k_2 = \frac{1}{2} \|P - Q\|$ (see [6] for more details). In our case, $P$ and $Q$ are taken at random (choose diagonal matrices $A_1$ and $A_2$ with random entries from $[0, 2]$ and $[-2, 0]$, respectively; two random orthogonal matrices $B_1$ and $B_2$ provide a positive semidefinite matrix $M_1 = B_1 A_1 B_1^T$ and a negative semidefinite matrix $M_2 = B_2 A_2 B_2^T$; finally, set $Q = M_1 + M_1^T$, $S = M_2 + M_2^T$ and $P = Q - S$), and the elements of $q$ are taken arbitrarily from $[-1, 1]$. The set $K \subset \mathbb{R}^m$ is taken as

$$K := \{ u \in \mathbb{R}^m : -10 \leq u_i \leq 10 \}.$$

Table 1 and Table 2 and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 present the numerical results obtained by taking $u_{-1} = u_0 = v_0 = (1, \ldots, 1)$ and $D_n \leq 10^{-9}$.
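The random construction of $P$ and $Q$ described above can be sketched as follows (the dimension and random seed are arbitrary choices); by construction, $P - Q = -S$ is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 10
A1 = np.diag(rng.uniform(0.0, 2.0, m))         # diagonal, entries in [0, 2]
A2 = np.diag(rng.uniform(-2.0, 0.0, m))        # diagonal, entries in [-2, 0]
B1, _ = np.linalg.qr(rng.normal(size=(m, m)))  # random orthogonal matrices via QR
B2, _ = np.linalg.qr(rng.normal(size=(m, m)))
M1 = B1 @ A1 @ B1.T                            # positive semidefinite
M2 = B2 @ A2 @ B2.T                            # negative semidefinite
Q = M1 + M1.T
S = M2 + M2.T
P = Q - S
q = rng.uniform(-1.0, 1.0, m)                  # elements of q in [-1, 1]

# P - Q = -S is positive semidefinite (up to rounding)
print(np.linalg.eigvalsh(P - Q).min())
```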
Example 2.
Let $f : K \times K \to \mathbb{R}$ be a bifunction defined in the following way:

$$f(p, q) = \sum_{i=2}^{5} (q_i - p_i) \|p\|, \quad \forall p, q \in \mathbb{R}^5,$$

where $K = \big\{ (p_1, \ldots, p_5) : p_1 \geq -1, \; p_i \geq 1, \; i = 2, \ldots, 5 \big\}$. The bifunction $f$ is Lipschitz-type continuous with constants $k_1 = k_2 = 2$ and satisfies the conditions (f1)–(f4). In order to evaluate the best possible values of the control parameters, a numerical test is performed by varying the inertial factor θ. The numerical comparison results are shown in Table 3, using $u_{-1} = u_0 = v_0 = (2, 3, 2, 5, 5)$ and $D_n \leq 10^{-6}$.
Example 3.
Let $E = L^2([0,1])$ be a Hilbert space with the inner product $\langle p, q \rangle = \int_0^1 p(r) q(r) \, dr$ and the induced norm $\|p\| = \sqrt{\int_0^1 p^2(r) \, dr}$ for $p, q \in E$. Let the set $K := \big\{ p \in L^2([0,1]) : \int_0^1 r p(r) \, dr = 2 \big\}$. Suppose that $f : E \times E \to \mathbb{R}$ is defined by

$$f(p, q) = \langle L(p), q - p \rangle,$$

where $L(p)(r) = \int_0^r p(s) \, ds$ for every $p \in L^2([0,1])$ and $r \in [0, 1]$. The projection onto the set $K$ is computed in the following way:

$$P_K(p)(r) := p(r) - \frac{\int_0^1 r p(r) \, dr - 2}{\int_0^1 r^2 \, dr} \, r, \quad r \in [0, 1].$$
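Since $\int_0^1 r^2 \, dr = 1/3$, this is the standard projection onto the hyperplane $\langle r, p \rangle = 2$ in $L^2([0,1])$. On a discretized grid the formula can be checked numerically; the choice $p(r) = e^r$ below is arbitrary:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

r = np.linspace(0.0, 1.0, 4001)
p = np.exp(r)                      # an arbitrary element of L2([0, 1])

num = trap(r * p, r) - 2.0         # int_0^1 r p(r) dr - 2
den = trap(r**2, r)                # int_0^1 r^2 dr = 1/3
Pp = p - (num / den) * r           # P_K(p)(r)

print(trap(r * Pp, r))             # = 2, so the projection lies in K
```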
Table 4 reports the numerical results obtained using the stopping criterion $D_n \leq 10^{-6}$ and letting $u_{-1} = u_0 = v_0$.
Example 4.
Assume that the bifunction $f$ is defined by

$$f(p, q) = \langle L(p), q - p \rangle \quad \text{with} \quad L(p) = G(p) + H(p),$$

where

$$G(p) = \big( g_1(p), g_2(p), \ldots, g_m(p) \big), \quad H(p) = E p + c, \quad c = (1, 1, \ldots, 1),$$

and

$$g_i(p) = p_{i-1}^2 + p_i^2 + p_{i-1} p_i + p_i p_{i+1}, \quad i = 1, 2, \ldots, m, \quad p_0 = p_{m+1} = 0.$$

The matrix $E$ of order $m$ is defined by

$$e_{i,j} = \begin{cases} 4 & \text{if } j = i, \\ 1 & \text{if } i - j = 1, \\ -2 & \text{if } i - j = -1, \\ 0 & \text{otherwise}, \end{cases}$$

and $K = \big\{ (u_1, \ldots, u_m) \in \mathbb{R}^m : u_i \geq 1, \; i = 2, \ldots, m \big\}$. Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 and Table 5 report the numerical results obtained by taking $u_{-1} = u_0 = v_0 = (1, \ldots, 1)$ and $D_n \leq 10^{-6}$.
Remark 2.
(i) It is significant that the value of $\xi_0$ is crucial; the method performs best when $\xi_0$ is near 1.
(ii) It is observed that the selection of the value ϑ is often significant, and roughly the values $\vartheta \in (3, 6)$ perform better than most other values.

7. Conclusions

In this paper, we considered a convergence result for pseudomonotone equilibrium problems involving a Lipschitz-type continuous bifunction whose Lipschitz-type constants are unknown. We modified the extragradient method with an inertial term and a new stepsize formula. A weak convergence theorem was proved for the sequences generated by the algorithm. Several numerical experiments confirm the effectiveness of the proposed algorithm.

Author Contributions

Conceptualization, H.u.R., N.P. and M.D.l.S.; Writing-Original Draft Preparation, N.W., N.P. and H.u.R.; Writing-Review & Editing, N.W., N.P., H.u.R. and M.D.l.S.; Methodology, N.P. and H.u.R.; Visualization, N.W. and N.P.; Software, H.u.R.; Funding Acquisition, M.D.l.S.; Supervision, M.D.l.S. and H.u.R.; Project Administration, M.D.l.S.; Resources, M.D.l.S. and H.u.R. All authors have read and agreed to the published version of this manuscript.

Funding

This research work was financially supported by the Spanish Government through Grant RTI2018-094336-B-I00 (MCIU/AEI/FEDER, UE) and by the Basque Government through Grant IT1207-19.

Acknowledgments

We are very grateful to the Editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work. Nopparat Wairojjana was partially supported by Valaya Alongkorn Rajabhat University under the Royal Patronage, Thailand. The corresponding author is grateful to the Spanish Government for Grant RTI2018-094336-B-I00 (MCIU/AEI/FEDER, UE) and to the Basque Government for Grant IT1207-19.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  2. Fan, K. A Minimax Inequality and Applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  3. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin, Germany, 2007.
  4. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
  5. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166.
  6. Quoc, T.D.; Muu, L.D.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  7. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2011, 52, 139–159.
  8. Lyashko, S.I.; Semenov, V.V. A New Two-Step Proximal Algorithm of Solving the Problem of Equilibrium Programming. In Optimization and Its Applications in Control and Data Sciences; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 315–325.
  9. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  10. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 2019.
  11. Anh, P.N.; Hai, T.N.; Tuan, P.M. On ergodic algorithms for equilibrium problems. J. Glob. Optim. 2015, 64, 179–195.
  12. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56.
  13. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824.
  14. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32.
  15. ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39.
  16. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107.
  17. Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2016, 111, 823–840.
  18. ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463.
  19. Anh, P.N.; An, L.T.H. The subgradient extragradient method extended to equilibrium problems. Optimization 2012, 64, 225–248.
  20. Muu, L.D.; Quoc, T.D. Regularization Algorithms for Solving Monotone Ky Fan Inequalities with Application to a Nash-Cournot Equilibrium Model. J. Optim. Theory Appl. 2009, 142, 185–204.
  21. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503.
  22. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523.
  23. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization Based Methods for Solving the Equilibrium Problems with Applications in Variational Inequality Problems and Solution of Nash Equilibrium Models. Mathematics 2020, 8, 822.
  24. Yordsorn, P.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. A Weak Convergence Self-Adaptive Method for Solving Pseudomonotone Equilibrium Problems in a Real Hilbert Space. Mathematics 2020, 8, 1165.
  25. Yordsorn, P.; Kumam, P.; ur Rehman, H. Modified two-step extragradient method for solving the pseudomonotone equilibrium programming in a real Hilbert space. Carpathian J. Math. 2020, 36, 313–330.
  26. De la Sen, M.; Agarwal, R.P.; Ibeas, A.; Alonso-Quesada, S. On the Existence of Equilibrium Points, Boundedness, Oscillating Behavior and Positivity of a SVEIRS Epidemic Model under Constant and Impulsive Vaccination. Adv. Differ. Equ. 2011, 2011, 1–32.
  27. De la Sen, M.; Agarwal, R.P. Some fixed point-type results for a class of extended cyclic self-mappings with a more general contractive condition. Fixed Point Theory Appl. 2011, 2011.
  28. Wairojjana, N.; ur Rehman, H.; Argyros, I.K.; Pakkaranang, N. An Accelerated Extragradient Method for Solving Pseudomonotone Equilibrium Problems with Applications. Axioms 2020, 9, 99.
  29. De la Sen, M. On Best Proximity Point Theorems and Fixed Point Theorems for -Cyclic Hybrid Self-Mappings in Banach Spaces. Abstr. Appl. Anal. 2013, 2013, 1–14.
  30. ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models. Energies 2020, 13, 3292.
  31. ur Rehman, H.; Kumam, P.; Dong, Q.L.; Peng, Y.; Deebani, W. A new Popov's subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 2020, 1–36.
  32. Wang, L.; Yu, L.; Li, T. Parallel extragradient algorithms for a family of pseudomonotone equilibrium problems and fixed point problems of nonself-nonexpansive mappings in Hilbert space. J. Nonlinear Funct. Anal. 2020, 2020, 13.
  33. Shahzad, N.; Zegeye, H. Convergence theorems of common solutions for fixed point, variational inequality and equilibrium problems. J. Nonlinear Var. Anal. 2019, 3, 189–203.
  34. Farid, M. The subgradient extragradient method for solving mixed equilibrium problems and fixed point problems in Hilbert spaces. J. Appl. Numer. Optim. 2019, 1, 335–345.
  35. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41.
  36. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
  37. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
  38. Vinh, N.T.; Muu, L.D. Inertial Extragradient Algorithms for Solving Equilibrium Problems. Acta Math. Vietnam. 2019, 44, 639–663.
  39. Hieu, D.V.; Cho, Y.J.; Xiao, Y.B. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029.
  40. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  41. Mastroeni, G. On Auxiliary Principle for Equilibrium Problems. In Nonconvex Optimization and Its Applications; Springer: New York, NY, USA, 2003; pp. 289–298.
  42. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley Classics Library, Wiley: Hoboken, NJ, USA, 1989.
  43. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984.
  44. Ioffe, A.D.; Tihomirov, V.M. (Eds.) Theory of Extremal Problems. In Studies in Mathematics and Its Applications 6; North-Holland, Elsevier: Amsterdam, The Netherlands; New York, NY, USA, 1979.
  45. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–598.
  46. Tan, K.; Xu, H. Approximating Fixed Points of Nonexpansive Mappings by the Ishikawa Iteration Process. J. Math. Anal. Appl. 1993, 178, 301–308.
  47. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 408.
  48. Browder, F.; Petryshyn, W. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228.
Figure 1. Example 1: numerical behaviour of Algorithm 1 for different choices of ξ0, with m = 10.
Figure 2. Example 1: numerical behaviour of Algorithm 1 for different choices of ξ0, with m = 20.
Figure 3. Example 1: numerical behaviour of Algorithm 1 for different choices of ξ0, with m = 50.
Figure 4. Example 1: numerical behaviour of Algorithm 1 for different choices of ξ0, with m = 100.
Figure 5. Example 1: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 60.
Figure 6. Example 1: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 120.
Figure 7. Example 1: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 200.
Figure 8. Example 1: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 300.
Figure 9. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 20.
Figure 10. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 50.
Figure 11. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 100.
Figure 12. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 200.
Figure 13. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38], with m = 300.
Table 1. Example 1: numerical behaviour of Algorithm 1 for different choices of ξ0 and m (time in seconds).

| ξ0   | m = 10 (iter. / time) | m = 20 (iter. / time) | m = 50 (iter. / time) | m = 100 (iter. / time) |
|------|-----------------------|-----------------------|-----------------------|------------------------|
| 1.00 | 20 / 0.1701           | 25 / 0.2153           | 29 / 0.2726           | 40 / 0.5570            |
| 0.80 | 23 / 0.1945           | 27 / 0.2326           | 31 / 0.2788           | 47 / 0.5469            |
| 0.60 | 25 / 0.1995           | 30 / 0.2634           | 35 / 0.3285           | 52 / 0.6228            |
| 0.40 | 29 / 0.1467           | 33 / 0.2979           | 39 / 0.3549           | 55 / 0.6542            |
| 0.20 | 30 / 0.2632           | 35 / 0.2868           | 42 / 0.3849           | 57 / 0.6662            |
Table 2. Example 1: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38].

| m   | Alg3.1 iter. | Alg1 iter. | mAlg1 iter. | Alg3.1 time (s) | Alg1 time (s) | mAlg1 time (s) |
|-----|--------------|------------|-------------|-----------------|---------------|----------------|
| 60  | 50           | 38         | 28          | 0.4362          | 0.3352        | 0.2705         |
| 120 | 57           | 49         | 33          | 0.6888          | 0.6000        | 0.4047         |
| 200 | 66           | 57         | 39          | 1.4708          | 1.0881        | 0.6794         |
| 300 | 62           | 55         | 40          | 1.6213          | 1.4251        | 1.0303         |
Table 3. Example 2: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38].

| θ    | Alg3.1 iter. | Alg1 iter. | mAlg1 iter. | Alg3.1 time (s) | Alg1 time (s) | mAlg1 time (s) |
|------|--------------|------------|-------------|-----------------|---------------|----------------|
| 0.90 | 67           | 56         | 47          | 2.8674          | 2.5324        | 1.6734         |
| 0.70 | 63           | 53         | 45          | 2.7813          | 2.6423        | 1.5026         |
| 0.50 | 57           | 47         | 41          | 2.0912          | 2.4212        | 1.4991         |
| 0.30 | 61           | 48         | 44          | 2.4115          | 2.3567        | 1.5092         |
| 0.10 | 69           | 60         | 47          | 2.9229          | 2.2881        | 1.5098         |
Table 4. Example 3: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38].

| u0        | Alg3.1 iter. | Alg1 iter. | mAlg1 iter. | Alg3.1 time (s) | Alg1 time (s) | mAlg1 time (s) |
|-----------|--------------|------------|-------------|-----------------|---------------|----------------|
| 3t        | 33           | 28         | 19          | 4.7654          | 3.9782        | 2.9342         |
| 3t²       | 38           | 31         | 20          | 5.2598          | 4.1458        | 3.0987         |
| 3 sin(t)  | 41           | 33         | 22          | 5.9876          | 5.3976        | 4.4298         |
| 3 cos(t)  | 47           | 39         | 22          | 6.9921          | 5.4765        | 4.4611         |
| 3 exp(t)² | 58           | 43         | 31          | 8.4691          | 5.8329        | 5.0321         |
Table 5. Example 4: numerical comparison of Algorithm 1 (mAlg1) with Algorithm 3.1 (Alg3.1) in [39] and Algorithm 1 (Alg1) in [38].

| m   | Alg3.1 iter. | Alg1 iter. | mAlg1 iter. | Alg3.1 time (s) | Alg1 time (s) | mAlg1 time (s) |
|-----|--------------|------------|-------------|-----------------|---------------|----------------|
| 20  | 90           | 64         | 50          | 1.0089          | 0.6923        | 0.5541         |
| 50  | 98           | 70         | 52          | 1.6089          | 1.9092        | 0.8464         |
| 100 | 104          | 74         | 58          | 2.9231          | 2.1456        | 1.6970         |
| 200 | 109          | 79         | 61          | 22.5299         | 17.6267       | 13.6542        |
| 300 | 112          | 81         | 63          | 52.6776         | 39.0018       | 36.6305        |

Share and Cite

MDPI and ACS Style

Wairojjana, N.; Rehman, H.u.; De la Sen, M.; Pakkaranang, N. A General Inertial Projection-Type Algorithm for Solving Equilibrium Problem in Hilbert Spaces with Applications in Fixed-Point Problems. Axioms 2020, 9, 101. https://doi.org/10.3390/axioms9030101
