Article

Method for Approximating Solutions to Equilibrium Problems and Fixed-Point Problems without Some Condition Using Extragradient Algorithm

by
Anchalee Sripattanet
and
Atid Kangtunyakarn
*,†
Department of Mathematics, School of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2024, 13(8), 525; https://doi.org/10.3390/axioms13080525
Submission received: 13 June 2024 / Revised: 28 July 2024 / Accepted: 30 July 2024 / Published: 2 August 2024
(This article belongs to the Section Mathematical Analysis)

Abstract

The objective of this research is to present a novel approach that enhances the extragradient algorithm's efficiency for finding an element in the set of fixed points of a nonexpansive mapping and the set of solutions of equilibrium problems. Specifically, we focus on applications involving a pseudomonotone, Lipschitz-type continuous bifunction. Our main contribution lies in establishing a strong convergence theorem for this method without relying on the assumption $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. Moreover, the main theorem can be applied to solve the combination of variational inequality problems (CVIP) effectively. In support of our main result, numerical examples are also presented.

1. Introduction

Throughout this paper, we consider C as a non-empty, closed, and convex subset of a real Hilbert space H, equipped with the norm $\|\cdot\|$ and the inner product $\langle\cdot,\cdot\rangle$. Let $\rho : C \times C \to \mathbb{R}$ be a bifunction. The equilibrium problem $EP(\rho, C)$ consists of finding a point $\hat{\mu} \in C$ such that $\rho(\hat{\mu}, \nu) \ge 0$ for all $\nu \in C$. The set of solutions of $EP(\rho, C)$ is denoted by $Sol(\rho, C)$.
The equilibrium problem ( E P ( ρ , C ) ) encompasses and extends several mathematical concepts, including variational inequalities, Nash equilibrium, optimization problems, and fixed-point problems (see [1,2,3]).
To solve the equilibrium problem, many authors have assumed that the bifunction ρ : C × C R satisfies the following conditions:
(M1) 
$\rho(\mu, \mu) = 0$ for all $\mu \in C$;
(M2) 
ρ is monotone, meaning $\rho(\mu, \nu) + \rho(\nu, \mu) \le 0$ for all $\mu, \nu \in C$;
(M3) 
for every $\mu, \nu, \omega \in C$,
$$\lim_{t \to 0^+} \rho\big(t\omega + (1-t)\mu, \nu\big) \le \rho(\mu, \nu);$$
(M4) 
for every $\mu \in C$, $\nu \mapsto \rho(\mu, \nu)$ is lower semicontinuous and convex.
See, for example, [4,5,6].
For a specific case of the equilibrium problem, if $\rho(\vartheta, \upsilon) := \langle B(\vartheta), \upsilon - \vartheta\rangle$ for all $\vartheta, \upsilon \in C$, where $B : C \to H$, then the equilibrium problem $EP(\rho, C)$ becomes the following variational inequality $VI(C, B)$:
$$\text{Find } \vartheta \in C \text{ such that } \langle B(\vartheta), \upsilon - \vartheta\rangle \ge 0, \quad \forall \upsilon \in C.$$
The variational inequality, recognized as a powerful and significant tool, has been extensively studied in economics, physics, and numerous other fields in both the pure and applied sciences, as noted in [7,8,9,10].
In 2010, Peng [11] introduced an iterative scheme for finding a common element between the sets arising from equilibrium problems and the set of fixed points in a real Hilbert space. The generated sequences { x n } , { y n } , { t n } , and { z n } are defined by
$$
\begin{cases}
x_0 \in H,\\
\rho_\varphi(\mu_n, y) + \frac{1}{r_n}\langle y - \mu_n, \mu_n - x_n\rangle \ge 0, \quad \forall y \in C,\\
y_n = Pr_C(\mu_n - \chi_n F(\mu_n)),\\
t_n = Pr_C(\mu_n - \chi_n F(y_n)),\\
z_n = \delta_n t_n + (1 - \delta_n) T(t_n),\\
C_n = \{ z \in C : \|z_n - z\|^2 \le \|x_n - z\|^2 - (1 - \delta_n)(\delta_n - \epsilon)\|t_n - T(t_n)\|^2 \},\\
Q_n = \{ z \in H : \langle z - x_n, x_n - x_0\rangle \ge 0 \},\\
x_{n+1} = Pr_{C_n \cap Q_n}(x_0).
\end{cases}
$$
Under certain conditions, the author demonstrated that the sequences $\{x_n\}$, $\{y_n\}$, $\{t_n\}$, and $\{z_n\}$ converge strongly to $Pr_\Omega(x_0)$, where $\Omega := Sol(\rho_\varphi, C) \cap Sol(F, C) \cap F(T)$. Numerous studies have proposed iterative algorithms to find a common element in the solution sets of $F(T)$ and $EP(\rho, C)$ within a real Hilbert space. These algorithms are designed to address a regularized equilibrium problem involving a monotone and Lipschitz-type continuous bifunction on C.
Later, Pham Ngoc Anh introduced a novel iterative approach [12] to identify a common element of the fixed-point set of nonexpansive mappings, where the solution set of equilibrium problems is sought using a method that handled pseudomonotone, Lipschitz-type continuous bifunctions. This approach is distinguished by solving a strongly convex optimization problem at each iteration, unlike methods for regularized equilibrium problems. The iterative process is defined as follows:
$$
\begin{cases}
x_0 \in C,\\
\nu_n = \operatorname{argmin}\{\lambda_n \rho(x_n, \nu) + \tfrac{1}{2}\|x_n - \nu\|^2 : \nu \in C\},\\
t_n = \operatorname{argmin}\{\lambda_n \rho(\nu_n, t) + \tfrac{1}{2}\|x_n - t\|^2 : t \in C\},\\
x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) M(t_n),
\end{cases}
$$
When $\rho(\vartheta, \upsilon) = \langle F(\vartheta), \upsilon - \vartheta\rangle$ for all $\vartheta, \upsilon \in C$, this method recovers the extragradient-type approach proposed by Zeng and Yao [13] for finding a common element of the solution sets of equilibrium problems $EP(\rho, C)$ and fixed-point problems. During each primary iteration, the method only solves strongly convex problems on C, but the convergence proof still relied on the assumption that $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$.
In 2022, Kanikar Muangchoo [14] introduced a self-adaptive subgradient extragradient algorithm that employs a monotone step size rule for solving equilibrium problems. The iterative process is defined by
$$
\begin{cases}
x_0 \in C,\\
\nu_n = \operatorname{argmin}\{\lambda_n \rho(x_n, \nu) + \tfrac{1}{2}\|\nu - x_n\|^2 : \nu \in C\},\\
H_n = \{\omega \in H : \langle x_n - \lambda_n \mu_n - \nu_n, \omega - \nu_n\rangle \le 0\},\\
\omega_n = \operatorname{argmin}\{\lambda_n \rho(\nu_n, \nu) + \tfrac{1}{2}\|\nu - x_n\|^2 : \nu \in H_n\},\\
x_{n+1} = (1 - \gamma_n - \delta_n) x_n + \gamma_n \omega_n,
\end{cases}
$$
where $\{\delta_n\} \subset (0, 1)$, $\{\gamma_n\} \subset (a, b) \subset (0, 1 - \delta_n)$, and $\lambda_n > 0$. She also proved that the generated sequence converges strongly under the assumption that $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. Many other researchers have also proved strong convergence under the assumption $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ (see [12,13,14]).
Question: Can we use the extragradient method to prove convergence without the assumption $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$?
This paper presents a novel approach to the extragradient algorithm. The proposed method aims to find a common element between the set of fixed points of nonexpansive mappings and the solution set of equilibrium problems, specifically for a pseudomonotone, Lipschitz-type continuous bifunction. The iterative sequence is formulated as follows:
$$
\begin{cases}
y_n = \operatorname{argmin}\{\lambda_n \sum_{i=1}^N a_i f_i(x_n, y) + \tfrac{1}{2}\|x_n - y\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\lambda_n \sum_{i=1}^N a_i f_i(y_n, t) + \tfrac{1}{2}\|x_n - t\|^2 : t \in C\},\\
x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n T(t_n), \quad \forall n \in \mathbb{N},
\end{cases}
$$
where $\alpha_n + \beta_n + \gamma_n = 1$, $a_i \in (0, 1)$ for all $i = 1, 2, \ldots, N$, and $\sum_{i=1}^N a_i = 1$.
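In the special case $f_i(x, y) = \langle A_i(x), y - x\rangle$, each argmin step above has a closed form, $y_n = P_C\big(x_n - \lambda_n \sum_{i=1}^N a_i A_i(x_n)\big)$, i.e., a projected (extragradient) step. The following is a minimal numerical sketch of the iteration under that assumption; the operator $A$, the ball $C$, the nonexpansive map $T$, and the parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection P_C onto the closed ball C = {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def step(x, u, n, A, lam=0.1):
    """One iteration: both argmin subproblems reduce to projections
    when f(x, y) = <A(x), y - x>."""
    y = proj_ball(x - lam * A(x))        # y_n = P_C(x_n - lam * A(x_n))
    t = proj_ball(x - lam * A(y))        # t_n = P_C(x_n - lam * A(y_n))
    alpha = 1.0 / (n + 1)                # alpha_n -> 0, sum alpha_n = infinity
    beta = gamma = (1.0 - alpha) / 2.0   # alpha_n + beta_n + gamma_n = 1
    return alpha * u + beta * x + gamma * (t / 2.0)  # T(z) = z/2 is nonexpansive

A = lambda z: 2.0 * z                    # monotone, hence pseudomonotone
x = np.array([0.9, -0.4])                # initial point x_1
u = np.array([0.3, 0.2])                 # anchor point u
for n in range(1, 2000):
    x = step(x, u, n, A)
print(np.linalg.norm(x))                 # tends to 0, the common solution here
```

Here the common solution of the fixed-point problem for $T(z) = z/2$ and the variational inequality for $A(z) = 2z$ is the origin, which the iterate approaches.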
The paper is organized as follows: Section 2 provides essential definitions and lemmas needed for the subsequent sections. Section 3 introduces a new extragradient algorithm and proves a strong convergence theorem without relying on the common assumption $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. In Section 4, we utilize the findings from Section 3 to address the combination of variational inequality problems (CVIPs). Finally, Section 5 presents numerical examples to illustrate the effectiveness of our approach.

2. Preliminaries

We now provide some definitions and lemmas that are used in the subsequent sections.
The normal cone to C at a point $\mu \in C$, denoted by $N_C(\mu)$, is defined as
$$N_C(\mu) := \{\nu \in H : \langle \nu, \omega - \mu\rangle \le 0, \ \forall \omega \in C\}.$$
Let $\rho : H \to (-\infty, +\infty]$ be a proper function. For $x_0 \in Dom(\rho)$, the subdifferential of ρ at $x_0$, a subset of H, is given by
$$\partial\rho(x_0) = \{x^* \in H : \rho(x) \ge \rho(x_0) + \langle x^*, x - x_0\rangle, \ \forall x \in H\},$$
where $Dom(\rho) = \{x \in H : \rho(x) < +\infty\}$. If $\partial\rho(x_0) \neq \emptyset$, then the function ρ is said to be subdifferentiable at $x_0$. If the subdifferential $\partial\rho(x_0)$ is a singleton, then ρ is termed Gâteaux differentiable at $x_0$, and its Gâteaux derivative is represented by $\nabla\rho(x_0)$.
Definition 1.
The mapping $M : C \to C$ is called nonexpansive if
$$\|M\mu - M\nu\| \le \|\mu - \nu\|, \quad \forall \mu, \nu \in C.$$
Definition 2.
A bifunction $\rho : C \times C \to \mathbb{R}$ is said to be:
(i) 
monotone on C if
$$\rho(\upsilon, \vartheta) + \rho(\vartheta, \upsilon) \le 0, \quad \forall \vartheta, \upsilon \in C;$$
(ii) 
pseudomonotone on C if
$$\rho(\mu, \nu) \ge 0 \implies \rho(\nu, \mu) \le 0, \quad \forall \mu, \nu \in C;$$
(iii) 
Lipschitz-type continuous on C if there exist positive constants $c_1$ and $c_2$ such that
$$\rho(\mu, \nu) + \rho(\nu, \omega) \ge \rho(\mu, \omega) - c_1\|\mu - \nu\|^2 - c_2\|\nu - \omega\|^2, \quad \forall \mu, \nu, \omega \in C.$$
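As a concrete instance, $\rho(x, y) = \langle 5x, y - x\rangle$ satisfies the Lipschitz-type inequality with $c_1 = c_2 = \frac{5}{2}$, since $\rho(\mu,\nu) + \rho(\nu,\omega) - \rho(\mu,\omega) = 5\langle \mu - \nu, \nu - \omega\rangle \ge -\frac{5}{2}\big(\|\mu-\nu\|^2 + \|\nu-\omega\|^2\big)$. A quick numerical check of the inequality (the bifunction and constants here are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: 5.0 * np.dot(x, y - x)   # rho(x, y) = <5x, y - x>
c1 = c2 = 2.5                              # candidate Lipschitz-type constants

for _ in range(1000):
    m, v, w = rng.standard_normal((3, 4))  # random points mu, nu, omega in R^4
    lhs = f(m, v) + f(v, w)
    rhs = f(m, w) - c1 * np.dot(m - v, m - v) - c2 * np.dot(v - w, v - w)
    assert lhs >= rhs - 1e-9               # Lipschitz-type inequality holds
print("inequality verified on 1000 random triples")
```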
Lemma 1
([15]). Let $\rho : H \to \mathbb{R}$ be a convex, subdifferentiable function. Then ρ attains its minimum over C at $\hat{\mu} \in C$ if and only if
$$0 \in \partial\rho(\hat{\mu}) + N_C(\hat{\mu}).$$
Lemma 2
([16]). Suppose $\{\zeta_n\}$ is a sequence of real numbers for which there exists a subsequence $\{n_i\} \subset \mathbb{N}$ such that $\zeta_{n_i} < \zeta_{n_i+1}$ for all $i \in \mathbb{N}$. Then there exists a non-decreasing sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to \infty$ as $k \to \infty$, satisfying
(i) 
$\zeta_{m_k} \le \zeta_{m_k+1}$;
(ii) 
$\zeta_k \le \zeta_{m_k+1}$.
Specifically, $m_k = \max\{j \le k : \zeta_j < \zeta_{j+1}\}$.
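The index $m_k$ of Lemma 2 can be computed directly from the formula $m_k = \max\{j \le k : \zeta_j < \zeta_{j+1}\}$. A small sketch illustrating the construction (the sequence $\zeta$ is a made-up toy example, 0-indexed for convenience):

```python
def mainge_index(zeta, k):
    """m_k = max{ j <= k : zeta_j < zeta_{j+1} } as in Lemma 2; returns None
    when no index j <= k satisfies zeta_j < zeta_{j+1}."""
    candidates = [j for j in range(k + 1) if zeta[j] < zeta[j + 1]]
    return max(candidates) if candidates else None

# A toy non-monotone sequence standing in for {zeta_n}.
zeta = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
m = [mainge_index(zeta, k) for k in range(len(zeta) - 1)]
print(m)  # the defined entries are non-decreasing, with zeta[m_k] < zeta[m_k + 1]
```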
Lemma 3
([17]). Let M be a nonexpansive mapping on C. If the fixed-point set $F(M)$ is non-empty, then $I - M$ is demiclosed at zero in the weak topology. That is, if a sequence $\{x_n\} \subset C$ converges weakly to some $\bar{x} \in C$ and $\{(I - M)(x_n)\}$ converges strongly to zero, then $(I - M)(\bar{x}) = 0$, where I denotes the identity operator on H.
Lemma 4
([12]). Assume the following conditions are met for the bifunction $\rho : C \times C \to \mathbb{R}$:
(i) 
ρ is pseudomonotone on C;
(ii) 
ρ is Lipschitz-type continuous on C;
(iii) 
for every $\mu \in C$, the bifunction $\nu \mapsto \rho(\mu, \nu)$ is convex and subdifferentiable.
Assume also that the solution set $Sol(\rho, C)$ and the fixed-point set $F(M)$ have a nonempty, closed intersection.
Let M : C C be a nonexpansive mapping. Define sequences { x n } , { ν n } , and { t n } starting from x 0 C as follows:
$$
\begin{cases}
\nu_n = \operatorname{argmin}\{\tfrac{1}{2}\|\nu - x_n\|^2 + \lambda_n \rho(x_n, \nu) : \nu \in C\},\\
t_n = \operatorname{argmin}\{\tfrac{1}{2}\|t - x_n\|^2 + \lambda_n \rho(\nu_n, t) : t \in C\},\\
x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) M(t_n), \quad \forall n \in \mathbb{N},
\end{cases}
$$
with the sequences $\{\lambda_n\}$ and $\{\alpha_n\}$ satisfying:
(i) 
$\{2 c_1 \lambda_n\} \subset (0, 1 - \delta)$ for some $\delta \in (0, 1)$;
(ii) 
$\lambda_n \le \min\big\{\frac{1}{2 c_1}, \frac{1}{2 c_2}\big\}$;
(iii) 
$\{\alpha_n\} \subset (0, 1)$, $\sum_{n=1}^\infty \alpha_n = \infty$, $\lim_{n\to\infty} \alpha_n = 0$.
Given $\mu^* \in Sol(\rho, C)$, the following inequality holds:
$$\|t_n - \mu^*\|^2 \le \|x_n - \mu^*\|^2 - (1 - 2\lambda_n c_1)\|x_n - \nu_n\|^2 - (1 - 2\lambda_n c_2)\|\nu_n - t_n\|^2.$$
Lemma 5
([18]). Consider a sequence $\{T_n\}$ of non-negative real numbers satisfying
$$T_{n+1} \le (1 - \chi_n) T_n + \chi_n \zeta_n, \quad \forall n \ge 0,$$
where $\{\chi_n\}$ is a sequence in the interval $(0, 1)$ and $\{\zeta_n\}$ is a sequence such that $\lim_{n\to\infty} \chi_n = 0$, $\sum_{n=1}^\infty \chi_n = \infty$, and $\limsup_{n\to\infty} \zeta_n \le 0$.
Then, $\lim_{n\to\infty} T_n = 0$.
To prove our main result, we need to first establish the following lemma.
Lemma 6.
For each $i = 1, 2, \ldots, N$, let $f_i : C \times C \to \mathbb{R}$ be a pseudomonotone mapping with $\bigcap_{i=1}^N EP(f_i) \neq \emptyset$. Then,
$$\bigcap_{i=1}^N EP(f_i) = EP\Big(\sum_{i=1}^N a_i f_i\Big),$$
where $a_i \in (0, 1)$ for all $i = 1, 2, \ldots, N$ and $\sum_{i=1}^N a_i = 1$.
Proof. It is straightforward to show that $\bigcap_{i=1}^N EP(f_i) \subseteq EP\big(\sum_{i=1}^N a_i f_i\big)$. Next, we claim that $EP\big(\sum_{i=1}^N a_i f_i\big) \subseteq \bigcap_{i=1}^N EP(f_i)$. To show this, let $x_0 \in EP\big(\sum_{i=1}^N a_i f_i\big)$. Then, we have
$$\sum_{i=1}^N a_i f_i(x_0, y) \ge 0, \quad \forall y \in C. \tag{1}$$
Hence, for any $\lambda > 0$, $x_0$ is a minimizer over C of
$$\tfrac{1}{2}\|y - x_0\|^2 + \lambda \sum_{i=1}^N a_i f_i(x_0, y).$$
From (1) and Lemma 1, we have
$$0 \in \partial\Big(\tfrac{1}{2}\|y - x_0\|^2 + \lambda \sum_{i=1}^N a_i f_i(x_0, y)\Big)(x_0) + N_C(x_0).$$
Thus, there exist $w \in \partial\big(\sum_{i=1}^N a_i f_i(x_0, \cdot)\big)(x_0)$ and $\bar{w} \in N_C(x_0)$ such that
$$\lambda w = -\bar{w}.$$
So, we obtain
$$0 \le \langle x_0 - y, \bar{w}\rangle = \langle x_0 - y, -\lambda w\rangle = \lambda\langle w, y - x_0\rangle, \quad \forall y \in C.$$
Since $w \in \partial\big(\sum_{i=1}^N a_i f_i(x_0, \cdot)\big)(x_0)$, we have
$$w \in \partial\big(a_k f_k(x_0, \cdot)\big)(x_0),$$
for all $k = 1, 2, \ldots, N$. Then,
$$a_k f_k(x_0, y) - a_k f_k(x_0, x_0) \ge \langle w, y - x_0\rangle \ge 0, \quad \forall y \in C.$$
Therefore,
$$a_k f_k(x_0, y) \ge 0, \quad \forall y \in C \ \text{and} \ k = 1, 2, \ldots, N. \tag{2}$$
From (2) and $a_k$ lying in the interval $(0, 1)$ for all k ranging from 1 to N, we obtain
$$f_k(x_0, y) \ge 0, \quad \forall k = 1, 2, \ldots, N \ \text{and} \ y \in C.$$
Then,
$$x_0 \in \bigcap_{i=1}^N EP(f_i).$$
Thus,
$$EP\Big(\sum_{i=1}^N a_i f_i\Big) \subseteq \bigcap_{i=1}^N EP(f_i).$$
Hence,
$$\bigcap_{i=1}^N EP(f_i) = EP\Big(\sum_{i=1}^N a_i f_i\Big).$$
This completes the proof.

3. Main Results

Theorem 1.
For each $i = 1, 2, \ldots, N$, let $f_i : C \times C \to \mathbb{R}$ be a pseudomonotone mapping that is jointly weakly continuous on $C \times C$ and satisfies the following conditions:
(i) 
$f_i$ is convex in its second argument and lower semicontinuous on C;
(ii) 
$f_i$ is Lipschitz-type continuous on C;
(iii) 
for every $x \in C$, the function $y \mapsto f_i(x, y)$ is convex and subdifferentiable.
Let $T : C \to C$ be a nonexpansive mapping with $F(T) \cap \bigcap_{i=1}^N EP(f_i) \neq \emptyset$.
Let the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ be generated by $x_1$, $u$, and
$$
\begin{cases}
y_n = \operatorname{argmin}\{\lambda_n \sum_{i=1}^N a_i f_i(x_n, y) + \tfrac{1}{2}\|x_n - y\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\lambda_n \sum_{i=1}^N a_i f_i(y_n, t) + \tfrac{1}{2}\|x_n - t\|^2 : t \in C\},\\
x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n T(t_n), \quad \forall n \in \mathbb{N},
\end{cases} \tag{3}
$$
where $\alpha_n + \beta_n + \gamma_n = 1$, $a_i \in (0, 1)$ for all $i = 1, 2, \ldots, N$, and $\sum_{i=1}^N a_i = 1$ for all $n \in \mathbb{N}$.
Assume that the conditions outlined below are satisfied:
(i) 
For each $i = 1, 2, \ldots, N$, $\{2\lambda_n \sum_{i=1}^N a_i c_{i1}\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ lie in $(0, 1 - \delta)$ for some $\delta > 0$;
(ii) 
$\lambda_n \le \min\big\{\frac{1}{2\sum_{i=1}^N a_i c_{i1}}, \frac{1}{2\sum_{i=1}^N a_i c_{i2}}\big\}$;
(iii) 
$\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=1}^\infty \alpha_n = \infty$, where $c_{i1}$ and $c_{i2}$ are the Lipschitz-type constants of $f_i$, $i = 1, 2, \ldots, N$.
Then, the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to $\check{x} = P_{F(T) \cap \bigcap_{i=1}^N EP(f_i)}\, u$.
Proof. Following the same method as in Lemma 4, with $\{y_n\}$, $\{t_n\}$, and $\{x_{n+1}\}$ in (3) in place of $\{\nu_n\}$, $\{t_n\}$, and $\{x_{n+1}\}$ in Lemma 4, we obtain
$$\|t_n - \check{x}\|^2 \le \|x_n - \check{x}\|^2 - \Big(1 - 2\lambda_n \sum_{i=1}^N a_i c_{i1}\Big)\|x_n - y_n\|^2 - \Big(1 - 2\lambda_n \sum_{i=1}^N a_i c_{i2}\Big)\|y_n - t_n\|^2. \tag{4}$$
From Lemma 1, we have
$$t_n = \operatorname{argmin}\Big\{\lambda_n \sum_{i=1}^N a_i f_i(y_n, t) + \tfrac{1}{2}\|t - x_n\|^2 : t \in C\Big\},$$
if and only if
$$0 \in \partial\Big(\lambda_n \sum_{i=1}^N a_i f_i(y_n, t) + \tfrac{1}{2}\|t - x_n\|^2\Big)(t_n) + N_C(t_n).$$
By the established formulation of $t_n$, there exist $w \in \partial\big(\sum_{i=1}^N a_i f_i(y_n, \cdot)\big)(t_n)$ and $\bar{w} \in N_C(t_n)$ such that
$$0 = \lambda_n w + t_n - x_n + \bar{w}.$$
So, we obtain $\bar{w} = -(\lambda_n w + t_n - x_n)$.
By the definition of $N_C(t_n)$, we obtain
$$\langle t_n - t, \lambda_n w + t_n - x_n\rangle \le 0, \quad \forall t \in C.$$
This suggests that
$$\langle t_n - x_n, t - t_n\rangle \ge \lambda_n\langle w, t_n - t\rangle, \quad \forall t \in C.$$
With $t = y_n$, we have
$$\lambda_n\langle w, t_n - y_n\rangle \le \langle t_n - x_n, y_n - t_n\rangle. \tag{6}$$
From $w \in \partial\big(\sum_{i=1}^N a_i f_i(y_n, \cdot)\big)(t_n)$, we obtain
$$\langle w, t - t_n\rangle \le \sum_{i=1}^N a_i f_i(y_n, t) - \sum_{i=1}^N a_i f_i(y_n, t_n), \quad \forall t \in C.$$
Referring to Equation (6), we obtain the following result:
$$\langle t_n - x_n, y_n - t_n\rangle \ge \lambda_n\langle w, t_n - y_n\rangle \ge \lambda_n\Big(\sum_{i=1}^N a_i f_i(y_n, t_n) - \sum_{i=1}^N a_i f_i(y_n, y_n)\Big) = \lambda_n\sum_{i=1}^N a_i f_i(y_n, t_n). \tag{7}$$
Similarly, since
$$y_n = \operatorname{argmin}\Big\{\lambda_n \sum_{i=1}^N a_i f_i(x_n, y) + \tfrac{1}{2}\|y - x_n\|^2 : y \in C\Big\},$$
from Lemma 1 we obtain
$$0 \in \partial\Big(\lambda_n \sum_{i=1}^N a_i f_i(x_n, y) + \tfrac{1}{2}\|y - x_n\|^2\Big)(y_n) + N_C(y_n). \tag{8}$$
From (8), there exist $u_n \in \partial\big(\sum_{i=1}^N a_i f_i(x_n, \cdot)\big)(y_n)$ and $\bar{u}_n \in N_C(y_n)$ such that
$$0 = \lambda_n u_n + y_n - x_n + \bar{u}_n.$$
So, we obtain $\bar{u}_n = -(\lambda_n u_n + y_n - x_n)$.
Given that $u_n \in \partial\big(\sum_{i=1}^N a_i f_i(x_n, \cdot)\big)(y_n)$, we can infer that
$$\sum_{i=1}^N a_i f_i(x_n, y) - \sum_{i=1}^N a_i f_i(x_n, y_n) \ge \langle u_n, y - y_n\rangle, \quad \forall y \in C. \tag{9}$$
By the definition of $N_C(y_n)$, we have
$$\langle y_n - y, \lambda_n u_n + y_n - x_n\rangle \le 0, \quad \forall y \in C.$$
This indicates that
$$\langle y_n - y, y_n - x_n\rangle \le \lambda_n\langle u_n, y - y_n\rangle, \quad \forall y \in C. \tag{10}$$
From (9) and (10), we obtain
$$\lambda_n\Big(\sum_{i=1}^N a_i f_i(x_n, y) - \sum_{i=1}^N a_i f_i(x_n, y_n)\Big) \ge \langle y_n - y, y_n - x_n\rangle, \quad \forall y \in C. \tag{11}$$
Substituting $y = t_n \in C$, we have
$$\lambda_n\Big(\sum_{i=1}^N a_i f_i(x_n, t_n) - \sum_{i=1}^N a_i f_i(x_n, y_n)\Big) \ge \langle y_n - t_n, y_n - x_n\rangle. \tag{12}$$
From (12), it follows that
$$\langle y_n - x_n, t_n - y_n\rangle \ge \lambda_n\Big(\sum_{i=1}^N a_i f_i(x_n, y_n) - \sum_{i=1}^N a_i f_i(x_n, t_n)\Big). \tag{13}$$
Adding (7) and (13), we obtain
$$\langle t_n - y_n, y_n - x_n - t_n + x_n\rangle \ge \lambda_n\Big(\sum_{i=1}^N a_i f_i(x_n, y_n) - \sum_{i=1}^N a_i f_i(x_n, t_n) + \sum_{i=1}^N a_i f_i(y_n, t_n)\Big).$$
Since each $f_i$ is Lipschitz-type continuous on C, we obtain
$$\|t_n - y_n\|^2 \le \lambda_n \sum_{i=1}^N a_i c_{i1}\|x_n - y_n\|^2 + \lambda_n \sum_{i=1}^N a_i c_{i2}\|y_n - t_n\|^2,$$
that is,
$$\Big(1 - \lambda_n \sum_{i=1}^N a_i c_{i2}\Big)\|t_n - y_n\|^2 \le \lambda_n \sum_{i=1}^N a_i c_{i1}\|x_n - y_n\|^2. \tag{14}$$
Following this, we will demonstrate that { x n } is bounded.
Let $\check{x} \in F(T) \cap \bigcap_{i=1}^N EP(f_i)$. According to the definition of $x_{n+1}$ and (4), we obtain
$$\begin{aligned}
\|x_{n+1} - \check{x}\| &\le \alpha_n\|u - \check{x}\| + \beta_n\|x_n - \check{x}\| + \gamma_n\|T(t_n) - \check{x}\|\\
&\le \alpha_n\|u - \check{x}\| + \beta_n\|x_n - \check{x}\| + \gamma_n\|x_n - \check{x}\|\\
&= \alpha_n\|u - \check{x}\| + (1 - \alpha_n)\|x_n - \check{x}\|\\
&\le \max\{\|x_n - \check{x}\|, \|u - \check{x}\|\}.
\end{aligned}$$
By induction, we obtain $\|x_n - \check{x}\| \le \max\{\|x_1 - \check{x}\|, \|u - \check{x}\|\}$ for all $n \in \mathbb{N}$. This shows that $\{x_n\}$ is bounded.
According to the established formulation of x n + 1 and (4), we have
$$\begin{aligned}
\|x_{n+1} - \check{x}\|^2 &\le \alpha_n\|u - \check{x}\|^2 + \beta_n\|x_n - \check{x}\|^2 + \gamma_n\|T(t_n) - \check{x}\|^2 - \beta_n\gamma_n\|x_n - T(t_n)\|^2\\
&\le \alpha_n\|u - \check{x}\|^2 + \beta_n\|x_n - \check{x}\|^2 + \gamma_n\|t_n - \check{x}\|^2 - \beta_n\gamma_n\|x_n - T(t_n)\|^2\\
&\le \alpha_n\|u - \check{x}\|^2 + \beta_n\|x_n - \check{x}\|^2 - \beta_n\gamma_n\|x_n - T(t_n)\|^2\\
&\quad + \gamma_n\Big\{\|x_n - \check{x}\|^2 - \Big(1 - 2\lambda_n\sum_{i=1}^N a_i c_{i1}\Big)\|x_n - y_n\|^2 - \Big(1 - 2\lambda_n\sum_{i=1}^N a_i c_{i2}\Big)\|y_n - t_n\|^2\Big\}\\
&\le \alpha_n\|u - \check{x}\|^2 + \beta_n\|x_n - \check{x}\|^2 + \gamma_n\|x_n - \check{x}\|^2 - \beta_n\gamma_n\|x_n - T(t_n)\|^2\\
&= \alpha_n\|u - \check{x}\|^2 + (1 - \alpha_n)\|x_n - \check{x}\|^2 - \beta_n\gamma_n\|x_n - T(t_n)\|^2,
\end{aligned} \tag{15}$$
leading to
$$\beta_n\gamma_n\|x_n - T(t_n)\|^2 \le \alpha_n\|u - \check{x}\|^2 + (1 - \alpha_n)\|x_n - \check{x}\|^2 - \|x_{n+1} - \check{x}\|^2. \tag{16}$$
From (15), we obtain
$$\begin{aligned}
\|x_{n+1} - \check{x}\|^2 &\le \alpha_n\|u - \check{x}\|^2 + \beta_n\|x_n - \check{x}\|^2 + \gamma_n\Big(\|x_n - \check{x}\|^2 - \Big(1 - 2\lambda_n\sum_{i=1}^N a_i c_{i1}\Big)\|x_n - y_n\|^2\Big)\\
&= \alpha_n\|u - \check{x}\|^2 + (1 - \alpha_n)\|x_n - \check{x}\|^2 - \gamma_n\Big(1 - 2\lambda_n\sum_{i=1}^N a_i c_{i1}\Big)\|x_n - y_n\|^2.
\end{aligned}$$
That is,
$$\gamma_n\Big(1 - 2\lambda_n\sum_{i=1}^N a_i c_{i1}\Big)\|x_n - y_n\|^2 \le \alpha_n\|u - \check{x}\|^2 + (1 - \alpha_n)\|x_n - \check{x}\|^2 - \|x_{n+1} - \check{x}\|^2. \tag{17}$$
Following this, let us consider two cases.
Case I. Assume there is an $n_0 \in \mathbb{N}$ such that the sequence $\{\|x_n - \check{x}\|\}_{n=n_0}^\infty$ is non-increasing. In this scenario, the limit of $\{\|x_n - \check{x}\|\}$ exists.
As the limit of $\{\|x_n - \check{x}\|\}$ exists, from (16), condition (iii), and the assumptions on $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$, we obtain
$$\lim_{n\to\infty}\|x_n - T(t_n)\| = 0. \tag{18}$$
Because the limit of $\{\|x_n - \check{x}\|\}$ exists, from (17), conditions (ii) and (iii), and the assumptions on $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$, we obtain
$$\lim_{n\to\infty}\|x_n - y_n\| = 0. \tag{19}$$
From (14), (19), and conditions (ii) and (iii), we obtain
$$\lim_{n\to\infty}\|y_n - t_n\| = 0. \tag{20}$$
Then,
$$\|T(x_n) - x_n\| \le \|x_n - t_n\| + \|T(t_n) - x_n\| \le \|x_n - y_n\| + \|t_n - y_n\| + \|T(t_n) - x_n\|.$$
From (18)–(20), we have
$$\lim_{n\to\infty}\|T(x_n) - x_n\| = 0. \tag{21}$$
As { x n } is bounded, there exists a subsequence { x n k } of { x n } such that
$$\limsup_{n\to\infty}\langle u - \check{x}, x_n - \check{x}\rangle = \lim_{k\to\infty}\langle u - \check{x}, x_{n_k} - \check{x}\rangle, \tag{22}$$
where $\check{x} = P_{F(T) \cap \bigcap_{i=1}^N EP(f_i)}\, u$.
Without loss of generality, we may assume that $x_{n_k} \rightharpoonup \bar{x} \in H$ as $k \to \infty$.
Hence, (22) reduces to
$$\limsup_{n\to\infty}\langle u - \check{x}, x_n - \check{x}\rangle = \langle u - \check{x}, \bar{x} - \check{x}\rangle. \tag{23}$$
By Lemma 3, Equation (21), and $x_{n_k} \rightharpoonup \bar{x} \in H$ as $k \to \infty$, we obtain
$$\bar{x} = T(\bar{x}). \tag{24}$$
From (19), (20), and $x_{n_k} \rightharpoonup \bar{x} \in H$ as $k \to \infty$, we conclude that
$$y_{n_k} \rightharpoonup \bar{x} \quad \text{and} \quad t_{n_k} \rightharpoonup \bar{x} \quad \text{as } k \to \infty.$$
From (11) and the assumptions on $f_i$ for all $i = 1, 2, \ldots, N$, we have
$$\lambda_{n_k}\Big(\sum_{i=1}^N a_i f_i(x_{n_k}, y) - \sum_{i=1}^N a_i f_i(x_{n_k}, y_{n_k})\Big) \ge \langle y_{n_k} - y, y_{n_k} - x_{n_k}\rangle, \quad \forall y \in C,$$
and letting $k \to \infty$, we obtain $\sum_{i=1}^N a_i f_i(\bar{x}, y) \ge 0$ for all $y \in C$. Thus,
$$\bar{x} \in EP\Big(\sum_{i=1}^N a_i f_i\Big). \tag{25}$$
From Lemma 6 and (25), we obtain
$$\bar{x} \in \bigcap_{i=1}^N EP(f_i). \tag{26}$$
From (24) and (26), we obtain
$$\bar{x} \in F(T) \cap \bigcap_{i=1}^N EP(f_i).$$
Using this and $\check{x} = P_{F(T) \cap \bigcap_{i=1}^N EP(f_i)}\, u$, we obtain
$$\langle u - \check{x}, \bar{x} - \check{x}\rangle \le 0.$$
Therefore, combined with (23), we have
$$\limsup_{n\to\infty}\langle u - \check{x}, x_n - \check{x}\rangle \le 0. \tag{27}$$
By (4) and $\check{x} = P_{F(T) \cap \bigcap_{i=1}^N EP(f_i)}\, u$, we obtain
$$\begin{aligned}
\|x_{n+1} - \check{x}\|^2 &\le \beta_n\|x_n - \check{x}\|^2 + \gamma_n\|T(t_n) - \check{x}\|^2 + 2\alpha_n\langle u - \check{x}, x_{n+1} - \check{x}\rangle\\
&\le \beta_n\|x_n - \check{x}\|^2 + \gamma_n\|x_n - \check{x}\|^2 + 2\alpha_n\langle u - \check{x}, x_{n+1} - \check{x}\rangle\\
&\le (1 - \alpha_n)\|x_n - \check{x}\|^2 + 2\alpha_n\langle u - \check{x}, x_{n+1} - \check{x}\rangle.
\end{aligned} \tag{28}$$
By Lemma 5 and (27), we obtain
$$\lim_{n\to\infty}\|x_n - \check{x}\| = 0.$$
From (19) and (20), it follows that
$$\lim_{n\to\infty}\|y_n - \check{x}\| = \lim_{n\to\infty}\|t_n - \check{x}\| = 0.$$
Therefore, the sequences { x n } , { y n } , and { t n } all converge strongly to the same point x ˇ = P F ( T ) i = 1 N E P ( f i ) u .
Case II. Assume there is a subsequence $\{n_k\}$ of $\{n\}$ such that
$$\|x_{n_k} - \check{x}\| < \|x_{n_k+1} - \check{x}\|, \quad \forall k \in \mathbb{N}.$$
According to Lemma 2, there exists a non-decreasing sequence $\{m_j\} \subset \mathbb{N}$ such that $m_j \to \infty$ as $j \to \infty$,
$$\|x_{m_j} - \check{x}\| \le \|x_{m_j+1} - \check{x}\| \quad \text{and} \quad \|x_j - \check{x}\| \le \|x_{m_j+1} - \check{x}\|, \quad \forall j \in \mathbb{N}. \tag{29}$$
From (16) and (29), we obtain
$$\begin{aligned}
\beta_{m_j}\gamma_{m_j}\|x_{m_j} - T(t_{m_j})\|^2 &\le \alpha_{m_j}\|u - \check{x}\|^2 + (1 - \alpha_{m_j})\|x_{m_j} - \check{x}\|^2 - \|x_{m_j+1} - \check{x}\|^2\\
&\le \alpha_{m_j}\|u - \check{x}\|^2 + (1 - \alpha_{m_j})\|x_{m_j} - \check{x}\|^2 - \|x_{m_j} - \check{x}\|^2\\
&= \alpha_{m_j}\|u - \check{x}\|^2 - \alpha_{m_j}\|x_{m_j} - \check{x}\|^2\\
&\le \alpha_{m_j}\|u - \check{x}\|^2.
\end{aligned}$$
By the assumptions on $\{\alpha_{m_j}\}$, $\{\beta_{m_j}\}$, and $\{\gamma_{m_j}\}$, we obtain
$$\lim_{j\to\infty}\|x_{m_j} - T(t_{m_j})\| = 0. \tag{30}$$
From (17) and (29), we obtain
$$\begin{aligned}
\gamma_{m_j}\Big(1 - 2\lambda_{m_j}\sum_{i=1}^N a_i c_{i1}\Big)\|x_{m_j} - y_{m_j}\|^2 &\le \alpha_{m_j}\|u - \check{x}\|^2 + (1 - \alpha_{m_j})\|x_{m_j} - \check{x}\|^2 - \|x_{m_j+1} - \check{x}\|^2\\
&\le \alpha_{m_j}\|u - \check{x}\|^2.
\end{aligned}$$
By the assumptions on $\{\alpha_{m_j}\}$, $\{\beta_{m_j}\}$, and $\{\gamma_{m_j}\}$, and conditions (ii) and (iii), we obtain
$$\lim_{j\to\infty}\|x_{m_j} - y_{m_j}\| = 0. \tag{31}$$
Using the same reasoning as in Case I, we have
$$\lim_{j\to\infty}\|t_{m_j} - y_{m_j}\| = \lim_{j\to\infty}\|T(x_{m_j}) - x_{m_j}\| = 0.$$
Applying the same reasoning as in Case I, we have
$$\limsup_{j\to\infty}\langle u - \check{x}, x_{m_j+1} - \check{x}\rangle \le 0.$$
It follows from (28) and (29) that
$$\begin{aligned}
\|x_{m_j+1} - \check{x}\|^2 &\le (1 - \alpha_{m_j})\|x_{m_j} - \check{x}\|^2 + 2\alpha_{m_j}\langle u - \check{x}, x_{m_j+1} - \check{x}\rangle\\
&\le (1 - \alpha_{m_j})\|x_{m_j+1} - \check{x}\|^2 + 2\alpha_{m_j}\langle u - \check{x}, x_{m_j+1} - \check{x}\rangle,
\end{aligned}$$
and hence,
$$\alpha_{m_j}\|x_{m_j+1} - \check{x}\|^2 \le 2\alpha_{m_j}\langle u - \check{x}, x_{m_j+1} - \check{x}\rangle. \tag{32}$$
From $\alpha_{m_j} > 0$, (29), and (32), we obtain
$$\|x_j - \check{x}\|^2 \le \|x_{m_j+1} - \check{x}\|^2 \le 2\langle u - \check{x}, x_{m_j+1} - \check{x}\rangle.$$
Thus, we obtain
$$\lim_{j\to\infty}\|x_j - \check{x}\| = 0.$$
From (30) and (31), it follows that
$$\lim_{j\to\infty}\|y_j - \check{x}\| = 0 \quad \text{and} \quad \lim_{j\to\infty}\|t_j - \check{x}\| = 0.$$
Therefore, the sequences $\{x_j\}$, $\{y_j\}$, and $\{t_j\}$ converge strongly to the same point $\check{x} = P_{F(T) \cap \bigcap_{i=1}^N EP(f_i)}\, u$; this completes the proof.

4. Application

For every $\mu, \nu \in C$, we define
$$\sum_{i=1}^N a_i f_i(\mu, \nu) := \Big\langle \sum_{i=1}^N a_i A_i(\mu), \nu - \mu \Big\rangle, \tag{34}$$
where $A_i : C \to H$, $i = 1, 2, \ldots, N$, and $\sum_{i=1}^N a_i = 1$. Then, $EP\big(\sum_{i=1}^N a_i f_i\big)$ becomes the following combination of variational inequality problems (CVIP):
Find $\mu^* \in C$ such that $\big\langle \sum_{i=1}^N a_i A_i(\mu^*), \nu - \mu^* \big\rangle \ge 0$ for all $\nu \in C$. This problem was introduced in [19]. We denote by $VI(C, \sum_{i=1}^N a_i A_i)$ the set of solutions of the CVIP.
In the following theorem, we use (34) instead of $\sum_{i=1}^N a_i f_i(x_n, y)$ in (3) and establish a strong convergence theorem, which is applied to effectively solve the combination of variational inequality problems (CVIP).
Theorem 2.
For each $i = 1, 2, \ldots, N$, let $A_i : C \times C \to \mathbb{R}$ be a pseudomonotone mapping that is jointly weakly continuous on $C \times C$ and satisfies the following conditions:
(i) 
$A_i$ is convex in its second argument and lower semicontinuous on C;
(ii) 
$A_i$ is Lipschitz-type continuous on C;
(iii) 
for each $x \in C$, the mapping $y \mapsto A_i(x, y)$ is convex and subdifferentiable.
Let $T : C \to C$ be a nonexpansive mapping with $F(T) \cap VI(C, \sum_{i=1}^N a_i A_i) \neq \emptyset$.
Let the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ be generated by $x_1$, $u$, and
$$
\begin{cases}
y_n = \operatorname{argmin}\{\lambda_n \langle \sum_{i=1}^N a_i A_i(x_n), y - x_n\rangle + \tfrac{1}{2}\|y - x_n\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\lambda_n \langle \sum_{i=1}^N a_i A_i(y_n), t - y_n\rangle + \tfrac{1}{2}\|t - x_n\|^2 : t \in C\},\\
x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n T(t_n), \quad \forall n \in \mathbb{N},
\end{cases}
$$
where the weights $a_i$ with $\sum_{i=1}^N a_i = 1$ and the sequences $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are defined as in Theorem 1 for every $n \in \mathbb{N}$ and satisfy conditions (i)–(iii) stated in Theorem 1. Then, the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to $\check{x} = P_{F(T) \cap VI(C, \sum_{i=1}^N a_i A_i)}\, u$.
Proof. The conclusion follows by applying Theorem 1.
The standard constrained convex optimization problem is to find $\mu^* \in C$ that minimizes a function ℑ over C:
$$\Im(\mu^*) = \min_{\mu \in C} \Im(\mu), \tag{35}$$
where $\Im : C \to \mathbb{R}$ is a convex, Fréchet differentiable function. The set of all solutions of (35) is denoted by $\Phi$.
Lemma 7
([20] (Optimality condition)). A necessary condition for a point $\mu \in C$ to be a solution of the minimization problem (35) is that μ satisfies the following variational inequality:
$$\langle \nabla\Im(\mu), \nu - \mu\rangle \ge 0, \tag{36}$$
for all $\nu \in C$. Moreover, if ℑ is convex, then this optimality condition (36) is also sufficient.
Corollary 1.
For each $i = 1, 2, \ldots, N$, let $\Im_i : C \times C \to \mathbb{R}$ be a pseudomonotone mapping that is jointly weakly continuous on $C \times C$ and satisfies the following conditions:
(i) 
$\Im_i$ is convex in its second argument and lower semicontinuous on C;
(ii) 
$\Im_i$ is Lipschitz-type continuous on C;
(iii) 
for every $x \in C$, the mapping $y \mapsto \Im_i(x, y)$ is convex and subdifferentiable.
Let $T : C \to C$ be a nonexpansive mapping with $F(T) \cap VI(C, \sum_{i=1}^N a_i \nabla\Im_i) \neq \emptyset$.
Let the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ be generated by $x_1$, $u$, and
$$
\begin{cases}
y_n = \operatorname{argmin}\{\lambda_n \langle \sum_{i=1}^N a_i \nabla\Im_i(x_n), y - x_n\rangle + \tfrac{1}{2}\|y - x_n\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\lambda_n \langle \sum_{i=1}^N a_i \nabla\Im_i(y_n), t - y_n\rangle + \tfrac{1}{2}\|t - x_n\|^2 : t \in C\},\\
x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n T(t_n), \quad \forall n \in \mathbb{N},
\end{cases}
$$
where the weights $a_i$ with $\sum_{i=1}^N a_i = 1$ and the sequences $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are defined as in Theorem 1 for every $n \in \mathbb{N}$ and satisfy conditions (i)–(iii) stated in Theorem 1. Then, the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to $\check{x} = P_{F(T) \cap VI(C, \sum_{i=1}^N a_i \nabla\Im_i)}\, u$.
Proof. The conclusion follows by applying Theorem 2.

5. Example

In this section, we provide an example to illustrate and support our main theorem.
Example 1.
Let $H = l^2(\mathbb{R})$ be the linear space whose elements are the 2-summable sequences $\{\mu_k\}_{k=1}^\infty$ of scalars in $\mathbb{R}$, that is,
$$l^2(\mathbb{R}) := \Big\{\mu = (\mu_1, \mu_2, \ldots, \mu_k, \ldots) : \mu_k \in \mathbb{R} \ \text{and} \ \sum_{k=1}^\infty |\mu_k|^2 < \infty\Big\},$$
with $\langle\cdot,\cdot\rangle : l^2 \times l^2 \to \mathbb{R}$ and $\|\cdot\| : l^2 \to \mathbb{R}$ defined by
$$\langle \mu, \nu\rangle := \sum_{k=1}^\infty \mu_k\nu_k$$
and
$$\|\mu\| = \Big(\sum_{k=1}^\infty |\mu_k|^2\Big)^{\frac{1}{2}},$$
where $\mu = \{\mu_k\}_{k=1}^\infty$ and $\nu = \{\nu_k\}_{k=1}^\infty$.
Let $C = \{x \in H : \|x\| \le 2\}$. For each $i = 1, 2, \ldots, N$, define the bifunction $f_i : C \times C \to \mathbb{R}$ by
$$f_i(x, y) = \langle 5ix, y - x\rangle, \quad \forall x, y \in C.$$
Define $T : C \to C$ by $Tx = \frac{x}{4}$. Let $\alpha_n = \frac{1}{4n}$, $\beta_n = \frac{2n + \frac{1}{2}}{4n}$, $\gamma_n = \frac{2n - \frac{3}{2}}{4n}$, and $\lambda_n = \frac{1}{6}$. We can rewrite (3) as follows:
$$
\begin{cases}
y_n = \operatorname{argmin}\{\frac{1}{6}\sum_{i=1}^N a_i\langle 5ix_n, y - x_n\rangle + \tfrac{1}{2}\|y - x_n\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\frac{1}{6}\sum_{i=1}^N a_i\langle 5iy_n, t - y_n\rangle + \tfrac{1}{2}\|t - x_n\|^2 : t \in C\},\\
x_{n+1} = \frac{1}{4n}u + \frac{2n + \frac{1}{2}}{4n}x_n + \frac{2n - \frac{3}{2}}{4n}T(t_n), \quad \forall n \in \mathbb{N}.
\end{cases}
$$
Then, the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to $0 = (0, 0, \ldots, 0, \ldots)$.
Solution. 
It can easily be shown that T is a nonexpansive mapping and that each $f_i$ is a pseudomonotone bifunction satisfying the Lipschitz-type condition with constants $c_{i1} = \frac{5i}{2}$ and $c_{i2} = 5i$ for all $i = 1, 2, \ldots, N$. Moreover, each $f_i$ satisfies conditions (i)–(iii) in Theorem 1. By the definitions of T and $f_i$ for every $i = 1, 2, \ldots, N$, we have $0 = (0, 0, \ldots, 0, \ldots) \in F(T) \cap \bigcap_{i=1}^N EP(f_i)$. According to Theorem 1, we can infer that the sequence $\{x_n\}$ converges strongly to $0 = (0, 0, \ldots, 0, \ldots)$.
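Since $f_i(x_n, y) = \langle 5ix_n, y - x_n\rangle$ is affine in y, each argmin in Example 1 reduces to a projection onto the ball $C = \{x : \|x\| \le 2\}$. The sketch below simulates the iteration with two simplifications of ours: $N = 1$ with $a_1 = 1$, and a finite-dimensional truncation of $l^2(\mathbb{R})$.

```python
import numpy as np

def P_C(x, r=2.0):
    """Projection onto the ball C = {x in H : ||x|| <= 2}."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

lam = 1.0 / 6.0
T = lambda z: z / 4.0                 # T(x) = x/4, nonexpansive
B = lambda z: 5.0 * z                 # f_1(x, y) = <5x, y - x>  (N = 1, a_1 = 1)

x = np.array([1.5, -0.5, 1.0])        # x_1 in C (finite truncation of l2)
u = np.array([0.5, 0.5, 0.0])         # anchor u in C
for n in range(1, 600):
    y = P_C(x - lam * B(x))           # y_n = P_C(x_n - lam * 5 x_n)
    t = P_C(x - lam * B(y))           # t_n = P_C(x_n - lam * 5 y_n)
    a = 1.0 / (4 * n)                 # alpha_n = 1/(4n)
    b = (2 * n + 0.5) / (4 * n)       # beta_n
    g = (2 * n - 1.5) / (4 * n)       # gamma_n
    x = a * u + b * x + g * T(t)
print(np.linalg.norm(x))              # tends to 0, the common solution
```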
In numerical analysis, Newton's method is a technique for approximating numerical solutions of equations, such as zeros, x-intercepts, and roots. For a function $h(x)$ defined over the real numbers, with derivative $h'(x)$, if $x_n$ is an approximate solution of $h(x) = 0$ and $h'(x_n) \neq 0$, then the method can be applied. In general, for $n > 0$, the next approximation $x_{n+1}$ satisfies
$$x_{n+1} = x_n - \frac{h(x_n)}{h'(x_n)}, \tag{37}$$
starting from an initial point $x_0$. This method is widely regarded as one of the most used, researched, and applied techniques for generating a sequence $\{x_n\}$ that approximates a solution.
Newton's method (37) is an instance of the Picard iteration
$$x_{n+1} = Tx_n,$$
where $T = I - \frac{h}{h'}$, and many authors have employed the Picard iteration to approximate fixed points.
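A short sketch of (37) viewed as the Picard iteration $x_{n+1} = Tx_n$ with $T = I - \frac{h}{h'}$; the test function $h(x) = x^2 - 2$, whose positive root is $\sqrt{2}$, is our illustrative choice, not from the paper.

```python
def newton_T(h, dh, x):
    """One Picard step with T = I - h/h': x -> x - h(x)/h'(x)."""
    return x - h(x) / dh(x)

h = lambda x: x * x - 2.0    # root: sqrt(2)
dh = lambda x: 2.0 * x       # derivative h'(x)

x = 1.0                      # initial guess x_0, with h'(x_0) != 0
for _ in range(8):
    x = newton_T(h, dh, x)
print(x)                     # converges quadratically to the fixed point of T
```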
π is a significant mathematical constant, and numerous researchers have attempted to approximate its value. To explore the convergence of Newton’s method, we use it in conjunction with our main result to approximate the value of π .
Example 2.
Let $\mathbb{R}$ be the set of all real numbers and $C = [\pi, 100]$. To approximate the value of π, define $h : C \to \mathbb{R}$ by $h(x) = (\pi - x)^2$. It is straightforward to show that $I - \frac{h}{h'}$ is a nonexpansive mapping. For every $i = 1, 2, \ldots, N$, define the bifunction $f_i : C \times C \to \mathbb{R}$ by
$$f_i(x, y) = \langle ix, 2(y - x)\rangle, \quad \forall x, y \in C.$$
Let $\alpha_n = \frac{n}{5n+2}$, $\beta_n = \frac{2n + \frac{1}{2}}{5n+2}$, and $\gamma_n = \frac{2n + \frac{3}{2}}{5n+2}$. For every $i = 1, 2$, let the constants $c_{i1} = i$, $c_{i2} = 2i$, $a_1 = a_2 = \frac{1}{2}$, and $\lambda_n = \frac{1}{6}$. We can rewrite (3) as follows:
$$
\begin{cases}
y_n = \operatorname{argmin}\{\frac{1}{6}\sum_{i=1}^{2}\frac{1}{2}\langle ix_n, 2(y - x_n)\rangle + \tfrac{1}{2}\|y - x_n\|^2 : y \in C\},\\
t_n = \operatorname{argmin}\{\frac{1}{6}\sum_{i=1}^{2}\frac{1}{2}\langle iy_n, 2(t - y_n)\rangle + \tfrac{1}{2}\|t - x_n\|^2 : t \in C\},\\
x_{n+1} = \frac{n}{5n+2}u + \frac{2n + \frac{1}{2}}{5n+2}x_n + \frac{2n + \frac{3}{2}}{5n+2}\big(I - \tfrac{h}{h'}\big)(t_n), \quad \forall n \in \mathbb{N}.
\end{cases}
$$
Then, the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to π.
Solution. 
It is straightforward to see that each $f_i$ is a pseudomonotone bifunction satisfying a Lipschitz-type condition with constants $c_{i1} = i$, $c_{i2} = 2i$ for all $i = 1, 2, \ldots, N$. Moreover, each $f_i$ meets conditions (i)–(iii) specified in Theorem 1.
By the definitions of $I - \frac{h}{h'}$ and $f_i$ for every $i = 1, 2, \ldots, N$, we have $\pi \in F(I - \frac{h}{h'}) \cap \bigcap_{i=1}^N EP(f_i)$. As a result, we conclude that the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ converge strongly to π.
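For $h(x) = (\pi - x)^2$ we have $h'(x) = 2(x - \pi)$, so $\big(I - \frac{h}{h'}\big)(x) = x - \frac{x - \pi}{2} = \frac{x + \pi}{2}$, and with $a_1 = a_2 = \frac{1}{2}$ the combined bifunction gives $\sum_i a_i f_i(x, y) = 3x(y - x)$, so each argmin has a closed form followed by clamping to $C = [\pi, 100]$. The sketch below simulates the iteration; as an assumption of ours, it uses $\alpha_n = \frac{1}{n+1}$ and $\beta_n = \gamma_n = \frac{n}{2(n+1)}$, a parameter choice satisfying conditions (i)–(iii) of Theorem 1, rather than the choices in the example statement.

```python
import math

pi = math.pi
clamp = lambda z: min(max(z, pi), 100.0)  # projection onto C = [pi, 100]
T = lambda z: (z + pi) / 2.0              # (I - h/h')(x) for h(x) = (pi - x)^2
lam = 1.0 / 6.0
# sum_i a_i f_i(x, y) = 3*x*(y - x) with a_1 = a_2 = 1/2, f_i(x,y) = 2*i*x*(y-x)

x, u = 50.0, 5.0                          # x_1 and u chosen in C
for n in range(1, 3000):
    y = clamp(x - lam * 3.0 * x)          # unconstrained argmin, then clamp
    t = clamp(x - lam * 3.0 * y)
    alpha = 1.0 / (n + 1)                 # alpha_n -> 0, sum alpha_n = infinity
    beta = gamma = n / (2.0 * (n + 1))    # alpha_n + beta_n + gamma_n = 1
    x = alpha * u + beta * x + gamma * T(t)
print(x)                                  # approaches pi
```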
Using the algorithm described above, we obtain the numerical result for approximating the value of π, which is presented in Figure 1 and Table 1 below, where $x_1$ and u are obtained by randomly selecting initial values from the interval $[\pi, 100]$ and $n = N = 100$.
Next, we use $\alpha_n = \frac{n}{5n+2}$ and $\lambda_n = \frac{1}{6}$ for Algorithm 1 in [12]. Define the mapping $T = I - \frac{h}{h'}$ with h the same as in Example 2, and define the bifunction $f : C \times C \to \mathbb{R}$ by
$$f(x, y) = \langle x, 2(y - x)\rangle, \quad \forall x, y \in C.$$
We derive the numerical result for approximating the value of π, which is presented in Figure 2 and Table 2 below, where $x_1$ is obtained by randomly selecting an initial value from the interval $[\pi, 100]$ and $n = N = 100$.

6. Conclusions

In this work, we present a novel method for the extragradient algorithm to find a common element between the set of fixed points of nonexpansive mappings and the set of solutions to equilibrium problems for a pseudomonotone, Lipschitz-type continuous bifunction. We obtain some strong convergence theorems for the sequence generated by the proposed algorithm under suitable conditions. However, we would like to remark the following:
(1)
Our result is proved without the assumption $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ (many researchers have proved strong convergence theorems under this assumption; see [12,13,14]).
(2)
In Theorem 2, we use (34) instead of $\sum_{i=1}^N a_i f_i(x_n, y)$ in (3) and prove a strong convergence theorem, which is applied to effectively solve the combination of variational inequality problems (CVIP).
(3)
In Corollary 1, we use $\nabla\Im_i$ in (36) instead of the mapping $A_i$ in Theorem 2, where $i = 1, 2, \ldots, N$, and prove a strong convergence theorem, which applies to the standard constrained convex optimization problem.
(4)
We provide Example 1 to demonstrate the efficiency and implementation of our main result in the space $l^2(\mathbb{R})$. The convergence of $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ in Example 1 is guaranteed by Theorem 1.
(5)
In Example 2, we obtain a numerical result for approximating the value of π, which is presented in Figure 2 and Table 2. Moreover, we obtain a numerical comparison between our algorithm and Algorithm 1 in [12], showing that the sequences $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$ generated by our algorithm converge faster than those generated by Algorithm 1 in [12].

Author Contributions

A.K. was responsible for conceptualization, formal analysis, supervision, and writing—review and editing. A.S. handled writing the original draft, formal analysis, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research is a result of the project entitled “The development of search engine by using approximation method for solving fixed point problem (Year1) No. RE-KRIS/FF67/041” by King Mongkut’s Institute of Technology Ladkrabang (KMITL), which has been received funding support from the NSRF.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their sincere appreciation to the Research and Innovation Services at King Mongkut’s Institute of Technology Ladkrabang.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Ansari, Q.H.; Al-Homidan, S.; Yao, J.C. Equilibrium Problems and Fixed Point Theory. Fixed Point Theory Appl. 2012, 2012, 25.
  2. Farid, M. Two algorithms for solving mixed equilibrium problems and fixed point problems in Hilbert spaces. Ann. Univ. Ferrara 2021, 67, 253–268.
  3. Latif, A.; Eslamian, M. A New Iterative Method for Equilibrium Problems and Fixed Point Problems. Nonlin. Anal. Geom. Funct. Theory 2013, 2013, 178053.
  4. Cheawchan, K.; Kangtunyakarn, A. The modified split generalized equilibrium problem for quasi-nonexpansive mappings and applications. J. Inequal. Appl. 2018, 122, 1–28.
  5. Suwannaut, S.; Kangtunyakarn, A. On Approximation of the Combination of Variational Inequality Problem and Equilibrium Problem for Nonlinear Mappings. Thai J. Math. 2021, 19, 1477–1498.
  6. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  7. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  8. Ceng, L.C.; Yao, J.C. Iterative algorithm for generalized set-valued strong nonlinear mixed variational-like inequalities. J. Optim. Theory Appl. 2005, 124, 725–738.
  9. Sripattanet, A.; Kangtunyakarn, A. Approximation of G-variational inequality problems and fixed-point problems of G-κ-strictly pseudocontractive mappings by an intermixed method endowed with a graph. J. Inequal. Appl. 2023, 2023, 63.
  10. Yao, Y.; Yao, J.C. On modified iterative method for nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2007, 186, 1551–1558.
  11. Peng, J.W. Iterative algorithms for mixed equilibrium problems, strict pseudocontractions and monotone mappings. J. Optim. Theory Appl. 2010, 144, 107–119.
  12. Pham, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 62, 271–283.
  13. Zeng, L.C.; Yao, J.C. Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10, 1293–1303.
  14. Muangchoo, K. A new explicit extragradient method for solving equilibrium problems with convex constraints. Nonlinear Funct. Anal. Appl. 2022, 27, 1–22.
  15. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
  16. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  17. Du, W.S.; He, Z. Feasible iterative algorithms for split common solution problems. J. Nonlinear Convex Anal. 2015, 16, 697–710.
  18. Xu, H.K. Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2002, 2, 1–17.
  19. Kheawborisut, A.; Kangtunyakarn, A. Modified subgradient extragradient method for system of variational inclusion problem and finite family of variational inequalities problem in real Hilbert space. J. Inequal. Appl. 2021, 2021, 53.
  20. Su, M.; Xu, H.K. Remarks on the Gradient-Projection Algorithm. J. Nonlinear Anal. Optim. 2010, 1, 35–43.
Figure 1. The convergence of { x n } , { y n } , and { t n } with Theorem 1.
Figure 2. The convergence of { x n } , { y n } , and { t n } with algorithm 1 in [12].
Table 1. The numerical results of { x n } , { y n } , and { t n } of Theorem 1.
n    | { x n }   | { y n }   | { t n }
1    | 32.492625 | 69.670600 | 71.627335
2    | 13.332923 | 26.622419 | 27.600786
3    | 6.767308  | 10.968535 | 11.946902
4    | 4.342317  | 6.076696  | 6.076696
5    | 3.540187  | 3.141593  | 4.119960
6    | 3.141593  | 3.141593  | 3.141593
98   | 3.141593  | 3.141593  | 3.141593
99   | 3.141593  | 3.141593  | 3.141593
100  | 3.141593  | 3.141593  | 3.141593
Time taken (s): 0.066431
Table 2. The numerical results of { x n } , { y n } , and { t n } of algorithm 1 in [12].
n    | { x n }   | { y n }   | { t n }
1    | 59.453602 | 68.692232 | 70.648967
2    | 44.630186 | 49.124877 | 51.081612
3    | 34.095786 | 37.384464 | 38.362832
4    | 26.432642 | 28.579154 | 29.557522
5    | 20.679502 | 21.730580 | 22.708948
11   | 6.140857  | 6.076696  | 6.076696
12   | 5.309682  | 5.098328  | 5.098328
13   | 4.821363  | 4.119960  | 5.098328
14   | 4.333553  | 4.119960  | 4.119960
15   | 4.046054  | 3.141593  | 4.119960
16   | 3.141593  | 3.141593  | 3.141593
98   | 3.141593  | 3.141593  | 3.141593
99   | 3.141593  | 3.141593  | 3.141593
100  | 3.141593  | 3.141593  | 3.141593
Time taken (s): 0.084881
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
