Article

Subgradient Extra-Gradient Algorithm for Pseudomonotone Equilibrium Problems and Fixed-Point Problems of Bregman Relatively Nonexpansive Mappings

by Roushanak Lotfikar 1, Gholamreza Zamani Eskandani 2, Jong-Kyu Kim 3,* and Michael Th. Rassias 4

1 Faculty of Basic Science, Ilam University, Ilam P.O. Box 69315-516, Iran
2 Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Tabriz, Tabriz 51666-16471, Iran
3 Department of Mathematics Education, Kyungnam University, Changwon 51767, Republic of Korea
4 Institute of Mathematics, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(23), 4821; https://doi.org/10.3390/math11234821
Submission received: 10 October 2023 / Revised: 8 November 2023 / Accepted: 20 November 2023 / Published: 29 November 2023
(This article belongs to the Special Issue Recent Trends in Convex Analysis and Mathematical Inequalities)

Abstract: In this article, we introduce a new subgradient extra-gradient algorithm to find a common element of the set of fixed points of a Bregman relatively nonexpansive mapping and the solution set of an equilibrium problem involving a pseudomonotone and Bregman–Lipschitz-type bifunction in reflexive Banach spaces. The advantage of the algorithm is that it runs without prior knowledge of the Bregman–Lipschitz coefficients. Finally, two numerical experiments are reported to illustrate the efficiency of the proposed algorithm.

1. Introduction

Let $X$ be a reflexive real Banach space and $C$ a nonempty, closed and convex subset of $X$. We denote the dual space of $X$ by $X^*$. The minimization problem for a function $f : C \to \mathbb{R}$ is defined as:

Find $x^* \in C$ such that $f(x^*) \le f(y)$ for all $y \in C$. (1)

In this case, $x^*$ is called a minimizer of $f$, and $\operatorname{Argmin}_{y \in C} f(y)$ denotes the set of minimizers of $f$. Minimization problems are very useful in optimization theory as well as in convex and nonlinear analysis. An important generalization of Problem (1) to a bifunction $f : C \times C \to \mathbb{R}$ is the following equilibrium problem (EP):

Find $x^* \in C$ such that $f(x^*, y) \ge 0$ for all $y \in C$. (2)

We denote by $\operatorname{EP}(f)$ the solution set of (2). Many interesting and demanding problems in nonlinear analysis, such as complementarity, fixed-point, Nash equilibrium, optimization, saddle-point and variational inequality problems, can be reformulated as equilibrium problems (cf. [1,2,3,4]). Some authors have obtained results regarding the existence and stability of solutions of (EP) (cf. [5,6]).
Equilibrium problems in finite- as well as infinite-dimensional spaces have been studied in [7,8,9,10]. Dadashi et al. [11] studied the subgradient extra-gradient method for pseudomonotone equilibrium problems.
Recently, several authors have combined equilibrium problems with fixed-point problems and presented algorithms to solve them in Hilbert spaces [9,12]. Several methods have also been proposed for solving fixed-point problems in metric spaces; see [13,14,15]. One of the most popular methods for solving equilibrium problems is the extra-gradient method, which has been studied for monotone and pseudomonotone equilibrium problems [4,16,17,18,19,20,21].
In [8], Reich and Sabach studied equilibrium and fixed-point problems in Banach spaces and presented two algorithms for finding common fixed points of finitely many Bregman firmly nonexpansive operators. Very recently, inspired by the extra-gradient method, Yang and Liu [22] presented an algorithm, called the subgradient extra-gradient method, for finding a common solution of an equilibrium problem and a fixed-point problem of a quasinonexpansive mapping in Hilbert spaces, without knowledge of the Lipschitz-type constants of the bifunction. The algorithm reads as follows:
$$
\begin{cases}
y_n = \operatorname{argmin}\{\lambda_n f(x_n, y) + \tfrac{1}{2}\|x_n - y\|^2 : y \in C\},\\
z_n = \operatorname{argmin}\{\lambda_n f(y_n, y) + \tfrac{1}{2}\|x_n - y\|^2 : y \in T_n\},\\
t_n = \alpha_n x_0 + (1 - \alpha_n) z_n,\\
x_{n+1} = \beta_n z_n + (1 - \beta_n) S t_n,
\end{cases}
$$

where $\mu \in (0, 1)$, $\lambda_0 > 0$ and $x_0 \in H$ is arbitrary. Also,

$$T_n = \{v \in H : \langle x_n - \lambda_n w_n - y_n, v - y_n \rangle \le 0\},$$

$w_n \in \partial_2 f(x_n, y_n)$ is chosen such that $x_n - \lambda_n w_n - y_n \in N_C(y_n)$, and

$$
\lambda_{n+1} = \begin{cases}
\min\left\{ \dfrac{\mu\big(\|z_n - y_n\|^2 + \|y_n - x_n\|^2\big)}{f(x_n, z_n) - f(x_n, y_n) - f(y_n, z_n)},\ \lambda_n \right\}, & \text{if } f(x_n, z_n) - f(x_n, y_n) - f(y_n, z_n) > 0,\\[2mm]
\lambda_n, & \text{otherwise}.
\end{cases}
$$

In addition, the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the conditions:

(i) $\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;

(ii) $\limsup_{n \to \infty} \beta_n \le 0$, or $\sum_{n=0}^{\infty} |\alpha_n \beta_n| < \infty$.
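For intuition, the scheme above can be sketched numerically. The following is a minimal illustration in $H = \mathbb{R}^2$ (our own toy example, not code from [22]): we take the monotone bilinear bifunction $f(x, y) = \langle Ax, y - x \rangle$ with $x^\top A x > 0$, the box $C = [-1, 1]^2$ and $S = \mathrm{Id}$, so both argmin subproblems reduce to closed-form projections and the unique equilibrium point is $0$.

```python
import numpy as np

# Sketch of the Yang-Liu subgradient extra-gradient method in H = R^2.
# Illustrative assumptions: f(x, y) = <A x, y - x> (monotone, since x^T A x > 0),
# C = [-1, 1]^2, S = Id. Then w_n = A x_n and both subproblems are projections.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
proj_C = lambda u: np.clip(u, -1.0, 1.0)       # Euclidean projection onto the box C

def proj_halfspace(u, a, yn):
    """Project u onto T = {v : <a, v - yn> <= 0}."""
    s = a @ (u - yn)
    return u if s <= 0 else u - (s / (a @ a)) * a

mu, lam = 0.3, 0.4
x0 = np.array([0.9, -0.7])
x = x0.copy()
for n in range(500):
    alpha, beta = 1.0 / (n + 2), 0.5
    w = A @ x                                   # w_n in the subdifferential of f(x_n, .)
    y = proj_C(x - lam * w)                     # y_n
    z = proj_halfspace(x - lam * (A @ y), x - lam * w - y, y)   # z_n over T_n
    denom = (A @ (x - y)) @ (z - y)             # f(x_n,z_n) - f(x_n,y_n) - f(y_n,z_n)
    if denom > 0:                               # adaptive step size: no Lipschitz constants
        lam = min(mu * ((z - y) @ (z - y) + (y - x) @ (y - x)) / denom, lam)
    t = alpha * x0 + (1 - alpha) * z            # Halpern-type anchor step
    x = beta * z + (1 - beta) * t               # S = Id, so S t_n = t_n
print(np.linalg.norm(x))                        # approaches the unique solution 0
```

Note that the adaptive update of $\lambda_n$ never uses the Lipschitz-type constants of $f$; it only decreases the step size when the observed curvature quantity in the denominator is positive.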
Inspired by the above work, in the present paper we introduce a new subgradient extra-gradient algorithm to find a common element of the set of fixed points of a Bregman relatively nonexpansive mapping and the solution set of an equilibrium problem involving a pseudomonotone and Bregman–Lipschitz-type bifunction in reflexive Banach spaces.
This paper is organized as follows: In Section 2, we recall some definitions and preliminary results. Section 3 deals with our algorithm and the relevant convergence analysis. Finally, in Section 4, we illustrate the proposed subgradient extra-gradient method by considering two numerical experiments.

2. Materials and Methods

In this section, we recall some definitions and preliminary results. Suppose that $f : X \to (-\infty, +\infty]$ is a proper, convex and lower semicontinuous function. We denote by $\operatorname{Argmin} f$ the set of minimizers of $f$. If $\operatorname{Argmin} f$ is a singleton, its unique element is denoted by $\operatorname{argmin}_{x \in X} f(x)$. Additionally, we denote by $\operatorname{dom} f$ the domain of $f$, that is, the set $\{x \in X : f(x) < +\infty\}$. Given the proper, convex and lower semicontinuous function $f : X \to (-\infty, +\infty]$, its subdifferential at $x \in X$ is defined as

$$\partial f(x) = \{\xi \in X^* : f(x) + \langle y - x, \xi \rangle \le f(y), \ \forall y \in X\}.$$

Concerning this definition, we have:

(i) $\partial f(x)$ is empty when $f(x) = +\infty$;

(ii) $\partial f(x)$ may be empty when $x \in \operatorname{dom} f$;

(iii) $\partial f(x)$ is nonempty when $x \in \operatorname{int} \operatorname{dom} f$; more precisely, $\operatorname{int} \operatorname{dom} f \subset \operatorname{dom}(\partial f)$.
It will be useful to stress these facts in the present exposition. The function $f^* : X^* \to (-\infty, +\infty]$ defined by

$$f^*(\xi) = \sup\{\langle x, \xi \rangle - f(x) : x \in X\}$$

is called the Fenchel conjugate of $f$. It can be shown that $\xi \in \partial f(x)$ is equivalent to

$$f(x) + f^*(\xi) = \langle x, \xi \rangle.$$

One can show that $f^*$ is a proper, convex and lower semicontinuous function. The function $f$ is called cofinite if $\operatorname{dom} f^* = X^*$. Let $f : X \to (-\infty, +\infty]$ be a convex function. Given $x \in \operatorname{int} \operatorname{dom} f$ and $y \in X$, the right-hand derivative of $f$ at $x$ in the direction $y$ is given by

$$f^{\circ}(x, y) := \lim_{t \to 0^+} \frac{f(x + t y) - f(x)}{t}. \quad (4)$$
A function $f$ is called Gâteaux differentiable at $x \in \operatorname{int} \operatorname{dom} f$ if the limit as $t \to 0^+$ in (4) exists for each $y$. The function $f$ is said to be Gâteaux differentiable if it is Gâteaux differentiable at each $x \in \operatorname{int} \operatorname{dom} f$. In this case, the gradient of $f$ at $x$ is the linear function $\nabla f(x)$ defined by $\langle y, \nabla f(x) \rangle := f^{\circ}(x, y)$ for all $y \in X$. We say that $f$ is Fréchet differentiable at $x$ if it is Gâteaux differentiable and the limit as $t \to 0^+$ in (4) is attained uniformly for every $y \in X$ with $\|y\| = 1$. Also, we say that $f$ is uniformly Fréchet differentiable on a bounded subset $E$ of $X$ if the limit is attained uniformly for $x \in E$ and $\|y\| = 1$.
The function $f : X \to (-\infty, +\infty]$ is called Legendre if it satisfies the following two conditions:

  • (L1) $\operatorname{int} \operatorname{dom} f \neq \emptyset$ and the subdifferential $\partial f$ is single-valued on its domain;

  • (L2) $\operatorname{int} \operatorname{dom} f^* \neq \emptyset$ and $\partial f^*$ is single-valued on its domain.

Since $X$ is reflexive, we always have $(\partial f)^{-1} = \partial f^*$ (see [23], p. 83). This fact, combined with Conditions (L1) and (L2), implies the following equalities, which will be very useful in the sequel:

$$\nabla f = (\nabla f^*)^{-1}, \qquad \operatorname{ran} \nabla f = \operatorname{dom} \nabla f^* = \operatorname{int} \operatorname{dom} f^*, \qquad \operatorname{ran} \nabla f^* = \operatorname{dom} \nabla f = \operatorname{int} \operatorname{dom} f.$$
Also, Conditions (L1) and (L2), in conjunction with Theorem 5.4 of [24], imply that the functions $f$ and $f^*$ are strictly convex on the interior of their respective domains, and that $f$ is Legendre if and only if $f^*$ is Legendre. Several interesting examples of Legendre functions are presented in [24]; among them are the functions $\frac{1}{p}\|\cdot\|^p$ with $p \in (1, \infty)$, where the Banach space $X$ is smooth and strictly convex.
Given a Gâteaux differentiable convex function $f : X \to \mathbb{R}$, the Bregman distance with respect to $f$ is defined as

$$D_f(x, y) := f(x) - f(y) - \langle \nabla f(y), x - y \rangle, \quad x, y \in X.$$

Note that $D_f : \operatorname{dom} f \times \operatorname{int} \operatorname{dom} f \to [0, +\infty]$ is not a distance in the usual sense of the term. In general, $D_f$ is not symmetric and does not satisfy the triangle inequality. Clearly, $D_f(x, x) = 0$, but $D_f(y, x) = 0$ may not imply $x = y$; when $f$ is Legendre, however, this implication does hold (see [24], Theorem 7.3(vi)). Nevertheless, $D_f$ satisfies the three-point identity

$$D_f(x, y) + D_f(y, z) - D_f(x, z) = \langle x - y, \nabla f(z) - \nabla f(y) \rangle,$$

and the four-point identity

$$D_f(x, y) + D_f(w, z) - D_f(x, z) - D_f(w, y) = \langle x - w, \nabla f(z) - \nabla f(y) \rangle,$$

for any $x, w \in \operatorname{dom} f$ and $y, z \in \operatorname{int} \operatorname{dom} f$.
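The three-point identity can be checked numerically. The following short sketch (our illustration, not part of the paper) uses the negative entropy $f(x) = \sum_i x_i \log x_i$ on the positive orthant, whose Bregman distance is the generalized Kullback-Leibler divergence, and verifies the identity at randomly chosen points.

```python
import numpy as np

# Numerical sanity check of the three-point identity for the negative entropy
# f(x) = sum x_i log x_i (illustrative example; D_f is the generalized KL divergence).
f      = lambda x: np.sum(x * np.log(x))
grad_f = lambda x: 1.0 + np.log(x)
D      = lambda a, b: f(a) - f(b) - grad_f(b) @ (a - b)   # Bregman distance

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.1, 2.0, size=(3, 4))              # points with positive entries
lhs = D(x, y) + D(y, z) - D(x, z)
rhs = (x - y) @ (grad_f(z) - grad_f(y))
assert abs(lhs - rhs) < 1e-9                              # three-point identity holds
print(D(x, y), D(y, x))                                   # D_f is not symmetric in general
```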
More information regarding Bregman functions and distances can be found in [4,24,25,26,27,28,29,30,31]. A function $f : X \to (-\infty, +\infty]$ is called totally convex at a point $x \in \operatorname{int} \operatorname{dom} f$ if its modulus of total convexity at $x$, that is, the function $\upsilon_f(x, \cdot) : [0, +\infty) \to [0, +\infty]$ defined by

$$\upsilon_f(x, t) := \inf\{D_f(y, x) : y \in \operatorname{dom} f, \ \|y - x\| = t\},$$

is positive whenever $t > 0$. This notion was first introduced by Butnariu and Iusem in [28]. Let $E$ be a nonempty subset of $X$. The modulus of total convexity of $f$ on $E$ is defined by

$$\upsilon_f(E, t) = \inf\{\upsilon_f(x, t) : x \in E \cap \operatorname{int} \operatorname{dom} f\}.$$

A function $f$ is called totally convex on bounded subsets if $\upsilon_f(E, t)$ is positive for every nonempty bounded subset $E$ of $X$ and every $t > 0$. We will need the following lemmas in the proofs of our results.
Lemma 1 
([32]). If $f : X \to \mathbb{R}$ is uniformly Fréchet differentiable and bounded on bounded subsets of $X$, then $\nabla f$ is uniformly continuous on bounded subsets of $X$ from the strong topology of $X$ to the strong topology of $X^*$.
The function $f$ is called sequentially consistent (see [33]) if, for any two sequences $\{x_n\} \subset \operatorname{int} \operatorname{dom} f$ and $\{y_n\} \subset \operatorname{dom} f$ such that $\{x_n\}$ is bounded,

$$\lim_{n \to \infty} D_f(y_n, x_n) = 0 \quad \text{implies} \quad \lim_{n \to \infty} \|y_n - x_n\| = 0.$$
Lemma 2 
([28]). If dom f contains at least two points, then the function f is totally convex on bounded sets if and only if the function f is sequentially consistent.
Lemma 3 
([34]). Let $f : X \to \mathbb{R}$ be a Legendre function such that $\nabla f^*$ is bounded on bounded subsets of $\operatorname{int} \operatorname{dom} f^*$. Let $x_0 \in X$. If $\{D_f(x_0, x_n)\}$ is bounded, then the sequence $\{x_n\}$ is bounded too.
Let $f$ be a function and $C$ a nonempty, closed and convex subset of $\operatorname{int} \operatorname{dom} f$. The Bregman projection (see [35]) with respect to $f$ of $x \in \operatorname{int} \operatorname{dom} f$ onto $C$ is defined as the necessarily unique vector $\operatorname{Proj}_C^f(x) \in C$ satisfying

$$D_f\big(\operatorname{Proj}_C^f(x), x\big) = \inf\{D_f(y, x) : y \in C\}.$$

The Bregman projection with respect to totally convex and Gâteaux differentiable functions has a variational characterization ([33], Corollary 4.4, p. 23).
Lemma 4. 
Let $f$ be Gâteaux differentiable and totally convex on $\operatorname{int} \operatorname{dom} f$. Let $C$ be a nonempty, closed and convex subset of $\operatorname{int} \operatorname{dom} f$ and $x \in \operatorname{int} \operatorname{dom} f$. Then, the following statements are equivalent:

(i) The vector $\hat{x} \in C$ is the Bregman projection of $x$ onto $C$ with respect to $f$.

(ii) The vector $\hat{x} \in C$ is the unique solution of the variational inequality

$$\langle \hat{x} - y, \nabla f(x) - \nabla f(\hat{x}) \rangle \ge 0, \quad \forall y \in C.$$

(iii) The vector $\hat{x} \in C$ is the unique solution of the inequality

$$D_f(y, \hat{x}) + D_f(\hat{x}, x) \le D_f(y, x), \quad \forall y \in C.$$
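Statement (iii) is easy to test numerically in the simplest case. For $f(x) = \frac{1}{2}\|x\|^2$, the Bregman distance is $D_f(a, b) = \frac{1}{2}\|a - b\|^2$ and the Bregman projection coincides with the Euclidean projection; the sketch below (our illustration, not code from the paper) checks the inequality of (iii) for a box $C = [0, 1]^3$.

```python
import numpy as np

# Check Lemma 4 (iii) for f(x) = (1/2)||x||^2, where D_f(a, b) = (1/2)||a - b||^2
# and the Bregman projection onto C is the Euclidean projection (illustrative sketch).
D = lambda a, b: 0.5 * np.sum((a - b) ** 2)
proj_C = lambda u: np.clip(u, 0.0, 1.0)        # projection onto the box C = [0, 1]^3

rng = np.random.default_rng(1)
x = rng.normal(size=3) * 3.0                   # a point, typically outside C
x_hat = proj_C(x)                              # Bregman projection of x onto C
for _ in range(100):
    y = rng.uniform(0.0, 1.0, size=3)          # arbitrary y in C
    assert D(y, x_hat) + D(x_hat, x) <= D(y, x) + 1e-9
print("variational characterization verified")
```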
With an admissible function $f : X \to (-\infty, +\infty]$, we associate the bifunction $V_f : X \times X^* \to [0, +\infty]$ (see [36,37]) defined by

$$V_f(x, x^*) = f(x) - \langle x, x^* \rangle + f^*(x^*), \quad x \in X, \ x^* \in X^*.$$

We recall some properties of the bifunction $V_f$. For all $x \in X$ and $x^* \in X^*$, we have

$$V_f(x, x^*) = D_f\big(x, \nabla f^*(x^*)\big).$$

Also, for all $x \in X$ and $x^*, y^* \in X^*$ (see [38]), we have

$$V_f(x, x^*) + \langle \nabla f^*(x^*) - x, y^* \rangle \le V_f(x, x^* + y^*).$$

Let $f : X \to (-\infty, +\infty]$ be a proper, lower semicontinuous function. Then $f^* : X^* \to (-\infty, +\infty]$ is a proper, convex and weak* lower semicontinuous function (see [39]). Therefore, $V_f$ is convex in the second variable, and hence

$$D_f\left(z, \nabla f^*\left(\sum_{i=1}^N t_i \nabla f(x_i)\right)\right) \le \sum_{i=1}^N t_i D_f(z, x_i), \quad \forall z \in X, \quad (5)$$

where $\{x_i\}_{i=1}^N \subset X$ and $\{t_i\}_{i=1}^N \subset (0, 1)$ with $\sum_{i=1}^N t_i = 1$.
Let $B$ be the closed unit ball and $S$ the unit sphere of a Banach space $X$. Let $rB := \{z \in X : \|z\| \le r\}$ for $r > 0$, and let $f : X \to \mathbb{R}$ be a function. We say that $f$ is uniformly convex on bounded subsets (see [40]) if $\rho_r(t) > 0$ for all $r, t > 0$, where $\rho_r : [0, +\infty) \to [0, +\infty]$, the gauge of uniform convexity of $f$, is defined by

$$\rho_r(t) = \inf_{\substack{x, y \in rB, \ \|x - y\| = t, \\ \alpha \in (0, 1)}} \frac{\alpha f(x) + (1 - \alpha) f(y) - f(\alpha x + (1 - \alpha) y)}{\alpha (1 - \alpha)}, \quad t \ge 0.$$
Lemma 5 
([41]). Let $f : X \to \mathbb{R}$ be a uniformly convex function on bounded subsets of $X$ and $r > 0$ a constant. Then,

$$f\left(\sum_{k=0}^n \alpha_k x_k\right) \le \sum_{k=0}^n \alpha_k f(x_k) - \alpha_i \alpha_j \rho_r\big(\|x_i - x_j\|\big),$$

for all $i, j \in \{0, 1, 2, \ldots, n\}$, $x_k \in rB$ and $\alpha_k \in (0, 1)$, $k = 0, 1, 2, \ldots, n$, with $\sum_{k=0}^n \alpha_k = 1$, where $\rho_r$ is the gauge of uniform convexity of $f$.
The function $f$ is also said to be uniformly smooth on bounded subsets (see [40]) if

$$\lim_{t \downarrow 0} \frac{\sigma_r(t)}{t} = 0 \quad \text{for all } r > 0,$$

where $\sigma_r : [0, +\infty) \to [0, +\infty]$ is defined by

$$\sigma_r(t) = \sup_{\substack{x \in rB, \ y \in S, \\ \alpha \in (0, 1)}} \frac{\alpha f(x + (1 - \alpha) t y) + (1 - \alpha) f(x - \alpha t y) - f(x)}{\alpha (1 - \alpha)},$$

for all $t \ge 0$. A function $f$ is said to be super coercive if

$$\lim_{\|x\| \to +\infty} \frac{f(x)}{\|x\|} = +\infty.$$
Theorem 1 
([40]). Let $f : X \to \mathbb{R}$ be a super coercive convex function. Then, the following are equivalent:

(i) $f$ is uniformly smooth on bounded subsets of $X$ and bounded on bounded subsets;

(ii) $f$ is Fréchet differentiable and $\nabla f$ is uniformly norm-to-norm continuous on bounded subsets of $X$;

(iii) $\operatorname{dom} f^* = X^*$, and $f^*$ is super coercive and uniformly convex on bounded subsets of $X^*$.
Theorem 2 
([40]). Suppose that $f : X \to \mathbb{R}$ is a convex function which is bounded on bounded subsets of $X$. Then, the following are equivalent:

(i) $f$ is super coercive and uniformly convex on bounded subsets of $X$;

(ii) $\operatorname{dom} f^* = X^*$, and $f^*$ is bounded on bounded subsets and uniformly smooth on bounded subsets of $X^*$;

(iii) $\operatorname{dom} f^* = X^*$, $f^*$ is Fréchet differentiable and $\nabla f^*$ is uniformly norm-to-norm continuous on bounded subsets of $X^*$.
Theorem 3 
([42]). Suppose that $f : X \to (-\infty, +\infty]$ is a Legendre function. Then $f$ is totally convex on bounded subsets if and only if $f$ is uniformly convex on bounded subsets.
Lemma 6 
([43]). Let $C$ be a nonempty convex subset of $X$ and $f : C \to \mathbb{R}$ a convex and subdifferentiable function on $C$. Then, $f$ attains its minimum at $x \in C$ if and only if $0 \in \partial f(x) + N_C(x)$, where $N_C(x)$ is the normal cone of $C$ at $x$, that is,

$$N_C(x) := \{x^* \in X^* : \langle x - z, x^* \rangle \ge 0, \ \forall z \in C\}.$$
Lemma 7 
([44]). Let $f$ and $g$ be two convex functions on $X$ such that there is a point $x_0 \in \operatorname{dom} f \cap \operatorname{dom} g$ at which $f$ is continuous. Then,

$$\partial (f + g)(x) = \partial f(x) + \partial g(x), \quad \forall x \in X.$$
Let $C$ be a closed convex subset of $X$. A function $g : X \times X \to (-\infty, +\infty]$ such that $g(x, x) = 0$ for all $x \in C$ is called a bifunction. Throughout this paper, we consider bifunctions with the following properties:

  • $B_1$. $g$ is monotone on $C$; that is,

$$g(x, y) + g(y, x) \le 0, \quad \forall x, y \in C.$$

  • $B_2$. $g$ is pseudomonotone on $C$; that is,

$$g(x, y) \ge 0 \implies g(y, x) \le 0, \quad \forall x, y \in C.$$

  • $B_3$. $g$ is Bregman $\gamma$-strongly pseudomonotone on $C$ if there exists a constant $\gamma \ge 0$ such that

$$g(x, y) \ge 0 \implies g(y, x) \le -\gamma D_f(y, x), \quad \forall x, y \in C.$$

  • $B_4$. $g$ is Bregman–Lipschitz-type continuous on $C$; that is, there exist two positive constants $c_1, c_2$ such that

$$g(x, y) + g(y, z) \ge g(x, z) - c_1 D_f(y, x) - c_2 D_f(z, y), \quad \forall x, y, z \in C;$$

the constants $c_1, c_2$ are called Bregman–Lipschitz coefficients with respect to $f$ (see [19]).
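Condition $B_4$ is easy to exhibit concretely. For $f(x) = \frac{1}{2}\|x\|^2$, so that $D_f(a, b) = \frac{1}{2}\|a - b\|^2$, the bilinear bifunction $g(x, y) = \langle Ax, y - x \rangle$ satisfies $g(x, y) + g(y, z) - g(x, z) = -\langle A(x - y), z - y \rangle \ge -\|A\|\big(D_f(y, x) + D_f(z, y)\big)$, so $B_4$ holds with $c_1 = c_2 = \|A\|$. The sketch below (our own example, not from the paper) verifies this numerically.

```python
import numpy as np

# Numerical check (illustrative example): for f(x) = (1/2)||x||^2 the bilinear
# bifunction g(x, y) = <A x, y - x> is Bregman-Lipschitz-type continuous (B4)
# with coefficients c1 = c2 = ||A|| (spectral norm).
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
norm_A = np.linalg.norm(A, 2)                  # spectral norm of A
g = lambda x, y: (A @ x) @ (y - x)
D = lambda a, b: 0.5 * np.sum((a - b) ** 2)    # Bregman distance for this f

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2))
    lhs = g(x, y) + g(y, z)
    rhs = g(x, z) - norm_A * D(y, x) - norm_A * D(z, y)
    assert lhs >= rhs - 1e-9                   # B4 with c1 = c2 = ||A||
print("B4 holds with c1 = c2 = ||A||")
```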
Lemma 8 
([19]). Let $C$ be a nonempty closed convex subset of a reflexive Banach space $X$ and $f : X \to \mathbb{R}$ a Legendre and super coercive function. Suppose that $g : X \times X \to \mathbb{R}$ is a bifunction satisfying $B_1$–$B_4$. For arbitrary sequences $\{x_n\} \subset C$ and $\{\lambda_n\} \subset (0, +\infty)$, let $\{w_n\}$ and $\{z_n\}$ be the sequences generated by

$$w_n = \operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}, \qquad z_n = \operatorname{argmin}\{\lambda_n g(w_n, y) + D_f(y, x_n) : y \in C\}.$$

Then, for all $x^* \in \operatorname{EP}(g)$,

$$D_f(x^*, z_n) \le D_f(x^*, x_n) - (1 - \lambda_n c_1) D_f(w_n, x_n) - (1 - \lambda_n c_2) D_f(z_n, w_n).$$
Let $S : X \to X$ be a mapping. The set of fixed points of $S$ is

$$F(S) = \{x \in X : S(x) = x\}.$$

A point $p \in X$ is called an asymptotic fixed point of $S$ if $X$ contains a sequence $\{x_n\}$ with $x_n \rightharpoonup p$ such that $\|S x_n - x_n\| \to 0$. The set of asymptotic fixed points of $S$ is denoted by $\hat{F}(S)$. The term "asymptotic fixed point" was coined and used by Reich [45].
Definition 1. 
Let $S : X \to X$ be a mapping with $F(S) \neq \emptyset$. Then:

(i) $S$ is called Bregman quasinonexpansive if $D_f(y, Sx) \le D_f(y, x)$ for all $x \in X$ and $y \in F(S)$.

(ii) $S$ is called Bregman relatively nonexpansive if $S$ is Bregman quasinonexpansive and $F(S) = \hat{F}(S)$.
Bregman quasinonexpansive mappings were studied by Butnariu et al. [46]. Here, we assume that the bifunction $g$ satisfies the following conditions:

  • $A_1$. $g$ is pseudomonotone on $C$.

  • $A_2$. $g$ is Bregman–Lipschitz-type continuous on $C$.

  • $A_3$. $g(x, \cdot)$ is convex, lower semicontinuous and subdifferentiable on $X$ for every fixed $x \in X$.

  • $A_4$. $g$ is jointly weakly continuous on $X \times C$ in the sense that, if $x \in X$, $y \in C$ and $\{x_n\}$, $\{y_n\}$ converge weakly to $x$ and $y$, respectively, then $g(x_n, y_n) \to g(x, y)$ as $n \to \infty$.
Remark 1. 
If $g$ satisfies $A_1$–$A_4$, then $\operatorname{EP}(g)$ is closed and convex (see [35]). If $S$ is a Bregman quasinonexpansive mapping, then $F(S)$ is a closed convex subset of $X$ ([33], Proposition 1).
Lemma 9 
([47]). Let $f : X \to (-\infty, +\infty]$ be uniformly Fréchet differentiable and totally convex on bounded subsets of $X$. Let $C$ be a nonempty closed convex subset of $\operatorname{int} \operatorname{dom} f$, let $CB(C)$ denote the family of nonempty closed bounded subsets of $C$, and let $T : C \to CB(C)$ be a Bregman relatively nonexpansive mapping. Then, $F(T)$ is closed and convex.
Let $f : X \to (-\infty, +\infty]$ be a Gâteaux differentiable function and $x \in X$. Recall that the proximal mapping of a proper, convex and lower semicontinuous function $g : C \to (-\infty, +\infty]$ with respect to $f$ is defined by

$$\operatorname{Prox}_g^f(x) := \operatorname{argmin}\{g(y) + D_f(y, x) : y \in C\}.$$
Lemma 10 
([19]). Let $f : X \to (-\infty, +\infty]$ be a super coercive Legendre function. Let $x \in \operatorname{int} \operatorname{dom} f$, $C \subset \operatorname{int} \operatorname{dom} f$ and $g : C \to (-\infty, +\infty]$ a proper, convex and lower semicontinuous function. Then, the following inequality holds:

$$g(y) - g\big(\operatorname{Prox}_g^f(x)\big) + \big\langle \operatorname{Prox}_g^f(x) - y, \nabla f(x) - \nabla f(\operatorname{Prox}_g^f(x)) \big\rangle \ge 0, \quad \forall y \in C.$$
Lemma 11 
([48]). Let $\{s_n\}$ be a sequence of non-negative real numbers satisfying the inequality

$$s_{n+1} \le (1 - \alpha_n) s_n + \alpha_n \beta_n, \quad n \ge 0,$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the conditions:

(i) $\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;

(ii) $\limsup_{n \to \infty} \beta_n \le 0$, or $\sum_{n=0}^{\infty} |\alpha_n \beta_n| < \infty$.

Then, $\lim_{n \to \infty} s_n = 0$.
Lemma 12 
([49]). Let $\{a_n\}$ be a sequence of real numbers such that there exists a subsequence $\{n_i\}$ of $\mathbb{N}$ with $a_{n_i} < a_{n_i + 1}$ for all $i \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to \infty$ and the following properties are satisfied by all (sufficiently large) $k \in \mathbb{N}$:

$$a_{m_k} \le a_{m_k + 1} \quad \text{and} \quad a_k \le a_{m_k + 1}.$$

In fact, $m_k = \max\{j \le k : a_j < a_{j+1}\}$.
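Lemma 11 can be illustrated numerically (a toy example of ours, not from the paper): take $\alpha_n = \beta_n = 1/(n+1)$, which satisfy conditions (i) and (ii), and iterate the recursion with equality.

```python
# Numerical illustration of Lemma 11: alpha_n = 1/(n+1) lies in [0, 1] and is
# not summable, and beta_n = 1/(n+1) -> 0, so the recursion
# s_{n+1} = (1 - alpha_n) s_n + alpha_n beta_n must drive s_n to 0.
s = 10.0
for n in range(1, 200_000):
    alpha = 1.0 / (n + 1)
    beta = 1.0 / (n + 1)
    s = (1 - alpha) * s + alpha * beta
print(s)  # decays to 0 (roughly like log(n)/n)
```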

3. Main Results

In this section, we assume that $f : X \to \mathbb{R}$ is a Legendre, super coercive function that is totally convex on bounded subsets of $X$, that $\nabla f^*$ is bounded on bounded subsets of $\operatorname{int} \operatorname{dom} f^*$, and that the bifunction $g : X \times X \to \mathbb{R}$ satisfies $A_1$–$A_4$. We now present Algorithm 1 and prove a convergence theorem.
Algorithm 1 Subgradient extra-gradient algorithm
  • Initialization. Choose $\lambda_0 \in [\alpha, \beta] \subset (0, p)$, where $p = \min\{\frac{1}{c_1}, \frac{1}{c_2}\}$, $x_0 \in X$ and $\mu \in (0, 1)$. Set $n = 0$ and go to Step 1.

  • Step 1. Given the current iterate $x_n$, compute

$$y_n = \operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}.$$

  • Step 2. Choose $w_n \in \partial_2 g(x_n, y_n)$ such that $\nabla f(x_n) - \lambda_n w_n - \nabla f(y_n) \in N_C(y_n)$, and compute

$$z_n = \operatorname{argmin}\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in T_n\},$$

where

$$T_n = \{v \in X : \langle v - y_n, \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n) \rangle \le 0\}.$$

  • Step 3. Choose $\{\alpha_n\}$ and $\{\beta_n\}$ such that

$$\{\alpha_n\} \subset (0, 1), \quad \sum_{n=0}^{\infty} \alpha_n = \infty, \quad \lim_{n \to \infty} \alpha_n = 0 \quad \text{and} \quad \beta_n \in [a, b] \subset (0, 1),$$

then compute

$$t_n = \nabla f^*\big(\alpha_n \nabla f(x_0) + (1 - \alpha_n) \nabla f(z_n)\big), \qquad x_{n+1} = \nabla f^*\big(\beta_n \nabla f(z_n) + (1 - \beta_n) \nabla f(S t_n)\big),$$

and

$$
\lambda_{n+1} = \begin{cases}
\min\left\{ \dfrac{\mu\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big)}{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)},\ \lambda_n \right\}, & \text{if } g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0,\\[2mm]
\lambda_n, & \text{otherwise}.
\end{cases}
$$

  • Set $n := n + 1$ and go back to Step 1.
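As a sanity check, Algorithm 1 can be instantiated in the Euclidean setting (an illustrative sketch in our notation, not the authors' experiments): with $f(x) = \frac{1}{2}\|x\|^2$ we have $\nabla f = \nabla f^* = \mathrm{Id}$ and $D_f(x, y) = \frac{1}{2}\|x - y\|^2$, so every step is a closed-form projection. We take $g(x, y) = \langle Ax, y - x \rangle$ on $C = [-1, 1]^2$ and the Bregman relatively nonexpansive map $Sx = x/2$, so that $\Omega = F(S) \cap \operatorname{EP}(g) = \{0\}$ and the iterates should converge to $\operatorname{Proj}_\Omega^f(x_0) = 0$.

```python
import numpy as np

# Algorithm 1 with f(x) = (1/2)||x||^2 (grad f = grad f* = Id,
# D_f(a, b) = (1/2)||a - b||^2). Illustrative assumptions: g(x, y) = <A x, y - x>,
# C = [-1, 1]^2, S x = x / 2, hence Omega = F(S) ∩ EP(g) = {0}.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
proj_C = lambda u: np.clip(u, -1.0, 1.0)
S = lambda u: 0.5 * u
D = lambda a, b: 0.5 * np.sum((a - b) ** 2)

def proj_halfspace(u, a, yn):
    """Project u onto T = {v : <a, v - yn> <= 0}."""
    s = a @ (u - yn)
    return u if s <= 0 else u - (s / (a @ a)) * a

mu, lam = 0.3, 0.4                              # lambda_0 below p = min{1/c1, 1/c2}
x0 = np.array([0.8, 0.6]); x = x0.copy()
for n in range(500):
    alpha, beta = 1.0 / (n + 2), 0.5
    w = A @ x                                   # w_n in the partial subdifferential of g
    y = proj_C(x - lam * w)                     # Step 1
    z = proj_halfspace(x - lam * (A @ y), x - lam * w - y, y)   # Step 2
    denom = (A @ (x - y)) @ (z - y)             # g(x_n,z_n) - g(x_n,y_n) - g(y_n,z_n)
    if denom > 0:                               # adaptive step-size rule
        lam = min(mu * (D(z, y) + D(y, x)) / denom, lam)
    t = alpha * x0 + (1 - alpha) * z            # Step 3 (grad f* = Id)
    x = beta * z + (1 - beta) * S(t)
print(np.linalg.norm(x))                        # approaches Proj_Omega(x0) = 0
```

Note that, as in the statement of the algorithm, no Bregman–Lipschitz coefficient of $g$ is used; the step size $\lambda_n$ adapts itself.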
The following lemmas will be useful in the proof of the main theorem.
Lemma 13. 
The sequence $\{\lambda_n\}$ generated by Algorithm 1 is bounded below by $\min\left\{\dfrac{\mu}{\max(c_1, c_2)}, \lambda_0\right\}$.
Proof of Lemma 13. 
Since $g$ satisfies the Bregman–Lipschitz-type condition with constants $c_1$ and $c_2$, in the case $g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0$ we have

$$g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) \le c_1 D_f(y_n, x_n) + c_2 D_f(z_n, y_n) \le \max(c_1, c_2)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big).$$

Hence $\lambda_{n+1} \ge \min\left\{\frac{\mu}{\max(c_1, c_2)}, \lambda_n\right\}$, and by induction $\{\lambda_n\}$ is bounded from below: if $\lambda_0 \le \frac{\mu}{\max(c_1, c_2)}$, then $\{\lambda_n\}$ is bounded from below by $\lambda_0$; otherwise, it is bounded from below by $\frac{\mu}{\max(c_1, c_2)}$. □
Remark 2. 
It is obvious that the sequence $\{\lambda_n\}$ is nonincreasing, so the limit $\lambda := \lim_{n \to +\infty} \lambda_n$ exists and, by Lemma 13, $\lambda > 0$. If $\lambda_0 \le \frac{\mu}{\max(c_1, c_2)}$, then $\{\lambda_n\}$ is a constant sequence.
Lemma 14. 
The sequence $\{w_n\}$ generated by Algorithm 1 is well defined, and $C \subset T_n$.
Proof of Lemma 14. 
It follows from Lemmas 6 and 7 and condition $A_3$ that

$$y_n = \operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}$$

if and only if

$$0 \in \lambda_n \partial_2 g(x_n, y_n) + \nabla_1 D_f(y_n, x_n) + N_C(y_n),$$

where $\nabla_1 D_f(y_n, x_n) = \nabla f(y_n) - \nabla f(x_n)$. Hence, there exist $w_n \in \partial_2 g(x_n, y_n)$ and $\bar{w} \in N_C(y_n)$ such that

$$\lambda_n w_n + \nabla f(y_n) - \nabla f(x_n) + \bar{w} = 0.$$

Thus, for every $y \in C$ we have

$$\langle y - y_n, \nabla f(x_n) - \nabla f(y_n) \rangle = \langle y - y_n, \bar{w} + \lambda_n w_n \rangle = \langle y - y_n, \bar{w} \rangle + \langle y - y_n, \lambda_n w_n \rangle \le \langle y - y_n, \lambda_n w_n \rangle,$$

since $\langle y - y_n, \bar{w} \rangle \le 0$ by the definition of $N_C(y_n)$. This implies that $\langle y - y_n, \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n) \rangle \le 0$ for all $y \in C$. Hence, $C \subset T_n$. □
Lemma 15. 
Suppose that $S : X \to X$ is a Bregman quasinonexpansive mapping. Let $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ and $\{t_n\}$ be the sequences generated by Algorithm 1, and assume $F(S) \cap \operatorname{EP}(g) \neq \emptyset$. Then, the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ and $\{t_n\}$ are bounded.
Proof of Lemma 15. 
Since

$$z_n = \operatorname{argmin}\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in T_n\} = \operatorname{Prox}_{\lambda_n g(y_n, \cdot)}^f(x_n),$$

Lemma 10 yields

$$\lambda_n\big(g(y_n, z_n) - g(y_n, y)\big) \le \langle z_n - y, \nabla f(x_n) - \nabla f(z_n) \rangle, \quad \forall y \in T_n.$$

Note that

$$F(S) \cap \operatorname{EP}(g) \subset C \subset T_n.$$

Let $u \in F(S) \cap \operatorname{EP}(g)$. Substituting $y = u$ into the last inequality, we obtain

$$\lambda_n\big(g(y_n, z_n) - g(y_n, u)\big) \le \langle z_n - u, \nabla f(x_n) - \nabla f(z_n) \rangle.$$

From $u \in \operatorname{EP}(g)$, we obtain $g(u, y_n) \ge 0$, and hence $g(y_n, u) \le 0$ by the pseudomonotonicity of $g$. Since $\lambda_n > 0$, it follows that

$$\lambda_n g(y_n, z_n) \le \langle z_n - u, \nabla f(x_n) - \nabla f(z_n) \rangle.$$

Since $w_n \in \partial_2 g(x_n, y_n)$, we have

$$g(x_n, y) - g(x_n, y_n) \ge \langle y - y_n, w_n \rangle \quad \text{for all } y \in X.$$

Substituting $y = z_n$ and multiplying by $\lambda_n > 0$, we obtain

$$\lambda_n\big(g(x_n, z_n) - g(x_n, y_n)\big) \ge \lambda_n \langle z_n - y_n, w_n \rangle.$$

From the definition of $T_n$, we have

$$\langle z_n - y_n, \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n) \rangle \le 0,$$

that is,

$$\langle z_n - y_n, \nabla f(x_n) - \nabla f(y_n) \rangle \le \langle z_n - y_n, \lambda_n w_n \rangle.$$

Combining the last four estimates with the three-point identity, we obtain

$$\begin{aligned}
\lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big)
&\ge \langle u - z_n, \nabla f(x_n) - \nabla f(z_n) \rangle + \langle z_n - y_n, \lambda_n w_n \rangle\\
&\ge \langle u - z_n, \nabla f(x_n) - \nabla f(z_n) \rangle + \langle z_n - y_n, \nabla f(x_n) - \nabla f(y_n) \rangle\\
&= D_f(u, z_n) - D_f(u, x_n) + D_f(z_n, y_n) + D_f(y_n, x_n).
\end{aligned}$$

Hence,

$$D_f(u, z_n) \le \lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big) + D_f(u, x_n) - D_f(z_n, y_n) - D_f(y_n, x_n).$$

By the definition of $\lambda_{n+1}$, we have $\lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big) \le \frac{\mu \lambda_n}{\lambda_{n+1}}\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big)$, so that

$$D_f(u, z_n) \le \frac{\mu \lambda_n}{\lambda_{n+1}}\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big) + D_f(u, x_n) - D_f(z_n, y_n) - D_f(y_n, x_n).$$

On the other hand,

$$\lim_{n \to +\infty} \frac{\mu \lambda_n}{\lambda_{n+1}} = \mu \in (0, 1),$$

so there exists $N \in \mathbb{N}$ such that $0 < \frac{\mu \lambda_n}{\lambda_{n+1}} < 1$ for all $n \ge N$. Consequently, $D_f(u, z_n) \le D_f(u, x_n)$ for all $n \ge N$. Therefore,

$$\begin{aligned}
D_f(u, x_{n+1}) &= D_f\big(u, \nabla f^*(\beta_n \nabla f(z_n) + (1 - \beta_n) \nabla f(S t_n))\big)\\
&\le \beta_n D_f(u, z_n) + (1 - \beta_n) D_f(u, S t_n)\\
&\le \beta_n D_f(u, z_n) + (1 - \beta_n) D_f(u, t_n)\\
&= \beta_n D_f(u, z_n) + (1 - \beta_n) D_f\big(u, \nabla f^*(\alpha_n \nabla f(x_0) + (1 - \alpha_n) \nabla f(z_n))\big)\\
&\le \beta_n D_f(u, z_n) + (1 - \beta_n) \alpha_n D_f(u, x_0) + (1 - \beta_n)(1 - \alpha_n) D_f(u, z_n)\\
&\le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big) D_f(u, x_n) + (1 - \beta_n) \alpha_n D_f(u, x_0)\\
&\le \max\{D_f(u, x_n), D_f(u, x_0)\} \le \cdots \le \max\{D_f(u, x_N), D_f(u, x_0)\}
\end{aligned}$$

for all $n \ge N$, where the last line uses $\beta_n + (1 - \beta_n)(1 - \alpha_n) + \alpha_n(1 - \beta_n) = 1$. Therefore, the sequence $\{D_f(u, x_n)\}$ is bounded and, by Lemma 3, the sequence $\{x_n\}$ is bounded. Since $D_f(u, z_n) \le D_f(u, x_n)$, the sequence $\{z_n\}$ is bounded as well. Combining the above estimate for $D_f(u, x_{n+1})$ with Lemma 8, we derive

$$\begin{aligned}
D_f(u, x_{n+1}) &\le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big) D_f(u, z_n) + (1 - \beta_n) \alpha_n D_f(u, x_0)\\
&\le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big)\big(D_f(u, x_n) - (1 - \lambda_n c_1) D_f(y_n, x_n) - (1 - \lambda_n c_2) D_f(z_n, y_n)\big) + (1 - \beta_n) \alpha_n D_f(u, x_0)\\
&\le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big)\big(D_f(u, x_n) - (1 - \lambda_n c_1) D_f(y_n, x_n)\big) + (1 - \beta_n) \alpha_n D_f(u, x_0).
\end{aligned}$$

Consequently,

$$\big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big)(1 - \lambda_n c_1) D_f(y_n, x_n) \le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big) D_f(u, x_n) - D_f(u, x_{n+1}) + (1 - \beta_n) \alpha_n D_f(u, x_0).$$

Taking the limit superior in the last inequality as $n \to \infty$, we obtain $\lim_{n \to \infty} D_f(y_n, x_n) = 0$. Therefore, $\{y_n\}$ is bounded, and clearly $\{t_n\}$ is bounded. □
Now, we are ready to prove our main theorem.
Theorem 4. 
Let $S$ be a Bregman relatively nonexpansive mapping. Assume that $A_1$–$A_4$ are satisfied and $\Omega := F(S) \cap \operatorname{EP}(g) \neq \emptyset$. Then, the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to $\operatorname{Proj}_\Omega^f(x_0)$.
Proof of Theorem 4. 
By Remark 1 and Lemma 9, $\Omega$ is closed and convex. Set $x^\dagger = \operatorname{Proj}_\Omega^f(x_0)$. By Lemma 4, we have

$$\langle z - x^\dagger, \nabla f(x_0) - \nabla f(x^\dagger) \rangle \le 0, \quad \forall z \in \Omega.$$

From Lemma 8, we obtain $D_f(x^\dagger, z_n) \le D_f(x^\dagger, x_n)$ for all $n \in \mathbb{N}$. Therefore,

$$\begin{aligned}
D_f(x^\dagger, x_{n+1}) &= D_f\big(x^\dagger, \nabla f^*(\beta_n \nabla f(z_n) + (1 - \beta_n) \nabla f(S t_n))\big)\\
&\le \beta_n D_f(x^\dagger, z_n) + (1 - \beta_n) D_f(x^\dagger, S t_n)\\
&\le \beta_n D_f(x^\dagger, z_n) + (1 - \beta_n) D_f(x^\dagger, t_n)\\
&= \beta_n D_f(x^\dagger, z_n) + (1 - \beta_n) D_f\big(x^\dagger, \nabla f^*(\alpha_n \nabla f(x_0) + (1 - \alpha_n) \nabla f(z_n))\big)\\
&\le \beta_n D_f(x^\dagger, z_n) + (1 - \beta_n) \alpha_n D_f(x^\dagger, x_0) + (1 - \beta_n)(1 - \alpha_n) D_f(x^\dagger, z_n),
\end{aligned}$$

so that

$$D_f(x^\dagger, x_{n+1}) \le \big(\beta_n + (1 - \beta_n)(1 - \alpha_n)\big) D_f(x^\dagger, z_n) + (1 - \beta_n) \alpha_n D_f(x^\dagger, x_0).$$

From the proof of Lemma 15, we have

$$D_f(x^\dagger, z_n) \le D_f(x^\dagger, x_n) - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big).$$

Note that

$$\beta_n + (1 - \beta_n)(1 - \alpha_n) = 1 - \alpha_n(1 - \beta_n) < 1.$$

Combining the last three relations, we obtain

$$D_f(x^\dagger, x_{n+1}) \le D_f(x^\dagger, z_n) + (1 - \beta_n) \alpha_n D_f(x^\dagger, x_0) \le D_f(x^\dagger, x_n) - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big) + (1 - \beta_n) \alpha_n D_f(x^\dagger, x_0).$$
We divide the proof into two cases.

Case 1. Suppose there exists $N_1 \in \mathbb{N}$ ($N_1 \ge N$) such that

$$D_f(x^\dagger, x_{n+1}) \le D_f(x^\dagger, x_n)$$

for all $n \ge N_1$. Then the limit $\lim_{n \to \infty} D_f(x^\dagger, x_n)$ exists. By the last inequality above, we obtain

$$\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big) \le D_f(x^\dagger, x_n) - D_f(x^\dagger, x_{n+1}) + (1 - \beta_n) \alpha_n D_f(x^\dagger, x_0).$$

Since

$$\lim_{n \to \infty} \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right) = 1 - \mu > 0 \quad \text{and} \quad \lim_{n \to \infty} \alpha_n = 0,$$

we obtain

$$(1 - \mu) \limsup_{n \to \infty} \big(D_f(z_n, y_n) + D_f(y_n, x_n)\big) \le 0,$$

and hence

$$\lim_{n \to \infty} D_f(z_n, y_n) = \lim_{n \to \infty} D_f(y_n, x_n) = 0.$$

From Lemma 2, we get

$$\lim_{n \to \infty} \|y_n - x_n\| = \lim_{n \to \infty} \|z_n - y_n\| = 0.$$

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ which converges weakly to some $z_0 \in X$ and satisfies

$$\limsup_{n \to \infty} \langle x_n - x^\dagger, \nabla f(x_0) - \nabla f(x^\dagger) \rangle = \lim_{k \to \infty} \langle x_{n_k} - x^\dagger, \nabla f(x_0) - \nabla f(x^\dagger) \rangle = \langle z_0 - x^\dagger, \nabla f(x_0) - \nabla f(x^\dagger) \rangle.$$

Since $\|y_n - x_n\| \to 0$ and $x_{n_k} \rightharpoonup z_0$, we have $y_{n_k} \rightharpoonup z_0$ and $z_0 \in C$. Since

$$y_{n_k} = \operatorname{Prox}_{\lambda_{n_k} g(x_{n_k}, \cdot)}^f(x_{n_k}),$$

by Lemma 10 we deduce that

$$\lambda_{n_k}\big(g(x_{n_k}, y) - g(x_{n_k}, y_{n_k})\big) \ge \langle y - y_{n_k}, \nabla f(x_{n_k}) - \nabla f(y_{n_k}) \rangle, \quad \forall y \in C.$$

Letting $k \to \infty$ in the last inequality and using assumption $A_4$, the uniform continuity of $\nabla f$ on bounded subsets and $\lim_{k \to \infty} \lambda_{n_k} = \lambda > 0$, we obtain

$$\lambda\big(g(z_0, y) - g(z_0, z_0)\big) \ge 0, \quad \forall y \in C,$$

which implies that $g(z_0, y) \ge 0$ for all $y \in C$; that is, $z_0 \in \operatorname{EP}(g)$.
Next, we prove that $z_0 \in F(S)$. From $x_{n_k} \rightharpoonup z_0$ together with $\|z_n - y_n\| \to 0$ and $\|y_n - x_n\| \to 0$, we obtain $z_{n_k} \rightharpoonup z_0$. Since $\lim_{n \to \infty} \alpha_n = 0$, we have

$$D_f(z_{n_k}, t_{n_k}) = D_f\big(z_{n_k}, \nabla f^*(\alpha_{n_k} \nabla f(x_0) + (1 - \alpha_{n_k}) \nabla f(z_{n_k}))\big) \le \alpha_{n_k} D_f(z_{n_k}, x_0) + (1 - \alpha_{n_k}) D_f(z_{n_k}, z_{n_k}) = \alpha_{n_k} D_f(z_{n_k}, x_0),$$

so that

$$\lim_{k \to \infty} D_f(z_{n_k}, t_{n_k}) = 0.$$

By Lemma 2,

$$\lim_{k \to \infty} \|z_{n_k} - t_{n_k}\| = 0,$$

and thus $t_{n_k} \rightharpoonup z_0$. Let

$$r = \sup_n \{\|\nabla f(z_n)\|, \|\nabla f(S t_n)\|\}.$$

Since the sequences $\{z_n\}$ and $\{S t_n\}$ are bounded and $\nabla f$ is bounded on bounded subsets of $X$, we have $r < \infty$. In view of Lemma 1 and Theorem 1, $\operatorname{dom} f^* = X^*$, and $f^*$ is super coercive and uniformly convex on bounded subsets of $X^*$. Applying (5) and Lemma 5, we obtain
$$\begin{aligned}
D_f(x^\dagger, x_{n_k+1}) &= D_f\big(x^\dagger, \nabla f^*(\beta_{n_k} \nabla f(z_{n_k}) + (1 - \beta_{n_k}) \nabla f(S t_{n_k}))\big)\\
&= V_f\big(x^\dagger, \beta_{n_k} \nabla f(z_{n_k}) + (1 - \beta_{n_k}) \nabla f(S t_{n_k})\big)\\
&= f(x^\dagger) + f^*\big(\beta_{n_k} \nabla f(z_{n_k}) + (1 - \beta_{n_k}) \nabla f(S t_{n_k})\big) - \big\langle x^\dagger, \beta_{n_k} \nabla f(z_{n_k}) + (1 - \beta_{n_k}) \nabla f(S t_{n_k}) \big\rangle\\
&\le f(x^\dagger) + \beta_{n_k} f^*\big(\nabla f(z_{n_k})\big) + (1 - \beta_{n_k}) f^*\big(\nabla f(S t_{n_k})\big) - \beta_{n_k}(1 - \beta_{n_k}) \rho_r\big(\|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\|\big)\\
&\quad - \big\langle x^\dagger, \beta_{n_k} \nabla f(z_{n_k}) + (1 - \beta_{n_k}) \nabla f(S t_{n_k}) \big\rangle,
\end{aligned}$$

where $\rho_r$ is the gauge of uniform convexity of $f^*$. Since $S$ is a Bregman relatively nonexpansive mapping and

$$f(x) + f^*(x^*) = \langle x, x^* \rangle \quad \text{whenever } x^* = \nabla f(x),$$

we have

$$\begin{aligned}
D_f(x^\dagger, x_{n_k+1}) &\le f(x^\dagger) + \beta_{n_k}\big(\langle z_{n_k}, \nabla f(z_{n_k}) \rangle - f(z_{n_k})\big) + (1 - \beta_{n_k})\big(\langle S t_{n_k}, \nabla f(S t_{n_k}) \rangle - f(S t_{n_k})\big)\\
&\quad - \beta_{n_k} \langle x^\dagger, \nabla f(z_{n_k}) \rangle - (1 - \beta_{n_k}) \langle x^\dagger, \nabla f(S t_{n_k}) \rangle - \beta_{n_k}(1 - \beta_{n_k}) \rho_r\big(\|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\|\big)\\
&= \beta_{n_k} D_f(x^\dagger, z_{n_k}) + (1 - \beta_{n_k}) D_f(x^\dagger, S t_{n_k}) - \beta_{n_k}(1 - \beta_{n_k}) \rho_r\big(\|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\|\big).
\end{aligned}$$

Therefore,

$$\begin{aligned}
\beta_{n_k}(1 - \beta_{n_k}) \rho_r\big(\|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\|\big)
&\le \beta_{n_k} D_f(x^\dagger, z_{n_k}) + (1 - \beta_{n_k}) D_f(x^\dagger, S t_{n_k}) - D_f(x^\dagger, x_{n_k+1})\\
&\le \beta_{n_k} D_f(x^\dagger, z_{n_k}) + (1 - \beta_{n_k}) D_f(x^\dagger, t_{n_k}) - D_f(x^\dagger, x_{n_k+1})\\
&\le \beta_{n_k} D_f(x^\dagger, x_{n_k}) + (1 - \beta_{n_k})\big(\alpha_{n_k} D_f(x^\dagger, x_0) + (1 - \alpha_{n_k}) D_f(x^\dagger, x_{n_k})\big) - D_f(x^\dagger, x_{n_k+1}).
\end{aligned}$$
Passing to the limit as $k \to \infty$ in the last inequality (recall that in Case 1 the limit of $D_f(x^\dagger, x_n)$ exists and $\alpha_{n_k} \to 0$), we obtain

$$\lim_{k \to \infty} \rho_r\big(\|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\|\big) = 0.$$

We claim that

$$\lim_{k \to \infty} \|\nabla f(z_{n_k}) - \nabla f(S t_{n_k})\| = 0.$$

If this is not the case, there exist $\varepsilon_0 > 0$ and a subsequence $\{n_{k_m}\}$ of $\{n_k\}$ such that

$$\|\nabla f(z_{n_{k_m}}) - \nabla f(S t_{n_{k_m}})\| \ge \varepsilon_0.$$

Since $\rho_r$ is nondecreasing, we obtain

$$\rho_r(\varepsilon_0) \le \rho_r\big(\|\nabla f(z_{n_{k_m}}) - \nabla f(S t_{n_{k_m}})\|\big) \quad \text{for all } m \in \mathbb{N}.$$

Letting $m \to \infty$, we obtain $\rho_r(\varepsilon_0) \le 0$, which contradicts the uniform convexity of $f^*$ on bounded subsets of $X^*$. From Theorems 2 and 3, $\nabla f^*$ is uniformly continuous on bounded subsets of $X^*$; therefore, $\lim_{k \to \infty} \|z_{n_k} - S t_{n_k}\| = 0$. This, together with $\|z_{n_k} - t_{n_k}\| \to 0$ and the triangle inequality, gives

$$\lim_{k \to \infty} \|t_{n_k} - S t_{n_k}\| = 0.$$

Since $\nabla f$ is uniformly continuous on bounded subsets of $X$ ([50], Theorem 1.8) and $\lim_{k \to \infty} \big[f(t_{n_k}) - f(S t_{n_k})\big] = 0$, from the definition of the Bregman distance we obtain

$$\lim_{k \to \infty} D_f(t_{n_k}, S t_{n_k}) = 0,$$

and thus $z_0$ is an asymptotic fixed point of the Bregman relatively nonexpansive mapping $S$. Therefore, $z_0 \in \hat{F}(S) = F(S)$. Hence, $z_0 \in \Omega$.
We now prove that lim n D f ( x , x n ) = 0 . We have
D f ( x , t n ) = D f x , f ( α n f ( x 0 ) + ( 1 α n ) f ( z n ) ) = V f x , α n f ( x 0 ) + ( 1 α n ) f ( z n ) V f x , α n f ( x 0 ) + ( 1 α n ) f ( z n ) α n ( f ( x 0 ) f ( x ) ) + α n t n x , f ( x 0 ) f ( x ) = V f ( x , ( 1 α n ) f ( z n ) + α n f ( x ) ) + α n t n x , f ( x 0 ) f ( x ) ( 1 α n ) D f ( x , z n ) + α n D f ( x , x ) + α n t n x , f ( x 0 ) f ( x ) .
We have
$$
\begin{aligned}
D_f(x, x_{n+1}) &\le \beta_n D_f(x, z_n) + (1-\beta_n) D_f(x, t_n) \\
&\le \beta_n D_f(x, z_n) + (1-\beta_n)\big[(1-\alpha_n) D_f(x, z_n) + \alpha_n \langle t_n - x, \nabla f(x_0) - \nabla f(x)\rangle\big] \\
&= \big[\beta_n + (1-\beta_n)(1-\alpha_n)\big] D_f(x, z_n) + \alpha_n(1-\beta_n)\langle t_n - x, \nabla f(x_0) - \nabla f(x)\rangle \\
&= \big(1 - \alpha_n(1-\beta_n)\big) D_f(x, z_n) + \alpha_n(1-\beta_n)\langle t_n - x, \nabla f(x_0) - \nabla f(x)\rangle.
\end{aligned}
$$
Since $t_n \rightharpoonup z_0$ and $z_0 \in \Omega$, we obtain that
$$\limsup_{n\to\infty}\langle t_n - x, \nabla f(x_0) - \nabla f(x)\rangle = \langle z_0 - x, \nabla f(x_0) - \nabla f(x)\rangle \le 0.$$
From Lemma 11 and (28), we deduce that
lim n D f ( x , x n + 1 ) = 0 .
From Lemma 2, we have $\|x - x_{n+1}\| \to 0$. Since $x_{n_k} \rightharpoonup z_0$, we have $z_0 = x$.
Case 2 . There exists a subsequence { D f ( x , x n j ) } of { D f ( x , x n ) } such that
$$D_f(x, x_{n_j}) \le D_f(x, x_{n_j+1}) \quad \text{for all } j \in \mathbb{N}.$$
By Lemma 12, there exists an increasing sequence $\{m_k\} \subset \mathbb{N}$ such that $\lim_{k\to\infty} m_k = \infty$, and the following inequalities hold for all $k \in \mathbb{N}$:
$$0 \le D_f(x, x_{m_k}) \le D_f(x, x_{m_k+1}) \quad \text{and} \quad D_f(x, x_k) \le D_f(x, x_{m_k+1}).$$
From (22), we have
$$\Big(1 - \mu\frac{\lambda_n}{\lambda_{n+1}}\Big)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big) \le D_f(x, x_n) - D_f(x, x_{n+1}) + \alpha_n(1-\beta_n) D_f(x, x_0).$$
Substituting n = m k into the last inequality, we obtain
$$\Big(1 - \mu\frac{\lambda_{m_k}}{\lambda_{m_k+1}}\Big)\big(D_f(z_{m_k}, y_{m_k}) + D_f(y_{m_k}, x_{m_k})\big) \le D_f(x, x_{m_k}) - D_f(x, x_{m_k+1}) + \alpha_{m_k}(1-\beta_{m_k}) D_f(x, x_0).$$
From (20), we have
$$\lim_{k\to\infty}\Big(1 - \mu\frac{\lambda_{m_k}}{\lambda_{m_k+1}}\Big) = 1 - \mu > 0 \quad \text{and} \quad \lim_{k\to\infty}\alpha_{m_k} = 0,$$
and since $D_f(x, x_{m_k}) - D_f(x, x_{m_k+1}) \le 0$, we obtain
$$\lim_{k\to\infty} D_f(z_{m_k}, y_{m_k}) = \lim_{k\to\infty} D_f(y_{m_k}, x_{m_k}) = 0.$$
Using the same argument as in the proof of Case 1 and by (29), we obtain that
$$\limsup_{k\to\infty}\langle t_{m_k} - x, \nabla f(x_0) - \nabla f(x)\rangle \le 0.$$
From (28), for all $m_k \ge N_1$, we have
$$D_f(x, x_{m_k+1}) \le \big(1 - \alpha_{m_k}(1-\beta_{m_k})\big) D_f(x, x_{m_k}) + \alpha_{m_k}(1-\beta_{m_k})\langle t_{m_k} - x, \nabla f(x_0) - \nabla f(x)\rangle.$$
From (31) and Lemma 11, we derive that
lim k D f ( x , x m k + 1 ) = 0 .
On the other hand, since
$$D_f(x, x_k) \le D_f(x, x_{m_k+1}),$$
we obtain
$$\lim_{k\to\infty} D_f(x, x_k) = 0.$$
From Lemma 2, we obtain that $\lim_{k\to\infty}\|x - x_k\| = 0$. Therefore, $x_k \to x$, which is the desired result. □

4. Application

In this section, we consider the particular equilibrium problem corresponding to the bifunction $g$ defined for every $x, y \in X$ by $g(x, y) = \langle y - x, Ax\rangle$, with $A : X \to X^*$ being $L$-Lipschitz continuous; that is, there exists $L > 0$ such that
$$\|Ax - Ay\| \le L\|x - y\| \quad \text{for all } x, y \in X.$$
So, we obtain the classical variational inequality:
$$\text{Find } z \in C \text{ such that } \langle y - z, Az\rangle \ge 0 \text{ for all } y \in C.$$
The set of solutions to this problem is denoted by VI ( A , C ) . We have ([19], Lemma 4.1)
$$\operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\} = \operatorname{argmin}\{\lambda_n\langle y - x_n, Ax_n\rangle + D_f(y, x_n) : y \in C\} = \operatorname{Proj}^f_C\big(\nabla f^*(\nabla f(x_n) - \lambda_n Ax_n)\big).$$
Therefore, we derive that
$$\operatorname{argmin}\{\lambda_n\langle y - y_n, Ay_n\rangle + D_f(y, x_n) : y \in T_n\} = \operatorname{Proj}^f_{T_n}\big(\nabla f^*(\nabla f(x_n) - \lambda_n Ay_n)\big).$$
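In the Hilbert-space setting ($f = \frac12\|\cdot\|^2$, so $D_f(y, x) = \frac12\|y - x\|^2$ and $\nabla f$ is the identity), the reduction of the argmin subproblem to a metric projection can be checked numerically. The sketch below compares a brute-force grid minimization of $\lambda\langle y - x_n, Ax_n\rangle + \frac12|y - x_n|^2$ over $C = [0, 1]$ with the closed-form projection $P_C(x_n - \lambda Ax_n)$; the operator $A$ and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative Lipschitz operator and test values (our own choices)
A = lambda u: 3.0 * u - 1.0
x, lam = 0.9, 0.4

# Brute-force minimization of  lam*<y - x, Ax> + (1/2)|y - x|^2  over C = [0, 1]
ys = np.linspace(0.0, 1.0, 200001)
obj = lam * (ys - x) * A(x) + 0.5 * (ys - x) ** 2
y_grid = ys[np.argmin(obj)]

# Closed-form answer: the metric projection P_C(x - lam*A(x))
y_proj = min(max(x - lam * A(x), 0.0), 1.0)
assert abs(y_grid - y_proj) < 1e-4
```

The two answers agree up to the grid resolution, which mirrors the lemma: the Bregman projection with $f = \frac12\|\cdot\|^2$ is exactly the metric projection of the gradient step.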
Let $X$ be a real Banach space. The modulus of convexity $\delta_X : [0, 2] \to [0, 1]$ is defined by
$$\delta_X(\varepsilon) = \inf\Big\{1 - \Big\|\frac{x + y}{2}\Big\| : \|x\| = \|y\| = 1,\ \|x - y\| \ge \varepsilon\Big\}.$$
The space $X$ is called uniformly convex if $\delta_X(\varepsilon) > 0$ for every $\varepsilon \in (0, 2]$, and is called $p$-uniformly convex if $p \ge 2$ and there exists $c_p > 0$ such that $\delta_X(\varepsilon) \ge c_p\,\varepsilon^p$ for any $\varepsilon \in (0, 2]$.
The modulus of smoothness $\rho_X : [0, \infty) \to [0, \infty)$ is defined by
$$\rho_X(t) = \sup\Big\{\frac{\|x + ty\| + \|x - ty\|}{2} - 1 : \|x\| = \|y\| = 1\Big\}.$$
The space X is called uniformly smooth if
$$\lim_{t\to 0}\frac{\rho_X(t)}{t} = 0.$$
For a p-uniformly convex space, the metric and Bregman distance have the following relation [51]:
$$\tau\|x - y\|^p \le D_{\frac{1}{p}\|\cdot\|^p}(x, y) \le \langle x - y, J^p_X(x) - J^p_X(y)\rangle,$$
where $\tau > 0$ is a fixed number and the duality mapping $J^p_X : X \to 2^{X^*}$ is defined by
$$J^p_X(x) = \{f \in X^* : \langle x, f\rangle = \|x\|^p,\ \|f\| = \|x\|^{p-1}\},$$
for every $x \in X$. We know that $X$ is smooth if and only if $J^p_X$ is a single-valued mapping of $X$ into $X^*$. We also know that $X$ is reflexive if and only if $J^p_X$ is surjective, and $X$ is strictly convex if and only if $J^p_X$ is one-to-one. Therefore, if $X$ is a smooth, strictly convex and reflexive Banach space, then $J^p_X$ is a single-valued bijection and, in this case, $J^p_X = (J^q_{X^*})^{-1}$, where $J^q_{X^*}$ is the duality mapping of $X^*$ and $\frac{1}{p} + \frac{1}{q} = 1$.
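For $X = \mathbb{R}^n$ with the $p$-norm, the mapping $x \mapsto (|x_i|^{p-2}x_i)_i$ satisfies the two defining identities of $J^p_X$, which can be verified numerically. The sketch below is only a finite-dimensional illustration; the test vector and exponent are arbitrary choices.

```python
import numpy as np

def J_p(x, p):
    """Candidate duality mapping on (R^n, ||.||_p):
    J_p(x)_i = |x_i|^(p-2) x_i, which should satisfy
    <x, J_p(x)> = ||x||_p^p  and  ||J_p(x)||_q = ||x||_p^(p-1), 1/p + 1/q = 1."""
    return np.sign(x) * np.abs(x) ** (p - 1)

p = 3.0
q = p / (p - 1)                 # conjugate exponent
x = np.array([1.0, -2.0, 0.5])  # arbitrary test vector

f = J_p(x, p)
pairing = float(x @ f)                      # <x, f>
norm_x = np.linalg.norm(x, ord=p)
norm_f = np.linalg.norm(f, ord=q)

assert abs(pairing - norm_x ** p) < 1e-10   # <x, f> = ||x||^p
assert abs(norm_f - norm_x ** (p - 1)) < 1e-10   # ||f|| = ||x||^(p-1)
```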
For $p = 2$, the duality mapping $J^2_X$ is called the normalized duality mapping and is denoted by $J$. The function $\phi : X \times X \to \mathbb{R}$ is defined by
$$\phi(y, x) = \|y\|^2 - 2\langle y, Jx\rangle + \|x\|^2,$$
for all $x, y \in X$. The generalized projection $\Pi_C$ from $X$ onto $C$ is defined by
$$\Pi_C(x) = \operatorname{argmin}_{y \in C}\, \phi(y, x), \quad x \in X,$$
where C is a nonempty closed and convex subset of X .
Let $X$ be a uniformly smooth and uniformly convex Banach space and $f = \frac12\|\cdot\|^2$. Then
$$\nabla f = J, \quad D_{\frac12\|\cdot\|^2}(x, y) = \tfrac12\phi(x, y) \quad \text{and} \quad \operatorname{Proj}^{\frac12\|\cdot\|^2}_C = \Pi_C.$$
If X is a Hilbert space, then
$$\nabla f = I, \quad D_{\frac12\|\cdot\|^2}(x, y) = \tfrac12\|x - y\|^2 \quad \text{and} \quad \operatorname{Proj}^{\frac12\|\cdot\|^2}_C = P_C,$$
where P C is the metric projection.
Hence, we have the following corollary:
Corollary 1. 
Let $X$ be a uniformly smooth and 2-uniformly convex Banach space and $C$ be a nonempty, closed and convex subset of $X$. Let $S$ be a Bregman relatively nonexpansive mapping and $g(x, y) = \langle y - x, Ax\rangle$ for all $x, y \in X$, where $A : X \to X^*$ is a monotone and Lipschitz-continuous mapping. Suppose that $\Omega = F(S) \cap VI(A, C) \neq \emptyset$, $\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$, $\beta_n \in [a, b] \subset (0, 1)$, and $\{\lambda_n\}$ is the sequence defined in Algorithm 1. Then, the sequence $\{x_n\}$ generated by
$$
\begin{cases}
\lambda_0 > 0, \quad x_0 \in X, \quad \mu \in (0, 1),\\
y_n = \Pi_C\, J^{-1}\big(J(x_n) - \lambda_n A x_n\big),\\
T_n = \{x \in X : \langle x - y_n,\ J(x_n) - \lambda_n A x_n - J(y_n)\rangle \le 0\},\\
z_n = \Pi_{T_n}\, J^{-1}\big(J(x_n) - \lambda_n A y_n\big),\\
t_n = J^{-1}\big(\alpha_n J(x_0) + (1 - \alpha_n) J(z_n)\big),\\
x_{n+1} = J^{-1}\big(\beta_n J(z_n) + (1 - \beta_n) J(S t_n)\big),
\end{cases}
$$
converges strongly to x = Π Ω ( x 0 ) .
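In a Hilbert space, $J$ is the identity and $\Pi_C = P_C$, so the scheme of Corollary 1 can be sketched directly. The toy instance below is entirely illustrative: the unit-ball constraint, the choices $A = I$ (monotone, 1-Lipschitz) and $S = \frac12 I$, the constant stepsize replacing the adaptive rule, and all parameter values are our assumptions. For this instance $\Omega = F(S) \cap VI(A, C) = \{0\}$, so the iterates should approach $\Pi_\Omega(x_0) = 0$.

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto C = closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x, a, b):
    """Metric projection onto the halfspace {u : <a, u> <= b}."""
    v = a @ x - b
    return x if v <= 0.0 else x - v * a / (a @ a)

A = lambda u: u          # monotone, 1-Lipschitz (illustrative choice)
S = lambda u: 0.5 * u    # nonexpansive with F(S) = {0}

x0 = np.array([3.0, -2.0])
x, lam = x0.copy(), 0.4  # constant stepsize < 1/L (simplification)
for n in range(200):
    alpha, beta = 1.0 / (10 * n + 1), 0.5
    y = proj_ball(x - lam * A(x))
    a = x - lam * A(x) - y               # T_n = {u : <u - y, a> <= 0}
    w = x - lam * A(y)
    z = w if a @ a < 1e-16 else proj_halfspace(w, a, a @ y)
    t = alpha * x0 + (1 - alpha) * z
    x = beta * z + (1 - beta) * S(t)
# np.linalg.norm(x) is now small: the iterates approach 0
```

Note that when $a = 0$ the halfspace $T_n$ is the whole space, so the projection is the identity; the code handles this case separately.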

5. Numerical Experiment

In the following, two numerical experiments are considered to demonstrate the applicability of our main result.
Example 1. 
Let $X = \mathbb{R}$, $C = [0, 1]$, $f = \frac12|\cdot|^2$ and $Sx = \frac{x}{2}\sin(x)$, and consider $x_0 = \frac{10}{9} > 0$, $\beta_n = \frac12$, $\alpha_n = \frac{1}{10n+1}$ and $\lambda_0 = \frac12$, as well as $\mu = 0.9$ and $\varepsilon = 0.001$. Define the bifunction $g$ on $C \times C$ into $\mathbb{R}$ as follows:
$$g(x, y) = B(x)\,(y - x),$$
where
$$B(x) = \begin{cases} 0, & x \le \varepsilon,\\ \sin(x - \varepsilon), & \varepsilon \le x. \end{cases}$$
The bifunction g satisfies the conditions A 1 , A 3 , A 4 and A 5 . Furthermore,
$$g(x, y) + g(y, z) - g(x, z) = (y - z)\big(B(x) - B(y)\big) \ge -|y - z|\,|x - y| \ge -\frac{(y - z)^2}{2} - \frac{(x - y)^2}{2} = -D_{\frac12|\cdot|^2}(z, y) - D_{\frac12|\cdot|^2}(y, x),$$
which proves the condition A 2 with c 1 = c 2 = 1 . A simple computation shows that Algorithm 1 takes the following form:
$$
\begin{cases}
y_n = x_n - \lambda_n B(x_n), \quad T_n = X,\\
z_n = x_n - \lambda_n B(y_n),\\
t_n = \alpha_n x_0 + (1 - \alpha_n) z_n,\\
x_{n+1} = \beta_n z_n + (1 - \beta_n)\,\dfrac{t_n}{2}\sin(t_n),
\end{cases}
$$
$$
\lambda_{n+1} = \begin{cases}
\min\Big\{\dfrac{\mu\big[(x_n - y_n)^2 + (z_n - y_n)^2\big]}{2\,(z_n - y_n)\big(B(x_n) - B(y_n)\big)},\ \lambda_n\Big\}, & \text{if } (z_n - y_n)\big(B(x_n) - B(y_n)\big) > 0,\\[4pt]
\lambda_n, & \text{otherwise}.
\end{cases}
$$
The decreasing values of $x_n$ and also the values of $|x_n - x_{n+1}|$ are shown in Figure 1; we see that the sequences $\{|x_n - x_{n+1}|\}$ and $\{|x_n|\}$ converge to zero.
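The scalar recursion of Example 1, including the adaptive stepsize, is straightforward to implement. The sketch below follows the displayed formulas with the stated parameters; the factor 2 in the stepsize denominator reflects $D_{\frac12|\cdot|^2}(u, v) = \frac12(u - v)^2$ and is our reading of the rule, so the code mirrors the experiment only up to that assumption.

```python
import math

eps, mu = 1e-3, 0.9           # parameters of Example 1

def B(x):
    # piecewise term defining g(x, y) = B(x)(y - x)
    return 0.0 if x <= eps else math.sin(x - eps)

def S(x):
    # the mapping S x = (x/2) sin(x)
    return 0.5 * x * math.sin(x)

x0, lam = 10.0 / 9.0, 0.5     # x_0 = 10/9, lambda_0 = 1/2
x = x0
for n in range(100):
    alpha, beta = 1.0 / (10 * n + 1), 0.5
    y = x - lam * B(x)
    z = x - lam * B(y)
    t = alpha * x0 + (1 - alpha) * z
    x_next = beta * z + (1 - beta) * S(t)
    # adaptive stepsize: no Bregman-Lipschitz constants are needed
    d = (z - y) * (B(x) - B(y))
    if d > 0:
        lam = min(mu * ((x - y) ** 2 + (z - y) ** 2) / (2.0 * d), lam)
    x = x_next
# |x| is now close to zero, matching Figure 1
```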
Now, another numerical example is given in an infinite dimensional space to show that our algorithm is efficient. We will use some notations that were introduced in [52].
Example 2. 
Suppose that $X = L^2([0, 1])$ with norm $\|x\|^2 := \int_0^1 |x(t)|^2\,dt$ and inner product $\langle x, y\rangle := \int_0^1 x(t)y(t)\,dt$ for all $x, y \in X$. Let $C := \{x \in X : \|x\| \le 1\}$ be the unit ball. Define an operator $G : C \to X$ by
$$G(x)(t) = \int_0^1 \big(x(t) - F(t, s)\,h(x(s))\big)\,ds + g(t), \qquad x \in C,\ t \in [0, 1],$$
where
$$F(t, s) = \frac{2ts\,e^{t+s}}{e\sqrt{e^2 - 1}}, \qquad h(x) = \cos x, \qquad g(t) = \frac{2t\,e^t}{e\sqrt{e^2 - 1}}.$$
From [53], $G$ is monotone (hence pseudomonotone) and $L$-Lipschitz continuous with $L = 2$. The bifunction $g$ is defined by $g(x, y) = \langle G(x), y - x\rangle$, and $S : X \to X$ is defined by $S(x) = \frac{x}{2}$ and $f(x) = \frac12\|x\|^2$. We consider $x_0 = 1$, $\beta_n = \frac12$, $\alpha_n = \frac{1}{10n+1}$ and $\lambda_0 = \frac12$, as well as $\varepsilon = 10^{-6}$. The decreasing values of $\|x_n\|$ and also the values of $\|x_n - x_{n+1}\|$ are shown in Figure 2.
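A discretized version of this experiment can be sketched as follows. We replace $[0, 1]$ by a uniform grid, the integral by a Riemann sum, and, for simplicity, project onto $C$ itself rather than the halfspace $T_n$, with a constant stepsize. These simplifications, the grid size, and the iteration count are our own choices, so the sketch mirrors the experiment only qualitatively.

```python
import numpy as np

m = 200                                  # grid resolution (our choice)
t = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / m)                  # quadrature weights
c = np.e * np.sqrt(np.e ** 2 - 1.0)

F = 2.0 * np.outer(t, t) * np.exp(t[:, None] + t[None, :]) / c
gt = 2.0 * t * np.exp(t) / c

def G(x):
    # discretization of G(x)(t) = ∫ (x(t) - F(t, s) cos(x(s))) ds + g(t)
    return x - F @ (np.cos(x) * w) + gt

def proj_C(x):
    # projection onto the unit ball of L^2([0, 1])
    nrm = np.sqrt(np.sum(x ** 2 * w))
    return x if nrm <= 1.0 else x / nrm

x0 = np.ones(m)                          # x_0(t) = 1
x, lam = x0.copy(), 0.2                  # constant stepsize (simplification)
res_hist = []
for n in range(300):
    alpha, beta = 1.0 / (10 * n + 1), 0.5
    y = proj_C(x - lam * G(x))
    z = proj_C(x - lam * G(y))           # projects onto C instead of T_n
    tn = alpha * x0 + (1 - alpha) * z
    x_new = beta * z + (1 - beta) * 0.5 * tn   # S(x) = x / 2
    res_hist.append(np.sqrt(np.sum((x_new - x) ** 2 * w)))
    x = x_new
# res_hist decays, mirroring the behavior of ||x_n - x_{n+1}|| in Figure 2
```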

6. Conclusions

The equilibrium problem encompasses, among its particular cases, convex optimization problems, variational inequalities, fixed-point problems, Nash equilibrium problems and other problems of interest in many applications. This paper proposed a subgradient extra-gradient algorithm for finding a solution of an equilibrium problem involving a pseudomonotone and Bregman–Lipschitz-type bifunction which is also a fixed point of a Bregman relatively nonexpansive mapping in reflexive Banach spaces. We proved a strong convergence theorem for the proposed algorithm. Two numerical experiments were reported to illustrate its behavior.

Author Contributions

Writing—original draft, R.L., G.Z.E., J.-K.K. and M.T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
2. Kim, J.K.; Majee, P. Modified Krasnoselski Mann iterative method for hierarchical fixed point problem and split mixed equilibrium problem. J. Ineq. Appl. 2020, 2020, 227.
3. Kim, J.K.; Salahuddin. Existence of solutions for multi-valued equilibrium problems. Nonlinear Funct. Anal. Appl. 2018, 23, 779–795.
4. Muangchoo, K. A new explicit extragradient method for solving equilibrium problems with convex constraints. Nonlinear Funct. Anal. Appl. 2022, 27, 1–22.
5. Iusem, A.N.; Sosa, W. Iterative algorithms for equilibrium problems. Optimization 2003, 52, 301–316.
6. Kassay, G.; Reich, S.; Sabach, S. Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1319–1344.
7. Reich, S.; Sabach, S. Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. 2010, 73, 122–135.
8. Reich, S.; Sabach, S. A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 2011, 9, 101–116.
9. Takahashi, W.; Zembayashi, K. Strong convergence theorem by a new hybrid method for equilibrium problems and relatively nonexpansive mappings. Fixed Point Theory Appl. 2008, 2008, 528476.
10. Takahashi, W.; Zembayashi, K. Strong and weak convergence theorems for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70, 45–57.
11. Dadashi, V.; Iyiola, O.S.; Shehu, Y. The subgradient extragradient method for pseudomonotone equilibrium problems. Optimization 2020, 69, 901–923.
12. Anh, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 62, 271–283.
13. Joshi, M.; Tomar, A. On unique and nonunique fixed points in metric spaces and application to chemical sciences. J. Funct. Spaces 2021, 2021, 5525472.
14. Ozgur, N.Y.; Tas, N. Some fixed-circle theorems on metric spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 1433–1449.
15. Tomar, A.; Joshi, M.; Padaliya, S.K. Fixed point to fixed circle and activation function in partial metric space. J. Appl. Anal. 2022, 28, 57–66.
16. Anh, P.N. Strong convergence theorems for nonexpansive mappings and Ky Fan inequalities. J. Optim. Theory Appl. 2012, 154, 303–320.
17. Anh, P.N.; Kim, J.K.; Hien, N.D.; Hong, N.V. Strong convergence of inertial hybrid subgradient methods for solving equilibrium problems in Hilbert spaces. J. Nonlinear Convex Anal. 2023, 24, 499–514.
18. Anh, P.N.; Thach, H.T.C.; Kim, J.K. Proximal-like subgradient methods for solving multi-valued variational inequalities. Nonlinear Funct. Anal. Appl. 2020, 25, 437–451.
19. Eskandani, G.Z.; Raeisi, M.; Rassias, T.M. A hybrid extragradient method for pseudomonotone equilibrium problems by using Bregman distance. Fixed Point Theory Appl. 2018, 27, 120–132.
20. Wairojjana, N.; Pakkaranang, N. Halpern Tseng's extragradient methods for solving variational inequalities involving semistrictly quasimonotone operator. Nonlinear Funct. Anal. Appl. 2022, 27, 121–140.
21. Wairojjana, N.; Pholasa, N.; Pakkaranang, N. On strong convergence theorems for a viscosity-type Tseng's extragradient methods solving quasimonotone variational inequalities. Nonlinear Funct. Anal. Appl. 2022, 27, 381–403.
22. Yang, J.; Liu, H. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 2020, 14, 1803–1816.
23. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000.
24. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3, 615–647.
25. Abass, H.A.; Narain, O.K.; Onifade, O.M. Inertial extrapolation method for solving systems of monotone variational inclusion and fixed point problems using Bregman distance approach. Nonlinear Funct. Anal. Appl. 2023, 28, 497–520.
26. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42, 596–636.
27. Butnariu, D.; Censor, Y.; Reich, S. Iterative averaging of entropic projections for solving stochastic convex feasibility problems. Comput. Optim. Appl. 1997, 8, 21–39.
28. Butnariu, D.; Iusem, A.N. Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
29. Kim, J.K.; Tuyen, T.M. A parallel iterative method for a finite family of Bregman strongly nonexpansive mappings in reflexive Banach spaces. J. Korean Math. Soc. 2020, 57, 617–640.
30. Lotfikar, R.; Zamani Eskandani, G.; Kim, J.K. The subgradient extragradient method for solving monotone bilevel equilibrium problems using Bregman distance. Nonlinear Funct. Anal. Appl. 2023, 28, 337–363.
31. Reem, D.; Reich, S.; De Pierro, A. Re-examination of Bregman functions and new properties of their divergences. Optimization 2019, 68, 279–348.
32. Reich, S.; Sabach, S. A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10, 471–485.
33. Butnariu, D.; Resmerita, E. Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, 2006, 084919.
34. Sabach, S. Products of finitely many resolvents of maximal monotone mappings in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1289–1308.
35. Bregman, L.M. A relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217.
36. Alber, Y.I. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Lecture Notes in Pure and Applied Mathematics; Dekker: New York, NY, USA, 1996; Volume 178, pp. 15–50.
37. Censor, Y.; Lent, A. An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34, 321–353.
38. Kohsaka, F.; Takahashi, W. Proximal point algorithm with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6, 505–523.
39. Phelps, R.P. Convex Functions, Monotone Operators and Differentiability, 2nd ed.; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1993; Volume 1364.
40. Zălinescu, C. Convex Analysis in General Vector Spaces; World Scientific Publishing: Singapore, 2002.
41. Naraghirad, E.; Yao, J.C. Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 2013, 141.
42. Butnariu, D.; Iusem, A.N.; Zălinescu, C. On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces. J. Convex Anal. 2003, 10, 35–61.
43. Tiel, J.V. Convex Analysis: An Introductory Text; Wiley: Chichester, UK; New York, NY, USA, 1984.
44. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer Academic: Dordrecht, The Netherlands, 1990.
45. Reich, S. A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators; Marcel Dekker: New York, NY, USA, 1996; pp. 313–318.
46. Butnariu, D.; Reich, S.; Zaslavski, A.J. Asymptotic behavior of relatively nonexpansive operators in Banach spaces. J. Appl. Anal. 2001, 7, 151–174.
47. Shahzad, N.; Zegeye, H. Convergence theorem for common fixed points of a finite family of multi-valued Bregman relatively nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 152.
48. Xu, H.K. Another control condition in an iterative method for nonexpansive mappings. Bull. Austral. Math. Soc. 2002, 65, 109–113.
49. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
50. Ambrosetti, A.; Prodi, G. A Primer of Nonlinear Analysis; Cambridge University Press: Cambridge, UK, 1993.
51. Schöpfer, F.; Schuster, T.; Louis, A.K. An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Probl. 2008, 24, 055008.
52. Shehu, Y.; Dong, Q.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2018, 68, 385–409.
53. Hieu, D.V.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algor. 2016, 73, 197–217.
Figure 1. The plotting of $|x_n|$ and $|x_{n-1} - x_n|$.
Figure 2. The plotting of $\|x_n\|$ and $\|x_{n-1} - x_n\|$.