# A Parallel Hybrid Bregman Subgradient Extragradient Method for a System of Pseudomonotone Equilibrium and Fixed Point Problems

by
Annel Thembinkosi Bokodisa
,
Lateef Olakunle Jolaoso
* and
Maggie Aphane
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94 Medunsa 0204, Pretoria 0001, South Africa
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(2), 216; https://doi.org/10.3390/sym13020216
Submission received: 27 December 2020 / Revised: 17 January 2021 / Accepted: 19 January 2021 / Published: 28 January 2021

## Abstract

We introduce a new parallel hybrid subgradient extragradient method for solving a system of pseudomonotone equilibrium problems and a common fixed point problem in real reflexive Banach spaces. The algorithm is designed such that its convergence does not require prior estimation of the Lipschitz-like constants of the finite bifunctions underlying the equilibrium problems. Moreover, a strong convergence result is proven without imposing strong conditions on the control sequences. We further provide some numerical experiments to illustrate the performance of the proposed algorithm and compare it with some existing methods.
MSC:
49J40; 58E35; 65K15; 90C33

## 1. Introduction

In this paper, we consider the Equilibrium Problem (EP) in the framework of a real reflexive Banach space. Let E be a real reflexive Banach space, and let C be a nonempty, closed and convex subset of E. Let g: $C × C → R$ be a bifunction. The EP with respect to g is to find a point $p * ∈ C$ such that:
$g ( p * , y ) ≥ 0 , ∀ y ∈ C .$
We denote the set of solutions of (1) by $E P ( g )$. It is well known that numerous fascinating and complicated problems in nonlinear analysis, such as the complementarity, fixed point, Nash equilibrium, optimization, saddle point and variational inequality problems, can be written as an EP [1]. In view of the immense utility of the EP, it has attracted the interest of numerous researchers in recent years (see, for example, [2,3,4,5], and the references therein).
On the other hand, let $T : C → C$ be an operator. A point $x ∈ C$ is called a fixed point of T if $x = T x .$ We denote the set of fixed points of T by $F ( T ) .$ Fixed point theory has numerous applications in both pure and applied science. Its largest area of application is differential equations; others include economics, control theory, optimization and game theory. Different techniques have been presented for assessing and estimating the fixed points of nonexpansive and quasi-nonexpansive mappings; see [6,7,8,9,10,11], and the references therein.
The problem of finding a common solution of the EP and the fixed point problem, i.e.:
$Find p * ∈ C such that p * ∈ E P ( g ) ∩ F ( T ) ,$
has become an important area of research due to its possible applications in applied science. This happens mainly in image processing, network distribution, signal processing, etc. [12,13,14,15].
Tada and Takahashi [16] presented the following hybrid technique for finding the said element $p *$ of the set of solutions of the monotone EP and the set of fixed points of a nonexpansive mapping T in the framework of Hilbert spaces:
$x 0 ∈ C 0 = Q 0 = C , π n ∈ ( 0 , 1 ) , λ n ∈ ( 0 , ∞ ) z n ∈ C such that g ( z n , y ) + 1 λ n y − z n , z n − x n ≥ 0 , ∀ y ∈ C , β n = π n x n + ( 1 − π n ) T z n , C n = v ∈ C : β n − v ≤ x n − v , Q n = v ∈ C : x 0 − x n , v − x n ≤ 0 , x n + 1 = P C n ∩ Q n x 0 .$
Note that at each step for finding the intermediate approximation $z n$, we need to solve a strongly monotone regularized equilibrium problem, i.e.,
$Find z n ∈ C such that g ( z n , y ) + 1 λ n y − z n , z n − x n ≥ 0 , ∀ y ∈ C .$
If the bifunction g is not monotone, then subproblem (3) is not necessarily strongly monotone; hence, the regularization method cannot be applied to the problem. To overcome this predicament, Anh [17] presented the following iterative scheme for finding a common element of the set of fixed points of a nonexpansive mapping T and the set of solutions of the EP involving a pseudomonotone bifunction g in Hilbert spaces:
In order to establish the strong convergence of the sequence ${ x n }$ generated by (4) to $x * = P E P ( g ) ∩ F ( T ) π 0$, the author required that the positive stepsize sequence ${ σ n }$ satisfy a Lipschitz-like condition, i.e.,
$0 < σ n ≤ min 1 2 c 1 , 1 2 c 2 ,$
where $c 1$ and $c 2$ are the Lipschitz-like constants of g, that is $c 1$ and $c 2$ satisfy the following inequality: $g ( x , y ) + g ( y , z ) ≥ g ( x , z ) − c 1 D f ( y , x ) − c 2 D f ( z , y ) , ∀ x , y , z ∈ C$, where $D f$ is the associated Bregman distance of the convex function f.
In 2016, Hieu et al. [18], following a similar trend, introduced the following parallel hybrid method, called the Parallel Modified Extragradient Method (PMEM), for solving the finite family of equilibrium problems and the common fixed point of nonexpansive mappings in the framework of real Hilbert spaces:
$y n i = argmin ρ g i ( x n , y ) + 1 2 x n − y 2 : y ∈ C i = 1 , ⋯ , N , z n i = argmin ρ g i ( y n i , y ) + 1 2 x n − y 2 : y ∈ C i = 1 , ⋯ , N , i n = Argmax z n i − x n : i = 1 , 2 , ⋯ , N , z ¯ n : = z n i n , u n j = α n x n + ( 1 − α n ) T j z ¯ n , j = 1 , ⋯ , M , j n = Argmax u n j − x n : j = 1 , 2 , ⋯ , M , u ¯ n : = u n j n , C n = v ∈ C : u ¯ n − v ≤ x n − v , Q n = v ∈ C : x 0 − x n , v − x n ≤ 0 , x n + 1 = P C n ∩ Q n x 0 .$
They also proved a strong convergence result for the sequence ${ x n }$ generated by (6) provided the stepsize $ρ$ satisfies the following condition:
$0 < ρ < min 1 2 c 1 , 1 2 c 2 ,$
where $c 1 = max { c 1 , i : i = 1 , 2 , ⋯ , N }$ and $c 2 = max { c 2 , i : i = 1 , 2 , ⋯ , N }$, such that $c 1 , i$ and $c 2 , i$ are the Lipschitz-like constants for $g i$ for $i = 1 , 2 , ⋯ , N .$
In 2014, Shahzad and Zegeye [11], in the framework of real reflexive Banach spaces and for approximating the common fixed point of multi-valued Bregman relatively nonexpansive mappings, presented the following iterative scheme:
$u , x 0 ∈ E , α n ∈ ( 0 , 1 ) , { β n , i } i = 0 N ⊂ ( 0 , 1 ) w n = P r o j C f ( α n ∇ f ( u ) + ( 1 − α n ) ∇ f ( x n ) ) , x n + 1 = ∇ f * ( β n , 0 ∇ f ( w n ) + ∑ i = 1 N β i ∇ f ( u i , n ) ) ,$
where $u i , n ∈ T i w n$ and C is a nonempty closed convex subset of int domf (i.e., the interior of the domain of f). Under some mild conditions on the parameters, the authors proved the strong convergence of the sequence $x n$ to $P r o j F f u$, where $F = ⋂ i = 1 N F ( T i )$.
Very recently, Eskandani et al. [19], using the Hybrid Parallel extragradient method (HPA), introduced a Bregman–Lipschitz-type condition for a pseudomonotone bifunction. For estimating this common point $p *$ for a finite family of multi-valued Bregman relatively nonexpansive mappings in reflexive Banach spaces, the following algorithm, called HPA, was presented:
$x 0 ∈ C , α n ∈ ( 0 , 1 ) , { β n , r } r = 0 M ⊂ ( 0 , 1 ) , w n i = argmin σ n g i ( x n , β ) + D f ( β , x n ) : β ∈ C i = 1 , ⋯ , N , z n i = argmin σ n g i ( w n i , β ) + D f ( β , x n ) : β ∈ C i = 1 , ⋯ , N , i n ∈ Argmax D f ( z n i , x n ) : i = 1 , 2 , ⋯ , N , z ¯ n : = z n i n , y n = ∇ f * ( β n , 0 ∇ f ( z ¯ n ) + ∑ r = 1 M β n , r ∇ f ( z n , r ) ) , z n , r ∈ T r z ¯ n , x n + 1 = P r o j C f ( ∇ f * ( α n ∇ f ( u ) + ( 1 − α n ) ∇ f ( y n ) ) ) ,$
where $D f$ is the Bregman distance associated with a function f. Under specific suppositions, they established that the sequence $x n$ converges strongly to $P r o j Ω f u$, where $Ω = ⋂ r = 1 M F ( T r ) ∩ ⋂ i = 1 N E P ( g i ) ≠ ∅$, provided that the stepsize $σ n$ satisfies the condition:
${ σ n } ⊂ [ a , b ] ⊂ ( 0 , q ) , where q = min 1 c 1 , 1 c 2 ,$
where $c 1$ = $max 1 ≤ i ≤ N { c i , 1 }$, $c 2 = max 1 ≤ i ≤ N { c i , 2 }$ and $c i , 1 , c i , 2$ are the Bregman–Lipschitz coefficients of the bifunction $g i$ for $i = 1 , 2 , ⋯ , N .$
It is worth noting that the results of Anh, Hieu and Eskandani et al. mentioned above, and other similar ones (see, e.g., [20,21,22]), require prior knowledge of the Lipschitz-like constants, which has proven very difficult to obtain in practice. In fact, even when it is possible, the estimates are often too small, which slows down the convergence of the algorithms. Thus, it becomes very important to find an algorithm that does not depend on prior knowledge of the Lipschitz-like constants. Recently, the authors of [23,24,25] introduced some modified extragradient algorithms for solving the pseudomonotone EP (when $N = 1$), which do not involve a prior estimate of the Lipschitz-like constants $c 1$ and $c 2$. Also related to our work are methods such as [26], where a variable stepsize formula is used: the bifunction satisfies a Lipschitz-like condition, yet the algorithm operates without prior estimation of the Lipschitz-type constants. The authors in [27,28] considered algorithms for solving the mixed equilibrium problem, split variational inclusion and fixed point theorems. Lastly, we mention in passing that the authors in [29] considered a convex feasibility problem, that is, finding a common element in the intersection of a finite family of convex sets $C i$; whereas, in our work, we consider a finite family of pseudomonotone bifunctions, and our C is fixed.
Motivated by the results above, in this paper, we provide another subgradient extragradient technique for finding a common element of the set of solutions of equilibrium problems for a finite family of pseudomonotone bifunctions and the set of common fixed points of a finite family of Bregman relatively nonexpansive mappings in the framework of reflexive Banach spaces. Our algorithm is designed in a way that its convergence does not need prior knowledge of the Lipschitz-like constants of the bifunctions $g i$ for $i = 1 , 2 , ⋯ , N .$ Under specific mild assumptions, we prove a strong convergence result for the sequence generated by our algorithm. Furthermore, we give some numerical examples to demonstrate the proficiency, competitiveness and efficiency of our algorithm with respect to other algorithms in the literature.

## 2. Preliminaries

Throughout this work, E and C are as defined in the Introduction, and we denote the dual space of E by $E *$. Let $f : E → ( − ∞ , ∞ ]$ be a proper convex and lower semicontinuous function. We denote the domain of f by dom f, which is the set ${ x ∈ E : f ( x ) < ∞ }$. Letting $x ∈$ int dom f, we define the subdifferential of the function f at x as the convex set:
$∂ f ( x ) = { ς ∈ E * : f ( x ) + 〈 y − x , ς 〉 ≤ f ( y ) , ∀ y ∈ E } .$
We also define the Fenchel conjugate of f, as the function:
$f * : E * → ( − ∞ , ∞ ] , such that f * ( ς ) = sup { 〈 ς , x 〉 − f ( x ) : x ∈ E } .$
It is easy to note that $f *$ is a proper convex and lower semicontinuous function. A function f on E is said to be coercive [30] if:
$lim ∥ x ∥ → + ∞ f ( x ) = + ∞ .$
Now, considering any convex mapping $f : E → ( − ∞ , ∞ ]$, the directional derivative of f, denoted by $f ∘ ( x , y )$, at $x ∈$ int domf in the direction of y, is given by:
$f ∘ ( x , y ) : = lim t → 0 + f ( x + t y ) − f ( x ) t .$
If the limit in (10) exists for every $y ∈ E$, then f is called Gâteaux differentiable at x. The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at every point x in the domain of f. When the limit as $t → 0 +$ in (10) is attained uniformly for all $y ∈ E$ with $∥ y ∥ = 1$, we say that f is Fréchet differentiable at x. In the sequel, we assume that f is an admissible function, i.e., f is proper, convex, lower semicontinuous and Gâteaux differentiable. In this case, f is continuous in the interior of the domain of f (int dom f) (see [31,32]). Also, f is said to be Legendre if it satisfies the following two conditions:
L1.
int dom $f ≠ ∅$, and the subdifferential $∂ f$ is single-valued on its domain;
L2.
int dom $f * ≠ ∅$, and $∂ f *$ is single-valued on its domain.
According to ([33], p. 83), it is notable that in a reflexive Banach space $E$, $∇ f * = ( ∇ f ) − 1$, where $∇ f$ denotes the gradient of f. Combining this fact with (L1) and (L2), we get:
$ran ∇ f = dom ∇ f * = int dom f * and ran ∇ f * = dom ∇ f = int dom f .$
Likewise, according to ([34], Corollary 5.5, p. 634), f is Legendre if and only if $f *$ is Legendre, and the functions f and $f *$ are Gâteaux differentiable and strictly convex on int domf and int dom$f *$, respectively.
In 1967, Bregman [35] introduced an elegant and effective tool for designing and analyzing optimization algorithms. In what follows, we presume that $f : E → ( − ∞ , ∞ ]$ is a Gâteaux differentiable function. The Bregman distance is the bifunction $D f : dom f × int dom f → [ 0 , + ∞ )$ defined by:
$D f ( y , x ) = f ( y ) − f ( x ) − 〈 ∇ f ( x ) , y − x 〉 , y ∈ dom f , x ∈ int dom f .$
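As a quick illustration outside the paper's generality: for the Euclidean choice $f ( x ) = 1 2 ∥ x ∥ 2$ on $E = R n$ one has $∇ f ( x ) = x$, and the Bregman distance reduces to $1 2 ∥ y − x ∥ 2$. A minimal numerical sketch under this assumed choice of f:

```python
import numpy as np

def bregman_distance(f, grad_f, y, x):
    # D_f(y, x) = f(y) - f(x) - <grad f(x), y - x>
    return f(y) - f(x) - np.dot(grad_f(x), y - x)

f = lambda x: 0.5 * np.dot(x, x)   # f(x) = ||x||^2 / 2
grad_f = lambda x: x               # its gradient

x = np.array([1.0, -2.0])
y = np.array([3.0, 0.5])
d = bregman_distance(f, grad_f, y, x)
# For this f, D_f(y, x) equals ||y - x||^2 / 2
assert np.isclose(d, 0.5 * np.linalg.norm(y - x) ** 2)
```

Note that $D f$ is not symmetric in general (e.g., for the negative entropy it is the Kullback–Leibler divergence); the Euclidean case above is the exceptional symmetric one.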
The Bregman distance satisfies the following important property called the three point identity: for any $x ∈ dom f$ and $y , z ∈ int dom f$,
$D f ( x , y ) + D f ( y , z ) − D f ( x , z ) = 〈 ∇ f ( z ) − ∇ f ( y ) , x − y 〉 .$
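The three point identity can be verified numerically for any smooth convex f. The sketch below uses the negative-entropy choice $f ( u ) = ∑ u i log u i$ on the positive orthant (an assumption for illustration; its Bregman distance is the Kullback–Leibler divergence):

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda u: np.sum(u * np.log(u))        # negative entropy
grad_f = lambda u: 1.0 + np.log(u)         # its gradient

def D(a, b):
    # Bregman distance D_f(a, b)
    return f(a) - f(b) - np.dot(grad_f(b), a - b)

x, y, z = rng.uniform(0.5, 2.0, size=(3, 4))
lhs = D(x, y) + D(y, z) - D(x, z)
rhs = np.dot(grad_f(z) - grad_f(y), x - y)
assert np.isclose(lhs, rhs)   # the three point identity (11)
```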
According to ([36], Section 1.2, p. 17), the modulus of total convexity of f at $x ∈ int dom f$ is the function $v f ( x , · ) : [ 0 , + ∞ ) → [ 0 , ∞ ]$, defined by:
$v f ( x , s ) : = inf { D f ( y , x ) : y ∈ int dom f , ∥ y − x ∥ = s } .$
The function f is called totally convex at $x ∈ int dom f$ if $v f ( x , s ) > 0$ for any $s > 0$. The function f is said to be totally convex when it is totally convex at every point $x ∈ int dom f$. We mention in passing that f is totally convex on bounded subsets if and only if f is uniformly convex on bounded subsets (see [36]). Following [37], f is said to be sequentially consistent if, for any bounded sequence ${ σ n }$ and any other sequence ${ ψ n }$ in E:
$lim n → ∞ D f ( σ n , ψ n ) = 0 ⇒ lim n → ∞ ∥ σ n − ψ n ∥ = 0 .$
Lemma 1
([31]). If $f : E → R$ is uniformly Fréchet differentiable and bounded on bounded subsets of E, then $∇ f$ is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of $E *$.
Lemma 2
([36]). If domf contains at least two points, then the function f is totally convex on bounded sets if and only if the function f is sequentially consistent.
Lemma 3
([32]). Let $f : E → R$ be a Gâteaux differentiable and totally convex function. If $π 0 ∈ E$ and the sequence ${ D f ( σ n , π 0 ) }$ is bounded, then the sequence ${ σ n }$ is also bounded.
The Bregman projection (cf. [35]) with respect to f of $x ∈ int dom f$ onto $C ⊂ int dom f$ is defined as the unique vector $Proj C f ( x ) ∈ C$ satisfying:
$D f ( Proj C f ( x ) , x ) = inf { D f ( y , x ) : y ∈ C } .$
Similar to the metric projection in Hilbert spaces, the Bregman projection with respect to totally convex and Gâteaux differentiable functions has a variational characterization ([38], Corollary 4.4, p. 23).
Lemma 4
([38]). Assume that f is Gâteaux differentiable and totally convex on int domf. Let $x ∈ int dom f$ and $C ⊂ int dom f$ be a nonempty, closed, and convex set. If $x ^ ∈ C$, then the following conditions are equivalent:
M1.
The vector $x ^ ∈ C$ is the Bregman projection of x onto C with respect to f.
M2.
The vector $x ^ ∈ C$ is the unique solution of the variational inequality:
$〈 x ^ − y , ∇ f ( x ) − ∇ f ( x ^ ) 〉 ≥ 0 , ∀ y ∈ C .$
M3.
The vector $x ^$ is the unique solution of the inequality:
$D f ( y , x ^ ) + D f ( x ^ , x ) ≤ D f ( y , x ) , ∀ y ∈ C .$
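For intuition (again in the illustrative Euclidean setting $f = 1 2 ∥ · ∥ 2$, where the Bregman projection is the ordinary metric projection), condition M3 can be checked numerically. A sketch, assuming C is the closed unit ball:

```python
import numpy as np

def proj_ball(x):
    # Euclidean (= Bregman, for f = ||.||^2/2) projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def D(a, b):
    # Bregman distance of f = ||.||^2/2
    return 0.5 * np.linalg.norm(a - b) ** 2

x = np.array([3.0, 4.0])
xhat = proj_ball(x)                     # Bregman projection of x onto C

# Condition M3: D(y, xhat) + D(xhat, x) <= D(y, x) for every y in C
rng = np.random.default_rng(1)
for _ in range(100):
    y = proj_ball(rng.normal(size=2))   # an arbitrary point of C
    assert D(y, xhat) + D(xhat, x) <= D(y, x) + 1e-9
```

For the metric projection, M3 is equivalent to the familiar obtuse-angle condition $〈 y − x ^ , x − x ^ 〉 ≤ 0$ for all $y ∈ C$.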
Definition 1.
Let $T : C → C$ be a mapping. A point x is called a fixed point of T if $T x = x$. The set of fixed points of T is denoted by $F ( T )$. Furthermore, a point $x * ∈ C$ is said to be an asymptotic fixed point of T if C contains a sequence ${ x n } n = 1 ∞$ that converges weakly to $x *$, and $lim n → ∞ ∥ x n − T x n ∥ = 0$. The set of asymptotic fixed points of T is denoted by $F ^ ( T )$.
Definition 2.
Let C be a nonempty, closed and convex subset of E. A mapping $T : C → int dom f$ is called:
i.
Bregman firmly nonexpansive (BFNE) if:
$〈 ∇ f ( T x ) − ∇ f ( T y ) , T x − T y 〉 ≤ 〈 ∇ f ( x ) − ∇ f ( y ) , T x − T y 〉 , for all x , y ∈ C .$
ii.
Bregman strongly nonexpansive (BSNE) with respect to a nonempty $F ^$(T) if:
$D f ( p , T x ) ≤ D f ( p , x ) ,$
for all $p ∈ F ^ ( T )$ and $x ∈ C$, and if whenever ${ x n } n = 1 ∞ ⊂ C$ is bounded, $p ∈ F ^ ( T )$ and:
$lim n → ∞ ( D f ( p , x n ) − D f ( p , T x n ) ) = 0 , it follows that lim n → ∞ D f ( p , x n ) = 0 ,$
iii.
Bregman relatively nonexpansive (BRNE) if:
$D f ( p , T x ) ≤ D f ( p , x ) , ∀ x ∈ C , p ∈ F ( T ) , and F ^ ( T ) = F ( T ) .$
iv.
Quasi-Bregman nonexpansive (QBNE) if $F ( T ) ≠ ∅$ and:
$D f ( p , T x ) ≤ D f ( p , x ) , for all x ∈ C , p ∈ F ( T ) .$
It was mentioned in [39] that when $F ^ ( T ) = F ( T )$, the following inclusions hold:
$BFNE ⊂ BSNE ⊂ QBNE .$
In this case, a QBNE mapping is Bregman relatively nonexpansive.
Following [40,41], we define a map $V f : E × E * → [ 0 , + ∞ ]$ with respect to f by:
$V f ( μ , μ * ) = f ( μ ) − 〈 μ , μ * 〉 + f * ( μ * ) , ∀ μ ∈ E , μ * ∈ E * .$
Then:
$V f ( μ , μ * ) = D f ( μ , ∇ f * ( μ * ) ) , ∀ μ ∈ E , μ * ∈ E * .$
More so, by the subdifferential inequality, we have:
$V f ( μ , μ * ) + 〈 ∇ f * ( μ * ) − μ , y * 〉 ≤ V f ( μ , μ * + y * ) ,$
for all $μ ∈ E , μ * , y * ∈ E *$ (see [42]). In addition, if $f : E → ( − ∞ , ∞ ]$ is a proper lower semicontinuous function, then $f * : E * → ( − ∞ , ∞ ]$ is a proper weak* lower semicontinuous and convex function. Hence, $V f$ is convex in the second variable. Then, for all $z ∈ E$, we have:
$D f z , ∇ f * ∑ i = 1 N t i ∇ f ( x i ) ≤ ∑ i = 1 N t i D f ( z , x i ) ,$
where ${ x i } i = 1 N ⊂ E$ and ${ t i } i = 1 N ⊂ ( 0 , 1 )$ with $∑ i = 1 N t i = 1$.
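This Jensen-type bound can be checked directly in the Euclidean case $f = 1 2 ∥ · ∥ 2$ (an assumption for illustration: there both $∇ f$ and $∇ f *$ are the identity, so the inner expression is just the convex combination of the $x i$):

```python
import numpy as np

rng = np.random.default_rng(2)
D = lambda a, b: 0.5 * np.linalg.norm(a - b) ** 2   # D_f for f = ||.||^2/2

N = 3
xs = rng.normal(size=(N, 4))                  # points x_1, ..., x_N
t = rng.uniform(size=N); t /= t.sum()         # convex weights t_i
z = rng.normal(size=4)

# grad f* of sum t_i grad f(x_i) is just the convex combination here
lhs = D(z, np.tensordot(t, xs, axes=1))
rhs = sum(ti * D(z, xi) for ti, xi in zip(t, xs))
assert lhs <= rhs + 1e-12
```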
Let B and S be the closed unit ball and the unit sphere of a Banach space E, respectively. Then, the function $f : E → R$ is said to be uniformly convex on bounded subsets (see [43]) if $ν r ( t ) > 0$ for all $r , t > 0$, where $ν r : [ 0 , ∞ ) → [ 0 , ∞ ]$ is defined by:
$ν r ( t ) = inf x , y ∈ r B , ∥ x − y ∥ = t , κ ∈ ( 0 , 1 ) κ f ( x ) + ( 1 − κ ) f ( y ) − f ( κ x + ( 1 − κ ) y ) κ ( 1 − κ ) ,$
$∀ t ≥ 0$. The function $ν r$ is called the gauge of uniform convexity of f. It is known that $ν r$ is a nondecreasing function. If f is uniformly convex, then the following lemma is known:
Lemma 5
([44]). Let E be a Banach space, $r > 0$ be a constant and $f : E → R$ be a uniformly convex function on bounded subsets of E. Then:
$f ∑ k = 0 n a k x k ≤ ∑ k = 0 n a k f ( x k ) − a i a j ρ r ( ∥ x i − x j ∥ ) ,$
for all $i , j ∈ { 0 , 1 , 2 , ⋯ , n }$, $x k ∈ r B$, $a k ∈ ( 0 , 1 )$ and $k = 0 , 1 , 2 , ⋯ , n$ with $∑ k = 0 n a k = 1$, where $ρ r$ is the gauge of the uniform convexity of f.
Lemma 6
([45]). Suppose that $f : E → ( − ∞ , + ∞ ]$ is a Legendre function. The function f is totally convex on bounded subsets if and only if f is uniformly convex on bounded subsets.
Lemma 7
([46]). Let C be a nonempty convex subset of E and $f : C → R$ be a convex and subdifferentiable function on C. Then, f attains its minimum at $x ∈ C$ if and only if $0 ∈ ∂ f ( x ) + N C ( x )$, where $N C ( x )$ is the normal cone of C at x, that is:
$N C ( x ) : = { x * ∈ E * : 〈 x − z , x * 〉 ≥ 0 , ∀ z ∈ C } .$
Throughout this paper, we assume that the following assumptions hold on any $g : C × C → R$:
A1.
g is pseudomonotone, i.e., $g ( x , y ) ≥ 0$ implies $g ( y , x ) ≤ 0$ for all $x , y ∈ C$,
A2.
g satisfies a Bregman–Lipschitz-type condition, i.e., there exist two positive constants $c 1 , c 2$, such that:
$g ( x , y ) + g ( y , z ) ≥ g ( x , z ) − c 1 D f ( y , x ) − c 2 D f ( z , y ) , ∀ x , y , z ∈ C .$
A3.
$g ( x , x ) = 0$ for all $x ∈ C$,
A4.
$g ( · , y )$ is continuous on C for every $y ∈ C$,
A5.
$g ( x , · )$ is convex, lower semicontinuous and subdifferentiable on C for every fixed $x ∈ C$.
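To make (A2) concrete, consider (purely as an illustration) the hypothetical bilinear bifunction $g ( x , y ) = 〈 A x , y − x 〉$ on $R n$ with the Euclidean $f ( x ) = 1 2 ∥ x ∥ 2$, so $D f ( y , x ) = 1 2 ∥ y − x ∥ 2$. Since $g ( x , y ) + g ( y , z ) − g ( x , z ) = 〈 A ( y − x ) , z − y 〉 ≥ − ∥ A ∥ ∥ y − x ∥ ∥ z − y ∥$, one may take $c 1 = c 2 = ∥ A ∥$. A small numerical check of this claim:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
g = lambda x, y: np.dot(A @ x, y - x)         # bilinear bifunction (A3 holds: g(x,x)=0)
D = lambda a, b: 0.5 * np.linalg.norm(a - b) ** 2
c = np.linalg.norm(A, 2)                      # c1 = c2 = spectral norm of A

for _ in range(200):
    x, y, z = rng.normal(size=(3, 4))
    # Bregman-Lipschitz-type condition (A2) with c1 = c2 = c
    assert g(x, y) + g(y, z) >= g(x, z) - c * D(y, x) - c * D(z, y) - 1e-9
```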

## 3. Results

In this section, we introduce a new parallel hybrid subgradient extragradient algorithm for finding a common element of the set of solutions of equilibrium problems for pseudomonotone bifunctions and the common fixed point of Bregman relatively nonexpansive mappings.
Let E be a real reflexive Banach space, C be a nonempty, closed and convex subset of E and $f : E → R$ be a uniformly Fréchet differentiable function, which is coercive, Legendre, totally convex and bounded on bounded subsets of E such that $C ⊂ int dom f .$ For $i = 1 , 2 , ⋯ , N ,$ let $g i : E × E → R$ be a finite family of bifunctions satisfying Assumptions (A1)–(A5). Furthermore, for $j = 1 , 2 , ⋯ , M ,$ let $T j : E → E$ be a finite family of Bregman relatively nonexpansive mappings. Assume that the solution set:
$S o l = ∩ i = 1 N E P ( g i ) ⋂ ∩ j = 1 M F ( T j ) ≠ ∅ .$
Furthermore, the control sequence ${ α n } n = 0 ∞ ⊂ ( 0 , 1 )$ satisfies the condition:
$0 < a ≤ lim inf n → ∞ α n ≤ lim sup n → ∞ α n ≤ b < 1 .$
Now, we present our algorithm (Algorithm 1) as follows.
Algorithm 1 Parallel hybrid Bregman subgradient extragradient method (PHBSEM).
Step 0.     Pick $x 0 ∈ E$, $μ ∈ ( 0 , 1 ) , λ 0 > 0$, and set $n = 0 .$
Step 1.     Solve N strongly convex programs:
$y n i = a r g m i n g i ( x n , y ) + 1 λ n D f ( x n , y ) : y ∈ C , z n i = a r g m i n g i ( y n i , y ) + 1 λ n D f ( x n , y ) : y ∈ T n i ,$ (19)
where $T n i = z ∈ E : ∇ f ( x n ) − λ n w n i − ∇ f ( y n i ) , z − y n i ≤ 0$ and $w n i ∈ ∂ g i ( x n , y n i ) .$
Step 2.     Find the farthest element of $z n i$ from $x n$ by:
$i n = A r g m a x D f ( x n , z n i ) : i = 1 , ⋯ , N .$ (20)
Set $z ¯ n = z n i n .$
Step 3.     Compute in parallel for $j = 1 , ⋯ , M$:
$u n j = ∇ f * ( α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ) .$ (21)
Find the farthest element of $u n j$ from $x n$ by:
$j n = A r g m a x D f ( x n , u n j ) : j = 1 , ⋯ , M .$
Set $u ¯ n = u n j n .$
Step 4.     Construct two half-spaces $C n$ and $Q n$ as follows:
$C n = z ∈ E : D f ( z , u ¯ n ) ≤ D f ( z , x n ) , Q n = z ∈ E : ∇ f ( z ) − ∇ f ( x n ) , x n − x 0 ≥ 0 .$ (22)
Step 5.     Compute $x n + 1$ and $λ n + 1$ by:
$x n + 1 = P r o j C n ∩ Q n ( x 0 )$
and:
$λ n + 1 = min λ 0 , min 1 ≤ i ≤ N μ D f ( y n i , x n ) + D f ( z n i , y n i ) g ( x n , z n i ) − g ( x n , y n i ) − g ( y n i , z n i ) if g ( x n , z n i ) − g ( x n , y n i ) − g ( y n i , z n i ) > 0 , λ 0 , otherwise .$ (23)
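To convey the mechanics of Steps 1 and 5, the following sketch instantiates one iteration of the extragradient core in the Euclidean setting $f ( x ) = 1 2 ∥ x ∥ 2$ (so $∇ f$ and $∇ f *$ are the identity), with $N = 1$ and the hypothetical bifunction $g ( x , y ) = 〈 A x , y − x 〉$ coming from a variational inequality; the two argmin subproblems then reduce to projections onto C and onto the half-space $T n$. This is only an illustration of the stepsize rule (23) under these assumptions, not the full method:

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto C = {x : ||x|| <= r}
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def proj_halfspace(p, v, y):
    # Euclidean projection onto T = {z : <v, z - y> <= 0}
    s = np.dot(v, p - y)
    return p if s <= 0 else p - (s / np.dot(v, v)) * v

def phbsem_iteration(x, lam, A, mu=0.5, lam0=1.0):
    """One extragradient step for g(x, y) = <Ax, y - x> with f = ||.||^2 / 2."""
    w = A @ x                              # subgradient of g(x, .) is constant: Ax
    y = proj_ball(x - lam * w)             # Step 1, first argmin
    v = x - lam * w - y                    # normal vector of the half-space T_n
    z = proj_halfspace(x - lam * (A @ y), v, y)   # Step 1, second argmin over T_n
    # Self-adaptive stepsize update, Equation (23):
    denom = np.dot(A @ (x - y), z - y)     # equals g(x,z) - g(x,y) - g(y,z) here
    D = lambda a, b: 0.5 * np.linalg.norm(a - b) ** 2
    lam_next = min(lam0, mu * (D(y, x) + D(z, y)) / denom) if denom > 0 else lam0
    return y, z, lam_next

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # monotone rotation operator
x0 = np.array([0.8, 0.3])
y, z, lam1 = phbsem_iteration(x0, lam=1.0, A=A)
```

Note that the update keeps $λ n + 1 ≤ λ 0$ without ever asking for the constants $c 1 , c 2$; Steps 4 and 5 of the full method additionally require a Bregman projection onto the intersection of the two half-spaces $C n ∩ Q n$.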
Before we start with the proof of Algorithm 1, we discuss some contributions of the algorithm compared with other methods in the literature.
(i)
Firstly, Algorithm 1 solves two strongly convex optimization problems in parallel for $i = 1 , 2 , ⋯ , N ,$ with the second convex problem solving over the half-spaces $T n i$, which is simpler than the entire feasible set used in Eskandani et al. [19].
(ii)
Moreover, the stepsize in Eskandani et al. [19] required finding prior estimates of the Lipschitz-like constants of the finite bifunctions, which is very cumbersome to compute. Meanwhile, in Algorithm 1, the stepsize is chosen self-adaptively and does not require prior estimates of the Lipschitz-like constants of the finite bifunctions.
(iii)
Furthermore, when E is a real Hilbert space, our Algorithm 1 improves the algorithms of [18,47,48,49,50].
(iv)
Lastly, when E is a real Hilbert space and $N = 1$, $M = 1$, our Algorithm 1 improves and complements the algorithms of [17,20,22,24,25,51].
Now, to prove the strong convergence of the algorithm parallel hybrid Bregman subgradient extragradient method (PHBSEM), we need the following results.
Lemma 8.
The sequence $λ n$ is bounded by:
$min λ 0 , min 1 ≤ i ≤ N μ max { c 1 , i , c 2 , i } ,$
where ${ c 1 , i } , { c 2 , i }$ are the Lipschitz-like constants of the bifunctions $g i$ for $i = 1 , 2 , ⋯ , N .$
Proof.
From the Lipschitz-like condition (17), we have:
$g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ≤ c 1 , i D f ( y n i , x n ) + c 2 , i D f ( z n i , y n i ) , for all i = 1 , 2 , ⋯ , N .$
It follows that:
$μ D f ( y n i , x n ) + D f ( z n i , y n i ) g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ≥ μ D f ( y n i , x n ) + D f ( z n i , y n i ) c 1 , i D f ( y n i , x n ) + c 2 , i D f ( z n i , y n i )$
for $i = 1 , 2 , ⋯ , N .$ Hence, for $i = 1 , 2 , ⋯ , N ,$ we have:
$μ D f ( y n i , x n ) + D f ( z n i , y n i ) g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ≥ μ D f ( y n i , x n ) + D f ( z n i , y n i ) max c 1 , i , c 2 , i D f ( y n i , x n ) + D f ( z n i , y n i ) .$
Thus, for $i = 1 , 2 , ⋯ , N ,$ we have:
$μ D f ( y n i , x n ) + D f ( z n i , y n i ) g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ≥ μ max c 1 , i , c 2 , i .$
This implies that:
$μ D f ( y n i , x n ) + D f ( z n i , y n i ) g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ≥ min 1 ≤ i ≤ N μ max c 1 , i , c 2 , i$
Therefore, we have that $λ n$ is bounded by:
$min λ 0 , min 1 ≤ i ≤ N μ max c 1 , i , c 2 , i .$
□
Lemma 9.
Suppose $x * ∈ S o l$ and $x n , y n i , z n i ,$ where $i = 1 , ⋯ , N$ are as defined in Step 1 and Step 2 of the algorithm PHBSEM. Then:
$D f ( x * , z n i ) ≤ D f ( x * , x n ) − ( 1 − σ ) D f ( y n i , x n ) − ( 1 − σ ) D f ( z n i , y n i ) .$
Proof.
Since $z n i ∈ T n i$, we have:
$∇ f ( x n ) − ∇ f ( y n i ) , z n i − y n i ≤ λ n w n i , z n i − y n i .$
Furthermore, $w n i ∈ ∂ g i ( x n , y n i )$; thus:
$g i ( x n , z ) − g i ( x n , y n i ) ≤ w n i , z − y n i , ∀ z ∈ T n i , i = 1 , 2 , ⋯ , N .$
By (26) and (27), we have:
$∇ f ( x n ) − ∇ f ( y n i ) , z n i − y n i ≤ λ n g i ( x n , z n i ) − g i ( x n , y n i ) .$
Furthermore, since $z n i = argmin g i ( y n i , y ) + 1 λ n D f ( x n , y ) : y ∈ T n i ,$ it follows from Lemma 7 that:
$0 ∈ ∂ λ n g i ( y n i , y ) + D f ( x n , y ) ( z n i ) + N T n i ( z n i ) .$
This implies that for $i = 1 , 2 , ⋯ , N ,$ there exist $w ¯ n i ∈ ∂ g i ( y n i , z n i )$ and $ξ i ∈ N T n i ( z n i )$ such that:
$λ n w ¯ n i + ∇ f ( z n i ) − ∇ f ( x n ) + ξ i = 0 .$
Since $ξ i ∈ N T n i ( z n i ) ,$ then $ξ i , z − z n i ≤ 0 , ∀ z ∈ T n i .$ Hence:
$λ n w n ¯ i , z − z n i ≥ ∇ f ( x n ) − ∇ f ( z n i ) , z − z n i .$
Furthermore, since $w n ¯ i ∈ ∂ g i ( y n i , z n i )$, we have:
$g i ( y n i , z ) − g i ( y n i , z n i ) ≥ w n ¯ i , z − z n i , ∀ z ∈ E .$
Thus,
$λ n g i ( y n i , z ) − g i ( y n i , z n i ) ≥ λ n w n ¯ i , z − z n i , z ∈ E .$
By (29) and (30), we have:
$∇ f ( x n ) − ∇ f ( z n i ) , z − z n i ≤ λ n g i ( y n i , z ) − g i ( y n i , z n i ) .$
Let $z = x * ∈ E P ( g i )$. From (31), we get:
$∇ f ( x n ) − ∇ f ( z n i ) , x * − z n i ≤ λ n g i ( y n i , x * ) − g i ( y n i , z n i ) .$
Since each $g i$ is pseudomonotone, $g i ( y n i , x * ) ≤ 0$. Therefore:
$∇ f ( x n ) − ∇ f ( z n i ) , x * − z n i ≤ − λ n g i ( y n i , z n i ) .$
Adding (28) and (32), we have:
$∇ f ( x n ) − ∇ f ( y n i ) , z n i − y n i + ∇ f ( x n ) − ∇ f ( z n i ) , x * − z n i ≤ λ n g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) .$
By the three point identity, i.e., (11), we get:
$D f ( x * , z n i ) + D f ( y n i , x n ) + D f ( z n i , y n i ) − D f ( x * , x n ) ≤ λ n g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) .$
Using (23), we have:
$D f ( x * , z n i ) ≤ D f ( x * , x n ) − D f ( y n i , x n ) − D f ( z n i , y n i ) + λ n λ n + 1 λ n + 1 g i ( x n , z n i ) − g i ( x n , y n i ) − g ( y n i , z n i ) ≤ D f ( x * , x n ) − D f ( y n i , x n ) − D f ( z n i , y n i ) + λ n λ n + 1 σ D f ( y n i , x n ) + D f ( z n i , y n i ) = D f ( x * , x n ) − ( 1 − σ ) D f ( y n i , x n ) − ( 1 − σ ) D f ( z n i , y n i ) .$
Thus, we obtain (25). This completes the proof.    □
Lemma 10.
For any $x * ∈ S o l$, the following inequality holds:
$D f ( x * , u n j ) ≤ D f ( x * , x n ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) ,$
where $j = 1 , 2 , ⋯ , M$ and $ρ s * : E * → R$ is the gauge of uniform convexity of the conjugate function $f * .$
Proof.
From (21) and Lemma 5, we have:
$D f ( x * , u n j ) = D f ( x * , ∇ f * ( α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ) ) = V f ( x * , α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ) = f ( x * ) − x * , α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) + f * α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ≤ f ( x * ) − α n x * , ∇ f ( z ¯ n ) − ( 1 − α n ) x * , ∇ f ( T j z ¯ n ) + α n f * ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) = α n f ( x * ) − x * , ∇ f ( z ¯ n ) + ∇ f * ( ∇ f ( z ¯ n ) ) + ( 1 − α n ) f ( x * ) − x * , ∇ f ( T j z ¯ n ) + f * ( ∇ f ( T j z ¯ n ) ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) = α n V f ( x * , ∇ f ( z ¯ n ) ) + ( 1 − α n ) V f ( x * , ∇ f ( T j z ¯ n ) ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) = α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , T j z ¯ n ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) ≤ α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , z ¯ n ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) = D f ( x * , z ¯ n ) − α n ( 1 − α n ) ρ s * ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) .$
Since $σ ∈ ( 0 , 1 ) ,$ then the desired result follows from Lemma 9.    □
Lemma 11.
The sequence $x n$ generated by the algorithm PHBSEM is well defined and $S o l ⊂ C n ∩ Q n$.
Proof.
Let $x * ∈ S o l$, then from Lemma 9 and the definition of $u n j$ and $z ¯ n$, we have:
$D f ( x * , u n j ) = D f ( x * , ∇ f * ( α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ) ≤ α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , T j z ¯ n ) ≤ α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , z ¯ n ) ≤ D f ( x * , z ¯ n ) ≤ D f ( x * , x n ) .$
Therefore, $D f ( x * , u n j ) ≤ D f ( x * , x n )$, and this implies that $x * ∈ C n$. Hence, $S o l ⊂ C n ,$ for all $n ≥ 0 .$
By induction, we have $Q 0 = E ,$ and thus, $S o l ⊂ C 0 ∩ Q 0$. Suppose $x k$ is given and $S o l ⊂ C k ∩ Q k$ for some $k ≥ 0$. Then, there exists $x k + 1 ∈ C k ∩ Q k$ such that $x k + 1 = P r o j C k ∩ Q k ( x 0 )$. By (13), we have:
$x k + 1 − y , ∇ f ( x 0 ) − ∇ f ( x k + 1 ) ≥ 0 , ∀ y ∈ C k ∩ Q k .$
Since $S o l ⊂ C k ∩ Q k$, this implies $S o l ⊂ Q k + 1$, and together with $S o l ⊂ C n$ for all $n ≥ 0 ,$ we get $S o l ⊂ C k + 1 ∩ Q k + 1$. Thus, the sequence $x n$ is well defined.    □
Lemma 12.
Let $x n , y n i , z n i , u n j$ be the sequences generated by the algorithm PHBSEM. Then, the following relations hold:
$lim n → ∞ ∥ x n − x n + 1 ∥ = lim n → ∞ ∥ u n j − x n ∥ = lim n → ∞ ∥ z n i − x n ∥ = lim n → ∞ ∥ y n i − x n ∥ = lim n → ∞ ∥ u n j − T j z ¯ n ∥ = 0 .$
Proof.
Since $S o l ⊂ C n ∩ Q n$ for all $n ≥ 0$, it follows from the definition of $x n + 1$ that:
$D f ( x n + 1 , x 0 ) ≤ D f ( x * , x 0 ) , ∀ n ≥ 0 , x * ∈ S o l .$
Hence, ${ D f ( x n , x 0 ) }$ is bounded, and from Lemma 3, we have that ${ x n }$ is bounded. Moreover, from the definition of $Q n$ and $x n = P r o j Q n f ( x 0 )$ and since $x n + 1 ∈ C n ∩ Q n ⊂ Q n ,$ we have:
$D f ( x n + 1 , x 0 ) ≥ D f ( x n + 1 , P r o j Q n f ( x 0 ) ) + D f ( P r o j Q n f ( x 0 ) , x 0 ) ≥ D f ( x n + 1 , x n ) + D f ( x n , x 0 ) .$
Then,
$D f ( x n , x 0 ) ≤ D f ( x n + 1 , x 0 ) − D f ( x n + 1 , x n ) ≤ D f ( x n + 1 , x 0 ) .$
Thus,
$D f ( x n + 1 , x 0 ) − D f ( x n , x 0 ) ≥ 0 .$
Hence, ${ D f ( x n , x 0 ) }$ is nondecreasing, and since it is also bounded, $lim n → ∞ D f ( x n , x 0 )$ exists. Thus, it follows that:
$lim n → ∞ D f ( x n + 1 , x n ) = 0 .$
Then, from (12), we get:
$lim n → ∞ ∥ x n + 1 − x n ∥ = 0 .$
Furthermore, since $x n + 1 ∈ C n$, then $D f ( x n + 1 , u ¯ n ) ≤ D f ( x n + 1 , x n ) .$ Thus, taking the limits of both sides as $n → ∞$ and by (36), we get:
$lim n → ∞ D f ( x n , u ¯ n ) = 0 .$
This implies that:
$lim n → ∞ D f ( x n , u n j ) = 0 , ∀ j = 1 , 2 , ⋯ , M .$
It follows from Lemma 2 that:
$lim n → ∞ ∥ x n − u n j ∥ = 0 .$
By the uniform continuity and Fréchet differentiability of f, we have:
$lim n → ∞ | f ( x n ) − f ( u n j ) | = lim n → ∞ ∥ ∇ f ( x n ) − ∇ f ( u n j ) ∥ = 0 .$
Moreover:
$D f ( x * , x n ) − D f ( x * , u n j ) = f ( x * ) − f ( x n ) − 〈 ∇ f ( x n ) , x * − x n 〉 − f ( x * ) + f ( u n j ) + 〈 ∇ f ( u n j ) , x * − u n j 〉 = f ( u n j ) − f ( x n ) − 〈 ∇ f ( x n ) , x * − x n 〉 + 〈 ∇ f ( u n j ) , x * − u n j 〉 = f ( u n j ) − f ( x n ) − 〈 ∇ f ( x n ) , x * − u n j 〉 − 〈 ∇ f ( x n ) , u n j − x n 〉 + 〈 ∇ f ( u n j ) , x * − u n j 〉 = f ( u n j ) − f ( x n ) − 〈 ∇ f ( x n ) − ∇ f ( u n j ) , x * − u n j 〉 − 〈 ∇ f ( x n ) , u n j − x n 〉 .$
Therefore, from (38) and (39), we get:
$lim n → ∞ [ D f ( x * , x n ) − D f ( x * , u n j ) ] = 0 .$
Then, from Lemma 10, we have:
$α n ( 1 − α n ) ρ s * ( ∥ ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) ∥ ) ≤ D f ( x * , x n ) − D f ( x * , u n j ) .$
Thus, from (18), we obtain:
$lim n → ∞ ∥ ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) ∥ = 0 ,$
which implies that:
$lim n → ∞ ∥ z ¯ n − T j z ¯ n ∥ = 0 .$
Moreover, by Lemma 9, we have:
$D f ( x * , u n j ) = D f ( x * , ∇ f * ( α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) ) ) ≤ α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , T j z ¯ n ) ≤ α n D f ( x * , z ¯ n ) + ( 1 − α n ) D f ( x * , z n i ) ≤ α n D f ( x * , x n ) + ( 1 − α n ) [ D f ( x * , x n ) − ( 1 − σ ) D f ( x n , y n i ) − ( 1 − σ ) D f ( y n i , z n i ) ] = D f ( x * , x n ) − ( 1 − α n ) ( 1 − σ ) [ D f ( x n , y n i ) + D f ( y n i , z n i ) ] .$
Thus,
$( 1 − α n ) ( 1 − σ ) [ D f ( x n , y n i ) + D f ( y n i , z n i ) ] ≤ D f ( x * , x n ) − D f ( x * , u n j ) .$
Therefore, taking the limits as $n → ∞$ and by (40), we have:
$lim n → ∞ D f ( x n , y n i ) = lim n → ∞ D f ( y n i , z n i ) = 0 , ∀ i = 1 , ⋯ , N .$
Then:
$lim n → ∞ ∥ y n i − x n ∥ = lim n → ∞ ∥ y n i − z n i ∥ = 0 .$
Hence:
$lim n → ∞ ∥ x n − z n i ∥ = 0 .$
In addition, from (21), we have:
$∇ f ( u n j ) − ∇ f ( T j z ¯ n ) = α n ∇ f ( z ¯ n ) + ( 1 − α n ) ∇ f ( T j z ¯ n ) − ∇ f ( T j z ¯ n ) = α n ( ∇ f ( z ¯ n ) − ∇ f ( T j z ¯ n ) ) .$
Therefore, from (42), we get:
$lim n → ∞ ∥ ∇ f ( u n j ) − ∇ f ( T j z ¯ n ) ∥ = 0 .$
This implies that:
$lim n → ∞ ∥ u n j − T j z ¯ n ∥ = 0 .$
This completes the proof.    □
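Before proceeding to the main theorem, we note that the Bregman distance expansion used in the proof above can be checked numerically in the Euclidean case $f ( x ) = 1 2 ∥ x ∥ 2 ,$ where $∇ f ( x ) = x$ and $D f ( x , y ) = 1 2 ∥ x − y ∥ 2 .$ The sketch below is our own illustration with arbitrary sample vectors, not data from the paper:

```python
# Numerical check of the identity
#   D_f(x*, x_n) - D_f(x*, u) = f(u) - f(x_n)
#     - <grad f(x_n) - grad f(u), x* - u> - <grad f(x_n), u - x_n>
# for the Euclidean choice f(x) = (1/2)||x||^2.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sub(a, b):
    return [ai - bi for ai, bi in zip(a, b)]

def f(x):
    return 0.5 * dot(x, x)

def grad(x):
    return list(x)  # grad f(x) = x for the Euclidean choice

def D(x, y):
    # Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
    return f(x) - f(y) - dot(grad(y), sub(x, y))

x_star, x_n, u = [1.0, -2.0, 0.5], [0.3, 0.7, -1.1], [0.2, 0.6, -1.0]
lhs = D(x_star, x_n) - D(x_star, u)
rhs = (f(u) - f(x_n)
       - dot(sub(grad(x_n), grad(u)), sub(x_star, u))
       - dot(grad(x_n), sub(u, x_n)))
assert abs(lhs - rhs) < 1e-12
```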
Now, we prove that the sequence ${ x n }$ generated by Algorithm 1 converges strongly to an element in $S o l .$
Theorem 1.
Let E be a real reflexive Banach space, C be a nonempty, closed and convex subset of E and function $f : E → R$ be a uniformly Fréchet differentiable function, which is coercive, Legendre, totally convex and bounded on subsets of E such that $C ⊂ i n t ( d o m f ) .$ For $i = 1 , 2 , ⋯ , N ,$ let $g i : E × E → R$ be a finite family of bifunctions satisfying Assumptions (A1)–(A5). Furthermore, for $j = 1 , 2 , ⋯ , M ,$ let $T j : E → E$ be a finite family of Bregman relatively nonexpansive mappings. Suppose that $S o l = ∩ i = 1 N E P ( g i ) ⋂ ∩ j = 1 M F ( T j ) ≠ ∅ .$ Let ${ α n }$ be a sequence in $( 0 , 1 )$ such that $0 < a ≤ lim inf n → ∞ α n ≤ lim sup n → ∞ α n ≤ b < 1 .$ Then, the sequences ${ x n } , { y n i } , { z n i }$ generated by Algorithm 1 converge strongly to a solution $x * ,$ where $x * = P r o j S o l f ( x 0 ) .$
Proof.
First, we show that every sequential weak limit point of ${ x n }$ belongs to $S o l .$ From the previous results, we see that $P r o j C n ∩ Q n f ( x 0 )$ is well defined and $S o l ⊂ C n ∩ Q n$ for all $n ≥ 0 .$ Furthermore, from Lemma 12, ${ x n }$ is bounded. Then, there exists a subsequence ${ x n k }$ of ${ x n }$ converging weakly to p. By (43) and (44), we obtain that $p ∈ F ( T j )$ for $j = 1 , 2 , ⋯ , M .$ Hence, $p ∈ ⋂ j = 1 M F ( T j ) .$ Since:
$y n i = a r g m i n { g i ( x n , y ) + 1 λ n D f ( x n , y ) : y ∈ C } , for i = 1 , 2 , ⋯ , N .$
By Lemma 7, we obtain:
$0 ∈ ∂ 2 { g i ( x n , y ) + 1 λ n D f ( x n , y ) } ( y n i ) + N C ( y n i ) , for i = 1 , 2 , ⋯ , N .$
This implies that:
$λ n w n i + ∇ f ( x n ) − ∇ f ( y n i ) + w ¯ n i = 0 ,$
where $w n i ∈ ∂ g i ( x n , y n i )$ and $w ¯ n i ∈ N C ( y n i )$ for $i = 1 , 2 , ⋯ , N .$ This implies that:
$λ n 〈 w n i , y − y n i 〉 + 〈 w ¯ n i , y − y n i 〉 = 〈 ∇ f ( y n i ) − ∇ f ( x n ) , y − y n i 〉 .$
Note that $〈 w ¯ n i , y − y n i 〉 ≤ 0 ,$$∀ y ∈ C , i = 1 , 2 , ⋯ , N .$ Hence,
$λ n 〈 w n i , y − y n i 〉 ≥ 〈 ∇ f ( y n i ) − ∇ f ( x n ) , y − y n i 〉 , for i = 1 , 2 , ⋯ , N .$
Furthermore, since $w n i ∈ ∂ 2 g i ( x n , y n i ) ,$ then:
$g i ( x n , y ) − g i ( x n , y n i ) ≥ 〈 w n i , y − y n i 〉 , ∀ y ∈ C , for i = 1 , 2 , ⋯ , N .$
Then:
$λ n [ g i ( x n , y ) − g i ( x n , y n i ) ] ≥ 〈 ∇ f ( y n i ) − ∇ f ( x n ) , y − y n i 〉 , for i = 1 , 2 , ⋯ , N .$
Therefore:
$λ n k [ g i ( x n k , y ) − g i ( x n k , y n k i ) ] ≥ 〈 ∇ f ( y n k i ) − ∇ f ( x n k ) , y − y n k i 〉 , for i = 1 , 2 , ⋯ , N .$
Passing to the limit as $k → ∞$ in (45), since $x n k ⇀ p$, $∥ ∇ f ( x n k ) − ∇ f ( y n k i ) ∥ → 0$ and $λ n k → λ > 0 ,$ we obtain:
$g i ( p , y ) ≥ 0 , ∀ y ∈ C , i = 1 , 2 , ⋯ , N .$
Hence, $p ∈ E P ( g i )$ for $i = 1 , 2 , ⋯ , N .$ This means that $p ∈ ⋂ i = 1 N E P ( g i ) .$ Consequently, $p ∈ S o l .$
Now, we show that ${ x n }$ converges strongly to $P r o j S o l f ( x 0 ) .$ Since $x n = P r o j C n f ( x 0 ) ,$ then we have:
$〈 x n − z , ∇ f ( x 0 ) − ∇ f ( x n ) 〉 ≥ 0 ∀ z ∈ C n .$
Furthermore, $S o l ⊂ C n ,$ thus:
$〈 x n − z , ∇ f ( x 0 ) − ∇ f ( x n ) 〉 ≥ 0 ∀ z ∈ S o l .$
Passing to the limit as $n → ∞$ in (46), we get:
$〈 p − z , ∇ f ( x 0 ) − ∇ f ( p ) 〉 ≥ 0 , ∀ z ∈ S o l .$
From (13), $p = P r o j S o l f ( x 0 ) .$ This implies that ${ x n }$ converges strongly to $p = P r o j S o l f ( x 0 ) .$ Lemma 12 ensures that ${ y n i } , { z n i }$ also converge strongly to $p = P r o j S o l f ( x 0 ) .$ This completes the proof.    □
The following results can be obtained as consequences of our main result.
Corollary 1.
Let E be a real reflexive Banach space, C be a nonempty, closed and convex subset of E and function $f : E → R$ be a uniformly Fréchet differentiable function, which is coercive, Legendre, totally convex and bounded on subsets of E such that $C ⊂ i n t ( d o m f ) .$ For $i = 1 , 2 , ⋯ , N ,$ let $g i : E × E → R$ be a finite family of bifunctions satisfying Assumptions (A1)–(A5). Furthermore, for $j = 1 , 2 , ⋯ , M ,$ let $T j : E → E$ be a finite family of Bregman strongly nonexpansive (BSNE) mappings. Suppose that $S o l = ∩ i = 1 N E P ( g i ) ⋂ ∩ j = 1 M F ( T j ) ≠ ∅ .$ Let ${ α n }$ be a sequence in $( 0 , 1 )$ such that $0 < a ≤ lim inf n → ∞ α n ≤ lim sup n → ∞ α n ≤ b < 1 .$ Then, the sequences ${ x n } , { y n i } , { z n i }$ generated by Algorithm 1 converge strongly to a solution $x * ,$ where $x * = P r o j S o l f ( x 0 ) .$
Furthermore, by setting $N = M = 1 ,$ we obtain the following result, which extends the results of [24,25,51] to a real reflexive Banach space.
Corollary 2.
Let E be a real reflexive Banach space, C be a nonempty, closed and convex subset of E and function $f : E → R$ be a uniformly Fréchet differentiable function, which is coercive, Legendre, totally convex and bounded on subsets of E such that $C ⊂ i n t ( d o m f ) .$ Let $g : E × E → R$ be a bifunction satisfying Assumptions (A1)–(A5) and $T : E → E$ be a Bregman relatively nonexpansive mapping. Suppose that $S o l = E P ( g ) ∩ F ( T ) ≠ ∅ .$ Let ${ α n }$ be a sequence in $( 0 , 1 )$ such that $0 < a ≤ lim inf n → ∞ α n ≤ lim sup n → ∞ α n ≤ b < 1 .$ Then, the sequences ${ x n } , { y n } , { z n }$ generated by the following Algorithm 2 converge strongly to a solution $x * ,$ where $x * = P r o j S o l f ( x 0 ) .$
Algorithm 2 Hybrid Bregman subgradient extragradient algorithm (HBSEA).
Step 0. Pick $x 0 ∈ E$, $μ ∈ ( 0 , 1 ) , λ 0 > 0 ,$ and set $n = 0 .$
Step 1. Solve the strongly convex programs:
$y n = a r g m i n { g ( x n , y ) + 1 λ n D f ( x n , y ) : y ∈ C } , z n = a r g m i n { g ( y n , y ) + 1 λ n D f ( x n , y ) : y ∈ T n } ,$ (47)
where $T n = { z ∈ E : 〈 ∇ f ( x n ) − λ n w n − ∇ f ( y n ) , z − y n 〉 ≤ 0 }$ and $w n ∈ ∂ g ( x n , y n ) .$
Step 2. Compute:
$u n = ∇ f * ( α n ∇ f ( z n ) + ( 1 − α n ) ∇ f ( T z n ) ) .$ (48)
Step 3. Construct two half-spaces $C n$ and $Q n$ as follows:
$C n = { z ∈ E : D f ( z , u n ) ≤ D f ( z , x n ) } , Q n = { z ∈ E : 〈 ∇ f ( z ) − ∇ f ( x n ) , x n − x 0 〉 ≥ 0 } .$ (49)
Step 4. Compute $x n + 1$ and $λ n + 1$ by:
$x n + 1 = P r o j C n ∩ Q n ( x 0 )$
and:
$λ n + 1 = min { λ 0 , μ [ D f ( y n , x n ) + D f ( z n , y n ) ] / [ g ( x n , z n ) − g ( x n , y n ) − g ( y n , z n ) ] }$ if $g ( x n , z n ) − g ( x n , y n ) − g ( y n , z n ) > 0 ,$ and $λ n + 1 = λ 0$ otherwise. (50)
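The self-adaptive stepsize rule (50) is simple to implement. The following Python sketch is our own illustration (the function name and all argument values are hypothetical); the guard `denom > 0` reflects that the quotient is only used when the denominator is positive:

```python
def hbsea_next_lambda(lam0, mu, Df_y_x, Df_z_y, g_x_z, g_x_y, g_y_z):
    """Stepsize update (50): Df_y_x = D_f(y_n, x_n), Df_z_y = D_f(z_n, y_n),
    g_x_z = g(x_n, z_n), g_x_y = g(x_n, y_n), g_y_z = g(y_n, z_n)."""
    denom = g_x_z - g_x_y - g_y_z
    if denom > 0:  # quotient only used when the denominator is positive
        return min(lam0, mu * (Df_y_x + Df_z_y) / denom)
    return lam0

# hypothetical values: denominator 2.0 - 0.5 - 0.3 = 1.2, quotient 0.36 * 0.3 / 1.2 = 0.09
assert abs(hbsea_next_lambda(0.5, 0.36, 0.1, 0.2, 2.0, 0.5, 0.3) - 0.09) < 1e-12
```

Note that the update can only decrease the stepsize below $λ 0 ,$ and it never requires the Lipschitz-like constants of g.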
Furthermore, when E is a real Hilbert space, our Algorithm 1 becomes the following algorithm, which improves the algorithm in [18,50], and the references therein.
Algorithm 3 Parallel Subgradient Extragradient Algorithm (PSEA).
Step 0. Pick $x 0 ∈ E$, $μ ∈ ( 0 , 1 ) , λ 0 > 0 ,$ and set $n = 0 .$
Step 1. Solve N strongly convex programs:
$y n i = a r g m i n { g i ( x n , y ) + 1 λ n ∥ x n − y ∥ 2 : y ∈ C } , z n i = a r g m i n { g i ( y n i , y ) + 1 λ n ∥ x n − y ∥ 2 : y ∈ T n i } ,$ (51)
where $T n i = { z ∈ E : 〈 x n − λ n w n i − y n i , z − y n i 〉 ≤ 0 }$ and $w n i ∈ ∂ g i ( x n , y n i ) .$
Step 2. Find the farthest element of $z n i$ from $x n$ by:
$i n = A r g m a x { ∥ x n − z n i ∥ : i = 1 , ⋯ , N } .$ (52)
Set $z ¯ n = z n i n .$
Step 3. Compute in parallel for $j = 1 , ⋯ , M$:
$u n j = α n z ¯ n + ( 1 − α n ) T j z ¯ n .$
Find the farthest element of $u n j$ from $x n$ by:
$j n = A r g m a x { ∥ x n − u n j ∥ : j = 1 , ⋯ , M } .$
Set $u ¯ n = u n j n .$
Step 4. Construct two half-spaces $C n$ and $Q n$ as follows:
$C n = { z ∈ E : ∥ u ¯ n − z ∥ ≤ ∥ x n − z ∥ } , Q n = { z ∈ E : 〈 z − x n , x n − x 0 〉 ≥ 0 } .$ (53)
Step 5. Compute $x n + 1$ and $λ n + 1$ by:
$x n + 1 = P C n ∩ Q n ( x 0 )$
and:
$λ n + 1 = min { λ 0 , min 1 ≤ i ≤ N μ [ ∥ y n i − x n ∥ 2 + ∥ z n i − y n i ∥ 2 ] / [ 2 ( g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) ) ] }$ if $g i ( x n , z n i ) − g i ( x n , y n i ) − g i ( y n i , z n i ) > 0 ,$ and $λ n + 1 = λ 0$ otherwise. (54)
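In the Hilbert space setting of Algorithm 3, the set $C n$ in (53), although defined by a norm inequality, is indeed a half-space: squaring both sides shows that $∥ u ¯ n − z ∥ ≤ ∥ x n − z ∥$ is equivalent to the linear inequality $2 〈 z , x n − u ¯ n 〉 ≤ ∥ x n ∥ 2 − ∥ u ¯ n ∥ 2 .$ The sketch below checks this equivalence on a few sample points (all vectors are hypothetical):

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def in_Cn_norm(z, x, u):
    # membership test via the defining inequality ||u - z|| <= ||x - z||
    return dist2(u, z) <= dist2(x, z)

def in_Cn_linear(z, x, u):
    # equivalent half-space form: 2<z, x - u> <= ||x||^2 - ||u||^2
    return 2 * dot(z, [xi - ui for xi, ui in zip(x, u)]) <= dot(x, x) - dot(u, u)

x, u = [1.0, 2.0], [0.5, 1.0]
for z in ([0.0, 0.0], [1.0, 1.0], [3.0, -2.0], [0.7, 1.5]):
    assert in_Cn_norm(z, x, u) == in_Cn_linear(z, x, u)
```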

## 4. Numerical Examples

In this section, we give some numerical examples to show the performance and efficiency of the proposed algorithm. We compare the efficiency of Algorithm 1 (namely PHBSEM) with that of Algorithm (23) of [19] (namely HPA) using different types of convex functions and with Algorithm 1 of [18] (namely HPEM). All computations were performed using MATLAB (2019b) programming on a PC with an AMD Ryzen 5 3500U processor at 2.10 GHz and 8.00 GB RAM.
We employ the following convex functions:
(1)
$f ( x ) = 1 2 ∥ x ∥ 2 , ∀ x ∈ R m :$ in this case, $∇ f ( x ) = ∇ f * ( x ) = x$ and $D f ( x , y ) = 1 2 ∥ x − y ∥ 2$ (i.e., half the squared Euclidean distance);
(2)
$f ( x ) = ∑ i = 1 m x i log ( x i )$ with $x ∈ R + m : = { x ∈ R m : x i > 0 } :$ in this case, $∇ f ( x ) = 1 + log ( x 1 ) , 1 + log ( x 2 ) , ⋯ , 1 + log ( x m ) T ,$$∇ f * ( x ) = exp ( x 1 − 1 ) , exp ( x 2 − 1 ) , ⋯ , exp ( x m − 1 ) T$ and
$D f ( x , y ) = ∑ i = 1 m ( x i log ( x i / y i ) − x i + y i ) ,$
for all $x = ( x 1 , x 2 , ⋯ , x m ) ∈ R + m$ and $y = ( y 1 , y 2 , ⋯ , y m ) ∈ R + m$ (i.e., the Kullback–Leibler distance).
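Both Bregman distances are straightforward to implement. The sketch below is our own illustration (using the standard generalized Kullback–Leibler form $∑ i ( x i log ( x i / y i ) − x i + y i )$ for the entropy case) and checks the basic properties $D f ( x , x ) = 0$ and $D f ( x , y ) ≥ 0 :$

```python
import math

def D_f1(x, y):
    # Bregman distance of f1(x) = (1/2)||x||^2: half the squared Euclidean distance
    return 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def D_f2(x, y):
    # Bregman distance of f2(x) = sum_i x_i log x_i (negative entropy):
    # the generalized Kullback-Leibler divergence, for vectors with positive entries
    return sum(xi * math.log(xi / yi) - xi + yi for xi, yi in zip(x, y))

x, y = [0.5, 1.5, 2.0], [1.0, 1.0, 1.0]
assert D_f1(x, x) == 0.0 and abs(D_f2(x, x)) < 1e-15
assert D_f1(x, y) >= 0.0 and D_f2(x, y) >= 0.0
```

Unlike the Euclidean case, $D f 2$ is not symmetric in its arguments, which is why the order of x and y matters in the algorithms above.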
Example 1.
Let $H = R m$ with the induced norm $∥ x ∥ = ( ∑ i = 1 m | x i | 2 ) 1 / 2$ and inner product $〈 x , y 〉 = ∑ i = 1 m x i y i ,$ for all $x = ( x 1 , x 2 , ⋯ , x m ) ∈ R m$ and $y = ( y 1 , y 2 , ⋯ , y m ) ∈ R m .$ The feasible set C is given by:
$C = ( x 1 , x 2 , ⋯ , x m ) ∈ R + m : x k ≤ 1 , where k = 1 , 2 , ⋯ , m .$
Consider the following problem:
$Find x * ∈ S o l : = ⋂ i = 1 N E P ( g i ) ∩ ⋂ j = 1 M F ( T j ) ,$
where $g i : E × E → R$ is defined by:
$g i ( x , y ) = ∑ k = 1 m ( q i k y k 2 − q i k x k 2 ) , i = 1 , 2 , ⋯ , N ,$
where $q i k ∈ ( 0 , 1 )$ is randomly generated $∀ i = 1 , 2 , ⋯ , N ,$$k = 1 , 2 , ⋯ , m$; and $T j : E → E$ is defined by:
$T j ( x ) = x / ( j + 1 ) , ∀ j = 1 , 2 , ⋯ , M .$
It is easy to see that Conditions (A1)–(A5) are satisfied and $T j$ is a Bregman relatively nonexpansive mapping for $j = 1 , 2 , ⋯ , M .$ Moreover, $S o l = { 0 }$. In what follows, for each $n ∈ N ,$ we choose $α n = 3 n / ( 10 ( n + 1 ) ) , μ = 0.36 , λ 0 = 0.24 .$ The initial value $x 0 ∈ C$ is generated randomly, and using $D n = ∥ x n − 0 ∥ < 10 − 4$ as the stopping criterion, we compare the performance of the algorithms PHBSEM and HPA using the convex functions defined above and the algorithm HPEM for the following values of $m , M$ and $N :$
• Case I: $m = 5 , N = 5 , M = 2 ;$
• Case II: $m = 10 , N = 6 , M = 4 ;$
• Case III: $m = 20 , N = 10 , M = 5 ;$
• Case IV: $m = 30 , N = 5 , M = 10 .$
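The data of Example 1 are easy to set up. The sketch below is our own illustration (with hypothetical small dimensions and a fixed random seed): it generates the $q i k ,$ evaluates $g i$ and $T j ,$ and checks that $x * = 0$ satisfies $g i ( 0 , y ) ≥ 0$ for every y and $T j ( 0 ) = 0 :$

```python
import random

random.seed(1)
m, N, M = 5, 3, 2                      # hypothetical small dimensions
q = [[random.uniform(0.01, 0.99) for _ in range(m)] for _ in range(N)]

def g(i, x, y):
    # g_i(x, y) = sum_k q_ik (y_k^2 - x_k^2)
    return sum(q[i][k] * (y[k] ** 2 - x[k] ** 2) for k in range(m))

def T(j, x):
    # T_j(x) = x / (j + 1); the only fixed point is 0
    return [xk / (j + 1) for xk in x]

zero = [0.0] * m
y = [random.uniform(0.0, 1.0) for _ in range(m)]
assert g(0, zero, y) >= 0.0            # g_i(0, y) = sum_k q_ik y_k^2 >= 0, so 0 in EP(g_i)
assert T(1, zero) == zero              # 0 is a fixed point of every T_j
```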
The numerical results are shown in Table 1 and Figure 1.
Next, we consider the Nash–Cournot oligopolistic market equilibrium model.
Example 2.
Let $E = R m .$ For $i = 1 , 2 , ⋯ , N ,$ let $g i : E × E → R$ be defined by:
$g i ( x , y ) = 〈 P i x + Q i y + q i , y − x 〉 ,$
where $q i$ are vectors in $R m$ for $i = 1 , 2 , ⋯ , N ,$$P i , Q i$ are matrices of order $m × m$ such that $Q i$ are symmetric positive semidefinite and $Q i − P i$ are symmetric negative semidefinite. Clearly, $g i$ are pseudomonotone and satisfy a Lipschitz-like condition with $c 1 , i = c 2 , i = 1 2 ∥ Q i − P i ∥ .$ We define the feasible set C by:
$C = { x ∈ R m : − 2 ≤ x i ≤ 5 , i = 1 , 2 , ⋯ , m } .$
Now, for $j = 1 , 2 , ⋯ , M ,$ let $T j : R m → R m$ be defined by:
$T j x = ( P Δ j ) ( x ) = d j + r ( x − d j ) / ∥ x − d j ∥ , if x ∉ Δ j ; x , if x ∈ Δ j ,$
where $Δ j$ are the closed balls in $R m$ centred at $d j ∈ R m$ with radius $r > 0 ,$ i.e., $Δ j = { x ∈ R m : ∥ x − d j ∥ ≤ r } .$ It is easy to see that $T j$ are nonexpansive and, thus, Bregman relatively nonexpansive. Moreover, $S o l = { 0 } .$ We choose the following parameters: $α n = 2 n / ( 5 n + 1 ) ,$ $λ 0 = 0.04 ,$ $μ = 0.13 ,$ and compare the performance of the algorithm PHBSEM with the algorithms HPA and HPEM for the following values of $m , M$ and N:
• Case I: $m = 5 , M = 5 , N = 5 ;$
• Case II: $m = 10 , M = 5 , N = 7 ;$
• Case III: $m = 15 , M = 10 , N = 5 ;$
• Case IV: $m = 20 , M = 20 , N = 10 .$
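The mappings $T j$ of Example 2 are metric projections onto closed balls; a minimal implementation (our own sketch) is:

```python
import math

def proj_ball(x, d, r):
    # Metric projection of x onto the closed ball {z : ||z - d|| <= r}:
    # points inside the ball are fixed; points outside are pulled radially
    # to the boundary along the segment joining x to the centre d.
    diff = [xi - di for xi, di in zip(x, d)]
    norm = math.sqrt(sum(v * v for v in diff))
    if norm <= r:
        return list(x)
    return [di + r * v / norm for di, v in zip(d, diff)]
```

For instance, with centre $d = 0$ and radius $r = 1 ,$ the point $( 3 , 4 )$ projects to $( 0.6 , 0.8 ) .$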
We use $D n = ∥ x n − 0 ∥ < 10 − 4$ as the stopping criterion in each case. The computational results are shown in Table 2 and Figure 2.
Finally, we consider the following example in an infinite-dimensional space. In this case, we choose $f ( x ) = 1 2 ∥ x ∥ 2 .$
Example 3.
Let $E = L 2 ( 0 , 1 )$ with the inner product $〈 x , y 〉 = ∫ 0 1 x ( s ) y ( s ) d s$ and the induced norm $∥ x ∥ = ( ∫ 0 1 | x ( s ) | 2 d s ) 1 / 2$. The feasible set is defined as:
$C = { x ∈ E : ∥ x ∥ ≤ 1 } .$
Let $g i ( x , y )$ be defined as $〈 A i x , y − x 〉$ with the operator $A i : L 2 ( 0 , 1 ) → L 2 ( 0 , 1 )$ given as $A i ( x ( s ) ) = max { 0 , x ( s ) } / i$, for $i = 1 , 2 , ⋯ , N ,$$s ∈ [ 0 , 1 ] .$ It is easy to see that each $g i$ is monotone, thus pseudomonotone on C for $i = 1 , 2 , ⋯ , N .$ For $j = 1 , 2 , ⋯ , M ,$ we define the mapping $T j : L 2 ( 0 , 1 ) → L 2 ( 0 , 1 )$ by $T j x = P C x ,$ where:
$P C ( x ) = x / ∥ x ∥ , if ∥ x ∥ > 1 ; x , if ∥ x ∥ ≤ 1 .$
Then, $T j$ is a Bregman relatively nonexpansive mapping for $j = 1 , 2 , ⋯ , M$ and $S o l = { 0 } .$ We take $α n = 2 n / ( 7 n + 1 ) , μ = 0.5 , λ 0 = 0.02 ,$$N = 5 , M = 1 ,$ and study the performance of the algorithms PHBSEM, HPA and HPEM for the following initial values:
• Case I: $x 0 = cos ( 3 s ) 7 ;$
• Case II: $x 0 = exp ( 2 s ) ;$
• Case III: $x 0 = s 2 − 1 .$
We use $D n = ∥ x n − 0 ∥ < 10 − 4$ as the stopping criterion and plot the graphs of $D n$ against the number of iterations in each case. The computational results are shown in Table 3 and Figure 3.
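In practice, Example 3 is run on a discretization of $L 2 ( 0 , 1 ) .$ The sketch below is our own illustration (with a hypothetical grid size): the norm is approximated by a Riemann sum, and $A i$ and $P C$ act pointwise on grid functions:

```python
import math

S = 1000                        # hypothetical number of grid points on (0, 1)
h = 1.0 / S
grid = [(k + 0.5) * h for k in range(S)]

def l2_norm(x):
    # Riemann-sum approximation of the L2(0,1) norm of a grid function
    return math.sqrt(sum(v * v for v in x) * h)

def A(i, x):
    # A_i(x)(s) = max(0, x(s)) / i, applied pointwise on the grid
    return [max(0.0, v) / i for v in x]

def P_C(x):
    # projection onto the unit ball C of L2(0,1): normalize if the norm exceeds 1
    n = l2_norm(x)
    return [v / n for v in x] if n > 1 else list(x)

x = [2.0 for _ in grid]         # the constant function x(s) = 2, with ||x|| = 2
px = P_C(x)
assert abs(l2_norm(px) - 1.0) < 1e-9
```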

## 5. Conclusions

In this paper, we present a new parallel hybrid Bregman subgradient extragradient method for finding a common solution of a finite family of pseudomonotone equilibrium problems and a common fixed point problem for Bregman relatively nonexpansive mappings in real reflexive Banach spaces. The algorithm is designed such that its convergence does not require prior estimates of the Lipschitz-like constants of the pseudomonotone bifunctions. Furthermore, a strong convergence result is proven under mild conditions. Some numerical examples are presented to show the efficiency and accuracy of the proposed method. This result improves and extends the results of [17,18,19,20,22,25,47,49,50,51] and many other results in the literature.

## Author Contributions

Conceptualization, L.O.J.; methodology, A.T.B. and L.O.J.; validation, M.A. and L.O.J.; formal analysis, A.T.B. and L.O.J.; writing, original draft preparation, A.T.B.; writing, review and editing, L.O.J. and M.A.; visualization, L.O.J.; supervision, L.O.J. and M.A.; project administration, L.O.J. and M.A.; funding acquisition, M.A. All authors read and agreed to the published version of the manuscript.

## Funding

This research was funded by Sefako Makgatho Health Sciences University Postdoctoral research fund, and the APC was funded by the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria, South Africa.


## Acknowledgments

The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.

## Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

## References

1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–146. [Google Scholar]
2. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Kluwer: Dordrecht, The Netherlands, 2001. [Google Scholar]
3. Iusem, A.N.; Sosa, W. Iterative algorithms for equilibrium problems. Optimization 2003, 52, 301–316. [Google Scholar] [CrossRef]
4. Mastroeni, G. Gap functions for equilibrium problems. J. Glob. Optim. 2003, 27, 411–426. [Google Scholar] [CrossRef]
5. Muu, L.D. Stability property of a class of variational inequalities. Optimization 1984, 15, 347–353. [Google Scholar] [CrossRef]
6. Eskandani, G.Z.; Raeisi, M. On the zero point problem of monotone operators in Hadamard spaces. Numer. Algorithms 2018, 80, 1155–1179. [Google Scholar] [CrossRef]
7. Azarmi, S.; Eskandani, G.Z.; Raeisi, M. Products of resolvents and multivalued hybrid mappings in CAT(0) spaces. Acta Math. Sci. 2018, 38, 791–804. [Google Scholar]
8. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
9. Martín-Márquez, V.; Reich, S.; Sabach, S. Bregman strongly nonexpansive operators in reflexive Banach spaces. J. Math. Anal. Appl. 2013, 400, 597–614. [Google Scholar] [CrossRef]
10. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276. [Google Scholar] [CrossRef] [Green Version]
11. Shahzad, N.; Zegeye, H. Convergence theorem for common fixed points of a finite family of multi-valued Bregman relatively nonexpansive mappings. Fixed Point Theory Appl. 2014, 152. [Google Scholar] [CrossRef] [Green Version]
12. Iiduka, H. A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59, 873–885. [Google Scholar] [CrossRef]
13. Iiduka, H.; Yamada, I. A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 19, 1881–1893. [Google Scholar] [CrossRef]
14. Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261. [Google Scholar] [CrossRef]
15. Mainge, P.E. A hybrid extragradient viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 49, 1499–1515. [Google Scholar] [CrossRef]
16. Tada, A.; Takahashi, W. Strong convergence theorem for an equilibrium problem and a nonexpansive mapping. In Nonlinear Analysis and Convex Analysis; Takahashi, W., Tanaka, T., Eds.; Yokohama Publishers: Yokohama, Japan, 2006. [Google Scholar]
17. Anh, P.N. A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems. Bull. Malays. Math. Sci. Soc. 2013, 36, 107–116. [Google Scholar]
18. Hieu, D.V.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 2016, 73, 197–217. [Google Scholar] [CrossRef]
19. Eskandani, G.Z.; Raeisi, M.; Rassias, T.M. A hybrid extragradient method for solving pseudomonotone equilibrium problem using Bregman distance. J. Fixed Point Theory Appl. 2018, 20, 132. [Google Scholar] [CrossRef]
20. Anh, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 62, 271–283. [Google Scholar] [CrossRef]
21. Eskandani, G.; Raeisi, M. A hybrid extragradient method for a general split equality problem involving resolvents and pseudomonotone bifunctions in Banach spaces. Calcolo 2019, 56, 43. [Google Scholar]
22. Hieu, D.V.; Quy, P.K. Accelerated hybrid methods for solving pseudomonotone equilibrium problems. Adv. Comput. Math. 2020, 46, 1–24. [Google Scholar]
23. Hussain, A.; Khanpanuk, T.; Pakkaranang, N.; Rehman, H.U.; Wairojjana, N. A self-adaptive Popov's extragradient method for solving equilibrium problems with applications. J. Math. Anal. 2020, 12, 523. [Google Scholar]
24. Wairojjana, N.; ur Rehman, H.; Pakkaranang, N.; Khanpanuk, T. Modified Popov's subgradient extragradient algorithm with inertial technique for equilibrium problems and its applications. Int. J. Appl. Math. 2020, 33, 879–901. [Google Scholar] [CrossRef]
25. Jolaoso, L.O.; Aphane, M. A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl. 2020, 1, 1–22. [Google Scholar] [CrossRef]
26. Wairojjana, N.; De la Sen, M.; Pakkaranang, N. A general inertial projection-type algorithm for solving equilibrium problem in Hilbert spaces with applications in fixed-point problems. Axioms 2020, 9, 101. [Google Scholar] [CrossRef]
27. Abbas, M.; Rizvi, Y. Strong convergence of a system of generalized mixed equilibrium problem, split variational inclusion problem and fixed point problem in Banach spaces. Symmetry 2019, 11, 722. [Google Scholar] [CrossRef] [Green Version]
28. Kazmi, R.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124. [Google Scholar] [CrossRef]
29. Bauschke, H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef] [Green Version]
30. Lemaréchal, C.; Hiriart-Urruty, J.B. Convex Analysis and Minimization Algorithms II; Springer: Berlin, Germany, 1993; Volume 306. [Google Scholar]
31. Reich, S.; Sabach, S. A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10, 471–485. [Google Scholar]
32. Reich, S.; Sabach, S. Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31, 22–44. [Google Scholar] [CrossRef]
33. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000. [Google Scholar]
34. Bauschke, H.; Borwein, J.M.; Combettes, P.L. Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3, 615–647. [Google Scholar] [CrossRef] [Green Version]
35. Bregman, L.M. A relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217. [Google Scholar] [CrossRef]
36. Butnariu, D.; Iusem, A.N. Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization; Kluwer Academic: Dordrecht, The Netherlands, 2000; Volume 40. [Google Scholar]
37. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42, 596–636. [Google Scholar] [CrossRef]
38. Butnariu, D.; Resmerita, E. Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, 2006, 084919. [Google Scholar] [CrossRef] [Green Version]
39. Kassay, G.; Reich, S.; Sabach, S. Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1319–1344. [Google Scholar] [CrossRef]
40. Alber, Y.I. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Lecture Notes in Pure and Applied Mathematics; Dekker: New York, NY, USA, 1996; Volume 178, pp. 15–50. [Google Scholar]
41. Censor, Y.; Lent, A. An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34, 321–353. [Google Scholar] [CrossRef]
42. Kohsaka, F.; Takahashi, W. Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6, 505–523. [Google Scholar]
43. Zalinescu, C. Convex Analysis in General Vector Spaces; World Scientific Publishing: Singapore, 2002. [Google Scholar]
44. Naraghirad, E.; Yao, J.C. Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 2013, 141. [Google Scholar] [CrossRef] [Green Version]
45. Butnariu, D.; Iusem, A.N.; Zalinescu, C. On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces. J. Convex Anal. 2003, 10, 35–61. [Google Scholar]
46. Tiel, J.V. Convex Analysis: An Introductory Text; Wiley: New York, NY, USA, 1984. [Google Scholar]
47. Hieu, D.V.; Thai, B.H.; Kumam, P. Parallel modified methods for pseudomonotone equilibrium problems and fixed point problems for quasi-nonexpansive mappings. Adv. Oper. Theory 2020, 5, 1684–1717. [Google Scholar] [CrossRef]
48. Bantaojai, T.; Pakkaranang, N.; Kumam, P.; Kumam, W. Convergence analysis of self-adaptive inertial extragradient method for solving a family of pseudomonotone equilibrium problems with application. Symmetry 2020, 12, 1332. [Google Scholar] [CrossRef]
49. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. A parallel combination extragradi-ent method with Armijo line searching for finding common solutions of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo II. Ser. 2020, 69, 711–735. [Google Scholar] [CrossRef]
50. Hieu, D.V. Common solutions to pseudomonotone equilibrium problems. Bull. Iranian Math. Soc. 2016, 42, 1207–1219. [Google Scholar]
51. Yang, J.; Liu, H. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 2020, 14, 1803–1816. [Google Scholar] [CrossRef]
Figure 1. Example 1. Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.
Figure 2. Example 2. Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.
Figure 3. Example 3. Top left: Case I; top right: Case II; bottom: Case III.
Table 1. Computational result for Example 1. PHBSEM, parallel hybrid Bregman subgradient extragradient method.
| | | Case I | Case II | Case III | Case IV |
| --- | --- | --- | --- | --- | --- |
| PHBSEM with $f 1$ | Iter. | 15 | 15 | 15 | 16 |
| | Time (s) | 1.0343 | 1.7926 | 4.7522 | 2.6533 |
| PHBSEM with $f 2$ | Iter. | 9 | 9 | 9 | 9 |
| | Time (s) | 0.8500 | 1.1863 | 3.5812 | 2.4625 |
| HPA with $f 1$ | Iter. | 21 | 32 | 77 | 37 |
| | Time (s) | 1.1719 | 6.5847 | 38.7433 | 11.6689 |
| HPA with $f 2$ | Iter. | 19 | 17 | 18 | 30 |
| | Time (s) | 1.2669 | 3.7743 | 9.0533 | 9.3293 |
| HPEM | Iter. | 26 | 27 | 28 | 29 |
| | Time (s) | 1.4189 | 5.1728 | 12.5232 | 8.0773 |
Table 2. Computational result for Example 2.
| | | Case I | Case II | Case III | Case IV |
| --- | --- | --- | --- | --- | --- |
| PHBSEM with $f 1$ | Iter. | 14 | 15 | 15 | 15 |
| | Time (s) | 2.3438 | 3.7407 | 2.5454 | 7.7285 |
| PHBSEM with $f 2$ | Iter. | 9 | 9 | 11 | 9 |
| | Time (s) | 2.0031 | 4.8016 | 3.6948 | 6.3261 |
| HPA with $f 1$ | Iter. | 18 | 20 | 29 | 32 |
| | Time (s) | 6.1129 | 9.6401 | 10.9361 | 23.6825 |
| HPA with $f 2$ | Iter. | 17 | 17 | 25 | 27 |
| | Time (s) | 6.0802 | 8.0388 | 9.2322 | 19.1269 |
| HPEM | Iter. | 26 | 26 | 27 | 27 |
| | Time (s) | 8.0451 | 13.0750 | 9.7936 | 18.2040 |
Table 3. Computational result for Example 3.
| | | Case I | Case II | Case III |
| --- | --- | --- | --- | --- |
| PHBSEM | Iter. | 7 | 17 | 14 |
| | Time (s) | 2.9028 | 1.1050 | 1.1554 |
| HPA | Iter. | 8 | 21 | 18 |
| | Time (s) | 3.0987 | 2.8468 | 1.9877 |
| HPEM | Iter. | 8 | 19 | 16 |
| | Time (s) | 2.9476 | 2.0084 | 1.9088 |
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
