Halpern-Subgradient Extragradient Method for Solving Equilibrium and Common Fixed Point Problems in Reflexive Banach Spaces

Abstract: In this paper, using the concept of Bregman distance, we introduce a new Bregman subgradient extragradient method for solving equilibrium and common fixed point problems in a real reflexive Banach space. The algorithm is designed so that the stepsize is chosen without prior knowledge of the Lipschitz constants. We also prove a strong convergence result for the sequence generated by our algorithm under mild conditions. We apply our result to solving variational inequality problems and, finally, give some numerical examples to illustrate the efficiency and accuracy of the algorithm.


Introduction
In 1994, Blum and Oettli [1] revisited the Equilibrium Problem (EP), first introduced by Ky Fan, which has since become a fundamental concept and an important mathematical tool for solving many concrete problems. The EP generalizes many nonlinear problems, such as variational inequalities, minimization problems, fixed point problems, and saddle point problems, in a unified way; see, for instance, [1][2][3][4]. It is well known that several problems arising in many fields of pure and applied mathematics, such as economics, physics, optimization theory, engineering mechanics, management sciences, and network analysis, can be modeled as an EP; see, e.g., [5] for details.
Let E be a real reflexive Banach space, and let $C \subset E$ be a nonempty, closed, and convex subset. Let $g : C \times C \to \mathbb{R}$ be a bifunction. The EP is defined in the following manner: find $u^* \in C$ such that
$$g(u^*, z) \geq 0, \quad \forall z \in C. \tag{1}$$
We denote the set of solutions of problem (1) by EP(g). Because of the great importance of the EP and its applications, it has provided a rich area of research for many mathematicians. Recently, many authors have proposed numerous algorithms for solving the EP (1); see, for example, [6][7][8][9]. Some of these algorithms involve proximal point methods [10,11], projection methods [12,13], extragradient methods with or without linesearches [14][15][16], descent methods based on merit functions [17,18], and methods using the Bregman distance [19,20].
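For instance (a minimal numerical sketch; the function h, the set C, and the sample grid are chosen purely for illustration), taking $g(x, y) = h(y) - h(x)$ for a convex function h turns (1) into the problem of minimizing h over C: a constrained minimizer $u^*$ of h satisfies $g(u^*, z) \geq 0$ for all $z \in C$.

```python
import numpy as np

# Illustration: with g(x, y) = h(y) - h(x), the EP "find u* in C with
# g(u*, z) >= 0 for all z in C" reduces to minimizing h over C.
# Here h(x) = (x - 2)^2 and C = [0, 1], so the constrained minimizer is u* = 1.
h = lambda x: (x - 2.0) ** 2
g = lambda x, y: h(y) - h(x)

C = np.linspace(0.0, 1.0, 1001)   # dense sample of C = [0, 1]
u_star = 1.0                      # minimizer of h over C

# g(u*, z) >= 0 for every sampled z in C, so u* solves the EP
assert np.all(g(u_star, C) >= -1e-12)
```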
In 1976, Korpelevich [21] introduced the extragradient method for solving the variational inequality problem (which is really a special case of the EP) for L-Lipschitz continuous and monotone operators in Euclidean spaces. Korpelevich proved the convergence of the generated sequence under the assumptions of Lipschitz continuity and strong monotonicity. The method, however, requires calculating two projections onto the closed convex set C in each iteration. Korpelevich's extragradient strategy has been widely studied in the literature for solving increasingly general problems, such as finding a common point lying in the solution set of a variational inequality and the set of fixed points of a nonexpansive mapping. This kind of problem arises in various theoretical and modeling contexts; see [22,23] and the references therein. Some years later, Quoc et al. [15] presented a modified version of Korpelevich's method, in which they extended the method to solve EPs for pseudomonotone and Lipschitz continuous bifunctions. They replaced the two projections onto the feasible set C by solving two convex optimization programs at every iteration. In 2013, Pham Ngoc Anh [24] presented a hybrid extragradient iteration method, in which the extragradient method was extended to fixed point and equilibrium problems for a pseudomonotone and Lipschitz-type continuous bifunction in the setting of a real Hilbert space.
Recently, numerous authors have studied and improved Korpelevich's extragradient method for variational inequalities in various ways; see, for example, [25,26]. The subgradient extragradient method is one such improvement; see [25]. It replaces the second projection in Korpelevich's extragradient method with a projection onto a simple half-space. It is important to note that the projection onto a half-space can easily be calculated explicitly, unlike the projection onto the whole set C, which can be complicated when C is not simple. This idea has motivated several improvements of extragradient-like methods in the literature; see [27][28][29][30]. Recently, Dang Van Hieu [31] extended the subgradient extragradient method to equilibrium problems in real Hilbert spaces. He proved that the subgradient extragradient method converges strongly to an element $x \in EP(g)$, provided that the stepsize condition
$$0 < \lambda_n < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\} \tag{2}$$
is satisfied, where $c_1$ and $c_2$ are the Lipschitz-like constants of g. It is important to note that the constants $c_1$ and $c_2$ are very difficult to find; even when they can be estimated, the estimates are often too small, which deteriorates the rate of convergence of the algorithm. There has been an increasing effort to find iterative methods for solving the EP without a prior condition involving the constants $c_1$ and $c_2$; see, e.g., [32][33][34][35][36][37][38]. On the other hand, Eskandani et al. [39] introduced a hybrid extragradient method for solving the EP (1) in a real reflexive Banach space. They showed that the sequence produced by their algorithm converges strongly to a solution of (1). Motivated by the above results, we introduce a Halpern-subgradient extragradient method for solving pseudomonotone EPs and finding a common fixed point of a countable family of quasi-Bregman nonexpansive mappings in real reflexive Banach spaces.
The stepsize of our algorithm is determined by a self-adaptive technique, and we prove a strong convergence result without a prior estimate of the Lipschitz constants. We also provide an application of our result to variational inequality problems and give some numerical experiments to show the numerical behaviour of our algorithm. This improves the work of Eskandani et al. [39] and extends the results of [32][33][34][35][36][37] to a reflexive Banach space using Bregman distance techniques.
Throughout this paper, E denotes a real Banach space with dual $E^*$; $\langle x^*, x \rangle$ denotes the duality pairing between $x \in E$ and $x^* \in E^*$; $\forall$ means "for all"; min{A} is the minimum of a set A and max{B} the maximum of a set B; $x_n \to u$ denotes the strong convergence of a sequence $\{x_n\} \subset E$ to a point $u \in E$, while $x_n \rightharpoonup u$ denotes the weak convergence of $x_n$ to u; $\|\cdot\|$ denotes the norm on E, while $\|\cdot\|_*$ denotes the norm on $E^*$; EP denotes the equilibrium problem and EP(g) the solution set of the equilibrium problem; F(T) is the set of fixed points of a mapping T; $\nabla f$ is the gradient of a function f; and $\mathbb{R}$ is the real number line.

Preliminaries
In this section, we recall some definitions and basic facts and notions that we will need in the sequel.
Let E and $C \subset E$ be as defined earlier in the introduction, and let $E^*$ denote the dual space of E. The function $f : E \to (-\infty, \infty]$ is always assumed to be admissible, i.e., proper, convex, and lower semicontinuous. Let $\mathrm{dom}\, f = \{u \in E : f(u) < \infty\}$ denote the domain of f, and let $u^* \in \mathrm{int\,dom}\, f$. The subdifferential of f at $u^*$ is the convex set
$$\partial f(u^*) = \{\xi \in E^* : f(u^*) + \langle \xi, z - u^* \rangle \leq f(z), \ \forall z \in E\},$$
and the Fenchel conjugate of f is the function
$$f^*(\xi) = \sup\{\langle \xi, z \rangle - f(z) : z \in E\}, \quad \xi \in E^*.$$
It is not difficult to show that $f^*$ is also an admissible function.
For any convex function $f : E \to (-\infty, \infty]$, we denote by $f^\circ(x, z)$ the right-hand derivative of f at $x \in \mathrm{int\,dom}\, f$ in the direction z, that is,
$$f^\circ(x, z) := \lim_{t \to 0^+} \frac{f(x + tz) - f(x)}{t}. \tag{4}$$
If the limit in (4) exists for each z, then f is said to be Gâteaux differentiable at x. In this case, the gradient of f at x is the linear function $\nabla f(x)$ defined by $\langle \nabla f(x), z \rangle := f^\circ(x, z)$ for all $z \in E$. The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at each $x \in \mathrm{int\,dom}\, f$. When the limit in (4) is attained uniformly for all $z \in E$ with $\|z\| = 1$, we say that f is Fréchet differentiable at x. Throughout this paper, $f : E \to \mathbb{R}$ is always an admissible function; under this condition, f is continuous in $\mathrm{int\,dom}\, f$.
The function f is said to be Legendre if it satisfies the following two conditions:

L1. $\mathrm{int\,dom}\, f \neq \emptyset$, and the subdifferential $\partial f$ is single-valued on its domain; and

L2. $\mathrm{int\,dom}\, f^* \neq \emptyset$, and $\partial f^*$ is single-valued on its domain.
It is well known that, in reflexive spaces, $\nabla f = (\nabla f^*)^{-1}$ (see [40], p. 83). Combining conditions (L1) and (L2), we obtain
$$\mathrm{ran}\,\nabla f = \mathrm{dom}\,\nabla f^* = \mathrm{int\,dom}\, f^* \quad \text{and} \quad \mathrm{ran}\,\nabla f^* = \mathrm{dom}\,\nabla f = \mathrm{int\,dom}\, f.$$
It also follows that f is Legendre if and only if $f^*$ is Legendre [41] (Corollary 5.5, p. 634), and that the functions f and $f^*$ are Gâteaux differentiable and strictly convex in the interior of their respective domains.
In 1967, Bregman [42] introduced the concept of the Bregman distance, and he discovered a rich and effective way of using it in the process of designing and analyzing feasibility and optimization algorithms. From now on, we assume that $f : E \to \mathbb{R}$ is Gâteaux differentiable, and the Bregman distance with respect to f is defined by
$$D_f(z, x) := f(z) - f(x) - \langle \nabla f(x), z - x \rangle, \quad z \in \mathrm{dom}\, f, \ x \in \mathrm{int\,dom}\, f.$$
The Bregman distance does not satisfy the well-known properties of a metric: it is not symmetric in general, and the triangle inequality does not hold. However, it generalizes the law of cosines, which in this setting is known as the three point identity: for any $u^* \in \mathrm{dom}\, f$ and $y, z \in \mathrm{int\,dom}\, f$,
$$D_f(u^*, y) + D_f(y, z) - D_f(u^*, z) = \langle \nabla f(z) - \nabla f(y), u^* - y \rangle. \tag{5}$$
Following [43,44], the modulus of total convexity at $x \in \mathrm{int\,dom}\, f$ is the function $\nu_f(x, \cdot) : [0, +\infty) \to [0, \infty]$ given by
$$\nu_f(x, t) := \inf\{D_f(z, x) : z \in \mathrm{dom}\, f, \ \|z - x\| = t\}.$$
The function f is termed totally convex at $u^* \in \mathrm{int\,dom}\, f$ if $\nu_f(u^*, s)$ is positive for any $s > 0$; additionally, f is termed totally convex when it is totally convex at every point $u^* \in \mathrm{int\,dom}\, f$. We remark in passing that f is totally convex on bounded subsets if and only if f is uniformly convex on bounded subsets (see [43]). Recall that f is termed sequentially consistent [45] if, for any two sequences $\{u_n\}$ and $\{z_n\}$ in E such that the first is bounded,
$$\lim_{n\to\infty} D_f(z_n, u_n) = 0 \implies \lim_{n\to\infty} \|z_n - u_n\| = 0.$$

Lemma 1 ([46]). If $f : E \to \mathbb{R}$ is uniformly Fréchet differentiable and bounded on bounded subsets of E, then $\nabla f$ is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of $E^*$.
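The following sketch (illustrative, in Python with NumPy; the test points are arbitrary) evaluates the Bregman distance for two classical choices of f and checks the three point identity numerically: $f(x) = \frac{1}{2}\|x\|^2$ gives $D_f(y, x) = \frac{1}{2}\|y - x\|^2$, while the negative entropy $f(x) = \sum_j x_j \log x_j$ gives the Kullback-Leibler divergence.

```python
import numpy as np

def bregman(f, grad_f, y, x):
    """D_f(y, x) = f(y) - f(x) - <grad f(x), y - x>."""
    return f(y) - f(x) - np.dot(grad_f(x), y - x)

# f(x) = 0.5||x||^2  ->  D_f(y, x) = 0.5||y - x||^2
sq   = lambda x: 0.5 * np.dot(x, x)
gsq  = lambda x: x

# negative entropy f(x) = sum x_j log x_j  ->  Kullback-Leibler divergence
ent  = lambda x: np.sum(x * np.log(x))
gent = lambda x: np.log(x) + 1.0

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.6, 0.3])

assert np.isclose(bregman(sq, gsq, y, x), 0.5 * np.dot(y - x, y - x))
kl = np.sum(y * np.log(y / x))   # sum(y) == sum(x) here, so the linear terms cancel
assert np.isclose(bregman(ent, gent, y, x), kl)

# three point identity:
# D_f(u, y) + D_f(y, z) - D_f(u, z) = <grad f(z) - grad f(y), u - y>
u, z = np.array([0.4, 0.4, 0.2]), np.array([0.25, 0.25, 0.5])
lhs = bregman(ent, gent, u, y) + bregman(ent, gent, y, z) - bregman(ent, gent, u, z)
rhs = np.dot(gent(z) - gent(y), u - y)
assert np.isclose(lhs, rhs)
```

Note that $D_f(y, x) \geq 0$ for convex f, but the example makes the asymmetry visible: for the entropy case, $D_f(y, x) \neq D_f(x, y)$ in general.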

Lemma 2 ([47]). Let $f : E \to \mathbb{R}$ be a Gâteaux differentiable and totally convex function. If $x_0 \in E$ and the sequence $\{D_f(x_n, x_0)\}$ is bounded, then the sequence $\{x_n\}$ is also bounded.
The Bregman projection [42] with respect to f of $x \in \mathrm{int\,dom}\, f$ onto a nonempty, closed, and convex set $C \subset \mathrm{int\,dom}\, f$ is defined as the necessarily unique vector $\mathrm{Proj}_C^f(x) \in C$ satisfying
$$D_f(\mathrm{Proj}_C^f(x), x) = \inf\{D_f(z, x) : z \in C\}.$$
Similar to the metric projection in Hilbert spaces, the Bregman projection with respect to totally convex and Gâteaux differentiable functions has a variational characterization [46] (Corollary 4.4, p. 23).
Suppose that f is Gâteaux differentiable and totally convex on $\mathrm{int\,dom}\, f$. Let $x \in \mathrm{int\,dom}\, f$ and let $C \subset \mathrm{int\,dom}\, f$ be a nonempty, closed, and convex set. If $\bar{x} \in C$, then the following conditions are equivalent:

M1. The vector $\bar{x} \in C$ is the Bregman projection of x onto C with respect to f.

M2. The vector $\bar{x} \in C$ is the unique solution of the variational inequality
$$\langle \nabla f(x) - \nabla f(\bar{x}), z - \bar{x} \rangle \leq 0, \quad \forall z \in C.$$

M3. The vector $\bar{x} \in C$ is the unique solution of the inequality
$$D_f(z, \bar{x}) + D_f(\bar{x}, x) \leq D_f(z, x), \quad \forall z \in C.$$

Definition 1. Let $T : C \to C$ be a mapping. A point $x \in C$ is called a fixed point of T if $Tx = x$. The set of fixed points of T is denoted by F(T). Additionally, a point $x^* \in C$ is said to be an asymptotic fixed point of T if C contains a sequence $\{x_n\}_{n=1}^\infty$ which converges weakly to $x^*$ and $\lim_{n\to\infty} \|x_n - Tx_n\| = 0$. The set of asymptotic fixed points of T is denoted by $\hat{F}(T)$.
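For $f(x) = \frac{1}{2}\|x\|^2$, the Bregman projection reduces to the ordinary metric projection, and the characterizations M2 and M3 can be verified numerically. In the sketch below (the half-space C, the point x, and the sample size are illustrative choices), we project onto a half-space, for which the projection has a closed form.

```python
import numpy as np

# For f(x) = 0.5||x||^2 the Bregman projection is the metric projection.
# We project onto the half-space C = {z : <a, z> <= b} and check M2 and M3.
def proj_halfspace(x, a, b):
    gap = np.dot(a, x) - b
    return x if gap <= 0 else x - (gap / np.dot(a, a)) * a

a, b = np.array([1.0, 2.0]), 1.0
x = np.array([3.0, 1.0])
xb = proj_halfspace(x, a, b)                  # the projection \bar{x}

D = lambda y, z: 0.5 * np.dot(y - z, y - z)   # D_f for f = 0.5||.||^2

rng = np.random.default_rng(0)
for _ in range(100):
    z = proj_halfspace(rng.normal(size=2), a, b)   # a feasible point of C
    # M2: <grad f(x) - grad f(xb), z - xb> = <x - xb, z - xb> <= 0
    assert np.dot(x - xb, z - xb) <= 1e-10
    # M3: D_f(z, xb) + D_f(xb, x) <= D_f(z, x)
    assert D(z, xb) + D(xb, x) <= D(z, x) + 1e-10
```

In the Euclidean case, M3 is exactly the Pythagoras-type inequality of the metric projection, and M2 is its usual variational characterization.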

Definition 2 ([48]). Let C be a nonempty, closed, and convex subset of E. A mapping $T : C \to C$ is said to be:

i. quasi-Bregman nonexpansive (QBNE) if $F(T) \neq \emptyset$ and $D_f(p, Tx) \leq D_f(p, x)$ for all $p \in F(T)$ and $x \in C$;

ii. Bregman strongly nonexpansive (BSNE) with respect to a nonempty $\hat{F}(T)$ if $D_f(p, Tx) \leq D_f(p, x)$ for all $p \in \hat{F}(T)$ and $x \in C$, and if, whenever $\{x_n\}_{n=1}^\infty \subset C$ is bounded, $p \in \hat{F}(T)$, and $\lim_{n\to\infty}(D_f(p, x_n) - D_f(p, Tx_n)) = 0$, it follows that $\lim_{n\to\infty} D_f(Tx_n, x_n) = 0$.

It was remarked in [19] that, in the case where $\hat{F}(T) = F(T)$, every BSNE mapping is QBNE. Let B and S be the closed unit ball and the unit sphere of a Banach space E, and let $rB = \{z \in E : \|z\| \leq r\}$ for all $r > 0$. Then the function $f : E \to \mathbb{R}$ is said to be uniformly convex on bounded subsets (see [49]) if $\rho_r(t) > 0$ for all $r, t > 0$, where $\rho_r : [0, \infty) \to [0, \infty]$ is defined by
$$\rho_r(t) := \inf\left\{\frac{\lambda f(x) + (1 - \lambda) f(y) - f(\lambda x + (1 - \lambda) y)}{\lambda(1 - \lambda)} : \lambda \in (0, 1), \ x, y \in rB, \ \|x - y\| = t\right\}$$
for all $t \geq 0$. The function $\rho_r$ is called the gauge of uniform convexity of f. It is known that $\rho_r$ is a nondecreasing function. If f is uniformly convex, then the following lemma is known.

Lemma 3 ([50]). Let E be a Banach space, $r > 0$ be a constant, and $f : E \to \mathbb{R}$ be a uniformly convex function on bounded subsets of E. Then
$$f\left(\sum_{k=0}^n a_k x_k\right) \leq \sum_{k=0}^n a_k f(x_k) - a_i a_j \rho_r(\|x_i - x_j\|)$$
for all $i, j \in \{0, 1, 2, \ldots, n\}$, $x_k \in rB$, and $a_k \in (0, 1)$, $k = 0, 1, 2, \ldots, n$, with $\sum_{k=0}^n a_k = 1$, where $\rho_r$ is the gauge of uniform convexity of f.

For each $u \in C$, the subdifferential of the convex function $g(u, \cdot)$ at u is denoted by
$$\partial_2 g(u, u) = \{w \in E^* : g(u, z) \geq g(u, u) + \langle w, z - u \rangle, \ \forall z \in C\}.$$

Lemma 5 ([51]). Let C be a nonempty convex subset of E and $f : C \to \mathbb{R}$ be a convex and subdifferentiable function on C. Then f attains its minimum at $x \in C$ if and only if $0 \in \partial f(x) + N_C(x)$, where $N_C(x)$ is the normal cone of C at x, that is,
$$N_C(x) := \{\xi \in E^* : \langle \xi, z - x \rangle \leq 0, \ \forall z \in C\}.$$

Throughout this paper, we assume that the following assumptions hold on g:

A1. g is pseudomonotone, i.e., $g(x, y) \geq 0$ implies $g(y, x) \leq 0$ for all $x, y \in C$;

A2. g satisfies the Bregman-Lipschitz-type condition, i.e., there exist two positive constants $c_1, c_2$ such that
$$g(x, y) + g(y, z) \geq g(x, z) - c_1 D_f(y, x) - c_2 D_f(z, y), \quad \forall x, y, z \in C;$$

A3. $g(x, x) = 0$ for all $x \in C$;

A4. $g(\cdot, y)$ is continuous on C for every $y \in C$; and

A5. $g(x, \cdot)$ is convex, lower semicontinuous, and subdifferentiable on C for every fixed $x \in C$.
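To make condition (A2) concrete: for $g(x, y) = \langle Ax, y - x \rangle$ with a linear L-Lipschitz operator A and $f(x) = \frac{1}{2}\|x\|^2$, one may take $c_1 = c_2 = L$, since $g(x,y) + g(y,z) - g(x,z) = \langle Ax - Ay, y - z \rangle \geq -L\|x - y\|\|y - z\| \geq -L D_f(y, x) - L D_f(z, y)$. A quick randomized check (the matrix M and the sample points are illustrative, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
L = np.linalg.norm(M, 2)                 # spectral norm = Lipschitz constant of x -> Mx

g = lambda x, y: np.dot(M @ x, y - x)    # illustrative bifunction for (A2)
D = lambda y, x: 0.5 * np.dot(y - x, y - x)

# Since <Mx - My, y - z> >= -L||x-y|| ||y-z|| >= -(L/2)||x-y||^2 - (L/2)||y-z||^2,
# the Bregman-Lipschitz-type inequality holds with c1 = c2 = L.
for _ in range(1000):
    x, y, z = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
    assert g(x, y) + g(y, z) >= g(x, z) - L * D(y, x) - L * D(z, y) - 1e-9
```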

Lemma 6 ([52]). Let E be a reflexive Banach space, $f : E \to \mathbb{R}$ be a strongly coercive Bregman function, and $V_f : E \times E^* \to [0, +\infty)$ be defined by
$$V_f(x, x^*) = f(x) - \langle x^*, x \rangle + f^*(x^*), \quad x \in E, \ x^* \in E^*.$$
Then the following assertions hold:

(i) $D_f(x, \nabla f^*(x^*)) = V_f(x, x^*)$ for all $x \in E$ and $x^* \in E^*$;

(ii) $V_f(x, x^*) + \langle \nabla f^*(x^*) - x, y^* \rangle \leq V_f(x, x^* + y^*)$ for all $x \in E$ and $x^*, y^* \in E^*$.

In addition, if $f : E \to (-\infty, \infty]$ is a proper lower semicontinuous function, then $f^* : E^* \to (-\infty, \infty]$ is a proper weak* lower semicontinuous and convex function. Hence, $V_f$ is convex in the second variable. Thus, for all $z \in E$, we have
$$D_f\left(z, \nabla f^*\left(\sum_{k=0}^N t_k \nabla f(x_k)\right)\right) \leq \sum_{k=0}^N t_k D_f(z, x_k),$$
where $\{x_k\} \subset E$ and $\{t_k\} \subset (0, 1)$ with $\sum_{k=0}^N t_k = 1$.

Lemma 7 ([53]). Let $\{\Theta_n\}$ be a sequence of non-negative real numbers satisfying
$$\Theta_{n+1} \leq (1 - \alpha_n)\Theta_n + \alpha_n b_n, \quad n \geq 0,$$
where $\{\alpha_n\} \subset (0, 1)$ with $\sum_{n=0}^\infty \alpha_n = \infty$ and $\limsup_{n\to\infty} b_n \leq 0$. Then $\lim_{n\to\infty} \Theta_n = 0$.

Lemma 8 ([54]). Let $\{a_n\}$ be a sequence of real numbers such that there exists a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ with $a_{n_j} < a_{n_j+1}$ for all $j \in \mathbb{N}$. Define the sequence of integers $\{m_k\}$ by $m_k = \max\{j \leq k : a_j < a_{j+1}\}$. Then $\{m_k\}$ is a non-decreasing sequence verifying $\lim_{k\to\infty} m_k = \infty$, and for all $k \in \mathbb{N}$, the following estimates hold: $a_{m_k} \leq a_{m_k+1}$ and $a_k \leq a_{m_k+1}$.

Main Results
In this section, we present our algorithm and establish its convergence analysis. Let E be a real reflexive Banach space and C be a nonempty, closed, and convex subset of E. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A5). For $i \in \aleph$, where $\aleph = \mathbb{N}\setminus\{0\}$, let $T_i : E \to E$ be a countable family of quasi-Bregman nonexpansive mappings such that $I - T_i$ are demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of E. Suppose that the solution set $Sol := \bigcap_{i=1}^\infty F(T_i) \cap EP(g)$ is nonempty. We assume that the control sequences satisfy the following conditions:

(C1) $\{\alpha_n\} \subset (0, 1)$ with $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$;

(C2) $\{\beta_{n,i}\} \subset (0, 1)$ with $\sum_{i=0}^\infty \beta_{n,i} = 1$ and $\liminf_{n\to\infty} \beta_{n,0}\beta_{n,i} > 0$ for each $i \geq 1$.

Now, suppose that the sequence $\{x_n\}$ is generated by the following algorithm.

Algorithm 1.

Step 0: Choose $u, x_0 \in E$, $\lambda_0 > 0$, and $\sigma \in (0, 1)$. Set $n = 0$.

Step 1: Compute
$$y_n = \operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}.$$
If $x_n = y_n$, set $z_n = x_n$ and go to Step 3; else, do Step 2.

Step 2: Choose $w_n \in \partial_2 g(x_n, y_n)$ such that $\nabla f(x_n) - \lambda_n w_n - \nabla f(y_n) \in N_C(y_n)$, construct the half-space
$$D_n = \{z \in E : \langle \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n), z - y_n \rangle \leq 0\},$$
and compute
$$z_n = \operatorname{argmin}\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in D_n\}.$$

Step 3: Compute
$$u_n = \nabla f^*\left(\beta_{n,0}\nabla f(z_n) + \sum_{i=1}^\infty \beta_{n,i}\nabla f(T_i z_n)\right).$$

Step 4: Calculate $x_{n+1}$ and $\lambda_{n+1}$ as follows:
$$x_{n+1} = \nabla f^*\big(\alpha_n \nabla f(u) + (1 - \alpha_n)\nabla f(u_n)\big), \tag{7}$$
$$\lambda_{n+1} = \begin{cases} \min\left\{\lambda_n, \dfrac{\sigma\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big)}{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)}\right\}, & \text{if } g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0, \\ \lambda_n, & \text{otherwise}. \end{cases} \tag{8}$$
Set $n \to n + 1$ and go to Step 1.

Remark 1. Note that when $x_n = y_n = u_n$, we are at a common solution of the EP and of the fixed point problems for $T_i$, $i \in \aleph$. Moreover, the following points highlight some of the advantages of Algorithm 1. (i) Eskandani et al. [39] introduced a hybrid extragradient method whose convergence depends on the Lipschitz constants $c_1$ and $c_2$, which are very difficult to estimate. In contrast, Algorithm 1 does not depend on the Lipschitz constants, and the second argmin problem can easily be solved over the half-space $D_n$. (ii) Hieu and Strodiot [55] proposed an extragradient method with a line search technique (Algorithm 4.1) in 2-uniformly convex Banach spaces. It is known that such line search methods are not always efficient because they contain an inner loop, which may consume extra computation time. In Algorithm 1, the stepsize is selected self-adaptively and does not involve any inner loop. (iii) Our algorithm also extends the subgradient extragradient methods of [32][33][34][35][36][37] to reflexive Banach spaces using the Bregman distance.
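In the Euclidean setting $f(x) = \frac{1}{2}\|x\|^2$ (so that $\nabla f = \nabla f^* = I$, $D_f(y, x) = \frac{1}{2}\|y - x\|^2$, and the Bregman projection is the metric projection), the steps above admit a short sketch for the affine bifunction $g(x, y) = \langle Mx, y - x \rangle$ over a box C. The matrix M, the box bounds, the anchor u, the initial point, and the single mapping $T = P_C$ below are illustrative choices, not data from the paper.

```python
import numpy as np

def halpern_subgrad_extragrad(M, lo, hi, u, x0, lam0=0.2, sigma=0.5, iters=1500):
    """Sketch of Algorithm 1 for f(x) = 0.5||x||^2, g(x, y) = <Mx, y - x>,
    C = [lo, hi]^d, and one illustrative QBNE mapping T = P_C."""
    proj_C = lambda v: np.clip(v, lo, hi)

    def proj_halfspace(v, a, y):
        # metric projection onto D = {z : <a, z - y> <= 0}
        gap = np.dot(a, v - y)
        return v if gap <= 0 else v - (gap / np.dot(a, a)) * a

    x, lam = np.asarray(x0, dtype=float), lam0
    for n in range(iters):
        w = M @ x                              # w_n in the subdifferential of g(x_n, .)
        y = proj_C(x - lam * w)                # Step 1: argmin becomes a projection
        a = x - lam * w - y                    # normal vector of the half-space D_n
        z = proj_halfspace(x - lam * (M @ y), a, y)   # Step 2
        t = 0.5 * z + 0.5 * proj_C(z)          # Step 3 with beta_{n,0} = beta_{n,1} = 1/2
        alpha = 1.0 / (10 * (n + 1))
        x_next = alpha * u + (1 - alpha) * t   # Step 4: Halpern step
        denom = np.dot(M @ x - M @ y, z - y)   # g(x,z) - g(x,y) - g(y,z)
        if denom > 0:                          # self-adaptive stepsize update
            num = 0.5 * np.dot(z - y, z - y) + 0.5 * np.dot(y - x, y - x)
            lam = min(lam, sigma * num / denom)
        x = x_next
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])         # positive definite: unique solution 0
x = halpern_subgrad_extragrad(M, -1.0, 1.0,
                              u=np.array([0.5, 0.5]), x0=np.array([1.0, 1.0]))
assert np.linalg.norm(x) < 0.05
```

For this strongly monotone instance the iterates approach the unique solution 0, and $\lambda_n$ settles at a positive value without any knowledge of the Lipschitz-like constants.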
We now give the convergence analysis of Algorithm 1. We begin by proving the following necessary results.

Lemma 9. The sequence $\{\lambda_n\}$ generated by our algorithm is bounded below by $\min\left\{\lambda_0, \frac{\sigma}{\max\{c_1, c_2\}}\right\}$.

Proof. We deduce from the definition of $\lambda_{n+1}$ that $\lambda_{n+1} \leq \lambda_n$, so $\{\lambda_n\}$ is monotonically non-increasing. It follows from (8) and condition (A2) that
$$g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) \leq c_1 D_f(y_n, x_n) + c_2 D_f(z_n, y_n) \leq \max\{c_1, c_2\}\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big).$$
Thus, whenever $g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0$,
$$\frac{\sigma\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big)}{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)} \geq \frac{\sigma}{\max\{c_1, c_2\}}.$$
Hence, the sequence $\{\lambda_n\}$ is bounded below by $\min\left\{\lambda_0, \frac{\sigma}{\max\{c_1, c_2\}}\right\}$. This also implies that $\lim_{n\to\infty} \lambda_n = \lambda > 0$ exists.

Lemma 10. For all $u^* \in EP(g)$, the following inequality holds:
$$D_f(u^*, z_n) \leq D_f(u^*, x_n) - \left(1 - \frac{\sigma\lambda_n}{\lambda_{n+1}}\right)\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big).$$

Proof. Because $z_n \in D_n$, it follows from Algorithm 1 that
$$\langle \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n), z_n - y_n \rangle \leq 0. \tag{11}$$
Additionally, since $w_n \in \partial_2 g(x_n, y_n)$, we have
$$g(x_n, y) - g(x_n, y_n) \geq \langle w_n, y - y_n \rangle, \quad \forall y \in E. \tag{12}$$
This implies that $g(x_n, z_n) - g(x_n, y_n) \geq \langle w_n, z_n - y_n \rangle$. Combining (11) and (12), we get
$$\langle \nabla f(x_n) - \nabla f(y_n), z_n - y_n \rangle \leq \lambda_n\big(g(x_n, z_n) - g(x_n, y_n)\big). \tag{13}$$
Additionally, since $z_n = \operatorname{argmin}\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in D_n\}$, it follows from Lemma 5 that
$$0 \in \partial\big(\lambda_n g(y_n, \cdot) + D_f(\cdot, x_n)\big)(z_n) + N_{D_n}(z_n).$$
This implies that there exist $\bar{w}_n \in \partial_2 g(y_n, z_n)$ and $\xi \in N_{D_n}(z_n)$ such that
$$\lambda_n \bar{w}_n + \nabla f(z_n) - \nabla f(x_n) + \xi = 0. \tag{14}$$
Because $\xi \in N_{D_n}(z_n)$, we have $\langle \xi, y - z_n \rangle \leq 0$ for all $y \in D_n$. Hence,
$$\langle \nabla f(x_n) - \nabla f(z_n), y - z_n \rangle \leq \lambda_n \langle \bar{w}_n, y - z_n \rangle, \quad \forall y \in D_n. \tag{15}$$
Additionally, $\bar{w}_n \in \partial_2 g(y_n, z_n)$ gives $g(y_n, y) - g(y_n, z_n) \geq \langle \bar{w}_n, y - z_n \rangle$ for all $y \in E$. Thus,
$$\lambda_n\big(g(y_n, y) - g(y_n, z_n)\big) \geq \lambda_n \langle \bar{w}_n, y - z_n \rangle, \quad \forall y \in E. \tag{16}$$
Now, let $y = u^* \in EP(g)$ in (16). Because $g(u^*, y_n) \geq 0$ and g is pseudomonotone, we have $g(y_n, u^*) \leq 0$. This implies, together with (15), that
$$\langle \nabla f(x_n) - \nabla f(z_n), u^* - z_n \rangle \leq -\lambda_n g(y_n, z_n). \tag{17}$$
Adding (13) and (17), we have
$$\langle \nabla f(x_n) - \nabla f(y_n), z_n - y_n \rangle + \langle \nabla f(x_n) - \nabla f(z_n), u^* - z_n \rangle \leq \lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big).$$
By the Bregman three point identity (5), it follows that
$$D_f(u^*, z_n) - D_f(u^*, x_n) + D_f(z_n, y_n) + D_f(y_n, x_n) \leq \lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big).$$
Additionally, from the definition of $\lambda_{n+1}$, we have
$$\lambda_n\big(g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\big) \leq \frac{\sigma\lambda_n}{\lambda_{n+1}}\big(D_f(z_n, y_n) + D_f(y_n, x_n)\big),$$
which yields the required inequality. $\square$

Lemma 11. The sequence $\{x_n\}$ generated by Algorithm 1 is bounded.
Lemma 12. Let $s = \sup\{\|\nabla f(z_n)\|, \|\nabla f(T_i z_n)\| : n \in \mathbb{N}, \ i \in \aleph\}$ and let $\rho^*_s : E^* \to \mathbb{R}$ be the gauge of uniform convexity of the conjugate function $f^*$. Then, for all $u^* \in Sol$ and each $i \in \aleph$,
$$D_f(u^*, u_n) \leq D_f(u^*, z_n) - \beta_{n,0}\beta_{n,i}\,\rho^*_s\big(\|\nabla f(z_n) - \nabla f(T_i z_n)\|_*\big). \tag{20}$$

Proof. From our algorithm, we have
$$D_f(u^*, u_n) = D_f\Big(u^*, \nabla f^*\Big(\beta_{n,0}\nabla f(z_n) + \sum_{i=1}^\infty \beta_{n,i}\nabla f(T_i z_n)\Big)\Big).$$
It follows from Lemma 6 that
$$D_f(u^*, u_n) = V_f\Big(u^*, \beta_{n,0}\nabla f(z_n) + \sum_{i=1}^\infty \beta_{n,i}\nabla f(T_i z_n)\Big).$$
By Lemma 3 and (18), and since each $T_i$ is quasi-Bregman nonexpansive, we have that
$$D_f(u^*, u_n) \leq \beta_{n,0} D_f(u^*, z_n) + \sum_{i=1}^\infty \beta_{n,i} D_f(u^*, T_i z_n) - \beta_{n,0}\beta_{n,i}\,\rho^*_s\big(\|\nabla f(z_n) - \nabla f(T_i z_n)\|_*\big) \leq D_f(u^*, z_n) - \beta_{n,0}\beta_{n,i}\,\rho^*_s\big(\|\nabla f(z_n) - \nabla f(T_i z_n)\|_*\big). \ \square$$

Next, we prove the strong convergence of the sequence generated by our algorithm.

Theorem 1. Suppose that conditions (C1) and (C2) hold. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $u^* \in Sol$.

Proof. Let $u^* \in Sol$ and put $\Gamma_n = D_f(u^*, x_n)$. We divide the proof into two cases. Case 1: Suppose that there exists $N \in \mathbb{N}$ such that $\{\Gamma_n\}$ is monotonically decreasing for all $n \geq N$. Then $\lim_{n\to\infty} \Gamma_n$ exists, since $\{x_n\}$ is bounded, and thus $\Gamma_n - \Gamma_{n+1} \to 0$ as $n \to \infty$.
By (6), we have $\lim_{n\to\infty}\|x_n - y_n\| = 0$ and $\lim_{n\to\infty}\|z_n - x_n\| = 0$. Accordingly, $\|z_n - y_n\| \leq \|z_n - x_n\| + \|x_n - y_n\| \to 0$ as $n \to \infty$.
Moreover, since f is uniformly Fréchet differentiable, $\nabla f$ is uniformly continuous on bounded subsets of E. Additionally, $\nabla f^*$ is uniformly continuous on bounded subsets of $E^*$. Hence, $\lim_{n\to\infty}\|u_n - z_n\| = 0$.
Recall, from (20), that $\beta_{n,0}\beta_{n,i}\,\rho^*_s\big(\|\nabla f(z_n) - \nabla f(T_i z_n)\|_*\big) \to 0$ as $n \to \infty$. It follows from (C2) that $\lim_{n\to\infty}\rho^*_s\big(\|\nabla f(z_n) - \nabla f(T_i z_n)\|_*\big) = 0$. Hence, $\lim_{n\to\infty}\|u_n - T_i u_n\| = 0$.
Because $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z \in C$. Since $\lim_{n\to\infty}\|y_n - x_n\| = 0$, we also have $y_{n_k} \rightharpoonup z$. Additionally, since $y_n = \operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}$, it follows from Lemma 5 that
$$0 \in \partial\big(\lambda_n g(x_n, \cdot) + D_f(\cdot, x_n)\big)(y_n) + N_C(y_n).$$
This implies that there exist $w_n \in \partial_2 g(x_n, y_n)$ and $\xi \in N_C(y_n)$ such that
$$\lambda_n w_n + \nabla f(y_n) - \nabla f(x_n) + \xi = 0. \tag{25}$$
Note that $\langle \xi, y - y_n \rangle \leq 0$ for all $y \in C$. Hence, from (25), we get
$$\lambda_n \langle w_n, y - y_n \rangle + \langle \xi, y - y_n \rangle = \langle \nabla f(x_n) - \nabla f(y_n), y - y_n \rangle,$$
which implies that
$$\lambda_n \langle w_n, y - y_n \rangle \geq \langle \nabla f(x_n) - \nabla f(y_n), y - y_n \rangle, \quad \forall y \in C.$$
Additionally, since $w_n \in \partial_2 g(x_n, y_n)$, we have
$$g(x_n, y) - g(x_n, y_n) \geq \langle w_n, y - y_n \rangle, \quad \forall y \in C.$$
Furthermore, since $\|u_n - T_i u_n\| \to 0$ and $u_n \rightharpoonup z$, we have $z \in \hat{F}(T_i)$. By the demiclosedness of $I - T_i$, we have $z \in F(T_i)$ for all i. Therefore, combining this with (29) and (30), we obtain $z \in Sol$. Now, we prove that $\{x_n\}$ converges strongly to $u^*$. By Lemma 6, we obtain the estimate (32); hence, it suffices to show that $\limsup_{n\to\infty} b_n \leq 0$, which follows from (7) and (33). Therefore, using Lemma 7, (32), and (33), we have $D_f(u^*, x_n) \to 0$. This implies that $\lim_{n\to\infty}\|x_n - u^*\| = 0$. Hence, $\{x_n\}$ converges strongly to $u^*$.
Case 2: Suppose that $\{\Gamma_n\}$ is not monotonically decreasing, i.e., there exists a subsequence $\{\Gamma_{n_j}\}$ such that $\Gamma_{n_j} < \Gamma_{n_j+1}$ for all $j \in \mathbb{N}$. Then, by Lemma 8, there exists a non-decreasing sequence $\{m_n\} \subset \mathbb{N}$ such that $\Gamma_{m_n} \leq \Gamma_{m_n+1}$ and $\Gamma_n \leq \Gamma_{m_n+1}$, where $m_n = \max\{j \leq n : \Gamma_j \leq \Gamma_{j+1}\}$. Following a similar analysis as in Case 1, we obtain $\limsup_{n\to\infty} b_{m_n} \leq 0$ and $\lim_{n\to\infty}\Gamma_{m_n+1} = 0$. Because $\Gamma_n \leq \Gamma_{m_n+1}$, it follows that $\lim_{n\to\infty}\Gamma_n = 0$. Hence, $\{x_n\}$ converges strongly to $u^*$. This completes the proof. $\square$
The following can be obtained as consequences of our main Theorem.

Corollary 1. Let E be a real reflexive Banach space and C be a nonempty, closed, and convex subset of E. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A5). For $i \in \aleph$, where $\aleph = \mathbb{N}\setminus\{0\}$, let $T_i : E \to E$ be a finite family of Bregman strongly nonexpansive mappings. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of E. Suppose that the solution set $\Gamma := \bigcap_i F(T_i) \cap EP(g)$ is nonempty. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $u^* \in \Gamma$.

Corollary 2. Let E be a real reflexive Banach space and C be a nonempty, closed, and convex subset of E. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A5), and let $T : E \to E$ be a quasi-Bregman nonexpansive mapping such that $I - T$ is demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of E. Suppose that the solution set $Sol := F(T) \cap EP(g)$ is nonempty. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $u^* \in Sol$.

Application: Variational Inequality
In this section, we consider the classical variational inequality problem, which is a particular case of equilibrium problem.
Let $A : C \to E^*$ be a mapping. The variational inequality problem, denoted by VIP, is to find $z \in C$ such that
$$\langle Az, y - z \rangle \geq 0, \quad \forall y \in C.$$
We denote the solution set of the VIP by VIP(C, A). Variational inequalities are important mathematical tools for solving many problems arising in the applied sciences, such as optimization, network equilibrium, mechanics, engineering, and economics (see, for example, [28][29][30] and the references therein).
The following results are important in this section.

Lemma 13 ([48]). Let E be a real reflexive Banach space and C be a nonempty, closed, and convex subset of E. Let $g : C \times C \to \mathbb{R}$ be a bifunction such that $g(x, x) = 0$ for all $x \in C$, and let $f : E \to \mathbb{R}$ be a Legendre and totally convex function. Then a point $x^* \in EP(C, g)$ if and only if $x^*$ solves the following minimization problem:
$$\min\{\lambda g(x^*, y) + D_f(y, x^*) : y \in C\},$$
where $\lambda > 0$.

Lemma 14 ([39]). Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, $A : C \to E^*$ be a mapping, and $f : E \to \mathbb{R}$ be a Legendre function. Then, for all $x \in E$ and $\lambda \in (0, +\infty)$,
$$\operatorname{argmin}\{\lambda \langle Ax, y - x \rangle + D_f(y, x) : y \in C\} = \mathrm{Proj}_C^f\big(\nabla f^*(\nabla f(x) - \lambda Ax)\big).$$
Now, by setting $g(x, y) = \langle Ax, y - x \rangle$ for all $x, y \in C$, it follows from Lemmas 13 and 14 that
$$\operatorname{argmin}\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\} = \mathrm{Proj}_C^f\big(\nabla f^*(\nabla f(x_n) - \lambda_n Ax_n)\big).$$
Similarly,
$$\operatorname{argmin}\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in T_n\} = \mathrm{Proj}_{T_n}^f\big(\nabla f^*(\nabla f(x_n) - \lambda_n Ay_n)\big),$$
where $T_n$ denotes the corresponding half-space.

Note that
$$g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) = \langle Ax_n, z_n - x_n \rangle - \langle Ax_n, y_n - x_n \rangle - \langle Ay_n, z_n - y_n \rangle = \langle Ax_n, z_n - y_n \rangle - \langle Ay_n, z_n - y_n \rangle = \langle Ax_n - Ay_n, z_n - y_n \rangle.$$
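For $f(x) = \frac{1}{2}\|x\|^2$, the resolvent step of Lemma 14 reads $\operatorname{argmin}\{\lambda\langle Ax, y - x\rangle + D_f(y, x) : y \in C\} = P_C(x - \lambda Ax)$, and the identity above can be checked directly. In the sketch below, the linear operator A, the box C, and the sample points are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[1.0, 0.5], [-0.5, 1.0]])
A = lambda v: M @ v                          # illustrative operator
lam = 0.3
proj_C = lambda v: np.clip(v, -1.0, 1.0)     # C = [-1, 1]^2

# Lemma 14 with f = 0.5||.||^2: the argmin is the projection P_C(x - lam*Ax).
x = np.array([0.8, -0.4])
cand = proj_C(x - lam * A(x))                # claimed argmin
obj = lambda y: lam * np.dot(A(x), y - x) + 0.5 * np.dot(y - x, y - x)
for _ in range(2000):
    y = rng.uniform(-1.0, 1.0, size=2)       # random feasible point of C
    assert obj(cand) <= obj(y) + 1e-9

# the algebraic identity g(x,z) - g(x,y) - g(y,z) = <Ax - Ay, z - y>
g = lambda p, q: np.dot(A(p), q - p)
y, z = rng.normal(size=2), rng.normal(size=2)
assert np.isclose(g(x, z) - g(x, y) - g(y, z), np.dot(A(x) - A(y), z - y))
```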
We assume that $A : C \to E^*$ satisfies the following assumptions:

(B1) A is pseudomonotone, i.e., for $x, y \in C$, $\langle Ax, y - x \rangle \geq 0 \Rightarrow \langle Ay, y - x \rangle \geq 0$;

(B2) A is L-Lipschitz continuous with respect to $D_f$, i.e., there exists $L > 0$ such that $D_f(Ax, Ay) \leq L D_f(x, y)$ for all $x, y \in C$;

(B3) A is weakly sequentially continuous, i.e., for any sequence $\{x_n\} \subset C$ such that $x_n \rightharpoonup x \in C$, we have $Ax_n \rightharpoonup Ax$.
Therefore, we can apply our result to solving the VIP, as follows.

Theorem 2. Let E be a real reflexive Banach space and C be a nonempty, closed, and convex subset of E. Let $A : C \to E^*$ be a mapping satisfying (B1)-(B3). For $i \in \aleph$, where $\aleph = \mathbb{N}\setminus\{0\}$, let $T_i : E \to E$ be a finite family of quasi-Bregman nonexpansive mappings such that $I - T_i$ are demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of E. Suppose that the solution set $\Gamma := \bigcap_i F(T_i) \cap VIP(C, A)$ is nonempty. Then the sequence $\{x_n\}$ generated by the corresponding algorithm converges strongly to a point $u^* \in \Gamma$.

Numerical Examples
In this section, we perform some numerical experiments to illustrate the performance of the proposed method, and we compare it with the method proposed by Eskandani et al. [39] (shortly, H-EGM), Algorithm 3.1 of Hieu and Strodiot [55] (shortly, HS-ALG. I), and Algorithm 4.1 of Hieu and Strodiot [55] (shortly, HS-ALG. II). All of the optimization subproblems are effectively solved by the quadprog function in MATLAB. The computations are carried out in MATLAB on a Lenovo X250 with an Intel(R) Core i7 vPro processor and 8.00 GB of RAM.

Example 1. First, we consider the generalized Nash equilibrium problem described as follows. Assume that there are m companies that produce a specific item. Let x denote the vector whose entry $x_j$ represents the quantity of the item produced by company j. We assume that the price $p_j(s)$ is a decreasing affine function of s with $s = \sum_{j=1}^m x_j$, i.e., $p_j(s) = \alpha_j - \beta_j s$, where $\alpha_j > 0$ and $\beta_j > 0$. It follows that the profit made by company j is given by $g_j(x) = p_j(s)x_j - c_j(x_j)$, where $c_j(x_j)$ is the tax and fee for producing $x_j$. Assume that $C_j = [x_j^{\min}, x_j^{\max}]$ is the strategy set of company j, so that the strategy set of the model is $C = C_1 \times C_2 \times \cdots \times C_m$. Each company seeks to maximize its profit by choosing the corresponding production level under the presumption that the production of the other companies is a parametric input. The renowned Nash equilibrium concept is a commonly used approach to this model.
We recall that a point $x^* \in C = C_1 \times C_2 \times \cdots \times C_m$ is called an equilibrium point of the model if
$$g_j(x^*) \geq g_j(x^*[x_j]) \quad \text{for all } x_j \in C_j, \ j = 1, 2, \ldots, m,$$
where the vector $x^*[x_j]$ stands for the vector obtained from $x^*$ by replacing $x_j^*$ with $x_j$. By taking $g(x, y) := \psi(x, y) - \psi(x, x)$ with $\psi(x, y) := -\sum_{j=1}^m g_j(x[y_j])$, the problem of finding a Nash equilibrium point of the model can be formulated as an EP. Now, let us suppose that the tax-fee function $c_j(x_j)$ is increasing and affine for each $j \geq 1$. This assumption implies that both the tax and the fee for producing a unit increase as the quantity of production gets larger. In this case, the bifunction g can be written in the form
$$g(x, y) = \langle Px + Qy + q, y - x \rangle,$$
where $q \in \mathbb{R}^m$ and P, Q are two matrices of order m such that Q is symmetric positive semidefinite and $Q - P$ is symmetric negative semidefinite. This shows that g is pseudomonotone. Moreover, it is easy to show that g satisfies the Lipschitz-type condition with $c_1 = c_2 = \frac{\|P - Q\|}{2}$. We suppose that the set C has the form $C = \{x \in \mathbb{R}^m : x_j^{\min} \leq x_j \leq x_j^{\max}\}$. The matrices P, Q are randomly generated such that their properties are satisfied, and the vector q is generated randomly with its entries in $(-2, 2)$. The mapping $T_i : \mathbb{R}^m \to \mathbb{R}^m$ is defined as the projection $P_C$, and the initial vector $x_0 \in \mathbb{R}^m$ is generated randomly for m = 10, 30, 50, 100. For Algorithm 1, we choose $u \in \mathbb{R}^m$, $\lambda_0 = 0.36$, $\alpha_n = \frac{1}{10(n+1)}$, and, for each $n \in \mathbb{N}$ and $i \geq 1$, a sequence $\{\beta_{n,i}\}$ satisfying the control conditions. For H-EGM, we take N = 1, M = 5, $\alpha_n = \frac{1}{10(n+1)}$, $\beta_{n,r} = \frac{1}{6}$, and choose the best stepsize $\lambda_n = \frac{1}{2.02c}$ for the algorithm. Also, for HS-ALG. I, we take $\alpha_n = \frac{1}{10(n+1)}$, $\beta_n = \frac{2n}{5n+8}$, $\lambda_n = \frac{1}{2.02c}$. Similarly, for HS-ALG. II, we choose $\alpha_n = \frac{1}{10(n+1)}$, $\beta_n = \frac{2n}{5n+8}$, $\gamma = 0.08$, $\alpha = 0.64$, $\nu = 0.8$. We use $\|x_{n+1} - x_n\|^2 < \varepsilon$ as the stopping criterion for the algorithms, where $\varepsilon = 10^{-5}$. The numerical results are shown in Table 1 and Figure 1.
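The structural claims about g can be checked numerically: with Q symmetric positive semidefinite and $Q - P$ symmetric negative semidefinite, $g(x, y) + g(y, x) = (x - y)^\top (Q - P)(x - y) \leq 0$, so g is monotone, hence pseudomonotone. The construction below (the dimension and the random seed are illustrative) follows this recipe.

```python
import numpy as np

# Illustrative instance of the Nash bifunction g(x, y) = <Px + Qy + q, y - x>
# with Q symmetric PSD and Q - P symmetric NSD (take P = Q + S, S symmetric PSD).
rng = np.random.default_rng(3)
m = 10
B = rng.normal(size=(m, m)); Q = B.T @ B          # symmetric PSD
R = rng.normal(size=(m, m)); S = R.T @ R          # symmetric PSD
P = Q + S                                         # so that Q - P = -S is NSD
q = rng.uniform(-2.0, 2.0, size=m)

g = lambda x, y: np.dot(P @ x + Q @ y + q, y - x)

# g(x,y) + g(y,x) = (x - y)^T (Q - P) (x - y) <= 0: g is monotone,
# hence pseudomonotone, on R^m.
for _ in range(200):
    x, y = rng.normal(size=m), rng.normal(size=m)
    assert g(x, y) + g(y, x) <= 1e-8

c = np.linalg.norm(P - Q, 2) / 2.0                # Lipschitz-like constants c1 = c2
assert c > 0
```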
From Table 1 and Figure 1, we see that Algorithm 1 performs better than H-EGM, HS-ALG I, and HS-ALG II. This is due to the fact that Algorithm 1 uses a self-adaptive technique to select its stepsize, while H-EGM and HS-ALG I use the choice $\lambda_n = \frac{1}{2.02c}$, which deteriorates the performance of the algorithms as the value of m increases. Furthermore, HS-ALG II uses a computationally expensive line search procedure to determine its stepsize at each iteration; this technique uses inner iterations and consumes additional computation time.

Example 2. Let $E = \ell_2(\mathbb{R})$ be the linear space whose elements are all 2-summable sequences $\{x_j\}_{j=1}^\infty$ of scalars in $\mathbb{R}$, that is,
$$\ell_2(\mathbb{R}) := \Big\{x = (x_1, x_2, \ldots, x_j, \ldots) : x_j \in \mathbb{R} \ \text{and} \ \sum_{j=1}^\infty |x_j|^2 < \infty\Big\},$$
with inner product $\langle \cdot, \cdot \rangle : \ell_2 \times \ell_2 \to \mathbb{R}$ and norm $\|\cdot\| : \ell_2 \to \mathbb{R}$ defined by $\langle x, y \rangle := \sum_{j=1}^\infty x_j y_j$ and $\|x\| := \big(\sum_{j=1}^\infty |x_j|^2\big)^{1/2}$. It is easy to show that g is a pseudomonotone bifunction that is not monotone, and that g satisfies conditions (A1)-(A5) with Lipschitz-like constants $c_1 = c_2 = \frac{5}{2}$. We define the mapping $T_i : \ell_2 \to \ell_2$ by $T_i x = \left(\frac{x_1}{2}, \frac{x_2}{2}, \ldots, \frac{x_i}{2}, \ldots\right)$. Then $T_i$ is QBNE with $F(T_i) = \{0\}$, and thus Sol = {0}. We use parameters and a stopping criterion similar to those used in Example 1 for the algorithms. Figure 2 and Table 2 show the numerical results. From Figure 2 and Table 2, we see that Algorithm 1 performs better than H-EGM, HS-ALG I, and HS-ALG II. Note that the Lipschitz-like constant for the cost operator in this example is $c = \frac{5}{2}$; consequently, the prior estimate of the stepsize for H-EGM and HS-ALG I can easily be obtained and is fixed for every iteration. More so, HS-ALG II uses a line search method to determine an appropriate stepsize at each iteration, whereas Algorithm 1 updates its stepsize at every iteration using a computationally inexpensive rule.

Example 3. Let $H = L_2([0, 1])$ with inner product $\langle x, y \rangle := \int_0^1 x(t)y(t)\,dt$ for all $x, y \in L_2([0, 1])$.
The set C is defined by $C = \{x \in H : \int_0^1 (t^2 + 1)x(t)\,dt \leq 1\}$, and the function $g : C \times C \to \mathbb{R}$ is given by $g(x, y) = \langle Ax, y - x \rangle$, where $Ax(t) = \max\{0, x(t)\}$, $t \in [0, 1]$, for all $x \in H$. We define the mapping $T : L_2([0, 1]) \to L_2([0, 1])$ by $T(x) = \int_0^1 \frac{x(t)}{2}\,dt$. It is not difficult to show that T is QBNE and Sol = {0}. We take $\alpha_n = \frac{1}{n+1}$ and $\beta_{n,i} = \frac{3n}{8n+11}$ for all the algorithms. For Algorithm 1, we take $\lambda_0 = 0.28$ and $u = \sin(3t)$. For H-EGM and HS-ALG. I, we take N = M = 1 and $\lambda_n = \frac{1}{3}$. Additionally, for HS-ALG. II, we take $\gamma = 0.05$, $\alpha = 0.28$, $\nu = 0.5$. We test the algorithms with the following initial values: Case I: $x_0 = t^2 + 1$; Case II: $x_0 = \frac{\cos(4t)}{4}$; Case III: $x_0 = \frac{\exp(3t)}{3}$; Case IV: $x_0 = \cos(2t)$.
We use $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion for the numerical computation and plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. Table 3 and Figure 3 present the numerical results.
From Table 3 and Figure 3, we see that Algorithm 1 also performs better than H-EGM, HS-ALG I, and HS-ALG II. The reason for this advantage is similar to that in Example 2.

Conclusions
In this paper, we introduced a Halpern-type Bregman subgradient extragradient method for solving the pseudomonotone equilibrium problem in a real reflexive Banach space. The stepsize of the algorithm is chosen by a self-adaptive method that does not require a prior estimate of the Lipschitz-like constants of the cost operator. We also proved that the sequence generated by our algorithm converges strongly to a common solution of the equilibrium and fixed point problems. Finally, we presented some numerical experiments to illustrate the performance and efficiency of the proposed method. The numerical results showed that the proposed algorithm performs better than other related methods in the literature in terms of the number of iterations and the CPU time taken for the computation.