
A Subgradient-Type Extrapolation Cyclic Method for Solving an Equilibrium Problem over the Common Fixed-Point Sets

by Porntip Promsinchai 1,2 and Nimit Nimana 3,*
1 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
2 Center of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
3 Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 992; https://doi.org/10.3390/sym14050992
Submission received: 11 April 2022 / Revised: 3 May 2022 / Accepted: 10 May 2022 / Published: 12 May 2022

Abstract: In this paper, we consider solving an equilibrium problem over the common fixed-point set of cutter mappings in a real Hilbert space. To this end, we present a subgradient-type extrapolation cyclic method. The proposed method combines the ideas of a subgradient method and an extrapolated cyclic cutter method. We prove strong convergence of the method under suitable assumptions on the step-size sequences. We finally illustrate the numerical behavior of the proposed method.

1. Introduction

Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. In this paper, we present an iterative method for the equilibrium problem over the intersection of fixed-point sets:
Problem 1 (BEP).
Let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutters with $\bigcap_{i=1}^{m}\operatorname{Fix}T_i\neq\emptyset$, and let $f:\mathcal{H}\times\mathcal{H}\to\mathbb{R}$ be a bifunction satisfying $f(x,x)=0$ for all $x\in\mathcal{H}$. Then, our objective is to find a point $\bar{u}\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ such that
$$f(\bar{u},y)\ge 0 \quad \text{for all } y\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i,$$
where $\operatorname{Fix}T_i:=\{x\in\mathcal{H}: T_i x = x\}$ denotes the set of fixed points of $T_i$.
The equilibrium problem, which was first introduced by Fan [1], includes many problems as particular cases, for example, the fixed-point problem, the variational inequality, the optimization problem, the saddle point problem, the Nash equilibrium problem in non-cooperative games, and others; see, for instance, [2,3,4,5] and references therein.
Equilibrium problems over fixed-point sets have been considered in many articles; see, for instance, [6,7,8,9,10] and references therein. Computational algorithms for solving these kinds of problems have been developed by combining methods for equilibrium problems with iterative schemes for fixed-point problems. In particular, Iiduka and Yamada [6] considered the equilibrium problem over the fixed-point set of a firmly nonexpansive mapping and presented a subgradient-type method for solving it. They showed the convergence of their method and applied it to Nash equilibrium problems. After that, equilibrium problems over the common fixed-point set of nonexpansive mappings were considered by Duc and Muu in [7]. They proposed a splitting algorithm built on the ideas of the classical gradient method and the Krasnosel'skii–Mann method and proved strong convergence of the presented algorithm. Recently, Thuy and Hai [8] considered bilevel equilibrium problems and proposed a projected subgradient algorithm to solve them. They established strong convergence of the proposed method and applied it to equilibrium problems over the fixed-point set of a nonexpansive mapping. We note that all of the aforementioned works treat equilibrium problems over the fixed-point sets of nonexpansive mappings.
Let us focus on the constraint set of BEP. Now, let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutter operators. The common fixed-point problem is to find a point
$$x^*\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i.$$
The well-known methods for finding a point in the intersection of fixed-point sets were initially motivated by the cyclic projection method, which was introduced by Kaczmarz [11]. After that, the convergence of cyclic projection-type methods was investigated in several directions, with convergence results guaranteed under assumptions on the operators, such as being cutters or nonexpansive; see [12,13,14,15,16] and references therein. In particular, Bauschke and Combettes [17] proposed the cyclic cutter method and showed its weak convergence. In [18], Cegielski and Censor presented the extrapolated cyclic cutter method, which accelerates the cyclic cutter method by imposing an appropriate step-size function. Indeed, let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutters with $\bigcap_{i=1}^{m}\operatorname{Fix}T_i\neq\emptyset$; we define the step-size function $\sigma:\mathcal{H}\to(0,\infty)$ as follows:
$$\sigma(x):=\begin{cases}\dfrac{\sum_{i=1}^{m}\langle Tx-S_{i-1}x,\,S_i x-S_{i-1}x\rangle}{\|Tx-x\|^{2}}, & \text{for } x\notin\bigcap_{i=1}^{m}\operatorname{Fix}T_i,\\[1ex] 1, & \text{otherwise},\end{cases}\tag{1}$$
where the operators $T$, $S_0$ and $S_i$, $i=1,2,\ldots,m$, are defined as
$$T:=T_m T_{m-1}\cdots T_1,\quad S_0:=\operatorname{Id},\quad\text{and}\quad S_i:=T_i T_{i-1}\cdots T_1.\tag{2}$$
It was shown that the extrapolated cyclic cutter method converges weakly provided that the cutter operators $T_i$ satisfy the demi-closedness principle for all $i=1,2,\ldots,m$. Note that, for some practical problems, the value of the extrapolation function may be huge, which leads to numerical instabilities. To avoid these instabilities, Cegielski and Nimana [19] proposed the modified extrapolated subgradient projection method for solving the convex feasibility problem, a particular case of the common fixed-point problem; they established the convergence of the proposed method and demonstrated its performance with numerical results. After that, the authors in [20] utilized the idea of the extrapolated cyclic cutter method to deal with the variational inequality problem with common fixed-point constraints. It can be noted from [19,20] that iterative methods with extrapolated cyclic cutter terms not only achieve numerical superiority over the classical cyclic cutter scheme but also guarantee the boundedness of the generated sequence; see [20] for further discussion.
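To make the extrapolation step size concrete, the following sketch (our own code, not from the paper) evaluates the step-size function σ for cutters given by metric projections onto half-spaces; the helper names are ours.

```python
import numpy as np

# A sketch of the step-size function sigma from the extrapolated cyclic cutter
# method, with each cutter T_i the metric projection onto {x : <c_i, x> <= 0}.
# Here S_0 = Id, S_i = T_i ... T_1, and T = S_m.

def halfspace_projection(c):
    """Metric projection onto the half-space {x : <c, x> <= 0} (a cutter)."""
    def P(x):
        v = c @ x
        return x - (v / (c @ c)) * c if v > 0 else x
    return P

def extrapolation_step_size(x, operators):
    """sigma(x) = sum_i <Tx - S_{i-1}x, S_i x - S_{i-1}x> / ||Tx - x||^2."""
    S = [x]
    for T_i in operators:
        S.append(T_i(S[-1]))        # S_i x = T_i S_{i-1} x
    Tx = S[-1]                      # T x = T_m ... T_1 x
    denom = np.linalg.norm(Tx - x) ** 2
    if denom == 0.0:                # x is already a common fixed point
        return 1.0
    num = sum((Tx - S[i - 1]) @ (S[i] - S[i - 1]) for i in range(1, len(S)))
    return num / denom
```

For a common fixed point the function returns 1; otherwise Lemma 2 in Section 2 guarantees the lower bound $\sigma(x)\ge 1/(2m)$.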
In this paper, we propose an iterative algorithm called the subgradient-type extrapolation cyclic method for solving equilibrium problems over the intersection of fixed-point sets of cutter operators. The proposed algorithm can be considered a combination of the subgradient iterative scheme for equilibrium problems in [8] and the extrapolated cyclic cutter method for the intersection of fixed-point sets of cutter operators in [18]. Under the cutter property of the operators and suitable assumptions, we investigate the convergence of the presented algorithm. Moreover, we present numerical results to illustrate the efficiency of the method.
This paper is organized as follows. In Section 2, we recall some definitions and tools needed for our convergence analysis. In Section 3, we present the subgradient-type extrapolation cyclic method for finding a solution of BEP and subsequently present the convergence result. In Section 4, the efficacy of the subgradient-type extrapolation cyclic method is illustrated by numerical experiments on solving an equilibrium problem governed by positive definite symmetric matrices over a common fixed-point set. Finally, we give some concluding remarks in Section 5.

2. Preliminaries

In this section, we collect some basic definitions, properties, and useful tools for our work. The reader can consult the books [16,21] for more details.
We denote by $\operatorname{Id}$ the identity operator on a real Hilbert space $\mathcal{H}$. The strong and weak convergence of a sequence $\{x_n\}_{n=1}^{\infty}$ to a point $x\in\mathcal{H}$ are denoted by $x_n\to x$ and $x_n\rightharpoonup x$, respectively.
In what follows, we recall some definitions and properties of the operator that will be referred to in our analysis.
Definition 1
([16]). Let $T:\mathcal{H}\to\mathcal{H}$ be an operator having a fixed point. The operator T is called
(i)
quasi-nonexpansive, if
$$\|Tx-z\|\le\|x-z\|,$$
for all $x\in\mathcal{H}$ and $z\in\operatorname{Fix}T$;
(ii)
η-strongly quasi-nonexpansive, if there exists $\eta\ge 0$ such that
$$\|Tx-z\|^{2}\le\|x-z\|^{2}-\eta\|Tx-x\|^{2},$$
for all $x\in\mathcal{H}$ and $z\in\operatorname{Fix}T$;
(iii)
cutter, if
$$\langle x-Tx,\,z-Tx\rangle\le 0,$$
for all $x\in\mathcal{H}$ and $z\in\operatorname{Fix}T$.
Lemma 1
[16] (Remark 2.1.31 and Theorem 2.1.39). Let $T:\mathcal{H}\to\mathcal{H}$ be an operator having a fixed point. Then the following statements are equivalent:
(i)
T is a cutter.
(ii)
$\langle Tx-x,\,z-x\rangle\ge\|Tx-x\|^{2}$ for all $x\in\mathcal{H}$ and all $z\in\operatorname{Fix}T$.
(iii)
T is 1-strongly quasi-nonexpansive.
Definition 2
[16] (Definition 3.2.6). Let $T:\mathcal{H}\to\mathcal{H}$ be an operator having a fixed point. The operator T is said to satisfy the demi-closedness (DC) principle if, for every sequence $\{x_n\}\subset\mathcal{H}$ with $x_n\rightharpoonup u\in\mathcal{H}$ and $Tx_n-x_n\to 0$, we have $u\in\operatorname{Fix}T$.
Definition 3
[16] (Definition 2.1.2). Let $T:\mathcal{H}\to\mathcal{H}$ be an operator and let $\lambda\in[0,2]$ be given. We define the relaxation of the operator T by
$$T_\lambda:=(1-\lambda)\operatorname{Id}+\lambda T,$$
and we call λ a relaxation parameter.
Next, we recall a generalization of the relaxation of an operator.
Definition 4
[16] (Definition 2.4.1). Let $T:\mathcal{H}\to\mathcal{H}$ be an operator, $\lambda\in[0,2]$ and $\sigma:\mathcal{H}\to(0,\infty)$. We define the operator $T_{\sigma,\lambda}:\mathcal{H}\to\mathcal{H}$ by
$$T_{\sigma,\lambda}x:=x+\lambda\sigma(x)(Tx-x).$$
This operator is called a generalized relaxation of the operator T, the value λ is called a relaxation parameter, and the function σ is called a step-size function. The operator $T_{\sigma,\lambda}$ is called an extrapolation of $T_\lambda$ if $\sigma(x)\ge 1$ for every $x\in\mathcal{H}$.
We notice that, if $\sigma(x)=1$ for every $x\in\mathcal{H}$, then $T_{\sigma,\lambda}=T_\lambda$. We write $T_\sigma:=T_{\sigma,1}$. Then, for every $x\in\mathcal{H}$, the following relations hold:
$$T_{\sigma,\lambda}x-x=\lambda\sigma(x)(Tx-x)=\lambda(T_\sigma x-x),\tag{3}$$
and, for any $\lambda\neq 0$, we have
$$\operatorname{Fix}T_{\sigma,\lambda}=\operatorname{Fix}T_\sigma=\operatorname{Fix}T.$$
Next, we provide an important lemma on the step-size function, which is used in proving the convergence result.
Lemma 2
[16] (Section 4.10). Let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutters with $\bigcap_{i=1}^{m}\operatorname{Fix}T_i\neq\emptyset$, and denote the operators T, $S_0$ and $S_i$, $i=1,2,\ldots,m$, as in (2). Let the function $\sigma:\mathcal{H}\to(0,\infty)$ be given by (1). Then the following statements are true:
(i)
For every $x\notin\operatorname{Fix}T$, it holds that
$$\sigma(x)\ge\frac{1}{2}\,\frac{\sum_{i=1}^{m}\|S_i x-S_{i-1}x\|^{2}}{\|Tx-x\|^{2}}\ge\frac{1}{2m}.$$
(ii)
The operator $T_\sigma$ is a cutter.
Now, we recall a notion and some properties of a diagonal subdifferential which will be used in this work.
A function $h:\mathcal{H}\to\mathbb{R}$ is said to be subdifferentiable at $x_0\in\mathcal{H}$ if there exists a vector $w\in\mathcal{H}$ such that
$$h(x)\ge h(x_0)+\langle w,\,x-x_0\rangle,\quad \forall x\in\mathcal{H}.$$
The vector w is called a subgradient of the function h at $x_0$. The collection of all such vectors constitutes the subdifferential of h at $x_0$ and is denoted by $\partial h(x_0)$.
Let $f:\mathcal{H}\times\mathcal{H}\to\mathbb{R}$ be a bifunction which is convex in the second argument, that is, the function $f(x,\cdot):\mathcal{H}\to\mathbb{R}$ is convex for all $x\in\mathcal{H}$. Then, the set of all subgradients of $f(x,\cdot)$ at x is called the diagonal subdifferential and is denoted by $\partial_2 f(x,x):=\partial f(x,\cdot)(x)$. The reader can find more details on the diagonal subdifferential in [22].
We end this section by recalling some technical lemmas that are important tools in proving our convergence result.
Lemma 3
[23] (Lemma 3.1). Let $\{a_n\}_{n=1}^{\infty}$ and $\{b_n\}_{n=1}^{\infty}$ be sequences of nonnegative real numbers such that
$$a_{n+1}\le a_n+b_n.$$
If $\sum_{n=1}^{\infty}b_n<\infty$, then $\lim_{n\to\infty}a_n$ exists.
Lemma 4
[24] (Lemma 3.1). Let $\{a_n\}_{n=1}^{\infty}$ be a sequence of real numbers such that there exists a subsequence $\{a_{n_j}\}_{j=1}^{\infty}$ of $\{a_n\}_{n=1}^{\infty}$ with $a_{n_j}<a_{n_j+1}$ for all $j\in\mathbb{N}$. If, for all $n\ge n_0$, we define
$$\nu(n)=\max\{k\in[n_0,n]:a_k<a_{k+1}\},$$
then the following hold:
(i)
$\{\nu(n)\}_{n\ge n_0}$ is non-decreasing.
(ii)
$\lim_{n\to\infty}\nu(n)=\infty$.
(iii)
$a_{\nu(n)}\le a_{\nu(n)+1}$ and $a_n\le a_{\nu(n)+1}$ for every $n\ge n_0$.
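The index map $\nu(n)$ of Lemma 4 can be checked directly on a toy sequence; the short script below (our own illustration, with $n_0=1$ and 1-based indexing) computes it and verifies properties (i) and (iii).

```python
# A toy illustration of the index sequence nu(n) in Lemma 4:
# nu(n) = max{k in [n0, n] : a_k < a_{k+1}}, with n0 = 1.

a = [3, 1, 4, 1, 5, 9, 2, 6]        # a_1, ..., a_8, chosen arbitrarily

def nu(n, n0=1):
    # a[k - 1] is a_k in 1-based notation; keep the last ascent index in [n0, n]
    return max(k for k in range(n0, n + 1) if a[k - 1] < a[k])

# nu(n) is well defined once [n0, n] contains an ascent; here from n = 2 on
values = [nu(n) for n in range(2, 8)]   # nu(2), ..., nu(7)
```

On this sequence `values` equals `[2, 2, 4, 5, 5, 7]`: it is non-decreasing, $a_{\nu(n)}<a_{\nu(n)+1}$ by construction, and $a_n\le a_{\nu(n)+1}$ holds for every n, matching the lemma.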

3. Algorithm and Its Convergence Result

In this section, we firstly propose the subgradient-type extrapolation cyclic method for solving BEP. Subsequently, we present useful lemmas and prove the main convergence theorem.
Remark 1.
(i)
When $m=1$ and $\sigma(x_n)=1$, Algorithm 1 becomes Algorithm 2 considered in [8]. Moreover, it is worth noting that the class of operators considered in this work differs from that of [8]: we assume the cutter property of the $T_i$, whereas the nonexpansiveness of T is assumed in [8].
(ii)
If the function $f(\cdot,\cdot)\equiv 0$, Algorithm 1 reduces to
$$x_{n+1}=x_n-\frac{\lambda_n}{\eta_n}\,\sigma(x_n)(x_n-Tx_n),$$
where $\eta_n=\max\{\mu,\|d_n\|\}$ for all $n\ge 1$. In the case when $\eta_n=1$ for all $n\ge 1$, this scheme is related to the extrapolated cyclic cutter method proposed in [18]. Moreover, this scheme is also related to the work of Cegielski and Nimana [19] for solving a convex feasibility problem when the operator $T_m$ in their paper is omitted.
The following assumption relating to the convergence of Algorithm 1 is assumed throughout this work.
Algorithm 1: Subgradient-type extrapolation cyclic method
Initialization: Given positive real sequences $\{\alpha_n\}_{n=1}^{\infty}$ and $\{\lambda_n\}_{n=1}^{\infty}$, choose $\mu\in(0,+\infty)$ and $x_1\in\mathcal{H}$ arbitrarily.
Step 1. For a given $x_n\in\mathcal{H}$, compute the step size
$$\sigma(x_n):=\begin{cases}\dfrac{\sum_{i=1}^{m}\langle Tx_n-S_{i-1}x_n,\,S_i x_n-S_{i-1}x_n\rangle}{\|Tx_n-x_n\|^{2}}, & \text{for } x_n\notin\bigcap_{i=1}^{m}\operatorname{Fix}T_i,\\[1ex] 1, & \text{otherwise}.\end{cases}$$
Step 2. Update the next iterate $x_{n+1}$ as
$$\begin{aligned}d_n&:=\sigma(x_n)(x_n-Tx_n)+\alpha_n w_n,\ \text{ where } w_n\in\partial_2 f(x_n,x_n),\\ \eta_n&:=\max\{\mu,\|d_n\|\},\\ x_{n+1}&:=x_n-\frac{\lambda_n}{\eta_n}d_n.\end{aligned}$$
Put $n:=n+1$ and go to Step 1.
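The updates of Algorithm 1 are easy to transcribe. The sketch below runs the method on a toy instance in $\mathbb{R}^2$; the half-space cutters $\operatorname{Fix}T_i=\{x:x_i\le 0\}$ and the bifunction $f(x,y)=\langle Ax+By,\,y-x\rangle$, whose diagonal subgradient is $w_n=(A+B)x_n$, are our illustrative choices, not data from the paper.

```python
import numpy as np

def project(i, x):                       # metric projection onto {x : x_i <= 0}
    y = x.copy()
    y[i] = min(y[i], 0.0)
    return y

def sigma(x, m):
    S = [x]
    for i in range(m):
        S.append(project(i, S[-1]))      # S_i x = T_i S_{i-1} x
    Tx = S[-1]
    den = np.linalg.norm(Tx - x) ** 2
    if den == 0.0:                       # x is a common fixed point
        return 1.0
    return sum((Tx - S[i - 1]) @ (S[i] - S[i - 1]) for i in range(1, m + 1)) / den

A = np.diag([3.0, 3.0]); B = np.diag([1.0, 1.0])   # f(x, y) = <Ax + By, y - x>
m, mu = 2, 1.0
alpha_c, lam_c, a, b = 0.2, 0.1, 0.4, 0.6          # step sizes as in Remark 2 (iv)
x = np.array([1.0, 1.0])
for n in range(1, 5001):
    Tx = project(1, project(0, x))                 # T = T_2 T_1
    w = (A + B) @ x                                # w_n in the diagonal subdifferential
    d = sigma(x, m) * (x - Tx) + (alpha_c / (n + 1) ** a) * w   # Step 2
    eta = max(mu, np.linalg.norm(d))
    x = x - (lam_c / (n + 1) ** b / eta) * d
# the unique solution of this instance is the origin; x drifts toward it slowly
```

The slow drift is expected: subgradient-type step sizes with $\sum\lambda_n^2<\infty$ trade speed for the strong convergence guarantee of Theorem 1.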
Assumption A1.
Assume that
(A1)
The bifunction f is ρ-strongly monotone on $\mathcal{H}$, that is, there exists a constant $\rho>0$ satisfying
$$f(x,y)+f(y,x)\le-\rho\|x-y\|^{2},\quad\forall x,y\in\mathcal{H}.$$
(A2)
For each x H , the function f ( x , · ) is convex, subdifferentiable and lower semicontinuous on H .
(A3)
The function x 2 f ( x , x ) is bounded on a bounded subset of H .
(A4)
The sequences $\{\lambda_n\}_{n=1}^{\infty}$ and $\{\alpha_n\}_{n=1}^{\infty}$ satisfy $\sum_{n=1}^{\infty}\lambda_n=\infty$, $\sum_{n=1}^{\infty}\lambda_n^{2}<\infty$, $\sum_{n=1}^{\infty}\alpha_n\lambda_n=\infty$, and $\lim_{n\to\infty}\alpha_n=0$.
Remark 2.
(i)
If the whole space $\mathcal{H}$ is finite dimensional, the assumption in (A2) that, for all $x\in\mathcal{H}$, the function $f(x,\cdot)$ is subdifferentiable and lower semicontinuous can be omitted. This is because, in the finite-dimensional setting, convexity implies continuity of a function.
(ii)
The convexity of the function $f(x,\cdot)$ implies that lower semicontinuity of $f(x,\cdot)$ is equivalent to its weak lower semicontinuity, for all $x\in\mathcal{H}$.
(iii)
If the whole space $\mathcal{H}$ is finite dimensional then, by invoking assumption (A2), the diagonal subdifferential $\partial_2 f(x_n,x_n):=\partial f(x_n,\cdot)(x_n)$ is nonempty for all $n\in\mathbb{N}$. Moreover, in this case, assumption (A3) can be omitted; see [21] (Proposition 16.20).
(iv)
An example of step-size sequences satisfying (A4) is given by
$$\alpha_n:=\frac{\alpha}{(n+1)^{a}}\quad\text{and}\quad\lambda_n:=\frac{\lambda}{(n+1)^{b}},$$
where $\alpha,\lambda>0$ and $a,b>0$ with $b>0.5$ and $a+b\le 1$. In fact, since $0<a+b\le 1$ and $b>0.5$, we have $0.5<b<1$, and then $\sum_{n=1}^{\infty}\lambda_n=\sum_{n=1}^{\infty}\frac{\lambda}{(n+1)^{b}}\ge\lambda\sum_{n=1}^{\infty}\frac{1}{n+1}=\infty$. Furthermore, since $1<2b<2$, we have $\sum_{n=1}^{\infty}\lambda_n^{2}=\sum_{n=1}^{\infty}\frac{\lambda^{2}}{(n+1)^{2b}}<\infty$. We note that $\sum_{n=1}^{\infty}\alpha_n\lambda_n=\alpha\lambda\sum_{n=1}^{\infty}\frac{1}{(n+1)^{a+b}}=\infty$. Moreover, $\lim_{n\to\infty}\alpha_n=\lim_{n\to\infty}\frac{\alpha}{(n+1)^{a}}=0$.
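The behavior described in (iv) can be seen numerically; the script below (ours, with $\alpha=\lambda=1$, $a=0.4$, $b=0.6$) shows that the partial sums of $\lambda_n^{2}$ settle down while those of $\lambda_n$ and $\alpha_n\lambda_n$ keep growing.

```python
# Partial sums of the step-size sequences of Remark 2 (iv) with
# alpha = lam = 1, a = 0.4, b = 0.6 (so b > 0.5 and a + b <= 1).
a, b = 0.4, 0.6
alpha_n = lambda n: 1.0 / (n + 1) ** a
lam_n = lambda n: 1.0 / (n + 1) ** b

N = 10 ** 5
S_lam = sum(lam_n(n) for n in range(1, N))                # grows like N^{0.4}
S_lam2 = sum(lam_n(n) ** 2 for n in range(1, N))          # converges: 2b = 1.2 > 1
S_both = sum(alpha_n(n) * lam_n(n) for n in range(1, N))  # grows like log N
```

Of course, a finite computation cannot prove divergence; it only illustrates the growth rates established analytically above.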
The following lemma states the important relation of the generated iterates.
Lemma 5.
Let $\{x_n\}_{n=1}^{\infty}$ be the sequence generated by Algorithm 1. Then, for every $n\in\mathbb{N}$ and $u\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$, it holds that
$$\|x_{n+1}-u\|^{2}\le\|x_n-u\|^{2}-\frac{\lambda_n}{4m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\lambda_n^{2}.$$
Proof. 
Let $n\in\mathbb{N}$ be fixed. Now, let us note that
$$\begin{aligned}\|x_{n+1}-u\|^{2}&=\Big\|x_n-\frac{\lambda_n}{\eta_n}d_n-u\Big\|^{2}\\&=\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\langle x_n-u,\,d_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&=\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\langle x_n-u,\,\sigma(x_n)(x_n-Tx_n)+\alpha_n w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&=\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\langle x_n-u,\,\sigma(x_n)(x_n-Tx_n)\rangle-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}.\end{aligned}$$
By using the properties of $T_\sigma$ in (3), Lemma 1 and Lemma 2, we note that
$$\begin{aligned}\|x_{n+1}-u\|^{2}&=\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\langle u-x_n,\,T_\sigma x_n-x_n\rangle-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&\le\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\|T_\sigma x_n-x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&=\|x_n-u\|^{2}-\frac{2\lambda_n}{\eta_n}\sigma^{2}(x_n)\|Tx_n-x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&\le\|x_n-u\|^{2}-\frac{\lambda_n}{2\eta_n}\frac{\big(\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}\big)^{2}}{\|Tx_n-x_n\|^{2}}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}\\&\le\|x_n-u\|^{2}-\frac{\lambda_n}{4m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\frac{\lambda_n^{2}}{\eta_n^{2}}\|d_n\|^{2}.\end{aligned}$$
Finally, by utilizing the fact that $\eta_n\ge\|d_n\|$, we obtain
$$\|x_{n+1}-u\|^{2}\le\|x_n-u\|^{2}-\frac{\lambda_n}{4m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\lambda_n^{2},$$
as desired. □
The following lemma guarantees the boundedness of the constructed sequence { x n } n = 1 .
Lemma 6.
The sequence { x n } n = 1 generated by Algorithm 1 is bounded.
Proof. 
Let $n\in\mathbb{N}$ and $u\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be fixed. Let us notice that
$$\frac{\lambda_n}{4m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}\ge 0,$$
which together with Lemma 5 yields
$$\|x_{n+1}-u\|^{2}\le\|x_n-u\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle+\lambda_n^{2}.\tag{4}$$
Now, we set $A_n:=\|x_n-u\|^{2}-\sum_{j=1}^{n-1}\lambda_j^{2}$ for all $n\in\mathbb{N}$. Thus, the relation (4) can be rewritten as
$$A_{n+1}-A_n+\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-u,\,w_n\rangle\le 0.\tag{5}$$
To show that the sequence { x n } n = 1 is bounded, we will consider the proof in two cases:
Case I: Suppose that there exists $n_0\in\mathbb{N}$ such that the sequence $\{A_n\}_{n=1}^{\infty}$ is nonincreasing for all $n\ge n_0$. Then, for all $n\ge n_0$, we obtain $\|x_n-u\|^{2}=A_n+\sum_{j=1}^{n-1}\lambda_j^{2}\le A_{n_0}+\sum_{j=1}^{\infty}\lambda_j^{2}<\infty$, which means that the sequence $\{\|x_n-u\|\}_{n=1}^{\infty}$ is bounded and, subsequently, $\{x_n\}_{n=1}^{\infty}$ is also a bounded sequence.
Case II: Suppose that there exists a subsequence $\{A_{n_k}\}_{k=1}^{\infty}$ of $\{A_n\}_{n=1}^{\infty}$ such that $A_{n_k}<A_{n_k+1}$ for all $k\in\mathbb{N}$, and let $\{\nu(n)\}_{n=1}^{\infty}$ be given as in Lemma 4. This yields, for every $n\ge n_0$, that
$$A_{\nu(n)}\le A_{\nu(n)+1}\tag{6}$$
and
$$A_n\le A_{\nu(n)+1}.\tag{7}$$
Invoking the relation (6) in the inequality (5) and the positivity of the sequences $\{\alpha_n\}_{n=1}^{\infty}$, $\{\lambda_n\}_{n=1}^{\infty}$ and $\{\eta_n\}_{n=1}^{\infty}$, we obtain that
$$\langle x_{\nu(n)}-u,\,w_{\nu(n)}\rangle\le 0.\tag{8}$$
On the other hand, by using the definition of $w_{\nu(n)}\in\partial_2 f(x_{\nu(n)},x_{\nu(n)})$ and the fact that $f(x_{\nu(n)},x_{\nu(n)})=0$, we get
$$\langle u-x_{\nu(n)},\,w_{\nu(n)}\rangle\le f(x_{\nu(n)},u)-f(x_{\nu(n)},x_{\nu(n)})=f(x_{\nu(n)},u).$$
This together with the inequality (8) yields that
$$f(x_{\nu(n)},u)\ge 0.$$
Now, it follows from the ρ-strong monotonicity of f that
$$\rho\|x_{\nu(n)}-u\|^{2}\le-f(x_{\nu(n)},u)-f(u,x_{\nu(n)})\le-f(u,x_{\nu(n)}).\tag{9}$$
On the other hand, for a fixed $\hat{u}\in\partial_2 f(u,u)$, we have
$$f(u,x_{\nu(n)})\ge\langle\hat{u},\,x_{\nu(n)}-u\rangle,$$
which together with the inequality (9) implies that
$$\rho\|x_{\nu(n)}-u\|^{2}\le-\langle\hat{u},\,x_{\nu(n)}-u\rangle\le\|\hat{u}\|\,\|x_{\nu(n)}-u\|,$$
and so
$$\|x_{\nu(n)}-u\|\le\rho^{-1}\|\hat{u}\|.$$
This means that the sequence $\{\|x_{\nu(n)}-u\|\}_{n=1}^{\infty}$ is bounded. Now, since
$$A_{\nu(n)+1}=\|x_{\nu(n)+1}-u\|^{2}-\sum_{j=1}^{\nu(n)}\lambda_j^{2}\le\|x_{\nu(n)+1}-u\|^{2},$$
it follows that $\{A_{\nu(n)+1}\}_{n=1}^{\infty}$ is bounded above. Thus, by using (7), we get that $\{A_n\}_{n=1}^{\infty}$ is bounded above, and hence $\{x_n\}_{n=1}^{\infty}$ is also bounded. This completes the proof. □
The following lemma provides some important boundedness properties of the sequences { d n } n = 1 and { η n } n = 1 .
Lemma 7.
The sequences { d n } n = 1 and { η n } n = 1 are bounded.
Proof. 
Let $n\in\mathbb{N}$ and $u\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ be fixed. Now, let us note that
$$\|d_n\|=\|\sigma(x_n)(x_n-Tx_n)+\alpha_n w_n\|\le\|T_\sigma x_n-x_n\|+\alpha_n\|w_n\|\le\|T_\sigma x_n-u\|+\|x_n-u\|+\alpha_n\|w_n\|\le 2\|x_n-u\|+\alpha_n\|w_n\|,$$
where the first inequality holds true by (3) and the last one holds true by the fact that $T_\sigma$ is a cutter and consequently a quasi-nonexpansive operator.
As w n 2 f ( x n , x n ) , we have from Assumption (A3) and the boundedness of { x n } n = 1 that the sequence { w n } n = 1 is bounded which implies the sequence { d n } n = 1 is also bounded. Consequently, from the definition of the sequence { η n } n = 1 , it can be seen that { η n } n = 1 is also bounded. □
Now, we are in a position to present our main theorem.
Theorem 1.
Let $\{x_n\}_{n=1}^{\infty}$ be the sequence generated by Algorithm 1. Suppose that Assumption A1 is satisfied and the operators $T_i$, $i=1,2,\ldots,m$, satisfy the DC principle. Then the sequence $\{x_n\}_{n=1}^{\infty}$ converges strongly to the unique solution $x^*$ of BEP.
Proof. 
Let $x^*$ be the unique solution of BEP. Firstly, we note from Lemma 5, with $u=x^*$, that
$$\begin{aligned}&\|x_{n+1}-x^*\|^{2}+\sum_{j=1}^{n}\frac{\lambda_j}{8m\eta_j}\sum_{i=1}^{m}\|S_i x_j-S_{i-1}x_j\|^{2}-\sum_{j=1}^{n}\lambda_j^{2}\\&\quad\le\|x_n-x^*\|^{2}+\sum_{j=1}^{n-1}\frac{\lambda_j}{8m\eta_j}\sum_{i=1}^{m}\|S_i x_j-S_{i-1}x_j\|^{2}-\sum_{j=1}^{n-1}\lambda_j^{2}-\frac{\lambda_n}{8m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-x^*,\,w_n\rangle.\end{aligned}\tag{10}$$
For simplicity, we denote $\Gamma_n:=\|x_n-x^*\|^{2}+\sum_{j=1}^{n-1}\frac{\lambda_j}{8m\eta_j}\sum_{i=1}^{m}\|S_i x_j-S_{i-1}x_j\|^{2}-\sum_{j=1}^{n-1}\lambda_j^{2}$ for all $n\ge 2$. Then the inequality (10) is nothing else than
$$\Gamma_{n+1}\le\Gamma_n-\frac{\lambda_n}{8m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}-\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-x^*,\,w_n\rangle.\tag{11}$$
To obtain the strong convergence of the generated sequence, we investigate the proof in two cases based on the behavior of the sequence { Γ n } n = 1 .
Case I: Suppose that there is $n_0\in\mathbb{N}$ such that $\Gamma_{n+1}\le\Gamma_n$ for every $n\ge n_0$. Thus, by using the definition of $\Gamma_n$, we note that
$$\|x_{n+1}-x^*\|^{2}\le\|x_n-x^*\|^{2}-\frac{\lambda_n}{8m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}+\lambda_n^{2}.$$
By utilizing Lemma 3 and the fact that $\sum_{n=1}^{\infty}\lambda_n^{2}<\infty$, we obtain that the sequence $\{\|x_n-x^*\|\}_{n=1}^{\infty}$ is convergent and
$$\sum_{n=1}^{\infty}\frac{\lambda_n}{8m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}<\infty.\tag{13}$$
Now, as $\sum_{n=1}^{\infty}\lambda_n=\infty$, we get that
$$\lim_{n\to\infty}\frac{1}{\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}=0.$$
As the sequence $\{\eta_n\}_{n=1}^{\infty}$ is bounded, we have that, for all $i=1,2,\ldots,m$,
$$\lim_{n\to\infty}\|S_i x_n-S_{i-1}x_n\|=0.\tag{12}$$
On the other hand, we note from Lemma 5 and the fact that $\frac{\lambda_n}{4m\eta_n}\sum_{i=1}^{m}\|S_i x_n-S_{i-1}x_n\|^{2}\ge 0$, for all $n\in\mathbb{N}$, that
$$\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-x^*,\,w_n\rangle\le\|x_n-x^*\|^{2}-\|x_{n+1}-x^*\|^{2}+\lambda_n^{2}.$$
By summing up this relation and using the condition $\sum_{n=1}^{\infty}\lambda_n^{2}<\infty$, we obtain
$$\sum_{n=1}^{\infty}\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-x^*,\,w_n\rangle<\infty.\tag{14}$$
Now, since the sequence $\{\eta_n\}_{n=1}^{\infty}$ is bounded, there is a real number $M>0$ such that $\eta_n\le M$ for all $n\in\mathbb{N}$. This together with the assumption $\sum_{n=1}^{\infty}\alpha_n\lambda_n=\infty$ implies that
$$\sum_{n=1}^{\infty}\frac{\alpha_n\lambda_n}{\eta_n}\ge\sum_{n=1}^{\infty}\frac{\alpha_n\lambda_n}{M}=\infty.$$
Next, we show that $\liminf_{n\to\infty}\langle x_n-x^*,\,w_n\rangle\le 0$. Suppose to the contrary that there exist $n_0\in\mathbb{N}$ and $\delta>0$ such that $\langle x_n-x^*,\,w_n\rangle\ge\delta$ for all $n\ge n_0$. Then,
$$\infty=\delta\sum_{n=n_0}^{\infty}\frac{2\alpha_n\lambda_n}{M}\le\delta\sum_{n=n_0}^{\infty}\frac{2\alpha_n\lambda_n}{\eta_n}\le\sum_{n=n_0}^{\infty}\frac{2\alpha_n\lambda_n}{\eta_n}\langle x_n-x^*,\,w_n\rangle<\infty,$$
which leads to a contradiction. Thus, we obtain
$$\liminf_{n\to\infty}\langle x_n-x^*,\,w_n\rangle\le 0.\tag{15}$$
From the ρ-strong monotonicity of f, it follows that
$$\rho\|x_n-x^*\|^{2}\le-f(x_n,x^*)-f(x^*,x_n)\le\langle x_n-x^*,\,w_n\rangle-f(x^*,x_n).$$
Then,
$$\rho\|x_n-x^*\|^{2}+f(x^*,x_n)\le\langle x_n-x^*,\,w_n\rangle.$$
By taking the inferior limit, we have
$$\rho\liminf_{n\to\infty}\|x_n-x^*\|^{2}+\liminf_{n\to\infty}f(x^*,x_n)\le\liminf_{n\to\infty}\big(\rho\|x_n-x^*\|^{2}+f(x^*,x_n)\big)\le\liminf_{n\to\infty}\langle x_n-x^*,\,w_n\rangle.$$
Combining this and the inequality (15), we have
$$\liminf_{n\to\infty}\|x_n-x^*\|^{2}\le-\rho^{-1}\liminf_{n\to\infty}f(x^*,x_n).\tag{16}$$
Since the sequence $\{x_n\}_{n=1}^{\infty}$ is bounded, there exist a weak cluster point $z\in\mathcal{H}$ and a subsequence $\{x_{n_k}\}_{k=1}^{\infty}$ of $\{x_n\}_{n=1}^{\infty}$ such that $x_{n_k}\rightharpoonup z$. We note from (12) that
$$\lim_{k\to\infty}\|(T_1-\operatorname{Id})x_{n_k}\|=\lim_{k\to\infty}\|S_1 x_{n_k}-S_0 x_{n_k}\|=0.$$
Thus, by using the DC principle of $T_1$, we have $z\in\operatorname{Fix}T_1$. Furthermore, since $x_{n_k}\rightharpoonup z$ and $\lim_{k\to\infty}\|T_1 x_{n_k}-x_{n_k}\|=0$, it follows that $T_1 x_{n_k}\rightharpoonup z$. Moreover, we note that
$$\lim_{k\to\infty}\|(T_2-\operatorname{Id})T_1 x_{n_k}\|=\lim_{k\to\infty}\|S_2 x_{n_k}-S_1 x_{n_k}\|=0.$$
By utilizing the DC principle of $T_2$, we have $z\in\operatorname{Fix}T_2$.
By proceeding with a similar argument, we acquire $z\in\operatorname{Fix}T_i$ for all $i=1,2,\ldots,m$, and hence $z\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$.
By virtue of the weak lower semicontinuity of $f(x^*,\cdot)$, and choosing the subsequence $\{x_{n_k}\}_{k=1}^{\infty}$ so that it attains the inferior limit, we obtain
$$\liminf_{n\to\infty}f(x^*,x_n)=\lim_{k\to\infty}f(x^*,x_{n_k})=\liminf_{k\to\infty}f(x^*,x_{n_k})\ge f(x^*,z)\ge 0.\tag{17}$$
Combining the inequalities (16) and (17), we have $\liminf_{n\to\infty}\|x_n-x^*\|=0$. From the existence of $\lim_{n\to\infty}\|x_n-x^*\|$, we can conclude that
$$\lim_{n\to\infty}\|x_n-x^*\|=0.$$
Case II: Suppose that there exists a subsequence $\{\Gamma_{n_k}\}_{k=1}^{\infty}$ of $\{\Gamma_n\}_{n=1}^{\infty}$ such that $\Gamma_{n_k}<\Gamma_{n_k+1}$ for all $k\in\mathbb{N}$. By Lemma 4, there exists a sequence of indices $\{\nu(n)\}_{n=1}^{\infty}$ such that, for all $n\ge n_0$,
$$\Gamma_{\nu(n)}\le\Gamma_{\nu(n)+1}\tag{18}$$
and
$$\Gamma_n\le\Gamma_{\nu(n)+1}.\tag{19}$$
By using the inequalities (11) and (18), we have
$$0\le\Gamma_{\nu(n)+1}-\Gamma_{\nu(n)}\le-\frac{\lambda_{\nu(n)}}{8m\eta_{\nu(n)}}\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}-\frac{2\alpha_{\nu(n)}\lambda_{\nu(n)}}{\eta_{\nu(n)}}\langle x_{\nu(n)}-x^*,\,w_{\nu(n)}\rangle.$$
Then,
$$\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}\le-16m\,\alpha_{\nu(n)}\langle x_{\nu(n)}-x^*,\,w_{\nu(n)}\rangle.\tag{20}$$
By using the definition of $w_{\nu(n)}\in\partial_2 f(x_{\nu(n)},x_{\nu(n)})$ and the fact that $f(x_{\nu(n)},x_{\nu(n)})=0$, we get
$$\langle x^*-x_{\nu(n)},\,w_{\nu(n)}\rangle\le f(x_{\nu(n)},x^*)-f(x_{\nu(n)},x_{\nu(n)})=f(x_{\nu(n)},x^*),\tag{21}$$
which implies that
$$\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}\le 16m\,\alpha_{\nu(n)}f(x_{\nu(n)},x^*).$$
Now, using the ρ-strong monotonicity of f, we have
$$f(x_{\nu(n)},x^*)\le-\rho\|x_{\nu(n)}-x^*\|^{2}-f(x^*,x_{\nu(n)}),$$
and, for a fixed $w^*\in\partial_2 f(x^*,x^*)$, we have
$$f(x^*,x_{\nu(n)})\ge\langle w^*,\,x_{\nu(n)}-x^*\rangle,$$
so that we obtain
$$\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}\le-16m\,\alpha_{\nu(n)}\rho\|x_{\nu(n)}-x^*\|^{2}-16m\,\alpha_{\nu(n)}\langle w^*,\,x_{\nu(n)}-x^*\rangle\le-16m\,\alpha_{\nu(n)}\langle w^*,\,x_{\nu(n)}-x^*\rangle.$$
By using the boundedness of $\{x_n\}$ and $\lim_{n\to\infty}\alpha_n=0$, we obtain
$$\lim_{n\to\infty}\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}=0.$$
This implies that, for all $i=1,2,\ldots,m$,
$$\lim_{n\to\infty}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|=0.\tag{22}$$
On the other hand, by using the ρ-strong monotonicity of f and the inequality (21), we have
$$\rho\|x_{\nu(n)}-x^*\|^{2}\le-f(x_{\nu(n)},x^*)-f(x^*,x_{\nu(n)})\le\langle x_{\nu(n)}-x^*,\,w_{\nu(n)}\rangle-f(x^*,x_{\nu(n)}).$$
By means of the fact that $\sum_{i=1}^{m}\|S_i x_{\nu(n)}-S_{i-1}x_{\nu(n)}\|^{2}\ge 0$ in (20), it follows that
$$\langle x_{\nu(n)}-x^*,\,w_{\nu(n)}\rangle\le 0.$$
Combining this and the above inequality, we obtain
$$\|x_{\nu(n)}-x^*\|^{2}\le-\rho^{-1}f(x^*,x_{\nu(n)}).$$
By taking the superior limit, we have
$$\limsup_{n\to\infty}\|x_{\nu(n)}-x^*\|^{2}\le-\rho^{-1}\liminf_{n\to\infty}f(x^*,x_{\nu(n)}).\tag{23}$$
As the sequence $\{x_{\nu(n)}\}_{n=1}^{\infty}$ is bounded, there exist a weak cluster point $z\in\mathcal{H}$ and a subsequence $\{x_{\nu(n_k)}\}_{k=1}^{\infty}$ of $\{x_{\nu(n)}\}_{n=1}^{\infty}$ such that $x_{\nu(n_k)}\rightharpoonup z$. By following the argument used in Case I together with the fact (22) and the DC principle of each $T_i$, we obtain that $z\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$.
By using the weak lower semicontinuity of $f(x^*,\cdot)$, and choosing the subsequence so that it attains the inferior limit, we obtain
$$\liminf_{n\to\infty}f(x^*,x_{\nu(n)})=\lim_{k\to\infty}f(x^*,x_{\nu(n_k)})\ge f(x^*,z)\ge 0.$$
It follows from the inequality (23) that
$$\limsup_{n\to\infty}\|x_{\nu(n)}-x^*\|^{2}\le-\rho^{-1}\liminf_{n\to\infty}f(x^*,x_{\nu(n)})\le 0.$$
Then, we obtain
$$\lim_{n\to\infty}\|x_{\nu(n)}-x^*\|^{2}=0.\tag{24}$$
Note that, from the definition of $x_{\nu(n)+1}$ and the fact that $\eta_{\nu(n)}\ge\|d_{\nu(n)}\|$, we have
$$\|x_{\nu(n)+1}-x_{\nu(n)}\|=\Big\|x_{\nu(n)}-\frac{\lambda_{\nu(n)}}{\eta_{\nu(n)}}d_{\nu(n)}-x_{\nu(n)}\Big\|=\frac{\lambda_{\nu(n)}}{\eta_{\nu(n)}}\|d_{\nu(n)}\|\le\lambda_{\nu(n)}.$$
Combining this with the triangle inequality, we have
$$\|x_{\nu(n)+1}-x^*\|\le\|x_{\nu(n)+1}-x_{\nu(n)}\|+\|x_{\nu(n)}-x^*\|\le\lambda_{\nu(n)}+\|x_{\nu(n)}-x^*\|.$$
By using the limit (24) and the fact that $\lim_{n\to\infty}\lambda_n=0$, we obtain
$$\lim_{n\to\infty}\|x_{\nu(n)+1}-x^*\|=0.\tag{25}$$
Next, using the inequality (19) and the fact that $\nu(n)\le n$, we have
$$\begin{aligned}\|x_n-x^*\|^{2}&\le\|x_{\nu(n)+1}-x^*\|^{2}-\sum_{j=\nu(n)+1}^{n-1}\frac{\lambda_j}{8m\eta_j}\sum_{i=1}^{m}\|S_i x_j-S_{i-1}x_j\|^{2}+\sum_{j=\nu(n)+1}^{n-1}\lambda_j^{2}\\&\le\|x_{\nu(n)+1}-x^*\|^{2}+\sum_{j=\nu(n)+1}^{n-1}\lambda_j^{2}.\end{aligned}$$
Finally, by using the limit (25) and the fact that $\lim_{n\to\infty}\sum_{j=\nu(n)+1}^{n-1}\lambda_j^{2}=0$ (recall that $\nu(n)\to\infty$ and $\sum_{j=1}^{\infty}\lambda_j^{2}<\infty$), we obtain
$$\lim_{n\to\infty}\|x_n-x^*\|^{2}=0.$$
This completes the proof. □
Remark 3.
The DC principle assumption in Theorem 1 holds true when the operators $T_i$, $i=1,\ldots,m$, are nonexpansive. Moreover, the metric projection onto a closed convex set and the subgradient projection of a continuous convex function that is Lipschitz continuous on every bounded subset also satisfy the DC principle; see [16] for further details.
Remark 4.
It can be noted that the convergence result obtained in Theorem 1 holds true without any boundedness assumption on the generated sequence, in contrast to previous works, for instance [20]. This underlines the convergence improvements accomplished in this work.

4. A Numerical Example

In this section, we present a numerical example of solving an equilibrium problem over a finite number of half-space constraints. Let A and B be $n\times n$ matrices, and let $c_i\in\mathbb{R}^{n}$ and $d_i\ge 0$ be given for all $i=1,2,\ldots,m$. We consider the following equilibrium problem: find a point $\bar{u}\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i$ such that
$$\langle A\bar{u}+By,\,y-\bar{u}\rangle\ge 0\quad\text{for all } y\in\bigcap_{i=1}^{m}\operatorname{Fix}T_i,\tag{27}$$
where the constraint set is
$$\operatorname{Fix}T_i=C_i:=\{x\in\mathbb{R}^{n}:\langle c_i,x\rangle\le d_i\},\quad i=1,2,\ldots,m.$$
We consider the operator $T_i$ in two cases. In the first case, we take $T_i$ to be the subgradient projection defined by
$$P_{g_i}(x)=\begin{cases}x-\dfrac{g_i(x)}{\|\nabla g_i(x)\|^{2}}\nabla g_i(x), & \text{if } g_i(x)>0,\\ x, & \text{otherwise},\end{cases}$$
where $g_i(x)=\frac{1}{2}\operatorname{dist}(x,C_i)^{2}$ and the distance function is given by $\operatorname{dist}(x,C_i):=\inf_{z\in C_i}\|z-x\|$. In the second case, we take $T_i:=P_{C_i}$, the metric projection onto $C_i$, for all $i=1,2,\ldots,m$. It is known that the operators $P_{g_i}$ and $P_{C_i}$ are cutters and satisfy the DC principle with $\operatorname{Fix}T_i=C_i$. We consider positive definite symmetric matrices A and B defined by $B:=N^{\top}N+nI_n$ and $A:=B+M^{\top}M+nI_n$, where the $n\times n$ matrices N and M are randomly generated with entries in $(0,1)$, and $I_n$ is the $n\times n$ identity matrix. Note that the bifunction $f(x,y):=\langle Ax+By,\,y-x\rangle$ is strongly monotone on $\mathcal{H}$, and, for fixed $x\in\mathcal{H}$, the function $f(x,\cdot)$ is convex on $\mathcal{H}$. Moreover, the diagonal subdifferential is $\partial_2 f(x,x)=\{(A+B)x\}$, and the function $x\mapsto\partial_2 f(x,x)$ is bounded on bounded subsets of $\mathcal{H}$. Hence, assumptions (A1)–(A3) are satisfied. In this case, the problem (27) is a particular case of Problem 1, so that Algorithm 1 can be applied to solve it.
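The problem data are straightforward to generate. The sketch below is our own code (we read the construction as the symmetrized products $N^{\top}N$ and $M^{\top}M$, which is an assumption); it also verifies assumption (A1) numerically via the identity $f(x,y)+f(y,x)=-\langle(A-B)(x-y),\,x-y\rangle$.

```python
import numpy as np

# Data for the Section 4 example: B = N^T N + n I_n and A = B + M^T M + n I_n
# are symmetric positive definite, f(x, y) = <Ax + By, y - x>, and the
# diagonal subdifferential is d_2 f(x, x) = {(A + B) x}.

rng = np.random.default_rng(0)
n = 20
N = rng.uniform(0.0, 1.0, (n, n))
M = rng.uniform(0.0, 1.0, (n, n))
B = N.T @ N + n * np.eye(n)
A = B + N.T @ N * 0 + M.T @ M + n * np.eye(n)   # A = B + M^T M + n I_n

def f(x, y):
    return (A @ x + B @ y) @ (y - x)

def diag_subgradient(x):
    """The single element (A + B) x of d_2 f(x, x)."""
    return (A + B) @ x

# f(x,y) + f(y,x) = -<(A - B)(x - y), x - y>, so f is rho-strongly monotone
# with rho the smallest eigenvalue of A - B = M^T M + n I_n (hence rho >= n).
rho = np.linalg.eigvalsh(M.T @ M + n * np.eye(n)).min()
```

The finite-difference check in the test below confirms that $(A+B)x$ is indeed the gradient of $f(x,\cdot)$ at $y=x$.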
We consider the behavior of the sequence $\{x_n\}_{n=1}^{\infty}$ generated by Algorithm 1 for various positive real sequences $\{\alpha_n\}_{n=1}^{\infty}$ and $\{\lambda_n\}_{n=1}^{\infty}$ of the form given in Remark 2 (iv). We choose $\mu=1$, generate each vector $c_i\in\mathbb{R}^{n}$ with entries uniformly distributed in $(0,1)$, and set $d_i=0$ for all $i=1,2,\ldots,m$. We choose the starting point of Algorithm 1 to be the vector whose coordinates are all one. We terminate Algorithm 1 by the stopping criterion
$$\frac{\|x_{n+1}-x_n\|}{\|x_{n+1}\|}\le\varepsilon.$$
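Putting the pieces together, a compact driver for this experiment can be sketched as follows (our own code; the dimensions and the tolerance are scaled down from the paper's experiment to keep the run short, and only the metric-projection case is shown):

```python
import numpy as np

# Algorithm 1 with metric-projection cutters T_i = P_{C_i},
# C_i = {x : <c_i, x> <= 0}, and the relative-change stopping criterion
# ||x_{n+1} - x_n|| / ||x_{n+1}|| <= eps.  Parameters follow the reported
# best choice alpha = 0.2, lam = 0.1, a = 0.4, b = 0.6.

rng = np.random.default_rng(1)
dim, m = 10, 5
N_ = rng.uniform(size=(dim, dim)); M_ = rng.uniform(size=(dim, dim))
B = N_.T @ N_ + dim * np.eye(dim)
A = B + M_.T @ M_ + dim * np.eye(dim)
C = rng.uniform(size=(m, dim))                 # rows c_i, with d_i = 0

def proj(c, x):                                # metric projection onto C_i
    v = c @ x
    return x - (v / (c @ c)) * c if v > 0 else x

alpha, lam, a, b, mu, eps = 0.2, 0.1, 0.4, 0.6, 1.0, 1e-5
x = np.ones(dim)                               # all-ones starting point
for n in range(1, 20001):
    S = [x]
    for c in C:                                # S_i = T_i ... T_1
        S.append(proj(c, S[-1]))
    Tx = S[-1]
    den = np.linalg.norm(Tx - x) ** 2
    sig = 1.0 if den == 0.0 else sum(
        (Tx - S[i - 1]) @ (S[i] - S[i - 1]) for i in range(1, m + 1)) / den
    d = sig * (x - Tx) + (alpha / (n + 1) ** a) * (A + B) @ x
    eta = max(mu, np.linalg.norm(d))
    x_new = x - (lam / (n + 1) ** b / eta) * d
    stop = np.linalg.norm(x_new - x) <= eps * np.linalg.norm(x_new)
    x = x_new
    if stop:
        break
```

For these data the origin is feasible and solves the problem, so the iterates should drift toward it; with the diminishing step sizes this is slow, which is consistent with the iteration counts reported in the tables below.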
In the first experiment, we fix the parameters $a=0.40$, $b=0.60$ and $\varepsilon=10^{-6}$. We perform 10 independent tests for each combination of the parameters $\alpha=0.10,0.20,0.30,0.40,0.50$ and $\lambda=0.10,0.20,\ldots,0.90$ when utilizing the operators $T_i:=P_{g_i}$ and $T_i:=P_{C_i}$; the results are presented in Table 1 and Table 2, respectively, where the average number of iterations and the average computational runtime for each combination of α and λ are reported.
In Table 1, we present the average number of iterations (Iter) and the computational time in seconds (Time) when the stopping criterion of Algorithm 1 was met. Note that a larger $\lambda\in[0.20,0.90]$ requires a larger number of iterations and a longer runtime. Furthermore, the best choice of the involved parameters is $\alpha=0.20$ and $\lambda=0.10$.
In a similar fashion to Table 1, we present in Table 2 the average number of iterations (Iter) and the computational time in seconds (Time) when the stopping criterion of Algorithm 1 with the operator $T_i=P_{C_i}$ was met. The results follow the same trend as in Table 1, where the best choice of the involved parameters is again $\alpha=0.20$ and $\lambda=0.10$.
In the next experiment, we consider the influence of parameters a and b by fixing the best parameters α = 0.20 , λ = 0.10 and ε = 10 6 . We performed 10 independent tests for any collections of parameters a = 0.10 , 0.15 , 0.20 , 0.25 , 0.30 , 0.35 to 0.40 and b = 0.55 , 0.60 , 0.65 , 0.70 , 0.75 , 0.80 , 0.85 , and 0.90 when utilizing the operator T i :   = P g i and T i :   = P C i and the results of the average number of iterations and the average computational runtime for any collection of a and b are presented in Table 3 and Table 4, respectively. We omit the combinations that do not satisfy the assumption in Theorem 1 and label it by -.
In Table 3, we see that both the number of iterations and the computational runtime decrease as the value of a increases. The best result is obtained for the combination a = 0.40 and b = 0.60.
In the same direction as Table 3, Table 4 shows that the number of iterations and the computational runtime decrease as a grows. The best result is again obtained for the combination a = 0.40 and b = 0.60.
From all the above experiments, we observe that the parameter choice α = 0.20, λ = 0.10, a = 0.40, and b = 0.60 yields the best performance in both considered cases.
In the next experiment, we consider the behavior of Algorithm 1 for various values of n and m, fixing the parameters at the best choice above. We again terminate Algorithm 1 when the error tolerance ε = 10^{-6} is met; the results are presented in Table 5.
It is observed from Table 5 that for n = 200, 300, 400, and 500, using the subgradient projection is more efficient than using the metric projection, in the sense that the former requires fewer iterations on average than the latter for all values of m. For n = 1000, we observe no difference between the two cases. One notable behavior is that, for each value of n, the average number of iterations stays almost the same even as m increases, whereas the average computational runtime grows.
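The computational gap between the two operators can be illustrated on a toy constraint. The sketch below contrasts the exact metric projection onto a Euclidean ball with the subgradient projection of the hypothetical level-set function g(x) = ‖x‖² − r²; the actual sets C_i and functions g_i of the experiment follow the paper's earlier setup, so this example is only meant to show why the two operators behave differently: the subgradient projection needs one function value and one (sub)gradient, but it generally lands short of the constraint set.

```python
import numpy as np

def metric_projection_ball(x, r):
    """Exact metric projection onto the ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def subgradient_projection_ball(x, r):
    """Subgradient projection for g(x) = ||x||^2 - r^2, whose gradient is 2x:
    P_g(x) = x - max(g(x), 0) / ||grad||^2 * grad (a relaxed step toward the ball)."""
    g = np.dot(x, x) - r**2
    if g <= 0:
        return x                      # already feasible: g(x) <= 0
    grad = 2.0 * x
    return x - (g / np.dot(grad, grad)) * grad
```

For x = (2, 0) and r = 1, the metric projection returns (1, 0), a point of the ball, whereas the subgradient projection returns (1.25, 0), which is closer to the ball but still infeasible; a fixed-point iteration only reaches feasibility in the limit.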
Finally, we compare the use of the subgradient projection and the metric projection for various optimality tolerances ε. We set n = 500 and m = 50 and choose the parameters as above; the average numbers of iterations with respect to the optimality tolerances are presented in Figure 1.
The plots in Figure 1 show that the subgradient projection is more efficient than the metric projection for all optimality tolerances, which underlines the advantage of using the subgradient projection when performing Algorithm 1.

5. Conclusions

In this work, we considered the solving of a bilevel equilibrium problem governed by a strongly monotone bifunction over the intersection of the fixed-point sets of cutter operators. We associated with it the so-called subgradient-type extrapolation cyclic method and proved that the sequence generated by the proposed method converges strongly to the unique solution of the problem. Our numerical experiments showed that choosing appropriate operators can yield better convergence behavior of the proposed method.
The proposed subgradient-type extrapolation cyclic method (Algorithm 1) allows us to compute the operators T_i, i = 1, …, m, sequentially. The main advantage of this approach is that the computing machine need not store all intermediate information at once. On the other hand, by the nature of a cyclic method, computing S_i requires the estimate S_{i-1} to be in hand, so a waiting process is built into the method. In that case, one may instead consider the simultaneous extrapolation method [25] when dealing with the common fixed-point constrained BEP.
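The serial-versus-parallel trade-off described above can be sketched as follows; `cyclic_step` and `simultaneous_step` are illustrative names, and the plain weighted average in the simultaneous variant is only a schematic stand-in for the extrapolated simultaneous method of [25].

```python
def cyclic_step(x, operators):
    """One cyclic sweep: apply T_1, ..., T_m sequentially.
    Each T_i consumes the output of T_{i-1}, so the sweep is inherently serial,
    but only the current iterate has to be kept in memory."""
    for T in operators:
        x = T(x)
    return x

def simultaneous_step(x, operators, weights=None):
    """One simultaneous step: apply every T_i to the SAME point and average.
    The m evaluations are mutually independent and could run in parallel."""
    m = len(operators)
    w = weights if weights is not None else [1.0 / m] * m
    return sum(wi * T(x) for wi, T in zip(w, operators))
```

In the cyclic sweep there is no parallelism but minimal storage; in the simultaneous step all operators read the same point, so the m evaluations can overlap at the cost of combining their outputs afterwards.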

Author Contributions

Conceptualization, P.P. and N.N.; methodology, P.P. and N.N.; software, N.N.; validation, P.P. and N.N.; convergence analysis, P.P. and N.N.; investigation, P.P. and N.N.; writing—original draft preparation, P.P. and N.N.; writing—review and editing, P.P. and N.N.; visualization, P.P. and N.N.; supervision, N.N.; project administration, N.N.; funding acquisition, N.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Fund of Khon Kaen University. This research has received funding support from the National Science, Research and Innovation Fund or NSRF.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors are thankful to the Editor and three anonymous referees for comments and remarks which improved the quality and presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fan, K. A minimax inequality and applications. In Inequality III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972; pp. 103–113.
  2. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Student 1994, 63, 123–145.
  3. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
  4. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Daniele, P., Giannessi, F., Maugeri, A., Eds.; Kluwer Academic: Dordrecht, The Netherlands, 2003; pp. 289–298.
  5. Nguyen, T.T.V.; Strodiot, J.J.; Nguyen, V.H. The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 2009, 44, 175–192.
  6. Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261.
  7. Duc, P.M.; Muu, L.D. A splitting algorithm for a class of bilevel equilibrium problems involving nonexpansive mappings. Optimization 2016, 65, 1855–1866.
  8. Thuy, L.Q.; Hai, T.N. A projected subgradient algorithm for bilevel equilibrium problems and applications. J. Optim. Theory Appl. 2017, 175, 411–431.
  9. Anh, P.N.; Van Hong, N. New projection methods for equilibrium problems over fixed point sets. Optim. Lett. 2021, 15, 627–648.
  10. Kim, J.K.; Anh, P.N.; Hai, T.N. The Bruck's ergodic iteration method for the Ky Fan inequality over the fixed point set. Int. J. Comput. Math. 2017, 94, 2466–2480.
  11. Kaczmarz, S. Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. 1937, A35, 355–357; reprinted in Int. J. Control 1993, 57, 1269–1271.
  12. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
  13. Cegielski, A.; Censor, Y. Opial-type theorems and the common fixed point problem. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Bauschke, H., Burachik, R., Combettes, P., Elser, V., Luke, D., Wolkowicz, H., Eds.; Springer Optimization and Its Applications, Volume 49; Springer: New York, NY, USA, 2011.
  14. Liu, C. An acceleration scheme for row projection methods. J. Comput. Appl. 1995, 57, 363–391.
  15. Cegielski, A. Generalized relaxations of nonexpansive operators and convex feasibility problems. Contemp. Math. 2010, 513, 111–123.
  16. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics, 2057; Springer: Berlin/Heidelberg, Germany, 2012.
  17. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
  18. Cegielski, A.; Censor, Y. Extrapolation and local acceleration of an iterative process for common fixed point problems. J. Math. Anal. Appl. 2012, 394, 809–818.
  19. Cegielski, A.; Nimana, N. Extrapolated cyclic subgradient projection methods for the convex feasibility problems and their numerical behaviour. Optimization 2019, 68, 145–161.
  20. Prangprakhon, M.; Nimana, N. Extrapolated sequential constraint method for variational inequality over the intersection of fixed-point sets. Numer. Algorithms 2021, 88, 1051–1075.
  21. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: New York, NY, USA, 2017.
  22. Iusem, A.N. On the maximal monotonicity of diagonal subdifferential operators. J. Convex Anal. 2011, 18, 489–503.
  23. Combettes, P.L. Quasi-Fejérian analysis of some optimization algorithms. In Inherently Parallel Algorithms for Feasibility and Optimization; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: New York, NY, USA, 2001; pp. 115–152.
  24. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  25. Butnariu, D.; Censor, Y.; Gurfil, P.; Hadar, E. On the behavior of subgradient projections methods for convex feasibility problems in Euclidean spaces. SIAM J. Optim. 2008, 19, 786–807.
Figure 1. Comparison between using the subgradient projection T_i = P_{g_i} and the metric projection T_i = P_{C_i} for different error tolerances.
Table 1. Influence of parameters α and λ when using the subgradient projection operator T i = P g i where a = 0.40 and b = 0.60 .
Entries are average Iter / Time (s).

| λ    | α = 0.10    | α = 0.20    | α = 0.30    | α = 0.40    | α = 0.50    |
|------|-------------|-------------|-------------|-------------|-------------|
| 0.10 | 642 / 0.14  | 394 / 0.06  | 531 / 0.08  | 682 / 0.11  | 842 / 0.13  |
| 0.20 | 434 / 0.07  | 714 / 0.10  | 1013 / 0.16 | 1327 / 0.20 | 1644 / 0.25 |
| 0.30 | 594 / 0.09  | 1037 / 0.17 | 1492 / 0.23 | 1964 / 0.28 | 2439 / 0.37 |
| 0.40 | 754 / 0.11  | 1351 / 0.26 | 1970 / 0.29 | 2598 / 0.39 | 3223 / 0.48 |
| 0.50 | 914 / 0.14  | 1674 / 0.25 | 2443 / 0.36 | 3228 / 0.54 | 4013 / 0.60 |
| 0.60 | 1075 / 0.19 | 1987 / 0.33 | 2919 / 0.43 | 3861 / 0.65 | 4805 / 0.77 |
| 0.70 | 1232 / 0.18 | 2300 / 0.35 | 3396 / 0.51 | 4489 / 0.68 | 5571 / 0.84 |
| 0.80 | 1393 / 0.23 | 2618 / 0.40 | 3869 / 0.58 | 5119 / 0.77 | 6362 / 0.93 |
| 0.90 | 1550 / 0.23 | 2928 / 0.44 | 4342 / 0.64 | 5735 / 0.85 | 7157 / 1.05 |
Table 2. Influence of parameters α and λ when using the metric projection T i = P C i where a = 0.40 and b = 0.60 .
Entries are average Iter / Time (s).

| λ    | α = 0.10    | α = 0.20    | α = 0.30    | α = 0.40    | α = 0.50    |
|------|-------------|-------------|-------------|-------------|-------------|
| 0.10 | 640 / 0.16  | 412 / 0.06  | 553 / 0.07  | 698 / 0.09  | 846 / 0.13  |
| 0.20 | 458 / 0.07  | 735 / 0.09  | 1033 / 0.12 | 1335 / 0.17 | 1644 / 0.24 |
| 0.30 | 617 / 0.08  | 1056 / 0.13 | 1510 / 0.18 | 1967 / 0.23 | 2440 / 0.49 |
| 0.40 | 775 / 0.10  | 1371 / 0.17 | 1984 / 0.23 | 2599 / 0.30 | 3230 / 0.52 |
| 0.50 | 936 / 0.11  | 1686 / 0.21 | 2454 / 0.29 | 3225 / 0.39 | 4030 / 0.49 |
| 0.60 | 1091 / 0.16 | 2003 / 0.25 | 2923 / 0.35 | 3851 / 0.47 | 4802 / 0.59 |
| 0.70 | 1252 / 0.15 | 2315 / 0.30 | 3390 / 0.41 | 4493 / 0.56 | 5588 / 0.67 |
| 0.80 | 1407 / 0.17 | 2634 / 0.32 | 3868 / 0.48 | 5117 / 0.62 | 6364 / 0.79 |
| 0.90 | 1568 / 0.21 | 2951 / 0.37 | 4332 / 0.52 | 5751 / 0.70 | 7148 / 0.86 |
Table 3. Influence of parameters a and b when using the subgradient projection T i = P g i where α = 0.20 and λ = 0.10 .
Entries are average Iter / Time (s); - marks combinations excluded by the assumption of Theorem 1.

| b    | a = 0.10     | a = 0.15    | a = 0.20    | a = 0.25    | a = 0.30   | a = 0.35   | a = 0.40   |
|------|--------------|-------------|-------------|-------------|------------|------------|------------|
| 0.55 | 6968 / 1.08  | 3759 / 0.59 | 2216 / 0.33 | 1396 / 0.22 | 932 / 0.14 | 663 / 0.10 | 505 / 0.09 |
| 0.60 | 3765 / 0.58  | 2209 / 0.33 | 1391 / 0.21 | 929 / 0.15  | 649 / 0.10 | 489 / 0.08 | 394 / 0.06 |
| 0.65 | 2200 / 0.35  | 1391 / 0.22 | 926 / 0.14  | 649 / 0.10  | 473 / 0.07 | 450 / 0.07 | -          |
| 0.70 | 1387 / 0.22  | 924 / 0.15  | 648 / 0.10  | 534 / 0.09  | 619 / 0.10 | -          | -          |
| 0.75 | 923 / 0.16   | 726 / 0.14  | 844 / 0.15  | 1055 / 0.17 | -          | -          | -          |
| 0.80 | 1405 / 0.25  | 1649 / 0.27 | 2101 / 0.34 | -           | -          | -          | -          |
| 0.85 | 4000 / 0.67  | 5199 / 0.85 | -           | -           | -          | -          | -          |
| 0.90 | 18640 / 3.03 | -           | -           | -           | -          | -          | -          |
Table 4. Influence of parameters a and b when using the metric projection T i = P C i where α = 0.20 and λ = 0.10 .
Entries are average Iter / Time (s); - marks combinations excluded by the assumption of Theorem 1.

| b    | a = 0.10     | a = 0.15    | a = 0.20    | a = 0.25    | a = 0.30   | a = 0.35   | a = 0.40   |
|------|--------------|-------------|-------------|-------------|------------|------------|------------|
| 0.55 | 6962 / 0.85  | 3769 / 0.48 | 2212 / 0.27 | 1395 / 0.17 | 941 / 0.14 | 682 / 0.09 | 526 / 0.07 |
| 0.60 | 3770 / 0.46  | 2207 / 0.33 | 1386 / 0.17 | 928 / 0.11  | 667 / 0.09 | 509 / 0.07 | 412 / 0.06 |
| 0.65 | 2206 / 0.31  | 1390 / 0.17 | 929 / 0.12  | 649 / 0.09  | 491 / 0.06 | 450 / 0.06 | -          |
| 0.70 | 1382 / 0.19  | 924 / 0.12  | 649 / 0.10  | 534 / 0.08  | 619 / 0.09 | -          | -          |
| 0.75 | 923 / 0.13   | 726 / 0.10  | 842 / 0.12  | 1060 / 0.15 | -          | -          | -          |
| 0.80 | 1405 / 0.19  | 1649 / 0.23 | 2099 / 0.30 | -           | -          | -          | -          |
| 0.85 | 4014 / 0.55  | 5200 / 0.70 | -           | -           | -          | -          | -          |
| 0.90 | 18636 / 2.55 | -           | -           | -           | -          | -          | -          |
Table 5. Comparisons between using the subgradient projection T_i = P_{g_i} and the metric projection T_i = P_{C_i} for different sizes of n and m.
Entries are average Iter / Time (s).

| n    | m    | T_i := P_{g_i} | T_i := P_{C_i} |
|------|------|----------------|----------------|
| 200  | 100  | 393 / 0.15     | 412 / 0.21     |
| 200  | 200  | 394 / 0.25     | 411 / 0.35     |
| 200  | 300  | 392 / 0.36     | 413 / 0.50     |
| 200  | 400  | 396 / 0.51     | 411 / 0.53     |
| 200  | 500  | 393 / 0.56     | 413 / 0.62     |
| 200  | 1000 | 392 / 1.03     | 411 / 1.16     |
| 300  | 100  | 780 / 0.37     | 810 / 0.34     |
| 300  | 200  | 781 / 0.56     | 810 / 0.54     |
| 300  | 300  | 780 / 0.78     | 810 / 0.79     |
| 300  | 400  | 781 / 1.10     | 810 / 1.09     |
| 300  | 500  | 781 / 1.20     | 809 / 1.22     |
| 300  | 1000 | 781 / 2.21     | 809 / 2.14     |
| 400  | 100  | 1317 / 1.16    | 1353 / 0.97    |
| 400  | 200  | 1319 / 1.72    | 1354 / 1.59    |
| 400  | 300  | 1318 / 2.19    | 1354 / 2.11    |
| 400  | 400  | 1320 / 2.65    | 1356 / 2.57    |
| 400  | 500  | 1318 / 2.95    | 1354 / 2.96    |
| 400  | 1000 | 1318 / 5.23    | 1354 / 5.34    |
| 500  | 100  | 2009 / 2.51    | 2046 / 2.25    |
| 500  | 200  | 2008 / 3.63    | 2047 / 3.36    |
| 500  | 300  | 2007 / 4.38    | 2048 / 4.34    |
| 500  | 400  | 2008 / 5.24    | 2047 / 5.19    |
| 500  | 500  | 2009 / 5.91    | 2046 / 5.95    |
| 500  | 1000 | 2007 / 9.70    | 2046 / 10.30   |
| 1000 | 100  | 7755 / 33.31   | 7751 / 32.04   |
| 1000 | 200  | 7749 / 38.88   | 7749 / 37.39   |
| 1000 | 300  | 7750 / 43.94   | 7747 / 44.26   |
| 1000 | 400  | 7749 / 49.75   | 7748 / 48.42   |
| 1000 | 500  | 7750 / 55.72   | 7752 / 53.64   |
| 1000 | 1000 | 7747 / 81.83   | 7751 / 80.63   |