1. Introduction
Let a nonempty, closed and convex subset
of a real Hilbert space
be given, and let
be a bifunction with
, for each
An equilibrium problem [1,2] for
f on the set
is defined in the following way:
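The display for the problem statement is missing here; in the standard notation of the equilibrium-problem literature (the symbols K for the feasible set and x* for a solution are assumed names), it reads:

```latex
\text{Find } x^{*} \in K \ \text{such that} \ f(x^{*}, y) \geq 0 \quad \text{for all } y \in K.
```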
The problem (1) is very general: it includes many problems, such as fixed point problems, variational inequality problems, optimization problems, the Nash equilibrium of non-cooperative games, complementarity problems, saddle point problems, and vector optimization problems (for further details, see [1,3,4]). The equilibrium problem is also known as the famous Ky Fan inequality [2]. This particular format of the equilibrium problem (1) was initiated by Muu and Oettli [5] in 1992, and its theoretical properties were further investigated by Blum and Oettli [1]. The construction of new optimization-based methods, the modification and extension of existing methods, and the examination of their convergence analysis constitute an important research direction in equilibrium problem theory. Many methods have been developed over the last few years to solve equilibrium problems numerically in both finite- and infinite-dimensional Hilbert spaces, e.g., the extragradient algorithms [6,7,8,9,10,11,12,13,14], subgradient algorithms [15,16,17,18,19,20,21], inertial methods [22,23,24,25], and others in [26,27,28,29,30,31,32,33,34].
In particular, the proximal method [35] is an efficient way to solve equilibrium problems; it amounts to solving a minimization problem at each step. This approach is also regarded as a two-step extragradient-like method in [6], owing to the early contribution of Korpelevich's extragradient method [36] for solving saddle point problems. More precisely, Tran et al. introduced a method in [6] in which an iterative sequence
was generated in the following manner:
where
and
are Lipschitz constants. Moreover,
is the value of
x in the set
for which
attains its minimum. This method generates a weakly convergent iterative sequence, but in order to operate it, prior knowledge of the Lipschitz-type constants is required. These constants are normally unknown or hard to evaluate. In order to overcome this difficulty, Hieu et al. [12] introduced an extension of the method in [37] to solve equilibrium problems in the following manner: let
and choose
with
, such that
where the stepsize sequence
is updated in the following way:
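To make the two-step extragradient template concrete, the sketch below instantiates it for the special bifunction f(x, y) = ⟨F(x), y − x⟩ on K = R², so each strongly convex minimization step reduces to an explicit gradient step. The operator F, the fixed stepsize, and all names are illustrative assumptions, not the authors' setting.

```python
# Toy instantiation of the two-step extragradient template: with
# f(x, y) = <F(x), y - x> and K = R^2, each prox step is a gradient step.

def F(x):
    # A monotone (skew-symmetric) 1-Lipschitz operator; the solution is (0, 0).
    return (x[1], -x[0])

def extragradient(x0, lam=0.5, iters=100):
    x = x0
    for _ in range(iters):
        # Prediction step: y_n = argmin_y { lam*f(x_n, y) + 0.5*||y - x_n||^2 }
        g = F(x)
        y = (x[0] - lam * g[0], x[1] - lam * g[1])
        # Correction step: x_{n+1} = argmin_y { lam*f(y_n, y) + 0.5*||y - x_n||^2 }
        h = F(y)
        x = (x[0] - lam * h[0], x[1] - lam * h[1])
    return x

x = extragradient((1.0, 1.0))
print(max(abs(x[0]), abs(x[1])) < 1e-3)  # iterates approach the solution (0, 0)
```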
Recently, Vinh and Muu proposed an inertial iterative algorithm in [38] to solve a pseudomonotone equilibrium problem. The key contribution is an inertial factor in the method, which is used to enhance the convergence speed of the iterative sequence. The iterative sequence
was defined in the following manner:
- (i)
Choose
where a sequence
satisfies the following conditions:
- (ii)
Choose
satisfying
and
- (iii)
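The inertial (extrapolation) step w_n = x_n + θ_n (x_n − x_{n−1}) used by such algorithms is simple to illustrate; below it is attached, purely as a hypothetical sketch, to a plain gradient iteration for min ½x² (all parameter values are assumptions).

```python
# Minimal sketch of an inertial extrapolation step combined with a gradient
# step for the toy problem min 0.5*x^2 (grad = x); names are hypothetical.

def inertial_gradient(x0, theta=0.3, step=0.5, iters=50):
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)   # inertial extrapolation
        x_prev, x = x, w - step * w    # gradient step from the extrapolated point
    return x

print(abs(inertial_gradient(2.0)) < 1e-4)
```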
Recently, another efficient inertial algorithm was proposed by Hieu et al. in [39] as follows: let
and the sequence
was defined in the following manner:
In this article, we concentrate on projection methods, which are normally well-established and easy to execute due to their efficient numerical computation. Motivated by the works of [12,38], we formulate an inertial explicit subgradient extragradient method to solve the pseudomonotone equilibrium problem. These results can be seen as modifications of the methods that appeared in [6,12,38,39]. Under certain mild conditions, a weak convergence theorem is proved for the iterative sequence of the algorithm. Moreover, experimental studies document that the designed method tends to be more efficient than the existing methods presented in [38,39].
The remainder of the paper is arranged as follows:
Section 2 contains the elementary results used in this paper.
Section 3 presents our main algorithm and proves its convergence.
Section 4 and
Section 5 incorporate the applications of our main results.
Section 6 reports the numerical results that demonstrate the computational effectiveness of the suggested method.
2. Preliminaries
Assume that
is a convex function on a nonempty, closed and convex subset
of a real Hilbert space
and the subdifferential of the function
h at
is defined by
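The defining display is missing; the standard subdifferential formula (with H denoting the space, an assumed symbol) is:

```latex
\partial h(x) = \{ z \in H \ : \ h(y) - h(x) \geq \langle z,\, y - x \rangle \ \ \forall y \in H \}.
```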
Assume that
is a nonempty, closed and convex subset of a real Hilbert space
and the normal cone of
at
is defined by
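The missing display is, in standard notation (K and H are assumed symbols):

```latex
N_K(x) = \{ z \in H \ : \ \langle z,\, y - x \rangle \leq 0 \ \ \forall y \in K \}.
```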
The metric projection
of
onto a closed and convex subset
of
is defined by
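In standard notation the projection is the nearest-point map, P_K(x) = argmin over y in K of ‖y − x‖. As a runnable toy illustration (the ball-shaped set and all names are assumptions), the projection onto a closed ball has a simple closed form:

```python
import math

# Metric projection onto the closed ball K = {y : ||y|| <= r}: points inside
# the ball are fixed; points outside are radially scaled back to the boundary.
def project_ball(x, r=1.0):
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= r:
        return list(x)
    return [r * v / norm for v in x]

print(project_ball([3.0, 4.0]))  # -> [0.6, 0.8]
```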
Now, consider the following definitions of the monotonicity of a bifunction (for details, see [1,40]). A bifunction
on
for
is said to be:
- (1)
- (2)
- (3)
γ-strongly pseudomonotone if
- (4)
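The displays of items (1)–(4) are missing; in the standard terminology of the monotone-bifunction literature (the symbols f, K, and a modulus γ > 0 are assumed names), the four notions read:

```latex
\begin{aligned}
&\text{(1) } \gamma\text{-strongly monotone:} && f(x,y) + f(y,x) \leq -\gamma \|x-y\|^{2} && \forall x, y \in K;\\
&\text{(2) monotone:} && f(x,y) + f(y,x) \leq 0 && \forall x, y \in K;\\
&\text{(3) } \gamma\text{-strongly pseudomonotone:} && f(x,y) \geq 0 \ \Longrightarrow \ f(y,x) \leq -\gamma \|x-y\|^{2} && \forall x, y \in K;\\
&\text{(4) pseudomonotone:} && f(x,y) \geq 0 \ \Longrightarrow \ f(y,x) \leq 0 && \forall x, y \in K.
\end{aligned}
```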
We have the following implications from the above definitions:
In general, the converses are not true. A bifunction
is said to satisfy the Lipschitz-type condition [41] on a set
if there exist two constants
, such that
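The missing inequality is the standard Lipschitz-type condition (the constants c₁, c₂ > 0 are the assumed names):

```latex
f(x, z) \leq f(x, y) + f(y, z) + c_{1}\|x - y\|^{2} + c_{2}\|y - z\|^{2} \quad \forall x, y, z \in K.
```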
Lemma 1 ([
42]).
Let be a nonempty, closed and convex subset of and let be the metric projection from onto .- (i)
Let and we have - (ii)
if and only if - (iii)
For any and
Lemma 2 ([
43,
44]).
Assume that is a convex, lower semicontinuous and subdifferentiable function on where is a nonempty, convex and closed subset of a Hilbert space Then, is a minimizer of the function h if and only if where and denote the subdifferential of h at and the normal cone of at , respectively. Lemma 3 ([
45]).
Let be a sequence in and , such that the following conditions are satisfied:- (i)
for every , the limit exists;
- (ii)
every sequential weak cluster point of the sequence belongs to .
Then, converges weakly to some element in
Lemma 4 ([
46]).
Let and be sequences of non-negative real numbers satisfying for each If then exists. Lemma 5 ([
47]).
For every and then Suppose that the bifunction f satisfies the following conditions:
- (f1)
f is pseudomonotone on and for every ;
- (f2)
f satisfies the Lipschitz-type condition on with constants and
- (f3)
for every and satisfying ;
- (f4)
is convex and subdifferentiable on for all
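As a quick numerical sanity check (not a proof) of a condition of type (f2), the script below verifies the Lipschitz-type inequality with c₁ = c₂ = L/2 for the bifunction f(x, y) = ⟨F(x), y − x⟩ built from a 1-Lipschitz operator; the operator and all constants are illustrative assumptions.

```python
import random

def F(x):
    return (x[1], -x[0])          # skew rotation, Lipschitz constant L = 1

def f(x, y):
    # Bifunction f(x, y) = <F(x), y - x>.
    fx = F(x)
    return fx[0] * (y[0] - x[0]) + fx[1] * (y[1] - x[1])

def sq_dist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Check f(x,z) <= f(x,y) + f(y,z) + (L/2)||x-y||^2 + (L/2)||y-z||^2 on
# random triples (a numerical spot check, not a proof).
random.seed(0)
ok = True
for _ in range(1000):
    x, y, z = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    lhs = f(x, y) + f(y, z) - f(x, z)
    rhs = -0.5 * sq_dist(x, y) - 0.5 * sq_dist(y, z)
    ok = ok and (lhs >= rhs - 1e-9)
print(ok)
```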
3. The Modified Extragradient Algorithm for the Problem (1) and Its Convergence Analysis
We provide a method consisting of two strongly convex minimization problems with an inertial term and an explicit stepsize formula, which are used to enhance the convergence rate of the iterative sequence and to make the algorithm independent of the Lipschitz-type constants. For the sake of simplicity in the presentation, we will use the notation
and follow the conventions
and
The detailed method is provided below (Algorithm 1):
Algorithm 1 (Modified Extragradient Algorithm for the Problem (1))
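Since the display of Algorithm 1 is not reproduced here, the sketch below only illustrates the stated template (an inertial step followed by two extragradient-type steps); it is NOT the authors' exact method: the self-adaptive stepsize is replaced by a fixed λ, and the bifunction f(x, y) = ⟨F(x), y − x⟩ on K = R² is an assumed toy instance in which both prox steps reduce to gradient steps.

```python
# Hedged sketch: inertial extrapolation + two extragradient-type steps.

def F(x):
    return (x[1], -x[0])  # monotone, 1-Lipschitz test operator; solution (0, 0)

def step(p, d, lam):
    return (p[0] - lam * d[0], p[1] - lam * d[1])

def inertial_extragradient(x0, theta=0.1, lam=0.3, iters=400):
    x_prev, x = x0, x0
    for _ in range(iters):
        w = (x[0] + theta * (x[0] - x_prev[0]),
             x[1] + theta * (x[1] - x_prev[1]))   # inertial step
        y = step(w, F(w), lam)                    # first prox/gradient step
        x_prev, x = x, step(w, F(y), lam)         # second prox/gradient step
    return x

x = inertial_extragradient((1.0, 1.0))
print(max(abs(x[0]), abs(x[1])) < 1e-3)  # converges to the solution (0, 0)
```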
Lemma 6. The sequence is monotonically decreasing with a lower bound and it converges to
Proof. The definition of the sequence
implies that the sequence
is monotonically decreasing. It is given that
f satisfies the Lipschitz-type condition with
and
. Let
, such that
The above implies that has a lower bound. Moreover, there exists a fixed real number , such that □
Remark 1. Because of the summability of and the expression (5), it follows that which further implies Lemma 7. Suppose that is a bifunction satisfying the conditions (f1)–(f4). For each , we have Proof. From the value of
, we have
For some
, there exists
, such that
The above expression implies that
For given
, it follows that
∀
This provides that
From
, we have
Combining expressions (
9) and (
10) we obtain
Substituting
in (11) gives that
Because
, then
provides that
From the formula of
we obtain
From the expressions (
13) and (
14), we have
Similar to expression (
11), the value of
gives that
By substituting
in the above expression, we have
Combining the expressions (
15) and (
17), we obtain
We have the given formulas:
Combining the above expressions with (18), we have
□
Theorem 1. Assume that is a bifunction satisfying the conditions (f1)–(f4) and that belongs to the solution set Then, the sequences and generated through Algorithm 1 converge weakly to In addition,
Proof. From the value of
through Lemma 5, we obtain
By Lemma 7 and expression (
19), we obtain
Because
then there exists a fixed number
, such that
Then, there exists a fixed real number
such that
Combining the expressions (
20) and (
21), we obtain
By definition of the
, we have
From the definition of
in Algorithm 1, we obtain
The expression (
22) can also be written as
By using Lemma 4 with expressions (
7) and (
26), we have
The equality (
8) implies that
Letting
in (
24) implies that
From the expression (
20) and (
25), we have
which further implies that (for
)
By letting
in (
31), we obtain
By using the Cauchy inequality and expression (
32), we obtain
The expressions (
29) and (
32) imply that
It follows from the expressions (
27), (
29) and (
34) that the sequences
and
are bounded. Now, we need to use Lemma 3; for this, it is necessary to show that every weak sequential limit point of
lies in the set
Consider
z to be a weak limit point of
i.e., there is a
of
that is weakly converges to
Because
, then
also weakly converges to
z and so
Now, it remains to show that
From relation (
11), due to
and (
17), we have
where
It follows from (
28), (
32), (
33) and the boundedness of
that the right-hand side tends to zero. Due to
condition (f3) and
it follows that
Because , it follows that This proves that By Lemma 3, and converge weakly to as
Finally, to prove that
Let
For any
, we have
Clearly, the above implies that the sequence
is bounded. Next, we need to show that
is a Cauchy sequence. By using Lemma 1(iii) and (
23), we have
Thus, Lemma 4 provides the existence of
Next, taking (23) ∀
we have
Suppose that
for
through Lemma 1(i) and (
39), we have
The existence of
and the summability of the series
imply
∀
As a result,
is a Cauchy sequence and, due to the closedness of the set
the sequence
strongly converges to
Next, it remains to show that
From Lemma 1(ii) and
, we have
Because of
and
, we obtain
which implies that
□
4. Applications to Solve Fixed Point Problems
Now, consider the applications of the results discussed in
Section 3 to solve fixed-point problems involving a κ-strict pseudo-contraction. Let
be a mapping, and the fixed point problem is formulated in the following manner:
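The missing display is the standard fixed point problem (T and K are assumed symbol names):

```latex
\text{Find } x^{*} \in K \ \text{such that} \ T(x^{*}) = x^{*}.
```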
A mapping is said to be:
- (i)
sequentially weakly continuous on
if
- (ii)
κ-strict pseudo-contraction [
48] on
if
that is equivalent to
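Both displays of this definition are missing; in the standard Browder–Petryshyn form (symbol names assumed), a κ-strict pseudo-contraction satisfies:

```latex
\|T(x)-T(y)\|^{2} \leq \|x-y\|^{2} + \kappa\,\|(x-T(x))-(y-T(y))\|^{2}
\;\Longleftrightarrow\;
\langle T(x)-T(y),\, x-y\rangle \leq \|x-y\|^{2} - \frac{1-\kappa}{2}\,\|(x-T(x))-(y-T(y))\|^{2},
```

for all x, y in K.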
Note: if we define
Then, the problem (1) converts into the fixed point problem with
The value of
in Algorithm 1 converts into the following:
In a similar way to the expression (
44), we obtain
As a consequence of the results in
Section 3, we have the following fixed point theorem:
Corollary 1. Assume that is a sequentially weakly continuous κ-strict pseudocontraction with The sequences and are generated in the following way:
- (i)
Choose and satisfying the following condition: - (ii)
Choose satisfying , such that - (iii)
Compute , where - (iv)
Revise the stepsize in the following way:
Then, the sequences and converge weakly to
5. Application to Solve Variational Inequality Problems
Now, consider the applications of our results that are discussed in
Section 3 in order to solve variational inequality problems involving a pseudomonotone and Lipschitz continuous operator. Let an operator
and the variational inequality problem is formulated as follows:
A mapping is said to be:
- (i)
L-Lipschitz continuous on
if
- (ii)
- (iii)
Note: let
Thus, problem (
1) translates into the problem (VIP) with
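The two missing displays here are, in the standard reduction (the symbols L and K are assumed names), the variational inequality problem and the bifunction that casts it as problem (1):

```latex
\text{(VIP)}\quad \text{Find } x^{*} \in K \ \text{such that} \ \langle L(x^{*}),\, y - x^{*} \rangle \geq 0 \quad \forall y \in K;
\qquad f(x, y) := \langle L(x),\, y - x \rangle.
```

With this f, it is well known that an L-Lipschitz continuous operator yields the Lipschitz-type condition with c₁ = c₂ = L/2.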
From the value of
we have
In a similar way to the expression (
49), we obtain
Suppose that a mapping L satisfies the following conditions:
- (L1)
L is monotone on with ;
- (L2)
L is L-Lipschitz continuous on with ;
- (L3)
L is pseudomonotone on with ; and,
- (L4)
and satisfying
Next, let
L be monotone; then the condition (L4) can be removed. The condition (L4) is used to define
and to satisfy the conditions above. The condition (f3) is required to show
see (36). The condition (L4) is required to show
Further, to show that
By the monotonicity of the operator
L, we have
By letting
with expression (
35), it follows that
Combining (
50) with (
51), we deduce that
Therefore,
provides
∀
Let
∀
Since
for
, we have
That is, for every Due to , while , we have for all and consequently
Corollary 2. Let be a mapping satisfying the conditions (L1)–(L2). Assume that the sequences and are generated in the following manner:
- (i)
Choose and , such that - (ii)
Let satisfy and - (iii)
Compute where - (iv)
The stepsize is revised in the following way:
Then, the sequences and converge weakly to
Corollary 3. Let be a mapping satisfying the conditions (L2)–(L4). Assume that the sequences and are generated in the following manner:
- (i)
Choose and , such that - (ii)
Choose satisfying , such that - (iii)
Compute where - (iv)
The stepsize is updated in the following way:
Then, the sequences and converge weakly to
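To close, a hedged sketch of a subgradient-extragradient iteration for (VIP) of the type appearing in the corollaries above: one projection onto K (here an assumed closed unit ball) and one inexpensive projection onto a half-space. The operator, stepsize, and iteration count are illustrative assumptions only, not the authors' exact scheme.

```python
import math

def F(x):
    return (x[1], -x[0])  # monotone, 1-Lipschitz operator; x* = (0, 0) solves (VIP)

def proj_ball(u, r=1.0):
    # Metric projection onto K = {z : ||z|| <= r}.
    n = math.hypot(u[0], u[1])
    return u if n <= r else (r * u[0] / n, r * u[1] / n)

def proj_halfspace(u, a, y):
    # Metric projection onto the half-space T = {z : <a, z - y> <= 0}.
    s = a[0] * (u[0] - y[0]) + a[1] * (u[1] - y[1])
    if s <= 0.0:
        return u
    nn = a[0] * a[0] + a[1] * a[1]
    return (u[0] - s * a[0] / nn, u[1] - s * a[1] / nn)

def subgradient_extragradient(x, lam=0.5, iters=150):
    for _ in range(iters):
        g = F(x)
        u = (x[0] - lam * g[0], x[1] - lam * g[1])
        y = proj_ball(u)                    # y_n = P_K(x_n - lam * F(x_n))
        a = (u[0] - y[0], u[1] - y[1])      # normal vector of the half-space T_n
        h = F(y)
        v = (x[0] - lam * h[0], x[1] - lam * h[1])
        x = proj_halfspace(v, a, y)         # x_{n+1} = P_{T_n}(x_n - lam * F(y_n))
    return x

x = subgradient_extragradient((0.5, 0.5))
print(max(abs(x[0]), abs(x[1])) < 1e-3)
```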