Article

Inertial Subgradient Extragradient Methods for Solving Variational Inequality Problems and Fixed Point Problems

by Godwin Amechi Okeke 1,2,*, Mujahid Abbas 3,4 and Manuel de la Sen 5

1 Department of Mathematics, School of Physical Sciences, Federal University of Technology, Owerri, P.M.B. 1526 Owerri, Imo State, Nigeria
2 Abdus Salam School of Mathematical Sciences, Government College University, Lahore 54600, Pakistan
3 Department of Mathematics, Government College University, Lahore 54000, Pakistan
4 Department of Mathematics and Applied Mathematics, University of Pretoria (Hatfield Campus), Lynnwood Road, Pretoria 0002, South Africa
5 Institute of Research and Development of Processes, University of the Basque Country, Campus of Leioa (Bizkaia), P.O. Box 644-Bilbao, Barrio Sarriena, 48940 Leioa, Spain
* Author to whom correspondence should be addressed.
Axioms 2020, 9(2), 51; https://doi.org/10.3390/axioms9020051
Submission received: 16 March 2020 / Revised: 7 May 2020 / Accepted: 7 May 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Fixed Point Theory and Its Related Topics II)

Abstract: We propose two new iterative algorithms for solving K-pseudomonotone variational inequality problems in the framework of real Hilbert spaces. These newly proposed methods are obtained by combining the viscosity approximation algorithm, the Picard–Mann algorithm and the inertial subgradient extragradient method. We establish some strong convergence theorems for our newly developed methods under certain restrictions. Our results extend and improve several recently announced results. Furthermore, we give several numerical experiments showing that our proposed algorithms perform better than several existing methods.

1. Introduction

In this paper, the set $C$ denotes a nonempty closed convex subset of a real Hilbert space $H$. The inner product of $H$ is denoted by $\langle \cdot , \cdot \rangle$ and the induced norm by $\| \cdot \|$. Suppose $A : H \to H$ is an operator. The variational inequality problem (VIP) for the operator $A$ on $C \subseteq H$ is to find a point $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \text{for each } x \in C. \tag{1}$$
In this study, we denote the solution set of the (VIP) (1) by $\Gamma$. The theory of variational inequalities was introduced by Stampacchia [1]. It is known that (VIP) problems arise in various models in many fields of study, such as mathematics, physics, the social sciences, management science and engineering. The ideas and methods of variational inequalities have been applied innovatively in diverse areas of science and engineering and have proved very effective in solving certain problems. The theory of the (VIP) provides a natural, simple and unified setting for a comprehensive treatment of otherwise unrelated problems (see, e.g., [2]). Several authors have developed efficient numerical methods for solving the (VIP); these methods include the projection method and its variants (see, e.g., [3,4,5,6,7,8,9,10,11,12,13]). The fundamental idea is to extend the well-known projected gradient algorithm for minimizing $f(x)$ subject to $x \in C$. This method is given as follows:
$$x_{n+1} = P_C(x_n - \alpha_n \nabla f(x_n)), \quad n \ge 0, \tag{2}$$
where the real sequence $\{\alpha_n\}$ satisfies certain conditions and $P_C$ is the well-known metric projection onto $C \subseteq H$. The interested reader may consult [14] for a convergence analysis of this algorithm in the special case in which the mapping $f : H \to \mathbb{R}$ is convex and differentiable. Equation (2) has been extended to the (VIP), where it is known as the projected gradient method for optimization problems. This is done by replacing the gradient $\nabla f$ with the operator $A$, thereby generating a sequence $\{x_n\}$ as follows:
$$x_{n+1} = P_C(x_n - \alpha_n A x_n), \quad n \ge 0. \tag{3}$$
However, the major drawback of this method is the restrictive condition that the operator $A$ be strongly monotone or inverse strongly monotone (see, e.g., [15]) in order to guarantee convergence. In 1976, Korpelevich [16] removed this strong condition by introducing the extragradient method for solving saddle point problems. The extragradient method was later extended to variational inequality problems in both Hilbert and Euclidean spaces. The only restriction required for the extragradient algorithm to converge is that the operator $A$ be monotone and L-Lipschitz continuous. The extragradient method is given as follows:
$$\begin{cases} y_n = P_C(x_n - \tau A x_n), \\ x_{n+1} = P_C(x_n - \tau A y_n), \end{cases} \tag{4}$$
where $\tau \in (0, \frac{1}{L})$ and the metric projection from $H$ onto $C$ is denoted by $P_C$. If the solution set $\Gamma$ of the (VIP) is nonempty, then the sequence $\{x_n\}$ generated by iterative algorithm (4) converges weakly to an element of $\Gamma$.
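For concreteness, here is a minimal finite-dimensional sketch of the extragradient iteration (4) in Python; the operator, feasible set and parameters below are illustrative choices of ours, not data taken from this paper.

```python
import numpy as np

def extragradient(A, proj_C, x0, tau, tol=1e-8, max_iter=1000):
    """Korpelevich extragradient method (4): two projections onto C per iteration."""
    x = x0
    for _ in range(max_iter):
        y = proj_C(x - tau * A(x))        # predictor step
        x_next = proj_C(x - tau * A(y))   # corrector step
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative data: A(x) = Mx is monotone (the symmetric part of M is positive
# definite) and L-Lipschitz with L = ||M||; C is the closed unit ball.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
L = np.linalg.norm(M, 2)
A = lambda x: M @ x
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = extragradient(A, proj_C, np.array([1.0, 1.0]), tau=0.5 / L)  # tends to 0
```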
Observe that the extragradient method requires calculating two projections onto the set $C \subseteq H$ in every iteration. Computing the projection onto a closed convex set $C$ amounts to solving a minimum distance problem, so when $C$ is a general closed convex set this method may require a prohibitive amount of computation time. In view of this drawback, in 2011 Censor et al. [5] introduced the subgradient extragradient method by modifying iterative algorithm (4): they replaced the two projections onto the set $C$ in the extragradient method (4) by one projection onto the set $C \subseteq H$ and one onto a half-space. It has been established that the projection onto a given half-space is easy to calculate. The subgradient extragradient method of Censor et al. [5] reads as follows:
$$\begin{cases} y_n = P_C(x_n - \tau A x_n), \\ T_n = \{x \in H \mid \langle x_n - \tau A x_n - y_n, x - y_n \rangle \le 0\}, \\ x_{n+1} = P_{T_n}(x_n - \tau A y_n), \end{cases} \tag{5}$$
where $\tau \in (0, \frac{1}{L})$. Several authors have studied the subgradient extragradient method and obtained interesting and applicable results (see, e.g., [11] and the references therein).
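The computational gain comes from the fact that the projection onto the half-space $T_n$ has a closed form. Below is a hedged sketch of one iteration of method (5), with that projection written out explicitly; the function names are ours.

```python
import numpy as np

def proj_halfspace(x, a, beta):
    """Closed-form projection onto the half-space {z : <a, z> <= beta}."""
    excess = a @ x - beta
    return x if excess <= 0.0 else x - (excess / (a @ a)) * a

def subgradient_extragradient_step(A, proj_C, x, tau):
    """One iteration of (5): one projection onto C and one onto T_n."""
    y = proj_C(x - tau * A(x))
    a = x - tau * A(x) - y               # normal vector defining T_n
    v = x - tau * A(y)
    if not a.any():                      # degenerate case: T_n is the whole space
        return v
    return proj_halfspace(v, a, a @ y)   # T_n = {z : <a, z> <= <a, y>}
```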
The theory of pseudomonotone operators plays a crucial role in nonlinear analysis, variational inequalities and optimization problems (see, e.g., [17,18,19,20]). One important class of pseudomonotone operators was introduced in 1976 by Karamardian [21] and has been utilized in solving problems in variational inequalities, optimization and economics (see, e.g., [17,20]). In this paper, we call operators that are pseudomonotone in the sense of Karamardian K-pseudomonotone. Yao [20] utilized K-pseudomonotonicity in solving variational inequality problems in Banach spaces. He established some new existence results which extend many known results in infinite-dimensional spaces under weak assumptions, and he also proved some uniqueness results for the complementarity problem with K-pseudomonotone operators in Banach spaces. Our purpose in the present paper is to introduce two new inertial subgradient extragradient iterative algorithms for solving K-pseudomonotone variational inequality problems in the framework of real Hilbert spaces.
Inertial-type iterative algorithms are based on a discrete version of a second-order dissipative dynamical system (see [22,23]). Algorithms of this kind can be seen as a means of accelerating the convergence of a given method (see, e.g., [24,25,26]). Alvarez and Attouch [24] in 2001 used the inertial method to derive a proximal algorithm for finding a zero of a maximal monotone operator. Their method is given as follows: given $x_{n-1}, x_n \in H$ and two parameters $\theta_n \in [0, 1)$ and $\lambda_n > 0$, find $x_{n+1} \in H$ such that
$$0 \in \lambda_n A(x_{n+1}) + x_{n+1} - x_n - \theta_n (x_n - x_{n-1}). \tag{6}$$
This algorithm can be written equivalently as follows:
$$x_{n+1} = J_{\lambda_n}^A (x_n + \theta_n (x_n - x_{n-1})), \tag{7}$$
where $J_{\lambda_n}^A$ is the resolvent of the operator $A$ with parameter $\lambda_n$, and the inertia is induced by the term $\theta_n (x_n - x_{n-1})$.
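As an illustration, when $A$ is the maximal monotone operator induced by a positive definite matrix $M$, the resolvent is $J_{\lambda A}(x) = (I + \lambda M)^{-1} x$, and iteration (7) can be sketched as follows (the matrix and parameter values are illustrative):

```python
import numpy as np

M = np.array([[3.0, 0.0], [0.0, 1.0]])   # positive definite, so x -> Mx is maximal monotone
lam, theta = 1.0, 0.3                    # illustrative lambda_n and theta_n
I = np.eye(2)

x_prev, x = np.array([5.0, -4.0]), np.array([4.0, -3.0])
for _ in range(50):
    w = x + theta * (x - x_prev)                      # inertial extrapolation
    x_prev, x = x, np.linalg.solve(I + lam * M, w)    # resolvent step J_{lam A}(w)
print(x)  # approaches the unique zero of M, i.e., the origin
```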
Several researchers have developed fast iterative algorithms using inertial methods. These methods include the inertial Douglas–Rachford splitting method (see, e.g., [27]), inertial forward–backward splitting methods (see, e.g., [28]), inertial ADMM (see, e.g., [29]), the inertial proximal–extragradient method (see, e.g., [30]), the inertial forward–backward–forward method (see, e.g., [31]), the inertial contraction method (see, e.g., [32]), the inertial Tseng method (see, e.g., [33]) and the inertial Mann method (see, e.g., [11]).
Inspired by the results above, we propose two inertial subgradient extragradient methods for finding a solution of the (VIP) with a K-pseudomonotone and Lipschitz continuous operator. Our first proposed iterative algorithm is a hybrid of the inertial subgradient extragradient method [11], the viscosity method [34] and the Picard–Mann method [35]. Our second method combines the inertial subgradient extragradient method [11] and the Picard–Mann method [35].
This paper is organized as follows. In Section 2, we give some preliminary definitions of concepts and results that will be crucial in this study. In Section 3, we present our proposed iterative algorithms and prove some convergence results for them. In Section 4, we present some numerical experiments to support the convergence of our proposed iterative algorithms. In Section 5, we give the concluding remarks of the study.

2. Preliminaries

In this paper, the set $C$ denotes a nonempty closed convex subset of a real Hilbert space $H$. The inner product of $H$ is denoted by $\langle \cdot , \cdot \rangle$ and the induced norm by $\| \cdot \|$.
We denote the weak convergence of the sequence $\{x_n\}$ to $x$ by $x_n \rightharpoonup x$ as $n \to \infty$, and the strong convergence of $\{x_n\}$ to $x$ by $x_n \to x$ as $n \to \infty$.
For each $x, y \in H$ and $\alpha \in \mathbb{R}$, we recall the following identities and inequalities in Hilbert spaces:
$$\|\alpha x + (1 - \alpha) y\|^2 = \alpha \|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2. \tag{8}$$
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle. \tag{9}$$
$$\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2. \tag{10}$$
A mapping $A : H \to H$ is said to be nonexpansive if for each $x, y \in H$ we have
$$\|A x - A y\| \le \|x - y\|.$$
For each $x \in H$, there is a unique nearest point in $C \subseteq H$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|$$
for each $y \in C$. The mapping $P_C$ is known as the metric projection of $H$ onto $C \subseteq H$. It has been proved that $P_C$ is nonexpansive.
Lemma 1
([36]). Suppose that $C$ is a closed convex subset of a real Hilbert space $H$ and let $x \in H$. Then the following hold:
(i) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$ for all $y \in H$;
(ii) $\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2$ for all $y \in C$;
(iii) given $x \in H$ and $z \in C$,
$$z = P_C x \iff \langle x - z, z - y \rangle \ge 0$$
for all $y \in C$.
For more on the metric projection $P_C$, the interested reader should see Section 3 of [36].
The fixed point problem involves finding a fixed point of an operator $A : H \to H$. The set of fixed points of the operator $A$ is denoted by $F(A)$, and we assume that it is nonempty, that is, $F(A) \neq \emptyset$. The fixed point problem (FP) is then formulated as follows:
$$\text{find } x \in H \text{ such that } x = A(x). \tag{11}$$
In this paper, our problem of interest is to find a point $x \in H$ such that
$$x \in \Gamma \cap F(A). \tag{12}$$
Definition 1.
Let $A : H \to H$ be a mapping. Then for all $x, y \in H$:
(i) $A$ is said to be L-Lipschitz continuous with $L > 0$ if
$$\|A x - A y\| \le L\|x - y\|. \tag{13}$$
If $L \in [0, 1)$, then $A$ is called a contraction mapping.
(ii) $A$ is said to be monotone if
$$\langle A x - A y, x - y \rangle \ge 0. \tag{14}$$
(iii) $A$ is said to be pseudomonotone in the sense of Karamardian [21], or K-pseudomonotone for short, if
$$\langle A y, x - y \rangle \ge 0 \implies \langle A x, x - y \rangle \ge 0. \tag{15}$$
The following lemmas will be needed in this paper.
Lemma 2.
([37]). Suppose $\{x_n\}$ is a sequence of nonnegative real numbers such that there is a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ with $x_{n_j} < x_{n_j + 1}$ for all $j \in \mathbb{N}$. Then there is a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ such that $\lim_{k \to \infty} m_k = \infty$ and the following properties are fulfilled for each (sufficiently large) $k \in \mathbb{N}$:
$$x_{m_k} \le x_{m_k + 1}, \quad x_k \le x_{m_k + 1}. \tag{16}$$
In fact, $m_k$ is the largest number $n$ in the set $\{1, 2, \ldots, k\}$ such that $x_n < x_{n+1}$.
Lemma 3.
([38]). Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n b_n \tag{17}$$
for all $n \ge 0$, where $\{\alpha_n\} \subset (0, 1)$ and $\{b_n\}$ is a sequence such that
(a) $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(b) $\limsup_{n \to \infty} b_n \le 0$.
Then $\lim_{n \to \infty} a_n = 0$.

3. Main Results

The following condition will be needed in this study.
Condition 3.1
The operator $A : H \to H$ is K-pseudomonotone and L-Lipschitz continuous on the real Hilbert space $H$, the solution set $\Gamma$ of the (VIP) (1) is nonempty, and $f : H \to H$ is a contraction mapping with contraction parameter $k \in [0, 1)$. The feasible set $C \subseteq H$ is nonempty, closed and convex.

3.1. The Viscosity Inertial Subgradient Extragradient Algorithm

We propose the following algorithm
Algorithm 3.1
Step 0: Given $\tau \in (0, \frac{1}{L})$, let $\{\alpha_n\} \subset [0, \alpha)$ for some $\alpha > 0$ and $\{\beta_n\} \subset (0, 1)$ satisfy the following conditions:
$$\lim_{n \to \infty} \beta_n = 0, \quad \sum_{n=1}^{\infty} \beta_n = \infty.$$
Choose initial $x_0, x_1 \in C$ and set $n := 1$.
Step 1: Compute
$$w_n = x_n + \alpha_n (x_n - x_{n-1}), \tag{18}$$
$$y_n = P_C(w_n - \tau A w_n). \tag{19}$$
If $y_n = w_n$, then stop; $y_n$ is a solution of the (VIP). Otherwise, go to Step 2.
Step 2: Construct the half-space
$$T_n := \{z \in H : \langle w_n - \tau A w_n - y_n, z - y_n \rangle \le 0\} \tag{20}$$
and compute
$$z_n = P_{T_n}(w_n - \tau A y_n). \tag{21}$$
Step 3: Calculate
$$h_n = (1 - \beta_n) z_n + \beta_n f(z_n), \tag{22}$$
and compute
$$x_{n+1} = f(h_n). \tag{23}$$
Set $n := n + 1$ and return to Step 1.
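To make the steps above concrete, the following is a minimal finite-dimensional sketch of Algorithm 3.1 in Python. The choices $\beta_n = 1/(n+1)$ and $\varepsilon_n = \beta_n^2$ anticipate Remark 1 and Section 4; everything else (names, tolerances) is an illustrative assumption of ours.

```python
import numpy as np

def algorithm_3_1(A, proj_C, f, x0, x1, tau, alpha=0.5, tol=1e-8, max_iter=5000):
    """Sketch of Algorithm 3.1: inertial step (18), projection (19), half-space
    projection (21), then the viscosity and Picard steps (22)-(23)."""
    x_prev, x = x0, x1
    for n in range(1, max_iter + 1):
        beta = 1.0 / (n + 1)                        # beta_n -> 0, sum beta_n = inf
        dx = np.linalg.norm(x - x_prev)
        alpha_n = min(alpha, beta**2 / dx) if dx > 0 else alpha  # rule of Remark 1
        w = x + alpha_n * (x - x_prev)              # Step 1: w_n
        y = proj_C(w - tau * A(w))                  # y_n
        if np.allclose(y, w):
            return y                                # y_n solves the (VIP)
        a = w - tau * A(w) - y                      # Step 2: normal of T_n
        v = w - tau * A(y)
        z = v - max(0.0, a @ (v - y)) / (a @ a) * a if a.any() else v  # z_n = P_{T_n}(v)
        h = (1 - beta) * z + beta * f(z)            # Step 3: h_n
        x_prev, x = x, f(h)                         # x_{n+1} = f(h_n)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```

For instance, with the contraction $f(x) = 0.5x$ (so $k = 0.5$) and the operator and unit-ball projection of Example 1 below, this sketch can be used to reproduce the qualitative behaviour reported in Section 4.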
Next, we prove the following results which will be useful in this study.
Lemma 4.
Let { x n } be a sequence generated by Algorithm 3.1. Then
$$\|x_{n+1} - p\|^2 \le \|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2, \tag{24}$$
for all $p \in \Gamma$.
Proof. 
Since $p \in \Gamma \subseteq C \subseteq T_n$, by Equation (10) and Lemma 1 (i) we have the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|P_{T_n}(w_n - \tau A y_n) - P_{T_n} p\|^2 \le \langle x_{n+1} - p, w_n - \tau A y_n - p \rangle \\
&= \tfrac{1}{2}\|x_{n+1} - p\|^2 + \tfrac{1}{2}\|w_n - \tau A y_n - p\|^2 - \tfrac{1}{2}\|x_{n+1} - w_n + \tau A y_n\|^2 \\
&= \tfrac{1}{2}\|x_{n+1} - p\|^2 + \tfrac{1}{2}\|w_n - p\|^2 + \tfrac{1}{2}\tau^2\|A y_n\|^2 - \langle w_n - p, \tau A y_n \rangle \\
&\quad - \tfrac{1}{2}\|x_{n+1} - w_n\|^2 - \tfrac{1}{2}\tau^2\|A y_n\|^2 - \langle x_{n+1} - w_n, \tau A y_n \rangle \\
&= \tfrac{1}{2}\|x_{n+1} - p\|^2 + \tfrac{1}{2}\|w_n - p\|^2 - \tfrac{1}{2}\|x_{n+1} - w_n\|^2 - \langle x_{n+1} - p, \tau A y_n \rangle.
\end{aligned} \tag{25}$$
Hence, from Equation (25) we obtain
$$\|x_{n+1} - p\|^2 \le \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 - 2\langle x_{n+1} - p, \tau A y_n \rangle. \tag{26}$$
Using the K-pseudomonotonicity of $A$, we have $2\tau\langle A y_n, y_n - p \rangle \ge 0$. Adding this term to the right-hand side of inequality (26), we obtain
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 - 2\langle x_{n+1} - p, \tau A y_n \rangle + 2\tau\langle A y_n, y_n - p \rangle \\
&= \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 - 2\tau\langle x_{n+1} - y_n, A y_n \rangle - 2\tau\langle y_n - p, A y_n \rangle + 2\tau\langle y_n - p, A y_n \rangle \\
&= \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 - 2\tau\langle x_{n+1} - y_n, A y_n - A w_n \rangle - 2\tau\langle x_{n+1} - y_n, A w_n \rangle \\
&= \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 + 2\tau\langle y_n - x_{n+1}, A y_n - A w_n \rangle + 2\tau\langle y_n - x_{n+1}, A w_n \rangle.
\end{aligned} \tag{27}$$
Next, using the L-Lipschitz continuity of $A$, we have the following estimate:
$$2\tau\langle y_n - x_{n+1}, A y_n - A w_n \rangle \le 2\tau L\|y_n - x_{n+1}\|\,\|y_n - w_n\| \le \tau L\|y_n - x_{n+1}\|^2 + \tau L\|y_n - w_n\|^2. \tag{28}$$
Since $x_{n+1} = P_{T_n}(w_n - \tau A y_n) \in T_n$, the definition of $T_n$ gives $\langle w_n - \tau A w_n - y_n, x_{n+1} - y_n \rangle \le 0$. This implies that
$$2\tau\langle y_n - x_{n+1}, A w_n \rangle \le 2\langle y_n - w_n, x_{n+1} - y_n \rangle = \|x_{n+1} - w_n\|^2 - \|y_n - w_n\|^2 - \|x_{n+1} - y_n\|^2. \tag{29}$$
Using Equations (28) and (29) in Equation (27), we obtain
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \|w_n - p\|^2 - \|x_{n+1} - w_n\|^2 + \tau L\|y_n - x_{n+1}\|^2 + \tau L\|y_n - w_n\|^2 \\
&\quad + \|x_{n+1} - w_n\|^2 - \|y_n - w_n\|^2 - \|x_{n+1} - y_n\|^2 \\
&= \|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2.
\end{aligned} \tag{30}$$
The proof of Lemma 4 is completed. □
Next, we prove the following results for Algorithm 3.1.
Theorem 1.
Assume that the sequence $\{\alpha_n\}$ is chosen such that
$$\lim_{n \to \infty} \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0. \tag{31}$$
Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to an element $p \in \Gamma$, where $p = P_{\Gamma} f(p)$.
Proof. 
Claim I
We prove that the sequence $\{x_n\}$ is bounded. Let $p = P_{\Gamma} f(p)$. By Lemma 4 we have
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2. \tag{32}$$
This implies that
$$\|z_n - p\| \le \|w_n - p\|. \tag{33}$$
Using Equation (18), we have
$$\|w_n - p\| = \|x_n + \alpha_n(x_n - x_{n-1}) - p\| \le \|x_n - p\| + \alpha_n\|x_n - x_{n-1}\| = \|x_n - p\| + \beta_n \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|. \tag{34}$$
Since $\lim_{n \to \infty} \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0$, there exists a constant $\ell_1 \ge 0$ such that
$$\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le \ell_1 \quad \text{for each } n \ge 0. \tag{35}$$
Hence, using Equations (34) and (35) in Equation (33), we obtain
$$\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n \ell_1. \tag{36}$$
Using Equation (23) and the fact that $f$ is a contraction mapping, we have
$$\|x_{n+1} - p\| = \|f(h_n) - f(p) + f(p) - p\| \le \|f(h_n) - f(p)\| + \|f(p) - p\| \le k\|h_n - p\| + \|f(p) - p\|. \tag{37}$$
By Equation (22), we have
$$\begin{aligned}
\|h_n - p\| &= \|(1 - \beta_n) z_n + \beta_n f(z_n) - p\| \le (1 - \beta_n)\|z_n - p\| + \beta_n\|f(z_n) - f(p)\| + \beta_n\|f(p) - p\| \\
&\le (1 - \beta_n)\|z_n - p\| + \beta_n k\|z_n - p\| + \beta_n\|f(p) - p\| = (1 - \beta_n(1 - k))\|z_n - p\| + \beta_n\|f(p) - p\|.
\end{aligned} \tag{38}$$
Using Equation (38) in Equation (37), we obtain
$$\begin{aligned}
\|x_{n+1} - p\| &\le k(1 - \beta_n(1 - k))\|z_n - p\| + k\beta_n\|f(p) - p\| + \|f(p) - p\| \\
&\le k(1 - \beta_n(1 - k))\|z_n - p\| + k\|f(p) - p\| + \|f(p) - p\| = k(1 - \beta_n(1 - k))\|z_n - p\| + (1 + k)\|f(p) - p\|.
\end{aligned} \tag{39}$$
Using Equation (36) in Equation (39), we have
$$\begin{aligned}
\|x_{n+1} - p\| &\le k(1 - \beta_n(1 - k))\|x_n - p\| + k\beta_n\ell_1 + (1 + k)\|f(p) - p\| \\
&\le k(1 - \beta_n(1 - k))\|x_n - p\| + k\ell_1 + (1 + k)\|f(p) - p\| \\
&\le \max\{\|x_n - p\|, \ell_1 + 2\|f(p) - p\|\} \le \cdots \le \max\{\|x_0 - p\|, \ell_1 + 2\|f(p) - p\|\}.
\end{aligned} \tag{40}$$
This means that $\{x_n\}$ is bounded. Hence, it follows that $\{z_n\}$, $\{f(z_n)\}$, $\{h_n\}$, $\{f(h_n)\}$ and $\{w_n\}$ are also bounded.
Claim II
We show that
$$(1 - \tau L)\|y_n - w_n\|^2 + (1 - \tau L)\|y_n - x_{n+1}\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_5 \tag{41}$$
for some $\ell_5 > 0$. By Equation (23), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|f(h_n) - f(p) + f(p) - p\|^2 \le \big(k\|h_n - p\| + \|f(p) - p\|\big)^2 \\
&\le \|h_n - p\|^2 + 2\|h_n - p\|\,\|f(p) - p\| + \|f(p) - p\|^2 \le \|h_n - p\|^2 + \ell_2
\end{aligned} \tag{42}$$
for some $\ell_2 > 0$. From Equation (22), we have
$$\begin{aligned}
\|h_n - p\|^2 &= \|(1 - \beta_n) z_n + \beta_n f(z_n) - p\|^2 \le (1 - \beta_n)\|z_n - p\|^2 + \beta_n\|f(z_n) - p\|^2 \\
&\le (1 - \beta_n)\|z_n - p\|^2 + \beta_n\big(k\|z_n - p\| + \|f(p) - p\|\big)^2 \\
&\le (1 - \beta_n)\|z_n - p\|^2 + \beta_n\|z_n - p\|^2 + 2\beta_n\|z_n - p\|\,\|f(p) - p\| + \beta_n\|f(p) - p\|^2 \\
&= \|z_n - p\|^2 + \beta_n\big(2\|z_n - p\|\,\|f(p) - p\| + \|f(p) - p\|^2\big) \le \|z_n - p\|^2 + \beta_n \ell_3
\end{aligned} \tag{43}$$
for some $\ell_3 > 0$. Using Equation (32) in Equation (43), we obtain
$$\|h_n - p\|^2 \le \|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3. \tag{44}$$
From Equation (36), we have
$$\|w_n - p\| \le \|x_n - p\| + \beta_n \ell_1. \tag{45}$$
This implies that
$$\|w_n - p\|^2 \le \big(\|x_n - p\| + \beta_n \ell_1\big)^2 = \|x_n - p\|^2 + \beta_n\big(2\ell_1\|x_n - p\| + \beta_n \ell_1^2\big) \le \|x_n - p\|^2 + \beta_n \ell_4 \tag{46}$$
for some $\ell_4 > 0$. Combining Equations (44) and (46), we have
$$\|h_n - p\|^2 \le \|x_n - p\|^2 + \beta_n \ell_4 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3. \tag{47}$$
Using Equation (47) in Equation (42), we have
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 + \beta_n \ell_4 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3 + \beta_n \ell_2. \tag{48}$$
This implies that
$$(1 - \tau L)\|y_n - w_n\|^2 + (1 - \tau L)\|y_n - x_{n+1}\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_5, \tag{49}$$
where $\ell_5 := \ell_2 + \ell_3 + \ell_4$.
Claim III
We show that
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2k(1 - \beta_n(1 - k))\|x_n - p\|^2 \\
&\quad + 2\beta_n(1 - k)\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{n+1} - p \rangle + \frac{3D}{1 - k}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| + \frac{1}{\beta_n(1 - k)}\|f(p) - p\|^2\Big]
\end{aligned} \tag{50}$$
for some $D > 0$. Using Equations (10) and (18), we have
$$\begin{aligned}
\|w_n - p\|^2 &= \|x_n + \alpha_n(x_n - x_{n-1}) - p\|^2 = \|x_n - p\|^2 + 2\alpha_n\langle x_n - p, x_n - x_{n-1} \rangle + \alpha_n^2\|x_n - x_{n-1}\|^2 \\
&\le \|x_n - p\|^2 + 2\alpha_n\|x_n - p\|\,\|x_n - x_{n-1}\| + \alpha_n^2\|x_n - x_{n-1}\|^2.
\end{aligned} \tag{51}$$
By Equations (10) and (23), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|f(h_n) - f(p) + f(p) - p\|^2 = \|f(h_n) - f(p)\|^2 + \|f(p) - p\|^2 + 2\langle f(h_n) - f(p), f(p) - p \rangle \\
&\le k^2\|h_n - p\|^2 + \|f(p) - p\|^2 + 2\|f(h_n) - f(p)\|\,\|f(p) - p\| \\
&\le k^2\|h_n - p\|^2 + \|f(p) - p\|^2 + k^2\|h_n - p\|^2 + \|f(p) - p\|^2 \le 2k\|h_n - p\|^2 + 2\|f(p) - p\|^2.
\end{aligned} \tag{52}$$
Using Equations (9) and (22), we have
$$\begin{aligned}
\|h_n - p\|^2 &= \|\beta_n(f(z_n) - f(p)) + (1 - \beta_n)(z_n - p) + \beta_n(f(p) - p)\|^2 \\
&\le \|\beta_n(f(z_n) - f(p)) + (1 - \beta_n)(z_n - p)\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle \\
&\le \beta_n\|f(z_n) - f(p)\|^2 + (1 - \beta_n)\|z_n - p\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle \\
&\le \beta_n k\|z_n - p\|^2 + (1 - \beta_n)\|z_n - p\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle \\
&= (1 - \beta_n(1 - k))\|z_n - p\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle \\
&\le (1 - \beta_n(1 - k))\|w_n - p\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle.
\end{aligned} \tag{53}$$
Using Equation (51) in Equation (53), we have
$$\|h_n - p\|^2 \le (1 - \beta_n(1 - k))\|x_n - p\|^2 + 2\alpha_n\|x_n - p\|\,\|x_n - x_{n-1}\| + \alpha_n^2\|x_n - x_{n-1}\|^2 + 2\beta_n\langle f(p) - p, x_{n+1} - p \rangle. \tag{54}$$
Using Equation (54) in Equation (52), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2k(1 - \beta_n(1 - k))\|x_n - p\|^2 + 4k\alpha_n\|x_n - p\|\,\|x_n - x_{n-1}\| + 2k\alpha_n^2\|x_n - x_{n-1}\|^2 \\
&\quad + 4k\beta_n\langle f(p) - p, x_{n+1} - p \rangle + 2\|f(p) - p\|^2 \\
&= 2k(1 - \beta_n(1 - k))\|x_n - p\|^2 + 2k\alpha_n\|x_n - x_{n-1}\|\big(2\|x_n - p\| + \alpha_n\|x_n - x_{n-1}\|\big) \\
&\quad + 2(1 - k)\beta_n\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{n+1} - p \rangle + \frac{1}{\beta_n(1 - k)}\|f(p) - p\|^2\Big] \\
&\le 2k(1 - \beta_n(1 - k))\|x_n - p\|^2 + 6D\alpha_n\|x_n - x_{n-1}\| \\
&\quad + 2(1 - k)\beta_n\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{n+1} - p \rangle + \frac{1}{\beta_n(1 - k)}\|f(p) - p\|^2\Big] \\
&\le 2k(1 - \beta_n(1 - k))\|x_n - p\|^2 \\
&\quad + 2\beta_n(1 - k)\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{n+1} - p \rangle + \frac{3D}{1 - k}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| + \frac{1}{\beta_n(1 - k)}\|f(p) - p\|^2\Big],
\end{aligned} \tag{55}$$
where $D := \sup_{n \in \mathbb{N}}\{\|x_n - p\|, \alpha\|x_n - x_{n-1}\|\} > 0$.
Claim IV
We prove that the sequence $\{\|x_n - p\|^2\}$ converges to zero by considering two possible cases.
Case I
There exists a number $N \in \mathbb{N}$ such that $\|x_{n+1} - p\|^2 \le \|x_n - p\|^2$ for each $n \ge N$. This implies that $\lim_{n \to \infty}\|x_n - p\|$ exists, and by Claim II we have
$$\lim_{n \to \infty}\|y_n - w_n\| = 0, \quad \lim_{n \to \infty}\|y_n - x_{n+1}\| = 0. \tag{56}$$
Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to some $z \in H$ such that
$$\limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \lim_{k \to \infty}\langle f(p) - p, x_{n_k} - p \rangle = \langle f(p) - p, z - p \rangle. \tag{57}$$
Using Equation (56) and Lemma 3, we get $z \in \Gamma$. From Equation (57) and the fact that $p = P_{\Gamma} f(p)$, we get
$$\limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \langle f(p) - p, z - p \rangle \le 0. \tag{58}$$
Next, we prove that
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0. \tag{59}$$
Clearly,
$$\|w_n - x_n\| = \alpha_n\|x_n - x_{n-1}\| = \beta_n \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \to 0 \ \text{as} \ n \to \infty. \tag{60}$$
Combining Equations (56) and (60), we have
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - y_n\| + \|y_n - w_n\| + \|w_n - x_n\| \to 0 \ \text{as} \ n \to \infty. \tag{61}$$
Using Equations (58) and (59), we have
$$\limsup_{n \to \infty}\langle f(p) - p, x_{n+1} - p \rangle \le \limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \langle f(p) - p, z - p \rangle \le 0. \tag{62}$$
Hence, by Lemma 3 and Claim III, we have $\lim_{n \to \infty}\|x_n - p\| = 0$.
Case II
There exists a subsequence $\{\|x_{n_j} - p\|^2\}$ of $\{\|x_n - p\|^2\}$ satisfying $\|x_{n_j} - p\|^2 < \|x_{n_j + 1} - p\|^2$ for each $j \in \mathbb{N}$. Hence, by Lemma 2 there is a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ satisfying $\lim_{k \to \infty} m_k = \infty$ and, for every $k \in \mathbb{N}$,
$$\|x_{m_k} - p\|^2 \le \|x_{m_k + 1} - p\|^2, \quad \|x_k - p\|^2 \le \|x_{m_k + 1} - p\|^2. \tag{63}$$
By Claim II we get
$$(1 - \tau L)\|y_{m_k} - w_{m_k}\|^2 + (1 - \tau L)\|y_{m_k} - x_{m_k + 1}\|^2 \le \|x_{m_k} - p\|^2 - \|x_{m_k + 1} - p\|^2 + \beta_{m_k}\ell_5 \le \beta_{m_k}\ell_5. \tag{64}$$
Hence, we have
$$\lim_{k \to \infty}\|y_{m_k} - w_{m_k}\| = 0, \quad \lim_{k \to \infty}\|y_{m_k} - x_{m_k + 1}\| = 0. \tag{65}$$
By arguments similar to those in the proof of Case I, we have
$$\|x_{m_k + 1} - x_{m_k}\| \to 0 \ \text{as} \ k \to \infty \tag{66}$$
and
$$\limsup_{k \to \infty}\langle f(p) - p, x_{m_k + 1} - p \rangle \le 0. \tag{67}$$
By Claim III we obtain
$$\begin{aligned}
\|x_{m_k + 1} - p\|^2 &\le 2k(1 - \beta_{m_k}(1 - k))\|x_{m_k} - p\|^2 + 2\beta_{m_k}(1 - k)\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{m_k + 1} - p \rangle \\
&\quad + \frac{3D}{1 - k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\| + \frac{1}{\beta_{m_k}(1 - k)}\|f(p) - p\|^2\Big].
\end{aligned} \tag{68}$$
By Equations (63) and (68) we have
$$\begin{aligned}
\|x_{m_k + 1} - p\|^2 &\le 2k(1 - \beta_{m_k}(1 - k))\|x_{m_k + 1} - p\|^2 + 2\beta_{m_k}(1 - k)\Big[\frac{2k}{1 - k}\langle f(p) - p, x_{m_k + 1} - p \rangle \\
&\quad + \frac{3D}{1 - k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\| + \frac{1}{\beta_{m_k}(1 - k)}\|f(p) - p\|^2\Big]. \tag{69}
\end{aligned}$$
Hence, we have
$$\|x_{m_k + 1} - p\|^2 \le \frac{2k}{1 - k}\langle f(p) - p, x_{m_k + 1} - p \rangle + \frac{3D}{1 - k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\| + \frac{1}{\beta_{m_k}(1 - k)}\|f(p) - p\|^2. \tag{70}$$
Therefore, we obtain
$$\limsup_{k \to \infty}\|x_{m_k + 1} - p\| \le 0. \tag{71}$$
Combining Equations (63) and (71), we obtain $\limsup_{k \to \infty}\|x_k - p\| \le 0$; that is, $x_k \to p$. The proof of Theorem 1 is completed. □
Remark 1.
Suantai et al. [39] observed that condition (31) is easy to implement in numerical experiments, since the value of $\|x_n - x_{n-1}\|$ is known before $\alpha_n$ is chosen. For instance, $\alpha_n$ can be chosen as
$$\alpha_n = \begin{cases} \min\Big\{\alpha, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\ \alpha, & \text{otherwise}, \end{cases} \tag{72}$$
where $\alpha \ge 0$ and $\{\varepsilon_n\}$ is a positive sequence such that $\varepsilon_n = o(\beta_n)$.
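Indeed, with this choice condition (31) holds automatically: whenever $x_n \neq x_{n-1}$,
$$\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le \frac{1}{\beta_n}\cdot\frac{\varepsilon_n}{\|x_n - x_{n-1}\|}\cdot\|x_n - x_{n-1}\| = \frac{\varepsilon_n}{\beta_n} \to 0,$$
since $\varepsilon_n = o(\beta_n)$, while the left-hand side vanishes when $x_n = x_{n-1}$.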

3.2. Picard–Mann Hybrid Type Inertial Subgradient Extragradient Algorithm

We propose the following algorithm
Algorithm 3.2
Step 0: Given $\tau \in (0, \frac{1}{L})$, let $\{\alpha_n\} \subset [0, \alpha)$ for some $\alpha > 0$, $\{\lambda_n\} \subset (a, b) \subset (0, 1 - \beta_n)$ and $\{\beta_n\} \subset (0, 1)$ satisfy the following conditions:
$$\lim_{n \to \infty} \beta_n = 0, \quad \sum_{n=1}^{\infty} \beta_n = \infty.$$
Choose initial $x_0, x_1 \in C$ and set $n := 1$.
Step 1: Compute
$$w_n = x_n + \alpha_n (x_n - x_{n-1}), \tag{73}$$
$$y_n = P_C(w_n - \tau A w_n). \tag{74}$$
If $y_n = w_n$, then stop; $y_n$ is a solution of the (VIP). Otherwise, go to Step 2.
Step 2: Construct the half-space
$$T_n := \{z \in H : \langle w_n - \tau A w_n - y_n, z - y_n \rangle \le 0\} \tag{75}$$
and compute
$$z_n = P_{T_n}(w_n - \tau A y_n). \tag{76}$$
Step 3: Calculate
$$h_n = (1 - \lambda_n - \beta_n) x_n + \lambda_n z_n, \tag{77}$$
and compute
$$x_{n+1} = f(h_n). \tag{78}$$
Set $n := n + 1$ and return to Step 1.
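Only Step 3 differs from Algorithm 3.1. A hedged sketch of this step in Python, reusing $x_n$ and $z_n$ from Steps 1 and 2 (the function name is ours):

```python
def picard_mann_step(x, z, f, beta, lam):
    """Step 3 of Algorithm 3.2: Mann-type convex combination (77), then a
    Picard step (78); requires lam in (a, b) with (a, b) inside (0, 1 - beta)."""
    h = (1.0 - lam - beta) * x + lam * z   # h_n
    return f(h)                            # x_{n+1} = f(h_n)
```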
Next, we prove the following important result for Algorithm 3.2.
Theorem 2.
Suppose that $\{\alpha_n\}$ is a real sequence such that the following condition holds:
$$\lim_{n \to \infty} \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0. \tag{79}$$
Then the sequence $\{x_n\}$ generated by Algorithm 3.2 converges strongly to an element $p \in \Gamma$, where $\|p\| = \min\{\|z\| : z \in \Gamma\}$.
Proof. 
We now examine the following claims:
Claim I
We claim that the sequence $\{x_n\}$ is bounded. Using arguments similar to those in the proof of Theorem 1, we get
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2. \tag{80}$$
This implies that
$$\|z_n - p\| \le \|w_n - p\|. \tag{81}$$
Moreover, we have
$$\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n \ell_1 \tag{82}$$
for some $\ell_1 > 0$, and
$$\|x_{n+1} - p\| \le k\|h_n - p\| + \|f(p) - p\|. \tag{83}$$
Using Equation (77), we have
$$\|h_n - p\| = \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p) - \beta_n p\| \le \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\| + \beta_n\|p\|. \tag{84}$$
Using Equations (10) and (82), we have the following estimate:
$$\begin{aligned}
\|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\|^2 &= (1 - \lambda_n - \beta_n)^2\|x_n - p\|^2 + 2(1 - \lambda_n - \beta_n)\lambda_n\langle x_n - p, z_n - p \rangle + \lambda_n^2\|z_n - p\|^2 \\
&\le (1 - \lambda_n - \beta_n)^2\|x_n - p\|^2 + 2(1 - \lambda_n - \beta_n)\lambda_n\|x_n - p\|\,\|z_n - p\| + \lambda_n^2\|z_n - p\|^2 \\
&\le (1 - \lambda_n - \beta_n)^2\|x_n - p\|^2 + (1 - \lambda_n - \beta_n)\lambda_n\|x_n - p\|^2 \\
&\quad + (1 - \lambda_n - \beta_n)\lambda_n\|z_n - p\|^2 + \lambda_n^2\|z_n - p\|^2 \\
&= (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\|z_n - p\|^2.
\end{aligned} \tag{85}$$
This implies that
$$\begin{aligned}
\|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\|^2 &\le (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\big(\|x_n - p\| + \beta_n \ell_1\big)^2 \\
&\le (1 - \beta_n)^2\|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_1 + \beta_n^2\ell_1^2 \\
&= \big((1 - \beta_n)\|x_n - p\| + \beta_n \ell_1\big)^2.
\end{aligned} \tag{86}$$
This implies that
$$\|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\| \le (1 - \beta_n)\|x_n - p\| + \beta_n \ell_1. \tag{87}$$
Using Equation (87) in Equation (84), we get
$$\|h_n - p\| \le (1 - \beta_n)\|x_n - p\| + \beta_n \ell_1 + \beta_n\|p\| = (1 - \beta_n)\|x_n - p\| + \beta_n(\ell_1 + \|p\|). \tag{88}$$
Using Equation (88) in Equation (83), we have
$$\|x_{n+1} - p\| \le (1 - \beta_n)\|x_n - p\| + \beta_n(\ell_1 + \|p\|) + \|f(p) - p\| \le \max\{\|x_n - p\|, \ell_1 + \|p\|\} + \|f(p) - p\| \le \cdots \le \max\{\|x_0 - p\|, \ell_1 + \|p\|\} + \|f(p) - p\|. \tag{89}$$
Therefore, the sequence $\{x_n\}$ is bounded. It follows that $\{z_n\}$, $\{w_n\}$ and $\{h_n\}$ are all bounded.
Claim II
We want to show that
$$(1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 + (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_6 \tag{90}$$
for some $\ell_6 > 0$. From Equation (42), we have
$$\|x_{n+1} - p\|^2 \le \|h_n - p\|^2 + \ell_2 \tag{91}$$
for some $\ell_2 > 0$. Using Equations (10) and (77), we get
$$\begin{aligned}
\|h_n - p\|^2 &= \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p) - \beta_n p\|^2 \\
&= \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\|^2 - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p), p \rangle + \beta_n^2\|p\|^2 \\
&\le \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\|^2 + \beta_n \ell_3
\end{aligned} \tag{92}$$
for some $\ell_3 > 0$. Using Equation (85) in Equation (92), we get
$$\|h_n - p\|^2 \le (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\|z_n - p\|^2 + \beta_n \ell_3. \tag{93}$$
Using Equation (80) in Equation (93), we get
$$\begin{aligned}
\|h_n - p\|^2 &\le (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\big[\|w_n - p\|^2 - (1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \tau L)\|y_n - w_n\|^2\big] + \beta_n \ell_3 \\
&= (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\|w_n - p\|^2 \\
&\quad - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3.
\end{aligned} \tag{94}$$
From Equation (36), we get
$$\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n \ell_1. \tag{95}$$
This implies that
$$\|w_n - p\|^2 \le \|x_n - p\|^2 + \beta_n \ell_4 \tag{96}$$
for some $\ell_4 > 0$. Using Equation (96) in Equation (94), we get
$$\begin{aligned}
\|h_n - p\|^2 &\le (1 - \lambda_n - \beta_n)(1 - \beta_n)\|x_n - p\|^2 + (1 - \beta_n)\lambda_n\big[\|x_n - p\|^2 + \beta_n \ell_4\big] \\
&\quad - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3 \\
&= (1 - \beta_n)^2\|x_n - p\|^2 + \beta_n(1 - \beta_n)\lambda_n \ell_4 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 \\
&\quad - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_3 \\
&\le \|x_n - p\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_5
\end{aligned} \tag{97}$$
for some $\ell_5 > 0$. Using Equation (97) in Equation (91), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \|x_n - p\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_5 + \ell_2 \\
&\le \|x_n - p\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 - (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 + \beta_n \ell_6
\end{aligned} \tag{98}$$
for some $\ell_6 > 0$. This implies that
$$(1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - x_{n+1}\|^2 + (1 - \beta_n)\lambda_n(1 - \tau L)\|y_n - w_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_6. \tag{99}$$
Claim III
We want to show that
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2k\|x_n - p\|^2 + 2k\beta_n\Big[2(1 - \beta_n)\|x_n - p\|\ell_9 + \lambda_n\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\ell_9 \\
&\quad + \frac{1}{k\beta_n}\|f(p) - p\|^2 - 2\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle + 3\ell_9\Big]
\end{aligned} \tag{100}$$
for some $\ell_9 > 0$. Using Equations (10) and (78), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|f(h_n) - f(p) + f(p) - p\|^2 = \|f(h_n) - f(p)\|^2 + \|f(p) - p\|^2 + 2\langle f(h_n) - f(p), f(p) - p \rangle \\
&\le k^2\|h_n - p\|^2 + \|f(p) - p\|^2 + 2\|f(h_n) - f(p)\|\,\|f(p) - p\| \\
&\le k\|h_n - p\|^2 + \|f(p) - p\|^2 + \|f(h_n) - f(p)\|^2 + \|f(p) - p\|^2 \le 2k\|h_n - p\|^2 + 2\|f(p) - p\|^2.
\end{aligned} \tag{101}$$
Next, using Equations (10) and (77), we have the estimate
$$\begin{aligned}
\|h_n - p\|^2 &= \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p) - \beta_n p\|^2 \\
&= \|(1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p)\|^2 - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p), p \rangle + \beta_n^2\|p\|^2.
\end{aligned} \tag{102}$$
Using Equation (86) in Equation (102), we have
$$\begin{aligned}
\|h_n - p\|^2 &\le \big((1 - \beta_n)\|x_n - p\| + \beta_n \ell_7\big)^2 - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p) + \lambda_n(z_n - p), p \rangle + \beta_n^2\|p\|^2 \\
&\le (1 - \beta_n)\|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + \beta_n\ell_7^2 \\
&\quad - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle + 2\lambda_n\beta_n\langle p - z_n, p \rangle + \beta_n^2\|p\|^2 \\
&\le (1 - \beta_n)\|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + \beta_n\ell_7^2 \\
&\quad - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle + \lambda_n\beta_n\|z_n - p\|^2 + \lambda_n\beta_n\|p\|^2 + \beta_n^2\|p\|^2 \\
&\le (1 - \beta_n)\|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + \beta_n\ell_7^2 \\
&\quad - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle + \lambda_n\beta_n\|w_n - p\|^2 + \lambda_n\beta_n\|p\|^2 + \beta_n^2\|p\|^2
\end{aligned} \tag{103}$$
for some $\ell_7 > 0$. Next, we have
$$\begin{aligned}
\|w_n - p\|^2 &= \|(x_n - p) + \alpha_n(x_n - x_{n-1})\|^2 = \|x_n - p\|^2 + 2\alpha_n\langle x_n - p, x_n - x_{n-1} \rangle + \alpha_n^2\|x_n - x_{n-1}\|^2 \\
&\le \|x_n - p\|^2 + \alpha_n\|x_n - x_{n-1}\|\big(2\|x_n - p\| + \alpha_n\|x_n - x_{n-1}\|\big) \le \|x_n - p\|^2 + \alpha_n\|x_n - x_{n-1}\|\ell_8
\end{aligned} \tag{104}$$
for some $\ell_8 > 0$. Using Equation (104) in Equation (103), we have
$$\begin{aligned}
\|h_n - p\|^2 &\le (1 - \beta_n)\|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + \beta_n\ell_7^2 - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle \\
&\quad + \lambda_n\beta_n\|x_n - p\|^2 + \lambda_n\beta_n\alpha_n\|x_n - x_{n-1}\|\ell_8 + \lambda_n\beta_n\|p\|^2 + \beta_n^2\|p\|^2 \\
&\le \|x_n - p\|^2 + 2(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + \beta_n\ell_7^2 - 2\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle \\
&\quad + \lambda_n\beta_n\alpha_n\|x_n - x_{n-1}\|\ell_8 + \lambda_n\beta_n\|p\|^2 + \beta_n^2\|p\|^2.
\end{aligned} \tag{105}$$
Using Equation (105) in Equation (101), we have
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2k\|x_n - p\|^2 + 4k(1 - \beta_n)\beta_n\|x_n - p\|\ell_7 + 2k\beta_n\ell_7^2 - 4k\beta_n\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle \\
&\quad + 2k\lambda_n\beta_n\alpha_n\|x_n - x_{n-1}\|\ell_8 + 2k\lambda_n\beta_n\|p\|^2 + 2k\beta_n^2\|p\|^2 + 2\|f(p) - p\|^2 \\
&\le 2k\|x_n - p\|^2 + 2k\beta_n\Big[2(1 - \beta_n)\|x_n - p\|\ell_7 + \ell_7^2 - 2\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle \\
&\quad + \lambda_n\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\ell_8 + \lambda_n\|p\|^2 + \beta_n\|p\|^2 + \frac{1}{k\beta_n}\|f(p) - p\|^2\Big] \\
&\le 2k\|x_n - p\|^2 + 2k\beta_n\Big[2(1 - \beta_n)\|x_n - p\|\ell_9 + \lambda_n\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\ell_9 \\
&\quad + \frac{1}{k\beta_n}\|f(p) - p\|^2 - 2\langle (1 - \lambda_n - \beta_n)(x_n - p), p \rangle + 3\ell_9\Big]
\end{aligned} \tag{106}$$
for some $\ell_9 > 0$.
Claim IV
We prove that the sequence $\{\|x_n - p\|^2\}$ converges to zero by considering the following two cases.
Case I
There exists a number $N \in \mathbb{N}$ such that $\|x_{n+1} - p\|^2 \le \|x_n - p\|^2$ for every $n \ge N$. Hence $\lim_{n \to \infty}\|x_n - p\|$ exists, and by Claim II we have
$$\lim_{n \to \infty}\|y_n - w_n\| = 0, \quad \lim_{n \to \infty}\|y_n - x_{n+1}\| = 0. \tag{107}$$
Since the sequence $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to some $z \in H$ such that
$$\limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \lim_{k \to \infty}\langle f(p) - p, x_{n_k} - p \rangle = \langle f(p) - p, z - p \rangle. \tag{108}$$
Using Equation (107) and Lemma 3, we get $z \in \Gamma$. From Equation (108) and the fact that $p = P_{\Gamma} f(p)$, we get
$$\limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \langle f(p) - p, z - p \rangle \le 0. \tag{109}$$
Next, we prove that
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0. \tag{110}$$
Clearly,
$$\|w_n - x_n\| = \alpha_n\|x_n - x_{n-1}\| = \beta_n \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \to 0 \ \text{as} \ n \to \infty. \tag{111}$$
Combining Equations (107) and (111), we have
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - y_n\| + \|y_n - w_n\| + \|w_n - x_n\| \to 0 \ \text{as} \ n \to \infty. \tag{112}$$
Using Equations (109) and (110), we have
$$\limsup_{n \to \infty}\langle f(p) - p, x_{n+1} - p \rangle \le \limsup_{n \to \infty}\langle f(p) - p, x_n - p \rangle = \langle f(p) - p, z - p \rangle \le 0. \tag{113}$$
Hence, by Lemma 3 and Claim III, we have $\lim_{n \to \infty}\|x_n - p\| = 0$.
Case II
There exists a subsequence $\{\|x_{n_j} - p\|^2\}$ of $\{\|x_n - p\|^2\}$ satisfying $\|x_{n_j} - p\|^2 < \|x_{n_j + 1} - p\|^2$ for each $j \in \mathbb{N}$. Hence, by Lemma 2 there is a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ satisfying $\lim_{k \to \infty} m_k = \infty$ and, for every $k \in \mathbb{N}$,
$$\|x_{m_k} - p\|^2 \le \|x_{m_k + 1} - p\|^2, \quad \|x_k - p\|^2 \le \|x_{m_k + 1} - p\|^2. \tag{114}$$
By Claim II we get
$$(1 - \beta_{m_k})\lambda_{m_k}(1 - \tau L)\|y_{m_k} - x_{m_k + 1}\|^2 + (1 - \beta_{m_k})\lambda_{m_k}(1 - \tau L)\|y_{m_k} - w_{m_k}\|^2 \le \|x_{m_k} - p\|^2 - \|x_{m_k + 1} - p\|^2 + \beta_{m_k}\ell_6 \le \beta_{m_k}\ell_6. \tag{115}$$
Hence, we have
$$\lim_{k \to \infty}\|y_{m_k} - w_{m_k}\| = 0, \quad \lim_{k \to \infty}\|y_{m_k} - x_{m_k + 1}\| = 0. \tag{116}$$
By arguments similar to those in the proof of Case I, we have
$$\|x_{m_k + 1} - x_{m_k}\| \to 0 \ \text{as} \ k \to \infty \tag{117}$$
and
$$\limsup_{k \to \infty}\langle f(p) - p, x_{m_k + 1} - p \rangle \le 0. \tag{118}$$
By Claim III we obtain
$$\begin{aligned}
\|x_{m_k + 1} - p\|^2 &\le 2k\|x_{m_k} - p\|^2 + 2k\beta_{m_k}\Big[2(1 - \beta_{m_k})\|x_{m_k} - p\|\ell_9 + \lambda_{m_k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\|\ell_9 \\
&\quad + \frac{1}{k\beta_{m_k}}\|f(p) - p\|^2 - 2\langle (1 - \lambda_{m_k} - \beta_{m_k})(x_{m_k} - p), p \rangle + 3\ell_9\Big] \tag{119}
\end{aligned}$$
for some $\ell_9 > 0$. By Equations (114) and (119) we have
$$\begin{aligned}
\|x_{m_k + 1} - p\|^2 &\le 2k\|x_{m_k + 1} - p\|^2 + 2k\beta_{m_k}\Big[2(1 - \beta_{m_k})\|x_{m_k} - p\|\ell_9 + \lambda_{m_k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\|\ell_9 \\
&\quad + \frac{1}{k\beta_{m_k}}\|f(p) - p\|^2 - 2\langle (1 - \lambda_{m_k} - \beta_{m_k})(x_{m_k} - p), p \rangle + 3\ell_9\Big]. \tag{120}
\end{aligned}$$
Hence, we have
$$\|x_{m_k + 1} - p\|^2 \le 2(1 - \beta_{m_k})\|x_{m_k} - p\|\ell_9 + \lambda_{m_k}\cdot\frac{\alpha_{m_k}}{\beta_{m_k}}\|x_{m_k} - x_{m_k - 1}\|\ell_9 + \frac{1}{k\beta_{m_k}}\|f(p) - p\|^2 - 2\langle (1 - \lambda_{m_k} - \beta_{m_k})(x_{m_k} - p), p \rangle + 3\ell_9. \tag{121}$$
Therefore, we obtain
$$\limsup_{k \to \infty}\|x_{m_k + 1} - p\| \le 0. \tag{122}$$
Combining Equations (114) and (122), we obtain $\limsup_{k \to \infty}\|x_k - p\| \le 0$; that is, $x_k \to p$. The proof of Theorem 2 is completed. □

4. Numerical Illustrations

In this section, we consider two numerical examples to illustrate the convergence of Algorithms 3.1 and 3.2 and compare them with several well-known algorithms. All numerical illustrations were executed on an HP laptop with the following specifications: Intel(R) Core(TM) i5-6200U CPU at 2.3 GHz with 4 GB RAM. All codes were written in MATLAB 2015a. In the tables reporting our numerical results, 'Iter.', 'Sec.' and 'Error' denote the number of iterations, the CPU time in seconds and $\|x_{\text{Iter}} - x^*\|$, respectively. We choose $\beta_n = \frac{1}{n+1}$,
$$\alpha_n = \begin{cases} \min\Big\{\alpha_0, \dfrac{\beta_n^2}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\ \alpha_0, & \text{otherwise}, \end{cases}$$
and $f(x) = 0.5x$ for Algorithms 3.1 and 3.2, and $\lambda_n = 1 - \frac{1}{n}$ for Algorithm 3.2.
Example 1.
Suppose that $H = L^2([0, 1])$ with the inner product
$$\langle x, y \rangle := \int_0^1 x(t) y(t)\,dt, \quad x, y \in H,$$
and the induced norm
$$\|x\| := \Big(\int_0^1 |x(t)|^2\,dt\Big)^{1/2}, \quad x \in H.$$
Let $C := \{x \in H : \|x\| \le 1\}$ be the unit ball, define an operator $A : C \to H$ by
$$A x(t) = \max\{0, x(t)\},$$
and let $Q := \{x \in H : \langle a, x \rangle \le b\}$, where $0 \neq a \in H$ and $b \in \mathbb{R}$.
One can easily see that $A$ is 1-Lipschitz continuous and monotone on $C$. With these choices of $C$ and $A$, the solution set of the variational inequality problem (VIP) is $\Gamma = \{0\}$.
It is known that
$$P_C(x) = \begin{cases} \dfrac{x}{\|x\|_{L^2}}, & \text{if } \|x\|_{L^2} > 1, \\ x, & \text{if } \|x\|_{L^2} \le 1, \end{cases}$$
and
$$P_Q(x) = \begin{cases} x + \dfrac{b - \langle a, x \rangle}{\|a\|^2}\,a, & \text{if } \langle a, x \rangle > b, \\ x, & \text{if } \langle a, x \rangle \le b. \end{cases}$$
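In practice the $L^2([0, 1])$ setting is simulated on a uniform grid. A hedged sketch of this discretization in Python (the grid size and the simple Riemann-sum quadrature are our illustrative choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)          # uniform grid on [0, 1]
dt = t[1] - t[0]

def A(x):
    """A x(t) = max{0, x(t)}, applied pointwise on the grid."""
    return np.maximum(0.0, x)

def proj_C(x):
    """Projection onto the unit ball of L^2([0, 1])."""
    nrm = np.sqrt(np.sum(x**2) * dt)     # discretized L^2 norm
    return x / nrm if nrm > 1.0 else x

x0 = np.sin(3 * t) / 100                 # the starting point used in Figure 1
```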
Now, we apply Algorithm 3.1, Algorithm 3.2, Maingé's algorithm [37] and Kraikaew and Saejung's algorithm [40] to solve the (VIP). We choose $\alpha_n = \frac{1}{n+1}$ for Maingé's algorithm and Kraikaew and Saejung's algorithm, and $\tau = 0.5$ for all algorithms. For all algorithms, we stop when $\|x_n - 0\| < 10^{-4}$ or after at most 3000 iterations. The numerical results of all algorithms with different $x_0$ are reported in Table 1 below.
The convergence behaviour of the algorithms for different starting points is shown in Figure 1 and Figure 2. In these figures, the y-axis represents the error $\|x_n - 0\|$ and the x-axis the number of iterations.
Example 2.
Assume that $A : \mathbb{R}^m \to \mathbb{R}^m$ is defined by $A(x) = Mx + q$ with $M = BB^T + S + D$, where $S$ is an $m \times m$ skew-symmetric matrix, $B$ is an $m \times m$ matrix, $D$ is an $m \times m$ diagonal matrix whose diagonal entries are positive (so $M$ is positive definite), $q$ is a vector in $\mathbb{R}^m$ and
$$C := \{x \in \mathbb{R}^m : -5 \le x_i \le 5, \ i = 1, \ldots, m\}.$$
Clearly, the operator $A$ is monotone and Lipschitz continuous with Lipschitz constant $L = \|M\|$. Since $q = 0$, the unique solution of the corresponding (VIP) is $\{0\}$.
We compare Algorithms 3.1 and 3.2 with Tseng's extragradient method (TEGM) [41], the inertial Tseng extragradient algorithm (ITEGM) of Thong and Hieu [33], and the subgradient extragradient method (SEGM) of Censor et al. [5]. We choose $\tau = \frac{0.9}{L}$ for all algorithms, and $\alpha_n = \alpha = 0.99\,\frac{\sqrt{1 + 8\epsilon} - 1 - 2\epsilon}{2(1 - \epsilon)}$ with $\epsilon = \frac{1 - \lambda L}{1 + \lambda L}$ for the inertial Tseng extragradient algorithm. The starting point is $x_0 = (1, 1, \ldots, 1)^T \in \mathbb{R}^m$.
For the experiments, all entries of $B$, $S$ and $D$ are generated randomly from a normal distribution with mean zero and unit variance. For all algorithms, we stop when $\|x_n - 0\| < 10^{-4}$ or after at most 1000 iterations. The results are described in Table 2 and Figures 3 and 4.
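A hedged sketch of how the test problem of Example 2 can be generated in Python (the random seed and the way the positive diagonal of $D$ is produced are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)               # illustrative seed
m = 50
B = rng.standard_normal((m, m))
S0 = rng.standard_normal((m, m))
S = S0 - S0.T                                # skew-symmetric matrix
D = np.diag(np.abs(rng.standard_normal(m)))  # diagonal with positive entries
M = B @ B.T + S + D                          # positive definite, so A is monotone
L = np.linalg.norm(M, 2)                     # Lipschitz constant ||M||
A = lambda x: M @ x                          # q = 0, so the unique solution is 0
proj_C = lambda x: np.clip(x, -5.0, 5.0)     # projection onto the box C
x0 = np.ones(m)                              # starting point (1, ..., 1)^T
```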
Table 1 and Table 2 and Figures 1–4 report the errors of Maingé's algorithm [37], Kraikaew and Saejung's algorithm [40], Tseng's extragradient method (TEGM) [41], the inertial Tseng extragradient algorithm (ITEGM) [33], the subgradient extragradient method (SEGM) of Censor et al. [5] and Algorithms 3.1 and 3.2, as well as their execution times. They show that Algorithms 3.1 and 3.2 are less time consuming and more accurate than the methods of Maingé [37], Kraikaew and Saejung [40], Tseng [41], Thong and Hieu [33] and Censor et al. [5].

5. Conclusions

In this study, we developed two new iterative algorithms for solving K-pseudomonotone variational inequality problems in the framework of real Hilbert spaces. We established some strong convergence theorems for our proposed algorithms under certain conditions. Several numerical experiments showed that our proposed algorithms perform better than the methods of Maingé [37], Kraikaew and Saejung [40], Tseng [41], Thong and Hieu [33] and Censor et al. [5].

Author Contributions

All authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Basque Government: IT1207-19.

Acknowledgments

This paper was completed while the first author was visiting the Abdus Salam School of Mathematical Sciences (ASSMS), Government College University Lahore, Pakistan as a postdoctoral fellow. The authors wish to thank the referees for their comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Data Availability

The authors declare that all data relating to our results in this paper are available within the paper.

References

1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416.
2. Browder, F.E. The fixed point theory of multivalued mappings in topological vector spaces. Math. Ann. 1968, 177, 283–301.
3. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
4. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845.
5. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
6. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
7. Gibali, A.; Shehu, Y. An efficient iterative method for finding common fixed point and variational inequalities in Hilbert spaces. Optimization 2019, 68, 13–32.
8. Gibali, A.; Ha, N.H.; Thuong, N.T.; Trang, T.H.; Vinh, N.T. Polyak's gradient method for solving the split convex feasibility problem and its applications. J. Appl. Numer. Optim. 2019, 1, 145–156.
9. Khan, A.R.; Ugwunnadi, G.C.; Makukula, Z.G.; Abbas, M. Strong convergence of inertial subgradient extragradient method for solving variational inequality in Banach space. Carpathian J. Math. 2019, 35, 327–338.
10. Maingé, P.E. Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 2010, 205, 501–506.
11. Thong, D.V.; Vinh, N.T.; Cho, Y.J. Accelerated subgradient extragradient methods for variational inequality problems. J. Sci. Comput. 2019, 80, 1438–1462.
12. Wang, F.; Pham, H. On a new algorithm for solving variational inequality and fixed point problems. J. Nonlinear Var. Anal. 2019, 3, 225–233.
13. Wang, L.; Yu, L.; Li, T. Parallel extragradient algorithms for a family of pseudomonotone equilibrium problems and fixed point problems of nonself-nonexpansive mappings in Hilbert space. J. Nonlinear Funct. Anal. 2020, 2020, 13.
14. Alber, Y.I.; Iusem, A.N. Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set-Valued Anal. 2001, 9, 315–335.
15. Xiu, N.H.; Zhang, J.Z. Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 2003, 152, 559–587.
16. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
17. Farouq, N.E. Pseudomonotone variational inequalities: Convergence of proximal methods. J. Optim. Theory Appl. 2001, 109, 311–326.
18. Hadjisavvas, N.; Schaible, S.; Wong, N.-C. Pseudomonotone operators: A survey of the theory and its applications. J. Optim. Theory Appl. 2012, 152, 1–20.
19. Kien, B.T.; Lee, G.M. An existence theorem for generalized variational inequalities with discontinuous and pseudomonotone operators. Nonlinear Anal. 2011, 74, 1495–1500.
20. Yao, J.-C. Variational inequalities with generalized monotone operators. Math. Oper. Res. 1994, 19, 691–705.
21. Karamardian, S. Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 1976, 18, 445–454.
22. Attouch, H.; Goudou, X.; Redont, P. The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2000, 2, 1–34.
23. Attouch, H.; Czarnecki, M.O. Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J. Differ. Equ. 2002, 179, 278–310.
24. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
25. Maingé, P.E. Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set-Valued Anal. 2007, 15, 67–69.
26. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
27. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
28. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 2014, 24, 232–256.
29. Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267.
30. Bot, R.I.; Csetnek, E.R. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015, 36, 951–963.
31. Bot, R.I.; Csetnek, E.R. An inertial forward–backward–forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540.
32. Dong, L.Q.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
33. Thong, D.V.; Hieu, D.V. Modified Tseng's extragradient algorithms for variational inequality problems. J. Fixed Point Theory Appl. 2018, 20, 152.
34. Moudafi, A. Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 2000, 241, 46–55.
35. Khan, S.H. A Picard–Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 2013, 69.
36. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984.
37. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
38. Liu, L.S. Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194, 114–125.
39. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615.
40. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
41. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
Figure 1. Comparison of all algorithms with $x_0 = \frac{\sin(3t)}{100}$.
Figure 2. Comparison of all algorithms with $x_0 = \frac{\sin(3t) + \cos(10t)}{300}$.
Figure 3. Comparison of all algorithms with $m = 50$.
Figure 4. Comparison of all algorithms with $m = 100$.
Table 1. Numerical results obtained by all algorithms in Example 1 (left block: $x_0 = \frac{\sin(3t)}{100}$; right block: $x_0 = \frac{\sin(3t) + \cos(10t)}{300}$).

| Methods | Sec. | Iter. | Error | Sec. | Iter. | Error |
|---|---|---|---|---|---|---|
| Algorithm 3.1 | 0.0022 | 10 | $1.1891 \times 10^{-5}$ | 0.0018 | 9 | $4.8894 \times 10^{-5}$ |
| Algorithm 3.2 | 0.0019 | 8 | $4.7288 \times 10^{-5}$ | 0.0014 | 7 | $5.3503 \times 10^{-5}$ |
| Algorithm of Kraikaew and Saejung | 0.4063 | 2287 | $9.9981 \times 10^{-5}$ | 0.1719 | 1065 | $9.9924 \times 10^{-5}$ |
| Algorithm of Maingé | 0.1250 | 2287 | $9.9981 \times 10^{-5}$ | 0.0469 | 1065 | $9.9924 \times 10^{-5}$ |
Table 2. Numerical results obtained by all algorithms in Example 2 (left block: $m = 50$; right block: $m = 100$).

| Methods | Sec. | Iter. | Error | Sec. | Iter. | Error |
|---|---|---|---|---|---|---|
| Algorithm 3.1 | 0.08 | 10 | $6.9882 \times 10^{-5}$ | 0.14063 | 10 | $6.6947 \times 10^{-5}$ |
| Algorithm 3.2 | 0.078 | 8 | $9.0032 \times 10^{-5}$ | 0.1 | 9 | $9.9385 \times 10^{-5}$ |
| TEGM | 4.2438 | 1000 | 0.0849 | 9.4531 | 1000 | 0.2646 |
| ITEGM | 4.5188 | 1000 | 0.0790 | 9.6875 | 1000 | 0.2594 |
| SEGM | 4.3969 | 1000 | 0.0850 | 9.5156 | 1000 | 0.2647 |
