Article

Accelerated Modified Tseng’s Extragradient Method for Solving Variational Inequality Problems in Hilbert Spaces

Godwin Amechi Okeke, Mujahid Abbas, Manuel De la Sen and Hira Iqbal
1 Functional Analysis and Optimization Research Group Laboratory (FANORG), Department of Mathematics, School of Physical Sciences, Federal University of Technology Owerri, Owerri P.M.B. 1526, Nigeria
2 Department of Mathematics, Government College University Katchery Road, Lahore 54000, Pakistan
3 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4 Institute of Research and Development of Processes, Campus of Leioa (Bizkaia), University of the Basque Country, P.O. Box 644, Barrio Sarriena, 48940 Leioa, Spain
5 Department of Sciences and Humanities, Lahore Campus, National University of Computer and Emerging Sciences, Lahore 54000, Pakistan
* Author to whom correspondence should be addressed.
Axioms 2021, 10(4), 248; https://doi.org/10.3390/axioms10040248
Submission received: 5 August 2021 / Revised: 13 September 2021 / Accepted: 26 September 2021 / Published: 1 October 2021
(This article belongs to the Special Issue Advances in Nonlinear and Convex Analysis)

Abstract

The aim of this paper is to propose a new iterative algorithm to approximate the solution of a variational inequality problem in real Hilbert spaces. A strong convergence result for the above problem is established under certain mild conditions. Our proposed method requires the computation of only one projection onto the feasible set in each iteration. Some numerical examples are presented to show that our proposed method performs better than some known comparable methods for solving variational inequality problems.

1. Introduction

Suppose $C$ is a nonempty closed convex subset of a real Hilbert space $H$ with the inner product $\langle \cdot, \cdot \rangle$, which induces the norm $\|\cdot\|$, and $A$ is a self-mapping on $H$. The variational inequality problem (VIP) for an operator $A$ on $C \subseteq H$ is to find a point $x^* \in C$ such that the following is the case.
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \text{for each } x \in C. \tag{1}$$
In this paper, we denote the solution set of the (VIP) (1) by $\Gamma$.
The theory of variational inequality problems (VIP) was introduced by Stampacchia [1]. It has been proved that the (VIP) arises from various mathematical models connected with real-life problems. Over the years, the (VIP) has attracted the attention of well-known mathematicians due to its applications in several fields of interest, such as the sciences and engineering. Interest in the (VIP) stems from the fact that it is applicable to solving several real-life problems of physical interest, such as the problem of the steady filtration of a liquid through a porous membrane in several dimensions, the problem of lubrication, the motion of a fluid past a certain profile and the small deflections of an elastic beam (see, e.g., [2]).
The optimization problem comprises maximizing or minimizing some function $f$ over some set $S$. The function $f$ allows different options to be compared in order to determine which is best. We write the optimization problem as follows:
$$\operatorname*{optimize}_{x \in S} \, f(x),$$
where $\operatorname{optimize}$ stands for $\min$ or $\max$, and $f : \mathbb{R}^n \to \mathbb{R}$ denotes the objective function. We note that the optimal solutions of a maximization problem
$$\max_{x \in S} f(x)$$
coincide with the optimal solutions of the minimization problem
$$\min_{x \in S} \, (-f(x)),$$
and we have $\max_{x \in S} f(x) = -\min_{x \in S} (-f(x))$.
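This duality is easy to check numerically. The following minimal sketch (the interval $S$, the concave function $f$ and the grid search are illustrative choices made for this article, not taken from the paper) confirms that maximizing $f$ and negating the minimum of $-f$ agree:

```python
import numpy as np

# Sanity check of max_{x in S} f(x) = -min_{x in S} (-f(x)) on a grid.
# S = [0, 2] and f(x) = -(x - 1)^2 + 3 are illustrative choices only.
S = np.linspace(0.0, 2.0, 10001)
f = lambda x: -(x - 1.0) ** 2 + 3.0

max_f = f(S).max()                 # direct maximization over the grid
neg_min = -(-f(S)).min()           # -min of -f over the same grid
assert np.isclose(max_f, neg_min)  # the two formulations agree
print(max_f)                       # ~3.0, attained near x = 1
```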
The two popular methods for solving the (VIP) (1) are the projection method and the regularization method. Several authors have developed efficient iterative algorithms for solving the (VIP) (1). The projection-type methods are well developed in the literature (see, for example, [3,4,5,6,7]). The well-known projected gradient method, which is useful for minimizing $f(x)$ subject to $x \in C$, is given as follows:
$$x_{n+1} = P_C(x_n - \alpha_n \nabla f(x_n)), \quad n \ge 0, \tag{2}$$
where the real sequence $\{\alpha_n\}$ of parameters satisfies some conditions, $P_C$ is the well-known metric projection of vectors in $H$ onto $C$ and $\nabla f$ denotes the gradient of $f$. Interested readers may refer to [8] for a convergence analysis of the above method in the case when $f : H \to \mathbb{R}$ is convex and differentiable. The method (2) was extended to the (VIP) (1) by replacing the gradient of $f$ with the operator $A$ and generating a sequence $\{x_n\}$ as follows.
$$x_{n+1} = P_C(x_n - \alpha_n A x_n), \quad n \ge 0. \tag{3}$$
Note that the major disadvantage of this method is the requirement that the operator $A$ be strongly monotone or inverse strongly monotone ([9]) for the method to converge. In 1976, Korpelevich [10] removed this strong condition by introducing the extragradient method for solving saddle point problems. This well-known method was extended to solving variational inequality problems (VIP) in both Hilbert and Euclidean spaces (see [10,11]). For the convergence of this method, the only restrictions on the operator $A$ are monotonicity and $L$-Lipschitz continuity. The extragradient method is given as follows:
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ x_{n+1} = P_C(x_n - \lambda A y_n), \end{cases} \tag{4}$$
where $\lambda \in (0, \frac{1}{L})$. If the solution set $\Gamma$ of the (VIP) is nonempty, then the sequence $\{x_n\}$ generated by the iterative method (4) converges weakly to an element of $\Gamma$.
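For readers who want to experiment, here is a minimal sketch of the extragradient iteration (4); the operator $A$, the box $C$ and the parameter values are illustrative assumptions made for this article, not data from the paper:

```python
import numpy as np

# Extragradient method (4) on a toy VIP: two projections per iteration.
A = lambda x: np.array([x[0] + x[1], -x[0] + x[1]])  # monotone, Lipschitz operator
P_C = lambda x: np.clip(x, -1.0, 1.0)                # projection onto the box [-1, 1]^2

L = np.sqrt(2.0)            # a Lipschitz constant of A
lam = 0.5 / L               # step size lambda in (0, 1/L)
x = np.array([0.9, -0.7])   # starting point

for _ in range(200):
    y = P_C(x - lam * A(x))   # first projection onto C
    x = P_C(x - lam * A(y))   # second projection onto C
print(x)  # approaches x* = (0, 0), the solution of this toy VIP
```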
Clearly, with the extragradient method one needs to compute two projections onto the set $C$ in every iteration. It is well known that the projection onto a closed convex set $C \subseteq H$ is closely related to a minimum distance problem, which may require a prohibitive amount of computation time. To overcome this difficulty, Censor et al. [3] introduced the subgradient extragradient method by modifying the iterative algorithm (4). They replaced the two projections onto the set $C$ in the extragradient method (4) with one projection onto the set $C \subseteq H$ and one projection onto a half-space, which is easier to calculate.
The subgradient extragradient method of Censor et al. [3] is given as follows:
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ T_n = \{x \in H : \langle x_n - \lambda A x_n - y_n, x - y_n \rangle \le 0\}, \\ x_{n+1} = P_{T_n}(x_n - \lambda A y_n), \end{cases} \tag{5}$$
where $\lambda \in (0, \frac{1}{L})$. Several authors have studied the subgradient extragradient method and proved some useful and applicable results (see, for example, [7,12] and the references therein).
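The computational advantage of the half-space projection in (5) is that, unlike $P_C$ in general, it has a closed form. The helper below is a sketch written for this article (not library code): for $T = \{x \in H : \langle a, x - y \rangle \le 0\}$, one either keeps the point or moves it along $a$ onto the boundary of $T$:

```python
import numpy as np

def project_halfspace(u, a, y):
    """Project u onto the half-space {x : <a, x - y> <= 0}; assumes a != 0."""
    s = np.dot(a, u - y)
    if s <= 0.0:                         # u already lies in the half-space
        return u
    return u - (s / np.dot(a, a)) * a    # move along a onto the boundary

# In (5) one would take a = x_n - lam * A(x_n) - y_n and y = y_n.
u = np.array([2.0, 1.0]); a = np.array([1.0, 0.0]); y = np.zeros(2)
print(project_halfspace(u, a, y))        # [0. 1.]
```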
In 2000, Tseng [13] developed a method involving only one projection per iteration for solving the variational inequality problem (VIP) (1). Tseng's extragradient method is as follows:
$$\begin{cases} y_n = P_C(x_n - \lambda A x_n), \\ x_{n+1} = y_n - \lambda (A y_n - A x_n), \end{cases} \tag{6}$$
where $\lambda \in (0, \frac{1}{L})$. Recently, many well-known mathematicians have developed modified Tseng's extragradient methods for solving variational inequality problems (VIP) (see, e.g., [14,15,16] and the references therein).
The inertial-type iterative methods are based on a discrete version of a second-order dissipative dynamical system (see [7,17,18]). These methods can be seen as a means of accelerating the convergence of a given method (see, e.g., [19,20,21]). In 2001, Alvarez and Attouch [19] applied the inertial method to derive a proximal algorithm for solving the problem of finding a zero of a maximal monotone operator. Their method is given as follows.
Given $x_{n-1}, x_n \in H$ and two parameters $\theta_n \in [0, 1)$ and $\lambda_n > 0$, find $x_{n+1} \in H$ such that the following is the case:
$$0 \in \lambda_n A(x_{n+1}) + x_{n+1} - x_n - \theta_n (x_n - x_{n-1}). \tag{7}$$
The above method can be written equivalently as follows:
$$x_{n+1} = J_{\lambda_n}^{A} (x_n + \theta_n (x_n - x_{n-1})), \tag{8}$$
where $J_{\lambda_n}^{A}$ is the resolvent of the operator $A$ with the given parameter $\lambda_n$, and the inertia is induced by the term $\theta_n (x_n - x_{n-1})$.
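A minimal sketch of the inertial proximal iteration (8) is given below; the toy maximal monotone operator $A(x) = x$ on the real line (whose resolvent is $J_\lambda^A(x) = x/(1+\lambda)$) and the parameter values are illustrative assumptions, not prescriptions from the paper:

```python
# Inertial proximal point method (8) for the toy operator A(x) = x.
lam, theta = 1.0, 0.3              # theta in [0, 1/3) is a classical safe choice
J = lambda x: x / (1.0 + lam)      # resolvent J = (I + lam * A)^{-1}

x_prev, x = 5.0, 4.0               # two starting points x_0, x_1
for _ in range(50):
    w = x + theta * (x - x_prev)   # inertial extrapolation step
    x_prev, x = x, J(w)            # proximal step at the extrapolated point
print(x)                           # tends to 0, the unique zero of A
```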
Several algorithms with faster convergence rates obtained via inertial techniques have appeared in the literature recently (see, e.g., [22,23]). These include inertial forward-backward splitting methods ([24]), the inertial Douglas-Rachford splitting method ([25]), inertial ADMM ([26]), the inertial proximal-extragradient method ([27]), the inertial contraction method ([28]) and the inertial forward-backward-forward method ([29]), among others.
Motivated by the results above, we propose a new algorithm for solving variational inequality problems in real Hilbert spaces. Our proposed method combines the modified Tseng's extragradient method [13], the viscosity method [30] and the Picard–Mann method [31]. Our method requires the computation of only one projection onto the feasible set in each iteration. We establish a strong convergence theorem for the proposed algorithm under certain mild conditions. Furthermore, with the help of several numerical illustrations, we show that the proposed method performs better than some known methods for solving variational inequality problems.
This paper is organized as follows: In Section 2, some preliminary definitions and known results that are needed in this study are given. In Section 3, a modified Tseng's extragradient algorithm is proposed, and a strong convergence theorem for the method is presented. In Section 4, some numerical illustrations are given to show that the method presented herein performs better than some existing methods. Section 5 contains the concluding remarks of this paper.

2. Preliminaries

Let $H$ be a real Hilbert space. We recall the following definitions.
Definition 1.
A mapping $A : H \to H$ is said to be:
(i)
$L$-Lipschitz continuous with $L > 0$ if the following is the case.
$$\|Ax - Ay\| \le L \|x - y\| \quad \text{for all } x, y \in H.$$
If $L \in [0, 1)$, then $A$ is called a contraction mapping. If $L = 1$, then $A$ is called a nonexpansive mapping.
(ii)
monotone if the following is the case.
$$\langle Ax - Ay, x - y \rangle \ge 0 \quad \text{for all } x, y \in H.$$
(iii)
strictly monotone if for any $x \ne y$, the following is the case:
$$\langle x - y, A(x) - A(y) \rangle > 0,$$
and equality is possible only if $x = y$.
(iv)
strongly monotone if for any $x, y \in H$, the following is the case:
$$\langle x - y, A(x) - A(y) \rangle \ge \alpha(\|x - y\|) \, \|x - y\|,$$
where the nonnegative function $\alpha(t)$, defined for $t \ge 0$, satisfies the conditions $\alpha(0) = 0$ and $\alpha(t) \to \infty$ as $t \to \infty$.
(v)
pseudomonotone if the following is the case.
$$\langle A(y), x - y \rangle \ge 0 \implies \langle A(x), x - y \rangle \ge 0 \quad \text{for all } x, y \in H.$$
For every $x \in H$, there exists a unique point $P_C x$ in $C \subseteq H$ such that the following is the case:
$$\|x - P_C x\| \le \|x - y\|$$
for each $y \in C$ (see, e.g., [32]). The mapping $P_C$ is known as the metric projection of $H$ onto $C \subseteq H$. It is known that the mapping $P_C$ is nonexpansive.
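For common feasible sets, $P_C$ is available in closed form; the two helpers below (illustrative code written for this article, since the paper only uses $P_C$ abstractly) cover the box and ball constraints that appear later in the numerical examples:

```python
import numpy as np

def project_box(x, lo, hi):
    """P_C for a box C = [lo_1, hi_1] x ... x [lo_m, hi_m]: clip coordinatewise."""
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    """P_C for a ball C = {x : ||x|| <= radius}: rescale if x lies outside."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

print(project_box(np.array([1.7, -0.4]), -1.0, 1.0))  # [ 1.  -0.4]
print(project_ball(np.array([3.0, 4.0])))             # [0.6 0.8]
```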
Next, we recall the following lemmas which will be useful in this paper.
Lemma 1
([32]). Given that $C$ is a closed convex subset of a real Hilbert space $H$ and $x \in H$, we have the following:
(i)
$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$ for all $y \in H$;
(ii)
$\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2$ for all $y \in C$;
(iii)
$z = P_C x$ if and only if the following is the case:
$$\langle x - z, z - y \rangle \ge 0$$
for all $y \in C$.
For more properties of the metric projection P C , the interested reader may refer to Section 3 of [32].
Let $A : H \to H$. The fixed point problem (FP) is formulated as follows.
$$\text{Find } x \in H \text{ such that } x = A(x).$$
The set of fixed points of the operator $A$ is denoted by $F(A)$, and we assume that $F(A) \ne \emptyset$. Our interest in this paper is to find a point $x \in H$ such that the following is the case.
$$x \in \Gamma \cap F(A).$$
The weak convergence of the sequence $\{x_n\}$ to $x$ is denoted by $x_n \rightharpoonup x$ as $n \to \infty$, and we denote the strong convergence of $\{x_n\}$ to $x$ by $x_n \to x$ as $n \to \infty$.
For each $x, y \in H$ and $\alpha \in \mathbb{R}$, we recall the following inequalities in Hilbert spaces:
$$\|\alpha x + (1 - \alpha) y\|^2 = \alpha \|x\|^2 + (1 - \alpha) \|y\|^2 - \alpha (1 - \alpha) \|x - y\|^2, \tag{10}$$
$$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle, \tag{11}$$
$$\|x + y\|^2 = \|x\|^2 + 2 \langle x, y \rangle + \|y\|^2. \tag{12}$$
The following lemmas will be needed in this paper.
Lemma 2
([33,34]). Let $\{a_n\}$ be a sequence of nonnegative real numbers, let $\{\alpha_n\}$ denote a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty} \alpha_n = \infty$, and let $\{b_n\}$ denote a sequence of real numbers. We will assume that the following is the case.
$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n b_n, \quad n \ge 1.$$
If $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k \to \infty} (a_{n_k + 1} - a_{n_k}) \ge 0$, then $\lim_{n \to \infty} a_n = 0$.

3. Main Results

We assume that the following condition is satisfied.
Condition 1.
The feasible set $C$ is a nonempty, closed and convex subset of the real Hilbert space $H$. The mapping $A : H \to H$ is monotone and $L$-Lipschitz continuous on $H$, the solution set $\Gamma$ of the (VIP) (1) is nonempty, and $f : H \to H$ is a contraction mapping with contraction parameter $k \in [0, 1)$.
We now propose the following algorithm.
  • Step 0: Given $\{\alpha_n\} \subset [0, \alpha)$ for some $\alpha > 0$, $\lambda \in (0, \frac{1}{L})$, $\{\theta_n\} \subset (a, b) \subset (0, 1 - \beta_n)$ and $\{\beta_n\} \subset (0, 1)$ satisfying the conditions
    $$\lim_{n \to \infty} \beta_n = 0, \qquad \sum_{n=1}^{\infty} \beta_n = \infty,$$
    choose the initial points $x_0, x_1 \in C$ and set $n := 1$.
  • Step 1: Set
    $$w_n = x_n + \alpha_n (x_n - x_{n-1}) \tag{18}$$
    and compute the following.
    $$y_n = P_C(w_n - \lambda A w_n). \tag{19}$$
    If $y_n = w_n$, then stop the computation; $y_n$ is a solution of the (VIP). Otherwise, proceed to Step 2.
  • Step 2: Set
    $$h_n = (1 - \theta_n - \beta_n) f(x_n) + \theta_n z_n \tag{20}$$
    and compute the following:
    $$x_{n+1} = f(h_n), \tag{21}$$
    where
    $$z_n = y_n - \lambda (A y_n - A w_n). \tag{22}$$
    Set $n := n + 1$ and return to Step 1.
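The steps above translate directly into code. The sketch below is our illustrative NumPy transcription of Steps 0–2; the operator `A`, contraction `f`, projection `P_C` and parameter choices in the demo are patterned on Example 1 of Section 4 and are assumptions for illustration, not a definitive implementation:

```python
import numpy as np

def vitem(A, f, P_C, x0, x1, lam, alpha=0.5, n_iter=100, tol=1e-10):
    """Sketch of the proposed algorithm; beta_n and theta_n as in Section 4."""
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        beta = 1.0 / (n + 2)                         # beta_n -> 0, sum beta_n = inf
        theta = 0.5 * (1.0 - beta)                   # theta_n in (0, 1 - beta_n)
        w = x + alpha * (x - x_prev)                 # Step 1: inertial point (18)
        y = P_C(w - lam * A(w))                      # projection step (19)
        if np.linalg.norm(y - w) < tol:              # y_n = w_n: solution found
            return y
        z = y - lam * (A(y) - A(w))                  # Tseng-type correction (22)
        h = (1.0 - theta - beta) * f(x) + theta * z  # viscosity combination (20)
        x_prev, x = x, f(h)                          # Picard step (21)
    return x

# Demo patterned on Example 1: A x = 2x, f(x) = x/4, C = [-1, 2], lam = 0.4.
A = lambda x: 2.0 * x
f = lambda x: x / 4.0
P_C = lambda x: np.clip(x, -1.0, 2.0)
print(vitem(A, f, P_C, np.array([0.1]), np.array([0.5]), lam=0.4))  # near 0
```

A constant inertial weight `alpha` is used here only for simplicity; condition (23) of Theorem 1 below is guaranteed by the adaptive rule discussed in Remark 1.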
Next, we prove the following results.
Theorem 1.
Suppose Condition 1 holds and the following is the case.
$$\lim_{n \to \infty} \frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| = 0. \tag{23}$$
Then the sequence $\{x_n\}$ generated by the algorithm converges strongly to an element $p \in \Gamma$ such that $p = P_{\Gamma} f(p)$.
Proof. 
Claim I: We claim that the sequence $\{x_n\}$ is bounded, where $p = P_{\Gamma} f(p)$. We begin by proving the following.
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - (1 - \lambda^2 L^2) \|y_n - w_n\|^2. \tag{24}$$
Using (22) together with (10), (11), (19) and Lemma 1 (ii), we have the following.
$$\begin{aligned}
\|z_n - p\|^2 &= \|y_n - \lambda (A y_n - A w_n) - p\|^2 \\
&= \|y_n - p\|^2 + \lambda^2 \|A y_n - A w_n\|^2 - 2 \lambda \langle y_n - p, A y_n - A w_n \rangle \\
&= \|P_C(w_n - \lambda A w_n) - p\|^2 + \lambda^2 \|A y_n - A w_n\|^2 - 2 \lambda \langle y_n - p, A y_n - A w_n \rangle \\
&\le \|w_n - \lambda A w_n - p\|^2 - \|w_n - \lambda A w_n - P_C(w_n - \lambda A w_n)\|^2 + \lambda^2 \|A y_n - A w_n\|^2 \\
&\quad - 2 \lambda \langle y_n - p, A y_n - A w_n \rangle \\
&= \|w_n - \lambda A w_n - p\|^2 - \|w_n - \lambda A w_n - y_n\|^2 + \lambda^2 \|A y_n - A w_n\|^2 - 2 \lambda \langle y_n - p, A y_n - A w_n \rangle \\
&= \|w_n - p\|^2 - 2 \lambda \langle A w_n, w_n - p - \lambda A w_n \rangle - \|w_n - y_n\|^2 + 2 \lambda \langle A w_n, w_n - y_n - \lambda A w_n \rangle \\
&\quad + \lambda^2 \|A y_n - A w_n\|^2 - 2 \lambda \langle y_n - p, A y_n - A w_n \rangle \\
&= \|w_n - p\|^2 - 2 \lambda \langle A y_n, y_n - p \rangle - \|w_n - y_n\|^2 + \lambda^2 \|A y_n - A w_n\|^2 \\
&= \|w_n - p\|^2 - 2 \lambda \langle A y_n - A p, y_n - p \rangle - 2 \lambda \langle A p, y_n - p \rangle - \|w_n - y_n\|^2 + \lambda^2 \|A y_n - A w_n\|^2.
\end{aligned} \tag{25}$$
Since $p \in \Gamma$ and $y_n \in C$, we have $\langle A p, y_n - p \rangle \ge 0$; hence, using the fact that the mapping $A$ is monotone and $L$-Lipschitz continuous, we have the following from inequality (25).
$$\begin{aligned}
\|z_n - p\|^2 &\le \|w_n - p\|^2 - \|w_n - y_n\|^2 + \lambda^2 \|A y_n - A w_n\|^2 \\
&\le \|w_n - p\|^2 - \|w_n - y_n\|^2 + \lambda^2 L^2 \|y_n - w_n\|^2 \\
&= \|w_n - p\|^2 - (1 - \lambda^2 L^2) \|y_n - w_n\|^2.
\end{aligned} \tag{26}$$
This implies the following.
$$\|z_n - p\| \le \|w_n - p\|. \tag{27}$$
By (18), we have the following estimate.
$$\|w_n - p\| = \|x_n + \alpha_n (x_n - x_{n-1}) - p\| \le \|x_n - p\| + \alpha_n \|x_n - x_{n-1}\| = \|x_n - p\| + \beta_n \cdot \frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\|. \tag{28}$$
Using the condition that $\frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| \to 0$ in (23), it follows that there exists a constant $\ell_1 > 0$ such that the following is the case.
$$\frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| \le \ell_1 \quad \text{for all } n \ge 1. \tag{29}$$
Using (28) and (29) in (27), we have the following.
$$\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n \ell_1. \tag{30}$$
Using (21) and the condition that $f$ is a contraction mapping, we have the following.
$$\begin{aligned}
\|x_{n+1} - p\| &= \|f(h_n) - p\| = \|f(h_n) - f(p) + f(p) - p\| \\
&\le \|f(h_n) - f(p)\| + \|f(p) - p\| \le k \|h_n - p\| + \|f(p) - p\|.
\end{aligned} \tag{31}$$
Next, by using (20) together with (30), we have the following:
$$\begin{aligned}
\|h_n - p\| &= \|(1 - \theta_n - \beta_n) f(x_n) + \theta_n z_n - p\| = \|(1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p) - \beta_n p\| \\
&\le \|(1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p)\| + \beta_n \|p\| \\
&\le (1 - \theta_n - \beta_n) \|f(x_n) - p\| + \theta_n \|z_n - p\| + \beta_n \|p\| \\
&\le (1 - \theta_n - \beta_n) \|f(x_n) - f(p)\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \theta_n \|z_n - p\| + \beta_n \|p\| \\
&\le k (1 - \theta_n - \beta_n) \|x_n - p\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \theta_n \|z_n - p\| + \beta_n \|p\| \\
&\le (1 - \theta_n - \beta_n) \|x_n - p\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \theta_n \big[ \|x_n - p\| + \beta_n \ell_1 \big] + \beta_n \|p\| \\
&= (1 - \beta_n) \|x_n - p\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \beta_n (\theta_n \ell_1 + \|p\|) \\
&\le (1 - \beta_n) \|x_n - p\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \beta_n \ell_2,
\end{aligned} \tag{32}$$
for some $\ell_2 > 0$.
Using (32) in (31), we have the following:
$$\begin{aligned}
\|x_{n+1} - p\| &\le k (1 - \beta_n) \|x_n - p\| + k (1 - \theta_n - \beta_n) \|f(p) - p\| + \beta_n k \ell_2 + \|f(p) - p\| \\
&\le (1 - \beta_n) \|x_n - p\| + (1 - \theta_n - \beta_n) \|f(p) - p\| + \beta_n k \ell_2 + \|f(p) - p\| \\
&= (1 - \beta_n) \|x_n - p\| + (2 - \theta_n - \beta_n) \|f(p) - p\| + \beta_n k \ell_2 \\
&\le (1 - \beta_n) \|x_n - p\| + \beta_n (1 - k) \, \frac{\ell_3 + 2 \|f(p) - p\|}{1 - k} \\
&\le \max \Big\{ \|x_n - p\|, \frac{\ell_3 + 2 \|f(p) - p\|}{1 - k} \Big\} \le \cdots \le \max \Big\{ \|x_0 - p\|, \frac{\ell_3 + 2 \|f(p) - p\|}{1 - k} \Big\},
\end{aligned} \tag{33}$$
for some $\ell_3 > 0$. This implies that the sequence $\{x_n\}$ is bounded. Therefore, it follows that $\{z_n\}$, $\{h_n\}$, $\{f(h_n)\}$ and $\{w_n\}$ are bounded.
Claim II: We have the following case:
$$(1 - \beta_n)(1 - \lambda^2 L^2) \|y_n - w_n\|^2 \le 3 (1 - \beta_n) \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_{10},$$
for some $\ell_{10} > 0$. By using (21) together with (11), we obtain the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|f(h_n) - p\|^2 = \|f(h_n) - f(p) + f(p) - p\|^2 \\
&= \|f(h_n) - f(p)\|^2 + \|f(p) - p\|^2 + 2 \langle f(h_n) - f(p), f(p) - p \rangle \\
&\le k^2 \|h_n - p\|^2 + \|f(p) - p\|^2 + 2 \|f(h_n) - f(p)\| \|f(p) - p\| \\
&\le k \|h_n - p\|^2 + \|f(p) - p\|^2 + 2 k \|h_n - p\| \|f(p) - p\| \\
&\le k \|h_n - p\|^2 + k \ell_4,
\end{aligned} \tag{34}$$
for some $\ell_4 > 0$. By using (20) together with (10) and (11), we obtain the following:
$$\begin{aligned}
\|h_n - p\|^2 &= \|(1 - \theta_n - \beta_n) f(x_n) + \theta_n z_n - p\|^2 = \|(1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p) - \beta_n p\|^2 \\
&= \|(1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p)\|^2 - 2 \beta_n \langle (1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p), p \rangle + \beta_n^2 \|p\|^2 \\
&\le \|(1 - \theta_n - \beta_n)(f(x_n) - p) + \theta_n (z_n - p)\|^2 + \beta_n \ell_5 \\
&\le (1 - \theta_n - \beta_n)^2 \|f(x_n) - p\|^2 + 2 (1 - \theta_n - \beta_n) \theta_n \|f(x_n) - p\| \|z_n - p\| + \theta_n^2 \|z_n - p\|^2 + \beta_n \ell_5 \\
&\le (1 - \theta_n - \beta_n)^2 \|f(x_n) - p\|^2 + (1 - \theta_n - \beta_n) \theta_n \|f(x_n) - p\|^2 + (1 - \theta_n - \beta_n) \theta_n \|z_n - p\|^2 + \theta_n^2 \|z_n - p\|^2 + \beta_n \ell_5 \\
&\le (1 - \theta_n - \beta_n)(1 - \beta_n) \|f(x_n) - p\|^2 + (1 - \beta_n) \theta_n \|z_n - p\|^2 + \beta_n \ell_5,
\end{aligned} \tag{35}$$
for some $\ell_5 > 0$. Next, we have the following estimate:
$$\begin{aligned}
\|f(x_n) - p\|^2 &= \|f(x_n) - f(p) + f(p) - p\|^2 = \|f(x_n) - f(p)\|^2 + \|f(p) - p\|^2 + 2 \langle f(x_n) - f(p), f(p) - p \rangle \\
&\le \|f(x_n) - f(p)\|^2 + \|f(p) - p\|^2 + 2 \|f(x_n) - f(p)\| \|f(p) - p\| \\
&\le k^2 \|x_n - p\|^2 + \|f(p) - p\|^2 + \|f(x_n) - f(p)\|^2 + \|f(p) - p\|^2 \\
&\le k^2 \|x_n - p\|^2 + \|f(p) - p\|^2 + k^2 \|x_n - p\|^2 + \|f(p) - p\|^2 \\
&\le 2 k \|x_n - p\|^2 + 2 \|f(p) - p\|^2 \le 2 k \|x_n - p\|^2 + \ell_6,
\end{aligned} \tag{36}$$
for some $\ell_6 > 0$. Hence, by combining (36) and (35), we have the following:
$$\|h_n - p\|^2 \le 2 k (1 - \theta_n - \beta_n)(1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \theta_n \|z_n - p\|^2 + \beta_n \ell_7, \tag{37}$$
for some $\ell_7 > 0$. By using (26) and (37) in (34), we obtain the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2 k^2 (1 - \theta_n - \beta_n)(1 - \beta_n) \|x_n - p\|^2 + k (1 - \beta_n) \theta_n \|z_n - p\|^2 + k \beta_n \ell_7 + k \ell_4 \\
&\le 2 k (1 - \theta_n - \beta_n)(1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \theta_n \|z_n - p\|^2 + \beta_n \ell_8 \\
&\le 2 k (1 - \theta_n - \beta_n)(1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \theta_n \big[ \|w_n - p\|^2 - (1 - \lambda^2 L^2) \|y_n - w_n\|^2 \big] + \beta_n \ell_8 \\
&\le 2 (1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \|w_n - p\|^2 - (1 - \beta_n)(1 - \lambda^2 L^2) \|y_n - w_n\|^2 + \beta_n \ell_8,
\end{aligned} \tag{38}$$
for some $\ell_8 > 0$. From (30), we obtain the following:
$$\|w_n - p\|^2 \le (\|x_n - p\| + \beta_n \ell_1)^2 = \|x_n - p\|^2 + \beta_n (2 \ell_1 \|x_n - p\| + \beta_n \ell_1^2) \le \|x_n - p\|^2 + \beta_n \ell_9, \tag{39}$$
for some $\ell_9 > 0$. By using (39) in (38), we obtain the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 2 (1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \|x_n - p\|^2 + (1 - \beta_n) \beta_n \ell_9 + \beta_n \ell_8 - (1 - \beta_n)(1 - \lambda^2 L^2) \|y_n - w_n\|^2 \\
&\le 3 (1 - \beta_n) \|x_n - p\|^2 + \beta_n \ell_{10} - (1 - \beta_n)(1 - \lambda^2 L^2) \|y_n - w_n\|^2,
\end{aligned} \tag{40}$$
for some $\ell_{10} > 0$. This implies that the following is the case:
$$(1 - \beta_n)(1 - \lambda^2 L^2) \|y_n - w_n\|^2 \le 3 (1 - \beta_n) \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n \ell_{10}, \tag{41}$$
for some $\ell_{10} > 0$.
Claim III: We have the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le k (1 - \beta_n) \|x_n - p\|^2 + (1 - k) \beta_n \Big[ \frac{3 D (1 - \beta_n)}{1 - k} \cdot \frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| \Big] \\
&\quad + 2 (1 - k) \Big[ \frac{1}{1 - k} \langle f(p) - p, x_{n+1} - p \rangle + \frac{2 (1 - \theta_n)}{1 - k} \big( \|x_n - p\|^2 + \|f(p) - p\|^2 \big) \Big],
\end{aligned}$$
for some $D > 0$.
From (10), we obtain the following.
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|f(h_n) - p\|^2 = \|f(h_n) - f(p) + f(p) - p\|^2 \\
&\le \|f(h_n) - f(p)\|^2 + 2 \langle f(p) - p, x_{n+1} - p \rangle \\
&\le k^2 \|h_n - p\|^2 + 2 \langle f(p) - p, x_{n+1} - p \rangle \le k \|h_n - p\|^2 + 2 \langle f(p) - p, x_{n+1} - p \rangle.
\end{aligned} \tag{42}$$
Next, we have the following estimate.
$$\begin{aligned}
\|h_n - p\|^2 &= \|(1 - \theta_n - \beta_n) f(x_n) + \theta_n z_n - p\|^2 \\
&= (1 - \theta_n)^2 \|f(x_n) - f(p) + f(p) - p\|^2 + \|\theta_n (z_n - p) - \beta_n (f(x_n) - p)\|^2 \\
&\quad + 2 (1 - \theta_n) \langle f(x_n) - p, \theta_n (z_n - p) - \beta_n (f(x_n) - p) \rangle \\
&\le (1 - \theta_n) \|f(x_n) - f(p)\|^2 + (1 - \theta_n) \|f(p) - p\|^2 + 2 (1 - \theta_n) \langle f(x_n) - f(p), f(p) - p \rangle \\
&\quad + \theta_n^2 \|z_n - p\|^2 + \beta_n^2 \|f(x_n) - p\|^2 - 2 \beta_n \theta_n \langle z_n - p, f(x_n) - p \rangle \\
&\quad + 2 \theta_n (1 - \theta_n) \langle f(x_n) - p, z_n - p \rangle - 2 \beta_n (1 - \theta_n) \|f(x_n) - p\|^2 \\
&\le k^2 (1 - \theta_n) \|x_n - p\|^2 + (1 - \theta_n) \|f(p) - p\|^2 + 2 (1 - \theta_n) \|f(x_n) - f(p)\| \|f(p) - p\| \\
&\quad + \theta_n \|z_n - p\|^2 + \beta_n \|f(x_n) - p\|^2 + 2 \theta_n (1 - \theta_n - \beta_n) \|z_n - p\| \|f(x_n) - p\| \\
&\le k (1 - \theta_n) \|x_n - p\|^2 + 2 (1 - \theta_n) \|f(p) - p\|^2 + (1 - \theta_n) \|f(x_n) - f(p)\|^2 + \theta_n \|z_n - p\|^2 \\
&\quad + \beta_n \|f(x_n) - f(p) + f(p) - p\|^2 + \theta_n (1 - \theta_n - \beta_n) \|z_n - p\|^2 + \theta_n (1 - \theta_n - \beta_n) \|f(x_n) - p\|^2 \\
&\le 2 k (1 - \theta_n) \|x_n - p\|^2 + 2 (1 - \theta_n) \|f(p) - p\|^2 + (1 - \beta_n) \|z_n - p\|^2 \\
&\quad + 2 \beta_n k \|x_n - p\|^2 + 2 \beta_n \|f(p) - p\|^2 + 2 k (1 - \theta_n - \beta_n) \|x_n - p\|^2 + 2 (1 - \theta_n - \beta_n) \|f(p) - p\|^2 \\
&\le 4 (1 - \theta_n) \|x_n - p\|^2 + 4 (1 - \theta_n) \|f(p) - p\|^2 + (1 - \beta_n) \|z_n - p\|^2.
\end{aligned} \tag{43}$$
By using (18), we obtain the following.
$$\begin{aligned}
\|w_n - p\|^2 &= \|x_n + \alpha_n (x_n - x_{n-1}) - p\|^2 = \|x_n - p\|^2 + 2 \alpha_n \langle x_n - p, x_n - x_{n-1} \rangle + \alpha_n^2 \|x_n - x_{n-1}\|^2 \\
&\le \|x_n - p\|^2 + 2 \alpha_n \|x_n - p\| \|x_n - x_{n-1}\| + \alpha_n^2 \|x_n - x_{n-1}\|^2.
\end{aligned} \tag{44}$$
Hence, by (27), it follows that the following is the case.
$$\|z_n - p\|^2 \le \|w_n - p\|^2. \tag{45}$$
By using (44) and (45) in (43), we obtain the following.
$$\begin{aligned}
\|h_n - p\|^2 &\le 4 (1 - \theta_n) \|x_n - p\|^2 + 4 (1 - \theta_n) \|f(p) - p\|^2 + (1 - \beta_n) \|x_n - p\|^2 \\
&\quad + 2 \alpha_n (1 - \beta_n) \|x_n - p\| \|x_n - x_{n-1}\| + \alpha_n^2 (1 - \beta_n) \|x_n - x_{n-1}\|^2.
\end{aligned} \tag{46}$$
Next, by using (46) in (42), we obtain the following:
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le 4 k (1 - \theta_n) \|x_n - p\|^2 + 4 k (1 - \theta_n) \|f(p) - p\|^2 + k (1 - \beta_n) \|x_n - p\|^2 \\
&\quad + 2 k \alpha_n (1 - \beta_n) \|x_n - p\| \|x_n - x_{n-1}\| + k \alpha_n^2 (1 - \beta_n) \|x_n - x_{n-1}\|^2 + 2 \langle f(p) - p, x_{n+1} - p \rangle \\
&\le k (1 - \beta_n) \|x_n - p\|^2 + k \alpha_n (1 - \beta_n) \|x_n - x_{n-1}\| \big[ 2 \|x_n - p\| + \alpha_n \|x_n - x_{n-1}\| \big] \\
&\quad + 2 \langle f(p) - p, x_{n+1} - p \rangle + 4 k (1 - \theta_n) \big[ \|x_n - p\|^2 + \|f(p) - p\|^2 \big] \\
&\le k (1 - \beta_n) \|x_n - p\|^2 + 3 D \alpha_n (1 - \beta_n) \|x_n - x_{n-1}\| + 2 \langle f(p) - p, x_{n+1} - p \rangle \\
&\quad + 4 (1 - \theta_n) \big[ \|x_n - p\|^2 + \|f(p) - p\|^2 \big] \\
&\le k (1 - \beta_n) \|x_n - p\|^2 + (1 - k) \beta_n \Big[ \frac{3 D (1 - \beta_n)}{1 - k} \cdot \frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| \Big] \\
&\quad + 2 (1 - k) \Big[ \frac{1}{1 - k} \langle f(p) - p, x_{n+1} - p \rangle + \frac{2 (1 - \theta_n)}{1 - k} \big( \|x_n - p\|^2 + \|f(p) - p\|^2 \big) \Big],
\end{aligned} \tag{47}$$
where $D := \sup_{n \in \mathbb{N}} \{\|x_n - p\|, \alpha_n \|x_n - x_{n-1}\|\} > 0$.
Claim IV:
The sequence $\{\|x_n - p\|^2\}$ converges to zero as $n \to \infty$. By Lemma 2, it suffices to prove that $\limsup_{k \to \infty} \langle f(p) - p, x_{n_k + 1} - p \rangle \le 0$ for each subsequence $\{\|x_{n_k} - p\|\}$ of $\{\|x_n - p\|\}$ satisfying the following.
$$\liminf_{k \to \infty} \big( \|x_{n_k + 1} - p\| - \|x_{n_k} - p\| \big) \ge 0. \tag{48}$$
Assume that $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that $\liminf_{k \to \infty} (\|x_{n_k + 1} - p\| - \|x_{n_k} - p\|) \ge 0$. Then, we have the following.
$$\liminf_{k \to \infty} \big( \|x_{n_k + 1} - p\|^2 - \|x_{n_k} - p\|^2 \big) = \liminf_{k \to \infty} \Big[ \big( \|x_{n_k + 1} - p\| - \|x_{n_k} - p\| \big) \big( \|x_{n_k + 1} - p\| + \|x_{n_k} - p\| \big) \Big] \ge 0.$$
Hence, by Claim II, we obtain the following.
$$\begin{aligned}
\limsup_{k \to \infty} \big[ (1 - \beta_{n_k})(1 - \lambda^2 L^2) \|y_{n_k} - w_{n_k}\|^2 \big] &\le \limsup_{k \to \infty} \big[ 3 (1 - \beta_{n_k}) \|x_{n_k} - p\|^2 - \|x_{n_k + 1} - p\|^2 + \beta_{n_k} \ell_{10} \big] \\
&\le \limsup_{k \to \infty} \big[ 3 (1 - \beta_{n_k}) \|x_{n_k} - p\|^2 - \|x_{n_k + 1} - p\|^2 \big] + \limsup_{k \to \infty} \beta_{n_k} \ell_{10} \\
&= - \liminf_{k \to \infty} \big[ \|x_{n_k + 1} - p\|^2 - \|x_{n_k} - p\|^2 \big] \le 0.
\end{aligned}$$
This implies the following case.
$$\lim_{k \to \infty} \|y_{n_k} - w_{n_k}\| = 0. \tag{49}$$
Next, we show that the following is the case.
$$\|x_{n_k + 1} - x_{n_k}\| \to 0 \text{ as } k \to \infty. \tag{50}$$
By using (49), we have the following.
$$\|z_{n_k} - w_{n_k}\| = \|y_{n_k} - \lambda (A y_{n_k} - A w_{n_k}) - w_{n_k}\| \le \|y_{n_k} - w_{n_k}\| + \lambda \|A y_{n_k} - A w_{n_k}\| \le (1 + \lambda L) \|y_{n_k} - w_{n_k}\| \to 0 \text{ as } k \to \infty. \tag{51}$$
Next, we have the following.
$$\|x_{n_k + 1} - z_{n_k}\| = \|f(h_{n_k}) - z_{n_k}\| \to 0 \text{ as } k \to \infty. \tag{52}$$
Similarly, we have the following.
$$\|x_{n_k} - w_{n_k}\| = \alpha_{n_k} \|x_{n_k} - x_{n_k - 1}\| = \beta_{n_k} \cdot \frac{\alpha_{n_k}}{\beta_{n_k}} \|x_{n_k} - x_{n_k - 1}\| \to 0 \text{ as } k \to \infty. \tag{53}$$
By using (51)–(53), we obtain the following.
$$\|x_{n_k + 1} - x_{n_k}\| \le \|x_{n_k + 1} - z_{n_k}\| + \|z_{n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\| \to 0 \text{ as } k \to \infty. \tag{54}$$
Since the sequence $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ that converges weakly to a point $\hat{x} \in H$ such that the following is the case.
$$\limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle = \lim_{j \to \infty} \langle f(p) - p, x_{n_{k_j}} - p \rangle = \langle f(p) - p, \hat{x} - p \rangle. \tag{55}$$
By Lemma 4 and (49), we obtain $\hat{x} \in \Gamma$. Using (55) and the fact that $p = P_{\Gamma} f(p)$, we are able to obtain the following.
$$\limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle = \langle f(p) - p, \hat{x} - p \rangle \le 0. \tag{56}$$
By using (50) and (56), we have the following case.
$$\limsup_{k \to \infty} \langle f(p) - p, x_{n_k + 1} - p \rangle \le \limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle = \langle f(p) - p, \hat{x} - p \rangle \le 0. \tag{57}$$
By using (57), Lemma 2, Claim III and the condition that $\lim_{n \to \infty} \frac{\alpha_n}{\beta_n} \|x_n - x_{n-1}\| = 0$, we obtain $\lim_{n \to \infty} \|x_n - p\| = 0$. The proof of Theorem 1 is now complete. □
Remark 1.
Suantai et al. [35] observed that condition (23) can easily be implemented in numerical computations, since the value of $\|x_n - x_{n-1}\|$ is known before $\alpha_n$ is chosen. We can choose $\alpha_n$ as follows:
$$\alpha_n = \begin{cases} \min \Big\{ \alpha, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|} \Big\}, & \text{if } x_n \ne x_{n-1}, \\ \alpha, & \text{otherwise}, \end{cases} \tag{58}$$
where $\alpha \ge 0$ and $\{\varepsilon_n\}$ is a positive sequence such that $\varepsilon_n = o(\beta_n)$.
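A minimal sketch of this rule in code follows; the concrete choices $\beta_n = 1/(n+2)$ and $\varepsilon_n = \beta_n/(n+1)$ are illustrative assumptions (any positive $\varepsilon_n = o(\beta_n)$ works):

```python
def choose_alpha(n, diff, alpha_bar=0.5):
    """Inertial weight alpha_n from rule (58); diff = ||x_n - x_{n-1}||."""
    beta = 1.0 / (n + 2)          # beta_n as in the numerical section
    eps = beta / (n + 1)          # eps_n = o(beta_n) enforces condition (23)
    if diff > 0.0:                # case x_n != x_{n-1}
        return min(alpha_bar, eps / diff)
    return alpha_bar              # case x_n = x_{n-1}: the cap alpha is allowed

print(choose_alpha(10, 0.25))     # small weight: eps_10 / 0.25
```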

4. Numerical Illustrations

In this section, we provide some examples to illustrate and analyze the convergence of our proposed modified Tseng's extragradient algorithm. In order to determine the execution time, we terminated the algorithms using the condition $\|x_{n+1} - x^*\|^2 < \epsilon$, where $x^*$ is the solution of the problem and $\epsilon = 10^{-5}$.
Example 1.
Let $A : \mathbb{R} \to \mathbb{R}$ be defined by $Ax = 2x$. Clearly, $A$ is monotone and Lipschitz continuous with $L = 2$. Define $f : \mathbb{R} \to \mathbb{R}$ by $f(x) = \frac{x}{4}$. Choose $\alpha_n = 0.50$, $\beta_n = \frac{1}{n+2}$ and $\theta_n = 0.5 (1 - \beta_n)$ with $\lambda = 0.4$. The feasible set is chosen to be $C = [-1, 2]$. Table 1 below shows the comparison of elapsed times for the proposed algorithm ViTEM and iTEM [16]. We test the algorithms for two choices of initial points.
Figure 1 shows the comparison of the two algorithms for different choices of the parameter $\beta_n$. For this purpose, we chose $x_0 = x_1 = 1$.
Example 2.
Define $A : \mathbb{R}^2 \to \mathbb{R}^2$ by $A(x, y) = (x + y + \sin x, \ -x + y + \sin y)$. Note that $A$ is a monotone and Lipschitz continuous mapping with $L = 3$. Let $f(x) = \frac{x}{8}$. The feasible set is chosen to be $C = [-1, 1] \times [-2, 2]$. We chose $\alpha_n = 3$, $\beta_n = \frac{1}{n+2}$ and $\theta_n = 0.5 (1 - \beta_n)$ with $\lambda = \frac{1}{4}$. Table 2 below analyzes the elapsed times of ViTEM and iTEM [16] for different choices of $x_0$ and $x_1$.
Figure 2 shows the comparison of the two algorithms for different choices of the parameter $\beta_n$. For this purpose, we chose $x_0 = (0.5, 0.5)^T$ and $x_1 = (1, 1)^T$.
Example 3.
Let $H = L^2([0, 1])$ with the inner product $\langle x, y \rangle := \int_0^1 x(t) y(t) \, dt$ and the induced norm $\|x\| := \big( \int_0^1 |x(t)|^2 \, dt \big)^{1/2}$, for all $x, y \in H$. The operator $A : H \to H$ defined by $A(x(t)) = \max\{0, x(t)\}$ for $t \in [0, 1]$ is monotone and Lipschitz continuous on $H$ with $L = 1$. The feasible set is chosen to be the unit ball, $C := \{x \in H : \|x\| \le 1\}$. We chose $\alpha_n = 0.5$, $\beta_n = \frac{1}{n+2}$ and $\theta_n = 0.5 (1 - \beta_n)$ with $\lambda = \frac{1}{2}$. Table 3 below examines the elapsed times of ViTEM and iTEM [16] for different choices of initial points $x_0$ and $x_1$.
Figure 3 shows the comparison of the two algorithms for different choices of the parameter $\beta_n$. For this purpose, we chose $x_0(t) = \frac{t}{100}$ and $x_1(t) = \frac{t}{10}$.
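To make the infinite-dimensional setting concrete, the sketch below discretizes Example 3 on a uniform grid; the grid size, the Riemann-sum norm and the contraction $f(x) = x/8$ are our illustrative assumptions (the paper works in $L^2([0,1])$ directly and does not specify $f$ for this example):

```python
import numpy as np

# Example 3, discretized: x in L^2([0,1]) is sampled at m grid points and
# the L^2 norm is approximated by a Riemann sum.
m = 1000
t = np.linspace(0.0, 1.0, m)
dt = 1.0 / m
l2_norm = lambda x: np.sqrt(np.sum(x ** 2) * dt)

A = lambda x: np.maximum(0.0, x)        # A(x(t)) = max{0, x(t)}, monotone, L = 1
f = lambda x: x / 8.0                   # an illustrative contraction (k = 1/8)
P_C = lambda x: x if l2_norm(x) <= 1.0 else x / l2_norm(x)  # unit-ball projection

lam, alpha = 0.5, 0.5                   # lam in (0, 1/L), constant inertia
x_prev, x = t / 100.0, t / 10.0         # x_0(t) = t/100, x_1(t) = t/10
for n in range(1, 101):
    beta = 1.0 / (n + 2)
    theta = 0.5 * (1.0 - beta)
    w = x + alpha * (x - x_prev)        # inertial point (18)
    y = P_C(w - lam * A(w))             # projection step (19)
    z = y - lam * (A(y) - A(w))         # Tseng-type correction (22)
    h = (1.0 - theta - beta) * f(x) + theta * z
    x_prev, x = x, f(h)                 # steps (20)-(21)
print(l2_norm(x))                       # shrinks toward 0; any x <= 0 solves this VIP
```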

5. Conclusions

By combining the modified Tseng's extragradient method [13], the viscosity method [30] and the Picard–Mann method [31], a new algorithm is proposed for solving variational inequality problems in real Hilbert spaces. It is worth mentioning that the proposed method requires the computation of only one projection onto the feasible set in each iteration. A strong convergence theorem for the proposed algorithm is obtained under certain mild conditions. Several numerical examples show that the proposed method performs better than some existing methods for solving variational inequality problems.

Author Contributions

All authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Basque Government: IT1207-19.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

The authors wish to thank the editor and the anonymous referees for their useful comments and suggestions. This paper was completed when the first author was visiting the Abdus Salam School of Mathematical Sciences (ASSMS), Government College University Lahore, Pakistan, as a post-doctoral fellow.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416.
  2. Kinderlehrer, D.; Stampacchia, G. An introduction to variational inequalities and their applications. SIAM Class. Appl. Math. 2000, CL31, 222–277.
  3. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  5. Khan, A.R.; Ugwunnadi, G.C.; Makukula, Z.G.; Abbas, M. Strong convergence of inertial subgradient extragradient method for solving variational inequality in Banach space. Carpathian J. Math. 2019, 35, 327–338.
  6. Maingé, P.E. Projected subgradient techniques and viscosity for optimization with variational inequality constraints. Eur. J. Oper. Res. 2010, 205, 501–506.
  7. Okeke, G.A.; Abbas, M.; de la Sen, M. Inertial subgradient extragradient methods for solving variational inequality problems and fixed point problems. Axioms 2020, 9, 51.
  8. Alber, Y.I.; Iusem, A.N. Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set-Valued Anal. 2001, 9, 315–335.
  9. Xiu, N.H.; Zhang, J.Z. Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 2003, 152, 559–587.
  10. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
  11. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Mat. Metod. 1976, 12, 1164–1173.
  12. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
  13. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  14. Chen, J.; Liu, S.; Chang, X. Modified Tseng's extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. 2021, 100, 2627–2640.
  15. Suantai, S.; Kankam, K.; Cholamjiak, P. A novel forward-backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 2020, 8, 42.
  16. Thong, D.V.; Vinh, N.T.; Cho, Y.J. A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems. Optim. Lett. 2019, 14, 1157–1175.
  17. Attouch, H.; Goudon, X.; Redont, P. The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2000, 2, 1–34.
  18. Attouch, H.; Czarnecki, M.O. Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J. Differ. Equ. 2002, 179, 278–310.
  19. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  20. Maingé, P.E. Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set-Valued Anal. 2007, 15, 67–79.
  21. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
  22. Ceng, L.C. Asymptotic inertial subgradient extragradient approach for pseudomonotone variational inequalities with fixed point constraints of asymptotically nonexpansive mappings. Commun. Optim. Theory 2020, 2020, 2.
  23. Tian, M.; Xu, G. Inertial modified Tseng's extragradient algorithms for solving monotone variational inequalities and fixed point problems. J. Nonlinear Funct. Anal. 2020, 2020, 35.
  24. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 2014, 24, 232–256.
  25. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  26. Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267.
  27. Bot, R.I.; Csetnek, E.R. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015, 36, 951–963.
  28. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  29. Bot, R.I.; Csetnek, E.R. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540.
  30. Moudafi, A. Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  31. Khan, S.H. A Picard-Mann hybrid iterative process. Fixed Point Theory Appl. 2013, 2013, 69.
  32. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984.
  33. Kimura, Y.; Saejung, S. Strong convergence for a common fixed point of two different generalizations of cutter operators. Linear Nonlinear Anal. 2015, 1, 53–65.
  34. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
  35. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615.
Figure 1. Comparison of convergence.
Figure 2. Comparison of convergence.
Figure 3. Comparison of convergence.
Table 1. Comparison of elapsed CPU times.

            x_0 = 0.1, x_1 = 0.5      x_0 = 0.5, x_1 = 1.5
ViTEM       0.039508 s (iter = 5)     0.039197 s (iter = 5)
iTEM        0.040179 s (iter = 13)    0.040681 s (iter = 15)

Table 2. Comparison of elapsed CPU times.

            x_0 = (1, 1)^T, x_1 = (1.5, 1.5)^T    x_0 = (0.5, 0.5)^T, x_1 = (1, 1)^T
ViTEM       0.078728 s (iter = 3)                 0.073666 s (iter = 3)
iTEM        0.081701 s (iter = 15)                0.076256 s (iter = 15)

Table 3. Comparison of elapsed CPU times.

            x_0(t) = t/100, x_1(t) = t/10    x_0(t) = 0.5(t + 0.5 cos(t)), x_1(t) = t + 0.5 cos(t)
ViTEM       0.022461 s (iter = 4)            0.190571 s (iter = 5)
iTEM        0.032179 s (iter > 100)          0.590528 s (iter > 100)