Accelerated Modified Tseng's Extragradient Method for Solving Variational Inequality Problems in Hilbert Spaces

Abstract: The aim of this paper is to propose a new iterative algorithm to approximate the solution of a variational inequality problem in real Hilbert spaces. A strong convergence result for this problem is established under certain mild conditions. Our proposed method requires the computation of only one projection onto the feasible set in each iteration. Some numerical examples are presented to show that our proposed method performs better than some known comparable methods for solving variational inequality problems.


Introduction
Suppose C is a nonempty closed convex subset of a real Hilbert space H with inner product ⟨·, ·⟩, which induces the norm ‖·‖, and let A be a self-mapping on H. The variational inequality problem (VIP) for an operator A on C ⊂ H is to find a point x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0 for all y ∈ C. (1)
In this paper, we denote the solution set of (VIP) (1) by Γ.
The theory of variational inequality problems (VIP) was introduced by Stampacchia [1]. It has been shown that (VIP) problems arise from various mathematical models connected with real-life problems. Over the years, the (VIP) has attracted the attention of well-known mathematicians due to its applications in several fields of interest, such as the sciences and engineering. Interest in the (VIP) stems from the fact that it is applicable to several real-life problems of physical interest, such as the steady filtration of a liquid through a porous membrane in several dimensions, the problem of lubrication, the motion of a fluid past a certain profile and the small deflections of an elastic beam (see, e.g., [2]).
An optimization problem consists of maximizing or minimizing some function f over some set S; the function f allows different options to be compared in order to determine which is best. We write the optimization problem as

optimize f(x) subject to x ∈ S,

where optimize stands for min or max, and f : ℝⁿ → ℝ denotes the objective function. We note that the optimal solutions of the maximization problem max_{x∈S} f(x) coincide with the optimal solutions of the minimization problem min_{x∈S} (−f(x)), and we have max_{x∈S} f(x) = −min_{x∈S} (−f(x)).
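This identity can be checked numerically on a small finite set; the objective f and the set S below are illustrative choices, not taken from the paper:

```python
# Check max_{x in S} f(x) = -min_{x in S} (-f(x)) on a small finite set.
# f and S are illustrative assumptions for this sketch.
S = [-2.0, -1.0, 0.0, 1.0, 2.0]
f = lambda x: 1.0 - x * x  # concave objective, maximized at x = 0

max_val = max(f(x) for x in S)
min_neg = min(-f(x) for x in S)
print(max_val, -min_neg)  # the two values coincide: 1.0 1.0
```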
The two popular methods for solving the (VIP) (1) are the projection method and the regularization method. Several authors have developed efficient iterative algorithms for solving the (VIP) (1), and projection-type methods are well developed in the literature (see, for example, [3][4][5][6][7]). The well-known projected gradient method, which is useful for solving the minimization problem min f(x) subject to x ∈ C, is given by

x_{n+1} = P_C(x_n − α_n ∇f(x_n)), (2)

where the real sequence {α_n} of parameters satisfies some conditions, P_C is the well-known metric projection of H onto C and ∇f denotes the gradient of f. Interested readers may refer to [8] for a convergence analysis of the above method in the case when f : H → ℝ is convex and differentiable. The method (2) was extended to the (VIP) (1) by replacing the gradient ∇f with the operator A, generating a sequence {x_n} by

x_{n+1} = P_C(x_n − α_n A x_n). (3)
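The projected gradient iteration (2) can be sketched in a few lines of Python; the box constraint set, the constant step size and the quadratic objective below are illustrative assumptions, not part of the paper:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box C = [lo, hi]^n, a simple closed convex set
    whose projection is just componentwise clipping."""
    return np.clip(x, lo, hi)

def projected_gradient(grad_f, x0, steps=200, alpha=0.1, lo=-1.0, hi=1.0):
    """Projected gradient iteration x_{n+1} = P_C(x_n - alpha * grad_f(x_n));
    a constant step size alpha is used here for simplicity."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project_box(x - alpha * grad_f(x), lo, hi)
    return x

# Minimize f(x) = ||x - b||^2 / 2 over the box [-1, 1]^2; grad f(x) = x - b.
# The unconstrained minimizer b lies outside the box, so the solution is its projection.
b = np.array([2.0, 0.5])
x_star = projected_gradient(lambda x: x - b, np.zeros(2))
print(x_star)  # approximately [1.0, 0.5]
```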
Note that the major disadvantage of this method is the requirement that the operator A be strongly monotone or inverse strongly monotone ([9]) for convergence. In 1976, Korpelevich [10] removed this strong condition by introducing the extragradient method for solving saddle point problems. This well-known method was extended to solving variational inequality problems (VIP) in both Hilbert and Euclidean spaces (see [10,11]). For the convergence of this method, the only restrictions on the operator A are monotonicity and L-Lipschitz continuity. The extragradient method is given by

y_n = P_C(x_n − λ A x_n),
x_{n+1} = P_C(x_n − λ A y_n), (4)

where λ ∈ (0, 1/L). If the solution set Γ of the (VIP) is nonempty, then the sequence {x_n} generated by the iterative method (4) converges weakly to an element of Γ.
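A minimal sketch of the extragradient method (4), using an illustrative rotation-type operator that is monotone and 1-Lipschitz but not strongly monotone (the case where the simple projection method (3) fails but the extragradient method converges):

```python
import numpy as np

def extragradient(A, proj_C, x0, lam, steps=500):
    """Korpelevich's extragradient iteration (4):
       y_n = P_C(x_n - lam * A x_n),  x_{n+1} = P_C(x_n - lam * A y_n),
    with lam in (0, 1/L) for an L-Lipschitz, monotone operator A."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = proj_C(x - lam * A(x))
        x = proj_C(x - lam * A(y))
    return x

# Illustrative VIP: A(x) = M x with a rotation matrix M; A is monotone and
# 1-Lipschitz but not strongly monotone.  C = R^2, so proj_C is the identity,
# and the unique solution is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
x_star = extragradient(A, lambda x: x, np.array([1.0, 1.0]), lam=0.5)
print(np.linalg.norm(x_star))  # close to 0
```

For this operator each extragradient step contracts the iterate by a factor of about 0.9, whereas the plain projected iteration (3) with the same λ would spiral outward.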
Clearly, by using the above method, one needs to compute two projections onto the set C in every iteration. It is well known that the projection onto a closed convex set C ⊂ H is closely related to a minimum distance problem, which may require a restrictive amount of computation time. To overcome this, Censor et al. [4] introduced the subgradient extragradient method by modifying the iterative algorithm (4): they replaced one of the two projections onto the set C with a projection onto a half-space, which is easier to calculate. The subgradient extragradient method of Censor et al. [4] is given by

y_n = P_C(x_n − λ A x_n),
T_n = {w ∈ H : ⟨x_n − λ A x_n − y_n, w − y_n⟩ ≤ 0},
x_{n+1} = P_{T_n}(x_n − λ A y_n), (5)

where λ ∈ (0, 1/L). Several authors have studied the subgradient extragradient method and proved some useful and applicable results (see, for example, [7,12] and the references therein).
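The advantage of the half-space T_n is that its projection has a closed form, unlike the projection onto a general convex set. A sketch of the subgradient extragradient method, again with an illustrative rotation-type operator and a box constraint:

```python
import numpy as np

def proj_halfspace(w, a, b):
    """Closed-form projection onto the half-space {w : <a, w> <= b}."""
    viol = a @ w - b
    if viol <= 0:
        return w
    return w - (viol / (a @ a)) * a

def subgradient_extragradient(A, proj_C, x0, lam, steps=500):
    """Censor et al.'s method (5): one projection onto C and one onto the
    half-space T_n = {w : <x_n - lam*A x_n - y_n, w - y_n> <= 0} per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        u = x - lam * A(x)
        y = proj_C(u)
        a = u - y                      # outward normal of T_n
        if np.allclose(a, 0.0):        # T_n is the whole space; no projection needed
            x = x - lam * A(y)
        else:
            x = proj_halfspace(x - lam * A(y), a, a @ y)
    return x

# Illustrative operator and feasible set (not from the paper): A(x) = M x with a
# rotation matrix M, and C the unit box, so the unique solution is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)
x_star = subgradient_extragradient(A, proj_C, np.array([1.0, 1.0]), lam=0.5)
print(np.linalg.norm(x_star))  # close to 0
```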
Inertial-type iterative methods are based on a discrete version of a second-order dissipative dynamical system (see [7,17,18]). These methods can be seen as a process meant to accelerate the rate of convergence of a given method (see, e.g., [19][20][21]). In 2001, Alvarez and Attouch [19] applied the inertial technique to derive a proximal algorithm for the problem of finding a zero of a maximal monotone operator. Their method is given as follows.
Given x_{n−1}, x_n ∈ H and two parameters θ_n ∈ [0, 1) and λ_n > 0, the point x_{n+1} ∈ H is obtained such that

0 ∈ λ_n A(x_{n+1}) + x_{n+1} − x_n − θ_n(x_n − x_{n−1}).

The above method can be written equivalently as

x_{n+1} = J^A_{λ_n}(x_n + θ_n(x_n − x_{n−1})),

where J^A_{λ_n} is the resolvent of the operator A with the given parameter λ_n, and the inertial effect is induced by the term θ_n(x_n − x_{n−1}).
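The inertial proximal iteration can be sketched as follows; the positive definite matrix Q defining the maximal monotone operator, and the parameter values, are illustrative assumptions:

```python
import numpy as np

def inertial_proximal(resolvent, x0, theta=0.3, lam=1.0, steps=200):
    """Alvarez-Attouch inertial proximal point iteration:
       x_{n+1} = J^A_{lam}(x_n + theta * (x_n - x_{n-1})),
    where J^A_lam = (I + lam*A)^{-1} is the resolvent of A.
    Constant theta and lam are used here for simplicity."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(steps):
        w = x + theta * (x - x_prev)    # inertial extrapolation term
        x_prev, x = x, resolvent(w, lam)
    return x

# Illustrative maximal monotone operator A(x) = Q x with Q symmetric positive
# definite; then J_lam(w) = (I + lam*Q)^{-1} w, and the unique zero of A is 0.
Q = np.array([[2.0, 0.0], [0.0, 0.5]])
resolvent = lambda w, lam: np.linalg.solve(np.eye(2) + lam * Q, w)
zero = inertial_proximal(resolvent, np.array([5.0, -3.0]))
print(np.linalg.norm(zero))  # close to 0
```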
Motivated by the results above, we propose a new algorithm for solving variational inequality problems in real Hilbert spaces. Our proposed method combines the modified Tseng's extragradient method [13], the viscosity method [30] and the Picard-Mann method [31]. Our method requires the computation of only one projection onto the feasible set (solution set) in each iteration. We establish a strong convergence theorem of the proposed algorithm under certain mild conditions. Furthermore, with the help of several numerical illustrations, we show that the proposed method performs better than some known methods for solving variational inequality problems.
This paper is organized as follows: In Section 2, some preliminary definitions and known results needed in this study are given. In Section 3, a modified Tseng's extragradient algorithm is proposed, and a strong convergence theorem for the method is presented. In Section 4, some numerical illustrations are given to show that the method presented herein performs better than some existing methods. Section 5 contains the concluding remarks of this paper.

Preliminaries
Let H be a real Hilbert space. We recall the following definitions.

Definition 1. A mapping A : H → H is said to be: (i) L-Lipschitz continuous with L > 0 if ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ H.
If L ∈ [0, 1), then A is called a contraction mapping. If L = 1, then A is called a nonexpansive mapping.
(ii) It is monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ H.
(iii) A is called strictly monotone if ⟨Ax − Ay, x − y⟩ > 0 for any x ≠ y; that is, equality is possible only if x = y.
(iv) A is called strongly monotone if there exists η > 0 such that ⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖² for any x, y ∈ H. (v) A is called pseudomonotone if, for all x, y ∈ H, ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0.
For every x ∈ H, there exists a unique point P_C x in C ⊂ H such that ‖x − P_C x‖ ≤ ‖x − y‖ for each y ∈ C (see, e.g., [32]). The mapping P_C is known as the metric projection of H onto C ⊂ H. It is known that the mapping P_C is nonexpansive. Next, we recall the following lemmas, which will be useful in this paper.
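For a few special convex sets, P_C has a closed form; a sketch for the closed ball, together with a numerical check of the nonexpansiveness of P_C (the ball, the random points and the seed are illustrative assumptions):

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball C = {y : ||y - center|| <= radius},
    one of the convex sets whose projection has a closed form: points inside C
    are fixed, points outside are pulled radially back to the sphere."""
    d = x - center
    nd = np.linalg.norm(d)
    if nd <= radius:
        return x.copy()
    return center + (radius / nd) * d

# P_C x is the unique nearest point of C, and P_C is nonexpansive:
# ||P_C x - P_C y|| <= ||x - y||.
rng = np.random.default_rng(0)
c, r = np.zeros(3), 1.0
x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
px, py = proj_ball(x, c, r), proj_ball(y, c, r)
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12)  # True
```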

Lemma 1 ([32]). Given that C is a closed convex subset of a real Hilbert space H and x ∈ H, we have z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C. For more properties of the metric projection P_C, the interested reader may refer to Section 3 of [32].
Let A : H → H. The fixed point problem (FP) is formulated as follows: find x ∈ H such that Ax = x.
The set of fixed points of the operator A is denoted by F(A), and we assume that F(A) ≠ ∅. Our interest in this paper is to find a point x ∈ H such that the following is the case.
The weak convergence of a sequence {x_n} to x is denoted by x_n ⇀ x as n → ∞, and the strong convergence of {x_n} to x is denoted by x_n → x as n → ∞.
For each x, y ∈ H and α ∈ R, we recall the following inequalities in Hilbert spaces.
The following lemmas will be needed in this paper.

Lemma 2 ([33,34]). Let {a_n} be a sequence of nonnegative real numbers, let {α_n} be a sequence of real numbers in (0, 1) with ∑_{n=1}^∞ α_n = ∞, and let {b_n} be a sequence of real numbers. Assume that

a_{n+1} ≤ (1 − α_n)a_n + α_n b_n, ∀n ≥ 1.

If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞} (a_{n_k+1} − a_{n_k}) ≥ 0, then lim_{n→∞} a_n = 0.

Main Results
We assume that the following condition is satisfied. We now propose the following algorithm.
Step 1: Set the following: and compute the following.
If y_n = w_n, then stop the computation; y_n is a solution to the (VIP). Otherwise, proceed to Step 2.
Step 2: Set the following and compute the following: where z_n = y_n − λ(Ay_n − Aw_n). (22) Set n := n + 1 and proceed to Step 1.
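The core of the update above is Tseng's forward-backward-forward step, which needs only one projection onto C per iteration. A hedged sketch of that core step alone, with w_n = x_n (the inertial, viscosity and Picard-Mann components of the proposed method, whose exact formulas are given in the algorithm, are deliberately not reproduced here); the operator and feasible set are illustrative assumptions:

```python
import numpy as np

def tseng_step_iteration(A, proj_C, x0, lam, steps=500):
    """Plain Tseng extragradient iteration underlying (22):
       y_n = P_C(w_n - lam * A w_n),  z_n = y_n - lam * (A y_n - A w_n),
    here with w_n = x_n and x_{n+1} = z_n.  Only ONE projection onto C
    is computed per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        w = x
        y = proj_C(w - lam * A(w))      # the single projection onto C
        x = y - lam * (A(y) - A(w))     # z_n as in (22); no second projection
    return x

# Illustrative monotone, 1-Lipschitz operator (rotation) with C the unit box;
# the unique solution of the corresponding VIP is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)
sol = tseng_step_iteration(A, proj_C, np.array([1.0, 1.0]), lam=0.5)
print(np.linalg.norm(sol))  # close to 0
```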
Next, we prove the following results.
Theorem 1. Suppose Condition 1 holds and the following is the case.
The sequence {x_n} generated by the algorithm converges strongly to an element p ∈ Γ such that p = P_Γ ∘ f(p).
Proof. Claim I: We claim that the sequence {x_n} is bounded, where p = P_Γ ∘ f(p). We begin by proving the following.
Using the fact that the mapping A is monotone and L-Lipschitz continuous, we have the following from inequality (25).
This implies the following.
By (18), we have the following estimate.
Using the condition that (α_n/β_n)‖x_n − x_{n−1}‖ → 0 in (23), it follows that there exists a constant M₁ > 0 such that the following is the case.
Using (21) and the condition that f is a contraction mapping, we have the following.
Using (32) in (31), we have the following: for some M₃ > 0. This implies that the sequence {x_n} is bounded. Therefore, it follows that {z_n}, {h_n}, {f(h_n)} and {w_n} are also bounded.
Claim III: We have the following: From (10), we obtain the following.
(42) Next, we have the following estimate.
Claim IV: ‖x_n − p‖² → 0 as n → ∞. By Lemma 2, it suffices to prove that lim sup_{k→∞} ⟨f(p) − p, x_{n_k+1} − p⟩ ≤ 0 for each subsequence {‖x_{n_k} − p‖} of {‖x_n − p‖} such that the following is satisfied.
Hence, by Claim II, we obtain the following.
This implies that lim_{k→∞} ‖y_{n_k} − w_{n_k}‖ = 0. (49) Next, we show that the following is the case.
By using (49), we have the following.
‖z_{n_k} − w_{n_k}‖ = ‖y_{n_k} − λ(Ay_{n_k} − Aw_{n_k}) − w_{n_k}‖ ≤ ‖y_{n_k} − w_{n_k}‖ + λ‖Ay_{n_k} − Aw_{n_k}‖ ≤ (1 + λL)‖y_{n_k} − w_{n_k}‖ → 0 as k → ∞.
(51) Next, we have the following.
Similarly, we have the following.
‖x_{n_k+1} − x_{n_k}‖ ≤ ‖x_{n_k+1} − z_{n_k}‖ + ‖z_{n_k} − w_{n_k}‖ + ‖w_{n_k} − x_{n_k}‖ → 0 as k → ∞.
Since the sequence {x n k } is bounded, there exists a subsequence {x n k j } of {x n k } that converges weakly to a point x * ∈ H such that the following is the case.
lim sup_{k→∞} ⟨f(p) − p, x_{n_k} − p⟩ = lim_{j→∞} ⟨f(p) − p, x_{n_{k_j}} − p⟩. (55)
By Lemma 4 and (49), we obtain x* ∈ Γ. Using (55) and the fact that p = P_Γ ∘ f(p), we are able to obtain the following.
lim sup_{k→∞} ⟨f(p) − p, x_{n_k} − p⟩ = ⟨f(p) − p, x* − p⟩ ≤ 0. (56)
By using (50) and (56), we have the following case.
By using (57), Lemma 2, Claim III and the condition that lim_{n→∞} (α_n/β_n)‖x_n − x_{n−1}‖ = 0, we obtain lim_{n→∞} ‖x_n − p‖ = 0. The proof of Theorem 1 is now complete.

Remark 1.
Suantai et al. [35] observed that condition (23) can easily be implemented in numerical computations, since the value of ‖x_n − x_{n−1}‖ is known before choosing α_n. Indeed, we can choose α_n such that 0 ≤ α_n ≤ ᾱ_n, where

ᾱ_n = min{α, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and ᾱ_n = α otherwise,

where α ≥ 0 and {ε_n} is a positive sequence such that ε_n = o(β_n).

Numerical Illustrations
In this section, we provide some examples to illustrate and analyze the convergence of our proposed modified Tseng's extragradient algorithm. In order to determine the execution time, we terminated the algorithms using the condition ‖x_{n+1} − x*‖² < ε, where x* is the solution of the problem and ε = 10⁻⁵. Table 1 below shows the comparison of elapsed times for the proposed algorithm ViTEM and iTEM [16]. We test the algorithms for two choices of initial points. Figure 1 shows the comparison of the two algorithms for different choices of the parameter β_n. For this purpose, we chose x_0 = x_1 = 1.

Conclusions
By combining the modified Tseng's extragradient method [13], the viscosity method [30] and the Picard-Mann method [31], a new algorithm is proposed for solving variational inequality problems in real Hilbert spaces. It is worth mentioning that the proposed method requires the computation of only one projection onto the feasible set in each iteration. A strong convergence theorem for the proposed algorithm is obtained under certain mild conditions. Several numerical examples show that the proposed method performs better than some existing methods for solving variational inequality problems.
Author Contributions: All authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement:
The data used to support the findings of this study are included within the article.