Strong Convergence Theorems Governed by Pseudo-Monotone Mappings

The purpose of this paper is to introduce two different kinds of iterative algorithms with inertial effects for solving variational inequalities. The iterative processes are based on the extragradient method, the Mann-type method and the viscosity method. Strong convergence theorems are established in Hilbert spaces under the mild assumptions that the associated mapping is Lipschitz continuous, pseudo-monotone and sequentially weakly continuous. Numerical experiments are performed to illustrate the behavior of our proposed methods and to compare them with an existing method in the literature.


Introduction and Preliminaries
Let C be a nonempty, closed and convex subset of a real Hilbert space H. The inner product and the induced norm of H are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Problem 1. Let A : H → H be a nonlinear operator. We consider the following variational inequality problem: find x* ∈ VI(C, A) := {x* ∈ C : ⟨z − x*, A(x*)⟩ ≥ 0, ∀z ∈ C}. (1) The variational inequality serves as an important model in studying a wide class of real problems arising in traffic networks, medical imaging, machine learning, transportation, etc. Due to its wide applicability, this model unifies a number of optimization-related problems, such as saddle point problems, equilibrium problems, complementarity problems and fixed point problems; see, e.g., [1][2][3][4][5].
Next, we introduce an important tool in this paper: the nearest point (metric) projection. For each x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ‖x − P_C x‖ = inf{‖x − y‖ : y ∈ C}. Then P_C is called the nearest point (metric) projection from H onto C. It is known that the projection operator is firmly nonexpansive and can be characterized by ⟨x − P_C x, y − P_C x⟩ ≤ 0, ∀x ∈ H, y ∈ C, which is also equivalent to ‖x − P_C x‖² ≤ ‖x − y‖² − ‖y − P_C x‖², ∀x ∈ H, y ∈ C; see [6]. In a routine way, one can turn the variational inequality problem into a fixed point problem via the projection operator, that is, x* solves (1) if and only if P_C(I − αA)x* = x* for all α > 0.
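As a concrete illustration of the two characterizations above, the following Python sketch (using NumPy) takes C to be the nonnegative orthant, for which P_C has the closed form max(x, 0), and checks both inequalities at randomly drawn points. The set C and the points are illustrative choices, not data from the paper.

```python
import numpy as np

# C = nonnegative orthant; its metric projection has the closed form max(x, 0).
def project_C(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)             # arbitrary point of H = R^5
y = project_C(rng.standard_normal(5))  # arbitrary point of C

px = project_C(x)
# Characterization: <x - P_C x, y - P_C x> <= 0 for all y in C.
assert np.dot(x - px, y - px) <= 1e-12
# Equivalent form: ||x - P_C x||^2 <= ||x - y||^2 - ||y - P_C x||^2.
lhs = np.dot(x - px, x - px)
rhs = np.dot(x - y, x - y) - np.dot(y - px, y - px)
assert lhs <= rhs + 1e-12
```

Both assertions pass for any x ∈ H and y ∈ C, reflecting that they are equivalent descriptions of the metric projection.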
Let us recall some definitions of the mappings involved in our study. An operator A : H → H is said to be (i) monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ H; (ii) pseudo-monotone if, for all x, y ∈ H, ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0; (iii) L-Lipschitz continuous if ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ H; (iv) sequentially weakly continuous if, for each sequence {x_k} converging weakly to x*, the sequence {Ax_k} converges weakly to Ax*. A classical method for solving problem (1) is the extragradient method, y_k = P_C(x_k − λAx_k), x_{k+1} = P_C(x_k − λAy_k), where the mapping A : H → H is monotone and L-Lipschitz continuous for some L > 0 and λL ∈ (0, 1). Recently, the extragradient method has received great attention from many authors [13]. It has been studied in various ways for solving more general problems in the setting of Hilbert spaces. Typically, the extragradient method has been successfully applied to solving the pseudo-monotone variational inequality, see [14]. Now, let us consider the inertial extrapolation, which can be regarded as an acceleration procedure for speeding up convergence. Due to its importance, there is increasing interest in studying inertial-type algorithms; see, e.g., [15][16][17][18][19] and the references therein. By incorporating the inertial extrapolation into the extragradient method, Dong et al. [20] introduced the following inertial extragradient algorithm (EAI). Given any x_0, x_1 ∈ H, for each k ≥ 1, compute w_k = x_k + α_k(x_k − x_{k−1}), y_k = P_C(w_k − λAw_k) and x_{k+1} = P_C(w_k − λAy_k), where the mapping A : H → H is monotone and Lipschitz continuous with constant L > 0. The authors showed that {x_k} converges weakly to an element of VI(C, A) under suitable conditions on the parameters. Note that the inertial extragradient algorithm involving the inertial term mentioned above is only weakly convergent.
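The inertial extragradient iteration described above can be sketched in Python as follows. The operator A(x) = Θx + Υ with Θ symmetric positive definite, the feasible set C (nonnegative orthant) and the fixed inertial weight α = 0.2 are all illustrative choices, not data or parameter rules from the text; the step size satisfies λL < 1 as required.

```python
import numpy as np

def project_C(x):                       # C = nonnegative orthant
    return np.maximum(x, 0.0)

# A(x) = Theta x + Upsilon with Theta symmetric positive definite,
# so A is monotone and Lipschitz with constant L = ||Theta||.
Theta = np.array([[4.0, 1.0], [1.0, 3.0]])
Upsilon = np.array([-1.0, -2.0])
A = lambda x: Theta @ x + Upsilon
L = np.linalg.norm(Theta, 2)
lam = 1.0 / (2.0 * L)                   # step size with lam * L < 1
alpha = 0.2                             # illustrative fixed inertial weight

x_prev, x = np.zeros(2), np.ones(2)
for k in range(2000):
    w = x + alpha * (x - x_prev)        # inertial extrapolation
    y = project_C(w - lam * A(w))       # extragradient prediction step
    x_prev, x = x, project_C(w - lam * A(y))  # correction step

# Fixed-point residual: zero exactly at solutions of VI(C, A).
residual = np.linalg.norm(x - project_C(x - lam * A(x)))
```

For this strongly monotone test operator the residual decays to machine precision, consistent with the convergence theory.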
Many problems, arising in a broad range of applied areas such as image recovery, quantum physics, economics, control theory and mechanics, have been extensively studied in the infinite-dimensional setting. In such problems, norm convergence is essential, since the energy ‖x − x_n‖² of the error between the solution x and the iterate x_n eventually becomes arbitrarily small. Furthermore, in the context of solving the convex optimization problem min{Λ(x) : x ∈ H}, the rate of convergence of {Λ(x_k)} tends to be better when {x_k} converges strongly than when it converges only weakly. Thus, these observations naturally give rise to the question of how to appropriately modify the inertial extragradient method so that strong convergence is guaranteed. To answer this question, we will propose two modified inertial extragradient algorithms. The first modification stems from the Mann-type method [21,22] and the other is of viscosity nature [23].
An obvious disadvantage of Algorithms (2) and (3) is the assumption that the mapping A is Lipschitz continuous and monotone. To relax this restrictive assumption, in this paper, we show that our proposed algorithms can solve the pseudo-monotone variational inequality under suitable assumptions. It is worth mentioning that the class of pseudo-monotone mappings properly contains the class of monotone mappings. Accordingly, the scope of the related optimization problems is enlarged from convex optimization problems to pseudoconvex optimization problems. This demonstrates the advantage of the modified inertial extragradient methods in comparison with other solution methods.
The following lemmas will be used in the proof of our main results.

Lemma 2 ([25]). Let {a_k} be a sequence in [0, +∞) such that there exists a subsequence {a_{k_j}} of {a_k} with a_{k_j} < a_{k_j+1} for all j ∈ ℕ. Then there exists a nondecreasing sequence {m_i} ⊂ ℕ such that lim_{i→∞} m_i = ∞ and the following properties are satisfied for all (sufficiently large) i ∈ ℕ: a_{m_i} ≤ a_{m_i+1} and a_i ≤ a_{m_i+1}. In fact, m_i is the largest number k in {1, 2, . . . , i} such that a_k < a_{k+1} holds.
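The construction of {m_i} in Lemma 2 is easy to test numerically. In the sketch below, the non-monotone sequence is hypothetical; the code computes m_i = max{k ≤ i : a_k < a_{k+1}} and verifies the two claimed inequalities at every admissible index.

```python
# Hypothetical non-monotone sequence with a_{k_j} < a_{k_j + 1} along a subsequence.
a = [5.0, 3.0, 4.0, 2.0, 6.0, 1.0, 7.0, 0.5, 8.0, 0.25, 9.0]

def m(i):
    """Largest index k in {1, ..., i} with a_k < a_{k+1} (as in Lemma 2)."""
    return max(k for k in range(1, i + 1) if a[k] < a[k + 1])

# Verify the two properties of Lemma 2 for every admissible index i.
for i in range(2, len(a) - 1):
    mi = m(i)
    assert a[mi] <= a[mi + 1]   # a_{m_i} <= a_{m_i + 1}
    assert a[i] <= a[mi + 1]    # a_i   <= a_{m_i + 1}
```

This is the bookkeeping device used in Case 2 of both convergence proofs below.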
The rest of the paper is organized as follows. In Section 2, we give two variants of the inertial extragradient method for solving pseudo-monotone variational inequalities, and we prove strong convergence results for the proposed algorithms. In Section 3, some numerical experiments on quadratic programming problems are presented, which demonstrate the performance of our methods. Finally, the conclusion is given in the last section.

Algorithm and Convergence
Throughout the rest of the paper, we always assume the following hypotheses: • The feasible set C is a nonempty, closed and convex set in a real Hilbert space H; • The operator A : H → H is pseudo-monotone, sequentially weakly continuous and L-Lipschitz continuous for some L > 0, with solution set VI(C, A) ≠ ∅.
First, we present the algorithm for solving the pseudo-monotone variational inequality which combines the inertial extragradient method and Mann-type method.
The following propositions are known results about the iterative sequences generated by Algorithms 1 and 2; they are crucial for the proofs of our convergence theorems, see [14].

Proposition 1.
Assume that A is pseudo-monotone and L-Lipschitz continuous with VI(C, A) ≠ ∅. Let x* be a solution of VI(C, A). Setting z_k = P_C(y_k − λ_k Ay_k), we have ‖z_k − x*‖² ≤ ‖w_k − x*‖² − (1 − λ_k L)‖w_k − y_k‖² − (1 − λ_k L)‖y_k − z_k‖².

Proposition 2.
Assume that the mapping A is pseudo-monotone, sequentially weakly continuous and L-Lipschitz continuous for some L > 0, and that VI(C, A) ≠ ∅. Assume additionally that lim_{k→∞} ‖y_k − w_k‖ = 0 and lim inf_{k→∞} λ_k > 0. Then the sequence {w_k} generated by Algorithm 1 or Algorithm 2 converges weakly to a solution of VI(C, A).

Algorithm 1:
1 Choose x_0, x_1 ∈ H and parameter sequences {α_k}, {β_k}, {γ_k} and {λ_k};
2 Set k := 1;
3 Compute w_k = x_k + α_k(x_k − x_{k−1});
4 if w_k = P_C(w_k − λ_k Aw_k) then
5 Set x_k = w_k;
6 Goto final;
7 end
8 else
9 Compute y_k = P_C(w_k − λ_k Aw_k), z_k = P_C(y_k − λ_k Ay_k) and x_{k+1} = (1 − β_k − γ_k)w_k + β_k z_k;
10 Set k ← k + 1 and goto 3;
11 end
12 final: w_k is a solution of Problem 1.
Now we are in a position to establish the main result of this note.
Theorem 1. Under suitable conditions on {α_k}, {β_k}, {γ_k} and {λ_k} (in particular, lim_{k→∞} γ_k = 0 and lim_{k→∞} β_k > 0), the sequence {x_k} generated by Algorithm 1 converges to the solution x̄ = P_{VI(C,A)}0 in norm.
Proof. Let us fix x* = P_{VI(C,A)}0. To simplify the notation, one sets z_k = P_C(y_k − λ_k Ay_k). By applying Proposition 1, together with the definition of {w_k}, one easily obtains (4). Invoking (4), the definition of {x_k} yields an estimate which clearly implies that {x_k} is bounded. As a result, the sequences {y_k}, {w_k} and {z_k} are bounded as well. Again, by using the definition of {x_k}, we find (7); on the other hand, (8) holds. In view of the definition of {w_k}, we deduce (9). Invoking the boundedness of {x_k} and {z_k}, there exists a constant M_2 > 0 such that (10) holds. By combining inequalities (7)–(10) with Proposition 1, one asserts (11). Note the equality (12); it follows from this equality, based on (9), that (13) holds. Combining (12) with (13), we find (14). Now we prove that the sequence {‖x_k − x*‖} converges to 0 by considering two possible cases on this sequence. Case 1: There exists K ∈ ℕ such that ‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖ for all k ≥ K. It follows from (11) and (15) and the conditions lim_{k→∞} γ_k = 0 and lim_{k→∞} β_k > 0 that lim_{k→∞} ‖y_k − w_k‖ = 0, i.e., (16) holds. From the nonexpansivity of P_C and the L-Lipschitz continuity of A, one concludes (17). Combining (16) with (17), one has lim_{k→∞} ‖y_k − z_k‖ = 0.
It follows that (18) and (19) hold. From (16), (18) and (19), we obtain (20). Recalling that {x_k} is bounded, one assumes that there exists a subsequence {x_{k_j}} of {x_k} such that x_{k_j} ⇀ x̄ as j → ∞. Invoking (19), one has w_{k_j} ⇀ x̄ as j → ∞. As a consequence, by Proposition 2, we find that x̄ ∈ VI(C, A). From the fact that x* = P_{VI(C,A)}0, we obtain (21). From the boundedness of {x_k} and lim_{k→∞} γ_k = 0, we infer (22). Combining (21) with (22), we further find (23). From the condition lim_{k→∞} γ_k = 0, together with (14), (20) and (23), we conclude from Lemma 1 that lim_{k→∞} ‖x_k − x*‖ = 0. In other words, x_k → x* as k → ∞.
Case 2: Suppose that there exists a subsequence {‖x_{k_j} − x*‖} of {‖x_k − x*‖} such that ‖x_{k_j} − x*‖ < ‖x_{k_j+1} − x*‖ for all j ∈ ℕ. In this case, by using Lemma 2, one sees that there exists a nondecreasing sequence {n_i} ⊂ ℕ such that lim_{i→∞} n_i = ∞ and the inequalities (24) hold for all i ∈ ℕ. By using (11) and (24), we have (25). Recalling that lim_{i→∞} α_{n_i}‖x_{n_i} − x_{n_i−1}‖ = lim_{i→∞} γ_{n_i} = 0 and lim_{i→∞} β_{n_i} > 0, it follows from (15) and (25) that lim_{i→∞} ‖y_{n_i} − w_{n_i}‖ = 0.
Using the same arguments as in the proof of Case 1, one obtains (27) and (28). Coming back to (14), we have (29). In light of (27)–(29), we have lim sup_{i→∞} ‖x_{n_i+1} − x*‖² ≤ 0. Invoking (24), we obtain that lim_{i→∞} ‖x_i − x*‖² = 0, which further implies that x_k → x* as k → ∞. This completes the proof.
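For readers who wish to experiment, a minimal Python sketch of a Mann-type inertial extragradient step of the form used in the proof above follows (w_k, y_k, z_k, and a convex combination with weights β_k, γ_k, where the omitted γ_k-weight multiplies the anchor 0). The parameter rules λ_k = 1/(1.5‖Θ‖), α_k = 1/k², γ_k = 1/k and β_k = (k − 1)/(2k) are taken from the experiments in Section 3; the test operator and feasible set are stand-ins, and the exact update is our reading of the (garbled) listing, to be treated as an assumption.

```python
import numpy as np

def project_C(x):                        # C = nonnegative orthant (illustrative)
    return np.maximum(x, 0.0)

Theta = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in SPD data
Upsilon = np.array([-1.0, -2.0])
A = lambda x: Theta @ x + Upsilon
L = np.linalg.norm(Theta, 2)
lam = 1.0 / (1.5 * L)                    # lambda_k as in the experiments

x_prev, x = np.zeros(2), np.ones(2)
for k in range(1, 3000):
    alpha_k = 1.0 / k**2                 # inertial parameter
    gamma_k = 1.0 / k                    # anchoring weight, gamma_k -> 0
    beta_k = (k - 1) / (2.0 * k)         # Mann weight, beta_k -> 1/2
    w = x + alpha_k * (x - x_prev)
    y = project_C(w - lam * A(w))
    z = project_C(y - lam * A(y))
    # Assumed Mann-type update: the missing gamma_k-weight multiplies 0,
    # steering the iterates toward the minimum-norm solution P_{VI(C,A)} 0.
    x_prev, x = x, (1.0 - beta_k - gamma_k) * w + beta_k * z

residual = np.linalg.norm(x - project_C(x - lam * A(x)))
```

The slowly vanishing anchor γ_k = 1/k means the residual decays at roughly the rate of γ_k rather than linearly, which is the price paid for strong (norm) convergence.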
The other algorithm reads as follows.

In each iteration of Algorithm 2 one again computes w_k = x_k + α_k(x_k − x_{k−1}), y_k = P_C(w_k − λ_k Aw_k) and z_k = P_C(y_k − λ_k Ay_k); the Mann-type step of Algorithm 1 is replaced by the viscosity step x_{k+1} = δ_k g(x_k) + (1 − δ_k)z_k, where g : H → H is a contraction, after which one sets k ← k + 1. Now, we are ready to analyze the convergence of Algorithm 2; the outline of its proof is similar to that of Theorem 1. Theorem 2. Under suitable conditions on {α_k}, {δ_k} and {λ_k} (in particular, lim_{k→∞} δ_k = 0 and 0 < a ≤ λ_k ≤ b < 1/L), the sequence {x_k} generated by Algorithm 2 converges to the solution x* = P_{VI(C,A)}(g(x*)) in norm.
Proof. Fixing x* with x* = P_{VI(C,A)}(g(x*)) and using the same arguments as in the proof of Theorem 1, we infer (30) and (31). Now, using (30) and the definition of {x_k}, one sees that (32) and (33) hold. Coming back to (33), we have that {x_k} is bounded. This implies that {w_k}, {y_k} and {z_k} are bounded as well. We apply Proposition 1 and (31) to get (35). Again, by using Proposition 1 and (31), together with (9), one concludes (36). Now let us show that the sequence {‖x_k − x*‖} converges to zero. To obtain this result, we consider two possible cases on the sequence {‖x_k − x*‖}.
Case 1: There exists K ∈ ℕ such that ‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖ for all k ≥ K. Observe that lim_{k→∞} ‖x_k − x*‖² exists. Due to the condition 0 < a ≤ λ_k ≤ b < 1/L, we obtain (37). Based on the conditions lim_{k→∞} (α_k/δ_k)‖x_k − x_{k−1}‖ = 0 and {δ_k} ⊂ (0, 1), we have that lim_{k→∞} α_k‖x_k − x_{k−1}‖ = 0. Since {w_k} and {z_k} are bounded and lim_{k→∞} δ_k = 0, combining (32) with (37) we find that (38) holds. It follows that (39) holds. In light of (37)–(39), we obtain (40). The boundedness of {x_k} asserts that there exists a subsequence {x_{k_j}} of {x_k} such that x_{k_j} ⇀ x̄ as j → ∞. Invoking (39), we observe that w_{k_j} ⇀ x̄ as j → ∞. Hence, it follows from Proposition 2 that x̄ ∈ VI(C, A). Recalling the definition of {x_k} and the assumption lim_{k→∞} δ_k = 0, we infer (41) and (42). Combining (41) with (42), we find (43). Invoking the conditions on the parameters, we conclude that x_k → x* as k → ∞. Case 2: Suppose that there is no k₀ ∈ ℕ such that {‖x_k − x*‖}_{k=k₀}^∞ is monotonically decreasing. In this case, we can define a mapping φ : ℕ → ℕ by φ(k) = max{i ∈ {1, 2, . . . , k} : ‖x_i − x*‖ < ‖x_{i+1} − x*‖}, i.e., φ(k) is the largest index i in {1, 2, . . . , k} at which ‖x_i − x*‖² increases. Note that φ(k) is well defined for all sufficiently large k. Moreover, φ(·) is nondecreasing with lim_{k→∞} φ(k) = ∞, and the inequalities (44) hold for all sufficiently large k. The conditions lim_{k→∞} (α_k/δ_k)‖x_k − x_{k−1}‖ = 0 and lim_{k→∞} δ_k = 0 entail that lim_{k→∞} α_k‖x_k − x_{k−1}‖ = 0. Combining (34) and (44) with the boundedness of {z_k}, we find that ‖y_{φ(k)} − w_{φ(k)}‖ → 0 as k → ∞. Therefore, it follows from (36) that (45) holds. With the help of (45), using the same arguments as in the proof of Case 1, we infer (46). According to (35) and (44), we have (47). Therefore, combining the condition lim_{k→∞} δ_k = 0 with (46) and (47), we obtain lim sup_{k→∞} ‖x_{φ(k)} − x*‖² ≤ 0. Hence, it follows from (44) that x_k → x* as k → ∞. This completes the proof.
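Analogously, the viscosity variant can be sketched as follows. The update x_{k+1} = δ_k g(x_k) + (1 − δ_k)z_k and the contraction g(x) = x/2 are illustrative readings consistent with the proof above (which uses x* = P_{VI(C,A)}(g(x*)) and δ_k → 0), not the paper's exact listing; the operator data are the same stand-in as before.

```python
import numpy as np

def project_C(x):                        # C = nonnegative orthant (illustrative)
    return np.maximum(x, 0.0)

Theta = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in SPD data
Upsilon = np.array([-1.0, -2.0])
A = lambda x: Theta @ x + Upsilon
g = lambda x: 0.5 * x                    # an illustrative contraction
L = np.linalg.norm(Theta, 2)
lam = 1.0 / (1.5 * L)

x_prev, x = np.zeros(2), np.ones(2)
for k in range(1, 3000):
    alpha_k = 1.0 / k**2                 # inertial parameter
    delta_k = 1.0 / k                    # viscosity weight, delta_k -> 0
    w = x + alpha_k * (x - x_prev)
    y = project_C(w - lam * A(w))
    z = project_C(y - lam * A(y))
    # Assumed viscosity update x_{k+1} = delta_k g(x_k) + (1 - delta_k) z_k.
    x_prev, x = x, delta_k * g(x) + (1.0 - delta_k) * z

residual = np.linalg.norm(x - project_C(x - lam * A(x)))
```

As δ_k vanishes, the viscosity term fades out and the iterates settle at the fixed point of P_{VI(C,A)} ∘ g.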

Numerical Results
In this section, we perform some computational experiments in support of the convergence properties of our proposed methods and compare them with Algorithm EAI of Dong et al. [20].
All programs are written in Matlab version 5.0 and computed on a PC Desktop Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz. Consider the quadratic programming problem min{Λ(x) : x ∈ ℝⁿ, x_i ≥ 0, i = 1, 2, . . . , n}, with the data (48)–(50) below in the n-dimensional Euclidean space. When Θ is symmetric and positive definite on ℝⁿ, the operator A = ∇Λ = Θx + Υ is pseudo-monotone and Lipschitz continuous with constant L = ‖Θ‖. Meanwhile, we choose the parameters λ_k = 1/(1.5‖Θ‖), α_k = 1/k², γ_k = δ_k = 1/k and β_k = (k − 1)/(2k) (k ≥ 1). One can check that all the conditions in Theorems 1 and 2 are satisfied. We choose random initial points x_0, x_1 ∈ ℝⁿ in the following experiments. Let us consider the first example [26] with data (48). We apply Algorithm 1 to solve this problem in H = ℝ⁵, taking the iteration number k = 5000 as the stopping criterion. As depicted in Figure 1, the optimal solution of this problem is unique, (1, 0, 0, 0, 0)ᵀ. We use the sequence {E_k} defined by E_k = ‖x_k − P_C(x_k − λ_k Ax_k)‖ (k = 1, 2, 3, . . .) to study the convergence of the different algorithms in H = ℝ⁵. By the characterization of the metric projection, if E_k ≤ ε, then x_k can be considered an ε-solution of the problem. We take the iteration number k = 100 as the stopping criterion. To illustrate the computational performance of all the algorithms, the numerical results are shown in Figure 2. From the changing values of {E_k}, we find that Algorithm 2 behaves better than Algorithm 1 and EAI: it achieves a more stable and higher precision within the same number of iterations. Moreover, the convergence of {E_k} to 0 implies that the iterative sequence {x_k} converges to the solution of this test. Next, we consider the second example [27] with data (49). We apply Algorithm 1 to solve this problem in H = ℝ⁵, taking the iteration number k = 5000 as the stopping criterion.
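As an illustration of the stopping rule based on E_k, the following sketch runs the plain extragradient iteration (without inertia, for brevity) on a stand-in diagonal problem. The data Θ = diag(1, 2, 3, 4, 5) and Υ = (−1, . . . , −1)ᵀ are illustrative, not the data of (48); the step size λ = 1/(1.5‖Θ‖) follows the parameter choice above.

```python
import numpy as np

project_C = lambda x: np.maximum(x, 0.0)   # enforces x_i >= 0

# Stand-in data (NOT the data of (48)): diagonal SPD Theta in R^5.
Theta = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
Upsilon = -np.ones(5)
A = lambda x: Theta @ x + Upsilon
L = np.linalg.norm(Theta, 2)               # L = ||Theta|| = 5
lam = 1.0 / (1.5 * L)

x = np.ones(5)
E = []                                     # residuals E_k
for k in range(200):
    E.append(np.linalg.norm(x - project_C(x - lam * A(x))))
    y = project_C(x - lam * A(x))          # extragradient prediction step
    x = project_C(x - lam * A(y))          # extragradient correction step

# E_k -> 0 certifies that x_k approaches the solution
# (x_k is an eps-solution once E_k <= eps).
```

On this strongly monotone stand-in, E_k decays linearly to machine precision, so an ε-threshold on E_k is reached in a modest number of iterations.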
This problem is solved for Θ a 50 × 50 matrix with Υ ∈ ℝ⁵⁰, and for Θ a 30 × 30 matrix with Υ ∈ ℝ³⁰, respectively. We use Algorithms 1 and 2 to solve this problem, taking the iteration numbers k = 200 and k = 100 as the stopping criteria. The test results are described in Figures 3 and 4, which show the changing values of the components x_1–x_30 (y-axis) with respect to the number of iterations and the running time (x-axis). From this, we find that the iterative sequences generated by Algorithms 1 and 2 converge to a unique solution. Next, we consider another example [27] with the data (50) written as Θ = diag(1/k, 2/k, . . . , 1), Υ = (−1, . . . , −1)ᵀ.
We take the iteration number k = 1000 as the stopping criterion in H = ℝ²⁵. The results reported in Figure 5 show the changing values of the components x_1–x_25 (y-axis) in terms of the number of iterations and the CPU time (x-axis). Accordingly, one sees that Algorithm 1 exhibits convergent behavior.

Conclusions
In this paper, we proposed two inertial extragradient extensions for finding a solution of pseudo-monotone variational inequalities in the setting of Hilbert spaces, and we established strong convergence theorems for the proposed algorithms. Numerical experiments show that our algorithms enjoy a faster rate of convergence than the one given by Dong et al. [20]. It is worth mentioning that many significant real-life problems are naturally defined in Banach spaces. Therefore, it is of interest to extend our results to Banach spaces, which are more general than Hilbert spaces.