Article

Strong Convergent Theorems Governed by Pseudo-Monotone Mappings

Liya Liu, Xiaolong Qin and Jen-Chih Yao
1 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Department of Mathematics, Hangzhou Normal University, Hangzhou 311121, China
3 Center for General Education, China Medical University, Taichung 40447, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1256; https://doi.org/10.3390/math8081256
Submission received: 20 July 2020 / Revised: 27 July 2020 / Accepted: 29 July 2020 / Published: 31 July 2020
(This article belongs to the Special Issue Variational Inequality)

Abstract

The purpose of this paper is to introduce two different kinds of iterative algorithms with inertial effects for solving variational inequalities. The iterative processes are based on the extragradient method, the Mann-type method and the viscosity method. Strong convergence theorems are established in Hilbert spaces under the mild assumptions that the associated mapping is Lipschitz continuous, pseudo-monotone and sequentially weakly continuous. Numerical experiments are performed to illustrate the behavior of the proposed methods and to compare them with an existing method from the literature.

1. Introduction and Preliminaries

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The inner product and the induced norm of $H$ are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively.
Problem 1.
Let $A: H \to H$ be a nonlinear operator. We consider the following variational inequality problem:
$$\text{find } x^* \in VI(C,A) := \{x^* \in C : \langle z - x^*, A(x^*)\rangle \geq 0, \; \forall z \in C\}. \quad (1)$$
The variational inequality serves as an important model for studying a wide class of real problems arising in traffic networks, medical imaging, machine learning, transportation, etc. Owing to its wide applicability, this model unifies a number of optimization-related problems, such as saddle point problems, equilibrium problems, complementarity problems and fixed point problems; see, e.g., [1,2,3,4,5].
Next, we introduce an important tool of this paper: the nearest point (metric) projection. For each $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that $\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}$. The operator $P_C$ is called the nearest point (metric) projection from $H$ onto $C$. It is known that the projection operator is firmly nonexpansive and is characterized by $\langle x - P_C x, y - P_C x\rangle \leq 0$ for all $x \in H$, $y \in C$, which is equivalent to $\|x - P_C x\|^2 \leq \|x - y\|^2 - \|y - P_C x\|^2$ for all $x \in H$, $y \in C$; see [6]. In a routine way, one can turn the variational inequality problem into a fixed point problem via the projection operator: $x^*$ solves Problem 1 if and only if $P_C(I - \alpha A)x^* = x^*$ for all $\alpha > 0$.
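To make the projection concrete, the following minimal NumPy sketch (ours, not part of the original paper) implements $P_C$ for the nonnegative orthant $C = \{x \in \mathbb{R}^n : x \geq 0\}$, the feasible set used in the experiments of Section 3, together with the fixed point residual $\|x - P_C(x - \alpha A x)\|$, which vanishes exactly at solutions of the variational inequality; the names proj_C and vi_residual are our own.

```python
import numpy as np

def proj_C(x):
    # Metric projection onto C = {x : x >= 0}: the componentwise positive part.
    return np.maximum(x, 0.0)

def vi_residual(x, A, alpha=1.0):
    # Norm of the fixed point residual x - P_C(x - alpha * A(x));
    # x solves VI(C, A) iff this quantity is zero (for any alpha > 0).
    return np.linalg.norm(x - proj_C(x - alpha * A(x)))

# Toy check in R^2 with A(x) = Theta @ x + Upsilon and solution x* = (1, 0).
Theta = np.array([[1.0, 2.0], [2.0, 5.0]])
Upsilon = np.array([-1.0, -1.0])
A = lambda x: Theta @ x + Upsilon
print(vi_residual(np.array([1.0, 0.0]), A))  # prints 0.0
```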
Let us recall some definitions of mappings involved in our study. An operator A : H H is said to be
(i)
sequentially weakly continuous if, for each sequence $\{x_k\}$ converging weakly to $x^*$, the sequence $\{Ax_k\}$ converges weakly to $Ax^*$;
(ii)
pseudo-monotone on $H$ if $\langle Ax, y - x\rangle \geq 0 \Rightarrow \langle Ay, y - x\rangle \geq 0$ for all $x, y \in H$;
(iii)
monotone on $H$ if $\langle Ax - Ay, x - y\rangle \geq 0$ for all $x, y \in H$;
(iv)
$L$-Lipschitz continuous on $H$ if there exists $L > 0$ such that $\|Ax - Ay\| \leq L\|x - y\|$ for all $x, y \in H$.
Suppose that $g: H \to \mathbb{R}$ is continuously Fréchet differentiable. Then $g$ is convex if and only if $\nabla g: H \to H$ is monotone. At the same time, it is well known that $g$ is pseudo-convex if and only if $\nabla g$ is pseudo-monotone.
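As a simple illustration (ours, not taken from the paper), the mapping $A(x) = x/(1+x^2)$ on $H = \mathbb{R}$ is pseudo-monotone but not monotone, so the pseudo-monotone class is strictly larger; a quick numerical check:

```python
# A(x) = x / (1 + x^2) is pseudo-monotone on R but not monotone.
A = lambda x: x / (1 + x**2)
x, y = 1.0, 3.0
print((A(x) - A(y)) * (x - y))  # (0.5 - 0.3) * (1 - 3) = -0.4 < 0: monotonicity fails
# Pseudo-monotonicity survives: here A(x)*(y - x) = 1.0 >= 0 and A(y)*(y - x) = 0.6 >= 0.
```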
Recently, iterative methods for solving variational inequalities and related optimization problems have been proposed and analyzed by many authors [7,8,9,10,11]. Let us start with Korpelevich's method [12], known as the extragradient method, which was proposed in the Euclidean setting. This method requires two calculations of the projection onto a nonempty closed convex subset per iteration; that is, it generates a sequence by the following iteration procedure:
$$x_0 \in C, \quad w_k = P_C(x_k - \lambda A x_k), \quad x_{k+1} = P_C(x_k - \lambda A w_k), \quad (2)$$
where the mapping $A: H \to H$ is monotone and $L$-Lipschitz continuous for some $L > 0$, and $\lambda \in (0, 1/L)$. Recently, the extragradient method has received great attention from many authors [13]. It has been studied in various ways for solving more general problems in the setting of Hilbert spaces. Typically, the extragradient method has been successfully applied to pseudo-monotone variational inequalities; see [14].
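For readers who want to experiment, here is a minimal NumPy sketch of iteration (2); it assumes a user-supplied projection proj_C (e.g., the one above), and the function name and stopping rule are our own choices.

```python
import numpy as np

def extragradient(A, proj_C, x0, lam, max_iter=1000, tol=1e-8):
    # Korpelevich's extragradient method (2): two projections per iteration.
    # A should be monotone and L-Lipschitz; lam should lie in (0, 1/L).
    x = proj_C(x0)
    for _ in range(max_iter):
        w = proj_C(x - lam * A(x))       # predictor: gradient step at x
        x_new = proj_C(x - lam * A(w))   # corrector: gradient re-evaluated at w
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```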
Now, let us consider the inertial extrapolation, which can be regarded as a procedure for accelerating convergence. Due to its importance, there is increasing interest in studying inertial-type algorithms; see, e.g., [15,16,17,18,19] and the references therein. By incorporating the inertial extrapolation into the extragradient method, Dong et al. [20] introduced the following inertial extragradient algorithm (EAI). Given any $x_0, x_1 \in H$,
$$z_k = x_k + \gamma_k(x_k - x_{k-1}), \quad y_k = P_C(z_k - \beta A z_k), \quad x_{k+1} = (1 - \alpha_k) z_k + \alpha_k P_C(z_k - \beta A y_k), \quad (3)$$
for each $k \geq 1$, where the mapping $A: H \to H$ is monotone and Lipschitz continuous with constant $L > 0$. The authors showed that $\{x_k\}$ converges weakly to an element of $VI(C,A)$ under the following conditions:
(i)
$\{\gamma_k\}$ is non-decreasing with $\gamma_1 = 0$ and $0 \leq \gamma_k \leq \gamma < 1$ for each $k \geq 1$;
(ii)
there exist $\alpha, \sigma, \delta > 0$ such that
$$\frac{\gamma\left[(1+\beta L)^2\gamma(1+\gamma) + (1-\beta^2 L^2)\gamma\sigma + \sigma(1+\beta L)^2\right]}{1-\beta^2 L^2} < \delta$$
and
$$\frac{(1-\beta^2 L^2)\delta - \gamma\left[(1+\beta L)^2\gamma(1+\gamma) + (1-\beta^2 L^2)\gamma\sigma + \sigma(1+\beta L)^2\right]}{\delta\left[(1+\beta L)^2\gamma(1+\gamma) + (1-\beta^2 L^2)\gamma\sigma + \sigma(1+\beta L)^2\right]} \geq \alpha_k \geq \alpha > 0.$$
Note that the inertial extragradient algorithm involving the inertial term mentioned above is only weakly convergent.
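A sketch of iteration (3), under the same conventions as above (parameter sequences passed as callables, our own naming), might read:

```python
def eai(A, proj_C, x0, x1, beta, gamma_k, alpha_k, max_iter=1000):
    # Inertial extragradient algorithm (EAI) of Dong et al. [20], iteration (3).
    # gamma_k(k) and alpha_k(k) return parameters satisfying conditions (i)-(ii);
    # beta is the fixed step size. Only weak convergence is guaranteed.
    x_prev, x = x0, x1
    for k in range(1, max_iter + 1):
        z = x + gamma_k(k) * (x - x_prev)   # inertial extrapolation
        y = proj_C(z - beta * A(z))         # extragradient predictor
        x_prev, x = x, (1 - alpha_k(k)) * z + alpha_k(k) * proj_C(z - beta * A(y))
    return x
```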
Many problems arising in a broad range of applied areas, such as image recovery, quantum physics, economics, control theory and mechanics, have been extensively studied in the infinite-dimensional setting. In such problems, norm convergence is essential, since it guarantees that the energy $\|x - x_n\|^2$ of the error between the solution $x$ and the iterate $x_n$ eventually becomes arbitrarily small. Furthermore, in the context of solving the convex optimization problem $\min\{\Lambda(x) : x \in H\}$, the rate of convergence of $\{\Lambda(x_k)\}$ tends to be better when $\{x_k\}$ converges strongly than when it converges only weakly. This naturally gives rise to the question of how to appropriately modify the inertial extragradient method so that strong convergence is guaranteed. To answer this question, we propose two modified inertial extragradient algorithms. The first modification stems from the Mann-type method [21,22] and the other is of viscosity nature [23].
An obvious disadvantage of Algorithms (2) and (3) is the assumption that the mapping $A$ is Lipschitz continuous and monotone. To avoid this restrictive assumption, in this paper we show that our proposed algorithms can solve the pseudo-monotone variational inequality under suitable assumptions. It is worth mentioning that the class of pseudo-monotone mappings properly contains the class of monotone mappings. Indeed, the scope of the related optimization problems is thereby enlarged from convex optimization problems to pseudo-convex optimization problems. This is an advantage of the modified inertial extragradient methods in comparison with other solution methods.
The following lemmas will be used in the proof of our main results.
Lemma 1
([24]). Let $\{a_k\}$ be a nonnegative real sequence satisfying $a_{k+1} \leq \alpha_k b_k + (1 - \alpha_k) a_k$, where $\{\alpha_k\} \subset (0,1)$ and $\{b_k\}$ are real sequences such that $\lim_{k\to\infty}\alpha_k = 0$, $\sum_{k=0}^{\infty}\alpha_k = \infty$ and $\limsup_{k\to\infty} b_k \leq 0$. Then $a_k \to 0$ as $k \to \infty$.
Lemma 2
([25]). Let $\{a_k\}$ be a real sequence in $[0, +\infty)$ such that there exists a subsequence $\{a_{k_j}\}$ of $\{a_k\}$ with $a_{k_j} < a_{k_j+1}$ for all $j \in \mathbb{N}$. Then there exists a nondecreasing sequence $\{m_i\}$ such that $\lim_{i\to\infty} m_i = \infty$ and the following properties are satisfied for all (sufficiently large) $i \in \mathbb{N}$: $a_{m_i+1} \geq a_i$ and $a_{m_i+1} \geq a_{m_i}$. Indeed, $m_i$ is the largest number $n$ in $\{1, 2, \ldots, i\}$ such that $a_n \leq a_{n+1}$ holds.
The rest of the paper is organized as follows. In Section 2, we give two variants of the inertial extragradient method for solving pseudo-monotone variational inequalities, and we prove strong convergence results for the proposed algorithms. In Section 3, some numerical experiments on quadratic programming problems are presented to demonstrate the performance of our methods. Finally, a conclusion is given in the last section.

2. Algorithm and Convergence

Throughout the rest of the paper, one systematically assumes the following hypotheses:
  • The feasible set $C$ is a nonempty, closed and convex set in a real Hilbert space $H$;
  • The operator $A: H \to H$ is pseudo-monotone, sequentially weakly continuous and $L$-Lipschitz continuous for some $L > 0$, with solution set $VI(C,A) \neq \emptyset$.
First, we present the algorithm for solving the pseudo-monotone variational inequality which combines the inertial extragradient method and Mann-type method.
The following propositions are known results on the iterative sequences generated by Algorithms 1 and 2, which are crucial for the proof of our convergence theorems; see [14].
Proposition 1.
Assume that $A$ is pseudo-monotone and $L$-Lipschitz continuous with $VI(C,A) \neq \emptyset$. Let $x^*$ be a solution of $VI(C,A)$. Setting $z_k = P_C(y_k - \lambda_k A y_k)$, we have
$$\|z_k - x^*\|^2 \leq \|w_k - x^*\|^2 - (1 - \lambda_k^2 L^2)\|y_k - w_k\|^2.$$
Proposition 2.
Let the mapping $A$ be pseudo-monotone, sequentially weakly continuous and $L$-Lipschitz continuous for some $L > 0$, and assume that $VI(C,A) \neq \emptyset$. Assume additionally that $\lim_{k\to\infty}\|y_k - w_k\| = 0$ and $\liminf_{k\to\infty}\lambda_k > 0$. Then the sequence $\{w_k\}$ generated by Algorithm 1 or Algorithm 2 converges weakly to a solution of $VI(C,A)$.
Algorithm 1:
Initialization: take $x_0, x_1 \in H$ and set $k = 1$.
Step 1. Compute the inertial term $w_k = x_k + \alpha_k(x_k - x_{k-1})$.
Step 2. Compute $y_k = P_C(w_k - \lambda_k A w_k)$.
Step 3. Compute $x_{k+1} = (1 - \beta_k - \gamma_k)x_k + \beta_k P_C(y_k - \lambda_k A y_k)$. Set $k \leftarrow k + 1$ and return to Step 1.
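The following NumPy sketch (our transcription, with parameter sequences passed as callables) implements the steps of Algorithm 1:

```python
def algorithm1(A, proj_C, x0, x1, lam_k, alpha_k, beta_k, gamma_k, max_iter=1000):
    # Inertial Mann-type extragradient method (Algorithm 1).
    # lam_k, alpha_k, beta_k, gamma_k map an index k to parameter values
    # satisfying the conditions of Theorem 1 below.
    x_prev, x = x0, x1
    for k in range(1, max_iter + 1):
        w = x + alpha_k(k) * (x - x_prev)   # inertial term
        y = proj_C(w - lam_k(k) * A(w))     # first projection step
        z = proj_C(y - lam_k(k) * A(y))     # second projection step
        x_prev, x = x, (1 - beta_k(k) - gamma_k(k)) * x + beta_k(k) * z  # Mann-type step
    return x
```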
Now we are in a position to establish the main result of this note.
Theorem 1.
Let $\{\lambda_k\}$, $\{\gamma_k\}$, $\{\beta_k\}$ be three real sequences in $(0,1)$ such that $0 < a \leq \lambda_k \leq b < \frac{1}{L}$ for some $a, b \in (0,1)$, $\lim_{k\to\infty}\gamma_k = 0$, $\sum_{k=1}^{\infty}\gamma_k = \infty$, $\{\beta_k\} \subset (0, 1 - \gamma_k)$ and $\liminf_{k\to\infty}\beta_k > 0$. Assume that the sequence $\{\alpha_k\}$ is chosen such that $\lim_{k\to\infty}\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$. Then the sequence $\{x_k\}$ generated by Algorithm 1 converges in norm to the solution $\hat{x} = P_{VI(C,A)}(0)$.
Proof. 
Let us fix $x^* = P_{VI(C,A)}(0)$. To simplify the notation, one sets $z_k = P_C(y_k - \lambda_k A y_k)$. By applying Proposition 1, together with the definition of $\{w_k\}$, one easily obtains that
$$\|z_k - x^*\| \leq \|w_k - x^*\| = \|x_k + \alpha_k(x_k - x_{k-1}) - x^*\| \leq \alpha_k\|x_k - x_{k-1}\| + \|x_k - x^*\|. \quad (4)$$
Invoking (4), the definition of $\{x_k\}$ implies that
$$\|x_{k+1} - x^*\| = \|(1 - \beta_k - \gamma_k)(x_k - x^*) + \beta_k(z_k - x^*) - \gamma_k x^*\| \leq (1 - \gamma_k)\|x_k - x^*\| + \gamma_k\left(\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| + \|x^*\|\right). \quad (5)$$
Owing to the assumption $\lim_{k\to\infty}\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$, there exists a positive constant $M_1 > 0$ such that $\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| \leq M_1$. Coming back to (5), we obtain that
$$\|x_{k+1} - x^*\| \leq \max\{\|x_k - x^*\|, M_1 + \|x^*\|\} \leq \cdots \leq \max\{\|x_0 - x^*\|, M_1 + \|x^*\|\}. \quad (6)$$
This clearly implies that $\{x_k\}$ is bounded. As a result, the sequences $\{y_k\}$, $\{w_k\}$ and $\{z_k\}$ are bounded as well. Again, by using the definition of $\{x_k\}$, we find that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &= \|\beta_k(z_k - x^*) + (1 - \beta_k - \gamma_k)(x_k - x^*) - \gamma_k x^*\|^2 \\ &\leq \|\beta_k(z_k - x^*) + (1 - \beta_k - \gamma_k)(x_k - x^*)\|^2 - 2\gamma_k\langle (1 - \beta_k - \gamma_k)(x_k - x^*) + \beta_k(z_k - x^*), x^*\rangle \\ &\leq \|\beta_k(z_k - x^*) + (1 - \beta_k - \gamma_k)(x_k - x^*)\|^2 + 2\gamma_k\|(1 - \beta_k - \gamma_k)(x_k - x^*) + \beta_k(z_k - x^*)\|\,\|x^*\|. \end{aligned} \quad (7)$$
On the other hand,
$$\begin{aligned} \|\beta_k(z_k - x^*) + (1 - \beta_k - \gamma_k)(x_k - x^*)\|^2 &\leq \beta_k^2\|z_k - x^*\|^2 + (1 - \beta_k - \gamma_k)^2\|x_k - x^*\|^2 + 2(1 - \beta_k - \gamma_k)\beta_k\|x_k - x^*\|\,\|z_k - x^*\| \\ &\leq \beta_k^2\|z_k - x^*\|^2 + (1 - \beta_k - \gamma_k)^2\|x_k - x^*\|^2 + (1 - \beta_k - \gamma_k)\beta_k\|x_k - x^*\|^2 + (1 - \beta_k - \gamma_k)\beta_k\|z_k - x^*\|^2 \\ &\leq (1 - \gamma_k)\beta_k\|z_k - x^*\|^2 + (1 - \beta_k - \gamma_k)(1 - \gamma_k)\|x_k - x^*\|^2. \end{aligned} \quad (8)$$
In view of the definition of $\{w_k\}$, we deduce that
$$\|w_k - x^*\|^2 = \|\alpha_k(x_k - x_{k-1}) + x_k - x^*\|^2 \leq 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| + \|x_k - x^*\|^2. \quad (9)$$
Invoking the boundedness of $\{x_k\}$ and $\{z_k\}$, there exists a positive constant $M_2 > 0$ such that
$$\|(1 - \beta_k - \gamma_k)(x_k - x^*) + \beta_k(z_k - x^*)\|\,\|x^*\| \leq M_2. \quad (10)$$
By combining inequalities (7)–(10) with Proposition 1, one asserts that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &\leq (1 - \beta_k - \gamma_k)(1 - \gamma_k)\|x_k - x^*\|^2 + (1 - \gamma_k)\beta_k\|z_k - x^*\|^2 + 2\gamma_k M_2 \\ &\leq (1 - \beta_k - \gamma_k)(1 - \gamma_k)\|x_k - x^*\|^2 + (1 - \gamma_k)\beta_k\left(\|x_k - x^*\|^2 + 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| - (1 - \lambda_k^2 L^2)\|y_k - w_k\|^2\right) + 2\gamma_k M_2 \\ &\leq \|x_k - x^*\|^2 + 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| - (1 - \gamma_k)\beta_k(1 - \lambda_k^2 L^2)\|y_k - w_k\|^2 + 2\gamma_k M_2. \end{aligned} \quad (11)$$
Note that $x_{k+1} = (1 - \beta_k)x_k + \beta_k z_k - \gamma_k x_k$. Setting $p_k = (1 - \beta_k)x_k + \beta_k z_k$, we find that $p_k - x_k = \beta_k(z_k - x_k)$. Furthermore, we can reformulate $x_{k+1}$ as
$$x_{k+1} = p_k - \gamma_k x_k = (1 - \gamma_k)p_k + \gamma_k(p_k - x_k) = (1 - \gamma_k)p_k + \gamma_k\beta_k(z_k - x_k).$$
It follows from the above equality that
$$\|x_{k+1} - x^*\|^2 = \|(1 - \gamma_k)(p_k - x^*) + \gamma_k(\beta_k(z_k - x_k) - x^*)\|^2 \leq (1 - \gamma_k)\|p_k - x^*\|^2 + 2\gamma_k\beta_k\langle z_k - x_k, x_{k+1} - x^*\rangle - 2\gamma_k\langle x^*, x_{k+1} - x^*\rangle. \quad (12)$$
Indeed, based on (9), we have that
$$\|p_k - x^*\|^2 = \|\beta_k(z_k - x^*) + (1 - \beta_k)(x_k - x^*)\|^2 \leq (1 - \beta_k)\|x_k - x^*\|^2 + \beta_k\left(2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| + \|x_k - x^*\|^2\right) \leq 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| + \|x_k - x^*\|^2. \quad (13)$$
Combining (12) with (13), we find that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &\leq (1 - \gamma_k)\left(\|x_k - x^*\|^2 + 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\|\right) + 2\gamma_k\beta_k\|z_k - x_k\|\,\|x_{k+1} - x^*\| - 2\gamma_k\langle x^*, x_{k+1} - x^*\rangle \\ &\leq (1 - \gamma_k)\|x_k - x^*\|^2 + \gamma_k\left(2\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\|\,\|w_k - x^*\| + 2\beta_k\|z_k - x_k\|\,\|x_{k+1} - x^*\| + 2\langle x^*, x^* - x_{k+1}\rangle\right). \end{aligned} \quad (14)$$
Now we prove that the sequence $\{\|x_k - x^*\|\}$ converges to $0$ by considering two possible cases on $\{\|x_k - x^*\|\}$.
Case 1. Suppose that there exists $K \in \mathbb{N}$ such that $\|x_{k+1} - x^*\| \leq \|x_k - x^*\|$ for all $k \geq K$. This implies that $\lim_{k\to\infty}\|x_k - x^*\|$ exists. From the conditions $\lim_{k\to\infty}\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| = 0$ and $\{\gamma_k\} \subset (0,1)$, we find that $\lim_{k\to\infty}\alpha_k\|x_k - x_{k-1}\| = 0$. Since $0 < a \leq \lambda_k \leq b < \frac{1}{L}$, it holds that
$$0 < 1 - b^2 L^2 \leq 1 - \lambda_k^2 L^2 \leq 1 - a^2 L^2 < 1. \quad (15)$$
It follows from (11) and (15) and the conditions $\lim_{k\to\infty}\gamma_k = 0$ and $\liminf_{k\to\infty}\beta_k > 0$ that
$$\lim_{k\to\infty}\|y_k - w_k\| = 0. \quad (16)$$
From the nonexpansivity of $P_C$ and the $L$-Lipschitz continuity of $A$, one concludes that
$$\|z_k - y_k\| = \|P_C(y_k - \lambda_k A y_k) - P_C(w_k - \lambda_k A w_k)\| \leq (1 + \lambda_k L)\|y_k - w_k\|. \quad (17)$$
Combining (16) with (17), one has
$$\lim_{k\to\infty}\|y_k - z_k\| = 0. \quad (18)$$
It follows that
$$\lim_{k\to\infty}\|w_k - x_k\| = \lim_{k\to\infty}\alpha_k\|x_k - x_{k-1}\| = 0. \quad (19)$$
From (16), (18) and (19), we obtain that
$$\lim_{k\to\infty}\|z_k - x_k\| \leq \lim_{k\to\infty}\left(\|z_k - y_k\| + \|y_k - w_k\| + \|w_k - x_k\|\right) = 0. \quad (20)$$
Recalling that $\{x_k\}$ is bounded, one may assume that there exists a subsequence $\{x_{k_j}\}$ of $\{x_k\}$ such that $x_{k_j} \rightharpoonup \hat{x}$ as $j \to \infty$. Invoking (19), one has that $w_{k_j} \rightharpoonup \hat{x}$ as $j \to \infty$. As a consequence, by use of Proposition 2, we find that $\hat{x} \in VI(C,A)$. From the fact $x^* = P_{VI(C,A)}(0)$, we obtain that
$$\limsup_{k\to\infty}\langle x^*, x^* - x_k\rangle = \lim_{j\to\infty}\langle x^*, x^* - x_{k_j}\rangle = \langle x^*, x^* - \hat{x}\rangle \leq 0. \quad (21)$$
From the boundedness of $\{x_k\}$ and $\lim_{k\to\infty}\gamma_k = 0$, we infer
$$\|x_{k+1} - x_k\| = \|\beta_k(z_k - x_k) - \gamma_k x_k\| \leq \beta_k\|z_k - x_k\| + \gamma_k\|x_k\| \to 0, \quad k \to \infty. \quad (22)$$
Combining (21) with (22), we further find that
$$\limsup_{k\to\infty}\langle x^*, x^* - x_{k+1}\rangle \leq 0. \quad (23)$$
From the conditions $\lim_{k\to\infty}\frac{\alpha_k}{\gamma_k}\|x_k - x_{k-1}\| = \lim_{k\to\infty}\gamma_k = 0$ and $\sum_{k=1}^{\infty}\gamma_k = \infty$, together with (14), (20) and (23), we conclude from Lemma 1 that $\lim_{k\to\infty}\|x_k - x^*\| = 0$. In other words, $x_k \to x^*$ as $k \to \infty$.
Case 2. Suppose that there exists a subsequence $\{\|x_{k_j} - x^*\|\}$ of $\{\|x_k - x^*\|\}$ such that $\|x_{k_j} - x^*\| < \|x_{k_j+1} - x^*\|$ for all $j \in \mathbb{N}$. In this case, by using Lemma 2, one sees that there exists a nondecreasing sequence $\{n_i\}$ of $\mathbb{N}$ such that $\lim_{i\to\infty} n_i = \infty$ and the following inequalities hold for all $i \in \mathbb{N}$:
$$\|x_{n_i+1} - x^*\| \geq \|x_{n_i} - x^*\|, \quad \|x_{n_i+1} - x^*\| \geq \|x_i - x^*\|. \quad (24)$$
By using (11) and (24), we have that
$$(1 - \gamma_{n_i})\beta_{n_i}(1 - \lambda_{n_i}^2 L^2)\|y_{n_i} - w_{n_i}\|^2 \leq 2\alpha_{n_i}\|x_{n_i} - x_{n_i-1}\|\,\|w_{n_i} - x^*\| + 2\gamma_{n_i} M_2. \quad (25)$$
Recalling that $\lim_{i\to\infty}\alpha_{n_i}\|x_{n_i} - x_{n_i-1}\| = \lim_{i\to\infty}\gamma_{n_i} = 0$ and $\liminf_{i\to\infty}\beta_{n_i} > 0$, it follows from (15) and (25) that
$$\lim_{i\to\infty}\|y_{n_i} - w_{n_i}\| = 0. \quad (26)$$
Using the same arguments as in the proof of Case 1, one obtains that
$$\lim_{i\to\infty}\|w_{n_i} - x_{n_i}\| = \lim_{i\to\infty}\|z_{n_i} - x_{n_i}\| = \lim_{i\to\infty}\|x_{n_i+1} - x_{n_i}\| = 0 \quad (27)$$
and
$$\limsup_{i\to\infty}\langle x^*, x^* - x_{n_i+1}\rangle \leq 0. \quad (28)$$
Coming back to (14), we have
$$\|x_{n_i+1} - x^*\|^2 \leq 2\frac{\alpha_{n_i}}{\gamma_{n_i}}\|x_{n_i} - x_{n_i-1}\|\,\|w_{n_i} - x^*\| + 2\beta_{n_i}\|z_{n_i} - x_{n_i}\|\,\|x_{n_i+1} - x^*\| + 2\langle x^*, x^* - x_{n_i+1}\rangle. \quad (29)$$
In light of (27)–(29), we have $\limsup_{i\to\infty}\|x_{n_i+1} - x^*\|^2 \leq 0$. Invoking (24), we obtain that $\lim_{i\to\infty}\|x_i - x^*\|^2 = 0$, which further implies that $x_k \to x^*$ as $k \to \infty$. This completes the proof.  □
The other algorithm reads as follows.
Algorithm 2:
Initialization: take $x_0, x_1 \in H$ and set $k = 1$.
Step 1. Compute the inertial term $w_k = x_k + \alpha_k(x_k - x_{k-1})$.
Step 2. Compute $y_k = P_C(w_k - \lambda_k A w_k)$ and $z_k = P_C(y_k - \lambda_k A y_k)$.
Step 3. Compute $x_{k+1} = (1 - \delta_k)z_k + \delta_k g(z_k)$. Set $k \leftarrow k + 1$ and return to Step 1.
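As with Algorithm 1, a short NumPy transcription of Algorithm 2 (our own naming; $g$ is the contraction of Theorem 2 below) may be sketched as follows:

```python
def algorithm2(A, g, proj_C, x0, x1, lam_k, alpha_k, delta_k, max_iter=1000):
    # Inertial viscosity-type extragradient method (Algorithm 2).
    # g is a contraction on H; lam_k, alpha_k, delta_k map an index k to
    # parameter values satisfying the conditions of Theorem 2 below.
    x_prev, x = x0, x1
    for k in range(1, max_iter + 1):
        w = x + alpha_k(k) * (x - x_prev)   # inertial term
        y = proj_C(w - lam_k(k) * A(w))     # first projection step
        z = proj_C(y - lam_k(k) * A(y))     # second projection step
        x_prev, x = x, (1 - delta_k(k)) * z + delta_k(k) * g(z)  # viscosity step
    return x
```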
Now, we are ready to analyze the convergence of Algorithm 2. The outline of its proof is similar to that of Theorem 1.
Theorem 2.
Let $g: H \to H$ be a contraction mapping with contraction parameter $\kappa \in (0,1)$. Let $\{\lambda_k\}$, $\{\delta_k\}$ be two real sequences in $(0,1)$ such that $0 < a \leq \lambda_k \leq b < \frac{1}{L}$ for some $a, b \in (0,1)$, $\sum_{k=1}^{\infty}\delta_k = \infty$ and $\lim_{k\to\infty}\delta_k = 0$. Assume that the sequence $\{\alpha_k\}$ is chosen such that $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$. Then the sequence $\{x_k\}$ generated by Algorithm 2 converges strongly to the solution $\hat{x} \in VI(C,A)$, where $\hat{x} = P_{VI(C,A)}g(\hat{x})$.
Proof. 
Fixing $x^* = P_{VI(C,A)}g(x^*)$ and using the same arguments as in the proof of Theorem 1, we infer
$$\|z_k - x^*\| \leq \|w_k - x^*\| \leq \|x_k - x^*\| + \alpha_k\|x_k - x_{k-1}\|, \quad (30)$$
$$\|w_k - x^*\|^2 \leq \|x_k - x^*\|^2 + 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\|, \quad (31)$$
and
$$\|z_k - y_k\| \leq (1 + \lambda_k L)\|y_k - w_k\|. \quad (32)$$
Now, using (30) and the definition of $\{x_k\}$, one sees that
$$\begin{aligned} \|x_{k+1} - x^*\| &\leq (1 - \delta_k)\|z_k - x^*\| + \delta_k\|g(z_k) - g(x^*)\| + \delta_k\|g(x^*) - x^*\| \\ &\leq (1 - \delta_k(1 - \kappa))\|z_k - x^*\| + \delta_k\|g(x^*) - x^*\| \\ &\leq (1 - \delta_k(1 - \kappa))\|x_k - x^*\| + \delta_k(1 - \kappa)\left(\frac{\alpha_k}{\delta_k(1 - \kappa)}\|x_k - x_{k-1}\| + \frac{1}{1 - \kappa}\|g(x^*) - x^*\|\right). \end{aligned} \quad (33)$$
Since $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$, we may take a positive constant $M_3 > 0$ such that $\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| \leq M_3$. Coming back to (33), we have that
$$\|x_{k+1} - x^*\| \leq (1 - \delta_k(1 - \kappa))\|x_k - x^*\| + \delta_k(1 - \kappa)\left(\frac{M_3}{1 - \kappa} + \frac{1}{1 - \kappa}\|g(x^*) - x^*\|\right) \leq \max\left\{\|x_k - x^*\|, \frac{M_3}{1 - \kappa} + \frac{1}{1 - \kappa}\|g(x^*) - x^*\|\right\} \leq \cdots \leq \max\left\{\|x_0 - x^*\|, \frac{M_3}{1 - \kappa} + \frac{1}{1 - \kappa}\|g(x^*) - x^*\|\right\}.$$
It entails that $\{x_k\}$ is bounded, and hence $\{w_k\}$, $\{y_k\}$ and $\{z_k\}$ are bounded as well. We apply Proposition 1 and (31) to get that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &\leq (1 - \delta_k)\|z_k - x^*\|^2 + \delta_k\left(\|g(z_k) - g(x^*)\| + \|g(x^*) - x^*\|\right)^2 \\ &\leq (1 - \delta_k)\|z_k - x^*\|^2 + \delta_k\left(\|z_k - x^*\| + \|g(x^*) - x^*\|\right)^2 \\ &\leq \|x_k - x^*\|^2 + 2\alpha_k\|x_k - x_{k-1}\|\,\|w_k - x^*\| - (1 - \lambda_k^2 L^2)\|y_k - w_k\|^2 + \delta_k\left(\|g(x^*) - x^*\|^2 + 2\|z_k - x^*\|\,\|g(x^*) - x^*\|\right). \end{aligned} \quad (34)$$
Again, by using Proposition 1 and (31), one concludes that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &\leq (1 - \delta_k)\|z_k - x^*\|^2 + \delta_k\|g(z_k) - g(x^*)\|^2 + 2\delta_k\langle g(x^*) - x^*, x_{k+1} - x^*\rangle \\ &\leq (1 - \delta_k(1 - \kappa))\|z_k - x^*\|^2 + 2\delta_k\langle g(x^*) - x^*, x_{k+1} - x^*\rangle \\ &\leq (1 - \delta_k(1 - \kappa))\|x_k - x^*\|^2 + \delta_k(1 - \kappa)\left(\frac{2\alpha_k}{\delta_k(1 - \kappa)}\|x_k - x_{k-1}\|\,\|w_k - x^*\| + \frac{2}{1 - \kappa}\langle g(x^*) - x^*, x_{k+1} - x^*\rangle\right). \end{aligned} \quad (35)$$
Now let us show that the sequence $\{\|x_k - x^*\|\}$ converges to zero. To this end, we consider two possible cases on the sequence $\{\|x_k - x^*\|\}$.
Case 1: There exists $K \in \mathbb{N}$ such that $\|x_{k+1} - x^*\| \leq \|x_k - x^*\|$ for all $k \geq K$. Observe that $\lim_{k\to\infty}\|x_k - x^*\|^2$ exists. Due to the condition $0 < a \leq \lambda_k \leq b < \frac{1}{L}$, we have that
$$0 < 1 - b^2 L^2 \leq 1 - \lambda_k^2 L^2 \leq 1 - a^2 L^2 < 1. \quad (36)$$
Based on the conditions $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$ and $\{\delta_k\} \subset (0,1)$, we have that $\lim_{k\to\infty}\alpha_k\|x_k - x_{k-1}\| = 0$. Since $\{w_k\}$ and $\{z_k\}$ are bounded and the condition $\lim_{k\to\infty}\delta_k = 0$ holds, we find from (34) and (36) that
$$\lim_{k\to\infty}\|y_k - w_k\| = 0. \quad (37)$$
Combining (32) with (37), we have that
$$\lim_{k\to\infty}\|y_k - z_k\| = 0. \quad (38)$$
It follows that
$$\lim_{k\to\infty}\|w_k - x_k\| = \lim_{k\to\infty}\alpha_k\|x_k - x_{k-1}\| = 0. \quad (39)$$
In light of (37)–(39), we have that
$$\lim_{k\to\infty}\|z_k - x_k\| \leq \lim_{k\to\infty}\left(\|z_k - y_k\| + \|y_k - w_k\| + \|w_k - x_k\|\right) = 0. \quad (40)$$
The boundedness of $\{x_k\}$ asserts that there exists a subsequence $\{x_{k_j}\}$ of $\{x_k\}$ such that $x_{k_j} \rightharpoonup \hat{x}$ as $j \to \infty$. Invoking (39), we observe that $w_{k_j} \rightharpoonup \hat{x}$ as $j \to \infty$. Hence, it follows from Proposition 2 that $\hat{x} \in VI(C,A)$. Invoking $x^* = P_{VI(C,A)}g(x^*)$, we deduce that
$$\limsup_{k\to\infty}\langle g(x^*) - x^*, x_k - x^*\rangle = \lim_{j\to\infty}\langle g(x^*) - x^*, x_{k_j} - x^*\rangle = \langle g(x^*) - x^*, \hat{x} - x^*\rangle \leq 0. \quad (41)$$
Recalling the definition of $\{x_k\}$ and the assumption $\lim_{k\to\infty}\delta_k = 0$, we infer that
$$\|x_{k+1} - x_k\| = \|(1 - \delta_k)z_k + \delta_k g(z_k) - x_k\| \leq \|z_k - x_k\| + \delta_k\|z_k - g(z_k)\| \to 0, \quad k \to \infty. \quad (42)$$
Combining (41) with (42), we find that
$$\limsup_{k\to\infty}\langle g(x^*) - x^*, x_{k+1} - x^*\rangle \leq 0. \quad (43)$$
Invoking the conditions $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$, $\lim_{k\to\infty}\delta_k = 0$, $\sum_{k=1}^{\infty}\delta_k = \infty$ and $\kappa \in (0,1)$, we apply Lemma 1 to (35) to get that $\lim_{k\to\infty}\|x_k - x^*\| = 0$, that is, $\lim_{k\to\infty}x_k = x^*$.
Case 2: Suppose that there is no $k_0 \in \mathbb{N}$ such that $\{\|x_k - x^*\|\}_{k=k_0}^{\infty}$ is monotonically decreasing. In this case, we can define a mapping $\varphi: \mathbb{N} \to \mathbb{N}$ by
$$\varphi(k) := \max\{i \in \mathbb{N} : i \leq k, \; \|x_i - x^*\|^2 \leq \|x_{i+1} - x^*\|^2\},$$
i.e., $\varphi(k)$ is the largest number $i$ in $\{1, 2, \ldots, k\}$ such that $\|x_i - x^*\|^2$ increases at $i = \varphi(k)$. Note that $\varphi(k)$ is well defined for all sufficiently large $k$. Moreover, $\varphi(\cdot)$ is a nondecreasing sequence such that $\lim_{k\to\infty}\varphi(k) = \infty$ and the following inequalities hold for all $k \geq k_0$:
$$\|x_{\varphi(k)+1} - x^*\|^2 \geq \|x_{\varphi(k)} - x^*\|^2, \quad \|x_{\varphi(k)+1} - x^*\|^2 \geq \|x_k - x^*\|^2. \quad (44)$$
The conditions $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$ and $\lim_{k\to\infty}\delta_k = 0$ entail that $\lim_{k\to\infty}\alpha_k\|x_k - x_{k-1}\| = 0$. Combining (34) and (44) with the boundedness of $\{z_k\}$, we find that
$$(1 - \lambda_{\varphi(k)}^2 L^2)\|y_{\varphi(k)} - w_{\varphi(k)}\|^2 \leq 2\alpha_{\varphi(k)}\|x_{\varphi(k)} - x_{\varphi(k)-1}\|\,\|w_{\varphi(k)} - x^*\| + \delta_{\varphi(k)}\left(\|g(x^*) - x^*\|^2 + 2\|z_{\varphi(k)} - x^*\|\,\|g(x^*) - x^*\|\right) \to 0,$$
as $k \to \infty$. Therefore, it follows from (36) that
$$\lim_{k\to\infty}\|y_{\varphi(k)} - w_{\varphi(k)}\| = 0. \quad (45)$$
With the help of (45), using the same arguments as in the proof of Case 1, we infer that
$$\limsup_{k\to\infty}\langle g(x^*) - x^*, x_{\varphi(k)+1} - x^*\rangle \leq 0. \quad (46)$$
According to (35) and (44), we have
$$\|x_{\varphi(k)} - x^*\|^2 \leq \frac{2\alpha_{\varphi(k)}}{\delta_{\varphi(k)}(1 - \kappa)}\|x_{\varphi(k)} - x_{\varphi(k)-1}\|\,\|w_{\varphi(k)} - x^*\| + \frac{2}{1 - \kappa}\langle g(x^*) - x^*, x_{\varphi(k)+1} - x^*\rangle. \quad (47)$$
Therefore, combining the condition $\lim_{k\to\infty}\frac{\alpha_k}{\delta_k}\|x_k - x_{k-1}\| = 0$ with (46) and (47), we have that $\limsup_{k\to\infty}\|x_{\varphi(k)} - x^*\|^2 \leq 0$. Hence, it follows from (44) that $x_k \to x^*$ as $k \to \infty$. This completes the proof.  □

3. Numerical Results

In this section, we perform some computational experiments in support of the convergence properties of our proposed methods and compare them with Algorithm EAI; see [20].
All programs are written in Matlab and run on a PC desktop with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz. Consider the quadratic programming problem of the form
$$\min \Lambda(x) = \frac{1}{2}x^T\Theta x + \Upsilon^T x \quad \text{s.t.} \quad x \in C := \{x \in \mathbb{R}^n : x_i \geq 0, \; i = 1, 2, \ldots, n\},$$
with the data (48)–(50) below in an $n$-dimensional Euclidean space. Since $\Theta$ is symmetric and positive definite on $\mathbb{R}^n$, the mapping $A = \nabla\Lambda = \Theta x + \Upsilon$ is pseudo-monotone and Lipschitz continuous with constant $L = \|\Theta\|$. Meanwhile, we choose the parameters $\lambda_k = \frac{1}{1.5\|\Theta\|}$, $\alpha_k = \frac{1}{k^2}$, $\gamma_k = \delta_k = \frac{1}{k}$ and $\beta_k = \frac{k-1}{2k}$ $(k > 1)$. One can check that all conditions in Theorems 1 and 2 are satisfied. We choose random initial points $x_0, x_1 \in \mathbb{R}^n$ in the following experiments. Let us consider the first example [26] with data (48) given by
$$\Theta = \begin{pmatrix} 1 & 2 & 2 & \cdots & 2 \\ 2 & 5 & 6 & \cdots & 6 \\ 2 & 6 & 9 & \cdots & 10 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 2 & 6 & 10 & \cdots & 4k-3 \end{pmatrix}, \quad \Upsilon = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}, \quad (48)$$
where the $k$-th diagonal entry of $\Theta$ equals $4k - 3$.
We apply Algorithm 1 to solve this problem in $H = \mathbb{R}^5$. We take the iteration number $k = 5000$ as the stopping criterion. As depicted in Figure 1, one sees that the unique optimal solution of this problem is $(1, 0, 0, 0, 0)^T$.
We use the sequence $\{E_k\}$ defined by $E_k = \|x_k - P_C(x_k - \lambda_k A x_k)\|$, $k = 1, 2, 3, \ldots$, to study the convergence of the different algorithms in $H = \mathbb{R}^5$. From the characterization of the metric projection, if $E_k \leq \varepsilon$, then $x_k$ can be considered an $\varepsilon$-solution of this problem. We take the iteration number $k = 100$ as the stopping criterion. To illustrate the computational performance of all the algorithms, the numerical results are shown in Figure 2. From the changing processes of the values of $\{E_k\}$, we find that Algorithm 2 behaves better than Algorithm 1 and EAI: it achieves a more stable and higher precision as the number of iterations grows. Moreover, the convergence of $\{E_k\}$ to $0$ implies that the iterative sequence $\{x_k\}$ converges to the solution of this test problem.
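For concreteness, the following self-contained sketch (ours; it assumes that the $5 \times 5$ instance of (48) extends the displayed pattern with $k$-th diagonal entry $4k-3$) runs Algorithm 1 on the first example with the parameter choices above and reports the residual $E_k$:

```python
import numpy as np

n = 5
Theta = np.array([[1, 2, 2, 2, 2],
                  [2, 5, 6, 6, 6],
                  [2, 6, 9, 10, 10],
                  [2, 6, 10, 13, 14],
                  [2, 6, 10, 14, 17]], dtype=float)  # assumed 5x5 instance of (48)
Upsilon = -np.ones(n)
A = lambda x: Theta @ x + Upsilon              # A = grad(Lambda) = Theta x + Upsilon
proj_C = lambda x: np.maximum(x, 0.0)          # projection onto {x >= 0}
lam = 1.0 / (1.5 * np.linalg.norm(Theta, 2))   # lambda_k = 1 / (1.5 * ||Theta||)

rng = np.random.default_rng(0)
x_prev, x = rng.standard_normal(n), rng.standard_normal(n)
for k in range(1, 101):
    alpha, gamma, beta = 1.0 / k**2, 1.0 / k, (k - 1) / (2.0 * k)
    w = x + alpha * (x - x_prev)                      # inertial term
    y = proj_C(w - lam * A(w))
    z = proj_C(y - lam * A(y))
    x_prev, x = x, (1 - beta - gamma) * x + beta * z  # Mann-type step
E = np.linalg.norm(x - proj_C(x - lam * A(x)))        # residual E_k
print(x, E)  # x is expected to approach (1, 0, 0, 0, 0) with E -> 0
```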
Now, we show the second example [27] with the data (49) expressed as
$$\Theta = \begin{pmatrix} 4 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 4 & 1 & \cdots & 0 & 0 \\ 0 & 1 & 4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 4 & 1 \\ 0 & 0 & 0 & \cdots & 1 & 4 \end{pmatrix}, \quad \Upsilon = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}. \quad (49)$$
This problem is solved for a $50 \times 50$ matrix $\Theta$ with a $50$-dimensional vector $\Upsilon$, and for a $30 \times 30$ matrix $\Theta$ with a $30$-dimensional vector $\Upsilon$, respectively. We use Algorithms 1 and 2 to solve this problem, taking the iteration numbers $k = 200$ and $k = 100$ as the stopping criteria. The test results are described in Figure 3 and Figure 4, which show the changing processes of $x_1$–$x_{50}$ and $x_1$–$x_{30}$ (y-axis) with respect to the number of iterations and the running time (x-axis). From this, we find that the iterative sequences generated by Algorithms 1 and 2 converge to a unique solution.
Next, we consider another example [27] with the data (50) written as
$$\Theta = \mathrm{diag}\left(\frac{1}{n}, \frac{2}{n}, \ldots, 1\right), \quad \Upsilon = (1, \ldots, 1)^T, \quad (50)$$
where $n$ denotes the dimension of the space.
We take the iteration number $k = 1000$ as the stopping criterion in $H = \mathbb{R}^{25}$. The results reported in Figure 5 show the changing processes of the values of $x_1$–$x_{25}$ (y-axis) with respect to the number of iterations and the CPU time (x-axis). Accordingly, one sees that Algorithm 1 exhibits convergent behavior.

4. Conclusions

In this paper, we proposed two inertial extragradient extensions for finding a solution of pseudo-monotone variational inequalities in the setting of Hilbert spaces, and we established strong convergence theorems for the proposed algorithms. Numerical experiments show that our algorithms enjoy a faster rate of convergence than the one given by Dong et al. [20]. It is worth mentioning that many significant real-life problems are naturally defined in Banach spaces. Therefore, it is of interest to extend our results to Banach spaces, which are more general than Hilbert spaces.

Author Contributions

Conceptualization, L.L., X.Q. and J.-C.Y.; methodology, L.L., X.Q. and J.-C.Y.; visualization, L.L.; supervision, X.Q. and J.-C.Y. All authors have read and agreed to the published version of this manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tan, B.; Xu, S.; Li, S. Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. 2020, 21, 871–884. [Google Scholar]
  2. Liu, L. A hybrid steepest descent method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488. [Google Scholar]
  3. Takahashi, S.; Takahashi, W. The split common null point problem and the shrinking projection method in Banach spaces. Optimization 2016, 65, 281–287. [Google Scholar] [CrossRef]
  4. Takahashi, W. The split feasibility problem in Banach spaces. J. Nonlinear Convex Anal. 2014, 15, 1349–1355. [Google Scholar]
  5. Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013. [Google Scholar] [CrossRef] [Green Version]
  6. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984. [Google Scholar]
  7. Cho, S.Y. Convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2017, 8, 19–31. [Google Scholar]
  8. Cho, S.Y.; Kang, S.M. Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24, 224–228. [Google Scholar] [CrossRef]
  9. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar]
  10. Alsulami, S.M.; Takahashi, W. The split common null point problem for maximal monotone mappings in Hilbert spaces and applications. J. Nonlinear Convex Anal. 2014, 15, 793–808. [Google Scholar]
  11. Cho, S.Y. Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 2016, 9, 1083–1092. [Google Scholar] [CrossRef] [Green Version]
  12. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  13. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar]
  14. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409. [Google Scholar] [CrossRef] [Green Version]
  15. Thong, D.V.; Vinh, N.T.; Cho, Y.J. Accelerated subgradient extragradient methods for variational inequality problems. J. Sci. Comput. 2019, 80, 1438–1462. [Google Scholar] [CrossRef]
  16. Thong, D.V.; Van Hieu, D. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98. [Google Scholar] [CrossRef]
  17. Tan, B.; Xu, S.; Li, S. Modified inertial hybrid and shrinking projection algorithms for solving fixed point problems. Mathematics 2020, 8, 236. [Google Scholar] [CrossRef] [Green Version]
  18. Shehu, Y.; Iyiola, O.S. Strong convergence result for monotone variational inequalities. Numer. Algorithms 2017, 76, 259–282. [Google Scholar] [CrossRef]
  19. Luo, Y.; Shang, M.; Tan, B. A general inertial viscosity type method for nonexpansive mappings and its applications in signal processing. Mathematics 2020, 8, 288. [Google Scholar] [CrossRef] [Green Version]
  20. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
  21. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  22. Marino, G.; Scardamaglia, B.; Karapinar, E. Strong convergence theorem for strict pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2016, 2016. [Google Scholar] [CrossRef] [Green Version]
  23. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef] [Green Version]
  24. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  25. Maingé, P.E. The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 2010, 59, 74–79. [Google Scholar]
  26. Fathi, Y. Computational complexity of LCPs associated with positive definite symmetric matrices. Math. Program. 1979, 17, 335–344. [Google Scholar] [CrossRef]
  27. Geiger, C.; Kanzow, C. On the resolution of monotone complementarity problems. Comput. Optim. Appl. 1996, 5, 155–173. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Behaviors of $x_1$–$x_5$ with the number of iterations ($k = 5000$). Numerical results for Algorithm 1.
Figure 2. Behaviors of the error sequence $\{E_k\}$ with the number of iterations ($k = 100$). Numerical results for Algorithms 1, 2 and EAI.
Figure 3. Behaviors of $x_1$–$x_{50}$ with the number of iterations ($k = 200$). Numerical results for Algorithm 1.
Figure 4. Behaviors of $x_1$–$x_{30}$ with the number of iterations ($k = 100$). Numerical results for Algorithm 2.
Figure 5. Behaviors of $x_1$–$x_{25}$ with the number of iterations ($k = 1000$). Numerical results for Algorithm 1.
