Article

Modified Mann Subgradient-like Extragradient Rules for Variational Inequalities and Common Fixed Points Involving Asymptotically Nonexpansive Mappings

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
3 Research Center for Interneural Computing, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4 Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(5), 779; https://doi.org/10.3390/math10050779
Submission received: 9 February 2022 / Revised: 21 February 2022 / Accepted: 22 February 2022 / Published: 28 February 2022
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract: In a real Hilbert space, we investigate two modified Mann subgradient-like methods for finding a solution of a pseudo-monotone variational inequality that is also a common fixed point of a finite family of nonexpansive mappings and of an asymptotically nonexpansive mapping. We obtain strong convergence results for the sequences constructed by the proposed rules, and we give some examples to illustrate our analysis.

1. Introduction

Let ⟨·,·⟩ and ∥·∥ denote the inner product and the induced norm of a real Hilbert space H, respectively. We denote by P_C the nearest-point projection from H onto C, where C ⊂ H is nonempty, closed, and convex. Given a nonlinear mapping T : C → H, we denote by Fix(T) the fixed point set of T, i.e., Fix(T) = { x ∈ C : x = T x }. Let ℝ, →, and ⇀ denote the set of all real numbers, strong convergence, and weak convergence, respectively. A self-mapping T : C → C is called asymptotically nonexpansive if there exists a sequence { ψ_n } ⊂ [0, +∞) with lim_{n→∞} ψ_n = 0 such that
∥ T^n x − T^n y ∥ ≤ ∥ x − y ∥ + ψ_n ∥ x − y ∥   ∀ n ≥ 1, ∀ x, y ∈ C,
and T is nonexpansive when ψ_n = 0 for all n.
Given a continuous mapping A : H → H, the variational inequality problem (VIP) is:
find z* ∈ C such that ⟨ A z*, z − z* ⟩ ≥ 0 for all z ∈ C.
We denote the solution set of the VIP by VI( C, A ). In 1976, Korpelevich [1] put forth the extragradient method, which has been one of the most effective approaches for solving the VIP:
y_n = P_C( x_n − ζ A x_n ),  x_{n+1} = P_C( x_n − ζ A y_n )   ∀ n ≥ 0, (2)
for ζ ∈ (0, 1/L), with L being the Lipschitz constant of A. Weak convergence results for (2) have been obtained in studies [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22] and the references therein.
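For concreteness, the following minimal Python sketch runs the extragradient iteration (2) for an illustrative affine monotone operator on ℝ² with a box constraint set, so that P_C is a coordinate-wise clamp; the operator, the box bounds, and the step size are our own illustrative choices and are not taken from the cited works.

```python
import numpy as np

# Illustrative setup (not from the paper): C = [-1, 1]^2, so P_C is a clamp,
# and A is an affine monotone operator with Lipschitz constant L = ||M||_2.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
A = lambda x: M @ x
L = np.linalg.norm(M, 2)
P_C = lambda x: np.clip(x, -1.0, 1.0)

zeta = 0.9 / L                    # step size in (0, 1/L)
x = np.array([1.0, -0.5])         # initial point
for _ in range(200):
    y = P_C(x - zeta * A(x))      # prediction step of (2)
    x = P_C(x - zeta * A(y))      # correction step of (2)
print(x)                          # approaches the unique solution of VI(C, A), here the origin
```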
The extragradient method (2) requires solving a minimization problem over C at each iteration when P_C has no closed-form expression, which can make it computationally expensive. In study [6], Censor et al. modified (2) and introduced the subgradient extragradient method:
y_n = P_C( x_n − ζ A x_n ),  D_n = { v ∈ H : ⟨ x_n − ζ A x_n − y_n, v − y_n ⟩ ≤ 0 },  x_{n+1} = P_{D_n}( x_n − ζ A y_n ), (3)
for ζ ∈ (0, 1/L), with L being the Lipschitz constant of A. Thong and Hieu [19] added an inertial extrapolation step to (3): for x_0, x_1 ∈ H,
v_n = x_n + α_n ( x_n − x_{n−1} ),  y_n = P_C( v_n − ζ A v_n ),  D_n = { v ∈ H : ⟨ v_n − ζ A v_n − y_n, v − y_n ⟩ ≤ 0 },  x_{n+1} = P_{D_n}( v_n − ζ A y_n ),
for ζ ∈ (0, 1/L), with L being the Lipschitz constant of A; weak convergence was obtained. In study [22], Reich et al. suggested a modified projection-type method for solving the VIP with a pseudo-monotone and uniformly continuous mapping A. Given a sequence { α_n } ⊂ (0, 1) and a contraction f : C → C with constant ϱ ∈ [0, 1), for any initial x_1 ∈ C the sequence { x_n } is constructed as in Algorithm 1 below.
Furthermore, it was proven in study [22] that the sequence { x_n } generated by Algorithm 1 converges strongly. Subsequently, Ceng, Yao, and Shehu [21] proposed a Mann-type variant of (2) to solve pseudo-monotone variational inequalities together with the common fixed point problem of finitely many nonexpansive self-mappings { T_i }_{i=1}^N on C and an asymptotically nonexpansive self-mapping T_0 := T on C. Given a contraction f : C → C with constant ϱ ∈ [0, 1), let { σ_n } ⊂ [0, 1] and { α_n }, { β_n }, { γ_n } ⊂ (0, 1) with α_n + β_n + γ_n = 1 for all n ≥ 1, and set T_n := T_{n mod N}. For any initial x_1 ∈ C, the sequence { x_n } is constructed as in Algorithm 2 below.
Algorithm 1 (see study [22]). Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ).
Iterative Steps: Given the current iterate x_n, calculate x_{n+1} as follows:
Step 1. Compute y_n = P_C( x_n − λ A x_n ) and r_λ( x_n ) := x_n − y_n. If r_λ( x_n ) = 0, then stop: x_n is a solution of VI( C, A ). Otherwise, go to Step 2;
Step 2. Compute w_n = x_n − ζ_n r_λ( x_n ), where ζ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying ⟨ A x_n − A( x_n − l^j r_λ( x_n ) ), r_λ( x_n ) ⟩ ≤ (μ/2) ∥ r_λ( x_n ) ∥²;
Step 3. Compute x_{n+1} = α_n f( x_n ) + ( 1 − α_n ) P_{C_n}( x_n ), where C_n := { x ∈ C : h_n( x ) ≤ 0 } and h_n( x ) = ⟨ A w_n, x − x_n ⟩ + (ζ_n / (2λ)) ∥ r_λ( x_n ) ∥².
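To see the backtracking in Step 2 concretely, here is a minimal NumPy sketch of the Armijo-type search, assuming the operator A and the iterates x_n and y_n = P_C( x_n − λ A x_n ) are available as arrays; the max_iter safeguard is our own addition (under the standing assumptions the search is proved to terminate), and all names are illustrative.

```python
import numpy as np

def armijo_step(A, x_n, y_n, mu=0.5, l=0.5, max_iter=60):
    """Step 2 of Algorithm 1: find the smallest j with
    <A x_n - A(x_n - l**j * r), r> <= (mu/2) * ||r||^2, where r = r_lambda(x_n) = x_n - y_n,
    and return zeta_n = l**j_n together with w_n = x_n - zeta_n * r."""
    r = x_n - y_n
    Ax = A(x_n)
    rhs = 0.5 * mu * np.dot(r, r)
    for j in range(max_iter):
        zeta = l ** j
        if np.dot(Ax - A(x_n - zeta * r), r) <= rhs:
            return zeta, x_n - zeta * r
    # Safeguard fallback (not part of the paper's rule): return the smallest tried step.
    zeta = l ** (max_iter - 1)
    return zeta, x_n - zeta * r
```

The same routine applies verbatim to Step 2 of Algorithm 2 (and of Algorithms 3 and 4 below) with w_n in place of x_n.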
Algorithm 2 (see study [21]). Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ).
Iterative Steps: Given x_n, compute:
Step 1. Set w_n = ( 1 − σ_n ) x_n + σ_n T^n x_n, and compute y_n = P_C( w_n − λ A w_n ) and r_λ( w_n ) := w_n − y_n;
Step 2. Compute t_n = w_n − ζ_n r_λ( w_n ), where ζ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying ⟨ A w_n − A( w_n − l^j r_λ( w_n ) ), w_n − y_n ⟩ ≤ (μ/2) ∥ r_λ( w_n ) ∥²;
Step 3. Compute z_n = P_{C_n}( w_n ) and x_{n+1} = α_n f( x_n ) + β_n x_n + γ_n T_n z_n,
where C_n := { x ∈ C : h_n( x ) ≤ 0 } and h_n( x ) = ⟨ A t_n, x − w_n ⟩ + (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥².
Again set n := n + 1 and go to Step 1.
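Step 3 of Algorithms 1 and 2 (and of the algorithms proposed below) projects onto a set of the form { x ∈ C : h_n( x ) ≤ 0 } with h_n affine. As an illustrative aside (not part of the cited algorithms), when C = H this set is a half-space whose projection has a well-known closed form; the sketch below records it, with the correspondence to h_n noted in the comments.

```python
import numpy as np

def project_halfspace(w, a, b):
    """Euclidean projection of w onto the half-space {x : <a, x> <= b} (assuming a != 0)."""
    viol = np.dot(a, w) - b
    if viol <= 0.0:
        return w                            # w already satisfies the constraint
    return w - (viol / np.dot(a, a)) * a

# For C_n = {x in C : h_n(x) <= 0} with
# h_n(x) = <A t_n, x - w_n> + (zeta_n / (2*lam)) * ||r_lambda(w_n)||^2,
# the half-space data are a = A t_n and b = <A t_n, w_n> - (zeta_n / (2*lam)) * ||r_lambda(w_n)||^2.
# When C = H this gives P_{C_n} exactly; for a general C one must still intersect with C
# (e.g., via alternating/Dykstra-type projections), which the cited papers handle through P_{C_n}.
```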
Under suitable conditions, it was proven in study [21] that the sequence { x_n } converges strongly to x* ∈ Ω = ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ) if and only if lim_{n→∞}( ∥ x_n − x_{n+1} ∥ + ∥ x_n − y_n ∥ ) = 0, provided T^n z_n − T^{n+1} z_n → 0, where x* = P_Ω( I − ρ F + f ) x*.
In a real Hilbert space H, let the VIP and the CFPP denote, respectively, the pseudo-monotone variational inequality problem with a uniformly continuous mapping A and the common fixed point problem of a finite family of nonexpansive mappings { T_i }_{i=1}^N and an asymptotically nonexpansive mapping T_0 := T. Inspired by the above research, we propose and analyze two modified Mann subgradient-like extragradient algorithms with a line-search process for solving the VIP and the CFPP. The proposed algorithms combine the Mann iteration method, the subgradient extragradient method (3) with a line-search process, and the viscosity approximation method. Under some conditions, we establish strong convergence results for the sequences constructed by the proposed rules. Finally, our main results are applied to the VIP and the CFPP in an illustrative example.
The structure of the article is as follows. In Section 2, we first recall some concepts and basic results. Section 3 presents the strong convergence analysis of the proposed methods. Finally, in Section 4, an illustrative example is given. Our results complement related results by Ceng, Yao, and Shehu [21]; Reich et al. [22]; and Ceng and Shang [9]. Indeed, it is worth emphasizing that our problem of finding an element x* ∈ Ω = ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ) is more general than the corresponding problem of finding an element x* ∈ VI( C, A ) in study [22]. Moreover, our strong convergence theorems improve on the corresponding ones in studies [9,21], because the conclusion x_n → x* ∈ Ω ⇔ ∥ x_n − x_{n+1} ∥ + ∥ x_n − y_n ∥ → 0 ( n → ∞ ) of the strong convergence theorems in [9,21] is replaced by the unconditional conclusion x_n → x* ∈ Ω. Consequently, the strong convergence criteria for the sequence { x_n } in this paper are more convenient to apply than those of studies [9,21].

2. Preliminaries

We say that T : C → H is:
(a) L-Lipschitz continuous (or L-Lipschitzian) if there exists L > 0 such that ∥ T u − T v ∥ ≤ L ∥ u − v ∥ for all u, v ∈ C;
(b) monotone if ⟨ T u − T v, u − v ⟩ ≥ 0 for all u, v ∈ C;
(c) pseudo-monotone if ⟨ T u, v − u ⟩ ≥ 0 ⇒ ⟨ T v, v − u ⟩ ≥ 0 for all u, v ∈ C;
(d) ϖ-strongly monotone if there exists ϖ > 0 such that ⟨ T u − T v, u − v ⟩ ≥ ϖ ∥ u − v ∥² for all u, v ∈ C;
(e) sequentially weakly continuous if, for every { u_n } ⊂ C, u_n ⇀ u implies T u_n ⇀ T u.
One can see that (b) implies (c), but the converse fails. Given v ∈ H, there exists a unique nearest point in C, denoted by P_C v (P_C is called the metric projection of H onto C), such that ∥ v − P_C v ∥ ≤ ∥ v − z ∥ for all z ∈ C. According to reference [17], we know that the following hold:
(a) ⟨ u − v, P_C u − P_C v ⟩ ≥ ∥ P_C u − P_C v ∥² for all u, v ∈ H;
(b) ⟨ u − P_C u, v − P_C u ⟩ ≤ 0 for all u ∈ H, v ∈ C;
(c) ∥ u − v ∥² ≥ ∥ u − P_C u ∥² + ∥ v − P_C u ∥² for all u ∈ H, v ∈ C;
(d) ∥ u − v ∥² = ∥ u ∥² − ∥ v ∥² − 2 ⟨ u − v, v ⟩ for all u, v ∈ H;
(e) ∥ λ u + μ v ∥² = λ ∥ u ∥² + μ ∥ v ∥² − λ μ ∥ u − v ∥² for all u, v ∈ H and all λ, μ ∈ ℝ with λ + μ = 1.
Lemma 1
(see reference [4]). Let H_1 and H_2 be two real Hilbert spaces. Suppose that A : H_1 → H_2 is uniformly continuous on bounded subsets of H_1 and M is a bounded subset of H_1. Then A( M ) is bounded.
The following inequality follows easily from the subdifferential inequality for (1/2)∥·∥²:
∥ u + v ∥² ≤ ∥ u ∥² + 2 ⟨ v, u + v ⟩ for all u, v ∈ H.
Lemma 2
(see reference [23]). Let h be a real-valued function on H and define K := { x ∈ C : h( x ) ≤ 0 }. If K is nonempty and h is Lipschitz continuous on C with modulus ψ > 0, then dist( x, K ) ≥ ψ^{−1} max{ h( x ), 0 } for all x ∈ C, where dist( x, K ) denotes the distance from x to K.
Lemma 3
(see reference [6], Lemma 2.1). Assume that A : C → H is pseudo-monotone and continuous. Then u* ∈ C is a solution to the VIP ⟨ A u*, u − u* ⟩ ≥ 0 for all u ∈ C if and only if ⟨ A u, u − u* ⟩ ≥ 0 for all u ∈ C.
Lemma 4
(see reference [24]). Let { b_n } be a sequence of nonnegative numbers satisfying b_{n+1} ≤ ( 1 − ς_n ) b_n + ς_n γ_n for all n ≥ 1, where { ς_n } and { γ_n } are sequences of real numbers such that (i) { ς_n } ⊂ [0, 1] and ∑_{n=1}^∞ ς_n = ∞, and (ii) lim sup_{n→∞} γ_n ≤ 0 or ∑_{n=1}^∞ | ς_n γ_n | < ∞. Then lim_{n→∞} b_n = 0.
Lemma 5
(see reference [25]). Let X be a Banach space admitting a weakly continuous duality mapping, let C be a nonempty closed convex subset of X, and let T : C → C be an asymptotically nonexpansive mapping with Fix( T ) ≠ ∅. Then I − T is demiclosed at zero; that is, if { x_n } is a sequence in C such that x_n ⇀ x ∈ C and ( I − T ) x_n → 0, then ( I − T ) x = 0, where I is the identity mapping of X.
Lemma 6
(see reference [26]). Let { Γ_n } be a sequence of real numbers such that there exists a subsequence { Γ_{n_k} } of { Γ_n } satisfying Γ_{n_k} < Γ_{n_k + 1} for each integer k ≥ 1. Define
η( n ) = max { k ≤ n : Γ_k < Γ_{k+1} },
where the integer n_0 ≥ 1 is such that { k ≤ n_0 : Γ_k < Γ_{k+1} } ≠ ∅. Then (i) η( n_0 ) ≤ η( n_0 + 1 ) ≤ ⋯ and η( n ) → ∞; (ii) Γ_{η(n)} ≤ Γ_{η(n)+1} and Γ_n ≤ Γ_{η(n)+1} for all n ≥ n_0.

3. Our Contributions

Assume that:
T : C → C is an asymptotically nonexpansive mapping and T_i : C → C is a nonexpansive mapping for i = 1, ..., N, with the sequence { T_n }_{n=1}^∞ defined by T_n := T_{n mod N} as in Algorithm 2.
A : H → H is pseudo-monotone and uniformly continuous on C, such that ∥ A z ∥ ≤ lim inf_{n→∞} ∥ A x_n ∥ for each { x_n } ⊂ C with x_n ⇀ z.
f : C → C is a contraction with constant ϱ ∈ [0, 1), and Ω = ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ) ≠ ∅ with T_0 := T.
{ α_n }, { β_n }, { γ_n } ⊂ (0, 1) and { σ_n } ⊂ [0, 1] are such that:
(i) α_n + β_n + γ_n = 1 for all n ≥ 1 and ∑_{n=1}^∞ α_n = ∞;
(ii) lim_{n→∞} α_n = 0 and lim_{n→∞} ψ_n / α_n = 0;
(iii) 0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1;
(iv) 0 < lim inf_{n→∞} σ_n ≤ lim sup_{n→∞} σ_n < 1.
Lemma 7.
The Armijo-type search rule (5) is well defined, and ∥ r_λ( w_n ) ∥² ≤ λ ⟨ A w_n, r_λ( w_n ) ⟩.
Proof. 
From l ∈ (0, 1) and the uniform continuity of A on C, one has lim_{j→∞} ⟨ A w_n − A( w_n − l^j r_λ( w_n ) ), r_λ( w_n ) ⟩ = 0. If r_λ( w_n ) = 0, then j_n = 0. If r_λ( w_n ) ≠ 0, then there exists an integer j_n ≥ 0 satisfying (5). By the firm nonexpansivity of P_C, one obtains ∥ x − P_C y ∥² ≤ ⟨ x − y, x − P_C y ⟩ for all x ∈ C, y ∈ H. Putting y = w_n − λ A w_n and x = w_n, one gets ∥ w_n − P_C( w_n − λ A w_n ) ∥² ≤ λ ⟨ A w_n, w_n − P_C( w_n − λ A w_n ) ⟩, and hence ∥ r_λ( w_n ) ∥² ≤ λ ⟨ A w_n, r_λ( w_n ) ⟩.    □
Lemma 8.
Let p ∈ Ω and let the function h_n be defined by (6). Then h_n( w_n ) = (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥² and h_n( p ) ≤ 0. In addition, if r_λ( w_n ) ≠ 0, then h_n( w_n ) > 0.
Let { u n } be the sequence constructed in Algorithm 3.
Algorithm 3 Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ). Pick u_1 ∈ C.
Iterative Steps: Given u_n, compute:
Step 1. Set w_n = ( 1 − σ_n ) u_n + σ_n T^n u_n, and compute y_n = P_C( w_n − λ A w_n ) and r_λ( w_n ) := w_n − y_n;
Step 2. Compute t_n = w_n − ζ_n r_λ( w_n ), where ζ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying
⟨ A w_n − A( w_n − l^j r_λ( w_n ) ), w_n − y_n ⟩ ≤ (μ/2) ∥ r_λ( w_n ) ∥²; (5)
Step 3. Compute z_n = P_{D_n}( w_n ) and u_{n+1} = α_n f( u_n ) + β_n u_n + γ_n T_n z_n,
where D_n := { x ∈ C : h_n( x ) ≤ 0 } and
h_n( x ) = ⟨ A t_n, x − w_n ⟩ + (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥². (6)
Again set n := n + 1 and go to Step 1.
Proof. 
The first claim of Lemma 8 is evident. Let us show the second claim. In fact, for p ∈ Ω, Lemma 3 gives ⟨ A t_n, t_n − p ⟩ ≥ 0. So, one obtains
h_n( p ) = ⟨ A t_n, p − w_n ⟩ + (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥² ≤ − ζ_n ⟨ A t_n, r_λ( w_n ) ⟩ + (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥². (7)
Furthermore, from (5) one has ⟨ A w_n − A t_n, r_λ( w_n ) ⟩ ≤ (μ/2) ∥ r_λ( w_n ) ∥². Thus, by Lemma 7 we get
⟨ A t_n, r_λ( w_n ) ⟩ ≥ ⟨ A w_n, r_λ( w_n ) ⟩ − (μ/2) ∥ r_λ( w_n ) ∥² ≥ ( 1/λ − μ/2 ) ∥ r_λ( w_n ) ∥². (8)
Combining (7) and (8) yields
h_n( p ) ≤ − (ζ_n / 2)( 1/λ − μ ) ∥ r_λ( w_n ) ∥² ≤ 0.
   □
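To summarize the flow of Steps 1–3 of Algorithm 3 in code, the following sketch performs one iteration under the purely illustrative simplification C = H, so that P_C is the identity and P_{D_n} reduces to the half-space projection recalled after Algorithm 2; the callables T_pow( n, x ) and T_cyc( n, x ) stand for T^n x and T_n x, matching the usage in the example of Section 4. This is a sketch of the structure under our stated assumptions, not the authors' implementation.

```python
import numpy as np

def algorithm3_step(u, n, A, T_pow, T_cyc, f, lam, mu, l, alpha, beta, gamma, sigma):
    """One pass through Steps 1-3 of Algorithm 3 under the illustrative assumption C = H = R^m,
    so that P_C is the identity and P_{D_n} is a plain half-space projection."""
    # Step 1: Mann step and projected-gradient step
    w = (1 - sigma) * u + sigma * T_pow(n, u)       # w_n
    y = w - lam * A(w)                              # y_n = P_C(w_n - lam * A w_n), P_C = I here
    r = w - y                                       # r_lambda(w_n)

    # Step 2: Armijo-type backtracking, zeta_n = l**j_n, cf. (5)
    j, Aw, rhs = 0, A(w), 0.5 * mu * np.dot(r, r)
    while j < 60 and np.dot(Aw - A(w - l**j * r), r) > rhs:
        j += 1                                      # the cap 60 is a numerical safeguard only
    zeta = l**j
    t = w - zeta * r                                # t_n

    # Step 3: z_n = P_{D_n}(w_n), where h_n(x) = <A t_n, x - w_n> + (zeta_n / (2*lam)) * ||r||^2,
    # so the constraint violation at w_n is exactly h_n(w_n).
    a = A(t)
    hw = zeta / (2.0 * lam) * np.dot(r, r)          # h_n(w_n) >= 0
    if hw > 0.0 and np.dot(a, a) > 0.0:
        z = w - (hw / np.dot(a, a)) * a             # half-space projection of w_n
    else:
        # r = 0 means w_n is already feasible; a = 0 with hw > 0 cannot occur when
        # Omega is nonempty (Lemma 8 gives h_n(p) <= 0 for p in Omega).
        z = w
    return alpha * f(u) + beta * u + gamma * T_cyc(n, z)   # u_{n+1}
```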
Lemma 9.
Let { u_n } be the sequence constructed by Algorithm 3, and suppose that u_n − y_n → 0, u_n − u_{n+1} → 0, u_n − T_n u_n → 0, and u_n − T^n u_n → 0. Suppose also that T^n u_n − T^{n+1} u_n → 0 and that { u_{n_k} } ⊂ { u_n } satisfies u_{n_k} ⇀ z. Then z ∈ Ω.
Proof. 
Using Algorithm 3, one obtains w_n − u_n = σ_n( T^n u_n − u_n ) for all n ≥ 1, and hence ∥ w_n − u_n ∥ ≤ ∥ T^n u_n − u_n ∥. Using the hypothesis u_n − T^n u_n → 0, we have
lim_{n→∞} ∥ w_n − u_n ∥ = 0,
which, together with the hypothesis u_n − y_n → 0, implies that
∥ w_n − y_n ∥ ≤ ∥ w_n − u_n ∥ + ∥ u_n − y_n ∥ → 0 ( n → ∞ ).
Besides this, combining w_n − u_n → 0 and u_{n_k} ⇀ z yields w_{n_k} ⇀ z.    □
Let us show that lim_{n→∞} ∥ u_n − T_i u_n ∥ = 0 for i = 1, ..., N. In fact, note that for i = 1, ..., N,
∥ u_n − T_{n+i} u_n ∥ ≤ ∥ u_n − u_{n+i} ∥ + ∥ u_{n+i} − T_{n+i} u_{n+i} ∥ + ∥ T_{n+i} u_{n+i} − T_{n+i} u_n ∥ ≤ 2 ∥ u_n − u_{n+i} ∥ + ∥ u_{n+i} − T_{n+i} u_{n+i} ∥.
By u_n − u_{n+1} → 0 and u_n − T_n u_n → 0, we get lim_{n→∞} ∥ u_n − T_{n+i} u_n ∥ = 0 for i = 1, ..., N. This immediately yields
lim_{n→∞} ∥ u_n − T_i u_n ∥ = 0 for i = 1, ..., N. (11)
Moreover, we claim that ∥ u_n − T u_n ∥ → 0 as n → ∞. In fact, combining the hypotheses u_n − T^n u_n → 0 and T^n u_n − T^{n+1} u_n → 0 guarantees that
∥ u_n − T u_n ∥ ≤ ∥ u_n − T^n u_n ∥ + ∥ T^n u_n − T^{n+1} u_n ∥ + ∥ T^{n+1} u_n − T u_n ∥ ≤ ∥ u_n − T^n u_n ∥ + ∥ T^n u_n − T^{n+1} u_n ∥ + ( 1 + ψ_1 ) ∥ T^n u_n − u_n ∥ = ( 2 + ψ_1 ) ∥ u_n − T^n u_n ∥ + ∥ T^n u_n − T^{n+1} u_n ∥ → 0 ( n → ∞ ). (12)
Next, let us show that z ∈ VI( C, A ). In fact, since C is convex and closed, from { u_n } ⊂ C and u_{n_k} ⇀ z we get z ∈ C. In what follows, we consider two cases. If A z = 0, then it is clear that z ∈ VI( C, A ), because ⟨ A z, x − z ⟩ ≥ 0 for all x ∈ C. Assume that A z ≠ 0. Then, it follows from w_n − y_n → 0 and w_{n_k} ⇀ z that y_{n_k} ⇀ z. Using the assumption on A (in place of the sequential weak continuity of A), we get 0 < ∥ A z ∥ ≤ lim inf_{k→∞} ∥ A y_{n_k} ∥. So we may suppose that A y_{n_k} ≠ 0 for all k ≥ 1. Furthermore, from y_n = P_C( w_n − λ A w_n ), we have ⟨ w_n − λ A w_n − y_n, u − y_n ⟩ ≤ 0 for all u ∈ C. Thus,
(1/λ) ⟨ w_n − y_n, u − y_n ⟩ + ⟨ A w_n, y_n − w_n ⟩ ≤ ⟨ A w_n, u − w_n ⟩ for all u ∈ C. (13)
According to the uniform continuity of A on C, the sequence { A w_{n_k} } is bounded (due to Lemma 1). Note that { y_{n_k} } is bounded as well. Then, by (13), we have lim inf_{k→∞} ⟨ A w_{n_k}, u − w_{n_k} ⟩ ≥ 0 for all u ∈ C. To show that z ∈ VI( C, A ), we pick { ε_k } ⊂ (0, 1) such that ε_k → 0 as k → ∞. For each k ≥ 1, let m_k be the smallest natural number such that
⟨ A w_{n_j}, u − w_{n_j} ⟩ + ε_k ≥ 0 for all j ≥ m_k. (14)
Since { ε_k } is nonincreasing, it can be readily seen that { m_k } is increasing. Noticing that A w_{m_k} ≠ 0 for all k ≥ 1 (due to { A w_{m_k} } ⊂ { A w_{n_k} }), we set μ_{m_k} = A w_{m_k} / ∥ A w_{m_k} ∥², so that ⟨ A w_{m_k}, μ_{m_k} ⟩ = 1. By (14), it follows that ⟨ A w_{m_k}, u + ε_k μ_{m_k} − w_{m_k} ⟩ ≥ 0. The pseudo-monotonicity of A then gives ⟨ A( u + ε_k μ_{m_k} ), u + ε_k μ_{m_k} − w_{m_k} ⟩ ≥ 0, i.e.,
⟨ A u, u − w_{m_k} ⟩ ≥ ⟨ A u − A( u + ε_k μ_{m_k} ), u + ε_k μ_{m_k} − w_{m_k} ⟩ − ε_k ⟨ A u, μ_{m_k} ⟩. (15)
Observe that w_{n_k} ⇀ z, { w_{m_k} } ⊂ { w_{n_k} }, and ε_k → 0 as k → ∞. So it follows that 0 ≤ lim sup_{k→∞} ∥ ε_k μ_{m_k} ∥ = lim sup_{k→∞} ( ε_k / ∥ A w_{m_k} ∥ ) ≤ lim sup_{k→∞} ε_k / lim inf_{k→∞} ∥ A w_{n_k} ∥ = 0. Hence, we get ε_k μ_{m_k} → 0 as k → ∞.
Next, we show that z ∈ Ω. Passing to the limit as k → ∞ in (15), we have ⟨ A u, u − z ⟩ = lim inf_{k→∞} ⟨ A u, u − w_{m_k} ⟩ ≥ 0 for all u ∈ C. Using Lemma 3, z ∈ VI( C, A ). Furthermore, for i = 1, ..., N, since Lemma 5 guarantees the demiclosedness of I − T_i at zero, from u_{n_k} ⇀ z and u_{n_k} − T_i u_{n_k} → 0 (due to (11)) we deduce that z ∈ Fix( T_i ). Thus, z ∈ ⋂_{i=1}^N Fix( T_i ). Similarly, from u_{n_k} ⇀ z and u_{n_k} − T u_{n_k} → 0 (due to (12)), we obtain z ∈ Fix( T ). Therefore, z ∈ Ω.
Lemma 10.
Assume that { w_n } in Algorithm 3 satisfies ζ_n ∥ r_λ( w_n ) ∥² → 0 as n → ∞. Then w_n − y_n → 0.
Proof. 
Suppose, to the contrary, that lim sup_{n→∞} ∥ w_n − y_n ∥ = a > 0. Then there exists { n_k } ⊂ { n } such that
lim_{k→∞} ∥ w_{n_k} − y_{n_k} ∥ = a > 0. (16)
Note that lim_{k→∞} ζ_{n_k} ∥ r_λ( w_{n_k} ) ∥² = 0. In what follows, we consider two cases.    □
Case 1. lim inf_{k→∞} ζ_{n_k} > 0. In this case, we may assume that there exists ζ > 0 such that ζ_{n_k} ≥ ζ > 0 for all k ≥ 1. Then it follows that ∥ w_{n_k} − y_{n_k} ∥² = (1/ζ_{n_k}) ζ_{n_k} ∥ w_{n_k} − y_{n_k} ∥² ≤ (1/ζ) · ζ_{n_k} ∥ r_λ( w_{n_k} ) ∥², which hence yields
0 < a² = lim_{k→∞} ∥ w_{n_k} − y_{n_k} ∥² ≤ lim_{k→∞} (1/ζ) · ζ_{n_k} ∥ r_λ( w_{n_k} ) ∥² = 0.
This reaches a contradiction.
Case 2. lim inf_{k→∞} ζ_{n_k} = 0. In this case, there exists a subsequence of { ζ_{n_k} }, still denoted by { ζ_{n_k} }, such that lim_{k→∞} ζ_{n_k} = 0. Putting υ_{n_k} = (1/l) ζ_{n_k} y_{n_k} + ( 1 − (1/l) ζ_{n_k} ) w_{n_k}, we get υ_{n_k} = w_{n_k} − (1/l) ζ_{n_k} ( w_{n_k} − y_{n_k} ). Since lim_{n→∞} ζ_n ∥ r_λ( w_n ) ∥² = 0, we have
lim_{k→∞} ∥ υ_{n_k} − w_{n_k} ∥² = lim_{k→∞} (1/l²) ζ_{n_k} · ζ_{n_k} ∥ w_{n_k} − y_{n_k} ∥² = 0. (18)
From the step size rule (5) and the definition of υ_{n_k}, it follows that
⟨ A w_{n_k} − A υ_{n_k}, w_{n_k} − y_{n_k} ⟩ > (μ/2) ∥ w_{n_k} − y_{n_k} ∥². (19)
Using the uniform continuity of A on C, from (18) we deduce that lim_{k→∞} ∥ A w_{n_k} − A υ_{n_k} ∥ = 0, which together with (19) leads to lim_{k→∞} ∥ w_{n_k} − y_{n_k} ∥ = 0. This contradicts (16). Consequently, lim_{n→∞} ∥ w_n − y_n ∥ = 0.
Theorem 1.
Suppose that the sequence { u_n } is constructed by Algorithm 3. Then u_n → u* ∈ Ω provided T^n u_n − T^{n+1} u_n → 0, where u* ∈ Ω is the unique solution to the VIP: ⟨ ( I − f ) u*, p − u* ⟩ ≥ 0 for all p ∈ Ω.
Proof. 
First of all, since 0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1 and lim_{n→∞} ψ_n / α_n = 0, we may assume, without loss of generality, that { γ_n } ⊂ [a, b] ⊂ (0, 1) and ψ_n ≤ α_n( 1 − ϱ )/2 for all n ≥ 1. Clearly, P_Ω f : C → C is a contraction. Hence, there exists a unique u* ∈ C such that u* = P_Ω f( u* ). Therefore, u* ∈ Ω = ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ) is the unique solution of the VIP
⟨ ( I − f ) u*, p − u* ⟩ ≥ 0 for all p ∈ Ω. (20)
Next, we show the conclusion of the theorem. With this aim, we consider the following steps.
Step 1. We claim that the following inequality holds:
∥ z_n − p ∥² ≤ ∥ w_n − p ∥² − dist²( w_n, D_n ) for all p ∈ Ω. (21)
Indeed, one has
∥ z_n − p ∥² = ∥ P_{D_n} w_n − p ∥² ≤ ∥ w_n − p ∥² − ∥ P_{D_n} w_n − w_n ∥² = ∥ w_n − p ∥² − dist²( w_n, D_n ),
which immediately yields
∥ z_n − p ∥ ≤ ∥ w_n − p ∥ for all n ≥ 1. (22)
Thus,
∥ w_n − p ∥ ≤ ( 1 − σ_n ) ∥ u_n − p ∥ + σ_n ∥ T^n u_n − p ∥ ≤ ( 1 − σ_n ) ∥ u_n − p ∥ + σ_n ( 1 + ψ_n ) ∥ u_n − p ∥ ≤ ( 1 + ψ_n ) ∥ u_n − p ∥,
which together with (22) yields
∥ z_n − p ∥ ≤ ∥ w_n − p ∥ ≤ ( 1 + ψ_n ) ∥ u_n − p ∥ for all n ≥ 1. (23)
Thus, from (23) and α_n + β_n + γ_n = 1 for all n ≥ 1, it follows that
u n + 1 p α n f ( u n ) p + β n u n p + γ n T n z n p α n ( f ( u n ) f ( p ) + f ( p ) p ) + β n u n p + γ n ( 1 + ψ n ) u n p α n ( ϱ u n p + f ( p ) p ) + β n u n p + γ n u n p + α n ( 1 ϱ ) 2 u n p = [ 1 α n ( 1 ϱ ) 2 ] u n p + α n f ( p ) p max { u n p , 2 f ( p ) p 1 ϱ } .
Thus, ∥ u_n − p ∥ ≤ max{ ∥ u_1 − p ∥, 2 ∥ f( p ) − p ∥ / ( 1 − ϱ ) } for all n ≥ 1. Hence { u_n } is bounded, and so are { w_n }, { y_n }, { z_n }, { f( u_n ) }, { A t_n }, { T^n u_n }, and { T_n z_n }.
Step 2. Let us obtain
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T^n u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ u_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + ψ_n K + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩
for some K > 0 . To prove this, we first note that
u n + 1 p 2 = α n ( f ( u n ) p ) + β n ( u n p ) + γ n ( T n z n p ) 2 β n ( u n p ) + γ n ( T n z n p ) 2 + 2 α n f ( u n ) p , u n + 1 p β n u n p 2 + γ n z n p 2 β n γ n u n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p .
On the other hand, by Algorithm 3 one has
z n p 2 = P D n w n p 2 w n p 2 z n w n 2 = ( 1 σ n ) u n p 2 + σ n T n u n p 2 ( 1 σ n ) σ n u n T n u n 2 z n w n 2 ( 1 + ψ n ) 2 u n p 2 ( 1 σ n ) σ n u n T n u n 2 z n w n 2 .
Substituting (25) into (24), one gets
u n + 1 p 2 β n u n p 2 + γ n ( 1 + ψ n ) 2 u n p 2 ( 1 σ n ) σ n u n T n u n 2 z n w n 2 β n γ n u n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p ( 1 α n ) u n p 2 γ n ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 + ψ n ( 2 + ψ n ) u n p 2 β n γ n u n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p u n p 2 γ n ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 β n γ n u n T n z n 2 + ψ n K + 2 α n f ( u n ) p , u n + 1 p ,
where sup_{n≥1} ( 2 + ψ_n ) ∥ u_n − p ∥² ≤ K for some K > 0. This immediately implies that
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T^n u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ u_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + ψ_n K + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩.
Step 3. We show that
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + α_n ∥ f( u_n ) − p ∥² + ψ_n K.
Indeed, we claim that for some L > 0,
∥ z_n − p ∥² ≤ ∥ w_n − p ∥² − [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]². (26)
Thanks to the boundedness of { A t_n }, there exists L > 0 such that ∥ A t_n ∥ ≤ L for all n ≥ 1, which gives
| h_n( u ) − h_n( v ) | = | ⟨ A t_n, u − v ⟩ | ≤ ∥ A t_n ∥ ∥ u − v ∥ ≤ L ∥ u − v ∥ for all u, v ∈ D_n.
This hence ensures that h_n( · ) is L-Lipschitz continuous on D_n. By Lemmas 2 and 8, one obtains
dist( w_n, D_n ) ≥ (1/L) h_n( w_n ) = (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥². (27)
Combining (21) and (27) immediately yields
∥ z_n − p ∥² ≤ ∥ w_n − p ∥² − [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]².
From Algorithm 3, (23), and (26) it follows that
u n + 1 p 2 α n f ( u n ) p 2 + β n u n p 2 + γ n T n z n p 2 α n f ( u n ) p 2 + β n u n p 2 + γ n { ( 1 + ψ n ) 2 u n p 2 ζ n 2 λ L r λ ( w n ) 2 2 } α n f ( u n ) p 2 + ( 1 α n ) u n p 2 + ψ n ( 2 + ψ n ) u n p 2 γ n ζ n 2 λ L r λ ( w n ) 2 2 α n f ( u n ) p 2 + ψ n K + u n p 2 γ n ζ n 2 λ L r λ ( w n ) 2 2 .
This immediately yields
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + α_n ∥ f( u_n ) − p ∥² + ψ_n K.
Step 4. We show that
∥ u_{n+1} − p ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − p ∥² + α_n( 1 − ϱ ) [ 2 ⟨ f( p ) − p, u_{n+1} − p ⟩ / ( 1 − ϱ ) + (ψ_n / α_n) · K / ( 1 − ϱ ) ]. (28)
Indeed, from Algorithm 3 and (23), one has
u n + 1 p 2 = α n ( f ( u n ) f ( p ) ) + β n ( u n p ) + γ n ( T n z n p ) + α n ( f ( p ) p ) 2 α n ( f ( u n ) f ( p ) ) + β n ( u n p ) + γ n ( T n z n p ) 2 + 2 α n f ( p ) p , u n + 1 p α n f ( u n ) f ( p ) 2 + β n u n p 2 + γ n z n p 2 + 2 α n f ( p ) p , u n + 1 p α n f ( u n ) f ( p ) 2 + β n u n p 2 + γ n ( 1 + ψ n ) 2 u n p 2 + 2 α n f ( p ) p , u n + 1 p ϱ α n u n p 2 + β n u n p 2 + γ n u n p 2 + ψ n ( 2 + ψ n ) u n p 2 + 2 α n f ( p ) p , u n + 1 p ϱ α n u n p 2 + β n u n p 2 + γ n u n p 2 + ψ n K + 2 α n f ( p ) p , u n + 1 p = [ 1 α n ( 1 ϱ ) ] u n p 2 + ψ n K + 2 α n f ( p ) p , u n + 1 p = ( 1 α n ( 1 ϱ ) ) u n p 2 + α n ( 1 ϱ ) 2 f ( p ) p , u n + 1 p 1 ϱ + ψ n α n · K 1 ϱ .
Step 5. We show that { u_n } converges strongly to u* ∈ Ω, the unique solution of (20).
Indeed, putting p = u*, we deduce from (28) that
∥ u_{n+1} − u* ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − u* ∥² + α_n( 1 − ϱ ) [ 2 ⟨ f( u* ) − u*, u_{n+1} − u* ⟩ / ( 1 − ϱ ) + (ψ_n / α_n) · K / ( 1 − ϱ ) ]. (29)
Setting Γ_n = ∥ u_n − u* ∥², we show that Γ_n → 0 ( n → ∞ ).    □
Case 1. Assume that there exists n_0 ≥ 1 such that { Γ_n } is nonincreasing. Then the limit lim_{n→∞} Γ_n exists and lim_{n→∞}( Γ_n − Γ_{n+1} ) = 0. Putting p = u*, from Step 2 and { γ_n } ⊂ [a, b] ⊂ (0, 1), we obtain
a [ ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 ] + ( 1 α n b ) a u n T n z n 2 γ n ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 + β n γ n u n T n z n 2 u n u * 2 u n + 1 u * 2 + ψ n K + 2 α n f ( u n ) u * , u n + 1 u * Γ n Γ n + 1 + ψ n K + 2 α n f ( u n ) u * u n + 1 u * .
Since 0 < lim inf_{n→∞} σ_n ≤ lim sup_{n→∞} σ_n < 1, ψ_n → 0, α_n → 0, and Γ_n − Γ_{n+1} → 0, from the boundedness of { u_n } one has
lim_{n→∞} ∥ u_n − T^n u_n ∥ = lim_{n→∞} ∥ u_n − T_n z_n ∥ = lim_{n→∞} ∥ w_n − z_n ∥ = 0. (30)
So, it follows from Algorithm 3 and (30) that
∥ w_n − u_n ∥ = σ_n ∥ T^n u_n − u_n ∥ ≤ ∥ T^n u_n − u_n ∥ → 0 ( n → ∞ ),
and
∥ u_{n+1} − u_n ∥ ≤ α_n ∥ f( u_n ) − u_n ∥ + γ_n ∥ T_n z_n − u_n ∥ ≤ α_n ∥ f( u_n ) − u_n ∥ + ∥ T_n z_n − u_n ∥ → 0 ( n → ∞ ).
Putting p = u*, from Step 3 we obtain
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − u* ∥² − ∥ u_{n+1} − u* ∥² + α_n ∥ f( u_n ) − u* ∥² + ψ_n K = Γ_n − Γ_{n+1} + ψ_n K + α_n ∥ f( u_n ) − u* ∥².
Since 0 < lim inf_{n→∞} γ_n, ψ_n → 0, α_n → 0, and Γ_n − Γ_{n+1} → 0, from the boundedness of { u_n } one gets
lim_{n→∞} [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² = 0.
Hence, by Lemma 10 we deduce that
lim_{n→∞} ∥ w_n − y_n ∥ = 0,
which immediately yields
∥ u_n − y_n ∥ ≤ ∥ u_n − w_n ∥ + ∥ w_n − y_n ∥ → 0 ( n → ∞ ).
From the boundedness of { u_n }, it follows that there exists a subsequence { u_{n_k} } of { u_n } such that
lim sup_{n→∞} ⟨ f( u* ) − u*, u_n − u* ⟩ = lim_{k→∞} ⟨ f( u* ) − u*, u_{n_k} − u* ⟩. (33)
Since H is reflexive and { u_n } is bounded, we may assume, without loss of generality, that u_{n_k} ⇀ x̃. Thus, from (33) one gets
lim sup_{n→∞} ⟨ f( u* ) − u*, u_n − u* ⟩ = lim_{k→∞} ⟨ f( u* ) − u*, u_{n_k} − u* ⟩ = ⟨ f( u* ) − u*, x̃ − u* ⟩. (34)
Furthermore, by Algorithm 3 we get u n + 1 z n = α n ( f ( u n ) z n ) + β n ( u n z n ) + γ n ( T n z n z n ) , which immediately yields
γ n T n z n z n u n + 1 z n + α n ( f ( u n ) + z n ) + β n u n z n u n + 1 u n + 2 ( u n w n + w n z n ) + α n ( f ( u n ) + z n ) .
Since u n u n + 1 0 , w n u n 0 , w n z n 0 , α n 0 , lim inf n γ n > 0 and { u n } , { z n } are bounded, we obtain lim n z n T n z n = 0 , which together with the nonexpansivity of each T n , arrives at
u n T n u n u n z n + z n T n z n + T n z n T n u n 2 u n z n + z n T n z n 2 ( u n w n + w n z n ) + z n T n z n 0 ( n ) .
Since u_n − y_n → 0, u_n − u_{n+1} → 0, u_n − T_n u_n → 0, u_n − T^n u_n → 0, and u_{n_k} ⇀ x̃, by Lemma 9 we infer that x̃ ∈ Ω. Hence, from (20) and (34), one gets
lim sup_{n→∞} ⟨ f( u* ) − u*, u_n − u* ⟩ = ⟨ f( u* ) − u*, x̃ − u* ⟩ ≤ 0,
which immediately leads to
lim sup_{n→∞} ⟨ f( u* ) − u*, u_{n+1} − u* ⟩ = lim sup_{n→∞} [ ⟨ f( u* ) − u*, u_{n+1} − u_n ⟩ + ⟨ f( u* ) − u*, u_n − u* ⟩ ] ≤ lim sup_{n→∞} [ ∥ f( u* ) − u* ∥ ∥ u_{n+1} − u_n ∥ + ⟨ f( u* ) − u*, u_n − u* ⟩ ] ≤ 0.
Note that { α_n( 1 − ϱ ) } ⊂ [0, 1], ∑_{n=1}^∞ α_n( 1 − ϱ ) = ∞, and
lim sup_{n→∞} [ 2 ⟨ f( u* ) − u*, u_{n+1} − u* ⟩ / ( 1 − ϱ ) + (ψ_n / α_n) · K / ( 1 − ϱ ) ] ≤ 0.
Consequently, applying Lemma 4 to (29), one has lim_{n→∞} ∥ u_n − u* ∥² = 0.
Case 2. Suppose that there exists { Γ_{n_k} } ⊂ { Γ_n } such that Γ_{n_k} < Γ_{n_k + 1} for all k ∈ ℕ, where ℕ is the set of all positive integers. Define the mapping η : ℕ → ℕ by
η( n ) := max { k ≤ n : Γ_k < Γ_{k+1} }.
By Lemma 6, we get
Γ_{η(n)} ≤ Γ_{η(n)+1} and Γ_n ≤ Γ_{η(n)+1}.
Putting p = u * , from Step 2 we have
a ( 1 σ η ( n ) ) σ η ( n ) u η ( n ) T η ( n ) u η ( n ) 2 + z η ( n ) w η ( n ) 2 + ( 1 α η ( n ) b ) a u η ( n ) T η ( n ) z η ( n ) 2 γ η ( n ) ( 1 σ η ( n ) ) σ η ( n ) u η ( n ) T η ( n ) u η ( n ) 2 + z η ( n ) w η ( n ) 2 + β η ( n ) γ η ( n ) u η ( n ) T η ( n ) z η ( n ) 2 Γ η ( n ) Γ η ( n ) + 1 + ψ η ( n ) K + 2 α η ( n ) f ( u η ( n ) ) u * , x η ( n ) + 1 u * ψ η ( n ) K + 2 α η ( n ) f ( u η ( n ) ) u * x η ( n ) + 1 u * ,
which immediately yields
lim n u η ( n ) T η ( n ) u η ( n ) = lim n u η ( n ) T η ( n ) z η ( n ) = lim n w η ( n ) z η ( n ) = 0 .
Putting p = u * , from Step 3 we get
γ η ( n ) [ ζ η ( n ) 2 λ L r λ ( w η ( n ) ) 2 ] 2 Γ η ( n ) Γ η ( n ) + 1 + α η ( n ) f ( u η ( n ) ) u * 2 + ψ η ( n ) K ψ η ( n ) K + α η ( n ) f ( u η ( n ) ) u * 2 ,
which hence leads to
lim n ζ η ( n ) 2 λ L r λ ( w η ( n ) ) 2 2 = 0 .
Utilizing the same inferences as in the proof of Case 1, we deduce that
lim n w η ( n ) y η ( n ) = lim n w η ( n ) u η ( n ) = lim n u η ( n ) + 1 u η ( n ) = 0 ,
and
lim sup n f ( u * ) u * , u η ( n ) + 1 u * 0 .
On the other hand, from (29) we obtain
α η ( n ) ( 1 ϱ ) Γ η ( n ) Γ η ( n ) Γ η ( n ) + 1 + α η ( n ) ( 1 ϱ ) 2 f ( u * ) u * , u η ( n ) + 1 u * 1 ϱ + ψ η ( n ) α η ( n ) · K 1 ϱ α η ( n ) ( 1 ϱ ) 2 f ( u * ) u * , u η ( n ) + 1 u * 1 ϱ + ψ η ( n ) α η ( n ) · K 1 ϱ .
which hence arrives at
lim sup n Γ η ( n ) lim sup n 2 f ( u * ) u * , u η ( n ) + 1 u * 1 ϱ + ψ η ( n ) α η ( n ) · K 1 ϱ 0 .
Thus, lim n u η ( n ) u * 2 = 0 . Furthermore, note that
u η ( n ) + 1 u * 2 u η ( n ) u * 2 = 2 u η ( n ) + 1 u η ( n ) , u η ( n ) u * + u η ( n ) + 1 u η ( n ) 2 2 u η ( n ) + 1 u η ( n ) u η ( n ) u * + u η ( n ) + 1 u η ( n ) 2 0 ( n ) .
Thanks to Γ n Γ η ( n ) + 1 , we get
u n u * 2 u η ( n ) + 1 u * 2 u η ( n ) u * 2 + 2 u η ( n ) + 1 u η ( n ) u η ( n ) u * + u η ( n ) + 1 u η ( n ) 2 0 ( n ) .
That is, u_n → u* as n → ∞.
Theorem 2.
Suppose that T : C → C is nonexpansive and { u_n } is constructed by: u_1 ∈ C,
w_n = ( 1 − σ_n ) u_n + σ_n T u_n,  y_n = P_C( w_n − λ A w_n ),  t_n = ( 1 − ζ_n ) w_n + ζ_n y_n,  z_n = P_{D_n}( w_n ),  u_{n+1} = α_n f( u_n ) + β_n u_n + γ_n T_n z_n,
where, for each n ≥ 1, D_n and ζ_n are chosen as in Algorithm 3. Then u_n → u* ∈ Ω, where u* ∈ Ω is the unique solution to the VIP: ⟨ ( I − f ) u*, p − u* ⟩ ≥ 0 for all p ∈ Ω.
Proof. 
Step 1. { u n } is bounded. Indeed, using the same arguments as in Step 1 of the proof of Theorem 1, we obtain the desired assertion.
Step 2.
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ u_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩.
Indeed, using the same arguments as in Step 2 of the proof of Theorem 1, we have the result.
Step 3.
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + α_n ∥ f( u_n ) − p ∥².
The same arguments in Step 3 of the proof of Theorem 1 give the conclusion.
Step 4.
∥ u_{n+1} − p ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − p ∥² + α_n( 1 − ϱ ) · 2 ⟨ f( p ) − p, u_{n+1} − p ⟩ / ( 1 − ϱ ).
The results follow from the same arguments as in Step 4 of the proof of Theorem 1.
Step 5. { u_n } converges strongly to u* ∈ Ω, which satisfies (20), with T_0 = T a nonexpansive mapping. Letting p = u*, we deduce from Step 4 that
∥ u_{n+1} − u* ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − u* ∥² + α_n( 1 − ϱ ) · 2 ⟨ f( u* ) − u*, u_{n+1} − u* ⟩ / ( 1 − ϱ ). (37)
Setting Γ_n = ∥ u_n − u* ∥², we show that Γ_n → 0 ( n → ∞ ) by considering the two cases below.    □
Case 1. If there exists an integer n_0 ≥ 1 such that { Γ_n } is nonincreasing, then the limit lim_{n→∞} Γ_n exists and lim_{n→∞}( Γ_n − Γ_{n+1} ) = 0. Putting p = u*, from Step 2 and { γ_n } ⊂ [a, b] ⊂ (0, 1), we obtain
a [ ( 1 σ n ) σ n u n T u n 2 + z n w n 2 ] + ( 1 α n b ) a u n T n z n 2 γ n ( 1 σ n ) σ n u n T u n 2 + z n w n 2 + β n γ n u n T n z n 2 Γ n Γ n + 1 + 2 α n f ( u n ) u * , u n + 1 u * Γ n Γ n + 1 + 2 α n f ( u n ) u * u n + 1 u * ,
which hence yields
lim_{n→∞} ∥ u_n − T u_n ∥ = lim_{n→∞} ∥ u_n − T_n z_n ∥ = lim_{n→∞} ∥ w_n − z_n ∥ = 0.
Putting p = u*, from Step 3 we obtain
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ Γ_n − Γ_{n+1} + α_n ∥ f( u_n ) − u* ∥²,
which immediately leads to
lim_{n→∞} [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² = 0.
Arguing as in Case 1 of the proof of Theorem 1, we deduce that
lim_{n→∞} ∥ w_n − y_n ∥ = lim_{n→∞} ∥ w_n − u_n ∥ = lim_{n→∞} ∥ u_{n+1} − u_n ∥ = 0,
and
lim sup_{n→∞} ⟨ f( u* ) − u*, u_{n+1} − u* ⟩ ≤ 0.
Consequently, applying Lemma 4 to (37), one has lim n u n u * 2 = 0 .
Case 2. Suppose that there exists { Γ_{n_k} } ⊂ { Γ_n } such that Γ_{n_k} < Γ_{n_k + 1} for all k ∈ ℕ, where ℕ is the set of all positive integers. Define the mapping η : ℕ → ℕ by
η( n ) := max { k ≤ n : Γ_k < Γ_{k+1} }.
By Lemma 6, we get
Γ_{η(n)} ≤ Γ_{η(n)+1} and Γ_n ≤ Γ_{η(n)+1}.
The conclusion follows using same arguments as in Case 2 of the proof of Theorem 1.
Next, we introduce another viscosity extragradient-like iterative method, Algorithm 4 below.
We point out that Lemmas 7–10 still hold for Algorithm 4.
Algorithm 4 Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ). Let u_1 ∈ C be arbitrary.
Iterative Steps: Given u_n, calculate:
Step 1. Set w_n = ( 1 − σ_n ) u_n + σ_n T^n u_n, and compute y_n = P_C( w_n − λ A w_n ) and r_λ( w_n ) := w_n − y_n.
Step 2. Compute t_n = w_n − ζ_n r_λ( w_n ), where ζ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying
⟨ A w_n − A( w_n − l^j r_λ( w_n ) ), w_n − y_n ⟩ ≤ (μ/2) ∥ r_λ( w_n ) ∥².
Step 3. Compute z_n = P_{D_n}( w_n ) and u_{n+1} = α_n f( u_n ) + β_n w_n + γ_n T_n z_n,
where D_n := { x ∈ C : h_n( x ) ≤ 0 } and
h_n( x ) = ⟨ A t_n, x − w_n ⟩ + (ζ_n / (2λ)) ∥ r_λ( w_n ) ∥².
Theorem 3.
Suppose that { u_n } is constructed by Algorithm 4. Then u_n → u* ∈ Ω provided T^n u_n − T^{n+1} u_n → 0, where u* ∈ Ω is the unique solution to the VIP: ⟨ ( I − f ) u*, p − u* ⟩ ≥ 0 for all p ∈ Ω.
Proof. 
By 0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1 and lim_{n→∞} ψ_n / α_n = 0, we may assume, without loss of generality, that { γ_n } ⊂ [a, b] ⊂ (0, 1) and ψ_n ≤ α_n( 1 − ϱ )/2 for all n ≥ 1. By the same arguments as in the proof of Theorem 1, the VIP (20) has a unique solution u* ∈ Ω = ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ).
Next, we show the conclusion of the theorem. With this aim, we divide the rest of the proof into several steps.
Step 1. { u_n } is bounded. Using the same arguments as in Step 1 of the proof of Theorem 1, we have inequalities (21)–(23). Thus, from (23) and α_n + β_n + γ_n = 1 for all n ≥ 1, it follows that
u n + 1 p α n ( f ( u n ) f ( p ) + f ( p ) p ) + β n w n p + γ n z n p α n ( ϱ u n p + f ( p ) p ) + β n w n p + γ n w n p α n ( ϱ u n p + f ( p ) p ) + ( β n + γ n ) ( 1 + ψ n ) u n p α n ( ϱ u n p + f ( p ) p ) + ( β n + γ n ) u n p + α n ( 1 ϱ ) 2 u n p = [ 1 α n ( 1 ϱ ) 2 ] u n p + α n ( 1 ϱ ) 2 · 2 f ( p ) p 1 ϱ max { u n p , 2 f ( p ) p 1 ϱ } .
By induction, we obtain ∥ u_n − p ∥ ≤ max{ ∥ u_1 − p ∥, 2 ∥ f( p ) − p ∥ / ( 1 − ϱ ) } for all n ≥ 1. Thus, { u_n } is bounded, and so are the sequences { w_n }, { y_n }, { z_n }, { f( u_n ) }, { A t_n }, { T^n u_n }, and { T_n z_n }.
Step 2. We show that
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T^n u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ w_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + ψ_n K + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩
for some K > 0 . To prove this, we first note that
u n + 1 p 2 = α n ( f ( u n ) p ) + β n ( w n p ) + γ n ( T n z n p ) 2 β n ( w n p ) + γ n ( T n z n p ) 2 + 2 α n f ( u n ) p , u n + 1 p β n w n p 2 + γ n z n p 2 β n γ n w n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p .
On the other hand, using the same inferences as in (25), one has
∥ z_n − p ∥² ≤ ( 1 + ψ_n )² ∥ u_n − p ∥² − ( 1 − σ_n ) σ_n ∥ u_n − T^n u_n ∥² − ∥ z_n − w_n ∥². (44)
Substituting (44) into (43), one gets
u n + 1 p 2 β n ( 1 + ψ n ) 2 u n p 2 + γ n [ ( 1 + ψ n ) 2 u n p 2 ( 1 σ n ) σ n u n T n u n 2 z n w n 2 ] β n γ n w n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p ( 1 α n ) u n p 2 γ n [ ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 ] + ψ n ( 2 + ψ n ) u n p 2 β n γ n w n T n z n 2 + 2 α n f ( u n ) p , u n + 1 p u n p 2 γ n ( 1 σ n ) σ n u n T n u n 2 + z n w n 2 β n γ n w n T n z n 2 + ψ n K + 2 α n f ( u n ) p , u n + 1 p ,
where sup_{n≥1} ( 2 + ψ_n ) ∥ u_n − p ∥² ≤ K for some K > 0. This immediately implies that
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T^n u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ w_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + ψ_n K + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩.
Step 3. We show that
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + α_n ∥ f( u_n ) − p ∥² + ψ_n K.
Indeed, using the same argument as that for (26), we obtain that for some L > 0,
∥ z_n − p ∥² ≤ ∥ w_n − p ∥² − [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]². (45)
From Algorithm 4, (23), and (45) it follows that
u n + 1 p 2 α n f ( u n ) p 2 + β n w n p 2 + γ n z n p 2 α n f ( u n ) p 2 + β n w n p 2 + γ n [ w n p 2 [ ζ n 2 λ L r λ ( w n ) 2 ] 2 ] α n f ( u n ) p 2 + ( 1 + ψ n ) 2 u n p 2 γ n [ ζ n 2 λ L r λ ( w n ) 2 ] 2 α n f ( u n ) p 2 + u n p 2 + ψ n K γ n ζ n 2 λ L r λ ( w n ) 2 2 ,
which hence yields the desired assertion.
Step 4. We show that
∥ u_{n+1} − p ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − p ∥² + α_n( 1 − ϱ ) [ 2 ⟨ f( p ) − p, u_{n+1} − p ⟩ / ( 1 − ϱ ) + (ψ_n / α_n) · K / ( 1 − ϱ ) ].
Indeed, from Algorithm 4 and (23), one has
u n + 1 p 2 α n ( f ( u n ) f ( p ) ) + β n ( w n p ) + γ n ( T n z n p ) 2 + 2 α n f ( p ) p , u n + 1 p ϱ α n u n p 2 + β n w n p 2 + γ n z n p 2 + 2 α n f ( p ) p , u n + 1 p ϱ α n u n p 2 + ( 1 α n ) w n p 2 + 2 α n f ( p ) p , u n + 1 p ϱ α n u n p 2 + ( 1 α n ) u n p 2 + ψ n ( 2 + ψ n ) u n p 2 + 2 α n f ( p ) p , u n + 1 p [ 1 α n ( 1 ϱ ) ] u n p 2 + ψ n K + 2 α n f ( p ) p , u n + 1 p ,
which hence leads to the desired assertion.
Step 5. { u n } converges strongly to the unique solution u * Ω , which satisfies (20). This follows the argument in Step 5 of the proof of Theorem 1. □
Theorem 4.
Suppose that T : C → C is nonexpansive and { u_n } is constructed by: u_1 ∈ C,
w_n = ( 1 − σ_n ) u_n + σ_n T u_n,  y_n = P_C( w_n − λ A w_n ),  t_n = ( 1 − ζ_n ) w_n + ζ_n y_n,  z_n = P_{D_n}( w_n ),  u_{n+1} = α_n f( u_n ) + β_n w_n + γ_n T_n z_n,
where, for each n ≥ 1, D_n and ζ_n are chosen as in Algorithm 4. Then u_n → u* ∈ Ω, where u* ∈ Ω is the unique solution to the VIP: ⟨ ( I − f ) u*, p − u* ⟩ ≥ 0 for all p ∈ Ω.
Proof. 
Step 1. By Step 1 of the proof of Theorem 2, we see that { u n } is bounded.
Step 2. By the same arguments as in Step 2 of the proof of Theorem 2, we have
γ_n [ ( 1 − σ_n ) σ_n ∥ u_n − T u_n ∥² + ∥ z_n − w_n ∥² ] + β_n γ_n ∥ u_n − T_n z_n ∥² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + 2 α_n ⟨ f( u_n ) − p, u_{n+1} − p ⟩.
Step 3. Step 3 of the proof of Theorem 2 gives
γ_n [ (ζ_n / ( 2 λ L )) ∥ r_λ( w_n ) ∥² ]² ≤ ∥ u_n − p ∥² − ∥ u_{n+1} − p ∥² + α_n ∥ f( u_n ) − p ∥².
Step 4. Step 4 of the proof of Theorem 2 gives
∥ u_{n+1} − p ∥² ≤ ( 1 − α_n( 1 − ϱ ) ) ∥ u_n − p ∥² + α_n( 1 − ϱ ) · 2 ⟨ f( p ) − p, u_{n+1} − p ⟩ / ( 1 − ϱ ).
Step 5. By arguments as in Step 5 of the proof of Theorem 2, we have that { u n } converges strongly to the unique solution u * Ω , satisfying (20). □
Remark 1.
Compared with the corresponding results in Ceng et al. [21], Reich et al. [22], and Ceng and Shang [9], our results improve and extend them in the following aspects.
(i) Although the problem of finding an element of ⋂_{i=0}^N Fix( T_i ) ∩ VI( C, A ) considered in this paper was also studied in reference [21], our strong convergence theorems are more advantageous and more subtle than the corresponding ones in reference [21], because the conclusion u_n → u* ∈ Ω ⇔ ∥ u_n − u_{n+1} ∥ + ∥ u_n − y_n ∥ → 0 ( n → ∞ ) of the strong convergence theorems in [21] is replaced by the unconditional conclusion u_n → u* ∈ Ω. Consequently, the strong convergence criteria for the sequence { u_n } in this paper are more convenient to apply than those of reference [21]. In addition, to remove the extra requirement lim_{n→∞}( ∥ u_n − u_{n+1} ∥ + ∥ u_n − y_n ∥ ) = 0 of reference [21], we make use of Maingé's technique (i.e., Lemma 6) to derive the conclusion u_n → u* ∈ Ω.
(ii) Our results reduce to those of reference [22] when T_i = I for i = 0, 1, ..., N, where I is the identity mapping.
(iii) The operator A of reference [9] is extended from a Lipschitz continuous and sequentially weakly continuous mapping to a uniformly continuous mapping satisfying ∥ A z ∥ ≤ lim inf_{n→∞} ∥ A u_n ∥ for each { u_n } ⊂ C with u_n ⇀ z ∈ C. Furthermore, the hybrid inertial subgradient extragradient method with a line-search process of reference [9] is extended in this paper. For example, the original inertial step w_n = T^n u_n + α_n( T^n u_n − T^n u_{n−1} ) is replaced by our Mann iteration step w_n = ( 1 − σ_n ) u_n + σ_n T^n u_n, and the original iterative step u_{n+1} = β_n f( u_n ) + γ_n u_n + ( ( 1 − γ_n ) I − β_n ρ F ) T_n z_n is replaced by our simpler step u_{n+1} = α_n f( u_n ) + β_n u_n + γ_n T_n z_n. It is worth mentioning that the definition of z_n in the former formulation of u_{n+1} is quite different from that of z_n in the latter.
(iv) We intend to apply the SP-iteration studied in reference [27] to the problem of finding an element of i = 0 N Fix ( T i ) VI ( C , A ) considered in this paper in our next project. As part of our future project, we will apply our results to the appearance of fractals using ideas given in reference [28].

4. Applications

In what follows, we give an illustrative example. Put μ = l = λ = 1/2, σ_n = 1/3, α_n = 1/( 3( n + 1 ) ), β_n = n/( 3( n + 1 ) ), and γ_n = 2/3.
We first provide an example of a Lipschitz continuous and pseudo-monotone mapping A, an asymptotically nonexpansive mapping T, and a nonexpansive mapping T_1 with Ω = Fix( T_1 ) ∩ Fix( T ) ∩ VI( C, A ) ≠ ∅. Let C = [ −3, 3 ] and H = ℝ with the inner product ⟨ a, b ⟩ = a b and induced norm ∥ · ∥ = | · |. The initial point u_1 is randomly chosen in C. Take f( u ) = (1/3) u for all u ∈ C, so that ϱ = 1/3. Let A : H → H and T, T_1 : C → C be defined as A u := 1/( 1 + | sin u | ) − 1/( 1 + | u | ), T u := (2/5) sin u, and T_1 u := sin u for all u ∈ C. We now claim that A is pseudo-monotone and Lipschitz continuous. Indeed, for all u, v ∈ H we have
∥ A u − A v ∥ ≤ | ( | v | − | u | ) / ( ( 1 + | u | )( 1 + | v | ) ) | + | ( | sin v | − | sin u | ) / ( ( 1 + | sin u | )( 1 + | sin v | ) ) | ≤ ∥ u − v ∥ + ∥ sin u − sin v ∥ ≤ 2 ∥ u − v ∥.
This implies that A is Lipschitz continuous. Next, we show that A is pseudo-monotone. For all u, v ∈ H, it is easy to see that
⟨ A u, v − u ⟩ = ( 1/( 1 + | sin u | ) − 1/( 1 + | u | ) )( v − u ) ≥ 0 ⇒ ⟨ A v, v − u ⟩ = ( 1/( 1 + | sin v | ) − 1/( 1 + | v | ) )( v − u ) ≥ 0.
Besides, it is easy to verify that T is asymptotically nonexpansive with ψ_n = (2/5)^n for n ≥ 1, and that ∥ T^{n+1} z_n − T^n z_n ∥ → 0 as n → ∞. Indeed, we observe that
∥ T^n u − T^n v ∥ ≤ (2/5) ∥ T^{n−1} u − T^{n−1} v ∥ ≤ ⋯ ≤ (2/5)^n ∥ u − v ∥ ≤ ( 1 + ψ_n ) ∥ u − v ∥,
and
∥ T^{n+1} u_n − T^n u_n ∥ ≤ (2/5)^{n−1} ∥ T² u_n − T u_n ∥ = (2/5)^{n−1} ∥ (2/5) sin( T u_n ) − (2/5) sin u_n ∥ ≤ 2 (2/5)^n → 0.
It is clear that Fix( T ) = { 0 } and
lim_{n→∞} ψ_n / α_n = lim_{n→∞} (2/5)^n / ( 1/( 3( n + 1 ) ) ) = lim_{n→∞} 3( n + 1 )(2/5)^n = 0.
In addition, it is clear that T_1 is nonexpansive and Fix( T_1 ) = { 0 }. Therefore, Ω = Fix( T_1 ) ∩ Fix( T ) ∩ VI( C, A ) = { 0 }. In this case, Algorithm 3 can be rewritten as follows:
w_n = (2/3) u_n + (1/3) T^n u_n,  y_n = P_C( w_n − (1/2) A w_n ),  t_n = ( 1 − ζ_n ) w_n + ζ_n y_n,  z_n = P_{D_n}( w_n ),  u_{n+1} = ( 1/( 3( n + 1 ) ) ) · (1/3) u_n + ( n/( 3( n + 1 ) ) ) u_n + (2/3) T_1 z_n   ∀ n ≥ 1,
where, for each n ≥ 1, D_n and ζ_n are chosen as in Algorithm 3. Then, by Theorem 1, we know that { u_n } converges to 0 ∈ Ω = Fix( T_1 ) ∩ Fix( T ) ∩ VI( C, A ).
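As a sanity check on this example, the following self-contained Python sketch runs the iteration displayed above, with P_C and P_{D_n} computed as interval projections on the real line. It is our illustrative reimplementation under the stated parameters, not the authors' code, and the stopping rule mirrors the criterion ∥ u_{n+1} − u_n ∥ < 5 × 10⁻⁵ used in the comparison below.

```python
import math

# Data of the example: C = [-3, 3], A(u) = 1/(1+|sin u|) - 1/(1+|u|), T(u) = (2/5) sin u,
# T1(u) = sin u, f(u) = u/3, and mu = l = lam = 1/2 (illustrative reimplementation).
A   = lambda u: 1.0 / (1.0 + abs(math.sin(u))) - 1.0 / (1.0 + abs(u))
T   = lambda u: 0.4 * math.sin(u)
T1  = lambda u: math.sin(u)
P_C = lambda u: max(-3.0, min(3.0, u))
lam, mu, l = 0.5, 0.5, 0.5

def T_pow(n, u):                         # T^n u
    for _ in range(n):
        u = T(u)
    return u

u = 2.0                                  # Case I initial point
for n in range(1, 201):
    w = (2.0 / 3.0) * u + (1.0 / 3.0) * T_pow(n, u)
    y = P_C(w - lam * A(w))
    r = w - y
    j = 0                                # Armijo-type backtracking for zeta_n = l**j_n
    while (A(w) - A(w - l**j * r)) * r > 0.5 * mu * r * r:
        j += 1
    zeta = l**j
    t = w - zeta * r
    # D_n = {x in [-3, 3] : A(t)*(x - w) + zeta/(2*lam)*r**2 <= 0} is an interval in R,
    # so P_{D_n} is a clamp of w onto that interval.
    c = zeta / (2.0 * lam) * r * r
    if A(t) > 0.0:
        z = max(-3.0, min(w, min(3.0, w - c / A(t))))
    elif A(t) < 0.0:
        z = min(3.0, max(w, max(-3.0, w - c / A(t))))
    else:
        z = P_C(w)
    u_next = (1.0 / (3.0 * (n + 1))) * (u / 3.0) + (n / (3.0 * (n + 1))) * u + (2.0 / 3.0) * T1(z)
    if abs(u_next - u) < 5e-5:           # stopping criterion as in the comparison below
        u = u_next
        break
    u = u_next
print(n, u)                              # u approaches 0, the unique element of Omega
```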
Moreover, since T u := (2/5) sin u is also nonexpansive, we consider the modified version of Algorithm 3, that is,
w_n = (2/3) u_n + (1/3) T u_n,  y_n = P_C( w_n − (1/2) A w_n ),  t_n = ( 1 − ζ_n ) w_n + ζ_n y_n,  z_n = P_{D_n}( w_n ),  u_{n+1} = ( 1/( 3( n + 1 ) ) ) · (1/3) u_n + ( n/( 3( n + 1 ) ) ) u_n + (2/3) T_1 z_n   ∀ n ≥ 1,
where, for each n ≥ 1, D_n and ζ_n are chosen as above. Then, by Theorem 2, we know that { u_n } converges to 0 ∈ Ω = Fix( T_1 ) ∩ Fix( T ) ∩ VI( C, A ). In particular, we compare the performance of this modified scheme with the Reich et al. [22] method using similar parameters as above. We choose the following initial inputs and take ∥ u_{n+1} − u_n ∥ < 5 × 10⁻⁵ as the stopping criterion:
Case I: u 1 = 2 ; Case II: u 1 = exp ( 150 77 ) ; Case III: u 1 = 3 4 π ; Case IV: u 1 = 7 .
The numerical results are shown in Table 1 and Figure 1. One can observe from the table and the figure that our proposed algorithm outperforms the method proposed by Reich et al. [22] on this test example.

Author Contributions

Conceptualization, L.-C.C.; Formal analysis, Y.S.; Funding acquisition, J.-C.Y.; Investigation, L.-C.C. and Y.S.; Methodology, L.-C.C. and Y.S.; Project administration, J.-C.Y.; Supervision, J.-C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Lu-Chuan Ceng was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). The research of Jen-Chih Yao was supported by the grant MOST 108-2115-M-039-005-MY3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
2. Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
3. Zhao, X.P.; Yao, Y.H. Convergence analysis of extragradient algorithms for pseudo-monotone variational inequalities. J. Nonlinear Convex Anal. 2020, 21, 2185–2192.
4. Iusem, A.N.; Nasri, M. Korpelevich's method for variational inequality problems in Banach spaces. J. Global Optim. 2011, 50, 59–76.
5. Tan, B.; Li, S.X.; Qin, X.L. On modified subgradient extragradient methods for pseudomonotone variational inequality problems with applications. Comput. Appl. Math. 2021, 40, 1–22.
6. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
7. Yao, Y.; Shahzad, N.; Yao, J.C. Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpathian J. Math. 2021, 37, 541–550.
8. Iusem, A.N.; Mohebbi, V. An extragradient method for vector equilibrium problems on Hadamard manifolds. J. Nonlinear Var. Anal. 2021, 5, 459–476.
9. Ceng, L.C.; Shang, M.J. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2021, 70, 715–740.
10. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765.
11. Chen, J.F.; Liu, S.Y.; Chang, X.K. Extragradient method and golden ratio method for equilibrium problems on Hadamard manifolds. Int. J. Comput. Math. 2021, 98, 1699–1712.
12. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
13. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
14. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
15. Dong, Q.L.; Cai, G. Convergence analysis for fixed point problem of asymptotically nonexpansive mappings and variational inequality problem in Hilbert spaces. Optimization 2021, 70, 1171–1193.
16. Thong, D.V.; Dong, Q.L.; Liu, L.L.; Triet, N.A.; Lan, N.P. Two new inertial subgradient extragradient methods with variable step sizes for solving pseudomonotone variational inequality problems in Hilbert spaces. J. Comput. Appl. Math. 2021.
17. Vuong, P.T.; Shehu, Y. Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms 2019, 81, 269–291.
18. Cai, G.; Dong, Q.L.; Peng, Y. Strong convergence theorems for inertial Tseng's extragradient method for solving variational inequality problems and fixed point problems. Optim. Lett. 2021, 15, 1457–1474.
19. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
20. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
21. Ceng, L.C.; Yao, J.C.; Shehu, Y. On Mann-type subgradient-like extragradient method with linear-search process for hierarchical variational inequalities for asymptotically nonexpansive mappings. Mathematics 2021, 9, 3322.
22. Reich, S.; Thong, D.V.; Dong, Q.L.; Li, X.H.; Dung, V.T. New algorithms and convergence theorems for solving variational inequalities with non-Lipschitz mappings. Numer. Algorithms 2021, 87, 527–549.
23. He, Y.R. A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185, 166–173.
24. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201.
25. Lim, T.C.; Xu, H.K. Fixed point theorems for asymptotically nonexpansive mappings. Nonlinear Anal. 1994, 22, 1345–1355.
26. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
27. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014.
28. Antal, S.; Tomar, A.; Prajapati, D.J.; Sajid, M. Fractals as Julia Sets of Complex Sine Function via Fixed Point Iterations. Fractal Fract. 2021, 5, 272.
Figure 1. Computation result showing performance of our new method and the Reich et al. [22] method: Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Table 1. Numerical results showing the performance of our new method and the Reich et al. [22] method.

           New Algorithm        Reich et al. [22] alg.
           Iter.    Time        Iter.    Time
Case I     17       0.0082      72       0.0176
Case II    24       0.0079      168      0.0717
Case III   58       0.0131      71       0.0166
Case IV    119      0.0298      164      0.0455
