Article

On Strengthened Inertial-Type Subgradient Extragradient Rule with Adaptive Step Sizes for Variational Inequalities and Fixed Points of Asymptotically Nonexpansive Mappings

Lu-Chuan Ceng, Ching-Feng Wen, Yeong-Cheng Liou and Jen-Chih Yao
1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Center for Fundamental Science, and Research Center for Nonlinear Analysis and Optimization, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
3 Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung 80708, Taiwan
4 Department of Healthcare Administration and Medical Informatics and Research Center of Nonlinear Analysis and Optimization, Kaohsiung Medical University, Kaohsiung 807, Taiwan
5 Research Center for Interneural Computing, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 958; https://doi.org/10.3390/math10060958
Submission received: 7 February 2022 / Revised: 5 March 2022 / Accepted: 8 March 2022 / Published: 17 March 2022
(This article belongs to the Special Issue Applied Functional Analysis and Applications)

Abstract

In a real Hilbert space, let the VIP denote a pseudomonotone variational inequality problem with a Lipschitz continuous operator, and let the CFPP indicate a common fixed-point problem of finitely many nonexpansive mappings and an asymptotically nonexpansive mapping. On the basis of the Mann iteration method, the viscosity approximation method and the hybrid steepest-descent method, we propose and analyze two strengthened inertial-type subgradient extragradient rules with adaptive step sizes for solving the VIP and CFPP. With the help of suitable restrictions, we show the strong convergence of the suggested rules to a common solution of the VIP and CFPP, which is the unique solution of a hierarchical variational inequality (HVI).

1. Introduction

Let (H, ⟨·,·⟩) be a real Hilbert space with norm ∥·∥. Given a convex and closed set C in H, we denote by P_C the metric projection from H onto C. Suppose that T : C → H is a nonlinear operator on C. We use the notations Fix(T), ℝ, → and ⇀ to indicate the fixed point set of T, the set of all real numbers, strong convergence and weak convergence, respectively. An operator T : C → C is referred to as asymptotically nonexpansive if there exists a sequence {θ_k} ⊂ [0, +∞) with lim_{k→∞} θ_k = 0 such that
∥T^k u − T^k v∥ ≤ (1 + θ_k)∥u − v∥ ∀ k ≥ 1, ∀ u, v ∈ C.
In particular, whenever θ_k = 0 for all k ≥ 1, T is referred to as nonexpansive.
Given an operator A : H → H, the classical variational inequality problem (VIP) is that of finding u* ∈ C such that ⟨Au*, u − u*⟩ ≥ 0 ∀ u ∈ C. The solution set of the VIP is denoted by VI(C, A). One of the most popular methods for solving the VIP is the extragradient method proposed by Korpelevich [1] in 1976: for any starting point u_0 ∈ C, the sequence {u_k} is constructed by
v_k = P_C(u_k − ℓAu_k),  u_{k+1} = P_C(u_k − ℓAv_k)  ∀ k ≥ 0,
with ℓ ∈ (0, 1/L). In case VI(C, A) ≠ ∅, the sequence {u_k} constructed by (2) converges weakly to an element of VI(C, A). The literature on the VIP is vast, and Korpelevich’s extragradient method has received great attention from many authors, who have improved it in various aspects; see, e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26] and the references therein.
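To make the iteration concrete, the following is a minimal sketch of the extragradient step (2) in Python. The operator A (a monotone affine map), the box C, the step size and the iteration count are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Sketch of Korpelevich's extragradient iteration (2) on a box C = [-1, 1]^2
# for an assumed monotone affine operator A(u) = M u + q.

def proj_box(u, lo=-1.0, hi=1.0):
    """Metric projection P_C onto the box [lo, hi]^2."""
    return np.clip(u, lo, hi)

M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone: its symmetric part is positive definite
q = np.array([-1.0, 0.5])
A = lambda u: M @ u + q
L = np.linalg.norm(M, 2)                  # a Lipschitz constant of A
ell = 0.9 / L                             # step size ell in (0, 1/L)

u = proj_box(np.array([3.0, -3.0]))       # starting point u_0
for k in range(200):
    v = proj_box(u - ell * A(u))          # v_k = P_C(u_k - ell * A u_k)
    u = proj_box(u - ell * A(v))          # u_{k+1} = P_C(u_k - ell * A v_k)

print("approximate element of VI(C, A):", u)
```

The subgradient extragradient variants discussed next keep the first projection onto C but replace the second one by a projection onto a half-space, which is much cheaper when C is complicated.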
Clearly, in the extragradient method one has to calculate two projections onto C per iteration. The projection onto a closed convex set C amounts to solving a minimum-distance problem; when C is a general convex and closed set, this might require a prohibitive amount of computation time. In 2011, Censor et al. [6] improved Korpelevich’s extragradient method and first invented the subgradient extragradient method, in which one uses a projection onto a half-space in place of the second projection onto C:
v_k = P_C(u_k − ℓAu_k),  C_k = {u ∈ H : ⟨u_k − ℓAu_k − v_k, u − v_k⟩ ≤ 0},  u_{k+1} = P_{C_k}(u_k − ℓAv_k)  ∀ k ≥ 0,
with ℓ ∈ (0, 1/L). In 2018, via the inertial approach, Thong and Hieu [19] first suggested the inertial subgradient extragradient method, i.e., for any starting points u_0, u_1 ∈ H, the sequence {u_k} is constructed by
w_k = u_k + α_k(u_k − u_{k−1}),  v_k = P_C(w_k − ℓAw_k),  C_k = {u ∈ H : ⟨w_k − ℓAw_k − v_k, u − v_k⟩ ≤ 0},  u_{k+1} = P_{C_k}(w_k − ℓAv_k)  ∀ k ≥ 1,
with ℓ ∈ (0, 1/L). Under mild assumptions, they proved that {u_k} converges weakly to an element of VI(C, A). Very recently, Ceng et al. [27] introduced a modified inertial subgradient extragradient method (Algorithm 1 below) for solving the VIP with a pseudomonotone and Lipschitz continuous mapping A and the common fixed-point problem (CFPP) of finitely many nonexpansive mappings {T_i}_{i=1}^N in a real Hilbert space H. Let f : H → H be a contraction with constant δ ∈ [0, 1), and let F : H → H be an η-strongly monotone and κ-Lipschitzian mapping with δ < ζ := 1 − √(1 − ρ(2η − ρκ²)) for ρ ∈ (0, 2η/κ²). Suppose that {β_k}, {γ_k}, {ϵ_k} are sequences in (0, 1) such that β_k + γ_k < 1 ∀ k ≥ 1, β_k → 0 and ϵ_k/β_k → 0 as k → ∞. In addition, we write T_k := T_{k mod N} for integer k ≥ 1 with the mod function taking values in the set {1, 2, …, N}; that is, if k = jN + q for some integers j ≥ 0 and 0 ≤ q < N, then T_k = T_N if q = 0 and T_k = T_q if 0 < q < N.
Algorithm 1: A modified inertial subgradient extragradient method (see [27])
Initialization: Given τ_1 > 0, α > 0, μ ∈ (0, 1), let u_0, u_1 ∈ H be arbitrary.
Iterative Steps: Calculate u_{k+1} as follows:
Step 1. Given the iterates u_{k−1} and u_k (k ≥ 1), choose α_k such that 0 ≤ α_k ≤ α̃_k, where
α̃_k = min{α, ϵ_k/∥u_k − u_{k−1}∥} if u_k ≠ u_{k−1}, and α̃_k = α otherwise.
Step 2. Calculate w_k = T_k u_k + α_k(T_k u_k − T_k u_{k−1}) and v_k = P_C(w_k − τ_k A w_k).
Step 3. Construct the half-space C_k := {u ∈ H : ⟨w_k − τ_k A w_k − v_k, u − v_k⟩ ≤ 0}, and calculate t_k = P_{C_k}(w_k − τ_k A v_k).
Step 4. Compute u_{k+1} = β_k f(u_k) + γ_k u_k + ((1 − γ_k)I − β_k ρF)t_k, and update
τ_{k+1} = min{μ(∥w_k − v_k∥² + ∥t_k − v_k∥²)/(2⟨Aw_k − Av_k, t_k − v_k⟩), τ_k} if ⟨Aw_k − Av_k, t_k − v_k⟩ > 0, and τ_{k+1} = τ_k otherwise.
Put k : = k + 1 and return to Step 1.
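For implementation purposes, the two bookkeeping rules shared by the algorithms in this paper, namely the choice of the inertial parameter α_k and the adaptive step-size update for τ_{k+1}, can be coded directly. The sketch below is ours (the function names, signatures and the choice α_k = α̃_k are assumptions); it only mirrors the two formulas displayed above.

```python
import numpy as np

# Helper sketches for the inertial parameter alpha_k and the adaptive step
# size tau_{k+1}; names and conventions are our own.

def choose_alpha(u_k, u_prev, alpha_bar, eps_k):
    """Return an admissible alpha_k with 0 <= alpha_k <= alpha_tilde_k."""
    diff = np.linalg.norm(u_k - u_prev)
    alpha_tilde = min(alpha_bar, eps_k / diff) if diff > 0 else alpha_bar
    return alpha_tilde              # any value in [0, alpha_tilde] is admissible

def update_tau(tau_k, mu, A, w_k, v_k, t_k):
    """Adaptive step size: shrink tau only when <A w_k - A v_k, t_k - v_k> > 0."""
    denom = np.dot(A(w_k) - A(v_k), t_k - v_k)
    if denom > 0:
        cand = mu * (np.linalg.norm(w_k - v_k) ** 2
                     + np.linalg.norm(t_k - v_k) ** 2) / (2.0 * denom)
        return min(cand, tau_k)
    return tau_k
```

Note that τ_{k+1} never increases, which is exactly the monotonicity exploited later in Lemma 8.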
Under suitable assumptions, it was proven in [27] that {u_k} converges strongly to an element of Ω = ⋂_{l=1}^N Fix(T_l) ∩ VI(C, A). Subsequently, Thong et al. [16] suggested the new inertial subgradient extragradient method (Algorithm 2 below) with adaptive step sizes for solving the VIP with a pseudomonotone and Lipschitz continuous mapping A. Suppose that {λ_k} ⊂ (λ, 1) ⊂ (0, 1) and {ϵ_k}, {β_k} ⊂ (0, 1) are such that β_k → 0 and ϵ_k/β_k → 0 as k → ∞.
Algorithm 2: New inertial subgradient extragradient method (see [16])
Given τ_1 > 0, α > 0, μ ∈ (0, 1), let u_0, u_1 ∈ H be arbitrary.
     Choose α_k such that
      0 ≤ α_k ≤ α̃_k := min{α, ϵ_k/∥u_k − u_{k−1}∥} if u_k ≠ u_{k−1}, and α̃_k := α otherwise,
      w_k = (1 − β_k)[u_k + α_k(u_k − u_{k−1})],
      v_k = P_C(w_k − τ_k A w_k),
      C_k := {u ∈ H : ⟨w_k − τ_k A w_k − v_k, u − v_k⟩ ≤ 0},
      u_{k+1} = (1 − λ_k)w_k + λ_k P_{C_k}(w_k − τ_k A v_k),
     update
      τ_{k+1} = min{μ(∥w_k − v_k∥² + ∥t_k − v_k∥²)/(2⟨Aw_k − Av_k, t_k − v_k⟩), τ_k} if ⟨Aw_k − Av_k, t_k − v_k⟩ > 0, and τ_{k+1} = τ_k otherwise,
     where t_k := P_{C_k}(w_k − τ_k A v_k).
Under appropriate assumptions, it was proven in [16] that {u_k} converges strongly to an element of VI(C, A). In a real Hilbert space H, let the VIP denote a pseudomonotone variational inequality problem with a Lipschitz continuous operator A, and let the CFPP indicate a common fixed-point problem of finitely many nonexpansive mappings {T_i}_{i=1}^N and an asymptotically nonexpansive mapping T_0 := T. On the basis of the Mann iteration method, the viscosity approximation method and the hybrid steepest-descent method, we propose and analyze two strengthened inertial-type subgradient extragradient rules (Algorithms 3 and 4, formulated in Section 3) with adaptive step sizes for solving the VIP and CFPP. With the help of suitable restrictions, we show the strong convergence of the suggested rules to a common solution of the VIP and CFPP, which is the unique solution of a hierarchical variational inequality (HVI) defined on the common solution set Ω := ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A). In the end, our main results are applied to solve the VIP and CFPP in an illustrative example.
This article is organized as follows: In Section 2, we present some concepts and basic tools for further use. Section 3 treats the convergence analysis of the suggested rules. Finally, Section 4 applies our main results to solve the VIP and CFPP in an illustrative example. Our results improve and extend the corresponding results announced by some others, e.g., Kraikaew and Saejung [20], Ceng et al. [27], Thong et al. [16] and Ceng and Shang [22].

2. Preliminaries

Suppose that C is a nonempty closed convex subset of a real Hilbert space H. Given u ∈ H and {u_k} ⊂ H, we use u_k → u (resp., u_k ⇀ u) to indicate the strong (resp., weak) convergence of {u_k} to u. An operator Γ : C → H is referred to as being
(i)
L-Lipschitz continuous (or L-Lipschitzian) in case ∃ L > 0 such that ∥Γu − Γv∥ ≤ L∥u − v∥ ∀ u, v ∈ C;
(ii)
monotone in case ⟨Γu − Γv, u − v⟩ ≥ 0 ∀ u, v ∈ C;
(iii)
pseudomonotone in case ⟨Γu, v − u⟩ ≥ 0 ⇒ ⟨Γv, v − u⟩ ≥ 0 ∀ u, v ∈ C;
(iv)
α-strongly monotone in case ∃ α > 0 such that ⟨Γu − Γv, u − v⟩ ≥ α∥u − v∥² ∀ u, v ∈ C;
(v)
sequentially weakly continuous in case, for each {u_k} ⊂ C, the relation holds: u_k ⇀ u ⇒ Γu_k ⇀ Γu.
Obviously, every monotone mapping is pseudomonotone, but the converse is not true. For every u ∈ H, there is a unique nearest point in C, denoted by P_C u, such that ∥u − P_C u∥ ≤ ∥u − v∥ ∀ v ∈ C. P_C is referred to as the metric projection of H onto C.
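In the algorithms below, only two projections ever have to be evaluated: P_C itself and the projection onto a half-space C_n = {u ∈ H : ⟨a, u⟩ ≤ b}, which admits the closed form P_{C_n}(x) = x − max{0, ⟨a, x⟩ − b}/∥a∥² · a. The following is a small sketch; the box-shaped C and the function names are our illustrative assumptions.

```python
import numpy as np

# Sketch of the two projections used by the subgradient extragradient rules:
# P_C for a simple box, and the closed-form projection onto a half-space
# {u : <a, u> <= b}.

def proj_box(x, lo, hi):
    """P_C for a box C = [lo, hi]^d (coordinate-wise clipping)."""
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    """Projection onto {u : <a, u> <= b}: move along a only if x violates it."""
    viol = np.dot(a, x) - b
    if viol <= 0:
        return np.asarray(x, dtype=float).copy()
    return x - (viol / np.dot(a, a)) * a

# For C_n = {u : <w - tau*A(w) - y, u - y> <= 0}, take a = w - tau*A(w) - y
# and b = <a, y>.
```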
Lemma 1
(see [28]). The following hold:
(i) 
⟨u − v, P_C u − P_C v⟩ ≥ ∥P_C u − P_C v∥² ∀ u, v ∈ H;
(ii) 
⟨u − P_C u, v − P_C u⟩ ≤ 0 ∀ u ∈ H, v ∈ C;
(iii) 
∥u − v∥² ≥ ∥u − P_C u∥² + ∥v − P_C u∥² ∀ u ∈ H, v ∈ C;
(iv) 
∥u − v∥² = ∥u∥² − ∥v∥² − 2⟨u − v, v⟩ ∀ u, v ∈ H;
(v) 
∥λu + (1 − λ)v∥² = λ∥u∥² + (1 − λ)∥v∥² − λ(1 − λ)∥u − v∥² ∀ u, v ∈ H, λ ∈ [0, 1].
Lemma 2
(see [10]). For any u ∈ H and λ ≥ η > 0, the following inequalities hold: ∥u − P_C(u − λAu)∥/λ ≤ ∥u − P_C(u − ηAu)∥/η and ∥u − P_C(u − ηAu)∥ ≤ ∥u − P_C(u − λAu)∥.
Lemma 3
(see [6], Lemma 2.1). Suppose that A : C → H is pseudomonotone and continuous. Then u* ∈ C is a solution to the VIP ⟨Au*, v − u*⟩ ≥ 0 ∀ v ∈ C if and only if ⟨Av, v − u*⟩ ≥ 0 ∀ v ∈ C.
Lemma 4
(see [29]). Suppose that {a_k} is a sequence of nonnegative reals such that a_{k+1} ≤ (1 − λ_k)a_k + λ_k γ_k ∀ k ≥ 1, where {λ_k} and {γ_k} are real sequences such that (a) {λ_k} ⊂ [0, 1] and ∑_{k=1}^∞ λ_k = ∞, and (b) lim sup_{k→∞} γ_k ≤ 0 or ∑_{k=1}^∞ |λ_k γ_k| < ∞. Then lim_{k→∞} a_k = 0.
Lemma 5
(see [30]). Assume that X is a Banach space which admits a weakly continuous duality mapping, C is a nonempty, convex and closed set in X, and T : C → C is an asymptotically nonexpansive mapping with Fix(T) ≠ ∅. Then I − T is demiclosed at zero, i.e., for any {u_k} ⊂ C with u_k ⇀ u ∈ C, the relation holds: (I − T)u_k → 0 ⇒ u ∈ Fix(T), where I is the identity mapping of X.
Lemma 6
(see [29], Lemma 3.1). Given λ ∈ (0, 1], suppose that the mapping T : C → H is nonexpansive, and let the operator T^λ : C → H be defined by T^λ u := Tu − λμF(Tu) ∀ u ∈ C, where F : H → H is κ-Lipschitzian and η-strongly monotone. Then T^λ is a contraction provided 0 < μ < 2η/κ², i.e., ∥T^λ u − T^λ v∥ ≤ (1 − λζ)∥u − v∥ ∀ u, v ∈ C, with ζ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
The following lemma will play an important role in the convergence analysis of the suggested rules in this paper.
Lemma 7
(see [31]). Suppose that {Γ_k} is a sequence in ℝ that does not decrease at infinity, in the sense that there exists a subsequence {Γ_{k_l}} of {Γ_k} such that Γ_{k_l} < Γ_{k_l + 1} ∀ l ≥ 1. Let the sequence {φ(k)}_{k ≥ k_0} of integers be defined as
φ(k) = max{l ≤ k : Γ_l < Γ_{l+1}},
where the integer k_0 ≥ 1 is such that {l ≤ k_0 : Γ_l < Γ_{l+1}} ≠ ∅. Then the following statements hold:
(i) 
φ(k_0) ≤ φ(k_0 + 1) ≤ ⋯ and φ(k) → ∞;
(ii) 
Γ_{φ(k)} ≤ Γ_{φ(k)+1} and Γ_k ≤ Γ_{φ(k)+1} ∀ k ≥ k_0.

3. Main Results

Suppose that the feasible set C is nonempty, convex and closed in a real Hilbert space H. In what follows, we always assume that the following conditions hold.
T : H → H is asymptotically nonexpansive and T_l : H → H is nonexpansive for l = 1, …, N, where {T_n}_{n=1}^∞ is formulated as in Algorithm 1 (this hypothesis will be imposed on Algorithms 3 and 4 below).
A : H → H is L-Lipschitz continuous and pseudomonotone on H, such that ∥Au∥ ≤ lim inf_{k→∞} ∥Au_k∥ for each {u_k} ⊂ C with u_k ⇀ u, and Ω = ⋂_{l=0}^N Fix(T_l) ∩ VI(C, A) ≠ ∅ with T_0 := T.
f : H → H is a contraction with coefficient δ ∈ [0, 1), and F : H → H is η-strongly monotone and κ-Lipschitzian such that δ < ζ := 1 − √(1 − ρ(2η − ρκ²)) for ρ ∈ (0, 2η/κ²).
{λ_n}, {ϵ_n}, {β_n}, {γ_n} ⊂ (0, 1) with β_n + γ_n < 1 ∀ n ≥ 1, such that
(i)
lim_{n→∞} β_n = 0 and ∑_{n=1}^∞ β_n = ∞;
(ii)
lim_{n→∞} ϵ_n/β_n = 0 and lim_{n→∞} θ_n/β_n = 0;
(iii)
0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1;
(iv)
{λ_n} ⊂ [λ̲, λ̄] ⊂ (0, 1).
Algorithm 3: Strengthened inertial-type subgradient extragradient rules
Initialization: Given τ_1 > 0, α > 0, μ ∈ (0, 1), let x_0, x_1 ∈ H be arbitrary and choose α_n such that
0 ≤ α_n ≤ α̃_n := min{α, ϵ_n/∥x_n − x_{n−1}∥} if x_n ≠ x_{n−1}, and α̃_n := α otherwise.
Iterative Steps: Calculate x_{n+1} as follows:
Step 1. Set t_n = x_n + α_n(x_n − x_{n−1}), and compute
       w_n = β_n f(x_n) + γ_n x_n + ((1 − γ_n)I − β_n ρF)T_n t_n
and y_n = P_C(w_n − τ_n A w_n).
Step 2. Compute x_{n+1} = (1 − λ_n)w_n + λ_n T^n P_{C_n}(w_n − τ_n A y_n) with
       C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}.
Step 3. Update
τ_{n+1} = min{μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2⟨Aw_n − Ay_n, z_n − y_n⟩), τ_n} if ⟨Aw_n − Ay_n, z_n − y_n⟩ > 0, and τ_{n+1} = τ_n otherwise,
with z_n = P_{C_n}(w_n − τ_n A y_n). Again set n := n + 1 and go to Step 1.
It is worth mentioning that, putting γ_n = 0 ∀ n ≥ 1, f = 0 and T_l = T = ρF = I for l = 1, …, N, we transform Algorithm 3 into Algorithm 2. It is also clear from (5) that (α_n/β_n)∥x_{n−1} − x_n∥ → 0 as n → ∞. In fact, from α_n∥x_{n−1} − x_n∥ ≤ ϵ_n ∀ n ≥ 1, it follows that (α_n/β_n)∥x_{n−1} − x_n∥ ≤ ϵ_n/β_n → 0 (due to condition (ii)).
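To connect the formulas of Algorithm 3 with an implementation, the following is a sketch of one sweep of Steps 1–3 in H = ℝ^d. It reuses the proj_halfspace and update_tau helpers sketched earlier; the dictionary-based interface and the names T_cyc (for the cyclic maps T_n) and T_pow (for the powers T^n) are our own assumptions.

```python
import numpy as np

# One sweep of Algorithm 3 (Steps 1-3) in R^d; proj_C, A, f, F, T_cyc and
# T_pow are user-supplied callables, while proj_halfspace and update_tau are
# the helper sketches given earlier.

def algorithm3_step(x_n, x_prev, n, tau_n, params, maps):
    alpha_n, beta_n, gamma_n, lam_n, rho, mu = (params[k] for k in
        ("alpha", "beta", "gamma", "lam", "rho", "mu"))
    proj_C, A, f, F, T_cyc, T_pow = (maps[k] for k in
        ("proj_C", "A", "f", "F", "T_cyc", "T_pow"))

    # Step 1: inertial point t_n, then w_n and y_n
    t_n = x_n + alpha_n * (x_n - x_prev)
    Tt = T_cyc(n, t_n)                                  # T_n t_n
    w_n = (beta_n * f(x_n) + gamma_n * x_n
           + (1.0 - gamma_n) * Tt - beta_n * rho * F(Tt))
    y_n = proj_C(w_n - tau_n * A(w_n))

    # Step 2: project onto the half-space C_n and relax with T^n
    a = w_n - tau_n * A(w_n) - y_n                      # C_n = {x : <a, x - y_n> <= 0}
    z_n = proj_halfspace(w_n - tau_n * A(y_n), a, np.dot(a, y_n))
    x_next = (1.0 - lam_n) * w_n + lam_n * T_pow(n, z_n)

    # Step 3: adaptive step size for the next iteration
    tau_next = update_tau(tau_n, mu, A, w_n, y_n, z_n)
    return x_next, tau_next
```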
Lemma 8.
Suppose that {τ_n} is constructed by (6). Then {τ_n} is nonincreasing, with τ_n ≥ τ := min{τ_1, μ/L} ∀ n ≥ 1 and lim_{n→∞} τ_n ≥ τ := min{τ_1, μ/L}.
Proof. 
From (6), we observe that τ_n ≥ τ_{n+1} ∀ n ≥ 1. Moreover, it is clear that
(1/2)(∥w_n − y_n∥² + ∥z_n − y_n∥²) ≥ ∥w_n − y_n∥∥z_n − y_n∥ and ⟨Aw_n − Ay_n, z_n − y_n⟩ ≤ L∥w_n − y_n∥∥z_n − y_n∥, so that μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2⟨Aw_n − Ay_n, z_n − y_n⟩) ≥ μ/L whenever ⟨Aw_n − Ay_n, z_n − y_n⟩ > 0, and hence τ_{n+1} ≥ min{τ_n, μ/L}.
It is worth pointing out that, if w_n = y_n or Ay_n = 0, then y_n ∈ VI(C, A) (due to Lemmas 2 and 8). In fact, when w_n = y_n or Ay_n = 0, one has 0 = ∥y_n − P_C(y_n − τ_n Ay_n)∥ ≥ ∥y_n − P_C(y_n − τ Ay_n)∥. So, one gets y_n ∈ VI(C, A).
Lemma 9.
Under the assumptions in Algorithm 3, there exists an integer m_0 ≥ 1 such that for all n ≥ m_0, 1 − μτ_n/τ_{n+1} > 0 and
∥x_{n+1} − p∥² ≤ ∥w_n − p∥² − (1/2)(1 − μτ_n/τ_{n+1})λ_n∥z_n − w_n∥² + θ_n(2 + θ_n)∥z_n − p∥² ∀ p ∈ Ω.
Proof. 
First, let us show that
2⟨Aw_n − Ay_n, z_n − y_n⟩ ≤ (μ/τ_{n+1})∥w_n − y_n∥² + (μ/τ_{n+1})∥z_n − y_n∥² ∀ n ≥ 1.
In fact, in case A w n A y n , z n y n 0 , then (8) holds. Otherwise, from (6) one gets (8). Note that for every p Ω C C n ,
z n p 2 = P C n ( w n τ n A y n ) p 2 z n p , w n τ n A y n p = 1 2 z n p 2 + 1 2 w n p 2 1 2 z n w n 2 z n p , τ n A y n ,
which hence arrives at
z n p 2 w n p 2 z n w n 2 2 z n p , τ n A y n .
Owing to p ∈ VI(C, A), one gets ⟨Ap, u − p⟩ ≥ 0 ∀ u ∈ C. By the pseudomonotonicity of A on C, one has ⟨Au, u − p⟩ ≥ 0 ∀ u ∈ C. Setting u := y_n ∈ C, one gets ⟨Ay_n, p − y_n⟩ ≤ 0. Thus,
A y n , p z n = A y n , p y n + A y n , y n z n A y n , y n z n .
Combining (9) and (10), one obtains
z n p 2 w n p 2 z n y n 2 y n w n 2 + 2 w n τ n A y n y n , z n y n .
Noticing z n = P C n ( w n τ n A y n ) and C n : = { x H : w n τ n A w n y n , x y n 0 } , one has
2 w n τ n A y n y n , z n y n = 2 w n τ n A w n y n , z n y n + 2 τ n A w n A y n , z n y n 2 τ n A w n A y n , z n y n ,
which along with (8), yields
2 w n τ n A y n y n , z n y n μ τ n τ n + 1 w n y n 2 + μ τ n τ n + 1 z n y n 2 .
This along with (11), leads to
z n p 2 w n p 2 ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) .
In addition, from Algorithm 3 one gets
x n + 1 p 2 ( 1 λ n ) w n p 2 + λ n ( 1 + θ n ) 2 z n p 2 ( 1 λ n ) w n p 2 + λ n z n p 2 + θ n ( 2 + θ n ) z n p 2 ,
which along with (12), yields
x n + 1 p 2 w n p 2 ( 1 μ τ n τ n + 1 ) λ n ( y n w n 2 + z n y n 2 ) + θ n ( 2 + θ n ) z n p 2 .
Since 1 − μτ_n/τ_{n+1} → 1 − μ > 0 as n → ∞, there obviously exists an integer m_0 ≥ 1 such that for each n ≥ m_0, 1 − μτ_n/τ_{n+1} > 0 and (7) holds. □
Lemma 10.
Suppose that {x_n}, {z_n} are bounded sequences constructed in Algorithm 3 such that ∥x_n − x_{n+1}∥ → 0, ∥x_n − z_n∥ → 0, ∥x_n − T_n x_n∥ → 0 and ∥x_n − T^n x_n∥ → 0. If ∥T^n x_n − T^{n+1} x_n∥ → 0 and there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ z ∈ H, then z ∈ Ω.
Proof. 
From Algorithm 3 and the hypotheses x n x n + 1 0 and x n T n x n 0 , we obtain that
t n x n = α n x n x n 1 0 ( n ) ,
and hence
t n T n t n t n x n + x n T n x n + T n x n T n t n 2 t n x n + x n T n x n 0 ( n ) .
It is clear that { t n } is bounded. Note that
w n x n = ( 1 γ n ) ( T n t n x n ) + β n ( f ( x n ) ρ F T n t n ) = ( 1 γ n ) [ T n t n t n + t n x n ] + β n ( f ( x n ) ρ F T n t n ) ( 1 γ n ) T n t n t n + t n x n + β n f ( x n ) ρ F T n t n T n t n t n + t n x n + β n ( f ( x n ) + ρ F T n t n ) .
Since β n 0 , t n x n 0 and t n T n t n 0 , by the boundedness of { x n } and { T n t n } we deduce that
lim n w n x n = 0 .
It is clear that { w n } is bounded. From (12) it follows that
( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) w n p 2 z n p 2 w n z n ( w n p + z n p ) ( w n x n + x n z n ) ( w n p + z n p ) .
Since 1 μ τ n τ n + 1 1 μ > 0 and x n z n 0 (due to the hypothesis), from (13) one gets
lim n w n y n = 0 and lim n y n z n = 0 .
It is clear that { y n } is bounded. Moreover, thanks to the hypotheses x n z n 0 , x n T n x n 0 and T n x n T n + 1 x n 0 , one has that
z n T n z n z n x n + x n T n x n + T n x n T n z n ( 2 + θ n ) z n x n + x n T n x n 0 ( n ) ,
and hence
z n T z n z n T n x n + T n x n T n + 1 x n + T n + 1 x n T z n ( 2 + θ 1 ) z n T n x n + T n x n T n + 1 x n ( 2 + θ 1 ) ( z n T n z n + T n z n T n x n ) + T n x n T n + 1 x n ( 2 + θ 1 ) { z n T n z n + ( 1 + θ n ) z n x n } + T n x n T n + 1 x n 0 ( n ) .
We claim that lim n x n T l x n = 0 for l = 1 , , N . In fact, observe that for l = 1 , , N ,
x n T n + l x n x n T n + l x n + l + T n + l x n + l T n + l x n x n T n + l x n + l + x n + l x n T n + l x n + l x n + l + 2 x n + l x n .
So, using the hypotheses x n T n x n 0 and x n x n + 1 0 , one gets lim n x n T n + l x n = 0 for l = 1 , , N . This hence ensures that
lim n x n T l x n = 0 l { 1 , 2 , , N } .
Noticing y_n = P_C(w_n − τ_n A w_n), one has ⟨w_n − τ_n A w_n − y_n, u − y_n⟩ ≤ 0 ∀ u ∈ C, and hence
(1/τ_{n_k})⟨w_{n_k} − y_{n_k}, u − y_{n_k}⟩ + ⟨Aw_{n_k}, y_{n_k} − w_{n_k}⟩ ≤ ⟨Aw_{n_k}, u − w_{n_k}⟩ ∀ u ∈ C.
Noticing the boundedness of {w_{n_k}} and the Lipschitz continuity of A, one gets the boundedness of {Aw_{n_k}}. Owing to τ_n ≥ τ := min{τ_1, μ/L} and ∥w_n − y_n∥ → 0, from (16) one gets
lim inf_{k→∞} ⟨Aw_{n_k}, u − w_{n_k}⟩ ≥ 0 ∀ u ∈ C.
Next, let us show that z ∈ VI(C, A). In fact, since ∥w_n − x_n∥ → 0, ∥w_n − y_n∥ → 0 and x_{n_k} ⇀ z, we obtain that ∥x_n − y_n∥ → 0 and y_{n_k} ⇀ z. Since C is convex and closed, from {y_n} ⊂ C one has z ∈ C. In what follows, we consider two cases. If Az = 0, then it is clear that z ∈ VI(C, A) because ⟨Az, u − z⟩ ≥ 0 ∀ u ∈ C. Assume that Az ≠ 0. Then, by the hypothesis imposed on A (in place of the sequential weak continuity of A), we get 0 < ∥Az∥ ≤ lim inf_{k→∞} ∥Ay_{n_k}∥. Note that combining ∥w_n − y_n∥ → 0 and the L-Lipschitz continuity of A yields ∥Aw_n − Ay_n∥ → 0. So it follows that
0 < ∥Az∥ ≤ lim inf_{k→∞} ∥Ay_{n_k}∥ = lim inf_{k→∞} (∥Ay_{n_k}∥ − ∥Ay_{n_k} − Aw_{n_k}∥) ≤ lim inf_{k→∞} ∥Aw_{n_k}∥.
So, we may assume, without loss of generality, that Aw_{n_k} ≠ 0 ∀ k ≥ 1.
We now pick a sequence {δ_k} ⊂ (0, 1) such that δ_k ↓ 0 as k → ∞. For every k ≥ 1, let m_k denote the smallest positive integer such that
⟨Aw_{n_j}, u − w_{n_j}⟩ + δ_k ≥ 0 ∀ j ≥ m_k.
Noticing that {δ_k} is decreasing, one knows that {m_k} is increasing. Since {w_{m_k}} ⊂ {w_{n_k}} and Aw_{n_k} ≠ 0 ∀ k ≥ 1, we put ν_{m_k} = Aw_{m_k}/∥Aw_{m_k}∥², and get ⟨Aw_{m_k}, ν_{m_k}⟩ = 1 ∀ k ≥ 1. So, from (17) one has ⟨Aw_{m_k}, u + δ_k ν_{m_k} − w_{m_k}⟩ ≥ 0 ∀ k ≥ 1. Moreover, using the pseudomonotonicity of A one has ⟨A(u + δ_k ν_{m_k}), u + δ_k ν_{m_k} − w_{m_k}⟩ ≥ 0 ∀ k ≥ 1. This immediately arrives at
⟨Au, u − w_{m_k}⟩ ≥ ⟨Au − A(u + δ_k ν_{m_k}), u + δ_k ν_{m_k} − w_{m_k}⟩ − δ_k⟨Au, ν_{m_k}⟩ ∀ k ≥ 1.
Let us show that lim_{k→∞} δ_k ν_{m_k} = 0. In fact, since 0 < ∥Az∥ ≤ lim inf_{k→∞} ∥Aw_{n_k}∥, {w_{m_k}} ⊂ {w_{n_k}} and δ_k → 0, we deduce that 0 ≤ lim sup_{k→∞} ∥δ_k ν_{m_k}∥ = lim sup_{k→∞} (δ_k/∥Aw_{m_k}∥) ≤ lim sup_{k→∞} δ_k / lim inf_{k→∞} ∥Aw_{n_k}∥ = 0. So, one has δ_k ν_{m_k} → 0 as k → ∞.
In the end, we claim that z ∈ Ω. In fact, using (15) one has ∥x_{n_k} − T_l x_{n_k}∥ → 0 for l = 1, …, N. Since Lemma 5 ensures the demiclosedness of I − T_l at zero, from x_{n_k} ⇀ z one has z ∈ Fix(T_l) ∀ l ∈ {1, …, N}. So, z ∈ ⋂_{l=1}^N Fix(T_l). Meanwhile, from ∥x_n − z_n∥ → 0 and x_{n_k} ⇀ z, one gets z_{n_k} ⇀ z. Using (14) one has ∥z_{n_k} − T z_{n_k}∥ → 0. From Lemma 5 one knows the demiclosedness of I − T at zero, and hence gets z ∈ Fix(T). Additionally, letting k → ∞, we infer that the right-hand side of (18) tends to zero by the uniform continuity of A, the boundedness of {w_{m_k}}, {ν_{m_k}} and the limit lim_{k→∞} δ_k ν_{m_k} = 0. Consequently, ⟨Au, u − z⟩ = lim inf_{k→∞} ⟨Au, u − w_{m_k}⟩ ≥ 0 ∀ u ∈ C. Using Lemma 3 one has z ∈ VI(C, A). Therefore, z ∈ ⋂_{l=0}^N Fix(T_l) ∩ VI(C, A) = Ω. □
Theorem 1.
Suppose that {x_n} is the sequence constructed in Algorithm 3. Then {x_n} converges strongly to x* ∈ Ω provided ∥T^n x_n − T^{n+1} x_n∥ → 0, where x* ∈ Ω is the unique solution to the HVI: ⟨(ρF − f)x*, p − x*⟩ ≥ 0 ∀ p ∈ Ω.
Proof. 
Since lim_{n→∞} θ_n/β_n = 0 and 0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1, we may assume that θ_n/β_n ≤ (ζ − δ)/2 ∀ n ≥ 1 and {γ_n} ⊂ [a, b] ⊂ (0, 1). Let us show that P_Ω(f − ρF + I) is a contraction. In fact, using Lemma 6 one has
∥P_Ω(f − ρF + I)u − P_Ω(f − ρF + I)v∥ ≤ ∥(I − ρF)u − (I − ρF)v∥ + ∥f(u) − f(v)∥ ≤ (1 − ζ)∥u − v∥ + δ∥u − v∥ = [1 − (ζ − δ)]∥u − v∥ ∀ u, v ∈ H,
which ensures that P_Ω(f − ρF + I) is a contraction. Using Banach's Contraction Mapping Principle, we know that P_Ω(f − ρF + I) has a unique fixed point, say x* ∈ H, i.e., x* = P_Ω(f − ρF + I)x*. Therefore, there exists a unique x* ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) solving the HVI
⟨(ρF − f)x*, p − x*⟩ ≥ 0 ∀ p ∈ Ω.
In what follows, we claim that the conclusion of the theorem is valid. To this end, we divide the remainder of the proof into several steps.
Claim 1. We prove that {x_n} is bounded. In fact, for an arbitrary p ∈ Ω = ⋂_{l=0}^N Fix(T_l) ∩ VI(C, A), we have Tp = p, T_n p = p ∀ n ≥ 1, and (12) holds, i.e.,
∥z_n − p∥² ≤ ∥w_n − p∥² − (1 − μτ_n/τ_{n+1})(∥y_n − w_n∥² + ∥z_n − y_n∥²).
Since lim_{n→∞}(1 − μτ_n/τ_{n+1}) = 1 − μ > 0, we may assume, without loss of generality, that 1 − μτ_n/τ_{n+1} > 0 ∀ n ≥ 1. Hence, one gets
∥z_n − p∥ ≤ ∥w_n − p∥ ∀ n ≥ 1.
From the definition of t_n, we get
∥t_n − p∥ ≤ ∥x_n − p∥ + α_n∥x_n − x_{n−1}∥ = ∥x_n − p∥ + β_n·(α_n/β_n)∥x_n − x_{n−1}∥.
Since (α_n/β_n)∥x_n − x_{n−1}∥ → 0 as n → ∞, one knows that there exists K_1 > 0 such that
(α_n/β_n)∥x_n − x_{n−1}∥ ≤ K_1 ∀ n ≥ 1,
which together with (21) yields
∥t_n − p∥ ≤ ∥x_n − p∥ + β_n K_1 ∀ n ≥ 1.
So, from β n + γ n < 1 , Lemma 6 and (20) it follows that
z n p β n f ( x n ) f ( p ) + γ n x n p + ( 1 γ n ) × ( I β n 1 γ n ρ F ) T n t n ( I β n 1 γ n ρ F ) p + β n 1 γ n ( f ρ F ) p β n δ x n p + γ n x n p + ( 1 γ n β n ) t n p + β n ( f ρ F ) p β n δ ( x n p + β n K 1 ) + γ n ( x n p + β n K 1 ) + ( 1 γ n β n ) ( x n p + β n K 1 ) + β n ( f ρ F ) p x n p + β n ( K 1 + ( f ρ F ) p ) .
Noticing x n + 1 = ( 1 λ n ) w n + λ n T n z n , we infer from (23) that
x n + 1 p ( 1 λ n ) w n p + λ n ( 1 + θ n ) z n p [ 1 β n ( δ ) ] x n p + β n ( K 1 + ( f ρ F ) p ) + β n ( δ ) 2 · [ x n p + β n ( K 1 + ( f ρ F ) p ) ] [ 1 β n ( δ ) 2 ] x n p + β n ( δ ) 2 · 3 ( K 1 + ( f ρ F ) p ) δ max { x n p , 3 ( K 1 + ( f ρ F ) p ) δ } .
By induction, we obtain x n p max { x 1 p , 3 ( K 1 + ( f ρ F ) p ) δ } n 1 . Thus, { x n } is bounded, and so are the sequences { t n } , { w n } , { y n } , { z n } , { f ( x n ) } , { T n t n } , { T n z n } .
Claim 2. We prove that
λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) x n p 2 x n + 1 p 2 + ( β n + θ n ) K 4 ,
for some K 4 > 0 . In fact, one has
w n p = β n ( f ( x n ) f ( p ) ) + γ n ( x n p ) + ( 1 γ n ) [ ( I β n 1 γ n ρ F ) T n t n ( I β n 1 γ n ρ F ) p ] + β n ( f ρ F ) p .
Using Lemma 6 and the convexity of the function h ( t ) = t 2 t R , one gets
w n p 2 β n ( f ( x n ) f ( p ) ) + γ n ( x n p ) + ( 1 γ n ) [ ( I β n 1 γ n ρ F ) T n t n ( I β n 1 γ n ρ F ) p ] 2 + 2 β n ( f ρ F ) p , w n p [ β n δ x n p + γ n x n p + ( 1 β n γ n ) t n p ] 2 + 2 β n ( f ρ F ) p , w n p β n δ x n p 2 + γ n x n p 2 + ( 1 β n γ n ) t n p 2 + 2 β n ( f ρ F ) p , w n p β n δ x n p 2 + γ n x n p 2 + ( 1 β n γ n ) t n p 2 + β n K 2
(due to β n δ + γ n + ( 1 β n γ n ) = 1 β n ( δ ) 1 ), where sup n 1 2 ( f ρ F ) p w n p K 2 for some K 2 > 0 . Noticing x n + 1 = ( 1 λ n ) w n + λ n T n z n , from (12) we have
x n + 1 p 2 ( 1 λ n ) w n p 2 + λ n z n p 2 + θ n ( 2 + θ n ) z n p 2 w n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + θ n ( 2 + θ n ) z n p 2 .
Moreover, from (22) we have
t n p 2 x n p 2 + β n ( 2 K 1 x n p + β n K 1 2 ) x n p 2 + β n K 3 ,
where sup n 1 ( 2 K 1 x n p + β n K 1 2 ) K 3 for some K 3 > 0 . Combining (24)–(26), we obtain
x n + 1 p 2 β n δ x n p 2 + γ n x n p 2 + ( 1 β n γ n ) t n p 2 + β n K 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + θ n ( 2 + θ n ) z n p 2 [ 1 β n ( δ ) ] x n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + β n K 3 + β n K 2 + θ n ( 2 + θ n ) z n p 2 x n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + ( β n + θ n ) K 4 ,
where sup n 1 [ K 2 + K 3 + ( 2 + θ n ) z n p 2 ] K 4 . This immediately implies that
λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) x n p 2 x n + 1 p 2 + ( θ n + β n ) K 4 .
Claim 3. We prove that
x n + 1 p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) p , w n p δ ]
for some K > 0 . In fact, observe that
t n p 2 x n p 2 + α n x n x n 1 [ 2 x n p + α n x n x n 1 ] .
Combining (20), (24) and (28), one has
x n + 1 p 2 ( 1 λ n ) w n p 2 + λ n ( 1 + θ n ) 2 z n p 2 w n p 2 + θ n ( 2 + θ n ) w n p 2 β n δ x n p 2 + γ n x n p 2 + ( 1 β n γ n ) t n p 2 + 2 β n ( f ρ F ) p , w n p + θ n ( 2 + θ n ) w n p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) p , w n p δ ] ,
where sup n 1 { x n p , α n x n x n 1 , ( 2 + θ n ) w n p 2 } K for some K > 0 .
Claim 4. We prove that { x n } converges strongly to the unique solution x * Ω of the HVI (19). In fact, setting p = x * , one obtains from the last inequality that
x n + 1 x * 2 [ 1 β n ( δ ) ] x n x * 2 + β n ( δ ) [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) x * , w n x * δ ] .
Putting Γ_n = ∥x_n − x*∥², we demonstrate Γ_n → 0 (n → ∞) by considering the following cases.
Case 1. Assume that there exists an integer n_0 ≥ 1 such that {Γ_n} is nonincreasing. Then the limit lim_{n→∞} Γ_n = ς < +∞ exists and lim_{n→∞}(Γ_n − Γ_{n+1}) = 0.
Using (27), Γ n Γ n + 1 0 , β n 0 , θ n 0 , 1 μ τ n τ n + 1 1 μ and { λ n } [ λ ̲ , λ ¯ ] ( 0 , 1 ) , one has
lim sup n λ ̲ ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) lim sup n λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) lim sup n [ x n x * 2 x n + 1 x * 2 + ( β n + θ n ) K 4 ] = lim sup n ( Γ n Γ n + 1 + ( β n + θ n ) K 4 ) = 0 .
This immediately ensures that
lim n y n w n = 0 and lim n z n y n = 0 .
Hence, one gets
w n z n w n y n + y n z n 0 ( n ) .
Noticing w n x * = γ n ( x n x * ) + ( 1 γ n ) ( T n t n x * ) + β n ( f ( x n ) ρ F T n t n ) , we deduce from (25) and (26) that
x n + 1 x * 2 w n x * 2 + θ n ( 2 + θ n ) z n x * 2 γ n ( x n x * ) + ( 1 γ n ) ( T n t n x * ) 2 + 2 β n f ( x n ) ρ F T n t n , w n x * + θ n ( 2 + θ n ) z n x * 2 γ n x n x * 2 + ( 1 γ n ) T n t n x * 2 γ n ( 1 γ n ) x n T n t n 2 + 2 β n f ( x n ) ρ F T n t n w n x * + θ n ( 2 + θ n ) z n x * 2 γ n ( x n x * 2 + β n K 3 ) + ( 1 γ n ) ( x n x * 2 + β n K 3 ) γ n ( 1 γ n ) x n T n t n 2 + 2 β n f ( x n ) ρ F T n t n w n x * + θ n ( 2 + θ n ) z n x * 2 = x n x * 2 + β n K 3 γ n ( 1 γ n ) x n T n t n 2 + 2 β n f ( x n ) ρ F T n t n w n x * + θ n ( 2 + θ n ) z n x * 2 ,
which together with { γ n } [ a , b ] ( 0 , 1 ) , yields
a ( 1 b ) x n T n t n 2 γ n ( 1 γ n ) x n T n t n 2 x n x * 2 x n + 1 x * 2 + β n K 3 + 2 β n f ( x n ) ρ F T n t n w n x * + θ n ( 2 + θ n ) z n x * 2 Γ n Γ n + 1 + β n [ K 3 + 2 ( f ( x n ) + ρ F T n t n ) w n x * ] + θ n ( 2 + θ n ) z n x * 2 .
Since Γ n Γ n + 1 0 , β n 0 and θ n 0 , by the boundedness of { w n } , { x n } , { z n } , { T n t n } one has
lim n x n T n t n = 0 ,
and hence
w n x n = ( 1 γ n ) ( T n t n x n ) + β n ( f ( x n ) ρ F T n t n ) T n t n x n + β n ( f ( x n ) + ρ F T n t n ) 0 ( n ) .
This together with (30), leads to
x n z n x n w n + w n z n 0 ( n ) ,
and hence
x n T n x n x n T n t n + T n t n T n x n x n T n t n + t n x n = x n T n t n + α n x n x n 1 0 ( n ) .
On the other hand, from (20), (24) and (26) it follows that
x n + 1 x * 2 = ( 1 λ n ) w n x * 2 + λ n T n z n x * 2 λ n ( 1 λ n ) w n T n z n 2 ( 1 λ n ) w n x * 2 + λ n ( 1 + θ n ) 2 z n x * 2 λ n ( 1 λ n ) w n T n z n 2 w n x * 2 + θ n ( 2 + θ n ) z n x * 2 λ n ( 1 λ n ) w n T n z n 2 β n δ x n x * 2 + γ n x n x * 2 + ( 1 β n γ n ) t n x * 2 + β n K 2 + θ n ( 2 + θ n ) z n x * 2 λ n ( 1 λ n ) w n T n z n 2 β n δ ( x n x * 2 + β n K 3 ) + γ n ( x n x * 2 + β n K 3 ) + ( 1 β n γ n ) ( x n x * 2 + β n K 3 ) + β n K 2 + θ n ( 2 + θ n ) z n x * 2 λ n ( 1 λ n ) w n T n z n 2 = [ 1 β n ( δ ) ] ( x n x * 2 + β n K 3 ) + β n K 2 + θ n ( 2 + θ n ) z n x * 2 λ n ( 1 λ n ) w n T n z n 2 x n x * 2 + β n K 3 + β n K 2 + θ n ( 2 + θ n ) z n x * 2 λ n ( 1 λ n ) w n T n z n 2 x n x * 2 + ( β n + θ n ) K 4 λ n ( 1 λ n ) w n T n z n 2 ,
which together with { λ n } [ λ ̲ , λ ¯ ] ( 0 , 1 ) , arrives at
λ ̲ ( 1 λ ¯ ) w n T n z n 2 λ n ( 1 λ n ) w n T n z n 2 x n x * 2 x n + 1 x * 2 + ( β n + θ n ) K 4 = Γ n Γ n + 1 + ( β n + θ n ) K 4 .
Since Γ n Γ n + 1 0 , β n 0 and θ n 0 , one obtains
lim n w n T n z n = 0 .
Therefore, we have
x n + 1 x n = ( 1 λ n ) ( w n x n ) + λ n ( T n z n x n ) = ( 1 λ n ) ( w n x n ) + λ n ( T n z n w n + w n x n ) = w n x n + λ n ( T n z n w n ) w n x n + T n z n w n 0 ( n ) .
and
x n T n x n x n w n + w n T n z n + T n z n T n x n x n w n + w n T n z n + ( 1 + θ n ) z n x n 0 ( n ) .
From the boundedness of { x n } , it follows that { x n k } { x n } s.t.
lim sup n ( f ρ F ) x * , x n x * = lim k ( f ρ F ) x * , x n k x * .
Since H is reflexive and { x n } is bounded, we may assume, without loss of generality, that x n k x ˜ . Hence we get
lim sup n ( f ρ F ) x * , x n x * = lim k ( f ρ F ) x * , x n k x * = ( f ρ F ) x * , x ˜ x * .
Note that x n x n + 1 0 , x n z n 0 , x n T n x n 0 and x n T n x n 0 (due to (31)–(34)). Since T n x n T n + 1 x n 0 and x n k x ˜ with { x n k } { x n } , by Lemma 10 we infer that x ˜ Ω . Thus, using (19) and (35) one has
lim sup n ( f ρ F ) x * , x n x * = ( f ρ F ) x * , x ˜ x * 0 ,
which immediately leads to
lim sup n ( f ρ F ) x * , w n x * lim sup n [ ( f ρ F ) x * w n x n + ( f ρ F ) x * , x n x * ] 0 .
Note that { β n ( δ ) } [ 0 , 1 ] , n = 1 β n ( δ ) = , and
lim sup n [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) x * , w n x * δ ] 0 .
Consequently, applying Lemma 4 to (29), we have lim n 0 x n x * 2 = 0 .
Case 2. Assume that there exists a subsequence {Γ_{n_l}} of {Γ_n} such that Γ_{n_l} < Γ_{n_l + 1} ∀ l ∈ ℕ, where ℕ is the set of all positive integers. Define the mapping φ : ℕ → ℕ by
φ(n) := max{l ≤ n : Γ_l < Γ_{l+1}}.
In terms of Lemma 7, we obtain
Γ_{φ(n)} ≤ Γ_{φ(n)+1} and Γ_n ≤ Γ_{φ(n)+1}.
Putting p = x * , from (27) we have
lim sup n λ ̲ ( 1 μ τ ( n ) τ ( n ) + 1 ) ( y ( n ) w ( n ) 2 + z ( n ) y ( n ) 2 ) lim sup n λ ( n ) ( 1 μ τ ( n ) τ ( n ) + 1 ) ( y ( n ) w ( n ) 2 + z ( n ) y ( n ) 2 ) lim sup n ( Γ ( n ) Γ ( n ) + 1 + ( β ( n ) + θ ( n ) ) K 4 ) = 0 .
This hence ensures that
lim n y ( n ) w ( n ) = 0 and lim n z ( n ) y ( n ) = 0 .
Using the same inferences as in the proof of Case 1, we deduce that lim n x ( n ) z ( n ) = 0 ,
lim n x ( n ) T ( n ) x ( n ) = lim n x ( n ) + 1 x ( n ) = lim n x ( n ) T ( n ) x ( n ) = 0 ,
and
lim sup n ( f ρ F ) x * , w ( n ) x * 0 .
On the other hand, from (29) we obtain
β ( n ) ( δ ) Γ ( n ) Γ ( n ) Γ ( n ) + 1 + β ( n ) ( δ ) [ 3 K δ ( θ ( n ) 3 β ( n ) + α ( n ) β ( n ) x ( n ) x ( n ) 1 ) + 2 ( f ρ F ) x * , w ( n ) x * δ ] β ( n ) ( δ ) [ 3 K δ ( θ ( n ) 3 β ( n ) + α ( n ) β ( n ) x ( n ) x ( n ) 1 ) + 2 ( f ρ F ) x * , w ( n ) x * δ ] ,
which hence yields
lim sup n Γ ( n ) lim sup n [ 3 K δ ( θ ( n ) 3 β ( n ) + α ( n ) β ( n ) x ( n ) x ( n ) 1 ) + 2 ( f ρ F ) x * , w ( n ) x * δ ] 0 .
Thus, lim n x ( n ) x * 2 = 0 . Moreover, note that
x ( n ) + 1 x * 2 x ( n ) x * 2 = 2 x ( n ) + 1 x ( n ) , x ( n ) x * + x ( n ) + 1 x ( n ) 2 2 x ( n ) + 1 x ( n ) x ( n ) x * + x ( n ) + 1 x ( n ) 2 0 ( n ) .
Owing to Γ n Γ ( n ) + 1 , we get
x n x * 2 x ( n ) + 1 x * 2 x ( n ) x * 2 + 2 x ( n ) + 1 x ( n ) x ( n ) x * + x ( n ) + 1 x ( n ) 2 0 ( n ) .
That is, x n x * as n . This completes the proof. □
Theorem 2.
Suppose that T : H → H is nonexpansive and {x_n} is the sequence constructed by the modified version of Algorithm 3, i.e., for any starting points x_0, x_1 ∈ H,
t_n = x_n + α_n(x_n − x_{n−1}),  w_n = β_n f(x_n) + γ_n x_n + ((1 − γ_n)I − β_n ρF)T_n t_n,  y_n = P_C(w_n − τ_n A w_n),  z_n = P_{C_n}(w_n − τ_n A y_n),  x_{n+1} = (1 − λ_n)w_n + λ_n T z_n  ∀ n ≥ 1,
where for each n ≥ 1, α_n, τ_n and C_n are chosen as in Algorithm 3. Then {x_n} converges strongly to x* ∈ Ω, which is the unique solution to the HVI: ⟨(ρF − f)x*, p − x*⟩ ≥ 0 ∀ p ∈ Ω.
Proof. 
We divide the proof of the theorem into several steps.
Claim 1. We prove that { x n } is bounded. In fact, using the same arguments as in Claim 1 of the proof of Theorem 1, we obtain the desired assertion.
Claim 2. We prove that
λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) x n p 2 x n + 1 p 2 + β n K 4
for some K 4 > 0 . In fact, using the same argument as in Claim 2 of the proof of Theorem 1, we obtain the desired assertion.
Claim 3. We prove that
x n + 1 p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ · α n β n x n x n 1 + 2 ( f ρ F ) p , w n p δ ]
for some K > 0 . In fact, using the same argument as in Claim 3 of the proof of Theorem 1, we obtain the desired assertion.
Claim 4. We prove that { x n } converges strongly to the unique solution x * Ω of the HVI (19), with T 0 = T being a nonexpansive mapping. In fact, setting p = x * , one deduces from Claim 3 that
x n + 1 x * 2 [ 1 β n ( δ ) ] x n x * 2 + β n ( δ ) [ 3 K δ · α n β n x n x n 1 + 2 ( f ρ F ) x * , w n x * δ ] .
Putting Γ_n = ∥x_n − x*∥², we demonstrate Γ_n → 0 (n → ∞) by considering the following cases.
Case 1. Assume that there exists an integer n_0 ≥ 1 such that {Γ_n} is nonincreasing. Then the limit lim_{n→∞} Γ_n = ς < +∞ exists and lim_{n→∞}(Γ_n − Γ_{n+1}) = 0.
Using Claim 2, Γ n Γ n + 1 0 , β n 0 , 1 μ τ n τ n + 1 1 μ and { λ n } [ λ ̲ , λ ¯ ] ( 0 , 1 ) , one has
lim sup n λ ̲ ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) lim sup n λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) lim sup n ( Γ n Γ n + 1 + β n K 4 ) = 0 ,
which immediately yields
lim n y n w n = 0 and lim n z n y n = 0 .
Hence, one gets
w n z n w n y n + y n z n 0 ( n ) .
Using the same inferences as in Case 1 of the proof of Theorem 1, we deduce that lim_{n→∞}∥x_n − w_n∥ = 0,
lim n x n z n = lim n x n T n x n = 0 ,
lim n x n + 1 x n = lim n x n T n x n = 0 ,
and
lim sup n ( f ρ F ) x * , w n x * 0 .
As a result, applying Lemma 4 to (38), one concludes that lim_{n→∞}∥x_n − x*∥² = 0.
Case 2. Assume that there exists a subsequence {Γ_{n_l}} of {Γ_n} such that Γ_{n_l} < Γ_{n_l + 1} ∀ l ∈ ℕ, with ℕ being the set of all natural numbers. Let the mapping φ : ℕ → ℕ be defined as
φ(n) := max{l ≤ n : Γ_l < Γ_{l+1}}.
From Lemma 7, we have
Γ_{φ(n)} ≤ Γ_{φ(n)+1} and Γ_n ≤ Γ_{φ(n)+1}.
In the rest of the proof, using the same inferences as in Case 2 of the proof of Theorem 1, we derive the desired result. This completes the proof. □
Next, we formulate another strengthened inertial-type subgradient extragradient rule below.
Algorithm 4: Strengthened inertial-type subgradient extragradient rules
Initialization: Given τ_1 > 0, α > 0, μ ∈ (0, 1), let x_0, x_1 ∈ H be arbitrary and choose α_n such that
0 ≤ α_n ≤ α̃_n := min{α, ϵ_n/∥x_n − x_{n−1}∥} if x_n ≠ x_{n−1}, and α̃_n := α otherwise.
Iterative Steps: Compute x_{n+1} as follows:
Step 1. Set t_n = x_n + α_n(x_n − x_{n−1}), and calculate
w_n = β_n f(x_n) + γ_n t_n + ((1 − γ_n)I − β_n ρF)T_n x_n
and y_n = P_C(w_n − τ_n A w_n).
Step 2. Calculate x_{n+1} = (1 − λ_n)w_n + λ_n T^n P_{C_n}(w_n − τ_n A y_n) with
C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}.
Step 3. Update
τ_{n+1} = min{μ(∥w_n − y_n∥² + ∥z_n − y_n∥²)/(2⟨Aw_n − Ay_n, z_n − y_n⟩), τ_n} if ⟨Aw_n − Ay_n, z_n − y_n⟩ > 0, and τ_{n+1} = τ_n otherwise,
with z_n = P_{C_n}(w_n − τ_n A y_n). Again set n := n + 1 and go to Step 1.
It is worth pointing out that Lemmas 8–10 are still valid for Algorithm 4.
Theorem 3.
Suppose that {x_n} is the sequence constructed in Algorithm 4. Then {x_n} converges strongly to x* ∈ Ω provided ∥T^n x_n − T^{n+1} x_n∥ → 0, where x* ∈ Ω is the unique solution to the HVI: ⟨(ρF − f)x*, p − x*⟩ ≥ 0 ∀ p ∈ Ω.
Proof. 
Since lim_{n→∞} θ_n/β_n = 0 and 0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 1, we may assume that θ_n/β_n ≤ (ζ − δ)/2 ∀ n ≥ 1 and {γ_n} ⊂ [a, b] ⊂ (0, 1). Using the same arguments as in the proof of Theorem 1, we deduce that there exists a unique solution x* ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) of the HVI (19).
Next we show the conclusion of the theorem. To this end, we divide the remainder of the proof into several steps.
Claim 1. We prove that { x n } is bounded. In fact, using the same arguments as in Step 1 of the proof of Theorem 1, one obtains that inequalities (20)–(22) hold. So, from β n + γ n < 1 , Lemma 6 and (22) it follows that
z n p β n δ x n p + γ n t n p + ( 1 γ n β n ) x n p + β n ( f ρ F ) p β n δ ( x n p + β n K 1 ) + γ n ( x n p + β n K 1 ) + ( 1 γ n β n ) × ( x n p + β n K 1 ) + β n ( f ρ F ) p x n p + β n ( K 1 + ( f ρ F ) p ) .
Noticing x n + 1 = ( 1 λ n ) w n + λ n T n z n , from the last inequality one gets
x n + 1 p w n p + θ n w n p [ 1 β n ( δ ) 2 ] x n p + β n ( δ ) 2 · 3 ( K 1 + ( f ρ F ) p ) δ max { x n p , 3 ( K 1 + ( f ρ F ) p ) δ } .
By induction, we obtain x n p max { x 1 p , 3 ( K 1 + ( f ρ F ) p ) δ } n 1 . Thus, { x n } is bounded, and so are the sequences { t n } , { w n } , { y n } , { z n } , { f ( x n ) } , { T n x n } , { T n z n } .
Claim 2. We prove that
λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) x n p 2 x n + 1 p 2 + ( θ n + β n ) K 4
for some K_4 > 0. Using arguments similar to those for (24), one gets
w n p 2 β n δ x n p 2 + γ n t n p 2 + ( 1 β n γ n ) x n p 2 + β n K 2 ,
where sup n 1 2 ( f ρ F ) p w n p K 2 for some K 2 > 0 . Utilizing the same arguments as those of (25) and (26), we have
x n + 1 p 2 w n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + θ n ( 2 + θ n ) z n p 2 ,
and
t n p 2 x n p 2 + β n K 3 ,
where sup n 1 ( 2 K 1 x n p + β n K 1 2 ) K 3 for some K 3 > 0 . From (45)–(47), we get
x n + 1 p 2 β n δ x n p 2 + γ n ( x n p 2 + β n K 3 ) + ( 1 β n γ n ) x n p 2 + β n K 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + θ n ( 2 + θ n ) z n p 2 [ 1 β n ( δ ) ] x n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + β n K 3 + β n K 2 + θ n ( 2 + θ n ) z n p 2 x n p 2 λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) + ( θ n + β n ) K 4 ,
where sup n 1 [ K 2 + K 3 + ( 2 + θ n ) z n p 2 ] K 4 . This immediately implies that
λ n ( 1 μ τ n τ n + 1 ) ( y n w n 2 + z n y n 2 ) x n p 2 x n + 1 p 2 + ( θ n + β n ) K 4 .
Claim 3. We prove that
x n + 1 p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) p , w n p δ ]
for some K > 0 . In fact, one has
t n p 2 x n p 2 + α n x n x n 1 [ 2 x n p + α n x n x n 1 ] .
Combining (20), (45) and (49), we have
x n + 1 p 2 β n δ x n p 2 + γ n t n p 2 + ( 1 β n γ n ) x n p 2 + 2 β n ( f ρ F ) p , w n p + θ n ( 2 + θ n ) w n p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ ( θ n 3 β n + α n β n x n x n 1 ) + 2 ( f ρ F ) p , w n p δ ] ,
where sup n 1 { x n p , α n x n x n 1 , ( 2 + θ n ) w n p 2 } K for some K > 0 .
Claim 4. We prove that { x n } converges strongly to a unique solution x * Ω of the HVI (19). In fact, using the same arguments as in Claim 4 of the proof of Theorem 1, we obtain the desired assertion. □
Theorem 4.
Suppose that T : H → H is nonexpansive and {x_n} is the sequence constructed by the modified version of Algorithm 4, i.e., for any starting points x_0, x_1 ∈ H,
t_n = x_n + α_n(x_n − x_{n−1}),  w_n = β_n f(x_n) + γ_n t_n + ((1 − γ_n)I − β_n ρF)T_n x_n,  y_n = P_C(w_n − τ_n A w_n),  z_n = P_{C_n}(w_n − τ_n A y_n),  x_{n+1} = (1 − λ_n)w_n + λ_n T z_n  ∀ n ≥ 1,
where for each n ≥ 1, α_n, τ_n and C_n are chosen as in Algorithm 4. Then {x_n} converges strongly to x* ∈ Ω, which is the unique solution to the HVI: ⟨(ρF − f)x*, p − x*⟩ ≥ 0 ∀ p ∈ Ω.
Proof. 
We divide the proof of the theorem into several steps.
Claim 1. We prove that { x n } is bounded. In fact, using the same arguments as in Claim 1 of the proof of Theorem 2, we obtain the desired assertion.
Claim 2. We prove that
λ_n(1 − μτ_n/τ_{n+1})(∥y_n − w_n∥² + ∥z_n − y_n∥²) ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + β_n K_4
for some K 4 > 0 . In fact, using the same arguments as in Claim 2 of the proof of Theorem 2, we obtain the desired assertion.
Claim 3. We prove that
x n + 1 p 2 [ 1 β n ( δ ) ] x n p 2 + β n ( δ ) [ 3 K δ · α n β n x n x n 1 + 2 ( f ρ F ) p , w n p δ ]
for some K > 0 . In fact, using the same arguments as in Claim 3 of the proof of Theorem 2, we obtain the desired assertion.
Claim 4. We prove that { x n } converges strongly to a unique solution x * Ω of the HVI (19), with T 0 = T a nonexpansive mapping. In fact, using the same arguments as in Claim 4 of the proof of Theorem 2, we obtain the desired assertion. This completes the proof. □
Remark 1.
Compared with the corresponding results in Kraikaew and Saejung [20], Ceng et al. [27], Thong et al. [16] and Ceng and Shang [22], our results improve and extend them in the following aspects.
(i) 
The problem of finding an element of VI(C, A) in [20] is extended to our problem of finding an element of ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A), where T_i is nonexpansive for i = 1, …, N and T_0 = T is asymptotically nonexpansive. The Halpern subgradient extragradient method for solving the VIP in [20] is extended to our strengthened inertial-type subgradient extragradient rule with adaptive step sizes for solving the VIP and CFPP, on the basis of the Mann iteration method, the viscosity approximation method and the hybrid steepest-descent method.
(ii) 
The problem of finding an element of VI(C, A) in [16] is extended to our problem of finding an element of ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A), where T_i is nonexpansive for i = 1, …, N and T_0 = T is asymptotically nonexpansive. The inertial subgradient extragradient method for solving the VIP in [16] is extended to our strengthened inertial-type subgradient extragradient rule for solving the VIP and the CFPP of finitely many nonexpansive mappings and an asymptotically nonexpansive mapping, on the basis of the Mann iteration method, the viscosity approximation method and the hybrid steepest-descent method.
(iii) 
The problem of finding an element of ⋂_{i=1}^N Fix(T_i) ∩ VI(C, A) (where T_i is nonexpansive for i = 1, …, N) in [27] is extended to our problem of finding an element of ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A), where T_i is nonexpansive for i = 1, …, N and T_0 = T is asymptotically nonexpansive. The modified inertial subgradient extragradient method in [27] is extended to our strengthened inertial-type subgradient extragradient rule involving an asymptotically nonexpansive mapping. It is worth pointing out that the modified inertial subgradient extragradient method in [27] only concerns nonexpansive mappings.
(iv) 
While the problem of finding an element of ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) in [22] is still studied in this paper, the hybrid inertial subgradient extragradient method with line-search process in [22] is extended to our strengthened inertial-type subgradient extragradient rule with adaptive step sizes; e.g., the original inertial technique w_n = T_n x_n + α_n(T_n x_n − T_n x_{n−1}) is extended to our strengthened inertial-type iteration approach t_n = x_n + α_n(x_n − x_{n−1}) and w_n = β_n f(x_n) + γ_n x_n + ((1 − γ_n)I − β_n ρF)T_n t_n. Our strong convergence theorems are also more advantageous and more refined than the corresponding theorems in [22], because the conclusion "x_n → x* ∈ Ω ⇔ ∥x_n − x_{n+1}∥ + ∥x_n − y_n∥ → 0 (n → ∞)" in [22] is replaced by our conclusion "x_n → x* ∈ Ω".

4. Applicability and Implementability of Rules

In this section, our main results are exploited to solve the VIP and CFPP in an illustrative example. Put τ_1 = α = μ = 1/2, λ_n = 1/3, β_n = 1/(3(n + 1)), γ_n = n/(3(n + 1)) and ϵ_n = 1/(3(n + 1)²).
We next provide an example of a Lipschitz continuous and pseudomonotone mapping A, an asymptotically nonexpansive mapping T and a nonexpansive mapping T_1 with Ω = Fix(T_1) ∩ Fix(T) ∩ VI(C, A) ≠ ∅. Let C = [−2, 3] and H = ℝ with the inner product ⟨a, b⟩ = ab and induced norm ∥·∥ = |·|. The starting points x_0, x_1 are randomly chosen in H. Take ρ = 2 and f(x) = F(x) = (1/2)x ∀ x ∈ H. Then δ = κ = η = 1/2, ρ = 2 ∈ (0, 2η/κ²) = (0, 4) and
δ = 1/2 < ζ = 1 − √(1 − ρ(2η − ρκ²)) = 1.
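For completeness, the value ζ = 1 can be checked by direct substitution of ρ = 2 and η = κ = 1/2 (a short verification added here; ζ denotes the constant 1 − √(1 − ρ(2η − ρκ²)) introduced in Section 3):

```latex
\zeta \;=\; 1-\sqrt{1-\rho\,(2\eta-\rho\kappa^{2})}
      \;=\; 1-\sqrt{1-2\left(2\cdot\tfrac12-2\cdot\tfrac14\right)}
      \;=\; 1-\sqrt{1-2\cdot\tfrac12}
      \;=\; 1-0 \;=\; 1 .
```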
Let A : H → H and T, T_1 : H → H be defined as Ax := 1/(1 + |sin x|) − 1/(1 + |x|), Tx := (2/3) sin x and T_1 x := sin x for all x ∈ H. Let us show that A is pseudomonotone and Lipschitz continuous. In fact, for all x, y ∈ H we have
∥Ax − Ay∥ ≤ |(∥y∥ − ∥x∥)/((1 + ∥x∥)(1 + ∥y∥))| + |(∥sin y∥ − ∥sin x∥)/((1 + ∥sin x∥)(1 + ∥sin y∥))| ≤ ∥y − x∥/((1 + ∥x∥)(1 + ∥y∥)) + ∥sin y − sin x∥/((1 + ∥sin x∥)(1 + ∥sin y∥)) ≤ ∥x − y∥ + ∥sin x − sin y∥ ≤ 2∥x − y∥.
This ensures that A is Lipschitz continuous on H. Moreover, we claim that A is pseudomonotone. Actually, it is easy to see that, for all x, y ∈ H,
⟨Ax, y − x⟩ = (1/(1 + |sin x|) − 1/(1 + |x|))(y − x) ≥ 0 ⟹ ⟨Ay, y − x⟩ = (1/(1 + |sin y|) − 1/(1 + |y|))(y − x) ≥ 0.
Moreover, it is easy to verify that T is asymptotically nonexpansive with θ_n = (2/3)^n ∀ n ≥ 1, and that ∥T^{n+1} x_n − T^n x_n∥ → 0 as n → ∞. In fact, we note that
∥T^n x − T^n y∥ ≤ (2/3)∥T^{n−1} x − T^{n−1} y∥ ≤ ⋯ ≤ (2/3)^n∥x − y∥ ≤ (1 + θ_n)∥x − y∥,
and
∥T^{n+1} x_n − T^n x_n∥ ≤ (2/3)^{n−1}∥T² x_n − T x_n∥ = (2/3)^{n−1}∥(2/3) sin(T x_n) − (2/3) sin x_n∥ ≤ 2(2/3)^n → 0.
It is clear that Fix(T) = {0} and
lim_{n→∞} θ_n/β_n = lim_{n→∞} (2/3)^n/(1/(3(n + 1))) = 0.
In addition, it is clear that T_1 is nonexpansive and Fix(T_1) = {0}. Therefore, Ω = Fix(T_1) ∩ Fix(T) ∩ VI(C, A) = {0}. In this case, Algorithm 3 can be rewritten as follows:
t_n = x_n + α_n(x_n − x_{n−1}),  w_n = (1/(3(n + 1)))·(1/2)x_n + (n/(3(n + 1)))x_n + (2/3)T_n t_n,  y_n = P_C(w_n − τ_n A w_n),  z_n = P_{C_n}(w_n − τ_n A y_n),  x_{n+1} = (2/3)w_n + (1/3)T^n z_n  ∀ n ≥ 1
(with (1 − γ_n)I − β_n ρF = (2/3)I), where for each n ≥ 1, α_n, τ_n and C_n are chosen as in Algorithm 3. Then, by Theorem 1, we know that {x_n} converges to 0 ∈ Ω = Fix(T_1) ∩ Fix(T) ∩ VI(C, A).
In particular, since Tx := (2/3) sin x is also nonexpansive, we consider the modified version of Algorithm 3, that is,
t_n = x_n + α_n(x_n − x_{n−1}),  w_n = (1/(3(n + 1)))·(1/2)x_n + (n/(3(n + 1)))x_n + (2/3)T_n t_n,  y_n = P_C(w_n − τ_n A w_n),  z_n = P_{C_n}(w_n − τ_n A y_n),  x_{n+1} = (2/3)w_n + (1/3)T z_n  ∀ n ≥ 1,
where for each n ≥ 1, α_n, τ_n and C_n are chosen as above. Then, by Theorem 2, we know that {x_n} converges to 0 ∈ Ω = Fix(T_1) ∩ Fix(T) ∩ VI(C, A).
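The iteration above is simple enough to run directly. The following sketch implements the modified scheme for this example (H = ℝ, C = [−2, 3]); the number of iterations, the random seed and the helper names are our own assumptions made for the demonstration.

```python
import numpy as np

# Sketch of the modified Algorithm 3 for this example:
# A(x) = 1/(1+|sin x|) - 1/(1+|x|), T(x) = (2/3) sin x, T_1(x) = sin x,
# tau_1 = alpha = mu = 1/2, lambda_n = 1/3, beta_n = 1/(3(n+1)),
# gamma_n = n/(3(n+1)), eps_n = 1/(3(n+1)^2), C = [-2, 3].

A   = lambda x: 1.0 / (1.0 + abs(np.sin(x))) - 1.0 / (1.0 + abs(x))
T   = lambda x: (2.0 / 3.0) * np.sin(x)     # the (asymptotically) nonexpansive map T
T1  = lambda x: np.sin(x)                   # the nonexpansive map T_1
P_C = lambda x: min(max(x, -2.0), 3.0)      # projection onto C = [-2, 3]

def P_Cn(x, a, v):
    """Projection onto C_n = {u in R : a*(u - v) <= 0} (a half-line)."""
    return x if a * (x - v) <= 0 else v

rng = np.random.default_rng(0)
x_prev, x = rng.uniform(-2.0, 3.0, size=2)  # starting points x_0, x_1
tau, alpha_bar, mu = 0.5, 0.5, 0.5

for n in range(1, 201):
    beta, gamma, lam = 1 / (3 * (n + 1)), n / (3 * (n + 1)), 1 / 3
    eps = 1 / (3 * (n + 1) ** 2)
    diff = abs(x - x_prev)
    alpha = min(alpha_bar, eps / diff) if diff > 0 else alpha_bar

    t = x + alpha * (x - x_prev)
    w = beta * 0.5 * x + gamma * x + (2.0 / 3.0) * T1(t)   # ((1-gamma)I - beta*rho*F) = (2/3)I
    y = P_C(w - tau * A(w))
    a = w - tau * A(w) - y                                  # C_n = {u : a*(u - y) <= 0}
    z = P_Cn(w - tau * A(y), a, y)
    x_prev, x = x, (2.0 / 3.0) * w + (1.0 / 3.0) * T(z)     # x_{n+1} = (2/3) w_n + (1/3) T z_n

    denom = (A(w) - A(y)) * (z - y)                         # adaptive step-size update
    if denom > 0:
        tau = min(mu * ((w - y) ** 2 + (z - y) ** 2) / (2 * denom), tau)

print("x_n after 200 iterations (Theorem 2: x_n -> 0):", x)
```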

5. Conclusions

In this article, we have put forward two strengthened inertial-type subgradient extragradient rules with adaptive step sizes for solving the VIP and CFPP in Hilbert spaces, where the VIP denotes a pseudomonotone variational inequality problem with a Lipschitz continuous operator A, and the CFPP indicates a common fixed-point problem of finitely many nonexpansive mappings {T_i}_{i=1}^N and an asymptotically nonexpansive mapping T_0 := T. Without requiring the sequential weak continuity of the cost operator A or knowledge of its Lipschitz constant, we have demonstrated the strong convergence of the suggested rules to a common solution of the VIP and CFPP, which is the unique solution of a hierarchical variational inequality (HVI) defined on the common solution set Ω := ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A). In addition, an illustrative example is provided to demonstrate the applicability and implementability of our proposed rules.

Author Contributions

Conceptualization, L.-C.C., Y.-C.L. and J.-C.Y.; methodology, L.-C.C., C.-F.W. and J.-C.Y.; software, C.-F.W.; validation, L.-C.C., C.-F.W. and J.-C.Y.; formal analysis, L.-C.C., Y.-C.L. and J.-C.Y.; investigation, L.-C.C., C.-F.W., Y.-C.L. and J.-C.Y.; writing - original draft preparation, L.-C.C. and Y.-C.L.; writing—review and editing, L.-C.C. and Y.-C.L.; supervision, J.-C.Y.; project administration, J.-C.Y.; funding acquisition, L.-C.C. and J.-C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Lu-Chuan Ceng is partially supported by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), Innovation Program of Shanghai Municipal Education Commission (15ZZ068) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). The research of Jen-Chih Yao was supported by the grant MOST 108-2115-M-039- 005-MY3. Yeong-Cheng Liou was supported in part by the MOST Project in Taiwan (110-2410-H-037-001) and the grant from NKUST and KMU joint R&D Project (110KK002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
2. Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
3. Jolaoso, L.O.; Shehu, Y.; Yao, J.C. Inertial extragradient type method for mixed variational inequalities without monotonicity. Math. Comput. Simul. 2022, 192, 353–369.
4. Iusem, A.N.; Nasri, M. Korpelevich's method for variational inequality problems in Banach spaces. J. Glob. Optim. 2011, 50, 59–76.
5. Chen, J.F.; Liu, S.Y. Extragradient-like method for pseudomonotone equilibrium problems on Hadamard manifolds. J. Inequal. Appl. 2020, 2020, 205.
6. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
7. Yao, Y.; Shahzad, N.; Yao, J.C. Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpathian J. Math. 2021, 37, 541–550.
8. He, L.; Cui, Y.L.; Ceng, L.C.; Zhao, T.Y.; Wang, D.Q.; Hu, H.Y. Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule. J. Inequal. Appl. 2021, 2021, 146.
9. Iusem, A.N.; Mohebbi, V. An extragradient method for vector equilibrium problems on Hadamard manifolds. J. Nonlinear Var. Anal. 2021, 5, 459–476.
10. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765.
11. Bello, A.U.; Omojola, M.T.; Nnakwe, M.O. Two methods for solving split common fixed point problems of strict pseudo-contractive mappings in Hilbert spaces with applications. Appl. Set-Valued Anal. Optim. 2021, 3, 75–93.
12. Eslamian, M. A hierarchical variational inequality problem for generalized demimetric mappings with applications. J. Nonlinear Var. Anal. 2021, 5, 965–979.
13. Chen, J.F.; Liu, S.Y.; Chang, X.K. Extragradient method and golden ratio method for equilibrium problems on Hadamard manifolds. Int. J. Comput. Math. 2021, 98, 1699–1712.
14. Shehu, Y.; Gibali, G.; Sagratella, S. Inertial projection-type methods for solving quasi-variational inequalities in real Hilbert spaces. J. Optim. Theory Appl. 2020, 184, 877–894.
15. Chen, J.F.; Liu, S.Y.; Chang, X.K. Modified Tseng's extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. 2021, 100, 2627–2640.
16. Thong, D.V.; Dong, Q.L.; Liu, L.L.; Triet, N.A.; Lan, N.P. Two new inertial subgradient extragradient methods with variable step sizes for solving pseudomonotone variational inequality problems in Hilbert spaces. J. Comput. Appl. Math. 2022, in press.
17. Fan, J.J.; Qin, X. Weak and strong convergence of inertial Tseng's extragradient algorithms for solving variational inequality problems. Optimization 2021, 70, 1195–1216.
18. Fan, J.J.; Liu, L.Y.; Qin, X. A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization 2020, 69, 2199–2215.
19. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
20. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
21. Vuong, P.T.; Shehu, Y. Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms 2019, 81, 269–291.
22. Ceng, L.C.; Shang, M.J. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2021, 70, 715–740.
23. Ceng, L.C.; Shehu, Y.; Yao, J.C. Modified Mann subgradient-like extragradient rules for variational inequalities and common fixed points involving asymptotically nonexpansive mappings. Mathematics 2022, 10, 779.
24. Ceng, L.C.; Yao, J.C.; Shehu, Y. On Mann-type subgradient-like extragradient method with linear-search process for hierarchical variational inequalities for asymptotically nonexpansive mappings. Mathematics 2021, 9, 3322.
25. Deng, L.; Hu, R.; Fang, Y.P. Inertial extragradient algorithms for solving equilibrium problems without any monotonicity in Hilbert spaces. J. Comput. Appl. Math. 2022, 44, 639–663.
26. Thong, D.V. Extragradient method with a new adaptive step size for solving non-Lipschitzian pseudo-monotone variational inequalities. Carpathian J. Math. 2022, 38, 503–516.
27. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108.
28. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
29. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201.
30. Lim, T.C.; Xu, H.K. Fixed point theorems for asymptotically nonexpansive mappings. Nonlinear Anal. 1994, 22, 1345–1355.
31. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

