
On a General Extragradient Implicit Method and Its Applications to Optimization

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 School of Mathematics and Statistics, Linyi University, Linyi 276000, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 124; https://doi.org/10.3390/sym12010124
Submission received: 19 November 2019 / Revised: 2 January 2020 / Accepted: 6 January 2020 / Published: 8 January 2020
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract: Let X be a Banach space with both q-uniformly smooth and uniformly convex structures. This article introduces and considers a general extragradient implicit method for solving a general system of variational inequalities (GSVI) with the constraints of a common fixed point problem (CFPP) of a countable family of nonlinear mappings $\{S_n\}_{n=0}^{\infty}$ and a monotone variational inclusion (zero-point) problem. Here, the constraints are symmetrical and the general extragradient implicit method is based on Korpelevich's extragradient method, the implicit viscosity approximation method, Mann's iteration method, and the W-mappings constructed from $\{S_n\}_{n=0}^{\infty}$.

1. Introduction

Let X be a Banach space and J the set-valued duality mapping on X. Let $A_1 : C \to X$ and $A_2 : C \to X$ be two nonlinear nonself mappings of accretive type. In this work, we investigate the following symmetrical system problem:
$$\begin{cases} \langle \mu_1 A_1 y^* + x^* - y^*,\, J(x - x^*) \rangle \ge 0, & \forall x \in C, \\ \langle \mu_2 A_2 x^* + y^* - x^*,\, J(x - y^*) \rangle \ge 0, & \forall x \in C, \end{cases}$$
with two positive real constants $\mu_1$ and $\mu_2$. This is called a symmetrical variational system. This system was first introduced and studied in [1]. The symmetrical system arises in many convex optimization problems and finds numerous applications in the applied sciences, such as intensity-modulated radiation therapy, signal processing, image reconstruction, and so on. Indeed, the model of these problems can be rewritten as a variational inequality, which is a special case of the system; that is, the unconstrained minimization problem
$$\min_{x \in H} \bar{f}(x) := f(x) + I_C(x),$$
where $f : H \to \mathbb{R}$ is a real-valued convex function that is assumed to be continuously differentiable and $I_C(x)$ is the indicator function of C:
$$I_C(x) = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$$
There are a lot of numerical techniques for dealing with it; see, e.g., [2,3,4,5,6,7,8,9]. In addition, setting $x^* = y^*$ and $A_1 = A_2 = A$, Equation (1) becomes the generalized variational inequality, which consists of numerically finding $x^* \in C$ with $\langle A x^*,\, J(x - x^*) \rangle \ge 0$ for every $x \in C$. The (generalized) variational inequality models many real applications, such as image reconstruction in emission tomography. In addition, one knows that projection methods are efficient for such a problem [10]. In 2006, Aoyama, Iiduka, and Takahashi [11] proposed a process and proved the norm convergence of the sequences defined by their process with the aid of the weak topology.
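To make the projection approach concrete, the following is a hedged sketch in the Hilbert-space special case $X = H = \mathbb{R}^n$ (where $J = I$ and the sunny non-expansive retraction is the metric projection $P_C$). The affine operator A and the box C below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^n (plays the role of Pi_C)."""
    return np.clip(x, lo, hi)

def projection_method(A, x0, mu, iters=500):
    """Iterate x_{k+1} = P_C(x_k - mu * A(x_k)) for the VI <A x*, x - x*> >= 0 on C."""
    x = x0.copy()
    for _ in range(iters):
        x = project_box(x - mu * A(x))
    return x

# A(x) = M x + q with M symmetric positive definite, so A is inverse-strongly monotone
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 2.0])
A = lambda x: M @ x + q

x_star = projection_method(A, np.zeros(2), mu=0.1)
# At a solution the fixed-point residual ||x - P_C(x - mu * A(x))|| vanishes.
residual = np.linalg.norm(x_star - project_box(x_star - 0.1 * A(x_star)))
```

For an inverse-strongly monotone A, the map $x \mapsto P_C(x - \mu A x)$ is non-expansive for small $\mu$, which is what drives the convergence of such schemes.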
In 2013, in order to solve the above symmetrical variational system together with the common fixed points of a family of non-expansive self-mappings $\{S_n\}_{n=0}^{\infty}$ on C, Ceng et al. [12] investigated an implicit two-step iterative process via a relaxed gradient technique in a class of Banach spaces with restricted geometric structures. Let $\Pi_C$ be a sunny non-expansive retraction onto the set C, let $A_1$ be an $\alpha_1$-inverse-strongly accretive nonself operator and $A_2$ an $\alpha_2$-inverse-strongly accretive nonself operator from C to X, and let f be a contraction self-operator on C. Under the restriction $\Omega = \bigcap_{n=0}^{\infty} \mathrm{Fix}(S_n) \cap \mathrm{Fix}(\Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)) \neq \emptyset$, let $\{x_n\}$ be the vector sequence devised by
$$\begin{cases} y_n = (1 - \alpha_n)\,\Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)\,x_n + \alpha_n f(y_n), \\ x_{n+1} = (1 - \beta_n)\,S_n y_n + \beta_n x_n, \quad \forall n \ge 0, \end{cases}$$
with $0 < \kappa_2 \mu_i < 2\alpha_i$ for $i = 1, 2$, where $\{\alpha_n\}$ and $\{\beta_n\}$ are number sequences in $(0, 1)$ satisfying the conditions $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\lim_{n\to\infty} \alpha_n = 0$, $\limsup_{n\to\infty} \beta_n < 1$, and $\liminf_{n\to\infty} \beta_n > 0$. They proved norm convergence of $\{x_n\}$ to $x^* \in \Omega$. Recently, this problem has attracted much attention from authors working on convex bilevel problems; see, e.g., [13,14,15,16,17,18,19].
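The first equation of the scheme above is implicit in $y_n$. In a Hilbert-space sketch, since f is a δ-contraction and $\alpha_n < 1$, the map $y \mapsto (1 - \alpha_n)G(x_n) + \alpha_n f(y)$ is an $\alpha_n\delta$-contraction, so $y_n$ exists uniquely and can be computed by Picard iteration. The contraction f and the stand-in value for $G(x_n)$ below are illustrative assumptions.

```python
import numpy as np

def solve_implicit(Gx, f, a_n, y0, tol=1e-12, max_iter=1000):
    """Picard iteration for the implicit step y = (1 - a_n) * Gx + a_n * f(y);
    it converges because the right-hand side is an (a_n * delta)-contraction in y."""
    y = y0.copy()
    for _ in range(max_iter):
        y_new = (1 - a_n) * Gx + a_n * f(y)
        if np.linalg.norm(y_new - y) <= tol:
            break
        y = y_new
    return y_new

f = lambda y: 0.5 * y + 1.0   # a 0.5-contraction on R^2 (illustrative)
Gx = np.array([2.0, -1.0])    # stand-in for G(x_n) = Pi_C(I - mu1 A1) Pi_C(I - mu2 A2) x_n
y_n = solve_implicit(Gx, f, a_n=0.2, y0=np.zeros(2))
# Residual of the implicit equation; should be essentially zero.
err = np.linalg.norm(y_n - ((1 - 0.2) * Gx + 0.2 * f(y_n)))
```

Since the affine data admit a closed form here, $y_n = (0.8\,Gx + 0.2)/0.9$, the Picard iterate can be checked against it directly.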
Meanwhile, in order to solve Equation (1) with the common fixed point problem constraint of a countable family of non-expansive self-mappings $\{S_n\}_{n=0}^{\infty}$ on C, Song and Ceng [20] found a general iterative scheme in a Banach space with both uniformly convex and q-uniformly smooth structures (whose smoothness constant is $\kappa_q$, where $1 < q \le 2$). Let $\Pi_C, A_1, A_2, G$ be the same operators as above. One lets $\Omega = \bigcap_{n=0}^{\infty} \mathrm{Fix}(S_n) \cap \mathrm{GSVI}(C, A_1, A_2)$ and supposes that f is an L-Lipschitzian nonself mapping with constant $L \ge 0$ and F is a k-Lipschitz, $\eta$-strongly accretive single-valued nonself operator. Let $\tau = \rho(\eta - \kappa_q \rho^{q-1} k^q / q)$ and assume $0 < \rho^{q-1} < q\eta/(\kappa_q k^q)$, $0 < \mu_i^{q-1} < q\alpha_i/\kappa_q$, and $\tau > L\gamma > 0$. For arbitrarily given $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
$$\begin{cases} y_n = (1 - \beta_n)\,x_n + \beta_n\,\Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)\,x_n, \\ x_{n+1} = \Pi_C\big[\alpha_n \gamma f(x_n) + \gamma_n x_n + ((1 - \gamma_n)I - \alpha_n \rho F)\,S_n y_n\big], \quad \forall n \ge 0, \end{cases}$$
where $\{\gamma_n\}, \{\beta_n\}, \{\alpha_n\}$ are real control sequences satisfying suitable parameter conditions. They proved convergence of $\{x_n\}$ to $x^* \in \Omega$ in the sense of norm.
Suppose that A is a q-order $\alpha$-inverse-strongly accretive self-operator on X and $B : X \to 2^X$ is an accretive operator such that the range of $I + \lambda B$ is the whole space. In 2017, in order to solve the variational inclusion (VI) of obtaining $x^* \in X$ such that $0 \in (A + B)x^*$, Chang et al. [21] suggested and devised a viscosity implicit generalized rule in the setting of smooth Banach spaces that also possess uniformly convex structures. They proved that $\{x_n\}$ converges to $x^* \in \Omega$ in norm. The method employed by Chang et al. [21] has been applied to popular equilibrium problems; see, e.g., [22,23,24,25,26,27].
Motivated by the above research results, the purpose of this research is to obtain, in a Banach space with uniform convexity and q-uniform smoothness (for example, $L^p$ with $p > 1$), a feasibility point in the solution set of Equation (1) involving a CFPP of nonlinear operators $\{S_n\}_{n=0}^{\infty}$ and a variational inclusion (VI). We suggest and investigate a general method of extragradient implicit type, which is based on Korpelevich's extragradient method, the implicit viscosity approximation method, and the W-mappings constructed from $\{S_n\}_{n=0}^{\infty}$. We then prove that the vector sequences generated by the proposed implicit method converge in norm to a solution of the symmetrical variational Equation (1) with the VI and CFPP constraints. Finally, our results are applied to solving the CFPP of non-expansive and strictly pseudocontractive operators, and convex minimization problems in Hilbert spaces. Our results improve and extend some related recent results in [12,20,21,28,29].

2. Preliminaries

Let q > 1 be a real number. The set-valued duality mapping $J_q : X \to 2^{X^*}$ is defined as
$$J_q(x) := \{\phi \in X^* : \langle x, \phi \rangle = \|x\|^q \text{ and } \|\phi\| = \|x\|^{q-1}\}, \quad \forall x \in X.$$
It is known that the duality mapping $J_q$ defined above, from X into the family of nonempty (by the Hahn–Banach theorem) weak* compact subsets of $X^*$, satisfies $J_q(-x) = -J_q(x)$ for all $x \in X$. Under the structures of smoothness and uniform convexity, one knows that there exists a continuous, convex, and strictly increasing function $g : [0, 2r] \to \mathbb{R}$ such that $g(0) = 0$ and $g(\|x - y\|) \le \|x\|^2 + \|y\|^2 - 2\langle x, J(y) \rangle$ for all $x, y \in B_r = \{y \in X : \|y\| \le r\}$. We suppose that $\Pi$ maps C into some subset D. One recalls that $\Pi$ is called sunny provided that $\Pi[\Pi(x) + t(x - \Pi(x))] = \Pi(x)$ whenever $\Pi(x) + t(x - \Pi(x)) \in C$ for $x \in C$ and $t \ge 0$. $\Pi$ is a retraction provided $\Pi^2 = \Pi$.
Lemma 1.
[30] We suppose that q > 1 and X is a q-uniformly smooth Banach space with the generalized duality mapping $J_q$. Then, for any given $x, y \in X$, the following inequality holds: $\|x + y\|^q \le \|x\|^q + q\langle y, j_q(x + y) \rangle$ for $j_q(x + y) \in J_q(x + y)$, and $\mathrm{Fix}((I + \lambda B)^{-1}(I - \lambda A)) = (A + B)^{-1}0$ for all $\lambda > 0$. Let α, β, and γ be three positive real constants with $\alpha + \beta + \gamma = 1$. In addition, if X is uniformly convex, then there exists a continuous, convex, and strictly increasing function $g : [0, \infty) \to [0, \infty)$ such that $\|\alpha x + \beta y + \gamma z\|^2 + \alpha\beta\, g(\|x - y\|) \le \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2$ for all $x, y, z \in B_r$ and $\alpha, \beta, \gamma \in [0, 1]$ with $\alpha + \beta + \gamma = 1$.
Proposition 1.
[31] We suppose that X is a q-uniformly smooth space with $q \in (1, 2]$. Then, $\|x + y\|^q \le \|x\|^q + q\langle y, J_q(x) \rangle + \kappa_q\|y\|^q$ for any vectors $x, y \in X$. In the special case q = 2, $\|x + y\|^2 \le \|x\|^2 + 2\langle y, J_2(x) \rangle + \kappa_2\|y\|^2$ for any vectors $x, y \in X$.
From now on, one assumes that A is a set-valued operator from C to $2^X$. A is called an accretive operator if $\langle u - v, j_q(x - y) \rangle \ge 0$ for some $j_q(x - y) \in J_q(x - y)$, where $u \in Ax$ and $v \in Ay$. A is called an $\alpha$-inverse-strongly accretive operator if $\langle u - v, j_q(x - y) \rangle \ge \alpha\|u - v\|^q$ for some $j_q(x - y) \in J_q(x - y)$, where $\alpha > 0$, $u \in Ax$, and $v \in Ay$. If $X = (I + \lambda A)C$ for all $\lambda > 0$, then A is called m-accretive. For an m-accretive operator, one can define the single-valued operator $J_\lambda^A = (I + \lambda A)^{-1}$, which is commonly called the resolvent operator of A.
Lemma 2.
[32] In a Banach space X, one has the resolvent identity $J_\lambda x = J_\mu\big(\frac{\mu}{\lambda}x + (1 - \frac{\mu}{\lambda})J_\lambda x\big)$ for all $x \in X$ and $\mu, \lambda > 0$. Let $J_\lambda^A$ be the associated resolvent operator of A. Then, $J_\lambda^A$ is a single-valued Lipschitz continuous operator and $\mathrm{Fix}(J_\lambda^A) = A^{-1}0$, where $A^{-1}0 = \{x \in C : 0 \in Ax\}$; if the setting is reduced to Hilbert spaces, m-accretiveness is equivalent to maximal monotonicity.
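As a concrete Hilbert-space illustration of the resolvent identity in Lemma 2: for $A = \partial|\cdot|$ (the subdifferential of the absolute value, a maximal monotone operator), the resolvent $J_\lambda = (I + \lambda A)^{-1}$ is the soft-thresholding map, and the identity can be verified numerically. The choice of A here is an assumption made for illustration only.

```python
import numpy as np

def resolvent(x, lam):
    """J_lambda for A = subdifferential of |.|: the soft-thresholding map."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.linspace(-3.0, 3.0, 13)
lam, mu = 1.0, 0.4
# Resolvent identity: J_lam(x) = J_mu( (mu/lam) x + (1 - mu/lam) J_lam(x) )
lhs = resolvent(x, lam)
rhs = resolvent((mu / lam) * x + (1.0 - mu / lam) * resolvent(x, lam), mu)
identity_gap = np.max(np.abs(lhs - rhs))
```

The gap is zero up to floating-point roundoff, matching the lemma.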
Proposition 2.
[33] Let X be a uniformly convex and q-uniformly smooth Banach space. Assume that r > 0 is some positive real number, $A : C \to X$ is an α-inverse-strongly accretive mapping of order q, and $B : C \to 2^X$ is an m-accretive operator. Set $T_\lambda := J_\lambda^B(I - \lambda A)$. Then, there exists a continuous, convex, and strictly increasing function $\phi : \mathbb{R}^+ \to \mathbb{R}^+$ with $\phi(0) = 0$ such that
$$\|T_\lambda x - T_\lambda y\|^q + \lambda(\alpha q - \lambda^{q-1}\kappa_q)\|Ax - Ay\|^q \le \|x - y\|^q - \phi\big(\|(I - J_\lambda^B)(I - \lambda A)y - (I - J_\lambda^B)(I - \lambda A)x\|\big),$$
for all $x, y \in \tilde{B}_r$, a ball in C, where $\kappa_q$ is the q-uniformly smooth constant of X. In particular, if $0 < \lambda^{q-1} \le \alpha q/\kappa_q$, then $T_\lambda$ is non-expansive.
Lemma 3.
[20] Let X be q-uniformly smooth and $A : C \to X$ be q-order α-inverse-strongly accretive. Then, the following inequality holds:
$$\lambda(q\alpha - \kappa_q\lambda^{q-1})\|Ax - Ay\|^q + \|(I - \lambda A)x - (I - \lambda A)y\|^q \le \|x - y\|^q.$$
In particular, if $0 < \lambda^{q-1} \le q\alpha/\kappa_q$, then the complementary operator $I - \lambda A$ is non-expansive. Suppose that $\Pi_C$ is a non-expansive sunny retraction from X onto C. Let both mappings $A_1 : C \to X$ and $A_2 : C \to X$ be inverse-strongly accretive and let G be the self-mapping on C defined by $Gx := \Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)x$ for all $x \in C$. If $0 < \mu_i^{q-1} \le q\alpha_i/\kappa_q$ for i = 1, 2, then G is a non-expansive self-mapping on C. A pair $(x^*, y^*)$, where both $x^*$ and $y^*$ are in C, solves the variational system (1) if and only if $x^* = \Pi_C(y^* - \mu_1 A_1 y^*)$, where $y^* = \Pi_C(x^* - \mu_2 A_2 x^*)$.
Lemma 4.
[34] Let $x_{n+1} = \alpha_n x_n + (1 - \alpha_n)y_n$ for all $n \ge 0$ and $\limsup_{n\to\infty}(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|) \le 0$, where $\{\alpha_n\}$ is a sequence satisfying $\limsup_{n\to\infty}\alpha_n < 1$ and $\liminf_{n\to\infty}\alpha_n > 0$, and $\{x_n\}$ and $\{y_n\}$ are bounded sequences in a Banach space. Then, $\lim_{n\to\infty}\|y_n - x_n\| = 0$.
Let $\{\zeta_n\}$ be a real sequence in (0,1) and $S_i$ a non-expansive mapping defined on C for each $i \in \{0, 1, 2, \ldots\}$. Next, one defines a mapping associated with n by
$$\begin{aligned} U_{n,n} &= \zeta_n S_n U_{n,n+1} + (1 - \zeta_n)I, \\ U_{n,n-1} &= \zeta_{n-1} S_{n-1} U_{n,n} + (1 - \zeta_{n-1})I, \\ &\;\;\vdots \\ U_{n,k} &= \zeta_k S_k U_{n,k+1} + (1 - \zeta_k)I, \\ &\;\;\vdots \\ W_n = U_{n,0} &= \zeta_0 S_0 U_{n,1} + (1 - \zeta_0)I, \end{aligned}$$
where U n , n + 1 = I . The W n , called W-mapping, is a non-expansive mapping.
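The recursion above can be sketched numerically in $\mathbb{R}^n$: starting from $U_{n,n+1}x = x$ and folding in $S_n, \ldots, S_0$ from the inside out. The non-expansive maps $S_k$ below (averaged contractions toward a common fixed point p) and the weights are illustrative assumptions, not data from the paper.

```python
import numpy as np

def W_n(x, S_list, zetas):
    """Evaluate W_n x = U_{n,0} x via U_{n,n+1} = I and
    U_{n,k} x = zeta_k * S_k(U_{n,k+1} x) + (1 - zeta_k) * x, for k = n, ..., 0."""
    u = x.copy()                               # u = U_{n,n+1} x = x
    for S, zeta in zip(reversed(S_list), reversed(zetas)):
        u = zeta * S(u) + (1.0 - zeta) * x     # u = U_{n,k} x
    return u

p = np.array([1.0, -2.0])
# Illustrative non-expansive maps with common fixed point p.
S_list = [lambda v, t=t: p + t * (v - p) for t in (0.9, 0.5, 0.7)]
zetas = [0.5, 0.5, 0.5]

fixed_gap = np.linalg.norm(W_n(p, S_list, zetas) - p)      # expect W_n p = p
a, b = np.array([3.0, 0.0]), np.array([-1.0, 2.0])
lipschitz_ok = (np.linalg.norm(W_n(a, S_list, zetas) - W_n(b, S_list, zetas))
                <= np.linalg.norm(a - b) + 1e-12)
```

The check reproduces two facts recorded in Lemma 5 below: $W_n$ is non-expansive and fixes every common fixed point of the $S_k$.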
Lemma 5.
[35] Let $\{S_n\}_{n=0}^{\infty}$ be a countable family of non-expansive self-mappings on C, a subset of a strictly convex space, with $\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n) \neq \emptyset$, and let $\{\zeta_n\}_{n=0}^{\infty}$ be a real sequence such that $0 < \zeta_n \le b < 1$ for all $n \ge 0$. Then, the following statements hold:
(i) 
the limit lim n U n , k x exists for all x C and k 0 ;
(ii) 
W n is non-expansive and Fix ( W n ) = i = 0 n Fix ( S i ) n 0 ;
(iii) 
the mapping W : C C defined by W x : = lim n W n x = lim n U n , 0 x x C , is a non-expansive mapping satisfying Fix ( W ) = n = 0 Fix ( S n ) and it is called the W-mapping generated by S 0 , S 1 , and ζ 0 , ζ 1 , .
Using the same arguments as in the proof of [[36], Lemma 4], we obtain the following.
Proposition 3.
Let $\{S_n\}_{n=0}^{\infty}$ and $\{\zeta_n\}_{n=0}^{\infty}$ be as in Lemma 5. Let D be any bounded set in C. One has $\lim_{n\to\infty}\sup_{x \in D}\|W_n x - Wx\| = 0$.
Lemma 6.
[37] Let $a_{n+1} \le (1 - \lambda_n)a_n + \lambda_n\gamma_n$ for all $n \ge 0$, where $\{\lambda_n\}$ and $\{\gamma_n\}$ are sequences of real numbers such that $\{\lambda_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty}\lambda_n = \infty$, and either $\limsup_{n\to\infty}\gamma_n \le 0$ or $\sum_{n=0}^{\infty}|\lambda_n\gamma_n| < \infty$. Then, $\lim_{n\to\infty}a_n = 0$.

3. Convergence Results

Theorem 1.
Suppose that X is a uniformly convex and q-uniformly smooth Banach space, where $1 < q \le 2$. Suppose that B is a set-valued m-accretive operator and A is a single-valued α-inverse-strongly accretive operator. Suppose that $A_1$ is a single-valued $\alpha_1$-inverse-strongly accretive operator and $A_2$ is a single-valued $\alpha_2$-inverse-strongly accretive operator. Suppose that f is a contraction defined on the set C with contractive coefficient $\delta \in (0, 1)$ and $\{W_n\}$ is the sequence defined by Equation (3). Suppose that $\Pi_C$ is a non-expansive sunny retraction from X onto the set C and $\Omega = \bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n) \cap \mathrm{GSVI}(C, A_1, A_2) \cap (B + A)^{-1}0 \neq \emptyset$, where $\mathrm{GSVI}(C, A_1, A_2)$ is the fixed point set of $G := \Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)$ with $0 < \mu_1^{q-1}\kappa_q < q\alpha_1$ and $0 < \mu_2^{q-1}\kappa_q < q\alpha_2$. Define a sequence $\{x_n\}$ as follows:
$$\begin{cases} u_n = \Pi_C(y_n - \mu_2 A_2 y_n), \\ v_n = \Pi_C(u_n - \mu_1 A_1 u_n), \\ y_n = \beta_n x_n + \gamma_n W_n\big(t_n x_n + (1 - t_n)J_{\lambda_n}^B(I - \lambda_n A)v_n\big) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n \ge 0, \end{cases}$$
where $\{\lambda_n\} \subset \big(0, (q\alpha/\kappa_q)^{\frac{1}{q-1}}\big)$ and $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}, \{t_n\} \subset (0, 1)$ satisfy the following conditions:
(i) 
α n + β n + γ n = 1 , and n = 0 α n = ;
(ii) 
lim n | β n β n 1 | = lim n | γ n γ n 1 | = lim n α n = lim n | t n t n 1 | = 0 ;
(iii) 
lim inf n γ n t n ( 1 t n ) > 0 and lim sup n γ n ( 1 t n ) < 1 ;
(iv) 
lim inf n β n γ n > 0 , lim inf n δ n > 0 and lim sup n δ n < 1 ;
(v) 
$0 < \bar{\lambda} \le \lambda_n$ for all $n \ge 0$ and $\lim_{n\to\infty}\lambda_n = \lambda < (q\alpha/\kappa_q)^{\frac{1}{q-1}}$.
Then, $x_n \to x^* \in \Omega$ strongly.
Proof. 
Re-write process Equation (4) as
$$\begin{cases} y_n = \beta_n x_n + \gamma_n W_n(t_n x_n + (1 - t_n)T_n G y_n) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n \ge 0, \end{cases}$$
where $T_n := J_{\lambda_n}^B(I - \lambda_n A)$ for all $n \ge 0$. From $\{\lambda_n\} \subset \big(0, (q\alpha/\kappa_q)^{\frac{1}{q-1}}\big)$ and Proposition 2, we observe, for each n, that $T_n$ is a non-expansive self-mapping on C. Since $\alpha_n + \beta_n + \gamma_n = 1$, we know that
$$\alpha_n\delta + \beta_n + \gamma_n t_n + \gamma_n(1 - t_n) = \alpha_n\delta + \gamma_n + \beta_n = 1 - \alpha_n(1 - \delta).$$
For each n, one defines a self-mapping $F_n$ on C by $F_n(x) = \alpha_n f(x_n) + \beta_n x_n + \gamma_n W_n(t_n x_n + (1 - t_n)T_n Gx)$ for all $x \in C$. Thus,
$$\|F_n(x) - F_n(y)\| = \gamma_n\|W_n(t_n x_n + (1 - t_n)T_n Gx) - W_n(t_n x_n + (1 - t_n)T_n Gy)\| \le \gamma_n(1 - t_n)\|T_n Gx - T_n Gy\| \le \gamma_n(1 - t_n)\|x - y\|.$$
Since 0 < γ n ( 1 t n ) < 1 , one has a unique vector y n C satisfying
$$y_n = \alpha_n f(x_n) + \beta_n x_n + \gamma_n W_n(t_n x_n + (1 - t_n)T_n G y_n).$$
The rest of the proof is divided into several steps. □
Step 1. We show that the iterative sequence $\{x_n\}$ is bounded. Take a fixed $p \in \Omega = \bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n) \cap \mathrm{GSVI}(C, A_1, A_2) \cap (A + B)^{-1}0$ arbitrarily. Lemma 5 guarantees $W_n p = p$, $Gp = p$, and $T_n p = p$. Moreover, the nonexpansivity of $T_n$ and G (due to Proposition 2) leads to
$$\begin{aligned}\|p - y_n\| &= \|\beta_n(p - x_n) + \gamma_n[p - W_n(t_n x_n + (1 - t_n)T_n G y_n)] + \alpha_n(p - f(x_n))\| \\ &\le \beta_n\|p - x_n\| + \gamma_n\|p - W_n(t_n x_n + (1 - t_n)T_n G y_n)\| + \alpha_n(\|f(x_n) - f(p)\| + \|p - f(p)\|) \\ &\le \beta_n\|p - x_n\| + \gamma_n[t_n\|p - x_n\| + (1 - t_n)\|p - T_n G y_n\|] + \alpha_n(\delta\|p - x_n\| + \|f(p) - p\|) \\ &\le \gamma_n(1 - t_n)\|p - y_n\| + (\alpha_n\delta + \beta_n + \gamma_n t_n)\|p - x_n\| + \alpha_n\|f(p) - p\|,\end{aligned}$$
which therefore implies that
$$\begin{aligned}\|p - y_n\| &\le \frac{\alpha_n\delta + \beta_n + \gamma_n t_n}{1 - \gamma_n(1 - t_n)}\|p - x_n\| + \frac{\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\| \\ &= \frac{1 - \alpha_n(1 - \delta) - \gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\|p - x_n\| + \frac{\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\| \\ &= \Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|p - x_n\| + \frac{\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\|.\end{aligned}$$
Thus, from Equation (5), Equation (6), and Lemma 5 (i), we have
$$\begin{aligned}\|p - x_{n+1}\| &\le (1 - \delta_n)\|p - W_n y_n\| + \delta_n\|p - x_n\| \le (1 - \delta_n)\|p - y_n\| + \delta_n\|p - x_n\| \\ &\le \delta_n\|p - x_n\| + (1 - \delta_n)\Big\{\Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|p - x_n\| + \frac{\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\|\Big\} \\ &= \Big[1 - \frac{(1 - \delta_n)(1 - \delta)\alpha_n}{1 - \gamma_n(1 - t_n)}\Big]\|p - x_n\| + \frac{(1 - \delta_n)(1 - \delta)\alpha_n}{1 - \gamma_n(1 - t_n)}\cdot\frac{\|f(p) - p\|}{1 - \delta} \\ &\le \max\Big\{\frac{\|f(p) - p\|}{1 - \delta},\ \|p - x_n\|\Big\}.\end{aligned}$$
By induction, it follows that all the iterative sequences are bounded.
Step 2. One proves that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. Setting $z_n := t_n x_n + (1 - t_n)T_n G y_n$ and using Equation (5), we have
$$z_n - z_{n-1} = t_n(x_n - x_{n-1}) + (t_n - t_{n-1})(x_{n-1} - T_{n-1}Gy_{n-1}) + (1 - t_n)(T_n G y_n - T_{n-1}Gy_{n-1}),$$
and
$$y_n - y_{n-1} = (\alpha_n - \alpha_{n-1})f(x_{n-1}) + \alpha_n(f(x_n) - f(x_{n-1})) + \beta_n(x_n - x_{n-1}) + (\beta_n - \beta_{n-1})x_{n-1} + \gamma_n(W_n z_n - W_{n-1}z_{n-1}) + (\gamma_n - \gamma_{n-1})W_{n-1}z_{n-1}.$$
By using Lemma 2 and Proposition 2, one deduces that
$$\begin{aligned}\|T_n G y_n - T_{n-1}Gy_{n-1}\| &\le \|T_n G y_n - T_n G y_{n-1}\| + \|T_n G y_{n-1} - T_{n-1}Gy_{n-1}\| \\ &\le \|y_n - y_{n-1}\| + \|J_{\lambda_n}^B(I - \lambda_n A)Gy_{n-1} - J_{\lambda_{n-1}}^B(I - \lambda_{n-1}A)Gy_{n-1}\| \\ &\le \|y_n - y_{n-1}\| + \|J_{\lambda_n}^B(I - \lambda_n A)Gy_{n-1} - J_{\lambda_{n-1}}^B(I - \lambda_n A)Gy_{n-1}\| + \|J_{\lambda_{n-1}}^B(I - \lambda_n A)Gy_{n-1} - J_{\lambda_{n-1}}^B(I - \lambda_{n-1}A)Gy_{n-1}\| \\ &= \|y_n - y_{n-1}\| + \Big\|J_{\lambda_{n-1}}^B\Big(\frac{\lambda_{n-1}}{\lambda_n}I + \Big(1 - \frac{\lambda_{n-1}}{\lambda_n}\Big)J_{\lambda_n}^B\Big)(I - \lambda_n A)Gy_{n-1} - J_{\lambda_{n-1}}^B(I - \lambda_n A)Gy_{n-1}\Big\| + \|J_{\lambda_{n-1}}^B(I - \lambda_n A)Gy_{n-1} - J_{\lambda_{n-1}}^B(I - \lambda_{n-1}A)Gy_{n-1}\| \\ &\le \|y_n - y_{n-1}\| + \Big|1 - \frac{\lambda_{n-1}}{\lambda_n}\Big|\,\|J_{\lambda_n}^B(I - \lambda_n A)Gy_{n-1} - (I - \lambda_n A)Gy_{n-1}\| + |\lambda_n - \lambda_{n-1}|\,\|AGy_{n-1}\| \\ &\le |\lambda_n - \lambda_{n-1}|M_1 + \|y_n - y_{n-1}\|,\end{aligned}$$
where $\sup_{n\ge 1}\big\{\frac{1}{\bar{\lambda}}\|J_{\lambda_n}^B(I - \lambda_n A)Gy_{n-1} - (I - \lambda_n A)Gy_{n-1}\| + \|AGy_{n-1}\|\big\} \le M_1$ for some $M_1 > 0$. Thus, it follows from Equation (8) that
$$\begin{aligned}\|W_n z_n - W_{n-1}z_{n-1}\| &\le \|W_n z_{n-1} - W_{n-1}z_{n-1}\| + \|W_n z_n - W_n z_{n-1}\| \\ &\le |t_n - t_{n-1}|\,\|x_{n-1} - T_{n-1}Gy_{n-1}\| + t_n\|x_n - x_{n-1}\| + (1 - t_n)\|T_n G y_n - T_{n-1}Gy_{n-1}\| + \|W_n z_{n-1} - W_{n-1}z_{n-1}\| \\ &\le |t_n - t_{n-1}|\,\|x_{n-1} - T_{n-1}Gy_{n-1}\| + t_n\|x_n - x_{n-1}\| + (1 - t_n)\big[\|y_n - y_{n-1}\| + |\lambda_n - \lambda_{n-1}|M_1\big] + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|.\end{aligned}$$
This inequality, together with Equation (7), implies that
$$\begin{aligned}\|y_n - y_{n-1}\| &\le |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1})\| + \beta_n\|x_n - x_{n-1}\| + \alpha_n\|f(x_n) - f(x_{n-1})\| + |\beta_n - \beta_{n-1}|\,\|x_{n-1}\| + \gamma_n\|W_n z_n - W_{n-1}z_{n-1}\| + |\gamma_n - \gamma_{n-1}|\,\|W_{n-1}z_{n-1}\| \\ &\le |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1})\| + \alpha_n\delta\|x_n - x_{n-1}\| + \beta_n\|x_n - x_{n-1}\| + |\beta_n - \beta_{n-1}|\,\|x_{n-1}\| \\ &\quad + \gamma_n\big\{t_n\|x_n - x_{n-1}\| + |t_n - t_{n-1}|\,\|x_{n-1} - T_{n-1}Gy_{n-1}\| + (1 - t_n)\big[\|y_n - y_{n-1}\| + |\lambda_n - \lambda_{n-1}|M_1\big] + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|\big\} + |\gamma_n - \gamma_{n-1}|\,\|W_{n-1}z_{n-1}\| \\ &\le \gamma_n(1 - t_n)\|y_n - y_{n-1}\| + (\alpha_n\delta + \beta_n + \gamma_n t_n)\|x_{n-1} - x_n\| + (|\alpha_{n-1} - \alpha_n| + |\beta_{n-1} - \beta_n| + |t_{n-1} - t_n| + |\gamma_{n-1} - \gamma_n| + |\lambda_{n-1} - \lambda_n|)M_2 + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|,\end{aligned}$$
where $\sup_{n\ge 0}\big\{\|f(x_n)\| + \|x_n\| + \|T_n G y_n\| + M_1 + \|W_n z_n\|\big\} \le M_2$ for some $M_2 > 0$. Then,
$$\begin{aligned}\|y_n - y_{n-1}\| &\le \frac{\alpha_n\delta + \beta_n + \gamma_n t_n}{1 - \gamma_n(1 - t_n)}\|x_{n-1} - x_n\| + \frac{1}{1 - \gamma_n(1 - t_n)}\big[(|\alpha_{n-1} - \alpha_n| + |\beta_{n-1} - \beta_n| + |t_{n-1} - t_n| + |\gamma_{n-1} - \gamma_n| + |\lambda_{n-1} - \lambda_n|)M_2 + \|W_{n-1}z_{n-1} - W_n z_{n-1}\|\big] \\ &= \Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - x_{n-1}\| + \frac{1}{1 - \gamma_n(1 - t_n)}\big[(|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}| + |\gamma_n - \gamma_{n-1}| + |t_n - t_{n-1}| + |\lambda_n - \lambda_{n-1}|)M_2 + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|\big] \\ &\le \|x_n - x_{n-1}\| + \frac{1}{1 - \gamma_n(1 - t_n)}\big[(|\gamma_{n-1} - \gamma_n| + |\beta_{n-1} - \beta_n| + |\alpha_{n-1} - \alpha_n| + |t_{n-1} - t_n| + |\lambda_{n-1} - \lambda_n|)M_2 + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|\big],\end{aligned}$$
and hence
$$\begin{aligned}\|W_n y_n - W_{n-1}y_{n-1}\| &\le \|W_n y_n - W_n y_{n-1}\| + \|W_n y_{n-1} - W_{n-1}y_{n-1}\| \\ &\le \|x_n - x_{n-1}\| + \frac{1}{1 - \gamma_n(1 - t_n)}\big[(|\gamma_{n-1} - \gamma_n| + |\beta_{n-1} - \beta_n| + |\alpha_{n-1} - \alpha_n| + |t_{n-1} - t_n| + |\lambda_{n-1} - \lambda_n|)M_2 + \|W_n z_{n-1} - W_{n-1}z_{n-1}\|\big] + \|W_n y_{n-1} - W_{n-1}y_{n-1}\|.\end{aligned}$$
Consequently,
$$\|W_n y_n - W_{n-1}y_{n-1}\| - \|x_n - x_{n-1}\| \le \frac{1}{1 - \gamma_n(1 - t_n)}\big[(|\gamma_{n-1} - \gamma_n| + |\beta_{n-1} - \beta_n| + |\alpha_{n-1} - \alpha_n| + |\lambda_{n-1} - \lambda_n| + |t_{n-1} - t_n|)M_2 + \|W_{n-1}z_{n-1} - W_n z_{n-1}\|\big] + \|W_n y_{n-1} - W_{n-1}y_{n-1}\|.$$
Since $\lim_{n\to\infty}\sup_{x\in D}\|W_n x - Wx\| = 0$, where $D = \{y_n : n \ge 0\} \cup \{z_n : n \ge 0\}$ is a bounded subset of C (due to Proposition 3), we know that
$$\lim_{n\to\infty}\|W_n y_{n-1} - W_{n-1}y_{n-1}\| = \lim_{n\to\infty}\|W_n z_{n-1} - W_{n-1}z_{n-1}\| = 0.$$
Note that $\lim_{n\to\infty}\alpha_n = 0$, $\lim_{n\to\infty}\lambda_n = \lambda$, and $\liminf_{n\to\infty}(1 - \gamma_n(1 - t_n)) > 0$. Since $|\beta_n - \beta_{n-1}|$, $|\gamma_n - \gamma_{n-1}|$, and $|t_n - t_{n-1}|$ all go to 0 as $n \to \infty$ (due to conditions (ii), (iii)), one obtains
$$\limsup_{n\to\infty}\big(\|W_n y_n - W_{n-1}y_{n-1}\| - \|x_n - x_{n-1}\|\big) \le 0.$$
Lemma 4 guarantees $\lim_{n\to\infty}\|W_n y_n - x_n\| = 0$. Hence, we obtain
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}(1 - \delta_n)\|W_n y_n - x_n\| = 0.$$
Step 3. We show that $\|x_n - y_n\| \to 0$ and $\|x_n - Gx_n\| \to 0$ as $n \to \infty$. Indeed, for simplicity, we denote $\bar{p} := \Pi_C(I - \mu_2 A_2)p$. Note that $u_n = \Pi_C(I - \mu_2 A_2)y_n$ and $v_n = \Pi_C(I - \mu_1 A_1)u_n$. Then, $v_n = Gy_n$. From Lemma 3, we have
$$\|u_n - \bar{p}\|^q \le \|(I - \mu_2 A_2)y_n - (I - \mu_2 A_2)p\|^q \le \|y_n - p\|^q - \mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q.$$
In the same way, we get
$$\|v_n - p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q \le \|u_n - \bar{p}\|^q.$$
Substituting Equation (10) into Equation (11), we obtain
$$\|v_n - p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q \le \|y_n - p\|^q - \mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q.$$
By Lemma 1, we infer from Equation (5) and Equation (12) that $\|z_n - p\|^q \le t_n\|x_n - p\|^q + (1 - t_n)\|v_n - p\|^q$, and hence
$$\begin{aligned}\|y_n - p\|^q &= \|\beta_n(x_n - p) + \gamma_n(W_n z_n - p) + \alpha_n(f(p) - p) + \alpha_n(f(x_n) - f(p))\|^q \\ &\le \|\beta_n(x_n - p) + \gamma_n(W_n z_n - p) + \alpha_n(f(x_n) - f(p))\|^q + q\alpha_n\langle f(p) - p, J_q(y_n - p)\rangle \\ &\le \alpha_n\|f(x_n) - f(p)\|^q + \beta_n\|x_n - p\|^q + \gamma_n\|W_n z_n - p\|^q + q\alpha_n\langle f(p) - p, J_q(y_n - p)\rangle \\ &\le \alpha_n\delta\|x_n - p\|^q + \beta_n\|x_n - p\|^q + \gamma_n\big[t_n\|x_n - p\|^q + (1 - t_n)\|v_n - p\|^q\big] + q\alpha_n\|f(p) - p\|\,\|y_n - p\|^{q-1} \\ &\le (\alpha_n\delta + \beta_n + \gamma_n t_n)\|x_n - p\|^q + \gamma_n(1 - t_n)\big[\|p - y_n\|^q - \mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q - \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] + q\alpha_n\|p - y_n\|^{q-1}\|p - f(p)\|.\end{aligned}$$
It yields that
$$\|y_n - p\|^q \le \Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - p\|^q - \frac{\gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[\mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] + \frac{q\alpha_n}{1 - \gamma_n(1 - t_n)}\|p - y_n\|^{q-1}\|p - f(p)\|.$$
Combing this with Equation (5), one says
$$\begin{aligned}\|x_{n+1} - p\|^q &\le \delta_n\|x_n - p\|^q + (1 - \delta_n)\|y_n - p\|^q \\ &\le \delta_n\|x_n - p\|^q + (1 - \delta_n)\Big\{\Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - p\|^q - \frac{\gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[\mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] + \frac{q\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\|\,\|y_n - p\|^{q-1}\Big\} \\ &= \Big(1 - \frac{\alpha_n(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - p\|^q - \frac{\gamma_n(1 - \delta_n)(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[\mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] + \frac{q(1 - \delta_n)\alpha_n}{1 - \gamma_n(1 - t_n)}\|f(p) - p\|\,\|y_n - p\|^{q-1} \\ &\le \|x_n - p\|^q - \frac{(1 - \delta_n)\gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[\mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] + \alpha_n M_3,\end{aligned}$$
where $\sup_{n\ge 0}\big\{\frac{q(1 - \delta_n)}{1 - \gamma_n(1 - t_n)}\|p - y_n\|^{q-1}\|p - f(p)\|\big\} \le M_3$ for some real $M_3 > 0$. Thus, it follows from Equation (13) and Proposition 1 that
$$\begin{aligned}&\frac{(1 - \delta_n)\gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[\mu_2(q\alpha_2 - \kappa_q\mu_2^{q-1})\|A_2 y_n - A_2 p\|^q + \mu_1(q\alpha_1 - \kappa_q\mu_1^{q-1})\|A_1 u_n - A_1\bar{p}\|^q\big] \\ &\quad\le \|x_n - p\|^q - \|x_{n+1} - p\|^q + \alpha_n M_3 \le q\|x_n - x_{n+1}\|\,\|x_{n+1} - p\|^{q-1} + \kappa_q\|x_n - x_{n+1}\|^q + \alpha_n M_3.\end{aligned}$$
Since $0 < \mu_i^{q-1} < q\alpha_i/\kappa_q$ for i = 1, 2, from Equation (9), $\liminf_{n\to\infty}\gamma_n(1 - t_n) > 0$, $\liminf_{n\to\infty}(1 - \delta_n) > 0$, and $\lim_{n\to\infty}\alpha_n = 0$, we get
$$\lim_{n\to\infty}\|A_2 y_n - A_2 p\| = 0 \quad \text{and} \quad \lim_{n\to\infty}\|A_1 u_n - A_1\bar{p}\| = 0.$$
Utilizing Propositions 1 and 3, we have
$$\begin{aligned}\|u_n - \bar{p}\|^2 &= \|\Pi_C(I - \mu_2 A_2)y_n - \Pi_C(I - \mu_2 A_2)p\|^2 \le \langle (I - \mu_2 A_2)y_n - (I - \mu_2 A_2)p,\, J(u_n - \bar{p})\rangle \\ &= \langle y_n - p, J(u_n - \bar{p})\rangle + \mu_2\langle A_2 p - A_2 y_n, J(u_n - \bar{p})\rangle \\ &\le \tfrac{1}{2}\big[\|y_n - p\|^2 + \|u_n - \bar{p}\|^2 - g_1(\|y_n - u_n - (p - \bar{p})\|)\big] + \mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\|,\end{aligned}$$
which implies that
$$\|u_n - \bar{p}\|^2 + g_1(\|y_n - u_n - (p - \bar{p})\|) \le \|y_n - p\|^2 + 2\mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\|.$$
Following the above line, one can derive
$$\|v_n - p\|^2 + g_2(\|u_n - v_n + (p - \bar{p})\|) \le \|u_n - \bar{p}\|^2 + 2\mu_1\|A_1\bar{p} - A_1 u_n\|\,\|v_n - p\|.$$
Combining Equation (15) and Equation (16), one further derives
$$\|v_n - p\|^2 + g_1(\|y_n - u_n - (p - \bar{p})\|) + g_2(\|u_n - v_n + (p - \bar{p})\|) \le \|y_n - p\|^2 + 2\mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\| + 2\mu_1\|A_1\bar{p} - A_1 u_n\|\,\|v_n - p\|.$$
Utilizing Lemma 1, we obtain from Equation (5) and Equation (17) that
$$\|z_n - p\|^2 \le t_n\|x_n - p\|^2 + (1 - t_n)\|T_n G y_n - p\|^2 - t_n(1 - t_n)g_3(\|x_n - T_n G y_n\|) \le t_n\|x_n - p\|^2 + (1 - t_n)\|v_n - p\|^2 - t_n(1 - t_n)g_3(\|x_n - T_n G y_n\|),$$
and hence
y n p 2 + 2 α n J ( y n p ) , f ( p ) p + β n ( x n p ) + γ n ( W n z n p ) + α n ( f ( x n ) f ( p ) ) 2 β n x n p 2 + γ n p W n z n 2 + α n f ( p ) f ( x n ) 2 β n γ n g 4 ( x n W n z n ) + 2 α n f ( p ) p , J ( y n p ) α n δ x n p 2 + β n x n p 2 + γ n [ t n x n p 2 + ( 1 t n ) v n p 2 t n ( 1 t n ) g 3 ( x n T n G y n ) ] + 2 α n f ( p ) p y n p β n γ n g 4 ( x n W n z n ) α n δ x n p 2 + β n x n p 2 + γ n { t n x n p 2 + ( 1 t n ) [ y n p 2 g 1 ( y n u n ( p p ¯ ) ) g 2 ( u n v n + ( p p ¯ ) ) + 2 μ 2 A 2 p A 2 y n u n p ¯ + 2 μ 1 A 1 p ¯ A 1 u n v n p ] t n ( 1 t n ) g 3 ( x n T n G y n ) } + 2 α n f ( p ) p y n p β n γ n g 4 ( x n W n z n ) ( α n δ + β n + γ n t n ) x n p 2 + γ n ( 1 t n ) y n p 2 γ n ( 1 t n ) [ g 1 ( y n u n ( p p ¯ ) ) + g 2 ( u n v n + ( p p ¯ ) ) ] + 2 μ 2 A 2 p A 2 y n u n p ¯ + 2 μ 1 A 1 p ¯ A 1 u n v n p + 2 α n f ( p ) p y n p γ n t n ( 1 t n ) g 3 ( x n T n G y n ) β n γ n g 4 ( x n W n z n ) ,
which immediately yields
$$\begin{aligned}\|y_n - p\|^2 &\le \Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - p\|^2 - \frac{\gamma_n(1 - t_n)}{1 - \gamma_n(1 - t_n)}\big[g_1(\|y_n - u_n - (p - \bar{p})\|) + g_2(\|u_n - v_n + (p - \bar{p})\|)\big] \\ &\quad + \frac{2}{1 - \gamma_n(1 - t_n)}\big[\mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\| + \mu_1\|A_1\bar{p} - A_1 u_n\|\,\|v_n - p\| + \alpha_n\|p - y_n\|\,\|f(p) - p\|\big] \\ &\quad - \frac{1}{1 - \gamma_n(1 - t_n)}\big[\gamma_n t_n(1 - t_n)g_3(\|x_n - T_n G y_n\|) + \beta_n\gamma_n g_4(\|x_n - W_n z_n\|)\big].\end{aligned}$$
This together with Equation (5) leads to
x n + 1 p 2 δ n x n p 2 + ( 1 δ n ) y n p 2 δ n x n p 2 + ( 1 δ n ) { ( 1 α n ( 1 δ ) 1 γ n ( 1 t n ) ) x n p 2 γ n ( 1 t n ) 1 γ n ( 1 t n ) [ g 1 ( y n u n ( p p ¯ ) ) + g 2 ( u n v n + ( p p ¯ ) ) ] + 2 1 γ n ( 1 t n ) [ μ 2 A 2 p A 2 y n u n p ¯ + μ 1 A 1 p ¯ A 1 u n v n p + α n f ( p ) p y n p ] 1 1 γ n ( 1 t n ) [ γ n t n ( 1 t n ) g 3 ( x n T n G y n ) + β n γ n g 4 ( x n W n z n ) ] } ( 1 α n ( 1 δ n ) ( 1 δ ) 1 γ n ( 1 t n ) ) x n p 2 1 δ n 1 γ n ( 1 t n ) [ γ n ( 1 t n ) ( g 1 ( y n u n ( p p ¯ ) ) + g 2 ( u n v n + ( p p ¯ ) ) ) + γ n t n ( 1 t n ) g 3 ( x n T n G y n ) + β n γ n g 4 ( x n W n z n ) ] + 2 1 γ n ( 1 t n ) [ μ 2 A 2 p A 2 y n u n p ¯ + μ 1 A 1 p ¯ A 1 u n v n p + α n f ( p ) p y n p ] x n p 2 1 δ n 1 γ n ( 1 t n ) [ γ n ( 1 t n ) ( g 1 ( y n u n ( p p ¯ ) ) + g 2 ( u n v n + ( p p ¯ ) ) ) + γ n t n ( 1 t n ) g 3 ( x n T n G y n ) + β n γ n g 4 ( x n W n z n ) ] + 2 1 γ n ( 1 t n ) [ μ 2 A 2 p A 2 y n × × u n p ¯ + μ 1 A 1 p ¯ A 1 u n v n p + α n p y n p f ( p ) ] .
It yields that
$$\begin{aligned}&\frac{1 - \delta_n}{1 - \gamma_n(1 - t_n)}\big[\gamma_n(1 - t_n)\big(g_1(\|y_n - u_n - (p - \bar{p})\|) + g_2(\|u_n - v_n + (p - \bar{p})\|)\big) + \gamma_n t_n(1 - t_n)g_3(\|x_n - T_n G y_n\|) + \beta_n\gamma_n g_4(\|x_n - W_n z_n\|)\big] \\ &\quad\le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \frac{2}{1 - \gamma_n(1 - t_n)}\big[\mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\| + \mu_1\|A_1\bar{p} - A_1 u_n\|\,\|v_n - p\| + \alpha_n\|y_n - p\|\,\|f(p) - p\|\big] \\ &\quad\le (\|x_{n+1} - p\| + \|x_n - p\|)\|x_n - x_{n+1}\| + \frac{2}{1 - \gamma_n(1 - t_n)}\big[\mu_2\|A_2 p - A_2 y_n\|\,\|u_n - \bar{p}\| + \mu_1\|A_1\bar{p} - A_1 u_n\|\,\|v_n - p\| + \alpha_n\|f(p) - p\|\,\|y_n - p\|\big].\end{aligned}$$
Utilizing Equation (9) and Equation (14), from $\liminf_{n\to\infty}(1 - \delta_n) > 0$, $\liminf_{n\to\infty}\gamma_n t_n(1 - t_n) > 0$, and $\liminf_{n\to\infty}\beta_n\gamma_n > 0$, we conclude that $\lim_{n\to\infty}g_1(\|y_n - u_n - (p - \bar{p})\|) = 0$, $\lim_{n\to\infty}g_2(\|u_n - v_n + (p - \bar{p})\|) = 0$, $\lim_{n\to\infty}g_3(\|x_n - T_n G y_n\|) = 0$, and $\lim_{n\to\infty}g_4(\|x_n - W_n z_n\|) = 0$. Utilizing the properties of $g_1, g_2, g_3$, and $g_4$, we deduce that
$$\lim_{n\to\infty}\|y_n - u_n - (p - \bar{p})\| = \lim_{n\to\infty}\|u_n - v_n + (p - \bar{p})\| = \lim_{n\to\infty}\|x_n - T_n G y_n\| = \lim_{n\to\infty}\|x_n - W_n z_n\| = 0.$$
From Equation (18), we get
$$\|y_n - Gy_n\| = \|y_n - v_n\| \le \|y_n - u_n - (p - \bar{p})\| + \|u_n - v_n + (p - \bar{p})\| \to 0 \quad (n \to \infty).$$
In the meantime, again from Equation (5), we have $y_n - x_n = \alpha_n(f(x_n) - x_n) + \gamma_n(W_n z_n - x_n)$. Hence, from Equation (18), we get $\|y_n - x_n\| \le \alpha_n\|f(x_n) - x_n\| + \|W_n z_n - x_n\| \to 0$ as $n \to \infty$. This together with Equation (19) implies that
$$\|x_n - Gx_n\| \le \|x_n - y_n\| + \|y_n - Gy_n\| + \|Gy_n - Gx_n\| \le 2\|x_n - y_n\| + \|y_n - Gy_n\| \to 0 \quad (n \to \infty).$$
Step 4. We show that $\|x_n - Wx_n\| \to 0$, $\|x_n - T_\lambda x_n\| \to 0$, and $\|x_n - \Gamma x_n\| \to 0$ as $n \to \infty$, where $Wx = \lim_{n\to\infty}W_n x$ for all $x \in C$, $T_\lambda = J_\lambda^B(I - \lambda A)$, and $\Gamma x = \theta_1 Wx + \theta_2 Gx + \theta_3 T_\lambda x$ for all $x \in C$, with constants $\theta_1, \theta_2, \theta_3 \in (0, 1)$ satisfying $\theta_1 + \theta_2 + \theta_3 = 1$. Indeed, since $x_{n+1} - x_n + x_n - y_n = \delta_n(x_n - y_n) + (1 - \delta_n)(W_n y_n - y_n)$, from $\|x_n - x_{n+1}\| \to 0$ and $\|x_n - y_n\| \to 0$, we obtain
$$\|W_n y_n - y_n\| = \frac{1}{1 - \delta_n}\|x_{n+1} - x_n + (1 - \delta_n)(x_n - y_n)\| \le \frac{\|x_{n+1} - x_n\| + \|x_n - y_n\|}{1 - \delta_n} \to 0 \quad (n \to \infty),$$
which together with Proposition 3 and x n y n 0 implies that
$$\lim_{n\to\infty}\|Wx_n - x_n\| = 0.$$
Furthermore, utilizing the same method used for Equation (8), one arrives at
$$\|T_n y_n - T_\lambda y_n\| \le \Big|1 - \frac{\lambda}{\lambda_n}\Big|\,\|J_{\lambda_n}^B(I - \lambda_n A)y_n - (I - \lambda_n A)y_n\| + |\lambda_n - \lambda|\,\|Ay_n\| = \Big|1 - \frac{\lambda}{\lambda_n}\Big|\,\|T_n y_n - (I - \lambda_n A)y_n\| + |\lambda_n - \lambda|\,\|Ay_n\|.$$
Since lim n λ n = λ and the sequences { y n } , { T n y n } , { A y n } are bounded, we get
$$\lim_{n\to\infty}\|T_n y_n - T_\lambda y_n\| = 0.$$
By utilizing Lemma 1, we deduce from Equation (18), Equation (19), Equation (22), and x n y n 0 that
$$\lim_{n\to\infty}\|T_\lambda x_n - x_n\| = 0.$$
We now consider the mapping $\Gamma x = \theta_1 Wx + \theta_2 Gx + \theta_3 T_\lambda x$ for all $x \in C$, with constants $\theta_1, \theta_2, \theta_3 \in (0, 1)$ satisfying $\theta_1 + \theta_2 + \theta_3 = 1$. We then have
$$\|x_n - \Gamma x_n\| = \|\theta_1(x_n - Wx_n) + \theta_2(x_n - Gx_n) + \theta_3(x_n - T_\lambda x_n)\| \le \theta_1\|x_n - Wx_n\| + \theta_2\|x_n - Gx_n\| + \theta_3\|x_n - T_\lambda x_n\|.$$
From Equation (20), Equation (21), Equation (23), and Equation (24), we get
$$\lim_{n\to\infty}\|x_n - \Gamma x_n\| = 0.$$
Step 5. We show that
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*)\rangle \le 0,$$
where $x^* = \text{s-}\lim_{t\to 0}x_t$, with $x_t$ being the unique fixed point of the contraction $x \mapsto (1 - t)\Gamma x + tf(x)$ for each $t \in (0, 1)$. By Lemma 1, we conclude that
$$\|x_n - x_t\|^2 \le f_n(t) + 2t\|x_t - x_n\|^2 + (1 + t^2 - 2t)\|x_t - x_n\|^2 + 2t\langle f(x_t) - x_t, J(x_t - x_n)\rangle,$$
where
$$f_n(t) = (1 - t)^2\big(\|\Gamma x_n - x_n\| + 2\|x_t - x_n\|\big)\|x_n - \Gamma x_n\| \to 0 \quad (n \to \infty).$$
Equation (27) yields that
$$2t\langle x_t - f(x_t), J(x_t - x_n)\rangle \le f_n(t) + t^2\|x_n - x_t\|^2.$$
Letting n in Equation (29), one arrives at
$$\limsup_{n\to\infty}\langle x_t - f(x_t), J(x_t - x_n)\rangle \le \frac{t}{2}M_4,$$
where $\sup_{n, t}\{\|x_t - x_n\|^2\} \le M_4$ for some $M_4 > 0$. Further letting t go to 0 in Equation (30), we have
$$\limsup_{t\to 0}\limsup_{n\to\infty}\langle x_t - f(x_t), J(x_t - x_n)\rangle \le 0.$$
Thus,
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*)\rangle \le \limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*) - J(x_n - x_t)\rangle + (1 + \delta)\|x_t - x^*\|\limsup_{n\to\infty}\|x_n - x_t\| + \limsup_{n\to\infty}\langle f(x_t) - x_t, J(x_n - x_t)\rangle.$$
Taking into account that $\lim_{t\to 0}\|x_t - x^*\| = 0$, we have
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*)\rangle = \limsup_{t\to 0}\limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*)\rangle \le \limsup_{t\to 0}\limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*) - J(x_n - x_t)\rangle.$$
Since the space is smooth, we conclude from Equation (26) that
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, J(y_n - x^*)\rangle = \limsup_{n\to\infty}\big\{\langle f(x^*) - x^*, J(x_n - x^*)\rangle + \langle f(x^*) - x^*, J(y_n - x^*) - J(x_n - x^*)\rangle\big\} = \limsup_{n\to\infty}\langle f(x^*) - x^*, J(x_n - x^*)\rangle \le 0.$$
Step 6. We show that $\|x_n - x^*\| \to 0$ as $n \to \infty$. Indeed, we observe that
$$\begin{aligned}\|y_n - x^*\|^2 &= \|\alpha_n(f(x_n) - f(x^*)) + \beta_n(x_n - x^*) + \gamma_n(W_n z_n - x^*) + \alpha_n(f(x^*) - x^*)\|^2 \\ &\le \alpha_n\|f(x_n) - f(x^*)\|^2 + \beta_n\|x_n - x^*\|^2 + \gamma_n\|z_n - x^*\|^2 + 2\alpha_n\langle f(x^*) - x^*, J(y_n - x^*)\rangle \\ &\le \alpha_n\delta\|x_n - x^*\|^2 + \beta_n\|x_n - x^*\|^2 + \gamma_n\big(t_n\|x_n - x^*\|^2 + (1 - t_n)\|y_n - x^*\|^2\big) + 2\alpha_n\langle f(x^*) - x^*, J(y_n - x^*)\rangle,\end{aligned}$$
which hence yields
$$\|y_n - x^*\|^2 \le \Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 - \gamma_n(1 - t_n)}\langle f(x^*) - x^*, J(y_n - x^*)\rangle.$$
Thus,
$$\begin{aligned}\|x_{n+1} - x^*\|^2 &\le \delta_n\|x_n - x^*\|^2 + (1 - \delta_n)\|W_n y_n - x^*\|^2 \\ &\le \delta_n\|x_n - x^*\|^2 + (1 - \delta_n)\Big\{\Big(1 - \frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big)\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 - \gamma_n(1 - t_n)}\langle f(x^*) - x^*, J(y_n - x^*)\rangle\Big\} \\ &= \Big[1 - \frac{\alpha_n(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)}\Big]\|x_n - x^*\|^2 + \frac{\alpha_n(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)}\cdot\frac{2\langle f(x^*) - x^*, J(y_n - x^*)\rangle}{1 - \delta}.\end{aligned}$$
Since $\liminf_{n\to\infty}\frac{(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)} > 0$, $\big\{\frac{\alpha_n(1 - \delta)}{1 - \gamma_n(1 - t_n)}\big\} \subset (0, 1)$, and $\sum_{n=0}^{\infty}\alpha_n = \infty$, we know that $\big\{\frac{\alpha_n(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)}\big\} \subset (0, 1)$ and $\sum_{n=0}^{\infty}\frac{\alpha_n(1 - \delta_n)(1 - \delta)}{1 - \gamma_n(1 - t_n)} = \infty$. Utilizing Lemma 6 and Equation (32), one gets from Equation (34) that $\|x_n - x^*\| \to 0$ as n tends to infinity. This completes the proof.
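As a sanity check on the overall scheme, the following is a hedged numerical sketch of method (4) in the Hilbert-space case $X = \mathbb{R}^2$, under heavily simplified illustrative data (none of it from the paper): $S_n = I$ so $W_n = I$; $A_1 = A_2 = 0$ so $G = \Pi_C = P_C$; and B the normal cone of C, so $J_\lambda^B = P_C$ and $\Omega$ reduces to the solution set of $\mathrm{VI}(C, A)$. The implicit step is solved by inner Picard iteration, exactly as in the contraction argument of the proof.

```python
import numpy as np

P_C = lambda x: np.clip(x, 0.0, 1.0)     # projection onto C = [0,1]^2; here J_lam^B = P_C

M = np.array([[2.0, 0.5], [0.5, 1.0]])   # A x = M x + q_v, monotone (M positive definite)
q_v = np.array([-1.0, 2.0])
A = lambda x: M @ x + q_v
f = lambda x: 0.5 * x                    # 0.5-contraction
lam, t = 0.1, 0.5                        # lambda_n = lam, t_n = t (constant, illustrative)

x = np.array([0.8, 0.8])
for n in range(1, 5001):
    a_n = 1.0 / (n + 1)                  # alpha_n -> 0 with sum alpha_n = infinity
    b_n = g_n = (1.0 - a_n) / 2.0        # beta_n, gamma_n with a_n + b_n + g_n = 1
    d_n = 0.5                            # delta_n
    # Implicit step: y = b_n x + g_n (t x + (1-t) T_n(G y)) + a_n f(x),
    # a g_n (1-t)-contraction in y, solved by Picard iteration.
    y = x.copy()
    for _ in range(30):
        Ty = P_C(P_C(y) - lam * A(P_C(y)))   # T_n G y with G = P_C
        y = b_n * x + g_n * (t * x + (1.0 - t) * Ty) + a_n * f(x)
    x = (1.0 - d_n) * y + d_n * x            # x_{n+1} = (1 - d_n) W_n y_n + d_n x_n, W_n = I
# x should approximate the unique point of VI(C, A) = (A + B)^{-1} 0
residual = np.linalg.norm(x - P_C(x - lam * A(x)))
```

With these data the VI has the unique solution on the boundary of the box, and the iterate settles near it, with the residual of the fixed-point characterization tending to zero as $\alpha_n \to 0$.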
Let $q > 1$. A mapping $T : C\to C$ is said to be $\eta$-strictly pseudocontractive of order $q$ if, for each $x, y\in C$, there exists $j_q(x - y)\in J_q(x - y)$ such that $\langle Tx - Ty, j_q(x - y)\rangle\le\|x - y\|^q - \eta\|(x - y) - (Tx - Ty)\|^q$ for some $\eta\in(0, 1)$. It is clear that $T : C\to C$ is $\eta$-strictly pseudocontractive of order $q$ iff $I - T$ is $q$-order $\eta$-inverse-strongly accretive.
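In a Hilbert space ($q = 2$, so $j_q$ reduces to the ordinary inner-product pairing), this equivalence can be checked numerically. The sketch below uses the illustrative choice $T = -I$, for which $A = I - T = 2I$ is $\eta$-inverse-strongly monotone precisely when $\eta\le 1/2$; $T$, $\eta$, and the sampled points are our own toy choices, not objects from the paper.

```python
import numpy as np

# Hilbert-space case q = 2: T is eta-strictly pseudocontractive iff
# A = I - T satisfies <Ax - Ay, x - y> >= eta * ||Ax - Ay||^2
# (eta-inverse-strong monotonicity).  Toy check with T = -x on R^2,
# for which A = 2I and eta = 1/2 is sharp.
rng = np.random.default_rng(0)
T = lambda x: -x
A = lambda x: x - T(x)          # A = I - T = 2I
eta = 0.5

for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.dot(A(x) - A(y), x - y)
    rhs = eta * np.linalg.norm(A(x) - A(y)) ** 2
    assert lhs >= rhs - 1e-10   # inverse-strong monotonicity holds
print("I - T is 1/2-inverse-strongly monotone for T = -I")
```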
Corollary 1.
Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space, where $1 < q\le 2$. Let $B : C\to 2^X$ be an $m$-accretive operator and $A : C\to X$ be a $q$-order $\alpha$-inverse-strongly accretive operator. Let $\Pi_C$ be a non-expansive sunny retraction onto $C$ and let $T$ be a $q$-order $\eta$-strictly pseudocontractive self-mapping defined on $C$ such that $\Omega = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap\operatorname{Fix}(T)\cap(A + B)^{-1}0\ne\emptyset$. Let $f$ be a $\delta$-contractive self-mapping defined on $C$ with constant $\delta\in(0, 1)$ and $\{W_n\}$ be the mapping sequence defined by Equation (3). Define a sequence $\{x_n\}$ as follows:
$$\begin{cases} y_n = \beta_n x_n + \gamma_n W_n\bigl(t_n x_n + (1 - t_n)J_{\lambda_n}^{B}(I - \lambda_n A)((1 - l)I + lT)y_n\bigr) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n\ge 0, \end{cases}$$
where $0 < l < \min\{1, (q\eta/\kappa_q)^{1/(q-1)}\}$, $\{\lambda_n\}\subset\bigl(0, (q\alpha/\kappa_q)^{1/(q-1)}\bigr)$ and $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}, \{t_n\}\subset(0, 1)$ satisfy the following conditions:
(i) 
n = 0 α n = and α n + β n + γ n = 1 ;
(ii) 
lim n | β n β n 1 | = lim n | γ n γ n 1 | = lim n α n = lim n | t n t n 1 | = 0 ;
(iii) 
lim sup n γ n t n ( 1 t n ) > 0 and lim inf n γ n ( 1 t n ) < 1 ;
(iv) 
lim inf n β n γ n > 0 , lim inf n δ n > 0 and lim sup n δ n < 1 ;
(v) 
$0 < \bar{\lambda}\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n = \lambda < (q\alpha/\kappa_q)^{1/(q-1)}$.
Then, $x_n\to x^*\in\Omega$ strongly.
Proof. 
In Theorem 1, we put $A_1 = I - T$, $A_2 = 0$ and $\mu_1 = l$, where $0 < l < \min\{1, (q\eta/\kappa_q)^{1/(q-1)}\}$. Then, GSVI (1) is equivalent to the variational inequality: find $x^*\in C$ such that $\langle A_1 x^*, J(x - x^*)\rangle\ge 0$ for all $x\in C$. In this case, $A_1 : C\to X$ is $q$-order $\eta$-inverse-strongly accretive. It is not hard to see that $\operatorname{Fix}(T) = VI(C, A_1)$. Indeed, for $l\in(0, 1)$, we observe that
$$p\in VI(C, A_1) \iff \langle A_1 p, J(x - p)\rangle\ge 0\;\forall x\in C \iff p = \Pi_C(p - lA_1 p) \iff p = \Pi_C(p - l(I - T)p) = \Pi_C((1 - l)p + lTp) \iff p\in\operatorname{Fix}(T).$$
Thus, we obtain that $\Omega = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap GSVI(C, A_1, A_2)\cap(A + B)^{-1}0 = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap\operatorname{Fix}(T)\cap(A + B)^{-1}0$, and $\Pi_C(I - \mu_1 A_1)\Pi_C(I - \mu_2 A_2)y_n = \Pi_C(I - \mu_1 A_1)y_n = ((1 - l)I + lT)y_n$. Thus, Equation (4) reduces to Equation (35). Therefore, the desired result follows from Theorem 1. □

4. Subresults

4.1. Variational Inequality Problem

In this section, the underlying space is restricted to a Hilbert space $H$. Let $A : C\to H$, where $C$ is a nonempty subset of $H$, be a single-valued operator. Let us recall the classical variational inequality problem (VIP): find $x^*\in C$ such that $\langle Ax^*, x - x^*\rangle\ge 0$ for all $x\in C$. The set of solutions of the VIP is denoted by $VI(C, A)$. Let $I_C$ be the indicator function of $C$, given by
$$I_C(x) = \begin{cases} 0, & \text{if } x\in C, \\ +\infty, & \text{if } x\notin C. \end{cases}$$
One finds that $I_C$ is a proper, convex, and lower semicontinuous function and that its subdifferential $\partial I_C$ is a maximal monotone operator. For $\lambda > 0$, the resolvent of $\partial I_C$ is denoted by $J_\lambda^{\partial I_C}$, i.e., $J_\lambda^{\partial I_C} = (I + \lambda\partial I_C)^{-1}$. We denote the normal cone of $C$ at $u$ by $N_C(u)$, i.e., $N_C(u) = \{w\in H : \langle w, v - u\rangle\le 0\;\forall v\in C\}$. Note that
$$\partial I_C(u) = \{w\in H : I_C(u) + \langle w, v - u\rangle\le I_C(v)\;\forall v\in H\} = \{w\in H : \langle w, v - u\rangle\le 0\;\forall v\in C\} = N_C(u).$$
Thus, we know that, for $x\in H$ and $u\in C$,
$$x - u\in\lambda N_C(u) \iff u = J_\lambda^{\partial I_C}(x) \iff u = P_C(x) \iff \langle x - u, v - u\rangle\le 0\;\forall v\in C.$$
Hence, we get $VI(C, A) = (A + \partial I_C)^{-1}0$.
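Concretely, in $H = \mathbb{R}^2$ the resolvent $J_\lambda^{\partial I_C}$ is just the metric projection $P_C$, for every $\lambda > 0$. The sketch below takes $C$ to be the closed unit ball (an illustrative choice) and verifies numerically the variational characterization $\langle x - u, v - u\rangle\le 0$ for $u = P_C(x)$ and sampled $v\in C$.

```python
import numpy as np

# For C the closed unit ball in R^2, the resolvent (I + lam * N_C)^{-1}
# is the metric projection P_C, independently of lam > 0.  We check the
# characterization <x - u, v - u> <= 0 for u = P_C(x) and sampled v in C.
def proj_ball(x):
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(1)
x = np.array([3.0, 4.0])        # a point outside the ball
u = proj_ball(x)                # its projection, u = (0.6, 0.8)
for _ in range(200):
    v = proj_ball(rng.standard_normal(2))   # a point of C
    assert np.dot(x - u, v - u) <= 1e-10
print("P_C(x) =", u)
```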
Next, putting $B = \partial I_C$ in Corollary 1, we can obtain the following convergence theorem.
Theorem 2.
Let the mapping $A : C\to H$ be $\alpha$-inverse-strongly monotone, and $T : C\to C$ be a 2-order $\eta$-strictly pseudocontractive mapping such that $\Omega = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap VI(C, A)\cap\operatorname{Fix}(T)\ne\emptyset$. Let $\{W_n\}$ be the mapping sequence defined by Equation (3) and $f$ be a $\delta$-contractive self-mapping with contractive constant $\delta\in(0, 1)$. Define a sequence $\{x_n\}$ by
$$\begin{cases} y_n = \beta_n x_n + \gamma_n W_n\bigl(t_n x_n + (1 - t_n)P_C(I - \lambda_n A)((1 - l)I + lT)y_n\bigr) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n\ge 0, \end{cases}$$
where $0 < l < \min\{1, 2\eta\}$, $\{\lambda_n\}\subset(0, 2\alpha)$ and $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}, \{t_n\}\subset(0, 1)$ satisfy the following conditions:
(i) 
n = 0 α n = and α n + β n + γ n = 1 ;
(ii) 
lim n | β n β n 1 | = lim n | γ n γ n 1 | = lim n α n = lim n | t n t n 1 | = 0 ;
(iii) 
$\limsup_{n\to\infty}\gamma_n t_n(1 - t_n) > 0$ and $\liminf_{n\to\infty}\gamma_n(1 - t_n) < 1$;
(iv) 
lim inf n β n γ n > 0 , lim inf n δ n > 0 and lim sup n δ n < 1 ;
(v) 
$0 < \bar{\lambda}\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n = \lambda < 2\alpha$.
Then, $x_n\to x^*\in\Omega$ strongly.
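The scheme of Theorem 2 can be exercised in a deliberately trivialized toy setting: $H = \mathbb{R}^2$, $C = H$ (so $P_C = I$), $A = I$ (which is $1$-inverse-strongly monotone), $T = I$, $W_n = I$, and $f(x) = x/2$ ($\delta = 1/2$), so that $\Omega = VI(C, A) = \{0\}$. All parameter choices below are our own illustrative picks satisfying conditions (i)–(v); since the $y_n$-step is linear here, the implicit equation can be solved in closed form.

```python
import numpy as np

# Toy run of the implicit scheme with C = H = R^2, A = I, T = I, W_n = I,
# f(x) = x/2, l = 1/2, lam_n = 1 (< 2*alpha = 2), t_n = 1/2, delta_n = 1/2,
# alpha_n = 1/(n+2), beta_n = gamma_n = (1 - alpha_n)/2.
x = np.array([5.0, -3.0])
t, delta_n = 0.5, 0.5
for n in range(200):
    alpha = 1.0 / (n + 2)
    beta = gamma = (1.0 - alpha) / 2.0
    # inner step: t*x + (1-t)*P_C(I - lam*A)((1-l)I + l*T)y = t*x  since lam = 1,
    # so the implicit y_n-equation becomes explicit:
    y = alpha * x / 2 + beta * x + gamma * t * x
    x = delta_n * x + (1 - delta_n) * y
print(np.linalg.norm(x))   # essentially 0: the iterates converge to x* = 0
```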

4.2. Convex Minimization Problem

Let g : C R be a convex smooth function and h : C R be a proper convex and lower semicontinuous function. The convex minimization problem is
$$g(x^*) + h(x^*) = \min_{x\in C}\{g(x) + h(x)\}.$$
This is equivalent to the inclusion problem $0\in\partial h(x^*) + \nabla g(x^*)$, where $\partial h$ is the subdifferential of $h$ and $\nabla g$ is the gradient of $g$. Next, setting $A = \nabla g$ and $B = \partial h$ in Corollary 1, we can obtain the following.
Theorem 3.
Let $g : C\to\mathbb{R}$ be a convex and differentiable function with $\frac{1}{\alpha}$-Lipschitz continuous gradient $\nabla g$ and $h : C\to\mathbb{R}$ be a proper, convex, and lower semicontinuous function. Let $f$ be a $\delta$-contractive self-mapping defined on $C$ and $\{W_n\}$ be the sequence defined by Equation (3). Let $T$ be an $\eta$-strictly pseudocontractive self-mapping defined on $C$ with order 2 such that $\Omega = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap\operatorname{Fix}(T)\cap(\nabla g + \partial h)^{-1}0\ne\emptyset$, where $(\nabla g + \partial h)^{-1}0$ is the set of minimizers of $g + h$. Define a sequence $\{x_n\}$ by
$$\begin{cases} y_n = \beta_n x_n + \gamma_n W_n\bigl(t_n x_n + (1 - t_n)J_{\lambda_n}^{\partial h}(I - \lambda_n\nabla g)((1 - l)I + lT)y_n\bigr) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n\ge 0, \end{cases}$$
where $0 < l < \min\{1, 2\eta\}$, $\{\lambda_n\}\subset(0, 2\alpha)$ and $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}, \{t_n\}\subset(0, 1)$ satisfy the following conditions:
(i) 
n = 0 α n = , α n + β n + γ n = 1 ;
(ii) 
lim n | β n β n 1 | = lim n | γ n γ n 1 | = lim n α n = lim n | t n t n 1 | = 0 ;
(iii) 
lim sup n γ n t n ( 1 t n ) < 1 and lim inf n γ n ( 1 t n ) > 0 ;
(iv) 
lim inf n β n γ n > 0 , lim inf n δ n > 0 and lim sup n δ n < 1 ;
(v) 
$0 < \bar{\lambda}\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n = \lambda < 2\alpha$.
Then, $x_n\to x^*\in\Omega$ strongly. Indeed, $x^*$ also uniquely solves the variational inequality $\langle(I - f)x^*, x^* - p\rangle\le 0$, $\forall p\in\Omega$.
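For intuition, the resolvent $J_{\lambda_n}^{\partial h}$ is the proximal mapping of $h$; when $h = \|\cdot\|_1$ it is coordinatewise soft-thresholding. The sketch below runs only the forward-backward core step $x\leftarrow \operatorname{prox}_{\lambda h}(x - \lambda\nabla g(x))$ of the scheme, dropping the $W_n$/viscosity/averaging layers of Theorem 3 for clarity; the data $D$, $b$ and the step size $\lambda$ are illustrative choices with $\lambda < 2\alpha = 2/\|D^\top D\|$.

```python
import numpy as np

# Forward-backward sketch for min 0.5*||D x - b||^2 + ||x||_1, i.e. the
# zero problem 0 in grad(g)(x*) + subdiff(h)(x*) with h = l1-norm, whose
# resolvent J_lam is soft-thresholding.
def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

D = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 0.1])
lam = 0.2                       # step size, < 2 / ||D^T D|| = 0.5
x = np.zeros(2)
for _ in range(500):
    grad = D.T @ (D @ x - b)            # forward (explicit) step uses grad(g)
    x = soft_threshold(x - lam * grad, lam)   # backward (resolvent) step
print(x)   # -> approx [1.75, 0]: a sparse minimizer
```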

4.3. Split Feasibility Problem

Let $\Gamma : H_1\to H_2$ be a bounded linear operator with adjoint $\Gamma^*$. Let $C$ and $Q$ be closed convex sets in Hilbert spaces $H_1$ and $H_2$, respectively. One considers the split feasibility problem (SFP): find $x^*\in C$ such that $\Gamma x^*\in Q$. The solution set of the SFP is $C\cap\Gamma^{-1}(Q)$. To solve the SFP, one can recast it as the following convex minimization problem:
$$\min_{x\in C} g(x) := \frac{1}{2}\|\Gamma x - P_Q\Gamma x\|^2.$$
Here, $g$ is differentiable with Lipschitz continuous gradient $\nabla g = \Gamma^*(I - P_Q)\Gamma$. In addition, $\nabla g$ is $\frac{1}{\|\Gamma\|^2}$-inverse-strongly monotone, where $\|\Gamma\|^2$ is the spectral radius of $\Gamma^*\Gamma$. Thus, $x^*$ solves the SFP iff $x^*$ satisfies the inclusion problem:
$$0\in\partial I_C(x^*) + \nabla g(x^*) \iff x^* - \lambda\nabla g(x^*)\in(I + \lambda\partial I_C)x^* \iff x^* = J_\lambda^{\partial I_C}(x^* - \lambda\nabla g(x^*)) \iff x^* = P_C(x^* - \lambda\nabla g(x^*)).$$
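This fixed-point characterization is what CQ-type iterations $x\leftarrow P_C(x - \lambda\,\Gamma^*(I - P_Q)\Gamma x)$ exploit. A toy sketch with $C = Q = [0,1]^2$ and $\Gamma = 2I$ (illustrative choices, so the solution set $C\cap\Gamma^{-1}(Q)$ is $[0, 0.5]^2$) and step size $\lambda < 2/\|\Gamma\|^2$:

```python
import numpy as np

# CQ-type iteration x <- P_C(x - lam * Gamma^T (I - P_Q) Gamma x) for the
# toy SFP: find x in C = [0,1]^2 with Gamma x in Q = [0,1]^2, Gamma = 2I.
P = lambda v: np.clip(v, 0.0, 1.0)      # projection onto the box [0,1]^2
Gamma = 2.0 * np.eye(2)
lam = 0.2                                # < 2 / ||Gamma||^2 = 0.5

x = np.array([2.0, -1.0])                # infeasible starting point
for _ in range(200):
    grad = Gamma.T @ (Gamma @ x - P(Gamma @ x))   # = grad(g)(x)
    x = P(x - lam * grad)
print(x)        # an SFP solution: x in C and Gamma @ x in Q
```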
Theorem 4.
Let $\Gamma : H_1\to H_2$ be a bounded linear operator with adjoint $\Gamma^*$, and $T$ be an $\eta$-strictly pseudocontractive self-mapping defined on $C$ with order 2 such that $\Omega = \bigcap_{n=0}^{\infty}\operatorname{Fix}(S_n)\cap\operatorname{Fix}(T)\cap(C\cap\Gamma^{-1}(Q))\ne\emptyset$. Let $f$ be a $\delta$-contractive self-mapping defined on $C$ and $\{W_n\}$ be the sequence defined by Equation (3). Define a sequence $\{x_n\}$ by
$$\begin{cases} y_n = \beta_n x_n + \gamma_n W_n\bigl(t_n x_n + (1 - t_n)P_C(I - \lambda_n\Gamma^*(I - P_Q)\Gamma)((1 - l)I + lT)y_n\bigr) + \alpha_n f(x_n), \\ x_{n+1} = (1 - \delta_n)W_n y_n + \delta_n x_n, \quad \forall n\ge 0, \end{cases}$$
where $0 < l < \min\{1, 2\eta\}$, $\{\lambda_n\}\subset\bigl(0, \frac{2}{\|\Gamma\|^2}\bigr)$ and $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\}, \{t_n\}\subset(0, 1)$ satisfy the following conditions:
(i) 
n = 0 α n = , α n + β n + γ n = 1 ;
(ii) 
lim n | β n 1 β n | = lim n | γ n 1 γ n | = lim n α n = lim n | t n 1 t n | = 0 ;
(iii) 
lim sup n γ n t n ( 1 t n ) < 1 and lim inf n γ n ( 1 t n ) > 0 ;
(iv) 
lim inf n β n γ n > 0 , lim inf n δ n > 0 and lim sup n δ n < 1 ;
(v) 
$0 < \bar{\lambda}\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n = \lambda < \frac{2}{\|\Gamma\|^2}$.
Then, $x_n\to x^*\in\Omega$ strongly.

5. Conclusions

In this paper, we established norm convergence theorems of solutions for a general symmetrical variational system, which can serve as a framework for many real-world problems arising in engineering and medical imaging that involve convex optimization subproblems. No compactness assumptions are imposed on the accretive-type operators or on the sets in the whole space, and the restrictions imposed on the control parameters are mild. Our results therefore provide an outlet for viscosity-type algorithms without compactness assumptions in infinite-dimensional spaces. From the point of view of the space framework, the space in our convergence theorems is still not fully general; however, it is now a Banach space. It is of interest to further relax the convexity restrictions in future research.

Author Contributions

All the authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shandong Province of China (ZR2017LA001) and partially supported by NSF of China (Grant no. 11771196).

Acknowledgments

We are grateful to the referees for their useful suggestions which improved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

