Article

Parallel Tseng’s Extragradient Methods for Solving Systems of Variational Inequalities on Hadamard Manifolds

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 43; https://doi.org/10.3390/sym12010043
Submission received: 15 November 2019 / Revised: 9 December 2019 / Accepted: 12 December 2019 / Published: 24 December 2019
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract: The aim of this article is to study two efficient parallel algorithms for obtaining a solution to a system of monotone variational inequalities (SVI) on Hadamard manifolds. The parallel algorithms are inspired by Tseng's extragradient techniques with new step sizes, which are established without knowledge of the Lipschitz constants of the operators and, in the second algorithm, without line-search. Under monotonicity assumptions on the underlying vector fields, we prove that the sequences generated by the methods converge to a solution of the monotone SVI whenever it exists.

1. Introduction

Given an operator $A: H \to H$ and a convex and closed subset $C$ of a real Hilbert space $H$, the well-known variational inequality problem (VIP) is to find a point $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \forall x \in C. \tag{1}$$
It is well known that variational inequality theory plays an important role in the study of signal processing, image reconstruction, mathematical programming, differential equations, and other areas; see, e.g., [1,2,3,4,5]. A large number of numerical methods have been designed for solving VIPs and related optimization problems; see, e.g., [6,7,8,9,10]. With the help of an additional projection operator, Korpelevich [11] first introduced
$$y_n = P_C(\mathrm{Id} - \lambda A)x_n, \qquad x_{n+1} = P_C(x_n - \lambda A y_n), \quad n \ge 0, \tag{2}$$
where $\mathrm{Id}$ stands for the identity, $\lambda$ is a real number in $(0, \frac{1}{L})$, $L$ is the Lipschitz constant of $A$, and $P_C$ stands for the nearest-point projection operator onto the subset $C$. Recently, gradient-type iterative schemes have been in the spotlight of engineers and mathematicians working in the communities of control theory and optimization. Building on this approach, a number of investigators have developed various algorithms; see, e.g., [12,13,14,15,16,17,18] and the references cited therein.
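For intuition, scheme (2) can be sketched in the Euclidean case as follows. The ball-shaped set $C$, the linear monotone operator $M$, and the step size $\lambda = 0.3$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Nearest-point projection P_C onto a closed Euclidean ball (a simple choice of C).
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def korpelevich(A, x0, lam, steps=200):
    # Scheme (2): y_n = P_C(x_n - lam*A(x_n)); x_{n+1} = P_C(x_n - lam*A(y_n)).
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = project_ball(x - lam * A(x))
        x = project_ball(x - lam * A(y))
    return x

# A(x) = M x is monotone and Lipschitz with L = ||M||_2 (about 1.005 here),
# so any lam in (0, 1/L) works; x* = 0 solves the VIP since A(0) = 0.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_sol = korpelevich(lambda x: M @ x, np.array([0.9, -0.5]), lam=0.3)
```

Running the sketch drives the iterates toward the solution $x^* = 0$, even though $A$ here is not a gradient of any function (its skew part dominates), which is exactly the setting where plain projected-gradient iteration fails.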
Let $A_1$ and $A_2$ both be single-valued self-operators on the space $H$. Recently, Ceng et al. [19] considered and studied the following system problem of finding $(x^*, y^*) \in C \times C$ such that
$$\begin{cases}\langle x^* - y^* + \mu_1 A_1 y^*, x - x^* \rangle \ge 0 & \forall x \in C,\\ \langle y^* - x^* + \mu_2 A_2 x^*, x - y^* \rangle \ge 0 & \forall x \in C,\end{cases} \tag{3}$$
with constants $\mu_1, \mu_2 > 0$, which is called a system of variational inequalities (SVI). In [19], system problem (3) was transformed into a fixed-point problem (FPP). Utilizing the equivalence between system problem (3) and the FPP of a certain operator, Ceng et al. [19] proposed and investigated a relaxed-type method for solving system problem (3); see also [16,20,21,22,23,24] for recent investigations.
On the other hand, in 2003, Németh [25] introduced the VIP on Hadamard manifolds, that is, find $x^* \in C$ such that
$$\langle A x^*, \exp_{x^*}^{-1} x \rangle \ge 0 \quad \forall x \in C, \tag{4}$$
where $C$ is a nonempty, convex and closed set in a Hadamard manifold $\mathbb{M}$, $A: \mathbb{M} \to T\mathbb{M}$ is a vector field, that is, $A x \in T_x\mathbb{M}$ for all $x \in \mathbb{M}$, and $\exp^{-1}$ is the inverse of the exponential map. We denote by $S$ the solution set of problem (4). Recently, some methods and techniques have been generalized from Euclidean spaces to Riemannian manifolds because the generalization offers some important advantages; see, e.g., [26,27,28]. Research progress on problem (4) is limited by the nonlinearity of manifolds, and hence has been slow; moreover, research on its algorithms has mainly focused on the proximal point algorithm and Korpelevich's method. Very recently, using Tseng's extragradient methods, Chen et al. [29] constructed two effective algorithms to solve problem (4) on Hadamard manifolds. Moreover, their results gave a further answer to the open question put forth in Ferreira et al. [30].
Inspired by problems (3) and (4), this paper introduces and considers the SVI on Hadamard manifolds, that is, find $(x^*, y^*) \in C \times C$ such that
$$\begin{cases}\langle \exp_{y^*}^{-1} x^* + \mu_1 A_1 y^*, \exp_{x^*}^{-1} x \rangle \ge 0 & \forall x \in C,\\ \langle \exp_{x^*}^{-1} y^* + \mu_2 A_2 x^*, \exp_{y^*}^{-1} x \rangle \ge 0 & \forall x \in C,\end{cases} \tag{5}$$
with constants $\mu_1, \mu_2 > 0$. If $A_1 = A_2 = A$ and $x^* = y^*$, then SVI (5) reduces to VIP (4).
Inspired by the extragradient algorithms in Chen et al. [29], we propose and analyze two parallel effective algorithms for solving SVI (5), based on Tseng's extragradient method. The step sizes of the first algorithm are obtained by a line-search, while those of the second are computed using only two previous iterates. In neither algorithm are the Lipschitz constants required to be known. Moreover, our results improve and extend the corresponding results announced by others, e.g., Ceng et al. [19] and Chen et al. [29].
The work is organized as follows. Some basic concepts, notations and important lemmas in Riemannian geometry are presented in Section 2. In Section 3, we present two algorithms based on Tseng's extragradient method for SVI (5) on Hadamard manifolds and obtain the desired convergence theorems.

2. Preliminaries

Throughout this paper, $\mathbb{M}$ denotes a connected $m$-dimensional manifold endowed with a Riemannian metric. We use the same notations as in [31]. For more details about these notations and the relevant definitions, please consult a textbook on Riemannian geometry (see, e.g., [31]).
Definition 1.
(see [32]). Let $\mathcal{X}(\mathbb{M})$ contain all single-valued vector fields $V: \mathbb{M} \to T\mathbb{M}$ such that $V(x) \in T_x\mathbb{M}$ for all $x \in \mathbb{M}$, and let the domain $D(V)$ of $V$ be defined by $D(V) = \{x \in \mathbb{M} : V(x) \ne \emptyset\}$. Let $V \in \mathcal{X}(\mathbb{M})$. Then $V$ is said to be pseudomonotone if, for any $x, y \in D(V)$,
$$\langle V(x), \exp_x^{-1} y \rangle \ge 0 \implies \langle V(y), \exp_y^{-1} x \rangle \le 0. \tag{6}$$
Proposition 1.
(see [31]) (Comparison theorem for triangles). Let $\Delta(p_1, p_2, p_3)$ be a geodesic triangle. For each $i = 1, 2, 3\ (\mathrm{mod}\ 3)$, denote by $\gamma_i : [0, l_i] \to \mathbb{M}$ the geodesic joining $p_i$ to $p_{i+1}$, and set $l_i := L(\gamma_i)$ and $\alpha_i := \angle(\gamma_i'(0), -\gamma_{i-1}'(l_{i-1}))$. Then
(i) 
$\alpha_1 + \alpha_2 + \alpha_3 \le \pi$;
(ii) 
$l_i^2 + l_{i+1}^2 - 2 l_i l_{i+1} \cos\alpha_{i+1} \le l_{i-1}^2$;
(iii) 
$l_{i+1}\cos\alpha_{i+2} + l_i\cos\alpha_i \ge l_{i+2}$.
In terms of the distance and the exponential map, the inequality in (ii) can be rewritten as
$$d^2(p_i, p_{i+1}) + d^2(p_{i+1}, p_{i+2}) - 2\langle \exp_{p_{i+1}}^{-1} p_i, \exp_{p_{i+1}}^{-1} p_{i+2}\rangle \le d^2(p_{i-1}, p_i),$$
since
$$\langle \exp_{p_{i+1}}^{-1} p_i, \exp_{p_{i+1}}^{-1} p_{i+2}\rangle = d(p_i, p_{i+1})\, d(p_{i+1}, p_{i+2}) \cos\alpha_{i+1}.$$
Lemma 1.
(see [33]). Let $x_0 \in \mathbb{M}$ and $\{x_n\} \subset \mathbb{M}$ with $x_n \to x_0$. Then the following assertions hold.
(i) 
For any $y \in \mathbb{M}$, we have $\exp_{x_n}^{-1} y \to \exp_{x_0}^{-1} y$ and $\exp_y^{-1} x_n \to \exp_y^{-1} x_0$;
(ii) 
If $v_n \in T_{x_n}\mathbb{M}$ and $v_n \to v_0$, then $v_0 \in T_{x_0}\mathbb{M}$;
(iii) 
Given $u_n, v_n \in T_{x_n}\mathbb{M}$ and $u_0, v_0 \in T_{x_0}\mathbb{M}$, if $u_n \to u_0$ and $v_n \to v_0$, then $\langle u_n, v_n\rangle \to \langle u_0, v_0\rangle$;
(iv) 
For any $u \in T_{x_0}\mathbb{M}$, the function $F: \mathbb{M} \to T\mathbb{M}$ defined by $F(x) = P_{x, x_0} u$ for all $x \in \mathbb{M}$ is continuous on $\mathbb{M}$.
Lemma 2.
(see [34]). Given $p \in \mathbb{M}$, there exists a unique projection $P_C(p)$. Furthermore, the following inequality holds:
$$\langle \exp_{P_C(p)}^{-1} p, \exp_{P_C(p)}^{-1} q \rangle \le 0 \quad \forall q \in C.$$
Proposition 2.
(see [32]). The following statements are equivalent:
(i) 
$x^*$ is a solution of problem (4);
(ii) 
$x^* = P_C(\exp_{x^*}(-\beta_0 A x^*))$ for some $\beta_0 > 0$;
(iii) 
$x^* = P_C(\exp_{x^*}(-\beta A x^*))$ for all $\beta > 0$;
(iv) 
$r(x^*, \beta) = 0$, where $r(x^*, \beta) = \exp_{x^*}^{-1}[P_C(\exp_{x^*}(-\beta A x^*))]$.
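In the Euclidean case $\mathbb{M} = \mathbb{R}^n$, where $\exp_x(v) = x + v$ and $\exp_x^{-1} y = y - x$, the residual in (iv) becomes $r(x, \beta) = P_C(x - \beta A x) - x$, and the equivalences of Proposition 2 can be checked numerically. The box constraint and the operator $A$ below are illustrative assumptions:

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # P_C for the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def residual(A, x, beta):
    # Euclidean analogue of r(x, beta) = exp_x^{-1}[P_C(exp_x(-beta * A x))].
    return proj_box(x - beta * A(x)) - x

# A = gradient of (1/2)||x - a||^2 is monotone; its zero a lies inside C,
# so x* = a solves the VIP and the residual vanishes for every beta > 0 (item (iii)/(iv)).
a = np.array([0.5, -0.25])
A = lambda x: x - a
checks = [np.linalg.norm(residual(A, a, beta)) for beta in (0.1, 1.0, 10.0)]
bad = np.linalg.norm(residual(A, np.array([1.0, 1.0]), 1.0))  # non-solution: residual nonzero
```

The sketch illustrates that the residual is zero at the solution for every $\beta > 0$, while a non-solution produces a nonzero residual, which is the basis of residual-type stopping rules.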
Lemma 3.
(see [35]). Let $\Delta(p, q, r)$ be a geodesic triangle in a Hadamard manifold $\mathbb{M}$. Then there exist comparison points $p', q', r' \in \mathbb{R}^2$ such that
$$d(p, q) = \|p' - q'\|, \quad d(q, r) = \|q' - r'\| \quad \text{and} \quad d(r, p) = \|r' - p'\|.$$
Lemma 4.
(see [36]). Let $\Delta(p, q, r)$ be a geodesic triangle in a Hadamard manifold $\mathbb{M}$ and $\Delta'(p', q', r')$ be its comparison triangle.
(i) 
Let $\alpha, \beta, \gamma$ (resp., $\alpha', \beta', \gamma'$) be the angles of $\Delta(p, q, r)$ (resp., $\Delta'(p', q', r')$) at the vertices $p, q, r$ (resp., $p', q', r'$). Then $\alpha' \ge \alpha$, $\beta' \ge \beta$ and $\gamma' \ge \gamma$.
(ii) 
If $z$ is a point on the geodesic joining $p$ to $q$ and $z'$ is its comparison point in the segment $[p', q']$, such that $d(z, p) = \|z' - p'\|$ and $d(z, q) = \|z' - q'\|$, then $d(z, r) \le \|z' - r'\|$.
Definition 2.
A vector field $f$ defined on a complete Riemannian manifold $\mathbb{M}$ is said to be Lipschitz continuous if there exists a constant $L = L(\mathbb{M}) > 0$ such that
$$d(f(x), f(x')) \le L\, d(x, x') \quad \forall x, x' \in \mathbb{M}. \tag{7}$$
Besides this global concept, if for each $x_0 \in \mathbb{M}$ there exist $L(x_0) > 0$ and $\delta = \delta(x_0) > 0$ such that inequality (7) holds, with $L = L(x_0)$, for all $x, x' \in B_\delta(x_0) := \{x \in \mathbb{M} : d(x_0, x) < \delta\}$, then $f$ is said to be locally Lipschitz continuous.
Finally, utilizing a similar technique to that of transforming SVI (3) into the FPP in [19], we derive the following result.
Lemma 5.
A pair $(x^*, y^*)$, with $x^*, y^* \in C$, is a solution of SVI (5) if and only if $x^* \in \mathrm{Fix}(G)$, i.e., $x^* = G x^*$, where $\mathrm{Fix}(G)$ is the fixed-point set of the mapping $G := P_C(\exp_I(-\mu_1 A_1)) P_C(\exp_I(-\mu_2 A_2))$ and $y^* = P_C(\exp_I(-\mu_2 A_2)) x^*$.
Proof. 
In terms of Lemma 2, we obtain that
$$\begin{aligned}
&\langle \exp_{y^*}^{-1} x^* + \mu_1 A_1 y^*, \exp_{x^*}^{-1} x\rangle \ge 0 \quad \forall x \in C, \qquad \langle \exp_{x^*}^{-1} y^* + \mu_2 A_2 x^*, \exp_{y^*}^{-1} x\rangle \ge 0 \quad \forall x \in C\\
\Longleftrightarrow\ &x^* = P_C(\exp_{y^*}(-\mu_1 A_1 y^*)), \qquad y^* = P_C(\exp_{x^*}(-\mu_2 A_2 x^*))\\
\Longleftrightarrow\ &x^* = P_C(\exp_I(-\mu_1 A_1)) y^* = P_C(\exp_I(-\mu_1 A_1)) P_C(\exp_I(-\mu_2 A_2)) x^*.
\end{aligned}$$
That is, x * Fix ( G ) . □
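In the Euclidean case ($\exp_x(v) = x + v$), Lemma 5 says that solving the SVI amounts to finding a fixed point of the composed mapping $G$. A minimal sketch follows, where the ball $C$, the common operator $A_1 = A_2$ with zero at $a$, and the step sizes $\mu_1 = \mu_2 = 0.5$ are illustrative assumptions:

```python
import numpy as np

def proj_ball(x, radius=2.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def G(x, A1, A2, mu1, mu2):
    # Euclidean analogue of G = P_C(exp_I(-mu1*A1)) o P_C(exp_I(-mu2*A2)):
    # first y = P_C(x - mu2*A2(x)), then G(x) = P_C(y - mu1*A1(y)).
    y = proj_ball(x - mu2 * A2(x))
    return proj_ball(y - mu1 * A1(y)), y

a = np.array([1.0, 0.0])
A = lambda v: v - a          # monotone operator with zero at a (inside C)
x = np.zeros(2)
for _ in range(100):
    x, y = G(x, A, A, 0.5, 0.5)
# Picard iteration of G drives (x, y) toward the pair (a, a), which solves the system.
```

For this contractive example, iterating $G$ is enough; the algorithms of Section 3 exist precisely because plain Picard iteration need not converge for merely monotone operators.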

3. Main Results

In this section, inspired by the extragradient algorithms in Chen et al. [29], we propose the following Algorithms 1 and 3 for solving the system (5) of monotone variational inequalities on Hadamard manifolds, both based on Tseng's extragradient method. Algorithm 1 presents a simple and convenient way of defining the step sizes via a line-search. Meanwhile, in Algorithm 3, the step sizes are computed from the current iterates, without requiring knowledge of the Lipschitz constants of the operators or additional projections. In particular, if we set $A_1 = A_2 = A$ in Algorithms 1 and 3, these algorithms reduce to Algorithms 2 and 4, respectively, for solving the monotone VIP (4) on Hadamard manifolds. We assume the following:
(H1) $S \ne \emptyset$, where $S$ is the set of solutions of SVI (5).
(H2) For $i = 1, 2$, $A_i: \mathbb{M} \to T\mathbb{M}$ is a vector field, that is, $A_i x \in T_x\mathbb{M}$ for all $x \in \mathbb{M}$, and $\exp^{-1}$ is the inverse of the exponential map.
(H3) For $i = 1, 2$, the mapping $A_i$ is monotone, i.e.,
$$\langle A_i x - A_i y, \exp_y^{-1} x \rangle \ge 0 \quad \forall x, y \in \mathbb{M}.$$
(H4) For $i = 1, 2$, the mapping $A_i$ is Lipschitz continuous with constant $L_i > 0$, i.e.,
$$d(A_i x, A_i y) \le L_i\, d(x, y) \quad \forall x, y \in \mathbb{M}.$$
We first recall the concept of Fejér convergence and its related result.
Definition 3.
(see [37]). Let $X$ be a complete metric space and $C \subset X$ be a nonempty set. A sequence $\{x_k\} \subset X$ is called Fejér convergent to $C$ if $d(x_{k+1}, y) \le d(x_k, y)$ for all $y \in C$ and all $k \ge 0$.
Proposition 3.
(see [33]). Let X be a complete metric space and let C X be a nonempty set. Let { x k } X be Fejér convergent to C and suppose that any cluster point of { x k } lies in C. Then { x k } converges to a point of C.

3.1. Parallel Tseng’s Extragradient Method with Line-Search

From Lemma 5, we obtain the following Algorithm 1. In particular, putting $A_1 = A_2 = A$ in Algorithm 1, we can solve VIP (4).
Algorithm 1: Parallel Tseng’s extragradient method with line-search.
Initialization: Given $\gamma_i > 0$, $l_i \in (0, 1)$, $\lambda_i \in (0, 1)$ for $i = 1, 2$. Let $x_0 \in \mathbb{M}$.
Iterative Steps:
Step 1. Calculate
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A_1 z_n)),$$
where $\mu_{i,n}$ is the largest $\mu_i \in \{\gamma_i, \gamma_i l_i, \gamma_i l_i^2, \ldots\}$ for $i = 1, 2$ satisfying the Armijo-like search rule (ALSR)
$$\mu_2\, d(A_2 x_n, A_2\tilde z_n) \le \lambda_2\, d(x_n, \tilde z_n), \qquad \mu_1\, d(A_1 z_n, A_1 y_n) \le \lambda_1\, d(z_n, y_n).$$
Step 2. Calculate
$$z_n = \exp_{\tilde z_n}(\mu_{2,n}(A_2 x_n - A_2\tilde z_n)), \qquad x_{n+1} = \exp_{y_n}(\mu_{1,n}(A_1 z_n - A_1 y_n)).$$
Set $n \leftarrow n + 1$ and go to Step 1.
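A minimal Euclidean sketch of Algorithm 1 follows (with $\exp_x(v) = x + v$, so the exponential-map steps become vector additions). The ball $C$, the common test operator, and all parameter values are illustrative assumptions:

```python
import numpy as np

def proj(x, radius=2.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def armijo_mu(F, u, gamma, l, lam):
    # Largest mu in {gamma, gamma*l, gamma*l^2, ...} with
    # mu*||F(u) - F(v)|| <= lam*||u - v||, where v = P_C(u - mu*F(u))  (ALSR).
    mu = gamma
    while True:
        v = proj(u - mu * F(u))
        if mu * np.linalg.norm(F(u) - F(v)) <= lam * np.linalg.norm(u - v) + 1e-12:
            return mu, v
        mu *= l

def parallel_tseng_ls(A1, A2, x0, gamma=1.0, l=0.5, lam=0.5, steps=100):
    x = np.asarray(x0, dtype=float)
    z = x.copy()
    for _ in range(steps):
        mu2, zt = armijo_mu(A2, x, gamma, l, lam)   # z~_n and mu_{2,n}
        z = zt + mu2 * (A2(x) - A2(zt))             # Tseng correction: z_n
        mu1, y = armijo_mu(A1, z, gamma, l, lam)    # y_n and mu_{1,n}
        x = y + mu1 * (A1(z) - A1(y))               # Tseng correction: x_{n+1}
    return x, z

a = np.array([1.0, 0.0])
A = lambda v: v - a                                 # monotone and 1-Lipschitz
x_out, z_out = parallel_tseng_ls(A, A, np.zeros(2))
```

Note that each iteration evaluates the operators but projects onto $C$ only inside the line-search; the correction steps in Step 2 are unprojected, which is the hallmark of Tseng's variant of the extragradient method.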
The following lemma shows that Algorithm 1 is well defined.
Lemma 6.
The Armijo-like search rule (ALSR) is well defined, and $\min\{\gamma_i, \frac{\lambda_i l_i}{L_i}\} \le \mu_{i,n} \le \gamma_i$ for $i = 1, 2$.
Proof. 
Since $A_i$ is $L_i$-Lipschitz continuous on $\mathbb{M}$ for $i = 1, 2$, we have
$$d(A_i w_n, A_i P_C(\exp_{w_n}(-\mu_i A_i w_n))) \le L_i\, d(w_n, P_C(\exp_{w_n}(-\mu_i A_i w_n))),$$
where $w_n$ denotes $x_n$ for $i = 2$ and $z_n$ for $i = 1$, which is equivalent to
$$\frac{\lambda_i}{L_i}\, d(A_i w_n, A_i P_C(\exp_{w_n}(-\mu_i A_i w_n))) \le \lambda_i\, d(w_n, P_C(\exp_{w_n}(-\mu_i A_i w_n))).$$
Thus, (ALSR) holds for all $\mu_i \le \frac{\lambda_i}{L_i}$, so $\mu_{i,n}$ is well defined for $i = 1, 2$.
Obviously, $\mu_{i,n} \le \gamma_i$ for $i = 1, 2$. If $\mu_{i,n} = \gamma_i$, then the lemma is valid; otherwise, if $\mu_{i,n} < \gamma_i$, then by the search rule, the step size $\mu_{i,n}/l_i$ must violate (ALSR), i.e.,
$$d(A_i w_n, A_i P_C(\exp_{w_n}(-\tfrac{\mu_{i,n}}{l_i} A_i w_n))) > \frac{\lambda_i l_i}{\mu_{i,n}}\, d(w_n, P_C(\exp_{w_n}(-\tfrac{\mu_{i,n}}{l_i} A_i w_n))).$$
Again from the $L_i$-Lipschitz continuity of $A_i$ on $\mathbb{M}$, we obtain $\mu_{i,n} > \frac{\lambda_i l_i}{L_i}$. □
Corollary 1.
The Armijo-like search rule (ALSR) with $A_1 = A_2 = A$ is well defined, and $\min\{\gamma_i, \frac{\lambda_i l_i}{L}\} \le \mu_{i,n} \le \gamma_i$ for $i = 1, 2$.
Now, we analyze the convergence of Algorithm 1.
Lemma 7.
Let $\{x_n\}$ and $\{z_n\}$ be the iterative sequences constructed via Algorithm 1. Then both $\{x_n\}$ and $\{z_n\}$ are bounded, provided that for all $(p, q) \in S$ and $n \ge 0$,
$$(1 - \lambda_2^2) d^2(\tilde z_n, x_n) + (1 - \lambda_1^2) d^2(y_n, z_n) + 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle + 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle \ge 0,$$
$$(1 - \lambda_2^2) d^2(\tilde z_{n+1}, x_{n+1}) + (1 - \lambda_1^2) d^2(y_n, z_n) + 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle + 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle \ge 0.$$
Proof. 
Take a fixed $(p, q) \in C \times C$ arbitrarily. Then, noticing
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A_1 z_n)),$$
we deduce from Lemma 2 that
$$\langle \exp_{x_n}^{-1}\tilde z_n + \mu_{2,n} A_2 x_n, \exp_p^{-1}\tilde z_n\rangle \le 0, \qquad \langle \exp_{z_n}^{-1} y_n + \mu_{1,n} A_1 z_n, \exp_q^{-1} y_n\rangle \le 0,$$
and hence
$$\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1}\tilde z_n\rangle \le -\mu_{2,n}\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle, \qquad \langle \exp_{z_n}^{-1} y_n, \exp_q^{-1} y_n\rangle \le -\mu_{1,n}\langle A_1 z_n, \exp_q^{-1} y_n\rangle. \tag{9}$$
Also, from the monotonicity of A 2 on M it follows that
$$\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle = \langle A_2\tilde z_n - A_2 p, \exp_p^{-1}\tilde z_n\rangle + \langle A_2 p - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle \ge \langle A_2 p - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle = \langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - \langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle. \tag{10}$$
We now fix $n \ge 0$. Consider the geodesic triangle $\Delta(x_n, \tilde z_n, p)$ and its comparison triangle $\Delta(x_n', \tilde z_n', p')$. Then by Lemma 3, we have $d(x_n, p) = \|x_n' - p'\|$, $d(\tilde z_n, p) = \|\tilde z_n' - p'\|$, and $d(x_n, \tilde z_n) = \|x_n' - \tilde z_n'\|$. Recall that $z_n = \exp_{\tilde z_n}(\mu_{2,n}(A_2 x_n - A_2\tilde z_n))$. The comparison point of $z_n$ is $z_n' = \tilde z_n' + \mu_{2,n}(A_2 x_n - A_2\tilde z_n)'$. By Lemma 4, we have
$$\begin{aligned}
d^2(z_n, p) \le \|z_n' - p'\|^2 &= \|p' - \tilde z_n' - \mu_{2,n}(A_2 x_n - A_2\tilde z_n)'\|^2\\
&= \|p' - \tilde z_n'\|^2 + \mu_{2,n}^2\|(A_2 x_n - A_2\tilde z_n)'\|^2 + 2\mu_{2,n}\langle (A_2 x_n - A_2\tilde z_n)', \tilde z_n' - p'\rangle\\
&= \|\tilde z_n' - x_n'\|^2 + \|p' - x_n'\|^2 + 2\langle \tilde z_n' - x_n', x_n' - p'\rangle + \mu_{2,n}^2\|(A_2 x_n - A_2\tilde z_n)'\|^2 + 2\mu_{2,n}\langle (A_2 x_n - A_2\tilde z_n)', \tilde z_n' - p'\rangle\\
&= \|p' - x_n'\|^2 - \|\tilde z_n' - x_n'\|^2 + \mu_{2,n}^2\|(A_2\tilde z_n - A_2 x_n)'\|^2 + \langle 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)', \tilde z_n' - p'\rangle\\
&= d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2\|(A_2\tilde z_n - A_2 x_n)'\|^2 + \langle 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)', \tilde z_n' - p'\rangle. \tag{11}
\end{aligned}$$
Consider the geodesic triangle $\Delta(A_2 x_n, A_2\tilde z_n, z_n)$ and its comparison triangle $\Delta((A_2 x_n)', (A_2\tilde z_n)', z_n')$. Using Lemma 3 again, we have $d(A_2 x_n, A_2\tilde z_n) = \|(A_2 x_n)' - (A_2\tilde z_n)'\|$. From (11), we obtain
$$d^2(z_n, p) \le d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) + \langle 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)', \tilde z_n' - p'\rangle. \tag{12}$$
Next, consider a geodesic triangle $\Delta(a, b, c)$ and its comparison triangle $\Delta(a', b', c')$, where we set $a = 2\exp_{x_n}^{-1}\tilde z_n - 2\mu_{2,n}(A_2\tilde z_n - A_2 x_n)$ and $b = \exp_p^{-1}\tilde z_n$ (resp., $a' = 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)'$ and $b' = \tilde z_n' - p'$). Let $\beta$ and $\beta'$ denote the angles at $c$ and $c'$, respectively. Then by Lemma 4 (i), we have $\beta' \ge \beta$ and so $\cos\beta' \le \cos\beta$. Then by Proposition 1 and Lemma 3, we have
$$\langle a', b'\rangle = \|a'\|\|b'\|\cos\beta' \le \|a\|\|b\|\cos\beta = \langle a, b\rangle.$$
It follows that
$$\langle 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)', \tilde z_n' - p'\rangle \le \langle 2\exp_{x_n}^{-1}\tilde z_n - 2\mu_{2,n}(A_2\tilde z_n - A_2 x_n), \exp_p^{-1}\tilde z_n\rangle. \tag{13}$$
Due to (12) and (13), it follows that
$$\begin{aligned}
d^2(z_n, p) &\le d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) + \langle 2\tilde z_n' - 2 x_n' + 2\mu_{2,n}(A_2 x_n)' - 2\mu_{2,n}(A_2\tilde z_n)', \tilde z_n' - p'\rangle\\
&\le d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) + \langle 2\exp_{x_n}^{-1}\tilde z_n - 2\mu_{2,n}(A_2\tilde z_n - A_2 x_n), \exp_p^{-1}\tilde z_n\rangle\\
&= d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) - 2\mu_{2,n}\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle + 2\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1}\tilde z_n\rangle. \tag{14}
\end{aligned}$$
From (9), (10) and (14), we have
$$\begin{aligned}
d^2(z_n, p) &\le d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) + 2\mu_{2,n}\big(\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle - \langle A_2 p, \exp_p^{-1}\tilde z_n\rangle\big) - 2\mu_{2,n}\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle\\
&= d^2(x_n, p) - d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle. \tag{15}
\end{aligned}$$
According to (ALSR) and (15), we obtain
$$\begin{aligned}
d^2(z_n, p) &\le d^2(x_n, p) - d^2(\tilde z_n, x_n) + \lambda_2^2 d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle\\
&= d^2(x_n, p) - (1 - \lambda_2^2) d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle. \tag{16}
\end{aligned}$$
In a similar way,
$$d^2(x_{n+1}, q) \le d^2(z_n, q) - (1 - \lambda_1^2) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle. \tag{17}$$
Next, we restrict $(p, q) \in S$. Then, substituting (16), with $q := p$, into (17) yields
$$\begin{aligned}
d^2(x_{n+1}, p) &\le d^2(x_n, p) - (1 - \lambda_2^2) d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - (1 - \lambda_1^2) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle\\
&= d^2(x_n, p) - (1 - \lambda_2^2) d^2(\tilde z_n, x_n) - (1 - \lambda_1^2) d^2(y_n, z_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle.
\end{aligned}$$
This, together with the hypotheses, implies that $d(x_{n+1}, p) \le d(x_n, p)$. So the sequence $\{x_n\}$ is bounded. In the same way, substituting (17) into (16), with $n := n + 1$ and $p := q$, implies
$$\begin{aligned}
d^2(z_{n+1}, q) &\le d^2(z_n, q) - (1 - \lambda_1^2) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle - (1 - \lambda_2^2) d^2(\tilde z_{n+1}, x_{n+1}) - 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle\\
&= d^2(z_n, q) - (1 - \lambda_2^2) d^2(\tilde z_{n+1}, x_{n+1}) - (1 - \lambda_1^2) d^2(y_n, z_n) - 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle.
\end{aligned}$$
This, together with the hypotheses, implies that $d(z_{n+1}, q) \le d(z_n, q)$. So the sequence $\{z_n\}$ is bounded. □
Corollary 2.
Let { x n } and { z n } be the iterative sequences constructed via Algorithm 1 with A 1 = A 2 = A . Then both { x n } and { z n } are bounded.
Proof. 
Take a fixed $p \in S$ arbitrarily. Noticing $A_1 = A_2 = A$, we obtain from (16) and (17) that
$$d^2(x_{n+1}, p) \le d^2(x_n, p) - (1 - \lambda_2^2) d^2(\tilde z_n, x_n) - (1 - \lambda_1^2) d^2(y_n, z_n), \qquad d^2(z_{n+1}, p) \le d^2(z_n, p) - (1 - \lambda_2^2) d^2(\tilde z_{n+1}, x_{n+1}) - (1 - \lambda_1^2) d^2(y_n, z_n),$$
i.e., $d(x_{n+1}, p) \le d(x_n, p)$ and $d(z_{n+1}, p) \le d(z_n, p)$. So the sequences $\{x_n\}$ and $\{z_n\}$ are bounded. Also, noticing $\lambda_1, \lambda_2 \in (0, 1)$ in Algorithm 2, we have
$$(1 - \lambda_2^2) d^2(\tilde z_n, x_n) + (1 - \lambda_1^2) d^2(y_n, z_n) \le d^2(x_n, p) - d^2(x_{n+1}, p), \qquad (1 - \lambda_2^2) d^2(\tilde z_{n+1}, x_{n+1}) + (1 - \lambda_1^2) d^2(y_n, z_n) \le d^2(z_n, p) - d^2(z_{n+1}, p),$$
and so $\lim_{n\to\infty} d(\tilde z_n, x_n) = 0$ and $\lim_{n\to\infty} d(y_n, z_n) = 0$. So the sequences $\{\tilde z_n\}$ and $\{y_n\}$ are bounded. Note that
$$z_n = \exp_{\tilde z_n}(\mu_{2,n}(A x_n - A\tilde z_n)), \qquad x_{n+1} = \exp_{y_n}(\mu_{1,n}(A z_n - A y_n)).$$
Since $A$ is Lipschitz continuous, we conclude that $\lim_{n\to\infty} d(z_n, \tilde z_n) = 0$ and $\lim_{n\to\infty} d(x_{n+1}, y_n) = 0$. □
Algorithm 2: Parallel Tseng's extragradient method with line-search.
Initialization: Given $\gamma_i > 0$, $l_i \in (0, 1)$, $\lambda_i \in (0, 1)$ for $i = 1, 2$. Let $x_0 \in \mathbb{M}$.
Iterative Steps:
Step 1. Calculate
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A z_n)),$$
where $\mu_{i,n}$ is chosen to be the largest $\mu_i \in \{\gamma_i, \gamma_i l_i, \gamma_i l_i^2, \ldots\}$ for $i = 1, 2$ satisfying
$$\mu_2\, d(A x_n, A\tilde z_n) \le \lambda_2\, d(x_n, \tilde z_n), \qquad \mu_1\, d(A z_n, A y_n) \le \lambda_1\, d(z_n, y_n).$$
Step 2. Calculate
$$z_n = \exp_{\tilde z_n}(\mu_{2,n}(A x_n - A\tilde z_n)), \qquad x_{n+1} = \exp_{y_n}(\mu_{1,n}(A z_n - A y_n)).$$
Set $n \leftarrow n + 1$ and go to Step 1.
Theorem 1.
Let $\{x_n\}$ and $\{z_n\}$ be the iterative sequences constructed via Algorithm 1, and assume that the hypotheses in Lemma 7 hold. Then $\{(x_n, z_n)\}$ converges to a solution of SVI (5), provided $d(x_n, y_n) \to 0$ and $d(z_n, \tilde z_n) \to 0$ as $n \to \infty$.
Proof. 
First of all, by Lemma 7, we know that $\{x_n\}$ and $\{z_n\}$ are bounded, and
$$d(x_{n+1}, p) \le d(x_n, p) \quad \text{and} \quad d(z_{n+1}, q) \le d(z_n, q) \qquad \forall (p, q) \in S,\ n \ge 0.$$
Utilizing the assumption that $d(x_n, y_n) \to 0$ and $d(z_n, \tilde z_n) \to 0$ as $n \to \infty$, we obtain that $\{y_n\}$ and $\{\tilde z_n\}$ are bounded. We define the sets $S_1, S_2$ as follows:
$$S_1 = \{p \in C : \exists q \in C \text{ such that } (p, q) \in S\} \quad \text{and} \quad S_2 = \{q \in C : \exists p \in C \text{ such that } (p, q) \in S\}.$$
From Definition 3, we know that $\{x_n\}$ and $\{z_n\}$ are Fejér convergent to $S_1$ and $S_2$, respectively. Let $\bar p$ be an accumulation point of $\{x_n\}$. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $\lim_{k\to\infty} x_{n_k} = \bar p$. Since $\{z_n\}$, $\{\mu_{1,n}\}$ and $\{\mu_{2,n}\}$ are all bounded, we may assume, without loss of generality, that $z_{n_k} \to \bar q$, $\mu_{1,n_k} \to \bar\mu_1$ and $\mu_{2,n_k} \to \bar\mu_2$ as $k \to \infty$. Since $d(x_n, y_n) \to 0$ and $d(z_n, \tilde z_n) \to 0$ as $n \to \infty$, we deduce that $y_{n_k} \to \bar p$ and $\tilde z_{n_k} \to \bar q$. Note that
$$\tilde z_{n_k} = P_C(\exp_{x_{n_k}}(-\mu_{2,n_k} A_2 x_{n_k})), \qquad y_{n_k} = P_C(\exp_{z_{n_k}}(-\mu_{1,n_k} A_1 z_{n_k})).$$
Hence by Lemma 2, we get
$$\begin{aligned}0 &\le \langle \exp_{x_{n_k}}^{-1}\tilde z_{n_k} + \mu_{2,n_k} A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x\rangle\\ &= \langle \exp_{x_{n_k}}^{-1}\tilde z_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x\rangle + \mu_{2,n_k}\langle A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x\rangle\\ &= \langle \exp_{x_{n_k}}^{-1}\tilde z_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x\rangle + \mu_{2,n_k}\langle A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x_{n_k}\rangle + \mu_{2,n_k}\langle A_2 x_{n_k}, \exp_{x_{n_k}}^{-1} x\rangle,\end{aligned} \tag{18}$$
and
$$\begin{aligned}0 &\le \langle \exp_{z_{n_k}}^{-1} y_{n_k} + \mu_{1,n_k} A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} x\rangle\\ &= \langle \exp_{z_{n_k}}^{-1} y_{n_k}, \exp_{y_{n_k}}^{-1} x\rangle + \mu_{1,n_k}\langle A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} x\rangle\\ &= \langle \exp_{z_{n_k}}^{-1} y_{n_k}, \exp_{y_{n_k}}^{-1} x\rangle + \mu_{1,n_k}\langle A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} z_{n_k}\rangle + \mu_{1,n_k}\langle A_1 z_{n_k}, \exp_{z_{n_k}}^{-1} x\rangle.\end{aligned} \tag{19}$$
By Lemma 6, we have $\mu_{i,n} \ge \min\{\gamma_i, \frac{\lambda_i l_i}{L_i}\} > 0$ for $i = 1, 2$. Passing to the limit and combining (6) with (18) and (19), respectively, we get
$$0 \le \langle \exp_{\bar p}^{-1}\bar q, \exp_{\bar q}^{-1} x\rangle + \bar\mu_2\langle A_2\bar p, \exp_{\bar q}^{-1}\bar p\rangle + \bar\mu_2\langle A_2\bar p, \exp_{\bar p}^{-1} x\rangle, \qquad 0 \le \langle \exp_{\bar q}^{-1}\bar p, \exp_{\bar p}^{-1} x\rangle + \bar\mu_1\langle A_1\bar q, \exp_{\bar p}^{-1}\bar q\rangle + \bar\mu_1\langle A_1\bar q, \exp_{\bar q}^{-1} x\rangle. \tag{20}$$
Consequently,
$$\langle \exp_{\bar q}^{-1}\bar p + \bar\mu_1 A_1\bar q, \exp_{\bar p}^{-1} x\rangle \ge 0 \quad \forall x \in C, \qquad \langle \exp_{\bar p}^{-1}\bar q + \bar\mu_2 A_2\bar p, \exp_{\bar q}^{-1} x\rangle \ge 0 \quad \forall x \in C. \tag{21}$$
This means that $(\bar p, \bar q) \in S$, and hence $\bar p \in S_1$. So it follows from Proposition 3 that $x_n \to \bar p$ as $n \to \infty$.
On the other hand, suppose that $\hat q$ is an accumulation point of $\{z_n\}$. Then there exists a subsequence $\{z_{m_k}\}$ of $\{z_n\}$ such that $\lim_{k\to\infty} z_{m_k} = \hat q$. Since $\{x_n\}$, $\{\mu_{1,n}\}$ and $\{\mu_{2,n}\}$ are all bounded, we may assume, without loss of generality, that $x_{m_k} \to \hat p$, $\mu_{1,m_k} \to \hat\mu_1$ and $\mu_{2,m_k} \to \hat\mu_2$ as $k \to \infty$. Since $d(x_n, y_n) \to 0$ and $d(z_n, \tilde z_n) \to 0$ as $n \to \infty$, we deduce that $y_{m_k} \to \hat p$ and $\tilde z_{m_k} \to \hat q$. Note that
$$\tilde z_{m_k} = P_C(\exp_{x_{m_k}}(-\mu_{2,m_k} A_2 x_{m_k})), \qquad y_{m_k} = P_C(\exp_{z_{m_k}}(-\mu_{1,m_k} A_1 z_{m_k})).$$
Similar arguments to those leading to (20) give
$$\langle \exp_{\hat q}^{-1}\hat p + \hat\mu_1 A_1\hat q, \exp_{\hat p}^{-1} x\rangle \ge 0 \quad \forall x \in C, \qquad \langle \exp_{\hat p}^{-1}\hat q + \hat\mu_2 A_2\hat p, \exp_{\hat q}^{-1} x\rangle \ge 0 \quad \forall x \in C.$$
This means that $(\hat p, \hat q) \in S$, and hence $\hat q \in S_2$. By Proposition 3, we get $z_n \to \hat q$ as $n \to \infty$. Therefore, by the uniqueness of limits, $\{(x_n, z_n)\}$ converges to a solution $(\hat p, \hat q) \in S$ of SVI (5). This completes the proof. □
Theorem 2.
Let { x n } and { z n } be the iterative sequences constructed via Algorithm 2. Then { x n } and { z n } both are convergent to a solution of VIP (4).
Proof. 
By Corollary 2 and Definition 3, we know that $\{x_n\}$ and $\{z_n\}$ are both Fejér convergent to the same set $S$. Let $\bar p$ be an accumulation point of $\{x_n\}$. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $\lim_{k\to\infty} x_{n_k} = \bar p$. Hence, from $\lim_{k\to\infty} d(\tilde z_{n_k}, x_{n_k}) = 0$ we get $\lim_{k\to\infty}\tilde z_{n_k} = \bar p$. Since $\{\mu_{2,n}\}$ is bounded, we may assume, without loss of generality, that $\lim_{k\to\infty}\mu_{2,n_k} = \bar\mu_2$. It then follows from $\tilde z_{n_k} = P_C(\exp_{x_{n_k}}(-\mu_{2,n_k} A x_{n_k}))$ that $\bar p = P_C(\exp_{\bar p}(-\bar\mu_2 A\bar p))$. In terms of Proposition 2, we get $\bar p \in S$. Thus, by Proposition 3 we infer that $x_n \to \bar p$ as $n \to \infty$. In a similar way, we can show that $z_n \to \bar q$ as $n \to \infty$ for some $\bar q \in S$. Since $d(\tilde z_n, x_n) \to 0$ and $d(z_n, \tilde z_n) \to 0$, we derive the desired result. □

3.2. Parallel Tseng’s Extragradient Method

To solve problem (5), we give the following Algorithm 3, that is, a parallel Tseng’s extragradient algorithm. The step sizes in this algorithm are obtained by simple updating, rather than using the line-search, which results in a lower computational cost.
Algorithm 3: Parallel Tseng’s extragradient method.
Initialization. Given $\mu_{i,0} > 0$, $\lambda_i \in (0, 1)$ for $i = 1, 2$, and an arbitrary starting point $x_0 \in \mathbb{M}$.
Iterative Steps:
Step 1. Compute
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A_1 z_n)).$$
Step 2. Compute
$$z_n = \exp_{\tilde z_n}(\mu_{2,n}(A_2 x_n - A_2\tilde z_n)), \qquad x_{n+1} = \exp_{y_n}(\mu_{1,n}(A_1 z_n - A_1 y_n)),$$
and
$$\mu_{2,n+1} = \begin{cases}\min\big\{\tfrac{\lambda_2 d(x_n, \tilde z_n)}{d(A_2 x_n, A_2\tilde z_n)}, \mu_{2,n}\big\}, & \text{if } d(A_2 x_n, A_2\tilde z_n) \ne 0,\\ \mu_{2,n}, & \text{otherwise},\end{cases} \qquad \mu_{1,n+1} = \begin{cases}\min\big\{\tfrac{\lambda_1 d(z_n, y_n)}{d(A_1 z_n, A_1 y_n)}, \mu_{1,n}\big\}, & \text{if } d(A_1 z_n, A_1 y_n) \ne 0,\\ \mu_{1,n}, & \text{otherwise}.\end{cases} \tag{22}$$
Set $n \leftarrow n + 1$ and go to Step 1.
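A minimal Euclidean sketch of Algorithm 3 follows ($\exp_x(v) = x + v$): no line-search is performed, and each step size is updated from a single distance ratio. The ball $C$, the test operator, and the parameter values are illustrative assumptions:

```python
import numpy as np

def proj(x, radius=2.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def next_mu(mu, lam, u, v, Fu, Fv):
    # mu_{n+1} = min(lam*d(u,v)/d(Fu,Fv), mu_n) if d(Fu,Fv) != 0, else mu_n.
    dF = np.linalg.norm(Fu - Fv)
    return min(lam * np.linalg.norm(u - v) / dF, mu) if dF > 0 else mu

def parallel_tseng(A1, A2, x0, mu1=1.0, mu2=1.0, lam1=0.5, lam2=0.5, steps=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        zt = proj(x - mu2 * A2(x))                  # z~_n
        z = zt + mu2 * (A2(x) - A2(zt))             # z_n
        y = proj(z - mu1 * A1(z))                   # y_n
        x_next = y + mu1 * (A1(z) - A1(y))          # x_{n+1}
        mu2 = next_mu(mu2, lam2, x, zt, A2(x), A2(zt))
        mu1 = next_mu(mu1, lam1, z, y, A1(z), A1(y))
        x = x_next
    return x, z

a = np.array([1.0, 0.0])
A = lambda v: v - a                                 # monotone and 1-Lipschitz
x_out, z_out = parallel_tseng(A, A, np.zeros(2))
```

Each operator is evaluated at the same points that the step-size update needs, so the adaptive rule adds essentially no cost compared with the line-search of Algorithm 1.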
In particular, putting A 1 = A 2 = A in Algorithm 3, we obtain the following Algorithm 4, that is, a parallel Tseng’s extragradient method for solving VIP (4).
Algorithm 4: Parallel Tseng's extragradient method.
Initialization. Given $\mu_{i,0} > 0$, $\lambda_i \in (0, 1)$ for $i = 1, 2$, and an arbitrary starting point $x_0 \in \mathbb{M}$.
Iterative Steps:
Step 1. Compute
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A z_n)).$$
Step 2. Compute
$$z_n = \exp_{\tilde z_n}(\mu_{2,n}(A x_n - A\tilde z_n)), \qquad x_{n+1} = \exp_{y_n}(\mu_{1,n}(A z_n - A y_n)),$$
and
$$\mu_{2,n+1} = \begin{cases}\min\big\{\tfrac{\lambda_2 d(x_n, \tilde z_n)}{d(A x_n, A\tilde z_n)}, \mu_{2,n}\big\}, & \text{if } d(A x_n, A\tilde z_n) \ne 0,\\ \mu_{2,n}, & \text{otherwise},\end{cases} \qquad \mu_{1,n+1} = \begin{cases}\min\big\{\tfrac{\lambda_1 d(z_n, y_n)}{d(A z_n, A y_n)}, \mu_{1,n}\big\}, & \text{if } d(A z_n, A y_n) \ne 0,\\ \mu_{1,n}, & \text{otherwise}.\end{cases}$$
Set $n \leftarrow n + 1$ and go to Step 1.
Lemma 8.
For $i = 1, 2$, the sequence $\{\mu_{i,n}\}$ constructed via Algorithm 3 is monotonically decreasing and bounded below by $\min\{\frac{\lambda_i}{L_i}, \mu_{i,0}\}$.
Proof. 
Obviously, the sequence $\{\mu_{i,n}\}$ is monotonically decreasing for $i = 1, 2$. Note that $A_i$ is Lipschitz continuous with constant $L_i > 0$ for $i = 1, 2$. Then, in the case $d(A_2 x_n, A_2\tilde z_n) \ne 0$, we have
$$\frac{\lambda_2\, d(x_n, \tilde z_n)}{d(A_2 x_n, A_2\tilde z_n)} \ge \frac{\lambda_2\, d(x_n, \tilde z_n)}{L_2\, d(x_n, \tilde z_n)} = \frac{\lambda_2}{L_2}.$$
Thus, the sequence $\{\mu_{2,n}\}$ has the lower bound $\min\{\frac{\lambda_2}{L_2}, \mu_{2,0}\}$. In a similar way, one shows that $\{\mu_{1,n}\}$ has the lower bound $\min\{\frac{\lambda_1}{L_1}, \mu_{1,0}\}$. □
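The behavior described in Lemma 8 is easy to check numerically in the Euclidean case: for $A(x) = Lx$ the ratio $\lambda\, d(u, v)/d(Au, Av)$ equals $\lambda/L$ exactly, so the update drops $\mu$ to $\lambda/L$ once and then keeps it constant. The operator, the constants, and the random stand-in iterates below are illustrative assumptions:

```python
import numpy as np

L, lam = 2.0, 0.5
A = lambda x: L * x                     # Lipschitz continuous with constant L
rng = np.random.default_rng(0)
mu, mus = 1.0, [1.0]                    # mu_{i,0} = 1
u = rng.standard_normal(3)
for _ in range(50):
    v = rng.standard_normal(3)          # stand-in for the next pair of iterates
    dF = np.linalg.norm(A(u) - A(v))
    if dF > 0:                          # step-size update rule of Algorithm 3
        mu = min(lam * np.linalg.norm(u - v) / dF, mu)
    mus.append(mu)
    u = v
```

The recorded sequence is nonincreasing and never falls below $\min\{\lambda/L, \mu_0\}$, matching the lemma.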
Corollary 3.
For $i = 1, 2$, the sequence $\{\mu_{i,n}\}$ constructed via Algorithm 4 is monotonically decreasing and bounded below by $\min\{\frac{\lambda_i}{L}, \mu_{i,0}\}$.
Lemma 9.
Let $\{x_n\}$ and $\{z_n\}$ be the iterative sequences constructed via Algorithm 3. Then both $\{x_n\}$ and $\{z_n\}$ are bounded, provided that for all $(p, q) \in S$ and $n \ge 0$,
$$\Big(1 - \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\Big) d^2(\tilde z_n, x_n) + \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) + 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle + 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle \ge 0,$$
$$\Big(1 - \frac{\mu_{2,n+1}^2\lambda_2^2}{\mu_{2,n+2}^2}\Big) d^2(\tilde z_{n+1}, x_{n+1}) + \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) + 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle + 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle \ge 0.$$
Proof. 
Similar to the proof of Lemma 7, we get
$$d^2(z_n, p) \le d^2(x_n, p) + d^2(\tilde z_n, x_n) + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) - 2\mu_{2,n}\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle + 2\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1} x_n\rangle. \tag{23}$$
Then from (23), we have
$$\begin{aligned}
d^2(z_n, p) &\le d^2(x_n, p) + d^2(\tilde z_n, x_n) - 2\langle \exp_{x_n}^{-1}\tilde z_n, \exp_{x_n}^{-1}\tilde z_n\rangle + 2\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1}\tilde z_n\rangle + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) - 2\mu_{2,n}\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle\\
&= d^2(x_n, p) - d^2(\tilde z_n, x_n) + 2\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1}\tilde z_n\rangle + \mu_{2,n}^2 d^2(A_2\tilde z_n, A_2 x_n) - 2\mu_{2,n}\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle. \tag{24}
\end{aligned}$$
Since $\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 x_n))$, by Lemma 2, we have
$$\langle \exp_{x_n}^{-1}\tilde z_n + \mu_{2,n} A_2 x_n, \exp_p^{-1}\tilde z_n\rangle \le 0,$$
that is,
$$\langle \exp_{x_n}^{-1}\tilde z_n, \exp_p^{-1}\tilde z_n\rangle \le -\mu_{2,n}\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle. \tag{25}$$
Combining (24), (25) and (22) yields
$$d^2(z_n, p) \le d^2(x_n, p) - d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle + \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\, d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle. \tag{26}$$
Also, from the monotonicity of A 2 on M it follows that
$$\langle A_2\tilde z_n - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle = \langle A_2\tilde z_n - A_2 p, \exp_p^{-1}\tilde z_n\rangle + \langle A_2 p - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle \ge \langle A_2 p - A_2 x_n, \exp_p^{-1}\tilde z_n\rangle = \langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - \langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle. \tag{27}$$
From (26) and (27), we obtain
$$\begin{aligned}
d^2(z_n, p) &\le d^2(x_n, p) - d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle + \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\, d^2(\tilde z_n, x_n) - 2\mu_{2,n}\big(\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - \langle A_2 x_n, \exp_p^{-1}\tilde z_n\rangle\big)\\
&= d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\Big) d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle. \tag{28}
\end{aligned}$$
In a similar way, we get
$$d^2(x_{n+1}, q) \le d^2(z_n, q) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle. \tag{29}$$
Next, we restrict $(p, q) \in S$. Then, substituting (28), with $q := p$, into (29), we have
$$\begin{aligned}
d^2(x_{n+1}, p) &\le d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\Big) d^2(\tilde z_n, x_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle\\
&= d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\Big) d^2(\tilde z_n, x_n) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) - 2\mu_{2,n}\langle A_2 p, \exp_p^{-1}\tilde z_n\rangle - 2\mu_{1,n}\langle A_1 p, \exp_p^{-1} y_n\rangle.
\end{aligned}$$
This, together with the hypotheses, implies that $d(x_{n+1}, p) \le d(x_n, p)$. So the sequence $\{x_n\}$ is bounded. In the same way, substituting (29), with $n := n + 1$ and $p := q$, into (28), we have
$$\begin{aligned}
d^2(z_{n+1}, q) &\le d^2(z_n, q) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle - \Big(1 - \frac{\mu_{2,n+1}^2\lambda_2^2}{\mu_{2,n+2}^2}\Big) d^2(\tilde z_{n+1}, x_{n+1}) - 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle\\
&= d^2(z_n, q) - \Big(1 - \frac{\mu_{2,n+1}^2\lambda_2^2}{\mu_{2,n+2}^2}\Big) d^2(\tilde z_{n+1}, x_{n+1}) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n) - 2\mu_{2,n+1}\langle A_2 q, \exp_q^{-1}\tilde z_{n+1}\rangle - 2\mu_{1,n}\langle A_1 q, \exp_q^{-1} y_n\rangle.
\end{aligned}$$
This, together with the hypotheses, implies that $d(z_{n+1}, q) \le d(z_n, q)$. So the sequence $\{z_n\}$ is bounded. □
Corollary 4.
Let { x n } and { z n } be constructed via Algorithm 4. Then { x n } and { z n } are bounded.
Proof. 
Take a fixed $p \in S$ arbitrarily. Noticing $A_1 = A_2 = A$, we deduce from (28) and (29) that
$$d^2(x_{n+1}, p) \le d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}^2\lambda_2^2}{\mu_{2,n+1}^2}\Big) d^2(\tilde z_n, x_n) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n), \qquad d^2(z_{n+1}, p) \le d^2(z_n, p) - \Big(1 - \frac{\mu_{2,n+1}^2\lambda_2^2}{\mu_{2,n+2}^2}\Big) d^2(\tilde z_{n+1}, x_{n+1}) - \Big(1 - \frac{\mu_{1,n}^2\lambda_1^2}{\mu_{1,n+1}^2}\Big) d^2(y_n, z_n).$$
Since $\lim_{n\to\infty}\big(1-\mu_{i,n}^2\cdot\frac{\lambda_i^2}{\mu_{i,n+1}^2}\big)=1-\lambda_i^2>0$ for $i=1,2$, there exists $n_0\ge 0$ such that $1-\mu_{i,n}^2\cdot\frac{\lambda_i^2}{\mu_{i,n+1}^2}>0$ for all $n\ge n_0$ and $i=1,2$. This implies that $d(x_{n+1},p)\le d(x_n,p)$ and $d(z_{n+1},p)\le d(z_n,p)$ for all $n\ge n_0$, so the sequences $\{x_n\}$ and $\{z_n\}$ are bounded. It follows that $\lim_{n\to\infty}d(\tilde z_n,x_n)=0$ and $\lim_{n\to\infty}d(y_n,z_n)=0$, and hence $\{\tilde z_n\}$ and $\{y_n\}$ are bounded. Note that
$$
z_n = \exp_{\tilde z_n}\big(\mu_{2,n}(A x_n - A \tilde z_n)\big), \qquad x_{n+1} = \exp_{y_n}\big(\mu_{1,n}(A z_n - A y_n)\big).
$$
Since $A$ is Lipschitz continuous, we conclude that $\lim_{n\to\infty}d(z_n,\tilde z_n)=0$ and $\lim_{n\to\infty}d(x_{n+1},y_n)=0$. □
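The factors $\mu_{i,n}^2\lambda_i^2/\mu_{i,n+1}^2$ in the estimates above reflect a Lipschitz-free step-size rule. A common rule of this type is sketched below; we present it only as an illustration in the spirit of Lemma 8 (the exact rule is the one defined in the paper), with Euclidean norms standing in for the Riemannian quantities:

```python
import numpy as np

def next_stepsize(mu, lam, u, v, Au, Av):
    """Lipschitz-free update of Tseng type:
        mu_{n+1} = min(mu_n, lam * ||u - v|| / ||A u - A v||).
    The sequence is nonincreasing and, for an L-Lipschitz A, bounded below by
    min(mu_0, lam / L), hence convergent to a positive limit."""
    den = np.linalg.norm(Au - Av)
    if den > 0.0:
        return min(mu, lam * np.linalg.norm(u - v) / den)
    return mu
```

With this rule, $\mu_n\|Au-Av\|\le(\lambda\mu_n/\mu_{n+1})\|u-v\|$, which is exactly the kind of bound that the terms $1-\mu_{i,n}^2\lambda_i^2/\mu_{i,n+1}^2$ exploit, and no knowledge of the Lipschitz constant or line-search is needed.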
Theorem 3.
Let $\{x_n\}$ and $\{z_n\}$ be constructed via Algorithm 3, and assume that the hypotheses in Lemma 9 hold. Then $\{(x_n,z_n)\}$ converges to a solution of SVI (5), provided $d(x_n,y_n)\to 0$ and $d(z_n,\tilde z_n)\to 0$ as $n\to\infty$.
Proof. 
First of all, by Lemma 8, $\lim_{n\to\infty}\mu_{i,n}$ exists for $i=1,2$; denote $\mu_i=\lim_{n\to\infty}\mu_{i,n}$, so that $\mu_i>0$ for $i=1,2$. By Lemma 9, $\{x_n\}$ and $\{z_n\}$ are bounded, and
$$
d(x_{n+1},p)\le d(x_n,p) \quad\text{and}\quad d(z_{n+1},q)\le d(z_n,q) \qquad \forall (p,q)\in S,\ \forall n\ge 0.
$$
Since $d(x_n,y_n)\to 0$ and $d(z_n,\tilde z_n)\to 0$ as $n\to\infty$, the sequences $\{y_n\}$ and $\{\tilde z_n\}$ are bounded. Define the sets $S_1$, $S_2$ as follows:
$$
S_1 = \{p\in C : \exists\, q\in C \text{ such that } (p,q)\in S\} \quad\text{and}\quad S_2 = \{q\in C : \exists\, p\in C \text{ such that } (p,q)\in S\}.
$$
From Definition 2, we know that $\{x_n\}$ and $\{z_n\}$ are Fejér convergent to $S_1$ and $S_2$, respectively. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}\subset\{x_n\}$ such that $\lim_{k\to\infty}x_{n_k}=\bar p$. Since $\{z_n\}$ is bounded, we may assume, without loss of generality, that $z_{n_k}\to\bar q$. Meanwhile, it is clear that $\mu_{1,n_k}\to\mu_1$ and $\mu_{2,n_k}\to\mu_2$ as $k\to\infty$. Since $d(x_n,y_n)\to 0$ and $d(z_n,\tilde z_n)\to 0$ as $n\to\infty$, we deduce that $y_{n_k}\to\bar p$ and $\tilde z_{n_k}\to\bar q$. Note that
$$
\tilde z_{n_k} = P_C\big(\exp_{x_{n_k}}(-\mu_{2,n_k}A_2 x_{n_k})\big), \qquad y_{n_k} = P_C\big(\exp_{z_{n_k}}(-\mu_{1,n_k}A_1 z_{n_k})\big).
$$
Letting $k\to\infty$ and using Lemma 1 and the Lipschitz continuity of $A_i$, $i=1,2$, we obtain
$$
\bar q = P_C\big(\exp_{\bar p}(-\mu_2 A_2\bar p)\big), \qquad \bar p = P_C\big(\exp_{\bar q}(-\mu_1 A_1\bar q)\big).
$$
Thus, $\bar p = P_C(\exp I(-\mu_1 A_1))\,P_C(\exp I(-\mu_2 A_2))\,\bar p$, where $P_C(\exp I(-\mu A))$ denotes the map $x\mapsto P_C(\exp_x(-\mu A x))$. By Lemma 5, we get $(\bar p,\bar q)\in S$, and hence $\bar p\in S_1$. By Proposition 3, $x_n\to\bar p$ as $n\to\infty$.
On the other hand, suppose that $\hat q$ is an accumulation point of $\{z_n\}$. Then $\lim_{k\to\infty}z_{m_k}=\hat q$ for some subsequence $\{z_{m_k}\}\subset\{z_n\}$. Since $\{x_n\}$ is bounded, we may assume $x_{m_k}\to\hat p$. Meanwhile, it is clear that $\mu_{1,m_k}\to\mu_1$ and $\mu_{2,m_k}\to\mu_2$ as $k\to\infty$. Since $d(x_n,y_n)\to 0$ and $d(z_n,\tilde z_n)\to 0$ as $n\to\infty$, we deduce that $y_{m_k}\to\hat p$ and $\tilde z_{m_k}\to\hat q$. Note that
$$
\tilde z_{m_k} = P_C\big(\exp_{x_{m_k}}(-\mu_{2,m_k}A_2 x_{m_k})\big), \qquad y_{m_k} = P_C\big(\exp_{z_{m_k}}(-\mu_{1,m_k}A_1 z_{m_k})\big).
$$
Letting $k\to\infty$ and using Lemma 1 and the Lipschitz continuity of $A_i$, $i=1,2$, we obtain
$$
\hat q = P_C\big(\exp_{\hat p}(-\mu_2 A_2\hat p)\big), \qquad \hat p = P_C\big(\exp_{\hat q}(-\mu_1 A_1\hat q)\big).
$$
Thus, $\hat p = P_C(\exp I(-\mu_1 A_1))\,P_C(\exp I(-\mu_2 A_2))\,\hat p$. By Lemma 5, we get $(\hat p,\hat q)\in S$, and hence $\hat q\in S_2$. By Proposition 3, $z_n\to\hat q$ as $n\to\infty$. Therefore, by the uniqueness of limits, we infer that $(x_n,z_n)\to(\hat p,\hat q)\in S$, a solution of SVI (5). □
Theorem 4.
Assume that $\{x_n\}$ and $\{z_n\}$ are constructed via Algorithm 4. Then both $\{x_n\}$ and $\{z_n\}$ converge to a solution of VIP (4).
Proof. 
By Corollary 4 and Definition 2, both $\{x_n\}$ and $\{z_n\}$ are Fejér convergent to the same set $S$. Let $\bar p$ be an accumulation point of $\{x_n\}$; then there exists a subsequence $\{x_{n_k}\}\subset\{x_n\}$ such that $\lim_{k\to\infty}x_{n_k}=\bar p$. Hence, from $\lim_{k\to\infty}d(\tilde z_{n_k},x_{n_k})=0$ we get $\lim_{k\to\infty}\tilde z_{n_k}=\bar p$. Since $\lim_{n\to\infty}\mu_{2,n}=\mu_2$, we have $\lim_{k\to\infty}\mu_{2,n_k}=\mu_2$. It then follows from $\tilde z_{n_k}=P_C(\exp_{x_{n_k}}(-\mu_{2,n_k}A x_{n_k}))$ that $\bar p=P_C(\exp_{\bar p}(-\mu_2 A\bar p))$. In terms of Proposition 2, we get $\bar p\in S$. Thus, by Proposition 3, we infer that $x_n\to\bar p$ as $n\to\infty$. Similarly, one can obtain $z_n\to\bar q$ as $n\to\infty$ for some $\bar q\in S$. Since $d(\tilde z_n,x_n)\to 0$ and $d(z_n,\tilde z_n)\to 0$, we derive the desired result. □
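As a concrete Euclidean illustration of Theorem 4's conclusion, the sketch below runs the single-operator iteration with a Lipschitz-free step-size update and drives both $x_n$ and $z_n$ to a VIP solution. Here $\exp_x(v)=x+v$, the constraint set and all names are our own illustrative choices, and the step-size rule is a standard stand-in for the rule defined in Lemma 8:

```python
import numpy as np

def solve_vip_tseng(A, x0, proj, lam=0.5, mu0=1.0, iters=100):
    """Euclidean sketch of the Algorithm 4-style iteration behind Theorem 4:
    two Tseng forward-backward-forward half-steps per iteration, with a
    Lipschitz-free step-size update. Returns the last iterates (x_n, z_n),
    which both approach a solution of the VIP for A on C."""
    def next_mu(mu, u, v, Au, Av):
        den = np.linalg.norm(Au - Av)
        return min(mu, lam * np.linalg.norm(u - v) / den) if den > 0.0 else mu

    x = np.asarray(x0, dtype=float)
    z = x.copy()
    mu1 = mu2 = mu0
    for _ in range(iters):
        z_t = proj(x - mu2 * A(x))          # z~_n = P_C(x_n - mu_{2,n} A x_n)
        z = z_t + mu2 * (A(x) - A(z_t))     # z_n: Tseng correction
        y = proj(z - mu1 * A(z))            # y_n = P_C(z_n - mu_{1,n} A z_n)
        x_new = y + mu1 * (A(z) - A(y))     # x_{n+1}: Tseng correction
        mu2 = next_mu(mu2, x, z_t, A(x), A(z_t))
        mu1 = next_mu(mu1, z, y, A(z), A(y))
        x = x_new
    return x, z
```

For a monotone Lipschitz operator such as a rotation with the box $[-1,1]^2$ as $C$, both returned iterates settle at the unique solution and satisfy the fixed-point relation $x = P_C(x - \mu A x)$ up to numerical precision.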

4. Concluding Remark

In this paper, we focused on systems of variational inequalities on Hadamard manifolds and presented two algorithms to solve them under monotonicity assumptions on the underlying vector fields. We considered two strategies for choosing the step sizes. The second has several advantages: a simple structure, low computational cost, and no extra projection required. To design more effective methods for problem (5) on Hadamard manifolds, we will consider the geometric structure of manifolds and some numerical implementations in future work. Since every complete and connected Riemannian manifold is a geodesic metric space (see, e.g., [38]), our results can be extended to geodesic spaces.

Author Contributions

All the authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (11671365), the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002), and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhao, X. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641.
2. An, N.T. Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 2019.
3. Kobis, E.; Kobis, M.A.; Qin, X. Nonlinear separation approach to inverse variational inequalities in real linear spaces. J. Optim. Theory Appl. 2019, 183, 105–121.
4. Ceng, L.C.; Yuan, Q. Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019, 2019, 374.
5. Ceng, L.C.; Shang, M. Generalized Mann viscosity implicit rules for solving systems of variational inequalities with constraints of variational inclusions and fixed point problems. Mathematics 2019, 7, 933.
6. Cho, S.Y. Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 2016, 9, 1083–1092.
7. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850.
8. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
9. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
10. Reich, S.; Sabach, S. Three strong convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 2012, 568, 225–240.
11. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
12. Qin, X.; Petrusel, A.; Yao, J.C. CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 157–165.
13. Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31.
14. Dehaish, B.A.B. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336.
15. Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506.
16. Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195.
17. Takahashi, W.; Wen, C.F. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419.
18. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
19. Ceng, L.C.; Wang, C.Y.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
20. Qin, X.; Cho, S.Y.; Yao, J.C. Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 2019, 1–25.
21. Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618.
22. Shehu, Y. Hybrid iterative scheme for fixed point problem, infinite systems of equilibrium and variational inequality problems. Comput. Math. Appl. 2012, 63, 1089–1103.
23. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118.
24. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
25. Németh, S.Z. Variational inequalities on Hadamard manifolds. Nonlinear Anal. 2003, 52, 1491–1498.
26. Ceng, L.C.; Shang, M. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2019.
27. Ansari, Q.H.; Babu, F. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25.
28. Li, X.; Huang, N.; Ansari, Q.H.; Yao, J.C. Convergence rate of descent method with new inexact line-search on Riemannian manifolds. J. Optim. Theory Appl. 2019, 180, 830–854.
29. Chen, J.F.; Liu, S.Y.; Chang, X.K. Modified Tseng’s extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. 2019.
30. Ferreira, O.P.; Lucambio Pérez, L.R.; Németh, S.Z. Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 2005, 31, 133–151.
31. Sakai, T. Riemannian Geometry; Translations of Mathematical Monographs, Vol. 149; American Mathematical Society: Providence, RI, USA, 1996.
32. Tang, G.J.; Huang, N.J. Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 2012, 54, 493–509.
33. Li, C.; López, G.; Martín-Márquez, V. Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 2009, 79, 663–683.
34. Wang, J.; López, G.; Martín-Márquez, V.; Li, C. Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 2010, 146, 691–708.
35. Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292.
36. Li, C.; López, G.; Martín-Márquez, V. Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwan J. Math. 2010, 14, 541–559.
37. Ferreira, O.P.; Oliveira, P.R. Proximal point algorithm on Riemannian manifolds. Optimization 2002, 51, 257–270.
38. Bridson, M.R.; Haefliger, A. Metric Spaces of Non-Positive Curvature; Springer: Berlin/Heidelberg, Germany, 2013; Volume 319.

Ceng, L.-C.; Shehu, Y.; Wang, Y. Parallel Tseng’s Extragradient Methods for Solving Systems of Variational Inequalities on Hadamard Manifolds. Symmetry 2020, 12, 43. https://doi.org/10.3390/sym12010043
