Article

On the Parallel Subgradient Extragradient Rule for Solving Systems of Variational Inequalities in Hadamard Manifolds

Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1496; https://doi.org/10.3390/sym13081496
Submission received: 26 July 2021 / Revised: 7 August 2021 / Accepted: 10 August 2021 / Published: 15 August 2021
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory II)

Abstract
In a Hadamard manifold, let the VIP and SVI denote a variational inequality problem and a system of variational inequalities, respectively, where the SVI consists of two variational inequalities that are mutually symmetric in structure. This article designs two parallel algorithms for solving the SVI via the subgradient extragradient approach, where each algorithm consists of two mutually symmetric parts. It is proven that, if the underlying vector fields are monotone, then the sequences constructed by these algorithms converge to a solution of the SVI. We also discuss applications of these algorithms to approximating solutions of the VIP. Our theorems complement some recent and important results in the literature.

1. Introduction

Suppose that the operator $F$ is a self-mapping on a real Hilbert space $(H, \langle \cdot, \cdot \rangle)$. Let the set $C \subset H$ be nonempty, convex, and closed. Consider the classical variational inequality problem (VIP) of finding a point $\bar z \in C$ s.t.:
$$\langle F\bar z, x - \bar z \rangle \ge 0 \quad \forall x \in C. \tag{1}$$
It is well known that variational inequalities like VIP (1) have played an important role in the study of economics, transportation, mathematical programming, engineering mechanics, etc. Let $F$ be Lipschitzian with constant $L > 0$, and let $\ell \in (0, \frac{1}{L})$ be given. In 1976, Korpelevich's extragradient rule was first introduced in [1] for solving VIP (1). For any initial $v_0 \in C$, let the sequence $\{v_l\}$ be generated by
$$z_l = P_C(v_l - \ell F v_l), \qquad v_{l+1} = P_C(v_l - \ell F z_l) \quad \forall l \ge 0, \tag{2}$$
where $P_C$ is the metric projection of $H$ onto $C$. To the best of our knowledge, Korpelevich's extragradient rule has become one of the most effective numerical methods for the VIP and related optimization problems. Moreover, many authors have improved it in various ways; see, e.g., [2,3,4,5,6,7,8,9,10,11] and the references therein.
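In the Euclidean special case, the two projection steps of rule (2) can be sketched in a few lines. The operator, feasible set, and stepsize below are illustrative assumptions (a skew-symmetric affine $F$, which is monotone and $1$-Lipschitzian, and $C = \mathbb{R}^2_+$, so $P_C$ is a componentwise clip), not data from the article.

```python
import numpy as np

def extragradient(F, project, v0, step, iters=500):
    """Korpelevich's extragradient rule (2), Euclidean sketch."""
    v = v0
    for _ in range(iters):
        z = project(v - step * F(v))   # extrapolation step
        v = project(v - step * F(z))   # correction step using F at z
    return v

# Illustrative problem data: F(v) = M v + q with skew-symmetric M.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda v: M @ v + q
project = lambda v: np.maximum(v, 0.0)   # P_C for C = R^2_+

v_star = extragradient(F, project, np.zeros(2), step=0.5)
```

For this illustrative problem the unique solution is $(1, 1)$, and the iterates approach it since the stepsize $0.5$ lies in $(0, 1/L)$.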
In 2008, Ceng et al. [8] considered the following system of variational inequalities (SVI): find $(p^*, q^*) \in C \times C$ s.t.
$$\begin{cases} \langle p^* - q^* + \mu_1 F_1 q^*, \, p - p^* \rangle \ge 0 & \forall p \in C, \\ \langle q^* - p^* + \mu_2 F_2 p^*, \, q - q^* \rangle \ge 0 & \forall q \in C, \end{cases} \tag{3}$$
where $F_k$ is a self-mapping on $H$ and $\mu_k$ is a positive constant for $k = 1, 2$. It is clear that the SVI (3) consists of two variational inequalities that are mutually symmetric in structure. It is worth mentioning that the SVI (3) can be transformed into the following fixed-point problem (FPP).
Lemma 1
(see [8], Lemma 2.1). A pair $(p^*, q^*) \in C \times C$ is a solution of SVI (3) if and only if $p^*$ is a fixed point of the mapping $G := P_C(I - \mu_1 F_1) P_C(I - \mu_2 F_2)$, i.e., $p^* \in \operatorname{Fix}(G)$, where $q^* = P_C(I - \mu_2 F_2) p^*$.
In terms of Lemma 1, Ceng et al. [8] suggested and analyzed a relaxed extragradient algorithm for solving SVI (3). In 2011, the subgradient extragradient rule was first proposed in [6] for solving VIP (1), where the second projection onto $C$ is replaced by the projection onto a half-space:
$$q_l = P_C(p_l - \xi F p_l), \qquad C_l = \{ y \in H : \langle p_l - \xi F p_l - q_l, \, y - q_l \rangle \le 0 \}, \qquad p_{l+1} = P_{C_l}(p_l - \xi F q_l) \quad \forall l \ge 0,$$
with constant $\xi \in (0, \frac{1}{L})$. The above rule is more advantageous and more subtle than rule (2) when $C$ is a feasible set with a complex structure and the computation of the projection onto $C$ is prohibitively time-consuming.
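A Euclidean sketch of why the half-space helps: $C_l$ admits a closed-form projection, so only the first step needs a projection onto $C$. The operator and feasible set below are illustrative assumptions (the same skew affine field and $C = \mathbb{R}^2_+$ as above), not data from the article.

```python
import numpy as np

def project_halfspace(y, a, q):
    """Project y onto the half-space {x : <a, x - q> <= 0}."""
    if a @ a == 0.0:                      # degenerate normal: set is the whole space
        return y
    viol = a @ (y - q)
    return y if viol <= 0.0 else y - (viol / (a @ a)) * a

def subgradient_extragradient_step(F, project_C, p, xi):
    """One iteration of the subgradient extragradient rule (sketch)."""
    q = project_C(p - xi * F(p))          # the only projection onto C
    a = p - xi * F(p) - q                 # outward normal of the half-space C_l at q
    return project_halfspace(p - xi * F(q), a, q)

# Illustrative monotone operator (skew affine, 1-Lipschitzian) and C = R^2_+.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q0 = np.array([-1.0, 1.0])
F = lambda v: M @ v + q0
clip = lambda v: np.maximum(v, 0.0)

p = np.zeros(2)
for _ in range(800):
    p = subgradient_extragradient_step(F, clip, p, xi=0.5)
```

For this illustrative VIP the iterates approach the unique solution $(1, 1)$; the half-space projection only costs one inner product instead of a second projection onto $C$.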
In 2018, Yang et al. [12] designed the modified subgradient extragradient rule for solving VIP (1). For any given $\varsigma_0 > 0$, $u_0 \in H$ and $\mu \in (0, 1)$, let the sequences $\{u_l\}$ and $\{v_l\}$ be generated by
$$v_l = P_C(u_l - \varsigma_l F u_l), \qquad C_l = \{ y \in H : \langle u_l - \varsigma_l F u_l - v_l, \, y - v_l \rangle \le 0 \}, \qquad u_{l+1} = P_{C_l}(u_l - \varsigma_l F v_l) \quad \forall l \ge 0,$$
where $\varsigma_{l+1}$ is chosen as
$$\varsigma_{l+1} = \begin{cases} \min\left\{ \dfrac{\mu (\|u_l - v_l\|^2 + \|u_{l+1} - v_l\|^2)}{2 \langle F u_l - F v_l, \, u_{l+1} - v_l \rangle}, \, \varsigma_l \right\}, & \text{if } \langle F u_l - F v_l, \, u_{l+1} - v_l \rangle > 0, \\ \varsigma_l, & \text{otherwise}. \end{cases}$$
It was proven in [12] that $\{u_l\}$ and $\{v_l\}$ converge weakly to a solution of VIP (1).
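The adaptive stepsize update above can be sketched directly: it never increases the stepsize and, when the inner product in the denominator is positive, caps it by a local curvature-style estimate. All names and data below are illustrative assumptions.

```python
import numpy as np

def next_stepsize(sigma, mu, u, v, u_next, Fu, Fv):
    """One update of the adaptive stepsize in Yang et al.'s rule (sketch)."""
    denom = (Fu - Fv) @ (u_next - v)
    if denom > 0.0:
        num = mu * ((u - v) @ (u - v) + (u_next - v) @ (u_next - v))
        return min(num / (2.0 * denom), sigma)
    return sigma                          # otherwise keep the previous stepsize
```

Because the update only ever takes a minimum, the resulting sequence is non-increasing and, for a Lipschitzian operator, stays bounded below by a positive constant; this is the same mechanism exploited by Lemma 9 below for the manifold setting.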
On the other hand, suppose that $C$ is a nonempty, convex and closed subset of a Hadamard manifold $M$, and $A : M \to TM$ is a vector field, that is, $A u \in T_u M$ for all $u \in M$. In 2003, Németh [13] introduced the new VIP of finding $u^* \in C$ s.t.:
$$\langle A u^*, \exp_{u^*}^{-1} u \rangle \ge 0 \quad \forall u \in C, \tag{4}$$
where $\exp^{-1}$ is the inverse of the exponential map. The solution set of VIP (4) is denoted by $S$. Subsequently, several rules and methods have been extended from Euclidean spaces to Riemannian manifolds because of some important advantages of this extension; see, e.g., [14,15,16,17]. Furthermore, inspired by the SVI (3) and the multiobjective optimization problem in [17], Ceng et al. [18] introduced a system of multiobjective optimization problems (SMOP) in a Hadamard manifold and devised a parallel proximal point rule for solving the SMOP.
It is remarkable that research on algorithms for VIP (4) has mainly focused on the proximal point algorithm [19] and Korpelevich's extragradient rule [20]. Very recently, Chen et al. [9] suggested a modified Tseng's extragradient method to solve VIP (4). Moreover, their results gave an affirmative answer to the open question put forth in [21].
Let $C$ be a nonempty closed convex subset of a Hadamard manifold $M$, and let $A_k : M \to TM$ be a vector field for $k = 1, 2$, i.e., $A_k u \in T_u M$ for all $u \in M$. Motivated by problems (3) and (4), Ceng et al. [22] introduced the new SVI of finding $(u^*, v^*) \in C \times C$ s.t.
$$\begin{cases} \langle -\exp_{v^*}^{-1} u^* + \mu_1 A_1 v^*, \, \exp_{u^*}^{-1} u \rangle \ge 0 & \forall u \in C, \\ \langle -\exp_{u^*}^{-1} v^* + \mu_2 A_2 u^*, \, \exp_{v^*}^{-1} v \rangle \ge 0 & \forall v \in C, \end{cases} \tag{5}$$
where the constants $\mu_1, \mu_2 \in (0, \infty)$, and $\exp^{-1}$ is the inverse of the exponential map. In particular, if $A_1 = A_2 = A$ and $u^* = v^*$, then SVI (5) reduces to VIP (4).
In this paper, we design two parallel algorithms for solving the SVI (5) via the subgradient extragradient approach, where each algorithm consists of two mutually symmetric parts. It is proven that, if the underlying vector fields are monotone, then the sequences constructed by these algorithms converge to a solution of the SVI (5). We also discuss applications of these algorithms to the approximation of solutions of the VIP (4). Our results improve and extend the corresponding results announced in [8,9,12,22].
The remainder of the paper is organized as follows. Some preliminary concepts, notations, important lemmas, and propositions from Riemannian geometry are recalled in Section 2; most of them can be found in standard textbooks on Riemannian geometry (e.g., [23]). Two new parallel algorithms based on the modified subgradient extragradient approach [12] are proposed for SVI (5), and the corresponding convergence theorems are proved in Section 3.

2. Preliminaries

Let $M$ denote a simply connected and finite-dimensional differentiable manifold. A differentiable manifold $M$ endowed with a Riemannian metric is called a Riemannian manifold. We denote by $T_\upsilon M$ the tangent space of $M$ at $\upsilon \in M$, by $\langle \cdot, \cdot \rangle_\upsilon$ the scalar product on $T_\upsilon M$ with the associated norm $\| \cdot \|_\upsilon$, where the subscript $\upsilon$ is sometimes omitted, and by $TM := \bigcup_{\upsilon \in M} T_\upsilon M$ the tangent bundle of $M$, which is itself a manifold. For a piecewise smooth curve $\gamma : [a, b] \to M$ joining $\upsilon$ to $\omega$ (i.e., $\gamma(a) = \upsilon$ and $\gamma(b) = \omega$), we define the length $l(\gamma) = \int_a^b \|\gamma'(t)\| \, dt$. The Riemannian distance $d(\upsilon, \omega)$, which induces the original topology on $M$, is then defined by minimizing this length over the set of all such curves joining $\upsilon$ to $\omega$.
Suppose that the Levi–Civita connection $\nabla$ is associated with the Riemannian metric and that the smooth curve $\gamma$ lies in $M$. A vector field $X$ is said to be parallel along $\gamma$ iff $\nabla_{\gamma'} X = 0$. If $\gamma'$ itself is parallel along $\gamma$, then $\gamma$ is called a geodesic, and in this case $\|\gamma'\|$ is constant. It is worth noting that this notion differs from the corresponding one in the calculus of variations; in particular, if $\|\gamma'\| = 1$, then $\gamma$ is said to be normalized. A geodesic joining $\upsilon$ to $\omega$ in $M$ is called minimal if its length equals $d(\upsilon, \omega)$.
Let $M$ be a Riemannian manifold. $M$ is said to be complete iff, for each $\upsilon \in M$, all geodesics emanating from $\upsilon$ are defined for all $t \in \mathbb{R} := (-\infty, \infty)$. By the Hopf–Rinow Theorem, if $M$ is complete, then each pair of points in $M$ can be joined by a minimal geodesic. In this case, $(M, d)$ is a complete metric space, and bounded closed subsets of $M$ are compact.
We denote by $P_{\gamma, \cdot, \cdot}$ the parallel transport on the tangent bundle $TM$ along $\gamma$ w.r.t. $\nabla$, defined by
$$P_{\gamma, \gamma(b), \gamma(a)}(\upsilon) = V(\gamma(b)) \quad \forall a, b \in \mathbb{R}, \; \upsilon \in T_{\gamma(a)} M,$$
where $V$ is the unique vector field such that $\nabla_{\gamma'(t)} V = 0$ for each $t$ and $V(\gamma(a)) = \upsilon$. Then, for any $a, b \in \mathbb{R}$, $P_{\gamma, \gamma(b), \gamma(a)}$ is an isometry from $T_{\gamma(a)} M$ to $T_{\gamma(b)} M$. For convenience, we write $P_{\omega, \upsilon}$ instead of $P_{\gamma, \omega, \upsilon}$ when $\gamma$ is a minimal geodesic joining $\upsilon$ to $\omega$.
Let $M$ be complete. The exponential map $\exp_\upsilon : T_\upsilon M \to M$ at $\upsilon$ is defined by $\exp_\upsilon \omega = \gamma_\omega(1, \upsilon)$ for each $\omega \in T_\upsilon M$, where $\gamma(\cdot) = \gamma_\omega(\cdot, \upsilon)$ is the geodesic starting at $\upsilon$ with velocity $\omega$. Then, $\exp_\upsilon t\omega = \gamma_\omega(t, \upsilon)$ for each real number $t$. It is worth emphasizing that the mapping $\exp_\upsilon$ is differentiable on $T_\upsilon M$ for each $\upsilon \in M$. The exponential map has an inverse $\exp_\upsilon^{-1} : M \to T_\upsilon M$, and the geodesic joining $\upsilon$ to $\omega$ is the unique shortest path, with $\|\exp_\upsilon^{-1} \omega\| = \|\exp_\omega^{-1} \upsilon\| = d(\upsilon, \omega)$, where $d(\upsilon, \omega)$ is the geodesic distance between $\upsilon$ and $\omega$ in $M$.
A set $D \subset M$ is said to be convex if, for every $y, z \in D$, the geodesic joining $y$ to $z$ lies in $D$, i.e., if $\gamma : [a, b] \to M$ is a geodesic satisfying $y = \gamma(a)$ and $z = \gamma(b)$, then $\gamma((1 - t)a + tb) \in D$ for all $t \in [0, 1]$. From now on, we denote by $D$ a nonempty closed convex set in $M$, and by $P_D$ the projection of $M$ onto $D$, i.e.,
$$P_D(y) = \{ y_0 \in D : d(y, y_0) \le d(y, z) \text{ for all } z \in D \} \quad \forall y \in M.$$
A real-valued function $f$ defined on $M$ is said to be convex if, for each geodesic $\gamma$ of $M$, the composite function $f \circ \gamma : \mathbb{R} \to \mathbb{R}$ is convex, i.e.,
$$(f \circ \gamma)(sa + (1 - s)b) \le s (f \circ \gamma)(a) + (1 - s)(f \circ \gamma)(b) \quad \forall a, b \in \mathbb{R}, \; s \in [0, 1].$$
A Hadamard manifold $M$ is a complete simply connected Riemannian manifold of non-positive sectional curvature. If $M$ is a Hadamard manifold, then $\exp_\upsilon^{-1} : M \to T_\upsilon M$ is a diffeomorphism for each $\upsilon \in M$, and, for any $\upsilon, \omega \in M$, there exists a unique minimal geodesic joining $\upsilon$ to $\omega$. In what follows, we always assume that $M$ is a Hadamard manifold.
Proposition 1
(see [23]). Let $\upsilon \in M$. Then, $\exp_\upsilon : T_\upsilon M \to M$ is a diffeomorphism, and, for any points $\upsilon, \omega \in M$, there exists a unique normalized geodesic joining $\upsilon$ to $\omega$, which is in fact a minimal geodesic.
The above proposition shows that $M$ is diffeomorphic to the Euclidean space $\mathbb{R}^m$. Thus, $M$ has the same topology and differential structure as $\mathbb{R}^m$. Moreover, Hadamard manifolds and Euclidean spaces share several similar geometrical properties.
Definition 1
(see [20]). Let $\mathcal{X}(M)$ denote the set of all single-valued vector fields $V : M \to TM$ s.t. $V(\upsilon) \in T_\upsilon M$ for all $\upsilon \in M$, and let the domain $\mathcal{D}(V)$ of $V$ be defined by $\mathcal{D}(V) = \{ \upsilon \in M : V(\upsilon) \ne \emptyset \}$. Let $V \in \mathcal{X}(M)$. Then, $V$ is said to be pseudomonotone if, for each $\upsilon, \omega \in \mathcal{D}(V)$,
$$\langle V(\upsilon), \exp_\upsilon^{-1} \omega \rangle \ge 0 \;\Longrightarrow\; \langle V(\omega), \exp_\omega^{-1} \upsilon \rangle \le 0.$$
A geodesic triangle $\Delta(p_1, p_2, p_3)$ of a Riemannian manifold is a set consisting of three points $p_1$, $p_2$ and $p_3$, and three minimal geodesics $\gamma_i$ joining $p_i$ to $p_{i+1}$, with $i = 1, 2, 3 \pmod 3$.
Proposition 2
(see [23] (Comparison theorem for triangles)). Suppose that $\Delta(p_1, p_2, p_3)$ is a geodesic triangle. For each $i = 1, 2, 3 \pmod 3$, denote by $\gamma_i : [0, l_i] \to M$ the geodesic joining $p_i$ to $p_{i+1}$, put $l_i = L(\gamma_i)$, and set $\alpha_i := \angle(\gamma_i'(0), -\gamma_{i-1}'(l_{i-1}))$. Then,
(i) 
$\alpha_1 + \alpha_2 + \alpha_3 \le \pi$;
(ii) 
$l_i^2 + l_{i+1}^2 - 2 l_i l_{i+1} \cos \alpha_{i+1} \le l_{i-1}^2$;
(iii) 
$l_{i+1} \cos \alpha_{i+2} + l_i \cos \alpha_i \ge l_{i+2}$.
In terms of the distance and the exponential map, inequality (ii) in Proposition 2 can be rewritten as
$$d^2(p_i, p_{i+1}) + d^2(p_{i+1}, p_{i+2}) - 2 \langle \exp_{p_{i+1}}^{-1} p_i, \exp_{p_{i+1}}^{-1} p_{i+2} \rangle \le d^2(p_{i-1}, p_i),$$
owing to the fact that
$$\langle \exp_{p_{i+1}}^{-1} p_i, \exp_{p_{i+1}}^{-1} p_{i+2} \rangle = d(p_i, p_{i+1}) \, d(p_{i+1}, p_{i+2}) \cos \alpha_{i+1}.$$
Lemma 2
(see [24]). Let $u_0 \in M$ and $\{u_n\} \subset M$ with $u_n \to u_0$. Then, the following hold:
(i) 
$\exp_{u_n}^{-1} v \to \exp_{u_0}^{-1} v$ and $\exp_v^{-1} u_n \to \exp_v^{-1} u_0$ for all $v \in M$;
(ii) 
if $y_n \in T_{u_n} M$ and $y_n \to y_0$, then $y_0 \in T_{u_0} M$;
(iii) 
given $p_n, q_n \in T_{u_n} M$ and $p_0, q_0 \in T_{u_0} M$, if $p_n \to p_0$ and $q_n \to q_0$, then $\langle p_n, q_n \rangle \to \langle p_0, q_0 \rangle$;
(iv) 
for each $\upsilon \in T_{u_0} M$, the map $F : M \to TM$ defined by $F(u) = P_{u, u_0} \upsilon$ for all $u \in M$ is continuous on $M$.
For each $u \in M$ and each nonempty closed convex set $C \subset M$, there is a unique point $u_0 \in C$ satisfying $d(u, u_0) \le d(u, v)$ for all $v \in C$. This point is called the projection of $u$ onto the convex set $C$ and is denoted by $P_C(u)$.
Proposition 3
(see [25]). For each $u \in M$, the following inequality holds:
$$\langle \exp_{P_C(u)}^{-1} u, \exp_{P_C(u)}^{-1} v \rangle \le 0 \quad \forall v \in C.$$
Proposition 4
(see [20]). Let $C \subset M$ be closed and convex. Then, the metric projection $P_C$ is nonexpansive, i.e., $d(P_C(p), P_C(q)) \le d(p, q)$ for all $p, q \in M$.
Lemma 3
(see [21]). Assume that $M$ has constant curvature, $u \in M$ and $\varrho \in T_u M$. Then, the set $L_{u, \varrho} := \{ v \in M : \langle \exp_u^{-1} v, \varrho \rangle \le 0 \}$ is convex.
Lemma 4
(see [20]). Suppose that $C$ is a nonempty closed convex subset of a Hadamard manifold $M$. Then, $d^2(P_C(p), q) \le d^2(p, q) - d^2(p, P_C(p))$ for all $p \in M$, $q \in C$.
Lemma 5
(see [13]). Let $A$ be a continuous and monotone vector field on $C$, and let $z \in C$ be given. Then, $\langle A z, \exp_z^{-1} \upsilon \rangle \ge 0$ for all $\upsilon \in C$ if and only if $\langle A \upsilon, \exp_\upsilon^{-1} z \rangle \le 0$ for all $\upsilon \in C$.
From Proposition 3, it is easy to see that the following holds:
Proposition 5
(see [20]). The following assertions are equivalent:
(i) 
u * solves the VIP (4);
(ii) 
$u^* = P_C(\exp_{u^*}(-\beta_0 A u^*))$ for some $\beta_0 > 0$;
(iii) 
$u^* = P_C(\exp_{u^*}(-\beta A u^*))$ for all $\beta > 0$;
(iv) 
$r(u^*, \beta) = 0$, where $r(u^*, \beta) = \exp_{u^*}^{-1}[P_C(\exp_{u^*}(-\beta A u^*))]$.
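In the Euclidean special case ($\exp_u v = u + v$), the natural residual in (iv) is easy to compute and gives a practical stopping test: up to sign it is $u - P_C(u - \beta A u)$, which vanishes exactly at solutions. The vector field and feasible set below are illustrative assumptions, not data from the article.

```python
import numpy as np

def residual(A, project_C, u, beta):
    """Euclidean sketch of the natural residual: zero exactly at VIP solutions."""
    return u - project_C(u - beta * A(u))

# Illustrative monotone field on C = R^2_+, with unique solution (1, 1).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
A = lambda v: M @ v + q
clip = lambda v: np.maximum(v, 0.0)
```

Evaluating the residual at a candidate point and comparing its norm with a tolerance is the usual way item (iv) is used in practice.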
The following two lemmas play a crucial role in the convergence derivation of the algorithms.
Lemma 6
(see [26]). Suppose that $\Delta(u, v, w)$ is a geodesic triangle in a Hadamard manifold $M$. Then, there exist points $u', v', w' \in \mathbb{R}^2$ s.t.
$$d(u, v) = \|u' - v'\|, \qquad d(v, w) = \|v' - w'\| \qquad \text{and} \qquad d(w, u) = \|w' - u'\|.$$
The triangle $\Delta(u', v', w')$ is called the comparison triangle of $\Delta(u, v, w)$; it is unique up to an isometry of $\mathbb{R}^2$. The following result can be proved by elementary geometry and is also a direct application of Alexandrov's Lemma in $\mathbb{R}^2$ (see [27]). It describes the relationship between the two triangles $\Delta(u, v, w)$ and $\Delta(u', v', w')$ in terms of angles and distances between points.
Lemma 7
(see [28]). Let $\Delta(u, v, w)$ be a geodesic triangle in a Hadamard manifold $M$ and $\Delta(u', v', w')$ its comparison triangle.
(i) 
Assume that $\alpha, \beta, \gamma$ (resp., $\alpha', \beta', \gamma'$) are the angles of $\Delta(u, v, w)$ (resp., $\Delta(u', v', w')$) at the vertices $u, v, w$ (resp., $u', v', w'$). Then, the inequalities $\alpha \le \alpha'$, $\beta \le \beta'$ and $\gamma \le \gamma'$ hold.
(ii) 
Assume that the point $z$ lies on the geodesic joining $u$ to $v$ and that $z'$ is its comparison point in the segment $[u', v']$, satisfying $d(z, u) = \|z' - u'\|$ and $d(z, v) = \|z' - v'\|$. Then, the inequality $d(z, w) \le \|z' - w'\|$ holds.
Definition 2
(see [9]). A vector field $f$ defined on a complete Riemannian manifold $M$ is said to be Lipschitzian if there exists $L > 0$ s.t.
$$d(f(\upsilon), f(\omega)) \le L \, d(\upsilon, \omega) \quad \forall \upsilon, \omega \in M. \tag{7}$$
In addition, if for each $\upsilon_0 \in M$ there exist $L(\upsilon_0) > 0$ and $\sigma = \sigma(\upsilon_0) > 0$ s.t. (7) holds, with $L = L(\upsilon_0)$, for all $\upsilon, \omega \in B_\sigma(\upsilon_0) := \{ z \in M : d(\upsilon_0, z) < \sigma \}$, then $f$ is said to be locally Lipschitzian.
Finally, by an argument similar to the one transforming SVI (3) into the FPP in [8], we obtain the following.
Lemma 8
(see [22], Lemma 5). A pair $(p^*, q^*) \in C \times C$ is a solution of SVI (5) if and only if $p^*$ is a fixed point of the mapping $G := P_C(\exp_I(-\mu_1 A_1)) \circ P_C(\exp_I(-\mu_2 A_2))$, i.e., $p^* \in \operatorname{Fix}(G)$, where $q^* = P_C(\exp_I(-\mu_2 A_2)) p^*$.

3. Algorithms and Convergence Criteria

In this section, inspired by the algorithms in [9], we propose two new parallel algorithms for solving SVI (5) on Hadamard manifolds via the modified subgradient extragradient approach of [12].
From now on, the following assumptions are always adopted:
Hypothesis 1 (H1).
The solution set of SVI (5), denoted by $S$, is nonempty.
Hypothesis 2 (H2).
$A_1, A_2 : M \to TM$ are vector fields, i.e., $A_k u \in T_u M$ for all $u \in M$ and $k = 1, 2$.
Hypothesis 3 (H3).
$A_1$ and $A_2$ are both monotone, i.e., for $k = 1, 2$, $\langle A_k x - A_k y, \exp_y^{-1} x \rangle \ge 0$ for all $x, y \in M$.
Hypothesis 4 (H4).
$A_1$ and $A_2$ are both Lipschitzian with constants $L_1, L_2 > 0$, i.e., for $k = 1, 2$, there exists $L_k > 0$ s.t.
$$d(A_k x, A_k y) \le L_k \, d(x, y) \quad \forall x, y \in M.$$
Next, we recall the notion of Fejér convergence and a related result.
Definition 3
(see [29]). Suppose that $X$ is a complete metric space and $C \subset X$ is a nonempty set. Then, a sequence $\{x_l\} \subset X$ is said to be Fejér convergent to $C$ if $d(x_{l+1}, y) \le d(x_l, y)$ for all $y \in C$ and $l \ge 0$.
Proposition 6
(see [24]). Suppose that $X$ is a complete metric space and $C \subset X$ is a nonempty set. Let $\{x_l\} \subset X$ be Fejér convergent to $C$, and assume that every cluster point of $\{x_l\}$ belongs to $C$. Then, $\{x_l\}$ converges to a point of $C$.

3.1. The First Parallel Algorithm

Algorithm 1 is the first parallel algorithm for the SVI.
Algorithm 1: The first parallel algorithm for the SVI.
Initialization: Given $x_0 \in M$ arbitrarily. Let $\mu_{k,0} > 0$ and $\lambda_k \in (0, 1)$ for $k = 1, 2$.
Iteration Steps: Compute $x_{n+1}$ as follows:
Step 1. Compute
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A_1 z_n)).$$
Step 2. Construct
$$C_{2,n} = \{ x \in M : \langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 x_n, \exp_{\tilde z_n}^{-1} x \rangle \le 0 \}, \qquad C_{1,n} = \{ x \in M : \langle \exp_{y_n}^{-1} z_n - \mu_{1,n} A_1 z_n, \exp_{y_n}^{-1} x \rangle \le 0 \},$$
and calculate
$$z_n = P_{C_{2,n}}(\exp_{x_n}(-\mu_{2,n} A_2 \tilde z_n)), \qquad x_{n+1} = P_{C_{1,n}}(\exp_{z_n}(-\mu_{1,n} A_1 y_n)).$$
Step 3. Calculate
$$\mu_{2,n+1} = \begin{cases} \min\left\{ \dfrac{\lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n))}{2 \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle}, \, \mu_{2,n} \right\} & \text{if } \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle > 0, \\ \mu_{2,n} & \text{otherwise}, \end{cases}$$
$$\mu_{1,n+1} = \begin{cases} \min\left\{ \dfrac{\lambda_1 (d^2(z_n, y_n) + d^2(x_{n+1}, y_n))}{2 \langle A_1 z_n - A_1 y_n, \exp_{y_n}^{-1} x_{n+1} \rangle}, \, \mu_{1,n} \right\} & \text{if } \langle A_1 z_n - A_1 y_n, \exp_{y_n}^{-1} x_{n+1} \rangle > 0, \\ \mu_{1,n} & \text{otherwise}. \end{cases} \tag{8}$$
Then, set $n := n + 1$ and go to Step 1.
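To make the structure of Algorithm 1 concrete, here is a Euclidean sketch ($M = \mathbb{R}^m$, so $\exp_x(v) = x + v$ and $\exp_x^{-1} y = y - x$). For brevity, constant stepsizes below $1/L$ replace the adaptive update (8), and the problem data are illustrative assumptions ($A_1 = A_2$ a monotone skew affine field, $C = \mathbb{R}^2_+$), not an example from the article.

```python
import numpy as np

def project_halfspace(y, a, q):
    """Project y onto {x : <a, x - q> <= 0}; if a = 0 the set is the whole space."""
    if a @ a == 0.0:
        return y
    viol = a @ (y - q)
    return y if viol <= 0.0 else y - (viol / (a @ a)) * a

def parallel_subgrad_extragradient(A1, A2, project_C, x0, mu1, mu2, iters=600):
    x, z = x0.copy(), x0.copy()
    for _ in range(iters):
        # First (symmetric) chain: z~_n from x_n, then z_n via the half-space C_{2,n}
        zt = project_C(x - mu2 * A2(x))
        z = project_halfspace(x - mu2 * A2(zt), x - mu2 * A2(x) - zt, zt)
        # Second chain: y_n from z_n, then x_{n+1} via the half-space C_{1,n}
        y = project_C(z - mu1 * A1(z))
        x = project_halfspace(z - mu1 * A1(y), z - mu1 * A1(z) - y, y)
        # Step 3 of Algorithm 1 would now update mu1, mu2 via (8);
        # fixed stepsizes are kept here for brevity.
    return x, z

# Illustrative data: skew affine field (monotone, 1-Lipschitzian), C = R^2_+.
Mat = np.array([[0.0, 1.0], [-1.0, 0.0]])
qv = np.array([-1.0, 1.0])
A = lambda v: Mat @ v + qv
clip = lambda v: np.maximum(v, 0.0)

x_star, z_star = parallel_subgrad_extragradient(A, A, clip, np.zeros(2), 0.5, 0.5)
```

With $A_1 = A_2$ this sketch coincides with the Euclidean case of Algorithm 2, and both components approach the unique VIP solution $(1, 1)$ of the illustrative problem.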
In particular, setting $A_1 = A_2 = A$ in Algorithm 1, we obtain the following algorithm (Algorithm 2) for solving VIP (4).
Algorithm 2: The first parallel algorithm for the VIP.
Initialization: Given $x_0 \in M$ arbitrarily, let $\mu_{k,0} > 0$ and $\lambda_k \in (0, 1)$ for $k = 1, 2$.
Iteration Steps: Compute $x_{n+1}$ as follows:
Step 1. Compute
$$\tilde z_n = P_C(\exp_{x_n}(-\mu_{2,n} A x_n)), \qquad y_n = P_C(\exp_{z_n}(-\mu_{1,n} A z_n)).$$
Step 2. Construct
$$C_{2,n} = \{ x \in M : \langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A x_n, \exp_{\tilde z_n}^{-1} x \rangle \le 0 \}, \qquad C_{1,n} = \{ x \in M : \langle \exp_{y_n}^{-1} z_n - \mu_{1,n} A z_n, \exp_{y_n}^{-1} x \rangle \le 0 \},$$
and calculate
$$z_n = P_{C_{2,n}}(\exp_{x_n}(-\mu_{2,n} A \tilde z_n)), \qquad x_{n+1} = P_{C_{1,n}}(\exp_{z_n}(-\mu_{1,n} A y_n)).$$
Step 3. Calculate
$$\mu_{2,n+1} = \begin{cases} \min\left\{ \dfrac{\lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n))}{2 \langle A x_n - A \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle}, \, \mu_{2,n} \right\} & \text{if } \langle A x_n - A \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle > 0, \\ \mu_{2,n} & \text{otherwise}, \end{cases}$$
$$\mu_{1,n+1} = \begin{cases} \min\left\{ \dfrac{\lambda_1 (d^2(z_n, y_n) + d^2(x_{n+1}, y_n))}{2 \langle A z_n - A y_n, \exp_{y_n}^{-1} x_{n+1} \rangle}, \, \mu_{1,n} \right\} & \text{if } \langle A z_n - A y_n, \exp_{y_n}^{-1} x_{n+1} \rangle > 0, \\ \mu_{1,n} & \text{otherwise}. \end{cases}$$
Then, set $n := n + 1$ and go to Step 1.
Lemma 9.
For $k = 1, 2$, the sequence $\{\mu_{k,n}\}$ generated by Algorithm 1 is monotonically decreasing with lower bound $\min\{ \frac{\lambda_k}{L_k}, \mu_{k,0} \}$.
Proof. 
It is clear that $\{\mu_{k,n}\}$ is a monotonically decreasing sequence for $k = 1, 2$. Since $A_k$ is Lipschitzian with constant $L_k > 0$ for $k = 1, 2$, in the case of $\langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle > 0$, we have
$$\frac{\lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n))}{2 \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle} \ge \frac{2 \lambda_2 \, d(x_n, \tilde z_n) \, d(z_n, \tilde z_n)}{2 \, d(A_2 x_n, A_2 \tilde z_n) \, d(z_n, \tilde z_n)} \ge \frac{\lambda_2 \, d(x_n, \tilde z_n)}{L_2 \, d(x_n, \tilde z_n)} = \frac{\lambda_2}{L_2}.$$
Thus, the sequence $\{\mu_{2,n}\}$ has the lower bound $\min\{ \frac{\lambda_2}{L_2}, \mu_{2,0} \}$. In a similar way, we can show that $\{\mu_{1,n}\}$ has the lower bound $\min\{ \frac{\lambda_1}{L_1}, \mu_{1,0} \}$.   □
Corollary 1.
For $k = 1, 2$, the sequence $\{\mu_{k,n}\}$ generated by Algorithm 2 is monotonically decreasing with lower bound $\min\{ \frac{\lambda_k}{L}, \mu_{k,0} \}$.
Lemma 10.
Let $\{x_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 1. Then, $\{x_n\}$ and $\{z_n\}$ are bounded, provided that, for all $(p, q) \in S$ and $n \ge n_0$,
$$\Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(x_n, \tilde z_n) + \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(z_n, y_n) + 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle + 2 \mu_{1,n} \langle A_1 p, \exp_p^{-1} y_n \rangle \ge 0,$$
$$\Big(1 - \frac{\mu_{2,n+1}}{\mu_{2,n+2}} \lambda_2\Big) d^2(x_{n+1}, \tilde z_{n+1}) + \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(z_n, y_n) + 2 \mu_{2,n+1} \langle A_2 q, \exp_q^{-1} \tilde z_{n+1} \rangle + 2 \mu_{1,n} \langle A_1 q, \exp_q^{-1} y_n \rangle \ge 0.$$
Proof. 
Take a pair $(p, q) \in C \times C$ arbitrarily but fixed. Then, from the monotonicity of $A_2$, we get $\langle A_2 \tilde z_n - A_2 p, \exp_p^{-1} \tilde z_n \rangle \ge 0$, which hence yields $\langle A_2 \tilde z_n, \exp_p^{-1} \tilde z_n \rangle \ge \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle$. That is, $\langle A_2 \tilde z_n, \exp_{z_n}^{-1} \tilde z_n + \exp_p^{-1} z_n \rangle \ge \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle$ for all $n \ge 0$. Thus, it immediately follows that
$$\langle A_2 \tilde z_n, \exp_{z_n}^{-1} p \rangle \le \langle A_2 \tilde z_n, \exp_{z_n}^{-1} \tilde z_n \rangle - \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle \quad \forall n \ge 0. \tag{10}$$
By the definition of $C_{2,n}$, we have $\langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 x_n, \exp_{\tilde z_n}^{-1} z_n \rangle \le 0$. Then,
$$\langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle = \langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 x_n, \exp_{\tilde z_n}^{-1} z_n \rangle + \mu_{2,n} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle \le \mu_{2,n} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle. \tag{11}$$
Now, fixing $n \ge 0$, we consider the geodesic triangle $\Delta(x_n, \tilde z_n, p)$ and its comparison triangle $\Delta(x_n', \tilde z_n', p')$. Then, $d(x_n, p) = \|x_n' - p'\|$, $d(\tilde z_n, p) = \|\tilde z_n' - p'\|$, and $d(x_n, \tilde z_n) = \|x_n' - \tilde z_n'\|$. Recall from Algorithm 1 that $z_n = P_{C_{2,n}}(\exp_{x_n}(-\mu_{2,n} A_2 \tilde z_n))$; the comparison point of $z_n$ is $z_n' = P_{C_{2,n}'}(x_n' - \mu_{2,n} A_2 \tilde z_n)$. Thus, in $\Delta(x_n', \tilde z_n', p')$, (10) and (11) can be rewritten as
$$\langle A_2 \tilde z_n, p' - z_n' \rangle + \langle A_2 p, \tilde z_n' - p' \rangle \le \langle A_2 \tilde z_n, \tilde z_n' - z_n' \rangle, \tag{12}$$
$$\langle x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n', z_n' - \tilde z_n' \rangle \le \mu_{2,n} \langle A_2 x_n - A_2 \tilde z_n, z_n' - \tilde z_n' \rangle. \tag{13}$$
Then, by Lemma 7 (ii), (12) and Lemma 4, we have
$$\begin{aligned} d^2(z_n, p) &\le \|z_n' - p'\|^2 = \|P_{C_{2,n}'}(x_n' - \mu_{2,n} A_2 \tilde z_n) - p'\|^2 \le \|x_n' - \mu_{2,n} A_2 \tilde z_n - p'\|^2 - \|x_n' - \mu_{2,n} A_2 \tilde z_n - z_n'\|^2 \\ &= \|x_n' - p'\|^2 - \|x_n' - z_n'\|^2 + 2 \mu_{2,n} \langle A_2 \tilde z_n, p' - z_n' \rangle \\ &\le \|x_n' - p'\|^2 - \|x_n' - z_n'\|^2 + 2 \mu_{2,n} \big( \langle A_2 \tilde z_n, \tilde z_n' - z_n' \rangle - \langle A_2 p, \tilde z_n' - p' \rangle \big) \\ &= \|x_n' - p'\|^2 - \|x_n' - \tilde z_n'\|^2 - \|\tilde z_n' - z_n'\|^2 + 2 \langle x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n', z_n' - \tilde z_n' \rangle - 2 \mu_{2,n} \langle A_2 p, \tilde z_n' - p' \rangle \\ &= d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + 2 \langle x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n', z_n' - \tilde z_n' \rangle - 2 \mu_{2,n} \langle A_2 p, \tilde z_n' - p' \rangle. \end{aligned} \tag{14}$$
Consider a geodesic triangle $\Delta(a, b, c)$ and its comparison triangle $\Delta(a', b', c')$, and set $a = \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 \tilde z_n$ and $b = \exp_{\tilde z_n}^{-1} z_n$ (resp., $a' = x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n'$ and $b' = z_n' - \tilde z_n'$). Let $\beta$ and $\beta'$ denote the angles at $c$ and $c'$, respectively. Then, $\beta \le \beta'$ by Lemma 7, and so $\cos \beta \ge \cos \beta'$. Then, by Proposition 2 and Lemma 6, we have
$$\langle a', b' \rangle = \|a'\| \|b'\| \cos \beta' = \|a\| \|b\| \cos \beta' \le \|a\| \|b\| \cos \beta = \langle a, b \rangle.$$
Hence,
$$\langle x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n', z_n' - \tilde z_n' \rangle \le \langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle. \tag{15}$$
Similarly, we get $-2 \mu_{2,n} \langle A_2 p, \tilde z_n' - p' \rangle \le -2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle$. This, together with (14) and (15), implies that
$$\begin{aligned} d^2(z_n, p) &\le d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + 2 \langle x_n' - \mu_{2,n} A_2 \tilde z_n - \tilde z_n', z_n' - \tilde z_n' \rangle - 2 \mu_{2,n} \langle A_2 p, \tilde z_n' - p' \rangle \\ &\le d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + 2 \langle \exp_{\tilde z_n}^{-1} x_n - \mu_{2,n} A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle. \end{aligned} \tag{16}$$
Combining (11) and (16), we get
$$\begin{aligned} d^2(z_n, p) &\le d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + 2 \mu_{2,n} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle \\ &= d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + 2 \frac{\mu_{2,n}}{\mu_{2,n+1}} \mu_{2,n+1} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle. \end{aligned} \tag{17}$$
By the definition of $\mu_{2,n+1}$, if $\langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle > 0$, then
$$2 \frac{\mu_{2,n}}{\mu_{2,n+1}} \mu_{2,n+1} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle \le \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n));$$
in the case of $\langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle \le 0$, it is clear that
$$2 \frac{\mu_{2,n}}{\mu_{2,n+1}} \mu_{2,n+1} \langle A_2 x_n - A_2 \tilde z_n, \exp_{\tilde z_n}^{-1} z_n \rangle \le 0 \le \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n)).$$
Thus,
$$\begin{aligned} d^2(z_n, p) &\le d^2(x_n, p) - d^2(x_n, \tilde z_n) - d^2(\tilde z_n, z_n) + \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2 (d^2(x_n, \tilde z_n) + d^2(z_n, \tilde z_n)) - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle \\ &= d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(x_n, \tilde z_n) - \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(\tilde z_n, z_n) - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle. \end{aligned} \tag{18}$$
In a similar way, we get
$$d^2(x_{n+1}, q) \le d^2(z_n, q) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(z_n, y_n) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(y_n, x_{n+1}) - 2 \mu_{1,n} \langle A_1 q, \exp_q^{-1} y_n \rangle. \tag{19}$$
Note that $\lim_{n \to \infty} \frac{\mu_{k,n}}{\mu_{k,n+1}} \lambda_k = \lambda_k \in (0, 1)$ for $k = 1, 2$. Hence, there exists $n_0 \ge 0$ such that $\frac{\mu_{k,n}}{\mu_{k,n+1}} \lambda_k \in (0, 1)$ for all $n \ge n_0$ and $k = 1, 2$.
Next, we restrict $(p, q) \in S$. Then, substituting (18) into (19) with $q := p$, we obtain that, for all $n \ge n_0$,
$$d^2(x_{n+1}, p) \le d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(\tilde z_n, x_n) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(y_n, z_n) - 2 \mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde z_n \rangle - 2 \mu_{1,n} \langle A_1 p, \exp_p^{-1} y_n \rangle.$$
This, together with the assumptions, guarantees that $d(x_{n+1}, p) \le d(x_n, p)$ for all $n \ge n_0$. Thus, the sequence $\{x_n\}$ is bounded.
In the same way, substituting (19) into (18) with $n := n + 1$ and $p := q$, we obtain that, for all $n \ge n_0$,
$$d^2(z_{n+1}, q) \le d^2(z_n, q) - \Big(1 - \frac{\mu_{2,n+1}}{\mu_{2,n+2}} \lambda_2\Big) d^2(\tilde z_{n+1}, x_{n+1}) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(y_n, z_n) - 2 \mu_{2,n+1} \langle A_2 q, \exp_q^{-1} \tilde z_{n+1} \rangle - 2 \mu_{1,n} \langle A_1 q, \exp_q^{-1} y_n \rangle.$$
This, together with the assumptions, guarantees that $d(z_{n+1}, q) \le d(z_n, q)$. Thus, the sequence $\{z_n\}$ is bounded.   □
Corollary 2.
Let the sequences $\{x_n\}$ and $\{z_n\}$ be generated by Algorithm 2. Then, $\{x_n\}$ and $\{z_n\}$ are both bounded.
Proof. 
We denote by $S$ the solution set of VIP (4). Take $p \in S$ arbitrarily but fixed. Noticing $A_1 = A_2 = A$, we deduce from (18) and (19) that, for each $n \ge n_0$, $d(x_{n+1}, p) \le d(x_n, p)$, $d(z_{n+1}, p) \le d(z_n, p)$, and
$$d^2(x_{n+1}, p) \le d^2(x_n, p) - \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(x_n, \tilde z_n) - \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(\tilde z_n, z_n) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(z_n, y_n) - \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(y_n, x_{n+1}).$$
Hence, $\{x_n\}$ and $\{z_n\}$ are both bounded. Moreover, it is clear that, for all $n \ge n_0$,
$$\Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(x_n, \tilde z_n) + \Big(1 - \frac{\mu_{2,n}}{\mu_{2,n+1}} \lambda_2\Big) d^2(\tilde z_n, z_n) + \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(z_n, y_n) + \Big(1 - \frac{\mu_{1,n}}{\mu_{1,n+1}} \lambda_1\Big) d^2(y_n, x_{n+1}) \le d^2(x_n, p) - d^2(x_{n+1}, p).$$
Since $\lim_{n \to \infty} \frac{\mu_{i,n}}{\mu_{i,n+1}} \lambda_i = \lambda_i \in (0, 1)$ for $i = 1, 2$, we conclude that $d(x_n, \tilde z_n) \to 0$, $d(\tilde z_n, z_n) \to 0$, $d(z_n, y_n) \to 0$ and $d(y_n, x_{n+1}) \to 0$ as $n \to \infty$.   □
Theorem 1.
Let the sequences $\{x_n\}$ and $\{z_n\}$ be generated by Algorithm 1, and suppose that the conditions in Lemma 10 hold. Then, $\{(x_n, z_n)\}$ converges to a solution of SVI (5), provided $\lim_{n \to \infty} \{ d(x_n, y_n) + d(z_n, \tilde z_n) \} = 0$.
Proof. 
First of all, by Lemma 9, we have $\mu_j := \lim_{n \to \infty} \mu_{j,n} \ge \min\{ \frac{\lambda_j}{L_j}, \mu_{j,0} \}$ for $j = 1, 2$. Moreover, by Lemma 10, we know that $\{z_n\}$ and $\{x_n\}$ are both bounded, and that, for all $n \ge n_0$,
$$d(z_{n+1}, q) \le d(z_n, q) \quad \text{and} \quad d(x_{n+1}, p) \le d(x_n, p) \quad \forall (p, q) \in S.$$
Noticing $\lim_{n \to \infty} \{ d(x_n, y_n) + d(z_n, \tilde z_n) \} = 0$ (due to the assumption), we deduce that $\{\tilde z_n\}$ and $\{y_n\}$ are both bounded. We define the sets $S_1, S_2$ as follows:
$$S_1 = \{ p \in C : \exists q \in C \text{ s.t. } (p, q) \in S \} \quad \text{and} \quad S_2 = \{ q \in C : \exists p \in C \text{ s.t. } (p, q) \in S \}.$$
From Definition 3, we know that $\{x_n\}$ and $\{z_n\}$ are Fejér convergent to $S_1$ and $S_2$, respectively. Let $\bar p$ be a cluster point of $\{x_n\}$. Then, there exists a subsequence $\{x_{n_k}\} \subset \{x_n\}$ such that $\lim_{k \to \infty} x_{n_k} = \bar p$. From the boundedness of $\{z_n\}$, we may assume that $z_{n_k} \to \bar q$ as $k \to \infty$. Since $\lim_{n \to \infty} \{ d(x_n, y_n) + d(z_n, \tilde z_n) \} = 0$, we obtain that $\tilde z_{n_k} \to \bar q$ and $y_{n_k} \to \bar p$. In addition, noticing
$$\tilde z_{n_k} = P_C(\exp_{x_{n_k}}(-\mu_{2,n_k} A_2 x_{n_k})), \qquad y_{n_k} = P_C(\exp_{z_{n_k}}(-\mu_{1,n_k} A_1 z_{n_k})),$$
by Proposition 3, we infer that, for all $x \in C$,
$$\begin{aligned} 0 &\le \langle \exp_{x_{n_k}}^{-1} \tilde z_{n_k} + \mu_{2,n_k} A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x \rangle = \langle \exp_{x_{n_k}}^{-1} \tilde z_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x \rangle + \mu_{2,n_k} \langle A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x \rangle \\ &= \langle \exp_{x_{n_k}}^{-1} \tilde z_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x \rangle + \mu_{2,n_k} \langle A_2 x_{n_k}, \exp_{\tilde z_{n_k}}^{-1} x_{n_k} \rangle + \mu_{2,n_k} \langle A_2 x_{n_k}, \exp_{x_{n_k}}^{-1} x \rangle, \end{aligned} \tag{20}$$
and
$$\begin{aligned} 0 &\le \langle \exp_{z_{n_k}}^{-1} y_{n_k} + \mu_{1,n_k} A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} x \rangle = \langle \exp_{z_{n_k}}^{-1} y_{n_k}, \exp_{y_{n_k}}^{-1} x \rangle + \mu_{1,n_k} \langle A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} x \rangle \\ &= \langle \exp_{z_{n_k}}^{-1} y_{n_k}, \exp_{y_{n_k}}^{-1} x \rangle + \mu_{1,n_k} \langle A_1 z_{n_k}, \exp_{y_{n_k}}^{-1} z_{n_k} \rangle + \mu_{1,n_k} \langle A_1 z_{n_k}, \exp_{z_{n_k}}^{-1} x \rangle. \end{aligned} \tag{21}$$
Note that $\lim_{k \to \infty} \{ d(x_{n_k}, y_{n_k}) + d(z_{n_k}, \tilde z_{n_k}) \} = 0$, the subsequences $\{y_{n_k}\}, \{\tilde z_{n_k}\}$ are bounded, and $\lim_{n \to \infty} \mu_{j,n} = \mu_j > 0$ for $j = 1, 2$. Letting $k \to \infty$, we take the limits in (20) and (21) and hence get
$$0 \le \langle \exp_{\bar p}^{-1} \bar q, \exp_{\bar q}^{-1} x \rangle + \mu_2 \langle A_2 \bar p, \exp_{\bar q}^{-1} \bar p \rangle + \mu_2 \langle A_2 \bar p, \exp_{\bar p}^{-1} x \rangle, \qquad 0 \le \langle \exp_{\bar q}^{-1} \bar p, \exp_{\bar p}^{-1} x \rangle + \mu_1 \langle A_1 \bar q, \exp_{\bar p}^{-1} \bar q \rangle + \mu_1 \langle A_1 \bar q, \exp_{\bar q}^{-1} x \rangle. \tag{22}$$
Therefore,
$$\langle -\exp_{\bar q}^{-1} \bar p + \mu_1 A_1 \bar q, \exp_{\bar p}^{-1} x \rangle \ge 0 \quad \forall x \in C, \qquad \langle -\exp_{\bar p}^{-1} \bar q + \mu_2 A_2 \bar p, \exp_{\bar q}^{-1} x \rangle \ge 0 \quad \forall x \in C.$$
This leads to $(\bar p, \bar q) \in S$, and hence $\bar p \in S_1$. Thus, by Proposition 6, we obtain that $x_n \to \bar p$ as $n \to \infty$.
Next, let $\hat q$ be a cluster point of $\{z_n\}$. Then, there exists a subsequence $\{z_{m_k}\} \subset \{z_n\}$ such that $\lim_{k \to \infty} z_{m_k} = \hat q$. Using the boundedness of $\{x_n\}$, we may assume that $x_{m_k} \to \hat p$ as $k \to \infty$. Thanks to $\lim_{n \to \infty} \{ d(x_n, y_n) + d(z_n, \tilde z_n) \} = 0$, we obtain that $\tilde z_{m_k} \to \hat q$ and $y_{m_k} \to \hat p$. Noticing
$$\tilde z_{m_k} = P_C(\exp_{x_{m_k}}(-\mu_{2,m_k} A_2 x_{m_k})), \qquad y_{m_k} = P_C(\exp_{z_{m_k}}(-\mu_{1,m_k} A_1 z_{m_k})),$$
by similar arguments to those of (22), we deduce that
$$\langle -\exp_{\hat q}^{-1} \hat p + \mu_1 A_1 \hat q, \exp_{\hat p}^{-1} x \rangle \ge 0 \quad \forall x \in C, \qquad \langle -\exp_{\hat p}^{-1} \hat q + \mu_2 A_2 \hat p, \exp_{\hat q}^{-1} x \rangle \ge 0 \quad \forall x \in C.$$
This yields $(\hat p, \hat q) \in S$, and hence $\hat q \in S_2$. Thus, by Proposition 6, we obtain that $z_n \to \hat q$ as $n \to \infty$. Consequently, using the uniqueness of limits, we infer that $\{(x_n, z_n)\}$ converges to a solution $(\hat p, \hat q) \in S$ of SVI (5).   □
Theorem 2.
Suppose that the sequences $\{x_n\}$ and $\{z_n\}$ are generated by Algorithm 2. Then, $\{x_n\}$ and $\{z_n\}$ both converge to a solution of VIP (4).
Proof. 
Using Definition 3 and Corollary 2, we deduce that $\{z_n\}$ and $\{x_n\}$ are both Fejér convergent to the same set $S$. Let $\bar{p}$ be a cluster point of $\{x_n\}$; then there exists $\{x_{n_k}\} \subseteq \{x_n\}$ s.t. $\lim_{k \to \infty} x_{n_k} = \bar{p}$. Then, using $\lim_{k \to \infty} d(x_{n_k}, \tilde{z}_{n_k}) = 0$, we have $\lim_{k \to \infty} \tilde{z}_{n_k} = \bar{p}$. Since $\lim_{k \to \infty} \mu_{2,n_k} = \mu_2$ and $\tilde{z}_{n_k} = P_C(\exp_{x_{n_k}}(-\mu_{2,n_k} A x_{n_k}))$, we obtain $\bar{p} = P_C(\exp_{\bar{p}}(-\mu_2 A \bar{p}))$. Hence, by Proposition 3, we get $\bar{p} \in S$. Thus, from Proposition 6, it follows that $x_n \to \bar{p}$ as $n \to \infty$. Similarly, we can infer that $z_n \to \bar{q}$ as $n \to \infty$ for some $\bar{q} \in S$. Using $\lim_{n \to \infty} \{d(x_n, \tilde{z}_n) + d(\tilde{z}_n, z_n)\} = 0$, we obtain the desired result.   □

3.2. The Second Parallel Algorithm

Algorithm 3 is the second parallel algorithm for the SVI.
Algorithm 3: The second parallel algorithm for the SVI.
Initialization: Given $x_0, y_0, z_0, \tilde{z}_0 \in M$ arbitrarily, let $\mu_{k,0} = \mu_{k,1} > 0$ and $\lambda_k \in (0, \sqrt{2}-1)$ for $k = 1, 2$, and compute
$$z_1 = P_C(\exp_{x_0}(-\mu_{2,0} A_2 \tilde{z}_0)), \quad x_1 = P_C(\exp_{z_0}(-\mu_{1,0} A_1 y_0)),$$
$$\tilde{z}_1 = P_C(\exp_{x_1}(-\mu_{2,1} A_2 y_0)), \quad y_1 = P_C(\exp_{z_1}(-\mu_{1,1} A_1 \tilde{z}_0)).$$
Iteration Steps: Compute $x_{n+1}$ and $z_{n+1}$ ($n \ge 1$) as follows:
Step 1. Construct the half-spaces
$$C_{2,n} = \{x \in M : \langle \exp_{\tilde{z}_n}^{-1} x_n - \mu_{2,n} A_2 \tilde{z}_{n-1}, \exp_{\tilde{z}_n}^{-1} x \rangle \le 0\},$$
$$C_{1,n} = \{x \in M : \langle \exp_{y_n}^{-1} z_n - \mu_{1,n} A_1 y_{n-1}, \exp_{y_n}^{-1} x \rangle \le 0\},$$
and calculate
$$z_{n+1} = P_{C_{2,n}}(\exp_{x_n}(-\mu_{2,n} A_2 \tilde{z}_n)), \quad x_{n+1} = P_{C_{1,n}}(\exp_{z_n}(-\mu_{1,n} A_1 y_n)).$$
Step 2. Calculate
$$\tilde{z}_{n+1} = P_C(\exp_{x_{n+1}}(-\mu_{2,n+1} A_2 y_n)), \quad y_{n+1} = P_C(\exp_{z_{n+1}}(-\mu_{1,n+1} A_1 \tilde{z}_n)),$$
where
$$\mu_{2,n+1} = \begin{cases} \min\Big\{\dfrac{\lambda_2\, d(\tilde{z}_n, \tilde{z}_{n-1})}{d(A_2 \tilde{z}_n, A_2 \tilde{z}_{n-1})}, \mu_{2,n}\Big\} & \text{if } d(A_2 \tilde{z}_n, A_2 \tilde{z}_{n-1}) \ne 0, \\ \mu_{2,n} & \text{otherwise}, \end{cases} \qquad \mu_{1,n+1} = \begin{cases} \min\Big\{\dfrac{\lambda_1\, d(y_n, y_{n-1})}{d(A_1 y_n, A_1 y_{n-1})}, \mu_{1,n}\Big\} & \text{if } d(A_1 y_n, A_1 y_{n-1}) \ne 0, \\ \mu_{1,n} & \text{otherwise}. \end{cases} \tag{24}$$
Again, put $n := n+1$ and go to Step 1.
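For intuition about Step 1, consider the Euclidean prototype $M = \mathbb{R}^m$, where $\exp_x v = x + v$ and $\exp_x^{-1} y = y - x$: there $C_{2,n}$ is an ordinary half-space with anchor $\tilde{z}_n$ and normal $w = (x_n - \tilde{z}_n) - \mu_{2,n} A_2 \tilde{z}_{n-1}$, and the projection $P_{C_{2,n}}$ has a closed form. The following sketch is our own illustration of that projection (the helper names are ours, not from the paper):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def halfspace_project(u, anchor, w, tol=1e-15):
    """Project u onto the half-space {x : <w, x - anchor> <= 0}.

    In the Euclidean reading of Step 1, the anchor is z~_n and the normal is
    w = (x_n - z~_n) - mu_{2,n} * A2(z~_{n-1}); u is moved along -w only when
    the constraint is violated."""
    ww = dot(w, w)
    t = dot(w, [ui - ai for ui, ai in zip(u, anchor)])
    if ww <= tol or t <= 0.0:
        return list(u)                  # u is already feasible
    return [ui - (t / ww) * wi for ui, wi in zip(u, w)]

anchor = [0.0, 0.0]                     # plays the role of z~_n
w = [1.0, 0.0]                          # normal of the half-space C_{2,n}

print(halfspace_project([0.5, 0.3], anchor, w))   # [0.0, 0.3]
print(halfspace_project([-0.2, 0.1], anchor, w))  # [-0.2, 0.1]
```

Unlike $P_C$, this projection never requires an inner solver, which is precisely why the subgradient extragradient approach replaces the second projection onto $C$ by a projection onto a half-space.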
In particular, putting A 1 = A 2 = A in Algorithm 3, we obtain the following algorithm (Algorithm 4) for solving VIP (4).
Algorithm 4: The second parallel algorithm for the VIP.
Initialization: Given $x_0, y_0, z_0, \tilde{z}_0 \in M$ arbitrarily, let $\mu_{k,0} = \mu_{k,1} > 0$ and $\lambda_k \in (0, \sqrt{2}-1)$ for $k = 1, 2$, and compute
$$z_1 = P_C(\exp_{x_0}(-\mu_{2,0} A \tilde{z}_0)), \quad \tilde{z}_1 = P_C(\exp_{z_1}(-\mu_{2,1} A \tilde{z}_0)),$$
$$x_1 = P_C(\exp_{z_0}(-\mu_{1,0} A y_0)), \quad y_1 = P_C(\exp_{x_1}(-\mu_{1,1} A y_0)).$$
Iteration Steps: Compute $x_{n+1}$ and $z_{n+1}$ ($n \ge 1$) as follows:
Step 1. Construct the half-spaces
$$C_{2,n} = \{x \in M : \langle \exp_{\tilde{z}_n}^{-1} x_n - \mu_{2,n} A \tilde{z}_{n-1}, \exp_{\tilde{z}_n}^{-1} x \rangle \le 0\},$$
$$C_{1,n} = \{x \in M : \langle \exp_{y_n}^{-1} z_n - \mu_{1,n} A y_{n-1}, \exp_{y_n}^{-1} x \rangle \le 0\},$$
and calculate
$$z_{n+1} = P_{C_{2,n}}(\exp_{x_n}(-\mu_{2,n} A \tilde{z}_n)), \quad x_{n+1} = P_{C_{1,n}}(\exp_{z_n}(-\mu_{1,n} A y_n)).$$
Step 2. Calculate
$$\tilde{z}_{n+1} = P_C(\exp_{x_{n+1}}(-\mu_{2,n+1} A y_n)), \quad y_{n+1} = P_C(\exp_{z_{n+1}}(-\mu_{1,n+1} A \tilde{z}_n)),$$
where
$$\mu_{2,n+1} = \begin{cases} \min\Big\{\dfrac{\lambda_2\, d(\tilde{z}_n, \tilde{z}_{n-1})}{d(A \tilde{z}_n, A \tilde{z}_{n-1})}, \mu_{2,n}\Big\} & \text{if } d(A \tilde{z}_n, A \tilde{z}_{n-1}) \ne 0, \\ \mu_{2,n} & \text{otherwise}, \end{cases} \qquad \mu_{1,n+1} = \begin{cases} \min\Big\{\dfrac{\lambda_1\, d(y_n, y_{n-1})}{d(A y_n, A y_{n-1})}, \mu_{1,n}\Big\} & \text{if } d(A y_n, A y_{n-1}) \ne 0, \\ \mu_{1,n} & \text{otherwise}. \end{cases}$$
Again, put $n := n+1$ and go to Step 1.
Lemma 11.
For $k = 1, 2$, the sequence $\{\mu_{k,n}\}$ generated by Algorithm 3 is monotonically decreasing and bounded below by $\min\{\lambda_k / L_k, \mu_{k,0}\}$.
Proof. 
It is clear that $\{\mu_{k,n}\}$ is monotonically decreasing for $k = 1, 2$. Note that $A_k$ is a Lipschitzian mapping with constant $L_k > 0$ for $k = 1, 2$. Then, in the case of $d(A_2 \tilde{z}_n, A_2 \tilde{z}_{n-1}) \ne 0$, we have
$$\frac{\lambda_2\, d(\tilde{z}_n, \tilde{z}_{n-1})}{d(A_2 \tilde{z}_n, A_2 \tilde{z}_{n-1})} \ge \frac{\lambda_2\, d(\tilde{z}_n, \tilde{z}_{n-1})}{L_2\, d(\tilde{z}_n, \tilde{z}_{n-1})} = \frac{\lambda_2}{L_2}.$$
Consequently, $\{\mu_{2,n}\}$ is bounded below by $\min\{\lambda_2 / L_2, \mu_{2,0}\}$. Similarly, we can show that $\{\mu_{1,n}\}$ is bounded below by $\min\{\lambda_1 / L_1, \mu_{1,0}\}$.   □
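The lemma can be checked numerically in the Euclidean prototype. The sketch below is our own illustration (the names `mu_update` and `pts` are ours): with $A x = 2x$ on the real line, $L = 2$, so the candidate quotient $\lambda\, d(\tilde{z}_n, \tilde{z}_{n-1}) / d(A \tilde{z}_n, A \tilde{z}_{n-1})$ equals $\lambda / L$ at every step, and the step sequence drops once to the lower bound $\min\{\lambda / L, \mu_0\}$ and then stays there:

```python
def mu_update(mu, lam, d_points, d_images):
    """One application of rule (24): shrink mu only if the field moved."""
    if d_images != 0.0:
        return min(lam * d_points / d_images, mu)
    return mu

lam, mu0 = 0.4, 0.3          # lambda in (0, sqrt(2) - 1), initial step
A = lambda x: 2.0 * x        # Lipschitz with constant L = 2

# A sample trajectory of z~_n (dyadic values, so the quotients are exact).
pts = [1.0, 0.5, 0.25, 0.75, 0.5]
mus = [mu0]
for a, b in zip(pts, pts[1:]):
    mus.append(mu_update(mus[-1], lam, abs(b - a), abs(A(b) - A(a))))

print(mus)   # [0.3, 0.2, 0.2, 0.2, 0.2]
```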
Corollary 3.
For $k = 1, 2$, the sequence $\{\mu_{k,n}\}$ generated by Algorithm 4 is monotonically decreasing and bounded below by $\min\{\lambda_k / L, \mu_{k,0}\}$.
Lemma 12.
Let $\{x_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 3. Then, the sequences $\{x_n\}$ and $\{z_n\}$ are bounded, provided that, for all $(p, q) \in S$ and $n \ge n_0$,
$$\Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) + \Big(1 - (1+\sqrt{2})\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\Big) d^2(z_n, y_n) + 2\mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde{z}_n \rangle + 2\mu_{1,n} \langle A_1 p, \exp_p^{-1} y_n \rangle \ge 0,$$
$$\Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) + \Big(1 - (1+\sqrt{2})\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\Big) d^2(z_n, y_n) + 2\mu_{2,n} \langle A_2 q, \exp_q^{-1} \tilde{z}_n \rangle + 2\mu_{1,n} \langle A_1 q, \exp_q^{-1} y_n \rangle \ge 0.$$
Proof. 
Take $(p, q) \in C \times C$ arbitrarily. Using arguments similar to those in the proof of Lemma 10, we can deduce the following inequality:
$$d^2(z_{n+1}, p) \le d^2(x_n, p) - d^2(z_{n+1}, \tilde{z}_n) - d^2(z_n, \tilde{z}_n) + 2\mu_{2,n} \langle A_2 \tilde{z}_{n-1} - A_2 \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} z_{n+1} \rangle - 2\mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde{z}_n \rangle. \tag{25}$$
We now estimate the term $\langle A_2 \tilde{z}_{n-1} - A_2 \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} z_{n+1} \rangle$ in (25). From (6) and the definition of $\mu_{2,n+1}$ in Algorithm 3, we have
$$\begin{aligned} 2\mu_{2,n} \langle A_2 \tilde{z}_{n-1} - A_2 \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} z_{n+1} \rangle &\le 2\mu_{2,n}\, d(A_2 \tilde{z}_{n-1}, A_2 \tilde{z}_n)\, d(z_{n+1}, \tilde{z}_n) \le 2\mu_{2,n}\, \frac{\lambda_2\, d(\tilde{z}_{n-1}, \tilde{z}_n)}{\mu_{2,n+1}}\, d(z_{n+1}, \tilde{z}_n) \\ &= \frac{2\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d(\tilde{z}_{n-1}, \tilde{z}_n)\, d(z_{n+1}, \tilde{z}_n) \le \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} \Big( \frac{1}{\sqrt{2}}\, d^2(\tilde{z}_{n-1}, \tilde{z}_n) + \sqrt{2}\, d^2(z_{n+1}, \tilde{z}_n) \Big). \end{aligned} \tag{26}$$
In the meantime, by the fact that $(a+b)^2 \le (2+\sqrt{2})\, a^2 + \sqrt{2}\, b^2$, we get
$$d^2(\tilde{z}_{n-1}, \tilde{z}_n) \le \big(d(\tilde{z}_n, z_n) + d(z_n, \tilde{z}_{n-1})\big)^2 \le (2+\sqrt{2})\, d^2(\tilde{z}_n, z_n) + \sqrt{2}\, d^2(z_n, \tilde{z}_{n-1}). \tag{27}$$
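The constant $(2+\sqrt{2})$ used here is not arbitrary: it follows from Young's inequality $2ab \le s a^2 + b^2/s$ with the asymmetric choice $s = 1+\sqrt{2}$ (a short verification, added for readability):

```latex
(a+b)^2 = a^2 + 2ab + b^2
        \le a^2 + \Bigl( (1+\sqrt{2})\,a^2 + \tfrac{b^2}{1+\sqrt{2}} \Bigr) + b^2
        = (2+\sqrt{2})\,a^2 + \Bigl( 1 + \tfrac{1}{1+\sqrt{2}} \Bigr) b^2
        = (2+\sqrt{2})\,a^2 + \sqrt{2}\, b^2,
\qquad \text{since } \tfrac{1}{1+\sqrt{2}} = \sqrt{2} - 1 .
```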
From (26) and (27), it follows that
$$2\mu_{2,n} \langle A_2 \tilde{z}_{n-1} - A_2 \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} z_{n+1} \rangle \le (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(\tilde{z}_n, z_n) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) + \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_{n+1}, \tilde{z}_n). \tag{28}$$
Substituting (28) into (25), we obtain
$$d^2(z_{n+1}, p) \le d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) - \Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) - \Big(1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_{n+1}, \tilde{z}_n) - 2\mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde{z}_n \rangle. \tag{29}$$
Adding $\frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n)$ to both sides of (29), we get
$$\Big[ d^2(z_{n+1}, p) + \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n) \Big] \le \Big[ d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] - \Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) - \Big(1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} - \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\Big) d^2(z_{n+1}, \tilde{z}_n) - 2\mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde{z}_n \rangle. \tag{30}$$
In a similar way, we get
$$\Big[ d^2(x_{n+1}, q) + \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\, d^2(x_{n+1}, y_n) \Big] \le \Big[ d^2(z_n, q) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big] - \Big(1 - (1+\sqrt{2})\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\Big) d^2(z_n, y_n) - \Big(1 - \sqrt{2}\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}} - \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\Big) d^2(x_{n+1}, y_n) - 2\mu_{1,n} \langle A_1 q, \exp_q^{-1} y_n \rangle. \tag{31}$$
From $\lim_{n \to \infty} \mu_{2,n} = \mu_2 > 0$ (due to Lemma 11) and $\lambda_2 \in (0, \sqrt{2}-1)$ (due to Algorithm 3), we get
$$\lim_{n \to \infty} \Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) = \lim_{n \to \infty} \Big(1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} - \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\Big) = 1 - \lambda_2 (1+\sqrt{2}) > 0. \tag{32}$$
Hence, there exists an integer $n_0 \ge 0$ such that
$$1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} > 0 \quad \text{and} \quad 1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} - \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}} > 0 \quad \forall n \ge n_0.$$
Next, we restrict $(p, q) \in S$. Assume that, for all $n \ge n_0$,
$$\Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) + \Big(1 - (1+\sqrt{2})\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\Big) d^2(z_n, y_n) + 2\mu_{2,n} \langle A_2 p, \exp_p^{-1} \tilde{z}_n \rangle + 2\mu_{1,n} \langle A_1 p, \exp_p^{-1} y_n \rangle \ge 0. \tag{33}$$
Adding (30) to (31) with $q := p$, we obtain that, for all $n \ge n_0$,
$$\Big[ d^2(z_{n+1}, p) + \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n) \Big] + \Big[ d^2(x_{n+1}, p) + \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\, d^2(x_{n+1}, y_n) \Big] \le \Big[ d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] + \Big[ d^2(z_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big].$$
This implies that the limit
$$\lim_{n \to \infty} \Big\{ \Big[ d^2(z_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] + \Big[ d^2(x_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big] \Big\} \tag{34}$$
exists. Hence, $\{d^2(z_n, p)\}$ and $\{d^2(x_n, p)\}$ are both bounded, and therefore $\{z_n\}$ and $\{x_n\}$ are both bounded. In addition, again from (30), (31), and (34), we deduce that, for all $n \ge n_0$,
$$\Big(1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} - \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\Big) d^2(z_{n+1}, \tilde{z}_n) + \Big(1 - \sqrt{2}\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}} - \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\Big) d^2(x_{n+1}, y_n) \le \Big[ d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] + \Big[ d^2(z_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big] - \Big\{ \Big[ d^2(z_{n+1}, p) + \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n) \Big] + \Big[ d^2(x_{n+1}, p) + \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\, d^2(x_{n+1}, y_n) \Big] \Big\},$$
which, together with (32), leads to
$$\lim_{n \to \infty} d(z_{n+1}, \tilde{z}_n) = \lim_{n \to \infty} d(x_{n+1}, y_n) = 0. \tag{35}$$
Consequently, from the boundedness of $\{z_n\}$ and $\{x_n\}$, we infer that $\{\tilde{z}_n\}$ and $\{y_n\}$ are both bounded. Moreover, it follows that the limit $\lim_{n \to \infty} (d^2(x_n, p) + d^2(z_n, p))$ exists for each $(p, q) \in S$. In a similar way, we also infer that the limit $\lim_{n \to \infty} (d^2(x_n, q) + d^2(z_n, q))$ exists for each $(p, q) \in S$.   □
Corollary 4.
Let { x n } and { z n } be the sequences generated by Algorithm 4. Then, the sequences { x n } and { z n } are bounded.
Proof. 
Let $S$ indicate the solution set of VIP (4) and fix $p \in S$ arbitrarily. Noticing $A_1 = A_2 = A$, we deduce from (30) and (31) that
$$\Big[ d^2(z_{n+1}, p) + \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n) \Big] \le \Big[ d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] - \Big(1 - (1+\sqrt{2})\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\Big) d^2(z_n, \tilde{z}_n) - \Big(1 - \sqrt{2}\, \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}} - \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\Big) d^2(z_{n+1}, \tilde{z}_n),$$
$$\Big[ d^2(x_{n+1}, p) + \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\, d^2(x_{n+1}, y_n) \Big] \le \Big[ d^2(z_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big] - \Big(1 - (1+\sqrt{2})\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\Big) d^2(z_n, y_n) - \Big(1 - \sqrt{2}\, \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}} - \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\Big) d^2(x_{n+1}, y_n).$$
Since $\lim_{n \to \infty} \big(1 - (1+\sqrt{2})\, \lambda_k \mu_{k,n} / \mu_{k,n+1}\big) = \lim_{n \to \infty} \big(1 - \sqrt{2}\, \lambda_k \mu_{k,n} / \mu_{k,n+1} - \lambda_k \mu_{k,n+1} / \mu_{k,n+2}\big) = 1 - \lambda_k (1+\sqrt{2}) > 0$ for $k = 1, 2$, we know that there exists an integer $n_0 \ge 0$ such that $1 - (1+\sqrt{2})\, \lambda_k \mu_{k,n} / \mu_{k,n+1} > 0$ and $1 - \sqrt{2}\, \lambda_k \mu_{k,n} / \mu_{k,n+1} - \lambda_k \mu_{k,n+1} / \mu_{k,n+2} > 0$ for all $n \ge n_0$. Thus, it follows that, for all $n \ge n_0$,
$$\Big[ d^2(z_{n+1}, p) + \frac{\lambda_2 \mu_{2,n+1}}{\mu_{2,n+2}}\, d^2(z_{n+1}, \tilde{z}_n) \Big] + \Big[ d^2(x_{n+1}, p) + \frac{\lambda_1 \mu_{1,n+1}}{\mu_{1,n+2}}\, d^2(x_{n+1}, y_n) \Big] \le \Big[ d^2(x_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] + \Big[ d^2(z_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big].$$
This implies that the limit
$$\lim_{n \to \infty} \Big\{ \Big[ d^2(z_n, p) + \frac{\lambda_2 \mu_{2,n}}{\mu_{2,n+1}}\, d^2(z_n, \tilde{z}_{n-1}) \Big] + \Big[ d^2(x_n, p) + \frac{\lambda_1 \mu_{1,n}}{\mu_{1,n+1}}\, d^2(x_n, y_{n-1}) \Big] \Big\}$$
exists. Therefore, $\{z_n\}$ and $\{x_n\}$ are both bounded. Moreover, it is easy to see that $\lim_{n \to \infty} d(z_n, \tilde{z}_n) = \lim_{n \to \infty} d(z_{n+1}, \tilde{z}_n) = 0$ and $\lim_{n \to \infty} d(z_n, y_n) = \lim_{n \to \infty} d(x_{n+1}, y_n) = 0$.   □
Theorem 3.
Let the sequences $\{x_n\}, \{z_n\}$ be generated by Algorithm 3, and assume that the conditions in Lemma 12 hold. Then, $\{(x_n, z_n)\}$ converges to a solution of SVI (5), provided $\lim_{n \to \infty} \{d(x_n, y_n) + d(z_n, \tilde{z}_n)\} = 0$ and $\lim_{n \to \infty} \{d^2(z_n, p) + d^2(x_n, q)\} < +\infty$ for all $(p, q) \in S$.
Proof. 
First of all, by Lemma 11, we have $\lim_{n \to \infty} \mu_{k,n} = \mu_k > 0$ for $k = 1, 2$. Using Lemma 12, we obtain the boundedness of the sequences $\{x_n\}, \{z_n\}$ and the existence of the limits $\lim_{n \to \infty} (d^2(x_n, p) + d^2(z_n, p))$ and $\lim_{n \to \infty} (d^2(x_n, q) + d^2(z_n, q))$ for each $(p, q) \in S$. We observe that, for each $(p, q) \in S$,
$$\lim_{n \to \infty} (d^2(x_n, p) + d^2(z_n, q)) = \lim_{n \to \infty} \big[ d^2(x_n, p) + d^2(z_n, p) + d^2(x_n, q) + d^2(z_n, q) - (d^2(z_n, p) + d^2(x_n, q)) \big] = \lim_{n \to \infty} (d^2(x_n, p) + d^2(z_n, p)) + \lim_{n \to \infty} (d^2(x_n, q) + d^2(z_n, q)) - \lim_{n \to \infty} (d^2(z_n, p) + d^2(x_n, q)) < +\infty.$$
We claim that each cluster point of $\{(x_n, z_n)\}$ belongs to $S$. Indeed, since $\{(x_n, z_n)\}$ is bounded, there exists a subsequence $\{(x_{m_k}, z_{m_k})\}$ of $\{(x_n, z_n)\}$ converging to some $(x^*, y^*) \in M \times M$; that is, $x_{m_k} \to x^*$ and $z_{m_k} \to y^*$. It is clear that $y_{m_k} \to x^*$ and $\tilde{z}_{m_k} \to y^*$, because $d(x_{m_k}, y_{m_k}) \to 0$ and $d(z_{m_k}, \tilde{z}_{m_k}) \to 0$ as $k \to \infty$. Since $C$ is closed and convex in $M$, from $\{(y_{m_k}, \tilde{z}_{m_k})\} \subset C \times C$ we get $(x^*, y^*) \in C \times C$. Taking into account that $d(z_n, \tilde{z}_n) \to 0$ and $d(x_n, y_n) \to 0$ as $n \to \infty$, we infer from (35) that $d(\tilde{z}_n, \tilde{z}_{n+1}) \to 0$ and $d(y_n, y_{n+1}) \to 0$ as $n \to \infty$.
Noticing that $\tilde{z}_n = P_C(\exp_{x_n}(-\mu_{2,n} A_2 y_{n-1}))$ and $y_n = P_C(\exp_{z_n}(-\mu_{1,n} A_1 \tilde{z}_{n-1}))$, from Proposition 3 we get
$$\langle \exp_{x_n}^{-1} \tilde{z}_n + \mu_{2,n} A_2 y_{n-1}, \exp_{\tilde{z}_n}^{-1} x \rangle \ge 0 \quad \forall x \in C, \qquad \langle \exp_{z_n}^{-1} y_n + \mu_{1,n} A_1 \tilde{z}_{n-1}, \exp_{y_n}^{-1} x \rangle \ge 0 \quad \forall x \in C.$$
Hence, we have
$$\begin{aligned} 0 &\le \langle \exp_{x_n}^{-1} \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} x \rangle + \mu_{2,n} \langle A_2 y_{n-1}, \exp_{\tilde{z}_n}^{-1} x \rangle = \langle \exp_{x_n}^{-1} \tilde{z}_n, \exp_{\tilde{z}_n}^{-1} x \rangle + \mu_{2,n} \langle A_2 y_{n-1}, \exp_{\tilde{z}_{n-1}}^{-1} x \rangle + \mu_{2,n} \langle A_2 y_{n-1}, \exp_{\tilde{z}_n}^{-1} \tilde{z}_{n-1} \rangle, \\ 0 &\le \langle \exp_{z_n}^{-1} y_n, \exp_{y_n}^{-1} x \rangle + \mu_{1,n} \langle A_1 \tilde{z}_{n-1}, \exp_{y_n}^{-1} x \rangle = \langle \exp_{z_n}^{-1} y_n, \exp_{y_n}^{-1} x \rangle + \mu_{1,n} \langle A_1 \tilde{z}_{n-1}, \exp_{y_{n-1}}^{-1} x \rangle + \mu_{1,n} \langle A_1 \tilde{z}_{n-1}, \exp_{y_n}^{-1} y_{n-1} \rangle. \end{aligned} \tag{36}$$
Passing to the limit in the two inequalities of (36) along $n := m_k$, we get
$$0 \le \langle \exp_{x^*}^{-1} y^*, \exp_{y^*}^{-1} x \rangle + \mu_2 \langle A_2 x^*, \exp_{y^*}^{-1} x \rangle \quad \forall x \in C, \qquad 0 \le \langle \exp_{y^*}^{-1} x^*, \exp_{x^*}^{-1} x \rangle + \mu_1 \langle A_1 y^*, \exp_{x^*}^{-1} x \rangle \quad \forall x \in C.$$
This means that $(x^*, y^*)$ is a solution to SVI (5), i.e., $(x^*, y^*) \in S$.
For the rest of the proof, it suffices to show that $\{(x_n, z_n)\}$ has exactly one cluster point. Indeed, suppose that $\{(x_n, z_n)\}$ has at least two cluster points $(\bar{x}, \bar{y}), (\hat{x}, \hat{y}) \in S$. Then there exist two subsequences $\{(x_{n_i}, z_{n_i})\}$ and $\{(x_{m_i}, z_{m_i})\}$ of $\{(x_n, z_n)\}$ such that $(x_{n_i}, z_{n_i}) \to (\bar{x}, \bar{y})$ and $(x_{m_i}, z_{m_i}) \to (\hat{x}, \hat{y})$ as $i \to \infty$. By Proposition 2, we get
$$\lim_{n \to \infty} (d^2(x_n, \hat{x}) + d^2(z_n, \hat{y})) = \lim_{i \to \infty} (d^2(x_{n_i}, \hat{x}) + d^2(z_{n_i}, \hat{y})) \ge \lim_{i \to \infty} \big[ d^2(x_{n_i}, \bar{x}) + d^2(\bar{x}, \hat{x}) - 2 \langle \exp_{\bar{x}}^{-1} x_{n_i}, \exp_{\bar{x}}^{-1} \hat{x} \rangle + d^2(z_{n_i}, \bar{y}) + d^2(\bar{y}, \hat{y}) - 2 \langle \exp_{\bar{y}}^{-1} z_{n_i}, \exp_{\bar{y}}^{-1} \hat{y} \rangle \big] = \lim_{n \to \infty} (d^2(x_n, \bar{x}) + d^2(z_n, \bar{y})) + d^2(\bar{x}, \hat{x}) + d^2(\bar{y}, \hat{y}), \tag{37}$$
and
$$\lim_{n \to \infty} (d^2(x_n, \bar{x}) + d^2(z_n, \bar{y})) = \lim_{i \to \infty} (d^2(x_{m_i}, \bar{x}) + d^2(z_{m_i}, \bar{y})) \ge \lim_{i \to \infty} \big[ d^2(x_{m_i}, \hat{x}) + d^2(\hat{x}, \bar{x}) - 2 \langle \exp_{\hat{x}}^{-1} x_{m_i}, \exp_{\hat{x}}^{-1} \bar{x} \rangle + d^2(z_{m_i}, \hat{y}) + d^2(\hat{y}, \bar{y}) - 2 \langle \exp_{\hat{y}}^{-1} z_{m_i}, \exp_{\hat{y}}^{-1} \bar{y} \rangle \big] = \lim_{n \to \infty} (d^2(x_n, \hat{x}) + d^2(z_n, \hat{y})) + d^2(\hat{x}, \bar{x}) + d^2(\hat{y}, \bar{y}). \tag{38}$$
Combining (37) and (38), we have $\bar{x} = \hat{x}$ and $\bar{y} = \hat{y}$.   □
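The final step can be made explicit. Writing $D = d^2(\bar{x}, \hat{x}) + d^2(\bar{y}, \hat{y})$, $\ell_1 = \lim_{n} (d^2(x_n, \hat{x}) + d^2(z_n, \hat{y}))$, and $\ell_2 = \lim_{n} (d^2(x_n, \bar{x}) + d^2(z_n, \bar{y}))$, the Proposition 2 estimates (37) and (38) amount to $\ell_1 \ge \ell_2 + D$ and $\ell_2 \ge \ell_1 + D$ (a brief expansion, added here for readability):

```latex
\ell_1 \ge \ell_2 + D, \qquad \ell_2 \ge \ell_1 + D
\;\Longrightarrow\; \ell_1 + \ell_2 \ge \ell_1 + \ell_2 + 2D
\;\Longrightarrow\; D \le 0
\;\Longrightarrow\; d^2(\bar{x}, \hat{x}) = d^2(\bar{y}, \hat{y}) = 0 .
```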
Theorem 4.
Suppose that the sequences $\{x_n\}$ and $\{z_n\}$ are generated by Algorithm 4. Then, $\{x_n\}$ and $\{z_n\}$ both converge to a solution of VIP (4).
Proof. 
By Corollary 4, we know that $\{x_n\}$ and $\{z_n\}$ are bounded. Putting $A_1 = A_2 = A$ and $p = q \in S$ in (30) and (31), we deduce that
$$\lim_{n \to \infty} d(z_n, \tilde{z}_n) = \lim_{n \to \infty} d(z_{n+1}, \tilde{z}_n) = 0, \qquad \lim_{n \to \infty} d(z_n, y_n) = \lim_{n \to \infty} d(x_{n+1}, y_n) = 0.$$
Thus, it follows that $\lim_{n \to \infty} d(z_n, z_{n+1}) = \lim_{n \to \infty} d(z_n, x_{n+1}) = 0$. Note that $d(y_{n+1}, y_n) \le d(y_{n+1}, z_{n+1}) + d(z_{n+1}, z_n) + d(z_n, y_n) \to 0$ ($n \to \infty$). Thus, we have $d(x_{n+1}, y_{n+1}) \le d(x_{n+1}, y_n) + d(y_n, y_{n+1}) \to 0$ ($n \to \infty$), and hence $\lim_{n \to \infty} d(x_n, y_n) = 0$. In addition, since $d(x_{n+1}, z_{n+1}) \le d(x_{n+1}, z_n) + d(z_n, z_{n+1}) \to 0$ ($n \to \infty$), we get $\lim_{n \to \infty} d(x_{n+1}, z_{n+1}) = 0$, and hence $\lim_{n \to \infty} d(x_n, z_n) = 0$. Note that SVI (5) with $A_1 = A_2 = A$ has a solution $(p, p) \in C \times C$ if and only if VIP (4) has a solution $p \in C$. Therefore, by Theorem 3, we know that $\{(x_n, z_n)\}$ converges to a solution $(x^*, y^*) \in C \times C$ of SVI (5) with $A_1 = A_2 = A$. Thus, from $\lim_{n \to \infty} d(x_n, z_n) = 0$, it follows that $\{x_n\}$ and $\{z_n\}$ both converge to a solution $x^* = y^* \in C$ of VIP (4).   □
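As a concrete sanity check of Theorem 4, Algorithm 4 can be exercised in the Euclidean prototype $M = \mathbb{R}$, $C = [-1, 1]$, with the constant (hence monotone and Lipschitz continuous) field $A \equiv 1$, whose VIP solution is $x^* = -1$; since $d(A\tilde{z}_n, A\tilde{z}_{n-1}) = 0$, rule (24) leaves the steps at $\mu_{k,0}$. The script below is our own minimal translation ($\exp_x v = x + v$; all names are ours), not the authors' code:

```python
def proj_C(x, lo=-1.0, hi=1.0):
    """Metric projection onto C = [lo, hi] on the real line."""
    return min(max(x, lo), hi)

def proj_halfspace(u, anchor, w, tol=1e-15):
    """Projection onto the half-space {x : w*(x - anchor) <= 0}."""
    t = w * (u - anchor)
    if w * w <= tol or t <= 0.0:
        return u
    return u - t / w

A = lambda x: 1.0   # constant field: monotone, Lipschitz; VIP solution on [-1,1] is -1
mu1 = mu2 = 0.1     # with a constant field, rule (24) never changes the steps

# Initialization of Algorithm 4 (all start points at 0).
x0 = y0 = z0 = zt0 = 0.0
z = proj_C(x0 - mu2 * A(zt0))
zt = proj_C(z - mu2 * A(zt0))
x = proj_C(z0 - mu1 * A(y0))
y = proj_C(x - mu1 * A(y0))

for _ in range(60):
    # Step 1: project onto the half-spaces C_{2,n} and C_{1,n}.
    # (With a constant field, the lagged evaluations A(z~_{n-1}), A(y_{n-1})
    #  coincide with A(z~_n), A(y_n), so no extra history is kept here.)
    w2 = (x - zt) - mu2 * A(zt)
    w1 = (z - y) - mu1 * A(y)
    z_next = proj_halfspace(x - mu2 * A(zt), zt, w2)
    x_next = proj_halfspace(z - mu1 * A(y), y, w1)
    # Step 2: ordinary projections onto C (using the pre-update y and z~).
    zt, y = proj_C(x_next - mu2 * A(y)), proj_C(z_next - mu1 * A(zt))
    x, z = x_next, z_next

print(x, z)   # both sequences settle at the VIP solution -1 (up to rounding)
```

Both iterate sequences march toward the boundary and then stay fixed there, matching the conclusion of Theorem 4 in this simple setting.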

Author Contributions

Conceptualization, L.H. and H.-L.F.; methodology, L.H.; software, H.-L.F.; validation, H.-Y.H., T.-Y.Z. and D.-Q.W.; formal analysis, L.H.; investigation, C.-Y.W. and L.-C.C.; resources, L.-C.C.; data curation, H.-Y.H.; writing original draft preparation, C.-Y.W.; writing review and editing, L.-C.C.; supervision, L.-C.C.; project administration, L.-C.C.; funding acquisition, L.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau, 20LJ2006100; the Innovation Program of Shanghai Municipal Education Commission, 15ZZ068; and the Program for Outstanding Academic Leaders in Shanghai City, 15XD1503100.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
