Abstract
In a Hadamard manifold, let the VIP and SVI denote a variational inequality problem and a system of variational inequalities, respectively, where the SVI consists of two variational inequalities with a mutually symmetric structure. This article designs two parallel algorithms for solving the SVI via the subgradient extragradient approach, where each algorithm consists of two mutually symmetric parts. It is proven that, if the underlying vector fields are monotone, then the sequences constructed by these algorithms converge to a solution of the SVI. We also discuss applications of these algorithms to approximating solutions of the VIP. Our theorems complement several recent and important results in the literature.
Keywords:
parallel subgradient extragradient rule; Hadamard manifold; system of variational inequalities; monotone vector fields; convex set
MSC:
47J20; 51H25; 65C10; 65C15; 90C33
1. Introduction
Suppose that the operator F is a self-mapping on a real Hilbert space. Let the set be nonempty, convex, and closed. Consider the classical variational inequality problem (VIP) of finding a point s.t.:
It is well known that variational inequalities like VIP (1) have played an important role in the study of economics, transportation, mathematical programming, engineering mechanics, etc. Let F be L-Lipschitzian with constant . Given . In 1976, Korpelevich’s extragradient rule was first introduced in [1] for solving VIP (1). For any initial , let the sequence be generated by
where is the metric projection of H onto C. To the best of our knowledge, Korpelevich’s extragradient rule has become one of the most effective numerical methods for the VIP and related optimization problems. Moreover, many authors have improved it in various ways; see, e.g., [2,3,4,5,6,7,8,9,10,11] and the references therein.
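As a point of reference, rule (2) admits a very short Euclidean implementation. The sketch below is illustrative only: it assumes a box constraint set (so that the projection is a coordinatewise clipping) and a skew-symmetric affine operator; the names project_box, extragradient, and the parameter values are our own placeholders.

```python
import numpy as np

# Korpelevich's extragradient rule (2) in R^n:
#   y_n = P_C(x_n - tau * F(x_n)),   x_{n+1} = P_C(x_n - tau * F(y_n)).
def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^n (a simple clipping)."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, tau, n_iter=200):
    x = x0.copy()
    for _ in range(n_iter):
        y = project_box(x - tau * F(x))   # extrapolation step (first projection)
        x = project_box(x - tau * F(y))   # correction step (second projection)
    return x

# Toy monotone operator: F(x) = A x with A skew-symmetric (Lipschitz constant 1),
# so tau = 0.1 < 1/L and the iterates approach the solution x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_approx = extragradient(lambda x: A @ x, x0=np.array([0.9, -0.5]), tau=0.1)
```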
In 2008, Ceng et al. [8] considered the following system of variational inequalities (SVI): find s.t.
where is a self-mapping on H and is a positive constant for . It is clear that the SVI (3) consists of two variational inequalities that have a mutually symmetric structure. It is worth mentioning that the SVI (3) can be transformed into the following fixed-point problem (FPP).
Lemma 1
(see [8], [Lemma 2.1]). A pair , is a solution of SVI (3) if and only if is a fixed point of the mapping , i.e., , where .
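For concreteness, in the notation of [8] (two operators $B_1, B_2$ and constants $\mu_1, \mu_2 > 0$; the symbols in the statement above were lost in extraction), the fixed-point reformulation reads, up to relabeling:

$$x^{*}=G(x^{*}):=P_{C}\big[P_{C}(x^{*}-\mu_{2}B_{2}x^{*})-\mu_{1}B_{1}P_{C}(x^{*}-\mu_{2}B_{2}x^{*})\big],\qquad y^{*}=P_{C}(x^{*}-\mu_{2}B_{2}x^{*}).$$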
In terms of Lemma 1, Ceng et al. [8] suggested and analyzed a relaxed extragradient algorithm for solving SVI (3). In 2011, the subgradient extragradient rule was first proposed in [6] for solving VIP (1), where the second projection onto C is replaced by the projection onto a half-space:
with constant . The above rule is more advantageous and more subtle than rule (2) when C is a feasible set with a complex structure and computing the projection onto C is prohibitively time-consuming.
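In Hilbert-space notation, the scheme of [6] and the closed-form half-space projection behind its efficiency can be restated as follows:

$$y_{n}=P_{C}(x_{n}-\tau Fx_{n}),\qquad T_{n}=\{w\in H:\ \langle x_{n}-\tau Fx_{n}-y_{n},\,w-y_{n}\rangle\le 0\},\qquad x_{n+1}=P_{T_{n}}(x_{n}-\tau Fy_{n}),$$

and the second projection is inexpensive because, for a half-space $T=\{w:\langle a,w-y\rangle\le 0\}$ with $a\neq 0$,

$$P_{T}(z)=z-\frac{\max\{0,\ \langle a,\,z-y\rangle\}}{\|a\|^{2}}\,a.$$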
In 2018, Yang et al. [12] designed the modified subgradient extragradient rule for solving VIP (1). For any given and , let the sequences and be generated by
where is chosen as
It was proven in [12] that and converge weakly to a solution of VIP (1).
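For orientation, the self-adaptive stepsize used in [12] is of the following form (up to notation), so that no knowledge of the Lipschitz constant of F is required:

$$\tau_{n+1}=\begin{cases}\min\Big\{\dfrac{\mu\|x_{n}-y_{n}\|}{\|Fx_{n}-Fy_{n}\|},\ \tau_{n}\Big\}, & Fx_{n}\neq Fy_{n},\\[2mm] \tau_{n}, & \text{otherwise},\end{cases}\qquad \mu\in(0,1).$$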
On the other hand, suppose that C is a nonempty, convex and closed subset of a Hadamard manifold , and is a vector field, that is, . In 2003, Németh [13] introduced the new VIP of finding s.t.:
where is the inverse of the exponential map. The solution set of VIP (4) is denoted by S. Subsequently, some rules and methods have been extended from Euclidean spaces to Riemannian manifolds because of important advantages of the extension; see, e.g., [14,15,16,17]. Furthermore, inspired by the SVI (3) and the multiobjective optimization problem in [17], Ceng et al. [18] introduced a system of multiobjective optimization problems (SMOP) in a Hadamard manifold and proposed a parallel proximal point rule for solving the SMOP.
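For reference, VIP (4) as introduced in [13] asks for a point $x^{*}\in C$ such that (writing V for the vector field on C):

$$\big\langle V(x^{*}),\ \exp_{x^{*}}^{-1}y\big\rangle\ \ge\ 0\qquad\text{for all }y\in C.$$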
It is remarkable that research on algorithms for VIP (4) has mainly focused on the proximal point algorithm [19] and Korpelevich’s extragradient rule [20]. Very recently, Chen et al. [9] suggested a modified Tseng’s extragradient method to solve VIP (4). Moreover, their results gave an affirmative answer to the open question put forth in [21].
Let C be a nonempty closed convex subset of a Hadamard manifold , and be a vector field for , i.e., . Motivated by problems (3) and (4), Ceng et al. [22] introduced the following SVI of finding s.t.
where the constants , and is the inverse of the exponential map. In particular, if and , then SVI (5) reduces to VIP (4).
In this paper, we design two parallel algorithms to solve the SVI (5) via the subgradient extragradient approach, where each algorithm consists of two parts that have a mutually symmetric structure. It is proven that, if the underlying vector fields are monotone, then the sequences constructed by these algorithms converge to a solution of the SVI (5). We also discuss applications of these algorithms to the approximation of solutions to the VIP (4). Our results improve and extend the corresponding results announced in [8,9,12,22].
The remainder of the paper is organized as follows. Some preliminary concepts, notations, important lemmas, and propositions from Riemannian geometry are recalled in Section 2; most of them can be found in standard textbooks on Riemannian geometry (e.g., [23]). In Section 3, two new parallel algorithms based on the modified subgradient extragradient approach of [12] are proposed for SVI (5), and the corresponding convergence theorems are proved.
2. Preliminaries
Let indicate a simply connected and finite-dimensional differentiable manifold. A differentiable manifold endowed with a Riemannian metric is called a Riemannian manifold. We denote by the tangent space of at , by the scalar product on with the associated norm , where the subscript is sometimes omitted, and by the tangent bundle of , which is actually a manifold. For a piecewise smooth curve joining to (i.e., and ), we define its length . Then, the Riemannian distance , which induces the original topology on , is defined by minimizing this length over the set of all such curves joining to .
Suppose that the Levi–Civita connection ∇ is associated with the Riemannian metric and the smooth curve lies in . A vector field X is referred to as being parallel along iff . If itself is parallel along , then is known as a geodesic, and, in this case, is constant. It is remarkable that this notion differs from the corresponding one in the calculus of variations; in particular, if , then is referred to as being normalized. A geodesic joining to in is called minimal if its length equals .
Let be a Riemannian manifold. is referred to as being complete iff, for each , all geodesics emanating from are defined for all . By the Hopf–Rinow Theorem, if is complete, then each pair of points in can be joined by a minimal geodesic. Moreover, is then a complete metric space, and bounded closed subsets of are compact.
We denote by the parallel transport on the tangent bundle along w.r.t. ∇, defined by
where V is the unique vector field such that for each t and . Then, for any , is an isometry from to . For convenience, we will write instead of in the case where is a minimal geodesic joining to .
Let be complete. The exponential map at is defined by for each , where is the geodesic starting at with velocity . Then, for each real number t. It is worth emphasizing that the mapping is differentiable on for each . The exponential map has an inverse , i.e., , and the geodesic is the unique shortest path with , where is the geodesic distance between and in .
A set is referred to as being convex if, for every , the geodesic joining y to z lies in D, i.e., if is a geodesic satisfying and , then . From now on, we denote by D a nonempty closed convex set in , and by the projection of onto D, i.e.,
A real-valued function f defined on is referred to as being convex if, for each geodesic of , the composite function is convex, i.e.,
A Hadamard manifold is a complete simply connected Riemannian manifold of non-positive sectional curvature. If is a Hadamard manifold, then is a diffeomorphism for each and, if , then there exists a unique minimal geodesic joining to . Next, we always assume that is a Hadamard manifold.
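To make the objects introduced so far (the exponential map, its inverse, the distance, and the projection onto a convex set) concrete, here is a small illustrative example on arguably the simplest nontrivial Hadamard manifold: the positive half-line $(0,\infty)$ with metric $\langle u,v\rangle_x = uv/x^{2}$, which is isometric to the real line via $x\mapsto\ln x$ and hence complete, simply connected, and flat. All formulas below follow from that isometry; the function names are ours.

```python
import numpy as np

# Toy Hadamard manifold: M = (0, +inf) with <u, v>_x = u*v / x**2.
def exp_map(x, v):
    """exp_x(v): follow the geodesic from x with initial velocity v for unit time."""
    return x * np.exp(v / x)

def log_map(x, y):
    """exp_x^{-1}(y): the initial velocity of the geodesic from x to y."""
    return x * np.log(y / x)

def dist(x, y):
    """Geodesic distance d(x, y) = |ln(y/x)|."""
    return abs(np.log(y / x))

def proj(x, a, b):
    """Projection onto the geodesically convex set C = [a, b] (0 < a <= b)."""
    return min(max(x, a), b)

x, y = 2.0, 5.0
assert np.isclose(exp_map(x, log_map(x, y)), y)          # exp and its inverse
assert np.isclose(dist(x, y), abs(log_map(x, y)) / x)    # d(x, y) = ||exp_x^{-1} y||_x
```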
Proposition 1
(see [23]). Let . Then, is a diffeomorphism, and, for any points , there exists a unique normalized geodesic joining υ to ω, which is actually a minimal geodesic.
The above proposition shows that is diffeomorphic to the Euclidean space . Then, has the same topology and differential structure as . Moreover, Hadamard manifolds and Euclidean spaces have some similar geometrical properties.
Definition 1
(see [20]). Let be the set of all single-valued vector fields s.t. and the domain of V is defined by . Let . Then, V is referred to as being pseudomonotone if, for each ,
A geodesic triangle of a Riemannian manifold is a set consisting of three points and , and three minimal geodesics joining to , with .
Proposition 2
(see [23] (Comparison theorem for triangles)). Suppose that is a geodesic triangle. Denote, for each , by the geodesic joining to , and put , and . Then,
- (i)
- ;
- (ii)
- ;
- (iii)
- .
In terms of the distance and the exponential map, inequality (ii) in Proposition 2 can be rewritten as
owing to the fact that
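In the notation commonly used for Proposition 2 (vertices $p_{1},p_{2},p_{3}$, angle $\alpha_{2}$ at $p_{2}$), the rewritten inequality and the underlying identity take the form:

$$d^{2}(p_{1},p_{3})\ \ge\ d^{2}(p_{1},p_{2})+d^{2}(p_{2},p_{3})-2\big\langle \exp_{p_{2}}^{-1}p_{1},\ \exp_{p_{2}}^{-1}p_{3}\big\rangle,$$

$$\big\langle \exp_{p_{2}}^{-1}p_{1},\ \exp_{p_{2}}^{-1}p_{3}\big\rangle\ =\ d(p_{1},p_{2})\,d(p_{2},p_{3})\cos\alpha_{2}.$$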
Lemma 2
(see [24]). Let and s.t. . Then, the following holds:
- (i)
- and for all .
- (ii)
- If and , then .
- (iii)
- Given and , if and , then .
- (iv)
- For each , the map , defined by , is continuous on .
For each and , there is exactly one point satisfying . This unique point is known as the projection of u onto the convex set C, denoted by .
Proposition 3
(see [25]). For each , the following inequality holds:
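In its standard form (writing $u=P_{C}(x)$), this characterization of the projection reads:

$$\big\langle \exp_{u}^{-1}x,\ \exp_{u}^{-1}y\big\rangle\ \le\ 0\qquad\text{for all }y\in C.$$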
Proposition 4
(see [20]). Let be closed and convex. Then, the metric projection is nonexpansive, i.e., .
Lemma 3
(see [21]). Assume that is of constant curvature, and . Then, is convex.
Lemma 4
(see [20]). Suppose that C is a nonempty closed convex subset of a Hadamard manifold . Then,
Lemma 5
(see [13]). Let A be a continuous and monotone vector field on C, and let be given. Then, .
It is easy to see from Proposition 3 that the following holds:
Proposition 5
(see [20]). The following assertions are equivalent:
- (i)
- solves the VIP (4);
- (ii)
- for some ;
- (iii)
- for all ;
- (iv)
- , with .
The following two lemmas play a crucial role in the convergence derivation of the algorithms.
Lemma 6
(see [26]). Suppose that is a geodesic triangle in , a Hadamard manifold. Then, s.t.
The triangle is called the comparison triangle of , which is unique up to an isometry of . The following result can be proved using elementary geometry; it is also a direct application of Alexandrov’s Lemma in (see [27]). It explains the relationship between two triangles and involving angles and distances between points.
Lemma 7
(see [28]). Let be a geodesic triangle in a Hadamard manifold and its comparison triangle.
- (i)
- Assume that (resp., ) are three angles of (resp., ) at three vertices (resp., ). Then, the inequalities hold: .
- (ii)
- Assume that the point z lies in the geodesic joining u to v and is its comparison point in the interval satisfying and . Then, the inequality holds: .
Definition 2
(see [9]). A vector field f defined on a complete Riemannian manifold is referred to as being Lipschitzian if s.t.
Besides the above concept, if for each , and s.t. (7) holds, with , for all , then f is said to be locally Lipschitzian.
Finally, by an argument similar to the one used to transform SVI (3) into the FPP in [8], we obtain the following.
Lemma 8
(see [22], [Lemma 5]). A pair is a solution of SVI (5) if and only if is a fixed point of the mapping , i.e., , where .
3. Algorithms and Convergence Criteria
In this section, inspired by the algorithms in [9], we suggest two new parallel algorithms for solving SVI (5) on Hadamard manifolds via the modified subgradient extragradient approach in [12].
From now on, the following assumptions are always adopted:
Hypothesis 1 (H1).
The solution set of SVI (5), denoted by , is nonempty.
Hypothesis 2 (H2).
are vector fields, i.e., for .
Hypothesis 3 (H3).
and both are monotone, i.e., for , .
Hypothesis 4 (H4).
and both are Lipschitzian with constants , i.e., for , s.t.
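For the reader's convenience, with A standing for either of the two vector fields, monotonicity (H3) and L-Lipschitz continuity (H4) on a Hadamard manifold are commonly written as follows (here $P_{x,y}$ denotes the parallel transport from the tangent space at y to the tangent space at x along the minimal geodesic joining them):

$$\big\langle A(x),\ \exp_{x}^{-1}y\big\rangle+\big\langle A(y),\ \exp_{y}^{-1}x\big\rangle\ \le\ 0\qquad\text{for all }x,y\in C,$$

$$\big\|P_{x,y}A(y)-A(x)\big\|\ \le\ L\,d(x,y)\qquad\text{for all }x,y\in C.$$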
Next, we recall the notion of Fejér convergence and related result.
Definition 3
(see [29]). Suppose that X is a complete metric space and is a nonempty set. Then, a sequence is referred to as being Fejér convergent to C, if .
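Explicitly, writing $\{x_{n}\}\subset X$ for the sequence and C for the target set, Fejér convergence means:

$$d(x_{n+1},p)\ \le\ d(x_{n},p)\qquad\text{for all }p\in C\ \text{and all }n\ge 0.$$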
Proposition 6
(see [24]). Suppose that X is a complete metric space and is a nonempty set. Let be Fejér convergent to C and assume that any cluster point of belongs to C. Then, converges to a point of C.
3.1. The First Parallel Algorithm
Algorithm 1 is the first parallel algorithm for the SVI.
| Algorithm 1: The first parallel algorithm for the SVI. |
| Iteration Steps: Compute below: |
| Step 1. Compute |
| Step 2. Construct |
| and calculate |
| Step 3. Calculate |
In particular, putting in Algorithm 1, we obtain the following algorithm (Algorithm 2) for solving VIP (4).
| Iteration Steps: Compute below: |
| Step 1. Compute |
| Step 2. Construct |
| and calculate |
| Step 3. Calculate |
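To fix ideas, the following is a minimal sketch of what one such subgradient extragradient iteration looks like in the Euclidean special case (so that the inverse exponential map reduces to y − x and geodesics are line segments), combining the half-space projection of [6] with a self-adaptive stepsize of the type used in [12]. It is an illustration under these assumptions, not the authors' exact update on the manifold; proj_C, F, and the parameter values are placeholders.

```python
import numpy as np

def sgeg_step(F, proj_C, x, tau, mu=0.5):
    """One Euclidean subgradient extragradient step with a self-adaptive stepsize."""
    Fx = F(x)
    y = proj_C(x - tau * Fx)          # single projection onto C
    a = (x - tau * Fx) - y            # normal of the half-space T = {w : <a, w - y> <= 0}
    Fy = F(y)
    z = x - tau * Fy
    nrm2 = a @ a
    x_new = z - (max(0.0, a @ (z - y)) / nrm2) * a if nrm2 > 0 else z
    # stepsize update of the type used in [12]: non-increasing, Lipschitz-free
    denom = np.linalg.norm(Fx - Fy)
    tau_new = min(mu * np.linalg.norm(x - y) / denom, tau) if denom > 0 else tau
    return x_new, tau_new

# Toy run: F(x) = x - b over the unit ball; the solution of the VIP is P_C(b) = [1, 0].
b = np.array([2.0, 0.0])
proj_ball = lambda u: u / max(1.0, np.linalg.norm(u))
x, tau = np.zeros(2), 1.0
for _ in range(200):
    x, tau = sgeg_step(lambda u: u - b, proj_ball, x, tau)
```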
Lemma 9.
For , the sequence generated by Algorithm 1 is monotonically decreasing with lower bound .
Proof.
It is clear that is a monotonically decreasing sequence for . Since is a Lipschitzian mapping with constant for , in the case of , we have
Thus, the sequence has the lower bound . In a similar way, we can show that has the lower bound . □
Corollary 1.
For , the sequence generated by Algorithm 2 is monotonically decreasing with lower bound .
Lemma 10.
Let and be the sequences generated by Algorithm 1. Then, the sequences and are bounded, provided for all and ,
Proof.
Take a fixed arbitrarily. Then, from the monotonicity of , we get , which hence yields . That is, . Thus, it immediately follows that
By the definition of , we have . Then,
Now, by fixing , we consider the geodesic triangle and its comparison triangle . Then, , and . Recall from Algorithm 1 that . The comparison point of is . Thus, in , (10) and (11) can be rewritten as
Then, by Lemma 7 (ii), (10) and Lemma 4, we have
Consider the geodesic triangle and its comparison triangle . Then, set and , (resp., and ). Let and denote the angles at c and , respectively. Then, by Lemma 7 and so . Then, by Proposition 2 and Lemma 6, we have
Hence,
Similarly, we get . This, together with (14) and (15), implies that
Combining (11) and (16), we get
By the definition of , if , then
in the case of , it is clear that
Thus,
In a similar way, we get
Note that the limit for . Hence, there exists such that for .
Next, we restrict . Then, substituting (18) into (19) with , we obtain that, for all ,
This, together with the assumptions, guarantees that . Thus, the sequence is bounded.
In the same way, substituting (19) into (18) with and , we obtain that, for all ,
This, together with the assumptions, guarantees that . Thus, the sequence is bounded. □
Corollary 2.
Let the sequences and be generated by Algorithm 2. Then, and both are bounded sequences.
Proof.
We denote by S the solution set of VIP (4). Take a fixed arbitrarily. Noticing , we deduce from (18) and (19) that, for each , , , and
Hence, and both are bounded sequences. Moreover, it is clear that, for all ,
Since for , we conclude that and as . □
Theorem 1.
Let the sequences and be generated by Algorithm 1. Suppose that the conditions in Lemma 10 hold. Then, converges to a solution of SVI (5) provided .
Proof.
First of all, by Lemma 9, we have for . Moreover, by Lemma 10, we know that and both are bounded, and that, for all ,
Noticing (due to the assumption), we deduce that and both are bounded. We define the sets as follows:
From Definition 3, we know that and are Fejér convergent to and , respectively. Let be a cluster point of . Then, there exists a subsequence such that . From the boundedness of , we may assume that as . Since , we obtain that and . In addition, noticing
by Proposition 3, we infer that, for all ,
and
Note that , the subsequences are bounded, and for . Letting , we take the limits in (20) and (21) and hence get
Therefore,
This leads to , and hence . Thus, by Proposition 6, we obtain that as .
Next, let be a cluster point of . It is known that there exists a subsequence such that . Using the boundedness of , we may assume that as . Thanks to , we obtain that and . Noticing
by similar arguments to those of (22), we deduce that
This yields , and hence . Thus, by Proposition 6, we obtain that as . Consequently, using the uniqueness of the limit, we infer that is convergent to a solution of SVI (5). □
Theorem 2.
Suppose that the sequences and both are generated by Algorithm 2. Then, and both converge to a solution of VIP (4).
Proof.
Using Definition 3 and Corollary 2, we deduce that and both are Fejér convergent to the same S. Let be a cluster point of . It is known that s.t. . Then, using , we have . Since and , we obtain . Hence, by Proposition 3, we get . Thus, from Proposition 6, it follows that as . Similarly, we can infer that as for some . Using , we obtain the desired result. □
3.2. The Second Parallel Algorithm
Algorithm 3 is the second parallel algorithm for the SVI.
| , and compute |
| Iteration Steps: Compute and below: |
| Step 1. Construct |
| and calculate |
| Step 2. Calculate |
| where |
In particular, putting in Algorithm 3, we obtain the following algorithm (Algorithm 4) for solving VIP (4).
| and for , and compute |
| Iteration Steps: Compute and below: |
| Step 1. Construct |
| and calculate |
| Step 2. Calculate |
| where |
Lemma 11.
For , the sequence generated by Algorithm 3 is monotonically decreasing with lower bound .
Proof.
It is clear that is monotonically decreasing for . Note that is a Lipschitzian mapping with constant for . Then, in the case of , we have
Consequently, the sequence has the lower bound . Similarly, we can show that has the lower bound . □
Corollary 3.
For , the sequence generated by Algorithm 4 is monotonically decreasing with lower bound .
Lemma 12.
Let and be the sequences generated by Algorithm 3. Then, the sequences and are bounded, provided for all and ,
Proof.
Take arbitrarily. Utilizing arguments similar to those in the proof of Lemma 10, we can deduce the following inequality:
We now estimate the term in (25). From (6) and the definition of in Algorithm 3, we have
In the meantime, by the fact , we get
From (26) and (27), it follows that
Substituting (28) into (25), we obtain
Adding to both sides of (29), we get
In a similar way, we get
From (due to Lemma 11) and (due to Algorithm 3), we get
Hence, there exists an integer such that
Next, we restrict . Assume that, for all ,
Adding (30) to (31) with , we obtain that, for all ,
This implies that there exists the limit
Hence, and both are bounded. Therefore, and both are bounded. In addition, again from (30), (31), and (34), we deduce that, for all ,
which, together with (32), leads to
Consequently, from the boundedness of and , we infer that and both are bounded. Moreover, it follows that there exists the limit for each . In a similar way, we also infer that there exists the limit for each . □
Corollary 4.
Let and be the sequences generated by Algorithm 4. Then, the sequences and are bounded.
Proof.
Let S indicate the solution set of VIP (4) and fix arbitrarily. Noticing , we deduce from (30) and (31) that
Since for , we know that there exists an integer such that and for all . Thus, it follows that, for all ,
This implies that there exists the limit
Therefore, and both are bounded. Moreover, it is easy to see that = and . □
Theorem 3.
Let the sequences be generated by Algorithm 3. Assume that the conditions in Lemma 12 hold. Then, converges to a solution of SVI (5) provided and for all .
Proof.
First of all, by Lemma 11, we have for . Using Lemma 12, we obtain the boundedness of the sequences , and the existence of the limits and for each . We observe that, for each ,
We claim that each cluster point of belongs to . Indeed, since is bounded, there exists a subsequence of converging to . This means that and . It is clear that and because and as . Since C is closed and convex in , from , we get . Taking into account that and as , we infer from (35) that and as .
Noticing that and , from Proposition 3, we get
Hence, we have
Passing to the limits in two inequalities of (36) as , we get
This means that is a solution to the SVI (5), i.e., .
For the rest of the proof, it is sufficient to show that the sequence has exactly one cluster point. Indeed, suppose that has two distinct cluster points . Then, there exist two subsequences and of such that and as . By Proposition 2, we get
and
Combining (37) and (38), we have and . □
Theorem 4.
Suppose that the sequences and both are generated by Algorithm 4. Then, and both converge to a solution of VIP (4).
Proof.
By Corollary 4, we know that and are bounded. Putting and in (30) and (31), we deduce that
Thus, it follows that . Note that . Thus, we have , and hence . In addition, since , we get , and hence . Note that the SVI (5) with has a solution if and only if the VIP (4) has solution . Therefore, by Theorem 3, we know that converges to a solution to the SVI (5) with . Thus, from , it follows that and both converge to a solution to the VIP (4). □
Author Contributions
Conceptualization, L.H. and H.-L.F.; methodology, L.H.; software, H.-L.F.; validation, H.-Y.H., T.-Y.Z. and D.-Q.W.; formal analysis, L.H.; investigation, C.-Y.W. and L.-C.C.; resources, L.-C.C.; data curation, H.-Y.H.; writing original draft preparation, C.-Y.W.; writing review and editing, L.-C.C.; supervision, L.-C.C.; project administration, L.-C.C.; funding acquisition, L.-C.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau, 20LJ2006100; the Innovation Program of Shanghai Municipal Education Commission, 15ZZ068; and the Program for Outstanding Academic Leaders in Shanghai City, 15XD1503100.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 1976, 12, 747–756.
- Yao, Y.; Liou, Y.C.; Yao, J.C. An extragradient method for fixed point problems and variational inequality problems. J. Inequal. Appl. 2007, 2007, 38752.
- Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
- Yao, Y.; Marino, G.; Muglia, L. A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 2014, 63, 559–569.
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–501.
- Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
- Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 2021, 70, 1337–1358.
- Ceng, L.C.; Wang, C.Y.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
- Chen, J.F.; Liu, S.Y.; Chang, X.K. Modified Tseng’s extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. 2021, 100, 2627–2640.
- Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765.
- Dong, Q.L.; Lu, Y.Y.; Yang, J.F. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
- Yang, J.; Liu, H.W.; Liu, Z.X. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
- Németh, S.Z. Variational inequalities on Hadamard manifolds. Nonlinear Anal. 2003, 52, 1491–1498.
- Li, X.B.; Huang, N.J.; Ansari, Q.H.; Yao, J.C. Convergence rate of descent method with new inexact line-search on Riemannian manifolds. J. Optim. Theory Appl. 2019, 180, 830–854.
- Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 5.
- Bento, G.C.; Ferreira, O.P.; Pereira, Y.R. Proximal point method for vector optimization on Hadamard manifolds. Oper. Res. Lett. 2018, 46, 13–18.
- Bento, G.C.; Neto, J.X.C.; Meireles, L.V. Proximal point method for locally Lipschitz functions in multiobjective optimization on Hadamard manifolds. J. Optim. Theory Appl. 2018, 179, 37–52.
- Ceng, L.C.; Li, X.; Qin, X. Parallel proximal point methods for systems of vector optimization problems on Hadamard manifolds without convexity. Optimization 2020, 69, 357–383.
- Tang, G.J.; Zhou, L.W.; Huang, N.J. The proximal point algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Optim. Lett. 2013, 7, 779–790.
- Tang, G.J.; Huang, N.J. Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 2012, 54, 493–509.
- Ferreira, O.P.; Lucambio Pérez, L.R.; Németh, S.Z. Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 2005, 31, 133–151.
- Ceng, L.C.; Shehu, Y.; Wang, Y.H. Parallel Tseng’s extragradient methods for solving systems of variational inequalities on Hadamard manifolds. Symmetry 2020, 12, 43.
- Sakai, T. Riemannian Geometry. In Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1996; Volume 149.
- Li, C.; López, G.; Martín-Márquez, V. Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 2009, 79, 663–683.
- Wang, J.H.; López, G.; Martín-Márquez, V.; Li, C. Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 2010, 146, 691–708.
- Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292.
- Reich, S.; Shafrir, I. Nonexpansive iterations in hyperbolic spaces. Nonlinear Anal. 1990, 15, 537–558.
- Li, C.; López, G.; Martín-Márquez, V. Iterative algorithms for nonexpansive mappings on Hadamard manifolds. Taiwan. J. Math. 2010, 14, 541–559.
- Ferreira, O.P.; Oliveira, P.R. Proximal point algorithm on Riemannian manifolds. Optimization 2002, 51, 257–270.