An Inertial Subgradient Extragradient Method for Approximating Solutions to Equilibrium Problems in Hadamard Manifolds

Abstract: In this work, we are concerned with the iterative approximation of solutions to equilibrium problems in the framework of Hadamard manifolds. We introduce a subgradient extragradient-type method with a self-adaptive step size. The step size is allowed to increase from iteration to iteration, so that our method does not depend on the Lipschitz constants of the underlying operator, as has been the case in recent articles in this direction. In general, operators satisfying weak monotonicity conditions are more broadly applicable in practice. By using inertial and viscosity techniques, we establish a convergence result for solving a pseudomonotone equilibrium problem under appropriate conditions. As applications, we use our method to solve some theoretical optimization problems. Finally, we present some numerical illustrations to demonstrate the quantitative efficacy and superiority of our proposed method over a previous method from the literature.


Introduction
The minimax inequality introduced in 1972 by Ky Fan [1], later renamed the Equilibrium Problem (EP), plays a major role in many fields and provides a unified framework for the study of variational inequalities, game theory, mathematical economics, fixed point theory and optimization theory. The use of the term equilibrium problem is credited to the 1994 paper by Blum and Oettli [2], which followed an earlier article by Muu and Oettli [3]. In the latter paper, three standard examples of the EP were given, viz: the fixed point, convex minimization and variational inequality problems. The EP also includes as special cases the convex differentiable optimization, complementarity, saddle point and Nash equilibrium problems [2,3]. Let g : K × K → R be a bifunction such that g(x, x) = 0 for all x ∈ K, where K is a nonempty subset of a topological space X. Then, the EP calls for finding a point x ∈ K such that g(x, y) ≥ 0 for all y ∈ K.
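For instance, in the linear (say, Hilbert space) setting, these special cases arise from specific choices of the bifunction g; the identifications below are classical (see [2,3]), and the operator F, convex function h and mapping S are generic symbols introduced here only for illustration:

\begin{align*}
  g(x,y) &= \langle F(x),\, y - x \rangle
    && \text{(variational inequality problem for } F\text{)},\\
  g(x,y) &= h(y) - h(x)
    && \text{(convex minimization of } h \text{ over } K\text{)},\\
  g(x,y) &= \langle x - S x,\, y - x \rangle
    && \text{(fixed point problem } x = S x\text{)}.
\end{align*}

In the last case, for example, choosing y = Sx in g(x, y) ≥ 0 yields −‖x − Sx‖² ≥ 0, so that x is a fixed point of S.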
The study of variational inequality, equilibrium and other related optimization problems has recently received considerable attention from researchers in the framework of Riemannian manifolds. Thus, methods and ideas have been extended from linear settings to this more general setting. These generalizations are worthwhile because of the advantages they bring. For example, nonconvex optimization problems can be transformed into problems of convex type by choosing a suitable Riemannian metric [4–6]. Another advantage of this extension is that constrained optimization problems can be viewed as unconstrained ones [4,6–8]. As a result, classical methods for solving optimization problems have been extended from linear frameworks to Riemannian manifolds. In 2012, Colao et al. [9] studied equilibrium problems on Hadamard manifolds in the following setting: Let K be a nonempty, closed and convex subset of an Hadamard manifold M and let g : K × K → R be a bifunction satisfying g(x, x) = 0 for all x ∈ K. An equilibrium problem (EP, for short) on a manifold consists of finding x ∈ K such that

g(x, y) ≥ 0 for all y ∈ K. (1)

We denote by Sol(g, K) the solution set of the EP (1). By extending Fan's KKM Lemma to this setting, Colao et al. [9] established the existence of solutions to the EP in the framework of Hadamard manifolds. For many other studies and results in this direction, see, for example, [10–12].
The development of effective iterative algorithms for approximating solutions to optimization problems is another interesting area of research in nonlinear analysis and optimization theory. Iterative approximation of solutions to equilibrium problems in any setting, whether linear or nonlinear, includes, for example, the use of the Extragradient Method (EGM) proposed by Korpelevich [13]. The EGM was initially used for solving saddle point problems. It was later adapted for solving variational inequality problems and then equilibrium problems. Inspired by the EGM and its perceived drawbacks, Censor et al. [14] introduced the Subgradient Extragradient Method (SEGM), which has since been used for solving both variational inequality and equilibrium problems. For solving EPs, Tran et al. [15] introduced an extragradient-like method for approximating solutions to pseudomonotone equilibrium problems. Using an alternative approach, Nguyen et al. [16] introduced a method for finding common solutions to fixed point and equilibrium problems, which is based on the extragradient method proposed in [15]. More recently, Rehman et al. [17] introduced an inertial subgradient extragradient algorithm for solving equilibrium problems. Using a viscosity approach, they proved a strong convergence theorem for an algorithm approximating solutions to EPs with pseudomonotone bifunctions. For more contributions regarding methods for solving EPs in linear settings, see, for example, [12,18–21].
We note that several of the methods discussed above have been extended to EPs on Hadamard manifolds. The first work of Colao et al. [9] was followed by those of Salahuddin [10] and Li et al. [21]. Neto et al. [22] extended the result of Nguyen et al. [16] to this setting by considering the following algorithm: Given λ_n > 0, compute

y_n = argmin_{y∈K} { λ_n g(x_n, y) + ½ d²(x_n, y) } and x_{n+1} = argmin_{y∈K} { λ_n g(y_n, y) + ½ d²(x_n, y) },

where 0 < λ_n ≤ min{1/(2c_1), 1/(2c_2)}, and c_1 > 0 and c_2 > 0 are the Lipschitz constants of the bifunction g. Fan et al. [23] proposed an explicit extragradient method for solving pseudomonotone equilibrium problems on Hadamard manifolds. Their method uses a variable step size which is monotonically decreasing. These authors proved a convergence theorem for their method and also established an R-linear convergence result for it. Very recently, Ali-Akbari [24] introduced a subgradient extragradient algorithm for approximating solutions to EPs on Hadamard manifolds and proved a convergence theorem for approximating solutions to pseudomonotone equilibrium problems. This theorem depends on the Lipschitz constants of the corresponding bifunctions.
The inertial technique finds crucial application in the construction of effective and accelerated algorithms in fixed point and optimization theory (see, for instance, [25,26]). In this technique, the next iterate is determined by the two preceding iterates (x_{n−1} and x_n) and an inertial parameter θ_n which controls the momentum x_n − x_{n−1}, as sketched after this paragraph. For more recent developments regarding inertial algorithms, we refer the readers to [25,27,28] and the references therein. At this point, we recall that the viscosity method due to Moudafi [29] for a nonexpansive mapping S and a given strict contraction f over K is given by x_0 ∈ K and

x_{n+1} = β_n f(x_n) + (1 − β_n) S x_n, n ≥ 0,

where the sequence {β_n} ⊂ (0, 1) converges to zero. The viscosity method has also been adapted to the framework of Hadamard manifolds (see [30,31]). In this setting, the sequence {x_n}, starting with an arbitrary point x_0 ∈ K, is given by

x_{n+1} = exp_{f(x_n)} ((1 − β_n) exp^{-1}_{f(x_n)} S x_n), n ≥ 0,

that is, x_{n+1} is the point on the geodesic joining f(x_n) to S x_n determined by the parameter 1 − β_n. Motivated by the subgradient method of [14], the viscosity approach [30–32], and by Rehman et al. [17,27] and Ali-Akbari [24], we introduce an inertial subgradient extragradient algorithm for approximating solutions to equilibrium problems on Hadamard manifolds. Employing the viscosity technique, we propose an algorithm for approximating solutions to pseudomonotone equilibrium problems and establish a convergence theorem for it. The proposed algorithm uses a self-adaptive step length which is allowed to increase during the execution of the method. In this way, the dependence of the method on the Lipschitz constants is dispensed with. To be more precise, we now highlight the following advantages of our result over previous results announced in this direction in the literature:
(i) The method in the present paper uses an adaptive step size which is allowed to increase from iteration to iteration, unlike the methods in [23,33], where the step sizes decrease monotonically, and the methods of [17,24,27], which rely on the Lipschitz condition imposed on the bifunction. Since the relevant Lipschitz constants can be difficult to estimate, this affects the efficiency of those methods;
(ii) The sequence of control parameters of the viscosity step of our method is only required to be non-summable. This differs from [30,32], where an extra condition (that the differences between successive parameters be summable) is imposed;
(iii) The use of the inertial technique makes the convergence of our algorithm faster than that of the methods used in [23,24];
(iv) Our result is obtained in the framework of an Hadamard manifold, unlike the results of [15–17], which were obtained in real Hilbert spaces.
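For orientation, the inertial extrapolation step can be written schematically in both settings; the manifold form below (whose sign convention is our reading of the construction used in Algorithm 1) extrapolates past x_n, away from x_{n−1}:

\begin{align*}
  \text{(linear setting)}\quad    & w_n = x_n + \theta_n\,(x_n - x_{n-1}),\\
  \text{(Hadamard manifold)}\quad & w_n = \exp_{x_n}\!\bigl(-\theta_n \exp_{x_n}^{-1} x_{n-1}\bigr),
\end{align*}

since exp_{x_n}^{-1} x_{n−1} points from x_n toward x_{n−1}, the minus sign pushes w_n beyond x_n in the direction of motion.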
The rest of our paper is organized as follows: First, we recall some useful definitions and preliminary results in Section 2. In Section 3, we introduce our proposed method, state our main result and present its convergence analysis. Two applications are presented in Section 4. In Section 5, we present the results of some numerical experiments which show the efficiency of our method. We provide some concluding remarks in Section 6.

Preliminaries
Let M be an m-dimensional manifold, let x ∈ M and let T_x M be the tangent space of M at x. We denote by TM = ∪_{x∈M} T_x M the tangent bundle of M. A family of inner products ⟨•, •⟩ is called a Riemannian metric on M if ⟨•, •⟩_x : T_x M × T_x M → R is an inner product for all x ∈ M. The corresponding norm induced by the inner product ⟨•, •⟩_x on T_x M is denoted by ‖•‖_x. We will drop the subscript x and write ‖•‖ for the norm induced by the inner product. A differentiable manifold M endowed with a Riemannian metric ⟨•, •⟩ is called a Riemannian manifold. In what follows, we denote the Riemannian metric by ⟨•, •⟩ when no confusion arises. Given a piecewise smooth curve γ : [a_1, a_2] → M joining x to y (that is, γ(a_1) = x and γ(a_2) = y), we define the length l(γ) of γ by l(γ) := ∫_{a_1}^{a_2} ‖γ′(t)‖ dt. The Riemannian distance d(x, y) is the minimal length over the set of all such curves joining x to y. The metric topology induced by d coincides with the original topology on M. We denote by ∇ the Levi-Civita connection associated with the Riemannian metric [34].
Let γ be a smooth curve in M. A vector field X along γ is said to be parallel if ∇_{γ′} X = 0, where 0 is the zero tangent vector. If γ′ itself is parallel along γ, then we say that γ is a geodesic, and in this case ‖γ′‖ is constant. If ‖γ′‖ = 1, then the geodesic γ is said to be normalized. A geodesic joining x to y in M is called a minimizing geodesic if its length equals d(x, y). A Riemannian manifold M equipped with the Riemannian distance d is a metric space (M, d). A Riemannian manifold M is said to be complete if for all x ∈ M, all geodesics emanating from x are defined for all t ∈ R. The Hopf–Rinow theorem [34] asserts that if M is complete, then any pair of points in M can be joined by a minimizing geodesic. Moreover, if (M, d) is a complete metric space, then every bounded and closed subset of M is compact. If M is a complete Riemannian manifold, then the exponential map exp_x : T_x M → M at x is defined by exp_x v := γ_v(1, x), where γ_v(•, x) is the geodesic starting from x with velocity v (that is, γ_v(0, x) = x and γ′_v(0, x) = v). Then, for any t, we have exp_x tv = γ_v(t, x) and exp_x 0 = γ_v(0, x) = x. Note that the mapping exp_x is differentiable on T_x M for every x ∈ M. The exponential map exp_x has an inverse exp^{-1}_x : M → T_x M. For any x, y ∈ M, we have d(x, y) = ‖exp^{-1}_y x‖ = ‖exp^{-1}_x y‖ (see [34] for more details). The parallel transport P_{γ,γ(a_2),γ(a_1)} : T_{γ(a_1)} M → T_{γ(a_2)} M along γ with respect to ∇ is defined by P_{γ,γ(a_2),γ(a_1)} v := F(γ(a_2)), where F is the unique vector field such that ∇_{γ′(t)} F = 0 for all t ∈ [a_1, a_2] and F(γ(a_1)) = v. If γ is a minimizing geodesic joining x to y, then we write P_{y,x} instead of P_{γ,y,x}. Note that for every a_1, a_2, r, s ∈ R, we have

P_{γ(a_2),γ(r)} ∘ P_{γ(r),γ(a_1)} = P_{γ(a_2),γ(a_1)} and P^{-1}_{γ(a_2),γ(a_1)} = P_{γ(a_1),γ(a_2)}.

Additionally, P_{γ(a_2),γ(a_1)} is an isometry from T_{γ(a_1)} M to T_{γ(a_2)} M; that is, the parallel transport preserves the inner product:

⟨P_{γ(a_2),γ(a_1)} u, P_{γ(a_2),γ(a_1)} v⟩_{γ(a_2)} = ⟨u, v⟩_{γ(a_1)} for all u, v ∈ T_{γ(a_1)} M.

We now give some examples of Hadamard manifolds.

Space 1: Let R_{++} = {x ∈ R : x > 0} and let M = (R_{++}, ⟨•, •⟩) be the Riemannian manifold equipped with the metric ⟨u, v⟩_x = uv/x² for all u, v ∈ T_x M. Since the sectional curvature of M is zero [35], M is an Hadamard manifold. Let x, y ∈ M and v ∈ T_x M with ‖v‖² = 1. Then, d(x, y) = |ln x − ln y|, exp_x tv = x e^{(v/x)t} for t ∈ (0, +∞), and exp^{-1}_x y = x ln(y/x).

Space 2: Let M = (R^m_{++}, ⟨•, •⟩) be the product of m copies of Space 1; that is, ⟨u, v⟩_x = Σ_{i=1}^m u_i v_i / x_i² and d(x, y) = (Σ_{i=1}^m |ln x_i − ln y_i|²)^{1/2}, where x = {x_i}_{i=1}^m and y = {y_i}_{i=1}^m, with exp_x and exp^{-1}_x computed componentwise as in Space 1.

Space 3: See [36]. Let M = H^n be the n-dimensional hyperbolic space of constant sectional curvature k = −1. The metric of H^n is induced from the Lorentz metric ⟨•, •⟩ and will be denoted by the same symbol. Consider the following model for H^n: H^n = {x = (x_1, ..., x_{n+1}) ∈ R^{n+1} : ⟨x, x⟩ = −1, x_{n+1} > 0}, where ⟨x, y⟩ = Σ_{i=1}^n x_i y_i − x_{n+1} y_{n+1}. We have ⟨u, x⟩ = 0 for all u ∈ T_x H^n. Also, d(x, y) = arccosh(−⟨x, y⟩) for all x, y ∈ H^n.

A subset K ⊂ M is said to be convex if for any two points x, y ∈ K, the geodesic γ joining x to y is contained in K. A complete, simply connected Riemannian manifold of non-positive sectional curvature is called an Hadamard manifold. We denote by M a finite-dimensional Hadamard manifold. Henceforth, unless otherwise stated, we represent by K a nonempty, closed and convex subset of M.
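Since Space 2 is the setting of the numerical experiments in Section 5, we record a minimal computational sketch of its geometry. The componentwise formulas follow from the product structure described above; the handles dist, expmap and logmap are illustrative names rather than library routines:

% Geometry of Space 2: M = R^m_{++} with metric <u,v>_x = sum(u.*v./x.^2).
dist   = @(x,y) norm(log(x) - log(y));   % Riemannian distance d(x,y)
expmap = @(x,v) x .* exp(v ./ x);        % exponential map exp_x(v)
logmap = @(x,y) x .* log(y ./ x);        % inverse exponential map exp_x^{-1}(y)

% Consistency checks: exp_x(exp_x^{-1}(y)) = y and ||exp_x^{-1}(y)||_x = d(x,y).
x = 1 + rand(5,1);  y = 1 + rand(5,1);
assert(norm(expmap(x, logmap(x, y)) - y) < 1e-12);
assert(abs(norm(logmap(x, y) ./ x) - dist(x, y)) < 1e-12);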
We now collect some results and definitions which we shall use in the sequel.

Proposition 1 ([34]). Let x ∈ M. The exponential mapping exp_x : T_x M → M is a diffeomorphism. For any two points x, y ∈ M, there exists a unique normalized geodesic joining x to y, which is given by

γ(t) = exp_x (t exp^{-1}_x y) for all t ∈ [0, 1].

A geodesic triangle ∆(p, q, r) of a Riemannian manifold M is a set consisting of three points p, q, r and three minimizing geodesics joining these points.

Proposition 2 ([34]). Let ∆(p, q, r) be a geodesic triangle in M. Then,

d²(p, q) + d²(q, r) − 2⟨exp^{-1}_q p, exp^{-1}_q r⟩ ≤ d²(r, p)

and

d²(p, q) ≤ ⟨exp^{-1}_p r, exp^{-1}_p q⟩ + ⟨exp^{-1}_q r, exp^{-1}_q p⟩.

Moreover, if θ is the angle at p, then we have

⟨exp^{-1}_p q, exp^{-1}_p r⟩ = d(q, p) d(p, r) cos θ.

For any x ∈ M and any nonempty, closed and convex set K ⊂ M, there exists a unique point y ∈ K such that d(x, y) ≤ d(x, z) for all z ∈ K. This unique point y is called the nearest point projection of x onto K and is denoted by P_K(x).
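The nearest point projection admits the following variational characterization, standard in the Hadamard manifold setting, which we record for later use:

\[
  y = P_K(x) \iff \bigl\langle \exp_{y}^{-1} x,\; \exp_{y}^{-1} z \bigr\rangle \le 0
  \quad \text{for all } z \in K.
\]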
Recall that, given a function ψ : [0, +∞) → [0, +∞), a mapping f : M → M is called a ψ-contraction if d(f(x), f(y)) ≤ ψ(d(x, y)) for all x, y ∈ M. Any ψ-contraction belongs to the class of mappings introduced by Boyd and Wong [38], who established the existence and uniqueness of a fixed point for mappings in this class in the framework of complete metric spaces.
The next lemma presents the relationship between triangles in R² and geodesic triangles in Riemannian manifolds (see [39]).

Lemma 2 ([39]). Let ∆(u_1, u_2, u_3) be a geodesic triangle in an Hadamard manifold M. Then, there exists a triangle ∆(ū_1, ū_2, ū_3) ⊂ R², unique up to rigid motions of R², such that d(u_i, u_{i+1}) = ‖ū_i − ū_{i+1}‖ for i = 1, 2, 3 (indices taken modulo 3).

The triangle ∆(ū_1, ū_2, ū_3) in Lemma 2 is called the comparison triangle for ∆(u_1, u_2, u_3) ⊂ M. The points ū_1, ū_2 and ū_3 are called the comparison points of the points u_1, u_2 and u_3 in M.
A function h : M → R is said to be convex if for any geodesic γ in M, the composition h ∘ γ is convex. The subdifferential of a convex function h : M → R at x ∈ M is the set

∂h(x) := {u ∈ T_x M : h(y) ≥ h(x) + ⟨u, exp^{-1}_x y⟩ for all y ∈ M}.

The convex function h is called subdifferentiable at a point x ∈ M if the set ∂h(x) is nonempty. The elements of ∂h(x) are called the subgradients of h at x. The set ∂h(x) is closed and convex, and it is known to be nonempty if h is convex on M. For a bifunction h on M × M, we denote by ∂_2 h the subdifferential of h with respect to the second argument, that is, the subdifferential of h(x, •) for fixed x ∈ M. The normal cone N_K to K at a point x ∈ K is defined by

N_K(x) := {u ∈ T_x M : ⟨u, exp^{-1}_x y⟩ ≤ 0 for all y ∈ K}.

Lemma 3 ([20]). Let x_0 ∈ M and {x_n} ⊂ M be such that x_n → x_0. Then, for any y ∈ M, we have exp^{-1}_{x_n} y → exp^{-1}_{x_0} y and exp^{-1}_y x_n → exp^{-1}_y x_0.

The following definitions can be found in [40]. Let K be a nonempty, closed and convex subset of M. A bifunction g : K × K → R is said to be:
(i) Monotone on K if g(x, y) + g(y, x) ≤ 0 for all x, y ∈ K;
(ii) Pseudomonotone on K if g(x, y) ≥ 0 ⇒ g(y, x) ≤ 0 for all x, y ∈ K;
(iii) Lipschitz-type continuous on K if there exist constants c_1 > 0 and c_2 > 0 such that

g(x, z) ≤ g(x, y) + g(y, z) + c_1 d²(x, y) + c_2 d²(y, z) for all x, y, z ∈ K.

For solving EP (1), we make the following assumptions concerning g on K:
(A1) g is pseudomonotone on K and g(x, x) = 0 for all x ∈ M;
(A2) g(•, y) is upper semicontinuous for all y ∈ M;
(A3) g(x, •) is convex and subdifferentiable for each fixed x ∈ M;
(A4) g satisfies a Lipschitz-type condition on M.
The following propositions (see [41]) are very useful in our convergence analysis:

Proposition 3. Let M be an Hadamard manifold and let d : M × M → R be the distance function. Then, the function d is convex with respect to the product Riemannian metric. In other words, given any pair of geodesics γ_1 : [0, 1] → M and γ_2 : [0, 1] → M, for all t ∈ [0, 1], we have

d(γ_1(t), γ_2(t)) ≤ (1 − t) d(γ_1(0), γ_2(0)) + t d(γ_1(1), γ_2(1)).

In particular, for each y ∈ M, the function d(•, y) : M → R is a convex function.

Proposition 4. Let M be an Hadamard manifold and x ∈ M. Let ρ_x(y) = ½ d²(x, y). Then, ρ_x is strictly convex and its gradient at y is given by grad ρ_x(y) = −exp^{-1}_y x.

Proposition 5. Let K be a nonempty convex subset of an Hadamard manifold M and let h : K → R be a proper, convex and lower semicontinuous function on K. Then, a point x solves the convex minimization problem

min_{y∈K} h(y)

if and only if 0 ∈ ∂h(x) + N_K(x).
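To indicate how Propositions 4 and 5 enter the analysis of Section 3, consider the proximal subproblem that defines y_n in Algorithm 1 below. The following routine computation (a sketch, using the notation of Algorithm 1) records the resulting optimality condition:

% Optimality condition for
%   y_n = argmin_{y in K} { lambda_n g(w_n, y) + (1/2) d^2(w_n, y) }.
% Proposition 5 gives 0 \in \partial\bigl(\lambda_n g(w_n,\cdot) + \rho_{w_n}\bigr)(y_n) + N_K(y_n),
% and Proposition 4 gives \operatorname{grad}\rho_{w_n}(y_n) = -\exp_{y_n}^{-1} w_n.
% Hence there exists a_n \in \partial_2 g(w_n, y_n) such that
\[
  \bigl\langle \exp_{y_n}^{-1} w_n - \lambda_n a_n,\; \exp_{y_n}^{-1} y \bigr\rangle \le 0
  \qquad \text{for all } y \in K.
\]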

Main Result
In this section, we first propose a convergent algorithm for approximating a solution to the EP (1) and then present its convergence analysis. Let f : M → M be a ψ-contraction, where ψ : [0, +∞) → [0, +∞) is a continuous and increasing function satisfying ψ(0) = 0 and ψ(s) < s for all s > 0. The solution set Sol(g, K) is closed and convex [9,10]. We assume that Sol(g, K) is nonempty.
Assume {ϵ_n} is a positive sequence such that ϵ_n = o(β_n), that is, lim_{n→∞} ϵ_n/β_n = 0.

Remark 2. We observe that Algorithm 1 provides us with a self-adaptive method whose step length can increase from iteration to iteration, unlike the monotonically decreasing sequence of step lengths in [17]. By this construction, the dependence of the method on the Lipschitz constants of the bifunction g is dispensed with.
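Although the precise definition of the step-size sequence is given in (8), it is worth making the mechanism explicit. A representative self-adaptive rule of this type, consistent with the properties of {λ_n} used in Lemma 6 below, reads as follows; here µ ∈ (0, 1) is the parameter of the method, and {ρ_n} is a nonnegative summable sequence introduced in this display for illustration only:

\[
  \lambda_{n+1} =
  \begin{cases}
    \min\!\left\{ \dfrac{\mu\bigl(d^{2}(w_n, y_n) + d^{2}(y_n, z_n)\bigr)}
                       {2\bigl(g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n)\bigr)},\;
                 \lambda_n + \rho_n \right\},
      & \text{if } g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n) > 0,\\[2mm]
    \lambda_n + \rho_n, & \text{otherwise.}
  \end{cases}
\]

The second argument of the minimum permits the step size to grow by ρ_n at each iteration, while the summability of {ρ_n} keeps {λ_n} convergent; the first argument is what yields inequality (13) in the proof of Lemma 7.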
Algorithm 1: Inertial subgradient extragradient method for solving EP
Step 1: Given x_n, x_{n−1} and λ_n, choose θ_n such that θ_n ∈ [0, θ̄_n], where

θ̄_n = min{θ, ϵ_n / d(x_n, x_{n−1})} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Compute

w_n = exp_{x_n}(−θ_n exp^{-1}_{x_n} x_{n−1}) and y_n = argmin_{y∈K} { λ_n g(w_n, y) + ½ d²(w_n, y) }.

If y_n = w_n, then stop. Otherwise, go to the next step.
Step 2: Define the half-space T_n by

T_n := {y ∈ M : ⟨exp^{-1}_{y_n} w_n − λ_n v_n, exp^{-1}_{y_n} y⟩ ≤ 0}, where v_n ∈ ∂_2 g(w_n, y_n),

and compute

z_n = argmin_{y∈T_n} { λ_n g(y_n, y) + ½ d²(w_n, y) }.

Step 3: Compute

x_{n+1} = γ_n(1 − β_n),

where γ_n : [0, 1] → M is the geodesic joining f(x_n) to z_n, that is, γ_n(0) = f(x_n) and γ_n(1) = z_n for all n ≥ 0.
Set n := n + 1 and return to Step 1.

Lemma 6. Let {λ_n} be the sequence given by (8). Then, lim_{n→∞} λ_n = λ for some λ > 0; consequently, lim_{n→∞} µλ_n/λ_{n+1} = µ.

Proof. Assume (A4) holds; then there exist c_1 and c_2 such that

g(w_n, z_n) − g(w_n, y_n) − g(y_n, z_n) ≤ c_1 d²(w_n, y_n) + c_2 d²(y_n, z_n) ≤ max{c_1, c_2} (d²(w_n, y_n) + d²(y_n, z_n)).

Thus, the quotient appearing in (8) is bounded below by µ/(2 max{c_1, c_2}) whenever it is defined. Using induction, we obtain

λ_n ≥ min{λ_1, µ/(2 max{c_1, c_2})} for all n ≥ 1.

It is not difficult to show that lim_{n→∞} λ_n = λ. Therefore, the convergence of {λ_n} implies that lim_{n→∞} µλ_n/λ_{n+1} = µ.

Lemma 7. The sequence {x_n} defined recursively by Algorithm 1 satisfies the inequality

d²(z_n, p) ≤ d²(w_n, p) − (1 − µλ_n/λ_{n+1}) (d²(w_n, y_n) + d²(y_n, z_n)) for all p ∈ Sol(g, K). (17)

Proof. Let p ∈ Sol(g, K). Using the definition of z_n and Proposition 5, we find that

0 ∈ ∂_2 (λ_n g(y_n, •) + ½ d²(w_n, •))(z_n) + N_{T_n}(z_n).

There exist a_n ∈ ∂_2 g(y_n, z_n) and ā_n ∈ N_{T_n}(z_n) such that λ_n a_n − exp^{-1}_{z_n} w_n + ā_n = 0. Hence, for all y ∈ T_n, we obtain

⟨exp^{-1}_{z_n} w_n − λ_n a_n, exp^{-1}_{z_n} y⟩ ≤ 0. (9)

From the definition of the subdifferential and the fact that a_n ∈ ∂_2 g(y_n, z_n), it follows that

g(y_n, y) − g(y_n, z_n) ≥ ⟨a_n, exp^{-1}_{z_n} y⟩ for all y ∈ M. (10)

We obtain from (9) and (10) that

λ_n (g(y_n, y) − g(y_n, z_n)) ≥ ⟨exp^{-1}_{z_n} w_n, exp^{-1}_{z_n} y⟩ for all y ∈ T_n. (11)

Let y = p in (11). We have

λ_n (g(y_n, p) − g(y_n, z_n)) ≥ ⟨exp^{-1}_{z_n} w_n, exp^{-1}_{z_n} p⟩.

Since p ∈ Sol(g, K), we have g(p, y_n) ≥ 0. It follows from the pseudomonotonicity of g that g(y_n, p) ≤ 0. Thus, we obtain

⟨exp^{-1}_{z_n} w_n, exp^{-1}_{z_n} p⟩ ≤ −λ_n g(y_n, z_n). (12)

It is easy to see from (8) that

g(w_n, z_n) − g(w_n, y_n) − g(y_n, z_n) ≤ (µ/(2λ_{n+1})) (d²(w_n, y_n) + d²(y_n, z_n)). (13)

It follows from z_n ∈ T_n that ⟨exp^{-1}_{y_n} w_n − λ_n v_n, exp^{-1}_{y_n} z_n⟩ ≤ 0, which implies that

⟨exp^{-1}_{y_n} w_n, exp^{-1}_{y_n} z_n⟩ ≤ λ_n ⟨v_n, exp^{-1}_{y_n} z_n⟩. (14)

Since v_n ∈ ∂_2 g(w_n, y_n), it follows from the definition of the subdifferential that

⟨v_n, exp^{-1}_{y_n} y⟩ ≤ g(w_n, y) − g(w_n, y_n).
Setting y = z_n in the above inequality, we have

⟨v_n, exp^{-1}_{y_n} z_n⟩ ≤ g(w_n, z_n) − g(w_n, y_n). (15)

Thus, it follows from the above inequality and (14) that

⟨exp^{-1}_{y_n} w_n, exp^{-1}_{y_n} z_n⟩ ≤ λ_n (g(w_n, z_n) − g(w_n, y_n)).

Combining (12), (13) and (15), we obtain

⟨exp^{-1}_{z_n} w_n, exp^{-1}_{z_n} p⟩ ≤ −⟨exp^{-1}_{y_n} w_n, exp^{-1}_{y_n} z_n⟩ + (µλ_n/(2λ_{n+1})) (d²(w_n, y_n) + d²(y_n, z_n)). (16)

Using Equation (3) and Proposition 2, we obtain

2⟨exp^{-1}_{z_n} w_n, exp^{-1}_{z_n} p⟩ ≥ d²(z_n, w_n) + d²(z_n, p) − d²(w_n, p) and 2⟨exp^{-1}_{y_n} w_n, exp^{-1}_{y_n} z_n⟩ ≥ d²(y_n, w_n) + d²(y_n, z_n) − d²(w_n, z_n).

Using this in (16), we obtain

d²(z_n, p) ≤ d²(w_n, p) − (1 − µλ_n/λ_{n+1}) (d²(w_n, y_n) + d²(y_n, z_n)).

Therefore, we have the desired inequality (17).

Lemma 8. Let f : K → K be a ψ-contraction and assume that 0 < κ = sup{ψ(d(x_n, q))/d(x_n, q) : x_n ≠ q, n ≥ 0, q ∈ Sol(g, K)} < 1. Then, the sequence {x_n} generated by Algorithm 1 is bounded.
Proof. Fix n ≥ 1 and p ∈ Sol(g, K), and consider the geodesic triangles ∆(w_n, x_n, p) and ∆(x_n, x_{n−1}, p) with the comparison triangles ∆(w′_n, x′_n, p′) and ∆(x′_n, x′_{n−1}, p′). Then, by Lemma 2, we have

d(w_n, p) ≤ d(x_n, p) + θ_n d(x_n, x_{n−1}). (18)

Since θ_n d(x_n, x_{n−1}) ≤ ϵ_n and ϵ_n = o(β_n), we obtain

d(w_n, p) ≤ d(x_n, p) + β_n t_n, where t_n := (θ_n/β_n) d(x_n, x_{n−1}) → 0 as n → ∞.

Hence, we obtain that {t_n} is bounded, say t_n ≤ t for some t > 0. It is not difficult to see that, for all sufficiently large n, d(z_n, p) ≤ d(w_n, p) by (17) and Lemma 6. Next, using the definition of x_{n+1}, the convexity of the Riemannian distance and (17), we see that

d(x_{n+1}, p) ≤ β_n d(f(x_n), p) + (1 − β_n) d(z_n, p) ≤ β_n ψ(d(x_n, p)) + β_n d(f(p), p) + (1 − β_n) (d(x_n, p) + β_n t_n).

Since 0 < κ = sup{ψ(d(x_n, q))/d(x_n, q) : x_n ≠ q, n ≥ 0, q ∈ Sol(g, K)} < 1, we find that

d(x_{n+1}, p) ≤ (1 − β_n (1 − κ)) d(x_n, p) + β_n (d(f(p), p) + t) ≤ max{d(x_n, p), (d(f(p), p) + t)/(1 − κ)}.

By induction, d(x_n, p) ≤ max{d(x_1, p), (d(f(p), p) + t)/(1 − κ)} for all n ≥ 1. Hence, the sequence {x_n} is bounded. Consequently, the sequences {w_n}, {y_n} and {z_n} are bounded too.

Theorem 1. Let f : K → K be a ψ-contraction and assume that conditions (A1)–(A4) hold. If 0 < κ = sup{ψ(d(x_n, q))/d(x_n, q) : x_n ≠ q, n ≥ 0, q ∈ Sol(g, K)} < 1, then the sequence {x_n} generated by Algorithm 1 converges to a point p ∈ Sol(g, K), where p = P_{Sol(g,K)} f(p) and P_{Sol(g,K)} is the nearest point projection onto Sol(g, K).
Proof. Let p ∈ Sol(g, K) satisfy p = P_{Sol(g,K)} f(p). Note that this fixed point equation has a unique solution by the Boyd–Wong fixed point theorem [38]. Fix n ≥ 1 and let w = f(x_n), z = z_n and y = f(p). Consider the following geodesic triangles with their respective comparison triangles in R²: ∆(w, z, p) and ∆(w′, z′, p′); ∆(y, z, w) and ∆(y′, z′, w′); ∆(y, z, p) and ∆(y′, z′, p′). By Lemma 2, we have d(w, z) = ‖w′ − z′‖, d(w, y) = ‖w′ − y′‖, d(w, p) = ‖w′ − p′‖, d(z, y) = ‖z′ − y′‖ and d(y, p) = ‖y′ − p′‖. From the definition of x_{n+1}, we have

d²(x_{n+1}, p) ≤ β_n² d²(w, p) + (1 − β_n)² d²(z, p) + 2β_n (1 − β_n) ⟨w′ − p′, z′ − p′⟩. (20)

Let α and ᾱ denote the angles at p and p′ in the triangles ∆(y, x_{n+1}, p) and ∆(y′, x′_{n+1}, p′), respectively. Then, we have α ≤ ᾱ and cos ᾱ ≤ cos α. Using Lemma 4 and the contraction property of f in (20), we obtain

d²(x_{n+1}, p) ≤ (1 − (1 − κ) β_n) d²(x_n, p) + (1 − κ) β_n b_n, (23)

where {b_n} is a bounded real sequence whose leading term is (2/(1 − κ)) ⟨exp^{-1}_p f(p), exp^{-1}_p x_{n+1}⟩. It follows from (23) that

d²(x_{n+1}, p) ≤ max{d²(x_n, p), M}, (24)

where M = sup_{n∈N} b_n. We claim that d(x_n, p) → 0 as n → ∞. To prove this, set a_n = d²(x_n, p). It is easy to see from (23) that the sequence {a_n} satisfies

a_{n+1} ≤ (1 − (1 − κ) β_n) a_n + (1 − κ) β_n b_n.

Next, we claim that lim sup_{k→∞} b_{n_k} ≤ 0 whenever there exists a subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞} (a_{n_k+1} − a_{n_k}) ≥ 0. To prove this, assume the existence of such a subsequence {a_{n_k}}. Then, by using (24), we have

lim sup_{k→∞} (1 − µλ_{n_k}/λ_{n_k+1}) (d²(w_{n_k}, y_{n_k}) + d²(y_{n_k}, z_{n_k})) ≤ 0.

Note that λ_n → λ as n → ∞ and that µ ∈ (0, 1). Hence, there exists N ≥ 0 such that for all n ≥ N, 0 < µλ_n/λ_{n+1} < 1; that is, lim_{n→∞} (1 − µλ_n/λ_{n+1}) = 1 − µ > 0. By taking n = n_k in (17), it is not difficult to see that

lim_{k→∞} d(w_{n_k}, y_{n_k}) = 0 = lim_{k→∞} d(y_{n_k}, z_{n_k}). (26)

Using the triangle inequality, we obtain

lim_{k→∞} d(w_{n_k}, z_{n_k}) = 0. (27)

By employing the convexity of the Riemannian distance, we have

d(x_{n_k}, w_{n_k}) ≤ θ_{n_k} d(x_{n_k}, x_{n_k−1}).

Thus, it follows from (C1) that lim_{k→∞} d(x_{n_k}, w_{n_k}) = 0. When combined with (26) and (27), this yields

lim_{k→∞} d(x_{n_k}, y_{n_k}) = 0 = lim_{k→∞} d(x_{n_k}, z_{n_k}). (28)

Now, we claim that lim sup_{k→∞} b_{n_k} ≤ 0. To see this, we only need to show that lim sup_{k→∞} ⟨exp^{-1}_p f(p), exp^{-1}_p x_{n_k}⟩ ≤ 0. Choose a subsequence {x_{n_k_j}} of {x_{n_k}} such that x_{n_k_j} → q for some q ∈ M and the above upper limit is attained along it. Since x_{n_k_j} → q, it follows from (28) that y_{n_k_j}, z_{n_k_j} → q. Using (11), we see that

λ_{n_k_j} (g(y_{n_k_j}, y) − g(y_{n_k_j}, z_{n_k_j})) ≥ ⟨exp^{-1}_{z_{n_k_j}} w_{n_k_j}, exp^{-1}_{z_{n_k_j}} y⟩ for all y ∈ T_{n_k_j},

which implies, in view of (13), Lemma 3 and assumption (A2), that g(q, y) ≥ 0 for all y ∈ K; that is, q ∈ Sol(g, K). Since p = P_{Sol(g,K)} f(p), the characterization of the nearest point projection gives ⟨exp^{-1}_p f(p), exp^{-1}_p q⟩ ≤ 0, and hence lim sup_{k→∞} b_{n_k} ≤ 0. By the claim, a_n = d²(x_n, p) → 0; that is, {x_n} converges to p.

Let G : K → TM be a single-valued vector field. The associated variational inequality problem, VIP(G, K), consists of finding x ∈ K such that

⟨G(x), exp^{-1}_x y⟩ ≥ 0 for all y ∈ K. (33)

By replacing the proximal term argmin_{y∈M} {g(x, y) + (1/(2λ_n)) d²(x, y)} with P_K(exp_x(−λ_n G(x))), where P_K is the metric projection of M onto K, in Algorithm 1, we obtain the following method for approximating a point in VIP(G, K). In this setting, we have an analogous convergence theorem for approximating a solution to the VIP (33).
Algorithm 2: Inertial subgradient extragradient method for solving VIP (ISEMVIP)
Step 1: Given x_n, x_{n−1} and λ_n, choose θ_n such that θ_n ∈ [0, θ̄_n], where

θ̄_n = min{θ, ϵ_n / d(x_n, x_{n−1})} if x_n ≠ x_{n−1}, and θ̄_n = θ otherwise.

Compute

w_n = exp_{x_n}(−θ_n exp^{-1}_{x_n} x_{n−1}) and y_n = P_K(exp_{w_n}(−λ_n G(w_n))).

If y_n = w_n, then stop. Otherwise, go to the next step.
Step 2: Compute v_n = G(w_n) (playing the role of v_n ∈ ∂_2 g(w_n, y_n) in Algorithm 1) and define the half-space T_n by

T_n := {y ∈ M : ⟨exp^{-1}_{y_n} w_n − λ_n v_n, exp^{-1}_{y_n} y⟩ ≤ 0},

and compute

z_n = P_{T_n}(exp_{w_n}(−λ_n G(y_n))).

Step 3: Compute

x_{n+1} = γ_n(1 − β_n),

where γ_n : [0, 1] → M is the geodesic joining f(x_n) to z_n. Set n := n + 1 and return to Step 1.
Remark 3. Note that Algorithm 2 is a direct application of Algorithm 1 to a variational inequality problem, and that the projection onto the half-space T_n in Algorithm 2 can be calculated in closed form, without the need for a minimization algorithm such as the one used for computing z_n in Algorithm 1 when solving equilibrium problems. For a closed-form formula for computing the metric projection onto T_n, see, for example, [46].
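To illustrate why this projection is inexpensive, consider the special case M = R^n, where exp^{-1}_x y = y − x and T_n reduces to the classical half-space T = {y : ⟨u, y − a⟩ ≤ 0} with a = y_n and u = w_n − λ_n v_n − y_n. In this Euclidean case, the projection has the familiar closed form (the general Hadamard-manifold formula is the one referenced in [46]):

\[
  P_T(z) = z - \frac{\max\{\langle u,\, z - a \rangle,\, 0\}}{\lVert u \rVert^{2}}\, u,
  \qquad z \in \mathbb{R}^n,\ u \neq 0.
\]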

An Application to Solving Convex Optimization Problems
Consider the convex optimization problem (COP):

min_{x∈K} h(x), (38)

where h : M → (−∞, +∞] is a proper, convex and lower semicontinuous function such that K is contained in the effective domain of h, that is, K ⊂ dom h := {x ∈ M : h(x) < +∞}.
The set of solutions to COP (38) is denoted by COP(h, K). Let the bifunction g : K × K → R be defined by g(x, y) := h(y) − h(x). Then, g satisfies conditions (A1)–(A4) and COP(h, K) = Sol(g, K). Let Prox_{λh} be the proximal operator of the function h with parameter λ > 0, and let ∇h denote the gradient of h. Using the term Prox_{λh}(exp_x(−λ∇h(x))) in place of argmin_{y∈M} {g(x, y) + (1/(2λ_n)) d²(x, y)} in Algorithm 1, we obtain a method for minimizing the function h.
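To see why the proximal operator appears here, note the following elementary computation, which uses only the definitions above together with the convention Prox_{λh}(x) := argmin_{y∈M} {h(y) + (1/(2λ)) d²(x, y)}:

\begin{align*}
  \arg\min_{y \in M}\Bigl\{ \lambda\, g(x, y) + \tfrac12 d^{2}(x, y) \Bigr\}
  &= \arg\min_{y \in M}\Bigl\{ \lambda\, h(y) - \lambda\, h(x) + \tfrac12 d^{2}(x, y) \Bigr\}\\
  &= \arg\min_{y \in M}\Bigl\{ h(y) + \tfrac{1}{2\lambda} d^{2}(x, y) \Bigr\}
   = \operatorname{Prox}_{\lambda h}(x),
\end{align*}

since the constant term −λh(x) does not affect the minimizer. Note also that g is monotone, and hence pseudomonotone, because g(x, y) + g(y, x) = 0 for all x, y.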

Numerical Example
In this section, we present some numerical illustrations of our main result. All codes were written in Matlab 2017b and run on a personal computer with an Intel Core i5 processor at 2.0 GHz and 8.00 GB of RAM.
Example 1. We consider an extension of the Nash equilibrium model introduced in [7,47]. In this problem, the bifunction g : K × K → R is given by g(x, y) = ⟨Px + Qy + p, y − x⟩.
Let M be Space 2 above and let K ⊂ M be as considered in [7,47]. Let x, y ∈ K, and let p = (p_1, p_2, ..., p_m)^T ∈ R^m be chosen randomly with elements in [1, m]. The matrices P and Q are two square matrices of order m such that Q is symmetric positive semidefinite and Q − P is negative semidefinite. It is known (see [7]) that g is then pseudomonotone and Lipschitz-type continuous with constants c_1 = c_2 = ½‖Q − P‖ (see [15], Lemma 6.2). Assumptions (A2) and (A3) are also satisfied (see [48]). Thus, our main theorem is fully compatible with this example. Setting δ_n = 1/(2n+7), β_n = 1/(n+1), ϵ_n = 1/n^{1.1}, µ = 0.5 and λ_1 = 10^{-3}, we compare our method with Algorithm 1 of Fan et al. [23]. The comparisons are made for some values of m using ‖x_{n+1} − x_n‖ < 10^{-4} as the stopping criterion. The results for this example are presented in Table 1 and Figure 1.

Example 2. Let M be Space 2 above. We consider an example of a variational inequality and present a numerical comparison of Algorithm 2 (the adaptation of Algorithm 1 to variational inequalities) with Algorithm 1 of Fan et al. [23]. The following example has been considered in many recent articles (see, for example, [49]). The mapping F is defined as in [49].
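For reproducibility, test data with the properties stated in Example 1 can be generated as in the following sketch; the construction below is one convenient choice among many (the variable names are ours):

% Generate data for Example 1: Q symmetric positive semidefinite,
% Q - P negative semidefinite, and p with entries in [1, m].
m = 10;
A = randn(m);  Q = A' * A;        % Q = A'A is symmetric PSD
C = randn(m);  P = Q + C' * C;    % P - Q = C'C is PSD, so Q - P is NSD
p = 1 + (m - 1) * rand(m, 1);     % p drawn componentwise from [1, m]
g = @(x, y) (P * x + Q * y + p)' * (y - x);   % bifunction of Example 1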

Conclusions
In this paper, we introduced an inertial subgradient extragradient method for approximating solutions to equilibrium problems in the framework of Hadamard manifolds. Since we use self-adaptive step sizes which are allowed to increase from iteration to iteration, our method does not require knowledge of the Lipschitz constants of the underlying bifunction. A convergence result was proved by using a viscosity technique with mild conditions on the control parameters used for generating the sequence of approximants. We also provided two theoretical applications of our result. Furthermore, we presented some numerical experiments which illustrate the performance of the proposed method. By way of comparison with another method for the same problem, due to Fan et al. [23], we displayed the competitiveness of our algorithm. We intend to consider more examples in Hadamard manifolds in future work.

Table 1 .
Computation results for Example 1.

Table 2 .
Computation results for Example 2.