Necessary and Sufficient Second-Order Optimality Conditions on Hadamard Manifolds

Abstract: This work studies necessary and sufficient optimality conditions for scalar optimization problems on Hadamard manifolds. In this geometric context, we obtain new function classes characterized by the property that all of their second-order stationary points are global minimums. To do so, we extend the concept of convexity in Euclidean space to the more general notion of invexity on Hadamard manifolds, employing second-order directional derivatives, second-order pseudoinvex functions, and the second-order Karush–Kuhn–Tucker-pseudoinvex problem. We prove that every second-order stationary point is a global minimum if and only if the problem is second-order pseudoinvex or second-order KKT-pseudoinvex, depending on whether the problem is an unconstrained or a constrained scalar optimization problem, respectively. This result has not been presented in the literature before. Finally, examples of these new characterizations are provided in the context of “Higgs Boson-like” potentials, among others.


Introduction
This article concerns itself with several questions, the first of which is why we study the optimality conditions of scalar mathematical programming problems. Optimality conditions are a crucial asset for solving optimization problems, which constitute some of the most ubiquitous types of problems across many scientific disciplines. Furthermore, optimality conditions and their associated optimal points play a vital role in activities as interdisciplinary as finding the best-fit parameters of a model given a set of data. In computational biology, for instance, the RNA sequence alignment problem and the evolution of median molecules have been investigated as single-objective optimization problems [1].
The second and most important of these questions is the generalization and use of the notion of convexity beyond standard Euclidean geometries. Convexity conditions and their different extensions are best understood by looking at the properties they bring to an optimization problem. For instance, ensuring convexity in an optimization problem guarantees that every local optimum found is indeed a global optimum. Other classes, such as pseudoconvex functions, ensure that every critical point is also an optimum. However, this latter property does not fully characterize pseudoconvex functions, as it is also shared by other types of functions. For example, the invex functions, first described by Hanson [2], are defined as precisely those functions whose critical points and optimal points coincide.
The equivalence between critical and optimal points makes it possible to design practical numerical methods and algorithms to find solutions of problems that satisfy this condition. In addition, replacing the vector x − y in the definition of a convex function by a suitable function or kernel η(x, y) makes the invexity concept much more flexible and applicable to other fields, such as fuzzy or interval-valued environments [3,4].
In scalar mathematical programming problems with constraints, the invexity of the objective function is not sufficient to guarantee that a Karush-Kuhn-Tucker (KKT) stationary point is an optimum. Thus, it is also necessary to impose conditions on the constraints of the problem. Following these ideas, Martin [5] first defined the KT-invex function. It is also well known that the first-order KKT necessary conditions only provide us with candidate solutions, the KKT stationary points, while the second-order necessary conditions discard some of these points in the search for the optimum. The importance of these second-order conditions must be stressed, since optimums cannot, in general, be found using only the classical first-order optimality conditions. An example of this is illustrated in Ginchev and Ivanov [6].
Until recently, studies have mainly explored these problems in standard Euclidean geometry. However, efforts have shifted towards developing a good understanding of them on different varieties of Riemannian manifolds, such as the Hadamard manifolds considered in our study. While Euclidean space is a flat linear space, Riemannian manifolds are non-linear curved spaces. In convex optimization, the convexity of a set in a linear space is linked to the possibility of connecting any two points of the set through straight-line segments. Since the role of straight lines in Euclidean space is played by geodesics on manifolds, translating a problem to a non-Euclidean setup partially amounts to replacing linear segments by geodesic arcs.
Studying optimization problems on Hadamard manifolds offers great advantages since, in general, solving a nonconvex constrained problem on R^n with the Euclidean metric can be rephrased as solving an unconstrained convex minimization problem on a Hadamard-manifold feasible set with an appropriate metric [7]. In this way, we can take advantage of the convexity induced by the geometry.
Beyond the field of mathematics, Hadamard manifolds have attracted a substantial amount of interest. For example, in medicine, Hadamard manifolds are used in medical imaging analysis to quantify tumour growth and consequently infer disease progression. Furthermore, problems related to stereo vision processing [8], as well as certain models in machine learning and computer vision, cannot be handled in classical spaces and require Hadamard manifolds for their modelling [9].
In recent times, several authors have studied these topics. Invexity on Riemannian manifolds and its relationship with monotonicity were studied by Barani and Pouryayevali [10], and Mond-Weir dual problems for optimization problems under invexity assumptions were obtained on Hadamard manifolds by Zhou and Huang [11]. More closely related to our study, Ginchev and Ivanov [6] obtained second-order optimality conditions of the KKT type for a problem with inequality constraints using pseudoconvex functions in Euclidean spaces. In [12], the author used second-order Fréchet differentiable functions in R^n and defined second-order KT-pseudoconvexity to prove that each second-order KT point is a global minimum. Two years later, the same author, in [13], extended some of these results to invex functions, but always in Euclidean spaces.
In Ruiz-Garzón et al. [14], we obtained first-order optimality conditions for scalar and vector optimization problems on Riemannian manifolds, but not second-order conditions; the latter will be discussed in this article. Later, in [15], the authors proved a non-smooth version of those conditions. Furthermore, in Ruiz-Garzón et al. [16], we proved KKT optimality conditions for constrained vector optimization problems on Hadamard manifolds as a particular case of equilibrium vector problems with constraints.
Motivated by Ivanov's work mentioned previously, the objective of this paper is to extend the second-order optimality conditions obtained in Euclidean spaces to Hadamard manifolds. Therefore, we present necessary and sufficient optimality conditions for both unconstrained and constrained scalar optimization problems on Hadamard manifolds, looking for the function types for which the second-order critical points and the global minimum points coincide.
The rest of this article is organized as follows. In Section 2, we recall concepts related to Hadamard manifolds, and we define the second-order directional derivative on Hadamard manifolds. In Section 3, we prove that second-order invex and second-order pseudoinvex functions coincide and are characterized by the fact that all second-order critical points are global optimums for unconstrained optimization problems. In Section 4, we extend these results to constrained optimization problems by defining new concepts, such as the second-order KKT stationary point and the second-order KKT-pseudoinvex problem, to characterize the global optimums of these problems on Hadamard spaces. We end with some conclusions and references.

Preliminaries
Let M be an n-dimensional differentiable manifold. We denote by T_x̄M the tangent space of M at x, which is a real vector space of the same dimension as M. The collection of all tangent spaces of M, denoted by TM = ∪_{x∈M} T_xM, is the tangent bundle of M.
Given a piecewise C¹ curve α : [a, b] → M, we denote the norm of a tangent vector v ∈ T_xM by ‖v‖_x, and the length of α by L(α) = ∫_a^b ‖α′(t)‖_{α(t)} dt. For any two points x, y ∈ M, the distance d between the points can be defined as d(x, y) = inf{L(α) | α is a piecewise C¹ curve joining x and y}.
Furthermore, any curve α joining x and y in M such that L(α) = d(x, y) constitutes a geodesic. We will assume that the manifold is complete; in other words, that any two points x and y can be connected with a geodesic whose length is d(x, y).
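As a quick sanity check of these definitions in the simplest Hadamard manifold, the Euclidean plane R² (flat, so geodesics are straight segments), the following sketch approximates L(α) numerically for the straight segment joining two points and for a curved detour; the endpoints and curves are illustrative choices, not from the text.

```python
import numpy as np

def curve_length(alpha, a=0.0, b=1.0, n=20000):
    # Approximate L(alpha) = integral of ||alpha'(t)|| dt by the length
    # of a fine polyline through alpha(t_0), ..., alpha(t_{n-1}).
    t = np.linspace(a, b, n)
    pts = np.array([alpha(ti) for ti in t])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

x, y = np.array([0.0, 0.0]), np.array([3.0, 4.0])

segment = lambda t: (1 - t) * x + t * y                  # the geodesic in R^2
detour = lambda t: (1 - t) * x + t * y + np.array([0.0, np.sin(np.pi * t)])

print(curve_length(segment))   # ≈ 5.0 = d(x, y), the infimum
print(curve_length(detour))    # strictly larger than 5
```

Any other piecewise C¹ curve joining x and y comes out longer, in line with d(x, y) being the infimum of L(α) over such curves.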
For differentiable manifolds, it is broadly possible to define the derivatives of curves on the manifold; the derivative of a curve at a point x lies in the tangent space T_xM. The Riemannian exponential map exp_x : T_xM → M is defined by exp_x(v) = α_v(1), where α_v is the unique geodesic with α_v(0) = x and α′_v(0) = v. With the goal of generalizing the concept of a convex set to Hadamard manifolds, in [17], the author proposed the following definition: Definition 1. We will say that a subset S₁ of M is totally convex if S₁ contains every geodesic α_{x,y} of M whose endpoints x and y belong to S₁.
It is well known that a simply connected, complete Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold. There, the geodesic between any two points is unique. In addition to this, the exponential map at each point of the manifold M is a global diffeomorphism: exp_x is defined on the whole tangent space, and its inverse exp_x^{-1} : M → T_xM is well defined and smooth. Example 1. The space S(n, R) of symmetric n × n positive-definite matrices endowed with the Frobenius metric defined by ⟨U, V⟩ = tr(UV) is an example of a Hadamard manifold.
Let η : M × M → TM be a function defined on the product manifold such that η(x, y) ∈ T_yM, ∀x, y ∈ M. Now, to achieve the objectives of this article, we will need adequate concepts of the differential and the second-order differential on Hadamard manifolds: Definition 2. Let M be a Hadamard manifold. We define the differential of the function θ : M → R at the point x̄ along the direction η(x, x̄) as dθ_x̄(η(x, x̄)) = ⟨grad θ(x̄), η(x, x̄)⟩ for all η(x, x̄) ∈ T_x̄M, where grad θ(x̄) is the gradient of the function θ at x̄. Remark 1. The differential of θ at x̄ along η(x, x̄) plays a role analogous to that of the directional derivative in Euclidean space.
If we wish to study second-order optimality conditions, we need to take a step forward. Thus, we propose the following definition of the second-order directional derivative on Hadamard manifolds: Definition 3. Let M be a Hadamard manifold. A mapping θ : M → R is said to have a second-order directional derivative at the point x̄ along the direction ξ(x, x̄) ∈ T_x̄M if and only if the limit

θ″(x̄, ξ(x, x̄)) = lim_{t→0⁺} (2/t²) [θ(exp_x̄(t ξ(x, x̄))) − θ(x̄) − t dθ_x̄(ξ(x, x̄))]

exists and is finite. The function θ is called second-order directionally differentiable on M if the second-order directional derivative exists at each point of M and along any direction ξ(x, x̄) ∈ T_x̄M.
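In the flat case M = R^n (itself a Hadamard manifold, with exp_x̄(v) = x̄ + v), the limit above reduces to the classical second-order directional derivative, which for a C² function equals ⟨Hξ, ξ⟩ with H the Hessian. A minimal numerical sketch, with an illustrative quadratic θ of our own choosing:

```python
import numpy as np

def theta(x):
    # illustrative C^2 function on R^2
    return x[0]**2 + 3.0 * x[0] * x[1]

def grad_theta(x):
    return np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]])

def second_dir_deriv(f, grad_f, xbar, xi, t=1e-4):
    # Definition 3 in the flat case: exp_xbar(t*xi) = xbar + t*xi, and
    # d(theta)_xbar(xi) = <grad theta(xbar), xi>.
    dtheta = grad_f(xbar) @ xi
    return 2.0 / t**2 * (f(xbar + t * xi) - f(xbar) - t * dtheta)

xbar, xi = np.array([1.0, 2.0]), np.array([1.0, 1.0])
H = np.array([[2.0, 3.0], [3.0, 0.0]])                 # Hessian of theta
print(second_dir_deriv(theta, grad_theta, xbar, xi))   # ≈ xi @ H @ xi = 8
```

For this quadratic the difference quotient is exact up to rounding, so the printed value agrees with the Hessian quadratic form.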

Applications to Scalar Optimization Problems
In this section, in the context of scalar optimization problems, we will characterize the functions on Hadamard spaces whose critical points are global optimums.

Unconstrained Case
We start by considering the unconstrained scalar optimization problem (SOP):

(SOP) min θ(x) subject to x ∈ M,

where M is a Hadamard manifold and θ : M → R is a differentiable function.
Let us now introduce the notion of the invexity of a function on Hadamard manifolds, guided by the concept of convexity on linear spaces:

Definition 4. Let M be a Hadamard manifold. A differentiable function θ : M → R is said to be invex (IX) at x̄ ∈ M if there exists η(x, x̄) ∈ T_x̄M such that θ(x) − θ(x̄) ≥ dθ_x̄(η(x, x̄)) for all x ∈ M.

Remark 2. Note that if M = R^n, we recover the classic and well-known invexity definition given by Hanson [2], since the inverse exponential map simplifies to the familiar form exp_x̄^{-1}(x) = x − x̄.

Having introduced the concept of invexity, we can now discuss the existence of critical points. For this purpose, we will adopt the definition given by Ruiz-Garzón et al. [14]:

Definition 5. Let M be a Hadamard manifold and θ : M → R a differentiable function. A point x̄ ∈ M is said to be a critical point (CP) of θ if dθ_x̄(η(x, x̄)) = 0.

Theorem 1. Let M be a Hadamard manifold and θ : M → R a differentiable function. Then θ is invex if and only if every critical point of θ is a global minimum of θ on M.

Remark 3. This result is a generalization to Riemannian manifolds of a similar result achieved by Craven and Glover [18]. Obviously, if a function has no critical points, and hence no global minimums, then it is trivially invex.
Given these definitions, we are in a position to tackle one of the objectives of our work. Thus, we will now propose a generalization of the concept of two-invexity, given by Ivanov [13] for Fréchet differentiable functions in finite-dimensional Euclidean space R^n, to Hadamard manifolds M.

Definition 6. Let M be a Hadamard manifold. A differentiable and second-order directionally differentiable function θ : M → R is said to be two-invex (2-IX) at x̄ ∈ M if there exist η(x, x̄), ξ(x, x̄) ∈ T_x̄M such that the derivative θ″(x̄, ξ(x, x̄)) exists and:

θ(x) − θ(x̄) ≥ dθ_x̄(η(x, x̄)) + (1/2) θ″(x̄, ξ(x, x̄)), ∀x ∈ M.

Remark 4. Note that each convex or invex function is also a two-invex function (take ξ(x, x̄) = 0). This implies that the class of two-invex functions extends the class of invex ones.
Moreover, θ is a convex function, and therefore, it is two-invex.
Furthermore, we can similarly extend the concept of the stationary point:

Definition 7. Let M be a Hadamard manifold. Suppose that the function θ : M → R is differentiable and second-order directionally differentiable at any x̄ ∈ M and along every direction. A feasible point x̄ for the SOP is said to be a two-critical point (2-CP) if there exists some η(x, x̄) ∈ T_x̄M, non-identically zero, such that dθ_x̄(η(x, x̄)) = 0, and if for some direction ξ(x, x̄) ∈ T_x̄M the derivative θ″(x̄, ξ(x, x̄)) exists, then

θ″(x̄, ξ(x, x̄)) ≥ 0. (1)

Thus, we can propose and prove the following theorem, which characterizes two-invex functions on Hadamard manifolds: Theorem 2. Let M be a Hadamard manifold. Let θ : M → R be a differentiable and second-order directionally differentiable function at any x̄ ∈ M along any direction. The function θ is two-invex at every x̄ ∈ M if and only if each 2-CP is a global minimum of θ on M.
Proof. We will argue by reductio ad absurdum. Suppose that θ is two-invex at every x̄ ∈ M and that x̄ is a 2-CP, but not a global minimum. Then there exists a point x ∈ M such that θ(x) < θ(x̄). Since x̄ is a 2-CP, it follows that dθ_x̄(η(x, x̄)) = 0, and for some ξ(x, x̄) ∈ T_x̄M the derivative θ″(x̄, ξ(x, x̄)) exists. By the two-invexity of θ, there exist η(x, x̄), ξ(x, x̄) ∈ T_x̄M such that θ″(x̄, ξ(x, x̄)) exists and:

0 > θ(x) − θ(x̄) ≥ dθ_x̄(η(x, x̄)) + (1/2) θ″(x̄, ξ(x, x̄)) = (1/2) θ″(x̄, ξ(x, x̄)),

so θ″(x̄, ξ(x, x̄)) < 0, which is a contradiction with (1). Now, we will prove the sufficient condition. Suppose each 2-CP is a global minimum; we need to prove that there exist η(x, x̄), ξ(x, x̄) ∈ T_x̄M such that the derivative θ″(x̄, ξ(x, x̄)) exists and:

θ(x) − θ(x̄) ≥ dθ_x̄(η(x, x̄)) + (1/2) θ″(x̄, ξ(x, x̄)). (2)

This is ensured by defining, for example, η(x, x̄) = t grad θ(x̄), where t is an arbitrary positive real, and ξ(x, x̄) = 0; then, (2) holds, and θ is two-invex. Therefore, we finally find that θ is two-invex if and only if the sets of two-critical points and global minimums coincide. This result extends Theorem 2.6, given by Ivanov [13] for finite-dimensional Euclidean space, to Hadamard manifolds.
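The content of Theorem 2 can be illustrated numerically in the flat one-dimensional case M = R, where exp_x̄(tξ) = x̄ + tξ. The quartic and cubic below are illustrative choices of ours: both have a 2-CP at the origin, but only the quartic attains its global minimum there, so by Theorem 2 the cubic cannot be two-invex.

```python
def d2(f, df, xbar, xi, t=1e-3):
    # Definition 3 on M = R, where exp_xbar(t*xi) = xbar + t*xi
    return 2.0 / t**2 * (f(xbar + t * xi) - f(xbar) - t * df(xbar) * xi)

quartic, d_quartic = (lambda x: x**4), (lambda x: 4.0 * x**3)
cubic, d_cubic = (lambda x: x**3), (lambda x: 3.0 * x**2)

# xbar = 0 is a 2-CP of both functions: the differential vanishes and the
# second-order directional derivative is non-negative (here ~0).
for f, df in [(quartic, d_quartic), (cubic, d_cubic)]:
    print(df(0.0), d2(f, df, 0.0, 1.0))

# ...but 0 is a global minimum only for the quartic; cubic(-1) < cubic(0),
# so by Theorem 2 the cubic is not two-invex.
print(cubic(-1.0) < cubic(0.0))   # True
```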
We now consider an example of a possible application of the previous characterization on Hadamard manifolds.

Example 3.
Let us consider the following unconstrained scalar optimization problem, whose objective function corresponds to the Ricker wavelet, usually referred to as the Mexican hat wavelet (see Figure 1). The Ricker wavelet is a function with many applications within the field of physics, such as the modelling of seismic data. Furthermore, this wavelet can be understood as the cross-section of a Higgs-Anderson potential, a potential of great relevance in explaining the inner workings of the Higgs boson and other modern topics of condensed matter physics. Note that the tails of the original Higgs-Anderson potential diverge to infinity, while in our example, they converge to a constant value. Moreover, let us further consider the set Ω = {p = (p₁, p₂) ∈ R² : p₂ > 0}. Finally, let G be the Riemannian metric tensor of our Hadamard space, here chosen to be the 2 × 2 matrix G(p) = (g_ij(p)) with g₁₁(p) = g₂₂(p) = 1/p₂² and g₁₂(p) = g₂₁(p) = 0.
Endowing Ω with this Riemannian metric, we can define the inner product of two tangent vectors u and v at p as ⟨u, v⟩_p = ⟨G(p)u, v⟩. Furthermore, the gradient is then given by grad θ(p) = G(p)^{-1}∇θ(p), where ∇θ denotes the Euclidean gradient. Thus, we obtain the complete Hadamard manifold H², the upper half-plane model of hyperbolic space.
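The Riemannian gradient formula grad θ(p) = G(p)^{-1}∇θ(p) is easy to check numerically on this metric, where G(p)^{-1} = p₂² I. The function θ below is a hypothetical illustration of our own, not the example's objective:

```python
import numpy as np

def G(p):
    # metric tensor of the upper half-plane H^2: g11 = g22 = 1/p2^2
    return np.diag([1.0 / p[1]**2, 1.0 / p[1]**2])

def riem_grad(euclidean_grad, p):
    # grad theta(p) = G(p)^{-1} * Euclidean gradient; here G^{-1} = p2^2 * I
    return np.linalg.solve(G(p), euclidean_grad)

# hypothetical smooth function theta(p) = p1^2 + p2 on Omega
p = np.array([1.0, 2.0])
egrad = np.array([2.0 * p[0], 1.0])   # its Euclidean gradient at p

g = riem_grad(egrad, p)
print(g)                              # p2^2 * egrad = [8. 4.]
# consistency: <grad theta, u>_p = <G(p) grad theta, u> = <egrad, u>
u = np.array([0.3, -0.7])
print(G(p) @ g @ u, egrad @ u)
```

The last line confirms that the Riemannian gradient reproduces the Euclidean directional derivative through the metric inner product.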
Thus, we can prove that the function θ is two-invex at p̄, but not invex, by exhibiting suitable directions, for instance η(p, p̄) = ξ(p, p̄) = 3p − p̄ and η(p, p̄) = (0, 1), for which the two-invexity condition of Definition 6 is satisfied. On the one hand, p̄ = (0, 1.87) is a critical point and a global maximum. On the other hand, p̄ = (−√3, 0.61) and p̄ = (√3, 0.61) are two 2-CPs and global minimums. By the previous Theorem 2, θ is a second-order invex (2-IX) function, since the sets of two-stationary points (2-CPs) and global minimums coincide. However, the objective function is not invex (IX), because the set of CPs does not coincide with the set of global minimums. Therefore, from Examples 2 and 3, the class of two-invex functions strictly contains the class of invex functions.

In generalized convexity theory, it is well known that pseudoconvex functions are a generalization of convex functions in Euclidean spaces, and they ensure that all critical points are optimal. Now, in the same way, we will concern ourselves with extending the concept of the two-invex function to the two-pseudoinvex function on Hadamard manifolds.
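The stationary points quoted above can be verified for the one-dimensional Ricker cross-section ψ(t) = (1 − t²)e^{−t²/2} (the standard unnormalized Mexican hat; the example's full objective on Ω is not reproduced here): its derivative vanishes exactly at 0 and ±√3.

```python
import numpy as np

def ricker(t):
    # standard (unnormalized) Ricker / Mexican hat wavelet
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def d_ricker(t):
    # the derivative factors as e^{-t^2/2} * t * (t^2 - 3)
    return np.exp(-t**2 / 2.0) * t * (t**2 - 3.0)

for c in (0.0, -np.sqrt(3.0), np.sqrt(3.0)):
    print(c, d_ricker(c))          # derivative vanishes at all three points

print(ricker(0.0))                 # local maximum value: 1.0
print(ricker(np.sqrt(3.0)))        # minimum value: -2 e^{-3/2} ≈ -0.446
```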

Definition 8.
Let M be a Hadamard manifold, and let θ : M → R be a differentiable and second-order directionally differentiable function at any x̄ ∈ M and along every direction. The function θ is said to be two-pseudoinvex (2-PIX) at x̄ ∈ M if there exist η(x, x̄), ξ(x, x̄) ∈ T_x̄M such that:

θ(x) < θ(x̄) ⇒ dθ_x̄(η(x, x̄)) + (1/2) θ″(x̄, ξ(x, x̄)) < 0.

We will now analyse the relationship between 2-IX and 2-PIX functions. Theorem 3. Let M be a Hadamard manifold. Let θ : M → R be differentiable and second-order directionally differentiable at every x̄ ∈ M and along every direction, such that for x ∈ M with θ(x) < θ(x̄), dθ_x̄(η(x, x̄)) = 0. If θ is a two-pseudoinvex function at x̄ ∈ M, then θ is also two-invex at x̄ ∈ M.
Proof. In order to prove this, let x̄, x ∈ M be two points such that θ(x) < θ(x̄).
We can even go one step further very easily: Theorem 4. Let M be a Hadamard manifold. Let θ : M → R be differentiable and second-order directionally differentiable at every x̄ ∈ M and along every direction, such that for x ∈ M with θ(x) < θ(x̄), dθ_x̄(η(x, x̄)) = 0. The function θ is two-pseudoinvex at x̄ ∈ M if and only if θ is two-invex at x̄ ∈ M.
Proof. On the one hand, from Theorem 3, two-pseudoinvexity implies two-invexity. On the other hand, it is well known that two-invexity implies two-pseudoinvexity directly from the definitions.
Combining Theorems 2 and 4, we get the following conclusion: Corollary 1. Let M be a Hadamard manifold. Let θ : M → R be differentiable and second-order directionally differentiable at every x̄ ∈ M and along every direction, such that for x ∈ M with θ(x) < θ(x̄), dθ_x̄(η(x, x̄)) = 0. The function θ is two-pseudoinvex at x̄ ∈ M if and only if each 2-CP is a global minimum of θ on M.
In summary, we have that the classes of two-pseudoinvex and two-invex functions coincide, and both are characterized by the coincidence of 2-CPs and global minimums. The previous results extend the results obtained by Ivanov, Theorems 2.12 and 2.14 in [13], from an environment of convexity to a more general environment of invexity. Furthermore, we generalize these notions from Euclidean space to Hadamard manifolds. Therefore, two-pseudoinvexity coincides with two-invexity, in the same way that pseudoinvexity coincides with invexity, as demonstrated by Craven and Glover in [18] in Euclidean space.

Constrained Case
We consider the constrained scalar optimization problem (CSOP) of the form:

(CSOP) min θ(x) subject to g_j(x) ≤ 0, j = 1, 2, . . . , m, x ∈ M,

where M is a Hadamard manifold and θ, g_j : M → R, j = 1, 2, . . . , m, are differentiable functions. Let us consider the feasible set S₁ = {x ∈ M | g_j(x) ≤ 0, j = 1, 2, . . . , m}, and let I(x̄) denote the set of active constraints at x̄. Equality constraints h_j(x) = 0 can be treated as pairs of inequality constraints h_j(x) ≤ 0 and −h_j(x) ≤ 0.
Similarly to the unconstrained case, our aim is to find the kinds of functions on Hadamard spaces for which the Karush-Kuhn-Tucker points and the optimums coincide. For this purpose, let us consider quasiinvex functions. These functions are a generalization of quasiconvex functions, a type of function that shares most of its properties with convex and pseudoconvex functions. However, as opposed to pseudoconvex functions, the critical points of quasiconvex functions may fail to be optimal.

Definition 9. Let M be a Hadamard manifold and θ : M → R a differentiable function. Then, θ is said to be quasiinvex at x̄ ∈ M if there exists η(x, x̄) ∈ T_x̄M such that θ(x) ≤ θ(x̄) ⇒ dθ_x̄(η(x, x̄)) ≤ 0.

We employ the following definition of critical direction: η(x, x̄) ∈ T_x̄M is a critical direction at x̄ if dθ_x̄(η(x, x̄)) ≤ 0 and dg_{j,x̄}(η(x, x̄)) ≤ 0 ∀j ∈ I(x̄).
In the constrained case, the concept of the critical point we explored in the unconstrained case is replaced by the following one. Definition 11. Let M be a Hadamard manifold. Suppose that the functions θ, g_j : S₁ ⊆ M → R, j = 1, 2, . . . , m, are differentiable and second-order directionally differentiable at any x̄ ∈ S₁ in every critical direction η(x, x̄) ∈ T_x̄M. A feasible point x̄ for the constrained scalar optimization problem (CSOP) is said to be a two-Karush-Kuhn-Tucker stationary point (2-KKT point) if, for every critical direction η(x, x̄) non-identically zero, there exist non-negative multipliers λ, µ₁, . . . , µ_m with (λ, µ) ≠ (0, 0) such that:

dL_x̄(η(x, x̄)) = 0, µ_j g_j(x̄) = 0, j = 1, 2, . . . , m,

dθ_x̄(η(x, x̄)) = 0, L″(x̄, η(x, x̄)) ≥ 0, (7)

where L = λθ + ∑_{j=1}^m µ_j g_j is the Lagrange function.
Note that the last two conditions have been added to the classic KKT conditions. Now, we need to introduce some new concepts that allow us to relate stationary and optimal points in the constrained scalar optimization problem (CSOP), since the invexity of the objective function by itself does not guarantee the coincidence of stationary and optimal points. Thus, our intention is to extend the class of KT-invex functions created by Martin [5] to generalized invexity on Hadamard manifolds. In order to do so, let us set the following definition:

Definition 12. The problem CSOP is said to be 2-KKT-pseudoinvex if, for all feasible points x, x̄ ∈ S₁ with θ(x) < θ(x̄), there exists a critical direction η(x, x̄) ∈ T_x̄M, non-identically zero, such that:

θ″(x̄, η(x, x̄)) < 0 (10)

dg_{j,x̄}(η(x, x̄)) = 0, ∀j ∈ I(x̄) ⇒ g_j″(x̄, η(x, x̄)) ≤ 0 (11)

where I(x̄) = {j = 1, . . . , m : g_j(x̄) = 0}.
We can now obtain the sufficient condition for global optimality: Theorem 5. Let M be a Hadamard manifold. Suppose that the functions θ, g_j : S₁ ⊆ M → R, j = 1, 2, . . . , m, are differentiable and second-order directionally differentiable at any x̄ ∈ S₁ in every critical direction. Furthermore, assume that the CSOP is a 2-KKT-pseudoinvex problem. Then, each 2-KKT point is a global minimum.
Proof. Suppose, on the contrary, that x̄ is a 2-KKT point but not a global minimum, so that there exists a feasible point x with θ(x) < θ(x̄). From the 2-KKT-pseudoinvexity of the CSOP, we have that θ″(x̄; η(x, x̄)) < 0 and g_j″(x̄; η(x, x̄)) ≤ 0 for all j ∈ I(x̄), with µ_j ≥ 0. Thus, on the one hand, we get L″(x̄; η(x, x̄)) < 0, and on the other hand, Expression (7) gives L″(x̄; η(x, x̄)) ≥ 0, a contradiction. Now, we will obtain a necessary condition for optimality. Theorem 6. Let M be a Hadamard manifold. Suppose that the functions θ, g_j : S₁ ⊆ M → R, j = 1, 2, . . . , m, are differentiable and second-order directionally differentiable at any x̄ ∈ S₁ in every critical direction η(x, x̄) ∈ T_x̄M, and that θ, g_j are quasiinvex and differentiable at x̄ ∈ S₁ with respect to an η non-identically zero. If each 2-KKT point is a global minimum, then the problem CSOP is 2-KKT-pseudoinvex.
However, this follows directly from the assumption x ∈ S₁, j ∈ I(x̄), and the quasiinvexity of g_j at x̄ ∈ S₁. Therefore, Expression (11) holds.
Hence, we arrive at the most important outcome of this section, which we present in the following corollary: Corollary 2. Let M be a Hadamard manifold. Suppose that the functions θ, g_j : S₁ ⊆ M → R, j = 1, 2, . . . , m, are differentiable and second-order directionally differentiable at any x̄ ∈ S₁ in every critical direction η(x, x̄) ∈ T_x̄M, and that θ, g_j are quasiinvex and differentiable at x̄ ∈ S₁ with respect to an η non-identically zero. Then, each 2-KKT point is a global minimum if and only if the problem CSOP is 2-KKT-pseudoinvex.
Therefore, we have that, under a quasiinvexity environment, the 2-KKT points of the CSOP are exactly its global minimums precisely when the problem is 2-KKT-pseudoinvex. This result generalizes Ivanov's Theorems 3.1 and 3.2 [12] to Hadamard manifolds and to invex environments. Moreover, it completes, with second-order conditions, the first-order results obtained by Ruiz-Garzón et al. [14] for scalar constrained optimization problems.
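The first- and second-order conditions of Definition 11 can be checked numerically for a small Euclidean instance (R² being a flat Hadamard manifold); the toy problem below is an illustrative choice of ours, not taken from the text.

```python
import numpy as np

theta = lambda x: x[0]**2 + x[1]**2        # objective
g = lambda x: 1.0 - x[0]                   # single constraint g(x) <= 0
grad_theta = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
grad_g = lambda x: np.array([-1.0, 0.0])

xbar = np.array([1.0, 0.0])                # candidate point, g active there
lam, mu = 1.0, 2.0                         # multipliers, (lambda, mu) != 0

# classic KKT part: grad L(xbar) = 0 and complementary slackness
gradL = lam * grad_theta(xbar) + mu * grad_g(xbar)
print(gradL, mu * g(xbar))                 # [0. 0.] 0.0

# second-order part along the critical direction eta = (0, 1), via the
# difference quotient of Definition 3 applied to the Lagrange function
L = lambda x: lam * theta(x) + mu * g(x)
eta, t = np.array([0.0, 1.0]), 1e-4
d2L = 2.0 / t**2 * (L(xbar + t * eta) - L(xbar) - t * (gradL @ eta))
print(d2L)                                 # ≈ 2 >= 0
```

Here x̄ = (1, 0) passes both the first-order and the second-order checks, consistent with it being the global minimizer of this toy problem.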
We now illustrate the previous Corollary 2 with an example: Example 4. Let us recover the set Ω and the Riemannian metric from the previous Example 3. Now, let the constrained scalar optimization problem (CSOP) be defined as: