Solutions of Optimization Problems on Hadamard Manifolds with Lipschitz Functions

The aims of this paper are twofold. First, we show, for the first time, which types of nonsmooth functions are characterized by the property that every vector critical point is an efficient or weakly efficient solution of vector optimization problems, in both constrained and unconstrained settings on Hadamard manifolds. This requires extending several concepts, such as Karush–Kuhn–Tucker vector critical points and generalized invex functions, to Hadamard manifolds. The relationships between these notions are clarified through numerous explanatory examples. Second, we present an economic application proving that Nash critical points and Nash equilibrium points coincide in the case of invex payoff functions. This is done on Hadamard manifolds, a particular case of noncompact Riemannian symmetric spaces.


Introduction
Firstly, our area of interest is Hadamard manifolds. This paper is concerned with the pursuit of solutions of optimization problems defined on Hadamard manifolds through critical points, where the objective function may be nonsmooth. Optimality conditions are obtained under weaker assumptions than those already existing in the literature.
The idea of convex sets in a linear space is based upon the possibility of connecting any two points of the space using line segments. In nonlinear spaces such as Hadamard manifolds, linear segments are replaced by geodesic arcs. The idea behind this is the same as the one that inspired the 19th century geometricians who created non-Euclidean geometry.
The use of Hadamard manifolds has the following advantages: (a) Nonconvex constrained problems in R^n are transformed into convex ones on Hadamard manifolds (see [1]). (b) For example, the set X = {(cos t, sin t) : t ∈ [π/4, 3π/4]} ⊂ R^2 is not convex in the usual sense, but X is geodesic convex in the Poincaré upper half-plane model (H^2, g_H), as it is the image of a geodesic segment (see [2]).
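The geodesic-convexity claim for X can be checked numerically with the closed-form distance on the upper half-plane model, d(z_1, z_2) = arccosh(1 + |z_1 − z_2|^2 / (2 y_1 y_2)) (a standard hyperbolic-geometry formula, assumed here rather than taken from the paper): along a geodesic, distances between consecutive points are additive.

```python
import math

def hyp_dist(p, q):
    """Hyperbolic distance on the Poincare upper half-plane model H^2."""
    (x1, y1), (x2, y2) = p, q
    num = (x1 - x2) ** 2 + (y1 - y2) ** 2
    return math.acosh(1.0 + num / (2.0 * y1 * y2))

# Three points of X = {(cos t, sin t) : t in [pi/4, 3pi/4]}
a = (math.cos(math.pi / 4), math.sin(math.pi / 4))
b = (math.cos(math.pi / 2), math.sin(math.pi / 2))
c = (math.cos(3 * math.pi / 4), math.sin(3 * math.pi / 4))

# b lies between a and c on the arc; on a geodesic, distances are additive,
# so the unit semicircle is (part of) a geodesic of H^2, although it is not
# a convex set of R^2 in the usual sense.
assert abs(hyp_dist(a, b) + hyp_dist(b, c) - hyp_dist(a, c)) < 1e-9
```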
Secondly, in this paper, we consider the concept of invexity because of the great computational advantages it offers. The optimality conditions that invexity involves are essential in obtaining optimal points through the search for critical points with practical numerical methods. The invexity concept, introduced by Hanson [3], is an extension of differentiable convexity. A scalar function is invex if and only if every critical point is a global minimum solution.
Pseudoinvex functions emerged from the work of Ben-Israel and Mond [4]; although in the scalar case these functions coincide with the invex ones, in the vector case they are different (see Ruiz-Garzón et al. [5], Example 3.2).
Thirdly, the nonsmooth optimization formulation has several clear advantages over its smooth counterpart, the main one being that it produces exact solutions to optimization problems, while smoothing variants only produce approximate solutions (see Li et al. [6]). The importance of generalizing optimization methods to locally Lipschitz functions lies in their applications, for example, in controlled thermonuclear fusion research [7], engineering [8], stereo vision processing [9], and machine learning and computer vision [10,11]. In the field of medicine, symmetric Riemannian manifolds have been used in the analysis of medical images of tumor growth, as shown by Fletcher et al. [12]. The space of diffusion tensors required in these cases is a curved manifold known as a Riemannian symmetric space. In Bejenaru and Udriste [13], the authors extended multivariate optimal control techniques to Riemannian optimization problems in order to derive a Hamiltonian approach.
Finally, for this paper, special mention should be made of studies on Nash-Stampacchia equilibria. Kristály [2,14] studied the existence and relationship of Nash's critical and equilibrium points using strategy sets based on geodesic convex subsets of Hadamard manifolds and convex payoff functions, taking advantage of the geometrical features of these spaces. Equilibrium theory plays a very important role within the game theory created by von Neumann and Morgenstern [15] in 1944 and the development of the "Prisoner's Dilemma" by Tucker and Nash in 1950 [16].
The state of the art is as follows. The initial idea for this article came from a paper written by Kristály [2] in which he relates Nash's critical points and equilibrium points under conditions of convexity.
Hosseini and Pouryayevali [17] presented a subdifferential calculus for locally Lipschitz functions to prove Lebourg's mean value theorem in Riemannian manifolds. Later, the same authors [18] obtained necessary optimality conditions for an optimization problem on complete Riemannian manifolds, but they did not obtain characterizations. Kiliçman and Saleh [19] presented a Karush-Kuhn-Tucker sufficient optimality condition as well as a new Hermite-Hadamard-type integral inequality using differentiable sub-b-s-preinvex functions.
Other authors, such as Papa Quiroz and Oliveira [20], have used the concept of subdifferentials on Hadamard manifolds to prove the global convergence of their method of solving optimization problems to the critical point of a function.
Bento and Cruz [21] developed a subgradient-type method for solving non-smooth vectorial optimization problems. Their method converges to a Pareto optimal point through a vector critical point on a manifold with nonnegative sectional curvature.
In 2012, Colao et al. [1] proved the existence of a Nash equilibrium point on Hadamard manifolds under the condition of convexity of the payoff functions.
Chen et al. [22] discussed how to obtain efficient solutions involving generalized invex functions and Karush-Kuhn-Tucker (KKT) sufficient conditions on Riemannian manifolds.
In 2014, Boumal et al. [23] authored a Matlab toolbox for optimization on manifolds (www.manopt.org). An extension of optimization methods for solving minimization problems on Hadamard manifolds when the objective function is Lipschitz was proposed by Grohs and Hosseini [24].
In 2016, Gutiérrez et al. [25] provided a characterization of pseudoinvexity through the vector critical point and found efficient solutions to multiobjective optimization problems using Lipschitz functions on linear spaces. Two years later, Ruiz-Garzón et al. [26] extended these properties on Riemannian manifolds in the smooth case. In 2019, Ruiz-Garzón et al. [27] showed the existence of KKT optimality conditions for weakly efficient Pareto solutions for vector equilibrium problems, with particular focus on the Nash equilibrium problem, but only in the differential case.
Contributions. The aim of our work is to characterize the types of nonsmooth functions for which the critical points are solutions to constrained and unconstrained optimization problems on Hadamard manifolds and to extend the results obtained by Gutiérrez et al. [25] and Ruiz-Garzón et al. [26] on linear spaces.
For this aim, in Section 2, we introduce a number of different generalized invexity concepts (pseudoinvexity and strong pseudoinvexity, respectively) and consider the so-called generalized Jacobian, a natural subdifferential associated with a locally Lipschitz function. We illustrate these new definitions of functions with examples on Hadamard manifolds.
In Section 3, the concept of pseudoinvexity allows us to determine efficient and weakly efficient Pareto solutions of an unconstrained vector optimization problem through an adequate nonsmooth vector critical point concept. As a particular case, we show that, in the scalar case and on Hadamard manifolds, the invexity and pseudoinvexity concepts coincide.
In Section 4, the vector critical point and pseudoinvexity concepts are extended from unconstrained to constrained vector optimization problems. We analyze the necessary characteristics of the objective and constraint functions of a vector optimization problem so that the KKT vector critical point is an efficient and weakly efficient solution on Hadamard manifolds in the nonsmooth case.
In Section 5, we prove the equivalence between Nash critical and equilibrium points with invex payoff functions. Finally, Section 6 presents the conclusions to this study.

Preliminaries
Let M be a Riemannian manifold endowed with a Riemannian metric g_x on the tangent space T_x M. The corresponding norm is denoted by ‖·‖_x, and the length of a piecewise C^1 curve α : [a, b] → M is defined by
L(α) = ∫_a^b ‖α′(t)‖_{α(t)} dt.
Let d be the distance inducing the original topology on M, defined as
d(x, y) = inf{L(α) : α is a piecewise C^1 curve joining x and y}, ∀x, y ∈ M.
It is known that any path α joining x and y in M such that L(α) = d(x, y) is a geodesic, called a minimal geodesic. If M is complete, then any two points in M can be joined by a minimal geodesic.
The derivative of a curve at a point x on the manifold lies in a vector space T_x M. We denote by T_x M the n-dimensional tangent space of M at x, and by TM = ∪_{x∈M} T_x M the tangent bundle of M. For x ∈ M, the exponential map exp_x : T_x M → M is defined by exp_x(v) = α_v(1), where α_v is the geodesic with α_v(0) = x and α_v′(0) = v [22]. It is easy to see that exp_x(tv) = α_v(t) for t ∈ [0, 1]. Let η : M × M → TM be a map defined on the product manifold such that η(x, y) ∈ T_y M for every x, y ∈ M. Of all the classes of Riemannian manifolds, this work is dedicated to Hadamard manifolds.

Definition 1.
Recall that a simply connected complete Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold.
Let M be a Hadamard manifold. Then, exp_x : T_x M → M is a diffeomorphism and, for any two points x, y ∈ M, there exists a unique minimal geodesic joining x to y, namely α_{x,y}(t) = exp_x(t exp_x^{-1} y), t ∈ [0, 1].
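On the manifold of symmetric positive-definite matrices under the affine-invariant metric (a standard Hadamard-manifold example; the closed forms below are classical facts, not taken from this paper), the exponential map and its inverse are explicit, so the unique minimal geodesic α_{x,y}(t) = exp_x(t exp_x^{-1} y) of this definition can be sketched numerically:

```python
import numpy as np

def _sym_fun(A, f):
    """Apply f to a symmetric matrix through its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.T

def exp_x(X, V):
    """Exponential map at X on the SPD manifold (affine-invariant metric)."""
    Xh = _sym_fun(X, np.sqrt)
    Xmh = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
    return Xh @ _sym_fun(Xmh @ V @ Xmh, np.exp) @ Xh

def log_x(X, Y):
    """Inverse exponential map exp_X^{-1}(Y)."""
    Xh = _sym_fun(X, np.sqrt)
    Xmh = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
    return Xh @ _sym_fun(Xmh @ Y @ Xmh, np.log) @ Xh

# alpha(t) = exp_X(t exp_X^{-1} Y) is the unique minimal geodesic from X to Y
X = np.array([[2.0, 0.5], [0.5, 1.0]])
Y = np.array([[1.0, -0.2], [-0.2, 3.0]])
alpha = lambda t: exp_x(X, t * log_x(X, Y))
assert np.allclose(alpha(0.0), X) and np.allclose(alpha(1.0), Y)
```

The endpoints check confirms that the curve joins X to Y; midpoints alpha(t) stay symmetric positive-definite for all t.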
We now define a generalization of the concepts of convex sets and convex functions in R^n.
Definition 2. [28] A subset X of M is said to be geodesic convex if, for any two points x, y ∈ X, the geodesic of M with endpoints x and y is contained in X; that is, if α : [0, 1] → M is a geodesic such that α(0) = x and α(1) = y, then α(t) ∈ X for all t ∈ [0, 1]. Furthermore, on a Hadamard manifold, X is geodesic convex if and only if exp_y(t exp_y^{-1} x) ∈ X for all x, y ∈ X and all t ∈ [0, 1].
Definition 3. [28] Let M be a Hadamard manifold and X ⊆ M be geodesic convex. A function θ : X → R is said to be convex if, for every x, y ∈ X and every geodesic α : [0, 1] → M with α(0) = x and α(1) = y,
θ(α(t)) ≤ (1 − t)θ(x) + tθ(y), ∀t ∈ [0, 1].
Let us now recall the following concepts in the nonsmooth case.
Definition 4. A function θ : M → R is said to be Lipschitz near x ∈ M if there exist constants K > 0 and ε > 0 such that |θ(y) − θ(z)| ≤ K d(y, z) for all y, z in the ball B(x, ε). A function θ is said to be locally Lipschitz on M if θ is Lipschitz near x for every x ∈ M.
Example 1. The space of symmetric n × n positive-definite matrices S(n, R), endowed with the Frobenius metric defined by ⟨U, V⟩ = tr(UV), is an example of a Hadamard manifold. If λ_1, . . . , λ_n denote the n real eigenvalues of A ∈ S(n, R), then each eigenvalue map λ_k : S(n, R) → R is a locally Lipschitz function.
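The local Lipschitz property of the eigenvalue maps can be checked numerically through Weyl's perturbation inequality, |λ_k(A) − λ_k(B)| ≤ ‖A − B‖_F, a classical bound assumed here rather than proved in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

for _ in range(100):
    A, B = random_spd(4), random_spd(4)
    lam_A = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    lam_B = np.linalg.eigvalsh(B)
    gap = np.max(np.abs(lam_A - lam_B))
    # Weyl: each map lambda_k is 1-Lipschitz w.r.t. the Frobenius norm
    assert gap <= np.linalg.norm(A - B, "fro") + 1e-10
```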
With Lipschitz functions, generalized gradients or subdifferentials replace the classical derivative.

Definition 5.
[24] Suppose θ : M → R is a locally Lipschitz function on a Hadamard manifold M. Given another point y ∈ M and w ∈ T_y M, consider α_{y,w}(t) = exp_y(tw), the geodesic passing through y with derivative w. Then, the Clarke generalized directional derivative of θ at x ∈ M in the direction v ∈ T_x M, denoted by θ^0(x; v), is defined as
θ^0(x; v) = limsup_{y→x, t↓0} [θ(exp_y(tw)) − θ(y)] / t,
where w ∈ T_y M denotes the parallel transport of v along the minimal geodesic joining x and y.
Definition 6. We define the subdifferential of θ at x, denoted by ∂θ(x), as the subset of T_x M with support function given by θ^0(x; ·), i.e., for every v ∈ T_x M,
θ^0(x; v) = max{⟨ξ, v⟩_x : ξ ∈ ∂θ(x)}.
It can be proved that the generalized Jacobian satisfies
∂θ(x) = conv{ lim_{i→∞} dθ(x_i) : x_i → x, x_i ∈ X },
where X is a dense subset of M on which θ is differentiable and conv(·) denotes the convex hull. We briefly examine some particular cases.
(a) When θ is a locally Lipschitz convex function, θ^0(x; v) coincides with the classical directional derivative θ′(x; v), and ∂θ(x) coincides with the subdifferential of convex analysis. However, for a vector function f = (f_1, . . . , f_p) : M → R^p, the generalized Jacobian ∂f(x) is contained in, and in general different from, the Cartesian product of the Clarke subdifferentials of the components: ∂f(x) ⊆ ∂f_1(x) × · · · × ∂f_p(x).
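On M = R (a flat Hadamard manifold, where exp_y(tw) = y + tw), the function θ(x) = |x| has ∂θ(0) = [−1, 1] and θ^0(0; v) = |v|. The limsup quotient of Definition 5 can be estimated numerically; the sampling scheme and tolerances below are ad hoc choices for illustration:

```python
import numpy as np

theta = abs  # theta(x) = |x| is locally Lipschitz, nondifferentiable at 0

def clarke_dd(theta, x, v, eps=1e-6, samples=2000):
    """Numerical estimate of theta^0(x; v): limsup over y -> x, t -> 0+."""
    rng = np.random.default_rng(1)
    ys = x + eps * rng.uniform(-1, 1, samples)
    ts = eps * rng.uniform(1e-3, 1, samples)
    return max((theta(y + t * v) - theta(y)) / t for y, t in zip(ys, ts))

# At x = 0: theta^0(0; v) = |v|, the support function of ∂theta(0) = [-1, 1]
for v in (1.0, -1.0, 2.5):
    assert abs(clarke_dd(theta, 0.0, v) - abs(v)) < 1e-2
```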
We denote by R^p_+ the nonnegative orthant of R^p, and the order in R^p is defined in the usual way (see [29]): for x, y ∈ R^p,
y ≦ x ⟺ x − y ∈ R^p_+;  y ≤ x ⟺ x − y ∈ R^p_+ \ {0};  y < x ⟺ x − y ∈ int R^p_+.
The notions of generalized invexity introduced by Osuna-Gómez et al. [30] for differentiable functions, and later by Gutiérrez et al. [31] for locally Lipschitz functions using the generalized Jacobian in a finite-dimensional context, can be extended to Hadamard manifolds as follows.
Definition 7. Let M be a Hadamard manifold, X ⊆ M be an open geodesic convex set, f : X → R^p be a locally Lipschitz function, and η : M × M → TM. The function f is said to be:
(a) invex (IX) at x̄ ∈ X with respect to η if, for every x ∈ X and every A ∈ ∂f(x̄), f(x) − f(x̄) − Aη(x, x̄) ∈ R^p_+;
(b) pseudoinvex (PIX) at x̄ ∈ X with respect to η if, for every x ∈ X and every A ∈ ∂f(x̄), f(x) − f(x̄) ∈ −int R^p_+ implies Aη(x, x̄) ∈ −int R^p_+;
(c) strong pseudoinvex (SGPIX) at x̄ ∈ X with respect to η if, for every x ∈ X and every A ∈ ∂f(x̄), f(x) − f(x̄) ∈ −R^p_+ \ {0} implies Aη(x, x̄) ∈ −int R^p_+.
Definition 8. The function f is said to be invex (resp. pseudoinvex, strong pseudoinvex) with respect to η on X if, for every x̄ ∈ X, f is invex (resp. pseudoinvex, strong pseudoinvex) at x̄ with respect to η on X.
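In the smooth scalar case (p = 1) on the flat Hadamard manifold M = R, invexity reduces to θ(x) − θ(x̄) ≥ θ′(x̄)η(x, x̄). A minimal numerical sketch with a hypothetical nonconvex but invex function (its only critical point, x = 0, is the global minimum, so an admissible kernel η exists):

```python
import math

# theta is nonconvex but invex: its only critical point (x = 0) is the
# global minimum, so the invexity inequality can be satisfied.
theta = lambda x: 1.0 - math.exp(-x * x)
dtheta = lambda x: 2.0 * x * math.exp(-x * x)

def eta(x, xbar):
    """One admissible kernel: makes the invexity inequality an equality."""
    d = dtheta(xbar)
    return (theta(x) - theta(xbar)) / d if d != 0.0 else 0.0

grid = [i / 10.0 for i in range(-30, 31)]
for x in grid:
    for xbar in grid:
        # invexity: theta(x) - theta(xbar) >= dtheta(xbar) * eta(x, xbar)
        assert theta(x) - theta(xbar) >= dtheta(xbar) * eta(x, xbar) - 1e-12
```

At a critical point x̄ = 0 the right-hand side vanishes, so the inequality simply asserts that θ(x̄) is a global minimum.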
The following examples illustrate the above definitions and relations on Hadamard manifolds.

Example 2.
Let Ω = {p = (p_1, p_2) ∈ R^2 : p_2 > 0} and let G be the 2 × 2 matrix G(p) = (g_ij(p)) with g_11(p) = g_22(p) = 1/p_2^2 and g_12(p) = g_21(p) = 0. Endowing Ω with the Riemannian metric ⟨u, v⟩ = ⟨G(p)u, v⟩, we obtain a complete Riemannian manifold H^2, namely, the upper half-plane model of hyperbolic space.

Example 4.
Let Ω be the upper half-plane model of hyperbolic space of Example 2, endowed with the Riemannian metric ⟨u, v⟩ = ⟨G(p)u, v⟩, and let f = (f_1, f_2) : Ω → R^2 be a function with f_1(p_1, p_2) = p_1. We are going to prove that f is a pseudoinvex function but not strong pseudoinvex or invex. The function f is pseudoinvex with respect to every η(p, p̄), since the premise f(p) − f(p̄) ∈ −int R^2_+ would require f to be nondecreasing, but f_2 is nonincreasing, so this condition is never satisfied. However, f is not strong pseudoinvex on Ω with respect to any η(p, p̄) = (v_1, v_2), because we can choose p = (0, 1) and p̄ = (1, 1), for which the defining implication fails. In the same manner, f is not invex on Ω, because if we choose p = (0, 1) and p̄ = (1, 1), Expression (1) implies that −1 ≥ v_1 and 0 ≥ av_1, but for a = −1 these are contradictory. In summary, it is well known that invexity and strong pseudoinvexity imply pseudoinvexity (see [31]), but we have found that pseudoinvexity implies neither invexity nor strong pseudoinvexity:

IX ⇒ PIX ⇐ SGPIX
We now have all the tools required to discuss critical points and solutions of vector optimization problems in the next section.

Relations between Solutions of Vector Optimization Problems and Vector Critical Points on Hadamard Manifolds
The objective of this section is to check whether nonsmooth optimality conditions obtained in linear spaces can be extended to Hadamard manifolds.
In Ruiz-Garzón et al. [26], we studied the role of invexity in the scalar case on Riemannian manifolds in the differentiable scenario, but not that of pseudoinvexity. In this section, we study the role of pseudoinvexity in both the scalar and vector cases on Hadamard manifolds for unconstrained vector optimization problems (VOPs) with nondifferentiable functions, and we examine when vector critical points coincide with efficient and weakly efficient points.
In this section, we consider the unconstrained multiobjective programming problem (VOP) defined as:
(VOP): Min f(x) = (f_1(x), . . . , f_p(x)), subject to x ∈ X ⊆ M,
where M is a Hadamard manifold, X is an open geodesic convex subset of M, and f : X ⊆ M → R^p is a locally Lipschitz function. We now study some relations between solutions of (VOP) and vector critical points. We will start by defining the concept of the vector critical point:
Definition 9. A feasible point x̄ ∈ X is said to be a vector critical point (VCP) with respect to η if there exist λ ∈ R^p_+ \ {0} and A ∈ ∂f(x̄) such that, for every x ∈ X ⊆ M with η(x, x̄) ∈ T_x̄ M not identically zero,
λ^T Aη(x, x̄) ≥ 0.   (2)
The importance of VCPs in obtaining weakly efficient points (efficient points) can be illustrated through a characterization of pseudoinvexity (resp. strong pseudoinvexity).
Theorem 1. Every VCP with respect to η is a weakly efficient solution of (VOP) if and only if f is pseudoinvex with respect to the same η on X.
Proof. Firstly, we prove that f is pseudoinvex with respect to η.
(a) We consider two points x, x̄ ∈ X and assume that f(x) − f(x̄) ∈ −int R^p_+. Then, x̄ is not a weakly efficient solution of (VOP). By the hypothesis, we derive that x̄ is not a VCP with respect to η, i.e., there do not exist λ ∈ R^p_+ \ {0} and A ∈ ∂f(x̄) with λ^T Aη(x, x̄) ≥ 0. In particular, taking λ equal to each canonical basis vector of R^p, every component of Aη(x, x̄) must be negative, so Aη(x, x̄) ∈ −int R^p_+, and therefore f is PIX with respect to η on X.
We now prove the sufficient condition. We assume by hypothesis that f is PIX with respect to η and that x̄ is a VCP with respect to the same η; thus, there exist λ ∈ R^p_+ \ {0} and A ∈ ∂f(x̄) such that λ^T Aη(x, x̄) ≥ 0 for every x ∈ X ⊆ M with η(x, x̄) ∈ T_x̄ M. We need to prove that x̄ is a weakly efficient point. By reductio ad absurdum, suppose that x̄ is not a weakly efficient solution of (VOP). Then, there exists a point x ∈ X such that f(x) − f(x̄) ∈ −int R^p_+. Using the fact that f is PIX at x̄ with respect to η on X, we have Aη(x, x̄) ∈ −int R^p_+, and so λ^T Aη(x, x̄) < 0, which contradicts (2).
In the same way, we can prove the following corollary.
Corollary 1. Every VCP with respect to η is an efficient solution of (VOP) if and only if f is strong pseudoinvex with respect to the same η on X.
Let us underline that Theorem 1 and Corollary 1 show that pseudoinvexity (resp. strong pseudoinvexity) is a minimal requirement for the property that every VCP is a weakly efficient (resp. efficient) solution of problem (VOP) on a Hadamard manifold in the nonsmooth case.
In summary, Theorem 1 extends Theorem 2.2 of Osuna-Gómez et al. [30] and Theorem 5 of Gutiérrez et al. [25] from linear spaces to Hadamard manifolds.
Next, an example is given to demonstrate the applicability of the previous results.
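A minimal numerical illustration of Theorem 1 can be given with hypothetical data on the flat Hadamard manifold M = R, where subdifferentials are the classical Clarke ones:

```python
# A vector critical point of the unconstrained bi-objective problem
#   Min f(x) = (|x|, (x - 1)^2)  on the flat Hadamard manifold M = R.
# At xbar = 0, pick A = (1, -2): 1 lies in [-1, 1], the Clarke
# subdifferential of |.| at 0, and -2 is the derivative of (x - 1)^2 at 0.
# With lambda = (2, 1) >= 0 we get lambda^T A = 0, so xbar is a VCP.
f = lambda x: (abs(x), (x - 1.0) ** 2)

xbar = 0.0
A = (1.0, -2.0)
lam = (2.0, 1.0)
assert lam[0] * A[0] + lam[1] * A[1] == 0.0   # VCP condition

# Theorem 1 then predicts xbar is weakly efficient: no x improves both
# objectives strictly. Check on a fine grid around xbar.
fx1, fx2 = f(xbar)
for i in range(-4000, 4001):
    x = i / 1000.0
    g1, g2 = f(x)
    assert not (g1 < fx1 and g2 < fx2)
```

Here the first objective already certifies weak efficiency, since |x| < 0 is impossible; both components are convex, hence the vector function is pseudoinvex.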
For scalar functions, we can go one step further.
Corollary 2. Let M be a Hadamard manifold, X ⊆ M be an open geodesic convex set, and θ : X → R be a locally Lipschitz function. The following statements are equivalent:
(a) θ is invex with respect to η on X.
(b) Every VCP of θ with respect to η is a global minimum of θ on X.
(c) θ is PIX with respect to η on X.
Proof. (a) ⇔ (b) This is the classical characterization of invexity: θ is invex if and only if every critical point is a global minimum. (b) ⇔ (c) The result is given by Theorem 1, since for p = 1 weakly efficient solutions are exactly global minima.
In summary, Corollary 2 provides a necessary and sufficient invexity condition for locally Lipschitz functions on Hadamard manifolds, extending a result given by Gutiérrez et al. [25] for Euclidean spaces. In Ruiz-Garzón et al. [26], only invexity was characterized on Riemannian manifolds; now, we have shown that invexity and pseudoinvexity coincide in the scalar case. These notions describe a wide class of differentiable and locally Lipschitz functions whose critical points are global minima in unconstrained problems on Hadamard manifolds.
The question that now arises is whether, in the case of the constrained vector optimization problem, solutions and vector critical points also coincide when applying pseudoinvexity assumptions.

Relations between Solutions of the Constrained VOP and KKT VCPs on Hadamard Manifolds
The objective of this section is to extend the results obtained in the previous section for the unconstrained case to the constrained case. We want to determine the conditions under which KKT VCPs and efficient and weakly efficient points coincide.
We consider the constrained multiobjective programming problem (CVOP) defined as:
(CVOP): Min f(x) = (f_1(x), . . . , f_p(x)), subject to g(x) = (g_1(x), . . . , g_m(x)) ≦ 0, x ∈ X ⊆ M,
where f : X → R^p and g : X → R^m are locally Lipschitz functions on an open geodesic convex subset X of a Hadamard manifold M. We denote the feasible set by K = {x ∈ X : g(x) ≦ 0} and the set of active constraints at x̄ ∈ K by I(x̄) = {j : g_j(x̄) = 0}. As for the unconstrained case, we are going to use KKT VCPs, which are defined as follows.
Definition 10. A feasible point x̄ ∈ K is said to be a Karush–Kuhn–Tucker vector critical point (KKT-VCP) with respect to η if there exist λ ∈ R^p_+ \ {0}, μ_{I(x̄)} ≥ 0, A ∈ ∂f(x̄), and B ∈ ∂g_{I(x̄)}(x̄) such that
(λ^T A + μ_{I(x̄)}^T B) η(x, x̄) ≥ 0, ∀x ∈ X.
A new type of invex function that involves the objective and constraint functions is needed to study the efficient solutions of (CVOP) through KKT-VCPs.
Definition 11. Problem (CVOP) is said to be KKT-pseudoinvex (KKT-PIX) at x̄ ∈ K with respect to η : M × M → TM if, for every feasible x with f(x) − f(x̄) ∈ −int R^p_+, every A ∈ ∂f(x̄), and every B ∈ ∂g_{I(x̄)}(x̄),
Aη(x, x̄) ∈ −int R^p_+ and Bη(x, x̄) ≦ 0.
Definition 12. Problem (CVOP) is said to be strong KKT-pseudoinvex (SG-KKT-PIX) with respect to η : M × M → TM if the previous implication holds with the premise f(x) − f(x̄) ∈ −R^p_+ \ {0}.
Obviously, if there are no constraints, these definitions coincide with those given in the preliminaries and are an extension to Hadamard manifolds of those given by Osuna-Gómez et al. [30,32] and Gutiérrez et al. [31].
The following theorem shows us the importance and usefulness of (CVOP) being SG-KKT-PIX in locating the efficient points through the KKT-VCP points.
Theorem 2. Every KKT-VCP with respect to η is an efficient solution of (CVOP) if and only if (CVOP) is SG-KKT-PIX with respect to the same η.
Proof. We first prove that (CVOP) is SG-KKT-PIX with respect to η at x̄. Suppose, on the contrary, that the defining implication fails at some feasible point, since otherwise (CVOP) would be SG-KKT-PIX with respect to η and the result would be proved. From (10), we have that x̄ is not an efficient solution, and using the initial hypothesis, x̄ is not a KKT-VCP; that is, the associated system has no solution λ̄ ≥ 0, μ̄_{I(x̄)} ≥ 0. Therefore, by Motzkin's alternative theorem [33], the dual system has a solution, which yields the required implication.
Let us now prove the reciprocal condition. Let x̄ be a KKT-VCP with respect to η and let (CVOP) be SG-KKT-PIX with respect to the same η. We have to prove that x̄ is an efficient solution of (CVOP). By reductio ad absurdum, consider a feasible point x such that f(x) − f(x̄) ∈ −R^p_+ \ {0}. By hypothesis, (CVOP) is SG-KKT-PIX with respect to η at x̄, so Aη(x, x̄) ∈ −int R^p_+ and Bη(x, x̄) ≦ 0 for every A ∈ ∂f(x̄) and B ∈ ∂g_{I(x̄)}(x̄). However, as λ̄ ≥ 0, μ̄_{I(x̄)} ≥ 0, and from (11), it follows that (λ̄^T A + μ̄_{I(x̄)}^T B) η(x, x̄) < 0, which contradicts (12). Therefore, x̄ is an efficient solution of (CVOP).
Arguing in the same way, we can prove the following corollary. These results extend Theorem 3.7 and Corollary 3.8, obtained by Ruiz-Garzón et al. [26] on Hadamard manifolds, from the differentiable case to the nondifferentiable case, and extend Theorem 3.7 of Osuna-Gómez et al. [32] and Theorem 2.3 of Osuna-Gómez et al. [30] in finite-dimensional Euclidean spaces.
We illustrate the above results with an example.
Example 6. Consider the following constrained vector optimization problem, where Ω is the upper half-plane model of hyperbolic space endowed with its Riemannian metric. We will prove that p̄ = (1/2, 1/2) is a weakly efficient solution of (CVOP): there exists η(p, p̄) = 3p̄ − p = (1, 0) such that the KKT conditions are satisfied.
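The structure of Theorem 2 can also be sketched on the simplest Hadamard manifold, M = R, with hypothetical scalar data; the problem below is convex, hence its KKT point must be a minimizer:

```python
# Hedged scalar sketch of the constrained case on M = R (hypothetical data):
#   Min theta(x) = (x - 2)^2   subject to  g(x) = x - 1 <= 0.
theta = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0
dtheta = lambda x: 2.0 * (x - 2.0)
dg = lambda x: 1.0

xbar, mu = 1.0, 2.0
assert g(xbar) == 0.0 and mu >= 0.0              # active constraint, mu >= 0
assert dtheta(xbar) + mu * dg(xbar) == 0.0       # KKT stationarity

# Convexity plays the role of (SG-)KKT-pseudoinvexity here, so the KKT
# point is optimal over the whole feasible set {x <= 1}:
for i in range(-5000, 1001):
    x = i / 1000.0          # feasible points x <= 1
    assert theta(x) >= theta(xbar)
```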

Application: Relations between Nash Equilibrium Points and Nash Critical Points
In this section, we relate Nash equilibrium points and Nash critical points. A Nash strategy involves n players, each optimizing his own criterion while the criteria of all other players remain fixed. When no player can further improve his criterion by unilaterally changing strategy, the system has reached a state called a Nash equilibrium. Once equilibrium is achieved, no player has an incentive to unilaterally deviate from this point. In general, there may be one or more Nash equilibrium points.
The following concepts were described by Kristály [2]. Let K_1, . . . , K_n be geodesic convex subsets of Hadamard manifolds and f_1, . . . , f_n be payoff functions, f_i : K = K_1 × · · · × K_n → R. A point x̄ = (x̄_1, . . . , x̄_n) ∈ K is a Nash equilibrium point (NEP) if
f_i(x̄) ≤ f_i(x̄_1, . . . , q, . . . , x̄_n), ∀q ∈ K_i, ∀i ∈ {1, . . . , n};
and x̄ ∈ K is a Nash critical point (NCP) if
f_i^0(x̄; exp_{x̄_i}^{-1}(q)) ≥ 0, ∀q ∈ K_i, ∀i ∈ {1, . . . , n},
where f_i^0 denotes the Clarke generalized directional derivative with respect to the i-th variable.
We have proven that the relationship between Nash's critical and equilibrium points is obtained for invex payoff functions, extending the results obtained for convex payoff functions given by Kristály [2].
In summary, in invexity environments, Nash critical points and Nash equilibrium points coincide. Let us illustrate this property with an example.
Consider a two-player game on convex sets K_1 = K_2 ⊂ M = R with payoff functions defined as follows. We are going to prove that the point (x̄, ȳ) = (1, 1) is simultaneously an NEP and an NCP.
We have that f_1(·, y) is a locally Lipschitz function on R for every y ∈ K_2, and f_2(x, ·) is a C^1 function on R for every x ∈ K_1. On the one hand, we can calculate the subdifferential ∂f_1. The NCPs are the solutions (x̄, ȳ) ∈ K of the system:
f_1^0((x̄, ȳ); exp_{x̄}^{-1}(q)) = ⟨∂f_1(x̄, ȳ), (q − x̄)⟩ ≥ 0, ∀q ∈ K_1,
f_2^0((x̄, ȳ); exp_{ȳ}^{-1}(q)) = (ȳ − x̄)(q − ȳ) ≥ 0, ∀q ∈ K_2.
On the other hand, one way to obtain the NEP is through the rational reaction sets. For two players, let R_i be the rational reaction set of player i:
R_1 = {(x̄, y) ∈ K_1 × K_2 : f_1(x̄, y) ≤ f_1(x, y), ∀x ∈ K_1},
R_2 = {(x, ȳ) ∈ K_1 × K_2 : f_2(x, ȳ) ≤ f_2(x, y), ∀y ∈ K_2}.
The NEP is obtained from the intersection of the two rational reaction sets. Obviously, K_1 = K_2 ⊂ M = R are convex. Additionally, f_1(·, y) is a convex function, and therefore an invex function, on K_1 for every y ∈ K_2, and f_2(x, ·) is invex on K_2 ⊂ R for every x ∈ K_1. In our case, the intersection is the point (x̄, ȳ) = (1, 1), which is both an NEP and an NCP.
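The rational-reaction computation can be sketched numerically. The payoffs below are hypothetical stand-ins chosen to place the equilibrium at (1, 1), not the example's exact functions: f_1 is locally Lipschitz and invex in x, f_2 is C^1 and invex in y.

```python
# Hedged sketch of a two-player game on K1 = K2 = [1, 2] with invex payoffs
# (hypothetical payoffs; chosen so that the equilibrium sits at (1, 1)).
f1 = lambda x, y: abs(x - 1.0)            # locally Lipschitz, invex in x
f2 = lambda x, y: 0.5 * (y - x) ** 2      # C^1 and invex in y

K = [1.0 + i / 1000.0 for i in range(1001)]   # grid on [1, 2]

def best_response(payoff, other, which):
    """Brute-force minimizer of one player's payoff on the grid K."""
    if which == 1:
        return min(K, key=lambda x: payoff(x, other))
    return min(K, key=lambda y: payoff(other, y))

# Best-response iteration converges to the Nash equilibrium point
x, y = 2.0, 2.0
for _ in range(10):
    x = best_response(f1, y, 1)
    y = best_response(f2, x, 2)
assert (x, y) == (1.0, 1.0)

# No player can improve unilaterally at (1, 1): a Nash equilibrium point
assert all(f1(1.0, 1.0) <= f1(q, 1.0) for q in K)
assert all(f2(1.0, 1.0) <= f2(1.0, q) for q in K)
```

Because both payoffs are invex in the deviating variable, the first-order (critical-point) system and the equilibrium system select the same point, in line with the result of this section.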

Conclusions
This paper has shown, for the first time, which types of functions are characterized by the property that every VCP is an efficient or weakly efficient solution of vector optimization problems with and without constraints on Hadamard manifolds. We have extended the results given by Gutiérrez et al. [25] and Ruiz-Garzón et al. [26] from linear spaces to nonlinear spaces, in the more general case of nonsmooth functions. We have introduced numerous explanatory examples, and have presented an economic application showing that Nash critical points and Nash equilibrium points coincide in the case of invex payoff functions.
The results presented in this paper lead to the following conclusions: • There is a need to extend the different concepts of invexity to Hadamard manifolds and clarify the relationships between them.

• It is important to use an adequate definition of VCPs or KKT-VCPs.

• There are applications of invexity in the search for equilibrium points, which are so desirable in economics.
In our opinion, in the future, we should search for algorithms or software that reflect the theoretical results achieved here, and identify further applications in the fields of physics and economics.
Author Contributions: All authors contributed equally to this research and to the writing of the paper. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.