Existence and Uniqueness of Zeros for Vector-Valued Functions with K-Adjustability Convexity and Their Applications

Abstract: In this paper, we introduce the new concepts of K-adjustability convexity and strictly K-adjustability convexity, which respectively generalize and extend the concepts of K-convexity and strictly K-convexity. We establish some new existence and uniqueness theorems of zeros for vector-valued functions with K-adjustability convexity. As applications, we obtain existence theorems for the minimization problem and the fixed point problem which are original and quite different from the known results in the existing literature.


Introduction and Preliminaries
It is well known that convex analysis has played an important role in almost all branches of mathematics, physics, economics, and engineering. Convexity is an ancient and natural notion and the theory of convex functions is an essential part of the general subject of convexity.
Let V be a vector space. A nonempty subset A of V is called convex if for any x, y ∈ A, λx + (1 − λ)y ∈ A for all λ ∈ [0, 1]. Let X be a nonempty convex subset of V. A real-valued function f : X → R is called convex if

f(tx + (1 − t)y) ≤ t f(x) + (1 − t) f(y)   (1)

for all x, y ∈ X and t ∈ [0, 1]. If the inequality (1) is strict whenever x ≠ y and 0 < t < 1, then f is called strictly convex. A function f : X → R is called concave (resp. strictly concave) if −f is convex (resp. strictly convex). A large number of notions of generalized convexity and concavity have been investigated by several authors; see, for example, refs. [1-15] and references therein.

Let V_1 and V_2 be vector spaces and X be a nonempty subset of V_1. The general vector optimization problem (VOP) for a vector-valued function f : X → V_2 can be formulated as follows:

(VOP)  minimize f(x) subject to x ∈ X.

Vector optimization problems have been intensively investigated, various feasible methods have been proposed over the past century, and the theory has made important contributions to our understanding of the real world in various fields. Convex analysis and vector optimization have wide and significant applications in many areas of mathematics, including nonlinear analysis, financial mathematics, vector differential equations and inclusions, dynamic system theory, control theory, economics, game theory, machine learning, multiobjective programming, multi-criteria decision making, signal processing, and so forth. For more details, see, e.g., refs. [1,7-10,16] and references therein. In reality, we often encounter non-convex or non-concave functions when solving real-world problems, so the known results for convex or concave functions are not easily applicable.
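For the scalar case, the defining inequality (1) can be checked numerically on a grid. The sketch below uses illustrative functions and grids of our own choosing (not taken from the paper) to confirm that f(x) = x^2 satisfies (1) while f(x) = -x^2 does not:

```python
# Numerical sanity check of the convexity inequality (1).
# Illustrative only: f, X = [-2, 2] and the grids are our own choices.

def is_convex_on_samples(f, xs, ts, tol=1e-12):
    """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) on sample points."""
    for x in xs:
        for y in xs:
            for t in ts:
                lhs = f(t * x + (1 - t) * y)
                rhs = t * f(x) + (1 - t) * f(y)
                if lhs > rhs + tol:
                    return False
    return True

xs = [i / 10 for i in range(-20, 21)]   # grid on [-2, 2]
ts = [j / 10 for j in range(11)]        # grid on [0, 1]

print(is_convex_on_samples(lambda x: x * x, xs, ts))   # convex: True
print(is_convex_on_samples(lambda x: -x * x, xs, ts))  # concave, not convex: False
```

Such a grid check can of course only refute convexity, never prove it, but it is a convenient way to probe candidate examples.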
Motivated by this, in this paper we introduce and study the new concepts of K-adjustability convexity and strictly K-adjustability convexity (see Definition 1 below). A nontrivial example is given to illustrate that the concept of K-adjustability convexity is a genuine generalization of the concept of K-convexity. In Section 3, we establish some new existence and uniqueness theorems of zeros for vector-valued functions with K-adjustability convexity. As applications, we obtain existence theorems for the minimization problem and the fixed point problem which are original and quite different from the known results in the literature.

New Concepts of K-Adjustability Convexity and Strictly K-Adjustability Convexity
Let V be a topological vector space (t.v.s., for short) with zero vector θ_V. Let A be a nonempty subset of V. We use the notations cl(A), co(A), and cl co(A) to denote the closure, the convex hull, and the closed convex hull (i.e., the closure of the convex hull) of A, respectively.
For a given cone K ⊆ V, we can define a partial ordering ≼_K with respect to K by x ≼_K y if and only if y − x ∈ K. We write x ≺_K y for x ≼_K y and x ≠ y, while x ≪_K y stands for y − x ∈ int K, where int K denotes the interior of K. A function ϕ : V → V is said to be ≼_K-nondecreasing if ϕ(x) ≼_K ϕ(y) whenever x ≼_K y. Let X be a topological space. A real-valued function h : X → R is lower semicontinuous (lsc, for short) (resp. upper semicontinuous, usc, for short) if {x ∈ X : h(x) ≤ r} (resp. {x ∈ X : h(x) ≥ r}) is closed for each r ∈ R.
Let Y be a t.v.s. with zero vector θ, K be a proper (i.e., K ≠ Y), closed and convex pointed cone in Y with int K ≠ ∅, e ∈ int K, and ≼_K be the partial ordering with respect to K. A vector-valued function f : X → Y is said to be (e, K)-lower semicontinuous [9,17] if for each r ∈ R, the set {x ∈ X : f(x) ∈ re − K} is closed.
In this paper, we introduce the concepts of K-adjustability convexity and strictly K-adjustability convexity.

Definition 1. Let V_1 and V_2 be vector spaces, X be a nonempty convex set in V_1, K be a given convex cone in V_2 and µ : V_2 → V_2 be a mapping. A vector-valued function f : X → V_2 is called
(i) K-adjustability convex with respect to µ (abbreviated as (K, µ)-adjconvex) if

f(tx + (1 − t)y) ≼_K µ(t f(x) + (1 − t) f(y))   (2)

for all x, y ∈ X and t ∈ [0, 1]. In particular, f is called K-convex if µ is the identity mapping on V_2 and (2) becomes

f(tx + (1 − t)y) ≼_K t f(x) + (1 − t) f(y)

for all x, y ∈ X and t ∈ [0, 1].
(ii) strictly K-adjustability convex with respect to µ (abbreviated as strictly (K, µ)-adjconvex) if

f(tx + (1 − t)y) ≺_K µ(t f(x) + (1 − t) f(y))   (3)

for all x, y ∈ X with x ≠ y and t ∈ (0, 1). In particular, f is called strictly K-convex if µ is the identity mapping on V_2 and (3) becomes

f(tx + (1 − t)y) ≺_K t f(x) + (1 − t) f(y)

for all x, y ∈ X with x ≠ y and t ∈ (0, 1).
Here, we give an example where f is K-adjconvex but not K-convex.
Take x = 1/2 and y = −1/2; then the K-convexity inequality fails for f, so f is not K-convex. We claim that f is (K, µ)-adjconvex. Let x, y ∈ X and t ∈ [0, 1] be given; distinguishing four possible cases, one verifies the inequality (2) directly in each of them.
In Definition 1, if we take V_1 = V, V_2 = R and K = [0, +∞) ⊂ R, then we obtain the following concepts.

Definition 2. Let X be a nonempty convex subset of a vector space V and µ : R → R be a function. A real-valued function f : X → R is called
(i) adjustability convex with respect to µ (abbreviated as (µ)-adjconvex) if

f(tx + (1 − t)y) ≤ µ(t f(x) + (1 − t) f(y))

for all x, y ∈ X and t ∈ [0, 1]. In particular, if µ is the identity mapping on R, then f is called convex.
(ii) strictly adjustability convex with respect to µ (abbreviated as strictly (µ)-adjconvex) if

f(tx + (1 − t)y) < µ(t f(x) + (1 − t) f(y))

for all x, y ∈ X with x ≠ y and t ∈ (0, 1). In particular, if µ is the identity mapping on R, then f is called strictly convex.
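To see how Definition 2 can strictly enlarge the class of convex functions, here is a hypothetical numerical illustration (the functions f and µ below are our own illustrative choices, not the paper's example): on X = [−1, 1], f(x) = −x^2 is not convex, yet it is (µ)-adjconvex for µ(s) = s + 1, since f ≤ 0 on X while µ(t f(x) + (1 − t) f(y)) ≥ 0 there.

```python
# Hypothetical illustration of Definition 2 (not the paper's example):
# on X = [-1, 1], f(x) = -x^2 is not convex, but it is (mu)-adjconvex
# for mu(s) = s + 1, since f <= 0 on X while mu(t*f(x)+(1-t)*f(y)) >= 0.

def adjconvex_on_samples(f, mu, xs, ts, tol=1e-12):
    """Check f(t*x+(1-t)*y) <= mu(t*f(x)+(1-t)*f(y)) on sample points."""
    return all(
        f(t * x + (1 - t) * y) <= mu(t * f(x) + (1 - t) * f(y)) + tol
        for x in xs for y in xs for t in ts
    )

f = lambda x: -x * x
mu = lambda s: s + 1.0
identity = lambda s: s

xs = [i / 20 for i in range(-20, 21)]  # grid on [-1, 1]
ts = [j / 10 for j in range(11)]       # grid on [0, 1]

print(adjconvex_on_samples(f, mu, xs, ts))        # (mu)-adjconvex: True
print(adjconvex_on_samples(f, identity, xs, ts))  # ordinary convexity fails: False
```

The "adjusting" map µ thus trades the right-hand side of the convexity inequality for a modified one, which is exactly the flexibility the definition is designed to capture.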
In the following, unless otherwise specified, we always suppose that Y is a locally convex Hausdorff t.v.s. with zero vector θ, K is a proper, closed and convex pointed cone in Y with int K ≠ ∅, e ∈ int K, and ≼_K is the partial ordering with respect to K. Recall that the nonlinear scalarization function ξ_e : Y → R is defined by

ξ_e(y) = inf{r ∈ R : y ∈ re − K}.

Obviously, ξ_e(θ) = 0.
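For a concrete instance (our own illustrative choice, not fixed by the paper), take Y = R^2, K = R^2_+ (the nonnegative orthant) and e ∈ int K. Then y ∈ re − K if and only if r e_i ≥ y_i for each coordinate i, so the infimum in the definition is ξ_e(y) = max_i y_i/e_i. The sketch below computes this closed form and checks it against the cone-membership characterization:

```python
# The nonlinear scalarization xi_e for the sample case Y = R^2,
# K = R_+^2 and e in int K (illustrative choice, not fixed by the paper).
# Here y in r*e - K  iff  r*e_i - y_i >= 0 for all i, hence
# xi_e(y) = inf{r in R : y in r*e - K} = max_i y_i / e_i.

def xi_e(y, e):
    """Scalarization xi_e(y) for K = R_+^n and e with positive entries."""
    return max(yi / ei for yi, ei in zip(y, e))

e = (1.0, 1.0)
print(xi_e((0.0, 0.0), e))   # xi_e(theta) = 0
print(xi_e((3.0, -1.0), e))  # max(3, -1) = 3

# Sample check of the characterization: xi_e(y) <= r  iff  y in r*e - K.
y, r = (0.5, -2.0), 0.5
in_cone = all(r * ei - yi >= 0 for yi, ei in zip(y, e))
print(xi_e(y, e) <= r, in_cone)  # both True
```

The last check is precisely the scalar reading of property (i) of Lemma 1 below in this sample setting.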
The following known result is very crucial in our proofs.

Lemma 1. For any r ∈ R and y ∈ Y, the following hold:
(i) ξ_e(y) ≤ r if and only if y ∈ re − K;
(ii) ξ_e(y) < r if and only if y ∈ re − int K.

By applying (i) of Lemma 1, one can easily verify the following result; see also [19,24].

Lemma 2. Let X be a topological space and f : X → Y be a vector-valued function. Then f is (e, K)-lower semicontinuous if and only if ξ_e ∘ f is lower semicontinuous.

New Existence Results and Their Applications to Minimization Problem and Fixed Point Problem
The following lemma is very important and will be used in proving our main results.

Lemma 3. Let µ : Y → Y be a vector-valued function satisfying the following condition:
(A) for any ε > 0, there exists c ∈ (0, ε] such that µ(ce) ≺_K εe.
Then there exists a strictly decreasing sequence {λ_n}_{n∈N} of positive real numbers such that µ(λ_{n+1}e) ≺_K λ_n e for all n ∈ N and λ_n ↓ 0 as n → ∞.
The following result is immediate from Lemma 3 if we take Y = R, K = [0, +∞) ⊂ R and e = 1.

Corollary 1.
Let µ : R → R be a function satisfying the following condition:
(A_R) for any ε > 0, there exists c ∈ (0, ε] such that µ(c) < ε.
Then there exists a strictly decreasing sequence {λ_n}_{n∈N} of positive real numbers such that µ(λ_{n+1}) < λ_n for all n ∈ N and λ_n ↓ 0 as n → ∞.
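A simple hypothetical instance of Corollary 1 (µ and the sequence below are our own illustrative choices, not from the paper): for µ(t) = 2t, the sequence λ_n = 4^(−n) is strictly decreasing to 0 and satisfies µ(λ_{n+1}) = λ_n/2 < λ_n, as the sketch verifies:

```python
# Illustration of the conclusion of Corollary 1 with the sample function
# mu(t) = 2*t (our own choice): lambda_n = 4**(-n) is strictly decreasing,
# tends to 0, and mu(lambda_{n+1}) = 2 * 4**(-(n+1)) = lambda_n / 2 < lambda_n.

mu = lambda t: 2.0 * t
lam = [4.0 ** (-n) for n in range(1, 30)]

strictly_decreasing = all(a > b for a, b in zip(lam, lam[1:]))
condition = all(mu(lam[i + 1]) < lam[i] for i in range(len(lam) - 1))

print(strictly_decreasing, condition, lam[-1] < 1e-12)  # True True True
```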
We now establish the following crucial and useful existence result, which is one of the main results of this paper and will be applied to the minimization problem and the fixed point problem.

Theorem 1. Let (E, ‖·‖) be a normed linear space, Y be a locally convex Hausdorff t.v.s. with zero vector θ, K be a proper, closed and convex pointed cone in Y with int K ≠ ∅, and let e ∈ int K be fixed. Let W be a nonempty weakly compact and convex subset of E, µ : Y → Y be a ≼_K-nondecreasing vector-valued function satisfying the condition (A) as in Lemma 3 and f : W → Y be a vector-valued function. Assume that
(H1) for any positive real number γ, {x ∈ W : f(x) ∈ γe − K} is a nonempty closed subset of W;
(H2) f is (K, µ)-adjconvex.
Then there exists v ∈ W such that f (v) ∈ −K.
Proof. By applying Lemma 3, there exists a strictly decreasing sequence {λ_n}_{n∈N} of positive real numbers such that

µ(λ_{n+1}e) ≺_K λ_n e for all n ∈ N,   (8)

and λ_n ↓ 0 as n → ∞. For any n ∈ N, let

C_n := {x ∈ W : f(x) ∈ λ_n e − K}.

Applying Lemma 1, we have C_n = {x ∈ W : ξ_e(f(x)) ≤ λ_n}. Thus, by (H1), C_n is a nonempty closed subset of W. Clearly, C_{n+1} ⊆ C_n for all n ∈ N. We choose an arbitrary point z_n from C_n for each n ∈ N. For any m, n ∈ N with m ≥ n, let

D_{m,n} := {z_{n+1}, z_{n+2}, . . . , z_{m+1}}.

We verify that

co(D_{m,n}) ⊆ C_n for all m, n ∈ N with m ≥ n.   (9)

Indeed, if x, y ∈ C_{n+1} and t ∈ [0, 1], then t f(x) + (1 − t) f(y) ≼_K λ_{n+1}e since K is a convex cone, and hence, by (H2), the ≼_K-nondecreasingness of µ and (8),

f(tx + (1 − t)y) ≼_K µ(t f(x) + (1 − t) f(y)) ≼_K µ(λ_{n+1}e) ≺_K λ_n e,

so tx + (1 − t)y ∈ C_n. Since the points of D_{m,n} all lie in C_{n+1} and the sets C_n are nested, an induction on the number of points yields (9).
For each n ∈ N, put U_n := ⋃_{m≥n} D_{m,n} = {z_k : k ≥ n + 1}. Then co(U_n) ⊆ C_n for all n ∈ N. Indeed, assume on the contrary that co(U_{j*}) ⊄ C_{j*} for some j* ∈ N. Then there exist z_{k_1}, z_{k_2}, · · · , z_{k_s} ∈ U_{j*} with k_1 < k_2 < · · · < k_s and α_1, α_2, · · · , α_s ≥ 0 with ∑_{i=1}^{s} α_i = 1 such that ∑_{i=1}^{s} α_i z_{k_i} ∉ C_{j*}. Since z_{k_1}, · · · , z_{k_s} ∈ D_{k_s−1, k_1−1} and k_1 − 1 ≥ j*, it follows from (9) that

∑_{i=1}^{s} α_i z_{k_i} ∈ co(D_{k_s−1, k_1−1}) ⊆ co(D_{k_s−1, j*}) ⊆ C_{j*},

which leads to a contradiction. Hence co(U_n) ⊆ C_n for all n ∈ N. By the closedness of C_n, we get cl co(U_n) ⊆ C_n for all n ∈ N.
Since cl co(U_{n+1}) ⊆ cl co(U_n) and cl co(U_n) is weakly compact for all n ∈ N, {cl co(U_n) : n ∈ N} is a family of weakly closed subsets of the weakly compact set cl co(U_1) which has the finite intersection property. Therefore we deduce

∅ ≠ ⋂_{n∈N} cl co(U_n) ⊆ ⋂_{n∈N} C_n,

and hence we can take v ∈ ⋂_{n∈N} C_n ⊆ W. So ξ_e(f(v)) ≤ λ_n for all n ∈ N. Since λ_n ↓ 0 as n → ∞, we get ξ_e(f(v)) ≤ 0, and hence, by (i) of Lemma 1, f(v) ∈ −K. The proof is completed.

Corollary 2. Let (E, ‖·‖) be a normed linear space, W be a nonempty weakly compact and convex subset of E, τ : R → R be a nondecreasing function satisfying the condition (A_R) as in Corollary 1 and h : W → R be a function. Assume that
(a) for any positive real number γ, {x ∈ W : h(x) ≤ γ} is a nonempty closed subset of W;
(b) h is (τ)-adjconvex.
Then there exists v ∈ W such that h(v) ≤ 0.
Proof. Take Y = R, K = [0, +∞) ⊂ R and e = 1. Then Y is a locally convex Hausdorff t.v.s. with zero vector θ = 0, K is a proper, closed and convex pointed cone in Y with int K = (0, +∞) ≠ ∅, and 1 ∈ int K. The partial ordering ≼_K with respect to K coincides with the usual order ≤ on R. Set f := h and µ := τ. Then h is a mapping from W into Y and τ : Y → Y is a ≼_K-nondecreasing function satisfying the condition (A) as in Lemma 3. Clearly, conditions (a) and (b) respectively imply conditions (H1) and (H2) as in Theorem 1. Hence all the assumptions of Theorem 1 are satisfied and the desired conclusion follows immediately from Theorem 1.
As a direct consequence of Theorem 1, we obtain the following existence result.
Then there exists v ∈ W such that f (v) ∈ −K.
Proof. For any positive real number γ, by (h1), (h2) and Lemma 2, the set {x ∈ W : f(x) ∈ γe − K} is a nonempty closed subset of W. Therefore, the condition (H1) as in Theorem 1 holds. Applying Theorem 1, we immediately obtain the conclusion.

Then there exists v ∈ W such that h(v) ≤ 0.
Applying Theorem 1, we can establish an existence theorem of zeros for vector-valued functions with K-adjustability convexity under an additional assumption. Theorem 3. In Theorem 1, if we further assume that f (x) ∈ K for all x ∈ W, then the equation f (x) = θ has at least one root in W.
Proof. By Theorem 1, there exists v ∈ W such that f(v) ∈ −K. Therefore, by our hypothesis and the pointedness of K, we get f(v) ∈ K ∩ (−K) = {θ}. Hence v is a root of f(x) = θ. The proof is completed.
As an immediate consequence of Theorem 3, we obtain the following new existence theorem.
Corollary 5. In Corollary 3 (or Corollary 4), if we further assume that h(x) ≥ 0 for all x ∈ W, then the equation h(x) = 0 has at least one root in W.
The following new existence and uniqueness theorem of zeros for vector-valued functions with strict (K, µ)-adjconvexity is established by applying Theorem 3.

Theorem 4. In Theorem 3, if we further assume that µ(θ) = θ and
(H3) f is strictly (K, µ)-adjconvex,
then the equation f(x) = θ has a unique root in W.

Proof. Applying Theorem 3, the equation f(x) = θ has at least one root in W. Assume that u, v ∈ W are two distinct roots of f(x) = θ. Since W is convex and µ(θ) = θ, we have (1/2)u + (1/2)v ∈ W and µ((1/2)f(u) + (1/2)f(v)) = θ. By (H3), we get

f((1/2)u + (1/2)v) ≺_K µ((1/2)f(u) + (1/2)f(v)) = θ,

so f((1/2)u + (1/2)v) ∈ (−K) \ {θ}, which contradicts f(x) ∈ K for all x ∈ W since K is pointed. Therefore, the equation f(x) = θ has a unique root in W. The proof is completed.

Corollary 6.
In Corollary 3, if we further assume that τ(0) = 0, h(x) ≥ 0 for all x ∈ W and the condition (b) is replaced with (b1) h is strictly (τ)-adjconvex, then the equation h(x) = 0 has a unique root in W.
As an interesting application of Corollary 5, we prove the following minimization theorem.
Theorem 5. Let W be a nonempty weakly compact and convex subset of a normed linear space (E, ‖·‖) with origin θ and g : W → R be a convex, lower semicontinuous and bounded below function. Then

arg min_{x∈W} g(x) := {v ∈ W : g(v) = inf_{z∈W} g(z)} ≠ ∅.

Moreover, if g is strictly convex, then arg min_{x∈W} g(x) is a singleton set.
Proof. Since g is bounded below, inf_{z∈W} g(z) exists. Let h : W → R be defined by h(x) = g(x) − inf_{z∈W} g(z) for x ∈ W.
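In the scalar one-dimensional setting, the conclusion of Theorem 5 can be illustrated numerically (g and W below are illustrative choices of ours, not from the paper): a strictly convex, continuous (hence lsc), bounded-below function on a compact convex interval attains its infimum at exactly one point.

```python
# Numerical illustration of Theorem 5 (illustrative g and W, not from the
# paper): on W = [-1, 1], the strictly convex, continuous, bounded-below
# function g(x) = (x - 0.3)**2 has a unique minimizer in W.

def argmin_on_grid(g, lo, hi, n=20001):
    """Approximate arg min of g over [lo, hi] on a uniform grid."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    vals = [g(x) for x in xs]
    m = min(vals)
    # all grid points attaining the minimum (up to a small tolerance)
    return [x for x, v in zip(xs, vals) if v <= m + 1e-12]

g = lambda x: (x - 0.3) ** 2
minimizers = argmin_on_grid(g, -1.0, 1.0)
print(len(minimizers) == 1)             # strict convexity: unique minimizer
print(abs(minimizers[0] - 0.3) < 1e-4)  # close to the true arg min 0.3
```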