Optimality Conditions and Duality for a Class of Generalized Convex Interval-Valued Optimization Problems

This paper is devoted to deriving optimality conditions and duality theorems for interval-valued optimization problems based on the gH-symmetrically derivative. Further, the concepts of symmetric pseudo-convexity and symmetric quasi-convexity for interval-valued functions are proposed in order to extend the above optimality conditions. Examples are presented to illustrate the corresponding results.


Introduction
Due to the complexity of the environment and the inherent ambiguity of human cognition, the data in real-world optimization problems are usually uncertain. Moreover, we cannot ignore the fact that small uncertainties in the data may render the usual optimal solutions completely meaningless from a practical viewpoint. Therefore, much attention has been paid to uncertain optimization problems; see [1][2][3][4].
There are various approaches used to tackle optimization problems with uncertainty, such as stochastic processes [5], fuzzy set theory [6] and interval analysis [7]. Among them, interval analysis expresses an uncertain variable as a real interval or an interval-valued function (IVF), and it has been applied in many fields, such as models involving inexact linear programming problems [8], data envelopment analysis [9], optimal control [10], goal programming [11], minimax regret solutions [12] and multiperiod portfolio selection problems [13], etc. Up to now, many works on interval-valued optimization problems (IVOPs) can be found (see [14,15]).
In classical optimization theory, the derivative is the most frequently used tool. It plays an important role in the study of optimality conditions and duality theorems for constrained optimization problems. To date, various notions of derivative for IVFs have been proposed, see [16][17][18][19][20][21][22][23]. One famous concept is the H-derivative defined in [16]. However, the H-derivative is restrictive. In 2009, Stefanini and Bede presented the gH-derivative [23] to overcome the disadvantages of the H-derivative. Furthermore, in [24], Guo et al. proposed the gH-symmetrically derivative, which is more general than the gH-derivative. These derivatives of IVFs have been widely used in the study of optimization problems. For instance, Wu [25] considered the Karush-Kuhn-Tucker (KKT) conditions for nonlinear IVOPs using the H-derivative. In [26,27], Wolfe-type dual problems of IVOPs were investigated. Later, more general KKT optimality conditions were proposed by Chalco-Cano et al. [28,29] based on the gH-derivative. Besides, Jayswal et al. [30] extended optimality conditions and duality theorems for IVOPs under generalized convexity. Antczak et al. [31] studied optimality conditions and duality results for nonsmooth vector optimization problems with multiple IVFs [32]. In 2019, Ghosh [33] extended the KKT conditions for constrained IVOPs. In addition, Van [34] investigated duality results for interval-valued pseudoconvex optimization problems with equilibrium constraints.
Given that the optimality conditions and duality of IVOPs have been extensively studied by many researchers in recent years, in this paper we continue to develop results on optimality conditions and Wolfe duality for IVOPs on the basis of the gH-symmetrically derivative. In addition, we introduce more appropriate concepts of symmetric pseudo-convexity and symmetric quasi-convexity in order to weaken the convexity hypothesis.
The remainder of the paper is organized as follows: In Section 2, we give preliminaries and recall some main concepts. In Section 3, we propose the directional gH-symmetrically derivative and more appropriate concepts of generalized convexity. Section 4 establishes the necessary optimality conditions and Wolfe duality theorems. In Section 5, we apply the generalized convexity notions to the results of Section 4. Our results are properly wider than the results in [28][29][30].

Preliminaries
Theorem 1 ([35]). Suppose that f : M → R is symmetrically differentiable on M and N is an open convex subset of M. Then f is convex on N if and only if
$f(t_2) - f(t_1) \ge \nabla^s f(t_1)^T (t_2 - t_1)$ for all $t_1, t_2 \in N$. (1)

Theorem 2 ([36]). Let A be an m × n real matrix and let $c \in \mathbb{R}^n$ be a column vector. Then the implication
$At \le 0 \Rightarrow c^T t \le 0$
holds for all $t \in \mathbb{R}^n$ if and only if there exists $u \ge 0$ such that $A^T u = c$, where $u \in \mathbb{R}^m$.
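As a small numerical illustration of Theorem 2 (added here, not part of the original text), one can search for a multiplier vector $u \ge 0$ with $A^T u = c$ by non-negative least squares; the matrix A and vector c below are made-up data, and a (near-)zero residual certifies that such a u exists.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical data: A is m x n, c is a column vector in R^n.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])      # m = 3, n = 2
c = np.array([4.0, 4.0])

# Look for u >= 0 with A^T u = c (Theorem 2).  nnls minimizes
# ||A^T u - c|| subject to u >= 0, so a zero residual certifies existence.
u, residual = nnls(A.T, c)
print("u =", u, "residual =", residual)   # residual ~ 0  =>  such u exists
```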
Let I be the set of all bounded and closed intervals in $\mathbb{R}$, i.e., $I = \{a = [\underline{a}, \overline{a}] \mid \underline{a}, \overline{a} \in \mathbb{R} \text{ and } \underline{a} \le \overline{a}\}$.
In [23], Stefanini and Bede presented the gH-difference:
$[\underline{a}, \overline{a}] \ominus_{gH} [\underline{b}, \overline{b}] = [\min\{\underline{a}-\underline{b},\ \overline{a}-\overline{b}\},\ \max\{\underline{a}-\underline{b},\ \overline{a}-\overline{b}\}]$.
In addition, this difference of two intervals always exists. Furthermore, the partial order relation $\preceq_{LU}$ on I is determined as follows: for $a = [\underline{a}, \overline{a}]$ and $b = [\underline{b}, \overline{b}]$ in I, $a \preceq_{LU} b$ if and only if $\underline{a} \le \underline{b}$ and $\overline{a} \le \overline{b}$, and $a \prec_{LU} b$ if and only if $a \preceq_{LU} b$ and $a \ne b$. Let $\mathbb{R}^n$ be the n-dimensional Euclidean space, and let $T \subset \mathbb{R}^n$ be an open set. We call the function $F : T \to I$ an IVF, i.e., $F(t)$ is a closed interval in $\mathbb{R}$ for every $t \in T$. The IVF F can also be denoted as $F = [\underline{F}, \overline{F}]$, where $\underline{F}$ and $\overline{F}$ are real-valued functions with $\underline{F} \le \overline{F}$ on T. Moreover, $\underline{F}$ and $\overline{F}$ are called the endpoint functions of F.

Definition 1 ([24]). Let $F : T \to I$. Then F is said to be gH-symmetrically differentiable at $t_0 \in T$ if there exists $F^s(t_0) \in I$ such that
$\lim_{h \to 0} \dfrac{F(t_0 + h) \ominus_{gH} F(t_0 - h)}{2h} = F^s(t_0)$.

Definition 2 ([24]). Let $F : T \to I$ and $t_0 \in T$. If the limit
$\lim_{h \to 0} \dfrac{F(t_0 + h e_i) \ominus_{gH} F(t_0 - h e_i)}{2h}$
exists in I, then we say that F has the ith partial gH-symmetrically derivative at $t_0$, denoted by $\partial^s_{t_i} F(t_0)$.
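For readers who wish to experiment, the following minimal Python sketch (an added illustration, not from the original paper) implements the gH-difference and the LU comparison; representing an interval as a (lower, upper) pair is an assumption of this sketch.

```python
from typing import Tuple

Interval = Tuple[float, float]   # (lower, upper) with lower <= upper

def gh_difference(a: Interval, b: Interval) -> Interval:
    """gH-difference [a_l, a_u] (-)_gH [b_l, b_u]."""
    d1, d2 = a[0] - b[0], a[1] - b[1]
    return (min(d1, d2), max(d1, d2))

def lu_less_equal(a: Interval, b: Interval) -> bool:
    """a <=_LU b  iff  a_l <= b_l and a_u <= b_u."""
    return a[0] <= b[0] and a[1] <= b[1]

def lu_strictly_less(a: Interval, b: Interval) -> bool:
    """a <_LU b  iff  a <=_LU b and a != b."""
    return lu_less_equal(a, b) and a != b

print(gh_difference((1.0, 4.0), (0.0, 1.0)))   # (1.0, 3.0)
print(lu_less_equal((0.0, 2.0), (1.0, 3.0)))   # True
```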

Definition 3 ([24]). Let $F : T \to I$ be an IVF, and let $\partial^s_{t_i} F$ stand for the partial gH-symmetrically derivative with respect to the ith variable $t_i$. If $\partial^s_{t_i} F(t_0)$ (i = 1, . . . , n) exist on some neighborhood of $t_0$ and are continuous at $t_0$, then F is said to be gH-symmetrically differentiable at $t_0 \in T$. Moreover, we denote by $\nabla^s F(t_0) = (\partial^s_{t_1} F(t_0), \ldots, \partial^s_{t_n} F(t_0))^T$ the symmetric gradient of F at $t_0$.

Theorem 3 ([24]). Let the IVF $F : T \to I$ be continuous in $(t_0 - \delta, t_0 + \delta)$ for some $\delta > 0$. Then F is gH-symmetrically differentiable at $t_0 \in T$ if and only if $\underline{F}$ and $\overline{F}$ are symmetrically differentiable at $t_0$.
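Theorem 3 reduces gH-symmetric differentiation of $F = [\underline{F}, \overline{F}]$ to symmetric differentiation of the endpoint functions. The sketch below (an added illustration with a made-up example function) approximates $F^s(t_0)$ by symmetric difference quotients of the endpoints and then re-orders the two values into an interval.

```python
def gh_symmetric_derivative(f_low, f_up, t0: float, h: float = 1e-6):
    """Approximate F^s(t0) for F = [f_low, f_up] via the symmetric
    difference quotient (f(t0 + h) - f(t0 - h)) / (2h) of each endpoint."""
    d_low = (f_low(t0 + h) - f_low(t0 - h)) / (2.0 * h)
    d_up = (f_up(t0 + h) - f_up(t0 - h)) / (2.0 * h)
    return (min(d_low, d_up), max(d_low, d_up))

# Made-up IVF F(t) = [t**2, t**2 + |t|]; |t| has symmetric derivative 0 at
# t = 0 even though it is not differentiable there.
print(gh_symmetric_derivative(lambda t: t ** 2,
                              lambda t: t ** 2 + abs(t), 0.0))   # ~ (0.0, 0.0)
```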

Definition 4 ([28]). Let $F = [\underline{F}, \overline{F}]$ be an IVF defined on T. We say that F is LU-convex at $t^*$ if
$F(\theta t^* + (1-\theta) t) \preceq_{LU} \theta F(t^*) + (1-\theta) F(t)$
for every $\theta \in [0, 1]$ and $t \in T$.

Now, we introduce the following IVOP:
minimize $F(t)$ subject to $g_i(t) \le 0$, $i = 1, \ldots, m$, $t \in M$, (5)
where $F : M \to I$ is an IVF and $g_i : M \to \mathbb{R}$ are real-valued functions. Let $X = \{t \in M : g_i(t) \le 0,\ i = 1, \ldots, m\}$ be the collection of feasible points of Problem (5), and the set of objective values of primal Problem (5) is indicated by $O_P(F, X) = \{F(t) : t \in X\}$. Moreover, we review the definition of a non-dominated solution to Problem (5):

Definition 5. Let $t^*$ be a feasible solution of Problem (5), i.e., $t^* \in X$. Then $t^*$ is said to be a non-dominated solution of Problem (5) if there exists no $t \in X \setminus \{t^*\}$ such that $F(t) \prec_{LU} F(t^*)$.
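Over a finite set of candidate feasible points, non-dominance can be checked by pairwise LU comparisons. The following sketch is an added illustration with made-up data; it is not one of the paper's examples.

```python
def dominates(fa, fb):
    """fa <_LU fb : the interval fa strictly dominates fb in the LU order."""
    return fa[0] <= fb[0] and fa[1] <= fb[1] and fa != fb

def non_dominated(candidates):
    """Return the feasible points whose objective interval is not strictly
    LU-dominated by the objective interval of any other feasible point."""
    return [t for t, ft in candidates
            if not any(dominates(fs, ft) for s, fs in candidates if s != t)]

# Made-up feasible points with interval objective values F(t).
candidates = [(0.0, (1.0, 2.0)), (1.0, (1.5, 1.8)), (2.0, (2.0, 3.0))]
print(non_dominated(candidates))   # [0.0, 1.0]: neither dominates the other
```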
The KKT sufficient optimality conditions for Problem (5) have been obtained in [24]:

Theorem 4 ([24], Sufficient optimality condition). Assume that $F : M \to I$ is LU-convex and gH-symmetrically differentiable at $t^*$, and $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$ are convex and symmetrically differentiable at $t^*$. If there exists $\mu^* = (\mu_1^*, \ldots, \mu_m^*)^T \in \mathbb{R}^m_+$ such that
$\nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) + \sum_{i=1}^{m} \mu_i^* \nabla^s g_i(t^*) = 0, \quad \mu_i^* g_i(t^*) = 0, \quad i = 1, \ldots, m,$ (7)
then $t^*$ is a non-dominated solution to Problem (5).
The condition (7) in Theorem 4 is satisfied at t = 0 when $\mu_1 = \frac{11}{2}$ and $\mu_2 = 0$. On the other hand, it can be easily verified that t = 0 is a non-dominated solution of Problem (8). Hence, Theorem 4 is verified.
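Assuming condition (7) has the scalarized stationarity form written above, the multipliers can also be searched for numerically. The sketch below is an added illustration; the gradient values are placeholders rather than the data of Problem (8).

```python
import numpy as np
from scipy.optimize import nnls

def kkt_multipliers(grad_F_low, grad_F_up, active_grad_g):
    """Search for mu >= 0 (one entry per active constraint) with
    grad_F_low + grad_F_up + sum_i mu_i * active_grad_g[i] = 0."""
    G = np.column_stack(active_grad_g)               # n x |H|
    rhs = -(np.asarray(grad_F_low) + np.asarray(grad_F_up))
    mu, residual = nnls(G, rhs)
    return mu, residual                              # residual ~ 0 => (7) holds

# Hypothetical one-dimensional data at t* = 0 with a single active constraint.
mu, res = kkt_multipliers([1.25], [1.5], [[-0.5]])
print(mu, res)   # mu = [5.5], residual ~ 0
```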
Note that F is not gH-differentiable at t = 0; hence the sufficient conditions in [24] are properly wider than those in [28].

Generalized Convexity of gH-Symmetrically Differentiable IVFs
The LU-convexity assumption in [28] may be restrictive. For example, the IVF is not LU-convex at t = 0. Inspired by this, we introduce the directional gH-symmetrically derivative and the concepts of generalized convexity for IVFs which will be used in Section 4.

Definition 6.
Let $F : T \to I$ be an IVF, $t_0 \in T$ and $h \in \mathbb{R}^n$. Then F is called directional gH-symmetrically differentiable at $t_0$ in the direction h if there exists $D^s F(t_0 : h) \in I$ such that
$D^s F(t_0 : h) = \lim_{\xi \to 0} \dfrac{F(t_0 + \xi h) \ominus_{gH} F(t_0 - \xi h)}{2\xi}$.
If $t = (t_1, \ldots, t_n)^T$ and $e_i = (0, \ldots, 1, \ldots, 0)^T$ is the ith unit vector, then $D^s F(t : e_i)$ is the partial gH-symmetrically derivative of F with respect to $t_i$ at t.

Theorem 5. If $F : T \to I$ is gH-symmetrically differentiable at $t \in T$ and $h \in \mathbb{R}^n$, then the directional gH-symmetrically derivative exists and
$D^s F(t : h) = \left[\min\{\nabla^s \underline{F}(t)^T h,\ \nabla^s \overline{F}(t)^T h\},\ \max\{\nabla^s \underline{F}(t)^T h,\ \nabla^s \overline{F}(t)^T h\}\right]$.

Proof. Since, by hypothesis, F is gH-symmetrically differentiable at t, the endpoint functions $\underline{F}$ and $\overline{F}$ are symmetrically differentiable at t. For $\xi \ne 0$ we have
$\dfrac{F(t + \xi h) \ominus_{gH} F(t - \xi h)}{2\xi} = \left[\min\{\underline{Q}(\xi), \overline{Q}(\xi)\},\ \max\{\underline{Q}(\xi), \overline{Q}(\xi)\}\right]$,
where $\underline{Q}(\xi)$ and $\overline{Q}(\xi)$ denote the symmetric difference quotients of $\underline{F}$ and $\overline{F}$ along h. Letting $\xi \to 0$ yields the stated formula. Thus, we complete the proof.
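The directional gH-symmetrically derivative can be approximated numerically through the endpoint functions, in the spirit of Theorem 3 and Definition 6. The sketch below is an added illustration with a made-up IVF on $\mathbb{R}^2$.

```python
import numpy as np

def directional_gh_sym_derivative(f_low, f_up, t0, h, xi=1e-6):
    """Approximate D^s F(t0 : h) = lim_{xi->0} (F(t0+xi*h) (-)_gH F(t0-xi*h)) / (2*xi)
    through the endpoint functions of F = [f_low, f_up]."""
    t_plus, t_minus = t0 + xi * h, t0 - xi * h
    d_low = (f_low(t_plus) - f_low(t_minus)) / (2.0 * xi)
    d_up = (f_up(t_plus) - f_up(t_minus)) / (2.0 * xi)
    return (min(d_low, d_up), max(d_low, d_up))

# Made-up IVF on R^2: F(t) = [t1 + t2, t1 + t2 + 1].
f_low = lambda t: t[0] + t[1]
f_up = lambda t: t[0] + t[1] + 1.0
t0, h = np.array([1.0, 2.0]), np.array([1.0, -1.0])
print(directional_gh_sym_derivative(f_low, f_up, t0, h))   # ~ (0.0, 0.0)
```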
Definition 8. The IVF $F : T \to I$ is called symmetric quasi-convex (SQ-convex) at $t_0 \in T$ if F is gH-symmetrically differentiable at $t_0$ and, for all $t \in T$, $F(t) \preceq_{LU} F(t_0)$ implies $\nabla^s \underline{F}(t_0)^T (t - t_0) \le 0$ and $\nabla^s \overline{F}(t_0)^T (t - t_0) \le 0$. F is said to be symmetric quasi-concave (SQ-concave) at $t_0$ if $-F$ is SQ-convex at $t_0$.
Remark 1. When $\underline{F} = \overline{F}$, i.e., F degenerates to a real-valued function, the concepts of SQ-convexity and SP-convexity degenerate to the s-quasiconvexity and s-pseudoconvexity of [35].

KKT Necessary Conditions
The necessary optimality conditions are an important part of optimization theory, because these conditions can be used to exclude feasible solutions which are not optimal, i.e., they narrow down the candidates for solving the problem. From this point of view, using the gH-symmetrically derivative, we establish a KKT necessary optimality condition which is more general than those in [28,29].
In order to obtain the necessary condition for Problem (5), we shall use Slater's constraint qualification [37]. Such a condition is: there exists $t_0 \in M$ such that $g_i(t_0) < 0$ for all $i = 1, \ldots, m$. (10)

Theorem 6 (Necessary optimality condition). Assume that $F : M \to I$ is LU-convex and gH-symmetrically differentiable, and $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$ are symmetrically differentiable and convex on M. Suppose $H = \{i : g_i(t^*) = 0\}$. If $t^*$ is a non-dominated solution to Problem (5) and the following conditions are satisfied:
(A1) For every $i \in H$ and for all $y \in \mathbb{R}^n$, there exists a positive real number $\xi_i$ such that, whenever $0 < \xi < \xi_i$ and $\nabla^s g_i(t^*)^T y < 0$, we have $\nabla^s g_i(t^* + \xi y)^T y < 0$;
(A2) The set X satisfies Slater's constraint qualification, and for $i \in H$ and all $h \in \mathbb{R}^n$, $D^+ \underline{F}(t^* : h) \ge 0$ implies $D^s \underline{F}(t^* : h) \ge 0$, or $D^+ \overline{F}(t^* : h) \ge 0$ implies $D^s \overline{F}(t^* : h) \ge 0$, where $D^+ \underline{F}$ and $D^- \underline{F}$ ($D^+ \overline{F}$ and $D^- \overline{F}$) are the right-sided and left-sided directional derivatives of $\underline{F}$ ($\overline{F}$);
then there exists $\mu^* \in \mathbb{R}^m_+$ such that condition (7) in Theorem 4 holds.
Proof. Suppose the above conditions are satisfied. Assume there exists $w \in \mathbb{R}^n$ such that: Since X satisfies Slater's constraint qualification, by Equation (10) there exists $t_0 \in X$ such that $g_i(t_0) < 0$ $(i = 1, \ldots, m)$. Then we have: Combining Theorem 1 and the convexity of $g_i$, we have by inequality (11), we get for all $\rho > 0$. By hypothesis (A1), there exists $\xi_i > 0$ such that Since $t^*$ is a non-dominated solution to Problem (5), there exists no feasible solution t such that $F(t) \prec_{LU} F(t^*)$, i.e., which contradicts inequality (11). Thus, inequality (11) has no solution. By Theorem 2, there exist $\mu_i^* \ge 0$ such that condition (7) in Theorem 4 holds. The proof is complete.
On the other hand, we have: Hence, Theorem 6 is verified.

Wolfe Type Duality
In this section, we consider the Wolfe dual Problem (14) of Problem (5) as follows:
maximize $F(t) + \sum_{i=1}^{m} \mu_i g_i(t)$
subject to $\nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \sum_{i=1}^{m} \mu_i \nabla^s g_i(t) = 0$, $\mu = (\mu_1, \ldots, \mu_m)^T \ge 0$, $t \in M$. (14)
For convenience, we write $L(t, \mu) = F(t) + \sum_{i=1}^{m} \mu_i g_i(t)$. We denote by Y the feasible set of dual Problem (14) and by $O_D(L, Y)$ the set of all objective values of Problem (14).
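Under the dual formulation written above (which assumes the scalarized stationarity constraint mirroring condition (7)), dual feasibility and the dual objective value can be checked numerically as in the following sketch. The gradients, multipliers and interval values are placeholders, not data from the paper.

```python
import numpy as np

def wolfe_dual_objective(F_t, g_t, mu):
    """L(t, mu) = F(t) + sum_i mu_i * g_i(t): an interval shifted by a real number."""
    shift = float(np.dot(mu, g_t))
    return (F_t[0] + shift, F_t[1] + shift)

def wolfe_dual_feasible(grad_F_low, grad_F_up, grad_g, mu, tol=1e-8):
    """Check the stationarity constraint of Problem (14) and mu >= 0."""
    residual = np.asarray(grad_F_low) + np.asarray(grad_F_up) \
               + np.asarray(grad_g).T @ np.asarray(mu)
    return bool(np.all(np.asarray(mu) >= 0) and np.linalg.norm(residual) < tol)

# Hypothetical data at a point t: F(t) = [2, 3], g(t) = (-1, 0), mu = (0, 2.5).
print(wolfe_dual_objective((2.0, 3.0), np.array([-1.0, 0.0]), np.array([0.0, 2.5])))
print(wolfe_dual_feasible([1.0], [1.5], [[-0.5], [-1.0]], [0.0, 2.5]))   # True
```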
Next, we discuss the solvability of the Wolfe primal and dual problems.

Lemma 1. Suppose F is LU-convex and gH-symmetrically differentiable and each $g_i$ is convex and symmetrically differentiable on M. If $\tilde{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the statements (B1) and (B2) hold true. Moreover, the statements still hold true under strict inequality.
Proof. Suppose $\tilde{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively. Since F is LU-convex, we have: Thus, the statement (B1) holds true. On the other hand, if $F(\tilde{t}) - F(t) > 0$, then The other statements can be proved by using similar arguments.

Lemma 2. Under the same assumptions as in Lemma 1, if $\tilde{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the statements (C1) and (C2) hold true. Moreover, the statements still hold true under strict inequality.
Proof. Suppose $F(\tilde{t}) \le F(t)$; then we have: Thus, the statement (C1) holds true. On the other hand, if $F(\tilde{t}) < F(t)$, then: The proof of (C2) is similar to that of (C1), so we omit it.
Proof. If $F(\tilde{t})$ and $F(t)$ are comparable, then by Lemmas 1 and 2 we obtain the statement (D1); if $F(\tilde{t})$ and $F(t)$ are not comparable, then we have: By Lemmas 1 and 2, we obtain that: The proof is complete.
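The duality proofs above split into the cases where the two objective intervals are or are not LU-comparable. A small added sketch of this comparability test, using the same (lower, upper) interval representation as the earlier sketches:

```python
def lu_comparable(a, b):
    """Two intervals are LU-comparable if a <=_LU b or b <=_LU a."""
    a_le_b = a[0] <= b[0] and a[1] <= b[1]
    b_le_a = b[0] <= a[0] and b[1] <= a[1]
    return a_le_b or b_le_a

print(lu_comparable((1.0, 2.0), (1.5, 3.0)))   # True:  (1,2) <=_LU (1.5,3)
print(lu_comparable((1.0, 3.0), (1.5, 2.0)))   # False: neither order holds
```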
Proof. Suppose $(t^*, \mu^*)$ is not a non-dominated solution to Problem (14); then there exists $(t, \mu) \in Y$ such that: Since $L(t^*, \mu^*) \in O_P(F, X)$, there exists $\tilde{t} \in X$ such that: According to Theorem 7, if $F(\tilde{t})$ and $F(t)$ are comparable, then we have If $F(\tilde{t})$ and $F(t)$ are not comparable, then: Both of these results contradict Equation (20). Thus, we complete the proof.
Proof. The proof is similar to Theorem 8, so we omit it.
Proof. The proof follows from Theorems 8 and 9.
By Corollary 1, there exists $\mu^* \in \mathbb{R}^m_+$ such that $(t^*, \mu^*)$ is a solution to Problem (14). The proof is complete. On the other hand, the IVF F in Example 2 satisfies the conditions (A1) and (A2), which verifies Theorem 10.

The optimality Conditions with Generalized Convexity
In this section, we use the concepts of SP-convexity and SQ-convexity, which are less restrictive than LU-convexity, to obtain some generalized optimality theorems for Problem (5).
Theorem 11 (Sufficient condition). Suppose F is SP-convex and $g_i$ is s-quasiconvex at $t^*$ for $i \in H$. If $t^* \in X$ and condition (7) in Theorem 4 holds for some $\mu^* \in \mathbb{R}^m_+$, then $t^*$ is a non-dominated solution to Problem (5).
Proof. Assume that condition (7) in Theorem 4 holds for some $\mu^* \ge 0$. We have Since $g_i(t) \le g_i(t^*)$ and $g_i$ is s-quasiconvex at $t^*$ for $i \in H$, we obtain $\nabla^s g_i(t^*)^T (t - t^*) \le 0$. Thus: Thanks to the SP-convexity of F, we have: Then $t^*$ is an optimal solution of the real-valued objective function $\underline{F} + \overline{F}$ subject to the same constraints as in Problem (5). Suppose $t^*$ is not a non-dominated solution of Problem (5); then there exists $t \in X$ such that: which contradicts Equation (22). The proof is complete.
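The key step of this proof is that condition (7) turns $t^*$ into a minimizer of the real-valued scalarization $\underline{F} + \overline{F}$ over the feasible set. The following sketch (an added illustration with a made-up one-dimensional instance, not one of the paper's examples) checks this scalarization numerically on a grid.

```python
import numpy as np

# Made-up one-dimensional instance: F(t) = [t**2, t**2 + 1], g(t) = t - 1 <= 0.
f_low = lambda t: t ** 2
f_up = lambda t: t ** 2 + 1.0

# Scalarized real-valued objective used in the proof of Theorem 11.
phi = lambda t: f_low(t) + f_up(t)

feasible = np.linspace(-2.0, 1.0, 3001)        # grid of points with g(t) <= 0
t_star = feasible[np.argmin(phi(feasible))]
print(float(t_star))                            # ~ 0.0: minimizer of F_low + F_up
```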
We observe that F is not gH-differentiable at t = 0, and F is not LU-convex at t = 0, with: However, F is SP-convex at t = 0 and $g_i$ is s-quasiconvex at t = 0 for $i \in H$. Furthermore, F is gH-symmetrically differentiable at t = 0 with $F^s(0) = [\frac{5}{4}, \frac{3}{2}]$.
Moreover, we have: On the other hand, t = 0 is a non-dominated solution to Problem (23), which verifies Theorem 11.

Theorem 12 (Necessary condition). Suppose F is SQ-concave at $t^*$ and $g_i$ is s-pseudoconcave at $t^*$ for $i \in H$. If $t^*$ is a non-dominated solution to Problem (5) and $g_i$ is lower semicontinuous on M for all $i \in H$, then $(t^*, \mu^*)$ satisfies condition (7) in Theorem 4 with some $\mu^* \ge 0$.
Proof. Let $X_1 = \{t \in X : g_i(t) < 0 \text{ for all } i \in H\}$. The set $X_1$ is relatively open since $g_i$ is lower semicontinuous on M for each $i \in H$. Since $t^* \in X_1$, there is some $\alpha_0 > 0$ such that, for any $y \in \mathbb{R}^n$, $t^* + \alpha y \in X_1$ whenever $0 < \alpha < \alpha_0$.
Suppose $0 < \alpha < \alpha_0$ and $\nabla^s g_i(t^*)^T y \le 0$ for $i \in H$; then $\nabla^s g_i(t^*)^T (\alpha y) \le 0$ for $i \in H$. According to the s-pseudoconcavity of $g_i$ at $t^*$, we have $g_i(t^* + \alpha y) \le g_i(t^*)$.
Proof. The proof is similar to the proof of Theorem 10.