gH-Symmetrical Derivative of Interval-Valued Functions and Applications in Interval-Valued Optimization

Abstract: In this paper, we present the gH-symmetrical derivative of interval-valued functions and its properties. As an application, we apply this new derivative to investigate the Karush–Kuhn–Tucker (KKT) conditions of interval-valued optimization problems. Some examples are worked out to illustrate the obtained results.


Introduction
In modern times, optimization problems with uncertainty have received considerable attention and are of great value in economics and control (e.g., [1][2][3][4]). From this point of view, Ishibuchi and Tanaka [5] introduced interval-valued optimization as an attempt to handle problems with imprecise parameters. Since then, a collection of papers by Chanas, Kuchta, Bitran and others (e.g., [6][7][8]) has offered many different approaches to this subject. For more profound results and applications, see [9][10][11][12][13][14][15]. In addition, the importance of derivatives in nonlinear interval-valued optimization problems cannot be ignored. Toward this end, Wu [16][17][18] discussed interval-valued nonlinear programming problems and applied the H-derivative to interval-valued Karush–Kuhn–Tucker (KKT) optimization problems. Moreover, building on the results of Chalco-Cano [19], the gH-differentiability was used to study interval-valued KKT optimality conditions. For details of the above-mentioned derivatives, we refer the interested reader to [20,21].
Motivated by Wu [17] and Chalco-Cano [19], we introduce the gH-symmetrical derivative which is more general than the gH-derivative. Based on this derivative and its properties, we give KKT optimality conditions for interval-valued optimization problems.
The paper is organized as follows. In Section 2, we recall some preliminaries. In Section 3, we put forward some concepts and theorems concerning the gH-symmetrical derivative. In Section 4, new KKT-type optimality conditions are derived and some illustrative examples are given. Finally, Section 5 contains some conclusions.

Preliminaries
Firstly, let R denote the set of real numbers and Q the set of rational numbers. Let I denote the set of closed real intervals; equipped with the Hausdorff metric D, the pair (I, D) is a complete metric space. The partial order ⪯_LU on I is defined by c ⪯_LU d if and only if c̲ ≤ d̲ and c̄ ≤ d̄, where c = [c̲, c̄] and d = [d̲, d̄].

Definition 1 ([21]). The gH-difference of c, d ∈ I is the interval e such that c = d + e or d = c + (−1)e. This gH-difference of two intervals always exists and is equal to

c ⊖_gH d = [min{c̲ − d̲, c̄ − d̄}, max{c̲ − d̲, c̄ − d̄}].  (1)

Proposition 1 ([22]). We recall some properties of intervals c, d and e.
(1) The length of an interval c is defined by l(c) = c̄ − c̲.

Let f : (a, b) → I be an interval-valued function, and write f(t) = [f̲(t), f̄(t)]. The functions f̲ and f̄ are called the endpoint functions of f. In [21], Stefanini and Bede introduced the gH-derivative as follows.
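The interval operations recalled above can be sketched in code. The snippet below is a minimal illustration, assuming intervals are modeled as pairs (lower, upper); the names gh_difference, lu_leq and length are illustrative, not from the paper.

```python
# Sketch of the interval operations from Section 2 (names are illustrative).
# An interval c in I is modeled as a pair (lo, hi) with lo <= hi.

def gh_difference(c, d):
    """gH-difference per Eq. (1): endpoints are the min and max of the
    coordinatewise endpoint differences."""
    lo = min(c[0] - d[0], c[1] - d[1])
    hi = max(c[0] - d[0], c[1] - d[1])
    return (lo, hi)

def lu_leq(c, d):
    """Partial order c <=_LU d: both endpoints are dominated."""
    return c[0] <= d[0] and c[1] <= d[1]

def length(c):
    """l(c) = upper endpoint minus lower endpoint."""
    return c[1] - c[0]
```

For instance, gh_difference((1, 3), (0, 1)) returns (1, 2), and the gH-difference exists even when one interval is contained in the other, e.g. gh_difference((0, 3), (1, 2)) returns (-1, 1).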

Main Results
Now, we introduce the gH-symmetrical derivative and some corresponding properties.
For convenience, let D_I((a, b), I) and SD_I((a, b), I) denote the collections of gH-differentiable and gH-symmetrically differentiable interval-valued functions on (a, b), respectively.

Lemma 1. Let c, d, e ∈ I. If (l(c) − l(d))(l(d) − l(e)) < 0, then we have

Proof.
The following Theorem 1 shows the relation between D_I((a, b), I) and SD_I((a, b), I).
However, the converse is not true.
Proof. Fix t ∈ (a, b) and assume f′(t) exists. Put

Applying Proposition 1 and Lemma 1, we obtain

Hence, if K ≥ 0, by (5) we have

According to Definition 5, f′_s(t) exists and

Thus, f′_s(t) exists and

Therefore, f is gH-symmetrically differentiable in view of (6) and (7). Conversely, we give a counterexample. Let

does not exist. Then f_1 is not gH-differentiable at t = 1.
Conversely, suppose f̲ and f̄ are symmetrically differentiable at t_0.
Next, we study the gH-symmetrical derivative of f : M ⊂ R^n → I, where M is an open set.
The following theorem illustrates the relation between symmetric gradients and partial gH-symmetrical derivatives.
Proof. By Definition 6, substituting h_j = 0 (j ≠ i) and taking h_i → 0 in M, the result follows at once.

In Example 1, the symmetric gradient of f at the point (0, 0) is ∇_s^g f(0, 0) = (0, 0).
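Since the extraction does not preserve the formula of Example 1, the computation of a symmetric gradient can be sketched with hypothetical endpoint functions −|t_1| and |t_1| (an assumption chosen so that, as in Example 1, the ordinary partial derivatives fail at the origin while the symmetric ones vanish).

```python
# Numerical check that the partial symmetric difference quotients at (0, 0)
# vanish for an interval function with endpoint functions -|t1| and |t1|
# (a hypothetical stand-in for Example 1, whose exact formula is not
# reproduced above).

def sym_partial(u, t, i, h):
    """Symmetric difference quotient of u in coordinate i at point t."""
    tp, tm = list(t), list(t)
    tp[i] += h
    tm[i] -= h
    return (u(tp) - u(tm)) / (2 * h)

lower = lambda t: -abs(t[0])
upper = lambda t: abs(t[0])

grad_lower = [sym_partial(lower, (0, 0), i, 1e-3) for i in (0, 1)]
grad_upper = [sym_partial(upper, (0, 0), i, 1e-3) for i in (0, 1)]
```

Both quotient vectors are exactly zero, consistent with a symmetric gradient of (0, 0), even though |t_1| has no ordinary partial derivative in t_1 at the origin.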

Remark 2.
The gradient of f in [19] is more restrictive than the symmetric gradient. For instance, the partial derivative (∂f/∂t_1)_g(0, 0) does not exist in Example 1, so we cannot obtain the gradient at (0, 0) using the gH-derivative.

Mathematical Programming Applications
Now, we discuss the following interval-valued optimization problem (IVOP1): where g_1, g_2, . . . , g_m : M ⊂ R^n → R are symmetrically differentiable and convex on M, M is an open and convex set, and f : M → I is LU-convex (see [19], Definition 8). We then study the LU-solutions (see [17], Definition 5.1) of problem (IVOP1).
Proof. We define f_l(t) = λ_1 f̲(t) + λ_2 f̄(t). Since f is LU-convex and gH-symmetrically differentiable at t*, f_l is convex and symmetrically differentiable at t*, and we have the following conditions

By Theorem 3.1 of [26], t* is an optimal solution of the real-valued objective function f_l subject to the same constraints as problem (IVOP1), i.e.,

Next, we prove the claim by contradiction. Assume t* is not an LU-solution of (IVOP1); then there exists t ∈ M such that f(t) ≺_LU f(t*), i.e.,

Therefore, we obtain f_l(t) < f_l(t*), which is a contradiction. This completes the proof.
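The scalarization step of this proof can be sketched numerically. The objective f(t) = [t², t² + |t|], the weights λ_1 = λ_2 = 0.5, and the candidate point t* = 0 below are all assumptions chosen for illustration; the sketch checks that minimality of the scalarized function f_l is consistent with LU-minimality of f.

```python
# Scalarization step in the proof, sketched for a hypothetical LU-convex
# objective f(t) = [t**2, t**2 + abs(t)] with weights lam1 = lam2 = 0.5
# and candidate point t_star = 0 (all assumptions, not the paper's example).

lam1, lam2 = 0.5, 0.5
f_lower = lambda t: t ** 2
f_upper = lambda t: t ** 2 + abs(t)
f_l = lambda t: lam1 * f_lower(t) + lam2 * f_upper(t)
f = lambda t: (f_lower(t), f_upper(t))

def lu_strictly_less(c, d):
    # c <_LU d: both endpoints dominated, with the intervals not equal.
    return c[0] <= d[0] and c[1] <= d[1] and c != d

t_star = 0.0
grid = [i / 10 for i in range(-20, 21)]

# t_star minimizes the scalarized objective f_l over the sample grid ...
f_l_minimal = all(f_l(t) >= f_l(t_star) for t in grid)
# ... and, consistently, no sampled point LU-dominates f(t_star).
no_lu_dominator = not any(lu_strictly_less(f(t), f(t_star)) for t in grid)
```

This mirrors the contradiction argument: a point t with f(t) ≺_LU f(t*) would force f_l(t) < f_l(t*), contradicting optimality of t* for f_l.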

Remark 3. Note Theorem 4 in
Applying Theorem 5, we have the following result.
As shown in Example 1, the symmetric gradient is more general than the gradient of f based on the gH-derivative, so we derive new KKT conditions for (IVOP1) using the symmetric gradient of an interval-valued function given in Definition 6.

Theorem 6. Under the same assumptions as Theorem 5, suppose the following KKT conditions hold

Then t* is an optimal LU-solution of problem (IVOP1).
Proof. By Theorem 3, the equation ∇_s^g f(t*) + ∑_{i=1}^{m} µ_i ∇_s g_i(t*) = 0 can be rewritten as

which implies

where ν_i = 2µ_i (i = 1, . . . , m). Then all conditions of Theorem 5 are satisfied. This completes the proof.
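The multiplier rescaling ν_i = 2µ_i can be checked on a small numeric instance. The one-dimensional data below (objective endpoints t² and 2t², constraint g(t) = 1 − t ≤ 0 active at t* = 1) and the modeling of the symmetric gradient as the average of the endpoint gradients are assumptions made for this sketch.

```python
# Checking the multiplier rescaling nu_i = 2*mu_i from the proof of Theorem 6
# on a hypothetical 1-D instance (an assumption, not the paper's example):
# f(t) = [t**2, 2*t**2] with constraint g(t) = 1 - t <= 0, active at t* = 1.

grad_f_lower = 2.0   # d/dt t**2 at t = 1
grad_f_upper = 4.0   # d/dt 2*t**2 at t = 1
grad_g = -1.0        # d/dt (1 - t)

# Stationarity with the symmetric gradient modeled as the endpoint average:
grad_sym = 0.5 * (grad_f_lower + grad_f_upper)
mu = -grad_sym / grad_g          # solves grad_sym + mu * grad_g = 0

# Multiplying the stationarity equation through by 2 gives the
# endpoint-sum form with nu = 2 * mu:
nu = 2 * mu
residual = grad_f_lower + grad_f_upper + nu * grad_g
```

The residual of the rescaled equation is exactly zero, i.e., the endpoint-sum stationarity condition holds with the doubled multiplier.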

Remark 4.
It is worth noting that Theorem 9 of [19] cannot solve problem (IVOP3), since f is not gH-differentiable at 0. So Theorem 6 is more general than Theorem 9 of [19].

Remark 5.
Comparing Example 2 with Example 3, it is easy to see that Theorem 5 is more general than Theorem 6. Nonetheless, Theorem 6 can be very effective for obtaining the solution of (IVOP1) in some cases.

Conclusions and Further Research
We defined the gH-symmetrical derivative of interval-valued functions, which is more general than the gH-derivative. In addition, we generalized some results of Wu [17] and Chalco-Cano [19] by establishing sufficient optimality conditions for optimization problems involving gH-symmetrically differentiable objective functions. The symmetric gradient of interval-valued functions is more general and more robust for optimization problems. However, equality constraints are not considered in this paper; they could be handled by a methodology similar to the one proposed here. Moreover, the constraint functions in this paper are still real-valued; in future research, we may extend them to interval-valued functions. We may also study the symmetric integral and further properties of interval-valued functions.