1. Introduction
Robust optimization is an important deterministic technique for studying optimization problems with data uncertainty, in which the computed solution is protected against the uncertainty; the field has grown significantly in recent years, see [1,2,3,4,5,6]. Optimization theory, which prominently includes multi-objective optimization, focuses on finding globally optimal solutions or globally efficient solutions. However, in real-world situations where solutions are highly susceptible to perturbations of the variables, we may not always be able to identify global optimal solutions. To reduce sensitivity to variable perturbations under these conditions, we look for robust solutions instead.
The set-valued optimization problem (SOP):

has been widely studied by scholars, where M is a closed and convex subset of a real topological linear space X, and are given functions. Set-valued optimization is a thriving research field with numerous applications, for example in risk management [7,8], statistics [9], and other areas. Hamel and Heyde [7] defined set-valued (convex) measures of risk and their acceptance sets, and gave dual representation theorems. Hamel et al. [8] defined set-valued risk measures for conical market models and gave primal and dual representation results. Hamel and Kostner [9] discussed relationships to families of univariate quantile functions and to depth functions, and introduced a corresponding Value at Risk for multivariate random variables, as well as stochastic orders, via the set-valued approach. The vectorial criterion and the set criterion are the two different forms of solution criteria for set-valued optimization problems, and each criterion has been studied independently. Set-valued optimization deals with the challenge of minimizing a function whose value at a point is a set rather than a single point. Since a set cannot be minimized by a total order relation, it is necessary to specify what it means to minimize a set-valued objective function. The works [10,11,12] introduced preorders to compare sets. These preorders enable the formulation of set-valued optimization problems pertaining to the robustness of multi-objective optimization problems. Eichfelder and Jahn [10] presented different optimality notions, such as minimal, weakly minimal, strongly minimal and properly minimal elements, in a pre-ordered linear space and discussed the relations among these notions. Young [11] introduced the upper set less relation and the lower set less relation and then used these set relations to analyze the upper and lower limits of sequences of real numbers. Kuroiwa et al. [12] referred to the upper-type set relation and considered some duality theorems for a set optimization problem. Furthermore, six other forms of set relations [13] were also used by Kuroiwa et al. [12] to solve set optimization problems. Under generalized differentiability assumptions, Wei et al. [14] used a separation scheme to construct some robust necessary conditions for uncertain optimization problems. By using a constraint qualification and a regularity condition, Wang et al. [15] developed weak and strong KKT robust necessary conditions for a nonconvex nonsmooth uncertain multiobjective optimization problem under upper semi-continuity assumptions.
Rockafellar [16] first introduced subdifferential concepts for convex functions. Recently, many authors have generalized the subdifferential of a vector-valued map to set-valued maps [17,18]. There are two main approaches to defining the subdifferential of set-valued mappings: one is to define the subdifferential via derivatives of set-valued maps [17], and the other is to define it in algebraic form [18,19,20,21,22]. Tanino [18] pioneered conjugate duality for vector optimization problems and introduced weak efficient points of a set to provide a weak subdifferential for set-valued mappings. Some properties of this weak subdifferential were studied by Sach [19]. By using an algebraic form, Yang [20] defined a weak subdifferential for set-valued mappings, proved an extension of the Hahn-Banach theorem, and discussed the existence of weak subgradients. Chen and Jahn [21] introduced a kind of weak subdifferential that is more powerful than the weak subdifferential of [20]; using it, they established a sufficient optimality condition for set-valued optimization problems. Borwein [22] introduced a strong subgradient and proved a Lagrange multiplier theorem and a Sandwich theorem for convex maps. Peng et al. [23] proved the existence of the Borwein-strong subgradient and the Yang-weak subgradient for set-valued maps and presented a new Lagrange multiplier theorem and a new Sandwich theorem for set-valued maps. Li and Guo [24] investigated properties of the weak subdifferential first proposed in [21], as well as necessary and sufficient optimality conditions for set-valued optimization problems. Hernández and Rodríguez-Marín [25] presented a new definition of the strong subgradient for set-valued mappings, which is stronger than the weak subgradient of set-valued mappings introduced by Chen and Jahn [21]. Long et al. [26] obtained two existence theorems for the weak subgradients of set-valued mappings described in [21] and deduced several properties of the weak subdifferential for set-valued mappings. İnceoğlu [27] defined a second-order weak subdifferential and examined some of its properties.
Recently, duality theory in the face of data uncertainty has received a great deal of attention, owing to the prevalence of uncertainty in many real-world optimization problems. Suneja et al. [28] used Clarke's generalized gradients to construct sufficient optimality criteria for vector optimization problems and strong/weak duality results between the primal problem and its Mond-Weir type dual problem. Chuong and Kim [29] established sufficient conditions for (weakly) efficient solutions of a nonsmooth semi-infinite multiobjective optimization problem, proposed Wolfe and Mond-Weir type dual problems via the limiting subdifferential of locally Lipschitz functions, and explored weak and strong duality. By means of multipliers and limiting subdifferentials of the related functions, Chuong [30] established necessary/sufficient optimality conditions for robust (weakly) Pareto solutions of a robust multiobjective optimization problem involving nonsmooth/nonconvex real-valued functions; in addition, a dual (robust) multiobjective problem to the primal one was addressed, and weak/strong duality was explored. By virtue of the subdifferential [31], Sun et al. [32] obtained optimality conditions and established Wolfe type robust duality between an uncertain optimization problem and its uncertain dual problem under continuity and cone-convex-concavity conditions.
To the best of our knowledge, only a few solution concepts for uncertain set-valued optimization problems based on set-order relations have been proposed. Moreover, there is very little literature on optimality conditions and duality theorems for set-based robust efficient solutions of uncertain set-valued optimization problems in terms of the second-order strong subdifferential of a set-valued mapping. Recently, Som and Vetrivel [33] introduced robustness for set-valued optimization to generalize some existing concepts of robustness for scalar and vector-valued optimization, and they followed the set approach for solutions to set-valued optimization problems.
To weaken the continuity and cone-convex-concavity conditions of [15,32], and inspired by the subdifferentials in [20,22] and the set-order relations in [34], we introduce a new second-order strong subdifferential of set-valued mappings and define the set-based robust efficient solution for an uncertain set-valued optimization problem. Meanwhile, by using the second-order strong subdifferential of set-valued maps, we put forward a Wolfe type dual problem and investigate the robust weak duality and robust strong duality of set-based robust efficient solutions for uncertain set-valued optimization problems.
This paper is organized as follows. In Section 2, we briefly review the basic concepts and introduce a new second-order strong subdifferential of a set-valued map. In Section 3, we derive some important properties of this subdifferential. In Section 4, by means of the second-order strong subdifferential of set-valued mappings, we obtain a necessary and sufficient condition for set-based robust efficient solutions to the uncertain set-valued optimization problem. The robust weak duality and robust strong duality for the uncertain set-valued optimization problem are established in Section 5. Section 6 gives a short conclusion of the paper.
2. Preliminaries and Definitions
Throughout the paper, let X and Y be two real topological linear spaces with topological dual spaces $X^{*}$ and $Y^{*}$, respectively, and let the origins of X and Y be denoted by $0_{X}$ and $0_{Y}$, respectively. Let $K\subseteq Y$ be a solid closed convex pointed cone. The dual cone of K is defined by
$$K^{*}:=\{y^{*}\in Y^{*}\ :\ \langle y^{*},y\rangle\geq 0\ \text{for all}\ y\in K\}.$$
Let $n$ be a natural number. Let D be a nonempty subset. $\operatorname{cl}D$ and $\operatorname{int}D$ denote the closure and interior of D, respectively.
Let M be a subset of X and $H:M\rightarrow 2^{Y}$ be a set-valued map. The domain, graph and epigraph of H are defined, respectively, by
$$\operatorname{dom}H:=\{x\in M\ :\ H(x)\neq\emptyset\},\qquad \operatorname{graph}H:=\{(x,y)\in M\times Y\ :\ y\in H(x)\},$$
and
$$\operatorname{epi}H:=\{(x,y)\in M\times Y\ :\ y\in H(x)+K\}.$$
A partial order relation on the space Y is induced by the cone K.
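In the usual convention, which we assume here, this order is given by
$$y_{1}\leq_{K}y_{2}\ \Longleftrightarrow\ y_{2}-y_{1}\in K,\qquad y_{1},y_{2}\in Y.$$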
Definition 1 ([34]). Let be arbitrarily chosen sets.
- (i)
The lower set less order relation is defined by
- (ii)
The upper set less order relation is defined by
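A standard formulation of these two relations, in the Kuroiwa-style convention of [34] (we write A and B for the compared nonempty subsets of Y, and $\preceq^{l}_{K}$, $\preceq^{u}_{K}$ for the lower and upper relations; this notation is an assumption here), is
$$A\preceq^{l}_{K}B\ \Longleftrightarrow\ B\subseteq A+K,\qquad A\preceq^{u}_{K}B\ \Longleftrightarrow\ A\subseteq B-K.$$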
Definition 2 ([35]). Let be arbitrarily chosen sets. Then the certainly less order relation is defined by , or equivalently,  or , whenever .

Definition 3 ([31]). Let M be a nonempty subset of X. M is said to be convex if for any and for all , .

Definition 4 ([31]). Let M be a nonempty convex subset of X. is called K-convex if for any and for all , .

Definition 5. A function has a global minimum at if .

Definition 6 ([22]). Let be a set-valued map and be K-convex, , and . The set is called the Borwein-strong subdifferential of H at .

Enlightened by the Borwein-strong subdifferential in [22,23], we put forward a new notion of second-order strong subdifferential for a set-valued map.
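For orientation, commonly used formulations of the notions appearing in Definitions 4 and 6 read as follows (a sketch under standard conventions, assuming $H:M\rightarrow 2^{Y}$ and writing $L(X,Y)$ for the space of continuous linear operators from X to Y):
$$H\ \text{is}\ K\text{-convex}\ \Longleftrightarrow\ \lambda H(x_{1})+(1-\lambda)H(x_{2})\subseteq H(\lambda x_{1}+(1-\lambda)x_{2})+K\quad\text{for all}\ x_{1},x_{2}\in M,\ \lambda\in[0,1],$$
and, for $(\bar{x},\bar{y})\in\operatorname{graph}H$, the Borwein-strong subdifferential can be written as
$$\partial H(\bar{x},\bar{y})=\left\{T\in L(X,Y)\ :\ H(x)-T(x)\subseteq\bar{y}-T(\bar{x})+K\ \text{for all}\ x\in M\right\}.$$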
Definition 7. Let be a set-valued map, , and . Then is said to be a second-order strong subgradient of H at if . The set is said to be the second-order strong subdifferential of H at . If , then H is said to be second-order strong subdifferentiable at .

The following example illustrates Definition 7.
Example 1. Let be a set-valued map with for any . Take . A simple calculation shows that . Then we obtain

Remark 1. Let be a set-valued map. If the condition is not satisfied, Definition 7 is not complete. The following example illustrates this case.
Example 2. Let be a set-valued map with for any . Take . A simple calculation shows that . Then it follows from Definition 7 that ξ does not exist, i.e., . Therefore, the condition is necessary in Definition 7.

Remark 2. Let be a set-valued map. Obviously, if the second-order strong subdifferential exists, then . However, may not necessarily be true. We now give an example to illustrate this case.
Example 3. Let be a set-valued map, and let for any . Take . A simple calculation shows that . Then we have and . Thus, , but .

3. Properties of a Second-Order Strong Subdifferential of Set-Valued Maps
In this section, we present some properties of a second-order strong subdifferential of set-valued maps. Firstly, we introduce the following lemma.
Lemma 1. Let , and . Set . Then

Proof. Let , and . Since and is a linear function, . This proof is complete. □
Theorem 1. Let be a set-valued map, , and . Then the set is convex.
Proof. If , then there is nothing to prove. Suppose . Let , and . Then, and , i.e., and . By Lemma 1, it follows from (1) and (2) that . Thus, . This proof is complete. □
Theorem 2. Let be a set-valued map, , and . Let H be second-order strong subdifferentiable at . Then H has a global minimum at if and only if .
Proof. Since H has a global minimum at , . Then, , which implies that .

Conversely, let . Then, by Definition 7, we obtain , which implies that for all , . Therefore, according to Definition 5, H has a global minimum at . This proof is complete. □
Theorem 3. Let be a set-valued map and . Let , and . If H and are second-order strong subdifferentiable at and , respectively, then

Proof. Let . Then . This completes the proof. □
Now, we provide an illustration of Theorem 3.
Example 4. Let be a set-valued map, and let . Take . A simple calculation shows that . Then for any , we obtain and . Therefore, .

Theorem 4. Let H and be set-valued maps, , , , and . If H and Q are second-order strong subdifferentiable at and , respectively, then

Proof. Let and . Then, and , i.e., and . According to Lemma 1, it follows from (3) and (4) that . Thus, , i.e., . Therefore, . This proof is complete. □
Corollary 1. Let be set-valued maps, , , and . If is second-order strong subdifferentiable at , , then

Remark 3. Let H and be set-valued maps. If H and Q are strong subdifferentiable at and , respectively, then . However, cannot be omitted in Theorem 4. We consider the following examples to illustrate Theorem 4 and Remark 3.
Example 5. Let H and be set-valued maps with and . Take , and . A simple calculation shows that and . Then we obtain and , so . Moreover, and . In fact, and . Therefore, and .
Example 6. Let H and be set-valued maps, and let , . Take . A simple calculation shows that and . Then we obtain and , so . Moreover, . Therefore, .

4. The Optimality Condition for the Uncertain Set-Valued Optimization Problem
Problem (SOP) has been studied extensively without taking data uncertainty into account. However, in most real-world applications, optimization problems are subject to uncertainty. To define an uncertain set-valued optimization problem (USOP), we assume that the uncertainties in the objective function are given as scenarios from a known uncertainty set , where is an uncertain parameter, . The following uncertain set-valued optimization problem (USOP) can be used to describe problem (SOP) when there is data uncertainty in both the objectives and the constraints:

where and , are given functions, and the uncertain parameter belongs to a compact and convex uncertainty set .
Let be a set-valued map defined as follows:

In this paper, we investigate problem (USOP) using a robust approach. As is well known, there is no proper method to solve problem (USOP) directly, so it is necessary to replace it by a deterministic version, that is, the robust counterpart of problem (USOP). By this means, various concepts of robustness have been proposed on the basis of different robust counterparts to describe the preferences of decision makers.
The most celebrated and most studied robustness concept is worst-case robustness (also known as min-max robustness or strict robustness in the literature). The idea is to minimize the worst possible objective function value and to search for a solution that is good enough in the worst case. Meanwhile, the constraints should be satisfied for every realization of the uncertain parameter. Worst-case robustness is a conservative concept and reflects the pessimistic attitude of a decision maker. Then, the robust (worst-case) counterpart of problem is as follows:
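In schematic form (an illustration only: we write $F(x,u)$ for the uncertain set-valued objective, $F_{U}$ for the scenario-wise objective map, and $\mathcal{F}$ for the robust feasible set of Definition 8 below; these symbols are assumed names here), the worst-case counterpart collects the objective values over all scenarios and minimizes the resulting sets with respect to the chosen set-order relation:
$$\mathrm{(URSOP)}\qquad \min_{x\in\mathcal{F}}\ F_{U}(x),\qquad\text{where}\ F_{U}(x):=\bigcup_{u\in U}F(x,u).$$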
Definition 8. The robust feasible set of problem (USOP) is defined by

We assume that

Obviously, the set of all robust feasible solutions to problem (USOP) coincides with the set of all feasible solutions to problem (URSOP).

Definition 9. is said to be a -robust efficient solution to problem (USOP) if is a -efficient solution to problem (URSOP), i.e., for all such that

In this section, we establish a necessary and sufficient optimality condition for the -robust efficient solution to problem (USOP).
Theorem 5. Let and , be set-valued maps, , and . Assume that the following conditions hold:
- (i)
is bounded on ;
- (ii)
exists for all ;
- (iii)
for any and k, and ;
- (iv)
for any j and k, is second-order strong subdifferentiable at and , respectively.
Then is a -robust efficient solution to problem if and only if for any and k, there exist and such that and

Proof. Let
be a -robust efficient solution to problem (USOP). Then . Hence, for all , we have . Thus, take such that

Moreover, for any j, there exists such that

In fact, there are two cases to illustrate (5) as follows:
- (i)
If , then take arbitrary , we get .
- (ii)
If , then take , we can easily get that .

Since U is a finite set and is bounded, there exists such that

According to the definition of the second-order strong subdifferential, one obtains

Therefore, we get

Conversely, assume that for any and k, there exist and such that and

By Theorem 3 and Corollary 1, we get

Since , one has

Therefore,

Obviously, . Then by Definition 7, we get

Since for any j, we calculate that , i.e., for the preceding element , we have . Together with for all , i.e., for all and , it follows from (7) that

i.e.,

Moreover, by the transitivity of the set-order relation, it follows from (6) and that

Thus, is a -robust efficient solution to problem (USOP). This proof is complete. □
Remark 4.
- (i)
In Theorem 5, we extend the uncertain scalar optimization problem in [32] (Theorem 3.1) to the uncertain set-valued optimization problem (USOP).
- (ii)
Ref. [32] (Theorem 3.1) is established under continuity and cone-convex-concavity conditions, and [15] (Corollaries 3.1 and 3.2) are established under upper semi-continuity conditions, whereas Theorem 5 is obtained under the existence of the maximum and boundedness. Since bounded functions may not be continuous, our result in Theorem 5 extends [32] (Theorem 3.1) and [15] (Corollaries 3.1 and 3.2).
5. Wolfe Type Robust Duality of Problem (USOP)
This section covers the robust weak duality and the robust strong duality, beginning with the introduction of a Wolfe type dual problem () for the uncertain set-valued optimization problem (USOP). We now consider the Wolfe type dual problem of problem (USOP):
Definition 10. The robust feasible solution set P of problem is defined by

In this section, we suppose that
Definition 11. is said to be a -robust efficient solution to problem if there is no feasible solution other than such that

Theorem 6 (Robust weak duality). If for any k, is bounded and closed, and exists for all , then for any feasible solution x to problem and any feasible solution to problem , we have

Proof. Let x be a feasible solution to problem (URSOP) and be a feasible solution to problem .
To the contrary, suppose that (8) does not hold. Then, there exist and such that

From , we have

Then, for all and , there exist and such that , i.e.,

Due to , we can conclude that . In fact, suppose that . Then, it follows from (9) that

Since is bounded and closed, and , we obtain , which is impossible. Thus, . Then, by the definition of the set-order relation, one has

It follows from and (11) that , i.e.,

Moreover, it follows from that

Thus, it follows from (12) and (13) that , which contradicts (10). Therefore, for any feasible solution x to problem (URSOP) and any feasible solution to problem , we have

We complete the proof. □
Theorem 7 (Robust strong duality). Let and , be set-valued maps, , and . Assume that the following conditions hold:
- (i)
is bounded on for any k;
- (ii)
exists for all and k;
- (iii)
for any and k, and ;
- (iv)
for any j and k, and are second-order strong subdifferentiable at and , respectively;
- (v)
is a -robust efficient solution to problem .
Then for any , there exist and such that is a -robust efficient solution to problem .
Proof. Let be a -robust efficient solution to problem (USOP). By Theorem 5, we know that for any and k, there exist and such that and

Therefore, is a feasible solution to problem (). Then, for any feasible solution to problem (), it follows from (14), (15) and Theorem 6 that

Hence, is a -robust efficient solution to problem (). This proof is complete. □
Remark 5. Theorems 6 and 7 generalize Theorems 4.1 and 4.2 in [32] from the scalar case to the set-valued case, respectively.