Constraint Qualifications for Vector Optimization Problems in Real Topological Spaces

Abstract: In this paper, we introduce a series of definitions of generalized affine functions for vector-valued functions by use of "linear sets". We prove that our generalized affine functions share some properties with generalized convex functions. We present examples showing that our generalized affinenesses are different from one another, and also provide an example showing that our definition of presubaffinelikeness is non-trivial; presubaffinelikeness is the weakest generalized affineness introduced in this article. We work with optimization problems that are defined and take values in linear topological spaces. We study constraint qualifications, and derive some optimality conditions as well as a strong duality theorem. Our optimization problems have inequality constraints, equality constraints, and abstract constraints; the inequality constraints are generalized convex functions and the equality constraints are generalized affine functions.


Introduction and Preliminary
The theory of vector optimization is at the crossroads of many subjects. The terms "minimum," "maximum," and "optimum" are in line with a mathematical tradition, while words such as "efficient" or "non-dominated" find wider use in business-related topics. Historically, linear programs were the focus of the optimization community, and initially it was thought that the major divide was between linear and nonlinear optimization problems; later, it was discovered that some nonlinear problems were much harder than others, and that the "right" divide was between convex and nonconvex problems. The author has found that affineness and generalized affinenesses are also very useful for the subject of optimization.
Suppose X, Y are real linear topological spaces [1].
A subset B ⊆ X is called a linear set if B is a nonempty vector subspace of X.
A subset B ⊆ X is called an affine set if the line passing through any two points of B is entirely contained in B (i.e., αx₁ + (1 − α)x₂ ∈ B whenever x₁, x₂ ∈ B and α ∈ R); a subset B ⊆ X is called a convex set if any segment with endpoints in B is contained in B (i.e., αx₁ + (1 − α)x₂ ∈ B whenever x₁, x₂ ∈ B and α ∈ [0, 1]). Let Y be a real topological vector space with pointed convex cone Y₊. We denote the partial order induced by Y₊ as follows: y₁ ≤ y₂ iff y₂ − y₁ ∈ Y₊, or, equivalently, y₁ ≤ y₂ iff y₁ − y₂ ∈ −Y₊; and y₁ < y₂ iff y₂ − y₁ ∈ int Y₊, or, equivalently, y₁ < y₂ iff y₁ − y₂ ∈ −int Y₊, where int Y₊ denotes the topological interior of the set Y₊.
A function f: X → Y is said to be linear if f(αx₁ + βx₂) = αf(x₁) + βf(x₂) whenever x₁, x₂ ∈ X and α, β ∈ R; f is said to be affine if f(αx₁ + (1 − α)x₂) = αf(x₁) + (1 − α)f(x₂) whenever x₁, x₂ ∈ X and α ∈ R; and f is said to be convex if αf(x₁) + (1 − α)f(x₂) − f(αx₁ + (1 − α)x₂) ∈ Y₊ whenever x₁, x₂ ∈ X and α ∈ [0, 1]. In the next section, we generalize the definition of affine functions, prove that our generalized affine functions have some properties similar to those of generalized convex functions, and present examples showing that our generalized affinenesses are not equivalent to one another.
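These defining identities can be sanity-checked numerically in the finite-dimensional case. The sketch below (with X = Y = R and Y₊ = [0, ∞); the sample functions and grids are illustrative assumptions, not taken from the paper) tests the affine identity and the convexity inequality on a small grid:

```python
# Hypothetical finite-dimensional illustration (X = Y = R, Y+ = [0, inf)):
# numerically check the defining identities for affine and convex functions.
# Sample functions and grids are assumptions for illustration only.

def is_affine(f, points, alphas, tol=1e-9):
    """Check f(a*x1 + (1-a)*x2) == a*f(x1) + (1-a)*f(x2) on all samples."""
    return all(
        abs(f(a * x1 + (1 - a) * x2) - (a * f(x1) + (1 - a) * f(x2))) < tol
        for x1 in points for x2 in points for a in alphas  # a ranges over R
    )

def is_convex(f, points, alphas, tol=1e-9):
    """Check a*f(x1) + (1-a)*f(x2) - f(a*x1 + (1-a)*x2) lies in Y+ = [0, inf)."""
    return all(
        a * f(x1) + (1 - a) * f(x2) - f(a * x1 + (1 - a) * x2) >= -tol
        for x1 in points for x2 in points for a in alphas if 0 <= a <= 1
    )

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
alphas = [-1.5, -0.5, 0.0, 0.3, 0.7, 1.0, 2.0]

print(is_affine(lambda x: 2 * x + 3, pts, alphas))  # True: linear plus constant
print(is_affine(lambda x: x * x, pts, alphas))      # False: squaring is not affine
print(is_convex(lambda x: x * x, pts, alphas))      # True: squaring is convex
```

Note that the affine identity is tested with α ranging over all of R, while the convexity inequality is only required for α ∈ [0, 1].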
In Section 3, we recall some existing definitions of generalized convexities, which are closely comparable with the definitions of generalized affinenesses introduced in this article. Section 4 works with optimization problems that are defined and take values in linear topological spaces, is devoted to the study of constraint qualifications, and derives some optimality conditions as well as a strong duality theorem.

Generalized Affinenesses
We introduce here the following definitions of generalized affine functions.
In the following Definitions 3 and 4, we assume that B ⊆ Y is any given linear set.
For any linear set B, since 0 ∈ B, we may take u = 0. So, affinelikeness implies subaffinelikeness, and preaffinelikeness implies presubaffinelikeness. It is obvious that affineness implies affinelikeness, and the following Example 1 shows that the converse is not true.

Example 1 An example of an affinelike function which is not an affine function.
It is known that a function is an affine function if and only if it is of the form f(x) = l(x) + b, where l is a linear function and b is a constant; the function f considered here is not of this form, so it is not an affine function. However, f is affinelike.
Similarly, affinelikeness implies preaffinelikeness (take τ = 1), and subaffinelikeness implies presubaffinelikeness. The following Example 2 shows that a preaffinelike function need not be an affinelike function.

Example 2 An example of a preaffinelike function which is not an affinelike function.
Consider the function

Example 4 An example of a presubaffinelike function which is not a preaffinelike function.
Actually, the function in Example 3 is subaffinelike; therefore, it is presubaffinelike on D.
However, f fails the defining condition of preaffinelikeness; this shows that the function f is not preaffinelike on D.

Example 5 An example of a presubaffinelike function which is not a subaffinelike function.
Consider the function f(x, y) = (x², y²), x, y ∈ R.
In the above inequality, we note that one of two cases must hold; in either case, the computation completes the verification.
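Since Example 5's map f(x, y) = (x², y²) reappears below, it may help to verify numerically that it is not even an affine function: the affine identity fails as soon as α leaves [0, 1]. A minimal sketch (the sample points are illustrative assumptions):

```python
# The map f(x, y) = (x^2, y^2) from Example 5.  We verify that the affine
# identity f(a*p1 + (1-a)*p2) = a*f(p1) + (1-a)*f(p2) fails for a = 2,
# so f is not an affine function.  Sample points are illustrative only.

def f(p):
    x, y = p
    return (x * x, y * y)

def combo(a, p1, p2):
    """Componentwise affine combination a*p1 + (1-a)*p2."""
    return tuple(a * u + (1 - a) * v for u, v in zip(p1, p2))

p1, p2, a = (0.0, 0.0), (1.0, 1.0), 2.0   # a may range over all of R
lhs = f(combo(a, p1, p2))                  # f(2*p1 - p2) = f(-1, -1) = (1, 1)
rhs = combo(a, f(p1), f(p2))               # 2*f(p1) - f(p2) = (-1, -1)
print(lhs == rhs)                          # False: the affine identity fails
```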

Example 7 An example of a preaffinelike function which is not a subaffinelike function.
Consider the function f and take the 2-dimensional linear set B. For inequality (1): if x = 0, it is obvious that (1) holds, which proves that inequality (1) must be true. Consequently, f is preaffinelike; on the other hand, f is not subaffinelike. So far, we have shown the following relationships (where subaffinelikeness and presubaffinelikeness are relative to a given linear set B): affineness implies affinelikeness; affinelikeness implies both preaffinelikeness and subaffinelikeness; and each of the latter two implies presubaffinelikeness, while Examples 1 to 7 show that none of the converse implications holds. The following Proposition 1 is very similar to the corresponding results for generalized convexities (see Proposition 2).
Let B ⊆ Y be a given linear set, and let t be any real number. In one direction, it follows that f(D) is an affine set; for the converse, assume that f(D) is an affine set. Similarly, in one direction it follows that f(D) + B is an affine set; for the converse, assume that f(D) + B is an affine set. Presubaffinelikeness is the weakest of the generalized affinenesses introduced here. The following example shows that our definition of presubaffinelikeness is not trivial.

Example 8 An example of non-presubaffinelike function.
Consider the function f(x, y) = (x², y²). And so, f(x, y) = (x², y²) is not B-presubaffinelike.

Generalized Convexities
In this section, we recall some existing definitions of generalized convexities, which are very comparable with the definitions of generalized affinenesses introduced in this article.
Let Y be a topological vector space, let D ⊆ X be a nonempty set, and let Y₊ be a convex cone in Y. The following Definition 5 was introduced in Fan [2].

Definition 5 A function f:
We may define Y+-preconvexlike functions as follows.

The following definition was introduced by Jeyakumar [3].
Definition 7 A function f: In fact, in Jeyakumar [3], the definition of subconvexlikeness was introduced in the following form, Definition 7*.
From the definitions above, one may introduce the following definition of presubconvexlike functions.

Definition 8 A function f:
Our Definitions 7 and 8 are more comparable with our definitions of generalized affineness.
Similar to the proof of the above Proposition 1, we present the following Proposition 2.

Constraint Qualifications
Consider the following vector optimization problem (VP), where Y₊ and Zᵢ₊ are closed convex cones in Y and Zᵢ, respectively, and D is a nonempty subset of X.

Remark 1 We note that the condition (A1) says that f and
The following is the well-known definition of a weakly efficient solution.

Definition 9 A point
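The precise statement of Definition 9 is elided above; in the classical finite-dimensional case with Y₊ = Rᵐ₊, a feasible point x̄ is weakly efficient when no feasible x satisfies f(x) − f(x̄) ∈ −int Y₊, i.e., no feasible point strictly improves every component of f. A brute-force sketch under these assumptions (the feasible set and objectives are hypothetical):

```python
# Brute-force weak efficiency check for a toy bi-objective problem with
# Y+ = R^2_+ (a finite-dimensional stand-in for the general cone setting).
# x_bar is weakly efficient iff no feasible x makes f(x) strictly smaller
# in *every* component, i.e. f(x) - f(x_bar) lies in -int Y+.

def is_weakly_efficient(x_bar, feasible, f):
    fx_bar = f(x_bar)
    return not any(
        all(fi < fbi for fi, fbi in zip(f(x), fx_bar))
        for x in feasible
    )

# Hypothetical feasible set and objectives (assumptions for illustration).
feasible = [i / 10 for i in range(11)]          # D = {0, 0.1, ..., 1}
f = lambda x: (x, 1 - x)                        # classic trade-off objectives

print(all(is_weakly_efficient(x, feasible, f) for x in feasible))  # True
print(is_weakly_efficient(0.5, feasible, lambda x: (x, x)))        # False
```

In the trade-off problem every feasible point is weakly efficient, since improving one component necessarily worsens the other; with the objectives (x, x) the point 0.5 is strictly dominated by 0.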
We first introduce the following constraint qualification, which is similar to the constraint qualification in differentiable form from nonlinear programming.

Definition 10 Let
It is obvious that NNAMCQ holds whenever ([7], (CQ1)) does. Hence, NNAMCQ is weaker than ([7], (CQ1)) (in [7], (CQ1) was stated for set-valued optimization problems), in the sense that only the binding constraints are considered. Under the NNAMCQ, the following Kuhn-Tucker type necessary optimality condition holds.

Theorem 1 Suppose that x̄ ∈ F is a weakly efficient solution of (VP) with ȳ ∈ f(x̄); then there exists a vector such that (2) holds.
Proof. Since x̄ is a weakly efficient solution of (VP) with ȳ ∈ f(x̄), there exists a nonzero vector such that (2) holds. Since NNAMCQ holds at x̄, ξ must be nonzero. Otherwise, if ξ = 0, then (η, ζ) would be a nonzero solution of the corresponding system; but this is impossible, since the NNAMCQ holds at x̄. □
Similar to ([7], (CQ2)), which is slightly stronger than ([7], (CQ1)), we define the following constraint qualification, which is stronger than the NNAMCQ.

Definition 12 (Slater Condition CQ).
Let x̄ ∈ F. We say that (VP) satisfies the Slater condition at x̄ if the following conditions hold: Similar to ([7], Proposition 2) (again, in [7], the discussion is for set-valued optimization problems), we have the following relationship between the constraint qualifications.
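The conditions of Definition 12 are elided above; in its classical finite-dimensional form, a Slater-type condition asks for a feasible point at which every inequality constraint holds strictly. A hedged sketch under that reading (the constraints below are illustrative assumptions, not the paper's):

```python
# Classical Slater-type check (finite-dimensional stand-in): look for a
# point x_hat with g_i(x_hat) < 0 for every inequality constraint and
# h_j(x_hat) = 0 for every equality constraint.  The constraints below
# are illustrative assumptions, not taken from the paper.

def satisfies_slater(candidates, gs, hs, tol=1e-9):
    return any(
        all(g(x) < 0 for g in gs) and all(abs(h(x)) < tol for h in hs)
        for x in candidates
    )

gs = [lambda x: x * x - 1]          # x^2 - 1 <= 0, strict at interior points
hs = [lambda x: 0.0 * x]            # trivial equality constraint
grid = [i / 10 for i in range(-20, 21)]

print(satisfies_slater(grid, gs, hs))                 # True (e.g. x = 0)
print(satisfies_slater(grid, [lambda x: x * x], hs))  # False: x^2 < 0 impossible
```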

Proposition 3
The following statements are true:
Proof. The proof of (i) is similar to ([7], Proposition 2). Now we prove (ii). By assumption (A1), the following sets C₁ and C₂ are convex. Suppose to the contrary that the Slater condition does not hold. □
We say that (VP) satisfies the calmness condition at x̄ provided that there exist

Theorem 2 Assume that (A1) is satisfied and either (A2) or (A3) holds. If
x̄ is a weakly efficient solution of (VP), and the calmness condition holds at x̄, then there exist a neighborhood U(x̄) of x̄ and a vector such that the conclusion holds.
Proof. It is easy to see that, under the calmness condition, x̄ being a weakly efficient solution of (VP) implies the corresponding inclusion. By assumption, the above optimization problem satisfies the generalized convexity assumption (A1). Now we prove that the NNAMCQ holds naturally: (η, ζ) satisfies the system, and it is obvious that (6) holds, where B_X is the unit ball of X, and

Remark 2
in the terminology of [10], or upper-Lipschitz continuous at x̄ in the terminology of [11].

Proof. Since D is polyhedral and h is a polyhedral multifunction, its inverse map S is a polyhedral multifunction; that is, the graph of S is a union of polyhedral convex sets. Since the graph of Σ is also a union of polyhedral convex sets, Σ is also a polyhedral multifunction, where B_Y is the unit ball of Y.
Then, for all K ≥ L_f, x̄ is a weakly efficient solution of the exact penalized optimization problem.
Proof. Let us prove the assertion by supposing the contrary. Then, there is a point at which the penalized objective is strictly improved.
Proposition 4 Suppose that X is a normed space and f is strongly Lipschitz on D. If x̄ is a weakly efficient solution of (VP) and the error bound constraint qualification is satisfied at x̄, then (VP) satisfies the calmness condition at x̄.
Proposition 5 x̄ is a weakly efficient solution of the penalized problem

Proof. By the exact penalization principle in
The results then follow from the definitions of the calmness condition and the error bound constraint qualification. □
Theorem 3 Assume that the generalized convexity assumption (A1) is satisfied with f replaced by the penalized objective.
Our last result, Theorem 4, is a strong duality theorem, which generalizes a result of Fang, Li, and Ng [14]. The following strong duality result extends the strong duality theorem in ([7], Theorem 7) (which was for set-valued optimization problems) to allow weaker convexity assumptions. We omit the proof since it is similar to that in [7].
Theorem 4 Assume that (A1) is satisfied, either (A2) or (A3) is satisfied, and a constraint qualification such as NNAMCQ is satisfied. If x̄ is a weakly efficient solution of (VP), then there exists
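The exact penalization principle invoked in the proof above can be illustrated in the simplest scalar case (an assumption-level sketch, not the paper's vector-valued formulation): once the penalty parameter K exceeds a Lipschitz constant L_f of the objective, the penalized problem recovers the constrained minimizer.

```python
# Exact penalty sketch: minimize f(x) = x^2 subject to h(x) = x - 1 = 0.
# On D = [0, 2], f has Lipschitz constant L_f = 4; for K >= L_f the
# penalized objective f(x) + K * |h(x)| attains its minimum at the
# feasible point x = 1, the solution of the constrained problem.
# A grid search keeps the sketch dependency-free.

f = lambda x: x * x
h = lambda x: x - 1.0
K = 5.0                                    # any K >= L_f = 4 works here

grid = [i / 1000 for i in range(0, 2001)]  # grid approximation of D = [0, 2]
x_pen = min(grid, key=lambda x: f(x) + K * abs(h(x)))

print(abs(x_pen - 1.0) < 1e-9)  # True: penalized minimizer is the feasible x = 1
```

Below the threshold (e.g. K = 1) the penalized minimizer would drift off the feasible set toward x = 0.5, which is exactly the failure the condition K ≥ L_f rules out.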

Conclusions
We introduce the following definitions of generalized affine functions: affinelikeness, preaffinelikeness, subaffinelikeness, and presubaffinelikeness. Examples 1 to 7 show that the definitions of affine, affinelike, preaffinelike, subaffinelike, and presubaffinelike functions are all different from one another. Example 8 gives an example of a non-presubaffinelike function; presubaffinelikeness is the weakest in the series. Proposition 1 demonstrates that our generalized affine functions have some properties similar to those of generalized convex functions.
We then work with vector optimization problems in real linear topological spaces, and obtain necessary conditions, sufficient conditions, or necessary and sufficient conditions for weakly efficient solutions, which generalize the corresponding classical results in [13,15] and some recent results in [7,9,16-18]. We note that the constraint qualifications in [13,17,18] are in differentiable form. Compared with the results in [19] and ([20], p. 297) concerning convex constraints, we only require weakened convexities for the constraint qualifications in this article. We note that [17] works with semi-definite programming; in [17], the two groups of functions gᵢ(x) ≥ 0, i ∈ I, and hⱼ(x) = 0, j ∈ J, can be considered as two topological spaces (I and J do not have to be finite sets). We also note that f is assumed to be "proper convex" in [18], and that in [18] the functions are required to be "quasiconvex".
Generalized affine functions and generalized convex functions can be used for other discussions of optimization problems, e.g., dualities, scalarizations, as well as saddle points, etc.