1. Introduction and Preliminaries
The theory of vector optimization is at the crossroads of many subjects. The terms “minimum,” “maximum,” and “optimum” are in line with a mathematical tradition, while words such as “efficient” or “non-dominated” find larger use in business-related topics. Historically, linear programs were the focus of the optimization community, and initially it was thought that the major divide was between linear and nonlinear optimization problems; later, it was discovered that some nonlinear problems are much harder than others, and that the “right” divide is between convex and nonconvex problems. In this article, we show that affineness and generalized affinenesses are also very useful in optimization.
Suppose X, Y are real linear topological spaces [1].
A subset $B \subseteq X$ is called a linear set if B is a nonempty vector subspace of X.
A subset $B \subseteq X$ is called an affine set if the line passing through any two points of B is entirely contained in B (i.e., $\lambda x_1 + (1-\lambda)x_2 \in B$ whenever $x_1, x_2 \in B$ and $\lambda \in \mathbb{R}$);
A subset $B \subseteq X$ is called a convex set if any segment with endpoints in B is contained in B (i.e., $\lambda x_1 + (1-\lambda)x_2 \in B$ whenever $x_1, x_2 \in B$ and $\lambda \in [0, 1]$).
Each linear set is affine, and each affine set is convex. Moreover, any translation of an affine (convex, respectively) set is affine (convex, resp.). It is known that a set B is linear if and only if B is affine and contains the zero point $0_X$ of X; a set B is affine if and only if B is a translation of a linear set.
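These relationships can be illustrated concretely. The following sketch assumes X = R² (chosen only for illustration): the horizontal line B = {(x, 1) : x ∈ R} is affine but not linear, while its translation by (0, −1) is linear.

```python
def affine_comb(p, q, lam):
    """Return lam*p + (1 - lam)*q componentwise."""
    return tuple(lam * a + (1 - lam) * b for a, b in zip(p, q))

def in_B(p):
    """Membership in B = {(x, 1) : x in R}."""
    return p[1] == 1.0

def in_translated_B(p):
    """Membership in B + (0, -1) = {(x, 0) : x in R}."""
    return p[1] == 0.0

# Affine: the whole line through two points of B stays in B (lam ranges over R).
assert all(in_B(affine_comb((2.0, 1.0), (-3.0, 1.0), lam))
           for lam in (-1.5, 0.0, 0.25, 1.0, 2.0))
# Not linear: B does not contain the zero vector of R^2.
assert not in_B((0.0, 0.0))
# The translation is the x-axis, which contains 0, hence a linear set.
assert in_translated_B((0.0, 0.0))
```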
A subset Y+ of Y is said to be a cone if $\lambda y \in Y_+$ for all $y \in Y_+$ and $\lambda \geq 0$. We denote by $0_Y$ the zero element in the topological vector space Y, and simply by 0 if there is no confusion. A convex cone is one for which $\lambda_1 y_1 + \lambda_2 y_2 \in Y_+$ for all $y_1, y_2 \in Y_+$ and $\lambda_1, \lambda_2 \geq 0$. A pointed cone is one for which $Y_+ \cap (-Y_+) = \{0\}$. Let Y be a real topological vector space with pointed convex cone Y+. We denote the partial order induced by Y+ as follows:
      $y_1 \leq y_2 \iff y_2 - y_1 \in Y_+$;  $y_1 < y_2 \iff y_2 - y_1 \in \operatorname{int} Y_+$,
where int Y+ denotes the topological interior of the set Y+.
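For illustration, take Y = R² and Y+ the nonnegative orthant (one particular pointed convex cone); then int Y+ is the strictly positive orthant, and the induced order can be checked componentwise:

```python
# Induced order for Y = R^2 with Y+ = nonnegative orthant (illustrative choice).
def leq(y1, y2):
    """y1 <= y2  iff  y2 - y1 lies in Y+."""
    return all(b - a >= 0 for a, b in zip(y1, y2))

def lt(y1, y2):
    """y1 < y2  iff  y2 - y1 lies in int Y+ (strictly positive orthant)."""
    return all(b - a > 0 for a, b in zip(y1, y2))

assert leq((1, 2), (1, 3))      # difference (0, 1) lies in Y+
assert not lt((1, 2), (1, 3))   # (0, 1) is on the boundary, not in int Y+
assert lt((0, 0), (1, 1))
assert not leq((2, 0), (1, 5))  # difference (-1, 5) leaves the cone
```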
A function f: $X \to Y$ is said to be linear if
      $f(\lambda_1 x_1 + \lambda_2 x_2) = \lambda_1 f(x_1) + \lambda_2 f(x_2)$
whenever $x_1, x_2 \in X$ and $\lambda_1, \lambda_2 \in \mathbb{R}$; f is said to be affine if
      $f(\lambda x_1 + (1-\lambda)x_2) = \lambda f(x_1) + (1-\lambda) f(x_2)$
whenever $x_1, x_2 \in X$ and $\lambda \in \mathbb{R}$; and f is said to be convex if
      $f(\lambda x_1 + (1-\lambda)x_2) \leq \lambda f(x_1) + (1-\lambda) f(x_2)$
whenever $x_1, x_2 \in X$ and $\lambda \in [0, 1]$.
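As a numerical sanity check of the three definitions, assume X = Y = R (purely for illustration): f(x) = 2x is linear, g(x) = 2x + 1 is affine but not linear, and h(x) = x² is convex but not affine.

```python
import math

f = lambda x: 2 * x          # linear (also f(0) = 0)
g = lambda x: 2 * x + 1      # affine but not linear, since g(0) != 0
h = lambda x: x ** 2         # convex but not affine

for x1, x2 in [(-1.0, 3.0), (0.5, 2.0)]:
    for lam in (-1.0, 0.0, 0.3, 1.0, 2.0):      # affine identity: lam over all of R
        z = lam * x1 + (1 - lam) * x2
        assert math.isclose(f(z), lam * f(x1) + (1 - lam) * f(x2))
        assert math.isclose(g(z), lam * g(x1) + (1 - lam) * g(x2))
    for lam in (0.0, 0.3, 0.7, 1.0):            # convex inequality: lam in [0, 1]
        z = lam * x1 + (1 - lam) * x2
        assert h(z) <= lam * h(x1) + (1 - lam) * h(x2) + 1e-12

# h fails the affine identity: at lam = 1/2, x1 = 0, x2 = 2 we get 1 vs 2.
assert not math.isclose(h(1.0), 0.5 * h(0.0) + 0.5 * h(2.0))
```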
In the next section, we generalize the definition of affine functions, prove that our generalized affine functions have properties similar to those of generalized convex functions, and present examples showing that our generalized affinenesses are not equivalent to one another.
In Section 3, we recall some existing definitions of generalized convexity, which are closely comparable with the definitions of generalized affineness introduced in this article.
Section 4 deals with optimization problems defined in, and taking values in, linear topological spaces; it is devoted to the study of constraint qualifications, and derives some optimality conditions as well as a strong duality theorem.
   2. Generalized Affinenesses
A function f: $X \to Y$ is said to be affine on D if, for any $x_1, x_2 \in D$ and any $\lambda \in \mathbb{R}$, there holds
      $f(\lambda x_1 + (1-\lambda)x_2) = \lambda f(x_1) + (1-\lambda) f(x_2).$
We introduce here the following definitions of generalized affine functions.
Definition 1.  A function f: $X \to Y$ is said to be affinelike on D if   such that
      
Definition 2.  A function f: $X \to Y$ is said to be preaffinelike on D if  such that
      
In the following Definitions 3 and 4, we assume that $B \subseteq Y$ is any given linear set.
Definition 3.  A function f: $X \to Y$ is said to be B-subaffinelike on D if ,  such that
      
Definition 4.  A function f: $X \to Y$ is said to be B-presubaffinelike on D if ,  such that
      
For any linear set B, since $0 \in B$, we may take u = 0. So, affinelikeness implies B-subaffinelikeness, and preaffinelikeness implies B-presubaffinelikeness.
It is obvious that affineness implies affinelikeness, and the following Example 1 shows that the converse is not true.
Example 1.  An example of an affinelike function which is not an affine function.
 It is known that a function is an affine function if and only if it is of the form 
; therefore
      
      is not an affine function.
However, 
f is affinelike. 
 taking
      
      then
      
Similarly, affinelikeness implies preaffinelikeness, and subaffinelikeness implies presubaffinelikeness. The following Example 2 shows that a preaffinelike function need not be an affinelike function.
Example 2.  An example of a preaffinelike function which is not an affinelike function.
 Consider the function .
Take 
, then 
; but
      
      therefore
      
So f is not affinelike.
But 
f is a preaffinelike function. For 
 taking 
 if 
, 
 if 
, then
      
      where 
.
Example 3.  An example of a subaffinelike function which is not an affinelike function.
 Consider the function  and the linear set .
 taking 
  then
	  
      therefore 
 is 
B-subaffinelike on 
.
 is not affinelike on D.
 Actually, for 
  one has 
, but
      
      hence
      
Example 4.  An example of a presubaffinelike function which is not a preaffinelike function.
 Actually, the function in Example 3 is subaffinelike, and therefore it is presubaffinelike on D.
However, for 
  one has
      
      but
      
This shows that the function f is not preaffinelike on D.
Example 5.  An example of a presubaffinelike function which is not a subaffinelike function.
 Consider the function .
Take the 2-dimensional linear set .
Take 
, then
      
Either 
 or 
 must be negative; but 
, 
; therefore
      
And so,  is not B-subaffinelike.
However, 
 is 
B-presubaffinelike.
      
Case 1. If both of 
 are positive, we take 
, 
, 
, then
      
Case 2. If both of 
 are negative, we take 
, 
, 
, then
      
Case 3. If one of 
 is negative, and the other is non-negative, we take
      
And so 
 are both non-negative or both negative; taking 
 or 
, respectively, one has
      
      where
      
Therefore,  is B-presubaffinelike.
Example 6.  An example of a subaffinelike function which is not a preaffinelike function.
 Consider the function 
Take the 2-dimensional linear set 
Take 
, then
      
In the above inequality, we note that either  or , .
Therefore,  is not preaffinelike.
However,  is B-subaffinelike.
In fact, 
 we may choose 
 with 
x large enough such that
      
Example 7.  An example of a preaffinelike function which is not a subaffinelike function.
 Consider the function .
Take the 2-dimensional linear set .
Take 
, then
      
So, 
,
      
However, for 
,
      
Actually, if 
x = 0, it is obvious that 
; if 
, the right side of (1) implies that 
, and the left side of (1) is 
. This proves that the inequality (1) must be true. Consequently,
      
So  is not B-subaffinelike.
On the other hand, 
 we may take 
 if 
 or 
 if 
, then
      
      where 
.
Therefore,  is preaffinelike.
So far, we have shown the following relationships (where subaffinelikeness and presubaffinelikeness are relative to a given linear set B):
The following Proposition 1 is very similar to the corresponding results for generalized convexities (see Proposition 2).
Proposition 1.  Suppose f:  is a function,  a given linear set, and t is any real scalar.
       
- (a) 
 f is affinelike on D if and only if f (D) is an affine set;
- (b) 
 f is preaffinelike on D if and only if  is an affine set;
- (c) 
 f is B-subaffinelike on D if and only if f (D) + B is an affine set;
- (d) 
 f is B-presubaffinelike on D if and only if  + B is an affine set.
 Proof.  (a) If f is affinelike on 
D, 
, 
  such that
      
 Therefore, f (D) is an affine set.
On the other hand, assume that 
f (
D) is an affine set. 
 we have
      
Therefore, 
 such that
      
And hence f is affinelike on D.
(b) Assume f is a preaffinelike function.
    for 
  such that
      
 Since 
f is preaffinelike, 
 such that
      
Therefore
      
      where 
. Consequently, 
 is an affine set.
On the other hand, suppose that 
 is an affine set. Then, 
  since 
,
      
Therefore, 
 such that
      
Then, f is a preaffinelike function.
(c) Assume that f is B-subaffinelike.
, 
 , such that 
 and 
. The subaffinelikeness of 
f implies that 
 , and 
 such that
      
      i.e.,
      
Therefore
      
      where 
Then, f (D) + B is an affine set.
On the other hand, assume that f (D) + B is an affine set.
 , 
 such that
      
      i.e.,
      
      where 
. And hence 
f is 
B-subaffinelike.
 (d) Suppose f is a B-presubaffinelike function.
, similar to the proof of (b),
  , 
, for which 
 and
      
      where 
. This proves that 
+ 
B is an affine set.
On the other hand, assume that  + B is an affine set.
 , 
 since 
, 
, 
 such that
      
 Therefore,
      
      i.e.,
      
      where 
. And so 
f is 
B-presubaffinelike. □ 
Presubaffinelikeness is the weakest of the generalized affinenesses introduced here. The following example shows that our definition of presubaffinelikeness is not trivial.
Example 8.  An example of a function which is not presubaffinelike.
 Consider the function .
Take the linear set .
Take 
, then
      
Either 
 or 
 must be negative, but 
 hold for 
; therefore, for any scalar 
(Actually, , one has ; and  either  or , then, either  or ).
And so,  is not B-presubaffinelike.
  4. Constraint Qualifications
Consider the following vector optimization problem:
      
where f: $X \to Y$, g, and h are given functions, Y+ and Zi+ are closed convex cones in Y and Zi, respectively, and D is a nonempty subset of X.
Throughout this paper, the following assumptions will be used (  are real scalars).
      
 such that
      
Remark 1.  We note that the condition (A1) says that f and  are presubconvexlike, and  (j = 1, 2, …, n) are preaffinelike.
Let F be the feasible set of (VP), i.e.,
      
The following is the well-known definition of a weakly efficient solution.
Definition 10.  A point  is said to be a weakly efficient solution of (VP) with a weakly efficient value  if for every  there exists no  satisfying .
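Definition 10 can be made concrete in the special case (illustrative only) where the attainable values form a finite subset of R² ordered by the nonnegative orthant: a value is weakly efficient if and only if no other attainable value is strictly smaller in every component.

```python
# Weakly efficient values of a finite set in R^2, with the order induced by
# Y+ = nonnegative orthant (so "strictly less" means less in every component).
def weakly_efficient(values):
    return [v for v in values
            if not any(all(wi < vi for wi, vi in zip(w, v))
                       for w in values if w != v)]

vals = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 4)]
# (3, 3) is strictly dominated by (2, 2); (2, 4) is not strictly dominated,
# since (1, 4) ties it in the second component.
assert set(weakly_efficient(vals)) == {(1, 4), (2, 2), (4, 1), (2, 4)}
```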
 We first introduce the following constraint qualification, which is similar to the constraint qualification in differentiable form from nonlinear programming.
Definition 11.  Let . 
We say that (
VP) 
satisfies the No Nonzero Abnormal Multiplier Constraint Qualification (NNAMCQ) at  if there is no nonzero vector  satisfying the system
      
where  is some neighborhood of .
  It is obvious that NNAMCQ holds at 
 with 
 being the whole space 
X if and only if for all 
 satisfying 
, there exists 
 such that
      
Hence, NNAMCQ is weaker than ([7], (CQ1)) (in [7], (CQ1) was stated for set-valued optimization problems) in the constraint , which means that only the binding constraints are considered. Under the NNAMCQ, the following Kuhn-Tucker type necessary optimality condition holds.
Theorem 1.  Assume that the generalized convexity assumption (A1) is satisfied and either (A2) or (A3) holds. If  is a weakly efficient solution of (VP) with , 
then there exists a vector  with  such that
      
for a neighborhood  of .
  Proof.  Since 
 is a weakly efficient solution of (VP) with 
 there exists a nonzero vector 
 such that (2) holds. Since NNAMCQ holds at 
, 
 must be nonzero. Otherwise, if 
 = 0 then 
 must be a nonzero solution of
      
 But this is impossible, since the NNAMCQ holds at . □
Similar to ([
7], (CQ2)) which is slightly stronger than ([
7], (CQ1)), we define the following constraint qualification which is stronger than the NNAMCQ.
Definition 12.  (SNNAMCQ) Let . We say that (VP) satisfies the Strong No Nonzero Abnormal Multiplier Constraint Qualification (SNNAMCQ) at  provided that
- (i) 
 satisfying,
      
 - (ii) 
 , , s.t.  for all .
 We now quote the Slater condition introduced in ([
7], (CQ3)).
Definition 13  (Slater Condition CQ). Let . We say that (VP) satisfies the Slater condition at  if the following conditions hold:
		
- (i) 
 , s.t. ;
- (ii) 
  for all j.
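In the classical scalar setting, condition (i) amounts to the existence of a strictly feasible point. A minimal sketch with hypothetical constraint functions g(x) = x² − 1 (Slater holds) and g2(x) = x² (Slater fails):

```python
g = lambda x: x ** 2 - 1       # hypothetical inequality constraint g(x) <= 0
assert g(0.0) < 0              # x = 0 is strictly feasible: Slater-type condition holds

g2 = lambda x: x ** 2          # feasible set is {0}: no strictly feasible point
assert all(g2(i / 10 - 5) >= 0 for i in range(101))   # grid on [-5, 5]
```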
 Similar to ([
7], Proposition 2) (again, in [
7], discussions are made for set-valued optimization problems), we have the following relationship between the constraint qualifications.
Proposition 3.  The following statements are true:
(i) Slater CQ $\Rightarrow$ SNNAMCQ $\Rightarrow$ NNAMCQ with  being the whole space X;
(ii) Assume that (A1) and (A2) (or (A1) and (A3)) hold, and that the NNAMCQ holds with  being the whole space X and without the restriction of  at . Then, the Slater condition (CQ) holds.
 Proof.  The proof of (i) is similar to ([
7], Proposition 2). Now we prove (ii). By the assumption (A1), the following sets $C_1$ and $C_2$ are convex:
      
 Suppose to the contrary that the Slater condition does not hold. Then, 
 or 
. If the former 
 holds, then by the separation theorem [
1], there exists a nonzero vector 
 such that
      
      for all 
. Since 
 are convex cones, consequently we have
      
      for all 
 and, taking 
 in (3), we have
      
      which contradicts the NNAMCQ. Similarly if the latter 
 holds then there exists 
 such that 
, which contradicts NNAMCQ. □
Definition 14  (Calmness Condition)
. Let . 
Let  and . 
We say that (
VP) 
satisfies the calmness condition at  provided that there exist , 
a neighborhood of , 
and a map  with  such that for each  satisfying
      
      there is no 
, such that
      
Theorem 2.  Assume that (A1) is satisfied and either (A2) or (A3) holds. If  is a weakly efficient solution of (VP) with , and the calmness condition holds at , 
then there exists , 
a neighborhood of , 
and a vector  with  such that
      
Proof.  It is easy to see that under the calmness condition, 
 being a weakly efficient solution of (
VP) implies that 
 is a weakly efficient solution of the perturbed problem:
	  
VP(
p,
q)
      
 By assumption, the above optimization problem satisfies the generalized convexity assumption (A1). Now we prove that the NNAMCQ holds naturally at 
. Suppose that 
 satisfies the system:
If 
, then there exists 
 small enough such that 
. Since 
, 
, and there exists 
, which implies that 
, hence
      
      which contradicts (5). Hence, 
 and (5) becomes
      
If 
, then there exists 
p small enough such that 
. Let 
, then
      
      and hence
      
      which is impossible. Consequently, 
 as well. Hence, there exists 
 with 
 such that
      
It is obvious that (6) implies (4) and hence the proof of the theorem is complete. □
Definition 15.  Let  be normed spaces. We say that (
VP) 
satisfies the error bound constraint qualification at a feasible point  if there exist positive constants , 
and  such that
      
where BX is the unit ball of X, 
and 
        Remark 2.  Note that the error bound constraint qualification is satisfied at a feasible point  if and only if the function  is pseudo upper-Lipschitz continuous around  in the terminology of ([8]) (which is referred to as being calm at  in [9]). Hence,  being either pseudo-Lipschitz continuous around  in the terminology of [10] or upper-Lipschitz continuous at  in the terminology of [11] implies that the error bound constraint qualification holds at . 
Recall that a function  is called a polyhedral multifunction if its graph is a union of finitely many polyhedral convex sets. This class of functions is closed under (finite) addition, scalar multiplication, and (finite) composition. By ([12], Proposition 1), a polyhedral multifunction is upper-Lipschitz. Hence, the following result provides a sufficient condition for the error bound constraint qualification.
  Proposition 4.  Let X = Rn and W = Rm. Suppose that D is polyhedral and h is a polyhedral multifunction. Then, the error bound constraint qualification always holds at any feasible point .
 Proof.  Since 
D is polyhedral and 
h is a polyhedral multifunction, its inverse map 
 is a polyhedral multifunction. That is, the graph of 
S is a union of polyhedral convex sets. Since
      
	  which is also a union of polyhedral convex sets, 
 is also a polyhedral multifunction and hence upper-Lipschitz at any point of 
 by ([
12], Proposition 1). Therefore, the error bound constraint qualification holds at 
. □
 Definition 16.  Let X be a normed space,  be a function, and . 
f is said to be Lipschitz near  if there exist , 
a neighborhood of , 
and a constant Lf > 0 
such that for all ,
      
where BY is the unit ball of Y.
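Definition 16 can be checked numerically in the simplest case X = Y = R (an illustrative assumption, not the general setting):

```python
f = abs                                    # Lipschitz near 0 with constant Lf = 1
samples = [i / 7 - 3 for i in range(43)]   # grid on [-3, 3]
assert all(abs(f(x) - f(y)) <= 1.0 * abs(x - y) + 1e-12
           for x in samples for y in samples)

# g(x) = x**2 is Lipschitz on the bounded set [-3, 3] with constant 6,
# because |x^2 - y^2| = |x + y| * |x - y| <= 6 |x - y| there; no single
# constant works on all of R.
g = lambda x: x * x
assert all(abs(g(x) - g(y)) <= 6.0 * abs(x - y) + 1e-12
           for x in samples for y in samples)
```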
  Definition 17.  Let X be a normed space,  be a function and . 
f is said to be strongly Lipschitz on  if there exists a constant Lf > 0 
such that for all , 
 and ,
      
  The following result generalizes the exact penalization [
13].
Proposition 5.  Let X be a normed space,  be a function which is strongly Lipschitz of rank Lf on a set . 
Let  and suppose that  is a weakly efficient solution of
      
with . 
Then, for all , 
 is a weakly efficient solution of the exact penalized optimization problem
      
where .
  Proof.  Let us prove the assertion by supposing the contrary. Then, there is a point 
, 
, and 
 satisfying 
. Let 
 and 
 be a point such that 
. Then, for any 
,
      
 Since 
 is arbitrary, it contradicts the fact that 
 is a weakly efficient solution of
      
 □
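The mechanism behind Proposition 5 can be sketched in the scalar case with illustrative data: minimizing f(x) = x² over C = [1, +∞), the penalized objective with a sufficiently large penalty constant K attains its minimum at the constrained solution.

```python
# Scalar sketch of exact penalization (illustrative instance, not the general
# vector setting): f(x) = x**2 is Lipschitz with constant 4 on [-2, 2], and
# for K >= 4 the penalized objective f(x) + K * dist(x, C) is minimized over
# [-2, 2] exactly at the constrained minimizer x = 1.
def dist_to_C(x):
    return max(1.0 - x, 0.0)          # distance to C = [1, +inf)

def penalized(x, K):
    return x ** 2 + K * dist_to_C(x)

grid = [i / 100 - 2 for i in range(401)]      # grid on [-2, 2], contains 1.0
for K in (4.0, 10.0):
    best = min(grid, key=lambda x: penalized(x, K))
    assert abs(best - 1.0) < 1e-9
```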
Proposition 6.  Suppose  is a normed space and f is strongly Lipschitz on D. If  is a weakly efficient solution of (VP) and the error bound constraint qualification is satisfied at , then (VP) satisfies the calmness condition at .
Proof.  By the exact penalization principle in Proposition 5,
 is a weakly efficient solution of the penalized problem
      
The results then follow from the definitions of the calmness and the error bound constraint qualification. □
 Theorem 3.  Assume that the generalized convexity assumption (A1) is satisfied with f replaced by  and either (A2) or (A3) holds. Suppose  is a normed space and f is strongly Lipschitz on D. If  is a weakly efficient solution of (VP) and the error bound constraint qualification is satisfied at , then there exist , a neighborhood of , and a vector  with  such that (4) holds.
 Using Proposition 4, Theorem 3 has the following easy corollary.
Corollary 1.  Suppose Y is a normed space, X = Rn, W = Rm and D is polyhedral, and f is strongly Lipschitz on D. Assume that the generalized convexity assumption (A1) is satisfied with f replaced by  and either (A2) or (A3) holds. If  is a weakly efficient solution of (VP) without the inequality constraint , 
and h is a polyhedral multifunction, then there exist , 
a neighborhood of , and a vector  with  such that
      
Our last result, Theorem 4, is a strong duality theorem, which generalizes a result in Fang, Li, and Ng [
14].
For two topological vector spaces 
Z and 
Y, let 
B(
Z; 
Y) be the set of continuous linear transformations from 
Z to 
Y and
      
The Lagrangian map for (
VP) is the function
      
      defined by
      
Given 
, consider the vector minimization problem induced by (
VP):
      and denote by 
 the set of weakly efficient values of the problem (
VPST). The Lagrange dual problem associated with the primal problem (
VP) is
      
The following strong duality result holds, which extends the strong duality theorem in ([
7], Theorem 7) (which was for set-valued optimization problems), to allow weaker convexity assumptions. We omit the proof since it is similar to [
7].
Theorem 4.  Assume that (A1) is satisfied, either (A2) or (A3) is satisfied, and a constraint qualification such as NNAMCQ is satisfied. If  is a weakly efficient solution of (VP), then there exists
      
such that
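A scalar instance of this duality scheme (purely illustrative; the theorem itself concerns vector-valued problems): for min x² subject to 1 − x ≤ 0, the Lagrange dual value equals the primal value, so there is no duality gap.

```python
# Primal: min x**2 s.t. 1 - x <= 0, with optimal value 1 attained at x = 1.
# Lagrangian: L(x, mu) = x**2 + mu * (1 - x); its unconstrained minimizer in x
# is x = mu / 2, so the dual function is mu - mu**2 / 4, maximized at mu = 2.
def dual_value(mu):
    x = mu / 2.0                         # minimizer of L(., mu)
    return x ** 2 + mu * (1.0 - x)

primal = 1.0
mus = [i / 100 for i in range(801)]      # grid on [0, 8], contains 2.0
best_mu = max(mus, key=dual_value)
assert abs(best_mu - 2.0) < 1e-9         # dual maximizer
assert abs(dual_value(best_mu) - primal) < 1e-12   # strong duality: no gap
```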