Article

Optimality and Duality with Respect to b-(E,m)-Convex Programming

Bo Yu, Jiagen Liao and Tingsong Du
1 Department of Mathematics, College of Science, China Three Gorges University, Yichang 443002, China
2 Three Gorges Mathematical Research Center, China Three Gorges University, Yichang 443002, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(12), 774; https://doi.org/10.3390/sym10120774
Submission received: 12 November 2018 / Revised: 8 December 2018 / Accepted: 17 December 2018 / Published: 19 December 2018

Abstract

Noticing that E-convexity, m-convexity and b-invexity have similar structures in their definitions, there is a natural possibility of treating these three classes of mappings uniformly. For this purpose, the definitions of (E,m)-convex sets and b-(E,m)-convex mappings are introduced. Properties concerning the operations that preserve the (E,m)-convexity of the proposed mappings are derived. Unconstrained and inequality-constrained b-(E,m)-convex programming problems are considered, for which sufficient optimality conditions are developed and the uniqueness of the solution to b-(E,m)-convex programming is investigated. Furthermore, sufficient optimality conditions and Fritz–John necessary optimality criteria for nonlinear multi-objective b-(E,m)-convex programming are established. Wolfe-type symmetric duality theorems under b-(E,m)-convexity, including weak and strong symmetric duality theorems, are also presented. Finally, we construct two detailed examples to show how the obtained results can be used in b-(E,m)-convex programming.

1. Introduction

Convexity, as well as generalized convexity, plays a vital role in optimality theory and has many consequences in different aspects of mathematical programming. Because of its significance, researchers have made sustained efforts towards generalized convexity. For instance, Youness [1] considered a class of sets and a family of mappings, named E-convex sets and E-convex mappings, in 1999. Bector and Singh [2] introduced b-vex functions in 1991. Several years later, these kinds of generalized convex mappings attracted plenty of research interest. For example, Iqbal et al. [3] discussed a new family of sets and a new group of mappings, named geodesic E-convex sets and geodesic E-convex mappings, which are defined on a Riemannian manifold. Mishra et al. [4] studied a class of semi E-b-vex mappings, for which the elementary properties were observed and various interrelations with other mappings were discussed. Syau et al. [5] defined a family of mappings, called E-b-vex mappings, by generalizing b-vex mappings and E-vex mappings.
These studies contribute to the evolution of generalized convex mappings. However, there are also some unsatisfactory generalizations. For example, Yang [6] pointed out a drawback of the work in [1] by presenting some counterexamples. Therefore, it is natural to consider wider families of generalized convex mappings and to study the optimality conditions for nonlinear generalized convex programming. Recently, some significant results involving the properties of generalized convex mappings, optimality conditions for nonlinear generalized convex programming, and duality theorems have been developed. For example, in [7], the authors studied geodesic sub-b-s-convex mappings on Riemannian manifolds. In [8], the author considered roughly B-invex mappings as well as generalized roughly B-invex mappings; sufficient optimality criteria for nonlinear programming involving these mappings were also investigated. The classes of E-convex sets and E-convex and E-quasiconvex mappings were extended to E-invex sets and E-preinvex and E-prequasiconvex mappings in [9]. Gulati and Verma introduced a pair of nondifferentiable higher-order symmetric dual models in [10]. In [11], the authors investigated a class of mappings called geodesic semi-E-b-vex mappings as well as generalized geodesic semi-E-b-vex mappings. To solve E-convex multiobjective nonlinear programming problems, Megahed and his collaborators presented a combined interactive approach in [12]. In addition, to estimate a possible impact on the applied sciences, Pitea et al. studied generalized nonconvex multitime multiobjective variational problems [13] as well as applications such as minimizing a vector of functionals of curvilinear integral type [14].
Inspired by the above work and based on the work in [15,16,17], we introduce a new family of generalized convex sets and generalized convex mappings, named (E,m)-convex sets and b-(E,m)-convex mappings, and develop some interesting properties of these sets and mappings, respectively. Furthermore, we develop duality theorems and optimality conditions for single-objective as well as multi-objective programming under b-(E,m)-convexity. In addition, two detailed examples are provided to illustrate the results.
To end this section, let us recall several concepts of generalized convexity.
Definition 1.
A set $M \subseteq \mathbb{R}^n$ is called an E-convex set if there exists a mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ such that
$$(1-\tau)E(\mu) + \tau E(\nu) \in M$$
for every $\mu, \nu \in M$ and $\tau \in [0,1]$ [1].
Definition 2.
A mapping $g: M \to \mathbb{R}$ is named an E-convex mapping on a set $M \subseteq \mathbb{R}^n$ if there exists a mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ such that $M$ is an E-convex set and
$$g\big((1-\tau)E(\mu) + \tau E(\nu)\big) \le (1-\tau)\, g\big(E(\mu)\big) + \tau\, g\big(E(\nu)\big),$$
for every $\mu, \nu \in M$ and $\tau \in [0,1]$ [1].
Definition 3.
A mapping $g: [0,b] \to \mathbb{R}$ is called an m-convex mapping [18], where $m \in [0,1]$, if for all $\mu, \nu \in [0,b]$ and $\tau \in [0,1]$ the following holds:
$$g\big(\tau\mu + m(1-\tau)\nu\big) \le \tau g(\mu) + m(1-\tau) g(\nu).$$
Definition 4.
Let $S$ be a nonempty convex set in $\mathbb{R}^n$. The mapping $g: S \to \mathbb{R}$ is called a b-vex mapping on $S$ corresponding to the mapping $b: S \times S \times [0,1] \to \mathbb{R}_+$ if
$$g\big(\tau\mu + (1-\tau)\nu\big) \le \tau b(\mu,\nu,\tau)\, g(\mu) + \big(1 - \tau b(\mu,\nu,\tau)\big) g(\nu)$$
holds for each $\mu, \nu \in S$, $\tau \in [0,1]$ and $\tau b(\mu,\nu,\tau) \in [0,1]$ [2].

2. (E,m)-Convex Sets and b-(E,m)-Convex Mappings

Before we present the notion of b-(E,m)-convex mappings, we give the concept of an (E,m)-convex set as follows.
Definition 5.
A set $M \subseteq \mathbb{R}^n$ is called an (E,m)-convex set if there exist a mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ and some fixed $m \in [0,1]$ such that
$$\tau E(\mu) + m(1-\tau)E(\nu) \in M$$
for every $\mu, \nu \in M$ and $\tau \in [0,1]$.
Remark 1.
By the definition, we can easily check that $mE(x) \in M$ for all $x \in M$ and the fixed $m \in [0,1]$. In addition, each convex set $M \subseteq \mathbb{R}^n$ is an (E,m)-convex set, obtained by choosing the mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ to be the identity mapping and $m = 1$. Each E-convex set $M \subseteq \mathbb{R}^n$ is an (E,m)-convex set, obtained by choosing $m = 1$.
To establish the optimality conditions and duality theory concerning b-(E,m)-convexity, we first need the basic properties of (E,m)-convex sets. We state the following proposition without proof.
Proposition 1.
The following statements are true:
1. $E(M) \subseteq M$, provided $M \subseteq \mathbb{R}^n$ is an (E,m)-convex set.
2. $\bigcap_{i \in I} M_i$ is an (E,m)-convex set, given that $\{M_i\}_{i \in I}$ is a family of (E,m)-convex sets.
3. Assuming that $E: \mathbb{R}^n \to \mathbb{R}^n$ is a linear mapping and $M_1, M_2 \subseteq \mathbb{R}^n$ are two (E,m)-convex sets, the sum $M_1 + M_2$ is also an (E,m)-convex set.
Remark 2.
If we assume that $M_1$ and $M_2$ are two (E,m)-convex sets, then $M_1 \cup M_2$ is not necessarily an (E,m)-convex set. A counterexample is given below.
Example 1.
Let us consider the mapping $E: \mathbb{R}^2 \to \mathbb{R}^2$ defined by
$$E(\mu, \nu) = \Big(\frac{2\nu - 3\mu}{3},\ \frac{\nu}{3} + \frac{4\mu}{3}\Big),$$
and the two sets
$$M_1 = \big\{(\mu, \nu) \in \mathbb{R}^2 \mid (\mu, \nu) = \tau_1(0,0) + \tau_2(2,1) + \tau_3(0,3)\big\}, \qquad M_2 = \big\{(\mu, \nu) \in \mathbb{R}^2 \mid (\mu, \nu) = \tau_1(0,0) + \tau_2(0,3) + \tau_3(2,1)\big\},$$
with $\tau_1, \tau_2, \tau_3 \ge 0$, $\sum_{i=1}^{3} \tau_i = 1$ and $m = 1$. Both $M_1$ and $M_2$ are (E,m)-convex sets, but $M_1 \cup M_2$ is not an (E,m)-convex set.
The fact that E-convex, m-convex, and b-vex functions have almost the same structure motivates us to unify these different classes of convexity. Now, let us introduce the b-(E,m)-convex function and the (E,m)-quasiconvex function as follows.
Definition 6.
A mapping $g: M \to \mathbb{R}$ is called a b-(E,m)-convex mapping on $M \subseteq \mathbb{R}^n$ corresponding to the mapping $b: M \times M \times [0,1] \to \mathbb{R}_+$ if there exist a mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ and some fixed $m \in [0,1]$ such that $M$ is an (E,m)-convex set and
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \le \tau b(\mu,\nu,\tau)\, g\big(E(\mu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big)$$
for all $\mu, \nu \in M$ and $\tau \in [0,1]$ with $\tau b(\mu,\nu,\tau) \in [0,1]$. Similarly, if
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \ge \tau b(\mu,\nu,\tau)\, g\big(E(\mu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big)$$
for each $\mu, \nu \in M$ and $\tau \in [0,1]$ with $\tau b(\mu,\nu,\tau) \in [0,1]$, then $g$ is named a b-(E,m)-concave mapping. If $\tau = 0$, then the inequalities become equalities, i.e., $m\, g\big(E(\mu)\big) = g\big(mE(\mu)\big)$ holds for every $\mu \in M$ and the fixed $m \in [0,1]$. Moreover, the mapping $g$ is called a strictly b-(E,m)-convex (concave) mapping on $M$ if these two inequalities hold strictly for $\mu \ne \nu$ and $\tau \in (0,1)$.
Remark 3.
If $b(\mu, \nu, \tau) \equiv 1$ and $m = 1$, then the b-(E,m)-convex mapping degenerates to an E-convex mapping.
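To make the defining inequality of Definition 6 concrete, the following is a minimal numerical sanity check; it is not part of the original paper, and all function names are illustrative. It samples points and tests the b-(E,m)-convexity inequality for user-supplied g, E, b and a fixed m; the instance in the main block (g(x) = x², E the identity, b ≡ 1, m = 1) is an assumed example for which Definition 6 reduces to ordinary convexity.

```python
import random

def check_bEm_convexity(g, E, b, m, sample, trials=10_000, tol=1e-12):
    """Return a violating (mu, nu, tau) triple, or None if no violation is found."""
    for _ in range(trials):
        mu, nu, tau = sample(), sample(), random.random()
        tb = tau * b(mu, nu, tau)
        if not 0.0 <= tb <= 1.0:        # Definition 6 only constrains tau*b in [0, 1]
            continue
        lhs = g(tau * E(mu) + m * (1.0 - tau) * E(nu))
        rhs = tb * g(E(mu)) + m * (1.0 - tb) * g(E(nu))
        if lhs > rhs + tol:
            return mu, nu, tau
    return None

if __name__ == "__main__":
    # Assumed instance: g(x) = x^2, E = identity, b == 1, m = 1,
    # for which Definition 6 reduces to ordinary convexity of g.
    violation = check_bEm_convexity(
        g=lambda x: x * x,
        E=lambda x: x,
        b=lambda mu, nu, tau: 1.0,
        m=1.0,
        sample=lambda: random.uniform(-2.0, 2.0),
    )
    print("no violation found" if violation is None else f"violated at {violation}")
```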
Definition 7.
A mapping $g: M \to \mathbb{R}$ is called an (E,m)-quasiconvex mapping on an (E,m)-convex set $M \subseteq \mathbb{R}^n$ if there exist a mapping $E: \mathbb{R}^n \to \mathbb{R}^n$ and some fixed $m \in [0,1]$ such that $M$ is an (E,m)-convex set and
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \le \max\big\{ g\big(E(\mu)\big),\ g\big(E(\nu)\big) \big\}$$
for every $\mu, \nu \in M$ and $\tau \in [0,1]$.
Next, we explore whether b-(E,m)-convex mappings on (E,m)-convex sets have properties similar to those of E-convex mappings. Let us present the first property.
Proposition 2.
If $g$ is a convex mapping defined on the convex set $M$, then $g$ must be a b-(E,1)-convex mapping, where $E$ is the identity mapping and $b(\mu,\nu,\tau) \equiv 1$.
Remark 4.
Proposition 2 states a sufficient condition for g to be a b-(E,1)-convex mapping, but the converse may fail to hold.
To illustrate this fact, let us construct an example as follows.
Example 2.
Suppose that $g: \mathbb{R} \to \mathbb{R}$ has the following expression:
$$g(\mu) = \begin{cases} \mu + 1 & \text{if } \mu > 0, \\ 0 & \text{if } \mu \le 0, \end{cases}$$
and $E: \mathbb{R} \to \mathbb{R}$ has the form $E(\mu) = -\mu^2$. Then $\mathbb{R}$ is an (E,m)-convex set and $g$ is a b-(E,m)-convex mapping. However, $g$ is not a convex mapping.
Clearly, $\mathbb{R}$ is an (E,m)-convex set, since
$$(1-\tau)E(u) + m\tau E(v) \in \mathbb{R},\qquad \forall\, u, v \in \mathbb{R},\ \tau \in [0,1],\ m \in [0,1].$$
For each function $b: M \times M \times [0,1] \to \mathbb{R}_+$ with $\tau b(u,v,\tau) \in [0,1]$, it is obvious that
$$g\big(\tau E(u) + m(1-\tau)E(v)\big) = g\big(-\tau u^2 - m(1-\tau)v^2\big) = 0$$
and
$$\tau b(u,v,\tau)\, g\big(E(u)\big) + m\big(1 - \tau b(u,v,\tau)\big) g\big(E(v)\big) = 0$$
hold for every $u, v \in \mathbb{R}$, $\tau \in [0,1]$ and the fixed $m \in [0,1]$. Thus, the function $g$ is b-(E,m)-convex on $\mathbb{R}$. However, $g$ is not convex on $\mathbb{R}$. Let $u = 1$, $v = -\tfrac{1}{3}$, $\tau = \tfrac{5}{8}$; then
$$g\big(\tau u + (1-\tau)v\big) = \tfrac{3}{2} > \tfrac{5}{4} = \tau g(u) + (1-\tau) g(v).$$
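The computation above can be reproduced numerically. The following standalone sketch (not from the paper) evaluates the quoted counterexample point u = 1, v = −1/3, τ = 5/8 and then samples the inequality of Definition 6 for the g and E of Example 2, with an arbitrarily chosen m = 1/2 and b ≡ 1.

```python
import random

def g(x):
    return x + 1.0 if x > 0 else 0.0     # the mapping g of Example 2

def E(x):
    return -x * x                        # E(x) = -x^2

m = 0.5                                  # any fixed m in [0, 1] works here
b = lambda mu, nu, tau: 1.0              # an arbitrarily chosen mapping b

# Ordinary convexity fails at the point quoted in the text.
u, v, tau = 1.0, -1.0 / 3.0, 5.0 / 8.0
lhs = g(tau * u + (1.0 - tau) * v)                 # = 3/2
rhs = tau * g(u) + (1.0 - tau) * g(v)              # = 5/4
print("convexity violated:", lhs > rhs)            # True

# The b-(E, m) inequality of Definition 6 holds: E(x) <= 0 for all x, so
# g(E(x)) = 0 and the combined argument is nonpositive, making both sides 0.
ok = True
for _ in range(10_000):
    mu, nu, t = random.uniform(-3, 3), random.uniform(-3, 3), random.random()
    tb = t * b(mu, nu, t)
    ok &= g(t * E(mu) + m * (1 - t) * E(nu)) <= tb * g(E(mu)) + m * (1 - tb) * g(E(nu)) + 1e-12
print("b-(E, m) inequality held on all samples:", ok)
```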
Proposition 3.
Every b-vex mapping $g$ on the convex set $M$ is a b-(E,1)-convex mapping with $E$ being the identity mapping.
Remark 5.
Proposition 3 states a sufficient condition for g to be a b-(E,1)-convex mapping, but the converse may fail to hold.
A counterexample is given as follows.
Example 3.
The mapping $g$ considered in Example 2 is also a b-(E,m)-convex mapping but not a b-vex mapping, for all mappings $b: \mathbb{R} \times \mathbb{R} \times [0,1] \to [0,1]$ and the fixed $m \in [0,1]$. If $\mu = 1$, $\nu = -\tfrac{1}{3}$, $\tau = \tfrac{5}{8}$, then obviously we have
$$g\big(\tau\mu + (1-\tau)\nu\big) = \tfrac{3}{2} > \tfrac{5}{4}\, b(\mu,\nu,\tau) = \tau b(\mu,\nu,\tau)\, g(\mu) + \big(1 - \tau b(\mu,\nu,\tau)\big) g(\nu).$$
To explore the optimality conditions and duality theory for nonlinear b-(E,m)-convex programming as well as multi-objective b-(E,m)-convex programming, we consider the preservation of b-(E,m)-convexity under positive linear combinations, maxima and suprema, and composition. Since the proofs of these properties are straightforward, they are omitted.
Proposition 4.
Let $M \subseteq \mathbb{R}^n$ be a nonempty (E,m)-convex set and let $g_i: M \to \mathbb{R}$ $(i = 1, 2, \ldots, n)$ be b-(E,m)-convex mappings on the same (E,m)-convex set $M$ corresponding to the same mappings $E$ and $b$ on $M$. Then the mappings
$$g = \sum_{i=1}^{n} a_i g_i,\quad a_i \in \mathbb{R}_+\ (i = 1, 2, \ldots, n),\qquad g = \max\{g_i,\ i = 1, 2, \ldots, n\}$$
and
$$g = \sup\{g_i,\ i = 1, 2, \ldots, n\}$$
are all b-(E,m)-convex mappings corresponding to the mappings $E$ and $b$ on $M$.
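As a small numerical illustration of Proposition 4 (a sketch under assumed data, not taken from the paper), consider the degenerate case E = identity, m = 1 and b ≡ 1, in which b-(E,m)-convexity coincides with ordinary convexity; a positive linear combination and the pointwise maximum of two convex functions should then satisfy the defining inequality on sampled points.

```python
import random

def is_convex_on_samples(g, trials=10_000, tol=1e-12):
    """Sample the ordinary convexity inequality, i.e. Definition 6 with E = id, m = 1, b == 1."""
    for _ in range(trials):
        mu, nu, tau = random.uniform(-2, 2), random.uniform(-2, 2), random.random()
        if g(tau * mu + (1 - tau) * nu) > tau * g(mu) + (1 - tau) * g(nu) + tol:
            return False
    return True

g1 = lambda x: x * x                          # two convex, hence b-(E, 1)-convex, mappings
g2 = abs
combo = lambda x: 2.0 * g1(x) + 3.0 * g2(x)   # positive linear combination (Proposition 4)
gmax  = lambda x: max(g1(x), g2(x))           # pointwise maximum (Proposition 4)

print("combination stays convex:", is_convex_on_samples(combo))
print("maximum stays convex:", is_convex_on_samples(gmax))
```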
Proposition 5.
If $g: M \to \mathbb{R}_+$ is a b-(E,m)-convex mapping on the (E,m)-convex set $M$ corresponding to the mapping $b$, and $\psi: \mathbb{R} \to \mathbb{R}$ is an increasing homogeneous mapping, then the composite mapping $\psi(g)$ is a b-(E,m)-convex mapping on $M$.
Now, we are ready to give some theorems involving b-(E,m)-convex mappings.
Theorem 1.
Each nonnegative b-(E,m)-convex mapping $g$ on the (E,m)-convex set $M$ is (E,m)-quasiconvex on $M$.
Proof. 
Noticing that $g$ is a nonnegative b-(E,m)-convex mapping on $M$, for every $\mu, \nu \in M$, $\tau \in [0,1]$ and the fixed $m \in [0,1]$, the condition $g\big(E(\mu)\big) \le g\big(E(\nu)\big)$ yields
$$\begin{aligned} g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) &\le \tau b(\mu,\nu,\tau)\, g\big(E(\mu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big) \\ &\le \tau b(\mu,\nu,\tau)\, g\big(E(\nu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big) \\ &= \big[\tau b(\mu,\nu,\tau) + m\big(1 - \tau b(\mu,\nu,\tau)\big)\big]\, g\big(E(\nu)\big) \\ &\le \big[\tau b(\mu,\nu,\tau) + 1 - \tau b(\mu,\nu,\tau)\big]\, g\big(E(\nu)\big) = g\big(E(\nu)\big). \end{aligned}$$
Similarly, if $g\big(E(\nu)\big) \le g\big(E(\mu)\big)$, then
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \le g\big(E(\mu)\big).$$
Hence,
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \le \max\big\{ g\big(E(\mu)\big),\ g\big(E(\nu)\big) \big\}.$$
Thus, $g$ is an (E,m)-quasiconvex mapping on $M$. □
Theorem 2.
Let us assume that $g: M \to \mathbb{R}$ is a b-(E,m)-convex mapping on the (E,m)-convex set $M$. Then the level set $S_\alpha = \{\mu \mid \mu \in M,\ \alpha \in \mathbb{R}_+,\ g(\mu) \le \alpha\}$ is an (E,m)-convex set.
Proof. 
If $\mu, \nu \in M$ with $E(\mu), E(\nu) \in S_\alpha$, $\tau \in [0,1]$ and the fixed $m \in [0,1]$, then
$$g\big(E(\mu)\big) \le \alpha,\qquad g\big(E(\nu)\big) \le \alpha,\qquad \tau E(\mu) + m(1-\tau)E(\nu) \in M.$$
Noticing that $\alpha \ge 0$, the condition that $g$ is a b-(E,m)-convex mapping on $M$ yields
$$\begin{aligned} g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) &\le \tau b(\mu,\nu,\tau)\, g\big(E(\mu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big) \\ &\le \tau b(\mu,\nu,\tau)\, \alpha + m\big(1 - \tau b(\mu,\nu,\tau)\big) \alpha \\ &\le \tau b(\mu,\nu,\tau)\, \alpha + \big(1 - \tau b(\mu,\nu,\tau)\big) \alpha = \alpha. \end{aligned}$$
Thus, $\tau E(\mu) + m(1-\tau)E(\nu) \in S_\alpha$, which proves that the level set $S_\alpha$ is an (E,m)-convex set. □
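A quick sampling check of Theorem 2 can be run with the data of Example 2 (an assumed illustration, not from the paper): take g and E from Example 2, m = 1/2 and the level α = 1/2, and test whether τE(μ) + m(1 − τ)E(ν) stays in S_α for sampled μ, ν ∈ S_α.

```python
import random

g = lambda x: x + 1.0 if x > 0 else 0.0    # g of Example 2
E = lambda x: -x * x                       # E of Example 2
m, alpha = 0.5, 0.5                        # assumed m and level alpha

# Sample points of the level set S_alpha = {x : g(x) <= alpha}.
S = [x for x in (random.uniform(-3.0, 3.0) for _ in range(2000)) if g(x) <= alpha]

ok = all(g(t * E(mu) + m * (1 - t) * E(nu)) <= alpha
         for mu in S[:50] for nu in S[:50] for t in (0.0, 0.25, 0.5, 0.75, 1.0))
print("S_alpha closed under the (E, m)-combination on all samples:", ok)
```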
Theorem 3.
Let us assume that $g: M \to \mathbb{R}$ is a differentiable b-(E,m)-convex mapping on the (E,m)-convex set $M$ corresponding to the mapping $b: M \times M \times [0,1] \to \mathbb{R}_+$. Suppose that, for certain fixed $\mu, \nu \in M$, $b(\mu,\nu,0) = \lim_{\tau \to 0} b(\mu,\nu,\tau)$. Then the following result holds:
$$\nabla g\big(mE(\nu)\big)^{T}\big(E(\mu) - mE(\nu)\big) \le b(\mu,\nu,0)\Big[g\big(E(\mu)\big) - g\big(mE(\nu)\big)\Big].$$
Proof. 
According to the Taylor expansion of $g$, invoking the b-(E,m)-convexity of $g$ yields
$$\begin{aligned} g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) &= g\Big(mE(\nu) + \tau\big(E(\mu) - mE(\nu)\big)\Big) \\ &= g\big(mE(\nu)\big) + \tau\, \nabla g\big(mE(\nu)\big)^{T}\big(E(\mu) - mE(\nu)\big) + o(\tau) \end{aligned}$$
and
$$g\big(\tau E(\mu) + m(1-\tau)E(\nu)\big) \le \tau b(\mu,\nu,\tau)\, g\big(E(\mu)\big) + m\big(1 - \tau b(\mu,\nu,\tau)\big) g\big(E(\nu)\big).$$
Notice that, when $\tau = 0$, $m\, g\big(E(\mu)\big) = g\big(mE(\mu)\big)$. Hence, by dividing the inequality above by $\tau$ and letting $\tau \to 0^{+}$, we deduce that
$$\nabla g\big(mE(\nu)\big)^{T}\big(E(\mu) - mE(\nu)\big) \le b(\mu,\nu,0)\Big[g\big(E(\mu)\big) - g\big(mE(\nu)\big)\Big],$$
which completes the proof of the desired result. □

3. b-(E,m)-Convex Programming

To demonstrate the application of the results established in the last section, the following nonlinear programming problem is considered in this section:
$$(P)\qquad \min\ \big\{\, g(\mu) \mid \mu \in M \subseteq \mathbb{R}^n \,\big\},$$
where $g: M \to \mathbb{R}$ is a differentiable b-(E,m)-convex mapping on the (E,m)-convex set $M$.
Theorem 4.
Suppose that $g: M \to \mathbb{R}$ is a nonnegative strictly b-(E,m)-convex mapping on an (E,m)-convex set $M$. Then the global optimal solution to the b-(E,m)-convex programming problem $(P)$ is unique.
Proof. 
We prove this result by contradiction. Assume that $\mu_1, \mu_2 \in M$, $\mu_1 \ne \mu_2$, are two global optimal solutions to $(P)$. Thus, $g\big(E(\mu_1)\big) = g\big(E(\mu_2)\big)$. Because $g$ is a nonnegative strictly b-(E,m)-convex mapping, we have
$$\begin{aligned} g\big(\tau E(\mu_1) + m(1-\tau)E(\mu_2)\big) &< \tau b(\mu_1,\mu_2,\tau)\, g\big(E(\mu_1)\big) + m\big(1 - \tau b(\mu_1,\mu_2,\tau)\big) g\big(E(\mu_2)\big) \\ &\le \tau b(\mu_1,\mu_2,\tau)\, g\big(E(\mu_1)\big) + \big(1 - \tau b(\mu_1,\mu_2,\tau)\big) g\big(E(\mu_2)\big) \\ &= g\big(E(\mu_1)\big) = g\big(E(\mu_2)\big), \end{aligned}$$
which implies that $E(\mu_1)$ and $E(\mu_2)$ are not global optimal solutions. This contradiction shows that the global optimal solution to the b-(E,m)-convex programming problem $(P)$ must be unique. □
Theorem 5.
Let us assume that $g: M \to \mathbb{R}$ is a differentiable b-(E,m)-convex mapping on the (E,m)-convex set $M$ corresponding to the mapping $b: M \times M \times [0,1] \to \mathbb{R}_+$. If $mE(\bar{\mu}) \in M$ and the inequality
$$\nabla g\big(mE(\bar{\mu})\big)^{T}\big(E(\mu) - mE(\bar{\mu})\big) \ge 0$$
holds for every $\mu \in M$ and the fixed $m \in [0,1]$, then $mE(\bar{\mu})$ is the optimal solution to the b-(E,m)-convex programming problem $(P)$ corresponding to $g$ on $M$.
Proof. 
Because $g$ is a differentiable b-(E,m)-convex mapping, by Theorem 3, for every $\mu \in M$ we obtain
$$\nabla g\big(mE(\bar{\mu})\big)^{T}\big(E(\mu) - mE(\bar{\mu})\big) \le b(\mu,\bar{\mu},0)\Big[g\big(E(\mu)\big) - g\big(mE(\bar{\mu})\big)\Big].$$
At the same time, noticing that $b(\mu,\bar{\mu},\tau) \ge 0$ and
$$\nabla g\big(mE(\bar{\mu})\big)^{T}\big(E(\mu) - mE(\bar{\mu})\big) \ge 0,$$
we can conclude that $g\big(E(\mu)\big) - g\big(mE(\bar{\mu})\big) \ge 0$. Hence, $mE(\bar{\mu})$ is the optimal solution to the b-(E,m)-convex programming problem $(P)$. This completes the proof. □
Now, let us apply the above results to nonlinear programming with the following inequality constraints:
$$(CP)\qquad \min\ g\big(E(\mu)\big) \quad \text{s.t.}\quad h_i\big(E(\mu)\big) \le 0,\quad i \in I = \{1, 2, \ldots, m\},\ \mu \in \mathbb{R}^n,$$
where $g: \mathbb{R}^n \to \mathbb{R}$ and $h_i\ (i \in I): \mathbb{R}^n \to \mathbb{R}$ are differentiable b-(E,m)-convex and $b_i$-(E,m)-convex functions, respectively. For convenience, we denote the feasible set of $(CP)$ by $M_E = \{\mu \in \mathbb{R}^n \mid h_i(\mu) \le 0,\ i \in I\}$.
Theorem 6.
Suppose that there exist mappings $E: \mathbb{R}^n \to \mathbb{R}^n$ and $b, b_i: \mathbb{R}^n \times \mathbb{R}^n \times [0,1] \to \mathbb{R}_+\ (i \in I)$ such that $g$ and $h_i\ (i \in I)$ are b-(E,m)-convex and $b_i$-(E,m)-convex on $\mathbb{R}^n$, respectively. If $g$ is nonnegative and strictly b-(E,m)-convex, then the global optimal solution to the b-(E,m)-convex programming problem $(CP)$ must be unique.
Proof. 
Since the proof of Theorem 6 is straightforward, we omit it here. □
The following theorem presents the Karush–Kuhn–Tucker (KKT) sufficient conditions.
Theorem 7.
Let us assume that $g$ is a differentiable b-(E,m)-convex mapping corresponding to $b$, and that the $h_i$ are differentiable $b_i$-(E,m)-convex mappings corresponding to $b_i\ (i \in I)$. Suppose that $mE(\mu^*) \in M_E$ is a KKT point of $(CP)$, namely, there exist multipliers $u_i \ge 0\ (i \in I)$ satisfying
$$\nabla g\big(mE(\mu^*)\big) + \sum_{i \in I} u_i\, \nabla h_i\big(mE(\mu^*)\big) = 0,\qquad u_i\, h_i\big(mE(\mu^*)\big) = 0.$$
Then, for the problem $(CP)$, we have a unique optimal solution $mE(\mu^*)$.
Proof. 
For every $\mu \in M_E$,
$$h_i(\mu) \le 0 = h_i\big(mE(\mu^*)\big),\qquad i \in I(\mu^*) = \big\{i \in I \mid h_i\big(mE(\mu^*)\big) = 0\big\}.$$
Therefore, by the $b_i$-(E,m)-convexity of $h_i$ and Theorem 3, for $i \in I(\mu^*)$ we obtain
$$\nabla h_i\big(mE(\mu^*)\big)^{T}\big(E(\mu) - mE(\mu^*)\big) \le b_i(\mu,\mu^*,0)\Big[h_i\big(E(\mu)\big) - h_i\big(mE(\mu^*)\big)\Big] \le 0.$$
Thus, using the KKT conditions and the multipliers $u_i \ge 0\ (i \in I(\mu^*))$, we deduce that
$$\nabla g\big(mE(\mu^*)\big)^{T}\big(E(\mu) - mE(\mu^*)\big) = -\sum_{i \in I} u_i\, \nabla h_i\big(mE(\mu^*)\big)^{T}\big(E(\mu) - mE(\mu^*)\big) \ge 0.$$
Hence, by Theorem 5, for every $\mu \in M_E$ with $\mu \ne \mu^*$, we conclude that $g\big(E(\mu)\big) \ge g\big(mE(\mu^*)\big)$. This proves that the problem $(CP)$ has a unique optimal solution $mE(\mu^*)$ and ends the proof. □

4. Multi-Objective b-(E,m)-Convex Programming

To consider an application of the results developed in Section 2 to multi-objective b-(E,m)-convex programming, we assume in this section that $E: M \to M$ ($M \subseteq \mathbb{R}^n$) is a surjection. Simultaneously, we define the mapping $(g \circ E): M \to \mathbb{R}$ by $(g \circ E)(\mu) = g\big(E(\mu)\big)$ for each $\mu \in M$.
Consider the following multi-objective nonlinear programming problem:
$$(MP)\qquad \min\ g(\mu) = \big(g_1(\mu), g_2(\mu), \ldots, g_p(\mu)\big) \quad \text{s.t.}\quad \mu \in M = \{\mu \in \mathbb{R}^n \mid h_j(\mu) \le 0,\ j = 1, 2, \ldots, q\},$$
where $g_i: \mathbb{R}^n \to \mathbb{R}\ (i \in P = \{1, 2, \ldots, p\})$ and $h_j: \mathbb{R}^n \to \mathbb{R}\ (j \in J = \{1, 2, \ldots, q\})$ are b-(E,m)-convex.
We also consider the b-(E,m)-convex programming problem corresponding to $(MP)$:
$$(MP_E)\qquad \min\ (g \circ E)(\mu) = \big((g_1 \circ E)(\mu), (g_2 \circ E)(\mu), \ldots, (g_p \circ E)(\mu)\big) \quad \text{s.t.}\quad \mu \in E(M) = \{\mu \in \mathbb{R}^n \mid (h_j \circ E)(\mu) \le 0,\ j = 1, 2, \ldots, q\},$$
where $g_i \circ E\ (i \in P)$ and $h_j \circ E\ (j \in J)$ are differentiable functions on $M$.
Definition 8.
A feasible point $\mu^* \in E(M)$ of problem $(MP_E)$ is called an effective solution if and only if there is no other $\mu \in E(M)$ satisfying $(g_i \circ E)(\mu) \le (g_i \circ E)(\mu^*)$ for every $i \in P$, with the inequality holding strictly for at least one $i_0 \in P$.
Theorem 8.
Suppose that $E: M \to M$ is a surjective mapping. Then $\bar{\mu}$ is an effective solution to $(MP_E)$ if and only if $E(\bar{\mu})$ is an effective solution to $(MP)$.
Proof. 
We omit the proof of Theorem 8 here because it is essentially the same as that of Theorem 3.1 in [19]. □
Based on Theorem 8, we present the following sufficient optimality conditions for multi-objective b-(E,m)-convex programming (MP).
Theorem 9.
(Sufficient optimality condition) Let $E: M \to M$ be a surjective mapping and let $M$ be an (E,m)-convex set. Suppose that $(\bar{\mu}, \bar{\tau}, \bar{\eta})$ has the following properties:
$$\bar{\tau}\, \nabla g\big(mE(\bar{\mu})\big) + \bar{\eta}\, \nabla h\big(mE(\bar{\mu})\big) = 0,\qquad \bar{\eta}\, h\big(mE(\bar{\mu})\big) = 0,\qquad h\big(mE(\bar{\mu})\big) \le 0,\qquad \bar{\tau} > 0,\ \bar{\eta} \ge 0,$$
where $\bar{\tau} \in \mathbb{R}^p$ and $\bar{\eta} \in \mathbb{R}^q$. Then $mE(\bar{\mu})$ must be an effective solution to $(MP)$.
Proof. 
On the contrary, if we assume that $mE(\bar{\mu})$ is not an effective solution to $(MP)$, then there is a $\mu^* \in M$ satisfying
$$g(\mu^*) \le g\big(mE(\bar{\mu})\big).$$
Since $g_i$ and $h_j$ are differentiable b-(E,m)-convex on $M$, combining this with Theorem 3, for every $\mu \in M$ we have
$$\nabla g_i\big(mE(\bar{\mu})\big)^{T}\big(E(\mu) - mE(\bar{\mu})\big) \le b_i\Big[g_i\big(E(\mu)\big) - g_i\big(mE(\bar{\mu})\big)\Big]$$
and
$$\nabla h_j\big(mE(\bar{\mu})\big)^{T}\big(E(\mu) - mE(\bar{\mu})\big) \le b_j\Big[h_j\big(E(\mu)\big) - h_j\big(mE(\bar{\mu})\big)\Big],$$
where $b_i = b_i(\mu,\bar{\mu},0)\ (i \in P)$ and $b_j = b_j(\mu,\bar{\mu},0)\ (j \in J)$. Due to $\bar{\tau} > 0$ and $\bar{\eta} \ge 0$, it follows from the two inequalities above that, for every $i \in P$ and $j \in J$,
$$\bar{\tau}\, b_{i\max}\Big[g\big(E(\mu)\big) - g\big(mE(\bar{\mu})\big)\Big] + \bar{\eta}\, b_{j\max}\Big[h\big(E(\mu)\big) - h\big(mE(\bar{\mu})\big)\Big] \ge \Big[\bar{\tau}\, \nabla g\big(mE(\bar{\mu})\big)^{T} + \bar{\eta}\, \nabla h\big(mE(\bar{\mu})\big)^{T}\Big]\big(E(\mu) - mE(\bar{\mu})\big),$$
where $b_{i\max} = \max\{b_i \mid i \in P\}$ and $b_{j\max} = \max\{b_j \mid j \in J\}$. By the conditions $\bar{\tau}\, \nabla g\big(mE(\bar{\mu})\big) + \bar{\eta}\, \nabla h\big(mE(\bar{\mu})\big) = 0$, $\bar{\eta}\, h\big(mE(\bar{\mu})\big) = 0$ and $h\big(mE(\bar{\mu})\big) \le 0$, we get
$$g\big(E(\mu)\big) \ge g\big(mE(\bar{\mu})\big),$$
which is a contradiction to the inequality $g(\mu^*) \le g\big(mE(\bar{\mu})\big)$ above. Thus, $mE(\bar{\mu})$ has to be an effective solution to $(MP)$. □
Now, we are ready to give the following result, which builds a bridge between scalar and multi-objective nonlinear programming.
Theorem 10.
The point $\bar{\mu} \in E(M)$ is an effective solution to $(MP_E)$ if and only if $\bar{\mu}$ solves
$$(MP_E)_k\qquad \min\ (g_k \circ E)(\mu) \quad \text{s.t.}\quad (g_i \circ E)(\mu) \le (g_i \circ E)(\bar{\mu}),\ i \in P_k := P \setminus \{k\},\quad (h \circ E)(\mu) \le 0,$$
for every $k = 1, 2, \ldots, p$.
Proof. 
Assume, to the contrary, that $\bar{\mu}$ does not solve the problem $(MP_E)_k$ for some $k \in P$. Then there exists a $\mu \in E(M)$ satisfying
$$(g_k \circ E)(\mu) < (g_k \circ E)(\bar{\mu}),$$
$$(g_i \circ E)(\mu) \le (g_i \circ E)(\bar{\mu}),\qquad i \ne k.$$
Thus, $\bar{\mu}$ is not an effective solution to $(MP_E)$ either.
Conversely, if $\bar{\mu}$ solves the problem $(MP_E)_k$ for each $k \in P$, then for any $\mu \in E(M)$ with the property that $(g_i \circ E)(\mu) \le (g_i \circ E)(\bar{\mu})$, $i \ne k$, we have $(g_k \circ E)(\mu) \ge (g_k \circ E)(\bar{\mu})$, which implies that there is no $\mu \in E(M)$ satisfying $(g_i \circ E)(\mu) \le (g_i \circ E)(\bar{\mu})$ for all $i \in P$ with the inequality holding strictly for at least one $i$. This implies that $\bar{\mu}$ is an effective solution of $(MP_E)$. The proof is complete. □
Without loss of generality, let us assume that $P \cap J = \varnothing$. Setting
$$(G_t \circ E)(\mu) = \begin{cases} (g_t \circ E)(\mu) - (g_t \circ E)(\bar{\mu}), & t \in P_k; \\ (h_t \circ E)(\mu), & t \in J, \end{cases}$$
and $T = P_k \cup J$, it is clear that $(MP_E)_k$ can be rewritten as
$$(MP_E)_{G_t}\qquad \min\ (g_k \circ E)(\mu) \quad \text{s.t.}\quad (G_t \circ E)(\mu) \le 0,\ t \in T,$$
for each $k \in P$.
To establish the necessary optimality conditions, the following theorem is developed. We skip the proof, which is due to Mangasarian [20].
Theorem 11.
Assume that $\bar{\mu} \in E(M)$ is a local solution to $(MP_E)$. Let $g_k \circ E$ (for each $k \in P$) and $G_t \circ E\ (t \in T)$ be differentiable functions. Furthermore, let
  • $O = \{t \mid (G_t \circ E)(\bar{\mu}) = 0\}$, $Q = \{t \mid (G_t \circ E)(\bar{\mu}) < 0\}$, $O \cup Q = T$;
  • $V = \{t \mid (G_t \circ E)(\bar{\mu}) = 0$ and $G_t \circ E$ is b-(E,m)-concave at $\bar{\mu}\}$;
  • $W = \{t \mid (G_t \circ E)(\bar{\mu}) = 0$ and $G_t \circ E$ is not b-(E,m)-concave at $\bar{\mu}\}$; and
  • $O = V \cup W$.
Then the system of inequalities
$$\nabla(g_k \circ E)(\bar{\mu})\, z < 0,\qquad \nabla(G_W \circ E)(\bar{\mu})\, z < 0,\qquad \nabla(G_V \circ E)(\bar{\mu})\, z \le 0$$
does not have any solution $z \in \mathbb{R}^n$, for every $k \in P$.
Using Theorem 10 and Theorem 11, we present Fritz–John necessary optimality criteria as follows.
Theorem 12.
If $\bar{\mu} \in E(M)$ is an effective solution to $(MP_E)$, then there exist $\bar{\tau} \in \mathbb{R}^p$ and $\bar{\eta} \in \mathbb{R}^q$ satisfying
$$\bar{\tau}\, \nabla(g \circ E)(\bar{\mu}) + \bar{\eta}\, \nabla(h \circ E)(\bar{\mu}) = 0,\qquad \bar{\eta}\, (h \circ E)(\bar{\mu}) = 0,\qquad (h \circ E)(\bar{\mu}) \le 0,\qquad (\bar{\tau}, \bar{\eta}) \ge 0.$$
Proof. 
Because $\bar{\mu}$ is an effective solution to $(MP_E)$, by Theorem 10, $\bar{\mu}$ solves $(MP_E)_k$ for every $k \in P$. Combining this with Theorem 11, the system of inequalities
$$\nabla(g_k \circ E)(\bar{\mu})\, z < 0,\qquad \nabla(G_W \circ E)(\bar{\mu})\, z < 0,\qquad \nabla(G_V \circ E)(\bar{\mu})\, z \le 0$$
does not have any solution $z \in \mathbb{R}^n$ for each $k \in P$. Thus, by Motzkin's theorem [20], there exist $\bar{\tau}_k$, $\bar{\eta}_W$, $\bar{\eta}_V$ satisfying
$$\bar{\tau}_k\, \nabla(g_k \circ E)(\bar{\mu}) + \bar{\eta}_W\, \nabla(G_W \circ E)(\bar{\mu}) + \bar{\eta}_V\, \nabla(G_V \circ E)(\bar{\mu}) = 0,\qquad (\bar{\tau}_k, \bar{\eta}_W) \ge 0,\ \bar{\eta}_V \ge 0.$$
Because $(G_W \circ E)(\bar{\mu}) = 0$ and $(G_V \circ E)(\bar{\mu}) = 0$, if we set
$$\bar{\eta}_Q = 0,\qquad \bar{\eta} = (\bar{\eta}_W, \bar{\eta}_V, \bar{\eta}_Q),$$
then we have
$$\bar{\eta}\, (G \circ E)(\bar{\mu}) = \bar{\eta}_W\, (G_W \circ E)(\bar{\mu}) + \bar{\eta}_V\, (G_V \circ E)(\bar{\mu}) + \bar{\eta}_Q\, (G_Q \circ E)(\bar{\mu}) = 0$$
and
$$\bar{\tau}_k\, \nabla(g_k \circ E)(\bar{\mu}) + \bar{\eta}\, \nabla(G \circ E)(\bar{\mu}) = 0.$$
Therefore, for every $k \in P$, it follows that
$$\bar{\tau}\, \nabla(g \circ E)(\bar{\mu}) + \bar{\eta}\, \nabla(h \circ E)(\bar{\mu}) = 0,\qquad \bar{\eta}\, (h \circ E)(\bar{\mu}) = 0,\qquad (\bar{\tau}, \bar{\eta}) \ge 0,\qquad \bar{\tau} = (\bar{\tau}_1, \bar{\tau}_2, \ldots, \bar{\tau}_p).$$
Noticing that $\bar{\mu} \in E(M)$ implies $(h \circ E)(\bar{\mu}) \le 0$, the proof is finished. □

5. Duality Theorems

As another application of the results stated in Section 2, the Wolfe duality theorems for $(CP)$ under b-(E,m)-convexity are considered in this section.
Consider the following programming problem:
$$(D_E)\qquad \min\ g\big(E(z)\big) + u^{T} h\big(E(z)\big) \quad \text{s.t.}\quad \nabla g\big(E(z)\big) + \nabla h\big(E(z)\big)\, u = 0,\quad u \ge 0,$$
where $u = (u_1, \ldots, u_n)^{T}$, $h(z) = \big(h_1(z), \ldots, h_n(z)\big)^{T}$ and $\nabla h(z) = \big(\nabla h_1(z), \ldots, \nabla h_n(z)\big)$. Suppose that the functions $g$ and $h_i\ (i \in I)$ are differentiable b-(E,m)-convex functions. Similarly, the feasible set of $(D_E)$ is denoted by $\bar{M}_E = \big\{(z, u) \mid \nabla g\big(E(z)\big) + \nabla h\big(E(z)\big)\, u = 0,\ u \ge 0\big\}$.
For convenience, in the following theorems and corollaries, we write
$$b_0 = \lim_{\tau \to 0} b(\mu, z, \tau),\qquad \bar{b}_i = \lim_{\tau \to 0} b_i(\mu, z, \tau),$$
$$t_0 = \lim_{\tau \to 0} \frac{1 - \tau b(z, \mu, \tau)}{1 - \tau},\qquad \bar{t}_i = \lim_{\tau \to 0} \frac{1 - \tau b_i(z, \mu, \tau)}{1 - \tau},$$
$$I_1(z) = \{i \in I \mid h_i(z) < 0\},\qquad I_2(z) = \{i \in I \mid h_i(z) > 0\},$$
$$b' = \max\{0, \bar{b}_i,\ i \in I_1(z)\},\qquad b'' = \min\{0, \bar{b}_i,\ i \in I_2(z)\},$$
$$t' = \max\{0, \bar{t}_i,\ i \in I_1(z)\},\qquad t'' = \min\{0, \bar{t}_i,\ i \in I_2(z)\}.$$
Theorem 13.
(Weak Duality Theorem) Suppose that $g$ and $h_i\ (i \in I)$ are differentiable b-(E,m)-convex and $b_i$-(E,m)-convex on $\mathbb{R}^n$ corresponding to the mappings $b$ and $b_i\ (i \in I)$, respectively. If $\mu \in M_E$, $(z, u) \in \bar{M}_E$ and $b' \le b_0 \le b''$, then $g\big(E(\mu)\big) \ge m\, g\big(E(z)\big) + m\, u^{T} h\big(E(z)\big)$ holds for every feasible point $\mu$ of $(CP)$.
Proof. 
Combining the Taylor expansion of $g$ with the b-(E,m)-convexity of $g$ gives
$$\nabla g\big(mE(z)\big)^{T}\big(E(\mu) - mE(z)\big) \le b_0\Big[g\big(E(\mu)\big) - m\, g\big(E(z)\big)\Big].$$
Similarly, letting $A = \mathrm{diag}(\bar{b}_i,\ i \in I)$, we have
$$\nabla h\big(mE(z)\big)^{T}\big(E(\mu) - mE(z)\big) \le A\Big[h\big(E(\mu)\big) - m\, h\big(E(z)\big)\Big].$$
From the constraint of $(D_E)$, we have
$$\nabla g\big(mE(z)\big)^{T}\big(E(\mu) - mE(z)\big) = -u^{T}\, \nabla h\big(mE(z)\big)^{T}\big(E(\mu) - mE(z)\big).$$
Since $u \ge 0$, $h(\mu) \le 0$ and $\bar{b}_i \ge 0$, the three relations above give
$$b_0\Big[g\big(E(\mu)\big) - m\, g\big(E(z)\big)\Big] \ge u^{T} A\Big[m\, h\big(E(z)\big) - h\big(E(\mu)\big)\Big] = \sum_{i \in I} u_i \bar{b}_i\Big[m\, h_i\big(E(z)\big) - h_i\big(E(\mu)\big)\Big] \ge \sum_{i \in I} m\, u_i \bar{b}_i\, h_i\big(E(z)\big) = \sum_{i \in I_1(z)} m\, u_i \bar{b}_i\, h_i\big(E(z)\big) + \sum_{i \in I_2(z)} m\, u_i \bar{b}_i\, h_i\big(E(z)\big).$$
According to $b' \le b_0 \le b''$ and $b_0 > 0$, dividing both sides of the inequality above by $b_0$ yields
$$g\big(E(\mu)\big) - m\, g\big(E(z)\big) \ge \sum_{i \in I_1(z)} m\, u_i \frac{\bar{b}_i}{b_0}\, h_i\big(E(z)\big) + \sum_{i \in I_2(z)} m\, u_i \frac{\bar{b}_i}{b_0}\, h_i\big(E(z)\big) \ge \sum_{i \in I_1(z)} m\, u_i\, h_i\big(E(z)\big) + \sum_{i \in I_2(z)} m\, u_i\, h_i\big(E(z)\big) = m\, u^{T} h\big(E(z)\big).$$
Hence, $g\big(E(\mu)\big) \ge m\, g\big(E(z)\big) + m\, u^{T} h\big(E(z)\big)$, and the proof is completed. □
Corollary 1.
Suppose that $g$ and $h_i\ (i \in I)$ are differentiable b-(E,m)-convex mappings on $\mathbb{R}^n$ corresponding to the same mapping $b$. If $\mu \in M_E$ and $(z, u) \in \bar{M}_E$, then $g\big(E(\mu)\big) \ge m\, g\big(E(z)\big) + m\, u^{T} h\big(E(z)\big)$ holds for each feasible point $\mu$ of $(CP)$.
Theorem 14.
(Strong Duality Theorem) Let $g$ be a differentiable b-(E,m)-convex mapping corresponding to the mapping $b$, let $h_i\ (i \in I)$ be $b_i$-(E,m)-convex functions on $\mathbb{R}^n$ corresponding to the mappings $b_i\ (i \in I)$, and let $(\mu^*, u^*)$ be a KKT point of $(CP)$. Assume that
(1) $\mu^*$ is an optimal solution of $(CP)$; and
(2) for each $(z, u) \in \bar{M}_E$, $t_0 > 0$, $\bar{t}_i > 0$ and $t' \le t_0 \le t''$.
Then $(\mu^*, u^*)$ is an optimal solution to $(D_E)$. Moreover, the optimal values of $(CP)$ and $(D_E)$ are the same.
Proof. 
Invoking the Taylor expansion again, in terms of the b-(E,m)-convexity of $g$, we get
$$\nabla g\big(mE(z)\big)^{T}\big(mE(\mu^*) - E(z)\big) \le t_0\Big[m\, g\big(E(\mu^*)\big) - g\big(E(z)\big)\Big].$$
Similarly, letting $H = \mathrm{diag}(\bar{t}_i,\ i \in I)$, we have
$$\nabla h\big(mE(z)\big)^{T}\big(mE(\mu^*) - E(z)\big) \le H\Big[m\, h\big(E(\mu^*)\big) - h\big(E(z)\big)\Big].$$
At the same time, since $(\mu^*, u^*)$ is a KKT point of $(CP)$, we have $\nabla g\big(mE(\mu^*)\big) + \sum_{i \in I} u_i^*\, \nabla h_i\big(mE(\mu^*)\big) = 0$ and $u_i^*\, h_i\big(mE(\mu^*)\big) = 0$, which implies that $(\mu^*, u^*)$ is a feasible solution to $(D_E)$. Using the two inequalities above and noticing that $u \ge 0$, $h\big(E(\mu^*)\big) \le 0$ and $t' \le t_0 \le t''$, we have
$$t_0\Big(g\big(E(\mu^*)\big) + (u^*)^{T} h\big(E(\mu^*)\big) - g\big(E(z)\big) - u^{T} h\big(E(z)\big)\Big) \le u^{T} H\Big[m\, h\big(E(\mu^*)\big) - h\big(E(z)\big)\Big] - t_0\, u^{T} h\big(E(z)\big) \le m\, u^{T} H\, h\big(E(\mu^*)\big) + \sum_{i \in I_1(z)} u_i (\bar{t}_i - t_0)\, h_i\big(E(z)\big) + \sum_{i \in I_2(z)} u_i (\bar{t}_i - t_0)\, h_i\big(E(z)\big) \le 0.$$
Thus, in view of $t_0 > 0$, we obtain
$$g\big(E(\mu^*)\big) + (u^*)^{T} h\big(E(\mu^*)\big) \le g\big(E(z)\big) + u^{T} h\big(E(z)\big),\qquad \forall\, (z, u) \in \bar{M}_E.$$
That is to say, $(\mu^*, u^*)$ is an optimal solution of $(D_E)$. Furthermore, the optimal values of $(CP)$ and $(D_E)$ are equal to each other. This completes the proof. □
Corollary 2.
Let $g$ and $h_i\ (i \in I)$ be differentiable b-(E,m)-convex functions on $\mathbb{R}^n$ corresponding to the same mapping $b$, and let $(\mu^*, u^*)$ be a KKT point of $(CP)$. Assume that
(1) $\mu^*$ is an optimal solution to $(CP)$; and
(2) for each $(z, u) \in \bar{M}_E$, $t_0 > 0$.
Then $(\mu^*, u^*)$ is an optimal solution to $(D_E)$. Moreover, the optimal values of $(CP)$ and $(D_E)$ are the same.

6. Applications in b-(E,m)-Convex Programming

To illustrate the optimality conditions proposed in this paper for b-(E,m)-convex programming and multi-objective b-(E,m)-convex programming, respectively, two examples are constructed in this section.
Example 4.
Let us study the problem described as follows:
$$(\overline{CP})\qquad \min\ g(\mu) \quad \text{s.t.}\quad \mu \in M = \{\mu \in \mathbb{R} \mid h_1(\mu) \le 0,\ h_2(\mu) \le 0\},$$
where $g(\mu) = \mu + \mu^2$, $h_1(\mu) = \mu - 1$ and $h_2(\mu) = -\mu$. Suppose $E: M \to E(M)$ is defined by $E(\mu) = \mu - 1$. It is elementary to check that $E$ is a surjective mapping. Let $b(\mu,\nu,\tau) \equiv 1$ and $m = \tfrac{1}{2}$; then $g(\mu)$, $h_1(\mu)$ and $h_2(\mu)$ are differentiable b-(E,m)-convex mappings corresponding to the same mapping $E(\mu)$ and $b(\mu,\nu,\tau) \equiv 1$ on $M$.
It is obvious that the feasible set is $M = [0,1]$ and that $\mu = 0$ is the optimal solution of $(\overline{CP})$. Using the Karush–Kuhn–Tucker sufficient conditions (Theorem 7), we have
$$\begin{aligned} \nabla g\big(mE(\mu^*)\big) + \sum_{i=1}^{2} \eta_i\, \nabla h_i\big(mE(\mu^*)\big) &= 2\Big(\tfrac{1}{2}(\mu^* - 1)\Big) + 1 + \eta_1 - \eta_2 = 0, \\ \eta_1\, h_1\big(mE(\mu^*)\big) &= \eta_1\Big(\tfrac{1}{2}(\mu^* - 1) - 1\Big) = 0, \\ \eta_2\, h_2\big(mE(\mu^*)\big) &= \eta_2\Big(-\tfrac{1}{2}(\mu^* - 1)\Big) = 0. \end{aligned}$$
We get $\eta_1 = 0$, $\eta_2 = 1$ and $\mu^* = 1$, so $mE(\mu^*) = 0$ is the optimal solution to $(\overline{CP})$, which is an application of Theorem 7.
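The KKT system above can also be verified mechanically. The following sketch (not from the paper; the variable names are illustrative) plugs the candidate μ* = 1 and multipliers η = (0, 1) into the stationarity and complementarity conditions and confirms by brute force that mE(μ*) = 0 minimizes g over the feasible set [0, 1].

```python
import numpy as np

g  = lambda x: x + x**2                     # objective of (CP-bar)
dg = lambda x: 1.0 + 2.0 * x                # its derivative
h  = [lambda x: x - 1.0, lambda x: -x]      # constraints h_i(x) <= 0
dh = np.array([1.0, -1.0])                  # their (constant) derivatives
E  = lambda mu: mu - 1.0
m  = 0.5

mu_star = 1.0
eta = np.array([0.0, 1.0])                  # multipliers read off from the text
x = m * E(mu_star)                          # the point m*E(mu*) = 0

print("stationarity:", dg(x) + eta @ dh)                        # expect 0
print("complementarity:", [eta[i] * h[i](x) for i in range(2)]) # expect [0, 0]

# Brute-force confirmation that x = 0 minimises g over the feasible set [0, 1].
grid = np.linspace(0.0, 1.0, 10_001)
print("minimiser of g on [0, 1]:", grid[np.argmin(g(grid))])
```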
Example 5.
Let us consider the problem described as follows:
$$(\overline{MP})\qquad \min\ g(\mu) = \big(g_1(\mu), g_2(\mu)\big) \quad \text{s.t.}\quad \mu \in M = \{\mu \in \mathbb{R} \mid h_1(\mu) \le 0,\ h_2(\mu) \le 0\},$$
where $g_1(\mu) = \mu$, $g_2(\mu) = \mu^2$, $h_1(\mu) = \mu - 1$ and $h_2(\mu) = -\mu$.
Similar to Example 4, suppose $E: M \to E(M)$ is defined by $E(\mu) = \mu - 1$, $b(\mu,\nu,\tau) \equiv 1$ and $m = \tfrac{1}{2}$. Then $g_1(\mu)$, $g_2(\mu)$, $h_1(\mu)$ and $h_2(\mu)$ are differentiable b-(E,m)-convex mappings corresponding to the same mapping $E(\mu)$ and $b(\mu,\nu,\tau) \equiv 1$ on the (E,m)-convex set $M = [0,1]$. We also have the following b-(E,m)-convex multi-objective nonlinear programming problem related to $(\overline{MP})$:
$$(\overline{MP}_E)\qquad \min\ (g \circ E)(\mu) = \big((g_1 \circ E)(\mu), (g_2 \circ E)(\mu)\big) \quad \text{s.t.}\quad \mu \in E(M) = \{\mu \in \mathbb{R} \mid (h_1 \circ E)(\mu) \le 0,\ (h_2 \circ E)(\mu) \le 0\},$$
where $(g_1 \circ E)(\mu) = \mu - 1$, $(g_2 \circ E)(\mu) = \mu^2 - 2\mu + 1$, $(h_1 \circ E)(\mu) = \mu - 2$ and $(h_2 \circ E)(\mu) = -\mu + 1$.
(a) It can be checked easily that the feasible sets of $(\overline{MP})$ and $(\overline{MP}_E)$ are $M = [0,1]$ and $E(M) = [1,2]$, respectively.
(b) In terms of the definition of the effective solution, we have that $\bar{\mu} = 1 \in E(M)$ is an effective solution to $(\overline{MP}_E)$ and $E(\bar{\mu}) = 0 \in M$ is an effective solution to $(\overline{MP})$. This is an application of Theorem 8.
(c) It can be easily checked that $\big(\bar{\mu}, (\bar{\tau}_1, \bar{\tau}_2), (\bar{\eta}_1, \bar{\eta}_2)\big) = \big(1, (\tfrac{1}{2}, 1), (0, \tfrac{1}{2})\big)$ satisfies the assumptions of Theorem 9, and $mE(\bar{\mu}) = 0$ is the effective solution to $(\overline{MP})$. This is an application of Theorem 9.
(d) Since $\bar{\mu} = 1$ is an effective solution of $(\overline{MP}_E)$, it also solves both $(\overline{MP}_E)_1$ and $(\overline{MP}_E)_2$:
$$(\overline{MP}_E)_1\qquad \min\ (g_1 \circ E)(\mu) \quad \text{s.t.}\quad (g_2 \circ E)(\mu) \le (g_2 \circ E)(\bar{\mu}),\quad (h \circ E)(\mu) \le 0$$
and
$$(\overline{MP}_E)_2\qquad \min\ (g_2 \circ E)(\mu) \quad \text{s.t.}\quad (g_1 \circ E)(\mu) \le (g_1 \circ E)(\bar{\mu}),\quad (h \circ E)(\mu) \le 0,$$
where $(h \circ E)(\mu) = \big((h_1 \circ E)(\mu), (h_2 \circ E)(\mu)\big)$. Hence, this is an application of Theorem 10.
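Point (b) can also be confirmed by exhaustive sampling. The sketch below (an assumed illustration, not from the paper) enumerates the non-dominated sampled points of $(\overline{MP})$ on M = [0, 1] and of $(\overline{MP}_E)$ on E(M) = [1, 2]; each run should return the single effective solution, 0 and 1 respectively.

```python
import numpy as np

def effective_points(points, objectives):
    """Return sampled points not dominated by any other sampled point."""
    vals = np.array([[f(p) for f in objectives] for p in points])
    keep = []
    for i, vi in enumerate(vals):
        dominated = any(np.all(vj <= vi) and np.any(vj < vi)
                        for j, vj in enumerate(vals) if j != i)
        if not dominated:
            keep.append(float(points[i]))
    return keep

# (MP-bar): minimise (g1, g2) = (mu, mu^2) over M = [0, 1].
M = np.linspace(0.0, 1.0, 101)
print("effective solutions of (MP-bar):",
      effective_points(M, [lambda x: x, lambda x: x**2]))

# (MP-E-bar): minimise ((g1 o E), (g2 o E)) = (mu - 1, (mu - 1)^2) over E(M) = [1, 2].
EM = np.linspace(1.0, 2.0, 101)
print("effective solutions of (MP-E-bar):",
      effective_points(EM, [lambda x: x - 1.0, lambda x: (x - 1.0)**2]))
```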

7. Conclusions

(E,m)-convex sets and b-(E,m)-convex mappings are introduced in this paper. Since an (E,m)-convex set is exactly an E-convex set when m = 1, and a b-(E,m)-convex mapping is exactly an E-convex mapping when m = 1 and b(μ,ν,τ) ≡ 1, the b-(E,m)-convex mappings are a generalization of E-convex, m-convex and b-vex mappings. The properties of these sets and mappings are derived, among which we are mainly concerned with the operations that preserve (E,m)-convexity. Using these properties, especially the b-(E,m)-convexity, we study unconstrained b-(E,m)-convex programming as well as inequality-constrained b-(E,m)-convex programming. During this process, the sufficient conditions of optimality are discussed in detail. We also establish the uniqueness of the solution to b-(E,m)-convex programming. Moreover, we obtain the sufficient optimality conditions and the Fritz–John necessary optimality criteria for nonlinear multi-objective b-(E,m)-convex programming and present the duality theorems under b-(E,m)-convexity. Finally, to illustrate the effectiveness of the proposed results, we provide two examples concerning applications in b-(E,m)-convex programming.
To some extent, the method developed in this paper is not especially deep, since it does not go beyond the standard process of the E-convexity approach. However, we do work out some concrete computational cases, which unify E-convexity, m-convexity and b-invexity. To the best of our knowledge, this is the first time these three classes of mappings have been treated uniformly. In future work, we may extend the ideas and techniques presented in this paper to Riemannian manifolds.

Author Contributions

All authors contributed equally and significantly to this paper. All authors read and approved the final manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grant No. 11301296.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Youness, E.A. E-convex sets, E-convex functions, and E-convex programming. J. Optim. Theory Appl. 1999, 102, 439–450.
  2. Bector, C.R.; Singh, C. B-vex functions. J. Optim. Theory Appl. 1991, 71, 439–453.
  3. Iqbal, A.; Ali, S.; Ahmad, I. On geodesic E-convex sets, geodesic E-convex functions and E-epigraphs. J. Optim. Theory Appl. 2012, 155, 239–251.
  4. Mishra, S.K.; Mohapatra, R.N.; Youness, E.A. Some properties of semi E-b-vex functions. Appl. Math. Comput. 2011, 217, 5525–5530.
  5. Syau, Y.-R.; Jia, L.X.; Lee, E.S. Generalizations of E-convex and B-vex functions. Comput. Math. Appl. 2009, 58, 711–716.
  6. Yang, X.M. On E-convex sets, E-convex functions, and E-convex programming. J. Optim. Theory Appl. 2001, 109, 699–704.
  7. Ahmad, I.; Jayswal, A.; Kumari, B. Characterizations of geodesic sub-b-s-convex functions on Riemannian manifolds. J. Nonlinear Sci. Appl. 2018, 11, 189–197.
  8. Emam, T. Roughly B-invex programming problems. Calcolo 2011, 48, 173–188.
  9. Fulga, C.; Preda, V. Nonlinear programming with E-preinvex and local E-preinvex functions. Eur. J. Oper. Res. 2009, 192, 737–743.
  10. Gulati, T.R.; Verma, K. Nondifferentiable higher order symmetric duality under invexity/generalized invexity. Filomat 2014, 28, 1661–1674.
  11. Kiliçman, A.; Saleh, W. Some properties of geodesic semi E-b-vex functions. Open Math. 2015, 13, 795–804.
  12. Megahed, A.A.; El-Banna, A.Z.; Youness, E.A.; Gomaa, H.G. A combined interactive approach for solving E-convex multiobjective nonlinear programming problem. Appl. Math. Comput. 2011, 217, 6777–6784.
  13. Pitea, A.; Antczak, T. Proper efficiency and duality for a new class of nonconvex multitime multiobjective variational problems. J. Inequal. Appl. 2014, 2014, 333.
  14. Antczak, T.; Pitea, A. Parametric approach to multitime multiobjective fractional variational problems under (F, ρ)-convexity. Optim. Control Appl. Meth. 2016, 24, 831–847.
  15. Liao, J.G.; Du, T.S. On some characterizations of sub-b-s-convex functions. Filomat 2016, 14, 3885–3895.
  16. Liao, J.G.; Du, T.S. Optimality conditions in sub-(b,m)-convex programming. Politehn. Univ. Bucharest Sci. Bull. Ser. A Appl. Math. Phys. 2017, 79, 95–106.
  17. Kiliçman, A.; Saleh, W. Generalized preinvex functions and their applications. Symmetry 2018, 10, 493.
  18. Toader, G.H. Some generalisations of the convexity. In Proceedings of the Colloquium on Approximation and Optimization, Cluj-Napoca, Romania, 25–27 October 1984; pp. 329–338.
  19. Piao, G.R.; Jiao, L.; Kim, D.S. Optimality and mixed duality in multiobjective E-convex programming. J. Inequal. Appl. 2015, 2015, 335.
  20. Mangasarian, O.L. Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1969.
