Article

Multiobjective Fractional Symmetric Duality in Mathematical Programming with (C,Gf)-Invexity Assumptions

by Ramu Dubey 1, Lakshmi Narayan Mishra 2,* and Clemente Cesarano 3

1 Department of Mathematics, J.C. Bose University of Science and Technology, YMCA, Faridabad 121 006, India
2 Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology (VIT) University, Vellore 632 014, Tamil Nadu, India
3 Section of Mathematics, International Telematic University UNINETTUNO, C.so Vittorio Emanuele II, 39, 00186 Roma, Italy
* Author to whom correspondence should be addressed.
Axioms 2019, 8(3), 97; https://doi.org/10.3390/axioms8030097
Submission received: 21 June 2019 / Revised: 6 August 2019 / Accepted: 7 August 2019 / Published: 13 August 2019

Abstract

In this paper, we introduce a new class of (C, G_f)-invex functions and give a nontrivial numerical example to justify that such functions exist. We also recall related generalized convexity notions (such as (F, G_f)-invexity and C-convexity). We then consider Mond–Weir type fractional symmetric dual programs and derive duality results under (C, G_f)-invexity assumptions. Our results generalize several known results in the literature.

1. Introduction

The goal of optimization is to find the best values of the decision variables so as to achieve satisfactory performance. Optimization is an active and fast-growing research area with a great impact on the real world. In most real-life problems, decisions are made by taking into account several conflicting criteria rather than by optimizing a single objective; such problems are called multiobjective programming problems. Multiobjective programming problems arise in the mathematical modelling of real-world systems across a very broad range of applications.
In 1981, Hanson [1] introduced the concept of invexity, an extension of differentiable convexity, and proved the sufficiency of the Kuhn–Tucker conditions. Antczak [2] introduced the concept of G-invex functions and derived optimality conditions for constrained optimization problems under G-invexity. In [3], Antczak extended this notion by defining vector-valued G_f-invex functions and proved necessary and sufficient optimality conditions for multiobjective nonlinear programming problems. Recently, Kang et al. [4] defined G-invexity for locally Lipschitz functions and obtained optimality conditions for multiobjective programming using these functions. Many researchers have contributed to this area [5,6,7].
In the last several years, various optimality and duality results have been obtained for multiobjective fractional programming problems. Motivated by various concepts of generalized convexity, Ferrara and Stefanescu [8] used (ϕ, ρ)-invexity to discuss optimality conditions and duality results for multiobjective programming problems. Further, Stefanescu and Ferrara [9] introduced a new class of (ϕ, ρ)_ω-invexity for multiobjective programs and established optimality conditions and duality theorems under these assumptions.
In this article, we introduce the notions of (C, G_f)-invexity and (F, G_f)-invexity and construct a nontrivial numerical example illustrating the existence of such functions. We then consider a pair of multiobjective Mond–Weir type symmetric fractional primal-dual problems and, under (C, G_f)-invexity assumptions, derive duality results.

2. Preliminaries and Definitions

Consider the following vector minimization problem:
(MP)  Minimize f(x) = (f_1(x), f_2(x), …, f_k(x))^T
      subject to x ∈ X^0 = { x ∈ X ⊆ R^n : g_j(x) ≤ 0, j = 1, 2, …, m },
where f = (f_1, f_2, …, f_k) : X → R^k and g = (g_1, g_2, …, g_m) : X → R^m are differentiable functions defined on X.
Definition 1
([10]). A point x̄ ∈ X^0 is said to be an efficient solution of (MP) if there exists no other x ∈ X^0 such that f_r(x) < f_r(x̄) for some r ∈ {1, 2, …, k} and f_i(x) ≤ f_i(x̄) for all i = 1, 2, …, k.
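To make Definition 1 concrete, the following sketch (not part of the paper; the function names are hypothetical) checks efficiency numerically over a finite sample of feasible points of a toy bi-objective problem:

```python
import numpy as np

def is_efficient(candidate, sample, f):
    """Return False if some sampled point dominates `candidate`, i.e.
    f_i(x) <= f_i(candidate) for every i with strict inequality for some i."""
    fc = f(candidate)
    for x in sample:
        fx = f(x)
        if np.all(fx <= fc) and np.any(fx < fc):
            return False  # x dominates candidate
    return True

# Toy bi-objective problem: minimize (x^2, (x - 1)^2) over a grid on [0, 1];
# every point of [0, 1] is efficient for this problem.
f = lambda x: np.array([x**2, (x - 1.0)**2])
grid = np.linspace(0.0, 1.0, 101)
print(is_efficient(0.5, grid, f))    # an interior Pareto point
print(is_efficient(-0.2, grid, f))   # dominated, e.g. by x = 0
```

Such a sampling check can of course only refute efficiency, not prove it; it is meant purely as an illustration of the dominance relation in Definition 1.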
Let f = (f_1, …, f_k) : X → R^k be a differentiable function defined on a nonempty open set X ⊆ R^n (∅ ≠ X), and let I_{f_i}(X), i = 1, 2, …, k, be the range of f_i.
Definition 2
([11]). Let C : X × X × R^n → R (X ⊆ R^n) be a function satisfying C_{x,u}(0) = 0 for all (x, u) ∈ X × X. The function C is said to be convex on R^n with respect to its third argument iff, for any fixed (x, u) ∈ X × X,
C_{x,u}(λx_1 + (1 − λ)x_2) ≤ λ C_{x,u}(x_1) + (1 − λ) C_{x,u}(x_2), ∀ λ ∈ (0, 1), ∀ x_1, x_2 ∈ R^n.
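As an illustration of Definition 2 (a sketch added here, not from the paper), one can verify numerically that, for a fixed pair (x, u), the toy choice C_{x,u}(a) = ‖a‖² vanishes at a = 0 and satisfies the convexity inequality in its third argument:

```python
import numpy as np

# Toy C for fixed (x, u): C_{x,u}(a) = ||a||^2, convex in a with C_{x,u}(0) = 0.
C = lambda a: float(np.dot(a, a))

rng = np.random.default_rng(0)
for _ in range(200):
    a1, a2 = rng.normal(size=3), rng.normal(size=3)
    for lam in (0.25, 0.5, 0.75):
        lhs = C(lam * a1 + (1.0 - lam) * a2)
        rhs = lam * C(a1) + (1.0 - lam) * C(a2)
        assert lhs <= rhs + 1e-12   # convexity inequality of Definition 2

assert C(np.zeros(3)) == 0.0        # C_{x,u}(0) = 0
```

A random-sample check like this does not prove convexity, but it is a convenient sanity test when experimenting with candidate functions C.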
Now, we introduce the definition of C-convex function:
Definition 3
([12]). The function f is said to be C-convex at u ∈ X if, for all x ∈ X,
f_i(x) − f_i(u) ≥ C_{x,u}[∇_x f_i(u)], i = 1, 2, …, k.
If the above inequality is reversed (≤), then f is called C-concave at u ∈ X.
Definition 4.
The function f is said to be G_f-convex at u ∈ X if there exists a differentiable function G_f = (G_{f_1}, G_{f_2}, …, G_{f_k}) : R → R^k, with every component G_{f_i} : I_{f_i}(X) → R strictly increasing on the range I_{f_i}, such that for all x ∈ X,
G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) ≥ (x − u)^T G′_{f_i}(f_i(u)) ∇_x f_i(u), i = 1, 2, …, k.
If the above inequality is reversed (≤), then f is called G_f-concave at u ∈ X.
Definition 5.
A functional F : X × X × R^n → R is said to be sublinear with respect to its third argument if, for all (x, u) ∈ X × X,
(i) F_{x,u}(a_1 + a_2) ≤ F_{x,u}(a_1) + F_{x,u}(a_2), for all a_1, a_2 ∈ R^n,
(ii) F_{x,u}(α a) = α F_{x,u}(a), for all α ∈ R_+ and a ∈ R^n.
Now, we introduce the definition of a differentiable vector valued ( F , G f ) -invex function.
Definition 6.
The function f is said to be (F, G_f)-invex at u ∈ X if there exist a sublinear functional F and a differentiable function G_f = (G_{f_1}, G_{f_2}, …, G_{f_k}) : R → R^k, with every component G_{f_i} : I_{f_i}(X) → R strictly increasing on the range I_{f_i}, such that for all x ∈ X,
G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) ≥ F_{x,u}[G′_{f_i}(f_i(u)) ∇_x f_i(u)], i = 1, 2, …, k.
If the above inequality is reversed (≤), then f is called (F, G_f)-incave at u ∈ X.
Next, we introduce the definition of ( C , G f ) -invex function:
Definition 7.
The function f is said to be (C, G_f)-invex at u ∈ X if there exist a convex function C and a differentiable function G_f = (G_{f_1}, G_{f_2}, …, G_{f_k}) : R → R^k, with every component G_{f_i} : I_{f_i}(X) → R strictly increasing on the range I_{f_i}, such that for all x ∈ X,
G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) ≥ C_{x,u}[G′_{f_i}(f_i(u)) ∇_x f_i(u)], i = 1, 2, …, k.
Definition 8.
Let f : X → R^k be a vector-valued differentiable function. If there exist a sublinear functional F, a differentiable function G_f = (G_{f_1}, G_{f_2}, …, G_{f_k}) : R → R^k with every component G_{f_i} : I_{f_i}(X) → R strictly increasing on the range I_{f_i}, and a vector-valued function η : X × X → R^n such that for all x ∈ X,
F_{x,u}[G′_{f_i}(f_i(u)) ∇_x f_i(u)] ≥ 0 ⟹ G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) ≥ 0, for all i = 1, 2, …, k,
then f is called (F, G_f)-pseudoinvex at u ∈ X with respect to η.
If the above inequalities are reversed, then f is called (F, G_f)-incave/(F, G_f)-pseudoincave at u ∈ X.
Definition 9.
Let f : X → R^k be a vector-valued differentiable function. If there exist a convex function C, a differentiable function G_f = (G_{f_1}, G_{f_2}, …, G_{f_k}) : R → R^k with every component G_{f_i} : I_{f_i}(X) → R strictly increasing on the range I_{f_i}, and a vector-valued function η : X × X → R^n such that for all x ∈ X,
C_{x,u}[G′_{f_i}(f_i(u)) ∇_x f_i(u)] ≥ 0 ⟹ G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) ≥ 0, for all i = 1, 2, …, k,
then f is called (C, G_f)-pseudoinvex at u ∈ X.
If the above inequalities are reversed, then f is called (C, G_f)-incave/(C, G_f)-pseudoincave at u ∈ X.
Now, we give a nontrivial example of a function f that is (C, G_f)-invex but, on the other hand, is neither (F, G_f)-invex, F-convex, nor C-convex.
Example 1.
Let f : [−1, 1] → R^2 be defined as
f(x) = (f_1(x), f_2(x)),
where f_1(x) = x^4, f_2(x) = arctan x, and let G_f = (G_{f_1}, G_{f_2}) : R → R^2 be defined by
G_{f_1}(t) = t^9 + t^7 + t^3 + 1 and G_{f_2}(t) = tan t.
Let C : X × X × R → R be given by
C_{x,u}(a) = a^2 (x − u).
Now, we show that f is (C, G_f)-invex at u = 0. For this, we have to verify that
τ_i = G_{f_i}(f_i(x)) − G_{f_i}(f_i(u)) − C_{x,u}[G′_{f_i}(f_i(u)) ∇_x f_i(u)] ≥ 0, for i = 1, 2.
Substituting the values of f_1, f_2, G_{f_1} and G_{f_2} into these expressions, we obtain
τ_1 = x^36 + x^28 + x^12 + 1 − (u^36 + u^28 + u^12 + 1) − C_{x,u}[(9u^32 + 7u^24 + 3u^8) · 4u^3],
and
τ_2 = x − u − C_{x,u}[(1 + u^2) · 1/(1 + u^2)],
which at u = 0 yield
τ_1 = x^36 + x^28 + x^12 and τ_2 = 0.
Obviously, τ_1 ≥ 0 and τ_2 ≥ 0 for all x ∈ [−1, 1].
Hence, f is (C, G_f)-invex at u = 0 ∈ [−1, 1].
Now, suppose
δ = f_2(x) − f_2(u) − C_{x,u}[∇_x f_2(u)],
that is,
δ = arctan x − arctan u − C_{x,u}[1/(1 + u^2)],
which at u = 0 yields
δ = arctan x − x.
This expression is not non-negative for all x ∈ [−1, 1]. For instance, at x = 1 ∈ [−1, 1],
δ = π/4 − 1 < 0.
Therefore, f_2 is not C-convex at u = 0, and hence f = (f_1, f_2) is not C-convex at u = 0 ∈ [−1, 1].
Finally, C_{x,u} is not sublinear in its third argument; hence f is neither F-convex nor (F, G_f)-invex.
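The computations of Example 1 can be checked numerically. The sketch below (added for illustration, not part of the paper) verifies τ_1 ≥ 0 and τ_2 = 0 on [−1, 1], the failure of C-convexity of f_2, and the failure of sublinearity of C, all at u = 0:

```python
import numpy as np

# Data of Example 1, with derivatives written out by hand.
f1,  df1  = lambda x: x**4,                    lambda x: 4.0 * x**3
f2,  df2  = np.arctan,                         lambda x: 1.0 / (1.0 + x**2)
Gf1, dGf1 = lambda t: t**9 + t**7 + t**3 + 1,  lambda t: 9*t**8 + 7*t**6 + 3*t**2
Gf2, dGf2 = np.tan,                            lambda t: 1.0 / np.cos(t)**2
C = lambda x, u, a: a**2 * (x - u)             # C_{x,u}(a) = a^2 (x - u)

u  = 0.0
xs = np.linspace(-1.0, 1.0, 401)

# (C, G_f)-invexity at u = 0: tau_i >= 0 on [-1, 1]
tau1 = Gf1(f1(xs)) - Gf1(f1(u)) - C(xs, u, dGf1(f1(u)) * df1(u))
tau2 = Gf2(f2(xs)) - Gf2(f2(u)) - C(xs, u, dGf2(f2(u)) * df2(u))
assert np.all(tau1 >= 0) and np.allclose(tau2, 0.0)

# f2 is not C-convex at u = 0: delta = arctan(x) - x < 0 for x > 0
delta = f2(xs) - f2(u) - C(xs, u, df2(u))
assert delta.min() < 0                         # e.g. pi/4 - 1 at x = 1

# C is not sublinear in its third argument: C(2a) != 2 C(a) in general
assert not np.isclose(C(1.0, 0.0, 2.0), 2.0 * C(1.0, 0.0, 1.0))
```

The grid check covers only finitely many points, but the closed-form expressions for τ_1, τ_2 and δ above show the same conclusions analytically.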

3. G-Mond-Weir Type Primal-Dual Model

In this section, we consider the following pair of multiobjective fractional symmetric primal-dual programs:
(MFP)  Minimize L(x, y) = ( G_{f_1}(f_1(x, y)) / G_{g_1}(g_1(x, y)), G_{f_2}(f_2(x, y)) / G_{g_2}(g_2(x, y)), …, G_{f_k}(f_k(x, y)) / G_{g_k}(g_k(x, y)) )
subject to
Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(x, y)) ∇_y f_i(x, y) − (G_{f_i}(f_i(x, y)) / G_{g_i}(g_i(x, y))) G′_{g_i}(g_i(x, y)) ∇_y g_i(x, y) ] ≤ 0,
y^T Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(x, y)) ∇_y f_i(x, y) − (G_{f_i}(f_i(x, y)) / G_{g_i}(g_i(x, y))) G′_{g_i}(g_i(x, y)) ∇_y g_i(x, y) ] ≥ 0,
λ > 0, λ^T e = 1.

(MFD)  Maximize M(u, v) = ( G_{f_1}(f_1(u, v)) / G_{g_1}(g_1(u, v)), G_{f_2}(f_2(u, v)) / G_{g_2}(g_2(u, v)), …, G_{f_k}(f_k(u, v)) / G_{g_k}(g_k(u, v)) )
subject to
Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − (G_{f_i}(f_i(u, v)) / G_{g_i}(g_i(u, v))) G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ≥ 0,
u^T Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − (G_{f_i}(f_i(u, v)) / G_{g_i}(g_i(u, v))) G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ≤ 0,
λ > 0, λ^T e = 1.

Here G_{f_i} : I_{f_i} → R and G_{g_i} : I_{g_i} → R are differentiable strictly increasing functions on their domains. It is assumed that, on the feasible regions, the numerators are nonnegative and the denominators are positive.
Now, let U = (U_1, U_2, …, U_k) and V = (V_1, V_2, …, V_k). Then we can express the programs (MFP) and (MFD) equivalently as:

(MFP)_U  Minimize U
subject to
G_{f_i}(f_i(x, y)) − U_i G_{g_i}(g_i(x, y)) = 0, i = 1, 2, …, k,   (1)
Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(x, y)) ∇_y f_i(x, y) − U_i G′_{g_i}(g_i(x, y)) ∇_y g_i(x, y) ] ≤ 0,   (2)
y^T Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(x, y)) ∇_y f_i(x, y) − U_i G′_{g_i}(g_i(x, y)) ∇_y g_i(x, y) ] ≥ 0,   (3)
λ > 0, λ^T e = 1.   (4)

(MFD)_V  Maximize V
subject to
G_{f_i}(f_i(u, v)) − V_i G_{g_i}(g_i(u, v)) = 0, i = 1, 2, …, k,   (5)
Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − V_i G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ≥ 0,   (6)
u^T Σ_{i=1}^k λ_i [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − V_i G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ≤ 0,   (7)
λ > 0, λ^T e = 1.   (8)
Next, we prove duality theorems for (MFP)_U and (MFD)_V, which apply equally to (MFP) and (MFD), respectively.
Theorem 1.
(Weak duality). Let (x, y, U, λ) and (u, v, V, λ) be feasible for (MFP)_U and (MFD)_V, respectively. Let
(i) f(·, v) be (C, G_f)-invex at u for fixed v,
(ii) g(·, v) be (C, G_g)-incave at u for fixed v,
(iii) f(x, ·) be (C̄, G_f)-incave at y for fixed x,
(iv) g(x, ·) be (C̄, G_g)-invex at y for fixed x,
(v) Σ_{i=1}^k λ_i (1 + U_i) > 0 and Σ_{i=1}^k λ_i (1 + V_i) > 0,
(vi) G_{g_i}(g_i(x, v)) > 0, i = 1, 2, …, k,
(vii) C_{x,u}(a) + a^T u ≥ 0 for all a ≥ 0, and C̄_{v,y}(b) + b^T y ≥ 0 for all b ≥ 0,
where C : R^n × R^n × R^n → R and C̄ : R^m × R^m × R^m → R.
Then U ≮ V.
Proof. 
By hypotheses (i) and (ii), we have
G_{f_i}(f_i(x, v)) − G_{f_i}(f_i(u, v)) ≥ C_{x,u}[ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) ]   (9)
and
−G_{g_i}(g_i(x, v)) + G_{g_i}(g_i(u, v)) ≥ C_{x,u}[ −G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ].   (10)
Multiplying (9) by λ_i/τ and (10) by λ_i V_i/τ, where τ = Σ_{i=1}^k λ_i (1 + V_i) > 0 by (v) and λ > 0, we obtain
(λ_i/τ) [ G_{f_i}(f_i(x, v)) − G_{f_i}(f_i(u, v)) ] ≥ (λ_i/τ) C_{x,u}[ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) ]
and
(λ_i V_i/τ) [ −G_{g_i}(g_i(x, v)) + G_{g_i}(g_i(u, v)) ] ≥ (λ_i V_i/τ) C_{x,u}[ −G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ].
Summing over i, adding the two inequalities, and using the convexity of C_{x,u} (the weights λ_i/τ and λ_i V_i/τ are nonnegative and sum to one), we have
Σ_{i=1}^k (λ_i/τ) [ G_{f_i}(f_i(x, v)) − G_{f_i}(f_i(u, v)) ] + Σ_{i=1}^k (λ_i V_i/τ) [ −G_{g_i}(g_i(x, v)) + G_{g_i}(g_i(u, v)) ]
≥ C_{x,u}( Σ_{i=1}^k (λ_i/τ) [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − V_i G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ).   (11)
Now, from (6), we have
a = Σ_{i=1}^k (λ_i/τ) [ G′_{f_i}(f_i(u, v)) ∇_x f_i(u, v) − V_i G′_{g_i}(g_i(u, v)) ∇_x g_i(u, v) ] ≥ 0.
Hence, for this a, C_{x,u}(a) ≥ −u^T a ≥ 0 (from (vii) and (7)). Using this in (11), we obtain
Σ_{i=1}^k λ_i [ G_{f_i}(f_i(x, v)) − G_{f_i}(f_i(u, v)) ] + Σ_{i=1}^k λ_i V_i [ −G_{g_i}(g_i(x, v)) + G_{g_i}(g_i(u, v)) ] ≥ 0.
Using (5) in the above inequality, we get
Σ_{i=1}^k λ_i [ G_{f_i}(f_i(x, v)) − V_i G_{g_i}(g_i(x, v)) ] ≥ 0.   (12)
Similarly, from hypotheses (iii)–(v) and condition (vii), applied with
b = −Σ_{i=1}^k (λ_i/τ) [ G′_{f_i}(f_i(x, y)) ∇_y f_i(x, y) − U_i G′_{g_i}(g_i(x, y)) ∇_y g_i(x, y) ] ≥ 0,
we get
Σ_{i=1}^k λ_i [ −G_{f_i}(f_i(x, v)) + U_i G_{g_i}(g_i(x, v)) ] ≥ 0.   (13)
Adding inequalities (12) and (13), we get
Σ_{i=1}^k λ_i (U_i − V_i) G_{g_i}(g_i(x, v)) ≥ 0.   (14)
Since λ > 0, using (vi) it follows that U ≮ V. This completes the proof. □
Theorem 2.
(Weak duality). Let (x, y, U, λ) and (u, v, V, λ) be feasible for (MFP)_U and (MFD)_V, respectively. Let
(i) f(·, v) be (C, G_f)-pseudoinvex at u for fixed v,
(ii) g(·, v) be (C, G_g)-pseudoincave at u for fixed v,
(iii) f(x, ·) be (C̄, G_f)-pseudoincave at y for fixed x,
(iv) g(x, ·) be (C̄, G_g)-pseudoinvex at y for fixed x,
(v) Σ_{i=1}^k λ_i (1 + U_i) > 0 and Σ_{i=1}^k λ_i (1 + V_i) > 0,
(vi) G_{g_i}(g_i(x, v)) > 0, i = 1, 2, …, k,
(vii) C_{x,u}(a) + a^T u ≥ 0 for all a ≥ 0, and C̄_{v,y}(b) + b^T y ≥ 0 for all b ≥ 0,
where C : R^n × R^n × R^n → R and C̄ : R^m × R^m × R^m → R.
Then U ≮ V.
Proof. 
The proof follows along the lines of Theorem 1. □
Theorem 3.
(Strong duality). Let (x̄, ȳ, Ū, λ̄) be an efficient solution of (MFP)_U, and fix λ = λ̄ in (MFD)_V. If the following conditions hold:
(i) the matrix
Σ_{i=1}^k λ̄_i [ G″_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) (∇_y f_i(x̄, ȳ))^T + G′_{f_i}(f_i(x̄, ȳ)) ∇_yy f_i(x̄, ȳ)
− Ū_i ( G″_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) (∇_y g_i(x̄, ȳ))^T + G′_{g_i}(g_i(x̄, ȳ)) ∇_yy g_i(x̄, ȳ) ) ]
is positive definite or negative definite,
(ii) the vectors
{ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) }_{i=1}^k
are linearly independent,
(iii) Ū_i > 0, i = 1, 2, …, k,
then (x̄, ȳ, Ū, λ̄) is a feasible solution of (MFD)_V. Furthermore, if the hypotheses of Theorems 1 and 2 hold, then (x̄, ȳ, Ū, λ̄) is an efficient solution of (MFD)_V, and the two objective functions have the same values.
Proof. 
Since (x̄, ȳ, Ū, λ̄) is an efficient solution of (MFP)_U, by the Fritz John necessary optimality conditions [13] there exist α ∈ R^k, β ∈ R^k, γ ∈ R^m, δ ∈ R and ξ ∈ R^k such that
Σ_{i=1}^k β_i [ G′_{f_i}(f_i(x̄, ȳ)) ∇_x f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_x g_i(x̄, ȳ) ]
+ (γ − δȳ)^T Σ_{i=1}^k λ̄_i [ G″_{f_i}(f_i(x̄, ȳ)) ∇_x f_i(x̄, ȳ) (∇_y f_i(x̄, ȳ))^T + G′_{f_i}(f_i(x̄, ȳ)) ∇_xy f_i(x̄, ȳ)
− Ū_i ( G″_{g_i}(g_i(x̄, ȳ)) ∇_x g_i(x̄, ȳ) (∇_y g_i(x̄, ȳ))^T + G′_{g_i}(g_i(x̄, ȳ)) ∇_xy g_i(x̄, ȳ) ) ] = 0,   (15)
Σ_{i=1}^k (β_i − δλ̄_i) [ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) ]
+ (γ − δȳ)^T Σ_{i=1}^k λ̄_i [ G″_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) (∇_y f_i(x̄, ȳ))^T + G′_{f_i}(f_i(x̄, ȳ)) ∇_yy f_i(x̄, ȳ)
− Ū_i ( G″_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) (∇_y g_i(x̄, ȳ))^T + G′_{g_i}(g_i(x̄, ȳ)) ∇_yy g_i(x̄, ȳ) ) ] = 0,   (16)
α_i − β_i G_{g_i}(g_i(x̄, ȳ)) − (γ − δȳ)^T λ̄_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) = 0, i = 1, 2, …, k,   (17)
(γ − δȳ)^T [ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) ] − ξ_i = 0, i = 1, 2, …, k,   (18)
γ^T Σ_{i=1}^k λ̄_i [ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) ] = 0,   (19)
δ ȳ^T Σ_{i=1}^k λ̄_i [ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) ] = 0,   (20)
λ̄^T ξ = 0,   (21)
(α, δ, ξ) ≥ 0, (α, β, γ, δ, ξ) ≠ 0.   (22)
Since λ̄ > 0 and ξ ≥ 0, (21) implies that ξ = 0.
Post-multiplying (16) by (γ − δȳ) and using (18) together with ξ = 0, we get
(γ − δȳ)^T Σ_{i=1}^k λ̄_i [ G″_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) (∇_y f_i(x̄, ȳ))^T + G′_{f_i}(f_i(x̄, ȳ)) ∇_yy f_i(x̄, ȳ)
− Ū_i ( G″_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) (∇_y g_i(x̄, ȳ))^T + G′_{g_i}(g_i(x̄, ȳ)) ∇_yy g_i(x̄, ȳ) ) ] (γ − δȳ) = 0,   (23)
which by hypothesis (i) yields
γ = δȳ.   (24)
Using (24) in (16), we have
Σ_{i=1}^k (β_i − δλ̄_i) [ G′_{f_i}(f_i(x̄, ȳ)) ∇_y f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_y g_i(x̄, ȳ) ] = 0.
It follows from hypothesis (ii) that
β_i = δλ̄_i, i = 1, 2, …, k.   (25)
Now, we claim that β_i ≠ 0 for all i. Otherwise, if β_{t_0} = 0 for some t_0, then from (25), since λ̄ > 0, we have δ = 0; hence, again from (25), β_i = 0 for all i. Then from (17) we get α_i = 0 for all i, and from (24), γ = 0. This contradicts (22). Hence, β_i ≠ 0 for all i. Further, if β_i < 0 for any i, then from (25), δ < 0, which again contradicts (22). Hence, β_i > 0 for all i.
Further, using (24) and (25) in (15), together with δ > 0, we get
Σ_{i=1}^k λ̄_i [ G′_{f_i}(f_i(x̄, ȳ)) ∇_x f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_x g_i(x̄, ȳ) ] = 0,   (26)
and hence
x̄^T Σ_{i=1}^k λ̄_i [ G′_{f_i}(f_i(x̄, ȳ)) ∇_x f_i(x̄, ȳ) − Ū_i G′_{g_i}(g_i(x̄, ȳ)) ∇_x g_i(x̄, ȳ) ] = 0.   (27)
Next, from the feasibility of (x̄, ȳ, Ū, λ̄) for (MFP)_U, it follows that
G_{f_i}(f_i(x̄, ȳ)) − Ū_i G_{g_i}(g_i(x̄, ȳ)) = 0, i = 1, 2, …, k.   (28)
Together, (26), (27) and (28) show that (x̄, ȳ, Ū, λ̄) is a feasible solution of (MFD)_V. Now, suppose (x̄, ȳ, Ū, λ̄) is not an efficient solution of (MFD)_V. Then there exists another feasible solution (u, v, V, λ) of (MFD)_V such that Ū_i ≤ V_i for all i ∈ K and Ū_j < V_j for some j ∈ K. This contradicts the weak duality Theorems 1 and 2, which completes the proof. □
Theorem 4.
(Converse duality). Let (ū, v̄, V̄, λ̄) be an efficient solution of (MFD)_V, and fix λ = λ̄ in (MFP)_U. If the following conditions hold:
(i) the matrix
Σ_{i=1}^k λ̄_i [ G″_{f_i}(f_i(ū, v̄)) ∇_x f_i(ū, v̄) (∇_x f_i(ū, v̄))^T + G′_{f_i}(f_i(ū, v̄)) ∇_xx f_i(ū, v̄)
− V̄_i ( G″_{g_i}(g_i(ū, v̄)) ∇_x g_i(ū, v̄) (∇_x g_i(ū, v̄))^T + G′_{g_i}(g_i(ū, v̄)) ∇_xx g_i(ū, v̄) ) ]
is positive definite or negative definite,
(ii) the vectors
{ G′_{f_i}(f_i(ū, v̄)) ∇_x f_i(ū, v̄) − V̄_i G′_{g_i}(g_i(ū, v̄)) ∇_x g_i(ū, v̄) }_{i=1}^k
are linearly independent,
(iii) V̄_i > 0, i = 1, 2, …, k,
then (ū, v̄, V̄, λ̄) is a feasible solution of (MFP)_U. Furthermore, if the assumptions of Theorems 1 and 2 hold, then (ū, v̄, V̄, λ̄) is an efficient solution of (MFP)_U, and the two objective functions have equal values.
Proof. 
The result can be obtained along the lines of Theorem 3. □

4. Conclusions

In this paper, we have considered a new type of multiobjective fractional symmetric programming problem and derived duality theorems under generalized (C, G_f)-invexity assumptions. The present work can be extended to nondifferentiable second-order fractional symmetric programming problems over arbitrary cones; this is a direction for future work by the authors.

Author Contributions

These authors contributed equally to this work.

Acknowledgments

The authors are grateful to the anonymous referees for their useful comments and suggestions which have improved the presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hanson, M.A. On sufficiency of the Kuhn–Tucker conditions. J. Math. Anal. Appl. 1981, 80, 545–550.
2. Antczak, T. New optimality conditions and duality results of G-type in differentiable mathematical programming. Nonlinear Anal. 2007, 66, 1617–1632.
3. Antczak, T. On G-invex multiobjective programming. Part I. Optimality. J. Glob. Optim. 2009, 43, 97–109.
4. Kang, Y.M.; Kim, D.S.; Kim, M.H. Optimality conditions of G-type in locally Lipschitz multiobjective programming. Vietnam J. Math. 2012, 40, 275–285.
5. Vandana; Dubey, R.; Deepmala; Mishra, L.N.; Mishra, V.N. Duality relations for a class of a multiobjective fractional programming problem involving support functions. Am. J. Oper. Res. 2018, 8, 294–311.
6. Dubey, R.; Mishra, V.N.; Tomar, P. Duality relations for second-order programming problem under (G,αf)-bonvexity assumptions. Asian Eur. J. Math. 2018, 13, 1–17.
7. Dubey, R.; Mishra, V.N. Symmetric duality results for second-order nondifferentiable multiobjective programming problem. RAIRO Oper. Res. 2019, 53, 539–558.
8. Ferrara, M.; Stefanescu, M.V. Optimality conditions and duality in multiobjective programming with (ϕ,ρ)-invexity. Yugosl. J. Oper. Res. 2008, 18, 153–165.
9. Stefanescu, M.V.; Ferrara, M. Multiobjective programming with new invexities. Optim. Lett. 2013, 7, 855–870.
10. Egudo, R.R. Multiobjective fractional duality. Bull. Aust. Math. Soc. 1988, 37, 367–378.
11. Long, X.J. Optimality conditions and duality for nondifferentiable multiobjective fractional programming problems with (C,α,ρ,d)-convexity. J. Optim. Theory Appl. 2011, 148, 197–208.
12. Ben-Israel, A.; Mond, B. What is invexity? J. Aust. Math. Soc. Ser. B 1986, 28, 1–9.
13. Brumelle, S. Duality for multiple objective convex programs. Math. Oper. Res. 1981, 6, 159–172.

