Robust Optimality Conditions for a Class of Fractional Optimization Problems

Financial Mathematics and Actuarial Science (FMAS)—Research Group, Department of Mathematics, Faculty of Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Axioms 2023, 12(7), 673; https://doi.org/10.3390/axioms12070673
Submission received: 19 June 2023 / Revised: 4 July 2023 / Accepted: 6 July 2023 / Published: 7 July 2023

Abstract: In this paper, using the parametric technique, we study a class of fractional optimization problems involving data uncertainty in the objective functional. We formulate and prove the robust Karush-Kuhn-Tucker necessary optimality conditions and establish their sufficiency under convexity and/or concavity assumptions on the involved functionals. An illustrative example completes the study.

1. Introduction

Optimization theory is widely used in concrete problems coming from decision theory, game theory, economics, data classification, production inventory, and portfolio selection. Since data in real-life problems are most often obtained by measurement or estimation, errors are bound to occur, and the accumulation of errors may cause the computed results to contradict the original problem. To overcome this shortcoming, the use of interval analysis, fuzzy numbers, and robust techniques to represent data has become a popular research direction in recent years.
A fractional optimization problem asks us to optimize the ratio of two objective (cost) functions (functionals). Dinkelbach [1] and Jagannathan [2] formulated a parametric technique for studying a fractional optimization problem by converting it into an equivalent non-fractional optimization problem. Over time, many researchers have used this technique to investigate various classes of fractional optimization problems. In this direction, we mention the works of Antczak and Pitea [3], Mititelu [4], Mititelu and Treanţă [5], Treanţă and Mititelu [6], and Antczak [7]. Guo et al. [8,9] proposed the symmetric gH-derivative and its applications to dual interval-valued optimization problems. For other connected ideas on this topic, interested readers are directed to Nahak [10], Patel [11], Kim and Kim [12,13,14], Manesh et al. [15], and the references therein.
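The parametric technique of Dinkelbach can be sketched in a few lines: the ratio N(x)/D(x), with D > 0, is minimized through a sequence of non-fractional subproblems min N(x) - R_k D(x), with R_k updated to the current ratio. The functions and the grid below are hypothetical illustrations; the paper's infinite-dimensional variational setting is not reproduced here.

```python
# Dinkelbach's parametric technique (hypothetical finite-dimensional sketch):
# minimize N(x)/D(x), with D > 0, by solving a sequence of non-fractional
# subproblems min_x N(x) - R_k * D(x), where R_k is the current ratio.

def dinkelbach(N, D, xs, tol=1e-9, max_iter=100):
    """Minimize N(x)/D(x) over the finite candidate set xs."""
    x = xs[0]
    for _ in range(max_iter):
        R = N(x) / D(x)                      # current ratio parameter
        # parametric (non-fractional) subproblem
        x = min(xs, key=lambda z: N(z) - R * D(z))
        if abs(N(x) - R * D(x)) < tol:       # optimal value 0 <=> R is optimal
            return x, R
    return x, N(x) / D(x)

# Hypothetical example: minimize (x^2 + 1) / (x + 2) on a grid over [0, 4];
# the true minimizer is x = sqrt(5) - 2, with optimal ratio 2*sqrt(5) - 4.
xs = [i / 1000 for i in range(4001)]
x_star, R_star = dinkelbach(lambda x: x * x + 1, lambda x: x + 2, xs)
```

The stopping test uses the key property of the method: the parameter R equals the optimal ratio exactly when the subproblem's optimal value is zero.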
Uncertain optimization problems arise when we have a large volume of data, old sources, sample disparity, inadequate information, or other factors that lead to data uncertainty. The robust technique therefore plays an important role in studying optimization problems governed by data uncertainty. Many researchers have studied different optimization problems involving data uncertainty and formulated new and efficient results (see, for instance, Beck and Tal [16], Jeyakumar et al. [17], Treanţă [18,19,20], Baranwal et al. [21], Jayswal et al. [22], Preeti et al. [23], and references therein).
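The robust (worst-case) treatment of uncertain data mentioned above can be sketched as a min-max over a scenario set: instead of optimizing for a nominal parameter value, one optimizes the worst value over the whole uncertainty set. The objective and the sets below are hypothetical illustrations.

```python
# Robust (worst-case) handling of data uncertainty, sketched over a finite
# scenario set U: minimize max_{u in U} objective(x, u) instead of relying
# on a single nominal value of u.

def robust_min(objective, xs, U):
    """Return the minimizer of max_{u in U} objective(x, u) over xs."""
    worst = lambda x: max(objective(x, u) for u in U)
    x_star = min(xs, key=worst)
    return x_star, worst(x_star)

# Hypothetical example: f(x, u) = (x - u)^2 with u uncertain in [1, 2].
U = [1 + i / 100 for i in range(101)]        # discretized uncertainty set
xs = [i / 100 for i in range(301)]           # candidate grid over [0, 3]
x_star, val = robust_min(lambda x, u: (x - u) ** 2, xs, U)
# The robust choice hedges between the extreme scenarios u = 1 and u = 2,
# giving x_star = 1.5 with worst-case value 0.25.
```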
In this paper, we state a constrained fractional optimization problem involving data uncertainty in the objective functional, which is generated by a curvilinear-type integral. Namely, using the parametric technique, we present the robust Karush-Kuhn-Tucker necessary optimality conditions and establish their sufficiency under convexity and concavity assumptions on the involved functionals. The present paper has several principal contributions; we mention the most important: (i) defining, via the parametric approach, the notion of a robust optimal solution for the case of curvilinear integral-type functionals; (ii) formulating novel proofs for the main results; and (iii) providing a framework determined by infinite-dimensional normed spaces of functions and curvilinear integral-type functionals. These elements are original in the field of robust fractional optimization problems.
The paper continues as follows. In Section 2, we state the basic notations and concepts needed to formulate and prove the main results. We introduce the fractional optimization problem involving data uncertainty in the objective functional, the associated non-fractional optimization problem, and the corresponding robust counterparts. In Section 3, under suitable convexity and concavity assumptions, we establish the robust (necessary and sufficient) optimality conditions for the considered problem. Finally, Section 4 provides the conclusions and future research directions associated with this paper.

2. Notations, Assumptions and Problem Formulation

Next, we introduce the basic notations and assumptions needed to formulate and prove the main theorems of the present study. We start with the standard finite-dimensional Euclidean spaces $\mathbb{R}^m$, $\mathbb{R}^n$, and $\mathbb{R}^l$, with $t = (t^{\gamma})$, $\gamma = \overline{1,m}$, $p = (p^{\iota})$, $\iota = \overline{1,n}$, and $y = (y^{j})$, $j = \overline{1,l}$, as arbitrary points of $\mathbb{R}^m$, $\mathbb{R}^n$, and $\mathbb{R}^l$, respectively. Let $K = K_{t_0,t_1} \subset \mathbb{R}^m$ be a hyper-parallelepiped having the diagonally opposite corners $t_0 = (t_0^{\gamma})$ and $t_1 = (t_1^{\gamma})$, $\gamma = \overline{1,m}$, and let $C \subset K$ be a piecewise differentiable curve joining the points $t_0$ and $t_1$ in $\mathbb{R}^m$. Define
$$P = \{\, p : K \to \mathbb{R}^n \mid p \ \text{piecewise smooth} \,\},$$
$$Y = \{\, y : K \to \mathbb{R}^l \mid y \ \text{piecewise continuous} \,\}$$
as the space of piecewise smooth functions (state variables) and the space of piecewise continuous functions (control variables), respectively, and assume that the product space $P \times Y$ is endowed with the norm induced by the inner product
$$\langle (p,y), (b,z) \rangle = \int_C \big[ p(t) \cdot b(t) + y(t) \cdot z(t) \big]\, dt^{\pi} = \int_C \Big[ \sum_{\iota=1}^{n} p^{\iota}(t)\, b^{\iota}(t) + \sum_{j=1}^{l} y^{j}(t)\, z^{j}(t) \Big]\, dt^{\pi}, \quad \forall\, (p,y), (b,z) \in P \times Y.$$
Using the above notations and denoting $p_{\gamma}(t) = \dfrac{\partial p}{\partial t^{\gamma}}(t)$, we formulate the following first-order PDE&PDI-constrained fractional optimization problem, with data uncertainty in the objective functional:
$$(\mathrm{P}) \quad \min_{(p(\cdot),\, y(\cdot))} \ \frac{\displaystyle\int_C \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi}}{\displaystyle\int_C \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi}}$$
subject to
$$A_{\beta}(t, p(t), p_{\gamma}(t), y(t)) \leq 0, \quad \beta = \overline{1,q}, \ t \in K,$$
$$B_{\gamma}^{\iota}(t, p(t), p_{\gamma}(t), y(t)) := p_{t^{\gamma}}^{\iota}(t) - Q_{\gamma}^{\iota}(t, p(t), y(t)) = 0, \quad \iota = \overline{1,n}, \ \gamma = \overline{1,m}, \ t \in K,$$
$$p(t_0) = p_0, \quad p(t_1) = p_1 \quad \text{(given)},$$
where $f$ and $g$ are uncertainty parameters in the convex compact sets $F \subset \mathbb{R}$ and $G \subset \mathbb{R}$, respectively, and $\Delta = (\Delta_{\pi}) : K \times P^2 \times Y \times F \to \mathbb{R}^m$, $\Theta = (\Theta_{\pi}) : K \times P^2 \times Y \times G \to \mathbb{R}^m \setminus \{0\}$, $A_{\beta} : K \times P^2 \times Y \to \mathbb{R}$, $\beta = \overline{1,q}$, and $B_{\gamma}^{\iota} : K \times P^2 \times Y \to \mathbb{R}$, $\iota = \overline{1,n}$, $\gamma = \overline{1,m}$, are assumed to be continuously differentiable functionals.
Definition 1.
The above functionals $\int_C \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi}$ and $\int_C \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi}$ are named path-independent if $D_{\gamma} \Delta_{\pi} = D_{\pi} \Delta_{\gamma}$ and $D_{\gamma} \Theta_{\pi} = D_{\pi} \Theta_{\gamma}$, for $\pi \neq \gamma$.
Assumption 1.
Considering that the above functionals $\int_C \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi}$ and $\int_C \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi}$ are path-independent, the following working hypothesis is assumed:
$$dL := D_{\gamma} \Big[ \frac{\partial h_{\pi}}{\partial p_{\gamma}}\, (p - p_0) \Big]\, dt^{\pi}$$
is a total exact differential, with $L(t_0) = L(t_1)$, for $h \in \{\Delta, \Theta\}$.
The robust counterpart for $(\mathrm{P})$, which reduces the possible uncertainties in $(\mathrm{P})$, is given as
$$(\mathrm{RP}) \quad \min_{(p(\cdot),\, y(\cdot))} \ \frac{\displaystyle\int_C \max_{f \in F} \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi}}{\displaystyle\int_C \min_{g \in G} \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi}}$$
subject to
$$A_{\beta}(t, p(t), p_{\gamma}(t), y(t)) \leq 0, \quad \beta = \overline{1,q}, \ t \in K,$$
$$B_{\gamma}^{\iota}(t, p(t), p_{\gamma}(t), y(t)) = 0, \quad \iota = \overline{1,n}, \ \gamma = \overline{1,m}, \ t \in K,$$
$$p(t_0) = p_0, \quad p(t_1) = p_1 \quad \text{(given)},$$
where $\Delta = (\Delta_{\pi})$, $\Theta = (\Theta_{\pi})$, $A = (A_{\beta})$, and $B = (B_{\gamma}^{\iota})$ are defined as in $(\mathrm{P})$.
The set of all feasible solutions to $(\mathrm{RP})$, which coincides with the set of all feasible solutions to $(\mathrm{P})$, is defined as
$$D = \big\{ (p,y) \in P \times Y \mid A_{\beta}(t, p(t), p_{\gamma}(t), y(t)) \leq 0, \ B_{\gamma}^{\iota}(t, p(t), p_{\gamma}(t), y(t)) = 0, \ p(t_0) = p_0, \ p(t_1) = p_1, \ t \in K \big\}.$$
For $(p,y) \in D$, we assume that $\Delta \geq 0$ and $\Theta > 0$. Further, by considering the positive real number
$$R^0_{f,g} = \min_{(p(\cdot),\, y(\cdot))} \frac{\displaystyle\int_C \max_{f \in F} \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi}}{\displaystyle\int_C \min_{g \in G} \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi}} = \frac{\displaystyle\int_C \max_{f \in F} \Delta_{\pi}(t, p^0(t), p^0_{\gamma}(t), y^0(t), f)\, dt^{\pi}}{\displaystyle\int_C \min_{g \in G} \Theta_{\pi}(t, p^0(t), p^0_{\gamma}(t), y^0(t), g)\, dt^{\pi}},$$
on the line of Jagannathan [2] and Dinkelbach [1], and following Mititelu and Treanţă [5], we build a non-fractional optimization problem associated with $(\mathrm{P})$, as
$$(\mathrm{NP}) \quad \min_{(p(\cdot),\, y(\cdot))} \left\{ \int_C \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi} - R^0_{f,g} \int_C \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi} \right\}$$
subject to
$$A_{\beta}(t, p(t), p_{\gamma}(t), y(t)) \leq 0, \quad \beta = \overline{1,q}, \ t \in K,$$
$$B_{\gamma}^{\iota}(t, p(t), p_{\gamma}(t), y(t)) = 0, \quad \iota = \overline{1,n}, \ \gamma = \overline{1,m}, \ t \in K,$$
$$p(t_0) = p_0, \quad p(t_1) = p_1 \quad \text{(given)}.$$
The robust counterpart for $(\mathrm{NP})$ is given by
$$(\mathrm{RNP}) \quad \min_{(p(\cdot),\, y(\cdot))} \left\{ \int_C \max_{f \in F} \Delta_{\pi}(t, p(t), p_{\gamma}(t), y(t), f)\, dt^{\pi} - R^0_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(t, p(t), p_{\gamma}(t), y(t), g)\, dt^{\pi} \right\}$$
subject to
$$A_{\beta}(t, p(t), p_{\gamma}(t), y(t)) \leq 0, \quad \beta = \overline{1,q}, \ t \in K,$$
$$B_{\gamma}^{\iota}(t, p(t), p_{\gamma}(t), y(t)) = 0, \quad \iota = \overline{1,n}, \ \gamma = \overline{1,m}, \ t \in K,$$
$$p(t_0) = p_0, \quad p(t_1) = p_1 \quad \text{(given)}.$$
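The construction above can be mirrored in finite dimensions: take the worst case over F in the numerator, the best case over G in the denominator (as in the passage from (P) to (RP)), and then linearize the robust ratio with the parameter R0 (as in the passage to (RNP)). All functions, sets, and grids below are hypothetical illustrations, not the paper's curvilinear-integral functionals.

```python
# Combining the robust and parametric steps, as in (RP) -> (RNP), for a
# finite-dimensional sketch: worst-case numerator (max over F), best-case
# denominator (min over G), then the parameter R0 linearizes the ratio.

F = [1 + i / 50 for i in range(51)]          # discretized set F = [1, 2]
G = [1 + i / 50 for i in range(51)]          # discretized set G = [1, 2]

def num(x, f): return x * x + f              # uncertain numerator
def den(x, g): return g * (x + 2)            # uncertain (positive) denominator

def robust_ratio(x):
    return max(num(x, f) for f in F) / min(den(x, g) for g in G)

xs = [i / 500 for i in range(2001)]          # candidate grid over [0, 4]
x0 = min(xs, key=robust_ratio)               # robust optimal point of (RP)
R0 = robust_ratio(x0)                        # the parameter R^0_{f,g}

# x0 also minimizes the robust non-fractional objective of (RNP),
# max_f num - R0 * min_g den, whose value at x0 is zero by construction.
nonfrac = lambda x: max(num(x, f) for f in F) - R0 * min(den(x, g) for g in G)
```

Here the worst case picks f = 2 and the best case g = 1, so the robust ratio reduces to (x^2 + 2)/(x + 2), minimized near x = sqrt(6) - 2.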
Next, for a simple presentation, we will use the following abbreviations throughout the paper: $p = p(t)$, $y = y(t)$, $\bar{p} = \bar{p}(t)$, $\bar{y} = \bar{y}(t)$, $\hat{p} = \hat{p}(t)$, $\hat{y} = \hat{y}(t)$, $\zeta = (t, p(t), p_{\gamma}(t), y(t))$, $\bar{\zeta} = (t, \bar{p}(t), \bar{p}_{\gamma}(t), \bar{y}(t))$, $\hat{\zeta} = (t, \hat{p}(t), \hat{p}_{\gamma}(t), \hat{y}(t))$.
Definition 2.
A point $(\bar{p}, \bar{y}) \in D$ is said to be a robust optimal solution to $(\mathrm{P})$ if
$$\frac{\displaystyle\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\displaystyle\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}} \leq \frac{\displaystyle\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi}}{\displaystyle\int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi}}$$
for all $(p, y) \in D$.
Definition 3.
A point $(\bar{p}, \bar{y}) \in D$ is said to be a robust optimal solution to $(\mathrm{NP})$ if
$$\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi} \leq \int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi}$$
for all $(p, y) \in D$.
Remark 1.
We can observe that D is the set of feasible solutions to ( NP ) (and, also, for ( RNP ) ).
Remark 2.
The robust optimal solutions to ( P ) (or ( NP ) ) are also robust optimal solutions to ( RP ) (or ( RNP ) ).
Next, in order to prove the principal results of this paper, we present the definition of convex and concave curvilinear integral functionals (see, for instance, Treanţă [24]).
Definition 4.
A curvilinear integral functional $\int_C \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi}$ is said to be convex at $(\bar{p}, \bar{y}) \in P \times Y$ if the inequality
$$\int_C \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi} - \int_C \Delta_{\pi}(\bar{\zeta}, \bar{f})\, dt^{\pi} \geq \int_C \Big[ (p - \bar{p})\, \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) + (y - \bar{y})\, \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) \Big]\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f})\, dt^{\pi}$$
holds for all $(p, y) \in P \times Y$.
Definition 5.
A curvilinear integral functional $\int_C \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi}$ is said to be concave at $(\bar{p}, \bar{y}) \in P \times Y$ if the inequality
$$\int_C \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi} - \int_C \Delta_{\pi}(\bar{\zeta}, \bar{f})\, dt^{\pi} \leq \int_C \Big[ (p - \bar{p})\, \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) + (y - \bar{y})\, \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) \Big]\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f})\, dt^{\pi}$$
holds for all $(p, y) \in P \times Y$.

3. Robust Optimality Conditions

In this part of the present study, under suitable hypotheses, we establish the robust necessary and sufficient optimality conditions associated with the fractional optimization problem ( P ) .
Now, we provide an auxiliary result that will be used to establish the robust sufficient optimality conditions for ( P ) . More precisely, we present the equivalence between the robust optimal solutions to ( P ) and ( NP ) .
Proposition 1.
If $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to $(\mathrm{P})$, then there exists a positive real number $R_{f,g}$ such that $(\bar{p}, \bar{y})$ is a robust optimal solution to $(\mathrm{NP})$. Moreover, if $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to $(\mathrm{NP})$ and $R_{f,g} = \dfrac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}}$, then $(\bar{p}, \bar{y})$ is a robust optimal solution to $(\mathrm{P})$.
Proof. 
By reductio ad absurdum, let us assume that $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to $(\mathrm{P})$ but not a robust optimal solution to $(\mathrm{NP})$. In consequence, there exists $(p, y) \in D$ such that
$$\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi} < \int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}.$$
Now, if we consider $R_{f,g} = \dfrac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}}$, the right-hand side vanishes, and we get
$$\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi} - \frac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}} \int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi} < 0,$$
which is equivalent to
$$\frac{\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi}} < \frac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}},$$
and this contradicts the fact that $(\bar{p}, \bar{y})$ is a robust optimal solution to $(\mathrm{P})$.
Conversely, let $(\bar{p}, \bar{y}) \in D$ be a robust optimal solution to $(\mathrm{NP})$, with
$$R_{f,g} = \frac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}},$$
and suppose that $(\bar{p}, \bar{y})$ is not a robust optimal solution to $(\mathrm{P})$. Thus, there exists $(p, y) \in D$ such that
$$\frac{\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi}} < \frac{\int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi}}{\int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi}},$$
and, by taking into account the definition of $R_{f,g}$, the above inequality becomes
$$\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi} < 0 = \int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi},$$
which contradicts the fact that $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to $(\mathrm{NP})$, and the proof is complete. □
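Proposition 1's equivalence can be illustrated numerically in a finite-dimensional sketch: a minimizer of the robust ratio also minimizes the non-fractional objective built with R equal to the optimal ratio, and the optimal value of the latter is zero. The functions num and den below are hypothetical stand-ins for the worst-case numerator and best-case denominator, not the paper's curvilinear integrals.

```python
# Numerical illustration of the equivalence in Proposition 1
# (finite-dimensional sketch with hypothetical data).

xs = [i / 1000 for i in range(3001)]         # candidate grid over [0, 3]

num = lambda x: x * x + 2                    # worst-case numerator
den = lambda x: x + 1                        # best-case denominator (> 0)
ratio = lambda x: num(x) / den(x)

x_bar = min(xs, key=ratio)                   # robust optimal solution of (P)
R = ratio(x_bar)                             # R_{f,g} as in the proposition

# x_bar must also minimize the associated non-fractional problem (NP),
# and with this R the objective of (NP) vanishes at x_bar.
x_np = min(xs, key=lambda x: num(x) - R * den(x))
```

On this grid both problems return (up to grid resolution) the same point, near the analytic minimizer x = sqrt(3) - 1.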
The next theorem formulates the robust necessary conditions of optimality for ( P ) .
Theorem 1.
Consider that $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to the robust fractional optimization problem $(\mathrm{P})$ and $\max_{f \in F} \Delta_{\pi}(\zeta, f) = \Delta_{\pi}(\zeta, \bar{f})$, $\min_{g \in G} \Theta_{\pi}(\zeta, g) = \Theta_{\pi}(\zeta, \bar{g})$. Then, there exist $\bar{\theta} \in \mathbb{R}$ and the piecewise differentiable functions $\bar{\mu} = (\bar{\mu}_{\beta}(t)) \in \mathbb{R}^q_{+}$ and $\bar{\lambda} = (\bar{\lambda}^{\iota}_{\gamma}(t)) \in \mathbb{R}^{nm}$, satisfying
$$\bar{\theta} \big[ \Delta_{\pi,p}(\bar{\zeta}, \bar{f}) - R_{f,g}\, \Theta_{\pi,p}(\bar{\zeta}, \bar{g}) \big] + \bar{\mu}^T A_p(\bar{\zeta}) + \bar{\lambda}^T B_p(\bar{\zeta}) - \frac{\partial}{\partial t^{\gamma}} \Big\{ \bar{\theta} \big[ \Delta_{\pi,p_{\gamma}}(\bar{\zeta}, \bar{f}) - R_{f,g}\, \Theta_{\pi,p_{\gamma}}(\bar{\zeta}, \bar{g}) \big] + \bar{\mu}^T A_{p_{\gamma}}(\bar{\zeta}) + \bar{\lambda}^T B_{p_{\gamma}}(\bar{\zeta}) \Big\} = 0, \tag{1}$$
$$\bar{\theta} \big[ \Delta_{\pi,y}(\bar{\zeta}, \bar{f}) - R_{f,g}\, \Theta_{\pi,y}(\bar{\zeta}, \bar{g}) \big] + \bar{\mu}^T A_y(\bar{\zeta}) + \bar{\lambda}^T B_y(\bar{\zeta}) = 0, \tag{2}$$
$$\bar{\mu}^T A(\bar{\zeta}) = 0, \quad \bar{\mu}_{\beta} \geq 0, \quad \beta = \overline{1,q}, \tag{3}$$
$$\bar{\theta} \geq 0, \tag{4}$$
for $t \in K$, $\pi = \overline{1,m}$, except at points of discontinuity.
Proof. 
Let us consider some variations of $\bar{p}(t)$ and $\bar{y}(t)$, respectively, of the form $\bar{p}(t) + \varepsilon_1 h(t)$ and $\bar{y}(t) + \varepsilon_2 m(t)$, where $\varepsilon_1, \varepsilon_2$ are the variational parameters. Therefore, the involved integral functionals become functions of $(\varepsilon_1, \varepsilon_2)$, defined as
$$F(\varepsilon_1, \varepsilon_2) = \int_C \big[ \Delta_{\pi}(t, \bar{p}(t) + \varepsilon_1 h(t), \bar{p}_{\gamma}(t) + \varepsilon_1 h_{\gamma}(t), \bar{y}(t) + \varepsilon_2 m(t), \bar{f}) - R_{f,g}\, \Theta_{\pi}(t, \bar{p}(t) + \varepsilon_1 h(t), \bar{p}_{\gamma}(t) + \varepsilon_1 h_{\gamma}(t), \bar{y}(t) + \varepsilon_2 m(t), \bar{g}) \big]\, dt^{\pi},$$
$$Z(\varepsilon_1, \varepsilon_2) = \int_C A(t, \bar{p}(t) + \varepsilon_1 h(t), \bar{p}_{\gamma}(t) + \varepsilon_1 h_{\gamma}(t), \bar{y}(t) + \varepsilon_2 m(t))\, dt^{\pi},$$
and
$$J(\varepsilon_1, \varepsilon_2) = \int_C B(t, \bar{p}(t) + \varepsilon_1 h(t), \bar{p}_{\gamma}(t) + \varepsilon_1 h_{\gamma}(t), \bar{y}(t) + \varepsilon_2 m(t))\, dt^{\pi}.$$
By hypothesis, the pair $(\bar{p}, \bar{y})$ is a robust optimal solution to $(\mathrm{P})$. Thus, the point $(0, 0)$ becomes an optimal solution to the following robust optimization problem:
$$\min_{\varepsilon_1, \varepsilon_2} F(\varepsilon_1, \varepsilon_2)$$
subject to
$$Z(\varepsilon_1, \varepsilon_2) \leq 0, \quad J(\varepsilon_1, \varepsilon_2) = 0,$$
$$h(t_0) = h(t_1) = m(t_0) = m(t_1) = 0.$$
In consequence, there exist Lagrange multipliers $\bar{\theta} \in \mathbb{R}$, $\bar{\mu} = (\bar{\mu}_{\beta}(t)) \in \mathbb{R}^q_{+}$, and $\bar{\lambda} = (\bar{\lambda}^{\iota}_{\gamma}(t)) \in \mathbb{R}^{nm}$, fulfilling the Fritz John conditions
$$\bar{\theta}\, \nabla F(0,0) + \bar{\mu}^T \nabla Z(0,0) + \bar{\lambda}^T \nabla J(0,0) = 0, \qquad (\ast)$$
$$\bar{\mu}^T Z(0,0) = 0, \quad \bar{\mu} \geq 0, \quad \bar{\theta} \geq 0$$
(here $\nabla \Delta(x_1, x_2)$ denotes the gradient of $\Delta$ at $(x_1, x_2)$). The first relation formulated in $(\ast)$ is rewritten as
$$\int_C \Big[ \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}} \Big) h^{\iota} + \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} \Big) h^{\iota}_{\gamma} + \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}}\, h^{\iota} + \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}_{\gamma}}\, h^{\iota}_{\gamma} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}}\, h^{\iota} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}_{\gamma}}\, h^{\iota}_{\gamma} \Big]\, dt^{\pi} = 0,$$
$$\int_C \Big[ \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{y}^{j}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{y}^{j}} \Big) m^{j} + \bar{\mu}^T \frac{\partial A}{\partial \bar{y}^{j}}\, m^{j} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{y}^{j}}\, m^{j} \Big]\, dt^{\pi} = 0,$$
or, in an equivalent manner,
$$\int_C \Big[ \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}} \Big) - \frac{\partial}{\partial t^{\gamma}}\, \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} \Big) + \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}} - \frac{\partial}{\partial t^{\gamma}}\, \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}_{\gamma}} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}} - \frac{\partial}{\partial t^{\gamma}}\, \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}_{\gamma}} \Big] h^{\iota}\, dt^{\pi} = 0,$$
$$\int_C \Big[ \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{y}^{j}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{y}^{j}} \Big) + \bar{\mu}^T \frac{\partial A}{\partial \bar{y}^{j}} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{y}^{j}} \Big] m^{j}\, dt^{\pi} = 0,$$
where we used integration by parts, the boundary conditions, and the divergence formula.
In the following, taking into account a fundamental lemma of the calculus of variations, we get
$$\bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}} \Big) - \frac{\partial}{\partial t^{\gamma}}\, \bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{p}^{\iota}_{\gamma}} \Big) + \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}} - \frac{\partial}{\partial t^{\gamma}}\, \bar{\mu}^T \frac{\partial A}{\partial \bar{p}^{\iota}_{\gamma}} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}} - \frac{\partial}{\partial t^{\gamma}}\, \bar{\lambda}^T \frac{\partial B}{\partial \bar{p}^{\iota}_{\gamma}} = 0, \quad \iota = \overline{1,n},$$
$$\bar{\theta} \Big( \frac{\partial \Delta_{\pi}}{\partial \bar{y}^{j}} - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial \bar{y}^{j}} \Big) + \bar{\mu}^T \frac{\partial A}{\partial \bar{y}^{j}} + \bar{\lambda}^T \frac{\partial B}{\partial \bar{y}^{j}} = 0, \quad j = \overline{1,l},$$
which are exactly the conditions $(1)$ and $(2)$. Finally, the second part formulated in $(\ast)$,
$$\bar{\mu}^T Z(0,0) = 0, \quad \bar{\mu} \geq 0, \quad \bar{\theta} \geq 0,$$
provides us with
$$\bar{\mu}^T A(\bar{\zeta}) = 0, \quad \bar{\mu} \geq 0, \quad \bar{\theta} \geq 0,$$
and this completes the proof. □
Remark 3.
The relations $(1)$–$(4)$ in Theorem 1 are called the robust necessary optimality conditions for the robust fractional optimization problem $(\mathrm{P})$.
Definition 6.
The feasible solution $(\bar{p}, \bar{y}) \in D$ is said to be a normal robust optimal solution to $(\mathrm{P})$ if $\bar{\theta} > 0$ (see Theorem 1).
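The role of the multipliers in the necessary optimality conditions can be illustrated on a finite-dimensional analogue: at a constrained minimizer, stationarity, complementary slackness, and the sign conditions hold with a positive theta in the normal case. The functions f and a below are hypothetical stand-ins for the paper's functionals, and a grid search replaces the variational argument.

```python
# Finite-dimensional analogue of the robust necessary optimality
# conditions (1)-(4): minimize f(x) = (x - 2)^2 subject to a(x) = x - 1 <= 0.
# All data are hypothetical illustrations, not the paper's setting.

f = lambda x: (x - 2) ** 2
df = lambda x: 2 * (x - 2)                   # gradient of the objective
a = lambda x: x - 1                          # inequality constraint
da = lambda x: 1.0                           # gradient of the constraint

# minimizer found by a fine grid search over the feasible part of [0, 3]
xs = [i / 1000 for i in range(3001)]
x_bar = min((x for x in xs if a(x) <= 0), key=f)

# normal case: theta > 0; mu chosen to satisfy stationarity
theta, mu = 1.0, -df(x_bar) / da(x_bar)

stationarity = theta * df(x_bar) + mu * da(x_bar)   # analogue of (1)-(2)
slackness = mu * a(x_bar)                           # analogue of (3)
```

Here the constraint is active at x_bar = 1, so mu = 2 > 0, slackness vanishes, and the stationarity residual is zero.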
Further, we state a result on the robust sufficient conditions associated with the considered fractional optimization problem.
Theorem 2.
If $\max_{f \in F} \Delta_{\pi}(\zeta, f) = \Delta_{\pi}(\zeta, \bar{f})$ and $\min_{g \in G} \Theta_{\pi}(\zeta, g) = \Theta_{\pi}(\zeta, \bar{g})$, the robust necessary optimality conditions $(1)$–$(4)$ are fulfilled, the integral functionals $\int_C \bar{\theta}\, \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi}$, $\int_C \bar{\mu}^T A(\zeta)\, dt^{\pi}$, and $\int_C \bar{\lambda}^T B(\zeta)\, dt^{\pi}$ are convex at $(\bar{p}, \bar{y}) \in D$, and $\int_C \bar{\theta}\, \Theta_{\pi}(\zeta, \bar{g})\, dt^{\pi}$ is concave at $(\bar{p}, \bar{y}) \in D$, then the pair $(\bar{p}, \bar{y}) \in D$ is a robust optimal solution to $(\mathrm{P})$.
Proof. 
On the contrary, let us suppose the pair $(\bar{p}, \bar{y}) \in D$ is not a robust optimal solution to $(\mathrm{P})$. By Proposition 1, it results that $(\bar{p}, \bar{y})$ is not a robust optimal solution to $(\mathrm{NP})$, either. Thus, there exists $(p, y) \in D$ satisfying
$$\int_C \max_{f \in F} \Delta_{\pi}(\zeta, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\zeta, g)\, dt^{\pi} < \int_C \max_{f \in F} \Delta_{\pi}(\bar{\zeta}, f)\, dt^{\pi} - R_{f,g} \int_C \min_{g \in G} \Theta_{\pi}(\bar{\zeta}, g)\, dt^{\pi},$$
and, by taking $\max_{f \in F} \Delta_{\pi}(\zeta, f) = \Delta_{\pi}(\zeta, \bar{f})$ and $\min_{g \in G} \Theta_{\pi}(\zeta, g) = \Theta_{\pi}(\zeta, \bar{g})$, we obtain
$$\int_C \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi} - R_{f,g} \int_C \Theta_{\pi}(\zeta, \bar{g})\, dt^{\pi} < \int_C \Delta_{\pi}(\bar{\zeta}, \bar{f})\, dt^{\pi} - R_{f,g} \int_C \Theta_{\pi}(\bar{\zeta}, \bar{g})\, dt^{\pi}. \tag{5}$$
By hypothesis, the integral functional $\int_C \bar{\theta}\, \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi}$ is convex at $(\bar{p}, \bar{y}) \in D$, and the integral functional $\int_C \bar{\theta}\, \Theta_{\pi}(\zeta, \bar{g})\, dt^{\pi}$ is concave at $(\bar{p}, \bar{y}) \in D$. Therefore, it follows that
$$\int_C \bar{\theta}\, \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi} - \int_C \bar{\theta}\, \Delta_{\pi}(\bar{\zeta}, \bar{f})\, dt^{\pi} \geq \int_C \Big[ (p - \bar{p})\, \bar{\theta}\, \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) + (y - \bar{y})\, \bar{\theta}\, \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) \Big]\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta}\, \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f})\, dt^{\pi} \tag{6}$$
and
$$\int_C \bar{\theta}\, \Theta_{\pi}(\zeta, \bar{g})\, dt^{\pi} - \int_C \bar{\theta}\, \Theta_{\pi}(\bar{\zeta}, \bar{g})\, dt^{\pi} \leq \int_C \Big[ (p - \bar{p})\, \bar{\theta}\, \frac{\partial \Theta_{\pi}}{\partial p}(\bar{\zeta}, \bar{g}) + (y - \bar{y})\, \bar{\theta}\, \frac{\partial \Theta_{\pi}}{\partial y}(\bar{\zeta}, \bar{g}) \Big]\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta}\, \frac{\partial \Theta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{g})\, dt^{\pi}. \tag{7}$$
Now, multiplying inequality $(7)$ by $R_{f,g}$ and subtracting it from inequality $(6)$, it results that
$$\int_C \bar{\theta}\, \Delta_{\pi}(\zeta, \bar{f})\, dt^{\pi} - R_{f,g} \int_C \bar{\theta}\, \Theta_{\pi}(\zeta, \bar{g})\, dt^{\pi} - \int_C \bar{\theta}\, \Delta_{\pi}(\bar{\zeta}, \bar{f})\, dt^{\pi} + R_{f,g} \int_C \bar{\theta}\, \Theta_{\pi}(\bar{\zeta}, \bar{g})\, dt^{\pi} \geq \int_C (p - \bar{p})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (y - \bar{y})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial y}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi},$$
and, by using relation $(5)$, we get
$$\int_C (p - \bar{p})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (y - \bar{y})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial y}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} < 0. \tag{8}$$
Also, by using the assumptions formulated in the theorem, since the integral functionals $\int_C \bar{\mu}^T A(\zeta)\, dt^{\pi}$ and $\int_C \bar{\lambda}^T B(\zeta)\, dt^{\pi}$ are convex at $(\bar{p}, \bar{y}) \in D$, we obtain
$$\int_C \big[ \bar{\mu}^T A(\zeta) - \bar{\mu}^T A(\bar{\zeta}) \big]\, dt^{\pi} \geq \int_C (p - \bar{p})\, \bar{\mu}^T A_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\mu}^T A_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\mu}^T A_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi}$$
and
$$\int_C \big[ \bar{\lambda}^T B(\zeta) - \bar{\lambda}^T B(\bar{\zeta}) \big]\, dt^{\pi} \geq \int_C (p - \bar{p})\, \bar{\lambda}^T B_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\lambda}^T B_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\lambda}^T B_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi}.$$
Taking into account the feasibility of $(p, y)$ for $(\mathrm{P})$ and the relations $(1)$–$(4)$ (so that $\bar{\mu}^T A(\zeta) \leq 0 = \bar{\mu}^T A(\bar{\zeta})$ and $B(\zeta) = B(\bar{\zeta}) = 0$), we obtain
$$\int_C (p - \bar{p})\, \bar{\mu}^T A_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\mu}^T A_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\mu}^T A_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} \leq 0 \tag{9}$$
and
$$\int_C (p - \bar{p})\, \bar{\lambda}^T B_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\lambda}^T B_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\lambda}^T B_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} \leq 0. \tag{10}$$
On adding the inequalities $(8)$–$(10)$, we obtain
$$\int_C (p - \bar{p})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (y - \bar{y})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial y}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p - \bar{p})\, \bar{\mu}^T A_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\mu}^T A_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\mu}^T A_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} + \int_C (p - \bar{p})\, \bar{\lambda}^T B_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\lambda}^T B_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\lambda}^T B_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} < 0. \tag{11}$$
On the other hand, after multiplying the robust necessary optimality conditions $(1)$ and $(2)$ by $(p - \bar{p})$ and $(y - \bar{y})$, respectively, integrating (using integration by parts and the boundary conditions), and adding the resulting equations, we get
$$\int_C (p - \bar{p})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (y - \bar{y})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial y}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial y}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\theta} \Big[ \frac{\partial \Delta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{f}) - R_{f,g} \frac{\partial \Theta_{\pi}}{\partial p_{\gamma}}(\bar{\zeta}, \bar{g}) \Big] dt^{\pi} + \int_C (p - \bar{p})\, \bar{\mu}^T A_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\mu}^T A_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\mu}^T A_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} + \int_C (p - \bar{p})\, \bar{\lambda}^T B_p(\bar{\zeta})\, dt^{\pi} + \int_C (y - \bar{y})\, \bar{\lambda}^T B_y(\bar{\zeta})\, dt^{\pi} + \int_C (p_{\gamma} - \bar{p}_{\gamma})\, \bar{\lambda}^T B_{p_{\gamma}}(\bar{\zeta})\, dt^{\pi} = 0,$$
which contradicts the inequality $(11)$, and the proof is complete. □
Example 1.
The following application illustrates the theoretical developments formulated in the previous sections of this study. In this regard, we consider only real-valued (that is, $n = l = 1$) affine piecewise smooth control and state functions, $F = G = [1, 2]$, and $K \subset \mathbb{R}^2$ (that is, $m = 2$) is a square fixed by the diagonally opposite corners $t_0 = (t_0^1, t_0^2) = (0, 0)$ and $t_1 = (t_1^1, t_1^2) = \big(\tfrac{1}{2}, \tfrac{1}{2}\big) \in \mathbb{R}^2$. Let us introduce the following fractional optimization problem with data uncertainty in the objective functional:
$$(\mathrm{P_1}) \quad \min_{(p(\cdot),\, y(\cdot))} \ \frac{\displaystyle\int_C \Delta(\zeta, f)\, dt^{\pi}}{\displaystyle\int_C \Theta(\zeta, g)\, dt^{\pi}} = \frac{\displaystyle\int_C \big[ y^2 + f^2 \big]\, dt^{\pi}}{\displaystyle\int_C g\, p\, e^{2p + \frac{1}{2}}\, dt^{\pi}}$$
subject to
$$A(\zeta) = p^2 + p - 2 \leq 0,$$
$$N_{\gamma}(\zeta) = p_{t^{\gamma}} + 2y - 1 = 0, \quad \gamma = 1, 2,$$
$$p(0, 0) = 1, \quad p\big(\tfrac{1}{2}, \tfrac{1}{2}\big) = \tfrac{1}{3}.$$
We can notice that the robust feasible solution set to $(\mathrm{P_1})$ is
$$D = \Big\{ (p, y) \in P \times Y \ : \ -2 \leq p \leq 1, \ \ p_{t^1} = p_{t^2} = 1 - 2y, \ \ p(0, 0) = 1, \ \ p\big(\tfrac{1}{2}, \tfrac{1}{2}\big) = \tfrac{1}{3} \Big\}$$
and, by direct computation (see Theorem 1), we find $(\bar{p}, \bar{y}) = \big( -\tfrac{2}{3}(t^1 + t^2) + 1, \ \tfrac{5}{6} \big) \in D$, which, at $t^1 = t^2 = 0$, satisfies the robust necessary optimality conditions $(1)$–$(4)$ with $R_{f,g} = \frac{169}{36}\, e^{-\frac{5}{2}}$, the uncertainty parameters $\bar{f} = 2$, $\bar{g} = 1$, and the Lagrange multipliers $\bar{\theta} = \tfrac{1}{2}$, $\bar{\mu} = 0$, $\bar{\lambda}_1 = \bar{\lambda}_2 = -\tfrac{5}{24}$. Further, it can also be easily verified that all the conditions of Theorem 2 are satisfied, which ensures that $(\bar{p}, \bar{y})$ is a robust optimal solution to $(\mathrm{P_1})$.
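The feasibility claims of Example 1 can be checked mechanically. The sketch below assumes the readings $\bar{p}(t) = -\tfrac{2}{3}(t^1 + t^2) + 1$, $\bar{y} = \tfrac{5}{6}$, constraint $p_{t^\gamma} + 2y - 1 = 0$, and $A = p^2 + p - 2 \leq 0$; the uniform grid over $K$ is an assumption of the sketch, not part of the example.

```python
# Sketch verifying feasibility of the candidate pair in Example 1 on the
# square K with corners (0, 0) and (1/2, 1/2) (grid check, hypothetical).

p = lambda t1, t2: -2 / 3 * (t1 + t2) + 1    # candidate state function
y_bar = 5 / 6                                # candidate control function
dp = -2 / 3                                  # dp/dt1 = dp/dt2 (p is affine)

# boundary conditions
b0 = p(0, 0)                                 # should equal 1
b1 = p(0.5, 0.5)                             # should equal 1/3

# PDE constraint p_{t^gamma} + 2y - 1 = 0
pde = dp + 2 * y_bar - 1

# inequality constraint A = p^2 + p - 2 <= 0 checked on a grid over K
grid = [i / 100 for i in range(51)]
A_max = max(p(t1, t2) ** 2 + p(t1, t2) - 2 for t1 in grid for t2 in grid)
```

On K, the candidate state ranges over [1/3, 1], so the inequality constraint is tight only at the corner (0, 0), where p = 1.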

4. Conclusions

In this paper, we have studied a class of fractional variational control problems involving data uncertainty in the objective functional. Under convexity and concavity hypotheses on the involved functionals, we have established the associated robust Karush-Kuhn-Tucker necessary and sufficient optimality conditions. Concretely, we have defined, by using the parametric approach, the notion of a robust optimal solution for the case of curvilinear integral-type functionals. Also, we have formulated novel proofs for the main results and provided a framework determined by infinite-dimensional normed spaces of functions and curvilinear integral-type functionals. To the best of the author's knowledge, the results presented in this paper are new in the specialized literature. In addition, as future research directions of this paper, the author mentions the presence of data uncertainty in the constraints, the associated duality theory, and saddle-point optimality criteria.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Dinkelbach, W. On nonlinear fractional programming. Manag. Sci. 1967, 13, 492–498.
2. Jagannathan, R. Duality for nonlinear fractional programs. Z. für Oper. Res. 1973, 17, 1–3.
3. Antczak, T.; Pitea, A. Parametric approach to multitime multiobjective fractional variational problems under (f, ρ)-convexity. Optim. Control Appl. Methods 2016, 37, 831–847.
4. Mititelu, S. Efficiency and duality for multiobjective fractional variational problems with (ρ, b)-quasiinvexity. Yugosl. J. Oper. Res. 2016, 19, 85–99.
5. Mititelu, Ş.; Treanţă, S. Efficiency conditions in vector control problems governed by multiple integrals. J. Appl. Math. Comput. 2018, 57, 647–665.
6. Treanţă, S.; Mititelu, Ş. Duality with (ρ, b)-quasiinvexity for multidimensional vector fractional control problems. J. Inf. Optim. Sci. 2019, 40, 1429–1445.
7. Antczak, T. Parametric approach for approximate efficiency of robust multiobjective fractional programming problems. Math. Methods Appl. Sci. 2021, 44, 11211–11230.
8. Guo, Y.; Ye, G.; Liu, W.; Zhao, D.; Treanţă, S. On symmetric gH-derivative applications to dual interval-valued optimization problems. Chaos Solitons Fractals 2022, 158, 112068.
9. Guo, Y.; Ye, G.; Liu, W.; Zhao, D.; Treanţă, S. Optimality conditions and duality for a class of generalized convex interval-valued optimization problems. Mathematics 2021, 9, 2979.
10. Nahak, C. Duality for multiobjective variational control and multiobjective fractional variational control problems with pseudoinvexity. J. Appl. Math. Stoch. Anal. 2006, 2006, 62631.
11. Patel, R.B. Duality for multiobjective fractional variational control problems with (F, ρ)-convexity. Int. J. Stat. Manag. Syst. 2000, 3, 113–134.
12. Kim, G.A.; Kim, M.H. On sufficiency and duality for fractional robust optimization problems involving (g, ρ)-invex function. East Asian Math. J. 2016, 32, 635–639.
13. Kim, M.H.; Kim, G.A. On optimality and duality for generalized fractional robust optimization problems. East Asian Math. J. 2015, 31, 737–742.
14. Kim, M.H.; Kim, G.S. Optimality conditions and duality in fractional robust optimization problems. East Asian Math. J. 2015, 31, 345–349.
15. Manesh, S.S.; Saraj, M.; Alizadeh, M.; Momeni, M. On robust weakly ϵ-efficient solutions for multi-objective fractional programming problems under data uncertainty. AIMS Math. 2021, 7, 2331–2347.
16. Beck, A.; Tal, A.B. Duality in robust optimization: Primal worst equals dual best. Oper. Res. Lett. 2009, 37, 1–6.
17. Jeyakumar, G.; Li, G.; Lee, G.M. Robust duality for generalized convex programming problems under data uncertainty. Nonlinear Anal. Theory Methods Appl. 2012, 75, 1362–1373.
18. Treanţă, S. Efficiency in uncertain variational control problems. Neural Comput. Appl. 2021, 33, 5719–5732.
19. Treanţă, S. Robust saddle-point criterion in second-order partial differential equation and partial differential inequation constrained control problems. Int. J. Robust Nonlinear Control 2021, 31, 9282–9293.
20. Treanţă, S.; Jiménez, M.A. On generalized KT-pseudoinvex control problems involving multiple integral functionals. Eur. J. Control 2018, 43, 39–45.
21. Baranwal, A.; Jayswal, A.; Kardam, P. Robust duality for the uncertain multitime control optimization problems. Int. J. Robust Nonlinear Control 2022, 32, 5837–5847.
22. Jayswal, A.; Preeti; Arana-Jiménez, M. Robust penalty function method for an uncertain multi-time control optimization problems. J. Math. Anal. Appl. 2022, 505, 125453.
23. Jayswal, A.; Preeti; Arana-Jiménez, M. An exact l1 penalty function method for a multitime control optimization problem with data uncertainty. Optim. Control Appl. Methods 2020, 41, 1705–1717.
24. Treanţă, S. On Controlled Variational Inequalities Involving Convex Functionals. In Optimization of Complex Systems: Theory, Models, Algorithms and Applications; Le Thi, H., Le, H., Pham Dinh, T., Eds.; WCGO 2019; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 991, pp. 164–174.

Saeed, T. Robust Optimality Conditions for a Class of Fractional Optimization Problems. Axioms 2023, 12, 673. https://doi.org/10.3390/axioms12070673