Robust Optimality Conditions for a Class of Fractional Optimization Problems

Abstract: In this paper, by considering the parametric technique, we study a class of fractional optimization problems involving data uncertainty in the objective functional. We formulate and prove the robust Karush-Kuhn-Tucker necessary optimality conditions and establish their sufficiency under convexity and/or concavity assumptions on the involved functionals. In addition, to complete the study, an illustrative example is presented.


Introduction
Optimization theory is widely used in concrete problems coming from decision theory, game theory, economics, data classification, production inventory, and portfolio selection. Since data in real-life problems are obtained, most of the time, by measurement or estimation, errors are bound to occur. Moreover, the accumulation of errors may cause the computed results to contradict the original problem. To overcome this shortcoming, the use of interval analysis, fuzzy numbers, and the robust technique to represent data has become a popular research direction in recent years.
A fractional optimization problem asks to optimize the ratio of two objective (cost) functionals. Dinkelbach [1] and Jagannathan [2] formulated a parametric technique for studying a fractional optimization problem by converting it into an equivalent non-fractional optimization problem. Over time, many researchers have used this technique to investigate various classes of fractional optimization problems. In this direction, we mention the works of Antczak and Pitea [3], Mititelu [4], Mititelu and Treanţȃ [5], Treanţȃ and Mititelu [6], and Antczak [7]. Guo et al. [8,9] proposed the symmetric gH-derivative and its applications to dual interval-valued optimization problems. For other connected ideas on this topic, interested readers are directed to Nahak [10], Patel [11], Kim and Kim [12–14], Manesh et al. [15], and references therein.
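The parametric technique referenced above can be made concrete on a toy, finite-dimensional instance. The sketch below is illustrative only: the paper's setting replaces the scalar functions by curvilinear integral functionals under PDE&PDI constraints and uncertainty sets, whereas here the sample functions f, g and the grid-search inner solver are our own choices. It implements the classical Dinkelbach iteration: given the current ratio R_k = f(x_k)/g(x_k), minimize the non-fractional objective f(x) − R_k·g(x) and repeat until the ratio stabilizes.

```python
# Classical Dinkelbach iteration for min f(x)/g(x) with g > 0 (toy sketch).
# The inner argmin is a plain grid search over [lo, hi]; any solver works.

def grid_argmin(phi, lo=0.0, hi=4.0, n=40001):
    """Return the grid point minimizing phi on [lo, hi]."""
    best_x, best_v = lo, phi(lo)
    for i in range(1, n):
        x = lo + (hi - lo) * i / (n - 1)
        v = phi(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def dinkelbach(f, g, x0, tol=1e-6, max_iter=50):
    """Iterate x_{k+1} = argmin_x [f(x) - R_k g(x)], with R_k = f(x_k)/g(x_k)."""
    x = x0
    R = f(x) / g(x)
    for _ in range(max_iter):
        x = grid_argmin(lambda z: f(z) - R * g(z))
        R_new = f(x) / g(x)
        if abs(R_new - R) < tol:  # ratio has stabilized
            return x, R_new
        R = R_new
    return x, R

# Toy instance: min (x^2 + 1) / (x + 2) on [0, 4];
# the exact minimizer is x* = sqrt(5) - 2, with optimal ratio 2*sqrt(5) - 4.
x_star, R_star = dinkelbach(lambda x: x**2 + 1, lambda x: x + 2, x0=1.0)
```

Each outer step solves a non-fractional problem, mirroring how the non-fractional problem (NP) is built from (P) later in the paper; the fixed point of the iteration is exactly the optimal value of the ratio.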
Uncertain optimization problems arise when we have a large volume of data, old sources, sample disparity, inadequate information, or other factors that lead to data uncertainty. Therefore, the robust technique plays an important role in studying optimization problems governed by data uncertainty. Many researchers have studied different optimization problems involving data uncertainty and tried to formulate new and efficient results (see, for instance, Beck and Tal [16], Jeyakumar et al. [17], Treanţȃ [18–20], Baranwal et al. [21], Jayswal et al. [22], Preeti et al. [23], and references therein).
In this paper, we state a constrained fractional optimization problem involving data uncertainty in the objective functional generated by a curvilinear-type integral. Namely, by considering the parametric technique, we present the robust Karush-Kuhn-Tucker necessary optimality conditions and establish their sufficiency by using convexity and concavity assumptions on the involved functionals. The present paper has several principal contributions. We mention the most important: (i) defining, by using the parametric approach, the notion of robust optimal solution for the case of curvilinear integral-type functionals; (ii) formulating novel proofs for the main results; and (iii) providing a framework determined by infinite-dimensional normed spaces of functions and curvilinear integral-type functionals. These elements are original in the field of robust fractional optimization problems.
The paper continues as follows. In Section 2, we state the basic notations and concepts needed to formulate and prove the main results. We introduce the fractional optimization problem involving data uncertainty in the objective functional, the associated non-fractional optimization problem, and the corresponding robust counterparts. In Section 3, under suitable convexity and concavity assumptions, we establish the robust optimality (necessary and sufficient) conditions for the considered problem. Finally, Section 4 provides the conclusions and future research directions associated with this paper.

Notations, Assumptions and Problem Formulation
Next, we consider some basic notations and assumptions in order to formulate and prove the new main theorems derived in the present study. We start with the standard finite-dimensional Euclidean spaces R^m, R^n, and R^l, with t = (t^γ), γ = 1, …, m, p = (p^ι), ι = 1, …, n, and y = (y^j), j = 1, …, l, as arbitrary points of R^m, R^n, and R^l, respectively. Let K = K_{t_0, t_1} ⊂ R^m be a hyper-parallelepiped having the diagonally opposite corners t_0 = (t_0^γ) and t_1 = (t_1^γ), γ = 1, …, m, and let C ⊂ K be a piecewise differentiable curve joining the points t_0 and t_1 in R^m. Define P as the space of piecewise smooth functions (state variables) and Y as the space of piecewise continuous functions (control variables), and assume the product space P × Y is endowed with the norm induced by the associated inner product. Using the above mathematical notations and elements, and denoting p_γ(t) = ∂p/∂t^γ(t), we formulate the first-order PDE&PDI-constrained fractional optimization problem (P), with data uncertainty in the objective functional, where f and g are uncertainty parameters in the convex compact sets F ⊂ R and G ⊂ R, respectively, and the involved functionals are assumed to be continuously differentiable.
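The displayed formulation of (P) did not survive in this copy. As a hedged reconstruction (the notation choices Δ_π, Θ_π, A, B follow the abbreviations that appear later in the text; the exact constraint structure and boundary data are our assumptions, not the paper's verbatim display), the problem has the generic shape:

```latex
% Hedged sketch of (P); ζ abbreviates the argument (t, p(t), p_γ(t), y(t)).
(\mathrm{P})\qquad
\min_{(p,y)}\;
\frac{\displaystyle\int_C \Delta_\pi(\zeta,f)\,dt^\pi}
     {\displaystyle\int_C \Theta_\pi(\zeta,g)\,dt^\pi}
\quad\text{subject to}\quad
\begin{cases}
A(\zeta)\leq 0,\quad B(\zeta)=0, & t\in K,\\[2pt]
p(t_0)=p_0,\quad p(t_1)=p_1.
\end{cases}
```

Whether A carries the inequality (PDI) constraint and B the equality (PDE) constraint is likewise our assumption, suggested only by the multiplier terms μ^T A(ζ) and λ^T B(ζ) appearing in the example.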

Definition 1. The above functionals
Assumption 1. Considering that the above functionals are path-independent, a suitable working hypothesis is assumed.

The robust counterpart (RP) for (P), which hedges against the possible uncertainties in (P), is defined with the same data as in (P). The set of all feasible solutions to (RP), which coincides with the set of all feasible solutions to (P), is denoted by D. For (p, y) ∈ D, we assume that Δ ≥ 0 and Θ > 0. Further, by considering the positive real number R_{f,g}, on the line of Jagannathan [2] and Dinkelbach [1], and following Mititelu and Treanţȃ [5], we build a non-fractional optimization problem (NP) associated with (P), together with its robust counterpart (RNP). Next, for a simple presentation, we will use suitable abbreviations throughout the paper.

Definition 2. A point (p̄, ȳ) ∈ D is said to be a robust optimal solution to (P) if the corresponding inequality holds for all (p, y) ∈ D.

Definition 3. A point (p̄, ȳ) ∈ D is said to be a robust optimal solution to (NP) if the corresponding inequality holds for all (p, y) ∈ D.

Remark 1. We can observe that D is also the set of feasible solutions to (NP) (and to (RNP)).
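Since the displays for (RP), (NP), and (RNP) are missing from this copy, the following LaTeX sketch records a plausible reconstruction along the lines of Dinkelbach [1] and Jagannathan [2]; the worst-case placement of the maxima and minima over F and G is our assumption (it relies on Δ ≥ 0 and Θ > 0), not the paper's verbatim display:

```latex
% Hedged reconstruction; ζ abbreviates (t, p(t), p_γ(t), y(t)).
(\mathrm{RP})\quad
\min_{(p,y)\in D}\;\max_{f\in F,\; g\in G}\;
\frac{\int_C \Delta_\pi(\zeta,f)\,dt^\pi}{\int_C \Theta_\pi(\zeta,g)\,dt^\pi},
\qquad
(\mathrm{NP})\quad
\min_{(p,y)\in D}\;
\Big[\int_C \Delta_\pi(\zeta,f)\,dt^\pi
  \;-\; R_{f,g}\int_C \Theta_\pi(\zeta,g)\,dt^\pi\Big],
\qquad
(\mathrm{RNP})\quad
\min_{(p,y)\in D}\;
\Big[\max_{f\in F}\int_C \Delta_\pi(\zeta,f)\,dt^\pi
  \;-\; R_{f,g}\,\min_{g\in G}\int_C \Theta_\pi(\zeta,g)\,dt^\pi\Big].
```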
Next, in order to prove the principal results of this paper, we present the definition of convex and concave curvilinear integral functionals (see, for instance, Treanţȃ [24]).

Definition 4. A curvilinear integral functional
holds, for all (p, y) ∈ P × Y.

Definition 5. A curvilinear integral functional
holds, for all (p, y) ∈ P × Y.
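The inequalities defining Definitions 4 and 5 are missing from this copy. For reference, the standard form of convexity for a curvilinear integral functional used in this literature (see, e.g., Treanţȃ [24]) reads as follows; we record it as a hedged sketch rather than the paper's verbatim display:

```latex
% Hedged sketch: F(p,y) = ∫_C f_π(t, p(t), p_γ(t), y(t)) dt^π is convex at (p̄, ȳ) if
F(p,y)-F(\bar p,\bar y)\;\geq\;
\int_C \Big[(p-\bar p)^{\top}\tfrac{\partial f_\pi}{\partial p}(t,\bar p,\bar p_\gamma,\bar y)
+(p_\gamma-\bar p_\gamma)^{\top}\tfrac{\partial f_\pi}{\partial p_\gamma}(t,\bar p,\bar p_\gamma,\bar y)
+(y-\bar y)^{\top}\tfrac{\partial f_\pi}{\partial y}(t,\bar p,\bar p_\gamma,\bar y)\Big]\,dt^\pi
```

holds for all (p, y) ∈ P × Y; concavity (Definition 5) reverses the inequality.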

Robust Optimality Conditions
In this part of the present study, under suitable hypotheses, we establish the robust necessary and sufficient optimality conditions associated with the fractional optimization problem (P). First, we provide an auxiliary result that will be used to establish the robust sufficient optimality conditions for (P). More precisely, we present the equivalence between the robust optimal solutions to (P) and (NP).

Proposition 1. If (p̄, ȳ) ∈ D is a robust optimal solution to (P), then there exists a positive real number R_{f,g} such that (p̄, ȳ) ∈ D is a robust optimal solution to (NP). Moreover, if (p̄, ȳ) ∈ D is a robust optimal solution to (NP) associated with R_{f,g}, then (p̄, ȳ) ∈ D is a robust optimal solution to (P).
Proof. By reductio ad absurdum, let us assume that (p̄, ȳ) ∈ D is a robust optimal solution to (P), but not a robust optimal solution to (NP). In consequence, there exists (p, y) ∈ D at which the (NP) objective takes a smaller value; rewriting this inequality in fractional form, we obtain a relation which contradicts the fact that (p̄, ȳ) is a robust optimal solution to (P).
Conversely, let (p̄, ȳ) ∈ D be a robust optimal solution to (NP) associated with R_{f,g}, and suppose that (p̄, ȳ) ∈ D is not a robust optimal solution to (P). Thus, there exists (p, y) ∈ D at which the fractional objective takes a smaller value and, by taking into account the definition of R_{f,g}, this inequality becomes, equivalently, a relation which contradicts the fact that (p̄, ȳ) ∈ D is a robust optimal solution to (NP), and the proof is complete.
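With the displays missing, the core of the argument is the standard Dinkelbach-type equivalence. A hedged sketch, writing F(p, y) = max_{f∈F} ∫_C Δ_π(ζ, f) dt^π and G(p, y) = min_{g∈G} ∫_C Θ_π(ζ, g) dt^π > 0 (our notation, not the paper's), is:

```latex
% With R_{f,g} := F(\bar p,\bar y)/G(\bar p,\bar y) and G > 0 on D:
\frac{F(\bar p,\bar y)}{G(\bar p,\bar y)} \le \frac{F(p,y)}{G(p,y)}
\quad\Longleftrightarrow\quad
F(p,y)-R_{f,g}\,G(p,y)\;\ge\;0\;=\;F(\bar p,\bar y)-R_{f,g}\,G(\bar p,\bar y),
\qquad \forall\,(p,y)\in D,
```

so (p̄, ȳ) minimizes the ratio exactly when it minimizes the parametric (non-fractional) objective at the level R_{f,g}.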
The next theorem formulates the robust necessary conditions of optimality for (P).
Theorem 1. Consider that (p̄, ȳ) ∈ D is a robust optimal solution to the robust fractional optimization problem (P). Then there exist θ ∈ R and piecewise differentiable multiplier functions μ and λ such that the conditions (1)–(4) hold for t ∈ K, π = 1, …, m, except at points of discontinuity.

Proof. Let us consider some variations of p̄(t) and ȳ(t), respectively, as follows: p̄(t) + ε₁h(t) and ȳ(t) + ε₂m(t), where ε₁, ε₂ are the variational parameters. Therefore, we convert the involved integral functionals into functions of (ε₁, ε₂). By hypothesis, the pair (p̄, ȳ) is assumed to be a robust optimal solution to (P). Thus, the point (0, 0) becomes an optimal solution to the associated robust optimization problem in (ε₁, ε₂). In consequence, there exist the Lagrange multipliers θ ∈ R, μ, and λ satisfying the corresponding first-order conditions, where we used the method of integration by parts, the boundary conditions, and the divergence formula.
In the following, taking into account one of the fundamental lemmas from the theory of the calculus of variations, we obtain the stated conditions. Finally, the second part follows as formulated in (*), and this completes the proof.
Remark 3. The relations (1)–(4) in Theorem 1 are called robust necessary optimality conditions for the robust fractional optimization problem (P).

Definition 6. The feasible solution (p̄, ȳ) ∈ D is said to be a normal robust optimal solution to (P) if θ > 0 (see Theorem 1).
Further, we state a result on the robust sufficient optimality conditions associated with the considered fractional optimization problem.

Proof. By contradiction, let us suppose that the pair (p̄, ȳ) ∈ D is not a robust optimal solution to (P). By considering Proposition 1, it follows that the pair (p̄, ȳ) ∈ D is not a robust optimal solution to (NP) either. Thus, there exists (p, y) ∈ D satisfying the corresponding strict inequality and, by taking the maximum over the uncertainty set, we arrive at inequality (6). By hypothesis, the convexity of the involved integral functional yields inequality (7). Now, on multiplying the inequality (7) by R_{f,g} and subtracting it from the inequality (6), and then using relation (5), we obtain a further inequality. Also, by using the convexity and concavity assumptions formulated in the theorem, together with the feasibility of (p, y) for (P) and the relations (1)–(4), we obtain further inequalities. On adding the inequalities (8)–(10), we obtain (11), which is a contradiction, and the proof is complete.
Example 1. The following application illustrates the theoretical developments formulated in the previous sections of this study. In this regard, we consider only real-valued (that is, n = l = 1) affine piecewise smooth control and state functions, F = G = [1, 2], and K ⊂ R² (that is, m = 2) a square fixed by the diagonally opposite corners t_0 = (t^1_0, t^2_0) = (0, 0) and t_1 = (t^1_1, t^2_1) = (1/2, 1/2) ∈ R². We introduce an associated fractional optimization problem with data uncertainty in the objective functional. For suitable uncertainty parameters, the robust necessary optimality conditions (1)–(4) are fulfilled; the involved integral functionals ∫_C θΔ_π(ζ, f) dt^π, ∫_C μ^T A(ζ) dt^π, and ∫_C λ^T B(ζ) dt^π are convex at (p, y) ∈ D; and ∫_C θΘ_π(ζ, g) dt^π is concave at (p, y) ∈ D. Consequently, the pair (p, y) ∈ D is a robust optimal solution to (P).