1. Introduction
Optimization models and control theory are widely used to study practical, concrete, or real-life problems arising in economics, decision theory, production inventory, data classification, game theory, and portfolio selection. Since practical problems often rely on estimation or measurement, errors may occur, and the presence of such errors can invalidate the computational results associated with the original problem. To overcome this issue, the use of robust approaches to represent data, of fuzzy numbers, or of interval analysis has become an important research direction in recent decades.
Optimizing the ratio of two objective (cost) functions or functionals leads to a fractional optimization problem. Dinkelbach [1] and Jagannathan [2] succeeded in transforming such a problem into an equivalent non-fractional optimization problem by means of a parametric approach. Over time, many researchers have used this technique to study and solve various fractional variational problems; we mention the works of Mititelu [3], Antczak and Pitea [4], Mititelu and Treanţă [5], and Antczak [6]. For other ideas on this subject, interested readers are directed to Patel [7], Nahak [8], Manesh et al. [9], Kim and Kim [10,11,12], and the references therein. Noor [13] introduced some new concepts of biconvex functions involving an arbitrary bifunction and function. More precisely, Noor showed that the optimality conditions for general biconvex functions can be characterized by a class of bivariational-like inequalities.
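To fix ideas, the parametric technique of Dinkelbach and Jagannathan can be summarized for a generic ratio of two functionals as follows; the symbols F, G, X, and λ* below are generic illustrations and are not the notation of the present paper. Assuming that G > 0 on the feasible set X and that the minima are attained, a point is optimal for the fractional problem if and only if it is optimal for the associated parametric (non-fractional) problem with zero optimal value:
\[
% generic sketch: F, G, X, \lambda^{*} are illustrative symbols, not the paper's notation
\lambda^{*}=\min_{x\in X}\frac{F(x)}{G(x)}
\quad\Longleftrightarrow\quad
\min_{x\in X}\bigl[F(x)-\lambda^{*}G(x)\bigr]=0 .
\]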
Optimization and variational problems with uncertain data arise when we have inadequate information, outdated sources, a large volume of data, sample disparity, or other factors leading to data uncertainty. To investigate such cases, the robust technique is intensively used in studying optimization problems with data uncertainty, since this approach reduces the uncertainty associated with the original problem. Several researchers have stated and investigated different optimization or variational problems involving data uncertainty and have established novel and efficient results (see Jeyakumar et al. [14], Beck and Ben-Tal [15], Baranwal et al. [16], Treanţă [17,18], Preeti et al. [19], and Jayswal et al. [20]).
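Schematically, the robust (worst-case) counterpart of an uncertain constrained problem replaces the uncertain data by their worst realizations over the uncertainty sets; the notation below (f, g, X, U, V) is a generic illustration rather than the formulation used later in this paper:
\[
% generic robust counterpart; U, V are convex, compact uncertainty sets
\min_{x\in X}\ \max_{u\in U} f(x,u)
\quad\text{subject to}\quad g(x,v)\le 0,\ \ \forall\, v\in V .
\]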
Over time, optimal control problems subject to nonlinear equality- and/or inequality-type constraints (governed by ordinary differential equations) have been formulated and studied by many researchers. However, since many phenomena obey laws involving partial differential equations or partial differential inequations (PDEs/PDIs), a consistent analysis is needed for scalar/vector variational control problems with PDE/PDI or isoperimetric-type constraints and multiple or path-independent curvilinear integral cost functionals. Multiple and curvilinear integrals in the calculus of variations have, of course, been considered and studied before, but they have not been sufficiently analyzed in the context of robust optimal control models. Because of the increasing complexity of the environment, the initial data often suffer from inaccuracy. Therefore, an adequate uncertainty framework is necessary to formulate the model, and new methods have to be adapted or developed to provide optimal or efficient solutions in a certain sense. The current paper is situated within this line of research on robust/uncertain optimization problems.
Next, we formulate a fractional variational control problem with mixed constraints and data uncertainty in the cost functional (given by a path-independent curvilinear-type integral). Further, we state the robust necessary optimality conditions and prove their sufficiency under convexity, quasi-convexity, strict quasi-convexity, and/or monotonic quasi-convexity assumptions on the involved functionals. Moreover, we introduce and describe the robust Kuhn-Tucker points associated with the considered optimization problem. The main contributions of the present paper are the following: (i) by using the parametric technique, we introduce the notions of robust optimal solution and robust Kuhn-Tucker point for the case of curvilinear integral-type functionals; (ii) we provide novel proofs for the main results; and (iii) we build a new framework determined by spaces of functions and by curvilinear integral-type functionals.
The paper is organized as follows. Section 2 states the basic concepts, notations, and assumptions used to formulate the principal results. In Section 3, under suitable convexity, quasi-convexity, strict quasi-convexity, and/or monotonic quasi-convexity hypotheses, we establish robust sufficient optimality conditions for the considered problem; in addition, we describe the notion of robust Kuhn-Tucker point. Finally, in Section 4 we present the conclusions and formulate some future research directions.
2. Auxiliary Tools
In the following, we present the basic concepts, notations, and assumptions used to formulate the principal results. We start with the classical finite-dimensional Euclidean spaces , , and , with (that is, ), , and as arbitrary points of , , and , respectively. Let be a hyper-parallelepiped having the diagonally opposite corners and , and let be a piecewise differentiable curve joining the points and in . Define as the space of piecewise smooth functions (state variables) and the space of piecewise continuous functions (control variables), respectively, and assume that the product space is endowed with the norm induced by the following inner product
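A typical inner product in this setting, stated here with assumed notation that need not coincide with the paper's original symbols (Ω a hyper-parallelepiped with corners t_0 and t_1, Γ ⊂ Ω a piecewise differentiable curve joining them, s, x state functions and c, u control functions), is the curvilinear-integral pairing
\[
% assumed notation only: s,x:\Omega\to\mathbb{R}^{n} (states), c,u:\Omega\to\mathbb{R}^{k} (controls)
\bigl\langle (s,c),(x,u)\bigr\rangle
=\sum_{\beta=1}^{m}\int_{\Gamma}\bigl[\,s(t)\cdot x(t)+c(t)\cdot u(t)\,\bigr]\,dt^{\beta}.
\]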
Using the above mathematical notations and elements, by denoting , we formulate the following first-order -constrained fractional optimization problem, with data uncertainty in the objective functional, as follows
where f and g are some uncertainty parameters in the convex compact sets and , respectively, and , are assumed to be continuously differentiable functionals.
Definition 1. The above functionals and are named path-independent if and , for .

Assumption 1. Considering that the above functionals and are path-independent, the following working hypothesis is assumed: is a total exact differential, with .

The robust counterpart of , which reduces the possible uncertainties in , is given as
where and are defined as in .
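As an illustration of the path-independence requirement appearing in Definition 1 and Assumption 1, for a curvilinear integral of a 1-form the standard condition, written here in assumed generic notation, is the closedness of the integrand,
\[
% assumed notation: f_{\beta}\bigl(t,s(t),c(t)\bigr)\,dt^{\beta} is the integrand 1-form,
% and D_{\alpha} denotes the total derivative with respect to t^{\alpha}
D_{\alpha}f_{\beta}=D_{\beta}f_{\alpha},\qquad \alpha,\beta\in\{1,\dots,m\},
\]
which guarantees that the curvilinear integral depends only on the endpoints of the curve and that the integrand is (locally) a total exact differential.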
The set of all feasible solutions to , which is the same as the set of all feasible solutions to , is defined as
For , we assume that and . Further, by considering the positive real number , in line with Jagannathan [2] and Dinkelbach [1], and following Mititelu and Treanţă [5], we build a non-fractional optimization problem associated with , as
The robust counterpart of is given by
Next, for simplicity of presentation, we will use the following abbreviations throughout the paper: .
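The parametric construction used above can also be illustrated numerically. The following sketch applies Dinkelbach's iterative scheme to a toy, finite-dimensional ratio; the functions F and G, the bounds, and all other choices below are hypothetical and serve only to exhibit the iteration, not the paper's infinite-dimensional model.

# Toy illustration of the Dinkelbach parametric scheme (hypothetical example).
# The paper itself works with curvilinear integral functionals; F and G below
# are arbitrary scalar functions chosen only to show how the iteration behaves.
from scipy.optimize import minimize_scalar

def F(x):            # numerator of the ratio objective (toy choice)
    return (x - 2.0) ** 2 + 1.0

def G(x):            # denominator, positive on the feasible interval
    return x ** 2 + 1.0

def dinkelbach(x0=0.0, bounds=(-5.0, 5.0), tol=1e-10, max_iter=100):
    """Repeatedly minimize F - lam*G, then update lam = F(x)/G(x)."""
    x, lam = x0, F(x0) / G(x0)
    for _ in range(max_iter):
        res = minimize_scalar(lambda y: F(y) - lam * G(y),
                              bounds=bounds, method="bounded")
        x = res.x
        if abs(res.fun) < tol:   # min(F - lam*G) = 0  <=>  lam is the optimal ratio
            break
        lam = F(x) / G(x)
    return x, lam

x_star, lam_star = dinkelbach()
print(x_star, lam_star)          # lam_star approximates min F/G over [-5, 5]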
Definition 2. A point is said to be a robust optimal solution to if
for all .

Definition 3. A point is said to be a robust optimal solution to if
for all .

Remark 1. We can observe that is the set of feasible solutions to (and also for ).
Remark 2. The robust optimal solutions to (or ) are also robust optimal solutions to (or ).
Next, in order to prove the principal results of this paper, we present the definitions of convex, quasi-convex, strictly quasi-convex, and monotonic quasi-convex curvilinear integral functionals (see, for instance, Treanţă [21]).
Definition 4. A curvilinear integral functional is said to be convex at if the following inequality
holds, for all .

Definition 5. A curvilinear integral functional is said to be quasi-convex at if the following inequality
implies
for all .

Definition 6. A curvilinear integral functional is said to be strictly quasi-convex at if the following inequality
implies
for all .

Definition 7. A curvilinear integral functional is said to be monotonic quasi-convex at if the following inequality
implies
for all .

Remark 3. The relationships between the various convexity notions used in this article are discussed and illustrated with suitable examples in previous research works (see, for instance, Mititelu and Treanţă [5] and Jayswal et al. [22]).
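For orientation, in this framework the convexity of a curvilinear integral functional is usually expressed through a first-order inequality of the following type; the notation (F, f, Γ, s, c) is assumed here for illustration and may differ from the symbols used in the definitions above:
\[
% assumed notation: F(s,c)=\int_{\Gamma} f\bigl(t,s(t),c(t)\bigr)\,dt^{\beta}
F(s,c)-F(\bar{s},\bar{c})
\;\ge\;
\int_{\Gamma}\Bigl[\frac{\partial f}{\partial s}\bigl(t,\bar{s}(t),\bar{c}(t)\bigr)\bigl(s(t)-\bar{s}(t)\bigr)
+\frac{\partial f}{\partial c}\bigl(t,\bar{s}(t),\bar{c}(t)\bigr)\bigl(c(t)-\bar{c}(t)\bigr)\Bigr]dt^{\beta},
\]
for all admissible pairs (s, c); in the same spirit, quasi-convexity at (s̄, c̄) typically requires that F(s, c) ≤ F(s̄, c̄) forces the curvilinear integral on the right-hand side to be non-positive.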
3. Robust Sufficient Optimality Conditions
Next, under suitable convexity, quasi-convexity, strict quasi-convexity, and/or monotonic quasi-convexity hypotheses, we establish robust sufficient optimality conditions for the considered problem. In addition, in accordance with Treanţă and Arana-Jiménez [23], we describe the notion of robust Kuhn-Tucker point.
The next proposition provides an auxiliary result used to establish the robust sufficient optimality conditions for (see Saeed [24]).
Proposition 1. If is a robust optimal solution to , then there exists a positive real number such that is a robust optimal solution to . Moreover, if is a robust optimal solution to and , then is a robust optimal solution to .
The next result formulates the robust necessary optimality conditions for (see Saeed [24]).
Theorem 1. Let be a robust optimal solution of the robust fractional optimization problem and , . Then there exist and the piecewise differentiable functions , satisfying
for , except at points of discontinuity.

Remark 4. The relations (1)–(4) in Theorem 1 are called robust necessary optimality conditions for the robust fractional optimization problem .
Definition 8. The feasible solution is said to be a normal robust optimal solution to if (see Theorem 1).
Next, in accordance with Treanţă and Arana-Jiménez [23], we introduce and describe the robust Kuhn-Tucker point associated with .
Definition 9. Let , . The robust feasible solution is said to be a robust Kuhn-Tucker point of if there exist the piecewise differentiable functions , satisfying
for , except at points of discontinuity.

Taking into account the above-mentioned definition, we formulate the following theorem.
Theorem 2. If is a normal robust optimal solution for the robust fractional optimization problem , with , , then is a robust Kuhn-Tucker point of .
Proof. Let us consider , . Since is a robust optimal solution of , by Theorem 1, there exist and the piecewise differentiable functions , satisfying
for , except at points of discontinuity. As is assumed to be a normal robust optimal solution, we can take , and this completes the proof. □
Next, under convexity assumptions alone on the considered functionals, we provide a result establishing the sufficiency of the robust necessary optimality conditions stated in Theorem 1.
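The mechanism behind such sufficiency results can already be seen in the finite-dimensional analogue, sketched below with generic notation (objective f, inequality constraint g, multiplier μ ≥ 0) that is not the paper's own. If (x̄, μ) satisfies the Kuhn-Tucker conditions ∇f(x̄) + μ∇g(x̄) = 0 and μg(x̄) = 0, and f, g are convex, then for every feasible x (that is, g(x) ≤ 0),
\[
% generic finite-dimensional sketch: convexity of f and g, Kuhn-Tucker conditions at \bar{x}
f(x)\;\ge\; f(\bar{x})+\nabla f(\bar{x})^{\top}(x-\bar{x})
\;=\; f(\bar{x})-\mu\,\nabla g(\bar{x})^{\top}(x-\bar{x})
\;\ge\; f(\bar{x})-\mu\bigl(g(x)-g(\bar{x})\bigr)
\;=\; f(\bar{x})-\mu\, g(x)\;\ge\; f(\bar{x}),
\]
so x̄ is a global minimizer; the proofs below follow an analogous pattern for curvilinear integral functionals, with the parametric and uncertainty ingredients added.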
Theorem 3. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and consider that
are convex at . Then the pair is a robust optimal solution to .

Proof. On the contrary, let us suppose that is not a robust optimal solution to . Then there exists with the property (according to Proposition 1)
By considering , , we get
By hypothesis, fulfills the conditions –. By multiplying Equations and by and , respectively, and integrating them, we get
by using the method of integration by parts, the boundary conditions, and the divergence formula.
On the other hand, since is convex at , we have
and, by using inequality , it follows that
Now, by using the convexity property at of the functional , we get
The previous inequality, together with the robust feasibility of to and the optimality condition , gives
Further, by considering, in the same manner, the convexity property at of the integral functional and the robust feasibility of to , we obtain
Finally, by adding the relations , and side by side, we have
which contradicts relation , and this completes the proof. □
The next theorems assert new robust sufficient optimality conditions under (strict, monotonic) quasi-convexity assumptions.
Theorem 4. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and consider that
are quasi-convex and strictly quasi-convex at , respectively, and is monotonic quasi-convex at . Then the pair is a robust optimal solution to .

Proof. Let us assume that is not a robust optimal solution to , and consider the following non-empty set
By hypothesis, for , we get
and then
For , the equality holds and it follows that
Also, for , the inequality gives
By hypothesis, fulfills the conditions –. By multiplying Equations and by and , respectively, and integrating them, we get
by using the method of integration by parts, the boundary conditions, and the divergence formula. On the other hand, by adding the relations , and side by side, we have
which contradicts relation , and this completes the proof. □
Next, some immediate consequences of the previous theorem can be formulated as follows.
Theorem 5. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare strictly quasi-convex and quasi-convex at , respectively, and is monotonic quasi-convex at . Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by replacing the sign “≤” in with “<”, and the sign “<” in with “≤”. □
Theorem 6. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare quasi-convex and strictly quasi-convex at , respectively, and is monotonic quasi-convex at . Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by replacing the . □
Theorem 7. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare strictly quasi-convex and quasi-convex at , respectively, and is monotonic quasi-convex at . Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by replacing the , the sign “≤” in with “<”, and the sign “<” in with “≤”. □
Theorem 8. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare quasi-convex and strictly quasi-convex at , respectively. Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by considering the sign “<” in and , then adding them. □
Theorem 9. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare strictly quasi-convex and quasi-convex at , respectively. Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by considering the sign “<” in , and the sign “≤” in and , then adding them. □
Theorem 10. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare quasi-convex and strictly quasi-convex at , respectively. Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by replacing the , and by considering the sign “<” in and , then adding them. □
Theorem 11. Let be a feasible solution to such that the robust necessary optimality conditions given in (1)–(4) are satisfied, , , and considerare strictly quasi-convex and quasi-convex at , respectively. Then the pair is a robust optimal solution to . Proof. The proof follows in the same manner as in Theorem 4, by replacing the , the sign “≤” in with “<”, and by considering the sign “≤” in and , then adding them. □
Remark 5. (i) To support the main elements formulated in this paper, the reader may consult the illustrative applications and numerical simulations in the recent research work of Jayswal et al. [22]. (ii) Regarding the research limitations associated with this paper, we mention the case where second-order partial derivatives are present, as well as the situation in which the involved functionals are not necessarily (quasi-)convex. (iii) To highlight the above-mentioned theorems, a suitable illustrative application (from mechanics) is presented and investigated in Treanţă [25].