Article

Mixed Cost Function and State Constrains Optimal Control Problems

by
Hugo Leiva
1,*,
Guido Tapia-Riera
1,†,
Jhoana P. Romero-Leiton
2,† and
Cosme Duque
3,†
1
School of Mathematical and Computational Sciences, Yachay Tech University, San Miguel de Urcuqui 100115, Imbabura, Ecuador
2
Department of Mathematical Sciences, University of Puerto Rico at Mayagüez, Mayagüez 00681-9000, Puerto Rico
3
Departamento de Matemáticas, Facultad de Ciencias, Universidad de Los Andes, Mérida 5101, Venezuela
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
AppliedMath 2025, 5(2), 46; https://doi.org/10.3390/appliedmath5020046
Submission received: 9 March 2025 / Revised: 30 March 2025 / Accepted: 8 April 2025 / Published: 10 April 2025

Abstract

In this paper, we analyze an optimal control problem with a mixed cost function, which combines a terminal cost at the final state and an integral term involving the state and control variables. The problem includes both state and control constraints, which adds complexity to the analysis. We establish a necessary optimality condition in the form of the maximum principle, where the adjoint equation is an integral equation involving the Riemann and Stieltjes integrals with respect to a Borel measure. Our approach is based on the Dubovitskii–Milyutin theory, which employs conic approximations to efficiently manage state constraints. To illustrate the applicability of our results, we consider two examples related to epidemiological models, specifically the SIR model. These examples demonstrate how the developed framework can inform optimal control strategies to mitigate disease spread. Furthermore, we explore the implications of our findings in broader contexts, emphasizing how mixed cost functions manifest in various applied settings. Incorporating state constraints requires advanced mathematical techniques, and our approach provides a structured way to address them. The integral nature of the adjoint equation highlights the role of measure-theoretic tools in optimal control. Through our examples, we demonstrate practical applications of the proposed methodology, reinforcing its usefulness in real-life situations. By extending the Dubovitskii–Milyutin framework, we contribute to a deeper understanding of constrained control problems and their solutions.

1. Introduction

In this article, we consider an optimal control problem with a mixed cost function, composed of a term evaluated at the final state and the integral of a function involving the state and control variables. In addition to the standard constraints, the state and control variables are also subject to very general constraints. As mentioned in the abstract, we derive a maximum principle where the adjoint equation is an integral equation whose right-hand side is the sum of a Riemann integral and a Stieltjes integral with respect to a Borel measure.
Optimal control problems with mixed state and control constraints arise in various practical applications, such as economics [1], aerospace engineering [2], and biomedical science [3]. The inclusion of constraints on both the state and control variables significantly complicates the analysis and requires advanced mathematical techniques to establish the necessary optimality conditions [4]. One of the best-established methods for handling such constraints is the Dubovitskii–Milyutin theory [5], which provides a framework for deriving optimality conditions through conical approximations. This approach has been successfully applied to nonlinear systems with state constraints [6] and to problems involving control-affine dynamics [7].
Although the maximum principle obtained here provides necessary conditions for optimality, additional conditions are often required to ensure sufficiency. This work examines these conditions and their broader implications. A key contribution is the adaptation of the Dubovitskii–Milyutin theory to handle mixed state and control constraints effectively; such constraints have previously been explored in applications such as energy management systems [8], spacecraft trajectory optimization [9], and financial engineering [10].
To demonstrate the practical application of our findings, we use two optimal control problems: epidemic modeling (SIR model) and optimal control strategies at the onset of a new viral outbreak as illustrative examples (see [11]). Controlling infectious diseases has been extensively studied using optimal control methods [12], with constraints on state variables often representing limitations in medical resources or vaccine availability [13].
Furthermore, we believe that our results can be applied to problems such as the optimal guidance of endoatmospheric launch vehicle systems under mixed state and control constraints (see [14]) and the improvement of vehicle efficiency through the optimization of driving behavior (see [15]). This research also opens doors to solving complex problems in various fields, including controlled mechanical systems [16] and high-resolution image processing with neural networks [17]. By expanding on the established applications of the Dubovitskii–Milyutin theory, as seen in previous studies such as [18], this study further enhances the understanding and utility of optimal control solutions under mixed constraints, advancing both the theoretical and applied aspects of the field.
The following provides an outline of this paper:
In Section 2, we introduce the problem under study and present the main results. Section 3 provides the necessary theoretical foundations of the Dubovitskii–Milyutin theory, which serves as the basis for proving our main results. We also give a detailed exposition of key concepts and relevant previous work to contextualize our contributions.
In Section 4, we rigorously prove the main result of this work. Section 5 establishes sufficient conditions to guarantee the existence of an optimal pair for the problems considered.
Section 6 presents two real-world models related to the dynamics of SIR-type infections. These examples demonstrate the practical applicability of our results in biological contexts, emphasizing the relevance of the problems addressed.
In Section 7, we propose an open problem that emerges from our research, encouraging future investigations to explore new directions and address unresolved questions in this area.
Finally, in Section 8, we summarize our findings, discuss their significance, and outline potential avenues for future research.

2. Setting of the Problem and the Main Results

In this section, we introduce the optimal control problem under consideration, which involves a system governed by a differential equation and subject to control and state constraints, and we state the main theorems. The goal is to minimize an objective function consisting of a terminal cost term and an integral cost term over time. Formally, the problem is stated as follows:
Let $n, r \in \mathbb{N}$ and $T \in \mathbb{R}^+$ be fixed parameters. Consider the mappings $\Psi$, $\Theta$, $f$, and $g$:
$$\Psi : \mathbb{R}^n \times \mathbb{R}^r \times [0,T] \to \mathbb{R}^n, \quad \Theta : \mathbb{R}^n \times \mathbb{R}^r \times [0,T] \to \mathbb{R}, \quad g : \mathbb{R}^n \times [0,T] \to \mathbb{R}, \quad f : \mathbb{R}^n \to \mathbb{R}.$$
Now, we introduce the function spaces $C_n[0,T] = C([0,T];\mathbb{R}^n)$ and $L_\infty^r$ as follows:
$$C_n[0,T] = \{ z : [0,T] \to \mathbb{R}^n \;:\; z \text{ is a continuous function} \},$$
equipped with the supremum norm given by
$$\|z\|_\infty = \sup_{t\in[0,T]} \|z(t)\|_{\mathbb{R}^n}.$$
Additionally, we consider the classical Banach space $L_\infty^r = L_\infty([0,T];\mathbb{R}^r)$, consisting of essentially bounded measurable functions, endowed with the essential supremum norm.
Problem 1.
$$f(z(T)) + \int_0^T \Theta(z(t), v(t), t)\,dt \;\to\; \operatorname{loc\,min},$$
$$(z, v) \in E = C_n[0,T] \times L_\infty^r[0,T],$$
$$\dot z(t) = \Psi(z(t), v(t), t), \qquad z(0) = z_0,$$
$$z(T) = z_1; \qquad z_1, z_0 \in \mathbb{R}^n,$$
$$v(t) \in V, \qquad t \in [0,T] \ \text{a.e.},$$
$$g(z(t), t) \le 0 \qquad (t \in [0,T]).$$
Assumption 1.
(a) The mappings $\Theta$, $\Psi$, $g$, and $f$ are continuous and possess partial derivatives $\Theta_z$, $\Theta_v$, $\Psi_z$, $\Psi_v$, $f_z$, and $g_z$ that are sufficiently smooth on compact subsets of $\mathbb{R}^n \times \mathbb{R}^r \times [0,T]$.
(b) The set $V \subset \mathbb{R}^r$ is convex and closed, with a nonempty interior, i.e., $\operatorname{int}(V) \neq \emptyset$.
(c) The function $g$ is continuous and satisfies $g(z_0, 0) < 0$ and $g(z_1, T) < 0$, where $z_0, z_1 \in \mathbb{R}^n$ are the fixed points above. Furthermore, $g$ is continuously differentiable with respect to its first variable, with derivative denoted by $g_z$, and $g_z(z, t) \neq 0$ whenever $g(z, t) = 0$.
Theorem 1.
Suppose that Conditions (a)–(c) of Assumption 1 are fulfilled, and let $(z, v) \in E$ be a solution of Problem 1. Then:
(α) There exists a non-negative Borel measure $\mu$ on $[0,T]$ with support contained in
$$R := \{ t \in [0,T] \;:\; g(z(t), t) = 0 \}.$$
(β) There exist $\varrho_0 \ge 0$ and a function $\eta \in L_1^n[0,T] = L_1([0,T];\mathbb{R}^n)$ such that $\varrho_0$ and $\eta$ are not simultaneously zero. Moreover, $\eta$ is a solution of the integral equation
$$\eta(t) = \int_t^T \big( \Psi_z^*(z(\tau), v(\tau), \tau)\,\eta(\tau) + \varrho_0\,\Theta_z(z(\tau), v(\tau), \tau) \big)\,d\tau \;-\; a \;+\; \varrho_0\, f'(z(T)) \;+\; \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau),$$
for some $a \in \mathbb{R}^n$, and, for all $v \in V$ and almost all $t \in [0,T]$, it holds that
$$\big\langle \Psi_v^*(z(t), v(t), t)\,\eta(t) + \varrho_0\,\Theta_v(z(t), v(t), t),\; v - v(t) \big\rangle \ge 0.$$
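In applications, condition (β) is used pointwise. The following restatement is our own paraphrase (not part of the theorem as stated) and is the form used in the computations of Section 6: for almost every $t \in [0,T]$, the variational inequality says that $v(t)$ minimizes a linear form over $V$,
$$v(t) \;\in\; \operatorname*{arg\,min}_{w \in V} \ \big\langle \Psi_v^*(z(t), v(t), t)\,\eta(t) + \varrho_0\,\Theta_v(z(t), v(t), t), \; w \big\rangle,$$
which, for box constraints such as $V = [0, e] \subset \mathbb{R}$, reduces to the pointwise maximization and clamping formulas derived in Section 6.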
Remark 1.
If Condition (6) is replaced by the condition
$$g(z(T), T) \le 0,$$
then we can prove the following result as well:
Theorem 2.
Suppose that Conditions (a)–(b) are fulfilled, let $(z, v) \in E$ be a solution of the new problem, and suppose that $g$ has a continuous derivative with respect to its first variable, $g_z$, such that $\nabla_z g(z(T), T) \neq 0$. Then, there exist $\varrho_0 \ge 0$, $\mu_0 \ge 0$, and a function $\eta \in C_n[0,T] = C([0,T];\mathbb{R}^n)$ such that $\varrho_0$ and $\eta$ are not both zero. Moreover, $\eta$ is a solution of the differential equation
$$\dot\eta(t) = -\big( \Psi_z^*(z(t), v(t), t)\,\eta(t) + \varrho_0\,\Theta_z(z(t), v(t), t) \big) + \mu_0\,\nabla_z g(z(T), T), \qquad \eta(T) = -a - \varrho_0\, f'(z(T)),$$
for some $a \in \mathbb{R}^n$, and, for all $v \in V$ and almost all $t \in [0,T]$, it holds that
$$\big\langle \Psi_v^*(z(t), v(t), t)\,\eta(t) + \varrho_0\,\Theta_v(z(t), v(t), t),\; v - v(t) \big\rangle \ge 0.$$

3. Preliminary Results

In this section, we describe the key results of the Dubovitskii–Milyutin (DM) framework. We state the general constrained optimization problem and introduce approximation cones for both the objective function and the constraints. The optimality condition, expressed as an abstract Euler–Lagrange (EL) equation, is derived using the duals of these cones. These fundamental results follow from well-established principles of DM theory, with comprehensive proofs provided in [18,19,20].

3.1. Cones, Dual Cones, and the DM Theorem

Let E be a topological vector space with a locally convex structure, and let E * be its dual space (i.e., the space of continuous linear functionals on E).
A subset $K \subseteq E$ is called a cone with vertex at zero if it satisfies $\beta K = K$ for each $\beta > 0$.
The dual cone associated with $K$ is defined as follows:
$$K^+ = \{ F \in E^* \;:\; F(z) \ge 0, \ \forall z \in K \}.$$
If $\{ K_\alpha \subseteq E : \alpha \in A \}$ is a family of convex and $w$-closed cones, then the following equality holds:
$$\Big( \bigcap_{\alpha \in A} K_\alpha \Big)^{+} = \overline{\sum_{\alpha \in A} K_\alpha^{+}},$$
where the closure is taken in the $w^*$-topology. (See [20].)
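As a standard finite-dimensional illustration of the dual cone (our own example, not taken from [20]): for the nonnegative orthant $K = \{ z \in \mathbb{R}^n : z_i \ge 0,\ i = 1, \dots, n \}$ one has $K^+ = \{ z \mapsto \langle c, z \rangle : c_i \ge 0 \}$, since $\langle c, z \rangle \ge 0$ for every $z \ge 0$ exactly when every $c_i \ge 0$; and for an open half-space cone $K = \{ z : \langle c, z \rangle < 0 \}$ with $c \neq 0$ one has $K^+ = \{ -\lambda\, \langle c, \cdot \rangle : \lambda \ge 0 \}$, which is the prototype of the dual of a decrease cone used below.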
Lemma 1.
Let $K_1, K_2, \dots, K_n \subseteq E$ be open convex cones such that
$$\bigcap_{i=1}^{n} K_i \neq \emptyset.$$
Then
$$\Big( \bigcap_{i=1}^{n} K_i \Big)^{+} = \sum_{i=1}^{n} K_i^{+}.$$
Lemma 2
(DM). Let $K_1, K_2, \dots, K_{n+1} \subseteq E$ be convex cones with vertex at zero, with $K_1, K_2, \dots, K_n$ open. Then,
$$\bigcap_{i=1}^{n+1} K_i = \emptyset$$
if and only if there exist $F_i \in K_i^{+}$ $(i = 1, 2, \dots, n+1)$, not all zero, such that
$$F_1 + F_2 + \cdots + F_n + F_{n+1} = 0.$$

3.2. The Abstract Euler–Lagrange (EL) Equation

Consider a function $L : E \to \mathbb{R}$ and sets $Q_i \subseteq E$ $(i = 1, 2, \dots, n+1)$ such that $\operatorname{int}(Q_i) \neq \emptyset$ $(i = 1, 2, \dots, n)$. Define the following problem:
$$L(z) \to \operatorname{loc\,min}, \qquad z \in Q_i \ (i = 1, 2, \dots, n+1).$$
Remark 2.
The sets $Q_i$ $(i = 1, 2, \dots, n)$ are usually given by inequality-type constraints, while $Q_{n+1}$ is defined by equality-type constraints, and in general, $\operatorname{int}(Q_{n+1}) = \emptyset$.
Theorem 3
(DM). Let $z \in E$ be an optimal solution to Problem (12), and assume that the following conditions are met:
(a) $K_0$ denotes the descent (or decay) cone of $L$ at $z$.
(b) $K_i$ denotes the feasibility (or admissible) cone associated with $Q_i$ at $z \in Q_i$, for each $i = 1, 2, \dots, n$.
(c) $K_{n+1}$ corresponds to the tangent cone to $Q_{n+1}$ at $z$.
If the cones $K_i$ $(i = 0, 1, 2, \dots, n+1)$ are convex, then there exist functionals $F_i \in K_i^{+}$, for $i = 0, 1, \dots, n+1$, not all identically zero, such that
$$F_0 + F_1 + \cdots + F_{n+1} = 0.$$
Remark 3.
The expression given in (13) is referred to as the Generalized Euler–Lagrange Equation. A detailed explanation of the concepts of decrease, feasibility, and tangency cones can be found in [20].
The procedure for implementing the DM Theorem in concrete cases is as follows (a toy illustration is given after the list):
1. Determine the decrease directions.
2. Identify the feasible directions.
3. Establish the tangent directions.
4. Construct the associated dual cones.
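As a toy finite-dimensional illustration of these four steps (our own example, not taken from the references), take $E = \mathbb{R}^2$, $L(z) = z_1 + z_2$, $Q_1 = \{ z : z_2 \ge 0 \}$ (inequality-type), and $Q_2 = \{ z : z_1 = 0 \}$ (equality-type), with candidate point $z^\ast = (0, 0)$. The decrease cone is $K_0 = \{ h : h_1 + h_2 < 0 \}$, the feasible cone of $Q_1$ is $K_1 = \{ h : h_2 > 0 \}$, and the tangent cone of $Q_2$ is $K_2 = \{ h : h_1 = 0 \}$; their duals are $K_0^{+} = \{ -\lambda_0 \langle (1,1), \cdot \rangle : \lambda_0 \ge 0 \}$, $K_1^{+} = \{ \langle (0,\lambda_1), \cdot \rangle : \lambda_1 \ge 0 \}$, and $K_2^{+} = \{ \langle (\mu, 0), \cdot \rangle : \mu \in \mathbb{R} \}$. The Euler–Lagrange equation $F_0 + F_1 + F_2 = 0$ is satisfied with $\lambda_0 = \lambda_1 = \mu = 1$ (not all zero), in agreement with the fact that $K_0 \cap K_1 \cap K_2 = \emptyset$ and that $z^\ast$ is indeed a minimizer of $L$ on $Q_1 \cap Q_2$.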

3.3. Important Results

Now, we explicitly determine the decay, admissible, and tangent cones in certain cases. Given $z \in E$, we denote the decay cone by $K_d = K_d(L, z)$, representing the set of decay directions.
Theorem 4
(See [20], p. 48). If $E$ is a Banach space and $L$ is Fréchet-differentiable at $z \in E$, then
$$K_d(L, z) = \{ h \in E \;:\; L'(z)h < 0 \},$$
where $L'(z)$ denotes the Fréchet derivative of $L$ evaluated at $z$.
Theorem 5
(See [20], p. 45). Let $L : E \to \mathbb{R}$ be a continuous and convex function defined on a topological vector space $E$, and let $z \in E$. Then, $L$ admits a directional derivative in every direction at $z$, and the following properties hold:
(a) The directional derivative of $L$ at $z$ along $h$ is given by
$$L'(z, h) = \inf_{\varepsilon \in \mathbb{R}^+} \frac{L(z + \varepsilon h) - L(z)}{\varepsilon}.$$
(b) The decay cone at $z$ is characterized as
$$K_d(L, z) = \{ h \in E \;:\; L'(z, h) < 0 \}.$$
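As a quick illustration of Theorem 5 (our own example): take $E = \mathbb{R}$ and the convex function $L(z) = |z|$. At $z = 0$ one gets $L'(0, h) = \inf_{\varepsilon > 0} \frac{|\varepsilon h|}{\varepsilon} = |h| \ge 0$, so $K_d(L, 0) = \emptyset$, consistent with $0$ being the minimizer; at a point $z > 0$ one gets $L'(z, h) = h$, so $K_d(L, z) = \{ h < 0 \}$.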
The admissible cone to $Q$ at $z \in Q$ will be denoted by $K_a = K_a(Q, z)$.
Theorem 6
(See [20], p. 59). If $Q$ is any convex set with a nonempty interior, i.e., $\operatorname{int}(Q) \neq \emptyset$, then
$$K_a = \{ h \in E \;:\; h = \lambda (w - z), \ w \in \operatorname{int}(Q), \ \lambda \in \mathbb{R}^+ \}.$$
The set consisting of all tangent vectors to $Q$ at $z$ forms a cone with vertex at the origin, which will be denoted by $K_T := K_T(Q, z)$ and referred to as the tangent cone.
In this section, we highlight Lyusternik’s Theorem, a fundamental tool for computing the set of tangent vectors. This result plays a key role in our analysis, as it enables us to determine the vectors that are tangent at a given point.
Theorem 7
(Lyusternik [20]). Let $E_1$ and $E_2$ be Banach spaces, and assume that
(a) $z \in E_1$, and the mapping $P : E_1 \to E_2$ is Fréchet-differentiable at $z$;
(b) the derivative $P'(z) : E_1 \to E_2$ is surjective.
Under these conditions, the tangent cone $K_T$ to the set $Q := \{ x \in E_1 \;:\; P(x) = 0 \}$ at the point $z \in Q$ is given by
$$K_T = \operatorname{Ker} P'(z).$$
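A standard illustration of Theorem 7 (our own example, not taken from the paper): let $E_1 = \mathbb{R}^n$, $E_2 = \mathbb{R}$, and $P(z) = \|z\|^2 - 1$, so that $Q$ is the unit sphere. At any $z \in Q$ the derivative $P'(z)h = 2\langle z, h \rangle$ is surjective onto $\mathbb{R}$ (because $z \neq 0$), and therefore
$$K_T = \operatorname{Ker} P'(z) = \{ h \in \mathbb{R}^n : \langle z, h \rangle = 0 \},$$
the usual tangent space to the sphere at $z$.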

4. Proof of the Main Theorem (Theorem 1)

Proof. 
Let $\bar L : E \to \mathbb{R}$ be the function defined by
$$\bar L(z, v) = f(z(T)) + \int_0^T \Theta(z(t), v(t), t)\,dt,$$
and let $Q := Q_1 \cap Q_2 \cap Q_3$, where $Q_1$ consists of the pairs $(z, v) \in E$ satisfying Conditions (3) and (4), $Q_2$ of those satisfying (5), and $Q_3$ of those satisfying (6).
Then, Problem 1 is equivalent to the following:
$$\bar L(z, v) \to \operatorname{loc\,min}, \qquad (z, v) \in Q.$$
(a)
Study of the mapping $\bar L$.
Let $K_0 := K_d(\bar L, (z, v))$ denote the decrease cone of $\bar L$ at the point $(z, v)$. According to Theorem 4, we have
$$K_0 = \{ (z, v) \in E \;:\; \bar L'(z, v)(z, v) < 0 \}.$$
If $K_0 \neq \emptyset$, then it follows that
$$K_0^{+} = \{ -\varrho_0\, \bar L'(z, v) \;:\; \varrho_0 \ge 0 \}.$$
According to Example 9.2 in [20], p. 62, the derivative $\bar L'(z, v)$ is given by
$$\bar L'(z, v)(z, v) = f'(z(T))\, z(T) + \int_0^T \big[ \Theta_z(z, v, t)\, z(t) + \Theta_v(z, v, t)\, v(t) \big]\,dt, \qquad (z, v) \in E.$$
Thus, for any $F_0 \in K_0^{+}$, there exists $\varrho_0 \ge 0$ such that
$$F_0(z, v) = -\varrho_0 \Big\{ f'(z(T))\, z(T) + \int_0^T \big[ \Theta_z(z, v, t)\, z(t) + \Theta_v(z, v, t)\, v(t) \big]\,dt \Big\}, \qquad (z, v) \in E.$$
(b)
Analysis of the restriction $Q_1$.
We aim to find the tangent cone to $Q_1$ at the point $(z, v)$,
$$K_1 := K_T(Q_1, (z, v)).$$
Assume that the system
$$\dot z(t) = \Psi_z(z(t), v(t), t)\, z(t) + \Psi_v(z(t), v(t), t)\, v(t)$$
is controllable (see [21,22,23]); then, in view of Theorem 7, we find that
$$K_1 = \Big\{ (z, v) \in E \;:\; z(t) = \int_0^t \big[ \Psi_z(z(\tau), v(\tau), \tau)\, z(\tau) + \Psi_v(z(\tau), v(\tau), \tau)\, v(\tau) \big]\,d\tau, \ z(T) = 0, \ t \in [0,T] \Big\}.$$
Now, let us calculate $K_1^{+}$. To do so, we shall consider the following linear spaces:
$$L_1 := \Big\{ (z, v) \in E \;:\; z(t) = \int_0^t \big[ \Psi_z(z, v, \tau)\, z(\tau) + \Psi_v(z, v, \tau)\, v(\tau) \big]\,d\tau, \ t \in [0,T] \Big\}, \qquad L_2 := \{ (z, v) \in E \;:\; z(T) = 0 \}.$$
Hence,
$$K_1 = L_1 \cap L_2.$$
Then, by Proposition 2.40 from [18], we have that $F_{12} \in L_2^{+}$ if, and only if, there exists $a \in \mathbb{R}^n$ such that
$$F_{12}(z, v) = \langle a, z(T) \rangle, \qquad (z, v) \in E.$$
Moreover, by Lemma 2.5 from [18], it follows that $L_1^{+} + L_2^{+}$ is $w^*$-closed; then, by the cone properties, we obtain that
$$K_1^{+} = L_1^{+} + L_2^{+}.$$
Therefore, $F_1 \in K_1^{+}$ if, and only if, $F_1 = F_{11} + F_{12}$, with $F_{11} \in L_1^{+}$ and $F_{12} \in L_2^{+}$.
(c)
Analysis of the restriction $Q_2$.
Define the set
$$\widetilde Q_2 := \{ v \in L_\infty^r[0,T] \;:\; v(t) \in V, \ t \in [0,T] \ \text{a.e.} \}.$$
Then $Q_2 = C_n[0,T] \times \widetilde Q_2$. Given that $V$ is convex and closed with $\operatorname{int}(V) \neq \emptyset$, the following hold: 1. $Q_2$ and $\widetilde Q_2$ are closed and convex. 2. $\operatorname{int}(Q_2) \neq \emptyset$ and $\operatorname{int}(\widetilde Q_2) \neq \emptyset$.
Let $K_2$ be the admissible cone to $Q_2$ at $(z, v) \in Q_2$:
$$K_2 = C_n[0,T] \times \widetilde K_2,$$
where $\widetilde K_2$ is the admissible cone to $\widetilde Q_2$ at $v \in \widetilde Q_2$.
For any $F_2 \in K_2^{+}$, there exists $\widetilde F_2 \in \widetilde K_2^{+}$ such that
$$F_2 = (0, \widetilde F_2).$$
By Theorem 6, $\widetilde F_2$ is a support functional of $\widetilde Q_2$ at $v$.
(d)
Analysis of the restriction $Q_3$.
Let us define the following function:
$$l : C_n[0,T] \to \mathbb{R}, \qquad l(z) = \max_{t \in [0,T]} g(z(t), t).$$
Then, by Example 7.5 from [20] (p. 52), we have that
$$l'(z; h) = \max_{t \in R} \, g_z(z(t), t)\, h(t), \qquad h \in C_n[0,T],$$
where
$$R = \{ t \in [0,T] \;:\; g(z(t), t) = l(z) \}.$$
Thus, we obtain that
$$R = \{ t \in [0,T] \;:\; g(z(t), t) = 0 \}.$$
On the other hand,
$$K_3 = K_a(Q_3, (z, v)) \supseteq K_d(l, (z, v)) =: K_d.$$
But, by Theorem 5, we obtain
$$K_d = \{ (h, u) \in E \;:\; l'(z, h) < 0 \} = \{ (h, u) \in E \;:\; g_z(z(t), t)\, h(t) < 0 \ (t \in R) \}.$$
In view of (15), we obtain that
$$K_3^{+} \subseteq K_d^{+}.$$
Then, by Example 10.3 ([20], p. 73), we have that for every $F \in K_3^{+}$ there is a non-negative Borel measure $\mu$ on $[0,T]$ such that
$$F(z, v) = -\int_0^T g_z(z(t), t)\, z(t)\,d\mu(t), \qquad (z, v) \in E,$$
and $\mu$ has support contained in
$$R = \{ t \in [0,T] \;:\; g(z(t), t) = 0 \}.$$
(e)
The Euler–Lagrange equation.
The convexity of the cones $K_0$, $K_1$, $K_2$, and $K_3$ is evident. Then, applying Theorem 3, we can find functionals $F_i \in K_i^{+}$ $(i = 0, 1, 2, 3)$, not simultaneously zero, satisfying
$$F_0 + F_1 + F_2 + F_3 = 0.$$
Equation (16) can be expressed as follows:
$$-\varrho_0 f'(z(T))\, z(T) - \varrho_0 \int_0^T \big[ \Theta_z(z, v, t)\, z(t) + \Theta_v(z, v, t)\, v(t) \big]\,dt + F_{11}(z, v) + \langle a, z(T) \rangle + \widetilde F_2(v) + F_3(z, v) = 0, \qquad (z, v) \in E.$$
Now, for every $v \in L_\infty^r$, there exists a function $z \in C_n[0,T]$ that satisfies Equation (14) with $z(0) = 0$. Therefore, $(z, v) \in L_1$, and consequently $F_{11}(z, v) = 0$. As a result, the Euler–Lagrange equation can be expressed as
$$\widetilde F_2(v) = \varrho_0 \int_0^T \Theta_z(z, v, t)\, z(t)\,dt + \varrho_0 \int_0^T \Theta_v(z, v, t)\, v(t)\,dt - \langle a, z(T) \rangle + \varrho_0 f'(z(T))\, z(T) + \int_0^T g_z(z(t), t)\, z(t)\,d\mu(t), \qquad (z, v) \in E.$$
Let $\eta$ be the solution of Equation (7), which means
$$\eta(t) = -a + \varrho_0 f'(z(T)) + \int_t^T \big[ \Psi_z^*(z, v, \tau)\, \eta(\tau) + \varrho_0\, \Theta_z(z, v, \tau) \big]\,d\tau + \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau).$$
This is a Volterra-type integral equation of the second kind, which has a unique solution $\eta \in L_1^n[0,T]$ (see [24], p. 519). Multiplying both sides of the previous equation by $\dot z$ and integrating from $0$ to $T$, we obtain
$$\int_0^T \langle \dot z(t), \eta(t) \rangle\,dt = -\int_0^T \langle a, \dot z(t) \rangle\,dt + \varrho_0 f'(z(T))\, z(T) + \int_0^T \Big\langle \dot z(t), \int_t^T \big[ \Psi_z^*(z, v, \tau)\, \eta(\tau) + \varrho_0\, \Theta_z(z, v, \tau) \big]\,d\tau \Big\rangle\,dt + \int_0^T \Big\langle \dot z(t), \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau) \Big\rangle\,dt.$$
Since
$$\dot z(t) = \Psi_z(z(t), v(t), t)\, z(t) + \Psi_v(z(t), v(t), t)\, v(t), \qquad z(0) = 0,$$
we have that
$$\langle \dot z(t) - \Psi_v(z(t), v(t), t)\, v(t), \eta(t) \rangle = \langle \Psi_z(z(t), v(t), t)\, z(t), \eta(t) \rangle.$$
Substituting this relation into the previous identity yields
$$\int_0^T \langle \dot z(t), \eta(t) \rangle\,dt = -\langle a, z(T) \rangle + \int_0^T \langle \dot z(t), \eta(t) \rangle\,dt + \varrho_0 f'(z(T))\, z(T) - \int_0^T \langle \Psi_v^*(z(t), v(t), t)\, \eta(t), v(t) \rangle\,dt + \varrho_0 \int_0^T \langle z(t), \Theta_z(z(t), v(t), t) \rangle\,dt + \int_0^T \Big\langle \dot z(t), \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau) \Big\rangle\,dt.$$
The last term on the right-hand side can be simplified by using integration by parts for the Stieltjes integral, together with the conditions $g(z_0, 0) < 0$ and $g(z_1, T) < 0$. Specifically, since $0 \notin R$ and $T \notin R$, it follows that $\mu(\{0\}) = \mu(\{T\}) = 0$, and hence
$$\int_0^T \Big\langle \dot z(t), \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau) \Big\rangle\,dt = \int_0^T g_z(z(t), t)\, z(t)\,d\mu(t).$$
Then,
$$\varrho_0 \int_0^T \Theta_z(z(t), v(t), t)\, z(t)\,dt + \int_0^T g_z(z(t), t)\, z(t)\,d\mu(t) - \langle a, z(T) \rangle + \varrho_0 f'(z(T))\, z(T) = \int_0^T \langle \Psi_v^*(z(t), v(t), t)\, \eta(t), v(t) \rangle\,dt.$$
Then, by the EL Equation (15), we obtain that
$$\widetilde F_2(v) = \int_0^T \big\langle \Psi_v^*(z(t), v(t), t)\, \eta(t) + \varrho_0\, \Theta_v(z(t), v(t), t), \; v(t) \big\rangle\,dt, \qquad v \in L_\infty^r[0,T].$$
Since $\widetilde F_2$ is a support functional of $\widetilde Q_2$ at the point $v \in \widetilde Q_2$, it follows from Example 10.5 ([20], p. 76) that
$$\big\langle \Psi_v^*(z(t), v(t), t)\, \eta(t) + \varrho_0\, \Theta_v(z(t), v(t), t), \; v - v(t) \big\rangle \ge 0,$$
for all $v \in V$ and almost all $t \in [0,T]$.
Now, let us show that the case $\varrho_0 = 0$, $\eta = 0$ is not possible. In fact, if $\eta = 0$, then $\eta(T) = -a = 0$, and thus
$$F_{12}(z, v) = \langle a, z(T) \rangle = 0, \qquad (z, v) \in E,$$
that is, $F_{12} \equiv 0$. So, from Equation (7) and the fact that $\varrho_0 = 0$, we obtain that
$$\int_t^T g_z(z(\tau), \tau)\,d\mu(\tau) = 0, \qquad t \in [0,T],$$
which implies that $F_3 = 0$. Also, from (18), we have that $\widetilde F_2(v) = 0$ for all $v \in L_\infty^r[0,T]$; then, from the EL equation, it follows that $F_{11} = 0$, and hence
$$F_1 = F_{11} + F_{12} = 0,$$
which contradicts the statement of Theorem 3.
At this point, we have introduced two additional assumptions: first, that $K_0 \neq \emptyset$; second, that the variational linear system
$$\dot z(t) = \Psi_z(z, v, t)\, z(t) + \Psi_v(z, v, t)\, v(t)$$
is controllable. We shall now establish that these assumptions are superfluous. Indeed, if $K_0 = \emptyset$, then, by the definition of $K_0$, we have that
$$f'(z(T))\, z(T) + \int_0^T \big[ \Theta_z(z(t), v(t), t)\, z(t) + \Theta_v(z(t), v(t), t)\, v(t) \big]\,dt = 0.$$
Let us put $\mu = 0$ and $a = 0$. Then, from Equation (17), we have that
$$\varrho_0 f'(z(T))\, z(T) + \int_0^T \Theta_z(z, v, t)\, z(t)\,dt = \int_0^T \langle \Psi_v^*(z, v, t)\, \eta(t), v(t) \rangle\,dt,$$
for all $(z, v)$ such that $z$ is a solution of Equation (14). Then,
$$\int_0^T \big\langle \Psi_v^*(z(t), v(t), t)\, \eta(t) + \Theta_v(z(t), v(t), t), \; v(t) \big\rangle\,dt = 0, \qquad v \in L_\infty^r[0,T],$$
which leads to the conclusion that
$$\big\langle \Psi_v^*(z, v, t)\, \eta(t) + \Theta_v(z, v, t), \; v - v(t) \big\rangle = 0,$$
for all $v \in V$ and almost all $t \in [0,T]$.
If System (14) is not controllable, then, by an equivalent characterization of controllability (see [20,22,25]), there is a non-trivial function $\eta \in C_n[0,T]$ that is a solution of
$$\dot\eta(t) = -\Psi_z^*(z(t), v(t), t)\, \eta(t)$$
and such that, for all $t \in [0,T]$,
$$\Psi_v^*(z(t), v(t), t)\, \eta(t) = 0.$$
By taking $\varrho_0 = 0$ and $\mu = 0$, we obtain that $\eta$ is a solution of (7), and therefore
$$\big\langle \Psi_v^*(z(t), v(t), t)\, \eta(t), \; v - v(t) \big\rangle \ge 0,$$
for all $v \in V$ and almost all $t \in [0,T]$.
Thus, the proof of Theorem 1 is complete. □

5. Sufficient Condition for Optimality

The necessary optimality condition established in Theorem 1 (maximum principle) can also serve as a sufficient condition under certain additional assumptions. In particular, let us consider the case of Problem 1, where the governing differential equation is linear.
Problem 2.
$$f(z(T)) + \int_0^T \Theta(z(t), v(t), t)\,dt \;\to\; \operatorname{loc\,min},$$
$$(z, v) \in E := C_n[0,T] \times L_\infty^r[0,T],$$
$$\dot z(t) = \Lambda(t)\, z(t) + B(t)\, v(t),$$
$$z(0) = z_0, \qquad z(T) = z_1; \qquad z_1, z_0 \in \mathbb{R}^n,$$
$$v(t) \in V, \qquad t \in [0,T],$$
$$g(z(t), t, \alpha) \le 0 \qquad (t \in [0,T]),$$
where $\Lambda(\cdot) : [0,T] \to \mathbb{R}^{n \times n}$ and $B(\cdot) : [0,T] \to \mathbb{R}^{n \times r}$ are continuous matrix functions. Let $(z, v) \in E$ satisfy Conditions (20)–(23).
Theorem 8.
Assume that Conditions (a)–(c), (α), and (β) from Theorem 1 hold.
Moreover, suppose the following conditions are satisfied:
(A) System (20) is controllable.
(B) There exists $\tilde v \in L_\infty^r[0,T]$ such that $\tilde v(t) \in \operatorname{int}(V)$, $t \in [0,T]$.
(C) The corresponding solution $\tilde z$ associated with $\tilde v$ in Equation (20) satisfies $\tilde z(T) = z_1$ and $g(\tilde z(t), t) < 0$, $t \in [0,T]$.
(D) The functions $f$, $g$, and $\Theta$ are convex with respect to their first two arguments.
Then, the pair ( z , v ) is a global solution to Problem 2.
Proof. 
Let us define the function $\bar L : E \to \mathbb{R}$ as follows:
$$\bar L(z, v) = f(z(T)) + \int_0^T \Theta(z(t), v(t), t)\,dt.$$
Consider the set $Q := Q_1 \cap Q_2 \cap Q_3$, where $Q_1$ is given by (20) and (21), $Q_2$ by (22), and $Q_3$ by (23), as in Theorem 1.
Then, Problem 2 is equivalent to the following:
$$\bar L(z, v) \to \operatorname{loc\,min}, \qquad (z, v) \in Q.$$
It is clear that the sets $Q_i$ $(i = 1, 2, 3)$ are convex and, from Conditions (C) and (D), that $\bar L$ is convex. Additionally, $(\tilde z, \tilde v) \in \operatorname{int}(Q_2) \cap \operatorname{int}(Q_3) \cap Q_1$.
Thus, by Theorem 2.17 from [18], we have the following:
$(z, v)$ is a minimum point of $\bar L$ on $Q$ if, and only if, there are $F_i \in K_i^{+}$ $(i = 0, 1, 2, 3)$, not all zero, such that
$$F_0 + F_1 + F_2 + F_3 = 0.$$
Here, $K_i$ $(i = 0, 1, 2, 3)$ are the cones defined as in Theorem 1.
Let $K_3 = K_a(Q_3, (z, v))$ be the admissible cone to $Q_3$ at the point $(z, v)$. Then,
$$K_3 \supseteq K_d,$$
where
$$K_d := \{ (z, v) \in E \;:\; g_z(z(t), t, \alpha)\, z(t) < 0 \ (t \in R) \},$$
and
$$R := \{ t \in [0,T] \;:\; g(z(t), t) = 0 \}.$$
Then, by the properties of dual cones, we have that
$$K_3^{+} \subseteq K_d^{+}.$$
So, each $F \in K_d^{+}$ has the following form:
$$F(z, v) = -\int_0^T g_z(z(t), t)\, z(t)\,d\mu(t), \qquad (z, v) \in E.$$
Here, $\mu$ is a non-negative Borel measure with support in $R$.
Now, suppose the maximum principle from Theorem 1 holds. This guarantees the existence of $\varrho_0 \ge 0$, $a \in \mathbb{R}^n$, and a non-negative Borel measure $\mu$ supported on $R$. Furthermore, there exists a function $\eta \in L_1^n[0,T]$ that satisfies the following integral equation:
$$\eta(t) = -a + \varrho_0 f'(z(T)) + \int_t^T \big( \Lambda^*(\tau)\, \eta(\tau) + \varrho_0\, \Theta_z(z(\tau), v(\tau), \tau) \big)\,d\tau + \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau),$$
where $\varrho_0$ and $\eta$ are not both zero. Furthermore, for every $v \in V$ and almost every $t \in [0,T]$, the following holds:
$$\big\langle B^*(t)\, \eta(t) + \varrho_0\, \Theta_v(z(t), v(t), t), \; v - v(t) \big\rangle \ge 0.$$
To prove the theorem, it suffices to show that there exist elements $F_i \in K_i^{+}$ $(i = 0, 1, 2, 3)$, not all equal to zero, such that $F_0 + F_1 + F_2 + F_3 = 0$. To this end, we define the set
$$\widetilde Q_2 = \{ v \in L_\infty^r \;:\; v(t) \in V, \ t \in [0,T] \ \text{a.e.} \}$$
and the functionals
$$\widetilde F_2 : L_\infty^r \to \mathbb{R}, \qquad F_2 : E \to \mathbb{R}, \qquad \widetilde F_2(v) := \int_0^T \big\langle B^*(t)\, \eta(t) + \varrho_0\, \Theta_v(z(t), v(t), t), \; v(t) \big\rangle\,dt, \qquad F_2 := (0, \widetilde F_2).$$
Then, from (25), we obtain that
$$\widetilde F_2(w) \ge \widetilde F_2(v), \qquad w \in \widetilde Q_2.$$
So, $\widetilde F_2$ is a support functional of $\widetilde Q_2$ at $v$. Hence, $F_2 = (0, \widetilde F_2) \in K_2^{+}$. Let us define the functional $F_{11} : E \to \mathbb{R}$ as follows:
$$F_{11}(z, v) = \varrho_0 f'(z(T))\, z(T) + \varrho_0 \int_0^T \big[ \Theta_z(z(t), v(t), t)\, z(t) + \Theta_v(z(t), v(t), t)\, v(t) \big]\,dt - \widetilde F_2(v) - \langle a, z(T) \rangle + \int_0^T g_z(z(t), t)\, z(t)\,d\mu(t).$$
Now, we will see that $F_{11} \in L_1^{+}$, where
$$L_1 = \Big\{ (z, v) \;:\; z(t) = \int_0^t \big[ \Lambda(\tau)\, z(\tau) + B(\tau)\, v(\tau) \big]\,d\tau, \ t \in [0,T] \Big\},$$
as in Theorem 1. In fact, suppose that $(z, v) \in L_1$. Then, multiplying both sides of Equation (24) by $\dot z$ and integrating from $0$ to $T$, we obtain that
$$\varrho_0 \int_0^T \Theta_z(z(t), v(t), t)\, z(t)\,dt + \int_0^T g_z(z(t), t)\, z(t)\,d\mu(t) - \langle a, z(T) \rangle + \varrho_0 f'(z(T))\, z(T) = \int_0^T \langle B^*(t)\, \eta(t), v(t) \rangle\,dt.$$
Then,
$$F_{11}(z, v) = -\widetilde F_2(v) + \int_0^T \langle B^*(t)\, \eta(t), v(t) \rangle\,dt + \varrho_0 \int_0^T \Theta_v(z(t), v(t), t)\, v(t)\,dt.$$
Therefore,
$$F_{11}(z, v) = -\widetilde F_2(v) + \widetilde F_2(v) = 0.$$
Thus, $F_{11} \in L_1^{+}$.
Next, we introduce the functionals
$$F_0, F_1, F_3 : E \to \mathbb{R},$$
defined by
$$F_0(z, v) = -\varrho_0 \Big\{ f'(z(T))\, z(T) + \int_0^T \big[ \Theta_z(z(t), v(t), t)\, z(t) + \Theta_v(z(t), v(t), t)\, v(t) \big]\,dt \Big\}, \qquad F_1(z, v) = F_{11}(z, v) + \langle a, z(T) \rangle, \qquad F_3(z, v) = -\int_0^T g_z(z(t), t)\, z(t)\,d\mu(t).$$
Then, $F_0 \in K_0^{+}$, $F_1 \in K_1^{+}$, $F_3 \in K_3^{+}$, and it also holds that
$$F_0 + F_1 + F_2 + F_3 = 0,$$
where not all of these functionals are zero, since by assumption $\varrho_0$ and $\eta$ are not simultaneously zero.
From the convexity conditions, the global minimality of $(z, v)$ follows. □

6. A Mathematical Model

In this section, we will present two important real-life models where our results can be applied; then, in the next section, we will present an open problem.

6.1. Optimal Control in Epidemics: SIR Model

The study in [26] analyzed this model using the Hamiltonian framework. Consider a population affected by an epidemic, where the goal is to mitigate its spread through vaccination. The following variables are introduced:
  • I ( t ) , representing the number of infectious individuals capable of transmitting the disease.
  • S ( t ) , denoting the number of individuals who are not infected but are susceptible to the disease.
  • R ( t ) , indicating the number of recovered individuals who are no longer susceptible to infection.
Let $r > 0$ denote the infection rate, $\gamma > 0$ the recovery rate, and $v(t)$ the vaccination rate. The control function $v(t)$ is constrained by $0 \le v(t) \le e$. The optimal control problem for the SIR model system is formulated as follows:
$$\frac{\alpha}{2}\, I(T) + \int_0^T \tfrac{1}{2}\, v(t)^2\,dt \;\to\; \operatorname{loc\,min},$$
$$(z, v) \in C([0,T]; \mathbb{R}^3) \times L_\infty([0,T]; \mathbb{R}),$$
$$\dot S(t) = -r S(t) I(t) + v(t), \qquad \dot I(t) = r S(t) I(t) - \gamma I(t) - v(t), \qquad \dot R(t) = \gamma I(t),$$
$$S(0) = S_0, \qquad I(0) = I_0, \qquad R(0) = R_0,$$
$$v(t) \in [0, e], \qquad t \in [0,T] \ \text{a.e.},$$
where $\alpha > 0$. The goal is to determine an optimal vaccination strategy that minimizes the above cost functional over the fixed time horizon $[0, T]$.
With the following notation, the problem can be written in the abstract form required by the foregoing theorems:
$$z = \begin{pmatrix} S \\ I \\ R \end{pmatrix} \quad\text{and}\quad \Psi(z, v, t) = \begin{pmatrix} -r S I + v \\ r S I - \gamma I - v \\ \gamma I \end{pmatrix},$$
$$f(z) = \tfrac{1}{2}\, \alpha I, \qquad \Theta(z, v, t) = \tfrac{1}{2}\, v^2, \quad\text{and}\quad g(z, t) = 0,$$
$$\dot z(t) = \begin{pmatrix} \dot S(t) \\ \dot I(t) \\ \dot R(t) \end{pmatrix} = \Psi(z(t), v(t), t) = \begin{pmatrix} -r S(t) I(t) + v(t) \\ r S(t) I(t) - \gamma I(t) - v(t) \\ \gamma I(t) \end{pmatrix}, \qquad z(0) = z_0.$$
In order to apply Theorem 2, we compute the adjoint equation and use Pontryagin's maximum principle to find the optimal control. In fact,
$$\dot\eta(t) = -\Psi_z^*(z(t), v(t), t)\, \eta(t) = \begin{pmatrix} r I(t) & -r I(t) & 0 \\ r S(t) & -r S(t) + \gamma & -\gamma \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \eta_1(t) \\ \eta_2(t) \\ \eta_3(t) \end{pmatrix} = \begin{pmatrix} r I(t)\,\big(\eta_1(t) - \eta_2(t)\big) \\ r S(t)\,\big(\eta_1(t) - \eta_2(t)\big) + \gamma\,\big(\eta_2(t) - \eta_3(t)\big) \\ 0 \end{pmatrix}, \qquad \eta(T) = \begin{pmatrix} 0 \\ -\frac{\alpha}{2} \\ 0 \end{pmatrix}.$$
Note that, since there is no condition on $z(T)$, we have $a = 0$. For simplicity in the calculations, we also set $\varrho_0 = 1$. Now, we compute
$$-\Psi_v^*(z(t), v(t), t)\, \eta(t) = \begin{pmatrix} -1, & 1, & 0 \end{pmatrix} \begin{pmatrix} \eta_1(t) \\ \eta_2(t) \\ \eta_3(t) \end{pmatrix} = \eta_2(t) - \eta_1(t),$$
and $\Theta_v(z(t), v(t), t) = v(t)$. Then, for all $v \in V$, we have
$$\big\langle \eta_2(t) - \eta_1(t) + v(t), \; v - v(t) \big\rangle \ge 0, \qquad t \in [0,T] \ \text{a.e.}$$
This is equivalent to
$$\max_{v \in [0, e]} \big( \eta_1(t) - \eta_2(t) - v(t) \big)\, v = \big( \eta_1(t) - \eta_2(t) - v(t) \big)\, v(t).$$
Then, the optimal control is given by
$$v(t) = \begin{cases} 0, & \text{if } \eta_1(t) - \eta_2(t) < 0, \\ \eta_1(t) - \eta_2(t), & \text{if } 0 \le \eta_1(t) - \eta_2(t) \le e, \\ e, & \text{if } \eta_1(t) - \eta_2(t) > e. \end{cases}$$
At the final time $T$, we have $\eta_1(T) - \eta_2(T) = \frac{\alpha}{2}$. Hence, if $2e < \alpha$, then $v(t) = e$ near the final time $T$.
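The optimal pair can be approximated numerically by a forward–backward sweep: integrate the state equations forward with the current control guess, integrate the adjoint system backward from its terminal value, update the control with the clamping formula above, and iterate. The sketch below is our own illustration of this procedure for the formulas displayed in this subsection; the parameter values (r, gamma, alpha, e, T), the initial data, the step size, and the relaxation factor are assumptions, not values taken from the paper.

import numpy as np

# Forward-backward sweep for the SIR control problem of Section 6.1 (illustrative only).
r, gamma, alpha, e_max = 0.5, 0.2, 10.0, 0.8   # assumed parameters
T, N = 20.0, 2000
dt = T / N
S0, I0, R0 = 0.95, 0.05, 0.0                   # assumed initial data

v = np.zeros(N + 1)                            # initial control guess
for sweep in range(200):
    # forward pass: S' = -r S I + v, I' = r S I - gamma I - v, R' = gamma I
    S, I, R = np.empty(N + 1), np.empty(N + 1), np.empty(N + 1)
    S[0], I[0], R[0] = S0, I0, R0
    for k in range(N):
        S[k + 1] = S[k] + dt * (-r * S[k] * I[k] + v[k])
        I[k + 1] = I[k] + dt * (r * S[k] * I[k] - gamma * I[k] - v[k])
        R[k + 1] = R[k] + dt * (gamma * I[k])
    # backward pass: eta1' = r I (eta1 - eta2), eta2' = r S (eta1 - eta2) + gamma (eta2 - eta3),
    # eta3' = 0, with terminal value eta(T) = (0, -alpha/2, 0)
    e1, e2, e3 = np.empty(N + 1), np.empty(N + 1), np.empty(N + 1)
    e1[N], e2[N], e3[N] = 0.0, -alpha / 2.0, 0.0
    for k in range(N, 0, -1):
        d = e1[k] - e2[k]
        e1[k - 1] = e1[k] - dt * (r * I[k] * d)
        e2[k - 1] = e2[k] - dt * (r * S[k] * d + gamma * (e2[k] - e3[k]))
        e3[k - 1] = e3[k]
    # control update from the maximum principle: v = clamp(eta1 - eta2, 0, e_max)
    v_new = np.clip(e1 - e2, 0.0, e_max)
    if np.max(np.abs(v_new - v)) < 1e-8:
        v = v_new
        break
    v = 0.5 * v + 0.5 * v_new                  # relaxation for numerical stability

print("terminal infected I(T) =", I[-1], " control at t = 0:", v[0])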
Remark 4.
In these types of models, it is natural to assume that the number of infectious individuals capable of spreading the infection, I ( t ) , is less than or equal to the number of non-infectious but susceptible individuals, S ( t ) . To apply Theorem 1, we incorporate the following condition:
$$g(z(t), t) = I(t) - S(t) \le 0 \qquad (t \in [0,T]).$$
In this case, $g_z(z, t) = (-1, 1, 0)^{*}$, and the adjoint integral equation of Theorem 1 becomes
$$\eta(t) = \int_t^T \Psi_z^*(z(\tau), v(\tau), \tau)\, \eta(\tau)\,d\tau + f'(z(T)) + \int_t^T g_z(z(\tau), \tau)\,d\mu(\tau) = \int_t^T \Psi_z^*(z(\tau), v(\tau), \tau)\, \eta(\tau)\,d\tau + \begin{pmatrix} 0 \\ \frac{\alpha}{2} \\ 0 \end{pmatrix} + \int_t^T \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} d\mu(\tau) = \int_t^T \Psi_z^*(z(\tau), v(\tau), \tau)\, \eta(\tau)\,d\tau + \begin{pmatrix} 0 \\ \frac{\alpha}{2} \\ 0 \end{pmatrix} + (T - t) \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix},$$
where the last equality holds if, for instance, $\mu$ is taken to be the Lebesgue measure.
Therefore, the adjoint equation becomes the following equation, keeping the maximum principle the same:
$$\dot\eta(t) = -\Psi_z^*(z(t), v(t), t)\, \eta(t) + \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \qquad \eta(T) = \begin{pmatrix} 0 \\ \frac{\alpha}{2} \\ 0 \end{pmatrix}.$$

6.2. Optimal Control Problem at Early Stage of Viral Outbreak

This model was analyzed in [11] using the Karush–Kuhn–Tucker conditions. Consider a population affected by an epidemic, such as COVID-19, where the objective is to curb its spread through vaccination. The SIR model with controlled intervention is described as follows:
  • I ( t ) , representing the number of infectious individuals capable of transmitting the disease;
  • S ( t ) , denoting the number of individuals who are not infected but remain susceptible;
  • R ( t ) , indicating the number of recovered individuals who are no longer at risk of infection.
Here, $\beta > 0$ represents the infection rate, $\gamma > 0$ the recovery rate, and $v(t)$ the vaccination rate. The control function $v(t)$ is subject to the constraint $0 \le v(t) \le 1$. The optimal control problem for the SIR model system is formulated as follows:
$$S(0) - S(T) + \int_0^T \Theta(z(t), v(t), t)\,dt \;\to\; \operatorname{loc\,min},$$
$$(z, v) \in C([0,T]; \mathbb{R}^3) \times L_\infty([0,T]; \mathbb{R}),$$
$$\dot S(t) = -\beta (1 - v(t)) S(t) I(t), \qquad \dot I(t) = \beta (1 - v(t)) S(t) I(t) - \gamma I(t), \qquad \dot R(t) = \gamma I(t),$$
$$S(0) = S_0, \qquad I(0) = I_0, \qquad R(0) = R_0,$$
$$v(t) \in [0, 1], \qquad t \in [0,T] \ \text{a.e.}$$
The goal is to determine an optimal vaccination strategy in order to minimize the above cost function on a fixed time T.
Using the following notation, we can write this problem in abstract formulation:
z = S I R a n d Ψ ( z , v , t ) = β ( 1 v ) S I β ( 1 v ) S I γ I γ I ,
f ( w , z ) = w z , Θ ( z , v , t ) = g e n e r a l a n d g ( z , t ) = 0 .
z ˙ ( t ) = S ˙ ( t ) I ˙ ( t ) R ˙ ( t ) = Ψ ( z ( t ) , v ( t ) , t ) = β ( 1 v ( t ) ) S ( t ) I ( t ) β ( 1 v ( t ) ) S ( t ) I ( t ) γ I ( t ) γ I ( t ) , z ( 0 ) = z 0
In order to apply Theorem 2, we compute the adjoint equation and use Pontryagin's maximum principle to find the optimal control. In fact,
$$\dot\eta(t) = -\Psi_z^*(z(t), v(t), t)\, \eta(t) = \begin{pmatrix} \beta (1 - v(t)) I(t) & -\beta (1 - v(t)) I(t) & 0 \\ \beta (1 - v(t)) S(t) & -\beta (1 - v(t)) S(t) + \gamma & -\gamma \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \eta_1(t) \\ \eta_2(t) \\ \eta_3(t) \end{pmatrix} = \begin{pmatrix} \beta (1 - v(t)) I(t)\,\big(\eta_1(t) - \eta_2(t)\big) \\ \beta (1 - v(t)) S(t)\,\big(\eta_1(t) - \eta_2(t)\big) + \gamma\,\big(\eta_2(t) - \eta_3(t)\big) \\ 0 \end{pmatrix}, \qquad \eta(T) = -\begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$
Note that, since there is no condition on $z(T)$, we have $a = 0$. For simplicity in the calculations, we also set $\varrho_0 = 1$. Now, we compute
$$-\Psi_v^*(z(t), v(t), t)\, \eta(t) = \begin{pmatrix} -\beta S(t) I(t), & \beta S(t) I(t), & 0 \end{pmatrix} \begin{pmatrix} \eta_1(t) \\ \eta_2(t) \\ \eta_3(t) \end{pmatrix} = \beta S(t) I(t)\,\big( \eta_2(t) - \eta_1(t) \big),$$
and $\Theta_v(z(t), v(t), t)$ remains generic. Then, for all $v \in [0, 1]$, we obtain
$$\big\langle \beta S(t) I(t)\,\big( \eta_2(t) - \eta_1(t) \big) + \Theta_v(z(t), v(t), t), \; v - v(t) \big\rangle \ge 0, \qquad t \in [0,T] \ \text{a.e.}$$
This is equivalent to
$$\max_{v \in [0, 1]} \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - \Theta_v(z(t), v(t), t) \Big)\, v = \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - \Theta_v(z(t), v(t), t) \Big)\, v(t).$$
From here, we can specify different functions $\Theta(z, v, t)$ that allow us to determine the form of the optimal control. For example:
(i) If $\Theta(z, v, t) = C v$, then the optimal control can be computed from the maximum principle:
$$\max_{v \in [0, 1]} \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - C \Big)\, v = \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - C \Big)\, v(t).$$
Then, the optimal control is given by
$$v(t) = \begin{cases} 0, & \text{if } \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - C < 0, \\ 1, & \text{if } \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - C \ge 0. \end{cases}$$
(ii) If $\Theta(z, v, t) = \tfrac{1}{2} v^2$, then the optimal control can be computed from the maximum principle:
$$\max_{v \in [0, 1]} \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - v(t) \Big)\, v = \Big( \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) - v(t) \Big)\, v(t).$$
Then, the optimal control is given by
$$v(t) = \begin{cases} 0, & \text{if } \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) < 0, \\ \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big), & \text{if } 0 \le \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) \le 1, \\ 1, & \text{if } \beta S(t) I(t)\,\big( \eta_1(t) - \eta_2(t) \big) > 1. \end{cases}$$
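For completeness, the two switching rules translate directly into code. The helper below is our own illustration (the function names and the vectorized evaluation along a numerically computed trajectory are assumptions, not part of the paper); it simply evaluates the two formulas above.

import numpy as np

def control_linear_cost(beta, S, I, eta1, eta2, C):
    # Case (i), Theta = C*v: bang-bang rule, v = 1 when beta*S*I*(eta1 - eta2) - C >= 0.
    switching = beta * S * I * (eta1 - eta2) - C
    return np.where(switching < 0.0, 0.0, 1.0)

def control_quadratic_cost(beta, S, I, eta1, eta2):
    # Case (ii), Theta = v**2/2: clamp the unconstrained value beta*S*I*(eta1 - eta2) to [0, 1].
    return np.clip(beta * S * I * (eta1 - eta2), 0.0, 1.0)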

7. Open Problem

In this section, we present an open problem that offers a promising avenue for future research and could even serve as the basis for a doctoral thesis. This problem addresses an optimal control scenario that simultaneously incorporates impulses and constraints on a state variable. Our main objective is to analyze the following optimal control problem, which we plan to explore in future work:
Problem 3.
$$f(z(T)) + \int_0^T \Theta(z(t), v(t), t)\,dt \;\to\; \operatorname{loc\,min},$$
$$(z, v) \in E := \mathrm{PW}([0,T]; \mathbb{R}^n) \times L_\infty^r[0,T],$$
$$\dot z(t) = \Psi(z(t), v(t), t), \qquad z(0) = z_0,$$
$$G_i(z(T)) = 0, \qquad i = 1, 2, \dots, q \le n,$$
$$z(t_k^{+}) = z(t_k) + J_k(z(t_k)), \qquad k = 1, 2, 3, \dots, p,$$
$$v(t) \in V, \qquad t \in [0,T] \ \text{a.e.},$$
$$g(z(t), t, \alpha) \le 0 \qquad (\alpha \in A, \ t \in [0,T]).$$

8. Conclusions and Final Remarks

In this article, we analyze an optimal control problem characterized by a mixed cost function, composed of a terminal cost evaluated at the final state and an integral term involving the state and control variables. In addition to standard constraints, we incorporate general constraints for both the state and control variables, significantly expanding the applicability of our results. To address this problem, we derive a maximum principle where the adjoint equation is an integral equation, the right-hand side of which consists of the sum of a Riemann integral and a Stieltjes integral with respect to a Borel measure. One of the key contributions of this work was the effective application of the Dubovitskii–Milyutin theory to obtain the necessary conditions for optimality under mixed constraints. This method allowed us to handle equality and inequality constraints in a unified framework, confirming its robustness in addressing complex optimal control problems. Our theoretical findings were illustrated through practical examples, demonstrating the effectiveness of our approach in real-life scenarios.

The results presented in this study open several avenues for future research. First, while our approach provides a solid foundation for solving constrained optimal control problems, further exploration is required in systems governed by nonlinear dynamics, where additional difficulties may arise due to nonconvexity in the constraint set. A natural extension would involve relaxing some of the smoothness assumptions and incorporating state-dependent constraints, which are commonly found in applications such as economics, engineering, and epidemiology.

Furthermore, computational methods for solving constrained optimal control problems remain an active area of research. Our findings suggest that the integration of modern numerical optimization techniques, such as machine learning-based approaches and neural networks, could improve the efficiency of control strategies, especially in high-dimensional systems. The interaction between traditional control theory and data-driven methods is expected to generate new insights and improved solution techniques.

Beyond theoretical aspects, practical applications of our results can be envisioned in various fields. For example, in robotics, mixed-constraint optimal control is crucial for trajectory planning and motion optimization in autonomous systems. Similarly, in aerospace engineering, state-constrained optimal control is critical for mission planning and vehicle guidance under physical and operational constraints. Furthermore, epidemiological models, such as the SIR-based examples considered here, can greatly benefit from advanced optimal control strategies to design effective intervention policies for disease mitigation.

Finally, future work could also focus on sensitivity analysis of optimal solutions with respect to perturbations in system parameters. Since many real-world control problems involve uncertainty, incorporating robust and stochastic optimal control techniques would further enhance the practical applicability of our approach. In conclusion, our study provides a rigorous theoretical framework for mixed-constraint optimal control problems and demonstrates the effectiveness of the Dubovitskii–Milyutin approach in obtaining optimality conditions.
By extending these results to nonlinear systems, integrating modern computational methods, and exploring practical applications in engineering and biomedical sciences, we anticipate significant advances in both the theory and application of constrained optimal control.

Author Contributions

H.L.: Writing—original draft, review, and editing, research, formal analysis, and conceptualization. G.T.-R.: Writing—review and editing, research, and formal analysis. J.P.R.-L.: Writing—review and editing, research, and formal analysis. C.D.: Writing—review and editing, research, and formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cesari, L. Optimization—Theory and Applications. Problems with Ordinary Differential Equations; Springer-Verlag: Berlin/Heidelberg, Germany, 1983. [Google Scholar]
  2. Bryson, A.E.; Ho, Y.C. Applied Optimal Control. Optimization, Estimation and Control; Routledge: London, UK, 1975. [Google Scholar]
  3. Lenhart, S.; Workman, J.T. Optimal Control Applied to Biological Models; Chapman and Hall/CRC: New York, NY, USA, 2007. [Google Scholar]
  4. Clarke, F.H. Optimization and Nonsmooth Analysis. In Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
  5. Dubovitskii, A.Y.; Milyutin, A.A. Extremum problems in the presence of restrictions. Comput. Math. Math. Phys. 1965, 5, 1–80. [Google Scholar] [CrossRef]
  6. Frankowska, H. Optimal Control Under State Constraints. In Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), Hyderabad, India, 19–27 August 2010; pp. 2915–2942. [Google Scholar]
  7. Vinter, R. Convex duality and nonlinear optimal control. SIAM J. Control Optim. 1993, 31, 518–538. [Google Scholar] [CrossRef]
  8. Middelberg, A.; Zhang, J.; Xia, X. An optimal control model for load shifting—With application in the energy management of a colliery. Appl. Energy 2009, 86, 1266–1273. [Google Scholar] [CrossRef]
  9. Betts, J.T. Survey of Numerical Methods for Trajectory Optimization. J. Guid. Control. Dyn. 1998, 21, 193. [Google Scholar] [CrossRef]
  10. Yong, J.; Zhou, X. Stochastic Controls. Hamiltonian Systems and HJB Equations; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  11. Smirnova, A.; Ye, X. On optimal control at the onset of a new viral outbreak. Infect. Dis. Model. 2024, 9, 995–1006. [Google Scholar] [CrossRef] [PubMed]
  12. Behncke, H. Optimal control of deterministic epidemics. Optim. Control Appl. Meth. 2000, 21, 269–285. [Google Scholar] [CrossRef]
  13. Sharomi, O.; Malik, T. Optimal control in epidemiology. Ann. Oper. Res. 2017, 251, 55–71. [Google Scholar] [CrossRef]
  14. Bonalli, R.; Herisse, B.; Trela, E. Optimal Control of Endoatmospheric Launch Vehicle Systems: Geometric and Computational Issues. IEEE Trans. Automat. Control 2020, 65, 2418–2433. [Google Scholar] [CrossRef]
  15. Lee, H.; Kim, K.; Kim, N.; Cha, S.W. Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning. Appl. Energy 2022, 313, 118460. [Google Scholar] [CrossRef]
  16. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  17. Poggio, T.; Girosi, F. Networks for Approximation and Learning. Proc. IEEE 1990, 78, 1481–1497. [Google Scholar] [CrossRef]
  18. Leiva, H. Pontryagin’s maximum principle for optimal control problems governed by nonlinear impulsive differential equations. J. Math. Appl. 2023, 46, 15–68. [Google Scholar]
  19. Leiva, H.; Cabada, D.; Gallo, R. Roughness of the controllability for time varying systems under the influence of impulses, delay, and non-local conditions. Nonauton. Dyn. Syst. 2020, 7, 126–139. [Google Scholar] [CrossRef]
  20. Girsanov, I.V. Lectures on Mathematical Theory of Extremum Problems. In Lecture Notes in Economics and Mathematical Systems; Beckmann, M., Goos, G., Künzi, H.P., Eds.; Springer-Verlag: Berlin/Heidelberg, Germany, 1972. [Google Scholar]
  21. Cabada, D.; Garcia, K.; Guevara, C.; Leiva, H. Controllability of time varying semilinear non-instantaneous impulsive systems with delay and nonlocal conditions. Arch. Control Sci. 2022, 32, 335–357. [Google Scholar]
  22. Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley: New York, NY, USA, 1967. [Google Scholar]
  23. Leiva, H. Controllability of semilinear impulsive nonautonomous systems. Int. J. Control. 2014, 88, 585–592. [Google Scholar] [CrossRef]
  24. Kolmogorov, A.N.; Fomin, S.V. Elementos de la teoría de funciones y de análisis funcional; Editorial Mir: Moscow, Russia, 1975. [Google Scholar]
  25. Lalvay, S.; Padilla-Segarra, A.; Zouhair, W. On the existence and uniqueness of solutions for non-autonomous semi-linear systems with non-instantaneous impulses, delay, and non-local conditions. Miskolc Math. Notes 2022, 23, 295–310. [Google Scholar] [CrossRef]
  26. Trélat, E. Contrôle optimal: Théorie et applications, 2nd ed.; De Boeck Sup, Vuibert: Paris, France, 2008. [Google Scholar]
