Article

An Optimal Control Problem by a Hybrid System of Hyperbolic and Ordinary Differential Equations

by Alexander Arguchintsev 1,* and Vasilisa Poplevko 2
1
Institute of Mathematics, Irkutsk State University, K. Marx Street 1, 664003 Irkutsk, Russia
2
Office of PhD Programs, Irkutsk State University, K. Marx Street 1, 664003 Irkutsk, Russia
*
Author to whom correspondence should be addressed.
Games 2021, 12(1), 23; https://doi.org/10.3390/g12010023
Submission received: 30 November 2020 / Revised: 23 February 2021 / Accepted: 1 March 2021 / Published: 3 March 2021
(This article belongs to the Special Issue Optimal Control Theory)

Abstract

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined from controlled bilinear ordinary differential equations. These ordinary differential equations are linear in the state functions, with coefficients depending on the controls. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Because of the bilinear ordinary differential equations, such problems are normally treated with general-purpose optimal control methods. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classic exact increment formulas for the cost functional. This treatment allows us to apply a number of efficient optimal control methods to the problem. An example illustrates the approach.

1. Introduction

This paper studies a special class of optimal control problems for linear systems of first-order hyperbolic equations. In these problems, the function on the right-hand side of the hyperbolic system is determined from controlled systems of ordinary differential equations (ODEs). Such problems arise in the simulation of some processes of chemical technology [1]. They may also be used for solving inverse problems in models of population dynamics [2].
We consider the case of a linear hyperbolic system coupled with a linear controlled ODE system in which the coefficients of the phase variables depend on the controls. Because of the latter circumstance, the Pontryagin maximum principle is not a sufficient optimality condition. On the other hand, an attempt to apply general optimization approaches [3] leads to infinite iterative processes; moreover, at each iteration step it is then necessary to repeatedly integrate a system of hyperbolic equations. An alternative is to apply the idea of non-classic exact formulas for the increment of the cost functional. The authors used this idea in [4] for a special type of hyperbolic system with two orthogonal characteristics and controlled boundary conditions.
In this paper, two symmetric variants of the non-classic exact formulas for the increment of a linear cost functional are suggested. The classical method of increments proposed by Rozonoer [5] yields, in the considered problem, a necessary optimality condition similar to the Pontryagin maximum principle. Its proof essentially uses the local character of the increment formula: one estimates the residuals in terms of the quantity characterizing the smallness of the measure of the domain of the needle variation of a control. Below, for two particular cases of the problem under consideration, we obtain non-standard, exact (without residuals) increment formulas. The use of these formulas allows us to reduce the original problem to an optimal control problem for a system of ODEs. Note that, under this approach, the hyperbolic system is integrated only twice: at the beginning of the process (after an initial admissible control has been selected) and at the end of the computational process. We present a simple example to illustrate the proposed approach.

2. Problem Statement

We consider an optimization problem for a system of first-order hyperbolic equations with a linear right-hand side:
$$x_t + A(s,t)\,x_s = \Phi(s,t)\,x + \bar f(s,t) + C(t)\,y(t),\quad (s,t)\in\Pi,\qquad \Pi = S\times T,\; S=[s_0,s_1],\; T=[t_0,t_1]. \tag{1}$$
Here, $x=x(s,t)$ and $y=y(t)$ are $n$- and $m$-dimensional vector functions, respectively. Generally speaking, $x(s,t)$ is piecewise continuous, and each component $x_i=x_i(s,t)$ is continuously differentiable along any characteristic of the $i$-th family of characteristics of system (1); $y(t)$ is absolutely continuous (the concept of generalized solutions is discussed below). $A(s,t)$ is an $n\times n$ diagonal matrix; $C(t)$ and $\Phi(s,t)$ are matrix functions of dimensions $n\times m$ and $n\times n$, respectively; and $\bar f(s,t)$ is an $n$-dimensional vector function. In addition, we assume that the diagonal elements $a_i(s,t)$, $i=1,2,\dots,n$, of the matrix $A$ are continuous and continuously differentiable in $\Pi$ and do not change sign in the rectangle $\Pi$: $a_i(s,t)>0$, $i=1,2,\dots,m_1$; $a_i(s,t)=0$, $i=m_1+1,m_1+2,\dots,m_2$; $a_i(s,t)<0$, $i=m_2+1,m_2+2,\dots,n$. Let $A^+(s,t)$ and $A^-(s,t)$ be the diagonal submatrices of $A$ of orders $m_1$ and $(n-m_2)$, respectively, composed of its positive and negative elements. Accordingly, we consider the two subvectors $x^+=(x_1,x_2,\dots,x_{m_1})$ and $x^-=(x_{m_2+1},x_{m_2+2},\dots,x_n)$ corresponding to the positive and negative diagonal elements of $A$.
Initial and boundary conditions take the form
$$x(s,t_0)=x^0(s),\; s\in S;\qquad x^+(s_0,t)=\eta(t),\quad x^-(s_1,t)=\mu(t),\; t\in T, \tag{2}$$
where the functions $x^0(s)$, $\mu(t)$, and $\eta(t)$ are continuous in their arguments on the sets $S$ and $T$, respectively.
We assume that the function $y(t)$ on the right-hand side of system (1) is determined by a controlled system of ordinary differential equations,
$$\frac{dy}{dt}=B(u(t),t)\,y(t)+d(u(t),t),\quad t\in T,\qquad y(t_0)=y^0. \tag{3}$$
Here, $B(u(t),t)$ is an $m\times m$ matrix function, and $d(u(t),t)$ is an $m$-dimensional vector function. In addition, we assume that the matrices $\Phi(s,t)$, $B(u,t)$, and $C(t)$ and the vectors $\bar f(s,t)$, $d(u(t),t)$ are continuous functions of their arguments everywhere in the corresponding domains.
The admissible controls are bounded, measurable vector functions $u(t)$ defined on the interval $T$ and satisfying, almost everywhere, an inclusion-type constraint,
$$u(t)\in U,\quad t\in T, \tag{4}$$
where $U$ is a compact set in the Euclidean space $E^r$, and $r$ is a natural number.
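As an aside, for a fixed admissible control, system (3) is an ordinary linear-in-state system and can be integrated by any standard scheme. The following sketch uses the classical fourth-order Runge–Kutta method; the particular matrices $B$, $d$ and the constant control are illustrative assumptions (they coincide with the data of the example in Section 5), not part of the general statement.

```python
import numpy as np

def rk4_bilinear(B, d, u, y0, t0, t1, steps=1000):
    """Integrate dy/dt = B(u(t), t) y + d(u(t), t) with classical RK4."""
    y = np.array(y0, dtype=float)
    h = (t1 - t0) / steps
    t = t0
    f = lambda tt, yy: B(u(tt), tt) @ yy + d(u(tt), tt)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Illustrative data (an assumption; these B, d match the example of Section 5):
B = lambda u, t: np.array([[0.0, 6.0 * u], [0.0, 0.0]])
d = lambda u, t: np.array([0.0, u])
u = lambda t: 1.0   # a constant admissible control, u(t) in [-1, 1]

y_final = rk4_bilinear(B, d, u, y0=[0.0, 0.0], t0=0.0, t1=1.0)
# For u = 1 the exact solution is y2(t) = t, y1(t) = 3 t^2, so y(1) = (3, 1).
```

Since the system is linear in the state, the dynamics are well posed for any bounded measurable control; a piecewise-constant $u$ can be plugged in directly in place of the constant one above.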
The goal of the optimization problem is to minimize the linear functional
$$J(u)=\int_S\langle\alpha(s),x(s,t_1)\rangle\,ds+\iint_\Pi\langle b_0(s,t),x(s,t)\rangle\,ds\,dt\to\min. \tag{5}$$
Here, the vectors $\alpha(s)$ and $b_0(s,t)$ are continuous functions of their arguments everywhere in the corresponding domains, and $\langle\cdot,\cdot\rangle$ denotes the scalar product in the Euclidean space of the corresponding dimension. The above assumptions are not sufficient for the existence and uniqueness of a solution of (1)–(5). However, our aim is not to solve this problem but to reduce it to an optimal control problem for a system of ODEs.
The above assumptions are also insufficient for the existence of a classical (i.e., continuously differentiable) solution of problem (1)–(3), and we use the notion of a generalized solution [6] (pp. 63–69, 90–94).
This is a solution $x=x(s,t)$, piecewise continuous in $\Pi$, whose components $x_i=x_i(s,t)$ are all continuously differentiable along any characteristic of the $i$-th family of characteristics of system (1).
We introduce the characteristics $s^{(i)}=s^{(i)}(\xi,\tau;t)$, defined by the ordinary differential equations
$$\frac{ds}{dt}=a_i(s,t),\quad i=1,2,\dots,n;\qquad s(\tau)=\xi.$$
Thus, we understand a generalized solution of the boundary-value problem (1)–(3) as a function that satisfies the system of integral equations
$$x_i(s,t)=x_i\big(\xi^{(i)}(s,t),\tau^{(i)}(s,t)\big)+\int_{\tau^{(i)}(s,t)}^{t} f_i\big(x(\xi,\tau),y(\tau),\xi,\tau\big)\Big|_{\xi=s^{(i)}(s,t;\tau)}\,d\tau, \tag{6}$$
$$f_i(x,y,s,t)=\big(\Phi(s,t)\,x+\bar f(s,t)+C(t)\,y\big)_i,\qquad (s,t)\in\Pi,\; i=1,2,\dots,n,$$
where the integration is performed along the characteristics of the initial hyperbolic system (1). Here, $(\xi^{(i)}(s,t),\tau^{(i)}(s,t))$ is the initial point of the $i$-th characteristic passing through the point $(s,t)$.
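For a scalar equation with a constant characteristic speed and a right-hand side independent of $x$, the integral equation above can be evaluated directly: locate the foot of the characteristic through $(s,t)$ (on the initial line or on the lateral boundary), take the known value there, and add the quadrature of the right-hand side along the characteristic. The following sketch does this for a hypothetical example with $a(s,t)\equiv 1$ on $\Pi=[0,1]\times[0,1]$, zero boundary value at $s=0$, and $f\equiv 1$; all of this data is an illustrative assumption.

```python
import math

def solve_along_characteristic(s, t, x0, boundary, f, a=1.0, quad_steps=200):
    """Generalized solution of x_t + a x_s = f(s, t) via the integral
    equation along the characteristic s(tau) = s - a * (t - tau)."""
    # Foot of the characteristic: the initial line t = 0 or the boundary s = 0.
    tau0 = max(0.0, t - s / a)
    start = x0(s - a * t) if tau0 == 0.0 else boundary(tau0)
    # Trapezoidal quadrature of f along the characteristic.
    h = (t - tau0) / quad_steps
    total = 0.0
    for k in range(quad_steps):
        ta, tb = tau0 + k * h, tau0 + (k + 1) * h
        total += 0.5 * h * (f(s - a * (t - ta), ta) + f(s - a * (t - tb), tb))
    return start + total

# Illustrative data: x(s, 0) = sin(pi s), x(0, t) = 0, f = 1.
x0 = lambda s: math.sin(math.pi * s)
x = solve_along_characteristic(0.8, 0.5, x0, boundary=lambda t: 0.0,
                               f=lambda s, t: 1.0)
# Exact solution for s >= t: x(s, t) = sin(pi (s - t)) + t.
```

The same foot-of-characteristic logic generalizes componentwise to the diagonal system (1); when $\Phi\neq 0$, Equation (6) becomes implicit in $x$ and is solved by iteration instead of direct quadrature.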
Instead of the left-hand side of system (1), we consider the differential operator
$$\frac{dx}{dt}\bigg|_A=\left(\frac{dx_1}{dt}\bigg|_A,\frac{dx_2}{dt}\bigg|_A,\dots,\frac{dx_n}{dt}\bigg|_A\right),$$
where $dx_i/dt|_A$ is the derivative of the $i$-th component of the state vector along the corresponding family of characteristic curves.
In principle, we could assume that the initial conditions are consistent with the boundary conditions in (2):
$$(x^0(s_0))^+=\eta(t_0),\qquad (x^0(s_1))^-=\mu(t_0).$$
In this case, a solution of (1)–(3) will be continuous, but not necessarily differentiable everywhere. For differentiability, consistency conditions of higher order must be assumed to hold [7] (pp. 70–72). However, these conditions depend on the right-hand sides of the hyperbolic system; in our case, the right-hand sides depend on the control, so additional restrictions on the control function would be necessary. We also consider below adjoint differential systems for which such conditions are not satisfied.
Note that generalized solutions of hyperbolic balance laws [8,9] and the concept of entropy are well suited for proving existence and uniqueness results in the corresponding functional spaces. However, the approach based on the characteristics of hyperbolic systems is better suited for optimal control methods, convergence estimates, and so forth.

3. The Cost-Functional Increment Formulas

Although Equation (1) is linear with respect to $x$, the Pontryagin maximum principle is not a sufficient optimality condition. This is explained by the dependence of the coefficient matrix in (3) on the controls. Let us construct a non-classic exact increment formula for the functional (5).
Consider two admissible controls $u$ and $\tilde u=u+\Delta u$. Let $x,y$ be the corresponding solutions of (1)–(3) for $u$, and let $\tilde x=x+\Delta x$, $\tilde y=y+\Delta y$ be the corresponding solutions for $\tilde u$. Then, the increments satisfy
$$\frac{d\Delta x}{dt}\bigg|_A=\Phi(s,t)\,\Delta x+C(t)\,\Delta y,\qquad \Delta x(s,t_0)=\Delta x^+(s_0,t)=\Delta x^-(s_1,t)=0;$$
$$\frac{d\Delta y}{dt}=B(\tilde u(t),t)\,\tilde y(t)-B(u(t),t)\,y(t)+\Delta d(u(t),t),\qquad \Delta y(t_0)=0. \tag{7}$$
Here, $\Delta d(u(t),t)=d(\tilde u(t),t)-d(u(t),t)$. We transform the right-hand side of (7) using the representation
$$B(\tilde u(t),t)\,\tilde y-B(u(t),t)\,y=\Delta B(u(t),t)\,\tilde y+B(u(t),t)\,\Delta y,$$
where $\Delta B(u(t),t)=B(\tilde u(t),t)-B(u(t),t)$. Then, the increment of (5) is written as
$$\begin{aligned}\Delta J(u)=J(\tilde u)-J(u)=&\int_S\langle\alpha(s),\Delta x(s,t_1)\rangle\,ds+\iint_\Pi\langle b_0(s,t),\Delta x(s,t)\rangle\,ds\,dt\\&+\iint_\Pi\Big\langle\psi(s,t),\frac{d\Delta x}{dt}\Big|_A-\Phi(s,t)\,\Delta x-C\,\Delta y\Big\rangle\,ds\,dt\\&+\int_T\Big\langle p(t),\frac{d\Delta y}{dt}-\Delta B(u(t),t)\,\tilde y-B(u(t),t)\,\Delta y-\Delta d(u(t),t)\Big\rangle\,dt.\end{aligned}$$
The functions $\psi=\psi(s,t)$ and $p=p(t)$ are, so far, arbitrary vector functions with the same smoothness properties as $x(s,t)$ and $y(t)$, respectively.
It is obvious that
$$\iint_\Pi\Big\langle\psi(s,t),\frac{d\Delta x}{dt}\Big|_A-\Phi(s,t)\,\Delta x-C\,\Delta y\Big\rangle\,ds\,dt=0,$$
$$\int_T\Big\langle p(t),\frac{d\Delta y}{dt}-\Delta B(u(t),t)\,\tilde y-B(u(t),t)\,\Delta y-\Delta d(u(t),t)\Big\rangle\,dt=0$$
for arbitrary functions $\psi(s,t)$ and $p(t)$.
By applying the ordinary and generalized formulas of integration by parts [3] (Chapter 8, Section 3) to the terms
$$\int_T\Big\langle p(t),\frac{d\Delta y}{dt}\Big\rangle\,dt,\qquad \iint_\Pi\Big\langle\psi(s,t),\frac{d\Delta x}{dt}\Big|_A\Big\rangle\,ds\,dt,$$
respectively, we obtain
$$\begin{aligned}\Delta J(u)=&\int_S\langle\alpha(s),\Delta x(s,t_1)\rangle\,ds+\iint_\Pi\langle b_0(s,t),\Delta x(s,t)\rangle\,ds\,dt\\&+\int_S\big(\langle\psi(s,t_1),\Delta x(s,t_1)\rangle-\langle\psi(s,t_0),\Delta x(s,t_0)\rangle\big)\,ds\\&-\int_T\big(\langle\psi(s_0,t),A(s_0,t)\,\Delta x(s_0,t)\rangle-\langle\psi(s_1,t),A(s_1,t)\,\Delta x(s_1,t)\rangle\big)\,dt\\&-\iint_\Pi\Big\langle\frac{d\psi}{dt}\Big|_A+\frac{\partial A(s,t)}{\partial s}\,\psi+\Phi^T\psi,\ \Delta x\Big\rangle\,ds\,dt-\iint_\Pi\langle\psi(s,t),C\,\Delta y\rangle\,ds\,dt\\&+\langle p(t_1),\Delta y(t_1)\rangle-\int_T\langle\dot p(t),\Delta y(t)\rangle\,dt\\&-\int_T\langle p(t),\Delta B(u(t),t)\,\tilde y+B(u(t),t)\,\Delta y+\Delta d(u,t)\rangle\,dt.\end{aligned}$$
In the last integral, we will not transform the integrand to express it in terms of the unperturbed solution $y=y(t,u)$ (the device traditionally used in the proof of the pointwise maximum principle). Instead, we require that the functions $\psi(s,t)$ and $p(t)$ be solutions of the following adjoint problem:
$$\frac{d\psi}{dt}\bigg|_A+\frac{\partial A(s,t)}{\partial s}\,\psi=-\Phi^T\psi+b_0(s,t),\quad (s,t)\in\Pi;\qquad \psi(s,t_1)=-\alpha(s),\; s\in S;\quad \psi^-(s_0,t)=0,\;\ \psi^+(s_1,t)=0,\; t\in T; \tag{8}$$
$$\dot p=-B^T(u,t)\,p(t)-\int_{s_0}^{s_1}C^T\psi(s,t)\,ds,\qquad p(t_1)=0. \tag{9}$$
The boundary-value problem (8) has the same structure as (1) and (2). The cost functional (5) does not contain terms depending on the unfixed components $(x(s_0,t))^-$ and $(x(s_1,t))^+$; therefore, the corresponding boundary conditions in (8) are equal to zero.
A solution $\psi(s,t)$ of (8) is piecewise continuous in $\Pi$, and all its components $\psi_i=\psi_i(s,t)$ are continuously differentiable along any characteristic of the $i$-th family of characteristics of the hyperbolic system in (8); $p(t)$ is absolutely continuous.
Finally, we obtain the following formula for the cost-functional increment:
$$\Delta J(u)=-\int_T\langle p(t,u),\ \Delta B(u(t),t)\,\tilde y+\Delta d(u,t)\rangle\,dt. \tag{10}$$
The second increment formula is symmetric to this one. It may be obtained by using the representation
$$B(\tilde u(t),t)\,\tilde y-B(u(t),t)\,y=\Delta B(u(t),t)\,y+B(\tilde u,t)\,\Delta y.$$
Then,
$$\Delta J(u)=-\int_T\langle p(t,\tilde u),\ \Delta B(u(t),t)\,y+\Delta d(u,t)\rangle\,dt. \tag{11}$$
Since the admissible controls $u(t)$ and $\tilde u(t)$ are arbitrary, Formula (11) can be obtained from (10) by the formal replacement of $u$ by $\tilde u$ and $\tilde u$ by $u$.
Note that the basic difference between Formulas (10) and (11) and the classical increment formula is that (10) and (11) are exact; that is, they contain no remainder terms. However, in this case, system (9) (or system (3)) is integrated for perturbed controls.

4. Variational Maximum Principle

Increment Formulas (10) and (11) make it possible to reduce problem (1)–(5) to an optimal control problem for a system of ordinary differential equations. We first consider a scheme based on Formula (10).
We introduce the following optimal control problem:
$$I(v)=-\int_T\big\langle p(t,u),\ \big(B(v(t),t)-B(u(t),t)\big)z(t,v)+d(v(t),t)-d(u(t),t)\big\rangle\,dt\to\min, \tag{12}$$
$$\dot z=B(v(t),t)\,z+d(v(t),t),\quad t\in T,\qquad z(t_0)=y^0,\qquad v(t)\in U.$$
Here, $u(t)$, $p(t,u)$, $d=d(u,t)$, and $B=B(u,t)$ are fixed functions; $z(t)$ is an $m$-dimensional state function; and $v(t)$ is a control function satisfying the same constraints as the control in the original optimal control problem (1)–(5).
By analogy with (12), the increment formula (11) allows us to reduce the original problem (1)–(5) to a linear optimal control problem for an ODE system:
$$I(v)=-\int_T\big\langle p(t,v),\ \big(B(v(t),t)-B(u(t),t)\big)y(t,u)+d(v(t),t)-d(u(t),t)\big\rangle\,dt\to\min, \tag{13}$$
$$\dot p=-B^T(v(t),t)\,p(t)-\int_{s_0}^{s_1}C^T\psi(s,t)\,ds,\quad t\in T,\qquad p(t_1)=0,\qquad v(t)\in U.$$
Here, $u(t)$, $d=d(u,t)$, and $B=B(u,t)$ are fixed functions; $p(t)$ is the state function; and $v(t)$ is a control function satisfying the same constraints as the control in the original optimal control problem.
These results enable us to formulate a variational maximum principle for linear optimization problems governed by hyperbolic systems.
Theorem 1.
A control $u^*(t)$ is optimal in problem (1)–(5) if and only if the function $v^*=u^*(t)$ is optimal in problem (12), or equivalently in problem (13), for any fixed admissible $u(t)$.
It follows from (10) and (11) that the optimal value of the functional in the original problem is given by
$$J(v^*)=J(u)+I(v^*). \tag{14}$$
Thus, the following solution scheme based on Formula (10) can be proposed.
  • An arbitrary admissible control $u=u(t)$ is specified. Then, we calculate the solution $p=p(t,u)$ of the adjoint problem (9) corresponding to this control. Note that we also need to calculate a solution of problem (8), which does not depend on the admissible process.
  • We solve the auxiliary optimal control problem (12) for the system of ordinary differential equations. Its solution is a solution of problem (1)–(5).
The solution scheme based on Formula (11) is as follows.
  • We choose an arbitrary admissible control $u=u(t)$. Then, we calculate $y(t,u)$ and $\psi(s,t)$.
  • We solve the auxiliary optimal control problem (13). Its solution is a solution of problem (1)–(5).
Note that the auxiliary problems (12) and (13) belong to a well-studied class and can be solved by many efficient methods developed for optimal control problems for systems of ordinary differential equations [3,10,11,12,13,14].
The system of partial differential equations needs to be integrated only twice, to calculate $x=x(s,t,u)$, $\psi=\psi(s,t)$, and, if needed, $\tilde x=x(s,t,\tilde u)$; in addition, one solves the optimal control problem for the ODE system, which is linear in the state variables. Note that solving problem (1)–(5) by the iterative processes of the classical maximum principle requires repeated integration of the hyperbolic system (1) at each iteration step.

5. Illustrative Example

To illustrate the main constructions of our approach, we consider a simple example.
In $\Pi=[0,1]\times[0,1]$, we consider the optimal control problem
$$J(u)=\int_0^1 x(s,1)\,ds\to\min,$$
$$x_t+x_s=y_1+2y_2,\qquad x(s,0)=0,\quad x(0,t)=0,$$
$$\dot y_1=6y_2u,\quad y_1(0)=0,\qquad \dot y_2=u,\quad y_2(0)=0,$$
$$u(t)\in[-1,1],\quad t\in[0,1].$$
Note that the functions $y_1,y_2$ are absolutely continuous on $T$. The characteristics of the hyperbolic equation, defined as solutions of the ODE $\frac{ds}{dt}=1$, have the form $s=t+\mathrm{const}$. Strictly speaking, the function $x(s,t)$ is a solution of the corresponding integral equation constructed on the characteristics of the hyperbolic equation,
$$x(s,t)=\int_0^t f\big(y_1(\tau),y_2(\tau)\big)\,d\tau,$$
where $f(y_1(\cdot),y_2(\cdot))=y_1+2y_2$. The function $x(s,t)$ is continuously differentiable along any characteristic of the family of characteristics of the hyperbolic equation.
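As a quick sanity check, the cost functional of this example can be evaluated directly for a particular control. The sketch below takes $u\equiv 1$ (so that $y_1=3t^2$, $y_2=t$ in closed form, an easily verified assumption) and accumulates $x(s,1)$ along each characteristic; for a characteristic entering through the lateral boundary $s=0$, where $x=0$, the integration is started at $\tau=t-s$.

```python
import numpy as np

def trapezoid(vals, xs):
    """Composite trapezoidal rule."""
    vals, xs = np.asarray(vals, dtype=float), np.asarray(xs, dtype=float)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(xs)) / 2.0)

# For u = 1: y1(t) = 3 t^2 and y2(t) = t, so the right-hand side along a
# characteristic is f(tau) = y1 + 2 y2 = 3 tau^2 + 2 tau.
f = lambda tau: 3.0 * tau**2 + 2.0 * tau

def x_at_final_time(s, quad_steps=400):
    """x(s, 1) via the integral along the characteristic through (s, 1);
    the characteristic enters through s = 0 (where x = 0) at tau = 1 - s."""
    tau0 = max(0.0, 1.0 - s)
    taus = np.linspace(tau0, 1.0, quad_steps + 1)
    return trapezoid(f(taus), taus)

ss = np.linspace(0.0, 1.0, 401)
J = trapezoid([x_at_final_time(s) for s in ss], ss)
# Analytically, x(s, 1) = 2 - (1 - s)^3 - (1 - s)^2, so J = 2 - 1/4 - 1/3 = 17/12.
```

The computed value agrees with the closed-form $J(u)=17/12$ to quadrature accuracy.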
The adjoint problem (8), (9) takes the form
$$\psi_t+\psi_s=0,\qquad \psi(s,1)=-1,\quad \psi(1,t)=0,$$
$$\dot p_1=-\int_0^1\psi\,ds,\quad p_1(1)=0,\qquad \dot p_2=-6p_1u-\int_0^1 2\psi\,ds,\quad p_2(1)=0.$$
Here, the functions $\psi(s,t)$ and $p(t)=(p_1(t),p_2(t))$ belong to the same classes of functions as $x(s,t)$ and $y(t)=(y_1(t),y_2(t))$, respectively.
Let the initial control be $u^0(t)=0$, $t\in[0,1]$. The solution of the adjoint problem corresponding to this control is
$$\psi(s,t)=\begin{cases}0,&t<s,\\-1,&t\ge s,\end{cases}\qquad p_1(t)=\frac{t^2}{2}-\frac12,\qquad p_2(t)=t^2-1.$$
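These closed forms can be checked by integrating the adjoint ODEs (9) backward from $t=1$. Since $\psi=-1$ exactly on $0\le s\le t$ and $0$ elsewhere, $\int_0^1\psi(s,t)\,ds=-t$; with $u^0\equiv 0$ this gives $\dot p_1=t$ and $\dot p_2=2t$. The following sketch (a plain RK4 integrator, written here for illustration) reproduces $p_1(0)=-1/2$ and $p_2(0)=-1$.

```python
def rk4_backward(f, p_end, t_end, t_start, steps=1000):
    """Integrate dp/dt = f(t, p) backward from t_end to t_start with RK4."""
    p = list(p_end)
    h = (t_start - t_end) / steps   # negative step size
    t = t_end
    for _ in range(steps):
        k1 = f(t, p)
        k2 = f(t + h / 2, [pi + h / 2 * ki for pi, ki in zip(p, k1)])
        k3 = f(t + h / 2, [pi + h / 2 * ki for pi, ki in zip(p, k2)])
        k4 = f(t + h, [pi + h * ki for pi, ki in zip(p, k3)])
        p = [pi + h / 6 * (a + 2 * b + 2 * c + e)
             for pi, a, b, c, e in zip(p, k1, k2, k3, k4)]
        t += h
    return p

# Adjoint ODEs (9) for u0 = 0: int_0^1 psi ds = -t, hence
#   p1' = t   and   p2' = -6 p1 * 0 + 2 t = 2 t.
rhs = lambda t, p: [t, 2.0 * t]
p_at_0 = rk4_backward(rhs, p_end=[0.0, 0.0], t_end=1.0, t_start=0.0)
# The closed forms give p1(0) = -1/2 and p2(0) = -1.
```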
Here,
$$B(u,t)=\begin{pmatrix}0&6u\\0&0\end{pmatrix},\qquad d(u(t),t)=(0,u)^T.$$
Then, the original problem is reduced to the following optimal control problem for a system of ODEs:
$$I(v)=\int_0^1(1-t^2)(3y_2+1)\,v(t)\,dt\to\min,$$
$$\dot y_1=6y_2v,\quad y_1(0)=0,\qquad \dot y_2=v,\quad y_2(0)=0,\qquad v(t)\in[-1,1],\quad t\in[0,1].$$
For solving the reduced optimal control problem, one can use a wide range of modern optimal control methods (see, for example, the reviews [10,11,12]).
This problem is not a linear optimal control problem, because the cost functional and the first differential equation contain terms of the type $y_2v$. In the authors' opinion, special methods for bilinear problems [13,14] will be efficient for such problems.
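Even a crude discretization shows that the reduced problem improves on the initial control. The sketch below (an illustrative brute-force search, not one of the cited specialized methods) restricts $v$ to piecewise-constant values in $\{-1,0,1\}$ on six subintervals and evaluates $I(v)$ by midpoint quadrature, using the signs of the reduced functional as written above; it finds a control with $I(v)<0$, so $u^0=0$ is not optimal, and by (14) the corresponding $J$ is smaller.

```python
from itertools import product

def cost_I(v_pieces, substeps=20):
    """I(v) for a piecewise-constant control v on a uniform grid of [0, 1]:
    y2' = v, y2(0) = 0, I(v) = int_0^1 (1 - t^2)(3 y2 + 1) v dt.
    (y1 does not enter the functional, so it is not propagated.)"""
    n = len(v_pieces)
    dt = 1.0 / (n * substeps)
    y2, I, t = 0.0, 0.0, 0.0
    for v in v_pieces:
        for _ in range(substeps):
            tm, ym = t + dt / 2, y2 + v * dt / 2   # midpoint values
            I += (1.0 - tm**2) * (3.0 * ym + 1.0) * v * dt
            y2 += v * dt
            t += dt
    return I

# Exhaustive search over piecewise-constant controls with values in {-1, 0, 1}
# on 6 subintervals (3^6 = 729 candidates).
best = min(product((-1.0, 0.0, 1.0), repeat=6), key=cost_I)
# cost_I of the zero control (the initial u0) is 0; the search finds a
# control with I < 0, so u0 = 0 is not optimal.
```

A finer control grid, or the bilinear-problem methods of [13,14], would sharpen this bound; the point here is only that the reduced ODE problem is cheap to explore, with no further integration of the hyperbolic equation.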

6. Conclusions

We considered an optimal control problem for a hybrid system of hyperbolic and ordinary differential equations. The proposed approach allows us to reduce this problem to a classic optimal control problem for a system of ordinary differential equations. It is interesting that the proved necessary and sufficient optimality condition (Theorem 1) holds for any fixed admissible function $u(t)$. This is due to the fact that the increment formulas were proved for an arbitrary pair of admissible controls $u(t)$ and $\tilde u(t)$; we did not use any local control variations.
Our further goal is to extend this approach to the case of quadratic cost functionals. In that case, we have to use second-order non-classic exact increment formulas. The authors hope that singular control problems may also be considered on the basis of this approach.

Author Contributions

Conceptualization, A.A.; methodology, A.A.; formal analysis, A.A. and V.P.; investigation, A.A. and V.P.; example, V.P.; writing—original draft preparation, A.A. and V.P.; writing—review and editing, A.A. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding

The reported study was funded by RFBR and the Government of the Irkutsk Region, project number 20-41-385002, and by RFBR, project number 20-07-00407.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ODE: ordinary differential equation

References

  1. Demidenko, N. Optimal control of thermal-engineering processes in tube furnaces. Chem. Petrol. Eng. 2006, 42, 128–130. [Google Scholar] [CrossRef]
  2. Petukhov, A. Modeling of threshold effects in social systems based on nonlinear dynamics. Cybern. Phys. 2019, 8, 277–287. [Google Scholar] [CrossRef]
  3. Vasiliev, O. Optimization Methods; World Federation Publishers Company Inc.: Atlanta, GA, USA, 1996. [Google Scholar]
  4. Arguchintsev, A.V.; Poplevko, V.P. Optimal control of initial conditions in canonical hyperbolic system of the first-order based on non-standard increment formulas. Russ. Math. 2008, 52, 1–7. [Google Scholar] [CrossRef]
  5. Rozonoer, L.I. L.S. Pontryagin's maximum principle in the theory of optimum systems. Part I. Autom. Remote Contr. 1959, 20, 1288–1302. [Google Scholar]
  6. Rozhdestvenskiyi, B.L.; Yanenko, N.N. Systems of Quasilinear Equations and Their Applications to Gas Dynamics; Nauka: Moscow, Russia, 1968. [Google Scholar]
  7. Godunov, S.K. Equations of Mathematical Physics; Nauka: Moscow, Russia, 1979. [Google Scholar]
  8. LeVeque, R. Finite Volume Methods for Hyperbolic Problems (Cambridge Texts in Applied Mathematics); Cambridge University Press: Cambridge, UK, 2002. [Google Scholar] [CrossRef]
  9. Dafermos, C.M. Hyperbolic Conservation Laws in Continuum Physics (Grundlehren der Mathematischen Wissenschaften), 4th ed.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 325. [Google Scholar]
  10. Rao, A.V. A survey of numerical methods for optimal control. Adv. Astron. Sci. 2009, 135, 1–32. [Google Scholar]
  11. Golfetto, W.A.; Silva Fernandes, S. A review of gradient algorithms for numerical computation of optimal trajectories. J. Aerosp. Technol. Manag. 2012, 4, 131–143. [Google Scholar] [CrossRef] [Green Version]
  12. Biral, F.; Bertolazzi, E.; Bosetti, P. Notes on numerical methods for solving optimal control problems. IEEJ J. Ind. Appl. 2016, 5, 154–166. [Google Scholar] [CrossRef] [Green Version]
  13. Srochko, V.A.; Antonik, V.G. Optimality conditions for extremal controls in bilinear and quadratic problems. Russ. Math. 2016, 60, 75–80. [Google Scholar] [CrossRef]
  14. Srochko, V.A.; Aksenyushkina, E.V. Parameterization of some linear systems control problems. Bull. Irkutsk. State Univ. Ser. Math. 2019, 30, 83–98. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
