An Optimal Control Problem by a Hybrid System of Hyperbolic and Ordinary Differential Equations

Abstract: This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations with a function on the right-hand side determined from controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Normally, general optimal control methods are used for these problems because of the bilinear ordinary differential equations. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classic exact increment formulas for the cost-functional. This treatment allows one to use a number of efficient optimal control methods for the problem. An example illustrates the approach.


Introduction
This paper studies a special class of optimal control problems for linear systems of first-order hyperbolic equations. In these problems, the function on the right-hand side of the hyperbolic system is determined from controlled systems of ordinary differential equations (ODEs). Such problems arise in the simulation of some processes of chemical technology [1]. They may also be used for solving inverse problems in models of population dynamics [2].
We consider a case of a linear hyperbolic system and a linear controlled ODE system with the coefficients of the phase variables depending on controls. Because of the latter circumstance, the Pontryagin maximum principle is not a sufficient optimality condition. On the other hand, an attempt to apply general optimization approaches [3] leads to endless iterative processes. Moreover, at each iteration step, it is required in this case to repeatedly integrate a system of hyperbolic equations. Instead, it is possible to apply the idea of non-classic exact formulas for the increment of the cost-functional. The authors used this idea in [4] for a special type of hyperbolic system with two orthogonal characteristics and controlled boundary conditions.
In this paper, two symmetric variants of the non-classic exact formulas for the increment of a linear cost-functional are suggested. The classical method of increments proposed by Rozonoer [5] allows one to obtain, in the considered problem, a necessary optimality condition similar to the Pontryagin maximum principle. The proof essentially uses the local character of the increment formula. Namely, one estimates the residuals in terms of the value that characterizes the smallness of the measure of the domain of the needle variation of a control. Below, for two particular cases of the problem under consideration, we obtain non-standard, exact (without residuals) increment formulas. The use of these formulas allows one to reduce the original problem to an optimal control problem for a system of ODEs. Note that, under this approach, the hyperbolic system is integrated only twice: at the beginning of the process (after an initial admissible control has been selected) and at the end of the computational process. We present a simple example to illustrate the proposed approach.

Problem Statement
We consider an optimization problem for a system of first-order hyperbolic equations with a linear right-hand side: Here, x = x(s, t), y = y(t) are n- and m-dimensional vector functions, respectively; generally speaking, the functions x(s, t) are piecewise continuous. All components x_i = x_i(s, t) are continuously differentiable along any characteristic of the i-th family of characteristics of system (1); y(t) are absolutely continuous functions; the concept of a generalized solution will be discussed below; A(s, t) is an n × n diagonal matrix; C(t) and Φ(s, t) are n × m and n × n matrix functions, respectively; and f(s, t) is an n-dimensional vector function. In addition, we assume that the diagonal elements a_i(s, t), i = 1, 2, . . . , n, of the matrix A are continuous and continuously differentiable in Π and do not change sign in the rectangle Π: a_i(s, t) > 0, i = 1, 2, . . . , m_1; a_i(s, t) = 0, i = m_1 + 1, m_1 + 2, . . . , m_2; a_i(s, t) < 0, i = m_2 + 1, m_2 + 2, . . . , n. Let A^+(s, t) and A^−(s, t) be diagonal submatrices of A of orders m_1 and (n − m_2), respectively, composed of its positive and negative elements. Accordingly, we consider the two subvectors x^+ = (x_1, x_2, . . . , x_{m_1}) and x^− = (x_{m_2+1}, x_{m_2+2}, . . . , x_n), corresponding to the positive and negative diagonal elements of A.
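For orientation, a plausible form of system (1), inferred from the stated dimensions of A(s, t), Φ(s, t), C(t), and f(s, t) (an assumption, not the authors' exact display):

```latex
x_t(s,t) + A(s,t)\, x_s(s,t) = \Phi(s,t)\, x(s,t) + C(t)\, y(t) + f(s,t),
\qquad (s,t) \in \Pi = S \times T. \tag{1}
```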
Initial and boundary conditions take the form where the functions x_0(s), µ(t), and η(t) are continuous with respect to their arguments on the sets S and T, respectively. We assume that the function y(t) in the right-hand side of system (1) is determined by a controlled system of ordinary differential equations. Here, B(u(t), t) is an m × m matrix function, and d(u(t), t) is an m-dimensional vector function. In addition, we assume that the matrices Φ(s, t), B(u, t), and C(t) and the vectors f(s, t), d(u(t), t) are continuous functions of their arguments everywhere in the corresponding domains.
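Inferred shapes of the initial-boundary conditions (2) and the controlled ODE system (3), again an assumption based on the definitions above:

```latex
x(s,t_0) = x_0(s),\ s \in S; \qquad
x^{+}(s_0,t) = \mu(t),\quad x^{-}(s_1,t) = \eta(t),\ t \in T; \tag{2}
```

```latex
\dot y(t) = B(u(t),t)\, y(t) + d(u(t),t), \qquad y(t_0) = y_0,\ t \in T. \tag{3}
```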
The admissible controls are assumed to be bounded, measurable vector functions u(t) determined on the interval T and satisfying, almost everywhere, an inclusion-type constraint, where U is a compact set belonging to the Euclidean space E^r, and r is a natural number. We assume that the goal of the optimization problem is to minimize a linear functional. Here, the vectors α(s) and b_0(s, t) are continuous functions of their arguments everywhere in the corresponding domains, and ⟨·, ·⟩ denotes the scalar product in the Euclidean space of the corresponding dimension. The above assumptions are not sufficient for the existence and uniqueness of a solution of (1)-(5). However, our aim is not to solve this problem but to reduce it to an optimal control problem for a system of ODEs.
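A plausible reconstruction of the constraint (4) and the linear cost-functional (5), based only on the definitions of U, α(s), and b_0(s, t) given here (the exact displays are assumptions):

```latex
u(t) \in U \subset E^{r} \quad \text{a.e. on } T, \tag{4}
```

```latex
J(u) = \int_{S} \langle \alpha(s),\, x(s,t_1) \rangle\, ds
     + \iint_{\Pi} \langle b_0(s,t),\, x(s,t) \rangle\, ds\, dt \;\to\; \min. \tag{5}
```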
This is a solution x = x(s, t), piecewise continuous in Π, for which all components x_i = x_i(s, t) are continuously differentiable along any characteristic of the i-th family of characteristics of system (1).
We introduce the notion of characteristics s^(i) = s^(i)(ξ, τ; t), defined by ordinary differential equations. Thus, we understand a generalized solution of the boundary-value problem (1)-(3) as a function that satisfies a system of integral equations in which the integration is performed along characteristics of the initial hyperbolic system (1). Here, (ξ^(i)(s, t), τ^(i)(s, t)) is the initial point of the i-th characteristic that goes through the point (s, t).
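Each characteristic curve s^(i)(ξ, τ; t) solves the scalar ODE ds/dt = a_i(s, t) with the initial condition s(τ) = ξ. A minimal numerical sketch of tracing such a curve (the function name and the sample speed functions are our own illustration, not from the paper):

```python
def trace_characteristic(xi, tau, t_end, a, n_steps=100):
    """RK4 integration of ds/dt = a(s, t), s(tau) = xi:
    the characteristic through (xi, tau), followed up to t = t_end."""
    h = (t_end - tau) / n_steps
    s, t = xi, tau
    for _ in range(n_steps):
        k1 = a(s, t)
        k2 = a(s + 0.5 * h * k1, t + 0.5 * h)
        k3 = a(s + 0.5 * h * k2, t + 0.5 * h)
        k4 = a(s + h * k3, t + h)
        s += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return s

# For constant speed a ≡ 1 (as in the illustrative example later in the
# paper), the characteristics are the straight lines s = xi + (t - tau).
s_end = trace_characteristic(0.0, 0.0, 1.0, lambda s, t: 1.0)
```

Along each such curve, the corresponding component x_i of the generalized solution satisfies the integral equation mentioned above.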
Instead of the left side of system (1), we consider the differential operator where (dx_i/dt)_A is the derivative of the i-th component of the state vector along the corresponding family of characteristic curves. In principle, we could assume that the initial conditions are consistent with the boundary conditions in (2). In this case, a solution of (1)-(3) will be continuous, but not necessarily differentiable everywhere. For differentiability, it is necessary to assume the fulfillment of consistency conditions of higher order [7] (pp. 70-72). However, these conditions depend on the right-hand sides of the hyperbolic systems. In our case, these right-hand sides depend on the control, and it would be necessary to introduce additional restrictions on the control function. We also consider further adjoint differential systems for which such conditions are not satisfied.
Note that generalized solutions of hyperbolic balance laws [8,9] and the concept of entropy are well-suited to proving existence and uniqueness results in the corresponding functional spaces. However, the approach based on characteristics of hyperbolic systems is better suited for optimal control methods, convergence estimates, and so forth.

The Cost-Functional Increment Formulas
Although Equation (1) is linear with respect to x, the Pontryagin maximum principle is not a sufficient optimality condition. This is explained by the dependence of the matrix of coefficients in (3) on the controls. Let us construct a non-classical exact increment formula for (5).
It is obvious that for arbitrary functions ψ(s, t) and p(t). By applying the ordinary and generalized formulas of integration by parts [3] (Chapter 8, Section 3) to the following terms, respectively. In the last integral, we will not transform the integrand to that in terms of the unperturbed solution y = y(t, u) (the expedient that is traditionally used in the proof of the pointwise maximum principle). Instead, we require that the functions ψ(s, t) and p(t) be solutions of the following adjoint problem: The boundary value problem (8) has the same structure as (1) and (2). The cost-functional (5) does not contain addends depending on the unfixed components (x(s_0, t))^− and (x(s_1, t))^+. Therefore, the corresponding boundary conditions in (8) are equal to zero.
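The adjoint pair (8)-(9) cannot be recovered exactly from the text; one standard shape for such an adjoint system is sketched below, purely for orientation. The signs, the terminal condition, and the coupling term are assumptions depending on the integration-by-parts conventions used in (6)-(7); only the zero boundary conditions are stated explicitly in the text.

```latex
-\psi_t - \bigl(A^{\top}(s,t)\,\psi\bigr)_s = \Phi^{\top}(s,t)\,\psi + b_0(s,t),
\qquad \psi(s,t_1) = \alpha(s),\quad
\psi^{-}(s_0,t) = 0,\ \ \psi^{+}(s_1,t) = 0;
```

```latex
-\dot p(t) = B^{\top}(u(t),t)\, p(t) + \int_{S} C^{\top}(t)\, \psi(s,t)\, ds .
```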
The function ψ(s, t) is a solution of (8), piecewise continuous in Π, for which all components ψ_i = ψ_i(s, t) are continuously differentiable along any characteristic of the i-th family of characteristics of the hyperbolic system in (8); p(t) is an absolutely continuous function.
Finally, we obtain the following formula for the cost-functional increment: The second increment formula is symmetric to this one. It may be obtained by using the representation B(ũ(t), t)ỹ − B(u(t), t)y = ∆B(u(t), t)y + B(ũ(t), t)∆y.

Then,
Since the admissible controls u(t) and ũ(t) are arbitrary, Formula (11) can be obtained from (10) by the formal replacement of u by ũ and ũ by u.
Note that the basic difference of Formulas (10) and (11) from the classical increment formula is that (10) and (11) are exact; that is, they do not have remainder terms. However, in this case, system (9) (or system (3)) is integrated for perturbed controls.

Variational Maximum Principle
Increment Formulas (10) and (11) make it possible to reduce the problem (1)-(5) to an optimal control problem for a system of ordinary differential equations. We consider a scheme based on Formula (10).
We introduce the following optimal control problem. Here, u(t), p(t, u), d = d(u, t), and B = B(u, t) are fixed functions; z(t) is an m-dimensional state function; and v(t) is a control function satisfying the same constraints as the control function in the optimal control problem (1)-(5).
By analogy to (12), the increment formula (11) allows us to reduce the original problem (1)-(5) to a linear optimal control problem for the ODE system: Here, u(t), d = d(u, t) and B = B(u, t) are fixed functions; p(t) is a state function, and v(t) is a control function satisfying the same constraints as the control function in the original optimal control problem.
These results enable us to formulate a variational maximum principle for the linear optimization problem governed by hyperbolic systems.
Theorem 1. A control u*(t) is optimal in the problem (1)-(5) if and only if the function v* = u*(t) is optimal in problem (12) or, equivalently, in problem (13) for any fixed admissible u(t).
It follows from (10) and (11) that the optimal value of the functional in the original problem is given by J(v*) = J(u) + I(v*).
Thus, the following solution scheme based on Formula (10) can be proposed.
1. An arbitrary admissible control u = u(t) is specified. Then, we calculate the solution p = p(t, u) of the adjoint problem (9) corresponding to this control. Note that we also need to calculate a solution of problem (8), which does not depend on the admissible process.
2. The auxiliary optimal control problem (12) for the system of ordinary differential equations must be solved. Its solution will be a solution of problem (1)-(5).
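The second step of the scheme requires solving an auxiliary ODE optimal control problem that is linear in the state but bilinear in state and control. As a purely illustrative sketch, consider a toy scalar instance with hypothetical dynamics z' = v·z + v, control set U = {−1, 1}, and terminal cost z(1) (none of these specifics come from the paper); discretizing time and enumerating piecewise-constant controls shows the shape of the reduced problem:

```python
from itertools import product

def simulate(vs, z0=1.0, T=1.0):
    """Euler integration of the toy bilinear ODE z' = v*z + v
    for a piecewise-constant control sequence vs on [0, T]."""
    h = T / len(vs)
    z = z0
    for v in vs:
        z += h * (v * z + v)
    return z  # terminal cost z(T)

def brute_force(U=(-1.0, 1.0), n=8):
    """Enumerate all piecewise-constant controls with values in U
    over n time intervals and keep the one with the smallest cost."""
    best = None
    for vs in product(U, repeat=n):
        cost = simulate(vs)
        if best is None or cost < best[0]:
            best = (cost, vs)
    return best

best_cost, best_vs = brute_force()
```

For realistic instances, the enumeration would of course be replaced by the efficient ODE optimal control methods cited below; the sketch only illustrates that, after the reduction, no hyperbolic system appears in the inner optimization loop.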
The solution scheme based on Formula (11) is as follows.

1. We choose an arbitrary admissible control u = u(t). Then, we calculate y(t, u) and ψ(s, t).
2. The auxiliary optimal control problem (13) must be solved. Its solution will be a solution of problem (1)-(5).
Note that the auxiliary problems (12) and (13) belong to a well-studied class and can be solved by applying many efficient methods developed for solving optimal control problems for systems of ordinary differential equations [3,10-14].
It is required to solve the system of partial differential equations only twice, to calculate x = x(s, t, u), ψ = ψ(s, t) and, if needed, x̃ = x(s, t, ũ), and then to solve the optimal control problem for the system of ODEs, which is linear in the state variables. Note that the solution of problem (1)-(5) by means of the iterative processes of the classical maximum principle requires repeated integration of the hyperbolic system (1) at each iteration step.

Illustrative Example
To illustrate the main constructions of our approach, we consider a simple example. In Π = [0, 1] × [0, 1], we consider an optimal control problem. Note that the functions y_1, y_2 are absolutely continuous in T. The characteristics of the hyperbolic equation, which are defined as solutions of the ODE ds/dt = 1, have the form s = t + const. Strictly speaking, the function x(s, t) is a solution of a corresponding integral equation, which is constructed on the characteristics of the hyperbolic equation, where f(y_1(·), y_2(·)) = y_1 + 2y_2. The function x(s, t) is continuously differentiable along any characteristic of the family of characteristics of the hyperbolic equation.
Let the initial control be u_0(t) = 0, t ∈ [0, 1]. The solution of the adjoint problem corresponding to this control is computed explicitly. Then, the original problem is reduced to an optimal control problem for a system of ODEs. For solving the reduced optimal control problem, one can use a wide set of modern optimal control methods (for example, see the reviews [10-12]).
This problem is not a linear optimal control problem because the cost-functional and the first differential equation contain terms of the form y_2 v. In the authors' opinion, special methods for bilinear problems will be efficient for similar problems [13,14].

Conclusions
We considered an optimal control problem for a hybrid system of hyperbolic and ordinary differential equations. The proposed approach allows one to reduce this problem to a classical optimal control problem for a system of ordinary differential equations. It is interesting that the proved necessary and sufficient optimality condition (Theorem 1) is satisfied for any fixed admissible function u(t). This is due to the fact that the increment formulas have been proved for an arbitrary pair of admissible controls, u(t) and ũ(t). We did not use any local control variations.
Our further goal is to extend this approach to the case of quadratic cost-functionals. In this case, we have to use second-order, non-classic, exact increment formulas. The authors hope that singular control problems may also be considered on the basis of this approach.
Author Contributions: Conceptualization, A.A.; methodology, A.A.; formal analysis, A.A. and V.P.; investigation, A.A. and V.P.; example, V.P.; writing-original draft preparation, A.A. and V.P.; writing-review and editing, A.A. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding:
The reported study was funded by RFBR and the Government of the Irkutsk Region, project number 20-41-385002, and by RFBR, project number 20-07-00407.