Optimal Control Theory for a System of Partial Differential Equations Associated with Stratified Fluids

Abstract: In this paper, we investigate the existence of an optimal solution of a functional constrained by a system of non-linear partial differential equations that governs the dynamics of viscous and incompressible stratified fluids in R^3. Additionally, we use the first derivative of the functional to establish a necessary optimality condition for the optimal solution.


Introduction
Following the results of the modern calculus of variations, in this article we study the optimal solution of an energy functional constrained by a system of partial differential equations that models the dynamics of an exponentially stratified fluid in three-dimensional space. To do this, we investigate the existence of solutions of a non-homogeneous and non-linear partial differential system, extending the result obtained in [1], where only a potential external force was considered. More specifically, let Ω ⊂ R^3 be a non-empty, open, connected, and bounded set whose boundary ∂Ω is smooth enough (at least Lipschitz continuous), let Σ := ∂Ω × (0, T) denote the lateral boundary, and let ν be the outward unit normal on ∂Ω. We define Q := Ω × (0, T) as the domain of our model, where the motion of the fluid takes place. Here, T > 0, (0, T) is the time interval, and t ∈ (0, T) is the temporal variable.
We are interested in establishing the existence of the solution, in a weaker sense, for the following non-linear problem:

$$
\begin{cases}
\dfrac{\partial y_1}{\partial t} - \mu \Delta y_1 + y\cdot\nabla y_1 + \dfrac{\partial p}{\partial x_1} = u_1,\\[4pt]
\dfrac{\partial y_2}{\partial t} - \mu \Delta y_2 + y\cdot\nabla y_2 + \dfrac{\partial p}{\partial x_2} = u_2,\\[4pt]
\dfrac{\partial y_3}{\partial t} - \mu \Delta y_3 + g\rho + y\cdot\nabla y_3 + \dfrac{\partial p}{\partial x_3} = u_3,\\[4pt]
\dfrac{\partial \rho}{\partial t} - \dfrac{N^2}{g}\, y_3 = u_4,\\[4pt]
\nabla\cdot y = 0,
\end{cases}
\tag{1}
$$

where x = (x_1, x_2, x_3) denotes the spatial variable, y = y(x, t) = (y_1(x, t), y_2(x, t), y_3(x, t)) denotes the velocity field of the fluid, and u(x, t) = (u_1, u_2, u_3, u_4) corresponds to a known function from L^2(Ω). We also have the parameter µ > 0 as the kinematic viscosity, and N and g are positive constants. The last equation of the system holds because our fluid is incompressible; p denotes the scalar field of the dynamic pressure and ρ represents the dynamic density. For the ideal case, Equations (1) can be found in [2,3]. For a viscous compressible fluid, the system (1) is deduced, for example, in [4]. When we study optimal control problems, we start from a dynamical system that evolves in time over a period [t_0, t_f], described by a state equation for a specific variable y(t), called the state variable, with an initial condition y_0. The evolution of the system depends on a particular function u(t), called the control variable; the goal is to influence the evolution of y(t) so as to optimize (maximize or minimize) a given functional, depending on both the state and control variables, called the energy functional. For more on the terminology of optimal control theory, see [5,6].
The primary motivation of this paper is to minimize an energy functional of the form J(y, u), which depends on a control variable u and the velocity field y subject to a state equation that corresponds in our case, to a non-linear system of partial differential equations given in (1).
The functional that we are going to minimize is defined by

$$
J(y, u) = \frac{1}{2}\int_{Q}|y - y_d|^2\,dx\,dt + \frac{\lambda}{2}\int_{Q}|u - u_d|^2\,dx\,dt,
\tag{2}
$$

where y_d ∈ L^2(Ω)^4 is the desired state, u_d ∈ L^2(Ω)^4 is the desired control (also called the control change), and λ > 0 is a constant. From a mathematical perspective, most control systems involve a set of ordinary differential equations or linear partial differential equations in their restrictions; see, for example, [7]. In this case, we consider a non-linear model, which makes this proposal novel and attractive. On the other hand, there is some progress associated with Navier–Stokes systems [5,8]. However, little seems to be known about works that deal with non-linear exponentially stratified fluids, which makes our results an open door to considering new parameters such as salinity, rotation, and temperature in future works. This paper is organized into five sections. In Section 1, we introduce and describe the problem; in Section 2, we present the essential background needed to understand the problem. In Section 3, we introduce the weak formulation of the problem. In Section 4, we study the existence of solutions of the optimal problem, and finally, in Section 5, we establish the optimality condition.

Previous Definitions and Notations
Before starting the study and analysis of our optimal control problem, we introduce some preliminary elements and the notation necessary to understand the non-linear motion in the dynamics of viscous and incompressible stratified fluids in R^3 that will be considered in this paper.
Let Ω be a domain of the space R^3, and let p ∈ R with 1 ≤ p ≤ ∞. A function y : Ω → R (or C) is said to belong to L^p(Ω) if y is measurable and the norm

$$
\|y\|_{L^p(\Omega)} := \left(\int_{\Omega}|y(x)|^p\,dx\right)^{1/p}
$$

is finite (with the usual essential-supremum modification for p = ∞). The spaces L^p(Ω) are Banach spaces (see [9]). Furthermore, in the spaces L^p(Ω) the Hölder inequality holds: for y ∈ L^p(Ω) and v ∈ L^q(Ω) with 1/p + 1/q = 1,

$$
\int_{\Omega}|y\,v|\,dx \le \|y\|_{L^p(\Omega)}\,\|v\|_{L^q(\Omega)}.
$$

In particular, when p = 2, L^2(Ω) is a Hilbert space with the scalar product

$$
(y, v) := \int_{\Omega} y(x)\,v(x)\,dx.
$$

It is known that L^2(Ω) is one of the essential Hilbert spaces of mathematical analysis, since it appears very frequently in the study of partial differential equations, and it is the space where the kinetic energy is automatically well defined. As soon as the variational form of a mathematical physics problem appears, we encounter the Sobolev spaces, denoted by W^{k,p}(Ω) and defined as the set of all functions y ∈ L^p(Ω) whose generalized derivatives up to order k also belong to L^p(Ω). The norm defined on this space is given by

$$
\|y\|_{W^{k,p}(\Omega)} := \left(\sum_{|\alpha| \le k}\|D^{\alpha}y\|_{L^p(\Omega)}^{p}\right)^{1/p},
$$

where D^α y is the weak derivative of multi-index order α. We also find other types of Sobolev spaces, such as W_0^{k,p}(Ω). Note that when p = 2, we simply write H^k(Ω) and H_0^k(Ω) instead of W^{k,2}(Ω) and W_0^{k,2}(Ω), respectively (see, for example, [10]). Furthermore, recall that for k = 1 and p = 2, the space W^{1,2}(Ω) is better known as H^1(Ω), since it is a Hilbert space endowed with the scalar product

$$
(y, v)_{H^1(\Omega)} := \int_{\Omega} y\,v\,dx + \int_{\Omega}\nabla y\cdot\nabla v\,dx.
$$

The norm induced by this scalar product is ‖y‖_{H^1(Ω)} := (y, y)_{H^1(Ω)}^{1/2}. On the other hand, let us denote by D(Ω) the space of functions ϕ : Ω → R^3 of class C^∞(Ω) with compact support, and by D′(Ω) the space of distributions on Ω. Throughout this paper, we will use the standard notation for the Lebesgue and Sobolev spaces; in particular, the norm and the scalar product in L^2(Ω) will be represented by ‖·‖ and (·, ·), respectively. Let us also define the scalar products (u, v) := ∫_Ω u·v dx and ((u, v)) := ∫_Ω ∇u·∇v dx; the associated norms are given by |u|^2 := (u, u) and ‖u‖^2 := ((u, u)).
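Hölder's inequality above lends itself to a quick numerical sanity check. The following Python sketch is purely illustrative: the functions y, v and the exponents p, q are arbitrary choices (not taken from the paper), Ω is discretized as (0, 1), and both sides are compared with trapezoidal quadrature.

```python
import numpy as np

# Numerical check of Hölder's inequality on Omega = (0, 1):
# int |y v| <= ||y||_{L^p} ||v||_{L^q} with 1/p + 1/q = 1.
x = np.linspace(0.0, 1.0, 10_001)
y = np.exp(x)           # plays the role of y in L^p
v = np.cos(3.0 * x)     # plays the role of v in L^q
p, q = 3.0, 1.5         # conjugate exponents: 1/3 + 2/3 = 1

lhs = np.trapz(np.abs(y * v), x)
rhs = (np.trapz(np.abs(y) ** p, x) ** (1 / p)
       * np.trapz(np.abs(v) ** q, x) ** (1 / q))
assert lhs <= rhs  # Hölder's inequality holds for the discretized integrals
```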
Consider the following notation for the solenoidal spaces H and V, which intrinsically satisfy the condition ∇·y = 0 and which we can represent as

$$
H := \{\, y \in L^2(\Omega)^3 : \nabla\cdot y = 0,\ \gamma_n y = 0 \,\},\qquad
V := \{\, y \in H_0^1(\Omega)^3 : \nabla\cdot y = 0 \,\}.
$$

Here, ∇·y denotes the divergence of y and γ_n denotes the normal component of the trace operator, γ_n : y ↦ (n·y)|_{∂Ω}, where n denotes the external normal to the boundary.
These spaces are used very frequently in the equations of the dynamics of stratified fluids and can also be characterized as the closure of Θ in L^2(Ω)^3 and in H_0^1(Ω)^3, respectively, where

$$
\Theta := \{\, \varphi \in D(\Omega)^3 : \nabla\cdot\varphi = 0 \,\}.
$$

It is well known that H and V are Hilbert spaces with the scalar products (·, ·) and ((·, ·)), respectively. Furthermore, V ↪ H ↪ V′, where the injections are dense and continuous.
On the other hand, if V is a Banach space with dual space V′, then the duality pairing between V′ and V is denoted by ⟨·, ·⟩_{V′,V}, and the norm in V′ is denoted by ‖·‖_{V′}. We introduce the following space of functions y whose derivative y_t exists as an abstract function:

$$
W^{\alpha}([0, T]; V) := \{\, y \in L^2([0, T]; V) : y_t \in L^{\alpha}([0, T]; V') \,\}.
$$

This space is endowed with the norm

$$
\|y\|_{W^{\alpha}([0,T];V)} := \|y\|_{L^2([0,T];V)} + \|y_t\|_{L^{\alpha}([0,T];V')},
$$

with which it is a Banach space. When X is a Hilbert space, W^α([0, T]; X), defined analogously, is a Hilbert space.
In particular, the space W^α([0, T]; X) is endowed with the corresponding natural scalar product. In this way, we have embedding results for 1 ≤ α ≤ 2 (see [10–12]). Now we define our set of admissible controls, denoted by U_ad; its elements are called admissible controls and satisfy the inequality constraints of our non-linear system:

$$
U_{ad} := \{\, u \in L^2(Q_T)^4 : u_a \le u \le u_b \ \text{a.e. in } Q_T \,\},
\tag{3}
$$

where the control constraints u_a, u_b ∈ L^2(Q_T)^4 satisfy u_a ≤ u_b a.e. in Q_T.

Remark 1. Note that our set of admissible controls defined in (3) is a non-empty, convex, and closed subset of L^2(Q_T)^4.
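Since U_ad is defined by pointwise bounds, its L^2 projection acts pointwise; the toy discrete sketch below (constant bounds u_a, u_b and a sampled control, both illustrative assumptions) shows that the projection is just the clipping operator and verifies the characterizing inequality of the projection onto a convex closed set, as noted in the remark above.

```python
import numpy as np

# The admissible set U_ad = {u : u_a <= u <= u_b a.e.} is convex and closed, so
# its L^2 projection acts pointwise; for constant bounds it is exactly clipping
# (toy discrete analogue, illustrative only).
rng = np.random.default_rng(0)
u_a, u_b = -1.0, 2.0                    # assumed constant control bounds
u = rng.normal(scale=3.0, size=1000)    # an arbitrary (inadmissible) control

proj_u = np.clip(u, u_a, u_b)           # P_{U_ad}(u)

# Characterization of the projection onto a convex closed set:
# (u - P(u), v - P(u)) <= 0 for every admissible v.
v = rng.uniform(u_a, u_b, size=1000)
assert np.dot(u - proj_u, v - proj_u) <= 1e-10
assert u_a <= proj_u.min() and proj_u.max() <= u_b
```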
Now, let us recall the following classical result, which we will need later to show the existence of optimal controls.

Definition 1 ([6]). Let X be a Banach space and let J : X → R be a functional. We say that J is weakly lower semicontinuous if, for any sequence (x_n)_{n∈N} ⊂ X such that x_n ⇀ x as n → ∞, we have

$$
J(x) \le \liminf_{n\to\infty} J(x_n).
$$

2.1. Formulation of the optimal control problem associated with the non-linear model
In this part, we formulate our optimal control problem associated with the partial differential system (1). In order to study the existence of solutions of the non-linear system (1), we represent our model in a simpler way using the following notation: we write the state as y = (y_1, y_2, y_3, ρ) and gather the stratification terms in the linear operator

$$
M y := \left(0,\ 0,\ g\rho,\ -\frac{N^2}{g}\,y_3\right).
$$

Then, we can rewrite (1) in the more compact form

$$
\begin{cases}
\dfrac{\partial y}{\partial t} + (y\cdot\nabla)y - \mu\Delta y + My + \nabla p = u & \text{in } Q = \Omega\times(0,T),\\
\operatorname{div}(y) = 0 & \text{in } Q,\\
y = 0 & \text{on } \Sigma = \partial\Omega\times(0, T),\\
y(0, \cdot) = y_0 & \text{in } \Omega.
\end{cases}
\tag{4}
$$
In this way, we can introduce the energy functional that we want to minimize, which depends on the state and the control (y, u) and which we define by

$$
J(y, u) = \frac{1}{2}\int_{Q}|y - y_d|^2\,dx\,dt + \frac{\lambda}{2}\int_{Q}|u - u_d|^2\,dx\,dt,
$$

where y_d ∈ L^2(Ω)^4 is the desired state, u_d ∈ L^2(Ω)^4 is the desired control (also called the control change), and λ > 0 is a constant. Now, we introduce the functional space

$$
V := \{\, \varphi \in H_0^1(\Omega)^4 : \nabla\cdot\varphi = 0 \,\},
\tag{5}
$$

where ∇·φ denotes the divergence of (φ_1, φ_2, φ_3). The space V is endowed with the inner product and the usual norm of the space H_0^1(Ω)^4, the space of all functions y ∈ H^1(Ω)^4 with null trace. The space given by (5) will be of great importance to us, since through it we can find the functions y : [0, T] → V that are weak solutions of our non-linear problem (4). On the other hand, V is a closed subspace of H_0^1(Ω)^4 with the norm ‖·‖_{H_0^1(Ω)}; in this way, it is a Hilbert space, and it is reflexive and separable. In summary, we can state our optimal control problem:

$$
\min_{u \in U_{ad}} J(y, u),
\tag{6}
$$

subject to the state equations that establish the dependency between the state variable y and the control variable u:

$$
\begin{cases}
\dfrac{\partial y}{\partial t} + (y\cdot\nabla)y - \mu\Delta y + My + \nabla p = u & \text{in } Q,\\
\operatorname{div}(y) = 0 & \text{in } Q,\\
y = 0 & \text{on } \Sigma,\\
y(0, \cdot) = y_0 & \text{in } \Omega.
\end{cases}
\tag{7}
$$

Here, u ∈ L^2(Q_T) is the control, an external force that affects the fluid (for example, gravity); y_0 ∈ V is a divergence-free vector field in R^3; µ > 0 is the kinematic viscosity; and U_ad represents our set of constraints defined as in (3). The aim of our control problem is to find u ∈ U_ad ⊂ L^2(Q_T), with y the solution of (7) associated with u, that minimizes our energy functional (6).
In this paper, we will show the existence of solutions and use the first derivative of the energy functional to derive the necessary conditions that optimal solutions of (6) and (7) have to satisfy.

Weak Formulation for the Non-Linear Problem
In this section, whenever we refer to the space V, we mean the functional space defined by (5); we also denote by V′ its dual space, and (·, ·) and ‖·‖ will denote the scalar product and the usual norm in L^2(Ω), respectively.
We are interested in establishing theorems of existence and uniqueness of the solution for our non-linear problem given by (4), for which we first study the existence of a solution in a weaker sense.
First of all, suppose that there are y ∈ C^{2,1}(Ω × (0, T)) and ∇p ∈ C(Q_T) forming a classical solution of our non-linear system of partial differential equations (4). Let us derive the weak formulation of our non-linear problem. Suppose that y is a solution of (4). Multiplying the first equation of (4) by v ∈ V and integrating over Ω, we obtain

$$
\int_{\Omega}\frac{\partial y}{\partial t}\cdot v\,dx + \int_{\Omega}(y\cdot\nabla)y\cdot v\,dx - \mu\int_{\Omega}\Delta y\cdot v\,dx + \int_{\Omega} My\cdot v\,dx + \int_{\Omega}\nabla p\cdot v\,dx = \int_{\Omega} u\cdot v\,dx.
\tag{8}
$$

Applying Green's theorem to the third term of the previous equation, and keeping in mind the boundary condition y|_{∂Ω} = 0 and the fact that div(v) = 0 (so that the pressure term vanishes), we obtain

$$
\int_{\Omega}\frac{\partial y}{\partial t}\cdot v\,dx + \mu\int_{\Omega}\nabla y\cdot\nabla v\,dx + \int_{\Omega}(y\cdot\nabla)y\cdot v\,dx + \int_{\Omega} My\cdot v\,dx = \int_{\Omega} u\cdot v\,dx.
\tag{9}
$$

Now, (9) can be rewritten as

$$
\frac{d}{dt}(y, v) + \mu\int_{\Omega}\nabla y\cdot\nabla v\,dx + \int_{\Omega}(y\cdot\nabla)y\cdot v\,dx + (My, v) = (u, v).
\tag{10}
$$

Next, we introduce the following bilinear and trilinear forms for the weak formulation of (10):

$$
a(y, v) := \int_{\Omega}\nabla y\cdot\nabla v\,dx
\tag{11}
$$

and

$$
b(y, v, w) := \int_{\Omega}(y\cdot\nabla)v\cdot w\,dx.
\tag{12}
$$

Then, replacing the identities (11) and (12) in (10), we obtain

$$
\frac{d}{dt}(y, v) + \mu\, a(y, v) + b(y, y, v) + (My, v) = (u, v)
\quad\text{for all } v \in V.
\tag{13}
$$

Equation (13) suggests the following weak formulation for our non-linear system (4):

$$
\frac{d}{dt}(y(t), v) + \mu\, a(y(t), v) + b(y(t), y(t), v) + (My(t), v) = \langle u(t), v\rangle_{V',V}
\quad\text{for all } v \in V,
\tag{14}
$$

where the term

$$
b(y, y, v) = \int_{\Omega}(y\cdot\nabla)y\cdot v\,dx
\tag{15}
$$

corresponds to the non-linear term of our system (4). We call the expression (14) the variational (or weak) formulation of our non-linear system (4).
On the other hand, let us record some properties of the non-linear term defined in (15), which can be found in [8]: for every y, v, w ∈ V, we have

$$
b(y, v, w) = -b(y, w, v),
\tag{16}
$$

and in particular, b(y, v, v) = 0. In this way, we can introduce our definition of a weak solution based on (14), as we will see below.
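The antisymmetry of the trilinear form for divergence-free fields can also be checked numerically. The sketch below is a toy stand-in only: it works on the periodic square [0, 2π]^2 rather than a bounded Ω with Dirichlet data, builds a divergence-free y from a stream function, and evaluates b by quadrature (which is exact here, since the integrands are trigonometric polynomials fully resolved by the grid).

```python
import numpy as np

# Check b(y, v, w) = -b(y, w, v) and b(y, v, v) = 0 for a divergence-free y
# on the periodic square [0, 2*pi]^2 (illustrative stand-in for Omega).
n = 64
h = 2 * np.pi / n
x, yg = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")

# y = (psi_y, -psi_x) for the stream function psi = sin(x) cos(2 y): div y = 0.
y1 = -2.0 * np.sin(x) * np.sin(2 * yg)
y2 = -np.cos(x) * np.cos(2 * yg)

def b(v_x, v_y, w):
    """b(y, v, w) = integral of (y . grad v) w, via the (here exact) sum rule."""
    return h * h * np.sum((y1 * v_x + y2 * v_y) * w)

# Test functions v, w with analytic gradients (so no differentiation error).
v = np.cos(x + yg);           v_x = -np.sin(x + yg);            v_y = -np.sin(x + yg)
w = np.sin(2 * x) * np.sin(yg); w_x = 2 * np.cos(2 * x) * np.sin(yg); w_y = np.sin(2 * x) * np.cos(yg)

assert abs(b(v_x, v_y, w) + b(w_x, w_y, v)) < 1e-10  # antisymmetry (16)
assert abs(b(v_x, v_y, v)) < 1e-10                   # b(y, v, v) = 0
```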

Existence and uniqueness of weak solutions
Let us introduce the definition of a weak solution.
Definition 2. Given u ∈ L^2([0, T]; V′) and y_0 ∈ V, we say that y ∈ W^α([0, T]; V) is a weak solution of the problem (4) on the interval (0, T) if y(0) = y_0 and

$$
\frac{d}{dt}(y(t), v) + \mu\,a(y(t), v) + b(y(t), y(t), v) + (My(t), v) = \langle u(t), v\rangle_{V',V}
\quad\text{for all } v \in V,
\tag{17}
$$

in the sense of distributions on (0, T). For our purposes, we want to give an equivalent formulation as an equation in functional spaces. For this, we introduce the linear and continuous operator A : V → V′ defined by

$$
\langle Ay, v\rangle_{V',V} := a(y, v) = ((y, v)),
\tag{18}
$$

and we define the non-linear operator B : V → V′ by

$$
\langle B(y), v\rangle_{V',V} := b(y, y, v).
\tag{19}
$$

The operator B is a bounded mapping from V into V′. Now, with the above notation, we can establish an equivalent formulation of the definition (17) in terms of the following functional differential equation.
Equivalently, given u ∈ L^2([0, T]; V′) and y_0 ∈ V, a function y ∈ W^α([0, T]; V) is called a weak solution of the problem (4) on the interval (0, T) if it fulfills

$$
y_t + \mu A y + M y + B(y) = u \ \text{in } L^{\alpha}([0, T]; V'),\qquad y(0) = y_0.
$$

Note that ⟨Ay, y⟩ = ‖y‖^2 for all y ∈ V. Now, we can also equivalently formulate our control problem (6) and (7) using the operators defined in (18) and (19):

$$
\min_{u\in U_{ad}} J(y, u),
\tag{20}
$$

subject to the state equations

$$
y_t + \mu A y + M y + B(y) = u \ \text{in } L^{\alpha}([0, T]; V'),\qquad y(0) = y_0 \in V,
\tag{21}
$$

and the control constraints

$$
u \in U_{ad}.
\tag{22}
$$

In this part, we will see how we can reduce our energy functional (6). It is often convenient to work with the reduced functional, since it allows us to better study the existence of optimal values for our optimal control problem (6) and (7). We can rewrite our optimal control problem as an optimization problem in terms of u alone, as we will see below: we define the control-to-state mapping, denoted by Υ, which associates with an element u ∈ U_ad ⊂ L^2([0, T]; L^2(Ω)^4) the element y ∈ W^α([0, T]; V) that is the solution of (4).
The control-to-state mapping for the optimal control problem (6) and (7) is given by

$$
\Upsilon : U_{ad} \longrightarrow W^{\alpha}([0, T]; V),\qquad \Upsilon(u) := y_u,
$$

where y_u is the unique solution of (17) associated with u.
Remark 2.
1. Note that if we replace y = Υ(u) in our energy functional (6), then the functional J is expressed in terms of the control variable u alone; we denote the result by Ψ:

$$
\Psi(u) := J(\Upsilon(u), u),
\tag{24}
$$

and Ψ is minimized over the set U_ad. The term Ψ(u) will be called the reduced energy functional. In our context, Υ is the non-linear solution mapping associated with (21).
2. The minimization of J subject to the state Equation (21) is equivalent to minimizing Ψ over all admissible controls.
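As a toy illustration of the reduced functional Ψ(u) = J(Υ(u), u), one can replace the state equation by a scalar linear ODE. Everything below (the ODE y′ = −y + u with y(0) = 0, the data y_d, u_d, λ, and the explicit-Euler discretization) is an assumption made for illustration only, not the model of the paper.

```python
import numpy as np

# Toy reduced functional: the solution operator Upsilon is explicit Euler
# for y' = -y + u, y(0) = 0, and J is the quadratic tracking functional.
T, nt = 1.0, 1000
dt = T / nt
lam = 0.1
y_d = np.ones(nt)       # desired state (illustrative)
u_d = np.zeros(nt)      # desired control (illustrative)

def state(u):
    """Upsilon(u): explicit-Euler solution of y' = -y + u, y(0) = 0."""
    y = np.zeros(nt)
    for k in range(nt - 1):
        y[k + 1] = y[k] + dt * (-y[k] + u[k])
    return y

def Psi(u):
    """Reduced functional Psi(u) = J(Upsilon(u), u)."""
    y = state(u)
    return (0.5 * dt * np.sum((y - y_d) ** 2)
            + 0.5 * lam * dt * np.sum((u - u_d) ** 2))

# Minimizing Psi over u alone is equivalent to minimizing J over pairs (y, u):
assert Psi(np.ones(nt)) < Psi(np.zeros(nt))
```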
In order to make the proof of the following theorem, which is one of the main results of this work, easier to follow, we give it in several stages. The main idea is as follows. First, we use the Faedo–Galerkin method to construct approximate solutions of (4). Then, using some auxiliary estimates, we show the convergence of these approximations to a solution of the model (4).

Theorem 1. Given u ∈ L^2([0, T]; V′) and y_0 ∈ V, the problem (4) admits a unique weak solution y ∈ W^α([0, T]; V), with 1 ≤ α ≤ 2.

Proof.
Stage 1: Existence of the approximate solution.
Note that V, defined by (5), is a reflexive and separable Hilbert space. Then, by a classical result of functional analysis, there is a complete orthonormal system (z_i)_{i∈N} of V whose linear span is dense in V.
Let us consider the finite-dimensional subspace Z_m := span{z_1, z_2, ..., z_m}. Restricting to the space Z_m, we solve the system of equations obtained from (17): find y_m(t) = Σ_{i=1}^m α_{im}(t) z_i such that

$$
(y_m'(t), z_j) + \mu\, a(y_m(t), z_j) + b(y_m(t), y_m(t), z_j) + (My_m(t), z_j) = \langle u(t), z_j\rangle,\quad j = 1, \dots, m,
\qquad y_m(0) = y_{0m}.
\tag{26}
$$

Here, y_{0m} is the orthogonal projection of the initial datum y_0 ∈ V onto the subspace Z_m. We can observe that (26) is in fact a Cauchy initial value problem for a non-linear system of ordinary differential equations in the unknowns α_{im}(t). Now, due to the smoothness of the coefficients, we can use a classical result from the theory of ordinary differential equations to ensure that there exists a unique solution y_m defined on a maximal interval [0, t_m] with 0 < t_m ≤ T. For the convergence, we need to show that t_m = T for all m; in that way, the interval of existence of the solutions does not shrink as m goes to infinity. If t_m < T, then lim sup_{t→t_m} ‖y_m(t)‖ = +∞.
In the following stage, we prove that (y_m(t))_{m∈N} is bounded on [0, T] by a constant independent of t and m. Then, the solution is defined on [0, T] for all m.
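The Faedo–Galerkin construction of Stage 1 can be sketched numerically on a low-dimensional toy model. The code below stands in a 1-D viscous Burgers equation on (0, π) with homogeneous Dirichlet data for the system (4) (an assumption made purely for illustration), expands the approximate solution in the basis z_k = sin(kx), and solves the resulting Cauchy problem for the coefficients α_{km}(t) with scipy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Faedo-Galerkin sketch for a 1-D toy model (viscous Burgers on (0, pi) with
# Dirichlet data); all data (mu, u, y0, m) are illustrative assumptions.
mu, m, nx = 0.5, 8, 400
xs = np.linspace(0.0, np.pi, nx)
Z = np.array([np.sin(k * xs) for k in range(1, m + 1)])  # basis on the grid
u = np.sin(xs)                                           # forcing/control u(x)

def proj(f):
    """Galerkin coefficients (f, z_k) / ||z_k||^2, with ||z_k||^2 = pi/2."""
    return np.trapz(f * Z, xs, axis=1) / (np.pi / 2)

def rhs(t, a):
    y = a @ Z                                # y_m = sum_k a_k z_k
    y_x = np.gradient(y, xs)
    lap = -np.arange(1, m + 1) ** 2 * a      # coefficients of (y_m)_xx
    return mu * lap - proj(y * y_x) + proj(u)

a0 = proj(np.sin(xs))                        # projection of y_0(x) = sin(x)
sol = solve_ivp(rhs, (0.0, 1.0), a0, rtol=1e-8)
assert sol.success
# The Galerkin coefficients stay bounded on [0, T], as the a priori
# estimates of Stage 2 predict.
assert np.all(np.abs(sol.y) < 10.0)
```

The design mirrors the proof: testing (26) against each z_j turns the PDE into an ODE system for the coefficients, which a classical ODE existence result (here, the solver) handles on [0, T].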

Stage 2:
Estimates for the approximate solution.
Taking v = y_m(t) in (26), using the Cauchy–Schwarz inequality, and keeping in mind that b(y_m(t), y_m(t), y_m(t)) = 0 for almost all t ∈ [0, T], we obtain

$$
\frac{1}{2}\frac{d}{dt}|y_m(t)|^2 + \mu\|y_m(t)\|^2 \le C\big(|u(t)|^2 + |y_m(t)|^2\big).
\tag{28}
$$

The last inequality implies that

$$
\frac{d}{dt}|y_m(t)|^2 \le C\big(|u(t)|^2 + |y_m(t)|^2\big).
$$

Integrating both sides of the previous expression and using Grönwall's inequality, we deduce that

$$
|y_m(t)|^2 \le e^{Ct}\Big(|y_0|^2 + C\int_0^t |u(s)|^2\,ds\Big) \le C' \quad\text{for all } t \in [0, T].
\tag{29}
$$

This tells us that t_m = T for all m; moreover, (y_m)_{m∈N} is uniformly bounded in L^∞([0, T]; H). On the other hand, integrating Equation (28) from 0 to T and using (29), we conclude that (y_m)_{m∈N} is uniformly bounded in L^2([0, T]; V).
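The Grönwall step used above can be illustrated with a small numerical experiment: integrate the worst case E′ = cE + f with explicit Euler and check it against the bound E(t) ≤ e^{ct}(E(0) + ∫₀ᵗ f(s) ds). The constant c, initial value, and forcing f are arbitrary illustrative choices.

```python
import numpy as np

# Gronwall's inequality, numerically: a quantity with E'(t) <= c E(t) + f(t)
# stays below exp(c t) * (E(0) + integral of f), uniformly on [0, T].
c, E0 = 1.5, 2.0
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
f = 1.0 + np.sin(5.0 * t) ** 2

# Forward-Euler solution of the equality E' = c E + f (the worst case).
E = np.empty_like(t)
E[0] = E0
for k in range(len(t) - 1):
    E[k + 1] = E[k] + dt * (c * E[k] + f[k])

# Cumulative integral of f up to each time step (left Riemann sums).
F = np.cumsum(np.concatenate([[0.0], f[:-1]])) * dt
bound = np.exp(c * t) * (E0 + F)
assert np.all(E <= bound + 1e-9)  # the Gronwall bound dominates pointwise
```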

Stage 3: Further estimates for the approximate solution.

After integrating the last equation, we obtain the following.
Taking into account Cauchy's and Ladyzhenskaya's inequalities, we obtain the corresponding estimates. In summary, since (y_m(t))_{m∈N} is uniformly bounded in L^∞([0, T]; V), it follows that (y_m)_{m∈N} is uniformly bounded in L^2([0, T]; V).

Stage 4: Extraction of subsequence and convergence to the solution.
Since (y_m)_{m∈N} is uniformly bounded in L^2([0, T]; V) and in L^∞([0, T]; H), we can extract a subsequence (which we denote in the same way) that converges, in an appropriate sense, to a function y, and then pass to the limit in the approximate problem (26):

$$
y_m \rightharpoonup y \ \text{weakly in } L^2([0, T]; V),
\qquad
y_m \overset{*}{\rightharpoonup} y \ \text{weakly-}*\ \text{in } L^{\infty}([0, T]; H).
$$

Now, note that V ↪ H with dense and continuous injection.

Stage 5: Existence of solutions.
Let us take a function η ∈ D(0, T). Multiplying Equation (26) by η, integrating over the interval (0, T), and passing to the limit along the subsequence above, we obtain

$$
\int_0^T \Big[-(y(t), v)\,\eta'(t) + \mu\,a(y(t), v)\,\eta(t) + b(y(t), y(t), v)\,\eta(t) + (My(t), v)\,\eta(t)\Big]dt
= \int_0^T \langle u(t), v\rangle\,\eta(t)\,dt.
$$

This equality holds, by linearity and density, for all v ∈ V. Thus, y satisfies Equation (17). Now, let us show that y(0) = y_0. Since y is a weak solution of (4), taking η ∈ C^∞([0, T]) with η(T) = 0, for all v ∈ V we can integrate from 0 to T and integrate by parts in time, obtaining

$$
-(y(0), v)\,\eta(0) + \int_0^T \Big[-(y(t), v)\,\eta'(t) + \mu\,a(y(t), v)\,\eta(t) + b(y(t), y(t), v)\,\eta(t) + (My(t), v)\,\eta(t)\Big]dt
= \int_0^T \langle u(t), v\rangle\,\eta(t)\,dt.
$$

Performing the same computation at the level of the approximate problem and passing to the limit yields the same identity with (y_0, v) η(0) in place of (y(0), v) η(0). Thus, since η(T) = 0, it follows that

$$
(y(0) - y_0, v)\,\eta(0) = 0 \quad\text{for all } v \in V.
$$

Taking η(0) = 1, we conclude that y(0) = y_0 ∈ V.

Study of the Existence of Solutions for Our Optimal Control Problem
In this section, we will show the existence of optimal controls for our non-linear system given by (7).
Let us show that our optimal control problem formulated in (6) and (7) has a solution in U_ad; the proof relies on the properties of the non-linear operator B established above.

Theorem 2. The optimal control problem given by (6) and (7) admits an optimal solution u ∈ U_ad with associated state y ∈ W^α([0, T]; V), for 1 ≤ α ≤ 2.
Proof. Note that the set of admissible controls defined by (3) is non-empty, convex, and closed in L^2(Q_T)^4. Moreover, for every control u ∈ U_ad, by Theorem 1 there is a unique weak solution of the state Equations (20) and (21). Furthermore, we have that J(y, u) ≥ 0 for every admissible pair (y, u).
Hence, the infimum of J over all admissible pairs exists:

$$
\bar{J} := \inf\{\, J(y, u) : (y, u) \ \text{admissible} \,\} \ge 0.
$$

Consequently, there is a minimizing sequence (y_n, u_n)_{n∈N} of admissible pairs such that J(y_n, u_n) → J̄ as n → +∞. First, we will show that (u_n)_{n∈N} and (y_n)_{n∈N} are bounded sequences in L^2(Q_T)^4 and W^α([0, T]; V), respectively.
From this convergence, we see that the sequence (J(y_n, u_n))_{n∈N} is bounded; since J controls the term (λ/2)‖u_n − u_d‖^2, this implies that (u_n)_{n∈N} is bounded in L^2(Q_T)^4. Now, we need to show that (y_n)_{n∈N} and (y_{nt})_{n∈N} are bounded in L^2([0, T]; V) and L^2([0, T]; V′), respectively. Indeed,

$$
y_{nt}(t) + \mu A y_n(t) + M y_n(t) + B(y_n(t)) = u_n(t) \ \text{in } L^2([0, T]; V'),
\qquad y_n(0) = y_0 \in V.
\tag{34}
$$
Then, since (y_n)_{n∈N} is bounded in L^∞([0, T]; H) and in L^∞([0, T]; V), it follows that

$$
\int_0^T b(y_n(t), y_n(t), y_{nt}(t))\,dt \le C\int_0^T |y_n(t)|^{1/2}\,\|y_n(t)\|^{3/2}\,\|y_{nt}(t)\|\,dt.
$$

Therefore, from Equation (34), and since (u_n)_{n∈N} is bounded in L^2(Q_T)^4, we can ensure that (y_{nt})_{n∈N} is bounded in L^2([0, T]; V′). Thus, it follows that (y_n)_{n∈N} is bounded in W^α([0, T]; V). Then, we can extract a subsequence (y_n, u_n)_{n∈N} converging weakly in the space W^α([0, T]; V) × L^2(Q_T)^4 to some limit (y, u). Now, let us show that (y, u) is an admissible pair, that is, that it satisfies the state equations (21). First, note that the set of admissible controls U_ad is non-empty, convex, and closed in L^2(Q_T)^4, so it is weakly closed. Therefore, u is admissible, that is, u ∈ U_ad.
Then, let us show that the pair (y, u) satisfies the state equations (21); that is, we pass to the limit in each term of (34) tested against every v ∈ L^2([0, T]; V). By the construction of the proof, we have y_n(0) = y_0 for all n; hence, y(0) = y_0, and y is the state associated with u.
Finally, it remains to show that J̄ = J(y, u). Recall that our energy functional is given by (20); therefore, J is a convex functional. Moreover, J is continuous on W^α([0, T]; V) × L^2(Q_T)^4; thus, J is weakly lower semicontinuous, that is,

$$
J(y, u) \le \liminf_{n\to\infty} J(y_n, u_n) = \bar{J}.
$$

Now, since (y, u) is an admissible pair and J̄ is the infimum over all admissible pairs, it follows that J̄ = J(y, u).
Thus, (y, u) is an optimal pair.

Optimality Condition
In this section, we show that the optimal solution must satisfy the first-order necessary optimality condition associated with our optimal control problem (6).
We will study the case in which the Gâteaux derivative of the energy functional vanishes; this yields a candidate for the optimal control. That is, if the Gâteaux derivative of our functional exists, then the optimal solution must satisfy the first-order necessary condition.
The first-order necessary condition allows conclusions to be drawn about the form and characterization of the solutions of control problems.
In this part, we establish the first-order necessary optimality condition associated with our optimal control problem (6). This condition is important for local optimization in many respects: from the first-order necessary conditions, we can compute candidates for optimal controls by numerical approximation, in such a way that the approximate solutions solve the first-order optimality system at a discrete level; this is additional work that could be studied in future research. Now, we can show that the optimal solution must satisfy the first-order necessary condition associated with our problem (20). This is done directly using the Gâteaux derivative of our functional Ψ(u). In fact, for every h ∈ L^2([0, T]; L^2(Ω)^4) and for every α ∈ R, we have that Ψ(ū) ≤ Ψ(ū + αh), due to the very definition of ū. In particular,

$$
\frac{d}{d\alpha}\Psi(\bar{u} + \alpha h)\Big|_{\alpha = 0} = 0,
$$

which means that the Gâteaux derivative of Ψ at the point ū in the direction h vanishes for every h ∈ L^2([0, T]; L^2(Ω)^4). Before stating our main result, let us recall the following: if σ is the solution of the linearized problem associated with y_u, then σ ∈ L^∞([0, T]; V) ∩ L^2([0, T]; H^2(Ω)^4) and ‖B′(y_u)σ‖ ≤ c‖y_u‖‖σ‖.
Let us introduce the definition of locally optimal control.
Definition 3 (locally optimal control). A control ū ∈ U_ad is said to be locally optimal in L^2(Q_T)^4 if there is a constant β > 0 such that

$$
J(\bar{y}, \bar{u}) \le J(y, u)
$$

holds for all u ∈ U_ad with ‖u − ū‖_{L^2(Q_T)^4} ≤ β. Here, ȳ and y denote the states of the system associated with ū and u, respectively; that is, ȳ = Υ(ū) and y = Υ(u).
First-order necessary optimality conditions appear in many references; for the corresponding conditions for optimal control problems with elliptic and parabolic partial differential equations, [6,7,9] were our main references for the study of optimal control problems, together with the literature on stratified fluids (see [15–17]). Now, let us state the main result of this section: the first-order necessary optimality condition for our control problem (6).

Theorem 4.
Let U be a real Banach space, let U_ad ⊂ L^2(Q_T)^4 be a non-empty, convex, and closed set, and let the functional Ψ : U → R be Gâteaux differentiable on U_ad. Let ū ∈ U_ad be a solution of the problem

$$
\min_{u \in U_{ad}} \Psi(u).
\tag{36}
$$

Then the following optimality condition holds:

$$
\Psi'(\bar{u})(u - \bar{u}) \ge 0 \quad\text{for all } u \in U_{ad}.
\tag{37}
$$

If, additionally, ū ∈ U_ad solves the variational inequality (37) and Ψ is convex, then ū is a solution of (36).
Proof. Let u ∈ U_ad be arbitrary and set u_t := ū + t(u − ū) for t ∈ (0, 1]. Since U_ad is convex, u_t ∈ U_ad, and since ū solves (36), we have Ψ(ū) ≤ Ψ(u_t). We can rewrite the last inequality as

$$
\frac{\Psi(\bar{u} + t(u - \bar{u})) - \Psi(\bar{u})}{t} \ge 0;
$$

thus, it follows that

$$
\lim_{t\to 0^+}\frac{\Psi(\bar{u} + t(u - \bar{u})) - \Psi(\bar{u})}{t} \ge 0.
$$

Now, using the fact that Ψ is Gâteaux differentiable on U_ad and taking the limit as t → 0^+, we obtain Ψ′(ū)(u − ū) ≥ 0, which is (37). On the other hand, let u ∈ U_ad be arbitrary and let ū ∈ U_ad be a solution of the variational inequality (37). Since Ψ is convex, for all t ∈ [0, 1] it follows that Ψ(ū + t(u − ū)) ≤ (1 − t)Ψ(ū) + tΨ(u); hence,

$$
\frac{\Psi(\bar{u} + t(u - \bar{u})) - \Psi(\bar{u})}{t} \le \Psi(u) - \Psi(\bar{u}).
\tag{40}
$$

Then, from Equations (37) and (40), letting t → 0^+, we obtain Ψ(ū) ≤ Ψ(u). Therefore, ū is an optimal solution.
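The variational inequality of Theorem 4 can be checked numerically on a model problem. Below, the reduced functional is replaced by the convex quadratic Ψ(u) = ½‖u − z‖^2 minimized over a box-shaped U_ad (a stand-in chosen purely for illustration): its minimizer ū is the pointwise clipping of z, and Ψ′(ū)(u − ū) ≥ 0 holds for every admissible u.

```python
import numpy as np

# Variational inequality Psi'(u_bar)(u - u_bar) >= 0 for the model functional
# Psi(u) = 0.5 ||u - z||^2 over the box U_ad = {u_a <= u <= u_b} (illustrative).
rng = np.random.default_rng(1)
u_a, u_b = -1.0, 1.0
z = rng.normal(scale=2.0, size=50)

u_bar = np.clip(z, u_a, u_b)   # minimizer of Psi over the box
grad = u_bar - z               # gradient Psi'(u_bar)

# Psi'(u_bar)(u - u_bar) >= 0 for randomly sampled admissible directions:
for _ in range(100):
    u = rng.uniform(u_a, u_b, size=50)
    assert np.dot(grad, u - u_bar) >= -1e-12
```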
In order to characterize optimal solutions, we introduce the adjoint problem of the equations that describe the non-linear motion in the dynamics of viscous and incompressible stratified fluids in R^3.
Proof. First of all, let us work with our reduced energy functional Ψ given in (24), namely

$$
\Psi(u) = J(\Upsilon(u), u) = \frac{1}{2}\|\Upsilon(u) - y_d\|^2 + \frac{\lambda}{2}\|u - u_d\|^2.
\tag{43}
$$

By optimization principles in Banach spaces, we know that the variational inequality Ψ′(ū)(u − ū) ≥ 0 for all u ∈ U_ad is a necessary condition for local optimality of ū. It remains to compute Ψ′ and to derive the adjoint system. Let us write Ψ in the form given by (43). The first derivative of Ψ at ū is characterized by