Noether Theorem in Stochastic Optimal Control Problems via Contact Symmetries

Abstract: We establish a generalization of the Noether theorem for stochastic optimal control problems. Exploiting the tools of jet bundles and contact geometry, we prove that from any (contact) symmetry of the Hamilton–Jacobi–Bellman equation associated with an optimal control problem it is possible to build a related local martingale. Moreover, we provide an application of the theoretical results to Merton's optimal portfolio problem, showing that this model admits infinitely many conserved quantities in the form of local martingales.


Introduction
The concept of symmetry of ordinary or partial differential equations (ODEs and PDEs) was introduced by Sophus Lie at the end of the 19th century with the aim of extending Galois theory from polynomial to differential equations. In fact, the whole theory of Lie groups and algebras was developed by Lie himself, together with the principal tools for attacking the problem of symmetries of differential equations (see [1] for a historical introduction to the subject and [2,3] for some modern presentations).
One of the most important applications of the study of symmetries in physical systems was provided by Emmy Noether. She understood that when an equation comes from a variational problem, as in Lagrangian mechanics, general relativity or, more generally, field theory, it is possible to relate each symmetry of the equation to a conserved quantity, i.e., a function of the state of the system that does not change during the evolution of the dynamics, and, conversely, to each conserved quantity it is possible to associate a symmetry of the motion. The simplest examples are, in Newtonian and Lagrangian mechanics, the conservation of energy, which is related to invariance with respect to time translations, and the conservation of angular momentum, which is related to invariance with respect to rotations. The classical Noether theorem (see, e.g., [2,4] for an exposition of the subject) has found many generalizations in deterministic optimal control theory (see, e.g., [5,6] and also [7][8][9] on the related problem of commuting Hamiltonians and Hamilton-Jacobi multi-time equations).
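As a concrete numerical illustration of this classical correspondence (our own sketch, not taken from the references), the following snippet integrates a planar particle in the rotationally invariant potential V(x, y) = (x² + y²)/2 with a standard fourth-order Runge–Kutta scheme and checks that the angular momentum x v_y − y v_x, the Noether quantity of the rotation symmetry, stays (numerically) constant along the trajectory; the potential, initial condition, and step size are arbitrary choices.

```python
def accel(x, y):
    # -grad V for the rotationally invariant potential V = (x^2 + y^2)/2
    return -x, -y

def rk4_step(x, y, vx, vy, dt):
    # classical 4th-order Runge-Kutta step for (x', y', vx', vy') = (vx, vy, ax, ay)
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    s = (x, y, vx, vy)
    k1 = deriv(s)
    k2 = deriv(tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def angular_momentum(x, y, vx, vy):
    # the Noether conserved quantity associated with rotational invariance
    return x * vy - y * vx

state = (1.0, 0.0, 0.0, 0.7)          # arbitrary initial condition
L0 = angular_momentum(*state)
for _ in range(10_000):
    state = rk4_step(*state, 1e-3)
# the drift is only the (tiny) numerical integration error
assert abs(angular_momentum(*state) - L0) < 1e-9
```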
The development of a Lie symmetry analysis for stochastic differential equations (SDEs) and general random systems is relatively recent (see, e.g., [10][11][12][13][14][15][16][17][18][19][20] for some recent developments in the non-variational case). For stochastic systems arising from a variational framework, it is certainly interesting to study the relation between their symmetries and functionals that are conserved by their flow, and, in particular, to establish stochastic generalizations of the Noether theorem.
The problem of finding some kinds of conservation laws for SDEs was discussed in various papers (see [21][22][23][24][25][26][27][28][29][30]). We could summarize three different approaches to this problem. The first one was considered by Misawa in [26,27,31], where the author studied the case in which some Markovian functions of solutions of SDEs are exactly conserved during time evolution.
The second approach was adopted by Zambrini and co-authors in a number of works. They work in the framework of Euclidean quantum mechanics, which represents a geometrically consistent stochastic deformation of classical mechanics where a Gaussian noise is added to a classical system. This setting has a close connection with optimal transport and optimal control (see, e.g., [32] for an introduction to the topic). More precisely, in [29], a generalization of the Noether theorem has been proved: to any one-parameter symmetry of a variational problem it is possible to associate a martingale that is independent both of the initial and of the final condition of the system. This first step was quite important since it stressed that the suitable generalization of a conserved quantity in a stochastic setting is not a function that remains constant during the time evolution of a stochastic system, but a function that is constant in mean. Another remarkable advance in the study of variational symmetries was achieved in [24,25,30], where it was noted that the symmetries of the Hamilton-Jacobi-Bellman (HJB) equation of the considered variational problem are the correct objects to be associated with the aforementioned martingales, and that contact geometry is a good framework in which a stochastic version of the Noether theorem can be formulated. Indeed, to each Lie point symmetry of the HJB equation it is possible to associate a martingale for the evolution of the system. It is worth also mentioning the papers [21,28], where a suitable notion of integrable system, i.e., a system with as many martingales and symmetries as the dimension of the system, is discussed.
The third approach was proposed by Baez and Fong in [22] (see also [23]). The authors showed how to build martingales by applying symmetries to solutions of the backward Kolmogorov equation, which can be interpreted as the linear version of the HJB equation obtained when the control and the objective function are trivial.
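A minimal Monte Carlo sketch of this linear mechanism (our own illustration, with all concrete choices assumed): for a one-dimensional Brownian motion X_t = x_0 + W_t, the random field U(t, x) = x² + (T − t) solves the backward Kolmogorov equation ∂_t U + ½ ∂_xx U = 0, and applying the spatial-translation symmetry ∂_x produces the candidate martingale U_x(t, X_t) = 2 X_t, whose mean should not depend on t.

```python
import random
import statistics

random.seed(7)
T, x0, n_paths = 1.0, 0.5, 100_000

# U(t, x) = x**2 + (T - t) solves the backward Kolmogorov equation
# U_t + U_xx / 2 = 0 for standard Brownian motion X_t = x0 + W_t.
# The translation symmetry d/dx yields the candidate martingale
# U_x(t, X_t) = 2 * X_t, so its mean should stay equal to 2 * x0.
def Ux(t, x):
    return 2.0 * x

means = []
for t in (0.25, 0.5, 1.0):
    samples = [Ux(t, x0 + random.gauss(0.0, t ** 0.5)) for _ in range(n_paths)]
    means.append(statistics.mean(samples))

assert all(abs(m - 2.0 * x0) < 0.03 for m in means)
```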
In our paper, we generalize the approach proposed by Zambrini and co-authors along at least two directions. First, we work in a different optimal control setting that can be seen as a generalization of the variational framework described in their articles. Second, we do not restrict ourselves to Lie point symmetries, but take advantage of the general notion of contact symmetry, namely a transformation preserving the contact structure of the jet space (see Section 3).
We prove here a Noether theorem (Theorem 12) that associates with any contact symmetry of the HJB equation of an optimal control problem a local martingale built from the generator of the symmetry. More precisely, if we consider the generator Ω(t, x, u, u_x) of a contact symmetry (which is a regular function defined on the jet space J^1(R^n, R), i.e., a map depending on a function u and on its first derivatives u_x), a regular solution U(t, x) to the HJB equation, and the solution X_t to the optimal control problem, then the process O_t = Ω(t, X_t, U(t, X_t), ∇U(t, X_t)), obtained by composing the generator Ω with the function U and the process X, is a local martingale.
Furthermore, we generalize the Noether theorem to the case where the coefficients and the Lagrangian of the control problem are random. Indeed, we establish that the Noether theorem holds also for the stochastic HJB equation, introduced by Peng in [33] to study optimal control problems with a stochastic final condition or a stochastic Lagrangian, provided that we restrict ourselves to a subset of Lie point symmetries (Theorem 13 and Corollary 1).
Finally, the present paper provides an application of our theory to a non-trivial and interesting problem arising in mathematical finance, namely Merton's optimal portfolio problem. First proposed by Merton in [34], this model nowadays finds many different applications and generalizations (see [35] for a review of the original problem and various generalizations, and [36][37][38][39] for some more recent works on the subject). A particular form of the Noether theorem for this problem can be found in [40]. We show here that the HJB equation of this optimal control system admits infinitely many contact symmetries. It is important to notice that the generalization to contact symmetries is essential in this specific problem, since, when we restrict to Lie point symmetries as is done in the aforementioned literature, the equation admits only a finite number of infinitesimal invariants. The presence of infinitely many contact symmetries makes it possible to construct infinitely many martingales whose means are preserved by the evolution of the system. Moreover, we also point out that, when the final condition is random or the coefficients of the evolution of the stock are general adapted processes, our stochastic generalization of the Noether theorem (Corollary 1) allows us to construct some non-trivial martingales for this classical mathematical model. We think that the presence of these martingales could be related to the existence of many explicit solutions for Merton's problem, and therefore we expect that the methods presented here can be used to build other explicit solutions for it. We plan to study in a future work the financial consequences of the conservation laws identified in this paper.
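For orientation, the classical constant-coefficient Merton problem with power utility admits the well-known closed-form value function, which can be checked symbolically. The sketch below (our notation and conventions, not the paper's) uses the textbook HJB equation after maximization over the fraction of wealth held in the risky asset.

```python
import sympy as sp

t, x, T, r, mu, sigma, gamma = sp.symbols('t x T r mu sigma gamma', positive=True)

# classical Merton value function: U(t, x) = exp(kappa * (T - t)) * x**gamma / gamma
kappa = gamma * (r + (mu - r)**2 / (2 * sigma**2 * (1 - gamma)))
U = sp.exp(kappa * (T - t)) * x**gamma / gamma

# HJB after maximizing over the portfolio fraction a:
# U_t + r x U_x - (mu - r)^2 U_x^2 / (2 sigma^2 U_xx) = 0
hjb = (sp.diff(U, t) + r * x * sp.diff(U, x)
       - (mu - r)**2 * sp.diff(U, x)**2 / (2 * sigma**2 * sp.diff(U, x, 2)))
assert sp.simplify(hjb) == 0

# the maximizer is the Merton ratio (mu - r) / (sigma^2 (1 - gamma))
a_star = -(mu - r) * sp.diff(U, x) / (sigma**2 * x * sp.diff(U, x, 2))
assert sp.simplify(a_star - (mu - r) / (sigma**2 * (1 - gamma))) == 0
```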
Since the stochastic and geometrical frameworks are not so commonly put together, we also provide a concise introduction to both these subjects.

Plan of the Paper
The paper is organized as follows. Section 2 introduces stochastic optimal control problems both in the deterministic and stochastic case, presenting also the HJB equation, and it is useful also to fix the notations that we adopt throughout the paper. Contact symmetries and their properties in the PDEs setting are discussed in Section 3. Section 4 contains the main theoretical results of the paper, namely the Noether theorems for deterministic and stochastic HJB equations. The application of such results to Merton's optimal portfolio problem is given in Section 5.

A Brief Survey on Stochastic Optimal Control
We give here an overview of some results about stochastic control problems, referring the interested reader to [41][42][43][44] for further investigations on such results, though more precise references will be given throughout the section. The main aim of this section is to introduce the topics we will deal with and to give the tools from the stochastic optimal control theory that we will use later on in the paper.

Deterministic Optimal Control and Lagrange Mechanics
We start by recalling some notions of deterministic optimal control and, in particular, we focus on Lagrangian-type optimal control problems, i.e., problems arising from the Lagrangian formulation of classical mechanics. More precisely, we consider a system of controlled ODEs of the form

$$\dot{X}^i_t = \alpha^i_t, \qquad X_{t_0} = x, \qquad i = 1, \ldots, n, \tag{1}$$

where $t_0 < T$ are the initial time and the final time horizon, respectively, and $\alpha = (\alpha^1, \ldots, \alpha^n) \in C([t_0, T], \mathbb{R}^n)$ is the control function. We want to maximize the objective functional

$$J(t_0, x, \alpha) = \int_{t_0}^{T} L(X_t, \alpha_t)\, dt, \tag{2}$$

where $X$ is the solution to Equation (1). We suppose that there exists a unique smooth function $A : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ such that, for every $(x, p)$, the map $a \mapsto \langle a, p \rangle + L(x, a)$ attains its maximum at $a = A(x, p)$, and also that, for any $x \in \mathbb{R}^n$, the map $A(x, \cdot) := (A^1(x, \cdot), \ldots, A^n(x, \cdot))$ is smoothly invertible in all its variables as a function from $\mathbb{R}^n$ into itself. Define then the PDE

$$\partial_t u + \langle A(x, u_x), u_x \rangle + L(x, A(x, u_x)) = 0, \tag{3}$$

where $u_x = (u_{x^1}, \ldots, u_{x^n})$. Equation (3) is usually referred to as the Hamilton–Jacobi equation in the context of Lagrangian mechanics, or the Hamilton–Jacobi–Bellman equation in the context of optimal control theory.
We state now the deterministic version of the so-called verification theorem.

Remark 1.
It is important to note that, in the deterministic case and when $U \in C^{1,2}([t_0, T] \times \mathbb{R}^n, \mathbb{R})$, i.e., $U$ is once differentiable with respect to time $t$ and twice with respect to space $x \in \mathbb{R}^n$, the optimal control $t \mapsto \alpha_t$ is $C^1([t_0, T], \mathbb{R}^n)$ and it satisfies the Euler–Lagrange equations

$$\frac{d}{dt}\,\partial_{a^i} L(X_t, \alpha_t) - \partial_{x^i} L(X_t, \alpha_t) = 0, \tag{4}$$

where $i = 1, \ldots, n$.

Classical Stochastic Optimal Control Problem
An optimal control problem consists of maximizing an objective functional, depending on the state of a dynamical system, on which we can act through a control process.
Let $K$ be a (convex) subset of $\mathbb{R}^d$ and fix a final time $T > 0$. Denote by $W$ an $m$-dimensional Brownian motion on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$, where $(\mathcal{F}_t)_{t \ge 0}$ is the natural filtration generated by $W$. We assume that the state of the system is modeled by the following stochastic differential equation (SDE)

$$dX_s = \mu(s, X_s, \alpha_s)\, ds + \sigma(s, X_s, \alpha_s)\, dW_s, \qquad X_{t_0} = x, \tag{5}$$

where $\mu : \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^n$ and $\sigma : \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^{n \times m}$ are measurable functions that are also Lipschitz-continuous uniformly on the set $K$, i.e., there exists a constant $C \ge 0$ such that, for every $t \in \mathbb{R}_+$, $x, y \in \mathbb{R}^n$, $a \in K$,

$$|\mu(t, x, a) - \mu(t, y, a)| + \|\sigma(t, x, a) - \sigma(t, y, a)\| \le C\, |x - y|, \tag{6}$$

where $\|\sigma\|^2 = \mathrm{tr}(\sigma^* \sigma)$. We will also use the notation $\mu = (\mu^i)_{i=1,\ldots,n}$ and $\sigma = (\sigma^{i\ell})_{i=1,\ldots,n,\, \ell=1,\ldots,m}$.
The control process $\alpha = (\alpha_s)_s$, appearing in (5), is a $K$-valued progressively measurable process with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$. We denote by $\mathcal{K}$ the set of control processes $\alpha$ such that

$$\mathbb{E}\left[\int_0^T |\alpha_s|^2\, ds\right] < +\infty. \tag{7}$$

We call $X^{t_0,x}_t$, $t \in [t_0, T]$, the solution to the SDE (5).

Remark 2. Conditions (6) and (7) imply that, for any initial condition $(t_0, x) \in [0, T) \times \mathbb{R}^n$ and for all $\alpha \in \mathcal{K}$, there exists a unique strong solution $X^{t_0,x}_t$ to the SDE (5) (see, e.g., Theorem 2.2 in Chapter 4 of [45]).
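Although the paper works with exact solutions, a strong solution of (5) can be approximated by the Euler–Maruyama scheme. The following self-contained sketch (coefficients chosen by us for illustration, satisfying the Lipschitz condition) simulates the controlled SDE under a constant feedback control and checks the mean of X_T.

```python
import random
import statistics

def euler_maruyama(x0, mu, sigma, alpha, T, n_steps, rng):
    # one Euler-Maruyama path of dX = mu(t, X, a) dt + sigma(t, X, a) dW
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        t = k * dt
        a = alpha(t, x)                       # feedback control a = alpha(t, X_t)
        dW = rng.gauss(0.0, dt ** 0.5)
        x += mu(t, x, a) * dt + sigma(t, x, a) * dW
    return x

rng = random.Random(0)
# illustrative coefficients: the control acts as a constant drift, constant noise
paths = [euler_maruyama(0.0,
                        mu=lambda t, x, a: a,
                        sigma=lambda t, x, a: 0.1,
                        alpha=lambda t, x: 1.0,
                        T=1.0, n_steps=50, rng=rng)
         for _ in range(10_000)]
assert abs(statistics.mean(paths) - 1.0) < 0.01   # E[X_T] = x0 + T here
```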
Let $L : \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}$ be two measurable functions, such that $g$ satisfies the quadratic growth condition $|g(x)| \le C(1 + |x|^2)$, for every $x \in \mathbb{R}^n$, for some constant $C$ independent of $x$.
For $(t_0, x) \in [0, T) \times \mathbb{R}^n$, we denote by $\mathcal{K}_L(t_0, x)$ the subset of controls in $\mathcal{K}$ such that

$$\mathbb{E}\left[\int_{t_0}^T |L(s, X^{t_0,x}_s, \alpha_s)|\, ds\right] < +\infty.$$

We consider an objective function of the following form

$$J(t_0, x, \alpha) = \mathbb{E}\left[\int_{t_0}^T L(s, X^{t_0,x}_s, \alpha_s)\, ds + g(X^{t_0,x}_T)\right].$$

We are now in a position to introduce the stochastic optimal control problem.

Definition 1. Fixed $(t_0, x) \in [0, T) \times \mathbb{R}^n$, the stochastic optimal control problem consists of maximizing the objective function $J(t_0, x, \alpha)$ over all $\alpha \in \mathcal{K}_L(t_0, x)$ subject to the SDE (5). The associated value function is then defined as

$$U(t_0, x) = \sup_{\alpha \in \mathcal{K}_L(t_0, x)} J(t_0, x, \alpha).$$

Given an initial condition $(t_0, x) \in [0, T) \times \mathbb{R}^n$, we call $\alpha^* \in \mathcal{K}_L(t_0, x)$ an optimal control if $J(t_0, x, \alpha^*) = U(t_0, x)$.
We call Hamilton–Jacobi–Bellman (HJB) equation the PDE

$$\partial_t u(t, x) + \sup_{a \in K}\left[ L^a_t u(t, x) + L(t, x, a) \right] = 0, \qquad u(T, x) = g(x), \tag{8}$$

where $L^a_t$ is the Kolmogorov operator associated with Equation (5), namely, for $\psi \in C^2(\mathbb{R}^n)$,

$$L^a_t \psi(x) = \sum_{i=1}^n \mu^i(t, x, a)\, \partial_{x^i} \psi(x) + \frac{1}{2} \sum_{i,j=1}^n \eta^{ij}(t, x, a)\, \partial_{x^i x^j} \psi(x),$$

with $\eta^{ij}$ defined, for every $i, j \in \{1, \ldots, n\}$, as

$$\eta^{ij}(t, x, a) = \sum_{\ell=1}^m \sigma^{i\ell}(t, x, a)\, \sigma^{j\ell}(t, x, a).$$

We also write, for $x \in \mathbb{R}^n$, $p \in \mathbb{R}^n$ and $q \in \mathbb{R}^{n \times n}$,

$$H(t, x, p, q) = \sup_{a \in K}\left[ \sum_{i=1}^n \mu^i(t, x, a)\, p_i + \frac{1}{2} \sum_{i,j=1}^n \eta^{ij}(t, x, a)\, q_{ij} + L(t, x, a) \right],$$

so that the HJB Equation (8) can be written also in the following way

$$\partial_t u(t, x) + H\big(t, x, \nabla u(t, x), \nabla^2 u(t, x)\big) = 0, \qquad u(T, x) = g(x). \tag{9}$$

We state here the classical verification theorem.
Theorem 2 (Verification theorem). Let $\varphi \in C^{1,2}([0, T] \times \mathbb{R}^n, \mathbb{R})$ be a solution to the HJB Equation (9) for $t_0 = 0$, satisfying the following quadratic growth condition, for some constant $C$,

$$|\varphi(t, x)| \le C(1 + |x|^2), \qquad (t, x) \in [0, T] \times \mathbb{R}^n. \tag{10}$$

Suppose that there exists a measurable function $A^*(t, x)$, $(t, x) \in [0, T) \times \mathbb{R}^n$, taking values in $K$, such that
(i) $A^*(t, x)$ attains the supremum in the HJB Equation (8) written for $\varphi$;
(ii) the SDE

$$dX^*_s = \mu\big(s, X^*_s, A^*(s, X^*_s)\big)\, ds + \sigma\big(s, X^*_s, A^*(s, X^*_s)\big)\, dW_s,$$

with initial condition $X_t = x$, admits a unique solution $X^*_s$;
(iii) the process $A^*(s, X^*_s)$, $s \in [t, T]$, lies in $\mathcal{K}_L(t, x)$.
Then $\varphi = U$ and $A^*(\cdot, X^*_\cdot)$ is an optimal control for the stochastic optimal control problem in Definition 1.
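As a toy instance of the verification machinery (our own example, not from the paper), consider maximizing E[−∫₀ᵀ α_s²/2 ds − X_T²/2] for dX = α dt + σ dW. After computing sup_a (a U_x + σ²U_xx/2 − a²/2) at a* = U_x, the HJB equation reduces to U_t + U_x²/2 + σ²U_xx/2 = 0 with U(T, x) = −x²/2, solved explicitly by U(t, x) = −x²/(2(1 + T − t)) − (σ²/2) log(1 + T − t); both claims can be checked symbolically.

```python
import sympy as sp

t, x, T, sigma = sp.symbols('t x T sigma', positive=True)

# candidate value function for the toy linear-quadratic problem
U = -x**2 / (2 * (1 + T - t)) - sigma**2 / 2 * sp.log(1 + T - t)

# reduced HJB: U_t + U_x**2 / 2 + sigma**2 / 2 * U_xx = 0
hjb = sp.diff(U, t) + sp.diff(U, x)**2 / 2 + sigma**2 / 2 * sp.diff(U, x, 2)
assert sp.simplify(hjb) == 0

# the optimal feedback control is A*(t, x) = U_x(t, x) = -x / (1 + T - t)
assert sp.simplify(sp.diff(U, x) + x / (1 + T - t)) == 0
```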

Remark 3.
The quadratic growth condition (10) is used in Theorem 2 only to guarantee that the local martingale part of the semi-martingale decomposition of $\varphi(t, X_t)$, namely, by the Itô formula,

$$M_t = \int_0^t \nabla_x \varphi(s, X_s)^{*}\, \sigma(s, X_s, \alpha_s)\, dW_s, \tag{11}$$

is in $L^1$ and a martingale (and not only a local martingale). This means that the statement of Theorem 2 holds assuming only that (11) is an $L^1$ martingale, i.e., without condition (10).

Stochastic Hamilton-Jacobi-Bellman Equation
The present section generalizes the aforementioned Hamilton-Jacobi-Bellman equation to its stochastic counterpart. Let us first recall the Itô-Kunita formula.
Theorem 3 (Itô–Kunita formula). Let $F(t, x)$, $(t, x) \in [0, T] \times \mathbb{R}^n$, be a random field that is continuous in $(t, x)$ almost surely, such that
(i) for every $t \in [0, T]$, $F(t, \cdot)$ is a $C^2$-map from $\mathbb{R}^n$ into $\mathbb{R}$, $\mathbb{P}$-a.s.,
(ii) for each $x \in \mathbb{R}^n$, $F(\cdot, x)$ is a continuous semi-martingale $\mathbb{P}$-a.s., and it satisfies

$$F(t, x) = F(0, x) + \sum_{j=1}^m \int_0^t f_j(s, x)\, dY^j_s,$$

where $Y^j_s$, $j = 1, \ldots, m$, are $m$ continuous semi-martingales and $f_j(s, x)$, $x \in \mathbb{R}^n$, $s \in [0, T]$, are random fields that are continuous in $(s, x)$ and satisfy the following properties:
(a) for every $s \in [0, T]$, $f_j(s, \cdot)$ is a $C^2$-map from $\mathbb{R}^n$ to $\mathbb{R}$, $\mathbb{P}$-a.s.,
(b) for every $x \in \mathbb{R}^n$, $f_j(\cdot, x)$ is an adapted process.
Then, for any continuous semi-martingale $X_t = (X^1_t, \ldots, X^n_t)$,

$$\begin{aligned} F(t, X_t) = {} & F(0, X_0) + \sum_{j=1}^m \int_0^t f_j(s, X_s)\, dY^j_s + \sum_{i=1}^n \int_0^t \partial_{x^i} F(s, X_s)\, dX^i_s \\ & + \sum_{i=1}^n \sum_{j=1}^m \int_0^t \partial_{x^i} f_j(s, X_s)\, d[Y^j, X^i]_s + \frac{1}{2} \sum_{i,k=1}^n \int_0^t \partial^2_{x^i x^k} F(s, X_s)\, d[X^i, X^k]_s. \end{aligned}$$
Sticking, where possible, with the notation introduced in Section 2.2, we consider a stochastic optimal control problem where also the functions L, g, µ, and σ are random. More precisely, they depend also on ω ∈ Ω in a predictable way, namely, L(t, x, a, ·), g(x, ·), µ(t, x, ·), and σ(t, x, ·) are F t -measurable, for any (t, x, a) ∈ R + × R n × K. In order to distinguish them from the functions in the previous section and recall that the following are stochastic terms, we write also L S (t, x, a) = L(t, x, a, ·), g S (x) = g(x, ·).
We want then to maximize the objective functional

$$J(t_0, x, \alpha) = \mathbb{E}\left[\int_{t_0}^T L_S(s, X_s, \alpha_s)\, ds + g_S(X_T)\right], \tag{12}$$

where $X$ solves the SDE

$$dX_s = \mu(s, X_s, \alpha_s)\, ds + \sigma(s, X_s, \alpha_s)\, dW_s, \qquad X_{t_0} = x, \tag{13}$$

and $\alpha \in \mathcal{K}_L$. Let us introduce, in a completely analogous way as in the previous section, the value function

$$V(t, x) = \operatorname*{ess\,sup}_{\alpha \in \mathcal{K}_L(t, x)} \mathbb{E}\left[\int_t^T L_S(s, X_s, \alpha_s)\, ds + g_S(X_T) \,\middle|\, \mathcal{F}_t\right].$$

From now on, we may omit the explicit dependence on $\omega \in \Omega$ of the functions. Then, for any fixed $x$, $V(t, x)$ is an $\mathcal{F}_t$-adapted process but, a priori, it is not of bounded variation. We can anyway expect that it is a continuous semi-martingale and, therefore, by the representation theorem for semi-martingales and martingales (see, e.g., Section IV.31 and Section IV.36 in [48]), that it can be written as follows,

$$V(t, x) = V(0, x) + \int_0^t G(s, x)\, ds + \int_0^t Y(s, x)\, dW_s,$$

for some adapted random fields $G$ and $Y$. If this is the case and both of them are sufficiently smooth with respect to $x$, then the pair $(V, Y)$ should satisfy a stochastic Hamilton–Jacobi–Bellman equation (SHJB). More precisely, we say that $(\varphi, \Psi)$ solves the SHJB related with the optimal control problem (12) and (13) if $(\varphi, \Psi)$ satisfies the following backward stochastic partial differential equation

$$d\varphi(t, x) = -\sup_{a \in K}\left[ L^a_t \varphi(t, x) + \mathrm{tr}\big(\sigma^*(t, x, a)\, \Psi_x(t, x)\big) + L_S(t, x, a) \right] dt + \Psi(t, x)\, dW_t, \qquad \varphi(T, x) = g_S(x), \tag{14}$$

where $\Psi_x = (\partial_{x^i} \Psi^\ell)_{i=1,\ldots,n,\, \ell=1,\ldots,m} \in \mathbb{R}^{n \times m}$. See, e.g., Section 3.1 in [33] for more details about the derivation of Equation (14) and Section 4 in the same reference for results concerning the well-posedness of such an equation. We state here the verification theorem, which tells us that a sufficiently smooth solution of the SHJB equation coincides with the value function $V$.

Theorem 4. Let $(\varphi, \Psi)$ be a smooth solution of the SHJB Equation (14) with $t_0 = 0$, and assume that integrability and growth conditions analogous to those of Theorem 2 hold. Suppose further that there exists a predictable admissible control $A^*(t, x, \omega)$ attaining, for almost every $(t, x, \omega)$, the supremum in Equation (14), and that it is regular enough so that the SDE (13) is well-posed with solution $X$. Then $(\varphi, \Psi) = (V, Y)$ and, moreover, for any initial data $(0, x)$ with $x \in \mathbb{R}^n$, $A^*(t, X_t, \omega)$ maximizes the objective function $J$.

Remark 4.
Under suitable regularity conditions on µ, σ, L S , and g S , it is possible to prove that the SHJB Equation (14) admits a unique solution satisfying the hypotheses of Theorem 4. A rigorous proof of this fact can be found in Section 4 of [33]. For further developments on SHJB equations and the related stochastic optimal control problems we refer the reader to, e.g., [49][50][51][52], as well as the already mentioned paper by Peng [33].

Solutions of PDEs via Contact Symmetries
In this section, we recall some basic facts concerning the theory of symmetries on which our results are based, referring to [2,3] and [53][54][55] for a complete treatment of these topics. We start with a formal introduction to jet spaces (for an extended introduction to the subject see, e.g., [56,57]), and then proceed with contact symmetries and their applications in solving PDEs. Although these results are well known, we include a short survey for the reader's convenience, and we introduce the notation that will be adopted in the rest of the paper.

Jet Spaces and Jet Bundles
The jet space is a generalization of the notion of tangent bundle of a manifold. Let $M$ and $N$ be two open subsets of $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, and consider a smooth function $f : M \to N$. The $k$-th prolongation $\mathrm{pr}^{(k)} f$ of $f$ is the map collecting $f$ and all its partial derivatives up to order $k$. For example, if $m = 2$ and $n = 1$, then

$$\mathrm{pr}^{(2)} f(x, y) = \big(f, \partial_x f, \partial_y f, \partial_{xx} f, \partial_{xy} f, \partial_{yy} f\big)(x, y).$$

The $k$-th prolongation can also be looked at as the Taylor polynomial of degree $k$ for $f$ at the point $x$. The space whose coordinates represent the independent variables, the dependent variables, and the derivatives of the dependent variables up to order $k$ is called the $k$-th order jet space of the underlying space $N \times M$, and we denote it by $J^k(M, N)$. It is a smooth vector bundle on $M$ with projection $\pi_{k,-1} : J^k(M, N) \to M$ given by

$$\pi_{k,-1}(x, u, u_x, \ldots) = x.$$

To any function $f \in C^k(M, N)$, where $C^k(M, N)$ is the infinite-dimensional Fréchet space of $k$ times differentiable functions on $M$ taking values in $N$, we associate a continuous section of the bundle $(J^k(M, N), M, \pi_{k,-1})$ in the following way

$$D_k f(x) = \big(x, f(x), D^1 f(x), \ldots, D^k f(x)\big),$$

where $D^i f(x)$ is the vector collecting all the $i$-th order derivatives of $f$ with respect to $x$. In this setting, a differential equation is a sub-manifold $\Delta_E \subset J^k(M, N)$. For example, in the scalar case $N = \mathbb{R}$, we usually consider $\Delta_E$ as the null set of some regular functions, i.e., $\Delta_E = \{ z \in J^k(M, N) : F_1(z) = \cdots = F_p(z) = 0 \}$ for some $F_1, \ldots, F_p \in C^\infty(J^k(M, N), \mathbb{R})$. We say that a smooth function $f : M \to N$ is a solution to the equation $E$ (represented by the sub-manifold $\Delta_E$) if, for any $x \in M$, we have $D_k f(x) \in \Delta_E$. The set of all solutions to equation $E$ will be denoted by $S_E$.
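For a concrete feel of the prolongation in the scalar case m = n = 1 (our own sketch, not the paper's notation), the k-th prolongation of f at x simply collects f and its derivatives up to order k:

```python
import sympy as sp

x = sp.symbols('x')

def prolongation(f, x, k):
    # pr^(k) f: the section x -> (f(x), f'(x), ..., f^(k)(x)) of J^k(R, R)
    return [f] + [sp.diff(f, x, i) for i in range(1, k + 1)]

# example: the second prolongation of sin(x) collects (sin x, cos x, -sin x)
assert prolongation(sp.sin(x), x, 2) == [sp.sin(x), sp.cos(x), -sp.sin(x)]
```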
For instance, in the previous case where $N = \mathbb{R}$, a single function $F \in C^\infty(J^k(M, N), \mathbb{R})$ determines the scalar equation $\Delta_E = \{F = 0\}$.

Remark 5. For technical reasons, it is usually not possible to consider generic equations $E$ (corresponding to generic sub-manifolds $\Delta_E \subset J^k(M, N)$). In the following, we always consider non-degenerate systems of differential equations in the sense of Definition 2.70 in [2]. This condition assures that, for each fixed $x_0 \in M$ and each set of derivatives $(u^0, u^0_x, u^0_{xx}, \ldots)$, there exists a solution to the equation defined in a neighborhood of $x_0$ with the prescribed derivatives $(u^0, u^0_x, u^0_{xx}, \ldots)$ at the point $x_0$. Since the precise formulation of this condition is quite technical and the evolution equations considered in Section 4 always satisfy such an assumption, we refer to Section 2.6 of [2] for complete details.

Contact Transformations
We want to introduce a class of transformations induced by diffeomorphisms of $J^2(M, N)$. For simplicity, we consider the case $k = 2$, $M \subset \mathbb{R}^n$ and $N = \mathbb{R}$. Consider a diffeomorphism $\Phi : J^2(M, N) \to J^2(M, N)$ given by the following relations

$$(x', u', u'_x, u'_{xx}) = \big(\Phi^x(x, u, u_x, u_{xx}),\, \Phi^u(x, u, u_x, u_{xx}),\, \Phi^{u_x}(x, u, u_x, u_{xx}),\, \Phi^{u_{xx}}(x, u, u_x, u_{xx})\big).$$

Hereafter, we use the notation $\Phi = (\Phi^x, \Phi^u, \Phi^{u_x}, \Phi^{u_{xx}})$. We now aim to define a transformation $F_\Phi$ on the space of smooth functions induced by the map $\Phi$ on the jet space. Let $U \in C^\infty(M, N)$ and consider the map $C_{U,\Phi} : M \to M$ given by

$$C_{U,\Phi}(x) = \Phi^x\big(x, U(x), \nabla U(x), \nabla^2 U(x)\big).$$

Definition 3. We say that $\Phi$ generates the (nonlinear) operator $F_\Phi$ if, for every $U \in C^\infty(M, N)$ in the domain $\mathcal{D}_{F_\Phi}$ (i.e., such that $C_{U,\Phi}$ is a diffeomorphism of $M$), there exists a function $F_\Phi(U) \in C^\infty(M, N)$ whose prolongation satisfies $D_2(F_\Phi(U))(C_{U,\Phi}(x)) = \Phi(D_2 U(x))$ for all $x \in M$.

Not every diffeomorphism generates such an operator. Consider, for instance, $M = N = \mathbb{R}$ and $\Phi(x, u, u_x, u_{xx}) = (\lambda x, u, u_x, u_{xx})$ with $\lambda \neq 0, 1$. In this case, for any $U \in C^\infty(M, N)$, the map $C_{U,\Phi}$ is given by $C_{U,\Phi}(x) = \lambda x$ and, thus, it does not depend on $U$ and it is always a diffeomorphism from $\mathbb{R}$ into itself, since $\lambda \neq 0$. This implies that $\mathcal{D}_{F_\Phi} = C^\infty(M, N)$ and also that, if the map $F_\Phi$ exists, then it must satisfy

$$F_\Phi(U)(\lambda x) = U(x)$$

for any $U \in C^\infty(M, N)$. On the other hand, differentiating this identity gives $(F_\Phi(U))'(\lambda x) = U'(x)/\lambda$, while $\Phi$ prescribes $(F_\Phi(U))'(\lambda x) = U'(x)$.
This simple counterexample shows that a diffeomorphism $\Phi : J^2(M, N) \to J^2(M, N)$ must satisfy some additional conditions in order to generate an operator $F_\Phi$. For this reason, we introduce the following definition.

Definition 4. A diffeomorphism $\Phi : J^2(M, N) \to J^2(M, N)$ is said to be a contact transformation if it generates a (nonlinear) operator $F_\Phi$ in the sense of Definition 3.
It is possible to give a nice geometric characterization of the set of contact transformations. From now on, we write $\Lambda^1 J^n(M, N)$ for the vector space of 1-forms on $J^n(M, N)$. In particular, consider the following 1-forms,

$$\omega^0 = du - \sum_{i=1}^n u_{x^i}\, dx^i, \qquad \omega^j = du_{x^j} - \sum_{i=1}^n u_{x^j x^i}\, dx^i, \quad j = 1, \ldots, n.$$

We denote by $\mathcal{C} \subset \Lambda^1 J^2(M, N)$ the contact structure, also called Cartan distribution in [56], which is generated by $\omega^0, \omega^1, \ldots, \omega^n$.

Theorem 5. A diffeomorphism $\Phi : J^2(M, N) \to J^2(M, N)$ is a contact transformation in the sense of Definition 4 if and only if it preserves the contact structure $\mathcal{C}$, that is,

$$\Phi^*(\mathcal{C}) \subseteq \mathcal{C},$$

where $\Phi^*$ is the pull-back of differential forms on $J^2(M, N)$ induced by $\Phi$.
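The preservation of the contact structure can be checked by hand in simple cases. The following sketch (our own, for n = 1 and restricted to the first-order form) verifies that the prolonged dilation (x, u, u_x) ↦ (λx, u, u_x/λ) pulls the contact form θ = du − u_x dx back to itself, treating dx and du as formal symbols:

```python
import sympy as sp

x, u, p, lam, dx, du = sp.symbols('x u p lam dx du', nonzero=True)

# prolonged dilation Phi: (x, u, u_x) -> (lam*x, u, u_x/lam)
X, U, P = lam * x, u, p / lam

# formal differentials of the transformed coordinates (dx, du as symbols)
dX = sp.diff(X, x) * dx + sp.diff(X, u) * du
dU = sp.diff(U, x) * dx + sp.diff(U, u) * du

# pull-back of the contact form theta = du - u_x dx
theta_pullback = sp.expand(dU - P * dX)
assert theta_pullback == du - p * dx   # Phi* theta = theta, so Phi preserves C
```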

Remark 6.
The contact transformation Φ is uniquely determined by its action on J 1 (M, N). In particular, Φ x , Φ u , and Φ u x depend only on (x, u, u x ) and they do not depend on u xx (see, e.g., Chapter 2 in [56]).
In the study of the geometry of jet spaces (see, e.g., Chapter 6 of [57]), the term "contact structure" is often used to express the set of forms C. This custom is due to the fact that, as explained in Remark 6, the contact transformations are extensions of diffeomorphisms on J 1 (M, N), i.e., the set of transformations considered here is in one-to-one correspondence with the one usually considered in contact geometry.
In the following, we will not consider just a single contact transformation but one-parameter groups of contact transformations $\Phi_\lambda$, which means that $\Phi_\cdot : \mathbb{R} \times J^2(M, N) \to J^2(M, N)$ is $C^\infty$, $\Phi_\lambda$ is a contact transformation for each $\lambda \in \mathbb{R}$, $\Phi_0(x, u, u_x, u_{xx}) = (x, u, u_x, u_{xx})$, and, for each $\lambda_1, \lambda_2 \in \mathbb{R}$, $\Phi_{\lambda_1} \circ \Phi_{\lambda_2} = \Phi_{\lambda_1 + \lambda_2}$. In general, a one-parameter group of diffeomorphisms $\Phi_{Y,\lambda}$, where $\lambda \in \mathbb{R}$, is generated by a vector field $Y \in T J^2(M, N)$, i.e., belonging to the tangent bundle of $J^2(M, N)$, which in local coordinates has the expression

$$Y = \sum_{i} \phi^{x^i}\, \partial_{x^i} + \phi^u\, \partial_u + \sum_{i} \phi^{u_{x^i}}\, \partial_{u_{x^i}} + \sum_{i,j} \phi^{u_{x^i x^j}}\, \partial_{u_{x^i x^j}}, \tag{16}$$

by the following relations

$$\partial_\lambda \Phi_{Y,\lambda}(x, u, u_x, u_{xx}) = Y\big(\Phi_{Y,\lambda}(x, u, u_x, u_{xx})\big), \qquad \Phi_{Y,0} = \mathrm{id}, \tag{17}$$

for any $\lambda \in \mathbb{R}$ and $(x, u, u_x, u_{xx}) \in J^2(M, N)$. It is useful to introduce the following natural notion.

Definition 5. A vector field $Y$ (of the form (16)) on $J^2(M, N)$ is called an infinitesimal contact transformation if it generates (through Equation (17)) a one-parameter group $\Phi_\lambda$ of contact transformations.
The following theorem characterizes all the infinitesimal contact transformations on $J^2(M, N)$.

Theorem 6. A vector field $Y$ of the form (16) is an infinitesimal contact transformation if and only if there exists a smooth function $\Omega = \Omega(x, u, u_x)$ such that $Y = Y_\Omega$, where

$$Y_\Omega = -\sum_i \partial_{u_{x^i}} \Omega\, \partial_{x^i} + \Big(\Omega - \sum_i u_{x^i}\, \partial_{u_{x^i}} \Omega\Big)\, \partial_u + \sum_i \big(\partial_{x^i} \Omega + u_{x^i}\, \partial_u \Omega\big)\, \partial_{u_{x^i}} + \cdots, \tag{18}$$

and the dots denote the second-order components, which are uniquely determined by prolongation.

Proof. The proof can be found in Chapter 21 of [3] and references therein.

Remark 8.
We say that a vector field of the form $Y_\Omega$ satisfying the hypotheses of Theorem 6 is the infinitesimal contact transformation generated by the (contact generating) function $\Omega$. Under this terminology, Theorem 6 guarantees that any infinitesimal contact transformation is generated in a unique way by some smooth function $\Omega : J^1(M, N) \to \mathbb{R}$.

There is a special subset of vector fields of the type (18) arising from coordinate transformations involving only the dependent and independent variables $(x, u)$.

Definition 6.
We say that $Y_{\Omega_{Lie,f,g}}$ is a (projected) Lie point transformation if it is a contact transformation generated by a vector field of the form

$$Y = \sum_{i=1}^n f^i(x)\, \partial_{x^i} + g(x, u)\, \partial_u + \cdots, \tag{19}$$

where $f = (f^1, \ldots, f^n) \in C^\infty(M, \mathbb{R}^n)$ and $g \in C^\infty(J^0(M, N), \mathbb{R})$.

Remark 9.
It is simple to see that a Lie point transformation $Y_{\Omega_{Lie,f,g}}$ can be reduced to a standard vector field $\tilde{Y} = \sum_i f^i(x)\, \partial_{x^i} + g(x, u)\, \partial_u$ on $J^0(\mathbb{R}^n, \mathbb{R})$, i.e., $\tilde{Y}$ is the generator of a one-parameter group of transformations involving only the dependent and independent variables $(x, u)$.

Remark 10.
Another important property of Lie point transformations is the following. Denoting by $\Phi_{Lie,f,g,\lambda}$, where $\lambda \in \mathbb{R}$, the one-parameter group generated by the Lie point transformation $Y_{\Omega_{Lie,f,g}}$, we have that, for any $\lambda \in \mathbb{R}$, the domain of the nonlinear operator $F_{\Phi_{Lie,f,g,\lambda}}$ generated by $\Phi_{Lie,f,g,\lambda}$ is the whole $C^\infty(M, N)$.

For what follows, we introduce the (formal) total derivative operators

$$D_{x^i} = \partial_{x^i} + u_{x^i}\, \partial_u + \sum_j u_{x^i x^j}\, \partial_{u_{x^j}}, \qquad i = 1, \ldots, n. \tag{20}$$

In a similar way, we write $D_t = \partial_t + u_t\, \partial_u + \sum_i u_{t x^i}\, \partial_{u_{x^i}}$. We can characterize more precisely the general form of Lie point transformations.
Theorem 7. A vector field of the form (19) is an infinitesimal contact transformation if and only if it is generated by a function of the form

$$\Omega_{Lie,f,g}(x, u, u_x) = g(x, u) - \sum_{i=1}^n f^i(x)\, u_{x^i}, \tag{21}$$

namely, $Y_{\Omega_{Lie,f,g}}$ has the following expression

$$Y_{\Omega_{Lie,f,g}} = \sum_i f^i(x)\, \partial_{x^i} + g(x, u)\, \partial_u + \sum_i \Big( D_{x^i} g - \sum_j u_{x^j}\, D_{x^i} f^j \Big)\, \partial_{u_{x^i}} + \cdots.$$

Proof. The theorem is a direct application of Theorem 6 to vector fields of the form (19).
If $n = 1$ and the coordinate system of $J^0(\mathbb{R}, \mathbb{R})$ is given by $(x, u)$, some examples of Lie point transformations are:
• The dilation of the independent variable $x$, i.e., $\tilde{Y} = x \partial_x$ (see the notation in Remark 9), related to the generator function $\Omega = -x u_x$ and generating the one-parameter group defined by $(x, u) \mapsto (e^\lambda x, u)$.
• The dilation of the dependent variable $u$, namely, $\tilde{Y} = u \partial_u$, related to the generator function $\Omega = u$ and generating the one-parameter group defined by $(x, u) \mapsto (x, e^\lambda u)$.
We conclude this section by providing the definition of a symmetry of a differential equation.
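The first example can be checked directly: the contact vector field of Ω = −x u_x has components (x, 0, −u_x) in the coordinates (x, u, u_x), so its flow on (x, u_x) is Φ_λ(x, u_x) = (e^λ x, e^{−λ} u_x). A small symbolic sketch (our own) verifies the one-parameter group law:

```python
import sympy as sp

x, p, l1, l2 = sp.symbols('x p lambda_1 lambda_2')

# flow of the prolonged dilation Y = x d/dx acting on (x, u_x):
# Phi_lambda(x, p) = (exp(lambda) * x, exp(-lambda) * p)
def Phi(lam, x, p):
    return sp.exp(lam) * x, sp.exp(-lam) * p

# group law: Phi_{l1} o Phi_{l2} = Phi_{l1 + l2}
x2, p2 = Phi(l2, x, p)
x12, p12 = Phi(l1, x2, p2)
xs, ps = Phi(l1 + l2, x, p)
assert sp.simplify(x12 - xs) == 0 and sp.simplify(p12 - ps) == 0
```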
Definition 7. A contact transformation $\Phi$ is a symmetry of the differential equation $E$ if, for any solution $U \in S_E \cap \mathcal{D}_{F_\Phi}$, the function $F_\Phi(U)$ is a solution to $E$, where $F_\Phi$ and $\mathcal{D}_{F_\Phi}$ are the operator generated by the contact transformation $\Phi$ and the domain of $F_\Phi$, respectively (see Definition 3). We say that an (infinitesimal) contact transformation $Y_\Omega$ is an (infinitesimal contact) symmetry of the differential equation $E$ if the one-parameter group $\Phi_{Y_\Omega,\lambda}$ generated by $Y_\Omega$ is a set of symmetries of the equation $E$.

Remark 11.
With an abuse of language, we say that the function Ω ∈ C ∞ (J 1 (M, N), R) is a contact symmetry of the equation E if the corresponding contact vector field Y Ω is a symmetry of E .

Remark 12. If Y is a Lie point transformation and it is a contact symmetry of the equation E , then we say that Y is a Lie point symmetry of the equation E .
It is possible to give a completely geometric characterization of the contact symmetries of a differential equation $E$.

Theorem 8. The infinitesimal contact transformation $Y_\Omega$ is a symmetry of the non-degenerate differential equation $E$ (see Remark 5 for the definition of non-degenerate differential equation), with $\Delta_E = \{F_1 = \cdots = F_p = 0\}$, if and only if

$$Y_\Omega(F_i) = 0 \quad \text{on } \Delta_E,$$

where $i = 1, \ldots, p$.
Proof. The proof is given in Theorem 2.27 and Theorem 2.71 in [2] for the case of Lie point symmetries that are diffeomorphisms of $J^k(M, N)$, for $k \ge 0$. Since contact transformations are diffeomorphisms of $J^h(M, N)$, for any $h \ge 1$ (see, e.g., Chapter 21 of [3]), the case of contact transformations can be proved using the same methods.

Symmetries and Classical Noether Theorem
Let us discuss here the classical Noether theorem in the Lagrangian mechanics setting described in Section 2.1. Heuristically, the Noether theorem says that to any infinitesimal transformation leaving invariant the optimal control problem, namely Equation (1) and the Lagrangian L, there corresponds a constant of motion.
More precisely, let $Y_{x,a}$ be a vector field in $\mathbb{R}^n \times \mathbb{R}^n$ transforming the variables $x^i$ and $a^i$ of Equation (1) and the Lagrangian $L$. We suppose that $Y_{x,a}$ is "projected" with respect to the variables $x^i$, that is,

$$Y_{x,a} = \sum_{i=1}^n f^i(x)\, \partial_{x^i} + \sum_{i=1}^n h^i(x, a)\, \partial_{a^i}. \tag{23}$$

If we want the projected vector field (23) to be a symmetry of Equation (1), then we need that

$$h^i(x, a) = \sum_{j=1}^n a^j\, \partial_{x^j} f^i(x), \qquad i = 1, \ldots, n. \tag{24}$$

If we also require that $L$ is invariant with respect to the flow of $Y_{x,a}$, then we must have

$$Y_{x,a}(L) = \sum_{i=1}^n f^i(x)\, \partial_{x^i} L(x, a) + \sum_{i=1}^n h^i(x, a)\, \partial_{a^i} L(x, a) = 0. \tag{25}$$

So we say that $Y_{x,a}$ is a symmetry of the optimal control problem of Section 2.1 if and only if conditions (24) and (25) hold.

Theorem 9 (Noether theorem).
Let $Y_{x,a}$ be a symmetry of the Lagrangian $L$ according with Equation (25). Then, supposing the existence of a $C^1$ optimal control $\alpha_t$, we have that

$$\sum_{i=1}^n \partial_{a^i} L(X_t, \alpha_t)\, f^i(X_t) \tag{26}$$

is constant with respect to time $t \in [t_0, T]$.
Proof. Let us compute the derivative with respect to time of (26); by the Euler–Lagrange Equation (4), we have

$$\frac{d}{dt} \sum_{i=1}^n \partial_{a^i} L(X_t, \alpha_t)\, f^i(X_t) = \sum_{i=1}^n \partial_{x^i} L(X_t, \alpha_t)\, f^i(X_t) + \sum_{i,j=1}^n \partial_{a^i} L(X_t, \alpha_t)\, \partial_{x^j} f^i(X_t)\, \alpha^j_t = Y_{x,a}(L)(X_t, \alpha_t),$$

where the last equality uses condition (24), and this is zero as a consequence of Equation (25).
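The same computation can be reproduced symbolically in the simplest case (our own sketch): for the free-particle Lagrangian L = a²/2 and the translation symmetry f = 1, the Noether quantity (26) is the momentum ∂L/∂a, which is constant along any Euler–Lagrange solution ẍ = 0.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# free-particle Lagrangian L(x, a) = a**2 / 2 along a trajectory, a = x'(t)
a = sp.diff(x(t), t)
L = a**2 / 2

# Noether quantity for the translation symmetry f = 1: the momentum dL/da
momentum = sp.diff(L, a)
assert momentum == a

# its time derivative is x''(t), which vanishes on Euler-Lagrange solutions
drift = sp.diff(momentum, t)
assert drift == sp.diff(x(t), t, 2)

# on the explicit solution x(t) = c0 + c1*t the momentum is the constant c1
c0, c1 = sp.symbols('c0 c1')
on_solution = momentum.subs(x(t), c0 + c1 * t).doit()
assert on_solution == c1
```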
It is possible to give an equivalent formulation of Theorem 9 using the Lie point symmetries of Hamilton-Jacobi equation.

Theorem 10 (Noether theorem, Hamilton-Jacobi version).
Let $\Omega(x, u, u_x)$ be a contact symmetry of the Hamilton–Jacobi Equation (3) and let $U$ be a solution to (3). Then the quantity

$$\Omega\big(X_t, U(t, X_t), \nabla U(t, X_t)\big), \tag{27}$$

where $X_t$ is the solution to Equation (1) with $\alpha^i_t = A^i(X_t, \nabla U(t, X_t))$ (see Section 2 for the definition of the map $A$), is constant with respect to time $t \in [t_0, T]$.

Lemma 1. Y Ω is a contact symmetry of the Hamilton-Jacobi Equation (3) if and only if
Proof. It is a consequence of Equation (18) and Definition 7. See, e.g., Section 21.2 in [3].
Proof of Theorem 10. See the proof of Theorem 12 below, where the statement is proven in the general stochastic case.

Remark 13.
The two formulations of the Noether theorem given by Theorems 9 and 10 are equivalent in the sense that Y is a symmetry of the optimal control problem if and only if Ω is a contact symmetry of the related Hamilton-Jacobi equation, namely, Equation (28) holds. Furthermore, if we choose the optimal control α i t to be equal to A i (X t , ∇U(X t )), then the two conserved quantities (26) and (27) are equal.

The Case of Deterministic HJB Equation
Considering $M = \mathbb{R}_+ \times \mathbb{R}^n$ and denoting the first variable by $t$ and the other independent variables by $x^i$, for $i = 1, \ldots, n$, for the Hamilton–Jacobi–Bellman equation we have that $\Delta_E$ is described by the equation

$$\partial_t u + H(t, x, u_x, u_{xx}) = 0. \tag{29}$$

Equation (29) is a special kind of evolution equation, since it has the form

$$\partial_t u + H(t, x, u, u_x, u_{xx}) = 0 \tag{30}$$

for some smooth function $H \in C^2(\mathbb{R} \times J^2(\mathbb{R}^n, \mathbb{R}))$, where $u_x = (u_{x^1}, \ldots, u_{x^n})$ and $u_{xx} = (u_{x^i x^j})_{i,j=1,\ldots,n}$. In this case, it is convenient to choose a generating function of the form

$$\Omega(t, x, u, u_x). \tag{31}$$

Remark 14. It is important to notice that, for a generic contact symmetry on $J^2(M, \mathbb{R}) = J^2(\mathbb{R}_+ \times \mathbb{R}^n, \mathbb{R})$, the generating function has the form

$$\Omega(t, x, u, u_t, u_x), \tag{32}$$

depending also on the variable $u_t$, which represents the time derivative. Choosing a generating function of the form (31) instead of the form (32) means considering contact transformations that do not change the time variable $t$. The main reason is that the time variable in stochastic equations plays a peculiar role and cannot be changed in the same way as the spatial variable $x$. Nevertheless, in [24,25,29] also a special kind of time change has been considered, corresponding to the generating function

$$\hat{\Omega}(t, x, u, u_t, u_x) = f(t)\, u_t + \Omega_{Lie,f,g}(t, x, u, u_x), \tag{33}$$

where $f \in C^\infty(\mathbb{R}_+, \mathbb{R})$ and $\Omega_{Lie,f,g}(t, x, u, u_x)$ is the generator of a Lie point transformation, see Equation (21) (see also Remark 15 for a further discussion of this point).

Theorem 11.
Consider an evolution PDE of the form (30). An infinitesimal contact transformation generated by a function $\Omega$ of the form (31) is a contact symmetry for Equation (30) if and only if

$$D_t \Omega + \partial_u H\, \Omega + \sum_i \partial_{u_{x^i}} H\, D_{x^i} \Omega + \sum_{i,j} \partial_{u_{x^i x^j}} H\, D_{x^i x^j} \Omega = 0 \quad \text{on } \Delta_E, \tag{34}$$

where $D_{x^i}$ are defined in Equation (20) and $D_{x^i x^j}(\cdot) = D_{x^i}(D_{x^j}(\cdot))$.

Let us introduce the process

O_t = Ω(t, X_t, U(t, X_t), ∇U(t, X_t)),

where X_t is a solution to Equation (5) with respect to an optimal control A^*_t.

Assumption 1.
There exists at least one measurable function A(t, x, u_x, u_xx) such that

H(t, x, u_x, u_xx, A(t, x, u_x, u_xx)) = sup_{a∈K} H(t, x, u_x, u_xx, a).

As a consequence of Assumption 1, we can choose the process α_t = A(t, X_t, ∇U(t, X_t), D^2 U(t, X_t)) to be the optimal control, provided that the solution U to Equation (9) is at least C^2. The next result is our first stochastic generalization of the Noether theorem.

Theorem 12. Let Assumption 1 hold true. Suppose that the solution U to Equation (9) is continuously differentiable with respect to time and C^2 with respect to x. If Ω is a contact symmetry of Equation (9), then O_t is a local martingale.

Remark 15.
The works [24,25,29] present a Noether theorem involving a time change and a Lie point transformation with a generator of the form (33), for an optimal control system with affine-type control and an objective function with quadratic dependence on the control. More precisely, they proved that, if Ω̂ of the form (33) is a symmetry of the HJB equation, then the process Ô_t defined in Equation (35) is a local martingale. The presence of some time invariance was essential in the papers [21,28] for extending the concept of integrable systems to the stochastic framework. We expect that the martingale property of the process (35) also holds in the general setting presented here. Since it is not completely clear what the role of the time change is in our setting, nor whether the conservation of (35) holds for more general time changes, we prefer to postpone this analysis to future work.
From now on, we take H as in Section 2.2. In order to prove Theorem 12, we first establish the following result.

Lemma 2. We have that
Proof. In the case where µ, σ, and A are C^1 in all their variables, the result follows from the fact that ∂_{a^i} H(t, x, u_x, u_xx, A(t, x, u_x, u_xx)) = 0.
The general case is a consequence of Assumption 1 and the envelope theorem. For the latter we refer the reader to, e.g., [61,62].
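The envelope-theorem step can be illustrated with a small symbolic computation (a toy Hamiltonian of our choosing, not from the paper): since ∂_a H vanishes at the maximizer A(x), differentiating the optimized Hamiltonian x ↦ H(x, A(x)) gives the same result as taking the partial derivative of H in x and then evaluating at a = A(x).

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
H = x*a - a**2/2                    # toy Hamiltonian, strictly concave in the control a
A = sp.solve(sp.diff(H, a), a)[0]   # interior maximizer A(x), where dH/da = 0

total = sp.diff(H.subs(a, A), x)    # d/dx of the optimized Hamiltonian H(x, A(x))
partial = sp.diff(H, x).subs(a, A)  # envelope theorem: partial_x H evaluated at a = A(x)
assert sp.simplify(total - partial) == 0
```

The extra term A'(x) ∂_a H that the chain rule would produce drops out precisely because ∂_a H = 0 at the optimum; this is the mechanism used in the proof of Lemma 2.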
Proof of Theorem 12. We compute the differential of O_t using Itô's formula, obtaining Equation (36). Since U ∈ C^{2,3}([0, T] × R^n, R), we also have Equation (37). Exploiting Equations (36) and (37), the fact that X_t is a solution to (5), and the relations between the derivatives of U and the jet coordinates, we can write dO_t as a drift term plus dM_t, where M_t is a local martingale. Using the explicit definition of D_{x^i}, it is simple to rewrite the drift in terms of the total derivatives. Using Lemma 2, the fact that we can choose α_t = A(t, X_t, ∇U(t, X_t), D^2 U(t, X_t)), and the determining Equation (34), we obtain that the drift vanishes, which concludes the proof.

The Case of Stochastic HJB Equation
We face the problem of the stochastic HJB equation; that is, we consider, as we did in Section 2.3, the Hamiltonian H_S defined by Equation (38) and the backward stochastic HJB Equation (39). Although some ideas concerning symmetries for SPDEs are discussed, e.g., in [63,64], a general theory has not yet been developed. For this reason, we extend the notion of infinitesimal symmetry introduced in Definition 7 in the following way. Hereafter, we consider the probability space (W, F_t, P), where W = C_0(R, R^m) is the canonical space for the Brownian motion W, F_t is the natural filtration generated by W_t, and P is the Wiener measure on W.

Definition 8.
Let Ω : R_+ × J^1(R^n, R) × W → R be a predictable regular random field on R_+ × J^1(R^n, R), which is C^1 with respect to the time t and C^2 in all other variables. We say that Y_Ω is a contact symmetry for Equation (39) when we have

Assumption 2.
There exists at least one measurable function A_S(t, x, u_x, u_xx, ψ_x) such that

H_S(t, x, u_x, u_xx, ψ_x, A_S(t, x, u_x, u_xx, ψ_x)) = sup_{a∈K} H_S(t, x, u_x, u_xx, ψ_x, a),

where H_S is defined by Equation (38).

Lemma 3. We have that
Proof. The proof is similar to the one of Lemma 2.
The following result represents our second stochastic generalization of the Noether theorem.
Theorem 13. Let Assumption 2 hold true. Suppose that the solution (U, Ψ) to Equation (39) is continuously differentiable with respect to time and C^3 with respect to x almost surely. If Ω is a contact symmetry of Equation (39), then Õ_t is a local martingale.
Proof. Since the proof is similar to the one of Theorem 12, we report here only some of its steps. By Theorem 3, we have Equations (41) and (42). Adopting the usual notation O_t = Ω(t, X_t, U(t, X_t), ∇U(t, X_t)) and α_t = A_S(t, X_t, ∇U(t, X_t), D^2 U(t, X_t), ∇Ψ_t(X_t)), plugging in Equations (41) and (42), and exploiting Theorem 3 in order to compute the quadratic variations, we obtain the expression of dO_t. Notice that, by Definition 8, the symmetry condition holds, which, by Lemma 3, can be rewritten in an equivalent form. Following then the same steps as in the proof of Theorem 12, we get the result.

Corollary 1.
Suppose that Ω is a Lie point symmetry of the form specified below, where c ∈ R and f_k, g : R^{n+1} → R are smooth functions such that, for j = 1, . . . , n and k = 1, . . . , m, the stated conditions hold. Then O_t = Ω(t, X_t, ∇U(t, X_t)) is a local martingale.
Proof. Under the previous conditions, the symmetry condition of Definition 8 is satisfied. The thesis then follows from Theorem 13.

Merton's Optimal Portfolio Problem
In this section, we propose a symmetry analysis of Merton's optimal portfolio selection problem (see the original paper [34] and [35] for a review of the subject). Let us consider a set of controls α_t = (c(t), γ(t)) and a controlled diffusion dynamics described by the SDE (43), where X is the wealth process, controlled by the proportion γ(t) ∈ [0, 1] invested in the risky asset at time t and by the consumption per unit time c(t) ∈ [0, +∞). Moreover, r is the constant interest rate, and µ(t), σ(t) > 0 are continuous functions with σ(t) bounded away from zero (or, in the case of Section 5.2, general continuous predictable stochastic processes). Fixing a finite time horizon T > 0, the optimal portfolio selection problem consists of maximizing the objective functional with running cost L(t, α_t) = e^{−ρt}V(c(t)).
Here, ρ ∈ (0, +∞) is the discount rate, V is a strictly concave utility function, assumed to be differentiable with V′(z) > 0 for z > 0, and g is a given function.
Let us remark that the set K introduced in Section 2.2 here has the form
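For concreteness, here is a minimal Euler–Maruyama simulation of the wealth dynamics, under the assumption (consistent with the Hamiltonian H_S written out in Section 5.2) that the SDE (43) reads dX_t = [(γ(µ − r) + r)X_t − c] dt + γσX_t dW_t; the constant controls and all numerical values are purely illustrative.

```python
import math
import random

def simulate_wealth(x0, mu, r, sigma, gamma, c, T, n, seed=0):
    """Euler-Maruyama scheme for dX = [(gamma*(mu - r) + r)*X - c] dt + gamma*sigma*X dW,
    with the proportion gamma invested in the risky asset and the consumption rate c
    held constant for simplicity (the optimal controls depend on t and X)."""
    rng = random.Random(seed)
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += ((gamma * (mu - r) + r) * x - c) * dt + gamma * sigma * x * dw
        path.append(x)
    return path

# illustrative parameters: 7% risky drift, 2% rate, 20% volatility, half the wealth invested
path = simulate_wealth(x0=1.0, mu=0.07, r=0.02, sigma=0.2, gamma=0.5, c=0.01, T=1.0, n=1000)
```

Such sample paths are what the local martingales of Corollary 2 would be evaluated along, once a classical solution U of the HJB equation is available.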

Markovian Case
The maximization problem introduced above is a particular case of the general one studied in Section 2.2, and the associated HJB equation is Equation (46). The optimal value (c^*, γ^*) of (c, γ) is given by the solution to the system of first-order conditions, namely, Equations (44) and (45). Substituting (c^*, γ^*) into the functional H yields the reduced Hamiltonian appearing in the PDE (46), with K(t, x, u_x) given by Equation (47). We look for the symmetries generated by a generating function of the form Ω(t, x, u, u_x). Hereafter, we assume that the function h_V defined above is a smooth function in a suitable open subset of R^2.
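Since the displayed first-order system is not reproduced above, the following sympy sketch re-derives it from the Hamiltonian of Section 5.2 specialized to the Markovian case (ψ_x = 0); the resulting closed form γ^* = −(µ − r)u_x/(σ²x u_xx) is our computation and implicitly assumes u_xx ≠ 0.

```python
import sympy as sp

t, x, ux, uxx, mu, r, sigma, rho, gamma, c = sp.symbols(
    't x u_x u_xx mu r sigma rho gamma c', real=True)
V = sp.Function('V')

# Markovian Hamiltonian: the H_S of Section 5.2 with psi_x = 0
H = (sp.exp(-rho*t)*V(c) + (gamma*(mu - r) + r)*x*ux - c*ux
     + sp.Rational(1, 2)*uxx*sigma**2*gamma**2*x**2)

# first-order condition in gamma gives the optimal proportion invested
gamma_star = sp.solve(sp.diff(H, gamma), gamma)[0]

# first-order condition in c: exp(-rho*t) * V'(c) = u_x, i.e. c* = (V')^{-1}(exp(rho*t) u_x)
foc_c = sp.Eq(sp.diff(H, c), 0)
```

With a CRRA utility V(z) = z^θ/θ, the condition foc_c can be inverted in closed form, which is what makes the explicit symmetry analysis of this section tractable.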
Theorem 14. The function Ω generates a contact symmetry of Equation (46) if and only if it admits one of the following forms.
Finally, Theorems 12 and 14 allow us to obtain the explicit forms of the local martingales of Merton's model.

Corollary 2.
Let U(t, x) be a classical solution to Equation (46) and let X t be the solution to Equation (43) with (γ, c) satisfying the equalities (44) and (45). Then, the processes are local martingales.
Proof of Theorem 14. The generating function Ω is a (contact) symmetry of the PDE if and only if the following set of determining equations holds, for some arbitrary constants d_1, d_2, and d_3. By (63), we obtain the expressions of f_1, f_2, and f_3. Inserting these expressions into (61), we get that Ω is a contact symmetry of Equation (46) if and only if it is a linear combination of the following expressions, where G_1, G_2, G_3, and G_4 are smooth solutions to the PDEs (48)–(51).
Equations (48)–(51) can be solved explicitly for some special forms of K(t, x, u_x). We take, in particular, the two expressions K_1 and K_2 derived from the isoelastic utility functions, also known as constant relative risk aversion utilities (see [65]). If we denote by Ω^1 and Ω^2 the symmetries of Equation (46) when K = K_1 and K = K_2, respectively, we have that G^1_1 solves Equation (48) with h = h_1. Making the ansatz G^1_1(t, u_x) = φ_1(t)u_x^r + φ_2(t)u_x^r log(u_x) + φ_3(t), the function G^1_3 solves (49) (with h = h_1) if and only if φ_1 and φ_2 solve the following ODEs. With an analogous ansatz, the function G^2_3 solves (50) (with h = h_2) if and only if φ_1 solves the following ODE.

Non-Markovian Case
We consider here the case where µ(t) and σ(t) are continuous stochastic processes, predictable with respect to the filtration F_t; that is, the problem now fits in the more general model treated in Section 2.3. This case is relevant, for example, when we consider stochastic volatility models (see, e.g., [36,38,66] for stochastic volatility models and [39] for the non-Markovian Merton problem of the form approached here). We also assume that g(x, ω) is an F_T-measurable random field. In this case, the value function is a random field depending on the time t and the variable x. The random field U satisfies the following backward stochastic PDE:

dU(t, x) + sup_{(c,γ)∈K} H_S(t, x, ∇U(t, x), D^2 U(t, x), ∇Ψ(t, x), (c, γ)) dt = Ψ(t, x) dW_t, (68)

where

H_S(t, x, u_x, u_xx, ψ_x, (c, γ)) = exp(−ρt)V(c) + (γ(µ(t) − r) + r)x u_x − c u_x + xσ(t)γψ_x + (1/2) u_xx σ(t)^2 γ^2 x^2.

The optimal value of (c, γ) is given by the solution to the system of first-order conditions; γ^* is obtained from the condition in γ, while c^* is given by Equation (44). This implies that

H_S(t, x, u_x, u_xx, ψ_x) = −((µ(t) − r)u_x + σ(t)ψ_x)^2 / (2σ(t)^2 u_xx) + K(t, x, u_x),

where K(t, x, u_x) is given by Equation (47). In the following, we write δ_S(t) = (µ(t) − r)^2/σ(t)^2, where we recall that here µ and σ are generic predictable continuous stochastic processes. So we consider a generator function Ω_S(t, u, u_x, ω), depending explicitly on ω.
Corollary 4. Under the assumptions of Theorem 13, the corresponding process is a local martingale.
Proof. If V(z) = z^θ/θ, we can compute h_V explicitly. This implies that ((θ − 1)/θ) u_x ∂_{u_x} h_V − h_V = 0. So, using Equations (71) and (72), we get that Ω_5, where G_5(t, u_x) is any solution to Equation (74), is a symmetry of Equation (68). A particular solution to Equation (74) is G_5 ≡ 0, in which case Ω_5 has the form Ω_5 = −u − xu_x/θ. Moreover, −u − xu_x/θ satisfies the hypotheses of Corollary 1, from which we get the thesis. The second part of the corollary can be proven in a similar way.
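The scaling identity for h_V used in this proof can be verified symbolically. In the sketch below (our reconstruction of the quantities involved), c^* = (e^{ρt}u_x)^{1/(θ−1)} solves the first-order condition e^{−ρt}V′(c) = u_x for V(z) = z^θ/θ, and h_V = e^{−ρt}V(c^*) − c^*u_x; the identity ((θ − 1)/θ)u_x ∂_{u_x}h_V − h_V = 0 is checked for the sample exponent θ = 1/2.

```python
import sympy as sp

t, ux, rho = sp.symbols('t u_x rho', positive=True)
theta = sp.Rational(1, 2)  # sample CRRA exponent; the identity is exponent-independent

# first-order condition exp(-rho*t) * c**(theta - 1) = u_x  =>  c*
c_star = (sp.exp(rho*t)*ux)**(1/(theta - 1))

# h_V(t, u_x) = exp(-rho*t) V(c*) - c* u_x  with  V(z) = z**theta / theta
h_V = sp.exp(-rho*t)*c_star**theta/theta - c_star*ux

# scaling identity used in the proof
identity = (theta - 1)/theta * ux * sp.diff(h_V, ux) - h_V
assert sp.simplify(identity) == 0
```

The identity reflects the homogeneity h_V ∝ u_x^{θ/(θ−1)}, which is what singles out CRRA utilities in this computation.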
As already mentioned in the introduction, the construction of the martingales obtained in Corollaries 2 and 4 could be deeply connected to the well-known explicit solutions of Merton's optimal portfolio problem (see, e.g., [35] for a review and [36,37] for recent developments on the explicit solutions of Merton's problem). The investigation of the link between these two notions will be the subject of a future paper.

Conclusions
We proposed a generalization of the Noether theorem to a generic stochastic optimal control problem, exploiting the tools of contact geometry and contact transformations. The results are formulated in Theorems 12 and 13 and Corollary 1, and they establish a relation between any contact symmetry of the HJB equation associated with an optimal control problem and a local martingale built from the generator of the contact symmetry. For the case of deterministic coefficients and Lagrangian, we considered a generating function Ω(t, x, u, u_x) of a contact symmetry of the associated HJB equation, and we showed that the process Ω(t, X_t, U(t, X_t), ∇U(t, X_t)), where U(t, x) is the solution to the HJB equation and X_t is the solution to the stochastic optimal control problem, is a local martingale. We also proved an analogous result for a stochastic optimal control problem with stochastic coefficients and Lagrangian.
As we pointed out in the introduction, our results can be seen as a generalization of some previous works by Zambrini et al. (see [24,25,30]) in two directions: first, we considered a wider class of transformations, and second, we extended the mentioned results to the case of stochastic backward HJB equations.
We applied our results to Merton's portfolio problem, building some martingales related to its solution(s). We considered both the Markovian and the non-Markovian case.
Interesting future developments of this work include the investigation of the case where the solutions of the HJB equations are viscosity solutions (and not classical ones), so that there is not enough regularity to apply Itô's formula, and the study of symmetries of the backward stochastic equations underlying the HJB equations exploited in the present paper. Finally, giving a financial meaning to the martingales built here for Merton's problem could be another interesting line of research.