Article

Noether Theorem in Stochastic Optimal Control Problems via Contact Symmetries

by
Francesco C. De Vecchi
1,†,
Elisa Mastrogiacomo
2,†,
Mattia Turra
1,† and
Stefania Ugolini
3,*,†
1
Institute for Applied Mathematics & Hausdorff Center for Mathematics, Universität Bonn, Endenicher Allee 60, 53115 Bonn, Germany
2
Dipartimento di Economia, Università degli Studi dell’Insubria, Via Montegeneroso 71, 21100 Varese, Italy
3
Dipartimento di Matematica, Università degli Studi di Milano, Via Saldini 50, 20113 Milano, Italy
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2021, 9(9), 953; https://doi.org/10.3390/math9090953
Submission received: 8 February 2021 / Revised: 15 April 2021 / Accepted: 19 April 2021 / Published: 24 April 2021
(This article belongs to the Special Issue Applied Mathematical Methods in Financial Risk Management)

Abstract:
We establish a generalization of the Noether theorem for stochastic optimal control problems. Exploiting the tools of jet bundles and contact geometry, we prove that from any (contact) symmetry of the Hamilton–Jacobi–Bellman equation associated with an optimal control problem it is possible to build a related local martingale. Moreover, we provide an application of the theoretical results to Merton’s optimal portfolio problem, showing that this model admits infinitely many conserved quantities in the form of local martingales.

1. Introduction

The concept of symmetry of ordinary and partial differential equations (ODEs and PDEs) was introduced by Sophus Lie at the end of the 19th century with the aim of extending Galois theory from polynomial to differential equations. In fact, the whole theory of Lie groups and Lie algebras was developed by Lie himself, together with the principal tools for addressing the problem of symmetries of differential equations (see [1] for a historical introduction to the subject and [2,3] for some modern presentations).
One of the most important applications of the study of symmetries of physical systems was provided by Emmy Noether. She understood that when an equation comes from a variational problem, as in Lagrangian mechanics, general relativity or, more generally, field theory, it is possible to relate each symmetry of the equation to a conserved quantity, i.e., a function of the state of the system that does not change during the evolution of the dynamics, and, conversely, to each conserved quantity it is possible to associate a symmetry of the motion. The simplest examples, in Newtonian and Lagrangian mechanics, are the conservation of energy, which is related to invariance with respect to time translations, and the conservation of angular momentum, which is related to invariance with respect to rotations. The classical Noether theorem (see, e.g., [2,4] for an exposition of the subject) has found many generalizations in deterministic optimal control theory (see, e.g., [5,6], and also [7,8,9] on the related problem of commuting Hamiltonians and Hamilton–Jacobi multi-time equations).
The development of a Lie symmetry analysis for stochastic differential equations (SDEs) and general random systems is relatively recent (see, e.g., [10,11,12,13,14,15,16,17,18,19,20] for some recent developments in the non-variational case). For stochastic systems arising from a variational framework, it is certainly interesting to study the relation between their symmetries and functionals that are conserved by their flow, and, in particular, to establish stochastic generalizations of the Noether theorem.
The problem of finding some kinds of conservation laws for SDEs was discussed in various papers (see [21,22,23,24,25,26,27,28,29,30]). We could summarize three different approaches to this problem. The first one was considered by Misawa in [26,27,31], where the author studied the case in which some Markovian functions of solutions of SDEs are exactly conserved during time evolution.
The second approach was adopted by Zambrini and co-authors in a number of works. They work in the framework of Euclidean quantum mechanics, which represents a geometrically consistent stochastic deformation of classical mechanics in which a Gaussian noise is added to a classical system. This setting has a close connection with optimal transport and optimal control (see, e.g., [32] for an introduction to the topic). More precisely, in [29] a generalization of the Noether theorem was proved: to any one-parameter symmetry of a variational problem it is possible to associate a martingale that is independent of both the initial and the final condition of the system. This first step was quite important, since it stressed that the suitable generalization of a conserved quantity in the stochastic setting is not a function that remains constant during the time evolution of the stochastic system, but a function that is constant in mean. Another remarkable advance in the study of variational symmetries was achieved in [24,25,30], where it was noted that the symmetries of the Hamilton–Jacobi–Bellman (HJB) equation of the considered variational problem are the correct objects to be associated with the aforementioned martingales, and that contact geometry is a good framework in which a stochastic version of the Noether theorem can be formulated. Indeed, to each Lie point symmetry of the HJB equation it is possible to associate a martingale for the evolution of the system. It is also worth mentioning the papers [21,28], where a suitable notion of integrable system, i.e., a system with a number of martingales and symmetries equal to its dimension, is discussed.
The third approach was proposed by Baez and Fong in [22] (see also [23]). The authors presented a method for building martingales by applying the action of symmetries to solutions of the backward Kolmogorov equation, which can be interpreted as the linear version of the HJB equation obtained when the control and the objective function are trivial.
In our paper, we generalize the approach proposed by Zambrini and co-authors along at least two directions. First, we work in a different optimal control setting, which can be seen as a generalization of the variational framework described in their articles. Second, we do not restrict ourselves to Lie point symmetries, but take advantage of the more general notion of contact symmetry, namely a transformation preserving the contact structure of the jet space (see Section 3).
We prove here a Noether theorem (Theorem 12) that associates with any contact symmetry of the HJB equation of an optimal control problem a martingale given by the generator of the contact symmetry. More precisely, if we consider the generator $\Omega(t,x,u,u_x)$ of a contact symmetry (which is a regular function defined on the jet space $J^1(\mathbb{R}^n,\mathbb{R})$, i.e., a map depending on a function $u$ and on its first derivatives $u_x$), a regular solution $U(t,x)$ to the HJB equation, and the solution $X_t$ to the optimal control problem, then the process $O_t = \Omega(t,X_t,U(t,X_t),\nabla U(t,X_t))$, obtained by composing the generator $\Omega$ with the function $U$ and the process $X$, is a local martingale.
Furthermore, we generalize the Noether theorem also to the case where the coefficients and the Lagrangian of the control problem are random. Indeed, we establish that the Noether theorem holds also in the case of stochastic HJB equation, introduced in [33] by Peng to study the optimal control problem with stochastic final condition or stochastic Lagrangian, provided that we restrict ourselves to a subset of Lie point symmetries (Theorem 13 and Corollary 1).
Finally, the present paper provides an application of our theory to a non-trivial and interesting problem arising in mathematical finance, namely Merton's optimal portfolio problem. First proposed by Merton in [34], this model nowadays finds many different applications and generalizations (see [35] for a review of the original problem and its various generalizations, and [36,37,38,39] for some more recent works on the subject). A particular form of the Noether theorem for this problem can be found in [40]. We show here that the HJB equation of this optimal control system admits infinitely many contact symmetries. It is important to notice that the generalization to contact symmetries is essential in this specific problem, since, when we restrict to Lie point symmetries, as is done in the aforementioned literature, the equation admits only a finite number of infinitesimal invariants. The presence of infinitely many contact symmetries makes it possible to construct infinitely many martingales whose means are preserved by the evolution of the system. Moreover, we also point out that, when the final condition is random or the coefficients of the evolution of the stock are general adapted processes, our stochastic generalization of the Noether theorem (Corollary 1) allows us to construct some non-trivial martingales for this classical mathematical model. We think that the presence of these martingales could be related to the existence of many explicit solutions to Merton's problem, and we therefore expect that the methods presented here can be used to build other explicit solutions for it. We plan to study the financial consequences of the conservation laws identified in this paper in a future work.
Since the stochastic and geometric frameworks are not often put together, we also provide a concise introduction to both subjects.

Plan of the Paper

The paper is organized as follows. Section 2 introduces optimal control problems in both the deterministic and the stochastic setting, presenting also the HJB equation, and serves to fix the notation that we adopt throughout the paper. Contact symmetries and their properties in the PDE setting are discussed in Section 3. Section 4 contains the main theoretical results of the paper, namely the Noether theorems for deterministic and stochastic HJB equations. The application of these results to Merton's optimal portfolio problem is given in Section 5.

2. A Brief Survey on Stochastic Optimal Control

We give here an overview of some results about stochastic control problems, referring the interested reader to [41,42,43,44] for further investigations on such results, though more precise references will be given throughout the section. The main aim of this section is to introduce the topics we will deal with and to give the tools from the stochastic optimal control theory that we will use later on in the paper.

2.1. Deterministic Optimal Control and Lagrange Mechanics

We start by recalling some notions about deterministic optimal control and, in particular, we focus on Lagrangian-type optimal control problems, i.e., problems arising from the Lagrangian formulation of classical mechanics. More precisely, we consider a system of controlled ODEs of the form
\[ dX_t^i = \alpha_t^i\,dt, \tag{1} \]
where $X = (X^1,\dots,X^n)\colon[t_0,T]\to\mathbb{R}^n$ is in $C^1([t_0,T],\mathbb{R}^n)$, $t_0, T\in\mathbb{R}$, with $t_0\le T$, are the initial time and the final time horizon, respectively, and $\alpha = (\alpha^1,\dots,\alpha^n)\in C([t_0,T],\mathbb{R}^n)$ is the control function. We want to maximize the following objective functional
\[ J(t_0,x,\alpha) = \int_{t_0}^{T} L(X_s^{t_0,x},\alpha_s)\,ds + g(X_T^{t_0,x}), \tag{2} \]
where $X_t^{t_0,x}$ is the solution to the ODE (1) such that $X_{t_0}^{t_0,x} = x\in\mathbb{R}^n$.
We suppose that there exists a unique smooth function $A\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$ such that
\[ \sum_{i=1}^n A^i(x,p)\,p_i + L(x,A(x,p)) = \sup_{a\in\mathbb{R}^n}\Big\{\sum_{i=1}^n a^i p_i + L(x,a)\Big\}, \qquad (x,p)\in\mathbb{R}^n\times\mathbb{R}^n, \]
and also that, for any $x\in\mathbb{R}^n$, the map $A(x,\cdot) := (A^1(x,\cdot),\dots,A^n(x,\cdot))$ is smoothly invertible in all its variables as a function from $\mathbb{R}^n$ into itself. We then define the PDE
\[ \partial_t u + H(x,u_x) = \partial_t u + \sum_{i=1}^n A^i(x,u_x)\,u_{x_i} + L(x,A(x,u_x)) = 0, \tag{3} \]
where $u_x = (u_{x_1},\dots,u_{x_n})$. Equation (3) is usually referred to as the Hamilton–Jacobi equation in the context of Lagrangian mechanics, or as the Hamilton–Jacobi–Bellman equation in the context of optimal control theory.
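To make the objects $A$ and $H$ concrete, here is a worked example of our own (not taken from the paper): the quadratic Lagrangian $L(x,a) = -\tfrac{1}{2}|a|^2 - V(x)$ with a smooth potential $V$.

```latex
% Maximizing a·p + L(x,a) over a in R^n:
\sup_{a\in\mathbb{R}^n}\Big\{ a\cdot p - \tfrac{1}{2}|a|^2 - V(x) \Big\}
\quad\text{is attained at } a = p,
\qquad\text{so}\qquad
A(x,p) = p, \quad H(x,p) = \tfrac{1}{2}|p|^2 - V(x).
% Equation (3) then reads
\partial_t u + \tfrac{1}{2}|\nabla u|^2 - V(x) = 0 .
```

Indeed, substituting $A(x,u_x) = u_x$ into (3) gives $\partial_t u + |u_x|^2 - \tfrac{1}{2}|u_x|^2 - V(x) = \partial_t u + \tfrac{1}{2}|u_x|^2 - V(x)$, consistent with the displayed Hamiltonian.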
We state now the deterministic version of the so-called verification theorem.
Theorem 1.
Let $U(t,x)\in C^1([t_0,T]\times\mathbb{R}^n,\mathbb{R})$ be a solution to the Hamilton–Jacobi Equation (3). Then the optimal control problem (1) with objective functional (2) admits a unique solution, for any $x\in\mathbb{R}^n$, given, for every $i=1,\dots,n$, by
\[ \alpha_t^i = A^i\big(X_t,\nabla U(t,X_t)\big), \quad \text{for every } t\in[t_0,T]. \]
Proof. 
See, e.g., Theorem 4.4 in [41]. □
Remark 1.
It is important to note that, in the deterministic case and when $U\in C^{1,2}([t_0,T]\times\mathbb{R}^n,\mathbb{R})$, i.e., $U$ is differentiable once with respect to time $t$ and twice with respect to space $x\in\mathbb{R}^n$, the function $t\mapsto\alpha_t$ is in $C^1([t_0,T],\mathbb{R}^n)$ and satisfies the Euler–Lagrange equations
\[ \frac{d}{dt}\big(\partial_{a_i}L(X_t,\alpha_t)\big) - \partial_{x_i}L(X_t,\alpha_t) = 0, \qquad i = 1,\dots,n. \]
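As an illustration of our own, take $L(x,a) = -\tfrac{1}{2}a^2 - V(x)$ in dimension $n = 1$ (the sign conventions of a maximization problem):

```latex
\partial_a L(x,a) = -a, \qquad \partial_x L(x,a) = -V'(x),
% so the Euler–Lagrange equation d/dt(∂_a L) - ∂_x L = 0 becomes
-\dot{\alpha}_t + V'(X_t) = 0
\quad\Longleftrightarrow\quad
\ddot{X}_t = V'(X_t).
```

The inverted sign in front of the potential, compared with Newton's law, is typical of maximizing $\int L$ (equivalently, minimizing $\int(\tfrac12 a^2 + V)$), the convention that also appears in the Euclidean-mechanics setting mentioned in the Introduction.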

2.2. Classical Stochastic Optimal Control Problem

An optimal control problem consists of maximizing an objective functional, depending on the state of a dynamical system, on which we can act through a control process.
Let $K$ be a (convex) subset of $\mathbb{R}^d$ and fix a final time $T>0$. Denote by $W$ an $m$-dimensional Brownian motion on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbb{P})$, where $(\mathcal{F}_t)_{t\ge0}$ is the natural filtration generated by $W$. We assume that the state of the system is modeled by the following stochastic differential equation (SDE)
\[ dX_t = \mu(t,X_t,\alpha_t)\,dt + \sigma(t,X_t,\alpha_t)\,dW_t, \quad t_0 < t \le T, \qquad X_{t_0} = x, \tag{5} \]
where $\mu\colon\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^d\to\mathbb{R}^n$ and $\sigma\colon\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^d\to\mathbb{R}^{n\times m}$ are measurable functions that are also Lipschitz-continuous on the set $K$, i.e., there exists a constant $C\ge0$ such that, for every $t\in\mathbb{R}_+$, $x,y\in\mathbb{R}^n$, $a\in K$,
\[ |\mu(t,x,a)-\mu(t,y,a)| + \|\sigma(t,x,a)-\sigma(t,y,a)\| \le C\,|x-y|, \tag{6} \]
where $\|\sigma\|^2 = \operatorname{tr}(\sigma^*\sigma)$.
We will also use the notation $\mu = (\mu^i)_{i=1,\dots,n}$ and $\sigma = (\sigma^{i\ell})_{i=1,\dots,n,\ \ell=1,\dots,m}$.
The control process $\alpha = (\alpha_s)_s$, appearing in (5), is a $K$-valued progressively measurable process with respect to the filtration $(\mathcal{F}_t)_{t\ge0}$. We denote by $\mathcal{K}$ the set of control processes $\alpha$ such that
\[ \mathbb{E}\Big[\int_0^T \big(|\mu(t,0,\alpha_t)|^2 + \|\sigma(t,0,\alpha_t)\|^2\big)\,dt\Big] < +\infty. \tag{7} \]
We call $X_t^{t_0,x}$, $t\in[t_0,T]$, the solution to the SDE (5).
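As a purely illustrative numerical sketch (not part of the paper), a controlled SDE of the form (5) can be simulated with an Euler–Maruyama scheme; the scalar coefficients and the constant feedback control below are hypothetical choices of ours.

```python
import numpy as np

def euler_maruyama(mu, sigma, alpha, x0, t0, T, n_steps, rng):
    """Simulate one path of the scalar controlled SDE (n = m = 1)
    dX_t = mu(t, X_t, a_t) dt + sigma(t, X_t, a_t) dW_t,  X_{t0} = x0."""
    dt = (T - t0) / n_steps
    t, x = t0, x0
    path = [x0]
    for _ in range(n_steps):
        a = alpha(t, x)                    # feedback control a_t = alpha(t, X_t)
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment W_{t+dt} - W_t
        x = x + mu(t, x, a) * dt + sigma(t, x, a) * dw
        t += dt
        path.append(x)
    return np.array(path)

# Hypothetical coefficients (our choice, not from the paper):
# mu(t, x, a) = a * x, sigma(t, x, a) = 0.2 * x, constant control a = 0.05.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda t, x, a: a * x,
                      lambda t, x, a: 0.2 * x,
                      lambda t, x: 0.05,
                      x0=1.0, t0=0.0, T=1.0, n_steps=1000, rng=rng)
print(len(path))  # 1001 points, starting at X_0 = 1.0
```

This only approximates a strong solution under the Lipschitz conditions (6) and (7); it is a sketch, not the construction used in the paper.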
Remark 2.
Conditions (6) and (7) imply that, for any initial condition $(t_0,x)\in[0,T)\times\mathbb{R}^n$ and for all $\alpha\in\mathcal{K}$, there exists a unique strong solution $X_t^{t_0,x}$ to the SDE (5) (see, e.g., Theorem 2.2 in Chapter 4 of [45]).
Let $L\colon\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^d\to\mathbb{R}$ and $g\colon\mathbb{R}^n\to\mathbb{R}$ be two measurable functions, such that $g$ satisfies the quadratic growth condition $|g(x)|\le C(1+|x|^2)$, for every $x\in\mathbb{R}^n$, for some constant $C$ independent of $x$.
For $(t_0,x)\in[0,T)\times\mathbb{R}^n$, we denote by $\mathcal{K}_L(t_0,x)$ the subset of controls in $\mathcal{K}$ such that
\[ \mathbb{E}\Big[\int_{t_0}^T |L(t,X_t^{t_0,x},\alpha_t)|\,dt\Big] < +\infty. \]
We consider an objective function of the following form
\[ J(t_0,x,\alpha) = \mathbb{E}\Big[\int_{t_0}^T L(s,X_s^{t_0,x},\alpha_s)\,ds + g(X_T^{t_0,x})\Big]. \]
We are now in position to introduce the stochastic optimal control problem.
Definition 1.
Fix $(t_0,x)\in[0,T)\times\mathbb{R}^n$. The stochastic optimal control problem consists of maximizing the objective function $J(t_0,x,\alpha)$ over all $\alpha\in\mathcal{K}_L(t_0,x)$, subject to the SDE (5). The associated value function is then defined as
\[ U(t_0,x) = \max_{\alpha\in\mathcal{K}_L(t_0,x)} \mathbb{E}\Big[\int_{t_0}^T L(t,X_t^{t_0,x},\alpha_t)\,dt + g(X_T^{t_0,x})\Big]. \]
Given an initial condition $(t_0,x)\in[0,T)\times\mathbb{R}^n$, we call $\alpha^*\in\mathcal{K}_L(t_0,x)$ an optimal control if
\[ J(t_0,x,\alpha^*) = U(t_0,x). \]
We call the Hamilton–Jacobi–Bellman (HJB) equation the PDE
\[ \partial_t\varphi(t,x) + \sup_{a\in K}\big\{\mathcal{L}_t^a\varphi(t,x) + L(t,x,a)\big\} = 0, \quad (t,x)\in[t_0,T)\times\mathbb{R}^n, \qquad \varphi(T,x) = g(x), \quad x\in\mathbb{R}^n, \tag{8} \]
where $\mathcal{L}_t^a$ is the Kolmogorov operator associated with Equation (5), namely, for $\psi\in C^2(\mathbb{R}^n)$,
\[ \mathcal{L}_t^a\psi(x) = \frac{1}{2}\sum_{i,j=1}^n \eta^{ij}(t,x,a)\,\partial_{x_i}\partial_{x_j}\psi(x) + \sum_{i=1}^n \mu^i(t,x,a)\,\partial_{x_i}\psi(x), \quad (t,x,a)\in\mathbb{R}_+\times\mathbb{R}^n\times K, \]
with $\eta^{ij}$ defined, for every $i,j\in\{1,\dots,n\}$, as
\[ \eta^{ij}(t,x,a) = (\sigma\sigma^*)^{ij}(t,x,a) = \sum_{\ell=1}^m \sigma^{i\ell}(t,x,a)\,\sigma^{j\ell}(t,x,a), \quad (t,x,a)\in\mathbb{R}_+\times\mathbb{R}^n\times K. \]
We also write, for $x\in\mathbb{R}^n$, $p\in\mathbb{R}^n$, and $q\in\mathbb{R}^{n\times n}$,
\[ H(t,x,p,q) = \sup_{a\in K}\Big\{\frac{1}{2}\sum_{i,j=1}^n \eta^{ij}(t,x,a)\,q_{ij} + \sum_{i=1}^n \mu^i(t,x,a)\,p_i + L(t,x,a)\Big\}, \]
so that the HJB Equation (8) can also be written in the following way
\[ \partial_t\varphi(t,x) + H\big(t,x,\nabla\varphi,D^2\varphi\big) = 0, \quad (t,x)\in[t_0,T)\times\mathbb{R}^n, \qquad \varphi(T,x) = g(x), \quad x\in\mathbb{R}^n. \tag{9} \]
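For a concrete instance of the Hamiltonian $H$ (an illustrative example of ours, with $n = d = m = 1$, $K = \mathbb{R}$, $\mu(t,x,a) = a$, $\sigma(t,x,a) = \sigma_0$ constant, and running reward $L(t,x,a) = -\tfrac{1}{2}a^2$), the supremum can be computed explicitly:

```latex
H(t,x,p,q)
= \sup_{a\in\mathbb{R}}\Big\{ \tfrac{1}{2}\sigma_0^2\,q + a\,p - \tfrac{1}{2}a^2 \Big\}
= \tfrac{1}{2}\sigma_0^2\,q + \tfrac{1}{2}p^2,
\qquad\text{attained at } a^* = p .
```

With this choice, the HJB Equation (9) becomes the semilinear backward equation $\partial_t\varphi + \tfrac{1}{2}\sigma_0^2\,\partial_{xx}\varphi + \tfrac{1}{2}(\partial_x\varphi)^2 = 0$ with terminal condition $\varphi(T,x) = g(x)$.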
We state here the classical verification theorem.
Theorem 2.
Let $\varphi\in C^{1,2}([0,T)\times\mathbb{R}^n)\cap C^0([0,T]\times\mathbb{R}^n)$ be a solution to the HJB Equation (9) for $t_0 = 0$, satisfying the following quadratic growth condition, for some constant $C$,
\[ |\varphi(t,x)|\le C(1+|x|^2), \quad \text{for all } (t,x)\in[0,T]\times\mathbb{R}^n. \tag{10} \]
Suppose that there exists a measurable function $A^*(t,x)$, $(t,x)\in[0,T)\times\mathbb{R}^n$, taking values in $K$, such that
 (i)
We have
\[ 0 = \partial_t\varphi(t,x) + H\big(t,x,\nabla\varphi,D^2\varphi\big) = \partial_t\varphi(t,x) + \mathcal{L}_t^{A^*(t,x)}\varphi(t,x) + L\big(t,x,A^*(t,x)\big), \]
 (ii)
The SDE
\[ dX_s = \mu\big(s,X_s,A^*(s,X_s)\big)\,ds + \sigma\big(s,X_s,A^*(s,X_s)\big)\,dW_s, \]
with initial condition $X_t = x$, admits a unique solution $X_s^*$,
 (iii)
The process $A^*(s,X_s^*)$, $s\in[t,T]$, lies in $\mathcal{K}_L(t,x)$.
Then
\[ \varphi(t,x) = U(t,x), \quad (t,x)\in[0,T]\times\mathbb{R}^n, \]
and $A^*(\cdot,X_\cdot^*)$ is an optimal control for the stochastic optimal control problem of Definition 1.
Proof. 
See, e.g., Theorem 3.5.2 in [42]. Some other references for the verification theorem are also Theorem 4.1 in [41], Theorem 5.7 in [43], and Theorem 4.1 in [44]. □
Remark 3.
The quadratic growth condition (10) is used in Theorem 2 only to guarantee that the local martingale part of the semi-martingale decomposition of $\varphi(t,X_t)$, namely, by the Itô formula,
\[ \sum_{i=1}^n\sum_{\ell=1}^m \int_0^t \sigma^{i\ell}\big(s,X_s,A^*(s,X_s)\big)\,\partial_{x_i}\varphi(s,X_s)\,dW_s^\ell, \tag{11} \]
is in $L^1$ and a true martingale (and not only a local martingale). This means that the statement of Theorem 2 holds assuming only that (11) is an $L^1$ martingale, i.e., without condition (10).

2.3. Stochastic Hamilton–Jacobi–Bellman Equation

The present section generalizes the aforementioned Hamilton–Jacobi–Bellman equation to its stochastic counterpart. Let us first recall the Itô–Kunita formula.
Theorem 3
(Itô–Kunita formula). Let $F(t,x)$, $(t,x)\in[0,T]\times\mathbb{R}^n$, be a random field, continuous in $(t,x)$ almost surely, such that
 (i)
For every $t\in[0,T]$, $F(t,\cdot)$ is a $C^2$-map from $\mathbb{R}^n$ into $\mathbb{R}$, $\mathbb{P}$-a.s.,
 (ii)
For each $x\in\mathbb{R}^n$, $F(\cdot,x)$ is a continuous semi-martingale $\mathbb{P}$-a.s., and it satisfies
\[ F(t,x) = F(0,x) + \sum_{j=1}^m \int_0^t f^j(s,x)\,dY_s^j, \quad \text{for every } (t,x)\in[0,T]\times\mathbb{R}^n,\ \text{a.s.}, \]
where $Y_s^j$, $j=1,\dots,m$, are $m$ continuous semi-martingales, and $f^j(s,x)$, $x\in\mathbb{R}^n$, $s\in[0,T]$, are random fields, continuous in $(s,x)$, satisfying the following properties:
 (a)
For every $s\in[0,T]$, $f^j(s,\cdot)$ is a $C^2$-map from $\mathbb{R}^n$ to $\mathbb{R}$, $\mathbb{P}$-a.s.,
 (b)
For every $x\in\mathbb{R}^n$, $f^j(\cdot,x)$ is an adapted process.
Let $X_t = (X_t^1,\dots,X_t^n)$ be continuous semi-martingales. Then we have, for $t\in[0,T]$,
\[ F(t,X_t) = F(0,X_0) + \sum_{j=1}^m\int_0^t f^j(s,X_s)\,dY_s^j + \sum_{i=1}^n\int_0^t \partial_{x_i}F(s,X_s)\,dX_s^i + \sum_{j=1}^m\sum_{i=1}^n\int_0^t \partial_{x_i}f^j(s,X_s)\,d[Y^j,X^i]_s + \frac{1}{2}\sum_{i,k=1}^n\int_0^t \partial_{x_i}\partial_{x_k}F(s,X_s)\,d[X^i,X^k]_s, \]
where $[\cdot,\cdot]_s$ stands for the quadratic covariation of semi-martingales. Furthermore, if $F\in C^3$ and $f^j\in C^3$, $\mathbb{P}$-a.s., then we have, for $i=1,\dots,n$,
\[ \partial_{x_i}F(t,x) = \partial_{x_i}F(0,x) + \sum_{j=1}^m\int_0^t \partial_{x_i}f^j(s,x)\,dY_s^j, \quad \text{for every } (t,x)\in[0,T]\times\mathbb{R}^n,\ \mathbb{P}\text{-a.s.} \]
Proof. 
See, e.g., the article [46] or the book [47], both by H. Kunita. □
Sticking, where possible, with the notation introduced in Section 2.2, we consider a stochastic optimal control problem where the functions $L$, $g$, $\mu$, and $\sigma$ are also random. More precisely, they also depend on $\omega\in\Omega$ in a predictable way, namely, $L(t,x,a,\cdot)$, $g(x,\cdot)$, $\mu(t,x,\cdot)$, and $\sigma(t,x,\cdot)$ are $\mathcal{F}_t$-measurable for any $(t,x,a)\in\mathbb{R}_+\times\mathbb{R}^n\times K$. In order to distinguish them from the functions of the previous section, and to recall that the following are stochastic terms, we also write
\[ L^S(t,x,a) = L(t,x,a,\cdot), \qquad g^S(x) = g(x,\cdot). \]
We then want to maximize the objective functional
\[ \mathbb{E}\Big[\int_{t_0}^T L^S(t,X_t,\alpha_t)\,dt + g^S(X_T)\Big], \tag{12} \]
where $X$ solves the SDE
\[ dX_t = \mu(t,X_t,\alpha_t,\omega)\,dt + \sigma(t,X_t,\alpha_t,\omega)\,dW_t, \quad t_0 < t \le T, \qquad X_{t_0} = x, \tag{13} \]
and $\alpha\in\mathcal{K}_L$.
Let us introduce, in a completely analogous way as in the previous section, the value function
\[ U(t,x,\omega) = \max_{\alpha\in\mathcal{K}_L} \mathbb{E}\Big[\int_t^T L^S(s,X_s,\alpha_s)\,ds + g^S(X_T)\,\Big|\,\mathcal{F}_t\Big]. \]
From now on, we may omit the explicit dependence of the functions on $\omega\in\Omega$. Then, for any fixed $x$, $U(t,x)$ is an $\mathcal{F}_t$-adapted process but, a priori, it is not of bounded variation. We can nevertheless expect it to be a continuous semi-martingale and, therefore, by the representation theorem for semi-martingales and martingales (see, e.g., Section IV.31 and Section IV.36 in [48]), that it can be written as follows,
\[ U(t,x) = \Gamma_T(x) - \Gamma_t(x) - \int_t^T Y_s(x)\,dW_s, \qquad x\in\mathbb{R}^n,\ 0\le t\le T, \]
where, for every $x\in\mathbb{R}^n$, $\Gamma_t(x)$ and $Y_t(x)$ are $\mathcal{F}_t$-adapted processes and $\Gamma_t(x)$ is of bounded variation. In this case, if $\Gamma_t(x)$ and $Y_t(x)$ are almost surely continuous in $(t,x)$, $\Gamma_t(x)$ is differentiable with respect to $t$, and both are sufficiently smooth with respect to $x$, then the pair $(U,Y)$ should satisfy a stochastic Hamilton–Jacobi–Bellman equation (SHJB).
More precisely, we say that $(\varphi,\Psi)$ solves the SHJB related with the optimal control problem (12) and (13) if $(\varphi,\Psi)$ satisfies the following backward stochastic partial differential equation
\[ d\varphi_t(x) + H^S\big(t,x,\nabla\varphi,D^2\varphi,\nabla\Psi\big)\,dt = \sum_{\ell=1}^m \Psi_t^\ell(x)\,dW_t^\ell, \quad (t,x)\in[t_0,T)\times\mathbb{R}^n, \qquad \varphi_T(x) = g(x), \quad x\in\mathbb{R}^n, \tag{14} \]
where
\[ H^S(t,x,u_x,u_{xx},\psi_x) \equiv H(t,x,u_x,u_{xx},\psi_x,\omega) = \sup_{a\in K} H^S(t,x,u_x,u_{xx},a,\psi_x), \]
and
\[ H^S(t,x,u_x,u_{xx},a,\psi_x) = \sum_{i=1}^n \mu^i(t,x,a,\omega)\,u_{x_i} + \frac{1}{2}\sum_{i,j=1}^n\sum_{\ell=1}^m \sigma^{i\ell}(t,x,a,\omega)\,\sigma^{j\ell}(t,x,a,\omega)\,u_{x_ix_j} + \sum_{i=1}^n\sum_{\ell=1}^m \sigma^{i\ell}(t,x,a,\omega)\,\psi_{x_i}^\ell + L^S(t,x,a), \]
with $\psi_x = (\psi_{x_i}^\ell)_{i=1,\dots,n,\ \ell=1,\dots,m}\in\mathbb{R}^{n\times m}$. See, e.g., Section 3.1 in [33] for more details about the derivation of Equation (14), and Section 4 of the same reference for results concerning the well-posedness of such an equation.
We state here the verification theorem, which tells us that a sufficiently smooth solution of the SHJB equation coincides with the value function $U$.
Theorem 4.
Let $(\varphi,\Psi)$ be a smooth solution of the SHJB Equation (14) with $t_0 = 0$ and assume that the following conditions hold:
 (i)
For each $t\in[0,T]$, $x\mapsto(\varphi_t(x),\Psi_t(x))$ is a $C^2$-map from $\mathbb{R}^n$ into $\mathbb{R}\times\mathbb{R}^m$, $\mathbb{P}$-a.s.,
 (ii)
For each $x\in\mathbb{R}^n$, $t\mapsto(\varphi_t(x),\Psi_t(x))$ and $t\mapsto(\nabla\varphi_t(x),D^2\varphi_t(x),\nabla\Psi_t(x))$ are continuous $\mathcal{F}_t$-adapted processes.
Suppose further that there exists a predictable admissible control $A^*(t,x,\omega)$ such that
\[ H^S\big(t,x,\nabla\varphi,D^2\varphi,\nabla\Psi\big) = H^S\big(t,x,\nabla\varphi,D^2\varphi,A^*(t,x,\omega),\nabla\Psi\big), \]
and that it is regular enough so that the SDE (13) is well-posed with solution $X$. Then $(\varphi,\Psi) = (U,Y)$ and, moreover, for any initial datum $(0,x)$ with $x\in\mathbb{R}^n$, $A^*(t,X_t,\omega)$ maximizes the objective functional (12).
Proof. 
See Section 3.2 in [33]. □
Remark 4.
Under suitable regularity conditions on $\mu$, $\sigma$, $L^S$, and $g^S$, it is possible to prove that the SHJB Equation (14) admits a unique solution satisfying the hypotheses of Theorem 4. A rigorous proof of this fact can be found in Section 4 of [33]. For further developments on SHJB equations and the related stochastic optimal control problems, we refer the reader to, e.g., [49,50,51,52], as well as the already mentioned paper by Peng [33].

3. Solutions of PDEs via Contact Symmetries

In this section, we recall some basic facts concerning the theory of symmetries on which our results are based, referring to [2,3,53,54,55] for a complete treatment of these topics. We start with a formal introduction to jet spaces (for an extended introduction to the subject see, e.g., [56,57]), and then proceed with contact symmetries and their applications to solving PDEs. Although these results are well known, we include here a short survey for the ease of the reader, and we introduce the notation that will be adopted in the rest of the paper.

3.1. Jet Spaces and Jet Bundles

The jet space is a generalization of the notion of the tangent bundle of a manifold. Let $M$ and $N$ be two open subsets of $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, and consider a smooth function $f\colon M\to N$. Take a standard coordinate system $x = (x^1,\dots,x^m)$ in $M$ and let $u = (u^1,\dots,u^n) = f(x)\in N$. We can then consider the $k$-th prolongation $u^{(k)} = \mathrm{pr}^{(k)}f(x)$, which is defined by the relations $u_{x_i}^j = \partial_{x_i}f^j(x)$, $u_{x_ix_l}^j = \partial_{x_i}\partial_{x_l}f^j(x)$, ..., up to order $k$. For example, if $m = 2$ and $n = 1$, then $\mathrm{pr}^{(2)}f(x_1,x_2)$ is given by
\[ (u;\,u_{x_1},u_{x_2};\,u_{x_1x_1},u_{x_1x_2},u_{x_2x_2}) = \big(f;\,\partial_{x_1}f,\partial_{x_2}f;\,\partial_{x_1}\partial_{x_1}f,\partial_{x_1}\partial_{x_2}f,\partial_{x_2}\partial_{x_2}f\big)(x_1,x_2). \]
The $k$-th prolongation can also be viewed as the Taylor polynomial of degree $k$ of $f$ at the point $x$. The space whose coordinates represent the independent variables, the dependent variables, and the derivatives of the dependent variables up to order $k$ is called the $k$-th order jet space of the underlying space $N\times M$, and we denote it by $J^k(M,N)$. It is a smooth vector bundle on $M$ with projection $\pi_{k,1}\colon J^k(M,N)\to M$ given by
\[ \pi_{k,1}(x,u,u_x,u_{xx},\dots) = x. \]
More explicitly, $J^k(M,N) = M\times N\times N^{(1)}\times\dots\times N^{(k)}$, where $N^{(i)}$ is the space of $i$-th order derivatives of $u$ with respect to $x$. It is clear that $N^{(i)}\simeq\mathbb{R}^{n_i}$ with
\[ n_i = \binom{m+i-1}{i}. \]
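As a quick sanity check of this count (a snippet of our own): for $m = 2$ and $i = 2$ it returns 3, matching the three second derivatives $u_{x_1x_1}$, $u_{x_1x_2}$, $u_{x_2x_2}$ listed above.

```python
from math import comb

def n_i(m, i):
    """Number of distinct i-th order partial derivatives of a scalar
    function of m variables (unordered multi-indices of length i)."""
    return comb(m + i - 1, i)

print(n_i(2, 2))  # -> 3, the second derivatives u_x1x1, u_x1x2, u_x2x2
print(n_i(2, 1))  # -> 2, one first derivative per independent variable
```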
To any function $f\in C^k(M,N)$, where $C^k(M,N)$ is the infinite-dimensional Fréchet space of $k$-times differentiable functions on $M$ taking values in $N$, we associate a continuous section of the bundle $(J^k(M,N),M,\pi_{k,1})$ in the following way
\[ f \longmapsto D^k(f)(x) = \big(x,\ u = f(x),\ u_x = \nabla f(x),\ u_{xx} = D^2f(x),\ \dots,\ D^kf(x)\big), \]
where $D^if(x)$ is the vector collecting all the $i$-th order derivatives of $f$ with respect to $x$. In this setting, a differential equation is a sub-manifold $\Delta_E\subseteq J^k(M,N)$. For example, in the scalar case $N = \mathbb{R}$, we usually consider $\Delta_E$ as the null set of some regular functions, i.e., $\Delta_E = \{E_i(x,u,u_x,u_{xx},\dots) = 0,\ i\in\{1,\dots,p\}\}$.
Definition 2.
Consider a (finite) set of smooth functions $E_i\colon J^k(M,N)\to\mathbb{R}$, $i = 1,\dots,p$, where $p\in\mathbb{N}$, $p>0$, defining a sub-manifold $\Delta_E = \{E_i(x,u,u_x,u_{xx},\dots) = 0,\ i\in\{1,\dots,p\}\}$ of $J^k(M,N)$. We say that a smooth function $f\colon M\to N$ is a solution to the equation $E$ (represented by the sub-manifold $\Delta_E$) if, for any $x\in M$, we have $D^kf(x)\in\Delta_E$. The set of all solutions to equation $E$ will be denoted by $S_E$.
For instance, in the previous case where $N = \mathbb{R}$ and $\Delta_E = \{E_i(x,u,u_x,\dots) = 0,\ i\in\{1,\dots,p\}\}$, $f$ is a solution to equation $E$ if $E_i(x,f(x),\nabla f(x),\dots) = 0$ for every $i = 1,\dots,p$ and $x\in M$.
Remark 5.
For technical reasons, it is usually not possible to consider generic equations $E$ (corresponding to generic sub-manifolds $\Delta_E\subseteq J^k(M,N)$). In the following, we always consider non-degenerate systems of differential equations in the sense of Definition 2.70 in [2]. This condition ensures that, for each fixed $x_0\in M$ and each set of derivatives $(u^0,u_x^0,u_{xx}^0,\dots)$, there exists a solution to the equation, defined in a neighborhood of $x_0$, with the prescribed derivatives $(u^0,u_x^0,u_{xx}^0,\dots)$ at the point $x_0$. Since the precise formulation of this condition is quite technical, and the evolution equations considered in Section 4 always satisfy such an assumption, we refer to Section 2.6 of [2] for complete details.

3.2. Contact Transformations

We want to introduce a class of transformations induced by diffeomorphisms of $J^k(M,N)$. For simplicity, we consider the case $k = 2$, $M\subseteq\mathbb{R}^n$, and $N = \mathbb{R}$. Consider a diffeomorphism $\Phi\colon J^2(M,N)\to J^2(M,N)$ given by the following relations
\[ \tilde{x}^i = \Phi^{x_i}(x,u,u_x,u_{xx}), \qquad \tilde{u} = \Phi^u(x,u,u_x,u_{xx}), \qquad \tilde{u}_{x_i} = \Phi^{u_{x_i}}(x,u,u_x,u_{xx}), \qquad \tilde{u}_{x_ix_j} = \Phi^{u_{x_ix_j}}(x,u,u_x,u_{xx}). \]
Hereafter, we use the notation $\Phi^x = (\Phi^{x_1},\dots,\Phi^{x_n})$, $\Phi^{u_x} = (\Phi^{u_{x_1}},\dots,\Phi^{u_{x_n}})$, and $\Phi^{u_{xx}} = (\Phi^{u_{x_ix_j}})_{i,j=1,\dots,n}$.
We now aim to define a transformation $F_\Phi$ on the space of smooth functions, induced by the map $\Phi$ on the jet space. Let $U\in C^\infty(M,N)$ and consider the map $C_{U,\Phi}\colon M\to M$ given by
\[ C_{U,\Phi}(x) = \Phi^x\big(x,U(x),\nabla U(x),D^2U(x)\big). \]
Let $\mathcal{F}_\Phi\subseteq C^\infty(M,N)$ be the subset of smooth functions $U\in C^\infty(M,N)$ such that $C_{U,\Phi}$ is a diffeomorphism from $M$ into itself.
Definition 3.
We say that the diffeomorphism $\Phi$ generates the (nonlinear) operator $F_\Phi$ on the space of functions $\mathcal{F}_\Phi$ if there is a map $F_\Phi\colon\mathcal{F}_\Phi\to C^\infty(M,N)$ such that
\[
\begin{aligned}
F_\Phi(U)(x) &= \Phi^u\big(C_{U,\Phi}^{-1}(x),\,U(C_{U,\Phi}^{-1}(x)),\,\nabla U(C_{U,\Phi}^{-1}(x)),\,D^2U(C_{U,\Phi}^{-1}(x))\big),\\
\partial_{x_i}F_\Phi(U)(x) &= \Phi^{u_{x_i}}\big(C_{U,\Phi}^{-1}(x),\,U(C_{U,\Phi}^{-1}(x)),\,\nabla U(C_{U,\Phi}^{-1}(x)),\,D^2U(C_{U,\Phi}^{-1}(x))\big),\\
\partial_{x_i}\partial_{x_j}F_\Phi(U)(x) &= \Phi^{u_{x_ix_j}}\big(C_{U,\Phi}^{-1}(x),\,U(C_{U,\Phi}^{-1}(x)),\,\nabla U(C_{U,\Phi}^{-1}(x)),\,D^2U(C_{U,\Phi}^{-1}(x))\big).
\end{aligned}
\]
Not every diffeomorphism $\Phi\colon J^2(M,N)\to J^2(M,N)$ generates an operator $F_\Phi$ on the space of functions $\mathcal{F}_\Phi$. For example, consider $M = \mathbb{R}$ and the map $\Phi^x(x,u,u_x,u_{xx}) = \lambda x$, $\Phi^u(x,u,u_x,u_{xx}) = u$, $\Phi^{u_x}(x,u,u_x,u_{xx}) = u_x$, and $\Phi^{u_{xx}}(x,u,u_x,u_{xx}) = u_{xx}$, where $\lambda>0$. In this case, for any $U\in C^\infty(M,N)$, the map $C_{U,\Phi}$ is given by $C_{U,\Phi}(x) = \lambda x$; thus, it does not depend on $U$ and it is always a diffeomorphism from $\mathbb{R}$ into itself, since $\lambda\neq 0$. This implies that $\mathcal{F}_\Phi = C^\infty(M,N)$ and also that, if the map $F_\Phi$ exists, then it must satisfy
\[ F_\Phi(U)(x) = U(\lambda^{-1}x), \]
for any $U\in C^\infty(M,N)$. On the other hand, we have
\[ \partial_x F_\Phi(U)(x) = \lambda^{-1}U'(\lambda^{-1}x) \neq U'(\lambda^{-1}x) = \Phi^{u_x}\big(C_{U,\Phi}^{-1}(x),\,U(C_{U,\Phi}^{-1}(x)),\,U'(C_{U,\Phi}^{-1}(x)),\,U''(C_{U,\Phi}^{-1}(x))\big). \]
This simple counterexample shows that a diffeomorphism $\Phi\colon J^2(M,N)\to J^2(M,N)$ must satisfy some additional conditions in order to generate an operator $F_\Phi$. For this reason, we introduce the following definition.
Definition 4.
A diffeomorphism Φ : J 2 ( M , N ) J 2 ( M , N ) is said to be a contact transformation if it generates a (nonlinear) operator F Φ in the sense of Definition 3.
It is possible to give a nice geometric characterization of the set of contact transformations. From now on, we write $\Lambda^1 J^n(M,N)$ for the vector space of 1-forms on $J^n(M,N)$. In particular, consider the following 1-forms,
\[ \kappa = du - \sum_{i=1}^n u_{x_i}\,dx^i, \qquad \kappa_{x_i} = du_{x_i} - \sum_{j=1}^n u_{x_ix_j}\,dx^j. \tag{15} \]
We denote by $\mathcal{C}\subseteq\Lambda^1 J^2(M,N)$ the contact structure, also called the Cartan distribution in [56], which is generated by
\[ \mathcal{C} = \operatorname{span}\{\kappa,\ \kappa_{x_i},\ i=1,\dots,n\}. \]
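A short computation of our own shows why these forms single out prolonged sections: pulling $\kappa$ and $\kappa_{x_i}$ back along the second prolongation $x\mapsto(x,f(x),\nabla f(x),D^2f(x))$ of any smooth $f$ gives zero.

```latex
(D^2 f)^*\kappa
  = d\big(f(x)\big) - \sum_{i=1}^n \partial_{x_i}f(x)\,dx^i
  = \sum_{i=1}^n \partial_{x_i}f(x)\,dx^i - \sum_{i=1}^n \partial_{x_i}f(x)\,dx^i = 0,
\qquad
(D^2 f)^*\kappa_{x_i}
  = d\big(\partial_{x_i}f(x)\big) - \sum_{j=1}^n \partial_{x_i}\partial_{x_j}f(x)\,dx^j = 0 .
```

Hence a transformation preserving $\mathcal{C}$ maps prolonged sections to prolonged sections, which is the geometric content of the characterization in Theorem 5.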
Theorem 5.
A diffeomorphism $\Phi\colon J^2(M,N)\to J^2(M,N)$ is a contact transformation in the sense of Definition 4 if and only if it preserves the contact structure $\mathcal{C}$, that is,
\[ \Phi^*(\mathcal{C}) = \mathcal{C}, \]
where $\Phi^*$ is the pull-back of differential forms on $J^2(M,N)$ induced by $\Phi$.
Proof. 
See, e.g., Chapter 2 in [56], Section 4 in [53], Chapter 21 in [3], and the references therein. □
Remark 6.
The contact transformation Φ is uniquely determined by its action on J 1 ( M , N ) . In particular, Φ x , Φ u , and Φ u x depend only on ( x , u , u x ) and they do not depend on u x x (see, e.g., Chapter 2 in [56]).
Remark 7.
In contact geometry, a contact structure on a $(2n+1)$-dimensional manifold $M$ is a 1-form $\zeta$ which is maximally non-integrable, namely, $\zeta \wedge (d\zeta)^n \neq 0$, and the contact transformations are the diffeomorphisms $\Psi$ of $M$ such that $\Psi^*(\zeta) = f \cdot \zeta$, for some $f \in C^\infty(M,\mathbb{R})$ (see, e.g., [58,59] for an introduction to the subject and [60] for an historical overview). This definition is satisfied by $J^1(M,\mathbb{R})$ with the 1-form $\zeta = \kappa$ defined in (15).
In the study of the geometry of jet spaces (see, e.g., Chapter 6 of [57]), the term “contact structure” is often used to express the set of forms C . This custom is due to the fact that, as explained in Remark 6, the contact transformations are extensions of diffeomorphisms on J 1 ( M , N ) , i.e., the set of transformations considered here is in one-to-one correspondence with the one usually considered in contact geometry.
In the following, we will not consider just a single contact transformation but one-parameter groups of contact transformations Φ λ , which means that Φ · : R × J 2 ( M , N ) J 2 ( M , N ) is C , Φ λ is a contact transformation for each λ R , Φ 0 ( x , u , u x , u x x ) = ( x , u , u x , u x x ) , and, for each λ 1 , λ 2 R ,
$$\Phi_{\lambda_1} \circ \Phi_{\lambda_2} = \Phi_{\lambda_1 + \lambda_2}.$$
In general, a one-parameter group of diffeomorphisms Φ Y , λ , where λ R , is generated by a vector field Y T J 2 ( M , N ) , i.e., belonging to the tangent bundle of J 2 ( M , N ) , which in local coordinates has the expression
$$Y = \sum_{i=1}^n Y^{x_i}(x,u,u_x,u_{xx})\,\partial_{x_i} + Y^{u}(x,u,u_x,u_{xx})\,\partial_u + \sum_{i=1}^n Y^{u_{x_i}}(x,u,u_x,u_{xx})\,\partial_{u_{x_i}} + \sum_{i,j=1}^n Y^{u_{x_i x_j}}(x,u,u_x,u_{xx})\,\partial_{u_{x_i x_j}},$$
by the following relations
$$
\begin{aligned}
\partial_\lambda \Phi^{x_i}_{Y,\lambda}(x,u,u_x,u_{xx}) &= Y^{x_i}\circ\Phi_{Y,\lambda}(x,u,u_x,u_{xx}),\\
\partial_\lambda \Phi^{u}_{Y,\lambda}(x,u,u_x,u_{xx}) &= Y^{u}\circ\Phi_{Y,\lambda}(x,u,u_x,u_{xx}),\\
\partial_\lambda \Phi^{u_{x_i}}_{Y,\lambda}(x,u,u_x,u_{xx}) &= Y^{u_{x_i}}\circ\Phi_{Y,\lambda}(x,u,u_x,u_{xx}),\\
\partial_\lambda \Phi^{u_{x_i x_j}}_{Y,\lambda}(x,u,u_x,u_{xx}) &= Y^{u_{x_i x_j}}\circ\Phi_{Y,\lambda}(x,u,u_x,u_{xx}),
\end{aligned}
$$
for any λ R and ( x , u , u x , u x x ) J 2 ( M , N ) . It is useful to introduce the following natural notion.
Definition 5.
A vector field Y (of the form (16)) on J 2 ( M , N ) is called an infinitesimal contact transformation if it generates (through Equation (17)) a one-parameter group of diffeomorphisms Φ λ of contact transformations.
The following theorem characterizes all the infinitesimal contact transformations on J 2 ( M , N ) .
Theorem 6.
A vector field Y on J 2 ( M , N ) is an infinitesimal contact transformation (in the sense of Definition 5) if and only if there exists a unique smooth map Ω : J 1 ( M , N ) R such that Y = Y Ω , where Y Ω is a vector field on J 2 ( M , N ) defined as
$$
\begin{aligned}
Y_\Omega ={}& -\sum_{i=1}^n \partial_{u_{x_i}}\Omega\,\partial_{x_i} + \Big(\Omega - \sum_{i=1}^n u_{x_i}\,\partial_{u_{x_i}}\Omega\Big)\partial_u + \sum_{i=1}^n \big(\partial_{x_i}\Omega + u_{x_i}\,\partial_u\Omega\big)\,\partial_{u_{x_i}}\\
&+ \sum_{i,j,k,\ell=1}^n \big(\partial_{x_i x_j}\Omega + u_{x_j}\,\partial_{x_i u}\Omega + u_{x_j x_k}\,\partial_{x_i u_{x_k}}\Omega + u_{x_i}\,\partial_{x_j u}\Omega + u_{x_i}u_{x_j}\,\partial_{uu}\Omega + u_{x_i}u_{x_j x_k}\,\partial_{u u_{x_k}}\Omega\\
&\qquad + u_{x_i x_j}\,\partial_u\Omega + u_{x_i x_k}\,\partial_{x_j u_{x_k}}\Omega + u_{x_i x_k}u_{x_j}\,\partial_{u_{x_k} u}\Omega + u_{x_i x_k}u_{x_j x_\ell}\,\partial_{u_{x_k} u_{x_\ell}}\Omega\big)\,\partial_{u_{x_i x_j}}.
\end{aligned}
$$
Proof. 
The proof can be found in Chapter 21 of [3] and references therein. □
Remark 8.
We say that a vector field of the form $Y_\Omega$ satisfying the hypotheses of Theorem 6 is the infinitesimal contact transformation generated by the (contact generating) function $\Omega$. Under this terminology, Theorem 6 guarantees that any infinitesimal contact transformation is generated in a unique way by some smooth function $\Omega : J^1(M,N) \to \mathbb{R}$.
There is a special subset of vector fields of the type (18) arising from coordinate transformations involving only the dependent and independent variables ( x , u ) .
Definition 6.
We say that Y Ω Lie , f , g is a (projected) Lie point transformation if it is a contact transformation of the form
Y Ω Lie , f , g = i = 1 n f i ( x ) x i + g ( x , u ) u + i = 1 n Y u x i ( x , u , u x ) u x i + i , j = 1 n Y u x i x j ( x , u , u x , u x x ) u x i x j ,
where f i C ( M , R ) , g C ( J 0 ( M , N ) , R ) , Y u x i C ( J 1 ( M , N ) , R ) and Y u x i x j C ( J 2 ( M , N ) , R ) .
Remark 9.
It is simple to see that a Lie point transformation Y Ω Lie , f , g can be reduced to a standard vector field Y ˜ = i f i ( x ) x i + g ( x , u ) u on J 0 ( R n , R ) , i.e., Y ˜ is the generator of a one-parameter group of transformations involving only the dependent and independent variables ( x , u ) .
Remark 10.
Another important property of Lie point transformations is the following. Denoting by Φ Lie , f , g , λ , where λ R , the one-parameter group generated by the Lie point transformation Y Ω Lie , f , g , we have that, for any λ R , the domain F Φ Lie , f , g , λ of the nonlinear operator F Φ Lie , f , g , λ generated by Φ Lie , f , g , λ is the whole C ( M , N ) = F Φ Lie , f , g , λ .
For what follows, we introduce the (formal) operators D x i : C ( J k ( M , N ) ) C ( J k + 1 ( M , N ) ) given by
$$D_{x_i} = \partial_{x_i} + u_{x_i}\,\partial_u + \sum_{j=1}^n u_{x_i x_j}\,\partial_{u_{x_j}} + \dots + \sum_{j_1 \leq \dots \leq j_p = 1}^n u_{x_{j_1}\cdots x_{j_p} x_i}\,\partial_{u_{x_{j_1}\cdots x_{j_p}}} + \dots$$
In a similar way, we write D x i x j ( · ) = D x i ( D x j ( · ) ) , D x i x j x ( · ) = D x i ( D x j ( D x ( · ) ) ) , etc.
We can characterize more precisely the general form of Lie point transformations.
Theorem 7.
The vector field Y Ω Lie , f , g is a (projected) Lie point transformation if and only if it is generated by a function of the form
$$\Omega_{\mathrm{Lie},f,g}(x,u,u_x) = g(x,u) - \sum_{i=1}^n f^i(x)\, u_{x_i},$$
namely, Y Ω Lie , f , g has the following expression
$$
\begin{aligned}
Y_{\Omega_{\mathrm{Lie},f,g}} := {}& \sum_{i=1}^n f^i(x)\,\partial_{x_i} + g(x,u)\,\partial_u + \sum_{i,j=1}^n \big({-}D_{x_i}(f^j)\,u_{x_j} + D_{x_i}(g)\big)\,\partial_{u_{x_i}}\\
&+ \sum_{i,j,k=1}^n \big({-}D_{x_i x_j}(f^k)\,u_{x_k} - D_{x_i}(f^k)\,u_{x_k x_j} + D_{x_i x_j}(g)\big)\,\partial_{u_{x_i x_j}}.
\end{aligned}
$$
Proof. 
The theorem is a direct application of Theorem 6 to vector fields of the form (19). □
If n = 1 , and the coordinate system of J 0 ( R , R ) is given by ( x , u ) , some examples of Lie point transformations are:
  • The dilation of the independent variable $x$, i.e., $\tilde{Y} = x\,\partial_x$ (see the notation in Remark 9), related to the generator function $\Omega = -x\,u_x$ and generating the one-parameter group defined by
    $$\Phi^x_\lambda(x,u) = e^{\lambda}x, \qquad \Phi^u_\lambda(x,u) = u, \qquad \Phi^{u_x}_\lambda(x,u,u_x) = e^{-\lambda}u_x, \qquad \Phi^{u_{xx}}_\lambda(x,u,u_x,u_{xx}) = e^{-2\lambda}u_{xx}.$$
  • The dilation of the dependent variable $u$, namely, $\tilde{Y} = u\,\partial_u$, related to the generator function $\Omega = u$ and generating the one-parameter group defined by
    $$\Phi^x_\lambda(x,u) = x, \qquad \Phi^u_\lambda(x,u) = e^{\lambda}u, \qquad \Phi^{u_x}_\lambda(x,u,u_x) = e^{\lambda}u_x, \qquad \Phi^{u_{xx}}_\lambda(x,u,u_x,u_{xx}) = e^{\lambda}u_{xx}.$$
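The first dilation example can be checked directly; the sketch below (assuming sympy) verifies both the group law $\Phi_{\lambda_1}\circ\Phi_{\lambda_2} = \Phi_{\lambda_1+\lambda_2}$ and the consistency of the prolonged coefficients with the chain rule, using $U(x)=x^3$ as an arbitrary test function.

```python
import sympy as sp

x, u, ux, uxx = sp.symbols('x u u_x u_xx')
l, l1, l2 = sp.symbols('lambda lambda_1 lambda_2')

def Phi(lam):
    # prolonged dilation of the independent variable (first example above)
    return (sp.exp(lam)*x, u, sp.exp(-lam)*ux, sp.exp(-2*lam)*uxx)

# group law: Phi_{l1} o Phi_{l2} = Phi_{l1+l2}
subs2 = dict(zip((x, u, ux, uxx), Phi(l2)))
comp = [expr.subs(subs2, simultaneous=True) for expr in Phi(l1)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(comp, Phi(l1 + l2)))

# consistency with an actual function: if U_tilde(x) = U(e^{-l} x),
# then U_tilde'(x) = e^{-l} U'(e^{-l} x), matching Phi^{u_x}
U = x**3
Ut = U.subs(x, sp.exp(-l)*x)
assert sp.simplify(sp.diff(Ut, x) - sp.exp(-l)*sp.diff(U, x).subs(x, sp.exp(-l)*x)) == 0
```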
We conclude this section providing the definition of symmetry of a differential equation.
Definition 7.
A contact transformation $\Phi : J^2(M,N) \to J^2(M,N)$ is a (contact) symmetry of the differential equation $\mathcal{E}$ if, for any solution $U \in C^\infty(M,N) \cap \mathcal{F}_\Phi$ to the equation $\mathcal{E}$, also $F_\Phi(U)$ is a solution to $\mathcal{E}$, where $F_\Phi$ is the operator generated by the contact transformation $\Phi$ and $\mathcal{F}_\Phi$ is its domain (see Definition 3).
We say that an (infinitesimal) contact transformation Y Ω is an (infinitesimal contact) symmetry of the differential equation E if the one-parameter group Φ Y Ω , λ generated by Y Ω is a set of symmetries of the equation E .
Remark 11.
With an abuse of language, we say that the function Ω C ( J 1 ( M , N ) , R ) is a contact symmetry of the equation E if the corresponding contact vector field Y Ω is a symmetry of E .
Remark 12.
If Y is a Lie point transformation and it is a contact symmetry of the equation E , then we say that Y is a Lie point symmetry of the equation E .
It is possible to give a completely geometric characterization of the contact symmetries of a differential equation E .
Theorem 8
(Determining equations). A contact transformation Φ is a symmetry of the equation E represented by the sub-manifold Δ E J 2 ( M , N ) of the form
$$\Delta_{\mathcal{E}} = \{E_i(x,u,u_x,u_{xx}) = 0,\ i \in \{1,\dots,p\}\},$$
where p N , p > 0 and E i C ( J 2 ( M , N ) , R ) , if and only if
Φ ( Δ E ) = Δ E .
The infinitesimal contact transformation Y Ω is a symmetry of the non-degenerate differential equation E (see Remark 5 for the definition of non-degenerate differential equation) if and only if
Y ( E i ( x , u , u x , u x x ) ) | Δ E = 0 ,
where i = 1 , , p .
Proof. 
The proof is given in Theorem 2.27 and Theorem 2.71 in [2] for the case of Lie point symmetries that are diffeomorphisms of $J^k(M,N)$, for $k \geq 0$. Since the contact transformations are diffeomorphisms of $J^h(M,N)$, for any $h \geq 1$ (see, e.g., Chapter 21 of [3]), the case of contact transformations can be proved using the same methods. □

3.3. Symmetries and Classical Noether Theorem

Let us discuss here the classical Noether theorem in the Lagrangian mechanics setting described in Section 2.1. Heuristically, the Noether theorem says that to any infinitesimal transformation leaving the optimal control problem invariant, namely Equation (1) and the Lagrangian $L$, there is an associated constant of motion.
More precisely, let Y x , a be a vector field in R n × R n transforming the variables x i and a i of Equation (1) and the Lagrangian L. We suppose that Y x , a is “projected” with respect to the variables x i , that is,
$$Y_{x,a} = \sum_{i=1}^n \big(f^i(x)\,\partial_{x_i} + g^i(x,a)\,\partial_{a_i}\big).$$
If we want the projected vector field (23) to be a symmetry of Equation (1), then we need that
$$g^i(x,a) = \sum_{j=1}^n \partial_{x_j}f^i(x)\, a^j.$$
If we also require that L is invariant with respect to the flow of Y x , a , then we must have
$$Y_{x,a}(L)(x,a) = \sum_{i=1}^n f^i(x)\,\partial_{x_i}L(x,a) + \sum_{i,j=1}^n \partial_{x_j}f^i(x)\, a^j\,\partial_{a_i}L(x,a) = 0.$$
So we say that Y x , a is a symmetry of the optimal control problem of Section 2.1 if and only if conditions (24) and (25) hold.
Theorem 9
(Noether theorem). Let $Y_{x,a}$ be a symmetry of the Lagrangian $L$ according to Equation (25). Then, supposing the existence of a $C^1$ optimal control $\alpha_t$, we have that
$$\sum_{i=1}^n f^i(X_t)\,\partial_{a_i}L(X_t,\alpha_t)$$
is constant with respect to time t [ t 0 , T ] .
Proof. 
Let us compute the derivative with respect to time of (26), then, by Euler–Lagrange Equation (4), we have
$$
\begin{aligned}
\frac{d}{dt}\sum_{i=1}^n f^i(X_t)\,\partial_{a_i}L(X_t,\alpha_t) &= \sum_{i,j=1}^n \partial_{x_j}f^i(X_t)\,\partial_{a_i}L(X_t,\alpha_t)\,\frac{dX^j_t}{dt} + \sum_{i=1}^n f^i(X_t)\,\frac{d}{dt}\big(\partial_{a_i}L(X_t,\alpha_t)\big)\\
&= \sum_{i,j=1}^n \partial_{x_j}f^i(X_t)\,\alpha^j_t\,\partial_{a_i}L(X_t,\alpha_t) + \sum_{i=1}^n f^i(X_t)\,\partial_{x_i}L(X_t,\alpha_t),
\end{aligned}
$$
which is zero as a consequence of Equation (25). □
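A minimal symbolic sketch of Theorem 9 (assuming sympy), for the free particle $L(x,a) = a^2/2$, which is translation invariant ($f = 1$, so condition (25) holds trivially since $L$ does not depend on $x$): along any solution of the Euler–Lagrange equation, the Noether quantity $f\,\partial_a L$ is constant.

```python
import sympy as sp

t, c0, c1 = sp.symbols('t c0 c1')

# free particle: L(x, a) = a**2/2, invariant under translations x -> x + eps
x_t = c0 + c1*t               # general solution of the Euler-Lagrange equation x'' = 0
a_t = sp.diff(x_t, t)

conserved = 1 * a_t           # f(X_t) * dL/da = momentum, evaluated along the path
assert sp.diff(conserved, t) == 0
```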
It is possible to give an equivalent formulation of Theorem 9 using the Lie point symmetries of Hamilton–Jacobi equation.
Theorem 10
(Noether theorem, Hamilton–Jacobi version). Let $\Omega(x,u_x) = -\sum_{i=1}^n f^i(x)\,u_{x_i}$ be a contact symmetry of the Hamilton–Jacobi Equation (3). Then, if $U \in C^{1,2}([t_0,T]\times\mathbb{R}^n, \mathbb{R})$ is a solution to Equation (3), we have that
$$\Omega(X_t, \nabla U(t,X_t)) = -\sum_{i=1}^n f^i(X_t)\,\partial_{x_i}U(t,X_t),$$
where $X_t$ is the solution to Equation (1) with $\alpha^i_t = A^i(X_t, \nabla U(t,X_t))$ (see Section 2 for the definition of the map $A$), is constant with respect to time $t \in [t_0, T]$.
Lemma 1.
Y Ω is a contact symmetry of the Hamilton–Jacobi Equation (3) if and only if
$$\sum_{i=1}^n \big(\partial_{x_i}\Omega\,\partial_{u_{x_i}}H - \partial_{u_{x_i}}\Omega\,\partial_{x_i}H\big) = 0.$$
Proof. 
It is a consequence of Equation (18) and Definition 7. See, e.g., Section 21.2 in [3]. □
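The determining equation of Lemma 1 is easy to test symbolically. The following sketch (assuming sympy; the Hamiltonians are illustrative choices, not taken from the paper) checks that the translation generator is a contact symmetry of the free Hamiltonian $H = u_x^2/2$ but not of a Hamiltonian with a generic potential $V(x)$.

```python
import sympy as sp

x, ux = sp.symbols('x u_x')
V = sp.Function('V')

def det_eq(Omega, H):
    # left-hand side of the determining equation (28) in one space dimension
    return sp.diff(Omega, x)*sp.diff(H, ux) - sp.diff(Omega, ux)*sp.diff(H, x)

Omega = -ux                          # generated by f(x) = 1 (space translation)
assert sp.simplify(det_eq(Omega, ux**2/2)) == 0          # symmetry of H = u_x**2/2
assert sp.simplify(det_eq(Omega, ux**2/2 + V(x))) != 0   # broken by a generic V(x)
```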
Proof of Theorem 10.
See the proof of Theorem 12 below, where the statement is proven in the general stochastic case. □
Remark 13.
The two formulations of the Noether theorem given by Theorems 9 and 10 are equivalent in the sense that $Y_{x,a} = \sum_{i,j=1}^n \big(f^i(x)\,\partial_{x_i} + \partial_{x_j}f^i(x)\,a^j\,\partial_{a_i}\big)$ is a symmetry of the optimal control problem if and only if $\Omega$ is a contact symmetry of the related Hamilton–Jacobi equation, namely, Equation (28) holds. Furthermore, if we choose the optimal control $\alpha^i_t$ to be equal to $A^i(X_t, \nabla U(t,X_t))$, then the two conserved quantities (26) and (27) are equal.

4. Noether Theorem for Stochastic Optimal Control

4.1. The Case of Deterministic HJB Equation

Considering M = R + × R n and denoting the first variable by t and the other independent variables by x i , for i = 1 , , n , for the Hamilton–Jacobi–Bellman equation we have that Δ E is described by the equation
$$\partial_t u + \max_{a\in K}\Big[\frac{1}{2}\sum_{i,j=1}^n \eta^{ij}(t,x,a)\,u_{x_i x_j} + \sum_{i=1}^n \mu^i(t,x,a)\,u_{x_i} + L(t,x,a)\Big] = 0.$$
Equation (29) is a special kind of evolution equation since it has the form
$$u_t + H(t,x,u,u_x,u_{xx}) = 0,$$
for some smooth function H C 2 ( R × J 2 ( R n , R ) ) , where u x = ( u x 1 , , u x n ) , and u x x = ( u x i x j ) i , j = 1 , , n . In this case, it is convenient to choose a generating function of the form
Ω ( t , x , u , u x ) .
Remark 14.
It is important to notice that, for a generic contact symmetry on J 2 ( M , R ) = J 2 ( R + × R n , R ) , the generating function has the form
Ω ˜ ( t , x , u , u t , u x ) ,
depending also on the variable $u_t$, which represents the time derivative. Choosing a generating function of the form (31) instead of the form (32) means considering contact transformations that do not change the time variable $t$. The main reason is that the time variable in stochastic equations plays a peculiar role and cannot be changed in the same way as the spatial variable $x$. Nevertheless, in [24,25,29] a special kind of time change has also been considered, corresponding to the generating function
$$\tilde{\Omega} = f(t)\,u_t + \Omega_{\mathrm{Lie},f,g}(t,x,u,u_x),$$
where f C ( R + , R ) , and Ω Lie , f , g ( t , x , u , u x ) is the generator of a Lie point transformation, see Equation (21) (see also Remark 15 for a further discussion of this point).
Theorem 11.
Consider an evolution PDE of the form (30). An infinitesimal contact transformation generated by the function Ω of the form (31) is a contact symmetry for Equation (30) if and only if
$$\partial_t\Omega - H\,\partial_u\Omega + \sum_{i,j=1}^n \big(D_{x_i}\Omega\,\partial_{u_{x_i}}H + D_{x_i x_j}\Omega\,\partial_{u_{x_i x_j}}H - D_{x_i}H\,\partial_{u_{x_i}}\Omega\big) = 0,$$
where D x i are defined in Equation (20) and D x i x j · = D x i ( D x j ( · ) ) .
Proof. 
The statement follows directly from Theorems 6 and 8 (in particular Equations (18) and (22)). □
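The determining Equation (34) can be exercised on a toy example. The sketch below (assuming sympy; the heat equation is an illustrative choice, not one of the paper's HJB equations) verifies that $\Omega = u_x$, the generating function of space translations, is a contact symmetry of $u_t + H = 0$ with $H = -u_{xx}$.

```python
import sympy as sp

t, x, u, ux, uxx, uxxx, uxxxx = sp.symbols('t x u u_x u_xx u_xxx u_xxxx')

def Dx(F):
    # total derivative operator (20), truncated at fourth-order jet variables
    return (sp.diff(F, x) + ux*sp.diff(F, u) + uxx*sp.diff(F, ux)
            + uxxx*sp.diff(F, uxx) + uxxxx*sp.diff(F, uxxx))

H = -uxx        # the heat equation u_t = u_xx, written as u_t + H = 0
Omega = ux      # generating function of space translations

lhs = (sp.diff(Omega, t) - H*sp.diff(Omega, u)
       + Dx(Omega)*sp.diff(H, ux) + Dx(Dx(Omega))*sp.diff(H, uxx)
       - Dx(H)*sp.diff(Omega, ux))
assert sp.simplify(lhs) == 0
```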
Let us introduce
$$O_t = \Omega\big(t, X_t, U(t,X_t), \nabla U(t,X_t)\big),$$
where X t is a solution to Equation (5) with respect to an optimal control A t * .
Assumption 1.
There exists at least one measurable function A ( t , x , u x , u x x ) such that
$$A(t,x,u_x,u_{xx}) \in \arg\max \mathcal{H}(t,x,u_x,u_{xx},\cdot),$$
where
$$\mathcal{H}(t,x,u_x,u_{xx},a) = \sum_{i=1}^n \mu^i(t,x,a)\,u_{x_i} + \frac{1}{2}\sum_{i,j=1}^n\sum_{\ell=1}^m \sigma^{i\ell}(t,x,a)\,\sigma^{j\ell}(t,x,a)\,u_{x_i x_j} + L(t,x,a).$$
As a consequence of Assumption 1, we can choose the process
$$\alpha_t = A\big(t, X_t, \nabla U(t,X_t), D^2U(t,X_t)\big)$$
to be the optimal control provided that the solution U to Equation (9) is at least C 2 .
The next result is our first stochastic generalization of Noether theorem.
Theorem 12.
Let Assumption 1 hold true. Suppose that the solution U to Equation (9) is continuously differentiable with respect to time and C 2 with respect to x. If Ω is a contact symmetry of Equation (9), then O t is a local martingale.
Remark 15.
The works [24,25,29] present a Noether theorem involving a time change and a Lie point transformation with a generator of the form (33) for an optimal control system with affine-type control and an objective function with quadratic dependence on the control. More precisely, they proved that, if $\tilde{\Omega}$ of the form (33) is a symmetry of the HJB equation, then the process
$$\hat{O}_t = \tilde{\Omega}\big(t, X_t, U(t,X_t), \partial_tU(t,X_t), \nabla U(t,X_t)\big) = -f(t)\,H\big(t, X_t, \nabla U(t,X_t), D^2U(t,X_t)\big) + \Omega_{\mathrm{Lie},f,g}\big(t, X_t, U(t,X_t), \nabla U(t,X_t)\big),$$
is a local martingale. The presence of some time invariance was essential in the papers [21,28] for extending the concept of integrable systems to the stochastic framework. We expect that the martingale property of the process (35) holds also in the general setting presented here. Since it is not completely clear what the role of the time change is in our setting, nor whether the conservation of (35) holds for more general time changes, we prefer to postpone this analysis to later work.
From now on we take H as in Section 2.2, namely,
$$H(t,x,u_x,u_{xx}) = \sup_{a\in K} \mathcal{H}(t,x,u_x,u_{xx},a).$$
In order to prove Theorem 12, we anticipate the following result.
Lemma 2.
We have that
$$\partial_{u_{x_i}}H = \mu^i\big(t,x,A(t,x,u_x,u_{xx})\big), \qquad \partial_{u_{x_i x_j}}H = \frac{1}{2}\sum_{\ell=1}^m \sigma^{i\ell}\big(t,x,A(t,x,u_x,u_{xx})\big)\,\sigma^{j\ell}\big(t,x,A(t,x,u_x,u_{xx})\big).$$
Proof. 
In the case where μ , σ , and A are C 1 in all their variables, the result follows from the fact that
$$\partial_{a_i}\mathcal{H}\big(t,x,u_x,u_{xx},A(t,x,u_x,u_{xx})\big) = 0.$$
The general case is a consequence of Assumption 1 and the envelope theorem. For the latter we refer the reader to, e.g., [61,62]. □
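The envelope-theorem argument behind Lemma 2 can be illustrated on a one-dimensional toy model (assuming sympy; the Hamiltonian below is a hypothetical example, not taken from the paper): with drift $\mu(a) = a$ and running cost $L(a) = -a^2/2$, differentiating the maximized Hamiltonian in $u_x$ returns exactly $\mu$ evaluated at the optimizer.

```python
import sympy as sp

ux, a = sp.symbols('u_x a')

# toy Hamiltonian with drift mu(a) = a, no diffusion, running cost L(a) = -a**2/2
Hcal = a*ux - a**2/2
A = sp.solve(sp.diff(Hcal, a), a)[0]     # optimizer: A(u_x) = u_x
H = Hcal.subs(a, A)                      # maximized Hamiltonian: H(u_x) = u_x**2/2

# envelope theorem: dH/du_x = mu evaluated at the optimizer
assert sp.simplify(sp.diff(H, ux) - A) == 0
```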
Proof of Theorem 12.
We compute the differential of O t using Itô formula, to get
d O t = d Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) = t Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d t + u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d U ( t , X t ) + i = 1 n x i Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d X t i + i = 1 n u x i Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d x i U ( t , X t ) + 1 2 i , j = 1 n x i x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ X i , X j ] t + 1 2 u u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ U ( · , X · ) , U ( · , X · ) ] t
+ 1 2 j = 1 n u x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ U ( · , X · ) , X j ] t + 1 2 j = 1 n u u x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ x j U ( · , X · ) , U ( · , X · ) ] t + 1 2 i , j = 1 n u x i x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ x i U ( · , X · ) , X j ] t + 1 2 i , j = 1 n u x i u x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d [ x i U ( · , X · ) , x j U ( · , X · ) ] t .
Since U C 2 , 3 ( [ 0 , T ] × R n , R ) , we also have
d U ( t , X t ) = t U ( t , X t ) d t + i = 1 n x i U ( t , X t ) d X t i + 1 2 i , j = 1 n x i x j U ( t , X t ) d [ X i , X j ] t ,
d x i U ( t , X t ) = x i , t U ( t , X t ) d t + j = 1 n x i x j U ( t , X t ) d X t j + 1 2 j , k = 1 n x i x j x k U ( t , X t ) d [ X j , X k ] t .
Exploiting Equations (36) and (37), the fact that X t is solution to (5), and the relations
d [ X i , X j ] t = = 1 m σ i ( t , X t , α t ) σ j ( t , X t , α t ) d t , d [ U ( · , X · ) , X i ] t = j = 1 n = 1 m x j U ( t , X t ) σ j ( t , X t , α t ) σ i ( t , X t , α t ) d t , d [ U ( · , X · ) , U ( · , X · ) ] t = i , j = 1 n = 1 m x j U ( t , X t ) x i U ( t , X t ) σ j ( t , X t , α t ) σ i ( t , X t , α t ) d t , d [ U ( · , X · ) , x i U ( · , X · ) ] t = k , j = 1 n = 1 m x j U ( t , X t ) x i x k U ( t , X t ) σ j ( t , X t , α t ) σ k ( t , X t , α t ) d t , d [ x l U ( · , X · ) , x i U ( · , X · ) ] t = k , j = 1 n = 1 m x l x j U ( t , X t ) x i x k U ( t , X t ) σ j ( t , X t , α t ) σ i ( t , X t , α t ) d t ,
we obtain
d O t = i , k = 1 n μ i ( t , X t , α t ) x i Ω + u x i u Ω + u x i x k u x k Ω ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) ) d t + 1 2 i , j , k , l = 1 n = 1 m σ i ( t , X t , α t ) σ j ( t , X t , α t ) ( x i , x j Ω + u x j x i , u Ω + u x j x k x i u x k Ω + u x i x j , u Ω + u x i u x j u u Ω + u x i u x j x k u u x k Ω + u x i x j u Ω + u x i x k x j u x k Ω + u x i x k u x j u x k u Ω + u x i x k u x j x l u x k u x l Ω + u x i x j x k u x k Ω ) ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , D 3 U ( t , X t ) ) d t + t U u Ω + i = 1 n t x i U u x i Ω + t Ω ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) ) d t + d M t ,
where M t is a local martingale. Using the explicit definition of D x i , it is simple to note that
D x i Ω = x i Ω + u x i u Ω + k = 1 n u x i x k u x k Ω , D x i x j Ω = x i x j Ω + u x j x i u Ω + u x i x j u Ω + u x i u x j u u Ω + u x i x j u Ω + k , l = 1 n ( u x j x k x i u x k Ω + u x i u x j x k u u x k Ω + u x i x k x j u x k Ω + u x i x k u x j u x k u Ω + u x i x k u x j x l u x k u x l Ω + u x i x j x k u x k Ω ) ,
and we have
$$\partial_t U = -H\big(t,x,\nabla U, D^2U\big) \quad \text{and} \quad \partial_{t}\partial_{x_i} U = -(D_{x_i}H)\big(t,x,\nabla U, D^2U, D^3U\big).$$
Using Lemma 2, the fact that we can choose α t = A ( t , X t , U ( t , X t ) , D 2 U ( t , X t ) ) , and the determining Equation (34), we obtain
d O t = = i = 1 n μ i ( t , X t , α t ) ( D x i Ω ) ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) ) d t + 1 2 = 1 m i , j = 1 n σ i ( t , X t , α t ) · σ j ( t , X t , α t ) · ( D x i x j Ω ) ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , D 3 U ( t , X t ) ) d t + i = 1 n H u Ω D x i H u x i Ω + t Ω ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) ) d t + d M t = i , j = 1 n D x i Ω u x i H + D x i x j Ω u x i x j H H u Ω D x i H u x i Ω + t Ω ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , D 3 U ( t , X t ) ) d t + d M t = d M t ,
which concludes the proof. □

4.2. The Case of Stochastic HJB Equation

We face the problem of stochastic HJB equation, that is, we consider, as we did in Section 2.3,
$$\mathcal{H}_S(t,x,u_x,u_{xx},a,\psi_x) = \sum_{i=1}^n \mu^i(t,x,a,\omega)\,u_{x_i} + \frac{1}{2}\sum_{i,j=1}^n\sum_{\ell=1}^m \sigma^{i\ell}(t,x,a,\omega)\,\sigma^{j\ell}(t,x,a,\omega)\,u_{x_i x_j} + \sum_{i=1}^n\sum_{\ell=1}^m \sigma^{i\ell}(t,x,a,\omega)\,\psi^{\ell}_{x_i} + L_S(t,x,a).$$
and
$$H_S(t,x,u_x,u_{xx},\psi_x) = \sup_{a\in K} \mathcal{H}_S(t,x,u_x,u_{xx},a,\psi_x).$$
In this case,
$$dU_t(x) = -H_S\big(t,x,\nabla U_t, D^2U_t, \nabla\Psi_t\big)\,dt + \sum_{\ell=1}^m \Psi^{\ell}_t(x)\,dW^{\ell}_t.$$
Though some ideas concerning symmetries for SPDEs are discussed, e.g., in [63,64], a general theory has not been developed yet. For this reason, we extend the notion of infinitesimal symmetry introduced in Definition 7 in the following way. Hereafter, we consider the probability space ( W , F t , P ) where W = C 0 ( R , R m ) is the canonical space for the Brownian motion W, F t is the natural filtration generated by W t , and P is the Wiener measure on W .
Definition 8.
Let Ω : R + × J 1 ( R n , R ) × W R be a predictable regular random field on R + × J 1 ( R n , R ) , which is C 1 with respect to the time t and C 2 in all other variables. We say that Y Ω is a contact symmetry for Equation (39) when we have
$$\partial_t\Omega - H_S\,\partial_u\Omega + \sum_{i,j=1}^n \big(D_{x_i}\Omega\,\partial_{u_{x_i}}H_S + D_{x_i x_j}\Omega\,\partial_{u_{x_i x_j}}H_S - D_{x_i}H_S\,\partial_{u_{x_i}}\Omega\big) = 0.$$
Assumption 2.
There exists at least one measurable function A S ( t , x , u x , u x x , ψ x ) such that
$$A_S(t,x,u_x,u_{xx},\psi_x) \in \arg\max \mathcal{H}_S(t,x,u_x,u_{xx},\cdot,\psi_x),$$
where H S is defined by Equation (38).
Lemma 3.
We have that
$$
\begin{aligned}
\partial_{u_{x_i}}H_S &= \mu^i\big(t,x,A_S(t,x,u_x,u_{xx},\psi_x),\omega\big),\\
\partial_{u_{x_i x_j}}H_S &= \frac{1}{2}\sum_{\ell=1}^m \sigma^{i\ell}\big(t,x,A_S(t,x,u_x,u_{xx},\psi_x),\omega\big)\,\sigma^{j\ell}\big(t,x,A_S(t,x,u_x,u_{xx},\psi_x),\omega\big),\\
\partial_{\psi^{\ell}_{x_i}}H_S &= \sigma^{i\ell}\big(t,x,A_S(t,x,u_x,u_{xx},\psi_x),\omega\big).
\end{aligned}
$$
Proof. 
The proof is similar to the one of Lemma 2. □
The following result represents our second stochastic generalization of the Noether theorem.
Theorem 13.
Let Assumption 2 hold true. Suppose that the solution $(U,\Psi)$ to Equation (39) is continuously differentiable with respect to time and $C^3$ with respect to $x$ almost surely. If $\Omega$ is a contact symmetry of Equation (39), then
O ˜ t = Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) 1 2 0 t i , j = 1 n = 1 m ( u u Ω ( Ψ s ) 2 + 2 u x i σ i Ψ s x i u Ω σ i Ψ u x i Ω x i σ j Ψ x j + x i u x j Ω σ i Ψ x j + u x i u x j Ω ( Ψ x i Ψ x j + σ i u x i Ψ x j + σ j u x j Ψ x i ) + u u x j Ω Ψ Ψ x j + σ i u x i Ψ x j + σ i u x i x j Ψ ) ( s , X s , U ( s , X s ) , U ( s , X s ) ) d s
is a local martingale.
Proof. 
Since the proof is similar to the one of Theorem 12, we report here only some steps of the proof. By Theorem 3, we have
d U ( t , X t ) = H S ( t , X t , U ( t , X t ) , D 2 U ( t , X t ) , Ψ t ( X t ) ) d t + = 1 m Ψ t ( X t ) d W t + = 1 m i = 1 n x i Ψ ( X t ) d [ W , X i ] t + i = 1 n x i U ( t , X t ) d X t i + 1 2 i , j = 1 n x i x j U ( t , X t ) d [ X i , X j ] t ,
and
d x k U ( t , X t ) = D x k H S ( t , X t , U ( t , X t ) , D 2 U ( X t , t ) , Ψ t ( X t ) ) d t + = 1 m x k Ψ t ( X t ) d W t + i = 1 n = 1 m x i x k Ψ t ( X t ) d [ W , X i ] t + i = 1 n x i x k U ( t , X t ) d X t i + 1 2 i , j = 1 n x i x j x k U ( t , X t ) d [ X i , X j ] t .
Adopting the usual notation for O t = Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) and α t = A S ( t , X t , U ( t , X t ) , D 2 U ( t , X t ) , Ψ t ( X t ) ) , we have
d O t = i = 1 n μ i ( t , X t , α t ) x i Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d t + 1 2 i , j = 1 n = 1 m σ i ( t , X t , α t ) σ j ( t , X t , α t ) x i x j Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) d t + u Ω d U ( t , X t ) + 1 2 u u Ω d [ U , U ] + i = 1 n x i u Ω d [ X i , U ] + i = 1 n u x i Ω d x i U ( t , X t ) + 1 2 i , j = 1 n u x i u x j Ω d [ x i U , x j U ] + j = 1 n u u x j Ω d [ U , x j U ] + i , i = 1 n x i u x j Ω d [ X i , x j U ] + t Ω d t + d M ˜ t .
Plugging in Equations (41) and (42), and exploiting Theorem 3 in order to compute the quadratic variations, we get
d O t = i = 1 n μ i ( t , X t , α t ) x i Ω + u x i u Ω + k = 1 n u x i x k u x k Ω ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) ) d t + 1 2 i , j = 1 n = 1 m σ i ( t , X t , α t ) σ j ( t , X t , α t ) ( k , l = 1 n x i x j Ω + u x j x i u Ω + u x j x k x i u x k Ω + u x i x j u Ω + u x i u x j u , u Ω + u x i u x j x k u u x k Ω + u x i x j u Ω + u x i x k x j u x k Ω + u x i x k u x j u x k u Ω + u x i x k u x j x l u x k u x l Ω + u x i x j x k u x k Ω ) ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , D 3 U ( t , X t ) ) d t u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) H S ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , Ψ t ( X t ) ) d t + 1 2 u u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) i = 1 n = 1 m Ψ t ( x ) 2 + 2 x i U ( t , x ) σ i ( x , a ) Ψ t ( x ) ( t , X t , α t ) d t + i = 1 n = 1 m x i u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) σ i ( X t , α t ) Ψ t ( X t ) d t i = 1 n u x i Ω D x i H S ( t , X t , U ( t , X t ) , U ( t , X t ) , D 2 U ( t , X t ) , Ψ t ( X t ) ) d t + i , k = 1 n = 1 m u x i Ω σ k x i x k Ψ t + u Ω σ i Ψ x i ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + i , j = 1 n = 1 m u x i u x j Ω ( x i Ψ t x j Ψ t + σ i u x i x j Ψ + σ j u x j x i Ψ ) ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + i , j = 1 n = 1 m x i u x j Ω σ i Ψ x j ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + j = 1 n = 1 m u u x j Ω ( Ψ t x j Ψ t + σ i u x i x j Ψ t + σ i u x i x j Ψ t ) ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + d M ˜ t .
Notice that, by Definition 8, we have
0 = u Ω H S + i , j = 1 n u x i Ω D x i H S u x i H S D x i Ω u x i u x j H S D x i x j Ω ,
which, by Lemma 3, is equivalent to
u Ω H S + i , j = 1 n u x i Ω D x i H S u x i H S D x i Ω u x i u x j H S D x i x j Ω = = i , j = 1 n = 1 m u x i Ω ( x i σ j Ψ x j + σ j Ψ x j x i ) + u Ω ( σ i Ψ x i ) .
Then we obtain
d O t = 1 2 u , u Ω ( t , X t , U ( t , X t ) , U ( t , X t ) ) i = 1 n = 1 m ( Ψ t ) 2 + 2 u x i σ i Ψ t ( t , X t , U ( t , X t ) , U ( t , X t ) ) d t + i = 1 n = 1 m x i u Ω σ i Ψ ( t , X t , U ( t , X t ) , U ( t , X t ) ) d t + i , j = 1 n = 1 m ( u x i u x j Ω × × ( x i Ψ t x j Ψ t + σ i u x i x j Ψ t + σ j u x j x i Ψ t ) ) ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + i , j = 1 n = 1 m u x i Ω x i σ j x j Ψ t + x i u x j Ω σ i x j Ψ t ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + i , j = 1 n = 1 m u u x j Ω ( Ψ x j Ψ t + σ i u x i x j Ψ t + σ i u x i x j Ψ t ) ( t , X t , U ( t , X t ) , U ( t , X t ) , α t ) d t + d M ˜ t .
Following then the same steps as in the proof of Theorem 12 we get the result. □
Corollary 1.
Suppose that Ω is a Lie point symmetry of the form
$$\Omega(t,x,u,u_x) = c\,u + g(t,x) - \sum_{k=1}^n f^k(t,x)\,u_{x_k},$$
where $c \in \mathbb{R}$ and $f^k, g : \mathbb{R}^{n+1} \to \mathbb{R}$ are smooth functions such that, for $j = 1, \dots, n$ and $\ell = 1, \dots, m$,
$$\sum_{k=1}^n \big(f^k\,\partial_{x_k}\sigma^{j\ell} - \sigma^{k\ell}\,\partial_{x_k}f^j\big) = 0.$$
Then O t = Ω ( t , X t , U ( t , X t ) ) is a local martingale.
Proof. 
Under the previous conditions, we have
j = 1 n u x j Ω x j σ i + x i u x j Ω σ i = x i u Ω σ i = 0 .
The thesis follows from Theorem 13. □
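The condition of Corollary 1 says that the spatial vector field $f$ commutes with each diffusion field $\sigma^{\cdot\ell}$. A minimal sketch (assuming sympy, with $n = m = 1$ and hypothetical coefficient choices): the dilation field $f(x) = x$ satisfies the condition for linear noise $\sigma(x) = \sigma_0 x$ but not for additive noise $\sigma(x) = \sigma_0$.

```python
import sympy as sp

x, s = sp.symbols('x sigma0')

def cond(f, sig):
    # left-hand side of the commutation condition of Corollary 1 for n = m = 1
    return sp.simplify(f*sp.diff(sig, x) - sig*sp.diff(f, x))

assert cond(x, s*x) == 0   # the dilation f = x commutes with linear noise sigma0*x
assert cond(x, s) != 0     # but not with additive noise sigma = sigma0
```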

5. Merton’s Optimal Portfolio Problem

In this section, we propose a symmetry analysis of Merton's problem of optimal portfolio selection (see the original paper [34], and [35] for a review of the subject). Let us consider a set of controls $\alpha_t = (c(t), \gamma(t))$ and a controlled diffusion dynamics described by the SDE
$$dX_t = \big[\big(\gamma(t)(\mu(t)-r)+r\big)X_t - c(t)\big]\,dt + X_t\,\gamma(t)\,\sigma(t)\,dW_t,$$
where X is the wealth process controlled by the proportion γ ( t ) [ 0 , 1 ] invested in the risky asset at time t and by the consumption c ( t ) [ 0 , + ) per unit time at time t. Moreover, r is the constant interest rate, and μ ( t ) , σ ( t ) > 0 are continuous functions such that σ ( t ) > ϵ > 0 (or in the case of Section 5.2 are general continuous predictable stochastic processes). Fixing some finite time horizon T > 0 , the problem of choosing optimal portfolio selection consists of maximizing the objective functional
$$\mathbb{E}\Big[\int_t^T L(s,\alpha_s)\,ds + g(X_T)\Big],$$
where
$$L(t,\alpha_t) = e^{-\rho t}\,V(c(t)).$$
Here, ρ ( 0 , + ) is the discount rate, V is a strictly concave utility function that is assumed to be differentiable with V ( z ) > 0 for z > 0 , and g is a given function.
Let us remark that the set K introduced in Section 2.2 here has the form
K = [ 0 , + ) × [ 0 , 1 ] .

5.1. Markovian Case

The maximization problem introduced above is a particular case of the general one studied in Section 2.2. The associated value function is
$$U(t,x) = \max_{\alpha\in\mathcal{K}_L}\mathbb{E}\Big[\int_t^T L(s,\alpha_s)\,ds + g(X_T)\,\Big|\,X_t = x\Big],$$
while the HJB equation becomes
$$\partial_t U + \max_{(c,\gamma)\in K} \mathcal{H}\big(t,x,\nabla U, D^2U,(c,\gamma)\big) = 0,$$
with
$$\mathcal{H}(t,x,u_x,u_{xx},(c,\gamma)) = e^{-\rho t}\,V(c) + u_x\big(\gamma(\mu(t)-r)+r\big)x - u_x\,c + \frac{1}{2}\,u_{xx}\,\sigma(t)^2\,\gamma^2 x^2.$$
The optimal value ( c * , γ * ) of ( c , γ ) is given by the solutions to the system
$$\partial_c\mathcal{H} = e^{-\rho t}\,V'(c) - u_x = 0, \qquad \partial_\gamma\mathcal{H} = (\mu(t)-r)\,x\,u_x + u_{xx}\,\sigma^2(t)\,x^2\,\gamma = 0,$$
that is
$$c^*(t) = (V')^{-1}\big(u_x e^{\rho t}\big),$$
$$\gamma^*(t) = -\frac{(\mu(t)-r)\,u_x}{u_{xx}\,\sigma^2(t)\,x}.$$
The corresponding functional H takes the form
$$
\begin{aligned}
H(t,x,u_x,u_{xx}) &= \mathcal{H}\big(t,x,u_x,u_{xx},(c^*(t),\gamma^*(t))\big)\\
&= e^{-\rho t}\,V(c^*(t)) + u_x\big(\gamma^*(t)(\mu(t)-r)+r\big)x - u_x\,c^*(t) + \frac{1}{2}\,u_{xx}\,\sigma^2(t)\,\gamma^*(t)^2 x^2\\
&= e^{-\rho t}\,V\big((V')^{-1}(u_x e^{\rho t})\big) - u_x\,(V')^{-1}\big(u_x e^{\rho t}\big) + r\,x\,u_x - \frac{(\mu(t)-r)^2\,u_x^2}{2\,\sigma^2(t)\,u_{xx}}.
\end{aligned}
$$
So we study the following PDE
$$u_t - \frac{\delta(t)}{2}\,\frac{u_x^2}{u_{xx}} + K(t,x,u_x) = 0,$$
with
$$K(t,x,u_x) = h_V(t,u_x) + r\,x\,u_x, \qquad \delta(t) = \frac{(\mu(t)-r)^2}{\sigma^2(t)},$$
where
$$h_V(t,u_x) = e^{-\rho t}\,V\big((V')^{-1}(u_x e^{\rho t})\big) - u_x\,(V')^{-1}\big(u_x e^{\rho t}\big).$$
We are looking for the symmetry generated by the generating function Ω ( t , x , u , u x ) . Hereafter, we assume that the function h V defined above is a smooth function in a suitable open subset of R 2 .
Theorem 14.
The function Ω generates a contact symmetry of Equation (46) if and only if it admits one of the following forms
$$
\begin{aligned}
\Omega_1 &= \exp\Big(\frac{r(r-1)t}{2}\Big)\big[u\,u_x^{\,r} - x\,u_x^{\,r+1}\big] + G_1(t,u_x), & \Omega_2 &= u + G_2(t,u_x),\\
\Omega_3 &= \exp(rt)\,x\,u_x + G_3(t,u_x), & \Omega_4 &= G_4(t,u_x),
\end{aligned}
$$
where G 1 , G 2 , G 3 , G 4 : R + × R R are smooth functions satisfying the PDEs
$$-2\exp\Big(\frac{r(r-1)t}{2}\Big)u_x^{\,r}\,h_V + \delta(t)\,u_x^2\,\partial_{u_x u_x}G_1 + 2\,\partial_t G_1 = 0,$$
2 u x u x h V 2 h V + δ ( t ) u x 2 u x u x G 2 + 2 t G 2 = 0 ,
2 exp ( r t ) u x u x h V + 2 x r exp ( r t ) u x + δ ( t ) u x 2 u x u x G 3 + 2 t G 3 = 0 ,
δ ( t ) u x 2 u x u x G 4 + 2 t G 4 = 0 .
Finally, Theorems 12 and 14 allow us to obtain the explicit forms of the local martingales of Merton’s model.
Corollary 2.
Let $U(t,x)$ be a classical solution to Equation (46) and let $X_t$ be the solution to Equation (43) with $(\gamma,c)$ satisfying the equalities (44) and (45). Then, the processes
$$O_{1,t} = \exp\left(-\frac{r(r-1)t}{2}\right)\left[U(t,X_t)\,\big(\partial_x U(t,X_t)\big)^{r} - X_t\,\big(\partial_x U(t,X_t)\big)^{r+1}\right] + G_1\big(t,\partial_x U(t,X_t)\big),$$
$$O_{2,t} = U(t,X_t) + G_2\big(t,\partial_x U(t,X_t)\big),$$
$$O_{3,t} = \exp(rt)\,X_t\,\partial_x U(t,X_t) + G_3\big(t,\partial_x U(t,X_t)\big),$$
$$O_{4,t} = G_4\big(t,\partial_x U(t,X_t)\big),$$
are local martingales.
Proof of Theorem 14.
The generating function $\Omega$ is a (contact) symmetry of the PDE if and only if the following set of determining equations holds
$$\frac{\delta}{2}\,u_x^2\,\partial_{u_x u_x}\Omega + u_x\,\partial_u\Omega\cdot\partial_{u_x}K + \partial_x\Omega\cdot\partial_{u_x}K - \partial_u\Omega\cdot K - \partial_{u_x}\Omega\cdot\partial_x K + \partial_t\Omega = 0, \tag{52}$$
$$\delta\,u_x^2\,\partial_{u u_x}\Omega + \delta\,u_x\,\partial_{u_x x}\Omega - \delta\,\partial_x\Omega = 0, \tag{53}$$
$$\frac{\delta}{2}\,u_x^2\,\partial_{uu}\Omega + \delta\,u_x\,\partial_{xu}\Omega + \frac{\delta}{2}\,\partial_{xx}\Omega = 0. \tag{54}$$
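The two determining equations that do not involve $K$ can be checked mechanically on candidate generating functions; a sympy sketch, where `det53` and `det54` encode Equations (53) and (54) up to the overall factor $\delta$:

```python
import sympy as sp

t, x, u, ux, r = sp.symbols('t x u u_x r', positive=True)

def det53(O):
    # u_x^2 d_{u u_x} O + u_x d_{u_x x} O - d_x O   (Equation (53) divided by delta)
    return ux**2*sp.diff(O, u, ux) + ux*sp.diff(O, x, ux) - sp.diff(O, x)

def det54(O):
    # u_x^2 d_{uu} O / 2 + u_x d_{xu} O + d_{xx} O / 2   (Equation (54) divided by delta)
    return ux**2*sp.diff(O, u, 2)/2 + ux*sp.diff(O, x, u) + sp.diff(O, x, 2)/2

O1 = sp.exp(-r*(r - 1)*t/2)*(u*ux**r - x*ux**(r + 1))  # core of Omega_1
O3 = sp.exp(r*t)*x*ux                                  # core of Omega_3
for O in (O1, O3, u):                                  # u is the core of Omega_2
    print(sp.simplify(det53(O)), sp.simplify(det54(O)))
```

All residuals vanish, in agreement with the forms listed in Theorem 14.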
We can differentiate Equation (53) with respect to $u$ and Equation (54) with respect to $u_x$, and equate the term $\partial_{u u u_x}\Omega$ to obtain
$$\big(4\,\partial_{uu}\Omega + \partial_{u u_x x}\Omega\big)\,u_x^2 + \big(\partial_{u_x xx}\Omega + 7\,\partial_{ux}\Omega\big)\,u_x + 2\,\partial_{xx}\Omega = 0. \tag{55}$$
Differentiating Equation (53) with respect to $x$, we can get an expression of $\partial_{u u_x x}\Omega$ in terms of $\partial_{u_x xx}\Omega$ and $\partial_{xx}\Omega$. Replacing the obtained expression in Equation (55) yields
$$u_x\,\partial_{uu}\Omega + \partial_{ux}\Omega = 0. \tag{56}$$
If we differentiate Equation (53) with respect to $u$ and use Equation (56), then we have
$$\partial_{ux}\Omega = 0. \tag{57}$$
Inserting Equation (57) in Equation (56), we get
$$\partial_{uu}\Omega = 0,$$
from which, thanks to Equations (54) and (57), we obtain
$$\partial_{xx}\Omega = 0.$$
This means that $\Omega$ is a function of the form
$$\Omega(t,x,u,u_x) = f_1(t,u_x)\,u + f_2(t,u_x)\,x + f_3(t,u_x). \tag{58}$$
If we replace expression (58) inside the determining Equations (52)–(54), we have that $f_1$, $f_2$, and $f_3$ have to satisfy the following set of equations
$$u_x^2\,\partial_{u_x u_x}f_1 + 2\,\partial_t f_1 = 0, \tag{59}$$
$$u_x^2\,\partial_{u_x u_x}f_2 + 2\,\partial_t f_2 - 2\,r\,f_2 = 0, \tag{60}$$
$$2\,u_x\,f_1\cdot\partial_{u_x}K + 2\,f_2\cdot\partial_{u_x}K - 2\,f_1\cdot K + \delta\,u_x^2\,\partial_{u_x u_x}f_3 + 2\,\partial_t f_3 = 0, \tag{61}$$
$$u_x^2\,\partial_{u_x}f_1 + u_x\,\partial_{u_x}f_2 - f_2 = 0. \tag{62}$$
Solving Equation (62) with respect to $f_2$, we obtain that
$$f_2 = -u_x\,f_1 + g_1(t)\,u_x. \tag{63}$$
Replacing the expression (63) in Equation (60) and using Equation (59), we have the equation
$$-2\,u_x^2\,\partial_{u_x}f_1 + 2\,u_x\,\partial_t g_1 + 2\,r\,u_x\,f_1 - 2\,r\,g_1\,u_x = 0,$$
from which we get that
$$f_1 = \left(d(t) + \frac{b(t)}{r}\right)u_x^{r} - \frac{b(t)}{r}, \tag{64}$$
where $b(t) = \partial_t g_1 - r\,g_1$. Replacing Equation (64) in Equation (59), we obtain
$$r(r-1)\left(d(t) + \frac{b(t)}{r}\right) + 2\,\partial_t\!\left(d(t) + \frac{b(t)}{r}\right) = 0 \quad \forall t, \qquad \partial_{tt}g_1 - r\,\partial_t g_1 = 0,$$
giving
$$b(t) = d_2, \qquad d(t) = d_1\exp\left(-\frac{r(r-1)t}{2}\right) - \frac{d_2}{r}, \qquad g_1(t) = d_3\exp(rt) - \frac{d_2}{r},$$
for some arbitrary constants $d_1$, $d_2$, and $d_3$. Therefore, we have
$$f_1 = d_1\exp\left(-\frac{r(r-1)t}{2}\right)u_x^{r} - \frac{d_2}{r}.$$
By (63), we obtain
$$f_2 = -d_1\exp\left(-\frac{r(r-1)t}{2}\right)u_x^{r+1} + d_3\exp(rt)\,u_x.$$
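The functions $f_1$ and $f_2$ obtained above can be verified against Equations (59), (60), and (62); a short sympy check:

```python
import sympy as sp

t, ux, r, d1, d2, d3 = sp.symbols('t u_x r d_1 d_2 d_3', positive=True)
E = sp.exp(-r*(r - 1)*t/2)

f1 = d1*E*ux**r - d2/r
f2 = -d1*E*ux**(r + 1) + d3*sp.exp(r*t)*ux

eq59 = ux**2*sp.diff(f1, ux, 2) + 2*sp.diff(f1, t)
eq60 = ux**2*sp.diff(f2, ux, 2) + 2*sp.diff(f2, t) - 2*r*f2
eq62 = ux**2*sp.diff(f1, ux) + ux*sp.diff(f2, ux) - f2
print(sp.simplify(eq59), sp.simplify(eq60), sp.simplify(eq62))
```

All three residuals reduce to zero for arbitrary $d_1,d_2,d_3$.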
Inserting the previous expressions of $f_1$, $f_2$, and $f_3$ in (61), we get that $\Omega$ is a contact symmetry of Equation (46) if and only if it is a linear combination of the following expressions
$$\Omega_1 = \exp\left(-\frac{r(r-1)t}{2}\right)\left[u\,u_x^{r} - x\,u_x^{r+1}\right] + G_1(t,u_x), \qquad (d_1 = 1,\ d_2 = d_3 = 0),$$
$$\Omega_2 = u + G_2(t,u_x), \qquad (d_2 = -r,\ d_1 = d_3 = 0),$$
$$\Omega_3 = \exp(rt)\,x\,u_x + G_3(t,u_x), \qquad (d_3 = 1,\ d_1 = d_2 = 0),$$
$$\Omega_4 = G_4(t,u_x), \qquad (d_1 = d_2 = d_3 = 0),$$
where $G_1$, $G_2$, $G_3$, and $G_4$ are smooth solutions to the PDEs (48)–(51). □
Equations (48)–(51) can be solved explicitly for some special forms of $K(t,x,u_x)$. Taking, in particular, the following two expressions
$$K_1 = h_1 + r\,x\,u_x, \qquad h_1 = -\exp(-\rho t)\left[\log(u_x) + \rho t + 1\right],$$
and
$$K_2 = h_2 + r\,x\,u_x, \qquad h_2 = -\frac{\theta-1}{\theta}\exp\left(\frac{\rho t}{\theta-1}\right)u_x^{\frac{\theta}{\theta-1}},$$
derived by taking the isoelastic utility functions, also known as constant relative risk aversion utilities (see [65]), defined as
$$V(z) = \log(z) \qquad \text{and} \qquad V(z) = \frac{z^{\theta}}{\theta}, \quad \theta\in(0,1),$$
respectively. If we denote by $\Omega_1^1 = \exp\left(-\frac{r(r-1)t}{2}\right)\left[u\,u_x^{r} - x\,u_x^{r+1}\right] + G_1^1(t,u_x)$ and by $\Omega_1^2 = \exp\left(-\frac{r(r-1)t}{2}\right)\left[u\,u_x^{r} - x\,u_x^{r+1}\right] + G_1^2(t,u_x)$ the symmetries of Equation (46) when $K = K_1$ and $K = K_2$, respectively, we have that $G_1^1$ solves the equation
$$\Gamma_1(t,u_x) + \delta(t)\,u_x^2\,\partial_{u_x u_x}G_1^1 + 2\,\partial_t G_1^1 = 0,$$
where
$$\Gamma_1(t,u_x) = 2\exp\left(-\frac{r(r-1)t}{2} - \rho t\right)\left[u_x^{r}\log(u_x) + \rho t\,u_x^{r} + u_x^{r}\right].$$
Making the ansatz
$$G_1^1(t,u_x) = \phi_1(t)\,u_x^{r} + \phi_2(t)\,u_x^{r}\log(u_x) + \phi_3(t),$$
we have that $G_1^1$ solves (65) if and only if $\phi_1,\phi_2,\phi_3$ solve the following ODEs
$$\partial_t\phi_1 = -\exp\left(-\frac{r(r-1)t}{2} - \rho t\right)(\rho t + 1) - \frac{\delta(t)\,r(r-1)}{2}\,\phi_1 - \frac{\delta(t)\,(2r-1)}{2}\,\phi_2,$$
$$\partial_t\phi_2 = -\exp\left(-\frac{r(r-1)t}{2} - \rho t\right) - \frac{\delta(t)\,r(r-1)}{2}\,\phi_2,$$
$$\partial_t\phi_3 = 0.$$
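Under one consistent choice of signs, the ansatz $\phi_1 u_x^{r} + \phi_2 u_x^{r}\log(u_x) + \phi_3$ and the associated ODE system can be cross-checked symbolically; a sketch with $\delta(t)$ kept generic, where `Gamma1` is the source term obtained from Equation (48) with $h_V = h_1$:

```python
import sympy as sp

t, ux, r, rho = sp.symbols('t u_x r rho', positive=True)
delta = sp.Function('delta')
p1, p2, p3 = sp.Function('phi_1'), sp.Function('phi_2'), sp.Function('phi_3')

E = sp.exp(-r*(r - 1)*t/2 - rho*t)
Gamma1 = 2*E*(ux**r*sp.log(ux) + rho*t*ux**r + ux**r)  # source from (48) with h_V = h_1
G = p1(t)*ux**r + p2(t)*ux**r*sp.log(ux) + p3(t)       # the ansatz

residual = Gamma1 + delta(t)*ux**2*sp.diff(G, ux, 2) + 2*sp.diff(G, t)
odes = {  # one consistent sign choice for the ODE system
    sp.Derivative(p1(t), t): -E*(rho*t + 1) - delta(t)*r*(r - 1)/2*p1(t)
    - delta(t)*(2*r - 1)/2*p2(t),
    sp.Derivative(p2(t), t): -E - delta(t)*r*(r - 1)/2*p2(t),
    sp.Derivative(p3(t), t): 0,
}
print(sp.simplify(residual.subs(odes)))  # expect 0
```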
In the same way, $G_1^2$ solves
$$\Gamma_2(t,u_x) + \delta(t)\,u_x^2\,\partial_{u_x u_x}G_1^2 + 2\,\partial_t G_1^2 = 0,$$
where
$$\Gamma_2(t,u_x) = 2\,\frac{\theta-1}{\theta}\,\exp\left(-\frac{r(r-1)t}{2} + \frac{\rho}{\theta-1}\,t\right)u_x^{\frac{\theta}{\theta-1}+r}.$$
With the ansatz
$$G_1^2(t,u_x) = \phi_1(t)\,u_x^{\frac{\theta}{\theta-1}} + \phi_2(t)\,u_x^{\frac{\theta}{\theta-1}+r},$$
Equation (67) holds if and only if $\phi_1$ and $\phi_2$ solve the following ODEs
$$\partial_t\phi_1 = -\frac{\delta(t)}{2}\,\frac{\theta}{\theta-1}\left(\frac{\theta}{\theta-1}-1\right)\phi_1,$$
$$\partial_t\phi_2 = -\frac{\theta-1}{\theta}\exp\left(-\frac{r(r-1)t}{2} + \frac{\rho}{\theta-1}\,t\right) - \frac{\delta(t)}{2}\left(\frac{\theta}{\theta-1}+r\right)\left(\frac{\theta}{\theta-1}+r-1\right)\phi_2.$$
If we denote by $\Omega_2^1 = u + G_2^1(t,u_x)$ and by $\Omega_2^2 = u + G_2^2(t,u_x)$ the symmetries of Equation (46) when $K = K_1$ and $K = K_2$, respectively, then we get that $G_2^i$, $i = 1,2$, solves (49) with $h_1$ and $h_2$ given by (65) and (66).
With the ansatz
$$G_2^1(t,u_x) = \phi_1(t) + \phi_2(t)\log(u_x),$$
the function $G_2^1$ solves (49) (with $h = h_1$) if and only if $\phi_1$ and $\phi_2$ solve the following ODEs
$$\partial_t\phi_1(t) = \frac{\delta(t)}{2}\,\phi_2(t) + \exp(-\rho t) - \exp(-\rho t)(\rho t + 1),$$
$$\partial_t\phi_2(t) = -\exp(-\rho t).$$
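This system integrates in closed form; a quick sympy sketch solving the $\phi_2$ equation and simplifying the source of the $\phi_1$ equation:

```python
import sympy as sp

t, rho = sp.symbols('t rho', positive=True)
phi2 = sp.Function('phi_2')

# phi_2' = -exp(-rho t)  =>  phi_2(t) = C1 + exp(-rho t)/rho
sol2 = sp.dsolve(sp.Eq(phi2(t).diff(t), -sp.exp(-rho*t)), phi2(t))
print(sol2)

# the two exponential terms in the phi_1 equation collapse to -rho*t*exp(-rho*t)
src = sp.exp(-rho*t) - sp.exp(-rho*t)*(rho*t + 1)
print(sp.simplify(src))
```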
With the ansatz
$$G_2^2(t,u_x) = \phi_1(t)\,u_x^{\frac{\theta}{\theta-1}},$$
the function $G_2^2$ solves (49) (with $h = h_2$) if and only if $\phi_1$ solves the following ODE
$$\partial_t\phi_1(t) = -\frac{\delta(t)\,\theta}{2(\theta-1)^2}\,\phi_1 + \frac{1}{\theta}\exp\left(\frac{\rho}{\theta-1}\,t\right).$$
If we denote by $\Omega_3^1 = \exp(rt)\,x\,u_x + G_3^1(t,u_x)$ and by $\Omega_3^2 = \exp(rt)\,x\,u_x + G_3^2(t,u_x)$ the symmetries of Equation (46) when $K = K_1$ and $K = K_2$, respectively, then $G_3^i$, $i = 1,2$, solves (50) with $h_1$ and $h_2$ given above, respectively.
With the ansatz
$$G_3^1(t,u_x) = \phi_1(t) + \phi_2(t)\log(u_x),$$
the function $G_3^1$ solves (50) (with $h = h_1$) if and only if $\phi_1$ and $\phi_2$ solve the following ODEs
$$\partial_t\phi_1(t) = \frac{\delta(t)}{2}\,\phi_2(t) + \exp\big((r-\rho)t\big), \qquad \partial_t\phi_2(t) = 0.$$
With the ansatz
$$G_3^2(t,u_x) = \phi_1(t)\,u_x^{\frac{\theta}{\theta-1}},$$
the function $G_3^2$ solves (50) (with $h = h_2$) if and only if $\phi_1$ solves the following ODE
$$\partial_t\phi_1(t) = -\frac{\delta(t)\,\theta}{2(\theta-1)^2}\,\phi_1 + \exp\left(\left(r + \frac{\rho}{\theta-1}\right)t\right).$$

5.2. Non-Markovian Case

We consider here the case where $\mu(t)$ and $\sigma(t)$ are predictable continuous stochastic processes with respect to the filtration $(\mathcal{F}_t)_{t\geq 0}$, that is, the problem now fits in the more general model treated in Section 2.3. This case is relevant, for example, when we consider stochastic volatility models (see, e.g., [36,38,66] for stochastic volatility models and [39] for the non-Markovian Merton problem of the form approached here). We assume also that $g(x,\omega)$ is an $\mathcal{F}_T$-measurable random field. In this case, the value function is a random field depending on the time $t$ and the variable $x$, of the form
$$U(t,x) = \mathbb{E}\left[\left.\int_t^T L(s,\alpha_s)\,\mathrm{d}s + g(X_T,\omega)\,\right|\,\mathcal{F}_t\right]\bigg|_{\{X_t = x\}}.$$
The random field $U$ satisfies the following backward stochastic PDE
$$dU(t,x) + \sup_{(c,\gamma)\in K} H_S\big(t,x,\partial_x U(t,x), D^2 U(t,x), \partial_x\Psi(t,x),(c,\gamma)\big)\,dt = \Psi(t,x)\,dW_t, \tag{68}$$
where
$$H_S\big(t,x,u_x,u_{xx},\psi_x,(c,\gamma)\big) = \exp(-\rho t)\,V(c) + \big(\gamma(\mu(t)-r)+r\big)x\,u_x - c\,u_x + x\,\sigma(t)\,\gamma\,\psi_x + \frac{1}{2}\,u_{xx}\,\sigma(t)^2\gamma^2 x^2.$$
The optimal value of $(c,\gamma)$ is given by the solution to the system
$$\partial_c H = \exp(-\rho t)\,V'(c) - u_x = 0, \qquad \partial_\gamma H = (\mu(t)-r)\,x\,u_x + x\,\sigma(t)\,\psi_x + u_{xx}\,\sigma^2(t)\,x^2\gamma = 0,$$
which means that
$$\gamma^* = -\frac{(\mu(t)-r)\,u_x + \sigma(t)\,\psi_x}{x\,u_{xx}\,\sigma(t)^2}, \tag{69}$$
while $c^*$ is given by Equation (44). This implies that
$$H_S(t,x,u_x,u_{xx},\psi_x) = -\frac{\big((\mu(t)-r)\,u_x + \sigma(t)\,\psi_x\big)^2}{2\,\sigma(t)^2\,u_{xx}} + K(t,x,u_x),$$
where $K(t,x,u_x)$ is given by Equation (47). In the following, we write
$$\delta_S(t) = \frac{(\mu(t)-r)^2}{\sigma(t)^2},$$
where we recall that here $\mu$ and $\sigma$ are generic predictable continuous stochastic processes. So we consider a generating function $\Omega^S(t,x,u,u_x,\omega)$, depending explicitly on $\omega$.
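The reduction of $H_S$ after substituting $\gamma^*$ is a completing-the-square computation, which can be verified symbolically; a sketch where the consumption terms are kept generic and collected in `K_part`:

```python
import sympy as sp

t, x, ux, uxx, psix, c, g, mu, r, sigma, rho = sp.symbols(
    't x u_x u_xx psi_x c gamma mu r sigma rho', positive=True)
V = sp.Function('V')  # consumption utility kept generic

HS = (sp.exp(-rho*t)*V(c) + (g*(mu - r) + r)*x*ux - c*ux
      + x*sigma*g*psix + sp.Rational(1, 2)*uxx*sigma**2*g**2*x**2)

g_star = -((mu - r)*ux + sigma*psix)/(x*uxx*sigma**2)

K_part = sp.exp(-rho*t)*V(c) - c*ux + r*x*ux   # the terms that end up in K(t, x, u_x)
target = -((mu - r)*ux + sigma*psix)**2/(2*sigma**2*uxx) + K_part
print(sp.simplify(HS.subs(g, g_star) - target))  # expect 0
```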
Theorem 15.
The generating function $\Omega^S(t,x,u,u_x,\omega)$ is a symmetry of Equation (68) in the sense of Definition 8 if and only if $\Omega^S$ has one of the following forms
$$\Omega_1^S = \exp\left(-\frac{r(r-1)t}{2}\right)\left[u\,u_x^{r} - x\,u_x^{r+1}\right] + G_1^S(t,u_x), \qquad \Omega_2^S = u + G_2^S(t,u_x),$$
$$\Omega_3^S = \exp(rt)\,x\,u_x + G_3^S(t,u_x), \qquad \Omega_4^S = G_4^S(t,u_x),$$
where $G_1^S, G_2^S, G_3^S, G_4^S\colon \mathbb{R}_+\times\mathbb{R}\times\Omega\to\mathbb{R}$ are smooth predictable random fields satisfying the following random PDEs
$$-2\exp\left(-\frac{r(r-1)t}{2}\right)u_x^{r}\,h_V + \delta_S(t)\,u_x^2\,\partial_{u_x u_x}G_1^S + 2\,\partial_t G_1^S = 0, \tag{70}$$
$$2\,u_x\,\partial_{u_x}h_V - 2\,h_V + \delta_S(t)\,u_x^2\,\partial_{u_x u_x}G_2^S + 2\,\partial_t G_2^S = 0, \tag{71}$$
$$2\exp(rt)\,u_x\,\partial_{u_x}h_V + \delta_S(t)\,u_x^2\,\partial_{u_x u_x}G_3^S + 2\,\partial_t G_3^S = 0, \tag{72}$$
$$\delta_S(t)\,u_x^2\,\partial_{u_x u_x}G_4^S + 2\,\partial_t G_4^S = 0. \tag{73}$$
Proof. 
Since
$$H_S(t,x,u_x,u_{xx},0) = -\frac{\delta_S(t)}{2}\,\frac{u_x^2}{u_{xx}} + K(t,x,u_x),$$
which is formally equal to the $H$ defined in Section 5.1, the theorem can be easily proven using the same argument exploited in the proof of Theorem 14. □
Remark 16.
The symmetries $\Omega_i^S$ of Theorem 15 depend on $\omega$ since the functions $G_i^S$ solve the random Equations (70)–(73) (where the random dependence enters through $\delta_S(t)$).
Corollary 3.
Let $(U(t,x),\Psi(t,x))$ be a classical solution to Equation (68) and let $X_t$ be the solution to Equation (43) with $(\gamma,c)$ satisfying equalities (44) and (69). Then, the processes
$$\tilde O_{1,t} = \exp\left(-\frac{r(r-1)t}{2}\right)\left[U(t,X_t)\,\big(\partial_x U(t,X_t)\big)^{r} - X_t\,\big(\partial_x U(t,X_t)\big)^{r+1}\right] + G_1^S\big(t,\partial_x U(t,X_t)\big) - I_1(t,U,\partial_x U,\Psi),$$
$$\tilde O_{2,t} = U(t,X_t) + G_2^S\big(t,\partial_x U(t,X_t)\big) - I_2(t,U,\partial_x U,\Psi),$$
$$\tilde O_{3,t} = \exp(rt)\,X_t\,\partial_x U(t,X_t) + G_3^S\big(t,\partial_x U(t,X_t)\big) - I_3(t,U,\partial_x U,\Psi),$$
$$\tilde O_{4,t} = G_4^S\big(t,\partial_x U(t,X_t)\big) - I_4(t,U,\partial_x U,\Psi),$$
are local martingales. Here, $I_1$, $I_2$, $I_3$, and $I_4$ are the integral expressions associated with $O_{1,t}$, $O_{2,t}$, $O_{3,t}$, and $O_{4,t}$, respectively, by the relation given in Equation (40).
Proof. 
The first statement follows from Theorems 13 and 15. □
In the particular case where $r = 0$ and $V(z) = z^{\theta}/\theta$ (with $\theta\in(0,1)$), or in the case without consumption ($V \equiv 0$, $c = 0$), we can obtain the following stronger result.
Corollary 4.
Suppose that $r = 0$ and $V(z) = z^{\theta}/\theta$. Then, we have that
$$O_t = U(t,X_t) - \frac{1}{\theta}\,X_t\,\partial_x U(t,X_t)$$
is a local martingale. Furthermore, if $V \equiv 0$ (and we consider $c = 0$) we have that, for any $c_1,c_2\in\mathbb{R}$,
$$O_t^{c_1,c_2} = c_1\,U(t,X_t) + c_2\,X_t\,\partial_x U(t,X_t)$$
is a local martingale.
Proof. 
If $V(z) = z^{\theta}/\theta$ we have
$$h_V(t,u_x) = -\frac{\theta-1}{\theta}\exp\left(\frac{\rho t}{\theta-1}\right)u_x^{\frac{\theta}{\theta-1}}.$$
This implies that $\left(1 - \frac{1}{\theta}\right)u_x\,\partial_{u_x}h_V - h_V = 0$. So, using Equations (71) and (72), we get that
$$\Omega_5 = \Omega_2 - \frac{1}{\theta}\,\Omega_3 = u - \frac{1}{\theta}\,x\,u_x + G_5(t,u_x),$$
where $G_5(t,u_x)$ is any solution to the equation
$$\delta_S(t)\,u_x^2\,\partial_{u_x u_x}G_5 + 2\,\partial_t G_5 = 0, \tag{74}$$
is a symmetry of Equation (68). A particular solution to Equation (74) is $G_5 \equiv 0$, in which case $\Omega_5$ takes the form $\Omega_5 = u - \frac{1}{\theta}\,x\,u_x$; this function satisfies the hypotheses of Corollary 1, from which we get the thesis. The second part of the corollary can be proven in a similar way. □
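For the no-consumption case of Corollary 4, the mean constancy implied by the (local) martingale property can be checked numerically, assuming the classical closed-form value function $U(t,x) = e^{a(T-t)}x^{\theta}/\theta$ of the Merton problem with $r = 0$, no consumption, and terminal utility $g(x) = x^{\theta}/\theta$; an illustrative sketch (all numerical parameters below are assumptions for illustration only):

```python
import math

# illustrative parameters (assumptions, not taken from the paper)
mu, sigma, theta, T, x0 = 0.08, 0.2, 0.5, 1.0, 1.0
delta = mu**2/sigma**2                  # delta_S(t) with r = 0 and constant coefficients
a = delta*theta/(2*(1 - theta))         # exponent in U(t,x) = e^{a(T-t)} x^theta / theta
gamma = mu/(sigma**2*(1 - theta))       # optimal (constant) portfolio fraction

def EU(t):
    """E[U(t, X_t)] for the optimally controlled wealth X, via lognormal moments."""
    drift = gamma*mu - 0.5*gamma**2*sigma**2
    m = x0**theta*math.exp(theta*drift*t + 0.5*theta**2*gamma**2*sigma**2*t)
    return math.exp(a*(T - t))*m/theta

print(EU(0.0), EU(0.5), EU(1.0))  # the three values coincide
```

Since $X_t\,\partial_x U(t,X_t) = \theta\,U(t,X_t)$ for this $U$, the constancy of $\mathbb{E}[U(t,X_t)]$ also covers the processes $O_t^{c_1,c_2}$.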
As already mentioned in the introduction, the construction of the martingales obtained in Corollaries 2 and 4 could be deeply connected to the well-known explicit solutions of Merton’s optimal portfolio problem (see, e.g., [35] for a review and [36,37] for recent developments on the explicit solutions of Merton’s problem). The investigation of the link between these two notions will be the subject of a future paper.

6. Conclusions

We proposed a generalization of the Noether theorem to a generic stochastic optimal control problem, exploiting the tools of contact geometry and contact transformations. The results are formulated in Theorems 12 and 13 and Corollary 1, and they establish a relation between any contact symmetry of the HJB equation associated with an optimal control problem and a local martingale given by the generating function of the contact symmetry. For the case of deterministic coefficients and Lagrangian, we considered a generating function $\Omega(t,x,u,u_x)$ of a contact symmetry of the associated HJB equation and we showed that the process $\Omega(t,X_t,U(t,X_t),\partial_x U(t,X_t))$, where $U(t,x)$ is the solution to the HJB equation and $X_t$ is the solution to the stochastic optimal control problem, is a local martingale. We also proved an analogous result for a stochastic optimal control problem with stochastic coefficients and Lagrangian.
As we pointed out in the introduction, our results can be seen as a generalization of some previous works by Zambrini et al. (see [24,25,30]) in two directions: first, we considered a wider class of transformations; second, we extended the mentioned results to the case of stochastic backward HJB equations.
We applied our results to Merton’s portfolio problem, building some martingales related to its solution(s). We considered both the Markovian and the non-Markovian case.
Interesting future developments of this work can be the investigation of the case where the solutions of HJB equations are viscosity solutions (and not classical), so that there is not enough regularity to apply Itô’s formula, and the study of symmetries of stochastic backward equations based on the HJB equations exploited in the present paper. Finally, giving a financial meaning to the martingales we built in the case of Merton’s problem could be another interesting line of research.

Author Contributions

Conceptualization, F.C.D.V., E.M., M.T. and S.U. All authors have read and agreed to the published version of the manuscript.

Funding

The first, second, and fourth authors are funded by Istituto Nazionale di Alta Matematica “Francesco Severi” (INdAM), Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA): “Lie’s Symmetries Analysis of Stochastic Optimal Control Problems with Applications”. The first and third authors are funded by the DFG under Germany’s Excellence Strategy, GZ 2047/1, project-id 390685813.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HJB  Hamilton–Jacobi–Bellman
ODE  Ordinary differential equations
SDE  Stochastic differential equations
PDE  Partial differential equations
P-a.s.  P-almost surely

References

  1. Hawkins, T. Emergence of the Theory of Lie Groups: An Essay in the History of Mathematics 1869–1926; Sources and Studies in the History of Mathematics and Physical Sciences; Springer: New York, NY, USA, 2012.
  2. Olver, P.J. Applications of Lie Groups to Differential Equations, 2nd ed.; Graduate Texts in Mathematics; Springer: New York, NY, USA, 1993; Volume 107.
  3. Stephani, H. Differential Equations: Their Solution Using Symmetries; Cambridge University Press: Cambridge, UK, 1989.
  4. Arnol’d, V.I. Mathematical Methods of Classical Mechanics, 2nd ed.; Graduate Texts in Mathematics; Springer: New York, NY, USA, 1989; Volume 60, p. xvi+508.
  5. Van der Schaft, A.J. Symmetries in optimal control. SIAM J. Control Optim. 1987, 25, 245–259.
  6. Torres, D.F.M. Carathéodory equivalence, Noether theorems, and Tonelli full-regularity in the calculus of variations and optimal control. J. Math. Sci. 2004, 120, 1032–1050.
  7. Cardin, F.; Viterbo, C. Commuting Hamiltonians and Hamilton-Jacobi multi-time equations. Duke Math. J. 2008, 144, 235–284.
  8. Treanţă, S.; Udrişte, C. Single-time and multi-time Hamilton-Jacobi theory based on higher order Lagrangians. In Mathematical and Statistical Applications in Life Sciences and Engineering; Springer: Singapore, 2017; pp. 71–95.
  9. Treanţă, S. Noether-Type First Integrals Associated with Autonomous Second-Order Lagrangians. Symmetry 2019, 11, 1088.
  10. Albeverio, S.; De Vecchi, F.C.; Morando, P.; Ugolini, S. Weak symmetries of stochastic differential equations driven by semimartingales with jumps. Electron. J. Probab. 2020, 25, 34.
  11. Albeverio, S.; De Vecchi, F.C.; Morando, P.; Ugolini, S. Random transformations and invariance of semimartingales on Lie groups. Random Oper. Stoch. Equ. 2021, 29, 41–65.
  12. De Vecchi, F.C.; Morando, P.; Ugolini, S. A note on symmetries of diffusions within a martingale problem approach. Stoch. Dyn. 2019, 19, 1950011.
  13. De Vecchi, F.C.; Morando, P.; Ugolini, S. Reduction and reconstruction of SDEs via Girsanov and quasi Doob symmetries. J. Phys. A 2021.
  14. De Vecchi, F.C.; Morando, P.; Ugolini, S. Symmetries of stochastic differential equations using Girsanov transformations. J. Phys. A 2020, 53, 135204.
  15. De Vecchi, F.C.; Romano, A.; Ugolini, S. A symmetry-adapted numerical scheme for SDEs. J. Geom. Mech. 2019, 11, 325–359.
  16. Gaeta, G. W-symmetries of Ito stochastic differential equations. J. Math. Phys. 2019, 60, 053501.
  17. Gaeta, G. Symmetry of stochastic non-variational differential equations. Phys. Rep. 2017, 686, 1–62.
  18. Gaeta, G.; Spadaro, F. Symmetry classification of scalar Ito equations with multiplicative noise. J. Nonlinear Math. Phys. 2020, 27, 679–687.
  19. Kozlov, R. Lie point symmetries of Stratonovich stochastic differential equations. J. Phys. A 2018, 51, 505201.
  20. Liao, M. Invariant diffusion processes under Lie group actions. Sci. China Math. 2019, 62, 1493–1510.
  21. Arnaudon, M.; Zambrini, J.C. A stochastic look at geodesics on the sphere. In Geometric Science of Information; Nielsen, F., Barbaresco, F., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10589, pp. 470–476.
  22. Baez, J.C.; Fong, B. A Noether theorem for Markov processes. J. Math. Phys. 2013, 54, 013301.
  23. Luo, S.; Shen, J.; Shen, Y. A Noether theorem for random locations. arXiv 2018, arXiv:1811.03490.
  24. Lescot, P.; Zambrini, J.C. Isovectors for the Hamilton-Jacobi-Bellman equation, formal stochastic differentials and first integrals in Euclidean quantum mechanics. In Seminar on Stochastic Analysis, Random Fields and Applications IV; Dalang, R.C., Dozzi, M., Russo, F., Eds.; Progress in Probability; Birkhäuser: Basel, Switzerland, 2004; Volume 58, pp. 187–202.
  25. Lescot, P.; Zambrini, J.C. Probabilistic deformation of contact geometry, diffusion processes and their quadratures. In Seminar on Stochastic Analysis, Random Fields and Applications V; Dalang, R.C., Dozzi, M., Russo, F., Eds.; Progress in Probability; Birkhäuser: Basel, Switzerland, 2008; Volume 59, pp. 203–226.
  26. Misawa, T. Conserved quantities and symmetry for stochastic dynamical systems. Phys. Lett. A 1994, 195, 185–189.
  27. Misawa, T. New conserved quantities derived from symmetry for stochastic dynamical systems. J. Phys. A 1994, 27, L777–L782.
  28. Privault, N.; Zambrini, J.C. Stochastic deformation of integrable dynamical systems and random time symmetry. J. Math. Phys. 2010, 51, 082104.
  29. Thieullen, M.; Zambrini, J.C. Symmetries in the stochastic calculus of variations. Probab. Theory Relat. Fields 1997, 107, 401–427.
  30. Zambrini, J.C. On the geometry of the Hamilton-Jacobi-Bellman equation. J. Geom. Mech. 2009, 1, 369–387.
  31. Misawa, T. Conserved quantities and symmetries related to stochastic dynamical systems. Ann. Inst. Stat. Math. 1999, 51, 779–802.
  32. Zambrini, J.C. The research program of stochastic deformation (with a view toward geometric mechanics). In Stochastic Analysis: A Series of Lectures; Progress in Probability; Birkhäuser/Springer: Basel, Switzerland, 2015; Volume 68, pp. 359–393.
  33. Peng, S.G. Stochastic Hamilton-Jacobi equations. SIAM J. Control Optim. 1992, 30, 284–304.
  34. Merton, R.C. Lifetime portfolio selection under uncertainty: The continuous-time case. Rev. Econ. Stat. 1969, 51, 247–257.
  35. Rogers, L.C.G. Optimal Investment; SpringerBriefs in Quantitative Finance; Springer: Berlin/Heidelberg, Germany, 2013; Volume 1007.
  36. Benth, F.E.; Karlsen, K.H.; Reikvam, K. Merton’s portfolio optimization problem in a Black and Scholes market with non-Gaussian stochastic volatility of Ornstein-Uhlenbeck type. Math. Financ. 2003, 13, 215–244.
  37. Biagini, S.; Pınar, M.Ç. The robust Merton problem of an ambiguity averse investor. Math. Financ. Econ. 2017, 11, 1–24.
  38. Fouque, J.P.; Sircar, R.; Zariphopoulou, T. Portfolio optimization and stochastic volatility asymptotics. Math. Financ. 2017, 27, 704–745.
  39. Øksendal, B.; Sulem, A.; Zhang, T. A stochastic HJB equation for optimal control of forward-backwards SDEs. In The Fascination of Probability, Statistics and Their Applications; Springer: Cham, Switzerland, 2016; pp. 435–446.
  40. Askenazy, P. Symmetry and optimal control in economics. J. Math. Anal. Appl. 2003, 282, 603–613.
  41. Fleming, W.H.; Rishel, R.W. Deterministic and Stochastic Optimal Control; Applications of Mathematics; Springer: New York, NY, USA, 1975; Volume 1.
  42. Pham, H. Continuous-Time Stochastic Control and Optimization with Financial Applications; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 2009; Volume 61.
  43. Touzi, N. Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE; Fields Institute Monographs; Springer Science & Business Media: New York, NY, USA, 2012; Volume 29.
  44. Yong, J.; Zhou, X.Y. Stochastic Controls: Hamiltonian Systems and HJB Equations; Stochastic Modelling and Applied Probability; Springer: New York, NY, USA, 1999; Volume 43.
  45. Ikeda, N.; Watanabe, S. Stochastic Differential Equations and Diffusion Processes; North Holland Publ. Co.: Amsterdam, The Netherlands, 1989.
  46. Kunita, H. Some extensions of Ito’s formula. In Séminaire de Probabilités XV 1979/80; Azéma, J., Yor, M., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1981; Volume 850, pp. 118–141.
  47. Kunita, H. Stochastic Flows and Stochastic Differential Equations; Cambridge University Press: Cambridge, UK, 1990.
  48. Rogers, L.C.G.; Williams, D. Diffusions, Markov Processes and Martingales: Volume 2, Itô Calculus; Cambridge University Press: Cambridge, UK, 2000.
  49. Buckdahn, R.; Ma, J. Pathwise stochastic control problems and stochastic HJB equations. SIAM J. Control Optim. 2007, 45, 2224–2256.
  50. Chang, M.H.; Pang, T.; Yong, J. Optimal stopping problem for stochastic differential equations with random coefficients. SIAM J. Control Optim. 2009, 48, 941–971.
  51. Englezos, N.; Karatzas, I. Utility maximization with habit formation: Dynamic programming and stochastic PDEs. SIAM J. Control Optim. 2009, 48, 481–520.
  52. Qiu, J. Viscosity Solutions of Stochastic Hamilton–Jacobi–Bellman Equations. SIAM J. Control Optim. 2018, 56, 3708–3730.
  53. De Vecchi, F.C.; Morando, P. The geometry of differential constraints for a class of evolution PDEs. J. Geom. Phys. 2020, 156, 103771.
  54. Gaeta, G. Nonlinear Symmetries and Nonlinear Equations; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 1994; Volume 299.
  55. Hydon, P.E. Symmetry Methods for Differential Equations: A Beginner’s Guide; Cambridge Texts in Applied Mathematics; Cambridge University Press: Cambridge, UK, 2000.
  56. Bocharov, A.; Chetverikov, V.; Duzhin, S.; Khor’kova, N.; Krasil’shchik, I.; Samokhin, A.; Torkhov, Y.; Verbovetsky, A.; Vinogradov, A. Symmetries and Conservation Laws for Differential Equations of Mathematical Physics; Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1999; Volume 182, p. 333.
  57. Saunders, D.J. The Geometry of Jet Bundles; London Mathematical Society Lecture Note Series; Cambridge University Press: Cambridge, UK, 1989; Volume 142, p. viii+293.
  58. Arnol’d, V.I. Geometrical Methods in the Theory of Ordinary Differential Equations; Grundlehren der Mathematischen Wissenschaften; Springer: New York, NY, USA, 1988; Volume 250.
  59. Geiges, H. An Introduction to Contact Topology; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2008; Volume 109, p. xvi+440.
  60. Geiges, H. A brief history of contact geometry and topology. Expo. Math. 2001, 19, 25–53.
  61. de Carvalho Griebeler, M.; de Araújo, J.P. General envelope theorems for multidimensional type spaces. In Proceedings of the 31º Meeting of the Brazilian Econometric Society, Rio de Janeiro, Brazil, 9–11 December 2009.
  62. Milgrom, P.; Segal, I. Envelope theorems for arbitrary choice sets. Econometrica 2002, 70, 583–601.
  63. de Lara, M.C. Reduction of the Zakai equation by invariance group techniques. Stoch. Process. Their Appl. 1998, 73, 119–130.
  64. De Vecchi, F.C. Finite dimensional solutions to SPDEs and the geometry of infinite jet bundles. arXiv 2017, arXiv:1712.08490.
  65. Pratt, J.W. Risk aversion in the small and in the large. In Uncertainty in Economics; Elsevier: Amsterdam, The Netherlands, 1978; pp. 59–79.
  66. Lorig, M.; Sircar, R. Portfolio optimization under local-stochastic volatility: Coefficient Taylor series approximations and implied sharpe ratio. SIAM J. Financ. Math. 2016, 7, 418–447.
