
Symmetry 2019, 11(9), 1153; https://doi.org/10.3390/sym11091153

Article
Lie-Point Symmetries and Backward Stochastic Differential Equations
by ¹ and ²,*
¹ School of Science, Tianjin University of Technology, Tianjin 300384, China
² Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan 250199, China
* Author to whom correspondence should be addressed.
Received: 22 July 2019 / Accepted: 5 September 2019 / Published: 11 September 2019

## Abstract

In this paper, we introduce the Lie-point symmetry method into backward stochastic differential equations and forward–backward stochastic differential equations, and obtain the corresponding deterministic equations.
Keywords: backward stochastic differential equation; Lie-point symmetry; forward–backward stochastic differential equation

## 1. Introduction

In general, almost all differential equations are difficult to solve explicitly, and numerical methods are frequently used to obtain approximate solutions. Exact solutions remain important, however, because they allow one to analyze the properties of the equations. The symmetry method is one of the methods used for finding exact solutions.
The symmetry method for differential equations has long been a popular tool in the study of ordinary differential equations (ODEs) and partial differential equations (PDEs); general surveys of this method can be found in [1,2,3,4], among others.
Simply speaking, the symmetry method concerns groups of transformations that map a solution of a given system of equations to another solution of the same system. The method was initiated by Sophus Lie in the late 19th century, who found that the problem of finding such a group of transformations (known as a Lie group) reduces to solving a system of determining equations for its infinitesimal generators. The method helps to reduce the order of differential equations, to find invariant solutions, and so on.
In the 1990s, applications of Lie group theory to stochastic differential equations appeared, for example [5,6,7,8]. Since then, many papers have contributed to this area. The symmetries studied include fiber-preserving symmetries, generalized Lie-point symmetries, W-symmetries ([9,10]), and random Lie symmetries [11]. They are mostly applied to Itô SDEs driven by Brownian motion; there are also papers applying the method to SDEs driven by both Brownian motion and Poisson processes, or even by general semimartingales [12].
In this paper, we introduce the symmetry method into multidimensional backward stochastic differential equations (BSDEs).
In 1990, Pardoux and Peng [13] generalized the linear backward stochastic differential equation that appeared in [14], where Bismut solved a stochastic control problem in 1973, and obtained a general, usually nonlinear, BSDE. They proved existence and uniqueness of solutions under certain conditions. The BSDE with parameters $(g, T, Y)$ (denoted simply BSDE $(g, T, Y)$) is as follows
$y_t = Y + \int_t^T g(s, y_s, z_s)\,ds - \int_t^T z_s\,dw_s,$
where $T$ denotes the terminal time, $Y$ the terminal condition, and $g$ the generator. The solution of the BSDE is a pair of processes $(y_t, z_t)$ satisfying the above equation. Since then, BSDE theory, as a new field, has developed rapidly, both in theory and in practical applications such as finance and stochastic control. However, obtaining exact solutions of BSDEs is generally difficult, and hardly any paper addresses it.
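To make the object concrete, here is a small numerical sketch (our illustration, not from the paper): for the trivial generator $g \equiv 0$ and terminal condition $Y = w_T^2$ in dimension $d = 1$, Itô's formula yields the explicit solution pair $y_t = w_t^2 + (T-t)$, $z_t = 2w_t$, and the defining relation can be checked on simulated paths.

```python
import numpy as np

# Sketch (our illustration, not from the paper): for generator g = 0 and
# terminal condition Y = w_T^2 (d = 1), Ito's formula gives the explicit
# solution pair y_t = w_t^2 + (T - t), z_t = 2 w_t.  We verify the BSDE
# relation y_t = Y - int_t^T z_s dw_s pathwise on an Euler grid.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 400, 20000
dt = T / n_steps
dw = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
w = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dw, axis=1)])  # w_0 = 0

t_idx = n_steps // 2                              # check at t = T/2
y_t = w[:, t_idx] ** 2 + (T - t_idx * dt)         # candidate y_t
z = 2.0 * w[:, :-1]                               # z_s = 2 w_s (left endpoints)
stoch_int = np.sum(z[:, t_idx:] * dw[:, t_idx:], axis=1)  # int_t^T z dw
Y = w[:, -1] ** 2
residual = y_t - (Y - stoch_int)                  # vanishes as dt -> 0
print(np.mean(np.abs(residual)))                  # small discretization error
```

For a generic generator no such closed form is available, which is what makes explicit solution techniques, such as the symmetry approach developed below, attractive.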
Because the Lie-point symmetry method relies only on the structure of the equation, it can be applied to any BSDE whose solution exists.
When we study the Lie-point symmetry of a BSDE, it is natural to consider the infinitesimals of the spatial variable y and the temporal variable t. However, z, the special process appearing in the solution of a BSDE, raises a natural question: should z be included in the transformation? The answer is no. In fact, since $z_t$ is a process accompanying $y_t$, the transformation of z is obtained spontaneously once the transformation of y is fixed. Furthermore, we carefully study the properties of the derived transformation for z.
In this paper, we consider two cases. In the first, the transformation involves no time change, so the terminal time T is always left invariant. In the second, a time change is included; however, the transformation of t depends on t only, i.e., we consider only projectable transformations. We deduce the transformation of z in both cases and obtain the determining equations for the corresponding symmetries.
As with the conserved quantity (or first integral) of an SDE, we define the ‘martingale transformation function’, a function that transforms the solution $y_t$ of a BSDE into a martingale. This kind of function leads to a new method for solving BSDEs.
Finally, we use the symmetry method to study the solutions of forward–backward stochastic differential equations (FBSDEs), where handling the terminal condition of the FBSDE becomes the key point.
This paper is organized as follows. In Section 2, we introduce basic facts about BSDEs and the application of the symmetry method to SDEs. In Section 3, we study the spatial symmetry of BSDEs. In Section 4, by analogy with the notion of conserved quantity, we introduce the so-called martingale transformation function, which turns the solution of a BSDE into a martingale, and obtain the conditions under which multiplying a symmetry by a martingale transformation function yields another symmetry. In Section 5, adding a time change, which induces a transformation of the Brownian motion, we obtain the generalized symmetries. Finally, in Section 6, we study symmetry transformations of forward–backward stochastic differential equations (FBSDEs).

## 2. Some Facts about BSDE and Lie-Point Symmetries

In this section, we introduce the basic knowledge about BSDEs and Lie-point symmetries.
We denote by $\mathbb{R}^n$ the n-dimensional Euclidean space, equipped with the standard inner product and the Euclidean norm $|\cdot|$. We also denote by $\mathbb{R}^{n\times d}$ the collection of all $n\times d$ real matrices; for a matrix $z=(z^{ij})_{n\times d}$, we write $z^i := (z^{i1},\cdots,z^{id})$ and $|z| := \sqrt{\mathrm{Tr}(zz^T)}$, where $\mathrm{Tr}$ denotes the trace and $z^T$ the transpose of z. We simply write $z_1z_2$ for $z_1z_2^T$ when no confusion arises. For f a function of x, $\partial_x f$ denotes the derivative of f with respect to x. It may be a real number if both f and x are $\mathbb{R}$-valued, a vector if f is $\mathbb{R}$-valued while x is $\mathbb{R}^n$-valued, or a Jacobian matrix if x is $\mathbb{R}^n$-valued and f is $\mathbb{R}^m$-valued.

#### 2.1. BSDEs

We consider a probability space $(\Omega,\mathcal{F},P)$ on which a d-dimensional Brownian motion w is defined; $(\mathcal{F}_t^w)_{t\ge 0}$ denotes the filtration generated by w, assumed to satisfy the usual conditions.
For a $\sigma$-field $\mathcal{C}$ on this probability space, $L^2_m(\mathcal{C}) := \{Y : Y$ is an $\mathbb{R}^m$-valued, $\mathcal{C}$-measurable random variable satisfying $E|Y|^2 < \infty\}$.
Now we explain BSDE $(g,T,Y)$ (Equation (1)) in detail. The terminal time is $T>0$; the terminal condition Y is usually assumed to belong to $L^2_m(\mathcal{F}_T^w)$; the generator is $g:\mathbb{R}_+\times\mathbb{R}^m\times\mathbb{R}^{m\times d}\to\mathbb{R}^m$. Please note that g is usually allowed to depend on $\omega$ in the BSDE literature; in this paper, however, as in most applications of the symmetry method to SDEs, we assume g is deterministic. The integral with respect to w is the Itô integral. A solution of BSDE (1) is a pair of processes $(y_t,z_t)_{t\in[0,T]}$ satisfying Equation (1), where $(y_t)_{t\in[0,T]}$ is a.s. continuous in t and $(\mathcal{F}_t^w)_{t\in[0,T]}$-adapted, and $(z_t)_{t\in[0,T]}$ is an $(\mathcal{F}_t^w)_{t\in[0,T]}$-predictable process usually satisfying $E[\int_0^T |z_t|^2\,dt] < \infty$.
The existence and uniqueness conditions for BSDEs are diverse. For example, existence and uniqueness hold when:
• g is Lipschitz continuous in y and z, i.e., $|g(t,y_1,z_1)-g(t,y_2,z_2)| \le K(|y_1-y_2|+|z_1-z_2|)$, see for example [13];
• g is of quadratic growth in z, see for example [15,16,17,18];
• g is uniformly continuous and of at most linear growth in $y,z$, see [19];
and existence alone holds when g is continuous in $y,z$ and of at most linear growth, see [20]. Different assumptions on g correspond to different conditions on $Y,y,z$. Since the symmetry method only requires the existence of solutions of the BSDE, we do not specify these conditions.

#### 2.2. Symmetries on SDEs

Consider the n-dimensional SDE
$dx(t) = b(t,x(t))\,dt + \sigma(t,x(t))\,dw_t,$
where $b:\mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}^n$ and $\sigma:\mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}^{n\times d}$ are smooth functions.
We introduce a projectable vector field X on $\mathbb{R}\times\mathbb{R}^n$, given by
$X = \tau(t)\frac{\partial}{\partial t} + \xi^i(t,x)\frac{\partial}{\partial x^i},$
where $\tau:\mathbb{R}_+\to\mathbb{R}$ and $\xi^i:\mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}$ ($i=1,\cdots,n$) are sufficiently smooth functions. Here $\xi^i$ (resp. $x^i$) denotes the i-th component of $\xi$ (resp. x), and we use the Einstein summation convention. ‘Projectable’ means that $\tau$ depends on t alone.
Every vector field (3) generates a flow acting on $\mathbb{R}\times\mathbb{R}^n$, which we denote by $\Phi_\varepsilon$,
$\Phi_\varepsilon = (\Phi_\varepsilon^t, \Phi_\varepsilon^x) = (\Phi_\varepsilon^t, \Phi_\varepsilon^{x^1}, \Phi_\varepsilon^{x^2}, \cdots, \Phi_\varepsilon^{x^n}),$
for $\varepsilon\in\mathbb{R}$. The relation between X and $\Phi_\varepsilon$ is given by
$\Phi_\varepsilon^t = t + \int_0^\varepsilon \tau(\Phi_r^t)\,dr, \qquad \Phi_\varepsilon^{x^i} = x^i + \int_0^\varepsilon \xi^i(\Phi_r^t,\Phi_r^x)\,dr.$
Therefore $\Phi_\varepsilon$ forms a one-parameter continuous transformation group; for this notion, the reader may refer to [2]. X is also called the infinitesimal generator of $\Phi_\varepsilon$.
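As a concrete illustration of relation (5) (our example, not from the paper), take the projectable vector field $X = t\,\partial_t + x\,\partial_x$, i.e., $\tau(t) = t$ and $\xi(t,x) = x$ with $n = 1$. Its flow is $\Phi_\varepsilon = (te^\varepsilon, xe^\varepsilon)$, which a simple forward-Euler integration of (5) recovers:

```python
import math

# Sketch: integrate the flow ODEs (5) for tau(t) = t, xi(t, x) = x.
# The exact flow is Phi_eps = (t e^eps, x e^eps).
def flow(t0, x0, eps, n=10000):
    h = eps / n
    t, x = t0, x0
    for _ in range(n):
        t += h * t   # d(Phi^t)/dr = tau(Phi^t) = Phi^t
        x += h * x   # d(Phi^x)/dr = xi(Phi^x)  = Phi^x
    return t, x

t1, x1 = flow(2.0, 3.0, 0.5)
print(t1, x1, 2.0 * math.exp(0.5), 3.0 * math.exp(0.5))
```

The one-parameter group property $\Phi_{\varepsilon_1}\circ\Phi_{\varepsilon_2} = \Phi_{\varepsilon_1+\varepsilon_2}$ is visible here: scaling by $e^{\varepsilon_1}$ and then by $e^{\varepsilon_2}$ equals scaling by $e^{\varepsilon_1+\varepsilon_2}$.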
Here is the definition of a symmetry of an SDE.
Definition 1.
The vector field X given in (3) is a symmetry of SDE (2) if the action given by $Φ ε$ keeps the solutions of (2) invariant.
Since $\Phi_\varepsilon$ acts on the variable t according to (5), if $\tau\ne 0$ this action transforms the Brownian motion w into a new process. It is proved that this new process, which we denote by $\tilde w$, is again a Brownian motion (see [21,22] for more details).
Generally, for variables $(t,x)$ and the transformation $\Phi_\varepsilon$, we denote the transformed variables simply by $\tilde t$ and $\tilde x$, i.e., $(\tilde t,\tilde x) = \Phi_\varepsilon(t,x) = (\Phi_\varepsilon^t, \Phi_\varepsilon^x)$. Let $x = x(t)$ be a solution of (2). Then Definition 1 means that the vector field X is a symmetry of (2) if and only if the transformed process $\tilde x = \tilde x(\tilde t)$ satisfies the equation
$d\tilde x(\tilde t) = b(\tilde t,\tilde x(\tilde t))\,d\tilde t + \sigma(\tilde t,\tilde x(\tilde t))\,d\tilde w_{\tilde t}.$
Please note that $x(t)$ and $\tilde x(\tilde t)$ have different initial values; the corresponding initial values $(t_0,x_0)$ and $(\tilde t_0,\tilde x_0)$ also satisfy $(\tilde t_0,\tilde x_0) = (\Phi_\varepsilon^{t_0}, \Phi_\varepsilon^{x_0})$.

#### 2.3. Conserved Quantity

Now we introduce the notion of conserved quantity (i.e., first integral) of SDE.
Definition 2.
A function $I ( t , x ) : R + × R n → R$ is a conserved quantity (or first integral) of a system of SDEs (2) if it remains constant on the solutions of SDEs.
For this notion, the readers can refer to [5,6,7,8,23] etc.
We also introduce the following Lemma (see [23]).
Lemma 1.
A system of SDEs (2) with a non-zero diffusion matrix admits two linearly connected symmetries
$X_1 = \tau(t,x)\frac{\partial}{\partial t} + \xi^i(t,x)\frac{\partial}{\partial x^i}$
and
$X_2 = I(t,x)\tau(t,x)\frac{\partial}{\partial t} + I(t,x)\xi^i(t,x)\frac{\partial}{\partial x^i}$
if and only if the function $I(t,x)$ is a conserved quantity (first integral) of the system.

## 3. Symmetry Transformation I

In this section, we start to introduce the Lie symmetry method into BSDE theory. Let us consider the m-dimensional BSDE without terminal condition and terminal time,
$dy_t = -g(t,y_t,z_t)\,dt + z_t\,dw_t.$
We suppose that the generator $g:\mathbb{R}_+\times\mathbb{R}^m\times\mathbb{R}^{m\times d}\to\mathbb{R}^m$ is sufficiently smooth with respect to $(t,y,z)$.
First, we consider a near-identity change of coordinates, passing from y to $\tilde y$ via
$y^i \to \tilde y^i = y^i + \varepsilon\xi^i(t,y).$
Using Itô’s formula, we have
$d\tilde y^i = dy^i + \varepsilon\,d\xi^i = -g^i\,dt + z^i\,dw + \varepsilon\big\{\partial_t\xi^i\,dt + \partial_{y^j}\xi^i\,dy^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i z^jz^k\,dt\big\} = \big\{-g^i + \varepsilon\partial_t\xi^i - \varepsilon\partial_{y^j}\xi^i g^j + \tfrac{\varepsilon}{2}\partial^2_{y^jy^k}\xi^i z^jz^k\big\}\,dt + \big\{z^i + \varepsilon\partial_{y^j}\xi^i z^j\big\}\,dw,$
where on the right-hand side the default arguments are $(t,y,z)$. We let $\tilde z^i = z^i + \varepsilon\partial_{y^j}\xi^i z^j$. Please note that, at first order in $\varepsilon$, we have
$y^i = \tilde y^i - \varepsilon\xi^i(t,\tilde y), \qquad z^i = \tilde z^i - \varepsilon\partial_{y^j}\xi^i(t,\tilde y)\,\tilde z^j.$
We write simply $y^i = \tilde y^i - \varepsilon\xi^i$ and $z^i = \tilde z^i - \varepsilon\partial_{y^j}\xi^i\tilde z^j$. For g, we have the following deduction:
$g^i(t,y,z) = g^i\big(t,\,\tilde y - \varepsilon\xi(t,\tilde y),\,\tilde z - \varepsilon\partial_{y^j}\xi(t,\tilde y)\tilde z^j\big) = g^i(t,\tilde y,\tilde z) - \varepsilon\partial_{y^j}g^i(t,\tilde y,\tilde z)\,\xi^j(t,\tilde y) - \varepsilon\partial_{z^{kl}}g^i(t,\tilde y,\tilde z)\,\partial_{y^j}\xi^k(t,\tilde y)\,\tilde z^{jl} = g^i - \varepsilon\big[\partial_{y^j}g^i\xi^j + \partial_{z^{kl}}g^i\,\partial_{y^j}\xi^k\,\tilde z^{jl}\big]\Big|_{(t,\tilde y,\tilde z)}.$
We can deduce that
$d\tilde y^i = -\Big\{g^i - \varepsilon\Big[\partial_{y^j}g^i\xi^j + \partial_{z^{kl}}g^i\,\partial_{y^j}\xi^k\,\tilde z^{jl} + \partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i\,\tilde z^j\tilde z^k\Big]\Big\}\Big|_{(t,\tilde y,\tilde z)}\,dt + \tilde z^i\,dw.$
Thus, we have the following theorem.
Theorem 1.
The Lie-point spatial symmetries of BSDE (1) are given by $X = \xi(t,y)(\partial/\partial y)$, with $\xi$ satisfying the determining equations: for each $i = 1,\cdots,m$,
$\partial_{y^j}g^i\xi^j + \partial_{z^{kl}}g^i\,\partial_{y^j}\xi^k\,z^{jl} + \partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i z^jz^k = 0,$
with summation over $j,k = 1,\cdots,m$ and $l = 1,\cdots,d$.
According to (4) and (5), X corresponds to the continuous transformation group $\Phi_\varepsilon = (\Phi_\varepsilon^t, \Phi_\varepsilon^y) = (t, \Phi_\varepsilon^y)$. We also introduce the corresponding transformation of z generated by X, or equivalently by $\Phi_\varepsilon$:
$\Phi_\varepsilon^{z^i} = z^i + \int_0^\varepsilon \partial_{y^j}\xi^i\big|_{(t,\Phi_r^y)}\,\Phi_r^{z^j}\,dr,$
with summation over $j = 1,\cdots,m$, for $i = 1,\cdots,m$. We obtain the following relationship.
Theorem 2.
It holds that $\Phi_\varepsilon^z = \partial_y\Phi_\varepsilon^y\,z$ for $\Phi_\varepsilon^y$ and $\Phi_\varepsilon^z$ generated by (4) and (9), respectively.
Proof.
We consider the system of linear differential equations with variable coefficients
$dx(r) = \partial_y\xi\big|_{(t,\Phi_r^y)}\,x(r)\,dr.$
Please note that $\partial_y\xi$ is an $m\times m$ Jacobian matrix. We denote the transition matrix by $\phi(r,0)$. Then $\phi_{\cdot i}(r,0)$, the i-th column of $\phi(r,0)$, is the solution of Equation (10) with initial value $x(0) = e_i = (0,\cdots,0,1,0,\cdots,0)^T$, whose i-th component is 1 and whose other components are all 0.
For a general vector $x(0)\in\mathbb{R}^m$, the solution of Equation (10) with initial value $x(0)$ can be represented as $\phi(r,0)x(0)$. Therefore the solution of (9) can be represented as $\Phi_\varepsilon^z = \phi(\varepsilon,0)z$.
We only need to prove that $\phi(\varepsilon,0) \equiv \partial_y\Phi_\varepsilon^y$, i.e., that $(\partial_{y^i}\Phi_\varepsilon^{y^1},\cdots,\partial_{y^i}\Phi_\varepsilon^{y^m})^T$, the i-th column of $\partial_y\Phi_\varepsilon^y$, is a solution of (10) with initial value $x(0) = e_i$.
In fact, since each $\xi^i$ is a smooth function of y, $\Phi_r^{y^i}$ is also a smooth function of y and r. Therefore, we have
$\frac{d}{dr}\partial_{y^i}\Phi_r^{y^j} = \partial_{y^i}\frac{d}{dr}\Phi_r^{y^j} = \partial_{y^i}\xi^j(t,\Phi_r^y) = \partial_{y^k}\xi^j\big|_{(t,\Phi_r^y)}\,\partial_{y^i}\Phi_r^{y^k}$
(summation over k). Moreover, $\partial_{y^i}\Phi_0^y = e_i$, so $\partial_{y^i}\Phi_r^y$ is a solution of Equation (10) with initial value $e_i$. Thus the result follows. The proof is complete. □
With this transformation, the filtration is still generated by the same Brownian motion w and the terminal time does not change. However, the terminal condition changes. Therefore a solution $( y t , z t ) t ∈ [ 0 , T ]$ of BSDE $( g , T , Y )$ is transformed into the solution $( Φ ε y t , Φ ε z t ) = ( Φ ε y t , ∂ y Φ ε y t z t )$ of BSDE $( g , T , Φ ε Y )$.
Here are several examples of symmetries.
Example 1.
Let $m = 1$, $g = C|z|^2$. We get
$2C|z|^2\partial_y\xi + \partial_t\xi - C|z|^2\partial_y\xi + \tfrac{1}{2}\partial^2_{yy}\xi\,|z|^2 = 0.$
Rearranging the equation, we get
$\big(C\partial_y\xi + \tfrac{1}{2}\partial^2_{yy}\xi\big)|z|^2 + \partial_t\xi = 0.$
Since this must hold for arbitrary z (letting $|z|\to+\infty$), the above formula is equivalent to the equations
$C\partial_y\xi + \tfrac{1}{2}\partial^2_{yy}\xi = 0, \qquad \partial_t\xi = 0.$
Solving these equations, we get $\xi = c_1\exp(-2Cy) + c_2$ for any $c_1,c_2\in\mathbb{R}$. Thus, the symmetries are
$X_0 = \partial_y, \qquad X_1 = e^{-2Cy}\partial_y.$
According to (2.4) in [11], $X_0$ corresponds to
$\Phi_{\varepsilon,0}^y = y + \int_0^\varepsilon 1\,dr, \qquad \Phi_{\varepsilon,0}^z = z + \int_0^\varepsilon 0\,dr.$
Therefore $\Phi_{\varepsilon,0}^y = y + \varepsilon$ and $\Phi_{\varepsilon,0}^z = z$. $X_1$ corresponds to the transformation functions
$\Phi_{\varepsilon,1}^y = y + \int_0^\varepsilon e^{-2C\Phi_{r,1}^y}\,dr, \qquad \Phi_{\varepsilon,1}^z = z + \int_0^\varepsilon (-2C)e^{-2C\Phi_{r,1}^y}\,\Phi_{r,1}^z\,dr.$
Therefore $\Phi_{\varepsilon,1}^y = \frac{1}{2C}\ln|2C\varepsilon + e^{2Cy}|$ and $\Phi_{\varepsilon,1}^z = \frac{e^{2Cy}}{2C\varepsilon + e^{2Cy}}\,z$. Both $\Phi_{\varepsilon,0} = (\Phi_{\varepsilon,0}^y, \Phi_{\varepsilon,0}^z)$ and $\Phi_{\varepsilon,1} = (\Phi_{\varepsilon,1}^y, \Phi_{\varepsilon,1}^z)$ keep the solutions of BSDE (6) invariant. More specifically, $\Phi_{\varepsilon,0}^y$ transforms the solutions of BSDE $(C|z|^2, T, Y)$ into those of BSDE $(C|z|^2, T, Y+\varepsilon)$, and $\Phi_{\varepsilon,1}^y$ transforms the solutions of BSDE $(C|z|^2, T, Y)$ into those of BSDE $(C|z|^2, T, \frac{1}{2C}\ln|2C\varepsilon + e^{2CY}|)$.
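The computations of Example 1 can be verified symbolically. The following sketch (our check, using sympy) confirms that $\xi = c_1e^{-2Cy} + c_2$ solves the determining equations, that the stated $\Phi_{\varepsilon,1}^y$ solves the flow ODE $d\Phi/d\varepsilon = e^{-2C\Phi}$, and that $\Phi_{\varepsilon,1}^z = \partial_y\Phi_{\varepsilon,1}^y\,z$, as Theorem 2 predicts.

```python
import sympy as sp

# Symbolic check of Example 1 (m = 1, g = C|z|^2); our verification script.
t, y, z, eps, C, c1, c2 = sp.symbols('t y z epsilon C c1 c2', positive=True)

# xi solves the determining equations C xi_y + xi_yy / 2 = 0, xi_t = 0.
xi = c1 * sp.exp(-2 * C * y) + c2
det_eq = C * sp.diff(xi, y) + sp.Rational(1, 2) * sp.diff(xi, y, 2)
assert sp.simplify(det_eq) == 0

# Phi^y solves the flow ODE d Phi / d eps = exp(-2 C Phi), Phi|_{eps=0} = y.
Phi = sp.log(2 * C * eps + sp.exp(2 * C * y)) / (2 * C)
assert sp.simplify(sp.diff(Phi, eps) - sp.exp(-2 * C * Phi)) == 0
assert sp.simplify(Phi.subs(eps, 0) - y) == 0

# Theorem 2: Phi^z = (d Phi^y / d y) z, matching e^{2Cy} z / (2C eps + e^{2Cy}).
Phi_z = sp.exp(2 * C * y) * z / (2 * C * eps + sp.exp(2 * C * y))
assert sp.simplify(sp.diff(Phi, y) * z - Phi_z) == 0
print("Example 1 checks passed")
```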
Example 2.
For $m = 1$, $g = C|z|^p$ with $1 < p < 2$, Equation (8) is equivalent to the equations
$\partial^2_{yy}\xi = 0, \qquad \partial_t\xi = 0, \qquad \partial_y\xi\,\partial_{z^l}g\,z^l - \partial_y\xi\,g = \partial_y\xi\,Cp|z|^p - \partial_y\xi\,C|z|^p = \partial_y\xi\,C(p-1)|z|^p = 0.$
Solving these equations, we find that $\xi$ is a constant independent of $t,y$; i.e., the only symmetry is $X = \partial_y$, with corresponding transformation $\Phi_\varepsilon^y = y + \varepsilon$.
Example 3.
For $m = 1$, $g = C|z|$, Equation (8) is equivalent to the equations
$\partial^2_{yy}\xi = 0, \qquad \partial_t\xi = 0.$
Solving these equations, we get $\xi = c_1y + c_2$ for any $c_1,c_2\in\mathbb{R}$. Therefore, we get two symmetries, $X_0 = \partial_y$ and $X_1 = y\partial_y$, with corresponding transformations $\Phi_{\varepsilon,0}^y = y + \varepsilon$, $\Phi_{\varepsilon,0}^z = z$ and $\Phi_{\varepsilon,1}^y = ye^\varepsilon$, $\Phi_{\varepsilon,1}^z = e^\varepsilon z$.
Example 4.
Suppose that g is independent of z. We get the equation
$\partial_y g\,\xi + \partial_t\xi - \partial_y\xi\,g + \tfrac{1}{2}\partial^2_{yy}\xi\,|z|^2 = 0,$
which is equivalent to
$\partial^2_{yy}\xi = 0, \qquad \partial_y g\,\xi + \partial_t\xi - \partial_y\xi\,g = 0.$
Remark 1.
(1) In this paper, $\xi$ is assumed to be independent of z. The reason is that if $\xi$ were a function of z, taking its differential would involve $dz$, which in general does not exist. (2) Equation (8) is a system of equations in the independent variables $(t,y,z)$, whereas $\xi$ is a vector function of $(t,y)$ only. Therefore, Equation (8) may be unsolvable because of z.

## 4. Transformation to Martingale

Inspired by the notion of conserved quantity, we try to find a conserved quantity of BSDE (6). For a function $I:\mathbb{R}_+\times\mathbb{R}^m\to\mathbb{R}$, applying Itô’s formula to $I(t,y_t)$, we have
$dI(t,y_t) = \big[\partial_t I(t,y_t) - \partial_{y^j}I(t,y_t)\,g^j(t,y_t,z_t) + \tfrac{1}{2}\partial^2_{y^jy^k}I(t,y_t)\,z_t^jz_t^k\big]\,dt + \partial_{y^j}I(t,y_t)\,z_t^j\,dw_t,$
with summation over $j,k$. Therefore, by the definition of conserved quantity, I should satisfy the two equations
$\partial_t I - \partial_{y^j}I\,g^j + \tfrac{1}{2}\partial^2_{y^jy^k}I\,z^jz^k \equiv 0, \qquad \partial_{y^j}I\,z^j \equiv 0.$
Since z can be arbitrary, we have $\partial_{y^j}I \equiv 0$ for each $j = 1,\cdots,m$, and $\partial_t I \equiv 0$. Therefore, the conserved quantity degenerates to a constant: there is no nontrivial conserved quantity (or first integral) for a BSDE.
However, for BSDEs we can consider another important kind of function $I(t,y)$, one for which $I(t,y_t)$ forms a martingale; such I solves the equation $\partial_t I - \partial_{y^j}I\,g^j + \tfrac{1}{2}\partial^2_{y^jy^k}I\,z^jz^k = 0$. We introduce the following definition.
Definition 3.
A smooth function $I(t,y):\mathbb{R}_+\times\mathbb{R}^m\to\mathbb{R}$ is called a martingale transformation function of a process $y_t$ if it turns $y_t$ into a martingale, i.e., if $I(t,y_t)$ is a martingale.
We have the following theorem.
Theorem 3.
A function $I(t,y)$ is a martingale transformation function for the solution $y_t$ of BSDE (6) if and only if it satisfies the equation
$\partial_t I - \partial_{y^j}I\,g^j + \tfrac{1}{2}\partial^2_{y^jy^k}I\,z^jz^k \equiv 0$
(summation over $j,k$).
Remark 2.
Suppose $m = 1$ and that $I(t,y)$ is a martingale transformation function of BSDE (1). If, moreover, $I(t,\cdot)$ is invertible in y for each t, we can use $I$ to solve BSDE (1). In fact, for terminal value Y and terminal time T, we can compute the process $\tilde y_t = E[I(T,Y)\,|\,\mathcal{F}_t^w] = I(t,y_t)$. Since I is invertible in y for each fixed t, we directly get $y_t = I^{-1}(t,\tilde y_t)$. In fact, this method was already used by Bahlali, Eddahbi and Ouknine [24].
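As a concrete instance of Remark 2 (our example, with m = 1, d = 1 and the quadratic generator $g = Cz^2$ of Example 1), the function $I(t,y) = e^{2Cy}$ satisfies the equation of Theorem 3 and is strictly increasing in y; this is an exponential change of variable of the kind used in [24].

```python
import sympy as sp

# Our example for Remark 2: with m = d = 1 and g = C z^2, the candidate
# I(t, y) = exp(2 C y) satisfies Theorem 3's equation
#   I_t - I_y g + (1/2) I_yy z^2 = 0,
# so I(t, y_t) is a martingale; I is strictly increasing in y, hence
# invertible, and y_t = I^{-1}(E[I(T, Y) | F_t^w]) as in the remark.
t, y, z, C, v = sp.symbols('t y z C v', positive=True)
I = sp.exp(2 * C * y)
g = C * z ** 2
residual = sp.diff(I, t) - sp.diff(I, y) * g + sp.Rational(1, 2) * sp.diff(I, y, 2) * z ** 2
assert sp.simplify(residual) == 0

# The explicit inverse map I^{-1}(t, v) = log(v) / (2 C):
Iinv = sp.log(v) / (2 * C)
assert sp.simplify(I.subs(y, Iinv) - v) == 0
print("martingale transformation check passed")
```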
Inspired by Lemma 1, suppose that $I(t,y)$ is a martingale transformation function of the solution $y_t$ of BSDE (1), and consider the operator $X = I(t,y)\xi(t,y)\,\partial/\partial y$. By Theorem 1, we can deduce that for each $i = 1,\cdots,m$,
$\partial_{y^j}g^i\,I\xi^j + \partial_{y^j}(I\xi^k)\,\partial_{z^{kl}}g^i\,z^{jl} + \partial_t(I\xi^i) - g^j\partial_{y^j}(I\xi^i) + \tfrac{1}{2}\partial^2_{y^jy^k}(I\xi^i)\,z^jz^k = I\big[\partial_{y^j}g^i\xi^j + \partial_{y^j}\xi^k\,\partial_{z^{kl}}g^i\,z^{jl} + \partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i z^jz^k\big] + \xi^i\big[\partial_t I - \partial_{y^j}I\,g^j + \tfrac{1}{2}\partial^2_{y^jy^k}I\,z^jz^k\big] + \partial_{y^j}I\,\partial_{z^{kl}}g^i\,\xi^kz^{jl} + \partial_{y^j}I\,\partial_{y^k}\xi^i\,z^jz^k = \partial_{y^j}I\big(\partial_{z^{kl}}g^i\,z^{jl}\xi^k + \partial_{y^k}\xi^i\,z^jz^k\big)$
(summation over $j,k = 1,\cdots,m$ and $l = 1,\cdots,d$). Therefore, we get the following theorem.
Theorem 4.
Suppose $X_0 = \xi(t,y)\partial_y$ is a symmetry of BSDE (6) and $I(t,y)$ is a martingale transformation function of the BSDE. If, moreover, for each i, $\partial_{y^j}I\big(\partial_{z^{kl}}g^i\,z^{jl}\xi^k + \partial_{y^k}\xi^i\,z^jz^k\big) \equiv 0$ holds for all $t,y,z$, then $X = I(t,y)\xi(t,y)\,\partial/\partial y$ forms a new symmetry of BSDE (6).

## 5. Symmetry Transformation II

Following the results in [6], we consider again BSDE (6) and introduce a time change
$t \to \tilde t = t + \varepsilon\tau(t).$
Here $\tau$ is a smooth function of t. We have the substitution
$d\tilde w = (1 + \varepsilon\,\partial_t\tau(t)/2)\,dw,$
and conversely,
$dw = (1 - \varepsilon\,\partial_t\tau(\tilde t)/2)\,d\tilde w.$
The transformation of the Wiener process $w(t)$ into $\tilde w(t)$ is discussed in detail in Appendix A of [6]; for more details, the reader may also refer to [11,21,22].
According to Equation (7), the last term on the right-hand side of the third equality can be transformed as follows:
$(z^i + \varepsilon\partial_{y^j}\xi^i z^j)\,dw = (z^i + \varepsilon\partial_{y^j}\xi^i z^j)(1 - \varepsilon\,\partial_t\tau(\tilde t)/2)\,d\tilde w.$
Thus, considering the first order, we can introduce $\tilde z^i = z^i + \varepsilon\big(\partial_{y^j}\xi^i(t,y)z^j - \partial_t\tau(t)z^i/2\big)$, with summation over j.
Combining the relations
$t = \tilde t - \varepsilon\tau(\tilde t), \qquad dt = (1 - \varepsilon\partial_t\tau)\big|_{\tilde t}\,d\tilde t, \qquad z^i = \tilde z^i - \varepsilon\big(\partial_{y^j}\xi^i\tilde z^j - \partial_t\tau\,\tilde z^i/2\big)\big|_{(\tilde t,\tilde y,\tilde z)}, \qquad g^i(t,y,z) = \big[g^i - \varepsilon\partial_tg^i\tau - \varepsilon\partial_{y^j}g^i\xi^j - \varepsilon\partial_{z^j}g^i\big(\partial_{y^k}\xi^j\tilde z^k - \partial_t\tau\,\tilde z^j/2\big)\big]\big|_{(\tilde t,\tilde y,\tilde z)}$
with Equation (7), we obtain
$d\tilde y^i = \big[-g^i + \varepsilon\big(\partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i z^jz^k\big)\big]\big|_{(t,y,z)}\,dt + \big[z^i + \varepsilon\partial_{y^j}\xi^i z^j\big]\,dw = \big[-g^i + \varepsilon\big(\partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i z^jz^k\big)\big]\big|_{(t,y,z)}\,(1 - \varepsilon\partial_t\tau(\tilde t))\,d\tilde t + \tilde z^i\,d\tilde w = \Big\{-g^i + \varepsilon\Big[g^i\partial_t\tau + \partial_tg^i\tau + \partial_{y^j}g^i\xi^j + \partial_{y^k}\xi^j\partial_{z^j}g^i\tilde z^k - \tfrac{1}{2}\partial_t\tau\,\partial_{z^j}g^i\tilde z^j + \partial_t\xi^i - \partial_{y^j}\xi^i g^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^i\tilde z^j\tilde z^k\Big]\Big\}\Big|_{(\tilde t,\tilde y,\tilde z)}\,d\tilde t + \tilde z^i\,d\tilde w,$
with summation over $j,k = 1,\cdots,m$. Then we get the following theorem.
Theorem 5.
The projectable vector field $X = \tau(t)(\partial/\partial t) + \xi(t,y)(\partial/\partial y)$ is a symmetry generator of BSDE (6) if and only if $\tau$ and $\xi$ satisfy the full determining equations for projectable symmetries of the BSDE:
$g^i\partial_t\tau + \partial_tg^i\tau + \partial_{y^j}g^i\xi^j + \partial_{y^k}\xi^j\partial_{z^j}g^iz^k - \tfrac{1}{2}\partial_t\tau\,\partial_{z^j}g^iz^j + \partial_t\xi^i - \partial_{y^j}\xi^ig^j + \tfrac{1}{2}\partial^2_{y^jy^k}\xi^iz^jz^k = 0$
(summation over $j,k = 1,\cdots,m$), for each $i = 1,\cdots,m$.
Given the symmetry X, we can define $\Phi_\varepsilon^t$ and $\Phi_\varepsilon^y$, and moreover
$\Phi_\varepsilon^{z^i} = z^i + \int_0^\varepsilon \Big[\partial_{y^j}\xi^i\big|_{(\Phi_r^t,\Phi_r^y)}\,\Phi_r^{z^j} - \tfrac{1}{2}\partial_t\tau\big|_{\Phi_r^t}\,\Phi_r^{z^i}\Big]\,dr.$
Remark 3.
(1) Please note that in this case $\Phi_\varepsilon^z \ne \partial_y\Phi_\varepsilon^y\,z$, because of the transformation of w. (2) For any fixed $\varepsilon$, $\tilde w$ generates a new filtration $(\mathcal{F}_{\tilde t}^{\tilde w})_{\tilde t\in\mathbb{R}_+}$; if a transformation maps t to $\tilde t$, then $\mathcal{F}_t^w = \mathcal{F}_{\tilde t}^{\tilde w}$. (3) The transformation generated by X maps the terminal time and terminal condition from $(T,Y)$ to $(\Phi_\varepsilon^T, \Phi_\varepsilon^Y)$, and the solution $(y_t,z_t)_{t\in[0,T]}$ to $\big((\Phi_\varepsilon^y)_{\Phi_\varepsilon^t}, (\Phi_\varepsilon^z)_{\Phi_\varepsilon^t}\big)_{\Phi_\varepsilon^t\in[\max\{0,\Phi_\varepsilon^0\},\,\max\{0,\Phi_\varepsilon^T\}]}$.
Example 5.
Let $m = 1$ and $g = C|z|^2$; then we have
$\partial_t\xi + C|z|^2\partial_t\tau - \partial_y\xi\,C|z|^2 + 2C\partial_y\xi\,|z|^2 - \partial_t\tau\,C|z|^2 + \tfrac{1}{2}\partial^2_{yy}\xi\,|z|^2 = 0.$
Since this must hold for arbitrary z (letting $|z|\to+\infty$), the above equation is equivalent to
$C\partial_y\xi + \tfrac{1}{2}\partial^2_{yy}\xi = 0, \qquad \partial_t\xi = 0.$
We get the solutions
$\tau(t) = f(t), \qquad \xi(t,y) = c_1e^{-2Cy} + c_2,$
where $f(t)$ is an arbitrary sufficiently smooth function and $c_1,c_2$ are arbitrary constants. So the symmetries of the BSDE are as follows:
$X_0 = \partial_y, \qquad X_1 = e^{-2Cy}\partial_y, \qquad X_2 = f(t)\partial_t.$
Therefore $\Phi_{\varepsilon,0}$ and $\Phi_{\varepsilon,1}$, generated by $X_0$ and $X_1$ respectively, are the same as those in Example 1, with $\Phi_{\varepsilon,0}^t = \Phi_{\varepsilon,1}^t = t$. Similarly, for $X_2$, $\Phi_{\varepsilon,2}$ is the solution of the equations
$\Phi_{\varepsilon,2}^t = t + \int_0^\varepsilon f(\Phi_{r,2}^t)\,dr, \qquad \Phi_{\varepsilon,2}^y = y, \qquad \Phi_{\varepsilon,2}^z = z - \int_0^\varepsilon \tfrac{1}{2}f'(\Phi_{r,2}^t)\,\Phi_{r,2}^z\,dr.$
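For a concrete instance of Example 5 (our choice, $f(t) = t$), the flow equations integrate in closed form to $\Phi_{\varepsilon,2}^t = te^\varepsilon$, $\Phi_{\varepsilon,2}^y = y$, $\Phi_{\varepsilon,2}^z = ze^{-\varepsilon/2}$, which the following sketch verifies:

```python
import sympy as sp

# Our worked instance of Example 5 with f(t) = t (so f'(t) = 1):
# the flow ODEs are d Phi^t / d eps = f(Phi^t) and
# d Phi^z / d eps = -(1/2) f'(Phi^t) Phi^z, with Phi^y constant.
t, y, z, eps = sp.symbols('t y z epsilon', positive=True)
Phi_t = t * sp.exp(eps)
Phi_z = z * sp.exp(-eps / 2)
assert sp.simplify(sp.diff(Phi_t, eps) - Phi_t) == 0        # f(Phi^t) = Phi^t
assert sp.simplify(sp.diff(Phi_z, eps) + Phi_z / 2) == 0    # -(1/2) Phi^z
assert Phi_t.subs(eps, 0) == t and Phi_z.subs(eps, 0) == z  # identity at eps = 0
print("Example 5 flow check passed")
```

Consistent with Remark 3(1), here $\Phi_\varepsilon^z \ne \partial_y\Phi_\varepsilon^y\,z = z$: the extra factor $e^{-\varepsilon/2}$ comes entirely from the time change.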
Example 6.
Suppose $m = 1$ and g is independent of z. Equation (11) leads to the equations
$\partial^2_{yy}\xi = 0, \qquad \partial_t\xi + g\partial_t\tau + \partial_tg\,\tau + \partial_yg\,\xi - \partial_y\xi\,g = 0.$

## 6. Symmetry Transformation on FBSDE

In this section, we consider the following FBSDE:
$dx_t = b(t,x_t,y_t,z_t)\,dt + \sigma(t,x_t,y_t,z_t)\,dw_t, \qquad dy_t = -g(t,x_t,y_t,z_t)\,dt + z_t\,dw_t, \qquad y_T = H(x_T),$
where $b:\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{m\times d}\to\mathbb{R}^n$, $\sigma:\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{m\times d}\to\mathbb{R}^{n\times d}$, $g:\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{m\times d}\to\mathbb{R}^m$, and $H:\mathbb{R}^n\to\mathbb{R}^m$ are all smooth functions. In addition, we suppose that a solution of the FBSDE exists.
We introduce the following near-identity change of coordinates:
$\tilde t = t + \varepsilon\tau(t), \qquad \tilde x^i = x^i + \varepsilon\xi^i(t,x,y), \qquad \tilde y^j = y^j + \varepsilon\eta^j(t,x,y),$
for $i = 1,\cdots,n$, $j = 1,\cdots,m$. For ease of notation, we let the indices $i,k,l$ range over $1,\cdots,n$ and $j,\alpha,\beta,\gamma$ over $1,\cdots,m$, and write $\partial_if$ (resp. $\partial_kf$, $\partial_lf$) for the partial derivative of f with respect to $x^i$ (resp. $x^k$, $x^l$), $\partial_jf$ (resp. $\partial_\alpha f$, $\partial_\beta f$) for the partial derivative with respect to $y^j$ (resp. $y^\alpha$, $y^\beta$), and $\partial_\gamma f$ for the partial derivative with respect to $z^\gamma$. Then we have
$d\tilde y^j = dy^j + \varepsilon\,d\eta^j = dy^j + \varepsilon\big[\partial_t\eta^j\,dt + \partial_k\eta^j\,dx^k + \partial_\alpha\eta^j\,dy^\alpha + \tfrac{1}{2}\partial^2_{kl}\eta^j\sigma^k\sigma^l\,dt + \tfrac{1}{2}\partial^2_{\alpha\beta}\eta^jz^\alpha z^\beta\,dt + \tfrac{1}{2}\partial^2_{k\alpha}\eta^j\sigma^kz^\alpha\,dt\big] = \big[-g^j + \varepsilon\big(\partial_t\eta^j + \partial_k\eta^jb^k - \partial_\alpha\eta^jg^\alpha + \tfrac{1}{2}\partial^2_{kl}\eta^j\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\eta^jz^\alpha z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\eta^j\sigma^kz^\alpha\big)\big]\big|_{(t,x,y,z)}\,dt + \big[z^j + \varepsilon\big(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^jz^\alpha\big)\big]\big|_{(t,x,y,z)}\,dw$
(summation over the repeated indices $k,l$ and $\alpha,\beta$).
By $dw = (1 - \varepsilon\partial_t\tau/2)\,d\tilde w$, at first order in $\varepsilon$ we have
$\big[z^j + \varepsilon(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^jz^\alpha)\big]\,dw = \big[z^j + \varepsilon(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^jz^\alpha)\big]\Big(1 - \varepsilon\frac{\partial_t\tau}{2}\Big)\,d\tilde w = \Big[z^j + \varepsilon\Big(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^jz^\alpha - \frac{\partial_t\tau\,z^j}{2}\Big)\Big]\Big|_{(t,x,y,z)}\,d\tilde w.$
Let $\tilde z^j = z^j + \varepsilon\big(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^jz^\alpha - \frac{\partial_t\tau\,z^j}{2}\big) := z^j + \varepsilon\zeta^j(t,x,y,z)$. We have
$dt = (1 - \varepsilon\partial_t\tau(\tilde t))\,d\tilde t, \qquad z^j = \tilde z^j - \varepsilon\Big(\partial_k\eta^j\sigma^k + \partial_\alpha\eta^j\tilde z^\alpha - \frac{\partial_t\tau\,\tilde z^j}{2}\Big)\Big|_{(\tilde t,\tilde x,\tilde y,\tilde z)} = \tilde z^j - \varepsilon\zeta^j(\tilde t,\tilde x,\tilde y,\tilde z), \qquad g^j(t,x,y,z) = \big[g^j - \varepsilon\big(\partial_tg^j\tau + \partial_ig^j\xi^i + \partial_\alpha g^j\eta^\alpha + \partial_\gamma g^j\zeta^\gamma\big)\big]\big|_{(\tilde t,\tilde x,\tilde y,\tilde z)}.$
Therefore, we have
$d\tilde y^j = \big[-g^j + \varepsilon\big(g^j\partial_t\tau + \partial_tg^j\tau + \partial_ig^j\xi^i + \partial_\alpha g^j\eta^\alpha + \partial_\gamma g^j\zeta^\gamma + \partial_t\eta^j + \partial_k\eta^jb^k - \partial_\alpha\eta^jg^\alpha + \tfrac{1}{2}\partial^2_{kl}\eta^j\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\eta^j\tilde z^\alpha\tilde z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\eta^j\sigma^k\tilde z^\alpha\big)\big]\big|_{(\tilde t,\tilde x,\tilde y,\tilde z)}\,d\tilde t + \tilde z^j\,d\tilde w.$
Also, we have
$d\tilde x^i = dx^i + \varepsilon\,d\xi^i = b^i\,dt + \sigma^i\,dw + \varepsilon\big[\partial_t\xi^i\,dt + \partial_k\xi^i\,dx^k + \partial_\alpha\xi^i\,dy^\alpha + \tfrac{1}{2}\partial^2_{kl}\xi^i\sigma^k\sigma^l\,dt + \tfrac{1}{2}\partial^2_{\alpha\beta}\xi^iz^\alpha z^\beta\,dt + \tfrac{1}{2}\partial^2_{k\alpha}\xi^i\sigma^kz^\alpha\,dt\big] = \big[b^i + \varepsilon\big(\partial_t\xi^i + \partial_k\xi^ib^k - \partial_\alpha\xi^ig^\alpha + \tfrac{1}{2}\partial^2_{kl}\xi^i\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\xi^iz^\alpha z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\xi^i\sigma^kz^\alpha\big)\big]\big|_{(t,x,y,z)}\,dt + \big[\sigma^i + \varepsilon\big(\partial_k\xi^i\sigma^k + \partial_\alpha\xi^iz^\alpha\big)\big]\big|_{(t,x,y,z)}\,dw.$
Expanding in a Taylor series at the point $(\tilde t,\tilde x,\tilde y,\tilde z)$ gives
$d\tilde x^i = \big[b^i + \varepsilon\big(-b^i\partial_t\tau - \partial_tb^i\tau - \partial_kb^i\xi^k - \partial_\alpha b^i\eta^\alpha - \partial_\gamma b^i\zeta^\gamma + \partial_t\xi^i + \partial_k\xi^ib^k - \partial_\alpha\xi^ig^\alpha + \tfrac{1}{2}\partial^2_{kl}\xi^i\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\xi^i\tilde z^\alpha\tilde z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\xi^i\sigma^k\tilde z^\alpha\big)\big]\big|_{(\tilde t,\tilde x,\tilde y,\tilde z)}\,d\tilde t + \big[\sigma^i + \varepsilon\big(-\tfrac{1}{2}\sigma^i\partial_t\tau - \partial_t\sigma^i\tau - \partial_k\sigma^i\xi^k - \partial_\alpha\sigma^i\eta^\alpha - \partial_\gamma\sigma^i\zeta^\gamma + \partial_k\xi^i\sigma^k + \partial_\alpha\xi^i\tilde z^\alpha\big)\big]\big|_{(\tilde t,\tilde x,\tilde y,\tilde z)}\,d\tilde w.$
We also introduce $\Phi_\varepsilon = (\Phi_\varepsilon^t, \Phi_\varepsilon^x, \Phi_\varepsilon^y)$. For the terminal condition $y_T = H(x_T)$, we have the following result.
For any fixed $\bar\varepsilon > 0$ (the case $\bar\varepsilon < 0$ can be treated similarly), we define $[t_1(\varepsilon), t_2(\varepsilon)] := \{\Phi_r^T : r\in[0,\varepsilon]\}$ and, without loss of generality, suppose that $t_1 > 0$. Suppose that $\Phi_\varepsilon$ transforms any solution $(x_t,y_t,z_t)$ of FBSDE (13) into $(\tilde x_{\tilde t}, \tilde y_{\tilde t}, \tilde z_{\tilde t})$.
Theorem 6.
For all $0 < \varepsilon \le \bar\varepsilon$ and any solution $(x_t,y_t,z_t)$ of FBSDE (13), the terminal condition $\tilde y_{\tilde T} = H(\tilde x_{\tilde T})$ holds if and only if, for each $x\in\mathbb{R}^n$ and $t\in[t_1(\varepsilon), t_2(\varepsilon)]$,
$\eta(t,x,H(x)) \equiv \partial_xH(x)\,\xi(t,x,H(x)).$
Proof. Sufficiency.
Since $y_T = H(x_T)$, we consider $\Phi_\varepsilon$ with initial values $(T,x,H(x))$, where $y = H(x)$. By the definition of $\Phi_\varepsilon$, we know that for $r\in[0,\bar\varepsilon]$,
$d\Phi_r^{x^i} = \xi^i(\Phi_r^T, \Phi_r^x, \Phi_r^y)\,dr$
and
$d\Phi_r^{y^j} = \eta^j(\Phi_r^T, \Phi_r^x, \Phi_r^y)\,dr.$
Then
$dH(\Phi_r^x) = \partial_iH\big|_{\Phi_r^x}\,d\Phi_r^{x^i} = \partial_iH\big|_{\Phi_r^x}\,\xi^i(\Phi_r^T, \Phi_r^x, \Phi_r^y)\,dr$
(with summation over i).
Assume that
$\eta(t,x,y) = \partial_xH(x)\,\xi(t,x,y)$
holds for each $t\in[t_1,t_2]$ and all $x,y$. Then
$d\Phi_r^y = dH(\Phi_r^x).$
Integrating from 0 to $\varepsilon$ on both sides, we get
$\Phi_\varepsilon^y - \Phi_0^y = \Phi_\varepsilon^y - y = H(\Phi_\varepsilon^x) - H(\Phi_0^x) = H(\Phi_\varepsilon^x) - H(x).$
Since $y = H(x)$, we get $\Phi_\varepsilon^y = H(\Phi_\varepsilon^x)$. Moreover, because a flow started on the graph $y = H(x)$ stays on it, $\eta$ is only ever evaluated at points of this graph, so Equation (16) is assured by condition (14), which is rather weaker than (15). The sufficiency follows.
Necessity. Since $\Phi_\varepsilon^{y_T} \equiv H(\Phi_\varepsilon^{x_T})$ for any $0 < \varepsilon \le \bar\varepsilon$, taking derivatives we get
$\frac{d\Phi_\varepsilon^{x_T^i}}{d\varepsilon} = \xi^i(\Phi_\varepsilon^T, \Phi_\varepsilon^{x_T}, \Phi_\varepsilon^{y_T}),$
$\frac{d\Phi_\varepsilon^{y_T^j}}{d\varepsilon} = \eta^j(\Phi_\varepsilon^T, \Phi_\varepsilon^{x_T}, \Phi_\varepsilon^{y_T}),$
$\frac{d\Phi_\varepsilon^{y_T^j}}{d\varepsilon} = \frac{dH^j(\Phi_\varepsilon^{x_T})}{d\varepsilon} = \partial_iH^j\big|_{\Phi_\varepsilon^{x_T}}\frac{d\Phi_\varepsilon^{x_T^i}}{d\varepsilon} = \partial_iH^j\big|_{\Phi_\varepsilon^{x_T}}\,\xi^i(\Phi_\varepsilon^T, \Phi_\varepsilon^{x_T}, \Phi_\varepsilon^{y_T}).$
Therefore, $\eta(\Phi_\varepsilon^T, \Phi_\varepsilon^{x_T}, H(\Phi_\varepsilon^{x_T})) \equiv \partial_xH\big|_{\Phi_\varepsilon^{x_T}}\,\xi(\Phi_\varepsilon^T, \Phi_\varepsilon^{x_T}, H(\Phi_\varepsilon^{x_T}))$. Since $\Phi_\varepsilon^T$ can take any value in $[t_1,t_2]$ and $x_T$ any value in $\mathbb{R}^n$, we obtain (14). The proof is complete. □
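To illustrate condition (14) (our example, not from the paper), take $n = m = 1$, $H(x) = x^2$, $\xi(t,x,y) = x$ and $\eta(t,x,y) = 2y$, i.e., the scaling flow $x \to xe^\varepsilon$, $y \to ye^{2\varepsilon}$. Condition (14) reads $2x^2 = 2x\cdot x$, and one checks directly that the flow preserves the graph $y = H(x)$:

```python
import sympy as sp

# Our illustration of Theorem 6's condition (14): n = m = 1, H(x) = x^2,
# xi(t, x, y) = x, eta(t, x, y) = 2 y (the scaling x -> e^eps x, y -> e^{2 eps} y).
t, x, eps = sp.symbols('t x epsilon', real=True)
H = x ** 2
xi = x
eta_on_graph = 2 * H                    # eta(t, x, H(x)) with eta = 2 y
# Condition (14): eta(t, x, H(x)) = H'(x) * xi(t, x, H(x)).
assert sp.simplify(eta_on_graph - sp.diff(H, x) * xi) == 0
# The flow preserves the terminal relation: Phi^y = (Phi^x)^2 on y = x^2.
Phi_x = x * sp.exp(eps)
Phi_y = H * sp.exp(2 * eps)
assert sp.simplify(Phi_y - Phi_x ** 2) == 0
print("terminal-condition check passed")
```

Whether this scaling extends to a full symmetry of a given FBSDE still requires the determining equations of Theorem 7; the check above concerns the terminal condition only.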
Therefore, we get the following theorem.
Theorem 7.
The projectable vector field $X_0 = \tau(t)\partial_t + \xi(t,x,y)\partial_x + \eta(t,x,y)\partial_y$ is a symmetry generator of FBSDE (13) allowing time change if and only if $\tau,\xi,\eta$ satisfy the following determining equations:
$g^j\partial_t\tau + \partial_tg^j\tau + \partial_ig^j\xi^i + \partial_\alpha g^j\eta^\alpha + \partial_\gamma g^j\big(\partial_k\eta^\gamma\sigma^k + \partial_\alpha\eta^\gamma z^\alpha - \tfrac{\partial_t\tau\,z^\gamma}{2}\big) + \partial_t\eta^j + \partial_k\eta^jb^k - \partial_\alpha\eta^jg^\alpha + \tfrac{1}{2}\partial^2_{kl}\eta^j\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\eta^jz^\alpha z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\eta^j\sigma^kz^\alpha = 0$ (summation over $i,k,l,\alpha,\beta,\gamma$), for $j = 1,\cdots,m$;
$-b^i\partial_t\tau - \partial_tb^i\tau - \partial_kb^i\xi^k - \partial_\alpha b^i\eta^\alpha - \partial_\gamma b^i\big(\partial_k\eta^\gamma\sigma^k + \partial_\alpha\eta^\gamma z^\alpha - \tfrac{\partial_t\tau\,z^\gamma}{2}\big) + \partial_t\xi^i + \partial_k\xi^ib^k - \partial_\alpha\xi^ig^\alpha + \tfrac{1}{2}\partial^2_{kl}\xi^i\sigma^k\sigma^l + \tfrac{1}{2}\partial^2_{\alpha\beta}\xi^iz^\alpha z^\beta + \tfrac{1}{2}\partial^2_{k\alpha}\xi^i\sigma^kz^\alpha = 0$ (summation over $k,l,\alpha,\beta,\gamma$), for $i = 1,\cdots,n$;
$-\tfrac{1}{2}\sigma^i\partial_t\tau - \partial_t\sigma^i\tau - \partial_k\sigma^i\xi^k - \partial_\alpha\sigma^i\eta^\alpha - \partial_\gamma\sigma^i\big(\partial_k\eta^\gamma\sigma^k + \partial_\alpha\eta^\gamma z^\alpha - \tfrac{\partial_t\tau\,z^\gamma}{2}\big) + \partial_k\xi^i\sigma^k + \partial_\alpha\xi^iz^\alpha = 0$ (summation over $k,\alpha,\gamma$), for $i = 1,\cdots,n$;
$\eta(t,x,H(x)) \equiv \partial_xH(x)\,\xi(t,x,H(x))$, for all t that arise as transformations of T.
Remark 4.
If $τ = 0$, we get symmetries that leave the terminal time invariant.
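The determining equations of Theorem 7 can be checked mechanically with a computer algebra system. The following SymPy sketch specializes them to the scalar case $n = m = 1$ and verifies them for a toy FBSDE of our own choosing (not from the paper): $b = 0$, $\sigma = 1$, $g = 0$, $H(x) = x$, with the candidate symmetry $\tau = 0$, $\xi = 1$, $\eta = 1$ (the joint translation $x \mapsto x + \varepsilon$, $y \mapsto y + \varepsilon$). All names below are our own illustrative choices.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Toy scalar (n = m = 1) FBSDE data (assumed example, not from the paper):
# dX = b dt + sigma dW,  dY = -g dt + z dW,  Y_T = H(X_T).
b = sp.Integer(0)
sigma = sp.Integer(1)
g = sp.Integer(0)
H = x

# Candidate projectable vector field X0 = tau d_t + xi d_x + eta d_y:
# the joint translation x -> x + eps, y -> y + eps.
tau = sp.Integer(0)
xi = sp.Integer(1)
eta = sp.Integer(1)

# The bracketed expression of Theorem 7 (the transformed Z-direction):
eta_star = sp.diff(eta, x)*sigma + sp.diff(eta, y)*z - sp.diff(tau, t)*z/2

# Determining equations of Theorem 7, specialized to n = m = 1.
E1 = (g*sp.diff(tau, t) + sp.diff(g, t)*tau + sp.diff(g, x)*xi
      + sp.diff(g, y)*eta + sp.diff(g, z)*eta_star
      + sp.diff(eta, t) + sp.diff(eta, x)*b - sp.diff(eta, y)*g
      + sp.Rational(1, 2)*sp.diff(eta, x, 2)*sigma**2
      + sp.Rational(1, 2)*sp.diff(eta, y, 2)*z**2
      + sp.Rational(1, 2)*sp.diff(eta, x, y)*sigma*z)
E2 = (-b*sp.diff(tau, t) - sp.diff(b, t)*tau - sp.diff(b, x)*xi
      - sp.diff(b, y)*eta - sp.diff(b, z)*eta_star
      + sp.diff(xi, t) + sp.diff(xi, x)*b - sp.diff(xi, y)*g
      + sp.Rational(1, 2)*sp.diff(xi, x, 2)*sigma**2
      + sp.Rational(1, 2)*sp.diff(xi, y, 2)*z**2
      + sp.Rational(1, 2)*sp.diff(xi, x, y)*sigma*z)
E3 = (-sp.Rational(1, 2)*sigma*sp.diff(tau, t) - sp.diff(sigma, t)*tau
      - sp.diff(sigma, x)*xi - sp.diff(sigma, y)*eta - sp.diff(sigma, z)*eta_star
      + sp.diff(xi, x)*sigma + sp.diff(xi, y)*z)
# Terminal condition: eta(t, x, H(x)) - H'(x) * xi(t, x, H(x)).
E4 = eta.subs(y, H) - sp.diff(H, x)*xi.subs(y, H)

print([sp.simplify(e) for e in (E1, E2, E3, E4)])  # -> [0, 0, 0, 0]
```

All four left-hand sides vanish, so the translation field is indeed a symmetry generator for this toy system; substituting nontrivial $b$, $\sigma$, $g$ turns the same script into a search for symmetries by solving the resulting PDE system for $\tau$, $\xi$, $\eta$.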

## Author Contributions

Conceptualization, methodology, formal analysis, writing—original draft preparation, writing—review and editing: N.Z. and G.J.

## Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 11601387 and 11171186, and by the National Key R&D Program of China, grant number 2018YFA0703900.

## Acknowledgments

We thank the referees for their useful comments.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Dorodnitsyn, V. Applications of Lie Groups to Difference Equations; Chapman and Hall/CRC: Boca Raton, FL, USA, 2010. [Google Scholar]
2. Ovsiannikov, L.V. Group Analysis of Differential Equations; Academic Press: Cambridge, MA, USA, 1982. [Google Scholar]
3. Ibragimov, N.H. Elementary Lie Group Analysis and Ordinary Differential Equations; Wiley: New York, NY, USA, 1999. [Google Scholar]
4. Stephani, H. Differential Equations: Their Solution Using Symmetries; Cambridge University Press: Cambridge, UK, 1989. [Google Scholar]
5. Albeverio, S.; Fei, S. Remark on symmetry of stochastic dynamical systems and their conserved quantities. J. Phys. A Math. Gen. 1995, 28, 6363–6371. [Google Scholar] [CrossRef]
6. Gaeta, G.; Rodríguez Quintero, N. Lie-point symmetries and stochastic differential equations. J. Phys. A Math. Gen. 1999, 32, 8485–8505. [Google Scholar] [CrossRef]
7. Misawa, T. New conserved quantities from symmetry for stochastic dynamical systems. J. Phys. A Math. Gen. 1994, 27, 177–192. [Google Scholar] [CrossRef]
8. Misawa, T. Conserved quantities and symmetries related to stochastic dynamical systems. Ann. Inst. Stat. Math. 1999, 51, 779–802. [Google Scholar] [CrossRef]
9. Gaeta, G. Symmetry of stochastic equations. Proc. Natl. Acad. Sci. Ukr. 2004, 50, 98–109. [Google Scholar]
10. Gaeta, G. Lie-point symmetries and stochastic differential equations II. J. Phys. A Math. Gen. 2000, 33, 4883–4902. [Google Scholar] [CrossRef]
11. Catuogno, P.J.; Lucinger, L.R. Random Lie-point symmetries. J. Nonlinear Math. Phys. 2014, 21, 149–165. [Google Scholar] [CrossRef]
12. Albeverio, S.; De Vecchi, F.C.; Morando, P.; Ugolini, S. Symmetries and invariance properties of stochastic differential equations driven by semimartingales with jumps. arXiv 2017, arXiv:1708.01764. [Google Scholar]
13. Pardoux, P.; Peng, S.G. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61. [Google Scholar] [CrossRef]
14. Bismut, J.M. Théorie probabiliste du contrôle des diffusions. Mem. Am. Math. Soc. 1976, 4, 167. [Google Scholar]
15. Kobylanski, M. Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 2000, 28, 558–602. [Google Scholar] [CrossRef]
16. Briand, P.; Elie, R. A simple constructive approach to quadratic BSDEs with or without delay. Stoch. Process. Their Appl. 2013, 123, 2921–2939. [Google Scholar] [CrossRef]
17. Briand, P.; Hu, Y. BSDE with quadratic growth and unbounded terminal value. Probab. Theory Relat. Fields 2006, 136, 509–660. [Google Scholar] [CrossRef]
18. Briand, P.; Hu, Y. Quadratic BSDEs with convex generators and unbounded terminal conditions. Probab. Theory Relat. Fields 2008, 141, 543–567. [Google Scholar] [CrossRef]
19. Jia, G. Some uniqueness results for one-dimensional BSDEs with uniformly continuous coefficients. Stat. Probab. Lett. 2009, 79, 436–441. [Google Scholar] [CrossRef]
20. Lepeltier, J.P.; San Martin, J. Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 1997, 32, 425–430. [Google Scholar] [CrossRef]
21. Oksendal, B. Stochastic Differential Equations: An Introduction with Applications, 5th ed.; Universitext; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
22. Oksendal, B. When is a stochastic integral a time change of a diffusion? J. Theor. Probab. 1990, 3, 207–226. [Google Scholar] [CrossRef]
23. Kozlov, R. Symmetries of systems of stochastic differential equations with diffusion matrices of full rank. J. Phys. A Math. Theor. 2010, 43, 245201. [Google Scholar] [CrossRef]
24. Bahlali, K.; Eddahbi, M.; Ouknine, Y. Quadratic BSDE with $L 2$-terminal data: Krylov’s estimate, Itô Krylov’s formula and existence results. Ann. Probab. 2017, 45, 2377–2397. [Google Scholar] [CrossRef]