Article

Backward Stochastic Linear Quadratic Optimal Control with Expectational Equality Constraint

School of Mathematics, Guizhou Normal University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Current address: School of Big Data and Computer Science, Guizhou Normal University, Guiyang 550025, China.
Mathematics 2025, 13(8), 1327; https://doi.org/10.3390/math13081327
Submission received: 11 March 2025 / Revised: 7 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025
(This article belongs to the Special Issue Stochastic Optimal Control, Game Theory, and Related Applications)

Abstract

This paper investigates a backward stochastic linear quadratic control problem with an expected-type equality constraint on the initial state. By using the Lagrange multiplier method, the problem with a uniformly convex cost functional is first transformed into an equivalent unconstrained parameterized backward stochastic linear quadratic control problem. Then, under the surjectivity of the linear constraint, the equivalence between the original problem and the dual problem is proven by Lagrange duality theory. Subsequently, with the help of the maximum principle, an explicit solution of the optimal control for the unconstrained problem is obtained. This solution is feedback-based and determined by an adjoint stochastic differential equation, a Riccati-type ordinary differential equation, a backward stochastic differential equation, and an equality, thereby yielding the optimal control for the original problem. Finally, an optimal control for an investment portfolio problem with an expected-type equality constraint on the initial state is explicitly provided.

1. Introduction

Backward stochastic differential equations (BSDEs) play a pivotal role in stochastic control theory and have found extensive applications in finance (see [1,2,3,4]). The linear quadratic (LQ) optimal control problem for BSDEs with deterministic coefficients was first investigated by Lim and Zhou [5]. They derived a complete solution to such backward stochastic linear quadratic (BSLQ) problems by employing forward equations, limit procedures, and completing squares techniques. Following this foundational work, numerous extensions have been developed, including BSLQ problems under partial information [6], BSLQ problems with asymmetric information [7], mean-field backward LQ optimal control problems [8], BSLQ problems with stochastic coefficients [9], BSLQ problems with random jumps [10], the turnpike property of BSLQ problems [11], BSLQ problems with indefinite weighting matrices in the cost functional [12], among others.
In the aforementioned literature, no constraints were considered for BSLQ optimal control. However, in practical applications, constraints on states or controls are often inevitable. Examples include the portfolio selection problem with variance investment under the restriction of no short selling of stocks [13] and the welfare pension problem [14]. Therefore, studying BSLQ optimal control with state or control constraints is of significant importance. Examples of such constraints include mixed-control-state integral quadratic inequality constraints [15,16], conic control constraints over an infinite time horizon [17,18], state switching, stochastic coefficients, conic control constraints over finite [19] and infinite time domains [20], mixed pointwise state-control linear inequality constraints [21,22], and terminal state expected inequality constraints [23], to name just a few.
Based on the ideas from the aforementioned literature, and as a companion to [24], which addresses a forward stochastic LQ optimal control problem with a terminal state expected equality constraint, this paper investigates a backward stochastic optimal control problem with an initial state expected equality constraint (CBSLQ for short):
\[ \min\ J(u) \quad \text{subject to} \quad \mathbb{E}\big[N X_0^{\xi,u}\big] = b. \]
Let T > 0 , and consider the underlying filtered probability space ( Ω , F T , F , P ) . On this space, a one-dimensional standard Wiener process W is defined. Here, F is the natural filtration generated by W (augmented by all the P-null sets). The expectation operator is denoted by E [ · ] . The controlled backward linear stochastic differential equation is
\[ dX_t^{\xi,u} = \big(A(t)X_t^{\xi,u} + B(t)u_t + C(t)Z_t^{\xi,u}\big)\,dt + Z_t^{\xi,u}\,dW_t, \qquad X_T^{\xi,u} = \xi, \]
with the state ( X ξ , u , Z ξ , u ) depending on a terminal condition ξ and a control u, and the quadratic cost function is
\[ J^\xi(u) \triangleq J(u) = \frac{1}{2}\,\mathbb{E}\Big[\langle H X_0^{\xi,u}, X_0^{\xi,u}\rangle + \int_0^T \big(\langle Q(t)X_t^{\xi,u}, X_t^{\xi,u}\rangle + \langle S(t)Z_t^{\xi,u}, Z_t^{\xi,u}\rangle + \langle R(t)u_t, u_t\rangle\big)\,dt\Big]. \]
Here, the terminal datum $\xi$, the control $u$, the coefficients $A(\cdot)$, $B(\cdot)$, $C(\cdot)$, $Q(\cdot)$, $S(\cdot)$, $R(\cdot)$, $H$, $N$, $b$, and the inner product $\langle\cdot,\cdot\rangle$ are subject to appropriate assumptions and will be defined later.
Clearly, the completion of squares method [24] is not applicable for solving our CBSLQ problem. However, the maximum principle [8] combined with Lagrange duality theory [24] provides an effective approach. Our main contributions are as follows: (1) the CBSLQ problem is proposed for the first time; (2) under the uniform convexity of the cost function, an equivalent unconstrained parameterized BSLQ problem is explicitly solved, whose optimal control is feedback based and determined by an adjoint SDE, a Riccati-type ODE, a BSDE, and an equality; (3) the surjectivity of the linear constraint is established with an illustrative example, and the equivalence between the CBSLQ problem and the dual problem of the equivalent BSLQ problem is ensured, thereby obtaining the optimal control for the CBSLQ problem.
The remainder of this paper is organized as follows. Section 2 introduces basic notations and assumptions. Section 3 establishes the equivalence between the (CBSLQ) problem and its dual problem under a surjectivity condition. In Section 4, an explicit solution to the optimal control of the unconstrained problem is derived by using the maximum principle. Section 5 provides several equivalent characterizations of the surjectivity condition. A financial application of the (CBSLQ) problem to portfolio management is presented in Section 6. Concluding remarks are given in Section 7.

2. Preliminaries

Let $\mathbb{R}^{n\times m}$ be the Euclidean space of all $n\times m$ real matrices (written $\mathbb{R}^n$ when $m=1$), equipped with the inner product $\langle M, N\rangle = \mathrm{tr}(M^\top N)$, where $\top$ denotes the transpose of a matrix, and the induced norm $|M| = \sqrt{\mathrm{tr}(M^\top M)}$. Denote by $\mathbb{S}^n$ the space of all symmetric $n\times n$ real matrices. If $\Sigma\in\mathbb{S}^n$ is positive definite (positive semi-definite), we write $\Sigma > 0$ ($\Sigma \ge 0$). The same symbols $\langle\cdot,\cdot\rangle$ and $|\cdot|$ will be used for the inner product and the induced norm in possibly different Hilbert spaces $\mathbb{X}$ when no confusion arises. Given a measure space $(\Xi,\mathcal{G},\mu)$ and a measurable function $\varphi:\Xi\to[0,\infty)$, its essential supremum is $\operatorname{ess\,sup}_\xi \varphi = \inf\{\alpha\in\mathbb{R}\mid\mu(\varphi(\xi)>\alpha)=0\}$. We introduce the following spaces of random vectors and random processes.
\[
\begin{aligned}
L^2(0,T;\mathbb{X}) &= \Big\{\varphi:[0,T]\to\mathbb{X} \;\Big|\; \varphi \text{ is Lebesgue measurable},\ \textstyle\int_0^T |\varphi(t)|^2\,dt < \infty\Big\},\\
L^\infty(0,T;\mathbb{X}) &= \Big\{\varphi:[0,T]\to\mathbb{X} \;\Big|\; \varphi \text{ is Lebesgue measurable},\ \operatorname{ess\,sup}_t |\varphi| < \infty\Big\},\\
L^2_{\mathbb{F}}(0,T;\mathbb{X}) &= \Big\{\varphi:[0,T]\times\Omega\to\mathbb{X} \;\Big|\; \varphi \text{ is } \mathbb{F}\text{-progressively measurable and } \mathbb{E}\textstyle\int_0^T |\varphi(t)|^2\,dt < \infty\Big\},\\
L^\infty_{\mathbb{F}}(0,T;\mathbb{X}) &= \Big\{\varphi:[0,T]\times\Omega\to\mathbb{X} \;\Big|\; \varphi \text{ is } \mathbb{F}\text{-progressively measurable and } \operatorname{ess\,sup}_{t,\omega} |\varphi| < \infty\Big\}.
\end{aligned}
\]
To make our problem CBSLQ solvable, we introduce some assumptions below.
Assumption 1.
The coefficients of the state equation are all deterministic and satisfy the following:
\[ A(\cdot),\,C(\cdot)\in L^\infty(0,T;\mathbb{R}^{n\times n});\quad B(\cdot)\in L^\infty(0,T;\mathbb{R}^{n\times m});\quad \xi\in\mathbb{R}^n,\ N\in\mathbb{R}^{1\times n},\ b\in\mathbb{R}. \]
Assumption 2.
The weighting coefficients in the cost functional are all deterministic and satisfy the following:
\[ Q(\cdot),\,S(\cdot)\in L^\infty(0,T;\mathbb{S}^n);\quad R(\cdot)\in L^\infty(0,T;\mathbb{S}^m);\quad H\in\mathbb{S}^n;\quad Q(\cdot),\,S(\cdot),\,H\ge 0;\quad R(\cdot)\ge\delta I \ \text{ for some } \delta>0. \]
Assumption 3.
Our admissible control set is
\[ \mathcal{U}_{ad} = \Big\{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m) \;\Big|\; (X^{\xi,u},Z^{\xi,u},u) \text{ satisfies (2) and } \mathbb{E}\big[N X_0^{\xi,u}\big] = b\Big\}, \]
and the mapping $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is surjective, that is,
\[ \mathbb{R} = \Big\{\mathbb{E}\big[N X_0^{\xi,u}\big] \;\Big|\; u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)\Big\}. \]
We say a functional $f:\mathbb{X}\to\mathbb{R}$ is strongly convex [25] if there exists a constant $\sigma>0$ such that
\[ f(\alpha x + \beta y) \le \alpha f(x) + \beta f(y) - \frac{\sigma}{2}\,\alpha\beta\,|x-y|_{\mathbb{X}}^2, \qquad \forall x,y\in\mathbb{X},\ \alpha,\beta\in[0,1],\ \alpha+\beta=1. \]
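For instance, on a Hilbert space $\mathbb{X}$, the functional $f(x)=\tfrac{1}{2}|x|_{\mathbb{X}}^2$ is strongly convex with $\sigma=1$: for $\alpha+\beta=1$ one has the elementary identity
\[ \tfrac{1}{2}|\alpha x+\beta y|_{\mathbb{X}}^2 = \tfrac{\alpha}{2}|x|_{\mathbb{X}}^2 + \tfrac{\beta}{2}|y|_{\mathbb{X}}^2 - \tfrac{1}{2}\alpha\beta|x-y|_{\mathbb{X}}^2, \]
so the defining inequality holds with equality. The cost functional $J$ in (3) inherits this structure from the term $\tfrac{1}{2}\mathbb{E}\int_0^T\langle R(t)u_t,u_t\rangle\,dt$ with $R(t)\ge\delta I$, which is what the next lemma exploits.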
Lemma 1.
Suppose that Assumptions 1 and 2 hold. Then, the cost functional J is strongly convex and continuous on L F 2 ( 0 , T ; R m ) . In addition, if Assumption 3 holds, CBSLQ is uniquely solvable on U a d .
Proof of Lemma 1.
By Assumptions 1 and 2, for each control $u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)$, the state equation (2) admits a unique solution $(X^{\xi,u},Z^{\xi,u})$ and $J(u)<\infty$; moreover, $J$ is continuous. It remains to prove its strong convexity.
Let $u_1,u_2\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)$ and $\alpha,\beta\in[0,1]$ with $\alpha+\beta=1$. Then, by Assumption 1, the solutions $(X^{\xi,u_1},Z^{\xi,u_1})$, $(X^{\xi,u_2},Z^{\xi,u_2})$, and $(X^{\xi,\alpha u_1+\beta u_2},Z^{\xi,\alpha u_1+\beta u_2})$ of system (2) driven by $u_1$, $u_2$, and $\alpha u_1+\beta u_2$, respectively, exist and are unique. Moreover,
\[ X_t^{\xi,\alpha u_1+\beta u_2} = \alpha X_t^{\xi,u_1} + \beta X_t^{\xi,u_2}, \qquad Z_t^{\xi,\alpha u_1+\beta u_2} = \alpha Z_t^{\xi,u_1} + \beta Z_t^{\xi,u_2}. \]
In addition, it is easy to check that
\[ J^\xi(\alpha u_1 + \beta u_2) = \alpha J^\xi(u_1) + \beta J^\xi(u_2) - \alpha\beta\,J^0(u_1 - u_2). \]
Then by Assumption 2,
\[ J^\xi(\alpha u_1 + \beta u_2) \le \alpha J^\xi(u_1) + \beta J^\xi(u_2) - \frac{\delta}{2}\,\alpha\beta\,\mathbb{E}\int_0^T |u_{1t} - u_{2t}|^2\,dt, \]
which means J is strongly convex on L F 2 ( 0 , T ; R m ) .
By Assumption 3, U a d is nonempty. It is also a closed convex subset of L F 2 ( 0 , T ; R m ) , since the control system (2) is linear and the initial state constraint is a linear equality constraint. Then, by the standard existence theory of convex optimization (see, for instance, [26], Theorem 2.31), CBSLQ is uniquely solvable.
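For the reader's convenience, the existence principle invoked above can be stated in the following form (a standard result of convex analysis; see, e.g., [26]): if $\mathcal{U}$ is a nonempty, closed, convex subset of a Hilbert space and $f:\mathcal{U}\to\mathbb{R}$ is continuous and strongly convex, then $f$ is coercive and weakly lower semicontinuous on $\mathcal{U}$, so it attains its infimum, and strong convexity forces the minimizer to be unique. Applying this with $\mathcal{U}=\mathcal{U}_{ad}$ and $f=J$ gives the unique solvability of CBSLQ.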

3. Lagrangian Duality

Inspired by the ideas in [24], the Lagrangian dual method can be applied to solve the problem ( C B S L Q ) .
Lemma 2.
For u L F 2 ( 0 , T ; R m ) , λ R , we define
\[ M(u,\lambda) \triangleq J(u) + \big\langle\lambda,\ \mathbb{E}\big[N X_0^{\xi,u} - b\big]\big\rangle, \]
where $X_0^{\xi,u}$ satisfies (2) and $J$ is defined in (3). If Assumptions 1 and 2 hold, then, for each given $\lambda\in\mathbb{R}$, the following unconstrained backward stochastic linear quadratic problem with parameter $\lambda$ (BSLQu for short) is uniquely solvable:
\[ d(\lambda) \triangleq \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda). \]
Proof of Lemma 2.
Note first that the mapping M : L F 2 ( 0 , T ; R m ) × R R is well defined by Lemma 1.
Moreover, for any given $\lambda\in\mathbb{R}$, any $u_1,u_2\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)$, and any $\alpha,\beta\in[0,1]$ with $\alpha+\beta=1$,
\[ M(\alpha u_1 + \beta u_2,\lambda) = \alpha M(u_1,\lambda) + \beta M(u_2,\lambda) - \alpha\beta\,J^0(u_1 - u_2). \]
Then, by Assumption 2, we have
\[ M(\alpha u_1 + \beta u_2,\lambda) \le \alpha M(u_1,\lambda) + \beta M(u_2,\lambda) - \frac{\delta}{2}\,\alpha\beta\,\mathbb{E}\int_0^T |u_{1t} - u_{2t}|^2\,dt, \]
which means that M is strongly convex with respect to control u on L F 2 ( 0 , T ; R m ) . Again, by Theorem 2.31 in [26], the problem BSLQu is uniquely solvable. □
Clearly, under Assumptions 1 and 2, by (4),
\[ \sup_{\lambda\in\mathbb{R}} M(u,\lambda) = \sup_{\lambda\in\mathbb{R}}\big\{J(u) + \langle\lambda,\mathbb{E}[N X_0^{\xi,u} - b]\rangle\big\} = \begin{cases} J(u), & \mathbb{E}[N X_0^{\xi,u} - b] = 0,\\ +\infty, & \mathbb{E}[N X_0^{\xi,u} - b] \neq 0. \end{cases} \]
Therefore, the original CBSLQ is equivalent to
\[ \inf_{u\in\mathcal{U}_{ad}} J(u) = \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\ \sup_{\lambda\in\mathbb{R}} M(u,\lambda). \]
Now, we define the dual problem of CBSLQ:
\[ \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda) = \sup_{\lambda\in\mathbb{R}} d(\lambda). \]
In what follows, we prove the strong duality between CBSLQ and its dual problem (6).
Theorem 1.
Assume Assumptions 1–3 and let u ¯ be the unique optimal solution to CBSLQ. Then, the following two assertions hold true.
(i) The strong duality between CBSLQ and unconstrained problem (6) holds true, i.e.,
\[ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\ \sup_{\lambda\in\mathbb{R}} M(u,\lambda) = \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda). \]
(ii) Let $\bar\lambda$ be the solution to the dual problem (6); then, $(\bar u,\bar\lambda)$ is a saddle point, i.e.,
\[ M(\bar u,\lambda) \le M(\bar u,\bar\lambda) \le M(u,\bar\lambda), \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m),\ \forall\lambda\in\mathbb{R}. \]
In particular,
\[ M(\bar u,\bar\lambda) = \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\bar\lambda). \]
Proof of Theorem 1.
Define
\[ K = \Big\{(\alpha,\beta)\in\mathbb{R}^2 \;\Big|\; \exists\,u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m) \text{ s.t. } J(u) - J(\bar u) \le \alpha,\ \mathbb{E}[N X_0^{\xi,u} - b] = \beta\Big\}, \]
and
\[ O = \big\{(\alpha,\beta)\in\mathbb{R}^2 \;\big|\; \alpha<0,\ \beta=0\big\}. \]
Clearly, O is a convex set. In the following, we will prove that K is also a convex set. Let ( α 1 , β 1 ) , ( α 2 , β 2 ) K ; then, there exist u 1 , u 2 L F 2 ( 0 , T ; R m ) such that
\[ J(u_i) - J(\bar u) \le \alpha_i, \qquad \mathbb{E}[N X_0^{\xi,u_i} - b] = \beta_i, \qquad i=1,2. \]
By Lemma 1, J is convex. Then, for any θ [ 0 , 1 ] ,
\[ J(\theta u_1 + (1-\theta)u_2) - J(\bar u) \le \theta J(u_1) + (1-\theta)J(u_2) - J(\bar u) = \theta J(u_1) + (1-\theta)J(u_2) - \theta J(\bar u) - (1-\theta)J(\bar u) \le \theta\alpha_1 + (1-\theta)\alpha_2. \]
From (2), there is a unique solution $(X,Z)$ satisfying
\[ X_t = \xi - \int_t^T \big(A(s)X_s + B(s)u_s + C(s)Z_s\big)\,ds - \int_t^T Z_s\,dW_s, \]
so that $X_0^{\xi,u}$ depends affinely on the control $u$ for the fixed terminal datum $\xi$. Thus,
\[ \mathbb{E}\big[N X_0^{\xi,\theta u_1+(1-\theta)u_2} - b\big] = \mathbb{E}\big[N\big(\theta X_0^{\xi,u_1} + (1-\theta)X_0^{\xi,u_2}\big) - b\big] = \theta\,\mathbb{E}\big[N X_0^{\xi,u_1} - b\big] + (1-\theta)\,\mathbb{E}\big[N X_0^{\xi,u_2} - b\big] = \theta\beta_1 + (1-\theta)\beta_2. \]
Therefore,
\[ \theta(\alpha_1,\beta_1) + (1-\theta)(\alpha_2,\beta_2) \in K. \]
Therefore, K is also a convex set.
By the optimality of $\bar u$, we deduce that $K\cap O = \emptyset$. Then, by the separation theorem, there exists $(\lambda_0,\lambda)\in\mathbb{R}^2$ with $(\lambda_0,\lambda)\neq 0$ such that
\[ \inf_{(\alpha,\beta)\in K}\big\{\lambda_0\alpha + \langle\lambda,\beta\rangle\big\} \ge \sup_{(\alpha,\beta)\in O}\big\{\lambda_0\alpha + \langle\lambda,\beta\rangle\big\} = \sup_{\alpha<0}\lambda_0\alpha. \]
Then, we have $\lambda_0 \ge 0$, $\sup_{\alpha<0}\lambda_0\alpha = 0$, and
\[ 0 \le \inf_{(\alpha,\beta)\in K}\big\{\lambda_0\alpha + \langle\lambda,\beta\rangle\big\} \le \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\big\{\lambda_0\big(J(u) - J(\bar u)\big) + \langle\lambda,\mathbb{E}[N X_0^{\xi,u} - b]\rangle\big\}. \]
We claim that $\lambda_0 \neq 0$. If $\lambda_0 = 0$, then
\[ 0 \le \big\langle\lambda,\ \mathbb{E}[N X_0^{\xi,u} - b]\big\rangle, \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m). \]
Then, by Assumption 3, $\lambda = 0$. This contradicts $(\lambda_0,\lambda)\neq 0$. Therefore, $\lambda_0 > 0$.
Letting $\bar\lambda = \lambda/\lambda_0$, we have
\[ 0 \le J(u) - J(\bar u) + \big\langle\bar\lambda,\ \mathbb{E}[N X_0^{\xi,u} - b]\big\rangle, \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m). \]
Since
\[ \mathbb{E}\big[N\bar X_0^{\xi,\bar u} - b\big] = 0, \]
it holds that
\[ J(\bar u) + \big\langle\bar\lambda,\ \mathbb{E}[N\bar X_0^{\xi,\bar u} - b]\big\rangle \le J(u) + \big\langle\bar\lambda,\ \mathbb{E}[N X_0^{\xi,u} - b]\big\rangle, \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m). \]
This implies that
\[ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\ \sup_{\lambda\in\mathbb{R}} M(u,\lambda) = J(\bar u) = J(\bar u) + \big\langle\bar\lambda,\ \mathbb{E}[N\bar X_0^{\xi,\bar u} - b]\big\rangle \le \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\bar\lambda) = d(\bar\lambda) \le \sup_{\lambda\in\mathbb{R}} d(\lambda) = \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda). \]
Since the weak duality inequality
\[ \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda) \le \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\ \sup_{\lambda\in\mathbb{R}} M(u,\lambda) \]
always holds, it follows that
\[ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)}\ \sup_{\lambda\in\mathbb{R}} M(u,\lambda) = \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda). \]
This proves (i).
The following proves assertion (ii). Let $\bar\lambda\in\mathbb{R}$ be the unique optimal solution to the dual problem (which exists by (i)); then, we have
\[ \sup_{\lambda\in\mathbb{R}}\ \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda) = \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\bar\lambda) \le M(u,\bar\lambda), \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m). \]
By the optimality of u ¯ and λ ¯ ,
\[ M(\bar u,\lambda) \le M(\bar u,\bar\lambda) \le M(u,\bar\lambda), \qquad \forall u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m),\ \forall\lambda\in\mathbb{R}. \]
Thus, $(\bar u,\bar\lambda)$ is a saddle point of $M$. Then, it follows that
\[ M(\bar u,\bar\lambda) = \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\bar\lambda). \]

4. Maximum Principle

The efficient method for solving backward stochastic linear quadratic problems developed in [8] can be used to solve our problem BSLQu.
Lemma 3.
Suppose that Assumptions 1 and 2 hold. If ( X ¯ ξ , u ¯ , Z ¯ ξ , u ¯ , u ¯ ) is the optimal triple of BSLQu with parameter λ, then
\[ R(t)\bar u_t - B(t)^\top\bar Y_t^\lambda = 0, \qquad t\in[0,T], \]
where the process Y ¯ is the solution to the following SDE
\[ d\bar Y_t^\lambda = \big\{-A(t)^\top\bar Y_t^\lambda + Q(t)\bar X_t^{\xi,\bar u}\big\}\,dt + \big\{-C(t)^\top\bar Y_t^\lambda + S(t)\bar Z_t^{\xi,\bar u}\big\}\,dW_t, \qquad \bar Y_0^\lambda = H\bar X_0^{\xi,\bar u} + N^\top\lambda. \]
Proof of Lemma 3.
Let u ¯ be the optimal control of BSLQu (4); then, for any v L F 2 ( 0 , T ; R m ) and ε [ 0 , 1 ] ,
\[ \big(X_t^{\xi,\bar u+\varepsilon v},\ Z_t^{\xi,\bar u+\varepsilon v}\big) = \big(\bar X_t^{\xi,\bar u} + \varepsilon X_t^{0,v},\ \bar Z_t^{\xi,\bar u} + \varepsilon Z_t^{0,v}\big), \]
where $(X^{0,v},Z^{0,v})$ is the solution to (2) with terminal state $\xi = 0$ and control $v$. Then, we get
\[
\begin{aligned}
M(\bar u+\varepsilon v,\lambda) - M(\bar u,\lambda) ={}& \varepsilon\,\mathbb{E}\Big[\big\langle H\bar X_0^{\xi,\bar u} + N^\top\lambda,\ X_0^{0,v}\big\rangle + \int_0^T\big(\langle Q(t)\bar X_t^{\xi,\bar u},X_t^{0,v}\rangle + \langle S(t)\bar Z_t^{\xi,\bar u},Z_t^{0,v}\rangle + \langle R(t)\bar u_t,v_t\rangle\big)\,dt\Big]\\
&+ \frac{1}{2}\varepsilon^2\,\mathbb{E}\Big[\langle H X_0^{0,v},X_0^{0,v}\rangle + \int_0^T\big(\langle Q(t)X_t^{0,v},X_t^{0,v}\rangle + \langle S(t)Z_t^{0,v},Z_t^{0,v}\rangle + \langle R(t)v_t,v_t\rangle\big)\,dt\Big].
\end{aligned}
\]
Moreover, with the help of Equation (8) and by applying Itô's formula to $t\mapsto\langle\bar Y_t^\lambda, X_t^{0,v}\rangle$, we obtain
\[ \mathbb{E}\big[\langle H\bar X_0^{\xi,\bar u} + N^\top\lambda,\ X_0^{0,v}\rangle\big] = -\,\mathbb{E}\Big[\int_0^T d\langle\bar Y_t^\lambda, X_t^{0,v}\rangle\Big] = -\,\mathbb{E}\Big[\int_0^T\big(\langle Q(t)\bar X_t^{\xi,\bar u},X_t^{0,v}\rangle + \langle S(t)\bar Z_t^{\xi,\bar u},Z_t^{0,v}\rangle + \langle B(t)^\top\bar Y_t^\lambda,v_t\rangle\big)\,dt\Big]. \]
Therefore,
\[
\begin{aligned}
M(\bar u+\varepsilon v,\lambda) - M(\bar u,\lambda) ={}& \varepsilon\,\mathbb{E}\Big[\int_0^T\big\langle R(t)\bar u_t - B(t)^\top\bar Y_t^\lambda,\ v_t\big\rangle\,dt\Big]\\
&+ \frac{1}{2}\varepsilon^2\,\mathbb{E}\Big[\langle H X_0^{0,v},X_0^{0,v}\rangle + \int_0^T\big(\langle Q(t)X_t^{0,v},X_t^{0,v}\rangle + \langle S(t)Z_t^{0,v},Z_t^{0,v}\rangle + \langle R(t)v_t,v_t\rangle\big)\,dt\Big].
\end{aligned}
\]
Next, set $\phi(\varepsilon) = M(\bar u+\varepsilon v,\lambda) - M(\bar u,\lambda)$. Then $\phi$ is continuously differentiable in $\varepsilon$, and we obtain
\[ 0 = \phi'(0) = \mathbb{E}\Big[\int_0^T\big\langle R(t)\bar u_t - B(t)^\top\bar Y_t^\lambda,\ v_t\big\rangle\,dt\Big]. \]
We directly claim that (7) holds, since v is arbitrary.
From the above result, we see that if $\bar u$ is an optimal control of BSLQu, then the following FBSDE admits an adapted solution $(\bar Y^\lambda,\bar X^{\xi,\bar u},\bar Z^{\xi,\bar u})$:
\[
\begin{cases}
d\bar Y_t^\lambda = \big\{-A(t)^\top\bar Y_t^\lambda + Q(t)\bar X_t^{\xi,\bar u}\big\}\,dt + \big\{-C(t)^\top\bar Y_t^\lambda + S(t)\bar Z_t^{\xi,\bar u}\big\}\,dW_t,\\[2pt]
d\bar X_t^{\xi,\bar u} = \big\{A(t)\bar X_t^{\xi,\bar u} + B(t)\bar u_t + C(t)\bar Z_t^{\xi,\bar u}\big\}\,dt + \bar Z_t^{\xi,\bar u}\,dW_t,\\[2pt]
\bar Y_0^\lambda = H\bar X_0^{\xi,\bar u} + N^\top\lambda, \qquad \bar X_T^{\xi,\bar u} = \xi,
\end{cases}
\]
and the following stationarity condition holds:
\[ R(t)\bar u_t - B(t)^\top\bar Y_t^\lambda = 0, \qquad t\in[0,T]. \]
We call (8), together with the stationarity condition (11), the optimality system for the optimal control of CBSLQ. Note that the four-step scheme introduced in [27,28] for general FBSDEs provides an efficient method for solving this special FBSDE (10).
In fact, due to the linearity of BSLQu, there is also a linear relation between X ¯ ξ , u ¯ and Y ¯ λ , as shown in the following lemma.
Lemma 4.
In the FBSDE (10), there is, for t [ 0 , T ] ,
\[ \bar X_t^{\xi,\bar u} = -P(t)\bar Y_t^\lambda + h_t, \]
where the matrix-valued function P satisfies the Riccati-type ODE
\[ \dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top + C(t)\big(I + P(t)S(t)\big)^{-1}P(t)C(t)^\top = 0, \qquad P(T) = 0, \]
and the process pair ( h , β ) satisfies the BSDE
\[ dh_t = \big\{\big(A(t) + P(t)Q(t)\big)h_t + C(t)\big(I + P(t)S(t)\big)^{-1}\beta_t\big\}\,dt + \beta_t\,dW_t, \qquad h_T = \xi. \]
Proof of Lemma 4.
Assume that
\[ \bar X_t^{\xi,\bar u} = -P(t)\bar Y_t^\lambda + h_t, \]
where P : [ 0 , T ] S n is absolutely continuous, and a process pair ( h , β ) satisfies the following BSDE
\[ dh_t = \alpha_t\,dt + \beta_t\,dW_t, \qquad h_T = \xi, \]
for some adapted process α .
Applying Itô's formula to Equation (14), together with (15) and (10), we have
\[
\begin{aligned}
0 ={}& d\bar X_t^{\xi,\bar u} + \dot P(t)\bar Y_t^\lambda\,dt + P(t)\,d\bar Y_t^\lambda - dh_t\\
={}& \big\{A(t)\bar X_t^{\xi,\bar u} + B(t)\bar u_t + C(t)\bar Z_t^{\xi,\bar u}\big\}\,dt + \bar Z_t^{\xi,\bar u}\,dW_t + \dot P(t)\bar Y_t^\lambda\,dt\\
&+ \big\{-P(t)A(t)^\top\bar Y_t^\lambda + P(t)Q(t)\bar X_t^{\xi,\bar u}\big\}\,dt + \big\{-P(t)C(t)^\top\bar Y_t^\lambda + P(t)S(t)\bar Z_t^{\xi,\bar u}\big\}\,dW_t - \alpha_t\,dt - \beta_t\,dW_t\\
={}& \big\{A(t)\bar X_t^{\xi,\bar u} + B(t)\bar u_t + C(t)\bar Z_t^{\xi,\bar u} + \dot P(t)\bar Y_t^\lambda - P(t)A(t)^\top\bar Y_t^\lambda + P(t)Q(t)\bar X_t^{\xi,\bar u} - \alpha_t\big\}\,dt\\
&+ \big\{\bar Z_t^{\xi,\bar u} - P(t)C(t)^\top\bar Y_t^\lambda + P(t)S(t)\bar Z_t^{\xi,\bar u} - \beta_t\big\}\,dW_t\\
={}& \big\{\big(\dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top\big)\bar Y_t^\lambda + C(t)\bar Z_t^{\xi,\bar u} + \big(A(t) + P(t)Q(t)\big)h_t - \alpha_t\big\}\,dt\\
&+ \big\{\bar Z_t^{\xi,\bar u} - P(t)C(t)^\top\bar Y_t^\lambda + P(t)S(t)\bar Z_t^{\xi,\bar u} - \beta_t\big\}\,dW_t.
\end{aligned}
\]
Then, we can suppose that
\[ \big(\dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top\big)\bar Y_t^\lambda + C(t)\bar Z_t^{\xi,\bar u} + \big(A(t) + P(t)Q(t)\big)h_t - \alpha_t = 0 \]
and
\[ \bar Z_t^{\xi,\bar u} - P(t)C(t)^\top\bar Y_t^\lambda + P(t)S(t)\bar Z_t^{\xi,\bar u} - \beta_t = 0. \]
Now, from (17) we have
\[ \bar Z_t^{\xi,\bar u} = \big(I + P(t)S(t)\big)^{-1}\big\{P(t)C(t)^\top\bar Y_t^\lambda + \beta_t\big\}, \]
where I is the n × n identity matrix. Then, substituting it into (16) gives
\[ \big(\dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top + C(t)(I + P(t)S(t))^{-1}P(t)C(t)^\top\big)\bar Y_t^\lambda + C(t)(I + P(t)S(t))^{-1}\beta_t + \big(A(t) + P(t)Q(t)\big)h_t - \alpha_t = 0. \]
From the above equation, one can take
\[
\begin{cases}
\dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top + C(t)(I + P(t)S(t))^{-1}P(t)C(t)^\top = 0,\\[2pt]
\alpha_t - C(t)(I + P(t)S(t))^{-1}\beta_t - \big(A(t) + P(t)Q(t)\big)h_t = 0.
\end{cases}
\]
Comparing the terminal values on both sides of Equations (10) and (15), one has P ( T ) = 0 . Then, there are two equations, the ODE (13) and the BSDE (15). According to Assumptions 1 and 2 and Proposition 4.1 in [5], each of the two equations has a unique solution. Thus, Equation (12) follows. □
Moreover, we can obtain the optimal cost of problem BSLQu, as in the following lemma.
Lemma 5.
Suppose that Assumptions 1 and 2 hold. Then the optimal cost attained by the optimal triple $(\bar X^{\xi,\bar u},\bar Z^{\xi,\bar u},\bar u)$ is
\[
\begin{aligned}
d(\lambda) = M(\bar u,\lambda) ={}& \frac{1}{2}\,\mathbb{E}\Big[\big\langle P(0)(N^\top\lambda - Hh_0),\ (I + HP(0))^{-1}(N^\top\lambda - Hh_0)\big\rangle\\
&\qquad - 2\big\langle HP(0)(I + HP(0))^{-1}(N^\top\lambda - Hh_0),\ h_0\big\rangle + \langle Hh_0,h_0\rangle\Big]\\
&- \mathbb{E}\Big[\big\langle N^\top\lambda,\ P(0)(I + HP(0))^{-1}(N^\top\lambda - Hh_0) - h_0\big\rangle + \langle b,\lambda\rangle\Big]\\
&+ \mathbb{E}\Big[\int_0^T\big(\langle Q(t)h_t,h_t\rangle + \langle(I + P(t)S(t))^{-1}S(t)\beta_t,\beta_t\rangle\big)\,dt\Big],
\end{aligned}
\]
where P ( 0 ) and h 0 are defined in Lemma 4.
Proof of Lemma 5.
Note that, according to the formula (4),
\[ M(\bar u,\lambda) = \frac{1}{2}\,\mathbb{E}\Big[\int_0^T\big(\langle Q(t)\bar X_t^{\xi,\bar u},\bar X_t^{\xi,\bar u}\rangle + \langle S(t)\bar Z_t^{\xi,\bar u},\bar Z_t^{\xi,\bar u}\rangle + \langle R(t)\bar u_t,\bar u_t\rangle\big)\,dt + \langle H\bar X_0^{\xi,\bar u},\bar X_0^{\xi,\bar u}\rangle\Big] + \big\langle\lambda,\ \mathbb{E}[N\bar X_0^{\xi,\bar u} - b]\big\rangle. \]
On one hand, according to the three equalities (11), (12), and (18), we have
1 2 0 T < Q ( t ) X ¯ t ξ , u ¯ , X ¯ t ξ , u ¯ > + < S ( t ) Z ¯ t ξ , u ¯ , Z ¯ t ξ , u ¯ > + < R ( t ) u ¯ t , u ¯ t > d t = 1 2 0 T { < ( P ( t ) Q ( t ) P ( t ) + C ( t ) ( I + P ( t ) S ( t ) ) 1 P ( t ) S ( t ) P ( t ) ( I + P ( t ) S ( t ) ) 1 C ( t ) + B ( t ) R 1 ( t ) B ( t ) ) Y ¯ t λ , Y ¯ t λ > + 2 < Y ¯ t λ , P ( t ) h t > 2 < Y ¯ t λ , C ( t ) ( I + P ( t ) S ( t ) ) 1 P ( t ) S ( t ) ( I + P ( t ) S ( t ) ) 1 β t > + < Q ( t ) h t , h t > + < ( I + P ( t ) S ( t ) ) 1 S ( t ) ( I + P ( t ) S ( t ) ) 1 β t , β t > } d t .
On the other hand, by applying Itô's formula to $t\mapsto\langle P(t)\bar Y_t^\lambda,\bar Y_t^\lambda\rangle$, we have
0 = E [ < P ( 0 ) Y ¯ 0 λ , Y ¯ 0 λ > ] + E [ 0 T { < ( P ˙ ( t ) ( A ( t ) + P ( t ) Q ( t ) ) P ( t ) P ( t ) ( A ( t ) + P ( t ) Q ( t ) ) ( t ) + C ( t ) ( I + P ( t ) S ( t ) ) 1 P ( t ) ( I + P ( t ) S ( t ) ) 1 C ( t ) ) Y ¯ t λ , Y ¯ t λ > 2 < Y ¯ t λ , P ( t ) h t > + 2 < Y ¯ t λ , C ( t ) ( I + P ( t ) S ( t ) ) 1 P ( t ) S ( t ) ( I + P ( t ) S ( t ) ) 1 β t > + < ( I + P ( t ) S ( t ) ) 1 > S ( t ) P ( t ) S ( t ) ( I + P ( t ) S ( t ) ) 1 β t , β t > } d t ] .
Then, combining (19) with (20), we get
\[ d(\lambda) = M(\bar u,\lambda) = \frac{1}{2}\,\mathbb{E}\big[\langle H\bar X_0^{\xi,\bar u},\bar X_0^{\xi,\bar u}\rangle + \langle P(0)\bar Y_0^\lambda,\bar Y_0^\lambda\rangle\big] + \big\langle\lambda,\ \mathbb{E}[N\bar X_0^{\xi,\bar u} - b]\big\rangle + \mathbb{E}\Big[\int_0^T\big(\langle Q(t)h_t,h_t\rangle + \langle(I + P(t)S(t))^{-1}S(t)\beta_t,\beta_t\rangle\big)\,dt\Big]. \]
Since it can be obtained from (11) and (12) that
\[ \bar Y_0^\lambda = \big(I + HP(0)\big)^{-1}\big(N^\top\lambda - Hh_0\big), \]
we obtain
1 2 E [ < H X 0 ξ , u , X 0 ξ , u > + < P ( 0 ) Y 0 , Y 0 > ] + < λ , E [ N X 0 ξ , u b ] > = 1 2 E [ < H ( P ( 0 ) Y 0 + h 0 ) , P ( 0 ) Y 0 + h 0 > + < P ( 0 ) Y 0 , Y 0 > ] < λ , E [ N ( P ( 0 ) Y 0 + h 0 ) + b ] > = 1 2 E [ < P ( 0 ) ( I + H P ( 0 ) ) Y 0 , Y 0 > + 2 < H P ( 0 ) Y 0 , h 0 > + < H h 0 , h 0 > ] E [ < N λ , P ( 0 ) Y 0 + h 0 > < b , λ > ] = 1 2 E [ < P ( 0 ) ( N λ H h 0 ) , ( N λ H h 0 ) ( I + H P ( 0 ) ) 1 > + 2 < H P ( 0 ) ( N λ H h 0 ) ( I + H P ( 0 ) ) 1 , h 0 > + < H h 0 , h 0 > ] E [ < N λ , P ( 0 ) ( N λ H h 0 ) ( I + H P ( 0 ) ) 1 + h 0 > < b , λ > ] .
Substituting the above expression into (21) gives the result immediately. □
According to the above lemma, we can construct an optimization problem
\[ d(\bar\lambda) = \sup_{\lambda\in\mathbb{R}} d(\lambda), \]
which can be solved easily.
Lemma 6.
Suppose Assumptions 1 and 2 hold. Then, for the optimization problem (22), there is a unique optimal λ ¯ satisfying the following linear algebraic equations
\[ N P(0)\big(I + HP(0)\big)^{-1}N^\top\,\bar\lambda = \mathbb{E}\Big[N\Big(P(0)\big(I + HP(0)\big)^{-1}H + I\Big)h_0\Big] - b. \]
Proof of Lemma 6.
First, we prove that $N P(0) N^\top > 0$. By Assumption 3, the matrix $N$ has full row rank, and $P(0)$ is positive definite; therefore $N P(0) N^\top > 0$. The dual function $d$ defined on $\mathbb{R}$ by the Lagrange multiplier method is concave and differentiable, so its optimal solution $\bar\lambda$ satisfies the first-order optimality condition $d'(\bar\lambda) = 0$, that is,
\[ 0 = d'(\bar\lambda) = \mathbb{E}\big[N\bar X_0^{\xi,\bar u}\big] - b = \mathbb{E}\Big[N\Big(h_0 - P(0)\big(I + HP(0)\big)^{-1}\big(N^\top\bar\lambda - Hh_0\big)\Big)\Big] - b,
\]
which rearranges to the linear equation above.
Since $N P(0) N^\top > 0$, the matrix $N P(0) N^\top$ is invertible. Therefore, there exists a unique $\bar\lambda$ that satisfies the above equation. □
Now, we can state our main theorem about our original problem, the CBSLQ, whose proof is similar to that of Theorem 3.2 in [24] based on Lagrangian duality theory. We provide the details in the next section for readers’ convenience.
Theorem 2.
Suppose Assumptions 1–3 hold. Then, the optimal control of CBSLQ is
\[ \bar u_t = R^{-1}(t)B(t)^\top\bar Y_t^{\bar\lambda}, \]
where
\[
\begin{cases}
d\bar Y_t^{\bar\lambda} = \big\{-A(t)^\top\bar Y_t^{\bar\lambda} + Q(t)\bar X_t^{\xi,\bar u}\big\}\,dt + \big\{-C(t)^\top\bar Y_t^{\bar\lambda} + S(t)\bar Z_t^{\xi,\bar u}\big\}\,dW_t,\\[2pt]
\bar Y_0^{\bar\lambda} = H\bar X_0^{\xi,\bar u} + N^\top\bar\lambda,\\[2pt]
N P(0)\big(I + HP(0)\big)^{-1}N^\top\,\bar\lambda = \mathbb{E}\Big[N\Big(P(0)\big(I + HP(0)\big)^{-1}H + I\Big)h_0\Big] - b,
\end{cases}
\]
with P ( 0 ) and h 0 respectively determined by the ODE
\[ \dot P(t) - A(t)P(t) - P(t)A(t)^\top - P(t)Q(t)P(t) + B(t)R^{-1}(t)B(t)^\top + C(t)\big(I + P(t)S(t)\big)^{-1}P(t)C(t)^\top = 0, \qquad P(T) = 0, \]
and the BSDE
\[ dh_t = \big\{\big(A(t) + P(t)Q(t)\big)h_t + C(t)\big(I + P(t)S(t)\big)^{-1}\beta_t\big\}\,dt + \beta_t\,dW_t, \qquad h_T = \xi. \]
Proof of Theorem 2.
It follows from Lemma 1, Lemma 2, Lemma 3, and Lemma 6, and Theorem 1 in Section 3. □
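To make the recipe in Theorem 2 concrete, the following is a minimal numerical sketch (not the authors' code) for a one-dimensional, time-invariant special case with a deterministic terminal datum $\xi$, so that the BSDE for $h$ reduces to an ODE with $\beta\equiv 0$. All parameter values are illustrative assumptions, and the algebraic equation for $\bar\lambda$ is used in the scalar form reconstructed above.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative scalar, time-invariant data (assumptions, not taken from the paper).
A, B, C, Q, S, R, H, N = 0.03, 0.2, 0.1, 0.5, 0.5, 1.0, 1.0, 1.0
b, xi, T = 40.0, 50.0, 5.0

def riccati_rhs(t, p):
    # Scalar form of the Riccati ODE in Theorem 2:
    # P' = A P + P A + P Q P - B R^{-1} B - C (1 + P S)^{-1} P C,  P(T) = 0.
    P = p[0]
    return [2.0 * A * P + Q * P * P - B * B / R - C * P * C / (1.0 + P * S)]

# Integrate the Riccati equation backward from t = T to t = 0.
grid = np.linspace(T, 0.0, 201)
solP = solve_ivp(riccati_rhs, (T, 0.0), [0.0], t_eval=grid, max_step=0.01)
P_of = lambda t: np.interp(t, solP.t[::-1], solP.y[0][::-1])

def h_rhs(t, h):
    # With deterministic xi, beta = 0 and h' = (A + P(t) Q) h, h(T) = xi.
    return [(A + P_of(t) * Q) * h[0]]

solh = solve_ivp(h_rhs, (T, 0.0), [xi], t_eval=grid, max_step=0.01)
P0, h0 = solP.y[0][-1], solh.y[0][-1]

# Scalar version of the algebraic equation for the multiplier:
# N P0 (1 + H P0)^{-1} N * lam = N (P0 (1 + H P0)^{-1} H + 1) h0 - b.
G = P0 / (1.0 + H * P0)
lam_bar = (N * (G * H + 1.0) * h0 - b) / (N * G * N)

print(f"P(0) = {P0:.4f}, h(0) = {h0:.4f}, lambda_bar = {lam_bar:.4f}")
# The optimal feedback then follows from u_t = R^{-1} B Y_t, with Y solving the
# adjoint SDE coupled with the state equation as in Theorem 2.
```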

5. The Characterization of Condition (Assumption 3)

In this section, we investigate equivalent characterizations of the condition in Assumption 3. The basic idea comes from the fundamental controllability argument for deterministic controlled linear systems (see, for instance, [29]). In order to characterize Assumption 3, let us consider a special CBSLQ problem for which Assumption 1 holds:
\[
\begin{aligned}
\min\ \ & \frac{1}{2}\,\mathbb{E}\Big[\int_0^T |u_t|^2\,dt\Big]\\
\text{s.t.}\ \ & u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m),\ (X^{\xi,u},Z^{\xi,u},u) \text{ satisfies}\\
& dX_t^{\xi,u} = \big(A(t)X_t^{\xi,u} + B(t)u_t + C(t)Z_t^{\xi,u}\big)\,dt + Z_t^{\xi,u}\,dW_t, \quad X_T^{\xi,u} = \xi,\\
& \text{and } \mathbb{E}\big[N X_0^{\xi,u}\big] = b.
\end{aligned}
\]
Define the Lagrangian functional for (23) by
\[ M(u,\lambda) = \frac{1}{2}\,\mathbb{E}\Big[\int_0^T|u_t|^2\,dt\Big] + \big\langle\lambda,\ \mathbb{E}[N X_0^{\xi,u}] - b\big\rangle \]
for u L F 2 ( 0 , T ; R m ) and λ R .
Then, by Lemmas 2 and 3, for a given $\lambda\in\mathbb{R}$, for the problem
\[ d(\lambda) \triangleq \inf_{u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)} M(u,\lambda), \]
there is a unique optimal control u ¯ satisfying, for t [ 0 , T ] ,
\[ \bar u_t = B(t)^\top\bar Y_t^\lambda, \]
with the process Y ¯ λ being the solution to an adjoint equation
\[ d\bar Y_t^\lambda = -A(t)^\top\bar Y_t^\lambda\,dt - C(t)^\top\bar Y_t^\lambda\,dW_t, \qquad \bar Y_0^\lambda = N^\top\lambda. \]
By applying Itô's formula together with Equations (26) and (25), we have
\[ d(\lambda) = M(\bar u,\lambda) = \frac{1}{2}\,\mathbb{E}\Big[\int_0^T|\bar u_t|^2\,dt\Big] + \big\langle\lambda,\ \mathbb{E}[N X_0^{\xi,\bar u}] - b\big\rangle = -\frac{1}{2}\,\mathbb{E}\Big[\int_0^T|B(t)^\top\bar Y_t^\lambda|^2\,dt\Big] + \mathbb{E}\big[\langle\bar Y_T^\lambda,\xi\rangle\big] - \langle\lambda,b\rangle. \]
We have the following result.
Lemma 7.
If λ ¯ is the optimal solution to
\[ \sup_{\lambda\in\mathbb{R}} d(\lambda), \]
then
\[ \bar u_t \triangleq B(t)^\top\bar Y_t^{\bar\lambda} \]
is the optimal solution to (23), where Y ¯ t λ ¯ is the solution to (26) with initial datum N λ ¯ .
Proof of Lemma 7.
Since λ ¯ is the optimal solution, for any μ R ,
\[ 0 = \langle d'(\bar\lambda),\mu\rangle = \mathbb{E}\big[\langle\bar Y_T^\mu,\xi\rangle\big] - \langle b,\mu\rangle - \mathbb{E}\Big[\int_0^T\big\langle B(t)^\top\bar Y_t^{\bar\lambda},\ B(t)^\top\bar Y_t^\mu\big\rangle\,dt\Big]. \]
Let $X^{\xi,\bar u}$ be the solution to the controlled system (2) with terminal datum $\xi$ and control $\bar u$ defined by (28). Then, by (28), (29), and Itô's formula,
\[ \mathbb{E}\big[\langle N X_0^{\xi,\bar u} - b,\ \mu\rangle\big] = \mathbb{E}\big[\langle N^\top\mu,\ X_0^{\xi,\bar u}\rangle\big] - \langle b,\mu\rangle = \mathbb{E}\Big[\langle\bar Y_T^\mu,\xi\rangle - \int_0^T\langle\bar u_t,\ B(t)^\top\bar Y_t^\mu\rangle\,dt\Big] - \langle b,\mu\rangle = 0. \]
Due to the arbitrariness of $\mu$, we obtain $\mathbb{E}[N X_0^{\xi,\bar u}] - b = 0$. This proves that the control $\bar u$ defined by (28) is a feasible control.
Next, we prove the optimality of u ¯ . Replacing μ by λ ¯ in (30), we obtain
\[ 0 = \mathbb{E}\Big[\langle\bar Y_T^{\bar\lambda},\xi\rangle - \int_0^T|\bar u_t|^2\,dt\Big] - \langle b,\bar\lambda\rangle. \]
Note that for any $u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m)$ with corresponding state $X^{\xi,u}$ such that $\mathbb{E}[N X_0^{\xi,u}] - b = 0$, by Itô's formula,
\[ \mathbb{E}\big[\langle X_0^{\xi,u},\ N^\top\bar\lambda\rangle\big] = \mathbb{E}\Big[\langle\bar Y_T^{\bar\lambda},\xi\rangle - \int_0^T\langle u_t,\ B(t)^\top\bar Y_t^{\bar\lambda}\rangle\,dt\Big]. \]
Combining (31) with (32), we obtain
\[ \mathbb{E}\Big[\int_0^T|\bar u_t|^2\,dt\Big] = \mathbb{E}\big[\langle N X_0^{\xi,u} - b,\ \bar\lambda\rangle\big] + \mathbb{E}\Big[\int_0^T\langle u_t,\ B(t)^\top\bar Y_t^{\bar\lambda}\rangle\,dt\Big] = \mathbb{E}\Big[\int_0^T\langle u_t,\ B(t)^\top\bar Y_t^{\bar\lambda}\rangle\,dt\Big] \le \Big(\mathbb{E}\int_0^T|u_t|^2\,dt\Big)^{\frac12}\Big(\mathbb{E}\int_0^T|\bar u_t|^2\,dt\Big)^{\frac12}. \]
This proves the optimality of u ¯ . □
The following theorem gives a necessary and sufficient condition for u E [ N X 0 ξ , u ] being a surjection.
Theorem 3.
Suppose that Assumptions 1 and 2 hold. Then $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is a surjection if and only if there is $c>0$ such that, for all $\lambda\in\mathbb{R}$,
\[ \mathbb{E}\Big[\int_0^T|B(t)^\top\bar Y_t^\lambda|^2\,dt\Big] \ge c\,|\lambda|^2, \]
where Y ¯ t λ is the solution to (26) with initial datum N λ .
Proof of Theorem 3.
We first prove the sufficiency.
For arbitrary $\alpha\in\mathbb{R}$, let $d_\alpha$ be the dual function defined in (27) with $b$ replaced by $\alpha$. If there is $c>0$ such that (33) holds, then $d_\alpha$ is coercive. Since the concavity and continuity of $d_\alpha$ are obvious, the dual problem (27) with $b$ replaced by $\alpha$ has a solution $\bar\lambda_\alpha$. Then, by Lemma 7, the control $\bar u_\alpha$ defined by (28) with $\bar\lambda$ replaced by $\bar\lambda_\alpha$ is a minimal-norm control such that $\mathbb{E}[N X_0^{\xi,\bar u_\alpha}] = \alpha$. By the arbitrariness of $\alpha$, the mapping $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is surjective.
Next, we prove the necessity.
Suppose by contradiction that $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is surjective but the inequality (33) does not hold. Then, there is a sequence $\{\lambda_n\}\subset\mathbb{R}$ with $|\lambda_n| = 1$ such that
\[ \lim_{n\to\infty}\mathbb{E}\Big[\int_0^T|B(t)^\top\bar Y_t^{\lambda_n}|^2\,dt\Big] = 0. \]
Without loss of generality, assume that $\lambda_n$ converges to some $\bar\lambda\in\mathbb{R}$ with $|\bar\lambda| = 1$. Then,
\[ \mathbb{E}\Big[\int_0^T|B(t)^\top\bar Y_t^{\bar\lambda}|^2\,dt\Big] = 0. \]
Therefore, $B(t)^\top\bar Y_t^{\bar\lambda} = 0$ for a.e. $t\in[0,T]$, a.s. Since $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is surjective, for any $\alpha\in\mathbb{R}$, there is a control $u$ such that $\mathbb{E}[N X_0^{\xi,u}] - \alpha = 0$.
Then, from (26)
\[ \bar Y_T^{\bar\lambda} = N^\top\bar\lambda\,e^{-\int_0^T[A(s) + \frac{1}{2}C^2(s)]\,ds - \int_0^T C(s)\,dW_s}. \]
Let $\Phi = e^{-\int_0^T[A(s) + \frac{1}{2}C^2(s)]\,ds - \int_0^T C(s)\,dW_s}$; then,
\[ 0 = \mathbb{E}\big[\langle N X_0^{\xi,u} - \alpha,\ \bar\lambda\rangle\big] = \mathbb{E}\big[\langle\bar Y_T^{\bar\lambda},\xi\rangle\big] - \mathbb{E}\Big[\int_0^T\langle u_t,\ B(t)^\top\bar Y_t^{\bar\lambda}\rangle\,dt\Big] - \langle\alpha,\bar\lambda\rangle = \mathbb{E}\big[\langle\bar Y_T^{\bar\lambda},\xi\rangle\big] - \langle\alpha,\bar\lambda\rangle = \mathbb{E}\big[\langle N^\top\bar\lambda\,\Phi,\ \xi\rangle\big] - \langle\alpha,\bar\lambda\rangle. \]
By the arbitrariness of α , we have λ ¯ = 0 , contradicting | λ ¯ | = 1 . This completes the proof. □
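As a concrete illustration of condition (33) (under the sign conventions used in the reconstruction above), consider the scalar data of the example in Section 6: $A = 0.03$, $B = 0.2$, $C = 0$, $N = 1$, $m = n = 1$. Then $\bar Y_t^\lambda = \lambda e^{-0.03t}$ and
\[ \mathbb{E}\Big[\int_0^T |B(t)^\top\bar Y_t^\lambda|^2\,dt\Big] = 0.04\,\lambda^2\int_0^T e^{-0.06t}\,dt = \frac{2}{3}\big(1 - e^{-0.06T}\big)\,\lambda^2, \]
so (33) holds with $c = \frac{2}{3}(1 - e^{-0.06T}) > 0$ and, by Theorem 3, the constraint map $u\mapsto\mathbb{E}[N X_0^{\xi,u}]$ is surjective in that example.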

6. An Illustrative Example

Here is a simple practical application problem to support our theory; it demonstrates the practical value of the theoretical results in Section 3, Section 4 and Section 5. We consider a backward portfolio management problem with an equality constraint on the expectation of the initial state:
\[ \min\ J(u) = \mathbb{E}\Big[\int_0^T |u(t)|^2\,dt\Big], \qquad \mathbb{E}[X_0] = b, \]
under the following backward stochastic linear equation in terms of ( X t , Z t ) controlled by u:
\[ dX_t = \big(0.03\,X_t + 0.2\,u(t)\big)\,dt + Z_t\,dW_t, \qquad X_T = 50. \]
Here, $u(t)$ is the investment strategy adjustment variable at time $t\in[0,T]$ (such as the adjustment of the capital allocation proportion), which is adapted to the filtration generated by the standard Brownian motion $W$ and subject to the constraint that the expected initial portfolio value equals $b$; $X_t$ represents the value of the investment portfolio; and $Z_t$ is the market fluctuation factor (such as the fluctuation of a market index). Among the other parameters, 0.03 represents the expected growth rate of the investment portfolio itself, and 0.2 represents the impact coefficient of the investment strategy adjustment on the value of the investment portfolio.
Then, according to the discussions in the previous sections, there is a corresponding Riccati ODE in terms of P,
\[ \dot P(t) - 0.06\,P(t) + 0.04 = 0, \qquad P(T) = 0, \]
an ODE in terms of Y ¯ t ,
\[ \dot{\bar Y}_t + 0.03\,\bar Y_t = 0, \qquad \bar Y_0 = \lambda, \]
and an ODE in terms of h t ,
\[ \dot h_t - 0.03\,h_t = 0, \qquad h_T = 50. \]
Clearly, we have explicit solutions, respectively, for t [ 0 , T ] ,
\[ P(t) = \frac{2}{3} - \frac{2}{3}\,e^{0.06(t-T)}, \qquad \bar Y_t = \lambda\,e^{-0.03t}, \qquad h_t = 50\,e^{0.03(t-T)}. \]
Then, we can solve the dual problem
\[ \sup_{\lambda\in\mathbb{R}} d(\lambda) = \sup_{\lambda\in\mathbb{R}}\Big\{-\frac{1}{2}\Big(\frac{2}{3} - \frac{2}{3}\,e^{-0.06T}\Big)\lambda^2 + \big(50\,e^{-0.03T} - b\big)\lambda\Big\}, \]
whose unique optimal solution is
\[ \bar\lambda = \frac{50\,e^{-0.03T} - b}{\frac{2}{3} - \frac{2}{3}\,e^{-0.06T}}. \]
Therefore, the optimal control has the explicit representation
\[ \bar u(t) = 0.2\,\bar\lambda\,e^{-0.03t}, \qquad t\in[0,T]. \]
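For completeness, here is a small script (an illustrative sketch, not taken from the paper) that evaluates the closed-form quantities above for a given pair (b, T), under the sign conventions adopted in the reconstructed formulas; it is not claimed to reproduce the exact figures of Table 1.
```python
import numpy as np

# Example data: growth rate A, control coefficient B, terminal value XI.
A, B, XI = 0.03, 0.2, 50.0

def P(t, T):
    # Riccati solution P(t) = 2/3 - (2/3) exp(0.06 (t - T)).
    return 2.0 / 3.0 - (2.0 / 3.0) * np.exp(2.0 * A * (t - T))

def h(t, T):
    # h_t = 50 exp(0.03 (t - T)).
    return XI * np.exp(A * (t - T))

def lambda_bar(b, T):
    # Optimal multiplier: (50 exp(-0.03 T) - b) / P(0).
    return (h(0.0, T) - b) / P(0.0, T)

def u_bar(t, b, T):
    # Optimal feedback control u(t) = 0.2 * lambda_bar * exp(-0.03 t).
    return B * lambda_bar(b, T) * np.exp(-A * t)

if __name__ == "__main__":
    b, T = 40.0, 5.0          # one of the (b, T) pairs considered in Table 1
    ts = np.linspace(0.0, T, 6)
    print(f"lambda_bar = {lambda_bar(b, T):.4f}")
    print("u_bar(t)  =", np.round(u_bar(ts, b, T), 4))
```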
Table 1 shows optimization results under different parameter combinations.
Economic Explanation:
(i) Time Effect: The optimal control intensity decays exponentially over time (factor e^{-0.03t}), and the optimal multiplier λ̄ decreases as the investment horizon T increases, which aligns with the practical understanding that long-term investments should reduce trading frequency.
(ii) Risk Budgeting: The exponential decay characteristic of P ( t ) indicates that the system allocates a decreasing risk budget over time, providing a quantitative tool for dynamic risk control.
(iii) Constraint Effectiveness: By setting the initial expectation b, investors can precisely control the statistical properties of the portfolio value (see Table 1).
(iv) Computational Superiority: The explicit solution avoids iterative numerical optimization, providing a real-time strategy generation method for high-frequency trading scenarios.
(v) Scalability: The model parameters (0.03 growth rate, 0.2 control coefficient) can be extended to time-varying functions, adapting to complex market environments.
The expected-value equality constraint E[X_0] = b indicates that the expected value of the initial investment portfolio is b. It is an initial reference point set by the investor and is used to measure whether the starting point of the investment portfolio meets expectations. The variance of the portfolio value reflects the extent to which it deviates from the expected path due to factors such as market fluctuations and uncertainties in the investment strategy. The larger the variance, the higher the uncertainty of the portfolio value and the greater the investment risk. Generally, investors hope to minimize this variance at a given expected return level. This leads to solving the following stochastic LQ problem with an initial state constraint:
\[
\begin{aligned}
\min\ \ & \operatorname{Var}(X_T) = \mathbb{E}\big[X_T^2\big] - \big(\mathbb{E}[X_T]\big)^2,\\
\text{subject to}\ \ & u\in L^2_{\mathbb{F}}(0,T;\mathbb{R}^m),\ (X_t,u) \text{ satisfies the CBSLQ problem (6.1)}.
\end{aligned}
\]
We show in Figure 1 the efficient frontier of the above portfolio management problem for T = 1 and ξ = 50 . The efficient frontier graph illustrates the trade-off relationship between the initial expected return b and terminal risk (standard deviation), exhibiting a characteristic convex decreasing curve (consistent with high-risk-high-return theory), which validates the core theoretical proposition of the model: precise risk-return management can be achieved through backward stochastic control. The purple point on the graph represents the maximum return point, while the shaded area denotes the infeasible region where no strategy can simultaneously deliver higher returns and lower risk.
This example visually confirms three key values of the model through the efficient frontier: (1) the initial constraint b enables precise control of the investment starting point; (2) dynamic strategies can generate Pareto-optimal paths; (3) parameters (e.g., the 0.2 adjustment coefficient) possess clear economic significance. In practical applications, this framework provides quantitative tools for structured product design and dynamic pension fund allocation.

7. Conclusions

This paper studies a backward stochastic LQ problem with linear equality constraints on the initial state expectation (CBSLQ). By use of the Lagrange multiplier method and maximum principle, we give an explicit solution to an equivalent unconstrained parameterized BSLQ problem when the cost function is uniformly convex, which is a feedback-optimal control determined by an adjoint SDE, a Riccati-type ODE, a BSDE, and an equality. Under the surjectivity of the linear constraint, we prove the equivalence between the original problem and the dual problem by Lagrange duality theory, thereby obtaining the optimal control for the original CBSLQ. Finally, an illustrative example regarding an investment portfolio is given. This study of the CBSLQ will provide some reference for financial investment or industry control.

Author Contributions

Conceptualization, Y.Z. and Y.L.; methodology, Y.L. and J.L.; software, Y.L.; validation, J.L.; formal analysis, Y.L.; writing—original draft, Y.L.; data curation, Y.L.; writing—review and editing, Y.Z.; supervision, Y.Z.; project administration, Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This document is the result of a research project funded by the National Natural Science Foundation of China (11861025), Guizhou OKZYD[2022]4055, Natural Science Research Project of Guizhou Provincial Department of Education (QJJ[2023]011), Guizhou Provincial QKHPTRC-BQW [2024]015.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yong, J.; Zhou, X.Y. Stochastic Controls: Hamiltonian Systems and HJB Equations; Springer: New York, NY, USA, 1999. [Google Scholar]
  2. Pham, H. Continuous-Time Stochastic Control and Optimization with Financial Applications; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  3. Peng, S. Backward stochastic differential equation, nonlinear expectation and their applications. In Proceedings of the International Congress of Mathematicians, Hyderabad, India, 19–27 August 2010; Volume I, pp. 393–432. [Google Scholar]
  4. Zhang, J. Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory; Springer: New York, NY, USA, 2017. [Google Scholar]
  5. Lim, A.E.B.; Zhou, X.Y. Linear-quadratic control of backward stochastic differential equations. SIAM J. Control Optim. 2001, 40, 450–474. [Google Scholar] [CrossRef]
  6. Huang, J.; Wang, S.; Wu, Z. Backward mean-field linear-quadratic-Gaussian (LQG) games: Full and partial information. IEEE Trans. Automat. Control 2016, 61, 3784–3796. [Google Scholar] [CrossRef]
  7. Wang, G.; Xiao, H.; Xiong, J. A kind of LQ non-zero sum differential game of backward stochastic differential equation with asymmetric information. Automatica 2018, 97, 346–352. [Google Scholar] [CrossRef]
  8. Li, X.; Sun, J. Linear quadratic optimal control problems for mean field backward stochastic differential equations. Appl. Math. Optim. 2017, 80, 223–250. [Google Scholar] [CrossRef]
  9. Sun, J.; Wang, H. Linear-quadratic optimal control for backward stochastic differential equations with random coefficients. ESAIM Control Optim. Calc. Var. 2021, 27, 46. [Google Scholar] [CrossRef]
  10. Zhang, D. Backward linear-quadratic stochastic optimal control and nonzero-sum differential game problem with random jumps. J. Syst. Sci. Complex. 2011, 24, 647–662. [Google Scholar] [CrossRef]
  11. Chen, Y.; Luo, P. Turnpike properties for stochastic backward linear-quadratic optimal problems. Mathematics 2023. submitted. [Google Scholar]
  12. Du, K.; Huang, J.; Wu, Z. Linear quadratic mean-field-game of backward stochastic differential systems. Math. Control Relat. Fields 2018, 8, 653–678. [Google Scholar] [CrossRef]
  13. Wu, F.; Xiong, J.; Zhang, X. Zero-sum stochastic linear-quadratic Stackelberg differential games with jumps. Appl. Math. Optim. 2024, 89, 29. [Google Scholar] [CrossRef]
  14. Achim, K. Die Messung des Sozialstaates: Beschäftigungspolitische Unterschiede Zwischen Brutto-und Nettosozialleistungsquote; Working Paper from Center for Social Policy Studies; University of Munich: Munich, Germany, 2001; p. 34. Available online: https://nbn-resolving.org/urn:nbn:de:0168-ssoar-115013 (accessed on 1 March 2025).
  15. Lim, A.E.B.; Zhou, X.Y. Stochastic optimal LQR control with integral quadratic constraints and indefinite control weights. IEEE Trans. Automat. Control 1999, 44, 1359–1369. [Google Scholar] [CrossRef]
  16. Sun, J. Linear quadratic optimal control problems with fixed terminal states and integral quadratic constraints. Appl. Math. Optim. 2021, 83, 251–276. [Google Scholar] [CrossRef]
  17. Chen, X.; Zhou, X.Y. Stochastic linear-quadratic control with conic control constraints on an infinite time horizon. SIAM J. Control Optim. 2004, 43, 1120–1150. [Google Scholar] [CrossRef]
  18. Hu, Y.; Zhou, X.Y. Constrained stochastic LQ control with random coefficients, and application to portfolio selection. SIAM J. Control Optim. 2005, 44, 444–466. [Google Scholar] [CrossRef]
  19. Hu, Y.; Shi, X.; Xu, Z.Q. Constrained stochastic LQ control with regime switching and application to portfolio selection. Ann. Appl. Probab. 2022, 32, 426–460. [Google Scholar] [CrossRef]
  20. Ying, H.; Sun, X.M.; Quan, Z.X. Constrained stochastic LQ control on infinite time horizon with regime switching. ESAIM Control Optim. Calc. Var. 2022, 28, 24. [Google Scholar]
  21. Wu, W.; Gao, J.; Lu, J.-G.; Li, X. On continuous-time constrained stochastic linear-quadratic control. Automatica 2020, 114, 108809. [Google Scholar] [CrossRef]
  22. Klaus, K.; Neitzel, I.; Rösch, A. Sufficient optimality conditions for the Moreau-Yosida-type regularization concept applied to semilinear elliptic optimal control problems with pointwise state constraints. Ann. Acad. Rom. Sci. Ser. Math. Its Appl. 2010, 2, 222–246. [Google Scholar]
  23. Feng, X.; Hu, Y.; Huang, J. Backward Stackelberg differential game with constraints: A mixed terminal-perturbation and linear-quadratic approach. SIAM J. Control Optim. 2022, 60, 1488–1518. [Google Scholar] [CrossRef]
  24. Zhang, H.; Zhang, X.F. Stochastic linear quadratic optimal control problems with expectation-type linear equality constraints on the terminal states. Syst. Control Lett. 2023, 177, 105551. [Google Scholar] [CrossRef]
  25. Zhang, H.; Zhang, X. Lagrangian dual method for solving stochastic linear quadratic optimal control problems with terminal state constraints. ESAIM Control Optim. Calc. Var. 2024, 30, 22. [Google Scholar] [CrossRef]
  26. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000. [Google Scholar]
  27. Ma, J.; Protter, P.; Yong, J. Solving forward-backward stochastic differential equations explicitly: A four-step scheme. Probab. Theory Relat. Fields 1994, 98, 339–359. [Google Scholar] [CrossRef]
  28. Ma, J.; Yong, J. Forward-Backward Stochastic Differential Equations and Their Applications; Lecture Notes in Mathematics; Springer: New York, NY, USA, 1999; Volume 1702. [Google Scholar]
  29. Glowinski, R.; Lions, J.-L. Exact and approximate controllability for distributed parameter systems. Acta Numer. 1994, 3, 269–378. [Google Scholar] [CrossRef]
Figure 1. Efficient Frontier.
Table 1. Optimization results under different parameter combinations.

Parameter Combination (b, T)   Optimal λ̄   Terminal Variance Var(X_T)   Control Cost ∫_0^T E[u²] dt
(40, 5)                        15.28        6.85                         18.42
(45, 8)                        9.67         11.23                        12.56
(48, 10)                       6.34         15.07                        8.91