Abstract
A class of stochastic linear-quadratic (LQ) dynamic optimization problems involving a large population is investigated in this work. Here, the agents cooperate with each other to minimize a certain social cost. Furthermore, unlike the classic social optima literature, the dynamics in this framework are driven by anticipated backward stochastic differential equations (ABSDEs), in which the terminal condition, rather than the initial condition, is specified and anticipated terms are involved. The individual admissible controls are constrained to closed convex subsets, and common noise is considered. As a result, the related social cost is represented by a recursive functional in which the initial state is involved. By virtue of the so-called anticipated person-by-person optimality principle, a decentralized strategy is derived. It is based on a new class of consistency condition systems, which are mean-field-type anticipated forward-backward stochastic differential delay equations (AFBSDDEs). The well-posedness of this consistency condition system is obtained through a discounting decoupling method. Finally, the corresponding asymptotic social optimality is proved.
Keywords:
asymptotic social optima; anticipated person-by-person optimality; initially mixed-coupled AFBSDDE; LQ recursive control; mean-field team
MSC:
91A07; 91A15; 93E03; 93E20
1. Introduction
Large population problems are widely investigated in many areas, such as engineering, biology, finance, economics, and physics. Due to the highly complicated coupling structure, it is infeasible and ineffective to obtain classical centralized strategies by combining the exact dynamics of all agents in a large population problem. As a substitute, it is more effective and tractable to investigate the corresponding decentralized strategies, which take into account only each agent's own individual dynamics (information) and some off-line quantities. In this research direction, many researchers have focused their efforts on mean field (MF) game studies. Interested readers can refer to [1,2,3,4,5,6,7] and the references therein.
In contrast to the above works, cooperative optimization problems have attracted much attention in the last ten years, including the so-called social optima problems. Readers can refer to [8,9,10,11,12,13,14,15,16] for related research. For more research on and applications of MF social optima problems, readers can refer to [17,18,19].
It is known that both competitive behaviors and cooperative trades are influenced by the time factor. In many real cases, the evolution of a controlled system depends not only on the present state or decision policy at time t, but also on the past trajectory over a historical interval $[t-\delta, t]$, where $\delta > 0$ denotes a lag. As a result, a controlled system may contain a state delay, a control delay, or both. Therefore, there are various practical problems in which the systems are driven by stochastic differential delay equations (SDDEs).
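As a simple illustration of such dynamics (the coefficients A, Ā, B, B̄, σ and the lag δ below are generic placeholders, not the coefficients of this paper), a controlled SDDE with both state and control delay may take the form
\[
dx(t) = \big(A\,x(t) + \bar{A}\,x(t-\delta) + B\,u(t) + \bar{B}\,u(t-\delta)\big)\,dt + \sigma\,dW(t), \qquad t \in [0,T],
\]
supplemented with prescribed initial paths $x(t) = \xi(t)$ and $u(t) = \eta(t)$ for $t \in [-\delta, 0]$.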
In recent years, the study of SDDEs has attracted the attention of many researchers; see, e.g., [20,21,22]. The backward structure of the dynamics makes our framework different from the existing works on MF LQ teams, wherein the individual states are driven by SDEs. Unlike for SDEs, the terminal condition of a BSDE is specified a priori, instead of an initial condition. As a result, the BSDE admits an adapted solution pair. Linear BSDEs were first introduced in [23], and general nonlinear BSDEs were first introduced in [24]. For more developments of BSDEs, such as various applications in mathematical finance, readers can refer to [25,26,27], and to [28,29,30,31,32,33,34] for optimal control and game theory.
A new type of BSDE, called the anticipated BSDE (ABSDE), was introduced by Peng and Yang [35], where a duality between SDDEs and ABSDEs was also established. In an ABSDE, the generator depends not only on the present values of the solution but also on future ones. Such anticipation arises in many real-world phenomena. Readers can refer to [36,37,38,39,40] and the references therein for more details on SDDE theory and its broad applications.
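For orientation, the ABSDE of [35] can be written in the following general form (generic anticipation maps $\delta(\cdot)$ and $\zeta(\cdot)$ are used here; the specific linear coefficients of this paper appear in the state equation (1) below):
\[
\begin{cases}
-\,dY(t) = f\big(t, Y(t), Z(t), Y(t+\delta(t)), Z(t+\zeta(t))\big)\,dt - Z(t)\,dW(t), & t \in [0,T],\\
Y(t) = \xi(t), \quad Z(t) = \eta(t), & t \in [T, T+K],
\end{cases}
\]
where the generator may depend on future values of the solution (typically through conditional expectations given the current information), and the terminal data $(\xi, \eta)$ are prescribed on the whole interval $[T, T+K]$.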
In this paper, we investigate an LQ mean-field social optimization problem in which the individual dynamics are anticipated BSDEs, with input constraints, partial information, and common noise. In the aforementioned papers concerning LQ control problems, the control is unconstrained, and the (feedback) control is constructed through either the dynamic programming principle (DPP) or the stochastic maximum principle (SMP), and is automatically admissible. However, if we impose constraints on the admissible controls, this LQ approach no longer applies directly (see, e.g., [41,42]). For more applications in finance and economics, please refer to [4,43]. In addition, a partial information structure is introduced. Due to the common noise, the limiting process turns out to be stochastic.
The rest of this paper is organized as follows: We formulate the problem in Section 2. We derive the auxiliary control problem with an input constraint and partial information in Section 3. In Section 4, the consistency condition system and its well-posedness are established using the discounting method. Section 5 focuses on the asymptotic optimality of the decentralized strategy. In Section 6, we conclude the paper.
2. Problem Formulation
Consider a finite time horizon for fixed . Assume that is a complete filtered probability space satisfying the usual conditions and is a -dimensional Brownian motion in this space. Let be the filtration generated by and augmented by (the class of all -null sets of ). Let be the augmentation of by . Let be the augmentation of by . Let be a given fixed time horizon. denotes the expectation under , and denotes the conditional expectation. Define for .
Let denote the standard Euclidean inner product. denotes the transpose of a vector (or matrix) x. For a matrix , we define the norm . denotes the set of symmetric matrices with real elements. denotes that , which is positive (semi)definite, while denotes that, for some , . Let be a given Hilbert space. Denote any times valued in by
- is -adapted, continuous, such that
In this work, we study an LQ large population system with K-type discrete heterogeneous agents . The dynamics of the agents are driven by a system of linear ABSDEs with mean-field coupling: that is, for
where denotes the state-average of the agents. The number is a parameter of the agent to model a heterogeneous population, and we assume that takes a value in a finite set . We call a k-type agent if . In this paper, we are interested in the asymptotic behavior as N tends to infinity. For , introduce
where is the cardinality of the index set . For , let ; then is a probability vector representing the empirical distribution of . Let be nonempty closed convex sets. We introduce the following assumptions:
- (A1)
- There exists a probability mass vector such that ,
- (A2)
- , , , . If , and are identically distributed and the common distribution is denoted by .
- (A3)
- (), , .
It follows that, under (A1)–(A3), the state equation in (1) admits a unique solution for all . In fact, if we denote
then (1) can be rewritten as
which is a vector-valued linear ABSDE and admits a unique solution (see [4,35]). Thus, for any , the state equation (1) admits a unique solution .
Let be the set of strategies of all N agents and , . The cost functional for , , is given by
The aggregate team functional of N agents is
We impose the following assumptions on the coefficients of the cost functionals:
- (A4)
- , , , , , , (), , , .
For , the centralized admissible strategy set for the agent is given by
Correspondingly, the decentralized admissible strategy set for the agent is given by
We propose the following optimization problem:
Problem (SO-IC-PI). Find a strategy set where , , such that
Definition 1.
A strategy , is an ε-social decentralized optimal strategy if there exists , such that
Now, we briefly outline the procedure for studying Problem (SO-IC-PI). Firstly, with the help of the anticipated person-by-person optimality principle and a variational technique, we obtain an auxiliary LQ anticipated control problem. Then, the stochastic maximum principle ([35,36], etc.) is applied to derive the optimal control of the auxiliary problem. Secondly, we establish and investigate the consistency condition system using the discounting method (e.g., [4,44]) to determine the frozen MF term and the off-line variables. Thirdly, by virtue of standard estimates for the AFBSDDE ([35,36,39]), we verify that the decentralized strategy is asymptotically optimal among the centralized strategies.
3. Stochastic Optimal Control Problem for the Agents
In this section, we solve the optimal control problem and derive the decentralized control.
3.1. Backward Person-by-Person Optimality
Let be the centralized optimal strategy of all agents. Now, consider the perturbation in which the agent uses the strategy while all the other agents still apply the strategies . The realized states (1) corresponding to and are denoted by and , respectively. For , denote the perturbation
Therefore, the variation in the state for is given by
and for , ,
For , define , thus
where denotes the indicator function, that is, . Through some elementary calculations, we can further obtain the variation in the cost functional of , as follows:
For , the variation in the cost functional of is given by
Therefore, by combining the above equalities, the variation in the social cost satisfies
Step I: First, we replace and in (8) with the mean-field terms and , which will be determined later,
where
and
Step II: Next, for , introduce the limit to replace , and for , introduce the limit to replace , where
Therefore,
where
Step III: Finally, we introduce the following adjoint equations and as
Applying Itô’s formula to , we have
For , integrating from 0 to T and taking expectation, we obtain
Similarly, we have
Recall that , for , then
Similarly,
Letting
substituting (11) and (12) into (10), we have
where
and
In the following, when considering the expectations, we will use to denote the process defined in (13) of the representative agent of type k. Now, we introduce the decentralized auxiliary cost functional with perturbation as
3.2. Decentralized Strategy
Motivated by (14), we introduce the following auxiliary anticipated backward LQG control problem:
Problem (MFG-IC-PI). Minimize over subject to
where
and will be determined by the consistency condition in the following section.
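Schematically, and in notation that we introduce here for illustration only, the consistency requirement in the presence of common noise is a fixed-point condition on the frozen mean-field term: if $\check{y}^{(k)}$ denotes the optimal state of a representative k-type agent when the limiting terms are frozen, then one asks that
\[
\bar{m}(t) \;=\; \sum_{k=1}^{K} \pi_k\, \mathbb{E}\big[\check{y}^{(k)}(t) \,\big|\, \mathcal{F}^{W_0}_t\big], \qquad t \in [0,T],
\]
so that the frozen term coincides with the (conditional) limit of the state-average it replaces; because of the common noise $W_0$, this limit is itself a stochastic process.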
Similarly to [29,35], we will apply a stochastic maximum principle to study Problem (MFG-IC-PI). First, introduce the following first-order adjoint equation:
The global stochastic maximum principle implies that
where is the projection mapping from to its closed convex subset under the norm . The related Hamiltonian system becomes
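For the reader's convenience, we recall the abstract projection characterization that underlies optimality conditions of this type (a standard fact for minimization over a closed convex set, stated here in generic form rather than with the specific Hamiltonian of this paper): if $\hat{u}$ minimizes a differentiable convex functional $u \mapsto \mathcal{H}(u)$ over a nonempty closed convex subset $\Gamma$ of a Hilbert space, then
\[
\langle \nabla \mathcal{H}(\hat{u}),\, v - \hat{u} \rangle \;\ge\; 0 \quad \text{for all } v \in \Gamma
\qquad \Longleftrightarrow \qquad
\hat{u} \;=\; \mathbf{P}_{\Gamma}\big[\hat{u} - \rho\, \nabla \mathcal{H}(\hat{u})\big] \ \text{ for any fixed } \rho > 0,
\]
where $\mathbf{P}_{\Gamma}$ is the metric projection onto $\Gamma$. In the constrained LQ setting, this is what produces the projected form of the decentralized control.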
4. Consistency Condition
Theorem 1.
Let (A1)–(A4) hold. The parameters in Problem (MFG-IC-PI) are determined by
where is the solution of the following MF-AFBSDDE, which is the so-called consistency condition system: for ,
In the following, we will use the discounting method of [4,44] to study the global solvability of MF-AFBSDDE (18). To start, we first give some results for the general nonlinear forward-backward system
where the coefficients satisfy the following conditions
- (A5)
- There exist and positive constants such that for t and all coefficients
- (A6)
Let be a Hilbert space. Recall that denotes the space of valued progressively measurable processes such that . Then, for , we define an equivalent norm on :
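A typical choice in the discounting method (see, e.g., [4,44]) is the exponentially weighted norm recorded below; we state it as an indication of the form of the norm, since the precise weight used in the paper may differ:
\[
\|v\|_{\lambda}^{2} \;:=\; \mathbb{E}\int_{0}^{T} e^{\lambda t}\, |v(t)|^{2}\, dt, \qquad \lambda \in \mathbb{R},
\]
which is equivalent to the usual $L^{2}$ norm on $[0,T]$, since $e^{-|\lambda| T}\,\|v\|_{0}^{2} \le \|v\|_{\lambda}^{2} \le e^{|\lambda| T}\,\|v\|_{0}^{2}$.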
Define
Then, we have the following theorem:
Theorem 2.
Suppose that assumptions (A5) and (A6) hold. Then, there exists a , which depends on , for such that when for , there exists a unique adapted solution to the MF-AFBSDDE (19). Further, if , there exists a , which depends on , for and is independent of T, such that when for , there exists a unique adapted solution to the MF-AFBSDDE (19).
Before proving Theorem 2, we carry out some preparatory work. For any given , the backward equation in the MF-AFBSDDE (19) admits a unique solution . Thus, we introduce a map , through
The well-posedness of (20) under assumptions (A5) and (A6) can be established as in [35,45,46]; the proof is omitted here. Moreover, we have .
Lemma 1.
Let be the solution of (20) corresponding to , , respectively. Then, for all and the constants , we have
where , and . We also have that
where solves the equation . Moreover,
and
Specifically, if ,
Proof.
Denote . Applying Itô’s formula to , we obtain
It follows that
Notice that
By noticing (26), we obtain (21).
Similarly, applying Itô’s formula to for , we have
From the above estimates and (27), one can prove (22). We mention that there indeed exists a solving the aforementioned equation. In fact, from this equation, it follows that
Noticing that and are positive, it follows that is an increasing continuous function satisfying and . Therefore, for given and , there exists a unique such that (28) holds, by the intermediate value theorem.
Similarly, for a given , the forward equation in the MF-AFBSDDE (19) admits a unique solution . Thus, we introduce a map , through
Indeed, the well-posedness of (29) can be established by applying the contraction mapping method under assumptions (A5) and (A6), although the term is involved. We omit the proof here. From the Burkholder–Davis–Gundy inequality and the theory of SDDEs, it follows that .
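The version of the Burkholder–Davis–Gundy inequality used here is the standard second-moment case: for any progressively measurable integrand $\phi$ with $\mathbb{E}\int_0^T |\phi(s)|^2\,ds < \infty$, there is a universal constant $C > 0$ such that
\[
\mathbb{E}\Big[\sup_{0 \le t \le T}\Big|\int_{0}^{t} \phi(s)\, dW(s)\Big|^{2}\Big] \;\le\; C\, \mathbb{E}\int_{0}^{T} |\phi(s)|^{2}\, ds,
\]
which, combined with Gronwall-type arguments, yields the moment estimates for delayed equations invoked above.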
Lemma 2.
Let be the solution of (29) corresponding to , , respectively. Then, for all and two constants , we have
where , and . We also have
where solves the equation Moreover,
Proof.
Denote . Applying Itô’s formula to , we obtain
It follows that
Notice that
and
By noticing (33), we obtain (30).
Similarly, for which solves the equation , applying Itô’s formula to for , we have
From the above estimates and (34), one can prove (31). We note that there indeed exists a solving the aforementioned equation. In fact, from this equation, it follows that
Noticing that , , and are positive, it follows that is an increasing continuous function satisfying and . Therefore, for given and , there exists a unique such that (35) holds, by the intermediate value theorem.
Now, we present the proof of Theorem 2.
Proof of Theorem 2.
Define . Note that is defined by (20) and is defined by (29); therefore, maps into itself. To prove the theorem, we only need to show that is a contraction mapping for some equivalent norm . For , let and ; from (23), (24) and (32), we have
Recall that satisfies , and satisfies . By choosing a suitable (e.g., choosing , and large enough such that and ), the first assertion of Theorem 2 is obtained.
If , we can choose and a sufficiently large such that
In fact, recall that satisfies , and satisfies . Denote and , then
and
respectively (concerning the existence of , , see the proofs of Lemmas 1 and 2). Then, it is obvious that and . Thus,
and by recalling that , one can choose sufficiently large such that . Therefore, by recalling and , we can choose suitable (e.g., ) such that
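In abstract terms, the argument above is an application of Banach's fixed-point theorem: writing $\Phi$ for the composition of the two maps defined through (20) and (29), once one exhibits an equivalent norm $\|\cdot\|_{\lambda}$ and a constant $\rho \in (0,1)$ (our notation) such that
\[
\|\Phi(v_{1}) - \Phi(v_{2})\|_{\lambda} \;\le\; \rho\, \|v_{1} - v_{2}\|_{\lambda} \qquad \text{for all admissible } v_{1}, v_{2},
\]
the map $\Phi$ admits a unique fixed point, which gives the unique adapted solution of (19).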
Theorem 3.
Suppose that ; then, there exists a , which depends on , , and is independent of T, such that when , there exists a unique adapted solution to the consistency condition system (17).
5. Asymptotic ε-Optimality
We start this section with some estimates, which play an important role in asymptotic optimality.
5.1. Agent Perturbation
Let be the decentralized strategy given by
where
with given by Theorem 1.
Correspondingly, the realized state under the decentralized strategy satisfies
and .
Let us consider the case that the agent uses an alternative strategy , while the other agents use the strategy . The realized state with the i-th agent’s perturbation is
where . For , denote the perturbation
Similarly to the computations in Section 3.1, we have
where
First, we need some estimates. In the proofs, L will denote a constant whose value may change from line to line. Similarly to the proof of Lemma 5.1 in [4], by virtue of estimates for the AFBSDDE, we derive the following.
Lemma 3.
Let (A1)–(A4) hold. Then, there exists a constant L independent of N, such that
Similarly to Lemma 3, using the boundedness of and , we have
where L is a constant independent of N.
Lemma 4.
Let (A1)–(A4) hold. Then, there exists a constant L independent of N such that
where
Proof.
For , denote the k-type agent state average by , and further , thus
Notice that
we have
Using the Cauchy–Schwarz inequality, the Burkholder–Davis–Gundy inequality, and the estimates for ABSDEs, we have
By (A2), for , are independent and identically distributed. Note that ; thus, are independent and identically distributed. Then, we have
and
Therefore,
Using Gronwall's inequality, we have
Since,
and
we have
and
Therefore, the result follows from Gronwall's inequality. □
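For the reader's convenience, the two elementary facts used repeatedly in the estimates above are the following, stated in generic notation. If $\xi_{1}, \dots, \xi_{N}$ are independent, identically distributed, square-integrable and centered, then
\[
\mathbb{E}\Big|\frac{1}{N}\sum_{i=1}^{N} \xi_{i}\Big|^{2} \;=\; \frac{1}{N}\, \mathbb{E}|\xi_{1}|^{2},
\]
which is the source of the $O(1/N)$ rates; and Gronwall's inequality: if $0 \le f(t) \le a + b \int_{0}^{t} f(s)\, ds$ on $[0,T]$ with constants $a, b \ge 0$ and $f$ nonnegative and integrable, then $f(t) \le a\, e^{b t}$ for all $t \in [0,T]$.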
By Lemma 4, for , we can derive
Lemma 5.
Let (A1)–(A4) hold. Then, there exists a constant L independent of N such that
Proof.
Remark 1.
Lemma 6.
Let (A1)–(A4) hold. Then, there exists a constant L, independent of N, such that
and for , ,
Proof.
First,
and for ,
Therefore, it follows from the Burkholder–Davis–Gundy inequality that
Thus,
It then follows from Gronwall's inequality that
Similarly, we have (45). □
Applying the above estimates, together with the standard estimates for the AFBSDDE, (13), and (36), we obtain the following result.
Lemma 7.
Let (A1)–(A4) hold. Then, there exists a constant L independent of N such that
Proof.
It follows from (13) that
By the definition of , we have
where denotes the optimal state of the -type agent corresponding to (36), which satisfies
Recall the notation defined in the proof of Lemma 4. Noticing
we have
Notice
With the help of the proof of Lemma 4, one obtains
Then, (46) is obtained based on the above inequalities, the boundedness of , Lemma 3, Lemma 4, and Gronwall's inequality. □
5.2. Asymptotic Optimality
In order to prove asymptotic optimality, it suffices to consider the perturbations whose social cost does not exceed that of the decentralized strategy. It is easy to check that
where is a constant independent of N. Therefore, in the following, we only consider the perturbations satisfying
Let , and consider the perturbation .
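The reduction above is standard in the social optima literature: since the social cost of the decentralized strategy grows at most linearly in N, any candidate perturbation that improves upon it must have an aggregate $L^{2}$ norm of the same order. Schematically, and with the constant L and the notation chosen here for illustration (in line with, but not a quotation of, (47)), this restriction reads
\[
\sum_{i=1}^{N} \mathbb{E}\int_{0}^{T} |u_{i}(t)|^{2}\, dt \;\le\; L\, N .
\]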
Theorem 4.
Let (A1)–(A4) hold. Then, is a -optimal strategy for the agents.
Proof.
We denote by the variation of in the i-th direction. From (37), we know that exists and is continuous for each . Thus,
Here, is a higher-order infinitesimal of , and the norm of is given by
From (47), we know that , which yields that .
Therefore, in order to prove asymptotic optimality, we only need to show that
According to Section 5.1, we derive
It follows from the optimality of that
Moreover, by Lemmas 4–7 (recall that when calculating , one only needs to perturb one component of ), we have
Therefore,
□
6. Conclusions
A class of stochastic LQ dynamic optimization problems involving a large population was investigated in this work. Unlike in the well-studied MF games, the agents here cooperate to minimize a social cost, while the dynamics are driven by ABSDEs in the presence of an input constraint and partial information. By virtue of the so-called anticipated person-by-person optimality, we solved an auxiliary control problem and derived a decentralized social strategy based on an MF-type consistency condition system. We also established the well-posedness of this MF-AFBSDDE using the discounting decoupling method. Finally, the related asymptotic social optimality was verified.
Funding
This research was funded by the National Key R&D Program of China (Nos. 2023YFA1009203, 2022YFA1006104), the Taishan Scholars Young Program of Shandong (No. TSQN202211032), and the Young Scholars Program of Shandong University.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).
Conflicts of Interest
The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
References
- Bensoussan, A.; Sung, K.; Yam, S.; Yung, S. Linear-quadratic mean field games. J. Optim. Theory Appl. 2016, 169, 496–529. [Google Scholar] [CrossRef]
- Buckdahn, R.; Li, J.; Peng, S.; Rainer, C. Mean-field stochastic differential equations and associated PDEs. Ann. Probab. 2017, 45, 824–878. [Google Scholar] [CrossRef]
- Carmona, R.; Delarue, F. Probabilistic analysis of mean-field games. SIAM J. Control Optim. 2013, 51, 2705–2734. [Google Scholar] [CrossRef]
- Hu, Y.; Huang, J.; Nie, T. Linear-quadratic-Gaussian mixed mean-field games with heterogeneous input constraints. SIAM J. Control Optim. 2018, 56, 2835–2877. [Google Scholar] [CrossRef]
- Lasry, J.M.; Lions, P.L. Mean field games. Jpn. J. Math. 2007, 2, 229–260. [Google Scholar] [CrossRef]
- Nguyen, S.; Nguyen, D.; Yin, G. A stochastic maximum principle for switching diffusions using conditional mean-fields with applications to control problems. ESAIM Control Optim. Calc. Var. 2020, 26, 69. [Google Scholar] [CrossRef]
- Nourian, M.; Caines, P.E. ϵ-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents. SIAM J. Control Optim. 2013, 51, 3302–3331. [Google Scholar] [CrossRef]
- Arabneydi, J.; Mahajan, A. Team-optimal solution of finite number of mean-field coupled LQG subsystems. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 5308–5313. [Google Scholar]
- Du, K.; Wu, Z. Social optima in mean field linear-quadratic-Gaussian models with control input constraint. Syst. Control Lett. 2022, 162, 105174. [Google Scholar] [CrossRef]
- Huang, J.; Wang, B.; Yong, J. Social optima in mean field linear-quadratic-Gaussian control with volatility uncertainty. SIAM J. Control Optim. 2021, 59, 825–856. [Google Scholar] [CrossRef]
- Huang, M.; Caines, P.E.; Malhamé, R.P. Social optima in mean field LQG control: Centralized and decentralized strategies. IEEE Trans. Autom. Control 2012, 57, 1736–1751. [Google Scholar] [CrossRef]
- Huang, M.; Nguyen, S. Linear-Quadratic Mean Field Social Optimization with a Major Player. arXiv 2019, arXiv:1904.03346. [Google Scholar]
- Nie, T.; Wang, S.; Wu, Z. Linear-quadratic delayed mean-field social optimization. Appl. Math. Optim. 2024, 89, 4. [Google Scholar] [CrossRef]
- Wang, B.; Zhang, H.; Zhang, J. Mean field linear-quadratic control: Uniform stabilization and social optimality. Automatica 2020, 121, 109088. [Google Scholar] [CrossRef]
- Wang, B.; Zhang, J. Social optima in mean field linear-quadratic-Gaussian models with Markov jump parameters. SIAM J. Control Optim. 2017, 55, 429–456. [Google Scholar] [CrossRef]
- Wang, S. Rational expectations: An approach of anticipated linear-quadratic social optima. Int. J. Control 2024, 97, 2444–2466. [Google Scholar] [CrossRef]
- Barreiro-Gomez, J.; Duncan, T.; Tembine, H. Linear-quadratic mean-field-type games: Jump-diffusion process with regime switching. IEEE Trans. Autom. Control 2019, 64, 4329–4336. [Google Scholar] [CrossRef]
- Chen, Y.; Bušić, A.; Meyn, S.P. State estimation for the individual and the population in mean field control with application to demand dispatch. IEEE Trans. Autom. Control 2016, 62, 1138–1149. [Google Scholar] [CrossRef]
- Nuno, G.; Moll, B. Social optima in economies with heterogeneous agents. Rev. Econ. Dyn. 2018, 28, 150–180. [Google Scholar] [CrossRef]
- Mao, X. Stochastic Differential Equations and Their Applications; Horwood: New York, NY, USA, 1997. [Google Scholar]
- Øksendal, B.; Sulem, A. A maximum principle for optimal control of stochastic systems with delay, with applications to finance. Optimal Control and Partial Differential Equations 2001, 64–79. [Google Scholar]
- Øksendal, B.; Sulem, A. Optimal stochastic impulse control with delayed reaction. Appl. Math. Optim. 2008, 58, 243–255. [Google Scholar] [CrossRef]
- Bismut, J. An introductory approach to duality in optimal stochastic control. SIAM Rev. 1978, 20, 62–78. [Google Scholar] [CrossRef]
- Pardoux, E.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61. [Google Scholar] [CrossRef]
- Duffie, D.; Epstein, L. Stochastic differential utility. Econometrica 1992, 60, 353–394. [Google Scholar] [CrossRef]
- Karoui, N.E.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
- Karoui, N.E.; Peng, S.; Quenez, M.C. A dynamic maximum principle for the optimization of recursive utilities under constraints. Ann. Appl. Probab. 2001, 11, 664–693. [Google Scholar] [CrossRef]
- Feng, X.; Huang, J.; Wang, S. Social optima of backward linear-quadratic-Gaussian mean-field teams. Appl. Math. Optim. 2021, 84, S651–S694. [Google Scholar] [CrossRef]
- Huang, J.; Wang, G.; Xiong, J. A maximum principle for partial information backward stochastic control problems with application. SIAM J. Control Optim. 2009, 48, 2106–2117. [Google Scholar] [CrossRef]
- Huang, J.; Wang, S.; Wu, Z. Backward mean-field linear-quadratic-Gaussian (LQG) games: Full and partial information. IEEE Trans. Autom. Control 2016, 61, 3784–3796. [Google Scholar] [CrossRef]
- Li, X.; Sun, J.; Xiong, J. Linear quadratic optimal control problems for mean-field backward stochastic differential equations. Appl. Math. Optim. 2019, 80, 223–250. [Google Scholar] [CrossRef]
- Lim, A.E.; Zhou, X.Y. Linear-quadratic control of backward stochastic differential equations. SIAM J. Control Optim. 2001, 40, 450–474. [Google Scholar] [CrossRef]
- Wang, G.; Wu, Z. The maximum principle for stochastic recursive optimal control problems under partial information. IEEE Trans. Autom. Control 2009, 54, 1230–1242. [Google Scholar] [CrossRef]
- Xu, W. A maximum principle for optimal control for a class of controlled systems. J. Austral. Math. Soc. Ser. B 1996, 38, 172–181. [Google Scholar] [CrossRef]
- Peng, S.; Yang, Z. Anticipated backward stochastic differential equations. Ann. Probab. 2009, 37, 877–902. [Google Scholar] [CrossRef]
- Chen, L.; Wu, Z. Maximum principle for the stochastic optimal control problem with delay and application. Automatica 2010, 46, 1074–1080. [Google Scholar] [CrossRef]
- Huang, J.; Li, N. Linear-quadratic mean-field game for stochastic delayed systems. IEEE Trans. Automat. Control 2018, 63, 2722–2729. [Google Scholar] [CrossRef]
- Menoukeu-Pamen, O. Optimal control for stochastic delay system under model uncertainty: A stochastic differential game approach. J. Optim. Theory Appl. 2015, 167, 998–1031. [Google Scholar] [CrossRef]
- Yu, Z. The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls. Automatica 2012, 48, 2420–2432. [Google Scholar] [CrossRef]
- Zhang, S.; Xiong, J.; Shi, J. A linear-quadratic optimal control problem of stochastic differential equations with delay and partial information. Syst. Control Lett. 2021, 157, 105046. [Google Scholar] [CrossRef]
- Chen, X.; Zhou, X. Stochastic linear quadratic control with conic control constraints on an infinite time horizon. SIAM J. Control Optim. 2004, 43, 1120–1150. [Google Scholar] [CrossRef]
- Hu, Y.; Zhou, X. Constrained stochastic LQ control with random coefficients and applications to portfolio selection. SIAM J. Control Optim. 2005, 44, 444–466. [Google Scholar] [CrossRef]
- Li, X.; Zhou, X.; Lim, A. Dynamic mean-variance portfolio selection with no-shorting constraints. SIAM J. Control Optim. 2002, 40, 1540–1555. [Google Scholar] [CrossRef]
- Pardoux, E.; Tang, S. Forward-backward stochastic differential equations and quasilinear parabolic PDEs. Probab. Theory Related Fields 1999, 114, 123–150. [Google Scholar] [CrossRef]
- Buckdahn, R.; Li, J.; Peng, S. Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Process. Their Appl. 2009, 119, 3133–3154. [Google Scholar] [CrossRef]
- Darling, R.; Pardoux, E. Backward SDE with random terminal time and applications to semilinear elliptic PDE. Ann. Probab. 1997, 25, 1135–1159. [Google Scholar] [CrossRef]
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).