Games 2018, 9(4), 88; https://doi.org/10.3390/g9040088

Article
Mean-Field Type Games between Two Players Driven by Backward Stochastic Differential Equations
Department of Mathematics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
Received: 3 September 2018 / Accepted: 23 October 2018 / Published: 1 November 2018

Abstract:
In this paper, mean-field type games between two players with backward stochastic dynamics are defined and studied. They make up a class of non-zero-sum, non-cooperative differential games where the players’ state dynamics solve backward stochastic differential equations (BSDEs) that depend on the marginal distributions of player states. Players try to minimize their individual cost functionals, also depending on the marginal state distributions. Under some regularity conditions, we derive necessary and sufficient conditions for existence of Nash equilibria. Player behavior is illustrated by numerical examples, and is compared to a centrally planned solution where the social cost, the sum of player costs, is minimized. The inefficiency of a Nash equilibrium, compared to socially optimal behavior, is quantified by the so-called price of anarchy. Numerical simulations of the price of anarchy indicate how the improvement in social cost achievable by a central planner depends on problem parameters.
Keywords:
mean-field type game; non-zero-sum differential game; cooperative game; backward stochastic differential equations; linear-quadratic stochastic control; social cost; price of anarchy

1. Introduction

Mean-field type games (MFTGs) are a class of games in which payoffs and dynamics depend not only on the state and control profiles of the players, but also on the distribution of the state-control processes. MFTGs have by now a plethora of applications in the engineering sciences; see [1] and the references therein. This paper studies MFTGs between two players, with state-distribution dependent cost functionals $J^i : \mathcal{U}^i \to \mathbb{R}$, $i = 1, 2$, and mean-field BSDE state dynamics. The Nash solution $(\hat u^1_\cdot, \hat u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2$ is dictated by the pair of inequalities
$$J^1(\hat u^1_\cdot; \hat u^2_\cdot) \le J^1(u^1_\cdot; \hat u^2_\cdot), \quad \forall\, u^1_\cdot \in \mathcal{U}^1, \qquad J^2(\hat u^2_\cdot; \hat u^1_\cdot) \le J^2(u^2_\cdot; \hat u^1_\cdot), \quad \forall\, u^2_\cdot \in \mathcal{U}^2. \tag{1}$$
Following the path laid out in [2], we establish a Pontryagin-type maximum principle, yielding necessary and sufficient conditions for any pair of controls satisfying (1). Behavior in the equilibrium (1) is compared to the socially optimal solution, which minimizes the social cost $J := J^1 + J^2$:
$$J(\hat u^1_\cdot, \hat u^2_\cdot) \le J(u^1_\cdot, u^2_\cdot), \quad \forall\, u^1_\cdot \in \mathcal{U}^1,\ u^2_\cdot \in \mathcal{U}^2. \tag{2}$$

1.1. Related Work

Pontryagin’s maximum principle is, alongside dynamic programming, the main tool for characterizing optimal controls in both deterministic and stochastic settings. It treats not only standard stochastic systems, but also generalizes to optimal stopping, singular controls, risk-sensitive controls and partially observed models. Pontryagin’s maximum principle yields necessary conditions that must be satisfied by any optimal solution; these necessary conditions become sufficient under additional convexity conditions. Early results showed that an optimal control, along with the corresponding optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point (forward-backward) boundary value problem, together with a maximum condition on the so-called Hamiltonian function. A very useful aspect of this result is that minimization of the cost functional (over a set of control functions) may reduce to pointwise maximization of the Hamiltonian at each point in time (over the set of control values). Pontryagin’s technique for deterministic systems and stochastic systems with uncontrolled diffusion can be summarized as follows: assume that there exists an optimal control, make a spike-variation of it, and then consider the first-order term of the Taylor expansion with respect to the perturbation. This leads to a variational inequality, and the result follows by duality. If the diffusion is controlled, second-order terms in the Taylor expansion have to be considered. In this case, one ends up with two forward-backward SDEs and a more involved maximum condition for the Hamiltonian; see [3] for a detailed account. For stochastic systems, the backward equation is fundamentally different from the forward equation whenever one is looking for adapted solutions. An adapted solution to a BSDE is a pair of adapted stochastic processes $(Y_\cdot, Z_\cdot)$, where $Z_\cdot$ corrects the “non-adaptedness” caused by the terminal condition on $Y_\cdot$.
As pointed out in [4], the first component $Y_\cdot$ corresponds to the mean evolution of the dynamics, and $Z_\cdot$ to the risk between current time and terminal time. Linear BSDEs extend to non-linear BSDEs [5], with applications not only within stochastic optimal control but also in stochastic analysis [6] and finance [7,8], and to forward-backward SDEs (FBSDEs). BSDEs with distribution-dependent coefficients, mean-field BSDEs, were derived as limits of particle systems in [9]. Existence and uniqueness results for mean-field BSDEs, as well as a comparison theorem, are provided in [10].
In stochastic differential games, both zero-sum and nonzero-sum, Pontryagin’s stochastic maximum principle (SMP) and dynamic programming are the main tools for obtaining conditions for an equilibrium. These tools were essentially inherited from the theory of stochastic optimal control. As in the optimal control setting, the latter deals with solving systems of second-order parabolic partial differential equations, while the former is related to analyzing FBSDEs where, in the case of initial state constraints, the BSDE corresponds to the adjoint process. For a recent example of the use of SMP in stochastic differential game theory, see [11].
The theory of mean-field type control (MFTC), initiated in [2], treats stochastic control problems with coefficients dependent on the marginal state-distribution. This theory is by now well developed for forward stochastic dynamics, i.e., with initial conditions on the state [12,13,14,15]. With SMPs for MFTC problems at hand, MFTG theory can inherit these techniques, just as stochastic differential game theory does in the mean-field free case. See [16] for a review of solution approaches to MFTGs. An MFTC problem can be interpreted as a large-population limit of a cooperative game, where the players share a joint goal to optimize some objective [17]. A close relative of MFTC is the mean field game (MFG). MFGs are a class of non-cooperative stochastic differential games where a large number of indistinguishable (anonymous) players interact weakly through a mean-field coupling term; the theory was initiated by [18,19] independently, and followed up by, among many others, [20,21,22]. Weak player-to-player interaction through a mean-field coupling restricts the influence one player has on any other player to be inversely proportional to the number of players; hence, the influence of any specific player is very small. The coupling of player state dynamics leads to conflicting objectives, and an approximation of mass behavior provides an approximate limit solution (equilibrium) to the MFG. In contrast to the MFG, players in an MFTG can be influential and distinguishable (non-anonymous). That is, state dynamics and/or cost need not be of the same form over the whole player population, and a single player can have a major influence on other players’ dynamics and/or cost.
Already in [23], an SMP in local form was derived for a controlled non-linear BSDE. By first finding a global estimate for the variation of the second component of the BSDE solution, an SMP in global form was derived in [24]. A reinterpretation of BSDEs as forward stochastic optimal control problems [4] opened up a new solution approach in the field of control of BSDEs. Inspired by this reinterpretation, optimal control of LQ BSDEs was solved in [25] by constructing a sequence of forward control problems with an additional state constraint, whose limit solution is the solution to the original LQ BSDE control problem. This approach was later used in [26] to solve a general FBSDE control problem, where the authors overcome the difficulty of controlling the diffusion in the forward process: instead of writing down a second-order adjoint equation for the full system, the technique of [25] is used. Prior to that, [27] studied the optimal control problem for general coupled forward-backward stochastic differential equations (FBSDEs) with controlled diffusions; a maximum principle of Pontryagin’s type for the optimal control is derived by means of spike variation techniques.
Optimal control of mean-field BSDEs has recently gained attention. In [28], the mean-field LQ BSDE control problem with deterministic coefficients is studied. Assuming the control space is linear, linear perturbation is used to derive a stationarity condition which, together with a mean-field FBSDE system, characterizes the optimal control. Existence of optimal controls is also proven under convexity assumptions. Other recent work on the control of BSDEs includes [29,30], both using the FBSDE approach of [27].

1.2. Potential Applications of MFTG with Mean-Field BSDE Dynamics

In [31], a model is proposed for pedestrian groups moving towards targets they are forced to reach, such as delivery and emergency personnel. The strict terminal condition leads to the formulation of a dynamic model for crowd motion where the state dynamics is a mean-field BSDE. Mean-field effects appear in pedestrian crowd models as approximations of aggregate human interaction, so the game would in fact be an MFTG [32]. A game between such groups is of interest since it can be a tool for decentralized decision making under conflicting interests. Other areas of application include strategies for financial investments, where future conditions are often specified [8,33] and lead to dynamic models that include BSDEs. The already mentioned study [1] presents a lengthy list of applications of forward MFTGs in the engineering sciences.

1.3. Paper Contribution and Outline

In this paper, control of mean-field BSDEs is extended to games between players whose state dynamics are mean-field BSDEs. Such games are in fact MFTGs, since the distribution of each player is affected by both players’ choice of strategy. Our MFTG could be viewed as a game between mean-field FBSDEs, where the backward equation is the state equation and the forward equation is pure noise. A Pontryagin-type SMP is derived, resulting in a verification theorem and conditions for existence of a Nash equilibrium. This solution approach is similar to that of [23,24,28]. The use of spike-perturbation requires minimal assumptions on the set of admissible controls, and differentiating measure-valued functions makes it possible to go beyond linear-quadratic mean-field cost and dynamics. The state BSDE is not converted to a forward optimization problem in the spirit of [25]; as a consequence, the adjoint equation in our SMP is a forward SDE. For the sake of comparison, optimality conditions for the cooperative situation are derived. In this setting, the players work together to optimize the social cost, which is the sum of the player costs. The approach used is a straightforward adaptation of the techniques used in control of SDEs of mean-field type; again, we do not need to take the route via some equivalent forward optimization problem to solve the backward MFTC problem. This cooperative game is an MFTC problem, and our result here is basically a special case of the FBSDE results in [27] or [26] mentioned above, although mean-field terms are present. Numerical simulations are done in the linear-quadratic case, which is explicitly solvable up to a system of ODEs. The examples pinpoint differences between player behavior in the game and in the centrally planned solution.
The ratio between the social cost in the game equilibrium and the socially optimal cost quantifies the efficiency of the game. It was first studied in [34] for traffic coordination on networks, under the name coordination ratio, and was later renamed the price of anarchy in [35]. We observe that paying a high price for using large control values, or for deviating from a preferred initial position, makes the problem stiffer, in the sense that the improvement achievable by team optimality decreases, while paying a high price for mean-field related costs makes the problem less stiff.
The rest of this paper is organized as follows. The problem formulation is given in Section 2. Section 3 and Section 4 deal with necessary and sufficient conditions for any Nash equilibrium and social optimum; maximum principles for the MFTG and the MFTC are derived. An LQ problem is solved explicitly in Section 5, and numerical results are presented. The paper concludes with some remarks on possible extensions in Section 6, followed by an appendix containing proofs.

2. Problem Formulation

List of Symbols

  • $T \in (0, \infty)$ — the time horizon.
  • $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$ — the underlying filtered probability space.
  • $\mathcal{L}(X)$ — the distribution of a random variable $X$ under $\mathbb{P}$.
  • $L^2_{\mathcal{F}_t}(\Omega; \mathbb{R}^d)$ — the set of $\mathbb{R}^d$-valued, $\mathcal{F}_t$-measurable random variables $X$ such that $E[|X|^2] < \infty$.
  • $\mathcal{G}$ — the progressive $\sigma$-algebra.
  • $X_\cdot$ — a stochastic process $\{X_t\}_{t \ge 0}$.
  • $\mathcal{S}^{2,k}$ — the set of $\mathbb{R}^k$-valued, continuous, $\mathcal{G}$-measurable processes $X_\cdot$ such that $E[\sup_{t \in [0,T]} |X_t|^2] < \infty$.
  • $\mathcal{H}^{2,k}$ — the set of $\mathbb{R}^k$-valued, $\mathcal{G}$-measurable processes $X_\cdot$ such that $E[\int_0^T |X_s|^2\, ds] < \infty$.
  • $\mathcal{U}^i$ — the set of admissible controls for player $i$.
  • $\mathcal{P}(\mathcal{X})$ — the set of probability measures on $\mathcal{X}$.
  • $\mathcal{P}_2(\mathcal{X})$ — the set of probability measures on $\mathcal{X}$ with finite second moment.
  • $\Theta^i_t$ — the $t$-marginal of the state-, law- and control-tuple of player $i$.
  • $\|Z\|_F$ — the trace (Frobenius) norm of the matrix $Z$.
  • $\partial_{y^i} f(y^i)$ — derivative of the $\mathbb{R}^d$-valued function $f$.
  • $\partial_{\mu^i} f(\mu^i)$ — derivative of the $\mathcal{P}_2(\mathbb{R}^d)$-valued function $f$; see Appendix A for details.
Let $T > 0$ be a finite real number representing the time horizon of the game. Consider a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ on which two independent standard Brownian motions $W^1_\cdot, W^2_\cdot$ are defined, $d_1$- and $d_2$-dimensional respectively. Additionally, $y^1_T, y^2_T \in L^2_{\mathcal{F}_T}(\Omega; \mathbb{R}^d)$ and $\xi$, $\mathcal{F}_0$-measurable, are defined on the space. We assume that these five random objects are independent and that they generate the filtration $\mathbb{F} := \{\mathcal{F}_t\}_{t \ge 0}$. Notice that $\xi$ makes $\mathcal{F}_0$ non-trivial. Let $\mathcal{G}$ be the $\sigma$-algebra on $[0,T] \times \Omega$ of $\mathcal{F}_t$-progressively measurable sets. For $k \ge 1$, let $\mathcal{S}^{2,k}$ be the set of $\mathbb{R}^k$-valued and continuous $\mathcal{G}$-measurable processes $X_\cdot := \{X_t : t \in [0,T]\}$ such that $E[\sup_{t \in [0,T]} |X_t|^2] < \infty$, and let $\mathcal{H}^{2,k}$ be the set of $\mathbb{R}^k$-valued $\mathcal{G}$-measurable processes $X_\cdot$ such that $E[\int_0^T |X_s|^2\, ds] < \infty$.
Let $(U^i, d_{U^i})$ be a separable metric space, $i = 1, 2$. Player $i$ picks her control $u^i_\cdot$ from the set
$$\mathcal{U}^i := \left\{ u : [0,T] \times \Omega \to U^i \,\middle|\, u_\cdot \text{ is } \mathbb{F}\text{-adapted},\ E\left[\int_0^T d_{U^i}(u_s)^2\, ds\right] < \infty \right\}. \tag{3}$$
The distribution of any random variable $\xi \in \mathcal{X}$ will be denoted by $\mathcal{L}(\xi) \in \mathcal{P}(\mathcal{X})$, and $-i$ will denote the index in $\{1,2\} \setminus \{i\}$. Given a pair of controls $(u^1_\cdot, u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2$, consider the system of controlled BSDEs
$$dY^i_t = b^i(t, \Theta^i_t, \Theta^{-i}_t, Z_t)\, dt + Z^{i,1}_t\, dW^1_t + Z^{i,2}_t\, dW^2_t, \quad Y^i_T = y^i_T, \quad i = 1, 2, \tag{4}$$
where $\Theta^i_t = (Y^i_t, \mathcal{L}(Y^i_t), u^i_t)$ and $Z_t = \begin{bmatrix} Z^{1,1}_t & Z^{1,2}_t \\ Z^{2,1}_t & Z^{2,2}_t \end{bmatrix}$. Furthermore,
$$b^i : \Omega \times [0,T] \times \mathcal{S} \times U^i \times \mathcal{S} \times U^{-i} \times \mathbb{R}^{d \times (2d_1 + 2d_2)} \to \mathbb{R}^d, \tag{5}$$
where $\mathcal{S} := \mathbb{R}^d \times \mathcal{P}(\mathbb{R}^d)$ is equipped with the norm $\|(y, \mu)\|_{\mathcal{S}} := |y| + d_2(\mu)$, $d_2$ being the 2-Wasserstein metric on $\mathcal{P}(\mathbb{R}^d)$, and $\mathbb{R}^{d \times (2d_1 + 2d_2)}$ is equipped with the trace norm $\|Z\|_F = \mathrm{tr}(ZZ^\top)^{1/2}$. Note that if $X$ is a square integrable random variable in $\mathbb{R}^d$, then $d_2(\mathcal{L}(X)) < \infty$ and $\mathcal{L}(X) \in \mathcal{P}_2(\mathbb{R}^d)$, the space of measures with finite $d_2$-norm.
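The metric $d_2$ above is the 2-Wasserstein distance. For two empirical measures on $\mathbb{R}$ with equally many, equally weighted atoms, it reduces to an $L^2$ distance between sorted samples, which gives a quick way to compute it numerically. A minimal sketch (the function name and test data are ours, not from the paper):

```python
import numpy as np

def wasserstein2_1d(x, y):
    """2-Wasserstein distance between two empirical measures on R with
    equally many, equally weighted atoms; in 1-D the optimal coupling
    simply matches sorted samples."""
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10_000)   # samples from N(0, 1)
b = a + 2.0                        # same sample, shifted by 2
# For a pure location shift, the distance equals the size of the shift.
print(wasserstein2_1d(a, b))       # → 2.0 (up to floating-point error)
```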
Given $(u^1_\cdot, u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2$, a pair of $\mathbb{R}^d \times \mathbb{R}^{d \times (d_1 + d_2)}$-valued $\mathcal{G}$-measurable processes $(Y^i_\cdot, [Z^{i,1}_\cdot\ Z^{i,2}_\cdot])$, $i = 1, 2$, is a solution to (4) if
$$Y^i_t = y^i_T - \int_t^T b^i(s, \Theta^i_s, \Theta^{-i}_s, Z_s)\, ds - \sum_{j=1}^2 \int_t^T Z^{i,j}_s\, dW^j_s, \quad t \in [0,T],\ a.s., \tag{6}$$
and $(Y^i_\cdot, [Z^{i,1}_\cdot\ Z^{i,2}_\cdot]) \in \mathcal{S}^{2,d} \times \mathcal{H}^{2, d \times (d_1 + d_2)}$.
Remark 1.
Any terminal condition $y^i_T \in L^2_{\mathcal{F}_T}(\Omega; \mathbb{R}^d)$ naturally induces an $\mathbb{F}$-martingale $Y^i_t := E[y^i_T \mid \mathcal{F}_t]$. The martingale representation theorem then gives existence of a unique process $[Z^{i,1}_\cdot, Z^{i,2}_\cdot] \in \mathcal{H}^{2, d \times (d_1 + d_2)}$ such that $Y^i_t = y^i_T - \int_t^T Z^{i,1}_s\, dW^1_s - \int_t^T Z^{i,2}_s\, dW^2_s$; i.e., $[Z^{i,1}_\cdot, Z^{i,2}_\cdot]$ plays the role of the projection and without it, $Y^i_\cdot$ would not be $\mathcal{G}$-measurable. Hence the noise $(W^1_\cdot, W^2_\cdot)$ generating the filtration is common to both players, and $[Z^{i,1}_\cdot, Z^{i,2}_\cdot]$, $i = 1, 2$, is their respective reaction to it. Player $i$ may actually be affected by all the noise in the filtration, even if only some components of $(W^1_\cdot, W^2_\cdot)$ appear in $b^i$. An interpretation of $[Z^{i,1}_\cdot, Z^{i,2}_\cdot]$ is that it is a second control of player $i$: first she plays $u^i_\cdot$ to heed preferences on energy use, initial position, etc., and then she picks $[Z^{i,1}_\cdot, Z^{i,2}_\cdot]$ so that her path to $y^i_T$ is the optimal prediction based on the information available in the filtration at any given time. In this interpretation, the component $b^i$ in (4) acts as a velocity.
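The martingale picture of Remark 1 can be checked numerically in the simplest setting $b^i \equiv 0$, where $Y_t = E[y_T \mid \mathcal{F}_t]$. The one-dimensional terminal condition below, $y_T = W_T^2$, is our own illustrative choice; for it, $Y_t = W_t^2 + (T - t)$ and, by Itô's formula, $Z_t = 2W_t$ is the correcting integrand. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
T, t, n = 1.0, 0.4, 200_000
W_t = rng.normal(0.0, np.sqrt(t), n)            # Brownian motion at time t
W_T = W_t + rng.normal(0.0, np.sqrt(T - t), n)  # independent increment to T

# y_T = W_T^2, candidate martingale value Y_t = W_t^2 + (T - t).
# Both have expectation T = 1, so the sample means should agree.
lhs = np.mean(W_T**2)             # estimate of E[y_T]
rhs = np.mean(W_t**2) + (T - t)   # estimate of E[Y_t]
print(lhs, rhs)                   # both close to 1.0
```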
Existence and uniqueness of solutions to (4) is given by a slight variation of the results of [10], where the one-dimensional case is treated. For the $d$-dimensional mean-field free case, see [36].
Assumption 1.
The process $b^i(\omega, \cdot, 0, \ldots, 0)$, $i = 1, 2$, belongs to $\mathcal{H}^{2,d}$, and for any $v^i = (y^i, \mu^i, u^i, y^{-i}, \mu^{-i}, u^{-i}, z) \in \mathcal{S} \times U^1 \times \mathcal{S} \times U^2 \times \mathbb{R}^{d \times (2d_1 + 2d_2)}$, $b^i(\omega, \cdot, v^i)$, $i = 1, 2$, is $\mathcal{G}$-measurable.
Assumption 2.
Given a pair of control values $(u^1, u^2) \in U^1 \times U^2$, there exists a constant $L > 0$ such that for all $t \in [0,T]$ and tuples $(y^1, \mu^1, y^2, \mu^2, z)$, $(\bar y^1, \bar\mu^1, \bar y^2, \bar\mu^2, \bar z) \in \mathcal{S} \times \mathcal{S} \times \mathbb{R}^{d \times (2d_1 + 2d_2)}$,
$$|b^i(t, y^i, \mu^i, u^i, y^{-i}, \mu^{-i}, u^{-i}, z) - b^i(t, \bar y^i, \bar\mu^i, u^i, \bar y^{-i}, \bar\mu^{-i}, u^{-i}, \bar z)| \le L\left(\sum_{j=1}^2 \|(y^j, \mu^j) - (\bar y^j, \bar\mu^j)\|_{\mathcal{S}} + \|z - \bar z\|_F\right), \quad \mathbb{P}\text{-}a.s.,\ i = 1, 2. \tag{7}$$
Theorem 1.
Let Assumptions 1 and 2 hold. Then, for any terminal conditions $y^1_T, y^2_T \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^d)$ and any $(u^1_\cdot, u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2$, the system of mean-field BSDEs (4) has a unique solution $(Y^i_\cdot, [Z^{i,1}_\cdot, Z^{i,2}_\cdot]) \in \mathcal{S}^{2,d} \times \mathcal{H}^{2, d \times (d_1 + d_2)}$, $i = 1, 2$.
Next, we introduce the best reply of player i as follows:
$$J^i(u^i_\cdot; u^{-i}_\cdot) := E\left[\int_0^T f^i(t, \Theta^i_t, \Theta^{-i}_t)\, dt + h^i(Y^i_0, \mathcal{L}(Y^i_0), Y^{-i}_0, \mathcal{L}(Y^{-i}_0))\right] \tag{8}$$
for given maps $f^i : [0,T] \times \mathcal{S} \times U^i \times \mathcal{S} \times U^{-i} \to \mathbb{R}$ and $h^i : \Omega \times \mathcal{S} \times \mathcal{S} \to \mathbb{R}$.
Assumption 3.
For any pair of controls $(u^1_\cdot, u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2$, $f^i(\cdot, \Theta^i_\cdot, \Theta^{-i}_\cdot) \in L^1_{\mathbb{F}}(0, T; \mathbb{R})$ and $h^i(Y^i_0, \mathcal{L}(Y^i_0), Y^{-i}_0, \mathcal{L}(Y^{-i}_0)) \in L^1_{\mathcal{F}_0}(\Omega; \mathbb{R})$.
The problems we consider next are
  • The Mean-field Type Game (MFTG): find the Nash equilibrium controls of
    $$\inf_{u^i_\cdot \in \mathcal{U}^i} J^i(u^i_\cdot; u^{-i}_\cdot), \quad i = 1, 2, \quad \text{s.t.} \quad dY^i_t = b^i(t, \Theta^i_t, \Theta^{-i}_t, Z_t)\, dt + Z^{i,1}_t\, dW^1_t + Z^{i,2}_t\, dW^2_t, \quad Y^i_T = y^i_T. \tag{9}$$
  • The Mean-field Type Control Problem (MFTC): find the optimal control pair of
    $$\inf_{(u^1_\cdot, u^2_\cdot) \in \mathcal{U}^1 \times \mathcal{U}^2} J(u^1_\cdot, u^2_\cdot) := J^1(u^1_\cdot; u^2_\cdot) + J^2(u^2_\cdot; u^1_\cdot), \quad \text{s.t.} \quad dY^i_t = b^i(t, \Theta^i_t, \Theta^{-i}_t, Z_t)\, dt + Z^{i,1}_t\, dW^1_t + Z^{i,2}_t\, dW^2_t, \quad Y^i_T = y^i_T, \quad i = 1, 2. \tag{10}$$
In the game, each player assumes that the other player acts rationally, i.e., minimizes cost, and picks her control as the best response to that. This leads to a set of two inequalities characterizing any control pair $(u^1_\cdot, u^2_\cdot)$ that constitutes a Nash equilibrium. In this paper, each player is aware of the other player’s control set, best response function and state dynamics. Therefore, even though the decision process is decentralized, both players solve the same set of inequalities. When the Nash equilibrium is not unique, there is ambiguity about which equilibrium strategy to play if the players do not communicate. In the control problem, a central planner decides which strategies are played by both players. The central planner might just be the two players cooperating towards a common goal, or some superior decision maker. The goal is to find the control pair that minimizes the social cost $J$. This notion of a centrally planned/cooperative solution is related to the concept of team optimality in team problems [37]. In a team problem, the players share a common objective, and a team-optimal solution is the solution to the joint minimization of that objective. In our case, the social cost $J$ is a common objective in the MFTC. The Nash solution to the team problem is given by the control pair that satisfies the two inequalities
$$J(\hat u^1_\cdot, \hat u^2_\cdot) \le J(u^1_\cdot, \hat u^2_\cdot), \quad \forall\, u^1_\cdot \in \mathcal{U}^1, \qquad J(\hat u^1_\cdot, \hat u^2_\cdot) \le J(\hat u^1_\cdot, u^2_\cdot), \quad \forall\, u^2_\cdot \in \mathcal{U}^2. \tag{11}$$
In (11), each player minimizes the social cost with respect to her own control, under the assumption that the other player does the same. This is the so-called player-by-player optimality of a control pair in a team problem. Notice that if we set $J^1(u^1_\cdot; u^2_\cdot) = J^2(u^2_\cdot; u^1_\cdot)$ in (9), it becomes a team problem. The solution to the MFTG (9) will then be the player-by-player optimal solution to the minimization of the social cost.
Logically, we expect the optimal social cost to be no higher than the social cost in a Nash equilibrium. The ratio between the worst-case social cost in the game and the optimal social cost is called the price of anarchy; we will highlight it in the numerical simulations in Section 5, where we also observe behavioral differences between the MFTG and the MFTC given identical data.
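As a toy illustration of this definition, the price of anarchy is simply a ratio of social costs. The numbers below are hypothetical, not taken from the LQ example of Section 5:

```python
def price_of_anarchy(nash_social_costs, optimal_social_cost):
    """Worst-case Nash social cost divided by the centrally planned
    optimum; always >= 1, equal to 1 when no efficiency is lost."""
    return max(nash_social_costs) / optimal_social_cost

# Hypothetical game with two Nash equilibria (social costs 5 and 6)
# and a central planner achieving social cost 4.
print(price_of_anarchy([5.0, 6.0], 4.0))  # → 1.5
```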

3. Problem 1: MFTG

In this section we derive necessary and sufficient equilibrium conditions for (9). Given the existence of such a pair of controls, we derive the conditions by means of a Pontryagin-type stochastic maximum principle.
Assume that ( u ^ · 1 , u ^ · 2 ) is a Nash equilibrium for the MFTG, i.e., satisfies the following system of inequalities,
$$J^1(\hat u^1_\cdot; \hat u^2_\cdot) \le J^1(u^1_\cdot; \hat u^2_\cdot), \quad \forall\, u^1_\cdot \in \mathcal{U}^1, \qquad J^2(\hat u^2_\cdot; \hat u^1_\cdot) \le J^2(u^2_\cdot; \hat u^1_\cdot), \quad \forall\, u^2_\cdot \in \mathcal{U}^2. \tag{12}$$
Consider the first inequality, with $\bar u^{\varepsilon,1}_\cdot$ chosen as a spike-perturbation of $\hat u^1_\cdot$. That is, for $u_\cdot \in \mathcal{U}^1$,
$$\bar u^{\varepsilon,1}_t := \begin{cases} \hat u^1_t, & t \in [0,T] \setminus E_\varepsilon, \\ u_t, & t \in E_\varepsilon. \end{cases} \tag{13}$$
Here, $E_\varepsilon$ is any subset of $[0,T]$ of Lebesgue measure $\varepsilon$. Clearly, $\bar u^{\varepsilon,1}_\cdot \in \mathcal{U}^1$. When player 1 plays the spike-perturbed control $\bar u^{\varepsilon,1}_\cdot$ and player 2 plays the equilibrium control $\hat u^2_\cdot$, we denote the dynamics by
$$\begin{aligned} d\bar Y^{\varepsilon,1}_t &= b^1(t, \bar\Theta^{\varepsilon,1}_t, \bar Y^{\varepsilon,2}_t, \mathcal{L}(\bar Y^{\varepsilon,2}_t), \hat u^2_t, \bar Z^\varepsilon_t)\, dt + \bar Z^{\varepsilon,1,1}_t\, dW^1_t + \bar Z^{\varepsilon,1,2}_t\, dW^2_t, \quad \bar Y^{\varepsilon,1}_T = y^1_T, \\ d\bar Y^{\varepsilon,2}_t &= b^2(t, \bar Y^{\varepsilon,2}_t, \mathcal{L}(\bar Y^{\varepsilon,2}_t), \hat u^2_t, \bar\Theta^{\varepsilon,1}_t, \bar Z^\varepsilon_t)\, dt + \bar Z^{\varepsilon,2,1}_t\, dW^1_t + \bar Z^{\varepsilon,2,2}_t\, dW^2_t, \quad \bar Y^{\varepsilon,2}_T = y^2_T. \end{aligned} \tag{14}$$
The performance of the perturbed dynamics (14) will be compared with that of the equilibrium dynamics
$$\begin{aligned} d\hat Y^1_t &= b^1(t, \hat\Theta^1_t, \hat\Theta^2_t, \hat Z_t)\, dt + \hat Z^{1,1}_t\, dW^1_t + \hat Z^{1,2}_t\, dW^2_t, \quad \hat Y^1_T = y^1_T, \\ d\hat Y^2_t &= b^2(t, \hat\Theta^2_t, \hat\Theta^1_t, \hat Z_t)\, dt + \hat Z^{2,1}_t\, dW^1_t + \hat Z^{2,2}_t\, dW^2_t, \quad \hat Y^2_T = y^2_T. \end{aligned} \tag{15}$$
For simplicity, we write, for $\varphi \in \{b^1, f^1, h^1\}$, $\psi \in \{b^2, f^2, h^2\}$ and $\vartheta \in \{b^i, f^i, h^i,\ i = 1, 2\}$,
$$\bar\varphi^\varepsilon_t := \varphi(t, \bar\Theta^{\varepsilon,1}_t, \bar Y^{\varepsilon,2}_t, \mathcal{L}(\bar Y^{\varepsilon,2}_t), \hat u^2_t, \bar Z^\varepsilon_t), \quad \bar\psi^\varepsilon_t := \psi(t, \bar Y^{\varepsilon,2}_t, \mathcal{L}(\bar Y^{\varepsilon,2}_t), \hat u^2_t, \bar\Theta^{\varepsilon,1}_t, \bar Z^\varepsilon_t), \quad \hat\vartheta_t := \vartheta(t, \hat\Theta^i_t, \hat\Theta^{-i}_t, \hat Z_t). \tag{16}$$
In this shorthand notation, which will be used from now on, the difference in performance is
$$J^1(\bar u^{\varepsilon,1}_\cdot; \hat u^2_\cdot) - J^1(\hat u^1_\cdot; \hat u^2_\cdot) = E\left[\int_0^T \left(\bar f^{\varepsilon,1}_t - \hat f^1_t\right) dt + \bar h^{\varepsilon,1}_0 - \hat h^1_0\right]. \tag{17}$$
Any derivative of $f : a \mapsto f(a)$ will be denoted $\partial_a f$, regardless of the spaces the function maps between.
Assumption 4.
The functions
$$\begin{aligned} (y^1, \mu^1, u^1, y^2, \mu^2, u^2, z) &\mapsto b^i(t, y^i, \mu^i, u^i, y^{-i}, \mu^{-i}, u^{-i}, z), \\ (y^1, \mu^1, u^1, y^2, \mu^2, u^2) &\mapsto f^i(t, y^i, \mu^i, u^i, y^{-i}, \mu^{-i}, u^{-i}), \\ (y^1, \mu^1, y^2, \mu^2) &\mapsto h^i(y^i, \mu^i, y^{-i}, \mu^{-i}) \end{aligned} \tag{18}$$
are for all $t$ a.s. differentiable at $(\hat\Theta^1_t, \hat\Theta^2_t, \hat Z_t)$, $(\hat\Theta^1_t, \hat\Theta^2_t)$ and $(\hat Y^1_0, \mathcal{L}(\hat Y^1_0), \hat Y^2_0, \mathcal{L}(\hat Y^2_0))$, respectively. Furthermore,
$$\partial_{y^j} \hat b^i_t,\ \partial_{\mu^j} \hat b^i_t,\ \partial_{y^j} \hat f^i_t,\ \partial_{\mu^j} \hat f^i_t, \quad i, j = 1, 2, \tag{19}$$
are for all $t$ a.s. uniformly bounded, and
$$\partial_{y^j} \hat h^i_0 + E\left[(\partial_{\mu^j} \hat h^i_0)^*\right] \in L^2_{\mathcal{F}_0}(\Omega; \mathbb{R}^d). \tag{20}$$
For $i = 1, 2$,
$$\bar h^{\varepsilon,i}_0 - \hat h^i_0 = \sum_{j=1}^2 \left\{ \partial_{y^j}\hat h^i_0 \cdot (\bar Y^{\varepsilon,j}_0 - \hat Y^j_0) + E\left[(\partial_{\mu^j}\hat h^i_0)(\bar Y^{\varepsilon,j}_0 - \hat Y^j_0)\right] \right\} + \sum_{j=1}^2 \left\{ o\left(|\bar Y^{\varepsilon,j}_0 - \hat Y^j_0|\right) + o\left(E\left[|\bar Y^{\varepsilon,j}_0 - \hat Y^j_0|^2\right]^{1/2}\right) \right\}. \tag{21}$$
A brief overview of differentiation of $\mathcal{P}_2(\mathbb{R}^d)$-valued functions is found in Appendix A, and the notation $(\partial_{\mu^j}\hat h^i_0)$ is defined in (A8). Both $\bar Y^{\varepsilon,1}_t - \hat Y^1_t$ and $\bar Y^{\varepsilon,2}_t - \hat Y^2_t$ appear in (21); this suggests that we need to introduce two first-order variation processes. That is, we want $(\tilde Y^i_\cdot, [\tilde Z^{i,1}_\cdot, \tilde Z^{i,2}_\cdot])$, $i = 1, 2$, that for some $C > 0$ satisfy
$$\sup_{0 \le t \le T} E\left[|\tilde Y^i_t|^2 + \sum_{j=1}^2 \int_0^t \|\tilde Z^{i,j}_s\|_F^2\, ds\right] \le C\varepsilon^2, \qquad \sup_{0 \le t \le T} E\left[|\bar Y^{\varepsilon,i}_t - \hat Y^i_t - \tilde Y^i_t|^2 + \sum_{j=1}^2 \int_0^t \|\bar Z^{\varepsilon,i,j}_s - \hat Z^{i,j}_s - \tilde Z^{i,j}_s\|_F^2\, ds\right] \le C\varepsilon^2. \tag{22}$$
Let $\delta_i$ denote variation in $u^i_\cdot$, so that for $\vartheta \in \{f^i, b^i,\ i = 1, 2\}$,
$$\delta_i \vartheta(t) := \vartheta(t, \hat Y^i_t, \mathcal{L}(\hat Y^i_t), \bar u^{\varepsilon,i}_t, \hat\Theta^{-i}_t, \hat Z_t) - \hat\vartheta_t. \tag{23}$$
Assumption 5.
For $(y^i, \mu^i) \in \mathcal{S}$, $i = 1, 2$, $z \in \mathbb{R}^{d \times (2d_1 + 2d_2)}$ and $(u^1, u^2), (v^1, v^2) \in U^1 \times U^2$, there exists a constant $L > 0$ such that
$$|b^i(t, y^i, \mu^i, u^i, y^{-i}, \mu^{-i}, u^{-i}, z) - b^i(t, y^i, \mu^i, v^i, y^{-i}, \mu^{-i}, v^{-i}, z)| \le L \sum_{j=1}^2 d_{U^j}(u^j, v^j), \tag{24}$$
a.s. for all $t \in [0,T]$.
Lemma 1.
Let Assumptions 1, 2, 4 and 5 be in force. The first-order variation processes satisfying (22) are given by the following system of BSDEs:
$$d\tilde Y^i_t = \left[\sum_{j=1}^2 \left\{\partial_{y^j}\hat b^i_t\, \tilde Y^j_t + E\left[(\partial_{\mu^j}\hat b^i_t)\tilde Y^j_t\right]\right\} + \sum_{j,k=1}^2 \partial_{z^{j,k}}\hat b^i_t\, \tilde Z^{j,k}_t + \delta_1 b^i(t)\mathbf{1}_{E_\varepsilon}(t)\right] dt + \sum_{j=1}^2 \tilde Z^{i,j}_t\, dW^j_t, \quad \tilde Y^i_T = 0, \quad i = 1, 2. \tag{25}$$
A proof is found in the appendix. By Lemma 1,
$$E\left[\bar h^{\varepsilon,1}_0 - \hat h^1_0\right] = E\left[\sum_{j=1}^2 \left\{\partial_{y^j}\hat h^1_0 \cdot \tilde Y^j_0 + E\left[(\partial_{\mu^j}\hat h^1_0)\tilde Y^j_0\right]\right\}\right] + o(\varepsilon) = E\left[\sum_{j=1}^2 p^{1,j}_0 \cdot \tilde Y^j_0\right] + o(\varepsilon), \tag{26}$$
where the introduced costates $p^{1,j}_\cdot$, $j = 1, 2$, satisfy $p^{1,j}_0 := \partial_{y^j}\hat h^1_0 + E[(\partial_{\mu^j}\hat h^1_0)^*]$. The notation $(\partial_{\mu^j}\hat h^1_0)^*$ is defined in (A10). Assumption 4 grants us existence and uniqueness for Equation (27) below.
Lemma 2 (Duality relation).
Let Assumptions 1, 2 and 4 hold. Let $p^{1,j}_\cdot$ be the solution to the SDE
$$dp^{1,j}_t = -\left\{\partial_{y^j}\hat H^1_t + E\left[(\partial_{\mu^j}\hat H^1_t)^*\right]\right\} dt - \sum_{k=1}^2 \partial_{z^{j,k}}\hat H^1_t\, dW^k_t, \quad p^{1,j}_0 = \partial_{y^j}\hat h^1_0 + E\left[(\partial_{\mu^j}\hat h^1_0)^*\right], \tag{27}$$
where for $(y^i, \mu^i) \in \mathcal{S}$, $i = 1, 2$, and $(u^1, u^2, z) \in U^1 \times U^2 \times \mathbb{R}^{d \times (2d_1 + 2d_2)}$,
$$H^1(\omega, t, y^1, \mu^1, u^1, y^2, \mu^2, u^2, z, p^{1,1}_t, p^{1,2}_t) := \sum_{j=1}^2 b^j(\omega, t, y^j, \mu^j, u^j, y^{-j}, \mu^{-j}, u^{-j}, z) \cdot p^{1,j}_t - f^1(t, y^1, \mu^1, u^1, y^2, \mu^2, u^2). \tag{28}$$
Then the following duality relation holds:
$$E\left[\sum_{j=1}^2 p^{1,j}_0 \cdot \tilde Y^j_0\right] = -E\left[\int_0^T \sum_{j=1}^2 \left\{ p^{1,j}_t \cdot \delta_1 b^j(t)\mathbf{1}_{E_\varepsilon}(t) + \tilde Y^j_t \cdot \left(\partial_{y^j}\hat f^1_t + E\left[(\partial_{\mu^j}\hat f^1_t)^*\right]\right)\right\} dt\right]. \tag{29}$$
A proof of the lemma above is found in the appendix. We have that
$$\bar f^{\varepsilon,i}_t - \hat f^i_t = \sum_{j=1}^2 \left\{\partial_{y^j}\hat f^i_t \cdot (\bar Y^{\varepsilon,j}_t - \hat Y^j_t) + E\left[(\partial_{\mu^j}\hat f^i_t)(\bar Y^{\varepsilon,j}_t - \hat Y^j_t)\right]\right\} + \delta_1 f^i(t)\mathbf{1}_{E_\varepsilon}(t) + \sum_{j=1}^2 \left\{o\left(|\bar Y^{\varepsilon,j}_t - \hat Y^j_t|\right) + o\left(E\left[|\bar Y^{\varepsilon,j}_t - \hat Y^j_t|^2\right]^{1/2}\right)\right\}. \tag{30}$$
By the expansion (30) and Lemma 1,
$$E\left[\int_0^T \left(\bar f^{\varepsilon,1}_t - \hat f^1_t\right) dt\right] = E\left[\int_0^T \left\{\sum_{j=1}^2 \tilde Y^j_t \cdot \left(\partial_{y^j}\hat f^1_t + E\left[(\partial_{\mu^j}\hat f^1_t)^*\right]\right) + \delta_1 f^1(t)\mathbf{1}_{E_\varepsilon}(t)\right\} dt\right] + o(\varepsilon), \tag{31}$$
which yields
$$J^1(\bar u^{\varepsilon,1}_\cdot; \hat u^2_\cdot) - J^1(\hat u^1_\cdot; \hat u^2_\cdot) = E\left[\int_0^T \left\{-p^{1,1}_t \cdot \delta_1 b^1(t) - p^{1,2}_t \cdot \delta_1 b^2(t) + \delta_1 f^1(t)\right\}\mathbf{1}_{E_\varepsilon}(t)\, dt\right] + o(\varepsilon). \tag{32}$$
Therefore,
$$J^1(\bar u^{\varepsilon,1}_\cdot; \hat u^2_\cdot) - J^1(\hat u^1_\cdot; \hat u^2_\cdot) = -E\left[\int_0^T \delta_1 H^1(t)\mathbf{1}_{E_\varepsilon}(t)\, dt\right] + o(\varepsilon). \tag{33}$$
From the last identity, we can derive necessary and sufficient conditions for player 1’s best response to u ^ · 2 .
The same argument can be carried out for player 2’s best response to $\hat u^1_\cdot$. Naturally, we need to impose the corresponding assumptions on player 2’s control. For completeness and later reference, we now state the second player’s version of Lemma 2.
Lemma 3 (Duality relation, player 2).
Let Assumptions 1, 2 and 4 hold, and let $p^{2,j}_\cdot$ be the solution to the SDE
$$dp^{2,j}_t = -\left\{\partial_{y^j}\hat H^2_t + E\left[(\partial_{\mu^j}\hat H^2_t)^*\right]\right\} dt - \sum_{k=1}^2 \partial_{z^{j,k}}\hat H^2_t\, dW^k_t, \quad p^{2,j}_0 = \partial_{y^j}\hat h^2_0 + E\left[(\partial_{\mu^j}\hat h^2_0)^*\right], \tag{34}$$
where for $(y^i, \mu^i) \in \mathcal{S}$, $i = 1, 2$, and $(u^1, u^2, z) \in U^1 \times U^2 \times \mathbb{R}^{d \times (2d_1 + 2d_2)}$,
$$H^2(\omega, t, y^2, \mu^2, u^2, y^1, \mu^1, u^1, z, p^{2,1}_t, p^{2,2}_t) := \sum_{j=1}^2 b^j(\omega, t, y^j, \mu^j, u^j, y^{-j}, \mu^{-j}, u^{-j}, z) \cdot p^{2,j}_t - f^2(t, y^2, \mu^2, u^2, y^1, \mu^1, u^1). \tag{35}$$
Then the following duality relation holds:
$$E\left[\sum_{j=1}^2 p^{2,j}_0 \cdot \tilde Y^j_0\right] = -E\left[\int_0^T \sum_{j=1}^2 \left\{p^{2,j}_t \cdot \delta_2 b^j(t)\mathbf{1}_{E_\varepsilon}(t) + \tilde Y^j_t \cdot \left(\partial_{y^j}\hat f^2_t + E\left[(\partial_{\mu^j}\hat f^2_t)^*\right]\right)\right\} dt\right]. \tag{36}$$
Necessary equilibrium conditions can now be stated as a system of six equations: two state BSDEs and four costate (adjoint) SDEs. Sufficient conditions for a Nash equilibrium can be stated as convexity conditions on the four functions $H^i, h^i$, $i = 1, 2$. We let Assumptions 1–5 be in place.
Theorem 2.
[Necessary equilibrium conditions] Suppose that $(\hat Y^i_\cdot, [\hat Z^{i,1}_\cdot, \hat Z^{i,2}_\cdot], \hat u^i_\cdot)$, $i = 1, 2$, is an equilibrium for the MFTG and that $p^{i,j}_\cdot$, $i, j = 1, 2$, solve (27) and (34). Then, for $i = 1, 2$,
$$\hat u^i_t = \operatorname*{argmax}_{\alpha \in U^i} H^i(t, \hat Y^i_t, \mathcal{L}(\hat Y^i_t), \alpha, \hat\Theta^{-i}_t, \hat Z_t, p^{i,1}_t, p^{i,2}_t), \quad a.e.\ t,\ a.s. \tag{37}$$
Proof. 
Let $E_\varepsilon := [s, s + \varepsilon]$, $u_\cdot \in \mathcal{U}^1$ and $A \in \mathcal{F}_t$ for $t \in E_\varepsilon$. Consider the spike-perturbation
$$u^\varepsilon_t := \begin{cases} u_t \mathbf{1}_A + \hat u^1_t \mathbf{1}_{A^c}, & t \in E_\varepsilon, \\ \hat u^1_t, & t \in [0,T] \setminus E_\varepsilon. \end{cases} \tag{38}$$
Then
$$\hat H^1_t - H^1(t, \hat Y^1_t, \mathcal{L}(\hat Y^1_t), u^\varepsilon_t, \hat\Theta^2_t, \hat Z_t, p^{1,1}_t, p^{1,2}_t) = \left[\hat H^1_t - H^1(t, \hat Y^1_t, \mathcal{L}(\hat Y^1_t), u_t, \hat\Theta^2_t, \hat Z_t, p^{1,1}_t, p^{1,2}_t)\right]\mathbf{1}_A\, \mathbf{1}_{E_\varepsilon}(t). \tag{39}$$
Applying (33), we obtain
$$\frac{1}{\varepsilon}\, E\left[\int_s^{s+\varepsilon} \left[\hat H^1_t - H^1(t, \hat Y^1_t, \mathcal{L}(\hat Y^1_t), u_t, \hat\Theta^2_t, \hat Z_t, p^{1,1}_t, p^{1,2}_t)\right]\mathbf{1}_A\, dt\right] \ge \frac{1}{\varepsilon}\, o(\varepsilon). \tag{40}$$
Sending $\varepsilon$ to zero yields
$$E\left[\left(\hat H^1_s - H^1(s, \hat Y^1_s, \mathcal{L}(\hat Y^1_s), u_s, \hat\Theta^2_s, \hat Z_s, p^{1,1}_s, p^{1,2}_s)\right)\mathbf{1}_A\right] \ge 0, \quad a.e.\ s \in [0,T]. \tag{41}$$
The last inequality holds for all $A \in \mathcal{F}_s$, thus
$$E\left[\hat H^1_s - H^1(s, \hat Y^1_s, \mathcal{L}(\hat Y^1_s), u_s, \hat\Theta^2_s, \hat Z_s, p^{1,1}_s, p^{1,2}_s) \,\middle|\, \mathcal{F}_s\right] \ge 0, \quad a.e.\ s \in [0,T],\ a.s. \tag{42}$$
By measurability of the integrand in (42),
$$\hat u^1_t = \operatorname*{argmax}_{\alpha \in U^1} H^1(t, \hat Y^1_t, \mathcal{L}(\hat Y^1_t), \alpha, \hat\Theta^2_t, \hat Z_t, p^{1,1}_t, p^{1,2}_t), \quad a.e.\ t \in [0,T],\ a.s. \tag{43}$$
The same argument yields
$$\hat u^2_t = \operatorname*{argmax}_{\alpha \in U^2} H^2(t, \hat Y^2_t, \mathcal{L}(\hat Y^2_t), \alpha, \hat\Theta^1_t, \hat Z_t, p^{2,1}_t, p^{2,2}_t), \quad a.e.\ t \in [0,T],\ a.s. \tag{44}$$
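Numerically, the maximum condition reduces to a pointwise search over the control set at each time. The quadratic Hamiltonian below is a stand-in of our own (scalar state, drift $b = \alpha$, running cost $f = (\alpha^2 + y^2)/2$), not the paper's $H^i$; for it the maximizer is $\alpha = p$:

```python
import numpy as np

def argmax_hamiltonian(H, U_grid):
    """Return the grid point maximizing the scalar function H,
    mimicking the pointwise maximum condition of Theorem 2."""
    values = np.array([H(a) for a in U_grid])
    return U_grid[int(np.argmax(values))]

p, y = -1.0, 0.5
H = lambda a: a * p - 0.5 * a**2 - 0.5 * y**2  # H = b*p - f with b = a
U_grid = np.linspace(-2.0, 2.0, 401)           # discretized control set
print(argmax_hamiltonian(H, U_grid))           # close to p = -1.0
```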
Theorem 3.
[Sufficient equilibrium conditions] Suppose that u ^ · 1 and u ^ · 2 satisfy (37). Suppose furthermore that for ( t , p i , 1 , p i , 2 , z ) [ 0 , T ] × R d × R d × R d × ( 2 d 1 + 2 d 2 ) , i = 1 , 2 ,
( y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 ) H i ( t , y i , μ i , u i , y i , μ i , u i , z , p i , 1 , p i , 2 )
is concave a.s. and
( y 1 , μ 1 , y 2 , μ 2 ) ↦ h i ( y i , μ i , y − i , μ − i )
is convex a.s. Then u ^ · 1 , u ^ · 2 constitute an equilibrium control and ( Y ^ · i , [ Z ^ · i , 1 , Z ^ · i , 2 ] , u ^ · i ) , i = 1 , 2 , is an equilibrium for the MFTG.
Proof. 
By assumption, δ i H i ( t ) ≤ 0 for any spike variation, almost surely for a.e. t. Applying the convexity and concavity assumptions in the expansion steps results in the inequality
0 ≤ − E ∫ 0 T δ i H i ( t ) 1 E ε ( t ) d t ≤ J i ( u · i ; u ^ · − i ) − J i ( u ^ · i ; u ^ · − i ) .

4. Problem 2: MFTC

Carrying out an argument similar to that of the previous section, we find necessary optimality conditions for problem (10). We also readily obtain a verification theorem. The pair ( u ^ · 1 , u ^ · 2 ) ∈ U 1 × U 2 is optimal if
J ( u ^ · 1 , u ^ · 2 ) ≤ J ( u · 1 , u · 2 ) , ( u · 1 , u · 2 ) ∈ U 1 × U 2 .
Assume from now on that ( u ^ · 1 , u ^ · 2 ) is an optimal control. We study the inequality (48) when ( u ˇ · ε , 1 , u ˇ · ε , 2 ) is a spike-perturbation of ( u ^ · 1 , u ^ · 2 ) ,
( u ˇ t ε , 1 , u ˇ t ε , 2 ) : = { ( u ^ t 1 , u ^ t 2 ) , t ∈ [ 0 , T ] \ E ε , ( u t 1 , u t 2 ) , t ∈ E ε ,
where E ε is any subset of [ 0 , T ] of Lebesgue measure ε and ( u · 1 , u · 2 ) ∈ U 1 × U 2 . When the players use the perturbed control, we denote the state dynamics by
{ d Y ˇ t ε , 1 = b 1 ( t , Θ ˇ t ε , 1 , Θ ˇ t ε , 2 , Z ˇ t ε ) d t + Z ˇ t ε , 1 , 1 d W t 1 + Z ˇ t ε , 1 , 2 d W t 2 , Y ˇ T 1 = y 1 , d Y ˇ t ε , 2 = b 2 ( t , Θ ˇ t ε , 2 , Θ ˇ t ε , 1 , Z ˇ t ε ) d t + Z ˇ t ε , 2 , 1 d W t 1 + Z ˇ t ε , 2 , 2 d W t 2 , Y ˇ T 2 = y 2 ,
and we will compare their performance to that of the optimally controlled state dynamics
{ d Y ^ t 1 = b 1 ( t , Θ ^ t 1 , Θ ^ t 2 , Z ^ t ) d t + Z ^ t 1 , 1 d W t 1 + Z ^ t 1 , 2 d W t 2 , Y ^ T 1 = y 1 , d Y ^ t 2 = b 2 ( t , Θ ^ t 2 , Θ ^ t 1 , Z ^ t ) d t + Z ^ t 2 , 1 d W t 1 + Z ^ t 2 , 2 d W t 2 , Y ^ T 2 = y 2 .
For simplicity, we write for ϑ { b i , f i , h i , i = 1 , 2 } ,
ϑ ˇ t ε : = ϑ ( t , Θ ˇ t ε , i , Θ ˇ t ε , − i , Z ˇ t ε ) , ϑ ^ t : = ϑ ( t , Θ ^ t i , Θ ^ t − i , Z ^ t ) ,
and in this notation,
J ( u ˇ · ε , 1 , u ˇ · ε , 2 ) − J ( u ^ · 1 , u ^ · 2 ) = E [ ∫ 0 T ( f ˇ t ε , 1 + f ˇ t ε , 2 − f ^ t 1 − f ^ t 2 ) d t + h ˇ 0 ε , 1 + h ˇ 0 ε , 2 − h ^ 0 1 − h ^ 0 2 ] = E [ ∫ 0 T ( f ˇ t ε − f ^ t ) d t + h ˇ 0 ε − h ^ 0 ] ,
where f t : = f t 1 + f t 2 and h : = h 1 + h 2 . Again, we want to find first-order variation processes ( Y ˜ · i , [ Z ˜ · i , 1 , Z ˜ · i , 2 ] ) , i = 1 , 2 , that satisfy (22) with ( Y ¯ · ε , i , [ Z ¯ · ε , i , 1 , Z ¯ · ε , i , 2 ] ) replaced by its ’checked’ counterpart ( Y ˇ · ε , i , [ Z ˇ · ε , i , 1 , Z ˇ · ε , i , 2 ] ) .
Assumption 6.
The functions
( y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 , z ) ↦ b i ( t , y i , μ i , u i , y − i , μ − i , u − i , z ) , ( y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 ) ↦ f i ( t , y i , μ i , u i , y − i , μ − i , u − i ) , ( y 1 , μ 1 , y 2 , μ 2 ) ↦ h i ( y i , μ i , y − i , μ − i )
are for all t a.s. differentiable at ( Θ ^ t 1 , Θ ^ t 2 , Z ^ t ) , ( Θ ^ t 1 , Θ ^ t 2 ) and ( Y ^ 0 1 , L ( Y ^ 0 1 ) , Y ^ 0 2 , L ( Y ^ 0 2 ) ) respectively. Furthermore,
y j b ^ t i , μ j b ^ t i , y j f ^ t i , μ j f ^ t i , i , j = 1 , 2 ,
are for all t a.s. uniformly bounded and y j h ^ 0 i + E ( ( μ j h ^ 0 i ) ∈ L F 0 2 ( Ω ; R d ) .
Notice that the point of differentiability is generally not the same in Assumptions 4 and 6. Above, ( u ^ · 1 , u ^ · 2 ) is an optimal control, while in Assumption 4 it is an equilibrium control. Let δ denote simultaneous variation in controls: for ϑ ∈ { f i , b i , i = 1 , 2 } ,
δ ϑ ( t ) : = δ 1 ϑ ( t ) + δ 2 ϑ ( t ) .
Lemma 4.
Let Assumptions 1, 2, 5 and 6 be in force. The first-order variation processes that satisfy the ’checked’ version of (22) are given by the following system of BSDEs:
{ d Y ˜ t i = j = 1 2 { y j b ^ i Y ˜ t j + E ( μ j b ^ i ) Y ˜ t j } + δ b i ( t ) 1 E ε ( t ) + j , k = 1 2 z j , k b ^ i Z ˜ t j , k d t + j = 1 2 Z ˜ t i , j d W t j , Y ˜ T i = 0 , i = 1 , 2 ,
The proof follows the same steps as the proof of Lemma 1.
By Lemma 4,
E [ h ˇ 0 ε − h ^ 0 ] = E [ ∑ j = 1 2 p 0 j Y ˜ 0 j ] + o ( ε ) ,
where p 0 j : = y j h ^ 0 + E ( ( μ j h ^ 0 ) .
Lemma 5 (Duality relation).
Let Assumptions 1, 2 and 6 hold. Let p · j be the solution to the SDE
{ d p t j = { y j H ^ t + E ( ( μ j H ^ t ) } d t k = 1 2 z j , k H ^ t d W t k , p 0 j = y j h ^ 0 + E ( ( μ j h ^ 0 ) ,
where for ( y i , μ i ) S , i = 1 , 2 , and ( u 1 , u 2 , z ) U 1 × U 2 × R d × ( 2 d 1 + 2 d 2 ) ,
H ( ω , t , y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 , z , p t 1 , p t 2 ) : = ∑ j = 1 2 { b j ( ω , t , y j , μ j , u j , y − j , μ − j , u − j , z ) p t j − f j ( t , y j , μ j , u j , y − j , μ − j , u − j ) } .
Then the following duality relation holds,
E [ j = 1 2 p 0 j Y ˜ 0 j ] = E 0 T j = 1 2 p t j δ b j ( t ) 1 E ε ( t ) + Y ˜ t j y j f ^ t + E ( ( μ j f ^ t ) d t .
The proof of Lemma 5 is almost identical to that of Lemma 2. By Lemmas 4 and 5,
J ( u ˇ · ε , 1 , u ˇ · ε , 2 ) − J ( u ^ · 1 , u ^ · 2 ) = E ∫ 0 T { − p t 1 δ b 1 ( t ) − p t 2 δ b 2 ( t ) + δ f ( t ) } 1 E ε ( t ) d t + o ( ε ) .
Thus
J ( u ˇ · ε , 1 , u ˇ · ε , 2 ) − J ( u ^ · 1 , u ^ · 2 ) = − E ∫ 0 T δ H ( t ) 1 E ε ( t ) d t + o ( ε ) .
In the following two theorems, Assumptions 1–3, 5 and 6 are in force.
Theorem 4 (Necessary optimality conditions).
Suppose that ( Y ^ · i , [ Z ^ · i , 1 , Z ^ · i , 2 ] , u ^ · i ) , i = 1 , 2 , is an optimal solution to the MFTC and that p · i , i = 1 , 2 , solve (59). Then, for i = 1 , 2 ,
( u ^ t 1 , u ^ t 2 ) = argmax ( v , w ) ∈ U 1 × U 2 H ( t , Y ^ t 1 , L ( Y ^ t 1 ) , v , Y ^ t 2 , L ( Y ^ t 2 ) , w , Z ^ t , p t 1 , p t 2 ) , a . e . t , a . s .
Theorem 5 (Sufficient optimality conditions).
Suppose ( u ^ · 1 , u ^ · 2 ) satisfy (64). Suppose furthermore that for ( t , p 1 , p 2 , z ) ∈ [ 0 , T ] × R d × R d × R d × ( 2 d 1 + 2 d 2 ) ,
( y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 ) ↦ H ( t , y 1 , μ 1 , u 1 , y 2 , μ 2 , u 2 , z , p 1 , p 2 )
is concave a.s. and
( y 1 , μ 1 , y 2 , μ 2 ) ↦ h ( y 1 , μ 1 , y 2 , μ 2 )
is convex a.s. Then ( u ^ · 1 , u ^ · 2 ) is an optimal control and ( Y ^ · i , [ Z ^ · i , 1 , Z ^ · i , 2 ] , u ^ · i ) , i = 1 , 2 solves the MFTC.

5. Example: The Linear-Quadratic Case

In this section we consider a linear-quadratic version of (9) and (10), in the one-dimensional case. Let a i , c i , j , q i , j , q ¯ i , j , q ˜ i , j , s ¯ i , j , s i , s ¯ i E , r i : [ 0 , T ] → R , i , j = 1 , 2 , be deterministic coefficient functions, uniformly bounded over [ 0 , T ] . Additionally, r i ( t ) ≥ ϵ > 0 for i = 1 , 2 . Define
b i ( t , Θ t i , Θ t − i , Z t ) = a i ( t ) u t i + ∑ j = 1 2 c i , j ( t ) W t j , f i ( t , Θ t i , Θ t − i ) = ∑ j = 1 2 { 1 2 q i , j ( t ) ( Y t j ) 2 + 1 2 q ¯ i , j ( t ) E [ Y t j ] 2 + q ˜ i , j ( t ) Y t j E [ Y t j ] + s ¯ i , j ( t ) E [ Y t j ] Y t − j } + s i ( t ) Y t i Y t − i + s ¯ i E ( t ) E [ Y t i ] E [ Y t − i ] + 1 2 r i ( t ) ( u t i ) 2 .
The uniform boundedness of the coefficients implies Assumptions 1–6, given initial costs h 1 , h 2 satisfying Assumptions 4 and 6. Assumption 3 (integrability of f i ) follows by classical BSDE estimates [38]. Recall players 1 and 2’s Hamiltonians, defined in (28) and (35). In the setup of this example,
H i ( t , Θ t i , Θ t − i , Z t ) = ( a 1 ( t ) u t 1 + ∑ j = 1 2 c 1 , j W t j ) p i , 1 + ( a 2 ( t ) u t 2 + ∑ j = 1 2 c 2 , j W t j ) p i , 2 − ∑ j = 1 2 { 1 2 q i , j ( t ) ( Y t j ) 2 + 1 2 q ¯ i , j ( t ) E [ Y t j ] 2 + q ˜ i , j ( t ) Y t j E [ Y t j ] + s ¯ i , j ( t ) E [ Y t j ] Y t − j } − s i ( t ) Y t i Y t − i − s ¯ i E ( t ) E [ Y t i ] E [ Y t − i ] − 1 2 r i ( t ) ( u t i ) 2 .
The Hessian of ( y 1 , … , u 2 ) ↦ H 1 ( t , y 1 , … , u 2 , z , p 1 , 1 , p 1 , 2 ) is
H 1 ( t ) : = −
[ q 1 , 1 ( t )   q ˜ 1 , 1 ( t )   0   s 1 ( t )   s ¯ 1 , 2 ( t )   0 ;
q ˜ 1 , 1 ( t )   q ¯ 1 , 1 ( t )   0   s ¯ 1 , 1 ( t )   s ¯ 1 E ( t )   0 ;
0   0   r 1 ( t )   0   0   0 ;
s 1 ( t )   s ¯ 1 , 1 ( t )   0   q 1 , 2 ( t )   q ˜ 1 , 2 ( t )   0 ;
s ¯ 1 , 2 ( t )   s ¯ 1 E ( t )   0   q ˜ 1 , 2 ( t )   q ¯ 1 , 2 ( t )   0 ;
0   0   0   0   0   0 ]
and the Hessian of ( y 1 , … , u 2 ) ↦ H 2 ( t , y 1 , … , u 2 , z , p 2 , 1 , p 2 , 2 ) is
H 2 ( t ) : = −
[ q 2 , 1 ( t )   q ˜ 2 , 1 ( t )   0   s 2 ( t )   s ¯ 2 , 2 ( t )   0 ;
q ˜ 2 , 1 ( t )   q ¯ 2 , 1 ( t )   0   s ¯ 2 , 1 ( t )   s ¯ 2 E ( t )   0 ;
0   0   0   0   0   0 ;
s 2 ( t )   s ¯ 2 , 1 ( t )   0   q 2 , 2 ( t )   q ˜ 2 , 2 ( t )   0 ;
s ¯ 2 , 2 ( t )   s ¯ 2 E ( t )   0   q ˜ 2 , 2 ( t )   q ¯ 2 , 2 ( t )   0 ;
0   0   0   0   0   r 2 ( t ) ] .
The coefficients are further assumed to be such that H 1 ( t ) and H 2 ( t ) are negative semi-definite for all t ∈ [ 0 , T ] . Also, we assume that ( y 1 , … , μ 2 ) ↦ h i ( y 1 , … , μ 2 ) , yet unspecified, is convex. Theorem 3 yields
u ^ t i = a i ( t ) r i − 1 ( t ) p t i , i ,
where p · i , i solves (27) or (34), depending on i. In fact, the equilibrium is unique in this case, since u ^ · i is the unique pointwise solution to (37) and p · i , i is unique; see (A25) and (A26). By Theorem 5,
u ^ t i = a i ( t ) r i − 1 ( t ) p t i ,
where p · i solves (59), is an optimal control for the linear-quadratic MFTC and it is unique.

5.1. MFTG

The equilibrium dynamics are
{ d Y ^ t i = ( a i 2 ( t ) r i − 1 ( t ) p t i , i + ∑ j = 1 2 c i , j W t j ) d t + Z ^ t i , 1 d W t 1 + Z ^ t i , 2 d W t 2 , Y ^ T i = y T i .
We see that only two costate processes, p · 1 , 1 and p · 2 , 2 , are relevant here. This is a consequence of the lack of explicit dependence on u − i in the b i and f i specified in (67). Nevertheless, the running cost f i depends implicitly on u − i through player − i ’s state and mean.
We make the following ansatz: there exist deterministic functions α i , α ¯ i , β i , β ¯ i , θ i : [ 0 , T ] → R , i = 1 , 2 , and γ i , j : [ 0 , T ] → R , i , j = 1 , 2 , such that
Y ^ t i = α i ( t ) p t i , i + α ¯ i ( t ) E [ p t i , i ] + β i ( t ) p t − i , − i + β ¯ i ( t ) E [ p t − i , − i ] + γ i , 1 ( t ) W t 1 + γ i , 2 ( t ) W t 2 + θ i ( t ) .
Clearly, we need to impose the terminal conditions
α i ( T ) = 0 , α ¯ i ( T ) = 0 , β i ( T ) = 0 , β ¯ i ( T ) = 0 , γ i , j ( T ) = 0 , θ i ( T ) = y T i .
Calculations presented in Appendix B identify the coefficients and yield the following system of ODEs determining α i ( · ) , … , θ i ( · ) :
{ α ˙ i ( t ) + α i ( t ) P i ( t ) + β i ( t ) R i ( t ) = a i 2 ( t ) r i 1 ( t ) , α ¯ ˙ i ( t ) + α i ( t ) P ¯ i ( t ) + α ¯ i ( t ) ( P i ( t ) + P ¯ i ( t ) ) + β i ( t ) R ¯ i ( t ) + β ¯ i ( t ) ( R i ( t ) + R ¯ i ( t ) ) = 0 , β ˙ i ( t ) + α i ( t ) R i ( t ) + β i ( t ) P i ( t ) = 0 , β ¯ ˙ i ( t ) + α i ( t ) R ¯ i ( t ) + α ¯ i ( t ) ( R i ( t ) + R ¯ i ( t ) ) + β i ( t ) P ¯ i ( t ) + β ¯ i ( t ) ( P i ( t ) + P ¯ i ( t ) ) = 0 , γ ˙ i , 1 ( t ) + α i ( t ) Φ i ( t ) + β i ( t ) Φ i ( t ) = c i , 1 ( t ) , γ ˙ i , 2 ( t ) + α i ( t ) Ψ i ( t ) + β i ( t ) Ψ i ( t ) = c i , 2 ( t ) , θ ˙ i ( t ) + θ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) ( Q i ( t ) + Q ¯ i ( t ) ) + ( β i ( t ) + β ¯ i ( t ) ) ( S i ( t ) + S ¯ i ( t ) ) + θ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) ( S i ( t ) + S ¯ i ( t ) ) + ( β i ( t ) + β ¯ i ( t ) ) ( Q i ( t ) + Q ¯ i ( t ) ) = 0 ,
where
P i ( t ) : = Q i ( t ) α i ( t ) + S i ( t ) β i ( t ) , P ¯ i ( t ) : = Q i ( t ) α ¯ i ( t ) + Q ¯ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) + S i ( t ) β ¯ i ( t ) + S ¯ i ( t ) ( β i ( t ) + β ¯ i ( t ) ) , R i ( t ) : = Q i ( t ) β i ( t ) + S i ( t ) α i ( t ) , R ¯ i ( t ) : = Q i ( t ) β ¯ i ( t ) + Q ¯ i ( t ) ( β i ( t ) + β ¯ i ( t ) ) + S i ( t ) α ¯ i ( t ) + S ¯ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) , Φ i ( t ) : = ( Q i ( t ) γ i , 1 ( t ) + S i ( t ) γ i , 1 ( t ) ) , Ψ i ( t ) : = ( Q i ( t ) γ i , 2 ( t ) + S i ( t ) γ i , 2 ( t ) ) , Q i ( t ) : = q i , i ( t ) + q ˜ i , i ( t ) , Q ¯ i ( t ) : = q ˜ i , i ( t ) + q ¯ i , i ( t ) , S i ( t ) : = s i ( t ) + s ¯ i , i ( t ) , S ¯ i ( t ) : = s ¯ i , i ( t ) + s ¯ i E ( t ) .
Now, (74)–(77) give us the equilibrium dynamics. In this fashion, it is possible to solve LQ problems more general than (67).
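The Riccati-type system above can be integrated backward from the terminal conditions (75). The following sketch treats the symmetric special case s i ≡ s ¯ i , j ≡ s ¯ i E ≡ 0 (so that S i = S ¯ i = 0 in (77)) with constant, hypothetical coefficients a, r, Q:

```python
# Backward Euler integration of the alpha/beta equations in (76), for one
# player, in the special case S_i = Sbar_i = 0 with constant, hypothetical
# coefficients a, r, Q. Then P_i = Q*alpha_i, R_i = Q*beta_i and
#   alpha' = a^2/r - Q*(alpha^2 + beta^2),  alpha(T) = 0,
#   beta'  = -2*Q*alpha*beta,               beta(T)  = 0,
# so beta vanishes identically and alpha solves a scalar Riccati equation
# (for a = r = Q = 1, the closed form is alpha(t) = tanh(t - T)).

def solve_alpha_beta(a=1.0, r=1.0, Q=1.0, T=1.0, n=10000):
    dt = T / n
    alpha, beta = 0.0, 0.0        # terminal conditions (75)
    for _ in range(n):            # step backward from t = T to t = 0
        d_alpha = a * a / r - Q * (alpha * alpha + beta * beta)
        d_beta = -2.0 * Q * alpha * beta
        alpha -= dt * d_alpha
        beta -= dt * d_beta
    return alpha, beta

alpha0, beta0 = solve_alpha_beta()
print(round(alpha0, 3), beta0)    # alpha(0) ≈ -tanh(1) ≈ -0.762, beta(0) = 0
```

With nonzero attraction coefficients, the same backward sweep applies to the full coupled system; only the right-hand sides change.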

5.2. MFTC

The optimally controlled dynamics are
{ d Y ^ t i = ( a i 2 ( t ) r i − 1 ( t ) p t i + ∑ j = 1 2 c i , j W t j ) d t + Z ^ t i , 1 d W t 1 + Z ^ t i , 2 d W t 2 , Y ^ T i = y T i .
We make almost the same ansatz as before: assume that there exist deterministic functions α i , α ¯ i , β i , β ¯ i , θ i : [ 0 , T ] → R , i = 1 , 2 , and γ i , j : [ 0 , T ] → R , i , j = 1 , 2 , with terminal conditions
α i ( T ) = 0 , α ¯ i ( T ) = 0 , β i ( T ) = 0 , β ¯ i ( T ) = 0 , γ i , j ( T ) = 0 , θ i ( T ) = y T i
such that
Y ^ t i = α i ( t ) p t i + α ¯ i ( t ) E [ p t i ] + β i ( t ) p t − i + β ¯ i ( t ) E [ p t − i ] + γ i , 1 ( t ) W t 1 + γ i , 2 ( t ) W t 2 + θ i ( t ) .
By redefining Q i , Q ¯ i , S i , S ¯ i in (77),
Q i ( t ) : = q 1 , i ( t ) + q 2 , i ( t ) + q ˜ 1 , i ( t ) + q ˜ 2 , i ( t ) , Q ¯ i ( t ) : = q ˜ 1 , i ( t ) + q ˜ 2 , i ( t ) + q ¯ 1 , i ( t ) + q ¯ 2 , i ( t ) , S i ( t ) : = s 1 ( t ) + s 2 ( t ) + s ¯ 1 , i ( t ) + s ¯ 2 , i ( t ) , S ¯ i ( t ) : = s ¯ 1 , i ( t ) + s ¯ 2 , i ( t ) + s ¯ 1 E ( t ) + s ¯ 2 E ( t ) ,
Equations (76), (77), (79) and (80) give us the optimally controlled state dynamics.

5.3. Simulation and the Price of Anarchy

Let T : = 1 , ξ : = ( y 0 1 , y 0 2 ) ∈ L F 0 2 ( Ω ; R d × R d ) be preferred initial positions for players 1 and 2, respectively, and
f t i : = 1 2 r i ( u t i ) 2 + ρ i ( Y t i − E [ Y t − i ] ) 2 , h i : = ν i 2 ( Y 0 i − y 0 i ) 2 .
In this setup, H 1 and H 2 are negative semi-definite if r i , ρ i > 0 , and h i is convex if ν i > 0 . In Figure 1, numerical simulations of the MFTG and the MFTC are presented. In (a), the two players have identical preferences but different terminal conditions. The situation is symmetric in the sense that we expect the realized paths of player 1, reflected through the line y = 0 , to be approximately paths of player 2. In (c), preferences are asymmetric and, as a consequence, the realized paths are not each other’s mirror images.
The central planner in the MFTC uses more information than a single player does. In fact, in our example, γ i , j ( t ) ≡ 0 for i ≠ j in the MFTG. The interpretation is that, in the game, player i does not care about player − i ’s noise, only its mean state. For the central planner, however, γ i , j is not identically zero for i ≠ j . This can be observed in (b), where the central planner makes the player states evolve under some common noise.
In (c) we see an interesting contrast between the MFTG and the MFTC. Player 1 (black) feels no attraction to player 2 ( ρ 1 = 0 ) while player 2 is attracted to the mean position of player 1 ( ρ 2 > 0 ). In the game, player 1 travels on the straight line from ( t , y ) = ( 0 , 1 ) to its terminal position ( t , y ) = ( 1 , − 2 ) . Player 2, on the other hand, deviates far from its preferred initial position at time t = 0 , only to be in the proximity of player 1. In the MFTC, the central planner makes player 1 linger around y = 0 for some time before turning south towards the terminal position. The result is less movement by player 2. Even though player 1 pays a higher individual cost, the social cost is reduced by approximately 33%. The social cost J is approximated by
J ( u 1 , u 2 ) ≈ 1 N ∑ i = 1 N j ( ω i ) ,
where j ( ω i ) = ∑ j = 1 2 ( ∫ 0 T f t j ( ω i ) d t + h j ( ω i ) ) . In (a) and (c), the outcomes of j (circles for the equilibrium control, stars for the optimal control) are presented along with the approximation of J (dashed lines) for N = 100 . The optimal control yields the lower social cost in both cases. This is expected; the general inefficiency of Nash equilibria in nonzero-sum games is well known [39]. The price of anarchy quantifies the inefficiency due to non-cooperation; see [34,40] for static games, [41] for differential games, and [42] for linear-quadratic mean-field type games. The price of anarchy in mean-field games has been studied recently in [43,44]. It is defined as the largest ratio of the social cost of an equilibrium (MFTG) to the optimal social cost (MFTC),
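The Monte-Carlo approximation of J can be sketched as follows, with the game dynamics replaced by an illustrative toy for which the answer is known in closed form: take running cost f t = Y t 2 with Y = W a standard Brownian motion, so that E ∫ 0 1 W t 2 d t = 1 / 2 exactly.

```python
# Monte-Carlo approximation of a social cost J = E[ \int_0^1 f_t dt ] by
# averaging pathwise costs j(omega_i) over N simulated paths. Toy choice
# (for verifiability, not the game dynamics): f_t = Y_t^2 with Y = W a
# standard Brownian motion, so that J = \int_0^1 t dt = 1/2 exactly.
import random

def pathwise_cost(rng, n_steps=100, T=1.0):
    """One Brownian path and its left-point Riemann-sum running cost."""
    dt = T / n_steps
    w, c = 0.0, 0.0
    for _ in range(n_steps):
        c += w * w * dt                 # accumulate \int W_t^2 dt
        w += rng.gauss(0.0, dt ** 0.5)  # Brownian increment
    return c

rng = random.Random(0)                  # fixed seed for reproducibility
N = 5000
J_hat = sum(pathwise_cost(rng) for _ in range(N)) / N
print(round(J_hat, 2))                  # close to 0.5
```

The same averaging applies verbatim when the paths come from the equilibrium or optimally controlled dynamics instead of plain Brownian motion.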
P o A : = sup ( u ^ · 1 , u ^ · 2 ) MFTG equilibrium J ( u ^ · 1 , u ^ · 2 ) / min u · i ∈ U i , i = 1 , 2 J ( u · 1 , u · 2 ) .
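A classical closed-form illustration of this ratio (Pigou's congestion example, not the MFTG above): one unit of traffic splits between a link of constant cost 1 and a link whose cost equals its load x. Selfish routing sends all traffic to the congestible link (social cost 1), while the social optimum splits the traffic evenly (social cost 3/4), giving a price of anarchy of 4/3.

```python
# Pigou's congestion example: load x on the congestible link gives social
# cost C(x) = x*x + (1 - x)*1. The Nash equilibrium is x = 1 (the variable
# link never costs more than 1); the social optimum minimizes C over [0, 1].

def social_cost(x):
    return x * x + (1.0 - x) * 1.0

grid = [k / 10000 for k in range(10001)]     # crude grid search
x_opt = min(grid, key=social_cost)

cost_nash = social_cost(1.0)                 # = 1
cost_opt = social_cost(x_opt)                # = 3/4 at x = 1/2
poa = cost_nash / cost_opt
print(x_opt, round(poa, 4))                  # 0.5 1.3333
```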
Taking the parameter set of (a) as a point of reference (see Table 1), we vary one parameter at a time and study P o A . The results are presented in Figure 2. In the intervals studied, P o A is increasing in ρ i and T, and decreasing in ν i and r i . The reason is that the players become less flexible when ν i and/or r i are increased, so the improvement a central planner can achieve decreases. On the other hand, an increased time horizon gives the central planner more time to improve the social cost. Also, an increased preference for attraction rewards the unselfish behavior in the MFTC model.

6. Conclusions and Discussion

Mean-field type games with backward stochastic dynamics, where the coefficients are allowed to depend on the marginal distributions of the player states, have been defined in this paper. Under regularity assumptions, necessary conditions for a Nash equilibrium were derived in the form of a stochastic maximum principle. Additional convexity assumptions yielded sufficient conditions. In linear-quadratic examples, player behavior in the MFTG was compared to the centrally planned solution of the MFTC. The inefficiency of the MFTG Nash equilibrium, quantified by the price of anarchy, and its dependence on problem parameters were studied numerically.
The framework presented in this paper has many possible extensions, towards both theory and applications. The theory for martingale-driven BSDEs is now standard, and one could exchange W · 1 , W · 2 throughout this paper for two martingales M · 1 , M · 2 , possibly jump processes, and approach the game with the theory of forward-backward SDEs. Indeed, the topic of games between mean-field FBSDEs seems yet unexplored. These kinds of problems would have immediate applications in finance.
With our definition of U i , we have restricted ourselves to open-loop adapted controls in this paper. Other information structures, such as perfect or partial state- and/or law-feedback controls, and lagged or noise-perturbed controls, are possible. Furthermore, both players have perfect information about each other in this paper. Taking inspiration from, for example, [45,46], the access to information could be restricted, so that the players have only partial information on states/laws. These types of extensions are interesting from both the theoretical and the applied point of view. Depending on the application, the information structure of the problem will naturally change.
Exploring conditions for the MFTG to be a potential game, or an S-modular game, could open the door for this framework to be used in applications such as interference management and resource allocation [47,48,49].

Acknowledgments

Financial support from the Swedish Research Council (2016-04086) is gratefully acknowledged. The author would like to thank Boualem Djehiche and Salah Choutri for fruitful discussions and useful suggestions, and the anonymous reviewers, whose remarks helped to substantially improve this work.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BSDE: Backward stochastic differential equation
FBSDE: Forward-backward stochastic differential equation
LQ: Linear-quadratic
MFTC: Mean-field type control problem
MFTG: Mean-field type game
ODE: Ordinary differential equation
PoA: Price of Anarchy
SDE: Stochastic differential equation

Appendix A. Differentiation and Approximation of Measure-Valued Functions

Derivatives of measure-valued functions will be defined with the lifting technique, outlined for example in [14,15,50]. Consider a function f : P 2 ( R d ) → R . We assume that our probability space is rich enough, so that for every μ ∈ P 2 ( R d ) there exists a square-integrable random variable X whose distribution is μ, i.e., μ = L ( X ) . For example, ( [ 0 , 1 ] , B ( [ 0 , 1 ] ) , d x ) has this property. Then we may write f ( μ ) = : F ( X ) and we can differentiate F in Fréchet-sense whenever there exists a continuous linear functional D F [ X ] : L 2 ( F ; R d ) → R such that
F ( X + Y ) − F ( X ) = E [ D F [ X ] Y ] + o ( ∥ Y ∥ 2 ) = : D Y f ( μ ) + o ( ∥ Y ∥ 2 ) ,
where ∥ Y ∥ 2 2 : = E [ | Y | 2 ] . D Y f ( μ ) is the Fréchet derivative of f at μ, in the direction Y, and we have that
D Y f ( μ ) = E [ D F [ X ] Y ] = lim t → 0 E [ F ( X + t Y ) − F ( X ) ] / t , Y ∈ L 2 ( F ; R d ) , μ = L ( X ) .
By Riesz’ Representation Theorem, D F [ X ] is unique and it is known [14] that there exists a Borel function h [ μ ] : R d → R d , independent of the version of X, such that D F [ X ] = h [ μ ] ( X ) . Therefore, with μ ′ = L ( X ′ ) for some random variable X ′ , (A1) can be written as
f ( μ ′ ) − f ( μ ) = E [ ⟨ h [ μ ] ( X ) , X ′ − X ⟩ ] + o ( ∥ X ′ − X ∥ 2 ) , X ′ ∈ L 2 ( F ; R d ) .
We denote ∂ μ f ( μ ; x ) : = h [ μ ] ( x ) , x ∈ R d , ∂ μ f ( L ( X ) ; X ) = : ∂ μ f ( L ( X ) ) , and we have the identity
D F [ X ] = h [ L ( X ) ] ( X ) = ∂ μ f ( L ( X ) ) .
Example A1.
If f ( μ ) = ( ∫ R d x d μ ( x ) ) 2 then
lim t → 0 ( E [ X + t Y ] 2 − E [ X ] 2 ) / t = E [ 2 E [ X ] Y ] ,
and ∂ μ f ( μ ) = 2 ∫ R d x d μ ( x ) .
Example A2.
If f ( μ ) = ∫ R d x d μ ( x ) then ∂ μ f ( μ ) = 1 .
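Example A1 can be checked numerically through the lifting: represent μ by a finite sample X, lift f to F ( X ) = ( E [ X ] ) 2 with empirical means standing in for expectations, and compare the difference quotient with the claimed derivative, i.e., E [ 2 E [ X ] Y ] . The sample values below are hypothetical.

```python
# Numerical check of Example A1 via the lifting f(mu) = F(X) = (E[X])^2,
# with empirical means over a fixed sample standing in for expectations.
# The difference quotient in direction Y should match E[2*E[X]*Y].

X = [0.3, -1.2, 2.0, 0.7, -0.5]   # sample representing mu (hypothetical)
Y = [1.0, 0.5, -0.4, 0.2, 0.9]    # direction of differentiation

def F(sample):
    m = sum(sample) / len(sample)
    return m * m

t = 1e-7
diff_quot = (F([x + t * y for x, y in zip(X, Y)]) - F(X)) / t

EX = sum(X) / len(X)
analytic = sum(2.0 * EX * y for y in Y) / len(Y)   # E[2 E[X] Y]

assert abs(diff_quot - analytic) < 1e-5
print(round(diff_quot, 4), round(analytic, 4))     # both 0.2288
```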
The Taylor approximation of a measure-valued function is given by (A3), and we will write
f ( L ( X ′ ) ) − f ( L ( X ) ) = E [ ∂ μ f ( L ( X ) ) ( X ′ − X ) ] + o ( ∥ X ′ − X ∥ 2 ) .
Assume now that f takes another argument, ξ. Then
f ( ξ , L ( X ′ ) ) − f ( ξ , L ( X ) ) = E [ ∂ μ f ( ξ ˜ , L ( X ) ; X ) ( X ′ − X ) ] + o ( ∥ X ′ − X ∥ 2 ) ,
where the expectation is not taken over the tilded variable. Note that P X is deterministic. In situations where the expected value is taken only over the directional argument of ∂ μ f , we will write
E [ ∂ μ f ( ξ ˜ , L ( X ) ; X ) ( X ′ − X ) ] = : E ( ∂ μ f ( ξ , L ( X ) ) ) ( X ′ − X ) .
The expected value in (A7) is a random quantity because of ξ ˜ . Taking another expected value, and changing the order of integration, leads to
E [ E ˜ [ ∂ μ f ( ξ ˜ , L ( X ) ; X ) ] ( X ′ − X ) ] ,
where the tilded expectation is taken only over the tilded variable. The notation for this will be
E ˜ [ ∂ μ f ( ξ ˜ , L ( X ) ; X ) ] = : E ( ( ∂ μ f ( ξ , L ( X ) ) ) .

Appendix B. Proofs

Proof of Lemma 1.
Let
b ˜ t i : = j = 1 2 { y j b ^ t i Y ˜ t j + E ( μ j b ^ t i ) Y ˜ t j } + j , k = 1 2 z j , k b ^ t i Z ˜ t i , j ,
then Y ˜ t i = ∫ t T [ b ˜ s i + δ 1 b i ( s ) 1 E ε ( s ) ] d s − ∑ j = 1 2 ∫ t T Z ˜ s i , j d W s j . An application of Itô’s formula to | Y ˜ t 1 | 2 + | Y ˜ t 2 | 2 yields
i = 1 2 | Y ˜ t i | 2 + t T i , j = 1 2 Z ˜ s i , j F 2 d s = t T 2 i = 1 2 Y ˜ s i , b ˜ s i + δ 1 b i ( s ) 1 E ε ( s ) d s + i , j = 1 2 t T Y ˜ s i , Z ˜ s i , j d W s j .
Let D denote the largest bound for all the derivatives of b 1 and b 2 present. By Jensen’s and Young’s inequalities,
2 ∑ i = 1 2 ⟨ Y ˜ s i , b ˜ s i ⟩ ≤ ∑ i = 1 2 { ( 6 D + 16 D 2 ) | Y ˜ s i | 2 + 2 D E [ | Y ˜ s i | 2 ] } + 1 2 ∑ i , j = 1 2 ∥ Z ˜ s i , j ∥ F 2 .
The stochastic integrals in (A12) are local martingales and vanish under an expectation [36]. Therefore, with K 0 : = 8 D + 16 D 2 ,
E [ ∑ i = 1 2 | Y ˜ t i | 2 + 1 2 ∑ i , j = 1 2 ∫ t T ∥ Z ˜ s i , j ∥ F 2 d s ] ≤ K 0 ∫ t T E [ ∑ i = 1 2 | Y ˜ s i | 2 ] d s + 2 ∫ t T E [ ∑ i = 1 2 ⟨ Y ˜ s i , δ 1 b i ( s ) 1 E ε ( s ) ⟩ ] d s .
Let τ [ 0 , T ] , then
sup ( T − τ ) ≤ t ≤ T K 0 ∫ t T E [ ∑ i = 1 2 | Y ˜ s i | 2 ] d s ≤ K 0 δ sup ( T − τ ) ≤ t ≤ T E [ ∑ i = 1 2 | Y ˜ t i | 2 ] ,
and by Hölder’s and Young’s inequalities,
sup ( T − τ ) ≤ t ≤ T ∫ t T E [ ∑ i = 1 2 ⟨ Y ˜ s i , δ 1 b i ( s ) 1 E ε ( s ) ⟩ ] d s ≤ sup ( T − τ ) ≤ t ≤ T ∫ t T ∑ i = 1 2 ( E [ | Y ˜ s i | 2 ] ) 1 / 2 ( E [ | δ 1 b i ( s ) 1 E ε ( s ) | 2 ] ) 1 / 2 d s ≤ ∑ i = 1 2 { sup ( T − τ ) ≤ t ≤ T ( E [ | Y ˜ t i | 2 ] ) 1 / 2 } ∫ T − τ T ( E [ | δ 1 b i ( s ) | 2 ] ) 1 / 2 1 E ε ( s ) d s ≤ ∑ i = 1 2 [ δ 2 { sup ( T − τ ) ≤ t ≤ T E [ | Y ˜ t i | 2 ] } + 1 2 δ ( ∫ T − δ T ( E [ | δ 1 b i ( s ) | 2 ] ) 1 / 2 1 E ε ( s ) d s ) 2 ] .
By Assumption 5 and the definition of U 1 , we have for some K 1 > 0 ,
1 2 δ ( ∫ T − δ T ( E [ | δ 1 b i ( s ) | 2 ] ) 1 / 2 1 E ε ( s ) d s ) 2 ≤ K 1 ε 2 .
Plugging (A15) and (A16) into (A14) yields
sup ( T − δ ) ≤ t ≤ T E [ ( 1 − ( K 0 + 1 ) δ ) ∑ i = 1 2 | Y ˜ t i | 2 + 1 2 ∑ i , j = 1 2 ∫ t T ∥ Z ˜ s i , j ∥ F 2 d s ] ≤ K 1 ε 2 .
For δ < ( K 0 + 1 ) 1 , we conclude that
sup ( T − δ ) ≤ t ≤ T E [ ∑ i = 1 2 | Y ˜ t i | 2 + ∑ i , j = 1 2 ∫ t T ∥ Z ˜ s i , j ∥ F 2 d s ] ≤ K 2 ε 2 ,
where K 2 > 0 depends on δ, the bound D, the Lipschitz coefficient of b i and the integration bound in the definition of U 1 . The steps above can be repeated for the intervals [ T 2 δ , T δ ] , [ T 3 δ , T 2 δ ] , etc. until 0 is reached. After a finite number of iterations, we have
sup 0 ≤ t ≤ T E [ ∑ i = 1 2 | Y ˜ t i | 2 + ∑ i , j = 1 2 ∫ t T ∥ Z ˜ s i , j ∥ F 2 d s ] ≤ K 3 ε 2 ,
where K 3 depends on K 2 and T. This is the first estimate in (22). The second estimate follows from similar calculations.
Proof of Lemma 2.
Integration by parts yields
E j = 1 2 Y ˜ 0 j p 0 1 , j = E 0 T j = 1 2 Y ˜ t j d p t 1 , j + p t 1 , j d Y ˜ t j + d Y ˜ t j , p 1 , j t d t .
Assume that d p t 1 , j = β t j d t + σ t j , 1 d W t 1 + σ t j , 2 d W t 2 , then
i = 1 2 Y ˜ t i d p t 1 , i + p t 1 , i d Y ˜ t i + d p 1 , i , Y ˜ i t = i = 1 2 [ Y ˜ t i β t i d t + σ t i , 1 d W t 1 + σ t i , 2 d W t 2 + p t 1 , i j = 1 2 { y j b ^ t i Y ˜ t j + E ( μ j b ^ t i ) Y ˜ t j + k = 1 2 z j , k b ^ t i Z ˜ t j , k } + δ 1 b i ( t ) 1 E ε ( t ) + σ t i , 1 Z ˜ t i , 1 + σ t i , 2 Z ˜ t i , 2 ] d t + ( ) d W t 1 + ( ) d W t 2 .
Hence, the lemma is equivalent to that, under expectations, we have
E [ 0 T { Y ˜ t 1 β t 1 + Y ˜ t 2 β t 2 + Y ˜ t 1 { p t 1 , 1 y 1 b ^ t 1 + p t 1 , 2 y 1 b ^ t 2 + E ( ( μ 1 b ^ t 1 ) p t 1 , 1 + E ( ( μ 1 b ^ t 2 ) p t 1 , 2 } + Y ˜ t 2 { p t 1 , 1 y 2 b ^ t 1 + p t 1 , 2 y 2 b ^ t 2 + E ( ( μ 2 b ^ t 1 ) p t 1 , 1 + E ( ( μ 2 b ^ t 2 ) p t 1 , 2 } + ( p t 1 , 1 z 1 , 1 b ^ t 1 + p t 1 , 2 z 1 , 1 b ^ t 2 + σ t 1 , 1 ) Z ˜ t 1 , 1 + ( p t 1 , 1 z 1 , 2 b ^ t 1 + p t 1 , 2 z 1 , 2 b ^ t 2 + σ t 1 , 2 ) Z ˜ t 1 , 2 + ( p t 1 , 1 z 2 , 1 b ^ t 1 + p t 1 , 2 z 2 , 1 b ^ t 2 + σ t 2 , 1 ) Z ˜ t 2 , 1 + ( p t 1 , 1 z 2 , 2 b ^ t 1 + p t 1 , 2 z 2 , 2 b ^ t 2 + σ t 2 , 2 ) Z ˜ t 2 , 2 + ( p t 1 , 1 δ 1 b 1 ( t ) + p t 1 , 2 δ 1 b 2 ( t ) 1 E ε ( t ) } d t ] = E 0 T i = 1 2 Y ˜ t i { y i f ^ t 1 + E ( ( μ i f ^ t 1 ) } p t 1 , i δ 1 b i ( t ) 1 E ε ( t ) d t .
We match coefficients and get
β t j = p t 1 , 1 y j b ^ t 1 + p t 1 , 2 y j b ^ t 2 + E ( ( μ i b ^ t 1 ) p t 1 , 1 + E ( ( μ i b ^ t 2 ) p t 1 , 2 + y j b ^ t 1 + E ( ( μ j b ^ t 1 ) = { y j H ^ t 1 + E ( ( μ j H ^ t 1 } , σ t j , k = p t 1 , 1 z j , k b ^ t 1 + p t 1 , 2 z j , k b ^ t 2 .
Linear-Quadratic MFTG–Derivation of ODE System
Under the ansatz, the adjoint equation is
d p t i , i = { ( q i , i ( t ) + q ˜ i , i ( t ) ) Y ^ t i + ( q ˜ i , i ( t ) + q ¯ i , i ( t ) ) E [ Y ^ t i ] ( s i , 1 ( t ) + s i , 2 ( t ) s ¯ i , i ( t ) ) Y ^ t i + ( s ¯ i , i ( t ) + s ¯ i , 1 E + s ¯ i , 2 E ( t ) ) E [ Y ^ t i ] } d t = : { Q i ( t ) Y ^ t i + Q ¯ i ( t ) E [ Y ^ t i ] + S i ( t ) Y ^ t i + S ¯ i ( t ) E [ Y ^ t i ] } d t = { Q i ( t ) ( α i ( t ) p t i , i + α ¯ i ( t ) E [ p t i , i ] + β i ( t ) p t i , i + β ¯ i ( t ) E [ p t i , i ] + γ i , 1 ( t ) W t 1 + γ i , 2 W t 2 + θ i ( t ) ) + Q ¯ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) E [ p t i , i ] + ( β i ( t ) + β ¯ i ( t ) ) E [ p t i , i ] + θ i ( t ) + S i ( t ) ( α i ( t ) p t i , i + α ¯ i ( t ) E [ p t i , i ] + β i ( t ) p t i , i + β ¯ i ( t ) E [ p t i , i ] + γ i , 1 W t 1 + γ i , 2 W t 2 + θ i ( t ) ) + S ¯ i ( t ) ( ( α i ( t ) + α ¯ i ( t ) ) E [ p t i , i ] + ( β i ( t ) + β ¯ i ( t ) ) E [ p t i , i ] + θ i ( t ) ) } d t = { p t i , i ( Q i ( t ) α i ( t ) + S i ( t ) β i ( t ) ) + E [ p t i , i ] ( Q i ( t ) α ¯ i ( t ) + Q ¯ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) + S i ( t ) β ¯ i ( t ) + S ¯ i ( t ) ( β i ( t ) + β ¯ i ( t ) ) ) + p t i , i ( Q i ( t ) β i ( t ) + S i ( t ) α i ( t ) ) + E [ p t i , i ] ( Q i ( t ) β ¯ i ( t ) + Q ¯ i ( t ) ( β i ( t ) + β ¯ i ( t ) ) + S i ( t ) α ¯ i ( t ) + S ¯ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) ) + W t 1 ( Q i ( t ) γ i , 1 ( t ) + S i ( t ) γ i , 2 ( t ) ) + W t 2 ( Q i ( t ) γ i , 2 + S i ( t ) γ i , 2 ) + θ i ( t ) ( Q i ( t ) + Q ¯ i ( t ) ) + θ i ( t ) ( S i ( t ) + S ¯ i ( t ) ) } d t = : { p t i , i P i ( t ) + E [ p t i , i ] P ¯ i ( t ) + p t i , i R i ( t ) + E [ p t i , i ] R ¯ i ( t ) + W t 1 Φ i ( t ) + W t 2 Ψ i ( t ) + θ i ( t ) ( Q i ( t ) + Q ¯ i ( t ) ) + θ i ( t ) ( S i ( t ) + S ¯ i ( t ) ) } d t ,
and the expected value of p · i , i solves
d ( E [ p t i , i ] ) = { E [ p t i , i ] ( P i ( t ) + P ¯ i ( t ) ) + E [ p t i , i ] ( R i ( t ) + R ¯ i ( t ) ) + θ i ( t ) ( Q i ( t ) + Q ¯ i ( t ) ) + θ i ( t ) ( S i ( t ) + S ¯ i ( t ) ) } d t .
The initial conditions p 0 i , i , E [ p 0 i , i ] , p 0 i , i , E [ p 0 i , i ] are given by a system of linear equations, which is derived in the same way as (A25) and (A26). Applying Ito’s formula to the ansatz, and using (A25) and (A26), we get
d Y ^ t i = ( α ˙ i ( t ) p t i , i + α ¯ ˙ i ( t ) E [ p t i , i ] + β ˙ i ( t ) p t i , i + β ¯ ˙ i ( t ) E [ p t i , i ] + γ ˙ i , 1 ( t ) W t 1 + γ ˙ i , 2 ( t ) W t 2 + θ ˙ i ( t ) ) d t + α i ( t ) d p t i , i + α ¯ i ( t ) d ( E [ p t i , i ] ) + β i ( t ) d p t i , i + β ¯ i ( t ) d ( E [ p t i , i ] ) + γ i , 1 ( t ) d W t 1 + γ i , 2 ( t ) d W t 2 = { p t i , i ( α ˙ i ( t ) + α i ( t ) P i ( t ) + β i ( t ) R i ( t ) ) + E [ p t i , i ] ( α ¯ ˙ i ( t ) + α i ( t ) P ¯ i ( t ) + α ¯ i ( t ) ( P i ( t ) + P ¯ i ( t ) ) + β i ( t ) R ¯ i ( t ) + β ¯ i ( t ) ( R i ( t ) + R ¯ i ( t ) ) ) + p t i , i ( β ˙ i ( t ) + α i ( t ) R i ( t ) + β i ( t ) P i ( t ) ) + E [ p t i , i ] ( β ˙ i ( t ) + α i ( t ) R ¯ i ( t ) + α ¯ i ( t ) ( R i ( t ) + R ¯ i ( t ) ) + β i ( t ) P ¯ i ( t ) + β ¯ i ( t ) ( P i ( t ) + P ¯ i ( t ) ) ) + W t 1 ( γ ˙ i , 1 + α i ( t ) Φ i ( t ) + β i ( t ) Φ i ( t ) ) + W t 2 ( γ ˙ i , 2 + α i ( t ) Ψ i ( t ) + β i ( t ) Ψ i ( t ) ) + ( θ ˙ i ( t ) + θ i ( t ) ( α i ( t ) + α ¯ i ( t ) ) ( Q i ( t ) + Q ¯ i ( t ) ) + ( β i ( t ) + β ¯ i ( t ) ) ( S i ( t ) + S ¯ i ( t ) ) + θ i ( α i ( t ) + α ¯ i ( t ) ) ( S i ( t ) + S ¯ i ( t ) ) + ( β i ( t ) + β ¯ i ( t ) ) ( Q i ( t ) + Q ¯ i ( t ) ) ) } d t + γ i , 1 ( t ) d W t 1 + γ i , 2 d W t 2 .
We can now match these dynamics with the true state dynamics and we get the system of ODEs (76) and γ i , j ( t ) = Z ^ t i , j .

References

  1. Djehiche, B.; Tcheukam, A.; Tembine, H. Mean-Field-Type Games in Engineering. AIMS Electron. Electr. Eng. 2017, 1, 18–73. [Google Scholar] [CrossRef]
  2. Andersson, D.; Djehiche, B. A maximum principle for SDEs of mean-field type. Appl. Math. Optim. 2011, 63, 341–356. [Google Scholar] [CrossRef]
  3. Yong, J.; Zhou, X.Y. Stochastic Controls: Hamiltonian Systems and HJB Equations; Springer Science & Business Media: Berlin, Germany, 1999; Volume 43. [Google Scholar]
  4. Kohlmann, M.; Zhou, X.Y. Relationship between backward stochastic differential equations and stochastic controls: A linear-quadratic approach. SIAM J. Control Optim. 2000, 38, 1392–1407. [Google Scholar] [CrossRef]
  5. Pardoux, É.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61. [Google Scholar] [CrossRef]
  6. Peng, S. Probabilistic interpretation for systems of quasilinear parabolic partial differential equations. Stoch. Stoch. Rep. 1991, 37, 61–74. [Google Scholar] [CrossRef]
  7. Duffie, D.; Epstein, L.G. Stochastic differential utility. Econom. J. Econom. Soc. 1992, 60, 353–394. [Google Scholar] [CrossRef]
  8. El Karoui, N.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
  9. Buckdahn, R.; Djehiche, B.; Li, J.; Peng, S. Mean-field backward stochastic differential equations: A limit approach. Ann. Probab. 2009, 37, 1524–1565. [Google Scholar] [CrossRef]
  10. Buckdahn, R.; Li, J.; Peng, S. Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Process. Appl. 2009, 119, 3133–3154. [Google Scholar] [CrossRef]
  11. Moon, J.; Duncan, T.E.; Basar, T. Risk-Sensitive Zero-Sum Differential Games. IEEE Trans. Autom. Control 2018, in press. [Google Scholar] [CrossRef]
  12. Bensoussan, A.; Frehse, J.; Yam, P. Mean Field Games and Mean Field Type Control Theory; Springer: Berlin, Germany, 2013; Volume 101. [Google Scholar]
  13. Djehiche, B.; Tembine, H.; Tempone, R. A stochastic maximum principle for risk-sensitive mean-field type control. IEEE Trans. Autom. Control 2015, 60, 2640–2649. [Google Scholar] [CrossRef]
  14. Buckdahn, R.; Li, J.; Ma, J. A Stochastic Maximum Principle for General Mean-Field Systems. Appl. Math. Optim. 2016, 74, 507–534. [Google Scholar] [CrossRef]
  15. Carmona, R.; Delarue, F. Probabilistic Theory of Mean Field Games with Applications I–II; Springer: Berlin, Germany, 2018. [Google Scholar]
  16. Tembine, H. Mean-field-type games. AIMS Math. 2017, 2, 706–735. [Google Scholar] [CrossRef]
  17. Lacker, D. Limit Theory for Controlled McKean–Vlasov Dynamics. SIAM J. Control Optim. 2017, 55, 1641–1672. [Google Scholar] [CrossRef]
  18. Huang, M.; Malhamé, R.P.; Caines, P.E. Large population stochastic dynamic games: Closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 2006, 6, 221–252. [Google Scholar]
  19. Lasry, J.M.; Lions, P.L. Mean field games. Jpn. J. Math. 2007, 2, 229–260. [Google Scholar] [CrossRef]
  20. Li, T.; Zhang, J.F. Asymptotically optimal decentralized control for large population stochastic multiagent systems. IEEE Trans. Autom. Control 2008, 53, 1643–1660. [Google Scholar] [CrossRef]
  21. Tembine, H.; Zhu, Q.; Başar, T. Risk-sensitive mean-field games. IEEE Trans. Autom. Control 2014, 59, 835–850. [Google Scholar] [CrossRef]
  22. Moon, J.; Basar, T. Linear Quadratic Risk-Sensitive and Robust Mean Field Games. IEEE Trans. Automat. Contr. 2017, 62, 1062–1077. [Google Scholar] [CrossRef]
  23. Peng, S. Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim. 1993, 27, 125–144. [Google Scholar] [CrossRef]
  24. Dokuchaev, N.; Zhou, X.Y. Stochastic controls with terminal contingent conditions. J. Math. Anal. Appl. 1999, 238, 143–165. [Google Scholar] [CrossRef]
  25. Lim, A.E.; Zhou, X.Y. Linear-quadratic control of backward stochastic differential equations. SIAM J. Control Optim. 2001, 40, 450–474. [Google Scholar] [CrossRef]
  26. Wu, Z. A general maximum principle for optimal control of forward–backward stochastic systems. Automatica 2013, 49, 1473–1480. [Google Scholar] [CrossRef]
  27. Yong, J. Forward-backward stochastic differential equations with mixed initial-terminal conditions. Trans. Am. Math. Soc. 2010, 362, 1047–1096. [Google Scholar] [CrossRef]
  28. Li, X.; Sun, J.; Xiong, J. Linear Quadratic Optimal Control Problems for Mean-Field Backward Stochastic Differential Equations. Appl. Math. Optim. 2016, 1–28. [Google Scholar] [CrossRef]
  29. Tang, M.; Meng, Q. Linear-Quadratic Optimal Control Problems for Mean-Field Backward Stochastic Differential Equations with Jumps. arXiv, 2016; arXiv:1611.06434. [Google Scholar]
  30. Li, J.; Min, H. Controlled mean-field backward stochastic differential equations with jumps involving the value function. J. Syst. Sci. Complex. 2016, 29, 1238–1268. [Google Scholar] [CrossRef]
  31. Aurell, A.; Djehiche, B. Modeling tagged pedestrian motion: A mean-field type control approach. arXiv, 2018; arXiv:1801.08777. [Google Scholar]
  32. Aurell, A.; Djehiche, B. Mean-field type modeling of nonlocal crowd aversion in pedestrian crowd dynamics. SIAM J. Control Optim. 2018, 56, 434–455. [Google Scholar] [CrossRef]
  33. Chen, Z.; Epstein, L. Ambiguity, risk, and asset returns in continuous time. Econometrica 2002, 70, 1403–1443. [Google Scholar] [CrossRef]
  34. Koutsoupias, E.; Papadimitriou, C. Worst-case equilibria. In Annual Symposium on Theoretical Aspects of Computer Science; Springer: Berlin, Germany, 1999; pp. 404–413. [Google Scholar]
  35. Papadimitriou, C. Algorithms, games, and the internet. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, Crete, Greece, 6–8 July 2001; pp. 749–753. [Google Scholar]
  36. Pardoux, É. BSDEs, weak convergence and homogenization of semilinear PDEs. In Nonlinear Analysis, Differential Equations and Control; Springer: Berlin, Germany, 1999; pp. 503–549. [Google Scholar]
  37. Basar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999; Volume 23. [Google Scholar]
  38. Zhang, J. Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory; Springer: Berlin, Germany, 2017; Volume 86. [Google Scholar]
  39. Dubey, P. Inefficiency of Nash equilibria. Math. Oper. Res. 1986, 11, 1–8. [Google Scholar] [CrossRef]
  40. Koutsoupias, E.; Papadimitriou, C. Worst-case equilibria. Comput. Sci. Rev. 2009, 3, 65–69. [Google Scholar] [CrossRef]
  41. Başar, T.; Zhu, Q. Prices of anarchy, information, and cooperation in differential games. Dyn. Games Appl. 2011, 1, 50–73. [Google Scholar] [CrossRef]
  42. Duncan, T.E.; Tembine, H. Linear–Quadratic Mean-Field-Type Games: A Direct Method. Games 2018, 9, 7. [Google Scholar] [CrossRef]
  43. Carmona, R.; Graves, C.V.; Tan, Z. Price of anarchy for mean field games. arXiv, 2018; arXiv:1802.04644. [Google Scholar]
  44. Cardaliaguet, P.; Rainer, C. On the (in) efficiency of MFG equilibria. arXiv, 2018; arXiv:1802.06637. [Google Scholar]
  45. Ma, H.; Liu, B. Maximum principle for partially observed risk-sensitive optimal control problems of mean-field type. Eur. J. Control 2016, 32, 16–23. [Google Scholar] [CrossRef]
  46. Ma, H.; Liu, B. Linear-quadratic optimal control problem for partially observed forward-backward stochastic differential equations of mean-field type. Asian J. Control 2016, 18, 2146–2157. [Google Scholar] [CrossRef]
  47. Semasinghe, P.; Maghsudi, S.; Hossain, E. Game theoretic mechanisms for resource management in massive wireless IoT systems. IEEE Commun. Mag. 2017, 55, 121–127. [Google Scholar] [CrossRef]
  48. Tsiropoulou, E.E.; Vamvakas, P.; Papavassiliou, S. Joint customized price and power control for energy-efficient multi-service wireless networks via S-modular theory. IEEE Trans. Green Commun. Netw. 2017, 1, 17–28. [Google Scholar] [CrossRef]
  49. Katsinis, G.; Tsiropoulou, E.E.; Papavassiliou, S. Multicell Interference Management in Device to Device Underlay Cellular Networks. Future Internet 2017, 9, 44. [Google Scholar] [CrossRef]
  50. Cardaliaguet, P. Notes on Mean Field Games. Technical Report. 2010. Available online: https://www.researchgate.net/publication/228702832 (accessed on 3 September 2018).
Figure 1. Numerical examples: (a) symmetric preference, (b) single path sample, (c) asymmetric attraction and initial position. Circles indicate the preferred initial positions.
Figure 2. Numerical approximations (N = 5000) of the price of anarchy PoA.
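The price of anarchy plotted in Figure 2 is the ratio of the expected social cost under the Nash equilibrium to the expected social cost under the central planner. A minimal Monte Carlo sketch of such an estimate is given below; the cost samples here are placeholders, whereas in the paper each sample comes from simulating the mean-field BSDE dynamics under the two behaviors.

```python
import numpy as np

def estimate_poa(nash_costs, optimal_costs):
    """Monte Carlo estimate of the price of anarchy:
    PoA = E[social cost at Nash] / E[social cost under the central planner]."""
    return np.mean(nash_costs) / np.mean(optimal_costs)

rng = np.random.default_rng(0)
N = 5000  # number of Monte Carlo samples, as in Figure 2

# Placeholder cost samples for illustration only: the "optimal" costs are
# noisy around 1, and the Nash costs add a nonnegative inefficiency term,
# so the estimated PoA is at least 1 by construction.
optimal = 1.0 + 0.1 * rng.standard_normal(N)
nash = optimal + np.abs(0.2 * rng.standard_normal(N))

poa = estimate_poa(nash, optimal)
assert poa >= 1.0  # a Nash equilibrium cannot beat the social optimum
```

Since the social cost is the sum of the two players' cost functionals, each sample would in practice be obtained by evaluating that sum along a simulated state trajectory.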
Table 1. Parameter values in the symmetric case (a).

y_T^1   a^1   c_11   c_12   r^1   ρ^1   ν^1   y_0^1
−2      1     0.3    0      1     1     1     N(0, 0.1)

y_T^2   a^2   c_21   c_22   r^2   ρ^2   ν^2   y_0^2
2       1     0      0.3    1     1     1     N(0, 0.1)
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Games EISSN 2073-4336, published by MDPI AG, Basel, Switzerland.