Article

Deep Learning and Mean-Field Games: A Stochastic Optimal Control Perspective

Luca Di Persio and Matteo Garbelli
1 Department of Computer Science, University of Verona, 37134 Verona, Italy
2 Department of Mathematics, University of Trento, 38123 Trento, Italy
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(1), 14; https://doi.org/10.3390/sym13010014
Submission received: 18 November 2020 / Revised: 9 December 2020 / Accepted: 17 December 2020 / Published: 23 December 2020
(This article belongs to the Special Issue Advances in Stochastic Differential Equations)

Abstract

We provide a rigorous mathematical formulation of Deep Learning (DL) methodologies through an in-depth analysis of the learning procedures characterizing Neural Network (NN) models within the theoretical frameworks of Stochastic Optimal Control (SOC) and Mean-Field Games (MFGs). In particular, we show how the supervised learning approach can be translated into a (stochastic) mean-field optimal control problem by applying the Hamilton–Jacobi–Bellman (HJB) approach and the mean-field Pontryagin maximum principle. Our contribution sheds new light on a possible theoretical connection between mean-field problems and DL, merging heterogeneous approaches and reviewing the state of the art within such fields to show how these different perspectives can indeed be fruitfully unified.

1. Introduction

Controlled stochastic processes, which naturally arise in a plethora of heterogeneous fields spanning, e.g., from mathematical finance to industry, can be analysed in the setting of continuous-time stochastic control theory. In particular, when we have to analyse complex dynamics produced by the mutual interaction of a large set of indistinguishable players, an efficient approach to infer knowledge about the resulting behaviour, typical, for example, of a neuronal ensemble, is provided by Mean-Field Game (MFG) methods, as described in [1]. MFG theory generalizes classical models of interacting particle systems characterizing statistical mechanics. Intuitively, each particle is replaced by a rational agent whose dynamics are represented by a Stochastic Differential Equation (SDE). The term mean-field refers to the highly symmetric form of interaction: the dynamics and the objective of each agent depend on an empirical measure capturing the global behaviour of the population. The solution of an MFG is analogous to a Nash equilibrium for a non-cooperative game [2]. The key idea is that the population limit can be effectively approximated by statistical features of the system corresponding to the behaviour of a typical group of agents, in a Wasserstein space sense [3]. On the other hand, Deep Learning (DL) is frequently used in several Machine Learning (ML) based applications, spanning from image classification and speech recognition to predictive maintenance and clustering. Therefore, it has become essential to provide a sound mathematical formulation of DL and to analyse both the setting and the associated algorithms [4,5]. Commonly, Neural Networks (NNs) are trained through the Stochastic Gradient Descent (SGD) method, which updates the trainable parameters using gradient information computed on randomly sampled data via the back-propagation algorithm, with the disadvantage of being slow in the first steps of training. An alternative consists of expressing the learning procedure of an NN as a dynamical system (see [6]), which can then be analysed as an optimal control problem [7].
The present paper is structured as follows. In Section 2, we introduce the fundamentals about the Wasserstein space, Stochastic Optimal Control (SOC) and MFGs. In Section 3, the link between NNs and MFGs is deeply analysed in order to reflect the probabilistic nature of the learning process. Conclusions and prospects are outlined in Section 4.

2. Problem Formulation and Preliminaries

2.1. Wasserstein Metrics

Many of the main results contained in this paper will be stated in terms of convergence in distribution of random variables, vectors, processes and measures. We refer to [8,9,10] concerning the basics of the theory of weak convergence of probability measures in metric spaces equipped with the natural Borel $\sigma$-algebra; see also [11,12] and the references therein for further details. According to [8] (pp. 7, 8), let us recall the following two definitions, useful to highlight the strict connection between measure theory and probability. Given a probability measure $\mu \in \mathcal{P}(\mathcal{X})$ over the metric space $\mathcal{X}$ and a sequence $(\mu_n) \subset \mathcal{P}(\mathcal{X})$ with $n \in \mathbb{N}$, we say that $\mu_n$ converges weakly to $\mu$, written $\mu_n \rightharpoonup \mu$, if:
$\lim_{n \to \infty} \int_{\mathcal{X}} f \, d\mu_n = \int_{\mathcal{X}} f \, d\mu, \quad \forall f \in C_b(\mathcal{X}).$
Given a sequence of $\mathcal{X}$-valued random variables $\{X_n\}_{n \geq 1}$, we say that $X_n$ converges weakly (or in distribution) to an $\mathcal{X}$-valued random variable $X$ if:
$\lim_{n \to \infty} \mathbb{E}[f(X_n)] = \mathbb{E}[f(X)], \quad \forall f \in C_b(\mathcal{X}),$
denoting this convergence by $X_n \Rightarrow X$. Focusing on the convergence of empirical measures, let $(X_i)$ be a sequence of independent and identically distributed (i.i.d.) $\mathcal{X}$-valued random variables and define the $\mathcal{P}(\mathcal{X})$-valued random variable:
$\mu_n = \frac{1}{n} \sum_{i=1}^{n} \delta_{X_i};$
then we have a random probability measure, usually called the empirical measure. According to [8] (pp. 12–16), $\mathcal{P}(\mathcal{X})$ is endowed with a metric (compatible with the notion of weak convergence) in order to consider $\mathcal{P}(\mathcal{X})$ as a metric space itself. Let us recall the Wasserstein metric, defined on $\mathcal{P}(\mathcal{X})$, based on the idea of coupling. In particular, given $\mu, \nu \in \mathcal{P}(\mathcal{X})$, $\Pi(\mu, \nu)$ represents the set of Borel probability measures $\pi$ on $\mathcal{X} \times \mathcal{X}$ with first, resp. second, marginal $\mu$, resp. $\nu$; namely, $\pi(A \times \mathcal{X}) = \mu(A)$ and $\pi(\mathcal{X} \times A) = \nu(A)$ for every Borel set $A \subset \mathcal{X}$. Then, we define $\mathcal{P}_p(\mathcal{X})$, for $p \geq 1$, as the set of probability measures $\mu \in \mathcal{P}(\mathcal{X})$ satisfying:
$\int_{\mathcal{X}} d(x, x_0)^p \, \mu(dx) < \infty,$
where $x_0 \in \mathcal{X}$ is an arbitrary reference point. Consequently, the $p$-Wasserstein metric on $\mathcal{P}_p(\mathcal{X})$ is defined as:
$W_p(\mu, \nu) = \left( \inf_{\pi \in \Pi(\mu, \nu)} \int_{\mathcal{X} \times \mathcal{X}} d(x, y)^p \, \pi(dx, dy) \right)^{1/p} = \inf_{X \sim \mu, \, Y \sim \nu} \left( \mathbb{E}[d(X, Y)^p] \right)^{1/p},$
where the infimum in the second expression is taken over all pairs of $\mathcal{X}$-valued random variables $X, Y$ with given marginals $\mu$ and $\nu$, respectively.
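For one-dimensional empirical measures with the same number of atoms, the $p$-Wasserstein distance reduces to matching sorted samples, which gives a quick numerical illustration of the definition above. The following is a minimal NumPy sketch; the function name and the sample data are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def empirical_wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two 1D empirical measures
    with the same number of atoms: the optimal coupling sorts both samples."""
    x_sorted, y_sorted = np.sort(x), np.sort(y)
    return (np.mean(np.abs(x_sorted - y_sorted) ** p)) ** (1.0 / p)

rng = np.random.default_rng(0)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=1000)   # atoms of mu_n
nu_samples = rng.normal(loc=0.5, scale=1.0, size=1000)   # atoms of nu_n
print(empirical_wasserstein_1d(mu_samples, nu_samples, p=2))
```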

2.2. Stochastic Optimal Control Problem

Following [13] (see also [14] and the references therein), let $\{\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P}\}$ be a filtered probability space, with filtration $\mathbb{F} = \{\mathcal{F}_t, t \in [0, T]\}$, $T > 0$, supporting:
  • a controlled state variable $(X_t^{\alpha})_{t \in [0, T]}$, whose initial values are given by an i.i.d. sequence of $\mathbb{R}^n$-valued $\mathcal{F}_0$-measurable random variables;
  • a sequence $\{W_t^i\}_{i \geq 1}$ of independent and $\mathcal{F}_t$-adapted Brownian motions.
A control is enforced on the state variable through $\alpha = (\alpha_t)_{t \in [0, T]}$, an element of $\mathcal{A}$, the set of $\mathbb{F}$-progressively measurable control processes taking values in $A$, a closed subset of $\mathbb{R}^d$. The objective function $J$ is then defined as:
$J(\alpha, X) = \mathbb{E}\left[ \int_0^T f(\alpha_s, X_s) \, ds + g(X_T) \right],$
where $f : A \times \mathbb{R}^n \to \mathbb{R}$ denotes the running objective function, $g : \mathbb{R}^n \to \mathbb{R}$ denotes the terminal objective function, and the expectation is taken with respect to (w.r.t.) the probability measure $\mathbb{P}$.
The goal corresponds to identifying the control $\alpha$ solving the maximization problem:
$\max_{\alpha \in \mathcal{A}} J(\alpha, X),$
subject to the $n$-dimensional controlled stochastic process $X_s$ satisfying:
$dX_s = b(\alpha_s, X_s) \, ds + \sigma(\alpha_s, X_s) \, dW_s, \quad X_t = x, \quad s \in [t, T].$
The drift function $b : A \times \mathbb{R}^n \to \mathbb{R}^n$ and the volatility function $\sigma : A \times \mathbb{R}^n \to \mathbb{R}^{n \times d}$ are measurable, and they satisfy a uniform Lipschitz condition in $x$, i.e., there exists $K > 0$ such that, for all $x, y \in \mathbb{R}^n$ and $\alpha \in A$, it holds that:
$|b(x, \alpha) - b(y, \alpha)| + |\sigma(x, \alpha) - \sigma(y, \alpha)| \leq K |x - y|.$
The previous assumptions guarantee that the SDE (3) has a unique solution, denoted by $(X_s^{t,x})_{s \in [t, T]}$. Therefore, the objective function can be explicitly expressed in terms of $x$ and $t$, namely:
$J(\alpha, x, t) = \mathbb{E}\left[ \int_t^T f(\alpha_s, X_s^{t,x}) \, ds + g(X_T^{t,x}) \right],$
with $(t, x) \in [0, T] \times \mathbb{R}^n$, and where the process $\alpha \in \mathcal{A}$ takes values $\alpha_s \in A$. Let $v$ be the value function:
$v(t, x) = \sup_{\alpha \in \mathcal{A}} J(\alpha, x, t);$
then the corresponding optimal control $\hat{\alpha} : [0, T] \times \mathbb{R}^n \to A$ is defined by:
$J(\hat{\alpha}, x, t) = v(t, x), \quad \forall (t, x) \in [0, T] \times \mathbb{R}^n,$
and the solution can be found by exploiting two different and interconnected approaches, respectively based on the Hamilton–Jacobi–Bellman (HJB) equation and on the stochastic Pontryagin Maximum Principle (PMP).
The first one moves from the Dynamic Programming Principle (DPP), leading to the nonlinear second order Partial Differential Equation (PDE) known as the HJB equation (see [3]), namely:
$\frac{\partial v}{\partial t}(t, x) + \sup_{a \in A} \left[ \mathcal{L}^a v(t, x) + f(x, a) \right] = 0, \qquad v(T, x) = g(x),$
which holds $\forall (t, x) \in [0, T) \times \mathbb{R}^n$, where $\mathcal{L}^a$ is the second order operator called the infinitesimal generator of the controlled diffusion:
$\mathcal{L}^a v = b(x, a) \cdot \nabla_x v(x) + \frac{1}{2} \mathrm{Tr}\left( \sigma(x, a) \sigma(x, a)^T \nabla_x^2 v(x) \right).$
It is possible to define the Hamiltonian for the SOC problem [15,16] as $H : \mathbb{R}^d \times \mathbb{R}^d \times \mathcal{S}^d \to \mathbb{R} \cup \{+\infty\}$, written as follows:
$H(x, y, z) = \sup_{a \in A} \left[ b(x, a) \cdot y + \frac{1}{2} \mathrm{Tr}\left[ \sigma \sigma^T(x, a) \, z \right] + f(x, a) \right],$
where $\mathcal{S}^d$ denotes the set of symmetric $d \times d$ matrices and the variables $y$ and $z$ are called the adjoint variables. The previously defined Hamiltonian allows encapsulating the nonlinearity of the HJB equation, which can be rearranged as:
$\partial_t v(t, x) + H\left( x, \nabla_x v(t, x), \nabla_x^2 v(t, x) \right) = 0.$
On the other hand, the stochastic PMP leads to a system of coupled Forward-Backward SDEs (FBSDEs) plus an external optimality condition in terms of the Hamiltonian function; see, e.g., [2,17] and the references therein. We define the local Hamiltonian as:
$H_t(x, u, p, q) = f_t(x, u) + b_t(x, u) \, p + \sigma_t(x) \, q,$
where $(p, q)$ are the adjoint variables.
By assuming that $f_t(x, u)$, $b_t(x, u)$ and $\sigma_t(x)$ are progressively measurable, bounded, $C^1$ and Lipschitz in $(x, u)$, we have that if a control is optimal, then (necessary condition for optimality) it maximizes the Hamiltonian:
$H_\tau(\hat{x}_\tau, \hat{\nu}_\tau, \hat{P}_\tau, \hat{Q}_\tau) = \max_{\nu} H_\tau(\hat{x}_\tau, \nu, \hat{P}_\tau, \hat{Q}_\tau).$
Moreover, by requiring that $x \mapsto g(x)$ and $x \mapsto \hat{H}_\tau(x, \hat{P}_\tau, \hat{Q}_\tau) := \sup_{\nu} H_\tau(x, \nu, \hat{P}_\tau, \hat{Q}_\tau)$ are both concave functions, the PMP becomes also a sufficient condition characterizing the optimal control. In addition, by the envelope theorem, the derivative of the maximized Hamiltonian coincides with the derivative of $H$ evaluated at the optimal control $\hat{\nu}$:
$\partial_x H_\tau(\hat{x}_\tau, \hat{\nu}_\tau, \hat{P}_\tau, \hat{Q}_\tau) = \partial_x \hat{H}_\tau(\hat{x}_\tau, \hat{P}_\tau, \hat{Q}_\tau).$
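As a purely illustrative complement to the formulation above, the objective $J(\alpha, x, t)$ under the controlled dynamics (3) can be approximated numerically by an Euler–Maruyama discretization combined with Monte Carlo averaging for a fixed feedback control. All coefficients, the feedback rule and the horizon in the sketch below are assumptions made only for the example, not quantities from the paper:

```python
import numpy as np

def estimate_objective(b, sigma, f, g, alpha, x0, t0=0.0, T=1.0,
                       n_steps=200, n_paths=10_000, seed=0):
    """Monte Carlo estimate of J(alpha, x, t) = E[ int_t^T f(alpha_s, X_s) ds + g(X_T) ]
    for dX = b(alpha, X) ds + sigma(alpha, X) dW, using Euler-Maruyama steps."""
    rng = np.random.default_rng(seed)
    dt = (T - t0) / n_steps
    x = np.full(n_paths, x0, dtype=float)
    running = np.zeros(n_paths)
    for k in range(n_steps):
        s = t0 + k * dt
        a = alpha(s, x)                      # feedback control alpha_s = alpha(s, X_s)
        running += f(a, x) * dt              # accumulate the running objective
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + b(a, x) * dt + sigma(a, x) * dw
    return np.mean(running + g(x))

# Toy coefficients (illustrative only): mean-reverting drift, constant volatility.
J_hat = estimate_objective(
    b=lambda a, x: a - x, sigma=lambda a, x: 0.2 * np.ones_like(x),
    f=lambda a, x: -(x**2 + 0.1 * a**2), g=lambda x: -x**2,
    alpha=lambda s, x: 0.5 * x, x0=1.0)
print(J_hat)
```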

2.3. Mean-Field Games

In what follows, according to [1,3,5,18], we will exploit the theory of SOC to analyse Mean-Field Games (MFGs), gaining insights into symmetric stochastic differential games in which the number of players tends to infinity. In particular, consider $N$ players $i = 1, \ldots, N$, each of which controls a private state $X^i$ whose dynamics read as follows:
$dX_t^i = \frac{1}{N} \sum_{j=1}^{N} b(t, X_t^i, X_t^j, \alpha_t^i) \, dt + \sigma \, dW_t^i,$
$\sigma \in \mathbb{R}$ being a (common) constant multiplying the random noise component $dW_t^i$, steered by the $i$-th copy of a standard Brownian motion.
Each control $\alpha_t^i$ represents the strategy played by the $i$th player, while the $X_t^j$ model the influence of the states of the other players. The drift in Equation (7) can be reformulated as:
$\frac{1}{N} \sum_{j=1}^{N} b(t, X_t^i, X_t^j, \alpha_t^i) = \int b(t, X_t^i, x, \alpha_t^i) \, \bar{\mu}_t^N(dx),$
where the empirical distribution $\bar{\mu}_t^N$ of the private states is defined by:
$\bar{\mu}_t^N = \frac{1}{N} \sum_{j=1}^{N} \delta_{X_t^j}.$
Hence, the same form of the dynamics is obtained for each player, since each player depends only on the global behaviour of the entire system and not on the individual states of the others:
$dX_t^i = b(t, X_t^i, \bar{\mu}_t^N, \alpha_t^i) \, dt + \sigma \, dW_t^i.$
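The symmetric structure of these dynamics is easy to simulate: each particle interacts with the others only through the empirical measure, here summarized by the empirical mean. The following sketch propagates $N$ such interacting particles with a fixed feedback control; the drift, horizon and parameters are illustrative assumptions of ours, not taken from the paper:

```python
import numpy as np

def simulate_mean_field_particles(n_particles=1000, T=1.0, n_steps=200,
                                  sigma=0.3, seed=0):
    """Euler-Maruyama simulation of dX^i = b(X^i, mean(X)) dt + sigma dW^i,
    where the interaction enters only through the empirical measure (its mean)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.normal(0.0, 1.0, size=n_particles)        # initial private states
    for _ in range(n_steps):
        m = x.mean()                                   # statistic of the empirical measure
        alpha = -0.5 * (x - m)                         # toy feedback control
        drift = alpha + 0.2 * (m - x)                  # toy mean-field drift b
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

states = simulate_mean_field_particles()
print(states.mean(), states.std())
```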
Each player minimizes a cost composed of a running term and a terminal one. Therefore, the objective function $J^i$ of the $i$th player is defined by:
$J^i(\alpha^i) = \mathbb{E}\left[ \int_0^T f(t, X_t^i, \bar{\mu}_t^N, \alpha_t^i) \, dt + g(X_T^i, \bar{\mu}_T^N) \right],$
the goal being to select $\epsilon$-Nash equilibria, the vector $(\alpha^1, \ldots, \alpha^N)$ being an $\epsilon$-Nash equilibrium if, for every $i = 1, \ldots, N$ and every admissible strategy $\beta^i$:
$J^i(\alpha^1, \ldots, \alpha^N) \leq \epsilon + J^i(\alpha^1, \ldots, \beta^i, \ldots, \alpha^N),$
meaning that if the $i$th player unilaterally changes his/her behaviour, while the other players keep their strategies fixed, he/she cannot improve the cost $J^i$ by more than $\epsilon$. Let us note that, if $\epsilon = 0$, we recover the standard notion of Nash equilibrium.
In this scenario, the solution of an MFG (see [3]) corresponds to a couple $(\hat{\alpha}, \mu)$, where $\mu_t = \mathcal{L}(X_t^{\hat{\alpha}})$ models the optimal flow of measures and $\hat{\alpha}_t = \phi(t, X_t)$ the optimal feedback strategy.
Due to the symmetry of the system, all the agents play the same strategy profile:
$\phi^1(t, \cdot) = \cdots = \phi^N(t, \cdot) = \phi(t, \cdot);$
hence, by fixing the flow of probability measures $(\mu_t)_{t \in [0, T]}$, the optimal feedback $\hat{\phi}$ solves the SOC problem parametrized by the choice of the family $(\mu_t)$:
$\hat{\phi} = \operatorname*{argmin}_{\phi} \; \mathbb{E}\left[ \int_0^T f(t, X_t, \mu_t, \phi(t, X_t)) \, dt + g(X_T, \mu_T) \right],$
subject to:
$dX_t = b(t, X_t, \mu_t, \phi(t, X_t)) \, dt + \sigma \, dW_t.$
The solution of the latter optimization problem returns the best response of a representative player to the flow of measures $(\mu_t)$, in a scenario where no player profits from changing his/her own strategy.
Concerning the choice of the flow of probability measures $(\mu_t)_{t \in [0, T]}$ (see Equation (1)), if $N \to \infty$, the asymptotic independence implied by the law of large numbers ensures that the empirical measure $\bar{\mu}_t^N$ tends to the statistical distribution of $X_t$, leading to a fixed point equation, that is $\mathcal{L}(X^{\hat{\alpha}, \mu}) = \mu$. Once the optimal feedback $\hat{\phi}$ has been found for a given choice of the parameter $(\mu_t)_{0 \leq t \leq T}$, it is necessary to check that the chosen flow can indeed be recovered as the statistical distribution of the optimally controlled states, i.e., $\mathcal{L}(X_t^{\hat{\alpha}}) = \mu_t$ must hold for every $t$.
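This consistency requirement suggests a natural numerical strategy: iterate between (i) computing a best response for a frozen flow of measures and (ii) updating the flow from the resulting state distribution, until a fixed point is reached. The skeleton below only illustrates this structure; the best-response and simulation routines, and the toy usage, are placeholders of ours standing in for problem-specific solvers, not a method prescribed by the paper:

```python
import numpy as np

def mfg_fixed_point(best_response, simulate_states, mu0_flow,
                    n_iter=50, damping=0.5, tol=1e-6):
    """Generic damped fixed-point iteration for an MFG: freeze the measure flow,
    compute a best response, re-simulate the state distribution, then mix."""
    mu_flow = mu0_flow                       # e.g., array of per-time summary statistics
    for _ in range(n_iter):
        phi = best_response(mu_flow)         # solve the SOC problem for the frozen flow
        new_flow = simulate_states(phi)      # law of the optimally controlled states
        if np.max(np.abs(new_flow - mu_flow)) < tol:
            break
        mu_flow = damping * new_flow + (1.0 - damping) * mu_flow
    return phi, mu_flow

# Toy usage: the "flow" is a single summary statistic nudged by a linear map.
phi, flow = mfg_fixed_point(
    best_response=lambda flow: 0.5 * flow,          # placeholder feedback summary
    simulate_states=lambda phi: 0.8 * phi + 0.1,    # placeholder state statistics
    mu0_flow=np.array([1.0]))
print(flow)
```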
By freezing the family $\mu = (\mu_t)$ of probability measures, the Hamiltonian becomes:
$H^{\mu_t}(t, x, y, \alpha) = y \, b(t, x, \mu_t, \alpha) + f(t, x, \mu_t, \alpha).$
Let us assume that there exists a regular function:
$\hat{\alpha}(t, x, y) : [0, T] \times \mathbb{R} \times \mathbb{R} \to \operatorname*{argmin}_{\alpha \in A} H^{\mu_t}(t, x, y, \alpha);$
then we denote the infimum over the controls $\alpha$ of $H^{\mu_t}(t, x, y, \alpha)$ by:
$H^{\mu_t}(t, x, y) := \inf_{\alpha \in A} H^{\mu_t}(t, x, y, \alpha),$
ending up with a stochastic control problem that can be solved (see, e.g., [3] (pp. 7–9)) by applying one of the following two methods:
  • the PDE approach, through the HJB Equation (11) and the Kolmogorov equation;
  • the Backward Stochastic Differential Equation (BSDE) approach, based on the PMP.
We introduce the HJB value function following [3] (p. 7), i.e.:
$v(t, x) = \inf_{\alpha \in \mathcal{A}_t} \mathbb{E}\left[ \int_t^T f(s, X_s, \mu_s, \alpha_s) \, ds + g(X_T, \mu_T) \,\Big|\, X_t = x \right],$
where $\mathcal{A}_t$ denotes the set of admissible controls over the interval $[t, T]$. It is expected that $v$ is the solution, in the viscosity sense, of the HJB equation; see, e.g., [19,20] and the references therein. Hence:
$\partial_t v + \frac{\sigma^2}{2} \partial_{xx}^2 v + H^{\mu_t}(t, x, \partial_x v(t, x)) = 0, \quad (t, x) \in [0, T] \times \mathbb{R},$
with terminal condition $v(T, x) = g(x, \mu_T)$, $x \in \mathbb{R}$. Denoting by $\hat{\phi}$ the optimal feedback and introducing the notation $\beta(t, x) = b(t, x, \mu_t, \hat{\phi}(t, x))$, the flow $(\nu_t)_{0 \leq t \leq T}$ of statistical distributions of the optimally controlled state, $\nu_t = \mathcal{L}(X_t)$, satisfies Kolmogorov's forward equation:
$\partial_t \nu - \frac{\sigma^2}{2} \partial_{xx}^2 \nu + \partial_x\big( \beta(t, x) \, \nu \big) = 0, \quad (t, x) \in [0, T] \times \mathbb{R},$
with initial condition $\nu_0 = \mu_0$. Such a PDE holds in the sense of distributions, since $\nu_t$ represents a density and the derivatives involved must be understood in the distributional sense. Setting $\nu_t = \mathcal{L}(X_t) = \mu_t$, we end up with a system of coupled non-linear forward-backward PDEs (11) and (12), constituting the so-called MFG-PDE system. On the other hand, we can approximate the solution of the aforementioned stochastic control problem via the stochastic Pontryagin principle. In particular, for each open-loop adapted control $\alpha = (\alpha_t)_{0 \leq t \leq T}$, we denote by $X^{\alpha} = (X_t^{\alpha})_{0 \leq t \leq T}$ the associated state, and we introduce the adjoint equation:
$dY_t = -\partial_x H^{\mu_t}(t, X_t, Y_t, \alpha_t) \, dt + Z_t \, dW_t, \quad t \in [0, T], \qquad Y_T = \partial_x g(X_T, \mu_T).$
$H^{\mu_t}$ corresponds to the Hamiltonian defined in Equation (9), and the solution $(Y_t, Z_t)_{0 \leq t \leq T}$ of the BSDE is called the pair of adjoint processes.
The PMP necessary condition states that whenever $X^{\alpha}$ is an optimal state, it must hold that $H^{\mu_t}(t, X_t, Y_t, \alpha_t) = H^{\mu_t}(t, X_t, Y_t)$, $\forall t \in [0, T]$. Moreover, if the Hamiltonian $H^{\mu_t}$ is convex w.r.t. the variables $(x, \alpha)$ and the terminal cost $g$ is convex w.r.t. the variable $x$, then the system given by Equations (13) and (8) characterizes the optimal states of the MFG problem.

3. Main Result

In this section, we generalize the approach proposed in [5], providing an application of Mean-Field (MF) optimal control to Deep Learning (DL). In particular, we start by considering the learning process characterizing supervised learning as a population risk minimization problem, hence accounting for its probabilistic nature: the optimal control parameters, corresponding to the trainable weights of the associated Neural Network (NN) model, depend on the population distribution of input-target pairs, which constitutes the source of randomness.

3.1. Neural Network as a Dynamical System

In order to study DL as an optimal control problem, it is necessary to express the NN learning process as a dynamical system [6,21]. In its simplest form, the feed-forward propagation in a $T$-layer network, $T \geq 1$, can be expressed by the following difference equation:
$x_{t+1} = x_t + f(x_t, \theta_t), \quad t = 0, \ldots, T - 1,$
where $x_0$ is the input, e.g., an image, several time-series, etc., while $x_T$ is the final output, to be compared to some target $y_T$ by means of a given loss function. By moving from a discrete time formulation to a continuous one, the forward dynamics we are interested in will be described by a differential equation that takes the role of (14). The learning aim is to tune the trainable parameters $\theta_0, \ldots, \theta_{T-1}$ so as to have $x_T$ as close as possible to $y_T$, according to a specified metric and knowing that the target $y_T$ is joined to the input $x_0$ by means of a probability measure $\mu_0$.
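Equation (14) is exactly the update performed by a residual (skip-connection) block, so the layer index plays the role of discrete time. A minimal NumPy sketch of this reading follows; the layer width, activation and random weights are illustrative assumptions:

```python
import numpy as np

def forward(x0, thetas):
    """Discrete dynamical system x_{t+1} = x_t + f(x_t, theta_t),
    with f a one-layer residual map: f(x, (W, b)) = tanh(W x + b)."""
    x = x0
    for W, b in thetas:                 # one (W, b) pair per layer t = 0..T-1
        x = x + np.tanh(W @ x + b)      # residual update
    return x

rng = np.random.default_rng(0)
d, T = 8, 5                             # state dimension and network depth
thetas = [(0.1 * rng.standard_normal((d, d)), np.zeros(d)) for _ in range(T)]
x_T = forward(rng.standard_normal(d), thetas)
print(x_T.shape)
```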
Following the dynamical systems approach developed in [6], the supervised learning method aims at approximating some function, usually called the oracle, denoted by $F : \mathcal{X} \to \mathcal{Y}$.
As stated before, the set $\mathcal{X} \subset \mathbb{R}^d$ contains the $d$-dimensional arrays of inputs, e.g., images, financial time-series, sound recordings, texts, etc., while $\mathcal{Y}$ contains the targets modelling the corresponding images, numerical forecasts, or predicted texts.
In this setting, it is standard to define what is called a hypothesis space as:
$\mathcal{H} = \{ F_{\theta} : \mathcal{X} \to \mathcal{Y} \mid \theta \in \Theta \}.$
Training moves from a collection of $K$ samples of input-target pairs $\{ x^i, y^i = F(x^i) \}_{i=1}^{K}$, the goal being to approximate $F$ by exploiting these training data points.
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space supporting random variables $x_0 \in \mathbb{R}^d$ and $y_T \in \mathbb{R}^l$, jointly distributed according to $\mu_0 := \mathbb{P}_{(x_0, y_T)}$, with $\mu_0$ modelling the distribution of the input-target pairs. The admissible training weights are control processes $\theta$ taking values in $\Theta \subset \mathbb{R}^m$ and assumed to be essentially bounded measurable functions, i.e., $\theta \in L^{\infty}([0, T], \Theta)$. The network depth, i.e., the number of layers, is denoted by $T > 0$. We also introduce the following functions:
  • the feed-forward dynamics $f : \mathbb{R}^d \times \Theta \to \mathbb{R}^d$;
  • the terminal loss function $\Phi : \mathbb{R}^d \times \mathbb{R}^l \to \mathbb{R}$;
  • the regularization term $L : \mathbb{R}^d \times \Theta \to \mathbb{R}$.
State dynamics are described by an Ordinary Differential Equation (ODE) of the form:
$\dot{x}_t = f(x_t, \theta_t),$
representing the continuous version of Equation (14), equipped with the initial condition $x_0$, which is a random variable responsible for the randomness characterizing Equation (15).
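For completeness, discretizing Equation (15) with an explicit Euler step of size $\Delta t$ makes the correspondence with the residual update (14) explicit (a standard observation, stated here for the reader's convenience):

$x_{t + \Delta t} \approx x_t + \Delta t \, f(x_t, \theta_t), \qquad \Delta t = 1 \;\Rightarrow\; x_{t+1} = x_t + f(x_t, \theta_t).$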
The population risk minimization problem in DL can then be expressed by the following MF-optimal control problem (see [5] (p. 5)):
$\inf_{\theta \in L^{\infty}([0, T], \Theta)} J(\theta) := \mathbb{E}_{\mu_0}\left[ \Phi(x_T, y_T) + \int_0^T L(x_t, \theta_t) \, dt \right],$
subject to the dynamics expressed by the stochastic ODE (15). Since the weights $\theta$ are shared across the whole distribution $\mu_0$ of the random input-target pairs $(x_0, y_T)$, Equation (16) can be studied as an MF-optimal control problem.
On the other hand, the empirical risk minimization problem can be expressed as a sampled optimal control problem after drawing i.i.d. samples $\{x_0^i, y_T^i\}_{i=1}^{N}$ from $\mu_0 = \mathbb{P}_{(x_0, y_T)}$:
$\inf_{\theta \in L^{\infty}([0, T], \Theta)} J_N(\theta) := \frac{1}{N} \sum_{i=1}^{N} \left[ \Phi(x_T^i, y_T^i) + \int_0^T L(x_t^i, \theta_t) \, dt \right],$
subject to the dynamics:
$\dot{x}_t^i = f(x_t^i, \theta_t), \quad i = 1, \ldots, N,$
whose solutions, propagating random initial conditions through a deterministic flow, are themselves random variables.
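A short sketch of how the sampled objective $J_N$ in (17) can be evaluated: integrate the ODE for every sample with a shared control path and average terminal plus running costs. All modelling choices below (dynamics, losses, regularizer weight) are illustrative assumptions of ours:

```python
import numpy as np

def sampled_objective(thetas, x0, y_target, dt=0.05, reg=1e-3):
    """J_N(theta): sample average of terminal loss Phi plus running cost,
    with shared weights theta_t driving x_dot = f(x, theta) = tanh(theta * x)."""
    x = x0.copy()                                   # shape (N,), one state per sample
    running = 0.0
    for theta_t in thetas:                          # Euler steps sharing the same control
        running += reg * theta_t**2 * dt            # running regularization cost
        x = x + dt * np.tanh(theta_t * x)           # forward dynamics f(x, theta_t)
    terminal = 0.5 * np.mean((x - y_target) ** 2)   # Phi averaged over the N samples
    return terminal + running

rng = np.random.default_rng(0)
x0 = rng.normal(size=256)
print(sampled_objective(thetas=np.full(20, 0.3), x0=x0, y_target=2.0 * x0))
```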
As in classical optimal control theory, the previous problem can be solved following two inter-connected approaches: a global theory, based on the Dynamic Programming Principle (DPP) leading to the HJB equation, or considering the Pontryagin Maximum Principle (PMP) approach, hence expressing the solution by a system of Forward Backward SDEs (FBSDEs) plus a local optimality condition.

3.2. HJB Equation

The idea behind the HJB formalism is to define a value function corresponding to the optimal loss of the control problem w.r.t. the general starting time and state. For the population risk minimization formulation expressed by Equation (16), the state argument of the value function corresponds to an infinite-dimensional object that models a joint distribution of the input-target as an element of a suitable Wasserstein space.
As regards random variables and their distributions, a suitable space must be defined for the rigorous treatment of the optimal control problem. In particular, we use the shorthand notation $L^2(\Omega, \mathbb{R}^{d+l})$ for $L^2((\Omega, \mathcal{F}, \mathbb{P}), \mathbb{R}^{d+l})$ to denote the set of $\mathbb{R}^{d+l}$-valued square integrable random variables w.r.t. a given probability measure $\mathbb{P}$. We then deal with a Hilbert space when considering the norm:
$\| X \|_{L^2} := \mathbb{E}\left[ \| X \|^2 \right]^{\frac{1}{2}}, \quad X \in L^2(\Omega, \mathbb{R}^{d+l}).$
The set $\mathcal{P}_2(\mathbb{R}^{d+l})$ denotes the probability measures on the Euclidean space $\mathbb{R}^{d+l}$ with finite second moment. Let us recall that a random variable $X$ belongs to $L^2(\Omega, \mathbb{R}^{d+l})$ if and only if its law $\mathbb{P}_X \in \mathcal{P}_2(\mathbb{R}^{d+l})$. The space $\mathcal{P}_2(\mathbb{R}^{d+l})$ can be endowed with a metric by considering the Wasserstein distance defined in Equation (2). For $p = 2$, the two-Wasserstein distance reads:
$W_2(\mu, \nu) := \inf\left\{ \left( \int_{\mathbb{R}^{d+l} \times \mathbb{R}^{d+l}} \| w - z \|^2 \, \pi(dw, dz) \right)^{\frac{1}{2}} \;\Big|\; \pi \in \mathcal{P}_2(\mathbb{R}^{d+l} \times \mathbb{R}^{d+l}) \text{ with marginals } \mu \text{ and } \nu \right\},$
according to the notion of marginals introduced in Section 2.1, or equivalently:
$W_2(\mu, \nu) := \inf\left\{ \| X - Y \|_{L^2} \;\big|\; X, Y \in L^2(\Omega, \mathbb{R}^{d+l}) \text{ with } \mathbb{P}_X = \mu, \; \mathbb{P}_Y = \nu \right\};$
see, e.g., [5] (p. 6). Moreover, $\forall \mu \in \mathcal{P}_2(\mathbb{R}^{d+l})$, we define the associated norm:
$\| \mu \|_{L^2} = \left( \int_{\mathbb{R}^{d+l}} \| w \|^2 \, \mu(dw) \right)^{\frac{1}{2}}.$
Given a measurable function $\psi : \mathbb{R}^{d+l} \to \mathbb{R}^q$ that is square integrable w.r.t. the probability distribution $\mu$, the following bracket notation is introduced:
$\langle \psi(\cdot), \mu \rangle := \int_{\mathbb{R}^{d+l}} \psi(w) \, \mu(dw).$
Concerning the dynamical evolution of probability measures, let us fix $\xi \in L^2(\Omega, \mathbb{R}^{d+l})$ and a control process $\theta \in L^{\infty}([0, T], \Theta)$. Then, the dynamics of the system can be written as:
$W_s^{t, \xi, \theta} = \xi + \int_t^s \bar{f}(W_r^{t, \xi, \theta}, \theta_r) \, dr, \quad s \in [t, T],$
$\mu$ being the law associated with the random variable $\xi$, i.e., $\mu = \mathbb{P}_{\xi} \in \mathcal{P}_2(\mathbb{R}^{d+l})$, and we can write the law of $W_s^{t, \xi, \theta}$ as:
$\mathbb{P}_s^{t, \mu, \theta} := \mathbb{P}_{W_s^{t, \xi, \theta}}.$
Indeed, the law of the dynamics $W_s^{t, \xi, \theta}$ depends only on the law of $\xi$ and not on the random variable itself; see, e.g., [5] (p. 7).
It turns out that, to obtain the HJB Equation (5) corresponding to the above introduced formulation, it is necessary to define the concept of derivative w.r.t. a probability measure. To begin with, it is useful to consider probability measures on $\mathbb{R}^{d+l}$ as laws of $\mathbb{R}^{d+l}$-valued random variables defined over the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, so that derivatives can be defined on the corresponding Banach (in fact, Hilbert) space of random variables. In particular, given a function $u : \mathcal{P}_2(\mathbb{R}^{d+l}) \to \mathbb{R}$, it is possible to lift it to an extension $U$ defined on $L^2(\Omega, \mathbb{R}^{d+l})$ as follows:
$U(X) = u(\mathbb{P}_X), \quad X \in L^2(\Omega, \mathbb{R}^{d+l});$
then the definition of the derivative w.r.t. a probability measure can be expressed in terms of $U$ in the usual Banach space setting. In particular, we say that $u \in C^1(\mathcal{P}_2(\mathbb{R}^{d+l}))$ if the lifted function $U$ is Fréchet differentiable with continuous derivatives.
Since $L^2(\Omega, \mathbb{R}^{d+l})$ can be identified with its dual, if the Fréchet derivative $DU(X)$ exists, by Riesz's theorem it can be identified with an element of $L^2(\Omega, \mathbb{R}^{d+l})$, i.e.,
$DU(X)(Y) = \mathbb{E}[DU(X) \cdot Y], \quad \forall Y \in L^2(\Omega, \mathbb{R}^{d+l}).$
It is worth underlining that $DU(X)$ can be represented as a deterministic function of $X$, and this function depends only on the law of $X$; hence, the derivative of $u$ at $\mu = \mathbb{P}_X$ is described by the map $\partial_{\mu} u(\mathbb{P}_X) : \mathbb{R}^{d+l} \to \mathbb{R}^{d+l}$, defined by:
$DU(X) = \partial_{\mu} u(\mathbb{P}_X)(X).$
By duality, we know that $\partial_{\mu} u(\mathbb{P}_X)$ is square integrable w.r.t. $\mu$. To state a chain rule in $\mathcal{P}_2(\mathbb{R}^{d+l})$, consider the dynamical system described by:
$W_t = \xi + \int_0^t \bar{f}(W_s) \, ds, \quad \xi \in L^2(\Omega, \mathbb{R}^{d+l}),$
where $\bar{f}$ denotes the feed-forward dynamics. If a function $u \in C^1(\mathcal{P}_2(\mathbb{R}^{d+l}))$, meaning that it is differentiable with a continuous derivative w.r.t. the probability measure, then, for all $t \in [0, T]$, we have:
$u(\mathbb{P}_{W_t}) = u(\mathbb{P}_{W_0}) + \int_0^t \left\langle \partial_{\mu} u(\mathbb{P}_{W_s})(\cdot) \cdot \bar{f}(\cdot), \mathbb{P}_{W_s} \right\rangle ds,$
where $\cdot$ denotes the usual inner product between vectors in $\mathbb{R}^{d+l}$. Equivalently, exploiting the lifted function of $u$, we can state:
$U(W_t) = U(W_0) + \int_0^t \mathbb{E}\left[ DU(W_s) \cdot \bar{f}(W_s) \right] ds.$
Moreover, the variable $w$ denotes the concatenated $(d+l)$-dimensional variable $(x, y)$, where $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^l$. Correspondingly, $\bar{f}(w, \theta) := (f(x, \theta), 0)$ is the extended $(d+l)$-dimensional Feed-Forward Function (FFF), $\bar{L}(w, \theta) := L(x, \theta)$ is the extended $(d+l)$-dimensional regularization loss and $\bar{\Phi}(w) := \Phi(x, y)$ represents the terminal loss function.
Since the state variable is identified with a probability distribution $\mu \in \mathcal{P}_2(\mathbb{R}^{d+l})$, the resulting objective functional can be defined as:
$J(t, \mu, \theta) := \mathbb{E}_{(x_t, y_T) \sim \mu}\left[ \Phi(x_T, y_T) + \int_t^T L(x_s, \theta_s) \, ds \right],$
which can be written, with the concatenated variable $w$ and the bracket notation introduced in (18), as:
$J(t, \mu, \theta) := \left\langle \bar{\Phi}(\cdot), \mathbb{P}_T^{t, \mu, \theta} \right\rangle + \int_t^T \left\langle \bar{L}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle ds.$
In this setting, some assumptions are needed in order to solve Equation (16). In particular:
  • $f$, $L$ and $\Phi$ are bounded;
  • $f$, $L$ and $\Phi$ are Lipschitz w.r.t. $x$, with the Lipschitz constants of $f$ and $L$ independent of $\theta$;
  • $\mu_0 \in \mathcal{P}_2(\mathbb{R}^{d+l})$.
The value function $v^*(t, \mu)$ is defined as the real-valued function on $[0, T] \times \mathcal{P}_2(\mathbb{R}^{d+l})$ corresponding to the infimum of the functional $J$ over the training parameters $\theta$:
$v^*(t, \mu) = \inf_{\theta \in L^{\infty}([0, T], \Theta)} J(t, \mu, \theta).$
It is essential to observe that the value function satisfies a recursive relation based on the Dynamic Programming Principle (DPP): along any optimal trajectory, the part of the trajectory starting from any intermediate point must itself be optimal. The latter principle can be expressed by rewriting the value function as:
$v^*(t, \mu) = \inf_{\theta \in L^{\infty}([0, T], \Theta)} \left[ \int_t^{\hat{t}} \left\langle \bar{L}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle ds + v^*(\hat{t}, \mathbb{P}_{\hat{t}}^{t, \mu, \theta}) \right],$
$\forall \, 0 \leq t \leq \hat{t} \leq T$ and $\mu \in \mathcal{P}_2(\mathbb{R}^{d+l})$.
Considering a small time increment $\hat{t} = t + \delta t$ with $\delta t > 0$, we can compute the Taylor expansion in the Wasserstein sense, hence obtaining:
$0 = \inf_{\theta \in L^{\infty}([0, T], \Theta)} \left[ v^*(t + \delta t, \mathbb{P}_{t + \delta t}^{t, \mu, \theta}) - v^*(t, \mu) + \int_t^{t + \delta t} \left\langle \bar{L}(w, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle ds \right].$
By the chain rule in $\mathcal{P}_2(\mathbb{R}^{d+l})$, we have:
$0 \approx \inf_{\theta \in L^{\infty}([0, T], \Theta)} \left[ \partial_t v(t, \mu) \, \delta t + \int_t^{t + \delta t} \left\langle \partial_{\mu} v(t, \mu)(w) \cdot \bar{f}(w, \theta_s) + \bar{L}(w, \theta_s), \mu \right\rangle ds \right].$
Since the infinitesimal $\delta t$ does not affect the distribution $\mu$ and the controls $\theta$ (see [5] (p. 13)), integrating the second term, we have:
$0 \approx \delta t \, \inf_{\theta \in L^{\infty}([0, T], \Theta)} \left[ \partial_t v(t, \mu) + \left\langle \partial_{\mu} v(t, \mu)(w) \cdot \bar{f}(w, \theta) + \bar{L}(w, \theta), \mu \right\rangle \right].$
Taking $\delta t \to 0$, we obtain:
$\begin{cases} \dfrac{\partial v}{\partial t} + \inf_{\theta \in \Theta} \left\langle \partial_{\mu} v(t, \mu)(w) \cdot \bar{f}(w, \theta) + \bar{L}(w, \theta), \mu \right\rangle = 0, & \text{on } [0, T) \times \mathcal{P}_2(\mathbb{R}^{d+l}), \\ v(T, \mu) = \left\langle \bar{\Phi}(w), \mu \right\rangle, & \text{on } \mathcal{P}_2(\mathbb{R}^{d+l}). \end{cases}$
Since the value function should solve the HJB equation, it is essential to find the precise link between the solution of this PDE and the value function obtained from the minimization of the functional J. To provide the result, we use a verification argument allowing the following consideration: if the solution of the HJB is smooth enough, then it corresponds to the value function v * ; moreover, it allows computing the optimal control θ * .
Theorem 1
(The verification argument). Let $v$ be a function in $C^{1,1}([0, T] \times \mathcal{P}_2(\mathbb{R}^{d+l}))$. If $v$ is a solution of the HJB equation in (21) and there exists a mapping $\bar{\theta} : (t, \mu) \mapsto \Theta$ attaining the infimum in the HJB equation, then $v(t, \mu) = v^*(t, \mu)$, and $\bar{\theta}$ is an optimal feedback control policy, i.e., the control $\theta^*$ defined by $\theta_t^* := \bar{\theta}(t, \mathbb{P}_{w_t})$, with $\mathbb{P}_{w_0} = \mu_0$ and $\frac{d w_t}{dt} = \bar{f}(w_t, \theta_t^*)$, is a solution of the population risk minimization problem expressed by Equation (16).
Proof of Theorem 1. 
Given any control process $\theta$, applying Formula (20) between $s = t$ and $s = T$, with explicit time dependence, gives:
$v(T, \mathbb{P}_T^{t, \mu, \theta}) = v(t, \mu) + \int_t^T \left[ \frac{\partial v}{\partial t}(s, \mathbb{P}_s^{t, \mu, \theta}) + \left\langle \partial_{\mu} v(s, \mathbb{P}_s^{t, \mu, \theta})(\cdot) \cdot \bar{f}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle \right] ds.$
Equivalently:
$v(t, \mu) = v(T, \mathbb{P}_T^{t, \mu, \theta}) - \int_t^T \left[ \frac{\partial v}{\partial t}(s, \mathbb{P}_s^{t, \mu, \theta}) + \left\langle \partial_{\mu} v(s, \mathbb{P}_s^{t, \mu, \theta})(\cdot) \cdot \bar{f}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle \right] ds \leq v(T, \mathbb{P}_T^{t, \mu, \theta}) + \int_t^T \left\langle \bar{L}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle ds = \left\langle \bar{\Phi}(\cdot), \mathbb{P}_T^{t, \mu, \theta} \right\rangle + \int_t^T \left\langle \bar{L}(\cdot, \theta_s), \mathbb{P}_s^{t, \mu, \theta} \right\rangle ds = J(t, \mu, \theta),$
where the first inequality comes from the infimum condition in (21).
Since the control is arbitrary, we have:
$v(t, \mu) \leq v^*(t, \mu).$
Now, choose $\theta = \theta^*$ with $\theta_t^* = \bar{\theta}(t, \mathbb{P}_t^{t, \mu, \theta^*})$, i.e., the feedback control attaining the infimum. Repeating the above argument, the inequality becomes an equality, since the infimum in (21) is attained at $\bar{\theta}$:
$v(t, \mu) = J(t, \mu, \theta^*) \geq v^*(t, \mu).$
Thus, v ( t , μ ) = v * ( t , μ ) , and θ ¯ defines an optimal control policy. For more details, see [5] (Proposition 3, pp. 13–14). □
The importance of Theorem 1 consists in linking smooth solutions of the parabolic HJB equation to the solutions of the population risk minimization problem, making the former a natural candidate tool for the DL problem.
Moreover, the optimal control policy $\bar{\theta} : [0, T] \times \mathcal{P}_2(\mathbb{R}^{d+l}) \to \Theta$ is identified by computing the infimum in (21). Hence, the HJB equation characterizes the learning problem's solution for feedback, or closed-loop, networks: control weights are actively adjusted according to the outputs, which is the essential feature of closed-loop control. Nevertheless, the solution comes from a PDE that is in general difficult to solve, even numerically. On the other hand, open-loop solutions can be obtained from the closed-loop control policy by sequentially setting $\theta_t = \bar{\theta}(t, \mathbb{P}_{w_t})$, with $w_t$ the solution of the feed-forward ODE describing the dynamics of the state variable driven by the already computed controls up to time $t$. Usually, within DL settings, open-loop controls are used during training or to measure the inference of a trained model, since the trained weights of each neuron have a fixed value.
The main limitation of this formulation lies in the assumption that the value function $v(t, \mu)$ is continuously differentiable. It is then natural to look for a more flexible characterization of $v$ in terms of weak solutions, also known as viscosity solutions. Thus, it is worth considering a weaker formulation of the PDE that goes beyond the concept of classical solutions by introducing the notion of viscosity solutions, hence allowing relevant results to be obtained under weaker assumptions on the coefficients defining the (stochastic) differential problem we are interested in; see, e.g., [5] (Section 5, pp. 14–22) for more details.
The key idea relies on exploiting the lifting identification between measures and random variables and moving from the Wasserstein space $\mathcal{P}_2(\mathbb{R}^{d+l})$ to the Hilbert space $L^2(\Omega, \mathbb{R}^{d+l})$, using tools developed to study viscosity solutions.
We introduce the Hamiltonian functional for the viscosity formulation, $H(\xi, P) : L^2(\Omega, \mathbb{R}^{d+l}) \times L^2(\Omega, \mathbb{R}^{d+l}) \to \mathbb{R}$, through:
$H(\xi, P) = \inf_{\theta \in \Theta} \mathbb{E}\left[ P \cdot \bar{f}(\xi, \theta) + \bar{L}(\xi, \theta) \right].$
Then, the lifted Bellman equation can be written w.r.t. $V(t, \xi) = v(t, \mathbb{P}_{\xi})$ as follows:
$\begin{cases} \dfrac{\partial V}{\partial t} + H(\xi, D V(t, \xi)) = 0, & \text{on } [0, T) \times L^2(\Omega, \mathbb{R}^{d+l}), \\ V(T, \xi) = \mathbb{E}[\bar{\Phi}(\xi)], & \text{on } L^2(\Omega, \mathbb{R}^{d+l}); \end{cases}$
hence, the PDE we are analysing is now set within the larger space $L^2(\Omega, \mathbb{R}^{d+l})$.
We say that a bounded, uniformly continuous function $u : [0, T] \times \mathcal{P}_2(\mathbb{R}^{d+l}) \to \mathbb{R}$ is a viscosity solution of the HJB Equation (21) if its lifted function $U : [0, T] \times L^2(\Omega, \mathbb{R}^{d+l}) \to \mathbb{R}$, defined by:
$U(t, \xi) = u(t, \mathbb{P}_{\xi}),$
is a viscosity solution of the lifted Bellman Equation (23), namely:
  • $U(T, \xi) \leq \mathbb{E}[\bar{\Phi}(\xi)]$, and for any test function $\psi \in C^{1,1}([0, T] \times L^2(\Omega, \mathbb{R}^{d+l}))$ such that $(U - \psi)$ has a local maximum at $(t_0, \xi_0) \in [0, T) \times L^2(\Omega, \mathbb{R}^{d+l})$, $\psi$ satisfies:
    $\partial_t \psi(t_0, \xi_0) + H(\xi_0, D\psi(t_0, \xi_0)) \geq 0;$
  • $U(T, \xi) \geq \mathbb{E}[\bar{\Phi}(\xi)]$, and for any test function $\psi \in C^{1,1}([0, T] \times L^2(\Omega, \mathbb{R}^{d+l}))$ such that $(U - \psi)$ has a local minimum at $(t_0, \xi_0) \in [0, T) \times L^2(\Omega, \mathbb{R}^{d+l})$, $\psi$ satisfies:
    $\partial_t \psi(t_0, \xi_0) + H(\xi_0, D\psi(t_0, \xi_0)) \leq 0.$
Actually, the unique solution of this formulation corresponds to the value function v * from the minimization problem; see, e.g., [5] (Theorem 1, p. 15). Therefore, the HJB equation provides both the necessary and sufficient condition for the optimality of the learning procedure.
Adopting the MF-optimal control viewpoint implies that the population risk minimization problem of DL can be studied as a variational problem, whose solution is characterized by a suitable HJB equation, in analogy with the classical calculus of variations. In other words, the HJB equation is a global characterization of the value function, to be solved over the entire space $\mathcal{P}_2(\mathbb{R}^{d+l})$ of input-target distributions. From the numerical point of view, obtaining a solution over the entire space is a hard task; this is why the learning problem is typically solved locally, around some (small set of) trajectories generated according to the initial condition $\mu_0 \in \mathcal{P}_2(\mathbb{R}^{d+l})$, and the obtained feedback is then applied to nearby input-target distributions.

3.3. Mean-Field Pontryagin Maximum Principle

We have seen how the HJB approach provides a characterization of the optimal solution for the population risk minimization problem that holds globally in $\mathcal{P}_2(\mathbb{R}^{d+l})$, at the price of being difficult to handle in practice. Moving from this consideration, the MF-PMP provides a local condition for optimality, expressed in terms of $\mathbb{E}[H]$, i.e., the expectation of the Hamiltonian function.
Starting from the population risk minimization problem defined in Equation (16), consider a collection of $K$ sample input-target pairs and focus on the $i$th input sample. The prediction of the network is a deterministic transformation $g(X_T^i)$ of the terminal state, for some $g : \mathbb{R}^d \to \mathcal{Y}$, which is therefore a function both of the initial input $x^i$ and of the control parameters $\theta$. Moreover, we define a loss function $\Phi : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$, which is minimized when its arguments are equal. Therefore, the goal is to minimize:
$\sum_{i=1}^{K} \Phi(g(X_T^i), y^i).$
Since $g$ is fixed, it can be absorbed into the definition of the loss function by defining $\Phi_i(\cdot) := \Phi(g(\cdot), y^i)$.
Then, the supervised learning problem can be expressed as:
$\min_{\theta \in \mathcal{U}} \sum_{i=1}^{K} \Phi_i(X_T^i) + \int_0^T L(\theta_t) \, dt,$
where $L : \Theta \to \mathbb{R}$ acts as a regularizer modelling a running cost.
The input variables $x = (x^1, \ldots, x^K)$ can be considered as elements of the Euclidean space $\mathbb{R}^{d \times K}$, representing the initial conditions of the following ODE system:
$\dot{X}_t^i = f_{\theta_t}(t, X_t^i), \quad X_0^i = x^i, \quad 0 \leq t \leq T, \quad i = 1, \ldots, K,$
where $\theta : [0, T] \to \Theta$ are the control parameters to be trained. The dynamics (25) are decoupled except for the control. The space $\mathcal{U}$ of admissible controls is defined as $\mathcal{U} := \{ \theta : [0, T] \to \Theta \;:\; \theta \text{ is Lebesgue measurable} \}$, and we aim at choosing $\theta \in \mathcal{U}$ so that $g(X_T^i)$ is close to $y^i$ for $i = 1, \ldots, K$.
To formulate the PMP as a set of necessary conditions for optimal solutions, it is useful to define the Hamiltonian $H : [0, T] \times \mathbb{R}^d \times \mathbb{R}^d \times \Theta \to \mathbb{R}$ given by:
$H(t, x, p, \theta) := p \cdot f(t, x, \theta) - L(\theta),$
with $p$ modelling an adjoint process as in Equation (6).
Let us underline that all input-target pairs $(x_0, y_0)$, distributed according to $\mu_0$, share a common control parameter; this feature suggests developing a maximum condition that holds in an average sense. Indeed, the control is now enforced on the continuity equation describing the evolution of the probability densities.
The following assumptions are needed:
  • $f$ is bounded, and $f$, $L$ are continuous w.r.t. $\theta$;
  • $f$, $L$ and $\Phi$ are continuously differentiable w.r.t. $x$;
  • the distribution $\mu_0$ has bounded support in $\mathbb{R}^d \times \mathbb{R}^l$, i.e., there exists $M > 0$ such that $\mu_0\left( \{ (x, y) \in \mathbb{R}^d \times \mathbb{R}^l : \|x\| + \|y\| \leq M \} \right) = 1$.
Theorem 2 (Mean-Field Pontryagin Maximum Principle).
 Let assumptions 1–3 hold and let $\theta^* \in L^{\infty}([0, T], \Theta)$ be a minimizer of $J(\theta)$, corresponding to the optimal control of the population risk minimization problem (16). Define the Hamiltonian $H : \mathbb{R}^d \times \mathbb{R}^d \times \Theta \to \mathbb{R}$ as
$H(x, p, \theta) = p \cdot f(x, \theta) - L(x, \theta).$
Then, there exist absolutely continuous stochastic processes $x^*$ and $p^*$ solving the following forward-backward equations:
$\dot{x}_t^* = f(x_t^*, \theta_t^*), \qquad x_0^* = x_0,$
$\dot{p}_t^* = -\nabla_x H(x_t^*, p_t^*, \theta_t^*), \qquad p_T^* = -\nabla_x \Phi(x_T^*, y_0),$
together with the optimality condition expressed in terms of the expectation of the Hamiltonian function:
$\mathbb{E}_{\mu_0}\left[ H(x_t^*, p_t^*, \theta_t^*) \right] \geq \mathbb{E}_{\mu_0}\left[ H(x_t^*, p_t^*, \theta) \right], \quad \forall \theta \in \Theta, \; \text{for a.e. } t \in [0, T].$
Proof of Theorem 2. 
For the sake of simplicity, let us introduce a new coordinate $x^0$ satisfying the dynamics $\dot{x}_t^0 = L(x_t^*, \theta_t^*)$ with $x_0^0 = 0$. Through this choice, the Hamiltonian in Equation (26) can be rewritten without the running loss $L$ by redefining:
$x \to (x^0, x), \qquad f \to (L, f), \qquad \Phi(x_T, y_0) \to \Phi(x_T, y_0) + x_T^0.$
Assumptions 1–3 are still preserved, and we can now consider, without loss of generality, the case $L \equiv 0$.
Let $\tau \in (0, T]$ be a Lebesgue point of $\hat{f}(t) := f(x_t^*, \theta_t^*)$; such points have full measure, and are hence dense, in $[0, T]$. Now, for $\varepsilon \in (0, \tau)$, define the family of perturbed controls:
$\theta_t^{\tau, \varepsilon} = \begin{cases} \omega, & t \in [\tau - \varepsilon, \tau], \\ \theta_t^*, & \text{otherwise}, \end{cases}$
where $\omega \in \Theta$ is an admissible control value; this kind of perturbation is called a needle perturbation. Accordingly, define $x_t^{\tau, \varepsilon}$ by:
$x_t^{\tau, \varepsilon} = x_0 + \int_0^t f(x_s^{\tau, \varepsilon}, \theta_s^{\tau, \varepsilon}) \, ds,$
that is, the solution of the forward propagation equation with the perturbed control $\theta^{\tau, \varepsilon}$. Clearly, $x_t^* = x_t^{\tau, \varepsilon}$ for every $t \leq \tau - \varepsilon$ and every $x_0$, since the perturbation is not yet active. At the point $t = \tau$, the following holds:
$\frac{1}{\varepsilon} \left( x_\tau^{\tau, \varepsilon} - x_\tau^* \right) = \frac{1}{\varepsilon} \int_{\tau - \varepsilon}^{\tau} \left[ f(x_s^{\tau, \varepsilon}, \omega) - f(x_s^*, \theta_s^*) \right] ds,$
and since $\tau$ is a Lebesgue point of $\hat{f}$:
$v_\tau := \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( x_\tau^{\tau, \varepsilon} - x_\tau^* \right) = f(x_\tau^*, \omega) - f(x_\tau^*, \theta_\tau^*).$
It is possible to characterize $v_\tau$ as the leading order perturbation of the state due to the needle perturbation introduced on the infinitesimal interval $[\tau - \varepsilon, \tau]$. On the interval $(\tau, T]$, the perturbed and unperturbed trajectories are driven by the same controls, hence by the same vector field.
Now, it is necessary to consider how the perturbation $v_\tau$ propagates. Thus, define, for $t \geq \tau$:
$v_t^{\varepsilon} := \frac{1}{\varepsilon} \left( x_t^{\tau, \varepsilon} - x_t^* \right),$
and:
$v_t := \lim_{\varepsilon \to 0} v_t^{\varepsilon};$
$v_t$ is well-defined for almost every $t$, namely at every Lebesgue point of the map $t \mapsto f(x_t^*, \theta_t^*)$, and it satisfies the following linearised equation:
$\dot{v}_t = \nabla_x f(x_t^*, \theta_t^*)^T v_t, \quad t \in (\tau, T], \qquad v_\tau = f(x_\tau^*, \omega) - f(x_\tau^*, \theta_\tau^*).$
In particular, $v_T$ represents the perturbation of the final state introduced by this control perturbation. By the optimality assumption on $\theta^*$, it follows that:
$\mathbb{E}_{\mu_0}\left[ \Phi(x_T^{\tau, \varepsilon}, y_0) \right] \geq \mathbb{E}_{\mu_0}\left[ \Phi(x_T^*, y_0) \right].$
Assumptions 1 and 2 imply that $\nabla_x \Phi$ is bounded. By the dominated convergence theorem, we know that:
$0 \leq \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \mathbb{E}_{\mu_0}\left[ \Phi(x_T^{\tau, \varepsilon}, y_0) - \Phi(x_T^*, y_0) \right]$
$= \mathbb{E}_{\mu_0}\left[ \frac{d}{d\varepsilon} \Phi(x_T^{\tau, \varepsilon}, y_0) \Big|_{\varepsilon = 0^+} \right]$
$= \mathbb{E}_{\mu_0}\left[ \nabla_x \Phi(x_T^*, y_0) \cdot v_T \right].$
Let us define $p^*$ as the solution of the adjoint of Equation (30), hence:
$\dot{p}_t^* = -\nabla_x f(x_t^*, \theta_t^*) \, p_t^*, \qquad p_T^* = -\nabla_x \Phi(x_T^*, y_0).$
By Equation (31), it follows that $\mathbb{E}_{\mu_0}[p_T^* \cdot v_T] \leq 0$; moreover, for all $t \in [\tau, T]$:
$\frac{d}{dt} \left( p_t^* \cdot v_t \right) = \dot{p}_t^* \cdot v_t + \dot{v}_t \cdot p_t^* = 0;$
thus,
$\mathbb{E}_{\mu_0}\left[ p_t^* \cdot v_t \right] = \mathbb{E}_{\mu_0}\left[ p_T^* \cdot v_T \right] \leq 0, \quad \forall t \in [\tau, T],$
so that, taking $t = \tau$:
$\mathbb{E}_{\mu_0}\left[ p_\tau^* \cdot f(x_\tau^*, \theta_\tau^*) \right] \geq \mathbb{E}_{\mu_0}\left[ p_\tau^* \cdot f(x_\tau^*, \omega) \right].$
Since $\omega$ is arbitrarily chosen, this completes the proof, recalling that $H(x, p, \theta) = p \cdot f(x, \theta)$ under the reduction $L \equiv 0$. See [5] (Theorem 3, pp. 23–24) for more details. □
The MF-PMP refers only to controls of the open-loop type; Equation (27) is a feed-forward ODE describing the state dynamics under the optimal controls $\theta^*$, while Equation (28) defines the evolution of the co-state variable $p_t^*$, characterizing an adjoint variational condition propagated backwards in time. It is interesting to note how the optimality condition described in Equation (29) does not involve first order partial derivatives, being expressed in terms of expectations. In particular, it requires that optimal solutions globally maximize the Hamiltonian function. This allows considering also cases of dynamics that are non-differentiable w.r.t. the control weights, as well as cases characterized by optimal weights lying on the boundary of the admissible set $\Theta$. Moreover, the usual first order optimality condition can be derived from (29). Comparing this MF formulation to the classical PMP, the main difference lies in the fact that the maximization condition is expressed in terms of the expectation over the probability distribution $\mu_0$. The latter result is not surprising, since the mean-field-optimal control must depend on the probability distribution of the input-target pairs.
Let us also note that the mean-field PMP expressed in Theorem 2 can be written more compactly as follows. For each control process $\theta \in L^{\infty}([0, T], \Theta)$, we denote by $x^{\theta} := \{ x_t^{\theta} : 0 \leq t \leq T \}$ and $p^{\theta} := \{ p_t^{\theta} : 0 \leq t \leq T \}$ the solutions of Hamilton's Equations (27) and (28) driven by the random variables $(x_0, y_T) \sim \mu_0$:
$\dot{x}_t^{\theta} = f(x_t^{\theta}, \theta_t), \qquad x_0^{\theta} = x_0,$
$\dot{p}_t^{\theta} = -\nabla_x H(x_t^{\theta}, p_t^{\theta}, \theta_t), \qquad p_T^{\theta} = -\nabla_x \Phi(x_T^{\theta}, y_T).$
Then, $\theta^*$ satisfies the PMP if and only if:
$\mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^*}, p_t^{\theta^*}, \theta_t^*) \right] \geq \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^*}, p_t^{\theta^*}, \theta) \right], \quad \forall \theta \in \Theta, \; \text{for a.e. } t \in [0, T].$
Furthermore, observe that the mean-field PMP includes, as a special case, the necessary conditions for optimality of the sampled optimal control problem (17). In order to point out this aspect, define the empirical measure:
$\mu_0^N := \frac{1}{N} \sum_{i=1}^{N} \delta_{(x_0^i, y_T^i)},$
and apply the mean-field PMP with $\mu_0^N$ in place of $\mu_0$ to obtain:
$\frac{1}{N} \sum_{i=1}^{N} H(x_t^{\theta^*, i}, p_t^{\theta^*, i}, \theta_t^*) \geq \frac{1}{N} \sum_{i=1}^{N} H(x_t^{\theta^*, i}, p_t^{\theta^*, i}, \theta), \quad \forall \theta \in \Theta,$
where $x_t^{\theta^*, i}$ and $p_t^{\theta^*, i}$ are defined through the input-target pair $(x_0^i, y_T^i)$. Moreover, since $\mu_0^N$ is a random measure, (33) is a random relation whose solutions are themselves random variables.
Concerning the numerical analysis of DL algorithms based on the maximum principle, we refer to [4] (pp. 13–15), where a comparison can be found between the usual gradient-based approaches and a discrete formulation of the mean-field PMP stated in Theorem 2, with a loss function based on Equation (24). The test and train losses of some variants of the SGD algorithm are compared to those of the mean-field algorithm based on the discrete PMP. It is possible to observe that the latter algorithm exhibits a better convergence rate, being then faster, mainly because it avoids getting stuck in flat regions of the loss landscape, as clearly shown by the graphs reported in [4] (p. 14).
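To make the PMP-based training loop concrete, the sketch below implements a basic method of successive approximations in the spirit of [4]: a forward pass for (27), a backward pass for (28), and a gradient ascent step on the sampled Hamiltonian as a surrogate for the maximization (29). The scalar dynamics, losses and step sizes are illustrative assumptions of ours, not the configuration used in [4]:

```python
import numpy as np

# Toy setup: scalar state, dynamics f(x, theta) = tanh(theta * x),
# terminal loss Phi(x, y) = 0.5 (x - y)^2, running cost L(theta) = lam * theta^2.
def msa_step(theta, x0, y, dt=0.05, lam=1e-3, lr=0.5):
    n_steps = len(theta)
    xs = [x0]
    for k in range(n_steps):                       # forward sweep, cf. Eq. (27)
        xs.append(xs[-1] + dt * np.tanh(theta[k] * xs[-1]))
    p = -(xs[-1] - y)                              # p_T = -grad_x Phi(x_T, y)
    new_theta = theta.copy()
    for k in reversed(range(n_steps)):             # backward sweep, cf. Eq. (28)
        x = xs[k]
        sech2 = 1.0 - np.tanh(theta[k] * x) ** 2
        dH_dtheta = np.mean(p * x * sech2) - 2.0 * lam * theta[k]
        new_theta[k] = theta[k] + lr * dH_dtheta   # ascent on E[H], surrogate of (29)
        p = p + dt * p * theta[k] * sech2          # propagate adjoint: dp/dt = -dH/dx
    return new_theta

rng = np.random.default_rng(0)
x0 = rng.normal(size=512)
theta = np.zeros(20)
for _ in range(100):
    theta = msa_step(theta, x0, y=1.5 * x0)
print(theta[:3])
```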

3.4. Connection between the HJB Equation and the PMP

In what follows, we highlight the connection between the global (HJB) and local (PMP) formulations, exploiting the classical relation between Hamilton's canonical equations (ODEs) and the Hamilton–Jacobi equations (PDEs). The Hamiltonian dynamics of Equations (27) and (28) describe the trajectory of a random variable that is completely determined by the random variables $(x_0, y_T)$. On the other hand, the optimality condition described by Equation (29) does not depend on the particular realization of the input-target pair, but only on its probability measure. Notice that the maximum principle can be expressed in terms of a Hamiltonian flow depending on a probability measure in a suitable Wasserstein space, Equation (29) being the corresponding lifted version. Analogously, in order to have both formulations in the same functional space, the HJB equation has to be lifted to the $L^2(\Omega, \mathbb{R}^{d+l})$ space.
Starting from the lifted Bellman Equation (23), set in $L^2(\Omega, \mathbb{R}^{d+l})$, it is possible to apply the method of characteristics and, after introducing $P_t = D V(t, \xi_t)$, define the following system of equations:
$\dot{\xi}_t = D_P H(\xi_t, P_t), \qquad \dot{P}_t = -D_{\xi} H(\xi_t, P_t).$
Suppose Equation (34) has a solution that satisfies the initial condition:
$\mathbb{P}_{\xi_0} = \mu_0,$
and the terminal condition coming from the Bellman equation:
$P_T = \nabla_w \bar{\Phi}(\xi_T).$
We also assume that the optimal control $\bar{\theta}(\xi, P)$ achieving the infimum in (21) is an interior point of $\Theta$; then we can explicitly write the Hamiltonian evaluated along the optimal control as:
$H = \mathbb{E}\left[ P \cdot \bar{f}(\xi, \bar{\theta}(\xi, P)) + \bar{L}(\xi, \bar{\theta}(\xi, P)) \right].$
Therefore, by the first order condition, we have:
$\mathbb{E}\left[ \nabla_{\theta} \bar{f}(\xi, \bar{\theta}(\xi, P))^T P + \nabla_{\theta} \bar{L}(\xi, \bar{\theta}(\xi, P)) \right] = 0,$
so that, taking into consideration Equation (34), we obtain the Hamilton-type equations:
$\dot{\xi}_t = \bar{f}(\xi_t, \bar{\theta}(\xi_t, P_t)), \qquad \dot{P}_t = -\nabla_w \bar{f}(\xi_t, \bar{\theta}(\xi_t, P_t))^T P_t - \nabla_w \bar{L}(\xi_t, \bar{\theta}(\xi_t, P_t)).$
Using $w = (x, y)$ as the concatenated variable and writing $\theta^* = \bar{\theta}(\xi_t, P_t)$, we note that the last $l$ components of $\bar{f}$ are zero, so that, keeping only the first $d$ components:
$\dot{x}_t = f(x_t, \theta^*), \qquad \dot{p}_t = -\nabla_x f(x_t, \theta^*)^T p_t - \nabla_x L(x_t, \theta_t^*).$
Summing up, Hamilton's equations for the system (36) can be viewed as the characteristic equations of the HJB equation in its lifted formulation (23). Essentially, the PMP gives a necessary condition along any characteristic of the HJB equation: any characteristic originating from $\mu_0$, the initial law of the random variables, must satisfy the local necessary optimality condition constituted by the mean-field PMP. This justifies the claim that the PMP constitutes a local condition when compared to the HJB equation.

3.5. Small-Time Uniqueness

A natural question is to understand when the PMP solutions also provide sufficient conditions for optimality. We start by noting that uniqueness of the solution implies sufficiency, and we investigate which assumptions are needed to have a unique solution of the mean-field PMP equations. Equations (27) and (28) constitute a highly non-linear two-point boundary value problem in terms of $x^*$ and $p^*$, which are also coupled through their laws. In general, even without the coupling, this kind of boundary value problem is known not to have a unique solution; see, e.g., Ch. 7 of [22]. Strong assumptions are therefore needed to prove uniqueness, even in the small-time case.
Theorem 3
(Small-time uniqueness). Suppose that:
  • $f$ is bounded;
  • $f$, $L$ and $\Phi$ are continuously differentiable w.r.t. both $x$ and $\theta$, with bounded and Lipschitz partial derivatives;
  • the distribution $\mu_0$ has bounded support in $\mathbb{R}^d \times \mathbb{R}^l$, i.e., there exists $M > 0$ such that $\mu_0\left( \{ (x, y) \in \mathbb{R}^d \times \mathbb{R}^l : \|x\| + \|y\| \leq M \} \right) = 1$;
  • $H(x, p, \theta)$ is strongly concave in $\theta$, uniformly in $x, p \in \mathbb{R}^d$, i.e., $\nabla_{\theta\theta}^2 H(x, p, \theta) + \lambda_0 I \preceq 0$ for some $\lambda_0 > 0$.
Then, for sufficiently small $T$, if $\theta^{*,1}$ and $\theta^{*,2}$ are solutions of the mean-field PMP derived in Theorem 2, then $\theta^{*,1} = \theta^{*,2}$.
Before proving the theorem, we report the following lemma, which provides an estimate of the difference between flow-maps driven by two different controls in terms of the small-time parameter T:
Lemma 1.
Let $\theta^1, \theta^2 \in L^{\infty}([0, T], \Theta)$. Then, there exists a constant $T_0 > 0$ such that, for all $T \in [0, T_0)$, it holds that:
$\| x^{\theta^1} - x^{\theta^2} \|_{L^{\infty}} + \| p^{\theta^1} - p^{\theta^2} \|_{L^{\infty}} \leq C(T) \, \| \theta^1 - \theta^2 \|_{L^{\infty}},$
where $C(T) > 0$ satisfies $C(T) \to 0$ as $T \to 0$.
Proof of Lemma 1. 
We denote $\delta\theta := \theta^1 - \theta^2$, $\delta x := x^{\theta^1} - x^{\theta^2}$ and $\delta p := p^{\theta^1} - p^{\theta^2}$, respectively. Since $x_0^{\theta^1} = x_0^{\theta^2} = x_0$, integrating the respective ODEs and using the first two assumptions of Theorem 3 leads to:
$\| \delta x_t \| \leq \int_0^t \| f(x_s^{\theta^1}, \theta_s^1) - f(x_s^{\theta^2}, \theta_s^2) \| \, ds \leq K_L \int_0^T \| \delta x_s \| \, ds + K_L \int_0^T \| \delta\theta_s \| \, ds,$
and so:
$\| \delta x \|_{L^{\infty}} \leq K_L T \| \delta x \|_{L^{\infty}} + K_L T \| \delta\theta \|_{L^{\infty}}.$
Now, let $T < T_0 := \frac{1}{K_L}$. We then have:
$\| \delta x \|_{L^{\infty}} \leq \frac{K_L T}{1 - K_L T} \| \delta\theta \|_{L^{\infty}}.$
Similarly:
$\| \delta p_t \| \leq K_L \| \delta x_T \| + K_L \int_t^T \| \delta x_s \| \, ds + K_L \int_t^T \| \delta p_s \| \, ds,$
$\| \delta p \|_{L^{\infty}} \leq (K_L + K_L T) \| \delta x \|_{L^{\infty}} + K_L T \| \delta p \|_{L^{\infty}},$
and hence:
$\| \delta p \|_{L^{\infty}} \leq \frac{K_L (1 + T)}{1 - K_L T} \| \delta x \|_{L^{\infty}},$
which, combined with Equation (37), proves the lemma.  □
Through the above estimate, it is now possible to prove Theorem 3.
Proof of Theorem 3. 
In what follows, we exploit [5] (p. 29). By uniform strong concavity, the function $\theta \mapsto \mathbb{E}_{\mu_0} H(x_t^{\theta^1}, p_t^{\theta^1}, \theta)$ is $\lambda_0$-strongly concave, with $\lambda_0 > 0$ as in the assumptions. Hence:
$\frac{\lambda_0}{2} \| \theta_t^1 - \theta_t^2 \|^2 \leq \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^1}, p_t^{\theta^1}, \theta_t^1) \right] - \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^1}, p_t^{\theta^1}, \theta_t^2) \right] + \nabla_{\theta} \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^1}, p_t^{\theta^1}, \theta_t^1) \right] \cdot (\theta_t^2 - \theta_t^1).$
A similar expression holds for $\theta \mapsto \mathbb{E}_{\mu_0} H(x_t^{\theta^2}, p_t^{\theta^2}, \theta)$; combining the two inequalities, discarding the gradient terms thanks to the PMP maximality conditions and exploiting the smoothness and Lipschitz continuity of the functions involved, we obtain:
$\lambda_0 \| \theta_t^1 - \theta_t^2 \|^2 \leq \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^1}, p_t^{\theta^1}, \theta_t^1) - H(x_t^{\theta^1}, p_t^{\theta^1}, \theta_t^2) \right] + \mathbb{E}_{\mu_0}\left[ H(x_t^{\theta^2}, p_t^{\theta^2}, \theta_t^2) - H(x_t^{\theta^2}, p_t^{\theta^2}, \theta_t^1) \right] \leq K_L \| \delta\theta \|_{L^{\infty}} \left( \| \delta x \|_{L^{\infty}} + \| \delta p \|_{L^{\infty}} \right).$
Combining the above inequality with Lemma 1, we have:
$\| \delta\theta \|_{L^{\infty}}^2 \leq \frac{K_L}{\lambda_0} C(T) \| \delta\theta \|_{L^{\infty}}^2.$
However, $C(T) = o(1)$ as $T \to 0$; hence, by taking $T$ sufficiently small, so that $K_L C(T) < \lambda_0$, it follows that $\| \delta\theta \|_{L^{\infty}} = 0$. For more details, see [5] (Lemma 2, p. 28).  □
Within the ML setting, a small time horizon $T$ roughly corresponds to the regime where the reachable set of the forward dynamics is small. This can be interpreted as the case in which the model has low capacity, or low expressive power. In other words, Theorem 3 states that the optimal solution is unique when the network capacity is low, even when the number of parameters is huge. It is essential to underline that the hypothesis of strong concavity of the Hamiltonian $H$ does not imply convexity of the loss function $J$, which can still be highly non-convex due to the non-linear transformations introduced by the activation functions.

4. Conclusions

In this paper, typical NN structures are considered as the evolution of a dynamical system in order to rigorously state the population risk minimization problem related to DL. Two parallel and connected perspectives are followed. The result expressed in Theorem 2 represents the generalization of the PMP from the calculus of variations and a local characterization of the optimal trajectories derived from the HJB Equation (21). Deriving the necessary condition for optimality via the PMP provides several advantages: there is no reference to derivatives w.r.t. the probability measure in the Wasserstein sense; the parameter set $\Theta$ is highly general; the maximization is point-wise in $t$, once $x$ and $p$ are known. Moreover, it allows the possibility to develop learning algorithms that do not rely on the classical methods of DL, such as SGD. As a point of general relevance within optimal control theory, we remark that the controls, and hence the weights $\theta$, are only assumed to be measurable and essentially bounded in time; in particular, we allow for discontinuous controls. Even in the case where the number of parameters is infinite, as in the MFG setting, it is feasible to derive non-trivial stability estimates. This is in contrast with generalization bounds based on classical statistical learning theory, where an increasing number of parameters adversely affects generalization. On the other hand, it will be interesting to consider algorithms with weights taking only discrete, e.g., binary, values, requiring small amounts of memory, with corresponding efficiency and speed improvements. A further direction of our ongoing research is based on the study of the PMP from a discrete-time perspective, in order to relate the theoretical framework directly to applications.

Author Contributions

Conceptualization, L.D.P. and M.G.; Investigation, L.D.P. and M.G.; Methodology, L.D.P. and M.G.; Writing—original draft, L.D.P. and M.G.; Writing—review and editing, L.D.P. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cardaliaguet, P. Notes on Mean-Field Games; Technical Report, from P.-L. Lions' lectures at Collège de France; Collège de France: Paris, France, 2012.
  2. Touzi, N. Stochastic Control and Application to Finance; Chapters 1–4; Ecole Polytechnique: Paris, France, 2018.
  3. Carmona, R.; Delarue, F.; Lachapelle, A. Control of McKean–Vlasov Dynamics Versus Mean-Field Games; Technical Report; Springer: Berlin/Heidelberg, Germany, 2012.
  4. Li, Q.; Chen, L.; Tai, C.; E, W. Maximum principle based algorithms for deep learning. J. Mach. Learn. Res. 2018, 18, 1–29.
  5. Weinan, E.; Han, J.; Li, Q. A Mean-Field Optimal Control Formulation of Deep Learning. arXiv 2018, arXiv:1807.01083.
  6. Li, Q.; Lin, T.T.; Shen, Z. Deep Learning via Dynamical Systems: An Approximation Perspective. arXiv 2019, arXiv:1912.10382v1.
  7. Athans, M.; Falb, P.L. Optimal Control: An Introduction to the Theory and Its Applications; Courier Corporation, Dover Publications, Inc.: Mineola, NY, USA, 2013.
  8. Lacker, D. Mean-Field Games and Interacting Particle Systems. 2018. Available online: http://www.columbia.edu/~dl3133/MFGSpring2018.pdf (accessed on 17 November 2020).
  9. Gangbo, W.; Kim, H.K.; Pacini, T. Differential forms on Wasserstein space and infinite-dimensional Hamiltonian systems. arXiv 2009, arXiv:0807.1065.
  10. Sagitov, S. Weak Convergence of Probability Measures; Chalmers University of Technology and Gothenburg University, 2015. Available online: http://www.math.chalmers.se/~serik/WeakConv/C-space.pdf (accessed on 17 November 2020).
  11. Billingsley, P. Weak Convergence in Metric Spaces; Wiley Series in Probability and Statistics: Hoboken, NJ, USA, 1999.
  12. Bergström, H. Weak Convergence of Measures. In A Volume in Probability and Mathematical Statistics: A Series of Monographs and Textbooks; Springer: New York, NY, USA, 1982.
  13. Benazzoli, C.; Campi, L.; Di Persio, L. Mean field games with controlled jump–diffusion dynamics: Existence results and an illiquid interbank market model. Stoch. Process. Their Appl. 2020, 130, 6927–6964.
  14. Benazzoli, C.; Campi, L.; Di Persio, L. ϵ-Nash equilibrium in stochastic differential games with mean-field interaction and controlled jumps. Stat. Probab. Lett. 2019.
  15. Carrillo, J.A.; Pimentel, E.A.; Voskanyan, V.K. On a mean-field optimal control problem. Nonlinear Anal. Theory Methods Appl. 2020, 199, 112039.
  16. Liberzon, D. Calculus of Variations and Optimal Control Theory: A Concise Introduction; Princeton University Press: Princeton, NJ, USA, 2012.
  17. Ma, J.; Yong, J. Forward-Backward Stochastic Differential Equations and Their Applications; Springer: New York, NY, USA, 2007.
  18. Hadikhanloo, S. Learning in Mean-Field Games. Ph.D. Thesis, Université Paris-Dauphine, Paris, France, 2018.
  19. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions; Stochastic Modelling and Applied Probability; Springer: New York, NY, USA, 2006.
  20. Frankowska, H. Hamilton-Jacobi equations: Viscosity solutions and generalized gradients. J. Math. Anal. Appl. 1989, 141, 1.
  21. Weinan, E. A Proposal on Machine Learning via Dynamical Systems. arXiv 2019, arXiv:1912.10382v1.
  22. Kelley, W.G.; Peterson, A.C. The Theory of Differential Equations: Classical and Qualitative; Springer Science and Business Media: Berlin, Germany, 2010.