Deep Learning and Mean-Field Games: A Stochastic Optimal Control Perspective

Abstract: We provide a rigorous mathematical formulation of Deep Learning (DL) methodologies through an in-depth analysis of the learning procedures characterizing Neural Network (NN) models within the theoretical frameworks of Stochastic Optimal Control (SOC) and Mean-Field Games (MFGs). In particular, we show how the supervised learning approach can be translated into a (stochastic) mean-field optimal control problem by applying the Hamilton–Jacobi–Bellman (HJB) approach and the mean-field Pontryagin maximum principle. Our contribution sheds new light on a possible theoretical connection between mean-field problems and DL, merging heterogeneous approaches and reporting the state of the art within such fields to show how these different perspectives can indeed be fruitfully unified.


Introduction
Controlled stochastic processes, which naturally arise in a plethora of heterogeneous fields, spanning, e.g., from mathematical finance to industry, can be solved in the setting of continuous time stochastic control theory. In particular, when we have to analyse complex dynamics produced by the mutual interaction of a large set of indistinguishable players, an efficient approach to infer knowledge about the resulting behaviour, typical for example of a neuronal ensemble, is provided by Mean-Field Game (MFG) methods, as described in [1]. MFG theory generalizes classical models of interacting particle systems characterizing statistical mechanics. Intuitively, each particle is replaced by rational agents whose dynamics are represented by a Stochastic Differential Equation (SDE). The term mean-field refers to the highly symmetric form of interaction: the dynamics and the objective of each particle depend on an empirical measure capturing the global behaviour of the population. The solution of an MFG is analogous to a Nash equilibrium for a non-cooperative game [2].
The key idea is that the population limit can be effectively approximated by statistical features of the system corresponding to the behaviour of a typical group of agents, in a Wasserstein space sense [3]. On the other hand, Deep Learning (DL) is frequently used in several Machine Learning (ML) based applications, spanning from image classification and speech recognition to predictive maintenance and clustering. Therefore, it has become essential to provide a strong mathematical formulation and to analyse both the setting and the associated algorithms [4,5]. Commonly, Neural Networks (NNs) are trained through the Stochastic Gradient Descent (SGD) method. It updates the trainable parameters using gradient information computed randomly via a back-propagation algorithm with the disadvantage of being slow in the first steps of training. An alternative consists of expressing the learning procedure of an NN as a dynamical system (see [6]), which can be then analysed as an optimal control problem [7].
The present paper is structured as follows. In Section 2, we introduce the fundamentals about the Wasserstein space, Stochastic Optimal Control (SOC) and MFGs. In Section 3, we state our main result, translating the supervised learning problem into a mean-field optimal control problem.

The Wasserstein Space

Sampling a population of i.i.d. random variables produces a random probability measure, usually indicated as the empirical measure. According to [8] (pp. 12-16), P(X) is endowed with a metric (compatible with the notion of weak convergence) in order to consider P(X) as a metric space itself. Let us recall the Wasserstein metric, defined on P(X), based on the idea of coupling. In particular, given µ, ν ∈ P(X), Π(µ, ν) represents the set of Borel probability measures π on X × X with first, resp. second, marginal µ, resp. ν; namely, π(A × X) = µ(A) and π(X × A) = ν(A) for every Borel set A ⊂ X. Then, we define P_p(X), for p ≥ 1, as the set of probability measures µ ∈ P(X) satisfying:

∫_X d(x, x_0)^p µ(dx) < ∞ ,

where x_0 ∈ X is an arbitrary reference point. Consequently, the p-Wasserstein metric on P_p(X) is defined as:

W_p(µ, ν) := inf E[ d(X, Y)^p ]^{1/p} ,    (2)

where the infimum is taken over all pairs of X-valued random variables X, Y with, respectively, given marginals µ and ν.
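As a concrete illustration, in one dimension the optimal coupling simply pairs sorted samples, so the p-Wasserstein distance between two empirical measures with the same number of atoms can be computed directly. The following sketch is our own illustration (the function name and the equal-sample-size assumption are not from the text):

```python
import numpy as np

def wasserstein_p_1d(xs, ys, p=2):
    """Empirical p-Wasserstein distance between two 1-D samples of equal size.

    In one dimension the optimal coupling pairs sorted samples, so for
    empirical measures W_p(mu, nu) = ( mean |x_(i) - y_(i)|^p )^(1/p),
    where x_(i), y_(i) denote the order statistics of the two samples.
    """
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))
```

For instance, shifting a sample by a constant c moves its empirical measure by exactly |c| in any W_p, which gives a quick sanity check of the implementation.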

Stochastic Optimal Control Problem
Following [13] (see also [14] and the references therein), let {Ω, F, (F_t), P} be a filtered probability space, with filtration F = {F_t, t ∈ [0, T]}, T > 0, supporting:
• a controlled state variable (X^α_t)_{t∈[0,T]}, whose initial values are given by an i.i.d. sequence of R^n-valued F_0-measurable random variables;
• a sequence {W^i_t}_{i≥1} of independent and F_t-adapted Brownian motions.
We consider the control enforced on the state variable as α = (α_t)_{t∈[0,T]}, an adapted process belonging to the set A = {(α_t)_{t∈[0,T]}} of F-progressively measurable controls, which take values in A, identified as a closed subset of R^d. The objective function J is then defined as:

J(α) := E[ ∫_0^T f(α_t, X_t) dt + g(X_T) ] ,

where f : A × R^n → R denotes the running objective function, g : R^n → R denotes the terminal objective function, and the expectation is taken with respect to (w.r.t.) the probability measure P.
The goal corresponds to identifying the control α solving the maximization problem:

sup_{α∈A} J(α) ,

where the n-dimensional stochastic controlled process X_s satisfies:

dX_s = b(α_s, X_s) ds + σ(α_s, X_s) dW_s .    (3)
The drift function b : A × R^n → R^n and the volatility function σ : A × R^n → R^{n×d} are measurable, and they satisfy a uniform Lipschitz condition in x, i.e., there exists K > 0 such that, for all x, y ∈ R^n and α_s ∈ A, it holds that:

|b(α_s, x) − b(α_s, y)| + |σ(α_s, x) − σ(α_s, y)| ≤ K |x − y| .

The previous assumptions guarantee that the SDE (3) has a unique solution, denoted by X^{t,x}_s. Therefore, the objective function can be explicitly expressed in terms of (t, x), namely:

J(t, x, α) = E[ ∫_t^T f(α_s, X^{t,x}_s) ds + g(X^{t,x}_T) ] ,

with (t, x) ∈ [0, T] × R^n, and the process α ∈ A takes values α_s ∈ A. Let v be the value function:

v(t, x) := sup_{α∈A} J(t, x, α) ;

then the corresponding optimal control α̂ : [0, T] × R^n → A is defined by:

α̂(t, x) ∈ arg max_{α∈A} J(t, x, α) ,

whose solution can be found by exploiting two different and interconnected approaches, respectively based on the Hamilton-Jacobi-Bellman (HJB) equation and on the stochastic Pontryagin Maximum Principle (PMP). The first one moves from the Dynamic Programming Principle (DPP), leading to the nonlinear second-order Partial Differential Equation (PDE) known as the HJB equation (see [3]), namely:

∂_t v(t, x) + sup_{a∈A} { L^a v(t, x) + f(a, x) } = 0 ,

that holds ∀ (t, x) ∈ [0, T] × R^n, where L^a is a second-order operator called the infinitesimal generator of the controlled diffusion:

L^a v = b(a, x) · ∇_x v + (1/2) tr( σ(a, x) σ(a, x)^T ∇²_x v ) .

It is possible to define the Hamiltonian for the SOC problem [15,16] as H : R^n × R^n × S^n → R ∪ {∞}, written as follows:

H(x, y, z) := sup_{a∈A} { b(a, x) · y + (1/2) tr( σ(a, x) σ(a, x)^T z ) + f(a, x) } ,

where S^n denotes the set of symmetric n × n matrices and the variables y and z are called the adjoint variables. The previously defined Hamiltonian allows encapsulating the nonlinearity of the HJB equation, which can be rearranged as:

∂_t v(t, x) + H(x, ∇_x v(t, x), ∇²_x v(t, x)) = 0 .    (5)

On the other hand, the stochastic PMP leads to a system of coupled Forward-Backward SDEs (FBSDEs) plus an external optimality condition in terms of the Hamiltonian function; see, e.g., [2,17] and the references therein. We define the local Hamiltonian as:

H_t(x, α, p, q) := b(α, x) · p + tr( σ(α, x)^T q ) + f(α, x) ,    (6)

where (p, q) are the adjoint variables.
By assuming that f_t(x, u), b_t(x, u) and σ(x, u) are progressively measurable, bounded, continuously differentiable and Lipschitz in (x, u), we have that if an arbitrary control maximizes the Hamiltonian, then (necessary condition for optimality) it is the optimal one. Moreover, by requiring that x ↦ g(x) and x ↦ Ĥ_t(x, P̂_t, Q̂_t) = sup_ν H_t(x, ν, P̂_t, Q̂_t) are both concave functions, the PMP becomes also a sufficient condition to characterize the optimal control. In addition, by the envelope theorem, it follows that the optimal control ν̂ maximizing H also attains the derivative of the maximized Hamiltonian:

∂_x Ĥ_t(x, P̂_t, Q̂_t) = ∂_x H_t(x, ν̂, P̂_t, Q̂_t) .
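To make the HJB machinery tangible, consider a toy scalar linear-quadratic (LQ) problem with dynamics dx = (a x + b u) dt + σ dW and cost E[∫ (q x² + r u²) dt + q_T x_T²]: the quadratic ansatz v(t, x) = P(t) x² (plus a noise-dependent constant) reduces the HJB equation to a Riccati ODE for P(t), with optimal feedback u*(t, x) = −(b/r) P(t) x. The sketch below, with our own illustrative parameters (not from the text), integrates the Riccati equation backward in time:

```python
import numpy as np

def riccati_backward(a, b, q, r, qT, T=1.0, n=1000):
    """Solve the scalar Riccati ODE  -dP/dt = 2 a P - (b^2 / r) P^2 + q,
    with terminal condition P(T) = qT, by backward Euler.

    The HJB value function of the scalar LQ problem is v(t, x) = P(t) x^2
    (up to a noise-dependent additive constant), and the optimal feedback
    control is u*(t, x) = -(b / r) P(t) x.
    """
    dt = T / n
    P = np.empty(n + 1)
    P[n] = qT
    for k in range(n, 0, -1):
        # step backward in time: P(t - dt) ~ P(t) + dt * (2aP - (b^2/r)P^2 + q)
        P[k - 1] = P[k] + dt * (2 * a * P[k] - (b ** 2 / r) * P[k] ** 2 + q)
    return P
```

For a = 0, b = q = r = 1, q_T = 0, the Riccati equation has the closed-form solution P(t) = tanh(T − t), which provides a direct check of the backward sweep.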

Mean-Field Games
In what follows, according to [1,3,5,18], we will exploit the theory of SOC to analyse Mean-Field Games (MFGs), gaining insights into symmetric stochastic differential games whose number of players tends to infinity. In particular, consider N players i = 1, . . . , N, each of which controls a private state X^i whose dynamics read as follows:

dX^i_t = b(t, X^i_t, (X^j_t)_{j≠i}, α^i_t) dt + σ dW^i_t ,    (7)

σ ∈ R being a (common) constant multiplying the random noise component dW^i_t, steered by the i-th copy of a standard Brownian motion.
Each control α^i_t represents the strategy played by the i-th player, while the X^j_t model the influence of the states of the other players. The dynamics in Equation (7) can be reformulated as:

dX^i_t = b(t, X^i_t, μ̄^N_t, α^i_t) dt + σ dW^i_t ,    (8)

where the empirical distribution μ̄^N_t of the private states is defined by:

μ̄^N_t := (1/N) Σ_{j=1}^N δ_{X^j_t} .

Hence, the same dynamics are obtained for each player, since each player depends only on the global behaviour of the entire system and not on the state of any single player. Each individual minimizes a cost composed of a running cost and a terminal one. Therefore, the objective function J^i(α), ∀i, is defined by:

J^i(α) := E[ ∫_0^T f(t, X^i_t, μ̄^N_t, α^i_t) dt + g(X^i_T, μ̄^N_T) ] ,

the goal being to select ε-Nash equilibria, the vector (α̂^1, . . . , α̂^N) being an ε-Nash equilibrium if, for every i and every admissible control α^i:

J^i(α̂^1, . . . , α̂^N) ≤ J^i(α̂^1, . . . , α̂^{i−1}, α^i, α̂^{i+1}, . . . , α̂^N) + ε ,

meaning that if the i-th player changes his/her behaviour while the other players follow the fixed strategies, his/her cost J^i cannot improve by more than ε. Let us note that, if ε = 0, we are dealing with standard Nash equilibria. In this scenario, the solution of an MFG (see [3]) corresponds to a couple (α*, µ*), where µ*_t = L(X^{α*}_t) models the optimal empirical distribution and α*_t = φ*(t, X^i_t) the optimal strategies.
Due to the symmetry of the system, all the agents play the same strategy profile: hence, by fixing the flow of probability measures (µ_t)_{t∈[0,T]}, the solution φ* solves the SOC problem parametrized by the choice of the family (µ_t):

inf_α E[ ∫_0^T f(t, X_t, µ_t, α_t) dt + g(X_T, µ_T) ] ,

subject to:

dX_t = b(t, X_t, µ_t, α_t) dt + σ dW_t .

The solution of the latter optimization problem returns the best response of a representative player to the flow (µ_t), in a scenario where no player profits from changing his/her own strategy.
Concerning the choice of the flow of probability measures (µ_t)_{t∈[0,T]}, if N → ∞, the asymptotic independence implied by the law of large numbers ensures that the empirical measure μ̄^N_t tends to the statistical distribution of X_t, leading to the fixed point equation L(X^{α*,µ}) = µ. Once the optimal feedback φ* has been found, for each choice of the family (µ_t)_{0≤t≤T}, it is necessary to check that the optimal measure µ*_t can be recovered from the statistical distribution of the optimal states X^{α*}_t, i.e., L(X^{α*}_t) = µ_t must hold ∀t. By freezing the family µ of probability measures, the Hamiltonian becomes:

H^µ(t, x, y, α) := b(t, x, µ_t, α) · y + f(t, x, µ_t, α) .    (9)

Let us assume that there exists a regular function α̂(t, x, y) attaining the infimum over controls α of H^µ(t, x, y, α), which we denote by:

Ĥ^µ(t, x, y) := inf_{α∈A} H^µ(t, x, y, α) ,

ending up with a stochastic control problem that can be solved (see, e.g., [3] (pp. 7-9)) by applying one of the following two methods.

1. The PDE approach through the HJB equation. We introduce the value function following [3] (p. 7), i.e.:

v(t, x) := inf_{α∈A_t} E[ ∫_t^T f(s, X_s, µ_s, α_s) ds + g(X_T, µ_T) ] ,

where A_t denotes the set of admissible controls over the interval [t, T]. It is expected that v is the solution, in the viscosity sense, of the HJB equation (see, e.g., [19,20] and the references therein):

∂_t v + (σ²/2) Δv + Ĥ^µ(t, x, ∇_x v) = 0 .    (11)

Hence, denoting by X_t the flow of the optimally controlled state, the flow of statistical distributions should satisfy Kolmogorov's equation: introducing the notation β(t, x) := b(t, x, µ_t, φ*(t, x)), with φ* the optimal feedback, the flow (ν_t)_{0≤t≤T} of measures given by ν_t = L(X_t) satisfies:

∂_t ν_t + div( β(t, x) ν_t ) − (σ²/2) Δν_t = 0 ,    (12)

with initial condition ν_0 = µ_0. Such a PDE holds in the sense of distributions. Setting ν_t = L(X_t) = µ_t, we end up with a system of coupled non-linear forward-backward PDEs (11) and (12), constituting the so-called MFG-PDE system.
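The fixed-point structure described above can be illustrated numerically. The sketch below uses a deliberately simplified linear-quadratic toy model of our own (not the scheme of the references): we freeze the mean flow m(t), apply the simplified best-response feedback α = −(X − m), simulate the population by Euler-Maruyama, replace m(t) with the empirical mean, and iterate until m is (approximately) consistent with the law of the controlled states:

```python
import numpy as np

def mfg_fixed_point(n_particles=5000, n_steps=50, T=1.0, sigma=0.2,
                    n_iters=20, seed=0):
    """Toy linear-quadratic MFG solved by fixed-point (Picard) iteration.

    Each player follows dX = alpha dt + sigma dW with the (simplified)
    best-response feedback alpha = -(X - m(t)) to a frozen mean flow m(t).
    Iteration: simulate the population, replace m(t) by the empirical mean
    trajectory, repeat; at the fixed point, m = E[X^m] holds approximately.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x0 = 1.0 + 0.1 * rng.standard_normal(n_particles)  # initial population
    m = np.zeros(n_steps + 1)                          # initial guess for the mean flow
    for _ in range(n_iters):
        x = x0.copy()
        means = [x.mean()]
        for k in range(n_steps):
            alpha = -(x - m[k])                        # best response to frozen m
            x = x + alpha * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
            means.append(x.mean())
        m = np.array(means)                            # update the mean-field
    return m
```

By symmetry, the consistent mean flow of this toy model is the constant E[X_0], so the iteration can be checked against that value.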
On the other hand, we can approximate the solution of the aforementioned stochastic control problem via the stochastic Pontryagin principle. In particular, for each open-loop adapted control α = (α_t)_{0≤t≤T}, we denote by X^α = (X^α_t)_{0≤t≤T} the associated state, and we introduce the adjoint equation:

dY_t = −∂_x H^µ(t, X^α_t, Y_t, α_t) dt + Z_t dW_t ,  Y_T = ∂_x g(X^α_T, µ_T) ,    (13)

where H^µ corresponds to the Hamiltonian defined in Equation (9), and the solution (Y_t, Z_t)_{0≤t≤T} of the BSDE is called a set of adjoint processes. The PMP necessary condition states that, whenever X^α is an optimal state, the control must minimize the Hamiltonian α ↦ H^µ(t, X^α_t, Y_t, α) along the optimal trajectory. Conversely, if the Hamiltonian is convex w.r.t. the variables (x, α) and the terminal cost g is convex w.r.t. the variable x, then the system given by Equations (13) and (8) characterizes the optimal states of the MFG problem.

Main Result
In this section, we generalize the approach proposed in [5], providing an application of Mean-Field (MF) optimal control to Deep Learning (DL). In particular, we start by considering the learning process characterizing supervised learning as a population risk minimization problem, hence accounting for its probabilistic nature: the optimal control parameters, corresponding to the trainable weights of the associated Neural Network (NN) model, depend on the population distribution of input-target pairs, which constitutes the source of randomness.

Neural Network as a Dynamical System
In order to study DL as an optimal control problem, it is necessary to express the NN learning process as a dynamical system [6,21]. In the simplest form, the feed-forward propagation in a T-layer network, T ≥ 1, can be expressed by the following difference equation:

x_{t+1} = f_t(x_t, θ_t) ,  t = 0, 1, . . . , T − 1 ,    (14)

where x_0 is the input, e.g., an image, several time-series, etc., while x_T is the final output, to be compared to some target y_T by means of a given loss function. By moving from a discrete-time formulation to a continuous one, the forward dynamics we are interested in will be described by a differential equation that takes the role of (14). The learning aim is to tune the trainable parameters θ_0, . . . , θ_{T−1} so as to have x_T as close as possible to y_T, according to a specified metric and knowing that the target y_T is joined to the input x_0 by means of a probability measure µ_0. Following the dynamical systems approach developed in [6], the supervised learning method aims to approximate some function F, usually called the oracle, F : X → Y.
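The difference Equation (14) is exactly the structure of a residual network, where each layer applies an Euler-type update to the state. A minimal sketch, with our own illustrative choice of layer map f_t(x, θ_t) = x + tanh(W_t x + b_t) (not prescribed by the text):

```python
import numpy as np

def forward_propagate(x0, thetas, activation=np.tanh):
    """Discrete-time dynamical system x_{t+1} = x_t + activation(W_t x_t + b_t),
    i.e. a residual network read as the explicit Euler discretization of an ODE.

    `thetas` is a list of (W_t, b_t) pairs, one per layer (the trainable
    parameters theta_t of Equation (14)).
    """
    x = np.asarray(x0, dtype=float)
    for W, b in thetas:
        x = x + activation(W @ x + b)  # Euler step with unit step size
    return x
```

With all weights and biases set to zero, each layer is the identity map, which gives an immediate consistency check of the residual structure.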
As stated before, the set X ⊂ R d contains the d-dimensional array of inputs, e.g., images, financial time-series, sound recorded data, text, etc., while Y are the targets modelling the corresponding images, numerical forecast, or predicted texts.
In this setting, it is standard to define a hypothesis space of the functions realizable by the network dynamics. Training moves from a collection of K samples of input-target pairs {(x^i_0, y^i_T)}_{i=1}^K, the goal being to approximate F exploiting these training data points.
Let (Ω, F, P) be a probability space supporting random variables x_0 ∈ R^d and y_T ∈ R^l, jointly distributed according to µ_0 := P_{(x_0, y_T)}, with µ_0 modelling the distribution of the input-target pairs. The set of controls Θ ⊆ R^m denotes the admissible training weights, which are assumed to be essentially bounded, measurable functions in L^∞([0, T], Θ). The network depth, i.e., the number of layers, is denoted by T > 0. We also introduce the feed-forward function f : R^d × Θ → R^d, the regularization loss L : R^d × Θ → R and the terminal loss Φ : R^d × R^l → R. State dynamics are described by an Ordinary Differential Equation (ODE) of the form:

ẋ_t = f(x_t, θ_t) ,    (15)

representing the continuous version of Equation (14), equipped with an initial condition x_0, which is a random variable responsible for the randomness characterizing Equation (15). The population risk minimization problem in DL can then be expressed by the following MF-optimal control problem (see [5] (p. 5)):

inf_{θ∈L^∞([0,T],Θ)} J(θ) := E_{µ_0}[ Φ(x_T, y_T) + ∫_0^T L(x_t, θ_t) dt ] ,    (16)

subject to the dynamics expressed by the stochastic ODE (15). Since the weights θ are shared across the distribution µ_0 of the random input-target pairs (x_0, y_T), Equation (16) can be studied as an MF-optimal control problem.
On the other hand, the empirical risk minimization problem can be expressed by a sampled optimal control problem after drawing i.i.d. samples {(x^i_0, y^i_T)}_{i=1}^N from µ_0:

inf_{θ∈L^∞([0,T],Θ)} (1/N) Σ_{i=1}^N [ Φ(x^i_T, y^i_T) + ∫_0^T L(x^i_t, θ_t) dt ] ,

subject to the dynamics:

ẋ^i_t = f(x^i_t, θ_t) ,  x^i_0 = x^i ,  i = 1, . . . , N ,

whose solutions, moving from random initial conditions through a deterministic path, correspond to random variables. As in classical optimal control theory, the previous problem can be solved following two inter-connected approaches: a global theory, based on the Dynamic Programming Principle (DPP) leading to the HJB equation, or the Pontryagin Maximum Principle (PMP) approach, which expresses the solution by a system of Forward-Backward SDEs (FBSDEs) plus a local optimality condition.
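A minimal numerical sketch of the sampled problem, with a hypothetical scalar dynamics f(x, θ) = θx, terminal loss Φ(x_T, y_T) = (x_T − y_T)² and running cost L = λθ² (all our own illustrative choices, not from the text):

```python
import numpy as np

def empirical_risk(theta, samples, T=1.0, n_steps=100, lam=0.0):
    """Empirical risk for the toy scalar dynamics x' = theta * x (a
    hypothetical choice of f), terminal loss Phi(x_T, y) = (x_T - y)^2 and
    running cost L = lam * theta^2.

    `samples` is a list of (x0, yT) input-target pairs drawn from mu_0;
    the state ODE is integrated by explicit Euler with n_steps steps.
    """
    dt = T / n_steps
    total = 0.0
    for x0, yT in samples:
        x = x0
        for _ in range(n_steps):
            x = x + dt * theta * x          # explicit Euler on the state ODE
        total += (x - yT) ** 2 + lam * theta ** 2 * T
    return total / len(samples)
```

For this toy dynamics, the flow maps x_0 to x_0 e^{θT}, so the constant control θ = ln 2 (with T = 1) nearly interpolates the pair (1, 2), up to the Euler discretization error.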

HJB Equation
The idea behind the HJB formalism is to define a value function corresponding to the optimal loss of the control problem w.r.t. the general starting time and state. For the population risk minimization formulation expressed by Equation (16), the state argument of the value function corresponds to an infinite-dimensional object that models a joint distribution of the input-target as an element of a suitable Wasserstein space.
As regards random variables and their distributions, a suitable space must be defined for the rigorous treatment of the optimal control problem. In particular, we use the shorthand notation L²(Ω, R^{d+l}) for L²((Ω, F, P), R^{d+l}) to denote the set of R^{d+l}-valued square integrable random variables w.r.t. a given probability measure P. We then deal with a Hilbert space under the norm:

||X||_{L²} := E[ |X|² ]^{1/2} .

The set P₂(R^{d+l}) denotes the probability measures with finite second moment defined on the Euclidean space R^{d+l}. Let us recall that a random variable X ∈ L²(Ω, R^{d+l}) is square integrable if and only if its law P_X ∈ P₂(R^{d+l}). The space P₂(R^{d+l}) can be endowed with a metric by considering the Wasserstein distance defined in Equation (2). For p = 2, the two-Wasserstein distance reads:

W₂(µ, ν) = inf_{π∈Π(µ,ν)} ( ∫ |w − w'|² π(dw, dw') )^{1/2} ,

according to the marginals introduced in Section 2.1, or equivalently:

W₂(µ, ν) = inf { ||X − Y||_{L²} : P_X = µ, P_Y = ν } ;

see, e.g., [5] (p. 6). Moreover, ∀ µ ∈ P₂(R^{d+l}), we define the associated norm:

||µ|| := ( ∫_{R^{d+l}} |w|² µ(dw) )^{1/2} .

Given a measurable function ψ : R^{d+l} → R^q that is square integrable w.r.t. the probability distribution µ, the following bracket notation is introduced:

⟨ψ(·), µ⟩ := ∫_{R^{d+l}} ψ(w) µ(dw) .    (18)

Concerning the dynamical evolution of probability measures, let us fix ξ ∈ L²(Ω, R^{d+l}) and the control process θ ∈ L^∞([0, T], Θ). Then, the dynamics of the system can be written as:

W^{t,ξ,θ}_s = ξ + ∫_t^s f̄(W^{t,ξ,θ}_r, θ_r) dr ,

µ being the law associated with ξ, i.e., µ = P_ξ ∈ P₂(R^{d+l}), and we can rewrite the law of W^{t,ξ,θ}_s as:

P_{W^{t,ξ,θ}_s} = P_{W^{t,µ,θ}_s} .

Indeed, the law of the dynamics W^{t,ξ,θ}_s depends only on the law of ξ and not on the random variable itself; see, e.g., [5] (p. 7).
It turns out that, to obtain the HJB Equation (5) corresponding to the above introduced formulation, it is necessary to define the concept of the derivative w.r.t. a probability measure. To begin with, it is useful to consider probability measures on R^{d+l} as laws expressing probabilistic features of the R^{d+l}-valued random variables defined over the probability space (Ω, F, P). We then work in the Banach space of square integrable random variables to define the derivatives. Moreover, given a function u : P₂(R^{d+l}) → R, it is possible to lift it into its extension U defined on L²(Ω, R^{d+l}) as follows:

U(X) := u(P_X) ;

then the definition of the derivative w.r.t. a probability measure can be expressed in terms of U in the usual Banach space setting. In particular, we say that u is C¹(P₂(R^{d+l})) if the lifted function U is Fréchet differentiable with continuous derivatives. Since L²(Ω, R^{d+l}) can be identified with its dual, if the Fréchet derivative DU(X) exists, by Riesz's theorem it can be identified with an element of L²(Ω, R^{d+l}). It is worth underlining that DU(X) does not depend on X itself, but only on the law of X; hence, the derivative of u at µ = P_X is described by a function ∂_µ u(P_X) : R^{d+l} → R^{d+l} such that:

DU(X) = ∂_µ u(P_X)(X) .
By duality, we know that ∂_µ u(P_X) is square integrable w.r.t. µ. To state a chain rule in P₂(R^{d+l}), consider a dynamical system described by:

ẇ_t = f̄(w_t, θ_t) ,

where f̄ denotes the feed-forward dynamics. If a function u ∈ C¹(P₂(R^{d+l})), meaning that it is differentiable with a continuous derivative w.r.t. a probability measure, then, for all t ∈ [0, T], we have:

(d/dt) u(P_{w_t}) = E[ ∂_µ u(P_{w_t})(w_t) · f̄(w_t, θ_t) ] ,

where · denotes the usual inner product between vectors in R^{d+l}. Equivalently, exploiting the lifted function of u, we can state:

(d/dt) U(w_t) = E[ DU(w_t) · f̄(w_t, θ_t) ] .

Here, the variable w denotes the concatenated (d + l)-dimensional variable (x, y), where x ∈ R^d and y ∈ R^l. Correspondingly, f̄(w, θ) := (f(x, θ), 0) is the extended (d + l)-dimensional Feed-Forward Function (FFF), L̄(w, θ) := L(x, θ) is the extended (d + l)-dimensional regularization loss and Φ̄(w) := Φ(x, y) represents the terminal loss function.
Since the state variable is identified with a probability distribution µ ∈ P₂(R^{d+l}), the resulting objective functional can be defined as:

J(t, µ, θ) := E[ Φ̄(W^{t,ξ,θ}_T) + ∫_t^T L̄(W^{t,ξ,θ}_s, θ_s) ds ] ,

which can be written, with the concatenated variable w and the bracket notation introduced in (18), as:

J(t, µ, θ) = ⟨Φ̄(·), P_{W^{t,µ,θ}_T}⟩ + ∫_t^T ⟨L̄(·, θ_s), P_{W^{t,µ,θ}_s}⟩ ds .

In this setting, some assumptions are needed to solve Equation (16). In particular:
1. f, L and Φ are Lipschitz w.r.t. x, and the Lipschitz constants of f and L are independent of θ;
2. µ_0 ∈ P₂(R^{d+l}).
The value function v*(t, µ) is defined as the real-valued function on [0, T] × P₂(R^{d+l}) corresponding to the infimum of the functional J over the training parameters θ:

v*(t, µ) := inf_{θ∈L^∞([t,T],Θ)} J(t, µ, θ) .

It is essential to observe that the value function satisfies a recursive relation based on the Dynamic Programming Principle (DPP): for any optimal trajectory starting from any intermediate point, the remaining part of the trajectory must still be optimal. The latter principle can be expressed by writing the value function as:

v*(t, µ) = inf_θ [ ∫_t^{t̄} ⟨L̄(·, θ_s), P_{W^{t,µ,θ}_s}⟩ ds + v*(t̄, P_{W^{t,µ,θ}_{t̄}}) ] ,

∀ 0 ≤ t ≤ t̄ ≤ T and µ ∈ P₂(R^{d+l}). Considering a small increment of time t̄ = t + δt, with δt > 0, we can compute the Taylor expansion in the Wasserstein sense. By the chain rule in P₂(R^{d+l}), and since the infinitesimal δt does not affect the distribution µ and the controls θ (see [5] (p. 13)), integrating the second term and taking δt → 0, we obtain the HJB equation:

∂_t v(t, µ) + inf_{θ∈Θ} ⟨ ∂_µ v(t, µ)(·) · f̄(·, θ) + L̄(·, θ), µ ⟩ = 0 ,  v(T, µ) = ⟨Φ̄(·), µ⟩ .    (21)

Since the value function should solve the HJB equation, it is essential to find the precise link between the solution of this PDE and the value function obtained from the minimization of the functional J. To provide the result, we use a verification argument: if the solution of the HJB equation is smooth enough, then it corresponds to the value function v*; moreover, it allows computing the optimal control θ*.
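The DPP recursion used above can be checked directly on a toy deterministic discrete-time problem: backward dynamic programming, which implements v(t, x) = min_u [cost(x, u) + v(t+1, x')], must return the same optimal value as brute-force enumeration over all control sequences. The helper names and the example problem are our own illustration:

```python
import itertools
from functools import lru_cache

def value_by_dpp(x0, controls, T, step, cost, terminal):
    """Backward dynamic programming for a deterministic discrete-time control
    problem: v(t, x) = min_u [ cost(x, u) + v(t+1, step(x, u)) ], v(T, x) =
    terminal(x). States must be hashable (here: integers)."""
    @lru_cache(maxsize=None)
    def v(t, x):
        if t == T:
            return terminal(x)
        return min(cost(x, u) + v(t + 1, step(x, u)) for u in controls)
    return v(0, x0)

def value_by_enumeration(x0, controls, T, step, cost, terminal):
    """Brute force over all control sequences; must match the DPP value."""
    best = float("inf")
    for seq in itertools.product(controls, repeat=T):
        x, total = x0, 0.0
        for u in seq:
            total += cost(x, u)
            x = step(x, u)
        best = min(best, total + terminal(x))
    return best
```

The agreement of the two functions on any instance is a finite, classical analogue of the recursive relation satisfied by v*.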

Theorem 1 (The verification argument).
Let v be a function in C^{1,1}([0, T] × P₂(R^{d+l})). If v is a solution of the HJB equation in (21) and there exists θ̂ mapping (t, µ) → Θ attaining the infimum in the HJB equation, then v(t, µ) = v*(t, µ), and θ̂ is an optimal feedback control policy, i.e., θ = θ* is a solution of the population risk minimization problem expressed by Equation (16).

Proof of Theorem 1. Given any control process θ, applying Formula (20) between s = t and s = T with explicit time dependence gives v(t, µ) ≤ J(t, µ, θ), where the inequality comes from the infimum condition in (21). Since the control is arbitrary, we have:

v(t, µ) ≤ inf_θ J(t, µ, θ) = v*(t, µ) ;

then θ can be substituted with θ*, where θ*_s = θ̂(s, P_{W^{t,µ,θ*}_s}) is given by the optimal feedback control. Repeating the above argument, the inequality becomes an equality since the infimum is attained at θ̂:

v(t, µ) = J(t, µ, θ*) ≥ v*(t, µ) .

Thus, v(t, µ) = v*(t, µ), and θ̂ defines an optimal control policy. For more details, see [5] (Proposition 3, pp. 13-14).
The importance of Theorem 1 consists of linking smooth solutions of the parabolic PDE to the solutions of the population risk minimization problem, becoming a natural candidate for the DL problem.
Moreover, the optimal control policy θ̂ : [0, T] × P₂(R^{d+l}) → Θ is identified by computing the infimum in (21). Hence, it turns out that the HJB equation strongly characterizes the learning problem's solution for feedback, or closed-loop, networks: control weights are actively adjusted according to the outputs, and this is the essential feature of closed-loop control. Nevertheless, the solution comes from a PDE that is in general difficult to solve, even numerically. On the other hand, open-loop solutions can be obtained from the closed-loop control policy by sequentially setting θ_t = θ̂(t, P_{w_t}), w_t being the solution of the feed-forward ODE describing the dynamics of the state variable, driven by the feedback policy up to time t. Usually, within DL settings, open loops are used during training or to measure the inference of a trained model, since the trained weights of each neuron have a fixed value.
The main limitation of such a formulation lies in assuming that the value function v(t, µ) is continuously differentiable. It is then natural to study a more flexible characterization of v in terms of weak solutions, also denoted as viscosity solutions. Thus, it is worth considering a weaker formulation of the PDE, going beyond the concept of classical solutions by introducing the notion of viscosity solutions, hence allowing relevant results under weaker assumptions on the coefficients defining the (stochastic) differential problem we are interested in; see, e.g., [5] (Section 5, pp. 14-22) for more details.
The key idea relies on exploiting the lifting identification between measures and random variables and moving from the Wasserstein space P 2 (R d+l ) to the Hilbert space L 2 (Ω, R d+l ), using tools developed to study viscosity solutions.
Adopting the MF-optimal control viewpoint implies that the population risk minimization problem of DL can be studied as a variational problem, whose solution is characterized by a suitable HJB equation, in analogy with the classical calculus of variations. In other words, the HJB equation is a global characterization of the value function to be solved over the entire space P 2 (R d+l ) of input-target distributions. From the numerical point of view, it is a hard task to get a solution for the entire space; this is why the learning problem is typically locally solved, around some (small set of) trajectories generated according to the initial condition µ 0 ∈ P 2 (R d+l ), then applying the obtained feedback to nearby input-target distributions.

Mean-Field Pontryagin Maximum Principle
We have seen how the HJB approach provides a characterization of the optimal solution for the population risk minimization problem that holds globally in P 2 (R d+l ), at the price of being difficult to handle in practice. Moving from this consideration, the MF-PMP aims to show a local condition for optimality, expressed in terms of E[H], i.e., the expectation of the Hamiltonian function.
Starting from the population risk minimization problem defined in Equation (16) and given a collection of K sample input-target pairs, consider the single i-th input sample. The prediction of the network can be approximated by a deterministic transformation g(x^i_T) of the terminal state, for some g : R^d → Y, which is a function both of the initial input x^i and of the control parameters θ. Moreover, we define a loss function Φ : Y × Y → R, which is minimized when its arguments are equal. Therefore, the goal is to minimize:

(1/K) Σ_{i=1}^K Φ( g(x^i_T), y^i ) .

Since g is fixed, it can be absorbed into the definition of the loss function by defining Φ_i(·) := Φ(g(·), y^i).
Then, the supervised learning problem can be expressed as:

inf_{θ∈U} (1/K) Σ_{i=1}^K [ Φ_i(x^i_T) + ∫_0^T L(θ_t) dt ] ,    (24)

where L : Θ → R acts as a regularizer modelling a running cost.
The input variables x = (x^1, . . . , x^K) can be considered as elements of the Euclidean space R^{d×K}, representing the initial conditions of the following ODE system:

ẋ^i_t = f(x^i_t, θ_t) ,  x^i_0 = x^i ,  i = 1, . . . , K ,    (25)

where θ : [0, T] → Θ are the control parameters to be trained. The dynamics (25) are decoupled except for the control. A general space U for the controls θ is then defined as U := {θ : [0, T] → Θ : θ is Lebesgue measurable}, and we aim to choose θ ∈ U so as to have g(x^i_T) close to y^i for i = 1, . . . , K. To formulate the PMP as a set of necessary conditions for optimal solutions, it is useful to define the Hamiltonian H : [0, T] × R^d × R^d × Θ → R given by:

H(t, x, p, θ) := p · f(x, θ) − L(θ) ,    (26)

with p modelling an adjoint process as in Equation (6).
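The backward costate recursion of the PMP is precisely what back-propagation computes. The sketch below (toy scalar dynamics f(x, θ) = θx and quadratic losses, all our own illustrative choices) computes the exact gradient of the discretized objective via a forward state pass followed by a backward adjoint pass:

```python
import numpy as np

def train_grad(u, x0, y, lam=0.1, T=1.0):
    """Gradient of J(u) = (x_T - y)^2 + lam * sum_k dt * u_k^2 for the
    discretized toy dynamics x_{k+1} = x_k + dt * u_k * x_k, computed with
    the discrete adjoint (Pontryagin-style backward) equation.

    This mirrors back-propagation: the adjoint p_k = dJ/dx_k plays the
    role of the PMP costate.
    """
    n = len(u)
    dt = T / n
    # forward state pass
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        xs[k + 1] = xs[k] + dt * u[k] * xs[k]
    # backward (adjoint) pass
    p = 2.0 * (xs[n] - y)                  # terminal condition p_T = Phi'(x_T)
    grad = np.empty(n)
    for k in range(n - 1, -1, -1):
        # dJ/du_k = p_{k+1} * dx_{k+1}/du_k + running-cost term
        grad[k] = dt * (p * xs[k] + 2.0 * lam * u[k])
        p = p * (1.0 + dt * u[k])          # p_k = p_{k+1} * dx_{k+1}/dx_k
    return grad
```

The returned gradient is exact for the discretized objective and can be verified against central finite differences of the same discrete loss.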
Let us underline that all input-target pairs (x_0, y_T), connected by the distribution µ_0, share a common control parameter; this feature suggests developing a maximum condition that holds in an average sense. Indeed, the control is now enforced on the continuity equation describing the evolution of probability densities.
The following assumptions are needed:
1. f is bounded, and f and L are continuous w.r.t. θ;
Theorem 2 (Mean-Field Pontryagin Maximum Principle). Let assumptions 1-3 hold and let θ* ∈ L^∞([0, T], Θ) be the minimizer for J(θ), corresponding to the optimal control of the population risk minimization problem (16). Then, there exist absolutely continuous state and costate processes x*, p* such that:

ẋ*_t = f(x*_t, θ*_t) ,  x*_0 = x_0 ,    (27)

ṗ*_t = −∇_x H(t, x*_t, p*_t, θ*_t) ,  p*_T = −∇_x Φ(x*_T, y_T) ,    (28)

and the optimal control maximizes the Hamiltonian in the average sense:

E[ H(t, x*_t, p*_t, θ*_t) ] ≥ E[ H(t, x*_t, p*_t, θ) ] ,  ∀ θ ∈ Θ, for a.e. t ∈ [0, T] .    (29)

Applying the mean-field PMP with the empirical measure µ^N_0 in place of µ_0, the maximum condition becomes:

(1/N) Σ_{i=1}^N H(t, x^{θ*,i}_t, p^{θ*,i}_t, θ*_t) ≥ (1/N) Σ_{i=1}^N H(t, x^{θ*,i}_t, p^{θ*,i}_t, θ) ,  ∀ θ ∈ Θ ,    (33)

where x^{θ*,i}_t and p^{θ*,i}_t are defined through the input-target pair (x^i_0, y^i_T). Moreover, since µ^N_0 is a random measure, (33) is a random equation whose solutions correspond to random variables.
Concerning possible numerical analysis of the DL algorithm based on the maximum principle, we refer to [4] (pp. 13-15), where a comparison can be found between usual gradient approaches and the discrete formulation of the Mean-Field PMP stated in Theorem 2, with a loss function based on Equation (24). The test and train losses of some variants of SGD algorithms are compared to those of the mean-field algorithm based on the discrete PMP. It is possible to observe that the latter algorithm has a better convergence rate, being then faster. This improvement is mainly due to the fact that it avoids getting stuck in flat regions of the loss landscape, as clearly shown by the graphs reported in [4] (p. 14).
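The PMP-based training loop referred to above is, in its simplest discrete form, a Method of Successive Approximations (MSA): alternate a forward state pass, a backward costate pass, and a point-wise update of θ toward the maximizer of the Hamiltonian. Since basic MSA can diverge, the sketch below uses a damped update; the toy dynamics, damping factor and parameters are our own illustration, not the algorithm of [4]:

```python
import numpy as np

def msa_train(n_steps=20, n_iters=200, lam=1.0, beta=0.2, x0=1.0, y=2.0, T=1.0):
    """Damped Method of Successive Approximations (MSA) for the toy problem
    min_theta (x_T - y)^2 + lam * int theta_t^2 dt, with x' = theta_t * x.

    Each sweep: forward state pass, backward costate pass, then a damped
    point-wise update of theta toward the maximizer of the Hamiltonian
    H(x, p, theta) = p * theta * x - lam * theta^2.
    """
    dt = T / n_steps
    theta = np.zeros(n_steps)
    for _ in range(n_iters):
        xs = np.empty(n_steps + 1)
        xs[0] = x0
        for k in range(n_steps):                 # forward pass
            xs[k + 1] = xs[k] + dt * theta[k] * xs[k]
        p = -2.0 * (xs[-1] - y)                  # costate terminal condition
        target = np.empty(n_steps)
        for k in range(n_steps - 1, -1, -1):     # backward pass
            target[k] = p * xs[k] / (2.0 * lam)  # argmax_theta of H(x_k, p, .)
            p = p * (1.0 + dt * theta[k])
        theta += beta * (target - theta)         # damped MSA update
    return theta

def toy_loss(theta, lam=1.0, x0=1.0, y=2.0, T=1.0):
    """Discretized objective for the same toy problem."""
    dt = T / len(theta)
    x = x0
    for t in theta:
        x = x + dt * t * x
    return (x - y) ** 2 + lam * float(np.sum(theta ** 2)) * dt
```

After the sweeps, the loss is substantially below its value at the zero initialization, mimicking in miniature the behaviour reported for the PMP-based algorithm.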

Connection between the HJB Equation and the PMP
In what follows, we provide connections between the global formulation given by the HJB formalism and the local one given by the PMP, exploiting the classical link between Hamilton's canonical equations (ODEs) and the Hamilton-Jacobi equations (PDEs). The Hamiltonian dynamics of Equations (27) and (28) describe the trajectory of a random variable that is completely determined by the random variables (x_0, y_T). On the other hand, the optimality condition described by Equation (29) does not depend on the particular probability measure of the initial input-target pairs. Notice that the maximum principle can be expressed in terms of a Hamiltonian flow that depends on a probability measure in a suitable Wasserstein space, Equation (29) being the corresponding lifted version. Analogously, in order to have both solutions in the same functional space, the HJB equation has to be lifted to the L²(Ω, R^{d+l}) space.
Starting from the lifted Bellman Equation (23), posed in L²(Ω, R^{d+l}), it is possible to apply the method of characteristics and define a system of equations, after introducing P_t = DV(t, ξ_t). Suppose Equation (34) has a solution satisfying the initial condition ξ_0 ~ µ_0 and the terminal condition, inherited from the Bellman equation, P_T = ∇Φ̄(ξ_T). We also assume that θ̂(ξ, P) is the optimal control achieving the infimum in (21) as an interior point of Θ; then we can explicitly write the Hamiltonian evaluated at the optimal control as:

E[ f̄(ξ, θ̂(ξ, P)) · P + L̄(ξ, θ̂(ξ, P)) ] .

Therefore, by the first-order condition, we have:

E[ ∇_θ f̄(ξ, θ̂(ξ, P)) P + ∇_θ L̄(ξ, θ̂(ξ, P)) ] = 0 ,

so that, taking into consideration Equation (34), we obtain Hamilton-type equations:

ξ̇_t = f̄(ξ_t, θ̂(ξ_t, P_t)) ,  Ṗ_t = −∇_w [ f̄(ξ_t, θ̂(ξ_t, P_t)) · P_t + L̄(ξ_t, θ̂(ξ_t, P_t)) ] .    (36)

Using w = (x, y) as the concatenated variable and θ* = θ̂(ξ_t, P_t), remarking that the last l components of f̄ are zero and considering only the first d components, we recover the state-costate system of the PMP. Summing up: Hamilton's equations of the system (36) can be viewed as the characteristic equations of the HJB equation in its lifted formulation described by Equation (23). Essentially, the PMP gives a necessary condition along any characteristic of the HJB equation: any characteristic originating from µ_0, the initial law of the random variables, must satisfy the local necessary optimality condition constituted by the mean-field PMP. This justifies the claim that the PMP constitutes a local condition when compared to the HJB equation.

Small-Time Uniqueness
A natural question is whether the PMP solutions also provide sufficient conditions for optimality. We start by noting that uniqueness of the solution implies sufficiency, and we investigate which assumptions are needed to have a unique solution of the mean-field PMP equations. Equations (27) and (28) model a highly nonlinear two-point boundary value problem in terms of x* and p*, which are also coupled through their laws. In general, even without the coupling, this kind of boundary value problem is known not to have a unique solution; see, e.g., Ch. 7 of [22]. Strong assumptions are therefore needed, and uniqueness can be proven in the small-time case.
Before proving the theorem, we report the following lemma, which provides an estimate of the difference between the flow-maps driven by two different controls in terms of the small-time horizon T: there exists a constant T_0 such that, for all T ∈ [0, T_0), the distance between the two flow-maps is bounded by C(T) times the distance between the corresponding controls, where C(T) > 0 satisfies C(T) → 0 as T → 0.
Through the above estimate, it is now possible to prove Theorem 3.
Within the ML setting, a small time horizon T roughly corresponds to the regime where the reachable set of the forward dynamics is small. This can be interpreted as the case where the model has low capacity, or low expressive power. In other words, Theorem 3 states that the optimal solution is unique if the network capacity is low, even when there is a huge number of parameters. It is essential to underline that the hypothesis of strong concavity of the Hamiltonian H does not imply convexity of the loss function J, which can still be highly non-convex due to the non-linear transformations introduced by the activation functions.

Conclusions
In this paper, typical NN structures are considered as the evolution of a dynamical system in order to rigorously state the population risk minimization problem related to DL. Two parallel and connected perspectives are followed. The result expressed in Theorem 2 represents the generalization of the PMP of the calculus of variations and a local characterization of the optimal trajectories derived from the HJB Equation (21). Deriving the necessary condition for optimality via the PMP provides several advantages: there is no reference to the derivative w.r.t. the probability measure in the Wasserstein sense; the parameter set Θ is highly general; maximization is point-wise in t, once x and p are known. Moreover, it allows developing learning algorithms without referring to the classical methods of DL, such as SGD. As a point of general relevance within optimal control theory, we remark that the controls, and then the weights θ, are only assumed to be measurable and essentially bounded in time; in particular, we allow for discontinuous controls. Even in the case where the number of parameters is infinite, as in the MFG setting, it is feasible to derive non-trivial stability estimates. This aspect contrasts with generalization bounds based on classical statistical learning, where an increasing number of parameters adversely affects generalization. On the other hand, it will be interesting to consider algorithms with weights taking only discrete, e.g., binary, values, requiring small amounts of memory, with corresponding efficiency and speed improvements. A further direction of our ongoing research is the study of the PMP from a discrete-time perspective, in order to relate the theoretical framework directly to the applications.