A Stochastic Maximum Principle for Markov chains of mean-field type

We derive sufficient and necessary optimality conditions, in terms of a stochastic maximum principle (SMP), for controls associated with cost functionals of mean-field type, under dynamics driven by a class of Markov chains of mean-field type: pure jump processes obtained as solutions of a well-posed martingale problem. As an illustration, we apply the result to generic examples of control problems and to some concrete applications.


INTRODUCTION
The goal of this paper is to find sufficient and necessary optimality conditions, in terms of a stochastic maximum principle (SMP), for a set of admissible controls û which minimize payoff functionals of the form

J(u) := E^u[ ∫_0^T f(t, x(t), E^u[κ_f(x(t))], u(t)) dt + h(x(T), E^u[κ_h(x(T))]) ]

w.r.t. admissible controls u, for some given functions f, h, κ_f and κ_h, under dynamics driven by a pure jump process x with state space I = {0, 1, 2, 3, . . .} whose jump intensity under the probability measure P^u is of the form

λ^u_{ij}(t) := λ_{ij}(t, x, E^u[κ(x(t))], u(t)), i, j ∈ I,

for some given functions λ and κ, as long as the intensities are predictable. Due to the dependence of the intensities on the mean of (a function of) x(t) under P^u, the process x is commonly called a nonlinear Markov chain, or Markov chain of mean-field type, although it does not satisfy the standard Markov property, as explained in the seminal paper by McKean [McK66] for diffusion processes. A more general situation is when the jump intensities depend on the marginal law P^u ∘ x^{-1}(t) of x(t) under P^u. To keep the content of the paper as simple as possible, we do not treat this general case. The dependence of the intensities on the whole path x makes the jump process cover a large class of real-world applications. The present work is a continuation of [CDT16], where we proved existence and uniqueness of this class of processes, in terms of a martingale problem, and derived sufficient conditions (cf. Theorem 4.6 in [CDT16]) for the existence of an optimal control which minimizes J(u), for a rather general class of (unbounded) jump intensities. Since the suggested conditions are rather difficult to apply in concrete situations (see Remark 4.7 and Example 4.8 in [CDT16]), we aim in this paper to investigate whether the SMP can yield optimality conditions that are tractable and easy to verify.
While in the usual strong-type control problems the dynamics is given in terms of a process X^u which solves a stochastic differential equation (SDE) on a given probability space (Ω, F, Q), the dynamics in our formulation is given in terms of a family of probability measures (P^u, u ∈ U), with x as the coordinate process, i.e. the process itself does not change with the control u. This type of formulation is usually called a weak-type formulation of the control problem.
The main idea of the martingale and dynamic programming approaches to optimal control problems for jump processes (without mean-field coupling), suggested in previous work including the first papers on the subject [BV77, Bis78, DE77, WD79] (the list of references is far from exhaustive), is to use the Radon-Nikodym density process L^u of P^u w.r.t. some reference probability measure P as dynamics and recast the control problem as a standard one. In this paper we apply the same idea and recast the control problem as a mean-field-type control problem to which an SMP can be applied. By a Girsanov-type result for pure jump processes, the density process L^u is a martingale and solves a linear SDE driven by some accompanying P-martingale M. The adjoint process associated with the SMP solves a (Markov chain) backward stochastic differential equation (BSDE) driven by the P-martingale M, whose existence and uniqueness can be derived using the results of Cohen and Elliott [CE12, CE15]. For some linear and quadratic cost functionals, we explicitly solve these BSDEs and derive a closed form of the optimal control.
In Section 2, we briefly recall the basic stochastic calculus for pure jump processes that we use in the sequel. In Section 3, we derive sufficient and necessary optimality conditions for the control problem; these are stated in terms of a mean-field stochastic maximum principle, where the adjoint equation is a Markov chain BSDE. In Section 4, we illustrate the results with two examples of optimal control problems that involve two-state chains and linear-quadratic cost functionals. We also consider optimal control of a mean-field version of the Schlögl model for chemical reactions. We restrict attention to linear and quadratic cost functionals in all examples for the sake of simplicity, and also because, in these cases, we obtain the optimal controls in closed form.
The obtained results extend easily to pure jump processes taking values in more general state spaces such as I = Z^d, d ≥ 1.

PRELIMINARIES
Let I := {0, 1, 2, . . .}, equipped with its discrete topology and σ-field, and let Ω := D([0, T], I) be the space of functions from [0, T] to I that are right continuous with left limits at each t ∈ [0, T) and left continuous at time T. We endow Ω with the Skorohod metric d_0, so that (Ω, d_0) is a complete separable metric (i.e. Polish) space. Given t ∈ [0, T] and ω ∈ Ω, put x(t, ω) ≡ ω(t) and denote by F^0_t := σ(x(s), s ≤ t), 0 ≤ t ≤ T, the filtration generated by x. Denote by F the Borel σ-field over Ω. It is well known that F coincides with σ(x(s), 0 ≤ s ≤ T).
To x we associate the indicator processes I_i(t) := 1_{x(t)=i}, whose value is 1 if the chain is in state i at time t and 0 otherwise, and the counting processes N_ij, i ≠ j, which count the number of jumps from state i into state j during the time interval (0, t]:

(1) N_ij(t) := #{s ∈ (0, t] : x(s−) = i, x(s) = j}.

Obviously, since x is right continuous with left limits, both I_i and N_ij are right continuous with left limits. Moreover, by the relationship (1), the state process, the indicator processes and the counting processes carry the same information, which is represented by the natural filtration F^0 := (F^0_t, 0 ≤ t ≤ T) of x. Note that (1) is equivalent to the following useful representation:

(2) I_j(t) = I_j(0) + Σ_{i ≠ j} (N_ij(t) − N_ji(t)).

Let G = (g_ij, i, j ∈ I), where the g_ij are constant entries, be a Q-matrix: g_ij ≥ 0 for i ≠ j and Σ_j g_ij = 0 for every i ∈ I. By Theorem 4.7.3 in [EK09], or Theorem 20.6 in [RW00] (for the finite state space), given the Q-matrix G and a probability measure ξ over I, there exists a unique probability measure P on (Ω, F) under which the coordinate process x is a time-homogeneous Markov chain with intensity matrix G and starting distribution ξ, i.e. such that P ∘ x^{-1}(0) = ξ. Equivalently, P solves the martingale problem for G with initial probability distribution ξ, meaning that, for every bounded function f on I, the process

f(x(t)) − f(x(0)) − ∫_0^t (Gf)(x(s)) ds, where (Gf)(i) := Σ_{j ≠ i} g_ij (f(j) − f(i)),

is a P-martingale. By Lemma 21.13 in [RW00], the compensated processes associated with the counting processes N_ij, defined by

M_ij(t) := N_ij(t) − ∫_0^t g_ij I_i(s−) ds, i ≠ j,

are zero-mean, square-integrable and mutually orthogonal P-martingales whose predictable quadratic variations are

⟨M_ij⟩(t) = ∫_0^t g_ij I_i(s−) ds.

Moreover, at jump times t we have ΔM_ij(t) = ΔN_ij(t), so that the optional variation of M_ij is [M_ij](t) = N_ij(t). We call M := {M_ij, i ≠ j} the accompanying martingale of the counting process N := {N_ij, i ≠ j}. Hereafter, a process from [0, T] × Ω into a measurable space is said to be predictable (resp. progressively measurable) if it is measurable w.r.t. the predictable σ-field on [0, T] × Ω (resp. progressively measurable w.r.t. F).
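The construction above lends itself to simulation: a chain with Q-matrix G can be generated by drawing an exponential holding time with rate −g_ii in the current state i and then jumping to j ≠ i with probability g_ij/(−g_ii). The following Python sketch (our own illustration; the function names are not from the text) does this and checks empirically that the compensated process M_ij(t) = N_ij(t) − ∫_0^t g_ij I_i(s) ds has mean close to zero.

```python
import random

def simulate_chain(Q, i0, T, rng):
    """Simulate a continuous-time Markov chain with Q-matrix Q on states
    {0, ..., n-1}, started at i0, on [0, T].  Returns the piecewise-constant
    path [(jump time, state), ...] and the jump-count matrix N."""
    n = len(Q)
    t, i = 0.0, i0
    N = [[0] * n for _ in range(n)]
    path = [(0.0, i0)]
    while True:
        rate = -Q[i][i]                    # total jump intensity out of i
        if rate <= 0:                      # absorbing state
            break
        t += rng.expovariate(rate)         # exponential holding time
        if t >= T:
            break
        r, acc = rng.random() * rate, 0.0  # pick j != i w.p. Q[i][j]/rate
        for j in range(n):
            if j == i:
                continue
            acc += Q[i][j]
            if r < acc:
                break
        N[i][j] += 1
        path.append((t, j))
        i = j
    return path, N

def occupation_time(path, i, T):
    """Total time the path spends in state i on [0, T]."""
    total = 0.0
    for k, (t, s) in enumerate(path):
        t_next = path[k + 1][0] if k + 1 < len(path) else T
        if s == i:
            total += t_next - t
    return total
```

For instance, with Q = [[−1, 1], [2, −2]] and T = 1, averaging N_01(1) − g_01 · (time spent in state 0) over a few thousand simulated paths gives a value near zero, as the martingale property of M_01 predicts.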
For real-valued matrices m := (m_ij, i, j ∈ I) and n := (n_ij, i, j ∈ I) indexed by I × I, we let

⟨m, n⟩_g := Σ_{i ≠ j} m_ij n_ij g_ij I_i(t−), ‖m‖²_g := ⟨m, m⟩_g.

If m is time-dependent, we simply write ‖m(t)‖²_g.

A STOCHASTIC MAXIMUM PRINCIPLE
We consider controls with values in some subset U of R^d and let U be the set of F-predictable processes with values in U. For u ∈ U, let P^u be the probability measure on (Ω, F) under which the coordinate process x is a jump process with intensities

λ^u_{ij}(t) := λ_{ij}(t, x, E^u[κ(x(t))], u(t)), i, j ∈ I.

In this section we propose to characterize minimizers ū of J, i.e. ū ∈ U satisfying

J(ū) = inf_{u ∈ U} J(u),

in terms of a stochastic maximum principle (SMP). We first state and prove the sufficient optimality conditions. Then, we state the necessary optimality conditions.
Let P be the probability measure on (Ω, F) under which x is a time-homogeneous Markov chain such that P ∘ x^{-1}(0) = ξ and with Q-matrix (g_ij)_ij satisfying (3). Then, by a Girsanov-type result for pure jump processes (see e.g. [RW00, Brè81]), the density process L^u(t) := dP^u/dP|_{F_t} is the unique solution of the linear SDE

(16) dL^u(t) = L^u(t−) Σ_{i ≠ j} ℓ^u_{ij}(t) dM_ij(t), L^u(0) = 1,

where ℓ^u_{ij}(t) := λ^u_{ij}(t)/g_ij − 1 and (M_ij)_ij is the P-martingale given in (6). Moreover, the accompanying martingale of N under P^u is

M^u_ij(t) := N_ij(t) − ∫_0^t λ^u_{ij}(s) I_i(s−) ds.

Noting that E^u[·] = E[L^u(T) ·], integrating by parts and taking expectations, we obtain

(19) J(u) = E[ ∫_0^T L^u(t) f(t, x(t), E[L^u(t) κ_f(x(t))], u(t)) dt + L^u(T) h(x(T), E[L^u(T) κ_h(x(T))]) ].

We have thus recast our problem of controlling a Markov chain through its intensity matrix as a standard control problem which aims at minimizing the cost functional (19) under the dynamics given by the density process L^u satisfying (16), to which the mean-field stochastic maximum principle in [BDL] can be applied. The corresponding optimal dynamics is given by the probability measure P̂ on (Ω, F) defined by dP̂/dP|_{F_t} = Lū(t), where Lū is the associated density process. The pair (Lū, ū) is called the optimal pair associated with (13).
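For constant intensity matrices, the density L^u(T) admits the classical product form: a ratio of intensities over the observed jumps, times an exponential compensator term. The sketch below is our own illustration, assuming constant matrices g and λ^u (it does not cover the paper's general time-dependent, mean-field setting); when λ^u = g it returns 1, as it must.

```python
import math

def density_at_T(path, T, g, lam):
    """Radon-Nikodym density L(T) = dP^u/dP on F_T for a pure jump process
    observed along a piecewise-constant path [(jump time, state), ...]:
    product of intensity ratios lam[i][j]/g[i][j] over the observed jumps,
    times exp of the integrated difference of total jump intensities
    (constant intensity matrices g and lam assumed)."""
    logL = 0.0
    n = len(g)
    for k, (t, i) in enumerate(path):
        t_next = path[k + 1][0] if k + 1 < len(path) else T
        # compensator: integral of sum_{j != i} (g[i][j] - lam[i][j])
        # over the sojourn in state i
        out_g = sum(g[i][j] for j in range(n) if j != i)
        out_l = sum(lam[i][j] for j in range(n) if j != i)
        logL += (out_g - out_l) * (t_next - t)
        if k + 1 < len(path):              # a jump from i to the next state
            j = path[k + 1][1]
            logL += math.log(lam[i][j] / g[i][j])
    return math.exp(logL)
```

As a sanity check, for any path the call with lam equal to g returns exactly 1, and doubling all off-diagonal intensities reweights each jump by a factor 2 while discounting by the extra compensator mass.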
To the admissible pair of processes (Lū, ū) we associate the solution (p, q) (if it exists) of the following linear BSDE of mean-field type, known as the first-order adjoint equation. In the next proposition we give sufficient conditions on f, h, ℓ, κ, κ_f and κ_h that guarantee existence of a unique solution to the BSDE (21).
This solution is unique up to indistinguishability for p and up to equality dP × g_ij I_i(s−) ds-almost everywhere for q.
Proof. Assumptions (A1) and (A2) make the driver of the BSDE (21) Lipschitz continuous in q. The proof is similar to that of Theorem 3.1 for the Brownian-motion-driven mean-field BSDE derived in [BLP], considering an appropriate weighted norm involving a parameter β > 0, along with Itô's formula for purely discontinuous semimartingales. We omit the details.

Let (Lū, ū) be an admissible pair and (p, q) the associated first-order adjoint process, solution of (21).
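As a sanity check on the adjoint equation, note that when the driver vanishes the first component p of a Markov chain BSDE reduces to a conditional expectation p(t, i) = E[ξ(x(T)) | x(t) = i], which solves the backward Kolmogorov equation. The following sketch (our own zero-driver special case, not the mean-field BSDE (21) itself) solves that equation by backward Euler on a finite-state chain.

```python
import math

def backward_kolmogorov(Q, terminal, T, steps):
    """Backward Euler for  dp/dt(i) + sum_j Q[i][j] p(j) = 0,  p(T) = terminal.
    This is the zero-driver special case of the adjoint BSDE, for which
    p(t, i) = E[terminal(x(T)) | x(t) = i].  Returns p(0, .) as a list."""
    n = len(Q)
    dt = T / steps
    p = list(terminal)
    for _ in range(steps):
        # step backward in time: p(t - dt) = p(t) + dt * Q p(t)
        p = [p[i] + dt * sum(Q[i][j] * p[j] for j in range(n))
             for i in range(n)]
    return p
```

For the symmetric two-state chain with unit rates and terminal data (1, 0), the explicit solution is p(t, 0) = ½(1 + e^{−2(T−t)}), against which the scheme can be compared.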
For v ∈ U, we introduce the Hamiltonian associated with our control problem. Next, we state the SMP sufficient and necessary optimality conditions, but only prove the sufficient case; the proof of the necessary optimality conditions is tedious and more involved, but by now 'standard', and can be derived following the same steps as in [BDL, SS13, TL94].
Furthermore, for u and ū in U, we set (26)

EXAMPLES
In this section we first solve the adjoint equation associated with an optimal control problem for a standard two-state Markov chain; then we extend the problem to a two-state Markov chain of mean-field type. As mentioned in Remark 3.5, whether the sufficient or necessary conditions apply depends, of course, on the smoothness of the functions involved. Not all the functions involved in the next examples satisfy the convexity conditions imposed in Theorem 3.3.

Example 1. Optimal control of a standard two-state Markov chain.
We study the optimal control of a simple Markov chain x whose state space is X = {a, b}, where 0 ≤ a < b are integers, and whose jump intensity matrix depends on a given positive constant intensity α and on the control process u, which we assume nonnegative, bounded and predictable. Let P be the probability measure under which the chain x has intensity matrix

G = ( −g_ab   g_ab
       g_ba  −g_ba ),  g_ab, g_ba > 0.
Further, let L^u(t) = dP^u/dP|_{F_t} be the density process given by (16), where ℓ is defined in terms of the controlled intensities. The control problem we want to solve consists of finding the optimal control ū that minimizes the linear-quadratic cost functional. Given a control v ∈ U, consider the Hamiltonian. By the first-order optimality conditions, an optimal control ū is a solution of the equation obtained by setting the derivative of the Hamiltonian in v to zero. The optimal control is thus obtained in feedback form, where, for each t, q_ba(t) ≥ 0, since ū(t) ≥ 0. It remains to identify q_ba(t). Consider the associated adjoint equation. In view of (28), the driver reads

(29) ⟨ℓū(t), q(t)⟩_g − ½ ū(t)².
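The structure of the first-order condition can be seen on a toy Hamiltonian: with a running cost quadratic in the control and ℓ^v affine in v, the map v ↦ H(v) is a concave quadratic, so the maximizer over v ≥ 0 is the positive part of the unconstrained stationary point. The sketch below is a generic illustration, with a hypothetical scalar coefficient c standing in for the ⟨ℓ, q⟩_g contribution (it is not the exact Hamiltonian of this example), verified against a brute-force grid search.

```python
def optimal_u(c):
    """Maximize the concave quadratic H(u) = c*u - 0.5*u**2 over u >= 0.
    First-order condition: H'(u) = c - u = 0, projected onto [0, inf)."""
    return max(c, 0.0)

def grid_argmax(c, lo=0.0, hi=10.0, steps=100000):
    """Brute-force check of the first-order condition on a uniform grid."""
    best_u, best_h = lo, c * lo - 0.5 * lo ** 2
    for k in range(steps + 1):
        u = lo + (hi - lo) * k / steps
        h = c * u - 0.5 * u ** 2
        if h > best_h:
            best_u, best_h = u, h
    return best_u
```

The projection onto [0, ∞) is what forces the adjoint component multiplying the control to be nonnegative whenever the optimal control is required to be nonnegative, mirroring the remark that q_ba(t) ≥ 0 since ū(t) ≥ 0.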

The adjoint equation becomes
Now, consider the probability measure P̄ under which x is a Markov chain with the jump intensity matrix obtained above. The processes M̄_ij, obtained by compensating N_ij with the corresponding intensities, are P̄-martingales having the same jumps as the martingales M_ij, and

(31) dp(t) = q_ab(t) dM̄_ab(t) + q_ba(t) dM̄_ba(t).
This yields the jumps of p. Integrating (31) and then taking conditional expectations yields p(t). Under the probability measure P̄, taking conditional expectations, we obtain two further identities which, in view of (33), imply the identification of q_ba(t), and hence the following explicit form of the optimal control.

In the next two examples we highlight the effect of the mean-field coupling, in both the jump intensity and the cost functional, on the optimal control.

Example 2. Mean-field optimal control of a two-state Markov chain. We consider the same chain as in the first example, but with mean-field-type jump intensities (t ∈ [0, T]), and we want to minimize a cost functional in which Var^u(x(T)) denotes the variance of x(T) under the probability P^u. Given a control v ∈ U, consider the corresponding Hamiltonian. Performing similar calculations as in Example 1, we find the optimal control ū. We will now identify q_ba. In view of (36), the driver of the associated adjoint equation can be written explicitly, and the adjoint equation follows.

Consider the probability measure P̄, under which x is a Markov chain with the corresponding jump intensity matrix Ḡ(t). This change of measure yields the P̄-martingales M̄_ab and M̄_ba. Integrating (38), then taking conditional expectations, yields p(t). Next, we compute the right-hand side of (40), and then identify q_ba by matching.

Set μ̄(t) := E^ū[x(t)] and φ(t, x(t)) := (x(t) − μ̄(t))². Under P̄, Dynkin's formula, followed by taking conditional expectations, yields the required expressions, and matching (39) with (41) identifies q_ba. Noting that a ≤ μ̄(t) ≤ b, to guarantee that both λ^ū(t) and Ḡ(t) above are indeed intensity matrices, it suffices to impose appropriate bounds on the coefficients.

We further characterize the optimal control ū(t) by finding μ̄(t), which satisfies (42). Indeed, under P^ū, x admits a semimartingale representation; taking expectations under P^ū shows, in particular, that the mapping t ↦ μ̄(t) is absolutely continuous. Thus, in view of (42), μ̄ should satisfy the following constrained Riccati equation, where m_0 is a given initial value.
As is well known, without the imposed constraint on μ̄, the Riccati equation admits an explicit solution that may explode in finite time unless the coefficients a, b, α and m_0 stay within certain ranges. With the imposed constraint on μ̄, these ranges may become even tighter. Below we illustrate this through a few cases. As shown in the tables below, for low values of α the ODE (44) can be solved over any time horizon. How low the intensity should be depends mainly on the size of b and of b − a: the larger b is, the wider the range of α for which the ODE is solvable. In particular, when a = 0 and b = 1, (44) is solvable over any time horizon when α = 0.1, 0.2. For larger values of α, the ODE violates the constraint proportionally "faster".
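Since equation (44) is not reproduced here, we illustrate the explosion phenomenon on a generic scalar Riccati equation μ' = c_0 + c_1 μ + c_2 μ² (hypothetical coefficients, not those of (44)): an explicit Euler scheme with a constraint monitor reports the first time μ leaves [a, b], if any. The textbook case μ' = 1 + μ², μ(0) = 0, whose solution tan t blows up at t = π/2, exits any bounded interval in finite time, while a contractive linear case never does.

```python
def first_exit_time(c0, c1, c2, mu0, a, b, T, dt=1e-4):
    """Integrate mu' = c0 + c1*mu + c2*mu**2 by explicit Euler on [0, T]
    and return the first time mu leaves the interval [a, b], or None if
    the constraint a <= mu <= b holds on the whole horizon."""
    mu, t = mu0, 0.0
    while t < T:
        if not (a <= mu <= b):
            return t                      # constraint violated at time t
        mu += dt * (c0 + c1 * mu + c2 * mu * mu)
        t += dt
    return None
```

With c_0 = c_2 = 1, c_1 = 0 and the interval [−10, 10], the reported exit time is close to arctan 10 ≈ 1.47, while μ' = −μ started at 0.5 stays in [0, 1] over any horizon. This mirrors the discussion above: the same scheme, run over a grid of coefficients, reproduces the kind of solvability tables described in the text.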
The results also show that the initial condition may affect the admissible time horizon T. Starting from values reasonably close to (a + b)/2, the ODE (44) is solvable only over relatively shorter time horizons than when starting from values reasonably close to zero.

Example 3. Mean-field Schlögl model. We propose to solve a control problem associated with a mean-field version of the Schlögl model (cf. [NP77], [Che04], [DZ91] and [FZ92]), where the intensities are of mean-field form, for some predictable and positive control process u, with β > 0 and (α_ij)_ij a deterministic Q-matrix for which there exists N_0 ≥ 1 such that α_ij = 0 for |j − i| ≥ N_0 and α_ij > 0 for |j − i| < N_0. We consider the following mean-field-type cost functional