Concentration and Poincar\'e type inequalities for a degenerate pure jump Markov process

We study Talagrand concentration and Poincar\'e type inequalities for unbounded pure jump Markov processes. In particular, we focus on processes with degenerate jumps that depend on the past of the whole system, based on the model introduced by Galves and L\"ocherbach in \cite{G-L} to describe the activity of a biological neural network. As a result, we obtain exponential rates of convergence to equilibrium.


1. Introduction
Our objective is to obtain Poincaré type inequalities for the semigroup $P_t$ and the associated invariant measure of unbounded jump processes inspired by the model introduced in [19] by Galves and Löcherbach to describe the interactions between brain neurons. As a result, we obtain exponentially fast rates of convergence to equilibrium. There are three interesting features of this particular jump process. The first is that it is characterized by degenerate jumps, since every neuron jumps to zero after it spikes and thus loses its memory. The second is that the probability that any neuron spikes depends on its current position, and hence on the past of the whole neural system. Thirdly, the intensity function that describes the jump behaviour of any of the unbounded neurons at any time is an unbounded function.
For $P_t$ the associated semigroup and $\mu$ the invariant measure, we show a Poincaré type inequality whose second term is a local term for the compact set $D := \{x \in \mathbb{R}_+^N : x_i \le m,\ 1 \le i \le N\}$, for some $m$. Accordingly, for every function supported outside the compact set $\{x \in \mathbb{R}_+^N : x_i \le m+1,\ 1 \le i \le N\}$ we obtain the stronger inequality $\mu(\mathrm{Var}_{P_t}(f)) \le c(t)\,\mu(\Gamma(f,f))$.
Consequently, we derive concentration properties. In addition, we show further Talagrand type concentration inequalities. Before we describe the model, we present the neuroscience framework of the problem.
1.1. The neuroscience framework. We consider a group of finitely many interacting neurons, say $N$ in number. Each of these neurons $i$, $1 \le i \le N$, is described by the evolution of its membrane potential $X_t^i \in \mathbb{R}_+$ at time $t \in \mathbb{R}_+$. In this way, an $N$-dimensional random process $X_t = (X_t^1, \dots, X_t^N)$ is defined that represents the membrane potentials of the $N$ neurons in the network.
The membrane potential $X_t^i$ of a neuron $i$ does not describe only the neuron itself, but also the interactions between the different neurons in the network, through the spiking activity of the neuron. What is called a spike, or alternatively an action potential, is a high-amplitude and brief depolarisation of the membrane potential that occurs from time to time, and it constitutes the only perturbation of the membrane potential that can be propagated from one neuron to another through chemical synapses.
The frequency with which a neuron spikes is expressed through the intensity function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$. When a neuron has membrane potential $x$, its intensity is $\varphi(x)$.
Neurons lose their memory every time they spike, in the sense that after a neuron $i$ spikes its membrane potential is set to zero, which can be understood as the resting potential. The membrane potential of every other neuron $j \ne i$ is then increased by a quantity $W_{i \to j} \ge 0$ called the synaptic weight, which represents the influence of the spiking neuron $i$ on $j$. It should be noted that the membrane potential of each of the $N$ neurons remains constant between two consecutive jumps.
From our discussion up to this point it should be clear that the dynamics of the whole interacting neural system are determined exclusively by the jump times.
Thus, from a purely probabilistic point of view, this activity can be described by a simple point process. One should however bear in mind that, since the spiking neuron jumps to zero, these point processes are non-Markovian. For examples of Hawkes processes describing neural systems one can look at [11], [17], [18], [19], [23] and [25].
An alternative viewpoint, instead of focusing exclusively on the jump times, is to also model the evolution of the membrane potential between jumps, when this evolution is already determined. In the case of a deterministic drift between the jumps, for example, as examined in [24], the membrane potential is attracted towards an equilibrium potential exponentially fast. In that case, the process is a Piecewise Deterministic Markov Process (PDMP), introduced by Davis in [13] and [14]. PDMPs are frequently used in probability to model chemical and biological phenomena (see for instance [12] and [30], as well as [4] for an overview).
In the current paper we adopt a similar framework, but in our case we do not consider a drift between the jumps, but rather a pure jump Markov process, which for convenience we will abbreviate as PJMP.
Although here we work with a finite number of neurons, so that we can take advantage of the Markovian nature of the membrane potential, Hawkes processes in general allow the study of infinite neural systems, as in [19] or [25].
In contrast to [24], a Lyapunov-type inequality allows us to dispense with the compact state-space assumption. Due to the deterministic and degenerate nature of the jumps, the process does not have a density that is continuous with respect to the Lebesgue measure. We refer the reader to [29] for a study of the density of the invariant measure. Here, we make use of the lack of drift between the jumps to work with discrete probabilities instead of densities.
1.2. The model. Consider the intensity function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$, which satisfies the following conditions: there exist strictly positive constants $\delta, c$ such that
$$\varphi(x) \ge c\,x \quad \text{and} \quad \varphi(x) \ge \delta \quad \text{for every } x \in \mathbb{R}_+.$$
The intensity function characterizes the Markov process $X_t = (X_t^1, \dots, X_t^N)$. If we define
$$\big(\Delta_i(x)\big)_j := \begin{cases} 0, & j = i,\\ x_j + W_{i \to j}, & j \ne i, \end{cases}$$
then the generator $L$ of the process $X$ is expressed through the intensity function by
$$Lf(x) = \sum_{i=1}^N \varphi(x_i)\big(f(\Delta_i(x)) - f(x)\big)$$
for every $x \in \mathbb{R}_+^N$ and $f : \mathbb{R}_+^N \to \mathbb{R}$ any test function. Furthermore, for every $i = 1, \dots, N$ and $t \ge 0$, the Markov process $X$ solves the following stochastic differential equation
$$X_t^i = X_0^i - \int_0^t \int_0^\infty X_{s-}^i\, \mathbf{1}_{\{z \le \varphi(X_{s-}^i)\}}\, N^i(ds,dz) + \sum_{j \ne i} W_{j \to i} \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \varphi(X_{s-}^j)\}}\, N^j(ds,dz),$$
where the $N^i(ds,dz)$, $i = 1, \dots, N$, are independent Poisson random measures on $\mathbb{R}_+ \times \mathbb{R}_+$ with intensity measure $ds\,dz$, for some $N > 1$ fixed.
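The jump dynamics described above can be sketched in code. This is an illustrative simulation only; the function name, the affine intensity and the weight matrix below are assumptions for the example, not part of the paper.

```python
import numpy as np

def simulate_pjmp(x0, W, phi, t_max, rng):
    """Simulate the PJMP: potentials are constant between jumps; neuron i
    spikes at rate phi(x_i), then resets to 0 while every other neuron j
    receives the synaptic weight W[i, j] = W_{i->j}."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        rates = np.array([phi(xi) for xi in x])
        total = rates.sum()                       # total jump rate (>= N * delta > 0)
        t += rng.exponential(1.0 / total)         # waiting time to the next spike
        if t > t_max:
            return x
        i = rng.choice(len(x), p=rates / total)   # which neuron spikes
        x = x + W[i]                              # propagate the spike to the others
        x[i] = 0.0                                # the spiking neuron loses its memory

# Hypothetical example: N = 2 neurons, phi(u) = 1 + u (so phi(u) >= delta = 1, phi(u) >= u)
rng = np.random.default_rng(0)
W = np.array([[0.0, 0.3],
              [0.2, 0.0]])
x_end = simulate_pjmp([1.0, 2.0], W, lambda u: 1.0 + u, t_max=5.0, rng=rng)
```

Because the potentials are piecewise constant, the jump rates are constant between spikes, so exponential waiting times give an exact simulation rather than a time-discretization.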
1.3. Poincaré type inequalities. We have defined a PJMP that describes our neural system, with dynamics similar to the model introduced in [19]. We aim at studying Poincaré type inequalities both for the semigroup $P_t$ and for the invariant measure $\mu$ of the process.
We start with a description of the analytical framework and the definition of the Poincaré inequality in a general discrete setting. For more details one can consult [3], [10], [16], [32] and [35]. Throughout the paper we will conveniently write $\nu(f)$ for the expectation $\int f\,d\nu$ of the function $f$ with respect to the measure $\nu$.
Consider a Markov semigroup $P_t f(x) = \mathbb{E}_x(f(X_t))$ and the infinitesimal generator $Lf := \lim_{t \to 0^+} \frac{P_t f - f}{t}$ of a Markov process $(X_t)_{t \ge 0}$. We will frequently use the relationships $\frac{d}{dt} P_t = L P_t = P_t L$ (see for instance [22]). Furthermore, we will say that a measure $\mu$ is invariant for the semigroup $(P_s)_{s \ge 0}$ if $\mu$ satisfies $\mu P_s = \mu$ for every $s \ge 0$.
From the definition of the generator and the invariance of $\mu$ we obtain that $\mu(Lf) = 0$.
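Indeed, differentiating the invariance identity $\mu(P_t f) = \mu(f)$ at $t = 0$ gives this directly:

```latex
\mu(Lf) \;=\; \frac{d}{dt}\Big|_{t=0}\, \mu(P_t f) \;=\; \frac{d}{dt}\Big|_{t=0}\, \mu(f) \;=\; 0 .
```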
Define the "carré du champ" operator $\Gamma(\cdot,\cdot)$ by
$$\Gamma(f,g) := \frac{1}{2}\big(L(fg) - fLg - gLf\big).$$
In the special case of the PJMP, where the infinitesimal generator $L$ has the form (1.4), a simple calculation shows that the carré du champ has the following expression:
$$\Gamma(f,f)(x) = \frac{1}{2}\sum_{i=1}^N \varphi(x_i)\big(f(\Delta_i(x)) - f(x)\big)^2.$$
We recall the definition of the variance of a function $f$ with respect to a probability measure $m$:
$$\mathrm{Var}_m(f) := m(f^2) - m(f)^2.$$
Having defined all the necessary ingredients, we can present the definition of the classical Poincaré inequality. A probability measure $m$ satisfies the Poincaré inequality if
$$\mathrm{Var}_m(f) \le C\, m\big(\Gamma(f,f)\big)$$
for some strictly positive constant $C$ independent of the function $f$. In the case where, instead of a single measure, we have a family of measures, as for the semigroup $\{P_t, t \ge 0\}$, the constant $C$ may depend on the time $t$, i.e. $C = C(t)$, as is the case for the examples studied in [35], [3] and [10]. The aforementioned papers used the so-called semigroup method, which will also be followed in the current work. The nature of this method usually leads to an inequality for the semigroup $P_t$ which involves a time-dependent constant $C(t)$.
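As a sanity check, the closed-form expression of the carré du champ can be verified numerically against the defining identity $\Gamma(f,f) = \frac{1}{2}\big(L(f^2) - 2fLf\big)$, assuming a Galves-Löcherbach-type generator $Lf(x) = \sum_i \varphi(x_i)\big(f(\Delta_i(x)) - f(x)\big)$; the weights and the intensity below are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical small network: N = 3 neurons, arbitrary weights, affine intensity.
N = 3
W = np.array([[0.0, 0.1, 0.2],
              [0.3, 0.0, 0.1],
              [0.2, 0.4, 0.0]])
phi = lambda u: 0.5 + 2.0 * u

def delta(x, i):
    """Configuration after neuron i spikes: i resets to 0, j gains W[i, j]."""
    y = x + W[i]
    y[i] = 0.0
    return y

def L(f, x):
    """Generator: Lf(x) = sum_i phi(x_i) * (f(delta_i(x)) - f(x))."""
    return sum(phi(x[i]) * (f(delta(x, i)) - f(x)) for i in range(N))

def gamma(f, x):
    """Carre du champ from its closed form:
    Gamma(f, f)(x) = (1/2) * sum_i phi(x_i) * (f(delta_i(x)) - f(x))**2."""
    return 0.5 * sum(phi(x[i]) * (f(delta(x, i)) - f(x)) ** 2 for i in range(N))

f = lambda x: np.sin(x[0]) + x[1] * x[2]
x = np.array([1.0, 0.5, 2.0])

# Defining identity: Gamma(f, f) = (1/2) * (L(f^2) - 2 f Lf)
lhs = gamma(f, x)
rhs = 0.5 * (L(lambda y: f(y) ** 2, x) - 2.0 * f(x) * L(f, x))
assert abs(lhs - rhs) < 1e-9
```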
In both [35] and [3], the translation property was used in order to retrieve the carré du champ. Taking advantage of this, in [35] for example, the inequality was obtained with constant $C(t) = t$ for the path space of Poisson point processes. Although this property does not hold for the degenerate PJMP examined here, we can still show that a Poincaré inequality, which also involves the invariant measure, holds for the semigroup $\{P_t, t \ge 0\}$, but with a time constant $C(t)$ of order higher than one.
In a recent paper [26] the same degenerate PJMP as in (1.1)-(1.4) was considered, but for bounded neurons, with membrane potentials taking values in a compact set $D$,
(1.6) $D := \{x \in \mathbb{R}_+^N : x_i \le m,\ 1 \le i \le N\}$
for some positive constant $m$. The Poincaré type inequality obtained in the compact case involves a constant $\alpha(t)$, a second order polynomial of the time $t$, and some positive constant $\beta$. In the more general non compact case examined in the current paper we will prove an alternative weighted Poincaré type inequality, formulated by taking the expectation with respect to the invariant measure $\mu$ in the typical Poincaré inequality for the semigroup $(P_t)_{t \ge 0}$, where on the right hand side we have also added two local terms for the compact set $D$ as in (1.6), one of which has a weight that depends on the intensity function $\varphi$. Consequently, the stronger inequality $\mu(\mathrm{Var}_{P_t}(f)) \le c(t)\,\mu(\Gamma(f,f))$ holds for every function $f$ supported outside the compact set $\{x \in \mathbb{R}_+^N : x_i \le m+1,\ 1 \le i \le N\}$. The reasons why in the unbounded case we focus on this particular Poincaré type inequality, rather than the classical one for $P_t$ presented above, relate to the special features that characterise the behaviour of the PJMP examined in the current paper. Some of them are shared with the compact case, like the memoryless behaviour of the neuron that spikes and, as already mentioned, the lack of the translation property. In the non compact case, however, we also have to deal with the difficulty of controlling the intensity function $\varphi$, which is unbounded. In order to handle the intensity function we will use the Lyapunov method presented in [9] and [5], which has the advantage of reducing the problem from the unbounded case to the compact case where variables take values within the compact set $D$ defined in (1.6); the set $\{\sum_{i=1}^N x_i \le m\}$ involved in the Lyapunov method is contained in $D$.
Since the jump behaviour depends on the current position of a neuron, this has the benefit of bounding the values of $\varphi$ and thus controlling the spiking behaviour of the neurons.
The Lyapunov method, however, as we will see later in more detail in the proof of Proposition 2.6, requires control of a Lyapunov function $V$, more specifically of $\frac{-LV}{V}$. As will be explained in more detail later, this is a problem that, although it can be solved relatively easily in the case of diffusions by choosing appropriate exponential densities, is more difficult for jump processes and requires the use of invariant measures.
The inequality for the semigroup family $\{P_t, t \ge 0\}$, which refers to the general case where neurons take values in the whole of $\mathbb{R}_+$, follows. Theorem 1.1. Assume the PJMP as described in (1.1)-(1.4). Then, for every $t \ge t_1$, for some $t_1 > 0$, the following weighted Poincaré type inequality holds, where $\delta_1(t)$ is a third order and $\delta_2(t)$ a second order polynomial of $t$, neither of which depends on the function $f$, and where the set $D$ is as in (1.6).
As a direct corollary of the theorem we obtain the following.
for every $t \ge t_1$, for some $t_1 > 0$.
We conclude this section with the Poincaré inequality for the invariant measure $\mu$, presented in the next theorem.

Concentration and other Talagrand type inequalities.
Concentration inequalities play a vital role in the examination of a system's convergence to equilibrium. Talagrand (see [33] and [34]) associated the log-Sobolev and Poincaré inequalities for exponential distributions with concentration properties (see also [8]), that is, inequalities of the form
(1.7) $\mu\big(|f - \mu(f)| \ge r\big) \le C e^{-c\,r^p}$
for some $p \ge 1$. In particular, when the log-Sobolev inequality holds, then (1.7) is true for $p = 2$, while in the case of the weaker Poincaré inequality the exponent is $p = 1$. Furthermore, the modified log-Sobolev inequality that interpolates between the two, investigated for example in [7], [20] and [31], gives convergence to equilibrium of speed $1 < p < 2$.
The problem of concentration properties for measures that satisfy a Poincaré inequality, or, as in our case, a Poincaré type inequality, is closely related to exponential integrability of the measure, that is, $\mu(e^{\lambda f}) < +\infty$ for some appropriate class of functions $f$. This problem is itself connected to bounding the carré du champ of the exponential of a function,
(1.8) $\Gamma\big(e^{f/2}, e^{f/2}\big) \le \Psi(f)\, e^{f},$
for some $\Psi(f)$ uniformly bounded. In the case of diffusion processes, where the carré du champ is defined through a derivation, (1.8) is satisfied when $\|\nabla f\|_\infty < 1$ (see Section 3 for more details). For a detailed discussion on the subject one can look at [28]. In our case we consider $F(x) = \sum_{i=1}^N x_i$. Then we can obtain exponential integrability and a bound of the form (3.1). This, together with the Poincaré type inequality already obtained, yields concentration properties for a different class of functions than the ones assumed in Corollary 1.5, as presented in the next theorem.
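To illustrate the diffusion remark above (a standard fact, not specific to the present model): when $\Gamma(g,g) = |\nabla g|^2$, the chain rule gives

```latex
\Gamma\big(e^{f/2}, e^{f/2}\big)
  \;=\; \big|\nabla e^{f/2}\big|^2
  \;=\; \tfrac{1}{4}\,|\nabla f|^2\, e^{f}
  \;\le\; \tfrac{1}{4}\, e^{f}
  \qquad \text{whenever } \|\nabla f\|_\infty < 1,
```

so a bound of type (1.8) holds with $\Psi \equiv \tfrac14$. For the PJMP no such derivation rule is available, which is why condition (3.1) has to be verified by hand in Section 3.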
Consequently, we obtain the following convergence to equilibrium property for every function $f$ satisfying the stated condition, where $\mu$ is the invariant measure of the semigroup $P_t$.
Furthermore, for the case of unbounded neurons, we can obtain Talagrand inequalities in the spirit of the ones proven for the modified log-Sobolev inequality in [7]. A few words about the structure of the paper. The proofs of the Poincaré inequality for the semigroup $P_t$ and for the invariant measure $\mu$ are presented in Sections 2.1 and 2.2 respectively. For both inequalities a Lyapunov inequality will be used to control the behaviour of the neurons outside a compact set. This is proven at the beginning of Section 2. In the final Section 3 the concentration inequalities are proven. At first, in Proposition 3.1, we present the main tool that connects the Poincaré type inequality with the concentration properties. Then the required conditions are verified for the PJMP.

2. Proof of the Poincaré inequalities
In both the inequalities involving the semigroup and the invariant measure, the use of a Lyapunov function will be a crucial tool in order to control the intensity function outside a compact set.
At first we will work towards deriving the Lyapunov inequality required. That will be the subject of the next lemma.
We recall that, under the framework of [24], the generator of our process is given, for any function $f$, by
$$Lf(x) = \sum_{i=1}^N \varphi(x_i)\big(f(\Delta_i(x)) - f(x)\big).$$
We assume that $W_{i \to j} \ge 0$ for all $i, j$; we can then consider the state space to be $\mathbb{R}_+^N$. We put $W_i := \sum_{j \ne i} W_{i \to j}$. Lemma 2.1. Assume that for all $x \in \mathbb{R}_+$, $\varphi(x) \ge cx$ and $\delta \le \varphi(x)$, for some constants $c$ and $\delta > 0$. Then, considering a suitable Lyapunov function $V \ge 1$, there exist positive constants $\vartheta$, $b$ and $m$ so that the following Lyapunov inequality holds:
$$LV \le -\vartheta V + b\,\mathbf{1}_D.$$
Proof. For the Lyapunov function $V$ as stated before, a direct computation of $LV$ gives the claim, in which case $m = \frac{b + \alpha(c \wedge \delta)}{(1-\alpha)(c \wedge \delta)}$. Since $\alpha$ can be chosen arbitrarily close to 1, if we want to impose $\alpha(c \wedge \delta) > 1$, we need to assume that $c > 1$ and $\delta > 1$.
In the following subsection we show the weighted Poincaré inequality for the semigroup $P_t$, while in subsection 2.2 we show the inequality for the invariant measure.

2.1. Poincaré inequality for the semigroup.
In this section we prove the main results of the paper for systems of neurons that take values in $\mathbb{R}_+$, presented in Theorem 1.1.
As mentioned in the introduction, our approach is to reduce the problem from the unbounded case to the compact case examined in [26]. To do this we will follow closely the Lyapunov approach developed in [9] and [5] to prove super-Poincaré inequalities.
We start by showing that the chain returns to the compact set D with a strictly positive probability bounded from below.
For a neuron $i \in I$ and time $s$, we define $p_s(x)$ to be the probability that the process starting from the initial configuration $x$ has no jump during time $s$, and $p_s^i(x)$ the probability that the process has exactly one jump, of neuron $i$, and no jumps of the other neurons during time $s$. Then $p_s^i(x)$, as a function of the time $s$, is continuous, strictly increasing on $(0, t_0)$ and strictly decreasing on $(t_0, +\infty)$ for some $t_0$, while $p_0^i(x) = 0$. For any configuration $y \in D$ we define the set of configurations $D_y$ containing all configurations $x$ such that, for some $t > 0$, $\pi_t(x, y) := \mathbb{P}_x(X_t = y) > 0$.
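Writing $\bar\varphi(x) := \sum_{i=1}^N \varphi(x_i)$ for the total jump rate, which stays constant between jumps because the potentials do, these probabilities admit the following explicit form (a reconstruction consistent with the bounds used below):

```latex
p_s(x) \;=\; e^{-s\,\bar\varphi(x)},
\qquad
p^i_s(x) \;=\; \varphi(x_i)\int_0^s e^{-u\,\bar\varphi(x)}\;
               e^{-(s-u)\,\bar\varphi(\Delta_i(x))}\,du ,
```

the integrand being the density of a single spike of neuron $i$ at time $u$ followed by no jump on $(u, s]$. In particular $p^i_0(x) = 0$, and $s \mapsto p^i_s(x)$ increases up to some $t_0$ and then decreases to $0$.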
Lemma 2.2. Assume the PJMP as described in (1.1)-(1.4). Then there exists $\theta > 0$ such that, for every $y \in D$ and $x \in D_y$, $\pi_t(x,y) \ge \frac{1}{\theta}$. Proof. We want to show that for every configuration $y \in D$ that belongs to the domain of the invariant measure, one has $\pi_t(x, y) \ge \frac{1}{\theta}$ for some positive $\theta$. The proof is divided into three parts.
A) At first, for $y \in D$, we restrict ourselves to $x \in D \cap D_y$.
Since $\mu(y) > 0$ and $\lim_{t \to \infty} \pi_t(x,y) = \mu(y)$, we readily obtain that for every couple $x, y \in D$ there exist $\theta_1 > 0$ and $t_{x,y} > 0$ such that for every $t > t_{x,y}$ we have $\pi_t(x,y) > \frac{1}{\theta_1}$. But since $D$ is compact, the configurations in $D$ are finite in number, and so $\max_{x,y \in D}\{t_{x,y}\} < \infty$. We thus conclude that there exists a $\theta_1 > 0$ such that $\pi_t(x,y) > \frac{1}{\theta_1}$ for every $t > \max_{x,y \in D}\{t_{x,y}\}$.
In the next two steps we extend the last result to $x \in D^c$. B) We will show that there exist $\theta_2 > 0$ and $0 < t_2 < \frac{1}{\delta}$ such that for every $x \in D^c \cap D_y$ there exists a $z \in D \cap D_y$ with $\pi_{t_2}(x,z) \ge \frac{1}{\theta_2}$. We enumerate the $N$ neurons with numbers from 1 to $N$ in decreasing order, and write $\bar x_i = \Delta_i(\Delta_{i-1}(\dots\Delta_1(x))\dots)$ for the configuration starting from $x$ after the 1st, then the 2nd, up to the $i$'th neuron has spiked, in that order. Then, for every $s_i > 0$, $p^i_{s_i}(\bar x_{i-1})$ is the probability that the process starting from $\bar x_{i-1}$ has exactly one jump, of neuron $i$, in time $s_i$ and no jumps of other neurons. From (2.1) we can compute bounds for $p^i_s$, and we obtain $\pi_{t_2}(x, z) \ge (Ne)^{-N}$; the result is thus proven for $\theta_2 = (Ne)^N$, $z = \bar x_N$ and $t_2 \le \sum_{i=1}^N s_i \le \frac{1}{\delta}$. C) Having shown (A) and (B), we can now complete the proof of the lemma for $x \in D^c$. For this it is sufficient, for every $y \in D$ and $x \in D^c \cap D_y$, to write $\pi_t(x,y) \ge \pi_{t_3}(x, \bar x_N)\,\pi_{t_2}(\bar x_N, y)$, and the assertion follows for $t \ge \frac{1}{\delta} + t_2$. Consequently, the lemma follows for $t \ge \max\{t_1, t_2 + \frac{1}{\delta}\}$. Taking into account the last result, we can obtain the first technical bound needed in the proof of the local Poincaré inequality, taking advantage of the bounds shown for times bigger than $t_1$.

Lemma 2.3. Assume $z \in D^c$. For the PJMP as described in (1.1)-(1.4), the following bound holds for every $t \ge t_1$.
Proof. We can compute the quantity directly. Since $t \ge t_1$, we can use Lemma 2.2 to bound, for every $w$ and $y \in D$, $\pi_u(w,y) \le \theta\,\pi_t(x,y)$. We then use the Cauchy-Schwarz inequality twice, to pass the square inside the two sums, and obtain the desired bound.
Proof. Consider the semigroup $P_t f(x) = \mathbb{E}_x f(X_t)$. Since $\frac{d}{ds} P_s = LP_s = P_sL$, we can calculate the derivative of $s \mapsto P_s\big((P_{t-s}f)^2\big)$. We want to bound the carré du champ of the semigroup on the right hand side, $\Gamma(P_{t-s}f, P_{t-s}f)$, by the semigroup of the carré du champ, $P_{t-s}\Gamma(f,f)$, so that the energy term of the Poincaré inequality is formed. If the process is such that the translation property $\mathbb{E}_{x+y} f(z) = \mathbb{E}_x f(z+y)$ holds, as in [35] and [3], then one can obtain the desired bound directly.
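The calculation referred to here is the standard semigroup interpolation: writing $g_s := P_{t-s}f$ and using $\frac{d}{ds}P_s = P_sL$ together with the definition of $\Gamma$,

```latex
\mathrm{Var}_{P_t}(f)
  \;=\; P_t(f^2) - (P_t f)^2
  \;=\; \int_0^t \frac{d}{ds}\Big[P_s\big((P_{t-s}f)^2\big)\Big]\,ds
  \;=\; 2\int_0^t P_s\,\Gamma\big(P_{t-s}f,\,P_{t-s}f\big)\,ds ,
```

since $\frac{d}{ds}\big[P_s(g_s^2)\big] = P_s\big(L(g_s^2) - 2 g_s L g_s\big) = 2 P_s \Gamma(g_s, g_s)$.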
In our case, where the degeneracy of the process does not allow the translation property to hold, we will use a bound based on Dynkin's formula. Applying Dynkin's formula, we can bound the relevant quantity; in order to bound the second term we use the bound shown in Lemma 2.3. By the definition of the carré du champ we then get the required estimate, and combining it with (2.3) we obtain the result. From the last lemma we obtain the following local Poincaré inequality.
Proof. Since for $\mu$ the invariant measure of $P_t$ one has $\mu(x) = \sum_y \mu(y) P_t(y,x)$, we can write the variance accordingly. If we now use Lemma 2.4 to bound the semigroup, we obtain the claim, where $B_t(x)$ is a function of the semigroup $P_t$ of some function with initial configuration $x$. The corollary then follows. Proof. At first, we can write the variance as a sum of two terms. We can bound the first term on the right hand side by (2.5). For the second term we can use the Lyapunov inequality. If we choose $D$ large enough to contain the set $B$, i.e. $B \cap D^c = \emptyset$, the resulting bound reduces further. The need to bound the quantity $\frac{-LV}{V}$, which appears from the use of the Lyapunov inequality, is the actual reason why we need to make use of the invariant measure $\mu$ and obtain the type of Poincaré inequality shown in our final result, rather than a Poincaré type inequality based exclusively on the $P_t$ measure, as obtained in the previous section for the compact case. Had we not taken the expectation with respect to the invariant measure, we would have had to bound the analogous quantity under $P_t$ instead. In the case of diffusions this can be bounded by the carré du champ $\Gamma(f,f)$ of the function, through an appropriate choice of exponentially decreasing density (see for instance [5], [6] and [9]). In the case of jump processes, however, and in particular of PJMPs as in the current paper, where densities cannot be specified, a similar bound cannot be obtained. For the analogous expression involving the invariant measure, however, there is a powerful result that we can use, presented in [9] (see Lemma 2.12). According to this, when the expectation is taken with respect to the invariant measure, the desired bound holds, as seen in the following lemma.
Lemma 2.7 ([9], Lemma 2.12). For every $U \ge 1$ such that $-\frac{LU}{U}$ is bounded from below, the following bound holds, where $\mu$ is the invariant measure of the process and $d_1$ is some positive constant.
Since $V \ge 1$ and, for $x \in D^c$, we have from the Lyapunov inequality that $-\frac{LV}{V} \ge \vartheta$, we get the corresponding bound for some positive constant $d_1$. Since $\mu(Lf) = 0$ for every function $f$ in the domain of the generator, we can rewrite the remaining term. Gathering everything together, we finally obtain the desired inequality, which proves the proposition with constant $\delta(t) = a_1(t) + \frac{d_1}{2\vartheta}$.
The last proposition together with the Lyapunov inequality from Lemma 2.1 and the local Poincaré inequality of Corollary 2.5 proves Theorem 1.1.

2.2. Proof of the Poincaré inequality for the invariant measure.
In the next proposition we see how the Lyapunov inequality is sufficient to prove a Poincaré inequality for the invariant measure µ presented in Theorem 1.3, using methods developed in [5], [6] and [9].
Proof. At first, assume $\mu(f I_D) = 0$. We can write the variance as a sum of two terms. For the second term, working as in Proposition 2.6 with the use of the Lyapunov inequality, we obtain a suitable bound. For the first term we will use the approach applied in [32] to prove Poincaré inequalities for finite Markov chains. Since we have assumed $\int f I_D\,d\mu = 0$, we can expand the difference $f(y) - f(x)$ along paths. If we consider $J_{xy} = (J_1, \dots, J_{|J_{xy}|})$ to be the shortest sequence of spikes that leads from the configuration $x$ to the configuration $y$ without leaving $D$, then we can denote $\bar x_0 = x$ and, for every $k = 1, \dots, |J_{xy}|$, $\bar x_k = \Delta_{J_k}(\Delta_{J_{k-1}}(\dots\Delta_{J_1}(x))\dots)$, the configuration after the $k$'th neuron in the sequence has spiked. Since $D$ is finite, the length of the sequence is uniformly bounded over all couples $x, y \in D$.
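The path-counting step can be sketched as follows: telescoping along the spike sequence and applying the Cauchy-Schwarz inequality,

```latex
\big(f(y) - f(x)\big)^2
  \;=\; \Big(\sum_{k=0}^{|J_{xy}|-1} \big(f(\bar x_{k+1}) - f(\bar x_k)\big)\Big)^2
  \;\le\; |J_{xy}|\, \sum_{k=0}^{|J_{xy}|-1} \big(f(\bar x_{k+1}) - f(\bar x_k)\big)^2 ,
```

and since $|J_{xy}|$ is uniformly bounded on the finite set $D$ and $\varphi \ge \delta$, each squared increment is dominated by a multiple of the corresponding term of the carré du champ.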
We can then write the telescoping decomposition. Since $\varphi \ge \delta$, we obtain a further bound, and forming the carré du champ leads to the estimate for the first term. Gathering everything together gives the result.

3. Proof of the Talagrand inequality for the invariant measure.
Now we can prove the concentration properties. At first we present the general proposition that connects the Poincaré inequality of Theorem 1.3 with concentration of measure properties. The concentration properties will be based on the following proposition, which follows closely the approach in [27] (see also [28], [8], [1] and [2]). We will also use elements from [7], since one of the main conditions, (3.1), refers to the bounded function $F_r = \min\{F, r\}$. The proposition asserts exponential integrability for every $\lambda < \lambda_0$, for some $\lambda_0 > 0$, together with a concentration bound. Proof. From the Poincaré inequality, applied to $f = e^{\lambda F_r/2}$, and bounding the carré du champ through condition (3.1), we obtain
$$\mu\big(e^{\lambda F_r}\big) \le C_0 C_3 \lambda^2\, \mu\big(e^{\lambda F_r}\big) + \mu\big(e^{\lambda F_r/2}\big)^2.$$
For $\lambda < 1$ this can be rearranged, and iterating gives the integrability claim. We notice that $\mu\big(e^{\frac{\lambda}{2^n} F_r}\big)^{2^n} \to e^{\lambda \mu(F_r)}$ as $n \to \infty$. Since $\{P_t F_r < r\} = \{P_t F < r\}$, we can apply Chebyshev's inequality:
$$\mu\big(\{P_t F > r\}\big) \le e^{-\lambda r}\,\mu\big(e^{\lambda P_t F_r}\big) \le e^{-\lambda r}\,\mu\big(P_t\, e^{\lambda F_r}\big) = e^{-\lambda r}\,\mu\big(e^{\lambda F_r}\big),$$
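Spelling out the iteration step (a standard Aida-Stroock type argument, sketched here with $C := C_0 C_3$ and the local terms suppressed): for $C\lambda^2 < 1$, each application of the Poincaré inequality with $\lambda/2^k$ in place of $\lambda$ gives $\mu(e^{\lambda F_r/2^{k-1}}) \le \big(1 - C\lambda^2/4^{k-1}\big)^{-1}\mu(e^{\lambda F_r/2^k})^2$, so that

```latex
\mu\big(e^{\lambda F_r}\big)
  \;\le\; \prod_{k=1}^{n} \Big(1 - \frac{C\lambda^2}{4^{k-1}}\Big)^{-2^{k-1}}
          \mu\big(e^{\frac{\lambda}{2^n} F_r}\big)^{2^n}
  \;\xrightarrow[n\to\infty]{}\;
  \prod_{k=1}^{\infty} \Big(1 - \frac{C\lambda^2}{4^{k-1}}\Big)^{-2^{k-1}} e^{\lambda \mu(F_r)} ,
```

the infinite product converging since $\sum_k 2^{k-1}\, C\lambda^2/4^{k-1} < \infty$. This yields exponential integrability uniformly in $r$, and letting $r \to \infty$ extends it to $F$ itself.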
because of Jensen's inequality and the invariance property $\mu P_t = \mu$. Substituting $F - \mu(F)$ for $F$, the result follows.
To complete the proofs of the concentration Theorems 1.4 and 1.6 and of Corollary 1.5, we need to verify (3.1). We start with Theorem 1.6. We have to show condition (3.1) for $F(x) = \sum_{i=1}^N x_i$. This will be the subject of the next lemma. Lemma 3.2. Assume the PJMP as described in (1.1)-(1.4). Then
$$\mu\big(\Gamma(e^{\lambda F_r/2}, e^{\lambda F_r/2})\big) \le C_3 \lambda^2\, \mu\big(e^{\lambda F_r}\big),$$
where $F_r = \min(F(x), r)$ for $r > 0$.
Proof. We start from the expression of $\mu\big(\Gamma(e^{\lambda F_r/2}, e^{\lambda F_r/2})\big)$ given by the carré du champ. To bound $\mu(M_i)$ we distinguish four cases. a) Consider the set $A := \{x : F(x) \ge r \text{ and } F(\Delta_i(x)) \ge r\}$. Then, for $x \in A$, $F_r(\Delta_i(x)) = F_r(x) = r$, and so $\mu(M_i I_A) = 0$. b) Consider the set $B := \{x : F(x) \ge r \text{ and } F(\Delta_i(x)) \le r\}$. Then, for $x \in B$, we compute $e^{\lambda F_r} I_B = e^{\lambda r} I_B$, so that $\mu(e^{\lambda r} I_B) = \mu(e^{\lambda F_r} I_B)$. c) Consider the set $C := \{F(\Delta_i(x)) \le F(x) < r\}$. Then, for $x \in C$, since $F_r \le r$ we know that $\mu(e^{\lambda F_r}) \le e^{\lambda r} < \infty$, and we can bound accordingly. d) Consider the set $D := \{F(x) < r \text{ and } F(x) < F(\Delta_i(x))\}$. Then, for $x \in D$, the coordinate $x_i$ is bounded, and we can compute the corresponding estimate. Gathering the four cases together, we finally obtain
$$\mu\big(\Gamma(e^{\lambda F_r/2}, e^{\lambda F_r/2})\big) \le C\lambda^2\, \mu\big(e^{\lambda F_r}\big)$$
for a constant $C$. In the remainder of the section we prove the main concentration properties of the paper, presented in Theorem 1.4 and Corollary 1.5. What remains is to present conditions so that (3.1) of Proposition 3.1 holds.