# Poincaré-Type Inequalities for Compact Degenerate Pure Jump Markov Processes


Institut National de la Recherche Agronomique (INRA), MaIAGE, Allee de Vilvert, 78352 Jouy-en-Josas, France

Neuromat, Instituto de Matematica e Estatistica, Universidade de Sao Paulo, Sao Paulo SP-CEP 05508-090, Brasil

Author to whom correspondence should be addressed.

Received: 9 May 2019 / Revised: 31 May 2019 / Accepted: 1 June 2019 / Published: 6 June 2019

(This article belongs to the Special Issue Stochastic Processes in Neuronal Modeling)

We aim to prove Poincaré inequalities for a class of pure jump Markov processes inspired by the model introduced by Galves and Löcherbach to describe the behavior of interacting brain neurons. In particular, we consider neurons with degenerate jumps, i.e., which lose their memory when they spike, while the probability of a spike depends on the current position, and thus on the past, of the whole neural system. The process studied by Galves and Löcherbach is a point process counting the spike events of the system and is therefore non-Markovian. In this work, we consider a process describing the membrane potential of each neuron that contains the relevant information of the past. This allows us to work in a Markovian framework.

The aim of this paper is to prove Poincaré inequalities for the semigroup ${P}_{t}$, as well as for the invariant measure, of the model introduced in [1] by Galves and Löcherbach to describe the activity of a biological neural network. What is particularly interesting about the jump process in question is that it is characterized by degenerate jumps, in the sense that after a particle (neuron) spikes, it loses its memory by jumping to zero. Furthermore, the probability of a spike of a particular neuron at any time depends on its current position, and thus on the past of the whole neural system.

For ${P}_{t}$ the associated semigroup, we first prove Poincaré-type inequalities of the form

$$Va{r}_{{P}_{t}}\left(f\left(x\right)\right)\le \alpha \left(t\right){\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds+\beta \sum _{i=1}^{n}{\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)ds$$

for any possible starting configuration x. We give here the general form of the inequalities investigated in this paper; to avoid overloading this introduction with technical details, we postpone the definitions of classical quantities, such as the “carré du champ” $\Gamma (f,f)$ and the other notation used here, to Section 1.3.

Then, we restrict ourselves to the special case where the initial configuration x is in the domain of the invariant measure, and we derive the stronger Poincaré inequality

$$Va{r}_{{P}_{t}}\left(f\left(x\right)\right)\le \alpha \left(t\right){P}_{t}\Gamma (f,f)\left(x\right)+\beta {\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds.$$

Finally, we show a Poincaré inequality for the invariant measure $\pi $:

$$Va{r}_{\pi}\left(f\right)\le c\pi \left(\Gamma (f,f)\right).$$

Before we describe the model, we present the neuroscience framework of the problem.

The activity of one neuron is described by the evolution of its membrane potential. This evolution presents from time to time a brief and high-amplitude depolarization called an action potential or spike. The spiking probability or rate of a given neuron depends on the value of its membrane potential. These spikes are the only perturbations of the membrane potential that can be transmitted from one neuron to another through chemical synapses. When a neuron i spikes, its membrane potential is reset to 0 while the so-called “post-synaptic neurons” influenced by neuron i receive an additional amount of membrane potential.

From a probabilistic point of view, this activity can be described by a simple point process, since the whole activity is characterized by the jump times. In the literature, Hawkes processes are often used to describe systems of interacting neurons; see [1,2,3,4,5,6] for example. The reset to 0 of the spiking neuron provides a variable-length memory for the dynamics, and therefore the point processes describing these systems are non-Markovian.

On the other hand, it is possible to describe the activity of the network with a process modeling not only the jump times but the whole evolution of the membrane potential of each neuron. This evolution then needs to be specified between the jumps. In [7] the process describing this evolution follows a deterministic drift between the jumps; more precisely, the membrane potential of each neuron is attracted with exponential speed towards an equilibrium potential. This process is then Markovian and belongs to the family of Piecewise Deterministic Markov Processes introduced by Davis ([8,9]). Such processes are widely used in the probabilistic modeling of, e.g., biological or chemical phenomena (see e.g., [10] or [11], and [12] for an overview). The point of view we adopt here is close to this framework, but we work without drift between the jumps. We therefore consider a pure jump Markov process and will use the abbreviation PJMP in the rest of the present work.

We consider a process ${X}_{t}=({X}_{t}^{1},\dots ,{X}_{t}^{N}),$ where N is the number of neurons in the network and where, for each neuron $i,\phantom{\rule{0.166667em}{0ex}}1\le i\le N$, and each time $t\in {\mathbb{R}}_{+},$ the variable ${X}_{t}^{i}$ represents the membrane potential of neuron i at time $t$. Each membrane potential ${X}_{t}^{i}$ takes values in ${\mathbb{R}}_{+}$. A neuron with membrane potential x “spikes” with intensity $\varphi \left(x\right),$ where $\varphi :{\mathbb{R}}_{+}\to {\mathbb{R}}_{+}$ is a given intensity function. When a neuron i fires, its membrane potential is reset to $0,$ interpreted as resting potential, while the membrane potential of any post-synaptic neuron j is increased by ${W}_{i\to j}\ge 0$. Between two jumps of the system, the membrane potential of each neuron is constant.

Let $N>1$ be fixed and ${\left({N}^{i}(ds,dz)\right)}_{i=1,\cdots ,N}$ be a family of i.i.d. Poisson random measures on ${\mathbb{R}}_{+}\times {\mathbb{R}}_{+}$ with intensity measure $dsdz$. We study the Markov process ${X}_{t}=({X}_{t}^{1},\dots ,{X}_{t}^{N})$ taking values in ${\mathbb{R}}_{+}^{N}$ and solving, for $i=1,\cdots ,N$, for $t\ge 0$,

$${X}_{t}^{i}={X}_{0}^{i}-{\int}_{0}^{t}{\int}_{0}^{\infty}{X}_{s-}^{i}{1}_{\{z\le \varphi \left({X}_{s-}^{i}\right)\}}{N}^{i}(ds,dz)+{\sum}_{j\ne i}{W}_{j\to i}{\int}_{0}^{t}{\int}_{0}^{\infty}{1}_{\{z\le \varphi \left({X}_{s-}^{j}\right)\}}{1}_{\{{X}_{s-}^{i}\le m-{W}_{j\to i}\}}{N}^{j}(ds,dz).$$

In the above equation for each $j\ne i,\phantom{\rule{0.166667em}{0ex}}{W}_{j\to i}\in {\mathbb{R}}_{+}$ is the synaptic weight describing the influence of neuron j on neuron $i$. Finally, the function $\varphi :{\mathbb{R}}_{+}\mapsto {\mathbb{R}}_{+}$ is the intensity function.

This can be seen in the following way. The first term ${X}_{0}^{i}$ is the starting point of the process at time $0$. The second term corresponds to the reset to 0 of neuron i when it spikes. The point process ${N}^{i}$ gives the times where a spike of neuron i can occur, which actually happens with rate $\varphi \left({X}_{s-}^{i}\right)$, leading to a reset to 0 of neuron i due to the term $-{X}_{s-}^{i}$. The third term corresponds to the modification of the membrane potential of neuron i when one of its presynaptic neurons j spikes. The modification value is given by the synaptic weight ${W}_{j\to i},$ provided the new value is smaller than the maximum potential $m,$ which is ensured by the second indicator function.
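Since the process is piecewise constant, the dynamics described above can be simulated exactly: draw the waiting time to the next spike from an exponential clock with the total rate, select the spiking neuron proportionally to its intensity, and apply the jump. The following minimal sketch implements this Gillespie-style scheme; the function name, the concrete intensity function, and the weights used below are illustrative assumptions, not part of the model's specification.

```python
import random

def simulate_pjmp(x0, W, phi, m, t_max, rng):
    """Exact simulation sketch of the PJMP: between spikes the potentials are
    constant; the next spike time is exponential with the total rate, and the
    spiking neuron i is chosen with probability phi(x_i) / total rate."""
    x = list(x0)
    n = len(x)
    t = 0.0
    spikes = []                                  # list of (time, neuron) events
    while True:
        rates = [phi(v) for v in x]
        total = sum(rates)                       # total rate; positive by assumption (5)
        t += rng.expovariate(total)              # waiting time to the next spike
        if t > t_max:
            return x, spikes
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):            # select the spiking neuron i
            acc += r
            if u <= acc:
                break
        for j in range(n):                       # post-synaptic neurons gain W_{i->j},
            if j != i and x[j] + W[i][j] <= m:   # capped at the maximum potential m
                x[j] += W[i][j]
        x[i] = 0.0                               # the spiking neuron resets to 0
        spikes.append((t, i))
```

With an intensity of the form $\varphi \left(x\right)=cx+\delta $, as in (5), the trajectories stay confined to the compact set D of (4).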

The generator of the process X is given, for any test function $f:{\mathbb{R}}_{+}^{N}\to \mathbb{R}$ and $x\in {\mathbb{R}}_{+}^{N}$, by

$$\mathcal{L}f\left(x\right)=\sum _{i=1}^{N}\varphi \left({x}^{i}\right)\left[f\left({\Delta}_{i}\left(x\right)\right)-f\left(x\right)\right]$$

where

$${\left({\Delta}_{i}\left(x\right)\right)}_{j}=\left\{\begin{array}{cc}{x}^{j}+{W}_{i\to j}\hfill & j\ne i\phantom{\rule{4.pt}{0ex}}\mathrm{and}\phantom{\rule{4.pt}{0ex}}{x}^{j}+{W}_{i\to j}\le m\\ {x}^{j}\hfill & j\ne i\phantom{\rule{4.pt}{0ex}}\mathrm{and}\phantom{\rule{4.pt}{0ex}}{x}^{j}+{W}_{i\to j}>m\\ 0\hfill & j=i\end{array}\right\}$$

for some $m>0$. With this definition the process remains inside the compact set

$$D:=\{x\in {\mathbb{R}}_{+}^{N}:{x}_{i}\le m,\phantom{\rule{0.166667em}{0ex}}1\le i\le N\}.$$

Furthermore, we assume the following condition on the intensity function:

$$\varphi \left(x\right)>cx+\delta \phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}x\in {\mathbb{R}}_{+}$$

for some strictly positive constants c and $\delta $.

The probability for a neuron to spike grows with its membrane potential so it is natural to think of the function $\varphi $ as an increasing function. Condition (5) implies that this growth is at least linear, and models the spontaneous activity of the system: whatever the configuration x is, the system will always have a positive spiking rate.

Our purpose is to show Poincaré-type inequalities for our PJMP, whose dynamics are similar to the model introduced in [1]. The main difference between our framework and the one of [1] lies in the fact that, as in [7], we study a process modeling the membrane potential of each neuron instead of a point process focusing only on the spiking events. Focusing on the spike train is sufficient to describe the network activity since it contains all the relevant information. However, the membrane potential integrates each relevant spike that occurred in the past, and this gives us a Markovian framework. Regardless of the point of view, the notable difference in dynamics with [1] is the absence of drift between jumps. We assume here that there is no loss of memory between spikes. Therefore, our conclusions cannot directly apply to the model studied in [1].

We will investigate Poincaré-type inequalities first for the semigroup ${P}_{t}$ and then for the invariant measure $\pi $. Concerning the semigroup inequality, we will study two different cases: the first, general one, where the system starts from any possible initial configuration; and a second one, where we restrict to initial configurations that belong to the domain of the invariant measure.

Let us first describe the general framework and define the Poincaré inequalities in a discrete setting (see also [13,14,15,16,17]). First we should note a convention we will use widely: for a function f and a measure $\nu $, we write $\nu \left(f\right)$ for the expectation of the function f with respect to the measure $\nu $, that is

$$\nu (f)=\int fd\nu .$$

We consider a Markov process ${\left({X}_{t}\right)}_{t\ge 0}$ which is described by the infinitesimal generator $\mathcal{L}$ and the associated Markov semigroup ${P}_{t}f\left(x\right)={\mathbb{E}}^{x}\left(f\left({X}_{t}\right)\right)$. For a semigroup and its associated infinitesimal generator, we will need the following well-known relationships: $\frac{d}{ds}{P}_{s}=\mathcal{L}{P}_{s}={P}_{s}\mathcal{L}$ (see for example [18]).

We say that $\pi $ is an invariant measure for the semigroup ${\left({P}_{t}\right)}_{t\ge 0}$ if and only if

$$\pi {P}_{t}=\pi .$$

Furthermore, we define the “carré du champ” operator (see [19]) by:

$$\Gamma (f,g):=\frac{1}{2}(\mathcal{L}\left(fg\right)-f\mathcal{L}g-g\mathcal{L}f).$$

For more details on this important operator and the inequalities that relate to it one can look at [18,19,20]. For the PJMP process defined above with the specific generator $\mathcal{L}$ given by (2) a simple calculation shows that the carré du champ takes the following form.

$$\Gamma (f,f)=\frac{1}{2}(\mathcal{L}{f}^{2}-2f\mathcal{L}f)=\frac{1}{2}(\sum _{i=1}^{N}\varphi \left({x}^{i}\right){\left[f\left({\Delta}_{i}\left(x\right)\right)-f\left(x\right)\right]}^{2}).$$
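The identity above can be checked numerically. The sketch below (illustrative function names; the weights, intensity function, and test function f are arbitrary choices) implements the jump map ${\Delta}_{i}$ of (3), the generator (2), and the explicit jump form of the carré du champ; the agreement with $\frac{1}{2}(\mathcal{L}{f}^{2}-2f\mathcal{L}f)$ holds pointwise by simple algebra.

```python
def delta_i(x, i, W, m):
    # jump map Delta_i of Equation (3): neuron i resets to 0, every other
    # neuron j gains W_{i->j} unless this would exceed the maximum potential m
    y = [xj + W[i][j] if j != i and xj + W[i][j] <= m else xj
         for j, xj in enumerate(x)]
    y[i] = 0.0
    return y

def generator(f, x, W, m, phi):
    # L f(x) = sum_i phi(x^i) [ f(Delta_i(x)) - f(x) ], as in Equation (2)
    return sum(phi(xi) * (f(delta_i(x, i, W, m)) - f(x))
               for i, xi in enumerate(x))

def carre_du_champ(f, x, W, m, phi):
    # explicit jump form: (1/2) sum_i phi(x^i) [ f(Delta_i(x)) - f(x) ]^2
    return 0.5 * sum(phi(xi) * (f(delta_i(x, i, W, m)) - f(x)) ** 2
                     for i, xi in enumerate(x))
```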

We say that a measure $\nu $ satisfies a Poincaré inequality if there exists a constant $C>0$, independent of f, such that

$$\left(SG\right)\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}Va{r}_{\nu}\left(f\right)\le C\nu (\Gamma (f,f))$$

where the variance of a function f with respect to a measure $\nu $ is defined in the usual way as $Va{r}_{\nu}\left(f\right)=\nu \left({(f-\nu \left(f\right))}^{2}\right)$. It should be noted that in the case where the measure $\nu $ is the semigroup ${P}_{t}$, the constant C may depend on t, $C=C\left(t\right)$.

For the Poincaré inequality for continuous time Markov chains one can look at [16,21]. In [13,14,17], the Poincaré inequality (SG) for ${P}_{t}$ has been shown for some point processes, with a constant that depends on the time t, while the stronger log-Sobolev inequality has been disproved. The general method used in these papers, which we also follow in the current work, is the so-called semigroup method, which shows the inequality for the semigroup ${P}_{t}$.

The main difficulty here is that for the pure jump Markov process examined in the current paper, the translation property

$${\mathbb{E}}^{x+y}f\left(z\right)={\mathbb{E}}^{x}f(z+y)$$

used in [13,17] does not hold. This is important because the translation property is a key element in these papers: it allows one to bound the carré du champ by comparing the mean ${\mathbb{E}}^{x}f\left(z\right)$, where the process starts from position x, with the mean ${\mathbb{E}}^{{\Delta}_{i}\left(x\right)}f\left(z\right)$, where it starts from ${\Delta}_{i}\left(x\right),$ the jump-neighbor of $x.$ However, we can still obtain Poincaré-type inequalities, but with a constant $C\left(t\right)$ which is a polynomial of order higher than one. This power is higher than the constant $C\left(t\right)=t$, the optimal one obtained in [17] for a path space of Poisson point processes.

It should be noted that the aforementioned translation property relates to the ${\Gamma}_{2}$ criterion (see [22,23]) for the Poincaré inequality, which states that if

$${\Gamma}_{2}\left(f\right):=\frac{1}{2}\left(\mathcal{L}\Gamma (f,f)-2\Gamma (f,\mathcal{L}f)\right)\ge 0,$$

then the Poincaré inequality is satisfied. A more detailed discussion of this criterion follows in Section 2.2. Since the criterion is not satisfied in our case, we obtain a Poincaré-type inequality instead.

Before we present the results of the paper, it is important to highlight a distinction concerning the nature of the initial configuration from which the process can start. We can classify the initial configurations according to the return probability to them. Recall that the membrane potential ${x}_{i}$ of every neuron i takes positive values within a compact set, and that whenever a neuron j different from i spikes, neuron i jumps ${W}_{j\to i}$ positions up, while its only other movement is the jump to zero when it spikes itself. This means that every variable ${x}_{i}$ can jump down only to zero, and after its first jump it can only pass through a finite number of possible positions, since the state-space is then discrete inside a compact set due to the imposed maximum potential $m.$ Since the neurons stay still between spikes, this implies that there is a finite number of possible configurations to which $X=({X}^{1},\dots ,{X}^{N})$ can return after every neuron has spiked for the first time. This set is the domain of the invariant measure $\pi $ of the semigroup ${P}_{t}$, and we will denote it by $\widehat{D}$. Thus, if the initial configuration $x=({x}^{1},\dots ,{x}^{N})$ does not belong to $\widehat{D}$, then after the process enters $\widehat{D}$, it will never return to this initial configuration.

It should be noted that it is easy to find initial configurations $x=({x}^{1},\dots {x}^{N})\notin \widehat{D}$. For example one can consider any x such that at least one of the ${x}^{i}$s is not a sum of synaptic weights ${W}_{j\to i}$, or any x with ${x}^{i}={x}^{j}$ for every i and j.
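The finiteness of the set of recurrent configurations can be illustrated by brute force: starting from the resting (all-zero) configuration and applying the jump maps ${\Delta}_{i}$ repeatedly, the enumeration below terminates. The two-neuron network, its weights, and the value of m in the test are hypothetical choices made only for illustration.

```python
def reachable_configurations(W, m):
    """Breadth-first enumeration of all configurations reachable from the
    all-zero (resting) configuration under the jump maps Delta_i; its
    termination illustrates that only finitely many configurations can recur."""
    n = len(W)
    start = tuple(0.0 for _ in range(n))
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for i in range(n):                    # let neuron i spike
            y = list(x)
            for j in range(n):
                if j != i and y[j] + W[i][j] <= m:
                    y[j] += W[i][j]           # post-synaptic gain, capped at m
            y[i] = 0.0                        # reset of the spiking neuron
            y = tuple(y)
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen
```

For example, with ${W}_{1\to 2}=0.5$, ${W}_{2\to 1}=0.25$ and $m=1$, the enumeration returns seven configurations, each coordinate being 0 or a capped sum of synaptic weights, in line with the description above.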

Below we present the Poincaré inequality for the semigroup ${P}_{t}$ for general starting configurations.

Assume the PJMP as described in (2)–(5). Then, for every $x\in D$, the following Poincaré-type inequality holds:

$$Va{r}_{{E}^{x}}\left(f\left({x}_{t}\right)\right)\le \alpha \left(t\right){\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds+\beta \sum _{i=1}^{n}{\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)ds$$

with $\alpha \left(t\right)$ a second order polynomial in the time t that does not depend on the function f, and $\beta $ a constant.

One notices that since the coefficient $\alpha \left(t\right)$ is a polynomial of second order and $\beta $ is a constant, the first term dominates over the second for long time t, as shown in the next corollary.

Assume the PJMP as described in (2)–(5). Then, for every $x\in D$ and t sufficiently large, i.e., $t>\zeta \left(f\right)$,

$$Va{r}_{{E}^{x}}\left(f\left({x}_{t}\right)\right)\le 2\alpha \left(t\right){\int}_{0}^{t}{P}_{w}\Gamma (f,f)\left(x\right)dw$$

where $\zeta \left(f\right)$ is a constant depending only on $f,$

$$\zeta \left(f\right)=\underset{x\in D}{max}{\left(\frac{\beta {\sum}_{i=1}^{n}\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)}{\Gamma (f,f)\left(x\right)}\right)}^{\frac{1}{2}}.$$

One should notice that although the threshold $\zeta \left(f\right)$ depends on the function f, the coefficient $2\alpha \left(t\right)$ of the inequality does not.

The proof of the Poincaré inequality for the general initial configuration is presented in Section 2.

In the special case where x is in the domain of the invariant measure, we obtain the following stronger inequality.

Assume the PJMP as described in (2)–(5). Then, there exists a ${t}_{1}>0$ such that for every $t>{t}_{1}$ and every $x\in \widehat{D}$, the following Poincaré-type inequality holds:

$$Va{r}_{{E}^{x}}\left(f\left({x}_{t}\right)\right)\le \gamma \left(t\right){P}_{t}\Gamma (f,f)\left(x\right)+2{\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds$$

with $\gamma \left(t\right)$ a third order polynomial in the time t that does not depend on the function f.

As in the general case, for t large enough we have the following corollary.

Assume the PJMP as described in (2)–(5). Then, there exists a ${t}_{1}>0$ such that for every $x\in \widehat{D}$ and t sufficiently large, i.e., $t>max\{\xi \left(f\right),{t}_{1}\}$,

$$Va{r}_{{E}^{x}}\left(f\left({x}_{t}\right)\right)\le 2\gamma \left(t\right){P}_{t}\Gamma (f,f)\left(x\right)$$

where $\xi \left(f\right)$ is a constant depending only on $f$,

$$\xi \left(f\right)={\left(\frac{2{max}_{x\in \widehat{D}}\Gamma (f,f)\left(x\right)}{{min}_{x\in \widehat{D}}\Gamma (f,f)\left(x\right)}\right)}^{\frac{1}{2}}.$$

We conclude this section with the Poincaré inequality for the invariant measure $\pi $, presented in the next theorem.

In this section, we focus on neurons that start with values on any possible initial configuration $x\in D$ as described by (2)–(5), and we prove the local Poincaré inequalities presented in Theorem 1 and Corollary 1. Let us first state some technical results.

We start by showing properties of the jump probabilities of the degenerate PJMP processes. Since the process is constant between jumps, the set of reachable positions y after a given time t for a trajectory starting from x is discrete. We therefore define

$${\pi}_{t}(x,y):={P}_{x}({X}_{t}=y)\phantom{\rule{1em}{0ex}}\mathrm{and}\phantom{\rule{1em}{0ex}}{D}_{x}:=\{y\in D,\phantom{\rule{0.166667em}{0ex}}{\pi}_{t}(x,y)>0\}.$$

This set is finite for the following reasons. On one hand, for each neuron $i\in I,$ the set ${S}_{i}=\left\{0\right\}\cup \{{\sum}_{k=1}^{n}{W}_{{j}_{k}\to i},n\in {\mathbb{N}}^{*},{j}_{k}\in I\}$ is discrete and such that the intersection with any compact is finite. On the other hand, we have ${D}_{x}\subset \left[{\prod}_{i\in I}\left({S}_{i}\cup ({x}_{i}+{S}_{i})\right)\right]\cap D.$

The idea is that, since the process is constant between jumps, elements of ${D}_{x}$ are such that there exists a sequence of jumps leading from x to $y.$ Since we are only interested in the arrival position $y,$ among all jump sequences leading to $y$ we can consider only sequences with a minimal number of jumps, and the number of such jump sequences leading to positions inside a compact set is finite, due to the fact that each ${W}_{j\to i}$ is non-negative.

Since x also lies in the compact set $D,$ we can obtain an upper bound on the cardinality of ${D}_{x}$ that is independent of $x.$

For a given time $s\in {\mathbb{R}}_{+}$ and a given position $x\in D,$ we denote by ${p}_{s}\left(x\right)$ the probability that starting at time 0 from position $x,$ the process has no jump in the interval $[0,s],$ and for a given neuron $i\in I$ by ${p}_{s}^{i}\left(x\right)$ the probability that the process has exactly one jump of neuron i and no jumps for other neurons.

Introducing the notation $\overline{\varphi}\left(x\right)={\sum}_{j\in I}\varphi \left({x}_{j}\right)$ and given the dynamics of the model, we have that

$${p}_{s}\left(x\right)={e}^{-s\overline{\varphi}\left(x\right)}$$

and

$$\begin{array}{c}{p}_{s}^{i}\left(x\right)={\int}_{0}^{s}\varphi \left({x}_{i}\right){e}^{-u\overline{\varphi}\left(x\right)}{e}^{-(s-u)\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)}du\hfill \\ \text{\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\hspace{0.17em}}=\left\{\begin{array}{cc}\frac{\varphi \left({x}_{i}\right)}{\overline{\varphi}\left(x\right)-\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)}\left({e}^{-s\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)}-{e}^{-s\overline{\varphi}\left(x\right)}\right)\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\ne \overline{\varphi}\left(x\right)\hfill \\ s\varphi \left({x}_{i}\right){e}^{-s\overline{\varphi}\left(x\right)}\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)=\overline{\varphi}\left(x\right)\hfill \end{array}\right\}.\hfill \end{array}$$

Define

$${t}_{0}=\left\{\begin{array}{cc}\frac{ln\left(\overline{\varphi}\left(x\right)\right)-ln\left(\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\right)}{\overline{\varphi}\left(x\right)-\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)}\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\ne \overline{\varphi}\left(x\right)\hfill \\ \frac{1}{\overline{\varphi}\left(x\right)}\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)=\overline{\varphi}\left(x\right)\hfill \end{array}\right\}.$$

As a function of $s,\phantom{\rule{0.166667em}{0ex}}{p}_{s}^{i}\left(x\right)$ is continuous, strictly increasing on $(0,{t}_{0})$ and strictly decreasing on $({t}_{0},+\infty )$ and we have ${p}_{0}^{i}\left(x\right)=0.$
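The closed form of ${p}_{s}^{i}\left(x\right)$ and the maximizer ${t}_{0}$ can be verified numerically. In the sketch below the function names are illustrative, and the rate values used in the test are hypothetical totals standing in for $\varphi \left({x}_{i}\right)$, $\overline{\varphi}\left(x\right)$ and $\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)$; the defining integral is evaluated with a midpoint rule and compared with the closed form.

```python
import math

def one_jump_prob(s, phi_i, bar_x, bar_dx):
    # closed form of p_s^i(x) from Equation (6); phi_i = phi(x_i),
    # bar_x = phi-bar(x), bar_dx = phi-bar(Delta^i(x))
    if bar_x != bar_dx:
        return phi_i / (bar_x - bar_dx) * (math.exp(-s * bar_dx) - math.exp(-s * bar_x))
    return s * phi_i * math.exp(-s * bar_x)

def one_jump_prob_numeric(s, phi_i, bar_x, bar_dx, steps=20000):
    # midpoint rule for the integral int_0^s phi_i e^{-u bar_x} e^{-(s-u) bar_dx} du
    h = s / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h
        total += phi_i * math.exp(-u * bar_x) * math.exp(-(s - u) * bar_dx)
    return total * h

def t_zero(bar_x, bar_dx):
    # the maximizer t_0 of s -> p_s^i(x), as defined in Equation (7)
    if bar_x != bar_dx:
        return (math.log(bar_x) - math.log(bar_dx)) / (bar_x - bar_dx)
    return 1.0 / bar_x
```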

Assume the PJMP as described in (2)–(5). There exist positive constants ${C}_{1}$ and ${C}_{2}$, independent of $t,\phantom{\rule{0.166667em}{0ex}}x$ and y, such that

- For all $t>{t}_{0},$ we have
  $$\sum _{y\in {D}_{x}}\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le {C}_{1}.$$
- For all $t\le {t}_{0},$ we have
  $$\sum _{y\in {D}_{x}\backslash \left\{{\Delta}^{i}\left(x\right)\right\}}\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le {C}_{1}$$
  and
  $$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),{\Delta}^{i}\left(x\right))}{{\pi}_{t}(x,{\Delta}^{i}\left(x\right))}\le \frac{{\pi}_{t}({\Delta}^{i}\left(x\right),{\Delta}^{i}\left(x\right))}{{\pi}_{t}(x,{\Delta}^{i}\left(x\right))}\le \frac{{C}_{2}}{t}.$$

As said before, the set ${D}_{x}$ is finite so it is sufficient to obtain an upper bound for the ratio $\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}.$

We have for all $s\in (0,t)$

$$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le \frac{{\left({\pi}_{t-s}({\Delta}^{i}\left(x\right),y){p}_{s}\left(y\right)+{sup}_{z\in D}(1-{p}_{s}\left(z\right))\right)}^{2}}{{p}_{s}^{i}\left(x\right){\pi}_{t-s}({\Delta}^{i}\left(x\right),y)}.$$

Here we decomposed the numerator according to two events: either ${X}_{t-s}=y$ and there is no jump in the time interval $[t-s,t],$ or there is at least one jump in $[t-s,t],$ whatever the position $z\in D$ of the process at time $t-s.$

From the previous inequality, we then obtain

$$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le \frac{{\left({\pi}_{t-s}({\Delta}^{i}\left(x\right),y){p}_{s}\left(y\right)+(1-{e}^{-sN\varphi \left(m\right)})\right)}^{2}}{{p}_{s}^{i}\left(x\right){\pi}_{t-s}({\Delta}^{i}\left(x\right),y)}$$

where we recall that the constant m appears in the definition of the compact set D introduced in (4).

If ${\pi}_{t-{t}_{0}}({\Delta}^{i}\left(x\right),y)\ge {p}_{{t}_{0}}^{i}\left(x\right),$ we have

$$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le \frac{1}{{\left({p}_{{t}_{0}}^{i}\left(x\right)\right)}^{2}}.$$

Assume now that ${\pi}_{t-{t}_{0}}({\Delta}^{i}\left(x\right),y)<{p}_{{t}_{0}}^{i}\left(x\right)$ and let us recall that as a function of $s,\phantom{\rule{0.166667em}{0ex}}{p}_{s}^{i}\left(x\right)$ is continuous, strictly increasing on $(0,{t}_{0})$ and ${p}_{0}^{i}\left(x\right)=0.$

On the other hand, as a function of $s,\phantom{\rule{0.166667em}{0ex}}{\pi}_{t-s}({\Delta}^{i}\left(x\right),y)$ is continuous and takes value ${\pi}_{t}({\Delta}^{i}\left(x\right),y)>0$ for $s=0.$

We deduce from this that there exists ${s}_{*}\in (0,{t}_{0})$ such that ${p}_{{s}_{*}}^{i}\left(x\right)={\pi}_{t-{s}_{*}}({\Delta}^{i}\left(x\right),y).$

Now (9) with $s={s}_{*}$ gives us

$$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le {\left({p}_{{s}_{*}}\left(y\right)\right)}^{2}+2{p}_{{s}_{*}}\left(y\right)\frac{1-{e}^{-{s}_{*}N\varphi \left(m\right)}}{{p}_{{s}_{*}}^{i}\left(x\right)}+{\left(\frac{1-{e}^{-{s}_{*}N\varphi \left(m\right)}}{{p}_{{s}_{*}}^{i}\left(x\right)}\right)}^{2}.$$

For all $s\in (0,{t}_{0}),\phantom{\rule{0.166667em}{0ex}}{p}_{s}\left(y\right)\le 1,$ and we then study $\frac{1-{e}^{-sN\varphi \left(m\right)}}{{p}_{s}^{i}\left(x\right)}$ as a function of $s\in (0,{t}_{0}).$

Using the explicit value of ${p}_{s}^{i}\left(x\right)$ given in (6) and assumption (5), we obtain for all $s\in (0,{t}_{0}),$

$$\frac{1-{e}^{-sN\varphi \left(m\right)}}{{p}_{s}^{i}\left(x\right)}\le \left\{\begin{array}{cc}\frac{{e}^{{t}_{0}N\varphi \left(m\right)}}{\delta}\phantom{\rule{0.166667em}{0ex}}\frac{\left(\overline{\varphi}\left(x\right)-\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\right)\left(1-{e}^{-sN\varphi \left(m\right)}\right)}{1-{e}^{-s\left(\overline{\varphi}\left(x\right)-\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\right)}}\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)\ne \overline{\varphi}\left(x\right)\hfill \\ \frac{{e}^{{t}_{0}N\varphi \left(m\right)}}{\delta}\frac{1-{e}^{-s}}{s}\hfill & \phantom{\rule{0.166667em}{0ex}}\mathrm{if}\phantom{\rule{0.166667em}{0ex}}\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)=\overline{\varphi}\left(x\right)\hfill \end{array}\right\}.$$

Recall that $\delta >0$ is defined in assumption (5) and satisfies $\varphi \left(x\right)\ge \delta $ for all $x\in {\mathbb{R}}_{+}.$

In both cases, when s is bounded away from zero we can obtain an upper bound independent of $x,$ and when s goes to zero, the limit of the right-hand side is $\frac{N\varphi \left(m\right){e}^{{t}_{0}N\varphi \left(m\right)}}{\delta}.$

From this, we deduce that there exists a constant ${M}_{D}$ such that for all $s\in (0,{t}_{0}),$

$$\frac{1-{e}^{-sN\varphi \left(m\right)}}{{p}_{s}^{i}\left(x\right)}\le {M}_{D}.$$
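This boundedness claim can be checked numerically. In the sketch below (illustrative names; N = 2 neurons, $\varphi \left(m\right)=1$ and the rate totals are hypothetical values standing in for $\varphi \left({x}_{i}\right)$, $\overline{\varphi}\left(x\right)$ and $\overline{\varphi}\left({\Delta}^{i}\left(x\right)\right)$), the ratio stays bounded on a grid of $(0,{t}_{0})$; moreover, for small s it approaches $N\varphi \left(m\right)/\varphi \left({x}_{i}\right)$, which is below the limit of the bound (11) quoted above since $\varphi \left({x}_{i}\right)\ge \delta $.

```python
import math

def one_jump_prob(s, phi_i, bar_x, bar_dx):
    # closed form of p_s^i(x) from Equation (6)
    if bar_x != bar_dx:
        return phi_i / (bar_x - bar_dx) * (math.exp(-s * bar_dx) - math.exp(-s * bar_x))
    return s * phi_i * math.exp(-s * bar_x)

def ratio(s, n_neurons, phi_m, phi_i, bar_x, bar_dx):
    # the quantity (1 - e^{-s N phi(m)}) / p_s^i(x), bounded by M_D on (0, t_0)
    return (1.0 - math.exp(-s * n_neurons * phi_m)) / one_jump_prob(s, phi_i, bar_x, bar_dx)
```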

Putting everything together, we obtain the announced result for the case where $t>{t}_{0}.$

We now consider the case where $t\le {t}_{0}$.

We start by considering the case where $y\ne {\Delta}^{i}\left(x\right)$ and go back to (9).

As a function of $s,\phantom{\rule{0.166667em}{0ex}}{\pi}_{t-s}({\Delta}^{i}\left(x\right),y)$ is continuous and takes values ${\pi}_{t}({\Delta}^{i}\left(x\right),y)>0$ and ${\pi}_{0}({\Delta}^{i}\left(x\right),y)=0$ respectively for $s=0$ and $s=t.$

We deduce from this that there exists ${s}_{*}\in (0,t)\subset (0,{t}_{0})$ such that ${p}_{{s}_{*}}^{i}\left(x\right)={\pi}_{t-{s}_{*}}({\Delta}^{i}\left(x\right),y)$ and we are back in the previous case so that the same result holds.

Let us now assume that $y={\Delta}^{i}\left(x\right)$; we have

$$\frac{{\pi}_{t}^{2}({\Delta}^{i}\left(x\right),y)}{{\pi}_{t}(x,y)}\le \frac{{\pi}_{t}({\Delta}^{i}\left(x\right),{\Delta}^{i}\left(x\right))}{{\pi}_{t}(x,{\Delta}^{i}\left(x\right))}\le \frac{1}{{p}_{t}^{i}\left(x\right)}.$$

Recall the explicit expression of ${p}_{t}^{i}\left(x\right)$ given in (6) and use (5) to bound the intensity function:

$${p}_{t}^{i}\left(x\right)=t\varphi \left({x}_{i}\right){e}^{-t\overline{\varphi}\left(x\right)}\ge t\delta {e}^{-{t}_{0}{sup}_{x\in D}\varphi \left(x\right)}=Ct$$

for some constant C independent of t and $x,$ which gives us the announced result. □

Taking the last result into account, we can obtain the first technical bound needed in the proof of the local Poincaré inequality.

Assume the PJMP as described in (2)–(5). Then

$${\left({\int}_{{t}_{0}}^{t-s}({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}-{\mathbb{E}}^{x})\sum _{j=1}^{N}\varphi ({x}_{u}^{j})(f\left({\Delta}_{j}\left({x}_{u}\right)\right)-f\left({x}_{u}\right))du\right)}^{2}\le (t-s)(1+{C}_{1})M{\int}_{{t}_{0}}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.$$

Let ${\pi}_{t}(x,y)$ be the probability kernel of ${\mathbb{E}}^{x}$, i.e., ${\mathbb{E}}^{x}\left(f\left({x}_{t}\right)\right)={\sum}_{y}{\pi}_{t}(x,y)f\left(y\right)$. Then we can write

$$\begin{array}{cc}\hfill {\mathbf{II}}_{1}:=& {\left({\int}_{{t}_{0}}^{t-s}({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}-{\mathbb{E}}^{x}){\displaystyle \sum _{j=1}^{N}}\varphi ({x}_{u}^{j})(f\left({\Delta}_{j}\left({x}_{u}\right)\right)-f\left({x}_{u}\right))du\right)}^{2}\hfill \\ \hfill =& {\left({\int}_{{t}_{0}}^{t-s}{\displaystyle \sum _{y}}{\pi}_{u}(x,y)\left((\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1){\displaystyle \sum _{j=1}^{N}}{\varphi}_{u}\left({y}^{j}\right)(f\left({\Delta}_{j}\left(y\right)\right)-f\left(y\right))\right)du\right)}^{2}.\hfill \end{array}$$

To continue, we use Hölder’s inequality to pass the square inside the first integral, which gives

$$\begin{array}{c}\hfill {\mathbf{II}}_{1}\le (t-s){\int}_{{t}_{0}}^{t-s}{\left(\sum _{y}{\pi}_{u}(x,y)\left((\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1)\sum _{j=1}^{N}{\varphi}_{u}\left({y}^{j}\right)(f\left({\Delta}_{j}\left(y\right)\right)-f\left(y\right))\right)\right)}^{2}du\end{array}$$

and then we apply the Cauchy–Schwarz inequality for the measure ${\mathbb{E}}^{x}$ to get

$${\mathbf{II}}_{1}\le (t-s){\int}_{{t}_{0}}^{t-s}{\mathbb{E}}^{x}{\left(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1\right)}^{2}\ast {\mathbb{E}}^{x}{\left(\underset{:=\mathbb{S}}{\underbrace{{\displaystyle \sum _{j=1}^{N}}\varphi \left({y}_{u}^{j}\right)(f\left({\Delta}_{j}\left({y}_{u}\right)\right)-f\left({y}_{u}\right))}}\right)}^{2}du.$$

The first factor in the above integral is bounded, by Lemma 1, by a constant:

$${\mathbb{E}}^{x}{\left(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1\right)}^{2}\le 1+{\mathbb{E}}^{x}{\left(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}\right)}^{2}=1+\sum _{y\in {D}_{x}}\frac{{\pi}_{u}^{2}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}\le 1+{C}_{1}$$

For the sum in the second factor, since $\varphi \left(x\right)\ge \delta >0$, we can use Hölder's inequality after dividing by the normalization constant ${\sum}_{j=1}^{N}\varphi ({y}_{u}^{j})$:

$$\begin{array}{cc}\hfill {\mathbb{S}}^{2}& ={\left({\displaystyle \sum _{j=1}^{N}}\varphi ({y}_{u}^{j})\right)}^{2}{\left({\displaystyle \sum _{j=1}^{N}}\frac{\varphi ({y}_{u}^{j})}{{\sum}_{k=1}^{N}\varphi ({y}_{u}^{k})}(f\left({\Delta}_{j}\left({y}_{u}\right)\right)-f\left({y}_{u}\right))\right)}^{2}\hfill \\ \hfill & \le M{\displaystyle \sum _{j=1}^{N}}\varphi ({y}_{u}^{j}){(f\left({\Delta}_{j}\left({y}_{u}\right)\right)-f\left({y}_{u}\right))}^{2}=M\Gamma (f,f)\left({y}_{u}\right)\hfill \end{array}$$

where $M={sup}_{x\in D}{\sum}_{i=1}^{N}\varphi \left({x}^{i}\right)$, so that

$${\mathbf{II}}_{1}\le (t-s)(1+{C}_{1})M{\int}_{{t}_{0}}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.$$

□
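The bound on the likelihood-ratio term used above is a chi-square-type computation; it can be sanity-checked numerically. The sketch below uses two arbitrary made-up probability vectors `p`, `q` standing in for ${\pi}_{u}(x,\cdot)$ and ${\pi}_{u}({\Delta}_{i}(x),\cdot)$ (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.random(6); p /= p.sum()   # stands in for pi_u(x, .)
q = rng.random(6); q /= q.sum()   # stands in for pi_u(Delta_i(x), .)

# E^x (q/p - 1)^2, written against the measure p
lhs = np.sum(p * (q / p - 1) ** 2)
# exact identity: E(X - 1)^2 = E X^2 - 1, since E X = sum(q) = 1
chi2 = np.sum(q ** 2 / p) - 1
assert np.isclose(lhs, chi2)
assert lhs <= 1 + np.sum(q ** 2 / p)   # the cruder bound used in the proof
```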

We now extend the last bound to an integral over a time domain starting at $0$.

For the PJMP as described in (2)–(5), we have

$$\begin{array}{cc}\hfill {\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}\left(\mathcal{L}f\left({x}_{u}\right)\right)-{\mathbb{E}}^{x}\left(\mathcal{L}f\left({x}_{u}\right)\right)\right)du\right)}^{2}\le & 16{t}_{0}^{2}M\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)\hfill \\ \hfill & +c(t-s){\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\hfill \end{array}$$

where $c\left(t\right)=8{t}_{0}M({C}_{1}+1)+2t(1+{C}_{1})M$.

To calculate a bound for

$${\mathbb{E}}^{{\Delta}_{i}\left(x\right)}\left(\mathcal{L}f\left({x}_{u}\right)\right)-{\mathbb{E}}^{x}\left(\mathcal{L}f\left({x}_{u}\right)\right)$$

we need to control the ratio $\frac{{\pi}_{u}^{2}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}$. As shown in Lemma 1, this ratio depends on time when $u\le {t}_{0}$; otherwise, it is bounded by a constant. For this reason, we start by splitting the time integration into the two domains $(0,{t}_{0})$ and $({t}_{0},t-s)$.

$$\begin{array}{cc}\hfill {\mathbf{I}}_{1}:=& {\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}\left(\mathcal{L}f\left({x}_{u}\right)\right)-{\mathbb{E}}^{x}\left(\mathcal{L}f\left({x}_{u}\right)\right)\right)du\right)}^{2}\le \hfill \\ \hfill & 2\underset{:={\mathbf{II}}_{1}}{\underbrace{{\left({\int}_{{t}_{0}}^{t-s}({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}-{\mathbb{E}}^{x}){\displaystyle \sum _{i=1}^{N}}\varphi \left({x}_{u}^{i}\right)(f\left({\Delta}_{i}\left({x}_{u}\right)\right)-f\left({x}_{u}\right))du\right)}^{2}}}\hfill \\ \hfill & +2\underset{:={\mathbf{II}}_{2}}{\underbrace{{\left({\int}_{0}^{{t}_{0}}({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}-{\mathbb{E}}^{x}){\displaystyle \sum _{i=1}^{N}}\varphi \left({x}_{u}^{i}\right)(f\left({\Delta}_{i}\left({x}_{u}\right)\right)-f\left({x}_{u}\right))du\right)}^{2}}}.\hfill \end{array}$$

The first summand ${\mathbf{II}}_{1}$ is upper bounded by the previous lemma. To bound the second term ${\mathbf{II}}_{2}$ on the right-hand side of (13) we write

$$\begin{array}{cc}\hfill {\mathbf{II}}_{2}\le & 2\underset{:={\mathbf{III}}_{1}}{\underbrace{{\left({\int}_{0}^{{t}_{0}}({\pi}_{u}({\Delta}_{i}(x),{\Delta}_{i}(x))-{\pi}_{u}(x,{\Delta}_{i}(x))){\displaystyle \sum _{i=1}^{N}}\varphi ({\Delta}_{i}{(x)}^{i})(f({\Delta}_{i}({\Delta}_{i}(x)))-f({\Delta}_{i}(x)))du\right)}^{2}}}\hfill \\ \hfill & +2\underset{:={\mathbf{III}}_{2}}{\underbrace{{\left({\int}_{0}^{{t}_{0}}{\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}(x)}}({\pi}_{u}({\Delta}_{i}(x),y)-{\pi}_{u}(x,y)){\displaystyle \sum _{i=1}^{N}}\varphi ({y}^{i})(f({\Delta}_{i}(y))-f(y))du\right)}^{2}}}.\hfill \end{array}$$

The distinction between the two cases, i.e., whether after time u the neuron configuration is ${\Delta}_{i}\left(x\right)$ or not, reflects the two different bounds that Lemma 1 provides for the fraction $\frac{{\pi}_{t}^{2}({\Delta}_{i}\left(x\right),y)}{{\pi}_{t}(x,y)}$, depending on whether y equals ${\Delta}_{i}\left(x\right)$ or not. We first bound the second term on the right-hand side, working similarly to Lemma 2. We apply Hölder's inequality to the time integral, after dividing by the normalization constant ${t}_{0}$. This gives

$${\mathbf{III}}_{2}\le {t}_{0}{\int}_{0}^{{t}_{0}}{\left(\sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1){\pi}_{u}(x,y)\sum _{i=1}^{N}\varphi \left({y}^{i}\right)(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right))\right)}^{2}du.$$

Now we use the Cauchy–Schwarz inequality on the first sum, obtaining

$$\begin{array}{cc}\hfill {\mathbf{III}}_{2}\le & {t}_{0}{\int}_{0}^{{t}_{0}}\left[{\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}{\pi}_{u}(x,y){(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1)}^{2}\right]\cdot \hfill \\ \hfill & \left[{\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}{\pi}_{u}(x,y){\left({\displaystyle \sum _{i=1}^{N}}\varphi \left({y}^{i}\right)(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right))\right)}^{2}\right]du.\hfill \end{array}$$

The first factor in the last product can be upper bounded using Lemma 1:

$$\begin{array}{cc}\hfill {\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}{\pi}_{u}(x,y){(\frac{{\pi}_{u}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}-1)}^{2}\le & {\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}(\frac{{\pi}_{u}^{2}({\Delta}_{i}\left(x\right),y)}{{\pi}_{u}(x,y)}+{\pi}_{u}(x,y))\hfill \\ \hfill \le & ({C}_{1}+1).\hfill \end{array}$$

Meanwhile, for the second factor in the product of (15) we can write

$$\begin{array}{cc}\hfill {\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}& {\pi}_{u}(x,y){\left(\sum _{i=1}^{N}\varphi \left({y}^{i}\right)(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right))\right)}^{2}\hfill \\ \hfill & \le {\displaystyle \sum _{y\in D,y\ne {\Delta}_{i}\left(x\right)}}{\pi}_{u}(x,y)({\displaystyle \sum _{i=1}^{N}}\varphi \left({y}^{i}\right)){\displaystyle \sum _{i=1}^{N}}\varphi \left({y}^{i}\right){(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right))}^{2}\hfill \\ \hfill & \le M{\displaystyle \sum _{y}}{\pi}_{u}(x,y)\Gamma (f,f)\left(y\right)\hfill \\ \hfill & =M{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)\hfill \end{array}$$

where for the first bound we once more used Hölder's inequality, after dividing by the appropriate normalization constant ${\sum}_{i=1}^{N}\varphi \left({y}^{i}\right)$. Putting the last bound together with (16) into (15), we obtain

$${\mathbf{III}}_{2}\le {t}_{0}M({C}_{1}+1){\int}_{0}^{{t}_{0}}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.$$

We now bound the first summand of (14). Notice that in this case we cannot use the analogous bound from Lemma 1, namely $\frac{{\pi}_{t}^{2}({\Delta}_{i}\left(x\right),{\Delta}_{i}\left(x\right))}{{\pi}_{t}(x,{\Delta}_{i}\left(x\right))}\le \frac{{C}_{2}}{t},$ as we did for ${\mathbf{III}}_{2}$, since that would lead to the final upper bound ${\mathbf{III}}_{1}\le {t}_{0}M({C}_{1}+1){\int}_{0}^{{t}_{0}}\frac{1}{u}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du$, which may diverge. Instead, we bound ${\mathbf{III}}_{1}$ by the carré du champ of the function after the first jump. We can write

$$\begin{array}{cc}\hfill {\mathbf{III}}_{1}\le & 4{\left({\int}_{0}^{{t}_{0}}{\displaystyle \sum _{i=1}^{N}}\varphi ({\Delta}_{i}{(x)}^{i})|f({\Delta}_{i}({\Delta}_{i}(x)))-f({\Delta}_{i}(x))|du\right)}^{2}\hfill \\ \hfill \le & 4{t}_{0}^{2}{({\displaystyle \sum _{i=1}^{N}}\varphi ({\Delta}_{i}{(x)}^{i}))}^{2}{\left({\displaystyle \sum _{i=1}^{N}}\frac{\varphi ({\Delta}_{i}{(x)}^{i})}{{\sum}_{i=1}^{N}\varphi ({\Delta}_{i}{(x)}^{i})}|f({\Delta}_{i}({\Delta}_{i}(x)))-f({\Delta}_{i}(x))|\right)}^{2}\hfill \end{array}$$

where above we divided by the normalization constant ${\sum}_{i=1}^{N}\varphi ({\Delta}_{i}{\left(x\right)}^{i})$, which is possible since $\varphi \left(x\right)\ge \delta $. We can now apply Hölder's inequality to the sum, so that

$${\mathbf{III}}_{1}\le 4{t}_{0}^{2}M\sum _{i=1}^{N}\varphi ({\Delta}_{i}{(x)}^{i}){(f({\Delta}_{i}({\Delta}_{i}(x)))-f({\Delta}_{i}(x)))}^{2}=4{t}_{0}^{2}M\Gamma (f,f)({\Delta}_{i}(x)).$$

If we combine this with (17) and (14), we get the following bound for the second term of (13):

$${\mathbf{II}}_{2}\le 8{t}_{0}^{2}M\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)+4({C}_{1}+1){t}_{0}M{\int}_{0}^{{t}_{0}}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.$$

The last bound, together with the bound shown in Lemma 2 for the first term ${\mathbf{II}}_{1}$ of (13), gives

$$\begin{array}{cc}\hfill {\mathbf{I}}_{1}\le & 16{t}_{0}^{2}M\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)+8{t}_{0}M({C}_{1}+1){\int}_{0}^{{t}_{0}}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\hfill \\ \hfill & +2(t-s)(1+{C}_{1})M{\int}_{{t}_{0}}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\hfill \\ \hfill \le & 16{t}_{0}^{2}M\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)+2M({C}_{1}+1)(4{t}_{0}+(t-s)){\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.\hfill \end{array}$$

For the second inequality we used that the carré du champ is non-negative, since

$$\Gamma (f,f)=\frac{1}{2}(\mathcal{L}\left({f}^{2}\right)-2f\mathcal{L}f)=\underset{t\downarrow 0}{lim}\frac{1}{2t}\left({P}_{t}{f}^{2}-{\left({P}_{t}f\right)}^{2}\right)\ge 0$$

by the Cauchy–Schwarz inequality. □
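For a pure jump generator $\mathcal{L}f(x)=\sum_y q(x,y)(f(y)-f(x))$, the identity above reduces to $\Gamma(f,f)(x)=\frac{1}{2}\sum_y q(x,y){(f(y)-f(x))}^{2}$, which makes the non-negativity explicit. A small numerical sanity check with an arbitrary rate matrix (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Arbitrary jump-rate matrix: nonnegative off-diagonal entries, zero row sums.
Q = rng.random((n, n))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))
f = rng.random(n)

gamma = 0.5 * (Q @ f**2 - 2 * f * (Q @ f))   # Gamma via the generator
# Explicit jump form: (1/2) sum_y Q(x,y) (f(y) - f(x))^2
jump_form = 0.5 * np.einsum('xy,xy->x', Q, (f[None, :] - f[:, None]) ** 2)

assert np.allclose(gamma, jump_form)
assert gamma.min() >= -1e-12                 # non-negativity (up to rounding)
```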

We have obtained all the technical results that we need to show the Poincaré inequality for the semigroup ${P}_{t}$ for general initial configurations.

Denote ${P}_{t}f\left(x\right)={\mathbb{E}}^{x}f\left({x}_{t}\right)$. Then

$${P}_{t}{f}^{2}\left(x\right)-{\left({P}_{t}f\left(x\right)\right)}^{2}={\int}_{0}^{t}\frac{d}{ds}{P}_{s}{\left({P}_{t-s}f\right)}^{2}\left(x\right)ds={\int}_{0}^{t}{P}_{s}\Gamma ({P}_{t-s}f,{P}_{t-s}f)\left(x\right)ds$$

since $\frac{d}{ds}{P}_{s}=\mathcal{L}{P}_{s}={P}_{s}\mathcal{L}$.
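This decomposition can be checked numerically with a matrix exponential. The sketch below uses an arbitrary finite-state jump generator and the convention $\Gamma(f,f)=\mathcal{L}({f}^{2})-2f\mathcal{L}f$ that matches the displayed identity term by term (normalisations of $\Gamma$ differ by a factor of 2 across references):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, t = 4, 1.0
Q = rng.random((n, n))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))   # arbitrary jump generator
f = rng.random(n)

P = lambda s: expm(s * Q)             # semigroup P_s = e^{sQ}
gamma = lambda g: Q @ g**2 - 2 * g * (Q @ g)

lhs = P(t) @ f**2 - (P(t) @ f) ** 2
# midpoint-rule approximation of  int_0^t P_s Gamma(P_{t-s} f) ds
m = 500
ss = (np.arange(m) + 0.5) * (t / m)
rhs = (t / m) * sum(P(s) @ gamma(P(t - s) @ f) for s in ss)

assert np.allclose(lhs, rhs, atol=1e-4)
```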

We can write

$$\Gamma ({P}_{t-s}f,{P}_{t-s}f)\left(x\right)=\sum _{i=1}^{N}\varphi \left({x}^{i}\right){({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}f\left({x}_{t-s}\right)-{\mathbb{E}}^{x}f\left({x}_{t-s}\right))}^{2}.$$

If we could use the translation property ${\mathbb{E}}^{x+y}f\left({x}_{t}\right)={\mathbb{E}}^{x}f({x}_{t}+y)$, used for instance in proving Poincaré and modified log-Sobolev inequalities in [13,17], then we could bound relatively easily the carré du champ of the expectation of a function by the carré du champ of the function itself, as demonstrated below

$$\begin{array}{cc}\hfill {\displaystyle \sum _{i=1}^{N}}\varphi \left({x}^{i}\right){({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}f\left({x}_{t-s}\right)-{\mathbb{E}}^{x}f\left({x}_{t-s}\right))}^{2}=& {\displaystyle \sum _{i=1}^{N}}\varphi \left({x}^{i}\right){({\mathbb{E}}^{x}f\left({\Delta}_{i}\left({x}_{t-s}\right)\right)-{\mathbb{E}}^{x}f\left({x}_{t-s}\right))}^{2}\hfill \\ \hfill \le & {P}_{t-s}\Gamma (f,f)\left(x\right)\hfill \end{array}$$

The inequality $\Gamma ({P}_{t}f,{P}_{t}f)\le {P}_{t}\Gamma (f,f)$ for $t>0$ relates directly to the ${\Gamma}_{2}$ criterion (see [22,23]), which states that if ${\Gamma}_{2}\left(f\right):=\frac{1}{2}\mathcal{L}(\Gamma (f,f))-\Gamma (f,\mathcal{L}f)\ge 0$ then the Poincaré inequality holds, since

$$\begin{array}{cc}\hfill \frac{d}{ds}({P}_{s}\Gamma ({P}_{t-s}f,{P}_{t-s}f))=& {P}_{s}\left(\mathcal{L}\Gamma ({P}_{t-s}f,{P}_{t-s}f)-2\Gamma ({P}_{t-s}f,\mathcal{L}{P}_{t-s}f)\right)\hfill \\ \hfill =& 2{P}_{s}\left({\Gamma}_{2}\left({P}_{t-s}f\right)\right)\ge 0\hfill \end{array}$$

implies $\Gamma ({P}_{t}f,{P}_{t}f)\le {P}_{t}\Gamma (f,f)$ (see also [13]).

Unfortunately, this is not the case for our PJMP, where the degeneracy of the jumps and their memoryless nature allow any neuron ${x}^{i}$ to jump to zero from any position, with a probability that depends on the current configuration of the neurons. Moreover, contrary to the case of Poisson processes, our intensity also depends on the position.
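For intuition, the degenerate dynamics are easy to simulate: the next spike time is exponential with rate $\sum_i\varphi(x^i)$, the spiking neuron is drawn proportionally to its intensity and reset to zero, and the other neurons gain. The sketch below is illustrative only: the affine intensity $\varphi(v)=\delta+v$, the unit gain, and the cap $K$ (which keeps the state space compact) are assumptions, not the paper's exact specification.

```python
import random

def simulate_pjmp(N=4, K=3, delta=0.5, T=50.0, seed=1):
    """Gillespie-type simulation of a degenerate PJMP: the spiking neuron
    loses its memory (resets to 0) while every other neuron gains one
    unit, capped at K so that the state space stays compact."""
    rng = random.Random(seed)
    phi = lambda v: delta + v           # intensity, bounded below by delta > 0
    x, t = [0] * N, 0.0
    while True:
        rates = [phi(v) for v in x]
        total = sum(rates)
        t += rng.expovariate(total)     # waiting time until the next spike
        if t > T:
            return x
        u, acc, spiker = rng.random() * total, 0.0, N - 1
        for j, r in enumerate(rates):   # pick the spiking neuron ~ its rate
            acc += r
            if u <= acc:
                spiker = j
                break
        x = [0 if j == spiker else min(v + 1, K) for j, v in enumerate(x)]

final_state = simulate_pjmp()
```

With a fixed seed the run is reproducible, and every visited configuration stays in the compact state space $\{0,\dots,K\}^N$.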

To obtain the carré du champ of the function, we make use of Dynkin’s formula, which allows us to bound the expectation of a function by the expectation of the infinitesimal generator of the function; the latter is comparable to the desired carré du champ.

Therefore, from Dynkin’s formula

$${\mathbb{E}}^{x}f\left({x}_{t}\right)=f\left(x\right)+{\int}_{0}^{t}{\mathbb{E}}^{x}\left(\mathcal{L}f\left({x}_{u}\right)\right)du$$

we get

$$\begin{array}{cc}\hfill {\left({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}f\left({x}_{t-s}\right)-{\mathbb{E}}^{x}f\left({x}_{t-s}\right)\right)}^{2}\le & 2{\left(f\left({\Delta}_{i}\left(x\right)\right)-f\left(x\right)\right)}^{2}\hfill \\ \hfill & +2{\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}\left(\mathcal{L}f\left({x}_{u}\right)\right)-{\mathbb{E}}^{x}\left(\mathcal{L}f\left({x}_{u}\right)\right)\right)du\right)}^{2}.\hfill \end{array}$$

To bound the second term above we will use the bound shown in Lemma 3

$$\begin{array}{cc}\hfill {\left({\mathbb{E}}^{{\Delta}_{i}\left(x\right)}f\left({x}_{t-s}\right)-{\mathbb{E}}^{x}f\left({x}_{t-s}\right)\right)}^{2}\le & 2{\left(f\left({\Delta}_{i}\left(x\right)\right)-f\left(x\right)\right)}^{2}+32{t}_{0}^{2}M\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)\hfill \\ \hfill & +2c(t-s){\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\hfill \end{array}$$

This together with (19) gives

$$\begin{array}{cc}\hfill \Gamma ({P}_{t-s}f,{P}_{t-s}f)\left(x\right)\le & 2\Gamma (f,f)\left(x\right)+32{t}_{0}^{2}M{\displaystyle \sum _{i=1}^{N}}\varphi \left({x}^{i}\right)\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)\hfill \\ \hfill & +2Mc(t-s){\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du.\hfill \end{array}$$

Finally, plugging this in (18) we obtain

$$\begin{array}{cc}\hfill {P}_{t}{f}^{2}\left(x\right)-{\left({P}_{t}f\left(x\right)\right)}^{2}\le & 2{\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds+32{t}_{0}^{2}M{\int}_{0}^{t}{P}_{s}{\displaystyle \sum _{i=1}^{N}}\varphi \left({x}^{i}\right)\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)ds\hfill \\ \hfill & +2M{\int}_{0}^{t}c(t-s){P}_{s}{\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\,ds.\hfill \end{array}$$

For the second term we can bound $\varphi $ by M. For the last term on the right-hand side, since the carré du champ is non-negative, we get

$$\begin{array}{cc}\hfill {P}_{s}{\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du& ={P}_{s}{\int}_{s}^{t}{P}_{w-s}\Gamma (f,f)\left(x\right)dw={\int}_{s}^{t}{P}_{w}\Gamma (f,f)\left(x\right)dw\hfill \\ \hfill & \le {\int}_{0}^{t}{P}_{w}\Gamma (f,f)\left(x\right)dw\hfill \end{array}$$

where above we used the change of variables $w=u+s$ and the property of the Markov semigroup ${P}_{s}{P}_{w-s}={P}_{w}$. Since this last quantity does not depend on s, and $c(t-s)\le c\left(t\right)$, we further get

$${\int}_{0}^{t}c(t-s){P}_{s}{\int}_{0}^{t-s}{\mathbb{E}}^{x}\Gamma (f,f)\left({x}_{u}\right)du\,ds\le c\left(t\right)t{\int}_{0}^{t}{P}_{w}\Gamma (f,f)\left(x\right)dw.$$

Putting everything together we finally obtain

$$\begin{array}{cc}\hfill {P}_{t}{f}^{2}\left(x\right)-{\left({P}_{t}f\left(x\right)\right)}^{2}\le & (2+2Mc\left(t\right)t){\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left(x\right)ds\hfill \\ \hfill & +32{t}_{0}^{2}{M}^{2}{\displaystyle \sum _{i=1}^{N}}{\int}_{0}^{t}{P}_{s}\Gamma (f,f)\left({\Delta}_{i}\left(x\right)\right)ds.\hfill \end{array}$$

And so, the theorem follows for constants

$$\alpha \left(t\right)=2+2Mtc\left(t\right)=2+16{t}_{0}t{M}^{2}({C}_{1}+1)+4{t}^{2}(1+{C}_{1}){M}^{2}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\beta =32{t}_{0}^{2}{M}^{2}.$$

We start by showing that, in the case where the initial configuration belongs to the domain $x\in \widehat{D}$ of the invariant measure, we obtain a strong lower bound on the probabilities ${\pi}_{t}(x,y)={P}_{x}({X}_{t}=y)$ for t large enough, as presented in the following lemma.

Since $\widehat{D}\subset D$ is finite, there exists a strictly positive constant $\eta $ such that $\pi \left(x\right)>\eta >0$ for every $x\in \widehat{D}$. Moreover, since ${lim}_{t\to \infty}{\pi}_{t}(x,y)=\pi \left(y\right)$ for every $x\in \widehat{D}$ and $y\in {D}_{x}$, and since $\widehat{D}$ is finite, there exists a $\theta >0$ such that ${\pi}_{t}(x,y)>\frac{1}{\theta}$ for every $t\ge {t}_{1}$, $x\in \widehat{D}$ and $y\in {D}_{x}$. □
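The compactness argument can be seen numerically: for any irreducible finite-state generator, the rows of $e^{tL}$ converge to the invariant measure, so ${\pi}_{t}(x,y)$ is uniformly bounded below for large t. A sketch with an arbitrary 3-state generator (purely illustrative, not the PJMP itself):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary irreducible 3-state generator (zero row sums) -- illustrative only.
L = np.array([[-1.0,  0.7,  0.3],
              [ 0.5, -0.9,  0.4],
              [ 0.2,  0.8, -1.0]])
pi_t = lambda t: expm(t * L)                  # row x holds pi_t(x, .)

# Invariant measure: normalised left eigenvector for the eigenvalue 0.
w, v = np.linalg.eig(L.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

t1 = 15.0
assert np.allclose(pi_t(t1), np.tile(pi, (3, 1)), atol=1e-6)
assert pi_t(t1).min() >= 0.9 * pi.min()       # uniform lower bound 1/theta
```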

Taking into account the last result, we can obtain the first technical bound needed in the proof of the local Poincaré inequality, taking advantage of the bounds shown for times greater than ${t}_{1}$.

Assume the PJMP as described in (2)–(5). Then, for every $z\in \widehat{D}$ and every $t\ge {t}_{1}$,

$${P}_{s}{\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}\left(z\right)}(\mathcal{L}f\left({z}_{u}\right){\mathcal{I}}_{{z}_{u}\in \widehat{D}})-{\mathbb{E}}^{z}(\mathcal{L}f\left({z}_{u}\right){\mathcal{I}}_{{z}_{u}\in \widehat{D}})\right)du\right)}^{2}\le 4{\theta}^{2}{t}^{2}M{\mathbb{E}}^{x}(\Gamma (f,f)\left({x}_{t}\right){\mathcal{I}}_{{x}_{t}\in \widehat{D}})$$

We can compute

$$\begin{array}{cc}\hfill {\mathbf{I}}_{2}:=& {P}_{s}{\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}(z)}(\mathcal{L}f({z}_{u}){\mathcal{I}}_{{z}_{u}\in \widehat{D}})-{\mathbb{E}}^{z}(\mathcal{L}f({z}_{u}){\mathcal{I}}_{{z}_{u}\in \widehat{D}})\right)du\right)}^{2}\hfill \\ \hfill \le & 2{P}_{s}{\left({\int}_{0}^{t}{\displaystyle \sum _{y\in \widehat{D}}}{\pi}_{u}({\Delta}_{i}(z),y)\left|\mathcal{L}f(y)\right|du\right)}^{2}+2{P}_{s}{\left({\int}_{0}^{t}{\displaystyle \sum _{y\in \widehat{D}}}{\pi}_{u}(z,y)\left|\mathcal{L}f(y)\right|du\right)}^{2}\hfill \end{array}$$

Now we apply the Cauchy–Schwarz inequality three times, to pass the square inside the integral and the two sums. We then obtain

$$\begin{array}{cc}\hfill {\mathbf{I}}_{2}\le & 2{\theta}^{2}tM{\displaystyle \sum _{\omega =z,{\Delta}_{i}\left(z\right)}}{\displaystyle \sum _{y\in \widehat{D}}}{\int}_{0}^{t}{P}_{s}{\pi}_{u}(\omega ,y)\sum _{i=1}^{N}\varphi \left({y}^{i}\right){\left(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right)\right)}^{2}du\hfill \\ \hfill =& 2{\theta}^{2}tM{\displaystyle \sum _{\omega =z,{\Delta}_{i}\left(z\right)}}{\displaystyle \sum _{y\in \widehat{D}}}{\int}_{0}^{t}{\pi}_{s+u}(\omega ,y)\sum _{i=1}^{N}\varphi \left({y}^{i}\right){\left(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right)\right)}^{2}du\hfill \end{array}$$

where above we used the semigroup property ${P}_{s}{P}_{u}={P}_{s+u}$. Since $t\ge {t}_{1}$, we can use Lemma 4 to bound ${\pi}_{u+s}(\omega ,y)\le \theta {\pi}_{t}(x,y)$ for every $\omega $ and $y\in \widehat{D}$. We then obtain

$$\begin{array}{cc}\hfill {\mathbf{I}}_{2}& \le 4{\theta}^{2}{t}^{2}M{\displaystyle \sum _{y\in D}}{\pi}_{t}(x,y){\displaystyle \sum _{i=1}^{N}}\varphi \left({y}^{i}\right){\left(f\left({\Delta}_{i}\left(y\right)\right)-f\left(y\right)\right)}^{2}\hfill \\ \hfill & =4{\theta}^{2}{t}^{2}M{P}_{t}\Gamma (f,f)\left(x\right).\hfill \end{array}$$

□

We can now show the Poincaré inequality for the semigroup ${P}_{t}$ for initial configurations inside the domain $\widehat{D}$ of the invariant measure $\pi $.

We work as in the proof of the Poincaré inequality of Theorem 1 for general initial conditions. As before, we denote ${P}_{t}f\left(x\right)={\mathbb{E}}^{x}f\left({x}_{t}\right)$. Then

$$\begin{array}{cc}\hfill {\mathbb{E}}^{x}({f}^{2}({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})-{\left({\mathbb{E}}^{x}(f({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})\right)}^{2}& ={\int}_{0}^{t}\frac{d}{ds}{P}_{s}{\left({\mathbb{E}}^{{x}_{s}}(f({x}_{t-s}){\mathcal{I}}_{{x}_{t-s}\in \widehat{D}})\right)}^{2}(x)ds\hfill \\ \hfill & ={\int}_{0}^{t}{P}_{s}\Gamma \left({\mathbb{E}}^{{x}_{s}}(f({x}_{t-s}){\mathcal{I}}_{{x}_{t-s}\in \widehat{D}}),{\mathbb{E}}^{{x}_{s}}(f({x}_{t-s}){\mathcal{I}}_{{x}_{t-s}\in \widehat{D}})\right)(x)ds\hfill \end{array}$$

since $\frac{d}{ds}{P}_{s}=\mathcal{L}{P}_{s}={P}_{s}\mathcal{L}$. To bound the carré du champ, as in the general case of Theorem 1, from (19) and Dynkin’s formula we obtain the following:

$$\begin{array}{cc}\hfill \Gamma \left({\mathbb{E}}^{{x}_{s}}(f({x}_{t-s}){\mathcal{I}}_{{x}_{t-s}\in \widehat{D}}),{\mathbb{E}}^{{x}_{s}}(f({x}_{t-s}){\mathcal{I}}_{{x}_{t-s}\in \widehat{D}})\right)\le & 2\Gamma (f,f)({x}_{s})\hfill \\ \hfill +2{\displaystyle \sum _{i=1}^{N}}\varphi ({x}_{s}^{i}){\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}({x}_{s})}(\mathcal{L}f({x}_{u}){\mathcal{I}}_{{x}_{u}\in \widehat{D}})-{\mathbb{E}}^{{x}_{s}}(\mathcal{L}f({x}_{u}){\mathcal{I}}_{{x}_{u}\in \widehat{D}})\right)du\right)}^{2}.& \hfill \end{array}$$

From this, (20), and the bound $M={sup}_{x\in D}{\sum}_{i=1}^{N}\varphi \left({x}^{i}\right)$ on $\varphi $, we then get

$$\begin{array}{cc}\hfill {\mathbb{E}}^{x}({f}^{2}({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})-{\left({\mathbb{E}}^{x}(f({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})\right)}^{2}\le & 2{\int}_{0}^{t}{P}_{s}\Gamma (f,f)(x)ds\hfill \\ \hfill +2M\sum _{i=1}^{N}{\int}_{0}^{t}{P}_{s}{\left({\int}_{0}^{t-s}\left({\mathbb{E}}^{{\Delta}_{i}({x}_{s})}(\mathcal{L}f({x}_{u}){\mathcal{I}}_{{x}_{u}\in \widehat{D}})-{\mathbb{E}}^{{x}_{s}}(\mathcal{L}f({x}_{u}){\mathcal{I}}_{{x}_{u}\in \widehat{D}})\right)du\right)}^{2}ds.& \hfill \end{array}$$

Since $t\ge {t}_{1}$, we can use Lemma 5 to bound the second term on the right-hand side:

$${\mathbb{E}}^{x}({f}^{2}({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})-{\left({\mathbb{E}}^{x}(f({x}_{t}){\mathcal{I}}_{{x}_{t}\in \widehat{D}})\right)}^{2}\le 2{\int}_{0}^{t}{P}_{s}\Gamma (f,f)(x)ds+8{\theta}^{2}{M}^{2}N{t}^{3}{P}_{t}\Gamma (f,f)(x).$$

□

In this section, we prove a Poincaré inequality for the invariant measure $\pi $ presented in Theorem 3, using methods developed in [20,24,25].

At first assume $\pi \left(f\right)=0$. We can write

$$Va{r}_{\pi}\left(f\right)=\int {f}^{2}d\pi =\int {f}^{2}{\mathcal{I}}_{\widehat{D}}d\pi .$$

We follow the method from [16] used to prove a spectral gap for finite Markov chains. Since $\pi \left(f\right)=0$, we can write

$$\int {f}^{2}{\mathcal{I}}_{\widehat{D}}d\pi =\frac{1}{2}\int \int {(f\left(x\right)-f\left(y\right))}^{2}{\mathcal{I}}_{x\in \widehat{D}}{\mathcal{I}}_{y\in \widehat{D}}\pi \left(dx\right)\pi \left(dy\right).$$
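The rewriting above is the standard identity $\mathrm{Var}_{\pi}(f)=\frac{1}{2}\sum_{x,y}\pi(x)\pi(y){(f(x)-f(y))}^{2}$, valid for any probability measure; a quick numerical check with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(4)
pi = rng.random(7); pi /= pi.sum()    # arbitrary probability vector
f = rng.random(7)
f = f - pi @ f                        # centre so that pi(f) = 0

var = pi @ f**2                       # Var_pi(f), since pi(f) = 0
double = 0.5 * np.sum(np.outer(pi, pi) * (f[:, None] - f[None, :]) ** 2)
assert np.isclose(var, double)
```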

Let $\delta (x,y)=\{{i}_{1},{i}_{2},\dots ,{i}_{\left|\delta (x,y)\right|}\}$ be a shortest path from $x\in \widehat{D}$ to $y\in \widehat{D}$, where the index ${i}_{k}$ stands for the neuron that spikes at the k-th step. Since $\widehat{D}$ is finite, ${max}_{x,y\in \widehat{D}}\left|\delta (x,y)\right|$ is finite. We denote ${\tilde{x}}^{0}=x$ and ${\tilde{x}}^{k}={\Delta}_{{i}_{k}}({\Delta}_{{i}_{k-1}}(\dots {\Delta}_{{i}_{1}}\left(x\right)\dots ))$, for $k=1,\dots ,\left|\delta (x,y)\right|$, so that ${\tilde{x}}^{\left|\delta (x,y)\right|}=y$. So, we can write
$$\begin{array}{cc}\hfill \pi (x)\pi (y){(f(x)-f(y))}^{2}\le & \pi (y)\pi (x){\displaystyle \sum _{j=0}^{|\delta (x,y)|-1}}{(f({\Delta}_{{i}_{j+1}}({\tilde{x}}^{j}))-f({\tilde{x}}^{j}))}^{2}\hfill \\ \hfill \le & \frac{\pi (y)\pi (x)}{\delta}{\displaystyle \sum _{j=0}^{|\delta (x,y)|-1}}\varphi ({({\tilde{x}}^{j})}^{{i}_{j+1}}){(f({\Delta}_{{i}_{j+1}}({\tilde{x}}^{j}))-f({\tilde{x}}^{j}))}^{2}\hfill \end{array}$$

where above we used that $\varphi \ge \delta $. We can now form the carré du champ:

$$\begin{array}{cc}\hfill \pi (x)\pi (y){(f(x)-f(y))}^{2}\le & \frac{\pi (y)\pi (x)}{\delta}{\displaystyle \sum _{j=0}^{|\delta (x,y)|-1}}{\displaystyle \sum _{i=1}^{N}}\varphi ({({\tilde{x}}^{j})}^{i}){(f({\Delta}_{i}({\tilde{x}}^{j}))-f({\tilde{x}}^{j}))}^{2}\hfill \\ \hfill \le & \frac{\pi (y)\pi (x)}{{min}_{x\in \widehat{D}}\pi (x)\phantom{\rule{0.17em}{0ex}}\delta}{\displaystyle \sum _{j=0}^{|\delta (x,y)|-1}}\pi ({\tilde{x}}^{j})\Gamma (f,f)({\tilde{x}}^{j}).\hfill \end{array}$$

We then have

$$\begin{array}{cc}\hfill \int {f}^{2}{\mathcal{I}}_{\widehat{D}}d\pi \le & \frac{{N}^{2}}{2\,{min}_{x\in \widehat{D}}\pi (x)\,\delta}{\displaystyle \sum _{x\in \widehat{D}}}\pi (x)\Gamma (f,f)(x)\hfill \\ \hfill =& \frac{{N}^{2}}{2\,{min}_{x\in \widehat{D}}\pi (x)\,\delta}\pi (\Gamma (f,f){\mathcal{I}}_{\widehat{D}}).\hfill \end{array}$$

Putting all together leads to
□

$$Va{r}_{\pi}(f)\le \frac{{N}^{2}}{2\,{min}_{x\in \widehat{D}}\pi (x)\,\delta}\int \Gamma (f,f)d\pi .$$

In this paper, we study the probabilistic model introduced by Galves and Löcherbach in [1] to describe neural networks. In particular, we show that despite the degenerate nature of the process, one can still obtain Poincaré-type inequalities for the associated semigroup, similar to those obtained in [13,14,17] for point processes without degenerate jumps.

In terms of practical applications, the concentration inequality we have derived for the invariant measure $\pi $ implies that the process remains very close to its mean, while the Poincaré-type inequalities for the semigroup imply that, despite the degeneracy that characterizes the behavior of the neural system, after a long time the neurons behave close to a system without degeneracy.

In the current paper we studied both the case where the initial configuration belongs to the domain of the invariant measure and the case where it does not. Future directions should focus on the special case where the initial configuration belongs exclusively to the domain of the invariant measure. There, inequalities stronger than the Poincaré-type inequalities obtained here for the semigroup, such as the modified logarithmic Sobolev inequality, appear to be satisfied.

Furthermore, the extension of the results obtained in the current paper for compact neurons to the more general case of unbounded neurons is of particular interest.

The authors contributed equally to this work.

This article was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). This article is supported by FAPESP grants 2016/17655-8 ${}^{1}$ and 2017/15587-8 ${}^{2}$.

The authors thank Eva Löcherbach for careful reading and valuable comments.

The authors declare no conflict of interest.

- Galves, A.; Löcherbach, E. Infinite Systems of Interacting Chains with Memory of Variable Length—A Stochastic Model for Biological Neural Nets. J. Stat. Phys. **2013**, 151, 896–921.
- Chevalier, J. Mean-field limit of generalized Hawkes processes. Stoch. Process. Their Appl. **2017**, 127, 3870–3912.
- Duarte, A.; Löcherbach, E.; Ost, G. Stability, convergence to equilibrium and simulation of non-linear Hawkes processes with memory kernels given by the sum of Erlang kernels. arXiv **2016**, arXiv:1610.03300.
- Duarte, A.; Ost, G. A model for neural activity in the absence of external stimuli. Markov Process. Relat. Fields **2014**, 22, 37–52.
- Hansen, N.; Reynaud-Bouret, P.; Rivoirard, V. Lasso and probabilistic inequalities for multivariate point processes. Bernoulli **2015**, 21, 83–143.
- Hodara, P.; Löcherbach, E. Hawkes processes with variable length memory and an infinite number of components. Adv. Appl. Probab. **2017**, 49, 84–107.
- Hodara, P.; Krell, N.; Löcherbach, E. Non-parametric estimation of the spiking rate in systems of interacting neurons. Stat. Inference Stoch. Process. **2016**, 21, 81–111.
- Davis, M.H.A. Piecewise-deterministic Markov processes: A general class of nondiffusion stochastic models. J. R. Stat. Soc. Ser. B **1984**, 46, 353–388.
- Davis, M.H.A. Markov Models and Optimization. In Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 1993; Volume 49.
- Crudu, A.; Debussche, A.; Muller, A.; Radulescu, O. Convergence of stochastic gene networks to hybrid piecewise deterministic processes. Ann. Appl. Probab. **2012**, 22, 1822–1859.
- Pakdaman, K.; Thieulen, M.; Wainrib, G. Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv. Appl. Probab. **2010**, 42, 761–794.
- Azaïs, R.; Bardet, J.B.; Genadot, A.; Krell, N.; Zitt, P.A. Piecewise deterministic Markov processes (PDMPs): Recent results. Proceedings **2014**, 44, 276–290.
- Ane, C.; Ledoux, M. On logarithmic Sobolev inequalities for continuous time random walks on graphs. Probab. Theory Relat. Fields **2000**, 116, 573–602.
- Chafai, D. Entropies, convexity, and functional inequalities. J. Math. Kyoto Univ. **2004**, 44, 325–363.
- Diaconis, P.; Saloff-Coste, L. Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Probab. **1996**, 6, 695–750.
- Saloff-Coste, L. Lectures on finite Markov chains. In Ecole d'Eté de Probabilités de Saint-Flour XXVI, Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1665, pp. 301–413.
- Wang, F.-Y.; Yuan, C. Poincaré inequality on the path space of Poisson point processes. J. Theor. Probab. **2010**, 23, 824–833.
- Guionnet, A.; Zegarlinski, B. Lectures on logarithmic Sobolev inequalities. In Séminaire de Probabilités XXXVI, Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2003; Volume 1801, pp. 1–134.
- Bakry, D.; Emery, M. Diffusions hypercontractives. In Séminaire de Probabilités XIX, Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1985; Volume 1123, pp. 177–206.
- Bakry, D.; Gentil, I.; Ledoux, M. Analysis and Geometry of Markov Diffusion Operators. In Grundlehren der Mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 2014; Volume 348.
- Diaconis, P.; Saloff-Coste, L. Geometric bounds for eigenvalues of Markov chains. Ann. Probab. **1991**, 1, 36–61.
- Bakry, D. L'hypercontractivité et son utilisation en théorie des semigroupes. Ecole d'Eté de Probabilités de St-Flour. Lecture Notes Math. **1994**, 1581, 1–114.
- Bakry, D. On Sobolev and logarithmic Sobolev inequalities for Markov semigroups. New Trends Stoch. Anal. **1997**, 43–75.
- Bakry, D.; Cattiaux, P.; Guillin, A. Rate of convergence for ergodic continuous Markov processes: Lyapunov versus Poincaré. J. Funct. Anal. **2008**, 254, 727–759.
- Cattiaux, P.; Guillin, A.; Wang, F.-Y.; Wu, L. Lyapunov conditions for super Poincaré inequality. J. Funct. Anal. **2009**, 256, 1821–1841.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).