Article

A Semi-Markov Leaky Integrate-and-Fire Model

by Giacomo Ascione 1,*,† and Bruno Toaldo 2,†
1 Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”, Università degli Studi di Napoli Federico II, I-80126 Naples, Italy
2 Dipartimento di Matematica “Giuseppe Peano”, Università degli Studi di Torino, 10123 Torino, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(11), 1022; https://doi.org/10.3390/math7111022
Submission received: 9 September 2019 / Revised: 22 October 2019 / Accepted: 24 October 2019 / Published: 29 October 2019
(This article belongs to the Special Issue Stochastic Processes in Neuronal Modeling)

Abstract

In this paper, a Leaky Integrate-and-Fire (LIF) model for the membrane potential of a neuron is considered, in the case where the potential process is a semi-Markov process. The semi-Markov property is obtained here by means of a time-change of a Gauss-Markov process. This model has some merits, including a heavy-tailed distribution of the waiting times between spikes. This and other properties of the process, such as the mean, variance and autocovariance, are discussed.

1. Introduction

Leaky Integrate-and-Fire (LIF) models have a long tradition in the field of neuronal modeling. Starting from Lapicque's Integrate-and-Fire (IF) model (see [1]), which was later modified to account for a leakage in the membrane potential of the neuron (see [2,3] for a complete review of IF and LIF models), these models gained considerable popularity due to their mathematical simplicity. In particular, a stochastic LIF model has been introduced (see [4]) to include the action of noise in the model. Under the classical assumptions, the membrane potential of a neuron is described by an Ornstein-Uhlenbeck process or, more generally, by a Gauss-Markov process. When the potential reaches a suitable boundary at a random time T, the neuron emits a signal (which is traceable and is thus said to be a 'spike') and the process is then reset by setting V(T) = V(0). For an overview of stochastic IF models, we refer to [5]. This model has several unrealistic features (see [6]), some of which have been remedied by different authors with different approaches (for instance, by considering a different stochastic model [7], or by introducing correlated inputs [8]). For example, in [9] it is observed that the random time T is a heavy-tailed random variable which may have infinite expectation. This feature is generally ignored in the literature since, under the classical assumptions, the random times have tails whose asymptotic decay is exponential. Furthermore, resetting the process after each spike is also unrealistic, since the model completely loses the effect of past events.
The model we study in this paper is a modification of the one introduced in [10] and addresses the two issues above. In particular, a random time change of the potential process is introduced, and this delays the first passage time through the boundary enough to make it a heavy-tailed random variable. An adaptive threshold approach is proposed to avoid the reset problem. Furthermore, the time-change makes the process semi-Markovian and introduces memory effects, which we describe by investigating the autocovariance function. Some other technical properties of the semi-Markov potential, such as the mean value and the variance, are also investigated. The comparison of our model with the popular Unit 240-1, studied for instance in [9,11], is crucial. Indeed, the problem of describing and reproducing heavy-tailed distributions of interspike intervals has become quite popular: after [9,11], it was studied, for instance, ten years later in [12] and more recently in [13].
Besides giving a model that (at least qualitatively) seems to describe more accurately the behavior of a neuron with a heavy-tailed ISI distribution, we also provide an application to neuroscience of a by-now well-known fractionalization procedure. We consider only a simple linear model, i.e., the Leaky Integrate-and-Fire model, to give an easy example of the application of this fractionalization procedure and of how it produces both a weighted covariance structure that, together with the non-Markov property, gives correlation of the spiking times, and a delay in the firing activity due to the introduction of a sort of stochastic clock. We speak of a fractionalization procedure since this random time-change of Markov processes produces semi-Markov processes governed by fractional equations (see, for example, [14,15,16,17]), which are very popular in applications; we thus establish a connection of our model with fractional equations. This procedure can be adapted to various processes. For instance, one could think of using it to produce a stochastic fractional version of the Hodgkin-Huxley model, based on the stochastic model of [7]. Thus, to summarize the aim of this paper: we intend to show that our time-changing procedure can produce realistic models even when applied to simple ones, and to provide an approach that we aim to generalize to much more complex processes.
The paper is structured as follows:
  • In Section 2 we introduce the semi-Markov Leaky Integrate-and-Fire model and discuss the semi-Markov property of the membrane potential process;
  • In Section 3 we give mean and variance of the membrane potential process;
  • In Section 4 we address the problem of the autocovariance function. Although it was already determined in [18], we use a different approach that leads to two independently interesting results: Theorem 1 gives a formula for the bivariate Laplace transform of an inverse subordinator, while Theorem 2 gives a formula for the autocovariance function of a time-changed stationary Ornstein-Uhlenbeck process as defined in [18]. This last result was obtained in the non-stationary case (for deterministic initial values) in [18]: we provide some changes in the proof given there to determine the autocovariance in the stationary case. We then use these two results to determine the autocovariance function of the membrane potential process. In the same section we show that the autocovariance function is still infinitesimal and decreasing.
  • In Section 5 we focus on the effect of the time-change on the distribution of the first spiking times and of the interspike intervals of this model;
  • In Section 6 we compare only qualitatively (due to a lack of quantitative data) the features of the distribution of the interspike intervals of the model with those of the Unit 240-1;
  • Finally, in Section 7 we give a summary of the results.
Let us also recall that semi-Markov models are widely used in several fields of application. Just to cite some of them, we can mention applications in finance (as for instance in [19]), queueing theory (as for instance in [20,21]), epidemiology (as for instance in [22]) and also social sciences (as for instance in [23]). For any functional space we work with (as, for instance, $L^1_{loc}$ or $L^\infty_{loc}$) we refer to [24].

2. The Semi-Markov Leaky Integrate-and-Fire Model

Let us consider a standard stochastic Leaky Integrate-and-Fire model (see, for instance, [5,25]), i.e., let us describe the membrane potential of a neuron with a stochastic process V(t) which is the strong solution of the following stochastic differential equation (SDE)
$$dV(t) = \left[-\frac{1}{\theta}\left(V(t) - V_L\right) + I(t)\right]dt + \sigma\, dW(t), \qquad V(0) = V_0,$$
where $V_L \in \mathbb{R}$ is the leak potential, $\theta > 0$ is the characteristic time of the neuron (seen as a leaky RC circuit, obtained by a modification of the classical Integrate-and-Fire model [1]), $\sigma > 0$ is the amplitude of the noise, $W(t)$ is a standard Brownian motion (hence $dW(t)$ is a white noise) and $I(t)$ is an $L^1_{loc}$ function that describes the input stimuli (which could be synaptic or injected).
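For readers who wish to experiment with the model, the following is a minimal simulation sketch of Equation (1) (Python with NumPy; the parameter values are purely illustrative and are not taken from the paper). It uses a plain Euler–Maruyama scheme and records the first time the simulated path crosses a fixed threshold.

import numpy as np

def simulate_lif(theta=20.0, V_L=-70.0, V0=-70.0, sigma=1.0,
                 I=lambda t: 0.5, T=500.0, dt=0.01, rng=None):
    """Euler-Maruyama discretization of dV = [-(V - V_L)/theta + I(t)] dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    t = np.linspace(0.0, T, n + 1)
    V = np.empty(n + 1)
    V[0] = V0
    noise = rng.standard_normal(n) * np.sqrt(dt)
    for k in range(n):
        drift = -(V[k] - V_L) / theta + I(t[k])
        V[k + 1] = V[k] + drift * dt + sigma * noise[k]
    return t, V

def first_passage_time(t, V, V_th=-50.0):
    """First grid time at which V exceeds V_th (np.inf if it never does)."""
    idx = np.argmax(V > V_th)
    return t[idx] if V[idx] > V_th else np.inf

t, V = simulate_lif(I=lambda s: 1.2)
print("first spike (approx.):", first_passage_time(t, V))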
We say that a neuron fires if the membrane potential V(t) crosses a certain fixed threshold $V_{th} > V_0$. In such a case, the first passage time
$$T = \inf\{t > 0 : V(t) > V_{th}\}$$
represents the first spiking time of the neuron. Moreover, to model the successive spiking times of the neuron, one can use two different approaches:
  • One can reset the process V(t), in the sense that one sets $V(T^-) = V_{th}$ and $V(T) = V_r$ for some constant $V_r < V_{th}$; then one can consider the process $V_2(t) = V(t+T)$ to be the solution of (1) with $V_2(0) = V_r$ and $I_2(t) := I(t+T)$, and study the first passage time of this new process through $V_{th}$ (see [4,5,26,27,28,29,30,31] and references therein);
  • One can also consider a suitably modified (time-dependent) threshold such that the n-th passage time $T_n$ through this threshold represents the n-th spiking time of the neuron (this approach is called the adaptive threshold approach, see [32,33,34] and references therein).
For the adaptive threshold approach, it is useful to observe that the sequence $(T_n)_{n\in\mathbb{N}}$ is almost surely increasing. Moreover, the interspike intervals (ISIs) $A_n = T_n - T_{n-1}$ for $n\in\mathbb{N}$ (where $T_0 = 0$) are independent and, if $I(t)\equiv I_0$ is constant, they are also identically distributed.
It has been shown in [27] that, under suitable assumptions on I(t), the survival functions $t\mapsto\mathbb{P}(A_n>t)$ are asymptotically exponential. However, in some particular settings this behavior is in contradiction with the experimental data. Indeed, in [9] it has been shown that the $A_n$ should be similar to one-sided stable random variables and, in particular, that their survival functions should decay like power laws.
In order to introduce a sort of delay in this behavior, we now consider a stochastic time scale for the process V(t). Let us recall that a subordinator S(t) is an increasing Lévy process (see [35,36]), and thus we can define its right-continuous inverse as
$$E(t) = \inf\{y > 0 : S(y) > t\}.$$
In particular, let us consider driftless subordinators, i.e., non-decreasing Lévy processes S(t) whose Laplace exponent $\Phi(\lambda)$ can be expressed as
$$\Phi(\lambda) = \int_0^{+\infty}\left(1 - e^{-\lambda x}\right)\nu(dx),$$
where ν is the Lévy measure of S(t). Moreover, we will assume that $\nu(0,+\infty) = +\infty$, so that the process S(t) is strictly increasing.
Now we can define the time-changed LIF model. Consider the process V(t) that is the strong solution of (1), and let E(t) be the inverse of an independent subordinator. Let us then define $V^\Phi(t) := V(E(t))$ as our new membrane potential process (where Φ is the Laplace exponent of the subordinator S(t)). Despite losing an easy physical interpretation of the constant θ, we will see in the following that such a process (for a suitable choice of Φ and I(t)) recovers some properties (as found in [9]) of the ISI distribution. We say that this model is semi-Markovian in the sense that the process $V^\Phi(t)$ is semi-Markovian, and so it enjoys the Markov property at any random time T such that $T(\omega)\in\{s : S(y,\omega) = s \text{ for some } y\ge 0\}$, as rigorously discussed in [37] Section 4b. The reader may also consult, for example, [38] (in particular Example (2.13) and Section 5) for a more general class of semi-Markov processes including $V^\Phi$.
Heuristically, the reason why the Markov property is lost after the time-change can be summarized as follows. The process E(t) has intervals of constancy whose random lengths are the lengths of the jumps of S, i.e.,
$$E(t) = y, \qquad S(y^-)\le t < S(y).$$
Of course, these intervals of constancy are not (in general) exponentially distributed, and thus E(t) is not Markovian. The process V(E(t)) has the same intervals of constancy, and thus it is not a Markov process either. It is useful to note that a semi-Markov process can be embedded in a Markov process by adding a coordinate containing the information which is lost together with the exponential distribution: hence, if we define
$$\gamma(t) := t - \max\left\{\sup\{s\le t : V^\Phi(t)\ne V^\Phi(s)\},\,0\right\} = t - S(E(t)^-),$$
where we use the convention $\sup\emptyset = -\infty$, which is the sojourn time in the current position of $V^\Phi$, we find that $\left(V^\Phi(t),\gamma(t)\right)$ is a Markov process ([39] Section 4; we refer the reader to [40], Chapter 3, and references therein for an overview of various equivalent definitions of semi-Markov processes).
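To make the time-change concrete, the following is a minimal simulation sketch of $V^\Phi(t) = V(E(t))$ in the stable case $\Phi(z) = z^\alpha$ (Python/NumPy, illustrative parameters). The subordinator increments are generated with the standard Chambers–Mallows–Stuck/Kanter representation of a one-sided α-stable random variable, and E(t) is obtained as the first level crossing of t by the simulated path of S. The routine simulate_lif is the hypothetical helper sketched after Equation (1).

import numpy as np

def stable_subordinator_increment(alpha, dy, rng):
    """One increment of an alpha-stable subordinator over a step dy
    (Kanter/Chambers-Mallows-Stuck representation, Laplace exponent z**alpha)."""
    U = rng.uniform(0.0, np.pi)
    W = rng.exponential(1.0)
    s1 = np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
    s2 = (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha)
    return dy ** (1.0 / alpha) * s1 * s2   # self-similarity: S(dy) ~ dy**(1/alpha) S(1)

def inverse_subordinator(alpha, t_grid, dy=1e-3, rng=None):
    """E(t) = inf{y : S(y) > t} evaluated on an increasing grid t_grid."""
    rng = np.random.default_rng() if rng is None else rng
    E = np.empty_like(t_grid)
    S, y = 0.0, 0.0
    for i, t in enumerate(t_grid):
        while S <= t:                       # advance S until it jumps above level t
            S += stable_subordinator_increment(alpha, dy, rng)
            y += dy
        E[i] = y
    return E

# Compose V_Phi(t) = V(E(t)): simulate V on its own (operational) time scale,
# then evaluate it at the random times E(t).
rng = np.random.default_rng(0)
t_op, V = simulate_lif(rng=rng)
t_grid = np.linspace(0.0, 50.0, 501)
E = inverse_subordinator(alpha=0.7, t_grid=t_grid, rng=rng)
V_Phi = np.interp(E, t_op, V)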
In the next sections, we will focus on some characteristics of the process V Φ ( t ) .

3. Mean and Variance Functions of V Φ ( t )

3.1. Preliminaries on V ( t )

Let us give first some preliminaries on the strong solution V ( t ) of (1). Let us recall that, solving the equation, we obtain
$$V(t) = \left(1 - e^{-\frac{t}{\theta}}\right)V_L + e^{-\frac{t}{\theta}}V_0 + e^{-\frac{t}{\theta}}\int_0^t I(s)\,e^{\frac{s}{\theta}}\,ds + \sigma\, e^{-\frac{t}{\theta}}\int_0^t e^{\frac{s}{\theta}}\,dW(s).$$
Let us define the following quantities:
  • The process
    $$U(t) = \left(1 - e^{-\frac{t}{\theta}}\right)V_L + e^{-\frac{t}{\theta}}V_0 + \sigma\, e^{-\frac{t}{\theta}}\int_0^t e^{\frac{s}{\theta}}\,dW(s),$$
    which is the solution of (1) when we set $I(t)\equiv 0$; hence it is a non-stationary Ornstein-Uhlenbeck process with equilibrium $V_L$ and degenerate initial value $V_0$;
  • The function
    $$J(t) = e^{-\frac{t}{\theta}}\int_0^t I(s)\,e^{\frac{s}{\theta}}\,ds$$
    for a general I ( t ) , which is the integrated stimuli in the process V ( t ) .
Thus, we have
V ( t ) = U ( t ) + J ( t ) .
Now, let us observe that (see, for instance [27])
$$\mathbb{E}[U(t)] = \left(1 - e^{-\frac{t}{\theta}}\right)V_L + e^{-\frac{t}{\theta}}V_0$$
and
$$\operatorname{Cov}(U(t), U(s)) = \frac{\sigma^2\theta}{2}\left(e^{-\frac{|t-s|}{\theta}} - e^{-\frac{t+s}{\theta}}\right).$$
Thus, since $J(t)$ is a deterministic function, we have
$$\mathbb{E}[V(t)] = \left(1 - e^{-\frac{t}{\theta}}\right)V_L + e^{-\frac{t}{\theta}}V_0 + J(t)$$
and
$$\operatorname{Cov}(V(t), V(s)) = \frac{\sigma^2\theta}{2}\left(e^{-\frac{|t-s|}{\theta}} - e^{-\frac{t+s}{\theta}}\right).$$
Concerning the variance, we have in particular
$$\mathbb{D}[V(t)] = \frac{\sigma^2\theta}{2}\left(1 - e^{-\frac{2t}{\theta}}\right).$$
From now on, for simplicity, let us set $\lambda = \frac{1}{\theta}$.

3.2. Mean of V Φ ( t )

Let us now consider $V^\Phi(t)$. We first want to evaluate $\mathbb{E}[V^\Phi(t)]$. To do this, let us define $U^\Phi(t) = U(E(t))$ and $J^\Phi(t) = J(E(t))$, so that $V^\Phi(t) = U^\Phi(t) + J^\Phi(t)$.
Let us now introduce some notation. Let us denote by f(t, y) the probability density function of E(t) and by g(t, y) the probability density function of S(t). Moreover, let us denote by
$$\eta(t,z) = \mathbb{E}\left[e^{-z E(t)}\right]$$
the Laplace transform of f(t, y) with respect to y. Now we are ready to state the following proposition, which was actually obtained in [18], but whose proof we recall for the sake of completeness.
Proposition 1.
We have
$$\mathbb{E}\left[U^\Phi(t)\right] = V_L\left(1 - \eta(t,\lambda)\right) + V_0\,\eta(t,\lambda).$$
Proof. 
Let us observe that by conditioning
$$\mathbb{E}\left[U^\Phi(t)\right] = \int_0^{+\infty}\mathbb{E}[U(s)]\,f(t,s)\,ds = V_L\left(\int_0^{+\infty}f(t,s)\,ds - \int_0^{+\infty}e^{-\lambda s}f(t,s)\,ds\right) + V_0\int_0^{+\infty}e^{-\lambda s}f(t,s)\,ds = V_L\left(1 - \eta(t,\lambda)\right) + V_0\,\eta(t,\lambda).$$
 □
Consequently, we obtain the mean of V Φ ( t ) .
Corollary 1.
We have
$$\mathbb{E}\left[V^\Phi(t)\right] = V_L\left(1 - \eta(t,\lambda)\right) + V_0\,\eta(t,\lambda) + \int_0^{+\infty}J(s)\,f(t,s)\,ds.$$
A particular case in which the mean can be written in closed form is $I(t)\equiv I_0$. Indeed, in such a case we have
$$J(t) = \frac{I_0}{\lambda}\left(1 - e^{-\lambda t}\right)$$
and then
$$\mathbb{E}\left[V^\Phi(t)\right] = V_L\left(1 - \eta(t,\lambda)\right) + V_0\,\eta(t,\lambda) + \frac{I_0}{\lambda}\left(1 - \eta(t,\lambda)\right).$$
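As a numerical illustration, in the stable case $\Phi(z) = z^\alpha$ one has the well-known identity $\eta(t,\lambda) = E_\alpha(-\lambda t^\alpha)$, where $E_\alpha$ is the one-parameter Mittag-Leffler function, so the mean above can be evaluated directly. The following sketch (Python, illustrative parameters not taken from the paper) uses a truncated power series for $E_\alpha$, which is adequate only for moderate values of its argument.

import math

def mittag_leffler(z, alpha, n_terms=100):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1); fine for moderate |z|."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(n_terms))

def mean_V_alpha(t, alpha=0.7, lam=1.0 / 20.0, V_L=-70.0, V0=-70.0, I0=0.5):
    """E[V^alpha(t)] = V_L(1 - eta) + V0*eta + (I0/lam)(1 - eta), with eta = E_alpha(-lam t^alpha)."""
    eta = mittag_leffler(-lam * t ** alpha, alpha)
    return V_L * (1.0 - eta) + V0 * eta + (I0 / lam) * (1.0 - eta)

for t in (1.0, 10.0, 50.0):
    print(t, mean_V_alpha(t))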
Let us observe that if $\Phi(z) = z^\alpha$ for $\alpha\in(0,1)$ then, denoting by $V^\alpha(t)$ the process $V^\Phi(t)$ corresponding to this choice of Φ, we have that $\mathbb{E}[V^\alpha(t)]$ is the solution of a fractional Cauchy problem, i.e., a Cauchy problem with a fractional time derivative. We recall that the fractional derivative of order α (see [41]) is, for any suitable function f,
$$\frac{d^\alpha f}{dt^\alpha}(t) = \frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int_0^t(t-\tau)^{-\alpha}\left(f(\tau) - f(0)\right)d\tau.$$
In the next result we make this assertion rigorous. Observe also that this result is linked to the definition of fractional Pearson diffusions, as given in [42], and can actually be derived starting from the equations studied in that paper. Here we follow a different approach, which relies on the linearity of the equation.
Proposition 2.
If $\Phi(\lambda) = \lambda^\alpha$ for $\alpha\in(0,1)$ and $I\in L^\infty_{loc}(0,+\infty)$ then, denoting by $V^\alpha(t)$ the process $V^\Phi(t)$ corresponding to this choice of Φ, we have that $\mathbb{E}[V^\alpha(t)]$ is a solution of
$$\frac{d^\alpha}{dt^\alpha}\mathbb{E}[V^\alpha(t)] = -\frac{1}{\theta}\left(\mathbb{E}[V^\alpha(t)] - V_L\right) + \mathbb{E}[I(E(t))] \quad \text{for almost every } t>0, \qquad \mathbb{E}[V^\alpha(0)] = V_0,$$
that is to say:
  • the function $t\mapsto -\frac{1}{\theta}\left(\mathbb{E}[V^\alpha(t)] - V_L\right) + \mathbb{E}[I(E(t))]$ is in $L^1_{loc}(0,+\infty)$;
  • for $t\in(0,+\infty)$ it holds that
$$\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha}\left(\mathbb{E}[V^\alpha(\tau)] - V^\alpha(0)\right)d\tau = -\frac{1}{\theta}\int_0^t\mathbb{E}[V^\alpha(s)]\,ds + \frac{V_L}{\theta}t + \int_0^t\mathbb{E}[I(E(s))]\,ds.$$
Moreover, if I(t) is continuous and bounded, then $\mathbb{E}[V^\alpha(t)]$ is a solution of (12) for every $t>0$.
Proof. 
First of all, let us show that $\mathbb{E}[I(E(t))]$ and $\mathbb{E}[V^\alpha(t)]$ are in $L^1_{loc}(0,+\infty)$. To do this, let us first observe that
$$\mathbb{E}[I(E(t))] = \int_0^{+\infty}I(s)\,f(t,s)\,ds;$$
thus, we have
$$\int_0^t\left|\mathbb{E}[I(E(\tau))]\right|d\tau \le \int_0^t\int_0^{+\infty}|I(s)|\,f(\tau,s)\,ds\,d\tau \le C_t\,t,$$
since $\int_0^{+\infty}f(\tau,s)\,ds = 1$ for any $\tau>0$, where $C_t = \|I\|_{L^\infty(0,t)}$.
Moreover, we have that for any $t\in[0,T]$
$$|J(t)| \le e^{-\frac{t}{\theta}}\int_0^t|I(s)|\,e^{\frac{s}{\theta}}\,ds \le C_t\,\theta\left(1 - e^{-\frac{t}{\theta}}\right) \le C_T\,\theta;$$
thus, $\mathbb{E}[V(t)]$ is also in $L^\infty_{loc}(0,+\infty)$. Observing that
$$\mathbb{E}[V^\alpha(t)] = \int_0^{+\infty}\mathbb{E}[V(s)]\,f(t,s)\,ds,$$
we have that $\mathbb{E}[V^\alpha(t)]\in L^1_{loc}(0,+\infty)$. Now, starting from (13) and taking the Laplace transform (denoting by $\widetilde{V}_\alpha(z)$ the Laplace transform of $\mathbb{E}[V^\alpha(t)]$), we obtain
$$\widetilde{V}_\alpha(z) = z^{\alpha-1}\int_0^{+\infty}\mathbb{E}[V(s)]\,e^{-s z^\alpha}\,ds,$$
where $z^{\alpha-1}e^{-s z^\alpha}$ is the Laplace transform of $f(\cdot,s)$ with respect to t (see [43]). It is well known that $\mathbb{E}[V(t)]$ is an absolutely continuous function solving the following Cauchy problem:
$$\frac{d}{dt}\mathbb{E}[V(t)] = -\frac{1}{\theta}\left(\mathbb{E}[V(t)] - V_L\right) + I(t) \quad \text{for almost every } t>0, \qquad \mathbb{E}[V(0)] = V_0;$$
hence we can integrate by parts to obtain
$$\widetilde{V}_\alpha(z) = z^{-1}V_0 + z^{-1}\int_0^{+\infty}\frac{d\,\mathbb{E}[V(s)]}{ds}\,e^{-s z^\alpha}\,ds,$$
that is to say
$$\widetilde{V}_\alpha(z) = z^{-1}V_0 - \frac{z^{-1}}{\theta}\int_0^{+\infty}\mathbb{E}[V(s)]\,e^{-s z^\alpha}\,ds + z^{-1}\int_0^{+\infty}\left(\frac{1}{\theta}V_L + I(s)\right)e^{-s z^\alpha}\,ds,$$
and then, multiplying everything by $z^{\alpha-1}$,
$$z^{\alpha-1}\widetilde{V}_\alpha(z) = z^{-1}\left(z^{\alpha-1}V_0\right) - \frac{z^{-1}}{\theta}\int_0^{+\infty}\mathbb{E}[V(s)]\,z^{\alpha-1}e^{-s z^\alpha}\,ds + z^{-1}\int_0^{+\infty}\left(\frac{1}{\theta}V_L + I(s)\right)z^{\alpha-1}e^{-s z^\alpha}\,ds.$$
Taking the inverse Laplace transform (and recalling the Laplace transform of the power function), we have
$$\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha}\,\mathbb{E}[V^\alpha(\tau)]\,d\tau = \frac{V_0}{\Gamma(1-\alpha)}\int_0^t\tau^{-\alpha}\,d\tau - \frac{1}{\theta}\int_0^t\int_0^{+\infty}\mathbb{E}[V(s)]\,f(\tau,s)\,ds\,d\tau + \frac{V_L}{\theta}\int_0^t\int_0^{+\infty}f(\tau,s)\,ds\,d\tau + \int_0^t\int_0^{+\infty}I(s)\,f(\tau,s)\,ds\,d\tau,$$
that is to say, using that $\int_0^{+\infty}f(\tau,s)\,ds = 1$,
$$\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha}\,\mathbb{E}[V^\alpha(\tau)]\,d\tau - \frac{V_0}{\Gamma(1-\alpha)}\int_0^t\tau^{-\alpha}\,d\tau = -\frac{1}{\theta}\int_0^t\mathbb{E}[V^\alpha(\tau)]\,d\tau + \frac{V_L}{\theta}t + \int_0^t\mathbb{E}[I(E(\tau))]\,d\tau.$$
Rearranging (14), we get
$$\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha}\left(\mathbb{E}[V^\alpha(\tau)] - V^\alpha(0)\right)d\tau = -\frac{1}{\theta}\int_0^t\mathbb{E}[V^\alpha(\tau)]\,d\tau + \frac{V_L}{\theta}t + \int_0^t\mathbb{E}[I(E(\tau))]\,d\tau,$$
and note that the equality is true for any $t>0$ by uniqueness of the Laplace transform, since both sides of (14) are continuous functions of $t>0$. Thus, we have shown that $\mathbb{E}[V^\alpha(t)]$ is a solution of Equation (12) for almost every $t>0$.
If I is continuous and bounded, then $J(t)$ (and hence $\mathbb{E}[V(t)]$) is also continuous and bounded. Moreover, recalling that (see [43])
$$f(t,s) = \frac{t}{\alpha}\,s^{-1-\frac{1}{\alpha}}\,g_\alpha\!\left(t\,s^{-\frac{1}{\alpha}}\right),$$
where $g_\alpha$ is the density of S(1), we have that
$$\mathbb{E}[I(E(t))] = \int_0^{+\infty}I(s)\,f(t,s)\,ds = \int_0^{+\infty}I(s)\,\frac{t}{\alpha}\,s^{-1-\frac{1}{\alpha}}\,g_\alpha\!\left(t\,s^{-\frac{1}{\alpha}}\right)ds = \int_0^{+\infty}I\!\left(\left(\frac{t}{w}\right)^\alpha\right)g_\alpha(w)\,dw,$$
where $w = t\,s^{-\frac{1}{\alpha}}$. From this it is easy to show that $t\mapsto\mathbb{E}[I(E(t))]$ is continuous, and the same holds for $\mathbb{E}[V^\alpha(t)]$. Thus, the right-hand side of Equation (15) is $C^1$ and can be differentiated in $(0,+\infty)$. □
Remark 1.
The definition of solution is given in the same spirit as the definition of a Carathéodory solution for an ordinary differential equation (see [44]). In particular, one has that $\frac{d^\alpha}{dt^\alpha}\mathbb{E}[V^\alpha(t)]$ exists for almost every $t>0$ and the first equation of (12) actually holds for such t.
Let us also recall that Equation (12) is actually the equation of a fractional-order LIF model, as introduced in [45].

3.3. The Variance of V Φ ( t )

In the same way as we did before, we can obtain the variance of the process V Φ ( t ) .
Proposition 3.
We have
$$\mathbb{D}\left[V^\Phi(t)\right] = \frac{\sigma^2}{2\lambda}\left(1 - \eta(t,2\lambda)\right).$$
Proof. 
By conditioning we have
$$\mathbb{D}\left[V^\Phi(t)\right] = \int_0^{+\infty}\mathbb{D}[V(s)]\,f(t,s)\,ds = \frac{\sigma^2}{2\lambda}\left(\int_0^{+\infty}f(t,s)\,ds - \int_0^{+\infty}e^{-2\lambda s}f(t,s)\,ds\right) = \frac{\sigma^2}{2\lambda}\left(1 - \eta(t,2\lambda)\right).$$
 □
From this equality we can deduce in particular that for any t > 0 we have V Φ ( t ) L 2 ( P ) . This observation will be useful for the evaluation of the autocovariance function.
Remark 2.
Since, by dominated convergence, $\eta(t,z)\to 0$ as $t\to+\infty$ for any $z>0$, we have that
$$\lim_{t\to+\infty}\mathbb{D}\left[V^\Phi(t)\right] = \frac{\sigma^2}{2\lambda} > 0.$$

4. The Autocovariance Function of V Φ ( t )

In this section we want to describe the autocovariance function of the membrane potential process $V^\Phi$. Here we follow a different approach from the one in [18]. Namely, in Corollary 2 we determine the autocovariance of $V^\Phi$ by splitting the integral into two pieces: one is given by the bivariate Laplace transform of the inverse subordinator, while the other is the autocovariance of a stationary Ornstein-Uhlenbeck process. Therefore, in the next subsection we determine a formula for the first piece, in Section 4.2 we do the same for the second piece, and finally in Section 4.3 we glue these results together.

4.1. The Bivariate Laplace Transform of E ( t )

As we said before, we know that $V^\Phi(t)\in L^2(\mathbb{P})$ for any $t>0$. Hence, for any pair $(t,s)\in(0,+\infty)^2$, by a simple application of the Cauchy-Schwarz inequality,
$$\operatorname{Cov}\left(V^\Phi(t),V^\Phi(s)\right) < +\infty.$$
The covariance of $V^\Phi(t)$ has already been determined in [18]. However, we follow here a different approach, which gives a slightly more explicit result. Therefore, let us first define, for any $(t,s)\in(0,+\infty)^2$, the measure
$$H^{(2)}(t,s,A) = \mathbb{P}\left(\left(E(t),E(s)\right)\in A\right) \quad \text{for any measurable } A\subseteq\mathbb{R}^2.$$
We first want to determine the bivariate Laplace-Stieltjes transform of $H^{(2)}(t,s,\cdot)$. The following theorem provides a formula for this bivariate Laplace-Stieltjes transform.
Theorem 1.
Let us suppose that for any $z>0$ we have
$$\frac{\partial}{\partial t}\int_0^{+\infty}\mathbb{P}(E(t)\ge u)\,e^{-zu}\,du = \int_0^{+\infty}\frac{\partial}{\partial t}\mathbb{P}(E(t)\ge u)\,e^{-zu}\,du.$$
Then, for any $z>0$ and $t\ge s>0$, we have
$$\int_0^{+\infty}\int_0^{+\infty}e^{-z(u+v)}\,H^{(2)}(t,s,du\,dv) = \eta(t,z) + \frac{1}{2}\int_0^s\eta(t-y,z)\,\frac{\partial}{\partial y}\eta(y,2z)\,dy.$$
The proof is given in Appendix A.

4.2. The Autocovariance Function of a Time-Changed Stationary Ornstein-Uhlenbeck Process

We also need to determine the covariance of a time-changed stationary Ornstein-Uhlenbeck process. In particular, let us consider a stationary Ornstein-Uhlenbeck process $U_S(t)$ and its time-changed version $U_S^\Phi(t) := U_S(E(t))$. For the inverse stable subordinator, the covariance has already been determined in [46], while for the non-stationary case it has been obtained in [18]. Here we consider the general stationary case by using the same approach.
Theorem 2.
Suppose that for any $z\ge 0$
$$\int_0^{+\infty}\frac{\partial}{\partial t}\mathbb{P}(E(t)\ge v)\,e^{-zv}\,dv = \frac{\partial}{\partial t}\int_0^{+\infty}\mathbb{P}(E(t)\ge v)\,e^{-zv}\,dv.$$
Then, for any $z>0$ and $t\ge s>0$ we have
$$\int_0^{+\infty}\int_0^{+\infty}e^{-z|u-v|}\,H^{(2)}(t,s,du\,dv) = z\int_0^s\eta(t-y,z)\,\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy + \eta(t,z).$$
The proof is given in Appendix B.
Remark 3.
Since for a stationary Ornstein-Uhlenbeck process $U_S(t)$ of parameter $\lambda>0$ the covariance is given by $\operatorname{Cov}(U_S(t),U_S(s)) = e^{-\lambda|t-s|}$, Formula (18) also gives the value of $\operatorname{Cov}(U_S^\Phi(t),U_S^\Phi(s))$ for $z=\lambda$.

4.3. The Autocovariance Function of V Φ ( t )

Now we are ready to obtain the autocovariance function of V Φ ( t ) .
Corollary 2.
Suppose that for any $z\ge 0$
$$\int_0^{+\infty}\frac{\partial}{\partial t}\mathbb{P}(E(t)\ge v)\,e^{-zv}\,dv = \frac{\partial}{\partial t}\int_0^{+\infty}\mathbb{P}(E(t)\ge v)\,e^{-zv}\,dv.$$
Then, for any $0<s<t$,
$$\operatorname{Cov}\left(V^\Phi(t),V^\Phi(s)\right) = \frac{\sigma^2}{2\lambda}\left(\lambda\int_0^s\eta(t-y,\lambda)\,\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy - \frac{1}{2}\int_0^s\eta(t-y,\lambda)\,\frac{\partial}{\partial y}\eta(y,2\lambda)\,dy\right).$$
Proof. 
For $t\ge s>0$, using (7), we have
$$\operatorname{Cov}\left(V^\Phi(t),V^\Phi(s)\right) = \frac{\sigma^2}{2\lambda}\int_0^{+\infty}\int_0^{+\infty}\left(e^{-\frac{|u-v|}{\theta}} - e^{-\frac{u+v}{\theta}}\right)H^{(2)}(t,s,du\,dv).$$
Equations (17) and (18) tell us that the involved integrals are finite; hence we can split the integral into two parts and then use the aforementioned equations. □
Concerning the autocovariance function of V Φ , it is interesting to observe that two important properties are preserved.
Proposition 4.
Let us fix $t>0$ and define the function
$$c_t^\Phi(s) = \operatorname{Cov}\left(V^\Phi(t+s),\,V^\Phi(t)\right).$$
Then:
  (a) $c_t^\Phi(s)$ is decreasing;
  (b) $\lim_{s\to+\infty}c_t^\Phi(s) = 0$.
Proof. 
Let us show (a). To do this, let us consider $0\le s_1<s_2$ and the measure
$$H^{(3)}(t+s_2, t+s_1, t, B) = \mathbb{P}\left(\left(E(t+s_2), E(t+s_1), E(t)\right)\in B\right)$$
for any Borel set $B\subseteq\mathbb{R}^3$. In particular, such a measure is concentrated on the set
$$A = \{(u,v,w)\in\mathbb{R}^3 : u\in(0,+\infty),\ v\in(0,u),\ w\in(0,v)\}.$$
Moreover, let us define $c(t,s) = \operatorname{Cov}(V(t),V(s))$. Now let us observe that
$$c_t^\Phi(s_1) - c_t^\Phi(s_2) = \operatorname{Cov}\left(V^\Phi(t+s_1),V^\Phi(t)\right) - \operatorname{Cov}\left(V^\Phi(t+s_2),V^\Phi(t)\right) = \int_0^{+\infty}\int_0^u\int_0^v\left(\operatorname{Cov}(V(v),V(w)) - \operatorname{Cov}(V(u),V(w))\right)H^{(3)}(t+s_2,t+s_1,t,du\,dv\,dw) = \int_0^{+\infty}\int_0^u\int_0^v\left(c(v,w) - c(u,w)\right)H^{(3)}(t+s_2,t+s_1,t,du\,dv\,dw) \ge 0,$$
since $w<v<u$ in A implies $c(v,w)-c(u,w)\ge 0$ (we already know that the function $t\mapsto c(t,s)$ is decreasing when $t\ge s$). Now let us show (b). To do this, let us observe that E(t) is almost surely increasing and (pathwise) $\lim_{t\to+\infty}E(t) = +\infty$ almost surely. Thus, let us write
$$c_t^\Phi(s) = \operatorname{Cov}\left(V^\Phi(t+s),V^\Phi(t)\right) = \operatorname{Cov}\left(V(E(t+s)),V(E(t))\right) = \mathbb{E}\left[c(E(t+s),E(t))\right] = \int_\Omega c\left(E(t+s,\omega),E(t,\omega)\right)d\mathbb{P}(\omega).$$
Now observe that $c(t,s)$ is a continuous function and $\lim_{t\to+\infty}c(t,s) = 0$. Moreover, for fixed s, the function $t\mapsto c(t,s)$ is bounded; hence, in particular, $|c(E(t+s),E(t))|\le C(t)$ almost surely and $\lim_{s\to+\infty}c(E(t+s),E(t)) = 0$ almost surely. Thus, we can use the dominated convergence theorem to obtain
$$\lim_{s\to+\infty}c_t^\Phi(s) = \lim_{s\to+\infty}\int_\Omega c\left(E(t+s,\omega),E(t,\omega)\right)d\mathbb{P}(\omega) = \int_\Omega\lim_{s\to+\infty}c\left(E(t+s,\omega),E(t,\omega)\right)d\mathbb{P}(\omega) = 0,$$
concluding the proof. □
This last result tells us that we do not lose the main features of the covariance function, i.e., it is still infinitesimal and decreasing with respect to the time gap. However, the asymptotic behavior now depends on the choice of Φ, and in particular on the behavior of η. It is known, for instance (see [46]), that when $\Phi(\lambda) = \lambda^\alpha$ the covariance behaves asymptotically like a power function with exponent less than 1; hence the process is long-range dependent. Thus, with a suitable choice of Φ we can alter the asymptotic behavior of the covariance so as to reproduce different memory effects at the level of the autocovariance of the process. Let us remark that changing the asymptotic behavior of the covariance has already been used to describe long-memory effects of the membrane potential process (see, for instance, [8]).
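A simple way to visualize these memory effects is to estimate $c_t^\Phi(s)$ by Monte Carlo in the stable case and observe its slow decay in s. The sketch below (Python/NumPy, illustrative parameters, deliberately small sample size and therefore slow and noisy) reuses the hypothetical helpers simulate_lif and inverse_subordinator sketched in the previous sections and estimates the covariance between $V^\Phi(t)$ and $V^\Phi(t+s)$ over independent replications.

import numpy as np

def covariance_estimate(t=10.0, lags=(1.0, 5.0, 20.0, 50.0), alpha=0.7,
                        n_paths=500, seed=1):
    """Monte Carlo estimate of Cov(V^Phi(t+s), V^Phi(t)) for several lags s."""
    rng = np.random.default_rng(seed)
    times = np.array([t] + [t + s for s in lags])
    samples = np.empty((n_paths, times.size))
    for p in range(n_paths):
        t_op, V = simulate_lif(T=200.0, rng=rng)         # operational-time LIF path
        E = inverse_subordinator(alpha, times, rng=rng)   # inverse subordinator at observation times
        samples[p] = np.interp(E, t_op, V)                # V^Phi at the observation times
    base = samples[:, 0]
    return {s: np.cov(base, samples[:, 1 + i])[0, 1] for i, s in enumerate(lags)}

print(covariance_estimate())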

5. First Spiking Times and Interspike Intervals

5.1. Spiking Times in Case of Excitatory Stimuli

From now on, let us consider I(t) to be an $L^1$ excitatory stimulus, i.e., $I(t)\ge 0$. We are now interested in the first passage time of $V^\Phi(t)$ through the threshold $V_{th} > V_0$. In this case, it is easy to see that the process V(t) is a Gauss-Markov process satisfying the hypotheses of [10] Corollary 3.4.4; hence, denoting $T_1 = \inf\{t>0 : V(t) > V_{th}\}$, we have $\mathbb{E}[T_1] < +\infty$. Concerning the behavior as $t\to 0^+$ of $t\mapsto\mathbb{P}(T_1\le t)$, recalling (see [10] Corollary 3.2.4) that the ratio function of the Gauss-Markov process V(t) is given by
$$r_V(t) = \frac{1}{2\lambda}\left(e^{2\lambda t} - 1\right),$$
we have that $r_V$ is a strictly increasing $C^2(0,+\infty)$ function; by [10] Remark 3.5.5 we know that we are under the hypotheses of [10] Proposition 3.5.5, thus
$$\lim_{t\to 0^+}\frac{\mathbb{P}(T_1\le t)}{t^\gamma} = 0$$
for any $\gamma\in\mathbb{R}$. One can also show that $T_1$ is an absolutely continuous random variable whose density is infinitely differentiable with $L^1$ derivatives (since they are the unique weak solutions of initial-value parabolic problems, see [47]).
Now let us consider the process $V^\Phi(t)$ and define $T_1^\Phi = \inf\{t>0 : V^\Phi(t) > V_{th}\}$, which represents the first spiking time of the neuron. Concerning the asymptotic behavior of the survival function and of the cumulative distribution function, we have the following proposition as an application of the results in [10].
Proposition 5.
With the notation above, we have
  (i) If Φ is regularly varying at $0^+$ with index $\alpha\in[0,1)$, then, as $t\to+\infty$,
$$\mathbb{P}\left(T_1^\Phi > t\right) \sim \frac{\mathbb{E}[T_1]}{\Gamma(1-\alpha)}\,\Phi\!\left(\frac{1}{t}\right);$$
  (ii) If $t\mapsto\nu(t,+\infty)$ is an absolutely continuous function, then $T_1^\Phi$ is an absolutely continuous random variable;
  (iii) If $t\mapsto\nu(t,+\infty)$ is an absolutely continuous function and there exist $r_0, C>0$ and $\gamma\in(0,2)$ such that
$$\int_0^r s^2\,\nu(ds) < C r^\gamma, \qquad r\in(0,r_0),$$
then the density $p_{T_1^\Phi}$ of $T_1^\Phi$ is infinitely differentiable and all its derivatives are bounded;
  (iv) Under the hypotheses of (iii), if Φ is regularly varying at $+\infty$ with index $\alpha\in[0,1)$, then
$$\lim_{t\to 0^+}\frac{\mathbb{P}\left(T_1^\Phi\le t\right)}{t^\gamma} = 0$$
for any $\gamma\in\mathbb{R}$.
Proof. 
(i) We have already shown that $\mathbb{E}[T_1] < +\infty$, so this property follows from [10] Corollary 2.2.3;
(ii), (iii) These are just [10] Propositions 2.3.1 and 2.3.2;
(iv) We have already observed that $T_1$ is an absolutely continuous random variable whose density is infinitely differentiable with $L^1$ derivatives (hence Laplace transformable); thus, from (20) and [10] Theorem 2.5.4, we obtain the desired property. □
With this proposition we have summarized some of the properties we can obtain concerning the regularity of $T_1^\Phi$ and the asymptotic behavior of its survival and cumulative distribution functions. However, we can extend these results to successive spiking times by using an adaptive threshold method. To do this, instead of resetting the process, let us consider some other barriers
$$V_{th}^{(n)} = V_{th} + (n-1)\left(V_{th} - V_r\right),$$
where $V_r\in\mathbb{R}$ is a constant representing the reset potential, i.e., the membrane potential after the depolarization. However, since we want the first passage times of our process through such thresholds to be the spiking times of the neuron, we need to spatially translate the whole process by $(n-1)(V_{th}-V_r)$ after a spike. To do so, we need to modify Equation (1) as
$$d\widetilde{V}(t) = \left[-\lambda\left(\widetilde{V}(t) - V_L(t)\right) + I(t)\right]dt + \sigma\,dW(t), \qquad \widetilde{V}(0) = V_0,$$
where this time $V_L(t)$ is a suitable stochastic process. In particular, let us set $V_L(t)\equiv V_L$ for $t\in[0,T_1)$. Now let us suppose we have defined $V_L(t)$ up to $T_n = \inf\{t>0 : \widetilde{V}(t)\ge V_{th}^{(n)}\}$. Then we set $T_{n+1} = \inf\{t>0 : \widetilde{V}(t)\ge V_{th}^{(n+1)}\}$ and $V_L(t) = V_L + n\left(V_{th}-V_r\right)$ for $t\in[T_n, T_{n+1})$. This is a sort of feedback definition: we do not know $T_{n+1}$ until we define $V_L(t)$, but we can define $V_L(t)$ at $T_n$ and then run the process with this fixed value of $V_L(t)$ until it reaches the threshold $V_{th}^{(n+1)}$. Using this modification of the classical stochastic Leaky Integrate-and-Fire model allows us to say that $T_n$ represents the n-th spiking time of the neuron (as these times are equivalent to the ones obtained by resetting the process). However, this leads to a process that is much more difficult to handle. Indeed, to write the solution of (23), let us define the counting process
$$N(t) = \sum_{n=1}^{+\infty}\chi_{\{T_n\le t\}},$$
where for any $A\in\mathcal{F}$
$$\chi_A(\omega) = \begin{cases}1 & \omega\in A\\ 0 & \omega\notin A.\end{cases}$$
Thus, we can express the process $\widetilde{V}(t)$ as
$$\widetilde{V}(t) = e^{-\lambda t}V_0 + \left(1-e^{-\lambda t}\right)V_L + N(t)\left(V_{th}-V_r\right) - \sum_{i=1}^{N(t)}\left(V_{th}-V_r\right)e^{-\lambda(t-T_i)} + e^{-\lambda t}\int_0^t I(s)\,e^{\lambda s}\,ds + \sigma\,e^{-\lambda t}\int_0^t e^{\lambda s}\,dW(s),$$
which is quite complicated. However, to recover the original process, we can again modify the threshold, which will become stochastic. Indeed, let us suppose we want to observe the n-th spike; hence we condition with respect to the event $\{N(t) = n-1\}$. Under such conditioning, $\widetilde{V}(t)\ge V_{th}^{(n)}$ if and only if
$$V(t) \ge V_{th}^{(n)} - (n-1)\left(V_{th}-V_r\right) + \sum_{i=1}^{n-1}\left(V_{th}-V_r\right)e^{-\lambda(t-T_i)}.$$
Hence, let us define the new stochastic threshold as
$$V_{th}^{(n)}(t) = V_{th} + \sum_{i=1}^{n-1}\left(V_{th}-V_r\right)e^{-\lambda(t-T_i)}.$$
In this way, we can say that $T_n = \inf\{t>0 : V(t)\ge V_{th}^{(n)}(t)\}$ still represents the n-th spiking time of the neuron. In particular, conditionally on the knowledge of $(T_1,\ldots,T_{n-1})$, $V_{th}^{(n)}(t)$ is an exponentially decaying threshold such that $\lim_{t\to+\infty}V_{th}^{(n)}(t) = V_{th}$.
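The adaptive threshold (24) is straightforward to apply on a simulated path: after each detected passage one simply adds a new exponentially decaying term to the threshold. A minimal sketch follows (Python/NumPy, illustrative parameters; simulate_lif is the hypothetical helper sketched after Equation (1)).

import numpy as np

def adaptive_threshold_spikes(t, V, V_th=-50.0, V_r=-70.0, lam=1.0 / 20.0):
    """Detect successive spiking times T_1, T_2, ... of a path V(t) against the
    stochastic threshold V_th^(n)(t) = V_th + sum_{i<n} (V_th - V_r) exp(-lam (t - T_i))."""
    spike_times = []
    for k in range(t.size):
        threshold = V_th + sum((V_th - V_r) * np.exp(-lam * (t[k] - Ti))
                               for Ti in spike_times)
        if V[k] >= threshold:
            spike_times.append(t[k])
    return np.array(spike_times)

t, V = simulate_lif(I=lambda s: 1.2, T=2000.0)
print(adaptive_threshold_spikes(t, V))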
Now, since we are dealing with the semi-Markov model, let us consider $T_n^\Phi = \inf\{t>0 : V^\Phi(t)\ge V_{th}^{(n)}(t)\}$. For such random variables, after conditioning with respect to $(T_1,\ldots,T_{n-1})$, we cannot directly extend Proposition 5. However, one can use [10] Propositions 2.2.6 and 2.2.7 to express some properties of the limit superior and limit inferior of some quantities involving the survival and cumulative distribution functions. In a forthcoming paper we aim to show that Proposition 5 can actually be extended to some cases of time-varying thresholds.

5.2. The Interspike Intervals

Another important property of the times $T_n^\Phi$ relies on their representation. Indeed, we have
$$T_n^\Phi \overset{d}{=} S(T_n).$$
This property can be used to determine the distribution of the interspike intervals. Indeed, let us observe that $T_{n+1}\ge T_n$ almost surely. Thus, if we define the measure $\mu^{(2)}(B) = \mathbb{P}\left((T_{n+1}, T_n)\in B\right)$ for any Borel set $B\subseteq(0,+\infty)^2$, it is easy to see that it is concentrated on the set
$$A = \{(u,v) : u\in(0,+\infty),\ v\in(0,u)\}.$$
Hence we have, by using the fact that S(t) is independent of $T_{n+1}$ and $T_n$,
$$\mathbb{P}\left(S(T_{n+1}) - S(T_n)\le t\right) = \int_0^{+\infty}\int_0^u\mathbb{P}\left(S(u) - S(v)\le t\right)\mu^{(2)}(du\,dv).$$
Now, observe that in A we have $u>v$; hence, by using the fact that S(t) is a Lévy process, since $S(u)-S(v)\overset{d}{=}S(u-v)$,
$$\mathbb{P}\left(S(T_{n+1}) - S(T_n)\le t\right) = \int_0^{+\infty}\int_0^u\mathbb{P}\left(S(u-v)\le t\right)\mu^{(2)}(du\,dv) = \mathbb{P}\left(S(T_{n+1} - T_n)\le t\right).$$
Hence, if we define $K_n = T_n - T_{n-1}$ (where $T_0 = 0$) and $K_n^\Phi = T_n^\Phi - T_{n-1}^\Phi$, we obtain from (25)
$$\mathbb{P}\left(K_n^\Phi\le t\right) = \mathbb{P}\left(S(K_n)\le t\right),$$
that is to say $K_n^\Phi\overset{d}{=}S(K_n)$. This property allows us to determine the distribution of the interspike intervals $K_n^\Phi$ from the distribution of $S(K_n)$. In particular, if we denote by $g_n$ the density of $K_n$, we have
$$\mathbb{P}\left(K_n^\Phi\le t\right) = \int_0^{+\infty}\mathbb{P}\left(S(y)\le t\right)g_n(y)\,dy.$$
Moreover, the $K_n$ are independent, and then so are the $K_n^\Phi$. Indeed, we have
$$\mathbb{P}\left(K_n^\Phi\le t_1,\,K_m^\Phi\le t_2\right) = \mathbb{P}\left(T_n^\Phi - T_{n-1}^\Phi\le t_1,\ T_m^\Phi - T_{m-1}^\Phi\le t_2\right) = \mathbb{P}\left(S(T_n) - S(T_{n-1})\le t_1,\ S(T_m) - S(T_{m-1})\le t_2\right).$$
Now let us suppose $m<n$. Then $m\le n-1$ and $m-1<n-1$. Thus, if we consider the measure $\mu^{(4)}(B) = \mathbb{P}\left((T_n, T_{n-1}, T_m, T_{m-1})\in B\right)$ for any Borel set $B\subseteq\mathbb{R}^4$, it is easy to see that it is concentrated on the set
$$A = \{(u,v,w,z) : u\in(0,+\infty),\ v\in(0,u),\ w\in(0,v),\ z\in(0,w)\}.$$
Hence we have, by using the fact that S is a Lévy process and that $(v,u)$ and $(z,w)$ are disjoint intervals,
$$\mathbb{P}\left(K_n^\Phi\le t_1,\,K_m^\Phi\le t_2\right) = \int_0^{+\infty}\int_0^u\int_0^v\int_0^w\mathbb{P}\left(S(u)-S(v)\le t_1,\ S(w)-S(z)\le t_2\right)\mu^{(4)}(du\,dv\,dw\,dz) = \int_0^{+\infty}\int_0^u\int_0^v\int_0^w\mathbb{P}\left(S(u)-S(v)\le t_1\right)\mathbb{P}\left(S(w)-S(z)\le t_2\right)\mu^{(4)}(du\,dv\,dw\,dz) = \int_0^{+\infty}\int_0^u\int_0^v\int_0^w\mathbb{P}\left(S(u-v)\le t_1\right)\mathbb{P}\left(S(w-z)\le t_2\right)\mu^{(4)}(du\,dv\,dw\,dz).$$
Now let us consider the function $s : (u,t)\mapsto\mathbb{P}(S(u)\le t)$. Then
$$\mathbb{P}\left(K_n^\Phi\le t_1,\,K_m^\Phi\le t_2\right) = \mathbb{E}\left[s(T_n - T_{n-1}, t_1)\,s(T_m - T_{m-1}, t_2)\right] = \mathbb{E}\left[s(K_n, t_1)\,s(K_m, t_2)\right].$$
Now let us consider the measures $\eta_{n,m}^{(2)}(B_2) = \mathbb{P}\left((K_n, K_m)\in B_2\right)$ for any Borel set $B_2\subseteq\mathbb{R}^2$ and $\eta_n^{(1)}(B_1) = \mathbb{P}(K_n\in B_1)$ for any Borel set $B_1\subseteq\mathbb{R}$. Then, since $K_n$ and $K_m$ are independent, we have $\eta_{n,m}^{(2)}(du\,dv) = \eta_n^{(1)}(du)\,\eta_m^{(1)}(dv)$ and
$$\mathbb{P}\left(K_n^\Phi\le t_1,\,K_m^\Phi\le t_2\right) = \mathbb{E}\left[s(K_n,t_1)\,s(K_m,t_2)\right] = \int_0^{+\infty}\int_0^{+\infty}\mathbb{P}(S(u)\le t_1)\,\mathbb{P}(S(v)\le t_2)\,\eta_{n,m}^{(2)}(du\,dv) = \int_0^{+\infty}\mathbb{P}(S(u)\le t_1)\,\eta_n^{(1)}(du)\int_0^{+\infty}\mathbb{P}(S(v)\le t_2)\,\eta_m^{(1)}(dv) = \mathbb{P}(S(K_n)\le t_1)\,\mathbb{P}(S(K_m)\le t_2) = \mathbb{P}(K_n^\Phi\le t_1)\,\mathbb{P}(K_m^\Phi\le t_2),$$
so $K_n^\Phi$ and $K_m^\Phi$ are independent.
Now let us suppose that $I(t)\equiv I_0$. Then we know that the $K_n$ are i.i.d. random variables; thus the $K_n^\Phi$ are also i.i.d. In particular, $K_1^\Phi = T_1^\Phi$, and then all the interspike intervals are distributed as $T_1^\Phi$. Thus, we can conclude that if $I(t)\equiv I_0$, then the law of the first passage time of $V^\Phi(t)$ through $V_{th}$ describes not only the first spiking time, but also all the interspike intervals. In particular, in this case we can extend Proposition 5 to the variables $K_n^\Phi$.
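The representation $K_n^\Phi\overset{d}{=}S(K_n)$ suggests a direct way to sample the interspike intervals of the time-changed model in the stable case: sample an ISI $K_n$ of the base model by simulation and then apply an independent α-stable subordinator, using the self-similarity $S(y)\overset{d}{=}y^{1/\alpha}S(1)$. A minimal Monte Carlo sketch follows (Python/NumPy, illustrative parameters; simulate_lif, first_passage_time and stable_subordinator_increment are the hypothetical helpers sketched in the previous sections). The empirical survival function of the resulting sample can then be inspected for the heavy tail discussed in Section 6.

import numpy as np

def sample_time_changed_isi(n_samples=200, alpha=0.7, V_th=-50.0, seed=2):
    """Sample K^Phi = S(K), where K is a first-passage time of the base LIF model
    and S is an independent alpha-stable subordinator (S(K) | K ~ K**(1/alpha) S(1))."""
    rng = np.random.default_rng(seed)
    isis = []
    while len(isis) < n_samples:
        t, V = simulate_lif(I=lambda s: 1.2, T=500.0, rng=rng)
        K = first_passage_time(t, V, V_th=V_th)
        if np.isfinite(K):
            isis.append(stable_subordinator_increment(alpha, K, rng))
    return np.array(isis)

isis = sample_time_changed_isi()
# the empirical survival function P(K^Phi > t), on a log-log scale, reveals the power-law tail
ts = np.logspace(0, 4, 30)
print(list(zip(ts, [(isis > t).mean() for t in ts])))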

6. Comparison with the Unit 240-1

In [11], the authors give an overview of quantitative methods to study the spontaneous activity of neurons. In particular, they considered some neuronal units in the cochlear nucleus of the cat. Let us focus our attention on two particular neuronal units: Unit 259-2 and Unit 240-1. Using the exact words of the aforementioned paper, "The histogram for Unit 259-2 appears to be unimodal and asymmetric [...]" while that of Unit 240-1 "is unimodal and asymmetric, but on a quite different time scale than that of Unit 259-2". Indeed, the authors then assert that "The spike trains of Unit 259-2 and Unit 240-1 do not appear to be easily characterizable".
In the same paper, the authors try to give a characterization of the interspike intervals of Unit 259-2. Indeed, they assert that "The fact that the interval histogram rises rapidly to its mode (at 3 msec.), together with the exponential decay, suggests that the process generating the spike train might be a Poisson process with dead time"; hence, in particular, the interspike intervals should be exponentially distributed. However, "when the histogram of Unit 240-1 is replotted on a semilogarithmic scale, the decay is clearly seen to be non-exponential". The fact that the histograms of the interspike intervals of Unit 259-2 are reminiscent of an exponential distribution, while those of Unit 240-1 do not show an exponential decay but are still similar to those of Unit 259-2 on a different time scale, suggests that the distribution of the interspike intervals of Unit 240-1 could be similar to that of a Mittag-Leffler random variable, or maybe to that of a stable random variable, or at least that it should have a heavy tail.
Thus, in [11] a first attempt to study and characterize the interspike interval distribution of Unit 240-1 is made, but the rescalings performed there were not enough to identify such a distribution. After that paper, other works focused on trying to reconstruct the distribution of Unit 240-1. In [48], the interspike interval distribution of Unit 240-1 is fitted, for instance, with a gamma distribution, while in [9], Figure 5, as suggested by the scaling-invariance property of the histograms, it is fitted with an inverse Gaussian distribution. However, both of these distributions are exponentially decaying. An interesting solution is found in [9]: here the scaling-invariance property of the histograms is interpreted as a stability property, and a stable distribution is then used to fit the data concerning Unit 240-1. In particular, the authors take into consideration the Cauchy distribution (which has a power-like decay) since it "[...] has essentially the same invariance property as that found for the density of interspike intervals of Unit 240-1".
Concerning linear models such as the Leaky Integrate-and-Fire, they are not enough to describe the behavior of Unit 240-1. Indeed, for long times, the distribution of the interspike intervals of the spontaneous activity (hence, being $I(t)\equiv 0$, they are i.i.d. random variables) generated by a Leaky Integrate-and-Fire model is asymptotically exponential (see [27]); hence the decay is too fast to match that of Unit 240-1. However, if we consider $\Phi(\lambda) = \lambda^\alpha$ for some $\alpha\in(0,1)$, our semi-Markov Leaky Integrate-and-Fire model admits interspike intervals that are i.i.d. and asymptotically behave like power laws for long times (in particular, the asymptotic behavior is similar to that of a Mittag-Leffler function). This decay is much more in accordance with the one observed in [11] for Unit 240-1, since it preserves the heavy tails that have been observed. Moreover, it is not so distant from the description of [9], since the decay is a power law of exponent $\alpha\in(0,1)$ that can be tuned with the data. Moreover, other choices of Φ can be made according to the data to obtain a more precise fit of the histograms, since Proposition 5 guarantees that if Φ is regularly varying at $0^+$, then the survival function of the interspike intervals decays as the product of a power and a slowly varying function, which is quite similar to a power-law decay.

7. Conclusions

In this paper, we used a fractionalization procedure to produce heavy-tailed non-Markov neuronal models from simple Markov linear models. This procedure consists of using a stochastic timescale inside the process itself: this random time is given by the inverse of a subordinator independent of the original Markov process. First, this procedure leads to a semi-Markov process (whose evolution depends on the current position and on the current sojourn time in it) in place of the Markov process describing the membrane potential of the neuron. As a consequence, although we did not modify two fundamental properties of the covariance, i.e., being decreasing and infinitesimal, we can actually obtain a different asymptotic behavior and thus force the process to be long-range dependent (this is the case of the stable subordinator, for instance, as observed in [46]). As we already discussed, long-range dependent processes (or, more generally, non-delta-correlated noises) can be used to describe memory effects in neuronal models and provide a first generalization of simple linear models, in such a way as to obtain realistic data (see [8,31]). The most important consequence is related to the spiking times: the time-change forces a delay in the dynamics of the membrane potential, which leads to a delay in the distribution of the spiking times. In particular, we obtain heavy-tailed first spiking times. Moreover, we preserve independence and identical distribution of the interspike intervals in the case of spontaneous activity, thus obtaining heavy-tailed distributions of such intervals in the spontaneous activity case. Finally, we compared (only qualitatively, due to a lack of data) the behavior of our process (for $\Phi(z) = z^\alpha$, i.e., in the α-stable case) with the behavior of the Unit 240-1, which is seen to admit heavy-tailed interspike intervals (see [9,11]). As we also stated in the introduction, the problem of the power-law decay of the distribution of the spiking times has been studied more recently in [13]. From this comparison we notice that the Mittag-Leffler-like behavior of the interspike intervals of the Unit 240-1, together with the power-law decay, is reproduced by our model. However, this does not yet mean that our model is in accordance with the data: a statistical and experimental study must still be carried out.
Finally, let us recall that this is actually a toy example. Despite being a simple (linear) example, it seems to work in accordance with already known phenomena (as in the case of Unit 240-1). However, here we propose this approach as an exemplary procedure to produce more complex delayed or heavy-tailed neuronal models, and thus we aim to study different time-changed models (eventually also non-linear ones, such as the stochastic Hodgkin-Huxley model in [7]).

Author Contributions

Conceptualization, G.A. and B.T.; Investigation, G.A. and B.T.; Writing—original draft, G.A. and B.T.

Funding

This research is partially supported by MIUR-PRIN 2017, project “Stochastic Models for Complex Systems”, no. 2017JFFHSH.

Acknowledgments

We would like to thank the referees whose remarks and suggestions have certainly improved the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

Proof. 
Let us denote $G(u,v) = e^{-z(u+v)}$ and let us observe that G is a $C^1$ function. Let us first consider, for any pair $(a,b)\in(0,+\infty)^2$,
$$\int_0^a\int_0^b G(u,v)\,H^{(2)}(t,s,du\,dv) = \int_0^a\int_0^b H^{(2)}(t,s,[u,a]\times[v,b])\,G(du,dv) + \int_0^a H^{(2)}(t,s,[u,a]\times[0,b])\,G(du,0) + \int_0^b H^{(2)}(t,s,[0,a]\times[v,b])\,G(0,dv) + H^{(2)}(t,s,[0,a]\times[0,b])\,G(0,0),$$
where we used the bivariate integration by parts formula (see [49] Lemma 2.2). Now let us observe that, since G is a $C^1$ function, we have
$$G(du,v) = -z\,e^{-z(u+v)}\,du, \qquad G(u,dv) = -z\,e^{-z(u+v)}\,dv, \qquad G(du,dv) = z^2\,e^{-z(u+v)}\,du\,dv,$$
and then
$$\int_0^a\int_0^b G(u,v)\,H^{(2)}(t,s,du\,dv) = \int_0^a\int_0^b H^{(2)}(t,s,[u,a]\times[v,b])\,z^2 e^{-z(u+v)}\,du\,dv - \int_0^a H^{(2)}(t,s,[u,a]\times[0,b])\,z e^{-zu}\,du - \int_0^b H^{(2)}(t,s,[0,a]\times[v,b])\,z e^{-zv}\,dv + H^{(2)}(t,s,[0,a]\times[0,b]).$$
Now let us define
$$I_1(a,b) = \int_0^a H^{(2)}(t,s,[u,a]\times[0,b])\,z e^{-zu}\,du, \quad I_2(a,b) = \int_0^b H^{(2)}(t,s,[0,a]\times[v,b])\,z e^{-zv}\,dv, \quad I_3(a,b) = \int_0^a\int_0^b H^{(2)}(t,s,[u,a]\times[v,b])\,z^2 e^{-z(u+v)}\,du\,dv,$$
and let us work with $I_1(a,b)$. Since the integrand is non-decreasing, by the monotone convergence theorem we have
$$\lim_{a\to+\infty} I_1(a,b) = \lim_{a\to+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,a]\times[0,b])\,\chi_{[0,a]}(u)\,z e^{-zu}\,du = \int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[0,b])\,z e^{-zu}\,du.$$
Taking then also the limit as $b\to+\infty$ we have, again by the monotone convergence theorem,
$$\lim_{b\to+\infty}\lim_{a\to+\infty} I_1(a,b) = \lim_{b\to+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[0,b])\,z e^{-zu}\,du = \int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[0,+\infty))\,z e^{-zu}\,du.$$
First, let us observe that, since we are only using the monotone convergence theorem, we obtain the same result if we exchange the order of the limits. Now let us observe that
$$H^{(2)}(t,s,[u,+\infty)\times[0,+\infty)) = \mathbb{P}(E(t)\ge u,\ E(s)\ge 0) = \mathbb{P}(E(t)\ge u);$$
hence we have
$$\lim_{b\to+\infty}\lim_{a\to+\infty} I_1(a,b) = \int_0^{+\infty}\mathbb{P}(E(t)\ge u)\,z e^{-zu}\,du = 1 - \int_0^{+\infty}e^{-zu}f(t,u)\,du = 1 - \eta(t,z).$$
In the same way we have
$$\lim_{b\to+\infty}\lim_{a\to+\infty} I_2(a,b) = \lim_{a\to+\infty}\lim_{b\to+\infty} I_2(a,b) = 1 - \eta(s,z).$$
Concerning $I_3(a,b)$ we have, by using again the monotone convergence theorem,
$$\lim_{b\to+\infty}\lim_{a\to+\infty} I_3(a,b) = \lim_{a\to+\infty}\lim_{b\to+\infty} I_3(a,b) = \int_0^{+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[v,+\infty))\,z^2 e^{-z(u+v)}\,du\,dv.$$
Finally, let us also observe that
$$\lim_{a,b\to+\infty}H^{(2)}(t,s,[0,a]\times[0,b]) = H^{(2)}(t,s,[0,+\infty)\times[0,+\infty)) = 1.$$
Hence, we can take the limit as $a,b\to+\infty$ in (A2) and, since $\lim_{a,b\to+\infty}I_1(a,b)$, $\lim_{a,b\to+\infty}I_2(a,b)$, $\lim_{a,b\to+\infty}I_3(a,b)$ and $\lim_{a,b\to+\infty}H^{(2)}(t,s,[0,a]\times[0,b])$ are all finite and $G(u,v)\ge 0$, we have
$$\int_0^{+\infty}\int_0^{+\infty}G(u,v)\,H^{(2)}(t,s,du\,dv) = \int_0^{+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[v,+\infty))\,z^2 e^{-z(u+v)}\,du\,dv - 1 + \eta(t,z) + \eta(s,z).$$
Now let us denote
$$I_4 = \int_0^{+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[v,+\infty))\,z^2 e^{-z(u+v)}\,du\,dv$$
and observe that
$$H^{(2)}(t,s,[u,+\infty)\times[v,+\infty)) = \mathbb{P}(E(t)\ge u,\ E(s)\ge v)$$
to obtain
$$I_4 = \int_0^{+\infty}\int_0^{+\infty}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv = \iint_{u<v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv + \iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv.$$
Now, since $t\ge s$, we have
$$I_4 = \iint_{u<v}\mathbb{P}(E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv + \iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv.$$
Let us set
$$I_5 = \iint_{u<v}\mathbb{P}(E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv, \qquad I_6 = \iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,z^2 e^{-z(u+v)}\,du\,dv.$$
Concerning $I_5$ we have
$$I_5 = \int_0^{+\infty}\mathbb{P}(E(s)\ge v)\,z e^{-zv}\left(1-e^{-zv}\right)dv = \int_0^{+\infty}\mathbb{P}(E(s)\ge v)\,z e^{-zv}\,dv - \int_0^{+\infty}\mathbb{P}(E(s)\ge v)\,z e^{-2zv}\,dv = \left(1-\eta(s,z)\right) - \frac{1}{2}\left(1-\eta(s,2z)\right) = \frac{1}{2} + \frac{1}{2}\eta(s,2z) - \eta(s,z).$$
Concerning $I_6$, let us use [50] Equation 2.17, so that we can express $\mathbb{P}(E(t)\ge u, E(s)\ge v)$ for $t\ge s$ and $u>v$. Set
$$D(t,s) = \{(x,y)\in\mathbb{R}^2 : y\in[0,s],\ x\in[0,t-y]\}$$
to obtain
$$I_6 = \iint_{u>v}\iint_{D(t,s)}g(v,y)\,g(u-v,x)\,dx\,dy\;z^2 e^{-z(u+v)}\,du\,dv = \iint_{D(t,s)}\int_0^{+\infty}g(v,y)(-z)e^{-zv}\int_v^{+\infty}g(u-v,x)(-z)e^{-zu}\,du\,dv\,dx\,dy.$$
Let us use the change of variable $w = u-v$ to obtain
$$\int_v^{+\infty}g(u-v,x)(-z)e^{-zu}\,du = e^{-zv}\int_0^{+\infty}g(w,x)(-z)e^{-zw}\,dw,$$
and then we have in Equation (A6)
$$I_6 = \iint_{D(t,s)}\left(\int_0^{+\infty}g(v,y)(-z)e^{-2zv}\,dv\right)\left(\int_0^{+\infty}g(w,x)(-z)e^{-zw}\,dw\right)dx\,dy.$$
Now let us set
$$I_7 = \int_0^{+\infty}g(w,x)(-z)e^{-zw}\,dw, \qquad I_8 = \int_0^{+\infty}g(v,y)(-z)e^{-2zv}\,dv.$$
Let us evaluate $I_7$. To do this, let us observe that
$$g(w,x) = \frac{\partial}{\partial x}\mathbb{P}(S(w)\le x) = \frac{\partial}{\partial x}\mathbb{P}(w\le E(x)) = \frac{\partial}{\partial x}\mathbb{P}(E(x)\ge w);$$
hence, by the hypothesis, we have
$$I_7 = \frac{\partial}{\partial x}\int_0^{+\infty}\mathbb{P}(E(x)\ge w)(-z)e^{-zw}\,dw = \frac{\partial}{\partial x}\int_0^{+\infty}e^{-zw}f(x,w)\,dw = \frac{\partial}{\partial x}\eta(x,z).$$
Analogously, we have $I_8 = \frac{1}{2}\frac{\partial}{\partial y}\eta(y,2z)$. Using these two equalities in (A7) we have (recalling that $\eta(0,z)=1$)
$$I_6 = \frac{1}{2}\int_0^s\left(\int_0^{t-y}\frac{\partial}{\partial x}\eta(x,z)\,dx\right)\frac{\partial}{\partial y}\eta(y,2z)\,dy = \frac{1}{2}\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\eta(y,2z)\,dy - \frac{1}{2}\int_0^s\frac{\partial}{\partial y}\eta(y,2z)\,dy = \frac{1}{2}\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\eta(y,2z)\,dy - \frac{1}{2}\eta(s,2z) + \frac{1}{2}.$$
Now, using Equations (A5) and (A8) in (A4), we obtain
$$I_4 = 1 - \eta(s,z) + \frac{1}{2}\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\eta(y,2z)\,dy.$$
Now, by using Equation (A9) in (A3), we finally obtain
$$\int_0^{+\infty}\int_0^{+\infty}G(u,v)\,H^{(2)}(t,s,du\,dv) = \eta(t,z) + \frac{1}{2}\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\eta(y,2z)\,dy.$$
 □

Appendix B. Proof of Theorem 2

Proof. 
Let us denote $G(u,v) = e^{-z|u-v|}$ and observe that this time G is not a $C^1$ function. Let us fix a generic pair $(a,b)\in(0,+\infty)^2$ and use the bivariate integration by parts formula to obtain again (A1). Now let us define
$$I_1(a,b) = \int_0^a H^{(2)}(t,s,[u,a]\times[0,b])\,G(du,0), \quad I_2(a,b) = \int_0^b H^{(2)}(t,s,[0,a]\times[v,b])\,G(0,dv), \quad I_3(a,b) = \int_0^a\int_0^b H^{(2)}(t,s,[u,a]\times[v,b])\,G(du,dv).$$
Let us first work with $I_1(a,b)$. As done in the previous theorem, we can pass to the limit as $a,b\to+\infty$ to obtain
$$\lim_{a,b\to+\infty}I_1(a,b) = \int_0^{+\infty}\mathbb{P}(E(t)\ge u)\,G(du,0).$$
Now let us observe that
$$G(du,v) = \left(-z\,e^{-z(u-v)}\chi_{\{u>v\}}(u,v) + z\,e^{-z(v-u)}\chi_{\{u\le v\}}(u,v)\right)du, \qquad G(u,dv) = \left(z\,e^{-z(u-v)}\chi_{\{u>v\}}(u,v) - z\,e^{-z(v-u)}\chi_{\{u\le v\}}(u,v)\right)dv.$$
Thus, we have
$$\lim_{a,b\to+\infty}I_1(a,b) = \int_0^{+\infty}\mathbb{P}(E(t)\ge u)(-z)e^{-zu}\,du = -1 + \eta(t,z).$$
In the same way we obtain
$$\lim_{a,b\to+\infty}I_2(a,b) = -1 + \eta(s,z).$$
Hence, as before, by taking the limit as $a,b\to+\infty$ in (A1) and using the fact that $\lim_{a,b\to+\infty}I_1(a,b)$, $\lim_{a,b\to+\infty}I_2(a,b)$ and $\lim_{a,b\to+\infty}H^{(2)}(t,s,[0,a]\times[0,b])$ are finite, we obtain
$$\int_0^{+\infty}\int_0^{+\infty}G(u,v)\,H^{(2)}(t,s,du\,dv) = \int_0^{+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[v,+\infty))\,G(du,dv) - 1 + \eta(t,z) + \eta(s,z).$$
Now let us set
$$I_4 = \int_0^{+\infty}\int_0^{+\infty}H^{(2)}(t,s,[u,+\infty)\times[v,+\infty))\,G(du,dv) = \iint_{u<v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv) + \iint_{u=v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv) + \iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv),$$
and then let us set
$$I_5 = \iint_{u<v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv), \quad I_6 = \iint_{u=v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv), \quad I_7 = \iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,G(du,dv).$$
Let us first work with $I_6$. We have
$$I_6 = \iint_{u=v}\mathbb{P}(E(s)\ge u)\,G(du,dv).$$
Now, from the expression of $G(du,v)$ we have that $G(du,dv)$ on $\{u=v\}$ is given by $2z\,du$. Hence we have
$$I_6 = 2z\int_0^{+\infty}\mathbb{P}(E(s)\ge u)\,du = 2z\,\mathbb{E}[E(s)].$$
Now let us work with $I_5$. We have
$$I_5 = \iint_{u<v}\mathbb{P}(E(s)\ge v)\,G(du,dv),$$
with $G(du,dv) = -z^2 e^{-z(v-u)}\,du\,dv$ for $u<v$; hence we have
$$I_5 = \int_0^{+\infty}\mathbb{P}(E(s)\ge v)(-z)e^{-zv}\left(\int_0^v z e^{zu}\,du\right)dv = \int_0^{+\infty}\mathbb{P}(E(s)\ge v)(-z)e^{-zv}\left(e^{zv}-1\right)dv = \int_0^{+\infty}\mathbb{P}(E(s)\ge v)(-z)\left(1-e^{-zv}\right)dv = -z\int_0^{+\infty}\mathbb{P}(E(s)\ge v)\,dv + \int_0^{+\infty}\mathbb{P}(E(s)\ge v)\,z e^{-zv}\,dv = -z\,\mathbb{E}[E(s)] + 1 - \eta(s,z).$$
Now let us work with $I_7$. We have, since $G(du,dv) = -z^2 e^{-z(u-v)}\,du\,dv$ for $u>v$,
$$I_7 = -z^2\iint_{u>v}\mathbb{P}(E(t)\ge u, E(s)\ge v)\,e^{-z(u-v)}\,du\,dv.$$
Let us use as before [50] Equation 2.17 to obtain
$$I_7 = \iint_{u>v}\iint_{D(t,s)}g(v,y)\,g(u-v,x)\,(-z^2)e^{-z(u-v)}\,dx\,dy\,du\,dv = z\iint_{D(t,s)}\int_0^{+\infty}g(v,y)\,e^{zv}\int_v^{+\infty}g(u-v,x)(-z)e^{-zu}\,du\,dv\,dx\,dy = z\iint_{D(t,s)}\left(\int_0^{+\infty}g(v,y)\,dv\right)\left(\int_0^{+\infty}g(w,x)(-z)e^{-zw}\,dw\right)dx\,dy.$$
Denote now
$$I_8(y) = \int_0^{+\infty}g(v,y)\,dv, \qquad I_9(x) = \int_0^{+\infty}g(w,x)(-z)e^{-zw}\,dw,$$
and observe that, as before, $I_9(x) = \frac{\partial}{\partial x}\eta(x,z)$. Concerning $I_8(y)$, we have
$$I_8(y) = \frac{\partial}{\partial y}\int_0^{+\infty}\mathbb{P}(E(y)\ge v)\,dv = \frac{\partial}{\partial y}\mathbb{E}[E(y)].$$
By using these two equalities in (A17) we have
$$I_7 = z\int_0^s\left(\int_0^{t-y}\frac{\partial}{\partial x}\eta(x,z)\,dx\right)\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy = z\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy - z\int_0^s\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy = z\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy - z\,\mathbb{E}[E(s)].$$
Now let us use (A18), (A16) and (A15) in (A14) to obtain
$$I_4 = z\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy + 1 - \eta(s,z),$$
and then use (A19) in (A13) to obtain
$$\int_0^{+\infty}\int_0^{+\infty}G(u,v)\,H^{(2)}(t,s,du\,dv) = z\int_0^s\eta(t-y,z)\frac{\partial}{\partial y}\mathbb{E}[E(y)]\,dy + \eta(t,z),$$
concluding the proof. □

References

  1. Abbott, L.F. Lapicque’s introduction of the integrate-and-fire model neuron (1907). Brain Res. Bull. 1999, 50, 303–304. [Google Scholar] [CrossRef]
  2. Burkitt, A.N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 2006, 95, 1–19. [Google Scholar] [CrossRef] [PubMed]
  3. Burkitt, A.N. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol. Cybern. 2006, 95, 97–112. [Google Scholar] [CrossRef] [PubMed]
  4. Ricciardi, L.M.; Sacerdote, L. The Ornstein-Uhlenbeck process as a model for neuronal activity. Biol. Cybern. 1979, 35, 1–9. [Google Scholar] [CrossRef]
  5. Sacerdote, L.; Giraudo, M.T. Stochastic integrate and fire models: A review on mathematical methods and their applications. In Stochastic Biomathematical Models; Springer: Berlin, Germany, 2013; pp. 99–148. [Google Scholar]
  6. Shinomoto, S.; Sakai, Y.; Funahashi, S. The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex. Neural Comput. 1999, 11, 935–951. [Google Scholar] [CrossRef]
  7. Fox, R.F. Stochastic versions of the Hodgkin-Huxley equations. Biophys. J. 1997, 72, 2068–2074. [Google Scholar] [CrossRef] [Green Version]
  8. Sakai, Y.; Funahashi, S.; Shinomoto, S. Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons. Neural Netw. 1999, 12, 1181–1190. [Google Scholar] [CrossRef]
  9. Gerstein, G.L.; Mandelbrot, B. Random walk models for the spike activity of a single neuron. Biophys. J. 1964, 4, 41–68. [Google Scholar] [CrossRef]
  10. Ascione, G.; Pirozzi, E.; Toaldo, B. On the exit time from open sets of some semi-Markov processes. arXiv 2017, arXiv:1709.06333. [Google Scholar]
  11. Rodieck, R.; Kiang, N.S.; Gerstein, G. Some quantitative methods for the study of spontaneous activity of single neurons. Biophys. J. 1962, 2, 351–368. [Google Scholar] [CrossRef]
  12. Holden, A. A note on convolution and stable distributions in the nervous system. Biol. Cybern. 1975, 20, 171–173. [Google Scholar] [CrossRef] [PubMed]
  13. Tsubo, Y.; Isomura, Y.; Fukai, T. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons. PLoS Comput. Biol. 2012, 8, e1002461. [Google Scholar] [CrossRef] [PubMed]
  14. Meerschaert, M.M.; Toaldo, B. Relaxation patterns and semi-Markov dynamics. Stoch. Process. Appl. 2019, 129, 2850–2879. [Google Scholar] [CrossRef]
  15. Orsingher, E.; Ricciuti, C.; Toaldo, B. Time-Inhomogeneous Jump processes and variable order operators. Potential Anal. 2016, 45, 435–461. [Google Scholar] [CrossRef]
  16. Orsingher, E.; Ricciuti, C.; Toaldo, B. On semi-Markov processes and their Kolmogorov’s integro-differential equations. J. Funct. Anal. 2018, 275, 830–868. [Google Scholar] [CrossRef]
  17. Ricciuti, C.; Toaldo, B. Semi-Markov models and motion in heterogeneous media. J. Stat. Phys. 2017, 169, 340–361. [Google Scholar] [CrossRef]
  18. Gajda, J.; Wylomańska, A. Time-changed Ornstein–Uhlenbeck process. J. Phys. A Math. Theor. 2015, 48, 135004. [Google Scholar] [CrossRef]
  19. Vadori, N.; Swishchuk, A. Inhomogeneous Random Evolutions: Limit Theorems and Financial Applications. Mathematics 2019, 7, 447. [Google Scholar] [CrossRef]
  20. Cahoy, D.O.; Polito, F.; Phoha, V. Transient behavior of fractional queues and related processes. Methodol. Comput. Appl. Probab. 2015, 17, 739–759. [Google Scholar] [CrossRef]
  21. Ascione, G.; Leonenko, N.; Pirozzi, E. Fractional queues with catastrophes and their transient behaviour. Mathematics 2018, 6, 159. [Google Scholar] [CrossRef]
  22. Lefèvre, C.; Simon, M. SIR epidemics with stages of infection. Adv. Appl. Probab. 2016, 48, 768–791. [Google Scholar] [CrossRef]
  23. Ashton, S.; Scalas, E.; Georgiou, N.; Kiss, I.Z. The Mathematics of Human Contact: Developing a Model for Social Interaction in School Children. Acta Phys. Pol. A 2018, 133, 18. [Google Scholar] [CrossRef]
  24. Brezis, H. Functional Analysis, Sobolev Spaces and Partial Differential Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  25. Buonocore, A.; Caputo, L.; Pirozzi, E.; Ricciardi, L.M. On a stochastic leaky integrate-and-fire neuronal model. Neural Comput. 2010, 22, 2558–2585. [Google Scholar] [CrossRef] [PubMed]
  26. Ascione, G.; Pirozzi, E. On a stochastic neuronal model integrating correlated inputs. Math. Biosci. Eng. 2019, 16, 5206. [Google Scholar] [CrossRef]
  27. Buonocore, A.; Caputo, L.; Pirozzi, E.; Ricciardi, L.M. The first passage time problem for Gauss-diffusion processes: Algorithmic approaches and applications to LIF neuronal model. Methodol. Comput. Appl. Probab. 2011, 13, 29–57. [Google Scholar] [CrossRef]
  28. Buonocore, A.; Caputo, L.; Pirozzi, E.; Carfora, M.F. A leaky integrate-and-fire model with adaptation for the generation of a spike train. Math. Biosci. Eng. MBE 2016, 13, 483–493. [Google Scholar]
  29. Carfora, M.F.; Pirozzi, E. Linked Gauss-Diffusion processes for modeling a finite-size neuronal network. Biosystems 2017, 161, 15–23. [Google Scholar] [CrossRef] [Green Version]
  30. Fourcaud, N.; Brunel, N. Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput. 2002, 14, 2057–2110. [Google Scholar] [CrossRef]
  31. Pirozzi, E. Colored noise and a stochastic fractional model for correlated inputs and adaptation in neuronal firing. Biol. Cybern. 2018, 112, 25–39. [Google Scholar] [CrossRef]
  32. Kobayashi, R.; Tsubo, Y.; Shinomoto, S. Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Front. Comput. Neurosci. 2009, 3, 9. [Google Scholar] [CrossRef] [Green Version]
  33. Kobayashi, R.; Kitano, K. Impact of slow K+ currents on spike generation can be described by an adaptive threshold model. J. Comput. Neurosci. 2016, 40, 347–362. [Google Scholar] [CrossRef] [PubMed]
  34. Huang, C.; Resnik, A.; Celikel, T.; Englitz, B. Adaptive spike threshold enables robust and temporally precise neuronal encoding. PLoS Comput. Biol. 2016, 12, e1004984. [Google Scholar] [CrossRef] [PubMed]
  35. Bertoin, J. Lévy Processes; Cambridge University Press: Cambridge, UK, 1996; Volume 121. [Google Scholar]
  36. Bertoin, J. Subordinators: Examples and Applications. In Lectures on Probability Theory and Statistics; Springer: Berlin, Germany, 1999; pp. 1–91. [Google Scholar] [Green Version]
  37. Cinlar, E. Markov Additive Processes and Semi-Regeneration; Technical Report; Northwestern University: Evanston, IL, USA, 1974. [Google Scholar]
  38. Kaspi, H.; Maisonneuve, B. Regenerative systems on the real line. Ann. Probab. 1988, 16, 1306–1332. [Google Scholar] [CrossRef]
  39. Meerschaert, M.M.; Straka, P. Semi-Markov approach to continuous time random walk limit processes. Ann. Probab. 2014, 42, 1699–1723. [Google Scholar] [CrossRef] [Green Version]
  40. Harlamov, B. Continuous Semi-Markov Processes; Wiley: Hoboken, NJ, USA, 2008. [Google Scholar]
  41. Li, C.; Qian, D.; Chen, Y. On Riemann-Liouville and Caputo derivatives. Discrete Dyn. Nat. Soc. 2011, 2011, 562494. [Google Scholar] [CrossRef]
  42. Leonenko, N.N.; Meerschaert, M.M.; Sikorskii, A. Fractional pearson diffusions. J. Math. Anal. Appl. 2013, 403, 532–546. [Google Scholar] [CrossRef]
  43. Meerschaert, M.M.; Straka, P. Inverse stable subordinators. Math. Model. Nat. Phenom. 2013, 8, 1–16. [Google Scholar] [CrossRef]
  44. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; Tata McGraw-Hill Education: New York, NY, USA, 1955. [Google Scholar]
  45. Teka, W.W.; Upadhyay, R.K.; Mondal, A. Fractional-order leaky integrate-and-fire model with long-term memory and power law dynamics. Neural Netw. 2017, 93, 110–125. [Google Scholar] [CrossRef]
  46. Leonenko, N.N.; Meerschaert, M.M.; Sikorskii, A. Correlation structure of fractional Pearson diffusions. Comput. Math. Appl. 2013, 66, 737–745. [Google Scholar] [CrossRef]
  47. Patie, P.; Winter, C. First exit time probability for multidimensional diffusions: A PDE-based approach. J. Comput. Appl. Math. 2008, 222, 42–53. [Google Scholar] [CrossRef] [Green Version]
  48. Gabbiani, F.; Cox, S.J. Mathematics for Neuroscientists; Academic Press: Cambridge, MA, USA, 2017. [Google Scholar]
  49. Gill, R.D.; Laan, M.J.v.d.; Wellner, J.A. Inefficient estimators of the bivariate survival function for three models. Annales de l’I.H.P. Probabilités et Statistiques 1995, 31, 545–597. [Google Scholar]
  50. Mijena, J.B. Correlation structure of time-changed fractional Brownian motion. arXiv 2014, arXiv:1408.4502. [Google Scholar]
