Abstract
In this paper, a Leaky Integrate-and-Fire (LIF) model for the membrane potential of a neuron is considered, in the case in which the potential process is a semi-Markov process. The semi-Markov property is obtained here by means of a time-change of a Gauss-Markov process. This model has some merits, including a heavy-tailed distribution of the waiting times between spikes. This and other properties of the process, such as its mean, variance and autocovariance, are discussed.
1. Introduction
Leaky Integrate-and-Fire (LIF) models have a long tradition in the field of neuronal modeling. Starting from Lapicque's Integrate-and-Fire (IF) model (see []), which was later modified to account for a leakage in the membrane potential of the neuron (see, for a complete review of IF and LIF models, [,]), these models gained considerable popularity due to their mathematical simplicity. In particular, a stochastic LIF model has been introduced (see []) to include the action of noise in the model. Under the classical assumptions, the membrane potential of a neuron is described by an Ornstein-Uhlenbeck process or, more generally, by a Gauss-Markov process. When the potential reaches a suitable boundary at a random time T, the neuron emits a signal (which is observable and is thus said to be a `spike') and then the process is reset to its initial value. For an overview of stochastic IF models, we refer to []. This model has several unrealistic features (see []), some of which have been remedied by different authors with different approaches (such as, for instance, considering a different stochastic model [] or introducing correlated inputs []). For example, in [] it is observed that the random time T is a heavy-tailed random variable which may have infinite expectation. This feature is generally ignored in the literature since, under classical assumptions, the random times have tails whose asymptotic decay is exponential. Furthermore, resetting the process after each spike is also unrealistic, since the model completely loses the effect of past events.
The model we study in this paper is a modification of the one introduced in [] and addresses the two issues above. In particular, a random time change of the potential process is introduced, and this delays the first passage time through the boundary enough to make it a heavy-tailed random variable. An adaptive threshold approach is proposed to avoid the reset problem. Furthermore, the time-change makes the process semi-Markovian and introduces memory effects, which we describe by investigating the autocovariance function. Some other technical properties of the semi-Markov potential, such as the mean value and the variance, are also investigated. The comparison of our model with the popular Unit 240-1, studied for instance in [,], is crucial. Indeed, the problem of describing and reproducing heavy-tailed distributions of interspike intervals has become quite popular: after [,], it was studied, for instance, ten years later in [] and more recently in [].
Besides giving a model that (at least qualitatively) seems to describe more accurately the behavior of a neuron with a heavy-tailed ISI distribution, we also provide an application to neuroscience of a by-now well-known fractionalization procedure. We consider only a simple linear model, i.e., the Leaky Integrate-and-Fire model, to give an easy example of how this fractionalization procedure applies and of how it produces both a weighted covariance structure that, together with the non-Markov property, yields correlation of the spiking times, and a delay in the firing activity due to the introduction of a sort of stochastic clock. We speak of a fractionalization procedure since this random time-change of Markov processes produces semi-Markov processes governed by fractional equations (see, for example, [,,,]), which are very popular in applications; we thus establish a connection between our model and fractional equations. This procedure can be adapted to various processes. For instance, one could use it to produce a stochastic fractional version of the Hodgkin-Huxley model, based on the stochastic model of []. Thus, to summarize the aim of this paper: we intend to show that our time-changing procedure can actually produce realistic models even when applied to simple ones, and to provide an approach that we aim to further generalize to much more complex processes.
The paper is structured as follows:
- In Section 2 we introduce the semi-Markov Leaky Integrate-and-Fire model and discuss the semi-Markov property of the membrane potential process;
- In Section 3 we give mean and variance of the membrane potential process;
- In Section 4 we address the problem of the autocovariance function. Although it has already been determined in [], we use a different approach that leads to two independently interesting results. In particular, Theorem 1 provides a formula for the bivariate Laplace transform of an inverse subordinator, while Theorem 2 provides a formula for the autocovariance function of a time-changed stationary Ornstein-Uhlenbeck process as defined in []. This last result was obtained in the non-stationary case (for deterministic initial values) in []: we provide some changes in the proof given there to determine the autocovariance in the stationary case. We then use these two results to determine the autocovariance function of the membrane potential process. In the same section we show that the autocovariance function is still infinitesimal and decreasing.
- In Section 5 we focus on the effect of the time-change on the distribution of the first spiking times and of the interspike intervals of this model;
- In Section 6 we compare, only qualitatively (due to a lack of quantitative data), the features of the distribution of the interspike intervals of the model with those of the Unit 240-1;
- Finally, in Section 7 we summarize the results.
Let us also recall that semi-Markov models are widely used in several fields of application. Just to cite some of them, we could consider applications in finance (as, for instance, in []), queueing theory (as, for instance, in [,]), epidemiology (as, for instance, in []) and also social sciences (as, for instance, in []). For the functional spaces we will work with, we refer to [].
2. The Semi-Markov Leaky Integrate-and-Fire Model
Let us consider a standard stochastic Leaky Integrate-and-Fire model (see, for instance, [,]), i.e., let us describe the membrane potential of a neuron with a stochastic process $V = \{V(t),\ t \ge 0\}$ which is the strong solution of the following stochastic differential equation (SDE)
$$dV(t) = \left(-\frac{V(t) - V_L}{\theta} + I(t)\right)dt + \sigma\, dW(t), \qquad V(0) = V_0, \qquad (1)$$
where $V_L$ is the leak potential, $\theta > 0$ is the characteristic time of the neuron (seen as a leaky RC circuit, obtained by a modification of the classical Integrate-and-Fire model []), $\sigma > 0$ is the amplitude of the noise, $W$ is a standard Brownian motion (hence $dW/dt$ is a white noise) and $I(t)$ is a function that describes the input stimuli (which could be synaptic or injected).
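For the reader's intuition, a minimal Euler-Maruyama simulation of (1) can be sketched as follows (our own illustrative sketch: the function name and the parameter values are placeholders, not taken from the literature):

```python
import numpy as np

def simulate_lif(V0=0.0, VL=0.0, theta=10.0, sigma=0.5, I=lambda t: 0.0,
                 T=50.0, dt=0.01, rng=None):
    """Euler-Maruyama scheme for dV = (-(V - VL)/theta + I(t)) dt + sigma dW."""
    rng = rng or np.random.default_rng()
    n = int(T / dt)
    V = np.empty(n + 1)
    V[0] = V0
    for k in range(n):
        drift = -(V[k] - VL) / theta + I(k * dt)
        V[k + 1] = V[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return V  # V[k] approximates V(k * dt)
```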
We say that a neuron fires if the membrane potential crosses a certain fixed threshold $V_{th}$. In such a case, the first passage time
$$T := \inf\{t > 0 : V(t) > V_{th}\}$$
represents the first spiking time of the neuron. Moreover, to model successive spiking times of the neuron, one can use two different approaches:
- One can reset the process, in the sense that one sets $V(T) = V_{\mathrm{reset}}$ for some constant $V_{\mathrm{reset}} < V_{th}$; then one can consider the process solving (1) with initial time $T$ and initial value $V_{\mathrm{reset}}$ and study the first passage time of this new process through $V_{th}$ (see [,,,,,,,] and references therein);
- One could also consider a suitably modified (time-dependent) threshold such that the n-th passage time through such a threshold represents the n-th spiking time of the neuron (this approach is called the adaptive threshold approach; see [,,] and references therein).
For the adaptive threshold approach, it can be useful to observe that the sequence of spiking times $(T_n)_{n \ge 1}$ is almost surely increasing. Moreover, the interspike intervals (ISIs) $T_n - T_{n-1}$ for $n \ge 1$ (where $T_0 = 0$) are independent and, if $I$ is constant, they are also identically distributed.
It has been shown in [] that, under suitable assumptions on $I$, the survival functions of the spiking times are asymptotically exponential. However, in some particular settings this behavior is in contradiction with the experimental data. Indeed, in [], it has been shown that the ISIs should be similar to one-sided stable random variables and that, in particular, their survival functions should decay like power laws.
In order to introduce a sort of delay in this behavior, we now consider a stochastic time scale for the process $V$. Let us recall that a subordinator $S = \{S(t),\ t \ge 0\}$ is an increasing Lévy process (see [,]) and thus we can define its right-continuous inverse as
$$L(t) := \inf\{s > 0 : S(s) > t\}.$$
In particular, let us consider driftless subordinators, i.e., non-decreasing Lévy processes such that their Lévy exponent $\Phi$ can be expressed as
$$\Phi(\lambda) = \int_0^\infty \left(1 - e^{-\lambda x}\right)\nu(dx),$$
where $\nu$ is the Lévy measure of $S$. Moreover, we will assume that $\nu(0, +\infty) = \infty$, so that the process $S$ is strictly increasing.
Now we can define the time-changed LIF model. Consider the process $V$ that is the strong solution of (1), and let $L$ be the inverse of an independent subordinator $S$. Let us then define $V^\Phi(t) := V(L(t))$ as our new membrane potential process (where $\Phi$ is the Laplace exponent of the subordinator $S$). Despite losing an easy physical interpretation of the constant $\theta$, we will see in the following that such a process (for a suitable choice of $\Phi$ and $I$) recovers some properties (as found in []) of the ISI distribution. We say that this model is semi-Markovian in the sense that the process $V^\Phi$ is semi-Markovian, and so it enjoys the Markov property at suitable random times, as rigorously discussed in [] Section 4b. The reader might also consult, for example, [] (in particular, Example (2.13) and Section 5) for a more general class of semi-Markov processes including $V^\Phi$.
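To make the time-change concrete, here is a minimal simulation sketch (again our own illustration, under the assumption $\Phi(\lambda) = \lambda^\alpha$, i.e., an $\alpha$-stable subordinator, sampled with Kanter's method; `simulate_lif` is the sketch above):

```python
import numpy as np

def stable_increments(alpha, n, dt, rng):
    """Kanter's method: i.i.d. increments of an alpha-stable subordinator S,
    each with Laplace transform exp(-dt * lam**alpha), for 0 < alpha < 1."""
    U = rng.uniform(0.0, np.pi, size=n)
    E = rng.exponential(1.0, size=n)
    X = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))
    return dt ** (1.0 / alpha) * X

def time_changed_path(V, dt, alpha, rng=None):
    """Return V(L(t)) on the grid t_k = k*dt, where L(t) = inf{s : S(s) > t}
    is the inverse of an alpha-stable subordinator independent of V."""
    rng = rng or np.random.default_rng()
    n = len(V) - 1
    S = np.array([0.0])
    while S[-1] <= n * dt:  # simulate S until it exceeds the time horizon
        S = np.concatenate((S, S[-1] + np.cumsum(stable_increments(alpha, n, dt, rng))))
    t_grid = np.arange(n + 1) * dt
    j = np.searchsorted(S, t_grid, side='right')  # first index with S_j > t
    return V[np.minimum(j, n)]  # L(t) ~ j*dt; capped at the horizon of V

# e.g.: Vphi = time_changed_path(simulate_lif(), dt=0.01, alpha=0.7)
```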
Heuristically, the reason why the Markov property is lost after the time-change can be summarized as follows. The process $L$ has intervals of constancy whose random lengths are the lengths of the jumps of $S$, i.e., if $S$ jumps at time $u$, then $L$ is constant on the interval $[S(u^-), S(u)]$, whose length is $S(u) - S(u^-)$.
Of course, these intervals of constancy are not (in general) exponentially distributed, and thus $L$ is not Markovian. The process $V^\Phi$ has the same intervals of constancy, and thus it is not a Markov process either. It is useful to note that a semi-Markov process can be embedded in a Markov process by adding a coordinate containing the information that is lost together with the exponential distribution: hence, if we define
$$\gamma(t) := t - \sup\{s \le t : V^\Phi(s) \ne V^\Phi(t)\},$$
where we use the convention $\sup \emptyset = 0$, which is the sojourn time in the current position of $V^\Phi$, we find that the couple $(V^\Phi(t), \gamma(t))$ is a Markov process ([] Section 4; we refer the reader to [] Chapter 3 and references therein for an overview of various equivalent definitions of semi-Markov processes).
In the next sections, we will focus on some characteristics of the process $V^\Phi$.
3. Mean and Variance Functions of $V^\Phi$
3.1. Preliminaries on $V$
Let us first give some preliminaries on the strong solution $V$ of (1). Let us recall that, solving the equation, we obtain
$$V(t) = V_0\, e^{-t/\theta} + V_L\left(1 - e^{-t/\theta}\right) + \int_0^t e^{-(t-s)/\theta} I(s)\,ds + \sigma \int_0^t e^{-(t-s)/\theta}\,dW(s).$$
Let us define the following quantities:
- The process
$$Z(t) := \sigma \int_0^t e^{-(t-s)/\theta}\,dW(s);$$
- The function
$$m(t) := V_0\, e^{-t/\theta} + V_L\left(1 - e^{-t/\theta}\right) + \int_0^t e^{-(t-s)/\theta} I(s)\,ds.$$
Thus, we have
$$V(t) = m(t) + Z(t).$$
Now, let us observe that (see, for instance, [])
$$\mathbb{E}[Z(t)] = 0$$
and
$$\operatorname{Cov}(Z(t), Z(s)) = \frac{\sigma^2 \theta}{2}\left(e^{-|t-s|/\theta} - e^{-(t+s)/\theta}\right).$$
Thus, $m$ being a deterministic function, we have
$$\mathbb{E}[V(t)] = m(t)$$
and
$$\operatorname{Cov}(V(t), V(s)) = \operatorname{Cov}(Z(t), Z(s)).$$
Concerning the variance, we have in particular
$$\operatorname{Var}(V(t)) = \frac{\sigma^2 \theta}{2}\left(1 - e^{-2t/\theta}\right).$$
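For completeness, the last formula is a one-line application of the Itô isometry (a check we add here, using the notation just introduced):
$$\operatorname{Var}(V(t)) = \operatorname{Var}(Z(t)) = \sigma^2 \int_0^t \left(e^{-(t-s)/\theta}\right)^2 ds = \frac{\sigma^2 \theta}{2}\left(1 - e^{-2t/\theta}\right) \nearrow \frac{\sigma^2 \theta}{2} \quad \text{as } t \to +\infty.$$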
From now on, for simplicity, let us set .
3.2. Mean of $V^\Phi$
Let us now consider $V^\Phi(t) = V(L(t))$. We want first to evaluate $\mathbb{E}[V^\Phi(t)]$. To do this, let us first observe that, since $L$ is independent of $V$ and $\mathbb{E}[V(y)] = m(y)$, conditioning on $L(t)$ gives $\mathbb{E}[V^\Phi(t)] = \mathbb{E}[m(L(t))]$.
Let us now introduce some notation. Let us denote by $f_{L(t)}$ the probability density function of $L(t)$ and by $g_{S(y)}$ the probability density function of $S(y)$. Moreover, let us denote by
$$\widetilde{f}_t(\lambda) := \int_0^\infty e^{-\lambda y} f_{L(t)}(y)\,dy$$
the Laplace transform of $f_{L(t)}$ with respect to $y$. Now we are ready to give the following proposition, which has actually been obtained in [], but whose proof we recall for the sake of completeness.
Proposition 1.
We have
$$f_{L(t)}(y) = \int_0^t \nu(t - w, +\infty)\, g_{S(y)}(w)\,dw, \qquad t, y > 0.$$
Proof.
Let us observe that, by conditioning on $S(y)$, the event $\{L(t) \in (y, y + dy)\}$ requires $S(y) = w \le t$ together with a jump of $S$ of size greater than $t - w$ in the operational-time interval $(y, y + dy)$; since jumps of size greater than $x$ occur at rate $\nu(x, +\infty)$, we obtain
$$f_{L(t)}(y) = \int_0^t \nu(t - w, +\infty)\, g_{S(y)}(w)\,dw.$$
□
Consequently, we obtain the mean of $V^\Phi$.
Corollary 1.
We have
$$\mathbb{E}\left[V^\Phi(t)\right] = \int_0^\infty m(y)\, f_{L(t)}(y)\,dy = \int_0^\infty \int_0^t m(y)\, \nu(t - w, +\infty)\, g_{S(y)}(w)\,dw\,dy.$$
A particular case in which the mean can be written in closed form is that of a constant input $I(t) \equiv I$. Indeed, in such a case we have
$$m(t) = V_L + I\theta + \left(V_0 - V_L - I\theta\right) e^{-t/\theta}$$
and then
$$\mathbb{E}\left[V^\Phi(t)\right] = V_L + I\theta + \left(V_0 - V_L - I\theta\right) \widetilde{f}_t(1/\theta).$$
Let us observe that if $\Phi(\lambda) = \lambda^\alpha$ for some $\alpha \in (0,1)$, i.e., $S$ is an $\alpha$-stable subordinator, then, denoting $u(t) := \mathbb{E}[V^\Phi(t)]$, we have that $u$ is a solution of a fractional Cauchy problem, i.e., a Cauchy problem with a fractional time derivative. We recall that the Caputo fractional derivative of order $\alpha \in (0,1)$ (see []) is, for any suitable function $f$,
$$\frac{d^\alpha f}{dt^\alpha}(t) := \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t - s)^\alpha}\,ds.$$
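As a worked example (added here for illustration), the Caputo derivative of a power function can be computed directly from this definition: for $\beta > 0$,
$$\frac{d^\alpha}{dt^\alpha} t^\beta = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\beta s^{\beta-1}}{(t-s)^\alpha}\,ds = \frac{\beta\, B(\beta, 1-\alpha)}{\Gamma(1-\alpha)}\, t^{\beta-\alpha} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\, t^{\beta-\alpha},$$
while the Caputo derivative of a constant vanishes, which is consistent with the role of the initial datum in the fractional Cauchy problem below.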
In the next result we make this assertion rigorous. Observe also that this result is linked to the definition of fractional Pearson diffusions, as given in [], and can actually be derived starting from the equations studied in that paper. Here we follow a different approach, which relies on the linearity of the equation.
Proposition 2.
If $\Phi(\lambda) = \lambda^\alpha$ for some $\alpha \in (0,1)$ and $I$ is bounded and measurable, then, denoting $u(t) := \mathbb{E}[V^\Phi(t)]$, we have that $u$ is a solution of
$$\begin{cases} \dfrac{d^\alpha u}{dt^\alpha}(t) = -\dfrac{u(t) - V_L}{\theta} + \mathbb{E}[I(L(t))], & t > 0,\\ u(0) = V_0, \end{cases} \qquad (12)$$
that is to say:
- the function $u$ admits a Caputo derivative of order $\alpha$;
- for almost every $t > 0$ it holds:
$$\frac{d^\alpha u}{dt^\alpha}(t) = -\frac{u(t) - V_L}{\theta} + \mathbb{E}[I(L(t))].$$
Moreover, if $I$ is continuous and bounded, $u$ is a solution of (12) for every $t > 0$.
Proof.
First of all, let us show that $u$ and $t \mapsto \mathbb{E}[I(L(t))]$ are bounded and measurable (hence Laplace transformable). To do this, let us first observe that
$$|m(y)| \le |V_0| + |V_L| + \theta \sup_{s \ge 0}|I(s)| =: C, \qquad y \ge 0;$$
thus, we have, by Corollary 1,
$$|u(t)| = \left|\int_0^\infty m(y)\, f_{L(t)}(y)\,dy\right| \le C$$
for any $t \ge 0$. Moreover, we have that for any $t \ge 0$
$$\left|\mathbb{E}[I(L(t))]\right| \le \sup_{s \ge 0}|I(s)|,$$
thus also $t \mapsto \mathbb{E}[I(L(t))]$ is bounded, while measurability follows from that of $(t, y) \mapsto f_{L(t)}(y)$. Now, recalling that in the $\alpha$-stable case (see [])
$$\int_0^\infty e^{-st} f_{L(t)}(y)\,dt = s^{\alpha - 1} e^{-y s^\alpha},$$
and taking the Laplace transform (denoting by $\widetilde{u}$ the Laplace transform of $u$), we obtain
$$\widetilde{u}(s) = s^{\alpha - 1}\, \widetilde{m}(s^\alpha),$$
where $\widetilde{m}$ is the Laplace transform of $m$ with respect to $t$ (see []). It is already well known that $m$ is an absolutely continuous function solving the following Cauchy problem
$$\begin{cases} m'(t) = -\dfrac{m(t) - V_L}{\theta} + I(t), & t > 0,\\ m(0) = V_0, \end{cases}$$
hence we can integrate by parts to obtain
$$\widetilde{m'}(\lambda) = \lambda\, \widetilde{m}(\lambda) - V_0,$$
that is to say
$$\lambda\, \widetilde{m}(\lambda) - V_0 = -\frac{1}{\theta}\left(\widetilde{m}(\lambda) - \frac{V_L}{\lambda}\right) + \widetilde{I}(\lambda),$$
and then, multiplying everything by $s^{\alpha - 1}$ and choosing $\lambda = s^\alpha$,
$$s^{\alpha}\, \widetilde{u}(s) - s^{\alpha - 1} V_0 = -\frac{1}{\theta}\left(\widetilde{u}(s) - \frac{V_L}{s}\right) + s^{\alpha - 1}\, \widetilde{I}(s^\alpha). \qquad (14)$$
Taking the inverse Laplace transform (and recalling also the Laplace transform of the power function, so that the left-hand side of (14) is the Laplace transform of the Caputo derivative of $u$, while $s^{\alpha-1}\widetilde{I}(s^\alpha)$ is the Laplace transform of $t \mapsto \mathbb{E}[I(L(t))]$), we have
$$\frac{d^\alpha u}{dt^\alpha}(t) = -\frac{u(t) - V_L}{\theta} + \mathbb{E}[I(L(t))]$$
for almost every $t > 0$, by uniqueness of the Laplace transform. Thus, we have shown that $u$ is a solution of Equation (12) for almost every $t > 0$.
If $I$ is continuous and bounded, then also $t \mapsto \mathbb{E}[I(L(t))]$ (and then the right-hand side of (12)) is a continuous and bounded function. Indeed, recalling that (see [])
$$f_{L(t)}(y) = \frac{t}{\alpha}\, y^{-1 - \frac{1}{\alpha}}\, g_\alpha\!\left(t\, y^{-\frac{1}{\alpha}}\right),$$
where $g_\alpha$ is the density of $S(1)$, it is easy to show that $t \mapsto \mathbb{E}[I(L(t))]$ is continuous and the same holds for $u$. Thus, the right-hand side of Equation (12) is continuous and the equation holds for every $t > 0$. □
Remark 1.
The definition of solution is given in the same spirit as the definition of a Carathéodory solution for an ordinary differential equation (see []). In particular, one has that $\frac{d^\alpha u}{dt^\alpha}(t)$ exists for almost every $t > 0$ and the first equation of (12) actually holds for such $t$.
Let us also recall that Equation (12) is actually the equation of a fractional-order LIF model, as introduced in [].
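For a numerical feel of this fractional relaxation, here is a minimal sketch of ours (arbitrary parameter values; we use the spontaneous case $I \equiv 0$, in which the solution of (12) is $u(t) = V_L + (V_0 - V_L)\,E_\alpha(-t^\alpha/\theta)$, with $E_\alpha$ the Mittag-Leffler function):

```python
import numpy as np
from scipy.special import gamma

def ml(alpha, z, kmax=120):
    """Truncated Mittag-Leffler series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; the series suffers cancellation for large |z|."""
    return sum(z ** k / gamma(alpha * k + 1) for k in range(kmax))

alpha, theta, V0, VL = 0.7, 10.0, 1.0, 0.0   # arbitrary illustrative values
t = np.linspace(0.0, 50.0, 200)
u_frac = VL + (V0 - VL) * ml(alpha, -t ** alpha / theta)   # mean of the time-changed model
u_exp = VL + (V0 - VL) * np.exp(-t / theta)                # classical (alpha = 1) mean
# u_frac decays like t**(-alpha) for large t (slow 'memory'), u_exp exponentially.
```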
3.3. The Variance of $V^\Phi$
In the same way as before, we can obtain the variance of the process $V^\Phi$.
Proposition 3.
We have
$$\operatorname{Var}\left(V^\Phi(t)\right) = \frac{\sigma^2 \theta}{2}\left(1 - \widetilde{f}_t(2/\theta)\right) + \int_0^\infty m^2(y)\, f_{L(t)}(y)\,dy - \left(\int_0^\infty m(y)\, f_{L(t)}(y)\,dy\right)^2.$$
Proof.
By conditioning on $L(t)$ (which is independent of $V$) we have
$$\operatorname{Var}\left(V^\Phi(t)\right) = \mathbb{E}\left[\operatorname{Var}(V(y))\big|_{y = L(t)}\right] + \operatorname{Var}\left(m(L(t))\right),$$
and the conclusion follows from the variance formula for $V$ and Proposition 1. □
From this equality we can deduce in particular that, whenever $I$ is bounded, $\sup_{t \ge 0} \operatorname{Var}(V^\Phi(t)) < \infty$. This observation will be useful for the evaluation of the autocovariance function.
Remark 2.
Since $\widetilde{f}_t(\lambda) = \mathbb{E}\left[e^{-\lambda L(t)}\right] \to 0$ by dominated convergence as $t \to +\infty$ for any $\lambda > 0$ (recall that $L(t) \to +\infty$ almost surely), we have that
$$\lim_{t \to +\infty} \operatorname{Var}\left(V^\Phi(t)\right) = \frac{\sigma^2 \theta}{2}.$$
4. The Autocovariance Function of $V^\Phi$
In this section we want to describe the autocovariance function of the membrane potential process $V^\Phi$, following a different approach from the one given in []. Actually, in Corollary 2 we determine the autocovariance of $V^\Phi$ by splitting the integral into two pieces: one is given by the bivariate Laplace transform of the inverse subordinator, while the other is the autocovariance of a stationary Ornstein-Uhlenbeck process. Therefore, in the next subsection we determine a formula for the first piece, in Section 4.2 we do the same for the second piece, and finally in Section 4.3 we glue these results together.
4.1. The Bivariate Laplace Transform of $L$
As we said before, we know that $\operatorname{Var}(V^\Phi(t))$ is bounded uniformly in $t$. Hence we have that, for any couple $(t, s)$, by a simple application of the Cauchy-Schwarz inequality,
$$\left|\operatorname{Cov}\left(V^\Phi(t), V^\Phi(s)\right)\right| \le \sqrt{\operatorname{Var}\left(V^\Phi(t)\right)\operatorname{Var}\left(V^\Phi(s)\right)} < +\infty.$$
The covariance of $L$ has already been determined in []. However, we follow here a different approach, which gives a slightly more explicit result. Therefore, let us first define, for any $t, s \ge 0$, the measure $\mu_{t,s}$ given by the joint law of the couple $(L(t), L(s))$. We first want to determine the bivariate Laplace-Stieltjes transform of $\mu_{t,s}$. The following theorem provides a formula for such a bivariate Laplace-Stieltjes transform.
Theorem 1.
Let us suppose that for any we have
Then, for any and , we have
The proof is given in Appendix A.
4.2. The Autocovariance Function of a Time-Changed Stationary Ornstein-Uhlenbeck Process
We will also need to determine the covariance of a time-changed stationary Ornstein-Uhlenbeck process. In particular, let us consider a stationary Ornstein-Uhlenbeck process $U$ and its time-changed version $U^\Phi(t) := U(L(t))$. For the inverse stable subordinator, the covariance has already been determined in [], while in the non-stationary case it has already been obtained in []. Here we consider the general stationary case by using the same approach.
Theorem 2.
Suppose that for any
Then, for any and we have
The proof is given in Appendix B.
Remark 3.
Since for a stationary Ornstein-Uhlenbeck process of parameter $\theta$ the covariance is given by $\operatorname{Cov}(U(t), U(s)) = \frac{\sigma^2 \theta}{2} e^{-|t-s|/\theta}$, Formula (18) also gives the value of $\mathbb{E}\big[e^{-|L(t) - L(s)|/\theta}\big]$ as a byproduct.
4.3. The Autocovariance Function of $V^\Phi$
Now we are ready to obtain the autocovariance function of $V^\Phi$.
Corollary 2.
Suppose that for any
Then, for any ,
Proof.
For , using (7) we have
□
Concerning the autocovariance function of $V^\Phi$, it is interesting to observe that two important properties are preserved.
Proposition 4.
Let us fix $s \ge 0$ and define the function
$$\tau \in [0, +\infty) \mapsto C(s, \tau) := \operatorname{Cov}\left(V^\Phi(s), V^\Phi(s + \tau)\right).$$
Then:
- (a) $\tau \mapsto C(s, \tau)$ is decreasing;
- (b) $\lim_{\tau \to +\infty} C(s, \tau) = 0$.
Proof.
Let us show (a). To do this, let us consider and the measure
for any Borel set . In particular, such a measure is concentrated in the set
Moreover, let us define . Now let us observe that
since, on the set A, (as we already know that the function is decreasing when ). Now let us show (b). To do this, let us observe that $L$ is almost surely increasing and $L(t) \to +\infty$ (pathwise) almost surely. Thus, let us write
Now observe that is a continuous function and . Moreover, for fixed s, the function is bounded; hence, in particular, almost surely and almost surely. Thus, we can use the dominated convergence theorem to obtain
concluding the proof. □
This last result tells us that we do not lose the main features of the covariance function: it is still infinitesimal and decreasing with respect to the time lag. However, the asymptotic behavior now depends on the choice of $\Phi$. It is, for instance, known (see []) that in the $\alpha$-stable case the asymptotic behavior of the covariance as the lag goes to infinity is that of a power function with exponent less than 1; hence the process is long-range dependent. Thus, with a suitable choice of $\Phi$ we can alter the asymptotic behavior of the covariance so as to reproduce different memory effects at the level of the autocovariance of the process. Let us remark that changing the asymptotic behavior of the covariance has already been used to describe long-memory effects of the membrane potential process (see, for instance, []).
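The slow decay of the autocovariance can also be observed numerically. A rough Monte Carlo check of ours (reusing `simulate_lif` and `time_changed_path` from the sketches in Section 2, with $I \equiv 0$ and arbitrary parameters) estimates $\operatorname{Cov}(V^\Phi(s), V^\Phi(s + \tau))$ across sample paths:

```python
import numpy as np
# reuses simulate_lif and time_changed_path from the sketches in Section 2

def autocov(paths, i, j):
    """Monte Carlo estimate of Cov(X(t_i), X(t_j)) over sample paths."""
    xi, xj = paths[:, i], paths[:, j]
    return np.mean((xi - xi.mean()) * (xj - xj.mean()))

rng = np.random.default_rng(0)
dt = 0.02
paths = np.stack([time_changed_path(simulate_lif(V0=1.0, T=60.0, dt=dt, rng=rng),
                                    dt, alpha=0.7, rng=rng) for _ in range(500)])
i = 1000                             # fix s = 20
for k in (0, 250, 500, 1000, 2000):  # lags tau = 0, 5, 10, 20, 40
    print(k * dt, autocov(paths, i, i + k))   # decays slowly (power-like) in tau
```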
5. First Spiking Times and Interspike Intervals
5.1. Spiking Times in the Case of Excitatory Stimuli
From now on, let us consider $I$ to be an excitatory stimulus, i.e., $I(t) \ge 0$. We are now interested in the first passage time of $V^\Phi$ through the threshold $V_{th}$. In this case, it is easy to see that the process $V$ is a Gauss-Markov process and satisfies the hypotheses of [] Corollary 3.4.4; hence, denoting by $T$ its first passage time through $V_{th}$, we have $\mathbb{P}(T < \infty) = 1$. Concerning the asymptotic behavior of the survival function of $T$, recalling (see [] Corollary 3.2.4) that the ratio function of the Gauss-Markov process is given by
we have that it is a strictly increasing function. By [] Remark 3.5.5, we know that we are under the hypotheses of [] Proposition 3.5.5, thus
for any . One can also show that $T$ is an absolutely continuous random variable whose density is infinitely differentiable with bounded derivatives (since they are the unique weak solutions of initial-value parabolic problems; see []).
Now let us consider the process $V^\Phi$ and let us define $\overline{T} := \inf\{t > 0 : V^\Phi(t) > V_{th}\}$, which represents the first spiking time of the neuron. Concerning the asymptotic behavior of its survival function and of its cumulative distribution function, we have the following proposition as an application of the results in [].
Proposition 5.
With the notation above, we have:
- (i) If Φ is regularly varying at $0^+$ with index $\alpha \in (0, 1)$, then, as $t \to +\infty$,
- (ii) If is an absolutely continuous function, then $\overline{T}$ is an absolutely continuous random variable;
- (iii) If is an absolutely continuous function and there exist and such that
- (iv) Under the hypotheses of (iii), if Φ is regularly varying at $+\infty$ with index $\alpha \in (0, 1)$, then, as $t \to 0^+$,
Proof.
(i) We have already shown that $\mathbb{P}(T < \infty) = 1$, so this property follows from [] Corollary 2.2.3;
(ii), (iii) These are just [] Propositions 2.3.1 and 2.3.2;
(iv) We have already observed that $T$ is an absolutely continuous random variable whose density is infinitely differentiable with bounded derivatives (hence Laplace transformable); thus, from (20) and [] Theorem 2.5.4, we obtain the desired property. □
With this proposition, we have summarized some of the properties we can obtain concerning the regularity of $\overline{T}$ and the asymptotic behavior of its survival and cumulative distribution functions. However, we can extend these results to successive spiking times by using an adaptive threshold method. To do this, instead of resetting the process, let us consider some other barriers
where $V_{\mathrm{reset}}$ is a constant representing the reset potential, i.e., the membrane potential right after the depolarization. However, since we want the first passage times of our process through such thresholds to be the spiking times of the neuron, we need to spatially translate the whole process by $V_{th} - V_{\mathrm{reset}}$ after each spike. To do so, we need to modify Equation (1) as
where this time is a suitable stochastic process. In particular, let us set for . Now let us suppose we have defined up to . Then we set and for . This is a sort of feedback definition: we do not know until we define , but we can define in and then start the process with such fixed value of until it reaches the threshold . Using this modification of the classical stochastic Leaky Integrate-and-Fire model allows us to say that the represent the n-th spiking times of the neuron (as they are equivalent to the ones obtained by resetting the process). However, this leads to a much more difficult process to handle. Indeed, to write the solution of (23), let us define the counting process
where for any
Thus, we can express the process as
which is quite complicated. However, to work again with the original process, we can once more modify the threshold, which will thus become stochastic. Indeed, let us suppose we want to observe the n-th spike. Hence we are conditioning with respect to the event . Under such conditioning, if and only if
Hence let us define the new stochastic threshold as
In this way, we can say that still represents the n-th spiking time of the neuron. In particular, if we condition the process on the knowledge of , this is an exponentially decaying threshold such that .
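A simulation sketch of this adaptive threshold mechanism may help (our own reading of the construction above, with hypothetical names and the assumption that each spike raises the threshold by $V_{th} - V_{\mathrm{reset}}$, the raise then decaying exponentially with the membrane time constant $\theta$):

```python
import numpy as np
# reuses simulate_lif from Section 2; the process is never reset here

def spikes_adaptive_threshold(V, dt, theta, v_th=1.0, v_reset=0.0):
    """Spike times of a non-reset path V against an adaptive threshold:
    each spike raises the threshold by (v_th - v_reset), and the raise
    decays exponentially, mimicking the spatial translation of the process."""
    spikes, extra = [], 0.0
    decay = np.exp(-dt / theta)
    for k, v in enumerate(V):
        if v > v_th + extra:
            spikes.append(k * dt)
            extra += v_th - v_reset
        extra *= decay
    return spikes

# e.g.: spikes_adaptive_threshold(simulate_lif(I=lambda t: 0.2, T=200.0), 0.01, 10.0)
```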
Now, since we are dealing with the semi-Markov model, let us consider . For such random variables, after conditioning with respect to , we cannot directly extend Proposition 5. However, one can use [] Propositions 2.2.6 and 2.2.7 to express some properties of the limit superior and limit inferior of some quantities involving the survival and the cumulative distribution functions. In a forthcoming paper, we aim to show that Proposition 5 can actually be extended to some cases of time-varying thresholds.
5.2. The Interspike Intervals
Another important property of the times relies on their representation. Indeed, we have
$$\overline{T}_n = S(T_n^-) := \lim_{u \uparrow T_n} S(u).$$
This property can be used to determine the distribution of the interspike intervals. Indeed, let us observe that $S(T_n^-) = S(T_n)$ almost surely, since $T_n$ is independent of $S$ and absolutely continuous, so that $T_n$ is almost surely not a jump time of $S$. Thus, if we define the measure for any Borel set , it is easy to see that it is concentrated on the set
Hence, we have, by using the fact that is independent of and ,
Now, observe that in A we have , hence, by using the fact that is a Lévy process, since ,
Hence if we define (where ) and , we obtain from (25)
that is to say . This property allows us to determine the distribution of the interspike intervals from the distribution of . In particular, if we denote by the density of , we have
Moreover, are independent and then also . Indeed, we have
Now let us suppose . Then and . Thus, if we consider the measure for any Borel set , it is easy to see that it is concentrated on the set
Hence we have, by using the fact that S is a Lévy process and that and are disjoint intervals,
Now let us consider the function . Then
Now let us consider the measures for any Borel set and for any Borel set . Then, since and are independent we have and we have
so and are independent.
Now let us suppose that $I$ is constant. Then we know that the ISIs $T_n - T_{n-1}$ of the original process are i.i.d. random variables. Thus also the ISIs $\overline{T}_n - \overline{T}_{n-1}$ are i.i.d. random variables. In particular, they are all distributed as $\overline{T}_1 \stackrel{d}{=} S(T_1)$, and then all the interspike intervals are distributed as the first spiking time. Thus, we can conclude that if $I$ is constant, then the law of the first passage time of $V^\Phi$ through $V_{th}$ describes not only the first spiking time, but also all the interspike intervals. In particular, in such a case we can extend Proposition 5 to the variables $\overline{T}_n - \overline{T}_{n-1}$.
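To make the heavy-tail claim concrete in the stable case, we sketch a heuristic computation (added here for illustration, under the extra assumption $\mathbb{E}[T_1] < +\infty$). For $\Phi(\lambda) = \lambda^\alpha$ the Lévy measure satisfies $\nu(x, +\infty) = x^{-\alpha}/\Gamma(1-\alpha)$, and the subordinator inherits this tail:
$$\mathbb{P}(S(t) > x) \sim t\, \nu(x, +\infty) = \frac{t\, x^{-\alpha}}{\Gamma(1 - \alpha)}, \qquad x \to +\infty.$$
Conditioning on $T_1$ (independent of $S$) and using $\overline{T}_1 \stackrel{d}{=} S(T_1)$, one then expects
$$\mathbb{P}\left(\overline{T}_1 > x\right) = \mathbb{E}\left[\mathbb{P}\left(S(T_1) > x \mid T_1\right)\right] \sim \frac{\mathbb{E}[T_1]}{\Gamma(1 - \alpha)}\, x^{-\alpha},$$
i.e., a power-law tail of index $\alpha \in (0, 1)$, and hence interspike intervals with infinite mean.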
6. Comparison with the Unit 240-1
In [], the authors give an overview of quantitative methods for the study of the spontaneous activity of neurons. In particular, they considered some neuronal units in the cochlear nucleus of the cat. Let us focus our attention on two particular neuronal units: Unit 259-2 and Unit 240-1. In the exact words of the aforementioned paper, "The histogram for Unit 259-2 appears to be unimodal and asymmetric [...] while that of Unit 240-1 is unimodal and asymmetric, but on a quite different time scale than that of Unit 259-2". Indeed, the authors then assert that "The spike trains of Unit 259-2 and Unit 240-1 do not appear to be easily characterizable".
In the same paper, the authors try to give a characterization of the interspike intervals of Unit 259-2. Indeed, they assert that "The fact that the interval histogram rises rapidly to its mode (at 3 msec.), together with the exponential decay, suggests that the process generating the spike train might be a Poisson process with dead time"; hence, in particular, the interspike intervals should be exponentially distributed. However, "when the histogram of Unit 240-1 is replotted on a semilogarithmic scale, the decay is clearly seen to be non-exponential". The fact that the histograms of the interspike intervals of Unit 259-2 are reminiscent of an exponential distribution, while those of Unit 240-1 do not exhibit an exponential decay but are still similar to those of Unit 259-2 on a different time scale, suggests that the distribution of the interspike intervals of Unit 240-1 could be similar to that of a Mittag-Leffler random variable, or perhaps to that of a stable random variable, or that at least it should have a heavy tail.
Thus, in [] a first attempt was made to study and characterize the interspike interval distribution of Unit 240-1, but the rescalings performed there were not enough to identify such a distribution. After that paper, other works focused on reconstructing the distribution of Unit 240-1. In [], the interspike interval distribution of Unit 240-1 is fitted, for instance, with a gamma distribution, while in [] Figure 5, as suggested by the scale-invariance property of the histograms, it is fitted by an inverse Gaussian distribution. However, both of these distributions are exponentially decaying. An interesting solution is found in []: there, the scale-invariance property of the histograms is interpreted as a stability property, and a stable distribution is then used to fit the data concerning Unit 240-1. In particular, the authors take into consideration the Cauchy distribution (whose decay is power-like), since it "[...] has essentially the same invariance property as that found for the density of interspike intervals of Unit 240-1".
Concerning linear models such as the Leaky Integrate-and-Fire, they are not enough to describe the behavior of Unit 240-1. Indeed, for long times, the interspike intervals of the spontaneous activity (hence, $I$ being constant, i.i.d. random variables) generated by a Leaky Integrate-and-Fire model are asymptotically exponentially distributed (see []); hence their decay is too fast to accord with that of Unit 240-1. However, if we consider $\Phi(\lambda) = \lambda^\alpha$ for some $\alpha \in (0, 1)$, our semi-Markov Leaky Integrate-and-Fire model admits interspike intervals that are i.i.d. and asymptotically equivalent to power laws for long times (in particular, the asymptotic behavior is similar to that of a Mittag-Leffler function). This decay is much more in accord with the one obtained in [] for Unit 240-1, since it preserves the heavy tails that have been observed. Moreover, it is not so distant from the description of [], since the decay is a power law whose exponent can be tuned to the data. Other choices of $\Phi$ can also be made according to the data to obtain a more precise fit of the histograms, since Proposition 5 guarantees that if $\Phi$ is regularly varying at $0^+$, then the survival functions of the interspike intervals decay as the product of a power and a slowly varying function, which is quite similar to a power-law decay.
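As a rough numerical illustration of this difference in decay (our own sketch: the exponential law used for $T_1$ is an arbitrary stand-in for the actual first passage time of $V$, and `stable_increments` is the Kanter sampler from the sketch in Section 2):

```python
import numpy as np
# reuses stable_increments (Kanter's method) from the sketch in Section 2

rng = np.random.default_rng(1)
alpha, n = 0.7, 100_000
T1 = rng.exponential(5.0, size=n)           # stand-in for the FPT of V (assumption)
S1 = stable_increments(alpha, n, 1.0, rng)  # i.i.d. copies of S(1)
isi = T1 ** (1.0 / alpha) * S1              # S(T1), by self-similarity of S
x = np.logspace(0, 4, 40)
surv = np.array([(isi > xx).mean() for xx in x])
# On a log-log plot, surv is asymptotically a line of slope -alpha,
# in contrast with the exponential ISI tails of the classical model.
```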
7. Conclusions
In this paper, we used a fractionalization procedure to produce heavy-tailed non-Markov neuronal models from simple Markov linear models. This procedure consists of introducing a stochastic timescale inside the process itself: this random time is given by the inverse of a subordinator independent of the original Markov process. First, this procedure leads to a semi-Markov process (whose evolution depends on the current position and on the current sojourn time in it) in place of the Markov process describing the membrane potential of the neuron. As a consequence, although we did not modify two fundamental properties of the covariance, i.e., being decreasing and infinitesimal, we can actually obtain a different asymptotic behavior and thus force the process to be long-range dependent (this is the case for the stable subordinator, for instance, as observed in []). As we already discussed, long-range dependent processes (or, more generally, non-delta-correlated noises) can be used to describe memory effects in neuronal models and provide a first generalization of simple linear models, in such a way as to obtain realistic data (see [,]). The most important consequence concerns the spiking times: the time-change forces a delay in the dynamics of the membrane potential, which leads to a delay in the distribution of the spiking times. In particular, we achieve heavy-tailed first spiking times. Moreover, we preserve independence and identical distribution of the interspike intervals in the case of spontaneous activity, thus obtaining heavy-tailed ISI distributions in that case. Finally, we compared (only qualitatively, due to a lack of data) the behavior of our process (for $\Phi(\lambda) = \lambda^\alpha$, i.e., in the $\alpha$-stable case) with the behavior of the Unit 240-1, which is seen to admit heavy-tailed interspike intervals (see [,]). As we also stated in the introduction, the problem of the power-law decay of the distribution of the spiking times has been studied more recently in []. From this comparison we notice that the Mittag-Leffler-like behavior of the interspike intervals of the Unit 240-1, together with the power-law decay, is reproduced by our model. However, this does not mean that our model is fully in accord with the data: a statistical and experimental study still needs to be carried out.
Finally, let us recall that this is actually a toy example. Despite being a simple (linear) one, it seems to work in accordance with already known phenomena (this is the case of Unit 240-1). However, here we propose this approach as an exemplary procedure to produce more complex delayed or heavy-tailed neuronal models, and thus we aim to study different time-changed models (possibly also non-linear ones, such as the stochastic Hodgkin-Huxley model in []).
Author Contributions
Conceptualization, G.A. and B.T.; Investigation, G.A. and B.T.; Writing—original draft, G.A. and B.T.
Funding
This research is partially supported by MIUR-PRIN 2017, project “Stochastic Models for Complex Systems”, no. 2017JFFHSH.
Acknowledgments
We would like to thank the referees whose remarks and suggestions have certainly improved the article.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Proof of Theorem 1
Proof.
Let us denote by and let us observe that G is a function. Let us first consider, for any couple,
where we used the bivariate integration by parts formula (see [] Lemma 2.2). Now let us observe that, since G is a function, we have
and then
Now let us define
and let us work with . Since the integrand is non-decreasing, by the monotone convergence theorem we have
Then, taking also the limit as , we have, again by the monotone convergence theorem,
First, let us observe that, since we are only using the monotone convergence theorem, we obtain the same result if we exchange the order of the limits. Now let us observe that
hence we have
In the same way we have
Concerning , we have, by using again the monotone convergence theorem,
Finally, let us also observe that
Hence, we can take the limit as in (A2) and, since , , and are all finite and , we have
Now let us denote
and observe that
to obtain
Now, since , we have
Let us set
Concerning we have
Concerning , let us use [] Equation 2.17 so that we can express for and . Set
to obtain
Now let us set
Let us evaluate . To do this, let us observe that
hence, by the hypothesis, we have
Analogously we have . Using these two equalities in (A7) we have (recalling that )
Appendix B. Proof of Theorem 2
Proof.
Let us denote . Observe that this time G is not a function. Let us fix a generic couple and then let us use bivariate integration by parts formula to obtain again (A1). Now let us define
Let us first work with . As done in the previous theorem, we can pass to the limit as to obtain
Now let us observe that
Thus, we have
In the same way we obtain
Hence, as before, by taking the limit as in (A1) and using the fact that , and are finite we obtain
Now let us set
and then let us set
Let us first work with . We have
Now, from the expression of we have that in is given by . Hence we have
Now let us work with . We have
with for hence we have
Now let us work with . We have, since for ,
As before, let us use [] Equation 2.17 to obtain
Denote now
and observe that, as before, . Concerning , we have
By using these two equalities in (A17) we have
References
- Abbott, L.F. Lapicque's introduction of the integrate-and-fire model neuron (1907). Brain Res. Bull. 1999, 50, 303–304.
- Burkitt, A.N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 2006, 95, 1–19.
- Burkitt, A.N. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol. Cybern. 2006, 95, 97–112.
- Ricciardi, L.M.; Sacerdote, L. The Ornstein-Uhlenbeck process as a model for neuronal activity. Biol. Cybern. 1979, 35, 1–9.
- Sacerdote, L.; Giraudo, M.T. Stochastic integrate and fire models: A review on mathematical methods and their applications. In Stochastic Biomathematical Models; Springer: Berlin, Germany, 2013; pp. 99–148.
- Shinomoto, S.; Sakai, Y.; Funahashi, S. The Ornstein-Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex. Neural Comput. 1999, 11, 935–951.
- Fox, R.F. Stochastic versions of the Hodgkin-Huxley equations. Biophys. J. 1997, 72, 2068–2074.
- Sakai, Y.; Funahashi, S.; Shinomoto, S. Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons. Neural Netw. 1999, 12, 1181–1190.
- Gerstein, G.L.; Mandelbrot, B. Random walk models for the spike activity of a single neuron. Biophys. J. 1964, 4, 41–68.
- Ascione, G.; Pirozzi, E.; Toaldo, B. On the exit time from open sets of some semi-Markov processes. arXiv 2017, arXiv:1709.06333.
- Rodieck, R.; Kiang, N.S.; Gerstein, G. Some quantitative methods for the study of spontaneous activity of single neurons. Biophys. J. 1962, 2, 351–368.
- Holden, A. A note on convolution and stable distributions in the nervous system. Biol. Cybern. 1975, 20, 171–173.
- Tsubo, Y.; Isomura, Y.; Fukai, T. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons. PLoS Comput. Biol. 2012, 8, e1002461.
- Meerschaert, M.M.; Toaldo, B. Relaxation patterns and semi-Markov dynamics. Stoch. Process. Appl. 2019, 129, 2850–2879.
- Orsingher, E.; Ricciuti, C.; Toaldo, B. Time-inhomogeneous jump processes and variable order operators. Potential Anal. 2016, 45, 435–461.
- Orsingher, E.; Ricciuti, C.; Toaldo, B. On semi-Markov processes and their Kolmogorov's integro-differential equations. J. Funct. Anal. 2018, 275, 830–868.
- Ricciuti, C.; Toaldo, B. Semi-Markov models and motion in heterogeneous media. J. Stat. Phys. 2017, 169, 340–361.
- Gajda, J.; Wyłomańska, A. Time-changed Ornstein-Uhlenbeck process. J. Phys. A Math. Theor. 2015, 48, 135004.
- Vadori, N.; Swishchuk, A. Inhomogeneous random evolutions: Limit theorems and financial applications. Mathematics 2019, 7, 447.
- Cahoy, D.O.; Polito, F.; Phoha, V. Transient behavior of fractional queues and related processes. Methodol. Comput. Appl. Probab. 2015, 17, 739–759.
- Ascione, G.; Leonenko, N.; Pirozzi, E. Fractional queues with catastrophes and their transient behaviour. Mathematics 2018, 6, 159.
- Lefèvre, C.; Simon, M. SIR epidemics with stages of infection. Adv. Appl. Probab. 2016, 48, 768–791.
- Ashton, S.; Scalas, E.; Georgiou, N.; Kiss, I.Z. The mathematics of human contact: Developing a model for social interaction in school children. Acta Phys. Pol. A 2018, 133, 18.
- Brezis, H. Functional Analysis, Sobolev Spaces and Partial Differential Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
- Buonocore, A.; Caputo, L.; Pirozzi, E.; Ricciardi, L.M. On a stochastic leaky integrate-and-fire neuronal model. Neural Comput. 2010, 22, 2558–2585.
- Ascione, G.; Pirozzi, E. On a stochastic neuronal model integrating correlated inputs. Math. Biosci. Eng. 2019, 16, 5206.
- Buonocore, A.; Caputo, L.; Pirozzi, E.; Ricciardi, L.M. The first passage time problem for Gauss-diffusion processes: Algorithmic approaches and applications to LIF neuronal model. Methodol. Comput. Appl. Probab. 2011, 13, 29–57.
- Buonocore, A.; Caputo, L.; Pirozzi, E.; Carfora, M.F. A leaky integrate-and-fire model with adaptation for the generation of a spike train. Math. Biosci. Eng. 2016, 13, 483–493.
- Carfora, M.F.; Pirozzi, E. Linked Gauss-diffusion processes for modeling a finite-size neuronal network. Biosystems 2017, 161, 15–23.
- Fourcaud, N.; Brunel, N. Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput. 2002, 14, 2057–2110.
- Pirozzi, E. Colored noise and a stochastic fractional model for correlated inputs and adaptation in neuronal firing. Biol. Cybern. 2018, 112, 25–39.
- Kobayashi, R.; Tsubo, Y.; Shinomoto, S. Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Front. Comput. Neurosci. 2009, 3, 9.
- Kobayashi, R.; Kitano, K. Impact of slow K+ currents on spike generation can be described by an adaptive threshold model. J. Comput. Neurosci. 2016, 40, 347–362.
- Huang, C.; Resnik, A.; Celikel, T.; Englitz, B. Adaptive spike threshold enables robust and temporally precise neuronal encoding. PLoS Comput. Biol. 2016, 12, e1004984.
- Bertoin, J. Lévy Processes; Cambridge University Press: Cambridge, UK, 1996; Volume 121.
- Bertoin, J. Subordinators: Examples and applications. In Lectures on Probability Theory and Statistics; Springer: Berlin, Germany, 1999; pp. 1–91.
- Cinlar, E. Markov Additive Processes and Semi-Regeneration; Technical Report; Northwestern University: Evanston, IL, USA, 1974.
- Kaspi, H.; Maisonneuve, B. Regenerative systems on the real line. Ann. Probab. 1988, 16, 1306–1332.
- Meerschaert, M.M.; Straka, P. Semi-Markov approach to continuous time random walk limit processes. Ann. Probab. 2014, 42, 1699–1723.
- Harlamov, B. Continuous Semi-Markov Processes; Wiley: Hoboken, NJ, USA, 2008.
- Li, C.; Qian, D.; Chen, Y. On Riemann-Liouville and Caputo derivatives. Discrete Dyn. Nat. Soc. 2011, 2011, 562494.
- Leonenko, N.N.; Meerschaert, M.M.; Sikorskii, A. Fractional Pearson diffusions. J. Math. Anal. Appl. 2013, 403, 532–546.
- Meerschaert, M.M.; Straka, P. Inverse stable subordinators. Math. Model. Nat. Phenom. 2013, 8, 1–16.
- Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; Tata McGraw-Hill Education: New York, NY, USA, 1955.
- Teka, W.W.; Upadhyay, R.K.; Mondal, A. Fractional-order leaky integrate-and-fire model with long-term memory and power law dynamics. Neural Netw. 2017, 93, 110–125.
- Leonenko, N.N.; Meerschaert, M.M.; Sikorskii, A. Correlation structure of fractional Pearson diffusions. Comput. Math. Appl. 2013, 66, 737–745.
- Patie, P.; Winter, C. First exit time probability for multidimensional diffusions: A PDE-based approach. J. Comput. Appl. Math. 2008, 222, 42–53.
- Gabbiani, F.; Cox, S.J. Mathematics for Neuroscientists; Academic Press: Cambridge, MA, USA, 2017.
- Gill, R.D.; van der Laan, M.J.; Wellner, J.A. Inefficient estimators of the bivariate survival function for three models. Ann. Inst. Henri Poincaré Probab. Stat. 1995, 31, 545–597.
- Mijena, J.B. Correlation structure of time-changed fractional Brownian motion. arXiv 2014, arXiv:1408.4502.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).