Article

Mittag–Leffler Fractional Stochastic Integrals and Processes with Applications

Dipartimento di Matematica e Fisica, Università della Campania “Luigi Vanvitelli”, 81100 Caserta, Italy
Mathematics 2024, 12(19), 3094; https://doi.org/10.3390/math12193094
Submission received: 31 August 2024 / Revised: 27 September 2024 / Accepted: 1 October 2024 / Published: 2 October 2024
(This article belongs to the Special Issue Stochastic Processes: Theory, Simulation and Applications)

Abstract: We study Mittag–Leffler (ML) fractional integrals involved in the solution processes of a system of coupled fractional stochastic differential equations. We introduce the ML fractional stochastic process as an ML fractional stochastic integral with respect to a standard Brownian motion. We provide representation formulas for the solution processes in terms of ML fractional integrals and processes. Computable expressions for the mean and covariance functions of such processes are given explicitly. An application to neuronal modeling is provided, in which all involved functions and processes are explicitly determined. Numerical evaluations are carried out, and some results are shown and discussed.

1. Introduction

Interest in fractional stochastic calculus is continuously growing, often motivated by the need to construct stochastic models that include memory effects. Indeed, the non-locality of fractional integrals and derivatives can be exploited to develop models with memory and/or a form of history dependence. In particular, fractional operators are mathematical tools that allow us to construct new processes, derived as solutions (i.e., fractional integrals) of stochastic and fractional differential equations. Recently, some models [1,2,3,4,5] have begun relying on fractional stochastic differential equations (fSDEs). Generalized fractional integrals (see, for instance, [6,7]) turn out to be extremely useful in this framework because more specialized fractional integrals (or, equivalently, fractional integrals with specialized kernels [8]) allow for the mathematical characterization of fractional solution processes, so that more refined and realistic stochastic models can be realized. This explains the great importance of Prabhakar fractional calculus [9,10] from both mathematical and practical points of view. The Mittag–Leffler fractional integrals we use are special cases of Prabhakar fractional integrals [11]. For integrals and derivatives of Mittag–Leffler functions, see [12] and the references therein.
Here, in particular, our purpose is to show that Mittag–Leffler fractional integrals (rigorously defined in Section 2), i.e., integrals like
$$\int_0^t (t-s)^{\nu-1}\, E_{\nu,\nu}\big((t-s)^{\nu}\lambda\big)\, f(s)\, ds$$
and Mittag–Leffler stochastic fractional processes (rigorously defined in Section 2), i.e., processes like
$$\int_0^t (t-s)^{\nu-1}\, E_{\nu,\nu}\big((t-s)^{\nu}\lambda\big)\, f(s)\, dW_s, \quad \text{with } W \text{ a Brownian motion},$$
are involved in the construction of solution processes X ( t ) and Y ( t ) of the following coupled equations:
$$D^{\alpha} X(t) = a X(t) + b(t) + \sigma(t)\,\frac{dW(t)}{dt}, \qquad X(0) = X_0, \tag{1}$$
$$D^{\beta} Y(t) = g Y(t) + h(t) + \varsigma X(t), \qquad Y(0) = Y_0, \tag{2}$$
with two fractional orders α, β ∈ (0, 1) for the involved Caputo derivatives [12,13], and with W(t) a standard Brownian motion. The coefficients a, g, ς are real numbers, whereas some regularity conditions will be assumed on the coefficient functions b(t), σ(t), h(t), as specified in Section 2 and Section 3.
We call the process X(t) the Mittag–Leffler (ML) fractional process. Note that the process Y(t) is obtained by applying a fractional integration procedure to X(t); for this reason, we also say that Y(t) is obtained by fractionally integrating the process X(t). We provide representation formulae for such processes by means of Mittag–Leffler fractional integrals, and we give expressions for their mean and covariance functions in terms of ML fractional integrals. Indeed, even though several numerical procedures are available for solving SDEs and can be specialized to the equations considered above (see, for instance, [14,15] and references therein), we are mainly interested in the characterization of the stochastic integrals of Equations (1) and (2).
Firstly, in order to clarify the connection with classical fractional integrals and derivatives, we recall the definitions of the fractional operators that we will use in this paper, i.e., the fractional Riemann–Liouville (RL) integral [12,16] of order α ∈ (0, 1):

$$I^{\alpha}(f)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\, ds, \qquad t \in \mathbb{R}^+, \tag{3}$$

with f ∈ L¹(0, t) and Γ the Euler Gamma function, i.e., $\Gamma(z) = \int_0^{+\infty} t^{z-1} e^{-t}\, dt$, z > 0, and the Caputo fractional derivative [12,17], defined for all f ∈ C¹ by

$$D^{\nu}(f)(t) = \frac{1}{\Gamma(1-\nu)} \int_0^t (t-s)^{-\nu} f'(s)\, ds, \tag{4}$$

with ν ∈ (0, 1].
Note that we use the notations I^α(f)(t), I^α(f(·))(t), and I^α(f(t)) interchangeably.
We aim to study particular cases involving the above fractional integrals: the case in which the RL fractional integral is taken with respect to Brownian motion, and the case in which f(t) is a stochastic process. In both cases, the above integrals have to be understood in a pathwise sense. We show that such particular processes arise as solutions of fractional differential equations.
With this purpose, we also remark that the Caputo fractional derivative D^α is the right inverse of the RL integral of order α (see [16]), i.e.,

$$I^{\alpha} D^{\alpha}(f) = f - f(0), \qquad f \in C^1,$$

due to the equality D^α(f) = I^{1−α}(f′) and the semigroup property of the RL integral with respect to the fractional order α. Indeed, we have (see [12] for details)

$$I^{\alpha} D^{\alpha}(f) = I^{\alpha} I^{1-\alpha}(f') = I^{1} f' = f - f(0),$$

where I^α for α = 1 is the classical Riemann integral.
In deriving the expression of the solution processes of Equations (1) and (2), we have to work with generalized fractional integrals [8]. More specifically, the solution processes will be obtained as special cases of Prabhakar fractional integrals, or generalized fractional Mittag–Leffler integrals (see [8,18,19]). Indeed, we recall that the Prabhakar integral was first defined in [20] and then in [8]. We essentially refer to the latter version, namely:

$$I^{\gamma,\lambda;\,PR}_{\alpha,\beta}(f)(t) = \int_0^t (t-s)^{\beta-1}\, E^{\gamma}_{\alpha,\beta}\big((t-s)^{\beta}/\lambda\big)\, f(s)\, ds, \qquad t \in \mathbb{R}^+. \tag{5}$$
Here, E^γ_{α,β}(z) is the Prabhakar function, also known as the generalized three-parameter Mittag–Leffler function, given by

$$E^{\gamma}_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{\Gamma(\gamma+k)\, z^{k}}{k!\, \Gamma(\gamma)\, \Gamma(\alpha k+\beta)} \tag{6}$$

for z ∈ ℂ. The parameters α, β, γ can be complex numbers, but we consider the real case with α, β ∈ (0, 1) and γ ∈ (0, 1]. Moreover, note that (6) for γ = 1 gives the two-parameter Mittag–Leffler function

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k+\beta)},$$

and finally the one-parameter Mittag–Leffler function is E_α(z) = E_{α,1}(z) (see [21] for details). The Prabhakar function and the corresponding integral (5), which generalize fractional integrals with a power kernel and their tempered versions [22], allowed the development of the so-called Prabhakar fractional calculus (see, for instance, [9,10]).
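For later numerical use, the series (6) and its two-parameter special case can be evaluated by direct truncation for moderate arguments. The following is a sketch of ours (function names and the truncation length are arbitrary choices, and the plain series is not suitable for large |z|):

```python
import math

def ml2(alpha, beta, z, terms=80):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z), truncated series."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def prabhakar(alpha, beta, gamma, z, terms=80):
    """Three-parameter (Prabhakar) function E^gamma_{alpha,beta}(z), Eq. (6)."""
    return sum(
        math.gamma(gamma + k) * z**k
        / (math.factorial(k) * math.gamma(gamma) * math.gamma(alpha * k + beta))
        for k in range(terms)
    )

# Sanity checks: E_{1,1}(z) = e^z, and gamma = 1 recovers E_{alpha,beta}
err_exp = abs(ml2(1.0, 1.0, 1.5) - math.exp(1.5))
err_red = abs(prabhakar(0.8, 0.9, 1.0, 0.7) - ml2(0.8, 0.9, 0.7))
```

Both checks reflect identities stated above: E_{1,1} is the exponential, and setting γ = 1 in (6) collapses the Prabhakar function to the two-parameter Mittag–Leffler function.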
The novelty of this contribution lies essentially in the characterization of the solution processes of Equations (1) and (2), which are special cases of Prabhakar integrals involving the two-parameter Mittag–Leffler function; we call them simply Mittag–Leffler fractional integrals. More specifically, we construct the process X(t) by means of a stochastic version, defined here as the Mittag–Leffler fractional integral with respect to Brownian motion, and we recognize it as a non-zero-mean generalized fractional Brownian motion. Then, the solution process Y(t) of (2) will be written by means of a Mittag–Leffler fractional integral of X(t). We provide expressions for the covariance functions of such processes in terms of Mittag–Leffler fractional integrals. These computable expressions are used for numerical evaluations and comparisons in an application example from neuronal modeling.
Indeed, the first motivation of our study of such processes arose from the need to construct fractional models for neuronal activity that are able to be affected by memory effects and to evolve on different time scales (see [4]). More specifically, the correlated process X(t), evolving on an α-time scale, plays the role of the input to the process Y(t) (the voltage of the neuronal membrane), which integrates X(t) on a β-time scale. For the differential dynamics of both processes, the fractional Caputo derivative is used in consideration of the non-locality of this operator, which is suitable for preserving the evolution history of the process itself. Moreover, the adoption of a correlated input [23,24,25] in place of the traditional white noise allows us to explain a sort of memory effect [26] by means of the long-range dependence property [27]. We show the usefulness of our theoretical results by applying them to some specific neuronal models.
SUMMARY: In this manuscript, our contribution is to study the processes X and Y as solutions of the assigned fractional SDEs (1) and (2), and to provide explicit expressions for their mean and covariance functions. In particular, in Section 2, some essential theoretical assumptions and known results are specified. Then, in Theorems 1 and 2, we provide representation formulae for the processes X and Y, respectively, in terms of ML fractional integrals. The connection with some kinds of fractional Brownian motion, among them tempered fractional Brownian motion [22] and other well-known processes, is highlighted. The long-range dependence property is preserved. Section 3 is devoted to the study of the process Y, which we call the fractionally integrated process of X. Beyond the new expression for its covariance (in Theorem 2), we also provide an alternative expression for the covariance of Y in Corollary 2. In Section 4, some neuronal models are considered as special cases of fSDEs and are placed in this mathematical setting in order to provide useful applications of these results. In Section 5, some numerical and graphical evaluations are shown and discussed.

2. Mittag–Leffler Fractional Integrals for Fractional SDEs

With the purpose of recalling the known results of [28,29] and enriching them, we first provide preliminary details about the existence and uniqueness of the solutions of Equations (1) and (2). Substantially, the method of the variation-of-constants formula for Caputo fractional differential equations, as described in [2], is applied for solving the SDEs (1) and (2) under the assumption that their coefficients satisfy Lipschitz-type conditions. As in [2], we recall some definitions and theoretical details.
We firstly focus on Equation (1).

2.1. Functional Spaces, Assumptions and Known Results

Consider a complete filtered probability space (Ω, F, 𝔽 = {F_t}_{t≥0}, P) and, for any t ∈ [0, ∞), the space L²(Ω, F_t, P) of the mean square integrable random variables f: Ω → ℝ with the mean square norm $\|f\|_{ms} = \sqrt{\mathbb{E}(f^2)}$. Moreover, a process Z: [0, ∞) → L²(Ω, F, P) is 𝔽-adapted if Z(t) ∈ L²(Ω, F_t, P), ∀ t ≥ 0.
In such a space, we say that Equation (1) is a Caputo-fractional SDE, i.e., it belongs to the following wider class of fractional SDEs:
$$D^{\alpha} Z(t) = [A Z(t) + B(t, Z(t))] + \sigma(t, Z(t))\, \frac{dW(t)}{dt}, \qquad Z(0) = Z_0, \tag{7}$$
which, even if written in a slightly different way, has to be understood as Equation (2) of [2], in dimension d = 1, with A ∈ ℝ, on a bounded interval [0, T] with T > 0. The process W(t) is a scalar standard Brownian motion with respect to the given filtration {F_t}_{t≥0}. Furthermore (cf. [2,4]), the existence of solutions can be proved if the fractional order is α ∈ (1/2, 1) and the coefficients A, B, σ are real-valued measurable functions on [0, T] satisfying the following assumptions:
(i)
There exists L > 0 such that, ∀ z₁, z₂ ∈ ℝ and t ∈ [0, T],

$$|B(t, z_1) - B(t, z_2)| + |\sigma(t, z_1) - \sigma(t, z_2)| \le L\, |z_1 - z_2|;$$
(ii)
$\int_0^T |B(s, 0)|^2\, ds < \infty$ and $\sup_{s \in [0,T]} |\sigma(s, 0)| < \infty$.
From [28,29], a classical solution of (7) on [0, T] is an 𝔽-adapted process Z with initial condition Z(0) = Z₀ ∈ L²(Ω, F₀, P) such that

$$Z(t) = Z_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \big[A Z(s) + B(s, Z(s))\big]\, ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sigma(s, Z(s))\, dW_s. \tag{8}$$
In [4], the above solution process was written as

$$Z(t) = Z_0 + I^{\alpha}\big(A Z(t) + B(t, Z(t))\big) + \mathcal{I}^{\alpha}\big(\sigma(t, Z(t))\, dW_t\big). \tag{9}$$

Indeed, in (9), I^α is the Riemann–Liouville fractional integral defined in (3), evaluated on a function involving the process Z(t), whereas 𝓘^α is the following fractional stochastic Itô integral:

$$\mathcal{I}^{\alpha}\big(\sigma(t, Z(t))\, dW_t\big) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \sigma(s, Z(s))\, dW_s. \tag{10}$$
More generally, we call Riemann–Liouville–Itô (RLI) fractional stochastic integral the following integral:

$$\mathcal{I}^{\alpha}\big(f(t, x(t))\, dW_t\big) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s, x(s))\, dW_s, \qquad t \in [0, T], \tag{11}$$

where f(s, x(s)) is a bounded measurable function on [0, T] × ℝ, α ∈ (0, 1], and Γ(·) is the Gamma function. Note that in (10) the above RLI integral is applied to the stochastic process Z(t) in place of the spatial random variable x appearing in (11).
We focus precisely on these processes, i.e., fractional stochastic integrals with respect to Brownian motion. Moreover, (10) can also be viewed as a weighted (by σ(t, X(t))) version of the Riemann–Liouville (RL) fractional Brownian motion (fBM) [30,31], which is specifically

$$\mathcal{I}^{\alpha}(dW_t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1}\, dW_\tau. \tag{12}$$

Indeed, (10) is just an RL fBM when σ(s, X(s)) ≡ σ is a constant function.
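A sample path of the RL fBM (12) can be generated by a left-point discretization of the stochastic integral on a uniform grid. The following is a sketch of ours (not code from the paper): the left-point rule matches the Itô interpretation, and the kernel is only evaluated at strictly positive lags, where it stays finite.

```python
import math
import random

def rl_fbm_path(alpha=0.75, T=1.0, n=400, seed=7):
    """Approximate path of the RL fBM (12) for alpha in (1/2, 1):
    X(t_j) ~ (1/Gamma(alpha)) * sum_{i<j} (t_j - t_i)^(alpha-1) * dW_i,
    where dW_i ~ N(0, dt) is the Brownian increment on [t_i, t_{i+1}]."""
    rng = random.Random(seed)
    dt = T / n
    dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    c = 1.0 / math.gamma(alpha)
    path = [0.0]
    for j in range(1, n + 1):
        conv = sum(((j - i) * dt) ** (alpha - 1) * dW[i] for i in range(j))
        path.append(c * conv)
    return path

path = rl_fbm_path()
```

The discrete convolution structure makes explicit why these processes carry memory: every past increment dW_i keeps contributing to X(t_j) through the slowly decaying power kernel.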
Coming back to the resolution of the fSDE (7): in [2,28], by applying a variation-of-constants method, it was proved that an explicit form of the solution process of the fractional SDE (7) with initial condition Z(0) = Z₀ ∈ L²(Ω, F₀, P) is, for t ∈ [0, T],

$$Z(t) = E_{\alpha}(t^{\alpha} A) Z_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) B(s, Z_s)\, ds + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) \sigma(s, Z_s)\, dW_s. \tag{13}$$
In such a theoretical setting, we give the following definitions and results, which are particularly useful for the characterization of the solution process of (1) and for a wide range of applications.

2.2. Mittag–Leffler Fractional Integrals and Processes

From (3) and (10), in the above functional space and under all above assumptions, we first define the two following fractional integral operators, which are special cases of Prabhakar fractional integrals; more specifically, we call them Mittag–Leffler fractional integrals.
Definition 1.
The deterministic fractional Mittag–Leffler (fML) integral operator is, for t [ 0 , T ] ,
$$I^{\alpha,A}_{ML}(f(\cdot))(t) = I^{\alpha,A}_{ML}(f(t)) = \Gamma(\alpha)\, I^{\alpha}\big(E_{\alpha,\alpha}((t-\cdot)^{\alpha} A)\, f(\cdot)\big)(t) = \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) f(s)\, ds, \tag{14}$$

where f(s) is a bounded measurable function on [0, T], A is a real number, α ∈ (0, 1], I^α is the Riemann–Liouville fractional integral as in (3), and Γ(·) is the Gamma function.
In case f ( t ) is a stochastic process, we also say that I M L α , A ( f ( t ) ) is a Mittag–Leffler fractional stochastic process.
Definition 2.
The stochastic fractional Mittag–Leffler (sfML) integral operator with respect to the Brownian motion W ( t ) is
$$J^{\alpha,A}_{ML}(f(\cdot), dW)(t) = J^{\alpha,A}_{ML}(f(t), dW) = \Gamma(\alpha)\, \mathcal{I}^{\alpha}\big(E_{\alpha,\alpha}((t-\cdot)^{\alpha} A)\, f(\cdot)\, dW_\cdot\big)(t) = \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) f(s)\, dW_s, \tag{15}$$

where f(s) is a bounded measurable function on [0, T], A is a real number, α ∈ (0, 1], 𝓘^α is the RLI fractional stochastic integral as in (11), and Γ(·) is the Gamma function.
We also say that J M L α , A ( f ( t ) , d W ) is a generalized Riemann–Liouville fractional Brownian motion or a Mittag–Leffler fractional stochastic process (or integral) with respect to the Brownian motion.
Lemma 1.
Under the above assumptions on the integrability of all involved functions, in particular σ(t) ∈ L²((0, T]), the stochastic process J^{α,A}_{ML}(σ(t), dW) for α ∈ (1/2, 1) is Gaussian with a.s. (α − 1/2)-Hölder continuous paths.
Proof. 
The proof can be derived from [32] (pp. 196–197) and from Lemma 1.17 of [12]. □
Theorem 1.
Consider the following fSDE:

$$D^{\alpha} X(t) = [A X(t) + B(t)] + \sigma(t)\, \frac{dW(t)}{dt}, \qquad X(0) = X_0, \tag{16}$$

with coefficients A ∈ ℝ, B ∈ L²([0, T], ℝ), σ ∈ L^∞([0, T], ℝ) (note that in (16) the coefficients are denoted by A and B, in accordance with the general setting of Section 2; in particular, Equation (16) is the same as Equation (1), as specified in Remark 1). Moreover, X₀ is a square integrable random variable independent of W(t) for any t ≥ 0, and α > 1/2. The solution process is a Gaussian process such that

$$X(t) = \mathbb{E}[X(t)] + J^{\alpha,A}_{ML}(\sigma(t), dW), \tag{17}$$

where the mean function is

$$\mathbb{E}[X(t)] = E_{\alpha}(t^{\alpha} A)\, \mathbb{E}[X_0] + I^{\alpha,A}_{ML}(B(t)). \tag{18}$$

Moreover, the covariance is, for any u, t ∈ [0, T],

$$\mathrm{Cov}[X(u), X(t)] = \mathbb{E}\big[J^{\alpha,A}_{ML}(\sigma(u), dW) \cdot J^{\alpha,A}_{ML}(\sigma(t), dW)\big], \tag{19}$$

which, for u, t ∈ [0, T], with w = min{u, t} and r = max{u, t}, can be obtained as

$$\mathrm{Cov}[X(u), X(t)] = I^{\alpha,A}_{ML}\big((r-\cdot)^{\alpha-1} E_{\alpha,\alpha}((r-\cdot)^{\alpha} A)\, \sigma^2(\cdot)\big)(w), \tag{20}$$

or equivalently, in the more explicit form, for u < t,

$$\mathrm{Cov}[X(u), X(t)] = \int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big)\, (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big)\, \sigma^2(s)\, ds. \tag{21}$$
Proof. 
We recall that, from (13) (given in [2]), as specialized to Equation (16), the solution process can be written as

$$X(t) = E_{\alpha}(t^{\alpha} A) X_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) B(s)\, ds + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) \sigma(s)\, dW_s.$$

Due to the zero mean of the last stochastic integral, it is immediate that the mean function of such a process is

$$\mathbb{E}[X(t)] = E_{\alpha}(t^{\alpha} A)\, \mathbb{E}[X_0] + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big) B(s)\, ds = E_{\alpha}(t^{\alpha} A)\, \mathbb{E}[X_0] + \Gamma(\alpha)\, I^{\alpha}\big(E_{\alpha,\alpha}((t-\cdot)^{\alpha} A)\, B(\cdot)\big), \tag{22}$$

from which the representation (18) holds after identifying the ML operator I^{α,A}_{ML}(B(t)), given in (14) and evaluated at t, with the last fractional integral term on the RHS of (22).

The process X(t) is Gaussian by Lemma 1.

Equation (19) follows from the validity of the representation (17); indeed, it is sufficient to note that

$$\mathrm{Cov}[X(u), X(t)] = \mathbb{E}\big[(X(u) - \mathbb{E}[X(u)])(X(t) - \mathbb{E}[X(t)])\big] = \mathbb{E}\big[J^{\alpha,A}_{ML}(\sigma(u), dW) \cdot J^{\alpha,A}_{ML}(\sigma(t), dW)\big].$$

Furthermore, in order to prove (20), we choose u < t. Indeed, from (19) and (15), we have

$$\begin{aligned} \mathrm{Cov}[X(u), X(t)] &= \mathbb{E}\big[J^{\alpha,A}_{ML}(\sigma(u), dW) \cdot J^{\alpha,A}_{ML}(\sigma(t), dW)\big] \\ &= \mathbb{E}\left[\int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big) \sigma(s)\, dW_s \int_0^t (t-v)^{\alpha-1} E_{\alpha,\alpha}\big((t-v)^{\alpha} A\big) \sigma(v)\, dW_v\right] \\ &= \mathbb{E}\left[\int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big) \sigma(s)\, dW_s \int_0^u (t-v)^{\alpha-1} E_{\alpha,\alpha}\big((t-v)^{\alpha} A\big) \sigma(v)\, dW_v\right] \\ &\quad + \mathbb{E}\left[\int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big) \sigma(s)\, dW_s \int_u^t (t-v)^{\alpha-1} E_{\alpha,\alpha}\big((t-v)^{\alpha} A\big) \sigma(v)\, dW_v\right], \end{aligned}$$

where the last expectation is zero due to the independence of Brownian increments over non-overlapping intervals. Then, by setting K(z, w) = (z − w)^{α−1} E_{α,α}((z − w)^α A) σ(w), we can write

$$\begin{aligned} \mathrm{Cov}[X(u), X(t)] &= \mathbb{E}\left[\int_0^u K(u, s)\, dW_s \int_0^u K(t, v)\, dW_v\right] = \int_0^u K(u, s)\, K(t, s)\, ds \\ &= \int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big)\, (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big)\, \sigma^2(s)\, ds, \end{aligned}$$

where we used the Itô isometry. Finally, from (14), we recognize that

$$\int_0^u (u-s)^{\alpha-1} E_{\alpha,\alpha}\big((u-s)^{\alpha} A\big)\, (t-s)^{\alpha-1} E_{\alpha,\alpha}\big((t-s)^{\alpha} A\big)\, \sigma^2(s)\, ds = I^{\alpha,A}_{ML}\big((t-\cdot)^{\alpha-1} E_{\alpha,\alpha}((t-\cdot)^{\alpha} A)\, \sigma^2(\cdot)\big)(u), \tag{24}$$

with u = min{u, t}. □
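As a numerical cross-check of the explicit covariance formula at the end of Theorem 1 (a sketch of ours, not code from the paper; all names are ours): for α = 1 the kernel is smooth and E_{1,1}(x) = e^x, so for constant σ the integral has the Ornstein–Uhlenbeck closed form σ² e^{(u+t)A}(1 − e^{−2uA})/(2A) anticipated in Remark 2, which a midpoint quadrature should reproduce.

```python
import math

def ml2(a, b, z, terms=60):
    """Truncated series for E_{a,b}(z) (adequate for moderate |z|)."""
    return sum(z**k / math.gamma(a * k + b) for k in range(terms))

def cov_X(u, t, alpha, A, sigma, n=3000):
    """Midpoint quadrature of Cov[X(u), X(t)] from Theorem 1, u <= t, constant sigma."""
    h = u / n
    acc = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        acc += ((u - s) ** (alpha - 1) * ml2(alpha, alpha, (u - s) ** alpha * A)
                * (t - s) ** (alpha - 1) * ml2(alpha, alpha, (t - s) ** alpha * A))
    return sigma**2 * acc * h

# alpha = 1: compare with the Ornstein-Uhlenbeck covariance
u, t, A, sigma = 0.5, 0.8, -1.2, 0.4
numeric = cov_X(u, t, 1.0, A, sigma)
closed = sigma**2 * math.exp((u + t) * A) * (1.0 - math.exp(-2.0 * u * A)) / (2.0 * A)
```

For α < 1 the kernel (u − s)^{α−1} is weakly singular at s = u, so a production implementation would use a singularity-removing substitution or graded mesh rather than this plain midpoint rule.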
Corollary 1.
The Mittag–Leffler fractional stochastic process X(t) in (17), with A < 0, exhibits the long-range dependence property.
Proof. 
At first, we recall that, from the covariance function [30] of the Riemann–Liouville fractional Brownian motion (12), it is possible to prove the well-known long-range dependence property of such a process; see, for instance, Equation (6) of [30] and the discussion there about the long-range dependence of the Lévy fBM, equivalent to the above-introduced RL fBM.

Now, the ML fractional process X(t) has covariance E[J^{α,A}_{ML}(σ(u), dW) · J^{α,A}_{ML}(σ(t), dW)] from Theorem 1, whereas the covariance of the RL fBM can be obtained as E[J^{α,0}_{ML}(σ, dW) · J^{α,0}_{ML}(σ, dW)]. Hence, we can write

$$\mathrm{Cov}(X(u), X(t)) \le C_t \cdot \mathbb{E}\big[J^{\alpha,0}_{ML}(\sigma, dW) \cdot J^{\alpha,0}_{ML}(\sigma, dW)\big],$$

where C_t = 2 max_{s∈(0,t)} {σ²(s) E_{α,α}((t − s)^α A)}, which allows the process X(t) to preserve the long-range dependence property. Indeed, we are under the assumption that σ(t) is bounded, measurable and at least an L² function; moreover, from [9], we know that C_t includes a bound for the Mittag–Leffler function involving t^{2α}. □
Remark 1.
We remark that all the above results specifically apply to the solution process X of Equation (1) by considering Equation (16) with A = a and B(t) ≡ b(t).
Remark 2.
Note that, in the case α = 1, one recovers previous well-known theoretical results by taking into account that fractional derivatives and integrals reduce to the corresponding classical ones. Consequently, it is easy to prove that the above results become exactly the well-known ones of ordinary and stochastic differential calculus. Moreover, the process X(t) for α = 1 reduces to a non-homogeneous Ornstein–Uhlenbeck process; indeed, note that (24) becomes

$$\mathrm{Cov}[X(u), X(t)] = I^{1,A}_{ML}\big(E_{1,1}((t-\cdot) A)\, \sigma^2(\cdot)\big)(u) = \int_0^u E_{1,1}\big((u-s) A\big)\, E_{1,1}\big((t-s) A\big)\, \sigma^2(s)\, ds = \int_0^u e^{(u-s)A}\, e^{(t-s)A}\, \sigma^2(s)\, ds = e^{(u+t)A} \int_0^u e^{-2sA}\, \sigma^2(s)\, ds. \tag{26}$$
This is just the covariance of the Ornstein–Uhlenbeck process solving the following classical SDE, obtained from (16) for α = 1:

$$dX(t) = [A X(t) + B(t)]\, dt + \sigma(t)\, dW(t), \qquad X(0) = X_0. \tag{27}$$

3. Fractionally Integrated Mittag–Leffler Fractional Processes

In this section, we aim to address the characterization of the solution process of Equation (2). Hence, we focus on the following non-homogeneous linear fractional differential equation (fDE) on a bounded interval [0, T]:

$$D^{\beta} z(t) = G z(t) + H(t), \qquad z(0) = z_0, \tag{28}$$

where z(t) is a function of C¹([0, T]) with T > 0, D^β is the Caputo derivative as in (4) with β ∈ (0, 1), G ∈ ℝ, and H(t) is a measurable and bounded real-valued function on [0, T].
Here, in the same functional space as in Section 2, we consider the case where H(t) in (28) is a stochastic process with sample paths that are almost surely (a.s.) Hölder continuous. In this way, the fDE (28) becomes a stochastic fDE, considered in a pathwise sense [2,4].
The explicit form of the solution of the fDE (28) on [0, T], as proved in [3], is

$$z(t) = E_{\beta}(t^{\beta} G)\, z_0 + \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} G\big) H(s)\, ds. \tag{29}$$
Equation (2) is a stochastic version of the fractional differential Equation (28); indeed, by setting G = g and H(t) = h(t) + ςX(t) in (28), we obtain Equation (2). In this setting, the coefficient function H(t) is stochastic, and it includes X(t), which has to be a process in a complete filtered probability space (Ω, F, {F_t}_{t∈[0,∞)}, P) with a.s. Hölder continuous paths, so that the corresponding stochastic solution process of (2) can be written as follows:

$$Y(t) = E_{\beta}(t^{\beta} g)\, y_0 + \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big) h(s)\, ds + \varsigma \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big) X(s)\, ds. \tag{30}$$

Finally, by using the definition of the ML fractional integral (14), we can rewrite the solution process Y(t) given in (30) as an ML fractionally integrated process, i.e.,

$$Y(t) = E_{\beta}(t^{\beta} g)\, y_0 + I^{\beta,g}_{ML}\big(h(t) + \varsigma X(t)\big). \tag{31}$$
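The operator I_ML^{β,g} appearing in (31) is straightforward to evaluate numerically; for a constant integrand, term-by-term integration of the Mittag–Leffler series gives the closed form I_ML^{β,g}(c)(t) = c t^β E_{β,β+1}(g t^β), a convenient correctness check. The following is a sketch of ours (function names and the quadrature choice are ours; the substitution u = (t − s)^β removes the kernel singularity):

```python
import math

def ml2(a, b, z, terms=60):
    """Truncated series for E_{a,b}(z) (moderate |z|)."""
    return sum(z**k / math.gamma(a * k + b) for k in range(terms))

def I_ML(f, t, beta, g, n=1500):
    """ML fractional integral (14): after u = (t - s)^beta, the
    (t - s)^(beta-1) kernel disappears and a midpoint rule applies."""
    h = t**beta / n
    acc = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        acc += ml2(beta, beta, u * g) * f(t - u ** (1.0 / beta))
    return acc * h / beta

# Constant integrand: I_ML^{beta,g}(c)(t) = c * t^beta * E_{beta,beta+1}(g t^beta)
beta, g, t, c = 0.7, -0.9, 1.2, 1.0
approx = I_ML(lambda s: c, t, beta, g)
closed = c * t**beta * ml2(beta, beta + 1.0, g * t**beta)
```

The same routine applied to h(t) + ςX(t), with X sampled pathwise, yields a pathwise approximation of Y(t) in (31).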
For the specific case of the system of Equations (1) and (2), taking into account that X(t) is the process given in (17), we specify the solution process Y(t) by means of the ML fractional integral operators. Moreover, we specify its mean and covariance functions in the following theorem.
Theorem 2.
In connection with Equation (1), under the assumptions of Section 2, Lemma 1 and Theorem 1, the solution process of Equation (2) is the following Gaussian process:

$$Y(t) = \mathbb{E}[Y(t)] + \varsigma\, I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(t), dW)\big), \tag{32}$$

with mean

$$\mathbb{E}[Y(t)] = E_{\beta}(t^{\beta} g)\, y_0 + I^{\beta,g}_{ML}(h(t)) + \varsigma\, I^{\beta,g}_{ML}\big(\mathbb{E}[X(t)]\big) \tag{33}$$

and covariance such that

$$\mathrm{Cov}[Y(u), Y(t)] = \varsigma^2\, \mathbb{E}\big[I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(u), dW)\big)\, I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(t), dW)\big)\big], \tag{34}$$

where I^{β,g}_{ML} and J^{α,a}_{ML} are defined in (14) and (15), respectively.

Moreover, the covariance for u < t is

$$\mathrm{Cov}[Y(u), Y(t)] = \varsigma^2 \int_0^u \left[\int_v^u (u-s)^{\beta-1} E_{\beta,\beta}\big((u-s)^{\beta} g\big)\, (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, ds\right] \left[\int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, ds\right] \sigma^2(v)\, dv. \tag{35}$$
Proof. 
By using the definition of the ML fractional integral (14), we can rewrite the solution process Y(t) given in (30) as an ML fractionally integrated process, so that we have

$$Y(t) = E_{\beta}(t^{\beta} g)\, y_0 + I^{\beta,g}_{ML}(h(t)) + \varsigma\, I^{\beta,g}_{ML}(X(t)). \tag{36}$$

The representation Formula (32) is derived by substituting the representation Formula (17) of X(t) into (36) and by considering that the expectation of ς I^{β,g}_{ML}(J^{α,a}_{ML}(σ(t), dW)) is zero. Hence, (33) also holds.

Furthermore, the covariance of Y(t) can be written as in (34) due to

$$\mathrm{Cov}[Y(u), Y(t)] = \mathbb{E}\big[(Y(u) - \mathbb{E}[Y(u)])(Y(t) - \mathbb{E}[Y(t)])\big]$$

and by taking into account (32).

Furthermore, in order to prove the validity of the expression (35) for the covariance of Y(t), and taking into account (34), we calculate the following expectation:

$$\mathbb{E}\big[I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(u), dW)\big)\, I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(t), dW)\big)\big] = \mathbb{E}\left[\int_0^u (u-s)^{\beta-1} E_{\beta,\beta}\big((u-s)^{\beta} g\big) \int_0^s (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, \sigma(v)\, dW_v\, ds \int_0^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big) \int_0^s (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, \sigma(v)\, dW_v\, ds\right].$$

By applying Fubini's theorem, we can also write

$$\mathbb{E}\big[I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(u), dW)\big)\, I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(t), dW)\big)\big] = \mathbb{E}\left[\int_0^u \sigma(v)\, dW_v \int_v^u (u-s)^{\beta-1} E_{\beta,\beta}\big((u-s)^{\beta} g\big)\, (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, ds \int_0^t \sigma(v)\, dW_v \int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, (s-v)^{\alpha-1} E_{\alpha,\alpha}\big((s-v)^{\alpha} a\big)\, ds\right].$$

By setting

$$K(z, r) = \int_r^z (z-s)^{\beta-1} E_{\beta,\beta}\big((z-s)^{\beta} g\big)\, (s-r)^{\alpha-1} E_{\alpha,\alpha}\big((s-r)^{\alpha} a\big)\, ds,$$

we proceed as follows:

$$\begin{aligned} \mathbb{E}\big[I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(u), dW)\big)\, I^{\beta,g}_{ML}\big(J^{\alpha,a}_{ML}(\sigma(t), dW)\big)\big] &= \mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_0^t \sigma(v)\, K(t, v)\, dW_v\right] \\ &= \mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_0^u \sigma(v)\, K(t, v)\, dW_v\right] + \mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_u^t \sigma(v)\, K(t, v)\, dW_v\right] \\ &= \mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_0^u \sigma(v)\, K(t, v)\, dW_v\right], \end{aligned}$$

by the independence of stochastic integrals over non-overlapping intervals, from which

$$\mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_u^t \sigma(v)\, K(t, v)\, dW_v\right] = 0.$$

Finally, we obtain

$$\mathrm{Cov}[Y(u), Y(t)] = \varsigma^2\, \mathbb{E}\left[\int_0^u \sigma(v)\, K(u, v)\, dW_v \int_0^u \sigma(v)\, K(t, v)\, dW_v\right] = \varsigma^2 \int_0^u K(u, v)\, K(t, v)\, \sigma^2(v)\, dv,$$

which is exactly (35), due to the application of the Itô isometry. This completes the proof. □
Corollary 2.
Under the previous assumptions on the functional space of the considered processes, i.e., that the processes X and Y are in L²(Ω, F, {F_t}_{t≥0}, P), and assuming that the covariance of the process X(t) is an integrable function, the covariance of the process Y(t) can alternatively be obtained as

$$\mathrm{Cov}(Y(u), Y(t)) = \varsigma^2\, I^{\beta,g}_{ML}\big(I^{\beta,g}_{ML}\big(\mathrm{Cov}(X(u), X(t))\big)\big), \tag{41}$$

which, in explicit form, is

$$\mathrm{Cov}(Y(u), Y(t)) = \varsigma^2 \int_0^u (u-v)^{\beta-1} E_{\beta,\beta}\big((u-v)^{\beta} g\big)\, dv \int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, ds \int_0^v (v-z)^{\alpha-1} E_{\alpha,\alpha}\big((v-z)^{\alpha} a\big)\, (s-z)^{\alpha-1} E_{\alpha,\alpha}\big((s-z)^{\alpha} a\big)\, \sigma^2(z)\, dz. \tag{42}$$
Proof. 
From (31) and by taking into account (33), the covariance of Y(t) can be evaluated as follows:

$$\mathrm{Cov}(Y(u), Y(t)) = \varsigma^2\, \mathbb{E}\big[I^{\beta,g}_{ML}(X(u))\, I^{\beta,g}_{ML}(X(t))\big], \tag{43}$$

where the right-hand side is equal to

$$\varsigma^2\, \mathbb{E}\big[I^{\beta,g}_{ML}(X(u))\, I^{\beta,g}_{ML}(X(t))\big] = \varsigma^2\, I^{\beta,g}_{ML}\big(I^{\beta,g}_{ML}\big(\mathrm{Cov}(X(u), X(t))\big)\big). \tag{44}$$

By applying the two nested ML fractional integrals I^{β,g}_{ML} to the explicit expression (24) of the covariance of X(t), we obtain

$$I^{\beta,g}_{ML}\big(I^{\beta,g}_{ML}\big(\mathrm{Cov}(X(u), X(t))\big)\big) = \int_0^u (u-v)^{\beta-1} E_{\beta,\beta}\big((u-v)^{\beta} g\big) \int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, \mathrm{Cov}(X(v), X(s))\, ds\, dv. \tag{45}$$

Finally, Equation (42) holds by substituting (24) into (45). □
Remark 3.
The two expressions (35) and (42) are different forms of Cov(Y(u), Y(t)), obtained by means of different strategies based on different, but equivalent, representation formulas of the process Y(t), i.e., on Equations (32) and (31), respectively. Obviously, they take the same values.

We verified the agreement of the numerical evaluations of (35) and (42), obtained by an R code devised and implemented ad hoc. The MittagLeffleR package was used for the numerical evaluation of the involved Mittag–Leffler functions. See the comparative plots in Figure 1: the behavior is identical, up to a slight difference mainly due to the (here, non-refined) numerical approximation scripts.

As an advantage of form (35), these numerical implementations made evident that a larger amount of computational time was spent on the evaluation of (42) than on that of (35), sometimes more than double.
Remark 4.
In the special case α = 1, and referring to (1), X(t) is an Ornstein–Uhlenbeck process with covariance as in (26), i.e.,

$$\mathrm{Cov}[X(u), X(t)] = e^{(u+t)a} \int_0^u e^{-2sa}\, \sigma^2(s)\, ds. \tag{46}$$
Hence, from (41) and by substituting (46) into (42), we have

$$\mathrm{Cov}(Y(u), Y(t)) = \varsigma^2 \int_0^u (u-v)^{\beta-1} E_{\beta,\beta}\big((u-v)^{\beta} g\big) \int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, e^{(v+s)a} \int_0^v e^{-2za}\, \sigma^2(z)\, dz\, ds\, dv. \tag{47}$$
Then, for σ(s) ≡ σ in (47), we obtain in particular

$$\mathrm{Cov}(Y(u), Y(t)) = \frac{\varsigma^2 \sigma^2}{2a} \int_0^u (u-v)^{\beta-1} E_{\beta,\beta}\big((u-v)^{\beta} g\big) \int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, \big[e^{(s+v)a} - e^{(s-v)a}\big]\, ds\, dv. \tag{48}$$
We remark that the corresponding expression obtained by means of (35) for α = 1 and σ(t) ≡ σ is specifically

$$\mathrm{Cov}(Y(u), Y(t)) = \varsigma^2 \sigma^2 \int_0^u \left[\int_v^u (u-s)^{\beta-1} E_{\beta,\beta}\big((u-s)^{\beta} g\big)\, e^{(s-v)a}\, ds\right] \left[\int_v^t (t-s)^{\beta-1} E_{\beta,\beta}\big((t-s)^{\beta} g\big)\, e^{(s-v)a}\, ds\right] dv. \tag{49}$$
In Figure 1, we compare Formula (48) with Formula (49) in order to show the satisfactory agreement.
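The passage from (47) to (48) for constant σ rests on the elementary inner-integral identity e^{(v+s)a} ∫_0^v e^{−2za} dz = (e^{(s+v)a} − e^{(s−v)a})/(2a). A quick numerical check of this step (ours, independent of the paper's R code):

```python
import math

def inner_closed(v, s, a):
    """Closed form of e^{(v+s)a} * Int_0^v e^{-2 z a} dz, the step from (47) to (48)."""
    return (math.exp((s + v) * a) - math.exp((s - v) * a)) / (2.0 * a)

def inner_quadrature(v, s, a, n=20000):
    """Midpoint quadrature of the same quantity, for comparison."""
    h = v / n
    acc = sum(math.exp(-2.0 * ((k + 0.5) * h) * a) for k in range(n))
    return math.exp((v + s) * a) * acc * h

v, s, a = 0.6, 0.9, -1.1
gap = abs(inner_closed(v, s, a) - inner_quadrature(v, s, a))
```

Since the exponential factor in brackets in (48) is all that survives of the inner z-integral, this is the only analytic step separating the two displayed forms.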

4. Applications in Neuronal Modeling

Under all the above assumptions, we consider neuronal models based precisely on the coupled Equations (1) and (2). Substantially, we identify the coefficients of the considered fSDEs with specific features of the neuronal model in such a way that the utility of these results is meaningful and reliable. The values and the forms of the coefficients of the fractional SDEs considered here satisfy all the above theoretical assumptions, so that solutions exist; furthermore, we exploit their explicit expressions provided here. Moreover, we add that mathematical results on a compact set [0, T] are adequate for this kind of model because the neuronal dynamics are considered up to a finite time instant T, corresponding to the firing time of the neuron.
We mainly refer to [4], in which three different fractional stochastic neuronal models were studied. We refer the reader there for an introduction to such models, their experimental motivations, and a discussion of their modeling advantages.
The prototype of such kinds of models is
$$\text{(Input process)}\qquad D^{\alpha}\eta(t) = -\frac{\eta(t)-I}{\tau} + \frac{\sigma}{\tau}\,\frac{dW(t)}{dt}, \qquad \eta(0)=\eta_0$$
$$\text{(Voltage process)}\qquad D^{\beta}V(t) = -\frac{g_L}{C_m}\left[V(t)-V_L\right] + \frac{\eta(t)}{C_m}, \qquad V(0)=V_0$$
with η(t) being the correlated input process, including a constant electric current I (a synaptic current originating from the surrounding neurons or superimposed from outside), and V(t) being the voltage of the neuronal membrane. Here, the SDE's coefficients have specific meanings. Indeed, C_m stands for the membrane capacitance, g_L for the leak conductance, V_L for a "resting" value of the membrane potential, dW(t) is the white noise (with W(t) a standard Brownian motion), and σ tunes the intensity (or amplitude) of the noise. Furthermore, τ is the characteristic time of the input process η(t), i.e., it is related to the mean time of the decay towards the equilibrium level; it is also related to the so-called correlation time of η(t). This model can be viewed as the fractional version of the neuronal model with leakage (see, for an introduction to neuronal models and firing activity, [33,34,35]; for different kinds of stimuli, [36]; for non-time-homogeneous neuronal models, [37]; for fractional neuronal models, [5,38,39,40]; and the references therein).
Note that the introduction of the fractional derivatives and of the two different fractional orders allows us to preserve memory of the time evolution of each process on different time scales. Indeed, the time scale is regulated by the values of the fractional orders α and β, a useful property that adequately models the two different dynamics of the input and of the voltage.
The identification of the above neuronal model (50) with Equations (1) and (2) is easily obtained by setting
in Equation (1): $a \equiv 1/\tau$, $b(t) \equiv I/\tau$, $\sigma(t) \equiv \sigma/\tau$, $X(t) = \eta(t)$;
in Equation (2): $g = g_L/C_m$, $h(t) \equiv g_L V_L/C_m$, $\varsigma = 1/C_m$, $Y(t) = V(t)$.

4.1. The Mean and Covariance of ML Fractional Input Process η ( t )

By applying the above substitutions of the coefficients, we can specify, from (17), that
$$\eta(t) = \mathbb{E}[\eta(t)] + J_{ML}^{\alpha,A}\big(\sigma(t),dW\big)$$
and the mean function from (18) is
$$\mathbb{E}[\eta(t)] = E_\alpha\!\left(-t^{\alpha}/\tau\right)\eta_0 + \left(I_{ML}^{\alpha,(-1/\tau)}\,\frac{I}{\tau}\right)\!(t)$$
and, from Equation (20), for u < t , the covariance is
$$\mathrm{Cov}[\eta(u),\eta(t)] = \frac{\sigma^2}{\tau^2}\, I_{ML}^{\alpha,(-1/\tau)}\!\left((t-\cdot)^{\alpha-1}E_{\alpha,\alpha}\!\left(-(t-\cdot)^{\alpha}/\tau\right)\right)\!(u)$$
which is specifically
$$\mathrm{Cov}[\eta(u),\eta(t)] = \frac{\sigma^2}{\tau^2}\int_0^u (u-s)^{\alpha-1}E_{\alpha,\alpha}\!\left(-(u-s)^{\alpha}/\tau\right)(t-s)^{\alpha-1}E_{\alpha,\alpha}\!\left(-(t-s)^{\alpha}/\tau\right)ds.$$
Note that, in this model, from the representation Formula (17), adapted here to η(t) as (51), the input process is a generalized RL fBM with a non-zero mean and the long-range dependence property. In particular, it is a Gaussian process whose mean is asymptotically driven by the additional current term I.
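For completeness, the covariance (53) can be evaluated by elementary quadrature. The following is a sketch under a midpoint rule (illustrative names; the truncated-series Mittag–Leffler evaluation is adequate here because the arguments are moderate negative numbers):

```python
import numpy as np
from math import gamma

def ml(z, a, b, n_terms=80):
    """Truncated power series for the two-parameter Mittag-Leffler function."""
    return sum(z**k / gamma(a * k + b) for k in range(n_terms))

def cov_eta(u, t, alpha, tau=5.0, sigma=1.0, n=400):
    """Midpoint-rule evaluation of the covariance (53) of the
    input process eta, for u < t."""
    s = (np.arange(n) + 0.5) * u / n          # midpoints of (0, u)
    ds = u / n
    k_u = (u - s) ** (alpha - 1) * np.array(
        [ml(-(u - x) ** alpha / tau, alpha, alpha) for x in s])
    k_t = (t - s) ** (alpha - 1) * np.array(
        [ml(-(t - x) ** alpha / tau, alpha, alpha) for x in s])
    return sigma**2 / tau**2 * np.sum(k_u * k_t) * ds
```

For α = 1, the kernels reduce to exponentials and (53) becomes the Ornstein–Uhlenbeck covariance σ²/(2τ) e^{−(u+t)/τ}(e^{2u/τ} − 1), which provides a direct consistency check of the quadrature.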

4.2. The Mean and Covariance of Fractionally Integrated ML Process V ( t )

Regarding the voltage process V(t), we can specify, from (32), that
$$V(t) = \mathbb{E}[V(t)] + \varsigma\, I_{ML}^{\beta,(-g)}\!\left(J_{ML}^{\alpha,(-a)}(\sigma(t),dW)\right).$$
Alternatively, from (31), it is
$$V(t) = E_\beta\!\left(-t^{\beta}\frac{g_L}{C_m}\right)V_0 + I_{ML}^{\beta,(-g)}\!\left(\frac{g_L V_L}{C_m}\right)\!(t) + \frac{1}{C_m}\,I_{ML}^{\beta,(-g)}(\eta)(t), \qquad \text{where } g = \frac{g_L}{C_m},$$
with mean
$$\mathbb{E}[V(t)] = E_\beta\!\left(-t^{\beta}\frac{g_L}{C_m}\right)V_0 + I_{ML}^{\beta,(-g)}\!\left(\frac{g_L V_L}{C_m}\right)\!(t) + \frac{1}{C_m}\,I_{ML}^{\beta,(-g)}\big(\mathbb{E}[\eta]\big)(t)$$
and covariance such that
$$\mathrm{Cov}[V(u),V(t)] = \frac{1}{C_m^2}\,\mathbb{E}\!\left[I_{ML}^{\beta,(-g)}\!\left(J_{ML}^{\alpha,(-1/\tau)}(\sigma(u),dW)\right)\, I_{ML}^{\beta,(-g)}\!\left(J_{ML}^{\alpha,(-1/\tau)}(\sigma(t),dW)\right)\right]$$
or, in the more explicit form,
$$\mathrm{Cov}[V(u),V(t)] = \frac{\sigma^2}{C_m^2\tau^2}\int_0^u\left[\int_v^u (u-s)^{\beta-1}E_{\beta,\beta}\!\left(-(u-s)^{\beta}\frac{g_L}{C_m}\right)(s-v)^{\alpha-1}E_{\alpha,\alpha}\!\left(-(s-v)^{\alpha}/\tau\right)ds\right]\left[\int_v^t (t-s)^{\beta-1}E_{\beta,\beta}\!\left(-(t-s)^{\beta}\frac{g_L}{C_m}\right)(s-v)^{\alpha-1}E_{\alpha,\alpha}\!\left(-(s-v)^{\alpha}/\tau\right)ds\right]dv.$$
Compare Formulas (53) and (58) with those in [4], which were given in an implicit form involving RL fractional integrals. The main result is Equation (58): it is a formula that can be computed by numerical and symbolic algorithms. It is innovative from a theoretical point of view and especially for its applicability in the neuronal modeling context.
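Equation (58) is likewise computable by nested quadrature. The following sketch (midpoint rules, truncated-series Mittag–Leffler evaluation, illustrative names) mirrors its structure; it is not the implementation used for the figures:

```python
import numpy as np
from math import gamma

def ml(z, a, b, n_terms=80):
    """Truncated power series for the two-parameter Mittag-Leffler function."""
    return sum(z**k / gamma(a * k + b) for k in range(n_terms))

def cov_V(u, t, alpha, beta, tau=5.0, sigma=1.0, Cm=1.0, gL=0.1, n=40):
    """Midpoint-rule evaluation of the voltage covariance (58), for u < t."""
    g = gL / Cm

    def inner(T, v):
        # int_v^T (T-s)^{beta-1} E_{beta,beta}(-(T-s)^beta g)
        #         (s-v)^{alpha-1} E_{alpha,alpha}(-(s-v)^alpha/tau) ds
        s = v + (np.arange(n) + 0.5) * (T - v) / n
        ds = (T - v) / n
        kb = (T - s) ** (beta - 1) * np.array(
            [ml(-(T - x) ** beta * g, beta, beta) for x in s])
        ka = (s - v) ** (alpha - 1) * np.array(
            [ml(-(x - v) ** alpha / tau, alpha, alpha) for x in s])
        return np.sum(kb * ka) * ds

    v = (np.arange(n) + 0.5) * u / n          # midpoints of the outer (0, u)
    dv = u / n
    outer = sum(inner(u, vi) * inner(t, vi) for vi in v)
    return sigma**2 / (Cm**2 * tau**2) * outer * dv
```

For α = β = 1 the inner integrals have the closed form (e^{−a(T−v)} − e^{−g(T−v)})/(g − a) with a = 1/τ, which can be used to validate the quadrature.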
We also remark that other settings of the parameters specify further neuronal models. For instance, if g_L = 0, the model (50) is an integrate-and-fire model [33] without leakage. Correspondingly, the process V(t) is exactly the ML fractional integral of the input η(t), with a mean that is the ML fractional integral of the mean of η [4].

5. Some Numerical Results

With reference to the neuronal model considered in the previous section, we provide some plots obtained from numerical evaluations of the given results. We adopt specified values of the involved parameters. In particular, referring to [4] and the references therein, we assume, if not differently specified, the following parameter values: the membrane capacitance C_m = 1 μF; the (positive or negative) resting membrane potential V_L = ±0.1 mV; the initial membrane potential V_0 = 0 mV; the leak conductance g_L = 0.1 mS; the characteristic time of the membrane C_m/g_L = 10 ms; the initial input value η_0 = 0 nA; the input characteristic time τ = 5 ms; the (positive or negative) constant current I = ±0.01 nA; the noise intensity parameter ς² = 1 nA²·ms.

5.1. About the Mean Functions

The investigation is carried out by providing numerical evaluations of the mean of the input process η(t) for varying α in Figure 2, of the mean of the voltage process V(t) for varying β in Figure 3, and of the mean of the voltage process V(t) for a varying fractional order α of the coupled input η(t) in Figure 4.
In Figure 2, we plot the means E[η(t)] as given in (52) for two different choices of the current term I, as specified in the caption. These means can also be viewed as specific cases of E[X(t)] given in (18). In this figure, different plots correspond to different values of the fractional order α. In both cases of applied current I (left and right), the decay towards the asymptotic value I is faster when α = 1, and it becomes slower and slower as the value of α decreases. This sort of slowdown is typical of fractional dynamics with respect to the (classical) integer-order ones.
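The decay of E[η(t)] towards I can also be checked in closed form: applying the known integral identity ∫₀ᵗ r^{α−1}E_{α,α}(−r^α/τ) dr = τ(1 − E_α(−t^α/τ)) to (52) gives E[η(t)] = η₀ E_α(−t^α/τ) + I(1 − E_α(−t^α/τ)). A sketch (our function names, truncated-series Mittag–Leffler evaluation):

```python
from math import gamma

def ml1(z, alpha, n_terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z), truncated series."""
    return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

def mean_eta(t, alpha, tau=5.0, I=0.01, eta0=0.0):
    """Mean (52) of the input process in closed form:
    eta0 * E_alpha(-t^alpha/tau) + I * (1 - E_alpha(-t^alpha/tau))."""
    e = ml1(-t**alpha / tau, alpha)
    return eta0 * e + I * (1.0 - e)
```

As t grows, E_α(−t^α/τ) → 0, so the mean approaches I; for α < 1 the approach is slower, matching the slowdown seen in Figure 2.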
In Figure 3, we plot the means E[V(t)] as in (56) for different β values. These means can also be viewed as particular cases of E[Y(t)] given in (33). In this figure, the plots correspond to different values of the fractional order β, whereas α = 1 for the corresponding input process η(t). The resting potential value V_L and the input current term I are chosen as V_L = −0.1, I = 0.01 on the left and V_L = 0.1, I = −0.01 on the right. On the left, the initial attraction towards negative values is evident, due to the presence of the resting potential value V_L < 0, followed by the increasing behavior due to the applied positive current I. The overall behaviors appear ordered as β varies. In particular, for β = 1, the mean behavior of V(t) appears more sensitive to the presence of V_L, because it assumes the lowest values with respect to the other cases. Moreover, it seems to attain the equilibrium level more rapidly (see the tails of the plots for times greater than 25). On the right, the same considerations hold, even if the plots are symmetrically identical (with the opposite sign) to those on the left, due to the opposite (in sign) choices of the parameters V_L and I.
In Figure 4, we plot E[V(t)] as in (56) for different values of α and β. These plots are particular cases of E[Y(t)] given in (33). Looking at the plots from left to right, i.e., for increasing values of β, the mean varies over a wider range of values, which means that the mean of the potential is more strongly affected by the joint action of the resting potential and of the input process. Such a feature holds for all considered values of α. Within each sub-figure, the slowdown in the evolution is evident and ordered as the values of α decrease.
A final consideration can be made: the numerical and graphical analyses of the mean functions E[η(t)] and E[V(t)] of the input and voltage processes, respectively, reveal that the proposed model (50) can describe a wider range of possible neuronal evolutions than the classical case. This also means that, for suitable choices of the fractional orders, the model can fit neuronal data that are not well described by classical models. In particular, the evolution time scales of the input and of the voltage can be tuned appropriately in accordance with the neuro-physiological data requirements.

5.2. About the Covariance Functions

For the covariance functions Cov[η(u), η(t)] and Cov(V(u), V(t)), we show and discuss the plots obtained from numerical evaluations of Equations (53) and (58), respectively. These plots coincide with those of Cov[X(u), X(t)] as in (20) and Cov(Y(u), Y(t)) as in (42), respectively, for the specified settings of the involved parameters and functions. We also consider Equation (49) for Cov(Y(u), Y(t)) when we set α = 1.
In Figure 5, we plot the covariance Cov[η(u), η(t)] given in (53). These plots are particular cases of Cov[X(u), X(t)] as in (20). From such graphical representations, we understand that this covariance initially attains higher values for higher values of α; then the inverse occurs: on the tails, the decay of the covariance is slower for smaller values of α. This is related to the long-range dependence property of this type of process. Moreover, as α increases from 0.6 to 0.99, the agreement with the integer (classical) case α = 1 is evident.
In Figure 6, we provide plots of Cov(V(u), V(t)) as in (58) for u = 1, t > u, α = 1 and for some values of β. Such plots are the same as those of Cov(Y(u), Y(t)) obtained from (49). These numerical evaluations initially show increasing values of Cov(V(u), V(t)) for increasing values of β (see Figure 6, left), but this behavior inverts for large values of the time t, as can be seen in the tails of the plots on the right of Figure 6. We already observed similar behavior for Cov[η(u), η(t)] in Figure 5. Hence, V(t) inherits this type of feature, which is, again, related to the long-range dependence property that such a covariance preserves. This will be further investigated from a theoretical point of view and via subsequent optimized numerical evaluations.
In Figure 7, Figure 8 and Figure 9, we provide plots of Cov(V(u), V(t)) as in (58) for varying u and t values and for the three values β = 0.6, 0.8, 0.99. Such plots are the same as those of Cov(Y(u), Y(t)) obtained from (49) for a suitable setting of the involved parameters and functions. By comparing the plots of Figure 7, Figure 8 and Figure 9, which correspond to increasing values of β, we note that the covariance Cov(V(u), V(t)) attains increasing values: from the maximum value 1.4 in Figure 7 to the maximum value 4.5 in Figure 9. Moreover, from the color maps on the right of Figure 7, Figure 8 and Figure 9, the region of the (u, t) plane corresponding to the highest values of the covariance widens and stretches around the diagonal (the variance) as the value of β increases. This can also be interpreted by saying that the covariance persists over a longer time period for increasing values of β. Finally, the value of the covariance increases as u and t both increase.
This in-depth study and the provided plots of the covariances Cov(X(u), X(t)) and Cov(Y(u), Y(t)), especially after the investigation by means of three-dimensional visualizations, confirm the inclusion of an amount of "memory" in these types of models. The provided representations of such functions allowed these exploration results to be obtained and the theoretical properties to be validated. They also open up the possibility of using them in many different further modeling frameworks.

6. Some Concluding Remarks

Motivated by the wish to provide more refined neuronal models including memory effects, we studied the solution processes of a pair of coupled fractional stochastic differential equations (Equations (1) and (2)) involving Caputo fractional derivatives. Such processes are ML fractional processes, as specified in (21) and (30), respectively. In order to provide compact expressions of such processes and of their means and covariances, we introduced the ML fractional integral operators: a deterministic one, as in (14), and a stochastic one, as in (15), with respect to standard Brownian motion. The main results are in Theorems 1 and 2, in which the representation Formulas (17) and (32) are given for the processes X and Y, respectively, in terms of the ML fractional operators. Such representations turn out to be extremely useful for determining new computable expressions of the covariances, (20) for X and (35) for Y, but they also allow us to understand the roles and the resulting effects of the processes in the coupled dynamics. This is extremely useful when we assign physical values to all involved functions, parameters and processes, as we demonstrated in the application to the neuronal model (50) provided in Section 4 and Section 5.
We remark that the studied processes are Gaussian and non-Markovian with the long-range dependence property. The provided results allow us to devise simulation algorithms for these types of processes, such as those based on exact generating procedures for Gaussian random variables: this is now possible due to the explicit computable form of the covariances provided here. (We recall that, in particular, many direct computations of ML fractional integrals, in simple cases, are available in [11,12].)
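As an illustration of such an exact generating procedure, the following sketch samples paths of a Gaussian process on a time grid from its mean vector and covariance matrix via Cholesky factorization. For brevity, the demonstration uses the α = 1 (Ornstein–Uhlenbeck started at zero) covariance σ²(e^{−a|t−u|} − e^{−a(t+u)})/(2a), but any of the computable covariances above could be plugged in; function and variable names are ours:

```python
import numpy as np

def simulate_gaussian_paths(mean, cov, n_paths=3, seed=0):
    """Exact sampling of a Gaussian process on a grid, given its mean
    vector and covariance matrix, via Cholesky factorization."""
    rng = np.random.default_rng(seed)
    # tiny diagonal jitter guards against numerical loss of positive definiteness
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(mean)))
    z = rng.standard_normal((n_paths, len(mean)))
    return mean + z @ L.T

# Example: alpha = 1 case, where the input X is Ornstein-Uhlenbeck started at 0
a, sigma = 0.2, 1.0
grid = np.linspace(0.1, 10.0, 50)
U, T = np.meshgrid(grid, grid)
cov = sigma**2 / (2 * a) * (np.exp(-a * np.abs(U - T)) - np.exp(-a * (U + T)))
paths = simulate_gaussian_paths(np.zeros(grid.size), cov, n_paths=3)
```

The same routine applies verbatim once the matrix `cov` is filled with values of (53) or (58) on the grid.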
The figures show that numerical and quantitative analyses of the covariances of such processes are possible, and they have to be carried out in a systematic way. Moreover, we understand that a more rigorous and in-depth study of ML stochastic fractional integrals and processes is also necessary, due to the mathematical beauty and the practical interest of such operators. All these proposed research objectives will be developed in our future works.

Funding

This study was partially funded by the projects PRIN-PNRR P2022XSF5H, PRIN2022-MUR 2022XZSAFN and DAISY: Dynamic Analysis of Interacting Biological Systems through Statistics.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author thanks the anonymous reviewers for their precious suggestions and greatly appreciates their comments, which led to this improved revised version of the paper.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ML: Mittag–Leffler
RL: Riemann–Liouville
RLI: Riemann–Liouville–Itô
fBM: Fractional Brownian motion
SDE: Stochastic differential equation
fDE: Fractional differential equation
fSDE: Fractional stochastic differential equation

References

1. Abundo, M.; Pirozzi, E. Fractionally integrated Gauss-Markov processes and applications. Commun. Nonlinear Sci. Numer. Simul. 2021, 101, 105862.
2. Anh, P.T.; Doan, T.S.; Huong, P.T. A variation of constant formula for Caputo fractional stochastic differential equations. Stat. Probab. Lett. 2019, 145, 351–358.
3. Li, K.; Peng, J. Laplace transform and fractional differential equations. Appl. Math. Lett. 2011, 24, 2019–2023.
4. Pirozzi, E. Some Fractional Stochastic Models for Neuronal Activity with Different Time-Scales and Correlated Inputs. Fractal Fract. 2024, 8, 57.
5. Teka, W.; Marinov, T.M.; Santamaria, F. Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model. PLoS Comput. Biol. 2014, 10, e1003526.
6. Luchko, Y. General Fractional Integrals and Derivatives of Arbitrary Order. Symmetry 2021, 13, 755.
7. Kochubei, A.N. General fractional calculus, evolution equations, and renewal processes. Integr. Equ. Oper. Theory 2011, 71, 583–600.
8. Fernandez, A.; Özarslan, M.A.; Baleanu, D. On fractional calculus with general analytic kernels. Appl. Math. Comput. 2019, 354, 248–265.
9. Giusti, A.; Colombaro, I.; Garra, R.; Garrappa, R.; Polito, F.; Popolizio, M.; Mainardi, F. A Practical Guide to Prabhakar Fractional Calculus. Fract. Calc. Appl. Anal. 2020, 23, 9–54.
10. Polito, F.; Tomovski, Z. Some properties of Prabhakar-type fractional calculus operators. Fract. Differ. Calc. 2016, 6, 73–94.
11. Haubold, H.J.; Mathai, A.M.; Saxena, R.K. Mittag-Leffler functions and their applications. J. Appl. Math. 2011, 51, 298628.
12. Ascione, G.; Mishura, Y.; Pirozzi, E. Fractional Deterministic and Stochastic Calculus; De Gruyter: Berlin, Germany, 2024.
13. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: London, UK, 1993.
14. Baccouch, M.; Temimi, H.; Ben-Romdhane, M. A discontinuous Galerkin method for systems of stochastic differential equations with applications to population biology, finance, and physics. J. Comput. Appl. Math. 2021, 388, 113297.
15. Batiha, I.M.; Abubaker, A.A.; Jebril, I.H.; Al-Shaikh, S.B.; Matarneh, K. A Numerical Approach of Handling Fractional Stochastic Differential Equations. Axioms 2023, 12, 388.
16. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics 2004; Springer: Berlin/Heidelberg, Germany, 2010.
17. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999.
18. Garra, R.; Gorenflo, R.; Polito, F.; Tomovski, Z. Hilfer–Prabhakar derivatives and some applications. Appl. Math. Comput. 2014, 242, 576–589.
19. Kilbas, A.A.; Saigo, M.; Saxena, R.K. Generalized Mittag-Leffler function and generalized fractional calculus operators. Integral Transform. Spec. Funct. 2004, 15, 31–49.
20. Prabhakar, T.R. A singular integral equation with a generalized Mittag Leffler function in the kernel. Yokohama Math. J. 1971, 19, 7–15.
21. Garrappa, R.; Kaslik, E.; Popolizio, M. Evaluation of Fractional Integrals and Derivatives of Elementary Functions: Overview and Tutorial. Mathematics 2019, 7, 407.
22. Meerschaert, M.M.; Sabzikar, F. Tempered fractional Brownian motion. Stat. Probab. Lett. 2013, 83, 2269–2275.
23. Ascione, G.; Pirozzi, E. On a stochastic neuronal model integrating correlated inputs. Math. Biosci. Eng. 2019, 16, 5206–5225.
24. Sakai, Y.; Funahashi, S.; Shinomoto, S. Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons. Neural Netw. 1999, 12, 1181–1190.
25. Shinomoto, S.; Sakai, Y.; Funahashi, S. The Ornstein-Uhlenbeck process does not reproduce spiking statistics of cortical neurons. Neural Comput. 1997, 11, 935–951.
26. Kim, H.; Shinomoto, S. Estimating nonstationary inputs from a single spike train based on a neuron model with adaptation. Math. Biosci. Eng. 2014, 11, 49–62.
27. Leonenko, N.; Meerschaert, M.M.; Schilling, R.L.; Sikorskii, A. Correlation Structure of Time-Changed Lévy Processes. Commun. Appl. Ind. Math. 2014, 6, e-483.
28. Doan, T.S.; Kloeden, P.; Huong, P.; Tuan, H.T. Asymptotic separation between solutions of Caputo fractional stochastic differential equations. Stoch. Anal. Appl. 2018, 36, 654–664.
29. Wang, Y.; Xu, J.; Kloeden, P.E. Asymptotic behavior of stochastic lattice systems with a Caputo fractional time derivative. Nonlinear Anal. 2016, 135, 205–222.
30. Bénichou, O.; Oshanin, G. A unifying representation of path integrals for fractional Brownian motions. J. Phys. A Math. Theor. 2024, 57, 225001.
31. Picard, J. Representation formulae for the fractional Brownian motion. In Séminaire de Probabilités XLIII; Springer: Berlin/Heidelberg, Germany, 2011; pp. 3–70.
32. Baldi, P. Stochastic Calculus: An Introduction through Theory and Exercises; Universitext; Springer: Berlin/Heidelberg, Germany, 2017.
33. Burkitt, A.N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 2006, 95, 1–19.
34. Burkitt, A.N. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol. Cybern. 2006, 95, 97–112.
35. Tuckwell, H.C. Spatial neuron model with two-parameter Ornstein-Uhlenbeck input current. Phys. A Stat. Mech. Its Appl. 2006, 368, 495–510.
36. Lansky, P. Sources of periodical force in noisy integrate-and-fire models of neuronal dynamics. Phys. Rev. E 1997, 55, 2040–2043.
37. Stevens, C.F.; Zador, A.M. Novel integrate-and-fire-like model of repetitive firing in cortical neurons. In Proceedings of the 5th Joint Symposium on Neural Computation, La Jolla, CA, USA, 5 May 1998; Volume 8, pp. 172–177.
38. Ascione, G.; Toaldo, B. A Semi-Markov Leaky Integrate-and-Fire model. Mathematics 2019, 7, 1022.
39. Bazzani, A.; Bassi, G.; Turchetti, G. Diffusion and memory effects for stochastic processes and fractional Langevin equations. Phys. A Stat. Mech. Appl. 2003, 324, 530–550.
40. Benedetto, E.; Polito, F.; Sacerdote, L. On firing rate estimation for dependent interspike intervals. Neural Comput. 2015, 27, 699–724.
Figure 1. Comparison between plots of Cov(Y(u), Y(t)) given by (48) (blue) and (49) (red) for u = 1 and t > u. On the left: discretization step 0.5 and t ∈ (1, 30); on the right: a zoom for t ∈ (1, 7) with discretization step 0.1. Other parameters: β = 0.6, σ = ς = 1, a = 0.2, g = 0.1.
Figure 2. Plots of the mean input E[η(t)] as given in (52), with I = 0.01 on the left and I = −0.01 on the right, τ = 5, η₀ = 0, for different values of α. These are also the plots of E[X(t)] given in (18) with X₀ = 0, A = 1/τ, B(t) ≡ I/τ.
Figure 3. Plots of the mean voltages E[V(t)] as in (56), with V₀ = 0, C_m = 1, g_L = 0.1, g = g_L/C_m, for different values of β. For the coupled input process η(t), we set α = 1, and the remaining parameters are set as in Figure 2. These plots are also particular cases of E[Y(t)] given in (33) with y₀ = 0, h(t) ≡ g_L V_L/C_m, ς = 1/C_m. On the left: V_L = −0.1, I = 0.01. On the right: V_L = 0.1, I = −0.01.
Figure 4. Plots of E[V(t)] as given in (56), with V_L = 0.1, V₀ = 0, C_m = 1, g_L = 0.1, g = g_L/C_m, for different values of α and β. The parameters for η(t) are τ = 5, η₀ = 0, I = 0.01. These are also plots of E[Y(t)] given in (33) with y₀ = 0, h(t) ≡ g_L V_L/C_m, ς = 1/C_m.
Figure 5. Plots of the covariance Cov[η(u), η(t)] given in (53) for u = 1 and t > u, with τ = 5, σ/τ = 1 and for different values of α. These plots are particular cases of Cov[X(u), X(t)] as in (20) with A = 1/τ and σ(t) ≡ σ/τ.
Figure 6. Plots of Cov(V(u), V(t)) as given in (58) for u = 1 and t > u, and for some values of β. On the left: τ = 2 and t ∈ (1, 30); on the right: τ = 1 and t ∈ (1, 50) (with discretization step 1). Other parameters: σ = 1, C_m = 1, g_L = 0.1. These are also plots of Cov(Y(u), Y(t)) obtained from (49), with a = 1/τ, ς = 1/C_m, g = g_L/C_m.
Figure 7. For β = 0.6 and α = 1: plots of Cov(V(u), V(t)) as in (58), or Cov(Y(u), Y(t)) as in (49), for varying values of u and t. On the left and in the middle: three-dimensional plots from different perspectives; on the right: a two-dimensional color map of the same Cov(V(u), V(t)). Here, the discretization step is 1 and a = 1/τ = 1; the other parameters are the same as in Figure 6.
Figure 8. For β = 0.8 and α = 1: plots of Cov(V(u), V(t)) as in (58), or Cov(Y(u), Y(t)) as in (49), for varying values of u and t. On the left and in the middle: three-dimensional plots from different perspectives; on the right: a two-dimensional color map of the same Cov(V(u), V(t)). Here, the discretization step is 1 and a = 1/τ = 1; the other parameters are the same as in Figure 6.
Figure 9. For β = 0.99 and α = 1: plots of Cov(V(u), V(t)) as in (58), or Cov(Y(u), Y(t)) as in (49), for varying values of u and t. On the left and in the middle: three-dimensional plots from different perspectives; on the right: a two-dimensional color map of the same Cov(V(u), V(t)). Here, the discretization step is 1 and a = 1/τ = 1; the other parameters are the same as in Figure 6.
Cite as: Pirozzi, E. Mittag–Leffler Fractional Stochastic Integrals and Processes with Applications. Mathematics 2024, 12, 3094. https://doi.org/10.3390/math12193094