Abstract
This paper is devoted to the estimation of the entropy of the dynamical system $(Y_\alpha(t))_{t \ge 0}$, where the stochastic process $Y_\alpha(t)$ consists of the fractional Riemann–Liouville integral of order $\alpha$ of a Gauss–Markov process. The study is based on a specific algorithm, suitably devised in order to perform the simulation of sample paths of such processes and to evaluate the numerical approximation of the entropy. We focus on the fractionally integrated Brownian motion and Ornstein–Uhlenbeck process, due to their central role in theory and applications. Their entropy is specifically estimated by computing its approximation, the approximate entropy (ApEn). We investigate the relation between the value of $\alpha$ and the complexity degree; we show that the entropy of $Y_\alpha$ is a decreasing function of $\alpha$.
MSC:
60G15; 26A33; 65C20
1. Introduction
In the study of a biological system, whose time evolution is modeled by a stochastic process that depends on a certain parameter $\alpha$, there is often a need to find how a change in the value of $\alpha$ affects the qualitative behavior of the system, as well as its complexity degree, or entropy. Another useful piece of information is the knowledge of a stochastic ordering, with respect to the expectation of functionals of the process (e.g., its mean and variance), when varying $\alpha$.
As a case study, we are interested in the qualitative behavior of the fractional integral of a Gauss–Markov (GM) process, when varying the order $\alpha$ of the fractional integration.
Actually, GM processes and their fractional integrals over time are very relevant in various application fields, especially in Biology, e.g., in stochastic models for neuronal activity (see [1]). In particular, the fractional integral of order $\alpha$ of a GM process $X(t)$, say $Y_\alpha(t)$, is suitable to describe certain stochastic phenomena with long-range memory dynamics, involving correlated input processes (see [2]).
As an example of application, one can consider a model for neuronal activity, based on the coupled differential equations:
Here, the first equation involves the Caputo fractional derivative (see [3]); the colored noise process is in place of the white noise usually utilized in the stochastic differential equation that describes a Leaky Integrate-and-Fire (LIF) neuronal model (see, for example, [4]). The colored noise is the correlated process obeying the second of Equation (1), and it is the input for the first one; it is indeed a time-non-homogeneous GM process of Ornstein–Uhlenbeck (OU) type (see Section 2). The stochastic process solving the first equation represents the voltage of the neuronal membrane, whereas the remaining quantities are the membrane capacitance, the leak conductance, the resting (equilibrium) level potential, the synaptic current (a deterministic function), the correlation time of the colored noise, and the driving noise (a standard BM). As we can see, the voltage process, which is the solution of (1), belongs to the class of fractional integrals of GM processes; indeed, it is a specific example of such a process, being the fractional integral of the correlated input process [5]. The biophysical motivation in the above model is to describe the neuronal activity as a perfect integrator (without leakage), from an initial time up to the current time, of the correlated process representing the time-dependent input. The use of fractional operators allows us to regulate the time scale by choosing the fractional order of integration suitably adherent to the neurophysiological evidence. Indeed, such a model can be useful, for instance, in the investigation and simulation of synchronous/asynchronous communications in networks of neurons [6].
To introduce the terms of our investigation, we recall some definitions.
A continuous GM process is a stochastic process of the form:
$$X(t) = m(t) + h_2(t)\, B(r(t)), \qquad t \ge 0,$$
where $B(t)$ denotes standard Brownian motion (BM), $m(t), h_1(t), h_2(t)$ are $C^1$ functions with $h_2(t) \neq 0$, and $r(t) = h_1(t)/h_2(t)$ is a monotone increasing, differentiable and non-negative function.
For a continuous function $f$, its Riemann–Liouville (RL) fractional integral of order $\alpha > 0$ is defined as (see [7]):
$$I^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t - s)^{\alpha - 1} f(s)\, ds, \qquad t \ge 0,$$
where $\Gamma(\alpha)$ is the Euler Gamma function, i.e., $\Gamma(\alpha) = \int_0^{+\infty} x^{\alpha - 1} e^{-x}\, dx$.
We recall also that the Caputo fractional derivative of order $\alpha \in (0, 1)$ of a function $f$ is defined by (see [3]):
$$D^\alpha f(t) = \frac{1}{\Gamma(1 - \alpha)} \int_0^t \frac{f'(s)}{(t - s)^{\alpha}}\, ds,$$
where $f'$ denotes the ordinary derivative of $f$. Notice that, taking the limit for $\alpha \to 0^+$, one gets $I^\alpha f(t) \to f(t)$, while $I^1 f(t) = \int_0^t f(s)\, ds$, i.e., the ordinary Riemann integral of $f$. Moreover, $D^\alpha f(t) \to f(t) - f(0)$ as $\alpha \to 0^+$, and $D^\alpha f(t) \to f'(t)$ as $\alpha \to 1^-$.
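To make the action of the RL operator concrete, the following minimal R sketch approximates $I^\alpha f(t)$ by a Riemann sum of the convolution kernel; the function name `rl_frac_int`, the grid size and the test function are our own illustrative choices, not taken from the paper.

```r
# Minimal sketch: Riemann-sum approximation of the RL fractional integral
# I^alpha f(t) = (1/Gamma(alpha)) * int_0^t (t - s)^(alpha - 1) f(s) ds
# (illustrative discretization; name, grid and test function are assumptions)
rl_frac_int <- function(f, t, alpha, n = 1000) {
  s  <- seq(0, t, length.out = n + 1)[-(n + 1)]  # left endpoints of subintervals
  ds <- t / n
  sum((t - s)^(alpha - 1) * f(s)) * ds / gamma(alpha)
}

# Usage: for f(s) = s, the exact value is t^(alpha + 1) / Gamma(alpha + 2)
t <- 2; alpha <- 0.5
approx_val <- rl_frac_int(function(s) s, t, alpha)
exact_val  <- t^(alpha + 1) / gamma(alpha + 2)
c(approx = approx_val, exact = exact_val)
```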
Referring to the neuronal model (1), under suitable assumptions on the initial conditions, the RL fractional integral is used as the left-inverse of the Caputo derivative (see [8,9]). In this way, we find that the solution of (1) involves the RL fractional integral of the GM process appearing as input. Thus, the membrane voltage turns out to be written in terms of the fractional integral of the correlated input process.
From this consideration, in the framework of general stochastic models involving correlated processes, it appears useful to investigate the properties of $Y_\alpha(t) = I^\alpha X(t)$, i.e., the fractional integral of a GM process $X(t)$, as $\alpha$ varies. Although $Y_\alpha(t)$ is not Markov, we showed in [2] that it is still a Gaussian process with explicitly computable mean and variance; for instance, the fractional integral of BM has mean zero and variance proportional to $t^{2\alpha + 1}$ (for closed formulae of the mean and variance of the fractional integral of a general GM process, see [2]). For fixed $\alpha$, the variance turned out to be increasing as a function of $t$. Moreover, in [2] we found that, for small values of the time $t$, the variances of the considered fractionally integrated GM processes become ever lower as $\alpha$ increases (i.e., the variance decreases as a function of $\alpha$); for large values of $t$, this behavior is overturned, and the variance increases with $\alpha$ (see [2]).
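As a purely numerical illustration of this monotonicity (we do not reproduce the closed formulae of [2] here), the variance of FIBM can be approximated directly from $\mathrm{Var}(I^\alpha B(t)) = \Gamma(\alpha)^{-2} \int_0^t \int_0^t (t - u)^{\alpha - 1} (t - v)^{\alpha - 1} \min(u, v)\, du\, dv$, which follows from $\mathrm{Cov}(B(u), B(v)) = \min(u, v)$. The R sketch below uses a plain midpoint rule; the function name and the parameter values are illustrative assumptions.

```r
# Sketch: variance of fractionally integrated BM at time t, order alpha,
# via a midpoint-rule approximation of the double integral
# (1/Gamma(alpha)^2) * int_0^t int_0^t (t-u)^(a-1) (t-v)^(a-1) min(u,v) du dv
var_fibm <- function(t, alpha, n = 400) {
  u <- (seq_len(n) - 0.5) * t / n        # midpoints of the grid in [0, t]
  w <- (t - u)^(alpha - 1) * (t / n)     # kernel weight on each midpoint
  K <- outer(u, u, pmin)                 # covariance of BM: min(u, v)
  drop(t(w) %*% K %*% w) / gamma(alpha)^2
}

# For large t the variance increases with alpha, for small t it decreases
sapply(c(0.25, 0.5, 0.75, 1), function(a) var_fibm(t = 5,   alpha = a))
sapply(c(0.25, 0.5, 0.75, 1), function(a) var_fibm(t = 0.1, alpha = a))
```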
In this paper, we aim to characterize the qualitative behavior of the dynamical system $(Y_\alpha(t))_{t \ge 0}$ by means of its entropy. Indeed, entropy is widely used for this purpose in many fields (see [10,11,12,13,14]). In Biology, entropy is useful to characterize the behavior of, for example, Leaky Integrate-and-Fire (LIF) neuronal models (see [4]). In finance, Kelly [15] introduced entropy for gambling on horse races, and Breiman [16] for investments in general markets. Finally, the admissible self-financing strategy achieving the maximum entropy results in a growth-optimal strategy (see [17]).
In order to specify the entropy for the processes considered in this paper, we first note that, for a fixed time $s$, the r.v. $Y_\alpha(s)$ is normally distributed with mean $m_\alpha(s)$ and variance $\sigma^2_\alpha(s)$; we then recall that the (differential) entropy of a r.v. $X$ with density $f$ is given by
$$H(X) = -\int_{-\infty}^{+\infty} f(x) \ln f(x)\, dx,$$
whence, by calculation, it easily follows that the entropy of the normal r.v. $Y_\alpha(s)$, for $s$ fixed, depends only on its variance, and it is given by (see [18], p. 181):
$$H(Y_\alpha(s)) = \frac{1}{2} \ln \left( 2 \pi e\, \sigma^2_\alpha(s) \right).$$
Thus, the larger the variance, the larger the entropy of $Y_\alpha(s)$, for a fixed time $s$.
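As a quick sanity check of this formula (with illustrative values, not data from the paper), the closed form $\tfrac{1}{2} \ln(2 \pi e\, \sigma^2)$ can be compared with a Monte Carlo estimate of $-\mathbb{E}[\ln f(X)]$ in R:

```r
# Sketch: differential entropy of a N(mu, sigma^2) r.v.
# closed form 0.5 * log(2 * pi * e * sigma^2) vs. Monte Carlo -E[log f(X)]
set.seed(1)
mu <- 0; sigma <- 2
x <- rnorm(1e5, mean = mu, sd = sigma)
mc_entropy     <- -mean(dnorm(x, mean = mu, sd = sigma, log = TRUE))
closed_entropy <- 0.5 * log(2 * pi * exp(1) * sigma^2)
c(monte_carlo = mc_entropy, closed_form = closed_entropy)  # both about 2.11
```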
In this paper we are interested in studying a different quantity: for a given value of $\alpha$ and a given time horizon, our aim is to find the entropy of the trajectories of $Y_\alpha$, which involves all the points of the trajectories up to that time, and to show that this entropy is a decreasing function of $\alpha$.
We do not actually compute the entropy of $Y_\alpha$, but its approximate entropy ApEn (see [19]), obtained by using several long enough simulated trajectories (previously obtained in [2] for the fractional integral of some noteworthy GM processes $X(t)$, namely BM and the Ornstein–Uhlenbeck (OU) process). In fact, Pincus [19] showed that ApEn is suitable to quantify the concept of changing complexity, being able to distinguish a wide variety of system behaviors. Indeed, for general time series, it can potentially separate data coming from deterministic systems from data coming from stochastic ones, and periodic systems from chaotic ones; moreover, for a homogeneous, ergodic Markov chain, ApEn coincides with the Kolmogorov–Sinai entropy. Thus, though $Y_\alpha$ is not a Markov process, its approximate entropy ApEn is able to characterize the complexity degree of the system, when varying $\alpha$.
As we said, we previously found that, in all the considered cases of GM processes, for large $t$ the variance of their fractional integral is an increasing function of $\alpha$, while for small $t$ it decreases with $\alpha$; instead, the covariance function has more diversified behaviors (see [2]).
In the present article, we show that, for small values of $\alpha$, $Y_\alpha$ exhibits a large value of the complexity degree; a possible explanation is that, for small $\alpha$, the trajectories of the process become more jagged, giving rise to a greater value of the complexity degree. In fact, our estimates of ApEn show that it is a decreasing function of $\alpha$. This behavior appears for the fractional integral of BM (FIBM), as well as for the fractional integral of the OU process (FIOU).
2. The Entropy of the Trajectories of $Y_\alpha(t)$
In this section, we study the complexity degree of the trajectories of the process $Y_\alpha(t)$, in two noteworthy cases of GM processes $X(t)$, precisely:
- (i) $X(t) = B(t)$, so that $Y_\alpha(t)$ is fractionally integrated Brownian motion (FIBM);
- (ii) $X(t)$ is the Ornstein–Uhlenbeck (OU) process, driven by the corresponding SDE, so that $Y_\alpha(t)$ is the fractionally integrated OU process (FIOU).
Both FIBM and FIOU are Gaussian processes, whose variance and covariance functions were explicitly obtained in [2] and studied as functions of $\alpha$.
To study the complexity degree of the trajectories of the process $Y_\alpha(t)$, in cases (i) and (ii), we make use of several simulated trajectories of length $N$, previously obtained in [2], for $N$ large. The sample paths have been obtained by using the R software, with time discretization step $h$ and by means of the same sequence of pseudo-random Gaussian numbers. The simulation algorithm has been realized as an R script. More specifically, we specialize the algorithm to simulate an array of Gaussian numbers with a specified covariance matrix. Indeed, we first set the time instants $t_i = i h$ (with $i = 1, \dots, N$), and we evaluate the elements $c_{ij} = \mathrm{Cov}(Y_\alpha(t_i), Y_\alpha(t_j))$ of the covariance matrix $C$. Note that, for each fractionally integrated Gauss–Markov process here considered, we implemented a specific algorithm to evaluate by numerical procedures the mathematical expression of the covariance, according to Equation (3.5) of [2]. Then, we apply the Cholesky decomposition to the matrix $C$ in order to determine the lower triangular matrix $G$ such that $C = G G^T$, where $G^T$ is the transpose of $G$. Finally, we generate $N$ standard pseudo-Gaussian numbers $z_1, \dots, z_N$ and we set $y_i = G_i \cdot z$ (for $i = 1, \dots, N$, with $G_i$ the $i$-th row of the matrix $G$), so that the obtained array $(y_1, \dots, y_N)$ is a simulation of a centered Gaussian $N$-dimensional r.v. with covariances $c_{ij}$, for $i, j = 1, \dots, N$.
In particular, referring to algorithms for the generation of pseudo-random numbers (see [21]), the main steps of the implementation were the following (for more details, see [2]):
- STEP 1
- The elements $c_{ij}$ of the covariance matrix $C$ are calculated at the times $t_i$ of an equi-spaced temporal grid.
- STEP 2
- The Cholesky decomposition algorithm is applied to the covariance matrix $C$ in order to obtain a lower triangular matrix $G$, such that $C = G G^T$.
- STEP 3
- The $N$-dimensional array $z$ of standard pseudo-Gaussian numbers is generated.
- STEP 4
- The sequence of simulated values of the correlated fractionally integrated process is constructed as the array $y = G z$.
Finally, the array $y$ provides the simulated path, i.e., a realization of $(Y_\alpha(t_1), \dots, Y_\alpha(t_N))$, whose components have the assigned covariance; a minimal R sketch of STEPS 1–4 is reported below.
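The following is a minimal R sketch of STEPS 1–4 for the FIBM case. It does not use the closed-form covariance of Equation (3.5) of [2] (which we do not reproduce); instead, as a swapped-in technique, it builds the covariance matrix from a quadrature based on $\mathrm{Cov}(B(u), B(v)) = \min(u, v)$. The function name `simulate_fibm` and all parameter values are illustrative assumptions.

```r
# Sketch of STEPS 1-4 for FIBM (illustrative; not the paper's exact R script,
# which uses the closed-form covariance of Equation (3.5) of [2])
simulate_fibm <- function(N = 100, h = 0.01, alpha = 0.5, M = 2000) {
  t_grid <- h * (1:N)
  # STEP 1: covariance matrix C of Y_alpha(t_1), ..., Y_alpha(t_N).
  # We discretize I^alpha B(t_i) ~ sum_k W[i, k] * B(s_k) on a fine common grid,
  # so that C = W K W' with K[k, l] = min(s_k, s_l); this construction keeps C
  # positive semi-definite.
  s  <- (seq_len(M) - 0.5) * max(t_grid) / M
  ds <- max(t_grid) / M
  W  <- matrix(0, N, M)
  for (i in seq_len(N)) {
    k <- which(s < t_grid[i])
    W[i, k] <- (t_grid[i] - s[k])^(alpha - 1) * ds / gamma(alpha)
  }
  C <- W %*% outer(s, s, pmin) %*% t(W)
  # STEP 2: Cholesky factor; chol() returns the upper factor, so we transpose
  # (a tiny diagonal jitter guards against rounding in the quadrature above)
  G <- t(chol(C + diag(1e-12 * max(diag(C)), N)))
  # STEPS 3-4: standard Gaussian numbers and the correlated simulated path
  z <- rnorm(N)
  as.vector(G %*% z)
}

set.seed(42)
y <- simulate_fibm(N = 100, h = 0.01, alpha = 0.5)
plot(0.01 * (1:100), y, type = "l", xlab = "t", ylab = "simulated FIBM path")
```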
2.1. The Approximate Entropy
In [19], Pincus defined the concept of approximate entropy (ApEn) to measure the complexity of a system, proving also that, for a Markov chain, ApEn equals the entropy rate of the chain. In fact, to measure the chaos underlying a given set of data, we have at our disposal the Hausdorff and correlation dimensions, the K-S entropy, and the Lyapunov spectrum (see [19]); however, to calculate one of the above parameters, one needs an impractically large amount of data. Instead, the calculation of ApEn(m, r) (see below for the definition) only requires relatively few points. Actually, as shown in [19], if one uses only 1000 points, and m is taken equal to 2, ApEn(m, r) can characterize a large variety of system behaviors, since it is able to distinguish deterministic systems from stochastic ones, and periodic systems from chaotic ones.
For instance, Abundo et al. [10] used ApEn to obtain numerical approximations of the entropy rate, with the final purpose of investigating the degree of cooperativity of proteins in a Markov model with binomial transition distributions. They showed that the corresponding ApEn is a decreasing function of the degree of cooperativity (for more about the approximation of entropy by numerical algorithms, see [12] and references therein).
Now, we recall from [19] the definition of ApEn. Let $u(1), u(2), \dots, u(N)$ be a given time series of data, equally spaced in time, and fix an integer $m \ge 1$ and a positive number $r$. Next, let us consider the sequence of vectors $x(1), \dots, x(N - m + 1)$ in $\mathbb{R}^m$, defined by $x(i) = (u(i), u(i+1), \dots, u(i + m - 1))$. Then, define for each $i$,
$$C_i^m(r) = \frac{\#\{\, j \le N - m + 1 : d[x(i), x(j)] \le r \,\}}{N - m + 1},$$
in which the distance between two vectors $x(i)$ and $x(j)$ is defined by
$$d[x(i), x(j)] = \max_{k = 1, \dots, m} |u(i + k - 1) - u(j + k - 1)|.$$
We observe that the quantities $C_i^m(r)$ measure, up to a tolerance $r$, the frequency of patterns similar to a given pattern of window length $m$. Now, define
$$\Phi^m(r) = \frac{1}{N - m + 1} \sum_{i = 1}^{N - m + 1} \ln C_i^m(r)$$
and
$$\mathrm{ApEn}(m, r) = \lim_{N \to \infty} \left[ \Phi^m(r) - \Phi^{m+1}(r) \right].$$
Given $N$ data points, the above formula can be implemented by defining the statistics
$$\mathrm{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r).$$
Heuristically, we can say that ApEn is a measure of the logarithmic likelihood that runs of patterns that are close for $m$ observations remain close on the next incremental comparison. A greater likelihood of remaining close (i.e., regularity) produces smaller ApEn values, and vice versa. On the basis of simulated data, Pincus showed that, for $m = 2$ and $N = 1000$, values of $r$ between $0.1$ and $0.25$ times the standard deviation of the data produce reasonable statistical validity of $\mathrm{ApEn}(m, r, N)$. Moreover, he showed that, for a homogeneous, ergodic Markov chain, ApEn coincides with the Kolmogorov–Sinai entropy (see [14]), that is
$$\mathrm{ApEn} = - \sum_i \pi_i \sum_j p_{ij} \ln p_{ij},$$
where $p_{ij}$ denotes the (one-step) transition probability of the Markov chain from the state $i$ to the state $j$, and $\pi_i$ is the $i$-th component of the vector of the stationary probabilities, obtained as the limit of the $n$-step transition probabilities of the chain.
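For concreteness, a direct, minimal R implementation of $\mathrm{ApEn}(m, r, N)$ as defined above may look as follows; the function name `apen` and its default tolerance are our own choices (the default $r = 0.2\,\mathrm{SD}$ lies in the range suggested by [19]), and self-matches $j = i$ are included in the counts, as in the original definition, so that $C_i^m(r) > 0$ and the logarithm is well defined.

```r
# Minimal implementation of ApEn(m, r, N) following Pincus's definition
# u: numeric vector of N equally spaced data; m: window length; r: tolerance
apen <- function(u, m = 2, r = 0.2 * sd(u)) {
  phi <- function(mm) {
    n <- length(u) - mm + 1
    # embedded vectors x(i) = (u(i), ..., u(i + mm - 1)), one per row
    X <- sapply(0:(mm - 1), function(k) u[(1 + k):(n + k)])
    if (mm == 1) X <- matrix(X, ncol = 1)
    # C_i^mm(r): fraction of j with max_k |u(i+k-1) - u(j+k-1)| <= r
    Ci <- sapply(seq_len(n), function(i) {
      d <- apply(abs(sweep(X, 2, X[i, ], "-")), 1, max)
      mean(d <= r)
    })
    mean(log(Ci))                     # Phi^mm(r)
  }
  phi(m) - phi(m + 1)                 # ApEn(m, r, N)
}

# Example on white noise (i.i.d. data yield comparatively large ApEn)
set.seed(1)
apen(rnorm(300), m = 2)
```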
2.2. Calculation of the Entropy of Simulated Trajectories of the Process $Y_\alpha(t)$
In the cases of FIBM and FIOU, for a set of values of $\alpha$, we have generated $L$ (discretized) trajectories of length $N$ of the process $Y_\alpha(t)$ by means of the simulation algorithm previously described in STEPS 1–4. In particular, for each simulated path, we perform the following further steps:
- STEP 5
- Construction of the array of vectors $x(1), \dots, x(N - m + 1)$ in $\mathbb{R}^m$ (for a fixed $m$) by extracting from a given sample path $(y_1, \dots, y_N)$, obtained in STEPS 1–4, the vectors $x(i) = (y_i, \dots, y_{i + m - 1})$.
- STEP 6
- Construction of the distance matrix $D$, whose elements $d_{ij}$ are defined as the distance between the vectors $x(i)$ and $x(j)$, i.e., $d_{ij} = d[x(i), x(j)] = \max_{k = 1, \dots, m} |y_{i + k - 1} - y_{j + k - 1}|$.
- STEP 7
- After setting $r$ equal to a fixed fraction (chosen, following [19], between 0.1 and 0.25) of the sample standard deviation of the simulated paths, evaluation of the array whose components are provided as $C_i^m(r) = \#\{\, j : d_{ij} \le r \,\}/(N - m + 1)$, for $i = 1, \dots, N - m + 1$.
- STEP 8
- Evaluation of the quantities $\Phi^m(r)$ and $\Phi^{m+1}(r)$, and of $\mathrm{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r)$.
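Putting STEPS 1–8 together, a usage sketch (assuming the `simulate_fibm` and `apen` functions sketched earlier in this section, and purely illustrative values of $L$, $N$ and $\alpha$) averages ApEn over the $L$ simulated paths for each value of $\alpha$:

```r
# Usage sketch of STEPS 1-8: average ApEn(2, r, N) over L simulated paths,
# for each value of alpha (small illustrative parameters to keep it fast;
# assumes simulate_fibm() wraps STEPS 1-4 and apen() implements STEPS 5-8)
set.seed(123)
alphas <- c(0.25, 0.5, 0.75, 1)
L <- 10; N <- 300

apen_by_alpha <- sapply(alphas, function(a) {
  mean(replicate(L, {
    y <- simulate_fibm(N = N, h = 0.01, alpha = a)   # STEPS 1-4
    apen(y, m = 2, r = 0.2 * sd(y))                  # STEPS 5-8
  }))
})
names(apen_by_alpha) <- alphas
apen_by_alpha   # expected to decrease as alpha increases
```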
We have taken the number of paths $L$ large enough and $N$ ranging from 100 up to larger values; for each of these $L$ trajectories of length $N$, corresponding to a value of $\alpha$, we have estimated $\mathrm{ApEn}(m, r)$ by means of the approximation $\mathrm{ApEn}(m, r, N)$, where $r$ is a fixed fraction of the standard deviation of the trajectory points; then, the approximate entropy of $Y_\alpha$ has been obtained by averaging over the $L$ trajectories. This allowed us to study the dependence of the entropy of the sample paths of $Y_\alpha$ on the parameter $\alpha$, showing that the entropy, namely a measure of the complexity of the dynamical system $(Y_\alpha(t))$, is a decreasing function of $\alpha$.
Since the fractional integral of order zero of $X(t)$ is nothing but the process $X(t)$ itself, and the fractional integral of order 1 is the ordinary Riemann integral of $X(t)$, our result means that fractional integration introduces a greater degree of complexity than that corresponding to ordinary integration; moreover, the maximum degree of complexity is obtained for the original process (that is, without integration).
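A quick hedged illustration of these two limiting cases (using the `apen` sketch above and illustrative parameters) compares the ApEn of a simulated BM path with that of its ordinary Riemann integral; since the integrated path is much smoother, its ApEn is expected to be smaller.

```r
# Limiting cases: alpha -> 0 gives the process itself, alpha = 1 its ordinary
# Riemann integral (assumes the apen() sketch above; illustrative parameters)
set.seed(7)
h <- 0.01; N <- 500
B  <- cumsum(rnorm(N, sd = sqrt(h)))   # simulated BM path on a grid of step h
IB <- cumsum(B) * h                    # ordinary Riemann integral of the path
c(apen_BM = apen(B, m = 2), apen_intBM = apen(IB, m = 2))
# the integrated (smoother) path is expected to have the smaller ApEn
```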
In Figure 1 and Figure 2 we plot the numerical results for ApEn, as a function of $\alpha$, for FIBM and FIOU, respectively. From the estimates of ApEn so obtained, it appears clear that ApEn is a decreasing function of $\alpha$.
Figure 1.
Approximate entropy (ApEn) of FIBM, as a function of $\alpha$. (Values of $\alpha$ are on the horizontal axes.)
Figure 2.
Approximate entropy (ApEn) of FIOU, as a function of $\alpha$. (Values of $\alpha$ are on the horizontal axes.)
Moreover, our calculation highlights that, for small values of $\alpha$, the trajectories of FIBM and FIOU become more jagged, giving rise to a greater value of the complexity degree (see Figure 3).
Figure 3.
Some simulated sample paths of FIBM (left) and of FIOU (right), for some values of $\alpha$ (specified by labels inside the figure). In both examples the number of simulated points is the same, but the time discretization step $h$ differs between the left and the right panels. The seed of the random generator is the same for all simulated paths. (Values of time $t$ are on the horizontal axes.)
We also show the results for ApEn as $N$ increases in Figure 4 and Figure 5. Our investigations show that the estimated values of ApEn for FIOU, for a given $\alpha$ and a given trajectory length, are considerably larger than those for FIBM (compare Figure 4 and Figure 5). This possibly depends on the fact that the trajectories of FIOU are more complicated than those of FIBM, giving rise to a greater complexity degree. Moreover, contrary to the case of FIBM, where for all $\alpha$ the estimated value of ApEn is a decreasing function of the length $N$ of the simulated trajectories, in the case of FIOU, for some values of $\alpha$, the estimated value of ApEn appears to be an increasing function of $N$. Perhaps, if one used far longer trajectories to estimate ApEn, the values obtained in the two cases would be comparable and they would exhibit the same behavior as a function of $N$. Notice, however, that simulating very long trajectories is impractical from the computational point of view (already for the largest values of $N$ considered here, the CPU time to evaluate ApEn in the case of FIOU was of the order of one hour).
Figure 4.
Approximate entropy (ApEn) of FIBM, for various values of $\alpha$ (specified by labels inside the figure) and of $N$ (on the horizontal axes).
Figure 5.
Approximate entropy (ApEn) of FIOU, for various values of $\alpha$ (specified by labels inside the figure) and of $N$ (on the horizontal axes).
3. Conclusions and Final Remarks
In this paper, we have further investigated the qualitative behavior of the fractional integral of order $\alpha$ of a Gauss–Markov process, which we already studied in [2].
Actually, Gauss–Markov processes and their fractional integrals over time are very relevant in various application fields, especially in Biology, e.g., in stochastic models for neuronal activity (see [1]). In fact, the fractional integral of order $\alpha$ of a Gauss–Markov process $X(t)$, say $Y_\alpha(t)$, is suitable to describe stochastic phenomena with long-range memory dynamics, involving correlated input processes, which are very relevant in Biology (see [2]).
While in [2] we showed that $Y_\alpha(t)$ is itself a Gaussian process, and we found its variance and covariance, obtaining that, for $t$ large enough, the variance of $Y_\alpha(t)$ is an increasing function of $\alpha$, in this paper we have characterized the qualitative behavior of the dynamical system $(Y_\alpha(t))$ by means of its complexity degree, or entropy. Actually, for several values of $\alpha$, we have estimated its approximate entropy ApEn, obtained from long enough simulated trajectories of the process $Y_\alpha(t)$. Specifically, we have investigated the problem by means of the implementation of an algorithm based on STEPS 1–8, described in detail in the paper. We have found that ApEn is a decreasing function of $\alpha$; this behavior appeared for the fractional integral of Brownian motion, as well as for the fractional integral of the Ornstein–Uhlenbeck process. Since the fractional integral of $X(t)$ of order zero is nothing but the process itself, and the fractional integral of order 1 is the Riemann integral of $X(t)$, our result means that fractional integration introduces a greater degree of complexity than in the case of ordinary integration; moreover, the maximum degree of complexity is obtained for the original Gauss–Markov process (that is, without integration).
Furthermore, we remark that the algorithm for computing ApEn uses numerical data, which can be used independently of knowing the process they come from. However, in our case, we study the process $Y_\alpha(t)$ when varying the parameter $\alpha$, so we need to simulate its trajectories and to make use of the obtained numerical values to estimate ApEn. We have not investigated the possibility of finding out, by using ApEn, whether certain data come from a particular class of systems. Our aim was only to characterize the behavior of a fractionally integrated Gauss–Markov process, as the parameter $\alpha$ varies, by means of the corresponding value of ApEn.
As future work, we aim to estimate the entropy for other cases of fractionally integrated Gauss–Markov processes, such as the fractional integral of the stationary Ornstein–Uhlenbeck process. Moreover, in order to further characterize the qualitative behavior of $Y_\alpha(t)$ in terms of $\alpha$, our investigation will be addressed to estimating the fractal dimension of its trajectories, as a function of $\alpha$.
Author Contributions
Conceptualization, M.A.; data curation, E.P.; investigation, M.A. and E.P.; methodology, M.A. and E.P.; software, E.P. All authors have read and agreed to the published version of the manuscript.
Funding
This work was partially supported by MIUR-PRIN 2017, project Stochastic Models for Complex Systems, no. 2017JFFHSH, by Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM) and by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006.
Acknowledgments
We thank the anonymous reviewers for their valuable comments.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Pirozzi, E. Colored noise and a stochastic fractional model for correlated inputs and adaptation in neuronal firing. Biol. Cybern. 2018, 112, 25–39.
- Abundo, M.; Pirozzi, E. On the Fractional Riemann-Liouville Integral of Gauss-Markov processes and applications. arXiv 2019, arXiv:1905.08167.
- Caputo, M. Linear models of dissipation whose Q is almost frequency independent–II. Geophys. J. R. Astron. Soc. 1967, 13, 529–539.
- Burkitt, A.N. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 2006, 95, 1–19.
- Pirozzi, E. On the Integration of Fractional Neuronal Dynamics Driven by Correlated Processes; Lecture Notes in Computer Science, 12013 LNCS; Springer: Cham, Switzerland, 2019; pp. 211–219.
- Tamura, S.; Nishitani, Y.; Hosokawa, C.; Mizuno-Matsumoto, Y. Asynchronous Multiplex Communication Channels in 2-D Neural Network with Fluctuating Characteristics. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2336–2345.
- Debnath, L. Fractional integral and fractional differential equations in fluid mechanics. Fract. Calc. Appl. Anal. 2003, 6, 119–155.
- Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies, Volume 204; Elsevier: Amsterdam, The Netherlands, 2006.
- Malinowska, A.B. Advanced Methods in the Fractional Calculus of Variations; Springer Briefs in Applied Sciences and Technology; Springer: Berlin/Heidelberg, Germany, 2015.
- Abundo, M.; Accardi, L.; Rosato, N.; Stella, L. Analyzing protein energy data by a stochastic model for cooperative interactions: Comparison and characterization of cooperativity. J. Math. Biol. 2002, 44, 341–359.
- Bollt, E.M.; Skufca, J.D. Control entropy: A complexity measure for nonstationary signals. Math. Biosci. Eng. 2009, 6.
- Ciuperca, G.; Girardin, V. On the estimation of the entropy rate of finite Markov chains. In Proceedings of the International Symposium on Applied Stochastic Models and Data Analysis, Brest, France, 17–20 January 2005.
- Delgado-Bonal, A.; Marshak, A. Approximate Entropy and Sample Entropy: A Comprehensive Tutorial. Entropy 2019, 21, 541.
- Walters, P. An Introduction to Ergodic Theory; Springer: New York, NY, USA, 1982.
- Kelly, J.L. A new interpretation for the information rate. Bell Syst. Tech. J. 1956, 35, 917–926.
- Breiman, L. Optimal gambling system for favorable games. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1960; Volume 1, pp. 65–78.
- Li, P.; Yan, J. The growth optimal portfolio in discrete-time financial markets. Adv. Math. 2002, 31, 537–542.
- Applebaum, D. Probability and Information; Cambridge University Press: Cambridge, UK, 2008.
- Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301.
- Abundo, M. An inverse first-passage problem for one-dimensional diffusion with random starting point. Stat. Probab. Lett. 2012, 82, 7–14.
- Haugh, M. Generating Random Variables and Stochastic Processes. In IEOR E4703: Monte Carlo Simulation; Columbia University: New York, NY, USA, 2016.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).