
Levy Noise Affects Ornstein–Uhlenbeck Memory

School of Chemistry, Tel Aviv University, Tel Aviv 6997801, Israel
Entropy 2025, 27(2), 157; https://doi.org/10.3390/e27020157
Submission received: 6 January 2025 / Revised: 29 January 2025 / Accepted: 30 January 2025 / Published: 2 February 2025
(This article belongs to the Collection Foundations of Statistical Mechanics)

Abstract

This paper investigates the memory of the Ornstein–Uhlenbeck process (OUP) via three ratios of the OUP increments: signal-to-noise, noise-to-noise, and tail-to-tail. Intuition suggests the following points: (1) changing the noise that drives the OUP from Gauss to Levy will not affect the memory, as both noises share the common ‘independent increments’ property; (2) changing the auto-correlation of the OUP from exponential to slowly decaying will affect the memory, as the change yields a process with long-range correlations; and (3) with regard to Levy driving noise, the greater the noise fluctuations, the noisier the prediction of the OUP increments. This paper shows that intuition is plain wrong. Indeed, a detailed analysis establishes that for each of the three above-mentioned points, the very converse holds. Hence, Levy noise has a significant and counter-intuitive effect on Ornstein–Uhlenbeck memory.
PACS:
02.50.-r (probability theory, stochastic processes, and statistics); 05.40.-a (fluctuation phenomena, random processes, noise, and Brownian motion)

1. Introduction

For random processes that are in statistical equilibria, the foundational ‘go-to’ stochastic model is the Ornstein–Uhlenbeck process (OUP) [1,2,3,4]. Mathematically, the OUP is the intersection of three principal classes of random processes [5]: Gaussian [6,7,8]; stationary [9,10,11]; and Markovian [12,13,14]. Being a Gaussian and stationary random process, the OUP is characterized by its auto-correlation function, which is exponential. The dynamics of the OUP are governed by the Langevin equation [15,16,17]. In addition, the stochastic ‘engine’ that ‘drives’ the OUP is Gauss white noise (GWN), which is the velocity process of Brownian motion [18].
The literature on the OUP is vast, e.g., [2,3,4,19,20,21,22,23,24,25,26]. Recent examples of OUP research include: active Ornstein–Uhlenbeck particles [27,28,29,30,31]; OUP and stochastic resetting [32,33,34]; OUP parameter estimation [35,36]; OUP time averages [37]; phase descriptions of multidimensional OUP [38]; OUP and ergodicity breaking [39]; OUP survival analysis [40]; OUP first-passage area [41]; OUP and optical tweezers [42]; OUP large deviations [43]; high-dimensional OUP [44]; and OUP with a memory kernel [45].
The OUP is a ‘regular’ stochastic model. On the one hand, being a Gaussian process, the OUP belongs to the realm of finite-variance and light-tailed stochastic models. On the other hand, as its exponential auto-correlation function decays rapidly, the OUP belongs to the realm of stochastic models with short-range correlations. There are two main approaches that turn the OUP from a regular to an ‘anomalous’ stochastic model: Levy and Gauss.
The Levy approach shifts the OUP (from the realm of finite-variance and light-tailed stochastic models) to the realm of infinite-variance and heavy-tailed stochastic models [46,47]; Mandelbrot and Wallis termed this shift the “Noah effect” [48]. Specifically, the Levy approach replaces the GWN that drives the OUP with Levy noise, which is the velocity process of the (symmetric and stable) Levy motion [49,50,51].
This replacement yields the Levy-driven OUP [52,53,54,55,56,57,58,59,60,61,62,63] and the Levy-driven Langevin equation [64,65,66,67,68,69,70,71,72,73,74,75]. Applications of the Levy-driven OUP include telecommunications [76], reliability analysis [77], fuel switching [78], insurance [79], and finance [80,81,82].
Stochastic models that are driven by Levy noise attracted major interest in statistical physics [83,84,85,86,87,88,89,90,91,92,93,94]. In particular—as evident from the above references—so did the Levy-driven OUP and the Levy-driven Langevin equation. As evident from the following recent research examples, interest in general Levy-driven models [95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111], Levy-driven OUP [79,82,112,113,114,115], and the Levy-driven Langevin equation [116,117,118,119,120,121,122] is ongoing.
The Gauss approach shifts the OUP (from the realm of stochastic models with short-range correlations) to the realm of stochastic models with long-range correlations [123,124,125]; Mandelbrot and Wallis termed this shift the “Joseph effect” [48]. Specifically, the Gauss approach replaces the OUP’s exponential auto-correlation function with auto-correlation functions that decay slowly.
Both approaches maintain the stationary property of the OUP. With regard to the Gaussian and Markovian properties of the OUP, the approaches act in ‘orthogonal directions’. The Levy approach maintains the Markovian property, and it discards the Gaussian property. Conversely, the Gauss approach (as its name suggests) maintains the Gaussian property, and it discards the Markovian property.
GWN and Levy noise share a common independence property: their noise values, at different time points, are statistically independent (a precise definition of the independence property will be stated in Section 2.3). Consequently, one would expect that the Levy approach will not affect process memory. On the other hand—due to the shift from short-range to long-range correlations—one would expect that the Gauss approach will affect process memory. This paper establishes that it is actually the very converse that holds: the Levy approach yields memory effects which the Gauss approach does not.
To attain its aim, the paper advances as follows. The increment of a random process is its displacement over a specified temporal interval. The increment’s conditional distribution is that in which the following information is provided: the position of the random process at the initial time point of the increment’s temporal interval. Based on the increment’s unconditional and conditional statistical distributions, three informative ratios are devised and analyzed: a ‘signal-to-noise’ ratio; a ‘noise-to-noise’ ratio; and a ‘tail-to-tail’ ratio. These ratios quantify process memory, and their analysis establishes the following conclusions:
The Levy approach yields memory effects that are markedly more profound than those of the OUP.
The Gauss approach yields memory effects that are qualitatively identical to those of the OUP.
In addition to the above conclusions, yet another counter-intuitive conclusion is reached. Indeed, with regard to the Levy-driven OUP, consider the prediction of the above increment, provided the above information. One would expect that the greater the fluctuations in the driving Levy noise, the more ‘noisy’ the prediction. Further analysis of the noise-to-noise ratio establishes that it is actually the very converse that holds: the greater the fluctuations in the driving Levy noise, the less noisy the prediction.
The paper is organized as follows: Section 2 ‘sets the stage’ with concise reviews of several notions (that underpin this research). Section 3 presents the unconditional and conditional statistical distributions of the Levy-driven OUP increments. Section 4 devises the three ratios, and analyzes them for the Levy approach. Section 5 analyzes the three ratios for the Gauss approach. Lastly, Section 6 concludes with a discussion that compares the Levy approach to the Gauss approach, and that describes the counter-intuitive nature of the key results established here.
Future research can proceed in two possible directions. On the one hand, theoreticians can look for additional counter-intuitive behaviors of the Levy-driven OUP. On the other hand, practitioners can look for ‘real-world’ applications that exploit the counter-intuitive behaviors of the Levy-driven OUP.

2. Setting the Stage

This section sets the stage for the topic of this paper. The section tersely reviews the following notions: the measurement of randomness (Section 2.1); Levy distributions (Section 2.2); Levy noise (Section 2.3); the Levy-driven Langevin equation and the Ornstein–Uhlenbeck process (Section 2.4).

2.1. Measuring Randomness

Consider a real-valued random variable X. The most common way to quantify the randomness of X is via its standard deviation, σ[X], which displays the two following gauge properties.
G1. 
The standard deviation is non-negative, σ[X] ≥ 0, and it vanishes if and only if the random variable is deterministic:
$$\sigma[X] = 0 \;\Longleftrightarrow\; X = \mathrm{const} , \tag{1}$$
where the equality on the right-hand side holds with probability one.
G2. 
The standard-deviation’s response to an affine transformation of the random variable is:
$$\sigma[aX + b] = |a| \cdot \sigma[X] , \tag{2}$$
where a and b are, respectively, the slope and the intercept of the affine transformation.
In fact, any gauge σ [ X ] that displays the two gauge properties is a legitimate measure of randomness [126,127]. For example, the perplexity [128]—which is the exponentiation of Shannon entropy [129]—is a measure of randomness. The inverse participation ratio [130,131]—which is the exponentiation of the collision entropy [132,133]—is also a measure of randomness. The reciprocal of the inverse participation ratio is known as Simpson’s index in biology and ecology [134], and as Hirschman’s index in economics and competition law [135]. The perplexity and the inverse participation ratio are special cases of yet another measure of randomness—the diversity [136,137,138,139,140], which is the exponentiation of Renyi entropy [141].
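As a small numerical illustration (added here for concreteness, and not part of the original analysis; numpy is assumed, and the discrete distribution is a hypothetical example), the diversity of order q ties the above measures together: letting q → 1 recovers the perplexity, and setting q = 2 recovers the inverse participation ratio.

```python
import numpy as np

def diversity(probs, q):
    """Diversity of order q of a discrete distribution: the exponentiation of
    the Renyi entropy. q -> 1 gives the perplexity (exponentiated Shannon
    entropy); q = 2 gives the inverse participation ratio."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()
    if np.isclose(q, 1.0):  # Shannon limit of the Renyi entropy
        return float(np.exp(-np.sum(p[p > 0] * np.log(p[p > 0]))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

probs = [0.5, 0.25, 0.125, 0.125]        # a hypothetical discrete distribution
perplexity = diversity(probs, 1.0)       # exponentiated Shannon entropy
ipr = diversity(probs, 2.0)              # 1 / Simpson's (Hirschman's) index
```

A uniform distribution over n outcomes has diversity n for every order q, while a deterministic (one-point) distribution has diversity 1, mirroring the gauge property G1.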

2.2. Levy Distribution

As above, consider a real-valued random variable X. The statistical distribution of X is symmetric Levy-stable [142,143,144] when
$$\mathbf{E}\left[\exp\left(i \theta X\right)\right] = \exp\left( i c \theta - \tfrac{1}{p} \left| s \theta \right|^{p} \right) \tag{3}$$
($-\infty < \theta < \infty$). Namely, Equation (3) is the characteristic function of the random variable X, and it has three parameters: a real center parameter c; a positive scale parameter s; and a power p whose admissible values are 0 < p ≤ 2.
Henceforth, a random variable X—whose characteristic function is that of Equation (3)—is termed, in short, Levy. In turn, the power p is henceforth termed Levy exponent. Key facts regarding the Levy random variable X are the following [142,143,144]:
F1. 
The random variable X is symmetric about its center c, and hence the median of X is its center: M[X] = c.
F2. 
When the Levy exponent is in the range 0 < p ≤ 1, the mean absolute deviation of X from its center diverges: E[|X − c|] = ∞.
F3. 
When the Levy exponent is one, p = 1 , then the statistical distribution of X is Cauchy.
F4. 
When the Levy exponent is in the range 1 < p < 2, then: the mean of X is its center, E[X] = c; and the mean squared deviation of X from its center diverges, E[(X − c)²] = ∞.
F5. 
When the Levy exponent is two, p = 2 , then the statistical distribution of X is Gauss, and hence: the parameters c and s are, respectively, the mean and the standard deviation of X.
The facts stated above imply that the mean behavior of the Levy random variable X changes dramatically as the Levy exponent p crosses the ‘Cauchy threshold’ p = 1. This threshold shall appear time and again throughout this paper.
The standard deviation of the Levy random variable X is well defined only in the Gauss case ( p = 2 ). Other measures of randomness—e.g., the ones noted in Section 2.1—are well defined at large. For a measure of randomness which is well defined, Equations (2) and (3) imply that
$$\sigma[X] = \sigma_p \cdot s , \tag{4}$$
where σ_p is the measure of randomness of a ‘normalized’ Levy random variable, namely a Levy random variable with center zero (c = 0) and scale one (s = 1). So, up to a coefficient, the scale s of the Levy random variable X manifests its randomness.
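The characteristic function of Equation (3) can be sanity-checked by Monte Carlo in the two cases that numpy samples exactly: Cauchy (p = 1, fact F3) and Gauss (p = 2, fact F5). The following is a minimal Python sketch (numpy assumed; the sample size, θ, c, and s are arbitrary illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, c, s = 400_000, 0.7, 1.0, 2.0

def levy_cf(theta, c, s, p):
    # Characteristic function of Equation (3): exp(ic*theta - (1/p)|s*theta|^p)
    return np.exp(1j * c * theta - abs(s * theta) ** p / p)

def empirical_cf(samples, theta):
    # Monte Carlo estimate of E[exp(i*theta*X)]
    return np.mean(np.exp(1j * theta * samples))

# p = 2 (Gauss): the scale s is the standard deviation (fact F5)
gauss = rng.normal(loc=c, scale=s, size=n)
# p = 1 (Cauchy): the scale s is the Cauchy scale (fact F3)
cauchy = c + s * rng.standard_cauchy(size=n)

err_gauss = abs(empirical_cf(gauss, theta) - levy_cf(theta, c, s, 2))
err_cauchy = abs(empirical_cf(cauchy, theta) - levy_cf(theta, c, s, 1))
```

Since |exp(iθX)| = 1, the Monte Carlo estimator converges even in the heavy-tailed Cauchy case, where moment-based checks would fail.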

2.3. Levy Noise

Consider a real-valued stochastic process {η_u; −∞ < u < ∞}. Namely, the process is defined over the real line (−∞ < u < ∞), and its values are η_u. The process is a Levy noise when it displays the two following Levy properties [49,50,51].
L1. 
The integral of the noise over a time interval of duration Δ is a Levy random variable with center zero (c = 0) and scale s = Δ^{1/p} (where p is the Levy exponent).
L2. 
The integrals of the noise over disjoint time intervals are independent random variables.
The Levy noise admits a characterization in terms of its integrals with respect to ‘test functions’: $\int_{-\infty}^{\infty} f(u)\, \eta_u \, du$, where f(u) is a general test function. Indeed, the process is a Levy noise if and only if the integral is a Levy random variable with center zero, and with scale
$$\| f \|_p := \left( \int_{-\infty}^{\infty} \left| f(u) \right|^{p} du \right)^{1/p} . \tag{5}$$
The quantity appearing in Equation (5) is the “$L_p$ norm” of the test function f(u) (with respect to the Lebesgue measure over the real line). The Levy exponent two, p = 2, characterizes a special case in which: the Levy noise is Gauss white noise (GWN), the velocity process of Brownian motion [18]; and the corresponding $L_2$ norm is the Hilbert norm.
The behavior of the $L_p$ norm changes dramatically as the Levy exponent p crosses the Cauchy threshold p = 1 (which was noted in Section 2.2). Indeed, consider the unit ball with respect to the $L_p$ norm, i.e., the collection of all test functions f(u) with a norm that is no larger than one: $\| f \|_p \leq 1$. This unit ball is not convex when the Levy exponent is in the range 0 < p < 1, and it is convex when the Levy exponent is in the range 1 ≤ p ≤ 2.
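The convexity statement can be probed numerically. In the following sketch (numpy assumed; the grid and the indicator test functions are illustrative choices), two unit-norm test functions with disjoint supports are taken; their midpoint has norm 2^{1/p − 1}, which exceeds one precisely when p < 1, so the midpoint escapes the unit ball although both endpoints lie in it.

```python
import numpy as np

du, n = 0.001, 2000            # grid approximating the interval [0, 2)
idx = np.arange(n)

def lp_norm(f, p):
    # Riemann-sum approximation of the L_p "norm" of Equation (5)
    return (np.sum(np.abs(f) ** p) * du) ** (1.0 / p)

# Two unit-norm test functions with disjoint supports:
# f is the indicator of [0, 1), g is the indicator of [1, 2)
f = np.where(idx < n // 2, 1.0, 0.0)
g = np.where(idx >= n // 2, 1.0, 0.0)

for p in (0.5, 1.0, 2.0):
    mid = lp_norm((f + g) / 2.0, p)
    # mid equals 2**(1/p - 1): greater than 1 for p < 1 (ball not convex),
    # equal to 1 at the Cauchy threshold p = 1, below 1 for p > 1
```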

2.4. Langevin, Ornstein and Uhlenbeck

Arguably, the best known ordinary differential equation is the linear one. In turn, adding noise to the linear ordinary differential equation yields what is arguably the best known stochastic differential equation—the Langevin equation [15,16,17]—which is described as follows.
Consider a real-valued random motion {X_t; −∞ < t < ∞}. Namely, the underlying timeline is −∞ < t < ∞, and the motion’s position at the time point t is X_t. The motion’s dynamics are governed by the Langevin equation when
$$\dot{X}_t = -\alpha X_t + \beta \eta_t , \tag{6}$$
where: α is a positive damping coefficient; β is a positive noise coefficient; and { η t ; < t < } is a real-valued noise that ‘drives’ the Langevin equation.
Integrating the Langevin Equation (6), while setting the integration constant to be zero, yields the following integral representation of the motion’s positions:
$$X_t = \underbrace{\beta \exp\left(-\alpha t\right)}_{D_t} \cdot \underbrace{\int_{-\infty}^{t} \exp\left(\alpha u\right) \eta_u \, du}_{I_t} . \tag{7}$$
Namely, the integral representation of Equation (7) is the product of two terms—one deterministic and one stochastic. The deterministic term, D t , is an exponential decay function. The stochastic term, I t , is a ‘running integral’ of the noise with respect to an exponential growth function.
Traditionally, the noise that drives the Langevin equation is set to be GWN. In turn, the resulting motion is the Ornstein–Uhlenbeck process (OUP) [1,2,3,4]. As noted in Section 2.3, GWN is the Levy noise with the Levy exponent two, p = 2 . In general, when the noise is Levy then: Equation (6) is the Levy-driven Langevin equation [64,65,66,67,68,69,70,71,72,73,74,75]; and Equation (7) is an integral representation of the Levy-driven OUP [52,53,54,55,56,57,58,59,60,61,62,63].
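To make the Levy-driven Langevin Equation (6) concrete, the following Python sketch (an illustration added here; numpy assumed, parameter values arbitrary) simulates it with an Euler scheme, sampling the noise integral over each step via property L1 of Section 2.3. Only the two exactly samplable cases are used: Gauss (p = 2) and Cauchy (p = 1).

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, dt, n_steps = 1.0, 1.0, 0.01, 200_000

def simulate_oup(p):
    """Euler scheme for Equation (6). By property L1, the noise integral over
    a step of length dt is Levy with center zero and scale dt**(1/p)."""
    x = np.empty(n_steps)
    x[0] = 0.0
    if p == 2:
        noise = rng.normal(0.0, dt ** 0.5, size=n_steps)  # GWN increments
    elif p == 1:
        noise = dt * rng.standard_cauchy(size=n_steps)    # Cauchy increments
    else:
        raise ValueError("only p = 1 and p = 2 are sampled here")
    for k in range(n_steps - 1):
        x[k + 1] = x[k] - alpha * x[k] * dt + beta * noise[k]
    return x

x_gauss = simulate_oup(p=2)     # the classical OUP
x_cauchy = simulate_oup(p=1)    # a Levy-driven OUP with heavy tails
# Gauss case: the stationary variance of the OUP is beta**2 / (2 * alpha)
var_est = np.var(x_gauss[n_steps // 10:])   # discard the initial transient
```

In the Cauchy case the empirical variance does not stabilize, in line with the infinite-variance (Noah-effect) character of the Levy-driven OUP.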

3. Increments of the Levy-Driven OUP

This section sets the focus on the increments of the Levy-driven OUP. The positions of the process admit the integral representation of Equation (7). The increment of the process over the time interval (t, t + Δ), where t is a real time point and Δ is a positive duration, is the displacement X_{t+Δ} − X_t. The increments’ unconditional and conditional statistics will be addressed, respectively, in Section 3.1 and Section 3.2.

3.1. Increments’ Unconditional Statistics

Equation (7) implies that the increment of the process over the time interval ( t , t + Δ ) admits the following integral representation:
$$X_{t+\Delta} - X_t = D_{t+\Delta} I_{t+\Delta} - D_t I_t = \left( D_{t+\Delta} - D_t \right) I_t + D_{t+\Delta} \left( I_{t+\Delta} - I_t \right) = \int_{-\infty}^{t} \left( D_{t+\Delta} - D_t \right) \exp\left(\alpha u\right) \eta_u \, du + \int_{t}^{t+\Delta} D_{t+\Delta} \exp\left(\alpha u\right) \eta_u \, du . \tag{8}$$
So, the increment admits the formulation $X_{t+\Delta} - X_t = \int_{-\infty}^{\infty} f(u)\, \eta_u \, du$, where: (i) f(u) = (D_{t+Δ} − D_t) exp(αu) when u ≤ t; (ii) f(u) = D_{t+Δ} exp(αu) when t < u ≤ t + Δ; and (iii) f(u) = 0 when u > t + Δ.
In turn, it follows from Section 2.3 that the increment X t + Δ X t is a Levy random variable with center zero, and with scale
$$\| f \|_p = \left( \int_{-\infty}^{t} \left| \left( D_{t+\Delta} - D_t \right) \exp\left(\alpha u\right) \right|^{p} du + \int_{t}^{t+\Delta} \left| D_{t+\Delta} \exp\left(\alpha u\right) \right|^{p} du \right)^{1/p} = \frac{\beta}{\left( \alpha p \right)^{1/p}} \left[ \left( 1 - \exp\left(-\alpha \Delta\right) \right)^{p} + 1 - \exp\left(-\alpha p \Delta\right) \right]^{1/p} . \tag{9}$$
The transition from the top line of Equation (9) to its bottom line is due to a straightforward calculation.
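The straightforward calculation can be double-checked numerically: evaluating the top line of Equation (9) by quadrature and comparing it to the closed-form bottom line. A minimal sketch, assuming numpy and scipy; the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

alpha, beta, p, t, Delta = 0.8, 1.3, 1.5, 0.4, 2.0

D = lambda tt: beta * np.exp(-alpha * tt)   # deterministic term of Eq. (7)

# |f(u)|**p for the increment's test function of Equation (8)
tail = lambda u: abs((D(t + Delta) - D(t)) * np.exp(alpha * u)) ** p
bulk = lambda u: abs(D(t + Delta) * np.exp(alpha * u)) ** p

# top line of Equation (9), evaluated by quadrature
numeric = (quad(tail, -np.inf, t)[0] + quad(bulk, t, t + Delta)[0]) ** (1 / p)

# bottom line of Equation (9), the closed form
closed = (beta / (alpha * p) ** (1 / p)) * (
    (1 - np.exp(-alpha * Delta)) ** p + 1 - np.exp(-alpha * p * Delta)
) ** (1 / p)
```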

3.2. Increments’ Conditional Statistics

Equations (7) and (8) imply that
$$X_{t+\Delta} - X_t = \left( \frac{D_{t+\Delta}}{D_t} - 1 \right) X_t + \int_{t}^{t+\Delta} D_{t+\Delta} \exp\left(\alpha u\right) \eta_u \, du . \tag{10}$$
Also, Equation (7) and the properties of the Levy noise imply that the position X t and the integral appearing on the right-hand side of Equation (10) are independent random variables.
Given the information X_t, the increment X_{t+Δ} − X_t has two parts: deterministic and stochastic. The deterministic part is the term, on the right-hand side of Equation (10), that involves X_t. The stochastic part is the integral on the right-hand side of Equation (10). Note that the integral admits the formulation $\int_{-\infty}^{\infty} f(u)\, \eta_u \, du$, where: (i) f(u) = 0 when u ≤ t; (ii) f(u) = D_{t+Δ} exp(αu) when t < u ≤ t + Δ; and (iii) f(u) = 0 when u > t + Δ.
So, given the information X t , the conditional distribution of the increment X t + Δ X t is Levy with the following center and scale parameters. The center of the conditional distribution is the deterministic part
$$\left( \frac{D_{t+\Delta}}{D_t} - 1 \right) X_t = - \left( 1 - \exp\left(-\alpha \Delta\right) \right) X_t . \tag{11}$$
The scale of the conditional distribution is that of the stochastic part
$$\| f \|_p = \left( \int_{t}^{t+\Delta} \left| D_{t+\Delta} \exp\left(\alpha u\right) \right|^{p} du \right)^{1/p} = \frac{\beta}{\left( \alpha p \right)^{1/p}} \left( 1 - \exp\left(-\alpha p \Delta\right) \right)^{1/p} . \tag{12}$$
The transition from the top line of Equation (12) to its bottom line is due to a straightforward calculation.
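In the Gauss case (p = 2), Equations (11) and (12) can be cross-checked by simulation: Euler paths of the Langevin Equation (6), all started from a given position X_t = x0, should produce increments whose sample mean and standard deviation match the conditional center and scale. A minimal sketch (numpy assumed; the parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, Delta, x0 = 0.5, 1.0, 1.2, 2.0
n_paths, n_steps = 200_000, 240
dt = Delta / n_steps

# Euler-Maruyama paths of the Gauss-case (p = 2) Langevin equation (6),
# all conditioned on the same given information X_t = x0
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -alpha * x * dt + beta * rng.normal(0.0, np.sqrt(dt), size=n_paths)
increments = x - x0

center = -(1 - np.exp(-alpha * Delta)) * x0                   # Equation (11)
scale = (beta / (2 * alpha) ** 0.5) * (
    1 - np.exp(-2 * alpha * Delta)
) ** 0.5                                                      # Equation (12), p = 2
```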

4. Three Ratios

With regard to the unconditional and conditional statistical distributions of the increment X_{t+Δ} − X_t (which were presented in Section 3), this section will devise and analyze three informative ratios: a ‘signal-to-noise’ ratio (Section 4.1); a ‘noise-to-noise’ ratio (Section 4.2); and a ‘tail-to-tail’ ratio (Section 4.3). The three ratios will involve the function
$$\phi_p(v) = \frac{\left( 1 - \exp\left(-v\right) \right)^{p}}{1 - \exp\left(-p v\right)} , \tag{13}$$
where: the function’s variable v is positive (0 < v < ∞); and the function’s parameter p is the Levy exponent (0 < p ≤ 2). The properties of the function φ_p(v), which will be used throughout this section, are derived in Appendix A.
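The properties of φ_p(v) that this section relies on can also be probed by direct tabulation. A minimal Python sketch, assuming numpy; the sampled exponents and grid are arbitrary.

```python
import numpy as np

def phi(p, v):
    # The function of Equation (13)
    return (1 - np.exp(-v)) ** p / (1 - np.exp(-p * v))

v = np.linspace(0.01, 8.0, 800)
# Cauchy case p = 1: the function is identically 1
# Sub-Cauchy case 0 < p < 1: monotone decreasing, diverging as v -> 0
# Super-Cauchy / Gauss case 1 < p <= 2: monotone increasing from 0 to 1
```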

4.1. Signal-to-Noise Ratio

Consider a statistical distribution of interest, which has a real mean and a positive standard deviation. The distribution’s signal-to-noise ratio is that of the following quantities: the numerator is the absolute value of the distribution’s mean; and the denominator is the distribution’s standard deviation. As its name suggests, the ratio measures how strong the distribution’s ‘signal’ (i.e., mean) is—measured relative to the distribution’s ‘noise’ (i.e., standard deviation).
When the distribution of interest is Levy, the mean is well defined only when the Levy exponent is in the range 1 < p ≤ 2, and the standard deviation is positive and finite only when the Levy exponent is p = 2. So, except in the Gauss case (p = 2), the signal-to-noise ratio cannot be applied ‘as is’ to the Levy distribution.
Nonetheless, with some ‘tinkering’, the signal-to-noise ratio can be modified to fit the Levy distribution. Indeed, on the one hand, we can replace the mean with the median—which is well defined, and which coincides with the mean when the latter is well defined ( 1 < p 2 ). On the other hand, we can replace the standard deviation with a measure of randomness σ [ · ] that is well defined for the Levy distribution (e.g., the above-mentioned perplexity, inverse participation ratio, and diversity).
Replacing the mean with the median is akin—in the context of signal processing—to replacing moving-average filters with median filters [145,146,147]. Also, replacing the standard deviation with other measures of randomness is akin—in the context of financial markets—to replacing volatility with entropy-based measures [148,149,150].
As shown in Section 3.1, the increment’s unconditional distribution is Levy with a zero center. In turn, the modified signal-to-noise ratio of the increment’s unconditional distribution is zero. Matters change dramatically when switching from the increment’s unconditional distribution to its conditional distribution.
As shown in Section 3.2, the increment’s conditional distribution is Levy, with a center specified in Equation (11) and a scale specified in Equation (12). In turn, fact F1 of Section 2.2 and Equation (4) imply that the modified signal-to-noise ratio of the increment’s conditional distribution is:
$$\frac{\left| \mathrm{M}\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right] \right|}{\sigma\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right]} = \underbrace{\frac{\left( \alpha p \right)^{1/p}}{\beta \sigma_p} \left| X_t \right|}_{\omega} \cdot \left[ \phi_p\left( \alpha \Delta \right) \right]^{1/p} , \tag{14}$$
where the coefficient σ_p is that of Equation (4). Note that the particular choice of the measure of randomness σ[·] affects the signal-to-noise ratio of Equation (14) only via the coefficient σ_p.
It follows straightforwardly from Equation (13) that lim_{v→∞} φ_p(v) = 1, and that when the Levy exponent is p = 1 the function is flat: φ_1(v) ≡ 1. It is shown in Appendix A that the shape of the function φ_p(v) is determined by the Levy exponent p as follows. When the Levy exponent is in the range 0 < p < 1: lim_{v→0} φ_p(v) = ∞, and the function is monotone decreasing. When the Levy exponent is in the range 1 < p ≤ 2: lim_{v→0} φ_p(v) = 0, and the function is monotone increasing.
Consider the given information not to be zero: X_t ≠ 0. Due to the properties of the function φ_p(v), the asymptotic value, as Δ → ∞, of the signal-to-noise ratio of Equation (14) is the positive number ω. Also, as a function of the duration Δ, the signal-to-noise ratio of Equation (14) displays the following behaviors (see Figure 1).
Sub-Cauchy ( 0 < p < 1 ) case: the ratio is monotone decreasing from ∞ to its asymptotic value.
Cauchy ( p = 1 ) case: the ratio is flat, and its constant value is its asymptotic value.
Super-Cauchy ( 1 < p < 2 ) and Gauss ( p = 2 ) cases: the ratio is monotone increasing from 0 to its asymptotic value.

4.2. Noise-to-Noise Ratio

As in Section 4.1, consider a measure of randomness σ[·] that is well defined for the Levy distribution. The noise-to-noise ratio of the increment X_{t+Δ} − X_t is that of the following quantities: the numerator is the measure of randomness of the increment’s conditional distribution; and the denominator is the measure of randomness of the increment’s unconditional distribution.
Equation (4) implies that the noise-to-noise ratio is a scale-to-scale ratio—that of the scale of the increment’s conditional distribution to the scale of the increment’s unconditional distribution. These scales are specified in Equations (9) and (12), and (after a bit of algebra) they yield the ratio
$$\frac{\sigma\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right]}{\sigma\left[ X_{t+\Delta} - X_t \right]} = \frac{1}{\left[ 1 + \phi_p\left( \alpha \Delta \right) \right]^{1/p}} . \tag{15}$$
The noise-to-noise ratio of Equation (15) is universal: as it is a scale-to-scale ratio—it is invariant with respect to the particular choice of the measure of randomness σ [ · ] (that is well defined for the Levy distribution).
The noise-to-noise ratio of Equation (15) quantifies the extent to which the given information X t reduces the measured randomness of the increment X t + Δ X t . This ratio takes values in the unit interval: it is bounded from below by the value zero—which manifests full ( 100 % ) reduction in randomness; and it is bounded from above by the value one—which manifests no ( 0 % ) reduction in randomness.
The properties of the function φ_p(v) were described in Section 4.1. Due to these properties, the asymptotic value, as Δ → ∞, of the noise-to-noise ratio of Equation (15) is (1/2)^{1/p}. Also, as a function of the duration Δ, the noise-to-noise ratio of Equation (15) displays the following behaviors (see Figure 2).
Sub-Cauchy ( 0 < p < 1 ) case: the ratio is monotone increasing from 0 to its asymptotic value.
Cauchy ( p = 1 ) case: the ratio is flat, and its constant value is its asymptotic value.
Super-Cauchy ( 1 < p < 2 ) and Gauss ( p = 2 ) cases: the ratio is monotone decreasing from 1 to its asymptotic value.
As noted in Section 2.4, the parameter α is the damping coefficient of the Langevin Equation (6), and it is positive. It follows straightforwardly from Equation (15) that the monotonicity behavior of the noise-to-noise ratio—with respect to the duration Δ —holds identically also with respect to the parameter α .
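The counter-intuitive role of the Levy exponent can be made concrete by evaluating Equation (15) numerically. A minimal sketch, assuming numpy; the value αΔ = 3 is an arbitrary illustrative choice. The heavier the tails of the driving noise (the smaller p), the smaller the noise-to-noise ratio, i.e., the less noisy the prediction of the increment.

```python
import numpy as np

def phi(p, v):
    return (1 - np.exp(-v)) ** p / (1 - np.exp(-p * v))   # Equation (13)

def noise_to_noise(p, v):
    # Equation (15), with v = alpha * Delta
    return 1.0 / (1.0 + phi(p, v)) ** (1.0 / p)

v = 3.0   # a fixed value of alpha * Delta
ratios = {p: noise_to_noise(p, v) for p in (0.5, 1.0, 1.5, 2.0)}
# ratios increase with p: heavier-tailed driving noise (smaller p) yields a
# smaller ratio, hence a sharper prediction -- the counter-intuitive conclusion
```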

4.3. Tail-to-Tail Ratio

As in Section 2.2, consider a real-valued random variable X whose statistical distribution is Levy with center c, scale s, and Levy exponent p. In the Gauss case ( p = 2 ), the probability tails of random variable X are ‘light’, i.e., they exhibit a super-exponential decay. In sharp contrast, when the Levy exponent is in the range 0 < p < 2 , then the probability tails of random variable X are ‘heavy’ [46,47]. Specifically, the heavy probability tails exhibit the following power-law decay [142,143,144]:
$$\Pr\left( \left| X - c \right| > x \right) \approx \tau_p s^{p} \cdot \frac{1}{x^{p}} , \tag{16}$$
where ≈ denotes asymptotic equality in the limit x → ∞, and where τ_p is a tail coefficient (namely, τ_p is a positive number that depends on the Levy exponent p alone).
Now, as in Section 4.1 and Section 4.2, consider the increment X_{t+Δ} − X_t. When the Levy exponent is in the range 0 < p < 2, the increment’s tail-to-tail ratio is that of the following quantities: the numerator is the right-hand side of Equation (16), with regard to the increment’s conditional distribution; and the denominator is the right-hand side of Equation (16), with regard to the increment’s unconditional distribution. In turn, the tail-to-tail ratio is the pth power of the scale-to-scale ratio, i.e., that of the scale of the increment’s conditional distribution to the scale of the increment’s unconditional distribution.
So, the tail-to-tail ratio is the pth power of the noise-to-noise ratio of Equation (15), namely 1/[1 + φ_p(αΔ)]. Similarly to the noise-to-noise ratio of Equation (15), the tail-to-tail ratio also takes values in the unit interval: it is bounded from below by the value zero, and it is bounded from above by the value one.
The properties of the function φ_p(v) were described in Section 4.1. Due to these properties, the asymptotic value, as Δ → ∞, of the tail-to-tail ratio is 1/2. Also, as a function of the duration Δ, the tail-to-tail ratio displays the following behaviors (see Figure 3).
Sub-Cauchy ( 0 < p < 1 ) case: the ratio is monotone increasing from 0 to its asymptotic value.
Cauchy ( p = 1 ) case: the ratio is flat, and its constant value is its asymptotic value.
Super-Cauchy ( 1 < p < 2 ) case: the ratio is monotone decreasing from 1 to its asymptotic value.
As noted in Section 2.4, the parameter α is the damping coefficient of the Langevin Equation (6), and it is positive. It follows straightforwardly that the monotonicity behavior of the tail-to-tail ratio 1/[1 + φ_p(αΔ)], with respect to the duration Δ, holds identically also with respect to the parameter α.

5. Gauss Approach

As described in the introduction, the OUP is a ‘regular’ stochastic model, and there are two main approaches that turn it into an ‘anomalous’ stochastic model: Levy and Gauss. The Levy approach was addressed in the sections above. This section addresses the Gauss approach.
Being a Gaussian stationary process (with zero means), the OUP is characterized by its auto-correlation function, which is exponential. Specifically, the correlation of the OUP positions X_t and X_{t+Δ} is ρ_OUP(Δ) = exp(−αΔ), where the exponent α is the positive damping coefficient of the Langevin Equation (6). Evidently, the OUP auto-correlation function ρ_OUP(Δ) (Δ ≥ 0) is monotone decreasing from ρ_OUP(0) = 1 to lim_{Δ→∞} ρ_OUP(Δ) = 0.
In the Gauss approach, the OUP is replaced by a Gaussian stationary process (with zero means) that is characterized by an auto-correlation function ρ(Δ) (Δ ≥ 0). Akin to the exponential auto-correlation function of the OUP, the auto-correlation function ρ(Δ) is considered to be monotone decreasing from ρ(0) = 1 to lim_{Δ→∞} ρ(Δ) = 0. The monotonicity of the auto-correlation function, which holds for the OUP, and which is set in the Gauss approach, is a natural assumption. Indeed, as ρ(Δ) is the correlation of the positions at the time points t and t + Δ, it is natural to assume the following: (i) the greater the temporal gap between the two time points, the lesser the correlation of the corresponding positions; and (ii) when the temporal gap grows infinitely large, the correlation decays to zero.
With regard to a process that is specified by the Gauss approach, the following results are proved in Appendix A. As with the OUP, the positions of the process are real-valued, and the position at the time point t is denoted X t .
Signal-to-noise ratio. The counterpart of the modified signal-to-noise ratio of Equation (14) is
$$\frac{\left| \mathrm{M}\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right] \right|}{\sigma\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right]} = \underbrace{\frac{\left| X_t \right|}{\sigma_2 \sqrt{V}}}_{\omega} \cdot \sqrt{\frac{2}{1 + \rho(\Delta)} - 1} , \tag{17}$$
where: the coefficient σ_2 is that of Equation (4); and V is the positions’ variance. In turn, the monotonicity of the auto-correlation function ρ(Δ) implies that (when the given information is not zero, X_t ≠ 0): as a function of the duration Δ, the signal-to-noise ratio of Equation (17) is monotone increasing from 0 to ω.
Noise-to-noise ratio. The counterpart of the noise-to-noise ratio of Equation (15) is
$$\frac{\sigma\left[ X_{t+\Delta} - X_t \,\middle|\, X_t \right]}{\sigma\left[ X_{t+\Delta} - X_t \right]} = \sqrt{\frac{1 + \rho(\Delta)}{2}} . \tag{18}$$
In turn, the monotonicity of the auto-correlation function ρ(Δ) implies that: as a function of the duration Δ, the noise-to-noise ratio of Equation (18) is monotone decreasing from 1 to 1/√2.
Tail-to-tail ratio. For any positive duration Δ , the tail-to-tail ratio is zero.
The shapes noted above—increasing signal-to-noise ratio, decreasing noise-to-noise ratio, and zero tail-to-tail ratio—are invariant with respect to the auto-correlation function ρ Δ . Thus, in particular, these shapes hold for the OUP.
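The Gauss-approach ratios admit a direct Monte Carlo check. In the following sketch (numpy assumed; V, ρ, and the sample size are arbitrary illustrative choices), stationary Gaussian pairs with correlation ρ(Δ) are sampled, and the noise-to-noise ratio of Equation (18) is recovered as the ratio of the conditional to the unconditional randomness of the increment.

```python
import numpy as np

rng = np.random.default_rng(3)
V, rho, n = 1.0, 0.6, 1_000_000

# Stationary Gaussian pairs (X_t, X_{t+Delta}) with correlation rho(Delta)
x_t = rng.normal(0.0, np.sqrt(V), size=n)
x_next = rho * x_t + rng.normal(0.0, np.sqrt(V * (1 - rho ** 2)), size=n)
inc = x_next - x_t

uncond = inc.std()                       # unconditional randomness of the increment
# conditional randomness: what remains after the deterministic part
# (rho - 1) * X_t of the increment is removed
cond = (inc - (rho - 1.0) * x_t).std()

ratio = cond / uncond
predicted = np.sqrt((1 + rho) / 2.0)     # Equation (18)
```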

6. Discussion

The three ratios that were devised in Section 4 quantify, each from its own perspective, the extent to which the information X_t affects the statistics of the increment X_{t+Δ} − X_t. In other words, these ratios quantify the memory of the process under consideration: the Levy-driven OUP in Section 4; and the Gaussian stationary process (as specified by the Gauss approach) in Section 5.
This section concludes with a discussion of the results established in Section 4 and Section 5, and of the results’ counter-intuitive nature. The discussion addresses the following issues: Levy vs. Gauss comparison (Section 6.1); the Cauchy threshold (Section 6.2); Noah vs. Joseph comparison (Section 6.3); and Levy fluctuations (Section 6.4).

6.1. Levy vs. Gauss

The main results of Section 4 and Section 5—regarding the monotonicity of the three ratios with respect to the positive duration Δ —are summarized in Table 1. As evident from this table, the Levy approach yields memory effects, whereas the Gauss approach does not. Indeed, in the Gauss approach, each ratio displays a single type of behavior—which is identical for the OUP on the one hand, and for the Gaussian stationary process on the other hand. In sharp contrast, in the Levy approach each ratio displays three different types of behavior.
In the space of real-valued stationary processes, the OUP can be pictured as a ‘junction’ from which the two approaches fork out. The Levy approach is parameterized by the Levy exponent, 0 < p ≤ 2, and hence it has a single degree of freedom. The Gauss approach is parameterized by the monotone decreasing auto-correlation function, ρ(Δ) (Δ ≥ 0), and hence it has infinitely many degrees of freedom.
Reasonably, one would expect that the Gauss approach (with its infinitely many degrees of freedom) generates richer statistical behaviors than the Levy approach (with its single degree of freedom). Yet, from the perspectives of the three ratios, the very converse holds. Indeed, the following counter-intuitive Levy vs. Gauss conclusion is attained: the Levy approach generates richer statistical behaviors than the Gauss approach.

6.2. Cauchy Threshold

The Levy exponent p = 1 , which characterizes the special case of the Cauchy distribution and the Cauchy noise, was termed ‘Cauchy threshold’ in Section 2. Indeed, as explained in that section, the mean behavior of the Levy distribution changes dramatically as the Levy exponent crosses the Cauchy threshold, and an intrinsic convexity of the Levy noise emerges/vanishes as the Levy exponent crosses the Cauchy threshold.
As evident from Table 1, the Levy exponent p = 1 assumes a ‘threshold role’ also with regard to the three ratios: signal-to-noise, noise-to-noise, and tail-to-tail. Indeed, for each of these ratios, the monotonicity changes dramatically as the Levy exponent crosses the Cauchy threshold. For the signal-to-noise ratio and the noise-to-noise ratio, the following holds: above the Cauchy threshold, the monotonicity is identical to that of the OUP; and below the Cauchy threshold, the monotonicity flips, and it is antithetical to that of the OUP.

6.3. Noah vs. Joseph

The Levy approach and the Gauss approach ‘upgrade’ the OUP in two ‘orthogonal directions’: the Noah effect in the former, and the Joseph effect in the latter. The orthogonality of these directions shall now be explained and discussed.
In the Levy approach, the driving noise is changed from GWN to Levy noise. In turn, the statistics of the positions change from Gauss to Levy, i.e., from finite variances and light tails to infinite variances and heavy tails [46,47]. Mandelbrot and Wallis termed this change “Noah effect” [48]. Note that this change affects amplitudinal fluctuations in the noise, and it does not affect temporal dependencies of the noise. Indeed, both GWN and Levy noise share the L2 Levy property of Section 2.3: the integrals of the noise over disjoint time intervals are independent random variables.
In the Gauss approach, the auto-correlation function is changed from exponential (which characterizes the OUP) to general (which is monotone decreasing to zero). In turn, the stationary correlation structure may change from short-range to long-range [123,124,125]. Namely, for a correlation structure that is determined by the auto-correlation function $\rho(\Delta)$ ($\Delta \ge 0$): ‘short-range’ is when the auto-correlation is integrable at infinity, $\int_1^\infty \rho(\Delta)\,d\Delta < \infty$; and ‘long-range’ is when the auto-correlation is not integrable at infinity, $\int_1^\infty \rho(\Delta)\,d\Delta = \infty$. Mandelbrot and Wallis termed this change “Joseph effect” [48]. Note that this change affects temporal dependencies of the process, and it does not affect amplitudinal fluctuations of the process. Indeed, the statistics of the positions remain Gauss, and hence their variances remain finite and their tails remain ‘light’.
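The short-range vs. long-range dichotomy can be made concrete numerically. The following sketch (an illustration, not part of the paper’s analysis) approximates the tail integral $\int_1^T \rho(\Delta)\,d\Delta$ for an exponential auto-correlation (the OUP case) and for a hypothetical power-law auto-correlation $\rho(\Delta)=\Delta^{-1/2}$, as the cutoff $T$ grows:

```python
import numpy as np

def tail_integral(rho, T, n=200_000):
    # Trapezoidal approximation of the integral of rho over [1, T];
    # a log-spaced grid resolves the region near Delta = 1.
    grid = np.geomspace(1.0, T, n)
    y = rho(grid)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))

exp_rho = lambda d: np.exp(-d)       # exponential auto-correlation: short-range (OUP)
pow_rho = lambda d: d ** (-0.5)      # power-law auto-correlation: long-range

for T in (1e2, 1e4, 1e6):
    print(f"T={T:.0e}  exponential: {tail_integral(exp_rho, T):.4f}"
          f"  power-law: {tail_integral(pow_rho, T):.1f}")
```

The exponential tail integral saturates at $e^{-1}\approx 0.368$, whereas the power-law tail integral ($2(\sqrt{T}-1)$ in closed form) grows without bound as $T\to\infty$—the hallmark of long-range correlations.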
Reasonably, one would expect that the Noah effect will not affect the process memory, whereas the Joseph effect will. Yet, from the perspectives of the three ratios, the very converse holds. Indeed, the Levy vs. Gauss conclusion (stated in Section 6.1) further yields the following counter-intuitive Noah vs. Joseph conclusion: the Noah effect affects process memory, whereas the Joseph effect does not.

6.4. Levy Fluctuations

As noted in the opening of Section 4, the parameter of the function $\phi_p(v)$ (of Equation (13)) is the Levy exponent $0 < p \le 2$. In addition, as established in Section 4, the three ratios of that section involve the function $\phi_p(v)$. Now, keep the variable $v$ fixed, and vary the Levy exponent $p$.
It is shown in Appendix A that—with respect to the Levy exponent $p$—the shape of $\phi_p(v)$ is as follows: it is monotone decreasing from the limit value $\lim_{p\to 0}\phi_p(v)=\infty$ to the value $\phi_2(v)=\tanh(v/2)$. In turn, the noise-to-noise ratio of Equation (15) and the tail-to-tail ratio of Section 4.3 display the behaviors stated below (see Figure 4). The explanation why the shape of $\phi_p(v)$ implies the behavior of the noise-to-noise ratio is detailed in Appendix A.
As a function of the Levy exponent $p$, the noise-to-noise ratio is monotone increasing from $0$ to $\sqrt{\frac{1}{2}\left[1+\exp(-\alpha\Delta)\right]}$.
As a function of the Levy exponent $p$, the tail-to-tail ratio is monotone increasing from $0$ to $\frac{1}{2}\left[1+\exp(-\alpha\Delta)\right]$.
The probability tails of the Levy distribution were described in Equation (16). The following fact is evident from Equation (16): the smaller the Levy exponent p, the ‘heavier’ the probability tails—and hence the ‘wilder’ the fluctuations in the Levy noise.
As noted in Section 4.2, the noise-to-noise ratio of Equation (15) quantifies the extent to which the given information $X_t$ reduces the measured randomness of the increment $X_{t+\Delta}-X_t$. Reasonably, one would expect that the ‘wilder’ the fluctuations in the Levy noise, the lesser the reduction in randomness. However, the behavior of the noise-to-noise ratio asserts the very converse: the smaller the Levy exponent $p$, the smaller the noise-to-noise ratio. So, the following counter-intuitive fluctuations conclusion is attained: the wilder the fluctuations in the Levy noise, the greater the reduction in randomness.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Appendix A.1. The Function of Equation (13)

As noted in the opening of Section 4: the function of Equation (13) is $\phi_p(v) = \left(1-e^{-v}\right)^p / \left(1-e^{-pv}\right)$; its variable is positive, $0 < v < \infty$; and its parameter is the Levy exponent, $0 < p \le 2$.

Appendix A.1.1. The Function of Equation (13) with Respect to Its Variable

It follows from Equation (13) that
$$
\phi_p(v) = \frac{\left(1-e^{-v}\right)^p}{1-e^{-pv}} = \frac{\left[\left(1-e^{-v}\right)/v\right]^{p}}{\left(1-e^{-pv}\right)/(pv)}\cdot\frac{1}{p}\,v^{p-1}.
\tag{A1}
$$
In turn, as $\lim_{x\to 0}\frac{1-e^{-x}}{x}=1$, Equation (A1) implies that
$$
\lim_{v\to 0}\phi_p(v) =
\begin{cases}
\infty & \text{when } 0<p<1,\\[2pt]
0 & \text{when } 1<p\le 2.
\end{cases}
\tag{A2}
$$
Setting $u=e^{-v}$ implies that
$$
\phi_p(v) = \frac{(1-u)^p}{1-u^p}.
\tag{A3}
$$
In turn, Equation (A3) implies that
$$
\frac{d\phi_p(v)}{dv}
= \frac{-p(1-u)^{p-1}\left(1-u^p\right)+(1-u)^p\,p\,u^{p-1}}{\left(1-u^p\right)^2}\cdot\frac{du}{dv}
= \frac{p(1-u)^{p-1}}{\left(1-u^p\right)^2}\left[\left(1-u^p\right)-(1-u)u^{p-1}\right]u
= \frac{p(1-u)^{p-1}u}{\left(1-u^p\right)^2}\left(1-u^{p-1}\right),
\tag{A4}
$$
where the second equality uses $du/dv=-u$. As the variable $v$ is positive, the variable $u$ takes values in the unit interval: $0<u<1$. Consequently, Equation (A4) implies that:
$$
\frac{d\phi_p(v)}{dv}<0 \ \text{ when } 0<p<1,
\qquad
\frac{d\phi_p(v)}{dv}>0 \ \text{ when } 1<p\le 2.
\tag{A5}
$$
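These shape statements admit a quick numerical sanity check. The sketch below (illustrative; the grid and exponent values are arbitrary choices) evaluates $\phi_p(v)$ on a fine grid of $v$ for exponents on either side of the Cauchy threshold $p=1$, confirming the opposite monotonicities of Equation (A5) and, at $p=2$, the identity $\phi_2(v)=\tanh(v/2)$ quoted in Section 6.4:

```python
import numpy as np

def phi(p, v):
    # Equation (13): phi_p(v) = (1 - e^{-v})^p / (1 - e^{-p v}).
    v = np.asarray(v, dtype=float)
    return (1.0 - np.exp(-v)) ** p / (1.0 - np.exp(-p * v))

v = np.linspace(1e-4, 10.0, 10_000)

for p in (0.5, 1.5):
    f = phi(p, v)
    trend = "decreasing" if np.all(np.diff(f) < 0) else "increasing"
    print(f"p={p}: phi is monotone {trend} in v; phi near v=0 is {f[0]:.3g}")

# At p = 2 the function collapses to tanh(v/2).
print("max |phi_2(v) - tanh(v/2)| =", np.max(np.abs(phi(2.0, v) - np.tanh(v / 2.0))))
```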

Appendix A.1.2. The Function of Equation (13) with Respect to Its Parameter

It follows from Equation (13) that
$$
\phi_p(v) = \frac{\left(1-e^{-v}\right)^p}{\left(1-e^{-pv}\right)/(pv)}\cdot\frac{1}{pv}.
\tag{A6}
$$
In turn, as $\lim_{p\to 0}\left(1-e^{-v}\right)^p=1$ and as $\lim_{x\to 0}\frac{1-e^{-x}}{x}=1$, Equation (A6) implies that
$$
\lim_{p\to 0}\phi_p(v)=\infty.
\tag{A7}
$$
Equation (A3) implies that
$$
\frac{\partial\phi_p(v)}{\partial p}
= \frac{(1-u)^p\ln(1-u)\left(1-u^p\right)+(1-u)^p\,u^p\ln u}{\left(1-u^p\right)^2}
= \frac{(1-u)^p}{\left(1-u^p\right)^2}\left[\ln(1-u)\left(1-u^p\right)+u^p\ln u\right].
\tag{A8}
$$
As noted above, the variable $u$ takes values in the unit interval: $0<u<1$. Consequently, $\ln(1-u)<0$ and $\ln u<0$, and hence
$$
\frac{\partial\phi_p(v)}{\partial p}<0.
\tag{A9}
$$
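The sign statement of Equation (A9)—$\phi_p(v)$ is monotone decreasing in the Levy exponent $p$—can likewise be verified numerically (an illustrative sketch; the grid values are arbitrary):

```python
import numpy as np

def phi(p, v):
    # Equation (13): phi_p(v) = (1 - e^{-v})^p / (1 - e^{-p v}).
    return (1.0 - np.exp(-v)) ** p / (1.0 - np.exp(-p * v))

p_grid = np.linspace(0.05, 2.0, 400)
for v in (0.1, 1.0, 5.0):
    f = phi(p_grid, v)
    print(f"v={v}: phi falls from {f[0]:.3g} (p=0.05) to {f[-1]:.3g} (p=2);"
          f" monotone decreasing: {bool(np.all(np.diff(f) < 0))}")
```

Note that $\phi_1(v)=1$ identically—the Cauchy threshold sits exactly where the function crosses the value one.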

Appendix A.1.3. A Transformation of the Function of Equation (13)

Consider the function
$$
\psi_p(v) = \left[1+\phi_p(v)\right]^{-1/p},
\tag{A10}
$$
where $\phi_p(v)$ is the function of Equation (13), and (as above): the variable is positive, $0<v<\infty$; and the parameter is the Levy exponent, $0<p\le 2$. Equation (A10) implies that
$$
\ln\psi_p(v) = -\frac{1}{p}\ln\left[1+\phi_p(v)\right].
\tag{A11}
$$
In turn, Equation (A11) implies that
$$
\frac{\partial\psi_p(v)/\partial p}{\psi_p(v)} = \frac{\partial}{\partial p}\ln\psi_p(v)
= \frac{1}{p^2}\ln\left[1+\phi_p(v)\right]-\frac{1}{p}\cdot\frac{\partial\phi_p(v)/\partial p}{1+\phi_p(v)}.
\tag{A12}
$$
As $\phi_p(v)$ is positive, $\ln\left[1+\phi_p(v)\right]>0$. Consequently, Equations (A9) and (A12) imply that
$$
\frac{\partial\psi_p(v)}{\partial p}>0.
\tag{A13}
$$
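Per Equation (A13), the transformed function $\psi_p(v)$ is monotone increasing in the Levy exponent. The following numerical sketch (illustrative choices of $v$) confirms this, and also checks that at $p=2$ the function reduces to $\psi_2(v)=\sqrt{\tfrac{1}{2}\left(1+e^{-v}\right)}$—consistent with the limit value quoted in Section 6.4 (with $v=\alpha\Delta$):

```python
import numpy as np

def phi(p, v):
    # Equation (13).
    return (1.0 - np.exp(-v)) ** p / (1.0 - np.exp(-p * v))

def psi(p, v):
    # Equation (A10): psi_p(v) = [1 + phi_p(v)]^(-1/p).
    return (1.0 + phi(p, v)) ** (-1.0 / p)

p_grid = np.linspace(0.05, 2.0, 400)
for v in (0.5, 2.0):
    g = psi(p_grid, v)
    print(f"v={v}: monotone increasing in p: {bool(np.all(np.diff(g) > 0))};"
          f" psi at p=2 is {g[-1]:.6f}"
          f" vs sqrt((1+e^-v)/2) = {np.sqrt(0.5 * (1 + np.exp(-v))):.6f}")
```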

Appendix A.2. Bivariate Normal Calculations

Consider a random vector $(Z_1,Z_2)$ whose statistical distribution is bivariate normal [6,7,8] with the following characteristics. (I) Zero means: $E[Z_1]=0$ and $E[Z_2]=0$. (II) Identical variances: $\mathrm{Var}[Z_1]=v$ and $\mathrm{Var}[Z_2]=v$, where $v$ is a positive number. (III) Covariance $\mathrm{Cov}[Z_1,Z_2]=rv$, where $r$ is a correlation whose magnitude is smaller than one: $|r|<1$ (this condition means that the components of the random vector $(Z_1,Z_2)$ are not perfectly correlated). In addition, consider a general measure of randomness $\sigma[\cdot]$ (as described in Section 2.1).

Appendix A.2.1. Signal-to-Noise Ratio

The conditional statistical distribution of $Z_2$—given the information $Z_1$—is normal with the following characteristics [6,7,8]: conditional mean
$$
E\left[Z_2\,\middle|\,Z_1\right] = rZ_1;
\tag{A14}
$$
and conditional variance
$$
\mathrm{Var}\left[Z_2\,\middle|\,Z_1\right] = \left(1-r^2\right)v.
\tag{A15}
$$
In turn, the conditional statistical distribution of the difference $Z_2-Z_1$—given the information $Z_1$—is normal with the following characteristics: conditional mean
$$
E\left[Z_2-Z_1\,\middle|\,Z_1\right] = (r-1)Z_1;
\tag{A16}
$$
and conditional variance
$$
\mathrm{Var}\left[Z_2-Z_1\,\middle|\,Z_1\right] = \left(1-r^2\right)v.
\tag{A17}
$$
As the mean and the median of the normal statistical distribution coincide, Equation (A16) implies that
$$
M\left[Z_2-Z_1\,\middle|\,Z_1\right] = (r-1)Z_1.
\tag{A18}
$$
Equations (4) and (A17) (together with the fact that the variance is the square of the standard deviation) imply that
$$
\sigma\left[Z_2-Z_1\,\middle|\,Z_1\right] = \sigma_2\sqrt{\left(1-r^2\right)v},
\tag{A19}
$$
where the coefficient $\sigma_2$ is that of Equation (4). In turn, Equations (A18) and (A19) yield the following signal-to-noise ratio:
$$
\frac{\left|M\left[Z_2-Z_1\,\middle|\,Z_1\right]\right|}{\sigma\left[Z_2-Z_1\,\middle|\,Z_1\right]}
= \frac{\left|Z_1\right|}{\sigma_2\sqrt{v}}\cdot\frac{1-r}{\sqrt{1-r^2}}
= \frac{\left|Z_1\right|}{\sigma_2\sqrt{v}}\cdot\sqrt{\frac{2}{1+r}-1}.
\tag{A20}
$$
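The conditional-law computations (A14)–(A20) admit a quick Monte Carlo sanity check (illustrative only; sample size and parameter values are arbitrary choices). Sampling the bivariate normal vector via its conditional decomposition, the regression of $Z_2-Z_1$ on $Z_1$ should recover the slope $r-1$ of Equation (A16) and the residual variance $(1-r^2)v$ of Equation (A17):

```python
import numpy as np

rng = np.random.default_rng(0)
v, r, n = 2.0, 0.6, 1_000_000

z1 = rng.normal(0.0, np.sqrt(v), n)
# Conditional law (A14)-(A15): Z2 | Z1 ~ Normal(r Z1, (1 - r^2) v).
z2 = r * z1 + rng.normal(0.0, np.sqrt((1.0 - r**2) * v), n)

d = z2 - z1
slope = np.cov(d, z1)[0, 1] / np.var(z1)   # estimates r - 1        (A16)
resid_var = np.var(d - slope * z1)         # estimates (1 - r^2) v  (A17)
print(f"slope: {slope:.4f} (exact: {r - 1})")
print(f"residual variance: {resid_var:.4f} (exact: {(1 - r**2) * v})")
```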

Appendix A.2.2. Noise-to-Noise Ratio

The statistical distribution of the difference $Z_2-Z_1$ is normal with the following characteristics: mean
$$
E\left[Z_2-Z_1\right] = E\left[Z_2\right]-E\left[Z_1\right] = 0;
\tag{A21}
$$
and variance
$$
\mathrm{Var}\left[Z_2-Z_1\right] = \mathrm{Var}\left[Z_1\right]+\mathrm{Var}\left[Z_2\right]-2\,\mathrm{Cov}\left[Z_1,Z_2\right] = 2v-2rv = (1-r)2v.
\tag{A22}
$$
Consequently, Equations (A17) and (A22) yield the following variance-to-variance ratio:
$$
\frac{\mathrm{Var}\left[Z_2-Z_1\,\middle|\,Z_1\right]}{\mathrm{Var}\left[Z_2-Z_1\right]} = \frac{\left(1-r^2\right)v}{(1-r)2v} = \frac{1+r}{2}.
\tag{A23}
$$
In turn, Equations (4) and (A23) (together with the fact that the variance is the square of the standard deviation) yield the following noise-to-noise ratio:
$$
\frac{\sigma\left[Z_2-Z_1\,\middle|\,Z_1\right]}{\sigma\left[Z_2-Z_1\right]} = \sqrt{\frac{1+r}{2}}.
\tag{A24}
$$
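Equation (A24) can be probed the same way: the ratio of the conditional standard deviation of $Z_2-Z_1$ (Equation (A17)) to a Monte Carlo estimate of its unconditional standard deviation (Equation (A22)) should equal $\sqrt{(1+r)/2}$ for any admissible correlation (an illustrative sketch with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(1)
v, n = 1.0, 2_000_000

for r in (-0.5, 0.0, 0.8):
    z1 = rng.normal(0.0, np.sqrt(v), n)
    z2 = r * z1 + rng.normal(0.0, np.sqrt((1.0 - r**2) * v), n)
    cond_sd = np.sqrt((1.0 - r**2) * v)   # conditional std, up to sigma_2  (A19)
    uncond_sd = np.std(z2 - z1)           # estimates sqrt((1 - r) 2 v)     (A22)
    print(f"r={r}: ratio {cond_sd / uncond_sd:.4f}"
          f" vs sqrt((1+r)/2) = {np.sqrt((1.0 + r) / 2.0):.4f}")
```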

Appendix A.2.3. Tail-to-Tail Ratio

With regard to the difference $Z_2-Z_1$, use the following shorthand notation: $a^2$ is the conditional variance of Equation (A17); and $b^2$ is the variance of Equation (A22). Note that
$$
b^2-a^2 = (1-r)2v-\left(1-r^2\right)v = (1-r)^2 v.
\tag{A25}
$$
As the correlation is smaller than one, $r<1$, the difference of Equation (A25) is positive: $b^2-a^2>0$.
The probability that the absolute value of a ‘standard’ normal random variable (i.e., with mean zero and with variance one) is greater than the positive level $l$ is
$$
\Phi(l) := \int_l^\infty \sqrt{\frac{2}{\pi}}\exp\left(-\frac{1}{2}u^2\right)du.
\tag{A26}
$$
In turn, the probability that the deviation of a general normal random variable from its mean is greater than the positive number $x$ is $\Phi(x/s)$, where $s$ is the standard deviation of the general normal random variable.
So, with regard to the conditional statistical distribution of the difference $Z_2-Z_1$, given the information $Z_1$, Equations (A16) and (A17) imply that
$$
\Pr\left[\left|Z_2-Z_1-(r-1)Z_1\right|>x \,\middle|\, Z_1\right] = \Phi\left(\frac{x}{a}\right).
\tag{A27}
$$
With regard to the (unconditional) statistical distribution of the difference $Z_2-Z_1$, Equations (A21) and (A22) imply that
$$
\Pr\left[\left|Z_2-Z_1-0\right|>x\right] = \Phi\left(\frac{x}{b}\right).
\tag{A28}
$$
L’Hopital’s rule and Equation (A26) imply that
$$
\lim_{x\to\infty}\frac{\Phi(x/a)}{\Phi(x/b)}
= \lim_{x\to\infty}\frac{\Phi'(x/a)\cdot\frac{1}{a}}{\Phi'(x/b)\cdot\frac{1}{b}}
= \frac{b}{a}\lim_{x\to\infty}\frac{\exp\left[-\frac{1}{2}(x/a)^2\right]}{\exp\left[-\frac{1}{2}(x/b)^2\right]}
= \frac{b}{a}\lim_{x\to\infty}\exp\left[-\frac{1}{2}\,\frac{b^2-a^2}{a^2b^2}\,x^2\right].
\tag{A29}
$$
In turn, as $b^2-a^2>0$, Equation (A29) yields the following tail-to-tail ratio:
$$
\lim_{x\to\infty}\frac{\Phi(x/a)}{\Phi(x/b)} = 0.
\tag{A30}
$$
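The Gaussian tail collapse of Equation (A30) sets in quickly. A small numerical sketch illustrates this using the complementary error function (since $\Pr\left[|N(0,s^2)|>x\right]=\mathrm{erfc}\left(x/(s\sqrt{2})\right)$), with illustrative standard deviations $a=1$ and $b=1.5$:

```python
import math

def tail(x, s):
    # P(|Normal(0, s^2)| > x) = erfc(x / (s * sqrt(2))), i.e. Phi(x/s) of (A26).
    return math.erfc(x / (s * math.sqrt(2.0)))

a, b = 1.0, 1.5   # conditional (a) and unconditional (b) standard deviations, a < b
for x in (2.0, 5.0, 10.0, 20.0):
    print(f"x={x:>4}: tail ratio = {tail(x, a) / tail(x, b):.3e}")
```

The ratio decays like $\exp\left[-\tfrac{1}{2}\,\tfrac{b^2-a^2}{a^2b^2}\,x^2\right]$, in line with Equation (A29), so it is already negligible at moderate $x$.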

Appendix A.2.4. Gaussian Stationary Processes

Consider a Gaussian stationary process—as specified in the ‘Gauss direction’ of Section 5—with a monotone-decreasing auto-correlation function $\rho(\Delta)$ ($\Delta\ge 0$). Further consider the increment $X_{t+\Delta}-X_t$ of the Gaussian stationary process. Using the above notation, set the components of the bivariate normal random vector $(Z_1,Z_2)$ to be the following: $Z_1=X_t$ and $Z_2=X_{t+\Delta}$. In turn, the corresponding correlation is $r=\rho(\Delta)$, and hence: Equation (A20) yields the signal-to-noise ratio of Equation (17), and Equation (A24) yields the noise-to-noise ratio of Equation (18). Also, Equation (A30) implies that the tail-to-tail ratio is zero.

References

  1. Uhlenbeck, G.E.; Ornstein, L.S. On the theory of the Brownian motion. Phys. Rev. 1930, 36, 823. [Google Scholar] [CrossRef]
  2. Caceres, M.O.; Budini, A.A. The generalized Ornstein-Uhlenbeck process. J. Phys. A Math. Gen. 1997, 30, 8427. [Google Scholar] [CrossRef]
  3. Bezuglyy, V.; Mehlig, B.; Wilkinson, M.; Nakamura, K.; Arvedson, E. Generalized ornstein-uhlenbeck processes. J. Math. Phys. 2006, 47, 073301. [Google Scholar] [CrossRef]
  4. Maller, R.A.; Muller, G.; Szimayer, A. Ornstein-Uhlenbeck Processes and Extensions; Handbook of Financial Time Series; Springer: Berlin/Heidelberg, Germany, 2009; pp. 421–437. [Google Scholar]
  5. Doob, J.L. The Brownian movement and stochastic equations. Ann. Math. 1942, 43, 351–369. [Google Scholar] [CrossRef]
  6. MacKay, D.J.C. Introduction to Gaussian processes. NATO ASI Ser. F Comput. Syst. Sci. 1998, 168, 133–166. [Google Scholar]
  7. Ibragimov, I.; Rozanov, Y. Gaussian Random Processes; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  8. Lifshits, M. Lectures on Gaussian Processes; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  9. Lindgren, G. Stationary Stochastic Processes: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  10. Lindgren, G.; Rootzen, H.; Sandsten, M. Stationary Stochastic Processes for Scientists and Engineers; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  11. Hida, T. Stationary Stochastic Processes; MN-8; Princeton University Press: Princeton, NJ, USA, 2015; Volume 8. [Google Scholar]
  12. Gillespie, D.T. Markov Processes: An Introduction for Physical Scientists; Elsevier: Amsterdam, The Netherlands, 1991. [Google Scholar]
  13. Liggett, T.M. Continuous Time Markov Processes: An Introduction; American Mathematical Society: Providence, RI, USA, 2010; Volume 113. [Google Scholar]
  14. Dynkin, E.B. Theory of Markov Processes; Dover: Mineola, NY, USA, 2012. [Google Scholar]
  15. Langevin, P. Sur la theorie du mouvement Brownien. Compt. Rendus 1908, 146, 530–533. [Google Scholar]
  16. Coffey, W.; Kalmykov, Y.P. The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering; World Scientific: Singapore, 2012. [Google Scholar]
  17. Pavliotis, G.A. Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations; Springer: Berlin/Heidelberg, Germany, 2014; Volume 60. [Google Scholar]
  18. Borodin, A.N.; Salminen, P. Handbook of Brownian Motion: Facts and Formulae; Birkhauser: Basel, Switzerland, 2015. [Google Scholar]
  19. Debbasch, F.; Mallick, K.; Rivet, J.-P. Relativistic Ornstein-Uhlenbeck process. J. Stat. Phys. 1997, 88, 945–966. [Google Scholar] [CrossRef]
  20. Graversen, S.; Peskir, G. Maximal inequalities for the Ornstein-Uhlenbeck process. Proc. Am. Math. Soc. 2000, 128, 3035–3041. [Google Scholar] [CrossRef]
  21. Aalen, O.O.; Gjessing, H.K. Survival models based on the Ornstein-Uhlenbeck process. Lifetime Data Anal. 2004, 10, 407–423. [Google Scholar] [CrossRef]
  22. Larralde, H. A first passage time distribution for a discrete version of the Ornstein-Uhlenbeck process. J. Phys. A Math. Gen. 2004, 37, 3759. [Google Scholar] [CrossRef]
  23. Eliazar, I.; Klafter, J. Markov-breaking and the emergence of long memory in Ornstein-Uhlenbeck systems. J. Phys. A Math. Theor. 2008, 41, 122001. [Google Scholar] [CrossRef]
  24. Eliazar, I.; Klafter, J. From Ornstein-Uhlenbeck dynamics to long-memory processes and fractional Brownian motion. Phys. Rev. E 2009, 79, 021115. [Google Scholar] [CrossRef]
  25. Wilkinson, M.; Pumir, A. Spherical Ornstein-Uhlenbeck processes. J. Stat. Phys. 2011, 145, 113. [Google Scholar] [CrossRef]
  26. Gajda, J.; Wylomańska, A. Time-changed Ornstein-Uhlenbeck process. J. Phys. A Math. Theor. 2015, 48, 135004. [Google Scholar] [CrossRef]
  27. Bonilla, L.L. Active Ornstein-Uhlenbeck particles. Phys. Rev. E 2019, 100, 022601. [Google Scholar] [CrossRef]
  28. Sevilla, F.J.; Rodriguez, R.F.; Ruben Gomez-Solano, J. Generalized Ornstein-Uhlenbeck model for active motion. Phys. Rev. E 2019, 100, 032123. [Google Scholar] [CrossRef]
  29. Martin, D.; O’Byrne, J.; Cates, M.E.; Fodor, E.; Nardini, C.; Tailleur, J.; Wijland, F.V. Statistical mechanics of active Ornstein-Uhlenbeck particles. Phys. Rev. E 2021, 103, 032607. [Google Scholar] [CrossRef]
  30. Nguyen, G.H.P.; Wittmann, R.; Lowen, H. Active Ornstein–Uhlenbeck model for self-propelled particles with inertia. J. Phys. Condens. Matter 2021, 34, 035101. [Google Scholar] [CrossRef]
  31. Dabelow, L.; Eichhorn, R. Irreversibility in active matter: General framework for active Ornstein-Uhlenbeck particles. Front. Phys. 2021, 8, 582992. [Google Scholar] [CrossRef]
  32. Trajanovski, P.; Jolakoski, P.; Zelenkovski, K.; Iomin, A.; Kocarev, L.; Sandev, T. Ornstein-Uhlenbeck process and generalizations: Particle dynamics under comb constraints and stochastic resetting. Phys. Rev. E 2023, 107, 054129. [Google Scholar] [CrossRef]
  33. Trajanovski, P.; Jolakoski, P.; Kocarev, L.; Sandev, T. Ornstein-Uhlenbeck Process on Three-Dimensional Comb under Stochastic Resetting. Mathematics 2023, 11, 3576. [Google Scholar] [CrossRef]
  34. Dubey, A.; Pal, A. First-passage functionals for Ornstein Uhlenbeck process with stochastic resetting. arXiv 2023, arXiv:2304.05226. [Google Scholar] [CrossRef]
  35. Strey, H.H. Estimation of parameters from time traces originating from an Ornstein-Uhlenbeck process. Phys. Rev. E 2019, 100, 062142. [Google Scholar] [CrossRef] [PubMed]
  36. Janczura, J.; Magdziarz, M.; Metzler, R. Parameter estimation of the fractional Ornstein-Uhlenbeck process based on quadratic variation. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 103125. [Google Scholar] [CrossRef] [PubMed]
  37. Cherstvy, A.G.; Thapa, S.; Mardoukhi, Y.; Chechkin, A.V.; Metzler, R. Time averages and their statistical variation for the Ornstein-Uhlenbeck process: Role of initial particle distributions and relaxation to stationarity. Phys. Rev. E 2018, 98, 022134. [Google Scholar] [CrossRef]
  38. Thomas, P.J.; Lindner, B. Phase descriptions of a multidimensional Ornstein-Uhlenbeck process. Phys. Rev. E 2019, 99, 062221. [Google Scholar] [CrossRef] [PubMed]
  39. Mardoukhi, Y.; Chechkin, A.; Metzler, R. Spurious ergodicity breaking in normal and fractional Ornstein-Uhlenbeck process. New J. Phys. 2020, 22, 073012. [Google Scholar] [CrossRef]
  40. Giorgini, L.T.; Moon, W.; Wettlaufer, J.S. Analytical Survival Analysis of the Ornstein-Uhlenbeck Process. J. Stat. Phys. 2020, 181, 2404–2414. [Google Scholar] [CrossRef]
  41. Kearney, M.J.; Martin, R.J. Statistics of the first passage area functional for an Ornstein-Uhlenbeck process. J. Phys. A Math. Theor. 2021, 54, 055002. [Google Scholar] [CrossRef]
  42. Goerlich, R.; Li, M.; Albert, S.; Manfredi, G.; Hervieux, P.; Genet, C. Noise and ergodic properties of Brownian motion in an optical tweezer: Looking at regime crossovers in an Ornstein-Uhlenbeck process. Phys. Rev. E 2021, 103, 032132. [Google Scholar] [CrossRef]
  43. Smith, N.R. Anomalous scaling and first-order dynamical phase transition in large deviations of the Ornstein-Uhlenbeck process. Phys. Rev. E 2022, 105, 014120. [Google Scholar] [CrossRef]
  44. Kersting, H.; Orvieto, A.; Proske, F.; Lucchi, A. Mean first exit times of Ornstein-Uhlenbeck processes in high-dimensional spaces. J. Phys. A Math. Theor. 2023, 56, 215003. [Google Scholar] [CrossRef]
  45. Trajanovski, P.; Jolakoski, P.; Kocarev, L.; Metzler, R.; Sandev, T. Generalised Ornstein-Uhlenbeck process: Memory effects and resetting. J. Phys. Math. Theor. 2025, 58, 045001. [Google Scholar] [CrossRef]
  46. Adler, R.; Feldman, R.; Taqqu, M. A Practical Guide to Heavy Tails: Statistical Techniques and Applications; Springer: New York, NY, USA, 1998. [Google Scholar]
  47. Nair, J.; Wierman, A.; Zwart, B. The Fundamentals of Heavy-Tails: Properties, Emergence, and Identification; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  48. Mandelbrot, B.B.; Wallis, J.R. Noah, Joseph, and operational hydrology. Water Resour. Res. 1968, 4, 909–918. [Google Scholar] [CrossRef]
  49. Bertoin, J. Levy Processes; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  50. Ken-Iti, S. Levy Processes and Infinitely Divisible Distributions; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  51. Barndorff-Nielsen, O.E.; Mikosch, T.; Resnick, S.I. (Eds.) Levy Processes: Theory and Applications; Springer Science & Business Media: New York, NY, USA, 2001. [Google Scholar]
  52. Garbaczewski, P.; Olkiewicz, R. Ornstein-Uhlenbeck-Cauchy process. J. Math. Phys. 2000, 41, 6843–6860. [Google Scholar] [CrossRef]
  53. Eliazar, I.; Klafter, J. A growth-collapse model: Levy inflow, geometric crashes, and generalized Ornstein-Uhlenbeck dynamics. Phys. A Stat. Mech. Its Appl. 2004, 334, 1–21. [Google Scholar] [CrossRef]
  54. Jongbloed, G.; Meulen, F.H.V.D.; Vaart, A.W.V.D. Nonparametric inference for Levy-driven Ornstein-Uhlenbeck processes. Bernoulli 2005, 11, 759–791. [Google Scholar] [CrossRef]
  55. Eliazar, I.; Klafter, J. Levy, Ornstein-Uhlenbeck, and subordination: Spectral vs. jump description. J. Stat. Phys. 2004, 119, 165–196. [Google Scholar] [CrossRef]
  56. Eliazar, I.; Klafter, J. Stochastic Ornstein-Uhlenbeck Capacitors. J. Stat. Phys. 2005, 118, 177–198. [Google Scholar] [CrossRef]
  57. Brockwell, P.J.; Davis, R.A.; Yang, Y. Estimation for non-negative Levy-driven Ornstein-Uhlenbeck processes. J. Appl. Probab. 2007, 44, 977–989. [Google Scholar] [CrossRef]
  58. Magdziarz, M. Short and long memory fractional Ornstein-Uhlenbeck alpha-stable processes. Stoch. Model. 2007, 23, 451–473. [Google Scholar] [CrossRef]
  59. Magdziarz, M. Fractional Ornstein-Uhlenbeck processes. Joseph effect in models with infinite variance. Phys. A Stat. Mech. Its Appl. 2008, 387, 123–133. [Google Scholar] [CrossRef]
  60. Brockwell, P.J.; Lindner, A. Ornstein-Uhlenbeck related models driven by Levy processes. Stat. Methods Stoch. Differ. Equ. 2012, 124, 383–427. [Google Scholar]
  61. Toenjes, R.; Sokolov, I.M.; Postnikov, E.B. Nonspectral relaxation in one dimensional Ornstein-Uhlenbeck processes. Phys. Rev. Lett. 2013, 110, 150602. [Google Scholar] [CrossRef] [PubMed]
  62. Riedle, M. Ornstein-Uhlenbeck processes driven by cylindrical Levy processes. Potential Anal. 2015, 42, 809–838. [Google Scholar] [CrossRef]
  63. Thiel, F.; Sokolov, I.M.; Postnikov, E.B. Nonspectral modes and how to find them in the Ornstein-Uhlenbeck process with white μ-stable noise. Phys. Rev. E 2016, 93, 052104. [Google Scholar] [CrossRef]
  64. Fogedby, H.C. Langevin equations for continuous time Levy flights. Phys. Rev. E 1994, 50, 1657. [Google Scholar] [CrossRef]
  65. Jespersen, S.; Metzler, R.; Fogedby, H.C. Levy flights in external force fields: Langevin and fractional Fokker-Planck equations and their solutions. Phys. Rev. E 1999, 59, 2736. [Google Scholar] [CrossRef]
  66. Chechkin, A.; Gonchar, V.; Klafter, J.; Metzler, R.; Tanatarov, L. Stationary states of non-linear oscillators driven by Levy noise. Chem. Phys. 2002, 284, 233–251. [Google Scholar] [CrossRef]
  67. Brockmann, D.; Sokolov, I.M. Levy flights in external force fields: From models to equations. Chem. Phys. 2002, 284, 409–421. [Google Scholar] [CrossRef]
  68. Eliazar, I.; Klafter, J. Levy-driven Langevin systems: Targeted stochasticity. J. Stat. Phys. 2003, 111, 739–768. [Google Scholar] [CrossRef]
  69. Chechkin, A.V.; Gonchar, V.Y.; Klafter, J.; Metzler, R.; Tanatarov, L.V. Levy flights in a steep potential well. J. Stat. Phys. 2004, 115, 1505–1535. [Google Scholar] [CrossRef]
  70. Dybiec, B.; Gudowska-Nowak, E.; Sokolov, I.M. Stationary states in Langevin dynamics under asymmetric Levy noises. Phys. Rev. E Nonlinear Soft Matter Phys. 2007, 76, 041122. [Google Scholar] [CrossRef] [PubMed]
  71. Dybiec, B.; Sokolov, I.M.; Chechkin, A.V. Stationary states in single-well potentials under symmetric Levy noises. J. Stat. Mech. Theory Exp. 2010, 2010, P07008. [Google Scholar] [CrossRef]
  72. Eliazar, I.I.; Shlesinger, M.F. Langevin unification of fractional motions. J. Phys. A Math. Theor. 2012, 45, 162002. [Google Scholar] [CrossRef]
  73. Magdziarz, M.; Szczotka, W.; Zebrowski, P. Langevin picture of Levy walks and their extensions. J. Stat. Phys. 2012, 147, 74–96. [Google Scholar] [CrossRef]
  74. Sandev, T.; Metzler, R.; Tomovski, Z. Velocity and displacement correlation functions for fractional generalized Langevin equations. Fract. Calc. Appl. Anal. 2012, 15, 426–450. [Google Scholar] [CrossRef]
  75. Liemert, A.; Sandev, T.; Kantz, H. Generalized Langevin equation with tempered memory kernel. Phys. A Stat. Mech. Its Appl. 2017, 466, 356–369. [Google Scholar] [CrossRef]
  76. Wolpert, R.L.; Taqqu, M.S. Fractional Ornstein-Uhlenbeck Levy processes and the Telecom process: Upstairs and downstairs. Signal Process. 2005, 85, 1523–1545. [Google Scholar] [CrossRef]
  77. Shu, Y.; Feng, Q.; Kao, E.P.C.; Liu, H. Levy-driven non-Gaussian Ornstein-Uhlenbeck processes for degradation-based reliability analysis. IIE Trans. 2016, 48, 993–1003. [Google Scholar] [CrossRef]
  78. Chevallier, J.; Goutte, S. Estimation of Levy-driven Ornstein-Uhlenbeck processes: Application to modeling of CO2 and fuel-switching. Ann. Oper. Res. 2017, 255, 169–197. [Google Scholar] [CrossRef]
  79. Kabanov, Y.; Pergamenshchikov, S. Ruin probabilities for a Levy-driven generalised Ornstein–Uhlenbeck process. Financ. Stochastics 2020, 24, 39–69. [Google Scholar] [CrossRef]
  80. Onalan, O. Financial modelling with Ornstein-Uhlenbeck processes driven by Levy process. In Proceedings of the World Congress on Engineering, London, UK, 1–3 July 2009; Volume 2, pp. 1–3. [Google Scholar]
  81. Onalan, O. Fractional Ornstein-Uhlenbeck processes driven by stable Levy motion in finance. Int. Res. J. Financ. Econ. 2010, 42, 129–139. [Google Scholar]
  82. Endres, S.; Stubinger, J. Optimal trading strategies for Levy-driven Ornstein–Uhlenbeck processes. Appl. Econ. 2019, 51, 3153–3169. [Google Scholar] [CrossRef]
  83. Shlesinger, M.F.; Klafter, J. Levy walks versus Levy flights. In On Growth and Form: Fractal and Non-Fractal Patterns in Physics; Springer: Dordrecht, The Netherlands, 1986; pp. 279–283. [Google Scholar]
  84. Shlesinger, M.F.; Klafter, J.; West, B.J. Levy walks with applications to turbulence and chaos. Phys. A Stat. Mech. Its Appl. 1986, 140, 212–218. [Google Scholar] [CrossRef]
  85. Allegrini, P.; Grigolini, P.; West, B.J. Dynamical approach to Levy processes. Phys. Rev. E 1996, 54, 4760. [Google Scholar] [CrossRef]
  86. Shlesinger, M.F.; West, B.J.; Klafter, J. Levy dynamics of enhanced diffusion: Application to turbulence. Phys. Rev. Lett. 1987, 58, 1100. [Google Scholar] [CrossRef]
  87. Uchaikin, V.V. Self-similar anomalous diffusion and Levy-stable laws. Phys.-Uspekhi 2003, 46, 821. [Google Scholar] [CrossRef]
  88. Chechkin, A.V.; Gonchar, V.Y.; Klafter, J.; Metzler, R. Fundamentals of Levy flight processes. In Fractals, Diffusion, and Relaxation in Disordered Complex Systems: Advances in Chemical Physics, Part B; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006; pp. 439–496. [Google Scholar]
  89. Metzler, R.; Chechkin, A.V.; Gonchar, V.Y.; Klafter, J. Some fundamental aspects of Levy flights. Chaos Solitons Fractals 2007, 34, 129–142. [Google Scholar] [CrossRef]
  90. Chechkin, A.V.; Metzler, R.; Klafter, J.; Gonchar, V.Y. Introduction to the theory of Levy flights. In Anomalous Transport: Foundations and Applications; Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim, Germany, 2008; pp. 129–162. [Google Scholar]
  91. Dubkov, A.A.; Spagnolo, B.; Uchaikin, V.V. Levy flight superdiffusion: An introduction. Int. J. Bifurc. Chaos 2008, 18, 2649–2672. [Google Scholar] [CrossRef]
  92. Schinckus, C. How physicists made stable Levy processes physically plausible. Braz. J. Phys. 2013, 43, 281–293. [Google Scholar] [CrossRef]
  93. Zaburdaev, V.; Denisov, S.; Klafter, J. Levy walks. Rev. Mod. Phys. 2015, 87, 483–530. [Google Scholar] [CrossRef]
  94. Reynolds, A.M. Current status and future directions of Levy walk research. Biol. Open 2018, 7, bio030106. [Google Scholar] [CrossRef]
  95. Abe, M.S. Functional advantages of Levy walks emerging near a critical point. Proc. Natl. Acad. Sci. USA 2020, 117, 24336–24344. [Google Scholar] [CrossRef]
  96. Garg, K.; Kello, C.T. Efficient Levy walks in virtual human foraging. Sci. Rep. 2021, 11, 5242. [Google Scholar] [CrossRef]
  97. Mukherjee, S.; Singh, R.K.; James, M.; Ray, S.S. Anomalous diffusion and Levy walks distinguish active from inertial turbulence. Phys. Rev. Lett. 2021, 127, 118001. [Google Scholar] [CrossRef]
  98. Gunji, Y.-P.; Kawai, T.; Murakami, H.; Tomaru, T.; Minoura, M.; Shinohara, S. Levy walk in swarm models based on Bayesian and inverse Bayesian inference. Comput. Struct. Biotechnol. J. 2021, 19, 247–260. [Google Scholar] [CrossRef]
  99. Park, S.; Thapa, S.; Kim, Y.; Lomholt, M.A.; Jeon, J.-H. Bayesian inference of Levy walks via hidden Markov models. J. Phys. A Math. Theor. 2021, 54, 484001. [Google Scholar] [CrossRef]
  100. Romero-Ruiz, A.; Rivero, M.J.; Milne, A.; Morgan, S.; Filho, P.M.; Pulley, S.; Segura, C.; Harris, P.; Lee, M.R.; Coleman, K.; et al. Grazing livestock move by Levy walks: Implications for soil health and environment. J. Environ. Manag. 2023, 345, 118835. [Google Scholar] [CrossRef] [PubMed]
  101. Sakiyama, T.; Okawara, M. A short memory can induce an optimal Levy walk. In World Conference on Information Systems and Technologies; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 421–428. [Google Scholar]
  102. Levernier, N.; Textor, J.; Benichou, O.; Voituriez, R. Inverse square Levy walks are not optimal search strategies for d ≥ 2. Phys. Rev. Lett. 2020, 124, 080601. [Google Scholar] [CrossRef]
  103. Guinard, B.; Korman, A. Intermittent inverse-square Levy walks are optimal for finding targets of all sizes. Sci. Adv. 2021, 7, eabe8211. [Google Scholar] [CrossRef] [PubMed]
  104. Clementi, A.; d’Amore, F.; Giakkoupis, G.; Natale, E. Search via parallel Levy walks on Z2. In Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing, Virtual, 26–30 July 2021; pp. 81–91. [Google Scholar]
  105. Padash, A.; Sandev, T.; Kantz, H.; Metzler, R.; Chechkin, A.V. Asymmetric Levy flights are more efficient in random search. Fractal Fract. 2022, 6, 260. [Google Scholar] [CrossRef]
  106. Majumdar, S.N.; Mounaix, P.; Sabhapandit, S.; Schehr, G. Record statistics for random walks and Levy flights with resetting. J. Phys. A Math. Theor. 2021, 55, 034002. [Google Scholar] [CrossRef]
  107. Zbik, B.; Dybiec, B. Levy flights and Levy walks under stochastic resetting. Phys. Rev. E 2024, 109, 044147. [Google Scholar] [CrossRef]
  108. Radice, M.; Cristadoro, G. Optimizing leapover lengths of Levy flights with resetting. Phys. Rev. E 2024, 110, L022103. [Google Scholar] [CrossRef] [PubMed]
  109. Xu, P.; Zhou, T.; Metzler, R.; Deng, W. Levy walk dynamics in an external harmonic potential. Phys. Rev. E 2020, 101, 062127. [Google Scholar] [CrossRef] [PubMed]
  110. Aghion, E.; Meyer, P.G.; Adlakha, V.; Kantz, H.; Bassler, K.E. Moses, Noah and Joseph effects in Levy walks. New J. Phys. 2021, 23, 023002. [Google Scholar] [CrossRef]
  111. Cleland, J.D.; Williams, M.A.K. Analytical Investigations into Anomalous Diffusion Driven by Stress Redistribution Events: Consequences of Levy Flights. Mathematics 2022, 10, 3235. [Google Scholar] [CrossRef]
  112. Mba, J.C.; Mwambi, S.M.; Pindza, E. A Monte Carlo Approach to Bitcoin Price Prediction with Fractional Ornstein–Uhlenbeck Levy Process. Forecasting 2022, 4, 409–419. [Google Scholar] [CrossRef]
  113. Mariani, M.C.; Asante, P.K.; Kubin, W.; Tweneboah, O.K. Data Analysis Using a Coupled System of Ornstein–Uhlenbeck Equations Driven by Levy Processes. Axioms 2022, 11, 160. [Google Scholar] [CrossRef]
  114. Barrera, G.; Hogele, M.A.; Pardo, J.C. Cutoff thermalization for Ornstein–Uhlenbeck systems with small Levy noise in the Wasserstein distance. J. Stat. Phys. 2021, 184, 27. [Google Scholar] [CrossRef]
  115. Zhang, X.; Shu, H.; Yi, H. Parameter Estimation for Ornstein–Uhlenbeck Driven by Ornstein–Uhlenbeck Processes with Small Levy Noises. J. Theor. Probab. 2023, 36, 78–98. [Google Scholar] [CrossRef]
  116. Chen, Y.; Wang, X.; Deng, W. Langevin dynamics for a Levy walk with memory. Phys. Rev. E 2019, 99, 012135. [Google Scholar] [CrossRef] [PubMed]
  117. Barrera, G.; Hogele, M.A.; Pardo, J.C. The cutoff phenomenon in Wasserstein distance for nonlinear stable Langevin systems with small Levy noise. J. Dyn. Differ. Equ. 2024, 36, 251–278. [Google Scholar] [CrossRef]
  118. Liu, Y.; Wang, J.; Zhang, M.-g. Exponential Contractivity and Propagation of Chaos for Langevin Dynamics of McKean-Vlasov Type with Levy Noises. Potential Anal. 2024, 1–34. [Google Scholar] [CrossRef]
  119. Bao, J.; Fang, R.; Wang, J. Exponential ergodicity of Levy driven Langevin dynamics with singular potentials. Stoch. Process. Their Appl. 2024, 172, 104341. [Google Scholar] [CrossRef]
  120. Wang, X.; Chen, Y.; Deng, W. Levy-walk-like Langevin dynamics. New J. Phys. 2019, 21, 013024. [Google Scholar] [CrossRef]
  121. Chen, Y.; Deng, W. Levy-walk-like Langevin dynamics affected by a time-dependent force. Phys. Rev. E 2021, 103, 012136. [Google Scholar] [CrossRef]
  122. Chen, Y.; Wang, X.; Ge, M. Levy-walk-like Langevin dynamics with random parameters. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 013109. [Google Scholar] [CrossRef]
  123. Cox, D.R. Long-Range Dependence: A Review. In Statistics: An Appraisal; Iowa State University Press: Ames, IA, USA, 1984. [Google Scholar]
  124. Doukhan, P.; Oppenheim, G.; Taqqu, M. (Eds.) Theory and Applications of Long-Range Dependence; Springer Science & Business Media: New York, NY, USA, 2002. [Google Scholar]
  125. Rangarajan, G.; Ding, M. (Eds.) Processes with Long-Range Correlations: Theory and Applications; Springer Science & Business Media: New York, NY, USA, 2003. [Google Scholar]
  126. Eliazar, I. How random is a random vector? Ann. Phys. 2015, 363, 164–184. [Google Scholar] [CrossRef]
  127. Eliazar, I. Five degrees of randomness. Phys. A Stat. Mech. Its Appl. 2021, 568, 125662. [Google Scholar] [CrossRef]
  128. Jelinek, F.; Mercer, R.L.; Bahl, L.R.; Baker, J.K. Perplexity: A measure of the difficulty of speech recognition tasks. J. Acoust. Soc. Am. 1977, 62, S63. [Google Scholar] [CrossRef]
  129. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  130. Bell, R.J.; Dean, P.; Hibbins-Butler, D.C. Localization of normal modes in vitreous silica, germania and beryllium fluoride. J. Phys. C Solid State Phys. 1970, 3, 2111. [Google Scholar] [CrossRef]
  131. Bell, R.J.; Dean, P. The structure of vitreous silica: Validity of the random network theory. Philos. Mag. 1972, 25, 1381–1398. [Google Scholar] [CrossRef]
  132. Bosyk, G.M.; Portesi, M.; Plastino, A. Collision entropy and optimal uncertainty. Phys. Rev. A 2012, 85, 012108. [Google Scholar] [CrossRef]
  133. Skorski, M. Shannon entropy versus Renyi entropy from a cryptographic viewpoint. In Proceedings of the IMA International Conference on Cryptography and Coding, Oxford, UK, 15–17 December 2015; pp. 257–274. [Google Scholar]
  134. Simpson, E.H. Measurement of diversity. Nature 1949, 163, 688. [Google Scholar] [CrossRef]
  135. Hirschman, A.O. National Power and the Structure of Foreign Trade; University of California Press: Berkeley, CA, USA, 1945. [Google Scholar]
  136. Hill, M.O. Diversity and evenness: A unifying notation and its consequences. Ecology 1973, 54, 427–432. [Google Scholar] [CrossRef]
  137. Peet, R.K. The measurement of species diversity. Annu. Rev. Ecol. Syst. 1974, 5, 285–307. [Google Scholar] [CrossRef]
  138. Magurran, A.E. Ecological Diversity and Its Measurement; Princeton University Press: Princeton, NJ, USA, 1988. [Google Scholar]
  139. Jost, L. Entropy and diversity. Oikos 2006, 113, 363–375. [Google Scholar] [CrossRef]
  140. Legendre, P.; Legendre, L. Numerical Ecology; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  141. Renyi, A. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20–30 July 1960; Volume 1, pp. 547–561. [Google Scholar]
  142. Zolotarev, V.M. One-Dimensional Stable Distributions; American Mathematical Society: Providence, RI, USA, 1986; Volume 65. [Google Scholar]
  143. Borak, S.; Hardle, W.; Weron, R. Stable Distributions; Humboldt-Universitat zu Berlin: Berlin, Germany, 2005. [Google Scholar]
  144. Nolan, J.P. Univariate Stable Distributions; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  145. Brownrigg, D.R.K. The weighted median filter. Commun. ACM 1984, 27, 807–818. [Google Scholar] [CrossRef]
  146. Yin, L.; Yang, R.; Gabbouj, M.; Neuvo, Y. Weighted median filters: A tutorial. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 1996, 43, 157–192. [Google Scholar] [CrossRef]
  147. Justusson, B.I. Median filtering: Statistical properties. In Two-Dimensional Digital Signal Processing II: Transforms and Median Filters; Springer: Berlin/Heidelberg, Germany, 2006; pp. 161–196. [Google Scholar]
  148. Bentes, S.R.; Menezes, R. Entropy: A new measure of stock market volatility? J. Phys. Conf. Ser. 2012, 394, 012033. [Google Scholar] [CrossRef]
  149. Bose, R.; Hamacher, K. Alternate entropy measure for assessing volatility in financial markets. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 2012, 86, 056112. [Google Scholar] [CrossRef]
  150. Ruiz, M.d.C.; Guillamon, A.; Gabaldon, A. A new approach to measure volatility in energy markets. Entropy 2012, 14, 74–91. [Google Scholar] [CrossRef]
Figure 1. The signal-to-noise ratio (SNR) of Equation (14), as a function of the duration Δ (with parameters α = 1 and ω = 1). Sub-Cauchy examples are depicted in the left panel, with the following Levy exponents: red 0.5; blue 0.6; green 0.7; purple 0.8. Super-Cauchy and Gauss examples are depicted in the right panel, with the following Levy exponents: red 1.25; blue 1.5; green 1.75; purple 2.
Figure 2. The noise-to-noise ratio (NNR) of Equation (15), as a function of the duration Δ (with parameter α = 1). Sub-Cauchy examples are depicted in the left panel, with the following Levy exponents: red 0.5; blue 0.6; green 0.7; purple 0.8. Super-Cauchy and Gauss examples are depicted in the right panel, with the following Levy exponents: red 1.25; blue 1.5; green 1.75; purple 2.
Figure 3. The tail-to-tail ratio (TTR), as a function of the duration Δ (with parameter α = 1). Sub-Cauchy examples are depicted in the left panel, with the following Levy exponents: red 0.5; blue 0.6; green 0.7; purple 0.8. Super-Cauchy examples are depicted in the right panel, with the following Levy exponents: red 1.25; blue 1.5; green 1.75; purple 1.99.
Figure 4. The noise-to-noise ratio (NNR) of Equation (15) and the tail-to-tail ratio (TTR), as functions of the Levy exponent p (with parameter α = 1). The examples are depicted with the following values of the duration Δ : blue −ln(0.2); green −ln(0.4); purple −ln(0.6); red −ln(0.8).
Table 1. Levy approach vs. Gauss approach. The table’s rows correspond to the three ratios: signal-to-noise, noise-to-noise, and tail-to-tail. The table’s columns correspond, respectively, to the two approaches: the Levy-driven OUP, where the Levy exponent is in the ‘pure Levy’ range 0 < p < 2 ; and a Gaussian stationary process (with zero means), whose auto-correlation function is monotone decreasing to zero. The table’s cells describe the monotonicity of the ratios with respect to the positive duration Δ .
Ratio           | Levy Approach                                        | Gauss Approach
Signal-to-noise | decreasing (p < 1); flat (p = 1); increasing (p > 1) | increasing
Noise-to-noise  | increasing (p < 1); flat (p = 1); decreasing (p > 1) | decreasing
Tail-to-tail    | increasing (p < 1); flat (p = 1); decreasing (p > 1) | zero
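The table's qualitative claims can be probed numerically. The sketch below is illustrative only and is not the paper's Equations (14) and (15): it is an Euler discretization of a Levy-driven OUP, with symmetric p-stable noise generated by the standard Chambers–Mallows–Stuck method. The function names (`stable_rvs`, `levy_oup`) and the default parameters are assumptions introduced here for the example.

```python
import numpy as np

def stable_rvs(p, size, rng):
    """Symmetric p-stable variates via the Chambers-Mallows-Stuck method (0 < p <= 2)."""
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                    # unit-mean exponential
    return (np.sin(p * theta) / np.cos(theta) ** (1.0 / p)
            * (np.cos((1.0 - p) * theta) / w) ** ((1.0 - p) / p))

def levy_oup(p=1.5, alpha=1.0, dt=1e-3, n=10_000, x0=0.0, seed=0):
    """Euler scheme for dX = -alpha * X dt + dL_t, with L a symmetric p-stable Levy motion."""
    rng = np.random.default_rng(seed)
    # p-stable self-similarity: an increment over dt scales as dt**(1/p)
    noise = dt ** (1.0 / p) * stable_rvs(p, n, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - alpha * x[i] * dt + noise[i]
    return x
```

From such trajectories, increment statistics over a duration Δ can be estimated empirically; note that for p < 2 the second moments diverge, so robust (e.g., median-based) statistics are preferable to sample variances, in line with the weighted-median literature cited above [145,146,147].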