
A Continuous-Time Random Walk Extension of the Gillis Model

1 Center for Nonlinear and Complex Systems, Dipartimento di Scienza e Alta Tecnologia, Università degli Studi dell’Insubria, Via Valleggio 11, 22100 Como, Italy
2 Istituto Nazionale di Fisica Nucleare—Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
* Author to whom correspondence should be addressed.
Entropy 2020, 22(12), 1431; https://doi.org/10.3390/e22121431
Submission received: 24 November 2020 / Revised: 14 December 2020 / Accepted: 15 December 2020 / Published: 18 December 2020
(This article belongs to the Special Issue New Trends in Random Walks)

Abstract

We consider a continuous-time random walk which generalizes, by means of the introduction of waiting periods on sites, the one-dimensional non-homogeneous random walk with a position-dependent drift known in the mathematical literature as the Gillis random walk. This modified stochastic process allows us to significantly change local, non-local and transport properties in the presence of heavy-tailed waiting-time distributions lacking the first moment: we provide here exact results concerning hitting times, first-time events, survival probabilities, occupation times, the moments spectrum and the statistics of records. Specifically, normal diffusion gives way to subdiffusion, and ergodicity breaking occurs. Furthermore, we test our theoretical predictions with numerical simulations.

1. Introduction

Since their first appearance, random walks have been used as effective mathematical tools to describe a wealth of problems from a variety of fields, such as crystallography, biology, behavioural sciences, optical and metal physics, finance and economics. Although homogeneous random walks are no longer a mystery, in many situations the topology of the environment induces correlations (caused by the medium inhomogeneities) which have powerful consequences on the transport properties of the process. The birth of whole classes of non-homogeneous random walks [1,2] is due to the need to study disordered media and non-Brownian motions, responsible for anomalous diffusive behaviour. This line of research is prompted by phenomena observed in several systems, such as turbulent flows, dynamical systems with intermittencies, glassy materials, Lorentz gases and predators hunting for food [3,4,5,6,7,8,9,10,11,12,13]. For the sake of clarity, we recall that anomalous processes are characterized by a mean square displacement of the walker’s position with a sublinear or superlinear growth in time, as opposed to normal Brownian diffusion, defined by an asymptotically linear time dependence of the variance.
In this context, the outstanding Gillis random walk [14] plays a crucial role, since it is one of the few analytically solvable models of non-homogeneous random walks with a drift dependent on the position in the sample. A few other exceptions are random walks with a limited number of boundaries or defective sites [15,16,17,18]. The Gillis model is a nearest-neighbour centrally-biased random walk on $\mathbb{Z}$, lacking translational invariance for the transition probabilities, which provides an appropriate environment in which to investigate the critical behaviour in the proximity of a phase transition: while keeping the dimensionality of the model fixed, one can observe different regimes by simply changing the parameter value.
As is natural, in the first instance one typically focuses on the dynamics of the random walk by considering a discretization of the time evolution: basically, one adopts the simplest clock, consisting of a counting measure of the number of steps. But in most physical situations one deals with systems requiring a continuous-time description of the evolution (which clearly introduces a higher degree of complexity). In order to show this important difference, we can rely on the explanatory comparative analysis between Lévy flights and Lévy walks [19,20]. These are homogeneous random walks whose transition probabilities have an infinite variance: Lévy flights are indeed jump processes with steps picked from a long-tailed (or Lévy) distribution, whose tails are not exponentially bounded, so that there is a good chance of jumping really far from the current site. This is what we mean when we say, from a mathematical point of view, that the distribution does not have a finite variance. However, Lévy flights have a drawback: if spatial trajectories are totally unaware of the related time trace, flights are legitimate; otherwise they are not exactly physically acceptable, because they appear to possess an infinite speed. More realistic models are instead Lévy walks, where the walker needs a certain time to perform the jump, which is no longer instantaneous. The time spent is usually proportional to the length of the step, so we assume a constant finite speed for the motion.
In our case, we take a step back: in Lévy walks one is already assuming the existence of spatiotemporal correlations, but in general the easiest way to obtain a continuous-time description starting from the discrete model is to decouple the spatial and temporal components. This is precisely what E. Montroll and G.H. Weiss did in 1965 [21] by means of a random walk (the so-called Continuous Time Random Walk) whose jumps are still instantaneous but whose dynamics is subordinated to a random physical clock. Basically, one has to introduce a new random variable, the waiting time on a site, in addition to the length of the jump [22]. Here, too, there are relevant applications: ruin theory of insurance companies, dynamics of prices in financial markets, impurities in semiconductors, transport in porous media, transport in geological formations. An incomplete list of general references includes [23,24,25,26,27,28].
Inspired by the previous models, we consider the continuous-time generalization of the discrete-time Gillis random walk that we have already studied thoroughly in Reference [29]. In particular, we will also look at first-time events: they account for key problems in the theory of stochastic processes, since they determine when system variables assume specific values (for example, see Reference [30]).
This paper is structured as follows. In Section 2 we briefly present the background in order to provide a complete overview of the known results, which are the basis of the work. Then, in Section 3, we will discuss the original results, by establishing a connection between the discrete-time random walk and the continuous-time formalism. In particular, two significant phenomena will arise: the ergodicity breaking and the extension of the anomalous diffusion regime. In Section 4, moreover, we will integrate the theoretical analysis with computational simulations, as further confirmation. Finally, in Section 5 we will summarize all the conclusions previously described in detail.

2. Review of Previous Work

First of all, intending to be self-contained, we provide a brief recap of the discrete-time Gillis model and review the key concepts necessary for its continuous-time version. In this way we will be sufficiently equipped to move on to the list of major results.

2.1. Gillis Random Walk

The Gillis model [14] is a discrete-time random walk, on a one-dimensional lattice, whose transition probabilities $p_{i,j}$ of moving from site $i$ to site $j$ are non-null if and only if $|i-j|=1$, namely $i,j$ are nearest-neighbour lattice points. We assume that the positional dependence is ruled by the real parameter $\epsilon$, where $|\epsilon|<1$, and:

$$R_j := p_{j,j+1} = \frac{1}{2}\left(1-\frac{\epsilon}{j}\right), \qquad L_j := p_{j,j-1} = \frac{1}{2}\left(1+\frac{\epsilon}{j}\right) \qquad \text{for } j \in \mathbb{Z}\setminus\{0\}, \qquad R_0 := \frac{1}{2} =: L_0. \tag{1}$$
Clearly, if you set $\epsilon=0$ you recover homogeneity, since the model boils down to the simple symmetric random walk. Otherwise, the position-dependent drift is responsible for an attractive bias towards the starting site, the origin, when $0<\epsilon<1$, or for a repulsive action if $-1<\epsilon<0$.
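As an illustration, the transition rule above is easy to simulate directly. The following minimal Python sketch (our own illustrative code, not part of the original model) generates a Gillis trajectory; note that the sign of $j$ automatically produces the attractive or repulsive drift on both half-lines:

```python
import random

def gillis_step(j, eps, rng):
    """One Gillis step: from site j move right with probability
    R_j = (1 - eps/j)/2 and left with L_j = (1 + eps/j)/2; at the
    origin the two directions are equally likely (R_0 = L_0 = 1/2)."""
    if j == 0:
        return rng.choice((-1, 1))
    p_right = 0.5 * (1.0 - eps / j)
    return j + 1 if rng.random() < p_right else j - 1

def gillis_walk(n_steps, eps, seed=0):
    """Trajectory j_0 = 0, j_1, ..., j_n of the Gillis walk."""
    rng = random.Random(seed)
    traj = [0]
    for _ in range(n_steps):
        traj.append(gillis_step(traj[-1], eps, rng))
    return traj
```

For $0<\epsilon<1$ such trajectories hug the origin, while for $-1<\epsilon<0$ they are pushed away from it.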
As we said in the general introduction, the Gillis random walk is one of the few analytically solvable models and, in particular, in the original paper the author writes down the exact expression of the generating function $P(z)$ of $\{p_n\}_{n\in\mathbb{N}}$ in terms of the Gauss hypergeometric function ${}_2F_1(a,b;c;z)$ [31], where $p_n := p_n(0,0)$ denotes the probability that the walker returns to the origin, not necessarily for the first time, after $n$ steps. Actually, this solution has later been generalized to a generic starting site [1]. Given the probability $p_n(j_0,0)$ that the particle starts from any site $j_0$ and passes through the origin after $n$ steps, we can write its generating function:
$$P(j_0,0;z) = \sum_{n=0}^{\infty} p_{2n+|j_0|}(j_0,0)\, z^{2n+|j_0|} = \frac{z^{|j_0|}}{|j_0|!}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,\Gamma(1+\epsilon)}\,\frac{{}_2F_1\left(\frac{\epsilon+1+|j_0|}{2}, \frac{\epsilon+|j_0|}{2}+1; |j_0|+1; z^2\right)}{{}_2F_1\left(\frac{1}{2}\epsilon, \frac{1}{2}\epsilon+\frac{1}{2}; 1; z^2\right)}. \tag{2}$$
This is one of the essential tools for the following analysis, along with those in our previous paper [29], and for j 0 = 0 it is clearly consistent with the original result by Gillis concerning the generating function P ( z ) : = P ( 0 , 0 ; z ) .
Another relevant statement for future considerations is that the motion is positive-recurrent (recurrent with a finite mean return time) and ergodic (thus admitting a stationary distribution) when $\frac{1}{2}<\epsilon<1$, null-recurrent (recurrent with an infinite mean return time that increases with the number of steps) if $-\frac{1}{2}\leq\epsilon\leq\frac{1}{2}$, and transient for $-1<\epsilon<-\frac{1}{2}$. To be more precise [32], the mean time taken between two consecutive returns to the starting site up to the $n$-th step is:
$$\tau_{ret}(n) \sim \begin{cases} n^{3/2+\epsilon} & \text{if } -1<\epsilon<-\frac{1}{2}, \\[1mm] \dfrac{n}{\ln^2(n)} & \text{if } \epsilon=-\frac{1}{2}, \\[1mm] n^{1/2-\epsilon} & \text{if } -\frac{1}{2}<\epsilon<+\frac{1}{2}, \\[1mm] \ln(n) & \text{if } \epsilon=+\frac{1}{2}, \\[1mm] \dfrac{2\epsilon}{2\epsilon-1} & \text{if } +\frac{1}{2}<\epsilon<+1, \end{cases} \tag{3}$$
and this is a direct consequence of Equation (2). In fact, starting from there one can also obtain the generating functions of the first-hitting and first-return times to the origin. First of all, let us define the probability $f_n(j_0,j)$ that the moving particle starts from $j_0$ and hits $j$ for the first time after $n$ steps. Then we know that $\{f_n(j_0,j)\}_{n\in\mathbb{N}}$ are connected to $\{p_n(j_0,j)\}_{n\in\mathbb{N}}$ in the following way:
$$p_n(j_0,j) = \delta_{n,0}\,\delta_{j,j_0} + \sum_{k=1}^{n} f_k(j_0,j)\, p_{n-k}(j,j), \tag{4}$$
or, equivalently, in terms of the corresponding generating functions:
$$F(j_0,j;z) = \frac{P(j_0,j;z) - \delta_{j,j_0}}{P(j,j;z)}. \tag{5}$$
Notice that in the presence of translational invariance $p_{n-k}(j,j) = p_{n-k}$ and $P(j,j;z) = P(z)$ (see Equation (2.8) in Reference [22]). In our context, anyway, Equation (5) becomes particularly easy, since we choose $j=0$. Hence we can finally conclude that when $j_0=0$:
$$F(z) := F(0,0;z) = \sum_{n=1}^{\infty} f_{2n}\, z^{2n} = 1 - \frac{1}{P(z)} = 1 - \frac{{}_2F_1\left(\frac{1}{2}\epsilon, \frac{1}{2}\epsilon+\frac{1}{2}; 1; z^2\right)}{{}_2F_1\left(\frac{1}{2}\epsilon+\frac{1}{2}, \frac{1}{2}\epsilon+1; 1; z^2\right)}, \tag{6}$$
where $f_n := f_n(0,0)$ are the first-return probabilities, whereas for $j_0 \neq 0$ (first-passage probabilities):
$$F(j_0,0;z) = \sum_{n=0}^{\infty} f_{2n+|j_0|}(j_0,0)\, z^{2n+|j_0|} = \frac{P(j_0,0;z)}{P(z)} = \frac{z^{|j_0|}}{|j_0|!}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,\Gamma(1+\epsilon)}\,\frac{{}_2F_1\left(\frac{\epsilon+1+|j_0|}{2}, \frac{\epsilon+|j_0|}{2}+1; |j_0|+1; z^2\right)}{{}_2F_1\left(\frac{1}{2}\epsilon+1, \frac{\epsilon+1}{2}; 1; z^2\right)}. \tag{7}$$
The mean time spent between two consecutive visits to the origin up to the $n$-th step is easily derived from $\tau_{ret}(n) = \lim_{z\to1^-} F'(z)/F(z)$ [32].
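As a quick sanity check of the generating functions above, one can evaluate the hypergeometric ratio numerically: for $\epsilon=0$ the model reduces to the simple symmetric random walk, for which $P(z) = (1-z^2)^{-1/2}$ and $F(z) = 1-\sqrt{1-z^2}$ are classical. A short sketch using a truncated Gauss series (our own illustrative code):

```python
from math import isclose, sqrt

def hyp2f1(a, b, c, x, terms=200):
    """Truncated Gauss series for 2F1(a, b; c; x), valid for |x| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

def P_gf(z, eps):
    """Return-probability generating function P(z), i.e. the
    hypergeometric ratio of the Gillis solution with j_0 = 0."""
    num = hyp2f1(0.5 * eps + 0.5, 0.5 * eps + 1.0, 1.0, z * z)
    den = hyp2f1(0.5 * eps, 0.5 * eps + 0.5, 1.0, z * z)
    return num / den
```

At $\epsilon=0$ the denominator series collapses to $1$ and the ratio reproduces the closed form $(1-z^2)^{-1/2}$, as expected.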
Now, instead, moving on to transport properties, we can quickly review the moments spectrum and the statistics of records (for more details and references, see Reference [29]). Firstly, denoting the moment of order $q$ by $\langle|j_n|^q\rangle := \sum_{j\in\mathbb{Z}} p_n(0,j)\,|j|^q$, we know that the asymptotic dependence on the number of steps $n$ is $\langle|j_n|^q\rangle \sim n^{\nu_q}$, where:
$$\nu_q = \nu_q(\epsilon) = \begin{cases} \dfrac{q}{2} & \text{if } \epsilon < \frac{1}{2}, \\[1mm] 0 & \text{if } \epsilon > \frac{1}{2} \text{ and } q < 2\epsilon-1, \\[1mm] \dfrac{1+q}{2}-\epsilon & \text{if } \epsilon > \frac{1}{2} \text{ and } q > 2\epsilon-1. \end{cases} \tag{8}$$
Translated into words, this leads us to recognize the presence of a phase transition: non-ergodic processes are characterized by normal diffusion, since the second moment shows an asymptotically linear growth in time, whereas the ergodic ones reveal strong anomalous (sub-)diffusion [33].
Secondly, let us first recall the following definition: given a finite set of random variables, the record value is the largest/smallest value assumed by that sequence. In the Gillis model, the events to account for are the positions $\{j_k\}_{k\in\mathbb{N}}$ on the one-dimensional lattice during the motion, and the record after $n$ steps, $R_n$, with $R_0=0$ due to the initial condition $j_0=0$, can be seen as the non-negative integer exceeding all the previously occupied sites: indeed, thanks to symmetry, as we will explain in detail later on, we can restrict ourselves to studying a random walk defined on the half-line, with a reflecting barrier at the origin. In addition, the presence of a nearest-neighbour structure implies that the number of records after $n$ steps, $N_n$, is connected to the value of the maximum $M_n := \max_{1\leq k\leq n} j_k$ by means of the trivial relationship $N_n = M_n+1$, where:
$$M_n \sim \begin{cases} n^{1/2} & \text{if } -\frac{1}{2}<\epsilon\leq+\frac{1}{2}, \\[1mm] n^{1/(1+2\epsilon)} & \text{if } +\frac{1}{2}\leq\epsilon<+1. \end{cases} \tag{9}$$
We point out that here we only consider the range $\epsilon>-\frac{1}{2}$, in order to limit ourselves to recurrent processes: the limiting case $\epsilon=-\frac{1}{2}$ is excluded because of technical reasons in the rigorous proof (see Reference [34]).
Here, again, the model enters two different phases, according to the value of the characteristic parameter $\epsilon$. In particular, in the interval $\epsilon\in\left(-\frac{1}{2},+\frac{1}{2}\right]$ the mean number of records has the same growth as the first moment.
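The identity $N_n = M_n+1$ is a purely pathwise consequence of the nearest-neighbour structure, so it can be verified on any simulated trajectory. A minimal sketch for the $\epsilon=0$ half-line walk (illustrative code; the rule that the walker must step from $0$ to $1$ is our rendering of the reflecting barrier):

```python
import random

def reflected_walk(n_steps, seed=0):
    """Symmetric nearest-neighbour walk on {0, 1, 2, ...} with a
    reflecting barrier at the origin (from 0 the walker steps to 1);
    this is the eps = 0 version of the half-line picture for records."""
    rng = random.Random(seed)
    j, traj = 0, [0]
    for _ in range(n_steps):
        j = j + 1 if (j == 0 or rng.random() < 0.5) else j - 1
        traj.append(j)
    return traj

def record_count(traj):
    """Number of records N_n: the start counts as R_0 = 0, and a new
    record is set whenever the walk exceeds all previously visited sites."""
    best, count = traj[0], 1
    for j in traj[1:]:
        if j > best:
            best, count = j, count + 1
    return count
```

On every trajectory `record_count(traj)` equals `max(traj) + 1`, since a nearest-neighbour walk can only raise its maximum one unit at a time.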

2.2. CTRW

Our aim here is to formalize the transformation of the number of steps into physical real time. We shall follow Reference [22]; for an in-depth and more exhaustive review of subordination techniques, refer to Reference [35], for instance. As a preliminary remark, we point out that, moving from the discrete to the continuous formalism, we have to abandon the generating function (for the time domain, not for the lattice) in favour of a more appropriate mathematical tool, the Laplace transform.
The basic assumption is that we have a random walker who performs instantaneous jumps on a line, but now he is forced to wait on the target site for a certain interval of time, whose duration $t$ is always picked from a common probability distribution $\psi(t)$, before going any further. So, for instance, $t_1$ will be the waiting time at the origin before jumping for the first time and, moreover, we would emphasize that the waiting times of subsequent steps are independent and identically distributed (according to $\psi(t)$) random variables.
These are the essential instruments for introducing the quantities of interest. Firstly, we can define the PDF (Probability Density Function) $\psi_n(t)$ of the occurrence of the $n$-th step at time $t = t_1+\cdots+t_n$. As a consequence, through independence, the following recurrence relation holds:
$$\psi_n(t) = \int_0^t \psi_{n-1}(t')\,\psi(t-t')\,dt' \quad\Rightarrow\quad \hat{\psi}_n(s) = \hat{\psi}_{n-1}(s)\,\hat{\psi}(s) = \cdots = \hat{\psi}^n(s), \tag{10}$$
where the convolution becomes a product and, from now on, the use of the following convention is implied: the variables indicated in brackets in the functions uniquely define the space you are working in (real for $t$, Laplace for $s$). Secondly, we can introduce the PDF $\chi_n(t)$ of taking exactly $n$ steps up to time $t$ (namely, this time the $n$-th step may occur at a time $t'<t$ and then the walker rests on the site):
$$\chi_n(t) = \int_0^t \psi_n(t')\underbrace{\left[1-\int_0^{t-t'}\psi(\tau)\,d\tau\right]}_{\text{survival probability on a site } \chi_0(t-t')}dt' \quad\Rightarrow\quad \hat{\chi}_n(s) = \hat{\psi}^n(s)\,\frac{1-\hat{\psi}(s)}{s}. \tag{11}$$
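The factorization $\hat{\psi}_n(s) = \hat{\psi}^n(s)$ can be verified by Monte Carlo for any concrete waiting-time law. Below we use exponential waiting times (an assumption made purely for illustration), for which $\hat{\psi}(s) = \lambda/(\lambda+s)$:

```python
import random
from math import exp

def laplace_mc(samples, s):
    """Monte Carlo estimate of a Laplace transform E[exp(-s T)]."""
    return sum(exp(-s * t) for t in samples) / len(samples)

rng = random.Random(42)
n, s, lam, N = 3, 0.5, 1.0, 200_000
# T_n = t_1 + ... + t_n, a sum of i.i.d. exponential waiting times:
sums = [sum(rng.expovariate(lam) for _ in range(n)) for _ in range(N)]
psi_hat = lam / (lam + s)   # exact psi_hat(s) for the exponential law
```

The empirical Laplace transform of the sampled sums agrees with $\hat{\psi}(s)^n = (\lambda/(\lambda+s))^n$ up to Monte Carlo error, as independence requires.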
The next section will shed some light on the role of these useful quantities.

3. Results

For the sake of clarity, we will simply state the significant results in this section. For all of the detailed computations, please refer to the appendices further below.

3.1. Probability of Being at the Origin

The most natural step to undertake first is obviously to determine the probability of finding the walker at the origin at time $t$, for comparison with Gillis’s original results. This task can be carried out in two different ways, both instructive.

3.1.1. Gillis Way

As a first attempt, one could be led to translate Gillis’s method into the continuous-time formalism. And in fact this is a viable solution. The starting point is Equation (2.1) in Reference [14], which reads:
$$p_{n+1}(j) = p_n(j-1)\,R_{j-1} + p_n(j+1)\,L_{j+1}, \tag{12}$$
where $p_n(j) := p_n(0,j)$ denotes the probability of being at site $j$ after $n$ steps when the initial position of the walker is the origin.
In order to accomplish the transformation, we need to establish some more notation. In particular we notice that, after introducing the physical real time, the position at time t is still the position after n steps, provided that exactly n jumps have been counted up to time t. Hence:
  • $p(j,t) = \sum_{n=0}^{\infty} p_n(j)\,\chi_n(t) = \int_0^t p_a(j,t')\,\chi_0(t-t')\,dt'$ is the probability of being (arriving) at $j$ at (within) time $t$;
  • $p_a(j,t) = \sum_{n=0}^{\infty} p_n(j)\,\psi_n(t)$ is the probability of arriving at $j$ at time $t$.
By performing the Laplace transform on time and the generating function on sites, we get: $\hat{P}(x,s) := \int_0^{\infty} dt\, e^{-st} \sum_{j} p(j,t)\,x^j = \hat{P}_a(x,s)\cdot\hat{\chi}_0(s) = \hat{P}_a(x,s)\,\frac{1-\hat{\psi}(s)}{s}$. Now, the continuous-time equivalent of Equation (12) is obtained by multiplying both sides by $\psi_{n+1}(t)$ and summing over $n$:
$$\sum_{n=0}^{\infty} p_{n+1}(j)\,\psi_{n+1}(t) = \sum_{n=0}^{\infty} p_n(j)\,\psi_n(t) - p_0(j)\,\psi_0(t) = p_a(j,t) - \delta_{j,0}\,\delta(t) = \int_0^t p_a(j-1,t')\,\psi(t-t')\,dt'\; R_{j-1} + \int_0^t p_a(j+1,t')\,\psi(t-t')\,dt'\; L_{j+1}. \tag{13}$$
Essentially, we find ourselves in the exact same situation, we just need to shift focus back to a new key element, the arrival event. Retracing the steps of the original paper (see Appendix A), we can (trivially) conclude that:
$$\hat{p}_a(j=0;s) = \frac{\int_0^{2\pi} \left(1-\hat{\psi}(s)\cos\theta\right)^{-1-\epsilon}\,d\theta}{\int_0^{2\pi} \left(1-\hat{\psi}(s)\cos\theta\right)^{-\epsilon}\,d\theta} = \frac{{}_2F_1\left(\frac{1}{2}\epsilon+1, \frac{1}{2}\epsilon+\frac{1}{2}; 1; \hat{\psi}^2(s)\right)}{{}_2F_1\left(\frac{1}{2}\epsilon, \frac{1}{2}\epsilon+\frac{1}{2}; 1; \hat{\psi}^2(s)\right)}, \tag{14}$$
namely:
$$\hat{p}(s) := \hat{p}(j=0;s) = \frac{1-\hat{\psi}(s)}{s}\,\hat{p}_a(j=0;s) = \frac{1-\hat{\psi}(s)}{s}\,P[z=\hat{\psi}(s)], \tag{15}$$
where we remind the reader that $P[\hat{\psi}(s)]$ is the generating function (evaluated at $\hat{\psi}(s)$) of the probability of being at the origin in the discrete-time model.
This result is not surprising: given that the temporal component is independent of the spatial scale, the time trace is ruled by a random clock that replaces the role of the counting measure (the simple internal clock given by the number of steps). The generating function of the probability of arriving at the origin is the same as the one associated with the discrete model (where there is no distinction between arriving and being, because the walker cannot stand still on a site), but subordinated to the new physical time. This observation lets us immediately generalize the result to the case of a generic starting point $j_0$, obtaining:
$$\hat{p}(j_0,0;s) = \frac{1-\hat{\psi}(s)}{s}\,\hat{p}_a(j_0,0;s) = \frac{1-\hat{\psi}(s)}{s}\,P[j_0,0;z=\hat{\psi}(s)], \tag{16}$$
which, thanks to Equation (2), becomes:
$$\hat{p}(j_0,0;s) = \frac{1-\hat{\psi}(s)}{s}\left(\frac{\hat{\psi}(s)}{2}\right)^{|j_0|}\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,\Gamma(1+\epsilon)}\,\frac{{}_2F_1\left(\frac{\epsilon+1+|j_0|}{2}, \frac{\epsilon+|j_0|}{2}+1; |j_0|+1; \hat{\psi}^2(s)\right)}{{}_2F_1\left(\frac{1}{2}\epsilon, \frac{1}{2}\epsilon+\frac{1}{2}; 1; \hat{\psi}^2(s)\right)}. \tag{17}$$

3.1.2. Recurrence Relation: First-Return Time to the Origin

However, we can also arrive at Equation (15) in a different way. If we now perform, as before, a continuous-time transformation of Equation (4) with $j_0=0=j$, we get:
$$p(t) = \chi_0(t) + \sum_{n=1}^{\infty}\left[\sum_{k=1}^{n} f_k\, p_{n-k}\right]\chi_n(t), \tag{18}$$
and considering the Laplace domain:
$$\hat{p}(s) = \frac{1-\hat{\psi}(s)}{s}\left[1 + \sum_{n=1}^{\infty}\sum_{k=1}^{n} f_k\, p_{n-k}\,\hat{\psi}^n(s)\right] \tag{19}$$

$$= \frac{1-\hat{\psi}(s)}{s}\left[1 + F[z=\hat{\psi}(s)]\;P[z=\hat{\psi}(s)]\right]. \tag{20}$$
As a last step we can plug in Equation (5), so we finally go back to Equation (15). In addition, we immediately obtain the Laplace transform of the first-return time as well. Indeed, since the first return is an arrival event and thus coincides with the occurrence of a step, there is no way to land earlier and wait for the remaining time. Hence, from a mathematical point of view, we can write [22]:
$$f(t) := f(j=0,t) = \sum_{n=0}^{\infty} f_n\,\psi_n(t) \quad\Rightarrow\quad \hat{f}(s) = F[z=\hat{\psi}(s)] = 1 - \frac{1}{P[z=\hat{\psi}(s)]}, \tag{21}$$
thanks to Equation (5). Lastly, by comparing Equation (15) with Equation (21), we get the relationship in the Laplace domain:
$$\hat{f}(s) = 1 - \frac{1-\hat{\psi}(s)}{s\,\hat{p}(s)}. \tag{22}$$
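This relation between $\hat{f}(s)$ and $\hat{p}(s)$ is easy to check numerically in a concrete case: below we assume $\epsilon=0$, so that $P(z) = (1-z^2)^{-1/2}$ and $F(z) = 1-\sqrt{1-z^2}$, together with (purely for illustration) exponential waiting times:

```python
from math import isclose, sqrt

def psi_hat(s):
    """Laplace transform of a unit-mean exponential waiting-time PDF
    (an arbitrary illustrative choice)."""
    return 1.0 / (1.0 + s)

def p_hat(s):
    """Subordination formula p_hat = (1 - psi_hat)/s * P[psi_hat]
    at eps = 0, where P(z) = (1 - z^2)^(-1/2)."""
    z = psi_hat(s)
    return (1.0 - z) / (s * sqrt(1.0 - z * z))

def f_hat(s):
    """First-return transform f_hat = F[psi_hat] at eps = 0,
    where F(z) = 1 - sqrt(1 - z^2)."""
    z = psi_hat(s)
    return 1.0 - sqrt(1.0 - z * z)
```

Substituting these closed forms, the identity $\hat{f}(s) = 1-\frac{1-\hat{\psi}(s)}{s\,\hat{p}(s)}$ holds to machine precision.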
Once again we can generalize the previous formula for a generic starting site $j_0\neq0$:
$$\hat{f}(j_0,0;s) = F[j_0,0;z=\hat{\psi}(s)] = \frac{P[j_0,0;z=\hat{\psi}(s)]}{P[z=\hat{\psi}(s)]} = \frac{\hat{p}(j_0,0;s)}{\hat{p}(s)}. \tag{23}$$
Now, turning back to our specific case, we know that the generating functions of interest can be written in the form (see Reference [29]):
$$P(z) = \frac{1}{(1-z)^{\rho}}\,H\!\left(\frac{1}{1-z}\right), \tag{24}$$

$$F(z) = 1 - (1-z)^{\rho}\,L\!\left(\frac{1}{1-z}\right), \qquad \text{for } \epsilon \geq -\frac{1}{2}, \tag{25}$$
where $L(x) = 1/H(x)$ are slowly-varying functions at infinity, namely, for instance, $L:\mathbb{R}^+\to\mathbb{R}^+$ is such that $\forall c>0$, $\lim_{x\to\infty} \frac{L(cx)}{L(x)} = 1$, and:
$$\rho = \begin{cases} 0 & \text{if } -1<\epsilon\leq-\frac{1}{2}, \\[1mm] \frac{1}{2}+\epsilon & \text{if } -\frac{1}{2}<\epsilon<+\frac{1}{2}, \\[1mm] 1 & \text{if } +\frac{1}{2}\leq\epsilon<+1. \end{cases} \tag{26}$$
As a consequence, the corresponding Laplace transforms are automatically given by:
$$\hat{p}(s) = \frac{[1-\hat{\psi}(s)]^{1-\rho}}{s}\,H\!\left(\frac{1}{1-\hat{\psi}(s)}\right), \tag{27}$$

$$\hat{f}(s) = 1 - [1-\hat{\psi}(s)]^{\rho}\,L\!\left(\frac{1}{1-\hat{\psi}(s)}\right). \tag{28}$$
At this point, it is apparent that we are forced to split our analysis according to the features of the waiting-time distribution: clearly, the asymptotic behaviour of the quantities mentioned above is established by the expansion of the Laplace transform of the waiting-time distribution, $\hat{\psi}(s)$, for small $s$.

3.1.3. Finite-Mean Waiting-Time Distributions

As the first choice, one can think of $\{t_i\}_{i\in\mathbb{N}}$ as i.i.d. positive random variables with finite mean $\tau$ (but not necessarily a finite variance too: for example, the waiting times may be taken as belonging to the domain of attraction (because they must be spectrally positive [36]) of $\alpha$-stable laws with $\alpha\in(1,2)$). In these circumstances, the leading term in the expansion is $\hat{\psi}(s) = 1-\tau s+o(s)$ for $s\to0$. Therefore, in the limit $s\to0$ we get:
$$\hat{p}(s) \sim \frac{\tau^{1-\rho}}{s^{\rho}}\,H\!\left(\frac{1}{\tau s}\right) \quad\Rightarrow\quad p(t) \sim \frac{1}{\Gamma(\rho)}\,\frac{\tau^{1-\rho}}{t^{1-\rho}}\,H\!\left(\frac{t}{\tau}\right), \qquad \text{with } 0<\rho\leq1, \tag{29}$$

$$\hat{f}(s) \sim 1 - \tau^{\rho} s^{\rho}\,L\!\left(\frac{1}{\tau s}\right) \quad\Rightarrow\quad f(t) \sim \frac{\rho}{\Gamma(1-\rho)}\,\frac{\tau^{\rho}}{t^{1+\rho}}\,L\!\left(\frac{t}{\tau}\right), \qquad \text{with } 0<\rho<1, \tag{30}$$
equivalently written in the limit $t\to\infty$ by directly applying Tauberian theorems [22,37]. In any case, however, the exponent of the power-law decay is the same as in the discrete-time model [29]:
$$p_{2n} \sim \begin{cases} n^{-\frac{1}{2}+\epsilon} & \text{if } -1<\epsilon<+\frac{1}{2}, \\[1mm] \dfrac{4}{\ln(n)} & \text{if } \epsilon=+\frac{1}{2}, \\[1mm] 2-\dfrac{1}{\epsilon} & \text{if } +\frac{1}{2}<\epsilon<+1, \end{cases} \qquad f_{2n} \sim \begin{cases} n^{-\frac{1}{2}+\epsilon} & \text{if } -1<\epsilon<-\frac{1}{2}, \\[1mm] \dfrac{1}{n\ln^2(n)} & \text{if } \epsilon=-\frac{1}{2}, \\[1mm] n^{-\frac{3}{2}-\epsilon} & \text{if } -\frac{1}{2}<\epsilon<+1. \end{cases} \tag{31}$$
This is not an astonishing result, because obviously $t \simeq \tau n$, where $n$ is the number of steps. It is merely a change of scale. So from now on we will disregard this possibility.
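The statement $t\simeq\tau n$ is just the elementary renewal theorem; a quick sketch with uniform waiting times (an arbitrary finite-mean choice made for illustration):

```python
import random

def steps_by_time(t_max, rng):
    """Number of completed steps up to physical time t_max, for i.i.d.
    waiting times uniform on (0, 2), so that the mean is tau = 1."""
    t, n = 0.0, 0
    while True:
        t += rng.uniform(0.0, 2.0)
        if t > t_max:
            return n
        n += 1
```

For large $t$ the step counter concentrates around $t/\tau$, so the internal clock and the physical clock differ only by the scale factor $\tau$.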

3.1.4. Infinite-Mean Waiting-Time Distributions

Implications are a little different if we choose power-law distributions lacking the first moment, because this time the dynamics becomes highly irregular. If we assume a heavy-tailed waiting-time distribution of the form $\psi(t) \sim B/t^{1+\alpha}$ with $0<\alpha<1$, then the corresponding Laplace expansion is $\hat{\psi}(s) = 1-b s^{\alpha}+o(s^{\alpha})$, where $b := B\,\Gamma(1-\alpha)/\alpha$. Again by substitution, we derive:
$$\hat{p}(s) \sim \frac{b^{1-\rho}}{s^{1-\alpha(1-\rho)}}\,H\!\left(\frac{1}{b s^{\alpha}}\right) \quad\Rightarrow\quad p(t) \sim \frac{1}{t^{\alpha(1-\rho)}}, \tag{32}$$

$$\hat{f}(s) \sim 1 - b^{\rho} s^{\alpha\rho}\,L\!\left(\frac{1}{b s^{\alpha}}\right) \quad\Rightarrow\quad f(t) \sim \frac{1}{t^{1+\alpha\rho}}, \qquad \text{with } 0<\rho\leq1. \tag{33}$$
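The small-$s$ expansion $\hat{\psi}(s) \simeq 1-bs^{\alpha}$ can itself be checked numerically for a concrete heavy-tailed law. Here we take, as an illustrative assumption, the Pareto density $\psi(t) = \alpha\, t^{-1-\alpha}$ for $t\geq1$ (so $B=\alpha$ and $b=\Gamma(1-\alpha)$) and evaluate $1-\hat{\psi}(s)$ by quadrature:

```python
from math import exp, sqrt, pi

def one_minus_psi_hat(s, alpha, n_pts=200_000, u_max=23.0):
    """Quadrature of 1 - psi_hat(s) = int_1^inf (1 - e^{-s t}) alpha t^{-1-alpha} dt
    for the Pareto density psi(t) = alpha t^{-(1+alpha)}, t >= 1
    (so B = alpha and b = Gamma(1 - alpha)); substitution t = e^u,
    trapezoidal rule on u in [0, u_max]."""
    h = u_max / n_pts
    total = 0.0
    for i in range(n_pts + 1):
        u = i * h
        w = 0.5 if i in (0, n_pts) else 1.0
        total += w * alpha * exp(-alpha * u) * (1.0 - exp(-s * exp(u)))
    return total * h
```

For $\alpha=\frac{1}{2}$ one has $b=\Gamma(\frac{1}{2})=\sqrt{\pi}$, and at small $s$ the quadrature reproduces $b\,s^{\alpha}$ up to the expected subleading corrections.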
We invite the reader to consult Appendix D for a check against a well-known example. Moreover, as anticipated in the previous section, asymptotic expansion and Tauberian theorems give us an immediate, even if incomplete, insight into what happens: exact and exhaustive results (involving a generic starting site) are postponed to Appendix B and Appendix C, in order not to burden the reading.
Anyhow, we provide here the summarising full spectrum of return, first-return, hitting and first-hitting time PDFs of the origin, which is consistent with the asymptotic behaviour previously predicted in Equation (32) and, for $\epsilon>-\frac{1}{2}$, in Equation (33). Firstly, keeping in mind Equation (17), we have:
$$p(j_0,0,t) \simeq \begin{cases} -\dfrac{b}{2\epsilon+1}\,\dfrac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\,\dfrac{1}{\Gamma(1-\alpha)}\,\dfrac{1}{t^{\alpha}} & \text{if } -1<\epsilon<-\frac{1}{2}, \\[2mm] \dfrac{1}{4}\ln\left(\dfrac{t^{\alpha}}{2b}\right)\dfrac{b}{\Gamma(1-\alpha)}\,\dfrac{1}{t^{\alpha}} & \text{if } \epsilon=-\frac{1}{2}, \\[2mm] \left(\dfrac{b}{2}\right)^{\frac{1}{2}-\epsilon}\dfrac{\Gamma(1-\epsilon)\,\Gamma\left(\frac{1}{2}+\epsilon\right)}{\Gamma\left(\frac{1}{2}-\epsilon\right)\,\Gamma(1+\epsilon)}\,\dfrac{1}{\Gamma\left(1-\frac{\alpha}{2}+\alpha\epsilon\right)}\,\dfrac{1}{t^{\alpha\left(\frac{1}{2}-\epsilon\right)}} & \text{if } -\frac{1}{2}<\epsilon<+\frac{1}{2}, \\[2mm] 2\left[\ln\left(\dfrac{t^{\alpha}}{2b}\right)\right]^{-1} & \text{if } \epsilon=+\frac{1}{2}, \\[2mm] 1-\dfrac{1}{2\epsilon} & \text{if } +\frac{1}{2}<\epsilon<+1. \end{cases} \tag{34}$$
With the choice $j_0=0$, one immediately gets the PDF of returns. In particular, let us point out that in the recurrent cases the coefficient does not depend on $j_0$.
A little discrepancy, instead, arises if you compare first-passage and first-return events (see Equations (6) and (7)). The first-return time PDF is asymptotically given by:
$$f(t) \simeq \begin{cases} 2^{\epsilon-\frac{1}{2}}\left(\dfrac{2\epsilon+1}{\epsilon}\right)^{2}\dfrac{\Gamma\left(\frac{3}{2}+\epsilon\right)\Gamma(1-\epsilon)}{\Gamma(\epsilon+1)\,\Gamma\left(\frac{1}{2}-\epsilon\right)}\,\dfrac{\alpha}{\Gamma\left(1+\alpha\left(\frac{1}{2}+\epsilon\right)\right)}\,\dfrac{b^{-\frac{1}{2}-\epsilon}}{t^{1-\alpha\left(\frac{1}{2}+\epsilon\right)}} & \text{if } -1<\epsilon<-\frac{1}{2}, \\[2mm] \dfrac{4\alpha}{\ln^2\left(\frac{t^{\alpha}}{2b}\right)}\,\dfrac{1}{t} & \text{if } \epsilon=-\frac{1}{2}, \\[2mm] 2^{\frac{1}{2}-\epsilon}\,\dfrac{\Gamma\left(\frac{1}{2}-\epsilon\right)\Gamma(1+\epsilon)}{\Gamma(1-\epsilon)\,\Gamma\left(\frac{1}{2}+\epsilon\right)}\,\dfrac{\alpha\left(\frac{1}{2}+\epsilon\right)}{\Gamma\left(1-\frac{\alpha}{2}-\alpha\epsilon\right)}\,\dfrac{b^{\frac{1}{2}+\epsilon}}{t^{1+\alpha\left(\frac{1}{2}+\epsilon\right)}} & \text{if } -\frac{1}{2}<\epsilon<+\frac{1}{2}, \\[2mm] \dfrac{b}{2}\ln\left(\dfrac{t^{\alpha}}{2b}\right)\dfrac{\alpha}{\Gamma(1-\alpha)}\,\dfrac{1}{t^{1+\alpha}} & \text{if } \epsilon=+\frac{1}{2}, \\[2mm] \dfrac{2\epsilon}{2\epsilon-1}\,\dfrac{\alpha}{\Gamma(1-\alpha)}\,\dfrac{b}{t^{1+\alpha}} & \text{if } +\frac{1}{2}<\epsilon<+1, \end{cases} \tag{35}$$
whereas the first-hitting time PDF can be connected to the previous one by means of the following relationship:
$$f(j_0,0,t) \simeq C_{\epsilon}(j_0)\, f(t), \tag{36}$$
with:
$$C_{\epsilon}(j_0) = \begin{cases} \dfrac{1}{2\epsilon+1}\left[\epsilon+\dfrac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\right] & \text{if } -1<\epsilon<-\frac{1}{2}, \\[2mm] -\dfrac{1}{4}\left[\Psi\left(\frac{1}{4}\right)+\Psi\left(\frac{3}{4}\right)-\Psi\left(\frac{1}{4}+\frac{|j_0|}{2}\right)-\Psi\left(\frac{3}{4}+\frac{|j_0|}{2}\right)\right] & \text{if } \epsilon=-\frac{1}{2}, \\[2mm] \dfrac{1}{2\epsilon+1}\left[\epsilon+\dfrac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\right] & \text{if } -\frac{1}{2}<\epsilon<+\frac{1}{2}, \\[2mm] j_0^2 & \text{if } \epsilon=+\frac{1}{2}, \\[2mm] \dfrac{j_0^2}{2\epsilon} & \text{if } +\frac{1}{2}<\epsilon<+1, \end{cases} \tag{37}$$
and $\Psi(z)$ denoting the digamma function [31]. As a consequence, we notice that, by setting $j_0=0$, the coefficient $C_{\epsilon}(j_0)$ vanishes in all regimes: the direct relation between $p(j_0,0,t)$ and $p(t)$ does not hold anymore for first-time events. Nevertheless, although the coefficients differ, the asymptotic decays of $f(j_0,0,t)$ and $f(t)$ are the same.

3.2. Survival Probability on the Positive Semi-Axis

Return and first-return probabilities also allow us to determine the asymptotic behaviour of other related quantities. In the first place, we can introduce the survival probability in a given subset: it is defined as the probability $q_n$ of never escaping from the selected collection of neighbouring sites. For instance, by considering $\mathbb{N}$, it can be written as:

$$q_n := \mathbb{P}\left(j_1\geq0,\, j_2\geq0,\,\ldots,\, j_n\geq0 \,\middle|\, j_0=0\right), \qquad q_0 := 1. \tag{38}$$
This quantity has been deeply studied for a wide range of homogeneous stochastic processes. In particular, with regard to random walks with i.i.d. steps, the historical Sparre Andersen theorem [38] is a significant result connecting a non-local property, since the survival probability depends on the history of the motion, to the local (in time) probability of being non-negative at the last step:
$$Q(z) = \sum_{n=0}^{\infty} q_n z^n = \exp\left[\sum_{n=1}^{\infty} \frac{z^n}{n}\,\mathbb{P}(j_n\geq0)\right], \qquad q_n \sim n^{-1/2}. \tag{39}$$
It is an outstanding expression of universality, both in the discrete and continuous-time versions, if one considers jump distributions that are continuous and symmetric about the origin, although this feature is partially lost (the coefficient of proportionality in the scaling law is no longer universal) when one moves onto a lattice instead of the real line [39,40]. However, whereas temporal components have already been included in the analysis [41], not much has been said about spatial correlations, to the authors’ knowledge.
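The Sparre Andersen scaling $q_n\sim n^{-1/2}$ can be reproduced exactly, at least in the homogeneous case $\epsilon=0$, by dynamic programming: for the simple symmetric lattice walk one expects $q_n\sqrt{n}\to\sqrt{2/\pi}$ (illustrative sketch, our own code):

```python
from math import sqrt, pi

def survival_probability(n_steps):
    """q_n for the simple symmetric walk (eps = 0): probability of
    staying >= 0 for n steps, computed by dynamic programming with an
    absorbing barrier at site -1."""
    prob = {0: 1.0}
    for _ in range(n_steps):
        nxt = {}
        for site, p in prob.items():
            nxt[site + 1] = nxt.get(site + 1, 0.0) + 0.5 * p
            if site > 0:   # stepping from 0 to -1 means absorption
                nxt[site - 1] = nxt.get(site - 1, 0.0) + 0.5 * p
        prob = nxt
    return sum(prob.values())
```

Already at a couple of thousand steps the product $q_n\sqrt{n}$ sits on the lattice constant $\sqrt{2/\pi}\approx0.798$.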
With our previous results [42] in mind, we will now consider the changes arising from the subordination to a physical clock. We need to introduce the persistence probability $u_n$ of never coming back to the origin up to the $n$-th step, namely $u_n := 1-\sum_{k=0}^{n} f_k = \mathbb{P}\left(j_1\neq0,\,\ldots,\,j_n\neq0 \,\middle|\, j_0=0\right)$, in order to write down the following recurrence relation:
$$2q_n = \delta_{n,0} + u_n + \sum_{k=1}^{n} f_k\, q_{n-k} \quad\Rightarrow\quad 2q(t) = \chi_0(t) + u(t) + \sum_{n=1}^{\infty}\left[\sum_{k=1}^{n} f_k\, q_{n-k}\right]\chi_n(t). \tag{40}$$
In the Laplace domain it becomes:
$$2\hat{q}(s) = \hat{u}(s) + \frac{1-\hat{\psi}(s)}{s}\left[1 + F[\hat{\psi}(s)]\,Q[\hat{\psi}(s)]\right], \qquad \text{where } \hat{u}(s) = \frac{1-F[\hat{\psi}(s)]}{s}, \tag{41}$$
and in conclusion:
$$\hat{q}(s) \sim \frac{1}{s^{1-\alpha\rho}},\; s\to0 \quad\Rightarrow\quad q(t) \sim \frac{1}{t^{\alpha\rho}},\; t\to\infty, \tag{42}$$
to be compared with the discrete-time results [42]:
$$Q(z) = \frac{1+U(z)}{1+(1-z)\,U(z)}, \qquad U(z) = \frac{1-F(z)}{1-z} = \frac{1}{(1-z)^{1-\rho}}\,L\!\left(\frac{1}{1-z}\right), \tag{43}$$
with $Q(z)\sim U(z)$ as $z\to1^-$ and $-\frac{1}{2}<\epsilon<\frac{1}{2}$. It is apparent that there are similarities in the null-recurrent cases $-\frac{1}{2}<\epsilon<\frac{1}{2}$, since $q_n\sim n^{-\rho}$, and when $\epsilon\leq-\frac{1}{2}$, whose decay is even slower (for $\epsilon=-\frac{1}{2}$ it is a decreasing slowly-varying function). The main relevant difference, instead, is the disappearance of the positive-recurrent regime (where Tauberian theorems fail).
Nevertheless, if the underlying discrete-time random walk is ergodic, a discrepancy remains also in the continuous-time translation. In general, we have to notice that:
$$\hat{q}(s) = \hat{u}(s) - \frac{s\,\hat{u}^2(s)}{2} + \frac{1-\hat{\psi}(s)}{2s} \sim \begin{cases} \left[1-\dfrac{1}{2}L\left(\dfrac{1}{b s^{\alpha}}\right)\right]\hat{u}(s) & \text{if } -1<\epsilon<-\frac{1}{2}, \\[2mm] \hat{u}(s) & \text{if } -\frac{1}{2}\leq\epsilon\leq+\frac{1}{2}, \\[2mm] \left[1+\dfrac{1}{2L}\right]\hat{u}(s) & \text{if } +\frac{1}{2}<\epsilon<+1, \end{cases} \tag{44}$$
where the slowly-varying function $L$ tends to a constant. The coefficient of proportionality between $q(t)$ and $u(t)$ should not be underestimated: it means that the occupation time spent at the origin behaves in a different way, as we will see later on. Indeed, for the sake of illustration, a halved coefficient $q(t)\sim\frac{1}{2}u(t)$ would mean that visits to the origin are negligible (which is recovered in the limiting case $\epsilon=-1$). In particular, $1+\frac{1}{2L}>1$ says that they have more weight in the presence of an underlying ergodic context.
As a final check, if we suppose, on the contrary, a finite time scale ($\hat{\psi}(s)\simeq1-\tau s$), then we have again $q(t)\sim u(t)$ (independently of $\tau$, clearly) and $q(t)\sim t^{-\rho}$ when $-\frac{1}{2}<\epsilon<\frac{1}{2}$.

3.3. Occupation Times

This section will be devoted to the statistics of the fraction of time spent by the walker at a given site or in a given subset. As in the previous one, the probability distribution of the quantity of interest stems from the features of the asymptotic decay of return and first-return PDFs. We shall describe (or simply mention) any other necessary tool from time to time, as always.

3.3.1. Occupation Time of the Origin

In the discrete-time formalism, as we have already discussed in Section 3.1.1, the particle cannot stand still on a site, and so considering the occupation time of a single site is equivalent to considering the number of visits to it. Thanks to the Darling-Kac theorem [43], a remarkable mathematical result for Markov processes, we know [29,42] that the number of visits to the starting point (properly rescaled by the average taken over several realizations) has a Mittag-Leffler distribution of index $\rho$ as its limiting distribution. We would emphasize that spatial inhomogeneities cause non-Markovianity for the original process, but here we are focusing on returns to the origin, which are renewal events. Thus one has a sequence of i.i.d. first-return times, and loss of memory is ensured in each case.
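For instance, at $\epsilon=0$ the mean number of visits grows like $n^{\rho}$ with $\rho=\frac{1}{2}$, which can be checked from the exact return probabilities of the simple symmetric walk (an illustrative sketch of ours):

```python
from math import sqrt, pi

def expected_visits(n_steps):
    """Expected number of returns to the origin up to step n for the
    simple symmetric walk (eps = 0), summing the exact return
    probabilities p_{2k} = C(2k, k) / 4^k, generated iteratively."""
    total, p = 0.0, 1.0
    for k in range(1, n_steps // 2 + 1):
        p *= (2.0 * k - 1.0) / (2.0 * k)   # p_{2k} from p_{2(k-1)}
        total += p
    return total
```

The sum grows like $\sqrt{2n/\pi}$, i.e. like $n^{1/2}$, in agreement with the Darling-Kac normalization at $\rho=\frac{1}{2}$.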
Obviously, this result is still true for waiting-time distributions with finite mean: even if the physical clock is running when the particle rests on a site and the internal clock stops, the microscopic time scale gives us the constant of direct proportionality necessary to move from the number of steps to the correct time measure, which has the same distribution, as a consequence.
In the non-trivial continuous-time translation, instead, what we need to apply is the Lamperti theorem [44]. It is a statement involving two-state stochastic processes (being or not being at the origin, in our case). More precisely, we deal with its continuous-time generalization, which has been discussed in many works, such as References [45,46,47,48,49]. Here we provide the final formula, which is the starting point of our analysis: for a detailed proof refer to Reference [47], for instance. Essentially, we will conclude that, even if the Mittag-Leffler statistics is mapped to a Lamperti distribution, the index $\rho$ of the discrete-time formalism is always replaced by the product $\alpha\rho$ characterizing the asymptotic expansion of the first-return PDF. In particular, in order to preserve the ergodic property of the discrete-time version ($\rho=1$), we have to consider a waiting-time distribution with finite mean ($\alpha=1$): in this way, the Lamperti distribution collapses to a Dirac delta function on the mean value of the occupation time.
Before formalizing the theorem, let us fix some notation. We consider a stochastic process described by a set of transitions between two states (that we call $in$ and $out$) and we regard arrivals at the origin and departures from it as events. Time periods between events are i.i.d. random variables, with PDFs $\psi_{in}(t)\equiv\psi(t)$ and $\psi_{out}(t)\equiv f(1,0,t)$ respectively, which are the alternating distributions of the renewal process. In fact, the time spent in state $in$ is precisely the waiting time on a site, whereas the time spent outside the origin coincides with the first-return time to the origin starting from $j_0=1$: thanks to the nearest-neighbour structure, when the walker leaves the origin it lands on $\pm1$, and $f(j_0,j,t)=f(|j_0|,j,t)$ by symmetry, as witnessed by Equation (37). Moreover, we can notice that $\psi_{in}$ and $\psi_{out}$ are connected by means of the first-return PDF; in fact:
$$f(t)=\int_0^{t}\psi(t')\,\psi_{out}(t-t')\,dt'\ \Longleftrightarrow\ \hat f(s)=\hat\psi(s)\,\hat\psi_{out}(s).$$
We assume that at $t=0$ the particle occupies the origin (namely, it is in state $in$) and we denote the total times spent by the walker in the two states up to time $t$ by $T_{in}$ and $T_{out}$, associated with the PDFs $f_t^{in}(T_{in})$ and $f_t^{out}(T_{out})$. The continuous-time Lamperti theorem tells us that the double Laplace transforms of these quantities are:
$$\hat f_s^{in}(u)=\left[\hat\psi_{in}(s+u)\,\frac{1-\hat\psi_{out}(s)}{s}+\frac{1-\hat\psi_{in}(s+u)}{s+u}\right]\frac{1}{1-\hat\psi_{in}(s+u)\,\hat\psi_{out}(s)},$$
$$\hat f_s^{out}(u)=\left[\hat\psi_{in}(s)\,\frac{1-\hat\psi_{out}(s+u)}{s+u}+\frac{1-\hat\psi_{in}(s)}{s}\right]\frac{1}{1-\hat\psi_{in}(s)\,\hat\psi_{out}(s+u)}.$$
For the moment we focus on the non-ergodic regime $\epsilon\le\frac12$. First of all, let us choose a finite-mean waiting-time distribution, which constitutes a useful check. Clearly $\hat\psi_{in}(s)=\hat\psi(s)\simeq1-\tau s$ and we know that $\hat f(s)\simeq1-\tau^{\rho}s^{\rho}\,L\!\left(\frac{1}{\tau s}\right)$: having different asymptotic time decays, $\hat\psi_{out}$ is ruled by the slower one, namely $\hat\psi_{out}(s)\simeq\hat f(s)$ (and indeed $C_\epsilon(1)=1$, as one can see from Equation (37)). By substituting in Equation (46), we immediately get:
$$\hat f_s^{in}(u)\simeq\frac{\tau+\tau^{\rho}s^{\rho-1}\,L\!\left(\frac{1}{\tau s}\right)-\tau^{\rho+1}s^{\rho-1}(s+u)\,L\!\left(\frac{1}{\tau s}\right)}{\tau(s+u)+\tau^{\rho}s^{\rho}\,L\!\left(\frac{1}{\tau s}\right)-\tau^{\rho+1}s^{\rho}(s+u)\,L\!\left(\frac{1}{\tau s}\right)},$$
and by expanding in powers of $u$, one can compute the moments of order $k$ of $T_{in}(t)$ in the time domain:
$$\langle T_{in}^{k}\rangle(s)=(-1)^{k}\,\frac{\partial^{k}}{\partial u^{k}}\hat f_s^{in}(u)\Big|_{u=0}\simeq k!\,L^{k}\!\left(\frac{1}{\tau s}\right)\frac{\tau^{k(1-\rho)}}{s^{1+k\rho}},\ s\to0\ \Rightarrow\ \langle T_{in}^{k}\rangle(t)\simeq\frac{k!}{\Gamma(1+k\rho)}\,\tau^{k(1-\rho)}\,L^{k}\!\left(\frac{t}{\tau}\right)t^{k\rho},\ t\to\infty.$$
This suggests that if we consider the rescaled random variable:
$$\zeta(t):=\frac{T_{in}(t)}{L\!\left(\frac{t}{\tau}\right)\tau^{1-\rho}\,t^{\rho}}\ \Rightarrow\ \lim_{t\to\infty}\mathbb{E}\left[\zeta^{k}(t)\right]=\frac{\Gamma(1+k)}{\Gamma(1+k\rho)},$$
then we asymptotically recover the moments of the Mittag-Leffler distribution of index $\rho$, as stated previously. We point out that $\zeta$ is not directly the fraction of time spent at the origin; this observation is consistent with the fact that, in addition to the presence of an infinite recurrence time, $f(t)$ decays more slowly than $\psi(t)$: without a proper rescaling, $T_{in}(t)$ is negligible with respect to $T_{out}(t)$ and, from a mathematical point of view, it follows a Dirac delta with mass at the origin, namely all moments converge to 0.
If, instead, we take waiting-time distributions with infinite mean, we cannot find any scaling function such that the rescaled occupation time admits a limiting distribution. In fact, recalling that $\hat\psi(s)\simeq1-b\,s^{\alpha}$ and $\hat f(s)\simeq1-b^{\rho}s^{\alpha\rho}\,L\!\left(\frac{1}{b\,s^{\alpha}}\right)$, we similarly obtain:
$$\hat f_s^{in}(u)\simeq\frac{b(s+u)^{\alpha-1}+b^{\rho}s^{\alpha\rho-1}\,L\!\left(\frac{1}{bs^{\alpha}}\right)-b^{\rho+1}s^{\alpha\rho-1}(s+u)^{\alpha}\,L\!\left(\frac{1}{bs^{\alpha}}\right)}{b(s+u)^{\alpha}+b^{\rho}s^{\alpha\rho}\,L\!\left(\frac{1}{bs^{\alpha}}\right)-b^{\rho+1}s^{\alpha\rho}(s+u)^{\alpha}\,L\!\left(\frac{1}{bs^{\alpha}}\right)},$$
$$\langle T_{in}^{k}\rangle(t)\simeq(-1)^{k+1}\,\frac{k\,\Gamma(\alpha)\,b^{1-\rho}\,t^{k+\alpha(\rho-1)}}{L\!\left(\frac{t^{\alpha}}{b}\right)\Gamma(\alpha-k+1)\,\Gamma\!\left(1+k+\alpha(\rho-1)\right)}\ \Rightarrow\ \lim_{t\to\infty}\mathbb{E}\left[\left(\frac{T_{in}(t)}{t}\right)^{k}\right]=0.$$
Let us move on to the regime that is ergodic in discrete time: $\epsilon>\frac12$ and $\rho=1$. This time $\hat\psi(s)$ and $\hat f(s)$ are of the same order, since they possess the same asymptotic exponent and the slowly-varying function decays to a constant $L$. As a consequence, they both determine the behaviour of:
$$\hat\psi_{out}(s)=\hat f(1,0;s)\simeq1-(L-1)\,b\,s^{\alpha},$$
according to $C_\epsilon(1)=\frac{1}{2\epsilon}$.
By exploiting again Equation (46), in the limit $s\to0$ we have:
$$\hat f_s^{in}(u)\simeq\frac{1}{s}\,\frac{\left(1+\frac{u}{s}\right)^{\alpha-1}+L-1}{\left(1+\frac{u}{s}\right)^{\alpha}+L-1},$$
which may be inverted (see Reference [50], as in the original paper [44]) and leads to the Lamperti probability density function for the fraction of time $T_{in}(t)/t$ spent at the origin (ergodicity breaking):
$$G_{\eta,\alpha}(t)=\frac{a\,\sin(\pi\alpha)}{\pi}\,\frac{t^{\alpha-1}\,(1-t)^{\alpha-1}}{a^{2}\,t^{2\alpha}+2a\,t^{\alpha}(1-t)^{\alpha}\cos(\pi\alpha)+(1-t)^{2\alpha}},$$
where $a=L-1$ is the asymmetry parameter and $\eta:=\lim_{t\to\infty}\mathbb{E}\left[\frac{T_{in}(t)}{t}\right]=\frac{1}{L}$. In addition, we notice that:
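As a sanity check of this density, the following sketch (our own, with illustrative parameter values $\epsilon=0.75$, $\alpha=0.7$, not those of the paper's figures) verifies numerically that $G_{\eta,\alpha}$ integrates to one on $(0,1)$ and has mean $\eta=1/L$:

```python
# Sketch (illustrative values, not from the paper): the Lamperti density with
# asymmetry a = L - 1 integrates to 1 on (0,1) and has mean eta = 1/L,
# independently of the index alpha.
import math

eps, alpha = 0.75, 0.7
L = 2 * eps / (2 * eps - 1)            # mean recurrence time, here L = 3
a, eta = L - 1, 1 / L

def G(t):
    num = a * math.sin(math.pi * alpha) / math.pi
    num *= t**(alpha - 1) * (1 - t)**(alpha - 1)
    den = (a * t**alpha)**2 \
        + 2 * a * (t * (1 - t))**alpha * math.cos(math.pi * alpha) \
        + (1 - t)**(2 * alpha)
    return num / den

# midpoint rule copes with the integrable endpoint singularities t^(alpha-1)
N = 200000
pts = [(i + 0.5) / N for i in range(N)]
norm = sum(G(t) for t in pts) / N
mean = sum(t * G(t) for t in pts) / N
print(norm, mean, eta)
```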
$$\tau_{ret}:=\sum_{n=1}^{\infty}n\,f_n=\lim_{z\to1^{-}}F'(z)=L=\frac{2\epsilon}{2\epsilon-1},$$
and so the expected value of the fraction of continuous time spent at the origin coincides with the inverse mean recurrence time of the discrete-time random walk. But, thanks to ergodicity, we also have a stationary distribution $\pi_0$ at the origin for $\epsilon>\frac12$ [29] that, by means of the Birkhoff ergodic theorem, satisfies:
$$\frac{1}{\tau_{ret}}=\lim_{n\to\infty}\frac{\sum_{k=1}^{n}\delta_{j_k,0}}{n}=\overline{\delta_{j,0}}^{\,t}\stackrel{\mathrm{B}}{=}\left\langle\delta_{j,0}\right\rangle_{ens}=\pi_0,$$
and in conclusion:
$$a=\frac{1-\pi_0}{\pi_0}=\frac{\pi_{out}}{\pi_{in}},$$
where $\pi_{out}$, $\pi_{in}$ are the stationary measures of the subsets associated with the two states, according to known results in the literature [46,47,48,49].
As a last comment, let us return to the finite-mean case. As expected, when $\alpha=1$ we get:
$$\hat f_s^{in}(u)\simeq\frac{1}{s+\eta u}\ \Rightarrow\ \lim_{t\to\infty}\mathbb{E}\left[\left(\frac{T_{in}(t)}{t}\right)^{k}\right]=\eta^{k}\ \Rightarrow\ f_t^{in}(T_{in})=\delta(T_{in}-\eta t),$$
namely a Dirac delta centered at the expected value $\eta$.

3.3.2. Occupation Time of the Positive Semi-Axis

In the regime that is non-ergodic for the discrete-time random walk, since $T_{out}(t)/t\to1$ (the fraction of time spent at the origin is negligible, as discussed above after Equation (50)), we have a system whose state space is split into two subsets, $\mathbb{Z}^{+}$ and $\mathbb{Z}^{-}$, that communicate only through the recurrent event, the origin, which is also the initial condition. Thanks to symmetry, $\psi_{\mathbb{Z}^{+}}(t)=\psi_{\mathbb{Z}^{-}}(t)\equiv\psi_{out}(t)$ and the limiting distribution of the fraction of time spent in each subset is the symmetric Lamperti PDF of index $\alpha\rho$, $G_{\frac12,\alpha\rho}$, which for finite-mean waiting times consistently reduces to $G_{\frac12,\rho}$ (by directly applying the original Lamperti statement [44]).
In the ergodic regime, instead, when the state space $\mathbb{Z}\setminus\{0\}$ is split into two symmetric subsets, one must in any case consider a three-state process: although the mean recurrence time is still infinite, the fraction of time spent at the origin carries its own weight without any rescaling, see Equation (55). But by symmetry we also know that $T_{\mathbb{Z}^{+}}(t)/t=\frac12\,T_{out}(t)/t$: as a consequence, the Lamperti distribution is $G_{\eta_{+},\alpha}$ with $\eta_{+}=\frac{\eta_{out}}{2}=\frac{L-1}{2L}$. In fact, one can retrace the previous steps for the asymptotic expansion of $\hat f_s^{out}(u)$ in Equation (46), or equivalently observe that $\mathbb{E}\left[T_{out}(t)/t\right]=1-\mathbb{E}\left[T_{in}(t)/t\right]=\frac{L-1}{L}=\frac{1}{2\epsilon}$ and that the exponent $\alpha$ remains unchanged when moving from $\psi_{in}(t)$ to $\psi_{out}(t)$. Here too, the asymmetry parameter can be written as:
$$a=\frac{1-\pi_{\mathbb{Z}^{+}}}{\pi_{\mathbb{Z}^{+}}}=\frac{\pi_{\mathbb{Z}^{-}\cup\{0\}}}{\pi_{\mathbb{Z}^{+}}}.$$
As in the previous section, if we set $\alpha=1$ then we recover ergodicity in the continuous-time model, since we obtain a Dirac delta with mass at $\eta_{+}$. At first sight there is a small difference with respect to the discrete-time random walk (see Reference [29]): the degenerate distribution is no longer centered at $\frac12$ (the value obtained immediately from the Lamperti theorem [44]), as one would expect by symmetry. But that value was due to the convention [44] of counting the visits at the origin ($T_{in}(t)/t\to0$) according to the direction of motion. So, if we add to the occupation time of the positive axis half the time spent at the origin (in the long-time limit), then we correctly get a mass at $\eta_{+}+\frac{\eta}{2}=\frac12$. This comment highlights another aspect of the ergodicity breaking: when $\alpha<1$, on the contrary, the choice of convention is completely irrelevant to the final result, since the mean return time to the origin is infinite, which supports the asymmetry of the distribution.
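The effect of this counting convention can be seen in a toy simulation (our own sketch for the discrete-time simple symmetric walk, $\epsilon=0$): crediting half the time spent at the origin to the positive side restores the symmetric mean $\frac12$, while the spread across trajectories remains large, reflecting the absence of self-averaging:

```python
# Sketch (epsilon = 0, discrete-time simple symmetric walk): the occupation
# fraction of the positive semi-axis, counting half the time spent at the
# origin, has mean exactly 1/2 by symmetry, while its spread stays large
# (arcsine-like law: single trajectories do not self-average).
import numpy as np

rng = np.random.default_rng(1)
walks, steps = 2000, 2000
pos = np.cumsum(rng.choice([-1, 1], size=(walks, steps)), axis=1, dtype=np.int32)

frac = ((pos > 0).sum(axis=1) + 0.5 * (pos == 0).sum(axis=1)) / steps
print(frac.mean(), frac.std())
```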

3.4. Moments Spectrum

Having assumed a waiting-time distribution of the form specified in Section 3.1.4 and knowing the asymptotic behaviour of the moments with respect to the number of steps, Equation (8), all we have to do is find the number of steps performed (on average) up to time $t$ in order to determine the physical-time dependence of the moments. Clearly we can write [22] $\langle n(t)\rangle=\sum_{n=0}^{\infty}n\,\chi_n(t)$, which in the Laplace domain reads:
$$\langle\hat n(s)\rangle=\frac{1-\hat\psi(s)}{s}\sum_{n=0}^{\infty}n\,\hat\psi^{n}(s)=\frac{1-\hat\psi(s)}{s}\cdot\hat\psi(s)\,\frac{d}{d\hat\psi(s)}\sum_{n=0}^{\infty}\hat\psi^{n}(s)=\frac{\hat\psi(s)}{s\left[1-\hat\psi(s)\right]}\simeq\frac{1}{b\,s^{\alpha+1}}.$$
Now, by applying Tauberian theorems once more and coming back to the time domain, we get:
$$\langle n(t)\rangle\simeq\frac{1}{\Gamma(1+\alpha)}\,\frac{t^{\alpha}}{b}.$$
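This growth law can be checked by a direct renewal-counting simulation (our own sketch; for the Pareto waiting-time density used below, we assume the standard Tauberian coefficient $b=\Gamma(1-\alpha)\,t_0^{\alpha}$, which is not stated explicitly in this section):

```python
# Sketch: count renewals up to time t for Pareto waiting times with alpha < 1
# and compare <n(t)> with t^alpha / (b * Gamma(1 + alpha)). The coefficient
# b = Gamma(1 - alpha) * t0**alpha for this psi is an assumption of the sketch.
import math
import random

random.seed(2)
alpha, t0, t_max, trials = 0.5, 1.0, 1.0e6, 2000

def pareto_wait():
    u = 1.0 - random.random()              # u in (0, 1]
    return t0 * u ** (-1.0 / alpha)        # inverse-transform sampling

total_steps = 0
for _ in range(trials):
    clock, n = 0.0, 0
    while True:
        clock += pareto_wait()
        if clock > t_max:
            break
        n += 1
    total_steps += n

b = math.gamma(1.0 - alpha) * t0**alpha
predicted = t_max**alpha / (b * math.gamma(1.0 + alpha))
simulated = total_steps / trials
print(simulated, predicted)
```

The number of steps fluctuates strongly between realizations (it is Mittag-Leffler distributed), so only the ensemble average is expected to match the prediction.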
As a consequence, we can easily conclude that:
$$\langle|j|^{q}\rangle(t)=\sum_{j\in\mathbb{Z}}p(j,t)\,|j|^{q}\sim t^{\alpha\nu(q)}=\begin{cases}t^{\frac{q}{2}\alpha} & \text{if }\epsilon<\frac12,\\ t^{0} & \text{if }\epsilon>\frac12\text{ and }q<2\epsilon-1,\\ t^{\frac{1+q-2\epsilon}{2}\alpha} & \text{if }\epsilon>\frac12\text{ and }q>2\epsilon-1,\end{cases}$$
hence, in particular, a subdiffusive regime also arises for non-ergodic processes. The derivation of this spectrum for the discrete-time model is rather technical, being a consequence of the specific form of the continuum limit, so we will not recap it here: for the detailed analysis refer to References [29,51].

3.5. Statistics of Records

The statistics of records is another aspect relying on the mean number of steps performed in a given time period. Essentially, we have to retrace the relevant steps shown in Reference [29] for the discrete-time random walk in the light of the additional knowledge gathered so far.
First we define an excursion as each subsequence of the walk between consecutive returns to the origin: as we shall see, the properties of single excursions carry information about the expected value of the maximum of the entire motion. We deal with a stochastic process defined on the half-line, for instance on the non-negative integers $\mathbb{N}$: in the case of symmetric random walks, we do not need its extension to the whole line, since the origin can always be assumed to be a totally reflecting barrier. Indeed, changes arise if and only if positive and negative excursions are characterized by different tail bounds for their durations [34]. A fundamental assumption to fulfill, instead, is the presence of the regenerative structure, whereas Markovianity is not required. Moreover, in order to ensure recurrence, we focus on the range $\epsilon>-\frac12$.
For the sake of completeness, here we provide heuristic guidelines: they should simply be intended as a motivation; for rigorous proofs we refer the reader to the previous references. Let $E_n$ denote the number of excursions, equivalently the number of returns to the origin, occurred up to the $n$-th step, and $M$ the maximum position occupied during a single excursion. In Reference [29] we have shown that the stochastic process obtained from the Gillis random walk $\{j_k\}_{k\in\mathbb{N}}$ by means of the transformation $j_n\mapsto j_n^{1+2\epsilon}$ is a symmetric random walk with no drift. As a consequence, thanks to classic results in random walk theory (see Reference [30]), we know that the probability of reaching the site $m$ before coming back to the origin, which is also the probability of $M$ exceeding $m$, is given by:
$$P(\text{hitting }m\text{ before going back to }0)\equiv P(M\geq m)\sim\frac{1}{m^{1+2\epsilon}},$$
and then:
$$P(M_n<m)=\left[P(M<m)\right]^{E_n}=\left[1-\frac{C}{m^{1+2\epsilon}}\right]^{E_n},$$
since, because of the renewal property, excursions are independent of one another. Now, by means of the common limits for exponential functions:
$$\lim_{n\to\infty}\left[1-\frac{x^{-(1+2\epsilon)}}{E_n}\right]^{E_n}=e^{-x^{-(1+2\epsilon)}},$$
since recurrence ensures $E_n\to\infty$ as $n\to\infty$, we deduce that the correct scaling law for the maximum is $M_n\sim E_n^{\frac{1}{1+2\epsilon}}$. At this point, we just have to find the relationship between the number of excursions $E_n$ and the number of steps $n$. But this is almost immediate, since we know that the properly rescaled random variable $E_n/n^{\rho}$ follows a Mittag-Leffler distribution of parameter $\rho$ [42,43], whose first moment is by definition $\frac{1}{\Gamma(1+\rho)}$, and as a consequence $\langle E_n\rangle\sim n^{\rho}$. In conclusion, we get that the expected value of the maximum reached by the particle up to time $t$ is:
$$\langle M_n\rangle\sim\begin{cases}n^{\frac12} & \text{if }-\frac12<\epsilon\le+\frac12,\\ n^{\frac{1}{1+2\epsilon}} & \text{if }+\frac12\le\epsilon<+1,\end{cases}\qquad\Rightarrow\qquad\langle M(t)\rangle\sim\begin{cases}t^{\frac{\alpha}{2}} & \text{if }-\frac12<\epsilon\le+\frac12,\\ t^{\frac{\alpha}{1+2\epsilon}} & \text{if }+\frac12\le\epsilon<+1.\end{cases}$$
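The first scaling regime can be probed by a direct simulation (our own sketch, which assumes the standard Gillis transition probabilities $R_j=\frac12\left(1-\frac{\epsilon}{j}\right)$, $L_j=\frac12\left(1+\frac{\epsilon}{j}\right)$ for $j\neq0$ and a symmetric step at the origin): since $\langle M_n\rangle\sim n^{1/2}$ for $-\frac12<\epsilon\le\frac12$, quadrupling the number of steps should roughly double the mean maximum:

```python
# Sketch: in the range -1/2 < epsilon <= 1/2 the expected maximum grows as
# n**0.5. Transition probabilities are the standard Gillis ones (an
# assumption of this sketch): P(step right) = (1 - eps/j)/2 for j != 0.
import random

random.seed(3)
eps, walkers = 0.25, 400

def mean_max(n_steps):
    acc = 0
    for _ in range(walkers):
        j, m = 0, 0
        for _ in range(n_steps):
            p_right = 0.5 if j == 0 else 0.5 * (1.0 - eps / j)
            j += 1 if random.random() < p_right else -1
            if j > m:
                m = j
        acc += m
    return acc / walkers

ratio = mean_max(8000) / mean_max(2000)
print(ratio)   # expected near (8000/2000)**0.5 = 2
```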
An interesting comment concerns a related quantity: the duration of a single excursion $T$, namely the first-return time to the origin. A mathematically rigorous theorem [34] comes to our aid once again. On the event that $\{j_n\}_{n\in\mathbb{N}}$ reaches $m$ during an excursion, semimartingale estimates can be used to show that the walker spends approximately an amount of time of order $m^{2}$ before returning to the origin:
$$P(T>m^{2})\approx P(M>m)\ \Rightarrow\ P(T>n)\sim n^{-\frac12-\epsilon},$$
which is clearly consistent with our result in Section 3.1.3: $f_{2n}\sim n^{-\frac32-\epsilon}$. Moreover, the expected value of the maximum duration of an excursion up to the $n$-th step is:
$$\langle T_n^{max}\rangle\sim E_n^{\frac{2}{1+2\epsilon}}\sim\begin{cases}n & \text{if }-\frac12<\epsilon\le+\frac12,\\ n^{\frac{2}{1+2\epsilon}} & \text{if }+\frac12\le\epsilon<+1,\end{cases}$$
in agreement with the fact that for $\epsilon\le\frac12$ the process is null-recurrent, whereas in the ergodic regime we have a finite mean return time and the growth of $\langle T_n^{max}\rangle$ is slower. On the contrary, as we have seen in Section 3.3, in the presence of a non-trivial continuous-time random walk ergodicity is lost and in fact:
$$f(t)\sim t^{-1-\alpha\rho}\ \Rightarrow\ \langle T^{max}(t)\rangle\sim t.$$

4. Numerical Results

Here our intent is to substantiate the theoretical arguments by means of numerical checks. We also take the opportunity to show how detailed analytical considerations are fundamental in this kind of context: some aspects are intrinsically difficult to investigate directly from a numerical point of view.
Before going any further, as a general comment, from now on we will consider Pareto distributions as heavy-tailed waiting-time distributions for our simulations:
$$\psi(t)=\frac{\alpha\,t_0^{\alpha}}{t^{\alpha+1}},\qquad t>t_0,$$
where $\alpha$ is a positive parameter, the so-called tail index, and $t_0$, the scale parameter, is the lower bound for $t$. In this way, for $\alpha\in(1,2]$ the variance of the random variable is infinite while the mean is finite, whereas for $\alpha\le1$ the variance does not exist and the expected value becomes infinite as well. We will focus on the latter case.
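Pareto variates are straightforward to generate by inverse-transform sampling, as in the following sketch of the procedure:

```python
# Sketch: inverse-transform sampling of the Pareto density
# psi(t) = alpha * t0**alpha / t**(alpha + 1). If U is uniform on (0, 1],
# then t = t0 * U**(-1/alpha) has survival function P(T > x) = (t0/x)**alpha.
import random

random.seed(4)
alpha, t0, n = 0.8, 1.0, 200000

samples = [t0 * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]
empirical_tail = sum(1 for t in samples if t > 10.0) / n
exact_tail = (t0 / 10.0) ** alpha
print(empirical_tail, exact_tail)
```

With $\alpha=0.8\le1$ these samples have infinite mean, which is the regime the simulations below focus on.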

4.1. Return and First-Return Events

Here we compare Figure 1a,b with Equations (34) and (35), respectively: there is good agreement with the previous theoretical analysis.

4.2. Occupation Times

In Figure 2 we examine how the PDF of the occupation time of the origin depends on the features of the waiting-time distribution in the purely non-ergodic regime $\epsilon\in\left(-\frac12,\frac12\right)$. In the first panel, Figure 2a, we have $\alpha>1$, namely a finite first moment with a finite ($\alpha=3$) or infinite ($\alpha=1.9$) variance; as a consequence, in both cases the occupation time $T_{in}(t)$ rescaled by its mean value, $\zeta$, follows a limiting Mittag-Leffler distribution of index $\rho=\frac12+\epsilon$, the same as for the properly rescaled number of visits to the origin. In the presence of an infinite first moment, instead, there is no longer an appropriate scaling function: we show (Figure 2b) the slow convergence of the fraction of occupation time $T_{in}(t)/t$ to a Dirac delta with mass at 0. For increasing evolution times, the peak at $u=0$ becomes more and more prominent with respect to $u=1$ in the asymmetric U-shaped PDF, suggesting the collapse to a degenerate Lamperti distribution.
Next, as illustrated in Figure 3, we move on to the ergodic regime of the underlying random walk: we consider different values of $\alpha$ in order to show that, when $\alpha$ approaches 1, the expected Lamperti distribution, Equation (55), eventually collapses to a Dirac delta centered at the mean value $\eta$ of the occupation time, according to previous results in the physical literature [46,47,48,49].
We now discuss the distribution of the occupation time of the positive semi-axis. In Figure 4, we take a purely non-ergodic process: since the fraction of time spent at the origin is negligible, we obtain the expected symmetric Lamperti distribution of index $\alpha\rho$, which replaces the discrete-time parameter $\rho$. In Figure 5, we shift to the discrete-time ergodic regime by setting $\epsilon=0.9$. We observe once again the emergence of the continuous-time ergodic regime when $\alpha\to1$, with an asymmetry due to the fact that $T_{in}(t)/t\nrightarrow0$.

4.3. Moments Spectrum

In Figure 6a one can see the expected smooth behaviour for a purely non-ergodic process, although it is no longer related to normal diffusion. In Figure 6b, instead, in addition to subdiffusion we recognize the presence of a corner, since for $q<2\epsilon-1$ the moments tend to a constant, which is typical of the underlying ergodic property: the convergence near the critical point is slower.

4.4. Records

In Figure 7, we finally show the asymptotic behaviour of the mean number of records, or equivalently the expected maximum, up to time $t$. In particular, we want to emphasize that, even if the range $\epsilon\in\left(-\frac12,\frac12\right]$ becomes an anomalous regime (in contrast with the discrete-time model), the mean number of records still behaves as the first moment.

5. Discussion

We have reassessed all the exact results found in our previous work [29] in the light of the continuous-time formalism. By drawing the waiting times on the sites from a heavy-tailed distribution lacking the first moment, significant modifications arise in all regimes.
By tuning the real parameter $|\epsilon|<1$, we detect the following differences with respect to the discrete-time dynamics. First of all, the ergodic regime for $\epsilon>\frac12$ fades out. Nevertheless, the underlying ergodic property makes the continuous-time upper range distinct from the purely non-ergodic processes $\epsilon\le\frac12$: visits to the origin have more and more weight, since the fraction of time spent at the starting site does not converge to 0. Although the mean recurrence time is infinite, due solely to the irregular temporal component, we have a non-degenerate Lamperti distribution for the quantity of interest. Secondly, the strong-anomalous diffusion regime, characterizing the ergodic processes in the discrete-time version, is weakly extended to the purely non-ergodic range, where weak subdiffusion replaces normal diffusion. More generally, return and first-return probabilities have a slower asymptotic power-law decay, depending on the parameter $\alpha$ of the temporal tail bounds.
It remains an interesting open problem to extend the analysis to centrally-biased random walks with hopping rates beyond nearest neighbours. Some conclusions may be drawn, for instance, by applying the Lamperti criteria [52,53] about the recurrence or transience of the random walk, but only if precise assumptions regarding the increments (see Equation (3.11) in Reference [52]) are satisfied. In general, though, most of the above results must be reassessed: first of all, the walker is no longer forced to pass through the origin in the transitions between $\mathbb{Z}^{-}$ and $\mathbb{Z}^{+}$, and renewal theory, by identifying events with returns to the starting site, plays a crucial role in our setting. Secondly, looking also at the corresponding diffusion equation, the most remarkable case should involve increments that are not uniformly bounded, or rather random jump distances with infinite mean.
We hope our studies will fall under an increasingly wide class of general exact results for stochastic processes lacking translational invariance, which hide subtle phenomena of physical interest not satisfied by the well-known homogeneous counterpart.

Author Contributions

All authors have contributed substantially to the work. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge partial support from PRIN Research Project No. 2017S35EHN “Regular and stochastic behavior in dynamical systems” of the Italian Ministry of Education, University and Research (MIUR).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CTRW: Continuous-Time Random Walk
PDF: Probability Density Function
i.i.d.: Independent and Identically Distributed

Appendix A. Gillis-Type Proof

In Section 3.1.1 we have written the following equation:
$$p_a(j,t)-\delta_{j,0}\,\delta(t)=\int_0^{t}p_a(j-1,t')\,\psi(t-t')\,dt'\;R_{j-1}+\int_0^{t}p_a(j+1,t')\,\psi(t-t')\,dt'\;L_{j+1}.$$
In the Laplace domain it reads:
$$\hat p_a(j;s)-\delta_{j,0}=\hat\psi(s)\left[\hat p_a(j-1;s)\,R_{j-1}+\hat p_a(j+1;s)\,L_{j+1}\right],$$
and considering also the generating function on sites, we get:
$$\hat P_a(x,s)-1=\hat\psi(s)\left[x\,\hat R(x,s)+\frac{1}{x}\,\hat L(x,s)\right]\qquad\text{where}\qquad\hat R(x,s):=\sum_{j=-\infty}^{+\infty}\hat p_a(j;s)\,R_j\,x^{j},$$
similarly for $\hat L(x,s)$, and clearly $\hat R(x,s)+\hat L(x,s)=\hat P_a(x,s)$. As a consequence, we can write:
$$\frac12\,\hat P_a(x,s)-\frac{\epsilon}{2}\sum_{j\neq0}\frac{\hat p_a(j;s)}{j}\,x^{j}=\hat R(x,s)=\frac{x-\hat\psi(s)}{\hat\psi(s)\,(x^{2}-1)}\,\hat P_a(x,s)-\frac{x}{\hat\psi(s)\,(x^{2}-1)}.$$
Differentiating both sides with respect to x, we obtain:
$$\partial_x\hat R(x,s)=\frac12\,\partial_x\hat P_a(x,s)-\frac{\epsilon}{2x}\left[\hat P_a(x,s)-\hat p_a(0;s)\right],$$
to be compared with:
$$\hat\psi(s)\,(x^{2}-1)^{2}\,\partial_x\hat R(x,s)=\hat P_a(x,s)\left[2x\,\hat\psi(s)-(x^{2}+1)\right]+\partial_x\hat P_a(x,s)\,(x^{2}-1)\,(x-\hat\psi(s))+x^{2}+1,$$
and so the differential equation for P ^ a ( x , s ) is:
$$\left[x\,\hat\psi(s)\,(x^{2}-1)^{2}-2x\,(x^{2}-1)\,(x-\hat\psi(s))\right]\partial_x\hat P_a(x,s)+\left[2x\,(x^{2}+1)-4x^{2}\,\hat\psi(s)-\epsilon\,\hat\psi(s)\,(x^{2}-1)^{2}\right]\hat P_a(x,s)=2x\,(x^{2}+1)-\epsilon\,\hat\psi(s)\,(x^{2}-1)^{2}\,\hat p_a(0;s).$$
Now, we set $x=e^{i\phi}$, $\partial_x\hat P_a(x,s)=:\partial_x E(\phi(x),s)=-\frac{i}{x}\,\frac{\partial E}{\partial\phi}$, and split real and imaginary parts, thus obtaining $\frac{\partial E}{\partial\phi}+f(\phi)\,E=g(\phi)$, where:
$$f(\phi)=\left[1-\hat\psi(s)\cos\phi\right]^{-1}\left[\hat\psi(s)\,(1-\epsilon)\sin\phi-\left(1-\hat\psi(s)\cos\phi\right)\cot\phi\right],$$
$$g(\phi)=\left[1-\hat\psi(s)\cos\phi\right]^{-1}\left[\cot\phi-\epsilon\,\hat\psi(s)\sin\phi\;\hat p_a(0;s)\right],$$
and the solution is:
$$E(\phi,s)=e^{-\int f(\phi)\,d\phi}\left[\int g(\phi)\,e^{\int f(\phi)\,d\phi}\,d\phi+\mathrm{const.}\right].$$
In order to recover Equation (14), it is sufficient to perform the calculations and recall that:
$$\hat p_a(0;s)=\frac{1}{2\pi}\int_0^{2\pi}E(\phi,s)\,d\phi.$$

Appendix B. Hitting Time PDF of the Origin: Exact Results

Let us derive the exact and asymptotic behaviours of the probabilities of being at the origin. For the moment, we neglect the limiting cases $\epsilon=\pm\frac12$. All we need are the following properties of the Gamma function and the transformation formula 15.3.6 for the hypergeometric functions in Reference [31]:
$$\Gamma(z+1)=z\,\Gamma(z),\quad-z\notin\mathbb{N},\qquad\prod_{k=0}^{n-1}\Gamma\!\left(z+\frac{k}{n}\right)=(2\pi)^{\frac{n-1}{2}}\,n^{\frac12-nz}\,\Gamma(nz),$$
$${}_2F_1(a,b;c;z)=\frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}\,{}_2F_1(a,b;a+b-c+1;1-z)+(1-z)^{c-a-b}\,\frac{\Gamma(c)\,\Gamma(a+b-c)}{\Gamma(a)\,\Gamma(b)}\,{}_2F_1(c-a,c-b;c-a-b+1;1-z)=:G\cdot F_G+(1-z)^{c-a-b}\,K\cdot F_K,$$
where $|\arg(1-z)|<\pi$ and $c-a-b\notin\mathbb{Z}$. Now, recalling Equation (17) and considering $\hat\psi(s)\simeq1-b\,s^{\alpha}$ for $s\to0$, we have to compute $\hat p(j_0,0;s)$ asymptotically in the different regimes.
First of all, let us take $\epsilon\in\left(-1,-\frac12\right)$. By skipping the intermediate steps, we get:
$$\hat p(j_0,0;s)\simeq\frac{b}{s^{1-\alpha}}\,\frac{1}{2^{|j_0|}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,\Gamma(1+\epsilon)}\,\frac{G_N}{G_D}=\frac{b}{s^{1-\alpha}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,|j_0|!\,\Gamma(1+\epsilon)}\,\frac{2^{|j_0|}\,\Gamma(|j_0|+1)\,\Gamma(1-\epsilon)}{(-2\epsilon-1)\,\Gamma(|j_0|-\epsilon)},$$
since:
$$0<c_N-a_N-b_N=-\epsilon-\frac12<\frac12\qquad\text{and}\qquad1<c_D-a_D-b_D=\frac12-\epsilon<\frac32.$$
In conclusion, by means of Tauberian theorems:
$$p(j_0,0,t)\simeq-\frac{b}{2\epsilon+1}\,\frac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\,\frac{1}{\Gamma(1-\alpha)}\,\frac{1}{t^{\alpha}}\qquad\text{as }t\to\infty.$$
Before going any further, let us make a comment which highlights the transience property. It is evident that:
$$\hat p(j_0,0;s)\propto\frac{1-\hat\psi(s)}{s}=\hat\chi_0(s),$$
namely, it has the same scaling law as the survival probability on a site, although possibly not the same coefficient. In particular, setting $j_0=0$ means considering return probabilities. But if one moreover chooses the limiting case $\epsilon=-1$, when the particle moves to $\pm1$ it will never come back, since $L_{+1}=0=R_{-1}$, and it is like placing two totally reflecting barriers on the outside. As a consequence $p(t)\equiv\chi_0(t)$.
If $\epsilon\in\left(-\frac12,\frac12\right)$, we find again $0<c_D-a_D-b_D\,(<1)$ but $(-1<)\,c_N-a_N-b_N<0$, and so when $s\to0$ and $t\to\infty$:
$$\hat p(j_0,0;s)=\frac{1-\hat\psi(s)}{s}\,\frac{\hat\psi^{|j_0|}(s)}{2^{|j_0|}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,\Gamma(1+\epsilon)}\,\frac{\left[1-\hat\psi^{2}(s)\right]^{-\frac12-\epsilon}K_N F_{K_N}+G_N F_{G_N}}{G_D F_{G_D}+\left[1-\hat\psi^{2}(s)\right]^{\frac12-\epsilon}K_D F_{K_D}}\simeq\frac{b}{s^{1-\alpha}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,|j_0|!\,\Gamma(1+\epsilon)}\,(2bs^{\alpha})^{-\epsilon-\frac12}\,\frac{K_N}{G_D}=\frac{1}{2^{\epsilon+\frac12}}\,\frac{b^{\frac12-\epsilon}}{s^{1-\frac{\alpha}{2}+\alpha\epsilon}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,|j_0|!\,\Gamma(1+\epsilon)}\,\frac{2^{2\epsilon+|j_0|}\,\Gamma(|j_0|+1)\,\Gamma\!\left(\frac12+\epsilon\right)\Gamma(1-\epsilon)}{\Gamma(\epsilon+1+|j_0|)\,\Gamma\!\left(\frac12-\epsilon\right)},$$
$$p(j_0,0,t)\simeq\left(\frac{b}{2}\right)^{\frac12-\epsilon}\frac{\Gamma\!\left(\frac12+\epsilon\right)\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma\!\left(\frac12-\epsilon\right)}\,\frac{1}{\Gamma\!\left(1-\frac{\alpha}{2}+\alpha\epsilon\right)}\,\frac{1}{t^{\alpha\left(\frac12-\epsilon\right)}}.$$
In the first line of p ^ ( j 0 , 0 ; s ) , we want to emphasize that we can always explicitly write the exact slowly-varying function (the last factor), even if then we focus on its asymptotic expansion.
When $\epsilon\in\left(\frac12,1\right)$, instead, we have $-\frac32<c_N-a_N-b_N<-1<0$ and $-\frac12<c_D-a_D-b_D<0$, and as a consequence:
$$\hat p(j_0,0;s)\simeq\frac{b}{s^{1-\alpha}}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,|j_0|!\,\Gamma(1+\epsilon)}\,\frac{(2bs^{\alpha})^{-\epsilon-\frac12}}{(2bs^{\alpha})^{\frac12-\epsilon}}\,\frac{K_N}{K_D}=\frac{1}{2s}\,\frac{\Gamma(1+\epsilon+|j_0|)}{2^{|j_0|}\,|j_0|!\,\Gamma(1+\epsilon)}\,\frac{2^{1+|j_0|}\,\Gamma(|j_0|+1)\left(\epsilon-\frac12\right)\Gamma(\epsilon)}{\Gamma(\epsilon+1+|j_0|)}=\frac{1}{s}\,\frac{2\epsilon-1}{2\epsilon},$$
$$p(j_0,0,t)\simeq\frac{2\epsilon-1}{2\epsilon}.$$
Finally, we have to handle the transition points. If $\epsilon=-\frac12$ we also need to introduce formulas 15.3.10 and 15.3.11 of Reference [31], with $m=1,2,3,\ldots$, $|\arg(1-z)|<\pi$, $|1-z|<1$:
$${}_2F_1(a,b;a+b;z)=\frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\sum_{n=0}^{\infty}\frac{(a)_n\,(b)_n}{(n!)^{2}}\left[2\Psi(n+1)-\Psi(a+n)-\Psi(b+n)-\ln(1-z)\right](1-z)^{n},$$
where $\Psi(z)=\frac{d}{dz}\ln\Gamma(z)$ denotes the digamma function and $(z)_n$ is the Pochhammer symbol, and:
$${}_2F_1(a,b;a+b+m;z)=\frac{\Gamma(m)\,\Gamma(a+b+m)}{\Gamma(a+m)\,\Gamma(b+m)}\sum_{n=0}^{m-1}\frac{(a)_n\,(b)_n}{n!\,(1-m)_n}(1-z)^{n}-\frac{\Gamma(a+b+m)}{\Gamma(a)\,\Gamma(b)}\,(z-1)^{m}\times\sum_{n=0}^{\infty}\frac{(a+m)_n\,(b+m)_n}{n!\,(n+m)!}(1-z)^{n}\left[\ln(1-z)-\Psi(n+1)-\Psi(n+m+1)+\Psi(a+n+m)+\Psi(b+n+m)\right].$$
Hence we get:
$$\hat p(j_0,0;s)=\frac{1-\hat\psi(s)}{s}\,\frac{\hat\psi^{|j_0|}(s)}{2^{|j_0|}}\,\frac{\Gamma\!\left(\frac12+|j_0|\right)}{|j_0|!\,\sqrt{\pi}}\,\frac{{}_2F_1\!\left(a_N,b_N;a_N+b_N;\hat\psi^{2}(s)\right)}{{}_2F_1\!\left(a_D,b_D;a_D+b_D+1;\hat\psi^{2}(s)\right)}$$
$$\simeq\frac{b}{s^{1-\alpha}}\,\frac{\Gamma\!\left(\frac12+|j_0|\right)}{2^{|j_0|}\,|j_0|!\,\sqrt{\pi}}\,\frac{\Gamma\!\left(\frac34\right)\Gamma\!\left(\frac54\right)\Gamma(|j_0|+1)}{\Gamma\!\left(\frac14+\frac{|j_0|}{2}\right)\Gamma\!\left(\frac{|j_0|}{2}+\frac34\right)}\,\ln\!\left(\frac{1}{2bs^{\alpha}}\right)=\frac{b}{4}\,\ln\!\left(\frac{1}{2bs^{\alpha}}\right)\frac{1}{s^{1-\alpha}},$$
$$p(j_0,0,t)\simeq\frac14\,\ln\!\left(\frac{t^{\alpha}}{2b}\right)\frac{b}{\Gamma(1-\alpha)}\,\frac{1}{t^{\alpha}}.$$
Whereas, when $\epsilon=+\frac12$, we also need Equation 15.3.12 of Reference [31]:
$${}_2F_1(a,b;a+b-m;z)=\frac{\Gamma(m)\,\Gamma(a+b-m)}{\Gamma(a)\,\Gamma(b)}\,(1-z)^{-m}\sum_{n=0}^{m-1}\frac{(a-m)_n\,(b-m)_n}{n!\,(1-m)_n}(1-z)^{n}-\frac{(-1)^{m}\,\Gamma(a+b-m)}{\Gamma(a-m)\,\Gamma(b-m)}\sum_{n=0}^{\infty}\frac{(a)_n\,(b)_n}{n!\,(n+m)!}(1-z)^{n}\left[\ln(1-z)-\Psi(n+1)-\Psi(n+m+1)+\Psi(a+n)+\Psi(b+n)\right].$$
In this way, we can conclude that:
$$\hat p(j_0,0;s)=\frac{1-\hat\psi(s)}{s}\,\frac{\hat\psi^{|j_0|}(s)}{2^{|j_0|}}\,\frac{\Gamma\!\left(\frac32+|j_0|\right)}{|j_0|!\,\Gamma\!\left(\frac32\right)}\,\frac{{}_2F_1\!\left(a_N,b_N;a_N+b_N-1;\hat\psi^{2}(s)\right)}{{}_2F_1\!\left(a_D,b_D;a_D+b_D;\hat\psi^{2}(s)\right)}$$
$$\simeq\frac{b}{s^{1-\alpha}}\,\frac{\Gamma\!\left(\frac32+|j_0|\right)}{2^{|j_0|}\,|j_0|!\,\frac{\sqrt{\pi}}{2}}\,\frac{\Gamma(|j_0|+1)\,\Gamma\!\left(\frac14\right)\Gamma\!\left(\frac34\right)}{\Gamma\!\left(\frac34+\frac{|j_0|}{2}\right)\Gamma\!\left(\frac54+\frac{|j_0|}{2}\right)}\,\frac{1}{2bs^{\alpha}}\,\left[\ln\!\left(\frac{1}{2bs^{\alpha}}\right)\right]^{-1}=\frac{2}{s}\left[\ln\!\left(\frac{1}{2bs^{\alpha}}\right)\right]^{-1},$$
$$p(j_0,0,t)\simeq2\left[\ln\!\left(\frac{t^{\alpha}}{2b}\right)\right]^{-1}.$$

Appendix C. First-Hitting Time PDF: Exact Results

Now, by means of Equations (22) and (23), we can exploit the previous appendix in order to extend the analysis to first-passage events. Once again, we can deduce exact results, although in the end we extract asymptotic formulas; in addition, this time we have to split the investigation according to the choice of the starting site, distinguishing between first-passage and first-return.

Appendix C.1. First-Return

Knowing already that:
$$\hat p(s)\simeq\begin{cases}\dfrac{\epsilon}{2\epsilon+1}\,\dfrac{b}{s^{1-\alpha}} & \text{if }-1<\epsilon<-\frac12,\\[2mm] \dfrac14\,\ln\!\left(\dfrac{1}{2bs^{\alpha}}\right)\dfrac{b}{s^{1-\alpha}} & \text{if }\epsilon=-\frac12,\\[2mm] \left(\dfrac{b}{2}\right)^{\frac12-\epsilon}\dfrac{\Gamma(1-\epsilon)\,\Gamma\!\left(\frac12+\epsilon\right)}{\Gamma\!\left(\frac12-\epsilon\right)\Gamma(1+\epsilon)}\,\dfrac{1}{s^{1-\alpha\left(\frac12-\epsilon\right)}} & \text{if }-\frac12<\epsilon<+\frac12,\\[2mm] \dfrac{2}{s}\left[\ln\!\left(\dfrac{1}{2bs^{\alpha}}\right)\right]^{-1} & \text{if }\epsilon=+\frac12,\\[2mm] \left(1-\dfrac{1}{2\epsilon}\right)\dfrac{1}{s} & \text{if }+\frac12<\epsilon<+1,\end{cases}$$
we immediately get:
$$\hat f(s)=1-\frac{1-\hat\psi(s)}{s\,\hat p(s)}\simeq\begin{cases}1-4\left[\ln\!\left(\dfrac{1}{2bs^{\alpha}}\right)\right]^{-1} & \text{if }\epsilon=-\frac12,\\[2mm] 1-2^{\frac12-\epsilon}\,\dfrac{\Gamma\!\left(\frac12-\epsilon\right)\Gamma(1+\epsilon)}{\Gamma(1-\epsilon)\,\Gamma\!\left(\frac12+\epsilon\right)}\,b^{\frac12+\epsilon}\,s^{\alpha\left(\frac12+\epsilon\right)} & \text{if }-\frac12<\epsilon<+\frac12,\\[2mm] 1-\dfrac12\,\ln\!\left(\dfrac{1}{2bs^{\alpha}}\right)b\,s^{\alpha} & \text{if }\epsilon=+\frac12,\\[2mm] 1-\dfrac{2\epsilon}{2\epsilon-1}\,b\,s^{\alpha} & \text{if }+\frac12<\epsilon<+1,\end{cases}$$
and (thanks to Tauberian theorems):
$$f(t)\simeq\begin{cases}2^{\frac12-\epsilon}\,\dfrac{\Gamma\!\left(\frac12-\epsilon\right)\Gamma(1+\epsilon)}{\Gamma(1-\epsilon)\,\Gamma\!\left(\frac12+\epsilon\right)}\,\dfrac{\alpha\left(\frac12+\epsilon\right)}{\Gamma\!\left(1-\frac{\alpha}{2}-\alpha\epsilon\right)}\,\dfrac{b^{\frac12+\epsilon}}{t^{1+\alpha\left(\frac12+\epsilon\right)}} & \text{if }-\frac12<\epsilon<+\frac12,\\[2mm] \dfrac{b}{2}\,\ln\!\left(\dfrac{t^{\alpha}}{2b}\right)\dfrac{\alpha}{\Gamma(1-\alpha)}\,\dfrac{1}{t^{1+\alpha}} & \text{if }\epsilon=+\frac12,\\[2mm] \dfrac{2\epsilon}{2\epsilon-1}\,\dfrac{\alpha}{\Gamma(1-\alpha)}\,\dfrac{b}{t^{1+\alpha}} & \text{if }+\frac12<\epsilon<+1.\end{cases}$$
Actually, we cannot directly apply Tauberian theorems to $\hat f(s)$, of course, but we can get around the problem by means of the following trick. If we consider $\hat f(s)\simeq1-b\,s^{\eta}\,L\!\left(\frac1s\right)$ with $0<\eta<1$ and use an auxiliary function, for instance the derivative, then:
$$-\hat f'(s)=\int_0^{\infty}e^{-st}\,t\,f(t)\,dt\simeq\eta\,b\,s^{\eta-1}\,L\!\left(\frac1s\right)\ \Rightarrow\ t\,f(t)\simeq\frac{\eta\,b}{\Gamma(1-\eta)}\,\frac{L(t)}{t^{\eta}}.$$
We have yet to determine the result for $\epsilon\le-\frac12$. The limiting case $\epsilon=-\frac12$ is almost immediate, since again:
$$-\hat f'(s)\simeq\frac{4\alpha}{\ln^{2}\!\left(\frac{1}{2bs^{\alpha}}\right)}\,\frac{1}{s}\ \text{ for }s\to0\qquad\Rightarrow\qquad f(t)\simeq\frac{4\alpha}{\ln^{2}\!\left(\frac{t^{\alpha}}{2b}\right)}\,\frac{1}{t}\ \text{ for }t\to\infty.$$
For transient processes, instead, we must first supplement the asymptotic expansion with higher order terms:
$$\hat p(s)\simeq\frac{b}{s^{1-\alpha}}\,\frac{G_N F_{G_N}+(2bs^{\alpha})^{-\epsilon-\frac12}\,K_N F_{K_N}}{G_D F_{G_D}+(2bs^{\alpha})^{\frac12-\epsilon}\,K_D F_{K_D}}\simeq\frac{b}{s^{1-\alpha}}\,\frac{G_N}{G_D}\left[1+(2bs^{\alpha})^{-\epsilon-\frac12}\,\frac{K_N}{G_N}\right]\left[1-(2bs^{\alpha})^{\frac12-\epsilon}\,\frac{K_D}{G_D}\right]\simeq\frac{b}{s^{1-\alpha}}\,\frac{\epsilon}{2\epsilon+1}\left[1+(2bs^{\alpha})^{-\epsilon-\frac12}\,\frac{K_N}{G_N}\right],$$
in such a way that:
$$\hat f(s)\simeq1-\frac{2\epsilon+1}{\epsilon}+\frac{2\epsilon+1}{\epsilon}\,\frac{K_N}{G_N}\,(2bs^{\alpha})^{-\epsilon-\frac12},\qquad\text{with}\quad\lim_{\epsilon\to-1^{+}}\hat f(s)=0,\quad\lim_{\epsilon\to-\frac12^{-}}\hat f(s)=1,$$
$$\hat f'(s)\simeq-\frac{2\epsilon+1}{\epsilon}\,2^{1+2\epsilon}\,\frac{\Gamma\!\left(\epsilon+\frac12\right)\Gamma(\epsilon)}{\Gamma(\epsilon+1)\,\Gamma\!\left(\frac12-\epsilon\right)}\,\alpha\left(\frac12+\epsilon\right)(2b)^{-\frac12-\epsilon}\,s^{-1-\alpha\left(\frac12+\epsilon\right)},$$
$$f(t)\simeq2^{\epsilon-\frac12}\,\frac{2\epsilon+1}{\epsilon^{2}}\,\frac{\Gamma\!\left(\frac32+\epsilon\right)\Gamma(1-\epsilon)}{\Gamma(\epsilon+1)\,\Gamma\!\left(\frac12-\epsilon\right)}\,\frac{\alpha}{\Gamma\!\left(1+\alpha\left(\frac12+\epsilon\right)\right)}\,\frac{b^{-\frac12-\epsilon}}{t^{1-\frac{\alpha}{2}-\alpha\epsilon}}.$$

Appendix C.2. First-Hitting

Now, keeping in mind the techniques illustrated in the previous Appendix C.1 and Appendix B, we manage to generalize the results to the first-passage time to the origin, assuming to start from any other site j 0 0 .
As long as $\epsilon\neq\pm\frac12$, we can write:
$$\hat f(j_0,0;s)=\frac{\hat p(j_0,0;s)}{\hat p(s)}\simeq\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,2^{|j_0|}\,\Gamma(1+\epsilon)}\,(1-|j_0|\,b\,s^{\alpha})\,\frac{G_N F_{G_N}+(2bs^{\alpha})^{-\epsilon-\frac12}\,K_N F_{K_N}}{G_D F_{G_D}+(2bs^{\alpha})^{-\epsilon-\frac12}\,K_D F_{K_D}}.$$
When $\epsilon\in\left(\frac12,1\right)$:
$$\hat f(j_0,0;s)\simeq\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,2^{|j_0|}\,\Gamma(1+\epsilon)}\,(1-|j_0|\,b\,s^{\alpha})\,\frac{K_N}{K_D}\,\frac{F_{K_N}(s\to0)}{F_{K_D}(s\to0)}=(1-|j_0|\,b\,s^{\alpha})\,\frac{1+\frac{a_{K_N}b_{K_N}}{c_{K_N}}\,2b\,s^{\alpha}}{1+\frac{a_{K_D}b_{K_D}}{c_{K_D}}\,2b\,s^{\alpha}},$$
since the first term in the expansion of the hypergeometric functions ${}_2F_1(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)_n\,(b)_n}{(c)_n}\frac{z^{n}}{n!}$ is of order $s^{\alpha}$, which is dominant with respect to $s^{\alpha\left(\frac12+\epsilon\right)}$. Therefore:
$$\hat f(j_0,0;s)\simeq1-\left[|j_0|+\frac{(|j_0|+1-\epsilon)(|j_0|-\epsilon)+\epsilon(1-\epsilon)}{2\epsilon-1}\right]b\,s^{\alpha}=1-\frac{j_0^{2}}{2\epsilon-1}\,b\,s^{\alpha},$$
$$f(j_0,0,t)\simeq\frac{j_0^{2}}{2\epsilon-1}\,\frac{\alpha}{\Gamma(1-\alpha)}\,\frac{b}{t^{\alpha+1}}\equiv\frac{j_0^{2}}{2\epsilon}\,f(t).$$
If $\epsilon\in\left(-\frac12,\frac12\right)$:
$$\hat f(j_0,0;s)\simeq(1-|j_0|\,b\,s^{\alpha})\,\frac{1+\frac{G_N}{K_N}\,(2bs^{\alpha})^{\frac12+\epsilon}}{1+\frac{G_D}{K_D}\,(2bs^{\alpha})^{\frac12+\epsilon}}\simeq1-2^{\frac12+\epsilon}\left[\frac{G_D}{K_D}-\frac{G_N}{K_N}\right]b^{\frac12+\epsilon}\,s^{\alpha\left(\frac12+\epsilon\right)},$$
$$f(j_0,0,t)\simeq2^{\frac12+\epsilon}\left[\frac{2^{1+2\epsilon}\,\Gamma\!\left(\epsilon-\frac12\right)\Gamma\!\left(\epsilon+\frac12\right)}{\Gamma(1+\epsilon)\,\Gamma(-\epsilon)}-\frac{\Gamma(1+\epsilon+|j_0|)}{\Gamma(|j_0|-\epsilon)}\right]\frac{\alpha\left(\frac12+\epsilon\right)b^{\frac12+\epsilon}}{\Gamma\!\left(1-\alpha\left(\frac12+\epsilon\right)\right)}\,\frac{1}{t^{1+\alpha\left(\frac12+\epsilon\right)}}$$
$$\equiv\left[1-\frac{2\epsilon+1}{\epsilon}+\frac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\right]f(t).$$
Finally, for $\epsilon\in\left(-1,-\frac12\right)$:
$$\hat f(j_0,0;s)\simeq\frac{\Gamma(1+\epsilon+|j_0|)}{|j_0|!\,2^{|j_0|}\,\Gamma(1+\epsilon)}\,(1-|j_0|\,b\,s^{\alpha})\,\frac{G_N}{G_D}\,\frac{1+\frac{K_N}{G_N}\,(2bs^{\alpha})^{-\epsilon-\frac12}}{1+\frac{K_D}{G_D}\,(2bs^{\alpha})^{-\epsilon-\frac12}}$$
$$\simeq\frac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(-\epsilon)}{\Gamma(|j_0|-\epsilon)\,\Gamma(1+\epsilon)}\left[1+\left(\frac{K_N}{G_N}-\frac{K_D}{G_D}\right)(2b)^{-\frac12-\epsilon}\,s^{-\alpha\left(\frac12+\epsilon\right)}\right],$$
$$f(j_0,0,t)\simeq\frac{\alpha\,2^{\frac12+\epsilon}}{\Gamma\!\left(1+\frac{\alpha}{2}+\alpha\epsilon\right)}\,\frac{\Gamma\!\left(\frac32+\epsilon\right)\Gamma(-\epsilon)}{\Gamma\!\left(-\epsilon-\frac12\right)\Gamma(1+\epsilon)}\left[1-\frac{\Gamma(-\epsilon)\,\Gamma(\epsilon+1+|j_0|)}{\Gamma(|j_0|-\epsilon)\,\Gamma(1+\epsilon)}\right]\frac{b^{-\frac12-\epsilon}}{t^{1-\frac{\alpha}{2}-\alpha\epsilon}}$$
$$\equiv\left[1-\frac{2\epsilon+1}{\epsilon}+\frac{\Gamma(1+\epsilon+|j_0|)\,\Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\,\Gamma(|j_0|-\epsilon)}\right]f(t).$$
At this stage, we have to focus on the transition points. Firstly, let us consider $\epsilon=+\frac12$:
$$\hat f(j_0,0;s)\simeq\frac{\Gamma\!\left(\frac32+|j_0|\right)}{|j_0|!\,2^{|j_0|}\,\Gamma\!\left(\frac32\right)}\,(1-|j_0|\,b\,s^{\alpha})\,\frac{{}_2F_1\!\left(\frac34+\frac{|j_0|}{2},\frac54+\frac{|j_0|}{2};|j_0|+1;\hat\psi^{2}(s)\right)}{{}_2F_1\!\left(\frac54,\frac34;1;\hat\psi^{2}(s)\right)}$$
$$=\frac{\Gamma\!\left(\frac32+|j_0|\right)}{|j_0|!\,2^{|j_0|}\,\Gamma\!\left(\frac32\right)}\,(1-|j_0|\,b\,s^{\alpha})\,\frac{{}_2F_1\!\left(a_N,b_N;a_N+b_N-1;\hat\psi^{2}(s)\right)}{{}_2F_1\!\left(a_D,b_D;a_D+b_D-1;\hat\psi^{2}(s)\right)},$$
where the hypergeometric functions in the numerator and denominator asymptotically behave as:
$${}_2F_1(a,b;a+b-1;z)\simeq\frac{\Gamma(a+b-1)}{\Gamma(a)\,\Gamma(b)}\,\frac{1}{1-z}\left[1-(a-1)(b-1)\,\ln\!\left(\frac{1}{1-z}\right)(1-z)\right],$$
$$N\simeq\frac{\Gamma(|j_0|+1)\,2^{|j_0|-\frac12}}{\sqrt{\pi}\,\Gamma\!\left(\frac32+|j_0|\right)}\,\frac{1}{b\,s^{\alpha}}\left[1-\left(\frac{j_0^{2}}{2}-\frac18\right)\ln\!\left(\frac{1}{2bs^{\alpha}}\right)b\,s^{\alpha}\right],$$
$$D\simeq\frac{2^{\frac12}}{\pi}\,\frac{1}{b\,s^{\alpha}}\left[1+\frac18\,\ln\!\left(\frac{1}{2bs^{\alpha}}\right)b\,s^{\alpha}\right].$$
In conclusion:
$$\hat f(j_0,0;s)\simeq1-\frac{j_0^{2}}{2}\,\ln\!\left(\frac{1}{2bs^{\alpha}}\right)b\,s^{\alpha},$$
$$f(j_0,0,t)\simeq\frac{j_0^{2}}{2}\,\ln\!\left(\frac{t^{\alpha}}{2b}\right)\frac{\alpha}{\Gamma(1-\alpha)}\,\frac{b}{t^{\alpha+1}}\equiv j_0^{2}\,f(t).$$
Secondly, when $\epsilon=-\frac12$ we get:
f ^ ( j 0 , 0 ; s ) Γ 1 2 + | j 0 | <