Anomalous stochastic transport of particles with self-reinforcement and Mittag-Leffler distributed rest times

We introduce a persistent random walk model for the stochastic transport of particles involving self-reinforcement and a rest state with Mittag-Leffler distributed residence times. The model involves a system of hyperbolic partial differential equations with a non-local switching term described by the Riemann-Liouville derivative. Monte Carlo simulations show that this model generates superdiffusion at intermediate times but reverts to subdiffusion in the long time limit. To confirm this result, we derive the equation for the second moment and find that it is subdiffusive in the long time limit. Analyses of two simpler models are also included, which demonstrate how the Mittag-Leffler rest state dominates and leads to subdiffusion. The observation that transient superdiffusion occurs in an eventually subdiffusive system is a useful feature for applications in stochastic biological transport.


Introduction
The stochastic movement of intracellular organelles, cells and animals very often exhibits anomalous diffusion, which has led to the widespread use of fractional diffusion equations and fractional derivatives in modelling [1,2]. Several recent observations emphasize the importance of fractional models in biological phenomena, such as cancer cell motility [3], polarized cell dynamics [4], intracellular transport of organelles [5] and animal migration [6]. In particular, superdiffusive and subdiffusive transport are observed simultaneously in intracellular transport [7,5,8]. Recently, superdiffusion was modelled by a persistent random walk using the concept of self-reinforcing directionality [9,10]. However, it is well known that endosomes often rest before moving and that these rest times are power-law distributed [8]. Therefore, it is natural to formulate the self-reinforcing persistent random walk model with Mittag-Leffler distributed rest times, which have power-law tails.
For continuous time random walks (CTRWs), the competition between power-law run and rest times has been explored thoroughly [11,12,13]. A single model based on the elephant random walk [14] with reinforcement, exhibiting superdiffusion, diffusion and subdiffusion in the long time limit, has been formulated in discrete time and space [15]. The model presented here is a generalization of the elephant random walk [14,16,17,18,19,20,21], a jump process, to a persistent random walk framework with finite velocity [22,2,23]. The persistent random walk framework is advantageous because extensions such as reactions, chemosensitive movement and interactions between agents are established in the literature [24,25,26,27,28,29,30,31,32] and convenient to introduce. The purpose of our paper is to explore the impact of an anomalous rest state on self-reinforced persistent random walks with finite velocity.
Stochastic transport with self-reinforcement and Mittag-Leffler distributed rest times

To implement rests in the self-reinforcing persistent random walk [10], we formulate a model with three states. We introduce the probability density functions (PDFs) for the active states with positive and negative velocity, p+(x, t) and p−(x, t), and the resting state, p0(x, t). In the active states, the random walk runs with constant speed ν for an exponentially distributed time with rate λ. After each active run, the random walk pauses for a Mittag-Leffler distributed residence time and then chooses the next state. With conditional transition probabilities r+, r− and r0, the random walk transitions from rest to the positive velocity state, the negative velocity state, or remains in the rest state (r+ + r− + r0 = 1). An illustration of this is shown in Figure 1.

Figure 1: A diagram that shows a dog and a monotrichous bacterium making the decision to transition from the rest state to the positive velocity, negative velocity or rest states with the associated conditional transition probabilities, r+, r− and r0. If the conditional transition probabilities depend on the previous history of choices, r+, r− and r0 should depend on the current position and age.

The PDEs that represent this random walk are

∂p+/∂t + ν ∂p+/∂x = −λ p+ + r+ i(x, t),
∂p−/∂t − ν ∂p−/∂x = −λ p− + r− i(x, t),     (1)
∂p0/∂t = λ(p+ + p−) − (r+ + r−) i(x, t),

where the integral escape rate from the rest state, i(x, t), is defined as follows (see, for example, [33,34,35])

i(x, t) = τ0^{−β} 𝒟_t^{1−β} p0(x, t),  0 < β < 1.     (2)

Here, the Riemann-Liouville derivative is

𝒟_t^{1−β} p0(x, t) = (1/Γ(β)) ∂/∂t ∫_0^t p0(x, u)/(t − u)^{1−β} du.     (3)

Note that for the non-Markovian alternating states, one can use the general expression for the escape rate i(x, t) in the form of a convolution of the memory kernel and the density [36].
In what follows, we will introduce and explain the two key components in the model (1): self-reinforcement and Mittag-Leffler distributed rest times.
Self-reinforcement. Firstly, introducing self-reinforcement to (1) requires careful consideration of the conditional transition probabilities. What really is self-reinforcement and how does one introduce it in (1)? If a process is self-reinforcing, then it retains a memory of its past decisions and uses this history to affect its current decisions. In other words, the random walk should consider its past decisions to transition into positive and negative velocity states and adjust the conditional transition probabilities (r+, r− and r0). Looking at Figure 1, the particle or animal should have r+, r− and r0 change with time as more decisions are made.
An intuitive formulation of self-reinforcement is that the time spent travelling with velocity ±ν determines the probability that the particle will switch to that state. Mathematically,

r± = (w1 t± + w2 t∓ + w3 t0)/t,     (4)

where t+/t, t−/t and t0/t are the relative times spent in the positive, negative and zero velocity states respectively, out of the total time elapsed t. The constant prefactors w1, w2 and w3 are weights on each of those relative times and w1 + w2 + w3 = 1. To avoid the trivial case when reinforcement of the rest state results in permanent rest, we set w3 = 1/3. Now using t = t+ + t− + t0 and x = x0 + ν(t+ − t−) in (4), r± can be expressed as

r± = 1/3 ± α0 (x − x0)/(2νt),     (5)

where α0 = w1 − w2 is the self-reinforcement parameter. The initial position of the random walk is x0, but for simplicity we will assume x0 = 0 from now on. If α0 > 0, then w1 > w2 and time spent in a state increases the probability of future occupation of that state (in (4), r± increases more as t± increases since w1 > w2). On the other hand, if α0 < 0, then w1 < w2 and time spent in a state increases the probability of future avoidance of that state (r± increases more as t∓ increases since w1 < w2). Intuitively, reinforcement of past behaviour can be represented by w1 > w2 and punishment of past behaviour by w1 < w2. Equation (5) is a powerful formulation since we can express self-reinforcement as an additive term on top of constant and equal conditional transition probabilities of 1/3. Moreover, this additive term encapsulates the past history of the random walk by accounting for the ratio of the current position x to the current time t. This ratio compares how far the particle has moved away from the initial position with the maximum possible distance it could have travelled. It is now clear that the superdiffusion generated by this self-reinforcing mechanism [10] is fundamentally different from that generated by power-law flights in CTRW and Lévy walk formulations (a biological motivation is given in Section VII of [10]).
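As a concrete sketch, the conditional transition probabilities can be computed directly from the walker's current position and time. The helper below is our own illustration (names and defaults are ours), assuming the additive form r± = 1/3 ± α0 x/(2νt) described above with r0 = 1/3:

```python
def transition_probs(x, t, nu, alpha0, r0=1/3):
    """Conditional transition probabilities (r+, r-, r0) out of the rest state.

    x, t   : current position and elapsed time of the walker
    nu     : constant speed of the active states (|x| <= nu*t)
    alpha0 : self-reinforcement parameter w1 - w2, with 0 < alpha0 < 2/3
    """
    bias = alpha0 * x / (2 * nu * t)  # history-dependent additive term
    return r0 + bias, r0 - bias, r0

# A walker that has mostly moved right (x close to nu*t) becomes more likely
# to pick the positive-velocity state again when alpha0 > 0.
rp, rm, rz = transition_probs(x=0.8, t=1.0, nu=1.0, alpha0=0.6)
```

Since |x| ≤ νt and 0 < α0 < 2/3, the three probabilities remain in (0, 1) and always sum to one.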
It would be interesting to extend this model to three dimensions, as was done for the diffusion-advection model of intracellular transport in [37].
Mittag-Leffler distributed rest times. Secondly, power-law distributed rest times with a divergent first moment are introduced into (1) through the Riemann-Liouville fractional derivative. This method is well established in the literature [2]. This component is significant because power laws appear in many empirical observations of stochastic processes with complex underlying mechanisms [38]. Pertinent examples of power laws include the waiting times between: stock transactions [39]; arrivals of internet viruses [40]; sudden decreases in terminal airway resistance for lungs with respiratory problems [41]; players joining a game network [42]; household residence times before moving [43]; consecutive emails sent [44]; dopamine signalling in Drosophila melanogaster [45]; and active-passive state switching in endosome movement [8]. All of these examples demonstrate the importance of power-law waiting times in real phenomena, highlighting the need for a self-reinforcing persistent random walk model in which the agent takes power-law distributed rests while deciding the choice of the next run. To be exact, the introduction of the integral escape rate i(x, t) is equivalent to the random walker waiting in the rest state for a random time with probability density

ψ(τ) = −(d/dτ) E_β[−(τ/τ0)^β],     (6)

where E_β(·) is the one-parameter Mittag-Leffler function. The waiting times in the rest state are distributed approximately like (τ/τ0)^{−β} for large values of τ/τ0, resulting in 'heavy' or power-law tails. Now, from (1), we can formulate a single governing equation by introducing the total density function p = p+ + p− + p0 and the flux J = ν(p+ − p−). Then

∂p/∂t + ∂J/∂x = 0,
∂J/∂t + ν² ∂(p − p0)/∂x = −λJ + (α0 x/t) i(x, t),     (7)
∂p0/∂t = λ(p − p0) − (1 − r0) i(x, t).

We can eliminate the flux J by combining the first two equations in (7) and arrive at the governing equations

∂²p/∂t² + λ ∂p/∂t = ν² ∂²(p − p0)/∂x² − α0 ∂/∂x [x i(x, t)/t],
∂p0/∂t = λ(p − p0) − (1 − r0) i(x, t),     (8)

where for self-reinforcement, 0 < α0 < 2/3 and r0 = 1/3. From these governing equations, it is not immediately clear what effect the Mittag-Leffler distributed rest times will have when competing with self-reinforcement.
To elucidate this relationship, we perform second moment calculations in the next section.
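For simulation purposes, rest times with Mittag-Leffler survival probability E_β(−(τ/τ0)^β) can be drawn exactly with the Kozubowski-Rachev inversion formula. The sampler below is our own helper (not part of the paper's code); the simplification sin(βπ)/tan(βπV) − cos(βπ) = sin(βπ(1 − V))/sin(βπV) keeps the base positive for all V in (0, 1):

```python
import numpy as np

def mittag_leffler_rvs(beta, tau0, size, rng):
    """Exact Mittag-Leffler waiting times with survival P(T > t) = E_beta(-(t/tau0)^beta).

    Kozubowski-Rachev inversion:
        T = -tau0 * ln(U) * (sin(beta*pi*(1 - V)) / sin(beta*pi*V))**(1/beta),
    with U, V independent Uniform(0, 1).  For beta -> 1 this reduces to an
    exponential with mean tau0.
    """
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    ratio = np.sin(beta * np.pi * (1.0 - v)) / np.sin(beta * np.pi * v)
    return -tau0 * np.log(u) * ratio ** (1.0 / beta)

rng = np.random.default_rng(42)
waits = mittag_leffler_rvs(beta=0.5, tau0=1.0, size=100_000, rng=rng)
# Heavy tail: a noticeable fraction of rests vastly exceeds tau0.
```

For β = 0.5 the survival probability decays like τ^{−1/2}, so even τ > 100 τ0 occurs with probability of order a few percent, which is what eventually traps the walker in the rest state.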

Second moment calculations
From (8), we obtain the fractional differential equations

d²μ2/dt² + λ dμ2/dt = 2ν²(1 − N0) + (2α0/t) τ0^{−β} 𝒟_t^{1−β} μ20,     (9)

dN0/dt = λ(1 − N0) − (1 − r0) τ0^{−β} 𝒟_t^{1−β} N0,     (10)

where μ2 = ∫_{−∞}^{∞} x² p dx, μ20 = ∫_{−∞}^{∞} x² p0 dx and N0 = ∫_{−∞}^{∞} p0 dx. Equation (10) is a fractional differential equation describing the total probability of finding the random walk in the rest state. To simplify calculations, we take the Laplace transform of (9) and (10) along with the initial conditions (11). Rearranging and taking the long time limit, t → ∞ (s → 0), the transformed equations give the Laplace-space moments (12), from which we obtain a single equation (13) for the second moment in Laplace space. From (13), we obtain the asymptotic second moment in Laplace space as

μ̂2(s) ≃ 2(1 − r0)ν² / (λ² τ0^β s^{1+β}).     (14)

Finally, taking the inverse Laplace transform,

μ2(t) ≃ 2(1 − r0)ν² t^β / (λ² τ0^β Γ(1 + β)).     (15)

Clearly, (15) demonstrates that the random walk with self-reinforcement (5) and Mittag-Leffler distributed rest times (6) is subdiffusive in the long time limit. This theoretical result is confirmed by the second moment of numerical simulations shown in Figure 2. Interestingly, transient superdiffusion is found in the Monte Carlo simulations shown in Figure 3. This suggests that self-reinforcement still plays a major role at shorter time scales but is negated by the eventual trapping of particles in the rest state. This type of behaviour is important in intracellular transport, where organelles transition between superdiffusion and subdiffusion at different time scales [46,8].
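A small numerical sketch makes the subdiffusive scaling of (15) concrete. The closed-form prefactor below is our reading of the asymptotic result (an assumption of this sketch); the essential, model-independent point is that μ2 ∝ t^β, so doubling the time multiplies the mean-square displacement by 2^β < 2:

```python
from math import gamma

def mu2_longtime(t, beta, nu, lam, tau0, r0=1/3):
    """Long-time second moment ~ t^beta (prefactor is our reading of Eq. (15));
    the t**beta scaling, not the constant, is the key feature."""
    return 2 * (1 - r0) * nu**2 * t**beta / (lam**2 * tau0**beta * gamma(1 + beta))

# Doubling the observation time multiplies the MSD by 2**beta < 2: subdiffusion.
ratio = mu2_longtime(200.0, 0.6, 1.0, 1.0, 1.0) / mu2_longtime(100.0, 0.6, 1.0, 1.0, 1.0)
```

For β = 0.6 this ratio is 2^0.6 ≈ 1.52, independent of ν, λ, τ0 and r0, which is why the anomalous exponent can be read off a log-log plot of simulated moments regardless of the prefactor.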
To understand intuitively what occurs when introducing a heavy tailed waiting time for the rest state, we consider two simple cases: a single velocity model and a symmetric two velocity model both with a non-Markovian rest state, and derive their second moments in the long time limit.

Single active state model
As a first example, consider the simplest possible case where there is only one active state with velocity ν and one rest state, such that the system of fractional differential equations describing this random walk is

∂p+/∂t + ν ∂p+/∂x = −λ p+ + i(x, t),
∂p0/∂t = λ p+ − i(x, t),     (16)

with i(x, t) = τ0^{−β} 𝒟_t^{1−β} p0(x, t) as before. This simple fractional model can be used to describe the movement of intracellular organelles in one direction, interrupted by rests with Mittag-Leffler distributed residence times.
From (16), we obtain a system of fractional differential equations describing the second moment

dμ2+/dt = −λ μ2+ + 2ν μ1+ + τ0^{−β} 𝒟_t^{1−β} μ20,
dμ20/dt = λ μ2+ − τ0^{−β} 𝒟_t^{1−β} μ20,     (17)

where μ2+ = ∫_{−∞}^{∞} x² p+ dx, μ1+ = ∫_{−∞}^{∞} x p+ dx and the other symbols were defined in (9). Adding together the equations in (17), we find

dμ2/dt = 2ν μ1+,     (18)

where μ2 = μ2+ + μ20. In order to find μ1+, we again use (16) to obtain

dμ1+/dt = −λ μ1+ + ν N+ + τ0^{−β} 𝒟_t^{1−β} μ10,     (19)

where N+ = ∫_{−∞}^{∞} p+ dx and μ10 = ∫_{−∞}^{∞} x p0 dx. Recall that our main objective is to find μ2, for which we need μ1+, but from (19) it is clear that we also need to find N+. So, we integrate (16) to obtain

dN+/dt = −λ N+ + τ0^{−β} 𝒟_t^{1−β} N0,
dN0/dt = λ N+ − τ0^{−β} 𝒟_t^{1−β} N0.     (20)

To derive equations (17), (19) and (20), we have used the fact that p+(x, t) = p0(x, t) = 0 and ∂p+/∂x = ∂p0/∂x = 0 as x → ±∞. This is because p+ and p0 are probability density functions that must be normalizable and, additionally, we know that this random walk propagates with finite speed from some initial position. It is clear that dealing with the Riemann-Liouville derivative in Laplace space will be far easier than attempting to solve (18), (19) and (20) directly. For initial conditions, we assume that the random walk starts in the active state at x = 0 at t = 0, such that p+(x, 0) = δ(x) and p0(x, 0) = 0. In a similar way to deriving (11), using the initial conditions in conjunction with the Laplace transforms of (18), (19) and (20), we can obtain the asymptotic second moment in Laplace space (21). Finally, taking the inverse Laplace transform yields the long-time second moment (22). Using an analogous procedure, the first moment for this model can be calculated as μ1(t) ∼ ν t^β / (λ τ0^β Γ(β + 1)). This draws parallels with the fractional Poisson process [47,48,49,50,51,52,53], which has exactly the same time dependence for the first and second moments. In fact, closely examining (16), we can see that the underlying stochastic process for the single active state model is the fractional Poisson process. In (16), the random walk waits in the rest state for a Mittag-Leffler distributed random time and then travels with finite velocity ν for an exponentially distributed random time.
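The renewal description in the last sentence is easy to simulate directly. The sketch below is our own minimal Monte Carlo of the single active state process: each walker alternates a Mittag-Leffler rest and an exponential run at speed ν (we start in the rest state for simplicity, an assumption that does not affect the long-time scaling), and the mean displacement grows sublinearly, roughly like t^β:

```python
import numpy as np

def simulate_single_state(T, n_walkers, beta, tau0, nu, lam, seed):
    """Mean displacement at time T for the single-active-state renewal process:
    Mittag-Leffler rest, then exponential run of rate lam at speed nu, repeat."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)
    for i in range(n_walkers):
        t = 0.0
        while t < T:
            # Mittag-Leffler rest via the Kozubowski-Rachev inversion formula
            u, v = rng.uniform(), rng.uniform()
            rest = -tau0 * np.log(u) * (
                np.sin(beta * np.pi * (1 - v)) / np.sin(beta * np.pi * v)
            ) ** (1 / beta)
            t += rest
            if t >= T:
                break                              # still resting at time T
            run = rng.exponential(1.0 / lam)       # active-run duration
            x[i] += nu * min(run, T - t)           # truncate the final run at T
            t += run
    return x.mean()

m100 = simulate_single_state(100.0, 2000, 0.5, 1.0, 1.0, 1.0, seed=7)
m400 = simulate_single_state(400.0, 2000, 0.5, 1.0, 1.0, 1.0, seed=7)
# Quadrupling the time should roughly double the mean (t^0.5), not quadruple it.
```

The ratio m400/m100 stays well below the ballistic value 4, illustrating the fractional-Poisson-like growth of the first moment.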

Bi-directional transport model
The second example we will consider is an extension of the first, adding another active state with velocity −ν. This model is ideal for bi-directional intracellular transport [54] with intermittent resting for power-law distributed times. The system of equations that describes this random walk is

∂p+/∂t + ν ∂p+/∂x = −λ p+ + i(x, t)/2,
∂p−/∂t − ν ∂p−/∂x = −λ p− + i(x, t)/2,     (23)
∂p0/∂t = λ(p+ + p−) − i(x, t).

In order to derive an expression for the second moment, we combine the equations in (23), as we did for (1) to obtain (7). Doing this, we find

∂p/∂t + ∂J/∂x = 0,
∂J/∂t + ν² ∂(p − p0)/∂x = −λJ,     (24)
∂p0/∂t = λ(p − p0) − i(x, t).

Again, we can eliminate J by combining the first two equations in (24) to arrive at the governing equations

∂²p/∂t² + λ ∂p/∂t = ν² ∂²(p − p0)/∂x²,
∂p0/∂t = λ(p − p0) − i(x, t).     (25)

We can obtain equations similar to (9) and (10) using (25):

d²μ2/dt² + λ dμ2/dt = 2ν²(1 − N0),
dN0/dt = λ(1 − N0) − τ0^{−β} 𝒟_t^{1−β} N0.     (26)

Similar to the first example, using the initial conditions p+(x, 0) = δ(x)/2, p−(x, 0) = δ(x)/2 and p0(x, 0) = 0, and taking the asymptotic limit as s → 0, we obtain the Laplace-space moments (27). After substitution, we can find the second moment for the second example in the long time limit as

μ̂2(s) ≃ 2ν² / (λ² τ0^β s^{1+β}).     (28)

Finally, taking the inverse Laplace transform,

μ2(t) ≃ 2ν² t^β / (λ² τ0^β Γ(1 + β)),     (29)

where 0 < β < 1. Again, we find that the power-law rests dominate at long times and generate subdiffusion. For symmetric active states with velocities ±ν, we find that the second moment is purely subdiffusive, unlike the first example. In the first example, there was no fractional diffusion limit that could be taken. However, in this second example, the fractional diffusion limit exists, which means that the system can be approximated accurately by the fractional diffusion equation

∂p/∂t = D_β 𝒟_t^{1−β} ∂²p/∂x²,     (30)

where the fractional diffusion coefficient has the explicit form

D_β = ν² / (λ² τ0^β).     (31)

Therefore, in the second example, we see that the introduction of a non-Markovian rest state with a divergent mean residence time causes the second moment to be subdiffusive. On the other hand, the first example is inherently different because there is no accurate fractional diffusion equation approximating the system, and so its second moment should not be interpreted in diffusive, subdiffusive or superdiffusive terms. Comparing (15) with (29), we see that the only difference is the constant multiplier 1 − r0.
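The bi-directional result admits a simple CTRW-style cross-check, sketched below under our own reading of the constants: each rest-run cycle contributes a mean-square jump ⟨δx²⟩ = 2ν²/λ² (an exponential run of rate λ at speed ±ν), and the expected number of completed Mittag-Leffler cycles grows like (t/τ0)^β/Γ(1 + β). Multiplying the two reproduces the same t^β law as the fractional diffusion equation with D_β = ν²/(λ²τ0^β):

```python
from math import gamma

def mu2_ctrw(t, beta, nu, lam, tau0):
    """CTRW estimate: mean-square jump per cycle times the mean cycle count."""
    return (2 * nu**2 / lam**2) * (t / tau0)**beta / gamma(1 + beta)

def mu2_fracdiff(t, beta, D_beta):
    """Second moment of the fractional diffusion equation: 2*D_beta*t^beta/Gamma(1+beta)."""
    return 2 * D_beta * t**beta / gamma(1 + beta)

# Our reading of the fractional diffusion coefficient: D_beta = nu^2/(lam^2 * tau0^beta)
D_beta = 1.0**2 / (1.0**2 * 1.0**0.5)
```

The two estimates agree term by term, which is a useful sanity check that the fractional diffusion limit of the bi-directional model carries the correct prefactor.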
The non-Markovian rest state (or equivalently, the Mittag-Leffler distributed waiting times for rests) completely neutralizes the superdiffusion generated by the self-reinforcement in the long time limit. Figure 2 demonstrates this by calculating the second moment from simulated trajectories of the random walk corresponding to the system of PDEs in (8). In the simulations, the state entered after each rest is selected by comparing a uniformly distributed random number W ∈ [0, 1) with the conditional transition probabilities R± = r0 ± α0 Xc/(2νTc), where Xc and Tc are the current position and time of the trajectory.
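The rest-state exit rule can be implemented by partitioning [0, 1) according to the cumulative probabilities. The sketch below is our own minimal version, following the W and R± notation above (function and variable names are ours):

```python
def next_state(W, Xc, Tc, nu, alpha0, r0=1/3):
    """Choose the next state after a rest from a single uniform draw W in [0, 1).

    R+ and R- follow the text: R± = r0 ± alpha0*Xc/(2*nu*Tc), with Xc, Tc the
    walker's current position and elapsed time.  Returns +1, -1 or 0 for the
    positive-velocity, negative-velocity and rest states respectively.
    """
    bias = alpha0 * Xc / (2 * nu * Tc)
    R_plus, R_minus = r0 + bias, r0 - bias
    if W < R_plus:
        return +1          # switch to the positive-velocity state
    elif W < R_plus + R_minus:
        return -1          # switch to the negative-velocity state
    return 0               # remain in the rest state

state = next_state(W=0.9, Xc=0.0, Tc=1.0, nu=1.0, alpha0=0.6)
```

Because R+ + R− + r0 = 1, the three sub-intervals of [0, 1) exactly exhaust the draw, and a positive Xc enlarges the interval mapped to the positive-velocity state, which is the self-reinforcement mechanism in action.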
All simulations were performed using Python3. To reduce execution time, the 'Numba' package and the 'multiprocessing' package were used for JIT compilation and CPU parallelization, respectively. Figure 2 confirms the analytical result in (15), which predicted subdiffusion for the long time limit regardless of the strength of self-reinforcement, in this case α0 = 0.6. Interestingly, Figure 3 shows that if τ0 is small, then superdiffusion is possible for intermediate times.
This is further demonstrated by the PDFs shown in Figure 4. Clearly, for small values of τ0, the advection caused by self-reinforcement is dominant, leading to a PDF skewed towards positive velocities. However, for larger τ0, the PDF reverts to the Laplacian distribution expected for subdiffusive random walks. This finding is intriguing, as self-reinforcement places a greater weight on the role of the 'characteristic' scale τ0 of the power-law distributed resting times. The fact that transient superdiffusion can occur at intermediate times in the presence of heavy-tailed resting times is suggestive of the power of this model to describe natural phenomena with time-varying anomalous exponents.

Conclusion and Summary
In this paper, we formulate a persistent random walk model for the stochastic transport of particles with self-reinforced directionality and a non-Markovian rest state with Mittag-Leffler distributed residence times. To achieve this, we derive a system of hyperbolic PDEs with a non-local switching term involving the Riemann-Liouville fractional derivative. To investigate the nature of this random walk model, we derive a fractional differential equation for the second moment. We demonstrate analytically and numerically that the introduction of anomalous rests ensures subdiffusion in the long time limit. However, transient superdiffusion is observed for intermediate times, which is also a feature in the intracellular transport of organelles. We further corroborate these results by showing the PDFs of the random walk positions, which exhibit Laplacian distributions in the long time limit but skewed bimodal distributions for intermediate times.