
Article

A Semi-Deterministic Random Walk with Resetting

1 Instituto Universitario Física y Matemáticas, University of Salamanca, Plaza Merced s/n, 37008 Salamanca, Spain
2 Departament de Física de Matèria Condensada, University of Barcelona, Martí i Franquès 1, E-08028 Barcelona, Spain
* Author to whom correspondence should be addressed.
Entropy 2021, 23(7), 825; https://doi.org/10.3390/e23070825
Received: 4 June 2021 / Revised: 22 June 2021 / Accepted: 23 June 2021 / Published: 28 June 2021
(This article belongs to the Special Issue New Trends in Random Walks)

Abstract

We consider a discrete-time random walk $(x_t)$ which, at random times, is reset to the starting position and performs a deterministic motion between resets. We show that the quantity $\Pr(x_{t+1}=n+1 \mid x_t=n)$ as $n\to\infty$ determines whether the system is averse, neutral or inclined towards resetting, and that it also classifies the stationary distribution. Double-barrier probabilities, first passage times and the distribution of the escape time from intervals are determined.

1. Preliminaries

In a previous paper [1], we introduced the Sisyphus random walk as an infinite Markov chain that moves on the state space $\mathbb{N}=\{0,1,2,\dots\}$ and that, at every step, can either jump one unit rightward or return to the initial state, from where it is restarted. The system was named after Sisyphus, the king of Ephyra, who was condemned to roll a heavy stone uphill in an endless cycle.
Here, we generalize the above idea and consider a random walk on the integers $(x_t)_{t\in\mathbb{N}}$ whose dynamics alternates deterministic linear motion with resets that drive the system back to the starting point at the random times $(t_n)_{n\ge 1}$. At every clock tick, $|x_t|$ either increases by one unit or returns to the ground state, whereupon the evolution continues. Such resetting occurs through an independent mechanism superimposed onto the original semi-deterministic evolution. Once $(x_t)$ is reset to the origin at $t_1$, it begins the evolution anew from scratch, evolving deterministically between resets.
Using translational invariance, we can suppose $x_0=0$ with no loss of generality. Concretely, starting from $x_0=0$, three possibilities exist for the future position $x_1$: the system may remain at $x_1=0$, provided a reset occurs at $t=1$; otherwise, it moves one unit to the right with probability $\rho$ or to the left with probability $\bar\rho=1-\rho$. In addition, if the system has wandered into the positives, so that at a certain time $t\ge 0$ we have $x_t>0$ (respectively, $x_t<0$), then, at time $t+1$, it may either be reset to the origin, $x_{t+1}=0$, with arbitrary probability, or else increase (respectively, decrease) by one unit to $x_{t+1}=x_t+1$ (respectively, $x_{t+1}=x_t-1$).
Such apparent simplicity is misleading, as this simple evolution law can exhibit surprisingly complex and rich behavior. Indeed, at each site, we allow arbitrary probabilities for the random walk to reset to the origin and, additionally, the possibility to move into both the positive and the negative integers. The only restriction on this general dynamics is the requirement that $(x_t)_{t\in\mathbb{N}}$ be a Markov chain. The resulting system is a natural, non-trivial generalization of that of [1], which is recovered when the reset probability is independent of the location and $\rho=1$.
In a different setting, such a system may be used as an idealized model of the random dynamics of a "mobile" in a trap: say, an agent trying to climb a ladder or wall stepwise given that, at every step, there is a probability of slipping to the bottom and having to restart. Here, the natural questions are the determination of the location probability and of the expected time to escape the trap.
A related mechanism, Sisyphus cooling, was proposed by Claude Cohen-Tannoudji in certain optical contexts: an atom may climb up a potential hill until, suddenly, it is returned to some ground state, where it can restart anew. The hallmark of such systems is the possibility of displaying "back-to-square-one" behavior, a feature common in real-life systems. Indeed, the study of stochastic processes subject to random resets is a problem that has attracted great interest in recent years, after the seminal works of Manrubia and Zanette [2] and Evans and Majumdar [3]. Presently, the dynamics of systems with resets is the subject of intense study; see the recent review [4]. Other mechanisms for random walks that are suddenly refreshed to the starting position are considered in [5,6,7]. Brownian motion with resets is considered in [3,8], while in [9] the propagator of Brownian motion under time-dependent resetting is obtained (see also [10] for further elaboration). In [11], these ideas are applied to the case of a compound Poisson process with positive jumps and constant drift; further elaboration appears in [12]. Reset mechanisms have also been thoroughly applied to search strategies in mathematical and physical contexts, as well as to behavioral ecology; see [10,13,14,15,16,17,18]. Remarkably, strategies that incorporate resets into pure search are advantageous in certain contexts in ecology, biophysics and molecular dynamics [19,20,21,22]. A generalization of the Kardar–Parisi–Zhang (KPZ) equation that describes fluctuating interfaces and polymers under resetting is covered in [23]. Dynamical systems with resets have also been used as proxies of the classical integrate-and-fire models of neuron dynamics; see [24,25]. In the context of Lévy flights with resetting, see the interesting papers [26,27]. For other applications, see also the recent papers [28,29,30,31,32,33,34,35].
As mentioned, the main aim of this paper is to study the main features of the semi-deterministic random walk with resets $(x_t)$, $t=0,1,\dots$. The evolution rules for this random walk are described in Section 2. We then study the propensity of the system towards resetting. According to this important property, we call systems reset-averse, neutral or reset-inclined, and we characterize them in terms of the transition probabilities and of the behavior of $\Pr(x_{t+1}=n+1 \mid x_t=n)$ as $n\to\infty$. In Section 3, we study the stationary distribution that the system approaches for large times. Section 4 considers first-passage problems and, in particular, two-sided exit probabilities; concretely, given levels $a,b\in\mathbb{N}$, we study the probability that $x$ reaches $a>0$ before having reached $-b$, as well as the distribution of the escape time. First passage times (FPT) also play a key role in statistical decision models and in devising optimal strategies for seeking information; the rate at which a Brownian particle, under the influence of a metastable potential, escapes from a potential well, the so-called Kramers problem, is also a critical subject in the study of polymers [36].
Under the simplest choice, $\rho=1$ and $q_n := \Pr(x_{t+1}=n+1 \mid x_t=n) = q_1$ constant, the distribution of the FPT to level $k\ge 1$ is that of the number of trials required in an unfair coin-toss to obtain $k$ consecutive successes, a classical problem in probability. Even for $k=2$, the distribution of such a problem is not trivial.

2. The Model

The evolution rules for the random walk $(x_t)$, $t=0,1,\dots$, are as follows. Let $x_0 := 0$ be the initial position. We suppose that, if $x_t=0$ for some $t\ge 0$, then the random walk satisfies
$\Pr(x_{t+1}=n \mid x_t=0) = \bar q_1\,\delta_{n0} + q_1\rho\,\delta_{n1} + q_1\bar\rho\,\delta_{n,-1}, \qquad n\in\mathbb{Z}$ (1)
We denote by $q_1 = \Pr(x_{t+1}\neq 0 \mid x_t=0) \in (0,1)$ the probability that, starting from zero, the system moves away from the origin at the next instant, and by $\rho := \Pr(x_{t+1}=1 \mid x_t=0,\ x_{t+1}\neq 0) \in [0,1]$ the probability that, if the system abandons the origin at time $t$, it goes to position $x_{t+1}=1$. To ease notation, for any value $p$, we set $\bar p := 1-p$; in particular, $\bar q_1 := 1-q_1$. Besides, $\delta_{nk}$ is the Kronecker delta.
Further, we suppose that the random walk $(x_t)$, $t=0,1,\dots$, is a Markov chain where, if $x_t \equiv n \neq 0$, the only allowed transitions are either to site $n+\operatorname{sign}(n)$, if no reset occurs, which happens with probability $q_{|n|+1}$, or else to $\{0\}$, when a reset occurs, with probability $1-q_{|n|+1}$. Here, the sequence $(q_n)$ satisfies $0\le q_n\le 1$ for all $n$. It follows that the chain has the transition rules
$\Pr(x_{t+1}=m \mid x_t=n) = \begin{cases} q_{n+1}, & m=n+1 \\ 1-q_{n+1}, & m=0 \end{cases}, \qquad t\ge n>0$ (2)
$\Pr(x_{t+1}=m \mid x_t=n) = \begin{cases} q_{|n|+1}, & m=n-1 \\ 1-q_{|n|+1}, & m=0 \end{cases}, \qquad t\ge -n>0$ (3)
$\Pr(x_{t+1}=m \mid x_t=0) = \begin{cases} \rho\, q_1, & m=1 \\ \bar\rho\, q_1, & m=-1 \\ 1-q_1, & m=0 \end{cases}$ (4)
and 0 otherwise. We also suppose that the infinite product with general term $q_n$ satisfies
$\lim_{n\to\infty} \prod_{j=1}^{n} q_j = 0; \qquad \text{equivalently,} \qquad \sum_{j=1}^{\infty} (1-q_j) = \infty$ (5)
This mild requirement does not imply that $\lim_{n\to\infty} q_n = 0$ (see Equation (12) below).
The model considered in [1] is recovered by assuming $\rho=1$ and that the jump probability is constant: $q_1 = q_2 = \dots = q_n = \dots$. Figure 1 displays a typical sample path.
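The dynamics (2)–(4) is straightforward to simulate. The following minimal Python sketch (function and variable names are ours, not from the paper; the sequence $q_n$ is passed in as a function) generates a sample path like the one in Figure 1:

```python
import random

def step(x, q, rho):
    """One transition of the semi-deterministic walk with resetting.

    q(n) is the no-reset probability q_n of Eqs. (2)-(4); rho is the
    probability of an upward first step away from the origin.
    """
    if x == 0:
        if random.random() < q(1):                   # leave the origin...
            return 1 if random.random() < rho else -1
        return 0                                     # ...or reset at once
    if random.random() < q(abs(x) + 1):              # no reset: move outward
        return x + 1 if x > 0 else x - 1
    return 0                                         # reset to the origin

def simulate(T, q, rho, seed=0):
    """Return a sample path x_0, ..., x_T starting from the origin."""
    random.seed(seed)
    path, x = [0], 0
    for _ in range(T):
        x = step(x, q, rho)
        path.append(x)
    return path

# reset-neutral chain: q_n = 0.7 for all n, upward bias rho = 0.8
path = simulate(50, q=lambda n: 0.7, rho=0.8)
```

At every tick the path either resets to 0 or moves one unit away from the origin, which is the defining semi-deterministic feature.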

2.1. Reset Times

We denote by $t_1$ the random time at which the first reset happens, and we consider its probability distribution $p_n := \Pr(t_1=n)$, $n=1,\dots,\infty$. Similarly, we denote by $t_k$ the random time at which the $k$-th reset happens. Note that, for $n=1,2,\dots$, the first reset takes place at time $n$ if no reset has occurred at any previous time, so that $|x_1|=1,\dots,|x_{n-1}|=n-1$ and $x_n=0$. Thus, we have the transitions $\{0\}\mapsto\{1\}\mapsto\dots\mapsto\{n-1\}\mapsto\{0\}$ and the corresponding probability
$p_n := \Pr(t_1=n) = q_1 \cdots q_{n-1}\, \bar q_n$ (6)
Note that $t_1$ is a proper random variable in view of Equation (5).
The following representation clarifies the meaning and the different roles of $(p_n)$ and $(q_n)$:
$p_n := \Pr(t_1=n) = \Pr(x_{t+n}=0,\ x_{t+j}\neq 0,\ 0<j<n \mid x_t=0)$
and
$\bar q_n = \Pr(x_{t+n}=0 \mid x_t=0,\ x_{t+j}\neq 0,\ 0<j<n)$
We now relate both probabilities. We introduce recursively a sequence $(\beta_n)$ via $\beta_0 \equiv 1$ and $\beta_n := q_1 \cdots q_n$, $n=1,2,\dots$. Note then that
$p_n = q_1 \cdots q_{n-1}\, \bar q_n = \beta_{n-1} - \beta_n$
This can be inverted as
$\beta_n = p_{n+1} + p_{n+2} + \dots = \bar F_{t_1}(n) = 1 - F_{t_1}(n)$
where $F_{t_1} \equiv F$ is the cumulative distribution function (cdf) of $t_1$ and $\bar F(n) := 1-F(n)$. Recalling that $\beta_n := q_1 \cdots q_n$, we finally have that Equation (6) can be inverted as
$q_n = \frac{\bar F(n)}{\bar F(n-1)}, \qquad n=1,2,\dots$ (10)
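The pair of relations (6) and (10) can be checked numerically. The short Python sketch below (names are illustrative) computes $(p_n)$ from $(q_n)$ via $\beta_n$ and then recovers $(q_n)$ from the tail function $\bar F$; we exercise it on the example $q_n = n/(n+1)$ of Equation (14):

```python
from itertools import accumulate
import operator

def p_from_q(q, N):
    """p_n = q_1...q_{n-1}(1 - q_n) = beta_{n-1} - beta_n, with beta_n = q_1...q_n."""
    beta = [1.0] + list(accumulate((q(j) for j in range(1, N + 1)), operator.mul))
    return [beta[n - 1] - beta[n] for n in range(1, N + 1)], beta

def q_from_p(p):
    """Invert via Eq. (10): q_n = Fbar(n)/Fbar(n-1), where Fbar(n) = 1 - F(n)."""
    Fbar = [1.0]
    for pn in p:
        Fbar.append(Fbar[-1] - pn)
    return [Fbar[n] / Fbar[n - 1] for n in range(1, len(p) + 1)]

# round trip on q_n = n/(n+1): here p_n = 1/(n(n+1)) and Fbar(n) = 1/(n+1)
q = lambda n: n / (n + 1)
p, beta = p_from_q(q, 20)
```

The round trip recovers $q_n$ to machine precision, illustrating that $(p_n)$ and $(q_n)$ carry the same information.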

2.2. Reset Averse and Reset-Inclined Systems

One of the defining traits of the random walk (2)–(4) is what we call its propensity towards resetting, a measure of how likely it is that the resetting mechanism is triggered as the time from the last reset increases. We say that a system is inclined towards resetting if this probability grows as the distance to the origin increases: $\bar q_n < \bar q_{n+1}$ for all $n$. Intuitively, for a reset-inclined system, the greater the time since the last visit (alternatively, the farther off it is), the more anxious the random walk becomes to return to the origin. If this probability decreases (respectively, remains unchanged), we say that the system is reset-averse (respectively, reset-neutral). Reset-neutral chains correspond to having $q_n = q_{n-1} \equiv q_1 \in (0,1)$ for all $n$. This is the choice considered in [1]. In this case,
$\Pr(t_1=n) = q_1^{\,n-1}(1-q_1), \qquad F(n) = 1-q_1^{\,n}$ (11)
Actually, we are interested in this property for large $n$. We say that a system is ultimately averse, neutral or, respectively, inclined towards resetting if, as the time from the last reset tends to infinity, the probability $(q_n)$ satisfies
$\lim_{n\to\infty} q_n := \lim_{n\to\infty} \Pr(x_{t+n}\neq 0 \mid x_t=0,\ x_{t+j}\neq 0,\ 0<j<n) = \begin{cases} 0, & \text{(inclined)} \\ q_\infty \in (0,1), & \text{(neutral)} \\ 1, & \text{(averse)} \end{cases}$ (12)
The selection $q_n = q_1/n$ corresponds to an ultimately reset-inclined system. Here, we have $\lim_{n\to\infty} q_n = 0$ and
$p_n = \frac{q_1^{\,n-1}}{(n-1)!} - \frac{q_1^{\,n}}{n!}, \qquad \bar F(n) = \frac{q_1^{\,n}}{n!}, \qquad n=1,2,\dots$ (13)
A simple calculation yields $\langle t_1 \rangle = e^{q_1} \le e$, which is bounded with respect to the parameter $q_1$.
Finally, the choice $q_n = n/(n+1)$ corresponds to a reset-averse system. Here, the chain has power-law decaying tails:
$p_n = \frac{1}{n(n+1)} \qquad\text{and}\qquad \bar F(n) = \frac{1}{n+1}$ (14)
The selections (13) and (14) reflect that the probability to commit an error that sends the walker back to square one diminishes (respectively, increases) with every step. This may be put down to a capability to learn or, in contrast, to forget or grow tired with the distance to the origin. Equation (14) corresponds to $q_n = q_{n-1}\left(1+\frac{1}{n^2-1}\right)$, and hence to learning, while, if $q_n = q_{n-1}\left(1-\frac{1}{n}\right)$, then the Zipf law (13), $q_n = q_1/n$, follows.
Equation (14) may also arise due to uncertainty in the relevant parameters. Suppose that we accept the basic model (11) to hold but are ignorant of the value of the parameter $q_1$, and that we regard all values of $q_1$ as equally likely; in this situation, $q_1$ should be assumed to have a Uniform$(0,1)$ distribution. Bayes' theorem then implies that the resulting (marginal) distribution of $t_1$ must be given by Equation (14):
$\Pr(t_1=n) = \int_0^1 \Pr(t_1=n \mid q_1)\, dq_1 = \int_0^1 q_1^{\,n-1}(1-q_1)\, dq_1 = \frac{1}{n(n+1)}$ (15)
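The Beta integral above can be confirmed by direct numerical quadrature. A small sketch (a plain midpoint rule; the function name and grid size are ours) reproduces $1/(n(n+1))$:

```python
def mixture_pn(n, M=100000):
    """Midpoint-rule quadrature of the integral of q^(n-1) * (1-q) over (0,1),
    i.e., the geometric law of Eq. (11) averaged over a Uniform(0,1) prior on q1."""
    h = 1.0 / M
    return h * sum(((k + 0.5) * h) ** (n - 1) * (1 - (k + 0.5) * h) for k in range(M))

for n in (1, 2, 5, 10):
    assert abs(mixture_pn(n) - 1 / (n * (n + 1))) < 1e-6
```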
We next show that the above behavior is ubiquitous, so the reset propensity is directly related to the tail behavior. Indeed, since the sequence $\bar F(n)$ is strictly monotone and $\bar F(n) \downarrow 0$ as $n\to\infty$, the Stolz–Cesàro theorem gives
$q_\infty := \lim_{n\to\infty} q_n = \lim_{n\to\infty} \frac{\bar F(n)}{\bar F(n-1)} = \lim_{n\to\infty} \frac{p_n}{p_{n-1}}$ (16)
Requiring $q_\infty \equiv e^{-\lambda} \in (0,1)$, we obtain that, asymptotically, $(p_n)$ must decay as
$p_n \approx c\, e^{-\lambda n}, \qquad c,\lambda>0, \quad n\to\infty$ (17)
which is the paradigmatic example of an ultimately neutral system. Note that such a $(p_n)$ has medium tails. By contrast, tails of the form
$p_n \approx c\, e^{-\lambda n^{\alpha}}, \quad n\to\infty, \qquad \text{where } c>0,\ \lambda>0,\ \alpha>0$ (18)
give $q_\infty = 1$ if $0<\alpha<1$ and $q_\infty = 0$ if $\alpha>1$. The exponential case $\alpha=1$, i.e., the geometric distribution, marks the crossover between these cases.
Note that slowly decaying, power-law sequences such as
$p_n \approx c/n^{\alpha}, \qquad c>0,\ \alpha>1, \quad n\to\infty$ (19)
also correspond to ultimately reset-averse systems. Thus, heavy tails of the sequence $(p_n)$ correspond to reset-averse systems, while the opposite holds for medium and light (super-exponential) tails such as those in Equations (11) and (18). Table 1 summarizes all the possibilities.
More complicated tails can be handled by noting the behavior of ultimately averse, neutral or inclined reset systems under sums and products. We use $q_\infty := \vartheta$ to denote that $\lim_{n\to\infty} q_n \in (0,1)$ (thus, $\lim_{n\to\infty} q_n = 0,\ \vartheta$ or $1$). Hence, with obvious notation, the sum and product rules for $q_\infty^{(1)}, q_\infty^{(2)}$ read
$0+0=0; \quad 0+\vartheta=\vartheta; \quad 0+1=1; \quad \vartheta+\vartheta=\vartheta; \quad \vartheta+1=1+1=1$
$0\cdot 0 = 0\cdot \vartheta = 0\cdot 1 = 0; \quad \vartheta\cdot\vartheta = \vartheta\cdot 1 = \vartheta; \quad 1\cdot 1 = 1$
where the symbol $\vartheta\cdot\vartheta = \vartheta$ is used to mean that if
$\lim_{n\to\infty} q_n^{(1)} \in (0,1) \ \text{and}\ \lim_{n\to\infty} q_n^{(2)} \in (0,1), \ \text{then}\ \lim_{n\to\infty} q_n^{(1)} q_n^{(2)} \in (0,1)$
As an example, for $0<\nu<1$, consider the hybrid system
$p_n = \frac{n\bar\nu + 1}{n(n+1)}\, \nu^{\,n-1} = O\!\left(\frac{e^{-\lambda n}}{n}\right)$
where $\nu := e^{-\lambda}$, $\lambda>0$. Here, $p_n \equiv p_n^{(1)} p_n^{(2)}$ and the tails display mixed exponential and power-law decay. Hence, $q_\infty = q_\infty^{(1)} \cdot q_\infty^{(2)} = \vartheta \cdot 1 = \vartheta$ corresponds to an ultimately neutral system. This is corroborated by an exact evaluation of $q_n$: Equation (10) yields
$q_n = \frac{n\nu}{n+1} \qquad\text{and}\qquad \lim_{n\to\infty} q_n = \nu \in (0,1)$
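The claims for this hybrid law are easy to verify numerically: with $\beta_n = \nu^n/(n+1)$ one has $p_n = \beta_{n-1}-\beta_n$ and, by Equation (10), $q_n = \beta_n/\beta_{n-1} = n\nu/(n+1) \to \nu$. A quick check (the value of $\nu$ is chosen arbitrarily, names are ours):

```python
nu = 0.6                                    # nu = exp(-lambda), arbitrary in (0,1)
beta = lambda n: nu ** n / (n + 1)          # beta_n = q_1...q_n when q_n = n*nu/(n+1)
p = lambda n: (n * (1 - nu) + 1) / (n * (n + 1)) * nu ** (n - 1)   # hybrid law

for n in range(1, 30):
    # p_n = beta_{n-1} - beta_n, and Eq. (10): q_n = beta_n / beta_{n-1}
    assert abs(p(n) - (beta(n - 1) - beta(n))) < 1e-12
    assert abs(beta(n) / beta(n - 1) - n * nu / (n + 1)) < 1e-12
```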

The fact that the system has a capability for learning (or forgetting, see Equations (11), (13) and (14)) suggests the existence of some "memory" in the dynamics, a fact that casts doubt upon the Markovian nature of the model. Actually, there is some hidden memory, although not in $(x_t)$.
This apparent paradox has interesting implications and can be developed further, as we now explain. At any time $t$, let $N_t$ be the process that counts the number of resets $t_n$ "observed" in the time window $(0,t]$. Thus, $N_t=n$ if $t_n \le t < t_{n+1}$. Given that $N_t=n$, $n\le t$, additional information may be relevant to predict the future: if, say, we also know that $N_{t-1}=n-1$, we infer that a reset has occurred exactly at time $t$, a valuable piece of information. Hence, $(N_t)$ is generically non-Markovian. The only exception is the system (11); there, $N_t$ follows a binomial distribution with success parameter $\bar q_1$:
$\Pr(N_t=n) = \binom{t}{n}\, \bar q_1^{\,n}\, q_1^{\,t-n}, \qquad n=0,1,\dots,t$
By contrast, suppose we know $x_t=i$, $0<i\le t$. This information amounts to saying that the previous reset occurred at $t_n=t-i$ and hence pins down the history of the process after $t_n$, viz. $x_{t-i+j}=j$, $\forall j=0,1,\dots,i$. By contrast, the history of the process previous to $t_n$ remains unknown. Should additional information be provided, it would not help predict the future of $x$, since at $t_n$ the process started anew; hence, only the history after $t_n$ is relevant, and this is already known. Thus, we arrive at the counter-intuitive fact that $(N_t)$ need not be Markovian while $(x_t)$ is. Likewise, the vector chain $(N_t, x_t)$ is Markovian.
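For the constant-$q_1$ system (11), the binomial law for $N_t$ can be checked by simulation, since each step then resets independently with probability $\bar q_1$, regardless of the position. A Monte Carlo sketch (sample sizes and names are ours):

```python
import random

def count_resets(T, q1, rho=0.5):
    """Run the constant-q1 chain of system (11) and count the resets in (0, T]."""
    x, n = 0, 0
    for _ in range(T):
        if random.random() < q1:                  # no reset: move outward
            if x == 0:
                x = 1 if random.random() < rho else -1
            else:
                x += 1 if x > 0 else -1
        else:                                     # reset (also counted at the origin)
            x, n = 0, n + 1
    return n

random.seed(1)
q1, T, M = 0.7, 20, 20000
mean = sum(count_resets(T, q1) for _ in range(M)) / M
# Binomial(T, 1-q1) mean: T*(1-q1) = 6
assert abs(mean - T * (1 - q1)) < 0.1
```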

3. Equilibrium Distribution

Here, we consider the large-time distribution of the random walk. Call $x_\infty \equiv \lim_{t\to\infty} x_t$ the limit of the process and $\pi_n := \Pr(x_\infty=n)$, $n\in\mathbb{Z}$, its distribution. When it exists, it is also a stationary state, in the sense that if the system starts with this distribution then it never abandons it. $(\pi_n)$ satisfies the system
$\sum_{n\in\mathbb{Z}} g_{nm}\, \pi_n = \pi_m, \qquad m\in\mathbb{Z}$ (21)
where $(g_{nm})$ is the transition probability matrix defined in Equations (2)–(4):
$g_{nm} := \Pr(x_{t+1}=m \mid x_t=n)$ (22)
To handle this, we divide the matrix into upper and lower parts, connected only by the rows and columns with index 0, i.e.,
$G = \begin{pmatrix} G_- & 0 \\ 0 & G_+ \end{pmatrix}$ (23)
where $G_-$ is essentially obtained from $G_+$ by reflection and $G_+ = (G_{+,nm})$, $n,m=0,\dots,\infty$, reads (including the $0$ column)
$G_+ = \begin{pmatrix} \bar q_1 & \rho q_1 & 0 & 0 & \cdots \\ \bar q_2 & 0 & q_2 & 0 & \cdots \\ \bar q_3 & 0 & 0 & q_3 & \cdots \\ \vdots & & & & \ddots \\ \bar q_n & 0 & \cdots & 0 & q_n \\ \vdots & & & & \end{pmatrix}$ (24)
By insertion, we find
$\pi_1 = \rho\, q_1\, \pi_0, \qquad \pi_{-1} = \bar\rho\, q_1\, \pi_0$ (25)
along with the recursive system
$\pi_{n+1} = q_{n+1}\, \pi_n, \quad n\ge 1 \qquad\text{and}\qquad \pi_{n-1} = q_{|n|+1}\, \pi_n, \quad n\le -1$ (26)
Solving recursively, we find
$\pi_n = \rho\, \pi_0\, q_1 q_2 \cdots q_n = \pi_0\, \rho\, \bar F(n) \qquad\text{and}\qquad \pi_{-n} = \pi_0\, \bar\rho\, \bar F(n), \qquad n\ge 1$ (27)
Normalization gives $1/\pi_0 = \sum_{n=1}^{\infty} n\, p_n \equiv \langle t_1 \rangle \equiv \mu$, which requires $\langle t_1 \rangle < \infty$; i.e., $(p_n)$ must decay at least as $p_n \approx 1/n^r$, $r>2$. In this case, defining $\rho_n \equiv \rho\, 1_{n>0} + \bar\rho\, 1_{n<0} + \delta_{n0}$, the stationary distribution is
$\pi_n = (\rho_n/\mu)\, \bar F(|n|), \qquad n\in\mathbb{Z}$ (28)
The probability that the random walk has drifted to site $n$ at large times decreases as $\bar F(n)$ does; see Table 1. Hence, for reset-averse systems, $(\pi_n)$ displays heavy tails (which may even decay in a weak, algebraic way), and so there exists a non-negligible probability of finding the system away from the origin at large times. This agrees with the "unwillingness" of the system to return to the origin. The opposite holds for reset-inclined systems, where this probability decreases faster than exponentially. Concretely, for the cases (11) and (13), the stationary distribution is
$\pi_n = \rho_n\, (1-q_1)\, q_1^{\,|n|} \qquad\text{and}\qquad \pi_n = \rho_n\, e^{-q_1}\, \frac{q_1^{\,|n|}}{|n|!}, \qquad n\in\mathbb{Z}$ (29)
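The geometric stationary law just given for the neutral chain (11) can be compared with the long-run occupation frequencies of a simulated path. A sketch (run length, tolerance and names are ours):

```python
import random

def stationary_error(q1, rho, T=400000, seed=7):
    """Compare empirical occupation frequencies of a long run of the neutral
    chain with pi_n = rho_n * (1-q1) * q1**|n|, rho_n = rho, 1-rho or 1."""
    random.seed(seed)
    x, counts = 0, {}
    for _ in range(T):
        if random.random() < q1:                  # no reset
            if x == 0:
                x = 1 if random.random() < rho else -1
            else:
                x += 1 if x > 0 else -1
        else:                                     # reset
            x = 0
        counts[x] = counts.get(x, 0) + 1
    rho_n = lambda n: 1.0 if n == 0 else (rho if n > 0 else 1 - rho)
    pi = lambda n: rho_n(n) * (1 - q1) * q1 ** abs(n)
    return max(abs(counts.get(n, 0) / T - pi(n)) for n in range(-5, 6))

err = stationary_error(q1=0.6, rho=0.7)
assert err < 0.01
```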
Finally, for system (14), there is neither an equilibrium nor a stationary distribution, indicating that the chain spreads out far from the origin and does not settle to an equilibrium.
Written as $\sum_{n\neq m} \pi_n g_{nm} = \sum_{n\neq m} \pi_m g_{mn}$, Equation (21) states that the total probability flux from all states $n$ into $m$, $n\neq m$, must equal the flux from state $m$ into the rest of the states. Hence, $(\pi_n)$ satisfies a global balance equation, viz. Equation (21). This does not imply that $(\pi_n)$ is an equilibrium state, only a limit state; for an equilibrium distribution, it must satisfy the far stronger detailed balance condition $\pi_n g_{nm} = \pi_m g_{mn}$, which does not hold here. This was to be expected, since detailed balance guarantees time-reversibility of the dynamics, a trait that the system at hand clearly does not exhibit, as a simple inspection of the trajectories shows.
In the absence of any information, the maximum entropy principle dictates that the distribution with maximum entropy should be chosen, on the grounds that it is the least informative one. Within the class of distributions with fixed finite mean, it is well known that this corresponds to a geometric distribution. By Equation (28), this implies that the model satisfies Equation (11). We conclude that the reset-neutral chain (11), satisfying $q_n=q_1$ for all $n$ and arbitrary $\rho$, has the remarkable property of being the only selection that corresponds to a truly Markovian situation (i.e., for both $(x_t)$ and the counting process $(N_t)$) and the one that maximizes the entropy for fixed mean.

4. Escape Probabilities

In a classical study, W. Feller [37] showed that most recurrence properties of general diffusion processes can be codified in terms of the two functions that define escape probabilities from an interval $(c,d)$, $c<d$. Given that the process has started from a general $x_0$, $c<x_0<d$, Feller considers the "scale and speed functions", defined as
$s(x_0) = \Pr(\tau_d > \tau_c) \qquad\text{and}\qquad m(x_0) \equiv \langle \min\{\tau_c, \tau_d\} \rangle$
and shows that they solve certain differential equations (see [37] for an overview). Here, for any $a\in\mathbb{R}$, we introduce the hitting time $\tau_a = \inf\{t>0 : x_t=a\}$, which represents the lapse of time necessary to travel from the starting value to $a$.
We perform a similar study here and determine, for given levels $a,b\in\mathbb{N}$, $-b<0<a$, the probability that the random walk $(x_t)$ reaches $a$ before having reached $-b$. Note that, by translational invariance, the case when $(x_t)$ starts from a general $x_0$ reduces immediately to that with $x_0=0$.
We start by noting that, when resets are switched off, the only source of randomness lies in the first displacement of the random walk away from $x=0$; hence, $x_n=n$ for all $n$ if $x_1=1$. In this case, $\tau_{a,b}$, the minimum time needed to hit either $a>0$ or $-b<0$, is a binary random variable that takes values $a$ and $b$ with probabilities $\rho$ and $\bar\rho$. Besides, $\Pr_0(\tau_a < \tau_{-b}) = \rho$.
Obviously, $\tau_{a,b}$ will increase when a reset mechanism is introduced; it is, however, tempting to think that resets do not affect the escape probabilities, namely that $\Pr_0(\tau_a < \tau_{-b}) = \rho$ still holds. However, this is not correct! To dispel such a misinterpretation, note that resets introduce a bias that favors the closest barrier against the farthest one. This is similar to the classical waiting-time paradox, where cycles with very large inter-reset times have a greater probability than smaller ones. Intuitively, if restarts occur very often, the possibility of reaching the farthest barrier diminishes. We now determine this probability.
A very simple argument goes as follows. Consider the probability $\ell_a$ that the random walk reaches $a>0$ before having reached $-b$, given that it hits $a$ or $-b$ in the given cycle. The event that escape occurs in a given cycle, say the first, is
$E := \{x_1>0,\ t_1>a\} \cup \{x_1<0,\ t_1>b\} \equiv E_1 \cup E_2$ (30)
with probability
$\kappa \equiv \Pr(E) = \rho\, \bar F(a) + \bar\rho\, \bar F(b)$ (31)
Hence, the probability that escape occurs via the upper barrier can be evaluated as the probability of $E 1$ conditional on the event E having happened:
$\ell_a = \Pr(\text{escape via } a \mid E) = \Pr(E_1 \mid E_1 \cup E_2) = \frac{\rho\, \bar F(a)}{\kappa}$ (32)
The reasoning when escape occurs at a general given cycle is a bit more involved but does not change the result.
Denote by $\ell_a^0 \equiv \rho$ the corresponding probability when no resets are introduced. Then,
$\ell_a \ge \ell_a^0 \iff \bar F(b) \le \bar F(a) \iff b \ge a$ (33)
which means that resets increase the probability of first hitting the closest barrier, as expected. Further, when $a=b$, Equation (32) yields $\ell_a = \ell_a^0$.
We thus have, for the neutral chain (11), the reset-inclined chain (13) and the reset-averse chain (14), respectively,
$\ell_a = \frac{\rho}{\rho + \bar\rho\, q_1^{\,b-a}}, \qquad \frac{\rho\, b!}{\rho\, b! + \bar\rho\, a!\, q_1^{\,b-a}}, \qquad \left(1 + \frac{\bar\rho\,(a+1)}{\rho\,(b+1)}\right)^{-1}$
$\phantom{\ell_a} = \frac{\rho}{\rho + \bar\rho\, q_1^{\,d}}, \qquad \frac{\rho}{\rho + \bar\rho\, q_1^{\,d}\, a!/(a+d)!}, \qquad \left(1 + \frac{\bar\rho\,(a+1)}{\rho\,(a+d+1)}\right)^{-1}$ (34)
In the second equality, we introduce $d := b-a$, which measures the departure of the problem from symmetry, and we suppose $b\ge a$ for ease of notation.
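The escape probability (32) is easy to test by Monte Carlo for the neutral chain (11), where $\bar F(n)=q_1^n$. In the sketch below (sample size, seed and names are ours), the empirical fraction of upward escapes is compared with $\ell_a = \rho/(\rho+\bar\rho\, q_1^{b-a})$:

```python
import random

def escape_up(a, b, q1, rho, M=100000, seed=3):
    """Fraction of runs of the neutral chain that hit +a before -b."""
    random.seed(seed)
    up = 0
    for _ in range(M):
        x = 0
        while -b < x < a:
            if random.random() < q1:              # no reset
                if x == 0:
                    x = 1 if random.random() < rho else -1
                else:
                    x += 1 if x > 0 else -1
            else:                                 # reset
                x = 0
        up += (x == a)
    return up / M

a, b, q1, rho = 2, 4, 0.7, 0.5
l_a = rho / (rho + (1 - rho) * q1 ** (b - a))     # Eq. (32) with Fbar(n) = q1**n
assert abs(escape_up(a, b, q1, rho) - l_a) < 0.01
```

Even with the symmetric choice $\rho=1/2$, the walk escapes through the nearer barrier about 67% of the time here, illustrating the reset-induced bias.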

5. Escape Times

5.1. Symmetry Properties of First Passage Times

Denote, for a moment, by $\tau^{\rho}_{a,b}$ the FPT to either $a$ or $-b$ when $\Pr(x_{t+1}=1 \mid x_t=0,\ x_{t+1}\neq 0)=\rho$. This quantity has a nice interpretation. Suppose that the model (11) holds and say that a success is scored every time a reset does not happen. Then, $\tau^{1}_{n,n} = \tau^{1}_{n}$ is the time it takes to obtain $n\ge 1$ successes in a row when the probability of individual success is $q_1$, a classical problem in probability. Obviously, if $n=1$, then $\tau^{1}_{1}$, the first time to reach level 1, must have a geometric distribution with parameter $q_1$. However, even for $n=2$, this problem has no easy solution, not even for the mean times.
We note the interesting relation between the asymmetric and symmetric cases.
• If $\ell_a$ is defined in Equation (32), $\ell_b := 1-\ell_a$, and $EX \equiv \langle X \rangle$ denotes the expected value of the random variable $X$, then we have
$E\tau^{\rho}_{a,b} = \ell_a\, E\tau^{\rho}_{a,a} + \ell_b\, E\tau^{\rho}_{b,b}$ (35)
• $\tau^{\rho}_{a,a}$ is independent of $\rho$. Moreover, the distributions in the symmetric case and in the one-sided case are equal; namely, for any $b$,
$\tau^{\rho}_{a,a} = \tau^{1}_{a,b} = \tau^{1}_{a,\infty} \equiv \tau^{1}_{a}; \qquad \tau^{\rho}_{a,\infty} = \tau^{\rho}_{a}$ (36)
Indeed, when the interval is symmetric, the escape time is not influenced by whether resets favor upward or downward flights; hence, Equation (36) must hold. For the sake of comparison, we will see that the approximation (39) overestimates the time it takes to reach the boundaries.
A first approximation is given by $\langle \tau_{a,b} \rangle \approx \langle N \rangle \times \langle t_1 \rangle$, where $N$ is the number of resets until escape. To warm up, we first consider the distribution of $N$.
By the independence of cycles, $N$ has a geometric distribution with exit parameter $\kappa := \Pr(E)$, where $E$ and $\kappa$ are defined in Equations (30) and (31). Thus, $N \sim \mathrm{Geom}(\kappa)$:
$\Pr(N=n) = \kappa\,(1-\kappa)^{n-1}, \qquad n=1,2,\dots$ (37)
$\langle N \rangle = 1/\kappa, \qquad \langle \tau_{a,b} \rangle \approx \langle N \rangle \times \langle t_1 \rangle = \frac{\sum_{n=1}^{\infty} n\, p_n}{\rho\, \bar F(a) + \bar\rho\, \bar F(b)}$ (38)
In particular, for the symmetric case $a = b$,
$\langle \tau_{a,b} \rangle \approx \sum_{n=1}^{\infty} n\, p_n \Big/ \sum_{n=a+1}^{\infty} p_n \ \ge\ a + \sum_{n=1}^{a} n\, p_n \Big/ \sum_{n=a+1}^{\infty} p_n$ (39)
Clearly, this approximation is only reasonable when the system needs a large number of resets prior to exiting the interval, i.e., when $\kappa \approx 0$.

5.2. Mean Exit Time

To study the exact time to hit $a$ or $-b$, we note that, depending on what happens at the first reset $t_1$, there are five mutually exclusive and exhaustive possibilities. These scenarios are:
• (S1) $x 1 > 0$ and $t 1 > a$;
• (S2) $x 1 < 0$ and $t 1 > b$;
• (S3) $x 1 > 0$ and $t 1 ≤ a$;
• (S4) $x_1<0$ and $t_1\le b$;
• (S5) $x_1=0$.
Under scenario (S1), $(x_t)$ hits $a$ before it hits $-b$, with $\tau_{a,b}=a$. Under scenario (S2), $(x_t)$ hits $-b$ before $a$, and $\tau_{a,b}=b$. Scenarios (S3) to (S5) refresh $(x_t)$ to the origin, so the "race" starts again from scratch; hence, $\tau_{a,b} = t_1 + \tau'_{a,b}$, where $\tau'_{a,b}$ is the time that remains until exit once the new cycle starts. This implies that
$\tau_{a,b} = \begin{cases} a & \text{if } x_1>0,\ t_1>a \\ b & \text{if } x_1<0,\ t_1>b \\ t_1 + \tau'_{a,b} & \text{if } 2\le t_1\le a,\ x_1>0, \text{ or } 2\le t_1\le b,\ x_1<0, \text{ or } t_1=1 \end{cases}$ (40)
and
$E\tau_{a,b} = a\,\rho\,\bar F(a) + b\,\bar\rho\,\bar F(b) + \rho\, E(t_1 1_{t_1\le a}) + \bar\rho\, E(t_1 1_{t_1\le b}) + \left(\bar\rho\, F(b) + \rho\, F(a)\right) E\tau'_{a,b}$ (41)
Here, $1_A = 1$ if the event $A$ holds and is 0 otherwise. Noting that $E\tau'_{a,b} = E\tau_{a,b}$ and solving, we finally obtain
$E\tau_{a,b} = \frac{\rho\left[a\,\bar F(a) + E(t_1 1_{t_1\le a})\right] + \bar\rho\left[b\,\bar F(b) + E(t_1 1_{t_1\le b})\right]}{\bar\rho\,\bar F(b) + \rho\,\bar F(a)}$ (42)
If $b\to\infty$, then $b\,\bar F(b)\to 0$, and we recover the mean hitting time to level $a$ as
$E\tau_a = a + \frac{1}{\rho\,\bar F(a)}\left[E(t_1) - \rho\, E\left(t_1 1_{t_1>a}\right)\right]$ (43)
Particularly interesting is the symmetric case $a=b$. Here,
$E\tau^{\rho}_{a,a} = E\tau^{1}_{a} = a + \frac{E(t_1 1_{t_1\le a})}{\bar F(a)} = a + \sum_{n=1}^{a} n\, p_n \Big/ \sum_{n=a+1}^{\infty} p_n$ (44)
Note how this implies equation (35).
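The symmetric-case mean just derived can be cross-checked, for the neutral chain (11), against the closed form $(1-q_1^a)/(\bar q_1 q_1^a)$ obtained by summing the geometric series explicitly. A small Python sketch (names ours):

```python
def mean_exit_exact(a, p, Fbar):
    """Symmetric mean exit time: a + sum_{n<=a} n p_n / Fbar(a)."""
    return a + sum(n * p(n) for n in range(1, a + 1)) / Fbar(a)

q1 = 0.5
p = lambda n: q1 ** (n - 1) * (1 - q1)    # geometric inter-reset law, Eq. (11)
Fbar = lambda n: q1 ** n

for a in (1, 2, 3, 5):
    closed = (1 - q1 ** a) / ((1 - q1) * q1 ** a)   # closed form for the neutral chain
    assert abs(mean_exit_exact(a, p, Fbar) - closed) < 1e-9
```

For $q_1=1/2$ and $a=2$, both expressions give 6, the classical mean number of fair-coin tosses needed for two consecutive heads.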

5.3. Distribution of the Exit Time

Finally, we consider the distribution of $\tau_{a,b}$. We evaluate its generating function
$G(z) = \sum_{n=1}^{\infty} z^n \Pr(\tau_{a,b}=n)$ (45)
by using Equation (40). Here, $z\in\mathbb{C}$, $|z|\le 1$. Recall that
$\tau_{a,b} = a\, 1_{x_1>0,\ t_1>a} + b\, 1_{x_1<0,\ t_1>b} + \left(t_1 + \tau'_{a,b}\right)\left(1_{2\le t_1\le a,\ x_1>0} + 1_{2\le t_1\le b,\ x_1<0} + 1_{t_1=1}\right)$
Note also that
$E\left[z^{\,t_1+\tau'_{a,b}}\, 1_{2\le t_1\le a,\ x_1>0}\right] = E\left[z^{\,t_1}\, 1_{2\le t_1\le a,\ x_1>0}\right] E\left[z^{\,\tau_{a,b}}\right] = \rho\left(\hat p_a(z) - z p_1\right) G_{\tau_{a,b}}(z)$ (46)
where we define the truncated generating function $\hat p_a(z) := \sum_{k=1}^{a} z^k p_k$ (the term $z p_1$ is excluded by the restriction $t_1\ge 2$).
It follows from Equation (40) that $G_{\tau_{a,b}}(z)$ satisfies
$G_{\tau_{a,b}}(z) = E_1 + E_2\, G_{\tau_{a,b}}(z)$ (47)
where
$E_1 := z^a \Pr(x_1>0,\ t_1>a) + z^b \Pr(x_1<0,\ t_1>b) = z^a\, \rho\,\bar F(a) + z^b\, \bar\rho\,\bar F(b)$ (48)
$E_2 := E\left[z^{\,t_1} 1_{2\le t_1\le a,\ x_1>0}\right] + E\left[z^{\,t_1} 1_{2\le t_1\le b,\ x_1<0}\right] + E\left[z^{\,t_1} 1_{t_1=1}\right] = \rho\,\hat p_a(z) + \bar\rho\,\hat p_b(z)$ (49)
Solving for $G_{\tau_{a,b}}(z)$, the generating function finally reads
$G_{\tau_{a,b}}(z) = \frac{z^a\, \rho\,\bar F(a) + z^b\, \bar\rho\,\bar F(b)}{1 - \rho\,\hat p_a(z) - \bar\rho\,\hat p_b(z)}$ (50)
Hence, the mass function of $\tau_{a,b}$ is
$\Pr(\tau_{a,b}=n) = \frac{1}{2\pi i} \oint \frac{G_{\tau_{a,b}}(z)}{z^{\,n+1}}\, dz, \qquad n\ge 1$ (51)
If either $b=a$ (symmetric case) or $\rho=1$ (one-sided case), this simplifies to
$G_{\tau_{a,a}}(z) = \frac{z^a\, \bar F(a)}{1 - \hat p_a(z)}$ (52)
$\Pr(\tau_{a,a}=n) = \frac{\bar F(a)}{2\pi i} \oint \frac{dz}{z^{\,n+1-a}\left(1-\hat p_a(z)\right)}, \qquad n\ge a$ (53)
The FPT to $a$ is recovered by letting $b\to\infty$; then, $\hat p_b(z) \to \hat p(z) := \sum_{n=1}^{\infty} z^n p_n$ and
$G_{\tau_a}(z) = \frac{z^a\, \rho\,\bar F(a)}{1 - \rho\,\hat p_a(z) - \bar\rho\,\hat p(z)}$ (54)
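In practice, the contour-integral inversion amounts to extracting power-series coefficients of the generating function. For the symmetric case (52) under the neutral chain (11), the coefficients of $1/(1-\hat p_a(z))$ obey a finite linear recursion; a sketch (names and truncation order are ours):

```python
def fpt_pmf(a, q1, N):
    """Coefficients of G(z) = z^a Fbar(a) / (1 - p_hat_a(z)), Eq. (52), for the
    neutral chain, obtained by power-series division up to order N."""
    p = [0.0] * (a + 1)
    for k in range(1, a + 1):
        p[k] = q1 ** (k - 1) * (1 - q1)           # truncated inter-reset law
    Fbar_a = q1 ** a
    # c_m = sum_{k=1..min(a,m)} p_k c_{m-k}, c_0 = 1: series of 1/(1 - p_hat_a)
    c = [1.0]
    for m in range(1, N + 1):
        c.append(sum(p[k] * c[m - k] for k in range(1, min(a, m) + 1)))
    # P(tau = n) = Fbar(a) * c_{n-a} for n >= a
    return {a + m: Fbar_a * c[m] for m in range(N + 1)}

pmf = fpt_pmf(a=2, q1=0.5, N=100)
# sanity checks: total mass ~ 1 and P(tau = a) = q1**a
assert abs(sum(pmf.values()) - 1) < 1e-6
assert abs(pmf[2] - 0.25) < 1e-12
```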

5.4. FPT under the Model (11)

If Equation (11) holds, the distribution of $\tau_{a,a}$ simplifies. The truncated generating function reads $\hat p_a(z) = \bar q_1 z \left(1-(q_1 z)^a\right)/(1-q_1 z)$, and the generating function of the exit time becomes
$G_{\tau_{a,a}}(z) = \frac{(q_1 z)^a\,(1-q_1 z)}{1 - z + q_1^{\,a}\,\bar q_1\, z^{\,a+1}}$ (55)
Hence, when $a=1$, we recover $G_{\tau_{1,1}}(z) = q_1 z/(1-\bar q_1 z)$, corresponding to a geometric distribution. Note that
$\Pr(t_1=n) = q_1^{\,n-1}(1-q_1), \qquad \Pr(\tau_{1,1}=n) = \bar q_1^{\,n-1}\, q_1$ (56)
For $a = 2$, we have
$G_{\tau_{2,2}}(z) = \frac{(q_1 z)^2}{1 - \bar q_1 z - q_1 \bar q_1 z^2}$ (57)
If $s_\pm := \bar q_1 \pm \sqrt{\bar q_1^{\,2} + 4 q_1 \bar q_1}$, this can be inverted as
$\Pr(\tau_{2,2}=n) = q_1^2 \sum_{j=0}^{n-2} \binom{n-2-j}{j}\, q_1^{\,j}\, \bar q_1^{\,n-2-j} = q_1^2\, \frac{s_+^{\,n-1} - s_-^{\,n-1}}{2^{\,n-2}\,(s_+ - s_-)}$ (58)
Hence, summing an arithmetico-geometric series, we find, with $\ell := 1/q_1$,
$E\tau_{a,a} = a + \frac{1-(a+1)\,q_1^{\,a} + a\, q_1^{\,a+1}}{\bar q_1\, q_1^{\,a}} = \frac{1-q_1^{\,a}}{\bar q_1\, q_1^{\,a}} = \ell\, \frac{\ell^{\,a}-1}{\ell-1}$ (59)
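The two forms of the distribution of $\tau_{2,2}$ above, the combinatorial sum and the $s_\pm$ closed form, can be checked against each other numerically (the value of $q_1$ and the names are ours):

```python
from math import comb, sqrt

q1 = 0.3
qb = 1 - q1
disc = sqrt(qb ** 2 + 4 * q1 * qb)
s_plus, s_minus = qb + disc, qb - disc

def pmf_sum(n):
    """Combinatorial form: q1^2 * sum_j C(n-2-j, j) q1^j qb^(n-2-j)."""
    # math.comb(k, j) returns 0 when j > k, so the vanishing terms drop out
    return q1 ** 2 * sum(comb(n - 2 - j, j) * q1 ** j * qb ** (n - 2 - j)
                         for j in range(0, n - 1))

def pmf_roots(n):
    """Closed form via the roots s_plus, s_minus."""
    return q1 ** 2 * (s_plus ** (n - 1) - s_minus ** (n - 1)) / (2 ** (n - 2) * (s_plus - s_minus))

for n in range(2, 25):
    assert abs(pmf_sum(n) - pmf_roots(n)) < 1e-12
```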
Let $\xi_a$ denote the number of trials until $a$ consecutive successes first occur in a sequence of Bernoulli trials (or unfair coin-tosses) with probability of individual success $q_1$. This problem does not have a simple answer except when $a=1$; clearly, $\xi_1 \sim \mathrm{Geom}(q_1)$.
To handle the case $a\ge 2$, we note that the distribution of $\xi_a$ is that of the FPT to level $a$ when $\rho=1$; it is recovered by letting $b\to\infty$ (see Equation (36)) and using Equation (52):
$\xi_a = \tau^{1}_{a,\infty} = \tau^{1}_{a} = \tau^{\rho}_{a,a} \qquad\text{and}\qquad G_{\xi_a}(z) = \frac{(q_1 z)^a\,(1-q_1 z)}{1 - z + q_1^{\,a}\,\bar q_1\, z^{\,a+1}}$ (60)
The mean number of trials until the first run of $a$ consecutive successes is
$a+\dfrac{E\!\left(t_{1}\,\mathbf{1}_{\{t_{1}\le a\}}\right)}{\bar F(a)}$
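This classical result is easy to verify by direct simulation; for a fair coin ($q_1=1/2$) and $a=3$ the mean reduces, via $\ell+\ell^{2}+\cdots+\ell^{a}$ with $\ell=1/q_1$, to the familiar value $2+4+8=14$. A sketch (the function name and parameters are illustrative):

```python
import random

def trials_until_run(a: int, q1: float, rng: random.Random) -> int:
    """Number of Bernoulli(q1) trials until the first run of a straight successes."""
    run = trials = 0
    while run < a:
        trials += 1
        run = run + 1 if rng.random() < q1 else 0
    return trials

rng = random.Random(7)
a, q1 = 3, 0.5
ell = 1.0 / q1
exact = sum(ell**k for k in range(1, a + 1))   # = 2 + 4 + 8 = 14 for a fair coin
n = 100_000
emp = sum(trials_until_run(a, q1, rng) for _ in range(n)) / n
print(emp, exact)  # the empirical mean should be close to 14
```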

6. Discussion

We considered a discrete-time random walk $(x_{t})$ which, at random times, is reset to the starting position and performs a deterministic motion between resets. We discussed how to interpret the property that the system is averse, neutral or inclined towards resetting, and showed that this behavior is critical for the existence and properties of the stationary distribution. We obtained double barrier probabilities, first passage times and the distribution of the escape time from intervals. Finally, we pointed out that the distribution of the FPT to level $k\ge 1$ solves a classical problem in probability, namely, the number of trials required to obtain $k$ successes in a row in a sequence of Bernoulli trials (biased coin tosses).

Author Contributions

Investigation, J.V. and J.A.V.; conceptualization, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge support from the Spanish Agencia Estatal de Investigación and the European Fondo Europeo de Desarrollo Regional (AEI/FEDER, UE) under contract no. PID2019-106811GB-C33. MM thanks the Catalan Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR), contract no. RED2018-102518-T.


Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Montero, M.; Villarroel, J. Directed random walk with random restarts: The Sisyphus random walk. Phys. Rev. E 2016, 94, 032132.
2. Manrubia, S.C.; Zanette, D.H. Stochastic multiplicative processes with reset events. Phys. Rev. E 1999, 59, 4945–4948.
3. Evans, M.R.; Majumdar, S.N. Diffusion with stochastic resetting. Phys. Rev. Lett. 2011, 106, 160601.
4. Evans, M.R.; Majumdar, S.N. Stochastic resetting and applications. J. Phys. A Math. Theor. 2020, 53, 193001.
5. Janson, S.; Peres, Y. Hitting times for random walks with restarts. SIAM J. Discret. Math. 2012, 26, 537–547.
6. Méndez, V.; Campos, D. Characterization of stationary states in random walks with stochastic resetting. Phys. Rev. E 2016, 93, 022106.
7. Majumdar, S.N.; Sabhapandit, S.; Schehr, G. Random walk with random resetting to the maximum position. Phys. Rev. E 2015, 92, 052126.
8. Evans, M.R.; Majumdar, S.N. Diffusion with optimal resetting. J. Phys. A Math. Theor. 2011, 44, 435001.
9. Pal, A.; Kundu, A.; Evans, M.R. Diffusion under time-dependent resetting. J. Phys. A Math. Theor. 2016, 49, 225001.
10. Falcao, R.; Evans, M.R. Interacting Brownian motion with resetting. J. Stat. Mech. 2017, 2017, 053301.
11. Montero, M.; Villarroel, J. Monotonous continuous-time random walks with drift and stochastic reset events. Phys. Rev. E 2013, 87, 012116.
12. Montero, M.; Masó-Puigdellosas, A.; Villarroel, J. Continuous-time random walks with reset events. Historical background and new perspectives. Eur. Phys. J. B 2017, 90, 176.
13. Montanari, A.; Zecchina, R. Optimizing searches via rare events. Phys. Rev. Lett. 2002, 88, 178701.
14. Evans, M.R.; Majumdar, S.N.; Mallick, K. Optimal diffusive search: Non equilibrium resetting versus equilibrium dynamics. J. Phys. A Math. Theor. 2013, 46, 185001.
15. Boyer, D.; Solis-Salas, C. Random walks with preferential relocations to places visited in the past and application to biology. Phys. Rev. Lett. 2014, 112, 240601.
16. Kusmierz, L.; Majumdar, S.N.; Sabhapandit, S.; Schehr, G. First order transition for the optimal search time of Lévy flights with resetting. Phys. Rev. Lett. 2014, 113, 220602.
17. Bhat, U.; De Bacco, C.; Redner, S. Stochastic search with Poisson and deterministic resetting. J. Stat. Mech. 2016, 2016, 083401.
18. Campos, D.; Bartumeus, F.; Méndez, V.; Espadaler, X. Variability in individual activity bursts improves ant foraging success. J. R. Soc. Interface 2016, 125, 20130859.
19. Reuveni, S.; Urbach, M.; Klafter, J. Role of substrate unbinding in Michaelis-Menten enzymatic reactions. Proc. Natl. Acad. Sci. USA 2014, 111, 4391–4396.
20. Rotbart, T.; Reuveni, S.; Urbakh, M. Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem. Phys. Rev. E 2015, 92, 060101.
21. Reuveni, S. Optimal stochastic restart renders fluctuations in first passage times universal. Phys. Rev. Lett. 2016, 116, 170601.
22. Pal, A.; Eliazar, I.; Reuveni, S. First passage under restart with branching. Phys. Rev. Lett. 2019, 122, 020602.
23. Gupta, S.; Majumdar, S.N.; Schehr, G. Fluctuating interfaces subject to stochastic resetting. Phys. Rev. Lett. 2014, 112, 220601.
24. Lindner, B.; Chacron, M.J.; Longtin, A. Integrate-and-fire neurons with threshold noise. Phys. Rev. E 2005, 72, 021911.
25. Villarroel, J.; Montero, M. Continuous-time ballistic process with random resets. J. Stat. Mech. 2018, 140, 78–130.
26. Boyer, D.; Pineda, I. Slow Lévy flights. Phys. Rev. E 2016, 93, 022103.
27. Kusmierz, L.; Gudowska-Nowak, E. Optimal first-arrival times in Lévy flights with resetting. Phys. Rev. E 2015, 92, 052127.
28. Whitehouse, J.; Evans, M.R.; Majumdar, S.N. Effect of partial absorption on diffusion with resetting. Phys. Rev. E 2013, 87, 022118.
29. Durang, X.; Henkel, M.; Park, H. Statistical mechanics of the coagulation-diffusion process with a stochastic reset. J. Phys. A Math. Theor. 2014, 47, 045002.
30. Evans, M.R.; Majumdar, S.N. Diffusion with resetting in arbitrary spatial dimension. J. Phys. A Math. Theor. 2014, 47, 285001.
31. Nagar, A.; Gupta, S. Diffusion with stochastic resetting at power-law times. Phys. Rev. E 2016, 93, 060102.
32. Boyer, D.; Evans, M.R.; Majumdar, S.N. Long time scaling behaviour for diffusion with resetting and memory. J. Stat. Mech. 2017, 2017, 023208.
33. Pal, A. Diffusion in a potential landscape with stochastic resetting. Phys. Rev. E 2015, 91, 012113.
34. Majumdar, S.N.; Sabhapandit, S.; Schehr, G. Dynamical transition in the temporal relaxation of stochastic processes under resetting. Phys. Rev. E 2015, 91, 052131.
35. Majumdar, S.N.; Meerson, B. Statistics of first-passage Brownian functionals. J. Stat. Mech. 2020, 2020, 023202.
36. Kramers, H.A. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 1940, 7, 284.
37. Feller, W. Diffusion processes in one dimension. Trans. Am. Math. Soc. 1954, 77, 1–3.
Figure 1. A typical sample path of the process, where $t_{1}=5,\ t_{2}=7,\dots$ and $x_{1}=x_{6}=1$.
Table 1. The table summarizes the propensity to resetting in terms of the decay of $p_{n}:=\Pr(t_{1}=n)$ and the equilibrium distribution. In all cases $\lambda>0$.
| $p_{n},\ n\to\infty$ | $\bar F_{t_{1}}(n)$ | $q_{n},\ n\to\infty$ | Propensity | Tails | $Et_{1}$ | $\pi_{n},\ n\to\infty$ |
|---|---|---|---|---|---|---|
| $O(e^{-\lambda n^{\alpha}}),\ \alpha>1$ | $O(e^{-\lambda n^{\alpha}})$ | $0$ | inclined | Super-exp. | $<\infty$ | $O(e^{-\lambda n^{\alpha}})$ |
| $O\!\left((e^{-\lambda}/n)^{n}\right)$ | $O\!\left((e^{-\lambda}/n)^{n}\right)$ | $0$ | inclined | Super-exp. | $<\infty$ | $O\!\left((e^{-\lambda}/n)^{n}\right)$ |
| $O(e^{-\lambda n})$ | $O(e^{-\lambda n})$ | $\in(0,1)$ | neutral | Exp. | $<\infty$ | $O(e^{-\lambda n})$ |
| $O(e^{-\lambda n^{\alpha}}),\ 0<\alpha<1$ | $O(e^{-\lambda n^{\alpha}})$ | $1$ | averse | Sub-exp. | $<\infty$ | $O(e^{-\lambda n^{\alpha}})$ |
| $O(1/n^{\alpha}),\ \alpha>2$ | $O(1/n^{\alpha-1})$ | $1$ | averse | Power-law | $<\infty$ | $O(n^{1-\alpha})$ |
| $O(1/n^{\alpha}),\ 1<\alpha\le 2$ | $O(1/n^{\alpha-1})$ | $1$ | averse | Power-law | $=\infty$ | — |
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.