Article

A Hybrid Model of Elephant and Moran Random Walks: Exact Distribution and Symmetry Properties

Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1709; https://doi.org/10.3390/sym17101709
Submission received: 14 September 2025 / Revised: 5 October 2025 / Accepted: 9 October 2025 / Published: 11 October 2025
(This article belongs to the Section Mathematics)

Abstract

This work introduces a hybrid memory-based random walk model that combines the Elephant Random Walk with a modified Moran Random Walk. The model introduces a sequence of independent and identically distributed random variables with mean 1, representing step sizes. A particle starts at the origin and moves upward with probability $r$ or remains stationary with probability $1-r$. From the second step onward, the particle decides its next action based on its previous movement, repeating it with probability $p$ or taking the opposite action with probability $1-p$. The novelty of our approach lies in integrating a short-memory mechanism with variable step sizes, which allows us to derive exact distributions, recurrence relations, and central limit theorems. Our main contributions include (i) establishing explicit expressions for the moment-generating function and the exact distribution of the process, (ii) analyzing the number of stops through a symmetry phenomenon between repetition and inversion, and (iii) providing asymptotic results supported by simulations.

1. Introduction

We consider a stochastic model involving both the elephant random walk (see, for example, [1] for a description of this random walk) and a modified version of the Moran random walk (see also [1]). In this model, we define a sequence of positive independent and identically distributed (i.i.d.) random variables $(Z_n)_{n\ge1}$, each with expected value 1 (without loss of generality). We also introduce two probabilities, $r$ and $p$.
At time 0, a particle starts at the origin. At time 1, the particle either moves upward by a step of size $Z_1$, with probability $r$, or stays at the origin, with probability $1-r$. From time $n=2$ onward, the particle remembers its action at the previous step $n-1$ (whether it moved up by a step of size $Z_{n-1}$ or remained stationary) and makes a new decision at step $n$: with probability $p$, it repeats the same action as in the previous step, and with probability $1-p$, it does the opposite.
If the particle decides to move upward, it increases its position by a step of size $Z_n$. If it decides to remain stationary (stops), its position does not change.
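The dynamics described above can be simulated directly. The following minimal sketch is our own illustration (the function name is ours, and the exponential choice for the step law is an arbitrary assumption; any positive i.i.d. law with mean 1 works). It returns the final position and the number of stops:

```python
import random

def simulate_hybrid_walk(n, p, r, rng, step=lambda g: g.expovariate(1.0)):
    """Simulate n steps of the hybrid walk; return (position Y_n, stops D_n)."""
    moved = rng.random() < r               # time 1: move up with probability r
    position = step(rng) if moved else 0.0
    stops = 0 if moved else 1
    for _ in range(2, n + 1):
        if rng.random() >= p:              # with probability 1 - p, invert
            moved = not moved              # otherwise repeat the previous action
        if moved:
            position += step(rng)          # upward move of size Z_k (mean 1)
        else:
            stops += 1                     # the particle stays where it is
    return position, stops

rng = random.Random(7)
y, d = simulate_hybrid_walk(1000, p=0.8, r=0.5, rng=rng)
```

With these parameters the proportion of stops `d / n` is typically close to $1/2$, anticipating the asymptotic results below.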
In recent years, several extensions of the elephant random walk have been developed, highlighting the vitality of this topic. For instance, Laulin [2] applied martingale techniques to analyze reinforced ERWs in critical and super-diffusive regimes. More recently, Roy, Takei, and Tanemura [3] investigated phase transitions in ERWs with power-law memory, and studied triangular array settings of ERWs. Abdelkader–Aguech [1] examined models with gradually increasing memory and random step sizes, providing central limit theorems and asymptotic characterizations. These recent studies demonstrate the wide applicability of ERWs, from reinforced mechanisms to non-Markovian memory effects. However, most of the above works focus either on full memory or on gradually expanding memory frameworks. They provide asymptotic results (laws of large numbers, CLTs) but typically do not yield explicit distributions. Moreover, the possibility of combining short-memory effects with random step sizes and a move-or-stop mechanism has not been fully addressed in the literature. Our contribution fills this gap by introducing a hybrid model blending the memory structure of ERWs with a Moran-type mechanism, and by deriving exact distributions alongside asymptotic laws. This not only extends recent developments but also highlights a novel symmetry phenomenon between repetition and inversion of past steps.
The goal is to study the position of the particle after $n$ steps, focusing on the evolution of its movement based on the given probabilities and the random walking behavior at each step. The present model is best viewed as a hybrid of the ERW and the Moran model, rather than a generalization of the former. It combines the memory property of the ERW with the move-or-stop mechanism of a Moran-type process, and is directly inspired by Bercu’s martingale approach [4], which provides key tools for memory-driven stochastic processes. Building on this framework, we introduce variable step sizes and a symmetry between moving and stopping, which allows us to derive exact distributions, recurrence relations, and moment-generating functions. Our work is also related to Gut and Stadtmüller [5,6], who studied structural aspects of ERWs such as the number of zeros and delays. In our setting, the process $D_n$, which counts the number of stops up to time $n$, plays a role analogous to their “number of zeros.” By establishing exact formulas for $D_n$ and linking them to $\hat Y_n$, we extend and complement their results. Finally, the central limit theorems we obtain for both $D_n$ and $\hat Y_n$ parallel those of Gut and Stadtmüller, but within a hybrid framework blending ERW memory with the Moran mechanism. Our model can find concrete applications in both physics and biology:
  • In statistical physics, the parameter $p$ reflects interaction rules in non-Markovian spin systems with long-range correlations, where the decision to repeat or invert a previous move mimics competing interaction mechanisms.
  • In quantum physics [7], the recurrence relations and exact distributions obtained here provide tools to study quantum walks with memory, relevant for quantum computation and the simulation of complex quantum systems.
  • In biology, the stop-or-move mechanism of our model naturally represents the foraging strategies of animals: individuals may either continue in the same direction, invert their movement, or remain stationary to exploit resources, as observed in empirical mobility data.
Compared to earlier works, our contribution is distinct. Gut and Stadtmüller [5,6] analyzed the number of zeros and central limit theorems in the full-memory ERW, whereas we focus on a short-memory hybrid walk with random step sizes. Bercu [4] introduced a martingale approach for ERWs; our work extends this framework by incorporating the Moran-type move-or-stop mechanism and deriving exact distributions through generating function techniques.
The paper is organized as follows. In Section 2, we introduce and describe our model. Section 3 presents our main results. In Section 4, we give some simulations of the processes $D_n$ and $\hat Y_n$. Section 5 is devoted to a detailed description of the process $\hat Z_n$. In Section 6, we analyze the process $D_n$, which represents the number of times, up to time $n$, that the elephant decides not to move. In Section 7, we prove our main result concerning the walk $\hat Y_n$. Finally, in Section 8 we discuss our results and give some perspectives.

2. Definitions and Presentation of the Model

Consider $(Z_n)_{n\ge1}$ an independent and identically distributed (i.i.d.) sequence of random variables with mean equal to 1 and variance $\sigma^2$. Our model is a mix between the elephant random walk and the Moran random walk. At each step $n$, the elephant chooses between two possible moves: either moving upward by a step of size $Z_n$, or remaining stationary.
It starts from 0 at time 0 (i.e., $\hat Y_0=0$). Our model is given by the following system: at time 1,
$$\hat Y_1=\begin{cases}Z_1, & \text{with probability } r,\\ 0, & \text{with probability } 1-r,\end{cases}$$
and for time $n\ge2$, the random walk $\hat Y_n$ remembers only the step $n-1$ and decides to repeat the same action with probability $p$, or to do the opposite with probability $1-p$, where $p\in(0,1)$ and $r\in(0,1)$. In other words, if at time $n-1$ the walk went up by a step of size $Z_{n-1}$, then with probability $p$ it goes up again, by a step of size $Z_n$, and with probability $1-p$ it does not move. If at time $n-1$ the walk did not move, it remains in the same position at time $n$ with probability $p$, and goes up by a step of size $Z_n$ with probability $1-p$. Let $(\hat Z_n)_{n\ge1}$ be the sequence of steps of the walk, defined, for all $k\ge1$, by
$$\hat Z_k=\begin{cases}Z_k, & \text{if, at step } k\text{, the walk goes up},\\ 0, & \text{if, at step } k\text{, the walk stays where it is.}\end{cases}$$
In particular, we have
$$\mathbb P\big(\hat Z_1\neq0\big)=r,\qquad \mathbb P\big(\hat Z_1=0\big)=1-r.$$
At the step n, the process Y ^ n has as value
Y ^ n = k = 1 n Z ^ k .
We define the event $A_n$: at time $n$, the random walk $\hat Y_n$ decides to repeat the action made at time $n-1$,
$$A_n:=\big\{\text{at time } n,\ \text{the random walk reproduces the action made at time } n-1\big\}.$$
For all $n\ge1$, we can write
$$\hat Z_n=Z_n\Big(\mathbf 1_{A_n}\mathbf 1_{\{\hat Z_{n-1}\neq0\}}+\mathbf 1_{\bar A_n}\mathbf 1_{\{\hat Z_{n-1}=0\}}\Big),$$
$$\hat Y_n=\hat Y_{n-1}+Z_n\Big(\mathbf 1_{A_n}\mathbf 1_{\{\hat Z_{n-1}\neq0\}}+\mathbf 1_{\bar A_n}\mathbf 1_{\{\hat Z_{n-1}=0\}}\Big).$$
Remark that if $p=1$, then $\hat Y_n$ is a classical random walk with probability $r$, and is identically 0 with probability $1-r$. That is, if $\mathrm{Ber}(r)$ is a Bernoulli random variable independent of the process $(Z_n)_{n\ge1}$, then
$$\hat Y_n\ \stackrel{a.s.}{=}\ \mathrm{Ber}(r)\sum_{k=1}^{n}Z_k.$$
In the rest of the paper we assume that $p<1$. The random variable $D_n$ can be written as
$$D_n=\sum_{k=1}^{n}\mathbf 1_{\{\hat Z_k=0\}}=\sum_{k=1}^{n}\mathbf 1_{\{\hat Y_{k-1}=\hat Y_k\}}.$$
The parameter $r$ determines the probability of an initial upward move, while $p$ controls the memory mechanism by balancing repetition versus inversion. Their interaction strongly influences both the number of stops $D_n$ and the trajectory $\hat Y_n$, as reflected in the exact and asymptotic results presented in this work. Figure 1 illustrates the state transition structure of the proposed stochastic model.
We present example trajectories of a particle for three different step-size mechanisms: deterministic, Bernoulli, and exponential. The simulations are performed for $n=20$ steps, with an initial move probability $r=0.5$ and two different persistence probabilities, $p=0.2$ and $p=0.8$. These examples illustrate how both the step-size distribution and the tendency to repeat the previous action influence the particle’s trajectory. In particular, lower persistence ($p=0.2$) leads to frequent reversals in direction, while higher persistence ($p=0.8$) produces smoother trajectories. The stochastic nature of Bernoulli and exponential step sizes introduces additional variability compared with the deterministic case, with exponential steps generating the largest fluctuations due to their unbounded, continuous nature.
Figure 2 shows that the deterministic path increases regularly due to the fixed step size, while the Bernoulli path exhibits additional variability arising from discrete random steps. The exponential path shows the largest fluctuations, reflecting the unbounded, continuous nature of the step sizes. In all cases, the persistence probability p = 0.8 induces temporal correlation, causing trajectories to tend to continue their previous direction. Overall, this figure illustrates how step-size randomness and directional persistence jointly influence the particle’s motion.
Figure 3 shows that the deterministic path progresses in uniform steps but frequently reverses direction due to low persistence. The Bernoulli path exhibits additional variability from discrete random steps, occasionally stagnating when the step size is zero. The exponential path shows the largest fluctuations due to continuous, unbounded step sizes combined with frequent direction reversals. This figure illustrates how step-size distribution and directional persistence jointly influence particle trajectories.

3. Main Results

In this section we present our main results. We denote by $D_n$ the number of steps of size 0 among the steps $1,\dots,n$; it is the number of times that the elephant has decided not to move. The first result concerns the first two moments of $D_n$ given in Equation (6).
Theorem 1.
For all $n\ge1$, the mean and variance of $D_n$ are given by
$$\mathbb E[D_n]=\frac{1}{4(1-p)}\Big[2n(1-p)+(1-2r)\big(1-(2p-1)^n\big)\Big],$$
and
$$\begin{aligned}\mathrm{Var}[D_n]={}&\frac{1-2r}{4(1-p)}\big(1-(2p-1)^n\big)+\frac{n}{2}+\frac{(1-2r)(2p-1)}{8(1-p)^2}\big(1-(2p-1)^{n-1}\big)-\frac{1-2r}{4(1-p)}(n-1)(2p-1)^n\\&+\frac{1-2r}{4(1-p)}\,n\big(1-(2p-1)^{n-1}\big)-\frac{1-2r}{8(1-p)^2}\Big(1-(2p-1)^n-2n(1-p)(2p-1)^{n-1}\Big)\\&+\frac{2p-1}{4(1-p)}(n-1)-\frac{(2p-1)^2}{8(1-p)^2}\big(1-(2p-1)^{n-1}\big)+\frac{n(n-1)}{4}\\&-\frac{1}{16(1-p)^2}\Big[2n(1-p)+(1-2r)\big(1-(2p-1)^n\big)\Big]^2.\end{aligned}$$
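For small $n$, the decision process has only $2^n$ paths, so the moments of $D_n$ can be computed exactly by brute force and compared with the closed forms of Theorem 1. The sketch below is our own numerical cross-check (function names are ours; the variance is entered as the second-moment terms minus the squared mean, as in the reconstruction above):

```python
from itertools import product

def moments_by_enumeration(n, p, r):
    """Exact E[D_n] and Var[D_n] by summing over all 2^n decision paths."""
    mean = second = 0.0
    # bits[0]: 1 if the first step moves; bits[k]: 1 if step k+1 repeats
    for bits in product((0, 1), repeat=n):
        prob = r if bits[0] else 1.0 - r
        moved, stops = bits[0], 1 - bits[0]
        for b in bits[1:]:
            prob *= p if b else 1.0 - p
            moved = moved if b else 1 - moved
            stops += 1 - moved
        mean += prob * stops
        second += prob * stops * stops
    return mean, second - mean ** 2

def moments_closed_form(n, p, r):
    """Mean and variance of D_n from the closed forms of Theorem 1."""
    x = 2.0 * p - 1.0
    q = 1.0 - p
    mean = (2 * n * q + (1 - 2 * r) * (1 - x ** n)) / (4 * q)
    e2 = ((1 - 2 * r) * (1 - x ** n) / (4 * q) + n / 2
          + (1 - 2 * r) * x * (1 - x ** (n - 1)) / (8 * q ** 2)
          - (1 - 2 * r) * (n - 1) * x ** n / (4 * q)
          + (1 - 2 * r) * n * (1 - x ** (n - 1)) / (4 * q)
          - (1 - 2 * r) * (1 - x ** n - 2 * n * q * x ** (n - 1)) / (8 * q ** 2)
          + x * (n - 1) / (4 * q)
          - x ** 2 * (1 - x ** (n - 1)) / (8 * q ** 2)
          + n * (n - 1) / 4)
    return mean, e2 - mean ** 2

m_exact, v_exact = moments_by_enumeration(8, 0.8, 0.3)
m_formula, v_formula = moments_closed_form(8, 0.8, 0.3)
```

For the parameter values we tried, the two computations agree to machine precision.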
The exact distribution of D n is given in the following Theorem.
Theorem 2.
For all $1\le m\le n$, we have
$$\mathbb P\big(D_n=m\big)=(1-r)\sum_{\substack{n_1+n_2=n-1,\\ k_1+n_2=m-1,\ k_1\le n_1}} r^{\,n_1-k_1}(1-r)^{k_1}p^{\,n_2}\binom{n_1}{k_1}\;+\;r(1-r)^{m}\sum_{\substack{n_1+n_2=n-2,\\ n_1\ge m-1}} r^{\,n_1-m+1}p^{\,n_2}\binom{n_1}{m-1}.$$
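Independently of the combinatorial formula, the law of $D_n$ can be computed numerically by dynamic programming on the pair (last action, number of stops). The sketch below is our own cross-checking tool, not part of the authors’ derivation; it also verifies the easy boundary cases $\mathbb P(D_n=0)=rp^{n-1}$ (the walk never stops) and $\mathbb P(D_n=n)=(1-r)p^{n-1}$ (it always stops):

```python
def dn_distribution(n, p, r):
    """P(D_n = m) for m = 0..n, by dynamic programming.

    State (moved, m): moved = 1 if the last step was an upward move,
    m = number of stops so far.
    """
    dist = {(1, 0): r, (0, 1): 1.0 - r}           # law after the first step
    for _ in range(2, n + 1):
        nxt = {}
        for (moved, m), q in dist.items():
            for repeat, pr in ((1, p), (0, 1.0 - p)):
                new_moved = moved if repeat else 1 - moved
                key = (new_moved, m + (1 - new_moved))
                nxt[key] = nxt.get(key, 0.0) + q * pr
        dist = nxt
    pmf = [0.0] * (n + 1)
    for (_, m), q in dist.items():
        pmf[m] += q
    return pmf

pmf = dn_distribution(10, p=0.7, r=0.4)
```

The probabilities sum to 1, and the endpoints match the direct boundary computations.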
Using the fact that, for all integers $k,l\ge1$, we have
$$\mathrm{Cov}\Big(\mathbf 1_{\{\hat Z_{k+l}=0\}},\,\mathbf 1_{\{\hat Z_{k}=0\}}\Big)=\frac{1}{2}\,\mathbb P\big(\hat Z_k=0\big)\,(2p-1)^{l}\Big[1-(2p-1)^{k-1}(1-2r)\Big],$$
we deduce the following result, which clearly illustrates the symmetry phenomenon between the number of upward moves and the number of stops.
Theorem 3.
We have the following central limit theorem (CLT) with respect to D n :
$$\sqrt n\left(\frac{D_n}{n}-\frac{1}{2}\right)\xrightarrow{\ \mathcal D\ }\mathcal N\big(0,\sigma_D^2\big),$$
where $\sigma_D^2=\lim_{n\to\infty}\dfrac{\mathrm{Var}(D_n)}{n}$.
Remark 1.
The number of stops $D_n$ was also studied in [6], but only in the context of the full-memory ERW, where it was shown that the expectation of $D_n$ is given, for $n\ge1$, by
$$\mathbb E[D_n]=\frac{\Gamma(n+1-r)}{\Gamma(1-r)\,(n-1)!},$$
where, in this reference, $r$ is the probability of a stop at step $n$. Some other characteristics related to $D_n$ are given in the proofs.
The second part of our results concerns the process $\hat Y_n$, representing the position of the elephant at time $n$.
Theorem 4.
For all fixed $n\ge1$, the mean and variance of $\hat Y_n$ are given by
$$\mathbb E[\hat Y_n]=\frac{1}{4(1-p)}\Big[2n(1-p)-(1-2r)\big(1-(2p-1)^n\big)\Big],\qquad \mathrm{Var}[\hat Y_n]=\mathrm{Var}[D_n]+\sigma^2\big(n-\mathbb E[D_n]\big).$$
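Since each non-stop step has mean 1, the mean position is just the expected number of moves, so the two closed forms must satisfy $\mathbb E[\hat Y_n]+\mathbb E[D_n]=n$. A quick numerical consistency check (our own sketch, not a proof):

```python
def mean_stops(n, p, r):
    # E[D_n] from Theorem 1
    return (2 * n * (1 - p) + (1 - 2 * r) * (1 - (2 * p - 1) ** n)) / (4 * (1 - p))

def mean_position(n, p, r):
    # E[Y_n] from Theorem 4
    return (2 * n * (1 - p) - (1 - 2 * r) * (1 - (2 * p - 1) ** n)) / (4 * (1 - p))

# the sums below should all equal n exactly
check = [mean_position(n, 0.7, 0.25) + mean_stops(n, 0.7, 0.25) for n in (1, 10, 100)]
```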
The distribution of $\hat Y_n$ can be characterized by the moment-generating function, where the moment-generating function $\Phi_S(t)$ of a random variable $S$, when it exists, is defined as $\Phi_S(t)=\mathbb E\big[e^{tS}\big]$.
Theorem 5.
For all $n\ge2$, the moment-generating function of $\hat Y_n$ is given by
$$\Phi_n(t)=(1-r)\sum_{m=1}^{n}\Phi_Z(t)^{\,n-m}\sum_{\substack{n_1+n_2=n-1,\\ k_1+n_2=m-1,\ k_1\le n_1}} r^{\,n_1-k_1}(1-r)^{k_1}p^{\,n_2}\binom{n_1}{k_1}\;+\;r\sum_{m=1}^{n}\Phi_Z(t)^{\,n-m}(1-r)^{m}\sum_{\substack{n_1+n_2=n-2,\\ n_1\ge m-1}} r^{\,n_1-m+1}p^{\,n_2}\binom{n_1}{m-1}.$$
The third result is about the central limit theorem with respect to Y ^ n .
Theorem 6.
When n goes to infinity, we have
$$\sqrt n\left(\frac{\hat Y_n}{n}-1+\frac{D_n}{n}\right)\xrightarrow{\ \mathcal D\ }\mathcal N\left(0,\frac{\sigma^2}{2}\right),$$
where $\sigma^2$ is the variance of $Z_1$.

4. Simulations

4.1. Curve of the Sequence $\mathbb E(D_n)/n$

Figure 4 displays the evolution of the ratio $R_n=\mathbb E(D_n)/n$ as a function of time $n$ for different parameter choices $(p,r)$. Regardless of the values of $r$ and $p$, all trajectories rapidly converge to the same limiting value, approximately $1/2$. This indicates that, asymptotically, the system spends about half of the time in state 0, independently of the initial conditions and transition probabilities. The convergence is very fast: after only a few hundred iterations, the curves become indistinguishable and remain stable around the limiting ratio.

4.2. Simulations of $D_n/n$

Figure 5 displays the evolution of the ratio $H_n=D_n/n$ as a function of time $n$ for different parameter values $(r,p)$. At the beginning, the trajectories show strong fluctuations, especially for $n<500$. As time increases, all curves gradually stabilize and converge toward similar limiting values around $H_n\approx0.5$. This suggests that, despite differences in the initial parameters $(r,p)$, the long-term behavior of the ratio remains stable and relatively independent of the chosen values. The early-stage variability reflects sensitivity to initial conditions, whereas the convergence in the long run highlights the robustness of the system’s asymptotic properties.

4.3. Simulations of the CLT with Respect to $D_n$

Across all panels in Figure 6, the empirical points lie close to the reference line, and the RMSE values remain small (below 0.07), supporting the adequacy of the normal approximation predicted by the central limit theorem. The persistence parameter p and the initial move probability r slightly affect tail alignment: higher persistence ( p = 0.8 ) with moderate r yields slightly larger deviations in the extremes, while lower persistence ( p = 0.2 ) produces comparatively smoother fits. Nevertheless, deviations remain minor and are reflected in the small RMSE values. Overall, the QQ plots confirm that for these parameter settings, the distribution of the standardized statistic is well approximated by a normal law, with only mild discrepancies in the tails.
By analyzing different combinations of the parameters $n$, $N$, $p$, and $r$ in Table 1, we can draw several conclusions.
1.
Effect of Sample Size per Sample (n)
(a)
As n increases, the data generally becomes more normal.
(b)
This is most clearly seen in the RMSE (QQ) column, which consistently decreases as n increases for fixed N, p, r.
(c)
Example: For N = 100 , p = 0.8 , r = 0.5 , RMSE drops from 0.1289 ( n = 1000 ) to 0.0564 ( n = 5000 ) to 0.0385 ( n = 10 , 000 ).
(d)
The p-values from the normality tests also tend to become larger (less significant) as n increases, further supporting this trend.
2.
Effect of Number of Samples/Replications (N)
(a)
As N increases, the data also becomes more normal.
(b)
This pattern is also very clear in the RMSE (QQ) column.
(c)
Example: For n = 5000 , p = 0.8 , r = 0.5 , RMSE drops from 0.0688 ( N = 500 ) to 0.0530 ( N = 1000 ) to 0.0163 ( N = 5000 ).
(d)
This suggests that the data generation process itself may produce a perfectly normal distribution only in the limit as the number of replications grows large.
3.
Effect of Parameters p and r
(a)
The effects of p and r are more subtle and interact with n and N.
(b)
There is no single, dominant pattern. For instance:
  • Sometimes a combination like p = 0.7 , r = 0.8 produces excellent normality (e.g., row 3, row 15).
  • Other combinations like p = 0.6 , r = 0.2 can show weaker normality for smaller n and N (e.g., row 4, row 8) but become perfectly normal with larger n and N (e.g., row 20, row 32).
(c)
This indicates that the impact of the data generation parameters (p, r) is complex and depends on the sample size.
4.
Agreement Between Tests
(a)
The four normality tests generally agree on the broad conclusion (normal vs. not normal) but often disagree on the specific p-value.
(b)
For very large n and N (e.g., n = 10,000 , N = 10,000 ), the Shapiro-Wilk test returns NA. This is a known computational limitation; the test becomes too computationally expensive or unstable with very large sample sizes.
5.
Overall Normality
(a)
Despite some low p-values for smaller n and N, the Skewness and Kurtosis values are almost always very close to 0.
(b)
The RMSE values are consistently low, especially for larger n and N.
(c)
Conclusion: The data generation process produces distributions that are very close to normal, especially when the sample size (n) and the number of replications (N) are large. The deviations from normality detected by the tests in some scenarios are minor in a practical sense.

4.4. Simulations of the CLT with Respect to $\hat Y_n$

Table 2 reports normality assessment results for different sample sizes $n\in\{1000,\,5000,\,10{,}000\}$, numbers of Monte Carlo replications $N$, and parameter pairs $(p,r)\in\{(0.9,\,0.7),\,(0.5,\,0.4),\,(0.3,\,0.2)\}$. Several normality tests (Shapiro–Wilk, Anderson–Darling, D’Agostino, Jarque–Bera) are presented alongside skewness, kurtosis, and QQ-plot RMSE. As $n$ increases, all tests generally indicate improved agreement with the normal distribution, with $p$-values moving away from the 5% significance threshold. Skewness remains near zero and kurtosis close to 3 for $n\ge5000$, while the QQ-plot RMSE decreases, confirming visual closeness to normality. Deviations are most pronounced at $n=1000$ for $(p,r)=(0.9,0.7)$, where tests occasionally reject normality due to negative skew and excess kurtosis. Overall, these results provide strong evidence of a central limit theorem effect: the standardized statistic converges rapidly to Gaussian behavior as $n$ grows, with excellent agreement across all tests for $n\ge5000$.
The QQ plots in Figure 7 demonstrate robust asymptotic normality of the standardized statistic $\sqrt n\big(\hat Y_n/n-1+D_n/n\big)$ across all parameter configurations, with exceptional alignment to the theoretical normal distribution evidenced by near-perfect linearity along the reference line and complete containment within the 95% confidence bands. The remarkably low RMSE values (0.0174 to 0.0229) indicate minimal quantile deviation, with the highest persistence parameters ($p=0.9$, $r=0.7$) yielding the most precise approximation and even the lowest persistence case ($p=0.3$, $r=0.2$) maintaining excellent fit. The absence of systematic deviations confirms no detectable skewness or heavy-tailed behavior, validating the applicability of the central limit theorem and ensuring reliable statistical inference using normal approximations across the entire parameter space.

5. Some Characteristics of the Distribution of the Process $\hat Z_n$

We start with the probability that, at step $n$, the process does not move. Let $u_n:=\mathbb P\big(\hat Z_n=0\big)$. Due to Equation (4), the following lemma is straightforward.
Lemma 1.
The sequence $(u_n)$ satisfies $u_1=1-r$ and, for $n\ge2$,
$$u_n=(2p-1)u_{n-1}+1-p;$$
the solution is given by
$$u_n=(2p-1)^{n-1}\left(\frac{1}{2}-r\right)+\frac{1}{2}.$$
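The stated solution can be checked against the recurrence directly; the small sketch below (our own, assuming nothing beyond the lemma itself) iterates the recursion and compares it with the closed form:

```python
def u_closed(n, p, r):
    # u_n = (2p-1)^(n-1) (1/2 - r) + 1/2
    return (2 * p - 1) ** (n - 1) * (0.5 - r) + 0.5

def u_recursive(n, p, r):
    u = 1.0 - r                          # u_1 = 1 - r
    for _ in range(2, n + 1):
        u = (2 * p - 1) * u + (1 - p)    # u_n = (2p-1) u_{n-1} + 1 - p
    return u
```

The two functions agree for every $n\ge1$ and all $p,r\in(0,1)$.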
In the rest of this section, we focus on explicitly calculating the first two moments of the processes $\hat Z_n$ and $\hat Y_n$. These moments will be useful later for studying the distributions of these two processes, as well as for deriving the asymptotic behavior in probability of the process $\hat Y_n$. Denoting $m_{n,1}:=\mathbb E[\hat Z_n]$, from the last lemma we deduce the following.
Corollary 1.
The mean $m_{n,1}$ is given, for all $n\ge1$, by
$$m_{n,1}=\frac{1}{2}\Big[1-(1-2r)(2p-1)^{n-1}\Big].$$
Proof. 
It is clear, from the model, that $m_{1,1}=\mathbb E[\hat Z_1]=\mathbb E[\hat Y_1]=r$. For $n\ge2$, one has
$$m_{n,1}=\mathbb E\Big[Z_n\Big(\mathbf 1_{A_n}\mathbf 1_{\{\hat Z_{n-1}\neq0\}}+\mathbf 1_{\bar A_n}\mathbf 1_{\{\hat Z_{n-1}=0\}}\Big)\Big]=\mathbb E[Z_n]\Big(\mathbb E\big[\mathbf 1_{A_n}\big]\,\mathbb E\big[\mathbf 1_{\{\hat Z_{n-1}\neq0\}}\big]+\mathbb E\big[\mathbf 1_{\bar A_n}\big]\,\mathbb E\big[\mathbf 1_{\{\hat Z_{n-1}=0\}}\big]\Big).$$
Use the identities
$$\mathbb E[Z_n]=1,\qquad \mathbb E\big[\mathbf 1_{A_n}\big]=p,\qquad \mathbb E\big[\mathbf 1_{\{\hat Z_{n-1}=0\}}\big]=u_{n-1},$$
to conclude that
$$m_{n,1}=p\,(1-u_{n-1})+(1-p)\,u_{n-1}=p-(2p-1)u_{n-1}.$$
Using now the explicit expression of $u_{n-1}$ from Lemma 1, we get
$$m_{n,1}=\frac{1}{2}\Big[1-(1-2r)(2p-1)^{n-1}\Big].$$
 □
Lemma 2.
Let $m_{n,2}:=\mathbb E[\hat Z_n^2]$ be the second moment of $\hat Z_n$. The sequence $(m_{n,2})$ satisfies the recursive equation
$$m_{n,2}=(1+\sigma^2)\big[p-(2p-1)u_{n-1}\big],$$
and its explicit expression is given, for all $n\ge1$, by
$$m_{n,2}=\frac{1+\sigma^2}{2}\Big[1-(1-2r)(2p-1)^{n-1}\Big].$$
Proof. 
One has
$$m_{n,2}=\mathbb E\big[Z_n^2\big]\Big(\mathbb E\big[\mathbf 1_{A_n}\big]\,\mathbb E\big[\mathbf 1_{\{\hat Z_{n-1}\neq0\}}\big]+\mathbb E\big[\mathbf 1_{\bar A_n}\big]\,\mathbb E\big[\mathbf 1_{\{\hat Z_{n-1}=0\}}\big]\Big).$$
Using Equation (8) and $\mathrm{Var}[Z_n]=\sigma^2$, the sequence $m_{n,2}$ satisfies
$$m_{n,2}=(1+\sigma^2)\big[p-(2p-1)u_{n-1}\big]=(1+\sigma^2)\left[p-(2p-1)^{n-1}\left(\frac{1}{2}-r\right)-\left(p-\frac{1}{2}\right)\right]=(1+\sigma^2)\left[\frac{1}{2}-\left(\frac{1}{2}-r\right)(2p-1)^{n-1}\right],$$
and its explicit formula is given by
$$m_{n,2}=\frac{1+\sigma^2}{2}\Big[1-(1-2r)(2p-1)^{n-1}\Big].$$
 □

6. Number of Stops of the Process

By a stop, we mean that the walk does not move. For all $n\ge1$, let $D_n$ be the number of times that the walk does not move, so that $n-D_n$ is the number of moves up to time $n$. Recall that $D_n$ is defined by
$$D_n=\#\big\{1\le k\le n:\ \hat Z_k=0\big\}.$$
The process D n plays a crucial role in the study of the process Y ^ n due to a distributional identity linking the two processes. For this reason, we dedicate this section to the analysis of D n , which will later be used to deduce the behavior of the process Y ^ n .

6.1. Proof of Theorem 1

Since the sequence $(D_n)_{n\ge1}$ satisfies, for all $n\ge1$,
$$D_n=D_{n-1}+\mathbf 1_{\{\hat Z_n=0\}},$$
we deduce that the mean $d_n:=\mathbb E[D_n]$ satisfies
$$d_n=d_{n-1}+u_n=\sum_{k=1}^{n}u_k.$$
The proof can be completed using Equation (7).
Remark 2.
The symmetry phenomenon between the number of times the elephant decides to move upward and the number of times it decides to remain stationary is revealed on average. In fact, as a direct consequence of the last lemma, we deduce that, for $p<1$,
$$\lim_{n\to\infty}\frac{\mathbb E[D_n]}{n}=\frac{1}{2}.$$
Recall that the variable $D_n$ can be written as $D_n=\sum_{k=1}^{n}\mathbf 1_{\{\hat Z_k=0\}}$. We start by computing the second moment of $D_n$. We have
$$\mathbb E[D_n^2]=\sum_{k=1}^{n}\mathbb P\big(\hat Z_k=0\big)+2\sum_{1\le l<k\le n}\mathbb P\big(\hat Z_k=0,\,\hat Z_l=0\big)=\sum_{k=1}^{n}\mathbb P\big(\hat Z_k=0\big)+2\sum_{1\le l<k\le n}\mathbb P\big(\hat Z_k=0\,\big|\,\hat Z_l=0\big)\,\mathbb P\big(\hat Z_l=0\big).$$
Remark that $\mathbb P(\hat Z_k=0\mid\hat Z_l=0)$ equals the probability that the step of the same walk at time $k-l$ is 0, with $r$ replaced by $1-p$. Then
$$\mathbb P\big(\hat Z_k=0\,\big|\,\hat Z_l=0\big)=(2p-1)^{k-l-1}\left(p-\frac{1}{2}\right)+\frac{1}{2}.$$
We deduce
$$\begin{aligned}\mathbb E[D_n^2]&=\sum_{k=1}^{n}\left[(2p-1)^{k-1}\left(\frac12-r\right)+\frac12\right]+2\sum_{1\le l<k\le n}\left[(2p-1)^{k-l-1}\left(p-\frac12\right)+\frac12\right]\left[(2p-1)^{l-1}\left(\frac12-r\right)+\frac12\right]\\&=\sum_{k=1}^{n}\left[(2p-1)^{k-1}\left(\frac12-r\right)+\frac12\right]+2\sum_{l=1}^{n-1}\left[(2p-1)^{l-1}\left(\frac12-r\right)+\frac12\right]\sum_{j=0}^{n-l-1}\left[(2p-1)^{j}\left(p-\frac12\right)+\frac12\right].\end{aligned}$$
Using standard identities for geometric sums, we obtain the following:
$$\begin{aligned}\mathbb E[D_n^2]&=\sum_{k=1}^{n}\left[(2p-1)^{k-1}\left(\frac12-r\right)+\frac12\right]+2\sum_{l=1}^{n-1}\left[(2p-1)^{l-1}\left(\frac12-r\right)+\frac12\right]\left[\frac{(2p-1)\big(1-(2p-1)^{n-l}\big)}{4(1-p)}+\frac{n-l}{2}\right]\\&=\frac12\sum_{k=1}^{n}\Big[(2p-1)^{k-1}(1-2r)+1\Big]+\frac{1-2r}{4(1-p)}\sum_{l=1}^{n-1}(2p-1)^{l}\Big[1-(2p-1)^{n-l}\Big]+(1-2r)\sum_{l=1}^{n-1}(2p-1)^{l-1}\,\frac{n-l}{2}\\&\quad+\frac{2p-1}{4(1-p)}\sum_{l=1}^{n-1}\Big[1-(2p-1)^{n-l}\Big]+\sum_{l=1}^{n-1}\frac{n-l}{2}.\end{aligned}$$
Then we have
$$\begin{aligned}\mathbb E[D_n^2]&=\frac{1-2r}{4(1-p)}\big(1-(2p-1)^n\big)+\frac{n}{2}+\frac{(1-2r)(2p-1)}{8(1-p)^2}\big(1-(2p-1)^{n-1}\big)-\frac{1-2r}{4(1-p)}(n-1)(2p-1)^n\\&\quad+\frac{1-2r}{4(1-p)}\,n\big(1-(2p-1)^{n-1}\big)-\frac{1-2r}{8(1-p)^2}\Big(1-(2p-1)^n-2n(1-p)(2p-1)^{n-1}\Big)\\&\quad+\frac{2p-1}{4(1-p)}(n-1)-\frac{(2p-1)^2}{8(1-p)^2}\big(1-(2p-1)^{n-1}\big)+\frac{n(n-1)}{4}.\end{aligned}$$
Remark 3.
When $p=\frac12$ (the memoryless case), the second moment of $D_n$ reduces to
$$\mathbb E[D_n^2]=\frac{n^2}{4}+\frac{(3-4r)n}{4}.$$
The following convergence in probability holds.
Corollary 2.
When n goes to infinity, we have
$$\frac{D_n}{n}\xrightarrow{\ \mathbb P\ }\frac{1}{2}.$$
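The convergence $D_n/n\to1/2$ involves only the decision chain, not the step sizes, so it can be illustrated by simulating the two-state repeat/invert chain alone. The sketch below, with arbitrarily chosen parameters and seed, averages the ratio over independent runs:

```python
import random

def stops_ratio(n, p, r, rng):
    """Simulate the decision chain and return D_n / n."""
    moved = rng.random() < r
    stops = 0 if moved else 1
    for _ in range(2, n + 1):
        if rng.random() >= p:        # invert with probability 1 - p
            moved = not moved        # otherwise repeat the previous action
        stops += 0 if moved else 1
    return stops / n

rng = random.Random(12345)
ratios = [stops_ratio(2000, p=0.8, r=0.3, rng=rng) for _ in range(200)]
avg = sum(ratios) / len(ratios)
```

The average `avg` is close to $1/2$, in line with the corollary.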
Proof. 
Since $\lim_{n\to\infty}\mathrm{Var}[D_n]/n^2=0$, using Theorem 2.2.4 of [8] (p. 56) we deduce that
$$\frac{D_n-\mathbb E[D_n]}{n}\xrightarrow{\ \mathbb P\ }0\quad\text{as }n\to\infty.$$
The desired result is obtained using (11). □
Corollary 3.
When n goes to infinity, we have
$$n-D_n\xrightarrow{\ a.s.\ }+\infty.$$
Proof. 
From the last result, we deduce that
$$D_n\xrightarrow{\ \mathbb P\ }+\infty.$$
Since $(D_n)_{n\ge1}$ is an increasing sequence, we conclude that
$$D_n\xrightarrow{\ a.s.\ }+\infty.$$
On the other hand, by expressing $n-D_n$ as
$$n-D_n=\sum_{k=1}^{n}\mathbf 1_{\{\hat Z_k\neq0\}}=\sum_{k=1}^{n}\mathbf 1_{\{\hat Y_{k-1}\neq\hat Y_k\}},$$
we deduce, using the same arguments as for $D_n$, that $n-D_n\xrightarrow{\ a.s.\ }+\infty$. □

6.2. Proof of Theorem 2

Recall that $\mathbb P(\hat Z_1\neq0)=r$ and $\mathbb P(\hat Z_1=0)=1-r$. In order to study the distribution of $D_n$, we employ the bivariate generating function $\Psi(x,y)$, defined on $(0,1)^2$ by
$$\Psi(x,y)=\sum_{1\le m\le n}\mathbb P\big(D_n=m\big)\,x^{n-m}y^m.$$
The generating series method is a highly effective tool for studying this type of model. It generally allows researchers to derive the exact distribution of the random walk or, at the very least, its asymptotic behavior. For an example of the application of this method, we refer to the book [9]. The following theorem will help us to obtain the exact distribution of $D_n$.
Theorem 7.
The expression of $\Psi(x,y)$ is given by
$$\Psi(x,y)=(1-r)\sum_{\substack{0\le k_1\le n_1,\\ n_2\ge0}} r^{\,n_1-k_1}(1-r)^{k_1}p^{\,n_2}\binom{n_1}{k_1}\,y^{k_1+n_2+1}x^{\,n_1-k_1}\;+\;r(1-r)\sum_{\substack{0\le k_1\le n_1,\\ n_2\ge0}} r^{\,n_1-k_1}(1-r)^{k_1}p^{\,n_2}\binom{n_1}{k_1}\,y^{k_1+1}x^{\,n_1-k_1+n_2+1}.$$
To prove Theorem 7, we need the following lemma
Lemma 3.
For $m\ge1$ and $n\ge m+1$, we have
$$\mathbb P\big(D_n=m\big)=r\,\mathbb P\big(D_{n-1}=m\big)+(1-r)\,\mathbb P\big(D_{n-1}=m-1\big).$$
Proof. 
For $m\ge1$, $n\ge m+1$, we have
$$\begin{aligned}\mathbb P\big(D_n=m\big)&=\mathbb P\big(D_n=m,\,\hat Z_1\neq0\big)+\mathbb P\big(D_n=m,\,\hat Z_1=0\big)\\&=\mathbb P\big(D_n=m\,\big|\,\hat Z_1\neq0\big)\,\mathbb P\big(\hat Z_1\neq0\big)+\mathbb P\big(D_n=m\,\big|\,\hat Z_1=0\big)\,\mathbb P\big(\hat Z_1=0\big)\\&=\mathbb P\big(D_{n-1}=m\big)\,\mathbb P\big(\hat Z_1\neq0\big)+\mathbb P\big(D_{n-1}=m-1\big)\,\mathbb P\big(\hat Z_1=0\big)\\&=r\,\mathbb P\big(D_{n-1}=m\big)+(1-r)\,\mathbb P\big(D_{n-1}=m-1\big).\end{aligned}$$
 □
On the other hand, we can verify that
Lemma 4.
For all $n\ge1$,
$$\mathbb P\big(D_n=1\big)=(1-p)p^{n-2}+(n-2)\,r(1-p)^2p^{n-3},\qquad \mathbb P\big(D_n=n\big)=(1-r)p^{n-1},\qquad \mathbb P\big(D_n=0\big)=r\,p^{n-1},\qquad \mathbb P\big(D_1=1\big)=1-r.$$
Proof of Theorem 7.
Using Lemmas 3 and 4, we have
$$\begin{aligned}\Psi(x,y)&=\sum_{1\le m\le n}\mathbb P\big(D_n=m\big)x^{n-m}y^m=\mathbb P\big(D_1=1\big)y+\sum_{n=2}^{+\infty}\sum_{m=1}^{n}\mathbb P\big(D_n=m\big)x^{n-m}y^m\\&=\mathbb P\big(D_1=1\big)y+\sum_{n=2}^{+\infty}\mathbb P\big(D_n=n\big)y^n+\sum_{n=2}^{+\infty}\sum_{m=1}^{n-1}\mathbb P\big(D_n=m\big)x^{n-m}y^m\\&=(1-r)y+(1-r)\sum_{n=2}^{+\infty}p^{n-1}y^n+\sum_{n=2}^{+\infty}\sum_{m=1}^{n-1}\mathbb P\big(D_n=m\big)x^{n-m}y^m\\&=(1-r)y+(1-r)py^2\sum_{n=2}^{+\infty}(py)^{n-2}+r\sum_{n=2}^{+\infty}\sum_{m=1}^{n-1}\mathbb P\big(D_{n-1}=m\big)x^{n-m}y^m+(1-r)\sum_{n=2}^{+\infty}\sum_{m=1}^{n-1}\mathbb P\big(D_{n-1}=m-1\big)x^{n-m}y^m\end{aligned}$$
$$\begin{aligned}\Psi(x,y)&=(1-r)y+\frac{(1-r)py^2}{1-py}+r\sum_{n=1}^{+\infty}\sum_{m=1}^{n}\mathbb P\big(D_n=m\big)x^{n-m+1}y^m+(1-r)\sum_{n=1}^{+\infty}\sum_{m=0}^{n}\mathbb P\big(D_n=m\big)x^{n-m}y^{m+1}\\&=(1-r)y+\frac{(1-r)py^2}{1-py}+rx\sum_{1\le m\le n}\mathbb P\big(D_n=m\big)x^{n-m}y^m+(1-r)y\sum_{1\le m\le n}\mathbb P\big(D_n=m\big)x^{n-m}y^m+(1-r)y\sum_{n=1}^{+\infty}\mathbb P\big(D_n=0\big)x^n\\&=(1-r)y+\frac{(1-r)py^2}{1-py}+rx\,\Psi(x,y)+(1-r)y\,\Psi(x,y)+r(1-r)y\sum_{n=1}^{+\infty}p^{n-1}x^n\\&=(1-r)y+\frac{(1-r)py^2}{1-py}+rx\,\Psi(x,y)+(1-r)y\,\Psi(x,y)+\frac{r(1-r)yx}{1-px}.\end{aligned}$$
As a conclusion, we have
$$\Psi(x,y)=\frac{1}{1-rx-(1-r)y}\left[(1-r)y+\frac{p(1-r)y^2}{1-py}+\frac{r(1-r)yx}{1-px}\right]=\frac{1}{1-rx-(1-r)y}\left[\frac{(1-r)y}{1-py}+\frac{r(1-r)yx}{1-px}\right]=\frac{(1-r)y}{1-rx-(1-r)y}\left[\frac{1}{1-py}+\frac{rx}{1-px}\right].$$
The terms on the right-hand side of the last equality can be expanded as
$$\frac{(1-r)y}{1-rx-(1-r)y}=(1-r)\sum_{n_1=0}^{+\infty}\sum_{k_1=0}^{n_1}\binom{n_1}{k_1}r^{\,n_1-k_1}(1-r)^{k_1}\,y^{k_1+1}x^{\,n_1-k_1},\qquad \frac{1}{1-py}=\sum_{n_2=0}^{+\infty}p^{n_2}y^{n_2},\qquad \frac{rx}{1-px}=r\sum_{n_3=0}^{+\infty}p^{n_3}x^{n_3+1}.$$
 □
Proof of Theorem 2.
Using the expression of $\Psi(x,y)$ in Theorem 7 and the Taylor expansion with respect to $x$ and $y$, we deduce that, for all integers $k$ and $l$,
$$[x^k y^l]\,\Psi(x,y)=\mathbb P\big(D_{k+l}=l\big).$$
 □

7. Distribution of Y ^ n

For $t$ a real number, denote by $\Phi_n(t)=\mathbb E\big[e^{t\hat Y_n}\big]$ the moment-generating function of $\hat Y_n$. On the other hand, it is clear that
$$\hat Y_n\ \stackrel{\mathcal D}{=}\ \sum_{k=1}^{n-D_n}Z_k.$$
Using Corollary 3 and the law of large numbers, we can conclude the following.
Corollary 4.
We have
$$\frac{\hat Y_n}{n}\xrightarrow{\ \mathbb P\ }\frac{1}{2}.$$

7.1. Proof of Theorem 4

The proof is based on the identity given in Equation (12) and Theorem 1.

7.2. Proof of Theorem 5

For all $n\ge1$ and all complex numbers $z$, let
$$\Psi_n(z)=\sum_{m=0}^{n}\mathbb P\big(D_n=m\big)\,z^{\,n-m}.$$
Then
$$\Phi_n(t)=\mathbb E\big[e^{t\hat Y_n}\big]=\mathbb E\Big[\Phi_Z(t)^{\,n-D_n}\Big]=\sum_{m=0}^{n}\mathbb P\big(D_n=m\big)\,\Phi_Z(t)^{\,n-m}=\Psi_n\big(\Phi_Z(t)\big).$$
Finally, we finish the proof using the exact expression of $\Psi_n$, which follows from the exact distribution of $D_n$ given in Theorem 2.
Remark 4.
Using the exact expression of $\Phi_n(t)$, one can derive, in an alternative form (as a sum), the expectation and variance of $\hat Y_n$. Consequently, new identities that may be useful in combinatorics are obtained:
$$\mathbb E[\hat Y_n]=\frac{d}{dt}\Phi_n(t)\Big|_{t=0}\qquad\text{and}\qquad \mathbb E[\hat Y_n^2]=\frac{d^2}{dt^2}\Phi_n(t)\Big|_{t=0}.$$

7.3. Proof of Theorem 6

By Equation (12), Corollary 3, and the central limit theorem, we obtain
$$\frac{1}{\sqrt{n-D_n}}\Big(\hat Y_n-(n-D_n)\Big)\xrightarrow{\ \mathcal D\ }\mathcal N\big(0,\sigma^2\big).$$
On the other hand, when $n$ goes to infinity, by Corollary 2 we have
$$\frac{n}{n-D_n}=\frac{1}{1-\frac{D_n}{n}}\xrightarrow{\ \mathbb P\ }2.$$
Then
$$\frac{1}{\sqrt{n-D_n}}\Big(\hat Y_n-(n-D_n)\Big)=\sqrt{\frac{n}{n-D_n}}\cdot\frac{1}{\sqrt n}\Big(\hat Y_n-n+D_n\Big)=\sqrt{\frac{n}{n-D_n}}\cdot\sqrt n\left(\frac{\hat Y_n}{n}-1+\frac{D_n}{n}\right),$$
and the result follows by combining the two convergences above with Slutsky's theorem. □

8. Conclusions and Perspectives

In this work, we derived the explicit distribution of the random walk $\hat Y_n$, as well as that of the number of stops $D_n$. This model represents a modification of the framework studied in [10] and also draws inspiration from the models presented in [5,6]. The main result of this paper is the exact distribution of the position of the elephant at each time step $n$. This result was achieved through the study of the number of stops, or delays, $D_n$, using the bivariate generating series tool.
Beyond exact distributions and asymptotic laws, our analysis highlights the fundamental role of binary symmetry between repetition and inversion of past actions. This structural symmetry provides a unifying perspective for interpreting the probabilistic dynamics of the model and suggests potential applications in physics, combinatorics, and memory-based random processes where such balanced opposing mechanisms naturally arise.
Several extensions of our model are possible. One particularly interesting case would be an elephant random walk whose memory includes the entire past trajectory. This would allow a richer and more complex exploration of long-term dependencies and behaviors in the walk, potentially leading to new mathematical insights and applications. At each step n + 1, the elephant recalls all previous steps X_1, …, X_n. It then randomly selects one step X_K from this past sequence and decides its next move based on the following probabilities:
  • With probability α, it takes the step R(p) X_K, where R(p) is a scaling or transformation factor that may depend on a parameter p.
  • With probability 1 − α, it takes a new independent step Z_{n+1}, which could represent noise or a random perturbation.
This mechanism introduces a process where the elephant's memory of the entire past influences its future behavior, allowing intricate dependencies and patterns to emerge in the trajectory. It provides a foundation for studying systems where long-term memory and randomness coexist dynamically. This model can be applied to the study of a type of Pólya urn model in the following way. We assume that we have an urn initially containing one white ball and one red ball, along with a sequence of independent and identically distributed (iid) integer random variables (Z_n)_n. At step n = 1, a ball is drawn, placed back into the urn, and Z_1 balls of the same color are added with probability r; with probability 1 − r, no balls are added. At each subsequent step n + 1, a ball is drawn, placed back into the urn, and the reinforcement depends on the previous step n: if balls were added at step n, then Z_{n+1} balls of the same color are added with probability p and no balls are added with probability 1 − p; the reverse rule applies if no balls were added at step n.
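The urn scheme just described can be sketched in a few lines of simulation code (an illustration, not taken from the paper; the choice of Z_n as uniform on {1, 2, 3} is an arbitrary assumption for the iid integer law):

```python
import random

random.seed(3)

def urn(n, p, r):
    """Sketch of the urn scheme: start with one white and one red ball.
    'added' records whether balls were added at the previous step; from the
    second step onward it is repeated w.p. p and inverted w.p. 1 - p."""
    white, red = 1, 1
    added = random.random() < r           # step 1: reinforce w.p. r
    for step in range(n):
        if step > 0:
            added = added if random.random() < p else not added
        draw_white = random.random() < white / (white + red)
        if added:
            z = random.randint(1, 3)      # Z_{step+1}: assumed iid integer law
            if draw_white:
                white += z
            else:
                red += z
    return white, red

white, red = urn(200, 0.8, 0.5)
print(white, red)   # final urn composition after 200 draws
```

Note that the number of reinforced draws here plays exactly the role of n − D_n in the walk, so the stop-counting results of the paper transfer directly to the number of reinforcement steps of the urn.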
An interesting perspective for future research is the validation of the model with empirical datasets. In physics, memory-driven spin systems or quantum walk experiments could provide data where the parameter p reflects the persistence of states, and the exact distributions derived here can be compared with observed transition statistics. In biology, GPS-based animal mobility records represent a natural framework where alternating phases of movement and rest are observed; our hybrid model can be fitted to such trajectories by estimating ( p , r ) from the empirical frequency of repetitions, inversions, and stops. Such comparisons would enhance both the applicability and the persuasiveness of the model, opening the way to data-driven validation.

Author Contributions

Conceptualization, R.A.; Methodology, M.A.; Software, M.A.; Validation, R.A.; Writing—original draft, R.A. and M.A.; Funding acquisition, R.A. All authors have read and agreed to the published version of the manuscript.

Funding

Ongoing Research Funding Program (ORF-2025-987), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This work is supported by the Ongoing Research Funding Program (ORF-2025-987), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdelkader, M.; Aguech, R. Moran random walk with reset and short memory. AIMS Math. 2024, 9, 19888–19910.
  2. Laulin, L. New insights on the reinforced elephant random walk using a martingale approach. J. Stat. Phys. 2022, 186, 9.
  3. Roy, R.; Takei, M.; Tanemura, H. Moran random walk with short memory. Electron. Commun. Probab. 2024, 29, 78.
  4. Bercu, B. A martingale approach for the elephant random walk. J. Phys. A Math. Theor. 2018, 51, 015201.
  5. Gut, A.; Stadtmüller, U. Elephant random walks: A review. Ann. Univ. Sci. Budapest. Sect. Comp. 2023, 54, 171–198.
  6. Gut, A.; Stadtmüller, U. The number of zeros in elephant random walks with delays. Stat. Probab. Lett. 2021, 174, 109–112.
  7. Venegas-Andraca, S.E. Quantum walks: A comprehensive review. Quantum Inf. Process. 2012, 11, 1015–1106.
  8. Durrett, R. Probability: Theory and Examples, 4th ed.; Cambridge University Press: Cambridge, UK, 2010.
  9. Flajolet, P.; Sedgewick, R. Analytic Combinatorics; Cambridge University Press: Cambridge, UK, 2009.
  10. Kiss, J.; Vető, B. Moments of the superdiffusive elephant random walk with general step distribution. Electron. Commun. Probab. 2022, 27, 1–12.
Figure 1. Transition diagram.
Figure 2. Particle paths: deterministic, Bernoulli(0.5), and exponential(1), when r = 0.5 and p = 0.8 .
Figure 3. Particle paths: deterministic, Bernoulli(0.5), and exponential(1), when r = 0.5 and p = 0.2 .
Figure 4. Simulation of the ratio log ( E ( D n ) n ) 0.5 when n = 1000 and N = 500 .
Figure 5. Simulation of the ratio H_n = D_n/n for n = 10,000 and N = 5000.
Figure 6. Normality test and QQ-plot RMSE for different values of p and r for D_n.
Figure 7. QQ-plot for √n(Ŷ_n/n − 1 + D_n/n), when n = 10,000 and N = 5000.
Table 1. Normality tests for the statistic D_n.
| n | N | p | r | Shapiro p | AD p | D’Agostino p | Jarque-Bera p | Skew | Kurtosis | RMSE (QQ) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | 100 | 0.8 | 0.5 | 0.0734 | 0.0553 | 0.3744 | 0.3984 | 0.2062 | −0.5213 | 0.1289 |
| 1000 | 100 | 0.9 | 0.4 | 0.7503 | 0.7294 | 0.9854 | 0.9656 | 0.0042 | −0.1294 | 0.1459 |
| 1000 | 100 | 0.7 | 0.8 | 0.6884 | 0.4449 | 0.3716 | 0.4133 | −0.2074 | 0.5020 | 0.0767 |
| 1000 | 100 | 0.6 | 0.2 | 0.1068 | 0.0777 | 0.1476 | 0.2449 | −0.3408 | −0.4590 | 0.0796 |
| 1000 | 500 | 0.8 | 0.5 | 0.3050 | 0.6493 | 0.8899 | 0.6237 | −0.0149 | −0.2108 | 0.0564 |
| 1000 | 500 | 0.9 | 0.4 | 0.3009 | 0.2340 | 0.4574 | 0.7217 | −0.0804 | −0.0740 | 0.0908 |
| 1000 | 500 | 0.7 | 0.8 | 0.0771 | 0.0300 | 0.9377 | 0.0962 | −0.0084 | −0.4738 | 0.0576 |
| 1000 | 500 | 0.6 | 0.2 | 0.2121 | 0.1353 | 0.1646 | 0.3287 | −0.1509 | −0.1255 | 0.0370 |
| 1000 | 1000 | 0.8 | 0.5 | 0.4920 | 0.9301 | 0.2997 | 0.5859 | −0.0799 | −0.0121 | 0.0385 |
| 1000 | 1000 | 0.9 | 0.4 | 0.6107 | 0.7053 | 0.2406 | 0.4919 | 0.0904 | −0.0368 | 0.0522 |
| 1000 | 1000 | 0.7 | 0.8 | 0.4501 | 0.6176 | 0.3050 | 0.4085 | −0.0790 | −0.1342 | 0.0297 |
| 1000 | 1000 | 0.6 | 0.2 | 0.1838 | 0.1821 | 0.0691 | 0.1229 | 0.1405 | 0.1473 | 0.0281 |
| 5000 | 500 | 0.8 | 0.5 | 0.1475 | 0.1519 | 0.1459 | 0.3530 | −0.1579 | 0.0135 | 0.0688 |
| 5000 | 500 | 0.9 | 0.4 | 0.8283 | 0.9572 | 0.6109 | 0.4931 | 0.0550 | −0.2362 | 0.0577 |
| 5000 | 500 | 0.7 | 0.8 | 0.9353 | 0.8440 | 0.9381 | 0.9241 | −0.0084 | 0.0854 | 0.0299 |
| 5000 | 500 | 0.6 | 0.2 | 0.8334 | 0.6808 | 0.5682 | 0.7689 | −0.0617 | −0.1001 | 0.0246 |
| 5000 | 1000 | 0.8 | 0.5 | 0.0572 | 0.0771 | 0.6327 | 0.8048 | −0.0368 | −0.0708 | 0.0530 |
| 5000 | 1000 | 0.9 | 0.4 | 0.5190 | 0.4786 | 0.2747 | 0.5219 | −0.0841 | −0.0538 | 0.0570 |
| 5000 | 1000 | 0.7 | 0.8 | 0.8795 | 0.7643 | 0.2864 | 0.5282 | 0.0821 | 0.0607 | 0.0238 |
| 5000 | 1000 | 0.6 | 0.2 | 0.7667 | 0.5681 | 0.2618 | 0.3759 | 0.0864 | 0.1307 | 0.0208 |
| 5000 | 5000 | 0.8 | 0.5 | 0.7309 | 0.8995 | 0.4333 | 0.4972 | 0.0271 | −0.0614 | 0.0163 |
| 5000 | 5000 | 0.9 | 0.4 | 0.2042 | 0.1210 | 0.1990 | 0.2875 | 0.0444 | 0.0638 | 0.0323 |
| 5000 | 5000 | 0.7 | 0.8 | 0.3480 | 0.2778 | 0.2267 | 0.3971 | −0.0418 | 0.0432 | 0.0150 |
| 5000 | 5000 | 0.6 | 0.2 | 0.3415 | 0.2645 | 0.4834 | 0.1517 | 0.0242 | −0.1255 | 0.0125 |
| 10,000 | 1000 | 0.8 | 0.5 | 0.0844 | 0.4323 | 0.0308 | 0.0931 | −0.1672 | −0.0469 | 0.0500 |
| 10,000 | 1000 | 0.9 | 0.4 | 0.7007 | 0.8221 | 0.8965 | 0.7072 | −0.0100 | 0.1274 | 0.0513 |
| 10,000 | 1000 | 0.7 | 0.8 | 0.4050 | 0.1315 | 0.1614 | 0.3744 | −0.1080 | 0.0228 | 0.0309 |
| 10,000 | 1000 | 0.6 | 0.2 | 0.7549 | 0.7786 | 0.3642 | 0.6191 | −0.0699 | −0.0591 | 0.0210 |
| 10,000 | 5000 | 0.8 | 0.5 | 0.2430 | 0.4538 | 0.0737 | 0.0859 | 0.0619 | 0.0907 | 0.0221 |
| 10,000 | 5000 | 0.9 | 0.4 | 0.8403 | 0.6826 | 0.3134 | 0.5722 | −0.0349 | −0.0222 | 0.0229 |
| 10,000 | 5000 | 0.7 | 0.8 | 0.3847 | 0.2229 | 0.7748 | 0.2187 | −0.0099 | 0.1192 | 0.0158 |
| 10,000 | 5000 | 0.6 | 0.2 | 0.3686 | 0.5618 | 0.5868 | 0.8148 | −0.0188 | 0.0235 | 0.0120 |
| 10,000 | 10,000 | 0.8 | 0.5 | - | 0.2019 | 0.4898 | 0.7856 | 0.0169 | −0.0039 | 0.0171 |
| 10,000 | 10,000 | 0.9 | 0.4 | - | 0.1200 | 0.7006 | 0.0878 | −0.0094 | −0.1064 | 0.0240 |
| 10,000 | 10,000 | 0.7 | 0.8 | - | 0.1464 | 0.2441 | 0.1870 | −0.0285 | 0.0692 | 0.0126 |
| 10,000 | 10,000 | 0.6 | 0.2 | - | 0.1863 | 0.4941 | 0.7915 | −0.0167 | 0.0013 | 0.0078 |
Table 2. Normality tests for the statistic √n(Ŷ_n/n − 1 + D_n/n).
| n | N | p | r | Shapiro p | AD p | D’Agostino p | Jarque-Bera p | Skew | Kurtosis | QQ RMSE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | 100 | 0.9 | 0.7 | 0.0463 | 0.1680 | 0.0177 | 0.0210 | −0.579 | 3.715 | 0.1632 |
| 1000 | 100 | 0.5 | 0.4 | 0.3249 | 0.5554 | 0.4319 | 0.5485 | 0.182 | 2.605 | 0.1081 |
| 1000 | 100 | 0.3 | 0.2 | 0.5188 | 0.3811 | 0.5454 | 0.8483 | −0.140 | 3.031 | 0.1068 |
| 1000 | 500 | 0.9 | 0.7 | 0.6002 | 0.5992 | 0.7017 | 0.6046 | −0.041 | 2.796 | 0.0471 |
| 1000 | 500 | 0.5 | 0.4 | 0.4369 | 0.1138 | 0.9447 | 0.7695 | −0.007 | 2.842 | 0.0571 |
| 1000 | 500 | 0.3 | 0.2 | 0.4542 | 0.5782 | 0.4844 | 0.4689 | 0.076 | 2.777 | 0.0525 |
| 1000 | 1000 | 0.9 | 0.7 | 0.7486 | 0.9600 | 0.7452 | 0.9323 | 0.025 | 3.029 | 0.0339 |
| 1000 | 1000 | 0.5 | 0.4 | 0.6739 | 0.6254 | 0.5693 | 0.8480 | −0.044 | 2.984 | 0.0347 |
| 1000 | 1000 | 0.3 | 0.2 | 0.8592 | 0.9293 | 0.2925 | 0.5771 | −0.081 | 3.010 | 0.0323 |
| 5000 | 500 | 0.9 | 0.7 | 0.4709 | 0.6903 | 0.2177 | 0.4189 | −0.134 | 3.110 | 0.0535 |
| 5000 | 500 | 0.5 | 0.4 | 0.1679 | 0.3629 | 0.1323 | 0.2175 | 0.164 | 2.801 | 0.0646 |
| 5000 | 500 | 0.3 | 0.2 | 0.0405 | 0.1880 | 0.0405 | 0.1114 | −0.224 | 2.898 | 0.0763 |
| 5000 | 1000 | 0.9 | 0.7 | 0.5634 | 0.2362 | 0.3444 | 0.5558 | −0.073 | 2.916 | 0.0360 |
| 5000 | 1000 | 0.5 | 0.4 | 0.7521 | 0.5165 | 0.5408 | 0.8205 | 0.047 | 2.975 | 0.0325 |
| 5000 | 1000 | 0.3 | 0.2 | 0.9102 | 0.8762 | 0.9787 | 0.8108 | 0.002 | 2.900 | 0.0291 |
| 5000 | 5000 | 0.9 | 0.7 | 0.4359 | 0.1218 | 0.7831 | 0.6190 | 0.010 | 2.935 | 0.0191 |
| 5000 | 5000 | 0.5 | 0.4 | 0.2589 | 0.2820 | 0.4984 | 0.1607 | 0.023 | 3.124 | 0.0225 |
| 5000 | 5000 | 0.3 | 0.2 | 0.4999 | 0.4140 | 0.5879 | 0.5095 | −0.019 | 3.071 | 0.0183 |
| 10,000 | 1000 | 0.9 | 0.7 | 0.1261 | 0.0238 | 0.5905 | 0.7085 | −0.041 | 2.902 | 0.0489 |
| 10,000 | 1000 | 0.5 | 0.4 | 0.5546 | 0.2731 | 0.7248 | 0.5426 | 0.027 | 3.163 | 0.0387 |
| 10,000 | 1000 | 0.3 | 0.2 | 0.1446 | 0.0520 | 0.6166 | 0.3184 | 0.039 | 2.779 | 0.0461 |
| 10,000 | 3000 | 0.9 | 0.7 | 0.6385 | 0.1898 | 0.3814 | 0.4425 | −0.039 | 2.917 | 0.0213 |
| 10,000 | 3000 | 0.5 | 0.4 | 0.6471 | 0.7873 | 0.7175 | 0.4873 | 0.016 | 3.102 | 0.0247 |
| 10,000 | 3000 | 0.3 | 0.2 | 0.0193 | 0.1318 | 0.3624 | 0.0395 | 0.041 | 3.212 | 0.0355 |
| 10,000 | 5000 | 0.9 | 0.7 | 0.8361 | 0.8834 | 0.8570 | 0.9717 | 0.006 | 2.989 | 0.0147 |
| 10,000 | 5000 | 0.5 | 0.4 | 0.0709 | 0.3440 | 0.0295 | 0.0640 | −0.075 | 3.060 | 0.0255 |
| 10,000 | 5000 | 0.3 | 0.2 | 0.1867 | 0.4124 | 0.3670 | 0.4591 | −0.031 | 3.060 | 0.0217 |