Skorokhod Reflection Problem for Delayed Brownian Motion with Applications to Fractional Queues

Abstract: Several queueing systems in heavy traffic regimes are known to admit a diffusive approximation in terms of the Reflected Brownian Motion. The latter is defined by solving the Skorokhod reflection problem on the trajectories of a standard Brownian motion. In recent years, fractional queueing systems have been introduced to model a class of queueing systems with heavy-tailed interarrival and service times. In this paper, we consider a subdiffusive approximation for such processes in the heavy traffic regime. To do this, we introduce the Delayed Reflected Brownian Motion, either by solving the Skorokhod reflection problem on the trajectories of the delayed Brownian motion or by composing the Reflected Brownian Motion with an inverse stable subordinator. The heavy traffic limit is achieved via the continuous mapping theorem. As a further interesting consequence, we obtain a simulation algorithm for the Delayed Reflected Brownian Motion via a continuous-time random walk approximation.


Introduction
Queueing theory is a widely studied branch of mathematics, thanks to its numerous applications. Aside from the classical ones in telecommunications [1] and traffic engineering [2], one can cite those in computing [3] (with particular attention to scheduling) and economics (see, for instance, [4]), together with the fact that this branch clearly shares several models with population dynamics (see also [5]). The simplest building block of the theory is the M/M/1 queue, which is the prototype of a Markovian queueing model. While, on one hand, the study of this queueing model in its stationary phase can be carried out without too much effort (see, for instance, [6,7]), the determination of its distribution during the transient phase is a quite difficult task that requires much more attention (see, for instance, [8,9]).
The main feature of the M/M/1 queue is the Markov property, which, on one hand, gives a quite tractable model, while, on the other, it precludes any possibility of including memory effects. To overcome this problem, one can extend the study to the class of semi-Markov processes, introduced by Lévy in [10]. A special class of semi-Markov processes can be obtained by means of a time change, i.e., the introduction of a non-Markov stochastic clock in the model. This is done, for instance, by composing a parent Markov process with the inverse of a subordinator. In the specific case of the α-stable subordinator, one refers to the new process as a fractional version of the parent one, due to the link between this time-change procedure and fractional calculus, as one can see from [11][12][13][14][15][16][17], to cite some of the several works on the topic. Together with a purely mathematical interest, let us stress that this procedure also leads to some interesting applications (see, for instance, [18] in economics, [19] in computing and [20] in computational neurosciences). In the context of queueing theory, this was first done in [4], in the case of the M/M/1 queue, and then extended in [21,22] to the case of M/M/1 queues with catastrophes, in [23] to the case of Erlang queues and in [24] to the M/M/∞ queue.
Another quite interesting feature of the classical M/M/1 queue is its link with the Brownian motion. It is well known that the behavior of the queue is strongly influenced by the traffic intensity ρ. In particular, if ρ < 1, the queue is ergodic (and thus admits a stationary state), while for ρ ≥ 1, this is no longer true. In a certain sense, we can see the critical value ρ = 1 as a bifurcation of the system (viewed as some sort of integral curve in the space of probability measures), in which it loses its globally asymptotically stable equilibrium point. Thus, a natural question concerns what happens, at a proper scaling, to the queueing model as ρ is near 1. This is called the heavy traffic regime. In the case of the M/M/1 queue, it is well known that, up to a proper scaling, the queue length process in the heavy traffic regime can be approximated by means of a Reflected Brownian Motion with negative drift (see [25]), i.e., a Brownian motion with negative drift whose velocity is symmetrically regulated as soon as it touches 0. More precisely, the Reflected Brownian Motion is the solution of the Skorokhod reflection problem on the trajectory of a drifted Brownian motion. This process is of independent importance, as it provides the stochastic representation of the solution of the heat equation with Neumann boundary condition (see [26]).
The time-change procedure described before has also been applied to the Brownian motion, obtaining the so-called Delayed Brownian Motion (see [27]). Despite the name, it is interesting to observe that such a process is not necessarily delayed: it is true that it exhibits some intervals of constancy; nevertheless, on short times, it can vary faster than the parent Brownian motion, as evidenced by its expected lifetime in fractal sets (see [28]). Once this procedure has been applied to the Brownian motion, one can ask what happens if we apply a time change also to the Reflected Brownian Motion.
In this paper, we introduce the Delayed Reflected Brownian Motion (DRBM) as the solution of the Skorokhod reflection problem for the trajectories of the Delayed Brownian Motion (DBM). In particular, we prove that the process obtained in this way coincides with the one achieved by time-changing the Reflected Brownian Motion via an inverse stable subordinator. As a consequence, we can obtain some alternative representations of the process as direct counterparts of the ones in the classical case. Among them, it is interesting to cite that we are able to exhibit a representation analogous to the one of [29] in terms of time-changed stochastic differential equations (introduced in [30]). We then use this process to characterize the heavy traffic regime of a fractional M/M/1 queue. The results that we obtain have a twofold interest. On one hand, we achieve a subdiffusive approximation of the fractional M/M/1 queue in the heavy traffic regime, which is a quite powerful tool to study the properties of such queueing models. On the other hand, if we look at the results from a different point of view, we have a continuous-time random walk approximation of the Delayed Reflected Brownian Motion. The latter can be used to provide simulation algorithms that do not require us to simulate the inverse subordinator or to compute inverse Laplace transforms (both of them being expensive computational tasks in terms of errors and time). The motivation of the paper is indeed in this twofold result: we want to provide a new tool to study the properties of the fractional M/M/1 queue introduced in [4] in the heavy traffic regime, while, at the same time, exploiting and better investigating the power of the discrete event simulation algorithm (see [31]) via continuous-time random walk approximations of a subdiffusive process.
The paper is organized as follows: in Section 2, we introduce Skorokhod's reflection problem and the Reflected Brownian Motion, while in Section 3, we investigate the Delayed Reflected Brownian Motion, with particular attention to the scaling properties of the two processes. The heavy traffic approximation for both the classical and the fractional M/M/1 queues is discussed in Section 4, together with some scaling properties of the classical M/M/1 queueing system. In this section, we also refine (and, in a certain sense, complete) the analysis of the queueing model introduced in [4], by proving the respective independence of the interarrival and the service times (recall that they are not mutually independent) and providing their distributions (with a different strategy with respect to [21] that does not require any further conditioning). Finally, in Section 5, we exhibit the simulation algorithm for the fractional M/M/1 queue and show how to use it to simulate a Delayed Reflected Brownian Motion, together with some heuristic considerations on stopping criteria.

Skorokhod's Reflection Problem
Let us denote by C(R_0^+) (where R_0^+ := [0, +∞)) the space of continuous functions f : R_0^+ → R. In the following, we also need the space of càdlàg functions D(R_0^+), i.e., the space of functions f : R_0^+ → R that are right-continuous and whose left limits exist and are finite. Let us denote, for simplicity, by D_0(R_0^+) the relevant subspace of paths on which the reflection problem will be posed. Let us also recall that any monotone function a : R_0^+ → R is locally of bounded variation and thus admits a distributional derivative da that is a Radon measure (see [32]).
Practically, we are asking whether it is possible to construct a path z starting from y by adding a term a in such a way that, whenever y touches 0, it is instantaneously symmetrically reflected (by means of the function a), so that the resulting path z is conditioned to remain non-negative. For these reasons, the function a is called the regulator, while z is called the regulated path. Let us emphasize that the problem can be extended to the n-dimensional setting for any n ∈ N, for instance, by asking that the regulated path is conditioned to remain in a certain orthant (so that the path is symmetrically reflected as soon as it touches a coordinate hyperplane) or in a polyhedral region; see [33]. Let us recall that the one-dimensional Skorokhod problem admits a unique solution.
Theorem 1. Let y ∈ D_0(R_0^+). Then, there exists a unique solution (z, a) ∈ D(R_0^+) × D(R_0^+) of the one-dimensional Skorokhod reflection problem (given in Definition 1). In particular, a(t) = sup_{0 ≤ s ≤ t} y^-(s) and z(t) = y(t) + a(t) for any t ≥ 0. With the previous theorem in mind, one can define the reflection map Ψ : D_0(R_0^+) → D(R_0^+) by setting Ψ(y) = z, where (z, a) is the unique solution of the Skorokhod problem; uniqueness guarantees that Ψ is well defined. Concerning the continuity of the reflection map on D_0(R_0^+), we first have to consider some topology on it (see [25] (Chapter 3)).
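On a discretized path, the explicit solution of the one-dimensional Skorokhod problem can be sketched in a few lines (a minimal illustration, assuming the standard formula a(t) = sup_{s ≤ t} max(−y(s), 0); function and variable names are ours):

```python
import numpy as np

def skorokhod_reflection(y):
    """One-dimensional Skorokhod reflection on a discretized path y.

    Returns the regulated path z and the regulator a, via the explicit
    solution z = y + a with a(t) = sup_{s <= t} max(-y(s), 0).
    """
    y = np.asarray(y, dtype=float)
    a = np.maximum.accumulate(np.maximum(-y, 0.0))  # running sup of y^-
    z = y + a
    return z, a

# The regulated path stays non-negative and the regulator is non-decreasing:
z, a = skorokhod_reflection([0.0, -1.0, 0.5, -2.0, 1.0])
```

Note that the regulator increases only when the regulated path sits at 0, mirroring the minimality property of the solution.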

Definition 2.
Let us consider the space D([0, T]) of the càdlàg functions f : [0, T] → R. We denote by U the uniform topology, induced on D([0, T]) by the metric d_U(x, y) = sup_{t ∈ [0,T]} |x(t) − y(t)|. Let us denote by ι the identity map on [0, T], i.e., ι(t) = t for any t ∈ [0, T]. Moreover, let Λ be the set of increasing homeomorphisms λ : [0, T] → [0, T]. We denote by J_1 the topology on D([0, T]) defined by the metric d_{J_1}(x, y) = inf_{λ ∈ Λ} max{d_U(λ, ι), d_U(x ∘ λ, y)}. We write x_n →_U x and x_n →_{J_1} x for convergence in the respective topologies. The topology J_1 can be extended to D(R_0^+) by requiring convergence in D([0, T_k]) for a sequence {T_k}_{k≥0} of continuity points of x such that T_k → +∞. This topology is metrizable and we still denote the metric by d_{J_1}.
In general, if we have a sequence of D(R_0^+)-valued random variables, we write X_n ⇒ X whenever X_n converges in distribution to a D(R_0^+)-valued random variable X with respect to the J_1 topology. Now, we can describe the continuity properties of the reflection map (see [25] (Lemma 13.5.1 and Theorem 13.5.1) for the compact domain case and extend it with [25] (Theorem 12.9.4) to the non-compact domain case).

Let us also denote, for x ∈ D(R_0^+), the negative part x^-(t) := max{−x(t), 0},
and observe the following elementary properties of the reflection map (which are particular cases of [25] (Lemma 13.5.2)).

Lemma 1.
The following properties are true: (i) For any x ∈ D_0(R_0^+) and b > 0, it holds that Ψ(bx) = bΨ(x); (ii) For any x ∈ D_0(R_0^+) and λ ∈ Λ, it holds that Ψ(x ∘ λ) = Ψ(x) ∘ λ. Proof. Let us first observe that, clearly, (bx)^- = bx^-. Then it holds, for any t ≥ 0, by Theorem 1, that Ψ(bx)(t) = bx(t) + sup_{0 ≤ s ≤ t} (bx)^-(s) = b(x(t) + sup_{0 ≤ s ≤ t} x^-(s)) = bΨ(x)(t), proving property (i).
It is clear, by definition, that an RBM is almost surely non-negative. This is achieved thanks to the regulator, which provides a symmetrization of the velocity with which the original BM tries to cross 0. This additive component symmetrizes perfectly such an effort, thus preventing the process from crossing the 0 threshold. However, in this case, the regulator is a singular function; thus, we cannot properly speak about velocity. The symmetrizing effect of the regulator is instead clearer in the processes that are linked to the queueing systems, as will be shown in Section 4. First, we want to prove a simple scaling property. To do this, let us denote by ι_b ∈ Λ the map ι_b(t) = bt. Obviously, ι_1 ≡ ι. Extending the arguments in [25] (Theorem 5.7.9), we have the following result. Proposition 1. Let W_{η,σ²} ∼ RBM(η, σ²). The following properties hold true.
Proof. Let us first prove property (i). The self-similarity of the Brownian motion implies the corresponding scaling relation for the drifted Brownian motion. Applying the reflection map on both sides, we conclude the proof by Lemma 1.
Property (ii) follows from property (i) applied to W_{sign(η)} by choosing b = η²/σ² and c = σ²/|η|. Property (iii) follows from property (i) applied to W by choosing b = σ² and c = 1. Finally, property (iv) follows from (i) applied to W by choosing b = 1 and c = σ.
From now on, we will focus on the case σ² = 1, as any other case can be deduced from this one. Let us stress that the one-dimensional distribution of W^η is well known (see [25] (Theorem 5.7.7) or [34] (Section 3.6)). Theorem 3. For W^η ∼ RBM(η), y ≥ 0 and t > 0, the distribution function of W^η(t) is given explicitly in terms of Φ, the cumulative distribution function (CDF) of a standard normal random variable. Remark 1. It is clear that P(W^η(t) > y) = 1 for any y < 0 by definition. Let us also observe that, if η = 0, then the one-dimensional distribution of W is a folded Gaussian distribution. This will be made clearer in the following subsection.
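For the reader's convenience, we transcribe the classical expression (our rendering of the standard formula, with notation as in the surrounding text; see [25] (Theorem 5.7.7)):

```latex
\mathbb{P}\big(W^{\eta}(t) \le y\big)
  = \Phi\!\left(\frac{y - \eta t}{\sqrt{t}}\right)
  - e^{2\eta y}\,\Phi\!\left(\frac{-y - \eta t}{\sqrt{t}}\right),
  \qquad y \ge 0,\ t > 0.
```

For η = 0, the second term reduces to −Φ(−y/√t) and one recovers the folded Gaussian law mentioned in Remark 1.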
As a consequence of the previous theorem, we have the following corollary (see [35] (Corollary 1.1.1)).
Corollary 1. For W^η ∼ RBM(η) with η < 0 and t > 0, an analogous expression holds, where ϕ denotes the probability density function (PDF) of a standard normal random variable.

Alternative Construction of the Reflected Brownian Motion
One can provide a different construction of the Reflected Brownian Motion by means of the absolute value of an Itô process. To do this, let us observe, as done in [36], that the regulator admits a simpler expression whenever y(0) = 0. Using this characterization, together with the fact that W^η(0) = 0, we obtain a representation of the regulator in terms of the running infimum. It is clear that W' = −W is still a Brownian motion, and we can rewrite the representation accordingly. Combining the previous argument with [29] (Theorem 1), we get the following result. Theorem 4. For W^η ∼ RBM(η), the following properties are true. (i) There exists a process W^{−η} ∼ BM(−η) such that W^η coincides in law with a functional of it. (ii) Let W ∼ BM and consider X^η the unique strong solution of the stochastic differential equation (2). Then, it holds that W^η coincides in distribution with |X^η|. Remark 2.

• In the case η = 0, the previous theorem tells us that W⁰ coincides in distribution with |W|, i.e., it is a folded Gaussian process, as observed in the previous subsection.
The previous characterization of the RBM can be fruitfully used for stochastic simulation. Indeed, Equation (2) can be discretized by means of the Euler scheme (see [38]), which is weakly convergent of order 1. On the other hand, one could also simulate W^η by means of the reflection map. However, such an algorithm is weakly convergent of order 1/2. In any case, one can provide not only algorithms with improved weak convergence order, but also exact simulation algorithms for W^η(t) at some grid points t ≥ 0 (see [39]). Other approaches to generate the sample paths of the RBM are based on the Gauss-Markov nature of the Drifted Brownian Motion (see [40,41]). There is also another approximate simulation method that makes use of a continuous-time random walk (CTRW) approximation. The latter is made clearer in Section 5.
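For instance, the reflection-map route can be sketched as follows (a minimal illustration with our own function names, applying the discrete Skorokhod reflection to an Euler path of the drifted Brownian motion; the Euler discretization of Equation (2) would proceed similarly):

```python
import numpy as np

def simulate_rbm(eta, sigma2, T, n, rng=None):
    """Approximate trajectory of RBM(eta, sigma2) on [0, T].

    Applies the discrete reflection map to an Euler path of the drifted
    Brownian motion (weakly convergent of order 1/2, as recalled above).
    """
    rng = np.random.default_rng(rng)
    dt = T / n
    increments = eta * dt + np.sqrt(sigma2 * dt) * rng.standard_normal(n)
    x = np.concatenate(([0.0], np.cumsum(increments)))  # drifted BM path
    a = np.maximum.accumulate(np.maximum(-x, 0.0))      # discrete regulator
    return x + a                                        # regulated path

path = simulate_rbm(eta=-1.0, sigma2=1.0, T=1.0, n=1000, rng=42)
```

As expected, the simulated path starts at 0 and never becomes negative.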
In the following proposition, we recall some of the main properties of the stable subordinator and its inverse (see [44]). Proposition 2. Let α ∈ (0, 1) and σ_α an α-stable subordinator with inverse L_α. Then, (i) σ_α is a strictly increasing process with a.s. pure jump sample paths; (ii) L_α is an increasing process with a.s. continuous sample paths; (iii) For any fixed t > 0, σ_α(t) is an absolutely continuous random variable with PDF g_α(s; t) satisfying the self-similarity relation g_α(s; t) = t^{−1/α} g_α(t^{−1/α} s), where g_α(•) := g_α(•; 1); this means that σ_α(t) is distributed as t^{1/α} σ_α(1); (iv) For any fixed t > 0, L_α(t) is an absolutely continuous random variable with PDF f_α(s; t); (v) For any fixed s > 0, the events {L_α(t) ≤ s} and {σ_α(s) ≥ t} coincide. Remark 3. Note that, while σ_α is a Lévy process and then a strong Markov process, L_α is not even a Markov process. As is clear from the previous proposition, explicit formulae for the one-variable function g_α lead to analogous ones for the density of the stable subordinator and its inverse. However, such explicit formulae are not so easy to handle. First of all, there are several integral representations of g_α (see [46] and references therein). Among them, let us recall the so-called Mikusiński representation.
This representation is shown to be quite useful both to determine asymptotic properties of g_α and to evaluate it numerically (see [47]). The function g_α can also be expressed in several ways as a function series (see [48]). The latter representations lead to a quite interesting expression of f_α(s; t), which can also be used to prove some regularity results (as, for instance, done in [49]). Further discussion on the various representations of g_α and f_α is given, for instance, in the Appendix of [50].
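For concreteness, the stable subordinator and its inverse can be sampled directly; a minimal sketch under our own naming, using the Chambers-Mallows-Stuck formula in the normalization with Laplace exponent s^α and approximating L_α by first passage of σ_α:

```python
import numpy as np

def stable_subordinator_increments(alpha, dt, size, rng):
    """Increments of a standard alpha-stable subordinator over steps of
    length dt (Laplace exponent s**alpha), via Chambers-Mallows-Stuck."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    E = rng.exponential(1.0, size)
    num = np.sin(alpha * (U + np.pi / 2)) / np.cos(U) ** (1 / alpha)
    rest = (np.cos(U - alpha * (U + np.pi / 2)) / E) ** ((1 - alpha) / alpha)
    return dt ** (1 / alpha) * num * rest

def inverse_subordinator(alpha, times, dt, rng=None):
    """Approximate L_alpha at the given times via first passage:
    L_alpha(t) = inf{s : sigma_alpha(s) > t}."""
    rng = np.random.default_rng(rng)
    inc = stable_subordinator_increments(alpha, dt, 100000, rng)
    sigma = np.concatenate(([0.0], np.cumsum(inc)))   # sigma_alpha on the grid
    grid = dt * np.arange(len(sigma))
    # first grid index at which sigma_alpha exceeds each evaluation time
    idx = np.searchsorted(sigma, np.asarray(times), side="right")
    return grid[np.minimum(idx, len(grid) - 1)]

L = inverse_subordinator(0.8, np.linspace(0, 1, 11), dt=1e-3, rng=1)
```

The resulting trajectory is non-decreasing and continuous up to the grid resolution, consistently with property (ii) of Proposition 2.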
In the following, we want to apply a time-change procedure to W by using the process L_α as a stochastic clock. Thus, it could be useful to recall the notion of semi-Markov process, as introduced in [10].
In particular, if we consider a strong Markov process X = {X(t), t ≥ 0} and an independent inverse α-stable subordinator L_α, we can define the time-changed process X_α := X ∘ L_α, which, suitably augmented, is a strong Markov process. This implies (actually, it is equivalent) that X_α is a semi-Markov process. The semi-Markov property of X_α, in particular, follows from the fact that (X, σ_α) is a Markov additive process in the sense of [52] (see [51] for details). Finally, let us recall that such a semi-Markov property actually tells us that the process X_α satisfies a (strong) regenerative property (in the sense of [53]) with the regenerative set M given by the closure of the range of σ_α, i.e., the process X_α satisfies the strong Markov property at any random time T such that T(ω) ∈ M(ω) for almost any ω ∈ Ω. Actually, a regenerative property is common to any semi-Markov process and could be informally stated as follows: a semi-Markov process exhibits the strong Markov property at each time in which it changes its state.
First of all, let us stress that the arguments in the previous subsection tell us that the process W_{η,σ²;α} := W_{η,σ²} ∘ L_α is well defined and semi-Markov, where W_{η,σ²} ∼ BM(η, σ²) and L_α is an inverse α-stable subordinator independent of it (see, for instance, [27]). We denote this as W_{η,σ²;α} ∼ DBM(η, σ²; α) and we call W_{η,σ²} its parent process.
Another natural definition for the DRBM could be obtained by simply composing the Reflected Brownian Motion with the inverse subordinator L_α. In the following proposition, we observe that such a definition is equivalent to the one we have given.
Proof. This proposition easily follows from property (ii) of Lemma 1, after observing that the sample paths of L_α are almost surely continuous and non-decreasing. We can also deduce a scaling property for the DRBM, in the spirit of Proposition 1.
Proof. Let W_{η,σ²;α} ∼ DRBM(η, σ²; α) and L_α be the involved inverse α-stable subordinator. By Proposition 1 with b = 1, we get cW_{η,σ²} ∼ RBM(ηc, σ²c²). By the Skorokhod representation theorem (see, for instance, [54]), we can suppose, without loss of generality, that the equality holds almost surely. This clearly implies that also W_{ηc,σ²c²} is independent of L_α. Thus, composing both sides of the equality with L_α, we conclude the proof of (i). Property (ii) follows from (i) applied to W_{η/σ,α} with c = σ. Remark 5. Thanks to the previous proposition, we can always reduce to the case σ² = 1. However, we cannot reduce to the case η = ±1, 0. This is due to the fact that the composition operator is not symmetric. Thus, from now on, let us consider σ² = 1. Concerning the one-dimensional distribution, we get a subordination principle from a simple conditioning argument. Proposition 5. For W_{η,α} ∼ DRBM(η, α), y ≥ 0 and t > 0, the distribution of W_{η,α}(t) is obtained by integrating that of the parent process against the density f_α(•; t). Precisely, if we let 1_A be the indicator function of the set A ∈ B(R), we can write the probability as a conditional expectation, where W^η is the parent process of W_{η,α} and L_α is independent of W^η. Using the tower property of conditional expectations, together with the independence of L_α and W^η and Theorem 3, we achieve the claimed formula. With the same exact argument, we can prove the following statement.
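In display form, the subordination principle of Proposition 5 can be written as (our transcription, with f_α(s; t) the density of L_α(t) as in Proposition 2):

```latex
\mathbb{P}\big(W^{\eta}_{\alpha}(t) > y\big)
  = \int_0^{+\infty} \mathbb{P}\big(W^{\eta}(s) > y\big)\, f_\alpha(s;t)\, \mathrm{d}s,
  \qquad y \ge 0,\ t > 0.
```

The integral averages the one-dimensional law of the parent process over the random operational time L_α(t).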
As for the classical case, we can ask whether there are alternative representations of the DRBM in terms of the solution of some particular SDE. This is done by generalizing Theorem 4. (ii) Let W_α ∼ DBM(η; α) and consider X_{η,α} the unique strong solution of the time-changed stochastic differential equation (in the sense of [30]) given in Equation (4). Then, it holds that W_{η,α} coincides in distribution with |X_{η,α}|. Proof. Let us first prove property (i). Let W^η be the parent process of W_{η,α}. By item (i) of Theorem 4, we know that there exists a suitable process satisfying equality (5). Thus, applying the composition with L_α on both sides of equality (5), we conclude the proof of (i). To prove the second item, let us define X^η as the unique strong solution of Equation (2), where W is the parent process of W_α. Observe that, L_α being almost surely continuous, W is in synchronization with L_α (in the sense of [30] (p. 793)) and then we can use the duality theorem of [30]. Continuity of X_{η,α} leads to Equation (4). Vice versa, if X_{η,α} is the unique strong solution of Equation (4), then the duality theorem tells us that X^η is the unique strong solution of Equation (2). Thus, we observe that X_{η,α} is the unique strong solution of Equation (4) if and only if X_{η,α} = X^η ∘ L_α, and the statement directly follows from item (ii) of Theorem 4.
One could try to use the previous characterization to provide an algorithm for the simulation of the DRBM. Such an approach requires, first of all, the simulation of a stable subordinator, which can be done by means of the Chambers-Mallows-Stuck algorithm [55], a generalization of the Box-Muller algorithm. As underlined in [43] (Section 5), one can simulate a time-changed process X_α = X ∘ L_α by setting a grid {t_n}_{n≤N}, simulating the subordinator σ_α on such a grid and then the parent process X up to t_N. Finally, an approximate trajectory of the process X_α is obtained by a step plot between the nodes {(X(t_n), σ_α(t_n))}_{n≤N}. Let us stress that such an algorithm can be generalized to any subordinator, but in that case, Laplace inversion could be required. Moreover, in the case of the DRBM, this involves the simulation of an RBM. We can bypass such a step by using a CTRW approximation of W_{η,α}, at least for η < 0. This follows from an analogous approach for the RBM, which will be discussed in the following sections.

Let us first introduce the M/M/1 queueing model. Consider a system in which, at a certain average rate λ, a job (client) enters the system to be processed by a server, which takes a certain amount of time of mean 1/µ to perform the service. If a job is currently being served and another job enters the system, the latter waits in line in a queue. Waiting jobs are then processed, as soon as the server is free, via a First In First Out (FIFO) policy. To model such a system, we assume in any case that the interarrival times (i.e., the time intervals between the arrivals of two consecutive jobs) are i.i.d. and so are the service times.
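The grid-based time-change algorithm of [43] recalled above can be sketched as follows (a minimal self-contained illustration with our own function names and discretization choices, not the authors' exact implementation):

```python
import numpy as np

def delayed_path(parent_increment, alpha, T, n, rng=None):
    """Approximate trajectory of a time-changed process X o L_alpha.

    Simulates sigma_alpha on a grid {t_k}, the parent process X up to
    t_N, and returns the nodes (sigma_alpha(t_k), X(t_k)); a step plot
    between the nodes gives the path, which is constant on the jumps of
    sigma_alpha (the intervals of constancy of L_alpha).
    """
    rng = np.random.default_rng(rng)
    dt = T / n
    # Chambers-Mallows-Stuck sampler for the standard stable increments
    U = rng.uniform(-np.pi / 2, np.pi / 2, n)
    E = rng.exponential(1.0, n)
    inc = (dt ** (1 / alpha)
           * np.sin(alpha * (U + np.pi / 2)) / np.cos(U) ** (1 / alpha)
           * (np.cos(U - alpha * (U + np.pi / 2)) / E) ** ((1 - alpha) / alpha))
    sigma = np.concatenate(([0.0], np.cumsum(inc)))   # subordinator nodes
    x = np.concatenate(([0.0], np.cumsum(parent_increment(dt, n, rng))))
    return sigma, x

# Example: delayed Brownian motion with drift -1 (sigma^2 = 1)
sigma, x = delayed_path(
    lambda dt, n, rng: -dt + np.sqrt(dt) * rng.standard_normal(n),
    alpha=0.7, T=1.0, n=500, rng=0)
```

For the DRBM one would take the parent path to be an RBM, which is exactly the step the CTRW approximation below allows us to bypass.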

Heavy Traffic Approximation of the M/M/1 Queue
In this first case, i.e., the M/M/1 queueing system, we also suppose that:
• service times and interarrival times are independent of each other;
• the jobs enter the system following a Poisson arrival process;
• service times are exponentially distributed.
Thus, we consider two sequences U = (U_k)_{k≥1} and V = (V_k)_{k≥1} of i.i.d. exponential random variables of parameters, respectively, λ > 0 and µ > 0, independent of each other, representing, respectively, the interarrival and the service times. We can define the sequence of the arrival times (i.e., the time instants in which the jobs enter the system) as the partial sums of U, where we set S^U_0 = 0. Let A = {A(t), t ≥ 0} be the arrival counting process, i.e., A(t) is the number of arrivals up to time t. In the case of an M/M/1 system, as we supposed before, A is a Poisson process of parameter λ and we denote it by A ∼ Pois(λ). We can also define the sequence S^V = (S^V_k)_{k≥0}, where S^V_k is, for any k ≥ 0, the total amount of time that is necessary to process the first k jobs, with S^V_0 = 0, and the cumulative input process C = {C(t), t ≥ 0}, i.e., the total service time needed to process all the tasks that entered the system up to time t.
Let us stress that S^V_k is not the time of completion of the k-th task, as there could have been some idle periods. By using the cumulative input process, we can introduce the net input process X = {X(t), t ≥ 0} as X(t) = C(t) − t. The net input process takes, in a certain sense, a balance between the total service time that is necessary to process all the tasks and the time that has passed since the system was initialized. By definition, X is a càdlàg process that decreases with continuity while it increases only by jumping. Moreover, X(0) = 0, and thus X ∈ D_0(R_0^+) almost surely, and we can define the workload process WL = {WL(t), t ≥ 0} via the Skorokhod reflection map as WL = Ψ(X). One can visualize how WL works: when it is positive, the process WL decreases linearly with slope −1 until it reaches 0; once 0 has been reached, it remains constant (while the net input X(t) could still be decreasing); each time X jumps, so does WL. Thus, it is clear that WL is 0 during idle periods and it is positive while a job is being served. We can state that WL, in a certain sense, measures how much residual time is needed to process all the tasks. Here, the effect of the regulator is much clearer than in the RBM case. Indeed, the process WL(t) tries to cross 0 with a fixed slope of −1. As soon as WL(t) touches 0, the regulator symmetrizes such a slope, thus adding a linear term with slope 1 to the process. The sum of the symmetric effects of the proper velocity of WL(t) (or, in other words, the effort that X(t) puts into driving WL(t) against 0) and the velocity of the regulator is the main reason why the process WL(t) stabilizes at 0 until it increases by jumping away. This example actually shows how the threshold 0 works as a mirror for the velocity of the process and how the regulator provides a symmetric additional term to such a velocity. Moreover, it is clear that, for any t ≥ 0, it holds that WL(t) ≤ C(t): they increase together by the same amount, but WL decreases
with continuity in the meantime. As C measures the total amount of necessary service time, up to time t, to process all the jobs, and WL the remaining amount of time needed to process all the waiting jobs, up to time t, their difference represents the total amount of time, up to time t, in which the server was actually working. We call such a process the cumulative busy time process B = {B(t), t ≥ 0}, where B(t) = C(t) − WL(t). The cumulative busy time process is increasing and piecewise linear. It remains constant during idle periods; otherwise, it grows with slope 1. Thus, if we consider B as the clock of the process, we are only neglecting the idle periods. Hence, if we count service times only on this clock, we obtain the exact number of processed jobs. We therefore consider a counting process N = {N(t), t ≥ 0} of the service times, and then we define the departure process D = {D(t), t ≥ 0} as D(t) = N(B(t)). Finally, the balance between the number of arrivals and the number of departures, i.e., the number of jobs that are currently in the system, gives us the queue length process Q = {Q(t), t ≥ 0}, i.e., Q(t) = A(t) − D(t). Since we are mainly interested in the process Q, we summarize the full construction with the notation Q ∼ M/M/1(λ, µ). In any case, we will keep the full notation introduced before, since the previously constructed processes will still play a role. Moreover, if we write Q^{(c)} ∼ M/M/1(λ, µ), then all the involved processes will carry the same superscript.
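The event-based construction above translates directly into a simple simulation of the queue length process; a minimal sketch with our own variable names, exploiting the memorylessness of the exponential clocks:

```python
import numpy as np

def mm1_queue_length(lam, mu, T, rng=None):
    """Sample path of the M/M/1 queue length process on [0, T].

    By memorylessness, at each event time the residual clocks can be
    resampled, so it suffices to draw the next event from the total rate
    and decide whether it is an arrival or a departure.
    """
    rng = np.random.default_rng(rng)
    t, q = 0.0, 0
    times, lengths = [0.0], [0]
    while True:
        rate = lam + (mu if q > 0 else 0.0)   # total event rate
        t += rng.exponential(1.0 / rate)
        if t > T:
            break
        # with probability lam/rate the event is an arrival
        q += 1 if rng.random() < lam / rate else -1
        times.append(t)
        lengths.append(q)
    return np.array(times), np.array(lengths)

times, lengths = mm1_queue_length(lam=0.9, mu=1.0, T=100.0, rng=7)
```

When the queue is empty, the only competing clock is the arrival one, so the path never becomes negative, in accordance with the reflection description above.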
It is clear that the traffic of the system can be expressed in terms of the constant ρ = λ/µ. If ρ < 1, Q admits a stationary distribution and, in particular, is ergodic, while ρ ≥ 1 implies that Q is not ergodic. The threshold ρ = 1 thus separates the ergodic behavior from the non-ergodic one. We are interested in what happens when ρ < 1 but 1 − ρ ≈ 0. In this situation, we say that the queueing system is in the heavy traffic regime.

The Heavy Traffic Approximation
It is well known that, under a suitable rescaling, an M/M/1 queueing system in the heavy traffic regime can be approximated (in distribution) by a Reflected Brownian Motion. The following theorem, which is essentially [25] (Section 9.6), summarizes the heavy traffic approximation result. Theorem 6. Fix λ > 0 and let (ρ_n)_{n≥1} be a sequence such that 0 < ρ_n ≤ 1 for any n ≥ 1 and ρ_n → 1 at a suitable rate. Then, there exists a process Q ∼ RBM(λ²η, λ³σ²) such that the suitably rescaled queue length processes converge to Q as n → +∞. Proof. Let (U^{(n)}_k)_{k≥1} and (V^{(n)}_k)_{k≥1} be the sequences of interarrival and service times of the n-th queueing system, and let (S^{U,(n)}_k)_{k≥1} and (S^{V,(n)}_k)_{k≥1} be the arrival and cumulative service times. Observe that we can construct the n-th sequence of service times by rescaling the ones of the first model, while the sequence of interarrival times is independent of n and thus can be taken equal to (U^{(1)}_k)_{k≥1}. Observing the behavior of the partial sums for any k ≥ 1, and that S^{U,(n)} and S^{V,(n)} are independent of each other, we know, by Donsker's theorem (see [25] (Theorem 4.3.2)), that there exist two independent Brownian motions W_i ∼ BM, i = 1, 2, describing the limits as n → +∞, where we also use the fact that µ_n → λ. Since W_1 and W_2 are almost surely continuous, we can use both [25] (Theorems 9.3.3 and 9.3.4) to conclude that the rescaled net input processes converge to W_3 ∼ BM(η, σ²), where η and σ² are defined in the statement of the theorem. Denoting W = Ψ(W_3), we know that W ∼ RBM(η, σ²), and then, by property (i) of Proposition 1, we get Q ∼ RBM(λ²η, λ³σ²), concluding the proof.
In the previous theorem, we considered a space-time scaling. However, in place of the time scaling, we could consider an appropriate scaling of the arrival and service rates. From now on, we will denote the fact that a random variable T is exponentially distributed with parameter λ > 0 by T ∼ Exp(λ). We have the following scaling property. Proposition 6. Let Q ∼ M/M/1(λ, µ) and Q^{(c)} ∼ M/M/1(cλ, cµ). Then, the corresponding processes of the two systems are equal in law under the time scaling ι_c. Proof. The interarrival and service times of the two systems are related by a scaling in distribution. This, together with the independence of the involved variables, implies the corresponding relation for the arrival and cumulative service time sequences. By definition, one can easily check that, for the counting processes (which are Poisson processes), the same scaling holds and, in particular, C ∘ ι_c is equal in law to cC^{(c)}. Moving to the net input processes, we clearly get the analogous relation. Now, let us focus on the queue length processes. To show the equality in law, we need to use a different representation of the queue length, due to [56] and widely exploited in [57]. Precisely, if we consider another sequence Ṽ = (Ṽ_k)_{k≥1} of i.i.d. random variables with Ṽ_k ∼ Exp(µ), we can define the sequence S^Ṽ = (S^Ṽ_k)_{k≥0}, with S^Ṽ_k = ∑_{j=1}^k Ṽ_j and S^Ṽ_0 = 0, and the counting process Ñ = {Ñ(t), t ≥ 0}. Arguing as before, we get the analogous scaling relation for Ñ and, consequently, for the queue length process. Finally, the property of the departure process follows by difference.
Remark 6. The fact that Ṽ_k is distributed as the service times is typical of the M/M/1 case. If one considers a multi-channel queue, (Ṽ_k)_{k≥1} represents the potential service times, i.e., the service times one gets if the servers are not shut down when they are idle. This means that, in the multi-channel case, in place of having a single sequence of service times, each one associated with a job, one has many sequences of service times, each associated with one server (as if the system admits as many queues as servers). It is clear that in the M/M/1 queue this makes no difference and (Ṽ_k)_{k≥1} is really the sequence of service times (V_k)_{k≥1}. In particular, this means that the queue length process Q can be written as the reflection of the process X̃(t) = A(t) − Ñ(t). In general, Q can be written as the reflection of X̃, which is called the modified net input process.
We can restate Theorem 6 in terms of such a scaling property.
Corollary 3. Fix λ > 0 and let (ρ_n)_{n≥1} be a sequence such that 0 < ρ_n ≤ 1 for any n ≥ 1 and ρ_n → 1 at a suitable rate. Proof. The statement follows from Theorem 6 after observing the scaling relation of Proposition 6.

The Heavy Traffic Approximation of the Fractional M/M/1 Queue

The Queueing Model

Now, we need to introduce the model that we are interested in. Precisely, we refer to the fractional M/M/1 queue, first defined in [4].

Definition 7.
Let Q ∼ M/M/1(λ, µ), α ∈ (0, 1) and let L^α be an inverse stable subordinator independent of Q. We define a fractional M/M/1 queue by means of its queue length process Q^α := Q ∘ L^α, and we denote it as Q^α ∼ M/M/1(λ, µ; α). We call Q its parent process, and we extend the definition to α = 1 by setting Q^1 to be a classical M/M/1 queue length process.
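Definition 7 can be turned directly into a simulation recipe: sample a path of the parent queue, sample an independent inverse stable subordinator, and compose. The following Python sketch (all names are ours) approximates L^α by discretizing σ^α on a grid and inverting it by first passage; the stable increments use the Kanter/Chambers–Mallows–Stuck representation, normalized so that E[exp(−s σ^α(1))] = exp(−s^α).

```python
import bisect, math, random

def stable_increment(alpha, rng):
    """One-sided alpha-stable sample sigma_alpha(1), via the Kanter /
    Chambers-Mallows-Stuck representation: E[exp(-s*S)] = exp(-s**alpha)."""
    theta = math.pi * rng.random()          # Uniform(0, pi)
    w = rng.expovariate(1.0)                # Exp(1)
    return (math.sin(alpha * theta) / math.sin(theta) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))

def inverse_subordinator(alpha, t_max, du, rng):
    """Approximate path of L^alpha on [0, t_max]: returns a function
    t -> L^alpha(t), built from a discretized stable subordinator
    (self-similar increments du**(1/alpha) * sigma_alpha(1))."""
    sigma, vals = 0.0, [0.0]
    while sigma <= t_max:
        sigma += du ** (1.0 / alpha) * stable_increment(alpha, rng)
        vals.append(sigma)
    # L(t) = du * #{i >= 1 : sigma(i*du) <= t}
    return lambda t: du * (bisect.bisect_right(vals, t) - 1)
```

Given any simulated parent path t ↦ Q(t), the composition Q(L(t)) is then an (approximate) path of Q^α; note that the algorithm of Section 5 avoids this discretization of the inverse subordinator entirely.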
Since the process is defined by means of a time-change procedure, we know that it is a semi-Markov process. On the other hand, the interpretation of Q^α as a queueing model is unclear unless we define some other quantities. In place of proceeding with a forward construction, as done for the M/M/1 queue, here we need to consider a backward construction to define the main quantities of the queueing system. Indeed, we can recognize the arrival times S^{U^α} = (S^{U^α}_k)_{k≥0} by setting S^{U^α}_0 = 0; hence, the interarrival times U^α = (U^α_k)_{k≥1} are defined by taking successive differences. Analogously, one can define the departure times DT^α = (DT^α_k)_{k≥0} by setting DT^α_0 = 0. To identify the service times, we have to distinguish between two cases. Indeed, if Q^α(DT^α_{k−1}) ≠ 0, then, as soon as the (k−1)-th job leaves the system, the k-th one is already being served; thus, its service time is DT^α_k − DT^α_{k−1}. Otherwise, as soon as the (k−1)-th job leaves the system, to identify the service time we have to wait for the k-th job to enter the system. Hence, we can define the service times V^α = (V^α_k)_{k≥1} accordingly. Once the sequences U^α and V^α are defined, we can reconstruct all the quantities involved in the queueing model, as done in Section 4.1.1, and thus we will use the same notation with the superscript α to denote the respective fractional counterparts. We are also interested in the interevent times T^α = (T^α_k)_{k≥1}. To define them, let us first define the event times S^{T^α} = (S^{T^α}_k)_{k≥0} by setting S^{T^α}_0 = 0. If S^{T^α}_{k−1} = S^{U^α}_{j−1} for some j ∈ N, then T^α_k is the minimum between the interarrival time U^α_j and the residual service time. If S^{T^α}_{k−1} = DT^α_{j−1} for some j ∈ N and Q(S^{V^α}_{j−1}) ≠ 0, then T^α_k is the minimum between the service time V^α_j and the residual interarrival time. For α = 1, the residual times RU^1_k and RV^1_k are still independent and exponentially distributed with parameters λ and µ: this is due to the loss of memory of the exponential distribution. Thus, the fact that, in the first two cases, T_k is an exponential with parameter λ + µ can be seen as a consequence of the fact that it is the minimum of two independent exponential random variables. This is, in general, not true for α ∈ (0, 1). To describe the distributions of U^α_k, V^α_k and T^α_k, we need some additional definitions (see [58–60]).
Mittag-Leffler random variables naturally arise from the evaluation of an α-stable subordinator σ^α at an independent exponential random variable.

Proof. It is clear that, for t < 0, it holds that P(σ^α(Y) > t) = 1. By using the independence of Y and σ^α, we get the claimed expression for t ≥ 0, where the last equality follows from the Laplace transform of the inverse α-stable subordinator, as obtained in [61].
In the following, we will specify some distributional properties of U^α, V^α and T^α. Let us stress that the distribution of T^α has already been exploited in [4]. On the other hand, the distribution of U^α has been obtained in [21] under the condition that there are no departures between two arrivals: the conditioning arises from the fact that the proof is carried out by using the semi-Markov property and a modification of the Kolmogorov equations for the queueing system. A similar argument holds for V^α in [21]. Here, we use some different techniques that rely on the fact that σ^α is a Lévy process. As a consequence, as will be clear in the following statement, we do not have to consider any conditioning to obtain the distributions of U^α and V^α. Moreover, we also obtain the independence, respectively, of the interarrival times and of the service times, while the mutual independence of the two sequences holds only if α = 1, as observed in [4].
1. The variables T^α_k are independent of each other;
5. The variables U^α_k are independent of each other;
7. The variables V^α_k are independent of each other;
9. If α ∈ (0, 1), the sequences U^α and V^α are not independent of each other.
Proof. To prove item (1), let us first observe the stated representation of the event times. Arguing on S^{T^α}_k, by definition, there exists a random variable Y ≥ 0 such that S^{T^α}_k = σ^α(Y−) almost surely; thus, we can rewrite the involved probabilities accordingly. If σ^α is strictly increasing, the corresponding identities hold almost surely. We conclude, by induction, that the desired equality follows, where the last equality holds due to the fact that σ^α is stochastically continuous and S^T is independent of σ^α. Moreover, by the fact that S^T is independent of σ^α and that σ^α is a Lévy process, the involved expectations factorize, where we also use the fact that T_k and T_j are independent. Now, we need to determine E[F_1(T_k; t_k)]. To do this, we use again the fact that σ^α is a Lévy process independent of the sequence T. Combining Equations (7) and (8), we get the claim. Properties (2) and (3) follow from Equation (8) and Lemma 2. Property (4) is a direct consequence of (1) and (2), together with the decomposition property of the generalized Erlang distribution. The proof of item (5) is analogous to the one of item (1), applied to S^{U^α}_k. Item (6) follows from the equality U^α_k =_d σ^α(U_k), which can be proven analogously to the case of the interevent times, and from Lemma 2. The proof of item (7) is similar to the one of item (5); however, once k, j are fixed, one has to discuss some cases separately. Item (8) follows analogously from Lemma 2. Finally, item (9) follows from the fact that the joint law does not factorize, as a consequence of the lack of the semigroup property of the Mittag-Leffler function for α < 1 (see [62]).

Remark 7.
It is clear from the previous proposition that the arrival counting process A α is a fractional Poisson process (see [15,16]) of parameter λ and order α.

The Heavy Traffic Approximation
Now, we are ready to prove the heavy traffic limit for the fractional M/M/1 queue. It will be a clear consequence of the definition of the fractional M/M/1 queue, the classical heavy traffic approximation and the continuous mapping theorem.

Theorem 7. Fix λ > 0 and let (ρ_n)_{n≥1} be a sequence such that 0 < ρ_n ≤ 1 for any n ≥ 1.

Proof. Consider the corresponding parent queues Q^(n) for any n ≥ 1. By Corollary 3, we know that there exists Q ∼ RBM(λ²η, λ³σ²) such that n^{−1/2} Q^(n) ⇒ Q. Without loss of generality, we can suppose that L^α is independent of each n^{−1/2} Q^(n) and of Q. Now, let us denote by Disc(·) the set of discontinuity points of the composition map; the claim follows by the continuous mapping theorem (see [25] (Theorem 3.4.3)).

Simulating a DRBM(η; α) with η < 0 via CTRW
In this section, we want to derive a simulation algorithm for the DRBM(η; α) with η < 0 from the heavy traffic approximation of the fractional M/M/1 queue. To do this, we first need to investigate how to simulate the fractional M/M/1 queue. Let us anticipate that we will adapt the well-known Doob–Gillespie algorithm (see [63]) to this case. This has already been done in [4] and discussed for general discrete event systems in [31] (Chapter II, Section 6); however, for completeness, let us recall the main steps of such a generalization.

Simulation of the M/M/1(λ, µ; α)
To achieve the simulation algorithm for the fractional M/M/1 queue, we need to exploit some distributional properties of the α-stable subordinator. First, let us define stable random variables.

Definition 9. We say that a real random variable S is stable if there exist parameters α ∈ (0, 1), β ∈ [−1, 1], γ > 0 and δ ∈ R such that its characteristic function satisfies Equation (10); in this case, we denote it as S ∼ S(α, β, γ, δ). We refer to [64] for the parametrization.
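Totally skewed (β = 1) stable random variables with α ∈ (0, 1) can be sampled with the Chambers–Mallows–Stuck method and, by the self-similarity of the subordinator, a Mittag-Leffler variable σ^α(Y) with Y ∼ Exp(λ) equals Y^{1/α} σ^α(1) in law. A minimal Python sketch follows (function names are ours; the sampler is normalized so that E[exp(−s σ^α(1))] = exp(−s^α)).

```python
import math, random

def stable_increment(alpha, rng):
    """One-sided alpha-stable sample S = sigma_alpha(1), via the Kanter /
    Chambers-Mallows-Stuck representation; normalized so that
    E[exp(-s*S)] = exp(-s**alpha), for alpha in (0, 1)."""
    theta = math.pi * rng.random()              # Uniform(0, pi)
    w = rng.expovariate(1.0)                    # Exp(1)
    return (math.sin(alpha * theta) / math.sin(theta) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))

def mittag_leffler(alpha, rate, rng):
    """Sample sigma_alpha(Y) with Y ~ Exp(rate): by self-similarity,
    sigma_alpha(Y) has the same law as Y**(1/alpha) * sigma_alpha(1);
    its survival function is the Mittag-Leffler function E_alpha(-rate * t**alpha)."""
    return rng.expovariate(rate) ** (1.0 / alpha) * stable_increment(alpha, rng)
```

A quick sanity check is the empirical Laplace transform: the sample mean of exp(−s S) over many draws of `stable_increment(alpha, rng)` should be close to exp(−s^α).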
Since we know how to simulate an exponential random variable by means of the inversion method (see [31] (Chapter II, Example 2.3)) and a stable random variable by means of the Chambers–Mallows–Stuck algorithm (see [55]), we are able to simulate Mittag-Leffler random variables. Moreover, let us stress that one can easily simulate discrete random variables (see [31] (Chapter II, Example 2.1)). We only need to understand how to combine all these simulation procedures to obtain the fractional M/M/1 queue for fixed λ, µ > 0 and α ∈ (0, 1]. To do this, we define the functions r_{λ,µ}, p_{λ,µ} : N_0 → R (where N_0 is the set of nonnegative integers), together with the associated chain X^{λ,µ}. It is clear that X^{λ,µ} is the jump chain of the classical M/M/1 queue. Moreover, let us stress that the time-change procedure does not change the jump chain of a continuous-time random walk; thus, X^{λ,µ} is also the jump chain of the fractional M/M/1 queue. Concerning the interevent times, let us define the sequence T^{λ,µ,α} = (T^{λ,µ,α}_k)_{k≥1} of i.i.d. random variables with distribution dictated by the corresponding relation. In general, we can define the sequence S^{λ,µ,α} by setting S^{λ,µ,α}_0 = 0, together with the counting process N^{λ,µ,α} = {N^{λ,µ,α}(t), t ≥ 0}. Since, by definition, the process (X^{λ,µ}, T^{λ,µ,α}) is a Markov-additive process, Q^{λ,µ,α} is a semi-Markov process (see [51]). For α = 1, it is well known that Q^{λ,µ,1} ∼ M/M/1(λ, µ), and the Markov-additive decomposition that we exploited before is the main tool behind the Doob–Gillespie algorithm [63]. Hence, it is clear that, to generalize the previous algorithm, we only need to show that Q^{λ,µ,α} ∼ M/M/1(λ, µ; α).

Proposition 8.
Let Q^{λ,µ,α} and Q^{λ,µ,1} be two processes as defined before, and let L^α be an inverse stable subordinator independent of both X^{λ,µ} and T^{λ,µ,1}. Then, Q^{λ,µ,α} =_d Q^{λ,µ,1} ∘ L^α.

Proof. Observe that σ^α(T^{λ,µ,1}_k) has the prescribed distribution for any k ≥ 1 and that (σ^α(T^{λ,µ,1}_k))_{k≥1} is a sequence of independent random variables. Thus, we conclude that T^{λ,µ,α} =_d (σ^α(T^{λ,µ,1}_k))_{k≥1}. By Skorokhod's representation theorem (see, for instance, [54]), we can suppose, without loss of generality, that the equality holds almost surely. Hence, we obtain that S^{λ,µ,α}_k ≤ t if and only if S^{λ,µ,1}_k ≤ L^α(t), for any k ≥ 1 and any t ≥ 0; that is to say, N^{λ,µ,α} = N^{λ,µ,1} ∘ L^α. This concludes the proof.

Now, we are ready to express the simulation algorithm. It is clear that we only need to simulate the following arrays:

• The state array X^{λ,µ}, which contains the states of the queue length process;
• The calendar array S^{λ,µ,α}, which contains the times at which an event happens.
Indeed, in such a case, one can recover Q^{λ,µ,α}(t) by finding k such that S^{λ,µ,α}_k ≤ t < S^{λ,µ,α}_{k+1} and then setting Q^{λ,µ,α}(t) = X^{λ,µ}_k, as done in Algorithm 1. Let us stress that all the algorithms will be given as procedures, so that we can recall each algorithm inside other ones.
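Algorithms 1–3 can be condensed into a short Python sketch (names are ours). Following Proposition 8, the Mittag-Leffler interevent times are sampled as σ^α evaluated at the exponential interevent times of the parent chain, via self-similarity.

```python
import bisect, math, random

def stable_increment(alpha, rng):
    """Kanter / Chambers-Mallows-Stuck sampler: S > 0 with
    E[exp(-s*S)] = exp(-s**alpha), alpha in (0, 1)."""
    theta = math.pi * rng.random()
    w = rng.expovariate(1.0)
    return (math.sin(alpha * theta) / math.sin(theta) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))

def simulate_fractional_mm1(lam, mu, alpha, t_stop, rng):
    """State array X and calendar array S of an M/M/1(lam, mu; alpha) queue,
    simulated until the first event time >= t_stop (cf. Algorithms 2 and 3)."""
    X, S = [0], [0.0]
    while S[-1] < t_stop:
        q = X[-1]
        rate = lam + mu if q > 0 else lam        # r_{lam,mu}(q)
        tau = rng.expovariate(rate)              # parent interevent time
        if alpha < 1.0:                          # Mittag-Leffler time change
            tau = tau ** (1.0 / alpha) * stable_increment(alpha, rng)
        S.append(S[-1] + tau)
        # jump chain step: arrival with probability p_{lam,mu}(q)
        X.append(q + 1 if (q == 0 or rng.random() < lam / rate) else q - 1)
    return X, S

def queue_at(X, S, t):
    """Algorithm 1: find k with S[k] <= t < S[k+1] and return X[k]."""
    return X[bisect.bisect_right(S, t) - 1]
```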
Algorithm 1 Generation of the queue length process from the state and calendar arrays
procedure GENERATEQUEUE
    Input: X^{λ,µ}, S^{λ,µ,α}, t
    Output: Q^{λ,µ,α}(t)
    N ← length(S^{λ,µ,α})                ▷ Recall that the arrays start with 0
    find k < N such that S^{λ,µ,α}_k ≤ t < S^{λ,µ,α}_{k+1}
    return X^{λ,µ}_k
end procedure
Once we know how to generate the queue from the arrays, let us state Algorithm 2, which simulates them up to the N-th event.

Algorithm 2 Simulation of the state and calendar arrays up to the N-th event
procedure SIMULATEARRAYS
    Input: λ, µ, α, N
    Output: X^{λ,µ}, S^{λ,µ,α}
    X^{λ,µ}_0 ← 0, S^{λ,µ,α}_0 ← 0
    for k = 1, . . . , N do
        sample T^{λ,µ,α}_k given X^{λ,µ}_{k−1}
        S^{λ,µ,α}_k ← S^{λ,µ,α}_{k−1} + T^{λ,µ,α}_k
        sample X^{λ,µ}_k from X^{λ,µ}_{k−1} by means of p_{λ,µ}
    end for
end procedure
Algorithm 2 can be easily adapted to other stopping conditions. In the following, it will be useful to express how to simulate an M/M/1(λ, µ; α) queue up to a time t_stop > 0 (precisely, up to the first event time S^{λ,µ,α}_k ≥ t_stop). This is done in Algorithm 3, and some simulation results are shown in Figure 1.
Algorithm 4 Simulation of a DRBM(η; α) for η < 0 up to time t_stop > 0 at iteration n_it

Algorithm 5 Generation of the DRBM process from the state and calendar arrays
procedure GENERATEDRBM
    Input: X^{λ,µ}, S^{λ,µ,α}, n_it, t
    Output: W(t)
    find k such that S^{λ,µ,α}_k ≤ t < S^{λ,µ,α}_{k+1}    ▷ Recall that the arrays start with 0
    return X^{λ,µ}_k / √n_it
end procedure
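Theorem 7 suggests the following compact implementation of Algorithms 4 and 5. This is a sketch under our own assumptions: we take λ = 1 and choose ρ_n = 1 + η/(λ√n) (which requires η > −λ√n), so that the drift of the scaled queue tends to η; the paper's exact choice of (ρ_n) is not reproduced here.

```python
import math, random

def stable_increment(alpha, rng):
    """Kanter / Chambers-Mallows-Stuck sampler: S > 0 with
    E[exp(-s*S)] = exp(-s**alpha), alpha in (0, 1)."""
    theta = math.pi * rng.random()
    w = rng.expovariate(1.0)
    return (math.sin(alpha * theta) / math.sin(theta) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))

def simulate_drbm(eta, alpha, t_stop, n, rng, lam=1.0):
    """CTRW approximation of a DRBM(eta; alpha) path, eta < 0, alpha in (0, 1):
    simulate the fractional M/M/1 queue with rates (n*lam, n*lam/rho_n) and
    rescale the state by 1/sqrt(n).  The choice rho_n = 1 + eta/(lam*sqrt(n))
    is our assumption, made so that the scaled drift tends to eta."""
    rho = 1.0 + eta / (lam * math.sqrt(n))
    arr, srv = n * lam, n * lam / rho
    W, S, q = [0.0], [0.0], 0
    while S[-1] < t_stop:
        rate = arr + srv if q > 0 else arr
        tau = rng.expovariate(rate) ** (1.0 / alpha) * stable_increment(alpha, rng)
        S.append(S[-1] + tau)
        q += 1 if (q == 0 or rng.random() < arr / rate) else -1
        W.append(q / math.sqrt(n))
    return W, S
```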

Numerical Results
In this section, we want to show some simulation results. However, how can one visualize the convergence in distribution? By the Portmanteau Theorem (see [54] (Chapter 2, Section 3)) and Theorem 7, we know that, for any bounded functional F : D([0, t_stop]) → R that is continuous in the J1 topology, the corresponding expectations converge, and thus we could study whether the error e^(n) converges to 0. However, to do this, we should know E[F(W^η_α)] a priori, which is not always possible. Moreover, let us observe that even the evaluation map x ∈ D([0, t_stop]) ↦ x(t) ∈ R is not continuous everywhere in the J1 topology; hence, we need to enlarge the set of possible functionals F. To do this, we need the following weaker version of the Portmanteau Theorem.

Proposition 9. Let X_n, X be random variables with values in a metric space M such that X_n ⇒ X. Let also F : M → R be a function such that the following properties hold true:
(i) P(X ∈ Disc(F)) = 0, where Disc(F) is the set of discontinuity points of F;
(ii) sup_n E[|F(X_n)|^{1+ε}] < ∞ for some ε > 0.
Then, E[F(X_n)] → E[F(X)].

Proof. By property (i) and the continuous mapping theorem (see [25] (Theorem 3.4.3)), we know that F(X_n) ⇒ F(X). Moreover, property (ii) implies that the F(X_n) are uniformly integrable. We conclude the proof by [54] (Chapter 2, Theorem 3.5).

Now, let us select two particular functionals. The first one is the evaluation functional F_1 : x ∈ D([0, t_stop]) ↦ x(t), for fixed t. To work with such a functional, we need to prove the following result.
Proposition 10. The functional F_1 satisfies the hypotheses of Proposition 9 with respect to the sequence (n^{−1/2} Q^{(n)}_α)_{n≥1}.

Proof. Item (i) follows from [25] (Proposition 13.2.1). Thus, we only need to prove item (ii). To do this, let us select ε = 1 and remark that we need to estimate the second moment of n^{−1/2} Q^{(n)}_α(t). By a simple conditioning argument, this can be expressed through the parent queue, where L^α is an inverse α-stable subordinator independent of Q^{(n)}.
If ρ_n < 1 for any n ∈ N, we can combine Equations (3.24) and (3.25) in [7] to get a uniform bound on the conditional second moment. Hence, we finally get the claim, concluding the proof.
The second functional that we consider is the integral functional F_2 : x ∈ D([0, t_stop]) ↦ ∫_0^{t_stop} x(s) ds. This time, we will directly prove the limit result.
Proposition 11. It holds that E[F_2(n^{−1/2} Q^{(n)}_α)] → E[F_2(W^η_α)]. Indeed, arguing exactly as in the proof of Proposition 10 and using Equation (3.24) in [7], we obtain a dominating bound, and thus we can use the dominated convergence theorem to conclude the proof.
In particular, E[F_i(W^η_α)], i = 1, 2, could be estimated numerically by means of Corollary 2 and Mikusinski's representation of g_α. However, one still needs to estimate E[F_i(Q^{(n)}_α/√n)], which is not an easy task, since the distribution (and then the expected value) of the fractional M/M/1 queue admits some series representations (see [4,21]) that are not easy to evaluate numerically. To overcome this problem, we adopt a Monte Carlo method (with 5000 samples) to determine E[F_i(Q^{(n)}_α/√n)]. However, this means that we have to take into account the fact that the evaluated value randomly oscillates around the exact one. For this reason, one cannot use the Cauchy-type error e^(n) to estimate the actual approximation error. In any case, a further investigation on the link between e^(n)_{F,Cauchy} and e^(n)_{Cauchy} will be carried out in future works. Let us also underline that, clearly, the oscillation of the Monte Carlo evaluation depends on the variance of F_i(Q^{(n)}_α/√n) and, thus, for fixed n, depends on α. This is made clear in the boxplots in Figure 2.
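For piecewise-constant sample paths, both functionals can be evaluated exactly; the Monte Carlo estimate of E[F_i(Q^{(n)}_α/√n)] is then the sample mean over independent simulated paths. A minimal sketch (our code) follows, where a path takes the value W[k] on [S[k], S[k+1]) and F_1 is evaluated at t_stop.

```python
def path_functionals(W, S, t_stop):
    """F1(x) = x(t_stop) and F2(x) = integral of x over [0, t_stop] for a
    piecewise-constant cadlag path: x(s) = W[k] on [S[k], S[k+1])."""
    f1, f2 = 0.0, 0.0
    for k in range(len(W)):
        left = S[k]
        right = S[k + 1] if k + 1 < len(S) else float("inf")
        if left >= t_stop:
            break
        f1 = W[k]                              # last value attained before t_stop
        f2 += W[k] * (min(right, t_stop) - left)
    return f1, f2
```

Averaging `path_functionals` over, e.g., 5000 simulated paths gives Monte Carlo estimates of E[F_1] and E[F_2].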

Conclusions
To summarize the results, we introduced the Delayed Reflected Brownian Motion by means of a suitable time change of the Reflected Brownian Motion (or, equivalently, by solving Skorokhod's reflection problem on the paths of the Delayed Brownian Motion) and recalled the main properties of fractional M/M/1 queues as defined in [4]. These two processes are then linked via the heavy traffic approximation result exploited in Theorem 7. As we also underlined in the Introduction, such a theorem can be read in two ways, depending on the process that we want to focus on. If we are interested in the properties of the fractional M/M/1 queues, the theorem provides a subdiffusive approximation of them, in terms of the Delayed Reflected Brownian Motion, as the traffic intensity ρ is near 1. This is quite useful if one needs to investigate (approximately) some distributional property of a fractional M/M/1 queue in the transient state. Indeed, different formulations of the one-dimensional distribution of the queue length process in the fractional case are given in [4,21], but they involve nested series of functions, which can be difficult to evaluate even numerically. However, if ρ is near 1, one can approximate the one-dimensional distribution of the queue length process with that of the Delayed Reflected Brownian Motion given in Proposition 5, which can be numerically evaluated thanks to Mikusinski's representation of the density of the stable subordinator.
Vice versa, if we are interested in the properties of the Delayed Reflected Brownian Motion, Theorem 7 provides a continuous-time random walk approximation of such a process. This is a quite useful property when combined with the discrete event simulation procedure (which is a generalization of the well-known Gillespie algorithm; see [31,63]). Indeed, the simulation of an inverse α-stable subordinator is usually done by means of Laplace inversion, if we start from the Laplace transform of f_α, or by inverting the subordinator, which is instead simulated via the Chambers–Mallows–Stuck method [55]. The algorithm presented in this paper does not rely at all on the simulation of the inverse subordinator, thanks to the fact that we are able to simulate Mittag-Leffler random variables by using the Chambers–Mallows–Stuck method.
This second point of view is investigated with more attention in Section 5. Precisely, Theorem 7 gives us a limit result, but does not tell us how large we should choose n to have a suitably good approximation. Moreover, as we observed before, estimating the distributional properties of a fractional queue is not an easy task, due to the quite complicated form of the state probabilities. However, some distributional quantities can be provided by Monte Carlo estimates, whose random nature invalidates the idea of using a form of error based on the Cauchy property of converging sequences. This problem can be overcome by using the Cesàro convergence of the sequence, as taking the average smooths, in some sense, the oscillating simulated data, as one can observe by comparing Figures 3 and 4. Thus, one can use the sequence of averages in place of the original one to provide a stopping criterion, as done in Algorithm 7. Such an approach is supported by the sequence of errors given in Figure 6. Clearly, one could use other smoothing procedures on the data to overcome the oscillations caused by the Monte Carlo estimates. In future works, we aim to discuss the properties of time-changed M/M/1 queues and Reflected Brownian Motions with more general inverse subordinators, trying to link them via a heavy traffic approximation result. The simulation of such types of queueing models will require some more sophisticated methods, due to the lack, in general, of both the self-similarity property and a special algorithm for the subordinator.
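The Cesàro-based stopping rule just described can be sketched as follows (our code). Note that the difference of consecutive Cesàro means shrinks like 1/N even for non-convergent bounded sequences, so the tolerance should be chosen with the sample size in mind.

```python
def cesaro_stop(seq, eps_abs):
    """First index N >= 2 at which the Cesaro means S_N of the (noisy)
    sequence satisfy |S_N - S_{N-1}| < eps_abs; returns (N, S_N).
    If the criterion never triggers, returns the last index and mean."""
    partial, prev = 0.0, None
    for n, x in enumerate(seq, start=1):
        partial += x
        mean = partial / n
        if prev is not None and abs(mean - prev) < eps_abs:
            return n, mean
        prev = mean
    return len(seq), prev
```

In practice, `seq` would be the stream of Monte Carlo estimates of E[F_i(Q^{(n)}_α/√n)] indexed by n; a relative-error variant replaces `eps_abs` by `eps_rel * abs(mean)`.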
the unique solution of the Skorokhod problem in Definition 1 for the sample paths of W^{η,σ²}_α.

Figure 2.

The plots of E[F_i(Q^{(n)}_α/√n)] against n are given in Figure 3. The convergence of such a sequence is not so clear, due to the oscillations caused by the Monte Carlo method. Another visualization of the convergence of E[F_i(Q^{(n)}_α/√n)] is obtained by considering its Cesàro means, as in Figure 4.

Figure 4. As S^{(i)}_N is smoother than the original sequence, one could still use the error e^N_{F_i,Cauchy} = |S^{(i)}_{N+1} − S^{(i)}_N| to impose a stopping criterion of the form e^N_{F_i,Cauchy} < ε_abs for the absolute error, or e^N_{F_i,Cauchy} < ε_rel S^{(i)}_{N+1} for the relative error.

Figure 6. Sequence of the errors e^N_{F_2,Cauchy} used to produce the plots of Figure 5.