Lp-solution to the Random Linear Delay Differential Equation with a Stochastic Forcing Term

This paper aims at extending a previous contribution dealing with the random autonomous-homogeneous linear differential equation with discrete delay τ > 0, by adding a random forcing term f(t) that varies with time: x′(t) = a x(t) + b x(t − τ) + f(t), t ≥ 0, with initial condition x(t) = g(t), −τ ≤ t ≤ 0. The coefficients a and b are assumed to be random variables, while the forcing term f(t) and the initial condition g(t) are stochastic processes on their respective time domains. The equation is regarded in the Lebesgue space L^p of random variables with finite p-th moment. The deterministic solution constructed with the method of steps and the method of variation of constants, which involves the delayed exponential function, is proved to be an L^p-solution, under certain assumptions on the random data. This proof requires the extension of the deterministic Leibniz integral rule for differentiation to the random scenario. Finally, we also prove that, when the delay τ tends to 0, the random delay equation tends in L^p to a random equation with no delay. Numerical experiments illustrate how our methodology permits determining the main statistics of the solution process, thereby allowing for uncertainty quantification.


Introduction
In this paper, we are concerned with random delay differential equations, defined as classical delay differential equations whose inputs (coefficients, forcing term, initial condition, etc.) are considered as random variables or regular stochastic processes on an underlying complete probability space (Ω, F, P), which may take a wide variety of probability distributions, such as Binomial, Poisson, Gamma, Gaussian, etc.
Equations of this kind should not be confused with stochastic differential equations of Itô type, which are forced by an irregular error term called white noise (the formal derivative of Brownian motion). In contrast to random differential equations, the solutions to stochastic differential equations exhibit nondifferentiable sample-paths. See [1] (pp. 96–98) for a detailed explanation of the difference between random and stochastic differential equations. See [1][2][3][4][5][6], for instance, for applications of random differential equations in engineering, physics, biology, etc. Thus, random differential equations require their own treatment and study: they model smooth random phenomena, with any type of input probability distributions.
From a theoretical viewpoint, random differential equations may be studied in two senses: the sample-path sense or the L^p-sense. The former considers the trajectories of the stochastic processes involved, so that the realizations of the random system correspond to deterministic versions of the problem. The latter works with the topology of the Lebesgue space (L^p, ‖·‖_p) of random variables with finite absolute p-th moment, where the norm ‖·‖_p is defined as ‖U‖_p = (E[|U|^p])^{1/p} for 1 ≤ p < ∞ (E denotes the expectation operator), and ‖U‖_∞ = inf{C ≥ 0 : |U| ≤ C almost surely} (essential supremum), U : Ω → R being any random variable. The Lebesgue space (L^p, ‖·‖_p) has the structure of a Banach space. Continuity, differentiability, Riemann integrability, etc., can be considered in the aforementioned space L^p, which gives rise to the random L^p-calculus.
In order to fix concepts, given a stochastic process x(t) ≡ x(t, ω) on I × Ω, where I ⊆ R is an interval (notice that, as usual, the ω-sample notation might be hidden), we say that x is L^p-continuous at t₀ ∈ I if lim_{h→0} ‖x(t₀ + h) − x(t₀)‖_p = 0. We say that x is L^p-differentiable at t₀ ∈ I if lim_{h→0} ‖(x(t₀ + h) − x(t₀))/h − x′(t₀)‖_p = 0, for a certain random variable x′(t₀) (called the derivative of x at t₀). Finally, if I = [a, b], we say that x is L^p-Riemann integrable on [a, b] if there exists a sequence of partitions {P_n}_{n=1}^∞ with mesh tending to 0, P_n = {a = t_0^n < t_1^n < . . . < t_{r_n}^n = b}, such that, for any choice of points s_i^n ∈ [t_{i−1}^n, t_i^n], i = 1, . . . , r_n, the limit lim_{n→∞} Σ_{i=1}^{r_n} x(s_i^n)(t_i^n − t_{i−1}^n) exists in L^p. In this case, these Riemann sums have the same limit, which is a random variable and is denoted by ∫_a^b x(t) dt.
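As an informal numerical illustration of these definitions (not part of the theory), the L^p norm and L^p-continuity can be approximated by Monte Carlo sampling. The following Python sketch is our own; the process x(t, ω) = sin(a(ω) t) with a uniform on (0, 1) is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(samples, p):
    # Monte Carlo estimate of ||U||_p = (E[|U|^p])^(1/p)
    return np.mean(np.abs(samples) ** p) ** (1.0 / p)

# ||U||_2 of a standard normal random variable is its standard deviation, 1
u = rng.standard_normal(200_000)
print(lp_norm(u, 2))  # close to 1

# L^2-continuity of x(t, w) = sin(a(w) t) at t = 1:
# ||x(t + h) - x(t)||_2 shrinks as h -> 0
a = rng.uniform(0.0, 1.0, 200_000)
t = 1.0
for h in (0.1, 0.01, 0.001):
    print(h, lp_norm(np.sin(a * (t + h)) - np.sin(a * t), 2))
```

The printed norms decrease roughly linearly in h, as expected from the L^p-differentiability of this particular process.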
This L p -approach has been widely used in the context of random differential equations with no delay, especially the case p = 2 which corresponds to the Hilbert space L 2 and yields the so-called mean square calculus; see [5,[7][8][9][10][11][12][13][14][15]. Only recently, a theoretical probabilistic analysis of random differential equations with discrete constant delay has been addressed in [16][17][18]. In [16], general random delay differential equations in L p were analyzed, with the goal of extending some of the existing results on random differential equations with no delay from the celebrated book [5]. In [17], we started our study on random delay differential equations with the basic autonomous-homogeneous linear equation, by proving the existence and uniqueness of L p -solution under certain conditions. In [18], the authors studied the same autonomous-homogeneous random linear differential equation with discrete delay as [17], but considered the solution in the sample-path sense and computed its probability density function via the random variable transformation technique, for certain forms of the initial condition process. Other recent contributions for random delay differential equations, but focusing on numerical methods instead, are [19][20][21].
There is still a lack of theoretical analysis for important random delay differential equations. Motivated by this issue, the aim of this contribution is to advance further in the theoretical analysis of relevant random differential equations with discrete delay. In particular, in this paper we extend the recent study performed in [17] for the basic linear equation by adding a stochastic forcing term:

x′(t) = a x(t) + b x(t − τ) + f(t), t ≥ 0; x(t) = g(t), −τ ≤ t ≤ 0. (1)

The delay τ > 0 is constant. The coefficients a and b are random variables. The forcing term f(t) and the initial condition g(t) are stochastic processes on [0, ∞) and [−τ, 0], respectively, which depend on the outcome ω ∈ Ω of a random experiment, which might sometimes be omitted in the notation.
The term x(t) represents the differentiable solution stochastic process in a certain probabilistic sense. Formally, according to the deterministic theory [22], the solution process is expressed by formula (2), which involves the delayed exponential function ([22] Definition 1): for c, t ∈ R,

e_τ^{c,t} = 0 if t < −τ; e_τ^{c,t} = 1 if −τ ≤ t < 0; e_τ^{c,t} = Σ_{k=0}^{n} c^k (t − (k − 1)τ)^k / k! if (n − 1)τ ≤ t < nτ,

where n = ⌊t/τ⌋ + 1 (here ⌊·⌋ denotes the integer part defined by the so-called floor function). This formal solution is obtained with the method of steps and the method of variation of constants. The primary objective of this paper is to set probabilistic conditions under which x(t) is an L^p-solution to (1). We decompose the original problem (1) into its homogeneous part, (3), with forcing term 0 and initial condition g, and its complete part, (4), with forcing term f and initial condition 0, so that x(t) = y(t) + z(t), where y and z solve (3) and (4), respectively. System (3) does not possess a stochastic forcing term, and it was deeply studied in the recent contribution [17]. Under certain assumptions, its L^p-solution is expressed by (5), as a generalization of the deterministic solution to (3) obtained via the method of steps ([22] Theorem 1). Problem (4) is new and requires an analysis in the L^p-sense, in order to solve the initial problem (1). Its formal solution is given by

z(t) = ∫_0^t e^{a(t−s)} e_τ^{b₁,t−τ−s} f(s) ds, with b₁ = e^{−aτ} b, (6)

see ([22] Theorem 2). In order to differentiate (6) in the L^p-sense, one requires the extension of the deterministic Leibniz integral rule for differentiation to the random scenario. This extension is an important piece of this paper.
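The delayed exponential admits a direct implementation. The following Python sketch is ours, for illustration only; it evaluates the piecewise series and checks numerically that it satisfies the pure-delay equation x′(t) = c x(t − τ) with unit initial history, at an interior point.

```python
import math

def delayed_exp(c, t, tau):
    """Delayed exponential e_tau^{c,t}: 0 for t < -tau, 1 for -tau <= t < 0,
    and sum_{k=0}^{n} c^k (t - (k-1) tau)^k / k! on [(n-1) tau, n tau),
    with n = floor(t / tau) + 1."""
    if t < -tau:
        return 0.0
    if t < 0:
        return 1.0
    n = math.floor(t / tau) + 1
    return sum(c ** k * (t - (k - 1) * tau) ** k / math.factorial(k)
               for k in range(n + 1))

# Check the defining equation x'(t) = c x(t - tau) at an interior point
# (central difference; the piece on [0.3, 0.6) is a quadratic polynomial)
c, tau, t, h = 0.5, 0.3, 0.5, 1e-6
num_deriv = (delayed_exp(c, t + h, tau) - delayed_exp(c, t - h, tau)) / (2 * h)
print(num_deriv, c * delayed_exp(c, t - tau, tau))  # both should agree
```

On [0, τ) the function reduces to 1 + c t, recovering the first step of the method of steps.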
In Section 2, we show preliminary results on L^p-calculus that are used throughout the exposition; they correspond to the preliminary results from [17] together with the new random Leibniz rule for L^p-Riemann integration. Auxiliary but novel results needed to demonstrate the random Leibniz integral rule are a Fubini theorem and a chain rule theorem. In Section 3, we prove in detail that x(t) defined by (2) is the unique L^p-solution to (1), by analyzing problem (4). We also find closed-form expressions for some statistics (expectation and variance) of x(t) related to its moments. Section 4 deals with the L^p-convergence of x(t) as the delay τ tends to 0. We then show a numerical example that illustrates the theoretical findings. Finally, Section 5 draws the main conclusions.
In order to complete a fair overview of the existing literature, it must be pointed out that, apart from random delay differential equations (which are the context of this paper), other complementary approaches are stochastic delay differential equations and fuzzy delay differential equations. Stochastic delay differential equations are those in which uncertainty appears due to stochastic processes with irregular sample-paths: the Brownian motion or Wiener process, the Poisson process, etc. A specific tool is required to tackle equations of this type, called Itô calculus [23]. Studies on stochastic delay differential equations can be read in [24][25][26][27][28], for example. On the other hand, in fuzzy delay differential equations, uncertainty is driven by fuzzy processes; see [29] for instance. In any of these approaches, the delay might even be considered random; see [30,31].

Results on L p -calculus
In this section, we state the preliminary results on L^p-calculus needed for the following sections. Proposition 1 is the chain rule theorem in L^p-calculus, which was first proved in [8] (Theorem 3.19) in the setting of mean square calculus (p = 2). Both Lemma 1 and Lemma 2 provide conditions under which the product of three stochastic processes is L^p-continuous or L^p-differentiable. Proposition 2 is a result concerning L^p-differentiation under the L^p-Riemann integral sign, when the interval of integration is fixed. These four results have already been used and stated in the recent contribution [17], and will be required throughout our forthcoming exposition.
For the sake of completeness, we demonstrate Proposition 2 with an alternative proof to [17], based on Fubini's theorem for L p -Riemann integration. In the random framework, Fubini's theorem has not been tackled yet in the recent literature. It states that, if a stochastic process depending on two variables is L p -continuous, then the two iterated L p -Riemann integrals can be interchanged.
We present a new result, Proposition 3, in which we give conditions under which an L^p-Riemann integral whose interval of integration depends on t can be L^p-differentiated. This proposition supposes the extension of the so-called Leibniz rule for integration to the random scenario. The proof relies on a new chain rule theorem.

Proposition 1 (Chain rule theorem ([8] Theorem 3.19)). Let f be a deterministic real function and let X(t) be a stochastic process such that: (i) X is L^{2p}-differentiable at t; (ii) X is path continuous on [a, b]; (iii) there exist r > 2p and δ > 0 such that sup_{s∈[−δ,δ]} E[|f(X(t + s))|^r] < ∞. Then f ∘ X is L^p-differentiable at t and (f ∘ X)′(t) = f′(X(t)) X′(t).
Lemma 1. Let Y₁(t, s), Y₂(t, s) and Y₃(t, s) be three stochastic processes and fix 1 ≤ p < ∞. If Y₁ and Y₂ are L^q-continuous for all 1 ≤ q < ∞, and Y₃ is L^{p+η}-continuous for certain η > 0, then the product process Y₁Y₂Y₃ is L^p-continuous. On the other hand, if Y₁ and Y₂ are L^∞-continuous, and Y₃ is L^p-continuous, then the product process Y₁Y₂Y₃ is L^p-continuous as well.

Proof of Lemma 3 (Fubini's theorem). Consider a sequence of partitions of [a, b] with mesh tending to 0. Write P_n = {a = t_0^n < t_1^n < · · · < t_n^n = b}, and let r_n ... where the last inequality is due to a property of L^p-integration ([5] p. 102). As H(t, s) and F_n(s) are ..., respectively (uniformly on n ≥ 1), the dominated convergence theorem allows concluding that lim_{n→∞} ...

Proof of Proposition 2. We present an alternative and simpler proof to ([17] Proposition 2.2), based on Fubini's theorem (Lemma 3). Since ∂F/∂t is L^p-continuous, by Barrow's rule ([5] p. 104) we can write ... = ∫ (∂F/∂t)(t, s) ds in L^p, as a consequence of the fundamental theorem for L^p-calculus; see ([5] p. 103).

Lemma 4 (Version of the chain rule theorem). Let G(t, s) be a stochastic process on
Proof. For h ≠ 0, by the triangle inequality, ...
and that u is differentiable on [a, b], so the following existing version of the chain rule theorem applies: ([33] Theorem 2.1).

Remark 1.
Although not needed in the subsequent development, Lemma 4 in fact gives a general multidimensional chain rule theorem for L^p-calculus, for the composition of a stochastic process G(t, s) and two deterministic functions (v(r), u(r)). This is the generalization of ([33] Theorem 2.1) to several variables. Indeed, let G(t, s) be a stochastic process on an open set Λ ⊆ R², with L^p-partial derivatives ∂G/∂t(t, s) and ∂G/∂s(t, s). Then G(v(r), u(r)) can be L^p-differentiated at each r, by our Lemma 4 (the integral is considered as an L^p-Riemann integral).

Remark 2 (Proposition 3 against another proof of the random Leibniz rule). In ([10] Proposition 6), a result pointing towards the conclusion of Proposition 3 was stated (in the mean square case p = 2, with v(t) = t, u(t) = 0 and [c, d] = [a, b]). However, the proof presented therein is not correct. In the notation therein, the authors proved an inequality of the form ... The authors justified correctly that ‖K₁(x, t, ∆t)‖₂ → 0 and ‖K₂(x, t, ∆t)‖₂ → 0 as ∆t → 0, for each x ∈ [a, b]. However, this fact does not imply ..., as they stated at the end of their proof. For K₁, one has to utilize the dominated convergence theorem. For K₂, one should use uniform continuity.
Remark 3 (Random Leibniz's rule cannot be proved with a mean value theorem). In the deterministic setting, both Proposition 2 and Proposition 3 can be proven with the mean value theorem. However, such proofs do not work in the random scenario, as there is no version of the stochastic mean value theorem. In previous contributions (see [15] ...), results of this type were invoked. As a counterexample, let U ~ Uniform(0, 1) and define Y(t) = 1 if U ≤ t and Y(t) = 0 otherwise, t ∈ [0, 1], so that ∫_0^1 Y(t) dt = 1 − U. A mean square mean value theorem would yield η ∈ [0, 1] with ∫_0^1 Y(t) dt = Y(η)(1 − 0). Then Y(η) = 1 − U almost surely. But this is not possible, since 1 − U ∈ (0, 1) and Y(η) ∈ {0, 1}. Thus, Y does not satisfy any mean square mean value theorem.
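This failure can also be observed numerically. The following Python sketch is our own illustrative choice of indicator process, consistent with the values Y(η) ∈ {0, 1} and 1 − U in the remark above: Y(t) = 1 if U ≤ t and 0 otherwise, with U uniform on (0, 1).

```python
import numpy as np

rng = np.random.default_rng(1)

# U ~ Uniform(0, 1); Y(t) = 1 if U <= t else 0, for t in [0, 1].
# The pathwise integral of Y over [0, 1] equals 1 - U exactly, yet Y only
# takes the values 0 and 1, so no eta can satisfy Y(eta) = 1 - U a.s.
U = rng.uniform(0.0, 1.0, 10_000)
ts = np.linspace(0.0, 1.0, 2001)
Y = (U[:, None] <= ts[None, :]).astype(float)          # samples x time grid
integrals = Y[:, :-1].sum(axis=1) * (ts[1] - ts[0])    # left Riemann sums

print(np.max(np.abs(integrals - (1.0 - U))))           # quadrature error only
print(np.unique(Y))                                    # Y takes only {0, 1}
```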

L p -solution to the Random Linear Delay Differential Equation with a Stochastic Forcing Term
In this section we solve (1) in the L p -sense. To do so, we will demonstrate that x(t) defined by (2) is the unique L p -solution to (1). We will take advantage of the decomposition of problem (1) into its homogeneous part, (3), and its complete part, (4). The formal solution to (3) is given by y(t) defined as (5), while the formal solution to (4) is given by z(t) expressed as (6). The previous contribution [17] provides conditions under which y(t) defined by (5) solves (3) in the L p -sense. Thus, our primary goal will be to find conditions under which z(t) given by (6) is a true solution to (4) in the L p -sense.
Again, recall that the integrals that appear in the expressions (2), (5) and (6) are L p -Riemann integrals.
The uniqueness of the solution to (1) (existence is dealt with afterwards) is proved analogously to ([17] Theorem 3.1), by invoking results from [7] that connect L^p-solutions with sample-path solutions, which satisfy properties analogous to those of deterministic solutions. The precise uniqueness statement is as follows.
Theorem 1 (Uniqueness). The random differential equation problem with delay (1) has at most one L p -solution, for 1 ≤ p < ∞.
Proof. Assume that (1) has an L^p-solution; we prove that it is unique. Let x₁(t) and x₂(t) be two L^p-solutions to (1). Let u(t) = x₁(t) − x₂(t), which satisfies the random differential equation problem with delay u′(t) = a u(t) + b u(t − τ), t ≥ 0, with u(t) = 0 for −τ ≤ t ≤ 0. If t ∈ [0, τ], then t − τ ∈ [−τ, 0]; therefore, u(t − τ) = 0. Thus, u(t) satisfies a random differential equation problem with no delay on [0, τ]:

u′(t) = a u(t), u(0) = 0. (7)

In [7], it was proved that any L^p-solution to a random initial value problem has a product measurable representative which is an absolutely continuous solution in the sample-path sense. Since the sample-path solution to (7) must be 0 (from the deterministic theory), we conclude that u(t) = 0 on [0, τ], as desired. The same reasoning applies on the subsequent intervals [τ, 2τ], [2τ, 3τ], etc. Now we move on to existence results. First, recall that the random delayed exponential function is the solution to the random linear homogeneous differential equation with pure delay that satisfies the unit initial condition.
Proposition 4 (L^p-derivative of the random delayed exponential function ([17] Prop 3.1)). Consider the random system with discrete delay

x′(t) = c x(t − τ), t ≥ 0; x(t) = 1, −τ ≤ t ≤ 0, (8)

where c(ω) is a random variable. If c has absolute moments of any order, then e_τ^{c,t} is the unique L^p-solution to (8), for all 1 ≤ p < ∞. On the other hand, if c is bounded, then e_τ^{c,t} is the unique L^∞-solution to (8).
In [17], two results on the existence of solution to (3) were stated and proved. In terms of notation, the moment-generating function of a random variable a is denoted as φ a (ζ) = E[e aζ ], ζ ∈ R.
In what follows, we establish two theorems on the existence of a solution to (4); see Theorem 4 and Theorem 5. As a corollary, we will derive the solution to (1); see Theorem 6 and Theorem 7.
Theorem 4 (Existence and uniqueness for (4), first version). Fix 1 ≤ p < ∞. Suppose that φ a (ζ) < ∞ for all ζ ∈ R, b has absolute moments of any order, and f is continuous on [0, ∞) in the L p+η -sense, for certain η > 0. Then the stochastic process z(t) defined by (6) is the unique L p -solution to (4).
Proof. At the beginning of the proof of ([17] Theorem 3.2), it was proved that b₁ = e^{−aτ} b has absolute moments of any order, as a consequence of the Cauchy–Schwarz inequality; therefore, Proposition 4 tells us that the process e_τ^{b₁,t} is L^q-differentiable, for each 1 ≤ q < ∞, with (d/dt) e_τ^{b₁,t} = b₁ e_τ^{b₁,t−τ}. It was also proved that, by the chain rule theorem (Proposition 1), the process e^{at} is L^q-differentiable, for each 1 ≤ q < ∞, and (d/dt) e^{at} = a e^{at}. To justify these two assertions on e_τ^{b₁,t} and e^{at}, the hypotheses φ_a(ζ) < ∞ and b having absolute moments of any order are required.
Fix 0 ≤ s ≤ t. Let Y₁(t, s) = e^{a(t−s)}, Y₂(t, s) = e_τ^{b₁,t−τ−s} and Y₃(s) = f(s), according to the notation of Lemma 1. Set the product of the three processes F(t, s) = Y₁(t, s)Y₂(t, s)Y₃(s), so that our candidate solution process becomes z(t) = ∫_0^t F(t, s) ds. We want to apply the random Leibniz rule, see Proposition 3, to differentiate z(t). By the first paragraph of this proof, in which we stated that both e_τ^{b₁,t} and e^{at} are L^q-differentiable, for each 1 ≤ q < ∞, we derive that Y₁ and Y₂ are L^q-continuous in both variables, for all 1 ≤ q < ∞. Since Y₃ is L^{p+η}-continuous, for certain η > 0 by assumption, we deduce that F is L^p-continuous in both variables, as a consequence of Lemma 1.
Fix s and consider, as processes in the variable t, Y₁(t, s) = e^{a(t−s)}, Y₂(t, s) = e_τ^{b₁,t−τ−s} and Y₃ = f(s). We have that Y₁ and Y₂ are L^q-differentiable, for each 1 ≤ q < ∞. The random variable Y₃ belongs to L^{p+η}. By Lemma 2, F(·, s) is L^p-differentiable. Let us see that (∂F/∂t)(t, s) is L^p-continuous at (t, s). Since a has absolute moments of any order (by finiteness of its moment-generating function) and e^{a(t−s)} is L^q-continuous at (t, s), for each 1 ≤ q < ∞, we derive that a e^{a(t−s)} is L^q-continuous at each (t, s), for every 1 ≤ q < ∞, by Hölder's inequality. Thus, we have that a e^{a(t−s)} and e_τ^{b₁,t−τ−s} are L^q-continuous at (t, s), ... Once the existence of an L^p-solution has been proved, uniqueness follows from Theorem 1.

Theorem 5 (Existence and uniqueness for (4), second version). Fix 1 ≤ p < ∞. Suppose that a and b are bounded random variables, and f is continuous on [0, ∞) in the L^p-sense. Then the stochastic process z(t) defined by (6) is the unique L^p-solution to (4).

Proof. As was shown in the proof of Theorem 4, ... The rest of the proof is completely analogous to the previous Theorem 4, by applying the second part of both Lemma 1 and Lemma 2.
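As a numerical sanity check of the variation-of-constants formula behind (6) (our own sketch, for one fixed realization of the random coefficients; the values a = 0.4, b = −0.7, τ = 0.5 and the forcing path f(s) = cos(0.3 s) are arbitrary choices), one can compare z(t) = ∫_0^t e^{a(t−s)} e_τ^{b₁,t−τ−s} f(s) ds, b₁ = e^{−aτ} b, with a direct Euler integration of z′(t) = a z(t) + b z(t − τ) + f(t), z ≡ 0 on [−τ, 0]:

```python
import math
import numpy as np

def delayed_exp(c, t, tau):
    # Delayed exponential: 0 before -tau, 1 on [-tau, 0), truncated series after
    if t < -tau:
        return 0.0
    if t < 0:
        return 1.0
    n = math.floor(t / tau) + 1
    return sum(c ** k * (t - (k - 1) * tau) ** k / math.factorial(k)
               for k in range(n + 1))

a, b, tau = 0.4, -0.7, 0.5          # one fixed realization of a and b
b1 = b * math.exp(-a * tau)
f = lambda s: math.cos(0.3 * s)     # sample path of the forcing term

def z_formula(t, m=4001):
    # z(t) = int_0^t e^{a(t-s)} e_tau^{b1, t-tau-s} f(s) ds (trapezoidal rule)
    s = np.linspace(0.0, t, m)
    vals = np.array([math.exp(a * (t - si)) * delayed_exp(b1, t - tau - si, tau) * f(si)
                     for si in s])
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(s) / 2.0))

# Cross-check: Euler scheme for z'(t) = a z(t) + b z(t - tau) + f(t), z = 0 on [-tau, 0]
h, T = 1e-3, 2.0
N = int(round(T / h))
d = int(round(tau / h))
zs = np.zeros(N + 1)
for k in range(N):
    z_del = zs[k - d] if k >= d else 0.0
    zs[k + 1] = zs[k] + h * (a * zs[k] + b * z_del + f(k * h))

print(z_formula(T), zs[N])   # the two values should be close
```

Agreement of the two values, for any fixed realization, reflects the pathwise validity of the deterministic formula that Theorem 4 and Theorem 5 lift to the L^p-sense.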
Theorem 6 (Existence and uniqueness for (1), first version). Fix 1 ≤ p < ∞. Suppose that φ a (ζ) < ∞ for all ζ ∈ R, b has absolute moments of any order, g belongs to C 1 ([−τ, 0]) in the L p+η -sense and f is continuous on [0, ∞) in the L p+η -sense, for certain η > 0. Then the stochastic process x(t) defined by (2) is the unique L p -solution to (1).
Proof. This is a consequence of Theorem 2 and Theorem 4, with x(t) = y(t) + z(t). Uniqueness follows from Theorem 1.

Theorem 7 (Existence and uniqueness for (1), second version). Fix 1 ≤ p < ∞. Suppose that a and b are bounded random variables, g belongs to C¹([−τ, 0]) in the L^p-sense and f is continuous on [0, ∞) in the L^p-sense. Then the stochastic process x(t) defined by (2) is the unique L^p-solution to (1).

Proof. This is a consequence of Theorem 3 and Theorem 5, with x(t) = y(t) + z(t). Uniqueness follows from Theorem 1.

Remark 4.
As emphasized in ([17] Remark 3.6), the condition of boundedness for a and b in Theorem 7 is necessary if we only assume that g ∈ C 1 ([−τ, 0]) in the L p -sense. See ( [7] Example p. 541), where it is proved that, in order for a random autonomous and homogeneous linear differential equation of first-order to have an L p -solution for every initial condition in L p , one needs the random coefficient to be bounded.
Assume the conditions from Theorem 6 or Theorem 7. From expression (2), it is possible to approximate the statistical moments of x(t). We focus on its expectation, E[x(t)], and on its variance, V[x(t)] = E[x(t)²] − (E[x(t)])². These statistics provide information on the average and the dispersion of x(t), and they are very useful for uncertainty quantification of x(t). Due to the linearity of the expectation and its interchangeability with the L¹-Riemann integral ([5] p. 104), if p ≥ 1, an expression (9) for E[x(t)] is obtained by taking expectations termwise in (2). To compute V[x(t)] when p ≥ 2, we start by squaring (2); each of the resulting integrals has to be considered in L^{p/2}, see ([35] Remark 2), due to the loss of integrability of the product, by Hölder's inequality. By applying expectations, an expression (10) for E[x(t)²] follows. As a consequence, one derives an expression for V[x(t)], by utilizing the relation V[x(t)] = E[x(t)²] − (E[x(t)])². Other statistics related to moments could be derived in a similar fashion.
In Example 1, we will show how useful these expressions are to determine E[x(t)] and V[x(t)] in practice. Our procedure is an alternative to the usual techniques for uncertainty quantification: Monte Carlo simulation, generalized polynomial chaos (gPC) expansions, etc. [1,2].
L^p-convergence of the Solution as the Delay Tends to 0

In this section, we denote by x_τ(t) = y_τ(t) + z_τ(t) the L^p-solution (2) to (1), to emphasize its dependence on the delay τ. When τ = 0, problem (1) becomes a random linear differential equation with no delay, (11), which decomposes into a homogeneous part, (12), and a complete part, (13), with respective solutions y₀(t) and z₀(t), and x₀(t) = y₀(t) + z₀(t).

Theorem 8. Fix 1 ≤ p < ∞. If φ_a(ζ) < ∞ and φ_b(ζ) < ∞ for all ζ ∈ R, and g(0) ∈ L^{p+η} for certain η > 0, then the stochastic process y₀(t) = g(0) e^{(a+b)t} is the unique L^p-solution to (12). On the other hand, if a and b are bounded random variables and g(0) ∈ L^p, then the stochastic process y₀(t) = g(0) e^{(a+b)t} is the unique L^p-solution to (12).
Theorem 9. Fix 1 ≤ p < ∞. If φ_a(ζ) < ∞ and φ_b(ζ) < ∞ for all ζ ∈ R, and f is continuous on [0, ∞) in the L^{p+η}-sense for certain η > 0, then the stochastic process z₀(t) = ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (13).
On the other hand, if a and b are bounded random variables and f is continuous on [0, ∞) in the L^p-sense, then the stochastic process z₀(t) = ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (13).
Proof. Take the first set of assumptions. Let F(t, s) = e^{(a+b)(t−s)} f(s) be the process inside the integral sign. Since φ_a < ∞ and φ_b < ∞, the chain rule theorem (Proposition 1) allows differentiating e^{(a+b)t} in L^q, for each 1 ≤ q < ∞. In particular, e^{(a+b)(t−s)} is L^q-continuous at (t, s), for 1 ≤ q < ∞. As f is continuous on [0, ∞) in the L^{p+η}-sense, we derive that F is L^p-continuous at (t, s). There also exists (∂F/∂t)(t, s) = (a + b) e^{(a+b)(t−s)} f(s) in L^p. Since a + b has absolute moments of any order, (a + b) e^{(a+b)(t−s)} is L^q-continuous at (t, s), for 1 ≤ q < ∞. Then ∂F/∂t is L^p-continuous at (t, s). By Proposition 3, z₀ is L^p-differentiable and z₀′(t) = F(t, t) + ∫_0^t (∂F/∂t)(t, s) ds = f(t) + (a + b) z₀(t).

Theorem 10. Fix 1 ≤ p < ∞. If φ_a(ζ) < ∞ and φ_b(ζ) < ∞ for all ζ ∈ R, g(0) ∈ L^{p+η}, and f is continuous on [0, ∞) in the L^{p+η}-sense for certain η > 0, then the stochastic process x₀(t) = g(0) e^{(a+b)t} + ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (11).
On the other hand, if a and b are bounded random variables, g(0) ∈ L^p, and f is continuous on [0, ∞) in the L^p-sense, then the stochastic process x₀(t) = g(0) e^{(a+b)t} + ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (11).
Proof. It is a consequence of Theorem 8 and Theorem 9 with x 0 (t) = y 0 (t) + z 0 (t).
Next we prove the convergence lim_{τ→0} z_τ(t) = z₀(t). As a corollary, we will be able to derive lim_{τ→0} x_τ(t) = x₀(t).

Theorem 12. Fix 1 ≤ p < ∞. Let a and b be bounded random variables and let f be a continuous stochastic process on [0, ∞) in the L^p-sense. Then lim_{τ→0} z_τ(t) = z₀(t) in L^p, uniformly on [0, T], for each T > 0.
Proof. Notice that z τ (t) defined by (6) (see the first paragraph of this section) exists by Theorem 5, which used the boundedness of a and b and the L p -continuity of f on [0, ∞). Analogously, z 0 (t) exists by Theorem 9.
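The convergence z_τ → z₀ can be observed numerically. The following Python sketch is ours; the realization a = 0.4, b = −0.7 and the forcing path f(s) = cos(0.3 s) are arbitrary choices. It integrates z_τ by the Euler method for shrinking delays and compares with the no-delay solution of z₀′ = (a + b) z₀ + f(t), z₀(0) = 0:

```python
import math
import numpy as np

a, b = 0.4, -0.7                  # one fixed realization of the random coefficients
f = lambda s: math.cos(0.3 * s)   # sample path of the forcing term
h, T = 1e-3, 2.0
N = int(round(T / h))

def z_delay(tau):
    # Euler scheme for z' = a z + b z(t - tau) + f(t), z = 0 on [-tau, 0]
    d = int(round(tau / h))
    zs = np.zeros(N + 1)
    for k in range(N):
        z_del = zs[k - d] if k >= d else 0.0
        zs[k + 1] = zs[k] + h * (a * zs[k] + b * z_del + f(k * h))
    return zs[N]

def z_nodelay():
    # Euler scheme for the limit equation z0' = (a + b) z0 + f(t), z0(0) = 0
    z = 0.0
    for k in range(N):
        z = z + h * ((a + b) * z + f(k * h))
    return z

z0 = z_nodelay()
errors = [abs(z_delay(tau) - z0) for tau in (0.4, 0.2, 0.1, 0.05)]
print(errors)   # decreasing as tau -> 0
```

The error at time T shrinks roughly proportionally to τ, in line with the pathwise counterpart of Theorem 12.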
Example 1. This is a test example, with arbitrary distributions, to show how (9) and (10) may be employed to compute the expectation and the variance of the stochastic solution. Theoretical results are also illustrated. Let a ∼ Beta(2, 3) and b ∼ Uniform(0.2, 1). Define g(t, ω) = sin(sin(d(ω)t 2 )) and f (t, ω) = cos(te(ω) 2 ), where d and e are random variables with d ∼ Triangular(1, 1.15, 1.3) and e ∼ Uniform(0.1, 0.2). By using the chain rule theorem, Proposition 1, it is easy to prove that both g and f are C ∞ in the L p -sense, 1 ≤ p < ∞. The random variables a, b, d and e are assumed to be independent. Consider the solution stochastic process x τ (t) defined by (2). It is an L p -solution for all 1 ≤ p < ∞, by Theorem 7. With expressions (9) and (10), we can compute E[x τ (t)] and V[x τ (t)]; see Figure 1. The results agree with Monte Carlo simulation on (1). Observe that, as τ approaches 0, the solution stochastic process tends to the solution with no delay defined in Theorem 10, as predicted by Theorem 13.
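The statistics of Example 1 can be cross-checked by crude Monte Carlo simulation, as mentioned in the text. The following Python sketch is ours; the delay τ = 0.5 and the grid sizes are arbitrary choices (the example is reported for several delays). It samples the inputs with the stated distributions, integrates each realization of (1) by the Euler method of steps, and averages:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, h, T = 0.5, 1e-2, 1.0        # delay, step size, final time (assumed values)
d = int(round(tau / h))
N = int(round(T / h))
M = 20_000                        # Monte Carlo sample size

# Input distributions of Example 1 (a, b, d, e independent)
a = rng.beta(2, 3, M)
b = rng.uniform(0.2, 1.0, M)
dd = rng.triangular(1.0, 1.15, 1.3, M)
e = rng.uniform(0.1, 0.2, M)

g = lambda t: np.sin(np.sin(dd * t ** 2))   # initial condition process on [-tau, 0]
f = lambda t: np.cos(t * e ** 2)            # forcing process on [0, infinity)

# Euler scheme for x' = a x + b x(t - tau) + f(t), x = g on [-tau, 0],
# vectorized over the M realizations
x = np.empty((N + 1, M))
x[0] = g(0.0)
for k in range(N):
    t_k = k * h
    x_del = x[k - d] if k >= d else g(t_k - tau)   # history on the first interval
    x[k + 1] = x[k] + h * (a * x[k] + b * x_del + f(t_k))

print(x[N].mean(), x[N].var())   # approximate E[x(1)] and V[x(1)]
```

These sample statistics play the role of the Monte Carlo benchmark against which the closed-form expressions (9) and (10) are validated in Figure 1.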

Example 2.
In this example, we specify new probability distributions for the input coefficients. Let a ∼ Uniform(0.2, 1), b ∼ Uniform(−1, 0), d ∼ Beta(1, 1.3) and e ∼ Uniform(−0.2, −0.1), all of them independent. The stochastic process x_τ(t) given by (2) is an L^p-solution for all 1 ≤ p < ∞, by Theorem 7. We compute E[x_τ(t)] and V[x_τ(t)] with (9) and (10); see Figure 2. Observe that the convergence when τ → 0 agrees with Theorem 13.

We now comment on some computational aspects. We have used the software Mathematica®, version 11.2 [37]. The integrals and expectations from (9) and (10) have been computed as multidimensional integrals with the built-in function NIntegrate (recall that the expectation is an integral with respect to the corresponding probability density function). Expression (9) does not pose serious numerical challenges, and one can use a standard NIntegrate routine with no specified options. However, for expression (10), we have set the quasi-Monte Carlo option with 10⁵ sampling points (otherwise the computational time would increase dramatically). We have checked numerically that the following factors increase the computational time: a large ratio t/τ; probability distributions with unbounded support for the input data; and moderate or large dimensions of the random space.

Conclusions
In this paper, we have performed a comprehensive stochastic analysis of the random linear delay differential equation with a stochastic forcing term. The equation considered has one discrete delay τ > 0, two random coefficients a and b (corresponding to the non-delay and the delay term, respectively) and two stochastic processes f(t) and g(t) (corresponding to the forcing term on [0, ∞) and the initial condition on [−τ, 0], respectively). Our setting represents a step further than the previous contribution [17], in which no forcing term was considered (i.e., f(t) = 0). We have rigorously addressed the problem of extending the deterministic theory to the random scenario, by proving that the deterministic solution constructed via the method of steps and the method of variation of constants is an L^p-solution, under certain assumptions on the random data. A new result, the random Leibniz rule for L^p-Riemann integration, has been necessary to derive our conclusions. We have also studied the behavior in L^p of the random delay equation when the delay tends to zero.
Our approach has been shown to be useful to approximate the statistical moments of the solution stochastic process, in particular its expectation and its variance. Thus, it is possible to perform uncertainty quantification. Our procedure is an alternative to the usual techniques for uncertainty quantification: Monte Carlo simulation, generalized polynomial chaos (gPC) expansions, etc.
Our approach could be extended to other random differential equations with or without delay. As usual, one could prove that the deterministic solution also works in the random framework. To do so, a rigorous and careful analysis of the probabilistic properties of the solution based on L^p-calculus should be conducted.
Finally, we humbly think that advancing in theoretical aspects of random differential equations with delay will permit rigorously applying this class of equations to modeling phenomena involving memory and aftereffects together with uncertainties. In particular, they may be crucial to capture uncertainties inherent to some complex modeling problems, since input parameters of this type of equations may belong to a wider range of probability distributions than the ones considered in Itô differential equations.