Article

Lp-Solution to the Random Linear Delay Differential Equation with a Stochastic Forcing Term

by Juan Carlos Cortés and Marc Jornet *
Instituto Universitario de Matemática Multidisciplinar, Building 8G, access C, 2nd floor, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(6), 1013; https://doi.org/10.3390/math8061013
Submission received: 25 May 2020 / Revised: 17 June 2020 / Accepted: 18 June 2020 / Published: 20 June 2020
(This article belongs to the Special Issue Models of Delay Differential Equations)

Abstract

This paper aims at extending a previous contribution dealing with the random autonomous-homogeneous linear differential equation with discrete delay τ > 0, by adding a random forcing term f(t) that varies with time: x′(t) = a x(t) + b x(t − τ) + f(t), t ≥ 0, with initial condition x(t) = g(t), −τ ≤ t ≤ 0. The coefficients a and b are assumed to be random variables, while the forcing term f(t) and the initial condition g(t) are stochastic processes on their respective time domains. The equation is regarded in the Lebesgue space L^p of random variables with finite p-th moment. The deterministic solution constructed with the method of steps and the method of variation of constants, which involves the delayed exponential function, is proved to be an L^p-solution, under certain assumptions on the random data. This proof requires the extension of the deterministic Leibniz's integral rule for differentiation to the random scenario. Finally, we also prove that, when the delay τ tends to 0, the random delay equation tends in L^p to a random equation with no delay. Numerical experiments illustrate how our methodology permits determining the main statistics of the solution process, thereby allowing for uncertainty quantification.

1. Introduction

In this paper, we are concerned with random delay differential equations, defined as classical delay differential equations whose inputs (coefficients, forcing term, initial condition, …) are considered as random variables or regular stochastic processes on an underlying complete probability space ( Ω , F , P ) , which may take a wide variety of probability distributions, such as Binomial, Poisson, Gamma, Gaussian, etc.
Equations of this kind should not be confused with stochastic differential equations of Itô type, which are driven by an irregular error term called the white noise process (the formal derivative of Brownian motion). In contrast to random differential equations, the solutions to stochastic differential equations exhibit nondifferentiable sample-paths. See [1] (pp. 96–98) for a detailed explanation of the difference between random and stochastic differential equations. See [1,2,3,4,5,6], for instance, for applications of random differential equations in engineering, physics, biology, etc. Thus, random differential equations require their own treatment and study: they model smooth random phenomena, whose inputs may follow any type of probability distribution.
From a theoretical viewpoint, random differential equations may be studied in two senses: the sample-path sense or the L^p-sense. The former case considers the trajectories of the stochastic processes involved, so that the realizations of the random system correspond to deterministic versions of the problem. The latter case works with the topology of the Lebesgue space (L^p, ‖·‖_p) of random variables with finite absolute p-th moment, where the norm ‖·‖_p is defined as ‖U‖_p = (E[|U|^p])^{1/p} for 1 ≤ p < ∞ (E denotes the expectation operator), and ‖U‖_∞ = inf{C ≥ 0 : |U| ≤ C almost surely} (essential supremum), U : Ω → ℝ being any random variable. The Lebesgue space (L^p, ‖·‖_p) has the structure of a Banach space. Continuity, differentiability, Riemann integrability, etc., can be considered in the aforementioned space L^p, which gives rise to the random L^p-calculus.
In order to fix concepts, given a stochastic process x(t) ≡ x(t, ω) on I × Ω, where I ⊆ ℝ is an interval (notice that as usual the ω-sample notation might be hidden), we say that x is L^p-continuous at t_0 ∈ I if lim_{h→0} ‖x(t_0 + h) − x(t_0)‖_p = 0. We say that x is L^p-differentiable at t_0 ∈ I if lim_{h→0} ‖(x(t_0 + h) − x(t_0))/h − x′(t_0)‖_p = 0, for certain random variable x′(t_0) (called the derivative of x at t_0). Finally, if I = [a, b], we say that x is L^p-Riemann integrable on [a, b] if there exists a sequence of partitions {P_n}_{n=1}^∞ with mesh tending to 0, P_n = {a = t_0^n < t_1^n < … < t_{r_n}^n = b}, such that, for any choice of points s_i^n ∈ [t_{i−1}^n, t_i^n], i = 1, …, r_n, the limit lim_{n→∞} Σ_{i=1}^{r_n} x(s_i^n)(t_i^n − t_{i−1}^n) exists in L^p. In this case, these Riemann sums have the same limit, which is a random variable and is denoted by ∫_a^b x(t) dt.
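To make these definitions computationally concrete, the following minimal Python sketch (ours, not from the paper) estimates the L^p norm by Monte Carlo sampling and uses it to check the L²-continuity of the simple illustrative process x(t) = U t with U ∼ Uniform(0, 1); the process and the sample size are assumptions made only for illustration.

```python
import numpy as np

def lp_norm(samples, p):
    """Monte Carlo estimate of ||U||_p = (E[|U|^p])^(1/p) from i.i.d. samples of U."""
    return np.mean(np.abs(samples) ** p) ** (1.0 / p)

# Illustration: x(t) = U * t with U ~ Uniform(0, 1) is L^2-continuous at t0 = 2,
# since ||x(t0 + h) - x(t0)||_2 = |h| / sqrt(3) -> 0 as h -> 0.
rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=100_000)
t0 = 2.0
for h in (1e-1, 1e-2, 1e-3):
    print(h, lp_norm((t0 + h) * U - t0 * U, p=2))  # approximately h / sqrt(3)
```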
This L p -approach has been widely used in the context of random differential equations with no delay, especially the case p = 2 which corresponds to the Hilbert space L 2 and yields the so-called mean square calculus; see [5,7,8,9,10,11,12,13,14,15]. Only recently, a theoretical probabilistic analysis of random differential equations with discrete constant delay has been addressed in [16,17,18]. In [16], general random delay differential equations in L p were analyzed, with the goal of extending some of the existing results on random differential equations with no delay from the celebrated book [5]. In [17], we started our study on random delay differential equations with the basic autonomous-homogeneous linear equation, by proving the existence and uniqueness of L p -solution under certain conditions. In [18], the authors studied the same autonomous-homogeneous random linear differential equation with discrete delay as [17], but considered the solution in the sample-path sense and computed its probability density function via the random variable transformation technique, for certain forms of the initial condition process. Other recent contributions for random delay differential equations, but focusing on numerical methods instead, are [19,20,21].
There is still a lack of theoretical analysis for important random delay differential equations. Motivated by this issue, the aim of this contribution is to advance further in the theoretical analysis of relevant random differential equations with discrete delay. In particular, in this paper we extend the recent study performed in [17] for the basic linear equation by adding a stochastic forcing term:
x′(t, ω) = a(ω) x(t, ω) + b(ω) x(t − τ, ω) + f(t, ω),  t ≥ 0,  ω ∈ Ω,
x(t, ω) = g(t, ω),  −τ ≤ t ≤ 0,  ω ∈ Ω.     (1)
The delay τ > 0 is constant. The coefficients a and b are random variables. The forcing term f(t) and the initial condition g(t) are stochastic processes on [0, ∞) and [−τ, 0], respectively, which depend on the outcome ω ∈ Ω of a random experiment (an outcome which might sometimes be omitted in the notation). The term x′(t) represents the derivative of the solution stochastic process in a certain probabilistic sense. Formally, according to the deterministic theory [22], we may express the solution process as
x(t, ω) = e^{a(ω)(t+τ)} e_τ^{b_1(ω), t} g(−τ, ω) + ∫_{−τ}^{0} e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} (g′(s, ω) − a(ω) g(s, ω)) ds + ∫_{0}^{t} e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} f(s, ω) ds,     (2)
where b_1 = e^{−aτ} b and
e_τ^{c, t} =
  0,                                           −∞ < t < −τ,
  1,                                           −τ ≤ t < 0,
  1 + c t/1!,                                  0 ≤ t < τ,
  1 + c t/1! + c² (t − τ)²/2!,                 τ ≤ t < 2τ,
  …
  Σ_{k=0}^{n} c^k (t − (k − 1)τ)^k / k!,       (n − 1)τ ≤ t < nτ,
is the delayed exponential function [22] (Definition 1), c ∈ ℝ, t ∈ ℝ, and n = ⌊t/τ⌋ + 1 (here ⌊·⌋ denotes the integer part defined by the so-called floor function). This formal solution is obtained with the method of steps and the method of variation of constants.
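Since the delayed exponential is the building block of every solution formula below, a pathwise (per-realization) evaluation may be useful for numerical experiments. The following Python sketch is ours (the name delayed_exp is not from the paper); it simply evaluates the piecewise sum above for scalar arguments.

```python
import math

def delayed_exp(c, t, tau):
    """Delayed exponential e_tau^{c, t} ([22], Definition 1), evaluated pathwise.

    Piecewise definition: 0 for t < -tau, 1 for -tau <= t < 0, and
    sum_{k=0}^{n} c^k (t - (k - 1) tau)^k / k!  for (n - 1) tau <= t < n tau,
    with n = floor(t / tau) + 1.
    """
    if t < -tau:
        return 0.0
    if t < 0:
        return 1.0
    n = math.floor(t / tau) + 1
    return sum(c ** k * (t - (k - 1) * tau) ** k / math.factorial(k)
               for k in range(n + 1))

# For 0 <= t < tau the sum reduces to 1 + c * t:
assert abs(delayed_exp(0.5, 0.3, 1.0) - (1 + 0.5 * 0.3)) < 1e-12
```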
The primary objective of this paper is to set probabilistic conditions under which x ( t ) is an L p -solution to (1). We decompose the original problem (1) as
y′(t, ω) = a(ω) y(t, ω) + b(ω) y(t − τ, ω),  t ≥ 0,
y(t, ω) = g(t, ω),  −τ ≤ t ≤ 0,     (3)
and
z′(t, ω) = a(ω) z(t, ω) + b(ω) z(t − τ, ω) + f(t, ω),  t ≥ 0,
z(t, ω) = 0,  −τ ≤ t ≤ 0.     (4)
System (3) does not possess a stochastic forcing term, and it was studied in depth in the recent contribution [17]. Under certain assumptions, its L^p-solution is expressed as
y(t, ω) = e^{a(ω)(t+τ)} e_τ^{b_1(ω), t} g(−τ, ω) + ∫_{−τ}^{0} e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} (g′(s, ω) − a(ω) g(s, ω)) ds,     (5)
as a generalization of the deterministic solution to (3) obtained via the method of steps [22] (Theorem 1). Problem (4) is new and requires an analysis in the L p -sense, in order to solve the initial problem (1). Its formal solution is given by
z(t, ω) = ∫_{0}^{t} e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} f(s, ω) ds,     (6)
see [22] (Theorem 2). In order to differentiate (6) in the L p -sense, one requires the extension of the deterministic Leibniz’s integral rule for differentiation to the random scenario. This extension is an important piece of this paper.
In Section 2, we show preliminary results on L p -calculus that are used through the exposition, which correspond to those preliminary results from [17] and the new random Leibniz’s rule for L p -Riemann integration. Auxiliary but novel results to demonstrate the random Leibniz’s integral rule are Fubini’s theorem and a chain rule theorem. In Section 3, we prove in detail that x ( t ) defined by (2) is the unique L p -solution to (1), by analyzing problem (4). We also find closed-form expressions for some statistics (expectation and variance) of x ( t ) related to its moments. Section 4 deals with the L p -convergence of x ( t ) as the delay τ tends to 0. We then show a numerical example that illustrates the theoretical findings. Finally, Section 5 draws the main conclusions.
In order to complete a fair overview of the existing literature, it must be pointed out that, apart from random delay differential equations (which are the context of this paper), other complementary approaches are stochastic delay differential equations and fuzzy delay differential equations. Stochastic delay differential equations are those in which uncertainty appears due to stochastic processes with irregular sample-paths: the Brownian motion process, the Wiener process, the Poisson process, etc. Equations of this type require a different tool, the Itô calculus [23]. Studies on stochastic delay differential equations can be found in [24,25,26,27,28], for example. On the other hand, in fuzzy delay differential equations, uncertainty is driven by fuzzy processes; see [29] for instance. In any of these approaches, the delay might even be considered random; see [30,31].

2. Results on L p -Calculus

In this section, we state the preliminary results on L p -calculus needed for the following sections. Proposition 1 is the chain rule theorem in L p -calculus, which was first proved in [8] (Theorem 3.19) in the setting of mean square calculus ( p = 2 ). Both Lemma 1 and Lemma 2 provide conditions under which the product of three stochastic processes is L p -continuous or L p -differentiable. Proposition 2 is a result concerning L p -differentiation under the L p -Riemann integral sign, when the interval of integration is fixed. These four results have been already used and stated in the recent contribution [17], and will be required through our forthcoming exposition.
For the sake of completeness, we demonstrate Proposition 2 with an alternative proof to [17], based on Fubini’s theorem for L p -Riemann integration. In the random framework, Fubini’s theorem has not been tackled yet in the recent literature. It states that, if a stochastic process depending on two variables is L p -continuous, then the two iterated L p -Riemann integrals can be interchanged.
We present a new result, Proposition 3, in which we give conditions under which one can L^p-differentiate an L^p-Riemann integral whose interval of integration depends on t. This proposition constitutes the extension of the so-called Leibniz's rule for integration to the random scenario. The proof relies on a new chain rule theorem.
Proposition 1
(Chain rule theorem ([17] Proposition 2.1)). Let {X(t) : t ∈ [a, b]} be a stochastic process, where [a, b] is any interval in ℝ. Let f be a deterministic C¹ function on an open set that contains X([a, b]). Fix 1 ≤ p < ∞. Let t ∈ [a, b] be any point such that:
(i)
X is L^{2p}-differentiable at t;
(ii)
X is path continuous on [ a , b ] ;
(iii)
there exist r > 2p and δ > 0 such that sup_{s ∈ [−δ, δ]} E[|f′(X(t + s))|^r] < ∞.
Then f ∘ X is L^p-differentiable at t and (f ∘ X)′(t) = f′(X(t)) X′(t).
Lemma 1
([17] Lemma 2.1). Let Y_1(t, s), Y_2(t, s) and Y_3(t, s) be three stochastic processes and fix 1 ≤ p < ∞. If Y_1 and Y_2 are L^q-continuous for all 1 ≤ q < ∞, and Y_3 is L^{p+η}-continuous for certain η > 0, then the product process Y_1 Y_2 Y_3 is L^p-continuous.
On the other hand, if Y_1 and Y_2 are L^∞-continuous, and Y_3 is L^p-continuous, then the product process Y_1 Y_2 Y_3 is L^p-continuous.
Lemma 2
([17] Lemma 2.2). Let Y_1(t), Y_2(t) and Y_3(t) be three stochastic processes, and 1 ≤ p < ∞. If Y_1 and Y_2 are L^q-differentiable for all 1 ≤ q < ∞, and Y_3 is L^{p+η}-differentiable for certain η > 0, then the product process Y_1 Y_2 Y_3 is L^p-differentiable and (d/dt)(Y_1(t) Y_2(t) Y_3(t)) = Y_1′(t) Y_2(t) Y_3(t) + Y_1(t) Y_2′(t) Y_3(t) + Y_1(t) Y_2(t) Y_3′(t).
Additionally, if Y_1 and Y_2 are assumed to be L^∞-differentiable, and Y_3 is L^p-differentiable, then Y_1 Y_2 Y_3 is L^p-differentiable, with (d/dt)(Y_1(t) Y_2(t) Y_3(t)) = Y_1′(t) Y_2(t) Y_3(t) + Y_1(t) Y_2′(t) Y_3(t) + Y_1(t) Y_2(t) Y_3′(t).
Lemma 3
(Fubini's theorem for iterated L^p-Riemann integrals). Let H(t, s) be a process on [a, b] × [c, d]. If H is L^p-continuous, then ∫_a^b (∫_c^d H(t, s) ds) dt = ∫_c^d (∫_a^b H(t, s) dt) ds, where the integrals are regarded as L^p-Riemann integrals.
Proof. 
The proof is a variation of Fubini's theorem for Itô stochastic integration with respect to the standard Brownian motion ([32] Theorem 2.10.1). The stochastic processes H(t, s), ∫_c^d H(t, s) ds and ∫_a^b H(t, s) dt are L^p-continuous, so the iterated integrals exist. Let {P_n}_{n=1}^∞ be a sequence of partitions of [a, b] with mesh tending to 0. Write P_n = {a = t_0^n < t_1^n < … < t_n^n = b}, and let r_i^n ∈ [t_{i−1}^n, t_i^n], 1 ≤ i ≤ n, n ≥ 1. Consider the processes G_n(t, s) = Σ_{i=1}^{n} H(r_i^n, s) 𝟙_{[t_{i−1}^n, t_i^n]}(t) (here 𝟙 denotes the characteristic function of a set) and F_n(s) = ∫_a^b G_n(t, s) dt = Σ_{i=1}^{n} H(r_i^n, s)(t_i^n − t_{i−1}^n). Notice that, by definition of L^p-Riemann integral, lim_{n→∞} F_n(s) = ∫_a^b H(t, s) dt in L^p.
By definition of L p -Riemann integral,
∫_c^d F_n(s) ds = Σ_{i=1}^{n} (∫_c^d H(r_i^n, s) ds)(t_i^n − t_{i−1}^n) →_{n→∞} ∫_a^b ∫_c^d H(t, s) ds dt
in L p . On the other hand,
‖∫_c^d ∫_a^b H(t, s) dt ds − ∫_c^d F_n(s) ds‖_p = ‖∫_c^d (∫_a^b H(t, s) dt − F_n(s)) ds‖_p ≤ ∫_c^d ‖∫_a^b H(t, s) dt − F_n(s)‖_p ds,
where the last inequality is due to a property of L^p-integration ([5] p. 102). As H(t, s) and F_n(s) are L^p-bounded on [a, b] × [c, d] and [c, d], respectively (uniformly in n ≥ 1), the dominated convergence theorem allows concluding that lim_{n→∞} ∫_c^d F_n(s) ds = ∫_c^d ∫_a^b H(t, s) dt ds in L^p. □
Proposition 2
(L^p-differentiation under the L^p-Riemann integral sign). Let F(t, s) be a stochastic process on [a, b] × [c, d]. Fix 1 ≤ p < ∞. Suppose that F(t, ·) is L^p-continuous on [c, d], for each t ∈ [a, b], and that there exists the L^p-partial derivative F_t(t, s) for all (t, s) ∈ [a, b] × [c, d], which is L^p-continuous on [a, b] × [c, d]. Let G(t) = ∫_c^d F(t, s) ds (the integral is understood as an L^p-Riemann integral). Then G is L^p-differentiable on [a, b] and G′(t) = ∫_c^d F_t(t, s) ds.
Proof. 
We present an alternative and simpler proof to ([17] Proposition 2.2), based on Fubini's theorem (Lemma 3). Since F_t is L^p-continuous, by Barrow's rule ([5] p. 104) we can write G(t) = ∫_c^d F(a, s) ds + ∫_c^d ∫_a^t F_t(τ, s) dτ ds = ∫_c^d F(a, s) ds + ∫_a^t ∫_c^d F_t(τ, s) ds dτ. The stochastic process τ ∈ [a, b] ↦ ∫_c^d F_t(τ, s) ds is L^p-continuous; therefore, G′(t) = ∫_c^d F_t(t, s) ds in L^p, as a consequence of the fundamental theorem for L^p-calculus; see ([5] p. 103). □
Lemma 4
(Version of the chain rule theorem). Let G(t, s) be a stochastic process on [a, b] × [c, d]. Let u : [a, b] → [c, d] be a differentiable deterministic function. Suppose that G(t, s) has L^p-partial derivatives, with G_t(t, s) being L^p-continuous on [a, b] × [c, d], and G_s(t, ·) being L^p-continuous on [c, d], for each t ∈ [a, b]. Then (d/dt) G(t, u(t)) = G_t(t, u(t)) + u′(t) G_s(t, u(t)) in L^p.
Proof. 
For h ≠ 0, by the triangular inequality,
‖(G(t + h, u(t + h)) − G(t, u(t)))/h − G_t(t, u(t)) − u′(t) G_s(t, u(t))‖_p ≤ I_1(t, h) + I_2(t, h), where
I_1(t, h) = ‖(G(t + h, u(t + h)) − G(t, u(t + h)))/h − G_t(t, u(t))‖_p,  I_2(t, h) = ‖(G(t, u(t + h)) − G(t, u(t)))/h − u′(t) G_s(t, u(t))‖_p.
By Barrow’s rule ([5] p. 104) and an inequality from ([5] p. 102),
I_1(t, h) = ‖(1/h) ∫_t^{t+h} G_t(τ, u(t + h)) dτ − G_t(t, u(t))‖_p = ‖(1/h) ∫_t^{t+h} (G_t(τ, u(t + h)) − G_t(t, u(t))) dτ‖_p ≤ (1/|h|) ∫_t^{t+h} ‖G_t(τ, u(t + h)) − G_t(t, u(t))‖_p dτ.
The process G_t(t, u(r)) is L^p-uniformly continuous on [a, b] × [a, b]; therefore,
I_1(t, h) ≤ sup_{τ ∈ [t, t+h] ∪ [t+h, t]} ‖G_t(τ, u(t + h)) − G_t(t, u(t))‖_p → 0  as h → 0.
On the other hand, let Y(r) = G(t, r), for t ∈ [a, b] fixed. To conclude that lim_{h→0} I_2(t, h) = 0, we need (Y ∘ u)′(t) = Y′(u(t)) u′(t). We have that Y is L^p-C¹([c, d]) and that u is differentiable on [a, b], so the following existing version of the chain rule theorem applies: ([33] Theorem 2.1). □
Remark 1.
Although not needed in the subsequent development, Lemma 4 gives in fact a general multidimensional chain rule theorem for L^p-calculus, for the composition of a stochastic process G(t, s) and two deterministic functions (v(r), u(r)). This is the generalization of ([33] Theorem 2.1) to several variables. Indeed, let G(t, s) be a stochastic process on an open set Λ ⊆ ℝ², with L^p-partial derivatives, G_t(t, s) and G_s(t, s), being L^p-continuous on Λ. Let v, u : [a, b] → ℝ be two C¹ deterministic functions with (v(r), u(r)) ∈ Λ. Then (d/dr) G(v(r), u(r)) = v′(r) G_t(v(r), u(r)) + u′(r) G_s(v(r), u(r)). For the proof, just define Ḡ(t, r) = G(v(t), r). By ([33] Theorem 2.1), Ḡ_t(t, r) = v′(t) G_t(v(t), r), which is L^p-continuous in (t, r). Additionally, Ḡ_r(t, r) = G_s(v(t), r) is L^p-continuous. Then G(v(r), u(r)) = Ḡ(r, u(r)) can be L^p-differentiated at each r, by our Lemma 4: (d/dr) G(v(r), u(r)) = Ḡ_t(r, u(r)) + u′(r) Ḡ_r(r, u(r)) = v′(r) G_t(v(r), u(r)) + u′(r) G_s(v(r), u(r)).
Proposition 3
(Random Leibniz's rule for L^p-calculus). Let F(t, s) be a stochastic process on [a, b] × [c, d]. Let u, v : [a, b] → [c, d] be two differentiable deterministic functions. Suppose that F(t, ·) is L^p-continuous on [c, d], for each t ∈ [a, b], and that F_t(t, s) exists in the L^p-sense and is L^p-continuous on [a, b] × [c, d]. Then H(t) = ∫_{u(t)}^{v(t)} F(t, s) ds is L^p-differentiable and
H′(t) = v′(t) F(t, v(t)) − u′(t) F(t, u(t)) + ∫_{u(t)}^{v(t)} F_t(t, s) ds
(the integral is considered as an L p -Riemann integral).
Proof. 
First, notice that H(t) is well-defined, since F(t, ·) is L^p-continuous. Decompose H(t) as H(t) = ∫_c^{v(t)} F(t, s) ds − ∫_c^{u(t)} F(t, s) ds. Let G(t, r) = ∫_c^r F(t, s) ds, t ∈ [a, b], r ∈ [c, d]. We have H(t) = G(t, v(t)) − G(t, u(t)).
Let us check the conditions of Lemma 4. By Proposition 2, G_t(t, r) = ∫_c^r F_t(t, s) ds, which is L^p-continuous on [a, b] × [c, d] as a consequence of the L^p-continuity of F_t(t, s). On the other hand, G_r(t, r) = F(t, r), by the fundamental theorem of L^p-calculus ([5] p. 103), with G_r(t, ·) = F(t, ·) being L^p-continuous. Thus, by Lemma 4,
H′(t) = G_t(t, v(t)) + v′(t) G_r(t, v(t)) − G_t(t, u(t)) − u′(t) G_r(t, u(t)) = v′(t) F(t, v(t)) − u′(t) F(t, u(t)) + ∫_{u(t)}^{v(t)} F_t(t, s) ds. □
Remark 2
(Proposition 3 versus another proof of the random Leibniz's rule). In ([10] Proposition 6), a result pointing towards the conclusion of Proposition 3 was stated (in the mean square case p = 2, with v(t) = t, u(t) = 0 and [c, d] = [a, b]). However, the proof presented therein is not correct. In the notation therein, the authors proved an inequality of the form
‖K(t, Δt)‖_2 ≤ (t − a) max_{x ∈ [a, t]} ‖K_1(x, t, Δt)‖_2 + max_{x ∈ [t, t+Δt]} ‖K_2(x, t, Δt)‖_2.
The authors justified correctly that ‖K_1(x, t, Δt)‖_2 → 0 and ‖K_2(x, t, Δt)‖_2 → 0 as Δt → 0, for each x ∈ [a, b]. However, this fact does not imply
max_{x ∈ [a, t]} ‖K_1(x, t, Δt)‖_2 → 0,  max_{x ∈ [t, t+Δt]} ‖K_2(x, t, Δt)‖_2 → 0  as Δt → 0,
as they stated at the end of their proof. For K_1, one has to utilize the dominated convergence theorem. For K_2, one should use uniform continuity.
Remark 3
(Random Leibniz's rule cannot be proved with a mean value theorem). In the deterministic setting, both Proposition 2 and Proposition 3 can be proven with the mean value theorem. However, such proofs do not work in the random scenario, as there is no version of the stochastic mean value theorem. In previous contributions (see [15] Lemma 2.4, Corollary 2.5; [34] Lemma 3.1, Theorem 3.2), there is an incorrect version of it. For instance, if U ∼ Uniform(0, 1) and Y(t) = 𝟙_{{t > U}}(t), t ∈ [0, 1], then Y is mean square continuous on [0, 1] (notice that ‖Y(t) − Y(s)‖_2² = |t − s|). Suppose that there exists η ∈ [0, 1] such that ∫_0^1 Y(s) ds = Y(η) almost surely. Then Y(η) = 1 − U almost surely. But this is not possible, since 1 − U ∈ (0, 1) and Y(η) ∈ {0, 1}. Thus, Y does not satisfy any mean square mean value theorem.
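As a quick numerical sanity check of Proposition 3 (not a proof), one can compare a finite-difference derivative of H(t) = ∫₀ᵗ F(t, s) ds with the right-hand side of the rule in a root-mean-square (L²) sense. The Python sketch below does this for the simple choice F(t, s) = e^{a(t−s)} with a bounded random coefficient a; the Uniform distribution, the sample size and the quadrature resolution are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 1.5, size=2_000)          # bounded random coefficient (illustrative)

def H(t, n_quad=800):
    """H(t) = int_0^t F(t, s) ds with F(t, s) = exp(a (t - s)), per sample (trapezoidal rule)."""
    s = np.linspace(0.0, t, n_quad)
    return np.trapz(np.exp(np.outer(a, t - s)), s, axis=1)

def H_prime_leibniz(t, n_quad=800):
    """Leibniz rule: H'(t) = F(t, t) + int_0^t F_t(t, s) ds = 1 + a * int_0^t e^{a(t-s)} ds."""
    s = np.linspace(0.0, t, n_quad)
    return 1.0 + a * np.trapz(np.exp(np.outer(a, t - s)), s, axis=1)

t, h = 1.0, 1e-4
finite_diff = (H(t + h) - H(t)) / h
err = np.mean(np.abs(finite_diff - H_prime_leibniz(t)) ** 2) ** 0.5  # small, roughly O(h)
print(err)
```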

3. L p -Solution to the Random Linear Delay Differential Equation with a Stochastic Forcing Term

In this section we solve (1) in the L p -sense. To do so, we will demonstrate that x ( t ) defined by (2) is the unique L p -solution to (1). We will take advantage of the decomposition of problem (1) into its homogeneous part, (3), and its complete part, (4). The formal solution to (3) is given by y ( t ) defined as (5), while the formal solution to (4) is given by z ( t ) expressed as (6). The previous contribution [17] provides conditions under which y ( t ) defined by (5) solves (3) in the L p -sense. Thus, our primary goal will be to find conditions under which z ( t ) given by (6) is a true solution to (4) in the L p -sense.
Again, recall that the integrals that appear in the expressions (2), (5) and (6) are L p -Riemann integrals.
The uniqueness (not existence for now) of (1) is proved analogously to ([17] Theorem 3.1), by invoking results from [7] that connect L p -solutions with sample-path solutions, which satisfy analogous properties to deterministic solutions. The precise uniqueness statement is as follows.
Theorem 1
(Uniqueness). The random differential equation problem with delay (1) has at most one L^p-solution, for 1 ≤ p < ∞.
Proof. 
Assume that (1) has an L^p-solution. We will prove it is unique. Let x_1(t) and x_2(t) be two L^p-solutions to (1). Let u(t) = x_1(t) − x_2(t), which satisfies the random differential equation problem with delay
u′(t, ω) = a(ω) u(t, ω) + b(ω) u(t − τ, ω),  t ≥ 0,
u(t, ω) = 0,  −τ ≤ t ≤ 0.
If t ∈ [0, τ], then t − τ ∈ [−τ, 0]; therefore, u(t − τ) = 0. Thus, u(t) satisfies a random differential equation problem with no delay on [0, τ]:
u′(t, ω) = a(ω) u(t, ω),  t ∈ [0, τ],  u(0, ω) = 0.     (7)
In [7], it was proved that any L p -solution to a random initial value problem has a product measurable representative which is an absolutely continuous solution in the sample-path sense. Since the sample-path solution to (7) must be 0 (from the deterministic theory), we conclude that u ( t ) = 0 on [ 0 , τ ] , as desired. For the subsequent intervals [ τ , 2 τ ] , [ 2 τ , 3 τ ] , etc., the same reasoning applies. □
Now we move on to existence results. First, recall that the random delayed exponential function is the solution to the random linear homogeneous differential equation with pure delay that satisfies the unit initial condition.
Proposition 4
(L^p-derivative of the random delayed exponential function ([17] Prop 3.1)). Consider the random system with discrete delay
x′(t, ω) = c(ω) x(t − τ, ω),  t ≥ 0,
x(t, ω) = 1,  −τ ≤ t ≤ 0,     (8)
where c ( ω ) is a random variable.
If c has absolute moments of any order, then e_τ^{c, t} is the unique L^p-solution to (8), for all 1 ≤ p < ∞.
On the other hand, if c is bounded, then e_τ^{c, t} is the unique L^∞-solution to (8).
In [17], two results on the existence of solution to (3) were stated and proved. In terms of notation, the moment-generating function of a random variable a is denoted as ϕ_a(ζ) = E[e^{aζ}], ζ ∈ ℝ.
Theorem 2
(Existence and uniqueness for (3), first version ([17] Theorem 3.2)). Fix 1 ≤ p < ∞. Suppose that ϕ_a(ζ) < ∞ for all ζ ∈ ℝ, b has absolute moments of any order, and g belongs to C¹([−τ, 0]) in the L^{p+η}-sense, for certain η > 0. Then the stochastic process y(t) defined by (5) is the unique L^p-solution to (3).
Theorem 3
(Existence and uniqueness for (3), second version ([17] Theorem 3.4)). Fix 1 ≤ p < ∞. Suppose that a and b are bounded random variables, and g belongs to C¹([−τ, 0]) in the L^p-sense. Then the stochastic process y(t) defined by (5) is the unique L^p-solution to (3).
In what follows, we establish two theorems on the existence of a solution to (4); see Theorem 4 and Theorem 5. As a corollary, we will derive the solution to (1); see Theorem 6 and Theorem 7.
Theorem 4
(Existence and uniqueness for (4), first version). Fix 1 ≤ p < ∞. Suppose that ϕ_a(ζ) < ∞ for all ζ ∈ ℝ, b has absolute moments of any order, and f is continuous on [0, ∞) in the L^{p+η}-sense, for certain η > 0. Then the stochastic process z(t) defined by (6) is the unique L^p-solution to (4).
Proof. 
At the beginning of the proof of ([17] Theorem 3.2), it was proved that b_1 = e^{−aτ} b has absolute moments of any order, as a consequence of the Cauchy–Schwarz inequality; therefore, Proposition 4 tells us that the process e_τ^{b_1, t} is L^q-differentiable, for each 1 ≤ q < ∞, and (d/dt) e_τ^{b_1, t} = b_1 e_τ^{b_1, t−τ}. It was also proved that, by the chain rule theorem (Proposition 1), the process e^{at} is L^q-differentiable, for each 1 ≤ q < ∞, and (d/dt) e^{at} = a e^{at}. To justify these two assertions on e_τ^{b_1, t} and e^{at}, the hypotheses ϕ_a(ζ) < ∞ and b having absolute moments of any order are required.
Fix 0 ≤ s ≤ t. Let Y_1(t, s) = e^{a(t−s)}, Y_2(t, s) = e_τ^{b_1, t−τ−s} and Y_3(s) = f(s), according to the notation of Lemma 1. Set the product of the three processes F(t, s) = Y_1(t, s) Y_2(t, s) Y_3(s), so that our candidate solution process becomes z(t) = ∫_0^t F(t, s) ds. We check the conditions of the random Leibniz's rule, see Proposition 3, to differentiate z(t). By the first paragraph of this proof, in which we stated that both e_τ^{b_1, t} and e^{at} are L^q-differentiable, for each 1 ≤ q < ∞, we derive that Y_1 and Y_2 are L^q-continuous in both variables, for all 1 ≤ q < ∞. Since Y_3 is L^{p+η}-continuous, for certain η > 0 by assumption, we deduce that F is L^p-continuous in both variables, as a consequence of Lemma 1.
For fixed s, let Y_1(t) = e^{a(t−s)}, Y_2(t) = e_τ^{b_1, t−τ−s} and Y_3 = f(s). We have that Y_1 and Y_2 are L^q-differentiable, for each 1 ≤ q < ∞. The random variable Y_3 belongs to L^{p+η}. By Lemma 2, F(·, s) is L^p-differentiable at each t, with
F_t(t, s) = a e^{a(t−s)} e_τ^{b_1, t−τ−s} f(s) + e^{a(t−s)} b_1 e_τ^{b_1, t−2τ−s} f(s).
Let us see that F_t(t, s) is L^p-continuous at (t, s). Since a has absolute moments of any order (by finiteness of its moment-generating function) and e^{a(t−s)} is L^q-continuous at (t, s), for each 1 ≤ q < ∞, we derive that a e^{a(t−s)} is L^q-continuous at each (t, s), for every 1 ≤ q < ∞, by Hölder's inequality. Thus, we have that Y_1(t, s) = a e^{a(t−s)} and Y_2(t, s) = e_τ^{b_1, t−τ−s} are L^q-continuous at (t, s), for each 1 ≤ q < ∞, while Y_3(s) = f(s) is L^{p+η}-continuous. By Lemma 1, a e^{a(t−s)} e_τ^{b_1, t−τ−s} f(s) is L^p-continuous at each (t, s). Analogously, e^{a(t−s)} b_1 e_τ^{b_1, t−2τ−s} f(s) is L^p-continuous at (t, s). Therefore, F_t(t, s) is L^p-continuous at (t, s). By Proposition 3, the process z(t) is L^p-differentiable and z′(t) = F(t, t) + ∫_0^t F_t(t, s) ds = f(t) + a z(t) + b z(t − τ) (by definition of F(t, s) in the proof, F(t, t) = e^{a(t−t)} e_τ^{b_1, t−τ−t} f(t) = e_τ^{b_1, −τ} f(t) = f(t), where e_τ^{b_1, −τ} = 1 by definition of the delayed exponential function), and we are done.
Once the existence of an L^p-solution has been proved, uniqueness follows from Theorem 1. □
Theorem 5
(Existence and uniqueness for (4), second version). Fix 1 ≤ p < ∞. Suppose that a and b are bounded random variables, and f is continuous on [0, ∞) in the L^p-sense. Then the stochastic process z(t) defined by (6) is the unique L^p-solution to (4).
Proof. 
As was shown in ([17] Theorem 3.4), the process e_τ^{b_1, t} is L^∞-differentiable and (d/dt) e_τ^{b_1, t} = b_1 e_τ^{b_1, t−τ}, because b_1 = e^{−aτ} b is bounded. Additionally, the process e^{at} is L^∞-differentiable and (d/dt) e^{at} = a e^{at}, as a consequence of the deterministic mean value theorem and the boundedness of a.
The rest of the proof is completely analogous to the previous Theorem 4, by applying the second part of both Lemma 1 and Lemma 2. □
Theorem 6
(Existence and uniqueness for (1), first version). Fix 1 ≤ p < ∞. Suppose that ϕ_a(ζ) < ∞ for all ζ ∈ ℝ, b has absolute moments of any order, g belongs to C¹([−τ, 0]) in the L^{p+η}-sense and f is continuous on [0, ∞) in the L^{p+η}-sense, for certain η > 0. Then the stochastic process x(t) defined by (2) is the unique L^p-solution to (1).
Proof. 
This is a consequence of Theorem 2 and Theorem 4, with x ( t ) = y ( t ) + z ( t ) . Uniqueness follows from Theorem 1. □
Theorem 7
(Existence and uniqueness for (1), second version). Fix 1 ≤ p < ∞. Suppose that a and b are bounded random variables, g belongs to C¹([−τ, 0]) in the L^p-sense and f is continuous on [0, ∞) in the L^p-sense. Then the stochastic process x(t) defined by (2) is the unique L^p-solution to (1).
Proof. 
This is a consequence of Theorem 3 and Theorem 5, with x ( t ) = y ( t ) + z ( t ) . Uniqueness follows from Theorem 1. □
Remark 4.
As emphasized in ([17] Remark 3.6), the condition of boundedness for a and b in Theorem 7 is necessary if we only assume that g ∈ C¹([−τ, 0]) in the L^p-sense. See ([7] Example p. 541), where it is proved that, in order for a random autonomous and homogeneous linear differential equation of first order to have an L^p-solution for every initial condition in L^p, one needs the random coefficient to be bounded.
Assume the conditions from Theorem 6 or Theorem 7. From expression (2), it is possible to approximate the statistical moments of x(t). We focus on its expectation, E[x(t)], and on its variance, V[x(t)] = E[x(t)²] − (E[x(t)])². These statistics provide information on the average and the dispersion of x(t), and they are very useful for uncertainty quantification for x(t). For ease of notation, denote the stochastic processes
F_1(t, ω) = e^{a(ω)(t+τ)} e_τ^{b_1(ω), t} g(−τ, ω),
F_2(t, s, ω) = e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} (g′(s, ω) − a(ω) g(s, ω)),
F_3(t, s, ω) = e^{a(ω)(t−s)} e_τ^{b_1(ω), t−τ−s} f(s, ω).
Due to the linearity of the expectation and its interchangeability with the L¹-Riemann integral ([5] p. 104), if p ≥ 1,
E[x(t)] = E[F_1(t)] + ∫_{−τ}^{0} E[F_2(t, s)] ds + ∫_{0}^{t} E[F_3(t, s)] ds.     (9)
To compute V[x(t)] when p ≥ 2, we start by
x(t)² = F_1(t)² + ∫_{−τ}^{0} ∫_{−τ}^{0} F_2(t, s_1) F_2(t, s_2) ds_2 ds_1 + ∫_{0}^{t} ∫_{0}^{t} F_3(t, s_1) F_3(t, s_2) ds_2 ds_1 + 2 ∫_{−τ}^{0} F_1(t) F_2(t, s) ds + 2 ∫_{0}^{t} F_1(t) F_3(t, s) ds + 2 ∫_{−τ}^{0} ∫_{0}^{t} F_2(t, s_1) F_3(t, s_2) ds_2 ds_1.
Each of these integrals has to be considered in L^{p/2}; see ([35] Remark 2). This is due to the loss of integrability of the product, by Hölder's inequality. By applying expectations,
E[x(t)²] = E[F_1(t)²] + ∫_{−τ}^{0} ∫_{−τ}^{0} E[F_2(t, s_1) F_2(t, s_2)] ds_2 ds_1 + ∫_{0}^{t} ∫_{0}^{t} E[F_3(t, s_1) F_3(t, s_2)] ds_2 ds_1 + 2 ∫_{−τ}^{0} E[F_1(t) F_2(t, s)] ds + 2 ∫_{0}^{t} E[F_1(t) F_3(t, s)] ds + 2 ∫_{−τ}^{0} ∫_{0}^{t} E[F_2(t, s_1) F_3(t, s_2)] ds_2 ds_1.     (10)
As a consequence, one derives an expression for V[x(t)], by utilizing the relation V[x(t)] = E[x(t)²] − (E[x(t)])². Other statistics related to moments could be derived in a similar fashion.
In Example 1, we will show how useful these expressions are to determine E [ x ( t ) ] and V [ x ( t ) ] in practice. Our procedure is an alternative to the usual techniques for uncertainty quantification: Monte Carlo simulation, generalized polynomial chaos (gPC) expansions, etc. [1,2].
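As a cross-check of (9) and (10), and of the Monte Carlo alternative just mentioned, one can sample the random inputs, evaluate the pathwise formula (2) by numerical quadrature and average. The Python sketch below does exactly that, reusing the delayed_exp helper sketched in the Introduction; the input distributions, the placeholder functions g and f, the sample size and the quadrature resolution are illustrative assumptions of ours, not the settings used in the examples of Section 4.

```python
import numpy as np

def x_path(t, tau, a, b, g, dg, f, n_quad=400):
    """Pathwise evaluation of formula (2) for one realization of (a, b) and one
    realized pair of functions g (initial condition, with derivative dg) and f
    (forcing term). Integrals use the trapezoidal rule.
    Requires the delayed_exp helper sketched in the Introduction."""
    b1 = np.exp(-a * tau) * b
    val = np.exp(a * (t + tau)) * delayed_exp(b1, t, tau) * g(-tau)
    s1 = np.linspace(-tau, 0.0, n_quad)
    val += np.trapz([np.exp(a * (t - s)) * delayed_exp(b1, t - tau - s, tau)
                     * (dg(s) - a * g(s)) for s in s1], s1)
    s2 = np.linspace(0.0, t, n_quad)
    val += np.trapz([np.exp(a * (t - s)) * delayed_exp(b1, t - tau - s, tau) * f(s)
                     for s in s2], s2)
    return val

def monte_carlo_stats(t, tau, n_samples=1000, seed=0):
    """Crude Monte Carlo estimates of E[x(t)] and V[x(t)] for illustrative inputs."""
    rng = np.random.default_rng(seed)
    xs = np.empty(n_samples)
    for i in range(n_samples):
        a, b, d = rng.beta(2, 3), rng.uniform(0.2, 1.0), rng.uniform(0.5, 1.5)
        g = lambda s, d=d: np.sin(d * s)        # placeholder initial condition
        dg = lambda s, d=d: d * np.cos(d * s)   # its derivative
        f = lambda s, d=d: np.cos(d * s)        # placeholder forcing term
        xs[i] = x_path(t, tau, a, b, g, dg, f)
    return xs.mean(), xs.var()

print(monte_carlo_stats(t=2.0, tau=0.5))
```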

4. L p -Convergence to a Random Complete Linear Differential Equation When the Delay Tends to 0

Given a discrete delay τ > 0, we denote the L^p-solution (2) to (1) by x_τ(t). We denote the L^p-solutions (5) and (6) to (3) and (4) by y_τ(t) and z_τ(t), respectively, so that x_τ(t) = y_τ(t) + z_τ(t). Thus, we are making the dependence on the delay τ explicit. If we put τ = 0 into (1), (3) and (4), we obtain random linear differential equations with no delay:
x_0′(t, ω) = (a(ω) + b(ω)) x_0(t, ω) + f(t, ω),  t ≥ 0,  x_0(0, ω) = g(0, ω),     (11)
y_0′(t, ω) = (a(ω) + b(ω)) y_0(t, ω),  t ≥ 0,  y_0(0, ω) = g(0, ω),     (12)
z_0′(t, ω) = (a(ω) + b(ω)) z_0(t, ω) + f(t, ω),  t ≥ 0,  z_0(0, ω) = 0,     (13)
respectively. The following results establish conditions under which (11), (12) and (13) have L p -solutions.
Theorem 8
([17] Corollary 4.1). Fix 1 ≤ p < ∞. If ϕ_a(ζ) < ∞ and ϕ_b(ζ) < ∞ for all ζ ∈ ℝ, and g(0) ∈ L^{p+η} for certain η > 0, then the stochastic process y_0(t) = g(0) e^{(a+b)t} is the unique L^p-solution to (12).
On the other hand, if a and b are bounded random variables and g(0) ∈ L^p, then the stochastic process y_0(t) = g(0) e^{(a+b)t} is the unique L^p-solution to (12).
Theorem 9.
Fix 1 ≤ p < ∞. If ϕ_a(ζ) < ∞ and ϕ_b(ζ) < ∞ for all ζ ∈ ℝ, and f is continuous on [0, ∞) in the L^{p+η}-sense for certain η > 0, then the stochastic process z_0(t) = ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (13).
On the other hand, if a and b are bounded random variables and f is continuous on [0, ∞) in the L^p-sense, then the stochastic process z_0(t) = ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (13).
Proof. 
Take the first set of assumptions. Let F(t, s) = e^{(a+b)(t−s)} f(s) be the process inside the integral sign. Since ϕ_a < ∞ and ϕ_b < ∞, the chain rule theorem (Proposition 1) allows differentiating e^{(a+b)t} in L^q, for each 1 ≤ q < ∞. In particular, e^{(a+b)(t−s)} is L^q-continuous at (t, s), for 1 ≤ q < ∞. As f is continuous on [0, ∞) in the L^{p+η}-sense, we derive that F is L^p-continuous at (t, s). There also exists F_t(t, s) = (a + b) e^{(a+b)(t−s)} f(s) in L^p. Since a + b has absolute moments of any order, (a + b) e^{(a+b)(t−s)} is L^q-continuous at (t, s), for 1 ≤ q < ∞. Then F_t is L^p-continuous at (t, s). By Proposition 3, z_0 is L^p-differentiable and z_0′(t) = F(t, t) + ∫_0^t F_t(t, s) ds = f(t) + (a + b) z_0(t), and we are done.
Suppose that a and b are bounded random variables and f is continuous on [0, ∞) in the L^p-sense. If a and b are bounded, then e^{(a+b)t} is L^∞-differentiable (this is because of an application of the deterministic mean value theorem; see ([17] Theorem 3.4)). Then an analogous proof to the previous paragraph works in this case, by only assuming that f is continuous on [0, ∞) in the L^p-sense. □
Theorem 10.
Fix 1 ≤ p < ∞. If ϕ_a(ζ) < ∞ and ϕ_b(ζ) < ∞ for all ζ ∈ ℝ, g(0) ∈ L^{p+η}, and f is continuous on [0, ∞) in the L^{p+η}-sense for certain η > 0, then the stochastic process x_0(t) = g(0) e^{(a+b)t} + ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (11).
On the other hand, if a and b are bounded random variables, g(0) ∈ L^p, and f is continuous on [0, ∞) in the L^p-sense, then the stochastic process x_0(t) = g(0) e^{(a+b)t} + ∫_0^t e^{(a+b)(t−s)} f(s) ds is the unique L^p-solution to (11).
Proof. 
It is a consequence of Theorem 8 and Theorem 9 with x 0 ( t ) = y 0 ( t ) + z 0 ( t ) . □
Our goal is to establish conditions under which lim_{τ→0} x_τ(t) = x_0(t) in L^p, for each t ≥ 0. To do so, we will utilize lim_{τ→0} y_τ(t) = y_0(t) and lim_{τ→0} z_τ(t) = z_0(t).
The first limit, lim_{τ→0} y_τ(t) = y_0(t), was demonstrated in ([17] Theorem 4.5), by using inequalities for the deterministic and random delayed exponential function ([36] Theorem A.3), ([17] Lemma 4.2, Lemma 4.3, Lemma 4.4).
Theorem 11
([17] Theorem 4.5). Fix 1 ≤ p < ∞. Let a and b be bounded random variables and let g be a stochastic process that belongs to C¹([−τ, 0]) in the L^p-sense. Then, lim_{τ→0} y_τ(t) = y_0(t) in L^p, uniformly on [0, T], for each T > 0.
Next we prove the convergence lim_{τ→0} z_τ(t) = z_0(t). As a corollary, we will be able to derive lim_{τ→0} x_τ(t) = x_0(t).
Theorem 12.
Fix 1 ≤ p < ∞. Let a and b be bounded random variables and let f be a continuous stochastic process on [0, ∞) in the L^p-sense. Then, lim_{τ→0} z_τ(t) = z_0(t) in L^p, uniformly on [0, T], for each T > 0.
Proof. 
Notice that z_τ(t) defined by (6) (see the first paragraph of this section) exists by Theorem 5, which used the boundedness of a and b and the L^p-continuity of f on [0, ∞). Analogously, z_0(t) exists by Theorem 9.
Fix t ∈ [0, T]. We bound
‖z_τ(t) − z_0(t)‖_p ≤ ∫_0^t ‖e^{a(t−s)} f(s) (e_τ^{b_1, t−τ−s} − e^{b(t−s)})‖_p ds ≤ ∫_0^t ‖e^{a(t−s)} f(s)‖_p ‖e_τ^{b_1, t−τ−s} − e^{b(t−s)}‖_∞ ds.
We have ‖e^{a(t−s)}‖_∞ ≤ e^{‖a‖_∞ T} and ‖f(s)‖_p ≤ C_f = max_{s∈[0,T]} ‖f(s)‖_p. These bounds yield
‖z_τ(t) − z_0(t)‖_p ≤ C_f e^{‖a‖_∞ T} ∫_0^t ‖e_τ^{b_1, t−τ−s} − e^{b(t−s)}‖_∞ ds.     (14)
Let k be a number such that k ≥ ‖b_1‖_∞ = ‖e^{−aτ} b‖_∞, for all τ ∈ (0, 1]. By ([17] Lemma 4.3),
‖e_τ^{b_1, t−τ−s} − e^{b_1(t−s)}‖_∞ ≤ C_{T,k} · τ,     (15)
for t ∈ [0, T], 0 ≤ s ≤ t and τ ∈ (0, 1]. On the other hand, by the deterministic mean value theorem (applied for each outcome ω),
e^{b_1(t−s)} − e^{b(t−s)} = e^{e^{−aτ} b (t−s)} − e^{b(t−s)} = b(t − s) e^{ξ_{τ,ω} b(t−s)} (e^{−aτ} − 1),
where ξ_{τ,ω} ∈ (1, e^{−aτ}) ∪ (e^{−aτ}, 1). In particular, |ξ_{τ,ω}| ≤ 1 + e^{‖a‖_∞}. We apply again the deterministic mean value theorem to e^{−aτ} − 1:
e^{−aτ} − 1 = e^{ξ̄_{τ,ω}} (−aτ),
where ξ̄_{τ,ω} ∈ (−aτ, 0) ∪ (0, −aτ). In particular,
|e^{−aτ} − 1| ≤ e^{‖a‖_∞} ‖a‖_∞ τ.
As a consequence,
‖e^{b_1(t−s)} − e^{b(t−s)}‖_∞ ≤ ‖b‖_∞ T e^{(1 + e^{‖a‖_∞}) ‖b‖_∞ T} e^{‖a‖_∞} ‖a‖_∞ τ = C̄_{T,a,b} τ.     (16)
By combining (15) and (16) and by the triangular inequality,
‖e_τ^{b_1, t−τ−s} − e^{b(t−s)}‖_∞ ≤ ‖e_τ^{b_1, t−τ−s} − e^{b_1(t−s)}‖_∞ + ‖e^{b_1(t−s)} − e^{b(t−s)}‖_∞ ≤ (C_{T,k} + C̄_{T,a,b}) τ.
Substituting this inequality into (14),
‖z_τ(t) − z_0(t)‖_p ≤ C_f e^{‖a‖_∞ T} (C_{T,k} + C̄_{T,a,b}) T τ → 0  as τ → 0,
uniformly on [ 0 , T ] . □
Theorem 13.
Fix 1 ≤ p < ∞. Let a and b be bounded random variables, let g be a stochastic process that belongs to C¹([−τ, 0]) in the L^p-sense, and let f be a continuous stochastic process on [0, ∞) in the L^p-sense. Then, lim_{τ→0} x_τ(t) = x_0(t) in L^p, uniformly on [0, T], for each T > 0.
Proof. 
This is a consequence of Theorem 11 and Theorem 12, with x_τ(t) = y_τ(t) + z_τ(t) and x_0(t) = y_0(t) + z_0(t). □
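A pathwise (single-realization) illustration of this convergence can be obtained by comparing the x_path sketch above, for decreasing values of τ, with the no-delay solution of Theorem 10; Theorem 13 concerns the L^p norm over realizations, but the uniform bounds in the proof of Theorem 12 are exactly what make these pathwise gaps shrink. The realization below is an illustrative choice of ours and reuses the helpers sketched earlier.

```python
import numpy as np

# One fixed realization of the inputs (illustrative values of ours).
a, b, d = 0.4, 0.6, 1.2
g = lambda s: np.sin(d * s)
dg = lambda s: d * np.cos(d * s)
f = lambda s: np.cos(d * s)

def x0(t, n_quad=2000):
    """No-delay solution of Theorem 10: g(0) e^{(a+b)t} + int_0^t e^{(a+b)(t-s)} f(s) ds."""
    s = np.linspace(0.0, t, n_quad)
    return g(0.0) * np.exp((a + b) * t) + np.trapz(np.exp((a + b) * (t - s)) * f(s), s)

t = 1.0
for tau in (0.5, 0.1, 0.02):
    print(tau, abs(x_path(t, tau, a, b, g, dg, f) - x0(t)))  # gap shrinks as tau -> 0
```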
Example 1.
This is a test example, with arbitrary distributions, to show how (9) and (10) may be employed to compute the expectation and the variance of the stochastic solution. Theoretical results are also illustrated. Let a ∼ Beta(2, 3) and b ∼ Uniform(0.2, 1). Define g(t, ω) = sin(sin(d(ω) t²)) and f(t, ω) = cos(t e(ω)²), where d and e are random variables with d ∼ Triangular(1, 1.15, 1.3) and e ∼ Uniform(0.1, 0.2). By using the chain rule theorem, Proposition 1, it is easy to prove that both g and f are C^∞ in the L^p-sense, 1 ≤ p < ∞. The random variables a, b, d and e are assumed to be independent. Consider the solution stochastic process x_τ(t) defined by (2). It is an L^p-solution for all 1 ≤ p < ∞, by Theorem 7. With expressions (9) and (10), we can compute E[x_τ(t)] and V[x_τ(t)]; see Figure 1. The results agree with Monte Carlo simulation on (1). Observe that, as τ approaches 0, the solution stochastic process tends to the solution with no delay defined in Theorem 10, as predicted by Theorem 13.
Example 2.
In this example, we specify new probability distributions for the input coefficients. Let a ∼ Uniform(0.2, 1), b ∼ Uniform(−1, 0), d ∼ Beta(1, 1.3) and e ∼ Uniform(−0.2, 0.1), all of them independent. The stochastic process x_τ(t) given by (2) is an L^p-solution for all 1 ≤ p < ∞, by Theorem 7. We compute E[x_τ(t)] and V[x_τ(t)] with (9) and (10); see Figure 2. Observe that the convergence when τ → 0 agrees with Theorem 13.
We now comment on some computational aspects. We have used the software Mathematica®, version 11.2 [37]. The integrals and expectations from (9) and (10) have been computed as multidimensional integrals with the built-in function NIntegrate (recall that the expectation is an integral with respect to the corresponding probability density function). Expression (9) does not pose serious numerical challenges, and one can use a standard NIntegrate routine with no specified options. However, for expression (10), we have set the quasi-Monte Carlo option with 10⁵ sampling points (otherwise the computational time would increase dramatically). We have checked numerically that the following factors increase the computational time: a large ratio t/τ; probability distributions with unbounded support for the input data; and moderate or large dimensions of the random space.
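For readers without Mathematica, a rough Python analogue of this quasi-Monte Carlo integration can be assembled with scipy.stats.qmc: draw low-discrepancy points, map them through the inverse CDFs of the random inputs and average the integrand. The integrand h and the Beta/Uniform inputs below are placeholders of ours, not the expressions actually integrated in (9) and (10).

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

sobol = qmc.Sobol(d=2, scramble=True, seed=0)
u = sobol.random(2 ** 14)                            # low-discrepancy points in [0, 1)^2
a = stats.beta.ppf(u[:, 0], 2, 3)                    # map to a ~ Beta(2, 3)
b = stats.uniform.ppf(u[:, 1], loc=0.2, scale=0.8)   # map to b ~ Uniform(0.2, 1)

def h(a, b, t=1.0):
    """Placeholder integrand standing in for an expectation appearing in (9)-(10)."""
    return np.exp((a + b) * t)

print(h(a, b).mean())                                # quasi-Monte Carlo estimate of E[h(a, b)]
```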

5. Conclusions

In this paper, we have performed a comprehensive stochastic analysis of the random linear delay differential equation with a stochastic forcing term. The equation considered has one discrete delay τ > 0, two random coefficients a and b (corresponding to the non-delay and the delay terms, respectively) and two stochastic processes f(t) and g(t) (corresponding to the forcing term on [0, ∞) and the initial condition on [−τ, 0], respectively). Our setting goes a step further than the previous contribution [17], in which no forcing term was considered (i.e., f(t) = 0). We have rigorously addressed the problem of extending the deterministic theory to the random scenario, by proving that the deterministic solution constructed via the method of steps and the method of variation of constants is an L^p-solution, under certain assumptions on the random data. A new result, the random Leibniz's rule for L^p-Riemann integration, has been necessary to derive our conclusions. We have also studied the behavior in L^p of the random delay equation when the delay tends to zero.
Our approach has been shown to be useful to approximate the statistical moments of the solution stochastic process, in particular its expectation and its variance. Thus, it is possible to perform uncertainty quantification. Our procedure is an alternative to the usual techniques for uncertainty quantification: Monte Carlo simulation, generalized polynomial chaos (gPC) expansions, etc.
Our approach could be extended to other random differential equations, with or without delay. As usual, one could prove that the deterministic solution also works in the random framework. To do so, a rigorous and careful analysis of the probabilistic properties of the solution, based on L^p-calculus, should be conducted.
Finally, we humbly think that advancing in theoretical aspects of random differential equations with delay will permit rigorously applying this class of equations to modeling phenomena involving memory and aftereffects together with uncertainties. In particular, they may be crucial to capture uncertainties inherent to some complex modeling problems, since input parameters of this type of equations may belong to a wider range of probability distributions than the ones considered in Itô differential equations.

Author Contributions

Investigation, M.J.; methodology, M.J.; software, M.J.; supervision, J.C.C.; validation, J.C.C.; visualization, J.C.C. and M.J.; writing—original draft, M.J.; writing—review and editing, J.C.C. and M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER UE) grant MTM2017–89664–P.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

References

1. Smith, R.C. Uncertainty Quantification: Theory, Implementation, and Applications; SIAM: Philadelphia, PA, USA, 2013; Volume 12.
2. Xiu, D. Numerical Methods for Stochastic Computations: A Spectral Method Approach; Princeton University Press: Princeton, NJ, USA, 2010.
3. Le Maître, O.P.; Knio, O.M. Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics; Springer Science & Business Media: Berlin, Germany, 2010.
4. Xiu, D.; Karniadakis, G.E. Supersensitivity due to uncertain boundary conditions. Int. J. Numer. Methods Eng. 2004, 61, 2114–2138.
5. Soong, T.T. Random Differential Equations in Science and Engineering; Academic Press: New York, NY, USA, 1973.
6. Casabán, M.C.; Cortés, J.C.; Navarro-Quiles, A.; Romero, J.V.; Roselló, M.D.; Villanueva, R.J. A comprehensive probabilistic solution of random SIS-type epidemiological models using the random variable transformation technique. Commun. Nonlinear Sci. Numer. Simul. 2016, 32, 199–210.
7. Strand, J.L. Random ordinary differential equations. J. Differ. Equ. 1970, 7, 538–553.
8. Villafuerte, L.; Braumann, C.A.; Cortés, J.C.; Jódar, L. Random differential operational calculus: Theory and applications. Comput. Math. Appl. 2010, 59, 115–125.
9. Saaty, T.L. Modern Nonlinear Equations; Dover Publications: New York, NY, USA, 1981.
10. Cortés, J.C.; Jódar, L.; Roselló, M.D.; Villafuerte, L. Solving initial and two-point boundary value linear random differential equations: A mean square approach. Appl. Math. Comput. 2012, 219, 2204–2211.
11. Calatayud, J.; Cortés, J.C.; Jornet, M.; Villafuerte, L. Random non-autonomous second order linear differential equations: Mean square analytic solutions and their statistical properties. Adv. Differ. Equ. 2018, 2018, 392.
12. Calatayud, J.; Cortés, J.C.; Jornet, M. Improving the approximation of the first- and second-order statistics of the response stochastic process to the random Legendre differential equation. Mediterr. J. Math. 2019, 16, 68.
13. Licea, J.A.; Villafuerte, L.; Chen-Charpentier, B.M. Analytic and numerical solutions of a Riccati differential equation with random coefficients. J. Comput. Appl. Math. 2013, 239, 208–219.
14. Burgos, C.; Calatayud, J.; Cortés, J.C.; Villafuerte, L. Solving a class of random non-autonomous linear fractional differential equations by means of a generalized mean square convergent power series. Appl. Math. Lett. 2018, 78, 95–104.
15. Nouri, K.; Ranjbar, H. Mean square convergence of the numerical solution of random differential equations. Mediterr. J. Math. 2015, 12, 1123–1140.
16. Calatayud, J.; Cortés, J.C.; Jornet, M. Random differential equations with discrete delay. Stoch. Anal. Appl. 2019, 37, 699–707.
17. Calatayud, J.; Cortés, J.C.; Jornet, M. Lp-calculus approach to the random autonomous linear differential equation with discrete delay. Mediterr. J. Math. 2019, 16, 85.
18. Caraballo, T.; Cortés, J.C.; Navarro-Quiles, A. Applying the Random Variable Transformation method to solve a class of random linear differential equation with discrete delay. Appl. Math. Comput. 2019, 356, 198–218.
19. Zhou, T. A stochastic collocation method for delay differential equations with random input. Adv. Appl. Math. Mech. 2014, 6, 403–418.
20. Shi, W.; Zhang, C. Generalized polynomial chaos for nonlinear random delay differential equations. Appl. Numer. Math. 2017, 115, 16–31.
21. Licea-Salazar, J.A. The Polynomial Chaos Method with Applications to Random Differential Equations. Ph.D. Thesis, University of Texas at Arlington, Arlington, TX, USA, 2013.
22. Khusainov, D.Y.; Ivanov, A.; Kovarzh, I.V. Solution of one heat equation with delay. Nonlinear Oscil. 2009, 12, 260–282.
23. Øksendal, B. Stochastic Differential Equations: An Introduction with Applications; Springer Science & Business Media: New York, NY, USA, 1998.
24. Shaikhet, L. Lyapunov Functionals and Stability of Stochastic Functional Differential Equations; Springer Science & Business Media: New York, NY, USA, 2013.
25. Shaikhet, L. Stability of equilibrium states of a nonlinear delay differential equation with stochastic perturbations. Int. J. Robust Nonlinear Control 2017, 27, 915–924.
26. Benhadri, M.; Zeghdoudi, H. Mean square asymptotic stability in nonlinear stochastic neutral Volterra-Levin equations with Poisson jumps and variable delays. Funct. Approx. Comment. Math. 2018, 58, 157–176.
27. Santonja, F.J.; Shaikhet, L. Analysing social epidemics by delayed stochastic models. Discret. Dyn. Nat. Soc. 2012, 2012.
28. Liu, L.; Caraballo, T. Analysis of a stochastic 2D-Navier–Stokes model with infinite delay. J. Dyn. Differ. Equ. 2019, 31, 2249–2274.
29. Lupulescu, V.; Abbas, U. Fuzzy delay differential equations. Fuzzy Optim. Decis. Mak. 2012, 11, 99–111.
30. Krapivsky, P.L.; Luck, J.M.; Mallick, K. On stochastic differential equations with random delay. J. Stat. Mech. Theory Exp. 2011, 2011, P10008.
31. Garrido-Atienza, M.J.; Ogrowsky, A.; Schmalfuß, B. Random differential equations with random delays. Stochastics Dyn. 2011, 11, 369–388.
32. Calatayud, J. A Theoretical Study of a Short Rate Model. Master's Thesis, Universitat de Barcelona, Barcelona, Spain, 2016.
33. Cortés, J.C.; Villafuerte, L.; Burgos, C. A mean square chain rule and its application in solving the random Chebyshev differential equation. Mediterr. J. Math. 2017, 14, 35.
34. Cortés, J.C.; Jódar, L.; Villafuerte, L. Numerical solution of random differential equations: A mean square approach. Math. Comput. Model. 2007, 45, 757–765.
35. Braumann, C.A.; Cortés, J.C.; Jódar, L.; Villafuerte, L. On the random gamma function: Theory and computing. J. Comput. Appl. Math. 2018, 335, 142–155.
36. Khusainov, D.Y.; Pokojovy, M. Solving the linear 1D thermoelasticity equations with pure delay. Int. J. Math. Math. Sci. 2015, 2015.
37. Wolfram Mathematica, Version 11.2; Wolfram Research, Inc.: Champaign, IL, USA, 2017.
Figure 1. Expectation (up) and variance (down) of x_τ(t), Example 1.
Figure 2. Expectation (up) and variance (down) of x_τ(t), Example 2.
