Non-Gaussian Quadrature Integral Transform Solution of Parabolic Models with a Finite Degree of Randomness

: In this paper, we propose an integral transform method for the numerical solution of random mean square parabolic models that avoids the computational burden of storing intermediate information, a burden inherent to iterative methods. By combining the random Laplace transform method with Monte Carlo simulation and numerical integration of the Laplace transform inversion, we obtain a simple expression for the approximating stochastic process that makes the computation of the statistical moments of the approximation manageable.


Introduction
Since the seminal works [1,2], random uncertainty has gained prominence in mathematical models as a way to better fit the real world. The parameters, coefficients and initial/boundary conditions in real problems are subject to uncertainty, not only because of measurement error but also because of lack of access to measurements or heterogeneity of the media; see [3,4] for applications. However, approaches that succeed for deterministic problems are not always appropriate for random models. In particular, this occurs with iterative methods. The reason is that in the solution of random models it is not enough to provide the solution stochastic process (s.p.); one must also compute its statistical moments, such as the expectation and the variance. Iterative processes require storing intermediate random operational calculus, which quickly becomes unmanageable. This motivates the search for alternative approaches that yield a simple formal expression of the approximating s.p. solution and thereby make the computation of its statistical moments possible. The integral transform method is an efficient approach for both deterministic and random partial differential models when combined with quadrature rules to approximate the final infinite integral linked to the inverse integral transform solution [5,6]. Random uncertainty can be modelled in several ways, depending on whether one uses Brownian motion stochastic processes (s.p.'s) and Itô calculus, or other s.p.'s and mean square (m.s.) calculus [7]. We follow the m.s. approach for two main reasons: first, the m.s. approach reduces to the deterministic one when the random model is assumed to be deterministic [8]; second, when an s.p. is m.s. close to another, their expectations and variances are also close [6].
The use of an integral transform must be combined with an appropriate numerical integration technique for the numerical inversion. The quadrature method must cope with the oscillatory behaviour of the integrand, which suggests avoiding the Gaussian quadrature rules used in [6]; see also [9][10][11]. In the deterministic framework, the integral transform method has mainly been used for constant coefficient partial differential models [12], because it rests on the assumption that the transformed ordinary differential problem can be solved explicitly, which is not the case in general; see [13]. This approach has been used in the random setting in [5,6,8] for both the constant coefficient and the variable coefficient random m.s. case. Apart from the coefficients, randomness may appear in the initial/boundary conditions. Uncertainty in the model arises from impurities of materials, heterogeneities in the media, measurement errors, or the lack of or difficulty in access to measurements; see [6] and the references therein.
This paper deals with random parabolic partial differential models of the form (1)-(5), with appropriate conditions on the random coefficients p(x) and q(x), the random source term Φ(x, t), and the random initial/boundary condition s.p.'s g(x), f_1(t) and f_2(t), to be specified later, in the framework of uncertainty models with a finite degree of randomness [8,14], that is, models whose s.p.'s are functions depending on a finite number of random variables (r.v.'s). The model (1)-(5) is frequent in heat and mass transfer theory and the chemical engineering sciences [15], ([16] p. 388). This paper is organized as follows. Section 2 includes some notation, definitions and preliminary results about the mean square calculus of random differential equations and the random Laplace transform, as well as the construction of the formal s.p. solution of problem (1)-(5). In Section 3, we first construct an approximating s.p. solution by truncating the infinite Laplace inversion integral. Then, non-Gaussian random quadrature formulae express the approximating s.p. as a sum whose expectation and variance are easily computable. Algorithms to construct the numerical approximation are also included. In Section 4, numerical simulations combining the Monte Carlo method and the numerical integration of the Laplace transform inversion are carried out to check the efficiency of the proposed numerical methods.

Preliminaries and Formal Solution
In order to be comprehensive but not exhaustive, we recall some notation, definitions and results about m.s. calculus and the random Laplace transform; see [8]. Let (Ω, F, P) be a complete probability space, and let L^{m×n}_p(Ω) be the set of all random matrices Y = (y_ij)_{m×n} whose entries y_ij are random variables (r.v.'s) satisfying E[|y_ij|^p] < +∞; this means that y_ij ∈ L_p(Ω), where E[·] denotes the expectation operator. The space L^{m×n}_p(Ω) of all random matrices of size m × n, endowed with the matrix p-norm defined by ‖Y‖_p = Σ_{i=1}^{m} Σ_{j=1}^{n} ‖y_ij‖_p, where ‖y_ij‖_p = (E[|y_ij|^p])^{1/p}, is a Banach space. This definition of the matrix p-norm extends to matrix s.p.'s Y(t) = (y_ij(t))_{m×n} of L^{m×n}_p(Ω), where now each entry y_ij(t) ∈ L_p(Ω) for every 1 ≤ i ≤ m, 1 ≤ j ≤ n. The definitions of integrability, continuity and differentiability of a matrix function lying in L^{m×n}_p(Ω) follow in a natural manner using the matrix p-norm introduced above. The mean square case corresponds to p = 2, and the mean fourth case to p = 4. One has L^{m×n}_4(Ω) ⊂ L^{m×n}_2(Ω); see [6].
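As an illustration, the entrywise structure of this norm can be checked numerically. The sketch below is our own example (the convention of summing the entry p-norms follows the description above and is an assumption): it estimates ‖Y‖_2 for a 2 × 2 random matrix with independent standard normal entries, for which each ‖y_ij‖_2 = 1 and hence ‖Y‖_2 = 4.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 200_000                             # number of sampled realizations
# K samples of a 2x2 random matrix Y with i.i.d. standard normal entries
Y = rng.standard_normal((K, 2, 2))

p = 2
# entrywise p-norm ||y_ij||_p = (E[|y_ij|^p])^(1/p), estimated by sample means
entry_norms = np.mean(np.abs(Y) ** p, axis=0) ** (1.0 / p)
# matrix p-norm as the sum of the entry norms (assumed convention)
matrix_p_norm = entry_norms.sum()       # should be close to 4
```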
Definition 1. C is the class of all 2-s.p.'s h(t) defined on the real line whose 2-norm is of exponential order, i.e., there exist real constants c ≥ 0, called the abscissa of convergence, and M > 0 such that ‖h(t)‖_2 ≤ M e^{ct}, t ≥ 0. Taking into account Definition 1, the random Laplace transform of a 2-s.p. h(t) ∈ C is defined by the m.s. integral L[h(t)](s) = ∫_0^∞ e^{−st} h(t) dt, Re(s) > c; see [8]. Furthermore, if h(t) is m.s. differentiable and its m.s. derivative h′(t) belongs to class C, then L[h′(t)](s) = s L[h(t)](s) − h(0), (8) see [8]. If we know H(s) = L[h(t)](s), then the random inverse transform allows one to recover h(t) in terms of H(s): h(t) = (1/(2πi)) ∫_{α−i∞}^{α+i∞} e^{st} H(s) ds, α > c, (9) where i denotes the imaginary unit. Now assume that the coefficient p(x) in (1) is an m.s. differentiable 2-s.p., so that the m.s. derivative p′(x) exists and (1) can be written, for x > 0, t > 0, as Equation (10). Let us assume that Equation (10) has a solution 2-s.p. u(x, t) which, regarded as a function of the active variable t, u(x, ·), lies in the class C, and let U(x)(s) = L[u(x, ·)](s).
Assume that the source term Φ(x, t) is also a 2-s.p. in class C for each fixed x > 0, so that its transform Ψ(x)(s) = L[Φ(x, ·)](s) is well defined. Formal application of the random Laplace transform to both sides of Equation (10), using property (8) together with (11) and (12), and assuming that the Leibniz rule for the derivative of an integral (see [17]) holds true, one obtains Equation (13). Let us assume that the 2-s.p. p(x) has positive realizations for almost every (a.e.) event ξ ∈ Ω, i.e., condition (14). Under hypothesis (14), using (4) and (13), we can write (13) in the more compact form of a random ordinary differential equation (15) in the variable x. Assume that the random boundary conditions f_1(t) and f_2(t) are also 2-s.p.'s in class C, so that by applying the random Laplace transform to both sides of (2) and (3), admitting the Leibniz rule for the derivative of an integral, one obtains (16), where we have denoted F_1(s) = L[f_1(t)](s) and F_2(s) = L[f_2(t)](s). The m.s. random differential problem (15) and (16) can be transformed into an extended first-order random initial value problem (17) in the unknown Y(x)(s), where the random matrix A(x)(s) and the random vector B(x)(s) are given by (18). In order to guarantee the existence of an m.s. solution s.p., one needs the entries of the random matrix A(x)(s) to satisfy the moment condition of [18] for every x > 0, that is, condition (19), and the entries of B(x)(s) to be 4-integrable and lie in L_4(Ω); see [18]. In our case, this means that each data s.p. satisfies conditions (20) and (21). Under conditions (19)-(21), and assuming that the random vector initial condition verifies the corresponding integrability condition, there is an m.s. solution s.p. of (17) given by (23), where Φ_A(x; 0) is the random fundamental matrix solution of the random homogeneous matrix differential problem with A(x)(s) the random matrix defined in (18); see [18]. Continuing with the construction of the formal m.s. solution of problem (10), (2)-(5), using the random Laplace inverse Formula (9) and the notation (25), it follows that the solution admits the integral representation (26), where Y(x)(s) is given by (23). The convergence of the integral (26) will be guaranteed under hypothesis (27) below. The next step is the approximation of the s.p. u(x, t) by the numerical treatment of (26), so that its expression makes the computation of its statistical moments manageable.
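The reduction to a first-order system mentioned above is the standard companion-form construction. As a generic, hedged sketch (the coefficients a, b, c below are placeholders for whatever the transformed equation (15) produces, not the paper's exact expressions): writing a second-order equation for U(x)(s) and setting Y = (U, U′)ᵀ gives

```latex
U''(x)(s) = a(x,s)\,U'(x)(s) + b(x,s)\,U(x)(s) + c(x,s),
\qquad
Y(x)(s) = \begin{pmatrix} U(x)(s)\\ U'(x)(s) \end{pmatrix},
\]
\[
Y'(x)(s) =
\underbrace{\begin{pmatrix} 0 & 1\\ b(x,s) & a(x,s) \end{pmatrix}}_{A(x)(s)}
\, Y(x)(s) +
\underbrace{\begin{pmatrix} 0\\ c(x,s) \end{pmatrix}}_{B(x)(s)}.
```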
Because of the accuracy required over long time domains and the oscillatory character of the integrand, Gaussian quadrature is not advisable.
As u(x, t) is a real s.p., taking advantage of the relationship between the inverse Laplace transform and Fourier cosine integrals (see [19] and ([20] p. 28)), we can write u(x, t) = (2 e^{αt}/π) ∫_0^∞ Re[U(x)(α + i w)] cos(w t) dw, α > c, and, using (25), the corresponding expression in terms of Y(x)(α + i w).

Random Numerical Solutions
Under hypothesis (27), the random integral (26) is m.s. convergent, and the truncated s.p. approximate solution takes the form (29), which recovers (26) as R approaches infinity; here Y(x) is the solution s.p. of the random initial value problem (17). Let us write (29) in the form (30) for each realization ξ ∈ Ω. Let us consider the partition of the interval [0, R] given by the M + 1 points w_j = k j, so that k = ∆w = R/M, 0 ≤ j ≤ M. Using the random trapezoidal quadrature inspired by ([10] Section 3.5), one approximates (30) by (31), where the half weight is omitted at the right end due to the infinite nature of the original domain [0, +∞); see [19].
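As a concrete sketch of this trapezoidal inversion (a hedged illustration: the function name, parameter values and the test transform pair 1/(s + 1) ↔ e^{−t} are ours, not the paper's), the rule with half weight only at the left node reads:

```python
import numpy as np

def inverse_laplace_cos(F, t, alpha=0.5, k=0.05, M=8000):
    """Trapezoidal approximation of the cosine-form Bromwich inversion:
    f(t) ~ (2 e^{alpha t}/pi) * k * sum_j gamma_j Re[F(alpha + i w_j)] cos(w_j t),
    with half weight gamma_0 = 1/2 at the left end only."""
    w = k * np.arange(M + 1)             # quadrature nodes w_j = j k on [0, R], R = M k
    g = np.real(F(alpha + 1j * w)) * np.cos(w * t)
    g[0] *= 0.5                          # half weight at w = 0; right-end half weight omitted
    return (2.0 * np.exp(alpha * t) / np.pi) * k * np.sum(g)

# sanity check with the known pair F(s) = 1/(s + 1)  <->  f(t) = e^{-t}
approx = inverse_laplace_cos(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In the paper's setting the role of F is played by the transformed solution evaluated along the line s = α + i w.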
From (30) and (31), and taking into account all the realizations ξ ∈ Ω, one gets the approximate s.p. solution of problem (1)-(5), given by (32). Taking expectations in (32), one obtains the approximate expectation, and the corresponding variance follows analogously. Note that the m.s. solution Y(x)(·) involved in (32), given by (23), is not available except in a limited number of cases (see Example 5 of [18]), because the random fundamental matrix Φ_A(x; 0) is not known in general. This motivates, from a practical point of view, the search for alternative approximations via simulation, the so-called Monte Carlo approach [21]. Although it is well known that the speed of convergence of the Monte Carlo method is rather slow [22], it is a useful tool for our random problem in combination with random integral transform methods and numerical integration techniques. The Monte Carlo method provides E[Re[Y(x)(·)]] through the average over an appropriate number of realizations ξ ∈ Ω of the deterministic problem (17). Then the expectation and the corresponding variance obtained by K Monte Carlo (MC) simulations are computed by expressions (35) and (36), where the coefficients γ_j, j = 0, ..., M, were defined in (33). The numerical procedure is summarized as follows:
1: Take the coefficient p(x) ∈ L_4(Ω), a 4-integrable and 4-differentiable s.p. verifying condition (14).
2: Take the source term Φ(x, t) ∈ L_4(Ω) and the boundary conditions f_1(t) ∈ L_4(Ω) and f_2(t) ∈ L_4(Ω), all of them 4-integrable s.p.'s in class C.
3: Take the coefficient q(x) lying in L_2(Ω).
4: Take the initial condition g(x) lying in L_4(Ω), a 4-integrable s.p.
5: Check that conditions (20) and (21) on the data are verified.
6: Fix a point (x, t), with x > 0, t > 0.
7: Take the truncation end-point R and the number of subintervals M.
8: Compute the step-size k = ∆w using the relationship M k = R.
9: Take the value of the parameter α involved in the random trapezoidal quadrature.
10: for j = 0 to j = M do
11: Compute the M + 1 points w_j = k j and s_j = α + i w_j, and the values cos(w_j t).
12: end for
13: Carry out K realizations, ξ_i, 1 ≤ i ≤ K, of the r.v.'s involved in the random data of problem (1)-(5).
14: for each realization ξ_i, 1 ≤ i ≤ K, do
15: Compute the Laplace transform of the source term, Ψ(x, ξ_i)(s), and the Laplace transforms of the boundary conditions, F_1(s)(ξ_i) and F_2(s)(ξ_i).
16: end for
17: for j = 0 to j = M do
18: for i = 1 to i = K do
19: Compute the numerical solution of the deterministic initial value problem (17), and compute the real part of the K solutions obtained.
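The Monte Carlo stage of the algorithm can be sketched on a toy problem (our own illustration, not one of the paper's examples): take u(t; ξ) = e^{−b(ξ) t} with b ∼ Uniform(1, 2), whose Laplace transform per realization is U(s; ξ) = 1/(s + b(ξ)). Each realization is inverted with the trapezoidal quadrature, and the sample mean and variance play the role of the K-simulation expressions (35) and (36).

```python
import numpy as np

def invert(F, t, alpha=0.5, k=0.1, M=2000):
    # trapezoidal cosine-form Laplace inversion (half weight at w = 0 only)
    w = k * np.arange(M + 1)
    g = np.real(F(alpha + 1j * w)) * np.cos(w * t)
    g[0] *= 0.5
    return (2.0 * np.exp(alpha * t) / np.pi) * k * np.sum(g)

rng = np.random.default_rng(0)
K, t = 2000, 1.0
b = rng.uniform(1.0, 2.0, size=K)      # K realizations of the random parameter
samples = np.array([invert(lambda s: 1.0 / (s + bi), t) for bi in b])

mean_mc = samples.mean()               # Monte Carlo estimate of E[u(t)]
var_mc = samples.var(ddof=1)           # unbiased sample variance
exact_mean = (np.exp(-t) - np.exp(-2.0 * t)) / t   # E[e^{-b t}] for b ~ U(1, 2)
```

With K = 2000 the Monte Carlo mean matches the exact expectation to about two decimal places, consistent with the slow O(K^{−1/2}) convergence noted above.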

Numerical Examples and Simulations
In this section, we check, by means of two illustrative examples, that the proposed method is efficient in the presence of randomness and keeps the involved computational complexity manageable. We begin with an example where the exact solution in the deterministic case has a known closed-form expression, permitting a reliable comparison for each sample realization.
Example 1
Problem (37) has the exact solution given by (38) with (39), where a_0 and b_0 are strictly positive real numbers.
Using the integral representation of erfc z and taking the values (41), from (38), (39) and (41) it follows (42). For the evaluation of the last integral in (42), let us consider the integral 7.4.37 on page 304 of [24], which is expressed in terms of ρ(z) = e^{−z²} erfc(−i z) with arguments z = ±(d_1 η + i d_2/η). From (42) and (43), for d_1 = −b_0 and d_2 = x²/(4 a_0), it is easy to obtain explicitly the exact solution (44) of problem (37). Expression (44), considering a_0 = a and b_0 = b as the r.v.'s taken in this Example 1, will be useful for comparison with our proposed numerical solution s.p. Now, in order to obtain an approximate solution s.p. of problem (1)-(5) with the data considered in this Example 1 using the developed random Laplace transform method, the associated second-order ordinary differential Equation (13) takes the form (45). Then, as a(ξ) > 0 and b(ξ) > 0 for a.e. realization ξ ∈ Ω, one gets the real general random solution (46) of (45). By imposing condition (5), it is obtained that c_2(s) = 0; thus, from (46), we have c_1(s) = −1/(s(b + s)), and the random solution of the Laplace transformed problem is given by (47). Thus, the approximate solution s.p. (32) for Example 1 takes the form (48). Table 1 collects the root mean square errors (RMSEs) [25] for n = 12 spatial points in the spatial domain [0.5, 6], x_i = i h, 1 ≤ i ≤ n, with h = ∆x = 0.5, computed for k = 0.2 and K = 5000 realizations, using expressions (49) and (50), where E[u(x_i, t)] and Var[u(x_i, t)] represent the statistical moments of the exact solution (44), considering a_0 = a and b_0 = b as the r.v.'s taken in this Example 1. The good behaviour of both approximations, the expectation and the standard deviation, obtained by our proposed method is observed. Figure 1 shows that the approximations to the expectation improve as the length R increases, that is, the absolute errors decrease.
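The error measures (49) and (50) are standard root mean square errors over the spatial grid; a minimal sketch (the function name is ours) is:

```python
import numpy as np

def rmse(exact, approx):
    """Root mean square error over the n spatial grid points,
    as used for the comparisons collected in Table 1."""
    exact = np.asarray(exact, dtype=float)
    approx = np.asarray(approx, dtype=float)
    return float(np.sqrt(np.mean((exact - approx) ** 2)))

# e.g. comparing exact vs approximated expectations on a two-point grid
err = rmse([0.0, 0.0], [3.0, 4.0])   # sqrt((9 + 16)/2) = sqrt(12.5)
```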
Computations have been carried out with Mathematica© software version 11.3.0.0 [26], for Windows 10 Pro (64-bit), on an Intel(R) Core(TM) i7-7820X CPU, 3.60 GHz, 8 kernels.
The CPU times (in seconds) spent in the Wolfram Language kernel are included in Table 1 as well as in the subsequent tables. Secondly, varying the step-size of the trapezoidal quadrature, k_ℓ, 1 ≤ ℓ ≤ 5, with both R and the number of realized events K fixed, the computed RMSEs are shown in Table 2. Finally, in the last experiment, we have taken several numbers of realizations K_i in the Monte Carlo method, checking the improvement of the approximations obtained for both the expectation and the standard deviation; see Table 3. It is observed that for a number of events K equal to 1600 the results are sufficiently precise. In this sense, Figure 2 illustrates the decreasing trend of the absolute errors of both statistical moments as the number of simulations increases. Table 2. Example 1. RMSEs for the expectation (49) and the standard deviation (50), considering the discretization of the spatial domain [0.5, 6] as x_i = i h, 1 ≤ i ≤ 12, h = ∆x = 0.5, taking fixed α = 0.5, K = 5000 simulations and R = 16, and varying the step-size of the trapezoidal quadrature, k ∈ {0.8, 0.4, 0.2, 0.1, 0.05}, at the temporal instant t = 1.

Example 2
Let us consider the random parabolic model (1)-(5) with the choice of data given in (51). In order to evidence the convergence of the approximations, we compute the root mean square deviations (RMSDs) between consecutive approximations of the mean and the standard deviation in two stages: firstly, varying the truncation end-point R, by means of expressions (53) and (54); and secondly, varying the step-size k of the trapezoidal quadrature, by means of (55) and (56); in both cases on the spatial domain [0.1, 1.5] with n = 15 points, x_i = i h, 1 ≤ i ≤ n, h = ∆x = 0.1. Table 4 collects the RMSDs (53) and (54), taking k = 0.5 and the number of realized events fixed at K = 1600. The choice of the values of both parameters k and K is motivated by the good results obtained in Example 1. Figures 3 and 4a show the convergence of the approximations of the expectation (35) and the standard deviation (36) as R increases. This behaviour is also depicted in Figures 3 and 4b, which show the decreasing trend of the absolute deviations (AbsDev) between consecutive approximations of the expectation and the standard deviation. Computations have been carried out with Mathematica© software version 11.3.0.0 for Windows 10 Pro (64-bit), on an AMD Ryzen Threadripper 2990WX 32-Core Processor, 3.00 GHz. As regards the convergence of the approximations of both the expectation and the standard deviation when the parameter k changes, Table 5 collects the RMSDs (55) and (56), taking R = 32 and the number of realized events fixed at K = 1600. The decreasing behaviour of these RMSDs is in full agreement with the results shown in Figures 5 and 6, which illustrate how the successive approximations of both the expectation and the standard deviation approach each other as k decreases. Tables 6 and 7 show the timings (CPU time spent in the Wolfram Language kernel) needed to compute both statistical moments of the approximate solution (32) plotted in Figures 3-6a. Table 6. Example 2.
CPU time spent to compute the approximations of the expectation (35) and the standard deviation (36) of (47) and (48), considering the discretization of the spatial domain [0.1, 1.5] as x_i = i h, 1 ≤ i ≤ 15, h = ∆x = 0.1, taking K = 1600 simulations and k = 0.5, at the temporal instant t = 1.

Conclusions
In this paper, we focus on the numerical approximation of the solution s.p. and its statistical moments for random heterogeneous parabolic partial differential models, following the mean square approach. The random integral transform method is used, avoiding iterative methods, which are not practical because of the complex nature of the random problem. For the computation of the integrals arising when the random inverse integral transform is applied, we choose non-Gaussian quadrature methods instead of the commonly used Gaussian ones, due to the highly oscillatory nature of the involved integrands. Simulations are performed using the Monte Carlo method. Illustrative examples show the convergence of the proposed method and the sensitivity of the approximate expectation and standard deviation to the number of realizations considered.