Article

Solving Second-Order Linear Differential Equations with Random Analytic Coefficients about Regular-Singular Points

by Juan-Carlos Cortés 1,†, Ana Navarro-Quiles 2,*,†, José-Vicente Romero 1,† and María-Dolores Roselló 1,†
1 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
2 Department of Statistics and Operational Research, Universitat de València, Dr. Moliner 50, 46100 Burjassot, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(2), 230; https://doi.org/10.3390/math8020230
Submission received: 17 January 2020 / Revised: 4 February 2020 / Accepted: 5 February 2020 / Published: 10 February 2020

Abstract: In this contribution, we construct approximations for the density associated with the solution of second-order linear differential equations whose coefficients are analytic stochastic processes about regular-singular points. Our analysis is based on the combination of a random Fröbenius technique together with the random variable transformation technique, assuming mild probabilistic conditions on the initial conditions and coefficients. The new results complete the ones recently established by the authors for the same class of stochastic differential equations, but about regular points. In this way, this new contribution allows us to study, for example, the important randomized Bessel differential equation.

1. Introduction and Motivation

As a main difference with respect to deterministic (or classical) differential equations, solving a random differential equation (RDE) does not only consist of determining, exactly or approximately, its solution stochastic process (SP), say $X(t)$, but also of computing its main statistical properties, such as the mean and the variance. Even more ambitious and useful is the approximation of the first probability density function (1-PDF), say $f_1(x,t)$, of the solution SP, since from it one can calculate, via integration, the mean and the variance of $X(t)$,
\[ \mu_X(t) = \mathbb{E}[X(t)] = \int_{\mathbb{R}} x\, f_1(x,t)\, dx, \qquad V_X(t) = \int_{\mathbb{R}} \left( x - \mu_X(t) \right)^2 f_1(x,t)\, dx, \]
as well as its higher one-dimensional statistical moments (such as asymmetry, kurtosis, etc.),
\[ \mathbb{E}\left[ (X(t))^k \right] = \int_{\mathbb{R}} x^k\, f_1(x,t)\, dx, \quad k = 1, 2, 3, \ldots, \]
provided they exist. Moreover, given a fixed time instant $\hat{t}$, the 1-PDF permits calculating key information, such as the probability that the solution lies in any specific interval of interest, say $[a,b]$, just via its integration,
\[ \mathbb{P}\left[ \{ \omega \in \Omega : a \leq x(\hat{t},\omega) \leq b \} \right] = \int_a^b f_1(x,\hat{t})\, dx, \quad a, b \in \mathbb{R}. \]
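As a quick numerical illustration of how these quantities follow from a 1-PDF by quadrature, the sketch below recovers the mean, variance, and an interval probability. The Gaussian density used here is a purely hypothetical stand-in for $f_1(x,t)$ at a fixed $t$, not a density derived in this paper.

```python
import numpy as np

# Illustrative sketch: recover mu_X(t), V_X(t), and P(a <= X(t) <= b) from
# a 1-PDF by numerical quadrature.  The Gaussian below (mean 2.0, standard
# deviation 0.5) is a hypothetical stand-in for f1(x, t) at a fixed t.
MU, SIGMA = 2.0, 0.5

def f1(x):
    return np.exp(-(x - MU) ** 2 / (2 * SIGMA**2)) / (SIGMA * np.sqrt(2 * np.pi))

def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(-6.0, 10.0, 100001)
pdf = f1(x)

mean = trap(x * pdf, x)                  # mu_X(t) = integral of x f1(x,t) dx
var = trap((x - mean) ** 2 * pdf, x)     # V_X(t)
a, b = 1.5, 2.5
mask = (x >= a) & (x <= b)
prob = trap(pdf[mask], x[mask])          # P(a <= X(t) <= b)
```

For the Gaussian stand-in, the interval $[1.5, 2.5]$ is the one-standard-deviation band, so the computed probability should be close to the familiar value $\approx 0.683$.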
The computation of the 1-PDF in the setting of RDEs and their applications is currently a cutting-edge topic for which a number of interesting advances have been achieved in the recent literature. Here, we highlight the following contributions that are strongly related to the goal of the present paper [1,2,3,4]. The main aim of this paper is to contribute to advancing the field of RDEs by extending important classical results to the stochastic context. Specifically, we address the problem of constructing reliable approximations for the 1-PDF of the solution SP to second-order linear differential equations whose coefficients are analytic SPs depending on a random variable (RV) about a regular-singular point and whose initial conditions (ICs) are also RVs. Therefore, we are dealing with what is often called RDEs having a finite degree of randomness [5] (p. 37). In this manner, we complete the analysis performed in our previous contribution [6], where we dealt with the ordinary point case. It is important to emphasize that the analysis of second-order linear random differential equations has been performed in previous contributions using the random mean square calculus [5] (Ch. 4) (see [7,8,9,10,11,12] for the Airy, Hermite, Legendre, Laguerre, Chebyshev, and Bessel equations, respectively) and using other approaches like the random homotopy method [13,14], the Adomian method [15], the differential transformation method [16], the variational iteration method [17], etc. However, in all these contributions, only approximations for the first two statistical moments (mean and variance) of the solution were calculated.
In contrast, in [6] and in the present paper, we deal with the general form of the aforementioned second-order linear RDEs, and furthermore, we provide additional key information on the solution via the computation of approximations for the 1-PDF that, as previously indicated, permits calculating not only the mean and the variance, but also higher moments of the solution, as well as further relevant information, such as the probability that the solution lies in specific sets of interest. To the best of our knowledge, these results for the 1-PDF of second-order linear RDEs about regular-singular points are new, and thus they contribute to advancing the setting of this important class of RDEs. At this point, it is important to underscore the main differences between our previous contribution [6] and the present paper. In [6], we provided a comprehensive study of second-order linear differential equations with random analytic coefficients about ordinary points. Now, we propose a natural continuation of [6] by extending the analysis to the same class of differential equations, but about regular-singular points, whose mathematical nature is completely distinct. Our aim is to complete the stochastic analysis for this important type of differential equation, inspired by the well-known development in the deterministic setting, which first deals with ordinary points and then with regular-singular points via the Fröbenius theorem. It is important to point out that, although the two problems appear strongly similar, they concern differential equations of a different nature. In this regard, this new contribution allows us to study, for example, the important randomized Bessel differential equation, which does not fall within the mathematical setting of the previous contribution [6].
For the sake of completeness, below, we summarize the main results about the deterministic theory of second-order linear differential equations about ordinary and regular-singular points. These results will be very useful for the subsequent development. Let us then consider the second-order linear differential equation:
\[ a_0(t)\, \ddot{x}(t) + a_1(t)\, \dot{x}(t) + a_2(t)\, x(t) = 0, \]
where the coefficients $a_i(t)$, $i = 0, 1, 2$, are analytic functions at a certain point, say $t_0$, i.e., they admit convergent Taylor series expansions about $t_0$ (in practice, these coefficients are often polynomials, which are analytic everywhere). To study when this equation admits a power series solution centered at the point $t_0$, say $x(t) = x_0 + x_1(t-t_0) + x_2(t-t_0)^2 + \cdots$ (where the coefficients $x_i$, $i = 0, 1, 2, \ldots$, must be determined so that this series satisfies the differential equation), it is convenient to recast Equation (2) in its standard or canonical form:
\[ \ddot{x}(t) + p(t)\, \dot{x}(t) + q(t)\, x(t) = 0, \]
where $p(t) = a_1(t)/a_0(t)$ and $q(t) = a_2(t)/a_0(t)$. The key question is how to pick the center of the previous expansion, $t_0$, because this choice fully determines the region of convergence of the power series. To this end, $t_0$ is classified as an ordinary or a singular point. Specifically, $t_0$ is called an ordinary point of the differential Equation (3) (or equivalently, (2)) if the coefficients $p(t)$ and $q(t)$ are both analytic at $t_0$; otherwise, $t_0$ is termed a singular point. Recall that the quotient of analytic functions is also an analytic function provided the denominator is distinct from zero. Therefore, if $a_0(t_0) \neq 0$ in (2) (and using that its coefficients are analytic functions about $t_0$), then $t_0$ is an ordinary point. In the particular case that $a_i(t)$, $i = 0, 1, 2$, are polynomials without common factors, then $t_0$ is an ordinary point of (2) if and only if $a_0(t_0) \neq 0$. It is well known that when $t_0$ is an ordinary point of differential Equation (2), then its general solution can be expressed as $x(t) = \sum_{n \geq 0} x_n (t-t_0)^n = \eta_1 x_1(t) + \eta_2 x_2(t)$, where $\eta_1, \eta_2$ are free constants and $x_1(t)$ and $x_2(t)$ are two linearly independent series solutions of (2) centered at the point $t_0$. These series are also analytic about $t_0$, i.e., they converge in a certain common interval centered at $t_0$: $0 \leq |t-t_0| < \rho$ ($\rho = \min(\rho_1, \rho_2)$, $\rho_i$ being the radius of convergence of $x_i(t)$, $i = 1, 2$, respectively). In the important case that the coefficients $a_i(t)$, $i = 0, 1, 2$, are polynomials, the radius of convergence $\rho > 0$ of both series is at least as great as the distance from $t_0$ to the nearest root of $a_0(t) = 0$. As a consequence, if the leading coefficient $a_0(t)$ is constant, then a power series solution expanded about any point can be found, and this power series will converge on the whole real line.
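The classification above is easy to mechanize for polynomial coefficients. The sketch below checks $a_0(t_0) \neq 0$ and bounds the radius of convergence by the distance to the nearest root of $a_0$; the Legendre equation, with $a_0(t) = 1 - t^2$, is used as a hypothetical test case (it is not one of this paper's worked examples).

```python
import numpy as np

# Illustrative sketch of the criterion above for polynomial coefficients
# without common factors: t0 is an ordinary point iff a0(t0) != 0, and a
# series solution about an ordinary t0 converges at least up to the
# nearest (possibly complex) root of a0.  Coefficient lists follow the
# numpy convention (highest degree first).

def classify(a0_coeffs, t0, tol=1e-12):
    if abs(np.polyval(a0_coeffs, t0)) <= tol:
        return "singular", None
    roots = np.roots(a0_coeffs)
    radius = float(min(abs(r - t0) for r in roots)) if len(roots) else float("inf")
    return "ordinary", radius

a0 = [-1.0, 0.0, 1.0]              # a0(t) = 1 - t^2, roots at t = +/- 1
kind0, rho0 = classify(a0, 0.0)    # ordinary point, radius of convergence >= 1
kind1, _ = classify(a0, 1.0)       # singular point
```

Complex roots of $a_0$ are handled automatically, since the distance $|r - t_0|$ is taken in the complex plane, consistent with the deterministic theory recalled above.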
In [6], we solved, in the probabilistic sense previously explained, the randomization of Equation (3) by assuming that coefficients p ( t ) = p ( t ; A ) and q ( t ) = q ( t ; A ) depend on a common RV, denoted by A, together with two random ICs fixed at the ordinary point t 0 , namely X ( t 0 ) = Y 0 and X ˙ ( t 0 ) = Y 1 .
The study of Equation (3) (or equivalently, (2)) about a singular point requires further distinguishing between regular-singular points and irregular-singular points. In this paper, we shall deal with the former case, which occurs when $p(t)$ approaches infinity no more rapidly than $1/(t-t_0)$ and $q(t)$ no more rapidly than $1/(t-t_0)^2$, as $t \to t_0$. In other words, $p(t)$ and $q(t)$ have only weak singularities at $t_0$, i.e., writing Equation (3) in the form:
\[ \ddot{x}(t) + \frac{\hat{p}(t)}{t-t_0}\, \dot{x}(t) + \frac{\hat{q}(t)}{(t-t_0)^2}\, x(t) = 0, \]
where:
\[ \hat{p}(t) = (t-t_0)\, p(t) \quad \text{and} \quad \hat{q}(t) = (t-t_0)^2\, q(t), \]
then p ^ ( t ) and q ^ ( t ) are analytic about t 0 . Otherwise, the point t 0 is called an irregular-singular point. In the case that p ^ ( t ) and/or q ^ ( t ) defined in (4) become indeterminate forms at t 0 , the situation is determined by the limits:
\[ \hat{p}_0 = \hat{p}(t_0) = \lim_{t \to t_0} \hat{p}(t) = \lim_{t \to t_0} (t-t_0)\, p(t) \quad \text{and} \quad \hat{q}_0 = \hat{q}(t_0) = \lim_{t \to t_0} \hat{q}(t) = \lim_{t \to t_0} (t-t_0)^2\, q(t). \]
If $\hat{p}_0 = 0 = \hat{q}_0$, then $t = t_0$ may be an ordinary point of the differential equation $(t-t_0)^2 \ddot{x}(t) + (t-t_0)\hat{p}(t)\dot{x}(t) + \hat{q}(t)x(t) = 0$ (or equivalently, dividing it by $(t-t_0)^2$ and taking into account (5), of the differential Equation (3), i.e., $\ddot{x}(t) + p(t)\dot{x}(t) + q(t)x(t) = 0$). Otherwise, if both limits in (6) exist and are finite (and distinct from zero), then $t = t_0$ is a regular-singular point, while if either limit fails to exist or is infinite, then $t = t_0$ is an irregular-singular point. As has been previously underlined, the most common case in applications, in dealing with differential equations of the form (4), is when $\hat{p}(t)$ and $\hat{q}(t)$ are both polynomials. In such a case, $\hat{p}_0$ and $\hat{q}_0$ are simply the coefficients of the terms $(t-t_0)^0$ of these polynomials, expressed in powers of $t-t_0$, so $t_0$ is a regular-singular point. When $t_0$ is a regular-singular point, once the differential Equation (2) is written in the form (4), we formally work in a deleted (or punctured) neighborhood of $t_0$, say $0 < |t-t_0| < \rho$, $\rho > 0$. The solution of (4) (equivalently, of (2)) is then sought via generalized power series (often called Fröbenius series) centered at the point $t_0$, $x(t) = |t-t_0|^r \sum_{n \geq 0} x_n (t-t_0)^n$, $0 < |t-t_0| < \rho$, with $x_0 \neq 0$, where the (real or complex) value $r$, as well as the coefficients $x_n$, $n = 0, 1, 2, \ldots$, must be determined by imposing that this series satisfies the differential Equation (4) (equivalently, (2)). This leads to the fact that the parameter $r$ must satisfy the following quadratic equation:
\[ r(r-1) + \hat{p}_0\, r + \hat{q}_0 = 0. \]
This equation is called the indicial equation of differential Equation (8), and its two roots, r 1 and r 2 (possibly equal), are called the exponents of the differential equation at the regular-singular point t 0 ,
\[ (t-t_0)^2\, \ddot{x}(t) + (t-t_0)\, \hat{p}(t)\, \dot{x}(t) + \hat{q}(t)\, x(t) = 0, \]
which is obtained after multiplying (4) by ( t t 0 ) 2 . The full solution of differential Equation (8) (or equivalently, (2)) is given in the following theorem in terms of the nature of r 1 and r 2 .
Theorem 1.
(Fröbenius method) [18] (p. 240). Let us consider the differential Equation (2) whose coefficients are analytic about t 0 , and assume that p ^ ( t ) and q ^ ( t ) defined in (4) and (5) are analytic about t 0 . Let us assume that t 0 is a regular-singular point of the differential Equation (2). Let r 1 and r 2 be the roots of the indicial equation associated with (8) at the point t 0 :
\[ r(r-1) + \hat{p}_0\, r + \hat{q}_0 = 0, \]
where $\hat{p}_0$ and $\hat{q}_0$ are defined in (6). Without loss of generality, we assume that $\mathrm{Re}(r_1) \geq \mathrm{Re}(r_2)$, where $\mathrm{Re}(\cdot)$ stands for the real part.
Then, the differential Equation (2) has two linearly independent solutions, x 1 ( t ) and x 2 ( t ) , in a certain deleted neighborhood centered at t 0 , of the following form:
(a)
If $r_1 - r_2$ is not a non-negative integer (i.e., $r_1 - r_2 \neq 0, 1, 2, \ldots$),
\[ x_1(t) = |t-t_0|^{r_1} \sum_{n=0}^{\infty} x_{1,n}\, (t-t_0)^n \quad \text{and} \quad x_2(t) = |t-t_0|^{r_2} \sum_{n=0}^{\infty} x_{2,n}\, (t-t_0)^n, \]
where $x_{1,0} \neq 0$ and $x_{2,0} \neq 0$.
(b)
If $r_1 - r_2$ is a positive integer (i.e., $r_1 - r_2 = 1, 2, \ldots$),
\[ x_1(t) = |t-t_0|^{r_1} \sum_{n=0}^{\infty} x_{1,n}\, (t-t_0)^n \quad \text{and} \quad x_2(t) = |t-t_0|^{r_2} \sum_{n=0}^{\infty} x_{2,n}\, (t-t_0)^n + c\, x_1(t)\, \log|t-t_0|, \]
where $x_{1,0} \neq 0$, $x_{2,0} \neq 0$, and $c$ is a constant, which may or may not be distinct from zero.
(c)
If r 1 = r 2 ,
\[ x_1(t) = |t-t_0|^{r_1} \sum_{n=0}^{\infty} x_{1,n}\, (t-t_0)^n, \quad \text{and} \quad x_2(t) = |t-t_0|^{r_1+1} \sum_{n=0}^{\infty} x_{2,n}\, (t-t_0)^n + x_1(t)\, \log|t-t_0|, \]
where $x_{1,0} \neq 0$.
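A concrete deterministic illustration of case (a): for the Bessel equation of order $\nu$, $t^2\ddot{x} + t\dot{x} + (t^2 - \nu^2)x = 0$, one has $\hat{p}(t) = 1$ and $\hat{q}(t) = t^2 - \nu^2$, hence $\hat{p}_0 = 1$, $\hat{q}_0 = -\nu^2$, and the exponents are $r = \pm\nu$. The sketch below builds the truncated Fröbenius series for $r_1 = \nu$ with the classical recurrence $x_n = -x_{n-2}/(n(n+2\nu))$ and, for $\nu = 1/2$, checks it against the closed form $\sin(t)/\sqrt{t}$ (a standard textbook example, not one of the paper's randomized examples).

```python
import math

# Illustrative sketch: exponents and a truncated Froebenius series for the
# (deterministic) Bessel equation t^2 x'' + t x' + (t^2 - nu^2) x = 0 about
# the regular-singular point t0 = 0, where p0 = 1 and q0 = -nu^2.

def exponents(p0, q0):
    # roots of the indicial equation r(r-1) + p0*r + q0 = 0 (real case)
    disc = math.sqrt((p0 - 1.0) ** 2 - 4.0 * q0)
    return ((1.0 - p0 + disc) / 2.0, (1.0 - p0 - disc) / 2.0)

def frobenius(t, nu, N):
    # truncated series |t|^nu * sum_{n=0}^{N} x_n t^n, with x_0 = 1, x_1 = 0
    # and the recurrence x_n = -x_{n-2} / (n (n + 2 nu)) for n >= 2
    coeffs = [1.0, 0.0]
    for n in range(2, N + 1):
        coeffs.append(-coeffs[n - 2] / (n * (n + 2.0 * nu)))
    return abs(t) ** nu * sum(c * t**n for n, c in enumerate(coeffs))

nu = 0.5
r1, r2 = exponents(1.0, -nu * nu)          # expect (nu, -nu)
approx = frobenius(1.3, nu, 20)
exact = math.sin(1.3) / math.sqrt(1.3)     # closed form for nu = 1/2
```

Here $r_1 - r_2 = 2\nu = 1$, so strictly this $\nu$ sits in case (b); the first solution $x_1(t)$, which is all the sketch computes, has the same truncated form in both cases.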
As was previously indicated, in this contribution, the objective is to extend the analysis performed in [6] for the aforementioned randomization of differential Equation (3) (equivalently of (2)) about an ordinary point t 0 , to the case that t 0 is a regular-singular point. Specifically, we will consider the following second-order linear RDE:
\[ \ddot{X}(t) + p(t;A)\, \dot{X}(t) + q(t;A)\, X(t) = 0, \]
together with the following random ICs:
\[ X(t_1) = Y_0, \qquad \dot{X}(t_1) = Y_1, \]
fixed at an initial instant $t_1$ belonging to a certain deleted interval centered at $t_0$, $\hat{T}_0 = \{ t : 0 < |t-t_0| < \rho \}$, $\rho > 0$. Then, we will construct approximations of the 1-PDF, $f_1(x,t)$, of the solution SP, $X(t)$, to this random initial value problem (IVP) via a random Fröbenius series centered at the regular-singular point $t_0$. In (10), $p(t;A)$ and $q(t;A)$ are SPs satisfying certain hypotheses, which will be stated later, and which depend on the common RV denoted by $A$. Here, we assume that we are working on a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$, to which the real RVs $A$, $Y_0$, and $Y_1$ belong. Furthermore, we assume that these three RVs have a joint density, i.e., the random vector $(A, Y_0, Y_1)$ is absolutely continuous.
As usual, observe that in (10) and (11), we are writing RVs with capital letters, for instance, $Z : \Omega \to D_Z \subseteq \mathbb{R}$ (here, $D_Z$ is referred to as the range of $Z$). Moreover, each realization of an RV, say $Z$, will be denoted by $z(\omega)$, $\omega \in \Omega$, or simply $z = z(\omega) \in D_Z$. To be as general as possible, in the subsequent analysis, $A$, $Y_0$, and $Y_1$ are assumed to be dependent absolutely continuous RVs, and $f_{A,Y_0,Y_1}(a,y_0,y_1)$ will denote their joint PDF. Notice that if $A$, $Y_0$, and $Y_1$ are independent RVs, then their joint PDF can be factorized as the product of their respective marginal PDFs, i.e.,
\[ f_{A,Y_0,Y_1}(a,y_0,y_1) = f_A(a)\, f_{Y_0}(y_0)\, f_{Y_1}(y_1). \]
Based on our previous contribution [6], together with Theorem 1 and further reasons that will be apparent later, hereinafter, we will assume the following hypotheses:
H0: $A$ is a bounded RV, i.e., $\exists\, M_A > 0 : |a(\omega)| < M_A$, $\forall\, \omega \in \Omega$.
H1: f A , Y 0 , Y 1 ( a , y 0 , y 1 ) is continuous in the second component and bounded, i.e.,
\[ \exists\, M_f > 0 : f_{A,Y_0,Y_1}(a,y_0,y_1) < M_f, \quad \forall\, (a,y_0,y_1) \in D_{A,Y_0,Y_1}. \]
In addition, we will assume that the SPs $\hat{p}(t;A) = (t-t_0)\, p(t;A)$ and $\hat{q}(t;A) = (t-t_0)^2\, q(t;A)$ are analytic about $(t_0; a_0(\omega))$ for every $a_0(\omega) \in D_A$, $\omega \in \Omega$, i.e.,
H2: There exists a common neighborhood $N_{\hat{p},\hat{q}}(t_0; a_0(\omega)) \subseteq \hat{T}_0 \times D_A$ where:
\[ \hat{p}(t;A) = (t-t_0)\, p(t;A), \quad \hat{q}(t;A) = (t-t_0)^2\, q(t;A) \ \text{are analytic}, \quad \forall\, a_0(\omega) \in D_A, \ \omega \in \Omega. \]
Here, $\hat{T}_0$ denotes the deleted interval centered at $t_0$ that has been previously defined. Recall that the SP $\hat{p}(t;A) = (t-t_0)\, p(t;A)$ is analytic about a point $(t_0, a_0(\omega))$, $a_0(\omega) \in D_A$, for all $\omega \in \Omega$, if the deterministic function $(t-t_0)\, p(t; a(\omega))$ is analytic about $(t_0, a_0(\omega))$ (and analogously for the SP $\hat{q}(t;A) = (t-t_0)^2\, q(t;A)$). To simplify the notation, we shall assume that $\hat{p}(t;A)$ and $\hat{q}(t;A)$ are analytic in a common neighborhood, which will be denoted by $N_{\hat{p},\hat{q}}(t_0; a_0(\omega))$. In practice, this neighborhood is determined by intersecting the domains of analyticity of $\hat{p}(t;A)$ and $\hat{q}(t;A)$. In [5] (Th. 4.4.3), a characterization of the analyticity of second-order SPs (those having finite variance) was stated in terms of the analyticity of the correlation function. Moreover, to ensure that the IVP (10)–(11) has a unique solution, both SPs $p(t;A)$ and $q(t;A)$ are assumed to satisfy all the needed conditions (see [5] (Th. 5.1.2), for instance).
According to the Fröbenius method (see Theorem 1) and under the analyticity condition assumed in Hypothesis H2, the solution SP of RDE (10) about a regular-singular point, t 0 , can be written as a linear combination of two uniformly convergent independent random series, X 1 ( t ; A ) and X 2 ( t ; A ) ,
\[ X(t) = K_0(A, Y_0, Y_1)\, X_1(t;A) + K_1(A, Y_0, Y_1)\, X_2(t;A), \]
where $t \in \hat{T}_0$, and the coefficients $K_0(A, Y_0, Y_1)$ and $K_1(A, Y_0, Y_1)$ can be obtained in terms of the random ICs given in (11), which are established at the time instant $t_1 \in \hat{T}_0$. In the subsequent development, we will take advantage of the above representation, along with the application of the random variable transformation (RVT) technique [5] (p. 25), [6] (Th. 1), to construct the 1-PDF, $f_1^N(x,t)$, corresponding to approximations, $X^N(t)$, of the solution SP, $X(t)$, about the regular-singular point $t_0$. We shall provide sufficient conditions so that $f_1^N(x,t)$ approximates the 1-PDF, $f_1(x,t)$, of $X(t)$. The RVT method has been successfully applied to obtain exact or approximate representations of the 1-PDF of the solution SP to random ordinary/partial differential and difference equations and systems [1,2,3,4]. In particular, the RVT technique has also been applied to conduct the study of the random IVP (10) and (11) in the case that $t_0$ is an ordinary point [6]. Thus, the present contribution can be considered a natural continuation of our previous paper [6]. This justifies that, in our subsequent development, we directly refer to [6] when applying a number of technical results already stated in the above-mentioned contribution. In this manner, we particularly avoid repeating the multidimensional random variable transformation technique [6] (Th. 1), Poincaré's expansion [6] (Th. 2), as well as several results related to uniform convergence that can be inferred from classical real analysis [6] (Prop. 1–4). For the sake of clarity, we point out that we keep the notation identical in both contributions. These results were also extensively applied in [6,19,20,21].
The paper is organized as follows. In Section 2, the 1-PDF, f 1 N ( x , t ) , of the approximate solution SP, X N ( t ) , to the random IVP (10) and (11) is formally constructed. This function is obtained by applying the RVT method to X N ( t ) , which follows from truncating the random generalized power series solution derived after applying the Fröbenius method stated in Theorem 1. Section 3 is devoted to rigorously proving the convergence of approximations f 1 N ( x , t ) to the exact 1-PDF, f 1 ( x , t ) , associated with the solution SP, X ( t ) . In Section 4, several illustrative examples are shown to demonstrate the usefulness of the theoretical results established in Section 2 and Section 3. Our main conclusions are drawn in Section 5.

2. Computation of the 1-PDF of the Truncated Solution SP

As was indicated in the Introduction, by the Fröbenius method, Theorem 1, the solution SP of the random IVP (10) and (11) can be written as:
\[ X(t) = K_0(A, Y_0, Y_1)\, X_1(t;A) + K_1(A, Y_0, Y_1)\, X_2(t;A), \]
where $t \in \hat{T}_0$, and $X_1(t;A)$ and $X_2(t;A)$ are determined taking into account the values of the random roots of the associated indicial Equation (9). Since $A$ is assumed to be an absolutely continuous RV, Cases (b) and (c) in Theorem 1 have null probability of occurring, because continuous RVs assign zero probability to single points. Thus, by (a), for each $t \in \hat{T}_0$, the solution SP is given by Expression (13) with:
\[ X_1(t;A) = |t-t_0|^{r_1(A)} \sum_{n=0}^{\infty} X_{1,n}(A)\, (t-t_0)^n \quad \text{and} \quad X_2(t;A) = |t-t_0|^{r_2(A)} \sum_{n=0}^{\infty} X_{2,n}(A)\, (t-t_0)^n, \]
where $r_1(A)$ and $r_2(A)$ are the random roots of the random indicial Equation (9). The random coefficients $X_{1,n}(A)$ and $X_{2,n}(A)$ are recursively determined in practice. At this point, it is convenient to underline that the factors $|t-t_0|^{r_1(A)}$ and $|t-t_0|^{r_2(A)}$ are the distinctive and major difference in the analysis of RDEs about regular-singular points with respect to the case of RDEs about ordinary points presented in [6].
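To see why cases (b) and (c) can be discarded, consider a hypothetical randomized Bessel order $A \sim \mathrm{Uniform}(0.1, 0.9)$ (an illustrative choice, not a distribution taken from this paper): the random exponents are $r_1(A) = A$ and $r_2(A) = -A$, so $r_1(A) - r_2(A) = 2A$, which hits an integer only on a set of probability zero. A quick Monte Carlo sketch:

```python
import random

# Illustrative sketch: with a continuous random Bessel order
# A ~ Uniform(0.1, 0.9) (a hypothetical choice), the difference of the
# random exponents is r1(A) - r2(A) = 2A, which is an integer only on a
# null set, so case (a) of Theorem 1 holds with probability one.

random.seed(0)
diffs = [2.0 * random.uniform(0.1, 0.9) for _ in range(10_000)]
integer_hits = sum(1 for d in diffs if abs(d - round(d)) < 1e-12)
```

In exact arithmetic the event $\{2A \in \mathbb{Z}\}$ has probability zero; in floating point, none of the sampled differences lands on an integer either.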
By imposing that Expression (13) satisfies the ICs given in (11), we obtain the following random algebraic system:
\[ Y_0 = X(t_1) = K_0(A, Y_0, Y_1)\, X_1(t_1;A) + K_1(A, Y_0, Y_1)\, X_2(t_1;A), \qquad Y_1 = \dot{X}(t_1) = K_0(A, Y_0, Y_1)\, \dot{X}_1(t_1;A) + K_1(A, Y_0, Y_1)\, \dot{X}_2(t_1;A). \]
Solving for K 0 and K 1 , one gets:
\[ K_0(A, Y_0, Y_1) = \frac{Y_0\, \dot{X}_2(t_1;A) - Y_1\, X_2(t_1;A)}{W(t_1;A)}, \qquad K_1(A, Y_0, Y_1) = \frac{-Y_0\, \dot{X}_1(t_1;A) + Y_1\, X_1(t_1;A)}{W(t_1;A)}, \]
where $W(t;A) = X_1(t;A)\, \dot{X}_2(t;A) - X_2(t;A)\, \dot{X}_1(t;A)$ is the Wronskian of the fundamental system $\{X_1(t), X_2(t)\}$ given by (14).
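For a single realization $(a, y_0, y_1)$, the constants reduce to solving a $2 \times 2$ linear system. In the sketch below, the numerical values for $X_1$, $X_2$ and their derivatives at $t_1$ are hypothetical placeholders, used only to check that the Cramer-type formulas above reproduce the ICs.

```python
# Illustrative sketch: for one realization, K0 and K1 solve the 2x2 random
# algebraic system above.  The values of X1, X2 and their derivatives at
# t1 are hypothetical placeholders, not outputs of the paper's series.

X1, X2 = 1.2, 0.7        # X1(t1; a), X2(t1; a)
dX1, dX2 = -0.4, 0.9     # dX1/dt(t1; a), dX2/dt(t1; a)
y0, y1 = 1.0, 0.5        # realizations of the random ICs

W = X1 * dX2 - X2 * dX1                 # Wronskian W(t1; a)
K0 = (y0 * dX2 - y1 * X2) / W
K1 = (-y0 * dX1 + y1 * X1) / W

# the initial conditions are recovered exactly
res0 = K0 * X1 + K1 * X2 - y0           # should vanish
res1 = K0 * dX1 + K1 * dX2 - y1         # should vanish
```

The residuals vanish identically for any placeholder values with $W \neq 0$, which is exactly the well-definedness guaranteed by Remark 1 below.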
Remark 1.
Since A is a continuous RV, then:
\[ \mathbb{P}\{ \omega \in \Omega : W(t_1; a(\omega)) \neq 0 \} = 1. \]
Moreover, there exists a positive constant, m W , such that:
\[ 0 < m_W < |W(t_1; a(\omega))|, \quad \forall\, \omega \in \Omega, \]
this lower bound $m_W$ being independent of $\omega \in \Omega$ since, according to Hypothesis H0, the RV $A$ is bounded.
Hence, according to Remark 1, the functions K 0 ( A , Y 0 , Y 1 ) and K 1 ( A , Y 0 , Y 1 ) , given in (15), are well defined. Summarizing, the solution SP (13) and (14) can be written in the form:
\[ X(t) = Y_0\, S_1(t;A) + Y_1\, S_2(t;A), \]
where Y 0 and Y 1 are the random ICs and:
\[ S_1(t;A) = \frac{1}{W(t_1;A)} \left[ \dot{X}_2(t_1;A)\, X_1(t;A) - \dot{X}_1(t_1;A)\, X_2(t;A) \right], \qquad S_2(t;A) = \frac{1}{W(t_1;A)} \left[ -X_2(t_1;A)\, X_1(t;A) + X_1(t_1;A)\, X_2(t;A) \right]. \]
From an analytical point of view, it could be infeasible to obtain the explicit expression of both series, X 1 ( t ; A ) and X 2 ( t ; A ) . This fact makes it more advisable to consider the approximate solution SP that comes from truncating both series:
\[ X^N(t) = Y_0\, S_1^N(t;A) + Y_1\, S_2^N(t;A), \]
where:
\[ S_1^N(t;A) = \frac{1}{W^N(t_1;A)} \left[ \dot{X}_2^N(t_1;A)\, X_1^N(t;A) - \dot{X}_1^N(t_1;A)\, X_2^N(t;A) \right], \qquad S_2^N(t;A) = \frac{1}{W^N(t_1;A)} \left[ -X_2^N(t_1;A)\, X_1^N(t;A) + X_1^N(t_1;A)\, X_2^N(t;A) \right], \]
with $W^N(t_1;A) = X_1^N(t_1;A)\, \dot{X}_2^N(t_1;A) - X_2^N(t_1;A)\, \dot{X}_1^N(t_1;A)$, and where $X_1^N(t;A)$ and $X_2^N(t;A)$ result from the truncation of $X_1(t;A)$ and $X_2(t;A)$, defined in (14), at a common order $N$, i.e.,
\[ X_1^N(t;A) = |t-t_0|^{r_1(A)} \sum_{n=0}^{N} X_{1,n}(A)\, (t-t_0)^n \quad \text{and} \quad X_2^N(t;A) = |t-t_0|^{r_2(A)} \sum_{n=0}^{N} X_{2,n}(A)\, (t-t_0)^n. \]
In the context of the RVT method [6] (Th. 1) and following the same methodology as in the contribution [6], we compute the 1-PDF of the truncated solution SP (18) and (19), obtaining:
\[ f_1^N(x,t) = \iint_{D_{A,Y_1}} f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) \frac{1}{|S_1^N(t;a)|}\, da\, dy_1, \]
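To make the structure of this double integral concrete, the sketch below evaluates it by quadrature on a toy configuration with $A \sim \mathrm{Uniform}(1,2)$ and $Y_0, Y_1 \sim N(0,1)$ independent, and the hypothetical truncations $S_1^N(t;a) = a$, $S_2^N(t;a) = a/2$ at a fixed $t$ (illustrative stand-ins, not series computed from the paper's equations); a consistency check confirms that the resulting $f_1^N$ integrates to one in $x$.

```python
import numpy as np

# Illustrative sketch: quadrature evaluation of the double integral giving
# f1N(x, t) on a toy setup with A ~ Uniform(1, 2) and Y0, Y1 ~ N(0, 1)
# independent, and hypothetical truncations S1N(t; a) = a, S2N(t; a) = a/2
# at a fixed t.  The check below verifies that f1N is a density in x.

def phi(z):                                  # standard normal PDF
    return np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)

a_grid = np.linspace(1.0, 2.0, 201)
y1_grid = np.linspace(-8.0, 8.0, 401)
A, Y1 = np.meshgrid(a_grid, y1_grid, indexing="ij")
S1, S2 = A, A / 2.0
da = a_grid[1] - a_grid[0]
dy1 = y1_grid[1] - y1_grid[0]

def f1N(x):
    # f_A(a) = 1 on [1, 2]; integrand of the 1-PDF formula with toy inputs
    integrand = phi((x - Y1 * S2) / S1) * phi(Y1) / np.abs(S1)
    return float(integrand.sum() * da * dy1)

x_grid = np.linspace(-12.0, 12.0, 401)
dens = np.array([f1N(x) for x in x_grid])
total_mass = float(dens.sum() * (x_grid[1] - x_grid[0]))   # close to 1
```

The total mass deviates from one only by the quadrature and domain-truncation error, mirroring the fact that the exact expression defines a genuine density for each fixed $t$.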
for arbitrary $t \in \hat{T}_0$.

3. Study of the Convergence

In this section, we give sufficient conditions to guarantee the convergence of the approximations, $f_1^N(x,t)$, given in (21), to the exact function $f_1(x,t)$ as $N$ tends to infinity, i.e., we study when:
\[ \lim_{N \to +\infty} f_1^N(x,t) = f_1(x,t), \quad \text{for each } (x,t) \in \mathbb{R} \times \hat{T}_0 \cap [t_1, +\infty[ \text{ fixed}, \]
where:
\[ f_1(x,t) = \iint_{D_{A,Y_1}} f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2(t;a)}{S_1(t;a)},\, y_1 \right) \frac{1}{|S_1(t;a)|}\, dy_1\, da, \]
where S 1 ( t ; a ) and S 2 ( t ; a ) are the realizations of the random series S 1 ( t ; A ) and S 2 ( t ; A ) defined in (17).
Now, we include here some remarks about the convergence and boundedness of the series involved in (23), which will play a key role in our subsequent developments.
Remark 2.
Expanding first the terms $|t-t_0|^{r_1(A)}$ and $|t-t_0|^{r_2(A)}$ as their corresponding series and interpreting the RV $A$ as a parameter indexed by $\omega \in \Omega$, $A = \{ a(\omega) : \omega \in \Omega \}$, under Hypothesis H2, Poincaré's theorem, stated in [6] (Th. 2), permits representing the solution $X(t)$ as a series in the parameter $a(\omega)$, $\omega \in \Omega$, and therefore so do $S_1(t;A)$ and $S_2(t;A)$. On the other hand, taking into account the uniqueness of the solution of the IVP (10)–(11), both series expansions (as powers of $t-t_0$ and as powers of $a(\omega) - a_0(\omega)$) match. Hence, the series $X_1(t;A)$ and $X_2(t;A)$, given by (14), are convergent in $t_0$-deleted neighborhoods $N_{X_1}(t_0; a_0(\omega))$ and $N_{X_2}(t_0; a_0(\omega))$, respectively, for all $a_0(\omega) \in D_A$, $\omega \in \Omega$. In addition, uniform convergence takes place in every closed set contained in $N_{X_1}(t_0; a_0(\omega))$ and $N_{X_2}(t_0; a_0(\omega))$. Notice that the domain of convergence, $N_X(t_0; a_0(\omega))$, of the series solution $X(t)$ given in (13) satisfies $N_{\hat{p},\hat{q}}(t_0; a_0(\omega)) \subseteq N_X(t_0; a_0(\omega)) = N_{X_1}(t_0; a_0(\omega)) \cap N_{X_2}(t_0; a_0(\omega))$ for all $a_0(\omega) \in D_A$, $\omega \in \Omega$, where $N_{\hat{p},\hat{q}}(t_0; a_0(\omega))$ is defined in Hypothesis H2.
Remark 3.
Functions S 1 ( t ; A ) and S 2 ( t ; A ) are linear combinations of series X 1 ( t ; A ) and X 2 ( t ; A ) (see (17)), which, by Remark 2, are uniformly convergent in every closed set N X * ( t 0 ; a 0 ( ω ) ) contained in N X ( t 0 ; a 0 ( ω ) ) . Then, S 1 ( t ; A ) and S 2 ( t ; A ) also converge uniformly in N X * ( t 0 ; a 0 ( ω ) ) .
Remark 4.
Note that for every ω Ω :
\[ S_1^N(t_1; a(\omega)) = \frac{1}{W^N(t_1; a(\omega))} \left[ \dot{X}_2^N(t_1; a(\omega))\, X_1^N(t_1; a(\omega)) - \dot{X}_1^N(t_1; a(\omega))\, X_2^N(t_1; a(\omega)) \right] = 1. \]
Then, according to Remark 2, there exist a certain neighborhood, $N_X(t_1; a_0(\omega))$, and a positive constant, $m_{S_1}$, such that for all $N \geq 0$:
\[ 0 < m_{S_1} \leq |S_1^N(t; a(\omega))|, \quad \forall\, (t, a(\omega)) \in N_X(t_1; a_0(\omega)), \ a_0(\omega) \in D_A, \ \omega \in \Omega. \]
On the other hand, by Remark 3, S i ( t ; A ) , i = 1 , 2 , converge uniformly in every closed set N X * ( t 0 ; a 0 ( ω ) ) contained in N X ( t 0 ; a 0 ( ω ) ) for all a 0 ( ω ) D A , ω Ω . This guarantees the existence of constants M S 1 , M S 2 > 0 such that:
\[ |S_i^N(t; a(\omega))| < M_{S_i}, \quad \forall\, (t, a(\omega)) \in N_X^*(t_0; a_0(\omega)) \subset N_X(t_0; a_0(\omega)), \ a_0(\omega) \in D_A, \ \omega \in \Omega, \ N \geq 0, \ i = 1, 2. \]
Let $(x,t) \in \mathbb{R} \times \hat{T}_0 \cap [t_1, +\infty[$ be fixed; below, we establish conditions to assure the convergence stated in (22). First, let us take limits as $N \to +\infty$ in Expression (21):
\[ \lim_{N \to +\infty} f_1^N(x,t) = \lim_{N \to +\infty} \iint_{D_{A,Y_1}} f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) \frac{1}{|S_1^N(t;a)|}\, da\, dy_1. \]
To commute the limit as $N \to +\infty$ and the above double integral, we apply [6] (Prop. 1). We assume the following hypothesis:
H3: $D_{A,Y_1}$ is a Lebesgue measurable set of $\mathbb{R}^2$ with finite measure such that:
\[ \hat{M} = \sup\left\{ |y_1(\omega)| : (a(\omega), y_1(\omega)) \in D_{A,Y_1}, \ \omega \in \Omega \right\} < +\infty. \]
Then, we shall prove that $\{ k^N(a,y_1) : N \geq 0 \} \subset L^1(D_{A,Y_1})$ and that $k^N(a,y_1) \to k(a,y_1)$ uniformly on $D_{A,Y_1}$ as $N \to +\infty$, where:
\[ k^N(a,y_1) = f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) \frac{1}{|S_1^N(t;a)|} \]
and:
\[ k(a,y_1) = f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2(t;a)}{S_1(t;a)},\, y_1 \right) \frac{1}{|S_1(t;a)|}. \]
By Remark 4, considering the lower bound given in (24) for | S 1 N ( t ; a ) | , one gets:
\[ k^N(a,y_1) \leq \frac{1}{m_{S_1}}\, f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right). \]
Therefore, as $f_{A,Y_0,Y_1}$ is a PDF, we conclude that $k^N(a,y_1)$ is Lebesgue integrable in $D_{A,Y_1}$, i.e., $\{ k^N(a,y_1) : N \geq 0 \} \subset L^1(D_{A,Y_1})$. To prove that $\lim_{N \to +\infty} k^N(a,y_1) = k(a,y_1)$ uniformly in $D_{A,Y_1}$, we will apply the same argument as in the previous contribution [6]. We demonstrate that for every $\epsilon > 0$, there exists $N_0$ such that for all $N \geq N_0$:
\[ |k^N(a,y_1) - k(a,y_1)| < \epsilon, \quad \forall\, (a,y_1) \in D_{A,Y_1}. \]
Adding and subtracting the term $f_{A,Y_0,Y_1}\!\left( a, \frac{x - y_1 S_2^N(t;a)}{S_1^N(t;a)}, y_1 \right) \frac{1}{|S_1(t;a)|}$ and applying the triangle inequality, one gets:
\[ |k^N(a,y_1) - k(a,y_1)| \leq f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) \left| \frac{1}{|S_1^N(t;a)|} - \frac{1}{|S_1(t;a)|} \right| + \frac{1}{|S_1(t;a)|} \left| f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) - f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2(t;a)}{S_1(t;a)},\, y_1 \right) \right|. \]
Let $(x,t) \in \mathbb{R} \times \hat{T}_0 \cap [t_1, +\infty[$ be fixed, let $N \geq 0$ be an arbitrary non-negative integer, and consider $(t,a) = (t, a(\omega)) \in N_X^*(t_0; a_0(\omega))$, for all $a_0(\omega) \in D_A$, $\omega \in \Omega$. Then, we apply [6] (Prop. 2) to $z_1 = t$ fixed, $z_2 = a \in D_A$ arbitrary, $D = \{ (t,a) \in \mathbb{R}^2 : a \in D_A \}$, $g_N(t,a) = 1$, $g(t,a) = 1$, $h_N(t,a) = |S_1^N(t;a)|$, and $h(t,a) = |S_1(t;a)|$. Note that, by Remark 4, $h_N(t,a) \cdot h(t,a) \neq 0$ for all $(t,a) \in D$. In addition, it is obvious that $g_N(t,a)$ converges uniformly to $g(t,a)$ on $D$, and, as $g_N(t,a) = 1$, its absolute value is bounded. Regarding the real function $h_N(t,a)$, it converges uniformly to $h(t,a)$ by the arguments shown in Remarks 2 and 3. By Remark 4, there exist constants $m_h = m_{S_1}$ and $M_h = M_{S_1}$ such that $0 < m_h < |h_N(t,a)| < M_h$, $\forall\, a \in D_A$. Taking into account the uniform convergence, the bounds, and Hypothesis H1 (boundedness of the PDF $f_{A,Y_0,Y_1}(a,y_0,y_1)$), for every $\epsilon > 0$, there exists $N_0$ (which depends on $\epsilon$) such that:
\[ f_{A,Y_0,Y_1}\!\left( a,\, \frac{x - y_1\, S_2^N(t;a)}{S_1^N(t;a)},\, y_1 \right) \left| \frac{1}{|S_1^N(t;a)|} - \frac{1}{|S_1(t;a)|} \right| \leq M_f\, \frac{\epsilon}{2 M_f} = \frac{\epsilon}{2}, \quad \forall\, N \geq N_0, \]
independently of the values of ( a , y 1 ) D A , Y 1 .
Now, we obtain an analogous result for the second term. First, assuming Hypothesis H3, we apply [6] (Prop. 3) to $u = a$, $v = y_1$, $c = x$, $l_N(a) = S_2^N(t;a)$, $l(a) = S_2(t;a)$, $b_N(a,y_1) = x - y_1 S_2^N(t;a)$, and $b(a,y_1) = x - y_1 S_2(t;a)$, with $t \in \hat{T}_0 \cap [t_1, +\infty[$ fixed and such that $(t,a) = (t, a(\omega)) \in N_X^*(t_0; a_0(\omega))$, $\omega \in \Omega$. By Remark 3, $l_N(a)$ converges uniformly to $l(a)$ on $N_X^*(t_0; a_0(\omega))$. Then, given $(x,t) \in \mathbb{R} \times \hat{T}_0 \cap [t_1, +\infty[$ fixed, $x - y_1 S_2^N(t;a)$ converges uniformly to $x - y_1 S_2(t;a)$ in $D_{A,Y_1}$. Secondly, we apply [6] (Prop. 2) taking $z_1 = a$, $z_2 = y_1$, $g_N(a,y_1) = x - y_1 S_2^N(t;a)$, $g(a,y_1) = x - y_1 S_2(t;a)$, $h_N(a,y_1) = S_1^N(t;a)$, and $h(a,y_1) = S_1(t;a)$. Note that we have previously proven, by applying [6] (Prop. 3), that $g_N(a,y_1)$ converges uniformly to $g(a,y_1)$; in addition, by Remark 3, $h_N(a,y_1)$ converges uniformly to $h(a,y_1)$. Let $M_g = |x| + \hat{M} M_{S_2}$ and $M_h = M_{S_1}$, these bounds being the ones established in Hypothesis H3 and Remark 4; then:
$$\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)}\;\xrightarrow[\;N\to+\infty\;]{\text{uniformly on }D_{A,Y_1}}\;\frac{x-y_1 S_2(t;a)}{S_1(t;a)},\qquad (x,t)\in\mathbb{R}\times(\hat{T}_0\cap[t_1,+\infty[)\ \text{fixed}.$$
The next step is to apply [6] (Prop. 4) with the following identification: $z_1=a$, $z_2=y_1$, $(a,y_1)\in D_{A,Y_1}$ such that $(t,a)\in N_{X^*}(t_0;a_0(\omega))$ for every $a_0(\omega)\in D_A$, $\omega\in\Omega$, $\gamma_N(a,y_1)=\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)}$, $\gamma(a,y_1)=\frac{x-y_1 S_2(t;a)}{S_1(t;a)}$, $\phi(a,y_0,y_1)=f_{A,Y_0,Y_1}(a,y_0,y_1)$, $\psi_N(a,y_1)=f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)},y_1\right)$ and $\psi(a,y_1)=f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2(t;a)}{S_1(t;a)},y_1\right)$. Finally, as shown above, the sequence $\gamma_N(a,y_1)$ converges uniformly to $\gamma(a,y_1)$, and, according to Hypothesis H1, the mapping $\phi$ is continuous in $y_0$. Therefore, taking into account the lower bound of $|S_1(t;a)|$ given in Remark 4 and applying [6] (Prop. 4), for every $\epsilon>0$ there exists $N_0$, which depends on $\epsilon$, such that
$$\frac{1}{|S_1(t;a)|}\left|f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)},y_1\right)-f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2(t;a)}{S_1(t;a)},y_1\right)\right|<\frac{1}{m_{S_1}}\,m_{S_1}\,\frac{\epsilon}{2}=\frac{\epsilon}{2},\qquad N\geq N_0,$$
independently of the values of $(a,y_1)$. Hence, we have proven that $k_N(a,y_1)$ converges uniformly to $k(a,y_1)$ on $D_{A,Y_1}$ as $N\to+\infty$, and then, by [6] (Prop. 1) and Hypothesis H3, we can commute the limit and the integral. Therefore, applying the continuity of the joint PDF $f_{A,Y_0,Y_1}(a,y_0,y_1)$ with respect to its second variable (see Hypothesis H1) and taking into account Expression (23), one gets:
$$\lim_{N\to\infty} f_1^N(x,t)=\int_{D_{A,Y_1}}\lim_{N\to\infty} f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)},y_1\right)\frac{1}{|S_1^N(t;a)|}\,da\,dy_1=\int_{D_{A,Y_1}} f_{A,Y_0,Y_1}\!\left(a,\lim_{N\to\infty}\frac{x-y_1 S_2^N(t;a)}{S_1^N(t;a)},y_1\right)\frac{1}{\left|\lim_{N\to\infty}S_1^N(t;a)\right|}\,da\,dy_1=\int_{D_{A,Y_1}} f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2(t;a)}{S_1(t;a)},y_1\right)\frac{1}{|S_1(t;a)|}\,da\,dy_1=f_1(x,t).$$
Summarizing, we establish the following result.
Theorem 2.
Let us consider the random IVP (10) and (11), and assume that:
(i)
The RV A satisfies Hypothesis H0.
(ii)
The joint PDF, f A , Y 0 , Y 1 ( a , y 0 , y 1 ) , of the random input data ( a , y 0 , y 1 ) satisfies Hypothesis H1.
(iii)
The coefficients p ( t ; A ) and q ( t ; A ) satisfy Hypothesis H2.
(iv)
The domain $D_{A,Y_1}$ of the random vector $(A,Y_1)$ satisfies Hypothesis H3.
Let $X_1^N(t;a)$ and $X_2^N(t;a)$ be the finite sums defined in (20). Then, $f_1^N(x,t)$, defined by (19)–(21), is the 1-PDF of the truncated solution SP $X^N(t)$, given by (13)–(15), of the IVP (10) and (11). Furthermore, for each $(x,t)$, the 1-PDF $f_1^N(x,t)$ converges to $f_1(x,t)$, given in (17) and (23).

4. Examples

In this section, we present two examples to illustrate the theoretical results previously established. In both examples, we calculate the 1-PDF of the approximate solution SP, and from it, we also approximate the mean and the variance of the solution by taking advantage of the expressions given in (1).
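Since the moment formulas in (1) are used repeatedly below, a minimal quadrature sketch may help fix ideas: given any 1-PDF on a grid, the mean and variance follow by numerical integration. The density here is a Gaussian, chosen only so that the exact answers (0.3 and 0.01) are known in advance; the grid bounds and step are arbitrary choices.

```python
import math

# Recover E[X] and V[X] from a 1-PDF by quadrature, as in Expression (1).
# Toy check with a Gaussian density N(mu, sigma), whose exact moments are known.
mu, sigma = 0.3, 0.1
f = lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

n = 4000
xs = [mu - 8 * sigma + i * (16 * sigma / n) for i in range(n + 1)]
h = xs[1] - xs[0]
w = [h * (0.5 if i in (0, n) else 1.0) for i in range(n + 1)]  # trapezoidal weights

mean = sum(wi * x * f(x) for wi, x in zip(w, xs))
var = sum(wi * x * x * f(x) for wi, x in zip(w, xs)) - mean ** 2
print(mean, var)  # close to 0.3 and 0.01
```

The same two sums, applied to the approximate 1-PDF $f_1^N(x,t)$ on an $x$-grid, produce the approximate means and variances compared in the tables below.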
Example 1.
Let us consider the random IVP:
$$A t^2 \ddot{X}(t)+t(t+1)\dot{X}(t)-X(t)=0,\qquad X(1)=Y_0,\quad \dot{X}(1)=Y_1.$$
Notice that $t_0=0$ is a regular-singular point since, according to Equation (2), $a_0(t)=A t^2$, $a_1(t)=t(t+1)$, and $a_2(t)=-1$, with $a_0(t_0)=0$; according to (4) and (5), $p(t;A)=t(t+1)/(A t^2)$ and $q(t;A)=-1/(A t^2)$, which are not analytic about $t_0=0$, while $\hat{p}(t;A)=(t+1)/A$ and $\hat{q}(t;A)=-1/A$ are both analytic about $t_0=0$. We will assume that $A$, $Y_0$, and $Y_1$ are independent RVs with the following distributions:
  • A has a uniform distribution on the interval $[1,2]$, i.e., $A\sim U([1,2])$.
  • $Y_0$ has a Gaussian distribution with mean zero and standard deviation 0.1, i.e., $Y_0\sim N(0;0.1)$.
  • $Y_1$ has a beta distribution with parameters $\alpha=2$ and $\beta=3$, i.e., $Y_1\sim \mathrm{Be}(2;3)$.
First, we check that Hypotheses H0–H3 hold:
  • H0: Since A U ( [ 1 , 2 ] ) , it is bounded. With the notation of Hypothesis H0, we can take M A = 2 .
  • H1: Since A, Y 0 , and Y 1 are assumed to be independent RVs, their joint PDF is given by the product of the PDF of each RV; see Expression (12). In this case, proving the continuity of the joint PDF with respect to the second variable, y 0 , is equivalent to proving the continuity of the PDF of the RV Y 0 . Since Y 0 has a Gaussian distribution, then it is continuous and bounded on the whole real line. As the PDFs of A and Y 1 are also bounded, Hypothesis H1 holds,
    $$f_A(a)\,f_{Y_0}(y_0)\,f_{Y_1}(y_1)=\frac{1}{2-1}\cdot\frac{1}{\frac{1}{10}\sqrt{2\pi}}\exp\!\left(-50\,y_0^2\right)\cdot\frac{4!}{1!\,2!}\,y_1(1-y_1)^2\leq\frac{120}{\sqrt{2\pi}},\qquad (a,y_0,y_1)\in[1,2]\times\mathbb{R}\times[0,1].$$
  • H2: As previously indicated, $p(t;A)$ and $q(t;A)$ are not analytic functions at $t_0=0$, but $\hat{p}(t;A)$ and $\hat{q}(t;A)$ are, since both are polynomials in t for every $a=a(\omega)$, $\omega\in\Omega$.
  • H3: The domain of random vector ( A , Y 1 ) is:
    $$D_{A,Y_1}=D_A\times D_{Y_1}=[1,2]\times[0,1].$$
    Hence, D A , Y 1 is a Lebesgue measurable set of R 2 and M ^ = 1 < . Therefore, Hypothesis H3 is fulfilled.
In this case, the solution SP of IVP (27) is given by:
$$X(t)=Y_0\,S_1(t;A)+Y_1\,S_2(t;A),$$
where $S_i(t;A)$, $i=1,2$, are defined in (17) with:
$$X_1(t;A)=t\left(1+\sum_{n=1}^{\infty}\frac{(-1)^n}{\prod_{j=1}^{n}\left((j+1)A+1\right)}\,t^{n}\right)\qquad\text{and}\qquad X_2(t;A)=t^{-1/A}\sum_{n=0}^{\infty}\frac{(-1)^n}{n!\,A^{n}}\,t^{n};$$
see Appendix A for further details. In this case, we can compute, via Mathematica ® , the exact value of both power series,
$$X_1(t;A)=-(1+A)\,e^{-t/A}\left(-\frac{t}{A}\right)^{-1/A}\left[\Gamma\!\left(1+\frac{1}{A}\right)-\Gamma\!\left(1+\frac{1}{A},-\frac{t}{A}\right)\right]\qquad\text{and}\qquad X_2(t;A)=e^{-t/A}\,t^{-1/A},$$
where $\Gamma(z)$ and $\Gamma(b,z)$ are the gamma function and the incomplete gamma function, defined, respectively, by:
$$\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}\,dt\qquad\text{and}\qquad \Gamma(b,z)=\int_{z}^{\infty}t^{b-1}e^{-t}\,dt.$$
Notice that in the above expression of X 1 ( t ; A ) , these gamma functions must be interpreted as random functions [22].
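The series representations above can be validated numerically: building the coefficients from the recurrence $x_n=-x_{n-1}/(a(n+r)+1)$ derived in Appendix A and substituting the truncated series into $a t^2\ddot{x}+t(t+1)\dot{x}-x=0$ should leave a residual at roundoff level. A sketch (the sample values a = 1.5, t = 0.5, N = 30 are arbitrary choices):

```python
def coeffs(a, r, N):
    # recurrence x_n = -x_{n-1} / (a(n+r) + 1), with x_0 = 1 (see Appendix A)
    x = [1.0]
    for n in range(1, N + 1):
        x.append(-x[-1] / (a * (n + r) + 1.0))
    return x

def series_val(c, r, t, deriv=0):
    # term-by-term derivative of sum_n c_n t^(n+r), evaluated at t > 0
    total = 0.0
    for n, cn in enumerate(c):
        e, fac = n + r, 1.0
        for k in range(deriv):
            fac *= e - k
        total += cn * fac * t ** (e - deriv)
    return total

a, t, N = 1.5, 0.5, 30
residuals = []
for r in (1.0, -1.0 / a):                       # roots of the indicial equation
    c = coeffs(a, r, N)
    x, dx, ddx = (series_val(c, r, t, d) for d in range(3))
    residuals.append(abs(a * t**2 * ddx + t * (t + 1) * dx - x))
print(residuals)  # both residuals at roundoff level
```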
In Figure 1, Figure 2 and Figure 3, the 1-PDF of the exact solution SP, $f_1(x,t)$, and the 1-PDF of the truncation, $f_1^N(x,t)$, are plotted for different values of the truncation order $N\in\{1,2,3,4,5\}$ at the time instants $t\in\{1.1,1.5,2\}$, respectively. In order to check the convergence graphically, in all of these figures we plotted the approximation corresponding to the highest order ($N=5$), obtained from Expression (21), together with the exact 1-PDF. The exact 1-PDF can be obtained by applying the RVT technique to the exact representation of the solution SP given by (28), (30), and (31), using the software Mathematica®. From this latter plot, we observe in the three cases ($t\in\{1.1,1.5,2\}$) that $f_1(x,t)$ and $f_1^N(x,t)$, with $N=5$, practically coincide, thus illustrating quick convergence. For the sake of completeness, in Table 1 we show the values of the error measure $e_N(t)$, defined in (32), calculated for each $t\in\{1.1,1.5,2\}$ using different orders of truncation $N\in\{1,2,3,4,5\}$:
$$e_N(t)=\int_{-\infty}^{+\infty}\left|f_1^N(x,t)-f_1(x,t)\right|\,dx.$$
From the values collected in Table 1, we observe that, for fixed N, the error increases as t moves away from the initial value $t=1$ (except for $N=1$, since the approximations are still rough), while, for fixed t, the error decreases as the order of truncation N increases, as expected.
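To connect the tabulated errors with a computation, the following sketch evaluates the truncated 1-PDF by the integral formula used in the proof above, $f_1^N(x,t)=\iint f_{A,Y_0,Y_1}\!\left(a,\frac{x-y_1 S_2^N}{S_1^N},y_1\right)\frac{1}{|S_1^N|}\,da\,dy_1$, and checks that an error of the type $e_N(t)$ decreases with N, using a high-order truncation as a stand-in for the exact $f_1$. Since Equations (17) and (20) are not reproduced in this excerpt, $S_1^N$ and $S_2^N$ are rebuilt here, as a simplifying assumption, by normalizing the truncated series so that $S_1^N(1)=1$, $\dot{S}_1^N(1)=0$, $S_2^N(1)=0$, $\dot{S}_2^N(1)=1$; the grid sizes are arbitrary choices.

```python
import math

def coeffs1(a, N):                        # X1 series: sum c_n t^(n+1)
    c, prod = [1.0], 1.0
    for n in range(1, N + 1):
        prod *= (n + 1) * a + 1.0
        c.append((-1) ** n / prod)
    return c, 1.0

def coeffs2(a, N):                        # X2 series: sum c_n t^(n - 1/a)
    return [(-1) ** n / (a ** n * math.factorial(n)) for n in range(N + 1)], -1.0 / a

def val(cr, t, d=0):                      # d-th derivative of the series at t
    c, r = cr
    s = 0.0
    for n, cn in enumerate(c):
        e, fac = n + r, 1.0
        for k in range(d):
            fac *= e - k
        s += cn * fac * t ** (e - d)
    return s

def S12(a, N, t, t0=1.0):
    # normalized fundamental pair: S1(t0)=1, S1'(t0)=0, S2(t0)=0, S2'(t0)=1
    X1, X2 = coeffs1(a, N), coeffs2(a, N)
    w = val(X1, t0) * val(X2, t0, 1) - val(X2, t0) * val(X1, t0, 1)
    s1 = (val(X2, t0, 1) * val(X1, t) - val(X1, t0, 1) * val(X2, t)) / w
    s2 = (val(X1, t0) * val(X2, t) - val(X2, t0) * val(X1, t)) / w
    return s1, s2

fY0 = lambda y: 10.0 / math.sqrt(2 * math.pi) * math.exp(-50.0 * y * y)  # N(0, 0.1)
fY1 = lambda y: 12.0 * y * (1.0 - y) ** 2                                # Be(2, 3)

def density_grid(xs, t, N, na=24, ny=24):
    f = [0.0] * len(xs)
    for i in range(na):
        a = 1.0 + (i + 0.5) / na                 # midpoint rule on [1, 2]; f_A = 1
        s1, s2 = S12(a, N, t)
        for j in range(ny):
            y1 = (j + 0.5) / ny                  # midpoint rule on [0, 1]
            wgt = fY1(y1) / (na * ny)
            for k, x in enumerate(xs):
                f[k] += wgt * fY0((x - y1 * s2) / s1) / abs(s1)
    return f

t, h = 1.5, 0.025
xs = [-1.5 + h * k for k in range(161)]          # x grid on [-1.5, 2.5]
ref = density_grid(xs, t, 20)                    # high-order stand-in for f_1
errs = [sum(abs(u - v) for u, v in zip(density_grid(xs, t, N), ref)) * h
        for N in (1, 2, 4)]
print(errs)  # decreasing with N
```

The reference density should integrate to about one over the grid, and the discrete analogue of $e_N(t)$ should shrink as N grows, mirroring the trend reported in Table 1.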
In Figure 4, we compare the expectation and the variance of the exact solution SP in the interval t [ 1 , 2 ] with the corresponding approximations for different orders of truncation, N { 1 , 2 , 3 , 4 , 5 } . We measure the accuracy of the approximations for these two moments via the errors defined in (33),
$$e_N^{E}=\int_{1}^{2}\left|E[X(t)]-E[X^N(t)]\right|\,dt,\qquad e_N^{V}=\int_{1}^{2}\left|V[X(t)]-V[X^N(t)]\right|\,dt.$$
In Table 2, we collect the values of these errors. For both moments, we observe that the corresponding error decreases as the order of truncation increases, as expected.
Example 2.
In this example, we consider the randomized Bessel equation:
$$t^2\ddot{X}(t)+t\dot{X}(t)+(t^2-A^2)X(t)=0,\qquad X(0.5)=Y_0,\quad \dot{X}(0.5)=Y_1,$$
where $Y_0$ and $Y_1$ denote RVs fixed at the time instant $t_1=0.5$. In [12], the authors provided sufficient conditions on the random input data ($A$, $Y_0$, and $Y_1$) to extend the classical analysis of the Bessel equation to the random setting in the so-called mean square sense [5]. In that contribution, approximations for the mean and the variance of the solution SP to the Bessel RDE were given. In this example, we go further since, based on the results established in the previous sections, we construct approximations for the 1-PDF of the solution SP of this important RDE. To this end, let us first observe that, according to [12], the solution is given by $X(t)=Y_0\,S_1(t;A)+Y_1\,S_2(t;A)$, where $S_i(t;A)$, $i=1,2$, are defined by (17), being:
$$X_1(t;A)=t^{A}\left(1+\sum_{n=1}^{\infty}\frac{(-1)^n}{4^{n}\,n!\prod_{i=1}^{n}(A+i)}\,t^{2n}\right),\qquad X_2(t;A)=t^{-A}\left(1+\sum_{n=1}^{\infty}\frac{(-1)^n}{4^{n}\,n!\prod_{i=1}^{n}(-A+i)}\,t^{2n}\right).$$
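A quick numerical check of these series: the products above are equivalent to the recurrence $c_n=-c_{n-1}/\left(4n(n\pm A)\right)$, and substituting the truncated series into $t^2\ddot{x}+t\dot{x}+(t^2-A^2)x=0$ should leave a residual at roundoff level. A sketch (the sample values a = 2.5, t = 1, N = 25 are arbitrary choices):

```python
def bessel_coeffs(r, N):
    # coefficients of t^(2n + r): c_0 = 1, c_n = -c_{n-1} / (4 n (n + r));
    # r = +A gives X1, r = -A gives X2 (products with -A + i)
    c, prod = [1.0], 1.0
    for n in range(1, N + 1):
        prod *= 4.0 * n * (r + n)
        c.append((-1) ** n / prod)
    return c

def val(c, r, t, d=0):
    # d-th derivative of sum_n c_n t^(2n + r), term by term
    s = 0.0
    for n, cn in enumerate(c):
        e, fac = 2 * n + r, 1.0
        for k in range(d):
            fac *= e - k
        s += cn * fac * t ** (e - d)
    return s

a, t, N = 2.5, 1.0, 25
res = []
for r in (a, -a):                     # X1 ~ t^A (...), X2 ~ t^(-A) (...)
    c = bessel_coeffs(r, N)
    x, dx, ddx = (val(c, r, t, d) for d in range(3))
    res.append(abs(t**2 * ddx + t * dx + (t * t - a * a) * x))
print(res)  # both residuals at roundoff level
```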
Hereinafter, we assume that A, Y 0 , and Y 1 are independent RVs with the following distributions:
  • A has a gamma distribution with shape and scale parameters $\alpha=2$ and $\beta=0.5$, respectively, truncated on the interval $I=[2.25,2.75]$, i.e., $A\sim \mathrm{Ga}_{I}(2;0.5)$.
  • $Y_0$ has a beta distribution with parameters $\alpha=4$ and $\beta=3$, i.e., $Y_0\sim \mathrm{Be}(4;3)$.
  • $Y_1$ has a uniform distribution on the interval $[1,2]$, i.e., $Y_1\sim U([1,2])$.
Similarly to Example 1, we first check that Hypotheses H0–H3 are fulfilled:
  • H0: Since A is a truncated RV on the interval I = [ 2.25 , 2.75 ] , A is bounded. With the notation of Hypothesis H0, notice that we can take, for instance, M A = 2.75 .
  • H1: Since A, Y 0 , and Y 1 are assumed to be independent RVs, their joint PDF is given by the product of the PDF of each RV,
    $$f_{A,Y_0,Y_1}(a,y_0,y_1)=115.823\,a\,e^{-2a}\cdot\frac{6!}{3!\,2!}\,y_0^{3}(1-y_0)^2,\qquad (a,y_0,y_1)\in[2.25,2.75]\times[0,1]\times[1,2].$$
    From this expression, it is clear that this function is continuous with respect to its second argument, $y_0$, and bounded. Since the domain is rectangular, the first factor, $115.823\,a\,e^{-2a}$, is decreasing in a, and the second factor, $y_0^3(1-y_0)^2$, attains its maximum at $y_0=0.6$, where it is below 0.035, the maximum of $f_{A,Y_0,Y_1}(a,y_0,y_1)$ is obtained by taking this value of $y_0$ and evaluating the first factor at $a=2.25$. Therefore, with the notation of H1, we can take, for instance, $M_f=6.1$. Hence, Hypothesis H1 is fulfilled.
  • H2: Notice that $t_0=0$ is a regular-singular point of the Bessel RDE (34). On the one hand, observe that, according to Equation (2), $a_0(t)=t^2$, $a_1(t)=t$, and $a_2(t)=t^2-A^2$, with $a_0(t_0)=0$. On the other hand, according to (4) and (5), $p(t;A)=t/t^2=1/t$ and $q(t;A)=(t^2-A^2)/t^2$, which are not analytic functions at $t_0=0$, while $\hat{p}(t;A)=1$ and $\hat{q}(t;A)=t^2-A^2$ are both analytic at $t_0=0$ inasmuch as they are polynomials in t for every $a=a(\omega)$, $\omega\in\Omega$.
  • H3: Since A and Y 1 are assumed to be independent RVs, then the domain of the random vector ( A , Y 1 ) is:
    $$D_{A,Y_1}=D_A\times D_{Y_1}=[2.25,2.75]\times[1,2].$$
    Hence, D A , Y 1 is a Lebesgue measurable set of R 2 , and with the notation of Hypothesis H3, we can take M ^ = 2 < . Therefore, Hypothesis H3 holds.
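The bound $M_f=6.1$ used when checking H1 above can be corroborated numerically by scanning the two nontrivial factors of the joint PDF over their rectangular domain; the grid resolution is an arbitrary choice:

```python
import math

# Scan f_{A,Y0,Y1}(a, y0, y1) = 115.823 a e^{-2a} * 60 y0^3 (1 - y0)^2 * 1
# over [2.25, 2.75] x [0, 1]; the y1 factor is the constant 1 on [1, 2].
best = 0.0
for i in range(501):
    a = 2.25 + 0.5 * i / 500
    g = 115.823 * a * math.exp(-2.0 * a)          # truncated-gamma factor
    for j in range(501):
        y0 = j / 500.0
        best = max(best, g * 60.0 * y0**3 * (1.0 - y0) ** 2)
print(best)  # about 6.0, below the bound 6.1
```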
The 1-PDFs of the truncated solution SP, $f_1^N(x,t)$, for different orders of truncation, $N\in\{1,2,3,4,5\}$, at the time instants $t\in\{1.5,2.5,3.5\}$ are plotted in Figure 5, Figure 6 and Figure 7, respectively. As in Example 1, the exact 1-PDF, $f_1(x,t)$, can be obtained using the RVT method and the exact representation of the series $X_1(t;A)$ and $X_2(t;A)$ defined in (35), in terms of the Bessel function $J_A(t)$, built into Mathematica®, and the gamma function $\Gamma(z)$ defined in (31), namely $X_1(t;A)=2^{A}\,\Gamma(1+A)\,J_A(t)$ and $X_2(t;A)=2^{-A}\,\Gamma(1-A)\,J_{-A}(t)$. On the one hand, taking this benchmark, in the left panels of Figure 5, Figure 6 and Figure 7, we show the convergence of the approximations $f_1^N(x,t)$ to the exact 1-PDF $f_1(x,t)$ as N increases ($N\in\{1,2,3,4,5\}$) for the different values of $t\in\{1.5,2.5,3.5\}$. On the other hand, to illustrate the convergence better, in the right panels of the aforementioned figures, we show the approximation corresponding to the highest order of truncation, $f_1^N(x,t)$ with $N=5$, together with the exact 1-PDF, $f_1(x,t)$. We observe that both plots overlap, thus showing quick convergence.
As in Example 1, we use the error $e_N(t)$ defined in (32) to measure the quality of the approximations $f_1^N(x,t)$. In Table 3, we collect, for each $t\in\{1.5,2.5,3.5\}$, the values of this error for the orders of truncation $N\in\{1,2,3,4,5\}$. Notice that, for each t, the error diminishes as the order of truncation increases, while it increases as we move away from the initial instant $t_1=0.5$, as expected.
Taking $f_1^N(x,t)$, given by (21), as an approximation to $f_1(x,t)$ in the expressions (1), we can calculate approximations for the mean, $E[X^N(t)]$, and the variance, $V[X^N(t)]$, of the solution of the random IVP (34). In Figure 8, we plotted these approximations on the interval $t\in[0.5,3.5]$ for different orders of truncation, $N\in\{1,2,3,4,5\}$, as well as the respective exact values, $E[X(t)]$ and $V[X(t)]$, computed from the expression for $f_1(x,t)$ obtained via the RVT method. In these plots, we can clearly see that the approximations improve quickly on the whole interval as N increases. To finish, we calculated the errors of these approximations by means of the following expressions:
$$e_N^{E}=\int_{0.5}^{3.5}\left|E[X^N(t)]-E[X(t)]\right|\,dt,\qquad e_N^{V}=\int_{0.5}^{3.5}\left|V[X^N(t)]-V[X(t)]\right|\,dt.$$
The values of the errors for the mean, $e_N^E$, and the variance, $e_N^V$, for each truncation order are shown in Table 4. From these values, we observe that the errors decrease as N increases.

5. Conclusions

In this paper, we presented a methodology to construct reliable approximations to the first probability density function of the solution of second-order linear differential equations about a regular-singular point, with a finite degree of randomness in the coefficients and with both initial conditions being random variables. We therefore studied this class of differential equations assuming randomness in all of its input data, which gives our analysis great generality. The results complete those already presented in a previous paper, where the same class of random initial value problems was solved about an ordinary point. Together, the two contributions permit solving, from a probabilistic standpoint, a number of important randomized differential equations that appear in mathematical physics, such as the Airy, Hermite, Legendre, Laguerre, Chebyshev, and Bessel equations. Under our approach, we can compute the first probability density function of the solution. This is really advantageous since, apart from allowing us to determine the mean and the variance, knowledge of the first probability density function also permits determining higher one-dimensional moments, as well as the probability that the solution lies in any interval of interest, which might be key information in many practical problems.

Author Contributions

Formal analysis, J.-C.C, A.N.-Q., J.-V.R. and M.-D.R.; Investigation, J.-C.C., A.N.-Q., J.-V.R. and M.-D.R.; Methodology, J.-C.C., A.N.-Q., J.-V.R. and M.-D.R.; Writing – original draft, J.-C.C., A.N.-Q., J.-V.R. and M.-D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by the Ministerio de Economía y Competitividad Grant MTM2017-89664-P. Ana Navarro Quiles acknowledges the funding received from Generalitat Valenciana through a postdoctoral contract (APOSTD/2019/128). Computations were carried out thanks to the collaboration of Raúl San Julián Garcés and Elena López Navarro granted by the European Union through the Operational Program of the European Regional Development Fund (ERDF)/European Social Fund (ESF) of the Valencian Community 2014–2020, Grants GJIDI/2018/A/009 and GJIDI/2018/A/010, respectively.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Computing the solution of IVP (27) in Example 1

For the sake of completeness, in this Appendix, we detail how to obtain the expressions (28) and (29) used in Example 1. Let us consider the deterministic differential equation:
$$a t^2\ddot{x}(t)+t(t+1)\dot{x}(t)-x(t)=0,\qquad a\neq 0,\quad t>0.$$
It is clear that the time instant t 0 = 0 is a regular-singular point. The indicial equation associated with the differential Equation (A1) is given by:
$$r^2+\left(\frac{1}{a}-1\right)r-\frac{1}{a}=0.$$
The solution of (A1) is sought in the form of a generalized power series, $x(t)=\sum_{n=0}^{\infty}x_n t^{n+r}$, $t>0$, where the coefficients $x_n$ are determined by imposing that this series is a solution, provided $x_0\neq 0$. This leads to the following relationship:
$$a t^2\sum_{n=0}^{\infty}(n+r)(n+r-1)x_n t^{n+r-2}+t(t+1)\sum_{n=0}^{\infty}(n+r)x_n t^{n+r-1}-\sum_{n=0}^{\infty}x_n t^{n+r}=0.$$
Working out the last expression, we obtain:
$$\sum_{n=0}^{\infty}a(n+r)(n+r-1)x_n t^{n+r}+\sum_{n=0}^{\infty}(n+r)x_n t^{n+r+1}+\sum_{n=0}^{\infty}(n+r)x_n t^{n+r}-\sum_{n=0}^{\infty}x_n t^{n+r}=0,$$
$$\sum_{n=0}^{\infty}a(n+r)(n+r-1)x_n t^{n+r}+\sum_{n=1}^{\infty}(n+r-1)x_{n-1} t^{n+r}+\sum_{n=0}^{\infty}(n+r)x_n t^{n+r}-\sum_{n=0}^{\infty}x_n t^{n+r}=0,$$
$$\left(a\,r(r-1)x_0+r x_0-x_0\right)t^{r}+\sum_{n=1}^{\infty}\left[a(n+r)(n+r-1)x_n+(n+r-1)x_{n-1}+(n+r)x_n-x_n\right]t^{n+r}=0.$$
In this last expression, we can take out the common factor t r , obtaining the following expression:
$$\left(a\,r(r-1)x_0+r x_0-x_0\right)+\sum_{n=1}^{\infty}\left[a(n+r)(n+r-1)x_n+(n+r-1)x_{n-1}+(n+r)x_n-x_n\right]t^{n}=0.$$
Now, using the uniqueness of the expansion, one gets $(a\,r(r-1)+r-1)x_0=0$ from the first addend, which corresponds to the indicial Equation (A2) since $x_0\neq 0$, and the following recurrence relationship derived from the remaining terms:
$$x_n=-\frac{x_{n-1}}{a(n+r)+1},\qquad n\geq 1.$$
The roots of the indicial Equation (A2) are $r_1=1$ and $r_2=-1/a$; then, according to the Fröbenius method stated in Theorem 1, if $a\neq -1$ and $1+1/a$ is not an integer, the differential Equation (A1) has two linearly independent solutions of the form:
$$x_1(t)=\sum_{n=0}^{\infty}x_{1,n}\,t^{n+1}\qquad\text{and}\qquad x_2(t)=\sum_{n=0}^{\infty}x_{2,n}\,t^{\,n-\frac{1}{a}},$$
where the terms $x_{1,n}$ and $x_{2,n}$ are obtained from the recurrence (A3) with $r=r_1=1$ and $r=r_2=-1/a$, respectively, for $x_0\neq 0$ fixed. Taking $x_0=1$, one recursively calculates:
$$x_{1,n}=\frac{(-1)^n}{\prod_{j=1}^{n}\left((j+1)a+1\right)},\qquad x_{2,n}=\frac{(-1)^n}{a^{n}\,n!}.$$
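As a check, the closed forms (A4) can be compared directly against the recurrence (A3); the sample value a = 1.7 and depth N = 12 are arbitrary choices:

```python
import math

def recurrence(a, r, N):
    # (A3): x_n = -x_{n-1} / (a(n + r) + 1), with x_0 = 1
    x = [1.0]
    for n in range(1, N + 1):
        x.append(-x[-1] / (a * (n + r) + 1.0))
    return x

a, N = 1.7, 12
x1 = recurrence(a, 1.0, N)          # r = r_1 = 1
x2 = recurrence(a, -1.0 / a, N)     # r = r_2 = -1/a

ok, prod = True, 1.0
for n in range(1, N + 1):
    prod *= (n + 1) * a + 1.0
    ok = ok and math.isclose(x1[n], (-1) ** n / prod, rel_tol=1e-12)
    ok = ok and math.isclose(x2[n], (-1) ** n / (a ** n * math.factorial(n)), rel_tol=1e-9)
print(ok)  # True
```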

References

  1. Hussein, A.; Selim, M.M. Solution of the stochastic radiative transfer equation with Rayleigh scattering using RVT technique. Appl. Math. Comput. 2012, 218, 7193–7203. [Google Scholar] [CrossRef]
  2. Dorini, F.A.; Cecconello, M.S.; Dorini, M.B. On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density. Commun. Nonlinear Sci. Numer. Simul. 2016, 33, 160–173. [Google Scholar] [CrossRef]
  3. Santos, L.T.; Dorini, F.A.; Cunha, M.C.C. The probability density function to the random linear transport equation. Appl. Math. Comput. 2010, 216, 1524–1530. [Google Scholar] [CrossRef]
  4. Hussein, A.; Selim, M.M. A complete probabilistic solution for a stochastic Milne problem of radiative transfer using KLE-RVT technique. J. Quant. Spectrosc. Radiat. Transf. 2019, 232, 254–265. [Google Scholar] [CrossRef]
  5. Soong, T.T. Random Differential Equations in Science and Engineering; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  6. Cortés, J.C.; Navarro-Quiles, A.; Romero, J.V.; Roselló, M.D. Solving second-order linear differential equations with random analytic coefficients about ordinary points: A full probabilistic solution by the first probability density function. Appl. Math. Comput. 2018, 331, 33–45. [Google Scholar] [CrossRef]
  7. Cortés, J.C.; Jódar, L.; Camacho, J.; Villafuerte, L. Random Airy type differential equations: Mean square exact and numerical solutions. Comput. Math. Appl. 2010, 60, 1237–1244. [Google Scholar] [CrossRef] [Green Version]
  8. Calbo, G.; Cortés, J.C.; Jódar, L. Random Hermite differential equations: Mean square power series solutions and statistical properties. Appl. Math. Comput. 2011, 218, 3654–3666. [Google Scholar] [CrossRef]
  9. Calbo, G.; Cortés, J.C.; Jódar, L. Solving the random Legendre differential equation: Mean square power series solution and its statistical functions. Comput. Math. Appl. 2011, 61, 2782–2792. [Google Scholar] [CrossRef] [Green Version]
  10. Calbo, G.; Cortés, J.C.; Jódar, L. Laguerre random polynomials: Definition, differential and statistical properties. Util. Math. 2015, 60, 283–293. [Google Scholar]
  11. Cortés, J.C.; Villafuerte, L.; Burgos, C. A mean square chain rule and its application in solving the random Chebyshev differential equation. Mediterr. J. Math. 2017, 14, 14–35. [Google Scholar] [CrossRef] [Green Version]
  12. Cortés, J.C.; Jódar, L.; Villafuerte, L. Mean square solution of Bessel differential equation with uncertainties. J. Comput. Appl. Math. 2017, 309, 383–395. [Google Scholar] [CrossRef] [Green Version]
  13. Golmankhaneh, A.K.; Porghoveh, N.A.; Baleanu, D. Mean square solutions of second-order random differential equations by using homotopy analysis method. Rom. Rep. Phys. 2013, 65, 350–362. [Google Scholar]
  14. Khalaf, S. Mean Square Solutions of Second-Order Random Differential Equations by Using Homotopy Perturbation Method. Int. Math. Forum 2011, 6, 2361–2370. [Google Scholar]
  15. Khudair, A.; Ameen, A.; Khalaf, S. Mean square solutions of second-order random differential equations by using Adomian decomposition method. Appl. Math. Sci. 2011, 5, 2521–2535. [Google Scholar]
  16. Khudair, A.; Haddad, S.; Khalaf, S. Mean Square Solutions of Second-Order Random Differential Equations by Using the Differential Transformation Method. Open J. Appl. Sci. 2016, 6, 287–297. [Google Scholar] [CrossRef] [Green Version]
  17. Khudair, A.; Ameen, A.; Khalaf, S. Mean Square Solutions of Second-Order Random Differential Equations by Using Variational Iteration Method. Appl. Math. Sci. 2011, 5, 2505–2519. [Google Scholar]
  18. Ross, S.L. Differential Equations; John Wiley & Sons: New York, NY, USA, 1984. [Google Scholar]
  19. Qi, Y. A Very Brief Introduction to Nonnegative Tensors from the Geometric Viewpoint. Mathematics 2018, 6, 230. [Google Scholar] [CrossRef] [Green Version]
  20. Ragusa, M.A.; Tachikawa, A. Boundary regularity of minimizers of p(x)-energy functionals. Ann. de l’Institut Henri Poincare (C) Non Linear Anal. 2016, 33, 451–476. [Google Scholar] [CrossRef]
  21. Ragusa, M.A.; Tachikawa, A. Regularity for minimizers for functionals of double phase with variable exponents. Adv. Nonlinear Anal. 2020, 9, 710–728. [Google Scholar] [CrossRef]
  22. Braumann, C.; Cortés, J.C.; Jódar, L.; Villafuerte, L. On the random gamma function: Theory and computing. J. Comput. Appl. Math. 2018, 335, 142–155. [Google Scholar] [CrossRef]
Figure 1. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution stochastic process (SP), X N ( t ) , to the random initial value problem (IVP) (27) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 1.1 . Right: To facilitate the comparison, f 1 N ( x , 1.1 ) , with N = 5 , is plotted together with f 1 ( x , 1.1 ) . Example 1.
Figure 2. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution SP, X N ( t ) , to the random IVP (27) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 1.5 . Right: To facilitate the comparison, f 1 N ( x , 1.5 ) , with N = 5 , is plotted together with f 1 ( x , 1.5 ) . Example 1.
Figure 3. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution SP, X N ( t ) , to the random IVP (27) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 2 . Right: To facilitate the comparison, f 1 N ( x , 2 ) , with N = 5 , is plotted together with f 1 ( x , 2 ) . Example 1.
Figure 4. Left: Comparison of the exact mean and different approximations of the mean of the solution SP to the random IVP (27), taking as orders of truncation for the approximate series the values N { 1 , 2 , 3 , 4 , 5 } . Right: Comparison of the exact variance and different approximations of the variance of the solution SP to the random IVP (27) using the same orders of truncation. Example 1.
Figure 5. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution SP, X N ( t ) , to the random IVP (34) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 1.5 . Right: To facilitate the comparison, f 1 N ( x , 1.5 ) , with N = 5 , is plotted together with f 1 ( x , 1.5 ) . Example 2.
Figure 6. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution SP, X N ( t ) , to the random IVP (34) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 2.5 . Right: To facilitate the comparison, f 1 N ( x , 2.5 ) , with N = 5 , is plotted together with f 1 ( x , 2.5 ) . Example 2.
Figure 7. Left: 1-PDF, f 1 N ( x , t ) , of the truncated solution SP, X N ( t ) , to the random IVP (34) taking as the order of truncation N { 1 , 2 , 3 , 4 , 5 } and the corresponding 1-PDF, f 1 ( x , t ) , of the exact solution SP at t = 3.5 . Right: To facilitate the comparison, f 1 N ( x , 3.5 ) , with N = 5 , is plotted together with f 1 ( x , 3.5 ) . Example 2.
Figure 8. Left: Expectation of the solution to Problem (34) using different orders of truncation N { 1 , 2 , 3 , 4 , 5 } in the series (17). Right: Variance of the solution to Problem (34) using different orders of truncation N { 1 , 2 , 3 , 4 , 5 } in the series (17).
Table 1. Error measure e N ( t ) defined by (32) for different time instants, t { 1.1 , 1.5 , 2 } , and orders of truncation of the approximate solution SP, N { 1 , 2 , 3 , 4 , 5 } . Example 1.
e_N(t)     N = 1      N = 2      N = 3      N = 4       N = 5
t = 1.1    0.519832   0.166420   0.039233   0.007228    0.001129
t = 1.5    0.437495   0.219520   0.071847   0.018712    0.004031
t = 2.0    0.319464   0.249286   0.117028   0.0428193   0.0125606
Table 2. Values of errors for the mean, e N E , and the variance e N V , given by (33) for N { 1 , 2 , 3 , 4 , 5 } . Example 1.
Error    N = 1      N = 2      N = 3      N = 4      N = 5
e_N^E    0.077488   0.037504   0.013075   0.003708   0.000903
e_N^V    0.001776   0.001317   0.000343   0.000015   0.000012
Table 3. Values of the error measure, e N , defined by (32) for different time instants, t { 1.5 , 2.5 , 3.5 } , and orders of truncation to the approximate solution SP of (34), N { 1 , 2 , 3 , 4 , 5 } . Example 2.
e_N(t)     N = 1      N = 2      N = 3      N = 4            N = 5
t = 1.5    0.03446    0.001593   0.000069   2.3272·10^{-6}   5.5373·10^{-8}
t = 2.5    0.153784   0.030282   0.002078   0.000108         0.000020
t = 3.5    1.518306   0.285614   0.038847   0.003435         0.000233
Table 4. Values of errors for the approximations of mean, e N E , and the variance, e N V , given by (36) for N { 1 , 2 , 3 , 4 , 5 } . Example 2.
Error    N = 1       N = 2      N = 3      N = 4      N = 5
e_N^E    6.766564    1.052105   0.108092   0.008096   0.000475
e_N^V    20.769241   3.761294   0.352357   0.025039   0.001421
