A Class of Spline Functions for Solving Second-Order Linear Differential Equations with Boundary Conditions

In this paper, we develop a numerical method for solving second order differential equations with boundary conditions. Based on the theory of the analytic solution, a series of spline functions is presented to find approximate solutions, and one of them is selected automatically to approximate the solution. Compared with other methods, we only need to solve a tridiagonal system, which is much easier to implement. The method has the advantages of high precision and low computational cost. The local truncation error is also analyzed in this paper. Finally, some numerical examples are given to illustrate the effectiveness of the proposed method.


Introduction
Ordinary differential equations (ODEs) of second order are used throughout engineering, mathematics and science. It is well known that, of all differential equations, only a few have analytic solutions. Most of them, especially those with variable coefficients, are very difficult to solve, and the available techniques often struggle with complicated equations in practice. In this paper, we consider the numerical solution of second order linear differential equations of the form

y''(x) + P(x)y(x) = Q(x), x ∈ [a, b], (1)

with the boundary conditions y(a) = A, y(b) = B, where A, B are constants and P(x), Q(x) are sufficiently smooth functions. We remark that Equation (1) does not contain the first derivative term y'(x), because it can be eliminated by a proper transformation. How to solve differential equations effectively has gained great attention in recent years, and researchers have put considerable effort into obtaining stable algorithms with fewer floating-point operations and higher accuracy. Numerical methods for differential equations are well developed and some of them are widely used, such as Euler's method, linear multi-step methods, Runge-Kutta methods and so on. For more detailed research on this topic, we refer the reader to [1].
Recently, solving differential equations by spline functions has been investigated by more and more researchers, and a large number of spline functions have been exploited to solve these equations. Usmani [2] derived a uniform quartic polynomial spline at mid-knots. Ramadan et al. [3,4] presented quadratic, cubic polynomial and nonpolynomial spline functions to find approximate solutions; they also showed that the nonpolynomial splines yield better approximations at lower computational cost than the polynomial ones. By using nonpolynomial quintic splines, Srivastava et al. developed a numerical algorithm [5]. In [6], a method based on quadratic non-polynomial spline functions was proposed by Islam et al. Zahra and Mhlawy studied the singularly perturbed semi-linear boundary value problem with exponential splines [7]. Lodhi and Mishra developed a numerical method for second order singularly perturbed two-point linear and nonlinear boundary value problems [8]. In [9], Zahra proposed exponential spline functions, consisting of a polynomial part of degree three and an exponential part, to approximate linear and nonlinear fourth order two-point boundary value problems. By solving an algebraic system, Said and Noor proposed optimized multigrid methods [10]. Ideon and Peeter investigated the collocation method with linear/linear rational splines for the numerical solution of two-point boundary value problems [11]. By using B-splines, Rashidinia and Ghasemi developed a numerical method which can solve general nonlinear two-point boundary value problems up to order 6 [12].
Despite the large number of spline functions available to approximate the solutions of such equations, most of them are not constructed from the differential equation to be solved and cannot achieve satisfying results. To the best of our knowledge, there is still no research on the construction of spline functions based on the differential equations themselves. It is well known that second order linear differential equations with constant coefficients can be solved analytically. This leads us to ask whether we can use these analytic solutions to solve equations with variable coefficients; a spline function based on analytic solutions can be expected to achieve better approximation performance. Therefore, we put our effort into constructing a spline function based on the analytic solutions of equations with constant coefficients.

Preliminary
We first review some conclusions on the theory of solutions of linear differential equations. Equation (1) is said to be homogeneous if Q(x) ≡ 0, and nonhomogeneous otherwise. Moreover, the equation y'' + P(x)y = 0 is said to be the associated homogeneous equation of (1). Equation (1) is said to have constant coefficients when P(x) is a constant, and variable coefficients otherwise. If Equation (1) has constant coefficients, it can be written as

y''(x) + p y(x) = Q(x), (3)

where p is a constant. The associated homogeneous equation of (3) is y'' + py = 0, and the characteristic equation of (3) is λ² + p = 0. Hence the characteristic roots of (3) are λ = ±√(−p).
Let y_p denote any particular solution of Equation (3) and y_h the general solution of its associated homogeneous equation; then the general solution of Equation (3) is y = y_h + y_p.
The following two lemmas are introduced for obtaining the general solution y_h and a particular solution y_p of Equation (3); they will be used in the next section.

Lemma 1 ([13]). If p < 0, then λ is real, and hence the general solution is y_h = k_1 e^{λx} + k_2 e^{−λx}. If p = 0, then λ = 0, and the general solution is y_h = k_1 e^{0x} + k_2 x e^{0x} = k_1 + k_2 x. If p > 0, then λ is a pair of conjugate imaginary numbers, and hence the general solution is y_h = k_1 sin(λx) + k_2 cos(λx). Here k_1, k_2 are arbitrary constants.

Lemma 2 ([13]). Suppose that Q(x) is a polynomial of degree n; then a particular solution of Equation (2) is y_p = e^{αx}(k_n x^n + k_{n−1} x^{n−1} + · · · + k_1 x + k_0), where α is a known constant and k_i (i = 0, 1, 2, · · · , n) are constants to be determined.
In particular, when the degree of Q(x) equals 0, the particular solution is y_p = c if λ ≠ 0, and y_p = cx² if λ = 0, where c is a constant.
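For reference, the three cases of Lemma 1 together with the constant particular solutions above can be collected into a single formula for the constant-coefficient equation with a constant right-hand side (this is only a summary of the statements above, writing μ = √p for the oscillatory case):

```latex
y'' + p\,y = Q \quad\Longrightarrow\quad
y(x) =
\begin{cases}
k_1 e^{\lambda x} + k_2 e^{-\lambda x} + \dfrac{Q}{p}, & p < 0,\ \lambda = \sqrt{-p},\\[6pt]
k_1 + k_2 x + \dfrac{Q}{2}\,x^2, & p = 0,\\[6pt]
k_1 \sin(\mu x) + k_2 \cos(\mu x) + \dfrac{Q}{p}, & p > 0,\ \mu = \sqrt{p}.
\end{cases}
```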

Definition of the Piecewise Spline Functions
In this subsection, we use the theory of the analytic solution to construct the spline functions.
If h is small, the value of x varies little over the interval [x_{i−1}, x_i]; then P(x) and Q(x) can be seen as constants on [x_{i−1}, x_i], and therefore Equation (1) can also be seen as a second order linear differential equation with constant coefficients on this interval. We assume that the value of x on [x_{i−1}, x_i] identically equals the midpoint x_{i−1/2}, hence Equation (1) can be seen as

y''(x) + P_{i−1/2} y(x) = Q_{i−1/2}, x ∈ [x_{i−1}, x_i]. (4)

The roots of the characteristic equation of (4) are λ_i = ±√(−P_{i−1/2}). Therefore, the solution of (4) can be obtained in the following three cases. Case 1.
Additionally, since the right side of Equation (4) identically equals the constant Q_{i−1/2}, we can assume that y_p = c according to Lemma 2. By substituting y_p into Equation (4), we have c = Q_{i−1/2}/P_{i−1/2}. Case 2. Similarly to Case 1, the particular solution is y_p = Q_{i−1/2}/P_{i−1/2}, and the approximate function ỹ on the interval [x_{i−1}, x_i] can be defined accordingly. Case 3. There are many particular solutions; here we take the simplest form y_p = g_i (x − x_{i−1})². By substituting y_p into Equation (4), we obtain g_i = (1/2) Q_{i−1/2}, and the approximate function on the interval [x_{i−1}, x_i] can again be defined accordingly. To summarize, the approximate function on the interval [x_{i−1}, x_i] is defined as in (5), where a_i, b_i, c_i, d_i, e_i and f_i are coefficients to be determined. Let the spline function (5) satisfy the interpolation conditions (6). From (5) and (6), by a tedious but straightforward calculation, we can obtain the coefficients.
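The case selection above depends only on the sign of P frozen at the interval midpoint. As an illustrative sketch (our own helper, not the paper's code), the midpoint-freezing step and the resulting particular-solution constants can be written as:

```python
import math

def local_case(P, Q, x_left, x_right):
    """Freeze P and Q at the midpoint of [x_left, x_right] and report
    which of the three constant-coefficient cases applies, together
    with the basis frequency and the particular-solution constant.

    P, Q: callables giving the coefficient and right-hand side of
    y'' + P(x) y = Q(x).
    """
    xm = 0.5 * (x_left + x_right)
    p, q = P(xm), Q(xm)
    if p < 0:
        # Real characteristic roots: hyperbolic (exponential) basis,
        # particular solution y_p = q / p (a constant).
        return ("hyperbolic", math.sqrt(-p), q / p)
    elif p > 0:
        # Imaginary roots: trigonometric basis, y_p = q / p again.
        return ("trigonometric", math.sqrt(p), q / p)
    else:
        # Double zero root: polynomial basis,
        # y_p = (q/2) (x - x_left)^2, so return the coefficient q/2.
        return ("polynomial", 0.0, 0.5 * q)
```

For example, with P ≡ −4 and Q ≡ 8 the helper reports the hyperbolic case with λ = 2 and y_p = −2, matching c = Q/P.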
The coefficient matrix of the linear system (18) is tridiagonal, so the system can be solved efficiently by the Thomas algorithm.
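A minimal sketch of the Thomas algorithm for such a system (our own implementation, not the paper's) runs in O(n) time, in contrast with O(n³) for general Gaussian elimination:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (lists of equal length n;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For instance, the system with diagonals (1, 2, 1) and right-hand side (4, 8, 8) has solution (1, 2, 3). The algorithm is stable when the matrix is diagonally dominant, which is typically the case for the systems arising here.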

Truncation Error Estimation
Suppose y(x) is sufficiently smooth on [a, b]. Substituting x_{i−1/2} into (1) gives M_{i−1/2} + P_{i−1/2} y_{i−1/2} = Q_{i−1/2}, where M_{i−1/2} and y_{i−1/2} denote y''(x_{i−1/2}) and y(x_{i−1/2}), respectively. To estimate the truncation error, we would have to consider all nine cases of Section 2; due to space limitations, we only analyze some of them here. Without loss of generality, we discuss the truncation errors of Cases 1 and 2 in detail; the results for the other cases can be obtained in the same way. For convenience, the local truncation error is denoted by t_i^j (i = 1, 2, · · · , n − 1; j = 1, 2, · · · , 9). According to (9), the local truncation error can be written out, and by using Taylor series, the terms y_{i−1}, y_{i−1/2}, y_{i+1}, y_{i+1/2} and M_{i−1/2}, M_{i+1/2} can be expanded around the point x_i. Note that A_i = 2 sinh(λ_i h) and B_i = 2 cosh(λ_i h), and the hyperbolic functions can be expanded as sinh(λ_i h) = λ_i h + (λ_i h)³/3! + (λ_i h)⁵/5! + · · · and cosh(λ_i h) = 1 + (λ_i h)²/2! + (λ_i h)⁴/4! + · · ·. By expanding the terms A_{i−1}, A_i, B_{i−1} and B_i in series, and by a tedious but straightforward calculation, the coefficients k_j^1 (j = 1, 2, · · · , 6) can be written out. Similarly, the local truncation error of (10) involves the trigonometric expansions sin(λ_i h) = λ_i h − (λ_i h)³/3! + · · · and cos(λ_i h) = 1 − (λ_i h)²/2! + · · ·; by expanding the trigonometric functions in the coefficients k_j^2 (j = 1, 2, · · · , 6) in series, the local truncation error of (10) is found to be O(h⁴). The truncation errors of the remaining cases (11)–(17) can be derived analogously. To summarize, the local truncation error of the proposed method is O(h⁴).

Numerical Examples
In this section, some numerical examples are given to illustrate the effectiveness of the proposed method. We begin with two definitions used to measure this effectiveness.
The maximum absolute error [14] is computed by E = max_i |ỹ(x_i) − y(x_i)|, where ỹ(x_i) and y(x_i) represent the numerical and exact solutions at the node x_i, respectively.
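The error measure above, and the observed order of convergence obtained by halving the step size, can be sketched as follows (our own helpers; the rate formula log₂(E(h)/E(h/2)) is the standard way such tables are produced, though the paper does not spell it out):

```python
import math

def max_abs_error(approx, exact):
    """Maximum absolute error max_i |y~(x_i) - y(x_i)| over the nodes."""
    return max(abs(a - e) for a, e in zip(approx, exact))

def convergence_rate(err_h, err_h2):
    """Observed order of convergence when the step size is halved:
    rate = log2(E(h) / E(h/2))."""
    return math.log(err_h / err_h2, 2)
```

For a second order method, halving h should divide the maximum error by roughly 4, giving a rate close to 2.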
The analytic solutions of (20)–(22) are y = (x − 1)eˣ, y = x sin x and y = cos x − 1, respectively. In Table 1, we list the approximate and exact solutions of (20). In Table 2, we show the maximum errors in Examples 1–3 and the rates of convergence in Examples 1 and 2. From Table 2, we can observe that the order of convergence is 2. It is evident from Tables 1 and 2 that the proposed method produces high precision numerical solutions. Especially in Example 3, the errors of the numerical solutions are close to machine accuracy. This is because the approximate function (5) is constructed from the analytic solutions of the linear differential equation with constant coefficients, and Equation (22) happens to be one with constant coefficients, so it is not surprising that we obtain nearly exact solutions in Example 3. In Figure 1a, we show the exact solution and the numerical results obtained by the proposed method; the corresponding error curve is shown in Figure 1b. These results coincide with those listed in Table 2.

Example 4. Consider the linear differential equation with variable coefficients [3,5], with boundary conditions y'(0) = −1, y'(1) = 2 sin 1.
The analytic solution of (23) is y = (x² − 1) sin x. In Table 3, we list the maximum errors and the elapsed CPU times. The numerical results obtained by implementing the methods of [3,5] are also listed for comparison. Note that the elapsed CPU time of the same routine in Matlab may differ from run to run; in order to measure the time accurately, we took the average over 100 independent runs as the elapsed CPU time.
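The averaging of runtimes described above can be sketched as follows (the paper's experiments were run in Matlab; this Python helper is our own analogous sketch, using a monotonic wall-clock timer):

```python
import time

def average_runtime(solver, runs=100):
    """Average elapsed time of `solver` over several independent runs,
    mirroring the averaging used to report CPU times in Table 3."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        solver()
        total += time.perf_counter() - start
    return total / runs
```

Averaging over many runs smooths out scheduler jitter and cache effects, which is why a single timing of a fast routine is rarely reliable.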
On one hand, our method is more accurate than Ramadan's method [3] and comparable to Srivastava's method [5]. On the other hand, the runtime of our method is less than that of Srivastava's method and comparable to that of Ramadan's method. This indicates that Srivastava's method achieves the best precision in our experiment, while our method is superior in terms of elapsed CPU time. This is because we only need to solve a tridiagonal linear system instead of the penta-diagonal system in [5], which is much easier to implement. Clearly, the proposed method is efficient, and therefore it provides a new approach to dealing with second order linear differential equations.

Conclusions
Based on the analytic solutions of the linear differential equation with constant coefficients, we have proposed a numerical method for second order differential equations. The analytic solutions are used to construct the approximation function, and the local truncation error is analyzed. Numerical examples have shown the effectiveness of the proposed method. Compared with other methods, we only need to solve a tridiagonal system, which is much easier to implement.