On the Numerical Simulation of HPDEs Using θ-Weighted Scheme and the Galerkin Method

Abstract: Herein, an efficient algorithm is proposed to solve a one-dimensional hyperbolic partial differential equation. To reach an approximate solution, we employ the θ-weighted scheme to discretize the time interval into a finite number of time steps. In each step, we obtain a linear ordinary differential equation. Applying the Galerkin method based on interpolating scaling functions, we solve this ODE; therefore, in each time step, the solution is found as a continuous function. Stability, consistency, and convergence of the proposed method are investigated. Several numerical examples demonstrate the accuracy and efficiency of the method and confirm the validity of the stability, consistency, and convergence analysis.


Introduction
Partial differential equations (PDEs) are ubiquitous in the mathematical sciences and play an important role in engineering and physics. They also arise from many purely mathematical considerations, such as the calculus of variations and differential geometry. One important subclass of PDEs is hyperbolic partial differential equations (HPDEs).
HPDEs model phenomena in many fields, such as biology, industry, atomic physics, and aerospace [1-3]. The telegraph and wave equations are the most famous types of HPDEs and are applicable in various areas such as random walk theory, wave propagation, and signal analysis [2,4].
In this study, we construct and analyze a numerical algorithm based on the θ-weighted method and the Galerkin method to solve HPDEs of the form (1), subject to the boundary condition w(0, t) = g(t), t ∈ [0, T], and the initial condition w(x, 0) = h(x), x ∈ [0, 1].
Furthermore, we assume that f, g, and h are known functions and that a1 and a2 are real constants.
In this paper, we apply an efficient scheme that has not been used before to solve such problems. The method consists of three steps. In the first step, we use the θ-weighted method to break the time interval into a finite number of time steps; at each time step, we obtain a linear ordinary differential equation. In the second step, the obtained ODE is solved using the Galerkin method based on interpolating scaling functions (ISFs). In comparison to the scaling functions arising from multiresolution analysis (MRA), ISFs have several properties that make them attractive.
These include flexible vanishing moments, compact support, orthonormality, and a closed form. The most important property of these bases is the interpolation property, which avoids computing integrals when finding expansion coefficients. In the last step, the linear algebraic system obtained in the second step is solved using an appropriate technique. Stability, consistency, and convergence are analyzed, and numerical tests confirm the validity of the analysis.
Numerous studies have proposed a variety of numerical and analytical solutions to HPDEs. Doha et al. [5] proposed a numerical method based on the collocation method for solving a system consisting of such equations. In [6], the spectral-Galerkin method is proposed to solve this equation. Singh et al. [7] solved one-dimensional HPDEs with the initial and boundary conditions (2) and (3), using an algorithm based on Chebyshev and Legendre multiwavelets. Dehghan et al. [8] introduced a numerical scheme based on cubic B-spline scaling functions to solve (1) with nonlocal conservation conditions. Bin Jebreen et al. [4] proposed an efficient method based on the wavelet Galerkin method to solve the telegraph equation, and the collocation method based on interpolating scaling functions is also used for solving this equation in [3]. For further study, we refer the reader to [9,10].
The outline of the paper is as follows. Section 2 is devoted to a brief introduction to interpolating scaling functions. In Section 3, the mixed θ-weighted scheme and the Galerkin method based on interpolating scaling functions are used to solve the desired equation, and the stability, consistency, and convergence of the scheme are investigated. Section 4 is devoted to numerical examples that show the ability and accuracy of the proposed method.

Interpolating Scaling Functions
To derive a set of bases that satisfies the multiresolution analysis conditions, Alpert et al. [11] introduced a set of functions generating the nested spaces {V^r_J}_{J=0}^∞ ⊂ L²([0, 1]) using piecewise polynomial bases of degree less than r (the multiplicity parameter). Let J ∈ Z⁺ ∪ {0} be given. Putting B_J := {0, . . . , 2^J − 1} and R := {0, 1, . . . , r − 1}, there is a sequence of nested subspaces spanned by the interpolating scaling functions {φ_k}_{k∈R}, introduced by Alpert using the Legendre polynomials. Assume that P_r is the Legendre polynomial of order r, {τ_k}_{k∈R} are the roots of P_r, and {ω_k}_{k∈R} are the corresponding Gauss-Legendre quadrature weights [3,11]. We determine the ISFs as follows:

φ_k(x) := sqrt(2/ω_k) ℓ_k(2x − 1), x ∈ [0, 1], k ∈ R,

where ℓ_k is the Lagrange interpolating polynomial of degree r − 1 satisfying ℓ_k(τ_i) = δ_{i,k}. The bases of V^r_J are then obtained by dilation and translation,

φ^k_{J,b}(x) := 2^{J/2} φ_k(2^J x − b), b ∈ B_J, k ∈ R.

These bases fulfill the orthonormality relation ⟨φ^k_{J,b}, φ^{k'}_{J,b'}⟩ = δ_{b,b'} δ_{k,k'}, where ⟨·, ·⟩ denotes the L²-inner product on Ω := [0, 1].
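The construction can be checked numerically. The following sketch (a minimal illustration, not the authors' code) builds r interpolating scaling functions on [0, 1] from the Legendre roots and Gauss-Legendre weights and verifies the orthonormality relation by quadrature; the normalization sqrt(2/ω_k) applied to the Lagrange polynomials is our reconstruction of the standard Alpert definition.

```python
import numpy as np

def isf_basis(r):
    """Return phi(k, x) evaluating the k-th interpolating scaling
    function phi_k(x) = sqrt(2 / w_k) * l_k(2x - 1) on [0, 1],
    where l_k is the Lagrange polynomial through the
    Gauss-Legendre nodes tau_0, ..., tau_{r-1}."""
    tau, w = np.polynomial.legendre.leggauss(r)   # nodes/weights on [-1, 1]

    def phi(k, x):
        t = 2.0 * np.asarray(x) - 1.0             # map [0, 1] -> [-1, 1]
        lk = np.ones_like(t)
        for i in range(r):
            if i != k:
                lk *= (t - tau[i]) / (tau[k] - tau[i])
        return np.sqrt(2.0 / w[k]) * lk

    return phi

# numerical check of <phi_j, phi_k> = delta_{jk} on [0, 1]
r = 4
phi = isf_basis(r)
x, wq = np.polynomial.legendre.leggauss(50)       # fine quadrature rule
x = 0.5 * (x + 1.0)                               # rescale nodes to [0, 1]
wq = 0.5 * wq
G = np.array([[np.sum(wq * phi(j, x) * phi(k, x))
               for k in range(r)] for j in range(r)])
print(np.allclose(G, np.eye(r)))                  # -> True
```

Since the integrands are polynomials of degree at most 2r − 2, the quadrature evaluates the Gram matrix exactly, and it comes out as the identity.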
Assume that ∪_{b∈B_J} I_{J,b} is a uniform finite discretization of Ω, where the subintervals I_{J,b} := [x_b, x_{b+1}] are specified by the points x_b := b/2^J. To project a function into V^r_J, we introduce the orthogonal projection P^r_J that maps L²(Ω) onto the subspace V^r_J. Utilizing this projection, every function p ∈ L²(Ω) can be represented in the form

p ≈ P^r_J(p) = Σ_{b∈B_J} Σ_{k∈R} p^k_{J,b} φ^k_{J,b}.      (4)

Due to the orthonormality of the bases, it is easy to prove that the coefficients can be obtained by p^k_{J,b} = ⟨p, φ^k_{J,b}⟩ = ∫_{I_{J,b}} p(x) φ^k_{J,b}(x) dx. To avoid integration, we apply the interpolation property of the ISFs [11,12], via

p^k_{J,b} ≈ 2^{−J/2} sqrt(ω_k/2) p(x̂_{k,b}), x̂_{k,b} := 2^{−J}((τ_k + 1)/2 + b).

Given an r-times continuously differentiable function p ∈ C^r(Ω), the projection error is bounded in the L²-norm by

‖p − P^r_J(p)‖₂ ≤ C 2^{−Jr} sup_{x∈Ω} |p^{(r)}(x)|,      (6)

where C is a constant depending only on r. According to this relation, the projection converges when J or r increases; for more details, we refer the reader to [13]. Consequently, this projection is convergent with the rate O(2^{−Jr}).
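A small experiment illustrates both the interpolation property and the O(2^{−Jr}) convergence rate. The sketch below is illustrative only (the helper names project_isf and eval_isf are ours): it computes the expansion coefficients without any integration, reconstructs p(x) = sin x from them, and checks that each increment of J shrinks the maximum error by roughly a factor 2^{−r}.

```python
import numpy as np

def project_isf(p, r, J):
    """Coefficients of P^r_J p via the interpolation property:
    p^k_{J,b} ~= 2^{-J/2} * sqrt(w_k / 2) * p(x_hat), with
    x_hat = 2^{-J} * ((tau_k + 1)/2 + b)  -- no integrals needed."""
    tau, w = np.polynomial.legendre.leggauss(r)
    coeffs = np.empty((2**J, r))
    for b in range(2**J):
        xh = 2.0**-J * ((tau + 1.0) / 2.0 + b)
        coeffs[b] = 2.0**(-J / 2) * np.sqrt(w / 2.0) * p(xh)
    return coeffs

def eval_isf(coeffs, x):
    """Evaluate sum_{b,k} p^k_{J,b} * phi^k_{J,b}(x)."""
    B, r = coeffs.shape
    J = int(np.log2(B))
    tau, w = np.polynomial.legendre.leggauss(r)
    x = np.asarray(x, dtype=float)
    b = np.minimum((x * B).astype(int), B - 1)    # cell index of each x
    t = 2.0 * (2.0**J * x - b) - 1.0              # local coordinate in [-1, 1]
    out = np.zeros_like(x)
    for k in range(r):
        lk = np.ones_like(t)
        for i in range(r):
            if i != k:
                lk *= (t - tau[i]) / (tau[k] - tau[i])
        out += coeffs[b, k] * 2.0**(J / 2) * np.sqrt(2.0 / w[k]) * lk
    return out

p = np.sin
xs = np.linspace(0.0, 1.0, 1001)
errs = [np.max(np.abs(eval_isf(project_isf(p, 3, J), xs) - p(xs)))
        for J in (2, 3, 4)]
# with r = 3, each refinement should shrink the error by roughly 2^{-r} = 1/8
```

On each cell the expansion is exactly the Lagrange interpolant at the scaled Gauss-Legendre nodes, which is why the observed error decays at the stated rate.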
We define the vector function Φ^r_J, whose entries are the scaling functions φ^k_{J,b}; it is called the multi-scaling function. The approximation (4) can be rewritten using the vector P, whose entries are P_{br+k+1} := p^k_{J,b}, as

p ≈ P^T Φ^r_J,

where P is a vector of dimension N := r2^J.
To approximate a higher-dimensional function, the same building blocks can be utilized. In this regard, one considers the subspace V^{r,2}_J := V^r_J ⊗ V^r_J. To derive an approximation of a two-dimensional function p ∈ L²(Ω × Ω), we apply the projection operator P^r_J in each variable, viz.

p(x, y) ≈ Φ^r_J(x)^T P Φ^r_J(y),

where the components of the square matrix P of order N are obtained, via the interpolation property, from the values of p at pairs of interpolation points. The projection error is bounded analogously to (6) by a constant multiple of 2^{−Jr} M_max, where M_max is a constant bounding the r-th partial derivatives of p [12].

Numerical Method of Solution
The main idea behind the proposed method is based on the θ-weighted scheme and the Galerkin method. In the first step, the θ-weighted method is used to discretize the time interval into a finite number of time steps. The linear system of ordinary differential equations obtained after the first step is then reduced to a system of algebraic equations by using the Galerkin method in the second step. Thus, one can find the approximate solution of the desired equation at the time step points t_n := nδt, n = 0, . . . , M (t_n ∈ [0, T] and T = Mδt).
To discretize the time variable, we use the θ-weighted scheme as follows:

(w^{n+1} − w^n)/δt + θ (a1 w^{n+1}_x + a2 w^{n+1}) + (1 − θ)(a1 w^n_x + a2 w^n) = θ f^{n+1} + (1 − θ) f^n + R,      (11)

where for simplicity w(x, t_n) is denoted by w^n, with t_n = t_{n−1} + δt (or equivalently t_n = nδt); moreover, |R| ≤ Cδt for a positive constant C, and θ ∈ [0, 1] is a constant. Note that, by selecting different values of θ, one finds various methods, such as the implicit method (θ = 1), the explicit method (θ = 0), and the Crank-Nicolson method (θ = 1/2). Since the remainder term R is a small quantity, one can neglect it and, after simplification, obtain

w^{n+1} + θ δt (a1 w^{n+1}_x + a2 w^{n+1}) = w^n − (1 − θ) δt (a1 w^n_x + a2 w^n) + δt (θ f^{n+1} + (1 − θ) f^n).      (12)

Next, this system of ordinary differential equations is discretized by the Galerkin method based on ISFs. To this end, one approximates the solution w^n of (12) using the projection operator P^r_J, via

w^n ≈ (W^n)^T Φ^r_J,      (13)

where W^n, for n = 1, . . . , M, is a vector of dimension N which must be found. The same approximation is applied to w^n_x, as

w^n_x ≈ (W^n)^T D_φ Φ^r_J,      (14)

where D_φ is the operational matrix of derivative introduced in [14-16].
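The θ-weighted rule can be sketched for a generic semi-discrete system u' = Lu + f(t); here L and f are stand-ins for the spatial part of the problem, not the paper's operational matrices. The sketch below implements one step and checks it on u' = −u, where the exact solution is known.

```python
import numpy as np

def theta_step(u, L, f_n, f_np1, dt, theta):
    """One theta-weighted step for u' = L u + f(t):
    (I - theta*dt*L) u^{n+1} = (I + (1-theta)*dt*L) u^n
                               + dt*(theta*f^{n+1} + (1-theta)*f^n).
    theta = 0: explicit; theta = 1: implicit; theta = 1/2: Crank-Nicolson."""
    I = np.eye(len(u))
    A = I - theta * dt * L
    rhs = ((I + (1.0 - theta) * dt * L) @ u
           + dt * (theta * f_np1 + (1.0 - theta) * f_n))
    return np.linalg.solve(A, rhs)

# sanity check on u' = -u, u(0) = 1, exact solution exp(-t)
L = np.array([[-1.0]])
u = np.array([1.0])
dt = 0.01
for _ in range(100):                      # integrate up to t = 1
    u = theta_step(u, L, np.zeros(1), np.zeros(1), dt, theta=0.5)
print(abs(u[0] - np.exp(-1.0)) < 1e-4)    # Crank-Nicolson: O(dt^2) error
```

With θ = 1/2 the global error is second order in δt, so the computed value agrees with e^{−1} well within the stated tolerance.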
Replacing (13) and (14) in (12) yields

(W^{n+1})^T Φ^r_J + θ δt (a1 (W^{n+1})^T D_φ Φ^r_J + a2 (W^{n+1})^T Φ^r_J) = (W^n)^T Φ^r_J − (1 − θ) δt (a1 (W^n)^T D_φ Φ^r_J + a2 (W^n)^T Φ^r_J) + δt (θ F^{n+1} + (1 − θ) F^n)^T Φ^r_J,      (15)

where F^n is an N × 1 vector obtained by projecting the function f^n into V^r_J. Let

A := (1 + θ δt a2) I + θ δt a1 D_φ^T,  B := (1 − (1 − θ) δt a2) I − (1 − θ) δt a1 D_φ^T,      (16)

where I is the identity matrix of dimension N. The multi-scaling functions Φ^r_J(x) are orthonormal bases for V^r_J; thus, they are linearly independent, and we obtain the linear system

A W^{n+1} = B W^n + δt (θ F^{n+1} + (1 − θ) F^n) =: b.      (17)

To apply the boundary condition (2), it can also be projected into V^r_J, via w(0, t_{n+1}) ≈ (W^{n+1})^T Φ^r_J(0) = g(t_{n+1}). Substituting the first row of A and the first element of b by Φ^r_J(0)^T and g(t_{n+1}), respectively, we obtain the modified system

Ã W^{n+1} = b̃.

Now, note that, to start the time stepping, the initial condition is utilized via W^0, obtained by projecting h into V^r_J (i.e., w^0 ≈ (W^0)^T Φ^r_J). This gives a system of equations at every time step t_n, n = 0, . . . , M; thus, one obtains the approximate solution w(x, t_n) by means of the linear expansion (13) of interpolating scaling functions.
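The boundary-row replacement can be illustrated in a few lines. The matrices below are random stand-ins for the actual Galerkin matrices, and phi0 plays the role of the basis values Φ^r_J(0); after overwriting the first equation, the computed coefficient vector satisfies the boundary condition exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # stand-in Galerkin matrix
b = rng.standard_normal(N)                         # stand-in right-hand side
phi0 = rng.standard_normal(N)                      # stand-in for Phi^r_J(0)
g = 2.5                                            # boundary value g(t_{n+1})

# replace the first equation of the system by the boundary condition
A_mod = A.copy()
b_mod = b.copy()
A_mod[0, :] = phi0
b_mod[0] = g

W = np.linalg.solve(A_mod, b_mod)
print(np.isclose(phi0 @ W, g))   # the computed solution satisfies w(0) = g
```

The remaining N − 1 rows still enforce the Galerkin equations, so one row of accuracy is traded for exact satisfaction of the boundary condition.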

Stability
To analyze the stability of the time discretization by the θ-weighted scheme, assume that ŵ^{n+1} is the computed approximate solution of (17), and set e^{n+1} := w^{n+1} − ŵ^{n+1} as the error that arises from the proposed Galerkin method. Consequently, the roundoff error satisfies the same recursion as the solution,

A e^{n+1} = B e^n.      (21)

Projecting the error e^n into V^r_J using P^r_J, one can write

e^n ≈ (E^n)^T Φ^r_J.      (22)

Inserting (22) into (21) and applying the operational matrix of derivative for ISFs gives

E^{n+1} = A^{−1} B E^n.      (24)

Taking norms on both sides of (24) and using the submultiplicative property of the matrix norm, we obtain the inequality

‖E^{n+1}‖ ≤ ‖A^{−1} B‖ ‖E^n‖.

This gives rise to a necessary and sufficient condition for the stability of the method: the spectral radius ρ of the matrix A^{−1}B must be less than one (ρ(A^{−1}B) < 1).
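The stability criterion is straightforward to test numerically. In the sketch below, A and B are built for an illustrative model operator (a first-order upwind derivative plus a damping term, not the paper's ISF operational matrices), and the spectral radius of A^{−1}B is computed for the explicit and Crank-Nicolson choices of θ:

```python
import numpy as np

def spectral_radius(A, B):
    """rho(A^{-1} B): the amplification factor of the error recursion
    E^{n+1} = A^{-1} B E^n; the scheme is stable iff rho < 1."""
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(A, B))))

# model spatial operator: L = -(a1*D + a2*I), D a first-order upwind derivative
N, a1, a2 = 8, 1.0, 1.0
h = 1.0 / N
D = (np.eye(N) - np.eye(N, k=-1)) / h
Lmat = -(a1 * D + a2 * np.eye(N))

def AB(theta, dt):
    A = np.eye(N) - theta * dt * Lmat
    B = np.eye(N) + (1.0 - theta) * dt * Lmat
    return A, B

print(spectral_radius(*AB(0.0, 0.5)) > 1.0)   # explicit, large dt: unstable
print(spectral_radius(*AB(0.0, 0.05)) < 1.0)  # explicit, small dt: stable
print(spectral_radius(*AB(0.5, 0.5)) < 1.0)   # Crank-Nicolson: stable
```

This reproduces the behavior discussed in the numerical experiments: the explicit method is only conditionally stable (it requires a sufficiently small δt), whereas the Crank-Nicolson choice keeps ρ below one for the same large step.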

Convergence Analysis
Assume that e^{n+1} := w^{n+1} − ŵ^{n+1}. Subtracting (12) from (15) and using the notation of the previous section, one can write, after some simplification,

E^{n+1} = A^{−1} B E^n + δt A^{−1} (θ (F^{n+1} − F̂^{n+1}) + (1 − θ)(F^n − F̂^n)),

where f^n := f(x, t_n) and f̂^n := P^r_J(f^n). Due to the invertibility of the matrix D_φ [11], the matrix A is invertible, and the constants C1 := ‖A^{−1}‖ and C2 := ‖B‖ are finite. Therefore, we have

‖E^{n+1}‖ ≤ C1 C2 ‖E^n‖ + C1 δt ‖θ (F^{n+1} − F̂^{n+1}) + (1 − θ)(F^n − F̂^n)‖.      (27)

It follows from (6) that ‖E^n‖ and ‖f^n − f̂^n‖ are bounded. Note that θ plays an important role in the structure of A^{−1} and B; thus, the values of C1 and C2 change when θ changes, and these constants enter directly into the error bound (27).
We know that a method is consistent if, by refining the mesh (increasing the refinement level J) and reducing the time step size δt, the truncation error can be made to approach zero. Consequently, the inequality (27) confirms that the method is consistent at every time step when the refinement level J or the multiplicity r increases.
If the stability condition holds (ρ(A^{−1}B) < 1) and the Galerkin method used to solve the ordinary differential equation at each time step is consistent (the overall truncation error approaches zero as J → ∞), then the numerical solution converges to the exact solution. This follows from the Lax equivalence theorem [17].

Numerical Results
To illustrate the validity of the stability, consistency, and convergence analysis, several numerical tests are considered in this section.

Example 1.
In the first example, the desired Equation (1) is considered with boundary and initial conditions chosen such that the exact solution, reported in [6], is w(x, t) = cos(x + t). Table 1 compares the L²-errors of the explicit, implicit, and Crank-Nicolson methods with time step size δt = 0.1/2^{m−1}, m = 1, . . . , 10. It is evident that the error tends to zero as m increases. Table 2 reports the L²-error of Example 1 at different values of time. The L²-error graphs of the explicit, implicit, and Crank-Nicolson methods for different values of J with r = 3 are shown in Figure 1. Figure 2 illustrates the approximate solution and the absolute error for r = 5 and J = 2 at time t = 1. Table 3 displays the absolute values of the error at selected points obtained by the presented method with r = 4, J = 1, θ = 1/2, and δt = 0.1/2^9. The results are compared with the Legendre wavelet and Chebyshev wavelet collocation methods [7] and with the Bernoulli matrix approach [18]. To confirm the stability condition obtained in Section 3.1, Figures 3 and 4 and Table 4 are considered. One can observe that, when the spectral radius of the matrix A^{−1}B is less than 1, the proposed method is stable. In view of Figures 3 and 4, the explicit method becomes stable only at m = 10, while the Crank-Nicolson method is stable from m = 1; the same holds for the implicit method (θ = 1).
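The qualitative behavior of Table 1 (error decreasing as m grows, with Crank-Nicolson more accurate than the implicit method) can be reproduced on a scalar model problem u' = λu, whose exact solution is known. This is only an illustration of the θ-weighted time error, not a re-run of Example 1:

```python
import numpy as np

def theta_solve(theta, dt, T=1.0, lam=-1.0):
    """Integrate u' = lam*u, u(0) = 1, with the theta-weighted rule
    and return the error against the exact solution exp(lam*T)."""
    n = int(round(T / dt))
    # per-step amplification factor of the theta-weighted scheme
    amp = (1.0 + (1.0 - theta) * dt * lam) / (1.0 - theta * dt * lam)
    u = amp ** n
    return abs(u - np.exp(lam * T))

dts = [0.1 / 2**(m - 1) for m in range(1, 11)]     # dt = 0.1/2^{m-1}
err_cn  = [theta_solve(0.5, dt) for dt in dts]     # Crank-Nicolson
err_imp = [theta_solve(1.0, dt) for dt in dts]     # implicit

print(all(e2 < e1 for e1, e2 in zip(err_cn, err_cn[1:])))  # error shrinks with m
print(all(c < i for c, i in zip(err_cn, err_imp)))         # CN beats implicit here
```

The Crank-Nicolson errors shrink by roughly a factor of four per halving of δt (second order), while the implicit errors shrink by roughly a factor of two (first order), mirroring the trend in the tables.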

Example 2.
As the second example, let us consider the HPDE (1) with a1 = 1 and a2 = 1 and with boundary and initial conditions chosen such that the exact solution [7] is w(x, t) = (x − t)². Table 5 compares the L²-errors of the explicit, implicit, and Crank-Nicolson methods with time step size δt = 0.1/2^{m−1}, m = 1, . . . , 10, r = 5, and J = 2. Table 6 reports the L²-error of Example 2 at different values of time. Figure 5 illustrates the approximate solution and the absolute error for r = 5 and J = 2 at time t = 1. Figure 6 shows the L²-error of the explicit and implicit methods for r = 3 and J = 2 with δt = 0.1/2^{m−1}, m = 1, . . . , 10. Figures 7-9 confirm our investigation of stability: if the spectral radius of the matrix A^{−1}B is not less than 1, then the time discretization diverges as t increases. To reduce this effect, we must increase the number of time steps (i.e., decrease δt). In Table 7, the results are compared with the Legendre wavelet and Chebyshev wavelet collocation methods [7]; the proposed method offers better accuracy for the same multiplicity parameter r and refinement level J. Figure 10 shows the effect of the refinement level J and the time step size δt on the absolute error; in addition, this figure confirms our investigation of consistency.

Conclusions
This work is devoted to solving one-dimensional hyperbolic partial differential equations with boundary and initial conditions. To this end, the desired equation is reduced to an ordinary differential equation at each time step using the θ-weighted method. This ODE is solved by employing the Galerkin method based on interpolating scaling functions. The stability, consistency, and convergence of the method are investigated, and numerical examples are reported to illustrate its accuracy and efficiency. The results show that three parameters are important: the parameter θ, which selects the particular θ-weighted method; the time step size δt; and the refinement level J. The results also show that the proposed method yields better accuracy than comparable methods. Among the schemes considered in this paper, the implicit and Crank-Nicolson methods are stable and need fewer time steps than the explicit method to achieve comparable accuracy.