Article

An Efficient Spectral Method to Solve Multi-Dimensional Linear Partial Differential Equations Using Chebyshev Polynomials

School of Aerospace and Mechanical Engineering, Korea Aerospace University, Goyang 10540, Korea
Mathematics 2019, 7(1), 90; https://doi.org/10.3390/math7010090
Submission received: 25 November 2018 / Revised: 30 December 2018 / Accepted: 11 January 2019 / Published: 16 January 2019

Abstract:
We present a new method to efficiently solve multi-dimensional linear Partial Differential Equations (PDEs), called the quasi-inverse matrix diagonalization method. In the proposed method, the Chebyshev-Galerkin method is used to solve multi-dimensional PDEs spectrally. Efficient calculations are achieved by converting the dense systems of equations into sparse ones with the quasi-inverse technique and by separating coupled spectral modes with the matrix diagonalization method. When we applied the proposed method to 2-D and 3-D Poisson equations, coupled Helmholtz equations in 2-D, and a Stokes problem in 3-D, it was more efficient in all cases than other current methods, such as the quasi-inverse method and the matrix diagonalization method. Due to this efficiency, we believe the proposed method can be applied in the various fields where multi-dimensional PDEs must be solved.

1. Introduction

The spectral method has been used in many fields to solve linear partial differential equations (PDEs) such as the Poisson equation, the Helmholtz equation, and the diffusion equation [1,2]. The advantage of the spectral method over other numerical methods in solving linear PDEs is its high accuracy; when solutions of PDEs are smooth enough, errors of numerical solutions decrease exponentially as the number of discretization nodes increases [3].
Based on the types of boundary conditions, different spectral basis functions can be used to discretize in physical space. There are several boundary conditions used in mathematical and engineering linear PDEs, but they can be classified as either periodic or non-periodic. For periodic boundary conditions, the domain is usually discretized using the Fourier series, while Legendre polynomials and Chebyshev polynomials are most frequently used as basis functions for non-periodic boundary conditions. To investigate an efficient way to solve PDEs spectrally, we restrict the scope of this paper to linear PDEs with non-periodic boundary conditions. Of the spectral basis functions that can be used with non-periodic boundary conditions, we use Chebyshev polynomials to discretize the equations in space, exploiting their near-minimax error properties to achieve spectral accuracy across the whole domain [4].
There are several methods for solving linear PDEs using Chebyshev polynomials. In the collocation method, linear PDEs are directly solved in physical space so that implementation of the method is relatively easy compared to other methods. However, the differentiation matrix derived from the collocation method is full, making computation slow, and ill-conditioning of the differentiation matrix in the collocation method can produce numerical solutions with high errors, especially when a high number of collocation points is used [5].
Another method for solving linear PDEs spectrally with Chebyshev polynomials is the Chebyshev-tau method, which involves solving linear PDEs in spectral space. In the Chebyshev-tau method, the boundary conditions of PDEs are directly enforced in the system of equations [5,6,7]. This enforcement of boundary conditions produces tau lines, which slow numerical calculation because they increase interactions between Chebyshev modes [8,9]. As the dimension of the PDEs rises, the calculation time caused by tau lines increases rapidly, significantly slowing the solution of PDEs [10,11].
Unlike the Chebyshev-tau method, the Chebyshev-Galerkin method removes direct enforcement of boundary conditions from the system of equations by discretizing in space using carefully chosen basis functions, called Galerkin basis functions, that automatically obey the given boundary conditions [8]. As a result, when the Chebyshev-Galerkin method is used, tau lines are not created in the system of equations, and therefore the interaction of tau lines in numerical calculations is not a concern. Because of this advantage, the Chebyshev-Galerkin method is popular for spectral calculation of PDEs. For example, Shen [12] studied proper choices of Galerkin basis functions satisfying standard boundary conditions such as Dirichlet and Neumann boundary conditions and solved linear PDEs with them, and Julien and Watson [13] and Liu et al. [11] used Galerkin basis functions to solve multi-dimensional linear PDEs with high efficiency.
In 1-D problems, it is straightforward to write linear PDEs in matrix form in spectral space using a differentiation matrix (see Section 2 for more details). However, because the differentiation matrix is an upper triangular matrix, solving linear PDEs by inverting the differentiation matrix (or using iterative methods on it) is computationally expensive. Because the computational cost increases rapidly as the dimension of the problem increases, efficiently computing solutions of linear PDEs becomes even more important in multi-dimensional problems. In 1-D problems, computational time can easily be saved by converting the dense system into a sparse one using the three-term recurrence relation [14]. However, due to the coupling of Chebyshev modes in multi-dimensional problems, finding an efficient way to solve linear PDEs is not as straightforward as in 1-D problems.
To facilitate computation in multi-dimensional linear PDE problems, Julien and Watson [13] presented a method called the quasi-inverse technique. Based on the three-term recurrence relation in 1-D, they constructed a quasi-inverse matrix and applied it to multi-dimensional linear PDE problems. As a result of the quasi-inverse technique, dense differential operators in multi-dimensional linear PDEs are reduced to sparse operators, making it possible to efficiently solve multi-dimensional linear PDEs from the sparse systems with approximately $O(N^{2k-1})$ operations for k-dimensional problems. Haidvogel and Zang [15] and Shen [12] presented another way to efficiently solve multi-dimensional linear PDEs, called the matrix diagonalization method. The key idea of the matrix diagonalization method is to decouple the coupled Chebyshev modes using eigenvalue and eigenvector decomposition [16], lowering the dimensionality of the problem so that a high-dimensional problem is converted into multiple 1-D linear PDE problems parameterized by the eigenvalues of the system [17].
In this paper, we propose a new method to solve multi-dimensional linear PDEs using Chebyshev polynomials, called the quasi-inverse matrix diagonalization method, created by adapting the quasi-inverse technique to the matrix diagonalization method. In our method, the systems of equations are first derived using the quasi-inverse technique; then, the matrix diagonalization method is used to efficiently solve the derived equations using eigenvalue and eigenvector decomposition. Because our method combines the merits of both the quasi-inverse technique and the matrix diagonalization method, it computes numerical solutions of multi-dimensional linear PDE problems faster than either of these two methods. We will describe our quasi-inverse matrix diagonalization method in this paper by applying it to four test examples.
The rest of our paper is organized as follows. In Section 2, we review the Chebyshev-tau method, the Chebyshev-Galerkin method, and the quasi-inverse technique with a 1-D Poisson equation. The quasi-inverse matrix diagonalization method is then explained in the next section by applying it to 2-D and 3-D Poisson equations based on the Chebyshev-Galerkin method. In Section 4, we apply our quasi-inverse matrix diagonalization method to several multi-dimensional linear PDEs and compare the results with those of the quasi-inverse method and the matrix diagonalization method in terms of the accuracy of the numerical solutions and the CPU time required to solve the problems. Our conclusions are presented in Section 5.

2. 1-D Poisson Equation

In this section, we will explain the Chebyshev-tau method, the Chebyshev-Galerkin method, and the quasi-inverse technique by applying them to solve a 1-D Poisson equation.

2.1. Chebyshev-Tau Method

We start with the 1-D Poisson equation with Dirichlet boundary conditions:
$$\frac{\partial^2 u}{\partial x^2} = f(x), \quad -1 \le x \le 1, \qquad u(\pm 1) = 0. \tag{1}$$
Definition 1.
The functions u(x) and f(x) can be approximated in space with the M-th highest mode, respectively defined as
$$u(x) = \sum_{m=0}^{M} \hat{u}_m T_m(x) \tag{2}$$
and
$$f(x) = \sum_{m=0}^{M} \hat{f}_m T_m(x), \tag{3}$$
where $T_m(x)$ is the m-th Chebyshev polynomial in x, and $\hat{u}_m$ and $\hat{f}_m$ are the m-th Chebyshev coefficients of u(x) and f(x), respectively.
Definition 2.
The second derivative of u with respect to x, $\partial^2 u / \partial x^2$, can be approximated by a sum of the coefficients of the second derivative, $\hat{u}_m^{(2)}$, multiplied by Chebyshev polynomials:
$$\frac{\partial^2 u}{\partial x^2} = \sum_{m=0}^{M} \hat{u}_m^{(2)} T_m(x). \tag{4}$$
Definition 3.
The coefficients of the second derivative can be written as
$$\hat{u}_m^{(2)} = \frac{1}{c_m} \sum_{\substack{p = m+2 \\ p + m\ \mathrm{even}}}^{M} p\,(p^2 - m^2)\,\hat{u}_p, \tag{5}$$
where $c_0 = 2$ and $c_m = 1$ for $m = 1, 2, \ldots, M$ [18].
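Definition 3 can be checked numerically. The following sketch (ours, written in NumPy for convenience; the paper's own experiments use MATLAB, and all variable names here are our choices) builds $\hat{u}_m^{(2)}$ from the formula above and compares it with differentiating the Chebyshev series twice using `numpy.polynomial.chebyshev.chebder`:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M = 12
rng = np.random.default_rng(0)
u = rng.standard_normal(M + 1)      # arbitrary Chebyshev coefficients u_hat_m

# Reference: differentiate the Chebyshev series twice with NumPy.
u2_ref = C.chebder(u, 2)            # coefficients of d^2u/dx^2, modes 0..M-2

# Definition 3: u_hat_m^(2) = (1/c_m) * sum of p (p^2 - m^2) u_hat_p
# over p = m+2, m+4, ..., M (p + m even).
c = np.ones(M + 1); c[0] = 2.0
u2 = np.zeros(M + 1)
for m in range(M + 1):
    for p in range(m + 2, M + 1):
        if (p + m) % 2 == 0:
            u2[m] += p * (p**2 - m**2) * u[p]
    u2[m] /= c[m]

assert np.allclose(u2[:M - 1], u2_ref)   # the top two modes of u2 are zero
```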
From Equations (1) and (3)–(5), we can obtain
$$\frac{1}{c_m} \sum_{\substack{p = m+2 \\ p + m\ \mathrm{even}}}^{M} p\,(p^2 - m^2)\,\hat{u}_p = \hat{f}_m. \tag{6}$$
In matrix form, Equation (6) can be rewritten as
$$D_x^2 \mathbf{u} = \mathbf{f}, \tag{7}$$
where $D_x^2$ is usually called the 1-D Laplacian matrix discretized in x, whose components are defined by the left-hand side of Equation (6), and $\mathbf{u}$ and $\mathbf{f}$ are the $(M+1) \times 1$ arrays $\mathbf{u} = [\hat{u}_0, \hat{u}_1, \ldots, \hat{u}_M]^T$ and $\mathbf{f} = [\hat{f}_0, \hat{f}_1, \ldots, \hat{f}_M]^T$, respectively. Note that $D_x^2$ is a square matrix of order $M+1$, but its rank is $M-1$ because all elements in its two bottom-most rows are zeros. To make Equation (7) solvable, two additional equations, provided by the boundary conditions of the Poisson equation as follows,
$$\sum_{m=0}^{M} \hat{u}_m = 0, \qquad \sum_{m=0}^{M} (-1)^m \hat{u}_m = 0, \tag{8}$$
are required to be enforced in the matrix of Equation (7). If the 1-D Laplacian matrix and the right-hand-side array with the boundary conditions incorporated are defined as $\bar{D}_x^2$ and $\bar{\mathbf{f}}$, respectively, solutions of the 1-D Poisson equation can be obtained from
$$\bar{D}_x^2 \mathbf{u} = \bar{\mathbf{f}} \tag{9}$$
by solving Equation (9) for $\mathbf{u}$. However, solving the 1-D Poisson equation from Equation (9) is an inefficient way to obtain numerical solutions because the left-hand side matrix of Equation (9) is an upper triangular matrix with the tau lines at the bottom. The operation complexity of solving this system is $O(M^2)$, which is not computationally cheap.

2.2. Quasi-Inverse Approach

A better way of computing the 1-D Poisson equation is to use the recursion relation for derivative coefficients,
$$c_{m-1}\,\hat{u}_{m-1}^{(q)} - \hat{u}_{m+1}^{(q)} = 2m\,\hat{u}_m^{(q-1)}, \qquad m, q \ge 1, \tag{10}$$
where $\hat{u}_m^{(q)}$ is the m-th Chebyshev coefficient of the q-th derivative of u with respect to x, defined by
$$\frac{d^q u}{d x^q} = \sum_{m=0}^{M} \hat{u}_m^{(q)} T_m(x). \tag{11}$$
Using Equation (10) with q = 2 and q = 1 respectively gives
$$c_{m-1}\,\hat{u}_{m-1}^{(2)} - \hat{u}_{m+1}^{(2)} = 2m\,\hat{u}_m^{(1)}, \tag{12}$$
$$c_{m-1}\,\hat{u}_{m-1}^{(1)} - \hat{u}_{m+1}^{(1)} = 2m\,\hat{u}_m, \tag{13}$$
where $\hat{u}_m$ is used instead of $\hat{u}_m^{(0)}$. Without loss of generality, to satisfy Equation (1) in Chebyshev space for all m, the condition $\hat{u}_m^{(2)} = \hat{f}_m$ must hold. Then, substituting $\hat{f}_{m-1}$ and $\hat{f}_{m+1}$ for $\hat{u}_{m-1}^{(2)}$ and $\hat{u}_{m+1}^{(2)}$ in Equation (12) and rearranging the terms give
$$\hat{u}_m^{(1)} = \frac{1}{2m}\left( c_{m-1}\,\hat{f}_{m-1} - \hat{f}_{m+1} \right). \tag{14}$$
Then, with Equations (13) and (14), it is possible to derive
$$\hat{u}_m = \frac{c_{m-2}}{4m(m-1)}\,\hat{f}_{m-2} - \frac{\beta_m}{2(m^2-1)}\,\hat{f}_m + \frac{\beta_{m+2}}{4m(m+1)}\,\hat{f}_{m+2} \tag{15}$$
for $m = 2, 3, \ldots, M$, where $\beta_m = 1$ for $m = 2, 3, \ldots, M-2$ and $\beta_m = 0$ for $m > M-2$.
Remark 1.
If the matrix $J_x$ is defined as the identity matrix except that its two top-most diagonal elements are zeroed out, i.e., $J_x = \mathrm{diag}([0\ 0\ 1\ 1\ \cdots\ 1])$, Equation (15) can be written in matrix form as
$$J_x \mathbf{u} = B_x \mathbf{f}, \tag{16}$$
where $B_x$ is the tridiagonal matrix whose non-zero elements $b_{i,j}$ are given by
$$b_{i,j} = \begin{cases} \dfrac{c_{i-2}}{4i(i-1)} & \text{if } j = i-2, \\[6pt] -\dfrac{\beta_i}{2(i^2-1)} & \text{if } j = i, \\[6pt] \dfrac{\beta_{i+2}}{4i(i+1)} & \text{if } j = i+2, \end{cases}$$
for $i = 2, 3, \ldots, M$.
Note that the subscript x in matrix symbols (e.g., $B_x$ and $J_x$) indicates that those matrices are associated with the x direction. This subscript notation is useful for distinguishing the influence of matrices in multi-dimensional problems, where the subscripts can be x and y in 2-D, and x, y, and z in 3-D. When the matrix $B_x$ is multiplied on the left in Equation (7), it is possible to claim
$$B_x D_x^2 = J_x. \tag{17}$$
The matrix $B_x$ is called a quasi-inverse matrix in x for the Laplace operator because it acts like an inverse of the Laplace operator, producing the quasi-identity matrix (not the identity matrix) when multiplied with the Laplace operator.
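Equation (17) is easy to verify numerically. In the following NumPy sketch (an illustrative translation; the paper's code is MATLAB, and the names are ours), $D_x^2$ is assembled from Definition 3 and $B_x$ from Remark 1, and their product is compared with the quasi-identity $J_x$:

```python
import numpy as np

M = 16
c = np.ones(M + 1); c[0] = 2.0
beta = np.zeros(M + 3); beta[2:M - 1] = 1.0   # beta_m = 1 for m = 2..M-2, else 0

# Dense 1-D Laplacian matrix D_x^2 in Chebyshev space (Definition 3).
D2 = np.zeros((M + 1, M + 1))
for m in range(M + 1):
    for p in range(m + 2, M + 1):
        if (p + m) % 2 == 0:
            D2[m, p] = p * (p**2 - m**2) / c[m]

# Quasi-inverse matrix B_x (Remark 1).
B = np.zeros((M + 1, M + 1))
for i in range(2, M + 1):
    B[i, i - 2] = c[i - 2] / (4 * i * (i - 1))
    B[i, i] = -beta[i] / (2 * (i**2 - 1))
    if i + 2 <= M:
        B[i, i + 2] = beta[i + 2] / (4 * i * (i + 1))

# Equation (17): B_x D_x^2 equals the quasi-identity J_x.
J = np.eye(M + 1); J[0, 0] = J[1, 1] = 0.0
assert np.allclose(B @ D2, J)
```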
Similar to the Chebyshev-tau method, boundary conditions need to be incorporated into Equation (16). To do this, we rewrite the two equations in Equation (8) in matrix form. Defining $B_c$ as the $(M+1) \times (M+1)$ matrix whose elements are zeros except for the top two rows, where all elements in the top row are 1 and the elements in the second row alternate between 1 and $-1$ from the first to the last column, the boundary conditions can be written in matrix form as
$$B_c \mathbf{u} = \mathbf{f}_c, \tag{18}$$
where $\mathbf{f}_c$ is the $(M+1) \times 1$ zero array. Adding Equation (18) to Equation (16) gives
$$(J_x + B_c)\,\mathbf{u} = B_x \mathbf{f} + \mathbf{f}_c. \tag{19}$$
Note that $\mathbf{f}_c$ is the zero array only under the boundary conditions given in Equation (1). The term $\mathbf{f}_c$ could be omitted from Equation (19) because it is zero, but we retain it to clearly show the enforcement of the boundary conditions in the equation. The left-hand side matrix of Equation (19) is sparse, so it can be solved efficiently. The non-zero components of $(J_x + B_c)$ are the diagonal elements from $J_x$ and the top two rows' entries from $B_c$, which form the tau lines. This system can be solved for $\mathbf{u}$ with $O(M)$ operations using simple Gaussian elimination. Compared to the method described in Section 2.1, solving a Poisson equation with the quasi-inverse approach is more efficient because it reduces the operation complexity by one order in the number of unknowns.
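As a concrete illustration of Equation (19), the following NumPy sketch (ours; the test function $u = \sin(\pi x)$ is also our choice, not one of the paper's examples) solves the 1-D Poisson equation with the quasi-inverse Chebyshev-tau formulation and checks the result against the exact solution:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M = 32
c = np.ones(M + 1); c[0] = 2.0
beta = np.zeros(M + 3); beta[2:M - 1] = 1.0   # beta_m = 1 for m = 2..M-2

# Quasi-inverse matrix B_x for the Laplace operator (Remark 1).
B = np.zeros((M + 1, M + 1))
for i in range(2, M + 1):
    B[i, i - 2] = c[i - 2] / (4 * i * (i - 1))
    B[i, i] = -beta[i] / (2 * (i**2 - 1))
    if i + 2 <= M:
        B[i, i + 2] = beta[i + 2] / (4 * i * (i + 1))

# Quasi-identity J_x and boundary-condition (tau-line) matrix B_c.
J = np.eye(M + 1); J[0, 0] = J[1, 1] = 0.0
Bc = np.zeros((M + 1, M + 1))
Bc[0, :] = 1.0                            # sum of u_hat_m = 0  -> u(1) = 0
Bc[1, :] = (-1.0) ** np.arange(M + 1)     # alternating sum = 0 -> u(-1) = 0

# Test problem (our choice): u = sin(pi x), so f = -pi^2 sin(pi x).
xs = np.cos(np.pi * np.arange(M + 1) / M)           # Chebyshev extreme points
fhat = C.chebfit(xs, -np.pi**2 * np.sin(np.pi * xs), M)

# Equation (19); a dense solve is used for clarity, although the sparse
# structure admits O(M) Gaussian elimination.
uhat = np.linalg.solve(J + Bc, B @ fhat)
x = np.linspace(-1, 1, 201)
assert np.max(np.abs(C.chebval(x, uhat) - np.sin(np.pi * x))) < 1e-10
```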
The key point of the quasi-inverse approach for a Poisson equation is the construction of the quasi-inverse matrix $B_x$ for the Laplace operator. Multiplying both sides of the Poisson equation by $B_x$ simplifies the system of equations based on the property of $B_x$ described in Equation (17). This property of the quasi-inverse matrix $B_x$ can also be used to solve multi-dimensional PDEs efficiently, which will be described in Section 3.

2.3. Chebyshev-Galerkin Method

One disadvantage of the Chebyshev-tau method is the direct enforcement of boundary conditions to the system, which creates tau lines in the equation. For example, when the Poisson equation is solved with the quasi-inverse technique using the Chebyshev-tau method so that the final equation is expressed as Equation (19), the enforcement of boundary conditions generates tau lines in the top two rows. These tau lines increase interaction between Chebyshev modes, hindering fast calculations of the equation. To overcome this drawback of the Chebyshev-tau method and enable efficient computation, researchers have used the Chebyshev-Galerkin method to solve PDEs.
While the boundary conditions of PDEs are directly enforced in the systems of equations in the Chebyshev-tau method, when the Chebyshev-Galerkin method is used, boundary conditions are automatically satisfied without direct boundary condition enforcement. This is done by discretizing PDEs with new basis functions, called Galerkin basis functions, that satisfy the given boundary conditions automatically. There are plenty of ways to define Galerkin basis functions satisfying specific boundary conditions, but one commonly used way is to represent the Galerkin basis functions as linear combinations of Chebyshev polynomials, allowing simple but fast transformations between Chebyshev space and Galerkin space. In solving PDEs with Dirichlet boundary conditions using the Chebyshev-Galerkin method, Shen [12] showed the efficiency of the following Galerkin basis functions:
$$\phi_m(x) = T_{m+2}(x) - T_m(x), \tag{20}$$
where $\phi_m(x)$ is the m-th Galerkin basis function. In this paper, we use the same Galerkin basis functions as Shen to transform efficiently from Chebyshev space to Galerkin space and vice versa.
Definition 4.
When the 1-D Poisson equation is solved with the Chebyshev-Galerkin method using the Galerkin basis functions defined in Equation (20), u(x) can be approximated by a sum of the Galerkin basis functions multiplied by their coefficients $\hat{v}_m$, which can be written as
$$u(x) = \sum_{m=0}^{M-2} \hat{v}_m\,\phi_m(x) = \sum_{m=0}^{M-2} \hat{v}_m \left( T_{m+2}(x) - T_m(x) \right). \tag{21}$$
Remark 2.
Equating Equations (2) and (21) associates the Chebyshev coefficients with the Galerkin coefficients:
$$\mathbf{u} = S_x \mathbf{v}, \tag{22}$$
where $\mathbf{v}$ is the $(M-1) \times 1$ array $\mathbf{v} = [\hat{v}_0, \hat{v}_1, \ldots, \hat{v}_{M-2}]^T$ and $S_x$ is the $(M+1) \times (M-1)$ transformation matrix whose $(i, j)$ entry $s_{i,j}$ is defined as
$$s_{i,j} = \begin{cases} -1 & \text{if } j = i, \\ 1 & \text{if } j = i-2, \\ 0 & \text{otherwise}, \end{cases}$$
for $j = 0, 1, \ldots, M-2$.
Note that the matrix $S_x$ is not square because u(x) is discretized with $M+1$ modes in Chebyshev space but with $M-1$ modes in Galerkin space; the difference comes from the absence of boundary condition enforcement in the Chebyshev-Galerkin method. Substituting Equation (22) into Equation (16) gives
$$A_x \mathbf{v} = \mathbf{g}_x, \tag{23}$$
where $A_x = J_x S_x$ and $\mathbf{g}_x = B_x \mathbf{f}$. Note that $A_x$ is an $(M+1) \times (M-1)$ matrix and $\mathbf{g}_x$ is an $(M+1) \times 1$ array. The elements in the top two rows of $A_x$ and $\mathbf{g}_x$ are zeros because these rows are discarded for boundary condition enforcement. However, in the Chebyshev-Galerkin method, since the boundary conditions are automatically satisfied by the proper choice of Galerkin basis functions, the top two rows of $A_x$ and $\mathbf{g}_x$ are no longer necessary. Therefore, those modes can be deleted from the equation so that only the non-discarded modes are considered in Equation (23).
Definition 5.
Let us use the tilde symbol to denote a matrix whose top two rows have been thrown away. Based on this notation, $\tilde{A}_x$ and $\tilde{\mathbf{g}}_x$ are respectively the $(M-1) \times (M-1)$ matrix and the $(M-1) \times 1$ array obtained from $A_x$ and $\mathbf{g}_x$ by omitting their top two rows. Then, we can take into account only the non-discarded modes of Equation (23) by using $\tilde{A}_x$ and $\tilde{\mathbf{g}}_x$:
$$\tilde{A}_x \mathbf{v} = \tilde{\mathbf{g}}_x. \tag{24}$$
Because the matrix $\tilde{A}_x$ is a bi-diagonal matrix whose main diagonal elements are all 1 and whose second super-diagonal elements are all $-1$, Equation (24) can be solved for $\mathbf{v}$ efficiently with backward substitution in $O(M)$ operations. After $\mathbf{v}$ is computed, the solution of the 1-D Poisson equation, $\mathbf{u}$, can easily be obtained from Equation (22).
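The Chebyshev-Galerkin solve of Equation (24) can be sketched as follows (NumPy, for illustration only; the names and the test function $u = \sin(\pi x)$ are ours, and the paper's implementation is MATLAB). The bi-diagonal system is solved by backward substitution, and the solution is mapped back to Chebyshev space with $S_x$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M = 32
c = np.ones(M + 1); c[0] = 2.0
beta = np.zeros(M + 3); beta[2:M - 1] = 1.0

# Quasi-inverse matrix B_x (Remark 1).
B = np.zeros((M + 1, M + 1))
for i in range(2, M + 1):
    B[i, i - 2] = c[i - 2] / (4 * i * (i - 1))
    B[i, i] = -beta[i] / (2 * (i**2 - 1))
    if i + 2 <= M:
        B[i, i + 2] = beta[i + 2] / (4 * i * (i + 1))

# Galerkin-to-Chebyshev transform S_x for phi_j = T_{j+2} - T_j (Remark 2).
S = np.zeros((M + 1, M - 1))
for j in range(M - 1):
    S[j, j], S[j + 2, j] = -1.0, 1.0

# Test problem (our choice): u = sin(pi x), f = -pi^2 sin(pi x).
xs = np.cos(np.pi * np.arange(M + 1) / M)
fhat = C.chebfit(xs, -np.pi**2 * np.sin(np.pi * xs), M)

g = (B @ fhat)[2:]              # g_tilde: the two discarded rows are dropped
v = np.zeros(M - 1)
for j in range(M - 2, -1, -1):  # backward substitution on bi-diagonal A_tilde
    v[j] = g[j] + (v[j + 2] if j + 2 <= M - 2 else 0.0)

uhat = S @ v                    # Equation (22): back to Chebyshev coefficients
x = np.linspace(-1, 1, 201)
assert np.max(np.abs(C.chebval(x, uhat) - np.sin(np.pi * x))) < 1e-10
```

Note that the boundary conditions never appear explicitly: they are built into the basis $\phi_j = T_{j+2} - T_j$, which vanishes at $x = \pm 1$.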

3. Quasi-Inverse Matrix Diagonalization Method

The quasi-inverse technique can be used with both the Chebyshev-tau method and the Chebyshev-Galerkin method. In this paper, we use the quasi-inverse technique with the Chebyshev-Galerkin method to solve multi-dimensional problems for two reasons: (1) the Galerkin basis functions are much more convenient for applying the matrix diagonalization method, because the Chebyshev-tau method needs to impose boundary conditions on the equation, which complicates the entire calculation in multi-dimensional problems and sometimes makes the equations non-separable, and (2) the Chebyshev-Galerkin method obtains numerical solutions of PDEs faster than the Chebyshev-tau method. Due to these advantages, our quasi-inverse matrix diagonalization method is built on the Chebyshev-Galerkin method. In this section, we introduce the quasi-inverse matrix diagonalization method by applying it to 2-D and 3-D Poisson equations.

3.1. 2-D Poisson Equation

Solving PDEs (including 2-D and 3-D Poisson equations) using the quasi-inverse matrix diagonalization method is divided into two steps: (1) derive sparse systems of equations using the quasi-inverse technique, and (2) solve the derived equations efficiently using the matrix diagonalization method.

3.1.1. Quasi-Inverse Technique

The 2-D Poisson equation with Dirichlet boundary conditions can be expressed as follows:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x, y), \quad -1 \le x \le 1,\ -1 \le y \le 1, \qquad u(\pm 1, y) = u(x, \pm 1) = 0. \tag{25}$$
Definition 6.
When the 2-D Poisson equation is solved spectrally with Chebyshev polynomials, u(x, y) and f(x, y) are respectively discretized in space as
$$u(x, y) = \sum_{l=0}^{L} \sum_{m=0}^{M} \hat{u}_{lm}\,T_l(x)\,T_m(y) \tag{26}$$
and
$$f(x, y) = \sum_{l=0}^{L} \sum_{m=0}^{M} \hat{f}_{lm}\,T_l(x)\,T_m(y), \tag{27}$$
where L and M are the highest Chebyshev modes in x and y, respectively, and $\hat{u}_{lm}$ and $\hat{f}_{lm}$ are the $(l, m)$ Chebyshev coefficients of u(x, y) and f(x, y), respectively.
Multi-dimensional PDEs can be solved with combinations of 1-D differentiation matrices using the Kronecker product [19]. Here, the Kronecker product of matrices A and B is defined as
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix},$$
where $a_{ij}$ is the $(i, j)$ element of matrix A. Because the second derivatives of u with respect to x and y in 2-D are respectively equal to $D_x^2 \otimes I_y$ and $I_x \otimes D_y^2$, where $D_x^2$ and $D_y^2$ are the 1-D Laplacian matrices differentiated in x and y, respectively, Equation (25) can be rewritten in matrix form as
$$(D_x^2 \otimes I_y + I_x \otimes D_y^2)\,\mathbf{u} = \mathbf{f}. \tag{28}$$
Here, $\mathbf{u}$ and $\mathbf{f}$ are the $(L+1)(M+1) \times 1$ arrays whose i-th row elements $u_i$ and $f_i$ equal $\hat{u}_{lm}$ and $\hat{f}_{lm}$, respectively, with $i = l(M+1) + m$ for $l = 0, 1, \ldots, L$ and $m = 0, 1, \ldots, M$. To exploit the advantage of the quasi-inverse technique, we multiply Equation (28) on the left by $B_x \otimes B_y$. Then, based on the property of the Kronecker product that $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, we can obtain
$$(J_x \otimes B_y + B_x \otimes J_y)\,\mathbf{u} = (B_x \otimes B_y)\,\mathbf{f}. \tag{29}$$
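These Kronecker-product constructions map directly onto `numpy.kron`. The following sketch (ours) illustrates the 2-D Laplacian pattern of Equation (28) with small stand-in matrices and verifies the mixed-product property used to obtain Equation (29):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, Cc, D = (rng.standard_normal((3, 3)) for _ in range(4))

# 2-D Laplacian pattern of Equation (28): kron(D2x, Iy) applies an operator
# in x while acting as the identity in y (A and B stand in for D2x and D2y).
I = np.eye(3)
L2d = np.kron(A, I) + np.kron(I, B)

# Mixed-product property used to derive Equation (29).
assert np.allclose(np.kron(A, B) @ np.kron(Cc, D),
                   np.kron(A @ Cc, B @ D))
```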
Definition 7.
In the Chebyshev-Galerkin method, u(x, y) is approximated by the coefficients $\hat{v}_{lm}$ multiplied by the Galerkin basis functions in x and y, defined by
$$u(x, y) = \sum_{l=0}^{L-2} \sum_{m=0}^{M-2} \hat{v}_{lm}\,\phi_l(x)\,\phi_m(y). \tag{30}$$
Remark 3.
If we define $\mathbf{v}$ as the $(L-1)(M-1) \times 1$ array whose j-th row element $v_j$ equals $\hat{v}_{lm}$, with $j = l(M-1) + m$ for $l = 0, 1, \ldots, L-2$ and $m = 0, 1, \ldots, M-2$, equating Equations (30) and (26) allows us to transform from Chebyshev space to Galerkin space via
$$\mathbf{u} = (S_x \otimes S_y)\,\mathbf{v}. \tag{31}$$
Here, the sizes of the matrices $S_x$ and $S_y$ are $(L+1) \times (L-1)$ and $(M+1) \times (M-1)$, respectively.
Substituting Equation (31) into Equation (29) gives
$$(J_x \otimes B_y + B_x \otimes J_y)(S_x \otimes S_y)\,\mathbf{v} = (B_x \otimes B_y)\,\mathbf{f}. \tag{32}$$
Then, by defining $A_\chi = J_\chi S_\chi$ and $C_\chi = B_\chi S_\chi$ for $\chi = x$ and y, it is possible to obtain
$$(A_x \otimes C_y + C_x \otimes A_y)\,\mathbf{v} = (B_x \otimes B_y)\,\mathbf{f} \tag{33}$$
from Equation (32). The top two rows of all matrices in Equation (33) are zero because these rows are discarded for boundary condition enforcement. As studied in the 1-D Poisson equation, the discarded modes should be omitted from the equation when a PDE is solved with the Chebyshev-Galerkin method. Using the tilde notation described in Section 2.3, we consider only the non-discarded modes of Equation (33), which can be written as
$$(\tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y)\,\mathbf{v} = (\tilde{B}_x \otimes \tilde{B}_y)\,\mathbf{f}. \tag{34}$$
The array $\mathbf{v}$ could be computed by inverting the sparse matrix $(\tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y)$ in Equation (34), but we will further use the matrix diagonalization method to obtain the solutions even more efficiently.

3.1.2. Matrix Diagonalization Method

To separate Equation (34) using the eigenvalue and eigenvector decomposition technique, we first reshape the $(L-1)(M-1) \times 1$ array $\mathbf{v}$ into an $(L-1) \times (M-1)$ matrix, defined as V. Similarly, we reshape the $(L+1)(M+1) \times 1$ array $\mathbf{f}$ into an $(L+1) \times (M+1)$ matrix, defined as F. Then, we can rewrite Equation (34) as
$$\tilde{A}_x V \tilde{C}_y^T + \tilde{C}_x V \tilde{A}_y^T = \tilde{B}_x F \tilde{B}_y^T. \tag{35}$$
Multiplying Equation (35) on the right by $\tilde{C}_y^{-T}$ gives
$$\tilde{A}_x V + \tilde{C}_x V \tilde{A}_y^T \tilde{C}_y^{-T} = \tilde{B}_x F \tilde{B}_y^T \tilde{C}_y^{-T}. \tag{36}$$
We want to emphasize that an n-by-n matrix is diagonalizable if and only if it has n linearly independent eigenvectors [20]. From this argument, the matrix $\tilde{A}_y^T \tilde{C}_y^{-T}$ can be shown to be diagonalizable because it has a full set of linearly independent eigenvectors. As a result, using eigenvalue decomposition, the matrix $\tilde{A}_y^T \tilde{C}_y^{-T}$ can be decomposed as
$$\tilde{A}_y^T \tilde{C}_y^{-T} = P Q P^{-1}, \tag{37}$$
where P and Q are the eigenvector and diagonal eigenvalue matrices of $\tilde{A}_y^T \tilde{C}_y^{-T}$, respectively.
After substituting Equation (37) into Equation (36), multiplying on the right by P allows us to obtain
$$\tilde{A}_x V' + \tilde{C}_x V' Q = H, \tag{38}$$
where $V' = V P$ and $H = \tilde{B}_x F \tilde{B}_y^T \tilde{C}_y^{-T} P$. The matrix $V'$ in Equation (38) can be solved efficiently column by column because Q is a diagonal matrix. If we set $V' = [\mathbf{v}'_0, \mathbf{v}'_1, \ldots, \mathbf{v}'_{M-2}]$ and $H = [\mathbf{h}_0, \mathbf{h}_1, \ldots, \mathbf{h}_{M-2}]$ and define the j-th diagonal element of Q as $q_j$, the j-th column of Equation (38) can be written as
$$(\tilde{A}_x + q_j \tilde{C}_x)\,\mathbf{v}'_j = \mathbf{h}_j \tag{39}$$
for $j = 0, 1, \ldots, M-2$. The matrices $\tilde{A}_x$ and $\tilde{C}_x$ are bi-diagonal and quad-diagonal matrices, respectively, whose non-zero elements are presented in Figure 1. Due to this sparsity, Equation (39) can be solved for $\mathbf{v}'_j$ efficiently with $O(L)$ operations for each j. Once all columns of $V'$ are obtained, the matrix V can be computed from
$$V = V' P^{-1}. \tag{40}$$
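Putting Sections 3.1.1 and 3.1.2 together, the following NumPy sketch (an illustrative translation of the steps above; the paper's implementation is MATLAB, and the test function $u = \sin(2\pi x)\sin(2\pi y)$ is our choice) solves the 2-D Poisson equation with the quasi-inverse matrix diagonalization method:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def building_blocks(M):
    """1-D pieces from Section 2: quasi-inverse B, quasi-identity J, transform S."""
    c = np.ones(M + 1); c[0] = 2.0
    beta = np.zeros(M + 3); beta[2:M - 1] = 1.0
    B = np.zeros((M + 1, M + 1))
    for i in range(2, M + 1):
        B[i, i - 2] = c[i - 2] / (4 * i * (i - 1))
        B[i, i] = -beta[i] / (2 * (i**2 - 1))
        if i + 2 <= M:
            B[i, i + 2] = beta[i + 2] / (4 * i * (i + 1))
    S = np.zeros((M + 1, M - 1))
    for j in range(M - 1):
        S[j, j], S[j + 2, j] = -1.0, 1.0
    J = np.eye(M + 1); J[0, 0] = J[1, 1] = 0.0
    return B, J, S

M = 32
B, J, S = building_blocks(M)
At, Ct, Bt = (J @ S)[2:], (B @ S)[2:], B[2:]   # A_tilde, C_tilde, B_tilde

# Test problem (our choice): u = sin(2 pi x) sin(2 pi y),
# so f = -8 pi^2 sin(2 pi x) sin(2 pi y).
xs = np.cos(np.pi * np.arange(M + 1) / M)
vals = -8 * np.pi**2 * np.outer(np.sin(2 * np.pi * xs), np.sin(2 * np.pi * xs))
Fhat = C.chebfit(xs, C.chebfit(xs, vals, M).T, M).T   # 2-D Chebyshev coefficients

# Equations (35)-(40): diagonalize in y, then solve sparse 1-D systems in x.
Cinv = np.linalg.inv(Ct.T)                    # C_tilde_y^{-T}
q, P = np.linalg.eig(At.T @ Cinv)             # Equation (37)
H = (Bt @ Fhat @ Bt.T) @ Cinv @ P
Vp = np.zeros_like(H, dtype=complex)
for j in range(M - 1):                        # Equation (39), column by column
    Vp[:, j] = np.linalg.solve(At + q[j] * Ct, H[:, j])
V = np.real(Vp @ np.linalg.inv(P))            # Equation (40)

Uhat = S @ V @ S.T                            # back to Chebyshev space
x = np.linspace(-1, 1, 41)
exact = np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * x))
assert np.max(np.abs(C.chebgrid2d(x, x, Uhat) - exact)) < 1e-8
```

Complex arithmetic is used because the eigen-decomposition of the real non-symmetric matrix may return complex pairs; the imaginary parts cancel when V is reassembled.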

3.2. 3-D Poisson Equation

By following the same steps used in the 2-D problem, we can apply the quasi-inverse matrix diagonalization method to 3-D PDEs. Here, we explain how to use the quasi-inverse matrix diagonalization method for 3-D problems, particularly by applying it to solve a 3-D Poisson equation.

3.2.1. Quasi-Inverse Technique

We want to solve the 3-D Poisson equation with Dirichlet boundary conditions:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = f(x, y, z), \quad -1 \le x, y, z \le 1, \qquad u(\pm 1, y, z) = u(x, \pm 1, z) = u(x, y, \pm 1) = 0. \tag{41}$$
Definition 8.
In employing Chebyshev polynomials to solve PDEs, u(x, y, z) and f(x, y, z) are represented by sums of Chebyshev coefficients multiplied by Chebyshev polynomials, defined as
$$u(x, y, z) = \sum_{l=0}^{L} \sum_{m=0}^{M} \sum_{n=0}^{N} \hat{u}_{lmn}\,T_l(x)\,T_m(y)\,T_n(z) \tag{42}$$
and
$$f(x, y, z) = \sum_{l=0}^{L} \sum_{m=0}^{M} \sum_{n=0}^{N} \hat{f}_{lmn}\,T_l(x)\,T_m(y)\,T_n(z), \tag{43}$$
where L, M, and N are the highest Chebyshev modes in x, y, and z, respectively, and $\hat{u}_{lmn}$ and $\hat{f}_{lmn}$ are the $(l, m, n)$-th Chebyshev coefficients of u(x, y, z) and f(x, y, z), respectively.
In matrix form, Equation (41) can be rewritten using the Kronecker product as
$$(D_x^2 \otimes I_y \otimes I_z + I_x \otimes D_y^2 \otimes I_z + I_x \otimes I_y \otimes D_z^2)\,\mathbf{u} = \mathbf{f}, \tag{44}$$
where $\mathbf{u}$ and $\mathbf{f}$ are the $(L+1)(M+1)(N+1) \times 1$ arrays whose j-th row components are respectively equal to $\hat{u}_{lmn}$ and $\hat{f}_{lmn}$, with $j = (M+1)(N+1)l + (N+1)m + n$ for $l = 0, 1, \ldots, L$, $m = 0, 1, \ldots, M$, and $n = 0, 1, \ldots, N$. Multiplying Equation (44) on the left by $B_x \otimes B_y \otimes B_z$ gives
$$(J_x \otimes B_y \otimes B_z + B_x \otimes J_y \otimes B_z + B_x \otimes B_y \otimes J_z)\,\mathbf{u} = (B_x \otimes B_y \otimes B_z)\,\mathbf{f}. \tag{45}$$
Definition 9.
When using the Chebyshev-Galerkin method, we approximate u(x, y, z) with the Galerkin basis functions and their coefficients as
$$u(x, y, z) = \sum_{l=0}^{L-2} \sum_{m=0}^{M-2} \sum_{n=0}^{N-2} \hat{v}_{lmn}\,\phi_l(x)\,\phi_m(y)\,\phi_n(z), \tag{46}$$
where $\hat{v}_{lmn}$ is the $(l, m, n)$ mode coefficient of the Galerkin basis functions.
Remark 4.
By defining $\mathbf{v}$ as the $(L-1)(M-1)(N-1) \times 1$ array whose j-th row component is $\hat{v}_{lmn}$, with $j = (M-1)(N-1)l + (N-1)m + n$ for $l = 0, 1, \ldots, L-2$, $m = 0, 1, \ldots, M-2$, and $n = 0, 1, \ldots, N-2$, the arrays $\mathbf{u}$ and $\mathbf{v}$ are associated by
$$\mathbf{u} = (S_x \otimes S_y \otimes S_z)\,\mathbf{v}. \tag{47}$$
Substituting Equation (47) into Equation (45) and distributing the transformation matrix $S_\chi$ in each direction produce
$$\left[ (J_x S_x) \otimes (B_y S_y) \otimes (B_z S_z) + (B_x S_x) \otimes (J_y S_y) \otimes (B_z S_z) + (B_x S_x) \otimes (B_y S_y) \otimes (J_z S_z) \right] \mathbf{v} = (B_x \otimes B_y \otimes B_z)\,\mathbf{f}. \tag{48}$$
As in the solution of the 2-D Poisson equation, we use the notations $A_\chi = J_\chi S_\chi$ and $C_\chi = B_\chi S_\chi$ for $\chi = x$, y, and z. Then, Equation (48) can be rewritten as
$$(A_x \otimes C_y \otimes C_z + C_x \otimes A_y \otimes C_z + C_x \otimes C_y \otimes A_z)\,\mathbf{v} = (B_x \otimes B_y \otimes B_z)\,\mathbf{f}. \tag{49}$$
In each matrix in Equation (49), all elements in the top two rows are zeros because those rows are discarded to impose boundary conditions. Erasing those rows and using the tilde symbol to represent matrices with the top two rows deleted give
$$(\tilde{A}_x \otimes \tilde{C}_y \otimes \tilde{C}_z + \tilde{C}_x \otimes \tilde{A}_y \otimes \tilde{C}_z + \tilde{C}_x \otimes \tilde{C}_y \otimes \tilde{A}_z)\,\mathbf{v} = (\tilde{B}_x \otimes \tilde{B}_y \otimes \tilde{B}_z)\,\mathbf{f}. \tag{50}$$
For simplicity, define $\tilde{E}_{xy} = \tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y$, $\tilde{C}_{xy} = \tilde{C}_x \otimes \tilde{C}_y$, and $\tilde{B}_{xy} = \tilde{B}_x \otimes \tilde{B}_y$; then Equation (50) can be simplified as
$$(\tilde{C}_{xy} \otimes \tilde{A}_z + \tilde{E}_{xy} \otimes \tilde{C}_z)\,\mathbf{v} = (\tilde{B}_{xy} \otimes \tilde{B}_z)\,\mathbf{f}. \tag{51}$$

3.2.2. Matrix Diagonalization Method

Decomposition of Equation (51) can be performed by using the same matrix diagonalization steps used in the 2-D Poisson equation. However, here we want to briefly describe the procedure again to clearly show how the equation is diagonalized and solved efficiently in 3-D problems.
We reshape the arrays $\mathbf{v}$ and $\mathbf{f}$ into an $(L-1)(M-1) \times (N-1)$ matrix and an $(L+1)(M+1) \times (N+1)$ matrix and define them as V and F, respectively. With these matrices, we can rewrite Equation (51) as
$$\tilde{C}_{xy} V \tilde{A}_z^T + \tilde{E}_{xy} V \tilde{C}_z^T = \tilde{B}_{xy} F \tilde{B}_z^T. \tag{52}$$
Let us multiply Equation (52) on the right by $\tilde{A}_z^{-T}$,
$$\tilde{C}_{xy} V + \tilde{E}_{xy} V \tilde{C}_z^T \tilde{A}_z^{-T} = \tilde{B}_{xy} F \tilde{B}_z^T \tilde{A}_z^{-T}, \tag{53}$$
and decompose $\tilde{C}_z^T \tilde{A}_z^{-T}$ as
$$\tilde{C}_z^T \tilde{A}_z^{-T} = P Q P^{-1}, \tag{54}$$
where P and Q are the eigenvector matrix and diagonal eigenvalue matrix of $\tilde{C}_z^T \tilde{A}_z^{-T}$, respectively. Then, after substituting Equation (54) into Equation (53) and multiplying on the right by P, if we define $W = V P$ and $G = \tilde{B}_{xy} F \tilde{B}_z^T \tilde{A}_z^{-T} P$, it is possible to obtain
$$(\tilde{C}_{xy} + q_k \tilde{E}_{xy})\,\mathbf{w}_k = \mathbf{g}_k \tag{55}$$
for $k = 0, 1, \ldots, N-2$, where $q_k$ is the k-th diagonal element of Q, and W and G are set as $W = [\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_{N-2}]$ and $G = [\mathbf{g}_0, \mathbf{g}_1, \ldots, \mathbf{g}_{N-2}]$, respectively.
Equation (55) is decoupled in the z direction but still coupled in the x and y directions. To decouple it completely in all directions, based on the definitions of $\tilde{C}_{xy}$ and $\tilde{E}_{xy}$, Equation (55) can be expanded as
$$\left[ (\tilde{C}_x + q_k \tilde{A}_x) \otimes \tilde{C}_y + q_k \tilde{C}_x \otimes \tilde{A}_y \right] \mathbf{w}_k = \mathbf{g}_k. \tag{56}$$
Again, let us reshape the $(L-1)(M-1) \times 1$ arrays $\mathbf{w}_k$ and $\mathbf{g}_k$ into $(L-1) \times (M-1)$ matrices and refer to those matrices as $Z^{(k)}$ and $H^{(k)}$. Note that the index k in parentheses is used to clearly show that those matrices are functions of k, so the matrices' entries change when k changes. Defining $K_x^{(k)} = \tilde{C}_x + q_k \tilde{A}_x$ allows Equation (56) to be rewritten as
$$K_x^{(k)} Z^{(k)} \tilde{C}_y^T + q_k \tilde{C}_x Z^{(k)} \tilde{A}_y^T = H^{(k)}. \tag{57}$$
At each k, Equation (57) is the 2-D coupled equation. This is the same type of equation as Equation (35), and therefore the Z ( k ) can be efficiently solved by applying the matrix diagonalization method once again to separate coupled modes in x and y directions. To do this, after multiplying on the right by C ˜ y T to Equation (57), decompose A ˜ y T C y T = R Λ R 1 where R and Λ are the eigenvector matrix and diagonal eigenvalue matrix of A ˜ y T C y T , respectively. After that, if we multiply on the right by R and set Y ( k ) = Z ( k ) R and S ( k ) = H ( k ) C y T R , then we can get
$$(K_x^{(k)} + q_k \lambda_j \tilde{C}_x)\, y_j^{(k)} = s_j^{(k)}. \qquad (58)$$
Here, $\lambda_j$ is the j-th diagonal element of $\Lambda$, and $Y^{(k)}$ and $S^{(k)}$ are set to $Y^{(k)} = [y_0^{(k)}, y_1^{(k)}, \ldots, y_{M-2}^{(k)}]$ and $S^{(k)} = [s_0^{(k)}, s_1^{(k)}, \ldots, s_{M-2}^{(k)}]$. For each k and j, Equation (58) can be solved for $y_j^{(k)}$ in $O(L)$ operations because its left-hand side matrix is banded.
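To make the diagonalization step concrete, the following NumPy sketch solves an equation of the form of Equation (57) by exactly the eigendecomposition described above. The paper's codes are in MATLAB; here the small dense matrices are hypothetical stand-ins (the method does not depend on their sparsity):

```python
import numpy as np

# Hypothetical small dense stand-ins for the operators in Eq. (57).
rng = np.random.default_rng(0)
n = 6
Kx = rng.standard_normal((n, n)) + 5 * np.eye(n)  # plays the role of K_x^(k)
Cx = rng.standard_normal((n, n)) + 5 * np.eye(n)
Cy = rng.standard_normal((n, n)) + 5 * np.eye(n)
Ay = rng.standard_normal((n, n)) + 5 * np.eye(n)
H = rng.standard_normal((n, n))
qk = 0.7

# Multiply on the right by Cy^{-T}, decompose Ay^T Cy^{-T} = R Lam R^{-1},
# then solve one small system per eigenvalue (column by column).
lam, R = np.linalg.eig(Ay.T @ np.linalg.inv(Cy.T))
S = H @ np.linalg.inv(Cy.T) @ R
Y = np.empty((n, n), dtype=complex)
for j in range(n):
    Y[:, j] = np.linalg.solve(Kx + qk * lam[j] * Cx, S[:, j])
Z = (Y @ np.linalg.inv(R)).real  # Z = Y R^{-1}; imaginary parts cancel

# Residual of  Kx Z Cy^T + qk Cx Z Ay^T = H  (should be near machine zero)
print(np.max(np.abs(Kx @ Z @ Cy.T + qk * Cx @ Z @ Ay.T - H)))
```

In the actual method the per-eigenvalue systems are banded, which is where the $O(L)$ cost per solve comes from; the dense `np.linalg.solve` here is only for illustration.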

4. Numerical Examples

In this section, our quasi-inverse matrix diagonalization method is applied to several 2-D and 3-D PDEs, and the numerical results are presented. The numbers of Chebyshev modes used to discretize the solution in each spatial direction do not need to be identical; nonetheless, for simplicity, we use the same number of Chebyshev modes N in each spatial direction in all test problems. The numerical codes are implemented in MATLAB and run on a MacBook Pro with 16 GB of RAM and a 2.3 GHz quad-core Intel Core i7 processor. For matrix diagonalization, MATLAB's built-in function "eig" is used to compute the matrix's eigenvalues and eigenvectors. For solving the linear systems, we use MATLAB's backslash operator.

4.1. Multi-Dimensional Poisson Equation

The n-dimensional Poisson equations described in Section 3 are solved using the quasi-inverse matrix diagonalization method for n = 2 and 3:
$$\Delta u(\underline{x}) = f(\underline{x}), \qquad u = 0 \ \text{on } \Gamma, \qquad (59)$$
where Γ is the boundary of the rectangular domain. As a test example, we define f ( x ̲ ) as
$$f(\underline{x}) = -n\,(4\pi^2) \prod_{i=1}^{n} \sin(2\pi x_i). \qquad (60)$$
Note that we use coordinates $x_1$ and $x_2$ in the 2-D problem and $x_1$, $x_2$, and $x_3$ in the 3-D problem. Then, the exact solution of Equation (59) becomes
$$u(\underline{x}) = \prod_{i=1}^{n} \sin(2\pi x_i). \qquad (61)$$
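As a quick consistency check (a NumPy sketch, not part of the paper's MATLAB code), one can verify numerically that the Laplacian of the exact solution equals $-n(4\pi^2)\prod_i \sin(2\pi x_i)$, here for n = 2 at an arbitrary interior point:

```python
import numpy as np

# Exact solution and forcing of the 2-D test problem (n = 2).
u = lambda x, y: np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)
f = lambda x, y: -2 * (4 * np.pi**2) * np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)

# Second-order central-difference Laplacian at a sample point.
h, x0, y0 = 1e-4, 0.37, -0.21
lap = (u(x0 + h, y0) + u(x0 - h, y0)
       + u(x0, y0 + h) + u(x0, y0 - h) - 4 * u(x0, y0)) / h**2
print(abs(lap - f(x0, y0)))  # small finite-difference truncation error
```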
This test problem is solved using the quasi-inverse technique, the matrix diagonalization method, and the quasi-inverse matrix diagonalization method. The accuracy of the numerical solutions obtained with these three methods is presented in Table 1 as a function of N, where the accuracy is measured by the relative $L_2$-norm error between the exact and numerical solutions, defined as
$$\text{Error} = \frac{\| u - u_{ext} \|_2}{\| u_{ext} \|_2}, \qquad (62)$$
where $\| \cdot \|_2$ indicates the $L_2$-norm, and $u$ and $u_{ext}$ are the numerical and exact solutions, respectively. In both the 2-D and 3-D problems, the three methods show a similar order of accuracy, which is identical up to N = 24. At N = 32, all numerical solutions reach machine accuracy ($\sim 10^{-16}$) in double precision.
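The relative error measure above is straightforward to compute; a minimal NumPy sketch (with a synthetic stand-in for the numerical solution, since the paper's solver is in MATLAB):

```python
import numpy as np

# Exact 2-D solution sampled on a hypothetical tensor-product grid.
x = np.linspace(-1.0, 1.0, 33)
X, Y = np.meshgrid(x, x, indexing="ij")
u_ext = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
u_num = u_ext + 1e-10 * np.cos(X)  # synthetic "numerical" solution

# Relative L2-norm error between numerical and exact solutions.
error = np.linalg.norm(u_num - u_ext) / np.linalg.norm(u_ext)
print(error)
```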
At high N, the errors of numerical solutions obtained from the spectral method using Chebyshev polynomials can increase because the system of equations tends to become ill-conditioned as N increases. For a spectral method to be robust, the error of its numerical solutions at high N should remain near machine accuracy and should not rise as N increases. In this respect, to further check the error and robustness of the proposed method, we solve the Poisson problem with N up to 1024 in 2-D and up to 256 in 3-D and measure the errors of the numerical solutions as a function of N. The results of this experiment are presented in Figure 2. As shown in the figure, even at high N, the errors of the proposed method remain on the order of $10^{-15}$, demonstrating its high accuracy and robustness.
Figure 3 shows the CPU times of the quasi-inverse, matrix diagonalization, and quasi-inverse matrix diagonalization methods in solving the 2-D and 3-D Poisson equations as a function of N on a log-log scale. The slope of each method's data in the figure therefore indicates the operation complexity of the corresponding method. To calculate the slopes, the data are fitted by linear regression of the form $t = \exp(m \ln N + b)$, where t is the CPU time and m and b are the regression coefficients. The results of the regression are presented in Table 2, where $r^2$ is the r-squared value of the fit.
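This fitting step can be sketched in NumPy with synthetic timing data (hypothetical numbers, chosen near $O(N^3)$ purely for illustration):

```python
import numpy as np

# Synthetic CPU times close to t ~ N^3 (hypothetical data for illustration).
N = np.array([16, 32, 64, 128, 256, 512], dtype=float)
t = 1e-6 * N**3 * (1 + 0.05 * np.sin(N))

# Linear regression in the log-log plane: ln t = m ln N + b.
m, b = np.polyfit(np.log(N), np.log(t), 1)
resid = np.log(t) - (m * np.log(N) + b)
r2 = 1 - resid.var() / np.log(t).var()  # r-squared of the fit
print(m, b, r2)
```

The recovered slope m estimates the exponent of the operation complexity, which is exactly how the slopes in Table 2 are interpreted.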
In solving the 2-D Poisson equation, we observe that the operation complexity of the matrix diagonalization method is the highest among the three methods, while the other two methods show similar operation complexity. However, the actual CPU time of the quasi-inverse matrix diagonalization method is more than 10 times shorter than that of the quasi-inverse method. These observations indicate that the reduction in operation complexity comes from the quasi-inverse technique's ability to make the systems of equations sparse, rather than from the matrix diagonalization technique. Nevertheless, the matrix diagonalization technique still helps reduce CPU time by decreasing the actual number of calculations, even though the two methods have the same order of operation complexity.
In solving the 3-D Poisson equation, as Julien and Watson showed in [13], the operation complexity of the quasi-inverse technique is approximately $O(N^5)$. For the matrix diagonalization method, the operation complexity increases approximately from $O(N^3)$ to $O(N^4)$ as the dimensionality of the Poisson equation rises from 2-D to 3-D. This is because the bottleneck of the matrix diagonalization method in solving the Poisson equation is computing $v_i$ from a dense matrix, which requires about $O(N^2)$ operations per solve, and the number of repetitions of this calculation increases from N to $N^2$. When the quasi-inverse matrix diagonalization method is used to solve the 3-D problem, the most time-consuming step is also computing $v_i$, but the left-hand side matrix $(K_x^{(k)} + q_k \lambda_j \tilde{C}_x)$ in Equation (58) is sparse with a bandwidth of four, and therefore $v_i$ can be computed in $O(N)$ operations. Because this calculation is repeated $N^2$ times in the 3-D problem, the total operation complexity of the quasi-inverse matrix diagonalization method becomes $O(N^3)$, as shown in Table 2.
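The $O(N)$ cost per solve rests on the banded structure of the left-hand side matrix. A self-contained sketch of banded Gaussian elimination without pivoting (illustrative only; the example matrix is a generic diagonally dominant pentadiagonal system, not the paper's actual operator):

```python
import numpy as np

def solve_banded_no_pivot(A, b, p):
    """Gaussian elimination confined to a band of half-bandwidth p: O(N p^2).
    No pivoting, so A is assumed safely factorizable (a sketch, not
    production code)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n - 1):                       # forward elimination
        for r in range(i + 1, min(i + p + 1, n)):
            m = A[r, i] / A[i, i]
            A[r, i:i + p + 1] -= m * A[i, i:i + p + 1]
            b[r] -= m * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (b[i] - A[i, i + 1:i + p + 1] @ x[i + 1:i + p + 1]) / A[i, i]
    return x

# Pentadiagonal test system (bandwidth comparable to K_x + q_k*lam_j*C_x).
n, p = 50, 2
A = np.zeros((n, n))
for k in range(-p, p + 1):
    A += np.diag(np.full(n - abs(k), 1.0 if k else 10.0), k)
b = np.ones(n)
x = solve_banded_no_pivot(A, b, p)
print(np.allclose(A @ x, b))
```

For a fixed bandwidth, the work per solve grows linearly with the system size, which is the source of the $O(N)$ count quoted above.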

4.2. Two-Dimensional Poisson Equation with No Analytic Solution

The next numerical example is a 2-D Poisson equation with a complicated forcing term. That is, we solve the same problem described in Section 3 in 2-D, but with the forcing term
$$f(x, y) = 20 \sin(\pi x) \sin(\pi y)\, \frac{e^{xy}}{1 + \tanh^2(x) + \tanh^2(y)}. \qquad (63)$$
Note that no analytic solution can be found for this Poisson equation. The problem is solved using the quasi-inverse matrix diagonalization method with the number of unknowns N = 16, 32, 64, 128, 256, 512, and 1024.
Figure 4 shows the values of the forcing term f(x, y) in Figure 4a and the numerical solution u(x, y) at N = 128 in Figure 4b. To check the accuracy of the quasi-inverse matrix diagonalization method on this problem, we compute the numerical errors. Because no exact solution is available, we treat the numerical solution at N = 1024 as the reference solution and compare the other numerical solutions against it. The error here therefore indicates the relative difference, in the $L_2$-norm sense, between the numerical solutions obtained at N < 1024 and the one obtained at N = 1024. We present the errors as a function of N on a log-log scale in Figure 5. As N increases, the error decreases exponentially, demonstrating the spectral accuracy of our method, as expected.

4.3. Coupled Two-Dimensional Helmholtz Equation

We apply the quasi-inverse matrix diagonalization method to 2-D coupled Helmholtz equations and compare the numerical results with those obtained from the quasi-inverse technique. We consider the following coupled equations with Dirichlet boundary conditions:
$$\begin{aligned} \Delta u_1 + k_1 u_2 &= f_1(x, y) \\ k_2 u_1 + \Delta u_2 &= f_2(x, y) \\ u_1(x, y) = 0, \quad u_2(x, y) &= 0 \ \text{on } \Gamma, \end{aligned} \qquad (64)$$
where $k_1$ and $k_2$ are constants. As a test problem, the functions $f_1(x, y)$ and $f_2(x, y)$ are taken as
$$\begin{aligned} f_1(x, y) &= -2 \sin(\pi x) \sin(\pi y) + \frac{k_1}{\pi^2} \left[ 1 + \cos(\pi x) + \cos(\pi y) + \cos(\pi x) \cos(\pi y) \right] \\ f_2(x, y) &= -\cos(\pi x) - \cos(\pi y) - 2 \cos(\pi x) \cos(\pi y) + \frac{k_2}{\pi^2} \sin(\pi x) \sin(\pi y). \end{aligned} \qquad (65)$$
Then, the exact solutions of Equation (64) are
$$u_1(x, y) = \frac{1}{\pi^2} \sin(\pi x) \sin(\pi y), \qquad u_2(x, y) = \frac{1}{\pi^2} \left( 1 + \cos(\pi x) \right) \left( 1 + \cos(\pi y) \right). \qquad (66)$$
The numerical solutions $u_1(x, y)$ and $u_2(x, y)$ are expanded in Chebyshev polynomials and in the Galerkin basis functions as
$$\begin{aligned} u_1(x, y) &= \sum_{l=0}^{L} \sum_{m=0}^{M} (\hat{u}_1)_{lm} T_l(x) T_m(y) = \sum_{l=0}^{L-2} \sum_{m=0}^{M-2} (\hat{v}_1)_{lm}\, \phi_l(x) \phi_m(y) \\ u_2(x, y) &= \sum_{l=0}^{L} \sum_{m=0}^{M} (\hat{u}_2)_{lm} T_l(x) T_m(y) = \sum_{l=0}^{L-2} \sum_{m=0}^{M-2} (\hat{v}_2)_{lm}\, \phi_l(x) \phi_m(y), \end{aligned} \qquad (67)$$
and f 1 ( x , y ) and f 2 ( x , y ) are defined using Chebyshev polynomials as
$$f_1(x, y) = \sum_{l=0}^{L} \sum_{m=0}^{M} (\hat{f}_1)_{lm} T_l(x) T_m(y), \qquad f_2(x, y) = \sum_{l=0}^{L} \sum_{m=0}^{M} (\hat{f}_2)_{lm} T_l(x) T_m(y). \qquad (68)$$
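The defining property of the Galerkin basis φ — vanishing at the Dirichlet boundary — is easy to check numerically. The sketch below assumes the standard Dirichlet Chebyshev-Galerkin choice $\phi_k = T_k - T_{k+2}$ (as in Shen [12]); if the paper's φ differs, the same check applies to that combination instead:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# phi_k(x) = T_k(x) - T_{k+2}(x): a Chebyshev combination that vanishes
# at x = +-1, so every Galerkin mode satisfies the boundary conditions.
def phi(k, x):
    coeffs = np.zeros(k + 3)
    coeffs[k], coeffs[k + 2] = 1.0, -1.0
    return cheb.chebval(x, coeffs)

print(all(abs(phi(k, s)) < 1e-12 for k in range(8) for s in (-1.0, 1.0)))
```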
For k = 1 and 2, define two $(L-1)(M-1) \times 1$ arrays $v_k$ and $f_k$ whose j-th entries are $(\hat{v}_k)_{lm}$ and $(\hat{f}_k)_{lm}$, respectively, where $j = (M-1)l + m$ for $l = 0, 1, \ldots, L-2$ and $m = 0, 1, \ldots, M-2$. This allows us to express Equation (64) in matrix form as
$$\begin{aligned} \left[ (J_x S_x) \otimes (B_y S_y) + (B_x S_x) \otimes (J_y S_y) \right] v_1 + k_1 (B_x S_x) \otimes (B_y S_y)\, v_2 &= (B_x \otimes B_y) f_1 \\ \left[ (J_x S_x) \otimes (B_y S_y) + (B_x S_x) \otimes (J_y S_y) \right] v_2 + k_2 (B_x S_x) \otimes (B_y S_y)\, v_1 &= (B_x \otimes B_y) f_2. \end{aligned} \qquad (69)$$
To simplify Equation (69), rewrite $J_\chi S_\chi$ and $B_\chi S_\chi$ as $A_\chi$ and $C_\chi$, respectively; then
$$\begin{aligned} \left[ A_x \otimes C_y + C_x \otimes A_y \right] v_1 + k_1 (C_x \otimes C_y)\, v_2 &= (B_x \otimes B_y) f_1 \\ \left[ A_x \otimes C_y + C_x \otimes A_y \right] v_2 + k_2 (C_x \otimes C_y)\, v_1 &= (B_x \otimes B_y) f_2. \end{aligned} \qquad (70)$$
Erasing the top two rows of the matrices in Equation (70), as we did in solving the multi-dimensional Poisson equations, gives
$$\begin{aligned} \left[ \tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y \right] v_1 + k_1 (\tilde{C}_x \otimes \tilde{C}_y)\, v_2 &= (\tilde{B}_x \otimes \tilde{B}_y) f_1 \\ \left[ \tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y \right] v_2 + k_2 (\tilde{C}_x \otimes \tilde{C}_y)\, v_1 &= (\tilde{B}_x \otimes \tilde{B}_y) f_2, \end{aligned} \qquad (71)$$
where the tilde is defined as before. We want to combine the two equations in Equation (71). To do this, define $v = [v_1; v_2]$, $f = [f_1; f_2]$, and $J_i$ as
$$J_i = \begin{pmatrix} \alpha_1 & \alpha_2 \\ \alpha_3 & \alpha_4 \end{pmatrix},$$
where $\alpha_j = 1$ if $i = j$ and $\alpha_j = 0$ otherwise. Then, the two equations in Equation (71) can be combined as
$$\left[ (\tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y) \otimes J_1 \right] v + k_1 \left[ (\tilde{C}_x \otimes \tilde{C}_y) \otimes J_2 \right] v + \left[ (\tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y) \otimes J_4 \right] v + k_2 \left[ (\tilde{C}_x \otimes \tilde{C}_y) \otimes J_3 \right] v = \left[ (\tilde{B}_x \otimes \tilde{B}_y) \otimes J_1 \right] f + \left[ (\tilde{B}_x \otimes \tilde{B}_y) \otimes J_4 \right] f. \qquad (72)$$
By rearranging the terms, Equation (72) can be rewritten as
$$\left[ \tilde{E}_{xy} \otimes (J_1 + J_4) \right] v + \left[ \tilde{C}_{xy} \otimes (k_1 J_2 + k_2 J_3) \right] v = \left[ \tilde{B}_{xy} \otimes (J_1 + J_4) \right] f, \qquad (73)$$
where $\tilde{E}_{xy} = \tilde{A}_x \otimes \tilde{C}_y + \tilde{C}_x \otimes \tilde{A}_y$, $\tilde{C}_{xy} = \tilde{C}_x \otimes \tilde{C}_y$, and $\tilde{B}_{xy} = \tilde{B}_x \otimes \tilde{B}_y$, as defined in Section 3.2. Because Equations (73) and (51) have the same form, Equation (73) can be solved for $v$ by following the same steps used for the 3-D PDEs described in Section 3.2.
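The selector-matrix trick can be illustrated with small hypothetical operators in NumPy. Note that this sketch places $J_i$ on the left of the Kronecker product so that $v = [v_1; v_2]$ keeps its stacked block layout; the algebra is the same either way:

```python
import numpy as np

# Two coupled systems  E v1 + k1 C v2 = f1  and  E v2 + k2 C v1 = f2
# merged into one system for the stacked vector v = [v1; v2].
# E and C are hypothetical small dense operators, not the paper's matrices.
rng = np.random.default_rng(1)
n = 5
E = rng.standard_normal((n, n)) + 6 * np.eye(n)
C = 0.5 * rng.standard_normal((n, n))
k1, k2 = 0.7, 1.1
J1, J2, J3, J4 = (np.array(m) for m in
                  ([[1, 0], [0, 0]], [[0, 1], [0, 0]],
                   [[0, 0], [1, 0]], [[0, 0], [0, 1]]))

# J1 + J4 places E on the block diagonal; k1*J2 + k2*J3 fills the
# off-diagonal coupling blocks.
Big = np.kron(J1 + J4, E) + np.kron(k1 * J2 + k2 * J3, C)
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)
v = np.linalg.solve(Big, np.concatenate([f1, f2]))
v1, v2 = v[:n], v[n:]
print(np.allclose(E @ v1 + k1 * C @ v2, f1),
      np.allclose(E @ v2 + k2 * C @ v1, f2))
```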
We tested the coupled 2-D Helmholtz equations with $k_1 = 0.7$ and $k_2 = 1.1$. For N = 32, 64, 128, 256, and 512, the $L_\infty$ errors between the numerical and exact solutions and the CPU times are computed using the quasi-inverse and quasi-inverse matrix diagonalization methods, and the results are presented in Table 3. Both methods attain the same order of high accuracy for all N. In terms of CPU time, similar to the multi-dimensional Poisson problems, the latter method is much faster than the former, especially at high N, showing the efficiency of our quasi-inverse matrix diagonalization method.

4.4. Stokes Problem

As a last test problem, a 3-D Stokes problem is solved using our quasi-inverse matrix diagonalization method. The Stokes problem we solve is
$$\begin{aligned} \mu \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right) - \frac{\partial p}{\partial x} + f_x &= 0 \\ \mu \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} + \frac{\partial^2 v}{\partial z^2} \right) - \frac{\partial p}{\partial y} + f_y &= 0 \\ \mu \left( \frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} + \frac{\partial^2 w}{\partial z^2} \right) - \frac{\partial p}{\partial z} + f_z &= 0 \\ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} &= 0 \end{aligned} \qquad (74)$$
with the boundary conditions $u = v = w = p = 0$ on $\Gamma$, where $\Gamma$ is the boundary of the rectangular domain. In this problem, we set the body force terms $f_x$, $f_y$, and $f_z$ in Equation (74) as
$$\begin{aligned} f_x &= 4\pi \left( e^{\cos(2\pi x)} - e \right) \sin(2\pi y) \sin(2\pi z) \\ f_y &= -\pi\, e^{\cos(2\pi x)} \sin(2\pi x) \sin(2\pi z)\, g(x, y) \\ f_z &= -\pi\, e^{\cos(2\pi x)} \sin(2\pi x) \sin(2\pi y)\, g(x, z), \end{aligned} \qquad (75)$$
where $g(x, y) = \left( \cos^2(2\pi x) + 3\cos(2\pi x) + 1 \right) \left( \cos(2\pi y) - 1 \right) + 3\cos(2\pi y)$ and $\mu = 1$. Then, the exact solutions of this Stokes problem are given by
$$\begin{aligned} u &= \frac{1}{2\pi} \left( e^{\cos(2\pi x)} - e \right) \sin(2\pi y) \sin(2\pi z) \\ v &= -\frac{1}{4\pi}\, e^{\cos(2\pi x)} \sin(2\pi x) \sin(2\pi z) \left( \cos(2\pi y) - 1 \right) \\ w &= -\frac{1}{4\pi}\, e^{\cos(2\pi x)} \sin(2\pi x) \sin(2\pi y) \left( \cos(2\pi z) - 1 \right) \\ p &= -e^{\cos(2\pi x)} \sin(2\pi x) \sin(2\pi y) \sin(2\pi z). \end{aligned} \qquad (76)$$
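As a sanity check (a NumPy sketch, independent of the paper's MATLAB code), the exact velocity field should satisfy the continuity equation; with the sign conventions spelled out in the code below, central differences confirm this at a sample interior point:

```python
import numpy as np

tp = 2 * np.pi
# Exact velocity components of the Stokes test problem.
u = lambda x, y, z: (np.exp(np.cos(tp * x)) - np.e) \
    * np.sin(tp * y) * np.sin(tp * z) / tp
v = lambda x, y, z: -np.exp(np.cos(tp * x)) * np.sin(tp * x) \
    * np.sin(tp * z) * (np.cos(tp * y) - 1) / (2 * tp)
w = lambda x, y, z: -np.exp(np.cos(tp * x)) * np.sin(tp * x) \
    * np.sin(tp * y) * (np.cos(tp * z) - 1) / (2 * tp)

# Central-difference divergence du/dx + dv/dy + dw/dz at a sample point.
h, (x0, y0, z0) = 1e-5, (0.3, -0.4, 0.1)
div = ((u(x0 + h, y0, z0) - u(x0 - h, y0, z0))
       + (v(x0, y0 + h, z0) - v(x0, y0 - h, z0))
       + (w(x0, y0, z0 + h) - w(x0, y0, z0 - h))) / (2 * h)
print(abs(div))  # limited only by finite-difference truncation error
```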
The accuracy of the numerical solutions of u, v, w, and p obtained using the quasi-inverse matrix diagonalization method is presented in Figure 6. The errors decay more slowly than in the previous examples, but all of them show spectral convergence, reaching machine accuracy around N = 90. Figure 7 shows the CPU time for the Stokes problem on a log-log scale as a function of N. The best linear regression of the data gives a slope of 2.884, approximately showing the expected $O(N^3)$ operation complexity for the 3-D Stokes problem.

5. Conclusions

A new, efficient way of solving multi-dimensional linear PDEs spectrally using Chebyshev polynomials, called the quasi-inverse matrix diagonalization method, is presented in this paper. In this method, we discretize numerical solutions in space using the Chebyshev-Galerkin basis functions, which automatically satisfy the boundary conditions of the PDEs. The quasi-inverse technique is then used to construct sparse differential operators for the Chebyshev-Galerkin basis functions, and the matrix diagonalization technique is applied to further accelerate computations by decoupling the modes in the spatial directions. On the test problems, the proposed method proved highly efficient compared with other current methods, and it is especially fast in computing 3-D linear PDEs. We showed that the operation complexity of the proposed method in solving 3-D linear PDEs is approximately $O(N^3)$ when tested with the numerical examples in this paper (tested up to N = 512), suggesting the possibility of handling 3-D linear PDE problems with high numbers of unknowns. Although the test problems in this paper are restricted to second-order linear PDEs, the quasi-inverse matrix diagonalization method can also be used to solve separable higher-order linear PDEs, such as fourth-order linear PDEs without cross terms (e.g., $\partial^2 u / \partial x \partial y$) and biharmonic equations with $\{u, \partial^2 u / \partial n^2\}$-type boundary conditions, where the derivative with respect to n is the normal derivative on the boundary of the domain.

Funding

This work was supported by a 2018 Korea Aerospace University faculty research grant.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Buzbee, B.L.; Golub, G.H.; Nielson, C.W. On direct methods for solving Poisson’s equations. SIAM J. Numer. Anal. 1970, 7, 627–656. [Google Scholar] [CrossRef]
  2. Bueno-Orovio, A.; Pérez-García, V.M.; Fenton, F.H. Spectral methods for partial differential equations in irregular domains: The spectral smoothed boundary method. SIAM J. Sci. Comput. 2006, 28, 886–900. [Google Scholar] [CrossRef]
  3. Mai-Duy, N.; Tanner, R.I. A spectral collocation method based on integrated Chebyshev polynomials for two-dimensional biharmonic boundary-value problems. J. Comput. Appl. Math. 2007, 201, 30–47. [Google Scholar] [CrossRef] [Green Version]
  4. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Courier Corporation: North Chelmsford, MA, USA, 2001. [Google Scholar]
  5. Peyret, R. Spectral Methods for Incompressible Viscous Flow; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  6. Saadatmandi, A.; Dehghan, M. Numerical solution of hyperbolic telegraph equation using the Chebyshev tau method. Numer. Methods Part. Differ. Equ. Int. J. 2010, 26, 239–252. [Google Scholar] [CrossRef]
  7. Ren, R.; Li, H.; Jiang, W.; Song, M. An efficient Chebyshev-tau method for solving the space fractional diffusion equations. Appl. Math. Comput. 2013, 224, 259–267. [Google Scholar] [CrossRef]
  8. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods: Evolution to Complex Geometries and Applications to Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  9. Johnson, D. Chebyshev Polynomials in the Spectral Tau Method and Applications to Eigenvalue Problems; NASA: Washington, DC, USA, 1996. [Google Scholar]
  10. Marti, P.; Calkins, M.; Julien, K. A computationally efficient spectral method for modeling core dynamics. Geochem. Geophys. Geosyst. 2016, 17, 3031–3053. [Google Scholar] [CrossRef]
  11. Liu, F.; Ye, X.; Wang, X. Efficient Chebyshev spectral method for solving linear elliptic PDEs using quasi-inverse technique. Numer. Math. Theory Methods Appl. 2011, 4, 197–215. [Google Scholar]
  12. Shen, J. Efficient spectral-Galerkin method II. Direct solvers of second-and fourth-order equations using Chebyshev polynomials. SIAM J. Sci. Comput. 1995, 16, 74–87. [Google Scholar] [CrossRef]
  13. Julien, K.; Watson, M. Efficient multi-dimensional solution of PDEs using Chebyshev spectral methods. J. Comput. Phys. 2009, 228, 1480–1503. [Google Scholar] [CrossRef]
  14. Dang-Vu, H.; Delcarte, C. An accurate solution of the Poisson equation by the Chebyshev collocation method. J. Comput. Phys. 1993, 104, 211–220. [Google Scholar] [CrossRef]
  15. Haidvogel, D.B.; Zang, T. The accurate solution of Poisson’s equation by expansion in Chebyshev polynomials. J. Comput. Phys. 1979, 30, 167–180. [Google Scholar] [CrossRef]
  16. Abdi, H. The eigen-decomposition: Eigenvalues and eigenvectors. Encycl. Meas. Stat. 2007, 2007, 304–308. [Google Scholar]
  17. Nam, S.; Patil, A.K.; Patil, S.; Chintalapalli, H.R.; Park, K.; Chai, Y. Hybrid interface of a two-dimensional cubic Hermite curve oversketch and a three-dimensional spatial oversketch for the conceptual body design of a car. Proc. Inst. Mech. Eng. D-J. Automob. Eng. 2013, 227, 1687–1697. [Google Scholar] [CrossRef]
  18. Canuto, C.G.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  19. Trefethen, L.N. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  20. Meyer, C.D. Matrix Analysis and Applied Linear Algebra; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
Figure 1. Non-zero elements of the matrices A ˜ x and C ˜ x are presented in (a) and (b), respectively. Here, both L and M are set to 64.
Figure 2. Errors of numerical solutions of 2-D (plotted in (a)) and 3-D (plotted in (b)) Poisson equations at high N. In both 2-D and 3-D problems, the errors of the proposed method (QIMD) do not rise as N increases. The proposed method shows the same order of accuracy as the quasi-inverse method and the matrix diagonalization method.
Figure 3. Comparison of CPU times among the Quasi-Inverse (QI), Matrix Diagonalization (MD), and Quasi-Inverse Matrix Diagonalization (QIMD) methods as a function of the one-dimensional number of unknowns in solving (a) a 2-D Poisson equation and (b) a 3-D Poisson equation. The results of the best linear regression associated with the lines in the figure are represented in Table 2. Here, O ( N 2 ) and O ( N 3 ) operation lines in (a), and O ( N 3 ) and O ( N 5 ) operation lines in (b) are plotted for comparison.
Figure 4. Values of the forcing term f ( x , y ) and the numerical solution u ( x , y ) of a 2-D Poisson equation with no exact solution are presented in (a) and (b), respectively. The numerical solution presented in (b) is at N = 128 .
Figure 5. Error of numerical solutions of a 2-D Poisson equation with no exact solution. The numerical solutions here are obtained from the quasi-inverse matrix diagonalization method. The values of the errors are computed by comparing the numerical solution at each N with the one at N = 1024 .
Figure 6. Numerical errors of u, v, w, and p as a function of N in solving the Stokes problem.
Figure 7. Blue markers show CPU times in solving the 3-D Stokes problem. The blue dashed line is plotted from the best linear regression of the blue markers' data, where the slope is 2.884. The black dashed line shows O ( N 3 ) for comparison.
Table 1. Errors of numerical solutions of 2-D and 3-D Poisson equations as a function of unknowns in 1-D. The numerical solutions are computed using the QI, MD, and QIMD methods, where QI, MD, and QIMD stand for Quasi-Inverse, Matrix Diagonalization, and Quasi-Inverse Matrix Diagonalization, respectively. All methods show a similar order of accuracy and reach spectral accuracy at N = 32.
| Method | N | Error (2-D) | Error (3-D) |
| --- | --- | --- | --- |
| QI method | 8 | 1.68 × 10⁻¹ | 1.87 × 10⁻² |
| | 16 | 1.75 × 10⁻⁶ | 2.83 × 10⁻⁶ |
| | 24 | 4.36 × 10⁻¹³ | 7.31 × 10⁻¹³ |
| | 32 | 6.39 × 10⁻¹⁶ | 1.58 × 10⁻¹⁵ |
| MD method | 8 | 1.68 × 10⁻¹ | 1.87 × 10⁻² |
| | 16 | 1.75 × 10⁻⁶ | 2.83 × 10⁻⁶ |
| | 24 | 4.36 × 10⁻¹³ | 7.31 × 10⁻¹³ |
| | 32 | 1.75 × 10⁻¹⁵ | 3.41 × 10⁻¹⁵ |
| QIMD method | 8 | 1.68 × 10⁻¹ | 1.87 × 10⁻² |
| | 16 | 1.75 × 10⁻⁶ | 2.83 × 10⁻⁷ |
| | 24 | 4.36 × 10⁻¹³ | 7.31 × 10⁻¹³ |
| | 32 | 1.43 × 10⁻¹⁵ | 3.62 × 10⁻¹⁵ |
Table 2. CPU time (t) and the number of unknowns in 1-D (N) are associated using the best linear regression in the log-log scale by t = exp ( m ln N + b ). Values of m, b, and r² of the Quasi-Inverse (QI), Matrix Diagonalization (MD), and Quasi-Inverse Matrix Diagonalization (QIMD) methods in solving 2-D and 3-D Poisson equations are presented, where r² is the r-squared value of the best linear regression.
| Method | 2-D: m | 2-D: b | 2-D: r² | 3-D: m | 3-D: b | 3-D: r² |
| --- | --- | --- | --- | --- | --- | --- |
| QI method | 2.498 | −14.069 | 0.8857 | 5.055 | −16.043 | 0.9899 |
| MD method | 2.890 | −17.936 | 0.9833 | 3.934 | −18.511 | 0.9922 |
| QIMD method | 2.431 | −16.655 | 0.9669 | 2.973 | −15.033 | 0.9951 |
Table 3. Results of solving the coupled Helmholtz equations using the Quasi-Inverse (QI) and Quasi-Inverse Matrix Diagonalization (QIMD) methods with respect to accuracy and CPU times. Although both methods show a similar order of accuracy as a function of N, the CPU times of the QIMD method are much less than those of the QI method, showing the high efficiency of the QIMD method.
| N | QI: Error (u₁) | QI: Error (u₂) | QI: CPU time (s) | QIMD: Error (u₁) | QIMD: Error (u₂) | QIMD: CPU time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| 32 | 5.94 × 10⁻¹⁶ | 3.10 × 10⁻¹⁶ | 2.94 × 10⁻² | 9.19 × 10⁻¹⁶ | 4.94 × 10⁻¹⁶ | 3.98 × 10⁻² |
| 64 | 5.78 × 10⁻¹⁶ | 4.48 × 10⁻¹⁶ | 1.71 × 10⁻¹ | 1.25 × 10⁻¹⁵ | 7.85 × 10⁻¹⁶ | 9.40 × 10⁻² |
| 128 | 5.78 × 10⁻¹⁶ | 4.48 × 10⁻¹⁶ | 1.29 × 10⁰ | 1.37 × 10⁻¹⁵ | 7.06 × 10⁻¹⁶ | 2.14 × 10⁻¹ |
| 256 | 4.66 × 10⁻¹⁶ | 4.58 × 10⁻¹⁶ | 6.47 × 10⁰ | 1.12 × 10⁻¹⁵ | 6.45 × 10⁻¹⁶ | 5.89 × 10⁻¹ |
| 512 | 5.58 × 10⁻¹⁶ | 4.48 × 10⁻¹⁶ | 3.26 × 10¹ | 1.49 × 10⁻¹⁵ | 8.75 × 10⁻¹⁶ | 1.75 × 10⁰ |

Oh, S. An Efficient Spectral Method to Solve Multi-Dimensional Linear Partial Different Equations Using Chebyshev Polynomials. Mathematics 2019, 7, 90. https://doi.org/10.3390/math7010090
