An Efficient Spectral Method to Solve Multi-Dimensional Linear Partial Differential Equations Using Chebyshev Polynomials

We present a new method, the quasi-inverse matrix diagonalization method, to efficiently solve multi-dimensional linear Partial Differential Equations (PDEs). In the proposed method, the Chebyshev-Galerkin method is used to solve multi-dimensional PDEs spectrally. Efficient calculation is achieved by converting the dense systems of equations into sparse ones with the quasi-inverse technique and by separating coupled spectral modes with the matrix diagonalization method. When we applied the proposed method to 2-D and 3-D Poisson equations, coupled Helmholtz equations in 2-D, and a Stokes problem in 3-D, it showed higher efficiency in all cases than current methods such as the quasi-inverse method and the matrix diagonalization method. Due to this efficiency, we believe the proposed method can be applied in the many fields where multi-dimensional PDEs must be solved.


Introduction
The spectral method has been used in many fields to solve linear partial differential equations (PDEs) such as the Poisson equation, the Helmholtz equation, and the diffusion equation [1,2]. The advantage of the spectral method over other numerical methods in solving linear PDEs is its high accuracy; when the solutions of PDEs are smooth enough, the errors of the numerical solutions decrease exponentially as the number of discretization nodes increases [3].
Based on the types of boundary conditions, different spectral basis functions can be used to discretize in physical space. Various boundary conditions arise in mathematical and engineering linear PDEs, but they can be classified as periodic and non-periodic. For periodic boundary conditions, the domain is usually discretized using the Fourier series, while Legendre polynomials and Chebyshev polynomials are most frequently used as basis functions for non-periodic boundary conditions. To investigate an efficient way to solve PDEs spectrally, we restrict the scope of this paper to linear PDEs with non-periodic boundary conditions. Of the spectral basis functions that can be used with non-periodic boundary conditions, we use Chebyshev polynomials to discretize the equations in space, exploiting their minimum-error properties to achieve spectral accuracy across the whole domain [4].
There are several methods for solving linear PDEs using Chebyshev polynomials. In the collocation method, linear PDEs are solved directly in physical space, so implementation is relatively easy compared to other methods. However, the differentiation matrix derived from the collocation method is full, making computation slow, and ill-conditioning of the differentiation matrix can produce numerical solutions with high errors, especially when a high number of collocation points is used [5].
Another method for solving linear PDEs spectrally with Chebyshev polynomials is the Chebyshev-tau method, which solves linear PDEs in spectral space. In the Chebyshev-tau method, the boundary conditions of the PDEs are directly enforced on the system of equations [5-7]. This enforcement of boundary conditions produces tau lines, which slow numerical calculation because they increase interactions between Chebyshev modes [8,9]. As the dimension of the PDE rises, the calculation time caused by tau lines increases rapidly, resulting in a significant lag in solving the PDEs [10,11].
Unlike the Chebyshev-tau method, the Chebyshev-Galerkin method avoids direct enforcement of boundary conditions on the system of equations by discretizing in space with carefully chosen basis functions, called Galerkin basis functions, that automatically obey the given boundary conditions [8]. As a result, when the Chebyshev-Galerkin method is used, tau lines are not created in the system of equations, and their interaction in numerical calculations is not a concern. Because of this advantage, the Chebyshev-Galerkin method is popular for the spectral calculation of PDEs. For example, Shen [12] studied proper choices of Galerkin basis functions satisfying standard boundary conditions such as Dirichlet and Neumann boundary conditions and solved linear PDEs with them, and Julien and Watson [13] and Liu et al. [11] used Galerkin basis functions to solve multi-dimensional linear PDEs with high efficiency.
In 1-D problems, it is straightforward to write linear PDEs in matrix form in spectral space using a differentiation matrix (see Section 2 for more details). However, because the differentiation matrix is upper triangular and dense, solving linear PDEs by inverting it (or by iterative methods) is computationally expensive. Because the computational cost increases rapidly with the dimension of the problem, efficiently computing solutions of linear PDEs becomes even more important in multi-dimensional problems. In 1-D, computational time can easily be saved by converting the dense equations to sparse systems using the three-term recurrence relation [14]. However, due to the coupling of Chebyshev modes in multi-dimensional problems, finding an efficient way to solve linear PDEs is not as straightforward as in 1-D.
To facilitate computation in multi-dimensional linear PDE problems, Julien and Watson [13] presented a method called the quasi-inverse technique. Based on the three-term recurrence relation in 1-D, they constructed a quasi-inverse matrix and applied it to multi-dimensional linear PDE problems. With the quasi-inverse technique, dense differential operators in multi-dimensional linear PDEs are reduced to sparse operators, making it possible to solve multi-dimensional linear PDEs efficiently from the sparse systems with approximately O(N^(2k-1)) operations for k-dimensional problems. Haidvogel and Zang [15] and Shen [12] presented another way to efficiently solve multi-dimensional linear PDEs, called the matrix diagonalization method. The key to the matrix diagonalization method is decoupling the coupled Chebyshev modes using eigenvalue and eigenvector decomposition [16] and lowering the dimensionality of the problem, converting a high-dimensional problem into 1-D linear PDE problems solved multiple times as a function of the eigenvalues of the system [17].
In this paper, we propose a new method for solving multi-dimensional linear PDEs using Chebyshev polynomials, called the quasi-inverse matrix diagonalization method. Our method is created by combining the quasi-inverse technique with the matrix diagonalization method. In our method, the systems of equations are first derived using the quasi-inverse technique, and then the matrix diagonalization method is used to solve the derived systems efficiently using eigenvalue and eigenvector decomposition. Because our method inherits the merits of both the quasi-inverse technique and the matrix diagonalization method, it computes numerical solutions of multi-dimensional linear PDE problems faster than either of these two methods. We describe the quasi-inverse matrix diagonalization method in this paper by applying it to four test examples.
The rest of the paper is organized as follows. In Section 2, we review the Chebyshev-tau method, the Chebyshev-Galerkin method, and the quasi-inverse technique with a 1-D Poisson equation. The quasi-inverse matrix diagonalization method is then explained in Section 3 by applying it to solve 2-D and 3-D Poisson equations based on the Chebyshev-Galerkin method. In Section 4, we apply the quasi-inverse matrix diagonalization method to several multi-dimensional linear PDEs and compare the results to those from the quasi-inverse method and the matrix diagonalization method in terms of the accuracy of the numerical solutions and the CPU time needed to solve the problems. Our conclusions are presented in Section 5.

1-D Poisson Equation
In this section, we will explain the Chebyshev-tau method, the Chebyshev-Galerkin method, and the quasi-inverse technique by applying them to solve a 1-D Poisson equation.

Chebyshev-Tau Method
We start with the 1-D Poisson equation with Dirichlet boundary conditions:

∂²u/∂x² = f(x), x ∈ [−1, 1], u(−1) = u(1) = 0. (1)

Definition 1. The functions u(x) and f(x) can be approximated in space with the M-th highest mode, respectively defined as

u(x) = Σ_{m=0}^{M} û_m T_m(x) (2)

and

f(x) = Σ_{m=0}^{M} f̂_m T_m(x), (3)

where T_m(x) is the m-th Chebyshev polynomial in x, and û_m and f̂_m are the m-th Chebyshev coefficients of u(x) and f(x), respectively.
Definition 2. The second derivative of u with respect to x (denoted ∂²u/∂x²) can be approximated by a sum of the coefficients of the second derivatives û^(2)_m multiplied by Chebyshev polynomials:

∂²u/∂x² ≈ Σ_{m=0}^{M} û^(2)_m T_m(x). (4)

Definition 3. The coefficients of the second derivatives can be written as

û^(2)_m = (1/c_m) Σ_{p=m+2, p+m even}^{M} p (p² − m²) û_p, (5)

where c_0 = 2 and c_m = 1 for m = 1, 2, …, M [18].
From Equations (1) and (3)–(5), we can obtain

(1/c_m) Σ_{p=m+2, p+m even}^{M} p (p² − m²) û_p = f̂_m, m = 0, 1, …, M − 2. (6)

In matrix form, Equation (6) can be rewritten as

D²_x u = f, (7)

where D²_x is usually called the 1-D Laplacian matrix discretized in x, whose components are defined by the left-hand side of Equation (6), and u and f are the (M + 1) × 1 arrays defined as u = [û_0, û_1, …, û_M]^T and f = [f̂_0, f̂_1, …, f̂_M]^T, respectively. Note that D²_x is a square matrix of order M + 1, but its rank is M − 1 because all elements in the two bottom-most rows are zero. To make Equation (7) solvable, two additional equations provided by the boundary conditions of the Poisson equation,

Σ_{m=0}^{M} û_m = 0 and Σ_{m=0}^{M} (−1)^m û_m = 0, (8)

must be enforced on the matrix in Equation (7). If the 1-D Laplacian matrix and the right-hand-side array with the boundary conditions incorporated are defined as D̄_x and f̄, respectively, the solutions of the 1-D Poisson equation can be obtained from

D̄_x u = f̄ (9)

by solving Equation (9) for u. However, solving the 1-D Poisson equation from Equation (9) is an inefficient way to obtain numerical solutions because the left-hand side matrix of Equation (9) is upper triangular with the tau lines at the bottom. The operation complexity of solving this matrix is O(M^2), which is not computationally cheap.
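The recurrence in Definition 3 can be checked directly with a few lines of code (a minimal sketch; the helper name and array layout are our own):

```python
import numpy as np

def cheb_second_derivative_coeffs(u_hat):
    """Coefficients of u'' from those of u via the classical recurrence
    u''_m = (1/c_m) * sum over p = m+2, m+4, ... of p (p^2 - m^2) u_p,
    with c_0 = 2 and c_m = 1 otherwise.  (Helper name is ours.)"""
    M = len(u_hat) - 1
    d2 = np.zeros(M + 1)
    for m in range(M + 1):
        c_m = 2.0 if m == 0 else 1.0
        # stepping by 2 from m+2 keeps p + m even automatically
        d2[m] = sum(p * (p**2 - m**2) * u_hat[p]
                    for p in range(m + 2, M + 1, 2)) / c_m
    return d2

# Check: u = T_3(x) = 4x^3 - 3x has u'' = 24x = 24 T_1(x)
u_hat = np.zeros(6); u_hat[3] = 1.0
print(cheb_second_derivative_coeffs(u_hat)[1])  # -> 24.0
```

The single non-zero output coefficient sits at mode 1, matching the analytic second derivative.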

Quasi-Inverse Approach
A better way of computing the 1-D Poisson equation is to use the recursion relation for derivative coefficients,

c_{m−1} û^(q)_{m−1} − û^(q)_{m+1} = 2m û^(q−1)_m, m ≥ 1, (10)

where û^(q)_m is the m-th Chebyshev coefficient of the q-th derivative of u with respect to x, defined by

∂^q u/∂x^q = Σ_{m=0}^{M} û^(q)_m T_m(x). (11)

Using Equation (10) with q = 2 and q = 1 respectively gives

c_{m−1} û^(2)_{m−1} − û^(2)_{m+1} = 2m û^(1)_m (12)

and

c_{m−1} û^(1)_{m−1} − û^(1)_{m+1} = 2m û_m, (13)

where û_m is used instead of û^(0)_m. Without loss of generality, to satisfy Equation (1) in Chebyshev space for all m, the condition û^(2)_m = f̂_m must be established. Then, substituting û^(2)_{m−1} and û^(2)_{m+1} in Equation (12) by f̂_{m−1} and f̂_{m+1}, respectively, and rearranging the terms gives

û^(1)_m = (c_{m−1} f̂_{m−1} − f̂_{m+1}) / (2m). (14)

Then, with Equations (13) and (14), it is possible to derive

û_m = c_{m−2} f̂_{m−2} / (4m(m−1)) − f̂_m / (2(m² − 1)) + f̂_{m+2} / (4m(m+1)), m ≥ 2. (15)

Remark 1. If the matrix J_x is defined as the identity matrix except for the two top-most diagonal elements, whose values are zero, Equation (15) can be written in matrix form as

J_x u = B_x f, (16)

where B_x is the band matrix with three non-zero diagonals, whose non-zero elements b_{i,j} are given by

b_{m,m−2} = c_{m−2} / (4m(m−1)), b_{m,m} = −1 / (2(m² − 1)), b_{m,m+2} = 1 / (4m(m+1)), m = 2, 3, …, M.

Note that the subscript x in the matrix symbols (e.g., B_x and J_x) indicates that those matrices are associated with x. This subscript, identifying which direction a matrix comes from, is useful for distinguishing the influence of matrices in multi-dimensional problems, where the subscripts can be x and y in 2-D, and x, y, and z in 3-D. When the matrix B_x is multiplied on the left of Equation (7), it is possible to claim

B_x D²_x = J_x. (17)

The matrix B_x is called a quasi-inverse matrix in x for the Laplace operator because it acts like an inverse matrix of the Laplace operator, producing the quasi-identity matrix (not the identity matrix) when multiplied with the Laplace operator.
Similar to the Chebyshev-tau method, boundary conditions need to be incorporated into Equation (16). To do so, we rewrite the two equations in Equation (8) in matrix form. Defining B_c as the (M + 1) × (M + 1) matrix whose elements are zero except for the top two rows, where all elements of the top row are 1 and the elements of the second row alternate between 1 and −1 from the first to the last column, the boundary conditions can be written in matrix form as

B_c u = f_c, (18)

where f_c is the (M + 1) × 1 zero array. Adding Equation (18) to Equation (16) gives

(J_x + B_c) u = B_x f + f_c. (19)

Note that f_c is the zero array only under the given boundary conditions in Equation (1). The f_c term could be omitted from Equation (19) because it is zero, but we keep it to show the boundary condition enforcement clearly. The left-hand side matrix of Equation (19) is sparse, so it can be solved efficiently. The non-zero components of (J_x + B_c) are the diagonal elements from J_x and the top two rows' entries from B_c, which form the tau lines. This matrix can be solved for u with O(M) operations using simple Gaussian elimination. Compared to the method described in Section 2.1, solving a Poisson equation with the quasi-inverse approach is more efficient because it reduces the operation complexity by one order in the number of unknowns.
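The banded structure of B_x can be illustrated numerically. This is a sketch assuming the three-band entries stated in Remark 1; the helper name is our own:

```python
import numpy as np

def quasi_inverse_matrix(M):
    """Quasi-inverse B_x of the Chebyshev second-derivative operator,
    assuming the band entries of Remark 1 (rows 0 and 1 stay zero):
      b[m, m-2] = c_{m-2} / (4 m (m-1)),
      b[m, m]   = -1 / (2 (m^2 - 1)),
      b[m, m+2] = 1 / (4 m (m+1)),   for m = 2, ..., M."""
    B = np.zeros((M + 1, M + 1))
    for m in range(2, M + 1):
        c = 2.0 if m == 2 else 1.0      # c_{m-2}: c_0 = 2, otherwise 1
        B[m, m - 2] = c / (4 * m * (m - 1))
        B[m, m] = -1.0 / (2 * (m * m - 1))
        if m + 2 <= M:
            B[m, m + 2] = 1.0 / (4 * m * (m + 1))
    return B

# Sanity check: u = T_4 gives u'' = 32 T_0 + 48 T_2, so applying B to those
# coefficients must recover u_hat in every row m >= 2.
M = 8
f_hat = np.zeros(M + 1); f_hat[0], f_hat[2] = 32.0, 48.0
u_rec = quasi_inverse_matrix(M) @ f_hat
print(abs(u_rec[4] - 1.0) < 1e-12 and abs(u_rec[2]) < 1e-12)  # -> True
```

The recovered coefficient of T_4 is 1 and that of T_2 vanishes, exactly as the quasi-identity property B_x D²_x = J_x predicts for rows m ≥ 2.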
The key point of the quasi-inverse approach for a Poisson equation is the construction of the quasi-inverse matrix B_x for the Laplace operator. Multiplying both sides of the Poisson equation by B_x simplifies the system of equations based on the property of B_x described in Equation (17). This property of the quasi-inverse matrix B_x can also be used to solve multi-dimensional PDEs efficiently, as described in Section 3.

Chebyshev-Galerkin Method
One disadvantage of the Chebyshev-tau method is the direct enforcement of boundary conditions on the system, which creates tau lines in the equation. For example, when the Poisson equation is solved with the quasi-inverse technique using the Chebyshev-tau method, so that the final equation is expressed as Equation (19), the enforcement of boundary conditions generates tau lines in the top two rows. These tau lines increase the interaction between Chebyshev modes, hindering fast calculation. To overcome this drawback of the Chebyshev-tau method and enable efficient computation, researchers have used the Chebyshev-Galerkin method to solve PDEs.
While the boundary conditions of PDEs are directly enforced on the systems of equations in the Chebyshev-tau method, when the Chebyshev-Galerkin method is used, boundary conditions are satisfied automatically without direct enforcement. This is done by discretizing the PDEs with new basis functions, called Galerkin basis functions, that automatically satisfy the given boundary conditions. There are many ways to define Galerkin basis functions satisfying specific boundary conditions, but one common approach is to represent them as linear combinations of Chebyshev polynomials, allowing simple but fast transformations between Chebyshev space and Galerkin space. For solving PDEs with Dirichlet boundary conditions using the Chebyshev-Galerkin method, Shen [12] showed the efficiency of the following Galerkin basis functions:

φ_m(x) = T_m(x) − T_{m+2}(x), m = 0, 1, …, M − 2, (20)

where φ_m(x) is the m-th Galerkin basis function. In this paper, we use the same Galerkin basis functions as Shen to transform efficiently from Chebyshev space to Galerkin space and vice versa.
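That a basis of the form φ_m = T_m − T_{m+2} satisfies the homogeneous Dirichlet conditions can be verified numerically (a small sketch; `phi` is our own helper, using T_m(x) = cos(m arccos x) for |x| ≤ 1):

```python
import numpy as np

# phi_m(x) = T_m(x) - T_{m+2}(x) vanishes at x = +-1 because T_m(1) = 1
# and T_m(-1) = (-1)^m, so the Dirichlet conditions hold automatically.
phi = lambda m, x: np.cos(m * np.arccos(x)) - np.cos((m + 2) * np.arccos(x))
worst = max(abs(phi(m, x)) for m in range(8) for x in (1.0, -1.0))
print(worst < 1e-12)  # -> True
```

No boundary rows ever need to be added to the system, which is precisely why the tau lines disappear.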
Definition 4. When a 1-D Poisson equation is solved with the Chebyshev-Galerkin method using the Galerkin basis functions defined in Equation (20), u(x) can be approximated by a sum of the Galerkin basis functions multiplied by their coefficients v̂_m, which can be written as

u(x) = Σ_{m=0}^{M−2} v̂_m φ_m(x). (21)

Remark 2. Equating Equations (2) and (21) associates the Chebyshev coefficients with the Galerkin coefficients via

u = S_x v, (22)

where v = [v̂_0, v̂_1, …, v̂_{M−2}]^T and S_x is the (M + 1) × (M − 1) transformation matrix whose only non-zero entries s_{i,j} are s_{j,j} = 1 and s_{j+2,j} = −1. Note that S_x is not a square matrix because u(x) is discretized with M + 1 modes in Chebyshev space but with M − 1 modes in Galerkin space, where the difference comes from the absence of boundary condition enforcement in the Chebyshev-Galerkin method. Substituting Equation (22) into Equation (16) gives

A_x v = g_x, (23)

where A_x = J_x S_x and g_x = B_x f. Note that A_x is an (M + 1) × (M − 1) matrix and g_x is an (M + 1) × 1 array. The elements of the top two rows of the matrix A_x and the array g_x are zero because these rows are discarded for boundary condition enforcement. However, in the Chebyshev-Galerkin method, since the boundary conditions are automatically obeyed by the proper choice of Galerkin basis functions, the top two rows of A_x and g_x are no longer necessary. Therefore, those modes can be deleted from the equation, and we can consider only the non-discarded modes in Equation (23).
Because the matrix Ãx is a bi-diagonal matrix whose only non-zero entries lie on the main diagonal and the second super-diagonal (with values ±1, the signs fixed by the sign convention chosen for the basis in Equation (20)), Equation (24) can be solved for v by backward substitution efficiently in O(M) operations. After v is computed, the solution of the 1-D Poisson equation u can easily be obtained from Equation (22).
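Putting the pieces of Sections 2.2 and 2.3 together, the 1-D Galerkin solve can be sketched end to end. Helper names are our own; the quasi-inverse entries are those assumed above, and the back substitution divides by the diagonal entry of Ãx rather than assuming a particular sign:

```python
import numpy as np

def quasi_inverse_matrix(M):
    # band entries of B_x as in Remark 1 (rows 0 and 1 stay zero)
    B = np.zeros((M + 1, M + 1))
    for m in range(2, M + 1):
        c = 2.0 if m == 2 else 1.0
        B[m, m - 2] = c / (4 * m * (m - 1))
        B[m, m] = -1.0 / (2 * (m * m - 1))
        if m + 2 <= M:
            B[m, m + 2] = 1.0 / (4 * m * (m + 1))
    return B

def solve_poisson_1d_galerkin(f_hat):
    """Solve u'' = f with u(-1) = u(1) = 0 using phi_j = T_j - T_{j+2};
    returns the Chebyshev coefficients of u (u_hat = S v)."""
    M = len(f_hat) - 1
    S = np.zeros((M + 1, M - 1))
    for j in range(M - 1):
        S[j, j], S[j + 2, j] = 1.0, -1.0
    A_t = S[2:, :]                        # tilde(A): rows 2..M of J S
    g_t = (quasi_inverse_matrix(M) @ f_hat)[2:]
    v = np.zeros(M - 1)
    for i in range(M - 2, -1, -1):        # bi-diagonal back substitution, O(M)
        tail = A_t[i, i + 2] * v[i + 2] if i + 2 <= M - 2 else 0.0
        v[i] = (g_t[i] - tail) / A_t[i, i]
    return S @ v

# u = 2(1 - x^2) = T_0 - T_2 satisfies u'' = -4, i.e. f_hat = [-4, 0, ...]
f_hat = np.zeros(9); f_hat[0] = -4.0
print(np.allclose(solve_poisson_1d_galerkin(f_hat)[:3], [1.0, 0.0, -1.0]))  # -> True
```

The recovered Chebyshev coefficients are those of T_0 − T_2, i.e. the exact solution, and the linear solve itself touches only O(M) entries.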

Quasi-Inverse Matrix Diagonalization Method
The quasi-inverse technique can be used with both the Chebyshev-tau method and the Chebyshev-Galerkin method. In this paper, we use the quasi-inverse technique with the Chebyshev-Galerkin method to solve multi-dimensional problems for two reasons: (1) the Galerkin basis functions are much more convenient for applying the matrix diagonalization method, because the Chebyshev-tau method needs to impose boundary conditions on the equation, which complicates the entire calculation in multi-dimensional problems and sometimes makes the equations non-separable, and (2) the Chebyshev-Galerkin method is faster than the Chebyshev-tau method at obtaining numerical solutions of PDEs. Due to these advantages, our quasi-inverse matrix diagonalization method is built on the Chebyshev-Galerkin method. In this section, we introduce the quasi-inverse matrix diagonalization method by applying it to solve 2-D and 3-D Poisson equations.

2-D Poisson Equation
Solving PDEs (including 2-D and 3-D Poisson equations) using the quasi-inverse matrix diagonalization method is divided into two steps: (1) derive sparse systems of equations using the quasi-inverse technique, and (2) solve the derived equations efficiently using the matrix diagonalization method.

Quasi-Inverse Technique
The 2-D Poisson equation with Dirichlet boundary conditions can be expressed as follows:

∂²u/∂x² + ∂²u/∂y² = f(x, y), (x, y) ∈ [−1, 1]², u = 0 on the boundary, (25)

where, when u(x, y) and f(x, y) are discretized as in Definition 6, L and M are the highest Chebyshev modes in x and y, respectively, and û_lm and f̂_lm are the (l, m) Chebyshev coefficients of u(x, y) and f(x, y), respectively.
Multi-dimensional PDEs can be solved with combinations of 1-D differentiation matrices using the Kronecker product [19]. Here, the Kronecker product of matrices A and B is defined as the block matrix A ⊗ B whose (i, j) block equals a_ij B, where a_ij is the (i, j) element of A. In the matrix form of Equation (25) (Equation (28)), u and f are the (L + 1)(M + 1) × 1 arrays whose i-th row elements u_i and f_i equal û_lm and f̂_lm, respectively, where i = l(M + 1) + m. To make use of the advantage of the quasi-inverse technique, we multiply Equation (28) on the left by B_x ⊗ B_y. Then, based on the property of the Kronecker product that (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), we obtain

(J_x ⊗ B_y + B_x ⊗ J_y) u = (B_x ⊗ B_y) f. (29)

Definition 7. In the Chebyshev-Galerkin method, u(x, y) is approximated by the coefficients of the Galerkin basis functions v̂_lm multiplied by the Galerkin basis functions in x and y, defined by

u(x, y) = Σ_{l=0}^{L−2} Σ_{m=0}^{M−2} v̂_lm φ_l(x) φ_m(y). (30)

Remark 3. If we define v as the (L − 1)(M − 1) × 1 array in which the j-th row element v_j equals v̂_lm with j = l(M − 1) + m, then Equation (30) can be written in matrix form as

u = (S_x ⊗ S_y) v. (31)

Here, the sizes of the matrices S_x and S_y are (L + 1) × (L − 1) and (M + 1) × (M − 1), respectively.
Substituting Equation (31) into Equation (29) gives

(J_x S_x ⊗ B_y S_y + B_x S_x ⊗ J_y S_y) v = (B_x ⊗ B_y) f. (32)

Then, by defining A_χ = J_χ S_χ and C_χ = B_χ S_χ for χ = x and y, it is possible to obtain

(A_x ⊗ C_y + C_x ⊗ A_y) v = (B_x ⊗ B_y) f (33)

from Equation (32). The top two rows of all matrices in Equation (33) are zero because these rows are discarded for boundary condition enforcement. As studied in the 1-D Poisson equation, discarded modes should be omitted from the equation when a PDE is solved with the Chebyshev-Galerkin method. Using the tilde notation described in Section 2.3, we consider only the non-discarded modes in Equation (33), which can be written as

(Ãx ⊗ C̃y + C̃x ⊗ Ãy) v = (B̃x ⊗ B̃y) f. (34)

The array v can be computed by inverting the sparse matrix (Ãx ⊗ C̃y + C̃x ⊗ Ãy) in Equation (34), but we will further use the matrix diagonalization method to obtain the solutions more efficiently.
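The Kronecker-product mechanics used in Equations (28)–(33) can be verified with small random matrices. This is a generic illustration of the assembly and of the mixed-product identity, not the paper's specific operators:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-D operator assembles from 1-D operators as D2x kron Iy + Ix kron D2y;
# the shapes follow (L x L) kron (M x M) -> (LM x LM).
L, M = 4, 3
D2x, D2y = rng.random((L, L)), rng.random((M, M))
lap = np.kron(D2x, np.eye(M)) + np.kron(np.eye(L), D2y)

# Mixed-product identity used to move Bx kron By through the operator:
# (A kron B)(C kron D) = (A C) kron (B D)
A, B, C, D = (rng.random((3, 3)) for _ in range(4))
print(lap.shape, np.allclose(np.kron(A, B) @ np.kron(C, D),
                             np.kron(A @ C, B @ D)))  # -> (12, 12) True
```

The identity is what collapses (B_x ⊗ B_y)(D²_x ⊗ I_y) into J_x ⊗ B_y without ever forming the dense product.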

Matrix Diagonalization Method
To decouple Equation (34) using the eigenvalue and eigenvector decomposition technique, we first reshape the (L − 1)(M − 1) × 1 array v into an (L − 1) × (M − 1) matrix, which we define as V. Similarly, we reshape the (L + 1)(M + 1) × 1 array f into an (L + 1) × (M + 1) matrix, which we define as F. Then, we can rewrite Equation (34) as

Ãx V C̃y^T + C̃x V Ãy^T = B̃x F B̃y^T. (35)

Multiplying Equation (35) on the right by C̃y^{−T} gives

Ãx V + C̃x V Ãy^T C̃y^{−T} = B̃x F B̃y^T C̃y^{−T}. (36)

We emphasize that an n-by-n matrix is diagonalizable if and only if the matrix has n linearly independent eigenvectors [20]. The matrix Ãy^T C̃y^{−T} satisfies this condition, so, using eigenvalue decomposition, it can be decomposed as

Ãy^T C̃y^{−T} = P Q P^{−1}, (37)

where P and Q are the eigenvector and diagonal eigenvalue matrices of Ãy^T C̃y^{−T}, respectively. After substituting Equation (37) into Equation (36), multiplying on the right by P allows us to obtain

Ãx V* + C̃x V* Q = H, (38)

where V* = VP and H = B̃x F B̃y^T C̃y^{−T} P. The matrix V* in Equation (38) can be solved for efficiently column by column because Q is diagonal. If we define the j-th columns of V* and H as v*_j and h_j and the j-th diagonal element of Q as q_j, the j-th column of Equation (38) can be written as

(Ãx + q_j C̃x) v*_j = h_j. (39)

The matrices Ãx and C̃x are bi-diagonal and quad-diagonal matrices, respectively, whose non-zero elements are presented in Figure 1. Due to this sparsity, solving Equation (39) for v*_j can be performed efficiently with O(L) operations for each j. Once all columns of the matrix V* are obtained, the matrix V can be computed from

V = V* P^{−1}. (40)
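The column-by-column solve of Equations (35)–(40) can be sketched generically: any equation of the form A1 X + A2 X W = H decouples once W is diagonalized. The matrix names below are generic placeholders, not the paper's Ãx, C̃x:

```python
import numpy as np

def diagonalization_solve(A1, A2, W, H):
    """Solve A1 X + A2 X W = H by diagonalizing W = P Q P^{-1}: multiplying
    on the right by P decouples the columns, (A1 + q_j A2) x*_j = h*_j,
    and finally X = X* P^{-1}.  Generic sketch, not the paper's matrices."""
    q, P = np.linalg.eig(W)              # eigenvalues may be complex
    H_star = H @ P
    X_star = np.zeros_like(H_star, dtype=complex)
    for j, qj in enumerate(q):
        X_star[:, j] = np.linalg.solve(A1 + qj * A2, H_star[:, j])
    return (X_star @ np.linalg.inv(P)).real

# verify against a manufactured solution
rng = np.random.default_rng(1)
n = 5
A1, A2, W = rng.random((n, n)), rng.random((n, n)), rng.random((n, n))
X_true = rng.random((n, n))
H = A1 @ X_true + A2 @ X_true @ W
print(np.allclose(diagonalization_solve(A1, A2, W, H), X_true))  # -> True
```

In the paper's setting each per-column system is additionally banded, which is what reduces the cost of a column from O(L²) (dense) to O(L).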

3-D Poisson Equation
By following the same steps used in the 2-D problem, we can apply the quasi-inverse matrix diagonalization method to 3-D PDEs. Here, we explain how to use the quasi-inverse matrix diagonalization method for 3-D problems, particularly by applying it to solve a 3-D Poisson equation.
(41) Definition 8.In employing Chebyshev polynomials to solve PDEs, u(x, y) and f (x, y) are represented by the sums of Chebyshev coefficients multiplied by Chebyshev polynomials, defined as where L, M and N are the highest Chebyshev modes in x, y, and z, respectively, and ûlmn and flmn are the (l, m, n)-th Chebyshev coefficients of u(x, y, z) and f (x, y, z), respectively.
In matrix form, Equation (41) can be rewritten using the Kronecker product as

(D²_x ⊗ I_y ⊗ I_z + I_x ⊗ D²_y ⊗ I_z + I_x ⊗ I_y ⊗ D²_z) u = f, (44)

where u and f are the (L + 1)(M + 1)(N + 1) × 1 arrays whose j-th row components are respectively equal to û_lmn and f̂_lmn. Multiplying Equation (44) on the left by B_x ⊗ B_y ⊗ B_z gives

(J_x ⊗ B_y ⊗ B_z + B_x ⊗ J_y ⊗ B_z + B_x ⊗ B_y ⊗ J_z) u = (B_x ⊗ B_y ⊗ B_z) f. (45)

We first test the methods on Poisson problems with homogeneous Dirichlet conditions, u = 0 on Γ, where Γ is the boundary of the rectangular domain. As a test example, we define f(x) accordingly; note that we use coordinates x_1 and x_2 in the 2-D problem and x_1, x_2, and x_3 in the 3-D problem, and the exact solution of Equation (59) is known in closed form. This test problem is solved using the quasi-inverse technique, the matrix diagonalization method, and the quasi-inverse matrix diagonalization method. The accuracy of the numerical solutions obtained with these three methods is presented in Table 1 as a function of N, where the accuracy is computed by the relative L2-norm error between the exact and numerical solutions, defined as ||u − u_ext||_2 / ||u_ext||_2, where || · ||_2 indicates the L2-norm, and u and u_ext are the numerical and exact solutions, respectively. In both the 2-D and 3-D problems, the three methods show a similar order of accuracy, which is identical until N = 24. At N = 32, all numerical solutions reach machine accuracy (~10^-16) in double precision. At high N, the errors of numerical solutions obtained from the spectral method using Chebyshev polynomials can increase because the system of equations tends to become ill-conditioned as N increases. For a good spectral method, the error of the numerical solutions at high N should remain near machine accuracy and should not rise as N increases. To further check the error and robustness of the proposed method in this respect, we solve the Poisson problem with high N, up to N = 1024 in 2-D and N = 256 in 3-D, and measure the errors of the numerical solutions of the proposed method as a function of N. We present the results of this experiment in Figure 2.
As shown in the figure, even at high N, the errors of the numerical solutions of the proposed method remain on the order of 10^-15, showing the high accuracy and robustness of the proposed method.
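The relative L2-norm error used for Table 1 amounts to a one-line helper (the function name is ours):

```python
import numpy as np

def relative_l2_error(u_num, u_ext):
    """Relative L2-norm error ||u_num - u_ext||_2 / ||u_ext||_2."""
    return np.linalg.norm(u_num - u_ext) / np.linalg.norm(u_ext)

print(relative_l2_error(np.array([2.0, 0.0]), np.array([1.0, 0.0])))  # -> 1.0
```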
Table 1. Errors of numerical solutions of 2-D and 3-D Poisson equations as a function of the number of unknowns in 1-D. The numerical solutions are computed using the QI, MD, and QIMD methods, where QI, MD, and QIMD stand for Quasi-Inverse, Matrix Diagonalization, and Quasi-Inverse Matrix Diagonalization, respectively. All methods show a similar order of accuracy and reach spectral accuracy at N = 32. In both 2-D and 3-D problems, the errors of the proposed method (QIMD) do not rise as N increases. The proposed method shows the same order of accuracy as the quasi-inverse method and the matrix diagonalization method.

Figure 3 shows the CPU time of the quasi-inverse, matrix diagonalization, and quasi-inverse matrix diagonalization methods in solving 2-D and 3-D Poisson equations as a function of N on a log-log scale. The slope of each method's data in the figure indicates the operation complexity of the corresponding method. To calculate the slopes, the data are fitted using linear regression as t = exp(m ln N + b), where t is the CPU time, and m and b are the regression coefficients. The results of the regression are presented in Table 2, where r² is the r-squared value of the linear regression. In solving the 2-D Poisson equation, we observe that the operation complexity of the matrix diagonalization method is the highest among the three methods, while the other two methods show similar operation complexity. However, the actual CPU time of the quasi-inverse matrix diagonalization method is more than 10 times shorter than that of the quasi-inverse method. These observations indicate that the reduction in operation complexity comes from the quasi-inverse technique's ability to make the systems of equations sparse, rather than from the matrix diagonalization technique. However, the matrix diagonalization technique still helps reduce CPU time by decreasing the actual number of calculations, even though the two methods have the same order of operation complexity.
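Extracting the complexity exponent from timing data is ordinary least squares in log-log space; a sketch with synthetic timings (the timing values below are illustrative, not the paper's measurements):

```python
import numpy as np

# Slope m of the fit t = exp(m ln N + b) estimates the complexity exponent.
# Synthetic timings with a known O(N^3) scaling:
N = np.array([16, 32, 64, 128, 256], dtype=float)
t = 1e-6 * N**3
m, b = np.polyfit(np.log(N), np.log(t), 1)
print(round(m, 6))  # -> 3.0
```

With real measurements the recovered slope is only approximate, which is why the r-squared value is reported alongside m and b in Table 2.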
In solving the 3-D Poisson equation, as Julien and Watson claimed in [13], the operation complexity of the quasi-inverse technique is approximately O(N^5). For the matrix diagonalization method, the operation complexity increases approximately from O(N^3) to O(N^4) as the dimensionality of the Poisson equation rises from 2-D to 3-D. This is because the bottleneck of the matrix diagonalization method in solving the Poisson equation is computing v_i from the dense matrix, which requires about O(N^2) operations per calculation, and the repetition of this calculation increases from N to N^2 times. When using the quasi-inverse matrix diagonalization method to solve the 3-D problem, the most time-consuming step is also computing v_i, but the left-hand side matrix (K_x + q_i λ_j C̃x) in Equation (58) is sparse with a bandwidth of four, and therefore v_i can be obtained in O(N) operations. Because this calculation is repeated N^2 times in the 3-D problem, the total operation complexity of the quasi-inverse matrix diagonalization method becomes O(N^3), as shown in Table 2.
Note that no analytic solution can be found for this Poisson equation. The problem is solved using the quasi-inverse matrix diagonalization method while varying the number of unknowns over N = 16, 32, 64, 128, 256, 512, and 1024.
Figure 4 shows the values of the forcing term f(x, y) in Figure 4a and the numerical solution of the problem u(x, y) at N = 128 in Figure 4b. To check the accuracy of the quasi-inverse matrix diagonalization method in solving this problem, we compute the numerical errors. Because no exact solution of the problem is available, we treat the numerical solution at N = 1024 as the most accurate solution and compare the other numerical solutions with it. Therefore, the error here indicates the relative difference, in the sense of the L2 norm, between the numerical solutions obtained at N < 1024 and the numerical solution obtained at N = 1024. We present the values of the errors as a function of N on a log-log scale in Figure 5. As N increases, the error decreases exponentially, which shows the spectral accuracy of our method, as expected.

Coupled Two-Dimensional Helmholtz Equation
We apply the quasi-inverse matrix diagonalization method to 2-D coupled Helmholtz equations and compare the numerical results with those obtained from the quasi-inverse technique. We consider coupled equations with Dirichlet boundary conditions in which k_1 and k_2 are constants, and, as a test problem, the functions f_1(x, y) and f_2(x, y) are chosen so that the exact solutions are known. The resulting spectral system takes the form of Equation (73), where Ẽxy = Ãx ⊗ C̃y + C̃x ⊗ Ãy, C̃xy = C̃x ⊗ C̃y, and B̃xy = B̃x ⊗ B̃y, as defined in Section 3.2. Because Equations (73) and (51) are the same type of equation, Equation (73) can be solved for v by following the same steps used for 3-D PDEs in Section 3.2. We tested the coupled 2-D Helmholtz equations with k_1 = 0.7 and k_2 = 1.1. For N = 32, 64, 128, 256, and 512, the L-infinity errors between the numerical and exact solutions and the CPU times are computed using the quasi-inverse and quasi-inverse matrix diagonalization methods, and the results are presented in Table 3. In both cases, the same order of high accuracy is obtained for all N. In terms of CPU time, as in the multi-dimensional Poisson problems, the quasi-inverse matrix diagonalization method is much faster than the quasi-inverse method, especially at high N, demonstrating its efficiency.

For the 3-D Stokes problem, the pressure is prescribed as

p = e^{cos(2πx)} sin(2πx) sin(2πy) sin(2πz). (76)

The accuracy of the numerical solutions of u, v, w, and p obtained using the quasi-inverse matrix diagonalization method is presented in Figure 6. The errors decay more slowly than in the previous examples, but all of them show spectral convergence, reaching machine accuracy around N = 90. Figure 7 shows the CPU time of the Stokes problem on a log-log scale as a function of N. When linear regression of the data is performed, the slope of the graph is 2.884, showing approximately O(N^3) operation complexity for the 3-D Stokes problem, as expected.

Conclusions
A new efficient way of solving multi-dimensional linear PDEs spectrally using Chebyshev polynomials, called the quasi-inverse matrix diagonalization method, is presented in this paper. In this method, we discretize the numerical solutions in space using Chebyshev-Galerkin basis functions, which automatically obey the boundary conditions of the PDEs. Then, the quasi-inverse technique is used to construct sparse differential operators for the Chebyshev-Galerkin basis functions, and the matrix diagonalization technique is applied to further facilitate computation by decoupling the modes in the spatial directions. The proposed method proved highly efficient compared to other current methods when applied to the test problems. The proposed method is especially fast in computing 3-D linear PDEs. We show that the operation complexity of the proposed method in solving 3-D linear PDEs is approximately O(N^3) when tested with the numerical examples in this paper (tested up to N = 512), suggesting the possibility of handling 3-D linear PDE problems with high numbers of unknowns. Although the test problems in this paper are restricted to second-order linear PDEs, the quasi-inverse matrix diagonalization method can be used to solve separable higher-order linear PDEs, such as fourth-order linear PDEs without cross terms (e.g., ∂²u/∂x∂y) and biharmonic equations with {u, ∂²u/∂n²} types of boundary conditions, where the derivative with respect to n is the normal derivative at the boundary of the domain.

Definition 5.
Let us define the tilde symbol to denote a matrix or array whose top two rows are thrown away. Based on this notation, Ãx and g̃x are respectively the (M − 1) × (M − 1) matrix and the (M − 1) × 1 array that are the subsets of A_x and g_x with the top two rows omitted. Then, we can take into account only the non-discarded modes from Equation (23) by using Ãx and g̃x:

Ãx v = g̃x. (24)

Definition 6.
When the 2-D Poisson equation is solved spectrally with Chebyshev polynomials, u(x, y) and f(x, y) are respectively discretized in space as

u(x, y) = Σ_{l=0}^{L} Σ_{m=0}^{M} û_lm T_l(x) T_m(y) and f(x, y) = Σ_{l=0}^{L} Σ_{m=0}^{M} f̂_lm T_l(x) T_m(y).

The Kronecker product of matrices A and B is the block matrix

A ⊗ B = [ a_11 B  a_12 B  …  a_1n B ;  a_21 B  a_22 B  …  a_2n B ;  … ;  a_m1 B  a_m2 B  …  a_mn B ], (26)

where a_ij is the (i, j) element of matrix A. Because the second derivatives of u with respect to x and y in 2-D are respectively equal to D²_x ⊗ I_y and I_x ⊗ D²_y, where D²_x and D²_y are the 1-D Laplacian matrices differentiated in x and y, Equation (25) can be rewritten in matrix form as

(D²_x ⊗ I_y + I_x ⊗ D²_y) u = f. (28)

Figure 1.
Non-zero elements of the matrices Ãx and C̃x are presented in (a) and (b), respectively. Here, both L and M are set to 64.

Figure 2.
Errors of numerical solutions of 2-D (plotted in (a)) and 3-D (plotted in (b)) Poisson equations at high numbers of N.

Figure 3.
Comparison of CPU times among the Quasi-Inverse (QI), Matrix Diagonalization (MD), and Quasi-Inverse Matrix Diagonalization (QIMD) methods as a function of the one-dimensional number of unknowns in solving (a) a 2-D Poisson equation and (b) a 3-D Poisson equation. The results of the linear regression associated with the lines in the figure are presented in Table 2. Here, O(N^2) and O(N^3) operation lines in (a), and O(N^3) and O(N^5) operation lines in (b), are plotted for comparison.

Figure 4.

Figure 5.
Error of numerical solutions of a 2-D Poisson equation with no exact solution. The numerical solutions here are obtained from the quasi-inverse matrix diagonalization method. The values of the errors are computed by comparing the numerical solution at each N with the one at N = 1024.

Figure 6.
Numerical errors of u, v, w, and p as a function of N in solving the Stokes problem.

Figure 7.
Blue markers show CPU times in solving the 3-D Stokes problem. The blue dashed line is plotted from the linear regression of the blue markers' data, where the slope is 2.884. The black dashed line shows O(N^3) for comparison.

Table 2.
CPU time (t) and the number of unknowns in 1-D (N) are related using linear regression in the log-log scale as t = exp(m ln N + b). The values of m, b, and r² of the Quasi-Inverse (QI), Matrix Diagonalization (MD), and Quasi-Inverse Matrix Diagonalization (QIMD) methods in solving 2-D and 3-D Poisson equations are presented, where r² is the r-squared value of the linear regression.