A Multiscale RBF Collocation Method for the Numerical Solution of Partial Differential Equations

In this paper, we derive and discuss the hierarchical radial basis functions method for the approximation of Sobolev functions and the collocation of well-posed linear partial differential equations. Similar to the multilevel splitting of finite element spaces, the hierarchical radial basis functions are constructed from successively refined scattered data sets and scaled compactly supported radial basis functions with varying support radii. Compared with compactly supported radial basis function approximation and stationary multilevel approximation, the new method not only solves the problem on a single level with higher accuracy and lower computational cost, but also produces a highly sparse discrete algebraic system. These observations are obtained through direct numerical experimentation.


Introduction
In this paper, we introduce the hierarchical radial basis functions method for the approximation of Sobolev functions and the collocation of well-posed linear partial differential equations. Hierarchical bases are a common concept in the finite element literature, where they are formed by decomposing finite element spaces (see an early paper by Yserentant in 1986 [1]). A hierarchical radial basis functions network is a concept in neural networks: an approximating neural model that is a self-organizing (growing) multiscale version of a radial basis function network. In this paper, hierarchical radial basis functions (H-RBFs) refer to basis functions constructed from a hierarchical data structure and scaled compactly supported radial basis functions. Below, we explain why hierarchical radial basis functions are necessary.
Since the first kernel-based collocation method was introduced by Kansa [2], radial basis functions have been successfully applied to the numerical solution of various partial differential equations. Unfortunately, highly accurate solutions come at the price of severely ill-conditioned linear systems and high computational cost when radial basis function collocation methods are used. This is the well-known uncertainty, or trade-off, principle [3]. To overcome this problem, different strategies have been suggested; see a summary of existing methods in the recent monograph [4]. Three kinds of methods are popular for dealing with this problem.
The first method is finding an optimal shape parameter ε (which is related to the distribution of the scattered centers and is usually inversely proportional to the mesh norm). In practical approximation, one customarily scales a radial function by multiplying its independent variable by a shape parameter ε. Obviously, a smaller value of ε causes the function to become flatter, whereas increasing ε leads to a more peaked radial function and therefore localizes its influence.
The choice of ε has a profound influence on both the approximation accuracy and the numerical stability of the solution to the interpolation problem. The general observation is that a large ε leads to a very well-conditioned linear system but a poor approximation rate, whereas a smaller ε yields excellent approximation at the price of a badly conditioned system. A number of strategies for choosing a "good" value of ε have been suggested, such as using the power function as an indicator, the cross validation algorithm, and the Contour-Padé algorithm. Some discussion of the existing parametrization schemes is provided in the books [4,5]. Although Madych [6] indicated that the interpolation error goes to zero as ε → 0, this does not seem to hold in practice. Beyond that, there is no known case where the error and the sensitivity are both reasonably small.
The second method is the utilization of compactly supported radial basis functions (CSRBFs). Since 1995, a series of compactly supported radial basis functions have emerged, such as Wendland's functions [7], Wu's functions [8], Buhmann's functions [9], and others [5]. The compact support automatically ensures the strict positive definiteness of CSRBFs. However, an additional trade-off principle remains, now depending on the choice of the support size. Specifically, a small support leads to a well-conditioned system but poor approximation accuracy, whereas a larger support yields excellent accuracy at the price of an ill-conditioned system.
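This support-size trade-off can be observed directly in a small 1-D experiment. The sketch below is an illustration, not one of the paper's experiments: it interpolates sin(2πx) at 33 equispaced points with Wendland's C² function φ(r) = (1 − r)₊⁴(4r + 1), scaled as φ(εr) so the support radius is 1/ε. A small ε (large support) gives an ill-conditioned but accurate system; a large ε (small support) gives the reverse. The values ε = 1 and ε = 16 are chosen only for the demonstration.

```python
import numpy as np

def wendland_c2(r):
    # Wendland's C^2 compactly supported function (1 - r)_+^4 (4r + 1)
    return np.where(r < 1, (1 - r)**4 * (4 * r + 1), 0.0)

X = np.linspace(0, 1, 33)                 # data sites
f = np.sin(2 * np.pi * X)
x_eval = np.linspace(0, 1, 201)

results = {}
for eps in (1.0, 16.0):                   # support radius = 1 / eps
    A = wendland_c2(eps * np.abs(X[:, None] - X[None, :]))
    c = np.linalg.solve(A, f)
    err = np.abs(wendland_c2(eps * np.abs(x_eval[:, None] - X[None, :])) @ c
                 - np.sin(2 * np.pi * x_eval)).max()
    results[eps] = (np.linalg.cond(A), err)
```

With ε = 16 the matrix is essentially tridiagonal and very well conditioned, but the interpolant is poor between the nodes; with ε = 1 the matrix is dense and far worse conditioned, but the error is much smaller.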
The third method is stationary multilevel interpolation. With a stationary multiscale algorithm [10], the condition number of the discrete matrix can be kept relatively small, and the computation can be performed in O(N) operations. In this method, the problem (interpolation or a numerical PDE) is first solved on the coarsest level with a compactly supported radial basis function of large support (usually scaling the support size with the fill distance). The residual is then formed and approximated on the next finer level by the same compactly supported radial basis function but with a smaller support. This process is repeated until the finest level is reached, and the final approximation is the sum of all the level interpolants. For interpolation problems, linear convergence has been proved in Sobolev spaces on the sphere in [11] and on bounded domains in [12]. Applications of this algorithm to solving PDEs on spheres were proposed in [13], and on bounded domains in [14-16]. However, a series of global interpolation or approximation problems must be solved on the different levels, although two kinds of local algorithms have been derived recently [17,18].
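The residual-correction loop described above can be sketched in a few lines. The following 1-D illustration (not from the paper; Wendland's C² function, point counts, and the support-to-spacing ratio of 4 are assumptions made for brevity) interpolates the residual on successively finer levels and accumulates the level interpolants:

```python
import numpy as np

def wendland_c2(r):
    # Wendland's C^2 function (1 - r)_+^4 (4r + 1)
    return np.where(r < 1, (1 - r)**4 * (4 * r + 1), 0.0)

def rbf_interpolant(X, y, eps):
    # solve the square system A c = y with A_ij = phi(eps |x_i - x_j|)
    A = wendland_c2(eps * np.abs(X[:, None] - X[None, :]))
    c = np.linalg.solve(A, y)
    return lambda x: wendland_c2(eps * np.abs(np.asarray(x)[:, None] - X[None, :])) @ c

target = lambda x: np.sin(2 * np.pi * x)
x_eval = np.linspace(0, 1, 401)

level_interps = []                        # interpolants s_1, s_2, ...
def approx(x):
    out = np.zeros(np.asarray(x).shape)
    for s in level_interps:
        out = out + s(x)
    return out

errors = []
for level in range(4):
    n = 2**(level + 3) + 1                # 9, 17, 33, 65 points
    X = np.linspace(0, 1, n)
    eps = (n - 1) / 4.0                   # support radius = 4 x grid spacing
    # interpolate the residual of the current sum, then add the correction
    level_interps.append(rbf_interpolant(X, target(X) - approx(X), eps))
    errors.append(np.abs(approx(x_eval) - target(x_eval)).max())
```

Because the support shrinks with the grid spacing, every level solves a banded, well-conditioned system, while the accumulated sum keeps improving the fit.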
This paper considers solving the problem on a single level. On the current scattered data set, the trial function is represented as a linear combination of hierarchical radial basis functions. The method produces a sparse discrete algebraic system, because the hierarchical radial basis functions are derived from CSRBFs with different support radii. Compared with compactly supported radial basis function approximation [7,8] and stationary multilevel approximation [11-18], the new method can solve the problem on a single level with higher accuracy and lower computational cost. The effectiveness of the H-RBFs collocation method is confirmed by several numerical observations.

H-RBFs Trial Spaces
In this section, we build the H-RBFs trial spaces and equip the discrete spaces with appropriate norms. In particular, we prove two norm equivalence theorems and describe a commuting diagram of function spaces. To avoid writing constants repeatedly, we use the notations ≲ and ≅: x ≲ y means x ≤ C₁y, and x ≅ y means C₁x ≤ y ≤ C₂x, where C₁ and C₂ are positive constants.

H-RBFs Trial Spaces
Let X = {x₁, ..., x_N} be a finite point set in Ω ⊆ R^d. We define a commonly used trial discretization parameter, the fill distance

h_{X,Ω} := sup_{x∈Ω} min_{x_j∈X} ‖x − x_j‖₂,

which can be regarded as the radius of the largest empty ball that can be placed among the data sites x_j. To build H-RBFs trial spaces, we need nested point sets X₁ ⊆ X₂ ⊆ ⋯ ⊆ Ω with trial discretization parameters h_j = h_{X_j,Ω}. Let X*_j denote the points newly added at level j, so that X₁ = X*₁ and X_j = X_{j−1} ∪ X*_j for j = 2, .... Given a compactly supported radial function Φ : R^d → R, we can rescale it (by a scaling parameter ε_j > 0, in the translation-invariant kernel-based case) as

Φ_j(x) := ε_j^d Φ(ε_j x).    (1)

To make the support radii of the Φ_j smaller and smaller as new points are added, we select increasing scaling parameters ε₁ ≤ ε₂ ≤ ⋯. Then we can build the H-RBFs trial spaces

V_j := span{Φ_k(· − x) : x ∈ X*_k, k = 1, ..., j},

which differ from the RBFs approximation spaces span{Φ_j(· − x) : x ∈ X_j}: in V_j, each center keeps the scale of the level at which it first appeared.
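The fill distance can be estimated numerically by brute force over a dense sample of Ω. A small sketch (illustrative only; the 5 × 5 grid and the sampling density are arbitrary choices):

```python
import numpy as np

def fill_distance(X, omega_samples):
    # h_{X,Omega} = sup_{x in Omega} min_{x_j in X} ||x - x_j||_2,
    # approximated by maximizing over a dense sample of Omega
    dists = np.linalg.norm(omega_samples[:, None, :] - X[None, :, :], axis=-1)
    return dists.min(axis=1).max()

# example: a 5 x 5 uniform grid on the unit square; the largest empty ball
# is centered in a cell, so h is about sqrt(2)/2 times the grid spacing
g = np.linspace(0, 1, 5)
X = np.column_stack([a.ravel() for a in np.meshgrid(g, g)])
s = np.linspace(0, 1, 101)
omega = np.column_stack([a.ravel() for a in np.meshgrid(s, s)])
h = fill_distance(X, omega)   # close to sqrt(2)/8
```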

Norms of the Discrete Spaces V j
The following is a relation between several function spaces.
When Ω = R^d, we get a characterization of N_Φ(R^d) in terms of Fourier transforms:

N_Φ(R^d) = { g ∈ L₂(R^d) ∩ C(R^d) : ĝ / √Φ̂ ∈ L₂(R^d) },

with norm

‖g‖²_{N_Φ(R^d)} = (2π)^{−d/2} ∫_{R^d} |ĝ(ω)|² / Φ̂(ω) dω.    (4)

In fact, there exist such radial functions whose Fourier transforms decay only algebraically, satisfying

c₁ (1 + ‖ω‖₂²)^{−τ} ≤ Φ̂(ω) ≤ c₂ (1 + ‖ω‖₂²)^{−τ},  ω ∈ R^d.    (5)

Then the scaled radial functions Φ_j(x) = ε_j^d Φ(ε_j x) have Fourier transforms Φ̂_j(ω) = Φ̂(ω/ε_j), satisfying

c₁ (1 + ‖ω/ε_j‖₂²)^{−τ} ≤ Φ̂_j(ω) ≤ c₂ (1 + ‖ω/ε_j‖₂²)^{−τ}.    (6)

To clarify the relation between Sobolev spaces and native spaces, we cite the following extension operator theorems.
Theorem 1 (Sobolev extension operator; see Section 5.17 in [19]). Suppose Ω ⊆ R^d is open and has a Lipschitz boundary, and let τ ≥ 0. Then there exists a bounded linear extension operator E : H^τ(Ω) → H^τ(R^d) such that Eg|_Ω = g and ‖Eg‖_{H^τ(R^d)} ≤ C‖g‖_{H^τ(Ω)} for all g ∈ H^τ(Ω).

Theorem 2 (Native extension operator; see Section 10.7 in [20]). Suppose Φ is a strictly positive definite kernel. Then every g ∈ N_Φ(Ω) has a natural extension Eg ∈ N_Φ(R^d) with Eg|_Ω = g and ‖Eg‖_{N_Φ(R^d)} = ‖g‖_{N_Φ(Ω)}.

The main difference between these two theorems is that the extension of functions from Sobolev spaces imposes a restriction on the region Ω, while extension from native spaces works for more general regions.
Then, we have the first of the following norm equivalence theorems.
Theorem 3. Suppose Φ is a strictly positive definite kernel satisfying (5) with τ > d/2, and Φ_j is defined by (1) with ε_j ≥ 1. Then H^τ(R^d) = N_{Φ_j}(R^d), and for every g ∈ H^τ(R^d),

‖g‖_{N_{Φ_j}(R^d)} ≲ ‖g‖_{H^τ(R^d)} ≲ ε_j^τ ‖g‖_{N_{Φ_j}(R^d)}.

Proof. Using the norm definition of the Sobolev space H^τ(R^d), definition (4), and inequalities (5) and (6), we have

‖g‖²_{N_{Φ_j}(R^d)} ≅ ∫_{R^d} |ĝ(ω)|² (1 + ‖ω/ε_j‖₂²)^τ dω ≤ ∫_{R^d} |ĝ(ω)|² (1 + ‖ω‖₂²)^τ dω ≅ ‖g‖²_{H^τ(R^d)}.

By a similar scaling argument, the lower bound follows from 1 + ‖ω‖₂² ≤ ε_j² (1 + ‖ω/ε_j‖₂²).

A similar norm equivalence theorem with the inverse of ε_j as scaling parameter can be found in [12]. If further assumptions are made on the boundary of Ω, the results of Theorem 3 remain true for N_{Φ_j}(Ω), Ω ⊆ R^d. Then we have another norm equivalence theorem.
Theorem 4. Suppose Ω ⊆ R^d has a Lipschitz boundary, Φ is a strictly positive definite kernel satisfying (5) with τ > d/2, and Φ_j is defined by (1) with ε_j ≥ 1. Then H^τ(Ω) = N_{Φ_j}(Ω) with equivalent norms, and for every g ∈ H^τ(Ω),

‖g‖_{N_{Φ_j}(Ω)} ≲ ‖g‖_{H^τ(Ω)} ≲ ε_j^τ ‖g‖_{N_{Φ_j}(Ω)}.

In addition, every g ∈ H^τ(Ω) has an extension Eg ∈ H^τ(R^d) = N_{Φ_j}(R^d). We can understand the two extension operator theorems (Theorems 1 and 2) and the two norm equivalence theorems (Theorems 3 and 4) through a commuting diagram connecting the function spaces H^τ(Ω), N_{Φ_j}(Ω), H^τ(R^d), and N_{Φ_j}(R^d) by restriction and extension operators.

Interpolation via H-RBFs
In this section, we discuss scattered data interpolation with hierarchical radial basis functions. Given a target function f : Ω(⊆ R^d) → R, we seek an interpolant f_j ∈ V_j of the form

f_j(x) = Σ_{k=1}^{j} Σ_{x_i^k ∈ X*_k} c_i^k Φ_k(x − x_i^k).

The coefficients c_i^k are found by enforcing the interpolation conditions

f_j(y) = f(y),  y ∈ Y_j,    (9)

where Y_j is the testing data on the j-th level. This may lead to an unsymmetric or even nonsquare discrete system; the linear system is solved in the least squares sense (a MATLAB '\' operation in our experiments). Computations were performed on a laptop with a 2.4 GHz Intel Core i7 processor, using MATLAB under the Windows 7 operating system.

First, we generate nested sets of N = 9, 25, 81, 289, 1089, and 4225 Halton points in the interior of the domain Ω = [0, 1]² (blue points in Figure 1). Figure 1 displays the six data sets used in the experiments. We choose all blue points as centers for the basis functions and use Wendland's compactly supported function φ(r) = (1 − r)₊⁶ (35r² + 18r + 3) to interpolate the 2D Franke function. M = 40 × 40 equally spaced evaluation points ξ_i are used to compute the RMS error

RMS = ( (1/M) Σ_{i=1}^{M} (f_j(ξ_i) − f(ξ_i))² )^{1/2}.

We now present four sets of interpolation experiments: nonstationary CSRBFs interpolation (ε fixed), stationary CSRBFs interpolation (starting from an initial ε and doubling its value on every successive level), multilevel interpolation, and H-RBFs interpolation. For the first three methods, the testing data can be taken to be the same as the centers, because the discrete matrices produced by these methods are nonsingular under this choice. However, nonsquare testing is necessary for the H-RBFs interpolation method. To do this simply, we use N + 4(√N − 1) Halton points to test (9) on each level. We list the numerical results (RMS errors, convergence rates, and total CPU time) in Tables 1-4.

In the nonstationary case (Table 1), we have convergence, although it is not obvious what the rate might be. However, the computation requires a lot of time because the matrices become increasingly dense. The stationary CSRBFs interpolation is also numerically stable, but there is essentially no convergence (see Table 2). Table 3 lists the numerical results for multilevel interpolation; the linear convergence behavior for the interpolation of Sobolev functions was proved by Wendland in [12]. A corresponding 3D multilevel experiment can be found in Fasshauer's book [5], and a 2D uniform-data interpolation experiment in [12]. From Table 3, we observe that the convergence seems to cease at a later stage when the initial value ε = 0.5 is given. Compared with nonstationary CSRBFs interpolation, the multilevel method saves much time but with limited accuracy.
The corresponding RMS errors and observed convergence rates for the H-RBFs method are listed in Table 4, from which several observations can be made. The H-RBFs interpolation method is numerically stable and has relatively small errors, even in the ε = 0.5 case. The RMS error on the 4225-point level is remarkably reduced, to the order of 10⁻⁶. Compared with CSRBFs and multilevel interpolation, the present H-RBFs method solves problem (9) with higher accuracy and lower computational cost. The sparsity behavior of the H-RBFs interpolation matrices is displayed in Figure 2.

Collocation via H-RBFs
In this section, we discuss the implementation of the H-RBFs collocation method for linear partial differential equations.

Example 1
We consider the following Poisson problem with Dirichlet boundary conditions:

−∇²u(x, y) = f(x, y), (x, y) ∈ Ω,
u(x, y) = g(x, y), (x, y) ∈ ∂Ω,    (10)

where Ω = (0, 1)² and the data f and g are chosen so that the exact solution of problem (10) is u(x, y) = sin(πx) cos(πy/2).
We use the unsymmetric Kansa method to solve problem (10). Here a nonsquare testing is necessary, because the small trial space must be tested on a fine-grained space discretization according to Schaback's theory [21,22]. As always, the interior collocation data are taken to be the same as the centers. We create an additional 4(√N − 1) equally spaced collocation points for the boundary conditions on each level (red points in Figure 1). That is, on each level the centers include only the blue points, whereas the collocation sites contain the blue points in the interior of the domain and the red points on the boundary. Consequently, the collocation matrix becomes nonsquare, because the test side has more degrees of freedom than the trial side. In our experiments, Wendland's C⁶ function φ(r) = (1 − r)₊⁸ (32r³ + 25r² + 8r + 1) is used to construct the trial spaces. We list the RMS errors and total CPU times (last row of each table) for the CSRBFs collocation method, the multilevel collocation method, and the H-RBFs collocation method in Tables 5-10. Table 5 shows nonstationary CSRBFs collocation with fixed scaling parameters ε = 0.5 and ε = 0.25. Table 6 shows the stationary setting with two different initial parameter values, 0.5 and 0.25, doubling on every successive level. We note that nonconvergence can be observed for the stationary collocation method. Similar observations are reported in detail in Fasshauer's book [5], where the boundary centers were selected to coincide with the boundary collocation points.
Tables 7 and 8 give the numerical results for the multilevel collocation method, from which several observations can be made. The multilevel collocation method is nonconvergent for relatively large initial values such as 0.5, 0.25, and 0.125. With the initial parameter 0.0625, the method converges; however, the convergence seems to cease at a later stage. The multilevel collocation method does exhibit linear convergence if additional centers are placed on the boundary; the corresponding numerical experiments are shown in the book [5]. Generally, the bandwidth of the collocation matrix must be allowed to increase slowly from one level to the next (that is, the support radii must go to zero more slowly than the mesh norms) when solving linear partial differential equations by the multilevel collocation method. These phenomena were noted in Fasshauer's numerical observations [23] and explained theoretically in [16]. From the above experiments (Tables 7 and 8), the conclusion we need to add is that the multilevel collocation method performs poorly if the centers are placed only in the interior of the domain.
In Tables 9 and 10, we list the RMS errors and CPU times for the hierarchical radial basis functions collocation method. We observe that the H-RBFs method has ideal convergence behavior. Even for the relatively large initial parameter ε = 0.5, the convergence rate of the H-RBFs method is close to 2. With the initial parameter ε = 0.0625, the RMS error on the 4225-point level is remarkably reduced, to the order of 10⁻⁸, whereas the CPU time is only about 37.6 s. Compared with the CSRBFs and multilevel collocation methods, the present H-RBFs method solves the model problem with higher accuracy and lower computational cost. The sparsity behavior of the H-RBFs collocation matrices is displayed in Figure 3.

Example 2
In this subsection, we consider the following Helmholtz test problem with Neumann boundary conditions:

−∇²u(x, y) + u(x, y) = cos(πx) cos(πy), (x, y) ∈ Ω = (0, 1)²,
∂u/∂n (x, y) = 0, (x, y) ∈ ∂Ω,    (11)

where n denotes the unit outer normal vector. It is easy to verify that the exact solution of problem (11) is given by u(x, y) = cos(πx) cos(πy) / (2π² + 1).
For this example, we use the same center data, collocation sites, and Wendland C⁶ function as in Example 1. The RMS errors, total CPU times, and observed convergence rates are listed in Tables 11 and 12, and the sparsity behavior of the H-RBFs collocation matrices for this problem is displayed in Figure 4. We observe that the convergence seems to cease at a later stage when the relatively large initial value ε = 0.5 is used. Nevertheless, the H-RBFs method shows ideal convergence behavior and lower computational cost, as in Example 1. We also display plots of the absolute error in Figure 5.

Conclusions
To handle the long-standing trade-off principle in RBF collocation methods, we derived the hierarchical radial basis functions (H-RBFs) collocation method in this paper. Based on nested scattered point sets, the H-RBFs trial spaces are constructed using scaled compactly supported radial basis functions with varying support radii. Several numerical observations were presented. The experiments showed that the H-RBFs collocation method retains high accuracy at a lower computational cost when compared with the existing CSRBFs collocation method and the multilevel RBFs collocation method.
There are many possibilities for enhancement of this method. We close by recalling the underlying framework: under a suitable restriction, a Hilbert space H(Ω) becomes a reproducing kernel Hilbert space H_Φ(Ω) with reproducing kernel Φ : Ω × Ω → R. The native space N_Φ(Ω) of a strictly positive definite kernel Φ is such a reproducing kernel Hilbert space, containing all functions of the form g = Σ_j c_j Φ(·, x_j) together with their limits.