
A Direct Prediction of the Shape Parameter in the Collocation Method of Solving Poisson Equation

Department of Data Science and Big Data Analytics, Providence University, Shalu, Taichung 43310, Taiwan
Mathematics 2022, 10(19), 3583; https://doi.org/10.3390/math10193583
Submission received: 15 August 2022 / Revised: 25 September 2022 / Accepted: 27 September 2022 / Published: 1 October 2022
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)

Abstract
In this paper, we dispense entirely with the traditional trial-and-error algorithms for choosing an acceptable shape parameter c in the multiquadrics $\sqrt{c^2 + \|x\|^2}$ when dealing with differential equations, for example, the Poisson equation, by the RBF collocation method. Instead, we choose c directly by the MN-curve theory and hence avoid the time-consuming steps of solving a linear system required by each trial of the c value in the traditional methods. The quality of the c value thus obtained is supported by the newly developed choice theory of the shape parameter. Experiments demonstrate that the approximation error of the approximate solution to the differential equation is very close to the best approximation error among all possible choices of c.
MSC:
31A30; 35J05; 35J25; 35J67; 35Q40; 35Q70; 65D05; 65L10; 65N35

1. Introduction

The generalized multiquadrics are defined as
$$\phi(x) := (-1)^{\lceil \beta/2 \rceil} \left(c^2 + \|x\|^2\right)^{\beta/2}, \qquad \beta \in \mathbb{R} \setminus 2\mathbb{N}_0,\ c > 0,\ x \in \mathbb{R}^d, \tag{1}$$
where $\lceil \beta/2 \rceil$ denotes the smallest integer greater than or equal to $\beta/2$, and the constant c is called the shape parameter. These are the most popular radial basis functions (RBFs) and are frequently used in the collocation method of solving partial differential equations. In this paper, we let $\beta = 1$. In the collocation method, an approximate solution to a differential equation is of the form
$$\hat{u}(x) := \sum_{i=1}^{N} \lambda_i\, \phi(x - x_i) + p(x), \tag{2}$$
where $p(x) \in P_{m-1}$, the space of polynomials of degree less than or equal to $m-1$ in $\mathbb{R}^d$, and $X = \{x_1, \dots, x_N\}$ is a set of points scattered in the domain. For $m = 0$, $P_{m-1} := \{0\}$. The integer $m := \lceil \beta/2 \rceil$. Since we let $\beta = 1$, here $m = 1$ and $p(x)$ is a constant $\lambda_0$. An unorthodox variant even drops $\lambda_0$ and sets it to 0. The constants $\lambda_i$, $i = 0, \dots, N$, are chosen so that $\hat{u}(x)$ satisfies the differential equation (including the boundary conditions) at the points $x_i$, $i = 1, \dots, N$, called the collocation points.
The function $\hat{u}(x)$ originates from the interpolation theory of radial basis functions, where $\hat{u}(x)$ interpolates a given function $f(x)$ at $x_1, \dots, x_N$. It is required that $\sum_{i=1}^{N} \lambda_i\, p_l(x_i) = 0$ for $l = 1, \dots, Q$, where $\{p_1, \dots, p_Q\}$ is a basis of $P_{m-1}$. Besides this, the only requirement for $X = \{x_1, \dots, x_N\}$ is that it be $P_{m-1}$-unisolvent; namely, if $p \in P_{m-1}$ and $p(x_i) = 0$ for $i = 1, \dots, N$, then p is the zero polynomial. Further details can be found in Section 8.5 of Wendland [1]. This origin has to be mentioned because we need it in the design of $\hat{u}(x)$.
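Since $\hat{u}(x)$ originates from interpolation, the construction is easy to sketch concretely. The following snippet (a hypothetical illustration, not code from the paper) builds the multiquadric interpolant for $\beta = 1$ (so $m = 1$), with the constant term $\lambda_0$ and the side condition $\sum_{i=1}^{N} \lambda_i = 0$:

```python
import numpy as np

def mq_interpolant(xs, fs, c):
    """Multiquadric interpolant with a constant term (beta = 1, m = 1).

    Solves the (N+1)x(N+1) augmented system: the interpolation conditions
    plus the side condition sum_i lambda_i = 0 from the text.
    """
    n = len(xs)
    phi = lambda r: np.sqrt(c**2 + r**2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = phi(xs[:, None] - xs[None, :])
    A[:n, n] = 1.0                     # the constant polynomial lambda_0
    A[n, :n] = 1.0                     # side condition sum_i lambda_i = 0
    coef = np.linalg.solve(A, np.concatenate([fs, [0.0]]))
    lam, lam0 = coef[:n], coef[n]
    return lambda x: phi(np.asarray(x)[:, None] - xs[None, :]) @ lam + lam0

xs = np.linspace(0.0, 1.0, 9)          # any nonempty set is P_0-unisolvent
fs = np.exp(-xs)
u_int = mq_interpolant(xs, fs, c=0.5)  # c = 0.5 keeps float64 conditioning mild
```

At the nodes the interpolant reproduces f up to the conditioning of the augmented matrix; the quality away from the nodes is exactly what the shape parameter c governs.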
This approach to solving differential equations was first introduced by E. Kansa [2,3]. A large body of experiments demonstrates that it works very well. The main advantage of this approach, namely the collocation method (or Kansa method), is that the data points are scattered in the domain without meshes. Moreover, its high accuracy is also very attractive. However, the choice of the shape parameter c contained in the multiquadrics is a big problem. Experts in this field only know that it is very influential, but do not know how to choose it. G. Fasshauer [4] pointed out that only trial-and-error algorithms were available. Some people even give up using multiquadrics to solve differential equations because of this problem, which greatly weakens the power of the collocation method. The purpose of this paper is to handle this problem.

2. Materials and Methods

2.1. Sobolev Error Estimates

We need a space that plays an intermediate role in our approximation work.
Definition 1.
For any positive number γ,
$$B_\gamma := \left\{ f \in L^2(\mathbb{R}^d) : \hat{f}(\xi) = 0 \text{ if } \|\xi\| > \gamma \right\},$$
where $\hat{f}$ denotes the Fourier transform of f. For each $f \in B_\gamma$, its norm is
$$\|f\|_{B_\gamma} := \left( \int |\hat{f}(\xi)|^2\, d\xi \right)^{1/2}.$$
Here, $f \in L^2(\mathbb{R}^d)$ means that $|f(x)|^2$ is integrable. Our main interest is in the Sobolev space $W_2^\tau(\mathbb{R}^d)$ because it contains the solutions of many important differential equations. The Sobolev space is defined as follows.
Definition 2.
For any positive integer τ,
$$W_2^\tau(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) : D^k f \in L^2(\mathbb{R}^d) \text{ for } |k| \le \tau \right\},$$
where $k = (k_1, \dots, k_d)$ and $|k| = \sum_{i=1}^{d} k_i$. The Sobolev norm is
$$\|f\|_{W_2^\tau(\mathbb{R}^d)} := \left( \sum_{0 \le |k| \le \tau} \int |D^k f(x)|^2\, dx \right)^{1/2}.$$
In the preceding definition, the derivatives are distributional derivatives, which are more general than the classical ones. In the general theory, the vector k can even have negative or noninteger coordinates. Further details can be found in Yosida [5] and other textbooks on functional analysis.
Before introducing Sobolev error estimates, some necessary ingredients should be defined. Suppose $X = \{x_1, \dots, x_N\} \subseteq \Omega$ is a finite subset of a bounded set Ω in $\mathbb{R}^d$. Then the separation radius is defined by
$$q_X := \frac{1}{2} \min_{i \ne j} \|x_i - x_j\|.$$
Then, we have the following core theorem, which is just Theorem 3.4 of Narcowich et al. [6].
Theorem 1.
Let $\beta, t \in \mathbb{R}$ satisfy $\beta > d/2$ and $t \ge 0$. If $f \in W_2^{\beta+t}(\mathbb{R}^d)$, then there exists $f_\gamma \in B_\gamma$ such that $f_\gamma|_X = f|_X$ and
$$\|f - f_\gamma\|_{W_2^\beta(\mathbb{R}^d)} \le 5\, \kappa^t\, q_X^t\, \|f\|_{W_2^{\beta+t}(\mathbb{R}^d)}, \tag{3}$$
with $\gamma = \kappa / q_X$, where $\kappa \ge 1$ depends only on β and d.
This theorem clearly shows that any function in the Sobolev space, possibly a solution to an important differential equation, can be interpolated by a $B_\gamma$ function with a good error bound. In the next subsection, we will demonstrate that any $B_\gamma$ function can be interpolated by a function of the form (2), also with a good error bound.

2.2. MN-Curve Theory

We need some basic definitions.
Definition 3.
For any positive number σ,
$$E_\sigma := \left\{ f \in L^2(\mathbb{R}^d) : \int |\hat{f}(\xi)|^2\, e^{\|\xi\|^2/\sigma}\, d\xi < \infty \right\},$$
where $\hat{f}$ denotes the Fourier transform of f. For each $f \in E_\sigma$, its norm is
$$\|f\|_{E_\sigma} := \left( \int |\hat{f}(\xi)|^2\, e^{\|\xi\|^2/\sigma}\, d\xi \right)^{1/2}.$$
Obviously, $B_\gamma \subseteq E_\sigma$ for any $\gamma, \sigma > 0$. We are going to demonstrate how $E_\sigma$ functions can be approximated by functions of the form (2).
For any set $\Omega \subseteq \mathbb{R}^d$ and any set $X = \{x_1, \dots, x_N\}$ of sample points contained in Ω, the fill distance is defined by
$$\delta(\Omega, X) := \sup_{x \in \Omega}\ \min_{i=1,\dots,N} \|x - x_i\|,$$
abbreviated as δ, which measures the spacing of the sample points in Ω. The smaller δ is, the more sample points are needed. In this paper, Ω denotes the function domain.
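Both $q_X$ and $\delta(\Omega, X)$ are easy to compute for a given point set. The sketch below (an illustration, not from the paper) approximates the supremum over Ω by a fine grid of probe points:

```python
import numpy as np

def separation_radius(X):
    """q_X = (1/2) min_{i != j} ||x_i - x_j|| for an (N, d) array of points."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # ignore the zero diagonal
    return 0.5 * D.min()

def fill_distance(probes, X):
    """delta(Omega, X), with the sup over Omega taken over a grid of probes."""
    D = np.linalg.norm(probes[:, None, :] - X[None, :, :], axis=-1)
    return D.min(axis=1).max()         # sup_x min_i ||x - x_i||

X = np.array([[0.0], [0.5], [1.0]])               # three points in [0, 1]
probes = np.linspace(0.0, 1.0, 1001)[:, None]     # discretized Omega
q_X = separation_radius(X)                        # 0.25
delta = fill_distance(probes, X)                  # 0.25 for this X
```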
Definition 4.
Let d and β be as in (1). The numbers ρ and $\Delta_0$ are defined as follows.
(a) 
Suppose $\beta < d - 3$. Let $s = \lceil (d - \beta - 3)/2 \rceil$. Then
(i) 
if $\beta < 0$,
$$\rho = \frac{3+s}{3} \quad \text{and} \quad \Delta_0 = \frac{(2+s)(1+s)}{3\rho^2};$$
(ii) 
if $\beta > 0$,
$$\rho = 1 + \frac{s}{2m+3} \quad \text{and} \quad \Delta_0 = \frac{(2m+2+s)(2m+1+s)}{(2m+3)\,\rho^{2m+2}}, \quad \text{where } m = \lceil \beta/2 \rceil.$$
(b) 
Suppose $d - 3 \le \beta < d - 1$. Then $\rho = 1$ and $\Delta_0 = 1$.
(c) 
Suppose $\beta \ge d - 1$. Let $s = \lceil (d - \beta - 3)/2 \rceil$. Then
$$\rho = 1 \quad \text{and} \quad \Delta_0 = \frac{1}{(2m+2)(2m+1)\cdots(2m+s+3)}, \quad \text{where } m = \lceil \beta/2 \rceil.$$
For any $f \in E_\sigma$ and $x \in \Omega$, the upper bound of $|f(x) - \hat{u}(x)|$ is a very complicated expression involving both ρ and $\Delta_0$, as can be found in Luh [7]. A modified theory for a purely scattered data setting can be found in Luh [8]. In this paper, we only need to extract its essential part,
$$|f(x) - \hat{u}(x)| \le \mathrm{MN}(c)\, \|f\|_{E_\sigma}, \tag{4}$$
where c is just the shape parameter defined in (1) and $\mathrm{MN}(c)$ is the MN function to be defined below, as in [8].
In the MN-curve theory, we require that the diameter r of the function domain Ω satisfy $b_0/2 \le r \le b_0$, where $b_0$ is a parameter determined by us. Once $b_0$ is fixed, there are three cases for the definition of $\mathrm{MN}(c)$.
Case 1: $\beta < 0$, $|d + \beta| \ge 1$ and $d + \beta + 1 \ne 0$. Let $f \in E_\sigma$ and $\phi(x)$ be as in (1). For any fixed fill distance δ satisfying $0 < \delta \le b_0/2$, the optimal value of c in $[24\rho\delta, \infty)$ is the number minimizing
$$\mathrm{MN}(c) := \begin{cases} \sqrt{8\rho}\; c^{(\beta-d-1)/4} \left[ (\xi^*)^{(d+\beta+1)/2}\, e^{\,c\xi^* - (\xi^*)^2/\sigma} \right]^{1/2} \left( \tfrac{2}{3} \right)^{c/(24\rho\delta)} & \text{if } c \in [24\rho\delta,\ 12\sqrt{b_0\rho}), \\[4pt] 2\sqrt{3 b_0}\; c^{(\beta-d+1)/4} \left[ (\xi^*)^{(d+\beta+1)/2}\, e^{\,c\xi^* - (\xi^*)^2/\sigma} \right]^{1/2} \left( \tfrac{2}{3} \right)^{b_0/(2\delta)} & \text{if } c \in [12\sqrt{b_0\rho},\ \infty), \end{cases}$$
where
$$\xi^* = \frac{c\sigma + \sqrt{c^2\sigma^2 + 4\sigma(d+\beta+1)}}{4}.$$
Remark 1.
Note that $\lim_{c \to 0^+} \mathrm{MN}(c) = \infty$ and $\lim_{c \to \infty} \mathrm{MN}(c) = \infty$.
Case 2: $\beta = -1$ and $d = 1$. Let $f \in E_\sigma$ and $\phi(x)$ be as in (1). For any fill distance δ satisfying $0 < \delta \le b_0/2$, the optimal value of c in $[24\rho\delta, \infty)$ is the number minimizing
$$\mathrm{MN}(c) := \begin{cases} \sqrt{8\rho}\; c^{(\beta-1)/2} \left[ \frac{1}{\ln 2} + \frac{2}{3} M(c) \right]^{1/2} \left( \tfrac{2}{3} \right)^{c/(24\rho\delta)} & \text{if } c \in [24\rho\delta,\ 12\sqrt{b_0\rho}), \\[4pt] 2\sqrt{3 b_0}\; c^{\beta/2} \left[ \frac{1}{\ln 2} + \frac{2}{3} M(c) \right]^{1/2} \left( \tfrac{2}{3} \right)^{b_0/(2\delta)} & \text{if } c \in [12\sqrt{b_0\rho},\ \infty), \end{cases}$$
where
$$M(c) := \begin{cases} e^{\,1 - 1/(c^2\sigma)} & \text{if } 0 < c \le \dfrac{2}{\sqrt{3\sigma}}, \\[6pt] g\!\left( \dfrac{c\sigma + \sqrt{c^2\sigma^2 + 4\sigma}}{4} \right) & \text{if } c > \dfrac{2}{\sqrt{3\sigma}}, \end{cases}$$
g being defined by $g(\xi) := \sqrt{c\xi}\; e^{\,c\xi - \xi^2/\sigma}$.
Remark 2.
As in Case 1, we have $\lim_{c \to 0^+} \mathrm{MN}(c) = \infty$ and $\lim_{c \to \infty} \mathrm{MN}(c) = \infty$.
Case 3: $\beta > 0$ and $d \ge 1$. Let $f \in E_\sigma$ and $\phi(x)$ be as in (1). For any fixed fill distance δ satisfying $0 < \delta \le b_0/2$, the optimal value of c in $[24\rho\delta, \infty)$ is the number minimizing
$$\mathrm{MN}(c) := \begin{cases} \sqrt{8\rho}\; c^{(\beta-d-1)/4} \left[ (\xi^*)^{(1+\beta+d)/2}\, e^{\,c\xi^*}\, e^{-(\xi^*)^2/\sigma} \right]^{1/2} \left( \tfrac{2}{3} \right)^{c/(24\rho\delta)} & \text{if } c \in [24\rho\delta,\ 12\sqrt{b_0\rho}), \\[4pt] 2\sqrt{3 b_0}\; c^{(1+\beta-d)/4} \left[ (\xi^*)^{(1+\beta+d)/2}\, e^{\,c\xi^*}\, e^{-(\xi^*)^2/\sigma} \right]^{1/2} \left( \tfrac{2}{3} \right)^{b_0/(2\delta)} & \text{if } c \in [12\sqrt{b_0\rho},\ \infty), \end{cases}$$
where
$$\xi^* = \frac{c\sigma + \sqrt{c^2\sigma^2 + 4\sigma(1+\beta+d)}}{4}.$$
Remark 3.
(a) If $\beta - d - 1 > 0$, $\lim_{c \to 0^+} \mathrm{MN}(c) = 0$. (b) If $\beta - d - 1 < 0$, $\lim_{c \to 0^+} \mathrm{MN}(c) = \infty$. (c) If $\beta - d - 1 = 0$, $\lim_{c \to 0^+} \mathrm{MN}(c)$ is a finite positive number. (d) $\lim_{c \to \infty} \mathrm{MN}(c) = \infty$. Practically, we never let $\beta - d - 1 \ge 0$. Hence (a) and (c) will not happen.
In all three cases, the requirement $c \ge 24\rho\delta$ is harmless: the value $24\rho\delta$ is usually quite small, and extensive experiments demonstrate that the optimal choice of c never lies in the interval $(0, 24\rho\delta)$.
The function $\hat{u}(x)$ in (4) in fact interpolates $f(x)$ at the sample points. Hence, strictly speaking, $\mathrm{MN}(c)$ can only be used to measure the quality of function interpolation. Nevertheless, collocation is in spirit a kind of interpolation, not just approximation: we require that $\hat{u}(x)$ satisfy the given differential equation at the sample points. Theoretically, the value of c minimizing $\mathrm{MN}(c)$ should therefore also be the optimal choice of c in the definition of $\hat{u}(x)$ when dealing with differential equations. Another point is that (4) requires $f \in E_\sigma$, which is mathematically just a subset of the Sobolev space $W_2^\tau(\mathbb{R}^d)$. Fortunately, Formula (3) offers a bridge for approximating $f \in W_2^\tau(\mathbb{R}^d)$ with $\hat{u}(x)$: for a fixed set $X = \{x_1, \dots, x_N\}$ of sample points in $\Omega \subseteq \mathbb{R}^d$, any $f \in W_2^\tau(\mathbb{R}^d)$ can be interpolated by an $f_\gamma \in B_\gamma \subseteq E_\sigma$, and $f_\gamma$ can in turn be interpolated by $\hat{u}$. Both interpolations happen on X, and the influence of c enters only in the latter step. Hence, we can directly predict the optimal value of c from the curve of $\mathrm{MN}(c)$. No search is needed.
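The direct prediction amounts to a one-dimensional minimization of $\mathrm{MN}(c)$, with no linear solves at all. The sketch below (an illustration under stated assumptions, not code from the paper) implements the Case 3 formula as reconstructed above, with $\beta = 1$, $d = 2$, $\sigma = 10^{-1}$, $\delta = 0.01$, $b_0 = 2$, $\rho = 1$; it works in log10 because the factor $(2/3)^{b_0/(2\delta)}$ already underflows double precision for realistic fill distances, and the breakpoint $12\sqrt{b_0\rho}$ and the constants should be checked against [7,8]:

```python
import math

def log10_mn_case3(c, beta=1.0, d=2, sigma=0.1, delta=0.01, b0=2.0, rho=1.0):
    """log10 of the Case 3 MN(c), as reconstructed above (an assumption)."""
    K = 1 + beta + d
    # xi* maximizes xi^(K/2) * exp(c*xi - xi^2/sigma)
    xi = (c * sigma + math.sqrt(c**2 * sigma**2 + 4 * sigma * K)) / 4.0
    log_bracket = 0.5 * ((K / 2) * math.log10(xi)
                         + (c * xi - xi**2 / sigma) / math.log(10))
    if c < 12.0 * math.sqrt(b0 * rho):             # first branch
        return (0.5 * math.log10(8 * rho)
                + ((beta - d - 1) / 4) * math.log10(c)
                + log_bracket
                + (c / (24 * rho * delta)) * math.log10(2 / 3))
    return (math.log10(2 * math.sqrt(3 * b0))      # second branch
            + ((1 + beta - d) / 4) * math.log10(c)
            + log_bracket
            + (b0 / (2 * delta)) * math.log10(2 / 3))

# grid search over [24*rho*delta, 60]; no linear system is ever solved
cs = [0.24 + 0.01 * k for k in range(5977)]
c_opt = min(cs, key=log10_mn_case3)
# for this small delta the minimizer sits near the breakpoint 12*sqrt(b0*rho)
```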

2.3. Problem Setting

We try to handle Poisson equations. A standard 3D Poisson equation is of the form
$$\begin{cases} u_{xx}(x,y,z) + u_{yy}(x,y,z) + u_{zz}(x,y,z) = f(x,y,z) & \text{for } (x,y,z) \in \Omega \setminus \partial\Omega, \\ u(x,y,z) = g(x,y,z) & \text{for } (x,y,z) \in \partial\Omega, \end{cases} \tag{5}$$
where Ω is the domain with boundary $\partial\Omega$, and f, g are given functions. The natural extension to d dimensions is obtained by replacing $(x,y,z)$ with $(x_1, \dots, x_d)$ and letting $\Omega \subseteq \mathbb{R}^d$.
Our goal is to find an approximate solution $\hat{u}(x)$ of the form (2). Our approach is to find real numbers $\lambda_0, \dots, \lambda_N$ such that $\hat{u}(x) := \sum_{i=1}^{N} \lambda_i\, \phi(x - x_i) + \lambda_0$ satisfies (5) at the sample points $x_1, \dots, x_N \in \mathbb{R}^d$, called collocation points. The constant c appearing in φ is determined by the MN-curve theory. Of course, this process involves solving a system of linear equations with unknowns $\lambda_0, \dots, \lambda_N$, obtained by requiring that
$$\begin{cases} \displaystyle\sum_{j=1}^{N} \lambda_j\, L[\phi](x_i - x_j) = f(x_i) & \text{for } i = 1, \dots, N_{int}, \\ \displaystyle\sum_{j=1}^{N} \lambda_j = 0, & \\ \displaystyle\sum_{j=1}^{N} \lambda_j\, \phi(x_i - x_j) + \lambda_0 = g(x_i) & \text{for } i = N_{int}+1, \dots, N, \end{cases} \tag{6}$$
where L denotes the differential operator of the Poisson equation. The sample points $x_1, \dots, x_{N_{int}}$ are located in the interior of the domain cube, and $x_{N_{int}+1}, \dots, x_N$ on the boundary. The requirement $\sum_{j=1}^{N} \lambda_j = 0$ results from the interpolation theory, as explained in the introduction. We thus have an $(N+1) \times (N+1)$ system of linear equations.
Although its coefficient matrix is not sparse, the system can be solved efficiently because its scale is not large, as long as the shape parameter c is well chosen. For a long time, the RBF collocation method has been severely criticized for the full matrix induced by the multiquadrics. Fortunately, we now know that the number of sample points needed can be greatly reduced by choosing the shape parameter c according to our theory. This is exciting. As for the collocation points, they are scattered in the domain and on the boundary without meshes. Theoretically, if the exact solution $u(x)$ lies in the Sobolev space, the approximate solution $\hat{u}(x)$ thus found should be quite good. Experiments demonstrate that the approximation error $|u(x) - \hat{u}(x)|$ is indeed very small.
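To make the assembly of system (6) concrete, here is a sketch (hypothetical code, not from the paper) for the 1D problem of Section 3.1 in plain double precision; a deliberately small shape parameter ($c = 0.2$) keeps the conditioning mild, whereas the paper's $c = 12$ requires extended precision:

```python
import numpy as np

def solve_poisson_1d(x_int, x_bdy, f, g, c):
    """Collocation for u'' = f on (0,1) with u = g on the boundary, per (6)."""
    xs = np.concatenate([x_int, x_bdy])
    n, n_int = len(xs), len(x_int)
    phi = lambda r: np.sqrt(c**2 + r**2)
    phi_pp = lambda r: c**2 / (c**2 + r**2) ** 1.5     # L[phi] = phi''
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[:n_int, :n] = phi_pp(x_int[:, None] - xs[None, :])   # interior rows
    rhs[:n_int] = f(x_int)                                 # (L kills lambda_0)
    A[n_int:n, :n] = phi(x_bdy[:, None] - xs[None, :])     # boundary rows
    A[n_int:n, n] = 1.0
    rhs[n_int:n] = g(x_bdy)
    A[n, :n] = 1.0                                         # sum_j lambda_j = 0
    coef = np.linalg.solve(A, rhs)
    lam, lam0 = coef[:n], coef[n]
    return lambda x: phi(np.asarray(x)[:, None] - xs[None, :]) @ lam + lam0

x_int = np.linspace(0.05, 0.95, 9)
x_bdy = np.array([0.0, 1.0])
u_hat = solve_poisson_1d(x_int, x_bdy,
                         lambda x: np.exp(-x),   # f: right-hand side
                         lambda x: np.exp(-x),   # g: boundary values
                         c=0.2)
```

The returned $\hat{u}$ can then be compared with the exact solution $e^{-x}$ at test points, which is exactly how the RMS values in the tables are produced.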

3. Results

The crux of this approach is the choice of the parameter σ in the definition of $\mathrm{MN}(c)$. Once $f_\gamma \in B_\gamma$ in (3) is given, in order to measure $|f_\gamma(x) - \hat{u}(x)|$, there are infinitely many possible choices of σ for the implementation of inequality (4). If σ is too large, the value of $\mathrm{MN}(c)$ will be very large, making the MN curve meaningless. If σ is very small, the value of $\mathrm{MN}(c)$ is usually very small, but $\|f\|_{E_\sigma}$ becomes extremely large, also making (4) meaningless. Fortunately, after analyzing the MN curves, we can always find a suitable σ without much effort, as demonstrated in our experiments.

3.1. 1D Model

Although our main interest is in two- and three-dimensional problems, in order to help the reader use our approach, we still include the 1D problem.
Let $u(x) = e^{-x}$ with domain $\Omega = \{x : 0 \le x \le 1\}$ be the test function. We are going to solve
$$u_{xx}(x) = e^{-x}$$
for $0 < x < 1$ with $u(0) = 1$ and $u(1) = e^{-1}$.
The approximate solution will be of the form $\hat{u}(x) = \sum_{i=1}^{N} \lambda_i\, \phi(x - x_i) + \lambda_0$, where $\phi(x) = \sqrt{c^2 + x^2}$ and $\lambda_i$, $i = 0, \dots, N$, are constants to be determined. The sample points $x_i$, $i = 1, \dots, N$, are randomly generated and scattered in the unit interval, except that two of them are 0 and 1, respectively. The choice of c will be made according to Case 3 of the MN curves.
The MN function value greatly depends on the parameter σ in the definition of $\mathrm{MN}(c)$. We present three curves for $\sigma = 10^{-1}$ and three for $\sigma = 10^{-5}$ in Figure 1, Figure 2 and Figure 3 and Figure 4, Figure 5 and Figure 6, respectively. In the figures, δ denotes the fill distance, $b_0$ denotes the domain diameter, d is the dimension, and β is the parameter in the definition of the multiquadrics (1).
Note that in these figures, $c = 12$ always corresponds to the minimal value of $\mathrm{MN}(c)$. This suggests that one should choose $c = 12$ in our approximate solution to the Poisson equation. We present the experimental results in Table 1. In the table, $N_d$ denotes the number of data points used, and $N_{int}$, $N_{bdy}$ denote the numbers of interior and boundary data points, respectively. We use $N_t$ test points to measure the quality of the approximation by
$$\mathrm{RMS} := \left( \frac{1}{N_t} \sum_{i=1}^{N_t} |u(x_i) - \hat{u}(x_i)|^2 \right)^{1/2}.$$
As in our previous papers, $b_0$ denotes the diameter of the domain, and COND denotes the condition number of the linear system (6).
Table 1. 1D experiment, c = 12, b_0 = 1, N_t = 501.
N_d: 7, 12, 22, 42, 82
N_int: 5, 10, 20, 40, 80
N_bdy: 2, 2, 2, 2, 2
RMS: 3.2·10^{-6}, 2.5·10^{-11}, 2.7·10^{-24}, 4.9·10^{-52}, 3.2·10^{-127}
COND: 1.3·10^{23}, 1.4·10^{39}, 6.8·10^{69}, 3.3·10^{133}, 2.0·10^{281}
In order to cope with the problem of ill-conditioning, enough effective digits were kept in each step of the calculations. For example, for $N_d = 82$, we adopted 300 effective digits to the right of the decimal point to handle the huge corresponding condition number. Even so, it took only one second for the computer to solve the linear system. All this was achieved by virtue of the computer software Mathematica.
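The extended-precision strategy is not tied to Mathematica. As a sketch (hypothetical code, not from the paper), Python's standard-library decimal module can solve a small, badly conditioned multiquadric system with $c = 12$ via textbook Gaussian elimination with partial pivoting:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80                 # keep ~80 significant digits throughout

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting over Decimal entries."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= r * M[k][j]
    x = [Decimal(0)] * n
    for i in range(n - 1, -1, -1):                         # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

c = Decimal(12)
xs = [Decimal(k) / 3 for k in range(4)]         # 4 nodes in [0, 1]
phi = lambda r: (c * c + r * r).sqrt()
A = [[phi(xi - xj) for xj in xs] for xi in xs]  # nearly identical rows: c >> 1
f = [(-xi).exp() for xi in xs]                  # interpolate exp(-x)
lam = gauss_solve(A, f)
resid = max(abs(sum(A[i][j] * lam[j] for j in range(4)) - f[i])
            for i in range(4))
```

With 80 digits the residual lands far below anything float64 could deliver on this matrix; the 200- or 300-digit runs reported in the text are the same idea at larger scale.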
If c is chosen arbitrarily, say $c = 1$, then the RMS will be $4.3 \cdot 10^{-30}$ for $N_d = 82$. As for the comparison with other choices of c, we do not present it in this subsection, for three reasons. Firstly, the results for $c = 12$ are already quite good. Secondly, in our theory, the prediction is reliable only when enough data points are used, and then the condition number becomes extremely large for $d = 1$ and $b_0 = 1$. For example, experimentally, we found that the optimal value of c is 1600 for $N_d = 162$, when COND $= 1.5 \cdot 10^{1211}$ and RMS $= 7.96 \cdot 10^{-359}$. In order to obtain the predicted value $c = 12$, we would have to increase the number of data points so that $N_d \gg 162$, as in the experiments for interpolation. Then the condition number would become much larger than $1.5 \cdot 10^{1211}$, although the RMS would be smaller than $7.96 \cdot 10^{-359}$. In practice, we do not need such accuracy. Thirdly, such comparisons can be perfectly handled in the 2D and 3D experiments.

3.2. 2D Model

Our test function is now $u(x,y) = e^{-x-y}$ on $\Omega = \{(x,y) : 0 \le x \le 1,\ 0 \le y \le 1\}$. This function obviously lies in the Sobolev space. The Poisson equation is then
$$\begin{cases} u_{xx}(x,y) + u_{yy}(x,y) = 2 e^{-x-y} & \text{for } (x,y) \in \Omega \setminus \partial\Omega, \\ u(x,y) = g(x,y) & \text{for } (x,y) \in \partial\Omega, \end{cases}$$
where $g(0,y) = e^{-y}$, $g(x,0) = e^{-x}$, $g(1,y) = e^{-1-y}$, and $g(x,1) = e^{-x-1}$ for $0 \le x \le 1$ and $0 \le y \le 1$.
We are trying to find an approximate solution $\hat{u}(x,y) = \sum_{i=1}^{N} \lambda_i\, \phi(x - x_i, y - y_i) + \lambda_0$, where $\phi(x,y) = \sqrt{c^2 + x^2 + y^2}$ and $\lambda_i$, $i = 0, \dots, N$, are constants to be determined. The sample points $(x_i, y_i)$, $i = 1, \dots, N$, are scattered in the domain Ω. Our focus is the choice of c.
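The 2D collocation operator needs $L[\phi]$, the Laplacian of the multiquadric. A short computation (ours; the paper does not display it) gives $\Delta\phi = \left(d\,c^2 + (d-1)\|x\|^2\right)/\left(c^2 + \|x\|^2\right)^{3/2}$ for $\phi(x) = \sqrt{c^2 + \|x\|^2}$ in d dimensions, which the sketch below checks against central finite differences:

```python
import numpy as np

def mq_laplacian(x, c):
    """Closed-form Laplacian of phi = sqrt(c^2 + ||x||^2) in d = len(x) dims."""
    r2 = float(np.dot(x, x))
    d = len(x)
    return (d * c**2 + (d - 1) * r2) / (c**2 + r2) ** 1.5

def fd_laplacian(x, c, h=1e-4):
    """Central second differences, summed over the coordinates."""
    phi = lambda y: float(np.sqrt(c**2 + np.dot(y, y)))
    out = 0.0
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        out += (phi(x + e) - 2.0 * phi(x) + phi(x - e)) / h**2
    return out

x = np.array([0.3, -0.2])
gap = abs(mq_laplacian(x, 1.0) - fd_laplacian(x, 1.0))  # finite-difference size
```

The same closed form with $d = 3$ serves the 3D experiment in the next subsection.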
Now, let us analyze the MN curves. We first let $\sigma = 10^{-1}$ and list five MN curves with different fill distances δ.
In Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, it is clearly observed that, as the fill distances decrease, the lowest points of the curves move to a fixed value $c = 17$, which is just $12\sqrt{b_0 \rho} \approx 16.97$ in Case 3 of the definition of $\mathrm{MN}(c)$. Now, we investigate $\sigma = 10^{-5}$.
Figure 12, Figure 13 and Figure 14 also demonstrate that the optimal choice of c is 17. In fact, if we test other σ's, the same result appears. In order to save space, we do not list them.
All the MN curves strongly suggest that one should choose $c = 17$ whenever enough data points are used. Thus, we investigate its quality and present the results in Table 2. Here, $N_{bdy}$ and $N_{int}$ denote the numbers of data points located on the boundary and in the interior of the domain, respectively. Then, $N_d$ and $N_t$ denote the total numbers of data points and test points, respectively. The root-mean-square error, used to measure the approximation error, is defined by
$$\mathrm{RMS} := \left( \frac{1}{N_t} \sum_{i=1}^{N_t} |u(x_i, y_i) - \hat{u}(x_i, y_i)|^2 \right)^{1/2}.$$
As before, $b_0$ denotes the diameter of the domain Ω and COND is the condition number of the linear system involved. The most time-consuming step, solving the linear system, took only two seconds for 341 data points. Hence, we did not put the computing time into the table.
Figure 7. MN curve for δ = 0.1, where d = 2, β = 1, b_0 = 2 and σ = 10^{-1}.
Figure 8. MN curve for δ = 0.05, where d = 2, β = 1, b_0 = 2 and σ = 10^{-1}.
Figure 9. MN curve for δ = 0.01, where d = 2, β = 1, b_0 = 2 and σ = 10^{-1}.
Figure 10. MN curve for δ = 0.005, where d = 2, β = 1, b_0 = 2 and σ = 10^{-1}.
Figure 11. MN curve for δ = 0.001, where d = 2, β = 1, b_0 = 2 and σ = 10^{-1}.
Figure 12. MN curve for δ = 0.1, where d = 2, β = 1, b_0 = 2 and σ = 10^{-5}.
Figure 13. MN curve for δ = 0.01, where d = 2, β = 1, b_0 = 2 and σ = 10^{-5}.
Figure 14. MN curve for δ = 0.005, where d = 2, β = 1, b_0 = 2 and σ = 10^{-5}.
Table 2. 2D experiment, c = 17, b_0 = 2, N_t = 961.
N_d: 46, 91, 141, 191, 341
N_int: 5, 50, 100, 150, 300
N_bdy: 41, 41, 41, 41, 41
RMS: 2.3·10^{-5}, 2.4·10^{-12}, 2.4·10^{-16}, 7.2·10^{-20}, 9.2·10^{-25}
COND: 1.0·10^{40}, 6.5·10^{44}, 5.4·10^{53}, 1.3·10^{64}, 4.8·10^{87}
For simplicity, the test points were evenly spaced in the domain. The interior data points were purely scattered and generated randomly by Mathematica, but the boundary data points were evenly spaced, just for ease of programming. The problem of ill-conditioning was overcome by keeping enough effective digits in each step of the computation. For example, when $N_d = 341$, we adopted 200 digits and successfully defeated the large condition number $4.8 \cdot 10^{87}$. In fact, 110 digits are already good enough and lead to the same result. Even with 200 digits, it took only two seconds to solve the linear system. All this was achieved with the help of the arbitrary-precision computer software Mathematica.
Although c = 17 leads to satisfactory results, a comparison with other choices of c is also needed. Table 3 offers such a comparison. We fix N i n t = 300 and N b d y = 41 for all choices of c.
In Table 3, it is clear that our theoretically predicted optimal value c = 17 coincides exactly with the experimentally optimal value. Moreover, as depicted by the MN curves in Figure 12, Figure 13 and Figure 14, the approximation errors become large very slowly for c > 17 . This is fully reflected by our experimental results.
The three-dimensional experiment is more challenging and is expected to be much more time-consuming. Fortunately, it takes only five minutes to compute the linear system, as we shall see in the next subsection.

3.3. 3D Model

The test function is now $u(x,y,z) = e^{-x-y-z}$ on $\Omega = \{(x,y,z) : 0 \le x \le 1,\ 0 \le y \le 1,\ 0 \le z \le 1\}$. It lies in the Sobolev space. The Poisson equation is
$$\begin{cases} u_{xx}(x,y,z) + u_{yy}(x,y,z) + u_{zz}(x,y,z) = 3 e^{-x-y-z} & \text{for } (x,y,z) \in \Omega \setminus \partial\Omega, \\ u(x,y,z) = g(x,y,z) & \text{for } (x,y,z) \in \partial\Omega, \end{cases}$$
where $g(x,y,z)$ is just the restriction of $u(x,y,z)$ to the six faces of the domain cube. In other words, the Dirichlet condition is adopted.
The approximate solution will be of the form $\hat{u}(x,y,z) = \sum_{i=1}^{N} \lambda_i\, \phi(x - x_i, y - y_i, z - z_i) + \lambda_0$, where $\phi(x,y,z) = \sqrt{c^2 + x^2 + y^2 + z^2}$ and $\lambda_i$, $i = 0, \dots, N$, are constants to be determined. The sample points $(x_i, y_i, z_i)$, $i = 1, \dots, N$, are scattered in the interior of the domain cube and evenly spaced on the boundary. We are going to find c such that the approximation error $|u(x,y,z) - \hat{u}(x,y,z)|$ is as small as possible.
In order to find a suitable c, one has to analyze the MN curves first. Again, Case 3 of $\mathrm{MN}(c)$ applies. However, for 3D MN functions, different σ's indicate different optimal values of c. For $\sigma = 10^{-1}$, the optimal values of c are shown in Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19. These figures show that, as long as δ is small enough, the optimal value of c is 20.7846, which is just $12\sqrt{b_0 \rho} = 12\sqrt{3}$ in the definition of $\mathrm{MN}(c)$.
Now, we test $\sigma = 10^{-5}$. The MN curves are presented in Figure 20, Figure 21 and Figure 22. They demonstrate that one should choose $c = 129$.
The MN curves for $\sigma = 10^{-10}$ are as in Figure 23, Figure 24 and Figure 25. They indicate that one should let $c = 40{,}800$.
The three different optimal values of c are all logically correct. However, when σ is very small, $\|f\|_{E_\sigma}$ in (4) becomes extremely large, making (4) meaningless. Hence, we should choose $c = 20.7846$ according to Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19, where $\sigma = 10^{-1}$ is larger and the $\mathrm{MN}(c)$ values are reasonably small.
For c = 20.7846 , we compare different numbers of data points. The results are presented in Table 4.
Figure 15. MN curve for δ = 0.05, where d = 3, β = 1, b_0 = 3 and σ = 10^{-1}.
Figure 16. MN curve for δ = 0.03, where d = 3, β = 1, b_0 = 3 and σ = 10^{-1}.
Figure 17. MN curve for δ = 0.02, where d = 3, β = 1, b_0 = 3 and σ = 10^{-1}.
Figure 18. MN curve for δ = 0.01, where d = 3, β = 1, b_0 = 3 and σ = 10^{-1}.
Figure 19. MN curve for δ = 0.001, where d = 3, β = 1, b_0 = 3 and σ = 10^{-1}.
Figure 20. MN curve for δ = 0.05, where d = 3, β = 1, b_0 = 3 and σ = 10^{-5}.
Figure 21. MN curve for δ = 0.03, where d = 3, β = 1, b_0 = 3 and σ = 10^{-5}.
Figure 22. MN curve for δ = 0.01, where d = 3, β = 1, b_0 = 3 and σ = 10^{-5}.
Figure 23. MN curve for δ = 0.05, where d = 3, β = 1, b_0 = 3 and σ = 10^{-10}.
Figure 24. MN curve for δ = 0.03, where d = 3, β = 1, b_0 = 3 and σ = 10^{-10}.
Figure 25. MN curve for δ = 0.01, where d = 3, β = 1, b_0 = 3 and σ = 10^{-10}.
Table 4. 3D experiment, c = 20.7846, b_0 = 3.
N_d: 616, 666, 766, 966, 1366
N_int: 50, 100, 200, 400, 800
N_bdy: 566, 566, 566, 566, 566
N_t: 1200, 1200, 1200, 1200, 1800
RMS: 6.7·10^{-9}, 1.5·10^{-11}, 3.1·10^{-13}, 1.8·10^{-15}, 7.0·10^{-19}
COND: 2.4·10^{71}, 2.6·10^{71}, 3.0·10^{71}, 3.7·10^{71}, 5.5·10^{71}
The computation is very efficient. Even though we adopted 200 effective digits for each step of the calculations, the most time-consuming task, solving the linear system, took only two seconds for $N_d = 616$ and five minutes for $N_d = 1366$. In fact, keeping only 100 effective digits would have produced the same RMS and COND values. We stopped adding data points at $N_d = 1366$ because the RMS is already good enough.
The comparison among different values of c is presented in Table 5. We fixed N d = 1366 , N i n t = 800 and N b d y = 566 . In order to cope with the problem of ill-conditioning, enough effective digits were adopted for the calculations. For c = 1 , we used 50 digits, and for c = 220 , 140 digits were used. The most time-consuming work of solving the linear system always took less than six minutes’ computer time.
Note that Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 show that if δ is small enough, or, equivalently, if the number of data points is large enough, the values of MN(c) become large very slowly for c > 20.7846 . Numerical investigations also demonstrate this. Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25 even tend to move the optimal value of c to the right. All these are supported by the RMS’s in Table 5. Although for c > 20.7846 , the choice of c does not influence the approximation error much, one still has to choose c = 20.7846 because its corresponding condition number is smaller. In other words, the theoretically predicted optimal value of c coincides with the experimentally optimal one.

4. Discussion

In our experiments, the exact solution $u(x_1, \dots, x_d)$ is a natural function contained in the Sobolev space $W_2^\tau(\mathbb{R}^d)$ for any $\tau \ge 0$. An approximate solution $\hat{u}(x_1, \dots, x_d)$ could always be found efficiently with a very small approximation error. The optimality of our choice of the shape parameter c has also been corroborated. This means that, in the RBF collocation method, any differential equation may be effectively handled by our approach, as long as its solution belongs to the Sobolev space, such as the Poisson equation in our experiments. This is exciting. However, how to apply our approach to various important but hard differential equations remains very challenging, especially when the solution does not lie in the Sobolev space.

Funding

This work was supported by the Taiwanese Ministry of Science and Technology [MOST project 109-2115-M-126-003].

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Wendland, H. Scattered Data Approximation; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  2. Kansa, E.J. Multiquadrics—A scattered data approximation scheme with applications to computational fluid dynamics I: Surface approximations and partial derivative estimates. Comput. Math. Appl. 1990, 19, 127–145. [Google Scholar] [CrossRef] [Green Version]
  3. Kansa, E.J. Multiquadrics—A scattered data approximation scheme with applications to computational fluid dynamics II: Solutions to parabolic, hyperbolic, and elliptic partial differential equations. Comput. Math. Appl. 1990, 19, 147–161. [Google Scholar] [CrossRef] [Green Version]
  4. Fasshauer, G. Meshfree Approximation Methods with MATLAB; World Scientific Publishers: Singapore, 2007. [Google Scholar]
  5. Yosida, K. Functional Analysis; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  6. Narcowich, F.J.; Ward, J.D.; Wendland, H. Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions. Constr. Approx. 2006, 20, 175–186. [Google Scholar] [CrossRef]
  7. Luh, L.-T. The mystery of the shape parameter IV. Eng. Anal. Bound. Elem. 2014, 48, 24–31. [Google Scholar] [CrossRef] [Green Version]
  8. Luh, L.-T. The choice of the shape parameter-a friendly approach. Eng. Anal. Bound. Elem. 2019, 98, 103–109. [Google Scholar] [CrossRef]
Figure 1. MN curve for δ = 0.005, where d = 1, β = 1, b_0 = 1 and σ = 10^{-1}.
Figure 2. MN curve for δ = 0.003, where d = 1, β = 1, b_0 = 1 and σ = 10^{-1}.
Figure 3. MN curve for δ = 0.001, where d = 1, β = 1, b_0 = 1 and σ = 10^{-1}.
Figure 4. MN curve for δ = 0.005, where d = 1, β = 1, b_0 = 1 and σ = 10^{-5}.
Figure 5. MN curve for δ = 0.003, where d = 1, β = 1, b_0 = 1 and σ = 10^{-5}.
Figure 6. MN curve for δ = 0.001, where d = 1, β = 1, b_0 = 1 and σ = 10^{-5}.
Table 3. 2D experiment, b_0 = 2, N_d = 341, N_t = 961.
c: 1, 10, 17, 30, 50
RMS: 1.1·10^{-8}, 5.2·10^{-22}, 9.2·10^{-25}, 1.4·10^{-24}, 4.1·10^{-24}
COND: 1.8·10^{28}, 7.0·10^{75}, 4.8·10^{87}, 1.6·10^{100}, 2.0·10^{111}
c: 70, 90, 110, 130, 150
RMS: 2.2·10^{-23}, 2.4·10^{-22}, 3.2·10^{-22}, 2.8·10^{-22}, 3.6·10^{-22}
COND: 1.2·10^{119}, 1.8·10^{125}, 2.9·10^{129}, 5.9·10^{132}, 5.8·10^{135}
Table 5. 3D experiment, b_0 = 3, N_d = 1366, N_t = 1800.
c: 1, 15, 20.7846, 30, 40
RMS: 8.7·10^{-7}, 2.6·10^{-7}, 7.0·10^{-19}, 7.6·10^{-18}, 1.0·10^{-19}
COND: 4.6·10^{21}, 6.4·10^{65}, 5.5·10^{71}, 2.5·10^{78}, 4.7·10^{83}
c: 60, 80, 100, 120, 140
RMS: 3.4·10^{-19}, 8.7·10^{-19}, 6.8·10^{-19}, 1.4·10^{-19}, 3.8·10^{-19}
COND: 1.1·10^{91}, 1.9·10^{96}, 2.5·10^{100}, 5.1·10^{103}, 3.2·10^{106}
c: 180, 220
RMS: 1.7·10^{-18}, 3.7·10^{-19}
COND: 1.2·10^{111}, 5.6·10^{114}