Abstract
Surface modeling is closely related to interpolation and approximation by level set methods, radial basis function methods, and moving least squares methods. Although radial basis functions with global support have very good approximation properties, they are often accompanied by an ill-conditioned algebraic system: the exceedingly large condition number of the discrete matrix makes the numerical calculation time consuming. This paper introduces a truncated exponential function that is radial on arbitrary $n$-dimensional space and has compact support. The truncated exponential radial function is proven to be strictly positive definite on $\mathbb{R}^n$ provided the internal parameter $l$ satisfies a suitable dimension-dependent condition. Error estimates for scattered data interpolation are obtained via the native space approach. To confirm the efficiency of the truncated exponential radial function approximation, single-level interpolation and multilevel interpolation are used for surface modeling.
1. Introduction
Radial basis functions (RBFs) can be used to construct trial spaces that have high precision in arbitrary dimensions with arbitrary smoothness. Applications of RBFs (or so-called meshfree methods) can be found in many areas of science and engineering, including geometric modeling with surfaces [1]. Globally supported radial basis functions such as Gaussians or generalized (inverse) multiquadrics have excellent approximation properties. However, they produce dense discrete systems, which tend to be poorly conditioned and lead to high computational cost. Radial basis functions with compact support, in contrast, lead to well conditioned sparse systems. The goal of this work is to design a truncated exponential function that has compact support and is strictly positive definite and radial on arbitrary $n$-dimensional space, and to show the advantages of the truncated exponential radial function approximation for surface modeling.
2. Auxiliary Tools
In order to make the paper self-contained and have a complete basis for the theoretical analysis in the later sections, we introduce some concepts and theorems related to radial functions in this section.
2.1. Radial Basis Functions
Definition 1.
A multivariate function $\Phi : \mathbb{R}^n \to \mathbb{R}$ is called radial if its value at each point depends only on the distance between that point and the origin; equivalently, there exists a univariate function $\varphi : [0, \infty) \to \mathbb{R}$ such that $\Phi(x) = \varphi(r)$ with $r = \|x\|$. Here, $\|\cdot\|$ is usually the Euclidean norm. The radial basis functions are then defined by translation, $\Phi(x - x_k) = \varphi(\|x - x_k\|)$, for any fixed center $x_k \in \mathbb{R}^n$.
Definition 2.
A real-valued continuous function $\Phi : \mathbb{R}^n \to \mathbb{R}$ is called positive definite on $\mathbb{R}^n$ if it is even and
$$\sum_{j=1}^{N} \sum_{k=1}^{N} c_j c_k\, \Phi(x_j - x_k) \ge 0 \qquad (1)$$
for any $N$ pairwise different points $x_1, \ldots, x_N \in \mathbb{R}^n$ and $c = (c_1, \ldots, c_N)^T \in \mathbb{R}^N$. By Bochner's theorem, such a function is the Fourier transform of a (positive) measure. The function Φ is strictly positive definite on $\mathbb{R}^n$ if the quadratic form (1) is zero only for $c = 0$.
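As a quick numerical illustration (a minimal sketch of our own, not taken from the paper), strict positive definiteness can be observed by assembling the matrix $\big(\Phi(x_j - x_k)\big)_{j,k}$ for distinct centers and checking that its smallest eigenvalue is positive, so that the quadratic form (1) is positive for every nonzero $c$; the Gaussian kernel is used here only because its strict positive definiteness is classical.

```python
import numpy as np

def kernel_matrix(points, phi):
    """Assemble A_jk = phi(||x_j - x_k||) for a radial kernel phi."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return phi(r)

gaussian = lambda r: np.exp(-(3.0 * r) ** 2)    # a classical strictly positive definite kernel
pts = np.random.default_rng(0).random((20, 2))  # 20 distinct points in [0, 1]^2
A = kernel_matrix(pts, gaussian)
print(np.linalg.eigvalsh(A).min() > 0)          # True: c^T A c > 0 for every c != 0
```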
The strict positive definiteness of a radial function can be characterized by considering the Fourier transform of a univariate function. This is described in the following theorem; its proof can be found in [2].
Theorem 1.
A continuous function $\varphi : [0, \infty) \to \mathbb{R}$ such that $r \mapsto r^{\,n-1} \varphi(r) \in L_1[0, \infty)$ is strictly positive definite and radial on $\mathbb{R}^n$ if and only if the $n$-dimensional Fourier transform
$$\mathcal{F}_n \varphi(r) = r^{-\frac{n-2}{2}} \int_0^{\infty} \varphi(t)\, t^{\frac{n}{2}}\, J_{\frac{n-2}{2}}(rt)\, dt$$
is non-negative and not identically equal to zero. Here, $J_{\frac{n-2}{2}}$ is the classical Bessel function of the first kind of order $\frac{n-2}{2}$.
2.2. Multiply Monotonicity
Since Fourier transforms are not always easy to compute, it is often more convenient, for limited choices of $n$, to decide whether a function is strictly positive definite and radial on $\mathbb{R}^n$ by means of multiply monotonicity.
Definition 3.
A function $\varphi : (0, \infty) \to \mathbb{R}$, which is in $C^{k-2}(0, \infty)$, $k \ge 2$, and for which $(-1)^j \varphi^{(j)}$ is non-negative, non-increasing, and convex for $j = 0, 1, \ldots, k-2$, is called $k$-times monotone on $(0, \infty)$. In the case $k = 1$, we only require $\varphi$ to be non-negative and non-increasing.
This definition can be found in the monographs [2,3]. The following theorem of Micchelli (see [4]) provides a characterization of strictly positive definite radial functions in terms of multiply monotonicity.
Theorem 2.
Let $k \ge 2$ be a positive integer. If $\varphi : [0, \infty) \to \mathbb{R}$ is $k$-times monotone on $(0, \infty)$, but not constant, then φ is strictly positive definite and radial on $\mathbb{R}^n$ for any $n$ such that $n \le 2k - 3$.
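For illustration (a standard example recalled from [2,3], not a result of this paper), consider the truncated power function
$$\varphi_\ell(r) = (1 - r)_+^{\ell}, \qquad (1 - r)_+ := \max\{1 - r,\, 0\}.$$
Since $(-1)^j \varphi_\ell^{(j)}(r) = \ell(\ell-1)\cdots(\ell-j+1)\,(1-r)_+^{\ell-j}$ is non-negative, non-increasing, and convex for $j = 0, \ldots, \ell-1$, the function $\varphi_\ell$ is $(\ell+1)$-times monotone, and Theorem 2 yields strict positive definiteness on $\mathbb{R}^n$ whenever $n \le 2\ell - 1$, i.e., $\ell \ge \lfloor n/2 \rfloor + 1$, which is Askey's classical condition.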
2.3. Native Spaces
Every strictly positive definite function can indeed be associated with a reproducing kernel Hilbert space, its so-called native space (see [5]).
Definition 4.
Suppose $\Phi \in C(\mathbb{R}^n) \cap L_1(\mathbb{R}^n)$ is a real-valued strictly positive definite function. Then, the native space of Φ is defined by
$$\mathcal{N}_\Phi(\mathbb{R}^n) = \Big\{ f \in L_2(\mathbb{R}^n) \cap C(\mathbb{R}^n) :\ \hat{f} / \sqrt{\hat{\Phi}} \in L_2(\mathbb{R}^n) \Big\},$$
and this space is equipped with the norm
$$\| f \|_{\mathcal{N}_\Phi(\mathbb{R}^n)} = \Big\| \hat{f} / \sqrt{\hat{\Phi}} \Big\|_{L_2(\mathbb{R}^n)}.$$
For any domain $\Omega \subseteq \mathbb{R}^n$, $\mathcal{N}_\Phi(\Omega)$ is in fact the completion of the pre-Hilbert space spanned by the translates $\Phi(\cdot - y)$, $y \in \Omega$. Of course, $\mathcal{N}_\Phi(\Omega)$ contains all functions of the form
$$f = \sum_{j=1}^{N} c_j\, \Phi(\cdot - x_j),$$
provided $x_j \in \Omega$, and such functions can be equipped with the equivalent norm
$$\| f \|_{\mathcal{N}_\Phi(\Omega)}^2 = \sum_{j=1}^{N} \sum_{k=1}^{N} c_j c_k\, \Phi(x_j - x_k).$$
Here, $N = \infty$ is also allowed.
3. Truncated Exponential Function
In this section, we design a truncated exponential function:
with $r = \|x\| \ge 0$, where $l$ is a positive integer. By Definition 1, it becomes apparent that the truncated exponential function is a radial function centered at the origin on $\mathbb{R}^n$.
The following theorem characterizes the strict positive definiteness of the truncated exponential function.
Theorem 3.
The truncated exponential function is strictly positive definite and radial on $\mathbb{R}^n$ provided the parameter $l$ is sufficiently large relative to the dimension $n$.
Proof.
Theorem 2 shows that multiply monotone functions give rise to strictly positive definite radial functions. Therefore, we only need to verify the multiply monotonicity of the univariate function defined by (5).
Obviously, the truncated exponential function is sufficiently smooth on $(0, \infty)$. For any positive integers $p$ and $q$, the factors appearing in its derivatives are non-negative but have negative derivatives. Therefore, the alternating-sign conditions of Definition 3 hold, the truncated exponential function is multiply monotone on $(0, \infty)$, and the conclusion follows directly by Theorem 2. □
There are two ways to scale the truncated exponential function:
(1) In order to normalize the function, we can multiply (5) by a suitable positive constant. The rescaled function is still strictly positive definite and has the same support as (5).
(2) A shape parameter $\varepsilon > 0$ can be introduced by evaluating the function at $\varepsilon r$ instead of $r$, giving the scaled truncated exponential function.
Obviously, a smaller $\varepsilon$ causes the function to become flatter and its support to become larger, while increasing $\varepsilon$ makes the function more peaked and thereby localizes its support.
4. Errors in Native Spaces
This section discusses scattered data interpolation with the compactly supported radial basis functions $\Phi(\cdot - x_j)$, $j = 1, \ldots, N$.
Given a set of pairwise distinct scattered points $X = \{x_1, \ldots, x_N\} \subset \Omega$, the interpolant of a target function $f$ can be represented as
$$s_f(x) = \sum_{j=1}^{N} c_j\, \Phi(x - x_j). \qquad (7)$$
Solving the interpolation problem $s_f(x_i) = f(x_i)$, $i = 1, \ldots, N$, leads to the following system of linear equations:
$$A c = f_X, \qquad (8)$$
where the entries of the matrix $A$ are given by $A_{ij} = \Phi(x_i - x_j)$, $i, j = 1, \ldots, N$, $c = (c_1, \ldots, c_N)^T$, and $f_X = (f(x_1), \ldots, f(x_N))^T$. A solution to the system (8) exists and is unique, since the matrix $A$ is positive definite.
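For concreteness, the following sketch (our own illustration, not the paper's code) assembles the system (8) and evaluates the interpolant (7); since the explicit TERBF formula is not reproduced here, Wendland's compactly supported C2 function is used as a stand-in kernel.

```python
import numpy as np
from scipy.spatial.distance import cdist

def wendland_c2(r, eps=1.0):
    """Compactly supported stand-in kernel phi(r) = (1 - eps r)_+^4 (4 eps r + 1)."""
    t = np.maximum(1.0 - eps * r, 0.0)
    return t**4 * (4.0 * eps * r + 1.0)

def rbf_interpolate(centers, values, eval_points, phi=wendland_c2):
    A = phi(cdist(centers, centers))       # A_ij = phi(||x_i - x_j||), cf. (8)
    c = np.linalg.solve(A, values)         # coefficients of the interpolant (7)
    return phi(cdist(eval_points, centers)) @ c

# Usage: recover f(x, y) = sin(pi x) cos(pi y) from 200 scattered samples.
rng = np.random.default_rng(1)
X = rng.random((200, 2))
f = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
xi = rng.random((500, 2))
s = rbf_interpolate(X, f, xi)
```

Because the kernel has compact support, many entries of $A$ vanish once the support radius is small relative to the domain, so a sparse solver can replace the dense solve above for large point sets.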
Let $u(x) = (u_1(x), \ldots, u_N(x))^T$ be the vector of cardinal basis functions, characterized by $u_j(x_i) = \delta_{ij}$; then $s_f$ also has the following form (see [6]):
$$s_f(x) = \sum_{j=1}^{N} f(x_j)\, u_j(x). \qquad (9)$$
Comparing (9) with (7), we have
$$u(x) = A^{-1} b(x), \qquad (10)$$
where $b(x) = (\Phi(x - x_1), \ldots, \Phi(x - x_N))^T$.
Equation (10) shows the connection between the radial basis functions and the cardinal basis functions.
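As a quick sanity check of (10) (again our own illustration with the stand-in kernel), evaluating $u(x) = A^{-1} b(x)$ at the data sites themselves must reproduce the cardinality property $u_j(x_i) = \delta_{ij}$, since the kernel values at the sites form exactly the matrix $A$:

```python
import numpy as np
from scipy.spatial.distance import cdist

phi = lambda r: np.maximum(1.0 - 3.0 * r, 0.0)**4 * (12.0 * r + 1.0)  # Wendland C2 stand-in, eps = 3

X = np.random.default_rng(4).random((50, 2))
A = phi(cdist(X, X))                   # kernel matrix at the data sites
U = np.linalg.solve(A, A)              # the columns b(x_i) stacked give A, so U = A^{-1} A
print(np.allclose(U, np.eye(len(X))))  # True: u_j(x_i) = delta_ij
```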
First, the generic error estimate is as follows.
Theorem 4.
Let $x_1, \ldots, x_N \in \Omega$ be pairwise distinct and let Φ be the truncated exponential radial basis function with parameter $l$ as in Theorem 3. Denote the interpolant to $f \in \mathcal{N}_\Phi(\Omega)$ on the set $X = \{x_1, \ldots, x_N\}$ by $s_f$. Then, for every $x \in \Omega$, we have
$$|f(x) - s_f(x)| \le P_\Phi(x)\, \| f \|_{\mathcal{N}_\Phi(\Omega)}.$$
Here,
$$P_\Phi(x) = \Big\| \Phi(\cdot - x) - \sum_{j=1}^{N} u_j(x)\, \Phi(\cdot - x_j) \Big\|_{\mathcal{N}_\Phi(\Omega)}$$
is the power function associated with Φ and $X$.
Proof.
Since $f \in \mathcal{N}_\Phi(\Omega)$, the reproducing property yields
$$f(x) = \langle f, \Phi(\cdot - x) \rangle_{\mathcal{N}_\Phi(\Omega)}.$$
Then
$$f(x) - s_f(x) = \Big\langle f,\ \Phi(\cdot - x) - \sum_{j=1}^{N} u_j(x)\, \Phi(\cdot - x_j) \Big\rangle_{\mathcal{N}_\Phi(\Omega)}.$$
Applying the Cauchy–Schwarz inequality, we have
$$|f(x) - s_f(x)| \le \| f \|_{\mathcal{N}_\Phi(\Omega)} \Big\| \Phi(\cdot - x) - \sum_{j=1}^{N} u_j(x)\, \Phi(\cdot - x_j) \Big\|_{\mathcal{N}_\Phi(\Omega)}.$$
Denote the second factor by $P_\Phi(x)$. By the definition of the native space norm and Equation (10), $P_\Phi(x)^2$ can be rewritten as
$$P_\Phi(x)^2 = \Phi(0) - b(x)^T A^{-1} b(x).$$
Then, the conclusion follows directly from the strict positive definiteness of Φ. □
One of the main benefits of Theorem 4 is that we are now able to estimate the interpolation error by computing the power function $P_\Phi$. In addition, $P_\Phi$ can be used as an indicator for choosing a good shape parameter.
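Since the bound of Theorem 4 separates the influence of the data sites from that of the unknown function, $P_\Phi$ can be evaluated before any function values are available. The following sketch (our own illustration; Wendland's C2 function again stands in for the kernel) computes $P_\Phi(x)^2 = \Phi(0) - b(x)^T A^{-1} b(x)$ on a set of evaluation points; its maximum over a fine grid can then be minimized over the shape parameter.

```python
import numpy as np
from scipy.spatial.distance import cdist

def phi(r, eps=3.0):
    t = np.maximum(1.0 - eps * r, 0.0)
    return t**4 * (4.0 * eps * r + 1.0)   # compactly supported stand-in kernel

def power_function(centers, eval_points):
    A = phi(cdist(centers, centers))       # interpolation matrix of (8)
    B = phi(cdist(eval_points, centers))   # row i is b(x_i)^T
    quad = np.einsum('ij,ji->i', B, np.linalg.solve(A, B.T))  # b(x)^T A^{-1} b(x)
    P2 = phi(np.zeros(len(eval_points))) - quad
    return np.sqrt(np.maximum(P2, 0.0))    # clip tiny negative round-off

rng = np.random.default_rng(2)
centers = rng.random((100, 2))
grid = rng.random((2000, 2))
print(power_function(centers, grid).max()) # error indicator for this set of centers
```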
When equipping the dataset with the fill distance (or sample density; see [7])
$$h_{X,\Omega} = \sup_{x \in \Omega}\, \min_{x_j \in X} \| x - x_j \|_2,$$
the following generic error estimate can be obtained for any symmetric and strictly positive definite kernel of sufficient smoothness.
Theorem 5.
Suppose $\Omega \subseteq \mathbb{R}^n$ is bounded and satisfies an interior cone condition. Suppose $\Phi \in C^{2k}(\Omega \times \Omega)$ is symmetric and strictly positive definite. Denote the interpolant to $f \in \mathcal{N}_\Phi(\Omega)$ on the set $X$ by $s_f$. Then, there exist some positive constants $h_0$ and $C$ such that
$$|f(x) - s_f(x)| \le C\, h_{X,\Omega}^{\,k}\, \sqrt{C_\Phi(x)}\; \| f \|_{\mathcal{N}_\Phi(\Omega)},$$
provided $h_{X,\Omega} \le h_0$. Here,
$$C_\Phi(x) = \max_{|\beta| = 2k}\ \max_{w,\,z \,\in\, \Omega \cap B(x,\, c_2 h_{X,\Omega})} \big| D_2^{\beta} \Phi(w, z) \big|,$$
with $B(x, c_2 h_{X,\Omega})$ denoting the ball of radius $c_2 h_{X,\Omega}$ centered at $x$.
Proof.
The estimate can be obtained by applying the Taylor expansion. The technical details can be found in [2,3]. □
Since the truncated exponential radial basis function has only limited smoothness, the power $k$ in the error estimate of Theorem 5 vanishes. Therefore, we need to bound $C_\Phi(x)$ by some additional powers of $h_{X,\Omega}$ in order to obtain an estimate in terms of the fill distance. The resulting theorem is as follows.
Theorem 6.
Suppose $\Omega \subseteq \mathbb{R}^n$ is bounded and satisfies an interior cone condition. Suppose Φ is the truncated exponential radial basis function with parameter $l$ as in Theorem 3. Denote the interpolant to $f \in \mathcal{N}_\Phi(\Omega)$ on the set $X$ by $s_f$. Then, there exist some positive constants $h_0$ and $C$ such that
$$|f(x) - s_f(x)| \le C\, h_{X,\Omega}^{1/2}\, \| f \|_{\mathcal{N}_\Phi(\Omega)},$$
provided $h_{X,\Omega} \le h_0$.
Proof.
From [2], for kernels of such limited smoothness, the factor $C_\Phi(x)$ can be expressed in a form that is independent of the evaluation point. With a suitable choice of comparison coefficients, we bound the power function determined by the truncated exponential radial basis function. Using the definition of the power function and Lagrange's mean value theorem, we obtain a bound in which the truncated power radial basis function appears; the remaining estimate then follows from [2]. □
5. Numerical Experiments
5.1. Single-Level Approximation
This subsection shows how our truncated exponential radial basis function (TERBF) works at a single level. Our first 2D target surface is the standard Franke's function. In the experiments, we let the kernel in (7) be the truncated exponential radial function. Halton point sets with increasingly greater data density are generated in the domain $[0,1]^2$. Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 list the test results of Gaussian interpolation, MQ (multiquadric) interpolation, IMQ (inverse multiquadric) interpolation, and TERBF interpolation for different values of the shape parameter, respectively. In the tables, the RMS-error is computed by
$$\mathrm{RMS} = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \big( s_f(\xi_i) - f(\xi_i) \big)^2 },$$
where $\xi_1, \ldots, \xi_M$ are the evaluation points. The rate listed in the tables is computed using the formula
$$\mathrm{rate}_k = \frac{\ln\big( \mathrm{RMS}_{k-1} / \mathrm{RMS}_{k} \big)}{\ln\big( h_{k-1} / h_{k} \big)},$$
where $\mathrm{RMS}_k$ is the RMS-error of experiment number $k$ and $h_k$ is the fill distance at level $k$. The reported condition number is that of the interpolation matrix defined by (8).

From Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, we observe that the globally supported radial basis functions (Gaussian, MQ, IMQ) can reach ideal accuracy when a small shape parameter is used. However, the condition number of the interpolation matrix becomes surprisingly large as the number of scattered data points increases. We note that MATLAB issues a "matrix close to singular" warning when carrying out the Gaussian and MQ interpolation experiments on the densest point sets. Table 7 and Table 8 show that TERBF interpolation not only maintains good approximation accuracy, but also produces a well conditioned interpolation matrix; even for the densest point sets, the condition number of the presented method remains comparatively small. The change of the RMS-error as the data density grows is displayed in Figure 1. We see that the error curves of the Gaussian and MQ interpolations are not monotonic and even become erratic for the largest datasets, whereas the curves of the IMQ and TERBF interpolations are relatively smooth. In particular, TERBF greatly improves the condition number of the interpolation matrix.

To show the application of TERBF approximation to compact 3D objects, we interpolate the Beethoven data in Figure 2 and the Stanford bunny in Figure 3. The numerical experiments suggest that TERBF interpolation is considerably faster than scattered data interpolation with globally supported radial basis functions. However, we observe that TERBF interpolation causes some artifacts, such as the extra surface fragment near the bunny's ear in the left part of Figure 3. This is because the interpolating implicit surface has a narrow band of support; the results improve when the sample density is smaller than the width of the support band (see the right part of Figure 3). Similar observations have been reported in Fasshauer's book [3], where a partition of unity fit based on Wendland's function was used, and in [1].
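To make the reported quantities reproducible in spirit (a sketch under our own conventions, not the paper's MATLAB code), the RMS-error, the fill distance $h_k$, and the rate can be computed as follows; the fill distance is approximated by a maximum over a dense set of test points in the domain.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_error(s_values, f_values):
    """RMS-error between interpolant and target over the evaluation points."""
    return np.sqrt(np.mean((np.asarray(s_values) - np.asarray(f_values)) ** 2))

def fill_distance(data_sites, test_points):
    """Discrete approximation of h_{X,Omega} = sup_x min_j ||x - x_j||_2."""
    dist, _ = cKDTree(data_sites).query(test_points, k=1)
    return dist.max()

def convergence_rates(errors, fill_distances):
    """rate_k = ln(RMS_{k-1}/RMS_k) / ln(h_{k-1}/h_k) for k = 2, ..., K."""
    e, h = np.asarray(errors, float), np.asarray(fill_distances, float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# Example with made-up numbers: errors decaying like h give rates close to 1.
print(convergence_rates([1e-2, 5e-3, 2.5e-3], [0.2, 0.1, 0.05]))
```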
Table 1.
Gaussian interpolation to the 2D Franke’s function with .
Table 2.
Gaussian interpolation to the 2D Franke’s function with .
Table 3.
MQ interpolation to the 2D Franke’s function with .
Table 4.
MQ interpolation to the 2D Franke’s function with .
Table 5.
IMQ interpolation to the 2D Franke’s function with .
Table 6.
IMQ interpolation to the 2D Franke’s function with .
Table 7.
TERBF interpolation to the 2D Franke’s function with .
Table 8.
TERBF interpolation to the 2D Franke’s function with .

Figure 1.
RMS-error curves for Gaussian, MQ, IMQ, and TERBF interpolations.
Figure 2.
TERBF approximation of the Beethoven data. From top left to bottom right: 163 (a), 663 (b), 1163 (c), and 2663 (d) points.
Figure 3.
TERBF approximation of the Stanford bunny with 453 (left) and 8171 (right) data points.
5.2. Multilevel Approximation
The multilevel scattered data approximation was first implemented in [8] and then studied by a number of other researchers [9,10,11,12,13]. In the multilevel algorithm, the residual is first formed on the coarsest level and then approximated on successively finer levels by compactly supported radial basis functions with gradually smaller support; this process is repeated until the finest level is reached. An advantage of this multilevel interpolation algorithm is its recursive structure (i.e., the same routine can be applied recursively at each level in the implementation); its disadvantage is the memory allocation it requires. A sketch of the resulting residual-correction loop is given below.
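The sketch is our own illustration; Wendland's C2 function stands in for the compactly supported kernel since the explicit TERBF formula is not reproduced here. Each level interpolates the residual left by the coarser levels with a more sharply localized basis, i.e., with a larger shape parameter.

```python
import numpy as np
from scipy.spatial.distance import cdist

def phi(r, eps):
    t = np.maximum(1.0 - eps * r, 0.0)
    return t**4 * (4.0 * eps * r + 1.0)   # compactly supported stand-in kernel

def multilevel_interpolate(point_sets, f, eval_points, eps_levels):
    """point_sets: center arrays from coarse to fine; eps_levels: increasing shape parameters."""
    residuals = [f(P) for P in point_sets]                 # data values, corrected level by level
    s_eval = np.zeros(len(eval_points))
    for k, (P, eps) in enumerate(zip(point_sets, eps_levels)):
        A = phi(cdist(P, P), eps)                          # sparse in practice; dense here for brevity
        c = np.linalg.solve(A, residuals[k])               # interpolate the current residual
        s_eval += phi(cdist(eval_points, P), eps) @ c      # accumulate the level-k correction
        for m in range(k + 1, len(point_sets)):            # subtract this correction on finer levels
            residuals[m] -= phi(cdist(point_sets[m], P), eps) @ c
    return s_eval

# Usage: three nested uniform levels on [0, 1]^2 for the target f(x, y) = exp(-x) * y.
grids = [np.stack(np.meshgrid(*2 * [np.linspace(0, 1, n)]), -1).reshape(-1, 2) for n in (5, 9, 17)]
target = lambda P: np.exp(-P[:, 0]) * P[:, 1]
s = multilevel_interpolate(grids, target, grids[-1], eps_levels=(2.0, 4.0, 8.0))
```

Because the support shrinks from level to level, the linear systems at the finer levels are sparse and well conditioned, although the coefficients of every level must be stored, which is the memory cost mentioned above.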
In this experiment, the 3D target surface is given by an explicit function. We generate uniform point sets in the computational domain, refined over four levels, and the scale parameter is adjusted accordingly at each level. The corresponding slice plots, the iso-surfaces, and the slice plots of the absolute error are shown in Figure 4, Figure 5, Figure 6 and Figure 7. Both the iso-surfaces and the slice plots are color coded according to the absolute error. At each level, the trial space is constructed from a series of truncated exponential radial basis functions with varying support radii. Hence, the multilevel approximation algorithm produces a well conditioned sparse discrete algebraic system in each recursion while keeping ideal approximation accuracy. The numerical experiments show that TERBF multilevel interpolation is very effective for 3D explicit surface approximation; these observations can be made in Figure 4, Figure 5, Figure 6 and Figure 7. Similar experiments and observations are reported in detail in Fasshauer's book [3], where Wendland's function was used for the approximation. To reduce the memory requirements of the multilevel algorithm, one can make use of the hierarchical collocation method developed in [13].
Figure 4.
Fits and errors at Level 1.
Figure 5.
Fits and errors at Level 2.
Figure 6.
Fits and errors at Level 3.
Figure 7.
Fits and errors at Level 4.
6. Conclusions
The truncated exponential radial function, which has compact support, was introduced in this paper. The strict positive definiteness of TERBF was proven via the multiply monotonicity approach, and the interpolation error estimates were obtained via the native space approach. Moreover, TERBF was applied successfully to 2D/3D scattered data interpolation and surface modeling.
However, we found that the truncated exponential radial function has only limited smoothness, and in the error estimates in terms of the fill distance only a low power of $h_{X,\Omega}$ could be obtained. There are many possibilities for enhancing the TERBF approximation:
(1) We can construct new strictly positive definite radial functions with finite smoothness from the given function by a "dimension-walk" technique.
(2) We can carry out an in-depth analysis of the characterization of TERBF in terms of the Fourier transforms established by Bochner's and Schoenberg's theorems.
(3) TERBF can also be used for the numerical solution of partial differential equations. The convergence proof will depend on the approximation properties of the TERBF trial spaces, an appropriate inverse inequality, and a sampling theorem.
Author Contributions
Conceptualization, Methodology and Writing–original draft preparation, Q.X.; Formal analysis and Writing—review and editing, Z.L.
Funding
The research of the first author was partially supported by the Natural Science Foundations of Ningxia Province (No. NZ2018AAC03026) and the Fourth Batch of the Ningxia Youth Talents Supporting Program (No. TJGC2019012). The research of the second author was partially supported by the Natural Science Foundations of China (No. 11501313), the Natural Science Foundations of Ningxia Province (No. 2019AAC02001), the Project funded by the China Postdoctoral Science Foundation (No. 2017M621343), and the Third Batch of the Ningxia Youth Talents Supporting Program (No. TJGC2018037).
Acknowledgments
The authors would like to thank the Editor and the two anonymous reviewers for their valuable comments on an earlier version of this paper. The authors used some Halton datasets and adapted parts of the MATLAB codes from Fasshauer's book [3]; we are grateful for the accompanying CD, which contains many MATLAB codes.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ohtake, Y.; Belyaev, A.; Seidel, H.P. 3D scattered data interpolation and approximation with multilevel compactly supported RBFs. Graph. Model. 2005, 67, 150–165. [Google Scholar] [CrossRef]
- Wendland, H. Scattered Data Approximation; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
- Fasshauer, G.E. Meshfree Approximation Methods with MATLAB; World Scientific Publishers: Singapore, 2007. [Google Scholar]
- Micchelli, C.A. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constr. Approx. 1986, 2, 11–22. [Google Scholar] [CrossRef]
- Schaback, R. A unified theory of radial basis functions: Native Hilbert spaces for radial basis functions II. J. Comp. Appl. Math. 2000, 121, 165–177. [Google Scholar] [CrossRef]
- De Marchi, S.; Perracchiono, E. Lectures on Radial Basis Functions; Department of Mathematics, “Tullio Levi-Civita”, University of Padova: Padova, Italy; Available online: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=2ahUKEwjkuuu01ejlAhW9xosBHaZ0Ct8QFjAAegQIABAC&url=https%3A%2F%2Fwww.math.unipd.it%2F~demarchi%2FRBF%2FLectureNotes_new.pdf&usg=AOvVaw0sDK5WcNE1POWoa_lVur9v (accessed on 20 October 2019).
- Bernard, C.P.; Mallat, S.G.; Slotine, J.J. Scattered data interpolation with wavelet trees. In Curve and Surface Fitting (Saint-Malo, 2002); Nashboro Press: Brentwood, TN, USA, 2003; pp. 59–64. [Google Scholar]
- Floater, M.S.; Iske, A. Multistep scattered data interpolation using compactly supported radial basis functions. J. Comput. Appl. Math. 1996, 73, 65–78. [Google Scholar] [CrossRef]
- Chen, C.S.; Ganesh, M.; Golberg, M.A.; Cheng, A.H.D. Multilevel compact radial functions based computational schemes for some elliptic problems. Comput. Math. Appl. 2002, 43, 359–378. [Google Scholar] [CrossRef]
- Chernih, A.; Gia, Q.T.L. Multiscale methods with compactly supported radial basis functions for the Stokes problem on bounded domains. Adv. Comput. Math. 2016, 42, 1187–1208. [Google Scholar] [CrossRef]
- Farrell, P.; Wendland, H. RBF multiscale collocation for second order elliptic boundary value problems. SIAM J. Numer. Anal. 2013, 51, 2403–2425. [Google Scholar] [CrossRef]
- Fasshauer, G.E.; Jerome, J.W. Multistep approximation algorithms: Improved convergence rates through postconditioning with smoothing kernels. Adv. Comput. Math. 1999, 10, 1–27. [Google Scholar] [CrossRef]
- Liu, Z.; Xu, Q. A Multiscale RBF Collocation Method for the Numerical Solution of Partial Differential Equations. Mathematics 2019, 7, 964. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).