Abstract
This paper demonstrates that the space of piecewise-smooth bivariate functions can be well-approximated by the space of the functions defined by a set of simple (non-linear) operations on smooth uniform tensor product splines. The examples include bivariate functions with jump discontinuities or normal discontinuities across curves, and even across more involved geometries such as a three-corner discontinuity. The provided data may be uniform or non-uniform, and noisy, and the approximation procedure involves non-linear least-squares minimization. Also included is a basic approximation theorem for functions with jump discontinuity across a smooth curve.
Keywords:
bivariate approximation; non-smooth; three-corner jump discontinuity; non-linear algorithms
MSC:
65D15
1. Introduction
High-quality approximations of piecewise-smooth functions from a discrete set of function values present a challenging problem, with applications in image processing and geometric modeling. The univariate problem has been studied by several research groups, and satisfactory solutions can be found in the works of Harten [1], Arandiga et al. [2], Archibald et al. [3,4], and Lipman et al. [5]. However, the 2D problem is still far from being solved, and the 1D methods are not easily adapted to the real 2D case. Furthermore, even the 1D problem is not easily solved in the presence of noisy data. In the 1D problem, we are provided values of a piecewise-smooth function, with or without noise, and the challenge is to approximate the location of the ‘singular points’ that separate one smooth part of the function from the other and also to reconstruct the smooth parts. In the 2D case, a piecewise-smooth function on a domain D is defined by a partition of the domain into segments separated by boundary curves (smooth or non-smooth), and the function is smooth in the interior of each segment. By the term smooth, we mean that the derivatives (up to a certain order) of the function are bounded. Of course, the function and/or its derivatives may be discontinuous across a boundary curve between segments. Given the data acquired from such an underlying piecewise-smooth function, the challenge here is to approximate the separating curves (the singularity curves) and to reconstruct the smooth parts. Note that, apart from noise in the function values, there may also be ‘noise’ in the location of the separating curves (as demonstrated in Section 3.2). The problem of approximating piecewise-smooth functions is a model problem for image processing algorithms, and some sophisticated classes of wavelets and frames have been designed to approximate such functions. For example, see Candes and Donoho [6]. A method for the approximation of piecewise-smooth functions would also be useful for the reconstruction of surfaces in CAGD or in 3D computer graphics, e.g., via the moving least-squares framework.
It is well-established now that only non-linear methods may achieve optimal approximation results in ‘non-smooth’ spaces, e.g., see Binev et al. [7]. In this paper, we go back to using the ‘good old’ splines with uniform knots as our basis functions for the approximation, but we add to the game some (simple) non-linear operations on the space of splines. All the non-linearity used here can be expressed by the sign operation. We remark that the choice of the spline basis is not essential here, and other basis functions may be utilized within the same framework.
We present the idea and the proposed approximation algorithms through a series of illustrative examples. Building from the derivative discontinuity in the univariate case, we move into normal discontinuity and jump discontinuity across the curves in the bivariate case, with some non-trivial topologies of the singularity curves. We shall also present a basic approximation result for the case of jump discontinuity across a smooth curve. Altogether, we present a simple yet powerful approach to piecewise-smooth approximation. The suggested method seems to be quite robust to noisy data, and even the univariate version is interesting in this respect. Open issues, such as the development of efficient algorithms and further approximation analysis, are left for future research.
2. Non-Smooth Univariate Approximations
To demonstrate the main idea, we start with the univariate problem: assume we know that our underlying univariate function $f$ is continuous, $f \in C[0,1]$, that it has one point of discontinuity of its first derivative at $s \in (0,1)$, and that $f'(s^-) > f'(s^+)$. Then, it makes sense to look for two smooth functions, $g_1$ and $g_2$, where $g_1$ approximates $f$ on the left segment $[0,s]$ and $g_2$ approximates $f$ on the right segment $[s,1]$, such that
\[ f(x) = \min\{g_1(x), g_2(x)\}, \qquad x \in [0,1]. \tag{1} \]
The function $g_1$ may be viewed as a smooth extension of $f|_{[0,s]}$ to the whole interval $[0,1]$, and $g_2$ as a smooth extension of $f|_{[s,1]}$ to $[0,1]$. There are many pairs of smooth functions $g_1$ and $g_2$ that satisfy the above relation. Therefore, one may suspect that the problem of finding such a pair is ill-conditioned. Let us check this by trying a specific algorithm for solving this problem and checking it on a few examples. It becomes clear from these examples that the approximations by $g_1$ and $g_2$ are well-defined in the relevant intervals, i.e., in $[0,s]$ and in $[s,1]$. To approximate the functions $g_1$ and $g_2$, we use cubic spline basis functions, with equidistant knots in $[0,1]$.
Assuming we are provided data $\{f_i \approx f(x_i)\}_{i=1}^{M}$, we look for $g_1$ and $g_2$ such that
\[ p^* = \operatorname{argmin}_p \sum_{i=1}^{M} \big( \min\{g_1(x_i; p), g_2(x_i; p)\} - f_i \big)^2. \tag{2} \]
Here, $p$ stands for the set of parameters used in the representation of the unknown functions $g_1$ and $g_2$. We use the convenient representation
\[ g_j(x) = \sum_{k=1}^{N} c_{j,k} B_k(x), \qquad j = 1, 2, \]
where $\{B_k\}_{k=1}^{N}$ are, for example, the basis functions for cubic spline interpolation with the not-a-knot end conditions, satisfying $B_k(t_\ell) = \delta_{k,\ell}$. Hence, in (2), $p$ stands for the unknown splines' coefficients, $p = \{c_{j,k}\}_{j=1,2,\ k=1,\dots,N}$.
In Figure 1 and Figure 2, we see the results of reconstructing piecewise-smooth functions from exact data and from noisy data. The solution of the optimization problem (2) is depicted by a bold line. The underlying function $f$ is generated as the minimum of two smooth functions, and the graphs of these two generating functions are depicted by dashed lines. The fine continuous lines in the figures represent the functions $g_1$ and $g_2$, which, as we see in those graphs, approximate the two generating functions accordingly, and the approximation is good only in the appropriate regions. Here, $N = 5$, and thus we have 10 unknown parameters to solve for. The optimization has been performed using a differential evolution procedure, using the data values at the knots as the starting values of the iterations for both $g_1$ and $g_2$.
Figure 1.
A univariate example—no noise.
Figure 2.
A univariate example—reconstruction in the presence of noise.
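To make the above experiment concrete, here is a minimal, self-contained Python/SciPy sketch of the procedure, under our own illustrative assumptions: a synthetic test function $f = \min\{\sin 3x, 1-x\}$ with added noise, an open-uniform B-spline basis (a stand-in for the cardinal not-a-knot basis described above), and differential evolution over the $2N$ coefficients, as in (2). All names and sizes are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import differential_evolution

def basis_matrix(x, n_coef, degree=3):
    # Open-uniform cubic B-spline design matrix on [0, 1]; an illustrative
    # stand-in for the cardinal basis with not-a-knot end conditions.
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_coef - degree + 1),
                            np.ones(degree)])
    return BSpline.design_matrix(x, knots, degree).toarray()

def cost(p, B, f):
    # The functional (2): least squares of min{g1, g2} against the data.
    c1, c2 = np.split(p, 2)
    return np.sum((np.minimum(B @ c1, B @ c2) - f) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
f = np.minimum(np.sin(3 * x), 1 - x) + 0.02 * rng.standard_normal(x.size)

N = 5                                   # 5 coefficients per spline -> 10 unknowns
B = basis_matrix(x, N)
res = differential_evolution(cost, bounds=[(-3, 3)] * (2 * N), args=(B, f), seed=1)
g1, g2 = (B @ c for c in np.split(res.x, 2))
approx = np.minimum(g1, g2)             # the piecewise-smooth reconstruction
```

The recovered splines g1 and g2 may then be plotted against the two generating functions, in the spirit of Figure 1 and Figure 2.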
Remark 1.
An alternative representation of $f$ in (1) is
\[ f(x) = u(x) + \min\{v(x), 0\}, \qquad x \in [0,1], \]
where
\[ u = g_2, \qquad v = g_1 - g_2. \]
Hence, we can replace the cost functional (2) by
\[ \min_p \sum_{i=1}^{M} \big( u(x_i; p) + \min\{v(x_i; p), 0\} - f_i \big)^2. \]
Here, $p$ stands for the set of parameters in the representation of the unknown spline functions $u$ and $v$, with the advantage that, here, only one unknown spline function, $v$, influences the functional in a non-linear manner. We shall further discuss such semi-linear cases in the bivariate case.
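As an illustration of this semi-linear structure, the following sketch (under the same illustrative assumptions as the previous code block) projects out the linearly entering spline $u$ by an inner least-squares solve, so that the outer non-linear optimization runs only over the coefficients of $v$:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def basis_matrix(x, n_coef, degree=3):
    # Same illustrative open-uniform cubic B-spline basis as above.
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_coef - degree + 1),
                            np.ones(degree)])
    return BSpline.design_matrix(x, knots, degree).toarray()

def semi_linear_cost(cv, B, f):
    # For fixed v, the optimal u solves a *linear* least-squares problem.
    v_minus = np.minimum(B @ cv, 0.0)
    cu, *_ = np.linalg.lstsq(B, f - v_minus, rcond=None)
    return np.sum((B @ cu + v_minus - f) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
f = np.minimum(np.sin(3 * x), 1 - x) + 0.02 * rng.standard_normal(x.size)

B = basis_matrix(x, 5)
res = minimize(semi_linear_cost, x0=np.zeros(5), args=(B, f), method="Nelder-Mead")
```

Only the coefficients of $v$ are handled by the non-linear solver; $u$ is recovered from the final $v$ by one more linear solve.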
The Case $f'(s^-) < f'(s^+)$ and More
Obviously, in this case, we should replace the min operation within (2) by a max operation. In case we have two break points, $s_1$ and $s_2$, in $(0,1)$, e.g., with $f'(s_1^-) > f'(s_1^+)$ and $f'(s_2^-) < f'(s_2^+)$, then we may look for three unknown spline functions, $g_1, g_2, g_3$, such that $\max\{\min\{g_1, g_2\}, g_3\}$ approximates the data in the least-squares sense, and so on. To avoid high complexity, we suggest subdividing $[0,1]$ into partially overlapping intervals, each containing at most one break point, and blending the individual local approximations into a global one over $[0,1]$. We shall further discuss and demonstrate this strategy in the 2D case.
The problem of approximating piecewise-smooth univariate data has been investigated by many authors. A prominent approach to the problem involves the so-called essentially non-oscillatory (ENO) and subcell resolution (SR) schemes introduced by Harten [1]. The ENO scheme constructs a piecewise-polynomial interpolant on a uniform grid that, loosely speaking, uses the smoothest consecutive data points in the vicinity of each data cell. The SR technique approximates the singularity location by intersecting two polynomials, each coming from another side of the suspected singularity cell. In the spirit of ENO-SR, many interesting works have been written using this simple but powerful idea. Recently, Arandiga et al. [2] provided a rigorous treatment of a variation of the technique, proving the expected approximation power on piecewise-smooth data. Archibald et al. [3,4] further improved the ENO idea by introducing polynomial annihilation techniques for locating the cell that contains the singularity. A recent paper by Lipman et al. [5] used quasi-interpolation operators for this problem. Yet, the extension of the univariate methods to the 2D case is neither obvious nor simple. In [2], after locating an interval of possible singularity using ENO [1], two polynomial approximations were defined, each one approximating the data on one side of the singularity, and their intersection was used to approximate the singularity location. The method suggested here is similar, since we also look for two different approximations related to the two sides of a singularity. However, the least-squares optimization approach enables a natural extension to interesting cases in the bivariate setting. The singularity localization is integrated within the approximation procedure, and thus it is less sensitive to noise. In the next section, we demonstrate that the simple idea presented in Section 2 has the potential to solve some non-trivial bivariate approximation problems.
3. Non-Smooth Bivariate Approximations
As demonstrated in the 1D case, the non-linear space of functions defined by uniform splines, together with the simple operations min and max, may be used to approximate univariate piecewise-smooth continuous functions. In the bivariate case, we consider functions with derivative discontinuities or jump discontinuities across curves. The objectives of this section are fourfold:
- To exhibit a range of piecewise-smooth bivariate functions that can be represented by simple non-linear operations (such as min and max) on smooth functions.
- To suggest some non-linear least-squares approximation procedures for the approximation of piecewise-smooth bivariate functions.
- To present interesting examples of approximating piecewise-smooth bivariate functions given noisy data.
- To provide a basic approximation result.
3.1. Normals’ Discontinuity across Curves—Problem A
We start with a numerical demonstration of a direct extension of the univariate approach to the approximation of continuous piecewise-smooth bivariate functions. Recalling the 1D discussion, the choice of a min or a max operation depends on the sign of $f'(s^-) - f'(s^+)$. In the 2D case, we refer to an analogous condition involving the slopes of the graph along the singularity curves. A discontinuity (singularity) of the normals of a bivariate function $f$ is said to be convex along a curve if the graph of $f$ has a locally convex (V-shaped) profile transversal to the curve at every point along it (e.g., see Figure 3), and it is considered to be concave if the profile is locally concave. In a neighborhood of a concave singularity (discontinuity) curve, the function may be described as the minimum between two (or more) smooth functions, and near a convex singularity curve the function may be defined as the maximum of two or more smooth functions. Let us consider the following noisy data, $\{f_i \approx f(x_i)\}_{x_i \in X}$, taken from a function with convex singularities. For the numerical experiment, we take $X$ as the set of data points on a square grid of mesh size $h$ in the domain $D$, and the provided noisy data are shown in Figure 4. In this case, the function has a ‘3-corner’-type singularity, where $f$ has a convex singularity along three curves meeting at a point. Therefore, we look for three spline functions, $g_1, g_2, g_3$, so that
\[ f(x) \approx \max\{g_1(x), g_2(x), g_3(x)\}, \qquad x \in D, \]
where $g_1, g_2, g_3$ solve the non-linear least-squares problem:
\[ \min_p \sum_{x_i \in X} \big( \max\{g_1(x_i; p), g_2(x_i; p), g_3(x_i; p)\} - f_i \big)^2. \tag{8} \]
Figure 3.
The blended approximation over $D$.
Figure 4.
The noisy data over $D$.
Within this example, we would also like to show how to blend two non-smooth approximations. Therefore, we consider the approximation problem on two partially overlapping sub-domains, $D^1$ and $D^2$, of $D$. After solving the approximation problem separately on each sub-domain, the two approximations will be blended into a global one. On each sub-domain, the unknown functions are chosen to be cubic spline functions with a square grid of knots of grid size $H$. Here again, the triplet of functions $g_1, g_2, g_3$ that solves the minimization problem (8) is not unique. However, it turns out that the approximation to $f$ is well-defined by (8); that is, the parts of $g_1, g_2, g_3$ that are relevant to the approximation of $f$ are well-defined.
Let us first consider the approximation on the sub-domain $D^1$. For the particular data shown on the left plot in Figure 5, the solution of (8) yields the piecewise-smooth approximation depicted on the right plot. In this plot, we see the full graphs of the three functions $g_1, g_2, g_3$ (for this sub-domain), while the approximation is only the upper part (the maximal values) of these graphs. The solution to the optimization problem (8) has been found using a differential evolution procedure [8]. As an initial guess for the three unknown functions, we take, as in the univariate case, the spline function that approximates the data over the whole sub-domain $D^1$. Next, we look for the approximation on $D^2$, which partially overlaps $D^1$. The relevant data and the resulting approximation are shown in Figure 6.
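The following Python/SciPy sketch mirrors this construction on synthetic data of our own making (the maximum of three planes plus noise, a crude stand-in for the three-corner function above), with tensor-product cubic splines on a coarse knot grid and differential evolution over all three coefficient arrays, as in (8). All sizes are illustrative, and the run is slow, as discussed in Section 4.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import differential_evolution

def design_1d(x, n_coef, degree=3):
    # Open-uniform cubic B-spline design matrix on [0, 1] (illustrative basis).
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_coef - degree + 1),
                            np.ones(degree)])
    return BSpline.design_matrix(x, knots, degree).toarray()

# Synthetic three-corner data: the max of three planes, sampled with noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 25)
X, Y = np.meshgrid(t, t, indexing="ij")
planes = [0.6 * (X - 0.5) - 0.2 * (Y - 0.5),
          -0.5 * (X - 0.5) + 0.4 * (Y - 0.5),
          0.1 - 0.3 * (Y - 0.5)]
F = np.maximum.reduce(planes) + 0.02 * rng.standard_normal(X.shape)

n = 4                                     # 4x4 coefficients per spline
B1 = design_1d(t, n)

def cost(p):
    # The functional (8): least squares of max{g1, g2, g3} against the data.
    sheets = [B1 @ c.reshape(n, n) @ B1.T for c in np.split(p, 3)]
    return np.sum((np.maximum.reduce(sheets) - F) ** 2)

res = differential_evolution(cost, bounds=[(-2, 2)] * (3 * n * n), seed=2,
                             maxiter=300)
g1c, g2c, g3c = np.split(res.x, 3)        # recovered coefficient grids
```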
Figure 5.
The noisy data and the three-corner approximation over $D^1$.
Figure 6.
The noisy data and the approximation over $D^2$.
To achieve an approximation over the whole domain $D$, we now explain how to blend the two approximations defined on $D^1$ and on $D^2$. The singularity curves of the two approximations do not necessarily overlap on $D^1 \cap D^2$. Therefore, a direct blending of the two approximations will not provide a smooth transition of the singularity curve. The appropriate blending should be completed between the corresponding spline functions generating these singularity curves. On each sub-domain, the approximation is defined by another triplet of splines, $\{g_j^1\}_{j=1}^3$ on $D^1$ and $\{g_j^2\}_{j=1}^3$ on $D^2$. For the approximation over $D^1 \cap D^2$, only two of the splines are active in the final max operation, and the graph of the third spline is below the maximum of the other two. To prepare for the blending step, we have to match appropriate pairs of both triplets, and this can easily be completed by proximity over the blending zone $D^1 \cap D^2$. The final approximation over $D$ is defined by $\max\{\tilde g_1, \tilde g_2, \tilde g_3\}$, where the $\tilde g_j$ are defined by blending the matched pairs using the simplest blending function. The resulting blended approximation over $D$, for the data provided in Figure 4, is displayed in Figure 3.
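A minimal sketch of this blending step, assuming (our choice) that $D^1$ and $D^2$ overlap in a vertical strip $a \le x \le b$ and that the matched splines are given as callables:

```python
import numpy as np

def ramp(t):
    # C^1 cubic ramp: 0 for t <= 0, 1 for t >= 1 (one simple blending function).
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_pair(s1, s2, a, b):
    # Blend a matched pair: equal to s1 left of the strip, to s2 right of it.
    def s(x, y):
        w = ramp((x - a) / (b - a))
        return (1.0 - w) * s1(x, y) + w * s2(x, y)
    return s

# The global approximation is then the max over the three blended pairs:
# F(x, y) = max over j of blend_pair(s1[j], s2[j], a, b)(x, y).
```

Because the blending acts on the generating splines rather than on the final max, the blended singularity curves move smoothly across the overlap zone.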
3.2. Jump Discontinuity across a Curve—Problem B
Another interesting problem in bivariate approximation is the approximation of a function with a discontinuity across a curve. Consider the case of a function defined over a domain $D$, with a discontinuity across a (simple) curve $\Gamma$, separating $D$ into two sub-domains, $D_1$ and $D_2$. We assume that $f_1 = f|_{D_1}$ and $f_2 = f|_{D_2}$ are smooth on $D_1$ and $D_2$, respectively. Such problems, and especially the problem of approximating $\Gamma$, appear in image segmentation. Efficient algorithms for constructing $\Gamma$ that are useful even for more involved data are the method of snakes, or active contours, and the level-set method. The method of snakes, introduced in [9], iteratively finds contours that approach the contour separating two distinctive regions in an image, with applications to shape modeling [10]. The level-set method, first suggested in [11], is also an iterative method for approximating $\Gamma$, using a variational formulation for minimizing appropriate energy functionals. More recent algorithms for the approximation of piecewise-smooth functions in two and three dimensions have been introduced in [12], using data on a grid, and in [13], for scattered data. A variational spline level-set approach has been suggested in [14]. Here, the focus is on simultaneously approximating the curve $\Gamma$ and the function on $D_1$ and $D_2$. This goal is reflected in the cost functional used below, and, as demonstrated in Section 3.5, we can also handle non-simple topologies of $\Gamma$, such as a three-corner discontinuity. The following procedure for treating a jump singularity comes as a natural extension of the framework for approximating a continuous function with derivative discontinuity, as suggested in Section 3.1:
Again, we look for three spline functions, $g$, $g_1$, and $g_2$, such that the zero-level set of $g$ approximates the singularity curve $\Gamma$, $g_1$ approximates $f$ on $D_1$, and $g_2$ approximates $f$ on $D_2$. Formally, we would like to minimize the following objective function:
\[ G(p) = \sum_{\{x_i \in X :\, g(x_i) \ge 0\}} \big( g_1(x_i) - f_i \big)^2 + \sum_{\{x_i \in X :\, g(x_i) < 0\}} \big( g_2(x_i) - f_i \big)^2. \tag{9} \]
Note that the non-linearity of the minimization problem here, which we denote as Problem B, is due to the non-linear operation of sign checking. This approximation problem may seem to be more complicated than Problem A of the previous section, but, actually, it is somewhat simpler. While in Problem A the unknown coefficients of all three splines appear in a non-linear form in the objective function (due to the max operation), here, only the coefficients of $g$ influence the value of $G$ in a non-linear manner. This is due to the observation that, once $g$ is known, the functions $g_1$ and $g_2$ that minimize $G$ are defined via a linear system of equations. Given this observation, and for reasons that will be clarified below, we use a slight variation of the optimization problem. Namely, we look for a function $g$ that minimizes $G$, where $g_1$ and $g_2$ are defined by the (linear) least-squares problem:
\[ g_j = \operatorname{argmin}_{s} \sum_{x_i \in X_j} \big( s(x_i) - f_i \big)^2, \qquad j = 1, 2, \tag{10} \]
where $Z(g)$ denotes the zero-level set of $g$, $h$ is the ‘mesh size’ in the data set $X$, and
\[ X_1 = \{x_i \in X : g(x_i) > 0,\ \operatorname{dist}(x_i, Z(g)) \ge \kappa h\}, \qquad X_2 = \{x_i \in X : g(x_i) < 0,\ \operatorname{dist}(x_i, Z(g)) \ge \kappa h\}, \]
with $\kappa$ a fixed constant.
For non-noisy data, we would like to achieve an approximation order $O(H^4)$ to $f_1$ and $f_2$ on $D_1$ and $D_2$, respectively, where $H$ is the knots' grid size. This can be obtained by using proper boundary conditions in the computation of $g_1$ and $g_2$, e.g., by extending the data by local polynomial approximations. We thus consider a third version of the least-squares problem for $g_1$ and $g_2$:
\[ g_j = \operatorname{argmin}_{s} \sum_{x_i \in X} \big( s(x_i) - \tilde f_{j,i} \big)^2, \qquad j = 1, 2. \tag{11} \]
In (11), $\{\tilde f_{1,i}\}$ is the provided data on $D_1$ together with the extension of these data into $D_2$, and $\{\tilde f_{2,i}\}$ is the provided data on $D_2$ together with the extension of these data into $D_1$. The extension operator should be exact for cubic polynomials.
Remark 2.
Since $g$ may be defined up to a multiplying factor, we may restrict its unknown coefficients to lie in a compact box, and thus the existence of a global minimizer in (9)–(11) is ensured.
Let us now describe a numerical experiment based on the above framework. The function we would like to approximate is defined on $D$, and it has a jump discontinuity across a sinusoidal-shaped curve. We may consider two types of noisy data. The first includes noise in the data values, and the second includes noise in the location of the singularity curve $\Gamma$. The three unknown functions are again cubic spline functions with a square grid of knots of grid size $H$. However, the unknown parameters $p$ in (9) are just the coefficients of $g$. The other two spline functions are computed within the evaluation procedure of $G(p)$ by solving the linear system of equations for their coefficients, i.e., the system defined by the least-squares problem (10). The noisy data of the second type (noise in the location of $\Gamma$), and the resulting approximation obtained by minimizing (9), are displayed in Figure 7 and Figure 8.
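A compact sketch of this semi-linear solver follows, on synthetic data of our own making (a jump across a sinusoidal curve, with noise of the first type): the outer optimization runs only over the coefficients of $g$, and, for each candidate $g$, the fits $g_1, g_2$ are linear least-squares solves over point sets kept away from the zero-level set, as in (10). The dilation-based proxy for the distance condition, the test function, and all sizes are our own illustrative choices.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.ndimage import binary_dilation
from scipy.optimize import differential_evolution

def design_1d(x, n_coef, degree=3):
    # Open-uniform cubic B-spline design matrix on [0, 1] (illustrative basis).
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_coef - degree + 1),
                            np.ones(degree)])
    return BSpline.design_matrix(x, knots, degree).toarray()

rng = np.random.default_rng(3)
m = 40
t = np.linspace(0, 1, m)
Xg, Yg = np.meshgrid(t, t, indexing="ij")
upper = Yg > 0.5 + 0.15 * np.sin(2 * np.pi * Xg)        # our sinusoidal curve
F = np.where(upper, 1.0 + 0.3 * Xg, 0.2 * Xg * Yg)
F = F + 0.01 * rng.standard_normal(Xg.shape)            # noise of the 1st type

n = 5                                                    # 5x5 coefficients per spline
B = np.kron(design_1d(t, n), design_1d(t, n))            # rows ordered like F.ravel()
f = F.ravel()

def cost(p, kappa=2):
    pos = (B @ p).reshape(m, m) >= 0
    band = binary_dilation(pos) & binary_dilation(~pos)  # grid cells straddling Z(g)
    near = binary_dilation(band, iterations=kappa)       # proxy for dist < kappa*h
    total = 0.0
    for side in (pos, ~pos):
        keep = (side & ~near).ravel()
        if keep.sum() < B.shape[1]:                      # too few points: reject g
            return 1e12
        c, *_ = np.linalg.lstsq(B[keep], f[keep], rcond=None)
        total += np.sum((B[side.ravel()] @ c - f[side.ravel()]) ** 2)
    return total

res = differential_evolution(cost, bounds=[(-1, 1)] * (n * n), seed=3, maxiter=200)
```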
Figure 7.
Discontinuity across a noisy curve $\Gamma$.
Figure 8.
The approximation using noisy curve data.
For a function with a more involved shape of the singularity curve, we would suggest subdividing the domain into partially overlapping patches and then blending the approximations over the individual patches into a global approximation. As in the blending suggested for Problem A, the blending of two approximations to jump discontinuities over partially overlapping patches $D^1$ and $D^2$ should be performed on the functions that generate the approximations on the different patches. Here, one should take care of the fact that the function $g$ is not uniquely defined by the optimization problem (9). Let us denote by $g^1$ and $g^2$ the functions generating the singularity curve on $D^1$ and $D^2$, respectively. To achieve a nice blending of the two curves, we suggest scaling one of the two functions, say $g^2$, so that $g^2 \approx g^1$ on $D^1 \cap D^2$. It is important to match the two functions only on that part of $D^1 \cap D^2$ that is close to the zero curves defined by $g^1$ and $g^2$.
3.3. Problem B—Approximation Analysis
The approximation problem is as follows: consider a piecewise-smooth function $f$ defined over a domain $D$, with a discontinuity across a simple, smooth curve $\Gamma$, separating $D$ into two open sub-domains, $D_1$ and $D_2$. We assume that $f_1$ and $f_2$ are smooth, with bounded derivatives of order four on $D_1$ and $D_2$, respectively, and so is the curve $\Gamma$. Let $X$ be a grid of data points of grid size $h$, and let us consider the approximations for Problem B using bi-cubic spline functions with knots on a grid of size $H$. The classical result on least-squares approximation by cubic splines implies an approximation order $O(H^4)$ to a function with bounded derivatives of order four (provided there are enough data points for a well-posed solution). On the other hand, even in the univariate case, the location of a jump discontinuity in a piecewise-smooth function is inherently determined only up to an $O(h)$ error. Therefore, the best we can expect from a good approximation procedure for $f$ such as above is the following:
Theorem 1.
Consider Problem B on $D$ and let $\tilde g$ be a bi-cubic spline function (with knots' grid size $H$), which provides a local minimum to (9), with $\tilde g_1$ and $\tilde g_2$ defined by minimizing (11). Denote the segmentation defined by $\tilde g$ by $\tilde D_1$ and $\tilde D_2$. For a fixed $H$, and for $h$ small enough, there exists such a local minimizer such that if $x_i \in (X \cap D_1) \cap \tilde D_2$ or $x_i \in (X \cap D_2) \cap \tilde D_1$, then $\operatorname{dist}(x_i, \Gamma) = O(H^4)$.
Proof.
The theorem says that the zero-level set of $\tilde g$ separates the data set $X$ well into the two parts, and only data points that are very close to $\Gamma$ may appear in the wrong segment. To prove this result, we first observe that the curve $\Gamma$ can be approximated by the zero-level set of bi-cubic splines with approximation error $O(H^4)$. One such spline would be $\hat g$, the approximation to the signed distance function related to the curve $\Gamma$. Fixing $g = \hat g$ determines $\hat g_1$ and $\hat g_2$, which minimize $G$ for this $g$, and we denote the corresponding value $\hat G$. We note that the contribution to the value of $\hat G$ is $O(H^8)$ from a point that falls on the right side of $Z(\hat g)$, and it is $O(1)$ from a point on the wrong side of $Z(\hat g)$. For a small enough $h$, only a small number of points will fall on the wrong side of $Z(\hat g)$, and any choice of $g$ that induces more points on the wrong side will induce a larger value of $G$. The minimizing solution induces a value $\tilde G \le \hat G$, and this can be achieved only by reducing the set of ‘wrong side’ points. Since $\hat g$ already defines an $O(H^4)$ separation approximation, only points that are at distance $O(H^4)$ from $\Gamma$ may stay on the wrong side in the local minimizer that evolves by a continuous change in $g$, which reduces $G$. □
Corollary 1.
If the least-squares problems defining $\tilde g_1$ and $\tilde g_2$ by (10) are well-posed, we obtain
\[ |f(x) - \tilde g_j(x)| = O(H^4), \qquad x \in \tilde D_j \cap D_j,\ \operatorname{dist}(x, \Gamma) \ge \kappa h, \quad j = 1, 2. \]
Remark 3.
The above well-posedness condition can be checked while computing $\tilde g_1$ and $\tilde g_2$. Also, an approximation order $O(H^4)$ up to the singularity curve can be obtained by using proper boundary conditions in the computation of $\tilde g_1$ and $\tilde g_2$, e.g., by extending the data by local polynomial approximations, as suggested in (11).
Remark 4.
The need to restrict the set of data points defining $g_1$ and $g_2$ in (10) emerged from the conditions needed for the proof of Theorem 1. As shown in the numerical example below, this restriction may be very important in practical applications.
3.4. Noisy Data of the 1st Type
This section demonstrates the performance of the method for the approximation of noisy data of a function with a jump discontinuity. Furthermore, we use this example to emphasize the importance of using the restricted sets $X_1, X_2$ in (10) rather than using all of $X$. The underlying function and its noisy version are displayed in Figure 9. In the numerical test, we have used the same mesh and knot sizes as in the previous example. In Figure 10, we show the results with and without restricting the set of points that participate in the computation of $g_1$ and $g_2$. In the left graph, we note that the approximation in the inner region is infected by wrong values from the outer region, and this is corrected in the right graph, where the least-squares approximations use values that are not too close to the discontinuity curve. In Figure 11, we see two approximations to the exact singularity curve (in red), using two different knots' grid sizes, together with the singularity curve of the underlying function. As expected, a smaller $H$ enables higher flexibility of $g$ and a better approximation to the exact curve.
Figure 9.
The underlying function and its noisy data of the 1st type.
Figure 10.
Two approximations using different point sets in the least-squares approximation.
Figure 11.
Two approximations to the exact singularity curve, using two different knots' grid sizes.
3.5. Three-Corner Jump Discontinuity—Problem C
Combining elements from Problems A and B, we can now approach the more complex problem of a three-corner discontinuity. Consider the case of a function defined over a domain $D$, with a discontinuity across three curves meeting at a common point, subdividing $D$ into three sub-domains, $D_1$, $D_2$, and $D_3$, as in Figure 12. We assume that $f_j = f|_{D_j}$ is smooth on $D_j$, $j = 1, 2, 3$. Following the above discussions, the following procedure is suggested:
Figure 12.
Three-corner discontinuity—noisy data and approximation.
We look for three spline functions, $g_1, g_2, g_3$, approximating $f$ on $D_1, D_2, D_3$, respectively. Here, the approximation of the segmentation into three domains cannot be conducted via a zero-level set approach. Instead, we look for an additional triplet of spline functions, $w_1, w_2, w_3$, which define approximations $\tilde D_j$ to $D_j$ as follows:
\[ \tilde D_j = \{x \in D : w_j(x) = \max\{w_1(x), w_2(x), w_3(x)\}\}, \qquad j = 1, 2, 3. \tag{12} \]
Denoting by $p$ the set of coefficients of the splines $w_1, w_2, w_3$, we would like to minimize the following objective function:
\[ G(p) = \sum_{j=1}^{3} \sum_{x_i \in X \cap \tilde D_j} \big( g_j(x_i) - f_i \big)^2. \tag{13} \]
Hence, the segmentation is defined by a max operation, as in Problem A. Given a segmentation of $D$ into $\{\tilde D_j\}$, the triplet $g_1, g_2, g_3$ is defined, as in Problem B, by a system of linear equations that defines the least-squares solution of (13). To achieve a better approximation on $\tilde D_j$, in view of Theorem 1, the least-squares approximation for $g_j$ should exclude data points that are near the joint boundaries of $\{\tilde D_j\}$.
For a numerical illustration of Problem C and the approximation obtained by minimization of (13), we took noisy data from a function with a three-corner discontinuity in $D$. All the unknown spline functions, $g_1, g_2, g_3$ and $w_1, w_2, w_3$, are bi-cubic with a square grid of knots of grid size $H$. Since only the splines $w_1, w_2, w_3$ enter in a non-linear way into (13), the minimization problem involves only their coefficients as unknowns. As in all the previous examples, we have used a differential evolution algorithm to find an approximate solution to this minimization problem. The noisy data and the resulting approximation are shown in Figure 12.
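The sketch below assembles Problem C on synthetic three-corner data of our own making: the segmentation splines $w_1, w_2, w_3$ are the only non-linear unknowns (through the argmax in (12)), and each $g_j$ is a linear least-squares fit on its segment. For brevity, the sketch omits the exclusion of points near the joint boundaries recommended above; all names and sizes are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import differential_evolution

def design_1d(x, n_coef, degree=3):
    # Open-uniform cubic B-spline design matrix on [0, 1] (illustrative basis).
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_coef - degree + 1),
                            np.ones(degree)])
    return BSpline.design_matrix(x, knots, degree).toarray()

# Synthetic data: three smooth pieces glued along the argmax of three planes.
rng = np.random.default_rng(4)
m = 35
t = np.linspace(0, 1, m)
Xg, Yg = np.meshgrid(t, t, indexing="ij")
label = np.argmax(np.stack([Xg - 0.5, 1.0 - Xg - Yg, Yg - Xg]), axis=0)
pieces = [0.5 * Xg, 1.0 - 0.3 * Yg, 2.0 + 0.2 * Xg * Yg]
F = np.choose(label, pieces) + 0.01 * rng.standard_normal(Xg.shape)

n = 4
B = np.kron(design_1d(t, n), design_1d(t, n))   # rows ordered like F.ravel()
f = F.ravel()

def cost(p):
    # Segmentation from the argmax of w1, w2, w3; then per-segment linear fits.
    W = np.stack([B @ c for c in np.split(p, 3)])
    seg = np.argmax(W, axis=0)
    total = 0.0
    for j in range(3):
        mask = seg == j
        if mask.sum() < B.shape[1]:              # degenerate segment: reject
            return 1e12
        c, *_ = np.linalg.lstsq(B[mask], f[mask], rcond=None)
        total += np.sum((B[mask] @ c - f[mask]) ** 2)
    return total

res = differential_evolution(cost, bounds=[(-1, 1)] * (3 * n * n), seed=4,
                             maxiter=200)
```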
4. Summary and Issues for Further Research
We have introduced a unified framework for approximating functions with normals' discontinuity or jump discontinuity across curves. The method may be viewed as an extension of the well-known procedures of Boolean operations in solid geometry. In this work, it is suggested to use a kind of Boolean operation on splines as an approximation tool. Through a series of non-trivial examples, we have presented the potential of this approach to achieve high-quality approximations in many applications. It is interesting to note that all the non-linearity in the suggested approximations can be expressed by the sign operation or, equivalently, by the absolute-value operation. The approximation procedure requires high-dimensional non-linear optimization, and thus the complexity of computing the approximations is very high. For all the numerical examples in this paper, we have used a very powerful Matlab code written by Markus Buehren, based upon the method of differential evolution [8]. The execution time ranges from 1 s for the simple univariate problem to 80 s for the bivariate Problem C. The differential evolution algorithm usually finds a local minimum and not ‘the global minimizer’. Yet, as demonstrated, it finds very good approximations, and it seems to be robust to noise. A main issue for further study would be the acceleration of the optimization process, e.g., by generating good initial candidates for the optimization. Yet, despite the high computational cost, the method may still be very useful for high-quality up-sampling, and for functions (or surfaces) with few singularity curves. In a scene with many discontinuities, we would suggest subdividing the domain into patches, each containing one or two singularity curves. Choosing partially overlapping patches, the local approximations can be blended into a global approximation, as demonstrated in Section 3.1. Another simple idea is based upon Corollary 1, which tells us to ignore a few data points near the approximated singularity curve to attain a higher approximation order.
Other important issues for further research would be the following:
- Improved optimization: Here, we believe that geometric considerations may be used to significantly accelerate the minimization procedure. Gradient-descent algorithms, similar to those used in [14], may be helpful here.
- Constructive rules for choosing the grid size for the splines.
- Other basis functions instead of splines.
- Using the $\ell_1$-norm instead of the $\ell_2$-norm in the objective functions.
- Using the basic procedures presented in this paper within neural network algorithms for the analysis of multivariate data.
Funding
This research received no external funding.
Data Availability Statement
Data is contained within the article.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Harten, A. ENO schemes with subcell resolution. J. Comput. Phys. 1989, 83, 148–184.
- Arandiga, F.; Cohen, A.; Donat, R.; Dyn, N. Interpolation and approximation of piecewise smooth functions. SIAM J. Numer. Anal. 2005, 43, 41–57.
- Archibald, R.; Gelb, A.; Yoon, J. Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 2005, 43, 259–279.
- Archibald, R.; Gelb, A.; Yoon, J. Determining the locations and discontinuities in the derivatives of functions. Appl. Numer. Math. 2007, 58, 577–592.
- Lipman, Y.; Levin, D. Approximating piecewise-smooth functions. IMA J. Numer. Anal. 2010, 30, 1159–1183.
- Candes, E.J.; Donoho, D.L. New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Commun. Pure Appl. Math. 2004, 57, 219–266.
- Binev, P.; Dahmen, W.; DeVore, R.; Petrushev, P. Approximation classes for adaptive methods. Serdica Math. J. 2002, 28, 391–416.
- Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution—A Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005.
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
- Malladi, R.; Sethian, J.A.; Vemuri, B.C. Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 158–175.
- Osher, S.; Sethian, J.A. Fronts propagating with curvature dependent speed: Algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
- Amat, S.; Levin, D.; Ruiz-Álvarez, J. A two-stage approximation strategy for piecewise smooth functions in two and three dimensions. IMA J. Numer. Anal. 2022, 42, 3330–3359.
- Amir, A.; Levin, D. High order approximation to non-smooth multivariate functions. Comput. Aided Geom. Des. 2018, 63, 31–65.
- Bernard, O.; Friboulet, D.; Thévenaz, P.; Unser, M. Variational B-spline level-set: A linear filtering approach for fast deformable model evolution. IEEE Trans. Image Process. 2009, 18, 1179–1191.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).