Reconstruction of piecewise-smooth multivariate functions from Fourier data

In some applications, one is interested in reconstructing a function $f$ from its Fourier series coefficients. The problem is that the Fourier series converges slowly if the function is non-periodic or non-smooth. In this paper, we suggest a method for deriving high-order approximations to $f$ using a Pad\'e-like method, namely, by fitting some Fourier coefficients of the approximant to the given Fourier coefficients of $f$. Given the Fourier series coefficients of a piecewise smooth function on a rectangular domain in $\mathbb{R}^d$, we approximate the function by piecewise high-order spline functions. First, the singularity structure of the function is identified. For example, in the 2-D case, we find high-accuracy approximations to the curves separating the smooth segments of $f$. Second, we simultaneously find approximations to all the smooth segments of $f$. We start by developing and demonstrating a high-accuracy algorithm for the 1-D case, and we use this algorithm to step up to the multidimensional case.


Introduction
Fourier series expansion is a useful tool for representing and approximating functions, with applications in many areas of applied mathematics. The quality of the approximation depends on the smoothness of the approximated function and on whether or not it is periodic. For functions that are not periodic, the convergence rate is slow near the boundaries, and the approximation by partial sums exhibits the Gibbs phenomenon. Several approaches have been used to improve the convergence rate, mostly in the one-dimensional case. One approach is to filter out the oscillations, as discussed in papers such as [6] and [12]. Another useful approach is to transform the Fourier series into an expansion in a different basis. For the univariate case this approach is very efficient, as shown in [6] using Gegenbauer polynomials with suitably chosen parameters. A further improvement of this approach is presented in [5] using Freud polynomials, achieving very good results for univariate functions with singularities.
An algebraic approach for reconstructing a piecewise smooth univariate function from its first $N$ Fourier coefficients has been realized by Eckhoff in a series of papers [2], [3], [4]. There, the jumps are determined by a corresponding system of linear equations. A full analysis of this approach is presented by Batenkov [1]. Nersessian and Poghosyan [10] have used a rational Padé-type approximation strategy for approximating univariate non-periodic smooth functions. For multiple Fourier series of smooth non-periodic functions, a convergence acceleration approach has been suggested by Levin and Sidi [8]. More challenging is the case of multivariate functions with discontinuities, i.e., functions which are piecewise smooth. Here again, the convergence rate is slow, and near the discontinuities the approximation exhibits the Gibbs phenomenon. In this paper, we present a Padé-like approach consisting of finding a piecewise-defined spline whose Fourier coefficients match the given Fourier coefficients.
The main contribution of this paper is demonstrating that this approach can be successfully applied to the multivariate case. Namely, we present a strategy for approximating both non-periodic and non-smooth multivariate functions. We derive the numerical procedures involved and provide some interesting numerical results. We start by developing and demonstrating a high accuracy algorithm for the 1-D case, and use this algorithm to step up to the multidimensional case.

The 1-D case
In this section, we present the main tools for approximating a function from its Fourier series coefficients. We define the basis functions, describe the fitting strategy, and develop the computational algorithm. After dealing with the smooth case we move on to approximating a piecewise smooth function with a jump singularity.

Reconstructing smooth non-periodic functions.
Let $f \in C^m[0,1]$, and assume we know the Fourier series expansion of $f$,
$$f(x) = \sum_{n\in\mathbb{Z}} \hat f_n e^{2\pi i n x}. \qquad (2.1)$$
The series converges pointwise for any $x \in [0,1]$; however, if $f$ is not periodic the convergence may be slow, and if $f(1) \neq f(0)$ the convergence is not uniform and the Gibbs phenomenon occurs near $0$ and near $1$. As discussed in [11] and [8], one can apply convergence acceleration techniques to improve the convergence rate of the series. Another convergence acceleration approach is suggested by Gottlieb and Shu [6] using Gegenbauer polynomials. Yet, in both approaches, the convergence rate is not much improved near $0$ and near $1$. We suggest an approach in the spirit of Padé approximation. A Padé approximant is a rational function whose power series agrees as much as possible with the given power series of $f$. Here we look for approximations to $f$ whose Fourier coefficients agree with a subset of the given Fourier coefficients of $f$. The approximation space can be any favourable linear approximation space, such as polynomials or trigonometric functions. We choose to build the approximation using $k$th order spline functions, represented in the B-spline basis:
$$S(x) = \sum_{i=1}^{N_d} a_i B^{[k]}_d(x - id),$$
where $B^{[k]}_d(x)$ is the B-spline of order $k$ with equidistant knots $\{-kd, \dots, -2d, -d, 0\}$, and $N_d = 1/d + k - 1$ is the number of B-splines whose shifts do not vanish on $[0,1]$. The advantage of using spline functions is threefold:
• The locality of the B-spline basis functions.
• The known approximation power of $k$th order splines.
• The availability of the Fourier coefficients of the B-splines.
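The fitting strategy above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's computation: it uses cubic splines ($k = 4$, $d = 0.125$) rather than 12th degree ones, matches the coefficients $n = 0, \dots, 40$, and computes all Fourier coefficients by simple midpoint quadrature instead of analytically.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-splines (order k = 4) with knot distance d = 0.125 on [0, 1];
# N_d = 1/d + k - 1 shifted B-splines do not vanish on the interval.
k, d = 4, 0.125
Nd = int(round(1 / d)) + k - 1

x = (np.arange(4000) + 0.5) / 4000        # midpoint quadrature grid
B = np.zeros((Nd, x.size))
for i in range(Nd):
    # Knots chosen so that basis i is supported on [(i-k+1)d, (i+1)d].
    t = (np.arange(k + 1) + i - (k - 1)) * d
    B[i] = np.nan_to_num(BSpline.basis_element(t, extrapolate=False)(x))

f = x * np.exp(x) + np.sin(8 * x)         # the paper's smooth test function
M = 40                                    # match coefficients n = 0, ..., M
E = np.exp(-2j * np.pi * np.outer(np.arange(M + 1), x))

fhat = (E * f).mean(axis=1)               # \hat f_n by midpoint quadrature
A = np.einsum('nx,ix->ni', E, B) / x.size # \hat B_{i,n}

# The a_i are real, so stack real and imaginary parts into one real system.
Ar = np.vstack([A.real, A.imag])
br = np.concatenate([fhat.real, fhat.imag])
a = np.linalg.lstsq(Ar, br, rcond=None)[0]

S = a @ B                                 # the spline approximant on the grid
err = np.max(np.abs(f - S))
```

Since $f$ is real, matching the coefficients with $n \ge 0$ matches those with $n < 0$ as well, by conjugate symmetry.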
Notice that it is enough to consider the Fourier coefficients with non-negative indices. We denote by $B_i$ the restriction of $B^{[k]}_d(x - id)$ to $[0,1]$, and by $\{\hat B_{i,n}\}$ its Fourier coefficients. The induced system of linear equations for the coefficients $a = \{a_i\}$ is $Aa = b$, where $A_{n,i} = \hat B_{i,n}$ (2.4) and $b_n = \hat f_n$ (2.5). We consider the test function $f(x) = x\exp(x) + \sin(8x)$, assuming only its Fourier series coefficients are given. We have used only the 20 Fourier coefficients $\{\hat f_n\}_{n=0}^{19}$, and computed an approximation using 12th degree splines with equidistant knots' distance $d = 0.1$. For this case, the matrix $A$ is of size $19 \times 19$, and $\mathrm{cond}(A) = 5.75 \times 10^{20}$. We have employed an iterative refinement algorithm, described below, to obtain a high precision solution. The results are shown in the following two figures. In Figure 2 we see the test function on the left and the approximation error on the right. Figure 3 shows the graph of $\log_{10}|\hat f_n|$ in blue and the graph of $\log_{10}|\hat f_n - \hat S_n|$ in red. Notice the matching of the first Fourier coefficients, reflected in the beginning of the red graph.
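The refinement loop mentioned above (detailed in Remark 2.1 below) can be sketched as follows. The Hilbert matrix here is only an illustrative ill-conditioned test system, not the paper's matrix, and the gain of refinement is most pronounced when the condition number is far larger.

```python
import numpy as np
from scipy.linalg import hilbert

A = hilbert(10)                 # an ill-conditioned test matrix, cond ~ 1e13
x_true = np.ones(10)
b = A @ x_true

Apinv = np.linalg.pinv(A)       # generalized (Moore-Penrose) inverse
x = Apinv @ b                   # initial solution x^(0)

for _ in range(3):              # a few correction steps
    r = b - A @ x               # residual r^(k) = b - A x^(k)
    x = x + Apinv @ r           # x^(k+1) = x^(k) + pinv(A) r^(k)

res = np.linalg.norm(b - A @ x)
```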
Remark 2.1. (Iterative refinement) The powerful iterative refinement method described in [13], [9] is as follows. For solving a system $Ax = b$, we use some solver, e.g. the Matlab pinv function, to obtain the solution $x^{(0)} = \mathrm{pinv}(A)b$. Next we compute the residual $r^{(0)} = b - Ax^{(0)}$. In case $\mathrm{cond}(A)$ is very large, the residual will be large. We then solve the system again with $r^{(0)}$ on the right-hand side, and use the solution to correct $x^{(0)}$, obtaining $x^{(1)} = x^{(0)} + \mathrm{pinv}(A)r^{(0)}$. We repeat these correction steps a few times, i.e., $r^{(k)} = b - Ax^{(k)}$ and $x^{(k+1)} = x^{(k)} + \mathrm{pinv}(A)r^{(k)}$.

Reconstructing piecewise smooth functions.
Let $f$ be a piecewise smooth function on $[0,1]$, composed of two pieces $f_1 \in C^m[0, s^*]$ and $f_2 \in C^m(s^*, 1]$.
Here again, we assume that all we know about f is its Fourier series expansion. In particular, we do not know the position s * ∈ [0, 1] of the singularity of f . As in the case of a non-periodic function, the existence of a singularity in [0, 1] significantly influences the Fourier series coefficients and implies their slow decay. As we demonstrate below, good matching of the Fourier coefficients requires a good approximation of the singularity location. The approach we suggest here involves finding approximations to f 1 and f 2 simultaneously with a high precision identification of s * .
Let $s$ be an approximation of the singularity location $s^*$, and let us follow the algorithm suggested above for the smooth case. The difference is that now we look for two separate spline approximations, $S_1(x) = \sum_i a_{1i} B_{1i}(x)$ on $[0, s]$ and $S_2(x) = \sum_i a_{2i} B_{2i}(x)$ on $(s, 1]$. The combination $S$ of $S_1$ and $S_2$ constitutes the approximation to $f$. Here again we aim at matching the first $M + 1$ Fourier coefficients of $f$ and of $S$. Now $S$ depends upon the $N_d$ coefficients $\{a_{1i}\}$ of $S_1$, the $N_d$ coefficients $\{a_{2i}\}$ of $S_2$, and upon $s$. Therefore, the minimization process solves for all these unknowns:
$$\min_{s,\{a_{1i}\},\{a_{2i}\}} \sum_{n=0}^{M} |\hat f_n - \hat S_n|^2.$$
The minimization is non-linear w.r.t. $s$, and linear w.r.t. the other unknowns. Therefore, the minimization problem is actually a one-parameter non-linear minimization problem, the parameter being $s$. Using the approximation power of $k$th order splines ($k \le m$), and considering the value of the objective cost function at $s = s^*$, we can deduce that the minimal value of $\sum_{n=0}^{M} |\hat f_n - \hat S_n|^2$ is $O(d^{2k})$. We also observe that an $\epsilon$ deviation from $s^*$ implies a bounded deviation of the minimizing Fourier coefficients:
$$\max_{n\in\mathbb{Z}} |\hat f_n - \hat S_n| \le c_1 \epsilon + c_2 d^k. \qquad (2.10)$$
As shown below, these observations can be used for finding a good approximation to $s^*$. We denote by $B_{1i}$ the restriction of $B^{[k]}_d(x - id)$ to the interval $[0, s]$, and by $B_{2i}$ its restriction to the interval $(s, 1]$. We concatenate these two sequences of basis functions, $\{B_{1i}\}$ and $\{B_{2i}\}$, into one sequence $\{B_i\}$, and denote their Fourier coefficients by $\{\hat B_{i,n}\}_{n\in\mathbb{Z}}$. For a given $s$, the induced system of linear equations for the splines' coefficients $a = (\{a_{1i}\}, \{a_{2i}\})$ is $Aa = b$, defined as in (2.4)-(2.5). Due to the locality of the B-splines, some of the basis functions $\{B_{1i}\}$ and $\{B_{2i}\}$ may be identically zero. It would thus seem better to use only the non-zero basis functions; however, from our experience, since we use the generalized inverse approach for solving the system of equations, using all the basis functions gives the same solution.
The generalized inverse approach computes the minimal-norm least-squares solution to a system of linear equations that lacks a unique solution. It is also called the Moore-Penrose inverse, and is computed by the Matlab function pinv. The above construction can also be carried out in the case of several singular points.

Finding s * .
We present the strategy for finding $s^*$ together with a specific numerical example. We consider a test function on $[0,1]$ with a jump discontinuity at $s^* = 0.5$ (2.13). As expected, the Fourier series of $f$ is slowly convergent, and it exhibits the Gibbs phenomenon near the ends of $[0,1]$ and near $s^*$. In Figure 4, on the left, we present the sum of the first 200 terms of the Fourier series, computed at 20000 points in $[0,1]$. This sum is unacceptable as an approximation to $f$, and yet we can use it to obtain a good initial approximation $s_0 \approx s^*$. On the right graph, we plot the first differences of the values in the left graph. The maximal difference is achieved at a distance of order $10^{-4}$ from $s^*$. Having a good approximation $s_0 \approx s^*$ is not enough for achieving a good approximation to $f$. However, $s_0$ can be used as a starting point for an iterative method leading to a high precision approximation to $s^*$. To support this assertion we present the graph in Figure 5, depicting the maximum norm of the difference between 1000 of the given Fourier coefficients and the corresponding Fourier coefficients of the approximation $S$, as a function of $s$, near $s^* = 0.5$. This function is almost linear on each side of $s^*$, and simple quasi-Newton iterations converge very fast to $s^*$.
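The localization of $s_0$ from the Gibbs oscillations can be sketched as follows. The indicator of $[0.5, 1]$ used here is an illustrative jump function, not the paper's test function (2.13), and its Fourier coefficients are written in closed form.

```python
import numpy as np

# Fourier coefficients of the indicator of [0.5, 1]: a unit jump at s* = 0.5.
N = 200
n = np.arange(-N, N + 1)
safe_n = np.where(n == 0, 1, n)                   # avoid division by zero
fhat = np.where(n == 0, 0.5,
                (1.0 - np.exp(-1j * np.pi * n)) / (-2j * np.pi * safe_n))

x = np.linspace(0, 1, 10000, endpoint=False)
partial = (np.exp(2j * np.pi * np.outer(x, n)) @ fhat).real

# The largest first-order difference flags the jump; exclude the domain
# boundaries, where the Gibbs phenomenon appears as well.
diffs = np.abs(np.diff(partial))
window = (x[:-1] > 0.2) & (x[:-1] < 0.8)
idx = np.argmax(np.where(window, diffs, 0.0))
s0 = 0.5 * (x[idx] + x[idx + 1])
```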
After obtaining a high accuracy approximation to $s^*$, we use it for deriving the piecewise spline approximation to $f$. Figure 6 depicts the approximation error, while Figure 7 shows $\log_{10}$ of the absolute values of the given Fourier coefficients of $f$ (in blue), and the corresponding values for the Fourier coefficients of $f - S$ (in red). The graph shows a reduction of $\sim 7$ orders of magnitude.
Figure 6. The approximation error for the 1-D non-smooth case.

The 1-D approximation procedure.
Let us sum up the suggested approximation procedure: (1) Choose the approximation space Π for approximating f 1 and f 2 .
(2) Define the number of Fourier coefficients to be used for building the approximation, such that $M + 1 \ge 2\dim(\Pi)$. (2.14)
(3) Find a first approximation to $s^*$: compute a partial Fourier sum and locate the maximal first-order difference.
(4) For the current $s$, solve the linear system for the splines' coefficients.
(5) Update $s$ by quasi-Newton iterations to reduce the objective function, and go back to (4) to update the approximation.

The smooth 2-D case.
Let $f$ be a bivariate function on $[0,1]^2$, given by its Fourier series expansion $f(x, y) = \sum_{m,n\in\mathbb{Z}} \hat f_{mn} e^{2\pi i m x} e^{2\pi i n y}$. Such series are obtained when solving PDEs using spectral methods. However, if the function is not periodic, or, as in the case of hyperbolic equations, the function has a jump discontinuity along some curve in $[0,1]^2$, the convergence of the Fourier series is slow. Furthermore, the approximation of $f$ by its partial sums suffers from the Gibbs phenomenon near the boundaries and near the singularity curve.
We deal with the case of smooth non-periodic 2-D functions in the same manner as in the univariate case. We look for a bivariate spline function $S$ whose Fourier coefficients match the Fourier coefficients of $f$. As in the univariate case, it is enough to match the coefficients of the low frequency terms in the Fourier series. The technical difference in the 2-D case is that we look for a tensor-product spline approximation, using tensor-product $k$th order B-spline basis functions. (3.2) The system of equations for the B-spline coefficients is the same as the system defined by (2.4)-(2.5) in the univariate case, only here we reshape the unknowns as a vector of length $N_d^2$. We consider the test function, assuming only its Fourier series coefficients are given. We have used only 160 Fourier coefficients, and constructed an approximation using 10th degree tensor-product splines with equidistant knots' distance $d = 0.1$ in each direction. For this case, the matrix $A$ is of size $361 \times 361$, and $\mathrm{cond}(A) = 6.2 \times 10^{30}$. Again, we have employed the iterative refinement algorithm to obtain a high precision solution (relative error $10^{-15}$). The computation time was $\sim 18$ seconds.
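A convenient consequence of the tensor-product basis is that the 2-D coefficient matrix is the Kronecker product of univariate ones, since the Fourier coefficient of $B_i(x)B_j(y)$ factors into the product of univariate coefficients. The sketch below illustrates this with cubic splines (not the paper's 10th degree ones) and a hypothetical separable test function; for a real function we choose to match the half-plane of frequencies $m \ge 0$, $-M \le n \le M$, which determines the rest by conjugate symmetry.

```python
import numpy as np
from scipy.interpolate import BSpline

# Univariate ingredients: cubic B-splines (order 4), knot distance d = 0.25.
k, d, M = 4, 0.25, 20
Nd = int(round(1 / d)) + k - 1
x = (np.arange(2000) + 0.5) / 2000          # midpoint quadrature grid

B = np.zeros((Nd, x.size))
for i in range(Nd):
    t = (np.arange(k + 1) + i - (k - 1)) * d
    B[i] = np.nan_to_num(BSpline.basis_element(t, extrapolate=False)(x))

def coeff_matrix(freqs):
    E = np.exp(-2j * np.pi * np.outer(freqs, x))
    return np.einsum('nx,ix->ni', E, B) / x.size   # \hat B_{i,n}

m_f, n_f = np.arange(0, M + 1), np.arange(-M, M + 1)
Am, An = coeff_matrix(m_f), coeff_matrix(n_f)

# Tensor-product structure: the 2-D matrix is a Kronecker product of 1-D ones.
A2 = np.kron(Am, An)

g = x * np.exp(x)                                  # separable test: f = g(x) g(y)
ghat_m = np.exp(-2j * np.pi * np.outer(m_f, x)) @ g / x.size
ghat_n = np.exp(-2j * np.pi * np.outer(n_f, x)) @ g / x.size
b2 = np.kron(ghat_m, ghat_n)                       # \hat f_{mn} = \hat g_m \hat g_n

Ar = np.vstack([A2.real, A2.imag])
br = np.concatenate([b2.real, b2.imag])
a = np.linalg.lstsq(Ar, br, rcond=None)[0]

# Evaluate S(x, y) = sum_ij a_ij B_i(x) B_j(y) on the grid and compare.
S = B.T @ a.reshape(Nd, Nd) @ B
err = np.max(np.abs(np.outer(g, g) - S))
```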
In Figure 8 we plot the test function on $[0,1]^2$. Note that it has high derivatives near $(0, 1)$. The approximation error $f - S^{[10]}_{0.1}$ is shown in Figure 9. To demonstrate the convergence acceleration of the Fourier series achieved by subtracting the approximation from $f$, we present in Figure 10 $\log_{10}$ of the absolute values of the Fourier coefficients of $f$ (in green) and of the Fourier coefficients of $f - S^{[10]}_{0.1}$ (in blue), for frequencies $0 \le m, n \le 200$. The magnitude of the Fourier coefficients is reduced by a factor of $10^5$, and even more so for the low frequencies, due to the matching strategy used to derive the spline approximation.
Figure 9. The approximation error $f - S^{[10]}_{0.1}$.
Figure 10. $\log_{10}$ of the Fourier coefficients before (green) and after (blue).

The non-smooth 2-D case.
Let $f$ be a piecewise smooth function on $[0,1]^2$, composed of two pieces $f_1 \in C^m(\Omega_1)$ and $f_2 \in C^m(\Omega_2)$, $\Omega_2 = [0,1]^2 \setminus \Omega_1$. Here again, we assume that all we know about $f$ is its Fourier series expansion. In particular, we do not know the position of the dividing curve separating $\Omega_1$ and $\Omega_2$. We denote this curve by $\Gamma^*$, and we assume that it is a $C^m$-smooth curve. As in the case of a non-periodic function, the existence of a singularity curve in $[0,1]^2$ significantly influences the Fourier series coefficients and implies their slow decay. In case of discontinuity of $f$ across $\Gamma^*$, partial sums of the Fourier series exhibit the Gibbs phenomenon near $\Gamma^*$. As demonstrated below, good matching of the Fourier coefficients requires a good approximation of the singularity location. As in the univariate non-smooth case, the computation algorithm involves finding approximations to $f_1$ and $f_2$ simultaneously with a high precision identification of $\Gamma^*$.
Evidently, finding a high precision approximation of the singularity curve $\Gamma^*$ is more involved than finding a high precision approximation to the singularity point $s^*$ in the univariate case. Let $D_{\Gamma^*}(x, y)$ be the signed-distance function corresponding to the curve $\Gamma^*$, i.e., the distance of $(x, y)$ from $\Gamma^*$, taken with opposite signs on the two sides of the curve. In looking for an approximation to $\Gamma^*$, we look for an approximation to $D_{\Gamma^*}$. Here again we use tensor-product spline approximants, from the same set of spline functions described in the previous section. Since the curve is $C^m$, it can be shown that one can construct a spline function $\tilde D$ of order $k \le m$, with knots' distance $h$, which approximates $D_{\Gamma^*}$ near $\Gamma^*$ so that the Hausdorff distance between the zero level set of $\tilde D$ and $\Gamma^*$ is $O(h^k)$. Let $D_{\bar b}$ be a spline approximation to $D_{\Gamma^*}$, with spline coefficients $\bar b = \{b_{ij}\}_{i,j=1}^{N_h}$:
$$D_{\bar b}(x, y) = \sum_{i,j=1}^{N_h} b_{ij} B^{[k]}_h(x - ih) B^{[k]}_h(y - jh). \qquad (3.4)$$
For a given $D_{\bar b}$ we define the approximation to $f$ similarly to the construction in the univariate case, equations (2.7), (2.8), (2.9). We look here for an approximation $S$ to $f$ which is a combination of two bivariate spline components, $S_1$ on the domain where $D_{\bar b} < 0$ and $S_2$ on the domain where $D_{\bar b} \ge 0$, such that $(2M + 1)^2$ Fourier coefficients of $f$ and $S$ are matched in the least-squares sense:
$$\min \sum_{m,n=-M}^{M} |\hat f_{mn} - \hat S_{mn}|^2. \qquad (3.7)$$
We denote by $B_{1ij}(x, y)$ the restriction of $B^{[k]}_d(x - id)B^{[k]}_d(y - jd)$ to the domain defined by $D_{\bar b}(x, y) < 0$, and by $B_{2ij}(x, y)$ its restriction to the domain defined by $D_{\bar b}(x, y) \ge 0$. We concatenate these two sequences of basis functions into one sequence, denoting their Fourier coefficients by $\{\hat B_{ij,n}\}_{n\in\mathbb{Z}}$, and rearranging them (for each $n$) in vectors of length $2N_d^2$. For a given $D_{\bar b}$, the induced system of linear equations for the splines' coefficients $a = (\{a_{1ij}\}_{i,j=1}^{N_d}, \{a_{2ij}\}_{i,j=1}^{N_d})$ is $Aa = b$, defined as in (2.4)-(2.5). For a given choice of $\bar b = \{b_{ij}\}$, the coefficients are obtained by solving a linear system of equations and properly rearranging the solution. However, finding the optimal $\bar b$ is a non-linear problem that requires an iterative process and is much more expensive.
Remark 3.1. Representing the singularity curve of the approximation $S$ as the zero level set of the bivariate spline function $D_{\bar b}$ is the way to achieve smooth control over the approximation. As a result, the objective function in (3.7) varies smoothly with respect to the spline coefficients $\{b_{ij}\}$.
Here again we choose to demonstrate the whole approximation procedure alongside a specific numerical example.
Figure 11. The test function (3.10) for the 2-D non-smooth case.
In the univariate case, in Section 2.2.1, we used the Gibbs phenomenon in order to find an initial approximation $s_0$ to the singularity location $s^*$. The same idea, with some modifications, is applied here in the 2-D case. The truncated Fourier sum
$$f_{50}(x, y) = \sum_{m,n=-50}^{50} \hat f_{mn} e^{2\pi i m x} e^{2\pi i n y} \qquad (3.11)$$
gives an approximation to $f$, but the approximation suffers from the Gibbs phenomenon near the boundaries of the domain and near the singularity curve $\Gamma^*$. We evaluated $f_{50}$ on a $400 \times 400$ mesh on $[0,1]^2$, and enhanced the Gibbs effect by applying first order differences along the x-direction. The results are depicted in Figure 12. The locations of large x-direction differences and of large y-direction differences within $[0,1]^2$ indicate the location of $\Gamma^*$. Building the initial approximation $D_{\bar b_0}$: searching along 50 horizontal lines (x-direction) for maximal x-direction differences, and along 50 vertical lines (y-direction) for maximal y-direction differences, we have found 72 such maximum points, which we denote by $P_0$. We display these points (in red) in Figure 13, on top of the curve $\Gamma^*$ (in blue). Now we use these points to construct the spline $D_{\bar b_0}$, whose zero level curve is taken as the initial approximation to $\Gamma^*$. To construct $D_{\bar b_0}$ we first overlay on $[0,1]^2$ a net of $11 \times 11$ points, $Q_0$. These are the green points displayed in Figure 14.
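The construction of an initial signed-distance spline from detected edge points can be sketched as follows. The geometry is hypothetical (a circle standing in for $\Gamma^*$), the spline is cubic rather than the paper's order 10, and the signs at the net points are assigned directly from the known curve rather than from the relative position to $P_0$.

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Hypothetical singularity curve: a circle of radius 0.3 about (0.5, 0.5).
theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
px = 0.5 + 0.3 * np.cos(theta)            # edge points P0, assigned value 0
py = 0.5 + 0.3 * np.sin(theta)

# The 11 x 11 net Q0, with signed distances (negative inside the curve).
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
gd = np.hypot(gx - 0.5, gy - 0.5) - 0.3

X = np.concatenate([px, gx.ravel()])
Y = np.concatenate([py, gy.ravel()])
Z = np.concatenate([np.zeros(px.size), gd.ravel()])

# Least-squares tensor-product spline fit D_b0 (cubic, for simplicity).
knots = np.linspace(0.2, 0.8, 4)          # interior knots in each direction
D = LSQBivariateSpline(X, Y, Z, knots, knots, kx=3, ky=3)

vals_on_curve = D.ev(px, py)              # D_b0 should nearly vanish on the curve
```

The zero level curve of the fitted spline then serves as the initial approximation to the singularity curve.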
To each point in $Q_0$ we assign the value of its distance from the set $P_0$, with a plus sign for points which are on the right of or above $P_0$, and a minus sign for the other points. To each point in $P_0$ we assign the value zero. The spline function $D_{\bar b_0}$ is now defined by the least-squares approximation to the values at all the points $P_0 \cup Q_0$. We have used here tensor-product splines of order 10, on a uniform mesh with knots' distance $h = 0.1$. We denote the zero level curve of the resulting $D_{\bar b_0}$ by $\Gamma_0$; this curve is depicted in yellow in Figure 14. It seems that $\Gamma_0$ is already a good approximation to $\Gamma^*$ (in blue), and thus it is a good starting point for achieving the minimization target (3.7).

Improving the approximation to $\Gamma^*$, and building the two approximants.
Starting from $D_{\bar b_0}$ we use a quasi-Newton method for iterative improvement of the approximation to $\Gamma^*$. The expensive ingredient in the computation procedure is the need to recompute the Fourier coefficients of the B-splines for any new set of coefficients $\bar b$ of $D_{\bar b}$. We recall that we need $(2M + 1)^2$ of these coefficients for each B-spline, and we have $2N_d^2$ B-splines. In the numerical example we have used $M = 40$ and $N_d = 19$. To illustrate the issue we present in Figure 15 one of those B-splines whose support intersects the singularity curve. When the singularity curve is updated, the Fourier coefficients of this B-spline must be recalculated.

Remark 3.2. Calculating Fourier coefficients of the B-splines
Calculating the Fourier coefficients of the B-splines is the most costly step in the approximation procedure. For the univariate case the Fourier coefficients of the B-splines can be computed analytically. For the smooth multivariate case, with no singularity within the unit cube, piecewise Gauss quadrature may be used to compute the Fourier coefficients with high precision. The non-smooth multivariate case is more difficult, and more expensive. However, we noticed that using low precision approximations for the Fourier coefficients of the B-splines suffices. For example, in the above example, we have employed a simple numerical quadrature combined with the fast Fourier transform, and we obtained the Fourier coefficients with a relative error $\sim 10^{-5}$. Yet the resulting approximation error is small, $\|f - S\|_\infty < 5 \times 10^{-6}$, as seen in Figure 18.

Using one quasi-Newton step we obtained new spline coefficients $\bar b_1$ and an improved approximation $\Gamma_1$ to $\Gamma^*$ as the zero level set of $D_{\bar b_1}$. Stopping the procedure at this point yields the approximation results shown in the figures below. Figure 16 shows the approximation error $f - S$ on $[0,1]^2 \setminus U$, where $U$ is a small neighborhood of $\Gamma^*$. Figure 17 shows, in green, $\log_{10}$ of the magnitude of the given Fourier coefficients $\hat f_{mn}$ and, in blue, $\log_{10}$ of the Fourier coefficients of the difference $f - S$. We observe a reduction of three orders of magnitude between the two.
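The quadrature-plus-FFT idea can be sketched in one dimension: sampling a B-spline on a uniform grid and applying the FFT yields all its rectangle-rule Fourier coefficients at once. The knot placement below is illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

# A cubic B-spline supported inside [0, 1] (knots 0.3, 0.4, ..., 0.7).
b = BSpline.basis_element(np.linspace(0.3, 0.7, 5), extrapolate=False)

N = 2048
x = np.arange(N) / N
samples = np.nan_to_num(b(x))            # zero outside the support

# All Fourier coefficients at once by rectangle-rule quadrature via the FFT:
# c_n ~ (1/N) sum_j B(x_j) exp(-2 pi i n x_j) = fft(samples)[n] / N.
coeffs = np.fft.fft(samples) / N

# Cross-check one coefficient with a dense midpoint quadrature.
xf = (np.arange(200000) + 0.5) / 200000
c3 = (np.nan_to_num(b(xf)) * np.exp(-2j * np.pi * 3 * xf)).mean()
```

Because this B-spline vanishes smoothly near the boundary, the rectangle rule is very accurate here; for basis functions truncated by the singularity curve the accuracy drops, which is the low-precision regime discussed above.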
Applying four quasi-Newton iterations took $\sim 24$ minutes of execution time. The approximation of $\Gamma^*$ by the zero level set of $D_{\bar b_4}$ now has an error of order $10^{-9}$. The consequent approximation error to $f$ is reduced as shown in Figure 18, and the Fourier coefficients of the error are reduced by 5 orders of magnitude, as shown in Figure 19.

The 2-D approximation procedure.
Let us sum up the suggested approximation procedure: (1) Choose the approximation space $\Pi_1$ for approximating $f_1$ and $f_2$, and the approximation space $\Pi_2$ for approximating $\Gamma^*$. (2) Define the number of Fourier coefficients to be used for building the approximation, such that $(2M + 1)^2 \ge 2\dim(\Pi_1) + \dim(\Pi_2)$. (3) Find a first approximation $D_{\bar b_0}$ to the signed-distance function of $\Gamma^*$: compute a partial Fourier sum, locate the maximal first-order differences, and fit a spline to the resulting signed-distance values. (4) For the current $D_{\bar b}$, form the restricted basis functions and compute their Fourier coefficients. (5) Solve the linear system for the splines' coefficients. (6) Update $D_{\bar b}$ to improve the approximation to $\Gamma^*$, by performing quasi-Newton iterations to reduce the objective function in (3.7). (7) Go back to (4) to update the approximation.

Lower order singularities.
Let us assume that $f(x, y)$ is a continuous function, and that $f_x(x, y)$ is discontinuous across the singularity curve $\Gamma^*$. In this case we cannot use the Gibbs phenomenon directly to approximate the singularity curve. However, the Fourier coefficients $\hat g_{mn} = 2\pi i m \hat f_{mn}$ represent the function $g = f_x$, which has a discontinuity across $\Gamma^*$, and the above procedure for approximating $\Gamma^*$ can be applied to $g$.
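The differentiation trick is easily illustrated in one dimension, with the hypothetical kink function $f(x) = |x - 0.5|$: multiplying the coefficients by $2\pi i n$ produces the coefficients of $f'$, whose partial sums exhibit the Gibbs phenomenon at the kink location.

```python
import numpy as np

# f(x) = |x - 0.5|: continuous, with a jump in f' at s* = 0.5.
N = 100
n = np.arange(-N, N + 1)
x = np.linspace(0, 1, 10000, endpoint=False)
f = np.abs(x - 0.5)

# Fourier coefficients of f by quadrature (accurate, since f is continuous).
fhat = np.exp(-2j * np.pi * np.outer(n, x)) @ f / x.size

# Differentiation in Fourier space: the coefficients of g = f' are 2*pi*i*n*fhat.
ghat = 2j * np.pi * n * fhat

# The partial sum of g now shows a jump at the kink of f; locate it.
g_partial = (np.exp(2j * np.pi * np.outer(x, n)) @ ghat).real
diffs = np.abs(np.diff(g_partial))
window = (x[:-1] > 0.2) & (x[:-1] < 0.8)
s0 = x[np.argmax(np.where(window, diffs, 0.0))]
```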

Error analysis.
We consider the non-smooth bivariate case, where $f$ is a combination of two smooth parts, $f_1$ on $\Omega_1$ and $f_2$ on $\Omega_2$, separated by a smooth curve $\Gamma^*$. Throughout the paper we approximate $f$ using spline functions; in this section we consider approximations by general approximation spaces. Let $\Pi_1$ be the approximation space for approximating the smooth pieces constituting $f$, and let $\Pi_2$ be the approximation space used for approximating the singularity curve. The following assumptions characterize and quantify the assumptions about the function $f$ and its singularity curve $\Gamma^*$ in terms of the ability to approximate them using the approximation spaces $\Pi_1$, $\Pi_2$.
Assumption 3.3. We assume that $\Pi_1$ and $\Pi_2$ are finite dimensional spaces of dimensions $N_1$ and $N_2$ respectively.
Assumption 3.4. We assume that $f_1$ and $f_2$ are smoothly extendable to $[0,1]^2$, and that there exist $p_1, p_2 \in \Pi_1$ such that $\|f_i - p_i\|_\infty \le \epsilon$, $i = 1, 2$.
Assumption 3.5. For $p \in \Pi_2$, let us denote by $\Gamma_0(p)$ the zero level curve of $p$ within $[0,1]^2$. We assume there exists $p \in \Pi_2$ such that $d_H(\Gamma_0(p), \Gamma^*) \le \delta$, where $d_H$ denotes the Hausdorff distance.
We look for an approximation $S$ to $f$ which is a combination of two components, $p_1 \in \Pi_1$ in $\tilde\Omega_1$ and $p_2 \in \Pi_1$ in $\tilde\Omega_2$, separated by $\Gamma_0(p)$, $p \in \Pi_2$, such that $(2M + 1)^2$ Fourier coefficients of $f$ and $S$ are matched in the least-squares sense. (3.13)
Assumption 3.6. Consider the above function $S$ constructed by a triple $(p_1, p_2, \Gamma_0(p))$, $p_1, p_2 \in \Pi_1$, $p \in \Pi_2$. We assume that there is a Lipschitz continuous inverse mapping from the $(2M + 1)^2$ Fourier coefficients of $S$ to the triple $(p_1, p_2, \Gamma_0(p))$: $\{\hat S_{mn}\}_{m,n=-M}^{M} \to (p_1, p_2, \Gamma_0(p))$. (3.14)
Remark 3.7. To enable the above property we choose $M$ so that $(2M + 1)^2 \ge 2N_1 + N_2$. The topology in the space of triples $(p_1, p_2, \Gamma_0(p))$ is in terms of the maximum norm for the first two components and the Hausdorff distance for the third component.

Validity of the approximation assumptions.
Let us check the validity of Assumptions 3.3, 3.4, 3.5 and 3.6 for the approximation tools suggested in Section 3.2 and used in the above numerical tests.
We assume that $f_1, f_2 \in C^m[0,1]^2$, and that $\Gamma^*$ is a $C^m$ curve. To construct the approximation to $f_1$ and $f_2$ we use the space $\Pi_1$ of $k$th degree tensor-product splines with equidistant knots' distance $d$ in each direction, $k \le m$. The approximation to $\Gamma^*$ is obtained using the approximation space $\Pi_2$ of $\ell$th degree tensor-product splines with equidistant knots' distance $h$ in each direction, $\ell \le m$. Then $\dim(\Pi_1) = N_d^2$ and $\dim(\Pi_2) = N_h^2$, and for both spaces we use the B-spline basis functions. Assumptions 3.4 and 3.5 are fulfilled with $\epsilon = C_1 d^k$ and $\delta = C_2 h^\ell$. Assumption 3.6 is more challenging. To define the mapping $\{\hat S_{mn}\}_{m,n=-M}^{M} \to (p_1, p_2, \Gamma_0(p))$, (3.30) we use the same procedure 3.2.2 used for defining the approximation to $f$: We represent $p$ and $p_1, p_2$ using the B-spline basis functions as in (3.4) and (3.5), (3.6) respectively. Each triple $(p_1, p_2, p)$ defines a piecewise spline approximation $T(x, y)$, and we look for the approximation $T(x, y)$ such that $(2M + 1)^2$ Fourier coefficients of $T$ match the Fourier coefficients $\{\hat S_{mn}\}_{m,n=-M}^{M}$ in the least-squares sense. Out of all the possible solutions of the above problem we look for the one with minimal coefficients' norm, i.e., minimizing $\sum a_{ij}^2$. (3.32) Following the procedure 3.2.2 we observe that every step in the procedure is smooth with respect to its input. Possible non-uniqueness in solving the linear system of equations in step (5) is resolved by using the generalized inverse. Therefore, the composition of all the steps is also a smooth function of the input, which implies the validity of Assumption 3.6.

Numerical Example -The smooth 3-D case.
We consider the test function $f(x, y, z) = (x^2 + y^2 + z^2 - 0.5)\sin(4(x + y - z))$, assuming only its Fourier series coefficients are given. We have used only $10^3$ Fourier coefficients and constructed an approximation using 5th-degree tensor-product splines with equidistant knots' distance $d = 0.1$ in each direction. For this case, the matrix $A$ is of size $15^3 \times 15^3$, and $\mathrm{cond}(A) = 1.2 \times 10^{22}$. Again, we have employed the iterative refinement algorithm to obtain a high precision solution. The test function is shown in Figure 20. The error in the resulting approximation is displayed in Figure 21.

Concluding remarks
The basic crucial assumption behind the presented Fourier acceleration strategy is that the underlying function is piecewise 'nice'; that is, piece by piece, the function can be well approximated by a suitable finite set of basis functions. The Fourier series of the function may be given to us as the result of a computational method dictated by the structure of the mathematical problem at hand. In itself, the Fourier series may not be the best tool for approximating the desired solution, and yet it contains all the information about the requested function. Utilizing this information we can derive high accuracy piecewise approximations to that function. The simple idea is to make the approximation match the coefficients of the given Fourier series. The suggested method is simple to implement for the approximation of smooth non-periodic functions in any dimension. The case of non-smooth functions is more challenging, and a special strategy is suggested and demonstrated for the univariate and bivariate cases. The paper contains a descriptive graphical presentation of the approximation procedure, together with a fundamental error analysis.