Article

Efficient Computation of Highly Oscillatory Fourier Transforms with Nearly Singular Amplitudes over Rectangle Domains

School of Mathematics and Statistics, Guizhou University, Guiyang 550025, Guizhou, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 1930; https://doi.org/10.3390/math8111930
Submission received: 31 July 2020 / Revised: 19 October 2020 / Accepted: 22 October 2020 / Published: 2 November 2020

Abstract

In this paper, we consider fast and high-order algorithms for the calculation of highly oscillatory and nearly singular integrals. Based on operators acting on Chebyshev polynomial coefficients, we propose a class of efficient spectral Levin quadrature rules for oscillatory integrals over rectangle domains and give a detailed convergence analysis. Furthermore, with the help of adaptive mesh refinement, we develop an efficient algorithm to compute highly oscillatory and nearly singular integrals. In contrast to existing methods, approximations derived from the new approach do not suffer from high oscillation or near singularity. Finally, several numerical experiments are included to illustrate the performance of the given quadrature rules.

1. Introduction

Highly oscillatory integrals frequently arise in acoustic scattering [1], computational physical optics [2], computational electromagnetics [3], and related fields. Generally, the rapidly changing integrands make classical approximations perform poorly. Therefore, studies on the numerical calculation of highly oscillatory integrals have attracted much attention during the past few decades, and a variety of contributions have been made, for example, Filon-type quadrature [4,5], the numerical steepest descent method [6], the Levin method [7], and so on.
When the phase is nonlinear, researchers usually resort to Levin-type quadrature, which originates from David Levin’s pioneering work in [7]. By transforming the oscillatory integration problem into a special ordinary differential equation, one can obtain an efficient approximation to the generalized Fourier transform with the help of collocation methods. Afterwards, Levin analyzed the convergence rate of this approach in [8]. Analogous to Filon-type quadrature, a Levin-type method based on Hermite interpolation was developed by Olver in [9]; the use of Hermite interpolation increases the convergence rate of the numerical method with respect to the frequency. In [10], Li et al. proposed a stable and high-order Levin quadrature rule by employing the spectral Chebyshev collocation method and the truncated singular value decomposition technique. Multiquadric radial basis functions were applied to Levin’s equation, and an innovative composite Levin method was presented in [11]; numerical tests showed that this kind of algorithm is able to deal with integrals containing stationary points. Sparse solvers for Levin’s equation in one dimension were constructed by exploiting recurrences of Chebyshev polynomials in [12,13]. Meanwhile, a class of preconditioners was proposed by the second author in [13] to deal with the ill-conditioned linear system. Molabahrami studied the Galerkin method for Levin’s equation and developed the Galerkin–Levin method for oscillatory integrals in [14].
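For later reference, the identity behind Levin's approach can be stated in one line: once a function p satisfies the non-oscillatory collocation equation p'(x) + iωg'(x)p(x) = f(x), the oscillatory integral reduces to two boundary evaluations,
\frac{d}{dx}\Big(p(x)\,e^{i\omega g(x)}\Big) = \big(p'(x) + i\omega g'(x)p(x)\big)e^{i\omega g(x)} = f(x)\,e^{i\omega g(x)} \quad\Longrightarrow\quad \int_{-1}^{1} f(x)\,e^{i\omega g(x)}\,dx = p(1)\,e^{i\omega g(1)} - p(-1)\,e^{i\omega g(-1)}.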
Levin-type quadrature rules have been extended to singular problems in the past several years. In [15], Wang and Xiang employed a singularity-separation technique and transformed Levin’s equation into coupled non-singular ordinary differential equations. By solving the transformed equations numerically, they obtained an efficient Levin quadrature rule for weakly singular integrals with highly oscillatory Fourier kernels. Recently, the second author proposed the fractional Jacobi–Galerkin–Levin quadrature by investigating fractional Jacobi approximations in [16]. Through a proper choice of weighted Jacobi polynomials, the discretized Levin equation was turned into a sparse linear system. It was verified that the convergence rate of this kind of Levin quadrature rule can be analyzed by studying the coefficients of the fractional Jacobi expansion of the error function. In [17], a multi-resolution quadrature rule was applied to deal with the singularity, and a modified Levin quadrature rule coupled with the multiquadric radial basis function was developed to calculate oscillatory integrals with Bessel and Airy kernels.
Levin’s quadrature rule also plays an important role in solving multi-dimensional problems. By introducing a multivariate ordinary differential equation, Levin found a non-oscillatory approximation to the integrand in [7], which led to an efficient algorithm for computing oscillatory integrals over rectangular regions. In [18], Li et al. devised a class of spectral Levin methods for multi-dimensional integrals by utilizing the Chebyshev differentiation matrix and a delaminating quadrature rule. An innovative procedure for multivariate highly oscillatory integrals was devised by employing multi-resolution analysis in [19]; the corresponding meshless approximation was obtained by truncated singular value decomposition. In [20], the second author studied a fast algorithm for the Hermite differentiation matrix based on the barycentric formula. With the help of delaminating quadrature, a spectral Levin-type method for the calculation of highly oscillatory integrals over rectangular regions was constructed.
Although researchers have made many contributions to the numerical calculation of highly oscillatory integrals, little attention has been paid to the computation of nearly singular and highly oscillatory integrals such as
I[F, G_1, G_2, \omega] = \int_{-1}^{1}\int_{-1}^{1} \frac{F(x, y)}{(x-a)^2 + (y-b)^2 + \epsilon^2}\, e^{i\omega (G_1(x) + G_2(y))}\, dx\, dy. \qquad (1)
In this paper, we are concerned with the efficient computation of Integral (1), and we partly fill this gap. Throughout, we suppose that F(x, y) is analytic with respect to both variables, G_1(x) and G_2(y) are sufficiently smooth functions without stationary points, the frequency parameter satisfies ω ≫ 1, (a, b) ∈ R², and |ϵ| ≪ 1.
A large frequency parameter ω implies that the integrand of Integral (1) is highly oscillatory, and classical quadrature rules suffer from a prohibitive computational cost. In Table 1, we list numerical results computed by the classical delaminating quadrature rule coupled with Clenshaw–Curtis quadrature (CCQ), where the number of quadrature nodes is fixed at 16. Referenced values are computed by the Chebfun toolbox (see [21]). Chebfun, which approximates functions by Chebyshev interpolants, was first developed in 2004. Due to its fast and high-order approximation of the integrand, the numerical integration methods in Chebfun usually provide efficient approaches for univariate and multivariate integrals. Hence, the 2D quadrature method in Chebfun is chosen as a benchmark. It can be seen from Table 1 that, as ω goes to infinity, CCQ diverges from the referenced values when the number of quadrature nodes is not increased.
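Chebfun is a MATLAB toolbox; as a rough, language-agnostic illustration of the brute-force baseline (not the code used for Table 1), the same reference values can be approximated with any adaptive cubature, e.g. SciPy's dblquad, whose cost grows rapidly with ω:

```python
# Brute-force reference values for the Table 1 integrand via adaptive cubature.
# Illustrative sketch only; the paper's reference values come from Chebfun.
import numpy as np
from scipy.integrate import dblquad

def reference_value(omega):
    f = lambda x, y: np.cos(x + y) * np.exp(1j * omega * (x + y))
    re, _ = dblquad(lambda y, x: f(x, y).real, -1.0, 1.0, -1.0, 1.0)
    im, _ = dblquad(lambda y, x: f(x, y).imag, -1.0, 1.0, -1.0, 1.0)
    return re + 1j * im

print(reference_value(10.0))   # cheap for moderate omega, increasingly expensive as omega grows
```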
In contrast to the oscillatory integrals arising in existing studies, when the point ( a , b ) in Integral (1) is close to or falls inside the integration domain and ϵ is particularly small, the integrand attains its peak value around ( a , b ) and decays dramatically away from this critical point. In general, the point ( a , b ) is called the nearly singular point. A large number of additional quadrature nodes has to be used if the numerical formula is to meet a prescribed error tolerance.
There also exist several contributions that tackle the nearly singular problem. The sinh transformation is deemed one of the most important tools. For nearly singular moments arising in Laplace’s equation, Johnston et al. proposed the sinh transformation in [22]. Occorsio and Serafini considered two kinds of cubature rules for nearly singular and highly oscillatory integrals in [23]. With the help of a 2D dilation technique, they were able to temper the rapidly changing integrand and applied Gauss–Jacobi quadrature to the transformed integral. Numerical experiments verified that such an approximation procedure greatly improves the performance of Gauss quadrature.
The remaining parts are organized as follows. In the second section, we review some results on the calculation of Chebyshev series and present the convergence properties of Chebyshev interpolants and series. In Section 3, we first extend the idea in [13] to two-dimensional oscillatory integrals. Compared with existing Levin quadrature rules, the new approach has an advantage in computational time. Then, noting that there is little convergence analysis of 2D Levin quadrature rules, we try to fill this gap by examining the modified Levin equation. Finally, we present an innovative composite Levin quadrature rule for solving nearly singular and highly oscillatory problems. In contrast to existing numerical integration methods, the proposed composite method does not suffer from high oscillation or nearly singular amplitudes. Numerical tests in Section 4 verify the efficiency of the proposed approach, and concluding remarks are given in Section 5.

2. Auxiliary Tools

In this section, we first revisit auxiliary operators related to Chebyshev series, which help develop numerical algorithms for the computation of two-dimensional oscillatory integrals. Then, error bounds for the coefficients of Chebyshev series and for Clenshaw–Curtis interpolants are introduced.
When the given function f ( x ) is analytic in a sufficiently large domain containing [ 1 , 1 ] , one can compute its Chebyshev series by (see [24])
f(x) = \sum_{n=0}^{\infty} f_n T_n(x),
with
f_0 = \frac{1}{\pi}\int_{-1}^{1} \frac{f(s)}{\sqrt{1-s^2}}\,ds, \qquad f_n = \frac{2}{\pi}\int_{-1}^{1} \frac{f(s)\,T_n(s)}{\sqrt{1-s^2}}\,ds, \quad n = 1, 2, \ldots.
Here, T_n(x) denotes the first-kind Chebyshev polynomial of order n. Noting the relation between the first- and second-kind Chebyshev polynomials T_n(x), U_n(x) (see [24]),
\frac{d}{dx} T_n(x) = \begin{cases} n\,U_{n-1}(x), & n \ge 1, \\ 0, & n = 0, \end{cases}
and
T_n(x) = \begin{cases} U_0(x), & n = 0, \\ \tfrac{1}{2}\,U_1(x), & n = 1, \\ \tfrac{1}{2}\big(U_n(x) - U_{n-2}(x)\big), & n \ge 2, \end{cases}
we are able to compute
f'(x) = \sum_{n=0}^{\infty} f_n' T_n(x) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} (2n+4k+2)\, f_{n+2k+1} \right) T_n(x),
which implies that the Chebyshev coefficients of the derivative can be represented by
\begin{pmatrix} f_0' \\ f_1' \\ f_2' \\ f_3' \\ \vdots \end{pmatrix} =
\begin{pmatrix}
0 & 1 & 0 & 3 & 0 & 5 & \cdots \\
0 & 0 & 4 & 0 & 8 & 0 & \cdots \\
0 & 0 & 0 & 6 & 0 & 10 & \cdots \\
0 & 0 & 0 & 0 & 8 & 0 & \cdots \\
\vdots & & & & & & \ddots
\end{pmatrix}
\begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \\ \vdots \end{pmatrix}
= D \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \\ \vdots \end{pmatrix}.
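As an illustration, the truncated operator D can be assembled directly from the pattern above. The following is a minimal sketch of our own (not taken from the paper's code); note that the first row carries the usual factor 1/2 associated with the T_0 coefficient.

```python
# Chebyshev-coefficient differentiation operator D (truncated to N+1 terms):
# if f = sum_n f_n T_n, the coefficients of f' are approximately D @ [f_0, ..., f_N]^T.
import numpy as np

def cheb_diff_matrix(N):
    D = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        for p in range(n + 1, N + 1):
            if (p - n) % 2 == 1:                   # only n + p odd contributes
                D[n, p] = p if n == 0 else 2 * p   # row 0 is halved (T_0 convention)
    return D

# quick check: T_3(x) = 4x^3 - 3x has derivative 12x^2 - 3 = 3*T_0 + 6*T_2
c = np.zeros(5); c[3] = 1.0
print(cheb_diff_matrix(4) @ c)   # -> [3. 0. 6. 0. 0.]
```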
Secondly, suppose that there exists a sufficiently smooth function
a(x) = \sum_{n=0}^{\infty} a_n T_n(x).
Noting the identity (see [24])
T_m(x)\,T_n(x) = \tfrac{1}{2}\big( T_{m+n}(x) + T_{|m-n|}(x) \big),
we can compute the product a ( x ) f ( x ) by
a(x)\,f(x) = \sum_{n=0}^{\infty} c_n T_n(x),
where a = [a_0, a_1, \ldots]^T and the coefficients \{c_n\}_{n=0}^{\infty} are defined by
\begin{pmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \end{pmatrix}
= \frac{1}{2}\left[
\begin{pmatrix}
2a_0 & a_1 & a_2 & a_3 & \cdots \\
a_1 & 2a_0 & a_1 & a_2 & \cdots \\
a_2 & a_1 & 2a_0 & a_1 & \cdots \\
\vdots & & & & \ddots
\end{pmatrix}
+
\begin{pmatrix}
0 & 0 & 0 & 0 & \cdots \\
a_1 & a_2 & a_3 & a_4 & \cdots \\
a_2 & a_3 & a_4 & a_5 & \cdots \\
\vdots & & & & \ddots
\end{pmatrix}
\right]
\begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \end{pmatrix}
= M[a] \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \end{pmatrix}. \qquad (5)
Operators D , M [ a ] have been verified to be efficient tools for discretizing Levin’s equation. For more details, one can refer to [13].
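For concreteness, here is a small sketch (our own construction, not the authors') of the truncated multiplication operator M[a] in Equation (5), built from the Toeplitz-plus-Hankel pattern above.

```python
# Multiplication operator M[a]: the Chebyshev coefficients of a(x) f(x) are M[a] @ f,
# using T_m T_n = (T_{m+n} + T_{|m-n|}) / 2, truncated to N+1 coefficients.
import numpy as np

def cheb_mult_matrix(a, N):
    a = np.pad(np.asarray(a, dtype=float), (0, max(0, 2 * N + 1 - len(a))))[: 2 * N + 1]
    M = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            toeplitz = 2 * a[0] if i == j else a[abs(i - j)]   # |m - n| part (2*a_0 on diagonal)
            hankel = a[i + j] if i >= 1 else 0.0               # m + n part, first row zero
            M[i, j] = 0.5 * (toeplitz + hankel)
    return M

# check: a(x) = T_1 = x and f(x) = T_2 give x*T_2 = (T_3 + T_1)/2
f = np.zeros(5); f[2] = 1.0
print(cheb_mult_matrix([0.0, 1.0], 4) @ f)   # -> [0. 0.5 0. 0.5 0.]
```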
On the other hand, when f(x) is analytic with |f(x)| ≤ M in the region bounded by the Bernstein ellipse with radius ρ > 1, we have for every n ≥ 0 (see [25])
|f_n| \le \frac{2M}{\rho^{\,n}}.
Noting that
f_n' = \sum_{k=0}^{\infty} (2n+4k+2)\, f_{n+2k+1},
we can compute
|f_n'| \le \sum_{k=0}^{\infty} (2n+4k+2)\,|f_{n+2k+1}| \le \sum_{k=0}^{\infty} (2n+4k+2)\,\frac{2M}{\rho^{\,n+2k+1}} \le \frac{4M}{\rho^{\,n+1}}\left[ (n+1)\sum_{k=0}^{\infty}\rho^{-2k} + 2\sum_{k=0}^{\infty} k\,\rho^{-2k} \right].
Employing
\sum_{k=0}^{\infty} \rho^{-2k} = \frac{\rho^2}{\rho^2-1}, \qquad \sum_{k=0}^{\infty} k\,\rho^{-2k} = \frac{\rho^2}{(\rho^2-1)^2}
leads to
|f_n'| \le \frac{4M}{\rho^{\,n+1}}\left[ \frac{(n+1)\rho^2}{\rho^2-1} + \frac{2\rho^2}{(\rho^2-1)^2} \right] \le \frac{4M}{\rho^{\,n+1}}\left[ \frac{\rho(n+1)}{\rho-1} + \frac{2}{(\rho-1)^2} \right] \le \frac{4M(n+1)}{\rho^{\,n-1}(\rho-1)^2}.
Furthermore, according to ([25], Theorems 2.1, 2.4), we have
\| f - p_N \| \le \frac{4M}{\rho^{\,N}(\rho-1)}
and
\| f' - p_N' \| \le \frac{4M(N+1)^2}{\rho^{\,N-2}(\rho-1)^3}.
Here, p_N(x) denotes the interpolant of f(x) at the Clenshaw–Curtis nodes or the truncated Chebyshev series of f(x).
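To make the coefficient-decay estimate concrete, the following small sketch (our own check, not from the paper) computes the Chebyshev coefficients of f(x) = 1/(1 + x²); its nearest singularities at ±i limit the admissible Bernstein-ellipse radii to ρ < 1 + √2, so the coefficients decay essentially like ρ⁻ⁿ with ρ = 1 + √2.

```python
# Numerical illustration of the geometric decay |f_n| = O(rho^{-n}) for f(x) = 1/(1 + x^2),
# whose poles at x = +/- i correspond to rho = 1 + sqrt(2).
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + x**2)
rho = 1.0 + np.sqrt(2.0)
coeffs = np.abs(C.chebinterpolate(f, 40))

for n in (4, 10, 20, 30):
    print(n, coeffs[n], rho**(-n))   # the computed coefficients track the rate rho^{-n}
```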

3. Main Results

This section is devoted to investigating fast algorithms for calculation of Integral (1). To begin with, let us consider the computation of oscillatory integral without nearly singular integrands, that is,
\hat I[F, G_1, G_2, \omega] = \int_{-1}^{1}\int_{-1}^{1} F(x, y)\, e^{i\omega (G_1(x) + G_2(y))}\, dx\, dy. \qquad (8)
Here, F ( x , y ) , G 1 ( x ) , G 2 ( y ) are smooth functions with sufficiently large analytic regions, and G 1 ( x ) , G 2 ( y ) do not have stationary points in the complex plane.
Consider the inner integral
H(y) = \int_{-1}^{1} F(x, y)\, e^{i\omega G_1(x)}\, dx. \qquad (9)
For fixed y_j = \cos\frac{j\pi}{N}, j = 0, 1, \ldots, N, we are restricted to finding a function P_j(x) satisfying
P_j'(x) + i\omega G_1'(x) P_j(x) = F(x, y_j). \qquad (10)
Noting that G_1'(x) never vanishes over the interval [-1, 1], we can get the modified Levin equation,
\frac{P_j'(x)}{G_1'(x)} + i\omega P_j(x) = \frac{F(x, y_j)}{G_1'(x)}. \qquad (11)
Let
P_j(x) = \sum_{n=0}^{\infty} p_n^j T_n(x), \qquad \frac{1}{G_1'(x)} = \sum_{n=0}^{\infty} g_n^1 T_n(x), \qquad F(x, y_j) = \sum_{n=0}^{\infty} f_n^j T_n(x).
With the help of operators D , M [ a ] , we rewrite modified Levin’s Equation (11) as
M[G_1]\, D\, P_j + i\omega P_j = M[G_1]\, F_j, \qquad (12)
where
x_j = \cos\frac{j\pi}{N}, \qquad P_j = \big(p_0^j, p_1^j, \ldots\big)^T, \qquad G_1 = \big(g_0^1, g_1^1, \ldots\big)^T, \qquad F_j = \big(f_0^j, f_1^j, \ldots\big)^T.
Solving Equation (12) by the truncation method [26] gives the unknown coefficients p_n^j, n = 0, 1, \ldots, N, and we can get approximations to P_j(\pm 1) by the Clenshaw algorithm,
P_j(\pm 1) \approx \frac{b_0(\pm 1) - b_2(\pm 1)}{2}
with
b_{N+1}(\pm 1) = 0, \qquad b_N(\pm 1) = p_N^{j,N}, \qquad b_k(\pm 1) = (\pm 2)\, b_{k+1}(\pm 1) - b_{k+2}(\pm 1) + p_k^{j,N}, \quad k = N-1, \ldots, 0,
where p_k^{j,N} denotes the approximation to p_k^j. Hence, the inner integral (9) is computed by
\int_{-1}^{1} F(x, y_j)\, e^{i\omega G_1(x)}\, dx \approx e^{i\omega G_1(1)}\, \frac{b_0(1) - b_2(1)}{2} - e^{i\omega G_1(-1)}\, \frac{b_0(-1) - b_2(-1)}{2}.
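The whole one-dimensional building block can be sketched in a few lines. The sketch below is our own paraphrase, not the authors' implementation: it assembles truncated versions of the operators D and M[1/G_1'] from numpy's Chebyshev utilities (which interpolate at Chebyshev points of the first kind rather than the Clenshaw–Curtis points used in the paper), solves the truncated system, and evaluates the Levin boundary terms.

```python
# Minimal 1D Levin step: approximate int_{-1}^{1} f(x) exp(i*omega*g(x)) dx by solving
# a truncated version of (M[1/g'] D + i*omega*I) p = M[1/g'] f in coefficient space.
import numpy as np
from numpy.polynomial import chebyshev as C

def levin_1d(f, g, dg, omega, N):
    fc   = C.chebinterpolate(f, N)                      # Chebyshev coefficients of f
    ginv = C.chebinterpolate(lambda x: 1.0 / dg(x), N)  # ... and of 1/g'
    eye = np.eye(N + 1)
    D = np.zeros((N + 1, N + 1))                        # coefficient differentiation operator
    for j in range(1, N + 1):
        D[:j, j] = C.chebder(eye[: j + 1, j])
    M = np.zeros((N + 1, N + 1))                        # multiplication by 1/g', truncated
    for j in range(N + 1):
        prod = C.chebmul(ginv, eye[:, j])[: N + 1]
        M[: len(prod), j] = prod
    p = np.linalg.solve(M @ D + 1j * omega * eye, M @ fc)
    return (C.chebval(1.0, p) * np.exp(1j * omega * g(1.0))
            - C.chebval(-1.0, p) * np.exp(1j * omega * g(-1.0)))

omega = 200.0
approx = levin_1d(lambda x: np.ones_like(x), lambda x: x, lambda x: np.ones_like(x), omega, 16)
print(approx, 2.0 * np.sin(omega) / omega)   # exact value of int_{-1}^{1} exp(i*omega*x) dx
```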
Since H N ( y j ) , the approximation to H ( y ) at Clenshaw–Curtis nodes, has been obtained, we are able to construct the polynomial H N ( y ) by
H_N(y) = \sum_{j=0}^{N} H_N(y_j)\, L_j(y) = \sum_{n=0}^{N} h_n T_n(y),
where L j ( y ) denotes Lagrange basis with respect to Clenshaw–Curtis nodes and h n can be computed by fast Fourier transform. Letting
Q(y) = \sum_{n=0}^{\infty} q_n T_n(y)
denote the function satisfying
M[G_2]\, D\, Q + i\omega Q = M[G_2]\, H_N,
where
\frac{1}{G_2'(y)} = \sum_{n=0}^{\infty} g_n^2 T_n(y), \qquad H_N(y) = \sum_{n=0}^{N} h_n T_n(y),
and
Q = \big(q_0, q_1, \ldots\big)^T, \qquad G_2 = \big(g_0^2, g_1^2, \ldots\big)^T, \qquad H_N = \big(h_0, h_1, \ldots\big)^T,
we are able to approximate q_0, \ldots, q_N by the truncation method again. Computing a_0(\pm 1), a_2(\pm 1) by
a_{N+1}(\pm 1) = 0, \qquad a_N(\pm 1) = q_N^N, \qquad a_k(\pm 1) = (\pm 2)\, a_{k+1}(\pm 1) - a_{k+2}(\pm 1) + q_k^N, \quad k = N-1, \ldots, 0,
where q_k^N denotes the approximation to q_k, we arrive at the 2D spectral coefficient Levin quadrature for Integral (8),
\hat I[F, G_1, G_2, \omega] \approx \hat I_N[F, G_1, G_2, \omega] := e^{i\omega G_2(1)}\, \frac{a_0(1) - a_2(1)}{2} - e^{i\omega G_2(-1)}\, \frac{a_0(-1) - a_2(-1)}{2}.
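Putting the pieces together, the delaminating structure of the 2D rule can be sketched as follows; this is our own composition, reusing the hypothetical levin_1d helper from the earlier sketch, not the authors' code. The inner 1D Levin step produces the values H(y) needed by the outer interpolation, and the same 1D routine is then applied in the y-direction.

```python
# Delaminating 2D Levin sketch: apply the 1D Levin step in x for each y required by the
# outer interpolation, then apply it again in y. Assumes levin_1d from the previous sketch.
import numpy as np

def levin_2d(F, G1, dG1, G2, dG2, omega, N):
    def H(ys):                                  # inner integral H(y), evaluated pointwise
        ys = np.atleast_1d(ys)
        return np.array([levin_1d(lambda x: F(x, y), G1, dG1, omega, N) for y in ys])
    return levin_1d(H, G2, dG2, omega, N)       # outer Levin step in the y-direction

# Example 1 integrand: compare with the reference values reported in Table 1 / Table 3
val = levin_2d(lambda x, y: np.cos(x + y),
               lambda x: x, lambda x: np.ones_like(x),
               lambda y: y, lambda y: np.ones_like(y),
               omega=200.0, N=16)
print(val)
```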
In [27], Xiang established the relation between the Filon and Levin quadrature rules in the case of the phase g'(x) = 1 and analyzed the convergence property of Levin quadrature. Instead of resorting to Filon quadrature, we consider the convergence rate of the above spectral coefficient Levin method with respect to the number of quadrature nodes and the frequency in the case of nonlinear oscillators, by examining the decay rate of the coefficients.
For any fixed y ∈ [-1, 1], F(x, y) reduces to a univariate function of x. Let M_H, M_F(y), M_{G_1}, M_{G_2} denote the maxima of H(y), F(x, y), 1/G_1'(x), 1/G_2'(y) within their corresponding Bernstein ellipses with radii ρ_H, ρ_F(y), ρ_{G_1}, ρ_{G_2}, respectively. Furthermore, denoting
\rho_F := \inf_{y\in[-1,1]} \rho_F(y), \qquad M_F := \sup_{y\in[-1,1]} M_F(y), \qquad M_2 := \sup_{y\in[-1,1]} |G_2''(y)|, \qquad m_2 := \inf_{y\in[-1,1]} |G_2'(y)|,
we summarize the convergence analysis in the following theorem.
Theorem 1.
Suppose
  • F(x, y), 1/G_1'(x), and 1/G_2'(y) are analytic within the corresponding Bernstein ellipses;
  • G_1(x), G_2(y) are smooth and bounded over [-1, 1];
  • The analytic radii ρ_F, ρ_{G_1} satisfy ρ_F < ρ_{G_1}.
Then, for sufficiently large ω, we have
\big| \hat I[F, G_1, G_2, \omega] - \hat I_N[F, G_1, G_2, \omega] \big| \le C \left[ \frac{(N+1)^2}{\omega\,\rho_H^{\,N-2}(\rho_H-1)^3} + \frac{(N+2)^3 \log(N+1)}{\omega\,\rho_F^{\,N-4}(\rho_F-1)^6} + \frac{(N+2)^3}{\omega\,\rho_H^{\,N-4}(\rho_H-1)^6} \right],
where the constant C does not depend on ω , N .
Proof. 
Let \hat H_N(y) = \sum_{j=0}^{N} H(y_j) L_j(y) denote the interpolant of H(y) at the Clenshaw–Curtis points. A direct calculation implies that the quadrature error can be decomposed into
\hat I[F, G_1, G_2, \omega] - \hat I_N[F, G_1, G_2, \omega]
= \int_{-1}^{1} H(y)\,e^{i\omega G_2(y)}\,dy - \left[ e^{i\omega G_2(1)}\,\frac{a_0(1)-a_2(1)}{2} - e^{i\omega G_2(-1)}\,\frac{a_0(-1)-a_2(-1)}{2} \right]
= \left[ \int_{-1}^{1}\int_{-1}^{1} F(x,y)\,e^{i\omega(G_1(x)+G_2(y))}\,dx\,dy - \int_{-1}^{1} \hat H_N(y)\,e^{i\omega G_2(y)}\,dy \right]
\quad + \left[ \int_{-1}^{1} \sum_{j=0}^{N} H(y_j) L_j(y)\,e^{i\omega G_2(y)}\,dy - \int_{-1}^{1} \sum_{j=0}^{N} \Big( P_j(1)\,e^{i\omega G_1(1)} - P_j(-1)\,e^{i\omega G_1(-1)} \Big) L_j(y)\,e^{i\omega G_2(y)}\,dy \right]
\quad + \left[ \int_{-1}^{1} H_N(y)\,e^{i\omega G_2(y)}\,dy - \Big( Q(1)\,e^{i\omega G_2(1)} - Q(-1)\,e^{i\omega G_2(-1)} \Big) \right]
= E_1 + E_2 + E_3.
Here,
E_1 := \int_{-1}^{1}\int_{-1}^{1} F(x,y)\,e^{i\omega(G_1(x)+G_2(y))}\,dx\,dy - \int_{-1}^{1} \hat H_N(y)\,e^{i\omega G_2(y)}\,dy,
E_2 := \int_{-1}^{1} \sum_{j=0}^{N} H(y_j) L_j(y)\,e^{i\omega G_2(y)}\,dy - \int_{-1}^{1} \sum_{j=0}^{N} \Big( P_j(1)\,e^{i\omega G_1(1)} - P_j(-1)\,e^{i\omega G_1(-1)} \Big) L_j(y)\,e^{i\omega G_2(y)}\,dy,
E_3 := \int_{-1}^{1} H_N(y)\,e^{i\omega G_2(y)}\,dy - \Big( Q(1)\,e^{i\omega G_2(1)} - Q(-1)\,e^{i\omega G_2(-1)} \Big).
In the remaining work, we give estimates for E 1 , E 2 , E 3 with respect to the increasing truncation term N and frequency ω .
For E 1 , note that H ( y ) is bounded within its Bernstein ellipse by
|H(y)| = \left| \int_{-1}^{1} F(x,y)\,e^{i\omega G_1(x)}\,dx \right| \le \int_{-1}^{1} |F(x,y)|\,\big|e^{i\omega G_1(x)}\big|\,dx \le 2M_F.
As a result, integration by parts gives
|E_1| = \left| \int_{-1}^{1} \big(H(y)-\hat H_N(y)\big)\,e^{i\omega G_2(y)}\,dy \right|
= \left| \left. \frac{H(y)-\hat H_N(y)}{i\omega G_2'(y)}\,e^{i\omega G_2(y)} \right|_{y=-1}^{y=1} - \frac{1}{i\omega}\int_{-1}^{1} \frac{\big(H(y)-\hat H_N(y)\big)'\,G_2'(y) - \big(H(y)-\hat H_N(y)\big)\,G_2''(y)}{\big(G_2'(y)\big)^2}\,e^{i\omega G_2(y)}\,dy \right|
\le \frac{2\|H-\hat H_N\|}{\omega m_2} + \frac{2\|H'-\hat H_N'\|}{\omega m_2} + \frac{2M_2\,\|H-\hat H_N\|}{\omega m_2^2}
\le \frac{8M_F}{m_2\omega}\,\frac{1}{\rho_H^{\,N}(\rho_H-1)} + \frac{16M_F}{m_2\omega}\,\frac{(N+1)^2}{\rho_H^{\,N-2}(\rho_H-1)^3} + \frac{8M_F M_2}{m_2^2\omega}\,\frac{1}{\rho_H^{\,N}(\rho_H-1)}
\le \frac{8M_F}{m_2\omega}\left(1+\frac{M_2}{m_2}\right)\left[ \frac{1}{\rho_H^{\,N}(\rho_H-1)} + \frac{(N+1)^2}{\rho_H^{\,N-2}(\rho_H-1)^3} \right]
\le \frac{8M_F}{m_2\omega}\left(1+\frac{M_2}{m_2}\right)\left[ \frac{1}{\rho_H^{\,N}(\rho_H-1)^2} + \frac{(N+1)^2}{\rho_H^{\,N-2}(\rho_H-1)^3} \right]
\le \frac{8M_F}{m_2\omega}\left(1+\frac{M_2}{m_2}\right)\frac{2(N+1)^2}{\rho_H^{\,N-2}(\rho_H-1)^3}
= \frac{C_1 (N+1)^2}{\omega\,\rho_H^{\,N-2}(\rho_H-1)^3},
where C_1 := \frac{16 M_F}{m_2}\left(1+\frac{M_2}{m_2}\right).
For E 2 , since
\hat H_N(y) = \sum_{j=0}^{N} H(y_j) L_j(y), \qquad H_N(y) = \sum_{j=0}^{N} H_N(y_j) L_j(y), \qquad H_N(y_j) = P_j(1)\,e^{i\omega G_1(1)} - P_j(-1)\,e^{i\omega G_1(-1)},
letting
E_{2,j} := H(y_j) - H_N(y_j),
we obtain
E_2 = \sum_{j=0}^{N} E_{2,j} \int_{-1}^{1} L_j(y)\,e^{i\omega G_2(y)}\,dy.
Furthermore, letting
\mathrm{Err}_{2,j}(x) := \frac{F(x,y_j) - P_j'(x) - i\omega G_1'(x) P_j(x)}{G_1'(x)},
we have
E_{2,j} = \int_{-1}^{1} \mathrm{Err}_{2,j}(x)\, G_1'(x)\, e^{i\omega G_1(x)}\,dx.
A direct calculation, as in the estimation procedure for E_1, results in
|E_{2,j}| \le \frac{2\|\mathrm{Err}_{2,j}\|}{\omega} + \frac{2\|\mathrm{Err}_{2,j}'\|}{\omega}.
Then, let us consider the decay rate of the coefficients of the Chebyshev expansion of \mathrm{Err}_{2,j}(x), which helps to analyze \|\mathrm{Err}_{2,j}\| and \|\mathrm{Err}_{2,j}'\|. In fact, the truncation technique implies
M_N[G_1]\, D_N\, P_{j,N} + i\omega P_{j,N} = M_N[G_1]\, F_{j,N}
with
P_{j,N} = \big(p_0^{j,N}, p_1^{j,N}, \ldots, p_N^{j,N}\big)^T, \qquad F_{j,N} = \big(f_0^j, f_1^j, \ldots, f_N^j\big)^T,
and p_k^{j,N} denotes the approximation to p_k^j in Equation (12). For sufficiently large ω > N, it follows that \left\| \frac{1}{\omega} M_N[G_1]\, D_N \right\| < 1. By Neumann's lemma, we have
P_{j,N} = \frac{1}{i\omega} \sum_{n=0}^{\infty} \left( -\frac{1}{i\omega} \right)^n \big( M_N[G_1]\, D_N \big)^n M_N[G_1]\, F_{j,N}.
Denoting by S_N the norm of \sum_{n=0}^{\infty} \left( -\frac{1}{i\omega} \right)^n \big( M_N[G_1]\, D_N \big)^n M_N[G_1], we notice that
|p_n^{j,N}| \le \frac{2 S_N M_F}{\omega\,\rho_F^{\,n}}, \qquad |dp_n^{j,N}| \le \frac{4 S_N M_F}{\omega}\,\frac{n+1}{\rho_F^{\,n-1}(\rho_F-1)^2} \le \frac{4 S_N M_F}{\rho_F^{\,n-1}(\rho_F-1)^2}.
The Chebyshev coefficients of Err 2 , j ( x ) can be computed by
c_n^{2,j} = \int_{-1}^{1} \frac{\mathrm{Err}_{2,j}(x)\, T_n(x)}{\sqrt{1-x^2}}\,dx, \qquad n = 0, 1, \ldots.
Noting the construction of the modified spectral coefficient Levin method, we get c_n^{2,j} = 0 for n = 0, 1, \ldots, N. On the other hand, for n ≥ N+1, it follows that
|c_n^{2,j}| \le \left| \int_{-1}^{1} \frac{F(x,y_j)\, T_n(x)}{G_1'(x)\sqrt{1-x^2}}\,dx \right| + \left| \int_{-1}^{1} \frac{P_j'(x)\, T_n(x)}{G_1'(x)\sqrt{1-x^2}}\,dx \right|.
It is noted that the first term on the right-hand side of the above inequality is the coefficient of \frac{F(x,y_j)}{G_1'(x)} and the second term is that of \frac{P_j'(x)}{G_1'(x)}; we denote these coefficients by c_n^{FG} and c_n^{PG}, respectively. Recalling the product operator in Equation (5), we have
|c_n^{FG}| \le \frac{3}{2}\Big( |g_n^1||f_0^j| + |g_{n-1}^1||f_1^j| + \cdots + |g_0^1||f_n^j| \Big) \le \frac{6 M_{G_1} M_F (n+1)}{\rho_F^{\,n}},
and
|c_n^{PG}| \le \frac{3}{2}\Big( |g_n^1||dp_0^{j,N}| + |g_{n-1}^1||dp_1^{j,N}| + \cdots + |g_0^1||dp_n^{j,N}| \Big) \le \frac{12 M_{G_1} M_F S_N (n+1)}{\rho_F^{\,n-1}(\rho_F-1)^2}.
Therefore, it follows that
|c_n^{2,j}| \le \frac{6 M_{G_1} M_F (n+1)}{\rho_F^{\,n}} + \frac{12 M_{G_1} M_F S_N (n+1)}{\rho_F^{\,n-1}(\rho_F-1)^2} \le \frac{6 M_{G_1} M_F (n+1)}{\rho_F^{\,n-1}(\rho_F-1)^2} + \frac{12 M_{G_1} M_F S_N (n+1)}{\rho_F^{\,n-1}(\rho_F-1)^2} \le \frac{C (n+1)}{\rho_F^{\,n-1}(\rho_F-1)^2}.
Here, C := 2\max\{6 M_{G_1} M_F,\, 12 M_{G_1} M_F S_N\}. Hence, \|\mathrm{Err}_{2,j}\| and \|\mathrm{Err}_{2,j}'\| can be bounded by
\|\mathrm{Err}_{2,j}\| \le \sum_{n=N+1}^{\infty} |c_n^{2,j}| \le \frac{C}{(\rho_F-1)^2} \sum_{n=N+1}^{\infty} \frac{n+1}{\rho_F^{\,n-1}} \le \frac{C (N+2)}{\rho_F^{\,N-2}(\rho_F-1)^4},
and
\|\mathrm{Err}_{2,j}'\| \le \sum_{n=N+1}^{\infty} |c_n^{2,j}|\, \|T_n'\| \le \sum_{n=N+1}^{\infty} \frac{C (n+1)\, n^2}{\rho_F^{\,n-1}(\rho_F-1)^2} \le \frac{2 C (N+2)^3}{\rho_F^{\,N-4}(\rho_F-1)^6}.
As a result, it follows that
|E_{2,j}| \le \frac{2}{\omega}\Big( \|\mathrm{Err}_{2,j}\| + \|\mathrm{Err}_{2,j}'\| \Big) \le \frac{8 C}{\omega}\,\frac{(N+2)^3}{\rho_F^{\,N-4}(\rho_F-1)^6}.
Now, we arrive at the fact that
|E_2| \le \sum_{j=0}^{N} |E_{2,j}| \left| \int_{-1}^{1} L_j(y)\, e^{i\omega G_2(y)}\,dy \right| \le \frac{C_2 (N+2)^3 \log(N+1)}{\omega\,\rho_F^{\,N-4}(\rho_F-1)^6},
where C_2 := \frac{64 C}{\pi}.
The estimation procedure for E_3 is similar to that for E_{2,j}. We omit the details and give the conclusion directly:
|E_3| \le \frac{C_3 (N+2)^3}{\omega\,\rho_H^{\,N-4}(\rho_H-1)^6},
where C 3 does not depend on N and ω .
To sum up, we arrive at the following error bound by combining Equations (17), (22), and (23),
\big| \hat I[F, G_1, G_2, \omega] - \hat I_N[F, G_1, G_2, \omega] \big| \le |E_1| + |E_2| + |E_3|
\le \frac{C_1 (N+1)^2}{\omega\,\rho_H^{\,N-2}(\rho_H-1)^3} + \frac{C_2 (N+2)^3 \log(N+1)}{\omega\,\rho_F^{\,N-4}(\rho_F-1)^6} + \frac{C_3 (N+2)^3}{\omega\,\rho_H^{\,N-4}(\rho_H-1)^6}
\le C \left[ \frac{(N+1)^2}{\omega\,\rho_H^{\,N-2}(\rho_H-1)^3} + \frac{(N+2)^3 \log(N+1)}{\omega\,\rho_F^{\,N-4}(\rho_F-1)^6} + \frac{(N+2)^3}{\omega\,\rho_H^{\,N-4}(\rho_H-1)^6} \right],
with C = max { C 1 , C 2 , C 3 } . It is easily seen that the constant C does not depend on N and ω . This completes the proof. □
Finally, let us turn to the construction of the composite quadrature rule for the calculation of Integral (1). It is observed from the above theorem that, when the radii ρ_F, ρ_H are close to 1, the error bound deteriorates dramatically. Therefore, an efficient quadrature rule has to guarantee that the integrand has a relatively large analytic radius over each integration subdomain. To satisfy this requirement, we choose a non-uniform grid instead of partitioning the integration region uniformly.
To begin with, the singular point z* is projected onto the plane containing the integration region, which gives the projection point z. In the case of the projected point z falling into the integration domain (Case I), the first box is determined by the distance between z* and z: we construct a square with center z and side length 2‖z* − z‖. Then, the side length of the level-2 box with center z is set to 2²‖z* − z‖. To devise the composite quadrature rule, we first select the level-1 box as a subdomain. Noting that the remaining domain is not a rectangle, we partition it into four subdomains, that is, Box21, Box22, Box23, and Box24 (see Figure 1 and the sketch below). In general, the side length of the level-l box with center z is set to 2^l‖z* − z‖, and the corresponding integration subdomains are constructed similarly, which finally results in a nonuniform grid (see Figure 2).
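The following sketch shows one possible realization of the Case I construction; the exact splitting of each square annulus into four rectangles is our assumption based on Figure 1, not code from the paper.

```python
# Nested boxes for Case I: level-l squares centered at the projection point z with side
# length 2^l * dist, where dist = ||z* - z||; each annulus between consecutive levels is
# covered by four axis-aligned rectangles (top, bottom, left, right strips).
def nested_boxes(z, dist, levels):
    """Return rectangles as tuples (x0, x1, y0, y1)."""
    zx, zy = z
    h_prev = dist                                              # half side of the level-1 box
    rects = [(zx - h_prev, zx + h_prev, zy - h_prev, zy + h_prev)]
    for level in range(2, levels + 1):
        h = 2 ** (level - 1) * dist                            # half side of the level-l box
        rects += [
            (zx - h, zx + h, zy + h_prev, zy + h),             # top strip
            (zx - h, zx + h, zy - h, zy - h_prev),             # bottom strip
            (zx - h, zx - h_prev, zy - h_prev, zy + h_prev),   # left strip
            (zx + h_prev, zx + h, zy - h_prev, zy + h_prev),   # right strip
        ]
        h_prev = h
    return rects

print(nested_boxes(z=(0.1, -0.2), dist=0.05, levels=3))        # 1 box + 2 x 4 strips
```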
When the projected point falls outside the integration domain, for example, near a side (Case II) or a vertex (Case III), we implement a partition procedure similar to that of Case I. The final partition grids are shown in Figure 3 and Figure 4. Applying the 2D spectral coefficient Levin quadrature rule on each subdomain leads to a class of composite 2D spectral coefficient Levin quadrature rules. It is noted that this kind of partition technique guarantees that the distance between the singular point and the integration interval is no less than 2 when we map each integration subdomain onto [-1, 1] × [-1, 1].

4. Numerical Experiments

This section is devoted to illustrating the numerical performance of 2D spectral coefficient Levin quadrature (2DSC-Levin) and composite 2D spectral coefficient Levin quadrature (C2DSC-Levin) given in Section 3.
Example 1.
Let us consider the computation of the oscillatory integral
\int_{-1}^{1}\int_{-1}^{1} \cos(x+y)\, e^{i\omega(x+y)}\, dx\, dy.
The phase x + y has no stationary points within the domain [ 1 , 1 ] × [ 1 , 1 ] , and the amplitude cos ( x + y ) is analytic with respect to both variables. Therefore, it is expected that the new approach has an exponential convergence rate.
It is noted that approximation results derived from classical cubature usually do not make sense when ω ≫ 1. We first employ CCQ and give the computational results in Table 2, where both the number of quadrature nodes N and the oscillation parameter ω vary.
Although plenty of quadrature nodes are used in the above computation, the results are not satisfactory, especially in the case of high oscillation. We now list the approximation results of 2DSC-Levin in Table 3, where the referenced exact value is again computed by the Chebfun toolbox. It can be seen from Table 3 that the absolute errors do not increase as the frequency ω grows, which implies that the new method is robust to high oscillation. On the other hand, when we increase the truncation order of the Chebyshev series, the absolute error decays fast. In Figure 5, we show the behavior of the absolute errors with respect to increasing frequency and compare the computational time of 2DSC-Levin with the reference algorithm in Chebfun. It can be seen that the time consumed by 2DSC-Levin does not vary as the frequency ω grows, whereas that of Chebfun's 2D quadrature (2D-Cheb) increases dramatically. Since 2D-Cheb is a self-adaptive algorithm, it has to increase the number of quadrature nodes as the frequency goes to infinity in order to meet a given tolerance, which results in the dramatically growing curve. In contrast, approximations derived from 2DSC-Levin do not suffer from high oscillation, in accordance with Theorem 1. Hence, there is no need to increase the number of quadrature nodes of 2DSC-Levin for high oscillation, and no obvious change is observed in Figure 5.
We also employ 2D-Cheb–Levin quadrature given in [18] to give a comparison. Computed results are shown in Table 4.
A comparison between Table 3 and Table 4 illustrates that the accuracy of 2DSC-Levin and 2D-Cheb–Levin quadrature is similar. However, 2DSC-Levin does a little better than 2D-Cheb–Levin quadrature when CPU time is considered. Since both approaches consist of approximations to a series of one-dimensional integrals, we show the CPU time of both approaches for computing the final highly oscillatory integrals in Table 5, where it can be seen that 2DSC-Levin is slightly faster than 2D-Cheb–Levin quadrature, which is partly due to the sparse structure of the discretized modified Levin equation.
For the computation of univariate oscillatory integrals, Levin quadrature does well in solving problems with complicated phases. In the following example, we consider a highly oscillatory integral with nonlinear oscillators over [0, 1] × [0, 1].
Example 2.
Let us consider the computation of the oscillatory integral
\int_{0}^{1}\int_{0}^{1} \frac{1}{x^2+y^2+15}\, e^{i\omega(x^2+x+y^2+y)}\, dx\, dy.
The amplitude \frac{1}{x^2+y^2+15} is no longer an entire function, and the inverse of the phase function x^2+x+y^2+y cannot be calculated directly.
We show the absolute errors and CPU time of 2DSC-Levin in Table 6 and Figure 6, respectively. Because the amplitude in Example 2 has a limited analytic radius, 2DSC-Levin converges to machine precision much more slowly than in Example 1. However, the decaying curve in the left part of Figure 6 shows that 2DSC-Levin has the property that the higher the oscillation, the better the approximation, which coincides with the theoretical estimate in Theorem 1. Hence, 2DSC-Levin is feasible for the calculation of highly oscillatory integrals over rectangle regions when the oscillator g(x, y) is nonlinear. In addition, it is interesting that the curve of the CPU time of 2DSC-Levin has a jump at about ω = 5500 in the right part of Figure 6. Such a phenomenon may originate from the fact that the Levin equation can be solved more efficiently as the frequency becomes larger. However, this is still a conjecture, and more theoretical investigation is needed in future work.
To illustrate the effectiveness of the composite 2D spectral coefficient Levin quadrature rule (C2DSC Levin), we give a comparison among the new approach, 2D sinh transformation (JJE) in [22], and 2D dilation quadrature (2D-d) in [23].
For Integral (1), the sinh transformation is defined by
x = a + \epsilon \sinh(\mu_1 u - \eta_1), \qquad y = b + \epsilon \sinh(\mu_2 v - \eta_2),
where
\mu_1 = \frac{1}{2}\left[ \mathrm{arcsinh}\!\left(\frac{1+a}{\epsilon}\right) + \mathrm{arcsinh}\!\left(\frac{1-a}{\epsilon}\right) \right], \qquad \eta_1 = \frac{1}{2}\left[ \mathrm{arcsinh}\!\left(\frac{1+a}{\epsilon}\right) - \mathrm{arcsinh}\!\left(\frac{1-a}{\epsilon}\right) \right],
and \mu_2, \eta_2 are defined analogously with b in place of a.
Since the transformed integrand is no longer nearly singular, a direct 2D Gauss cubature can be applied in practical computation. However, it should be noted that, although JJE can efficiently deal with nearly singular problems, it generally suffers from highly oscillatory integrands.
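A small sketch of the one-dimensional substitution (our reading of the JJE transformation in [22]; the parameter names mirror the formulas above) illustrates how the change of variables clusters nodes around the nearly singular point while mapping [-1, 1] onto itself.

```python
# sinh substitution x = a + eps*sinh(mu*u - eta): maps u in [-1, 1] onto x in [-1, 1]
# and concentrates points near x = a, removing the near singularity of 1/((x-a)^2 + eps^2).
import numpy as np

def sinh_transform(a, eps):
    mu  = 0.5 * (np.arcsinh((1.0 + a) / eps) + np.arcsinh((1.0 - a) / eps))
    eta = 0.5 * (np.arcsinh((1.0 + a) / eps) - np.arcsinh((1.0 - a) / eps))
    x  = lambda u: a + eps * np.sinh(mu * u - eta)
    dx = lambda u: eps * mu * np.cosh(mu * u - eta)   # Jacobian of the substitution
    return x, dx

x, dx = sinh_transform(a=-0.5, eps=0.3)
u = np.linspace(-1.0, 1.0, 5)
print(x(u))   # endpoints map to -1 and 1; interior nodes cluster around x = -0.5
```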
In [23], Occorsio and Serafini proposed 2D-d for the integral
I(F, \omega) = \int_{D} F(\mathbf{x})\, K(\mathbf{x}, \omega)\, d\mathbf{x},
where D := [-1, 1] \times [-1, 1], \mathbf{x} = (x, y), and
F(\mathbf{x}) = \frac{1}{(x-a)^2 + (y-b)^2 + \epsilon^2}, \qquad K(\mathbf{x}, \omega) = e^{i\omega(G_1(x)+G_2(y))}.
Letting \omega_1 = |\omega|, x = \eta/\omega_1, y = \theta/\omega_1, we have
I(F, \omega) = \frac{1}{\omega_1^2} \int_{[-\omega_1, \omega_1]^2} F\!\left( \frac{\eta}{\omega_1}, \frac{\theta}{\omega_1} \right) K\!\left( \frac{\eta}{\omega_1}, \frac{\theta}{\omega_1}, \omega \right) d\eta\, d\theta.
Properly choosing d \in \mathbb{R}_+ and S = 2\omega_1/d \in \mathbb{N} results in
I(F, \omega) = \frac{1}{\omega_1^2} \sum_{i=1}^{S} \sum_{j=1}^{S} \int_{D_{i,j}} F\!\left( \frac{\eta}{\omega_1}, \frac{\theta}{\omega_1} \right) K\!\left( \frac{\eta}{\omega_1}, \frac{\theta}{\omega_1}, \omega \right) d\eta\, d\theta.
Here, D_{i,j} := [-\omega_1 + (i-1)d,\, -\omega_1 + id] \times [-\omega_1 + (j-1)d,\, -\omega_1 + jd]. Employing the transformed Gauss–Jacobi quadrature on each moment integral gives the 2D dilation quadrature.
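As a simple illustration of the splitting above (our paraphrase, not the reference implementation of [23]), the cells D_{i,j} can be generated as follows.

```python
# Dilation cells D_{i,j}: split [-omega1, omega1]^2 into S x S squares of side d,
# where omega1 = |omega| and S = 2*omega1/d is required to be an integer.
import numpy as np

def dilation_cells(omega, d):
    omega1 = abs(omega)
    S = 2.0 * omega1 / d
    assert abs(S - round(S)) < 1e-12, "choose d so that 2*omega1/d is an integer"
    S = int(round(S))
    edges = -omega1 + d * np.arange(S + 1)
    return [((edges[i], edges[i + 1]), (edges[j], edges[j + 1]))
            for i in range(S) for j in range(S)]

cells = dilation_cells(omega=20.0, d=10.0)   # 4 x 4 = 16 cells covering [-20, 20]^2
print(len(cells), cells[0])
```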
Example 3.
Consider the computation of
\int_{-1}^{1}\int_{-1}^{1} \frac{\sin(xy)}{(x+0.5)^2 + (y-0.5)^2 + 0.09}\, e^{i\omega(x+y)}\, dx\, dy.
It is noted that the integrand reaches its peak value at (-0.5, 0.5) and decreases dramatically away from this critical point.
We list the computed absolute errors of JJE, 2D-d and C2DSC-Levin in Table 7, Table 8 and Table 9. As the number of quadrature nodes N increases, all of the algorithms converge quickly to the referenced value. When the integrand does not change rapidly, JJE provides the best numerical approximation. However, as the frequency ω grows, JJE and 2D-d suffer from the high oscillation, while C2DSC-Levin is still able to maintain a relatively high-order approximation. It should be noted that the dilation parameter d in 2D-d is restricted so that the number of quadrature nodes coincides with that of the other two methods; a slightly modified choice, as is done in [23], may enable 2D-d to deal with some highly oscillatory problems. The corresponding results are shown in Figure 7. The numerical results in this figure indicate that the absolute error of JJE increases dramatically when ω goes beyond 500, while the error of 2D-d rises slowly. It is also found that neither the absolute errors nor the computational time of the new approach suffers from the varying frequency ω. Hence, C2DSC-Levin is the most effective of the compared tools for computing oscillatory and nearly singular integrals.
Although JJE is efficient for solving some nearly singular problems, it may fail when ϵ = 0 in the integrand, as in the following example.
Example 4.
Let us consider the computation of the following integral
\int_{0}^{1}\int_{0}^{1} \frac{1}{(x+0.02)^2 + (y+0.02)^2}\, e^{i\omega(x^3+3x+y^2+6y)}\, dx\, dy.
It is noted that JJE does not work in this case.
The absolute errors derived from 2D-d and C2DSC-Levin are listed in Table 10 and Table 11, respectively. It can be found that 2D-d provides a more accurate approximation than C2DSC-Levin at relatively low frequencies. Nevertheless, the absolute error of C2DSC-Levin never increases at high frequencies, while its 2D-d counterpart grows. Furthermore, although 2D-d would provide a more accurate approximation if we employed the choice of the dilation parameter considered in [23], it still cannot beat C2DSC-Levin at high frequencies when both the absolute errors and the computational time are considered (see Figure 8).

5. Conclusions

In this paper, we have presented the modified spectral coefficient Levin quadrature for the calculation of highly oscillatory integrals over rectangle regions and established its convergence rate with respect to the truncation order and the oscillation parameter. Furthermore, by considering the numerical calculation of moments over a non-uniform mesh, we derived a composite Levin quadrature. Numerical experiments indicate that the non-uniform partition technique greatly alleviates the nearly singular behavior. Recently, sharp bounds for the coefficients of multivariate Gegenbauer expansions of analytic functions have been studied in [28], which opens the door to our ongoing work on the convergence analysis of Levin quadrature in high-dimensional hypercubes.
On the other hand, studies on the asymptotic and oscillatory behavior of solutions to highly oscillatory integral and differential equations have attracted much attention during the past decades [29,30,31,32]. The computation and numerical analysis of oscillatory integrals provide efficient tools for such studies, and investigating the application of the proposed approaches to oscillatory equations is also necessary in future work.

Author Contributions

Z.Y. and J.M. conceived and designed the experiments; Z.Y. performed the experiments; Z.Y. and J.M. analyzed the data; J.M. contributed reagents/materials/analysis tools; Z.Y. and J.M. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11901133) and the Science and Technology Foundation of Guizhou Province (No. QKHJC[2020]1Y014).

Acknowledgments

The authors thank referees for their helpful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CCQ: Clenshaw–Curtis quadrature
2DSC-Levin: 2D spectral coefficient Levin quadrature
C2DSC-Levin: composite 2D spectral coefficient Levin quadrature
2D-Cheb: Chebfun's 2D quadrature
JJE: cubature with the 2D sinh transformation in [22]
2D-d: 2D dilation quadrature

References

  1. Chandler-Wilde, S.N.; Graham, I.G.; Langdon, S.; Spence, E.A. Numerical-asymptotic boundary integral methods in high-frequency acoustic scattering. Acta Numer. 2012, 21, 89–305. [Google Scholar] [CrossRef] [Green Version]
  2. Wu, Y.; Jiang, L.; Chew, W. An efficient method for computing highly oscillatory physical optics integral. Prog. Electromagn. Res. 2012, 127, 211–257. [Google Scholar] [CrossRef] [Green Version]
  3. Ma, J. Fast and high-precision calculation of earth return mutual impedance between conductors over a multilayered soil. COMPEL Int. J. Comput. Math. Electr. Electron. Eng. 2018, 37, 1214–1227. [Google Scholar] [CrossRef]
  4. Iserles, A.; Nørsett, S.P. Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. A Math. Phys. Eng. Sci. 2005, 461, 1383–1399. [Google Scholar] [CrossRef] [Green Version]
  5. Xiang, S. Efficient Filon-type methods for \int_a^b f(x)e^{i\omega g(x)}\,dx. Numer. Math. 2007, 105, 633–658. [Google Scholar] [CrossRef]
  6. Huybrechs, D.; Vandewalle, S. On the evaluation of highly oscillatory integrals by analytic continuation. SIAM J. Numer. Anal. 2006, 44, 1026–1048. [Google Scholar] [CrossRef] [Green Version]
  7. Levin, D. Procedures for computing one- and two-dimensional integrals of functions with rapid irregular oscillations. Math. Comput. 1982, 38, 531–538. [Google Scholar] [CrossRef]
  8. Levin, D. Analysis of a collocation method for integrating rapidly oscillatory functions. J. Comput. Appl. Math. 1997, 78, 131–138. [Google Scholar] [CrossRef] [Green Version]
  9. Olver, S. Moment-free numerical integration of highly oscillatory functions. IMA J. Numer. Anal. 2006, 26, 213–227. [Google Scholar] [CrossRef]
  10. Li, J.; Wang, X.; Wang, T. A universal solution to one-dimensional oscillatory integrals. Sci. China Ser. F Inf. Sci. 2008, 51, 1614–1622. [Google Scholar] [CrossRef]
  11. Zaman, S. New quadrature rules for highly oscillatory integrals with stationary points. J. Comput. Appl. Math. 2015, 278, 75–89. [Google Scholar]
  12. Hasegawa, T.; Sugiura, H. A user-friendly method for computing indefinite integrals of oscillatory functions. J. Comput. Appl. Math. 2017, 315, 126–141. [Google Scholar] [CrossRef]
  13. Ma, J.; Liu, H. A well-conditioned Levin method for calculation of highly oscillatory integrals and its application. J. Comput. Appl. Math. 2018, 342, 451–462. [Google Scholar] [CrossRef]
  14. Molabahrami, A. Galerkin Levin method for highly oscillatory integrals. J. Comput. Appl. Math. 2017, 321, 499–507. [Google Scholar] [CrossRef]
  15. Wang, Y.; Xiang, S. Levin methods for highly oscillatory integrals with singularities. Sci. China Math. 2020. [Google Scholar] [CrossRef] [Green Version]
  16. Ma, J.; Liu, H. A sparse fractional Jacobi-Galerkin–Levin quadrature rule for highly oscillatory integrals. Appl. Math. Comput. 2020, 367, 124775. [Google Scholar] [CrossRef]
  17. Zaman, S.; Hussain, I. Approximation of highly oscillatory integrals containing special functions. J. Comput. Appl. Math. 2020, 365, 112372. [Google Scholar] [CrossRef]
  18. Li, J.; Wang, X.; Wang, T.; Shen, C. Delaminating quadrature method for multi-dimensional highly oscillatory integrals. Appl. Math. Comput. 2009, 209, 327–338. [Google Scholar] [CrossRef]
  19. Siraj-ul-Islam; Zaman, S. Numerical methods for multivariate highly oscillatory integrals. Int. J. Comput. Math. 2018, 95, 1024–1046. [Google Scholar] [CrossRef]
  20. Ma, J.; Duan, S. Spectral Levin-type methods for calculation of generalized Fourier transforms. Comput. Appl. Math. 2019, 38, 1–14. [Google Scholar] [CrossRef]
  21. Trefethen, L.N. Chebfun Version 5.7.0. The Chebfun Development Team. 2017. Available online: http://www.maths.ox.ac.uk/chebfun/ (accessed on 31 July 2020).
  22. Johnston, B.M.; Johnston, P.R.; Elliott, D. A sinh transformation for evaluating two-dimensional nearly singular boundary element integrals. Int. J. Numer. Methods Eng. 2007, 69, 1460–1479. [Google Scholar] [CrossRef]
  23. Occorsio, D.; Serafini, G. Cubature formulae for nearly singular and highly oscillating integrals. Calcolo 2018, 55, 1–33. [Google Scholar] [CrossRef]
  24. Mason, J.C.; Handscomb, D.C. Chebyshev Polynomials; Taylor and Francis: Oxfordshire, UK, 2002. [Google Scholar]
  25. Xiang, S.; Chen, X.; Wang, H. Error bounds for approximation in Chebyshev points. Numer. Math. 2010, 116, 463–491. [Google Scholar] [CrossRef]
  26. Olver, S.; Townsend, A. A fast and well-conditioned spectral method. SIAM Rev. 2013, 55, 462–489. [Google Scholar] [CrossRef]
  27. Xiang, S. On the Filon and Levin methods for highly oscillatory integral \int_a^b f(x)e^{i\omega g(x)}\,dx. J. Comput. Appl. Math. 2007, 208, 434–439. [Google Scholar] [CrossRef] [Green Version]
  28. Wang, H.; Zhang, L. Analysis of multivariate Gegenbauer approximation in the hypercube. Adv. Comput. Math. 2020, 46, 53. [Google Scholar] [CrossRef]
  29. Xiang, S.; Brunner, H. Efficient methods for Volterra integral equations with highly oscillatory Bessel kernels. BIT Numer. Math. 2013, 53, 241–263. [Google Scholar] [CrossRef]
  30. Ma, J.; Kang, H. Frequency-explicit convergence analysis of collocation methods for highly oscillatory Volterra integral equations with weak singularities. Appl. Numer. Math. 2020, 151, 1–12. [Google Scholar] [CrossRef]
  31. Bazighifan, O.; Ramos, H. On the asymptotic and oscillatory behavior of the solutions of a class of higher-order differential equations with middle term. Appl. Math. Lett. 2020, 107, 106431. [Google Scholar] [CrossRef]
  32. Bazighifan, O. Kamenev and Philos-types oscillation criteria for fourth-order neutral differential equations. Adv. Differ. Equ. 2020, 2020, 201. [Google Scholar] [CrossRef]
Figure 1. The integration subdomains for the level-2 box.
Figure 2. The partition method in Case I (left: location of the singular point z* and projection point z; right: the nonuniform grid).
Figure 3. The partition method in Case II (left: location of the singular point z* and projection point z; right: the nonuniform grid).
Figure 4. The partition method in Case III (left: location of the singular point z* and projection point z; right: the nonuniform grid).
Figure 5. Comparison between 2DSC-Levin and 2D-Cheb in Example 1, ω is a variable (left: absolute errors; right: CPU time).
Figure 6. Comparison between 2DSC-Levin and 2D-Cheb in Example 2, ω is a variable (left: absolute errors; right: CPU time).
Figure 7. Comparison of JJE, 2D-d and C2DSC-Levin in Example 3, ω is a variable (left: absolute errors; right: CPU time).
Figure 8. Comparison of 2D-d and C2DSC-Levin in Example 4, ω is a variable (left: absolute errors; right: CPU time).
Table 1. Numerical results of the classical delaminating quadrature rule for the highly oscillatory multivariate integral \int_{-1}^{1}\int_{-1}^{1} \cos(x+y)\, e^{i\omega(x+y)}\, dx\, dy.

           CCQ                  Referenced Value
ω = 10     0.020725079138303    0.020722222756906
ω = 40     0.325821665438573    0.001251371233532
ω = 160    0.084546798935177    0.000107898607873
ω = 640    0.098831710673239    0.000004494735846
ω = 2560   0.213379710792315    0.000000211475406
Table 2. Absolute errors of CCQ for Example 1.

           ω = 200         ω = 500         ω = 2000        ω = 5000        ω = 10,000
N = 200    2.2 × 10^{-16}  3.5 × 10^{-2}   8.8 × 10^{-2}   6.2 × 10^{-4}   1.6 × 10^{-2}
N = 400    1.6 × 10^{-16}  1.9 × 10^{-16}  1.0 × 10^{-3}   2.5 × 10^{-2}   8.1 × 10^{-3}
N = 800    2.0 × 10^{-17}  4.2 × 10^{-17}  7.2 × 10^{-3}   1.5 × 10^{-4}   1.7 × 10^{-2}
N = 1600   4.4 × 10^{-17}  8.8 × 10^{-18}  8.2 × 10^{-17}  1.4 × 10^{-3}   4.9 × 10^{-4}
Table 3. Absolute errors of 2DSC-Levin for Example 1.

         ω = 200         ω = 500         ω = 2000        ω = 5000        ω = 10,000
N = 6    1.7 × 10^{-10}  6.0 × 10^{-11}  7.3 × 10^{-13}  2.1 × 10^{-14}  1.8 × 10^{-13}
N = 8    7.1 × 10^{-13}  2.0 × 10^{-13}  2.6 × 10^{-15}  8.4 × 10^{-17}  8.0 × 10^{-16}
N = 10   1.9 × 10^{-15}  4.3 × 10^{-16}  7.6 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
N = 12   9.2 × 10^{-18}  4.4 × 10^{-18}  1.3 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
N = 16   5.9 × 10^{-18}  3.7 × 10^{-18}  1.3 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
Table 4. Absolute errors of 2D-Cheb–Levin quadrature for Example 1.

         ω = 200         ω = 500         ω = 2000        ω = 5000        ω = 10,000
N = 6    1.8 × 10^{-10}  8.7 × 10^{-12}  1.2 × 10^{-13}  3.5 × 10^{-15}  6.5 × 10^{-16}
N = 8    1.4 × 10^{-12}  5.7 × 10^{-14}  8.4 × 10^{-16}  4.2 × 10^{-17}  1.8 × 10^{-16}
N = 10   5.8 × 10^{-15}  2.0 × 10^{-16}  1.9 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
N = 12   1.1 × 10^{-17}  4.2 × 10^{-18}  1.3 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
N = 16   6.2 × 10^{-18}  3.7 × 10^{-18}  1.3 × 10^{-18}  3.5 × 10^{-17}  1.8 × 10^{-16}
Table 5. Comparison of the CPU time of 2DSC-Levin and 2D-Cheb–Levin quadrature for fixed ω = 1000.

          2DSC-Levin   2D-Cheb–Levin Quadrature
N = 32    0.000822 s   0.001703 s
N = 64    0.002251 s   0.005618 s
N = 128   0.009882 s   0.017755 s
N = 256   0.054631 s   0.088747 s
Table 6. Absolute errors of 2DSC-Levin for Example 2.

         ω = 200         ω = 500         ω = 2000        ω = 5000        ω = 10,000
N = 6    7.3 × 10^{-10}  8.3 × 10^{-11}  3.8 × 10^{-12}  2.7 × 10^{-13}  2.2 × 10^{-13}
N = 8    6.3 × 10^{-11}  5.5 × 10^{-12}  2.9 × 10^{-13}  2.0 × 10^{-14}  1.6 × 10^{-14}
N = 10   5.3 × 10^{-12}  3.6 × 10^{-13}  2.2 × 10^{-14}  1.5 × 10^{-15}  1.1 × 10^{-15}
N = 12   4.2 × 10^{-13}  2.2 × 10^{-14}  1.7 × 10^{-15}  1.2 × 10^{-16}  8.0 × 10^{-17}
N = 16   2.2 × 10^{-15}  6.4 × 10^{-17}  9.2 × 10^{-18}  8.6 × 10^{-19}  3.1 × 10^{-19}
Table 7. Absolute errors of JJE for Example 3.

          N = 28          N = 56          N = 84          N = 112         N = 140         N = 168
ω = 10    2.5 × 10^{-16}  4.9 × 10^{-16}  5.9 × 10^{-16}  2.3 × 10^{-16}  1.5 × 10^{-15}  2.3 × 10^{-16}
ω = 20    4.8 × 10^{-11}  9.0 × 10^{-16}  8.9 × 10^{-16}  9.2 × 10^{-16}  8.6 × 10^{-16}  9.0 × 10^{-16}
ω = 40    2.6 × 10^{-4}   2.6 × 10^{-16}  8.3 × 10^{-17}  7.8 × 10^{-17}  1.2 × 10^{-16}  1.3 × 10^{-16}
ω = 80    1.1 × 10^{-2}   2.3 × 10^{-5}   1.2 × 10^{-16}  3.5 × 10^{-17}  8.1 × 10^{-17}  6.1 × 10^{-17}
ω = 160   1.5 × 10^{-2}   8.5 × 10^{-3}   8.6 × 10^{-3}   5.9 × 10^{-7}   2.5 × 10^{-17}  3.0 × 10^{-17}
Table 8. Absolute errors of 2D-d for Example 3.

          N = 32          N = 64          N = 96          N = 128         N = 160         N = 192
ω = 10    1.6 × 10^{-2}   6.1 × 10^{-4}   1.9 × 10^{-6}   4.2 × 10^{-8}   6.4 × 10^{-10}  4.3 × 10^{-12}
ω = 20    2.7 × 10^{-1}   5.7 × 10^{-3}   4.1 × 10^{-5}   7.8 × 10^{-8}   1.9 × 10^{-9}   4.7 × 10^{-11}
ω = 40    6.9 × 10^{-1}   7.7 × 10^{-1}   1.6 × 10^{-2}   1.1 × 10^{-4}   3.3 × 10^{-7}   1.6 × 10^{-9}
ω = 80    7.2 × 10^{-2}   6.7 × 10^{-2}   5.8 × 10^{-2}   1.9 × 10^{-1}   2.9 × 10^{-2}   4.5 × 10^{-4}
ω = 160   6.8 × 10^{-2}   3.1 × 10^{-2}   8.3 × 10^{-2}   2.3 × 10^{-2}   6.1 × 10^{-3}   2.8 × 10^{-2}
Table 9. Absolute errors of C2DSC-Levin for Example 3.

          N = 28          N = 56          N = 84          N = 112         N = 140         N = 168
ω = 10    7.3 × 10^{-4}   6.7 × 10^{-4}   9.0 × 10^{-5}   4.4 × 10^{-6}   1.1 × 10^{-8}   3.4 × 10^{-10}
ω = 20    7.2 × 10^{-4}   1.5 × 10^{-6}   3.4 × 10^{-6}   1.5 × 10^{-6}   3.2 × 10^{-8}   5.0 × 10^{-9}
ω = 40    1.2 × 10^{-4}   2.4 × 10^{-5}   5.9 × 10^{-7}   3.8 × 10^{-7}   2.0 × 10^{-7}   2.0 × 10^{-8}
ω = 80    8.8 × 10^{-6}   9.2 × 10^{-7}   4.6 × 10^{-7}   4.3 × 10^{-8}   2.3 × 10^{-9}   1.1 × 10^{-10}
ω = 160   5.5 × 10^{-6}   5.3 × 10^{-7}   2.1 × 10^{-8}   9.3 × 10^{-9}   3.6 × 10^{-9}   5.5 × 10^{-10}
Table 10. Absolute errors of 2D-d for Example 4.

          N = 64          N = 128         N = 192         N = 256         N = 320         N = 384
ω = 10    3.0 × 10^{-1}   1.3 × 10^{-4}   6.5 × 10^{-7}   2.9 × 10^{-9}   2.1 × 10^{-11}  5.5 × 10^{-14}
ω = 20    4.1 × 10^{-1}   7.0 × 10^{-2}   1.1 × 10^{-4}   8.9 × 10^{-8}   5.3 × 10^{-11}  4.3 × 10^{-13}
ω = 40    9.4 × 10^{-1}   3.9 × 10^{-1}   9.0 × 10^{-2}   2.2 × 10^{-2}   1.1 × 10^{-3}   8.2 × 10^{-6}
ω = 80    3.7 × 10^{-1}   1.6 × 10^{-1}   4.6 × 10^{-1}   1.3 × 10^{-1}   5.0 × 10^{-2}   3.5 × 10^{-2}
ω = 160   1.3 × 10^{0}    1.9 × 10^{-1}   2.6 × 10^{-1}   1.8 × 10^{-1}   2.0 × 10^{-1}   6.3 × 10^{-2}
Table 11. Absolute errors of C2DSC-Levin for Example 4.

          N = 64          N = 128         N = 192         N = 256         N = 320         N = 384
ω = 10    9.5 × 10^{-5}   8.4 × 10^{-8}   3.3 × 10^{-11}  1.5 × 10^{-12}  1.8 × 10^{-13}  4.8 × 10^{-13}
ω = 20    7.2 × 10^{-5}   5.6 × 10^{-8}   5.1 × 10^{-12}  3.9 × 10^{-13}  6.8 × 10^{-15}  1.5 × 10^{-13}
ω = 40    6.8 × 10^{-6}   1.3 × 10^{-8}   4.9 × 10^{-12}  3.3 × 10^{-14}  1.8 × 10^{-13}  6.4 × 10^{-13}
ω = 80    1.1 × 10^{-5}   1.3 × 10^{-8}   9.4 × 10^{-12}  1.2 × 10^{-14}  3.7 × 10^{-12}  5.5 × 10^{-13}
ω = 160   1.6 × 10^{-6}   2.6 × 10^{-9}   2.2 × 10^{-12}  4.4 × 10^{-15}  3.8 × 10^{-15}  2.6 × 10^{-13}
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
