Numerical Integration of Highly Oscillatory Functions with and without Stationary Points

This paper proposes an original approach to calculating integrals of rapidly oscillating functions, based on Levin's algorithm, which reduces the search for an anti-derivative function to solving an ODE with a complex coefficient. The direct solution of the differential equation is based on the method of integrating factors. The reduction of the original integration problem to a two-stage method for solving ODEs makes it possible to overcome the instability that arises in the standard approach to the solution (in the form of solving a system of linear algebraic equations). Due to the active use of Chebyshev interpolation with the collocation method on Gauss–Lobatto grids, high speed and stability are achieved even when a large number of collocation points is taken into account. The presented spectral method of integrating factors is both flexible and reliable and avoids the ambiguities that arise when the classical collocation method for the ODE solution (Levin) is applied in physical space. The new method can serve as a basis for solving ordinary differential equations of the first and second orders when creating high-efficiency software, which is demonstrated by solving several model problems.


Introduction
The integration of rapidly oscillating functions is used in many applications, from fluid dynamics to mathematical physics, electronic engineering, acoustic and electromagnetic radiation, etc. [1]. In the study of optical radiation, the problem of integrating rapidly oscillating functions arose and still continues to be relevant in scalar wave optics using the Huygens-Fresnel and Helmholtz-Kirchhoff integrals [2]. An example of numerical integration of a rapidly oscillating function in this kind of problem is given in Ref. [3]. The generalization of the theory of diffraction from the scalar theory to the vector one, described by Maxwell's equations, is very well described in the classic book by Ilyinsky, Kravtsov, and Sveshnikov [4] and in the newer revised book by Sveshnikov and Mogilevskii [5]. In these books, using the Stratton-Chu formulas, the problem of electromagnetic diffraction is reduced to solving the Fredholm equations of the first and second kind, and it is further shown that it is very convenient to represent the solution as an expansion in complete systems of rapidly oscillating functions corresponding to the geometry of a particular problem. The calculation of the expansion coefficients in such problems is reduced to the integration of rapidly oscillating functions. An example of solving the problem of describing structured laser radiation, described by Maxwell's equations, is given in Refs. [6,7].
Numerical calculations of integrals of rapidly oscillating functions are often a part of large-scale computational projects for solving problems of describing electromagnetic radiation. In addition, there are research projects with large amounts of numerical integration of rapidly oscillating functions devoted to the study of other, non-optical phenomena, e.g., the geolocation method [8][9][10]. Another example, the problem of the interaction of particles affected by external electromagnetic fields and matter, is one of the important topics in elementary particle physics (see, e.g., [11]). The theoretical basis of the muon decay process in an electromagnetic field was developed mainly in Ref. [12]. Practical calculation of the muon decay rate using the formulae of Ref. [12] is difficult due to the summation of a significant number of integrals of rapidly oscillating functions with varying amplitude functions and constant phase frequencies. Such serial integrations I_ω[f] = ∫_{−1}^{1} f(x) e^{iωx} dx with various amplitude functions f(x) on similar grids also appear in other applied areas.
Whereas at the early stages the problem of integrating rapidly oscillating functions was solved by improving the traditional methods of numerical integration, later, as the number of such problems grew and the requirements for the achieved accuracy increased, the researchers were forced to look for alternative approaches.
Levin's approach [13] to the integration of rapidly oscillating functions consists in the transition to the calculation of the anti-derivative function from the integrand using the collocation procedure in physical space. In this case, the elements of the degenerate [14,15] differentiation matrix of the collocation method are functions of the coordinates of the grid points; the matrix elements are calculated using very simple expressions.
Let us consider the solution of the problem of integrating a rapidly oscillating function with a non-linear phase. According to Levin's approach [13,16], the calculation of the integral of a rapidly oscillating function with a non-linear phase,

I = ∫_{−1}^{1} f(x) e^{iωg(x)} dx, (1)

reduces to solving an ordinary differential equation, i.e., to the determination of the anti-derivative F(x):

F′(x) + iωg′(x)F(x) = f(x). (2)

Levin's paper states that Equation (2) has a particular solution that is not rapidly oscillating. We will search for this particular solution by the collocation method in the class of polynomial expansions in Chebyshev polynomials. This approach is justified by the fact that the complete system of Chebyshev polynomials is optimal in the uniform metric and almost optimal in the quadratic metric [15,17].
If the unknown function F(x) is a solution to Equation (2), then the result of integration can be obtained according to the formula

I = F(1)e^{iωg(1)} − F(−1)e^{iωg(−1)}. (3)

The method proposed by Levin in both the one-dimensional and multidimensional cases was published by him in [13,16] and then thoroughly studied in [18]. The method is presented in great detail in the famous monograph [1], which describes the evolution of numerical methods for integrating rapidly oscillating functions at the beginning of the century. In [11,19], various options for implementing this method in the case of various nonlinear frequency functions of a fairly general form were studied, and many applied problems were solved. The method of integrating functions with a linear phase was studied in [20]. The use of the simple structure of the system of linear algebraic equations (SLAE), to which the integration problem is reduced in this case, allowed the authors of [21] to construct a very fast and economical algorithm for calculating integrals.
In the case of a non-linear phase, a particular difficulty for the existing algorithms is the degenerate situation: the presence of singular, so-called stationary phase points. A stationary phase point of an oscillatory integral is a point where the first derivative of the phase function vanishes. In this case, the algorithms considered above do not work reliably enough, since they make it necessary to solve ill-conditioned SLAEs. In this regard, we note the recently published work devoted to an in-depth study of Levin's method [19].
In Refs. [18,[21][22][23][24], the authors consider various approaches in order to propose fast and stable methods for integrating rapidly oscillating functions, often reducing these methods to solving SLAEs. As noted in these papers, the main problem requiring further research is that the corresponding systems of linear algebraic equations are ill-conditioned. One of the signs that a system is ill-conditioned is the presence of stationary points of the phase function [18,25]. This is one of the fundamental problems for methods of integrating rapidly oscillating functions. The method we propose here for solving ODEs based on integrating factors makes it possible to avoid the drawbacks of existing approaches to integrating rapidly oscillating functions based on Levin's method in the presence of stationary points. The algorithm stably determines the solution both in the absence of stationary points and in their presence, when the derivative of the phase function vanishes in the interval of integration.
Usually, the collocation method for solving an ODE is formulated in physical space. A convenient and promising tool for finding a solution based on the Chebyshev approximation is the use of differentiation and integration matrices [26], whose elements are functions of the coordinates of the grid points calculated using rather simple formulas. However, in specific physical-space implementations of the collocation method, degenerate Chebyshev differentiation matrices are used, which, moreover, have eigenvalues that differ by orders of magnitude. In this case, it may be impossible to construct a stable numerical algorithm for solving the resulting SLAEs. If instead we search for the expansion coefficients of the solution of the differential equation in spectral space, then the problem reduces to sparse and well-conditioned SLAEs. The properties of the algorithm are improved both by using the sparse structure of the differentiation and integration matrices and by taking into account the discrete orthogonality of the Chebyshev matrices when choosing the Chebyshev-Gauss-Lobatto computational grid.
Differentiation matrices are given explicitly or implicitly in many publications concerning the use of pseudospectral collocation methods [6][7][8]. Solving ODEs with the use of degenerate differentiation matrices in physical and/or spectral spaces naturally leads to poor conditioning of the systems of linear algebraic equations to be solved. Refs. [5,6,17,18,20] formulate the specific features of differentiation and integration matrices considered on similar or mutually dependent grids. The explicit use of differentiation matrices on Chebyshev-Gauss-Lobatto grids when solving ODEs allows us to propose stable and economical methods for solving ODEs. We use integration matrices on Chebyshev-Gauss-Lobatto grids in the spectral representation. For more details about the form and properties of these matrices, see Refs. [6,17,18,20].
In the present paper, we propose an original numerical method for solving an ordinary differential equation based on the multistage algorithm [26] of finding a single-parameter family of solutions of the simplest first-order differential equation y′(x) = f(x) without initial or boundary conditions. This algorithm was developed for solving linear ODEs [27] based on constructing and using the integrating factor method, originally proposed by the Bernoulli brothers [28].
The novelty of the proposed approach [26,27] to solving Cauchy problems for ODEs of the first and second order is to first select a class (set) of functions that satisfy the differential equation using a stable and computationally simple method for calculating the expansion coefficients of the derivative of the future solution. Then, we calculate the coefficients (except for the zero one) of the expansion of the future solution from the calculated expansion coefficients of the derivative using the Chebyshev integration matrix [29]. Only after that, from the set of solutions obtained in this way, do we select those corresponding to the given initial conditions.
In this paper, the calculation of the integral of rapidly oscillating functions in accordance with Levin's approach is reduced to solving a linear ordinary differential equation with a complex coefficient. The approximate solution of ODE (2) is constructed based on the method of integrating factors [28] in the form of an expansion into a truncated series in Chebyshev polynomials of the first kind, y_n(x) = Σ_{k=0}^{n} a_k T_k(x). First, the main "building block" of the method is presented, namely the algorithm for obtaining an approximate general solution to the simplest Cauchy problem, which in turn consists of several sub-tasks. The solution of the first sub-task is the calculation of the interpolation coefficients of the derivative of the desired solution by multiplying the transposed Chebyshev matrix by the vector of derivative values (the function on the right-hand side of the equation). The search for the expansion coefficients of the derivative is carried out on the Gauss-Lobatto grid, which makes it possible to use the discrete orthogonality of Chebyshev polynomials of the first kind. This allows the expansion coefficients to be calculated by simple division of the product components by n or n/2. Introducing the differentiation matrices and the "inverse" integration matrix in spectral space reveals the relationship, in matrix form, between the expansion coefficients of functions and the expansion coefficients of their derivatives with respect to the same basis. In fact, the calculation of the leading coefficients of the solution of the simplest ODE is reduced to multiplying the tridiagonal integration matrix by the calculated vector of derivative interpolation coefficients. To solve the Cauchy problem, it is sufficient to use the initial or boundary condition for determining the zero coefficient of the solution.
In the following Sections, we consider a technique for solving ODE (2) and show that the problem of finding an integrating factor and then solving an ODE can be reduced to the already solved problem of restoring the expansion coefficients of a function from the values of its derivative. In fact, to solve a first-order ODE, it suffices to apply the algorithm for restoring a function from its derivative twice.

Methods
One of the most efficient methods of approximating a function f(x) = Σ_{k≥0} a_k φ_k(x), f : [−1, 1] → R, is to find the interpolating polynomial P_n of degree n satisfying the condition P_n(x_j) = f(x_j) for the set of (n + 1) collocation points {x_j}, j = 0, ..., n. In practice, the Chebyshev points are usually chosen as collocation points, and the obtained interpolation polynomial, known as the Chebyshev interpolant, is an almost optimal approximation to f(x) in the space of polynomials of degree n [30]. A usual method to calculate the interpolation polynomial P_n is to use the Lagrange polynomial basis, which can be stably implemented using the barycentric interpolation formula [5,20]. We will use the basis of Chebyshev polynomials of the first kind to calculate the interpolation polynomial. The interpolation itself is implemented on Chebyshev-Gauss-Lobatto grids, allowing for the active use of the discrete orthogonality of Chebyshev matrices and providing the high interpolative properties of this approach [30] with minimum computational costs.
The main idea of spectral methods is to represent the solution as a truncated series expansion in terms of known basis functions. The linear transformation (derivative operator) that maps the vector of coefficients a = {a_k}_{k≥0} of the expansion of a function into the vector of coefficients of the expansion of its derivative in the same basis is known as the spectral differentiation matrix.
As the polynomial interpolant of an m times differentiable function f, the series truncated to n terms is usually considered. This approximation has the order O(1/n^m); for infinitely differentiable functions, the error tends to zero super-algebraically [15,16].

Reconstructing a Function from Its Derivative: The Cauchy Problem
Following [26], as the basic algorithm for calculating an integral of a rapidly oscillating function by Levin's method, we consider the problem of reconstructing the expansion coefficients of a function y(x) from its derivative,

y′(x) = f(x), (4)

possibly with an additional condition

y(x_c) = y_c. (5)

If no additional restrictions are imposed on the solution, it is determined up to the zero coefficient a_0. If an initial/boundary condition (or other additional condition at some internal point x_c) is specified, then the problem of finding the function y(x) that satisfies conditions (4)-(5) is called a single-point problem.
The choice of the basis of orthogonal polynomials is determined by the following properties:

1. Convergence and the rate of convergence. The approximations y_n(x) rapidly converge to the solution y(x) at n → ∞.

2. Differentiability. Simple methods are necessary to determine the expansion coefficients for derivatives {b_k}_{k=0}^{n} from the given expansion coefficients for the function {a_k}_{k=0}^{n}.

3. Integrability. It is desirable to have simple methods to determine the expansion coefficients {a_k}_{k=0}^{n} for the approximated function from the given expansion coefficients {b_k}_{k=0}^{n} for its derivative.

4. Transformation. Recalculating the parameters defining the function from spectral space to physical space and vice versa. The calculation of expansion coefficients {a_k}_{k=0}^{n} from the known values of the function {y(x_i)}_{i=0}^{n} and the reconstruction of the interpolant values y_n(x) at the required points of the interval from the expansion coefficients should be simple.

The system of Chebyshev polynomials of the first kind satisfies these requirements.
The next problem is the method of calculating the expansion coefficients. It can often be simplified by the choice of collocation nodes. Let us consider in more detail the various stages of the approximate solution of the simplest differential Equation (4) with additional condition (5):

• Interpolation: calculation of the coefficients of the derivative expansion in Chebyshev polynomials;
• Integration: calculation of the expansion coefficients of the solution from the expansion coefficients of the derivative;
• Extension of the definition of the solution expansion coefficients (using an additional condition) to identify the missing zero coefficient of the solution expansion.
To solve these problems consistently, we will consider as an approximate solution an interpolation polynomial obtained by expanding the solution into a series in Chebyshev polynomials of the first kind on a certain grid. The approximation of continuous functions is obtained by discarding the terms of the Chebyshev series whose values are small. Unlike approximations obtained using other power series, the approximation in Chebyshev polynomials has the property of near-optimality, minimizing the number of terms needed to approximate the function by polynomials with a given accuracy [26,30,31]. The approximation based on the Chebyshev series turns out to be rather close to the best uniform approximation (among polynomials of the same degree), but it is easier to find. In addition, it makes it possible to avoid the Runge phenomenon upon a reasonable choice of interpolating points.
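As a quick numerical illustration of this near-optimality (our own sketch using NumPy's polynomial classes, not part of the paper's algorithm), one can compare equispaced and Chebyshev-Gauss-Lobatto interpolation of Runge's function 1/(1 + 25x²):

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Runge's function: smooth, but equispaced interpolation diverges near +-1.
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
n = 20
xx = np.linspace(-1.0, 1.0, 1001)           # fine evaluation grid

xe = np.linspace(-1.0, 1.0, n + 1)          # equispaced nodes
pe = Polynomial.fit(xe, f(xe), n)           # degree-n interpolant

xc = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Gauss-Lobatto nodes
pc = Chebyshev.fit(xc, f(xc), n)

err_equi = np.max(np.abs(pe(xx) - f(xx)))   # large: the Runge phenomenon
err_cheb = np.max(np.abs(pc(xx) - f(xx)))   # small, and shrinking with n
```

On this example the equispaced error is of order tens, while the Chebyshev-grid error is already below a few percent at n = 20 and decays geometrically as n grows.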
Without loss of generality, below we will assume that the domain of the solution is the interval [−1, 1]. Consider the problem of finding the derivative of the desired function, or rather, the polynomial p(x) approximating y′(x) and satisfying the collocation condition at a given finite number of points in the interval [−1, 1]. The pseudospectral (collocation) method for solving the problem consists in representing the desired approximating polynomial as an expansion into a finite series in Chebyshev polynomials of the first kind, {T_k(x)}_{k=0}^{∞}, defined in the Hilbert space of functions on the segment [−1, 1].

Algorithm Based on the Use of Integration Matrices
Let us represent the desired function y(x), the future approximate solution to Equation (4), as an expansion in a finite set of Chebyshev polynomials T_0, T_1, ..., T_n:

y_n(x) = Σ_{k=0}^{n} a_k T_k(x). (6)

By differentiation of Equation (6), the first derivative of this function can be presented as a series:

y′_n(x) = Σ_{k=0}^{n} a_k T′_k(x). (7)

At the same time, the derivative y′(x), as a polynomial of degree n − 1, can be expanded in a series in the initial basis T_0, T_1, ..., T_n with coefficients b = {b_0, b_1, ..., b_n}:

y′_n(x) = Σ_{k=0}^{n} b_k T_k(x). (8)

Differentiation and Integration Matrices
From the properties of Chebyshev polynomials [6,23], it follows that the derivative of the polynomial T_k(x) of the k-th degree can be written as an expansion in Chebyshev polynomials T_0, T_1, ..., T_{k−1} of lower degree:

T′_k(x) = 2k Σ_{j=0}^{⌊(k−1)/2⌋} T_{k−1−2j}(x) − k [k odd].

Here, the notation ⌊x⌋ denotes the greatest integer not exceeding x, and the expression [k odd] equals 1 if k is odd and 0 if k is even.
The integral of a Chebyshev polynomial can be written in a compact form [32]:

∫ T_k(x) dx = T_{k+1}(x)/(2(k+1)) − T_{k−1}(x)/(2(k−1)) + K, k ≥ 2,

with ∫ T_1(x) dx = T_2(x)/4 + K, where K is an arbitrary constant, often chosen such that ∫ T_0(x) dx = T_1(x).
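These anti-derivative relations are easy to verify numerically. For instance, the recurrence gives ∫ T_3 dx = T_4/8 − T_2/4 + K, so differentiating that candidate must return exactly T_3. A small check with NumPy's Chebyshev module (an illustrative sketch, not the authors' code):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Coefficient vector of T_4/8 - T_2/4 in the basis T_0..T_4:
antiderivative = [0.0, 0.0, -0.25, 0.0, 0.125]

# Differentiating must give back the coefficients of T_3, i.e. [0, 0, 0, 1]:
print(C.chebder(antiderivative))
```

The same kind of spot check works for any k ≥ 2 by placing 1/(2(k+1)) and −1/(2(k−1)) at positions k+1 and k−1.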
Using these recurrence relations for Chebyshev polynomials of the first kind and their derivatives and anti-derivatives [29,33], and equating the coefficients of like polynomials in Equations (7) and (8), we arrive [34] at the dependence of the expansion coefficients of the desired solution a_k on the expansion coefficients of the derivative b_k and vice versa.
Based on the above formulae, it is possible to establish a relationship between the expansion coefficients of a Chebyshev polynomial of the first kind a = {a 0 , a 1 , . . ., a n } and the expansion coefficients of its derivative b = {b 0 , b 1 , . . ., b n } in the same basis.
In matrix form, this dependence can be represented using the matrix of differentiation D and the matrix of integration D⁺ as b = Da and a = D⁺b. The infinite matrix of differentiation D for Chebyshev polynomials is upper triangular [22,30,35]; its action on the coefficient vector is

b_k = (2/c_k) Σ_{j ≥ k+1, j+k odd} j a_j, c_0 = 2, c_k = 1 (k ≥ 1).

The infinite three-diagonal matrix of integration (anti-derivative) D⁺ [26,35], used here in truncated form, determines (partially) the coefficients a = {a_0, a_1, ..., a_n} of the expansion of function y(x) from the known coefficients b = {b_0, b_1, ..., b_n} of the expansion of its derivative. Let us make use of the relation a = D⁺b in the finite-dimensional formulation. Then, the dependence of the coefficients {a_1, a_2, ..., a_n} on {b_0, b_1, ..., b_n} can be realized algorithmically by means of the following explicit formulae:

a_1 = b_0 − b_2/2, (9)
a_k = (b_{k−1} − b_{k+1})/(2k), k = 2, ..., n, (10)

where b_{n+1} = 0 is assumed. Thus, knowing the expansion coefficients b_k, k = 0, ..., n, of the function f(x) of problem (4) in Chebyshev polynomials of the first kind, we can reconstruct the expansion coefficients (except the zero one) of the desired function in the same basis using Equations (9) and (10) [29,34].
Obviously, the availability of the tridiagonal Chebyshev integration matrix D⁺ offers a simple and efficient method of calculating the expansion coefficients of the solution from the expansion coefficients of its derivative.
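A minimal sketch of this integration step in Python (the function name integrate_coeffs is ours; it implements the tridiagonal recurrence of Equations (9) and (10), leaving the free coefficient a_0 = 0):

```python
import numpy as np

def integrate_coeffs(b):
    """Chebyshev coefficients a_1..a_n of an anti-derivative from the
    coefficients b_0..b_n of the derivative (a_0 is left equal to zero):
    a_1 = b_0 - b_2/2,  a_k = (b_{k-1} - b_{k+1}) / (2k),  k >= 2."""
    n = len(b) - 1
    a = np.zeros(n + 1)
    bb = np.concatenate([b, [0.0, 0.0]])   # pad so b_{k+1} exists at k = n
    a[1] = bb[0] - bb[2] / 2.0
    for k in range(2, n + 1):
        a[k] = (bb[k - 1] - bb[k + 1]) / (2.0 * k)
    return a

# Sanity check: the derivative of T_2 is 4*T_1, i.e. b = [0, 4, 0];
# the recurrence then recovers T_2 up to the free constant a_0:
print(integrate_coeffs(np.array([0.0, 4.0, 0.0])))   # -> [0. 0. 1.]
```

The cost is O(n), in contrast to the O(n³) of solving a general collocation SLAE.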
Let us return to the problem of interpolating f(x), i.e., the calculation of the spectral coefficients of the derivative by the collocation method: choosing such coefficients {b_0, b_1, ..., b_n} of the interpolation polynomial expansion that the equalities

Σ_{k=0}^{n} b_k T_k(x_j) = f(x_j), j = 0, ..., n, (11)

hold at the collocation points {x_0, x_1, ..., x_n}.
The latter means that the coefficients b_k, k = 0, ..., n, must be a solution to the system of linear algebraic Equations (11). In matrix form, this system looks as

Tb = f. (12)

Here, we use the notation T_{jk} = T_k(x_j), k, j = 0, ..., n, for the values of the polynomial of degree k calculated at point x_j; T is the Chebyshev matrix mapping a point (vector) from the space of coefficients into the space of function values.
For non-degeneracy of the system of Equations (11), it is sufficient that all points of the grid be different. Matrix T of the system in this case is a matrix of general form, and solving the SLAE would require the Gauss method or the LU decomposition method, which is costly in terms of the number of operations. The calculation of vector b (i.e., the search for the normal solution) can be simplified by using the arbitrariness in the position of the collocation points. On the Gauss-Lobatto grid, the Chebyshev matrices possess the property of discrete orthogonality.

Let us choose the set of collocation points as x_j = cos(πj/n), j = 0, ..., n, and multiply the first and the last equations from (12) by 1/√2. We obtain an equivalent "modified" system with a new matrix T̃ (instead of T) and vector f̃ instead of f. The new system possesses the property of discrete "orthogonality": its multiplication from the left by the transposed matrix T̃ᵀ yields a diagonal matrix:

T̃ᵀT̃ = diag(n, n/2, ..., n/2, n).

Denoting by (f̃_0, f̃_1, ..., f̃_{n−1}, f̃_n)ᵀ the product of matrix T̃ᵀ and the vector on the right-hand side of the equation, it is easy to write the desired expansion coefficients for the derivative of the solution in the explicit form

b_0 = f̃_0/n, b_k = 2f̃_k/n, k = 1, ..., n − 1, b_n = f̃_n/n. (13)

Therefore, Equations (10) and (13) unambiguously determine the last n coefficients {a_1, a_2, ..., a_n} of the expansion of the desired approximate solution y(x) in Chebyshev polynomials. Namely, the senior coefficients of the Cauchy problem solution are easily expressed through the interpolation coefficients of the derivative expansion by substituting (13) into (9) and (10) (Equations (14)). Hence, Equations (14) unambiguously define the last n coefficients of the expansion of the desired function y(x), and to determine one more coefficient, a_0, it is necessary to impose at least one more additional condition.
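The interpolation stage just described can be sketched in a few lines of Python (the helper name is ours; it computes the coefficients b by multiplying the transposed modified Chebyshev matrix by the vector of function values and dividing by n or n/2, as permitted by discrete orthogonality):

```python
import numpy as np

def cheb_lobatto_coeffs(f, n):
    """Interpolation coefficients b_0..b_n of f in Chebyshev polynomials
    of the first kind on the Gauss-Lobatto grid x_j = cos(pi*j/n)."""
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)
    w = np.ones(n + 1); w[0] = w[-1] = 0.5       # halved endpoint terms
    T = np.cos(np.pi * np.outer(j, j) / n)       # T[k, j] = T_k(x_j)
    b = (2.0 / n) * (T @ (w * f(x)))             # division by n/2 ...
    b[0] *= 0.5; b[-1] *= 0.5                    # ... or by n at k = 0, n
    return b, x
```

A quick check: interpolating T_2(x) = 2x² − 1 returns the unit coefficient vector e_2 exactly, and for a smooth function such as eˣ the interpolant matches the function to near machine precision.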

Integrating Factor
We now return to the solution of the ordinary differential Equation (2), which, using Levin's method, will give us the desired value of the integral according to formula (3).
The equation for the anti-derivative is an ordinary differential equation with complex coefficient ip(x) = iωg′(x) and real right-hand side f(x),

y′(x) + ip(x)y(x) = f(x), (15)

where the notation p(x) = ωg′(x) is introduced. We will seek the solution of the equation using the numerical method of integrating factors proposed in Ref. [27]. Let us try to find an integrating complex factor μ(x) such that, by multiplying Equation (15) by it, we reduce the solution of the problem of finding the anti-derivative to a simpler equation of the form (4) for finding the product μ(x)y(x):

(μ(x)y(x))′ = μ(x)f(x). (16)

When functions p(x) and f(x) are continuous, there is an integrating factor μ(x), nonzero in the entire domain of definition and satisfying the equality iμ(x)p(x) = μ′(x). Dividing both sides of this equality by μ(x), we obtain a differential equation for calculating the integrating factor, the left-hand side of which is the logarithmic derivative of the integrating factor:

μ′(x)/μ(x) = ip(x). (17)

Thus, the search for the derivative of the logarithm of μ(x) is reduced to the problem solved above, namely, the reconstruction of a polynomial approximation of a function from its derivative. Integrating equality (17), we obtain

ln μ(x) + k̃ = i ∫ p(x) dx. (18)

Here k̃ is an unknown integration constant. To simplify, we move it to the right-hand side and use a constant k with the plus sign, multiplied by the imaginary unit. This will not affect the ultimate result but will simplify the expression for the integrating factor. Considering this change, we denote by u(x) = ∫ p(x) dx + k the integral of the frequency function and write in explicit form the expression for calculating the integrating factor:

μ(x) = exp(iu(x)).

Let us denote by c = {c_0, c_1, ..., c_n}ᵀ the vector of expansion coefficients of polynomial p(x) on the grid x_j = cos(πj/n), j = 0, ..., n, calculated in correspondence with the first part of the algorithm for reconstructing the expansion coefficients of the function using expressions of the form (7). To simplify further calculations, we denote the vector of the values of Chebyshev polynomials of the first kind by T(x) = {T_0(x), T_1(x), ..., T_n(x)}. Then, taking into account the formula for calculating the integral at an arbitrary point from the known coefficients c, we obtain

∫ p(x) dx ≈ T(x)D⁺c,

and the formula for calculating the logarithm of the integrating factor takes the form

ln μ(x) = i(T(x)D⁺c + k).

The integration constant k in this formula remains indefinite.
Using the properties of an exponential function of a complex-valued argument, we can present the integrating factor as the function

μ(x) = exp(ik) · exp(i T(x)D⁺c),

where the complex factor exp(ik), expressed through the integration constant k, is present as an indefinite coefficient.

Linear Differential Equations of the First Order
We substitute the obtained expression for the integrating factor into the relation

(μ(x)y(x))′ = μ(x)f(x), (22)

where μ(x) = exp(ik) exp(iu(x)). From this expression, it is seen that the equation does not change upon division of the factor μ(x) by the constant coefficient exp(ik). Therefore, we will consider the integrating factor in the form of a simplified complex-valued function:

μ(x) = exp(iu(x)), where u(x) = T(x)D⁺c.

The left-hand side of Equation (22) contains the derivative of the product of the integrating factor and the desired solution. Therefore, in accordance with the algorithm of reconstructing a function from its derivative by Equations (9)-(14), we obtain the possibility of stable and reliable calculation of the desired solution y(x) multiplied by the integrating factor μ(x):

(exp(iu(x)) y(x))′ = exp(iu(x)) f(x). (23)

Thus, the problem of finding a solution to ODE (22) is reduced to the same basic problem of reconstructing a function from its known derivative μ(x)f(x); in the present case, the product μ(x)y(x), in which μ(x) ≠ 0 in the entire interval where the differential equation is defined. The problem can be solved using the method described in the first Section, i.e., by determining the spectral coefficients of the function expansion in Chebyshev polynomials provided that the expansion coefficients for the derivative of this function are known. The difference is only that the real and imaginary parts of the product μ(x)y(x) are to be reconstructed separately.
The problem splits into two subproblems of calculating the real and imaginary components of the solution of the differential Equation (23). We calculate the expansion coefficients for the real and imaginary components of the product μ(x)f(x) in accordance with the basic algorithm of reconstructing the coefficients of a function from the interpolation coefficients of its derivative. Using Equations (9)-(14) and the integration matrix, we calculate the expansion coefficients for the product μ(x)y(x) in the Chebyshev basis. Let us denote them by (μy)_k, k = 0, ..., N, so that the formula for calculating the product of the solution and the integrating factor in the entire interval of integration takes the form

μ(x)y(x) = Σ_{k=0}^{N} (μy)_k T_k(x). (25)

Here, one of the factors, μ(x) = exp(iu(x)), is an exponential function. We multiply Equation (25) by the nonzero function exp(−iu(x)) and arrive at an explicit expression for the desired solution:

y(x) = exp(−iu(x)) Σ_{k=0}^{N} (μy)_k T_k(x). (26)

Writing explicitly (to simplify the program implementation) the real and imaginary parts in Equation (26), we obtain

Re y(x) = cos u(x) · A(x) + sin u(x) · B(x), Im y(x) = cos u(x) · B(x) − sin u(x) · A(x), (27)

where A(x) and B(x) denote the real and imaginary parts of the sum in Equation (26). From the last formula, it follows that the presence of stationary points of the function g(x) does not affect the properties and behavior of the algorithm (there is no low-frequency breakdown in the realization of the above algorithm).
To find the desired integral using Equation (3), it remains for us to calculate the values of function (27) at the two boundary points of the interval, x = −1 and x = 1. Substituting the calculated expressions into Equation (3), we obtain the value of the integral of the rapidly oscillating function:

I = y(1)e^{iωg(1)} − y(−1)e^{iωg(−1)}. (28)
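Putting the stages together, a compact sketch of the whole integrating-factor procedure for I = ∫_{−1}^{1} f(x)e^{iωg(x)} dx might look as follows (an illustrative reimplementation under our own naming, not the authors' code; it assumes the user supplies both g and its derivative g′, and uses the fact that the additive constant of each spectral anti-derivative cancels in the final difference):

```python
import numpy as np

def levin_integral(f, g, gp, omega, n=64):
    """Sketch of the integrating-factor Levin method for
    I = int_{-1}^{1} f(x) exp(i*omega*g(x)) dx."""
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)                  # Chebyshev-Gauss-Lobatto grid
    T = np.cos(np.pi * np.outer(j, j) / n)     # T[k, j] = T_k(x_j)

    def coeffs(v):
        # interpolation coefficients via discrete orthogonality
        w = np.ones(n + 1); w[0] = w[-1] = 0.5
        c = (2.0 / n) * (T @ (w * v))
        c[0] *= 0.5; c[-1] *= 0.5
        return c

    def anti(b):
        # tridiagonal integration: coefficients of an anti-derivative, a_0 = 0
        a = np.zeros(n + 1, dtype=complex)
        bb = np.concatenate([b, [0.0, 0.0]])
        a[1] = bb[0] - bb[2] / 2.0
        for m in range(2, n + 1):
            a[m] = (bb[m - 1] - bb[m + 1]) / (2.0 * m)
        return a

    # stage 1: u(x) ~ anti-derivative of p(x) = omega * g'(x), on the grid
    u = T.T @ anti(coeffs(omega * gp(x)))
    mu = np.exp(1j * u)                        # integrating factor exp(i u)
    # stage 2: mu*y ~ anti-derivative of mu*f, on the grid
    uy = T.T @ anti(coeffs(mu * f(x)))
    # I = [y exp(i w g)]_{-1}^{1}; exp(i(w g - u)) is constant on [-1, 1]
    # (up to interpolation accuracy), so evaluate it once at x_0 = 1
    return (uy[0] - uy[-1]) * np.exp(1j * (omega * g(x[0]) - u[0]))
```

For a linear phase g(x) = x with f ≡ 1 this reproduces 2 sin(ω)/ω; with g(x) = x², which has a stationary point at x = 0, the same code still converges once n is large enough to resolve the factor exp(iωx²), with no special treatment of the stationary point.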

Brief Formulation of the Method Using Integrating Factors
Below is a description of the algorithm for solving the problem of calculating the integral of rapidly oscillating functions. The derivation of the differential equation of Levin's method of integrating rapidly oscillating functions is the transition from integrating the integrand to the numerical determination of the anti-derivative. We use the method of integrating factors to solve the resulting ordinary differential equation.
1. The basic algorithm: restoration of a function from its derivative.
• Interpolation of the derivative by a series of Chebyshev polynomials: a simple, efficient, and economical algorithm that uses the discrete orthogonality of the Chebyshev polynomials. The computational complexity is that of multiplying the transposed Chebyshev matrix by the vector of derivative values on the grid.
• The general solution of the differential equation is obtained by multiplying the integration matrix (the inverse of the differentiation matrix) by the vector of interpolation coefficients.
2. Calculation of the integrating factor:
• The basic algorithm restores the expansion coefficients of the logarithm of the integrating factor from its derivative.
• The values of the integrating factor are calculated as the exponential of the values of the logarithm.
3. The final stage: calculation of the solution of the differential equation of Levin's method:
• Calculation of the product of the integrating factor by the vector of values of the free term of the equation.
• Calculation of the general solution (by the basic algorithm) for the product of the desired solution and the integrating factor.
• Calculation of the desired general solution of Levin's equation by division by the integrating factor.
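The steps above can be condensed into a short sketch. This is an illustrative implementation for the interval [−1, 1], not the paper's code: it recovers the Chebyshev interpolation coefficients with NumPy's least-squares fit instead of an explicit multiplication by the transposed Chebyshev matrix, and `chebint` realizes the tridiagonal integration recurrence. Since division by the integrating factor and re-multiplication by exp(iωg(x)) cancel at the endpoints, the sketch evaluates the antiderivative of µ(x)f(x) directly.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def oscillatory_integral(f, g, omega, n):
    """Approximate the integral of f(x)*exp(1j*omega*g(x)) over [-1, 1]
    by spectral antidifferentiation on a Chebyshev-Gauss-Lobatto grid."""
    x = np.cos(np.pi * np.arange(n + 1) / n)   # Gauss-Lobatto collocation points
    vals = f(x) * np.exp(1j * omega * g(x))    # mu(x)*f(x), mu the integrating factor
    a = C.chebfit(x, vals, n)                  # Chebyshev interpolation coefficients
    b = C.chebint(a)                           # antiderivative coefficients
    # the integral is the difference of the antiderivative at the endpoints
    return C.chebval(1.0, b) - C.chebval(-1.0, b)

# smoke test: the integral of exp(i*x) over [-1, 1] equals 2*sin(1)
val = oscillatory_integral(lambda x: np.ones_like(x), lambda x: x, 1.0, 24)
```

With f ≡ 1 and g(x) = x, a degree-24 fit already reproduces the exact value 2 sin(1) to machine precision.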

Numerical Examples
Let us illustrate the efficiency of the proposed method by several examples of the integration of rapidly oscillating functions of various complexities. The examples below demonstrate that the dependence of the solution accuracy on the number of approximation points is similar to the dependence shown in Refs. [36-40]. The advantage of our approach is the simplicity of the algorithm and the high rate of convergence, regardless of the presence of stationary points of the phase function or the smallness of g′(x) in some segments of the integration domain. To obtain the correct solution, it is sufficient that the derivative g′(x) of the phase function be continuous. The high accuracy of the calculation of integrals is ensured by the unique possibilities of polynomial approximation of both the amplitude and the frequency using Chebyshev polynomials. In addition, the method requires no numerical solution of systems of linear algebraic equations. The method is based on reducing the main problem to a series of elementary operations. The most complex of them are the multiplication of the Chebyshev matrix by the vectors of amplitude and phase values on the Gauss-Lobatto grid, to determine the interpolation coefficients, and the multiplication of the tridiagonal integration matrix by the vectors of interpolation coefficients. Performing these operations does not contribute to the accumulation of rounding errors in the calculations.
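The tridiagonal structure of the integration matrix mentioned above is worth making explicit: in the Chebyshev basis, the antiderivative coefficients obey the classical recurrence b_k = (a_{k−1} − a_{k+1})/(2k) for k ≥ 2, with b_1 = a_0 − a_2/2, so applying the integration matrix couples only neighbouring coefficients and costs O(n) operations. A minimal sketch, checked against NumPy's `chebint`:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_antiderivative(a):
    """Coefficients b of an antiderivative of sum_k a_k*T_k(x):
    b_1 = a_0 - a_2/2 and b_k = (a_{k-1} - a_{k+1})/(2k) for k >= 2.
    The free integration constant b_0 is set to 0."""
    n = len(a)
    ap = np.concatenate([np.asarray(a, dtype=float), [0.0, 0.0]])  # pad the tail
    b = np.zeros(n + 1)
    b[1] = ap[0] - ap[2] / 2.0
    for k in range(2, n + 1):
        b[k] = (ap[k - 1] - ap[k + 1]) / (2.0 * k)
    return b

a = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
b = cheb_antiderivative(a)
# differentiating the antiderivative must recover the original coefficients
```

Each output coefficient touches at most two input coefficients, which is why this operation does not accumulate rounding errors the way a dense solve would.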

Consider the Solution of Several Typical Examples
In the examples considered below (and their variants with a different number of stationary points of the phase function, or with stationary points of different multiplicities), the absolute errors in calculating the integrals of rapidly oscillating functions at the phase-function frequencies ω = 0.1, 1, 10, 100, 150, 200 (indicated by blue, green, chestnut, magenta, fuchsia, and red, in ascending order of ω) are plotted on a logarithmic scale as functions of the number of collocation points. The plots demonstrate the number of collocation points necessary to achieve integration accuracies of 10^−7 and 10^−14, respectively.

Example 1
Let us give an example of calculating an integral for which a good polynomial approximation of the amplitude (the slowly oscillating factor of the integrand) requires polynomials of large degrees.
This integral is noted by Olver ([36] (p. 6)) as an example of the fact that the GMRES method allows for calculating the integral with much higher accuracy than Levin's collocation method. However, in his article, the solution of the resulting system of linear algebraic equations requires O(n³) operations, as does Levin's collocation method with the Gauss elimination algorithm.
Table 1 presents the calculated values of integral (29) at different values of the parameter ω, which coincide with the exact values obtained by means of Wolfram|Alpha (accessed on 23 April 2023), a computational knowledge engine, with up to 17 significant digits. Below, we plot the deviation of the integral value calculated by the integrating-factor method from the exact value for the parameter values ω = 0.1, 1, 10, 100, 150, 200, depending on the number of collocation points. From the plots of the absolute error versus the number of collocation points, it can be seen that, for high frequencies ω ≥ 100, while n < ω, the discrepancy between the calculated solution and the exact value first stabilizes at a level of about 10^−5 to 10^−7. Then, as the number of collocation points grows to n ≥ ω, a sharp improvement in the accuracy of the integral computation occurs. At n ≥ 1.2ω, the integral is calculated practically with machine precision.
Figure 1 graphically shows the magnitude of the integration error depending on the degree of the interpolation polynomial at different frequency values.
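This threshold behaviour is easy to reproduce numerically. The snippet below is a sketch with a stand-in integrand (integral (29) itself is not reproduced here): for f ≡ 1 and the linear phase g(x) = x, the exact value is 2 sin(ω)/ω, and the error collapses to near machine precision once the number of collocation points exceeds ω with some margin.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def oscillatory_integral(f, g, omega, n):
    x = np.cos(np.pi * np.arange(n + 1) / n)       # Gauss-Lobatto nodes
    a = C.chebfit(x, f(x) * np.exp(1j * omega * g(x)), n)
    b = C.chebint(a)                                # spectral antiderivative
    return C.chebval(1.0, b) - C.chebval(-1.0, b)

omega = 50.0
exact = 2.0 * np.sin(omega) / omega                 # integral of exp(i*omega*x)
errors = {n: abs(oscillatory_integral(lambda t: np.ones_like(t),
                                      lambda t: t, omega, n) - exact)
          for n in (40, 60, 80, 100)}
# an under-resolved grid (n = 40 < omega) leaves a large error, while
# n = 100 = 2*omega reaches near machine precision
```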

Example 2
In the next example [39], we turn to the calculation of a definite integral of a function composed of a combination of elementary functions. In this case, an explicit formula for the exact value of the integral is known. The deviations from the exact values were calculated at the same values of the parameter ω = 0.1, 1, 10, 100, 150, 200 as in the previous example.
Figure 2 graphically shows the magnitude of the integration error depending on the degree of the interpolation polynomial at different frequency values.
In this example, the absolute error behaves in the same way. As in the previous case, at small frequencies it is sufficient to take a small number of collocation points to achieve high accuracy. With an increase in the frequency of the integrand, ensuring high accuracy requires increasing the number of collocation points to n ≥ 1.2ω.

Example 3
Consider the calculation of the same integral [41] (in the absence of stationary points) of a rapidly oscillating function presented in two different forms. First, let us calculate the integral of the function with a nonlinear phase in its initial form (31). Figure 3 graphically shows the magnitude of the integration error depending on the degree of the interpolation polynomial at different frequency values.
In the case of a linear phase function, the high precision of the integral is achieved with a smaller number of collocation points as compared to the integration using Equation (31).
The change of variables and the transition to integrating the product of a more complicated amplitude function and an exponential function with a linear phase greatly improve the convergence rate of the algorithm. The effect is especially pronounced at high frequencies.

Example 4
Let us illustrate, by a few examples from Ref. [39], the described integration method in the presence of one or several stationary points of different orders. Examples (33), with different values of the smoothness parameter m of the phase function, illustrate the dependence of the integration accuracy on the order m = 2, 3, 4 of the stationary points, with an amplitude of the form cos(x)/(x² + 1) that is poorly approximable by Chebyshev polynomials. Figure 4 graphically shows the magnitude of the integration error depending on the degree of the interpolation polynomial at different frequency values.
Depending on the parameter m, the order of the stationary point changes. Namely, at m = 2 the first derivative of the phase turns into zero, while at m = 3 both the first and the second derivatives turn into zero. Let us demonstrate the dependence of the accuracy of the integral calculation on the order of the stationary point.
In the case when m = 2, the dependences of the absolute error, i.e., the deviation of the calculated integral value from the exact one, on the number of collocation points are plotted in Figure 5. In the case when m = 3, the corresponding plots are presented in Figure 6, and in the case when m = 4, in Figure 7. From the calculations carried out for the different variants of the example, it follows that, as the order of the stationary point increases, reaching the ultimate accuracy of the integration algorithm requires a substantial increase in the number of collocation points.
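The robustness near a stationary point can be probed with the same spectral antidifferentiation described earlier. Here the amplitude cos(x)/(x² + 1) of example (33) is combined with the model phase g(x) = x^m, an assumption made for illustration (the exact phase of (33) is not reproduced here); the result on a moderate grid is compared with a heavily over-resolved reference, and the stationary point at x = 0 does not prevent convergence.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def oscillatory_integral(f, g, omega, n):
    x = np.cos(np.pi * np.arange(n + 1) / n)   # Gauss-Lobatto nodes
    a = C.chebfit(x, f(x) * np.exp(1j * omega * g(x)), n)
    b = C.chebint(a)
    return C.chebval(1.0, b) - C.chebval(-1.0, b)

amp = lambda x: np.cos(x) / (x**2 + 1)   # amplitude from example (33)
m, omega = 2, 10.0                        # stationary point of order m at x = 0
phase = lambda x: x**m                    # model phase (assumed for illustration)

coarse = oscillatory_integral(amp, phase, omega, 60)
fine = oscillatory_integral(amp, phase, omega, 200)
gap = abs(coarse - fine)   # self-consistency of the result despite the stationary point
```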

Example 5
Let us illustrate the integration method in the case when the derivative of the phase function turns into zero at several different points of the integration interval. The number of stationary points of the phase function cos²((π/2)mx) in (34), including the end points, is 2m + 1. The plots of the derivative of the phase function are shown in Figure 8. For the case m = 3, the plots of the absolute error of the calculated integral, compared with its exact value, versus the number of collocation points are presented in Figure 9; for m = 4, the corresponding plots are presented in Figure 10. The given examples show that the deterioration of the properties of the phase function makes it necessary to increase the number of collocation points to ensure the required accuracy of the algorithm. To overcome the arising difficulties and the loss of accuracy, partitioning of the integration interval into subintervals "without singularities" and "without stationary points inside" is often used [19,39,42]. The use of the stable operation of multiplying a tridiagonal integration matrix by a vector of interpolation coefficients allows for a significant increase in the number of collocation points without compromising the accuracy of the approximation of the anti-derivative.
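The stationary-point count is easy to verify: the derivative of the phase cos²((π/2)mx) is −(πm/2) sin(πmx), which vanishes exactly at x = k/m for k = −m, ..., m, i.e., at 2m + 1 points of [−1, 1] including the end points. A small check:

```python
import numpy as np

def stationary_points(m):
    """Zeros of d/dx [cos((pi/2)*m*x)**2] = -(pi*m/2)*sin(pi*m*x) on [-1, 1]."""
    return np.array([k / m for k in range(-m, m + 1)])

counts = {m: len(stationary_points(m)) for m in (3, 4)}
# m = 3 gives 7 stationary points, m = 4 gives 9, i.e. 2m + 1 in each case
```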

Conclusions
Levin's method and other methods of this type are very good quadrature methods for integrating rapidly oscillating functions.However, in their traditional representation they encounter a number of difficulties.
Levin's method is a classical method for calculating integrals of rapidly oscillating functions, which consists in the transition to finding the anti-derivative function as the solution of an ordinary differential equation. The difference between the values of the anti-derivative at the boundary points of the integration interval is the desired value of the integral. Solving such an ODE by the collocation method can be reduced to solving a system of linear algebraic equations. Unfortunately, the resulting SLAE becomes degenerate or ill-conditioned as ω → 0 or in the presence of stationary points of the phase function.
That is why, for a long time, it was believed that Levin's approach suffers from instability: the error in calculating the integral grows when the integrand oscillates slowly, ω → 0 (low-frequency breakdown), or when the derivative of the phase function turns into zero (stationary points). The results of Refs. [37,38] testify that, if the system of linear algebraic equations obtained by discretizing the differential equation is solved by means of a singular value decomposition, no low-frequency breakdown occurs. A similar approach, based on the Tikhonov regularization method, was taken in Ref. [41]. In Bremer's paper [39], a technique for reducing instability in the calculation of integrals of rapidly oscillating functions by dividing the integration interval into parts is presented.
The modification of Levin's method based on the transition to the solution of the first-order ODE formed by the integrating-factor method significantly expands the scope of applicability of the original Levin's method. The solution of this intermediate ODE is reduced to the successive solution of two auxiliary ODEs of the simplest type using integration matrices. There is no need to solve the ill-conditioned SLAEs that give rise to a low-frequency breakdown in the traditional Levin's method. The main computational operations in the proposed method are multiplications of banded (tridiagonal) matrices by vectors, which are well-conditioned regardless of the presence of stationary points of the phase function. The proposed approach turns out to be highly efficient in the cases of both linear and nonlinear phase functions.
Here, we have presented an algorithm that does not experience difficulties in the presence of low-frequency oscillations or stationary points of the phase function and does not suffer a deterioration in accuracy or stability during the calculation process. The proposed stable method based on integrating factors is applicable not only when the integrand oscillates slowly, but also when stationary points are present. Note also that the proposed two-stage method (based on the use of integrating factors and integration matrices) makes Levin's approach very promising over a wide frequency range. The presented results of numerical experiments demonstrate that the resulting two-stage algorithm for solving the ODEs of Levin's method can efficiently and accurately calculate definite integrals for a large class of rapidly oscillating functions, including those with stationary points and other features.
We obtain the normal solution to system (11) by multiplying system (12) from the left by the transposed matrix T^T, which leads to a matrix equation with a diagonal matrix for the desired expansion coefficients {b_0, b_1, . . ., b_n}.

Figure 1 .
Figure 1. Plots of the absolute error of the calculated value of integral (29) versus the number of collocation points.

Figure 2 .
Figure 2. Plots of the absolute error of the calculated value of integral (30) versus the number of collocation points.

Figure 3 .
Figure 3. Plots of the absolute error of the calculated value of integral (31) versus the number of collocation points. Now, let us calculate the same integral after the change of variable y = sin(x + 1/4) and transformation to the equivalent form of an integral with a linear phase over the interval from −sin(3/4) to sin(5/4).

Figure 5 .
Figure 5. Plots of the absolute error of the calculated value of integral (33) at m = 2 versus the number of collocation points.

Figure 6 .
Figure 6. Plots of the absolute error of the calculated value of integral (33) at m = 3 versus the number of collocation points.

Figure 7 .
Figure 7. Plots of the absolute error of the calculated value of integral (33) at m = 4 versus the number of collocation points.
Depending on the value of the parameter m, the phase function has a different number of stationary points. At m = 3, the first derivative of the phase turns into zero at seven points, including the end points x = ±1, and at m = 4 the first derivative turns into zero at nine points. Let us demonstrate the influence of the number of stationary points on the accuracy of calculating the integral.

Figure 8 .
Figure 8. Plots of the derivative of the phase function in the interval of integration: blue for m = 3 and black for m = 4.

Figure 9 .
Figure 9. Plots of the absolute error of the calculated value of integral (34) at m = 3 versus the number of collocation points.

Figure 10 .
Figure 10. Plots of the absolute error of the calculated value of integral (34) at m = 4 versus the number of collocation points.