Trigonometrically-Fitted Methods: A Review

Numerical methods for the solution of ordinary differential equations are traditionally based on polynomial interpolation. In 1952, Brock and Murray suggested using exponentials when the solution is known to be of exponential type. In 1961, Gautschi introduced the idea of using information about the frequency of a solution to modify linear multistep methods by allowing the coefficients to depend on the frequency, so that the methods integrate appropriate trigonometric polynomials exactly. This was done both for first order systems and for second order initial value problems. Gautschi concluded that "the error reduction is not very substantial unless" the frequency estimate is close enough. As a result, no further work was done in this direction until 1984, when Neta and Ford showed that "Nyström's and Milne-Simpson's type methods for systems of first order initial value problems are not sensitive to changes in frequency". This opened the flood gates, and since then there have been many papers on the subject.


Introduction
In this article, we discuss various methods for the numerical solution of the second order initial value problem
$$y'' = f(x, y), \qquad y(x_0) = y_0, \quad y'(x_0) = y'_0. \qquad (1)$$
If the initial value problem contains $y'(x)$ then it is usually converted to a system of first order equations
$$y_1'(x) = y_2, \quad y_1(x_0) = y_0, \qquad y_2'(x) = f(x, y_1, y_2), \quad y_2(x_0) = y'_0, \qquad (2)$$
by defining the new variables $y_1 = y$, $y_2 = y'$. In vector notation the system (2) can be written as
$$\mathbf{y}' = \mathbf{f}(x, \mathbf{y}), \qquad \mathbf{y}(x_0) = \mathbf{y}_0, \qquad (3)$$
where $\mathbf{y} = [y_1, y_2]^T$, $\mathbf{f} = [y_2, f(x, y_1, y_2)]^T$, and $\mathbf{y}_0 = [y_0, y'_0]^T$.
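The reduction (2) is easy to sketch in code. In the snippet below, the right-hand side and the classical RK4 loop are generic stand-ins for illustration only, not any of the methods reviewed here:

```python
import numpy as np

def to_first_order(f):
    """Rewrite y'' = f(x, y, y') as the first-order system (2):
    Y = [y1, y2] with y1' = y2, y2' = f(x, y1, y2)."""
    def F(x, Y):
        y1, y2 = Y
        return np.array([y2, f(x, y1, y2)])
    return F

# Illustration: y'' = -y, y(0) = 1, y'(0) = 0 (exact solution cos x),
# integrated to x = 1 with classical RK4 as a generic stand-in integrator.
F = to_first_order(lambda x, y1, y2: -y1)
Y, x, h = np.array([1.0, 0.0]), 0.0, 1e-3
for _ in range(1000):
    k1 = F(x, Y)
    k2 = F(x + h/2, Y + h/2 * k1)
    k3 = F(x + h/2, Y + h/2 * k2)
    k4 = F(x + h, Y + h * k3)
    Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h
```

Any of the first order methods discussed below can be applied to the resulting system in the same way.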
Here we are concerned with trigonometrically-fitted methods for (1) and (3).
There are several classes of methods, such as linear multistep methods (including Obrechkoff methods) and Runge-Kutta methods. Here we will introduce each class and then review its extension to the solution of problems for which the frequency is approximately known in advance.
Linear multistep methods for the solution of (3) are given by
$$\sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f_{n+j}, \qquad (4)$$
and for (1) by
$$\sum_{j=0}^{k} \alpha_j y_{n+j} = h^2 \sum_{j=0}^{k} \beta_j f_{n+j}, \qquad (5)$$
where $y_{n+j}$ is the approximate value at $x_{n+j}$ and similarly for $f_{n+j}$. Here $k$ is called the step-number, and the upper limit of the sum on the right is either $k-1$ or $k$; in the former case the method is called explicit and in the latter implicit. The coefficients $\alpha_j$ and $\beta_j$ are chosen to satisfy stability and convergence requirements, as we describe in the sequel. We now introduce the first and second characteristic polynomials,
$$\rho(\zeta) = \sum_{j=0}^{k} \alpha_j \zeta^j, \qquad (6) \qquad\qquad \sigma(\zeta) = \sum_{j=0}^{k} \beta_j \zeta^j. \qquad (7)$$
For (3), explicit methods for which $\rho(\zeta) = \zeta^k - \zeta^{k-1}$ are called Adams-Bashforth and the implicit ones are called Adams-Moulton. Explicit methods for which $\rho(\zeta) = \zeta^k - \zeta^{k-2}$ are called Nyström methods and the implicit ones are called Generalized Milne-Simpson methods. Gautschi [1] developed Adams-type methods for the first order equation as well as Nyström methods for the second order equation. Neta and Ford [2] developed only Nyström and Generalized Milne-Simpson methods for the first order equation.

Definition 1. If, for an arbitrary smooth enough test function $z(x)$, we have
$$\sum_{j=0}^{k} \alpha_j z(x + jh) - h \sum_{j=0}^{k} \beta_j z'(x + jh) = C_{p+1} h^{p+1} z^{(p+1)}(x) + O(h^{p+2}), \qquad (8)$$
then $p$ is called the order of the linear multistep method (4) and $C_{p+1}$ is its error constant. The left-hand side of (8) is called the local truncation error at $x_{n+k}$ of the method (4), when $z(x)$ is the theoretical solution of the initial value problem.
In a similar fashion, for (5) we have
$$\sum_{j=0}^{k} \alpha_j z(x + jh) - h^2 \sum_{j=0}^{k} \beta_j z''(x + jh) = C_{p+2} h^{p+2} z^{(p+2)}(x) + O(h^{p+3}). \qquad (9)$$
Throughout, we shall assume that the linear multistep method (4) satisfies the following hypotheses (see [3]):
• The characteristic polynomials ρ and σ have no common factors.
• ρ(1) = 0 and ρ'(1) = σ(1); this is a necessary and sufficient condition for the method to be consistent.
For the method (5) for second order initial value problems we assume:
• The characteristic polynomials ρ and σ have no common factors.
• ρ(1) = ρ'(1) = 0 and ρ''(1) = 2σ(1), which is a necessary and sufficient condition for the method to be consistent.
• The method is zero-stable.
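Definition 1 and the consistency hypotheses can be checked mechanically. The snippet below does so for the two-step Adams-Bashforth method, a textbook example rather than one of the fitted methods reviewed here:

```python
from math import factorial

# Two-step Adams-Bashforth:  y_{n+2} - y_{n+1} = h*(3/2 f_{n+1} - 1/2 f_n)
alpha = [0.0, -1.0, 1.0]   # rho(z) = z^2 - z
beta  = [-0.5, 1.5, 0.0]   # sigma(z) = (3z - 1)/2; beta_k = 0, so explicit

def C(q):
    """Taylor coefficient C_q of the functional in (8); C_q = 0 for q <= p."""
    a = sum(a_j * j**q / factorial(q) for j, a_j in enumerate(alpha))
    b = sum(b_j * j**(q - 1) / factorial(q - 1)
            for j, b_j in enumerate(beta) if q >= 1)
    return a - b

# Consistency conditions for (4): rho(1) = 0 and rho'(1) = sigma(1)
rho_at_1       = sum(alpha)
rho_prime_at_1 = sum(j * a for j, a in enumerate(alpha))
sigma_at_1     = sum(beta)
```

For this method $C_1 = C_2 = 0$ and $C_3 = 5/12$, so the order is $p = 2$ with error constant $5/12$, in agreement with the standard result.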

Remark 1.
If the problem (1) has periodic solutions and the period is not known, then P-stability is desirable. If the period is known approximately, then one can use the ideas of Gautschi [1], Neta and Ford [2], and others reviewed here.
Another important property when solving (1) is the phase lag, introduced by Brusa and Nigro [6]. Upon applying a linear two-step method to the test equation
$$y'' = -\lambda^2 y, \qquad H = \lambda h, \qquad (10)$$
we obtain a difference equation of the form
$$A(H)\, y_{n+2} + B(H)\, y_{n+1} + C(H)\, y_n = 0,$$
whose solution is $y_n = B_1 \lambda_1^n + B_2 \lambda_2^n$, where $B_1$ and $B_2$ are constants depending on the initial conditions. The quadratic polynomial
$$A(H)\, \zeta^2 + B(H)\, \zeta + C(H) \qquad (14)$$
is called the stability polynomial, and $\lambda_{1,2}$ are its roots, which we write in the form $\lambda_{1,2} = e^{(a(H) \pm i\, b(H))H}$. If $a(H) \equiv 0$ and $b(H) \equiv 1$, then we get the exact solution of the test Equation (10). The difference between the amplitudes of the exact solution of (10) and the numerical solution is called the dissipation error, see [7]. The expansion of $b(H) - 1$ in powers of $H$ is called the phase lag expansion; the modulus of its leading term is the phase lag of the method. See also Thomas [8] and Twizell [9]. Remark 2. Raptis and Simos have developed methods with minimal phase-lag and also P-stable methods in [10-14].
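To make the phase lag concrete, the snippet below computes $b(H) - 1$ for Numerov's method applied to the test equation; Numerov is used only as a familiar two-step example (it is not one of the fitted methods above), and the computation recovers its known leading phase-lag term $H^4/480$:

```python
import numpy as np

def numerov_theta(H):
    """Apply Numerov's method to the test equation y'' = -lambda^2 y
    (H = lambda*h).  The stability polynomial is A z^2 + B z + C with
    A = C = 1 + H^2/12 and B = -(2 - 5*H^2/6); for |B| <= 2A its roots
    are exp(+-i*theta(H)), so theta/H plays the role of b(H), a(H) = 0."""
    A = 1.0 + H**2 / 12.0
    B = -(2.0 - 5.0 * H**2 / 6.0)
    return np.arccos(-B / (2.0 * A))

H = 0.1
b_minus_1 = numerov_theta(H) / H - 1.0   # leading term of the phase lag expansion
```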
We now introduce an extension to the linear multistep methods. These are called multiderivative or Obrechkoff methods, see Obrechkoff [15] or [16].
For the first order equation these take the form
$$\sum_{j=0}^{k} \alpha_j y_{n+j} = \sum_{i=1}^{\ell} h^i \sum_{j=0}^{k} \beta_{ij}\, y^{(i)}_{n+j},$$
and for the second order equation
$$\sum_{j=0}^{k} \alpha_j y_{n+j} = \sum_{i=1}^{\ell} h^{2i} \sum_{j=0}^{k} \beta_{ij}\, y^{(2i)}_{n+j}.$$
According to Lambert and Mitchell [17], the error constant decreases more rapidly with increasing $\ell$ (the highest derivative used) than with increasing step-number $k$. Thus one can obtain one-step methods of high order. A list of Obrechkoff methods for various $k$ and $\ell$ is given in [17] for first order equations and in [18] for second order equations.
In general, a method integrates exactly the set of functions $\{u_1(x), u_2(x), \ldots, u_r(x)\}$, $r \le k$, if the associated linear functional annihilates each of them, i.e., $\mathcal{L}\, u_i(x) \equiv 0$ for $i = 1, \ldots, r$.
The first paper suggesting the use of the frequency of the solution is due to Gautschi [1]. He considered Störmer type methods for the solution of (1). The idea is to allow the coefficients to depend on the frequency ω. Let $\mathcal{L}$ be the functional defined by
$$\mathcal{L}\, z(x) = \sum_{j=0}^{k} \left[ \alpha_j z(x + jh) - h^2 \beta_j z''(x + jh) \right],$$
where $\alpha_0 = 1$. Since we are introducing trigonometric functions, we refer to order as algebraic order and define trigonometric order as follows:

Definition 3. A linear functional $\mathcal{L}$ on $C^s[a, b]$ is said to be of algebraic order $p$ if
$\mathcal{L}\, x^r \equiv 0$, $r = 0, 1, \ldots, p$, and $\mathcal{L}\, x^{p+1}$ does not vanish identically. Therefore we have $p + 1$ conditions for methods of algebraic order $p$. The method, with $v = \omega h$ and $\alpha_k = 1$, is said to be of trigonometric order $q$ relative to the frequency ω if the associated linear functional satisfies
$$\mathcal{L}\, 1 \equiv 0, \qquad \mathcal{L}\, \cos(r\omega x) \equiv \mathcal{L}\, \sin(r\omega x) \equiv 0, \quad r = 1, 2, \ldots, q,$$
and $\mathcal{L}\, \cos((q+1)\omega x)$ and $\mathcal{L}\, \sin((q+1)\omega x)$ are not both identically zero.
Therefore, methods of trigonometric order q satisfy 2q + 1 conditions. Trigonometrically fitted linear multistep methods for second order ordinary differential equations (ODEs) (1) satisfy the additional condition $\mathcal{L}\, x \equiv 0$ for the same order, see Lambert [3].
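As a small worked instance of these conditions, one can fit the explicit two-step Nyström (midpoint) rule $y_{n+2} = y_n + h\beta_1 f_{n+1}$ to trigonometric order 1. The setup below is our own illustration; the coefficient it recovers, $\beta_1 = 2\sin v / v$, is the classical fitted midpoint coefficient:

```python
import numpy as np

def fitted_beta1(v, x_samples=(0.3, 1.1)):
    """Impose L cos(w x) = L sin(w x) = 0 (trigonometric order 1) on the
    explicit two-step Nystrom rule y_{n+2} = y_n + h*beta1*f_{n+1}, where
    L z = z(x+2h) - z(x) - h*beta1*z'(x+h).  With w = 1, h = v we set up
    the (overdetermined but consistent) linear system at a few points x."""
    w, h = 1.0, v
    A, b = [], []
    for x in x_samples:
        # z = cos(wx):  z' = -w sin(wx)
        A.append([h * w * np.sin(w * (x + h))])
        b.append(np.cos(w * x) - np.cos(w * (x + 2*h)))
        # z = sin(wx):  z' = w cos(wx)
        A.append([h * w * np.cos(w * (x + h))])
        b.append(np.sin(w * (x + 2*h)) - np.sin(w * x))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return float(sol[0])
```

Note that $\beta_1 \to 2$ as $v \to 0$, recovering the classical midpoint rule, which is the pattern seen in all the fitted methods below.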

Remark 3.
The trigonometric order is lower than the algebraic order, since trigonometric polynomials require two conditions for each degree, see Lambert [3].
Gautschi [1] allowed the coefficients to depend on $v$ and listed several explicit and implicit methods of trigonometric orders $q \le 3$. We only list the methods of trigonometric orders 1 and 2, using powers of $\cos(v)$ instead of the Taylor series expansions shown in [1]; those Taylor series expansions should be used when $h \to 0$. Neta and Ford [2] constructed Nyström and Generalized Milne-Simpson methods for the first order system (3) in which the coefficients $\beta_j$ are functions of the frequency. Here we list a few of those.
For q = 1, the explicit method is (29), and for q = 2 it is (30). For q = 1, the implicit method is a one-parameter family (31); note that the choice $\beta_0 = 0$ leads back to the explicit method (29). For q = 2, the implicit method is (32).

Vigo-Aguiar and Ramos [26] show how to choose the frequency for nonlinear ODEs. Van der Houwen and Sommeijer [27] have developed predictor-corrector methods. Neta [28] has developed exponentially fitted methods for problems whose oscillatory solution is damped. Raptis and Allison [29] have used the idea for the solution of the Schrödinger equation. Stiefel and Bettis [30] have stabilized Cowell's method [31] by fitting trigonometric polynomials. Lambert and Watson [5] introduced symmetric multistep methods which have a non-vanishing interval of periodicity. Quinlan and Tremaine [32] have developed high order symmetric multistep methods. Simos and Vigo-Aguiar [33] have developed exponentially-fitted symmetric methods of algebraic order eight based on the work of [32]. They demonstrated the superiority of their method on two orbital examples integrated over the long time interval $t \in [0, 10^5]$. Another idea, developed by Neta and Lipowski [34], is to use the energy of the system instead of integrating the angular velocity. They demonstrated the benefit of their method using several examples of perturbation-free flight and a more general case of long-time flight. Vigo-Aguiar and Ferrándiz [35] have developed a general procedure for the adaptation of multistep methods to numerically solve problems having periodic solutions. Vigo-Aguiar et al. [36] and Martín-Vaquero and Vigo-Aguiar [37] have developed methods for stiff problems by using Backward Differentiation Formulae (BDF). See also Neta [38].
Sommeijer et al. [39] have suggested a different idea for trigonometrically-fitted methods. Instead of fitting sine and cosine functions of multiples of the frequency, they suggest taking several frequencies in some interval around the known frequency. The frequencies are chosen to be the roots of a Chebyshev polynomial, so as to minimize the maximum error. Such methods are called minimax methods. See also Neta [40].
We now give more details. Suppose we have an interval $[\underline{\omega}, \overline{\omega}]$ of frequencies and we pick $q$ frequencies $\omega_1, \ldots, \omega_q$ in it. The idea is to interpolate the sine and cosine functions of those frequencies:
$$\mathcal{L}\, 1 \equiv 0, \qquad \mathcal{L}\, \cos(\omega_r x) \equiv \mathcal{L}\, \sin(\omega_r x) \equiv 0, \quad r = 1, 2, \ldots, q.$$
Thus, for the second order initial value problem, we obtain a system of equations for the coefficients, one pair of conditions for each $\omega_j$, $j = 1, \ldots, q$. Unfortunately, this yields very messy coefficients and we will not list any of them here.
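The selection of the $q$ frequencies as Chebyshev roots can be sketched as follows (an illustration of the minimax idea only; the function name and the sample interval are ours, and the resulting fitted coefficients are not shown):

```python
import numpy as np

def minimax_frequencies(w_lo, w_hi, q):
    """Choose q frequencies in [w_lo, w_hi] as the images of the roots of
    the degree-q Chebyshev polynomial T_q under the linear map from [-1, 1]
    onto [w_lo, w_hi], following the minimax idea of Sommeijer et al."""
    k = np.arange(1, q + 1)
    nodes = np.cos((2*k - 1) * np.pi / (2*q))   # roots of T_q in (-1, 1)
    return 0.5 * (w_lo + w_hi) + 0.5 * (w_hi - w_lo) * nodes

freqs = minimax_frequencies(2.5, 3.5, 4)   # q = 4 frequencies around w = 3
```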

Methods Based on Obrechkoff Methods
Simos [41] has developed a P-stable trigonometrically-fitted Obrechkoff method of algebraic order 10 for (1). To ensure P-stability, the coefficient $b_1$ must take a specific value. The method requires an approximation of the first derivative, for which he provides a formula, and he derived the corresponding local truncation error.

Remark 4.
Neta [43] has developed a P-stable method of algebraic order 18.
Vanden Berghe and Van Daele [44] have suggested fitting a combination of monomials and exponentials, i.e., the set $\{1, x, \ldots, x^K, e^{\pm\mu x}, x e^{\pm\mu x}, \ldots, x^P e^{\pm\mu x}\}$. Clearly, when μ is purely imaginary, we get the cosine and sine functions. When K = −1 we get only the exponential functions, and when P = −1 we get only monomials (the well-known Obrechkoff method). Even when K = −1, we do not get the cosine and sine functions of multiples of the frequency as in the previously discussed methods. They developed methods of algebraic order 8. Here we list only two of those, one with K = 5, P = 1 (39) and the other with K = 7, P = 0 (40).
The first method is given by (39), where $A = v^2 + \cos(v) - 1$ and $B = v^2 \cos(v) + \cos(v) - 1$. The second method is (40).

Methods Based on Runge-Kutta
For a trigonometrically-fitted method, we have the conditions given by Franco and Rández [20]. The solution for $k = 3$ is given in Franco [45]. This method has algebraic order 2 and reduces to the two-stage explicit Numerov method of Chawla [46]. The method integrates exactly the set of functions $\{1, x, x^2, \cos(\omega x), \sin(\omega x)\}$, similar to the idea of Vanden Berghe and Van Daele [44].
Franco and Rández [20] have developed a 7-stage method of algebraic order 7 which integrates exactly the monomials up to $x^6$ as well as $\sin(\omega x)$ and $\cos(\omega x)$. A 5-stage family of methods of algebraic order 6, listed here and in Tsitouras [47], has been developed by Fang et al. [48]. Here we list just one member of the family.
Demba et al. [49] have developed fourth and fifth order trigonometrically-fitted Runge-Kutta-Nyström methods for (1). The idea is based on using a 3-stage method to obtain a 4-stage trigonometrically-fitted method. The update has the form
$$y_{n+1} = y_n + h y'_n + h^2 \sum_i \bar{b}_i\, f(x_n + c_i h, Y_i),$$
where the stages $Y_i$ are given by (18). The Taylor series expansion of the coefficients is given by (50).

Comments on Order
Definition 1 of order (see (8) and (9)) can be extended to trigonometrically fitted methods. Note that a method is of order p for first (second) order ODEs if it fits monomials up to degree p + 1 (p + 2), respectively. Therefore, methods of trigonometric order q are methods of order 2q. A method such as (39) for second order ODEs is of eighth order, since it fits monomials up to degree 5 and $x^n \cos(\omega x)$, $x^n \sin(\omega x)$, n = 0, 1. In Table 1 we list all the methods used in the examples, together with their order.
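The order of a method can also be estimated empirically from two runs with step sizes $h$ and $h/2$, via $p \approx \log_2(e_h / e_{h/2})$. The sketch below does this for the classical (unfitted) explicit midpoint rule on a simple first order test problem, assuming exact starting values; the test problem and function names are ours:

```python
import numpy as np

def midpoint_solve(h, x_end=1.0):
    """Integrate y' = -y, y(0) = 1 with the explicit midpoint (Nystrom,
    k = 2) rule y_{n+2} = y_n + 2*h*f_{n+1}; the second starting value
    is taken from the exact solution exp(-x)."""
    n = int(round(x_end / h))
    y_prev, y_curr = 1.0, np.exp(-h)          # y_0, y_1
    for _ in range(n - 1):                    # compute y_2 ... y_n
        y_prev, y_curr = y_curr, y_prev + 2.0 * h * (-y_curr)
    return y_curr

e1 = abs(midpoint_solve(0.01)  - np.exp(-1.0))
e2 = abs(midpoint_solve(0.005) - np.exp(-1.0))
observed_order = np.log2(e1 / e2)             # should be close to 2
```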

Numerical Examples
The methods developed originally by Gautschi [1], and those that followed by Neta and Ford [2], fit low order monomials and the sine and cosine functions of multiples of the frequency. On the other hand, the methods developed later by Vanden Berghe and Van Daele [44] use monomials and products of monomials with the sine and cosine of the frequency. We will demonstrate via the first three examples the difference between the two strategies. Vanden Berghe and Van Daele [50] compared the two approaches in some cases, but did not use the schemes developed by Neta and Ford [2]. In the later examples we also compare the results to the Runge-Kutta-Nyström based method (46)-(49), see [49].
First, we list a method (51) of trigonometric order 2 based on the idea of Vanden Berghe and Van Daele [44], which we obtained using Maple software, see Chun and Neta [51]. The Taylor series expansions of its coefficients are given in (53). Example 1. The first example is chosen so that the exact solution is a combination of sines and cosines of multiples of the frequency, i.e.,
$$y''(x) + 9y(x) = 3\sin(6x), \quad 0 \le x \le 40\pi, \qquad (54)$$
subject to the initial conditions $y(0) = 1$, $y'(0) = 3$.
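For reference, the closed form of the solution of (54) can be derived by elementary means; the derivation below is ours (the source does not state it) and the snippet verifies it against the equation and the initial conditions:

```python
import numpy as np

def y_exact(x):
    """Candidate closed form (our derivation) for
    y'' + 9y = 3 sin(6x), y(0) = 1, y'(0) = 3:
    homogeneous part cos(3x) + (11/9) sin(3x); particular part A sin(6x)
    with (-36 + 9) A = 3, i.e. A = -1/9."""
    return np.cos(3*x) + (11.0/9.0)*np.sin(3*x) - (1.0/9.0)*np.sin(6*x)

def residual(x):
    """y'' + 9y - 3 sin(6x), using the analytic second derivative."""
    ypp = -9.0*np.cos(3*x) - 11.0*np.sin(3*x) + 4.0*np.sin(6*x)
    return ypp + 9.0*y_exact(x) - 3.0*np.sin(6*x)
```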
The results using h = π/500 are given in Table 2. We expect the methods that fit sines and cosines of multiples of the frequency to do better. Based on the results, we see that (25) is best when the frequency is known exactly; if it is not known exactly, the method prefers underestimation of the frequency. The second best is (30), which shows no preference for underestimation. Example 2. The second example is very similar, subject to the initial conditions $y(0) = 1$, $y'(0) = 3$.
The results are given in Table 3. Now we expect the method (51) due to Chun and Neta [51] to perform better, since the exact solution contains a product of a monomial and a cosine. In fact this is the case, followed by (25). The method (30) requires a smaller step size to converge, and the results are not as good as those of the other two methods. Note that for this example none of the methods show a preference for underestimation of the frequency. Example 3. What if the frequency of the forcing term is not an integer multiple of the frequency of the homogeneous solution? We now consider the following example, subject to the initial conditions $y(0) = 1$, $y'(0) = 3$.
The results are given in Table 4. Now (51) is best, followed by (30). Based on the three examples, we find that (51) is best in the last two, but not in the first, where the frequency of the forcing term is a multiple of the frequency of the homogeneous solution.
Before moving to the rest of the experiments, we decided to rerun the first example on a much longer interval, to test the quality of these methods in long-term integration. We took the same step size h = π/500 and integrated for 0 ≤ x ≤ 4000π. The results are given in Table 5. It is clear that the method due to Neta and Ford is no longer viable. The method (51) gave the same errors when ω = 3, but all other cases show lower accuracy at the end of the longer interval. Example 4. The fourth example is a system of two second order initial value problems. We converted it to a system of first order equations and solved it numerically using implicit Adams methods of trigonometric orders 2 and 3 (denoted AI2 and AI3, respectively) and Generalized Milne-Simpson (GMS) methods of the same orders, which are (32) and (66), respectively. We also included results from [51] and from the Runge-Kutta-Nyström method [49]. The results are given in Table 6. For Adams implicit, we used the Taylor series coefficients as given in [1]. For GMS with q = 2, we used the coefficients as given in [2]. They did not give the coefficients for q = 3, but suggested solving the system for the coefficients numerically; we were now able to obtain them, as given in (66). Table 6. The L² norm of the error for the fourth example using two implicit methods of trigonometric orders 2 and 3, one explicit method from [51], and one based on Runge-Kutta-Nyström. Remark 5.

1. Adams implicit using the Taylor series for the coefficients did not improve the accuracy when a higher order was used.

2. GMS of second trigonometric order gave better results than the Adams implicit methods. There is no improvement in using q = 3.

3. The method based on Runge-Kutta-Nyström could not compete with the others.
Example 5. The fifth example is the "almost periodic" problem studied by Stiefel and Bettis [30],
$$z'' + z = 0.001\, e^{it},$$
subject to the initial conditions $z(0) = 1$, $z'(0) = 0.9995\, i$. The exact solution is
$$z_{\text{exact}}(t) = \cos t + 0.0005\, t \sin t + i\left(\sin t - 0.0005\, t \cos t\right).$$
The solution represents motion on a perturbation of a circular orbit in the complex plane; the point z(t) spirals slowly outwards. The equivalent first order system was solved numerically using the six methods above. The results for h = π/60 and the exact value ω = 1 are given in Table 7. It is clear that the methods of trigonometric order 3 are better than the lower order ones. Also, GMS is better than the Adams implicit methods due to Gautschi [1]. Again, the method (51) is superior.
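As a quick sanity check (ours, not from [30]), one can verify numerically that the stated closed form satisfies the differential equation and indeed spirals outwards; the check also confirms the initial slope $z'(0) = 0.9995\, i$:

```python
import numpy as np

def z_exact(t):
    """Stated exact solution of the Stiefel-Bettis problem z'' + z = 0.001 e^{it}."""
    return (np.cos(t) + 0.0005*t*np.sin(t)
            + 1j*(np.sin(t) - 0.0005*t*np.cos(t)))

def residual(t, d=1e-4):
    """z'' + z - 0.001 e^{it}, with z'' approximated by central differences."""
    zpp = (z_exact(t + d) - 2.0*z_exact(t) + z_exact(t - d)) / d**2
    return zpp + z_exact(t) - 0.001*np.exp(1j*t)
```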
The results are given in Table 8. It is clear that the methods that converged gave similar results. The methods (32) and (66) did not converge. The error is computed at x = 2000π. Example 7. The last example is a system of two second order ODEs describing two coupled oscillators with different frequencies, see [52].
$$x''(t) + x(t) = 2\varepsilon\, x(t) y(t), \qquad y''(t) + 2y(t) = x(t)^2 + 4\varepsilon\, y(t)^3,$$
with initial conditions $x(0) = 1$, $x'(0) = 0$, $y(0) = 1$, where $\varepsilon = 10^{-3}$. The frequencies $\omega_x = 1$ and $\omega_y = \sqrt{2}$ can be found in [52]. We compared the solution using the same methods to RKF45 of Maple. The L² norm of the difference between the solution of RKF45 and the six methods is given in Table 9. Now the Adams implicit based method of trigonometric order 3 and our method (51) performed better than the others. Again, the methods due to Neta and Ford diverged.

Conclusions
We reviewed various trigonometrically-fitted methods and implemented representatives on several examples of second order initial value problems. In most examples, our method from [51] was superior to the others, except for the first example, for which Gautschi's method performed better. The methods (32) and (66) due to Neta and Ford failed to converge for the last two examples and in the long-term integration of Example 1. The method based on Runge-Kutta-Nyström due to Demba et al. [49] could not compete with our method based on Obrechkoff.