A Family of Functionally-Fitted Third Derivative Block Falkner Methods for Solving Second-Order Initial-Value Problems with Oscillating Solutions

Abstract: One of the well-known schemes for the direct numerical integration of second-order initial-value problems is due to Falkner. This paper focuses on the construction of a family of adapted, frequency-dependent block Falkner methods for the direct numerical solution of second-order initial-value problems with oscillatory solutions. The techniques of collocation and interpolation are adopted to derive the new methods. A study of the properties of the proposed adapted block Falkner methods reveals that they are consistent and zero-stable, and thus convergent. Furthermore, the stability analysis and the algebraic order conditions of the proposed methods are established. As the numerical results show, the resulting family is efficient and competitive compared to some recent methods in the literature.


Introduction
The numerical integration of initial value problems (IVPs) of second-order ordinary differential equations (ODEs) has attracted the attention of researchers in the field for decades. Such problems are important because they are used in the applied sciences to model different phenomena, such as the movement of a mass under the action of a force, problems of orbital dynamics, molecular dynamics, circuit theory, control theory, or quantum mechanics, among others. It turns out that most of these problems do not have closed-form solutions, and consequently it is important to develop numerical methods that can solve them directly. Accordingly, several numerical methods for solving second-order IVPs that do not contain the first derivative have been investigated by Lambert and Watson [1], Ananthakrishnaiah [2], Simos [3], Hairer et al. [4], Tsitouras et al. [5,6], Wang et al. [7,8], Franco [9,10], Ramos and Patricio [11], Chen et al. [12], Shi and Wu [13], Fang et al. [14], and Senu et al. [15], among others.
Recently, some researchers have considered the direct integration of general second-order IVPs containing the first derivative. This can be found in the works by Guo and Yan [16], Vigo-Aguiar and Ramos [17], Jator et al. [18,19], Mahmoud and Osman [20], Awoyemi [21], Liu and Wu [22], You et al. [23], Li et al. [24], Chen et al. [25], Li et al. [26], and You et al. [27]. Some of these methods are implemented in a step-by-step fashion, while others are implemented in predictor-corrector mode. In either case, the cost of execution increases, especially for higher-order methods. Moreover, some of these methods do not take advantage of the oscillatory or even periodic behavior of the solutions. If the period is known or can be estimated in advance, this can be exploited in the development of the method in order to improve its performance.
In the current article, we propose a class of functionally-fitted third derivative block Falkner methods (BFFM) for the direct integration of the general second-order initial value problem whose solution is oscillatory or periodic, in the latter case with the frequency known, or estimable in advance. This class, which is an adapted formulation of the methods in Ramos and Rufai [33], uses a basis function different from those in the reviewed literature. We emphasize that these methods differ from the methods by Jator [18,44]: whereas our methods are implicit Falkner methods whose coefficients are trigonometric and hyperbolic functions depending on the fitting frequency, ω, and the step size, h, the methods by Jator [18,44] present trigonometric coefficients. It is important to note here that the accuracy in estimating this frequency is crucial in adapted numerical methods, as shown in [45].
The rest of this paper is organized as follows: the derivation of BFFM is presented in Section 2. The analysis of the characteristics of the BFFM is discussed in Section 3 while some numerical experiments are presented in Section 4. Finally, we give some concluding remarks in Section 5.

Development of the BFFM
Consider the general second-order IVP of the form

y''(x) = f(x, y(x), y'(x)), y(x_0) = y_0, y'(x_0) = y'_0,   (1)

whose solution is oscillatory or periodic with the frequency approximately known in advance, where f : [x_0, x_N] × R^{2s} → R^s is a smooth function that satisfies a Lipschitz condition and s is the dimension of the system. For the development of the method, y(x) is taken as a scalar function, although, as will be seen in the numerical experiments, the method may be applied componentwise to solve differential systems. We now set out some useful definitions related to the methods in Ramos and Rufai [33] that will aid the derivation of the BFFM.
In these formulas, y_{n+j}, y'_{n+j}, f_{n+j} and g_{n+k} are numerical approximations to the exact values y(x_{n+j}), y'(x_{n+j}), f(x_{n+j}, y(x_{n+j}), y'(x_{n+j})) and g(x_{n+k}, y(x_{n+k}), y'(x_{n+k})), respectively, with x_{n+j} = x_n + jh discrete points on [x_0, x_N], h being a fixed stepsize. The coefficients β_{kj}, γ_k and β̄_{kj}, γ̄_k depend on the parameter u and are obtained after evaluating the fitting function I(x) in Theorem 1 and its derivative, respectively, at x_{n+k}.

Definition 4.
The adapted k−step third derivative block Falkner method consists of the primary formulas in (3) and the secondary formulas in (4), which form the BFFM.
The coefficients of the adapted Falkner method depend on the nature of the fitting function I(x), and accordingly on how the set Ω is chosen, which can be any of the types listed in Nguyen et al. [50]. For any of the choices we have to take a total of k + 4 elements to determine the adapted block Falkner method, on the basis that the approximations are of the form in (2). In order to develop the adapted Falkner methods in this paper we choose Ω as

Ω = {1, x, x^2, . . . , x^{k−1}, sin(ωx), cos(ωx), sinh(ωx), cosh(ωx)}.   (5)

To get the coefficients of the fitting function associated to the set Ω in (5), I(x) is interpolated at the point x = x_{n+1}, and the following collocating conditions are considered: I'(x) at x = x_{n+1}, I''(x) at the points x = x_{n+j}, j = 0, 1, . . . , k, and I'''(x) at x = x_{n+k}. This leads to the following system of k + 4 equations:

I(x_{n+1}) = y_{n+1}, I'(x_{n+1}) = y'_{n+1}, I''(x_{n+j}) = f_{n+j}, j = 0, 1, . . . , k, I'''(x_{n+k}) = g_{n+k}.   (6)

Theorem 1. Let I(x) be the fitting function associated to the set Ω in (5), and let Λ = (y_{n+1}, y'_{n+1}, f_n, f_{n+1}, . . . , f_{n+k}, g_{n+k})^T, where T denotes the transpose. Consider the square matrix Π of dimension k + 4 which is the matrix of coefficients of the system in (6), and Π_i obtained by replacing the i-th column of Π by the vector Λ. If we impose that I(x) satisfies the system of k + 4 equations in (6), then it can be written as

I(x) = Σ_{i=1}^{k+4} (det(Π_i)/det(Π)) φ_i(x),   (7)

where φ_i denotes the i-th element of Ω.

Proof. The proof can be readily obtained, similarly to the one given in Jator [18], with slight modifications in notation.
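As a minimal numerical sketch of the construction above (with assumed illustrative values ω = 1.3, h = 0.1, x_n = 0, and k = 2, so that Ω has k + 4 = 6 elements), one can assemble the matrix Π row by row from the interpolation/collocation conditions in (6) and verify, as a sanity check, that data taken from a function lying in Ω is reproduced exactly:

```python
import numpy as np

# Assumed illustrative parameters (not values from the paper)
w, h, xn = 1.3, 0.1, 0.0
xs = [xn, xn + h, xn + 2 * h]          # x_n, x_{n+1}, x_{n+2}

# Basis Omega for k = 2: {1, x, sin(wx), cos(wx), sinh(wx), cosh(wx)}.
# Each entry lists the derivatives [phi, phi', phi'', phi'''].
basis = [
    [lambda x: 1.0,          lambda x: 0.0,
     lambda x: 0.0,          lambda x: 0.0],
    [lambda x: x,            lambda x: 1.0,
     lambda x: 0.0,          lambda x: 0.0],
    [lambda x: np.sin(w*x),  lambda x:  w*np.cos(w*x),
     lambda x: -w**2*np.sin(w*x), lambda x: -w**3*np.cos(w*x)],
    [lambda x: np.cos(w*x),  lambda x: -w*np.sin(w*x),
     lambda x: -w**2*np.cos(w*x), lambda x:  w**3*np.sin(w*x)],
    [lambda x: np.sinh(w*x), lambda x:  w*np.cosh(w*x),
     lambda x:  w**2*np.sinh(w*x), lambda x:  w**3*np.cosh(w*x)],
    [lambda x: np.cosh(w*x), lambda x:  w*np.sinh(w*x),
     lambda x:  w**2*np.cosh(w*x), lambda x:  w**3*np.sinh(w*x)],
]

# k + 4 = 6 conditions: I(x_{n+1}), I'(x_{n+1}),
# I''(x_{n+j}) for j = 0, 1, 2, and I'''(x_{n+2})
conds = [(xs[1], 0), (xs[1], 1), (xs[0], 2), (xs[1], 2), (xs[2], 2), (xs[2], 3)]
Pi = np.array([[b[d](x) for b in basis] for x, d in conds])

# Sanity check: take the data from y(x) = cos(w x), a member of Omega;
# the fitting function must then reproduce it up to round-off.
Lam = np.array([np.cos(w*xs[1]), -w*np.sin(w*xs[1]),
                -w**2*np.cos(w*xs[0]), -w**2*np.cos(w*xs[1]),
                -w**2*np.cos(w*xs[2]), w**3*np.sin(w*xs[2])])
c = np.linalg.solve(Pi, Lam)
xt = 0.37                               # arbitrary test point
I_xt = sum(ci * b[0](xt) for ci, b in zip(c, basis))
err = abs(I_xt - np.cos(w*xt))
print("reproduction error:", err)
```

This also makes concrete why det(Π) ≠ 0 is required: the coefficients of I(x) are obtained from exactly this linear system.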

Remark 1.
As an illustration of the theoretical result in the above theorem, the explicit form of the matrix Π and the determinants det(Π_i) are provided in Appendix A for k = 2.

Specification of the BFFM
We emphasize that for each k, there are two primary formulas of the form in Equation (3) and 2k − 2 secondary formulas as those in Equation (4) (which are obtained by evaluating the fitting function in (7) at the corresponding points) that combined together form the proposed BFFM. Hence the BFFM has 2k formulas.
As an illustration, we specify how to obtain the BFFM for k = 2 and k = 3, respectively. For k = 2, we evaluate the fitting function in (7) and its first derivative at the points x_{n+2} and x_n to obtain the two primary formulas and the two secondary formulas. For k = 3, we evaluate the fitting function in (7) and its first derivative at x_{n+3}, and then at the points x_{n+2} and x_n, to obtain the two primary formulas and the four secondary formulas.

Remark 2.
For small values of u, the coefficients of the BFFM may be subject to heavy cancellations. In that case, the Taylor series expansion of the coefficients is preferable (see Lambert [54]). Specific coefficients of the two primary formulas and their corresponding series expansions up to O(u^16) for k = 2 are provided in Appendix B.
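The cancellation phenomenon in the remark can be illustrated generically (this is not an actual BFFM coefficient, just a function with the same structural difficulty): f(u) = (u − sin u)/u³ tends to 1/6 as u → 0, but its direct evaluation subtracts two nearly equal floating-point numbers, while a truncated Taylor series remains accurate:

```python
import math

def f_direct(u):
    # direct evaluation: catastrophic cancellation for small u
    return (u - math.sin(u)) / u**3

def f_series(u):
    # Taylor expansion: 1/6 - u^2/120 + u^4/5040 - ...
    return 1.0/6.0 - u**2/120.0 + u**4/5040.0

u = 1e-7
print("direct:", f_direct(u))    # several significant digits lost
print("series:", f_series(u))    # accurate to machine precision
```

This is exactly why the series expansions of the coefficients in Appendix B are the recommended evaluation route for small u.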

Analysis of the BFFM
We discuss the basic analysis of the proposed BFFM in this section. The analysis includes the Algebraic Order, Local Truncation Error, Consistency, Zero-Stability, Convergence and Linear Stability of the BFFM.

Algebraic Order, Local Truncation Errors and Consistency of the BFFM
The purpose of this subsection is to establish the uniform algebraic order for each of the formulas that form the BFFM and their corresponding local truncation errors with the aid of the theory of linear operators (Lambert [54]).
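The linear-operator technique can be illustrated numerically on a classical formula (the two-step Störmer formula y_{n+1} − 2y_n + y_{n−1} = h² f_n, used here only as a stand-in for the BFFM formulas): the defect L[y(x); h] behaves like its first non-zero Taylor term, h⁴ y⁗(x)/12, so halving h should scale it by about 2⁴:

```python
import math

def Lop(yfun, d2yfun, x0, h):
    # L[y(x); h] for the Stormer formula y_{n+1} - 2 y_n + y_{n-1} = h^2 y''_n
    return yfun(x0 + h) - 2*yfun(x0) + yfun(x0 - h) - h**2 * d2yfun(x0)

# With y(x) = e^x the leading error term is h^4 * y''''(x)/12 (a method of
# algebraic order p = 2, since the LTE of a second-derivative formula of
# order p is O(h^{p+2})).
e1 = Lop(math.exp, math.exp, 0.0, 1e-2)
e2 = Lop(math.exp, math.exp, 0.0, 5e-3)
order = math.log2(e1 / e2)
print("observed h-exponent of the defect:", order)   # ~ 4
```

The same expansion-of-the-defect procedure, applied to each BFFM formula, yields the order and local truncation error results stated below.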

Local Truncation Error of BFFM
Proposition 1. The local truncation error of each formula of the k−step BFFM is given by the first non-zero term in the expansion of the associated linear difference operator.

Proof. Since the block Falkner formulas in Equations (3) and (4) are made up of generalized linear multistep formulas, we associate each Falkner formula with a linear difference operator. Considering the Taylor series expansions of the right-hand sides of the above formulas in powers of h, it can be shown that the first non-zero term determines the order and the local truncation error of each formula.

Corollary 2. The local truncation errors of the BFFM for k = 3 are given by the corresponding first non-zero terms of the above expansions.

Corollary 3. The order p of the k−step BFFM is p = k + 2. Hence the orders of the BFFM for k = 2 and k = 3 are p = 4 and p = 5, respectively.

Theorem 2. When the solution of the problem in Equation (1) is a linear combination of the basis functions in the set Ω, then the local truncation errors vanish.

Definition 5.
Zero-stability is concerned with the stability of the difference system in the limit as h tends to 0. Thus, as h → 0, the difference system in Equation (13) becomes A_1 Y_{m+1} = A_0 Y_m, where A_1 and A_0 are 2k × 2k constant matrices.
Definition 6. (Fatunla [56]) A block method is zero-stable if the roots of the first characteristic polynomial have modulus less than or equal to one and those of modulus one do not have multiplicity greater than 2, i.e., the roots of ρ(R) = det[R A_1 − A_0] = 0 satisfy |R_i| ≤ 1, and for those roots with |R_i| = 1, the multiplicity does not exceed 2.
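In practice this condition is easy to check numerically: the roots of ρ(R) = det(R A_1 − A_0) are the generalized eigenvalues of the pencil (A_0, A_1). The sketch below uses assumed, illustrative matrices with the "shift" structure typical of block methods (each new block starts from the last value of the previous one), not the actual BFFM matrices of Appendix C:

```python
import numpy as np

# Illustrative zero-stability check for a hypothetical 4 x 4 block pencil
A1 = np.eye(4)
A0 = np.zeros((4, 4))
A0[:, -1] = 1.0     # typical shift structure: carry over the last block value

# Since A1 is invertible here, the roots of det(R*A1 - A0) = 0 are the
# eigenvalues of A1^{-1} A0.
roots = np.linalg.eigvals(np.linalg.solve(A1, A0))
print("roots:", roots)
print("zero-stable:", bool(np.all(np.abs(roots) <= 1 + 1e-9)))
```

For this illustrative pencil the roots are {1, 0, 0, 0}, so the Fatunla condition is satisfied.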

Remark 5.
We note that the explicit forms of the matrix RA_1 − A_0 for k = 2 and k = 3, respectively, are provided in Appendix C.

Convergence of BFFM
The necessary and sufficient condition for a method to be convergent is that it must be zero-stable and consistent (Lambert, [54] and Fatunla, [55]). Since BFFM (for each k) is both zero-stable and consistent, we therefore conclude that it is convergent.

Linear Stability and Region of Stability of BFFM
To analyze the linear stability of the BFFM, the block method in Equation (13) is applied to the Lambert–Watson test equation y'' = λ²y. After simple algebraic calculations, and letting z = λh, we obtain Y_{n+1} = M(z, u) Y_n. The matrix M(z, u), whose entries are rational functions, is called the amplification matrix and determines the stability of the method. Since the stability matrix depends on the two parameters z and u, we plot the stability regions in the (z, u)−plane for k = 2 and k = 3, respectively, in Figure 1, where the colored regions (blue and green) are the stability regions corresponding to the test problem y'' = λ²y. Since the Lambert–Watson test does not contain the first derivative, another usual test equation containing the first derivative, which has bounded solutions for λ ≥ 0 that tend to zero as x → ∞, is also considered to analyze the linear stability. We have plotted in Figure 2 the corresponding stability regions of the BFFM for k = 2 and k = 3.
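The region-scanning procedure behind Figures 1 and 2 can be sketched generically. Below, as a stand-in for the BFFM amplification matrix M(z, u), we use the companion matrix of the classical Störmer method y_{n+1} = (2 + z²)y_n − y_{n−1} applied to y'' = λ²y; a point belongs to the stability region when the spectral radius of the amplification matrix does not exceed 1:

```python
import numpy as np

def amplification(z2):
    # one-step recursion form of y_{n+1} = (2 + z^2) y_n - y_{n-1}
    return np.array([[2.0 + z2, -1.0],
                     [1.0,       0.0]])

def stable(z2, tol=1e-6):
    # stable iff the spectral radius is at most 1 (tolerance for round-off)
    return np.max(np.abs(np.linalg.eigvals(amplification(z2)))) <= 1.0 + tol

# scan a 1-D grid of z^2 values; for the BFFM one scans the (z, u)-plane
grid = np.linspace(-6.0, 2.0, 81)
region = [z2 for z2 in grid if stable(z2)]
print("stable for z^2 in [%.2f, %.2f]" % (min(region), max(region)))
```

For this stand-in method the scan recovers the well-known interval of periodicity z² ∈ [−4, 0]; the BFFM plots are produced by the same test applied to M(z, u) on a two-dimensional grid.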

Implementation of BFFM
The BFFM is implemented using a code written in Maple 2016.1, using the built-in fsolve feature for both linear and nonlinear problems. All numerical experiments were conducted on a laptop.
The summary of how the BFFM is applied to solve initial value problems (IVPs) with oscillatory solutions in a block-by-block fashion is as follows: Step 1: Choose N and h = (x_N − x_0)/N to form the grid Γ_N = {x_0, x_1, . . . , x_N} with x_i = x_0 + ih. Note that N must be a multiple of k, N = mk.
Step 2: Using the difference Equation (13) with n = 0, solve simultaneously for the values of (y_1, y_2, . . . , y_k)^T and (y'_1, y'_2, . . . , y'_k)^T on the block sub-interval [x_0, x_k], as y_0 and y'_0 are known from the IVP (1). As an illustration, we outline the procedure with k = 2 for the first two block intervals, n = 0 and n = 2, in Appendix D.
Step 3: Next, for n = k, the values of (y_{k+1}, y_{k+2}, . . . , y_{2k})^T and (y'_{k+1}, y'_{k+2}, . . . , y'_{2k})^T are simultaneously obtained over the block sub-interval [x_k, x_{2k}], as y_k and y'_k are known from the previous block. The process is repeated until the end of the integration interval is reached.
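The driver loop of Steps 1–3 can be sketched as follows. The actual BFFM block formulas are replaced here by a simple stand-in implicit pair (trapezoidal relations for y and y'), and Maple's fsolve is replaced by scipy.optimize.fsolve, so this illustrates only the block-by-block mechanics, not the method itself:

```python
import numpy as np
from scipy.optimize import fsolve

def f(x, y, dy):
    # y'' = f(x, y, y'); toy problem y'' = -y with solution cos(x)
    return -y

def block_residual(unknowns, xn, yn, dyn, h):
    # stand-in implicit block of size 1: trapezoidal relations for (y, y')
    y1, dy1 = unknowns
    r1 = dy1 - dyn - h/2 * (f(xn, yn, dyn) + f(xn + h, y1, dy1))
    r2 = y1 - yn - h/2 * (dyn + dy1)
    return [r1, r2]

# Step 1: grid
x0, xN, N = 0.0, 10.0, 200
h = (xN - x0) / N
# Steps 2-3: advance block by block, each block solved simultaneously
y, dy = 1.0, 0.0                       # y(0) = 1, y'(0) = 0
for n in range(N):
    xn = x0 + n * h
    y, dy = fsolve(block_residual, [y, dy], args=(xn, y, dy, h))
print("error at x = 10:", abs(y - np.cos(xN)))
```

For the real BFFM the residual function collects the 2k primary and secondary formulas of the block, and the unknown vector contains all k new values of y and y' at once.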

Numerical Examples
In order to examine the effectiveness of the BFFM derived in Section 2, we apply the BFFM with k = 2 to some well-known oscillatory problems that were solved in the recent literature. The criteria used in the numerical investigations are two-fold: accuracy and efficiency. The accuracy is measured using the maximum error of the approximate solution, defined as Error = max_{1≤n≤N} |y(x_n) − y_n|, where y(x) is the exact solution and y_n is the numerical solution obtained using the BFFM, while the computational efficiency can be observed through the plots of the maximum errors versus the number of function evaluations, NFE, required by each integrator. We emphasize that the fitting frequencies used in the numerical experiments have been obtained from the problems referenced in the literature.

Example 1

As our first test, we consider the general second-order IVP y'' + ω²y = −δy', with initial conditions y(0) = 1 and y'(0) = −δ/2, whose analytical solution is y(x) = e^{−(δ/2)x} cos(x √(ω² − δ²/4)). We solve this problem in the interval [0, 100] with ω = 1 and δ = 10^{−3}, and compare the results of the BFFM with the BNM of order 5 in Jator and Oladejo [57], and the BHT of order 5 and BHTRKNM of order 3 in Ngwane and Jator [19,58]. Table 1 shows the maximum errors and the NFE, while the efficiency curves are presented in Figure 3.
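As a cross-check of the first test problem (using a standard reference integrator, not the BFFM), one can verify numerically that the stated analytic solution indeed satisfies y'' + ω²y = −δy' with the given initial data:

```python
import numpy as np
from scipy.integrate import solve_ivp

w, delta = 1.0, 1e-3

def rhs(x, u):
    # first-order form of y'' + w^2 y = -delta y', with u = [y, y']
    return [u[1], -w**2 * u[0] - delta * u[1]]

sol = solve_ivp(rhs, [0, 100], [1.0, -delta/2],
                rtol=1e-12, atol=1e-12, dense_output=True)
xs = np.linspace(0, 100, 500)
exact = np.exp(-delta*xs/2) * np.cos(xs * np.sqrt(w**2 - delta**2/4))
err = np.max(np.abs(sol.sol(xs)[0] - exact))
print("max deviation from the analytic solution:", err)
```

The deviation is at the level of the integrator's tolerance, confirming the analytic solution used to compute the maximum errors in Table 1.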

Example 2
Let us consider the following oscillatory system, whose solution in closed form is known. In our experiment, we choose the parameter value ε = 10^{−3} and the fitting frequency ω = 5. The problem is solved in the interval [0, 100]. The step sizes for the numerical experiments are taken as h = 1/2^i, i = 3, 4, 5, 6. The numerical results of the BFFM in comparison with the block Falkner methods (BFM) of order 5 in Ramos et al. [32] and the modified block Falkner methods (MBFM) of order 5 in Ehigie and Okunuga [36] are displayed in Table 2, while the efficiency curves are displayed in Figure 4.

Example 3
Consider the popular Van der Pol equation with the given initial values. This is a nonlinear scalar equation. In our numerical experiment, the parameter δ is selected as δ = 10^{−3} and the principal frequency is chosen as ω = 1. We integrate this problem in the interval [0, 100]. In order to compare the errors of the different methods, we use the step lengths h = 1/2^i, i = 1, 2, 3, 4. We emphasize that an analytic solution of this problem does not exist; thus, we used a reference numerical solution obtained via a special perturbation approach (Anderson and Geer [59] and Verhulst [60]). The BFFM results in comparison with the block Falkner methods (BFM) of order 5 in Ramos et al. [32], the modified block Falkner methods (MBFM) of order 5 in Ehigie and Okunuga [36], and the two-stage and three-stage two-derivative Runge–Kutta–Nyström methods (TDRKN2 and TDRKN3) of orders 4 and 5, respectively, in Chen et al. [25] are displayed in Table 3, while the efficiency curves are displayed in Figure 5. It is evident from the results in Table 3 and Figure 5 that the BFFM performs better than some of the existing methods in the literature.

Example 4

As our fourth experiment, we consider the periodically forced nonlinear IVP y'' + y + y³ = (cos(x) + sin(10x))³ − 99 sin(10x), y(0) = 1, y'(0) = 10, 0 ≤ x ≤ 1000, whose analytic solution is y(x) = cos(x) + sin(10x). For this problem, ω = 1 is selected as the principal frequency, with the parameter value 10^{−10}. Table 4 shows the performance of the BFFM in comparison with the TFARKN by Fang et al. [14], the EFRK by Franco [39] and the EFRKN by Franco [10]. The efficiency curves of the BFFM and the other methods used for comparison are displayed in Figure 6.
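The stated solution of the fourth experiment can be verified by direct substitution: plugging y(x) = cos(x) + sin(10x) into y'' + y + y³ must reproduce the forcing term (cos x + sin 10x)³ − 99 sin(10x):

```python
import numpy as np

# residual of y'' + y + y^3 = (cos x + sin 10x)^3 - 99 sin(10x)
# for the candidate solution y(x) = cos(x) + sin(10 x)
xs = np.linspace(0.0, 20.0, 400)
y = np.cos(xs) + np.sin(10*xs)
ypp = -np.cos(xs) - 100*np.sin(10*xs)          # exact second derivative
lhs = ypp + y + y**3
rhs = (np.cos(xs) + np.sin(10*xs))**3 - 99*np.sin(10*xs)
residual = np.max(np.abs(lhs - rhs))
print("max residual:", residual)               # round-off level
```

The residual vanishes up to round-off, since −cos x − 100 sin 10x + cos x + sin 10x = −99 sin 10x, with the cubic terms matching identically.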

Example 5
As our fifth numerical experiment, we consider the following nonlinear system with V(y) = y_1 y_2 (y_1 + y_2)³, whose solution in closed form involves the combination sin(5x) + cos(5x). Table 5 and Figure 7 show the superiority of the BFFM in the interval [0, 100] over the BNM of order 5 in Jator and Oladejo [57], the BHM of order 11 in Jator and King [61] and the fourth-order ARKN in Franco [9].

Example 6
We consider the following well-known two-body problem, y''_1 = −y_1/r³, y''_2 = −y_2/r³, where r = √(y_1² + y_2²), whose solution in closed form is given by y_1(x) = cos(x), y_2(x) = sin(x). Table 6 reveals the performance of our proposed BFFM in the interval 0 ≤ x ≤ 10 with ω = 1, compared with the fourth-order DIRKNNew of Senu et al. [15], while Figure 8 establishes the efficiency of the BFFM.
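A quick check (with a standard reference integrator, not the BFFM) confirms that y_1(x) = cos x, y_2(x) = sin x solves the two-body system on the circular orbit r = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):
    # u = [y1, y2, y1', y2'], with y_i'' = -y_i / r^3, r = sqrt(y1^2 + y2^2)
    y1, y2, dy1, dy2 = u
    r3 = (y1**2 + y2**2) ** 1.5
    return [dy1, dy2, -y1/r3, -y2/r3]

sol = solve_ivp(rhs, [0, 10], [1.0, 0.0, 0.0, 1.0],
                rtol=1e-12, atol=1e-12)
err = max(np.max(np.abs(sol.y[0] - np.cos(sol.t))),
          np.max(np.abs(sol.y[1] - np.sin(sol.t))))
print("max deviation from cos/sin:", err)
```

The deviation stays at the tolerance level, validating the exact solution used for the errors reported in Table 6.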

Conclusions
In this paper, we have proposed a family of adapted block Falkner methods using the third derivative for the direct numerical solution of second-order initial value problems with oscillatory solutions. The methods are applied in block form as simultaneous numerical integrators and thus do not suffer the disadvantages of the predictor-corrector mode. The basic properties of the methods are investigated and discussed. The convergence of the proposed methods was established and the stability regions are presented. The numerical results on well-known second-order initial value problems with oscillatory solutions show the effectiveness of the proposed methods compared with some existing methods in the reviewed literature. Although the proposed family of methods can be implemented with variable steps, that aspect was not considered in the current work, but will be addressed in our future research.

Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.