Efficient k-Step Linear Block Methods to Solve Second Order Initial Value Problems Directly

Abstract: There are dozens of block methods in the literature intended for solving second-order initial-value problems. This article analyzes the efficiency of k-step block methods for directly solving general second-order initial-value problems. Each of these methods consists of a set of 2k multi-step formulas (although we will see that this number can be reduced to k + 1 in the case of a special equation) that provides approximate solutions at k grid points at once. The usual way to obtain these formulas is by using collocation and interpolation at different points, which are not necessarily all in the mesh (intra-step or off-step points may also be considered). An important issue is that, for each k, all of them are essentially the same method, although they can adopt different formulations. Nevertheless, the performance of those formulations is not the same. The analysis of the methods presented gives some clues as to how to select the most appropriate ones in terms of computational efficiency. The numerical experiments show that, using the proposed formulations, the computing time can be reduced to less than half.


Introduction
Let us consider a second-order initial value problem (IVP) of the form y''(x) = f(x, y(x), y'(x)), y(x_0) = y_0, y'(x_0) = y'_0, (1) on an interval [x_0, b] ⊂ R, for which we assume that there exists a unique solution.
There are many methods in the literature for directly approximating the solution of the problem in (1) such as the Runge-Kutta-Nyström methods, Störmer-Cowell methods, or Falkner methods, and also the so-called block methods (see [1][2][3][4] and references therein).
To the best of our knowledge, the first appearance of block methods is due to Milne [5], for solving first-order problems. They have some advantages concerning implementation cost and accuracy. These methods were designed with the aim of addressing some of the drawbacks of methods used in predictor-corrector modes [6][7][8][9]. Since Milne's time, many methods of this type have been published in the mathematical literature; an interesting monograph on the subject is that by Brugnano and Trigiante [10].
In this work, our goal was to analyze and classify block methods used for solving second order problems directly. Our analysis reveals that for each k ≥ 2, k ∈ N, there are some specific formulations which are the most efficient ones in terms of computational time if standard routines based on the classical Newton's method are used for solving the discrete problem. This work is an extension of the one developed in [11], where it was shown that, from the class of k-step linear block methods for solving first-order IVPs, there is only one formulation for each k which is the most efficient.
We present below a brief description of the different sections. In Section 2, we will analyze in detail the 2-step block methods. This analysis shows that many methods that have appeared in the literature actually correspond to the same method, although they present different formulations. We will show how to obtain the most efficient formulations in terms of computational time. In Section 3, the analysis of k-step block methods is considered. Section 4 deals with the study of the main characteristics of the methods. In Section 5, we will present some examples of the performance of the different formulations. Section 6 is devoted to presenting some conclusions. Finally, in Appendix A, we have included an algorithm to generate the methods. The formulas of the different k-step block methods can be easily obtained using a computer algebra system with the given algorithm.

Detailed Analysis of 2-Step Block Methods
We will consider here only methods that use information at the grid points; that is, we will not consider methods that use backward values or intra-step or future off-step points. This section is divided into two parts, according to the type of differential equation considered: either the general form y'' = f(x, y, y'), or the special type y'' = f(x, y), where the first derivative is absent. From now on, h will denote a fixed step size.

Two-Step Block Methods for y'' = f(x, y, y'):
The 2-step block method (2) by Adesanya et al. for solving this kind of problem was derived in [12], although in that reference there were some misprints that have been corrected here. Later, Adebile et al. in [13] presented a 2-step block method which may readily be rewritten as (4). On the other hand, Majid et al. presented in [14] the 2-step block method (5). These are three of the different 2-step block methods that have appeared in the literature. What do the above methods have in common? It is obvious that the unknowns are the values y_{n+1}, y_{n+2}, y'_{n+1}, y'_{n+2}, and that the terms f_{n+1}, f_{n+2} appear on the right-hand side of all the equations. There is more to say: we are going to show that these methods are different formulations of the same underlying method. Now, let us see how to obtain the previous 2-step block methods, and how, among the different formulations, there are some which are the most efficient.
Consider the grid points x_n, x_{n+1} = x_n + h, x_{n+2} = x_n + 2h. Now, in order to get approximate values of the solution of the problem in (1) on the interval [x_n, x_{n+2}], we approximate the true solution y(x) by a polynomial p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4, where the coefficients a_j ∈ R will be determined after imposing appropriate collocation conditions. These conditions are imposed on p(x) and its derivatives up to the second order at the grid points x_n, x_{n+1}, x_{n+2}. The notations y_{n+i}, y'_{n+i} and f_{n+i} correspond to approximations of the solution and its derivatives at the grid points, that is, y_{n+i} ≈ y(x_n + ih), y'_{n+i} ≈ y'(x_n + ih), f_{n+i} ≈ y''(x_n + ih) = f(x_n + ih, y(x_n + ih), y'(x_n + ih)). Taking five equations of the set E_2 = { p(x_n + ih) = y_{n+i}, p'(x_n + ih) = y'_{n+i}, p''(x_n + ih) = f_{n+i}, i = 0, 1, 2 }, we obtain a system where the unknowns are the coefficients a_j, j = 0, 1, 2, 3, 4. We solve this system and substitute the values of the coefficients a_j into p(x). The four remaining equations of the above set, once the obtained a_j have been replaced, result in a block method. The 2-step block methods in (2) and (4) may be obtained in this way.
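The construction just described is easy to automate. The following sketch (pure Python, exact rational arithmetic) takes one particular choice of five conditions from E_2, namely p(x_n) = y_n, p'(x_n) = y'_n, and p''(x_n + ih) = f_{n+i}, i = 0, 1, 2, solves for the a_j with x_n = 0 and h = 1, and evaluates the four remaining relations. All names are illustrative; this is a sketch of the procedure, not a transcription of any numbered formula in the paper.

```python
from fractions import Fraction as F

def solve(A, b):
    """Gaussian elimination with exact rational arithmetic."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def block_formulas(rhs):
    """Solve the five conditions p(0)=y_n, p'(0)=y'_n, p''(i)=f_{n+i},
    i = 0,1,2 (x_n = 0, h = 1) for a_0..a_4, then evaluate the four
    remaining relations p(1), p(2), p'(1), p'(2)."""
    A = [[F(1), 0, 0, 0, 0],          # p(0)   = a0
         [0, F(1), 0, 0, 0],          # p'(0)  = a1
         [0, 0, F(2), 0, 0],          # p''(0) = 2 a2
         [0, 0, F(2), F(6), F(12)],   # p''(1) = 2 a2 + 6 a3 + 12 a4
         [0, 0, F(2), F(12), F(48)]]  # p''(2) = 2 a2 + 12 a3 + 48 a4
    a = solve(A, [F(v) for v in rhs])
    p = lambda x: sum(a[j] * x**j for j in range(5))
    dp = lambda x: sum(j * a[j] * x**(j - 1) for j in range(1, 5))
    return p(1), p(2), dp(1), dp(2)   # y_{n+1}, y_{n+2}, y'_{n+1}, y'_{n+2}

# Feeding unit vectors in (y_n, y'_n, f_n, f_{n+1}, f_{n+2}) recovers the
# coefficient of each datum in the four formulas, e.g. for f_{n+1}:
print(block_formulas([0, 0, 0, 1, 0]))
# -> (Fraction(1, 4), Fraction(4, 3), Fraction(2, 3), Fraction(4, 3))
```

Running this for all five unit vectors assembles the full set of 2-step formulas; the same scheme, with larger matrices, generates the k-step methods of Section 3.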
The set E_2 may be expressed in matrix-vector notation, where I_9 denotes the identity matrix of order nine, and y_2 = (a_0, a_1, a_2, a_3, a_4, y_n, y_{n+1}, y_{n+2}, y'_n, y'_{n+1}, y'_{n+2}, f_n, f_{n+1}, f_{n+2})^T.
Matrix B_2 has rank nine, and thus the above system of nine equations has a unique solution, which will be expressed in terms of some parameters. If we take any nine terms of y_2 as principal variables, the solution will be expressed in terms of these variables. Among these variables, five must be fixed as the five parameters a_j, j = 0, 1, 2, 3, 4, which are the coefficients of p(x); the remaining principal variables give rise to four formulas, which constitute the different 2-step block methods. Doing this, we obtain C(9, 4) = 126 possibilities, which correspond to different formulations of the 2-step block method, all of which are equivalent to each other.
Among the 126 different formulations of the 2-step block method, we will consider those that have the lowest number of occurrences of f. There are C(7, 2) = 21 such formulations, which are obtained by taking as principal variables in the algebraic system the a_j, j = 0, 1, 2, 3, 4, and f_{n+1}, f_{n+2}. The remaining two principal variables can be chosen arbitrarily. We choose these variables as y_{n+1}, y_{n+2}; thus, one of the simplest formulations of the 2-step block method is given by (8). If we solve the system in (8), we get approximate values on the interval [x_n, x_{n+2}]. The above system, which discretizes the problem on [x_n, x_{n+2}], is usually solved by Newton's method (or any of its variants). The y_n, y'_n and f_n in (8) are constants, while the unknowns are y_{n+1}, y'_{n+1} and y_{n+2}, y'_{n+2}. We see that the non-linearity of f is reflected only twice in the system, through f_{n+1} and f_{n+2}. Nevertheless, for any of the methods in (2), (4), or (5), the number of occurrences of f is higher, which usually results in a more complicated system. In the numerical examples of Section 5, one can clearly see the importance of choosing one formulation or another. This importance becomes more marked the more complicated and nonlinear the function f is.
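To illustrate how such an implicit block system is solved in practice, the sketch below advances one block of a low-f-count two-step formulation obtained by the collocation construction of this section (with p'' collocated at the three nodes and p, p' interpolated at x_n). It is not claimed to be identical to the formulation (8) of the paper, and, for brevity, the inner solve uses plain fixed-point sweeps as a stand-in for Newton's method; note that each sweep evaluates f at only the two unknown points.

```python
import math

def two_step_block(f, xn, yn, dyn, h, sweeps=30):
    """One block of a 2-step collocation method for y'' = f(x, y, y').
    The implicit system in (y1, y1', y2, y2') is solved by fixed-point
    sweeps (a stand-in for Newton); f enters only via f_{n+1}, f_{n+2}."""
    fn = f(xn, yn, dyn)
    y1, dy1, y2, dy2 = yn, dyn, yn, dyn          # crude initial guesses
    for _ in range(sweeps):
        f1 = f(xn + h, y1, dy1)                  # the only two evaluations
        f2 = f(xn + 2*h, y2, dy2)                # of f at unknown points
        y1 = yn + h*dyn + h*h*(7*fn + 6*f1 - f2)/24
        y2 = yn + 2*h*dyn + 2*h*h*(fn + 2*f1)/3
        dy1 = dyn + h*(5*fn + 8*f1 - f2)/12
        dy2 = dyn + h*(fn + 4*f1 + f2)/3
    return y1, dy1, y2, dy2

# Pendulum equation y'' = -sin(y): one block from (y, y') = (0.5, 0)
print(two_step_block(lambda x, y, dy: -math.sin(y), 0.0, 0.5, 0.0, h=0.1))
```

For the linear test f(x, y, y') = -y with y(0) = 1, y'(0) = 0, the block values agree with cos(x) to high accuracy for small h, as expected from the collocation construction.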

Remark 1.
We note that we can also develop formulations of the 2-step block method by combining different formulas obtained with the above procedure. For example, the first two formulas in the method in (5) are obtained by taking as principal variables the a_j, j = 0, . . . , 4, and y_{n+1}, y_{n+2}, y'_{n+1}, y'_{n+2}, while the last two formulas in (5) are obtained by taking as principal variables the a_j, j = 0, . . . , 4, and y_n, y_{n+2}, y'_n, y'_{n+2}. It is a simple routine to see that, after substituting the values of y_{n+1}, y'_{n+1} given by the first two formulas into the last two, we obtain the method in (2).

Two-Step Block Methods for y'' = f(x, y):
For the special second-order problem y'' = f(x, y), the procedure is similar to the one above. In this case, we get an even simpler system than in the previous case. Now, if we are not interested in the values of the first derivatives, we can remove these values from the system in (7). We must be careful, because we cannot remove all of them. Firstly, y'_n is one of the initial values of the second-order problem on the block [x_n, x_{n+2}], which is necessary to guarantee a unique solution. On the other hand, after solving the equations for the first block, we need the values y_{n+2} and y'_{n+2}, which will be taken as the initial values in the sequential iterative procedure to obtain the approximations on the next block. Therefore, the value y'_{n+2} cannot be removed either, and the 2-step block methods are given in this case by three formulas where the unknowns are y_{n+1}, y_{n+2} and y'_{n+2}. Now the set of equations is the previous E_2 after removing the equation corresponding to y'_{n+1}, that is, p'(x_n + h) = y'_{n+1}.
Proceeding similarly as before, fixing the principal variables a_j, j = 0, . . . , 4, there is a total of C(8, 3) = 56 possibilities to get the solutions; that is, this is the number of different formulations of the 2-step block method. To obtain the simplest ones, we also fix as principal variables f_{n+1} and f_{n+2}, which leaves 6 possibilities. Taking as the remaining principal variable y_{n+2}, we get one of the simplest formulations of the two-step block method for numerically solving the special second-order problem, which is given by (9). If we had taken as principal variables y_{n+1}, y_{n+2} and y'_{n+2}, or had solved the above equations for them, the method would be written as (10), although this is not recommended, due to the higher number of occurrences of f.

Analysis of k-Step Block Methods
For values of k > 2, the situation is similar to the one above. The k-step block methods may be formulated in many different ways, but only a few of them are the most efficient. We again consider two subsections, one for the general second-order differential equation and another for the special second-order differential equation.

k-Step Block Methods for y'' = f(x, y, y'):
For k = 3, the most common formulation of the block method for solving (1) is given by (11) (see, for example, [15][16][17], although in the last one there is a misprint in the formula for y_{n+3}). Other formulations of three-step block methods may be found in [18], [19], or [20]. Nevertheless, one of the simplest formulations of the above three-step method is (12). Concerning the 4-step block method, different formulations may be found in [21][22][23]. In [24][25][26][27], different formulations of the 5-step block method have appeared. For the 6-step block method, one can see the references [28][29][30][31]. For k > 6 we have found only two references, both by the same author, one concerning the 7-step block method [32] and the other the 8-step block method [33].
In what follows, we will analyze the development and the best formulations of the block methods with k steps, k ∈ N. For this, we use a similar strategy as in the previous case for k = 2. Now we take the grid points x_n, x_{n+1} = x_n + h, . . . , x_{n+k} = x_n + kh. To get an approximate solution of the problem in (1) at the grid points on the interval [x_n, x_{n+k}], we consider that the true solution y(x) is approximated by the polynomial p(x) = a_0 + a_1 x + · · · + a_{k+2} x^{k+2}. The a_j ∈ R are unknown coefficients that will be determined after imposing collocation conditions on p(x), p'(x), and p''(x) at the grid points x_n, x_{n+1}, . . . , x_{n+k}. If we take k + 3 equations from the set E_k = { p(x_n + ih) = y_{n+i}, p'(x_n + ih) = y'_{n+i}, p''(x_n + ih) = f_{n+i}, i = 0, 1, . . . , k }, we obtain a system of k + 3 equations in k + 3 unknowns (the coefficients a_j, j = 0, 1, . . . , k + 2). After solving the system, the obtained values are substituted into p(x). There remain 2k equations of the total of 3(k + 1), which result in a k-step block method.
It is clear that the above matrix has rank 3(k + 1), and thus the system of 3(k + 1) equations has a unique solution given in terms of some parameters. If we choose any 3(k + 1) entries of y_k as principal variables, the solutions will be expressed in terms of these variables. Among those variables, k + 3 are taken as the parameters a_j, j = 0, 1, . . . , k + 2, in order to get the polynomial p(x). The other principal variables, which have not been considered up to now, give the 2k formulas that form the different k-step block methods. Doing this, the number of possibilities is C(3k + 3, 2k), which corresponds to the number of different formulations of the k-step block method. The important thing is that all these formulations are equivalent, and thus we can say that there is only one k-step block method, which presents different formulations. It should be noted here that applying the different formulations of a selected method will not produce the same numerical results, but this is due to the different rounding errors involved in the calculations. If the calculations were exact, without rounding errors, all formulations would produce the same approximations.
Among the possible equivalent formulations of the k-step block methods, we will consider those in which the number of occurrences of f is the lowest. Taking the a_j, j = 0, 1, . . . , k + 2, and f_{n+i}, i = 1, . . . , k, as principal variables, there are C(2k + 3, k) of such simplest methods. Among these, if we also take the y_{n+i}, i = 1, . . . , k, as principal variables, then there is only one such method (as is the method in (12) for k = 3). For this choice, the possible non-linearity of f is reflected only k times in the equations of the block method.

k-Step Block Methods for y'' = f(x, y):
Now, for the special second-order equation, the procedure is similar to the one for k = 2. If we are not interested in the values of the first derivatives, we can remove almost all the equations corresponding to these values in the system in (14), being careful not to remove the one corresponding to the initial value y'_n and the one for y'_{n+k} (this value will be needed in the iterative procedure to approximate the solution on the next block). The k-step block methods will consist of a set of k + 1 formulas where the k + 1 unknowns are y_{n+1}, . . . , y_{n+k} and y'_{n+k}. In the system in (14), we have to remove the equations p'(x_n + ih) = y'_{n+i}, i = 1, . . . , k − 1.
The resulting system can also be arranged in matrix form, and the corresponding matrix, similar to B_k in the previous case, now has rank 2(k + 1) + 2. Proceeding as before, fixing as principal variables the a_j, j = 0, . . . , k + 2, there is a total of C(2k + 4, k + 1) possibilities to get the solutions, that is, the number of different formulations of the k-step block method. To obtain the simplest ones, we also set as principal variables f_{n+1}, f_{n+2}, . . . , f_{n+k}, resulting in C(k + 4, 1) = k + 4 possibilities. Among these k + 4 possibilities, if we take as the remaining principal variable y_{n+k}, we get one of the simplest formulations of the k-step block method. This formulation is unique, as is the method in (9) for k = 2. Again, for these formulations, the non-linearity of f is reflected only k times in the formulas of the block method.
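The formulation counts above are straightforward to verify with a few lines of Python:

```python
from math import comb

# Number of equivalent formulations of the k-step block method:
#   general equation y'' = f(x, y, y'): C(3k+3, 2k)
#   special equation y'' = f(x, y):     C(2k+4, k+1)
for k in range(2, 7):
    print(k, comb(3*k + 3, 2*k), comb(2*k + 4, k + 1))

# The k = 2 counts match the ones worked out in Section 2:
# 126 and 21 for the general equation, 56 and 6 for the special one.
assert comb(9, 4) == 126 and comb(7, 2) == 21
assert comb(8, 3) == 56 and comb(6, 1) == 6
```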
We note that in the simplified formulation of the method for the special equation y'' = f(x, y), the variables y'_{n+1}, . . . , y'_{n+k−1} are not involved. Therefore, it is not possible to use this simplified method when the equation is of the general form y'' = f(x, y, y').

Characteristics of the Methods
Order and Local Truncation Error: The k-step block methods constructed in Section 3 for solving (1) can be expressed in the matrix form (15), in terms of coefficient matrices A_0, A_1, B_0, B_1, where Y_{τ+1} = (y_{n+1}, . . . , y_{n+k}, hy'_{n+1}, . . . , hy'_{n+k})^T, Y_τ = (y_{n−k+1}, . . . , y_{n−1}, y_n, hy'_{n−k+1}, . . . , hy'_{n−1}, hy'_n)^T. The local truncation error (LTE) of (15) is defined in (16) by applying the associated difference operator to a function z(x). Assuming that z(x) is sufficiently differentiable, the terms in (16) can be expanded by Taylor series about the point x_n to obtain the expression of the LTE as a power series in h, whose constant coefficients C_q, q = 0, 1, . . ., are column vectors of size 2k (see [44]), since the entries of the matrices A_0, A_1, B_0, and B_1 are given by the coefficients of the methods in Section 3. For instance, for k = 3, the specific matrices for the method in (12) follow directly from its coefficients.
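The Taylor coefficients C_q are easy to compute mechanically. As a self-contained illustration (using the classical Numerov formula for y'' = f(x, y) as a stand-in, since the paper's block formulas are not reproduced here), the sketch below expands the standard scalar difference operator L[z; h] = Σ_j α_j z(x + jh) − h² Σ_j β_j z''(x + jh) and recovers its order and error constant:

```python
from fractions import Fraction as F
from math import factorial

def lte_coeffs(alpha, beta, qmax):
    """Taylor coefficients C_q of the scalar difference operator
    L[z; h] = sum_j alpha_j z(x + j h) - h^2 sum_j beta_j z''(x + j h).
    The method has order p when C_0 = ... = C_{p+1} = 0 and C_{p+2} != 0."""
    C = []
    for q in range(qmax + 1):
        c = sum(a * F(j**q, factorial(q)) for j, a in enumerate(alpha))
        if q >= 2:
            c -= sum(b * F(j**(q - 2), factorial(q - 2))
                     for j, b in enumerate(beta))
        C.append(c)
    return C

# Numerov: y_{n+2} - 2y_{n+1} + y_n = h^2 (f_n + 10 f_{n+1} + f_{n+2}) / 12
C = lte_coeffs([F(1), F(-2), F(1)], [F(1, 12), F(10, 12), F(1, 12)], 6)
print(C)  # [0, 0, 0, 0, 0, 0, Fraction(-1, 240)]
```

Here C_0 = ... = C_5 = 0 and C_6 = -1/240, i.e. order 4 with LTE = -(1/240) h^6 y^(6)(x_n) + O(h^7); the same bookkeeping, applied row by row, yields the vector coefficients C_q of (15).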

Definition 2.
If the block method in (15) has order p ≥ 1, it is said to be consistent.
Zero-stability: Zero-stability is concerned with the behavior of the method in the limit as h tends to zero. Thus, as h → 0, the block method (15) tends to the difference system A_1 Y_{τ+1} − A_0 Y_τ = 0, with first characteristic polynomial ρ(R) = det(R A_1 − A_0). It can be shown that, for the methods given in Section 3, ρ(R) can be expressed in a factored form where ℵ is a constant and k is the number of steps in the block; for instance, for k = 2, ℵ = 3/2. We note that the roots R_j of ρ(R) = 0 satisfy |R_j| ≤ 1, and for those roots with |R_j| = 1, the multiplicity does not exceed 2.

Definition 3. The block method (15) is zero-stable provided the roots R_j, j = 1, . . . , 2k, of its first characteristic polynomial satisfy |R_j| ≤ 1, j = 1, . . . , 2k, and for those roots with |R_j| = 1, the multiplicity does not exceed 2.

Definition 4. The block method (15) is said to be convergent if it is consistent and zero-stable.
Linear stability: In order to study the linear stability of (15), we apply the method to the test equation y'' = −λ²y − δy', where λ is the frequency and δ is the damping (see [45]). Letting θ = λh and σ = δh, it can be easily shown that the application of (15) to the test equation yields a recursion Y_{τ+1} = M(θ, σ) Y_τ, where the eigenvalues of the amplification matrix M(θ, σ) determine the stability of the method.

Remark 2.
We note that when δ = 0 (σ = 0), the stability interval associated with the linear test model y'' = −λ²y is obtained. Specifically, for ρ(M(θ²)) = 1 we obtain the so-called interval of periodicity, an interval throughout which the spectral radius is less than or equal to unity (see [46]).
For example, in Figures 1 and 2 the stability regions and periodicity intervals for the 2-step and 3-step methods are shown.
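A periodicity interval can be located numerically by scanning the spectral radius of the amplification matrix. Since the paper's matrices M(θ, σ) are not reproduced here, the sketch below uses Numerov's method on y'' = −λ²y as a concrete 2×2 stand-in, whose interval of periodicity is known to be 0 < θ² < 6:

```python
import cmath

def spectral_radius(theta2):
    """Spectral radius of the amplification matrix of Numerov's method on
    y'' = -lambda^2 y, with theta2 = (lambda h)^2. The update is the
    companion recursion R^2 - t R + 1 = 0 with
    t = 2 (1 - 5 theta2 / 12) / (1 + theta2 / 12)."""
    t = 2 * (1 - 5 * theta2 / 12) / (1 + theta2 / 12)
    disc = cmath.sqrt(t * t - 4)
    r1, r2 = (t + disc) / 2, (t - disc) / 2
    return max(abs(r1), abs(r2))

# Inside the interval of periodicity both roots stay on the unit circle;
# past theta2 = 6 one root leaves the unit disk.
print(spectral_radius(5.9))   # close to 1
print(spectral_radius(6.1))   # greater than 1
```

The same scan over a (θ, σ) grid, with the method's actual M(θ, σ), traces the stability regions plotted in Figures 1 and 2.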

Numerical Experiments
The performance of the different formulations of the block methods derived in this work is assessed in this section through some numerical experiments. Our goal was to compare the performance of the usual formulations with the simplest formulations proposed in this paper. We will consider the usual k-step block methods, named Bk, which are obtained by taking as principal variables in the system in Section 3 the a_j, j = 0, 1, . . . , k + 2, the y_{n+1}, . . . , y_{n+k}, and the y'_{n+1}, . . . , y'_{n+k}, and one of the simplest k-step block methods, named BkS, which are obtained by taking as principal variables the a_j, j = 0, 1, . . . , k + 2, the f_{n+1}, . . . , f_{n+k}, and the y_{n+1}, . . . , y_{n+k}. For problems of the form y'' = f(x, y), the proposed methods in Section 3.2 will be named B̄kS. We have chosen values of k = 4, 6, 8, 10 and have expressed the results by means of efficiency curves, where we have plotted the logarithm of the maximum absolute error, log10(Err), over the integration interval versus the CPU time. The plots clearly show that the BkS (or B̄kS) formulations greatly outperform the Bk ones.

Example 1. Consider the IVP given by y''(x) = −y(x), x ∈ [0, 20], y(0) = 1, y'(0) = 0, (19) whose exact solution is given by y(x) = cos(x). We have taken fixed stepsizes h = 20/(10k), 20/(20k), 20/(40k), 20/(80k), with k = 4, 6, 8, and 10. The efficiency curves are shown in Figure 3.

Example 2 (source: [45]). Consider the IVP given in [45] on the interval [0, 20].
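The fixed-step setup of Example 1 is easy to reproduce on a small scale. The sketch below integrates y'' = −y over [0, 20] block by block with a two-step collocation formulation (derived as in Section 2; a hypothetical small-k stand-in for the paper's k = 4..10 runs, with fixed-point inner sweeps in place of Newton) and records the maximum absolute error against cos(x):

```python
import math

def f(x, y, dy):          # Example 1: y'' = -y, exact solution cos(x)
    return -y

def integrate(h, xend=20.0):
    """Advance block by block with a 2-step collocation-based formulation;
    unknowns per block: y_{n+1}, y_{n+2}, y'_{n+1}, y'_{n+2}."""
    x, y, dy = 0.0, 1.0, 0.0
    err = 0.0
    while x < xend - 1e-12:
        fn = f(x, y, dy)
        y1, dy1, y2, dy2 = y, dy, y, dy
        for _ in range(20):                   # inner iteration (Newton stand-in)
            f1 = f(x + h, y1, dy1)
            f2 = f(x + 2*h, y2, dy2)
            y1 = y + h*dy + h*h*(7*fn + 6*f1 - f2)/24
            y2 = y + 2*h*dy + 2*h*h*(fn + 2*f1)/3
            dy1 = dy + h*(5*fn + 8*f1 - f2)/12
            dy2 = dy + h*(fn + 4*f1 + f2)/3
        err = max(err, abs(y1 - math.cos(x + h)), abs(y2 - math.cos(x + 2*h)))
        x, y, dy = x + 2*h, y2, dy2
    return err

# Halving h should divide the maximum error by roughly 2^4 (4th-order scheme)
print(integrate(0.1), integrate(0.05))
```

Timing the same loop for the different formulations of one method, at matching step sizes, yields the efficiency curves of the kind shown in Figure 3.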

Conclusions
This paper aimed at analyzing the linear k-step block methods for directly solving second-order IVPs. We have presented an extensive bibliography that reflects the state of the art up to this moment. For a fixed number of steps, the different methods that appear in the literature are actually different formulations of the same method. Moreover, among the many formulations of a method, there are a few that are the most efficient from the point of view of computational cost. Some examples have been presented to show the performance of the methods considered. We claim that, among the different formulations of the methods considered, the simplest ones in terms of the number of occurrences of the function f are the most efficient if standard routines based on the classical Newton's method are used for solving the discrete problem. This claim is supported by the numerical experiments, where the proposed formulations take half the time or less compared with the formulations that usually appear in the literature. Nevertheless, different conclusions might be reached if more efficient ad hoc procedures, based on a suitable diagonalization of the coefficient matrices or on suitable nonlinear splittings, are employed.

Conflicts of Interest:
The authors declare no conflict of interest.