On a class of Hermite-Obrechkoff one-step methods with continuous spline extension

The class of A-stable symmetric one-step Hermite-Obrechkoff (HO) methods introduced in [1] for dealing with initial value problems is analyzed. Such schemes have the peculiarity of admitting a multiple-knot spline extension collocating the differential equation at the mesh points. As a new result, it is shown that these maximal-order schemes are conjugate symplectic, which is a benefit when the methods are applied to Hamiltonian problems. Furthermore, a new efficient approach for the computation of the spline extension is introduced, adopting the same strategy developed in [2] for the BS linear multistep methods. The performance of the schemes is tested in particular on some Hamiltonian benchmarks and compared with that of the Gauss Runge-Kutta schemes and Euler-Maclaurin formulas of the same order.


Introduction
We are interested in the numerical solution of the first-order Ordinary Differential Equation (ODE)

    y'(t) = f(t, y(t)),  t ∈ [t_0, t_0 + T],

associated with the initial condition y(t_0) = y_0, where f : [t_0, t_0 + T] × R^m → R^m, m ≥ 1, is a C^{R−1} function on its domain, R ≥ 1, and y_0 ∈ R^m is assigned. In this context, here we focus on one-step Hermite-Obrechkoff (HO) methods [3, pag. 277]. Unlike Runge-Kutta schemes, high order of convergence is obtained with HO methods without adding stages. Clearly there is a price for this, because total derivatives of the function f are involved in the difference equation defining the method and thus a suitable smoothness requirement on f is necessary. Multiderivative methods have often been considered in the past for the numerical treatment of ordinary differential equations, for example also in the context of boundary value methods [4], and in recent years there has been renewed interest in this topic, also considering its application to the numerical solution of differential algebraic equations, see e.g. [5][6][7][8][9][10]. Here we consider the numerical solution of Hamiltonian problems, which in canonical form can be written as

    y' = J ∇H(y),

with y = (q; p), q, p ∈ R^ℓ, where q and p are the generalized coordinates and conjugate momenta, H : R^{2ℓ} → R is the Hamiltonian function and I_ℓ stands for the identity matrix of dimension ℓ appearing in the blocks of J. Note that the flow ϕ_t : y_0 → y(t) associated with the dynamical system (3) is symplectic; this means that its Jacobian satisfies

    (∂ϕ_t(y)/∂y)^T J (∂ϕ_t(y)/∂y) = J,  ∀ y ∈ R^{2ℓ}.

Recently the class of Euler-Maclaurin HO methods for the solution of Hamiltonian problems has been analyzed in [11,12] where, despite the non-existence results for symplectic multiderivative methods shown in [13], the conjugate symplecticity of the methods was proved.

(Peer-reviewed version available at Axioms 2018, 7, 58; doi:10.3390/axioms7030058.)
In this paper we consider the symmetric one-step HO methods which were analyzed in [1,14] in the context of spline applications. We call them BSHO methods, since they are connected to B-splines, as we will show. BSHO methods have a formulation similar to that of the Euler-Maclaurin formulas, and actually the order 2 and order 4 schemes of the two families coincide. As a new result, we prove that BSHO methods are conjugate symplectic schemes, as is the case for the Euler-Maclaurin methods [11,12], and so both families are suited to the context of geometric integration.
BSHO methods are also strictly related to BS methods [15,16], which are a class of Linear Multistep Methods (LMMs) also based on B-splines, suited for addressing boundary value problems formulated as first-order differential problems.¹ Actually, as for the BSHO methods, the discrete solution generated by a BS method can be easily extended to a continuous spline collocating the differential problem at the mesh points [2]. The idea now is to rely on B-splines with multiple inner knots in order to derive one-step HO schemes. Actually the inner knot multiplicity is strictly connected to the number of derivatives of f involved in the difference equations defining the method, and consequently to the order of the method. The efficient approach introduced in [2] for the computation of the collocating spline extension of BS methods is here extended to BSHO methods, working with multiple knots. Note that we adopt a reversed point of view with respect to [1,14], because we assume the numerical solution generated by the BSHO methods to be already available and we are interested in an efficient procedure for obtaining the B-spline coefficients of the associated spline.
The paper is organized as follows. In Section 2 one-step symmetric HO methods are introduced, focusing in particular on BSHO methods. Section 3 is devoted to proving that BSHO methods are conjugate symplectic. Then Section 4 first shows how these methods can be revisited in the spline collocation context. Subsequently, it introduces an efficient procedure to compute the B-spline form of the collocating spline extension associated with the numerical solution produced by the R-th BSHO method, and it shows that its convergence order is equal to that of the numerical solution.
Section 6 presents some numerical results related to Hamiltonian problems, comparing them with those generated by Euler-Maclaurin and Gauss Runge-Kutta schemes of the same order.

¹ Note that BS methods were also first studied in [1,14], but at that time they were discarded in favor of BSHO methods since, when used as initial value methods, they are not convergent. In [15,16] the same schemes have been studied as boundary value methods, and in such a context they have been recovered, in particular for application to boundary value problems.

One step symmetric Hermite-Obrechkoff methods
Let t_i, i = 0, …, N, be an assigned partition of the integration interval [t_0, t_0 + T] and let us denote by u_i an approximation of y(t_i). Any one-step symmetric Hermite-Obrechkoff (HO) method can be written as in (6), clearly setting u_0 := y_0, where u_r^(1) := f(t_r, u_r) and where u_r^(j), for j ≥ 2, denotes the total (j − 1)-th derivative of f with respect to t computed at (t_r, u_r), so that u_r^(j) ≈ y^(j)(t_r). On the basis of (1), the analytical computation of the j-th derivative y^(j) involves a tensor of order j; for example,

    y^(2) = ∂f/∂t + (∂f/∂y) f,

where ∂f/∂y becomes the Jacobian m × m matrix of f with respect to y when m > 1.
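As a minimal sketch of the second total derivative just displayed, the snippet below evaluates u^(2) = f_t + (∂f/∂y) f for an autonomous system (so f_t = 0); the helper names and the pendulum test system are our own illustration, not taken from the paper.

```python
import numpy as np

def total_derivative_2(f, jac, t, y):
    # u^(2) = f_t + (df/dy) f; f_t vanishes for the autonomous f used below
    return jac(t, y) @ f(t, y)

# Pendulum as a first-order system: y = (q, p), f(y) = (p, -sin q)
f = lambda t, y: np.array([y[1], -np.sin(y[0])])
jac = lambda t, y: np.array([[0.0, 1.0], [-np.cos(y[0]), 0.0]])

y = np.array([0.5, 0.2])
y2 = total_derivative_2(f, jac, 0.0, y)   # approximates y''(t) along the flow
```

For the pendulum this reproduces y'' = (p', −cos(q) q') = (−sin q, −cos(q) p) componentwise.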
For example, the one-step Euler-Maclaurin formulas [3] of order 2s, with s ∈ N, s ≥ 1, whose coefficients involve the Bernoulli numbers b_k, belong to this class of methods. These methods will be referred to in the following with the label EMHO.
Here we consider another class of symmetric HO methods, which can be obtained by suitably defining the polynomial P_2R appearing in Lemma 13.3 of [3], whose statement is reported in Lemma 1.

Lemma 1. Let R be any positive integer and P_2R be a polynomial of exact degree 2R. Then the associated one-step linear difference equation defines a multiderivative method of order 2R.

Referring to the methods obtainable by Lemma 1, if in particular the polynomial P_2R is defined as in (8), we obtain the class of methods we are interested in here. They can be written as in (6) with the coefficients reported in Table 1 for R = 1, …, 5. In particular, for R = 1 and R = 2 we obtain the trapezoidal rule and the Euler-Maclaurin method of order 4, respectively.
These methods were originally introduced in the spline collocation context, dealing in particular with splines with multiple knots [1,14], as we will show in Section 4. We call them BSHO methods since we will show that in such a context they can be obtained by working with the standard B-spline basis. The stability function of the R-th one-step symmetric BSHO method is the rational function corresponding to the (R, R)-Padé approximation of the exponential function, as for the Gauss Runge-Kutta method of the same order [17]. Then there exists a positive constant h_0 such that, if h < h_0 and {u_i}_{i=0}^N denotes the related numerical solution produced by the R-th one-step symmetric BSHO method in (6), it is u_i − y(t_i) = O(h^{2R}), i = 0, …, N.
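The claim on the stability function can be checked directly for the order-4 scheme (R = 2): applied to y' = λy, one step produces exactly R(hλ) u_n, with R the (2,2) Padé approximant of the exponential. The sketch below, with names of our choosing, verifies the expected fifth-order local error of this approximant.

```python
from math import exp

def pade22(z):
    # (2,2) Pade approximant of e^z: stability function of the order-4 scheme
    return (1 + z/2 + z**2/12) / (1 - z/2 + z**2/12)

lam = -1.0
errs = [abs(pade22(h*lam) - exp(h*lam)) for h in (0.1, 0.05)]
rate = errs[0] / errs[1]   # local error O(h^5): halving h gives a ratio near 2^5 = 32
```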

Conjugate Symplecticity of the symmetric one step BSHO methods
Following the lines of the proof given in [12], in this section we prove that one-step symmetric BSHO methods are conjugate symplectic schemes. Without loss of generality, in this section we suppose that f does not explicitly depend on t, that is f = f(y(t)). The following lemma, proved in [18], is the starting point of the proof.
Lemma 2. Assume that problem (1) admits a quadratic first integral Q(y) = y^T S y (with S a symmetric matrix) and that it is solved by a B-series integrator Φ_h(y). Then the following properties are equivalent: (a) Φ_h(y) has a modified first integral of the form Q̃(y) = Q(y) + O(h); (b) Φ_h(y) is formally conjugate to a symplectic B-series method.
Observe that this lemma considers B-series integrators. The concept of B-series is well explained in [19, ch. 5] and is used in [20] to prove the conjugate symplecticity of symmetric linear multistep methods. With similar arguments we prove the following theorem.
Theorem 1. The map y_1 = Φ_h(y_0) associated with the one-step method (6)-(9) admits a B-series expansion and satisfies property (a) of Lemma 2.
Proof. By introducing the two characteristic polynomials ρ and σ of the trapezoidal rule and the shift operator E(y_n) := y_{n+1}, the R-th method described in (6) can be written compactly as in (10). We now consider a function v(t), a stepsize h and the shift operator E_h(v(t)) := v(t + h), and we look for a continuous function v(t) that formally satisfies (10); that is, we use the relation E_h = e^{hD}, where D denotes the differentiation operator. By multiplying both sides of the previous equation by D ρ(e^{hD})^{−1} we obtain (11), that is (12). Now, since the Bernoulli numbers define the Taylor expansion of the function z/(e^z − 1), and b_0 = 1, b_1 = −1/2 and b_j = 0 for the other odd j, the expansion can be written in even powers of hD. With some algebra we arrive at the relation (13), with coefficients δ_j. Observe that δ_j = 0, j = 1, …, R − 1, since the method is of order 2R (see [19, Theorem 3.1, page 340]).
Therefore, we derive the modified initial value differential equation associated with the numerical scheme by coupling (13) with the initial condition v(t_0) = y_0. Thus we can conclude that the one-step symmetric BSHO methods are B-series methods. The proof of (a) follows exactly the same steps as the analogous proofs in Theorem 1 of [12] and in [19, Theorem 4.10, page 591].
In the last column of Table 1 we report the coefficients δ_R for R ≤ 5. We can observe that the truncation error in the modified initial value problem is smaller than that of the EMHO method of the same order, which is equal to b_i/i! (see [12]). The conjugate symplecticity property of a numerical scheme makes it suitable for the solution of Hamiltonian problems, since a conjugate symplectic method has the same long-time behaviour as a symplectic one. A well-known pair of conjugate methods is composed of the trapezoidal and midpoint rules. Observe that the trapezoidal rule belongs to both the BSHO and EMHO classes of multiderivative methods, and its characteristic polynomial plays an important role in the proof of their conjugate symplecticity.
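The practical consequence of conjugate symplecticity, bounded energy error over long times, can be observed already for the order-2 BSHO method (the trapezoidal rule). The sketch below, whose test problem and parameters are our own choice, integrates the pendulum and records the largest energy deviation.

```python
import numpy as np

# Pendulum: H(q, p) = p^2/2 - cos q, f(y) = (p, -sin q)
f = lambda y: np.array([y[1], -np.sin(y[0])])
H = lambda y: 0.5*y[1]**2 - np.cos(y[0])

def trap_step(y, h, iters=50):
    z = y + h*f(y)                       # explicit Euler predictor
    for _ in range(iters):               # fixed-point iteration on the implicit step
        z = y + 0.5*h*(f(y) + f(z))
    return z

y = np.array([np.pi/2, 0.0])
h, H0, drift = 0.1, H(np.array([np.pi/2, 0.0])), 0.0
for _ in range(2000):                    # about 27 pendulum periods
    y = trap_step(y, h)
    drift = max(drift, abs(H(y) - H0))
# drift stays small and bounded: no secular growth of the energy error
```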

The spline extension
In his PhD thesis [1], Loscalzo proves that a polynomial of degree 2R interpolating both u_n and u_{n+1} together with all the derivative values u_n^(k) and u_{n+1}^(k), k = 1, …, R, exists if and only if (6) is fulfilled with the β coefficients defined as in (9). The piecewise polynomial whose restriction to [t_n, t_{n+1}] coincides with this polynomial is clearly a C^R spline of degree 2R with breakpoints at the mesh points, and it collocates the differential equation at the mesh points with multiplicity R.² Actually this piecewise representation of the spline is the one adopted in [1]. Here we are interested in deriving its more compact B-spline representation. Besides being more compact, this also allows us to clarify the connection between BSHO methods and the BS methods previously introduced in [2,15,16].
The mentioned result proved by Loscalzo is equivalent to saying that any C^R spline of degree 2R with breakpoints at the mesh points fulfills the relation in (6) if (9) holds. In turn, this is equivalent to saying that such a relation holds for any element of the B-spline basis of S_2R, the space of C^R splines of degree 2R with breakpoints at t_i, i = 0, …, N, where t_0 < ··· < t_N = t_0 + T. Since we refer to the B-spline basis, we need to introduce the associated extended knot vector T, in which all the inner breakpoints have multiplicity R and both t_0 and t_N have multiplicity 2R + 1. The associated B-spline basis is denoted as B_i, i = −2R, …, (N − 1)R, and the dimension of S_2R as D, with D := (N + 1)R + 1.
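The knot-vector construction just described can be sketched in a few lines; the code below (with names of our choosing) builds the extended knot vector and checks that the standard dimension formula, number of knots minus degree minus one, reproduces D = (N + 1)R + 1.

```python
# Extended knot vector: endpoints with multiplicity 2R+1,
# inner breakpoints with multiplicity R (as described in the text).
def extended_knots(breakpoints, R):
    T = [breakpoints[0]] * (2*R + 1)
    for t in breakpoints[1:-1]:
        T += [t] * R
    T += [breakpoints[-1]] * (2*R + 1)
    return T

N, R = 5, 3
T = extended_knots([i / N for i in range(N + 1)], R)
dim = len(T) - (2*R) - 1   # standard B-spline formula: #knots - degree - 1
# dim equals D = (N+1)R + 1, the stated dimension of S_2R
```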
Thus, setting α := (−1, 1)^T ∈ R², and considering the local support of the B-spline basis, the vector (α^T, β^(1)T, …, β^(R)T)^T can also be characterized as the unique solution of the linear system (15), where e_{2R+2} = (0, …, 0, 1)^T ∈ R^{2R+2}, e := (1, 1)^T, and the matrix G^(n) in (16)-(17) collects the values B_i and B_i^(j), with B_i^(j) denoting the j-th derivative of B_i. In the following proposition the nonsingularity of the matrix G^(n) is proved.

Proposition 1. The matrix G^(n) defined in (16) and associated with the B-spline basis of S_2R is nonsingular.

² This means that also the equations y^(j) = f^(j−1), j = 2, …, R, are fulfilled at the mesh points.

Proof. First of all, let us observe that the principal submatrix of G^(n) of order 2R + 1 is nonsingular because of the Karlin-Ziegler theorem on osculatory spline interpolation at the breakpoints; see for example [21] or Theorem 6 reported in the Appendix of [16]. Such a theorem requires that all successive derivatives are considered at each breakpoint and that the sum of any breakpoint multiplicity with the maximal order of derivative there considered is less than or equal to the spline degree. This is true for our principal submatrix, since the multiplicity is R at both t_n and t_{n+1} and the maximal derivative order is equal to R at t_n and to R − 1 at t_{n+1}. So, referring to G^(n)T, it remains to prove that its last column cannot be a linear combination of the other columns. Actually this is not possible because, considering that the restriction to [t_n, t_{n+1}] of any function in S_2R is a polynomial of degree 2R, in the opposite case there would exist a polynomial P of degree 2R such that P(t_n) = P(t_{n+1}) = 0, P'(t_n) = P'(t_{n+1}) = −1, and P^(j)(t_n) = P^(j)(t_{n+1}) = 0, j = 2, …, R. This is not admissible, recalling the result proved by Loscalzo and considering that such a polynomial does not fulfill (8).
Note that this different way of defining the coefficients of the R-th BSHO scheme is formally analogous to that adopted in [16] for defining a BS method on a general partition. However, in this case the coefficients of the scheme do not depend on the mesh distribution, so there is no need to determine them by solving the above linear system. On the other hand, having proved that the matrix G^(n) is nonsingular will be useful in the following for determining the B-spline form of the associated spline extension.
Thus, let us now see how the B-spline coefficients of the spline in S_2R associated with the numerical solution generated by the R-th BSHO method can be efficiently obtained, considering that the interpolation and collocation conditions in (18) have to be imposed. We are interested in deriving the B-spline coefficients c_i, i = −2R, …, (N − 1)R, of s_2R. With respect to such a representation, all the conditions in (18) can be rewritten in the compact matrix form (19), where c = (c_{−2R}^T, …, c_{(N−1)R}^T)^T ∈ R^{mD}, with c_j ∈ R^m, I_m is the identity matrix of size m × m and each A_i is an (R + 1)-banded matrix of size (N + 1) × D whose entries are defined in terms of B-spline values and derivatives at the mesh points. Following the lines of the analogous proof given in [2] for BS methods, we could prove that the rectangular linear system in (19) always has a unique solution if the entries of the vector on its right-hand side satisfy (6) with the β coefficients given in (9). For brevity we do not report such a theorem here and move directly to the strategy we adopt for an efficient computation of the B-spline coefficients of s_2R.
Proof. The result follows by considering that the local truncation error of the methods is null.

Efficient spline computation
Concerning the computation of the spline coefficient vectors, the unique solution of (19) can be computed with several different strategies, which can have very different computational costs and can produce results with different accuracy when implemented in finite arithmetic. Here we follow the local strategy used in [2]. Taking into account the banded structure of A_i, i = 1, …, R + 1, we can verify that (19) implies the local relations (21), where u = (u_0, …, u_N)^T and c^(i) := (c_{(i−3)R}^T, …, c_{(i−1)R}^T)^T ∈ R^{m(2R+1)}, i = 1, …, N. As a consequence, we can also write (22), where ĉ^(i) := (c^(i)T, 0^T)^T ∈ R^{m(2R+2)}. Now, for all integers r < 2R + 2, we can define R + 1 auxiliary vectors α̂^(R)_{i,r}, β̂^(R)_{l,i,r} ∈ R², l = 1, …, R, as the solution of the linear system (23), where e_r is the r-th unit vector in R^{2R+2} (that is, the auxiliary vectors form the r-th column of the inverse of G^(i)). Then we can write (24) and, considering (22), we can conclude that by solving all the systems (23) for i = 1, …, N, r = r_1(i), …, r_2(i), all the spline coefficients are obtained. Note that, with this approach, we solve D auxiliary systems whose size does not depend on N, using only N different coefficient matrices. Furthermore, we can also observe that only the information at t_{i−1} and t_i is necessary to compute c_{(i−3)R+r−1}. Thus the spline can be dynamically extended as soon as the numerical solution is computed at a new time value. This is clearly of interest for a dynamical adaptation of the time step.
In the following subsection, relying on its B-spline representation, we prove that the convergence order of s_2R to y is equal to that of the numerical solution. This result was already available in [1] (see Theorem 4.2 in that reference), but it was proved there with different, longer arguments.

Spline convergence
Let us assume the following quasi-uniformity requirement for the mesh,

    M_l ≤ h_i / h_{i+1} ≤ M_u,  i = 0, …, N − 2,  (25)

with M_l and M_u not depending on h. Then we can easily prove the following theorem (where for notational simplicity we restrict to m = 1), whose statement is analogous to that on the convergence of the spline extension associated with BS methods [2].
Theorem 3. Let us assume that the assumptions on f made in Corollary 1 hold and that (25) holds. Then the spline extension s_2R approximates the solution y of (1) with an error of order O(h^{2R}), where h := max_{i=0,…,N−1} h_i.

Proof.
Let s̃_2R denote the spline belonging to S_2R obtained by quasi-interpolating y with one of the rules introduced in formula (5.1) of reference [22], based on point evaluation functionals. From [22, Theorem 5.2], under the quasi-uniformity assumption on the mesh distribution, we can derive that such a spline approximates y with maximal approximation order also with respect to the derivatives, that is,

    ‖s̃_2R^(j) − y^(j)‖_∞ ≤ K h^{2R+1−j},  j = 0, …, R,  (26)

where K is a constant depending only on R, M_l and M_u.
On the other hand, by the triangle inequality we can bound ‖s_2R − y‖_∞ by ‖s_2R − s̃_2R‖_∞ + ‖s̃_2R − y‖_∞. Thus we need to consider the first term on the right-hand side of this inequality. In this regard, because of the partition-of-unity property of the B-splines, this term is bounded by the maximal difference between the coefficient vectors c := (c_{−2R}, …, c_{(N−1)R})^T and c̃ := (c̃_{−2R}, …, c̃_{(N−1)R})^T. Now, for any function g ∈ C^{2R}[t_0, t_0 + T], we can define suitable linear functionals built with the vectors (α̂^(R)T_{i,r}, β̂^(R)T_{1,i,r}, …, β̂^(R)T_{R,i,r}) introduced in the previous section, considering that s̃_2R, as well as any other spline belonging to S_2R, can be written in B-spline form with coefficients given by these functionals, and that the vector (α̂^(R)T_{i,r}, β̂^(R)T_{1,i,r}, …, β̂^(R)T_{R,i,r})^T is defined in (23) as the r-th column of the inverse of the matrix G^(i). On the other hand, the entries of such a nonsingular matrix do not depend on h but, because of the locality of the B-spline basis and of the R-th multiplicity of the inner knots, only on the ratios h_j/h_{j+1}, j = i − 1, i, which are uniformly bounded from below and from above because of (25). Thus we can conclude that there exists a constant C, depending on M_l, M_u and R, bounding the norms of these columns, which implies that the same is true for any one of the mentioned coefficient vectors, and hence a uniform bound holds for all the coefficient differences. Taking into account the result reported in Corollary 1 besides (26), the desired estimate for the coefficients follows.

Approximation of the derivatives
The computation of the derivatives is quite expensive, and thus methods not requiring derivative values are usually preferred. So, as for any other multiderivative method, it is of interest to associate with BSHO methods an efficient way to compute the derivative values at the mesh points.
Several possibilities can be exploited:
• use generic symbolic tools, if the function f is known in closed form;
• use an automatic differentiation tool, like ADiGator, a MATLAB Automatic Differentiation Tool [23];
• use the Infinity Computer Arithmetic, if the function f is known only as a black box [8,9];
• approximate the derivatives with, for example, finite differences.
In this section we just show the relation of these methods with a class of Boundary Value Methods (BVMs), the Extended Trapezoidal Rules (ETRs), linear multistep methods used with boundary conditions [24]. Similar relations have been found in [25] between HO methods and the equivalent class of super-implicit methods. The ETRs can be derived from BSHO methods when the derivatives are approximated by finite differences. Let us consider the order 4 method with R = 2. In this case the derivatives u_i^(2) can be approximated using second-order central differences of the values f_i := f(t_i, u_i); substituting and rearranging, we recover the ETR of order 4. With similar arguments, for the method of order 6, R = 3, approximating the derivatives with order 4 finite differences and rearranging, we obtain the sixth-order ETR method. This relation allows us to derive a continuous extension of the ETR schemes using the continuous extension of the BSHO method, just substituting the derivatives with the corresponding approximations. Naturally, a change of the stepsize will now change the coefficients of the linear multistep schemes. Observe that BVMs have been efficiently used for the solution of boundary value problems in [26] and the BS methods are also in this class [15].
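The substitution behind the order-4 case can be illustrated with the second-order central difference (f_{i+1} − f_{i−1})/(2h) replacing f'_i on a uniform grid; in the sketch below the sampled function is a stand-in of our choosing, not the paper's f.

```python
import numpy as np

def central_diff(fm1, fp1, h):
    # second-order central difference for f'(t_i) on a uniform grid
    return (fp1 - fm1) / (2*h)

t, h = 0.7, 1e-3
approx = central_diff(np.sin(t - h), np.sin(t + h), h)
err = abs(approx - np.cos(t))    # error behaves like h^2/6 * |f'''|
```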
It has been proved in [20] that symmetric linear multistep methods are conjugate symplectic schemes. Naturally, in the context of linear multistep methods used with only initial conditions this property applies only to the trapezoidal method; but when we solve boundary value problems the correct use of a linear multistep formula is with boundary conditions, and this makes the corresponding formulas stable, with region of stability equal to C⁻ (see [24]). The conjugate symplecticity of the methods is the motivation for their good behavior shown in [27] and in [28] when used in block form, with a sufficiently large block, for the solution of conservative problems.

Numerical examples
The numerical examples reported here have two main purposes. The first one is to show the good behavior of BSHO methods for Hamiltonian problems, showing both the linear growth of the error for long-time computation and the conservation of the Hamiltonian. To this end we compare the methods with the symplectic Gauss Runge-Kutta methods and with the conjugate symplectic EMHO methods. On the other hand, we are interested in showing the convergence properties of the spline continuous extensions. Observe that the availability of a continuous extension of the same order as the method is an important property. In fact, for high order methods, and especially for superconvergent methods like the Gauss ones, it is very difficult to find a good continuous extension: the natural continuous extensions of these methods do not keep the same order of accuracy without adding extra stages [29]. Observe also that a good continuous extension is an important tool, for example for event location.
We report results of our experiments for BSHO methods of order 6 and 8. We recall that the order 2 BSHO method corresponds to the well-known trapezoidal rule, whose conjugate symplecticity is well known (see for example [19]) and whose continuous B-spline extension has already been developed in [2]. The order 4 BSHO method also belongs to the EMHO class and has been analyzed in detail in [12]. The first example is the classical Kepler problem, which describes the motion of two bodies subject to Newton's law of gravitation. This problem is a completely integrable Hamiltonian nonlinear dynamical system with two degrees of freedom (see [30] for details). The Hamiltonian function describes the motion of the body that is not located in the origin of the coordinate system. This motion is an ellipse in the q_1-q_2 plane, whose eccentricity e is set through the starting values, with period µ := 2π. First integrals of this problem are: the total energy H, the angular momentum M(q_1, q_2, p_1, p_2) := q_1 p_2 − q_2 p_1,
and the Lenz vector A := (A_1, A_2, A_3)^T. Only three of the four first integrals are independent, so, for example, A_1 can be neglected.
As in [12] we set e = 0.6 and h = µ/200, and we integrate the problem over 10 3 periods.
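The Kepler setup just described can be sketched as follows. The starting values below are the standard ones placing the body at perihelion of an ellipse of eccentricity e; this is an assumption on our part, since the text does not list them explicitly, but they are consistent with the invariants above and with the period 2π.

```python
import numpy as np

def invariants(y):
    q1, q2, p1, p2 = y
    H = 0.5*(p1**2 + p2**2) - 1.0/np.hypot(q1, q2)   # total energy
    M = q1*p2 - q2*p1                                # angular momentum
    return H, M

e = 0.6
# standard perihelion starting values for eccentricity e (our assumption)
y0 = np.array([1 - e, 0.0, 0.0, np.sqrt((1 + e)/(1 - e))])
H0, M0 = invariants(y0)
# for these data H0 = -1/2 and M0 = sqrt(1 - e^2) = 0.8
```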
Setting y := (q_1, q_2, p_1, p_2)^T, the error ‖y_m − y_0‖_1 in the solution is computed at specific times fixed equal to multiples of the period, that is at t_m = 2πm, with m = 1, 2, …; the errors in the invariants have been computed by using also another point inside each period. Figure 1 reports the obtained results for the sixth and eighth order BSHO (dotted lines, BSHO6, BSHO8), the sixth order EMHO (solid line, EMHO6) and the sixth and eighth order Gauss Runge-Kutta (GRK) (dashed lines, GRK6, GRK8) methods. The top-left picture shows the absolute error of the numerical solution; the top-right picture shows the error in the Hamiltonian function; the error in the angular momentum is drawn in the bottom-left picture, while the bottom-right picture concerns the error in the second component of the Lenz vector. As expected from a symplectic or a conjugate symplectic integrator, we can see a linear drift in the error ‖y_m − y_0‖_1 as the time increases. As for the other considered methods, we can see that BSHO methods guarantee a near conservation of the Hamiltonian function and of the angular momentum. This latter quadratic invariant is precisely conserved (up to machine precision) by GRK methods due to their symplecticity. We observe also that, as expected, the error for the BSHO6 method is 3/10 of the error of the EMHO6 method.
To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize h = µ/N, N = 100. We computed a reference solution using the order 8 method with a halved stepsize and we computed the maximum absolute error on the doubled grid.
The results are reported in Table 2 for the solution and the first derivative and clearly show that the continuous extension respects the theoretical order of convergence.

Non linear pendulum problem
As a second example, we consider the dynamics of a pendulum under the influence of gravity. This dynamics is usually described in terms of the angle q that the pendulum forms with its stable rest position:

    q'' + sin q = 0,

where p = q' is the angular velocity. The Hamiltonian function associated with (28) is H(q, p) = p²/2 − cos q. An initial condition (q_0, p_0) such that |H(q_0, p_0)| < 1 gives rise to a periodic solution y(t) = (q(t), p(t))^T corresponding to oscillations of the pendulum around the straight-down stationary position. In particular, starting at y_0 = (q_0, 0)^T, the period of oscillation may be expressed in terms of the complete elliptic integral of the first kind K as µ = 4K(sin(q_0/2)).
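The period value used below can be checked numerically with SciPy; note that `scipy.special.ellipk` takes the parameter m = k², so K(sin(q_0/2)) is evaluated at sin²(q_0/2).

```python
import numpy as np
from scipy.special import ellipk   # complete elliptic integral, parameter m = k^2

q0 = np.pi / 2
mu = 4 * ellipk(np.sin(q0 / 2)**2)   # mu = 4 K(sin(q0/2))
# reproduces the period 7.416298709205487 used in the experiments
```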
For the experiments we choose q_0 = π/2; thus the period µ is equal to 7.416298709205487. We use the sixth and eighth order BSHO and GRK methods and the sixth order EMHO method with stepsize h = µ/20 to integrate the problem over 2·10⁴ periods. Setting y = (q, p)^T, again the errors ‖y_m − y_0‖ in the solution are evaluated at times multiple of the period µ, that is for t_m = µm, with m = 1, 2, …; the energy error H(y_n) − H(y_0) has been computed at the mesh points t_n = 11hn, n = 1, 2, …. Figure 2 reports the obtained results. In the left plot we can see that, for all the considered methods, the error in the solution grows linearly as time increases. A near conservation of the energy function is observable in both the pictures on the right. The amplitudes of the bounded oscillations are similar for all the considered methods, confirming the good long-time behavior of BSHO methods for the problem at hand. To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize h = µ/N, N = 10. We computed a reference solution using the order 8 method with a halved stepsize and we computed the maximum absolute error on the doubled grid. The results are reported in Table 3 for the solution and the first derivative and clearly show, also for this example, that the continuous extension respects the theoretical order of convergence.

Figure 1. Kepler problem: results for the sixth and eighth order BSHO methods (dotted lines), the sixth order Euler-Maclaurin method (solid line) and the sixth and eighth order Gauss methods (dashed lines).

Figure 2. Nonlinear pendulum problem: results for the Hermite-Obrechkoff methods of order six and eight (dotted lines), for the sixth order Euler-Maclaurin method (solid line) and the Gauss methods (dashed lines) applied to the pendulum problem. The upper-right and bottom-right plots report the error in the Hamiltonian function for the sixth order and eighth order integrators, respectively.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 24 May 2018 doi:10.20944/preprints201805.0341.v1
If y ∈ S 2R and so in particular if y is a polynomial of degree at most 2R, then