Rosenbrock Type Methods for Solving Non-Linear Second-Order in Time Problems

In this work, we develop a new class of methods created to numerically solve non-linear second-order in time problems in an efficient way. These methods are of the Rosenbrock type, and they can be seen as a generalization of Rosenbrock methods applied to second-order in time problems which have been previously transformed into first-order in time problems. As they also follow the ideas of Runge–Kutta–Nyström methods when solving second-order in time problems, we have called them Rosenbrock–Nyström methods. When solving non-linear problems, Rosenbrock–Nyström methods present less computational cost than implicit Runge–Kutta–Nyström ones, as the non-linear systems which arise at every intermediate stage when Runge–Kutta–Nyström methods are used are replaced with sequences of linear ones.


Introduction
In the literature, there are many non-linear second-order in time problems which must be numerically solved by means of appropriate methods. It is usual to use numerical methods designed for first-order in time problems by transforming the initial problem into a first-order one. In this way, we have several options to choose from, such as Runge-Kutta methods (RK methods), fractional step Runge-Kutta methods (FSRK methods) or exponential integrators. These methods are a good option if the problem is a linear one, but they present a high computational cost when the problem we are solving is a non-linear one, as we have to solve a non-linear problem at every intermediate stage (see [1][2][3][4]). In this case, a good option is to use Rosenbrock methods or exponential Rosenbrock-type methods [5][6][7], for example. However, when converting the original problem to a first-order one, the dimension of the problem doubles, so the computational cost increases.
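To fix ideas, the linearly implicit mechanism that Rosenbrock methods exploit can be sketched with the simplest representative, the one-stage Rosenbrock (linearly implicit) Euler method. This is a minimal illustration for a first-order problem, not one of the schemes of this paper, and the stiff scalar test problem is purely illustrative:

```python
import numpy as np

def rosenbrock_euler_step(f, jac, u, tau):
    """One step of the linearly implicit (Rosenbrock) Euler method:
    solve (I - tau*J) k = f(u_n), then set u_{n+1} = u_n + tau*k.
    A single linear solve per step replaces a full Newton iteration."""
    n = u.size
    k = np.linalg.solve(np.eye(n) - tau * jac(u), f(u))
    return u + tau * k

# Stiff linear test problem u' = -50 u (exact solution decays to 0).
f = lambda u: -50.0 * u
jac = lambda u: np.array([[-50.0]])

u = np.array([1.0])
tau = 0.1  # well beyond the explicit Euler stability limit tau < 2/50
for _ in range(10):
    u = rosenbrock_euler_step(f, jac, u, tau)
```

Note that with this step size an explicit Euler iteration would blow up, while the linearly implicit step reproduces the correct monotone decay.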
Another possibility is to use numerical integrators specially derived to numerically solve second-order in time problems. In this way, we can use, for example, Runge-Kutta-Nyström methods (RKN methods) or fractional step Runge-Kutta-Nyström methods (FSRKN methods) [2,8,9]. When we use this type of method, we have to choose whether to use explicit or implicit methods. With explicit methods, there may be stability problems when the problem we are solving is a stiff one. On the other hand, if we choose an implicit method, we can select a method with an infinite stability interval, but with a high computational cost [8] when the problem is non-linear and/or multidimensional in space. In order to avoid the high computational cost that implicit RKN methods present when multidimensional problems in space are solved, FSRKN methods were developed and studied in [9]. The idea of these methods is to split the spatial operator in a suitable way so that, at every intermediate stage, the problem to be solved is simpler in a certain sense than the original one.
In order to avoid all the previous drawbacks when solving a non-linear second-order in time problem, in this paper we present a new class of methods, which we call Rosenbrock-Nyström methods. These methods avoid the non-linear systems which arise when RKN methods are used by replacing them with sequences of linear ones. Rosenbrock-Nyström methods arise in a natural way as a generalization of Rosenbrock methods when the latter are applied to second-order in time equations that have been previously transformed into first-order in time problems.
In the literature, the construction of methods of the Rosenbrock type to numerically solve second-order non-linear systems of ordinary differential equations has already been studied [10]. However, the methods presented in that paper differ from the ones presented here. We remark here on three of the most important differences between the methods in [10] and our methods when solving a problem such as y''(t) = f(t, y), y(0) = y_0, y'(0) = v_0, with 0 ≤ t ≤ T < ∞. The first one is that, to use the methods presented in [10], we have to define u = (y, v, t)^T, with v = y'(t), and then we have to convert the problem to a first-order one in the following way: u' = g(u) = (v, f(t, y), 1)^T, u(0) = (y_0, v_0, t_0)^T, while with our methods we do not need to convert the problem to a first-order one. The second difference is that in [10] we have to solve two linear systems at each intermediate stage, instead of the one linear system we have to solve at each intermediate stage when using the methods presented here. The last one is that in the mentioned paper, the operator g(u) and its Jacobian have to be evaluated at every intermediate stage. When using our method, we evaluate the operator f(t, y) at every intermediate stage, but we use fixed values of f_t(t, y) and f_y(t, y) at every time step, so the number of function evaluations per intermediate stage is smaller with our methods.
In this article, we show the development of Rosenbrock-Nyström methods, as well as the conditions that must be satisfied to obtain the desired classical order (up to order four) and the main ideas to guarantee stability when linear problems are solved. In addition, we show some numerical experiments that demonstrate the good behavior of these new methods.
This paper is structured as follows: in the next section, we give a brief description of Rosenbrock-Nyström methods, together with their development. In Section 3, we describe the stability requirements that these methods should satisfy when integrating linear problems. In Section 4, we deal with the conditions that the coefficients of the method should satisfy in order to have up to classical order four. The construction of such methods is studied in [11]. Finally, in Section 5, we present some numerical experiments in order to test such methods.

Development of Rosenbrock-Nyström Methods
Non-linear second-order in time problems can be written in an abstract form as follows:

y''(t) = f(t, y(t)), t ∈ (0, T], y(0) = y_0, y'(0) = v_0, (1)

where, typically, y(t) belongs to H, a Hilbert space of functions defined in a certain bounded domain Ω ⊆ R^M, integer M ≥ 1, with smooth boundary Γ. This formulation covers many different problems: partial differential equations, ordinary differential equations, etc.
Example 1. Let us show here two problems that will be solved in the numerical experiments (Section 5) and can be solved by using Rosenbrock-Nyström methods.
The first problem is a modification of the non-linear wave propagation problem suggested in [12], chosen such that the exact solution is given by u_j(t) = sin(2πj/(N+1)) cos(t). Parameter λ controls the stiffness of the problem.

Example 2.
The second example is the following non-linear Euler-Bernoulli equation, with h(t, x) chosen such that the exact solution is u(t, x) = (x² − 1)³ cos(t + x). Operator A is such that Au = u_xxxx. This problem can be discretized in space by taking, for example, a pseudo-spectral discretization. When solving this problem with an explicit Runge-Kutta-Nyström method, we have to impose very restrictive conditions on the time step size in order to guarantee stability.
When we solve a problem such as (1) with a Rosenbrock-Nyström method, the numerical approximation to the exact solution and its derivative, (y(t_n), v(t_n)), is given by (y_n, v_n), where t_n = t_0 + nτ, with τ the time step size. The numerical approximation (y_n, v_n) that we propose is calculated as in (3), where K_{n,i}, i = 1, . . . , s, are the intermediate stages and e = (1, . . . , 1)^T. These intermediate stages are given by (4). We use the notation f_t(t, y) and f_y(t, y) to refer to ∂f/∂t(t, y) and ∂f/∂y(t, y), respectively. When the problem we are solving is autonomous, that is, of the form y''(t) = f(y), y(0) = y_0, y'(0) = v_0, the equations which determine the method are given by (5). Notice that in this case, we have a simplification of the main problem (1), and the equations can be obtained by considering that f_t(y) = 0.
Similarly to what happens with other classical methods such as RK methods, RKN methods, Rosenbrock methods, etc., the coefficients of these methods can be written in a tableau, where we will assume that α_i = ∑_{j=1}^{i−1} α_{ij}. Notice that at every intermediate stage, the problem to be solved is a linear one, so the computational cost is reduced compared with the equations that implicit RKN methods provide when solving this type of problem. When we select the values γ_ii = γ, i = 1, . . . , s, then, at every intermediate stage K_{n,i}, we have to solve a linear problem of the form (I − τ²γ f_y(t_n, y_n))K_{n,i} = T_i, where I denotes the identity operator and T_i is the term in (4) that does not depend on K_{n,i}, i = 1, . . . , s. Therefore, the computational cost is reduced, as I − τ²γ f_y(t_n, y_n) remains constant for all the intermediate stages.
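The "factor once, solve s times" structure behind this cost reduction can be sketched as follows. This is a generic illustration: the right-hand-side assembly `make_rhs` and the toy Jacobian are placeholders, not the actual T_i of any specific method of this paper:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def run_stages(fy, tau, gamma, s, make_rhs):
    """Solve the s stage systems (I - tau^2*gamma*fy) K_i = T_i.
    Since gamma_ii = gamma for all i, the matrix is the same for every
    stage: factor it once, then each stage costs only a back-substitution."""
    n = fy.shape[0]
    lu, piv = lu_factor(np.eye(n) - tau**2 * gamma * fy)  # one O(n^3) factorization
    Ks = []
    for i in range(s):
        T = make_rhs(i, Ks)                 # T_i may depend on earlier stages
        Ks.append(lu_solve((lu, piv), T))   # cheap O(n^2) solve per stage
    return Ks

# Toy check with a symmetric negative semi-definite Jacobian and
# hypothetical right-hand sides.
fy = np.array([[-2.0, 1.0], [1.0, -2.0]])
Ks = run_stages(fy, tau=0.1, gamma=0.5, s=2,
                make_rhs=lambda i, Ks: np.ones(2) + sum(Ks, np.zeros(2)))
```

By contrast, an implicit RKN method would require a full non-linear solve, typically several Jacobian evaluations and factorizations, per stage.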
In order to guarantee the solvability of the intermediate stages, we will assume that, for every t > 0, f_y(t, y(t)) is such that (I − µ f_y(t, y(t)))⁻¹ exists and is bounded for every µ ≥ 0.
Remark 1. When, for every t > 0, f_y(t, y(t)) is self-adjoint and negative semi-definite, the solvability and boundedness of the intermediate stages are guaranteed because of the following:

• In the case of f(t, u(t)) being a space differential operator, f_y(t, y(t)) is the infinitesimal generator of a C_0-semigroup of type ω̃ ≤ 0, so (µI − f_y(t, y(t)))⁻¹ exists and is bounded for every µ ≥ ω̃ [13].

• In the case of f(t, u(t)) : R^{n+1} → R^n being a regular function, then, for every t > 0, f_y(t, y(t)) is a symmetric negative semi-definite matrix, so (I − µ f_y(t, y(t)))⁻¹ exists for every µ ≥ 0.
Let us see now the way in which these methods have been developed. The way of defining these new methods is the natural one, since they can be obtained from Rosenbrock methods applied to problem (1) when it is transformed into a first-order in time problem. Let us remember that the coefficients of Rosenbrock methods are given by a Butcher-type array, and that, when solving a first-order problem such as (7), u'(t) = F(t, u(t)), u(0) = u_0, these methods provide equations for the numerical approximation u_n to u(t_n) and for the intermediate stages Q_{n,i}, i = 1, . . . , s, where t_n = t_0 + nτ, with τ the time step size.
To solve problem (1) with a Rosenbrock method, we first write it as a first-order one, by defining v(t) = y'(t). In this way, we define u(t) = (y(t), v(t))^T and, therefore, we have a problem like (7), with F(t, u) = (v(t), f(t, y))^T. The initial condition is u(0) = u_0 with u_0 = (y_0, v_0)^T. We apply a Rosenbrock method to this problem, using the notation Q_{n,i} = (Q^y_{n,i}, Q^v_{n,i}). We operate on the equations that give the intermediate stages, replacing Q^v_{n,j} with its expression in the equations for Q^y_{n,j}. Now, let us assume that the Rosenbrock method satisfies conditions (8). The first two conditions are usual restrictions satisfied by many Rosenbrock methods, and the third one is the condition to have classical order 1. Then, if we denote K_{n,i} = Q^y_{n,i}, i = 1, . . . , s, and define the coefficients of the new method accordingly, what we obtain is precisely Equation (4). Furthermore, we can construct Rosenbrock-Nyström methods that do not come from Rosenbrock ones. This fact gives much more freedom to obtain the coefficients of the desired methods. The proposed methods with classical order 3 and 4 that are used in the numerical experiments have been obtained without using existing Rosenbrock ones.
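The transformation to first order, and the dimension doubling it entails, can be sketched in code. The oscillator example is hypothetical; the block structure of the Jacobian is the point:

```python
import numpy as np

# Rewriting y'' = f(t, y), y in R^m, as u' = F(t, u) with u = (y, v):
def F(t, u, f, m):
    y, v = u[:m], u[m:]
    return np.concatenate([v, f(t, y)])

def F_jacobian(t, u, fy, m):
    """Block structure [[0, I], [f_y, 0]]: the dimension doubles, which is
    the extra cost a plain Rosenbrock method pays on this formulation."""
    J = np.zeros((2 * m, 2 * m))
    J[:m, m:] = np.eye(m)
    J[m:, :m] = fy(t, u[:m])
    return J

# Hypothetical example: the linear oscillator y'' = -y.
f = lambda t, y: -y
fy = lambda t, y: -np.eye(y.size)
u0 = np.array([1.0, 0.0])          # y(0) = 1, v(0) = 0
Fu = F(0.0, u0, f, 1)
J = F_jacobian(0.0, u0, fy, 1)
```

Eliminating the v-components of the stages, as done above for Q^v_{n,j}, is what lets Rosenbrock-Nyström methods work with m-dimensional linear systems instead of 2m-dimensional ones.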

Stability When Solving Linear Ordinary Differential Equations
Following the ideas given for Rosenbrock methods in [6], in this part we deal with the stability of Rosenbrock-Nyström methods when they are applied to a simplified problem such as (9), where B is a given symmetric positive definite matrix of order m ≥ 1 and U(t), U_0 and V_0 ∈ R^m. Here, we study the stability in the energy norm, which is the natural norm for the study of the well-posedness of problem (9), with ∥·∥ being the Euclidean norm in R^m. When solving problem (9) with a Rosenbrock-Nyström method, we obtain a recurrence (10) whose terms r_ij(τB), 1 ≤ i, j ≤ 2, given in tensorial form, form the matrix R(τB). By bounding (10) in the energy norm, we obtain that the proof of stability is related to the boundedness of the powers of matrix R(τB). As matrix B is assumed to be symmetric and positive definite, B is normal and we can use the spectral result (11), with σ(τB) being the spectrum of τB. Then, the boundedness of the powers of matrix R(τB) is reduced to the study of the boundedness of the powers of R(θ) for θ ∈ σ(τB). (Note: if B is not normal, we can use a result similar to (11), but considering the numerical range instead of the spectrum [14].) Following the results in [15], the following definitions and theorem can be stated.
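The spectral result used here can be illustrated numerically: for a normal (here symmetric) matrix, the 2-norm of a matrix function equals the maximum of the scalar function over the spectrum. The rational function g below is only a sample, not the stability function of any particular method:

```python
import numpy as np

# For symmetric B, ||g(B)||_2 = max over the spectrum of |g(lambda)|.
# This is what reduces the boundedness of powers of R(tau*B) to the
# scalar study of R(theta), theta in sigma(tau*B).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = A @ A.T + 5.0 * np.eye(5)          # symmetric positive definite
lam, Q = np.linalg.eigh(B)

g = lambda x: (4.0 - x) / (4.0 + x)    # sample rational function (illustrative)
gB = Q @ np.diag(g(lam)) @ Q.T         # g(B) via diagonalization
```

Here `np.linalg.norm(gB, 2)` coincides with `np.max(np.abs(g(lam)))`, as the spectral result predicts.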
We can say that the Rosenbrock-Nyström method is P-stable when the powers of the stability matrix remain uniformly bounded, ∥R(τB)^n∥ ≤ C for all n, where C is independent of the size of σ(τB). This result cannot be obtained if assumption (iv) is not satisfied.
Proof. The proof of this theorem is straightforward by using the results of Theorems 5 and 7 in [15].
Corollary 1. When the Rosenbrock-Nyström method comes from a Rosenbrock one with classical order greater than or equal to one, the condition is always satisfied.
Proof. Let us assume that the Rosenbrock-Nyström method comes from a Rosenbrock one with Butcher array given by coefficients α̃_i, α̃_ij, γ̃_i, γ̃_ij and last line b̃^T, and that the coefficients of the Rosenbrock-Nyström method satisfy relations (8). Then,

Order Conditions for Rosenbrock-Nyström Methods
Let us see the conditions that Rosenbrock-Nyström methods should satisfy to obtain the highest possible order when integrating a problem such as (1). In a similar way as with Runge-Kutta-Nyström methods, a Rosenbrock-Nyström method is said to have classical order p if the local errors in the solution and its derivative are O(τ^{p+1}). In order to study the order conditions, it is useful to write the equations in the autonomous case, given by (5), componentwise, where the superscript indices in capital letters indicate the component of the vector we are using (in this part, the notation is similar to that used in [6]). In the following, ∂f^J/∂y^K will be denoted by f^J_K, ∂²f^J/∂y^K∂y^L by f^J_{KL}, etc. To obtain the order conditions, we compare the Taylor series of ŷ_{n+1} and v̂_{n+1}, obtained from the exact solution (ỹ_n, ṽ_n)^T, with the Taylor series of the exact solution.

Numerical Experiments
This section is devoted to the numerical experiments we have carried out in order to prove the advantages of these methods when solving a non-linear equation.
The Rosenbrock-Nyström methods we have chosen are the ones developed and studied in [11]. The first one is a one-stage method of classical order 2, which we will call RN2. This method can be obtained from a well-known Rosenbrock method; therefore, the equations that this Rosenbrock-Nyström method provides are the same that we obtain with that Rosenbrock method when converting the second-order problem into a first-order one. This Rosenbrock method is A-stable, but not L-stable [6]. The method presented here is the only Rosenbrock-Nyström method with one stage and classical order 2 that is stable. In fact, it can be proved that this method is P-stable, as the eigenvalues of its stability matrix are (4 − θ²)/(4 + θ²) ± (4θ/(4 + θ²))i, for θ ∈ R. They are complex conjugate with modulus 1, except for θ = 0 and θ → ∞, where they are double. There are no one-stage Rosenbrock-Nyström methods with classical order 2 that are just R-stable but not P-stable, so in order to obtain methods of this type, we should use at least two stages. Another possibility is to construct one-stage Rosenbrock-Nyström methods with complex coefficients, following the ideas given in [16] for Rosenbrock methods. This will be the idea of future works.
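The modulus-1 claim behind P-stability follows directly from the stated eigenvalues, since ((4 − θ²)² + 16θ²)/(4 + θ²)² = (θ² + 4)²/(4 + θ²)² = 1, and a quick numerical check confirms it:

```python
import numpy as np

# Eigenvalues of the one-stage method's stability matrix, as stated above:
# lam(theta) = (4 - theta^2)/(4 + theta^2) +/- (4*theta/(4 + theta^2)) i.
theta = np.linspace(0.01, 50.0, 1000)
lam = (4 - theta**2) / (4 + theta**2) + 1j * (4 * theta) / (4 + theta**2)
deviation = np.max(np.abs(np.abs(lam) - 1.0))  # maximal departure from modulus 1
```

The deviation is at the level of machine precision over the whole sampled range of θ.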
The second one is an R-stable method with two stages and classical order 3, which we will call RN3. The third one is a method with three stages and classical order 4, which we will call RN4. The coefficients of both methods are given in [11]. In the tables, the local and global orders are given. The global error has been calculated as the difference between the exact solution at T = 1 and the numerical one obtained with our method.
Example 3. The first problem we have solved is the first problem presented in Example 1, that is, problem (2), with a suitable choice of the parameters. We present the results obtained for the method with classical order 2 in Table 1, where we can see that the expected orders are obtained for the solution as well as for the derivative. For the local error in the solution, we obtain one order more than expected, since the odd derivatives vanish.
The results for the method with classical order 3 are given in Table 2, where we can see that the expected orders are obtained for the solution as well as for the derivative. For the local error in the derivative, we obtain one order more than expected, since the odd derivatives vanish.
The results for the method with classical order 4 are presented in Table 3, where again we can see that the expected results are obtained. The local order in the solution is one unit higher than the expected one, as the odd derivatives vanish.

Example 4. The second problem is the equation of motion of a soliton in an exponential lattice. This problem was first proposed in [17] and it is a highly non-linear system.
The results for method RN2 can be seen in Table 4. There, we can observe that we also obtain the expected local and global orders for the solution and the derivative.
The results for the second method, RN3, can be seen in Table 5, for which the expected results have been obtained.
Finally, the results for method RN4 are presented in Table 6. We first discretized in space by using a spectral method suggested in [18] and deeply studied in [8]. This spectral method has been derived to discretize in space a problem such as −Au = F, x ∈ Ω, ∂u = G, x ∈ Γ, with Au = u_xxxx, F ∈ X, G ∈ Y, and u ∈ D(A), with X being a Hilbert space of functions and Y being a suitable space of functions defined on the boundary. In particular, for the problem we have, we take Ω = (−1, 1) with boundary Γ = {−1, 1} and boundary conditions u(−1) = g_{−1}, u(1) = g_1, u_x(−1) = h_{−1}, u_x(1) = h_1. For the spatial discretization of this problem, we consider a grid Ω_J in Ω ∪ Γ associated with a natural parameter J (related to the number of nodes on it). Then, we take X_J as the space of polynomials of degree less than or equal to J, integer J ≥ 4, and X_{J,0} as the subspace of X_J of polynomials such that they and their derivatives vanish on the boundary. We will consider in X_J an approximation of the L²-norm in X = L²(Ω), which will be denoted by ∥·∥_J. Then, the spatial discretization is given by −A_{J,0}U_J + C_J ∂u = P_J F, with A_{J,0} being the matrix that discretizes A_0, and C_J and P_J being the operators associated with the discretization of A which take into account the information on the boundary of u and F, respectively. The interior nodes are denoted by µ_j, j = 1, . . . , J. They are the zeros of the second derivative of the Legendre polynomial of degree J + 2.
In this way, after the spatial discretization, our problem reads as a non-linear second-order in time system of the form (1). In the numerical experiments, we have taken J = 40.
In this part, we present the results that have been obtained with method RN2 in Table 7, the results obtained with method RN3 in Table 8, and the results that have been obtained with method RN4 in Table 9. In this case, with RN4, we do not obtain order 4 in the global error in the derivative because of the order reduction phenomenon, which has been deeply studied for other numerical integrators such as Runge-Kutta methods, Rosenbrock methods, or RKN methods [19][20][21]. The way to avoid this order reduction is the objective of a forthcoming paper.
Finally, we present the comparison in terms of computational cost between RN2, RN3, RN4, and the RKN method with two stages and classical order 3 which is given in [22].
We will denote this method by the abbreviation RKN3.
The results can be seen in Figure 1 and Table 10, where we have written the values of τ and the computational cost required to obtain global errors in the solution in the range [10⁻¹², 10⁻⁶]. In the graph, the error has been plotted as a function of τ in double logarithmic scale. In this graph and this table, we can see that the method which generates more computational cost to achieve the desired error is RN2. Given a fixed value of τ, this method presents a lower computational cost than the other three, but, as its classical order is only two, it needs a lower value of τ to obtain the same error as the other three methods, and in this way, it needs more time steps. For example, if we take τ = 1/320, we can see that RN2 takes 1.8751 × 10⁻² seconds to obtain the result, but RN4 needs 3.7180 × 10⁻² seconds, nearly double. However, for this value, the error obtained with RN2 is 7.2957 × 10⁻⁷, but with RN4 it is 6.6974 × 10⁻⁹. To obtain this error with the RN2 method, we need a value of τ between 1/2560 and 1/5120, so the number of steps is much greater, and so is the computational cost.
As we can also see, both RN3 and RN4 are better in terms of computational cost than RKN3. As RN4 has more stages than RN3, for bigger values of τ, RN3 needs fewer steps to obtain the desired error, but, when τ is smaller, because of the higher classical order of RN4, it requires fewer time steps and its computational cost is lower.

Conclusions and Future Work
As we have seen in this work, these new methods are a good choice when we have a second-order in time problem to solve. By using these methods, we avoid the drawback that other integrators like Runge-Kutta-Nyström ones present when the problem we are solving is a non-linear one.
This work is an introductory work about these methods, and there is more work to do in the near future. For example, we have the problem of the order reduction that this type of method presents when we solve a second-order in time PDE with non-vanishing boundary conditions. This problem has already been studied for other existing methods such as Runge-Kutta, Rosenbrock, and Runge-Kutta-Nyström methods.
Another piece of work to be done is the construction and study of Rosenbrock-Nyström methods with complex coefficients. As we have seen, the only stable method that exists with only one stage is a P-stable one. To obtain R-stable methods (not P-stable) with classical order two, we should have two intermediate stages. Rosenbrock methods with complex coefficients have been proven to be very efficient when solving stiff problems [16,23]. In this way, one piece of future work is to use this idea of complex coefficients to obtain efficient Rosenbrock-Nyström methods to solve second-order in time problems.
To conclude the work to be done, we are also interested in the comparison between different Rosenbrock-Nyström methods, in order to study the best method to be used in practice. One high-stage scheme could be worse in practice than one method with fewer stages because of machine round-off errors.