Article

Rosenbrock Type Methods for Solving Non-Linear Second-Order in Time Problems

by
Maria Jesus Moreta
IMUVA. Departamento de Análisis Económico y Economía Cuantitativa, Campus de Somosaguas, Universidad Complutense de Madrid, Pozuelo de Alarcón, 28223 Madrid, Spain
Mathematics 2021, 9(18), 2225; https://doi.org/10.3390/math9182225
Submission received: 30 July 2021 / Revised: 3 September 2021 / Accepted: 6 September 2021 / Published: 10 September 2021
(This article belongs to the Special Issue Numerical Methods for Evolutionary Problems)

Abstract:
In this work, we develop a new class of methods designed to solve non-linear second-order in time problems efficiently. These methods are of the Rosenbrock type, and they can be seen as a generalization of Rosenbrock methods applied to second-order in time problems which have been previously transformed into first-order in time problems. As they also follow the ideas of Runge–Kutta–Nyström methods when solving second-order in time problems, we have called them Rosenbrock–Nyström methods. When solving non-linear problems, Rosenbrock–Nyström methods present a lower computational cost than implicit Runge–Kutta–Nyström ones, as the non-linear systems which arise at every intermediate stage when Runge–Kutta–Nyström methods are used are replaced with sequences of linear ones.

1. Introduction

In the literature, there are many non-linear second-order in time problems which must be solved numerically by means of appropriate methods. It is usual to use numerical methods that have been designed to solve first-order in time problems, after transforming the initial problem into a first-order one. In this way, we have several options to choose from, such as Runge–Kutta methods (RK methods), fractional step Runge–Kutta methods (FSRK methods) or exponential integrators. These methods are a good option if the problem is a linear one, but they present a high computational cost when the problem we are solving is non-linear, as we have to solve non-linear problems at every intermediate stage (see [1,2,3,4]). In this case, a good option is to use Rosenbrock methods or exponential Rosenbrock-type methods [5,6,7], for example. However, in this case, when converting the original problem to a first-order one, the dimension of the problem doubles, so the computational cost increases.
Another possibility is to use numerical integrators specially derived to solve second-order in time problems. In this way, we can use, for example, Runge–Kutta–Nyström methods (RKN methods) or fractional step Runge–Kutta–Nyström methods (FSRKN methods) [2,8,9]. When we use this type of method, we have to choose whether to use explicit or implicit methods. When we use explicit methods, there may be stability problems if the problem we are solving is a stiff one. On the other hand, if we choose an implicit method, we can select one with an infinite stability interval, but with a high computational cost [8] when the problem is non-linear and/or multidimensional in space. In order to avoid the high computational cost that implicit RKN methods present when multidimensional problems in space are solved, FSRKN methods were developed and studied in [9]. The idea of these methods is to split the spatial operator in a suitable way so that, at every intermediate stage, the problem to be solved is simpler in a certain way than the original one.
In order to avoid all the previous drawbacks when solving a non-linear second-order in time problem, in this paper, we present a new class of methods, which we call Rosenbrock–Nyström methods. These methods avoid the non-linear systems which arise when RKN methods are used by replacing them with sequences of linear ones. Rosenbrock–Nyström methods arise in a natural way as a generalization of Rosenbrock ones when they are applied to second-order in time equations that have been previously transformed into first-order in time problems.
In the literature, the construction of methods of the Rosenbrock type to numerically solve second-order non-linear systems of ordinary differential equations has already been studied [10]. However, the methods presented in that paper differ from the ones presented here. We remark here on three of the most important differences between the methods in [10] and our methods when solving a problem such as $y''(t) = f(t,y)$, $y(0) = y_0$, $y'(0) = v_0$, with $0 \le t \le T < \infty$. The first one is that, to use the methods presented in [10], we have to define $u = (y, v, t)^T$, with $v = y'(t)$, and then we have to convert the problem to a first-order one in the following way: $u' = g(u) = (v, f(t,y), 1)^T$, $u(0) = (y_0, v_0, t_0)^T$, while with our methods we do not need to convert the problem to a first-order one. The second difference is that in [10] we have to solve two linear systems at each intermediate stage, instead of the one linear system we have to solve at each intermediate stage when using the methods presented here. The last one is that in the mentioned paper, operator $g(u)$ and its Jacobian have to be evaluated at every intermediate stage. When using our method, we evaluate operator $f(t,y)$ at every intermediate stage, but we use fixed values of $f_t(t,y)$ and $f_y(t,y)$ at every time step, so the number of function evaluations per intermediate stage is smaller with our methods.
In this article, we show the development of Rosenbrock–Nyström methods, as well as the conditions that must be satisfied to obtain the desired classical order (up to order four) and the main ideas in order to have stability when linear problems are solved. In addition, we show some numerical experiments that prove the good behavior of these new methods.
This paper is structured as follows: in the next section, we give a brief description of Rosenbrock–Nyström methods, together with their development. In Section 3, we describe the stability requirements that these methods should satisfy when integrating linear problems. In Section 4, we deal with the conditions that the coefficients of the method should satisfy in order to have up to classical order four. The construction of such methods is studied in [11]. Finally, in Section 5, we present some numerical experiments in order to test such methods.

2. Development of Rosenbrock–Nyström Methods

Non-linear second-order in time problems can be written in an abstract form as follows:
“Find $y : [0,T] \to H$ solution of
$$y''(t) = f(t, y(t)), \quad 0 \le t \le T < \infty, \qquad y(0) = y_0, \quad y'(0) = v_0,\text{''} \qquad (1)$$
where, typically, H is a Hilbert space of functions defined in a certain bounded domain $\Omega \subset \mathbb{R}^M$, integer $M \ge 1$, with smooth boundary Γ. This formulation covers many different problems: partial differential equations, ordinary differential equations, etc.
Example 1.
Let us show here two problems that will be solved in the numerical experiments (Section 5) and can be solved by using Rosenbrock–Nyström methods.
The first problem is a modification of the non-linear wave propagation problem suggested in [12]:
$$u_j''(t) = F(u_{j+1} - u_j) - F(u_j - u_{j-1}) + g_j(t), \quad j = 1, \ldots, N,$$
$$u_j(0) = \sin\Big(\frac{2\pi j}{N+1}\Big), \quad u_j'(0) = 0, \quad j = 1, \ldots, N, \qquad (2)$$
with
$$F(u) = \lambda u + \alpha u^p, \qquad u_0(t) = u_{N+1}(t) = 0,$$
and $g_j(t)$ is such that the exact solution is given by $u_j(t) = \sin\big(\frac{2\pi j}{N+1}\big)\cos(t)$. Parameter λ controls the stiffness of the problem.
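For concreteness, the right-hand side of (2) can be assembled with the forcing term $g_j(t)$ obtained by the method of manufactured solutions. The following is a minimal Python sketch of this construction (the function names and the vectorized layout are ours, not from the paper):

```python
import numpy as np

N, lam, alpha, p = 20, 1000.0, 2.0, 3   # parameters as chosen in Section 5

def F(u):
    # non-linear spring force F(u) = lambda*u + alpha*u^p
    return lam * u + alpha * u**p

def exact(t):
    # manufactured exact solution u_j(t) = sin(2*pi*j/(N+1)) * cos(t)
    j = np.arange(1, N + 1)
    return np.sin(2.0 * np.pi * j / (N + 1)) * np.cos(t)

def g(t):
    # forcing chosen so that the exact solution solves the problem:
    # g_j(t) = u_j''(t) - [F(u_{j+1}-u_j) - F(u_j-u_{j-1})], with u'' = -u here
    ue = np.concatenate(([0.0], exact(t), [0.0]))   # u_0 = u_{N+1} = 0
    return -exact(t) - (F(ue[2:] - ue[1:-1]) - F(ue[1:-1] - ue[:-2]))

def f(t, u):
    # right-hand side of (2)
    ue = np.concatenate(([0.0], u, [0.0]))
    return F(ue[2:] - ue[1:-1]) - F(ue[1:-1] - ue[:-2]) + g(t)
```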
Example 2.
The second example is the following non-linear Euler–Bernoulli equation:
$$u_{tt} = -Au + u^2 + h(t,x), \quad x \in [-1,1], \quad 0 \le t \le T = 1,$$
$$u(0,x) = (x^2-1)^3\cos(x), \quad x \in [-1,1],$$
$$u_t(0,x) = -(x^2-1)^3\sin(x), \quad x \in [-1,1],$$
$$u(t,-1) = u(t,1) = 0, \quad 0 \le t \le T = 1,$$
$$u_x(t,-1) = u_x(t,1) = 0, \quad 0 \le t \le T = 1, \qquad (3)$$
with $h(t,x)$ such that the exact solution is $u(t,x) = (x^2-1)^3\cos(t+x)$. Operator A is such that $Au = u_{xxxx}$. This problem can be discretized in space by taking, for example, a pseudo-spectral discretization.
When solving this problem with an explicit Runge–Kutta–Nyström method, we have to impose very restrictive conditions on the time step size in order to guarantee stability.
When we solve a problem such as (1) with a Rosenbrock–Nyström method, the numerical approximation to the exact solution and its derivative, $(y(t_n), v(t_n))$, is given by $(y_n, v_n)$, where $t_n = t_0 + n\tau$, with τ the time step size. The numerical approximation $(y_n, v_n)$ proposed by us is calculated as
$$v_{n+1} = v_n + \tau\sum_{i=1}^{s} b_i\, f\Big(t_n+\alpha_i\tau,\ y_n+\sum_{j=1}^{i-1}\alpha_{ij}K_{n,j}\Big) + \tau^2(\beta^T e)\, f_t(t_n,y_n) + \tau f_y(t_n,y_n)\sum_{i=1}^{s}\beta_i K_{n,i},$$
$$y_{n+1} = y_n + \sum_{i=1}^{s} b_i K_{n,i},$$
where $K_{n,i}$, $i = 1,\ldots,s$, are the intermediate stages and $e = (1,\ldots,1)^T$. These intermediate stages are given by
$$K_{n,i} = \tau v_n + \tau^2\sum_{j=1}^{i}\delta_{ij}\, f\Big(t_n+\alpha_j\tau,\ y_n+\sum_{l=1}^{j-1}\alpha_{jl}K_{n,l}\Big) + \tau^2\sum_{j=1}^{i}\gamma_{ij}\big(\tau f_t(t_n,y_n) + f_y(t_n,y_n)K_{n,j}\big). \qquad (4)$$
We use the notation $f_t(t,y)$ and $f_y(t,y)$ to refer to $\partial f(t,y)/\partial t$ and $\partial f(t,y)/\partial y$, respectively. When the problem we are solving is autonomous, that is, of the form
$$y''(t) = f(y), \qquad y(0) = y_0, \quad y'(0) = v_0,$$
the equations which determine the method are
$$K_{n,i} = \tau v_n + \tau^2\sum_{j=1}^{i}\delta_{ij}\, f\Big(y_n + \sum_{l=1}^{j-1}\alpha_{jl}K_{n,l}\Big) + \tau^2 f_y(y_n)\sum_{j=1}^{i}\gamma_{ij}K_{n,j},$$
$$v_{n+1} = v_n + \tau\sum_{i=1}^{s} b_i\, f\Big(y_n + \sum_{j=1}^{i-1}\alpha_{ij}K_{n,j}\Big) + \tau f_y(y_n)\sum_{i=1}^{s}\beta_i K_{n,i},$$
$$y_{n+1} = y_n + \sum_{i=1}^{s} b_i K_{n,i}. \qquad (5)$$
Notice that in this case we have a simplification of the main problem (1), and the equations can be obtained by considering that $f_t(y) = 0$.
Similarly, as happens with other classical methods such as RK methods, RKN methods, Rosenbrock methods, etc., the coefficients of these methods can be written in a tableau as follows:
[Tableau of the Rosenbrock–Nyström coefficients (image in the original)]
where we will assume that $\alpha_i = \sum_{j=1}^{i-1}\alpha_{ij}$.
Notice that at every intermediate stage, the problem to be solved is a linear one, so the computational cost is reduced compared with the equations that implicit RKN methods provide when solving this type of problem. When we select the values $\gamma_{ii} = \gamma$, $i = 1,\ldots,s$, then, at every intermediate stage $K_{n,i}$, we have to solve a linear problem of the form
$$\big(I - \tau^2\gamma f_y(t_n,y_n)\big)\,K_{n,i} = T_i,$$
where I denotes the identity operator and $T_i$ is the term in (4) that does not depend on $K_{n,i}$, $i = 1,\ldots,s$. Therefore, the computational cost is reduced, as $I - \tau^2\gamma f_y(t_n,y_n)$ remains constant for all the intermediate stages.
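The following Python sketch implements one step of scheme (4) under the assumption $\gamma_{ii} = \gamma$ for all i, so that the operator $I - \tau^2\gamma f_y(t_n,y_n)$ is factorized only once per step and reused for all the stages (the function name rn_step and the argument layout are ours, shown only to make the structure of the method concrete):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rn_step(f, ft, fy, tn, yn, vn, tau, alpha, A_alpha, A_delta, A_gamma, b, beta):
    """One Rosenbrock-Nystrom step (4); assumes A_gamma has a constant diagonal."""
    s, m = len(b), len(yn)
    J = fy(tn, yn)                        # frozen Jacobian f_y(t_n, y_n)
    dft = ft(tn, yn)                      # frozen f_t(t_n, y_n)
    gamma = A_gamma[0, 0]
    lu = lu_factor(np.eye(m) - tau**2 * gamma * J)   # factorized once per step

    K = np.zeros((s, m))
    fv = np.zeros((s, m))
    for i in range(s):
        gi = yn + sum((A_alpha[i, l] * K[l] for l in range(i)), np.zeros(m))
        fv[i] = f(tn + alpha[i] * tau, gi)
        rhs = (tau * vn
               + tau**2 * sum((A_delta[i, j] * fv[j] for j in range(i + 1)), np.zeros(m))
               + tau**3 * A_gamma[i, :i + 1].sum() * dft
               + tau**2 * (J @ sum((A_gamma[i, j] * K[j] for j in range(i)), np.zeros(m))))
        K[i] = lu_solve(lu, rhs)          # one linear solve per stage, same matrix

    v_new = vn + tau * (b @ fv) + tau**2 * beta.sum() * dft + tau * (J @ (beta @ K))
    y_new = yn + b @ K
    return y_new, v_new
```

Note that the term $\tau^2\gamma_{ii} f_y(t_n,y_n)K_{n,i}$ has been moved to the left-hand side, which is what makes each stage a linear solve with a fixed matrix.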
In order to guarantee the solvability of the intermediate stages, we will assume that, for every $t > 0$, $f_y(t,y(t))$ is such that $\big(I - \mu f_y(t,y(t))\big)^{-1}$ exists and is bounded for every $\mu \ge 0$.
Remark 1.
When, for every $t > 0$, $f_y(t,y(t))$ is self-adjoint and negative semi-definite, the solvability and boundedness of the intermediate stages are guaranteed because of the following:
  • In the case of $f(t,y(t))$ being a space differential operator, $f_y(t,y(t))$ is the infinitesimal generator of a $C_0$-semigroup of type $\tilde\omega \le 0$, so $\big(\mu I - f_y(t,y(t))\big)^{-1}$ exists and is bounded for every $\mu > \tilde\omega$ [13].
  • In the case of $f(t,y(t)) : \mathbb{R}^{n+1} \to \mathbb{R}^n$ being a regular function, then, for every $t > 0$, $f_y(t,y(t))$ is a symmetric negative semi-definite matrix, so $\big(I - \mu f_y(t,y(t))\big)^{-1}$ exists for every $\mu \ge 0$.
Let us now see the way in which these methods have been developed. The way of defining these new methods is the natural one, since they can be obtained from Rosenbrock methods applied to problem (1) when it is transformed into a first-order in time problem. Let us recall that the coefficients of Rosenbrock methods are given by an array of the form
[Tableau of the Rosenbrock coefficients (image in the original)]
and the equations that these methods give when solving a problem such as
$$u'(t) = F(t,u), \qquad u(0) = u_0, \qquad (7)$$
are
$$Q_{n,i} = \tau F\Big(t_n+\tilde\alpha_i\tau,\ u_n+\sum_{j=1}^{i-1}\tilde\alpha_{ij}Q_{n,j}\Big) + \tau^2\tilde\gamma_i F_t(t_n,u_n) + \tau F_u(t_n,u_n)\sum_{j=1}^{i}\tilde\gamma_{ij}Q_{n,j},$$
$$u_{n+1} = u_n + \sum_{j=1}^{s}\tilde b_j Q_{n,j},$$
where $t_n = t_0 + n\tau$, with τ the time step size and $u_n$ the numerical approximation to $u(t_n)$; $Q_{n,i}$, $i = 1,\ldots,s$, are the intermediate stages.
To solve problem (1) with a Rosenbrock method, we first write it as a first-order one, by defining $v(t) = y'(t)$. In this way, we define $u(t) = (y(t), v(t))^T$ and, therefore, we have a problem like (7), with $F(t,u) = (v(t), f(t,y))^T$. The initial condition is $u(0) = u_0$ with $u_0 = (y_0, v_0)^T$. We apply a Rosenbrock method to this problem, using the notation $Q_{n,i} = (Q_{n,i}^y, Q_{n,i}^v)$.
We operate in the equations that give the intermediate stages, replacing Q n , j v with its expression in the equations for Q n , j y . From this, we obtain
$$Q_{n,i}^y = \tau v_n + \tau^2\sum_{j=1}^{i}(\tilde\alpha_{ij}+\tilde\gamma_{ij})\, f\Big(t_n+\tilde\alpha_j\tau,\ y_n+\sum_{l=1}^{j-1}\tilde\alpha_{jl}Q_{n,l}^y\Big) + \tau^3\sum_{j=1}^{i}(\tilde\alpha_{ij}+\tilde\gamma_{ij})\tilde\gamma_j\, f_t(t_n,y_n) + \tau^2 f_y(t_n,y_n)\sum_{j=1}^{i}(\tilde\alpha_{ij}+\tilde\gamma_{ij})\sum_{l=1}^{j}\tilde\gamma_{jl}Q_{n,l}^y,$$
$$v_{n+1} = v_n + \tau\sum_{j=1}^{s}\tilde b_j\, f\Big(t_n+\tilde\alpha_j\tau,\ y_n+\sum_{l=1}^{j-1}\tilde\alpha_{jl}Q_{n,l}^y\Big) + \tau^2\sum_{j=1}^{s}\tilde b_j\tilde\gamma_j\, f_t(t_n,y_n) + \tau f_y(t_n,y_n)\sum_{j=1}^{s}\tilde b_j\sum_{l=1}^{j}\tilde\gamma_{jl}Q_{n,l}^y,$$
$$y_{n+1} = y_n + \sum_{j=1}^{s}\tilde b_j Q_{n,j}^y.$$
Now, let us assume that the Rosenbrock method satisfies
$$\tilde\alpha_i = \sum_{j=1}^{i-1}\tilde\alpha_{ij}, \qquad \tilde\gamma_i = \sum_{j=1}^{i}\tilde\gamma_{ij}, \qquad \sum_{j=1}^{s}\tilde b_j = 1.$$
The first two conditions are usual restrictions satisfied by many Rosenbrock methods, and the third one is the condition for classical order 1. Then, if we denote $K_{n,i} = Q_{n,i}^y$, $i = 1,\ldots,s$, and if we define
$$\alpha = \tilde\alpha, \quad A_\alpha = \tilde A_\alpha, \quad A_\gamma = (\tilde A_\alpha + \tilde A_\gamma)\tilde A_\gamma, \quad A_\delta = \tilde A_\alpha + \tilde A_\gamma, \quad b^T = \tilde b^T, \quad \beta^T = \tilde b^T\tilde A_\gamma, \qquad (8)$$
what we obtain is precisely Equation (4). Furthermore, we can construct Rosenbrock–Nyström methods that do not come from Rosenbrock ones. This fact gives much more freedom to obtain the coefficients of the desired methods. The proposed methods with classical order 3 and 4 that are used in the numerical experiments have been obtained without using existing Rosenbrock ones.
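As an illustration, relations (8) are straightforward to implement. The sketch below (our own code, shown only to make the construction concrete) converts a Rosenbrock tableau into a Rosenbrock–Nyström one; applied to the one-stage Rosenbrock method used later in Section 5, it yields the coefficients of the one-stage method RN2 (the resulting values are our own derivation):

```python
import numpy as np

def rosenbrock_to_nystrom(A_alpha_t, A_gamma_t, b_t):
    # relations (8): "_t" quantities are the tilde ones of the Rosenbrock method
    A_delta = A_alpha_t + A_gamma_t
    return {
        "alpha":   A_alpha_t.sum(axis=1),    # alpha = alpha~
        "A_alpha": A_alpha_t,                # A_alpha = A_alpha~
        "A_delta": A_delta,                  # A_delta = A_alpha~ + A_gamma~
        "A_gamma": A_delta @ A_gamma_t,      # A_gamma = (A_alpha~ + A_gamma~) A_gamma~
        "b":       b_t,                      # b = b~
        "beta":    b_t @ A_gamma_t,          # beta^T = b~^T A_gamma~
    }

# one-stage Rosenbrock method of Section 5: alpha~ = 0, A_alpha~ = 0, A_gamma~ = 1/2, b~ = 1
print(rosenbrock_to_nystrom(np.zeros((1, 1)), np.array([[0.5]]), np.array([1.0])))
# -> alpha = [0], A_delta = [[0.5]], A_gamma = [[0.25]], b = [1], beta = [0.5]
```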

3. Stability When Solving Linear Ordinary Differential Equations

Following the ideas given for Rosenbrock methods in [6], in this part we deal with the stability of Rosenbrock–Nyström methods when they are applied to a simplified problem such as
$$U''(t) = -B^2 U(t), \qquad U(0) = U_0, \quad U'(0) = V_0, \qquad (9)$$
where B is a given symmetric positive definite matrix of order $m \ge 1$ and $U(t), U_0, V_0 \in \mathbb{R}^m$.
Here, we study the stability in the energy norm, which is the natural norm for the study of the well-posedness of problem (9). This norm is given by
$$\big\|\big(U(t), U'(t)\big)^T\big\|_B^2 = \|B\,U(t)\|_2^2 + \|U'(t)\|_2^2,$$
with $\|\cdot\|_2$ being the Euclidean norm in $\mathbb{R}^m$.
When solving problem (9) with a Rosenbrock–Nyström method, we obtain
$$\begin{pmatrix} U_{n+1} \\ V_{n+1} \end{pmatrix} = \begin{pmatrix} r_{11}(\tau B) & B^{-1}r_{12}(\tau B) \\ B\,r_{21}(\tau B) & r_{22}(\tau B) \end{pmatrix} \begin{pmatrix} U_n \\ V_n \end{pmatrix}, \qquad (10)$$
where the terms $r_{ij}(\tau B)$, $1 \le i,j \le 2$, are given, in tensorial form, by
$$r_{11}(\tau B) = I - (b^T \otimes \tau^2 B^2)\, M(A_\delta,A_\alpha,A_\gamma,B)^{-1}(A_\delta e \otimes I),$$
$$r_{12}(\tau B) = (b^T \otimes \tau B)\, M(A_\delta,A_\alpha,A_\gamma,B)^{-1}(e \otimes I),$$
$$r_{21}(\tau B) = -\tau B\,\big[(b^T e \otimes I) - \big((b^T A_\alpha + \beta^T) \otimes \tau^2 B^2\big)\, M(A_\delta,A_\alpha,A_\gamma,B)^{-1}(A_\delta e \otimes I)\big],$$
$$r_{22}(\tau B) = I - \big((b^T A_\alpha + \beta^T) \otimes \tau^2 B^2\big)\, M(A_\delta,A_\alpha,A_\gamma,B)^{-1}(e \otimes I),$$
with $M(A_\delta,A_\alpha,A_\gamma,B) = I \otimes I + (A_\delta A_\alpha + A_\gamma) \otimes \tau^2 B^2$. These elements form the matrix $R(\tau B)$,
$$R(\tau B) = \begin{pmatrix} r_{11}(\tau B) & r_{12}(\tau B) \\ r_{21}(\tau B) & r_{22}(\tau B) \end{pmatrix}.$$
By bounding (10) in the energy norm, we obtain that the proof of stability is related to the boundedness of the powers of matrix R ( τ B ) . As matrix B is assumed to be symmetric and positive definite, then B is normal and we can use the following spectral result:
$$\|R(\tau B)^n\|_2 \le \sup_{\theta \in \sigma(\tau B)} \|R(\theta)^n\|_2, \qquad (11)$$
with $\sigma(\tau B)$ being the spectrum of $\tau B$. Then, the boundedness of the powers of matrix $R(\tau B)$ is reduced to the study of the boundedness of the powers of matrix $R(\theta)$. (Note: if we assume that B is not normal, we can use a result similar to (11), but considering the numerical range instead of the spectrum [14].)
Following the results in [15], the following definitions and theorem can be stated.
Definition 1.
The interval $C = [0, \beta_{stab})$ is the interval of stability of the Rosenbrock–Nyström method if $\beta_{stab} \in \mathbb{R}^+$ is the highest value such that
$$C \subseteq \big\{\theta \in \mathbb{R}^+ \cup \{0\}\ /\ \rho(R(\theta)) \le 1 \ \text{ and }\ R(\theta) \text{ is simple when } \rho(R(\theta)) = 1\big\}.$$
The Rosenbrock–Nyström method is said to be R-stable if $C = \mathbb{R}^+ \cup \{0\}$.
Definition 2.
The interval $C^* = [0, \beta_{per})$ is the interval of periodicity of the Rosenbrock–Nyström method if $\beta_{per} \in \mathbb{R}^+$ is the highest value such that
$$C^* \subseteq \big\{\theta \in \mathbb{R}^+ \cup \{0\}\ /\ R(\theta) \text{ is simple and, for all } \lambda \in \sigma(R(\theta)),\ |\lambda| = 1\big\}.$$
We say that the Rosenbrock–Nyström method is P-stable if $C^* = \mathbb{R}^+ \cup \{0\}$.
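These definitions can be probed numerically. For the scalar test problem $U'' = -\omega^2 U$ (so that $\theta = \tau\omega$), one step of the method is linear in $(U_n, V_n)$, and feeding in the two unit vectors recovers a 2 × 2 one-step matrix whose eigenvalues coincide with those of $R(\theta)$ (the two matrices agree up to a similarity transformation by diag(B, I)). A sketch of such a check, reusing the hypothetical rn_step() from Section 2 (our own code, not from the paper):

```python
import numpy as np

def one_step_matrix(theta, coeffs):
    # coeffs = (alpha, A_alpha, A_delta, A_gamma, b, beta), as in rn_step()
    # amplification matrix for U'' = -omega^2 U with tau = 1, omega = theta
    f  = lambda t, y: -theta**2 * y
    ft = lambda t, y: np.zeros_like(y)
    fy = lambda t, y: -theta**2 * np.eye(1)
    cols = []
    for U0, V0 in [(1.0, 0.0), (0.0, 1.0)]:
        y1, v1 = rn_step(f, ft, fy, 0.0, np.array([U0]), np.array([V0]), 1.0, *coeffs)
        cols.append([y1[0], v1[0]])
    return np.array(cols).T

def looks_R_stable(coeffs, thetas=np.linspace(0.0, 200.0, 2000), tol=1e-10):
    # necessary check only: spectral radius <= 1 on a grid of theta values
    return all(np.max(np.abs(np.linalg.eigvals(one_step_matrix(t, coeffs)))) <= 1 + tol
               for t in thetas)
```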
Theorem 1.
Under assumptions that
(i) 
The method is R-stable.
(ii) 
$\sigma(A_\delta A_\alpha + A_\gamma) \cap (-\infty, 0] = \emptyset$.
(iii) 
There exists a value $\bar\theta \in \mathbb{R}$ such that $R(\bar\theta)$ does not have double eigenvalues.
(iv) 
$(b^T A_\alpha + \beta^T)(A_\delta A_\alpha + A_\gamma)^{-1}(A_\delta e) = 1$.
Then,
$$\|R(\tau B)^n\|_2 \le C, \quad \forall n \in \mathbb{N},$$
where C is independent of the size of $\sigma(\tau B)$.
This result cannot be obtained if assumption (iv) is not satisfied.
Proof. 
The proof of this theorem is straightforward by using the results of Theorems 5 and 7 in [15]. □
Corollary 1.
When the Rosenbrock–Nyström method comes from a Rosenbrock one with classical order greater than or equal to one, the condition
$$(b^T A_\alpha + \beta^T)(A_\delta A_\alpha + A_\gamma)^{-1}(A_\delta e) = 1$$
is always satisfied.
Proof. 
Let us assume that the Rosenbrock–Nyström method comes from a Rosenbrock one with Butcher array
Mathematics 09 02225 i003
and that the coefficients of the Rosenbrock–Nyström method satisfy relations (8). Then,
$$(b^T A_\alpha + \beta^T)(A_\delta A_\alpha + A_\gamma)^{-1}(A_\delta e) = (\tilde b^T\tilde A_\alpha + \tilde b^T\tilde A_\gamma)\big((\tilde A_\alpha + \tilde A_\gamma)\tilde A_\alpha + (\tilde A_\alpha + \tilde A_\gamma)\tilde A_\gamma\big)^{-1}(\tilde A_\alpha + \tilde A_\gamma)e$$
$$= \tilde b^T(\tilde A_\alpha + \tilde A_\gamma)\big((\tilde A_\alpha + \tilde A_\gamma)(\tilde A_\alpha + \tilde A_\gamma)\big)^{-1}(\tilde A_\alpha + \tilde A_\gamma)e = \tilde b^T(\tilde A_\alpha + \tilde A_\gamma)\big((\tilde A_\alpha + \tilde A_\gamma)^2\big)^{-1}(\tilde A_\alpha + \tilde A_\gamma)e = \tilde b^T e = 1. \qquad \square$$

4. Order Conditions for Rosenbrock–Nyström Methods

Let us see the conditions that Rosenbrock–Nyström methods should satisfy to obtain the highest possible order when integrating a problem such as (1). In a similar way as with Runge–Kutta–Nyström methods, a Rosenbrock–Nyström method is said to have classical order p if
$$\rho_{n+1} = y(t_{n+1}) - \hat y_{n+1} = O(\tau^{p+1}), \qquad \xi_{n+1} = y'(t_{n+1}) - \hat v_{n+1} = O(\tau^{p+1}),$$
where $(\hat y_{n+1}, \hat v_{n+1})^T$ is the numerical solution obtained after one step of size τ from the exact solution $(y(t_n), y'(t_n))^T = (\tilde y_n, \tilde v_n)^T$.
In order to study the order conditions, it is useful to write the equations of the autonomous case, given by (5), in the following way:
$$g_{n,i}^J = y_n^J + \sum_{j=1}^{i-1}\alpha_{ij}K_{n,j}^J,$$
$$K_{n,i}^J = \tau v_n^J + \tau^2\sum_{j=1}^{i}\delta_{ij} f^J(g_j) + \tau^2\sum_K f_K^J(y_n)\sum_{j=1}^{i}\gamma_{ij}K_{n,j}^K,$$
$$v_{n+1}^J = v_n^J + \tau\sum_{i=1}^{s} b_i f^J(g_i) + \tau\sum_K f_K^J(y_n)\sum_{i=1}^{s}\beta_i K_{n,i}^K,$$
$$y_{n+1}^J = y_n^J + \sum_{i=1}^{s} b_i K_{n,i}^J,$$
where the superscript indices in capital letters indicate the component of the vector we are using (in this part, the notation is similar to that used in [6]). In the following, $\partial f^J/\partial y^K$ will be denoted by $f_K^J$, $\partial^2 f^J/\partial y^K\partial y^L$ by $f_{KL}^J$, etc.
To obtain the order conditions, we compare the Taylor series of y ^ n + 1 and v ^ n + 1 obtained from the exact solution ( y ˜ n , v ˜ n ) T with the Taylor series of the exact solution.
In this part, the following formulae are used:
$$(\tau\varphi(\tau))^{(q)}\big|_{\tau=0} = q\,\varphi^{(q-1)}(\tau)\big|_{\tau=0}, \quad q \ge 1, \qquad (\tau^2\varphi(\tau))^{(q)}\big|_{\tau=0} = q(q-1)\,\varphi^{(q-2)}(\tau)\big|_{\tau=0}, \quad q \ge 2.$$
These formulae, which can be proved in a recursive manner, are obtained by using that
$$(\tau\varphi(\tau))^{(q)} = q\,\varphi^{(q-1)}(\tau) + \tau\,\varphi^{(q)}(\tau), \quad q \ge 1,$$
$$(\tau^2\varphi(\tau))^{(q)} = q(q-1)\,\varphi^{(q-2)}(\tau) + 2q\tau\,\varphi^{(q-1)}(\tau) + \tau^2\varphi^{(q)}(\tau), \quad q \ge 2.$$
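These identities are immediate to confirm symbolically; a quick check (ours, not part of the paper) with SymPy for q up to four:

```python
import sympy as sp

tau = sp.symbols('tau')
phi = sp.Function('phi')
for q in range(1, 5):
    # (tau*phi)^(q) at tau = 0 equals q * phi^(q-1)(0)
    d1 = sp.diff(tau * phi(tau), tau, q).subs(tau, 0)
    assert sp.simplify(d1 - q * sp.diff(phi(tau), tau, q - 1).subs(tau, 0)) == 0
    if q >= 2:
        # (tau^2*phi)^(q) at tau = 0 equals q*(q-1) * phi^(q-2)(0)
        d2 = sp.diff(tau**2 * phi(tau), tau, q).subs(tau, 0)
        assert sp.simplify(d2 - q*(q - 1) * sp.diff(phi(tau), tau, q - 2).subs(tau, 0)) == 0
```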
We differentiate, using the notation $\varphi(\tau) = \sum_{j=1}^{i}\big(\delta_{ij} f^J(g_j) + \gamma_{ij}\sum_K f_K^J(\tilde y_n) K_{n,j}^K\big)$ together with the previous formulae. Then, we obtain
$$(K_{n,i}^J)^{(0)}\big|_{\tau=0} = 0, \qquad (K_{n,i}^J)^{(1)}\big|_{\tau=0} = \tilde v_n^J + \big(2\tau\varphi(\tau) + \tau^2\varphi^{(1)}(\tau)\big)\big|_{\tau=0} = \tilde v_n^J,$$
$$(K_{n,i}^J)^{(2)}\big|_{\tau=0} = 2\,\varphi(\tau)\big|_{\tau=0} = 2\sum_{j=1}^{i}\delta_{ij} f^J(\tilde y_n),$$
$$(K_{n,i}^J)^{(3)}\big|_{\tau=0} = 6\sum_{j=1}^{i}\Big(\delta_{ij}\big(f^J(g_j)\big)^{(1)} + \gamma_{ij}\sum_K f_K^J(\tilde y_n)(K_{n,j}^K)^{(1)}\Big)\Big|_{\tau=0} = 6\sum_{j=1}^{i}\big(\delta_{ij}\alpha_j + \gamma_{ij}\big)\sum_K f_K^J(\tilde y_n)\tilde v_n^K,$$
$$(K_{n,i}^J)^{(4)}\big|_{\tau=0} = 12\sum_{j=1}^{i}\Big(\delta_{ij}\big(f^J(g_j)\big)^{(2)} + \gamma_{ij}\sum_K f_K^J(\tilde y_n)(K_{n,j}^K)^{(2)}\Big)\Big|_{\tau=0}$$
$$= 12\sum_{j=1}^{i}\delta_{ij}\alpha_j^2\sum_{K,L} f_{KL}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L + 24\sum_{j=1}^{i}\sum_{l=1}^{j-1}\sum_{m=1}^{l}\delta_{ij}\alpha_{jl}\delta_{lm}\sum_K f_K^J(\tilde y_n)f^K(\tilde y_n) + 24\sum_{j=1}^{i}\sum_{l=1}^{j}\gamma_{ij}\delta_{jl}\sum_K f_K^J(\tilde y_n)f^K(\tilde y_n),$$
where we have used that
$$f^J(g_i)\big|_{\tau=0} = f^J(\tilde y_n), \qquad \big(f^J(g_i)\big)^{(1)}\big|_{\tau=0} = \alpha_i\sum_K f_K^J(\tilde y_n)\tilde v_n^K,$$
$$\big(f^J(g_i)\big)^{(2)}\big|_{\tau=0} = \alpha_i^2\sum_{K,L} f_{KL}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L + 2\sum_K f_K^J(\tilde y_n)f^K(\tilde y_n)\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_{ij}\delta_{jl},$$
together with
$$(g_i^K)^{(0)}\big|_{\tau=0} = \tilde y_n^K, \qquad (g_i^K)^{(1)}\big|_{\tau=0} = \sum_{j=1}^{i-1}\alpha_{ij}(K_{n,j}^K)^{(1)}\big|_{\tau=0} = \sum_{j=1}^{i-1}\alpha_{ij}\tilde v_n^K = \alpha_i\tilde v_n^K,$$
$$(g_i^K)^{(2)}\big|_{\tau=0} = \sum_{j=1}^{i-1}\alpha_{ij}(K_{n,j}^K)^{(2)}\big|_{\tau=0} = 2\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_{ij}\delta_{jl} f^K(\tilde y_n),$$
$$(g_i^K)^{(3)}\big|_{\tau=0} = \sum_{j=1}^{i-1}\alpha_{ij}(K_{n,j}^K)^{(3)}\big|_{\tau=0} = 6\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_{ij}\big(\delta_{jl}\alpha_l + \gamma_{jl}\big)\sum_L f_L^K(\tilde y_n)\tilde v_n^L.$$
Then, by using the expressions obtained for $(K_{n,i}^J)^{(l)}$, $l = 1,\ldots,4$, we have
$$(\hat v_{n+1}^J)^{(1)}\big|_{\tau=0} = \sum_{i=1}^{s} b_i f^J(g_i)\big|_{\tau=0} + \sum_K f_K^J(\tilde y_n)\sum_{i=1}^{s}\beta_i (K_{n,i}^K)^{(0)}\big|_{\tau=0} = f^J(\tilde y_n)\sum_{i=1}^{s} b_i,$$
$$(\hat v_{n+1}^J)^{(2)}\big|_{\tau=0} = 2\sum_{i=1}^{s} b_i\big(f^J(g_i)\big)^{(1)}\big|_{\tau=0} + 2\sum_K f_K^J(\tilde y_n)\sum_{i=1}^{s}\beta_i(K_{n,i}^K)^{(1)}\big|_{\tau=0} = 2\sum_{i=1}^{s}\big(b_i\alpha_i + \beta_i\big)\sum_K f_K^J(\tilde y_n)\tilde v_n^K,$$
$$(\hat v_{n+1}^J)^{(3)}\big|_{\tau=0} = 3\sum_{i=1}^{s} b_i\big(f^J(g_i)\big)^{(2)}\big|_{\tau=0} + 3\sum_K f_K^J(\tilde y_n)\sum_{i=1}^{s}\beta_i(K_{n,i}^K)^{(2)}\big|_{\tau=0}$$
$$= 3\sum_{i=1}^{s} b_i\alpha_i^2\sum_{K,L} f_{KL}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L + 6\sum_K f_K^J(\tilde y_n)f^K(\tilde y_n)\sum_{i=1}^{s}\Big(\sum_{j=1}^{i-1}\sum_{l=1}^{j} b_i\alpha_{ij}\delta_{jl} + \sum_{j=1}^{i}\beta_i\delta_{ij}\Big),$$
$$(\hat v_{n+1}^J)^{(4)}\big|_{\tau=0} = 4\sum_{i=1}^{s} b_i\big(f^J(g_i)\big)^{(3)}\big|_{\tau=0} + 4\sum_K f_K^J(\tilde y_n)\sum_{i=1}^{s}\beta_i(K_{n,i}^K)^{(3)}\big|_{\tau=0}$$
$$= 4\sum_{i=1}^{s} b_i\alpha_i^3\sum_{K,L,M} f_{KLM}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L\tilde v_n^M + 24\sum_{i=1}^{s} b_i\alpha_i\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_{ij}\delta_{jl}\sum_{K,L} f_{KL}^J(\tilde y_n)f^K(\tilde y_n)\tilde v_n^L$$
$$\quad + 24\sum_{i=1}^{s}\Big(\sum_{j=1}^{i-1}\sum_{l=1}^{j} b_i\alpha_{ij}\big(\delta_{jl}\alpha_l + \gamma_{jl}\big) + \sum_{j=1}^{i}\beta_i\big(\delta_{ij}\alpha_j + \gamma_{ij}\big)\Big)\sum_{K,L} f_K^J(\tilde y_n)f_L^K(\tilde y_n)\tilde v_n^L, \qquad (12)$$
where we have used that
$$\big(f^J(g_i)\big)^{(3)} = \sum_{K,L,M} f_{KLM}^J(g_i)(g_i^K)^{(1)}(g_i^L)^{(1)}(g_i^M)^{(1)} + 2\sum_{K,L} f_{KL}^J(g_i)(g_i^K)^{(2)}(g_i^L)^{(1)} + \sum_{K,L} f_{KL}^J(g_i)(g_i^K)^{(1)}(g_i^L)^{(2)} + \sum_K f_K^J(g_i)(g_i^K)^{(3)},$$
and therefore
$$\big(f^J(g_i)\big)^{(3)}\big|_{\tau=0} = \alpha_i^3\sum_{K,L,M} f_{KLM}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L\tilde v_n^M + 4\sum_{K,L} f_{KL}^J(\tilde y_n)f^K(\tilde y_n)\tilde v_n^L\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_i\alpha_{ij}\delta_{jl}$$
$$\quad + 2\sum_{K,L} f_{KL}^J(\tilde y_n)f^L(\tilde y_n)\tilde v_n^K\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_i\alpha_{ij}\delta_{jl} + 6\sum_{K,L} f_K^J(\tilde y_n)f_L^K(\tilde y_n)\tilde v_n^L\sum_{j=1}^{i-1}\sum_{l=1}^{j}\alpha_{ij}\big(\delta_{jl}\alpha_l + \gamma_{jl}\big).$$
For $(\hat y_{n+1}^J)^{(l)}$, $l = 1,\ldots,4$, we have
$$(\hat y_{n+1}^J)^{(0)}\big|_{\tau=0} = \tilde y_n^J, \qquad (\hat y_{n+1}^J)^{(1)}\big|_{\tau=0} = \sum_{i=1}^{s} b_i(K_{n,i}^J)^{(1)}\big|_{\tau=0} = \tilde v_n^J\sum_{i=1}^{s} b_i,$$
$$(\hat y_{n+1}^J)^{(2)}\big|_{\tau=0} = \sum_{i=1}^{s} b_i(K_{n,i}^J)^{(2)}\big|_{\tau=0} = 2 f^J(\tilde y_n)\sum_{i=1}^{s}\sum_{j=1}^{i} b_i\delta_{ij},$$
$$(\hat y_{n+1}^J)^{(3)}\big|_{\tau=0} = \sum_{i=1}^{s} b_i(K_{n,i}^J)^{(3)}\big|_{\tau=0} = 6\sum_K f_K^J(\tilde y_n)\tilde v_n^K\sum_{i=1}^{s}\sum_{j=1}^{i} b_i\big(\delta_{ij}\alpha_j + \gamma_{ij}\big),$$
$$(\hat y_{n+1}^J)^{(4)}\big|_{\tau=0} = \sum_{i=1}^{s} b_i(K_{n,i}^J)^{(4)}\big|_{\tau=0} = 12\sum_{K,L} f_{KL}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L\sum_{i=1}^{s}\sum_{j=1}^{i} b_i\delta_{ij}\alpha_j^2$$
$$\quad + 24\sum_K f_K^J(\tilde y_n)f^K(\tilde y_n)\sum_{i=1}^{s}\sum_{j=1}^{i}\sum_{l=1}^{j} b_i\Big(\gamma_{ij}\delta_{jl} + \delta_{ij}\alpha_{jl}\sum_{m=1}^{l}\delta_{lm}\Big). \qquad (13)$$
Now, we calculate the derivatives of the exact solution, taking into account that $y'' = f(y)$:
$$(\tilde y_n^J)^{(1)} = \tilde v_n^J, \qquad (\tilde y_n^J)^{(2)} = (\tilde v_n^J)^{(1)} = f^J(\tilde y_n),$$
$$(\tilde y_n^J)^{(3)} = (\tilde v_n^J)^{(2)} = \big(f^J(\tilde y_n)\big)^{(1)} = \sum_K f_K^J(\tilde y_n)\tilde v_n^K,$$
$$(\tilde y_n^J)^{(4)} = (\tilde v_n^J)^{(3)} = \big(f^J(\tilde y_n)\big)^{(2)} = \sum_{K,L} f_{KL}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L + \sum_K f_K^J(\tilde y_n)f^K(\tilde y_n),$$
$$(\tilde y_n^J)^{(5)} = (\tilde v_n^J)^{(4)} = \big(f^J(\tilde y_n)\big)^{(3)} = \sum_{K,L,M} f_{KLM}^J(\tilde y_n)\tilde v_n^K\tilde v_n^L\tilde v_n^M + 3\sum_{K,L} f_{KL}^J(\tilde y_n)f^K(\tilde y_n)\tilde v_n^L + \sum_{K,L} f_K^J(\tilde y_n)f_L^K(\tilde y_n)\tilde v_n^L. \qquad (14)$$
To obtain the order conditions up to order four, we compare the results in (12) and (13) with those in (14). Then, these order conditions are:
Order 1: We compare $(\hat v_{n+1}^J)^{(1)}$ with $(\tilde v_n^J)^{(1)}$ and $(\hat y_{n+1}^J)^{(1)}$ with $(\tilde y_n^J)^{(1)}$:
$$\sum_{i=1}^{s} b_i = 1.$$
Order 2: We compare $(\hat v_{n+1}^J)^{(2)}$ with $(\tilde v_n^J)^{(2)}$ and $(\hat y_{n+1}^J)^{(2)}$ with $(\tilde y_n^J)^{(2)}$:
$$\sum_{i=1}^{s}\big(b_i\alpha_i + \beta_i\big) = \frac12, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i} b_i\delta_{ij} = \frac12.$$
Order 3: We compare $(\hat v_{n+1}^J)^{(3)}$ with $(\tilde v_n^J)^{(3)}$ and $(\hat y_{n+1}^J)^{(3)}$ with $(\tilde y_n^J)^{(3)}$:
$$\sum_{i=1}^{s} b_i\alpha_i^2 = \frac13, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i-1}\sum_{l=1}^{j} b_i\alpha_{ij}\delta_{jl} + \sum_{i=1}^{s}\sum_{j=1}^{i}\beta_i\delta_{ij} = \frac16, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i} b_i\big(\delta_{ij}\alpha_j + \gamma_{ij}\big) = \frac16.$$
Order 4: We compare $(\hat v_{n+1}^J)^{(4)}$ with $(\tilde v_n^J)^{(4)}$ and $(\hat y_{n+1}^J)^{(4)}$ with $(\tilde y_n^J)^{(4)}$:
$$\sum_{i=1}^{s} b_i\alpha_i^3 = \frac14, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i-1}\sum_{l=1}^{j} b_i\alpha_i\alpha_{ij}\delta_{jl} = \frac18, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i-1}\sum_{l=1}^{j} b_i\alpha_{ij}\big(\delta_{jl}\alpha_l + \gamma_{jl}\big) + \sum_{i=1}^{s}\sum_{j=1}^{i}\beta_i\big(\delta_{ij}\alpha_j + \gamma_{ij}\big) = \frac1{24},$$
$$\sum_{i=1}^{s}\sum_{j=1}^{i} b_i\delta_{ij}\alpha_j^2 = \frac1{12}, \qquad \sum_{i=1}^{s}\sum_{j=1}^{i}\sum_{l=1}^{j-1}\sum_{m=1}^{l} b_i\delta_{ij}\alpha_{jl}\delta_{lm} + \sum_{i=1}^{s}\sum_{j=1}^{i}\sum_{l=1}^{j} b_i\gamma_{ij}\delta_{jl} = \frac1{24}.$$
By using the notation $\alpha^j = (\alpha_1^j, \ldots, \alpha_s^j)^T$ and $(b\alpha)^T = (b_1\alpha_1, \ldots, b_s\alpha_s)$, these order conditions can be written as follows:
Order 1:
$$b^T e = 1.$$
Order 2:
$$b^T\alpha + \beta^T e = \frac12, \qquad b^T A_\delta e = \frac12.$$
Order 3:
$$b^T\alpha^2 = \frac13, \qquad (b^T A_\alpha + \beta^T) A_\delta e = \frac16, \qquad b^T(A_\delta A_\alpha + A_\gamma)e = \frac16.$$
Order 4:
$$b^T\alpha^3 = \frac14, \qquad (b\alpha)^T A_\alpha A_\delta e = \frac18, \qquad (b^T A_\alpha + \beta^T)(A_\delta\alpha + A_\gamma e) = \frac1{24}, \qquad b^T A_\delta\alpha^2 = \frac1{12}, \qquad b^T(A_\delta A_\alpha + A_\gamma)A_\delta e = \frac1{24}.$$
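A routine that checks the matrix-form conditions is a convenient companion when searching for coefficients. The sketch below (ours, not from the paper) encodes the conditions up to order three and applies them to the one-stage method derived in Section 5, which satisfies orders 1 and 2 but, having $\alpha_1 = 0$, cannot satisfy $b^T\alpha^2 = 1/3$:

```python
import numpy as np

def satisfied_orders(alpha, A_alpha, A_delta, A_gamma, b, beta, tol=1e-12):
    e = np.ones_like(b)
    residuals = {
        1: [b @ e - 1.0],
        2: [b @ alpha + beta @ e - 0.5,
            b @ (A_delta @ e) - 0.5],
        3: [b @ alpha**2 - 1/3,
            (b @ A_alpha + beta) @ (A_delta @ e) - 1/6,
            b @ ((A_delta @ A_alpha + A_gamma) @ e) - 1/6],
        4: [b @ alpha**3 - 1/4,
            (b * alpha) @ (A_alpha @ (A_delta @ e)) - 1/8,
            (b @ A_alpha + beta) @ (A_delta @ alpha + A_gamma @ e) - 1/24,
            b @ (A_delta @ alpha**2) - 1/12,
            b @ ((A_delta @ A_alpha + A_gamma) @ (A_delta @ e)) - 1/24],
    }
    return {p: all(abs(r) < tol for r in res) for p, res in residuals.items()}

coeffs = dict(alpha=np.array([0.0]), A_alpha=np.zeros((1, 1)),
              A_delta=np.array([[0.5]]), A_gamma=np.array([[0.25]]),
              b=np.array([1.0]), beta=np.array([0.5]))
print(satisfied_orders(**coeffs))   # {1: True, 2: True, 3: False, 4: False}
```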

5. Numerical Experiments

This section is devoted to the numerical experiments we have carried out in order to prove the advantages of these methods when solving a non-linear equation.
The Rosenbrock–Nyström methods we have chosen are the ones that are developed and studied in [11]. The first one is the one-stage method given by:
[Coefficient tableau of the one-stage method RN2 (image in the original)]
This method can be obtained from the well-known Rosenbrock method with
$$\tilde\alpha = 0, \qquad \tilde A_\alpha = 0, \qquad \tilde A_\gamma = \frac12, \qquad \tilde b^T = 1.$$
Therefore, the equations that this Rosenbrock–Nyström method provides are the same as those obtained with the Rosenbrock method when converting the second-order problem into a first-order one.
This Rosenbrock method is A-stable, but not L-stable [6]. The method presented here is the only Rosenbrock–Nyström method with one stage and classical order 2 that is stable. In fact, it can be proved that this method is P-stable, as the eigenvalues of its stability matrix are $\frac{4-\theta^2}{4+\theta^2} \pm \frac{4\theta}{4+\theta^2}\,i$, for $\theta \in \mathbb{R}$. They are complex conjugate with modulus 1, except for $\theta = 0$ and $\theta \to \infty$, where they are double and equal to 1 and $-1$, respectively. There are no one-stage Rosenbrock–Nyström methods with classical order 2 that are just R-stable but not P-stable, so in order to obtain methods of this type, we should use at least two stages. Another possibility is to construct one-stage Rosenbrock–Nyström methods with complex coefficients, following the ideas given in [16] for Rosenbrock methods. This will be the subject of future work.
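Returning to the P-stability claim, the modulus of the stated eigenvalues can be checked directly:
$$\left(\frac{4-\theta^2}{4+\theta^2}\right)^2 + \left(\frac{4\theta}{4+\theta^2}\right)^2 = \frac{16 - 8\theta^2 + \theta^4 + 16\theta^2}{(4+\theta^2)^2} = \frac{(4+\theta^2)^2}{(4+\theta^2)^2} = 1.$$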
The second one is an R-stable method with two stages and classical order 3, which we will call RN3. The coefficients of this method are given by the following array:
[Coefficient tableau of RN3 (image in the original)]
The third one is a method with three stages and classical order 4. This method is called RN4. The coefficients of this method are given by the following array:
[Coefficient tableau of RN4 (image in the original)]
In the tables, the local and global errors and orders are given. The global error has been calculated as the difference between the exact solution at T = 1 and the numerical one obtained with our method.
Example 3.
The first problem we have solved is the first problem presented in Example 1, that is, problem (2). Here, the parameters we have selected are
$$N = 20, \qquad \lambda = 1000, \qquad \alpha = 2, \qquad p = 3.$$
We present the results obtained for the method with classical order 2 in Table 1, where we can see that the expected orders are obtained for the solution as well as for the derivative. For the local error in the solution, we obtain one order more than the one expected since the odd derivatives vanish.
The results for the method with classical order 3 are given in Table 2 where we can see that the expected orders are obtained for the solution as well as for the derivative. For the local error in the derivative, we obtain one order more than the one expected since the odd derivatives vanish.
The results for the method with classical order 4 are presented in Table 3, where again we can see that the expected results are obtained. The local order in the solution is one unit higher than the expected one as the odd derivatives vanish.
Example 4.
The second problem is the equation of motion of a soliton in an exponential lattice. This problem was first proposed in [17], and it is a highly non-linear system:
$$u_j''(t) = -\big(2e^{u_j} - e^{u_{j-1}} - e^{u_{j+1}}\big), \quad j = 1, \ldots, N,$$
$$u_j(0) = \ln\big(1 + \beta^2\operatorname{sech}^2(\alpha j)\big), \qquad u_j'(0) = -\frac{2\beta^3\operatorname{sech}^2(\alpha j)\tanh(\alpha j)}{1 + \beta^2\operatorname{sech}^2(\alpha j)}, \qquad (15)$$
with α = 2, β = sinh(α) and N = 20. The solution of this problem is $u_j(t) = \ln\big(1 + \beta^2\operatorname{sech}^2(\alpha j + \beta t)\big)$.
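The right-hand side and the tridiagonal Jacobian needed by the method are easy to code. In the sketch below (ours), the treatment of the lattice ends is an assumption, since the paper does not specify it: the boundary values $u_0$ and $u_{N+1}$ are taken from the known soliton solution.

```python
import numpy as np

alpha, N = 2.0, 20
beta = np.sinh(alpha)

def soliton(j, t):
    # exact solution u_j(t) = ln(1 + beta^2 sech^2(alpha*j + beta*t))
    return np.log(1.0 + beta**2 / np.cosh(alpha * j + beta * t)**2)

def f(t, u):
    # u_j'' = -(2 e^{u_j} - e^{u_{j-1}} - e^{u_{j+1}}); ends taken from the
    # soliton (an assumption, not stated in the paper)
    ue = np.concatenate(([soliton(0, t)], u, [soliton(N + 1, t)]))
    return -(2.0 * np.exp(ue[1:-1]) - np.exp(ue[:-2]) - np.exp(ue[2:]))

def f_y(t, u):
    # tridiagonal Jacobian: df_j/du_j = -2 e^{u_j}, df_j/du_{j+-1} = e^{u_{j+-1}}
    return (np.diag(-2.0 * np.exp(u))
            + np.diag(np.exp(u[1:]), 1)
            + np.diag(np.exp(u[:-1]), -1))
```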
The results for method RN2 can be seen in Table 4. There, we can observe that we also obtain the expected local and global order for the solution and the derivative.
The results for the second method, RN3, can be seen in Table 5, for which the expected results have been obtained.
Finally, the results for method RN4 are presented in Table 6.
Example 5.
The last problem we have solved is the non-linear Euler–Bernoulli Equation (3) presented in Example 2.
We first discretize in space by using a spectral method suggested in [18] and studied in depth in [8]. This spectral method has been derived to discretize in space a problem such as
$$Au = F, \quad x \in \Omega, \qquad u = G, \quad x \in \Gamma,$$
with $Au = u_{xxxx}$, $F \in X$, $G \in Y$, and $u \in D(A)$, where X is a Hilbert space of functions and Y a suitable space of functions defined on the boundary. In particular, for the problem we have, we take $\Omega = (-1,1)$ with boundary $\Gamma = \{-1,1\}$ and boundary conditions $u(-1) = g_{-1}$, $u(1) = g_1$, $u_x(-1) = h_{-1}$, $u_x(1) = h_1$. For the spatial discretization of this problem, we consider a grid $\Omega_J \subset \Omega \cup \Gamma$ associated with a natural parameter J (related to the number of nodes on it). Then, we take $X_J$ as the space of polynomials of degree less than or equal to J, integer $J \ge 4$, and $X_{J,0}$ as the subspace of $X_J$ of polynomials such that they and their derivatives vanish on the boundary. We will consider in $X_J$ an approximation of the $L^2$-norm in $X = L^2(\Omega)$, which will be denoted by $\|\cdot\|_J$. Then, the spatial discretization is given by
$$A_{J,0}U_J + C_J u = P_J F,$$
with $A_{J,0}$ being the matrix that discretizes $A_0$, and $C_J$ and $P_J$ being the operators associated with the discretization of A which take into account the information on the boundary of u and F, respectively. The interior nodes are denoted by $\mu_j$, $j = 1,\ldots,J$. They are the zeros of the second derivative of the Legendre polynomial of degree J + 2.
In this way, after the spatial discretization, our problem reads
$$U_J'' = -\big(A_{J,0}U_J + C_J u\big) + U_J^2 + P_J H(t), \quad 0 \le t \le T = 1, \qquad U_J(0) = U_{J,0}, \quad U_J'(0) = V_{J,0},$$
with $\big(P_J H(t)\big)_j = h(t, \mu_j)$, $(U_{J,0})_j = u(0, \mu_j)$ and $(V_{J,0})_j = u_t(0, \mu_j)$.
In the numerical experiments, we have taken J = 40 .
In this part, we present the results that have been obtained with method RN2 in Table 7, the results obtained with method RN3 in Table 8, and the results that have been obtained with method RN4 in Table 9. In this case, with RN4, we do not obtain order 4 in the global error in the derivative because of the order reduction phenomenon, which has been deeply studied for other numerical integrators such as Runge–Kutta methods, Rosenbrock methods, or RKN methods [19,20,21]. The way to avoid this order reduction is the objective of a forthcoming paper.
Finally, we present the comparison in terms of computational cost between RN2, RN3, RN4, and the RKN method with two stages and classical order 3 which is given in [22]:
[Coefficient tableau of RKN3 (image in the original)]
We will denote this method by the abbreviation RKN3.
The results can be seen in Figure 1 and Table 10, where we give the values of τ and of the computational cost needed to obtain global errors in the solution in the range $[10^{-12}, 10^{-6}]$. In the graph, the error has been plotted as a function of τ in double logarithmic scale. In this graph and this table, we can see that the method which requires the most computational cost to achieve a desired error is RN2. For a fixed value of τ, this method presents a lower computational cost than the other three, but, as its classical order is only two, it needs a smaller value of τ to obtain the same error as the other three methods and, in this way, it needs more time steps. For example, if we take τ = 1/320, we can see that RN2 takes $1.8751 \times 10^{-2}$ seconds to obtain the result, while RN4 needs $3.7180 \times 10^{-2}$ seconds, nearly double. However, for this value of τ, the error obtained with RN2 is $7.2957 \times 10^{-7}$, while with RN4 it is $6.6974 \times 10^{-9}$. To obtain this error with the RN2 method, we need a value of τ between 1/2560 and 1/5120, so the number of steps is much greater and so is the computational cost.
As we can also see, both RN3 and RN4 are better in terms of the computational cost than RKN3. As RN4 has more stages than RN3, for bigger values of τ, RN3 needs fewer steps to obtain the desired error, but when τ is smaller, because of the classical order of the RN4 method, it requires fewer time steps and the computational cost of this method is lower.

6. Conclusions and Future Work

As we have seen in this work, these new methods are a good choice when we have a second-order in time problem to solve. By using these methods, we avoid the drawback that other integrators like Runge–Kutta–Nyström ones present when the problem we are solving is a non-linear one.
This work is an introductory work about these methods, and there is more work to do in the near future. For example, we have the problem of the order reduction that this type of method presents when we solve a second-order in time PDE with non-vanishing boundary conditions. This problem has already been studied for other existing methods such as Runge–Kutta, Rosenbrock, and Runge–Kutta–Nyström.
Another piece of work to be done is the construction and study of Rosenbrock–Nyström methods with complex coefficients. As we have seen, the only stable method that exists with only one stage is a P-stable one. To obtain R-stable methods (not P-stable) with classical order two, we should have two intermediate stages. Rosenbrock methods with complex coefficients have been proven to be very efficient when solving stiff problems [16,23]. In this way, one piece of future work is to use this idea of complex coefficients to obtain efficient Rosenbrock–Nyström methods to solve second-order in time problems.
To conclude the work to be done, we are also interested in the comparison between different Rosenbrock–Nyström methods, in order to study the best method to be used in practice. One high-stage scheme could be worse in practice than one method with fewer stages because of machine round-off errors.

Funding

This research was funded by the Ministerio de Ciencia e Innovación through project PGC2018-101443-B-I00.

Acknowledgments

The author acknowledges the useful comments given by the referees.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bujanda, B.; Jorge, J.C. Fractional Step Runge–Kutta methods for time dependent coefficient parabolic problems. Appl. Numer. Math. 2003, 45, 99–122.
  2. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I, Nonstiff Problems, 2nd ed.; Springer: Berlin, Germany, 2000.
  3. Hochbruck, M.; Ostermann, A. Exponential integrators. Acta Numer. 2010, 19, 209–286.
  4. Peaceman, D.W.; Rachford, H.H. The numerical solution of parabolic and elliptic differential equations. J. SIAM 1955, 3, 28–42.
  5. Rosenbrock, H.H. Some general implicit processes for the numerical solution of differential equations. Comput. J. 1963, 5, 329–330.
  6. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II, Stiff and Differential-Algebraic Problems, 2nd ed.; Springer: Berlin, Germany, 1996.
  7. Hochbruck, M.; Ostermann, A.; Schweitzer, J. Exponential Rosenbrock-type methods. SIAM J. Numer. Anal. 2008, 47, 786–803.
  8. Moreta, M.J. Discretization of Second-Order in Time Partial Differential Equations by Means of Runge–Kutta–Nyström Methods. Ph.D. Thesis, Department of Applied Mathematics, University of Valladolid, Valladolid, Spain, 2005.
  9. Moreta, M.J.; Bujanda, B.; Jorge, J.C. Numerical resolution of linear evolution multidimensional problems of second order in time. Numer. Methods Partial Differ. Equ. 2012, 28, 597–620.
  10. Goyal, S.; Serbin, S.M. A class of Rosenbrock-type schemes for second-order nonlinear systems of ordinary differential equations. Comput. Math. Appl. 1987, 13, 351–362.
  11. Moreta, M.J. Construction of Rosenbrock–Nyström methods up to order four. 2021; in preparation.
  12. Fermi, E.; Pasta, J.; Ulam, S. Studies of nonlinear problems I. Lect. Appl. Math. 1974, 15, 143–156.
  13. Goldstein, J.A. Semigroups of Linear Operators and Applications; Oxford University Press: New York, NY, USA, 1985.
  14. Crouzeix, M. Numerical Range and Hilbertian Functional Calculus; Institute of Mathematical Research of Rennes, Université de Rennes: Rennes, France, 2003.
  15. Alonso-Mallo, I.; Cano, B.; Moreta, M.J. Stability of Runge–Kutta–Nyström methods. J. Comput. Appl. Math. 2006, 189, 120–131.
  16. Al'shin, A.B.; Al'shina, E.A.; Kalitkin, N.N.; Koryagina, A.B. Rosenbrock schemes with complex coefficients for stiff and differential-algebraic systems. Comput. Math. Math. Phys. 2006, 46, 1320–1340.
  17. Toda, M. Waves in nonlinear lattice. Suppl. Prog. Theor. Phys. 1970, 45, 174–200.
  18. Bernardi, C.; Maday, Y. Approximations Spectrales de Problèmes aux Limites Elliptiques; Springer: Berlin, Germany, 1992.
  19. Alonso-Mallo, I.; Cano, B. Spectral/Rosenbrock discretizations without order reduction for linear parabolic problems. Appl. Numer. Math. 2002, 47, 247–268.
  20. Alonso-Mallo, I.; Cano, B. Avoiding order reduction of Runge–Kutta discretizations for linear time-dependent parabolic problems. BIT 2004, 44, 1–20.
  21. Alonso-Mallo, I.; Cano, B.; Moreta, M.J. Optimal time order when implicit Runge–Kutta–Nyström methods solve linear partial differential equations. Appl. Numer. Math. 2008, 58, 539–562.
  22. Alonso-Mallo, I.; Cano, B.; Moreta, M.J. Stable Runge–Kutta–Nyström methods for dissipative problems. Numer. Algorithms 2006, 42, 193–203.
  23. Al'shin, A.B.; Al'shina, E.A.; Limonov, A.G. Two-stage complex Rosenbrock schemes for stiff systems. Comput. Math. Math. Phys. 2009, 49, 261–278.
Figure 1. Graph for the global error in the solution with RN2 (⋄, black), RN3 (□, magenta), RN4 (*, green), and RKN3 (o, blue).
Table 1. Local and global errors and orders for problem (2) with RN2.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280          | 1/2560
Loc. err. u(t)   | 8.7268 × 10^-7 | 5.4684 × 10^-8 | 3.4200 × 10^-9 | 2.1379 × 10^-10 | 1.3362 × 10^-11 | 8.3524 × 10^-13
Order            | –              | 3.9962         | 3.9991         | 3.9998          | 3.9999          | 3.9998
Loc. err. u′(t)  | 1.3910 × 10^-4 | 1.7433 × 10^-5 | 2.1806 × 10^-6 | 2.7262 × 10^-7  | 3.4079 × 10^-8  | 4.2599 × 10^-9
Order            | –              | 2.9962         | 2.9991         | 2.9998          | 2.9999          | 3.0000
Glob. err. u(t)  | 1.9668 × 10^-4 | 4.9142 × 10^-5 | 1.2282 × 10^-5 | 3.0700 × 10^-6  | 7.6745 × 10^-7  | 1.9186 × 10^-7
Order            | –              | 2.0008         | 2.0004         | 2.0002          | 2.0001          | 2.0001
Glob. err. u′(t) | 1.3275 × 10^-4 | 3.5692 × 10^-5 | 9.0829 × 10^-6 | 2.2811 × 10^-6  | 5.7098 × 10^-7  | 1.4279 × 10^-7
Order            | –              | 1.8951         | 1.9744         | 1.9934          | 1.9982          | 1.9994
Table 2. Local and global errors and orders for problem (2) with RN3.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280          | 1/2560
Loc. err. u(t)   | 7.4745 × 10^-7 | 4.8042 × 10^-8 | 3.0236 × 10^-9 | 1.8931 × 10^-10 | 1.1837 × 10^-11 | 7.3971 × 10^-13
Order            | –              | 3.9596         | 3.9899         | 3.9975          | 3.9994          | 4.0002
Loc. err. u′(t)  | 2.5478 × 10^-6 | 8.0729 × 10^-8 | 2.5316 × 10^-9 | 7.9180 × 10^-11 | 2.4749 × 10^-12 | 7.7275 × 10^-14
Order            | –              | 4.9800         | 4.9950         | 4.9988          | 4.9997          | 5.0012
Glob. err. u(t)  | 4.6594 × 10^-6 | 4.0170 × 10^-7 | 3.8542 × 10^-8 | 4.0814 × 10^-9  | 4.6400 × 10^-10 | 5.5107 × 10^-11
Order            | –              | 3.5360         | 3.3816         | 3.2393          | 3.1369          | 3.0738
Glob. err. u′(t) | 9.4688 × 10^-5 | 1.2141 × 10^-5 | 1.5315 × 10^-6 | 1.9215 × 10^-7  | 2.4058 × 10^-8  | 3.0095 × 10^-9
Order            | –              | 2.9633         | 2.9869         | 2.9947          | 2.9977          | 2.9989
Table 3. Local and global errors and orders for problem (2) with RN4.

τ                | 1/80           | 1/160          | 1/320           | 1/640           | 1/1280          | 1/2560
Loc. err. u(t)   | 6.1363 × 10^-7 | 9.9822 × 10^-9 | 1.5757 × 10^-10 | 2.4682 × 10^-12 | 3.8384 × 10^-14 | 4.5776 × 10^-16
Order            | –              | 5.9419         | 5.9853          | 5.9964          | 6.0068          | 6.3898
Loc. err. u′(t)  | 1.8146 × 10^-6 | 6.9583 × 10^-8 | 2.2799 × 10^-9  | 7.2081 × 10^-11 | 2.2592 × 10^-12 | 7.0702 × 10^-14
Order            | –              | 4.7048         | 4.9317          | 4.9832          | 4.9958          | 4.9979
Glob. err. u(t)  | 2.6326 × 10^-6 | 1.9208 × 10^-7 | 1.2682 × 10^-8  | 8.1048 × 10^-10 | 5.1163 × 10^-11 | 3.2167 × 10^-12
Order            | –              | 3.7767         | 3.9209          | 3.9678          | 3.9856          | 3.9914
Glob. err. u′(t) | 7.9785 × 10^-5 | 2.7301 × 10^-6 | 9.5300 × 10^-8  | 3.5643 × 10^-9  | 1.4775 × 10^-10 | 6.9183 × 10^-12
Order            | –              | 4.8691         | 4.8403          | 4.7408          | 4.5924          | 4.4166
Table 4. Local and global errors and orders for problem (15) with RN2.

τ                | 1/80           | 1/160          | 1/320          | 1/640          | 1/1280          | 1/2560
Loc. err. u(t)   | 2.6003 × 10^-6 | 3.0800 × 10^-7 | 3.7439 × 10^-8 | 4.6137 × 10^-9 | 5.7258 × 10^-10 | 7.1314 × 10^-11
Order            | –              | 3.0777         | 3.0403         | 3.0206         | 3.0104          | 3.0052
Loc. err. u′(t)  | 1.1920 × 10^-4 | 1.4952 × 10^-5 | 1.8721 × 10^-6 | 2.3420 × 10^-7 | 2.9286 × 10^-8  | 3.6615 × 10^-9
Order            | –              | 2.9949         | 2.9976         | 2.9989         | 2.9994          | 2.9997
Glob. err. u(t)  | 1.1100 × 10^-4 | 2.5216 × 10^-5 | 5.9894 × 10^-6 | 1.4582 × 10^-6 | 3.5965 × 10^-7  | 8.9303 × 10^-8
Order            | –              | 2.1381         | 2.0739         | 2.0383         | 2.0195          | 2.0098
Glob. err. u′(t) | 2.6457 × 10^-4 | 6.6306 × 10^-5 | 1.6598 × 10^-5 | 4.1520 × 10^-6 | 1.0383 × 10^-6  | 2.5963 × 10^-7
Order            | –              | 1.9964         | 1.9982         | 1.9991         | 1.9995          | 1.9998
Table 5. Local and global errors and orders for problem (15) with RN3.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280          | 1/2560
Loc. err. u(t)   | 6.0678 × 10^-7 | 3.8272 × 10^-8 | 2.4025 × 10^-9 | 1.5048 × 10^-10 | 9.4150 × 10^-12 | 5.8841 × 10^-13
Order            | –              | 3.9868         | 3.9937         | 3.9969          | 3.9985          | 4.0001
Loc. err. u′(t)  | 7.2800 × 10^-7 | 4.5052 × 10^-8 | 2.8007 × 10^-9 | 1.7456 × 10^-10 | 1.0895 × 10^-11 | 6.7900 × 10^-13
Order            | –              | 4.0143         | 4.0078         | 4.0040          | 4.0020          | 4.0041
Glob. err. u(t)  | 3.6685 × 10^-6 | 4.6022 × 10^-7 | 5.7584 × 10^-8 | 7.2002 × 10^-9  | 9.0011 × 10^-10 | 1.1252 × 10^-10
Order            | –              | 2.9948         | 2.9986         | 2.9996          | 2.9999          | 2.9999
Glob. err. u′(t) | 3.2844 × 10^-6 | 4.1504 × 10^-7 | 5.2155 × 10^-8 | 6.5363 × 10^-9  | 8.1810 × 10^-10 | 1.0233 × 10^-10
Order            | –              | 2.9843         | 2.9924         | 2.9962          | 2.9981          | 2.9991
Table 6. Local and global errors and orders for problem (15) with RN4.

τ                | 1/80           | 1/160          | 1/320           | 1/640           | 1/1280          | 1/2560
Loc. err. u(t)   | 1.3797 × 10^-7 | 2.4952 × 10^-9 | 4.8387 × 10^-11 | 1.0514 × 10^-12 | 2.5920 × 10^-14 | 4.6849 × 10^-16
Order            | –              | 5.7891         | 5.6884          | 5.5242          | 5.3421          | 5.7899
Loc. err. u′(t)  | 1.6797 × 10^-6 | 5.3774 × 10^-8 | 1.6885 × 10^-9  | 5.2794 × 10^-11 | 1.6491 × 10^-12 | 5.2897 × 10^-14
Order            | –              | 4.9652         | 4.9931          | 4.9992          | 5.0006          | 4.9624
Glob. err. u(t)  | 1.4563 × 10^-6 | 5.7019 × 10^-8 | 3.4328 × 10^-9  | 2.3119 × 10^-10 | 1.5242 × 10^-11 | 9.8139 × 10^-13
Order            | –              | 4.6748         | 4.0540          | 3.8922          | 3.9229          | 3.9571
Glob. err. u′(t) | 4.0638 × 10^-6 | 2.5938 × 10^-7 | 1.6313 × 10^-8  | 1.0216 × 10^-9  | 6.3895 × 10^-11 | 3.9944 × 10^-12
Order            | –              | 3.9697         | 3.9909          | 3.9971          | 3.9990          | 3.9997
Table 7. Local and global errors and orders for problem (3) with RN2.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280
Loc. err. u(t)   | 3.5093 × 10^-7 | 2.3204 × 10^-8 | 1.5422 × 10^-9 | 1.1241 × 10^-10 | 1.0014 × 10^-11
Order            | –              | 3.9187         | 3.9114         | 3.7781          | 3.4887
Loc. err. u′(t)  | 5.5860 × 10^-5 | 7.2984 × 10^-6 | 9.2722 × 10^-7 | 1.1657 × 10^-7  | 1.4601 × 10^-8
Order            | –              | 2.9361         | 2.9766         | 2.9917          | 2.9972
Glob. err. u(t)  | 1.1780 × 10^-5 | 2.9189 × 10^-6 | 7.2957 × 10^-7 | 1.8249 × 10^-7  | 4.5639 × 10^-8
Order            | –              | 2.0129         | 2.0003         | 1.9993          | 1.9994
Glob. err. u′(t) | 1.9962 × 10^-4 | 4.8801 × 10^-5 | 1.2060 × 10^-5 | 2.9991 × 10^-6  | 7.4888 × 10^-7
Order            | –              | 2.0322         | 2.0167         | 2.0076          | 2.0017
Table 8. Local and global errors and orders for problem (3) with RN3.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280
Loc. err. u(t)   | 2.1291 × 10^-7 | 1.7820 × 10^-8 | 1.2296 × 10^-9 | 7.9634 × 10^-11 | 5.0414 × 10^-12
Order            | –              | 3.5786         | 3.8572         | 3.9487          | 3.9815
Loc. err. u′(t)  | 1.3417 × 10^-5 | 9.5553 × 10^-7 | 6.8098 × 10^-8 | 4.8287 × 10^-9  | 3.3715 × 10^-10
Order            | –              | 3.8117         | 3.8106         | 3.8179          | 3.8402
Glob. err. u(t)  | 1.2456 × 10^-6 | 1.6021 × 10^-7 | 2.0110 × 10^-8 | 2.5093 × 10^-9  | 3.1001 × 10^-10
Order            | –              | 2.9587         | 2.9940         | 3.0026          | 3.0169
Glob. err. u′(t) | 1.7959 × 10^-5 | 2.4474 × 10^-6 | 3.5574 × 10^-7 | 5.0192 × 10^-8  | 6.8569 × 10^-9
Order            | –              | 2.8754         | 2.7824         | 2.8253          | 2.8718
Table 9. Local and global errors and orders for problem (3) with RN4.

τ                | 1/80           | 1/160          | 1/320          | 1/640           | 1/1280
Loc. err. u(t)   | 1.7070 × 10^-6 | 6.1236 × 10^-8 | 2.1793 × 10^-9 | 7.7642 × 10^-11 | 2.7480 × 10^-12
Order            | –              | 4.8010         | 4.8125         | 4.8109          | 4.8204
Loc. err. u′(t)  | 4.8165 × 10^-5 | 3.4176 × 10^-6 | 2.4071 × 10^-7 | 1.6643 × 10^-8  | 1.1149 × 10^-9
Order            | –              | 3.8170         | 3.8276         | 3.8543          | 3.8999
Glob. err. u(t)  | 3.0516 × 10^-6 | 1.6639 × 10^-7 | 6.6974 × 10^-9 | 2.8418 × 10^-10 | 1.4797 × 10^-11
Order            | –              | 4.1969         | 4.6348         | 4.5587          | 4.2634
Glob. err. u′(t) | 2.1788 × 10^-4 | 2.5549 × 10^-5 | 2.8936 × 10^-6 | 3.1875 × 10^-7  | 3.4444 × 10^-8
Order            | –              | 3.0922         | 3.1423         | 3.1824          | 3.2101
Table 10. Computational cost (CPU time in seconds) and global errors in the solution for problem (3) with RN2, RN3, RN4, and RKN3.

τ        | CPU RN2       | Error RN2      | CPU RN3       | Error RN3      | CPU RN4       | Error RN4      | CPU RKN3      | Error RKN3
1/80     | –             | –              | 7.32 × 10^-3  | 1.24 × 10^-6   | 9.66 × 10^-3  | 3.05 × 10^-6   | 1.72 × 10^-2  | 6.34 × 10^-7
1/160    | 1.04 × 10^-2  | 2.92 × 10^-6   | 1.60 × 10^-2  | 1.60 × 10^-7   | 2.05 × 10^-2  | 1.66 × 10^-7   | 3.13 × 10^-2  | 8.04 × 10^-8
1/320    | 1.88 × 10^-2  | 7.30 × 10^-7   | 2.70 × 10^-2  | 2.01 × 10^-8   | 3.72 × 10^-2  | 6.70 × 10^-9   | 5.83 × 10^-2  | 1.01 × 10^-8
1/640    | 3.70 × 10^-2  | 1.82 × 10^-7   | 5.49 × 10^-2  | 2.51 × 10^-9   | 7.07 × 10^-2  | 2.84 × 10^-10  | 1.12 × 10^-1  | 1.26 × 10^-9
1/1280   | 7.23 × 10^-2  | 4.56 × 10^-8   | 1.02 × 10^-1  | 3.10 × 10^-10  | 1.36 × 10^-1  | 1.48 × 10^-11  | 2.13 × 10^-1  | 1.53 × 10^-10
1/2560   | 1.34 × 10^-1  | 1.14 × 10^-8   | –             | –              | –             | –              | –             | –
1/5120   | 2.61 × 10^-1  | 2.86 × 10^-9   | –             | –              | –             | –              | –             | –
1/10240  | 5.12 × 10^-1  | 7.17 × 10^-10  | –             | –              | –             | –              | –             | –
1/20480  | 1.03 × 10^0   | 1.82 × 10^-10  | –             | –              | –             | –              | –             | –
1/40960  | 2.04 × 10^0   | 4.86 × 10^-11  | –             | –              | –             | –              | –             | –