Abstract
In several important scientific fields, the efficient numerical solution of symmetric systems of ordinary differential equations, which are usually characterized by oscillation and periodicity, remains an open problem of interest. In this paper, we construct a class of embedded exponentially fitted Rosenbrock methods with variable coefficients and adaptive step size that achieve third-order convergence. The methods are developed by applying the exponential fitting technique to two-stage Rosenbrock methods and combining them with embedded methods to estimate the frequency. Using Richardson extrapolation, we derive a step size control strategy that makes the step size adaptive. Numerical experiments verify the validity and efficiency of our methods.
1. Introduction
In several important scientific fields such as quantum mechanics, elasticity, and electronics, many problems can be represented by mathematical models of symmetric systems, see, e.g., [1,2,3,4], which usually lead to ordinary differential equations characterized by oscillation and periodicity [5,6], i.e.,
In view of the oscillation and periodicity of the equations in symmetric systems, exponentially fitted methods, whose theoretical basis was first provided by Gautschi [7] and Lyche [8], have been considered for these equations in many studies. For instance, the authors of [9,10] investigated exponentially fitted two-step BDF methods and linear multistep methods. The idea of exponential fitting was first applied to Runge–Kutta methods by Simos [11] in 1998. Since then, based on Simos’ research, a number of exponentially fitted Runge–Kutta methods have been constructed [12,13,14]. Some scholars have also estimated the frequency of the methods by analyzing the local truncation error and controlled the step size to improve accuracy and efficiency [15,16].
However, the exponentially fitted technique has mostly been applied to Runge–Kutta methods, which either struggle with stiff problems in their explicit form or require a large amount of computation in their implicit form. For this reason, Rosenbrock methods, based on the idea introduced by Rosenbrock [17] that a single Newton iteration is enough to preserve the stability properties of diagonally implicit Runge–Kutta methods, have been considered for solving ordinary differential equations in symmetric systems. The general form of Rosenbrock methods for first-order ordinary differential equations was given by Hairer and Wanner [18]; since then, many scholars have developed this form and analyzed its implementation, see, e.g., [19,20] and their references. Rosenbrock methods not only retain the stability of the corresponding diagonally implicit Runge–Kutta methods, but also reduce the computational cost compared with implicit methods, since only one linear system of equations needs to be solved per step. At present, there have been some studies on exponentially fitted Rosenbrock methods [21,22,23], but these methods, which use a constant frequency and step size, have difficulty solving the equations efficiently and adaptively.
In this paper, we will combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to estimate the frequency before each step and control the step size by using Richardson extrapolation. By the frequency estimation and step size control, our methods with variable coefficients can solve the equations with oscillation and periodicity efficiently, and the order of convergence can be improved by one compared with the methods with constant coefficients.
The outline of this paper is as follows. In Section 2, a class of exponentially fitted Rosenbrock methods is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods. In Section 3, we combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to construct a kind of embedded variable coefficient exponentially fitted Rosenbrock (3,2) methods, and present the frequency estimation and step size control strategy. In Section 4, three numerical tests are presented to verify the validity of our methods by comparing the number of steps, the error, and the computing time with those of other numerical methods. Section 5 gives some discussion and remarks.
2. A Class of Exponentially Fitted Rosenbrock Methods
In this section, a class of exponentially fitted Rosenbrock methods for the models of the ordinary differential equations is constructed, and we give the local truncation error, frequency estimation, and stability analysis of the methods.
Applying the s-stage Rosenbrock method to solve system (1) yields
where h is the step size, , , , , , and are real coefficients which satisfy and for .
We can also change the nonautonomous system (1) to an autonomous system by the following transformation,
Thus, we will focus on the autonomous problems for simplicity of presentation, i.e.,
where is assumed to be thrice continuously differentiable and is assumed to be twice continuously differentiable for the subsequent theoretical analysis. We consider the following s-stage Rosenbrock methods for the autonomous system (2),
or, in tableau form,
i.e.,
where , , , for and
Compared with the classic Rosenbrock methods, the methods (4), in which for , need only one LU-decomposition per step, and their order conditions can be simplified [18]. Moreover, this kind of method has more freedom in its coefficient selection due to the extra coefficients in each step.
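The computational advantage can be made concrete with the following sketch of one step of an s-stage Rosenbrock method for an autonomous system. The tableau arrays `alpha`, `gamma_mat`, `b` and the shared diagonal entry `gamma` are illustrative placeholders, not the paper's fitted coefficients; the point is that, because all diagonal entries equal the same `gamma`, only one LU factorization is needed per step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rosenbrock_step(f, jac, y, h, alpha, gamma_mat, b, gamma):
    """One step of an s-stage Rosenbrock method for y' = f(y).

    Because every diagonal entry of the gamma matrix equals the same
    value gamma, the matrix I - h*gamma*J is factorized only once per
    step and reused for all s stage solves.
    """
    s, n = len(b), y.size
    J = jac(y)
    lu = lu_factor(np.eye(n) - h * gamma * J)   # single LU per step
    k = np.zeros((s, n))
    for i in range(s):
        yi = y + h * sum((alpha[i, j] * k[j] for j in range(i)), np.zeros(n))
        gs = sum((gamma_mat[i, j] * k[j] for j in range(i)), np.zeros(n))
        k[i] = lu_solve(lu, f(yi) + h * (J @ gs))
    return y + h * sum((b[i] * k[i] for i in range(s)), np.zeros(n))
```

With s = 1, b = (1), and no off-diagonal coefficients, this reduces to the linearly implicit Euler method.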
Let this method exactly integrate the function , then, we have
Letting , it follows that
We now try to construct 1–2 stage Rosenbrock methods by (5). If , together with , we have
Solving the above system of equations with gives , , , which yields a class of 1-stage exponentially fitted Rosenbrock methods, i.e.,
If , together with and , we have
In order to obtain the second order methods, Hairer and Wanner provided the following order conditions in [18], i.e.,
Combining (6) and (7) and letting the coefficients of the methods satisfy the order conditions when , we have . Therefore, we obtain the following coefficients of the 2-stage exponentially fitted Rosenbrock methods with order 2,
where is a free coefficient or, in tableau form,
We now consider the following 2-stage exponentially fitted Rosenbrock methods of order 2 and analyze the local truncation error, i.e.,
where the coefficients are given by (8). Based on Bui’s idea in [24], we expand in the geometrical series, i.e.,
If we assume that in (9) is the exact solution at and expand the hyperbolic functions in the coefficients in Taylor series, we can obtain a one-step approximation of the solution at by (9) and (10), i.e.,
Meanwhile, for the exact solution at , we have
Based on (11) and (12), the local truncation error of exponentially fitted Rosenbrock methods (9) can be expressed as
where , , , , and all functions in (13) are evaluated at and .
If we let the principal local truncation error in (13) be zero, we can approximate and renew the frequency in each step to make the coefficients variable by the following equation,
where and for are the ith components of and , respectively; the order of the methods is then improved by one.
On the other hand, we consider the stability of (9). Let , then we get a class of constant coefficient exponentially fitted Rosenbrock methods, i.e.,
By analogy with the definition of the stability function of Runge–Kutta methods in [25], the definition of the stability function of Rosenbrock methods is as follows.
Definition 1.
It is obvious that
where the numerator and denominator of are polynomials of degree no more than s. It means that can be expressed as
where , are two polynomials with real coefficients, , , , , and and contain no common factors. Li pointed out in [25] that the A-stability of single-step methods is equivalent to the A-acceptability of the rational approximations to the function . The following lemma gives the necessary and sufficient condition for the A-acceptability of .
Lemma 1
([25]). Assume that ; then is A-acceptable if and only if the following three inequalities hold:
- (i)
- ;
- (ii)
- when ;
- (iii)
- ,
where , and we adopt the convention that ⋯ if .
Based on Lemma 1, we can give the following theorem for the A-stability of method (15).
Theorem 1.
If , the 2-stage Rosenbrock method (15) with single coefficient γ is A-stable.
Proof.
Let , then we have . When , we have , which means that condition (i) in Lemma 1 holds if .

According to (17), we have . For , we have , which means that condition (ii) in Lemma 1 holds.

Considering condition (iii) in Lemma 1 when , it is easy to see from (17) that , , and , , . If , we have

which means that condition (iii) in Lemma 1 holds.
To sum up, if , is A-acceptable as the rational approximation to the function , which means that the 2-stage Rosenbrock method (15) with single coefficient is A-stable. □
3. Frequency Estimation and Step Size Control
This section will combine the exponentially fitted Rosenbrock methods with the embedded Rosenbrock methods to construct a kind of embedded variable coefficient exponentially fitted Rosenbrock (3,2) methods, and perform the frequency estimation and step size control strategy.
We consider the embedded Rosenbrock methods for system (2). This kind of method combines two Rosenbrock methods with different orders, which have the same coefficients of the lower stage part. The tableau form of the methods is as follows,
and we define and . To estimate the local truncation error of the embedded methods, we give the following lemma referred to in [26].
Lemma 2
([26]). Once a step size h has been chosen, two Rosenbrock methods of orders p and q, respectively, compute two approximations to the solution, and , where ; the error of is then estimated by , i.e.,
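As a minimal illustration of Lemma 2 in code (the norm choice here is an assumption; the paper works componentwise):

```python
import numpy as np

def embedded_error_estimate(y_low, y_high):
    """Estimate the local error of the lower-order solution of an
    embedded pair.  Since the order-q companion y_high (q > p) is
    asymptotically more accurate than the order-p result y_low, their
    difference approximates the local error of y_low."""
    return np.linalg.norm(y_high - y_low, ord=np.inf)
```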
Now, we construct a class of embedded Rosenbrock methods by using the coefficients of the 2-stage Rosenbrock methods (8). It can be expressed in the following tableau form
where is a free coefficient. Together with the order conditions for order 3 in [18]
when we let and , the coefficients of method (18) are determined by (19), i.e.,
where or, in tableau form,
We now choose one of the methods above to introduce the frequency estimation and step size control strategy. We denote the exponentially fitted Rosenbrock (3,2) method (20) by EFRB(3,2) when , i.e.,
We also denote the Rosenbrock (3,2) method (21) by RB(3,2) when , i.e.,
Suppose that and are the numerical solution and the local truncation error for the second-order component of RB(3,2), and and are the numerical solution and the local truncation error for the second-order component of EFRB(3,2), then we have
where , , and all functions in (23) and (24) are evaluated at and . Together with (23) and (24), we have
For each integration step, we can estimate by the following equation,
where and for are the ith components of and , respectively. For the first integration step, is set as a suitable starting frequency .
On the other hand, if we suppose that the numerical solution for the third-order component of RB(3,2) is , we can estimate based on Lemma 2 by
After obtaining the approximations of and by (25) and (26), we substitute them into (14) to estimate and renew the frequency . In addition, if the estimate of is close to zero, the estimate of is zero, which means that the coefficients of the EFRB(3,2) method (21) equal those of the RB(3,2) method (22). If the estimate of is close to zero, the principal local truncation error does not depend on ; in this case, we do not renew the frequency. By estimating the frequency before each step, we obtain a variable coefficient method whose order is increased by one.
Now, we try to control the step size by Richardson extrapolation. We first give the following lemma referred to in [26].
Lemma 3
([26]). Assume that is the numerical solution of one step with step size h of a Rosenbrock method of order p from , and is the numerical solution of two steps with step size ; then the error of can be expressed as
Based on Lemma 3, we give the following step size control strategy. Letting , we compare with the tolerance specified by the user. If , then we accept the step and continue with the computed value; if , then we reject the step and repeat the whole procedure with a new step size. In both cases, following [18], the new step size is given by
where and are the maximum and minimum acceptable factors, respectively, and is the safety factor. In this paper, we let and .
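The whole strategy can be sketched as follows. The `step` callable, the norm, and the particular factor values are illustrative assumptions; the paper fixes its own values of the maximum, minimum, and safety factors.

```python
import numpy as np

def richardson_error(step, y, h, p):
    """Richardson extrapolation (Lemma 3): compare one step of size h
    with two steps of size h/2 for a one-step method of order p; the
    scaled difference estimates the error of the two-half-step result."""
    y_big = step(y, h)
    y_small = step(step(y, h / 2.0), h / 2.0)
    err = np.linalg.norm(y_small - y_big, ord=np.inf) / (2.0**p - 1.0)
    return y_small, err

def new_step_size(h, err, tol, p, fac=0.9, facmin=0.2, facmax=5.0):
    """Step size update in the style of Hairer and Wanner [18]: scale h
    by (tol/err)^(1/(p+1)), damped by the safety factor fac and clipped
    to [facmin, facmax]; used after both accepted and rejected steps."""
    return h * min(facmax, max(facmin, fac * (tol / err) ** (1.0 / (p + 1))))
```

For example, with explicit Euler (order 1) on y' = -y, one step of size 0.1 from y = 1 gives 0.9, two half steps give 0.9025, and the estimated error is 0.0025.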
4. Numerical Experiments
In this section, we present three numerical experiments to test the performance of our methods and compare the error and computational efficiency with those of other numerical methods. All numerical experiments were executed in MATLAB® on a Windows 11 PC with an Intel® Core i5-10210U CPU.
Example 1.
Consider the following ODE system in [27],
with exact solution . Let , then we get a new ODE system
Problem (28) has been solved in the interval with for each component of the solution when .
Example 2.
Consider the following ODE system in [28],
with the following solution:
Problem (29) has been solved in the interval with for each component of the solution.
Example 3.
Consider the following PDE system in [29],
with exact solution . The PDE system (30) can be transformed into an ODE system by spatial discretization with second-order central finite differences, which results in
where , is meant to approximate the solution of (30) at the point and we define . Then, problem (31) has been solved in the interval with for each component of the solution when .
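The semi-discretization step can be sketched as follows. Since system (30) is not reproduced here, the code uses the model problem u_t = u_xx on (0, L) with homogeneous Dirichlet boundary conditions as an assumed stand-in; the second-order central stencil turns the PDE into a stiff linear ODE system y' = A y on the interior grid.

```python
import numpy as np

def semi_discretize(N, L=1.0):
    """Second-order central-difference semi-discretization of the model
    PDE u_t = u_xx on (0, L) with u(0, t) = u(L, t) = 0, yielding the
    stiff linear ODE system y'(t) = A y(t) at the N interior points."""
    dx = L / (N + 1)
    x = np.linspace(dx, L - dx, N)                  # interior grid points
    A = (np.diag(-2.0 * np.ones(N))                 # u_xx stencil:
         + np.diag(np.ones(N - 1), 1)               # (u_{i-1} - 2 u_i
         + np.diag(np.ones(N - 1), -1)) / dx**2     #  + u_{i+1}) / dx^2
    return x, A
```

The eigenvalues of A grow like 1/dx^2, which is precisely why a stiff integrator such as a Rosenbrock method is appropriate here.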
We first solve Example 1 by the EFRB(3,2) method with constant step size to test the order of our method. We use the following formula to estimate the order of our method,
where , represents the error of when the step size is h, and . Letting , the error and the order of convergence of our method are shown in Table 1. The results in Table 1 imply that the EFRB(3,2) method achieves third-order convergence.
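The order estimate is a direct transcription of the formula above: if the error behaves like C h^p, halving the step size divides the error by 2^p, so the observed order is the base-2 logarithm of the error ratio.

```python
import math

def observed_order(err_h, err_half_h):
    """Observed convergence order from two runs at step sizes h and h/2:
    if err(h) ~ C * h^p, then err(h) / err(h/2) ~ 2^p, so
    p ~ log2 of the error ratio."""
    return math.log2(err_h / err_half_h)
```

For instance, errors of 8e-3 at step size h and 1e-3 at h/2 indicate order 3.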
Table 1.
The error and the order of convergence of the EFRB(3,2) method for problem (27).
We now compare the EFRB(3,2) method with the stiff ODE solvers in MATLAB® such as ode23s, ode23t, and ode23tb, which are of comparable order and stage number to our method. For each stiff ODE solver in MATLAB®, the relative tolerance is set as and the absolute tolerance is set as . The error for each component of the solution was calculated as the maximum of the absolute value of the difference between the numerical and exact solutions, and we use the largest error across all components as the error of the problem. Figure 1, Figure 2 and Figure 3 show the relationship between the error and the average computing time for each method when . Table 2, Table 3 and Table 4 show the number of steps, the error, and the computing time for each method when and . From the figures and tables, we conclude that the EFRB(3,2) method achieves better performance than all the stiff ODE solvers for ODE Examples 1 and 2. For the PDE Example 3, the EFRB(3,2) method achieves performance similar to ode23tb and better than the other stiff ODE solvers; furthermore, our method outperforms ode23tb in the small-tolerance range. The performance of the EFRB(3,2) method in these three examples verifies its effectiveness and efficiency, suggesting that it can be applied to stiff ODE systems and semi-discretized PDE systems.
Figure 1.
CPU time versus error for each method in Example 1.
Figure 2.
CPU time versus error for each method in Example 2.
Figure 3.
CPU time versus error for each method in Example 3.
Table 2.
The accepted steps, rejected steps, error, and calculating time of each method for Example 1.
Table 3.
The accepted steps, rejected steps, error, and calculating time of each method for Example 2.
Table 4.
The accepted steps, rejected steps, error, and calculating time of each method for Example 3.
5. Conclusions
In this paper, a class of variable coefficient exponentially fitted embedded Rosenbrock methods with adaptive step size has been developed. With the frequency estimation and step size control strategy, the order of convergence is increased by one and the methods renew the step size adaptively. The numerical experiments show that, compared with other methods such as ode23s, ode23t, and ode23tb in MATLAB®, our methods achieve lower error with fewer steps and less computing time, and these advantages become more significant as the tolerance tightens. We believe that this kind of method can be applied to more complex symmetric systems, and that higher-order Rosenbrock methods can be constructed with our frequency estimation and step size control strategy.
Author Contributions
Conceptualization, T.Q.; methodology, T.Q.; software, Y.H.; validation, Y.H.; formal analysis, T.Q. and M.Z.; investigation, T.Q.; resources, T.Q.; data curation, Y.H.; writing—original draft preparation, T.Q. and Y.H.; writing—review and editing, T.Q. and Y.H.; visualization, Y.H.; supervision, T.Q.; project administration, T.Q.; funding acquisition, T.Q. All authors have read and agreed to the published version of the manuscript.
Funding
This work is supported by Hubei Provincial Natural Science Foundation of China (Grant No. 2019CFC844) and the Fundamental Research Funds for the Central Universities, HUST: 2019093.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006.
- Li, D.; Zhang, C. Split Newton iterative algorithm and its application. Appl. Math. Comput. 2010, 217, 2260–2265.
- Cao, W.; Li, D.; Zhang, Z. Optimal superconvergence of energy conserving local discontinuous Galerkin methods for wave equations. Commun. Comput. Phys. 2017, 21, 211–236.
- Li, D.; Sun, W. Linearly implicit and high-order energy-conserving schemes for nonlinear wave equations. J. Sci. Comput. 2020, 83, 1–17.
- Landau, L.D.; Lifshitz, E.M. Quantum Mechanics; Pergamon Press: New York, NY, USA, 1965.
- Liboff, R. Introductory Quantum Mechanics; Holden-Day: Oakland, CA, USA, 1980.
- Gautschi, W. Numerical integration of ordinary differential equations based on trigonometric polynomials. Numer. Math. 1961, 3, 381–397.
- Lyche, T. Chebyshevian multistep methods for ordinary differential equations. Numer. Math. 1972, 19, 65–75.
- Ixaru, L.G.; Berghe, G.V.; Meyer, H.D. Exponentially fitted variable two-step BDF algorithm for first order ODEs. Comput. Phys. Commun. 2003, 150, 116–128.
- Simos, T.E. Exponentially-fitted and trigonometrically-fitted symmetric linear multistep methods for the numerical integration of orbital problems. Phys. Lett. A 2003, 315, 437–446.
- Simos, T.E. An exponentially-fitted Runge–Kutta method for the numerical integration of initial-value problems with periodic or oscillating solutions. Comput. Phys. Commun. 1998, 115, 1–8.
- Berghe, G.V.; De Meyer, H.; Van Daele, M.; Van Hecke, T. Exponentially fitted explicit Runge–Kutta methods. Comput. Phys. Commun. 1999, 123, 7–15.
- Avdelas, G.; Simos, T.E.; Vigo-Aguiar, J. An embedded exponentially-fitted Runge–Kutta method for the numerical solution of the Schrödinger equation and related periodic initial-value problems. Comput. Phys. Commun. 2000, 131, 52–67.
- Franco, J.M. An embedded pair of exponentially fitted explicit Runge–Kutta methods. J. Comput. Appl. Math. 2002, 149, 407–414.
- Berghe, G.V.; De Meyer, H.; Van Daele, M.; Van Hecke, T. Exponentially fitted Runge–Kutta methods. J. Comput. Appl. Math. 2000, 125, 107–115.
- Berghe, G.V.; Ixaru, L.G.; Meyer, H.D. Frequency determination and step-length control for exponentially-fitted Runge–Kutta methods. J. Comput. Appl. Math. 2001, 132, 95–105.
- Rosenbrock, H. Some general implicit processes for the numerical solution of differential equations. Comput. J. 1963, 5, 329–330.
- Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II; Springer: Berlin/Heidelberg, Germany, 1996.
- Tranquilli, P.; Sandu, A. Rosenbrock–Krylov methods for large systems of differential equations. SIAM J. Sci. Comput. 2014, 36, 1313–1338.
- Wang, L.; Yu, M. Comparison of ROW, ESDIRK, and BDF2 for unsteady flows with the high-order flux reconstruction formulation. J. Sci. Comput. 2020, 83, 1–27.
- Bao, Z. Two Classes of Functionally Fitted Rosenbrock Methods for Solving Stiff Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2018.
- Zhang, Y. Exponentially Fitted Rosenbrock Methods with Variable Coefficients for Solving First-Order Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2019.
- Wei, T. Some Kinds of Exponentially Fitted Rosenbrock Methods for Solving Second-Order Ordinary Differential Equations. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2019.
- Bui, T.D.; Poon, S.W.H. On the computational aspects of Rosenbrock procedures with built-in error estimates for stiff systems. BIT 1981, 21, 168–174.
- Li, S. Numerical Analysis for Stiff Ordinary and Functional Differential Equations; Xiangtan University Press: Xiangtan, China, 2010.
- Hairer, E.; Wanner, G.; Nørsett, S.P. Solving Ordinary Differential Equations I; Springer: Berlin/Heidelberg, Germany, 1993.
- Paternoster, B. Runge–Kutta(–Nyström) methods for ODEs with periodic solutions based on trigonometric polynomials. Appl. Numer. Math. 1998, 28, 401–412.
- Lambert, J.D. Computational Methods in Ordinary Differential Equations; John Wiley: London, UK, 1973.
- Wang, W.; Mao, M.; Wang, Z. Stability and error estimates for the variable step-size BDF2 method for linear and semilinear parabolic equations. Adv. Comput. Math. 2021, 47, 1–28.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).