Abstract
This study introduces a family of implicit four-point block methods for solving first-order initial value problems (IVPs) with oscillatory solutions. In addition to an eighth-order block method, amplification-fitted and phase-fitted implicit block methods are also derived. The methods are implemented in a predictor–corrector framework, where the predictor is a four-point explicit block method constructed with the corresponding properties. A comprehensive stability analysis is carried out to assess the robustness of the proposed approaches. Comparative evaluations with existing methods demonstrate the superior efficiency of the new algorithms. Numerical experiments further confirm that the proposed techniques provide significant improvements over traditional methods, particularly for oscillatory IVPs.
MSC:
65N35; 65M12; 65M70; 65L04; 34A12
1. Introduction
Ordinary differential equations (ODEs) are powerful mathematical tools for describing and analyzing a wide range of real-world phenomena. Processes that display periodic or cyclic behavior, such as mechanical vibrations, electrical circuits, and biological rhythms, are often modeled by differential equations with oscillatory solutions. These formulations commonly include sinusoidal or related periodic terms, and capturing the oscillations with high accuracy is critical for reliable analysis of the underlying phenomena. The repetitive nature of such equations typically calls for specialized numerical methods, making this a significant area of study [1]. Among the classical strategies for solving initial value problems (IVPs), Runge–Kutta (RK) schemes are the most widely adopted. Several specialized classes of RK methods based on exponential fitting, trigonometric fitting, and hybrid constructions [2,3,4,5,6,7,8,9,10] have been proposed in the literature to enhance accuracy for oscillatory problems. Implicit methods are more stable for oscillatory IVPs, allowing larger time steps and better handling of stiffness than explicit methods. For a comprehensive overview of the various method classes, readers may consult Butcher [11] and Hairer et al. [12]. These traditional methods compute numerical solutions sequentially, one point at a time. Instead of progressing point by point, efficiency can be improved by schemes that determine the solution at multiple points concurrently, an approach known as the block method. Several researchers have developed block methods for the numerical solution of first-order ODEs, including notable contributions from Birta et al. [13], Chu and Hamilton [14], Shampine and Watts [15], and Tam et al. [16]. The following key insights from prior work underpin our study:
- Gragg and Stetter [17] employed hybrid multistep methods to address stiff systems, while Milne [18] introduced block methods that improve computational efficiency through parallelism.
- To enhance stability, hybrid block methods were later developed and optimized [19]. These approaches primarily focus on minimizing local truncation errors but are less effective for highly oscillatory problems.
- In high-frequency oscillatory problems, accurately capturing the amplitude, energy, envelope, and especially phase over long time intervals is crucial yet difficult. Real-time applications often accept phase and amplification errors in exchange for larger time steps and improved efficiency.
- Most existing numerical methods assume near-linearity, which limits their applicability. Block methods, on the other hand, are well suited for nonlinear problems and are thus more adaptable to real-world applications.
Although considerable research has been devoted to block methods, there is still a lack of approaches that efficiently control phase lag and amplification error, particularly in the context of first-order initial value problems (IVPs). In this study, we present a numerical integrator designed to solve IVPs of the form
where denotes the system’s rate of change, describes the system’s dynamics, and serves as the initial condition. Within this framework, zero-stable implicit and explicit block methods are developed, and the associated theory to evaluate phase lag and amplification error (or amplification factor) [7] is employed. In addition, we introduce systematic approaches for the construction of amplification-fitted and phase-fitted block methods.
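Written out in the standard notation for first-order IVPs (a generic restatement of the class of problems described above, with symbols chosen conventionally), the problem takes the form

```latex
y'(t) = f\bigl(t, y(t)\bigr), \qquad y(t_0) = y_0, \qquad t \in [t_0, T],
```

where $y'(t)$ is the system's rate of change, $f$ describes the dynamics, and $y(t_0) = y_0$ is the initial condition.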
The structure of this paper is outlined as follows:
- Section 2 derives an implicit zero-stable four-point block method of order eight and an explicit zero-stable four-point block method of order four. The derivation employs the framework in [7] to evaluate the phase lag and amplification error (or amplification factor) associated with block methods for solving first-order IVPs.
- Section 3 develops strategies to optimize phase lag and amplification factor, leading to the construction of phase-fitted and amplification-fitted block methods. In particular, we present procedures for minimizing phase lag, techniques for deriving amplification-fitted schemes, and an integrated approach for constructing methods that are both phase- and amplification-fitted.
- Section 4 provides a detailed stability analysis of the proposed family of methods, examining their behavior under various conditions.
- Section 5 reports numerical experiments illustrating the accuracy and effectiveness of the proposed approaches when applied in predictor–corrector mode.
2. Four-Point Block Methods
As stated by Fatunla [20], the difference equation in matrix form for the s-point m-step block method in solving an initial value problem (IVP) (1) is given by
where
where and
In the classification of block schemes, the method is considered explicit if the coefficient matrix vanishes; otherwise, it is an implicit method. According to [21,22,23], assuming u is smooth, the local truncation error of the block method (2) results from the Taylor expansion of the linear difference operator L and is expressed as
The block method achieves an order p if and only if and , where represents the error constant.
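In the usual linear-difference-operator convention (symbols stated generically here, following Fatunla-type analyses rather than reproducing the specific display), the expansion and order conditions read

```latex
L[u(t); h] = \sum_{k \ge 0} C_k \, h^k \, u^{(k)}(t),
\qquad
C_0 = C_1 = \cdots = C_p = 0, \quad C_{p+1} \neq 0,
```

so that the method has order $p$ with error constant $C_{p+1}$.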
For the four-point one-step block method (), the corresponding matrix finite difference equation is given by
with the coefficient matrices specified as
The property of zero stability guarantees that numerical errors remain bounded and are not amplified as the solution evolves step by step. Consequently, for a zero-stable one-step block method, all eigenvalues of A must lie within or on the unit circle in the complex plane, and any eigenvalue with a magnitude equal to 1 must have an algebraic multiplicity of 1. Several instances meet this condition for A, and one of them is given as
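The zero-stability conditions on A can be checked numerically for any candidate matrix. The sketch below uses one commonly chosen block matrix, in which every row equals the last unit row (an illustrative assumption, not necessarily the A of (4)), and verifies both eigenvalue conditions:

```python
import numpy as np

# Candidate block matrix: each new block is propagated from the last
# solution value of the previous block, so every row is e_4^T.
A = np.zeros((4, 4))
A[:, -1] = 1.0

eigvals = np.linalg.eigvals(A)

# Zero stability: all eigenvalues inside or on the unit circle...
assert np.all(np.abs(eigvals) <= 1.0 + 1e-12)

# ...and any eigenvalue of modulus 1 must be simple.
on_circle = np.isclose(np.abs(eigvals), 1.0)
assert np.count_nonzero(on_circle) == 1

print(sorted(np.round(np.abs(eigvals), 9)))  # three zeros and a single one
```

For this choice the spectrum is {1, 0, 0, 0}, so the method built on it is zero-stable.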
2.1. Implicit Block Method
Considering matrix A given in (4), a zero-stable four-point implicit block method
is derived as follows: By using Taylor series, the difference equation
is expanded, and enforcing zero coefficients for across the rows generates a system of 32 equations in 32 unknowns, which is solved using MATHEMATICA (Version number: 11.0.1.0) to give the following matrix
2.2. Explicit Block Method
We follow the above procedure to find the coefficient matrix for the zero-stable four-point explicit block method via
The difference Equation (6), when expanded using Taylor’s expansion and with vanishing coefficients of , produces 16 equations in 16 unknowns. Solving these with MATHEMATICA gives the following matrix:
The implicit block method (5) has the following local truncation error:
and for the explicit block method (6), we have
The implicit method is of order eight, while the explicit method has an accuracy of order four. To analyze the phase lag and amplification factor of the four-point one-step block method (3), consider the following test equation:
which has an analytical solution given as
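In phase-lag analyses of this type, the scalar test equation and its solution are conventionally taken as (the standard convention is assumed here)

```latex
y'(t) = i \omega \, y(t), \qquad y(t_0) = y_0, \qquad \omega \in \mathbb{R},
\qquad\Longrightarrow\qquad
y(t) = y_0 \, e^{i\omega (t - t_0)},
```

so that one step of size $h$ advances the phase by $H = \omega h$.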
Using the theory presented in [7], we assess these quantities for the block method (3). After applying the block method to the IVP (7), the following result is obtained:
Further, considering , the system of difference Equation (9) possesses the following characteristic equation:
Holding , the above equation reduces to
Definition 1
(Order of Phase Lag). Let the theoretical solution of the scalar test Equation (7) at be expressed as , or equivalently, . The corresponding numerical solution at , obtained from the scalar test Equation (7), is given by . The phase lag is then defined by the following expression:
If the phase lag Φ asymptotically behaves as as , the phase lag is said to have an order of q.
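Definition 1 can be made concrete on a simple one-step scheme. The sketch below measures the phase lag of the trapezoidal rule (used purely as an illustration, not one of the proposed block methods), whose stability function is R(z) = (1 + z/2)/(1 − z/2): for y′ = iωy with H = ωh, the numerical phase per step is arg R(iH), so Φ(H) = H − arg R(iH) ≈ H³/12, i.e., a phase lag of order two, while |R(iH)| = 1 means the amplification error vanishes identically.

```python
import cmath

def phase_lag(H):
    """Phase lag of the trapezoidal rule on y' = i*omega*y, with H = omega*h."""
    R = (1 + 1j * H / 2) / (1 - 1j * H / 2)   # stability function at z = iH
    return H - cmath.phase(R)

for H in (0.2, 0.1, 0.05):
    print(H, phase_lag(H) / H**3)   # ratio tends to 1/12 as H -> 0
```

The printed ratios approach 1/12, confirming that Φ(H) behaves as H³/12 for small H.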
Lemma 1.
The relations provided below hold as
Proof.
See Ref. [7] for the detailed proof. □
Theorem 1.
(Amplification Factor): The amplification factor for a four-point one-step block method (3) is given by
where q is the order of the amplification factor, which is defined as
and .
Proof.
By applying the relation (13) derived from Lemma 1 directly into the characteristic Equation (11), we obtain the following result:
The imaginary component of the system presented above is expressed as
which can be further simplified as
where . The explicit formula for calculating the amplification factor () of the block method (3) is
where
.
□
Theorem 2.
(Phase Lag): The phase lag for a four-point one-step block method (3) is calculated as
where q is the order of phase lag
and .
Proof.
In a manner analogous to the proof of Theorem 1, we apply Lemma 1 and substitute the resulting relation (13) into the characteristic Equation (10), thereby deriving (15). Similarly, by examining the real part of (15), one obtains
where .
□
To calculate the order of the amplification factor of the implicit block method (5), Formula (14) from Theorem 1 for the implicit method (5) is expanded with and , using the Taylor series expansion, as follows:
Thus, , indicating that the implicit block method (5) has an eighth-order amplification factor. Similarly, the phase error for (5) is obtained by applying Taylor’s expansion to the direct Formula (18) from Theorem 2, with and , resulting in
Thus, , and the block method (5), referred to as , exhibits an eighth-order phase lag.
2.3. Method 2: Amplification-Fitted Block Method with Minimal Phase Lag
The modified four-point one-step block method is derived by preserving the property of zero stability to ensure the accuracy and stability of the numerical solution. To derive the amplification-fitted block method with minimal phase lag, we begin by considering the matrix A from (4) in conjunction with the direct formula provided in (14). The method follows the steps outlined in the algorithm below.
Algorithm:
- Eliminate the amplification factor: Set the amplification factor to zero and solve the resulting system for the unknown coefficients; then, using the coefficients obtained, calculate the phase lag.
- Perform Taylor series expansion: Expand the calculated phase lag using a Taylor series to obtain a set of relations of unknown coefficients.
- Minimize the phase lag: Solve the system of equations required to adjust the parameters such that the phase lag is minimized. Similarly, minimize the local truncation error, making use of the updated coefficients derived.
2.3.1. Implicit Amplification-Fitted Method
By following the algorithm, eliminating the amplification factor from the block method (3), where the matrix A is defined as in (4), and denotes the element of at the ith row and jth column, results in a set of equations that are used to adjust the method for minimal phase lag:
Furthermore, the phase lag is assessed using the coefficient values mentioned above and then minimized by eliminating the coefficients of , , , , , and . The obtained values are subsequently substituted to compute the local truncation error, aiming to eliminate the coefficient of , which gives the following matrix:
where
and
The modified implicit block method is denoted as and produces the following local truncation error:
and has the following phase error
2.3.2. Explicit Amplification-Fitted Method
Applying the algorithm to eliminate the amplification factor in the derivation of the explicit block method, where the matrix A is defined as in (4) and , generates the following set of equations:
Subsequently, the coefficients of and are set to zero in order to minimize the phase error, whereby the following coefficient matrix is obtained:
where
This modified explicit method is denoted as . The method has the following properties:
2.4. Method 3: Amplification-Fitted and Phase-Fitted Block Method
The method is derived through the following steps:
- Eliminate and , which results in eight equations.
- Compute the local truncation error.
- Determine the remaining unknown coefficients by enhancing the precision of the local truncation error.
2.4.1. Implicit Phase-Fitted and Amplification-Fitted Block Method
Ensuring zero stability by considering A given in (4), the amplification factor is evaluated using Formula (14) and is given as (A1) in Appendix A. By eliminating the amplification error, the phase error for the amplification-fitted block method is calculated from Formula (18) and is given as (A2) in Appendix A.
Eliminating leads to (22), while setting to zero results in the following:
and Appendix A provides the other elements of the matrices.
The local truncation error of the block method is
2.4.2. Explicit Phase-Fitted and Amplification-Fitted Block Method
To find the coefficient matrix for the amplification-fitted and phase-fitted explicit block method, is taken to be the zero matrix; eliminating the amplification error then results in the set of Equations (24). The phase error is subsequently evaluated and is given as (A3) in Appendix A.
Upon elimination of the phase error, the following set of equations is obtained:
By optimizing the local truncation error, the remaining unknown elements of the coefficient matrix are obtained; they are given in Appendix A. The method is labeled as and has the following properties:
2.5. Method 4: Amplification-Fitted and Phase-Fitted Block Method with Vanished First Derivative of Phase Error
For this method, the first derivative of phase error is also taken into account. Even if a method has zero phase lag at a specific frequency or step size, a non-zero first derivative implies that small deviations can cause the phase error to grow rapidly. By ensuring that the first derivative of the phase error vanishes, we enhance the robustness and accuracy of the method across a broader frequency range, not just at a single point. This is particularly beneficial in long-term integration of oscillatory problems, where the frequency may vary slightly over time or be imperfectly known. The procedure is executed through the following steps:
- Evaluate the amplification factor (AF) and the phase error (PhEr).
- Compute the first derivative of the phase error.
- Repeat analogous steps to construct the block method, ensuring that the amplification factor, the phase error, and the first derivative of the phase error are all annihilated.
- Determine the remaining undetermined coefficients by optimizing the local truncation error (LTE).
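The two fitting conditions Φ(v₀) = 0 and Φ′(v₀) = 0 can be demonstrated on a toy stability function R(z) = (1 + az + cz²)/(1 − az + cz²), whose numerator and denominator are complex conjugates on the imaginary axis, so |R(iv)| = 1 identically and the amplification factor is already annihilated. The sketch below (an illustration of the conditions only, not the block-method computation) solves the pair for (a, c) at a chosen frequency:

```python
import sympy as sp

a, c, v = sp.symbols('a c v', real=True)

# Toy stability function with |R(iv)| = 1 built in:
#   R(iv) = (1 - c v^2 + i a v) / (1 - c v^2 - i a v)
Phi = v - 2 * sp.atan(a * v / (1 - c * v**2))     # phase lag

v0 = 1                                            # fitting frequency H = 1
eqs = (Phi.subs(v, v0), sp.diff(Phi, v).subs(v, v0))
a0, c0 = sp.nsolve(eqs, (a, c), (0.5, 0.08))      # Newton iteration
print(a0, c0)   # close to the Pade values a = 1/2, c = 1/12
```

Both the phase error and its derivative vanish at v₀ = 1, so small deviations of the frequency perturb the phase only at second order, which is precisely the robustness argued for above.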
2.5.1. Implicit Amplification- and Phase-Fitted Block Method with Vanished First Derivative of Phase Error
Starting with the first step outlined in the algorithm above, the system of equations given in (A1) is obtained. Eliminating the amplification factor (AF) from this system yields Equation (22). Using the resulting coefficient values, the phase error (PhEr) is then evaluated, leading to Equation (A2). Differentiating this expression with respect to and proceeding with the subsequent steps—namely, the elimination of the phase error and its derivative while also taking the local truncation error (LTE) into account—results in the following coefficient matrices:
Appendix A lists the values of the matrix elements; the method is labeled as , and its local truncation error is
2.5.2. Explicit Amplification- and Phase-Fitted Block Method with Vanished First Derivative of Phase Error
When is a zero matrix and the above algorithm is applied, the first step leads to Equations (24) and (27). Consequently, the following coefficient matrix is obtained:
The values of the matrix elements are provided in Appendix A; the method is denoted as , and its local truncation error is
2.6. Method 5: Amplification-Fitted and Phase-Fitted Block Method with Vanished First Derivative of Amplification Factor
The procedure is carried out through the following steps:
- The amplification factor (AF) and phase error (PhEr) are first evaluated.
- Next, the first derivative of the amplification factor is computed.
- Formulate the block method, ensuring the elimination of the phase error, the amplification factor, and the first derivative of the amplification factor.
- Finally, the remaining undetermined coefficients are obtained by minimizing the local truncation error (LTE).
2.6.1. Implicit Amplification- and Phase-Fitted Block Method with Vanished First Derivative of Amplification Error
As a result of this algorithmic procedure, the coefficient matrices are obtained. The detailed closed-form expressions for the elements of the matrices can be found in Appendix A. The method is referred to as .
2.6.2. Explicit Amplification- and Phase-Fitted Block Method with Vanished First Derivative of Amplification Error
Considering , the zero matrix, the algorithmic procedure yields the coefficient matrix . The final analytical formulations of this matrix are provided in Appendix A. The method is denoted as .
The local truncation error of the method is
The local truncation error of the method is
Table 1 comprehensively encapsulates the essential properties of the developed methods, highlighting their strengths and distinguishing features.
Table 1.
Outline of key properties.
3. Stability
The stability analysis helps us to understand how the method behaves under different step sizes and parameter values, ensuring that the numerical solution remains bounded and converges to the correct solution, even for large or small values of the step size h.
3.1. Explicit Block Methods
For the family of explicit block methods, the general formulation (3) is adopted, in which the coefficient matrix A is specified as in (4), the matrix is identically zero, and is a non-zero matrix. Applying the method to the scalar test problem , with , reduces it to the following difference equation:
The characteristic polynomial of the method, as derived from Equation (28), is given by
The stability polynomial is solved for , subject to the condition , using MATHEMATICA, and the resulting regions in the complex plane illustrate the method’s stability characteristics. This analysis reveals the influence of step size and parameter (for methods –) on overall stability.
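The boundary of a stability region of this kind can be traced with the boundary-locus technique: set t = e^{iθ} in the characteristic polynomial and solve for H. The sketch below applies this to the classical two-step Adams–Bashforth method (a textbook stand-in, since the block-method polynomial involves the coefficient matrices above), for which ρ(t) = t² − t and σ(t) = (3t − 1)/2:

```python
import numpy as np

# Boundary locus for the 2-step Adams-Bashforth method:
# solve rho(t) - H*sigma(t) = 0 for H along t = exp(i*theta).
theta = np.linspace(0.0, 2.0 * np.pi, 400)
t = np.exp(1j * theta)
H = (t**2 - t) / ((3.0 * t - 1.0) / 2.0)

# The locus passes through the origin (theta = 0) and crosses the
# negative real axis at H = -1, the endpoint of the real stability interval.
print(H[0])
print(min(H.real))   # approximately -1
```

Plotting `H.real` against `H.imag` reproduces the familiar Adams–Bashforth stability region; for the block methods, the same scan is performed on the polynomial above for each step size and parameter value.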
3.2. Implicit Block Methods
The general form (3) of the implicit block method, which incorporates the non-zero coefficient matrices A, , and together with the corresponding function evaluations at different time steps, is applied to the scalar test equation , with , and results in the following difference equation:
The characteristic polynomial for this method, derived from Equation (29), is
Stability regions in the complex plane are obtained from the stability polynomial , utilizing MATHEMATICA, and examined for various step sizes and parameter values, for the – methods. Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25 and Figure 26 depict these regions for the derived explicit and implicit block methods, showing that the implicit variants consistently exhibit larger stability domains than their explicit counterparts.
Figure 1.
Region of stability for (explicit method).
Figure 2.
Region of stability for (implicit method).
Figure 3.
Region of stability for (AF = 0), with .
Figure 4.
Region of stability for (AF = 0), with .
Figure 5.
Region of stability for (AF = 0), with .
Figure 6.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 7.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 8.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 9.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 10.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 11.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 12.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
Figure 13.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
Figure 14.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
Figure 15.
Region of stability for (AF = 0), with .
Figure 16.
Region of stability for (AF = 0), with .
Figure 17.
Region of stability for (AF = 0), with .
Figure 18.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 19.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 20.
Region of stability for (AF = 0, PhEr = 0), with .
Figure 21.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 22.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 23.
Region of stability for (AF = 0, PhEr = 0, D[PhEr] = 0), with .
Figure 24.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
Figure 25.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
Figure 26.
Region of stability for (AF = 0, PhEr = 0, D[AF] = 0), with .
4. Numerical Results
We investigated seven oscillatory ODE test problems and one PDE problem, the Telegraph equation. The accuracy associated with each case was determined using the following relation:
MATHEMATICA (Version number: 11.0.1.0) was used to solve the ODEs, while MATLAB R2017a was used to solve the PDE. The proposed schemes were compared against an implicit optimized hybrid block method of at least order five [24] and three ODE solvers from the Runge–Kutta family [25,26]: the fourth-order classical Runge–Kutta method (RK4), the fifth-order Cash–Karp method (RKCash), and the fifth-order Fehlberg method (RKFehl). For each problem, the first three steps were calculated using a high-order Runge–Kutta algorithm. Thereafter, the explicit block methods described earlier served as the predictors, while the corresponding implicit block methods were employed as the correctors.
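The predictor–corrector mechanics can be sketched with a simple scalar PECE pair, an Adams–Bashforth predictor with a trapezoidal corrector standing in for the four-point explicit/implicit blocks (whose coefficient matrices differ and are given in Section 2); a single RK4 step mirrors the high-order Runge–Kutta start-up used in the paper:

```python
import math

def pece(f, t0, y0, h, n):
    """PECE loop: AB2 predictor, trapezoidal corrector (generic stand-in)."""
    t, y = [t0], [y0]
    # start-up: one RK4 step supplies the second back value
    k1 = f(t0, y0)
    k2 = f(t0 + h / 2, y0 + h / 2 * k1)
    k3 = f(t0 + h / 2, y0 + h / 2 * k2)
    k4 = f(t0 + h, y0 + h * k3)
    y.append(y0 + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    t.append(t0 + h)
    for _ in range(n - 1):
        fn, fm = f(t[-1], y[-1]), f(t[-2], y[-2])
        yp = y[-1] + h * (1.5 * fn - 0.5 * fm)   # Predict (explicit AB2)
        fp = f(t[-1] + h, yp)                    # Evaluate
        yc = y[-1] + h / 2 * (fn + fp)           # Correct (implicit trapezoidal)
        t.append(t[-1] + h)
        y.append(yc)
    return t, y

# y' = -y, y(0) = 1  ->  y(t) = exp(-t)
t, y = pece(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(abs(y[-1] - math.exp(-t[-1])))   # small global error
```

In the paper's setting, the "predict" line is the explicit four-point block and the "correct" line the implicit block, so four new solution values are produced per pass instead of one.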
The following numerical methods were employed to address the given problems:
- Hybrid: Optimized hybrid block method.
- RK4: Classical fourth-order Runge–Kutta method.
- RKCash: Fifth-order Cash–Karp method.
- RKFehl: Fifth-order Fehlberg method.
- M1: The method incorporates in the predictor–corrector mode, with as the predictor.
- M2: The method utilizes as the predictor and as the corrector.
- M3: The method applies as the predictor and as the corrector.
- M4: This method uses the explicit method as the predictor and as the corrector.
- M5: The method incorporates in the predictor–corrector mode, with as the predictor.
Example 1.
Consider the Stiefel and Bettis [27] problem, where the system of equations governing it is as follows:
The exact solution is
The domain is defined to be . Numerical experiments were performed using the derived block methods through with , and the results are illustrated in Figure 27. The following key insights have been drawn from the graphical comparison of these methods:
Figure 27.
Error comparison for Example 1.
- The hybrid block method [24] (hybrid) underperformed, falling short of the efficiency and accuracy demonstrated by the other methods.
- The classical fourth-order Runge–Kutta method (RK4) exhibited the lowest accuracy among all methods considered.
- The fifth-order Runge–Kutta–Cash–Karp method (RKCash) demonstrated improved accuracy over both the Runge–Kutta–Fehlberg method (RKFehl) and RK4.
- The proposed implicit block methods – consistently outperformed the traditional Runge–Kutta-based schemes (RK4, RKFehl, and RKCash) in terms of numerical accuracy and computational time.
- Method showed a notable improvement over in the early stages of computation, although this advantage was later reversed.
- Method further enhanced the accuracy compared to , indicating a clear progression in performance.
- Among all methods tested, delivered the most accurate results, establishing its superiority in this comparative analysis.
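For reproducibility, a reference integration of the Stiefel–Bettis problem is sketched below, using the formulation commonly stated in the literature (assumed here, as displays vary slightly across references): u″ + u = 0.001 cos t, v″ + v = 0.001 sin t, with u(0) = 1, u′(0) = 0, v(0) = 0, v′(0) = 0.9995 and exact solution u = cos t + 0.0005 t sin t, v = sin t − 0.0005 t cos t, written as a first-order system:

```python
import math
import numpy as np

def f(t, y):
    # First-order form of u'' + u = 0.001*cos(t), v'' + v = 0.001*sin(t)
    u, du, v, dv = y
    return np.array([du, 0.001 * math.cos(t) - u,
                     dv, 0.001 * math.sin(t) - v])

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, h = 0.0, 0.01
y = np.array([1.0, 0.0, 0.0, 0.9995])        # exact initial data
for _ in range(1000):                         # integrate to t = 10
    y = rk4_step(f, t, y, h)
    t += h

u_exact = math.cos(t) + 0.0005 * t * math.sin(t)
v_exact = math.sin(t) - 0.0005 * t * math.cos(t)
print(max(abs(y[0] - u_exact), abs(y[2] - v_exact)))  # reference error
```

Any of the block methods can be benchmarked against such a reference run by replacing the RK4 stepper with the predictor–corrector pair.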
Example 2.
The inhomogeneous linear problem, examined by Franco et al. [28], is as follows:
The exact solution is
Choosing and the solution domain as , the numerical solution of the system of equations was computed with . The computational performance of the various methods, as illustrated in Figure 28, revealed the following key observations:
Figure 28.
Error comparison for Example 2.
- The hybrid method was least efficient among the considered methods.
- Of the classical Runge–Kutta methods tested, RK4 attained the poorest accuracy, in contrast to RKFehl, which offered a moderate gain in precision.
- RKCash provided enhanced accuracy compared to RKFehl; however, the block method significantly surpassed both in performance.
- Method offered a marked improvement over , reflecting increased precision across the tested step sizes.
- For relatively large step sizes, initially outperformed all other methods, followed by , , and , respectively.
- While initially yielded better results than , the latter marginally overtook as the step size decreased, suggesting improved efficiency in finer resolutions.
- Among all the methods considered, consistently yielded the most accurate results on average.
Example 3.
The problem presented by Franco and Palacios is defined as follows [29]:
with the analytical solution given as
where and . To apply the modified block methods, consider . The essential points derived from Figure 29 are presented as follows:
Figure 29.
Error comparison for Example 3.
- Within the tested CPU time range, the hybrid block method (Hybrid) consistently fell below the performance benchmark set by the other methods.
- RKCash outperformed both RKFehl and RK4, with RK4 showing the least accuracy.
- The accuracy of the block method surpassed that of the traditional Runge–Kutta approaches.
- Method significantly outperformed in terms of performance.
- A reduction in step size led to a consistent enhancement in the accuracy of .
- While was more accurate than for larger step sizes, both methods achieved the same level of accuracy as the step size was reduced.
- and were nearly identical in terms of accuracy.
- Among all the methods, and were the most accurate.
The hybrid block method was excluded from further comparisons, as initial results demonstrated that it was computationally expensive and consistently less efficient than the Runge–Kutta methods and the proposed methods across the tested examples.
Example 4.
Simos studied the nonlinear problem in [30], described as follows:
where , , and the exact solution is
For , the numerical results are presented in Figure 30. The following points summarize the key observations:
Figure 30.
Error comparison for Example 4.
- The Runge–Kutta methods exhibited considerable computational overhead, yielding only marginal to negligible improvements in accuracy.
- Despite increased CPU time, RK4 failed to demonstrate a meaningful enhancement in performance.
- RKCash surpassed both RKFehl and RK4 in terms of overall efficiency and accuracy.
- The block method consistently provided more precise results compared to the traditional Runge–Kutta approaches.
- Method significantly outperformed in terms of both computational efficiency and accuracy.
- Although initially underperformed compared to in terms of accuracy, it eventually surpassed as the step size decreased, demonstrating superior performance.
- Methods and delivered nearly identical levels of accuracy.
- Among all methods, and exhibited the highest accuracy.
Example 5.
The nonlinear first-order differential problem studied by Petzold [31] is defined as follows:
with the analytical solution defined as
The parameters are . To employ the modified block methods, was chosen, and the results are plotted in Figure 31; the following lists the observations obtained:
Figure 31.
Error comparison for Example 5.
- RK4 failed to achieve an acceptable accuracy within the allocated CPU time.
- RKCash demonstrated superior overall performance compared to RKFehl.
- The block method yielded more accurate results than RKCash, with the accuracy steadily improving as the CPU time increased.
- For large step sizes, was the least accurate among the block methods, while achieved the highest accuracy.
- Method initially outperformed in both computational efficiency and accuracy, although this trend reversed as the step size decreased.
- Methods and exhibited comparable performance in terms of both accuracy and efficiency up to a certain threshold, after which surpassed .
- Method emerged as the best performer overall.
Example 6.
The two-body gravitational problem is considered as follows:
and the exact solution is
With the choice of , the numerical results for were obtained and are displayed in Figure 32. The key findings are summarized as follows:
Figure 32.
Error comparison for Example 6.
- The Runge–Kutta methods (RK4, RKCash, and RKFehl) exhibited marginal improvements, maintaining a steady level of performance without significant advancement.
- The block method demonstrated superior accuracy compared to the traditional Runge–Kutta methods.
- Method significantly outperformed in terms of both accuracy and efficiency.
- As the step size decreased, initially showed an increase in accuracy, which eventually stabilized.
- Initially, surpassed in performance, although it did not exceed the performance of .
- Method was on par with in terms of both accuracy and computational efficiency.
- The block method provided more precise results compared to the others on average.
Example 7.
Consider the perturbed two-body Kepler’s problem:
where its analytical solution is
The value was considered. From Figure 33, the following observations can be made:
Figure 33.
Error comparison for Example 7.
- The Runge–Kutta methods (RK4, RKFehl, and RKCash) showed only marginal improvements in both accuracy and efficiency, with RK4 performing the least effectively.
- The block method surpassed all Runge–Kutta methods in terms of accuracy.
- Method delivered superior performance compared to .
- Method maintained a consistent level of accuracy, even when using large step sizes, with its performance remaining relatively stable regardless of the chosen step size.
- The performance of was equivalent to that of .
- Method outperformed in terms of both accuracy and computational efficiency.
- On average, and outperformed all other methods in terms of both accuracy and efficiency.
Example 8.
Consider the hyperbolic telegraph equation
The initial and boundary conditions are given by
The parameters are chosen as , , and
The analytic solution is
For the block methods, the parameter was selected. The methods were implemented using the algorithm outlined in [19], in which the block methods are combined with the differential quadrature method, utilizing unified splines for the spatial variable. The following key observations can be made:
- The error analysis in Figure 34 for shows that provided superior accuracy relative to .
Figure 34.
Error comparison for Example 8.
- Methods , , and exhibited virtually identical accuracy across the spatial grid.
- Method significantly outperformed in terms of accuracy.
- The CPU times for differed among the methods: took 1.810412 s, required 0.021124 s, completed in 0.020640 s, took 0.021212 s, and required 0.017187 s.
- In terms of overall performance, outperformed the other methods.
The performance of frequency-dependent approaches, including the newly proposed method, largely hinges on the appropriate selection of the parameter . Often, this parameter can be directly inferred from the characteristics of the specific problem being addressed. In cases where determining is less straightforward, several strategies for its estimation have been developed and discussed in existing studies [2,4].
5. Conclusions
In summary, the block methods demonstrated a marked advantage over the traditional Runge–Kutta methods and the optimized hybrid block method, with notably outperforming RK4, RKFehl, and RKCash. As the study advanced to the implementation of the modified block methods , , , and , the results demonstrated a consistent enhancement in both accuracy and computational efficiency. Among these, and exhibited competitive performance, with no clear winner emerging, as the effectiveness of numerical methods is highly contingent upon the specific characteristics of the problem at hand. However, consistently proved to be the most robust method, delivering superior results across a range of problem types. For partial differential equations (PDEs), method particularly excelled, showcasing its potential for complex applications. This study not only provides a thorough evaluation of the methods’ performance but also underscores the broad applicability and potential of modified implicit block methods in diverse numerical scenarios. These findings expand the scope for their use in more complex and varied computational problems, offering promising avenues for future research and development.
Author Contributions
Conceptualization, N.H.A., A.K., T.E.S. and R.T.A.; methodology, N.H.A., A.K., T.E.S. and R.T.A.; software, A.K.; validation, N.H.A., A.K., T.E.S. and R.T.A.; formal analysis, T.E.S.; investigation, N.H.A.; data curation, A.K.; writing—original draft preparation, A.K.; writing—review and editing, N.H.A., T.E.S. and R.T.A.; visualization, A.K. and T.E.S.; supervision, T.E.S.; funding acquisition, T.E.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP-RP25).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Appendix A
Appendix A.1. Amplification-Fitted and Phase-Fitted Block Method: Method 3 (Im3 and E3)
Appendix A.2. Implicit Block Method Im3
Appendix A.3. Explicit Block Method E3
- The coefficient matrix for amplification-fitted and phase-fitted explicit block method is
Appendix A.4. Implicit Block Method Im4
Appendix A.5. Explicit Block Method E4
For the explicit method, is a zero matrix.
Appendix A.6. Implicit Block Method Im5
Appendix A.7. Explicit Block Method E5
References
- Landau, L.D.; Lifshitz, E.M. Quantum Mechanics: Non-Relativistic Theory; Pergamon Press: Oxford, UK, 1965.
- Ixaru, L.G.; Berghe, G.V.; De Meyer, H. Frequency evaluation in exponential fitting multistep algorithms for ODEs. J. Comput. Appl. Math. 2002, 140, 423–434.
- Lee, K.C.; Senu, N.; Ahmadian, A.; Ibrahim, S.N.I. High-order exponentially fitted and trigonometrically fitted explicit two-derivative Runge–Kutta-type methods for solving third-order oscillatory problems. Math. Sci. 2022, 16, 281–297.
- Ramos, H.; Vigo-Aguiar, J. On the frequency choice in trigonometrically fitted methods. Appl. Math. Lett. 2010, 23, 1378–1381.
- Raptis, A.; Allison, A. Exponential-fitting methods for the numerical solution of the Schrödinger equation. Comput. Phys. Commun. 1978, 14, 1–5.
- Senu, N.; Lee, K.; Wan Ismail, W.; Ahmadian, A.; Ibrahim, S.; Laham, M. Improved Runge–Kutta method with trigonometrically-fitting technique for solving oscillatory problem. Malays. J. Math. Sci. 2021, 15, 253–266.
- Simos, T.E. A new methodology for the development of efficient multistep methods for first-order IVPs with oscillating solutions. Mathematics 2024, 12, 504.
- Thomas, R.; Simos, T. A family of hybrid exponentially fitted predictor–corrector methods for the numerical integration of the radial Schrödinger equation. J. Comput. Appl. Math. 1997, 87, 215–226.
- Van de Vyver, H. A symplectic exponentially fitted modified Runge–Kutta–Nyström method for the numerical integration of orbital problems. New Astron. 2005, 10, 261–269.
- Zhai, W.; Fu, S.; Zhou, T.; Xiu, C. Exponentially-fitted and trigonometrically-fitted implicit RKN methods for solving y'' = f(t, y). J. Appl. Math. Comput. 2022, 68, 1449–1466.
- Butcher, J.C. The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods; Wiley-Interscience: Hoboken, NJ, USA, 1987.
- Hairer, E.; Wanner, G. Convergence for nonlinear problems. In Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems; Springer: Berlin/Heidelberg, Germany, 1996; pp. 339–355.
- Birta, L.G.; Abou-Rabia, O. Parallel block predictor–corrector methods for ODE's. IEEE Trans. Comput. 1987, C-36, 299–311.
- Chu, M.T.; Hamilton, H. Parallel solution of ODE's by multiblock methods. SIAM J. Sci. Stat. Comput. 1987, 8, 342–353.
- Shampine, L.F.; Watts, H. Block implicit one-step methods. Math. Comput. 1969, 23, 731–740.
- Tam, H.W. Parallel Methods for the Numerical Solution of Ordinary Differential Equations; University of Illinois at Urbana-Champaign: Urbana, IL, USA, 1989.
- Gragg, W.B.; Stetter, H.J. Generalized multistep predictor–corrector methods. J. ACM 1964, 11, 188–209.
- Milne, W.E. Numerical solution of differential equations. Bull. Am. Math. Soc. 1953, 59, 577–579.
- Kaur, A.; Kanwar, V.; Ramos, H. A coupled scheme based on uniform algebraic trigonometric tension B-spline and a hybrid block method for Camassa–Holm and Degasperis–Procesi equations. Comput. Appl. Math. 2024, 43, 16.
- Fatunla, S.O. Numerical Methods for Initial Value Problems in Ordinary Differential Equations; Academic Press: Cambridge, MA, USA, 1988.
- Fatunla, S.O. Block methods for second order ODEs. Int. J. Comput. Math. 1991, 41, 55–63.
- Lambert, J.D. Computational Methods in Ordinary Differential Equations; Wiley: Hoboken, NJ, USA, 1973.
- Chollom, J.; Ndam, J.; Kumleng, G. On some properties of the block linear multi-step methods. Sci. World J. 2007, 2, 11–17.
- Singh, G.; Garg, A.; Kanwar, V.; Ramos, H. An efficient optimized adaptive step-size hybrid block method for integrating differential systems. Appl. Math. Comput. 2019, 362, 124567.
- Cash, J.R.; Karp, A.H. A variable order Runge–Kutta method for initial value problems with rapidly varying right-hand sides. ACM Trans. Math. Softw. 1990, 16, 201–222.
- Fehlberg, E. Classical Fifth-, Sixth-, Seventh-, and Eighth-Order Runge–Kutta Formulas with Stepsize Control; National Aeronautics and Space Administration: Washington, DC, USA, 1968.
- Stiefel, E.; Bettis, D. Stabilization of Cowell's method. Numer. Math. 1969, 13, 154–175.
- Franco, J.; Gómez, I.; Rández, L. Four-stage symplectic and P-stable SDIRKN methods with dispersion of high order. Numer. Algorithms 2001, 26, 347–363.
- Franco, J.; Palacios, M. High-order P-stable multistep methods. J. Comput. Appl. Math. 1990, 30, 1–10.
- Simos, T. New open modified Newton–Cotes type formulae as multilayer symplectic integrators. Appl. Math. Model. 2013, 37, 1983–1991.
- Petzold, L.R. An efficient numerical method for highly oscillatory ordinary differential equations. SIAM J. Numer. Anal. 1981, 18, 455–479.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).