Article

Fractional-Order Numerical Scheme with Symmetric Structure for Fractional Differential Equations with Step-Size Control

by
Mudassir Shams
1,2 and
Mufutau Ajani Rufai
2,*
1
Department of Mathematics, Faculty of Arts and Science, Balikesir University, Balikesir 10145, Turkey
2
Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1685; https://doi.org/10.3390/sym17101685
Submission received: 30 July 2025 / Revised: 3 September 2025 / Accepted: 18 September 2025 / Published: 8 October 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Nonlinear Partial Differential Equations)

Abstract

This research paper uses two-stage explicit fractional numerical schemes to solve fractional-order initial value problems of ODEs. The proposed methods exhibit structural symmetry in their formulation, contributing to enhanced numerical stability and balanced error behavior across computational steps. The schemes utilize constant and variable step sizes, allowing them to adapt efficiently to solve the considered fractional-order initial value problems. These schemes employ variable step-size control based on error estimation, aiming to minimize computational costs while maintaining good accuracy and stability. We discuss the linear stability of the proposed numerical schemes and observe that a higher-stability region is obtained when the fractional parameter value equals one. We also discuss consistency and convergence analysis of the proposed methods and observe that as the fractional parameter values rise from 0 to 1, the scheme’s convergence rate improves and achieves its maximum at 1. Several numerical test problems are used to demonstrate the efficiency of the proposed methods in solving fractional-order initial value problems with either constant or variable step sizes. The proposed numerical schemes’ results demonstrate better accuracy and convergence behavior than the existing methods used for comparison.

1. Introduction

Fractional calculus [1,2], which employs derivatives and integrals of arbitrary (non-integer) order, has received a lot of attention for its ability to mimic complex systems with memory, non-locality, and intrinsic symmetry features that are normally beyond the scope of classical calculus. Fractional calculus has been applied in various scientific and engineering fields, including control theory [3], signal processing [4], fluid dynamics [5], viscoelasticity [6], and bioengineering [7]. Fractional differential equations, for instance, describe anomalous diffusion in porous media, which is crucial to groundwater flow and oil recovery [8].
Additionally, in electrical engineering, fractional models describe the behavior of capacitors and inductors in non-ideal circuits. They are also significant for biomedical applications, as they aid in simulating the intricate dynamics of biological systems and neural activity. Improvements in numerical methods are making fractional calculus more useful for solving nonlinear differential equations, paving the way for new developments in scientific and engineering computation. Preserving or exploiting symmetry in numerical algorithms often leads to greater stability, balanced error propagation, and improved computational efficiency. Owing to this flexibility, fractional calculus has become an essential tool for researchers and practitioners dealing with complex modern systems.
The study of fractional initial value problems has garnered significant attention because fractional calculus can model a wide range of complex phenomena from various domains, such as fluid flow, electrical circuits, biological systems, and viscoelasticity, which frequently exhibit temporal or spatial symmetry and for which conventional integer-order models fall short. Fractional initial value problems (FIVPs) are closely related to fractional-order differential equations (FODEs), which are differential equations defined using fractional derivatives [9]:
$$D^{[\alpha]} y(t) = f\big(t, y(t)\big), \qquad y^{(k)}(t_0) = y_0^{(k)}, \quad k = 0, 1, \ldots, m-1,$$
where
$$D^{[\alpha]} y(t) = \frac{1}{\Gamma(m-\alpha)} \int_{t_0}^{t} \frac{y^{(m)}(\eta)}{(t-\eta)^{\alpha-m+1}}\, d\eta, \qquad m-1 < \alpha < m \in \mathbb{Z}^{+},$$
is the Caputo fractional derivative of order α and t ∈ [t_0, t_n]. These FIVPs provide a more adaptable and realistic representation of dynamic systems, especially those involving memory effects or long-range interactions. Analytical solutions are available only in a few special cases, obtained by classical techniques such as series expansions, and these quickly become computationally expensive for more complex systems. Consequently, numerical methods have emerged as highly efficient and accurate tools for solving FIVPs. Two-stage fractional-order explicit approaches are advantageous due to their simplicity, versatility, and ability to handle nonlinearities. Such numerical schemes, particularly when combined with adaptive step-size strategies, address the challenges posed by analytical methods and provide both robustness and computational efficiency across a wide range of applications, as outlined below:
  • They have a lower computational cost because of their phased strategy, which optimizes calculations at each step while retaining accuracy.
  • They minimize CPU time by reducing redundant computations, especially with adaptive step-size control and efficient implementations.
  • Using fractional derivatives in iterative processes leads to faster convergence than existing methods.
  • The efficient two-stage structure requires fewer functional evaluations per iteration, lowering overall computational cost.
  • They have modest algorithmic complexity, ranging from simple, nonadaptive procedures to more complicated, adaptive strategies.
  • Robust for numerous FIVPs and complex problems with strong nonlinear or singular behavior.
  • They provide better-than-average control over local and global errors through repeated refining and dynamically changing step sizes.
  • Memory efficient, as big arrays or matrices are avoided, particularly in adaptive implementation.
  • They are adaptable and include various stopping criteria, such as error tolerance and step-size control, which offer improved accuracy for the considered FIVPs.
The fractional two-stage approach is distinguished by its balance of accuracy and efficiency, making it a suitable numerical method for solving FIVPs. To solve FIVPs, the fractional Euler method was proposed in [10] as follows:
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)}\, f\big(t_j, y_j\big),$$
where h = t_{j+1} − t_j is the step size. The local truncation error of this Euler scheme is O(h^{2α}/Γ(2α + 1)), indicating lower-order accuracy in approximating fractional-order derivatives. These restrictions affect solution precision, especially for stiff or highly nonlinear problems. To address this, Batiha et al. [11] proposed an approach with enhanced convergence capabilities, i.e.,
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)}\, k_4,$$
$$k_1 = f(t_j, y_j), \quad k_2 = f\!\left(t_j,\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right), \quad k_3 = f\!\left(t_j,\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_2\right), \quad k_4 = f\!\left(t_j,\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_3\right),$$
with the following local truncation error:
$$y(t_{j+1}) - y_{j+1} = \left( -\tfrac{1}{12} f D_{ty}^{[\alpha]} f + \tfrac{1}{2} f^{2} D_{yy}^{[\alpha]} f + \tfrac{1}{6} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f - \tfrac{1}{24} D_{tt}^{[\alpha]} f - \tfrac{1}{3} f \big(D_{y}^{[\alpha]} f\big)^{2} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta),$$
where the operator D_{t,y}^{[α]} f denotes the Caputo fractional derivative of order α of the function f with respect to the variables t and y. Thus, up to the h^{3α}/Γ(3α + 1) term, the error is
$$O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta).$$
The fractional version in [12] is given as
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \left( \frac{k_1^{2} + k_2^{2}}{k_1 + k_2} \right),$$
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j,\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right).$$
The local truncation error of the aforementioned fractional scheme is computed as a function of the deviation induced at each step. It illustrates how well the scheme approximates the true solution to the fractional differential equation. It is critical to examine the error in order to evaluate the proposed methods’ reliability and consistency.
$$y(t_{j+1}) - y_{j+1} = \left( \tfrac{1}{6} f D_{ty}^{[\alpha]} f - \tfrac{1}{12} f^{2} D_{yy}^{[\alpha]} f - \tfrac{1}{3} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f - \tfrac{1}{4} \frac{\big(D_{t}^{[\alpha]} f\big)^{2}}{f} - \tfrac{1}{12} f \big(D_{y}^{[\alpha]} f\big)^{2} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta).$$
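For reference, the following minimal sketch (our illustration, not the authors' implementation) codes the baseline fractional Euler update of (3) for a scalar Caputo-type IVP with 0 < α ≤ 1; this is also the lower-order scheme used later for local error estimation.

```python
# Minimal sketch of the fractional Euler scheme (3); assumes a scalar Caputo IVP
# D^[alpha] y = f(t, y), y(t0) = y0, on a uniform grid with 0 < alpha <= 1.
from math import gamma

def fractional_euler(f, t0, y0, t_end, n, alpha):
    """Advance y_{j+1} = y_j + (h^alpha / Gamma(alpha + 1)) * f(t_j, y_j)."""
    h = (t_end - t0) / n
    w = h**alpha / gamma(alpha + 1.0)   # fractional step weight h^alpha / Gamma(alpha + 1)
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n):
        y = y + w * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys
```

For α = 1 the weight reduces to h and the classical explicit Euler method is recovered.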
Some researchers have developed various numerical methods for solving ordinary and fractional differential equations, as reported in [13,14,15,16,17] and the references cited therein. To solve FIVPs, most fractional methods rely on a fixed step size throughout the integration process. In particular, to maintain precision, very small step sizes are typically required, which significantly increases the cost and memory usage.
Motivated by the established advantages of fractional-order schemes and the need for enhanced accuracy, this paper aims to develop a robust family of two-stage fractional-order numerical methods for solving (1). A variable-step-size technique is introduced to address the drawbacks of constant-step-size approaches. It performs dynamic adjustments during integration based on local error estimates. Besides improving computational efficiency by allowing for larger steps during smoother intervals, this adaptive method ensures greater accuracy in regions with rapid changes in the solution. Additionally, the adaptive nature of this approach prevents instability and significantly enhances accuracy, especially when solving complex problems.
This paper is structured as follows: Section 2 develops two-stage fractional-order schemes and provides a theoretical analysis of the proposed methods. The implementation and performance of the proposed methods, along with a comparison with some existing methods, are reported in Section 3 using a few fractional-order IVPs. The paper is concluded in Section 4.

2. Development of Novel Fractional Schemes

We begin by introducing essential definitions and concepts necessary for understanding the considered FIVPs, focusing on the Caputo fractional derivative, which is preferred for fractional-order differential equations because it admits classical initial conditions and satisfies D^{[α]} c = 0 for any constant c. We also recall the Riemann–Liouville integral operator of order α, as reported in [18]:
$$J^{[\alpha]} f(x) = \frac{1}{\Gamma(\alpha)} \int_{0}^{x} (x - t)^{\alpha - 1} f(t)\, dt, \qquad x > 0, \quad \alpha \in (0, 1].$$
The following characteristics hold:
  • $J^{[\alpha_1]} J^{[\alpha_2]} f(x) = J^{[\alpha_2]} J^{[\alpha_1]} f(x)$, for $\alpha_1, \alpha_2 > 0$;
  • $J^{[\alpha_1]} J^{[\alpha_2]} f(x) = J^{[\alpha_1 + \alpha_2]} f(x)$, for $\alpha_1, \alpha_2 > 0$;
  • $J^{[\alpha_1]} x^{\kappa} = \dfrac{\Gamma(\kappa+1)}{\Gamma(\kappa+\alpha_1+1)}\, x^{\kappa+\alpha_1}$, for $\kappa > -1$.
Lemma 1
([19]). If $f \in C^{n}[0, b]$, $x > 0$, and $n - 1 < \alpha \leq n$, where $n \in \mathbb{N}$, then
$$D^{[\alpha]} J^{[\alpha]} f(x) = f(x),$$
and
$$J^{[\alpha]} D^{[\alpha]} f(x) = f(x) - \sum_{s=0}^{n-1} f^{(s)}(0^{+}) \frac{x^{s}}{s!}.$$
Lemma 1 follows from the standard properties of Caputo fractional derivatives and fractional integrals: Equation (11) shows that the derivative of the fractional integral returns the original function, while Equation (12) recovers the function up to a polynomial determined by initial values. These results are well-established in [20].
Theorem 1
(Generalized Taylor Formula for Caputo Derivatives). In the context of Caputo fractional derivatives, a function f ( t ) can be expanded around a node t 0 using a fractional Taylor series. This expansion provides an approximate representation of f ( t ) and plays a key role in analyzing the convergence of fractional numerical schemes.
Specifically, for α ∈ (0, 1] and f ∈ C^n[0, t_n], the fractional Taylor formula is given by [21]
$$f(t) = \sum_{i=0}^{n} \frac{(t - t_0)^{i\alpha}}{\Gamma(i\alpha + 1)}\, D^{[i\alpha]} f(t_0) + \frac{(t - t_0)^{(n+1)\alpha}}{\Gamma((n+1)\alpha + 1)}\, D^{[(n+1)\alpha]} f(\zeta), \qquad 0 < \zeta < t,$$
where D^{[sα]} f(t) denotes the Caputo derivative of order sα, for s = 0, 1, 2, …, n. In expanded form, the series can be written as
$$f(t) = f(t_0) + \frac{(t - t_0)^{\alpha}}{\Gamma(\alpha + 1)} D^{[\alpha]} f(t_0) + \frac{(t - t_0)^{2\alpha}}{\Gamma(2\alpha + 1)} D^{[2\alpha]} f(t_0) + \cdots + \frac{(t - t_0)^{n\alpha}}{\Gamma(n\alpha + 1)} D^{[n\alpha]} f(t_0) + \frac{(t - t_0)^{(n+1)\alpha}}{\Gamma((n+1)\alpha + 1)} D^{[(n+1)\alpha]} f(\zeta).$$
Equation (14) is used to calculate the error of fractional-order algorithms to solve FIVPs.
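As a quick illustration of the generalized Taylor formula (our own check, not part of the original exposition), take f(t) = t^α expanded about t_0 = 0: by the Caputo power rule, D^{[α]} t^α = Γ(α + 1) is constant and the higher fractional derivatives vanish, so the two-term expansion already reproduces f exactly.

```python
# Numerical check of the two-term generalized Taylor expansion for f(t) = t**alpha
# about t0 = 0, where D^[alpha] f = Gamma(alpha + 1) is constant (Caputo power rule).
from math import gamma

alpha, t0 = 0.7, 0.0
f = lambda t: t**alpha
D_alpha_f_t0 = gamma(alpha + 1.0)          # Caputo derivative of t**alpha (constant in t)

for t in (0.25, 0.5, 1.0):
    two_term = f(t0) + (t - t0)**alpha / gamma(alpha + 1.0) * D_alpha_f_t0
    print(t, f(t), two_term)               # the last two columns coincide
```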
For fractional-order IVPs, the general form of a two-stage fractional-order Runge–Kutta method is often employed because of its simplicity and higher accuracy compared with lower-order fractional schemes. To improve the stability and accuracy of the solution, the function is evaluated at intermediate points within each step. The mathematical formulation of this class of methods is [22]
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)}\, \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right),$$
where
$$\Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \sum_{i=1}^{m} \omega_i k_i,$$
with
$$k_1 = f(t_j, y_j), \qquad k_i = f\!\left(t_j + c_i \frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \sum_{l=1}^{i-1} \alpha_{il} k_l\right),$$
where $c_i = \sum_{l=1}^{i-1} \alpha_{il}$ and i = 2 for a two-stage scheme. For particular values of ω_i, c_i, and α_{il}, we obtain the well-known midpoint method [23]:
$$y_{j+1} = y_j + \frac{h^{\alpha}}{2\,\Gamma(\alpha+1)} \left( k_1 + k_2 \right),$$
where
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right).$$
We abbreviate this method as RMFS 1 [ ] ; it satisfies the local truncation error (L.T.E.)
$$\mathrm{L.T.E.} = \left( -\tfrac{1}{12} D_{tt}^{[\alpha]} f - \tfrac{1}{6} f D_{ty}^{[\alpha]} f - \tfrac{1}{12} f^{2} D_{yy}^{[\alpha]} f + \tfrac{1}{6} f \big(D_{y}^{[\alpha]} f\big)^{2} + \tfrac{1}{6} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta),$$
and its order of convergence is O(h^{2α}/Γ(2α + 1)). The method is consistent, since
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = f(t_j, y_j);$$
therefore,
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \lim_{h \to 0} \frac{f(t_j, y_j) + f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)}{2} = f(t_j, y_j).$$
The Modified Fractional Euler method [24] is defined as follows:
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)}\, k_2,$$
where
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j + \frac{1}{2}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{1}{2}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right).$$
We abbreviate this method as RMFS 2 [ ] ; it satisfies the local truncation error
$$\mathrm{L.T.E.} = \left( \tfrac{1}{24} D_{tt}^{[\alpha]} f + \tfrac{1}{12} f D_{ty}^{[\alpha]} f + \tfrac{1}{24} f^{2} D_{yy}^{[\alpha]} f + \tfrac{1}{6} f \big(D_{y}^{[\alpha]} f\big)^{2} + \tfrac{1}{6} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta),$$
and its order of convergence is O(h^{2α}/Γ(2α + 1)). The method is consistent, since
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = f(t_j, y_j);$$
therefore
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \lim_{h \to 0} f\!\left(t_j + \frac{h^{\alpha}}{2\,\Gamma(\alpha+1)},\, y_j + \frac{h^{\alpha}}{2\,\Gamma(\alpha+1)} k_1\right) = f(t_j, y_j).$$
Fractional-order methods are essential because they account for the memory and hereditary effects present in many physical, biological, and engineering systems. Their ability to depict non-local dynamics makes them excellent for simulating real-world processes with complex temporal behavior. Unlike classical methods, these approaches effectively model complex phenomena such as anomalous diffusion and viscoelasticity, especially with power-law behavior. Their ability to precisely capture the non-local and history-dependent dynamics of fractional systems enables a deeper understanding of natural phenomena.

2.1. Construction and Convergence Analysis of the Fractional Schemes

Here, we illustrate the construction of the proposed fractional schemes and analyze their convergence for Caputo-type fractional initial value problems. Consider
$$y(x) = y(x_j) + \int_{x_j}^{x} y'(t)\, dt.$$
Using a hybrid quadrature rule, we approximate (27) as
$$\int_{x_j}^{x} y'(t)\, dt \approx \frac{x - x_j}{m} \sum_{i=1}^{m} y'\!\left(x_j + \frac{(x - x_j)(2i - 1)}{2m}\right).$$
Formula (28) is derived by applying a hybrid quadrature rule to approximate the integral in (27). In particular, the interval x j , x is subdivided into m equal sub-intervals, and the midpoint rule is employed within each sub-interval. This choice balances accuracy with computational efficiency, as the midpoint rule is known to be second-order-accurate. From the perspective of error analysis, the approximation can be expressed as
$$\int_{x_j}^{x} y'(t)\, dt = \frac{x - x_j}{m} \sum_{i=1}^{m} y'\!\left(x_j + \frac{(x - x_j)(2i - 1)}{2m}\right) + O\!\left(\frac{(x - x_j)^{3}}{m^{2}}\right).$$
Hence, the truncation error introduced by (28) is of order O(h^2), which vanishes as h → 0. Therefore, the approximation is consistent with the Taylor series expansion and ensures the accuracy required for constructing the proposed fractional schemes. From (1)–(6) and (27)–(28), we define the following fractional scheme:
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)}\, \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right),$$
where
$$\Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \sum_{i=1}^{m} \omega_i k_i + \frac{k_1 k_2}{k_1 + k_2},$$
and
$$k_1 = f(t_j, y_j), \qquad k_i = f\!\left(t_j + c_i \frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \sum_{l=1}^{i-1} \alpha_{il} k_l\right).$$
Using (32), we determine the unknown parameters ω_1, ω_2, c_2, and α_21 so that the scheme agrees with the generalized Taylor series expansion up to the highest possible order. The generalized Taylor series expansion is
$$y(t_{j+1}) = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} f + \left( \tfrac{1}{2} D_{t}^{[\alpha]} f + \tfrac{1}{2} f D_{y}^{[\alpha]} f \right) \frac{h^{2\alpha}}{\Gamma(2\alpha+1)} + \left( \tfrac{1}{6} D_{tt}^{[\alpha]} f + \tfrac{1}{3} f D_{ty}^{[\alpha]} f + \tfrac{1}{6} f^{2} D_{yy}^{[\alpha]} f + \tfrac{1}{6} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f + \tfrac{1}{6} f \big(D_{y}^{[\alpha]} f\big)^{2} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + \cdots$$
By computing the Taylor series expansion of k 1 and k i , we have
$$k_1 = f,$$
$$k_2 = f + \left( c_2 D_{t}^{[\alpha]} f + \alpha_{21} f D_{y}^{[\alpha]} f \right) \frac{h^{\alpha}}{\Gamma(\alpha+1)} + \left( \tfrac{1}{2} f^{2} D_{yy}^{[\alpha]} f\, \alpha_{21}^{2} + f D_{ty}^{[\alpha]} f\, c_2 \alpha_{21} + \tfrac{1}{2} c_2^{2} D_{tt}^{[\alpha]} f \right) \frac{h^{2\alpha}}{\Gamma(2\alpha+1)} + \left( \tfrac{1}{6} f^{3} D_{yyy}^{[\alpha]} f\, \alpha_{21}^{3} + \tfrac{1}{2} f^{2} D_{tyy}^{[\alpha]} f\, c_2 \alpha_{21}^{2} + \tfrac{1}{2} f D_{tty}^{[\alpha]} f\, c_2^{2} \alpha_{21} + \tfrac{1}{6} D_{ttt}^{[\alpha]} f\, c_2^{3} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta).$$
Thus,
$$\omega_1 k_1 + \frac{k_1 k_2}{k_1 + k_2} + \omega_2 k_2 = \left( \omega_1 + \tfrac{1}{2} + \omega_2 \right) f + \left[ \tfrac{1}{4} c_2 D_{t}^{[\alpha]} f + \tfrac{1}{4} \alpha_{21} f D_{y}^{[\alpha]} f + \omega_2 \left( c_2 D_{t}^{[\alpha]} f + \alpha_{21} f D_{y}^{[\alpha]} f \right) \right] \frac{h^{\alpha}}{\Gamma(\alpha+1)} + \left[ \left( \tfrac{1}{4} + \omega_2 \right) \left( \tfrac{1}{2} f^{2} D_{yy}^{[\alpha]} f\, \alpha_{21}^{2} + f D_{ty}^{[\alpha]} f\, c_2 \alpha_{21} + \tfrac{1}{2} c_2^{2} D_{tt}^{[\alpha]} f \right) - \frac{1}{8 f} \left( c_2 D_{t}^{[\alpha]} f + \alpha_{21} f D_{y}^{[\alpha]} f \right)^{2} \right] \frac{h^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots$$
By substituting (36) into (30) and comparing the coefficients of (h^α/Γ(α+1)) f, (h^{2α}/Γ(2α+1)) D_t^{[α]} f, and (h^{2α}/Γ(2α+1)) f D_y^{[α]} f with the corresponding terms in the Taylor series expansion (33), we obtain the following relations:
$$\omega_1 + \omega_2 = \tfrac{1}{2},$$
$$\left( \tfrac{1}{4} + \omega_2 \right) c_2 = \tfrac{1}{2},$$
$$\left( \tfrac{1}{4} + \omega_2 \right) \alpha_{21} = \tfrac{1}{2}.$$
By solving the system in (37)–(39) with ω_1 = −1/2, ω_2 = 1, and c_2 = α_21 = 2/5, we obtain the following scheme:
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \left( \frac{2 k_2 - k_1}{2} + \frac{k_1 k_2}{k_1 + k_2} \right),$$
where
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right).$$
We abbreviate this method as RMFS 3 [ ] ; it satisfies the local truncation error (L.T.E.)
$$\mathrm{L.T.E.} = \left( \tfrac{1}{15} D_{tt}^{[\alpha]} f + \tfrac{2}{15} f D_{ty}^{[\alpha]} f + \tfrac{1}{15} f^{2} D_{yy}^{[\alpha]} f + \tfrac{14}{75} f \big(D_{y}^{[\alpha]} f\big)^{2} + \tfrac{31}{150} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f + \tfrac{1}{50} \frac{\big(D_{t}^{[\alpha]} f\big)^{2}}{f} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta),$$
and the obtained order of convergence is O(h^{2α}/Γ(2α + 1)). The developed method in (40) is consistent, since
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = f(t_j, y_j);$$
therefore,
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \lim_{h \to 0} \left[ \frac{2 f\!\left(t_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right) - f(t_j, y_j)}{2} + \frac{f(t_j, y_j)\, f\!\left(t_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)}{f(t_j, y_j) + f\!\left(t_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{2}{5}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)} \right] = f(t_j, y_j).$$
For ω_1 = 3/2, ω_2 = −1, and c_2 = α_21 = −2/3, we developed the scheme
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \left( \frac{3 k_1 - 2 k_2}{2} + \frac{k_1 k_2}{k_1 + k_2} \right),$$
where
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right),$$
and we abbreviate this method as RMFS 4 [ ] ; it satisfies the local truncation error
$$\mathrm{L.T.E.} = \left( \tfrac{1}{3} D_{tt}^{[\alpha]} f + \tfrac{2}{3} f D_{ty}^{[\alpha]} f + \tfrac{1}{3} f^{2} D_{yy}^{[\alpha]} f + \tfrac{2}{9} f \big(D_{y}^{[\alpha]} f\big)^{2} + \tfrac{5}{18} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f + \tfrac{1}{18} \frac{\big(D_{t}^{[\alpha]} f\big)^{2}}{f} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta).$$
The obtained order of convergence is O(h^{2α}/Γ(2α + 1)). The developed method in (43) is consistent, since
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = f(t_j, y_j);$$
therefore,
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \lim_{h \to 0} \left[ \frac{3 f(t_j, y_j) - 2 f\!\left(t_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)}{2} + \frac{f(t_j, y_j)\, f\!\left(t_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)}{f(t_j, y_j) + f\!\left(t_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j - \frac{2}{3}\frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)} \right] = f(t_j, y_j).$$
For ω_1 = ω_2 = 1/4 and c_2 = α_21 = 1, we developed the scheme
$$y_{j+1} = y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} \left( \frac{k_1 + k_2}{4} + \frac{k_1 k_2}{k_1 + k_2} \right),$$
where
$$k_1 = f(t_j, y_j), \qquad k_2 = f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\; y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right),$$
and we abbreviate this method as RMFS 5 [ ] ; it satisfies the L.T.E.
$$\mathrm{L.T.E.} = \left( \tfrac{1}{12} D_{tt}^{[\alpha]} f + \tfrac{1}{6} f D_{ty}^{[\alpha]} f + \tfrac{1}{12} f^{2} D_{yy}^{[\alpha]} f + \tfrac{7}{24} f \big(D_{y}^{[\alpha]} f\big)^{2} + \tfrac{5}{12} D_{t}^{[\alpha]} f\, D_{y}^{[\alpha]} f + \tfrac{1}{8} \frac{\big(D_{t}^{[\alpha]} f\big)^{2}}{f} \right) \frac{h^{3\alpha}}{\Gamma(3\alpha+1)} + O\!\left(\frac{h^{4\alpha}}{\Gamma(4\alpha+1)}\right) D^{[3\alpha]} y(\zeta).$$
The obtained order of convergence is O(h^{2α}/Γ(2α + 1)). The developed method in (46) is consistent, since
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = f(t_j, y_j);$$
therefore,
$$\lim_{h \to 0} \Phi\!\left(t_j, y_j; \frac{h^{\alpha}}{\Gamma(\alpha+1)}\right) = \lim_{h \to 0} \left[ \frac{f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right) + f(t_j, y_j)}{4} + \frac{f(t_j, y_j)\, f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)}{f(t_j, y_j) + f\!\left(t_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)},\, y_j + \frac{h^{\alpha}}{\Gamma(\alpha+1)} k_1\right)} \right] = f(t_j, y_j).$$
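As a small sanity check (ours), the parameter sets used above for RMFS 3 [ ] – RMFS 5 [ ] , as reconstructed here, can be verified against the order conditions (37)–(39):

```python
# Verify the order conditions (37)-(39) for the parameter sets (omega1, omega2, c2, alpha21)
# of RMFS3-RMFS5, as stated above; each condition should evaluate to 1/2.
sets = {
    "RMFS3": (-0.5, 1.0, 2.0 / 5.0, 2.0 / 5.0),
    "RMFS4": (1.5, -1.0, -2.0 / 3.0, -2.0 / 3.0),
    "RMFS5": (0.25, 0.25, 1.0, 1.0),
}
for name, (w1, w2, c2, a21) in sets.items():
    cond1 = w1 + w2                      # omega1 + omega2
    cond2 = (0.25 + w2) * c2             # (1/4 + omega2) c2
    cond3 = (0.25 + w2) * a21            # (1/4 + omega2) alpha21
    print(name, cond1, cond2, cond3)     # each line prints 0.5 0.5 0.5
```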

2.2. Stability Analysis of the Fractional Scheme

Stability analysis is essential to producing accurate and reliable results for complex systems or long-term simulations. Particularly for stiff or highly sensitive problems, errors should not grow uncontrollably as the computation advances. In fractional-order methods, the range of step sizes and method parameters for which the numerical solution remains bounded is known as the stability region. It is critical to the reliability of simulations, especially for long-term or stiff problems. Choosing the step size within the stability region improves convergence and preserves the qualitative behavior of the solution, ensuring consistent performance across a variety of problem types. Thus, step-size control and method selection are guided by stability analysis. To determine the stability of the proposed fractional schemes in (40), (43), and (46), Dahlquist's test equation, as presented in [25], is used:
$$D^{[\alpha]} y(t) = \lambda y(t), \qquad \lambda \in \mathbb{C}, \quad 0 < \alpha < 1, \qquad y(t_0) = y_0.$$
Thus, for the scheme in (40),
$$k_1 = \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)}\, y(t_j),$$
$$k_2 = \left[ \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2} \right] y(t_j).$$
Now, using (50) and (51) in (40), we have the following stability function:
$$y(t_{j+1}) = y(t_j) + \left\{ \frac{2 \left[ \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2} \right] - \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)}}{2} + \frac{\frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \left[ \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2} \right]}{\frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2}} \right\} y(t_j).$$
Thus,
$$y(t_{j+1}) = \left[ 1 + \frac{\frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{4}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2}}{2} + \frac{\left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{3}}{2\, \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} + \frac{2}{5} \left( \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \right)^{2}} \right] y(t_j).$$
Therefore,
$$R(\check{z}) = \frac{y(t_{j+1})}{y(t_j)} = 1 + \frac{\check{z} + \frac{4}{5} \check{z}^{2}}{2} + \frac{\check{z}^{2} + \frac{2}{5} \check{z}^{3}}{2 \check{z} + \frac{2}{5} \check{z}^{2}},$$
where $\check{z} = \frac{h^{\alpha} \lambda}{\Gamma(\alpha+1)} \in \mathbb{C}$. Similarly, for (43), the stability function is
$$R(\check{z}) = \frac{y(t_{j+1})}{y(t_j)} = 1 + \frac{\check{z} + \frac{4}{3} \check{z}^{2}}{2} + \frac{\check{z}^{2} - \frac{2}{3} \check{z}^{3}}{2 \check{z} - \frac{2}{3} \check{z}^{2}},$$
and for scheme (46), it is
$$R(\check{z}) = \frac{y(t_{j+1})}{y(t_j)} = 1 + \frac{2 \check{z} + \check{z}^{2}}{4} + \frac{\check{z}^{2} + \check{z}^{3}}{2 \check{z} + \check{z}^{2}}.$$
Using the MATLAB computing environment, the stability regions of RMFS 1 [ ] – RMFS 5 [ ] are depicted for various values of α. The newly proposed fractional-order methods have larger stability regions than the existing fractional schemes RMFS 1 [ ] – RMFS 2 [ ] , as illustrated in Figure 1.
Figure 1 illustrates the stability regions of the proposed methods, while Table 1 lists the corresponding stability intervals for various fractional orders. The results show that the proposed methods have better stability and wider stability regions compared to the existing fractional-order methods.
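As a quick numerical illustration (ours, independent of the MATLAB plots), the stability functions above can be scanned along the negative real axis in the variable ž to locate where |R(ž)| first exceeds one; note that this scan is carried out directly in ž, whereas the α-dependent intervals reported in Table 1 additionally involve the mapping ž = h^α λ / Γ(α + 1).

```python
# Walk along the negative real axis and report where |R(z)| first exceeds 1 for the
# three stability functions derived above (our sketch).
def R_rmfs3(z):
    return 1 + (z + 0.8 * z**2) / 2 + (z**2 + 0.4 * z**3) / (2 * z + 0.4 * z**2)

def R_rmfs4(z):
    return 1 + (z + (4.0 / 3.0) * z**2) / 2 + (z**2 - (2.0 / 3.0) * z**3) / (2 * z - (2.0 / 3.0) * z**2)

def R_rmfs5(z):
    return 1 + (2 * z + z**2) / 4 + (z**2 + z**3) / (2 * z + z**2)

for name, R in (("RMFS3", R_rmfs3), ("RMFS4", R_rmfs4), ("RMFS5", R_rmfs5)):
    z, dz = -1e-3, -1e-3
    while abs(R(z)) <= 1.0 and z > -10.0:   # step left until the scheme becomes unstable
        z += dz
    print(name, "|R| first exceeds 1 near z =", round(z, 3))
```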

3. Fractional Scheme Implementation and Performance Assessment

To assess the accuracy and efficiency of fractional-order methods, both fixed-step-size and variable-step-size strategies are implemented in this section.

3.1. Fixed-Step-Size Strategies

The fixed-step-size technique keeps the time step constant throughout the integration interval. This approach simplifies implementation and maintains a uniform time resolution, but applying it to FIVPs with rapidly varying solutions can result in a loss of accuracy or efficiency. To evaluate the accuracy of the proposed methods, we use the following error formula:
$$\text{Maximum-Error} = \max_{j = 0, 1, \ldots, n} \left| y(t_j) - y_j \right|,$$
where
  • y(t_j) is the exact solution;
  • y_j is the approximate solution obtained by the fractional schemes at each grid point t_j.
It is easier to find convergence orders and perform theoretical stability analysis using fixed-step-size approaches. This formulation enables direct accuracy assessment without interpolation or resampling, which is particularly useful for comparing the methodology with other existing methods in the literature. Furthermore, fixed-step-size techniques produce predictable, reproducible results that are unaffected by floating-point fluctuations, ensuring uniform results.
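Expressed in code, the maximum-error measure above is a single reduction over the grid; the following minimal sketch (ours) assumes y_exact and y_num hold the exact and computed values at the grid points t_j.

```python
# Maximum error over the grid points, as defined above; y_exact and y_num are assumed
# to be equal-length sequences of exact and computed values.
def maximum_error(y_exact, y_num):
    return max(abs(ye - yn) for ye, yn in zip(y_exact, y_num))
```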

3.2. Adaptive-Step-Size Strategies

The adaptive-step-size methods dynamically adjust the grid points to improve performance. To ensure accuracy and effectiveness, the adjustment is based on the following local error estimate:
$$\mathrm{Err} = \left| y(t_j) - y_j \right|.$$
In complex simulations, it lowers computational costs and guarantees improved stability. This technique offers improved accuracy while requiring minimal computational effort. Each step involves calculating an estimated error (Err) and comparing it with a predetermined tolerance (Tol). The adaptive-step-size strategy used in this study is inspired by a similar strategy reported in [26]:
  • If Err ≤ Tol, the step is accepted, and the step size is increased to improve efficiency.
  • If Err > Tol, the step is rejected, and the step size is reduced to improve accuracy.
  • An adaptive-step-size strategy is used, with the fractional Euler method in (3) serving as the local error estimator.
The dynamic adjustment is implemented using the following strategy to improve the overall performance of the proposed methods:
$$h_{\mathrm{new}} = \eta \times h_{\mathrm{old}} \left( \frac{\mathrm{Tol}}{\mathrm{Err}} \right)^{\frac{1}{\alpha + 1}},$$
where the adjustment factor η ∈ (0, 1) protects against step-size failures, Tol represents the predefined absolute tolerance, and α indicates the order of the lower-order approach in (30). This adaptive technique ensures that errors are managed efficiently and reliably during the numerical solution process.
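A minimal sketch (ours) of this controller, with an assumed safety factor η = 0.9 and a small guard against a vanishing error estimate:

```python
# Step-size controller h_new = eta * h_old * (Tol / Err)^(1 / (alpha + 1)), as above;
# eta = 0.9 is an assumed, typical safety-factor value.
def new_step_size(h_old, err, tol, alpha, eta=0.9):
    err = max(err, 1e-16)                 # guard against a zero error estimate
    return eta * h_old * (tol / err) ** (1.0 / (alpha + 1.0))
```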

3.3. Implementations of RMFS 1 [ ] RMFS 5 [ ]

In this subsection, we describe the implementation of the proposed and existing fractional numerical methods using fixed- and adaptive-step-size strategies. Algorithms 1 and 2 provide a structured approach to solving FIVPs and enable a consistent comparison of method performance in terms of accuracy and computational effort.
Algorithm 1 (Fixed Step Size): This algorithm solves FIVPs by using a fixed step size h. The main steps are shown below.
Algorithm 1: Use of RMFS 3 [ ] to solve FIVPs with a fixed step length.
1: Initialize — Define D^{[α]} y(t) = f(t, y(t)), y^{(k)}(t_0) = y_0^{(k)}. Read the initial values t_0 and y_0, the fractional parameter α, the number of steps n, and the end point t_n. Choose the step size h = (t_n − t_0)/n and set y_j = y_0.
2: Start loop over time — Set j = 0; while j < n do
3: Compute k_1 — k_1 = f(t_j, y_j)
4: Compute k_2 — k_2 = f(t_j + (2/5) h^α/Γ(α+1), y_j + (2/5) (h^α/Γ(α+1)) k_1)
5: Update solution — y_{j+1} = y_j + (h^α/Γ(α+1)) [ (2k_2 − k_1)/2 + k_1 k_2/(k_1 + k_2) ]; set j = j + 1.
6: End loop — end while. Output: the approximate solution y_j at the discrete time points.
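For illustration, the following runnable Python rendering of Algorithm 1 (our sketch, not the authors' MATLAB code) applies the fixed-step RMFS 3 [ ] update to the linear problem of Example 2 below, for which y(t) = t² satisfies the stated equation; as expected from the convergence discussion above, agreement with the exact endpoint value is closest as α approaches 1.

```python
# Runnable rendering of Algorithm 1 (fixed-step RMFS3); our sketch, not the authors' code.
# Test problem (Example 2): D^[alpha] y + y = 2/Gamma(3-alpha) t^(2-alpha) + t^2, y(0) = 0,
# for which y(t) = t^2 satisfies the equation; f(t, y) below is the rearranged right-hand side.
from math import gamma

def rmfs3_fixed(f, t0, y0, t_end, n, alpha):
    h = (t_end - t0) / n
    w_stage = 0.4 * h**alpha / gamma(alpha + 1.0)     # internal-stage weight (2/5 factor)
    w_update = h**alpha / gamma(alpha + 1.0)          # update weight
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + w_stage, y + w_stage * k1)
        den = k1 + k2
        rational = k1 * k2 / den if den != 0.0 else 0.0   # guard the rational term
        y = y + w_update * ((2.0 * k2 - k1) / 2.0 + rational)
        t = t + h
    return y

for alpha in (1.0, 0.9):
    f = lambda t, y, a=alpha: 2.0 / gamma(3.0 - a) * t**(2.0 - a) + t**2 - y
    print(alpha, rmfs3_fixed(f, 0.0, 0.0, 1.0, 100, alpha), "vs exact 1.0")
```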
Algorithm 2 (Adaptive Step Size): This algorithm uses an adaptive-step-size strategy to control the local error during the integration process.
Algorithm 2: Use of RMFS 3 [ ] to solve FIVPs with the adaptive-step-size technique.
1: Initialize — Define D^{[α]} y(t) = f(t, y(t)), y^{(k)}(t_0) = y_0^{(k)}. Choose the initial step size h_0, the fractional parameter α, and the error tolerance Tol, and set t_j = t_0.
2: Start loop over time — while j < N do
3: Compute k_1 — k_1 = f(t_j, y_j)
4: Compute k_2 — k_2 = f(t_j + (2/5) h^α/Γ(α+1), y_j + (2/5) (h^α/Γ(α+1)) k_1)
5: Compute tentative solution — y_{j+1} = y_j + (h^α/Γ(α+1)) [ (2k_2 − k_1)/2 + k_1 k_2/(k_1 + k_2) ]
6: Estimate local error — compute the error norm Err.
7: Adjust step size — if Err > Tol, reduce h; if Err < Tol/2, increase h; use h_new = η × h_old (Tol/Err)^{1/(α+1)}.
8: Accept or reject step — if Err ≤ Tol, accept y_{j+1}, update t_{j+1} = t_j + h, and set j = j + 1; otherwise, recompute the step with the new h.
9: End loop — end while
10: Output — the approximate solution y_j at the adaptive time steps.
To ensure clarity and uniformity, we use the following abbreviations:
  • h — step length
  • Exact — exact solution
  • MSE — mean square error norm
  • Max-Error — maximum error
  • ‖·‖∞ — infinity error norm
  • Avg — average error norm
  • Fun — total number of function evaluations
  • RMFS 1 [ ] – RMFS 2 [ ] — existing methods of fractional order
  • RMFS 3 [ ] – RMFS 5 [ ] — newly developed methods of fractional order

3.4. Performance Assessment

This subsection presents numerical examples to assess the efficiency, stability, and consistency of the proposed fractional-order and some existing methods. The results demonstrate the accuracy, robustness, and computational efficiency of the proposed methods.
Example 1
([27]). In the first numerical example, we consider the following FIVP, which is used to effectively model complex systems with memory and hereditary features, as reported in [27]:
$$D^{[\alpha]} y(t) = \frac{40320}{\Gamma(9-\alpha)}\, t^{8-\alpha} - 3\, \frac{\Gamma\!\left(5 + \frac{\alpha}{2}\right)}{\Gamma\!\left(5 - \frac{\alpha}{2}\right)}\, t^{4 - \frac{\alpha}{2}} + g(t), \qquad y(0) = 0, \quad y'(0) = 0, \quad t > 0,$$
where
$$g(t) = \frac{9}{4} \Gamma(\alpha + 1) + \left( \frac{3}{2} t^{\frac{\alpha}{2}} - t^{4} \right)^{3} - \big(y(t)\big)^{\frac{3}{2}}.$$
The exact solution of (60) is
$$y(t) = t^{8} - 3\, t^{4 + \frac{\alpha}{2}} + \frac{9}{4}\, t^{\alpha}.$$
Table 2 illustrates the exact and approximate solutions of (60) to five decimal places utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 3 shows the error for a fixed step length of 0.1 , indicating that the newly constructed methods RMFS 3 [ ] RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] RMFS 2 [ ] .
The adaptive-step-size implementation is also applied to solve Example 1, using tolerances ε = 10^{-2}, 10^{-3}, and 10^{-6}, to improve the convergence rate of the fractional-order numerical schemes. The results of these numerical schemes are presented in Table 4.
We used a variety of stopping conditions to determine the accuracy reported in Table 4, and we used MATLAB R2016a's tic-toc function to measure the computational time in seconds. Our developed methods RMFS 3 [ ] – RMFS 5 [ ] outperform RMFS 1 [ ] – RMFS 2 [ ] in terms of CPU time, MSE (mean square error norm), the Avg error norm, and the ‖·‖∞ error norm. Figure 2 compares exact and approximate solutions for different fractional parameter values, along with the related error plots. This figure also shows the result for α = 0.99, where the approximate solutions produced by RMFS 3 [ ] – RMFS 5 [ ] closely match the exact solution, indicating high precision.
Example 2
([28]). Consider the fractional initial value problem as follows:
$$D^{[\alpha]} y(t) + y(t) = \frac{2}{\Gamma(3-\alpha)}\, t^{2-\alpha} + t^{2}, \qquad y(0) = 0, \quad t > 0,$$
where α ∈ (0, 1). The exact solution of (61) is y(t) = t².
Table 5 illustrates the exact and approximate solutions of Example 2 utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 6 shows the error for a fixed step length of 0.1, demonstrating that the newly constructed methods RMFS 3 [ ] RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] RMFS 2 [ ] .
We analyze the behavior and enhance the rate of convergence of the fractional-order numerical schemes by implementing an adaptive step size with various tolerances, i.e., Tol = 10^{-2}, 10^{-3}, and 10^{-6}. Table 7 presents the results of the numerical schemes using adaptive step sizes.
We used a variety of stopping conditions to determine the accuracy presented in Table 7, and we used MATLAB's tic-toc function to calculate the computational time in seconds. In terms of CPU time, MSE (mean square error norm), the Avg error norm, and the ‖·‖∞ error norm, our newly developed methods RMFS 3 [ ] – RMFS 5 [ ] perform better than the other schemes, i.e., RMFS 1 [ ] – RMFS 2 [ ] , as shown in Table 7. This indicates better convergence behavior and a more efficient solution of fractional differential equations. In addition, Figure 3 illustrates the associated error graph together with the exact and approximate solutions for a range of fractional parameter values. Figure 3 also clearly shows that the approximate solution closely matches the exact solution for α = 0.99 using RMFS 3 [ ] – RMFS 5 [ ] .
Example 3
([29]). Consider the fractional initial value problem as follows:
$$D^{[\alpha]} y(t) + y(t) = \frac{2}{\Gamma(3-\alpha)}\, t^{2-\alpha} - \frac{t^{1-\alpha}}{\Gamma(2-\alpha)} + t^{2} - t, \qquad y(0) = 0, \quad t > 0,$$
where α ∈ (0, 1). The exact solution of (62) is y(t) = t² − t.
Table 8 illustrates the exact and approximate solution of FIVP (62) utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 9 shows the error for a fixed step length of 0.1, demonstrating that the newly constructed methods RMFS 3 [ ] RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] RMFS 2 [ ] .
We analyze the behavior and enhance the rate of convergence of fractional-order numerical schemes using an adaptive-step-size strategy. Table 10 presents the results of the numerical schemes.
We used a variety of stopping conditions to determine the accuracy reported in Table 10, and we used MATLAB's tic-toc function to calculate the computational time in seconds. In terms of CPU time, MSE (mean square error norm), the Avg error norm, and the ‖·‖∞ error norm, our developed methods RMFS 3 [ ] – RMFS 5 [ ] perform better than the other schemes, i.e., RMFS 1 [ ] – RMFS 2 [ ] , as indicated in Table 10. Figure 4 also compares the exact and approximate solutions for various fractional parameter values, as well as the corresponding error graph. Figure 4 also clearly shows that the approximate solution closely matches the exact solution for α = 0.99 using RMFS 3 [ ] – RMFS 5 [ ] .
Example 4
([30]). Consider the fractional initial value problem as follows:
$$D^{[\alpha]} y(t) + 2 \big(y(t)\big)^{2} = \Gamma(\alpha + 2)\, t + 2 \left( t^{\alpha + 1} \right)^{2}, \qquad y(0) = 0, \quad t > 0,$$
where α ∈ (0, 1). The exact solution of (63) is y(t) = t^{α+1}.
Table 11 illustrates the exact and approximate solutions of Example 4 utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 12 shows the error for a fixed step length of 0.1, indicating that the newly constructed methods RMFS 3 [ ] – RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] – RMFS 2 [ ] . We analyze the behavior and enhance the rate of convergence of the fractional-order numerical schemes using the adaptive-step-size approach with various tolerances, i.e., Tol = 10^{-2}, 10^{-3}, and 10^{-6}. Table 13 reports the results of the numerical schemes.
We used a variety of stopping conditions to determine the accuracy presented in Table 13, and we used MATLAB's tic-toc function to calculate the computational time in seconds. Our recently developed methods RMFS 3 [ ] – RMFS 5 [ ] outperform the other schemes, i.e., RMFS 1 [ ] – RMFS 2 [ ] , in terms of CPU time, MSE (mean square error norm), the Avg error norm, and the ‖·‖∞ error norm, as illustrated in Table 13. Figure 5 also compares the exact and approximate solutions for various fractional parameter values, as well as the corresponding error graph. Figure 5 also clearly shows that the approximate solution closely matches the exact solution for α = 0.99 using RMFS 3 [ ] – RMFS 5 [ ] .
Example 5
([31]). Consider the fractional initial value problem as follows:
$$D^{[\alpha]} y(t) + y^{2}(t) = g(t), \qquad y(0) = 0, \quad y'(0) = 0, \quad t > 0,$$
where
$$g(t) = \frac{\Gamma(6)}{\Gamma(6-\alpha)}\, t^{5-\alpha} - \frac{3\, \Gamma(5)}{\Gamma(5-\alpha)}\, t^{4-\alpha} + \frac{2\, \Gamma(4)}{\Gamma(4-\alpha)}\, t^{3-\alpha} + \left( t^{5} - 3 t^{4} + 2 t^{3} \right)^{2},$$
and α ∈ (0, 1). The exact solution of (64) is
$$y(t) = t^{5} - 3 t^{4} + 2 t^{3}.$$
Table 14 illustrates the exact and approximate solutions for Example 5 utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 15 shows the error for a fixed step length of 0.1, indicating that the newly constructed methods RMFS 3 [ ] RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] RMFS 2 [ ] . Table 16 presents the results of the numerical schemes with adaptive step size.
We employed a variety of stopping conditions to determine the accuracy reported in Table 16, and we utilized MATLAB's tic-toc function to compute the processing time in seconds. Our developed methods RMFS 3 [ ] – RMFS 5 [ ] outperform the other schemes, including RMFS 1 [ ] – RMFS 2 [ ] , in terms of CPU time, MSE (mean square error norm), the Avg error norm, and the ‖·‖∞ error norm, as shown in Table 16. Figure 6 compares the exact and numerical solutions for different α values, along with the related error plots. For α = 0.99, the approximate solutions produced by RMFS 3 [ ] – RMFS 5 [ ] closely match the exact solution, indicating the high precision of the proposed method.
Example 6
(Forced Fractional Relaxation). This model is a forced fractional relaxation system characterized by the Caputo fractional IVP [32], which is useful for describing dynamical processes in which memory and heredity effects cannot be ignored. The fractional relaxation system, which integrates prior states through the fractional derivative of order α, provides a more accurate description of complicated materials and processes compared with the classical relaxation model, which only considers the current state. This is especially relevant in viscoelasticity, dielectric relaxation, anomalous diffusion, and biomedical engineering, where experimental results frequently reveal power-law decays rather than simple exponential ones. In physiological models, the forcing term Q g t refers to an external input, such as stress, field, or metabolic stimulation. Long-term memory dynamics and non-exponential relaxation are naturally captured by the Mittag-Leffler function, a fractional counterpart of the exponential function utilized in the model’s solution. The given model can be described by the Caputo fractional initial value problem
$$D^{[\alpha]} y(t) = -\kappa\, y(t) + Q\, g(t), \qquad y(0) = y_0,$$
where the parameters are κ > 0 (relaxation rate), Q ∈ ℝ (forcing amplitude), μ > −1 (forcing exponent), g(t) = t^μ, and α ∈ (0, 1). The exact solution of (65) is
$$y(t) = y_0\, E_{\alpha}\!\left( -\kappa t^{\alpha} \right) + Q\, t^{\alpha + \mu}\, E_{\alpha,\, \alpha + \mu + 1}\!\left( -\kappa t^{\alpha} \right),$$
where E_α(−κ t^α) and E_{α, α+μ+1}(−κ t^α) are the one- and two-parameter Mittag–Leffler functions, defined by (see [33])
$$E_{\alpha}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + 1)}, \qquad E_{\alpha, \beta}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + \beta)}.$$
In particular,
$$E_{\alpha}\!\left( -\kappa t^{\alpha} \right) = \sum_{n=0}^{\infty} \frac{(-\kappa t^{\alpha})^{n}}{\Gamma(\alpha n + 1)}, \qquad E_{\alpha,\, \alpha + \mu + 1}\!\left( -\kappa t^{\alpha} \right) = \sum_{n=0}^{\infty} \frac{(-\kappa t^{\alpha})^{n}}{\Gamma(\alpha n + \alpha + \mu + 1)}.$$
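For completeness, a small sketch (ours, not from the paper) that evaluates these Mittag–Leffler series by truncated partial sums, using the parameter values listed below; a few dozen terms suffice for moderate values of |κ t^α|.

```python
# Truncated-series evaluation of the Mittag-Leffler functions and of the exact solution
# above (our sketch); parameters kappa = 0.5, Q = 1, mu = 0, y0 = 1 as listed below.
from math import gamma

def mittag_leffler(z, alpha, beta=1.0, n_terms=60):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z) via its power series."""
    return sum(z**n / gamma(alpha * n + beta) for n in range(n_terms))

def y_exact(t, alpha, kappa=0.5, Q=1.0, mu=0.0, y0=1.0):
    return (y0 * mittag_leffler(-kappa * t**alpha, alpha)
            + Q * t**(alpha + mu) * mittag_leffler(-kappa * t**alpha, alpha, alpha + mu + 1.0))

print(y_exact(1.0, 0.5))   # reference value for comparing the numerical schemes
```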
To simulate the model using fractional schemes, the following parameter values in (65) are given:
  • α ∈ (0, 1) is the fractional order, controlling the "memory effect" of the process.
  • κ = 0.5 is the relaxation rate, representing how quickly the system returns to equilibrium.
  • Q = 1.0 and μ = 0 specify constant external forcing.
  • y 0 = 1 is the initial state.
The term
$$f(t, y) = -\kappa\, y(t) + Q\, g(t)$$
contains two competing effects: a decay term, −κ y(t), which pulls the system toward equilibrium, and a forcing term, Q g(t), which injects energy into the system.
Table 17 illustrates the exact and approximate solutions for Example 6 utilizing fractional schemes RMFS 1 [ ] RMFS 5 [ ] with fractional parameter values of 0.5 and step length of 0.1.
Table 18 shows the error for a fixed step length of 0.1, indicating that the newly constructed methods RMFS 3 [ ] – RMFS 5 [ ] are more stable than the existing methods RMFS 1 [ ] – RMFS 2 [ ] . Table 19 presents the results of the numerical schemes with the adaptive step size.
We evaluated accuracy using several stopping conditions and computed the CPU time with MATLAB's tic-toc function. The proposed methods RMFS 3 [ ] – RMFS 5 [ ] consistently outperformed RMFS 1 [ ] – RMFS 2 [ ] in terms of CPU time and the MSE, Avg, and ‖·‖∞ error norms, as shown in Table 19. Figure 7 compares the exact and numerical solutions for different α values, along with the related error plots. For α = 0.99, the approximate solutions produced by RMFS 3 [ ] – RMFS 5 [ ] closely match the exact solution, indicating the high precision of the proposed methods.
Physical Interpretation: This model can be interpreted as a generalized relaxation process with memory effects. In a classical relaxation system (α = 1), the solution is a simple exponential decay to equilibrium, with forcing altering the steady-state value. For α ∈ (0, 1), the decay is slower and follows a stretched-exponential/power-law pattern, which commonly reflects memory and hereditary features in viscoelastic materials, dielectric relaxation, or anomalous diffusion in complex media. In particular,
  • The term y_0 E_α(−κ t^α) represents the natural fractional relaxation, showing how the system dissipates its initial energy.
  • The term Q t^{μ+α} E_{α, α+μ+1}(−κ t^α) represents the response to the external forcing, which gradually drives the system to a new equilibrium.
Physically, this means that the system does not forget its past in an instant; rather, it has a long memory, with prior conditions continuing to influence the present.

4. Conclusions

In this research article, we introduce novel fractional-order numerical methods that utilize the Caputo fractional derivative to solve initial value problems associated with fractional-order differential equations (FODEs). The proposed methods are designed to capture the memory effects inherent in FODEs while ensuring computational efficiency. Our findings demonstrate that the proposed RMFS 3 [ ] RMFS 5 [ ] schemes exhibit superior accuracy and stability compared with existing methods, particularly when dealing with nonlinear problems. The theoretical results were confirmed by numerical tests using both fixed- and adaptive-step-size implementations. Based on the error norms and CPU time presented in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19 and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, the proposed methods RMFS 3 [ ] RMFS 5 [ ] outperform the existing fractional-order schemes RMFS 1 [ ] RMFS 2 [ ] , making the proposed methods reliable and effective numerical methods for solving both linear and nonlinear FODEs.
Despite these strengths, the current study has several limitations:
  • The proposed techniques were evaluated mainly on standard benchmark problems; their performance on more stiff or chaotic FODEs remains to be explored.
  • The methods have been applied to one-dimensional FODEs; applications to multidimensional systems remain to be addressed.
  • The theoretical convergence analysis assumes smooth solutions; non-smooth or discontinuous problems may necessitate modifications.
Future work will concentrate on expanding the proposed methods to multidimensional systems and systems of FODEs, as well as developing memory-efficient methods for long-term simulations. Additionally, we aim to study the behavior of the proposed techniques for solving stiff, non-smooth, or chaotic problems in order to expand their application to a broader range of practical problems. Furthermore, hybrid approaches that combine RMFS schemes with adaptive-step-size strategies or neural network-based techniques will be investigated to improve computing efficiency and solution accuracy. Additional directions include
  • Using uncertainty quantification to evaluate the robustness of the proposed methods.
  • Using integration with machine learning techniques to optimize the scheme parameters for various types of FODEs.
Overall, these directions will improve the applicability and robustness of the proposed RMFS 3 [ ] RMFS 5 [ ] schemes in solving a broader class of fractional-order problems in engineering and applied sciences.

Author Contributions

Conceptualization, M.S.; methodology, M.S. and M.A.R.; software, M.S. and M.A.R.; validation, M.S. and M.A.R.; formal analysis, M.S.; investigation, M.S.; resources, M.S. and M.A.R.; writing—original draft preparation, M.S. and M.A.R.; writing—review and editing, M.S. and M.A.R.; visualization, M.S. and M.A.R.; supervision, M.S. and M.A.R.; project administration, M.S. and M.A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000; pp. 1–85. [Google Scholar]
  2. Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods; World Scientific: Singapore, 2012. [Google Scholar]
  3. Buedo-Fernández, S.; Nieto, J.J. Basic control theory for linear fractional differential equations with constant coefficients. Front. Phys. 2020, 8, 377. [Google Scholar] [CrossRef]
  4. Kumar, P.; Agrawal, O.P. An approximate method for numerical solution of fractional differential equations. Signal Process. 2006, 86, 2602–2610. [Google Scholar] [CrossRef]
  5. Kilbas, A.A. Theory and applications of fractional differential equations. North-Holl. Math. Stud. 2006, 204, 1–9. [Google Scholar]
  6. Matlob, M.A.; Jamali, Y. The concepts and applications of fractional order differential calculus in modeling of viscoelastic systems: A primer. Crit. Rev. Biomed. Eng. 2019, 47, 1–10. [Google Scholar] [CrossRef]
  7. Arshad, S.; Baleanu, D.; Tang, Y. Fractional differential equations with bio-medical applications. Appl. Eng. Life Sci. Part A 2019, 7, 1–11. [Google Scholar]
  8. Chang, A.; Sun, H.; Zheng, C.; Lu, B.; Lu, C.; Ma, R.; Zhang, Y. A time fractional convection–diffusion equation to model gas transport through heterogeneous soil and gas reservoirs. Phys. A Stat. Mech. 2018, 502, 356–369. [Google Scholar] [CrossRef]
  9. Petrás, I. Fractional Derivatives, Fractional Integrals, and Fractional Differential Equations in Matlab; IntechOpen: London, UK, 2011; p. 9412. [Google Scholar]
  10. Mazandarani, M.; Kamyad, A.V. Modified fractional Euler method for solving fuzzy fractional initial value problem. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 12–21. [Google Scholar] [CrossRef]
  11. Batiha, I.M.; Abubaker, A.A.; Jebril, I.H.; Al-Shaikh, S.B.; Matarneh, K. New algorithms for dealing with fractional initial value problems. Axioms 2023, 12, 488. [Google Scholar] [CrossRef]
  12. Workie, A.H. Small modification on modified Euler method for solving initial value problems. Abstr. Appl. Anal. 2021, 1, 9951815. [Google Scholar] [CrossRef]
  13. Sowa, M. Application of SubIval, a method for fractional-order derivative computations in IVPs. In Proceedings of the Theory and Applications of Non-integer Order Systems: 8th Conference on Non-integer Order Calculus and Its Applications, Zakopane, Poland, 20–21 September 2016; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  14. Alaroud, M.O.; Saadeh, R.A.; Al-smadi, M.O.; Ahmad, R.R.; Din, U.K.; Abu Arqub, O. Solving nonlinear fuzzy fractional IVPs using fractional residual power series algorithm. Int. Assoc. Color Manuf. 2019, 2019, 170–175. [Google Scholar]
  15. Baleanu, D.; Qureshi, S.; Soomro, A.; Rufai, M.A. Optimizing A-stable hyperbolic fitting for time efficiency: Exploring constant and variable stepsize approaches. J. Math. Comput. Sci. 2024, 35, 411–430. [Google Scholar] [CrossRef]
  16. Rufai, M.A. Numerical integration of third-order BVPs using a fourth-order hybrid block method. J. Comput. Sci. 2024, 81, 102338. [Google Scholar] [CrossRef]
  17. Xue, D.; Bai, L. Numerical algorithms for Caputo fractional-order differential equations. Int. J. Control. 2017, 90, 1201–1211. [Google Scholar] [CrossRef]
  18. Farid, G. Bounds of Riemann-Liouville fractional integral operators. Comput. Methods Differ. Equ. 2021, 9, 637–648. [Google Scholar]
  19. Podlubny, I. An introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications. Math. Sci. Eng. 1999, 198, 0924-34008. [Google Scholar]
  20. Burden, R.L.; Faires, J.D.; Toomey, H.A. Numerical Analysis, 9th ed.; Thomson Brooks/Cole: Boston, MA, USA, 2005. [Google Scholar]
  21. Odibat, Z.M.; Momani, S. An algorithm for the numerical solution of differential equations of fractional order. J. Appl. Math. Inform. 2008, 26, 15–27. [Google Scholar]
  22. Jackiewicz, Z.; Tracogna, S. A general class of two-step Runge–Kutta methods for ordinary differential equations. SIAM J. Numer. Anal. 1995, 32, 1390–1427. [Google Scholar] [CrossRef]
  23. Nazir, G.; Zeb, A.; Shah, K.; Saeed, T.; Khan, R.A.; Khan, S.I.U. Study of COVID-19 mathematical model of fractional order via modified Euler method. Alex. Eng. J. 2021, 60, 5287–5296. [Google Scholar] [CrossRef]
  24. Khader, M. Using Modified Fractional Euler Formula for Solving the Fractional Smoking Model. Eur. J. Pure Appl. Math. 2024, 17, 2676–2691. [Google Scholar]
  25. Corless, R.M.; Kaya, C.Y.; Moir, R.H. Optimal residuals and the Dahlquist test problem. Numer. Algorithms 2019, 81, 1253–1274. [Google Scholar] [CrossRef]
  26. Rufai, M.A.; Carpentieri, B.; Ramos, H. A new hybrid block method for solving first-order differential system models in applied sciences and engineering. Fractal Fract. 2023, 7, 703. [Google Scholar] [CrossRef]
  27. Esmaeili, S.; Shamsi, M.; Luchko, Y. Numerical solution of fractional differential equations with a collocation method based on Müntz polynomials. Comput. Math. Appl. 2011, 62, 918–929. [Google Scholar] [CrossRef]
  28. Qureshi, S.; Kumar, P. Using Shehu integral transform to solve fractional order Caputo type initial value problems. J. Appl. Comput. Mech. 2019, 18, 75–83. [Google Scholar] [CrossRef]
  29. Sowa, M. Numerical computations of the fractional derivative in IVPS, examples in MATLAB and Mathematica. Inform. Autom. Pomiary W Gospod. I Ochr. Środowiska 2021, 7, 19–22. [Google Scholar] [CrossRef]
  30. Shams, M.; Kausar, N.; Ozbilge, E.; Bulut, A. Stable Computer Method for Solving Initial Value Problems with Engineering Applications. Comput. Syst. Sci. Eng. 2023, 45, 2617–2633. [Google Scholar] [CrossRef]
  31. Ford, N.J.; Simpson, A.C. The numerical solution of fractional differential equations: Speed versus accuracy. Num. Algorithms 2001, 26, 333–346. [Google Scholar] [CrossRef]
  32. Webb, J.R.L. Initial value problems for Caputo fractional equations with singular nonlinearities. Electron. J. Differ. 2019, 2019, 1–32. [Google Scholar]
  33. Haubold, H.J.; Mathai, A.M.; Saxena, R.K. Mittag-Leffler functions and their applications. J. Appl. Math. 2011, 1, 298628. [Google Scholar] [CrossRef]
Figure 1. Stability regions of RMFS 1 [ ] RMFS 5 [ ] with different fractional parameter values.
Symmetry 17 01685 g001
Figure 2. Comparison of solutions incorporating an adjustable step length using RMFS 3 [ ] to solve (60) with different tolerances.
Symmetry 17 01685 g002
Figure 3. Comparison of solutions incorporating adjustable step length using RMFS 3 [ ] to solve (61) with different tolerances.
Symmetry 17 01685 g003
Figure 4. Comparison of solutions incorporating adjustable step length using RMFS 3 [ ] to solve (62) with different tolerances.
Symmetry 17 01685 g004
Figure 5. Comparison of solutions incorporating adjustable step length using RMFS 3 [ ] to solve (63) with different tolerances.
Symmetry 17 01685 g005
Figure 6. Comparison of solutions incorporating adjustable step length using RMFS 3 [ ] to solve (64) with different tolerances.
Symmetry 17 01685 g006
Figure 7. Comparison of solutions incorporating adjustable step length using RMFS 3 [ ] to solve (65) with different tolerances.
Symmetry 17 01685 g007
Table 1. Stability analysis comparison of RMFS 1 [ ] RMFS 5 [ ] .
α    RMFS 1 [ ]    RMFS 2 [ ]    RMFS 3 [ ]    RMFS 4 [ ]    RMFS 5 [ ]
0.1    [−1.739479, 0]    [−1.490982, 0]    [−1.34, 0.008]    [−3.12, 0.01]    [−1.34, 0.008]
0.3    [−2.116232, 0]    [−1.490982, 0]    [−1.42, 0.008]    [−3.71, 0.01]    [−1.42, 0.008]
0.5    [−1.490982, 0]    [−1.695391, 0]    [−1.61, 0.008]    [−4.20, 0.01]    [−1.61, 0.008]
0.7    [−1.695391, 0]    [−1.995992, 0]    [−1.90, 0.008]    [−4.96, 0.01]    [−1.90, 0.008]
0.9    [−1.995992, 0]    [−2.422846, 0]    [−2.32, 0.008]    [−6.03, 0.01]    [−2.32, 0.008]
Table 2. Comparison of exact and approximate solutions with a step size of 0.1.
t    Exact    RMFS 1 [ ]    RMFS 2 [ ]    RMFS 3 [ ]    RMFS 4 [ ]    RMFS 5 [ ]
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.2 0.0013085 0.0013453 0.0014435 0.0013086 0.0013076 0.0013043
0.4 0.0029592 0.0023445 0.0029886 0.0029556 0.0029874 0.0029765
0.6 0.0036968 0.0022342 0.0032342 0.0036956 0.0036685 0.0036934
0.8 0.0028626 0.0065326 0.0065322 0.0028645 0.0028645 0.0028676
1.0 0.0 0.0 0.0 0.0 0.0 0.0
Table 3. Comparison of errors in fractional schemes with a step size of 0.1.
x RMFS 1 [ ] RMFS 2 [ ] RMFS 3 [ ] RMFS 4 [ ] RMFS 5 [ ]
0.0 0.0 0.0 0.0 0.0 0.0
0.2 4.45 × 10 4 1.44 × 10 3 0.86 × 10 5 0.76 × 10 5 0.43 × 10 4
0.4 3.44 × 10 3 0.79 × 10 3 1.56 × 10 5 2.45 × 10 5 0.63 × 10 5
0.6 1.87 × 10 2 1.56 × 10 3 9.66 × 10 4 0.08 × 10 5 0.17 × 10 5
0.8 9.54 × 10 2 3.87 × 10 3 3.06 × 10 5 6.45 × 10 5 1.10 × 10 5
1.0 0.0 0.0 0.0 0.0 0.0
Table 4. Comparisons of fractional methods with an adjustable step size for solving (60).
Parameter    Tolerance = 10^{-2}
Scheme    ‖·‖∞    Avg    MSE    CPU Time (s)
RMFS 1 [ ] 0.98 × 10 1 5.05 × 10 2 4.26 × 10 2 0.012312
RMFS 2 [ ] 0.65 × 10 2 9.06 × 10 2 8.57 × 10 2 0.023423
RMFS 3 [ ] 9.87 × 10 2 3.41 × 10 2 1.23 × 10 3 0.034242
RMFS 4 [ ] 2.34 × 10 3 1.87 × 10 3 0.65 × 10 3 0.006523
RMFS 5 [ ] 5.57 × 10 3 8.76 × 10 3 0.60 × 10 3 0.004363
Tolerance = 10^{-3}
RMFS 1 [ ] 0.24 × 10 3 9.98 × 10 3 0.87 × 10 3 0.54 × 10 2
RMFS 2 [ ] 5.03 × 10 3 3.66 × 10 3 4.02 × 10 3 7.65 × 10 3
RMFS 3 [ ] 3.17 × 10 4 7.51 × 10 5 0.78 × 10 5 6.34 × 10 5
RMFS 4 [ ] 8.37 × 10 5 7.72 × 10 4 9.96 × 10 5 3.56 × 10 5
RMFS 5 [ ] 0.37 × 10 5 8.72 × 10 5 0.45 × 10 5 0.28 × 10 5
Tolerance = 10^{-6}
RMFS 1 [ ] 3.24 × 10 4 9.98 × 10 4 2.19 × 10 4 5.75 × 10 3
RMFS 2 [ ] 7.07 × 10 4 5.64 × 10 4 1.87 × 10 4 7.67 × 10 4
RMFS 3 [ ] 7.17 × 10 7 9.51 × 10 7 6.71 × 10 7 6.77 × 10 7
RMFS 4 [ ] 0.36 × 10 7 5.79 × 10 6 2.13 × 10 7 1.23 × 10 5
RMFS 5 [ ] 0.44 × 10 7 0.02 × 10 7 2.15 × 10 7 1.92 × 10 6
Table 5. Comparison of exact and approximate solutions with a step size of 0.1.
t    Exact    RMFS 1 [ ]    RMFS 2 [ ]    RMFS 3 [ ]    RMFS 4 [ ]    RMFS 5 [ ]
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.2 0.16034 0.16341 0.16458 0.160375 0.165641 0.160650
0.4 0.24141 0.24553 0.24437 0.241347 0.245787 0.241761
0.6 0.24672 0.24346 0.24540 0.246793 0.246763 0.246794
0.8 0.16998 0.16545 0.16532 0.169768 0.168905 0.169785
1.0 0.0 0.0 0.0 0.0 0.0 0.0
Table 6. Comparison of errors in fractional schemes with a step size of 0.1.
x RMFS 1 [ ] RMFS 2 [ ] RMFS 3 [ ] RMFS 4 [ ] RMFS 5 [ ]
0.0 0.0 0.0 0.0 0.0 0.0
0.2 0.06 × 10 2 0.42 × 10 2 0.37 × 10 4 0.26 × 10 2 0.30 × 10 3
0.4 0.41 × 10 2 0.10 × 10 2 0.32 × 10 3 0.43 × 10 2 0.35 × 10 3
0.6 0.32 × 10 2 0.34 × 10 2 0.13 × 10 3 0.73 × 10 4 0.74 × 10 4
0.8 0.45 × 10 2 0.46 × 10 2 0.21 × 10 3 0.10 × 10 3 0.19 × 10 3
1.0 0.0 0.0 0.0 0.0 0.0
Table 7. Comparisons of fractional methods with an adjustable step size for solving (61).
Parameter | Tolerance = 10^-2
Scheme | . | Avg | MSE | CPU Time
RMFS1[ ] | 0.36 × 10^-2 | 5.01 × 10^-2 | 1.87 × 10^-2 | 0.0652324
RMFS2[ ] | 0.76 × 10^-2 | 7.60 × 10^-2 | 2.56 × 10^-2 | 0.0562623
RMFS3[ ] | 1.23 × 10^-3 | 0.06 × 10^-2 | 1.67 × 10^-3 | 0.0365423
RMFS4[ ] | 0.87 × 10^-3 | 1.87 × 10^-2 | 4.16 × 10^-4 | 0.0012532
RMFS5[ ] | 0.55 × 10^-3 | 1.19 × 10^-3 | 2.14 × 10^-3 | 0.0015232
Tolerance = 10^-3
RMFS1[ ] | 9.24 × 10^-2 | 4.98 × 10^-2 | 2.19 × 10^-6 | 5.586 × 10^-4
RMFS2[ ] | 6.76 × 10^-3 | 2.08 × 10^-2 | 1.53 × 10^-4 | 7.758 × 10^-4
RMFS3[ ] | 1.09 × 10^-4 | 1.89 × 10^-4 | 6.71 × 10^-3 | 6.735 × 10^-5
RMFS4[ ] | 2.37 × 10^-6 | 1.98 × 10^-6 | 6.01 × 10^-6 | 1.324 × 10^-5
RMFS5[ ] | 0.27 × 10^-6 | 1.87 × 10^-6 | 2.15 × 10^-6 | 1.852 × 10^-5
Tolerance = 10^-6
RMFS1[ ] | 1.24 × 10^-4 | 0.12 × 10^-5 | 2.09 × 10^-5 | 5.565 × 10^-4
RMFS2[ ] | 6.44 × 10^-4 | 4.54 × 10^-5 | 1.65 × 10^-4 | 7.773 × 10^-4
RMFS3[ ] | 7.17 × 10^-6 | 1.51 × 10^-7 | 6.98 × 10^-7 | 6.755 × 10^-4
RMFS4[ ] | 7.77 × 10^-7 | 1.72 × 10^-8 | 0.88 × 10^-7 | 1.267 × 10^-7
RMFS5[ ] | 0.39 × 10^-7 | 1.72 × 10^-7 | 4.56 × 10^-7 | 1.256 × 10^-5
Table 8. Comparison of exact and approximate solutions with a step size of 0.1.
t | Exact | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 0.04 | 0.04651 | 0.04613 | 0.040123 | 0.043531 | 0.040052
0.4 | 0.16 | 0.16346 | 0.16879 | 0.165156 | 0.160267 | 0.165654
0.6 | 0.36 | 0.36537 | 0.36165 | 0.360456 | 0.367195 | 0.361625
0.8 | 0.64 | 0.64693 | 0.64346 | 0.642421 | 0.640163 | 0.640115
1.0 | 1.0 | 1.02236 | 1.176 | 1.0 | 1.0 | 1.0
Table 9. Comparison of errors in fractional schemes with a step size of 0.1.
x | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 2.65 × 10^-2 | 4.61 × 10^-2 | 1.23 × 10^-4 | 0.35 × 10^-2 | 0.52 × 10^-4
0.4 | 1.34 × 10^-2 | 8.79 × 10^-2 | 5.15 × 10^-2 | 2.63 × 10^-4 | 5.65 × 10^-2
0.6 | 1.53 × 10^-2 | 1.65 × 10^-2 | 0.45 × 10^-3 | 7.19 × 10^-2 | 0.16 × 10^-2
0.8 | 6.93 × 10^-2 | 3.46 × 10^-2 | 0.42 × 10^-2 | 1.63 × 10^-3 | 1.15 × 10^-3
1.0 | 2.23 × 10^-1 | 1.17 × 10^-1 | 0.0 | 0.0 | 0.0
Table 10. Comparisons of fractional methods with an adjustable step size for solving (62).
Parameter | Tolerance = 10^-2
Scheme | . | Avg | MSE | CPU Time
RMFS1[ ] | 1.87 × 10^-1 | 5.40 × 10^-2 | 9.46 × 10^-2 | 0.016545
RMFS2[ ] | 2.65 × 10^-2 | 7.98 × 10^-1 | 8.57 × 10^-1 | 0.027566
RMFS3[ ] | 6.87 × 10^-3 | 0.46 × 10^-2 | 1.98 × 10^-3 | 0.037565
RMFS4[ ] | 2.45 × 10^-3 | 8.53 × 10^-3 | 8.74 × 10^-3 | 0.006467
RMFS5[ ] | 1.09 × 10^-3 | 3.54 × 10^-3 | 0.84 × 10^-3 | 0.001656
Tolerance = 10^-3
RMFS1[ ] | 6.24 × 10^-3 | 9.98 × 10^-3 | 2.19 × 10^-3 | 5.55 × 10^-4
RMFS2[ ] | 6.98 × 10^-3 | 4.66 × 10^-3 | 1.87 × 10^-3 | 7.74 × 10^-4
RMFS3[ ] | 4.45 × 10^-5 | 1.51 × 10^-4 | 6.45 × 10^-3 | 6.75 × 10^-4
RMFS4[ ] | 6.87 × 10^-5 | 1.72 × 10^-5 | 4.87 × 10^-5 | 1.24 × 10^-5
RMFS5[ ] | 3.23 × 10^-5 | 1.72 × 10^-5 | 2.45 × 10^-5 | 1.53 × 10^-6
Tolerance = 10^-6
RMFS1[ ] | 1.24 × 10^-4 | 9.98 × 10^-5 | 2.19 × 10^-5 | 5.54 × 10^-4
RMFS2[ ] | 6.03 × 10^-5 | 4.66 × 10^-5 | 1.87 × 10^-5 | 7.37 × 10^-4
RMFS3[ ] | 0.17 × 10^-8 | 1.51 × 10^-7 | 6.71 × 10^-8 | 6.76 × 10^-4
RMFS4[ ] | 0.37 × 10^-7 | 1.72 × 10^-8 | 2.15 × 10^-8 | 1.72 × 10^-5
RMFS5[ ] | 0.37 × 10^-8 | 1.72 × 10^-8 | 2.15 × 10^-8 | 8.52 × 10^-5
Table 11. Comparison of exact and approximate solutions with a step size of 0.1.
t | Exact | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 0.04064 | 0.04066 | 0.04064 | 0.04067 | 0.04062 | 0.04068
0.4 | 0.16147 | 0.16176 | 0.16734 | 0.16145 | 0.16147 | 0.16187
0.6 | 0.36184 | 0.36309 | 0.36687 | 0.36134 | 0.36184 | 0.36184
0.8 | 0.64143 | 0.64567 | 0.64578 | 0.64176 | 0.64143 | 0.64143
1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
Table 12. Comparison of errors in fractional schemes with a step size of 0.1.
x | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 2.04 × 10^-4 | 3.87 × 10^-3 | 1.87 × 10^-4 | 4.87 × 10^-4 | 4.55 × 10^-4
0.4 | 3.06 × 10^-4 | 8.34 × 10^-2 | 7.46 × 10^-5 | 8.34 × 10^-5 | 7.34 × 10^-3
0.6 | 6.53 × 10^-2 | 1.09 × 10^-2 | 3.87 × 10^-3 | 3.64 × 10^-4 | 5.87 × 10^-5
0.8 | 7.34 × 10^-2 | 5.98 × 10^-2 | 0.45 × 10^-5 | 8.44 × 10^-3 | 8.34 × 10^-3
1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Table 13. Comparisons of fractional methods with an adjustable step size for solving (63).
Parameter | Tolerance = 10^-2
Scheme | . | Avg | MSE | CPU Time
RMFS1[ ] | 1.36 × 10^-2 | 5.05 × 10^-1 | 7.96 × 10^-1 | 5.578 × 10^-2
RMFS2[ ] | 9.61 × 10^-1 | 7.76 × 10^-2 | 8.56 × 10^-1 | 5.764 × 10^-2
RMFS3[ ] | 1.76 × 10^-3 | 9.29 × 10^-3 | 1.94 × 10^-3 | 6.385 × 10^-4
RMFS4[ ] | 6.91 × 10^-4 | 1.76 × 10^-3 | 0.01 × 10^-3 | 8.394 × 10^-4
RMFS5[ ] | 4.67 × 10^-4 | 0.14 × 10^-3 | 2.65 × 10^-4 | 6.539 × 10^-3
Tolerance = 10^-3
RMFS1[ ] | 1.24 × 10^-3 | 9.34 × 10^-3 | 2.19 × 10^-3 | 5.346 × 10^-3
RMFS2[ ] | 6.03 × 10^-4 | 1.98 × 10^-3 | 1.07 × 10^-3 | 7.791 × 10^-3
RMFS3[ ] | 3.76 × 10^-6 | 1.35 × 10^-5 | 6.98 × 10^-3 | 6.782 × 10^-4
RMFS4[ ] | 5.87 × 10^-5 | 1.87 × 10^-5 | 4.76 × 10^-5 | 1.990 × 10^-5
RMFS5[ ] | 4.98 × 10^-6 | 1.98 × 10^-6 | 8.15 × 10^-6 | 1.285 × 10^-6
Tolerance = 10^-6
RMFS1[ ] | 1.24 × 10^-4 | 9.98 × 10^-4 | 2.19 × 10^-4 | 5.576 × 10^-5
RMFS2[ ] | 6.03 × 10^-5 | 4.66 × 10^-5 | 1.87 × 10^-4 | 7.735 × 10^-4
RMFS3[ ] | 0.17 × 10^-7 | 1.51 × 10^-7 | 6.71 × 10^-6 | 6.711 × 10^-7
RMFS4[ ] | 0.37 × 10^-7 | 1.72 × 10^-7 | 2.15 × 10^-7 | 1.208 × 10^-8
RMFS5[ ] | 0.37 × 10^-7 | 1.72 × 10^-7 | 2.15 × 10^-7 | 1.209 × 10^-7
Table 14. Comparison of exact and approximate solutions with a step size of 0.1.
t | Exact | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 0.01152 | 0.01198 | 0.01365 | 0.011342 | 0.011654 | 0.0115276
0.4 | 0.06144 | 0.06146 | 0.06176 | 0.061443 | 0.061443 | 0.0614445
0.6 | 0.12096 | 0.12096 | 0.12186 | 0.120965 | 0.120966 | 0.1209635
0.8 | 0.12288 | 0.12287 | 0.12334 | 0.122838 | 0.122883 | 0.1228864
1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Table 15. Comparison of errors in fractional schemes with a step size of 0.1.
x | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.2 | 2.87 × 10^-3 | 9.65 × 10^-2 | 6.09 × 10^-4 | 9.59 × 10^-4 | 1.55 × 10^-4
0.4 | 1.54 × 10^-3 | 5.75 × 10^-3 | 1.96 × 10^-5 | 2.62 × 10^-4 | 0.60 × 10^-4
0.6 | 0.98 × 10^-3 | 0.87 × 10^-2 | 9.28 × 10^-3 | 6.14 × 10^-5 | 5.45 × 10^-5
0.8 | 7.78 × 10^-3 | 3.45 × 10^-2 | 0.06 × 10^-5 | 7.17 × 10^-3 | 6.43 × 10^-3
1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Table 16. Comparisons of fractional methods with an adjustable step size for solving (64).
Parameter | Tolerance = 10^-2
Scheme | . | Avg | MSE | CPU Time
RMFS1[ ] | 1.36 × 10^-2 | 5.05 × 10^-2 | 9.96 × 10^-2 | 0.0657124
RMFS2[ ] | 9.65 × 10^-2 | 7.66 × 10^-2 | 8.56 × 10^-1 | 0.0875723
RMFS3[ ] | 1.05 × 10^-3 | 9.56 × 10^-2 | 1.64 × 10^-2 | 0.0376542
RMFS4[ ] | 0.61 × 10^-3 | 1.05 × 10^-3 | 0.12 × 10^-3 | 0.0067645
RMFS5[ ] | 6.71 × 10^-3 | 0.17 × 10^-3 | 2.15 × 10^-3 | 6.7 × 10^-4
Tolerance = 10^-3
RMFS1[ ] | 1.24 × 10^-3 | 9.98 × 10^-3 | 2.19 × 10^-3 | 5.095 × 10^-4
RMFS2[ ] | 6.03 × 10^-4 | 4.87 × 10^-5 | 1.87 × 10^-4 | 7.677 × 10^-4
RMFS3[ ] | 0.17 × 10^-5 | 1.51 × 10^-4 | 6.71 × 10^-4 | 6.867 × 10^-4
RMFS4[ ] | 0.19 × 10^-5 | 1.75 × 10^-5 | 0.60 × 10^-5 | 1.082 × 10^-5
RMFS5[ ] | 0.67 × 10^-5 | 1.72 × 10^-5 | 1.05 × 10^-5 | 1.209 × 10^-5
Tolerance = 10^-6
RMFS1[ ] | 1.23 × 10^-5 | 9.98 × 10^-5 | 2.19 × 10^-5 | 1.809 × 10^-4
RMFS2[ ] | 6.83 × 10^-5 | 4.65 × 10^-5 | 1.87 × 10^-5 | 7.744 × 10^-4
RMFS3[ ] | 4.15 × 10^-7 | 1.51 × 10^-7 | 6.71 × 10^-6 | 6.376 × 10^-4
RMFS4[ ] | 9.37 × 10^-7 | 1.02 × 10^-7 | 7.05 × 10^-8 | 1.276 × 10^-5
RMFS5[ ] | 7.87 × 10^-8 | 0.99 × 10^-8 | 0.18 × 10^-8 | 0.246 × 10^-6
Table 17. Comparison of exact and approximate solutions with a step size of 0.1.
t | Exact | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.4 | 0.18334 | 0.18367 | 0.18775 | 0.18337 | 0.18335 | 0.18334
0.8 | 1.33086 | 1.33054 | 1.33576 | 1.33089 | 1.33086 | 1.33086
1.2 | 1.14509 | 1.14578 | 1.14376 | 1.14509 | 1.14505 | 1.14509
1.6 | 1.54889 | 1.54809 | 1.54986 | 1.54881 | 1.54886 | 1.54889
2.0 | 1.62897 | 1.62842 | 1.62976 | 1.62897 | 1.62897 | 1.62897
Table 18. Comparison of errors in fractional schemes with a step size of 0.1.
x | RMFS1[ ] | RMFS2[ ] | RMFS3[ ] | RMFS4[ ] | RMFS5[ ]
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
0.4 | 0.07 × 10^-3 | 1.17 × 10^-2 | 1.56 × 10^-4 | 0.09 × 10^-5 | 5.87 × 10^-5
0.8 | 2.10 × 10^-3 | 1.49 × 10^-2 | 0.78 × 10^-4 | 1.76 × 10^-5 | 9.01 × 10^-6
1.2 | 0.14 × 10^-3 | 4.07 × 10^-2 | 0.28 × 10^-3 | 0.13 × 10^-3 | 0.75 × 10^-5
1.6 | 1.08 × 10^-3 | 6.83 × 10^-2 | 0.43 × 10^-5 | 9.84 × 10^-3 | 1.03 × 10^-5
2.0 | 2.08 × 10^-2 | 1.37 × 10^-2 | 0.0 | 0.0 | 0.0
Table 19. Comparisons of fractional methods with an adjustable step size for solving (65).
Parameter | Tolerance = 10^-2
Scheme | . | Avg | MSE | CPU Time
RMFS1[ ] | 1.01 × 10^-2 | 6.12 × 10^-2 | 0.13 × 10^-2 | 0.0257
RMFS2[ ] | 3.72 × 10^-2 | 1.75 × 10^-2 | 1.21 × 10^-1 | 0.0573
RMFS3[ ] | 7.14 × 10^-3 | 0.06 × 10^-2 | 5.13 × 10^-2 | 0.0376
RMFS4[ ] | 5.32 × 10^-3 | 1.78 × 10^-3 | 1.87 × 10^-3 | 0.0064
RMFS5[ ] | 0.97 × 10^-3 | 9.21 × 10^-3 | 1.10 × 10^-3 | 0.0057
Tolerance = 10^-3
RMFS1[ ] | 8.01 × 10^-3 | 0.08 × 10^-3 | 2.17 × 10^-3 | 1.032 × 10^-4
RMFS2[ ] | 7.01 × 10^-4 | 2.13 × 10^-5 | 1.01 × 10^-4 | 1.607 × 10^-4
RMFS3[ ] | 9.93 × 10^-5 | 6.47 × 10^-4 | 6.15 × 10^-4 | 0.834 × 10^-3
RMFS4[ ] | 0.58 × 10^-5 | 0.51 × 10^-5 | 7.32 × 10^-5 | 1.202 × 10^-3
RMFS5[ ] | 0.30 × 10^-5 | 4.31 × 10^-5 | 9.01 × 10^-5 | 1.652 × 10^-4
Tolerance = 10^-6
RMFS1[ ] | 3.03 × 10^-5 | 1.78 × 10^-5 | 3.09 × 10^-5 | 0.443 × 10^-4
RMFS2[ ] | 1.98 × 10^-5 | 0.35 × 10^-5 | 6.89 × 10^-5 | 3.105 × 10^-5
RMFS3[ ] | 0.05 × 10^-7 | 0.11 × 10^-7 | 0.01 × 10^-6 | 2.001 × 10^-6
RMFS4[ ] | 1.31 × 10^-7 | 3.72 × 10^-7 | 1.07 × 10^-8 | 7.332 × 10^-5
RMFS5[ ] | 6.56 × 10^-8 | 2.69 × 10^-8 | 7.88 × 10^-8 | 0.141 × 10^-6
