Article

Efficient Hybrid ANN-Accelerated Two-Stage Implicit Schemes for Fractional Differential Equations

by Mudassir Shams 1,2 and Bruno Carpentieri 2,*
1 Department of Mathematics, Faculty of Arts and Science, Balikesir University, 10145 Balıkesir, Turkey
2 Faculty of Engineering, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(23), 3774; https://doi.org/10.3390/math13233774
Submission received: 12 October 2025 / Revised: 18 November 2025 / Accepted: 20 November 2025 / Published: 24 November 2025

Abstract

This paper introduces a hybrid two-stage implicit scheme for efficiently solving fractional differential equations, with particular emphasis on fractional initial value problems formulated using the Caputo derivative. Classical numerical approaches to fractional differential equations often encounter challenges related to stability, convergence rate, and memory efficiency. To overcome these limitations, we propose a new discretization framework that directly embeds nonlinear source terms into the time-stepping process, thereby enhancing both stability and accuracy. Nonlinear systems are efficiently solved using a parallel iterative algorithm with adaptive convergence control, yielding up to 35–50% faster convergence compared with conventional solvers. A rigorous theoretical analysis establishes the scheme's convergence, stability, and consistency, extending earlier proofs to a broader class of fractional systems. Extensive numerical experiments on benchmark fractional problems confirm that the hybrid approach achieves markedly lower local and global errors, broader stability regions, and substantial reductions in computational time and memory usage compared with existing implicit methods. The results demonstrate that the proposed framework offers a robust, accurate, and scalable solution for nonlinear fractional simulations.

1. Introduction

Fractional differential equations (FDEs) have become an essential framework for modeling complex dynamical systems across engineering, physics, and biomedical sciences. In contrast to classical integer-order formulations, fractional derivatives naturally capture the memory and hereditary effects that characterize many real-world processes [1,2]. Their intrinsic ability to represent nonlocal interactions makes fractional models particularly suitable for describing phenomena such as anomalous diffusion, viscoelasticity, and regulatory mechanisms in biomedical systems, including glucose–insulin dynamics [3]. Owing to these capabilities, the analysis and numerical solution of fractional initial value problems have attracted growing attention in recent years.
Analytical solutions of fractional differential equations [4], often expressed through special functions such as the Mittag–Leffler function [5], provide exact representations of system dynamics and serve as valuable benchmarks for validating numerical methods. Fractional-order initial value problems (FOIVPs) represent a specific class of fractional-order differential equations that involve derivatives of non-integer order [6]. They can be formulated as
D_t^{[\sigma]} y(t) = f(t, y(t)), \qquad y^{(k)}(t_0) = y_0^{(k)}, \quad k = 0, 1, \ldots, m-1,
where the Caputo fractional derivative of order \sigma (with m-1 < \sigma < m, m \in \mathbb{Z}^{+}) is defined by
D_t^{[\sigma]} y(t) = \frac{1}{\Gamma(m-\sigma)} \int_{t_0}^{t} \frac{y^{(m)}(h)}{(t-h)^{\sigma-m+1}} \, dh.
FOIVPs offer a more general and realistic framework for modeling dynamical systems exhibiting memory and hereditary properties, extending beyond the limitations of classical integer-order formulations. Despite their theoretical significance, obtaining closed-form solutions is often infeasible for real-world nonlinear problems, high-dimensional systems, or models involving complex boundary conditions [6]. This limitation motivates the development of reliable and efficient numerical methods.
Classical analytical approaches such as direct computation [7], Adomian decomposition [8], and variational iteration methods [9] can produce closed-form, series, or integral-transform solutions, but they suffer from several inherent limitations. These techniques are often computationally demanding for large-scale or high-order systems and are generally unsuitable for nonlinear FDEs. In addition, they can become numerically unstable when applied to stiff problems, and the convergence of truncated series is frequently slow, limiting their practicality for real-time applications. Consequently, there remains a need for numerical schemes that combine high accuracy, computational efficiency, and robustness for nonlinear fractional systems. To address these challenges, implicit numerical methods—particularly multi-stage formulations—have attracted increasing attention within the fractional calculus community. Compared with higher-order explicit Runge–Kutta type schemes [10,11,12], implicit multi-stage approaches [13] incorporate future information at each step, improving both stability and convergence and allowing larger time increments [14,15].
In parallel, artificial neural networks (ANNs) have emerged as powerful tools for approximating nonlinear mappings and accelerating computations across scientific and engineering disciplines [16,17]. Once trained, ANNs can deliver rapid and accurate predictions, providing an effective mechanism to accelerate implicit solvers. Recently, increasing attention has been devoted to integrating data-driven learning models with numerical iterative solvers for fractional differential equations. Physics-Informed Neural Networks (PINNs) address fractional dynamics by embedding physical laws into the network’s training loss [18,19,20]. Other studies have introduced neural Runge–Kutta [21] formulations and data-driven implicit [22] solvers that employ neural networks to improve the temporal integration accuracy of fractional problems or to approximate unknown operators. Despite these advances, most existing hybrid numerical approaches [23] still employ neural networks primarily as surrogate or black-box models that approximate unknown functions or solutions without explicitly enforcing the underlying physical or mathematical principles. This limitation can restrict generalization and accuracy, particularly in complex or fractional systems [24], where learning relies mainly on data rather than the governing equations. In the context of fractional differential equations, neural networks can also approximate implicit stage terms and provide intelligent initial guesses, thereby reducing the number of iterations required by conventional iterative methods such as Newton–Raphson [25].
Building upon these developments, hybrid strategies that combine artificial neural networks (ANNs) with classical implicit numerical methods have recently emerged as a promising direction. These approaches exploit the fast approximation capability of ANNs along with the high accuracy of iterative refinement, leading to substantial improvements in computational efficiency and solution accuracy [26]. Recent studies have investigated hybrid AI–numerical approaches that integrate neural networks with traditional integration schemes for solving initial value problems (IVPs). To enhance solution accuracy for IVPs, Wen et al. [27] employ a Lie-group-guided neural network, while Finzi et al. [28] propose a scalable neural IVP solver with improved stability. Yadav et al. [29] combine harmony search with ANNs to minimize residual errors, whereas Dong & Ni [30] demonstrate that randomized neural networks can learn time-integration techniques, achieving higher long-term accuracy. These studies collectively highlight the potential of hybrid frameworks that embed AI components within numerical solvers to improve convergence and efficiency. However, their performance depends on the availability of an initial approximation sufficiently close to the exact root, since the underlying iterative solvers remain locally convergent. This hybrid paradigm is particularly effective for fractional systems, characterized by strong nonlinearities and long memory effects, where conventional methods often require substantial computational resources [31,32].
Motivated by these challenges, the present study introduces a hybrid ANN-assisted implicit two-stage scheme for solving nonlinear fractional initial value problems. The main goals of the proposed framework are to (i) reduce the computational cost associated with solving implicit stage equations, (ii) preserve high accuracy comparable to analytical or benchmark solutions, and (iii) enhance the convergence and stability properties of the classical two-stage method. The scheme is designed for fractional orders σ ∈ (0, 1], with a representative case study at σ = 0.8 illustrating its effectiveness.
The main contributions of this study can be summarized as follows:
  • A new two-stage implicit Caputo-type scheme for fractional initial value problems.
  • A feedforward artificial neural network (ANN) integrated with the two-stage fractional solver to efficiently handle nonlinear systems in parallel form.
  • A trained ANN that provides accurate approximations of the implicit stage values, thereby reducing the number of iterative corrections.
  • Comprehensive numerical experiments demonstrating improved convergence, fewer iterations, and higher computational efficiency than classical two-stage methods.
  • Detailed error and stability analyses validating the accuracy of the proposed approach against exact solutions.
The novelty of this work lies in the integration of neural network predictions as intelligent initial guesses for the implicit stage of a fractional multi-stage method. This hybrid strategy combines the global approximation capability of ANNs with the rigorous convergence properties of implicit numerical schemes, achieving a substantial reduction in computational iterations while preserving high accuracy. The proposed scheme is systematically validated through several fractional initial value problems of Caputo type, demonstrating superior performance and robustness compared with existing fractional numerical methods. Overall, the approach represents a promising and effective direction in fractional numerical analysis.
The remainder of this paper is organized as follows: Section 2 outlines the fundamental definitions and mathematical preliminaries related to fractional calculus. Section 3 presents the formulation of the fractional two-stage method together with its stability and consistency analyses. Section 4 describes the proposed hybrid ANN-assisted framework, including the network architecture, training process, optimization strategy, and implementation details. Section 5 reports the numerical experiments and results, encompassing accuracy comparisons, error analysis, and parallel iteration performance. Finally, Section 6 concludes the paper and discusses potential directions for future research.

2. Preliminaries

This section presents the fundamental concepts and theoretical background underlying the two-stage fractional numerical scheme and its hybrid ANN implementation. The discussion introduces the main definitions of fractional derivatives, key properties of the Gamma function, the generalized Taylor expansion, and the analytical framework used to assess local truncation error, convergence, and stability.
Definition 1
(Caputo Fractional Derivative). Let f(t) be a sufficiently smooth function defined on the interval t ∈ [0, T]. The Caputo fractional derivative of order σ ∈ (0, 1] is defined as [33,34]
D^{\sigma} f(t) = \frac{1}{\Gamma(1-\sigma)} \int_{0}^{t} \frac{f'(s)}{(t-s)^{\sigma}} \, ds,
where Γ ( · ) denotes the Gamma function, given for γ > 0 by
\Gamma(\gamma) = \int_{0}^{\infty} e^{-u} u^{\gamma-1} \, du.
The operator D^σ acts on continuously differentiable functions f ∈ C¹[0, T] and reduces to the classical first derivative when σ = 1. Throughout this work, all fractional operators and parameters are considered within the domain t ∈ [0, T], σ ∈ (0, 1].
Properties of Caputo Derivative:
  • Linearity: D^{\sigma}[c_1 y_1(t) + c_2 y_2(t)] = c_1 D^{\sigma} y_1(t) + c_2 D^{\sigma} y_2(t).
  • Derivative of a power function: D^{\sigma} t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\sigma)} t^{\beta-\sigma}, \quad \beta > \sigma - 1 (checked numerically in the sketch following this list).
  • Caputo derivative of a constant: D^{\sigma} C = 0.
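The power-function rule above can be checked directly by quadrature. The following sketch is our own illustration (not code from the paper); it assumes SciPy is available and uses quad's algebraic weight option to absorb the (t − s)^{−σ} endpoint singularity of the Caputo integral:

```python
# Numerical check of the Caputo power rule
#   D^sigma t^beta = Gamma(beta+1)/Gamma(beta+1-sigma) * t^(beta-sigma)
# via the definition D^sigma f(t) = (1/Gamma(1-sigma)) * int_0^t f'(s) (t-s)^(-sigma) ds.
import math
from scipy.integrate import quad

sigma, beta, t = 0.8, 2.0, 1.5

# quad integrates f(s) * (s-0)^0 * (t-s)^(-sigma); the weight handles the singularity
integral, _ = quad(lambda s: beta * s**(beta - 1.0), 0.0, t,
                   weight="alg", wvar=(0.0, -sigma))
numeric = integral / math.gamma(1.0 - sigma)

analytic = math.gamma(beta + 1.0) / math.gamma(beta + 1.0 - sigma) * t**(beta - sigma)
print(f"numeric = {numeric:.10f}, analytic = {analytic:.10f}")  # the two agree
```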
Although this study focuses on the Caputo fractional operator, the proposed framework can be readily adapted to other definitions, such as the Riesz or Riemann–Liouville derivatives, for which several high-order discretization schemes have recently been developed [35,36].
Definition 2
(Generalized Taylor Series for Fractional Derivatives). Let 0 < σ < 1 and assume that y is sufficiently smooth so that the fractional derivatives D^{mσ} y (for m = 1, …, n+1) exist and are continuous on [t_i, t_i + h]. Then, the fractional Taylor expansion about t_i is given by [37]
y(t_i + h) = y(t_i) + \frac{h^{\sigma}}{\Gamma(\sigma+1)} D^{\sigma} y(t_i) + \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} D^{2\sigma} y(t_i) + \cdots + \frac{h^{n\sigma}}{\Gamma(n\sigma+1)} D^{n\sigma} y(t_i) + \frac{h^{(n+1)\sigma}}{\Gamma((n+1)\sigma+1)} D^{(n+1)\sigma} y(\xi),
for some point ξ ∈ (t_i, t_i + h). The final term represents the remainder in the mean-value form of the fractional Taylor theorem and indicates that the truncation error after n terms is
R_{n+1}(h) = \frac{h^{(n+1)\sigma}}{\Gamma((n+1)\sigma+1)} D^{(n+1)\sigma} y(\xi) = O\!\left(h^{(n+1)\sigma}\right) \quad \text{as } h \to 0,
provided that D^{(n+1)σ} y is bounded on [t_i, t_i + h]. This series forms the basis of multi-step and two-stage fractional schemes.
Definition 3
(Convergence, Consistency, and Stability). We define the convergence, consistency, and stability [38] of Caputo fractional schemes as follows:
Order of Convergence: Let τ i + 1 denote the local truncation error, defined as the difference between the exact solution and the numerical approximation after a single time step, assuming that all previous steps are exact. A numerical method is said to be of order p if the local truncation error satisfies
\tau_{i+1} = O(h^{p}), \quad \text{as } h \to 0.
Consistency: A numerical scheme is consistent if
\lim_{h \to 0} \tau_{i+1} = 0, \quad \forall i,
ensuring that the discrete scheme approximates the continuous fractional initial value problem.
Stability: Stability of fractional schemes can be analyzed using zero-stability and absolute stability criteria [39]:
  • Absolute Stability: For the test equation D^σ y = λy, a scheme is absolutely stable if |y_i| → 0 as i → ∞ whenever ℜ(λ) < 0.
  • Stability Polynomial. Consider the linear test equation
    D_t^{\sigma} y(t) = \lambda y(t), \qquad \lambda \in \mathbb{C},
    where λ is a complex constant characterizing the stiffness or oscillatory behavior of the system, and y_i ≈ y(t_i) denotes the numerical approximation at time level t_i. For such problems, the stability function R(z) of the numerical method is defined by
    y_{i+1} = R(z)\, y_i, \qquad z = h^{\sigma} \lambda,
    and the method is said to be stable if |R(z)| ≤ 1 for the relevant values of z in the complex plane.
Definition 4.
Lax Equivalence Theorem for Fractional IVPs [40].
Theorem 1.
For a linear consistent fractional difference scheme applied to a well-posed linear FOIVP, stability is a necessary and sufficient condition for convergence. That is, if the scheme is consistent and stable, then
\lim_{h \to 0} \max_{i} |y_i - y(t_i)| = 0.
Definition 5.
(Boundedness of a Fractional Scheme). Consider a fractional-order numerical scheme generating approximations {y_i}_{i≥0} for the FOIVP
D^{\sigma} y(t) = f(t, y(t)), \qquad y(0) = y_0,
where f(t, y) is Lipschitz continuous in y with Lipschitz constant L. Then, for any two solutions y_i and ỹ_i of the scheme (possibly with perturbations), the scheme satisfies
|y_{i+1} - \tilde{y}_{i+1}| \le \left(1 + C h^{\sigma} L\right) |y_i - \tilde{y}_i|,
where C > 0 depends on the specific scheme. This inequality guarantees the stability of the method.
The preceding definitions, lemmas, and theorems establish the theoretical foundation required to develop and analyze the proposed two-stage fractional method. They guarantee the scheme’s consistency, stability, and convergence, and form the mathematical basis for the hybrid ANN-assisted acceleration introduced in the next section.

3. Construction and Analysis of Two-Stage Fractional Scheme

Fractional-order differential equations with Caputo derivatives are commonly used to model systems exhibiting memory and hereditary effects in science and engineering. Numerical schemes for these equations generalize classical methods to fractional orders, where the nonlocal character of the derivative introduces additional challenges for accuracy, stability, and convergence. Such methods can be formulated in terms of increment functions, analogous to those in classical Runge–Kutta schemes [41], providing a compact framework for solving fractional-order initial value problems.
The fractional implicit backward Euler method (IEBF[∗]), proposed by Batiha et al. [42], is defined as
y_{i+1} = y_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)} k_1, \qquad k_1 = f(t_{i+1}, y_{i+1}).
This method possesses a convergence order of O(h^σ). It is consistent since
\lim_{h \to 0} \left[ \frac{y(t_{i+1}) - y_i}{h^{\sigma}} - f(t_{i+1}, y_{i+1}) \right] = 0,
ensuring that the discrete scheme approximates the continuous FOIVP as h → 0.
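For concreteness, a single IEBF[∗] step can be written in a few lines. The sketch below is a minimal Python illustration under our own assumptions (scalar problem, smooth f, fixed-point solution of the implicit relation); it is not the authors' implementation:

```python
# One step of the fractional implicit backward Euler method IEBF[*]:
#   y_{i+1} = y_i + h^sigma / Gamma(sigma+1) * f(t_{i+1}, y_{i+1}),
# with the implicit relation solved by fixed-point iteration.
import math

def iebf_step(f, t_next, y_i, h, sigma, tol=1e-12, max_iter=100):
    w = h**sigma / math.gamma(sigma + 1.0)   # fractional step weight
    y_next = y_i + w * f(t_next, y_i)        # explicit predictor as initial guess
    for _ in range(max_iter):
        y_new = y_i + w * f(t_next, y_next)  # fixed-point update
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

# e.g. D^sigma y = -y, one step of size h = 0.1 from y(0) = 1 with sigma = 0.8
print(iebf_step(lambda t, y: -y, 0.1, 1.0, 0.1, 0.8))
```

Fixed-point iteration converges here provided h^σ L/Γ(σ+1) < 1; stiffer problems would call for a Newton-type solve instead.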
To enhance accuracy, Batiha et al. [42] also introduced the fractional trapezoidal implicit two-stage scheme (ITBF[∗]), which incorporates a midpoint-type correction. It is expressed as
y_{i+1} = y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2),
where
k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)},\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2)\right).
The fractional trapezoidal rule achieves a convergence order of O(h^{2σ}) under sufficient smoothness of f. Its local truncation error satisfies τ_{i+1} = O(h^{2σ}), confirming consistency as h → 0.
Overall, these two schemes highlight the trade-off between simplicity and accuracy: the fractional method IEBF[∗] achieves an accuracy of order σ, whereas ITBF[∗] attains an accuracy of order 2σ, both measured with respect to the fractional step size h^σ. In the classical case σ = 1, these orders reduce to first- and second-order accuracy, respectively.
We propose the following two-stage modified trapezoidal-type scheme (ISBF[∗]) for the Caputo fractional IVP:
y_{i+1} = y_i + \Phi(y; h), \qquad 0 < \sigma \le 1,
where
k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)},\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\alpha_1 k_1 + \alpha_2 k_2\right)\right),
and
\Phi(y; h) = \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2), \qquad \alpha_1, \alpha_2 \in \mathbb{R},
represents the increment function of the scheme.
Assumption 1.
Let f(x, y) satisfy the following regularity conditions:
  • f(x, y) is continuous in x and satisfies a Lipschitz condition in y, i.e., there exists a constant L > 0 such that
    \|f(x, y_1) - f(x, y_2)\| \le L \|y_1 - y_2\|, \quad \forall\, y_1, y_2.
  • f(x, y) has continuous fractional derivatives up to the required order σ, ensuring that the Caputo derivative exists and remains bounded.
  • The exact solution y(x) is sufficiently smooth so that y^{(σ)}(x) is continuous on the interval of interest.
Under these assumptions, the convergence of the proposed scheme can be established as follows:
Theorem 2.
Let the numerical scheme be defined as
y_{i+1} = y_i + \Phi(y; h),
where
k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)},\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\alpha_1 k_1 + \alpha_2 k_2\right)\right).
Then, the proposed fractional implicit scheme is consistent, with a local truncation error of order O\!\left(\frac{h^{2\sigma}}{\Gamma(2\sigma+1)}\right) and a convergence order of 2σ.
Proof. 
Expanding the exact solution y(x + h) in a fractional Taylor series gives
y_{i+1} = f + \frac{1}{2}\left(f D_y^{[\sigma]} f\right) \frac{h^{\sigma}}{\Gamma(\sigma+1)} + \left(\frac{1}{6} f^{2} D_{yy}^{[\sigma]} f + \frac{1}{6} f \left(D_y^{[\sigma]} f\right)^{2}\right) \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + \cdots.
Next, expanding k 2 in a similar manner, we have
k_2 = f + \left(\frac{\alpha_1}{2} f D_y^{[\sigma]} f + \frac{\alpha_2}{2} D_y^{[\sigma]} f \, k_2\right) \frac{h^{\sigma}}{\Gamma(\sigma+1)} + B_1 \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + \cdots
Assume the fractional series form
k_2 = \vartheta_1 + \frac{h^{\sigma}}{\Gamma(\sigma+1)} \vartheta_2 + \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} \vartheta_3 + \frac{h^{3\sigma}}{\Gamma(3\sigma+1)} \vartheta_4 + \cdots.
Substituting (19) into (18) and matching terms in powers of h^σ, we obtain
k_2 = f + \left(\frac{\alpha_1}{2} f D_y^{[\sigma]} f + \frac{\alpha_2}{2} D_y^{[\sigma]} f \, \vartheta_1\right) \frac{h^{\sigma}}{\Gamma(\sigma+1)} + B_2 \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + B_3 \frac{h^{3\sigma}}{\Gamma(3\sigma+1)} + O\!\left(\frac{h^{4\sigma}}{\Gamma(4\sigma+1)} D^{[3\sigma]} f(\zeta)\right),
where the coefficients B_1, B_2, \ldots, B_5 are explicitly derived in Appendix A. By comparing (19) and (20), the coefficients are found as
\vartheta_1 = f, \qquad \vartheta_2 = \frac{\alpha_1}{2} f D_y^{[\sigma]} f + \frac{\alpha_2}{2} f D_y^{[\sigma]} f,
\vartheta_3 = \frac{\alpha_1 \alpha_2}{4} f \left(D_y^{[\sigma]} f\right)^{2} + \frac{\alpha_1}{2} f^{2} D_{yy}^{[\sigma]} f + \frac{\alpha_1^{2}}{4} f D_{yy}^{[\sigma]} f + \frac{\alpha_2^{2}}{2} f^{2} D_{yy}^{[\sigma]} f,
\vartheta_4 = \frac{\alpha_2 f_y}{4} \left[\frac{\alpha_1}{4} f \left(D_y^{[\sigma]} f\right)^{2} + \frac{\alpha_2}{2} f^{2} D_{yy}^{[\sigma]} f\right] + \frac{\alpha_1 \alpha_2 f^{2} D_y^{[\sigma]} f D_{yy}^{[\sigma]} f}{4} + \frac{\alpha_2^{2} f^{3} D_{yyy}^{[\sigma]} f}{6}.
Substituting (21) into (19), we obtain
k_2 = f + \left(\frac{\alpha_1}{2} f D_y^{[\sigma]} f + \frac{\alpha_2}{2} f D_y^{[\sigma]} f\right) \frac{h^{\sigma}}{\Gamma(\sigma+1)} + \left[\frac{\alpha_2}{4} f \left(D_y^{[\sigma]} f\right)^{2} + \frac{\alpha_1^{2}}{2} f^{2} D_{yy}^{[\sigma]} f\right] \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + B_4 \frac{h^{3\sigma}}{\Gamma(3\sigma+1)} + O\!\left(\frac{h^{4\sigma}}{\Gamma(4\sigma+1)} D^{[3\sigma]} f(\zeta)\right).
Substituting k_1 and k_2 into (16) gives
\Phi(y; h) = f + \frac{\alpha_1 + \alpha_2}{4} f D_y^{[\sigma]} f \, \frac{h^{\sigma}}{\Gamma(\sigma+1)} + \left[\frac{\alpha_2}{8} f \left(D_y^{[\sigma]} f\right)^{2} + \frac{\alpha_1^{2}}{4} f^{2} D_{yy}^{[\sigma]} f\right] \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + B_5 \frac{h^{3\sigma}}{\Gamma(3\sigma+1)} + O\!\left(\frac{h^{4\sigma}}{\Gamma(4\sigma+1)} D^{[3\sigma]} f(\zeta)\right).
Subtracting (23) from (17) gives the local truncation error (LTE):
\mathrm{L.T.E.} = \left(\frac{1}{2} - \frac{\alpha_1 + \alpha_2}{4}\right) f D_y^{[\sigma]} f \, \frac{h^{\sigma}}{\Gamma(\sigma+1)} + \left[\frac{\alpha_1 \alpha_2}{12} f^{2} D_{yy}^{[\sigma]} f + \frac{\alpha_2}{14} f \left(D_y^{[\sigma]} f\right)^{2}\right] \frac{h^{2\sigma}}{\Gamma(2\sigma+1)} + O\!\left(\frac{h^{3\sigma}}{\Gamma(3\sigma+1)} D^{[2\sigma]} f(\zeta)\right).
Imposing α₁ + α₂ = 2 leaves one equation with two parameters. Choosing α₁ = 3/2 gives α₂ = 1/2. Substituting these values yields the two-stage fractional scheme:
y_{i+1} = y_i + \Phi(y; h), \qquad 0 < \sigma \le 1,
where
k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)},\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\frac{3}{2} k_1 + \frac{1}{2} k_2\right)\right),
and
\Phi(y; h) = \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2).
Hence, the proposed fractional scheme has a local truncation error of order O\!\left(\frac{h^{2\sigma}}{\Gamma(2\sigma+1)}\right) = O(h^{2\sigma}), and therefore exhibits a consistency order of 2σ.    □
Remark 1.
The derived convergence order 2σ generalizes the classical result for integer-order implicit Runge–Kutta methods, where σ = 1 yields the standard second-order accuracy. The proposed implicit framework achieves higher accuracy and precision with reduced computational cost, owing to its compact two-stage structure and ANN-assisted initialization. Moreover, the implicit two-stage fractional formulation inherently incorporates the memory effect: each time-step update involves fractional weights of the form h^σ/Γ(σ+1), originating from the memory kernel of the Caputo operator (4), thereby coupling the current solution y(t_{i+1}) with all past evaluations of f(t, y).
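A single step of the derived scheme can be sketched as follows. This is an illustrative Python fragment under our own assumptions (scalar problem, finite-difference derivative for the Newton solve of the implicit stage); the hybrid variant of Section 4 replaces the cold start k₂ = k₁ with an ANN prediction:

```python
# One step of ISBF[*] with alpha1 = 3/2, alpha2 = 1/2; the implicit stage
#   k2 = f(x_i + w, y_i + (w/2) * (3/2 k1 + 1/2 k2)),  w = h^sigma / Gamma(sigma+1),
# is resolved by scalar Newton iteration.
import math

def isbf_step(f, x_i, y_i, h, sigma, tol=1e-12, max_iter=50):
    w = h**sigma / math.gamma(sigma + 1.0)
    x_star = x_i + w                         # shifted abscissa of the second stage
    k1 = f(x_i, y_i)
    k2 = k1                                  # cold-start initial guess
    for _ in range(max_iter):
        arg = y_i + 0.5 * w * (1.5 * k1 + 0.5 * k2)
        r = k2 - f(x_star, arg)              # residual of the implicit relation
        fy = (f(x_star, arg + 1e-8) - f(x_star, arg)) / 1e-8
        k2_new = k2 - r / (1.0 - 0.25 * w * fy)   # Newton update
        if abs(k2_new - k2) < tol:
            k2 = k2_new
            break
        k2 = k2_new
    return y_i + 0.5 * w * (k1 + k2)
```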

3.1. Boundedness, Stability, and Convergence

Let E_i = y(x_i) − y_i denote the global error. From the recursive form (16) and using the Lipschitz condition |f(x, y₁) − f(x, y₂)| ≤ L |y₁ − y₂|, one can show
|E_{i+1}| \le \left(1 + \frac{h^{\sigma} L}{\Gamma(\sigma+1)}\right) |E_i| + C h^{2\sigma},
which ensures boundedness if h^σ L/Γ(σ+1) < 1. Applying the discrete Grönwall inequality yields
|E_i| \le \frac{C h^{2\sigma}}{1 - h^{\sigma} L / \Gamma(\sigma+1)},
proving the global stability and convergence of order 2σ. Thus, the scheme is both consistent and zero-stable, guaranteeing overall convergence in the sense of Dahlquist's criterion [43]. Consider the two-stage fractional scheme ISBF[∗] for FOIVPs,
y_{i+1} = y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2), \qquad k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + h,\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\frac{3}{2} k_1 + \frac{1}{2} k_2\right)\right),
where 0 < σ ≤ 1 and f(x, y) is assumed to satisfy a Lipschitz condition in y with constant L > 0, i.e.,
|f(x, y_1) - f(x, y_2)| \le L |y_1 - y_2|, \quad \forall\, y_1, y_2.
We now verify boundedness of the second stage k₂. From the implicit definition of k₂ in (26), consider two sequences y_i and ỹ_i (perturbed initial values). Using the Lipschitz condition, we obtain
|k_2 - \tilde{k}_2| = \left| f\!\left(x_i + h,\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)}\left(\frac{3}{2} k_1 + \frac{1}{2} k_2\right)\right) - f\!\left(x_i + h,\; \tilde{y}_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)}\left(\frac{3}{2} \tilde{k}_1 + \frac{1}{2} \tilde{k}_2\right)\right) \right| \le L \left| (y_i - \tilde{y}_i) + \frac{h^{\sigma}}{2\Gamma(\sigma+1)}\left(\frac{3}{2} (k_1 - \tilde{k}_1) + \frac{1}{2} (k_2 - \tilde{k}_2)\right) \right|.
From (26), the difference between successive steps satisfies
|y_{i+1} - \tilde{y}_{i+1}| = \left| y_i - \tilde{y}_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left((k_1 - \tilde{k}_1) + (k_2 - \tilde{k}_2)\right) \right| \le |y_i - \tilde{y}_i| + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(|k_1 - \tilde{k}_1| + |k_2 - \tilde{k}_2|\right) \le \left(1 + C h^{\sigma}\right) |y_i - \tilde{y}_i|,
where C depends on the Lipschitz constant L and Γ ( σ + 1 ) . This inequality shows that the growth of perturbations is bounded at each step.
Applying (29) recursively for n steps gives
|y_n - \tilde{y}_n| \le \left(1 + C h^{\sigma}\right)^{n} |y_0 - \tilde{y}_0|.
For sufficiently small step size h, the factor (1 + C h^σ)^n remains bounded, ensuring that the numerical solution does not grow unboundedly. Therefore, the scheme (26) is bounded.

3.2. Stability Analysis

To analyze the stability of the proposed scheme
y_{i+1} = y_i + \Phi(y; h), \qquad \Phi(y; h) = \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2),
with
k_1 = f(x_i, y_i), \qquad k_2 = f\!\left(x_i + \frac{h^{\sigma}}{\Gamma(\sigma+1)},\; y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\frac{3}{2} k_1 + \frac{1}{2} k_2\right)\right),
we employ the standard linear test equation
D^{\sigma} y(t) = \lambda y(t),
where D^σ denotes the Caputo fractional derivative of order 0 < σ ≤ 1 and λ ∈ ℂ. This equation serves as a canonical model for analyzing the stability and convergence properties of fractional numerical schemes. The corresponding right-hand side can therefore be expressed as
f(x, y) = \lambda y,
which is used in the subsequent derivation of the stability function.
Substituting into the scheme, we obtain
k_2 = \lambda y_i \, \frac{1 + \dfrac{3\lambda h^{\sigma}}{4\Gamma(\sigma+1)}}{1 - \dfrac{\lambda h^{\sigma}}{4\Gamma(\sigma+1)}}.
Hence, the stability function of the scheme is
R(z) = 1 + \frac{z}{2\Gamma(\sigma+1)} \cdot \frac{2 + \dfrac{z}{2\Gamma(\sigma+1)}}{1 - \dfrac{z}{4\Gamma(\sigma+1)}}, \qquad z = \lambda h^{\sigma}.
The stability region is defined as [44]:
\mathcal{S} = \{ z \in \mathbb{C} : |R(z)| < 1 \}.
For fractional orders σ ∈ (0, 1], the stability region can be visualized in the complex z-plane. The scheme remains stable for all z within 𝒮, and the region gradually shrinks as σ decreases, confirming the expected stability trend of the proposed fractional scheme ISBF[∗]. The stability function of the IEBF[∗] scheme is given by
R(z) = \frac{1}{1 - \dfrac{z}{\Gamma(\sigma+1)}},
whereas that of the ITBF[∗] scheme is
R(z) = \frac{1 + \dfrac{z}{2\Gamma(\sigma+1)}}{1 - \dfrac{z}{2\Gamma(\sigma+1)}}.

Numerical Comparison on the Real Axis

We tabulate, for each chosen σ, the factor Γ(σ+1) and the numerically determined left endpoint z_left on the real axis for the new method ISBF[∗], i.e., the most negative real z for which |R_new(z)| ≤ 1, in Table 1 and Figure 1 and Figure 2.
A comparison with other existing schemes is given in Table 2 and Figure 2.
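In principle, such endpoints can be located by a one-dimensional search. The sketch below is our own illustration; it assumes the stability function derived above and that |R(z)| crosses unity exactly once on the negative real axis, and then bisects for z_left:

```python
# Bisection search for the left stability endpoint z_left of ISBF[*] on the
# real axis, i.e. the most negative real z with |R(z)| <= 1.
import math

def R_isbf(z, sigma):
    g = math.gamma(sigma + 1.0)
    return abs(1.0 + z / (2 * g) * (2.0 + z / (2 * g)) / (1.0 - z / (4 * g)))

def z_left(sigma, z_min=-1e3, iters=80):
    lo, hi = z_min, 0.0              # |R| > 1 far left, |R| <= 1 at the origin
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if R_isbf(mid, sigma) > 1.0:
            lo = mid                 # boundary lies to the right of mid
        else:
            hi = mid
    return hi

for sigma in (0.5, 0.8, 1.0):
    print(f"sigma = {sigma}: z_left ~ {z_left(sigma):.4f}")
```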

4. Fractional Implicit Hybrid Scheme, Methodology, and Implementation

This section presents the development and implementation of the proposed fractional implicit hybrid scheme, which integrates the two-stage Caputo-type formulation ISBF[∗] with an artificial neural network (ANN)-assisted strategy. The resulting method, referred to as ISBF–AN[∗], combines the high-order accuracy of the implicit fractional scheme with the adaptive learning and approximation capabilities of ANNs to improve convergence and stability. The detailed computational framework, algorithmic steps, and implementation procedure are described in the following subsections.

4.1. Methodology

This subsection outlines the detailed methodology of the hybrid ANN-assisted two-stage fractional approach. It first reviews the classical two-stage fractional scheme, then describes the design and training of the artificial neural network (ANN), and finally presents the hybrid framework that integrates ANN predictions with the parallel refinement process.
Consider the nonlinear fractional initial value problem
D^{\sigma} y(t) = f(t, y(t)), \qquad y(0) = y_0, \qquad 0 < \sigma \le 1,
where D^σ denotes the Caputo fractional derivative of order σ, and f(t, y) is a given (generally nonlinear) function. The goal is to approximate y(t) numerically while achieving high accuracy and computational efficiency.

4.1.1. Two-Stage Implicit Fractional Method

The classical two-stage implicit method discretizes the problem over a uniform time grid t_n = t_0 + nh, n = 0, 1, …, N, with step size h = (t_f − t_0)/N. The method is defined as
y_{n+1} = y_n + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} (k_1 + k_2),
k_1 = f(t_n, y_n),
k_2 = f\!\left(t_n + h,\; y_n + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left(\frac{3}{2} k_1 + \frac{1}{2} k_2\right)\right).
Note that k₂ is defined implicitly and requires solving a nonlinear equation, typically via Newton–Raphson iterations. Although the method is stable and accurate, solving for k₂ at each step can be computationally expensive.

4.1.2. Artificial Neural Network (ANN) Framework and Implementation

To accelerate the computation, we employ an ANN to approximate the implicit stage value k₂. The ANN is designed as a feedforward network (Figure 3 and Figure 4) with one hidden layer, which maps the inputs (t_n, y_n) to an estimate of k₂:
\hat{k}_2 = \mathrm{ANN}(t_n, y_n; W, b),
where W and b are the weight and bias parameters of the network. The hidden layer uses a sigmoid activation function, while the output layer is linear to predict continuous k₂ values.
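A back-of-the-envelope NumPy version of this surrogate (our sketch, not the authors' MATLAB implementation; the weights below are randomly initialized and are trained separately) is:

```python
# Feedforward surrogate: inputs (t_n, y_n) -> sigmoid hidden layer -> linear k2_hat.
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in=2, n_hidden=10, n_out=1):
    return {
        "W1": rng.normal(0.0, 0.5, (n_hidden, n_in)),
        "b1": np.zeros((n_hidden, 1)),
        "W2": rng.normal(0.0, 0.5, (n_out, n_hidden)),
        "b2": np.zeros((n_out, 1)),
    }

def forward(params, X):
    """X has shape (2, N): rows are t_n and y_n; returns k2_hat with shape (1, N)."""
    Z1 = params["W1"] @ X + params["b1"]
    A1 = 1.0 / (1.0 + np.exp(-Z1))            # sigmoid hidden layer
    return params["W2"] @ A1 + params["b2"]   # linear output layer
```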
Training Data Generation
The training dataset is generated using the exact solution:
(43) y_exact(t_i): sampled from the known exact solution,
(44) k_1(t_i) = f(t_i, y_exact(t_i)),
(45) k_2(t_i): computed using Equation (41) via iterative refinement.
Here, (t_i, y_exact(t_i)) form the ANN input, and k_2(t_i) forms the target output.
Loss Function and Optimization
The ANN parameters are optimized [45] by minimizing the mean squared error (MSE) between the predicted k ^ 2 and the true k 2 :
L(W, b) = \frac{1}{N_{\mathrm{train}}} \sum_{i=1}^{N_{\mathrm{train}}} \left( k_2(t_i) - \hat{k}_2(t_i; W, b) \right)^{2}.
Optimization is carried out using the Levenberg–Marquardt algorithm [46], which combines the advantages of gradient descent and Gauss–Newton methods [47,48] to achieve fast convergence. This approach enables the network to approximate k 2 with high accuracy, thereby reducing the number of iterations required during time stepping.
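For a network of this size, Levenberg–Marquardt can be written out explicitly. The following simplified sketch (our illustration, reusing the forward pass defined above; production codes would use analytic Jacobians and trust-region safeguards) blends gradient-descent and Gauss–Newton behavior through the damping parameter μ:

```python
# Levenberg-Marquardt on the residual r(p) = k2_hat(p) - k2, with a
# finite-difference Jacobian; p collects all weights and biases.
import numpy as np

def pack(params):
    return np.concatenate([v.ravel() for v in params.values()])

def unpack(vec, template):
    out, i = {}, 0
    for key, v in template.items():
        out[key] = vec[i:i + v.size].reshape(v.shape)
        i += v.size
    return out

def lm_train(params, X, k2_target, n_iter=50, mu=1e-3):
    p = pack(params)
    for _ in range(n_iter):
        r = (forward(unpack(p, params), X) - k2_target).ravel()
        J = np.empty((r.size, p.size))
        for j in range(p.size):               # finite-difference Jacobian column
            dp = np.zeros_like(p); dp[j] = 1e-6
            J[:, j] = ((forward(unpack(p + dp, params), X) - k2_target).ravel() - r) / 1e-6
        step = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
        r_new = (forward(unpack(p + step, params), X) - k2_target).ravel()
        if r_new @ r_new < r @ r:             # accept step: relax damping
            p, mu = p + step, max(mu * 0.5, 1e-12)
        else:                                 # reject step: increase damping
            mu *= 2.0
    return unpack(p, params)
```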
Validation and Testing
The trained ANN is validated using a separate test set to ensure generalization. The relative error
\epsilon_i = \frac{\left| k_2^{\mathrm{true}}(t_i) - \hat{k}_2(t_i) \right|}{\left| k_2^{\mathrm{true}}(t_i) \right|}
is computed to verify that predictions meet the desired tolerance. Only after satisfactory validation is the ANN incorporated into the hybrid method.

4.2. Hybrid ANN with Parallel Tuning ISBF–AN[∗]

In the hybrid strategy, the ANN output serves as the initial guess for the Aberth–Ehrlich [49] iteration of k₂:
k_{2,i}^{(0)} = \hat{k}_{2,i} = \mathrm{ANN}(t_n, y_n),
k_{2,i}^{[m+1]} = k_{2,i}^{[m]} - \frac{\dfrac{f\left(k_{2,i}^{[m]}\right)}{f'\left(k_{2,i}^{[m]}\right)}}{1 - \dfrac{f\left(k_{2,i}^{[m]}\right)}{f'\left(k_{2,i}^{[m]}\right)} \displaystyle\sum_{\substack{j=1 \\ j \ne i}}^{m} \frac{1}{k_{2,i}^{[m]} - k_{2,j}^{[m]}}}, \qquad i, j = 1, 2, \ldots, m.
The iteration continues until |k_{2,i}^{(m+1)} − k_{2,i}^{(m)}| < tol, where tol is a small tolerance (e.g., 10^{−30}). By initializing with k̂₂ from the ANN, the method significantly reduces the number of iterations per time step, thereby improving efficiency without sacrificing accuracy.
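In its standard root-finding form, the Aberth–Ehrlich update can be implemented in a few lines. The sketch below is illustrative (a hypothetical cubic test polynomial with known real roots stands in for the stage equations; in the hybrid scheme the ANN outputs seed the iteration):

```python
# Aberth-Ehrlich simultaneous iteration: each approximation receives a Newton
# correction damped by a pairwise-repulsion term, as in the update above.
import numpy as np

def aberth_ehrlich(f, fp, x0, tol=1e-14, max_iter=100):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        newton = f(x) / fp(x)                       # classical Newton corrections
        diff = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
        np.fill_diagonal(diff, np.inf)              # exclude the j = i term
        repulsion = (1.0 / diff).sum(axis=1)
        step = newton / (1.0 - newton * repulsion)  # Aberth-Ehrlich update
        x -= step
        if np.max(np.abs(step)) < tol:
            break
    return x

# e.g. f(x) = (x-1)(x-2)(x-3); all three roots are found simultaneously
f  = lambda x: x**3 - 6.0 * x**2 + 11.0 * x - 6.0
fp = lambda x: 3.0 * x**2 - 12.0 * x + 11.0
print(aberth_ehrlich(f, fp, [0.5, 2.2, 3.4]))   # -> approximately [1. 2. 3.]
```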
The hybrid ANN-based implicit iterative solver exhibits the same computational complexity as classical implicit methods, since the ANN correction is applied only during the initialization step. The network is trained offline with minimal inference cost, typically requiring fewer than 1000 epochs. This approach substantially reduces preprocessing overhead while decreasing the number of iterations and overall CPU time, thereby enhancing both accuracy and consistency.
In Table 3, N denotes the number of time steps, n the number of unknowns per step, m the average iteration count, p the number of ANN parameters, and C_solve(n) the linear-algebra cost per iteration. The proposed hybrid formulation achieves a substantial reduction in overall cost compared with the classical implicit approach, O(N m C_solve(n)), since in practice the ANN-initialized iteration count m* satisfies m* ≪ m.

4.3. Convergence Enhancement and Efficiency

The hybrid approach enhances convergence through two complementary mechanisms:
  • Intelligent Initialization: The ANN provides a high-quality initial estimate for k₂, thereby reducing the number of iterations required by the ISBF[∗] scheme.
  • Iterative Refinement: The parallel iterative procedure ensures that the final value of k₂ satisfies the implicit equation with high precision, preserving the second-order accuracy of the two-stage ISBF[∗].
  • Performance Evaluation: To illustrate the efficiency of the proposed ANN–implicit solver across different fractional orders σ, a summary table reports the percentage improvements in both computational time and accuracy relative to baseline methods (IEBF and ITBF). The percentage improvements are computed as follows:
    \text{Improvement in CPU time } (\%) = \frac{T_{\text{baseline}} - T_{\text{proposed}}}{T_{\text{baseline}}} \times 100,
    \text{Improvement in accuracy } (\%) = \frac{E_{\text{baseline}} - E_{\text{proposed}}}{E_{\text{baseline}}} \times 100,
    where T_baseline and E_baseline denote the CPU time and error of the reference method, respectively, while T_proposed and E_proposed correspond to the proposed scheme. These metrics enable a clear comparison of the relative performance advantages of the proposed fractional implicit solver for various fractional orders σ when applied to FOIVPs.
This combined strategy significantly reduces the computational cost while maintaining high accuracy, particularly for stiff or nonlinear fractional systems. All experiments were performed in MATLAB (R2023b) (The MathWorks, Natick, MA, USA) on a Windows 11 platform equipped with an Intel® Core™ i7-12700H (Intel Corporation, Santa Clara, CA, USA) CPU @ 2.70 GHz and 16 GB of RAM. The complete procedure is presented in pseudocode form in Algorithm 1.
Algorithm 1 Hybrid ANN-Based Two-Stage Method ISBF–AN[∗] for Fractional IVPs
Require: Fractional IVP D^σ y(t) = f(t, y(t)), y(0) = y₀; time step h; tolerance ϵ; total steps N; trained ANN
Ensure: Approximate solution {y_i}
1: Step 1: Define the FOIVP. Set the initial condition y₀ and fractional order σ ∈ (0, 1]; define the function f(t, y).
2: Step 2: Two-stage time stepping. For each time step i = 0, 1, …, N−1, compute the first stage k₁ = f(t_i, y_i).
3: Step 3: ANN-based initial guess for k₂.
  • Architecture: input layer with 2 neurons (t_i, y_i); hidden layer with 10 neurons and tanh activation; output layer with 1 neuron (predicted k₂) and linear activation.
  • Training: generate training data {(t_i, y_i) → k₂^exact}; minimize the MSE loss
    L(W, b) = \frac{1}{N_{\mathrm{train}}} \sum_{i=1}^{N_{\mathrm{train}}} \left( k_2^{\mathrm{pred}} - k_2^{\mathrm{exact}} \right)^{2}
    over the weights W, b using the Levenberg–Marquardt method.
  • Prediction: k₂^(0) = ANN(t_i, y_i).
  • Validate the ANN on a test dataset using the MSE and maximum absolute error.
4: Step 4: Refine k₂ via the Aberth–Ehrlich method. Set n = 0 and repeat until |k₂^(n+1) − k₂^(n)| < ϵ: update k₂^(n+1) using the Aberth–Ehrlich iteration and set n ← n + 1.
5: Step 5: Update the solution:
    y_{i+1} = y_i + \frac{h^{\sigma}}{2\Gamma(\sigma+1)} \left( k_1 + k_2^{(n+1)} \right).
6: Step 6: Repeat Steps 2–5 for all time steps i = 0, 1, …, N−1.
7: Step 7: Return the approximate solution vector {y₁, y₂, …, y_N}.
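Putting the pieces together, a compact driver for Algorithm 1 might read as follows. This is our condensed Python sketch for scalar problems: the ANN surrogate enters as an optional callable ann_predict (a hypothetical interface, not the authors' API), and Step 4 is realized with Newton-type refinement so that the listing stays self-contained:

```python
# End-to-end sketch of the hybrid two-stage solver for D^sigma y = f(t, y).
import math

def solve_hybrid(f, y0, t0, T, N, sigma, ann_predict=None):
    h = (T - t0) / N
    w = h**sigma / math.gamma(sigma + 1.0)
    t, y = t0, y0
    ys = [y0]
    for _ in range(N):
        k1 = f(t, y)                                    # Step 2: first stage
        k2 = ann_predict(t, y) if ann_predict else k1   # Step 3: initial guess
        for _ in range(50):                             # Step 4: refinement
            arg = y + 0.5 * w * (1.5 * k1 + 0.5 * k2)
            r = k2 - f(t + w, arg)
            if abs(r) < 1e-12:
                break
            fy = (f(t + w, arg + 1e-8) - f(t + w, arg)) / 1e-8
            k2 -= r / (1.0 - 0.25 * w * fy)
        y = y + 0.5 * w * (k1 + k2)                     # Step 5: update
        t += h
        ys.append(y)
    return ys
```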

Key Advantages of the Hybrid ANN-Based Implicit Scheme

The hybrid approach demonstrates the following:
  • Accurate prediction of the implicit stage value k 2 ;
  • Reduction in Newton iterations and overall CPU time;
  • Strong generalization capability of the ANN across time steps;
  • Potential applicability to other fractional-order systems and multi-dimensional problems.

5. Numerical Verification

To verify boundedness numerically, the initial condition y₀ can be perturbed by a small amount ϵ, and the corresponding solution sequences y_n and ỹ_n can be computed using the scheme (26). Plotting the difference
\mathrm{Err}_i = |y_i - \tilde{y}_i|, \qquad i = 0, 1, \ldots, N,
illustrates whether the scheme remains bounded under such perturbations.
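With the driver sketched after Algorithm 1, this perturbation test takes only a few lines (the values of ϵ, σ, and the test problem below are illustrative choices of ours):

```python
# Boundedness check: perturb y0 by eps and track Err_i = |y_i - y~_i|.
eps = 1e-8
base = solve_hybrid(lambda t, y: -y, 1.0,       0.0, 1.0, 100, 0.8)
pert = solve_hybrid(lambda t, y: -y, 1.0 + eps, 0.0, 1.0, 100, 0.8)
errs = [abs(a - b) for a, b in zip(base, pert)]
print(max(errs))   # remains O(eps) when the scheme is bounded
```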

Remarks

  • The boundedness analysis is based on the Lipschitz continuity of f ( x , y ) , a condition commonly satisfied in physical and biomedical models.
  • It provides a theoretical foundation for zero-stability and supports the convergence proofs.
  • In the hybrid ISBF–AN[∗] scheme, boundedness of the numerical formulation ensures that the ANN-generated initial guesses do not lead to divergent solutions.
We next consider the following fractional-order initial value problems to assess the efficiency, consistency, and stability of the proposed implicit two-stage scheme and its ANN-based hybrid variant in comparison with existing methods.
Example 1
(Benchmark from [50]). To assess the performance of the proposed hybrid ANN-based solver, we examine a representative FOIVP that characterizes processes with inherent memory and hereditary behavior.
D^{[\sigma]} y(t) + y(t) = \frac{2}{\Gamma(3-\sigma)} t^{2-\sigma} - \frac{t^{1-\sigma}}{\Gamma(2-\sigma)} + t^{2} - t, \qquad y(0) = 0, \quad t > 0,
where σ ∈ (0, 1]. The exact solution is given by y(t) = t² − t.
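As a quick plausibility check, the problem can be run through the solve_hybrid driver sketched in Section 4 (our snippet; σ = 1 is chosen so the Caputo problem reduces to a classical ODE, and the error is measured against y(t) = t² − t):

```python
# Example 1 with the ISBF[*] driver at sigma = 1.
import math

sigma, N, T = 1.0, 100, 1.0

def rhs(t, y):
    return (2.0 / math.gamma(3.0 - sigma) * t**(2.0 - sigma)
            - t**(1.0 - sigma) / math.gamma(2.0 - sigma)
            + t**2 - t - y)          # the y(t) term moved to the right-hand side

ys = solve_hybrid(rhs, 0.0, 0.0, T, N, sigma)
h = T / N
max_err = max(abs(y - ((i * h)**2 - i * h)) for i, y in enumerate(ys))
print(f"max error = {max_err:.3e}")
```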
Table 4 presents the comparison between exact and numerical solutions for σ = 1 using the ISBF[∗], IEBF[∗], and ITBF[∗] methods. The ISBF[∗] method achieves the smallest errors across all time levels, demonstrating superior stability and accuracy. IEBF[∗], while stable, tends to overestimate near the terminal region, whereas ITBF[∗] performs moderately well but remains less precise than the ISBF[∗] method. The exact and approximate solutions, along with the error curve obtained using IEBF[∗] for the FOIVP in Example 1, are presented in Figure 5.
Table 5 summarizes the performance metrics across different fractional orders σ. The ISBF[∗] method maintains low CPU time and memory usage with minimal errors, especially for higher σ. The IEBF[∗] scheme requires more function evaluations and exhibits slower error reduction, whereas the ITBF[∗] method provides balanced but less accurate results. Thus, the ISBF[∗] scheme demonstrates consistent convergence efficiency across all tested fractional orders.
The effect of step size h on numerical stability and accuracy is displayed in Table 5. As h decreases, all methods exhibit notable error reduction; however, the ISBF[∗] scheme retains higher accuracy and stable computational cost. IEBF[∗] shows slower convergence with decreasing h, while ITBF[∗] exhibits improved behavior only at fine resolutions.
Overall, Table 6 and Table 7 confirm that the implicit ISBF[∗] method achieves superior numerical accuracy, faster convergence, and better stability than the classical IEBF[∗] and ITBF[∗], especially for fractional orders close to unity.
Table 7 presents the convergence performance of the proposed ANN-accelerated scheme ISBF–AN[∗]. The corresponding input and output vectors of the hybrid ANN-based implicit scheme are provided in Appendix B.1 (Table A1). The training performance and regression curves (Figure 6a,b) for ISBF–AN[∗] indicate rapid convergence and a strong correlation between the predicted and target outputs. The algorithm converges within three iterations, achieving an exceptionally low mean squared error (MSE = 4.178 × 10⁻²⁵) and an almost negligible gradient magnitude. These results highlight the high precision and numerical stability achieved through neural network augmentation, where the adaptive parameter μ ensures controlled learning dynamics and rapid minimization of residual errors.
As summarized in Table 8, the hybrid ISBF–AN[∗] framework consistently outperforms the classical methods IEBF[∗] and ITBF[∗] across all fractional orders. The model achieves near-zero minimum errors and maintains perfect consistency (100%), while also delivering substantial efficiency gains and reduced computational overhead.
In Table 9, CPUIM(%) and AccIM(%) denote, respectively, the percentage improvement in CPU time and accuracy of the proposed method compared with the reference or existing approaches (IM). As the fractional order σ increases, both computational efficiency and accuracy improvements become more pronounced. The proposed implicit scheme consistently outperforms IEBF and ITBF across all tested orders, achieving up to approximately 50% faster computation and more than 63% higher accuracy for σ = 0.9. This trend underscores the advantage of the compact two-stage formulation and the efficient initialization mechanism, which together enhance stability and precision in higher-order fractional dynamics.
In summary, integrating ANN-assisted optimization within the ISBF–AN[∗] framework enhances numerical robustness, precision, and convergence speed. Compared with conventional schemes, the proposed ISBF–AN[∗] achieves superior accuracy and stability, establishing itself as a powerful and efficient approach for solving complex nonlinear fractional models.
Example 2
(Benchmark from [51]). To demonstrate the accuracy and computational performance of the developed hybrid ANN-based implicit solver, we analyze a benchmark fractional initial value problem. Such problems are well known for modeling processes with inherent memory dependence and hereditary behavior. Consider the FOIVP defined by
D^{[\sigma]} y(t) + y^{2}(t) = g(t), \qquad y(0) = 0, \quad y'(0) = 0, \quad t > 0,
where
g(t) = \frac{\Gamma(6)}{\Gamma(6-\sigma)} t^{5-\sigma} - \frac{3\,\Gamma(5)}{\Gamma(5-\sigma)} t^{4-\sigma} + \frac{2\,\Gamma(4)}{\Gamma(4-\sigma)} t^{3-\sigma} + \left(t^{5} - 3t^{4} + 2t^{3}\right)^{2},
and σ ∈ (0, 1]. The exact analytical solution is
y(t) = t^{5} - 3t^{4} + 2t^{3}.
Discussion of Table 10: The ISBF[∗] method attains higher accuracy at early times (t ≤ 0.4), whereas the ITBF[∗] scheme yields the smallest errors for t ≥ 0.5. The IEBF[∗] method is generally less accurate, confirming that the ITBF[∗] approach provides the best long-term precision among the tested schemes. The exact and approximate solutions, together with the corresponding error curve obtained using IEBF[∗] for the FOIVP in Example 2, are presented in Figure 7.
Discussion of Table 11: The ISBF[∗] method achieves an optimal balance between computational cost and accuracy. As the fractional order σ increases, its maximum error decreases sharply while maintaining a relatively low number of function evaluations. In contrast, the IEBF[∗] scheme requires more iterations and function calls to reach comparable accuracy, whereas the ITBF[∗] method shows inconsistent performance for smaller σ values. Overall, ISBF[∗] delivers the most stable and efficient results across varying fractional orders.
Discussion of Table 12: Reducing the step size h systematically enhances accuracy across all methods. The ISBF[∗] approach exhibits the fastest error reduction while maintaining stable memory usage, confirming its strong convergence properties. The IEBF[∗] and ITBF[∗] schemes also benefit from finer discretization but remain less efficient overall.
Discussion of Table 13 and Table 14: The corresponding input and output vectors of the hybrid ANN-based implicit scheme are provided in Appendix B.1 (Table A2). The training performance and regression curves (Figure 8a,b) for ISBF–AN[∗] indicate rapid convergence and a strong correlation between predicted and target outputs. The ANN-accelerated scheme ISBF–AN[∗] achieves exceptional numerical precision, converging within only three iterations and attaining a mean squared error of 4.178 × 10⁻¹⁹ with a negligible gradient magnitude. The adaptive parameter μ remains small, ensuring smooth learning dynamics and rapid convergence. Compared with classical schemes, the hybrid framework consistently attains zero minimum error and near-perfect consistency across all fractional orders σ ∈ [0.1, 1.0]. Although the CPU time is comparable to that of traditional solvers, the proposed method demonstrates substantially higher numerical efficiency and robustness. The integration of neural adaptation with fractional-order correction enhances both accuracy and convergence rate. Consequently, ISBF–AN[∗] clearly outperforms the classical methods IEBF[∗] and ITBF[∗] in precision, stability, and computational reliability, making it a powerful tool for solving complex nonlinear fractional problems.
Table 15 illustrates a consistent improvement trend with increasing σ , with accuracy gains exceeding 62.8% for σ = 0.9 . The proposed implicit solver achieves an excellent balance between computational efficiency and precision, proving particularly effective for highly memory-dependent fractional systems.
Example 3
(Benchmark from [52]). To evaluate the efficiency of the proposed hybrid ANN-based solver, we consider the following fractional initial value problem, which serves as a representative model for systems exhibiting memory and hereditary effects:
D^{[\sigma]} y(t) = \frac{40320}{\Gamma(9-\sigma)} t^{8-\sigma} - 3\, \frac{\Gamma\!\left(5 + \frac{\sigma}{2}\right)}{\Gamma\!\left(5 - \frac{\sigma}{2}\right)} t^{4-\frac{\sigma}{2}} + g(t), \qquad y(0) = 0, \quad y'(0) = 0, \quad t > 0,
where
g(t) = \frac{9}{4} \Gamma(\sigma+1) + \left(\frac{3}{2} t^{\frac{\sigma}{2}} - t^{4}\right)^{3} - y(t)^{\frac{3}{2}},
and the exact analytical solution is
y(t) = t^{8} - 3 t^{4+\frac{\sigma}{2}} + \frac{9}{4} t^{\sigma}.
Table 16 shows that the proposed ISBF[∗] scheme provides the closest agreement with the exact solution across all time levels. The IEBF[∗] approach exhibits noticeable deviations, particularly for larger step sizes, whereas the fractional ITBF[∗] scheme performs moderately well. Overall, the two-stage ISBF[∗] method demonstrates superior precision and stability. The exact and approximate solutions, together with the error curve obtained using IEBF[∗] for the FOIVP in Example 3, are presented in Figure 9.
Table 17 highlights that the implicit two-stage method ISBF[∗] consistently yields the lowest maximum error with stable CPU time and memory usage. The IEBF[∗] scheme requires more function calls, while ITBF[∗] errors escalate rapidly for coarser step sizes. Thus, the two-stage approach ISBF[∗] demonstrates the best trade-off between efficiency and accuracy for varying σ.
Table 18 confirms that the error decreases consistently as the step size h is refined. The ISBF[∗] method preserves superior accuracy with minimal computational overhead, while the IEBF[∗] scheme shows gradual improvement but remains less efficient. In contrast, the ITBF[∗] approach converges more slowly and exhibits higher initial errors. Overall, the ISBF[∗] framework demonstrates remarkable robustness and convergence stability across fine discretizations.
Table 19 and Table 20 collectively demonstrate that the enhanced ANN-driven scheme ISBF–AN[∗] achieves faster convergence, ultra-low MSE, and minimal gradient magnitudes compared with its classical counterparts. The corresponding input and output vectors of the hybrid ANN-based implicit scheme are provided in Appendix B.1 (Table A3). The training performance and regression curves (Figure 10a,b) for ISBF–AN[∗] indicate rapid convergence and a strong correlation between the predicted and target outputs. The adaptive learning parameter μ effectively stabilizes gradient descent, leading to lower CPU time and reduced memory usage. These results confirm that the hybrid ANN-based solver outperforms the conventional numerical schemes IEBF[∗] and ITBF[∗] in stability, precision, and computational efficiency, establishing it as a robust and scalable framework for solving complex fractional and nonlinear systems.
The data in Table 21 confirm stable convergence behavior with increasing σ. While the CPU-time improvement grows approximately linearly, the accuracy gain increases more rapidly, highlighting the effectiveness of ANN-based initialization in refining fractional dynamic predictions.
Example 4
(Nonlocal and Memory-Dependent Problem [53]). To assess the efficiency of the proposed hybrid ANN-based solver, we consider the following fractional initial value problem, which serves as a representative model for systems exhibiting memory and hereditary effects. The Caputo fractional initial value problem is defined as
D^{\sigma} y(t) + \lambda y(t) = f(t), \qquad t > 0, \quad y(0) = y_0,
where 0 < σ < 1, λ > 0, and y₀ = 0.5 is a given constant. The forcing function is given by
f(t) = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\sigma)} t^{\mu-\sigma} + \lambda t^{\mu},
where μ > 0 is non-integer so as to ensure a non-power-law response.
The exact analytical solution of (56) is
y(t) = y_0 E_{\sigma}(-\lambda t^{\sigma}) + t^{\mu},
where E_{\sigma}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\sigma k + 1)} is the Mittag–Leffler function [54]. When applied to the test problem (56) with a fractional forcing term f(t), the method accurately reproduces the memory-dependent dynamics, thereby satisfying the nonlocality condition.
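The exact solution can be evaluated by truncating the Mittag–Leffler series directly. The sketch below is minimal; the values of λ and μ are illustrative choices satisfying λ > 0 and non-integer μ, not parameters fixed by the paper:

```python
# Truncated Mittag-Leffler series E_sigma(z) = sum_k z^k / Gamma(sigma*k + 1);
# plain summation suffices for the moderate |z| arising on t in [0, 1].
import math

def mittag_leffler(z, sigma, n_terms=100):
    return sum(z**k / math.gamma(sigma * k + 1.0) for k in range(n_terms))

def y_exact(t, y0=0.5, lam=1.0, sigma=0.8, mu=1.3):
    return y0 * mittag_leffler(-lam * t**sigma, sigma) + t**mu

print(y_exact(0.5))
```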
Table 22 shows that the proposed ISBF[∗] scheme yields results closest to the exact solution across all time levels. The fractional ITBF[∗] scheme performs reasonably well, whereas the IEBF[∗] method exhibits noticeable deviations, particularly for larger step sizes. Overall, the two-stage ISBF[∗] formulation provides superior accuracy and stability. Figure 11 illustrates the exact and approximate solutions, together with the corresponding error curve obtained using IEBF[∗] for the FOIVP in Example 4.
Table 23 indicates that the implicit two-stage approach ISBF[∗] consistently achieves the lowest maximum error while maintaining stable CPU time and memory usage. In contrast, the ITBF[∗] scheme exhibits rapidly increasing errors for larger step sizes, whereas the IEBF[∗] method requires additional function evaluations. Overall, the two-stage ISBF[∗] technique demonstrates an optimal balance between accuracy and efficiency across varying values of σ.
Table 24 shows that decreasing the step size h consistently reduces the numerical error. The IEBF[∗] scheme improves gradually but remains less effective, whereas the ISBF[∗] method maintains higher accuracy with reduced computational overhead. In contrast, the ITBF[∗] technique exhibits larger initial errors and slightly higher computational times, indicating slower convergence efficiency. Overall, the ISBF[∗] framework demonstrates remarkable robustness and convergence stability under fine discretizations.
Table 25 and Table 26 demonstrate that the enhanced ANN-driven scheme ISBF–AN[∗] achieves faster convergence, ultra-low mean squared error (MSE), and smaller gradient magnitudes than its classical counterparts. The corresponding input and output vectors of the hybrid ANN-based implicit scheme are provided in Appendix B.1 (Table A4). The training performance and regression curves (Figure 12a,b) for ISBF–AN[∗] reveal a strong correlation between the target and predicted outputs, together with rapid convergence. The adaptive learning parameter μ effectively stabilizes the gradient-descent process, thereby reducing both CPU time and memory usage. Overall, the hybrid ANN-based formulation provides a stable and scalable framework for solving complex fractional and nonlinear systems, surpassing the classical numerical approaches IEBF[∗] and ITBF[∗] in terms of stability, accuracy, and computational efficiency.
Table 27 shows a consistent improvement trend with increasing σ, with accuracy and CPU-time gains exceeding 64.8% and 59.5%, respectively, for σ = 0.9. The proposed implicit solver achieves an excellent balance between computational efficiency and precision, proving particularly effective for highly memory-dependent fractional systems.

6. Conclusions and Future Work

In this study, a novel two-stage implicit scheme, ISBF[∗], was developed and analyzed for solving nonlinear fractional-order models arising in biomedical and engineering contexts. The base formulation was further enhanced through an artificial neural network (ANN) accelerator, resulting in the hybrid framework ISBF–AN[∗]. The proposed approach integrates adaptive learning-based updates with a memory-efficient implicit formulation, ensuring stability and robustness across a wide range of fractional orders σ ∈ (0, 1].

6.1. Performance Summary

The numerical results consistently demonstrate the superior convergence rate and computational efficiency of the ANN-enhanced hybrid method compared with the classical counterparts IEBF[∗] and ITBF[∗]. Table 7 highlights the extremely small mean squared error (MSE = 4.178 × 10⁻¹⁹) achieved by ISBF–AN[∗], which required only three iterations to reach convergence. The corresponding gradient magnitude and adaptive parameter μ further confirm the method's numerical stability and effective learning dynamics.
Moreover, the hybrid fractional results summarized in Table 8, Table 13, Table 19 and Table 26 indicate consistent accuracy and reliability across different fractional orders. The ISBF–AN[∗] scheme maintained perfect consistency (100%) and achieved substantial efficiency gains over the classical implicit algorithms IEBF[∗] and ITBF[∗] for FOIVPs (Table 9, Table 15, Table 21 and Table 27). The visual results of the stability of the proposed and existing schemes are presented in Figure 1 and Figure 2, further illustrating the smooth convergence trajectories and enlarged stability regions obtained with the proposed hybrid formulation. Overall, these findings confirm that combining implicit discretization with ANN-based correction significantly enhances convergence speed, error damping, and computational adaptability, establishing ISBF–AN[∗] as a robust and efficient framework for solving complex nonlinear fractional systems.

6.2. Comparative Discussion

Compared with traditional fractional methods such as IEBF[∗] and ITBF[∗], the proposed ISBF–AN[∗] scheme offers clear advantages: accurate prediction of the implicit stage value, fewer refinement iterations and lower CPU time, and strong generalization of the ANN across time steps (see Section 4.2).
This hybridization strategy not only improves numerical precision but also extends the applicability of two-stage implicit methods to stiff or highly nonlinear systems, where traditional schemes often fail or require excessive iterations.

6.3. Limitations and Future Directions

Despite its strong numerical performance, several aspects merit further investigation:
  • The ANN component introduces additional computational overhead during training, which may become significant for large-scale fractional PDE systems.
  • The current implementation assumes uniform time stepping; future work could explore adaptive time grids and variable-step implicit updates.
  • The present study focuses on one-dimensional examples. Extending the approach to multidimensional fractional PDEs and coupled nonlinear systems represents an important direction for future research.
  • Further investigation of advanced learning architectures (e.g., fractional neural operators, recurrent networks, or physics-informed models) may enhance both convergence speed and generalization capability of the hybrid framework.
  • The proposed hybrid two-stage scheme can be extended to variable-order fractional differential equations and multidimensional systems by adapting the increment function Φ(y; h) and vectorizing the stage variables k₁ and k₂. Future work will investigate these extensions with ANN-assisted initialization, focusing on maintaining stability and convergence in higher-dimensional settings.
Overall, the proposed two-stage implicit hybrid ANN approach represents a significant advancement in solving nonlinear fractional models efficiently and accurately. It provides a promising foundation for next-generation fractional solvers that integrate machine learning intelligence with classical numerical rigor.

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

Bruno Carpentieri’s work is supported by the European Regional Development and Cohesion Funds (ERDF) 2021–2027 under Project AI4AM-EFRE1052. He is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under the Progetti di Ricerca 2024 program.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

In this article, the following abbreviations are used:
D^{[σ]} – Caputo fractional derivative of order σ
σ – fractional order of the derivative (0 < σ ≤ 1)
t – independent variable or time in simulation
h – step length
y, y_input, y_output – input and predicted solution values of the ANN
y_exact – exact analytical solution
MSE – mean squared error between target and ANN output
MinError, MaxError – minimum and maximum absolute errors
CPU_H(s), CPU_C(s) – CPU time (s) for the hybrid and classical schemes
CPU Time – CPU time (s) for the hybrid implicit scheme
Fun-C – number of function evaluations
Mem(MB) – memory consumption (megabytes)
Iterations – number of iterations to convergence
Gradient – gradient magnitude of the ANN loss function
μ – adaptive learning/damping parameter
Eff (%) – efficiency gain of the hybrid scheme vs. classical CPU time
Consistency (%) – numerical stability or convergence reliability

Appendix A. Derivation of Coefficients B1–B5

The coefficients B1–B5 appearing in the proof of Theorem 2 are determined by matching the terms of the fractional Taylor expansion in powers of h^σ. They are explicitly given as follows:

B_1 = \frac{\alpha_1^2}{8} f^2 D_{yy}^{[\sigma]} f + \frac{\alpha_1 \alpha_2}{4} f\, D_{yy}^{[\sigma]} f\, k_2 + \frac{\alpha_2^2}{8} D_{yy}^{[\sigma]} f\, k_2^2,

B_2 = \frac{\alpha_2}{2} D_{y}^{[\sigma]} f\, \vartheta_2 + \frac{\alpha_1^2}{8} f^2 D_{yy}^{[\sigma]} f + \frac{\alpha_2}{4} f\, D_{yy}^{[\sigma]} f\, \vartheta_1 + \frac{\alpha_2^2}{8} D_{yy}^{[\sigma]} f\, \vartheta_1^2,

B_3 = \frac{\alpha_2^2}{4} D_{yy}^{[\sigma]} f\, \vartheta_1 + \frac{3\alpha_1^2}{16} f\, D_{yy}^{[\sigma]} f\, \vartheta_2 + \frac{\alpha_2}{16} \vartheta_1 f\, D_{yy}^{[\sigma]} f + \frac{9\alpha_2^3}{128} f^3 D_{yyy}^{[\sigma]} f + \frac{9\alpha_1 \alpha_2}{128} f^2 D_{yyy}^{[\sigma]} f\, \vartheta_1 + \frac{3\alpha_1 \alpha_2^2}{128} \vartheta_1^2 f\, D_{yyy}^{[\sigma]} f + \frac{\alpha_2^3}{384} \vartheta_1^3 D_{yyy}^{[\sigma]} f,

B_4 = \frac{\alpha_1}{4} D_{y}^{[\sigma]} f + \frac{\alpha_1 \alpha_2}{4} f \left( D_{y}^{[\sigma]} f \right)^2 + \frac{\alpha_2^2}{2} f^2 D_{yy}^{[\sigma]} f + \frac{\alpha_1^2 \alpha_2}{4} f^2 D_{y}^{[\sigma]} f\, D_{yy}^{[\sigma]} f + \frac{\alpha_2^2}{6} f^3 D_{yyy}^{[\sigma]} f,

B_5 = \frac{\alpha_2^2}{8} D_{y}^{[\sigma]} f + \frac{\alpha_1}{4} f \left( D_{y}^{[\sigma]} f \right)^2 + \frac{\alpha_2}{4} f^2 D_{yy}^{[\sigma]} f + \frac{\alpha_1 \alpha_2^2}{8} f^2 D_{y}^{[\sigma]} f\, D_{yy}^{[\sigma]} f + \frac{\alpha_2^2}{12} f^3 D_{yyy}^{[\sigma]} f.
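For readers who wish to manipulate these expressions symbolically, the snippet below transcribes the reconstructed B1 into SymPy. The symbol names are ours, and the factorization check at the end is our own observation about the formula as printed, not part of the original derivation.

```python
import sympy as sp

# Stand-ins for the quantities of Theorem 2: a1, a2 are the weights
# alpha_1, alpha_2; k2 is the stage value k_2; F and Fyy denote f and
# D_yy^[sigma] f evaluated at the expansion point.
a1, a2, k2 = sp.symbols('alpha_1 alpha_2 k_2')
F, Fyy = sp.symbols('f f_yy')

B1 = a1**2/8 * F**2 * Fyy + a1*a2/4 * F * Fyy * k2 + a2**2/8 * Fyy * k2**2

# B1 collapses to a perfect square, (f_yy / 8) * (alpha_1 f + alpha_2 k_2)^2:
print(sp.simplify(B1 - Fyy/8 * (a1*F + a2*k2)**2))  # prints 0
```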

Appendix B. Extended Numerical Analysis of ANN-Enhanced Two-Stage Implicit Methods

This appendix presents a detailed numerical investigation of the hybrid implicit scheme ISBF–AN[*] across several fractional orders. The tables summarize input–output relationships, ANN corrections, and final solution values for various values of σ. Each table corresponds to a specific phase of model refinement, highlighting the interaction between the neural network predictor and the implicit solver.

Appendix B.1. ISBF–AN[*] Input–Output Behavior and Final Response for Example 1

Discussion: Table A1 demonstrates the baseline interaction between the ANN predictor and the two-stage implicit solver ISBF[*]. For smaller fractional orders (σ = 0.1–0.5), the ANN corrections are minor and primarily smooth out oscillations in the implicit iterations. As σ increases toward 0.9, the ANN correction grows stronger, indicating a higher nonlocal memory effect characteristic of fractional systems. The y_final trajectories remain stable and converge monotonically, confirming that the ANN effectively refines the implicit solver's predictions.
Table A1. ANN input–output and final results for various fractional orders σ (row-oriented, 4 decimal places) for Example 1.
t: 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 1.00
σ = 0.1
y_input: 0.0000, −0.0075, −0.0230, −0.0420, −0.0623, −0.0829, −0.1033, −0.1233, −0.1616
ANN_out: −0.0225, −0.0093, 0.0115, 0.0369, 0.0648, 0.0932, 0.1201, 0.1427, 0.1656
y_final: −0.0075, −0.0230, −0.0420, −0.0623, −0.0829, −0.1033, −0.1233, −0.1427, 0.1799
σ = 0.3
y_input: 0.0000, −0.0070, −0.0229, −0.0431, −0.0656, −0.0892, −0.1132, −0.1371, −0.1835
ANN_out: −0.0497, −0.0740, −0.0919, −0.1062, −0.1172, −0.1235, −0.1225, −0.1116, −0.0604
y_final: −0.0070, −0.0229, −0.0431, −0.0656, −0.0892, −0.1132, −0.1371, −0.1606, 0.2057
σ = 0.5
y_input: 0.0000, −0.0067, −0.0218, −0.0404, −0.0612, −0.0832, −0.1061, −0.1293, −0.1757
ANN_out: −0.1169, −0.1775, −0.2426, −0.3126, −0.3869, −0.4634, −0.5388, −0.6099, −0.7305
y_final: −0.0067, −0.0218, −0.0404, −0.0612, −0.0832, −0.1061, −0.1293, −0.1525, 0.1985
σ = 0.7
y_input: 0.0000, −0.0062, −0.0196, −0.0350, −0.0514, −0.0686, −0.0861, −0.1040, −0.1398
ANN_out: −0.2801, −0.4568, −0.7374, −1.0410, −1.3508, −1.6521, −1.9308, −2.1761, −2.5486
y_final: −0.0062, −0.0196, −0.0350, −0.0514, −0.0686, −0.0861, −0.1040, −0.1219, −0.1576
σ = 0.9
y_input: 0.0000, −0.0054, −0.0165, −0.0279, −0.0395, −0.0511, −0.0627, −0.0742, 0.0968
ANN_out: −0.6551, −0.7298, −0.8173, −0.8945, −0.9668, −1.0359, −1.1028, −1.1676, −1.2922
y_final: −0.0054, −0.0165, −0.0279, −0.0395, −0.0511, −0.0627, −0.0742, −0.0856, 0.1079

Appendix B.2. ISBF–AN[*] Input–Output Behavior and Final Response for Example 2

Discussion: Table A2 presents the first variant of the hybridized implicit method enhanced with ANN correction. For moderate σ values (0.2–0.8), the ANN's corrective term exhibits smooth and bounded variations, reflecting a balanced learning response. The y_final values remain close to the expected analytic trends, validating both the numerical consistency and the adaptive nature of the ANN correction. As σ increases, the model captures nonlinear growth with high numerical fidelity and no visible divergence, showcasing strong convergence and numerical stability.
Table A2. ANN input–output and final results for various fractional orders σ (row-oriented, 4 decimal places) for Example 2.
t: 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 1.00
σ = 0.2
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0011, 0.0028
ANN_out: 0.0000, −0.0113, −0.0240, −0.0380, −0.0532, −0.0697, −0.0871, −0.1053, −0.1430
y_final: 0.0000, 0.0000, 0.0000, 0.0001, 0.0003, 0.0006, 0.0011, 0.0018, 0.0041
σ = 0.4
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0010, 0.0025
ANN_out: −0.0001, −0.0026, −0.0049, −0.0070, −0.0091, −0.0110, −0.0127, −0.0142, −0.0169
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0010, 0.0016, 0.0035
σ = 0.6
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0009, 0.0021
ANN_out: 0.0002, −0.0031, −0.0093, −0.0183, −0.0297, −0.0432, −0.0585, −0.0752, −0.1115
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0009, 0.0014, 0.0029
σ = 0.8
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0008, 0.0017
ANN_out: 0.0002, 0.0011, 0.0027, 0.0049, 0.0078, 0.0113, 0.0155, 0.0203, 0.0320
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0008, 0.0012, 0.0023
σ = 1.0
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0002, 0.0004, 0.0006, 0.0013
ANN_out: 0.0005, 0.0025, 0.0060, 0.0108, 0.0172, 0.0250, 0.0342, 0.0449, 0.0705
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0002, 0.0002, 0.0004, 0.0006, 0.0017

Appendix B.3. ISBF–AN[*] Input–Output Behavior and Final Response for Example 3

Discussion: Table A3 presents the refined hybrid simulation in which the ANN outputs are further constrained to ensure smoother convergence. Relative to the earlier setup, the network exhibits improved damping behavior with reduced oscillatory adjustments, especially for σ = 0.4–0.8. The close alignment between y_final and the theoretical values confirms the stability and accuracy of the two-stage implicit scheme ISBF–AN[*] under neural feedback guidance.
Across both hybrid versions, the ANN acts as a convergence enhancer: it reduces the iteration count, controls overshoot, and guides the implicit method toward the minimal mean-square-error region. Collectively, Tables A1, A2, and A3 illustrate the evolution of ANN-assisted fractional solvers from baseline correction to fully adaptive hybrid learning control.
Table A4 repeats this refined configuration at five-decimal precision. Again the network exhibits improved damping with reduced oscillatory adjustments, particularly for σ = 0.4–0.8, and the close agreement between y_final and the theoretical expectations confirms the stability and accuracy of the two-stage implicit formulation ISBF–AN[*] under neural feedback guidance.
Table A3. ANN input–output and final results for various fractional orders σ (row-oriented, 4 decimal places) for Example 3.
t: 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 1.00
σ = 0.2
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0011, 0.0028
ANN_out: 0.0000, −0.0020, −0.0038, −0.0054, −0.0068, −0.0080, −0.0092, −0.0102, −0.0120
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0011, 0.0018, 0.0041
σ = 0.4
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0010, 0.0025
ANN_out: 0.0001, 0.0048, 0.0072, 0.0071, 0.0041, −0.0017, −0.0103, −0.0216, −0.0509
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0006, 0.0010, 0.0016, 0.0035
σ = 0.6
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0009, 0.0021
ANN_out: 0.0000, −0.0008, −0.0031, −0.0066, −0.0112, −0.0166, −0.0225, −0.0288, −0.0411
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0009, 0.0014, 0.0029
σ = 0.8
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0008, 0.0017
ANN_out: 0.0003, 0.0005, 0.0009, 0.0017, 0.0076, 0.0044, 0.0063, 0.0084, 0.0138
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0003, 0.0005, 0.0008, 0.0012, 0.0023
σ = 1.0
y_input: 0.0000, 0.0000, 0.0000, 0.0001, 0.0001, 0.0002, 0.0004, 0.0006, 0.0013
ANN_out: 0.0005, 0.0025, 0.0057, 0.0104, 0.0164, 0.0239, 0.0327, 0.0429, 0.0674
y_final: 0.0000, 0.0000, 0.0001, 0.0001, 0.0002, 0.0002, 0.0004, 0.0006, 0.0017
Table A4. ANN input–output and final results for various fractional orders σ (row-oriented, 5 decimal places).
t: 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 1.00
σ = 0.1
y_input: 0.50000, 0.26279, 0.13843, 0.07359, 0.04025, 0.02367, 0.01612, 0.01353, 0.01376, 0.015, 0.018
ANN_out: −0.21532, −0.11251, −0.05815, −0.02926, −0.01375, −0.00528, −0.00051, 0.00231, 0.00411, 0.001, 0.005
y_final: 0.26279, 0.13843, 0.07359, 0.04025, 0.02367, 0.01612, 0.01353, 0.01376, 0.01567, 0.016, 0.018
σ = 0.3
y_input: 0.50000, 0.37844, 0.28672, 0.21775, 0.16617, 0.12791, 0.09985, 0.07967, 0.06557, 0.055, 0.050
ANN_out: −0.36863, −0.27763, −0.20816, −0.15504, −0.11432, −0.08300, −0.05882, −0.04005, −0.02539, −0.017, −0.013
y_final: 0.37844, 0.28672, 0.21775, 0.16617, 0.12791, 0.09985, 0.07967, 0.06557, 0.05621, 0.051, 0.050
σ = 0.5
y_input: 0.50000, 0.44676, 0.39946, 0.35758, 0.32064, 0.28821, 0.25989, 0.23530, 0.21412, 0.196, 0.180
ANN_out: −0.44357, −0.39334, −0.34752, −0.30579, −0.26779, −0.23317, −0.20162, −0.17284, −0.14657, −0.126, −0.122
y_final: 0.44676, 0.39946, 0.35758, 0.32064, 0.28821, 0.25989, 0.23530, 0.21412, 0.19605, 0.184, 0.180
σ = 0.7
y_input: 0.50000, 0.47866, 0.45848, 0.43947, 0.42163, 0.40495, 0.38943, 0.37502, 0.36173, 0.349, 0.338
ANN_out: −0.47397, −0.44718, −0.42031, −0.39363, −0.36727, −0.34131, −0.31580, −0.29077, −0.26623, −0.242, −0.242
y_final: 0.47866, 0.45848, 0.43947, 0.42163, 0.40495, 0.38943, 0.37502, 0.36173, 0.34952, 0.338, 0.338
σ = 0.9
y_input: 0.50000, 0.49193, 0.48420, 0.47684, 0.46985, 0.46324, 0.45700, 0.45115, 0.44568, 0.440, 0.435
ANN_out: −0.47969, −0.45786, −0.43547, −0.41275, −0.38982, −0.36676, −0.34361, −0.32041, −0.29720, −0.273, −0.273
y_final: 0.49193, 0.48420, 0.47684, 0.46985, 0.46324, 0.45700, 0.45115, 0.44568, 0.44059, 0.435, 0.435
Overall Observation: The ANN-coupled two-stage implicit strategy outperforms the classical iterative schemes, offering rapid convergence, improved error attenuation, and smoother dynamic behavior across all tested fractional orders. These improvements complement the results summarized in the main text (Tables 7, 8, 13, 14, 19, 20, and 26) and confirm the hybrid method's strong potential for solving nonlinear fractional models with high precision and stability.
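As a complement to these tables, the toy script below imitates the corrector role that the ANN plays here: a small network is trained to predict the residual between a perturbed "implicit" solution and the exact one, and the learned correction is then added back. Everything in it is hypothetical (synthetic data, a hand-rolled one-hidden-layer network, and plain gradient descent in place of the Levenberg–Marquardt-type training suggested by the μ values in Tables 7, 13, 19, and 25); it illustrates only the predictor–corrector coupling, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the y_input / y_exact pairs tabulated above.
t = np.linspace(0.0, 1.0, 64)[:, None]
y_in = np.exp(-t) + 0.05 * np.sin(8 * t)   # "implicit" prediction with error
y_ex = np.exp(-t)                          # exact solution (illustrative)

# One hidden layer of tanh units learns the correction y_ex - y_in from t.
W1, b1 = rng.normal(0.0, 1.0, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 1.0, (16, 1)), np.zeros(1)
lr = 0.05
for _ in range(5000):                      # full-batch gradient descent on MSE
    H = np.tanh(t @ W1 + b1)
    err = (H @ W2 + b2) - (y_ex - y_in)    # residual of the learned correction
    gW2 = H.T @ err / len(t); gb2 = err.mean(0)
    gH = (err @ W2.T) * (1.0 - H**2)
    gW1 = t.T @ gH / len(t); gb1 = gH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

y_final = y_in + np.tanh(t @ W1 + b1) @ W2 + b2   # ANN-corrected solution
print("MSE before:", float(np.mean((y_in - y_ex) ** 2)))
print("MSE after :", float(np.mean((y_final - y_ex) ** 2)))  # should shrink
```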

References

  1. Clifton, N.E.; Lin, J.Q.; Holt, C.E.; O’Donovan, M.C.; Mill, J. Enrichment of the local synaptic translatome for genetic risk associated with schizophrenia and autism spectrum disorder. Biol. Psychiatry 2024, 95, 888–895. [Google Scholar] [CrossRef]
  2. Xie, W.; Kong, C.; Luo, W.; Zheng, J.; Zhou, Y. C-reactive protein and cognitive impairment: A bidirectional Mendelian randomization study. Arch. Gerontol. Geriatr. 2024, 121, 105359. [Google Scholar] [CrossRef] [PubMed]
  3. Bhonsle, S.; Saxena, S. A review on control-relevant glucose–insulin dynamics models and regulation strategies. Proc. Inst. Mech. Eng. I J. Syst. Control Eng. 2020, 234, 596–608. [Google Scholar] [CrossRef]
  4. Attar, M.A.; Roshani, M.; Hosseinzadeh, K.; Ganji, D.D. Analytical solution of fractional differential equations by Akbari–Ganji’s method. Partial Differ. Equ. Appl. Math. 2022, 6, 100450. [Google Scholar] [CrossRef]
  5. Haubold, H.J.; Mathai, A.M.; Saxena, R.K. Mittag-Leffler functions and their applications. J. Appl. Math. 2011, 2011, 298628. [Google Scholar] [CrossRef]
  6. Kilbas, A.A.; Saigo, M.; Saxena, R.K. Generalized Mittag-Leffler function and generalized fractional calculus operators. Integral Transforms Spec. Funct. 2004, 15, 31–49. [Google Scholar] [CrossRef]
  7. Sowa, M. Application of SubIval, a method for fractional-order derivative computations in IVPs. In Theory and Applications of Non-Integer Order Systems: 8th Conference on Non-Integer Order Calculus and Its Applications, Zakopane, Poland, 20–21 September 2016; Springer: Cham, Switzerland, 2016; pp. 489–499. [Google Scholar]
  8. Al-Mazmumy, M.; Alyami, M.A.; Alsulami, M.; Alsulami, A.S. Efficient modified Adomian decomposition method for solving nonlinear fractional differential equations. Int. J. Anal. Appl. 2024, 22, 76. [Google Scholar] [CrossRef]
  9. Ateş, İ.; Yildirim, A. Application of variational iteration method to fractional initial-value problems. Int. J. Nonlinear Sci. Numer. Simul. 2009, 10, 877–884. [Google Scholar] [CrossRef]
  10. Hossain, M.B.; Hossain, M.J.; Miah, M.M.; Alam, M.S. A comparative study on fourth order and Butcher’s fifth order Runge–Kutta methods with third order initial value problem (IVP). Appl. Comput. Math. 2017, 6, 243. [Google Scholar] [CrossRef]
  11. Sumon, M.M.I.; Nurulhoque, M. A comparative study of numerical methods for solving initial value problem (IVP) of ordinary differential equations (ODE). Am. J. Appl. Math. 2023, 11, 106–118. [Google Scholar] [CrossRef]
  12. Anastassi, Z.A.; Simos, T.E. Special optimized Runge–Kutta methods for IVPs with oscillating solutions. Int. J. Mod. Phys. C 2004, 15, 1–15. [Google Scholar] [CrossRef]
  13. Senu, N.; Suleiman, M.; Ismail, F.; Othman, M. A new diagonally implicit Runge–Kutta–Nyström method for periodic IVPs. WSEAS Trans. Math. 2010, 9, 679–688. [Google Scholar] [CrossRef]
  14. Mukhopadhyay, N. Two-Stage and Multi-Stage Estimation; Routledge: London, UK, 2019; pp. 429–452. [Google Scholar]
  15. Fu, Z.; Tang, T.; Yang, J. Energy Diminishing Implicit–Explicit Runge–Kutta Methods for Gradient Flows. Math. Comput. 2024, 93, 2745–2767. [Google Scholar] [CrossRef]
  16. Yang, K.T. Artificial neural networks (ANNs): A new paradigm for thermal science and engineering. ASME J. Heat Transfer. 2008, 130, 093001. [Google Scholar] [CrossRef]
  17. Cartwright, H. (Ed.) Artificial Neural Networks; Humana Press: Totowa, NJ, USA, 2015; Volume 1260. [Google Scholar]
  18. Thakur, S.; Mitra, H.; Ardekani, A.M. Physics-informed neural network-based inverse framework for time-fractional differential equations for rheology. Biology 2025, 14, 779. [Google Scholar] [CrossRef] [PubMed]
  19. Stiasny, J.B. Physics-Informed Neural Networks for Power System Dynamics. Ph.D. Thesis, Technical University of Denmark (DTU), Lyngby, Denmark, 2023. [Google Scholar]
  20. Fronk, C.; Petzold, L. Training stiff neural ordinary differential equations with explicit rational Taylor series methods. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 073133. [Google Scholar] [CrossRef]
  21. Zhai, W.; Tao, D.; Bao, Y. Parameter estimation and modeling of nonlinear dynamical systems based on Runge–Kutta physics-informed neural network. Nonlinear Dyn. 2023, 111, 21117–21130. [Google Scholar] [CrossRef]
  22. Anvari, M.; Marasi, H.; Kheiri, H. Implicit Runge-Kutta based sparse identification of governing equations in biologically motivated systems. Sci. Rep. 2025, 15, 32286. [Google Scholar] [CrossRef]
  23. Luo, M.; Qiu, W.; Nikan, O.; Avazzadeh, Z. Second-order accurate, robust and efficient ADI Galerkin technique for the three-dimensional nonlocal heat model arising in viscoelasticity. Appl. Math. Comput. 2023, 440, 127655. [Google Scholar] [CrossRef]
  24. Ojo, E.K.; Iyase, S.A.; Anake, T.A. Resonant fractional order differential equation with two-dimensional kernel on the half-line. J. Math. Comput. Sci. 2024, 32, 122–136. [Google Scholar] [CrossRef]
  25. Granados, A.L. Implicit Runge–Kutta Algorithm Using Newton–Raphson Method; Technical Report; Departamento de Mecánica, Universidad Simón Bolívar: Caracas, Venezuela, 1998; Available online: https://www.researchgate.net/publication/275833462_Implicit_Runge-Kutta_Algorithm_Using_Newton-Raphson_Method (accessed on 1 November 2025).
  26. Krzywanski, J.; Sosnowski, M.; Grabowska, K.; Zylka, A.; Lasek, L.; Kijo-Kleczkowska, A. Advanced computational methods for modeling, prediction and optimization—A review. Materials 2024, 17, 3521. [Google Scholar] [CrossRef] [PubMed]
  27. Wen, Y.; Chaolu, T.; Wang, X. Solving the initial value problem of ordinary differential equations by Lie group based neural network method. PLoS ONE 2022, 17, e0265992. [Google Scholar] [CrossRef] [PubMed]
  28. Finzi, M.; Potapczynski, A.; Choptuik, M.; Wilson, A.G. A stable and scalable method for solving initial value PDEs with neural networks. arXiv 2023, arXiv:2304.14994. [Google Scholar] [CrossRef]
  29. Yadav, N.; Ngo, T.T.; Kim, J.H. An algorithm for numerical solution of differential equations using harmony search and neural networks. J. Appl. Anal. Comput. 2022, 12, 1277–1293. [Google Scholar] [CrossRef]
  30. Dong, S.; Ni, N. Learning the Exact Time Integration Algorithm for Initial Value Problems by Randomized Neural Networks. arXiv 2025, arXiv:2502.10949. [Google Scholar] [CrossRef]
  31. El Alaoui, M.; Rougui, M. Examining the application of artificial neural networks (ANNs) for advancing energy efficiency in buildings: A comprehensive review. J. Sustain. Res. 2024, 6, 1. [Google Scholar] [CrossRef]
  32. Al-Qawabah, S.; Shaban, N.A.; Al Aboushi, A.; Al-Salaymeh, A.; Al-Maaitah, A.; Abdelhafez, E. Prediction of the output of the tri-generation concentrated solar power system based on artificial neural network. In Proceedings of the 2024 6th Global Power, Energy and Communication Conference (GPECOM), Istanbul, Türkiye, 4–7 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 345–349. [Google Scholar]
  33. Jiang, S.; Zhang, J.; Zhang, Q.; Zhang, Z. Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations. Commun. Comput. Phys. 2017, 21, 650–678. [Google Scholar] [CrossRef]
  34. Artin, E. The Gamma Function; Dover Publications: Mineola, NY, USA, 2015. [Google Scholar]
  35. Agrawal, O.P. Fractional variational calculus in terms of Riesz fractional derivatives. J. Phys. A Math. Theor. 2007, 40, 6287. [Google Scholar] [CrossRef]
  36. Riesz, M. L’intégrale de Riemann-Liouville et le problème de Cauchy pour l’équation des ondes. Bull. Soc. Math. France 1939, 67, 153–170. [Google Scholar] [CrossRef]
  37. Batiha, I.M.; Abubaker, A.A.; Jebril, I.H.; Al-Shaikh, S.B.; Matarneh, K. New algorithms for dealing with fractional initial value problems. Axioms 2023, 12, 488. [Google Scholar] [CrossRef]
  38. Shams, M.; Alalyani, A. High-performance adaptive step-size fractional numerical scheme for solving fractional differential equations. Sci. Rep. 2025, 15, 13006. [Google Scholar] [CrossRef]
  39. Ali, M.A. Development and analysis of advanced numerical algorithms for solving differential equations using Taylor, Euler, and Runge–Kutta methods. J. Comput. Anal. Appl. 2025, 34, 8. [Google Scholar]
  40. Bataineh, M.; Alaroud, M.; Al-Omari, S.; Agarwal, P. Series representations for uncertain fractional IVPs in the fuzzy conformable fractional sense. Entropy 2021, 23, 1646. [Google Scholar] [CrossRef]
  41. Hu, F.Q.; Hussaini, M.Y.; Manthey, J.L. Low-dissipation and low-dispersion Runge–Kutta schemes for computational acoustics. J. Comput. Phys. 1996, 124, 177–191. [Google Scholar] [CrossRef]
  42. Batiha, I.M.; Abdalsmad, H.F.; Jebril, I.H.; Al-Khawaldeh, H.O.; AlKasasbeh, W.A.A.; Momani, S. Trapezoidal scheme for the numerical solution of fractional initial value problems. Int. J. Robot. Control Syst. 2025, 5, 1238–1253. [Google Scholar] [CrossRef]
  43. Corless, R.M.; Kaya, C.Y.; Moir, R.H. Optimal residuals and the Dahlquist test problem. Numer. Algorithms 2019, 81, 1253–1274. [Google Scholar] [CrossRef]
  44. Dahlquist, G. Positive functions and some applications to stability questions for numerical methods. In Recent Advances in Numerical Analysis; Academic Press: New York, NY, USA, 1978; pp. 1–29. [Google Scholar]
  45. Shams, M.; Kausar, N.; Araci, S.; Oros, G.I. Artificial hybrid neural network-based simultaneous scheme for solving nonlinear equations: Applications in engineering. Alex. Eng. J. 2024, 108, 292–305. [Google Scholar] [CrossRef]
  46. Moré, J.J. The Levenberg–Marquardt algorithm: Implementation and theory. In Numerical Analysis: Proceedings of the Biennial Conference, Dundee, Scotland, 28 June–1 July 1977; Springer: Berlin/Heidelberg, Germany, 2006; pp. 105–116. [Google Scholar]
  47. Wang, Y. Gauss–Newton method. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 415–420. [Google Scholar] [CrossRef]
  48. Gratton, S.; Lawless, A.S.; Nichols, N.K. Approximate Gauss–Newton methods for nonlinear least squares problems. SIAM J. Optim. 2007, 18, 106–132. [Google Scholar] [CrossRef]
  49. Cameron, T.R.; Graillat, S. On a compensated Ehrlich–Aberth method for the accurate computation of all polynomial roots. Electron. Trans. Numer. Anal. 2022, 55, 401–423. [Google Scholar] [CrossRef]
  50. Sowa, M. Numerical computations of the fractional derivative in IVPs: Examples in MATLAB and Mathematica. Informatyka, Automatyka, Pomiary W Gospodarce I Ochronie Środowiska 2017, 7, 19–22. [Google Scholar] [CrossRef]
  51. Shams, M.; Rufai, M.A. Fractional-order numerical scheme with symmetric structure for fractional differential equations with step-size control. Symmetry 2025, 17, 1685. [Google Scholar] [CrossRef]
  52. Esmaeili, S.; Shamsi, M.; Luchko, Y. Numerical solution of fractional differential equations with a collocation method based on Müntz polynomials. Comput. Math. Appl. 2011, 62, 918–929. [Google Scholar] [CrossRef]
  53. Li, C.; Zeng, F. Numerical Methods for Fractional Calculus; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  54. Biolek, D.; Garrappa, R.; Mainardi, F.; Popolizio, M. Derivatives of Mittag–Leffler functions: Theory, computation and applications. Nonlinear Dyn. 2025, 113, 34389–34403. [Google Scholar] [CrossRef]
Figure 1. Stability regions of the proposed ISBF[*] scheme in the complex z-plane for different fractional orders σ ∈ (0, 1]. The plots illustrate the variation of the stability domain as σ increases.
Figure 2. Stability regions of the IEBF[*], ITBF[*], and ISBF[*] schemes in the complex z-plane for σ = 0.9.
Figure 3. Flowchart of the proposed hybrid ANN-assisted implicit numerical scheme (ISBF–AN[*]) for solving FOIVPs.
Figure 4. Proposed ANN architecture for fractional-order initial value problems. The network takes as input the time step, state variable, fractional derivative, system function, and order σ, learns the underlying nonlinear mapping, and predicts the next state ŷ_{n+1} using adaptive optimization.
Figure 5. Panels (a,b) show the exact and approximate solutions and the corresponding absolute error plots of the ISBF[*] scheme for different fractional orders σ in Example 1.
Figure 6. Panels (a,b) show the training statistics and regression performance of the ISBF–AN[*] hybrid scheme for solving fractional-order initial value problems in Example 1.
Figure 7. Panels (a,b) show the exact and approximate solutions and the corresponding absolute error plots of ISBF[*] for various fractional orders σ in Example 2.
Figure 8. Panels (a,b) show the training statistics and regression performance of the ISBF–AN[*] hybrid scheme for solving fractional-order initial value problems in Example 2.
Figure 9. Panels (a,b) show the exact and approximate solutions and the corresponding absolute error plots of ISBF[*] for various fractional orders σ in Example 3.
Figure 10. Panels (a,b) show the training statistics and regression performance of the ISBF–AN[*] hybrid scheme for solving fractional-order initial value problems in Example 3.
Figure 11. Panels (a,b) show the exact and approximate solutions and the corresponding absolute error plots of ISBF[*] for various fractional orders σ in Example 4.
Figure 12. Panels (a,b) show the training statistics and regression performance of the ISBF–AN[*] hybrid scheme for solving fractional-order initial value problems in Example 4.
Table 1. Summary of stability on the real z-axis, showing the numerical values of z_left for the ISBF[*] scheme.
σ | Γ(σ+1) | ISBF[*] stability summary
0.10 | Γ(1.10) ≈ 0.95135 | z_left ≈ −1.902, z ∈ [−1.902, 0]
0.30 | Γ(1.30) ≈ 0.89747 | z_left ≈ −1.794, z ∈ [−1.794, 0]
0.50 | Γ(1.50) ≈ 0.88623 | z_left ≈ −1.772, z ∈ [−1.772, 0]
0.70 | Γ(1.70) ≈ 0.90863 | z_left ≈ −1.817, z ∈ [−1.817, 0]
0.90 | Γ(1.90) ≈ 0.96177 | z_left ≈ −1.923, z ∈ [−1.923, 0]
Table 2. Comparison of stability characteristics of the IEBF[*], ITBF[*], and ISBF[*] schemes for different fractional orders σ.
Method | Stability Function R(z) | Region of Stability | Remarks/Performance
σ = 0.1
IEBF[*] | R(z) = 1 / (1 − z/Γ(1.1)) | Entire left-half plane | Highly stable, strongly damped
ITBF[*] | R(z) = (1 + z/(2Γ(1.1))) / (1 − z/(2Γ(1.1))) | Large left-half plane | Stable
ISBF[*] | R(z) = 1 + (z/(2Γ(1.1)))·(2 + z/Γ(1.1)) / (1 − z/(4Γ(1.1))) | Broad left-half plane | Best accuracy, balanced damping
σ = 0.3
IEBF[*] | R(z) = 1 / (1 − z/Γ(1.3)) | Left-half plane | Stable but low accuracy
ITBF[*] | R(z) = (1 + z/(2Γ(1.3))) / (1 − z/(2Γ(1.3))) | Moderate left-half plane | Mild oscillations, better accuracy
ISBF[*] | R(z) = 1 + (z/(2Γ(1.3)))·(2 + z/Γ(1.3)) / (1 − z/(4Γ(1.3))) | Wider left-half plane | Best stability–accuracy tradeoff
σ = 0.5
IEBF[*] | R(z) = 1 / (1 − z/Γ(1.5)) | Left-half plane | Stable, strong damping
ITBF[*] | R(z) = (1 + z/(2Γ(1.5))) / (1 − z/(2Γ(1.5))) | Symmetric about real axis | Second-order accuracy
ISBF[*] | R(z) = 1 + (z/(2Γ(1.5)))·(2 + z/Γ(1.5)) / (1 − z/(4Γ(1.5))) | Largest left-half-plane region | Highest accuracy, robust damping
σ = 0.7
IEBF[*] | R(z) = 1 / (1 − z/Γ(1.7)) | Left-half plane | Stable but less accurate
ITBF[*] | R(z) = (1 + z/(2Γ(1.7))) / (1 − z/(2Γ(1.7))) | Moderate region | Accurate but near marginal stability
ISBF[*] | R(z) = 1 + (z/(2Γ(1.7)))·(2 + z/Γ(1.7)) / (1 − z/(4Γ(1.7))) | Wide left-half region | Stable, more accurate than both
σ = 0.9
IEBF[*] | R(z) = 1 / (1 − z/Γ(1.9)) | Left-half plane | Stable but slow convergence
ITBF[*] | R(z) = (1 + z/(2Γ(1.9))) / (1 − z/(2Γ(1.9))) | Nearly symmetric region | Accurate but can oscillate
ISBF[*] | R(z) = 1 + (z/(2Γ(1.9)))·(2 + z/Γ(1.9)) / (1 − z/(4Γ(1.9))) | Largest region among all | Excellent stability and accuracy
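The reconstructed ISBF[*] stability function can be checked numerically against Table 1. The short scan below, written with our own helper names, recovers the tabulated left endpoints z_left, which coincide with −2Γ(σ+1) up to the three-decimal truncation used in Table 1.

```python
import numpy as np
from math import gamma

def R_isbf(z, sigma):
    # Stability function of ISBF[*] as reconstructed in Table 2; this closed
    # form is our reading of the source and should be treated as an assumption.
    g = gamma(sigma + 1.0)
    return 1.0 + (z / (2.0 * g)) * (2.0 + z / g) / (1.0 - z / (4.0 * g))

# Most negative real z with |R(z)| <= 1 (the z_left of Table 1).
for sigma in (0.1, 0.3, 0.5, 0.7, 0.9):
    zs = np.linspace(-3.0, 0.0, 300001)
    stable = np.abs(R_isbf(zs, sigma)) <= 1.0
    print(f"sigma = {sigma}: z_left ~ {zs[stable][0]:.5f}")
# Prints z_left ~ -1.90270, -1.79494, -1.77245, -1.81727, -1.92353; these
# equal -2*Gamma(sigma+1) and match Table 1's entries (truncated to 3 decimals).
```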
Table 3. Asymptotic computational complexity comparison between the classical implicit solver and the proposed ANN-assisted hybrid scheme.
Component | Classical Implicit Solver | Hybrid ANN–Implicit Solver
Cost per time step | O(m · C_solve(n)) | O(m · C_solve(n) + p)
Total runtime (excluding training) | O(N · m · C_solve(n)) | O(N · m · C_solve(n) + N · p)
Offline training cost | – | O(E · N_train · p)
Memory requirement | O(n) | O(n + p)
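Plugging representative numbers into this cost model makes the tradeoff explicit: the hybrid adds a per-step inference cost p but, per the experiments reported below, cuts the number of inner iterations m. All quantities in the snippet are hypothetical placeholders chosen only to exercise the formulas in Table 3.

```python
# Cost model from Table 3 with illustrative (hypothetical) constants.
N, n = 10_000, 50                      # time steps, system dimension
c_solve = n ** 3                       # dense linear-solve cost per iteration
m_classic, m_hybrid, p = 6, 3, 2_000   # iterations without/with ANN warm start
classic = N * m_classic * c_solve      # O(N * m * C_solve(n))
hybrid = N * (m_hybrid * c_solve + p)  # O(N * (m * C_solve(n) + p))
print(f"predicted speedup ~ {classic / hybrid:.2f}x")  # ~1.99x here
```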
Table 4. Comparison between exact and approximate solutions (σ = 1) obtained using the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for the fractional-order initial value problem in Example 1.
t | ISBF[*] | IEBF[*] | ITBF[*] | Exact | Error (ISBF[*]) | Error (IEBF[*]) | Error (ITBF[*])
0.00 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000
0.10 | −0.099756 | −0.089000 | −0.092000 | −0.090000 | 0.000024 | 0.001000 | 0.002000
0.20 | −0.169535 | −0.156100 | −0.163305 | −0.160000 | 0.000046 | 0.003900 | 0.003305
0.30 | −0.209336 | −0.201490 | −0.213983 | −0.210000 | 0.000064 | 0.008510 | 0.003983
0.40 | −0.249155 | −0.225341 | −0.244094 | −0.240000 | 0.000845 | 0.014659 | 0.004094
0.50 | −0.248991 | −0.227807 | −0.253695 | −0.250000 | 0.001009 | 0.022193 | 0.003695
0.60 | −0.238843 | −0.209026 | −0.242835 | −0.240000 | 0.001157 | 0.030974 | 0.002835
0.70 | −0.208710 | −0.169124 | −0.211559 | −0.210000 | 0.001290 | 0.040876 | 0.001559
0.80 | −0.158588 | −0.108211 | −0.159907 | −0.160000 | 0.001412 | 0.051789 | 0.000093
0.90 | −0.088479 | −0.026390 | −0.087916 | −0.090000 | 0.001521 | 0.063610 | 0.002084
1.00 | 0.001620 | 0.076249 | 0.004381 | 0.000000 | 0.001620 | 0.076249 | 0.004381
Table 5. Performance metrics of the ISBF[*], IEBF[*], and ITBF[*] implicit schemes for different fractional orders σ in Example 1.
Implicit two-stage method ISBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0091 | 0.065101 | 118.7 | 730
0.30 | 0.0025 | 0.098593 | 120.1 | 702
0.50 | 0.0015 | 0.017474 | 119.6 | 711
0.70 | 0.0020 | 0.053249 | 120.3 | 706
0.90 | 0.0020 | 0.007702 | 119.9 | 678
Backward Euler method IEBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0044 | 11.15786 | 145.3 | 790
0.30 | 0.0018 | 4.723383 | 143.8 | 786
0.50 | 0.0017 | 1.833645 | 142.0 | 794
0.70 | 0.0017 | 0.615827 | 143.2 | 798
0.90 | 0.0018 | 0.168974 | 141.1 | 790
Implicit trapezoidal method ITBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0048 | 0.463470 | 160.2 | 898
0.30 | 0.0020 | 0.304715 | 158.6 | 892
0.50 | 0.0020 | 0.320305 | 156.7 | 894
0.70 | 0.0019 | 0.252672 | 155.8 | 854
0.90 | 0.0019 | 0.088109 | 153.2 | 900
Table 6. Comparison of maximum error, CPU time, and memory usage for different step sizes h using the implicit schemes ISBF[*], IEBF[*], and ITBF[*]. Entries are Max-Err / CPU (s) / Mem (KB).
Scheme | h = 0.1 | h = 0.01 | h = 0.001
ISBF[*] | 0.0596/0.0113/113.34 | 0.086/0.102/201.1 | 0.00186/1.02/395.1
IEBF[*] | 4.3568/0.0102/134.78 | 0.435/0.098/175.0 | 0.0435/0.95/195.3
ITBF[*] | 117.13/0.0147/126.48 | 11.71/0.12/166.6 | 1.17/1.10/396.8
Table 7. Performance metrics of the ANN-accelerated ISBF[*] scheme (ISBF–AN[*]) for Example 1.
Scheme | Iterations | MSE | CPU Time (s) | Memory (KB) | Gradient | μ
ISBF–AN[*] | 3 | 4.178 × 10^−25 | 3.4354 | 130.48 | 1.23 × 10^−19 | 1.08 × 10^−25
Table 8. Summary of the performance of hybrid and classical methods for the fractional-order initial value problem in Example 1.
σ | MinError | MaxError | Fun-C | Mem (KB) | CPU_H (s) | CPU_C (s) | Eff (%) | Cons (%)
0.1 | 1.13 × 10^−17 | 0.16 × 10^−5 | 890 | 130.0082 | 0.4906 | 0.0023 | 47.34 | 100
0.3 | 3.65 × 10^−16 | 2.08 × 10^−6 | 876 | 143.0614 | 0.5062 | 0.0019 | 48.76 | 100
0.5 | 3.35 × 10^−17 | 0.78 × 10^−8 | 884 | 134.0082 | 0.4945 | 0.0016 | 56.87 | 100
0.7 | 5.98 × 10^−19 | 1.98 × 10^−10 | 900 | 154.0614 | 0.5853 | 0.0021 | 78.87 | 100
0.9 | 1.16 × 10^−26 | 4.17 × 10^−13 | 918 | 127.0082 | 0.4997 | 0.0020 | 98.56 | 100
Table 9. Relative improvement (%) of the proposed method over baseline schemes (IEBF and ITBF) for different fractional orders σ.
σ | CPU↑ IEBF (%) | CPU↑ ITBF (%) | Acc↑ IEBF (%) | Acc↑ ITBF (%)
0.1 | 38.4 | 33.2 | 49.8 | 44.1
0.3 | 41.5 | 36.8 | 52.6 | 47.3
0.5 | 44.7 | 39.6 | 57.4 | 51.8
0.7 | 47.3 | 42.5 | 60.8 | 55.5
0.9 | 50.1 | 46.0 | 63.2 | 58.0
Table 10. Comparison between exact and approximate solutions (σ = 1) obtained using the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for the fractional-order initial value problem in Example 2.
t | ISBF[*] | IEBF[*] | ITBF[*] | Exact | Error (ISBF[*]) | Error (IEBF[*]) | Error (ITBF[*])
0.00 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000
0.10 | 0.002425 | 0.004850 | 0.002425 | 0.001710 | 0.000715 | 0.003140 | 0.000715
0.20 | 0.012452 | 0.020061 | 0.012455 | 0.011520 | 0.000932 | 0.008541 | 0.000935
0.30 | 0.032881 | 0.045774 | 0.032911 | 0.032130 | 0.000751 | 0.013644 | 0.000781
0.40 | 0.061712 | 0.077942 | 0.061817 | 0.061440 | 0.000272 | 0.016502 | 0.000377
0.50 | 0.093337 | 0.109464 | 0.093577 | 0.093750 | 0.000413 | 0.015714 | 0.000173
0.60 | 0.119752 | 0.131328 | 0.120139 | 0.120960 | 0.001208 | 0.010368 | 0.000821
0.70 | 0.131753 | 0.133843 | 0.132211 | 0.133770 | 0.002017 | 0.000073 | 0.001559
0.80 | 0.120151 | 0.107962 | 0.120521 | 0.122880 | 0.002729 | 0.014918 | 0.002359
0.90 | 0.076963 | 0.046689 | 0.077116 | 0.080190 | 0.003227 | 0.033501 | 0.003074
1.00 | −0.003389 | −0.053529 | −0.003344 | 0.000000 | 0.003389 | 0.053529 | 0.003344
Table 11. Performance metrics of the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for different fractional orders σ in Example 2.
Implicit two-stage method ISBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0119 | 0.068674 | 118.6 | 704
0.30 | 0.0052 | 0.096742 | 119.8 | 738
0.50 | 0.0144 | 0.002194 | 120.1 | 734
0.70 | 0.0054 | 0.001054 | 121.3 | 708
0.90 | 0.0155 | 0.000632 | 119.9 | 701
Backward Euler method IEBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0121 | 4.626412 | 142.1 | 1294
0.30 | 0.0070 | 2.110489 | 143.6 | 1192
0.50 | 0.0077 | 0.878357 | 139.9 | 1266
0.70 | 0.0111 | 0.303972 | 144.7 | 1916
0.90 | 0.0145 | 0.136539 | 146.5 | 2530
Implicit trapezoidal method ITBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0136 | 117.4788 | 157.3 | 1820
0.30 | 0.0054 | 0.434462 | 157.1 | 900
0.50 | 0.0053 | 0.303243 | 159.2 | 864
0.70 | 0.0050 | 0.218418 | 160.5 | 802
0.90 | 0.0050 | 0.064821 | 161.2 | 790
Table 12. Comparison of maximum error, CPU time, and memory usage for different step sizes h using the implicit schemes ISBF[*], IEBF[*], and ITBF[*]. Entries are Max-Err / CPU (s) / Mem (KB).
Scheme | h = 0.05 | h = 0.01 | h = 0.001
ISBF[*] | 0.0136/0.006/203.1 | 0.081/0.102/205.1 | 0.00346/1.02/309.3
IEBF[*] | 2.178/0.005/234.8 | 0.435/0.098/235.0 | 0.0435/0.95/335.3
ITBF[*] | 58.56/0.007/243.9 | 11.71/0.12/326.6 | 1.17/1.10/426.8
Table 13. Performance metrics of the ANN-accelerated ISBF[*] scheme (ISBF–AN[*]) for Example 2.
Scheme | Iterations | MSE | CPU Time (s) | Memory (KB) | Gradient | μ
ISBF–AN[*] | 3 | 4.178 × 10^−17 | 3.4354 | 137.08 | 1.23 × 10^−19 | 1.08 × 10^−10
Table 14. Summary of the performance of hybrid and classical methods for the fractional-order initial value problem in Example 2 (–: value not given in the source).
σ | MinError | MaxError | Fun-C | Mem (KB) | CPU_H (s) | CPU_C (s) | Eff (%) | Consistency (%)
0.1 | – | 0.16 × 10^−3 | 882 | 124.2552 | 0.5806 | 0.0049 | 98.11 | 100
0.3 | – | 0.05 × 10^−5 | 908 | 122.0614 | 0.5233 | 0.0052 | 99.346 | 100
0.5 | – | 3.59 × 10^−7 | 842 | 135.0082 | 0.5236 | 0.0050 | 100.72 | 100
0.7 | 0.17 × 10^−25 | 6.15 × 10^−9 | 860 | 156.0614 | 0.5200 | 0.0053 | 97.479 | 100
0.9 | – | 0.08 × 10^−10 | 862 | 137.1229 | 0.5316 | 0.0048 | 100.09 | 100
Table 15. Relative improvement (%) of the proposed method over baseline schemes (IEBF and ITBF) for different fractional orders σ.
σ | CPU↑ IEBF (%) | CPU↑ ITBF (%) | Acc↑ IEBF (%) | Acc↑ ITBF (%)
0.1 | 36.2 | 31.8 | 47.5 | 42.3
0.3 | 39.6 | 34.9 | 51.2 | 45.7
0.5 | 43.1 | 38.2 | 56.1 | 50.6
0.7 | 46.5 | 41.3 | 59.5 | 54.2
0.9 | 49.8 | 45.7 | 62.8 | 57.6
Table 16. Comparison between exact and approximate solutions (σ = 1) obtained using the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for the fractional-order initial value problem in Example 3.
t | ISBF[*] | IEBF[*] | ITBF[*] | Exact | Error (ISBF[*]) | Error (IEBF[*]) | Error (ITBF[*])
0.00 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000
0.10 | 0.224787 | 0.235239 | 0.228233 | 0.224905 | 0.000118 | 0.010334 | 0.003328
0.40 | 0.848013 | 0.892098 | 0.866427 | 0.852083 | 0.004070 | 0.040015 | 0.014344
0.70 | 1.017530 | 0.948585 | 1.033569 | 1.030002 | 0.002472 | 0.081417 | 0.003567
1.00 | 0.254896 | 0.035689 | 0.246370 | 0.250000 | 0.000896 | 0.214311 | 0.003630
Table 17. Performance metrics of the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for different fractional orders σ in Example 3.
Implicit two-stage method ISBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0113 | 0.059568 | 195.7 | 704
0.30 | 0.0050 | 0.016441 | 196.2 | 738
0.50 | 0.0047 | 0.004841 | 193.4 | 734
0.70 | 0.0046 | 0.002388 | 197.5 | 768
0.90 | 0.0049 | 0.00099 | 200.1 | 610
Backward Euler method IEBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0102 | 4.356812 | 221.0 | 1543
0.30 | 0.0073 | 1.740380 | 222.7 | 752
0.50 | 0.0075 | 1.129530 | 223.4 | 787
0.70 | 0.0109 | 1.020192 | 224.9 | 853
0.90 | 0.0141 | 0.981659 | 225.0 | 880
Implicit trapezoidal method ITBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0147 | 117.1319 | 235.4 | 1980
0.30 | 0.0051 | 1.317830 | 236.6 | 943
0.50 | 0.0051 | 1.115458 | 237.0 | 804
0.70 | 0.0047 | 1.007126 | 237.9 | 782
0.90 | 0.0046 | 0.961447 | 238.3 | 754
Table 18. Comparison of maximum error, CPU time, and memory usage for different step sizes h using the implicit schemes ISBF[*], IEBF[*], and ITBF[*]. Entries are Max-Err / CPU (s) / Mem (KB).
Scheme | h = 0.01 | h = 0.001 | h = 0.0001
ISBF[*] | 0.012/0.102/143.1 | 0.0076/1.02/135.5 | 0.00186/10.2/193.9
IEBF[*] | 0.135/0.048/162.0 | 0.0413/0.97/175.3 | 0.00115/9.5/236.01
ITBF[*] | 10.71/0.12/176.6 | 1.17/1.10/204.8 | 0.116/11.05/237.9
Table 19. Performance metrics of the ANN-guided ISBF[*] hybrid scheme (ISBF–AN[*]) trained with adaptive learning for improved efficiency.
Scheme | Iterations | MSE | CPU Time (s) | Memory (KB) | Gradient | μ
ISBF–AN[*] | 2 | 6.482 × 10^−20 | 3.1248 | 226.74 | 8.64 × 10^−21 | 7.13 × 10^−20
Table 20. Summary of the performance of hybrid and classical methods for the FOIVPs in Example 3.
σ | MinError | MaxError | Fun-C | Mem (KB) | CPU_H (s) | CPU_C (s) | Eff (%) | Consistency (%)
0.1 | 1.45 × 10^−11 | 8.95 × 10^−2 | 974 | 154.3599 | 0.5287 | 0.0068 | 77.18 | 100
0.3 | 7.56 × 10^−14 | 1.98 × 10^−3 | 882 | 165.0614 | 0.5220 | 0.0049 | 88.586 | 100
0.5 | 3.74 × 10^−19 | 0.28 × 10^−7 | 888 | 187.0082 | 0.4964 | 0.0043 | 87.390 | 100
0.7 | 0.18 × 10^−23 | 4.73 × 10^−9 | 872 | 134.0205 | 0.4964 | 0.0046 | 91.736 | 100
0.9 | 0.17 × 10^−27 | 3.98 × 10^−11 | 860 | 127.0164 | 0.5087 | 0.0050 | 96.145 | 100
Table 21. Performance comparison (%) for different fractional orders σ.
σ | CPU↑ IEBF (%) | CPU↑ ITBF (%) | Acc↑ IEBF (%) | Acc↑ ITBF (%)
0.1 | 37.5 | 32.7 | 48.3 | 43.5
0.3 | 40.9 | 36.1 | 52.9 | 47.9
0.5 | 44.3 | 39.0 | 57.2 | 51.6
0.7 | 47.1 | 42.1 | 61.0 | 55.1
0.9 | 50.6 | 45.8 | 63.9 | 58.4
Table 22. Comparison between exact and approximate solutions (σ = 1) obtained using the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for the fractional-order initial value problem in Example 4.
t | ISBF[*] | IEBF[*] | ITBF[*] | Exact | Error (ISBF[*]) | Error (IEBF[*]) | Error (ITBF[*])
0.00 | 0.500000 | 0.500000 | 0.500000 | 0.500000 | 0.000000 | 0.000000 | 0.000000
0.10 | 0.462524 | 0.471000 | 0.461750 | 0.462419 | 0.000095 | 0.008581 | 0.000669
0.40 | 0.495172 | 0.537099 | 0.496246 | 0.495160 | 0.000012 | 0.041939 | 0.001086
0.70 | 0.738247 | 0.822195 | 0.745415 | 0.738293 | 0.000954 | 0.083902 | 0.007122
1.00 | 1.183134 | 1.315720 | 1.200073 | 1.183940 | 0.000094 | 0.131780 | 0.016133
Table 23. Comparative performance metrics of the implicit schemes ISBF[*], IEBF[*], and ITBF[*] for various fractional orders σ in Example 4.
Implicit two-stage method ISBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0108 | 0.058742 | 194.9 | 698
0.30 | 0.0052 | 0.015963 | 195.8 | 742
0.50 | 0.0049 | 0.004612 | 193.6 | 730
0.70 | 0.0045 | 0.002276 | 197.1 | 762
0.90 | 0.0047 | 0.000954 | 199.8 | 606
Backward Euler method IEBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0100 | 4.258417 | 220.6 | 1536
0.30 | 0.0075 | 1.701256 | 222.3 | 749
0.50 | 0.0072 | 1.102834 | 223.2 | 781
0.70 | 0.0106 | 0.995684 | 224.7 | 845
0.90 | 0.0139 | 0.967421 | 224.9 | 875
Implicit trapezoidal method ITBF[*]:
σ | CPU (s) | Max Error | Memory (KB) | Fun Calls
0.10 | 0.0142 | 115.7834 | 234.8 | 1972
0.30 | 0.0053 | 1.281964 | 235.9 | 939
0.50 | 0.0052 | 1.095672 | 236.8 | 798
0.70 | 0.0048 | 0.986345 | 237.5 | 776
0.90 | 0.0045 | 0.951828 | 238.0 | 749
Table 24. Performance comparison of implicit schemes ISBF[*], IEBF[*], and ITBF[*] for different step sizes h under varying memory conditions. Entries are Max-Err / CPU (s) / Mem (KB).
Scheme | h = 0.01 | h = 0.001 | h = 0.0001
ISBF[*] | 0.010/0.095/148.2 | 0.0069/0.96/141.8 | 0.00158/9.85/189.4
IEBF[*] | 0.121/0.052/159.7 | 0.0392/0.92/169.9 | 0.00103/9.11/229.3
ITBF[*] | 9.85/0.11/173.8 | 1.08/1.05/201.2 | 0.108/10.72/232.4
Table 25. Performance metrics of the ANN-guided ISBF[*] hybrid scheme (ISBF–AN[*]) trained with adaptive learning for improved efficiency.
Scheme | Iterations | MSE | CPU Time (s) | Memory (KB) | Gradient | μ
ISBF–AN[*] | 3 | 1.103 × 10^−17 | 2.0017 | 226.74 | 8.34 × 10^−18 | 0.14 × 10^−19
Table 26. Summary of the performance of hybrid and classical methods for the FOIVPs in Example 4.
σ | MinError | MaxError | Fun-C | Mem (KB) | CPU_H (s) | CPU_C (s) | Eff (%) | Consistency (%)
0.1 | 1.21 × 10^−11 | 7.85 × 10^−2 | 960 | 152.4475 | 0.5172 | 0.0065 | 78.23 | 100
0.3 | 6.89 × 10^−14 | 1.72 × 10^−3 | 875 | 163.8291 | 0.5106 | 0.0048 | 89.47 | 100
0.5 | 3.10 × 10^−19 | 2.64 × 10^−8 | 884 | 185.7763 | 0.4889 | 0.0042 | 88.12 | 100
0.7 | 0.16 × 10^−23 | 3.95 × 10^−9 | 870 | 133.1058 | 0.4897 | 0.0044 | 92.04 | 100
0.9 | 0.14 × 10^−27 | 3.15 × 10^−11 | 858 | 125.9943 | 0.5010 | 0.0048 | 96.77 | 100
Table 27. Improvement rates (%) for various σ showing robustness of the proposed solver.
σ | CPU↑ IEBF (%) | CPU↑ ITBF (%) | Acc↑ IEBF (%) | Acc↑ ITBF (%)
0.1 | 39.2 | 34.5 | 50.5 | 45.0
0.3 | 42.0 | 37.9 | 54.1 | 48.3
0.5 | 45.6 | 40.8 | 58.2 | 52.7
0.7 | 48.8 | 43.9 | 61.5 | 56.3
0.9 | 51.9 | 47.6 | 64.8 | 59.5