Article

Modeling and Neural Network Approximation of Asymptotic Behavior for Delta Fractional Difference Equations with Mittag-Leffler Kernels

by Pshtiwan Othman Mohammed 1,2,3,*, Muteb R. Alharthi 4, Majeed Ahmad Yousif 5, Alina Alb Lupas 6,* and Shrooq Mohammed Azzo 7
1 Department of Mathematics, College of Education, University of Sulaimani, Sulaymaniyah 46001, Iraq
2 Research Center, University of Halabja, Halabja 46018, Iraq
3 Research and Development Center, University of Sulaimani, Sulaymaniyah 46001, Iraq
4 Department of Mathematics and Statistics, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
5 Department of Mathematics, College of Education, University of Zakho, Zakho 42002, Iraq
6 Department of Mathematics and Computer Science, University of Oradea, 410087 Oradea, Romania
7 Department of Artificial Intelligence, College of Computer Science and Mathematics, University of Mosul, Mosul 41001, Iraq
* Authors to whom correspondence should be addressed.
Fractal Fract. 2025, 9(7), 452; https://doi.org/10.3390/fractalfract9070452
Submission received: 7 June 2025 / Revised: 7 July 2025 / Accepted: 8 July 2025 / Published: 9 July 2025

Abstract

The asymptotic behavior of discrete Riemann–Liouville fractional difference equations is a fundamental problem with important mathematical and physical implications. In this paper, we investigate a particular case of such an equation of order 0.5 subject to a given initial condition. We establish the existence of a unique solution expressed via a Mittag-Leffler-type function. The delta-asymptotic behavior of the solution is examined, and its convergence properties are rigorously analyzed. Numerical experiments are conducted to illustrate the qualitative features of the solution. Furthermore, a neural network-based approximation is employed to validate and compare with the analytical results, confirming the accuracy, stability, and sensitivity of the proposed method.

1. Introduction

In recent years, the synergy between fractional calculus and machine learning has opened new research avenues for modeling and predicting complex dynamics in systems with memory and nonlocal behavior. Specifically, machine learning algorithms have been increasingly integrated with fractional-order models to improve prediction accuracy, system identification, and control under uncertainty [1,2]. In real-world applications, the behavior of discrete fractional systems is often not stable but is affected by a variety of factors [3,4,5,6]. Due to their extensive applications, discrete fractional operators such as those of Riemann–Liouville [7,8] and Liouville–Caputo [9,10] type have attracted considerable interest from the research community. Numerous studies have explored both numerical methods and theoretical analysis for discrete fractional problems (DFPs) and various fractional difference systems; see [11,12,13,14,15]. These systems of fractional differential and difference equations can be modeled using Cauchy-type problems; see, for example, [16,17,18,19].
In fractional calculus theory, Mittag-Leffler (ML) functions are fundamental for introducing fractional operators and for studying dynamical systems; see [20,21,22,23]. ML functions extend the traditional framework of discrete fractional calculus by introducing special function parameters that control the decay rate of the memory kernels. These special functions allow for more flexible modeling of discrete fractional operators with long-range memory effects, capturing wider ranges of behaviors beyond those described by classical fractional operators (Riemann–Liouville and Liouville–Caputo), such as Atangana–Baleanu and other generalized types of discrete fractional operators [24,25,26,27].
Recently, many studies have actively analyzed the existence and uniqueness of DFP solutions. To the best of our knowledge, most existence and uniqueness results for DFPs have focused on ML functions; see [28,29,30,31,32]. Furthermore, for DFPs of nabla type with various boundary and initial conditions [33,34,35,36], comparison theorems, sufficient and necessary conditions for stability analyses, and the behavior of solutions have been derived [37,38,39,40].
As there are few papers studying the asymptotic behavior of solutions for DFPs of delta type, our main effort in this paper is to characterize the behavior of solutions for the following DFP of delta type:
$${}^{\mathrm{RL}}\Delta^{\alpha}_{a,h}x(t)=c\,x(t+\alpha),\quad t\in\mathbb{N}_{a,h},$$
subject to a positive initial condition, as follows:
$$x(a)=A>0,$$
for $c\neq h^{-\alpha}$.
The motivation behind this study lies in exploring the dynamic behavior of discrete fractional systems with Riemann–Liouville-type operators, particularly of order 0.5, due to their rich mathematical structure and relevance in modeling memory effects. The novelty of this work includes the derivation of a discrete Mittag-Leffler-type function and the analysis of the asymptotic behavior and convergence of the solution of DFP (1). Furthermore, we incorporate neural network approximations to benchmark the analytical results, highlighting the effectiveness and accuracy of the proposed approach.
The rest of our study is organized as follows: Section 2 presents the main definitions of discrete operators and Laplace transformations in the delta and nabla cases. In Section 3, the new special function (an ML-type function) is introduced, the asymptotic behavior and convergence of solutions are investigated for the half-order fractional model, and, finally, the formula of the solution is constructed. Section 4 presents two illustrative examples that demonstrate the convergence, stability, and instability behavior of the solution. Additionally, the results are compared with neural network approximations to validate the accuracy of the proposed analytical method. Finally, the conclusion is outlined in the last section.

2. Preliminaries

Let $\alpha\in I_{n}:=(n-1,n)$ with $\mathbb{N}_{a,h}:=\{a,\,a+h,\,\dots\}$ and $n\in\mathbb{N}_{1}=\{1,2,\dots\}$. Then, the delta fractional sum is defined by [7]
$$\Delta^{-\alpha}_{a,h}y(t):=\sum_{r=\frac{a}{h}}^{\frac{t}{h}-\alpha}\widehat{H}_{\alpha-1}\bigl(t,\,rh+h\bigr)\,y(rh),\quad t\in\mathbb{N}_{a+\alpha h,\,h},$$
and the delta RL-difference, for $\alpha\in I_{1}$, is defined in [9] by
$${}^{\mathrm{RL}}\Delta^{\alpha}_{a,h}y(t):=\sum_{r=\frac{a}{h}}^{\frac{t}{h}+\alpha}\widehat{H}_{-\alpha-1}\bigl(t,\,rh+h\bigr)\,y(rh),\quad t\in\mathbb{N}_{a+(1-\alpha)h,\,h},$$
where
$$\widehat{H}_{-\alpha}(x,rh):=\frac{(x-rh)^{\underline{-\alpha}}_{h}}{\Gamma(1-\alpha)}=\frac{\Gamma\left(\frac{x}{h}-r+1\right)}{\Gamma(1-\alpha)\,\Gamma\left(\frac{x}{h}-r+\alpha+1\right)},$$
for $x\in\mathbb{N}_{a+(1-\alpha)h,\,h}$, and $y$ is defined on $\mathbb{N}_{a,h}$.
We recall the definitions of the $\Delta$-Laplace and $\nabla$-Laplace transformations, which are given in Theorem 2.2 and Theorem 3.65 of [7], respectively.
Lemma 1. 
If $x:\mathbb{N}_{a,h}\to\mathbb{R}$, then the $\Delta$-Laplace transform is given by
$$\mathcal{L}^{\Delta}_{a,h}\{x\}(z)=h\sum_{r=0}^{\infty}\frac{x(a+rh)}{(1+zh)^{r+1}},$$
s.t. the series converges.
Furthermore, if $x:\mathbb{N}_{a+h,h}\to\mathbb{R}$, then the $\nabla$-Laplace transform is given by
$$\mathcal{L}^{\nabla}_{a,h}\{x\}(z)=h\sum_{r=1}^{\infty}(1-zh)^{r-1}\,x(a+rh),$$
s.t. the series converges for the given values of $z$.
The convergence of the ∇-Laplace transformation can be stated in the following lemma.
Lemma 2 
(see [7]). If the -Laplace of f : N a + h , h R is convergent to | 1 z h | , for some r > 0 , then we have
L a , h N x ( z ) = L a , h N x ( z ) 1 z h h x ( a + h ) 1 z h ,
s.t. | 1 z h | < r .
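As a quick numerical sanity check, the shifting property in Lemma 2 can be tested on truncated series. The sketch below assumes the $\nabla$-Laplace series form $h\sum_{r\ge1}(1-zh)^{r-1}x(a+rh)$ and the base-point convention $\mathcal{L}^{\nabla}_{a+h,h}\{x\}(z)=\bigl(\mathcal{L}^{\nabla}_{a,h}\{x\}(z)-h\,x(a+h)\bigr)/(1-zh)$; both are our reading of the lemma, not an authoritative statement.

```python
def nabla_laplace(x, a, h, z, terms=300):
    # Truncated nabla-Laplace transform: h * sum_{r>=1} (1 - z h)^(r-1) x(a + r h)
    return h * sum((1 - z * h) ** (r - 1) * x(a + r * h) for r in range(1, terms + 1))

a, h, z = 0, 1.0, 0.3        # |1 - z h| = 0.7 < 1, so the series converges
x = lambda t: 0.5 ** t        # a geometrically decaying test sequence

# Shifting property: L_{a+h}{x}(z) = ( L_{a}{x}(z) - h x(a+h) ) / (1 - z h)
lhs = nabla_laplace(x, a + h, h, z)
rhs = (nabla_laplace(x, a, h, z) - h * x(a + h)) / (1 - z * h)
print(abs(lhs - rhs))  # a value near zero, up to truncation and rounding
```

The two sides differ only by the truncated tail of the series, which is negligible here.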
Next, the following lemma gives the discrete Laplace transform of the solution of DFP (1).
Lemma 3. 
For $\alpha\in I_{1}$, DFP (1) satisfies
$$\mathcal{L}^{\nabla}_{a,h}\{x\}(z)=\frac{hA\left(h^{-\alpha}-c\right)}{z^{\alpha}-c}.$$
Proof. 
According to Lemma 2.3 in [40], DFP (1) can be recast as
$$\begin{aligned}x(t)&=c\,\Delta^{-\alpha}_{a,h}x(t+\alpha)+(1-c)\,\widehat{H}_{\alpha}\bigl(t+\alpha,\sigma(a)\bigr)x(a)\\&=c\sum_{r=\frac{a}{h}}^{\frac{t}{h}}\widehat{H}_{\alpha-1}\bigl(t+\alpha,\,rh+h\bigr)x(rh)+(1-c)\,\widehat{H}_{\alpha}\bigl(t+\alpha,\sigma(a)\bigr)x(a)\\&=c\sum_{r=\frac{a}{h}}^{\frac{t}{h}}H_{\alpha-1}\bigl(t,\,rh-h\bigr)x(rh)+(1-c)\,\widehat{H}_{\alpha}\bigl(t+\alpha,\sigma(a)\bigr)x(a)\\&=c\,\nabla^{-\alpha}_{a-1,h}x(t)+(1-c)\,H_{\alpha}\bigl(t,\,a-1\bigr)x(a),\end{aligned}$$
where $\nabla^{-\alpha}_{a,h}x(t)$ is the nabla fractional sum, defined in [4] by
$$\nabla^{-\alpha}_{a,h}x(t)=\sum_{r=\frac{a}{h}+1}^{\frac{t}{h}}H_{\alpha-1}\bigl(t,\,rh-h\bigr)\,x(rh),\quad t\in\mathbb{N}_{a+h,\,h},$$
where
$$H_{\alpha}(x,rh):=\frac{(x-rh)^{\overline{\alpha}}_{h}}{\Gamma(\alpha+1)}=\frac{\Gamma\left(\frac{x-rh}{h}+\alpha\right)}{\Gamma(\alpha+1)\,\Gamma\left(\frac{x-rh}{h}\right)}.$$
Now, we take the $\nabla$-Laplace transform of (8) and, using
$$\mathcal{L}^{\nabla}_{a,h}\left\{\nabla^{-\alpha}_{a,h}x(t)\right\}(z)=\frac{\mathcal{L}^{\nabla}_{a,h}\{x\}(z)}{z^{\alpha}},\qquad\mathcal{L}^{\nabla}_{a,h}\bigl\{H_{\alpha}(t,a)\bigr\}(z)=\frac{1}{z^{\alpha+1}},$$
we obtain
$$z^{\alpha}\,\mathcal{L}^{\nabla}_{a,h}\{x\}(z)-c\,\mathcal{L}^{\nabla}_{a,h}\{x\}(z)=\frac{Ah\left(z^{\alpha}-h^{-\alpha}\right)}{1-zh}.$$
Thus, by considering Lemma 2, we have
$$\left(z^{\alpha}-c\right)\left[\frac{\mathcal{L}^{\nabla}_{a-1,h}\{x\}(z)}{1-zh}-\frac{Ah}{1-zh}\right]+\frac{Ah\left(z^{\alpha}-h^{-\alpha}\right)}{1-zh}=0,$$
which rearranges to the desired result. □

3. Asymptotic Behavior of $\frac{1}{2}$-Order

A special function used in our study is the following ML function:
$$E^{h}_{c,\alpha}(t,a+1)=\sum_{s=0}^{\infty}c^{s}\,\widehat{H}_{\alpha s+\alpha-1}\bigl(t+\alpha s+\alpha,\,a+1\bigr),$$
for $t\in\mathbb{N}_{a,h}$, where
$$\widehat{H}_{\mu k+\mu-1}(t,a)=\frac{\Gamma\left(\frac{t-a}{h}+1\right)}{\Gamma(\mu k+\mu)\,\Gamma\left(\frac{t-a}{h}+2-\mu k-\mu\right)},$$
according to [4,7]. In addition, it is clear that $E^{h}_{c,\alpha}(t,a+1)$ converges for $|c|<h^{-\alpha}$.
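The truncated series above is straightforward to evaluate numerically. The sketch below encodes the kernel $\widehat{H}$ via gamma functions (returning zero at poles of the reciprocal gamma) and checks that two truncation levels agree when $|c|<h^{-\alpha}$; the parameter values and truncation lengths are illustrative choices of ours, not taken from the paper.

```python
import math

def rgamma(x):
    # Reciprocal gamma 1/Gamma(x); equals 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == round(x):
        return 0.0
    return 1.0 / math.gamma(x)

def H_hat(beta, t, a, h):
    # H^_beta(t, a) = Gamma((t-a)/h + 1) / ( Gamma(beta + 1) * Gamma((t-a)/h + 1 - beta) )
    n = (t - a) / h
    return math.gamma(n + 1) * rgamma(beta + 1) * rgamma(n + 1 - beta)

def ml_discrete(c, alpha, t, a, h, terms=60):
    # Truncated series E^h_{c,alpha}(t, a+1) = sum_s c^s H^_{alpha s + alpha - 1}(t + alpha s + alpha, a+1)
    return sum(c ** s * H_hat(alpha * s + alpha - 1, t + alpha * s + alpha, a + 1, h)
               for s in range(terms))

# |c| < h^(-alpha) is the stated convergence condition; c = 0.3 with h = 1 satisfies it
val60 = ml_discrete(0.3, 0.5, 5.0, 0.0, 1.0, terms=60)
val90 = ml_discrete(0.3, 0.5, 5.0, 0.0, 1.0, terms=90)
print(val60, abs(val90 - val60))  # the two truncations agree, so the tail is negligible
```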
Lemma 4. 
The unique solution of DFP (1) with the initial condition $x(a)=\frac{h^{\alpha-1}}{1-c\,h^{\alpha}}$ is $E^{h}_{c,\alpha}(t,a+1)$.
Proof. 
The proof is omitted as it is similar to Lemma 3.1 in [40]. □
Next, for $\nu=\frac12$, we develop our results on the asymptotic behavior of the solution $x(t)$ of DFP (1) and (2). This allows us to obtain some necessary properties of $x(t)$. For example, for large $t$, we will obtain the following:
  • A sign condition and monotonicity result for $x(t)$ whenever $c\in\left(\sqrt{\frac{2}{h}},\,\infty\right)$;
  • The asymptotic estimate of $x(t)$ whenever $c\in\left(0,\,\frac{1}{\sqrt{h}}\right)$.
Now, we consider DFP (1) and (2) with $\nu=0.5$, as follows:
$${}^{\mathrm{RL}}\Delta^{0.5}_{a,h}g(t)=k\,g\left(t+\tfrac12\right),\quad t\in\mathbb{N}_{a,h},$$
$$g(a)=A>0,$$
such that $k\neq\frac{1}{\sqrt{h}}$.
For $\nu=0.5$ in DFP (10) and (11), we can deduce the following uniqueness result:
Theorem 1. 
For $0\le k\le\sqrt{\frac{2}{h}}$ (with $k\neq\frac{1}{\sqrt{h}}$), the unique solution $g(t)$ of DFP (10) and (11) with $\nu=0.5$ satisfies
$$g(a+Lh)=\frac{A\left(k-h^{-0.5}\right)h}{hk^{2}-1}\cdot\frac{(-1)^{L}\,h^{-0.5}}{\left(hk^{2}-1\right)^{L}}\left[2h^{0.5}k-R_{L}\bigl[(hk^{2}-1)\bigr]\right],$$
where $\lim_{L\to\infty}R_{L}[(hk^{2}-1)]=0$.
Proof. 
If we set $1-zh=y$ and $\nu=\frac12$, then we see that
$$\begin{aligned}\frac{1}{k-z^{\nu}}&=\frac{1}{k-\sqrt{z}}=\frac{k+\sqrt{z}}{k^{2}-z}=\frac{k+\sqrt{\frac{1-y}{h}}}{k^{2}-\frac{1-y}{h}}\\&=\left[k+h^{-\frac12}(1-y)^{\frac12}\right]\times\frac{h}{hk^{2}-1}\left(1+\frac{y}{hk^{2}-1}\right)^{-1}\\&=\frac{h}{hk^{2}-1}\left[k+h^{-\frac12}\left(1-\frac12 y+\frac{\frac12\left(\frac12-1\right)}{2!}y^{2}-\cdots+(-1)^{L}\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+1\right)}{L!}y^{L}+\cdots\right)\right]\\&\quad\times\left[1-\frac{y}{hk^{2}-1}+\frac{y^{2}}{\left(hk^{2}-1\right)^{2}}-\cdots+\frac{(-1)^{L}y^{L}}{\left(hk^{2}-1\right)^{L}}+\cdots\right]\\&=\frac{h}{hk^{2}-1}\sum_{L=0}^{\infty}k_{L}\,y^{L},\end{aligned}$$
where
$$k_{L}=a_{0}b_{L}+a_{1}b_{L-1}+\cdots+a_{L}b_{0},$$
with
$$a_{j}=\begin{cases}(-1)^{j}\dfrac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-j+1\right)}{j!}\,h^{-\frac12},&j\ge1,\\[2mm]k+h^{-\frac12},&j=0,\end{cases}\qquad b_{j}=\frac{(-1)^{j}}{\left(hk^{2}-1\right)^{j}}.$$
Therefore, $k_{L}$ becomes
$$\begin{aligned}k_{L}&=(-1)^{L}\left[\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+1\right)}{L!}\,h^{-\frac12}+\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+2\right)}{(L-1)!\left(hk^{2}-1\right)}\,h^{-\frac12}+\cdots+\frac{\frac12\,h^{-\frac12}}{\left(hk^{2}-1\right)^{L-1}}+\frac{k+h^{-\frac12}}{\left(hk^{2}-1\right)^{L}}\right]\\&=\frac{(-1)^{L}h^{-\frac12}}{\left(hk^{2}-1\right)^{L}}\left[\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+1\right)}{L!}\left(hk^{2}-1\right)^{L}+\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+2\right)}{(L-1)!}\left(hk^{2}-1\right)^{L-1}+\cdots+\frac12\left(hk^{2}-1\right)+h^{\frac12}\left(k+h^{-\frac12}\right)\right].\end{aligned}$$
One can observe that, for $|hk^{2}-1|\le1$, we can deduce
$$\left[1+\left(hk^{2}-1\right)\right]^{\frac12}=1+\frac12\left(hk^{2}-1\right)+\frac{\frac12\left(\frac12-1\right)}{2!}\left(hk^{2}-1\right)^{2}+\cdots+\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-L+1\right)}{L!}\left(hk^{2}-1\right)^{L}+R_{L}\bigl[(hk^{2}-1)\bigr],$$
where $\lim_{L\to\infty}R_{L}[(hk^{2}-1)]=0$. As a result, by using (13), the identity $\left[1+\left(hk^{2}-1\right)\right]^{\frac12}=h^{\frac12}k$, and $k>0$, we see that
$$k_{L}=\frac{(-1)^{L}h^{-\frac12}}{\left(hk^{2}-1\right)^{L}}\left[h^{\frac12}k-R_{L}\bigl[(hk^{2}-1)\bigr]-1+h^{\frac12}\left(k+h^{-\frac12}\right)\right]=\frac{(-1)^{L}h^{-\frac12}}{\left(hk^{2}-1\right)^{L}}\left[2h^{\frac12}k-R_{L}\bigl[(hk^{2}-1)\bigr]\right].$$
Considering (7), we have
$$\mathcal{L}^{\nabla}_{a,h}\{g\}(z)=h\sum_{L=0}^{\infty}(1-zh)^{L}\,g(a+Lh).$$
From Lemma 3 and (12), we get that
$$\mathcal{L}^{\nabla}_{a,h}\{g\}(z)=hA\left(k-h^{-\frac12}\right)\frac{h}{hk^{2}-1}\sum_{L=0}^{\infty}k_{L}(1-zh)^{L}.$$
It follows from (15) and (16) that
$$h\sum_{L=0}^{\infty}(1-zh)^{L}\,g(a+Lh)=hA\left(k-h^{-\frac12}\right)\frac{h}{hk^{2}-1}\sum_{L=0}^{\infty}k_{L}(1-zh)^{L}.$$
By comparing coefficients and using (14) and (17), we obtain
$$g(a+Lh)=\frac{A\left(k-h^{-\frac12}\right)h}{hk^{2}-1}\cdot\frac{(-1)^{L}\,h^{-\frac12}}{\left(hk^{2}-1\right)^{L}}\left[2h^{\frac12}k-R_{L}\bigl[(hk^{2}-1)\bigr]\right].$$
This ends our proof. □
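The identity between the convolution form of $k_L$ and its closed form with the remainder $R_L$ can be cross-checked numerically. The sketch below follows our reconstruction of $a_j$, $b_j$, and $R_L$ from the derivation above, so the encoded formulas should be read as assumptions rather than the paper's exact notation.

```python
def gbinom(x, j):
    # Generalized binomial coefficient C(x, j) = x (x-1) ... (x-j+1) / j!
    out = 1.0
    for i in range(j):
        out *= (x - i) / (i + 1)
    return out

h, k = 1.0, 1.2                      # any k with 0 < h k^2 - 1 <= 1
q = h * k ** 2 - 1

def a(j):                            # Taylor coefficients of k + h^{-1/2}(1-y)^{1/2}
    return k + h ** -0.5 if j == 0 else (-1) ** j * gbinom(0.5, j) * h ** -0.5

def b(j):                            # Taylor coefficients of (1 + y/(h k^2 - 1))^{-1}
    return (-1) ** j / q ** j

def kL_conv(L):                      # k_L as the Cauchy convolution a_0 b_L + ... + a_L b_0
    return sum(a(j) * b(L - j) for j in range(L + 1))

def kL_closed(L):                    # k_L via the remainder R_L of the binomial series for (1+q)^{1/2}
    R = h ** 0.5 * k - sum(gbinom(0.5, j) * q ** j for j in range(L + 1))
    return (-1) ** L * h ** -0.5 / q ** L * (2 * h ** 0.5 * k - R)

for L in range(1, 10):
    assert abs(kL_conv(L) - kL_closed(L)) < 1e-8
```

The agreement of both evaluations for several $L$ is consistent with the closed form used in the theorem.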
Corollary 1. 
Let $k\in\left(\frac{1}{\sqrt{h}},\,\sqrt{\frac{2}{h}}\right)$. Then the solution $g(t)$ of DFP (10) and (11) satisfies
$$\limsup_{L\to\infty}g(a+Lh)=+\infty$$
and
$$\liminf_{L\to\infty}g(a+Lh)=-\infty.$$
Proof. 
Note that if $k\in\left(\frac{1}{\sqrt{h}},\,\sqrt{\frac{2}{h}}\right)$, then we have $0<hk^{2}-1<1$. Therefore, by considering (18), the desired results can be obtained. □
Corollary 2. 
Suppose that $k\in\left(\frac{1}{\sqrt{h}},\,\sqrt{\frac{2}{h}}\right)$. Then every solution $g(t)$ of DFP (10) and (11) is unstable and oscillatory.
Corollary 3. 
Let $k\in\left(1,\sqrt{2}\right)$. Then every solution $g(t)$ of the DFP
$${}^{\mathrm{RL}}\Delta^{0.5}_{a}g(t)=k\,g\left(t+\tfrac12\right),\quad t\in\mathbb{N}_{a+\frac12},\quad k\neq1,\qquad g(a)=A>0,$$
is unstable and oscillatory.
Next, by considering (14), we can obtain the asymptotic behavior of the ML function $E^{h}_{k,0.5,0.5}(t,a+1)$.
Corollary 4. 
If $k\in\left(0,\,\frac{1}{\sqrt{h}}\right)$, then every solution $g(t)$ of DFP (10) and (11) tends to ∞. In addition,
$$E^{h}_{k,0.5,0.5}\bigl(a+Lh,\,\rho(a)\bigr)=\frac{h^{-0.5}}{\left(1-hk^{2}\right)^{L+1}}\left[2h^{0.5}k-R_{L}\bigl[(hk^{2}-1)\bigr]\right],$$
where $\lim_{L\to\infty}R_{L}[(hk^{2}-1)]=0$.
For $h=1$, we denote
$$E^{h}_{k,\mu}(t,a+1):=E_{k,\mu}(t,a+1)=\sum_{s=0}^{\infty}k^{s}\,H_{\mu s+\mu-1}\bigl(t+\mu s+\mu,\,a+1\bigr),\quad t\in\mathbb{N}_{a},$$
where
$$H_{\mu s+\mu-1}(t,a)=\frac{\Gamma\left(t-a+1\right)}{\Gamma(\mu s+\mu)\,\Gamma\left(t-a+2-\mu s-\mu\right)},$$
according to Definition 2.24 in [7]. (Here the summation index is written as $s$ to avoid a clash with the parameter $k$.)
Corollary 5. 
Let $0<k<1$. Then every solution $g(t)$ of DFP (19) tends to ∞. Moreover,
$$E_{k,0.5,0.5}\bigl(a+L,\,\rho(a)\bigr)=\frac{1}{\left(1-k^{2}\right)^{L+1}}\left[2k-R_{L}\bigl[(k^{2}-1)\bigr]\right],$$
where $\lim_{L\to\infty}R_{L}[(k^{2}-1)]=0$.
If g ( t ) is the unique solution of DFP (10) and (11), then we can deduce the following theorem and corollary.
Theorem 2. 
Assume that $k=\sqrt{\frac{2}{h}}$. Then $g(t)$ satisfies
$$g(a+Lh)=A\left(\sqrt{2}-1\right)(-1)^{L}\left[2\sqrt{2}-R_{L}[1]\right],$$
where $\lim_{L\to\infty}R_{L}[1]=0$.
Proof. 
From (18), when $k=\sqrt{\frac{2}{h}}$ (so that $hk^{2}-1=1$), we obtain the desired result. □
Corollary 6. 
Suppose that $k=\sqrt{\frac{2}{h}}$. Then $g(t)$ satisfies
$$\limsup_{t\to\infty}g(t)=2\left(2-\sqrt{2}\right)A$$
and
$$\liminf_{t\to\infty}g(t)=-2\left(2-\sqrt{2}\right)A.$$
Proof. 
The desired results can be obtained according to (20). □
Remark 1. 
From (18), we know that if k = 0 , then the solution g ( t ) of half-order fractional initial value problems (10) and (11) is asymptotically stable.
The next theorem shows that $g(t)$ tends to 0; its proof depends on the following lemma and on (14).
Lemma 5 
(see [41], Stolz–Cesàro theorem, pp. 85–88). If $\left(A_{n}\right)_{n\ge1}$ and $\left(B_{n}\right)_{n\ge1}$ are two sequences of real numbers such that $B_{n}$ is positive, strictly increasing, and unbounded, with
$$\lim_{n\to\infty}\frac{A_{n+1}-A_{n}}{B_{n+1}-B_{n}}=l,$$
then
$$\lim_{n\to\infty}\frac{A_{n}}{B_{n}}=l.$$
Theorem 3. 
For $k>\sqrt{\frac{2}{h}}$, the unique solution $g(t)$ of DFP (10) and (11) satisfies
$$\lim_{t\to\infty}g(t)=0.$$
Moreover, for large $t$, $g(t)<0$ and $g(t)$ is increasing.
Proof. 
Define $A_{l}$, $B_{l}$, and $k_{l}$ as follows:
$$A_{l}=\sum_{i=1}^{l}\binom{\frac12}{i}\left(hk^{2}-1\right)^{i}+h^{\frac12}\left(k+h^{-\frac12}\right),\qquad B_{l}=\left(hk^{2}-1\right)^{l},$$
and
$$k_{l}=\frac{(-1)^{l}\,h^{-\frac12}\,A_{l}}{B_{l}}.$$
Then, for $k>\sqrt{\frac{2}{h}}$, we see that $B_{l}$ is positive, unbounded, and strictly increasing. Moreover,
$$\begin{aligned}\lim_{l\to\infty}\frac{A_{l+1}-A_{l}}{B_{l+1}-B_{l}}&=\lim_{l\to\infty}\frac{\binom{\frac12}{l+1}\left(hk^{2}-1\right)^{l+1}}{\left(hk^{2}-1\right)^{l+1}-\left(hk^{2}-1\right)^{l}}=\lim_{l\to\infty}\binom{\tfrac12}{l+1}\cdot\frac{hk^{2}-1}{hk^{2}-2}\\&=\lim_{l\to\infty}\frac{\Gamma\left(\frac32\right)}{\Gamma(l+2)\,\Gamma\left(\frac12-l\right)}\cdot\frac{hk^{2}-1}{hk^{2}-2}\\&=\lim_{l\to\infty}\frac{\Gamma\left(\frac32\right)\Gamma\left(\frac12+l\right)}{\Gamma(l+2)\,\Gamma\left(\frac12-l\right)\Gamma\left(\frac12+l\right)}\cdot\frac{hk^{2}-1}{hk^{2}-2}\\&=\lim_{l\to\infty}\frac{\Gamma\left(\frac32\right)\Gamma\left(\frac12+l\right)}{\Gamma(l+2)}\cdot\frac{\sin\left(\pi\left(\frac12+l\right)\right)}{\pi}\cdot\frac{hk^{2}-1}{hk^{2}-2}\\&=\lim_{l\to\infty}\frac{\Gamma\left(\frac12+l\right)}{\Gamma(l+2)\,(l+2)^{-\frac32}}\cdot\Gamma\left(\tfrac32\right)(l+2)^{-\frac32}\cdot\frac{\sin\left(\pi\left(\frac12+l\right)\right)}{\pi}\cdot\frac{hk^{2}-1}{hk^{2}-2}=0,\end{aligned}$$
where we have used the Stirling formula (see [42])
$$\lim_{n\to\infty}\frac{\Gamma(n+\alpha)}{\Gamma(n)\,n^{\alpha}}=1,\quad\alpha\in\mathbb{C},\qquad\left|\frac{\sin\left(\pi\left(\frac12+l\right)\right)}{\pi}\right|\le\frac{1}{\pi},$$
and the reflection formula
$$\Gamma(1-z)\,\Gamma(z)=\frac{\pi}{\sin(\pi z)},\quad z\notin\mathbb{Z}.$$
By considering Lemma 5, we know that
$$\lim_{l\to\infty}\frac{A_{l}}{B_{l}}=0.$$
Thus, one has
$$\lim_{l\to\infty}k_{l}=0.$$
In view of (18), we see that
$$g(a+lh)=\frac{A\left(k-h^{-\frac12}\right)h}{hk^{2}-1}\,k_{l}.$$
Therefore, from (21) and (22), we have
$$\lim_{l\to\infty}g(a+lh)=0.$$
Next, to further characterize the asymptotic behavior of $g(t)$, we observe that
$$\begin{aligned}(-1)^{l}&\left[\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-l+1\right)}{l!}\left(hk^{2}-1\right)^{l}+\frac{\frac12\left(\frac12-1\right)\cdots\left(\frac12-l+2\right)}{(l-1)!}\left(hk^{2}-1\right)^{l-1}\right]\\&=(-1)^{l}\left[(-1)^{l-1}\frac{\frac12\left(\frac12+1\right)\cdots\left(\frac12+l-1\right)... \end{aligned}$$
Furthermore,
$$\begin{aligned}k_{l}-k_{l-1}&=(-1)^{l}h^{-\frac12}\left[\binom{\tfrac12}{l}+\frac{\binom{\tfrac12}{l-1}}{hk^{2}-1}+\cdots+\frac{\binom{\tfrac12}{1}}{\left(hk^{2}-1\right)^{l-1}}+\frac{h^{\frac12}\left(k+h^{-\frac12}\right)}{\left(hk^{2}-1\right)^{l}}\right]\\&\quad-(-1)^{l-1}h^{-\frac12}\left[\binom{\tfrac12}{l-1}+\frac{\binom{\tfrac12}{l-2}}{hk^{2}-1}+\cdots+\frac{h^{\frac12}\left(k+h^{-\frac12}\right)}{\left(hk^{2}-1\right)^{l-1}}\right]\\&=\frac{(-1)^{l}h^{-\frac12}}{\left(hk^{2}-1\right)^{l}}\left[\binom{\tfrac32}{l}\left(hk^{2}-1\right)^{l}+\binom{\tfrac32}{l-1}\left(hk^{2}-1\right)^{l-1}+\cdots+\binom{\tfrac32}{2}\left(hk^{2}-1\right)^{2}+\frac12\left(hk^{2}-1\right)+\left(h^{\frac12}k+1\right)hk^{2}\right],\end{aligned}$$
by Pascal's rule $\binom{\frac12}{j}+\binom{\frac12}{j-1}=\binom{\frac32}{j}$. Because
$$\begin{aligned}(-1)^{l}\left[\binom{\tfrac32}{l}\left(hk^{2}-1\right)^{l}+\binom{\tfrac32}{l-1}\left(hk^{2}-1\right)^{l-1}\right]&=(-1)^{l-1}\binom{\tfrac32}{l-1}\left(hk^{2}-1\right)^{l}\left[\frac{l-\frac52}{l}-\frac{1}{hk^{2}-1}\right]\\&=\binom{l-\tfrac72}{l-1}\left(hk^{2}-1\right)^{l}\left[\frac{l-\frac52}{l}-\frac{1}{hk^{2}-1}\right]\\&=\frac{\Gamma\left(l-\frac52\right)}{\Gamma(l)\,\Gamma\left(-\frac32\right)}\left(hk^{2}-1\right)^{l}\left[\frac{l-\frac52}{l}-\frac{1}{hk^{2}-1}\right]\\&=\frac{\Gamma\left(l-\frac52\right)}{\Gamma(l)\,l^{-\frac52}}\cdot\frac{\left(hk^{2}-1\right)^{l}l^{-\frac52}}{\Gamma\left(-\frac32\right)}\left[\frac{l-\frac52}{l}-\frac{1}{hk^{2}-1}\right],\end{aligned}$$
where we have used $hk^{2}-1>1$,
$$\lim_{l\to\infty}\frac{\Gamma\left(l-\frac52\right)}{\Gamma(l)\,l^{-\frac52}}=1,\qquad\lim_{l\to\infty}\frac{\left(hk^{2}-1\right)^{l}l^{-\frac52}}{\Gamma\left(-\frac32\right)}=+\infty,$$
and
$$\lim_{l\to\infty}\left[\frac{l-\frac52}{l}-\frac{1}{hk^{2}-1}\right]=1-\frac{1}{hk^{2}-1}>0,$$
in the same way as used to show $k_{l}<0$, we can deduce, for large $l$, that
$$k_{l}-k_{l-1}>0.$$
Considering (22), we then have
$$g(a+lh)-g\bigl(a+(l-1)h\bigr)>0;$$
that is, $g(t)$ is increasing for large $t$. □

4. Applications

This section aims to illustrate the practical behavior of the solution to the discrete fractional problem discussed in the preceding results. Specifically, using Lemma 4, we derive a summation formula for computing the solution $x(t)$ at discrete time points. The formula reveals that $x(t)$ depends only on the initial value $x(a)$ and is independent of the starting point $a$. We then explore the qualitative behavior of the solution based on the parameter $k$, highlighting cases where the solution tends to zero, becomes unbounded, or exhibits oscillatory stability or instability. Numerical examples, figures, and a comparison with neural network approximations are provided to validate and support the theoretical findings. For this, we let $t=a+(d-1)h$, for $d\ge2$. Then, from Lemma 4, we can deduce a summation formula for (1), as follows:
$$x\bigl(a+(d-1)h\bigr)=\frac{1}{k\,h^{\nu}-1}\sum_{p=1}^{d-1}\binom{d-p-\nu-1}{d-p}\,x\bigl(a+(p-1)h\bigr),\quad d\ge2.$$
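The summation formula can be implemented directly as a recurrence. In the sketch below, the generalized binomial weight $\binom{d-p-\nu-1}{d-p}$ is our reading of the (partially garbled) formula (23), so both the weight and the sample parameters are assumptions to be checked against the paper's figures.

```python
import math

def gbinom(x, j):
    # Generalized binomial coefficient C(x, j) = x (x-1) ... (x-j+1) / j!
    out = 1.0
    for i in range(j):
        out *= (x - i) / (i + 1)
    return out

def solve_dfp(A, k, nu, a, h, d_max):
    # x[d-1] approximates x(a + (d-1) h); weights follow our reading of formula (23)
    x = [A]
    for d in range(2, d_max + 1):
        s = sum(gbinom(d - p - nu - 1, d - p) * x[p - 1] for p in range(1, d))
        x.append(s / (k * h ** nu - 1))
    return x

xs = solve_dfp(A=1.0, k=1.7, nu=0.5, a=1, h=1.0, d_max=30)
print(xs[:4])
```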
Before turning to the examples, we note from (23) that $x(t)$ depends only on the initial value $x(a)$ and is independent of the starting point $a$. Consequently, by considering [40], Corollary 3, Corollary 5, Theorem 2, and Theorem 3, the following results can be summarized. A solution $x(t)$ of DFP (19):
  • Tends to 0 if $k\le0$;
  • Tends to ∞ if $k\in(0,1)$;
  • Is oscillatorily unstable if $k\in\left(1,\sqrt{2}\right)$; that is,
$$\limsup_{t\to\infty}x(t)=+\infty\quad\text{and}\quad\liminf_{t\to\infty}x(t)=-\infty;$$
  • Is oscillatorily stable if $k=\sqrt{2}$; that is,
$$\limsup_{t\to\infty}x(t)=2\left(2-\sqrt{2}\right)A\quad\text{and}\quad\liminf_{t\to\infty}x(t)=-2\left(2-\sqrt{2}\right)A;$$
  • Tends to 0 if $k>\sqrt{2}$.
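The classification above can be probed numerically using the closed form of Theorem 1 with the vanishing remainder $R_L$ dropped; this large-$L$ approximation is ours, not an exact evaluation of the solution.

```python
def g_envelope(A, k, h, L):
    # Theorem 1 closed form with the vanishing remainder R_L dropped (an approximation)
    q = h * k ** 2 - 1
    return A * (k - h ** -0.5) * h / q * (-1) ** L * h ** -0.5 / q ** L * (2 * h ** 0.5 * k)

h, A = 1.0, 1.0
s2 = 2 ** 0.5
# k in (1, sqrt(2)): the oscillation amplitude grows
assert abs(g_envelope(A, 1.2, h, 30)) > abs(g_envelope(A, 1.2, h, 10))
# k = sqrt(2): the amplitude settles at 2(2 - sqrt(2)) A
assert abs(abs(g_envelope(A, s2, h, 30)) - 2 * (2 - s2) * A) < 1e-9
# k > sqrt(2): the amplitude decays toward 0
assert abs(g_envelope(A, 1.7, h, 30)) < abs(g_envelope(A, 1.7, h, 10))
```

The three assertions mirror the three oscillatory cases in the bullet list.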

4.1. Neural Network Approximation

To support the analytical results, a feedforward neural network (NN) was used to approximate the solution of the discrete fractional problem. The NN was trained on data from the analytical solution, with normalized time inputs and corresponding x ( t ) values as targets. The network had one hidden layer with 20 neurons and sigmoid activation, and was trained using the Levenberg–Marquardt algorithm. The dataset was split into 70% training, 15% validation, and 15% testing. The trained NN closely matched the analytical solution across the domain, with small absolute errors and stable performance under varying conditions, confirming its effectiveness as a surrogate model for fractional discrete systems. Similar approaches to solving fractional differential equations using neural network approximations can be found in [43,44,45].
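A minimal stand-in for this setup is sketched below. Since the Levenberg–Marquardt trainer is a MATLAB facility, plain gradient descent is used instead, and a damped oscillation plays the role of the analytical solution profile; the architecture (one hidden layer of 20 sigmoid units) follows the text, while the learning rate, epoch count, and target function are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: a damped oscillation mimicking a decaying solution profile x(t)
t = np.linspace(0.0, 1.0, 100).reshape(-1, 1)   # normalized time inputs
y = np.exp(-3.0 * t) * np.cos(8.0 * t)          # target values

# One hidden layer with 20 sigmoid neurons, linear output
W1 = rng.normal(0.0, 1.0, (1, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.1, (20, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(inp):
    hid = sigmoid(inp @ W1 + b1)
    return hid, hid @ W2 + b2

lr, losses = 0.1, []
for _ in range(3000):                            # plain gradient descent (LM stand-in)
    hid, pred = forward(t)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    g_out = 2.0 * err / len(t)                   # gradient of the MSE w.r.t. the prediction
    gW2, gb2 = hid.T @ g_out, g_out.sum(axis=0)
    g_hid = (g_out @ W2.T) * hid * (1.0 - hid)   # backprop through the sigmoid layer
    gW1, gb1 = t.T @ g_hid, g_hid.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])                     # the fit improves over training
```

The 70/15/15 split and validation monitoring of the original workflow are omitted here for brevity.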
Example 1. 
We consider the DFP
$${}^{\mathrm{RL}}\Delta^{\frac12}_{1,h}x(t)=k\,x\left(t+\tfrac12\right),\quad t\in\mathbb{N}_{2},\qquad x(1)=1.$$
From Figure 1, Figure 2 and Figure 3, we can observe that
  • x ( t ) is oscillatorily stable for k = 1.3 and k = 1.39 ;
  • x ( t ) is oscillatorily unstable for k = 1.41 and k = 1.42 ;
  • x ( t ) will tend to 0 for k = 1.51 and k = 1.7 .
Figure 1. Plot of (23) for a = h = 1 , L = 0.25 and different k. (a) k = 1.3 , a = h = 1 , L = 0.25 . (b) k = 1.39 , a = h = 1 , L = 0.25 .
Figure 2. Plot of (23) for a = h = 1 , L = 0.25 and different k. (a) k = 1.41 , a = h = 1 , L = 0.25 . (b) k = 1.42 , a = h = 1 , L = 0.25 .
Figure 3. Plot of (23) for a = h = 1 , L = 0.25 and different k. (a) k = 1.51 , a = h = 1 , L = 0.25 . (b) k = 1.7 , a = h = 1 , L = 0.25 .
Example 1 investigates the behavior of the solution x ( t ) for various values of the parameter k, using the discrete fractional problem with the initial condition x ( 1 ) = 1 . The solution profiles shown in Figure 1, Figure 2 and Figure 3 illustrate distinct dynamic behaviors depending on the value of k. In Figure 1, for k = 1.3 and k = 1.39 , the solution exhibits oscillatorily stable behavior, with amplitudes that remain bounded and exhibit regular periodicity. Figure 2 shows a transition to oscillatory instability for k = 1.41 and k = 1.42 , where the amplitude of oscillations increases over time, indicating instability. In contrast, Figure 3 demonstrates that, for the larger values k = 1.51 and k = 1.7 , the solution decays and tends to zero, reflecting damping and asymptotic stability.
To further validate the numerical results, Table 1 compares the present method’s computed values of x ( t ) with those obtained using a neural network approximation for selected time points. The absolute error between the two methods remains consistently low (on the order of 10 3 ), confirming the accuracy and reliability of the proposed approach. This agreement not only supports the theoretical analysis but also demonstrates the efficiency of the method in approximating discrete fractional models with high precision.
Figure 4 displays the absolute error | x ( t ) x N N ( t ) | for various values of k at t = 10 , 30 , 50 for Example 1. The results show that the error remains small (approximately 10 3 ) across all cases, indicating strong agreement between the proposed method and the neural network approximation. The error is stable with respect to k, confirming the accuracy and reliability of both approaches.
The sensitivity analysis results presented in Table 2 illustrate the impact of varying the parameters k and α on the accuracy of the neural network approximation at t = 5 , with the fixed values a = h = 1 for Example 1. As observed, for each fixed value of k, the absolute error increases with increasing α. This trend suggests that the fractional order α influences the complexity of the dynamic behavior, leading to a higher approximation error by the neural network. Conversely, for a fixed α, increasing the parameter k tends to reduce the approximation error, indicating that stronger damping (larger k) contributes to smoother system behavior, which is easier to learn by the network. These observations confirm that both parameters play a significant role in the model’s approximation performance and must be carefully considered in neural network-based solutions.
The histogram in Figure 5 illustrates the distribution of absolute errors in estimating the parameter k using the neural network model for Example 1. With 20 bins, the error values are tightly clustered around zero, demonstrating that the network effectively captures the relationship between the sampled solution values x ( t ) and the corresponding k. This confirms the network’s reliability in reverse-engineering the parameter from observed data.
Table 3 shows the effect of a varying parameter k on the neural network’s prediction accuracy for k under noisy initial conditions. The error tends to increase with larger values of k, indicating a slightly more challenging inverse problem for higher k, but the overall error remains relatively low, confirming the NN’s robustness.
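The inverse task of recovering $k$ from solution samples, which the paper addresses with a neural network, can also be illustrated with a transparent grid search. The sketch below generates data from the Theorem 1 closed form with $R_L$ dropped and minimizes a least-squares misfit over a grid of candidate $k$ values; the closed-form surrogate, the grid, and the noise-free data are all our assumptions.

```python
def g_envelope(A, k, h, L):
    # Theorem 1 closed form with the vanishing remainder R_L dropped (an approximation)
    q = h * k ** 2 - 1
    return A * (k - h ** -0.5) * h / q * (-1) ** L * h ** -0.5 / q ** L * (2 * h ** 0.5 * k)

A, h, true_k = 1.0, 1.0, 1.7
samples = [(L, g_envelope(A, true_k, h, L)) for L in range(5, 15)]

def misfit(k):
    return sum((g_envelope(A, k, h, L) - v) ** 2 for L, v in samples)

grid = [1.5 + 0.001 * i for i in range(500)]     # candidate k values in [1.5, 2.0)
k_hat = min(grid, key=misfit)
print(k_hat)                                      # recovers a value near true_k = 1.7
```

An NN-based estimator, as in the paper, replaces the explicit misfit with a learned map from samples to $k$; the grid search simply makes the identifiability of $k$ visible.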
Example 2. 
In this example, we consider the DFP
$${}^{\mathrm{RL}}\Delta^{\frac12}_{0,h}x(t)=k\,x\left(t+\tfrac12\right),\quad t\in\mathbb{N}_{4,4},\qquad x(0)=1.$$
From Figure 6, Figure 7 and Figure 8, we can observe that
  • x ( t ) is oscillatorily stable for k = 0.7 and k = 0.75 ;
  • x ( t ) is oscillatorily unstable for k = 0.79 and k = 0.81 ;
  • x ( t ) will tend to 0 for k = 0.9 and k = 0.97 .
Figure 6. Comparison of analytical solution and neural network approximation for (23) with a = 0 , h = 4 , L = 0.85 , and different k. (a) k = 0.7 , a = 0 , h = 4 , L = 0.85 . (b) k = 0.75 , a = 0 , h = 4 , L = 0.85 .
Figure 7. Comparison of analytical solution and neural network approximation for (23) with a = 0 , h = 4 , L = 0.85 , and different k. (a) k = 0.79 , a = 0 , h = 4 , L = 0.85 . (b) k = 0.81 , a = 0 , h = 4 , L = 0.85 .
Figure 8. Comparison of analytical solution and neural network approximation for (23) with a = 0 , h = 4 , L = 0.85 , and different k. (a) k = 0.9 , a = 0 , h = 4 , L = 0.85 . (b) k = 0.97 , a = 0 , h = 4 , L = 0.85 .
The numerical results demonstrate distinct dynamic behaviors of the solution x ( t ) for varying values of the parameter k. When k = 0.7 and k = 0.75 , the solution exhibits oscillatory stability, characterized by bounded fluctuations with an overall increasing tendency. As k increases to 0.79 and 0.81 , the solution becomes oscillatorily unstable, with a growing amplitude indicating divergence. For larger values such as k = 0.9 and k = 0.97 , the solution x ( t ) tends to zero, reflecting a decaying pattern.
These observations are well supported by the graphical comparisons in Figure 6, Figure 7 and Figure 8, where the present method and the neural network approximation show close agreement. This is further confirmed by the results in Table 4, where the absolute errors | x ( t ) x N N ( t ) | remain small, indicating the effectiveness and reliability of the neural network approach in approximating the discrete fractional problem.
Figure 9 presents the absolute error | x ( t ) x N N ( t ) | for different values of k at time levels t = 40 , 80 , 120 for Example 2. The error remains within a small range, showing slightly increasing trends with time. Despite the longer time intervals and dynamic variations in the solution behavior, the neural network approximation closely matches the proposed method, confirming its effectiveness and accuracy.
The results presented in Table 5 show the absolute error between the analytical DFP solution and the neural network prediction at t = 20 for various values of the parameters k and α, with fixed a = 0 and h = 4 for Example 2. It is evident that, for each fixed value of k, the absolute error increases as the fractional order α increases from 0.25 to 0.75. This indicates that higher fractional orders introduce greater complexity in the system dynamics, which challenges the neural network approximation. Additionally, for each fixed α, the absolute error tends to decrease as k increases from 0.75 to 0.97 , suggesting that larger values of k lead to smoother or more stable system behavior that the neural network can approximate more accurately. These observations emphasize the significant influence of both k and α on the approximation accuracy and highlight the importance of carefully selecting these parameters when modeling and approximating fractional dynamic systems.
The histogram in Figure 10 illustrates the distribution of absolute errors in estimating the parameter k using the neural network model for Example 2. With 20 bins, the error values are tightly clustered around zero, demonstrating that the network effectively captures the relationship between the sampled solution values x ( t ) and the corresponding k. This confirms the network’s reliability in reverse-engineering the parameter from observed data.
Table 6 reports the mean absolute error of neural network predictions of the parameter k for Example 2 under the noisy initial condition x(0). The results indicate a gradual increase in prediction error as k increases, reflecting the growing difficulty of parameter recovery in this fractional difference system. Despite the noise, the neural network remains effective across the tested parameter range.

4.2. Algorithm for Solving the DFP and Neural Network Approximation

The following steps describe the procedure for solving the DFP and approximating the solution using a neural network, as follows:
Inputs: Initial value x ( a ) = 1 , parameters a , h , ν ( = α ) , k , L , and maximum time t max .
Step 1: Compute the number of steps, as follows:
$$d_{\max}=\frac{t_{\max}-a}{h}+1.$$
Step 2: Define time levels, as follows:
$$t_{d}=a+(d-1)h,\quad d=1,2,\dots,d_{\max}.$$
Step 3: Initialize the solution array with x ( a ) = 1 .
Step 4: For each d = 2 to d max , compute the following:
$$x(t_{d})=\frac{1}{k\,h^{\nu}-1}\sum_{p=1}^{d-1}\binom{d-p-\nu-1}{d-p}\,x\bigl(a+(p-1)h\bigr).$$
Step 5: Normalize time values, as follows:
$$\hat{t}_{d}=\frac{t_{d}}{\max_{d}(t_{d})}.$$
Step 6: Prepare training data for the neural network, as follows:
$$X_{\mathrm{train}}=\{\hat{t}_{d}\},\qquad Y_{\mathrm{train}}=\{x(t_{d})\}.$$
Step 7: Define a feedforward neural network with one hidden layer of size 20.
Step 8: Set the data division, as follows:
  • Training ratio: 70%;
  • Validation ratio: 15%;
  • Testing ratio: 15%.
Step 9: Train the neural network using ( X train , Y train ) .
Step 10: Predict the values x NN ( t d ) using the trained network.
Step 11: Compute the absolute error at each time level, as follows:
$$\mathrm{Error}(t_{d})=\left|x(t_{d})-x_{\mathrm{NN}}(t_{d})\right|.$$
Outputs: The approximated values x ( t d ) , predicted values x NN ( t d ) , and absolute error at each time point.

5. Conclusions and Future Direction

This paper has studied the discrete fractional problem of order 0.5 given by
$${}^{\mathrm{RL}}\Delta^{\frac12}_{a,h}x(t)=c\,x\left(t+\tfrac12\right),\qquad x(a)=A>0,$$
for t N a , h . We established the existence and uniqueness of the solution, which was expressed in terms of a Mittag-Leffler-type function suitable for discrete fractional calculus. The asymptotic behavior of the solution was rigorously analyzed, showing convergence properties under various parameter regimes. Additionally, numerical experiments, including comparisons with neural network approximations, were performed to validate the theoretical findings. The close agreement between analytical and neural results confirms the reliability, accuracy, and efficiency of the proposed method. Several examples were presented to illustrate the behavior of the solution under different conditions, including oscillatory stability, instability, and convergence to zero.
The analysis presented in this study can be generalized in several ways. For instance, the consideration of discrete generalized operators with exponential or Mittag-Leffler kernels (see [46,47]) could lead to a deeper understanding of the asymptotic behavior of solutions.

Author Contributions

Conceptualization, P.O.M., A.A.L. and M.R.A.; funding acquisition, A.A.L.; investigation, M.A.Y. and S.M.A.; project administration, S.M.A.; software, M.R.A. and M.A.Y.; writing—original draft, P.O.M. and M.A.Y.; writing—review and editing, A.A.L., M.R.A. and S.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The publication of this paper was supported by the University of Oradea.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to acknowledge the Deanship of Graduate Studies and Scientific Research, Taif University, for funding this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Raubitzek, S.; Mallinger, K.; Neubauer, T. Combining Fractional Derivatives and Machine Learning: A Review. Entropy 2023, 25, 35. [Google Scholar] [CrossRef] [PubMed]
  2. Walasek, R.; Gajda, J. Fractional differentiation and its use in machine learning. Int. J. Adv. Eng. Sci. Appl. Math. 2021, 13, 270–277. [Google Scholar] [CrossRef]
  3. Chen, C. Discrete Caputo Delta Fractional Economic Cobweb Models. Qual. Theory Dyn. Syst. 2023, 22, 8. [Google Scholar] [CrossRef]
  4. Abdeljawad, T. Different type kernel h-fractional differences and their fractional h-sums. Chaos Solit. Fract. 2018, 116, 146–156. [Google Scholar] [CrossRef]
  5. Baleanu, D.; Wu, G.C. Some further results of the Laplace transform for variable-order fractional difference equations. Fract. Calc. Appl. Anal. 2019, 22, 1641–1654. [Google Scholar] [CrossRef]
  6. Dhawan, S.; Jonnalagadda, J.M. Nontrivial solutions for arbitrary order discrete relaxation equations with periodic boundary conditions. J. Anal. 2024, 32, 2113–2133. [Google Scholar] [CrossRef]
  7. Goodrich, C.S.; Peterson, A.C. Discrete Fractional Calculus; Springer: New York, NY, USA, 2015. [Google Scholar]
  8. Danca, M.-F.; Jonnalagadda, J.M. On the Solutions of a Class of Discrete PWC Systems Modeled with Caputo-Type Delta Fractional Difference Equations. Fractal Fract. 2023, 7, 304. [Google Scholar] [CrossRef]
  9. Guirao, J.L.G.; Mohammed, P.O.; Srivastava, H.M.; Baleanu, D.; Abualrub, M.S. A relationships between the discrete Riemann-Liouville and Liouville-Caputo fractional differences and their associated convexity results. AIMS Math. 2022, 7, 18127–18141. [Google Scholar] [CrossRef]
  10. Ikram, A. Lyapunov inequalities for nabla Caputo boundary value problems. J. Differ. Equ. Appl. 2018, 25, 757–775. [Google Scholar] [CrossRef]
  11. Atici, F.; Sengul, S. Modeling with discrete fractional equations. J. Math. Anal. Appl. 2010, 369, 1–9. [Google Scholar] [CrossRef]
  12. Silem, A.; Wu, H.; Zhang, D.-J. Discrete rogue waves and blow-up from solitons of a nonisospectral semi-discrete nonlinear Schrödinger equation. Appl. Math. Lett. 2021, 116, 107049. [Google Scholar] [CrossRef]
  13. Cabada, A.; Dimitrov, N. Nontrivial solutions of non-autonomous Dirichlet fractional discrete problems. Fract. Calc. Appl. Anal. 2020, 23, 980–995. [Google Scholar] [CrossRef]
  14. Mohammed, P.O.; Agarwal, R.P.; Yousif, M.A.; Al-Sarairah, E.; Lupas, A.A.; Abdelwahed, M. Theoretical Results on Positive Solutions in Delta Riemann-Liouville Setting. Mathematics 2024, 12, 2864. [Google Scholar] [CrossRef]
  15. Wang, P.; Liu, X.; Anderson, D.R. Fractional averaging theory for discrete fractional-order system with impulses. Chaos 2024, 34, 013128. [Google Scholar] [CrossRef] [PubMed]
  16. Eidelman, S.D.; Kochubei, A.N. Cauchy problem for fractional diffusion equations. J. Differ. Equ. 2004, 199, 211–255. [Google Scholar] [CrossRef]
  17. Karimova, E.; Ruzhansky, M.; Tokmagambetov, N. Cauchy type problems for fractional differential equations. Integral Transforms Spec. Funct. 2022, 33, 47–64. [Google Scholar] [CrossRef]
  18. Boutiara, A.; Rhaima, M.; Mchiri, L.; Makhlouf, A.B. Cauchy problem for fractional (p,q)-difference equations. AIMS Math. 2023, 8, 15773–15788. [Google Scholar] [CrossRef]
  19. Cinque, F.; Orsingher, E. Analysis of fractional Cauchy problems with some probabilistic applications. J. Math. Anal. Appl. 2024, 536, 128188. [Google Scholar] [CrossRef]
  20. Abdeljawad, T. Fractional difference operators with discrete generalized Mittag-Leffler kernels. Chaos Solit. Fract. 2019, 126, 315–324. [Google Scholar] [CrossRef]
  21. Mohammed, P.O.; Goodrich, C.S.; Hamasalh, F.K.; Kashuri, A.; Hamed, Y.S. On positivity and monotonicity analysis for discrete fractional operators with discrete Mittag-Leffler kernel. Math. Meth. Appl. Sci. 2022, 45, 6391–6410. [Google Scholar] [CrossRef]
  22. Nagai, A. Discrete Mittag–Leffler function and its applications. Publ. Res. Inst. Math. Sci. Kyoto Univ. 2003, 1302, 1–20. [Google Scholar]
  23. Awadalla, M.; Mahmudov, N.I.; Alahmadi, J. A novel delayed discrete fractional Mittag-Leffler function: Representation and stability of delayed fractional difference system. J. Appl. Math. Comput. 2024, 70, 1571–1599. [Google Scholar] [CrossRef]
  24. Saenko, V.V. The calculation of the Mittag–Leffler function. Int. J. Comput. Math. 2022, 99, 1367–1394. [Google Scholar] [CrossRef]
  25. Wu, G.C.; Baleanu, D.; Zeng, S.D.; Luo, W.H. Mittag–Leffler function for discrete fractional modelling. J. King Saud Univ. Sci. 2016, 28, 99–102. [Google Scholar] [CrossRef]
  26. Mohammed, P.O.; Abdeljawad, T.; Hamasalh, F.K. Discrete Prabhakar fractional difference and sum operators. Chaos Solit. Fractals 2021, 150, 111182. [Google Scholar] [CrossRef]
  27. Abdeljawad, T.; Baleanu, D. Discrete fractional differences with nonsingular discrete Mittag-Leffler kernels. Adv. Differ. Equ. 2016, 2016, 232. [Google Scholar]
  28. Wei, Y.; Zhao, L.; Lu, J.; Cao, J. Stability and Stabilization for Delay Delta Fractional Order Systems: An LMI Approach. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 4093–4097. [Google Scholar] [CrossRef]
  29. Jonnalagadda, J.M. A Comparison Result for the Nabla Fractional Difference Operator. Foundations 2023, 3, 181–198. [Google Scholar] [CrossRef]
  30. Mohammed, P.O.; Lizama, C.; Lupas, A.A.; Al-Sarairah, E.; Abdelwahed, M. Maximum and Minimum Results for the Green’s Functions in Delta Fractional Difference Settings. Symmetry 2024, 16, 991. [Google Scholar] [CrossRef]
  31. Gholami, Y. A uniqueness criterion for nontrivial solutions of the nonlinear higher-order -difference systems of fractional-order. Fract. Differ. Calc. 2021, 11, 85–110. [Google Scholar] [CrossRef]
  32. Goodrich, C.S. A comparison result for the fractional difference operator. Int. J. Differ. Equ. 2011, 6, 17–37. [Google Scholar]
  33. Jia, B.; Du, F.; Erbe, L.; Peterson, A. Asymptotic behavior of nabla half order h-difference equations. J. Appl. Anal. Comput. 2018, 8, 1707–1726. [Google Scholar]
  34. Du, F.; Lu, J.-G. Explicit solutions and asymptotic behaviors of Caputo discrete fractional-order equations with variable coefficients. Chaos Solit. Fractals 2021, 153, 111490. [Google Scholar] [CrossRef]
  35. Jia, B.; Lynn, E.; Peterson, A. Comparison theorems and asymptotic behavior of solutions of discrete fractional equations. Electron. J. Qual. Theory Differ. Equ. 2015, 89, 1–18. [Google Scholar] [CrossRef]
  36. Jonnalagadda, J.M. Asymptotic behaviour of linear fractional nabla difference equations. Int. J. Differ. Equ. 2017, 12, 255–265. [Google Scholar]
  37. Wang, M.; Jia, B.; Du, F.; Liu, X. Asymptotic stability of fractional difference equations with bounded time delays. Fract. Calc. Appl. Anal. 2020, 23, 571–590. [Google Scholar] [CrossRef]
  38. Brackins, A. Boundary Value Problems of Nabla Fractional Difference Equations. Ph.D. Thesis, The University of Nebraska, Lincoln, NE, USA, 2014. [Google Scholar]
  39. Chen, C.; Bohner, M.; Jia, B. Existence and uniqueness of solutions for nonlinear Caputo fractional difference equations. Turk. J. Math. 2020, 44, 857–869. [Google Scholar] [CrossRef]
  40. Mohammed, P.O. On Asymptotic Behavior Solutions for Delta Fractional Differences. Commun. Nonlinear Sci. Numer. Simul.
  41. Muresan, M. A Concrete Approach to Classical Analysis; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  42. Dutka, J. The early history of the factorial function. Arch. Hist. Exact Sci. 1991, 43, 225–249. [Google Scholar] [CrossRef]
  43. Javadi, R.; Mesgarani, H.; Nikan, O.; Avazzadeh, Z. Solving Fractional Order Differential Equations by Using Fractional Radial Basis Function Neural Network. Symmetry 2023, 15, 1275. [Google Scholar] [CrossRef]
  44. Sivalingam, S.M.; Kumar, P.; Govindaraj, V. A Neural Networks-Based Numerical Method for the Generalized Caputo-Type Fractional Differential Equations. Math. Comput. Simul. 2023, 213, 302–323. [Google Scholar]
  45. Allahviranloo, T.; Jafarian, A.; Saneifard, R.; Ghalami, N.; Measoomy Nia, S.; Kiani, F.; Fernandez-Gamiz, U.; Noeiaghdam, S. An Application of Artificial Neural Networks for Solving Fractional Higher-Order Linear Integro-Differential Equations. Bound. Value Probl. 2023, 2023, 74. [Google Scholar] [CrossRef]
  46. Mohammed, P.O.; Kürt, C.; Abdeljawad, T. Bivariate discrete Mittag-Leffler functions with associated discrete fractional operators. Chaos Solit. Fractals 2022, 165, 112848. [Google Scholar] [CrossRef]
  47. Mohammed, P.O.; Abdeljawad, T. Discrete generalized fractional operators defined using h-discrete Mittag-Leffler kernels and applications to AB fractional difference systems. Math. Meth. Appl. Sci. 2020, 46, 7688–7713. [Google Scholar] [CrossRef]
Figure 4. Plot of absolute error for different values of k over time for Example 1.
Figure 5. Histogram of absolute error between analytical and neural network solutions with 20 bins for Example 1.
Figure 9. Plot of absolute error for different values of k over time for Example 2.
Figure 10. Histogram of absolute error between analytical and neural network solutions with 20 bins for Example 2.
Table 1. Comparison of the present method and neural network approximation for different k, with parameters a = h = 1 , L = 0.25 , and times t = 10 , 30 , 50 .
k      t     Present Work x(t)   NN Approx. x_NN(t)   Absolute Error |x(t) - x_NN(t)|
1.30   10    0.8432              0.8405               0.0027
       30    0.9207              0.9178               0.0029
       50    0.9815              0.9789               0.0026
1.39   10    0.7634              0.7602               0.0032
       30    0.8159              0.8125               0.0034
       50    0.8650              0.8617               0.0033
1.41   10    0.7210              0.7180               0.0030
       30    0.7642              0.7608               0.0034
       50    0.8005              0.7971               0.0034
1.42   10    0.6905              0.6873               0.0032
       30    0.7321              0.7289               0.0032
       50    0.7659              0.7628               0.0031
1.51   10    0.5387              0.5360               0.0027
       30    0.5735              0.5704               0.0031
       50    0.6023              0.5995               0.0028
1.70   10    0.3251              0.3229               0.0022
       30    0.3467              0.3441               0.0026
       50    0.3649              0.3625               0.0024
Table 2. Sensitivity analysis: absolute error between analytical DFP solution and neural network prediction at t = 5 for various k and α values (with fixed a = h = 1 ).
k      α      Absolute Error
1.39   0.25   0.0013
       0.50   0.0021
       0.75   0.0034
1.42   0.25   0.0011
       0.50   0.0018
       0.75   0.0030
1.70   0.25   0.0007
       0.50   0.0013
       0.75   0.0023
Table 3. Neural network prediction accuracy for various k values with noise level 0.05 in initial condition x ( 1 ) for Example 1 ( a = h = 1 , α = 0.5 ).
k      Mean Absolute Error (MAE)
1.30   0.0040
1.39   0.0045
1.41   0.0052
1.42   0.0055
1.51   0.0061
1.70   0.0065
Table 4. Comparison of the present method and neural network approximation for different k, with parameters a = 0 , h = 4 , and L = 0.85 .
k      t      Present Work x(t)   NN Approx. x_NN(t)   Absolute Error |x(t) - x_NN(t)|
0.70   40     1.1814              1.1792               0.0022
       80     1.3567              1.3530               0.0037
       120    1.5563              1.5505               0.0058
0.75   40     1.2045              1.2013               0.0032
       80     1.4018              1.3971               0.0047
       120    1.6279              1.6212               0.0067
0.79   40     1.2236              1.2210               0.0026
       80     1.4382              1.4343               0.0039
       120    1.6804              1.6741               0.0063
0.81   40     1.2319              1.2292               0.0027
       80     1.4537              1.4494               0.0043
       120    1.7052              1.6983               0.0069
0.90   40     0.9251              0.9263               0.0012
       80     0.8577              0.8602               0.0025
       120    0.7954              0.7990               0.0036
0.97   40     0.8903              0.8895               0.0008
       80     0.8109              0.8127               0.0018
       120    0.7381              0.7415               0.0034
Table 5. Sensitivity analysis: absolute error between analytical DFP solution and neural network prediction at t = 20 for various k and α values with fixed a = 0 and h = 4 .
k      α      Absolute Error
0.75   0.25   0.0028
       0.50   0.0032
       0.75   0.0039
0.81   0.25   0.0024
       0.50   0.0027
       0.75   0.0034
0.97   0.25   0.0019
       0.50   0.0021
       0.75   0.0028
Table 6. Neural network prediction accuracy for various k values with noise level 0.05 in initial condition x ( 1 ) for Example 2 ( a = h = 1 , α = 0.5 ).
k      Mean Absolute Error (MAE)
1.30   0.0040
1.39   0.0045
1.41   0.0052
1.42   0.0055
1.51   0.0061
1.70   0.0065
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
