Article

Analytical and Numerical Analysis of a Memory-Dependent Fractional Model for Behavioral Learning Dynamics

1 School of Software, Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi’an 710072, China
2 Department of Software and Computing Systems, University of Alicante, 03690 Alicante, Spain
3 Department of Applied Mathematics, University of Alicante, 03690 Alicante, Spain
4 Department of Mathematics, Faculty of Sciences, University of Ostrava, 70103 Ostrava, Czech Republic
5 Centre for Wireless Technology, CoE for Intelligent Network, Faculty of Artificial Intelligence & Engineering, Multimedia University, Persiaran Multimedia, Cyberjaya 63100, Selangor, Malaysia
* Authors to whom correspondence should be addressed.
Fractal Fract. 2025, 9(11), 710; https://doi.org/10.3390/fractalfract9110710
Submission received: 24 September 2025 / Revised: 1 November 2025 / Accepted: 1 November 2025 / Published: 4 November 2025

Abstract

Fractional differential equations offer a natural framework for describing systems in which present states are influenced by the past. This work presents a nonlinear Caputo-type fractional differential equation (FDE) with a nonlocal initial condition that models memory-dependent behavioral adaptation. The proposed framework uses a fractional-order derivative η ∈ (0, 1) to capture long-term memory effects. The existence and uniqueness of solutions are demonstrated by Banach’s and Krasnoselskii’s fixed-point theorems. Stability is analyzed through Ulam–Hyers and Ulam–Hyers–Rassias benchmarks, supported by sensitivity results on the kernel structure and fractional order. The model is further employed for behavioral despair and learned helplessness, capturing the role of delayed stimulus feedback in shaping cognitive adaptation. Numerical simulations based on a convolution quadrature fractional variational integrator (FVI–CQ) and the Adams–Bashforth–Moulton (ABM) scheme confirm convergence and accuracy. The proposed setup provides a compact computational and mathematical paradigm for analyzing systems characterized by nonlocal feedback and persistent memory.

1. Introduction

Differential equations describe the relationship between a variable and its rate of change, and they form a foundational framework for modeling dynamic systems in the natural and applied sciences (see [1,2]). In their most common form, ordinary differential equations (ODEs) involve a function of a single variable and its derivatives. A solution of a differential equation is a function that satisfies the specified relationship between the dependent variable and its derivatives. Researchers apply this framework across physics, biology, and engineering to model evolving system behavior (for more details, see [3,4,5]).
In behavioral neuroscience, ODEs are commonly used to model learning and memory processes. For instance, the model presented in [6,7] quantifies shock-response rates in animals exposed to aversive stimuli. This model does not consider complex behaviors, such as maze navigation, but instead focuses on the fundamental responses triggered by repeated shocks. The underlying dynamics are expressed by the following integro-differential equation
$$ z'(t) = z'(t_0) - \vartheta \int_{t_0}^{t} z'(\tau)\,\varphi(t-\tau)\,d\tau, \tag{1} $$
where z'(t) is the shock-response rate at time t, ϑ is a sensitivity parameter, and φ(t − τ) is a memory kernel that weights the effect of previous shocks. The integral term accumulates the influence of past experiences and thereby models memory-dependent changes in behavior. Data used to determine the effect of shock exposure were collected in two animal groups: an experimental group and a control group that was not exposed to shocks (see [8]). Figure 1 and Figure 2 below depict the cumulative number of shocks received by selected animals in the respective groups over time.
It has been empirically suggested that the retention of memory decays exponentially over time [9,10]. Hence, a suitable choice for the memory kernel is φ(t) = exp(−t/λ), where λ represents a characteristic memory timescale. With this exponential kernel, Equation (1) becomes
$$ z'(t) = z'(t_0) - \vartheta \exp\!\left(-\frac{t}{\lambda}\right) \int_{t_0}^{t} z'(\tau)\,\exp\!\left(\frac{\tau}{\lambda}\right) d\tau. \tag{2} $$
In Equation (2), the initial derivative z'(t_0) characterizes the animal’s initial reactivity to shocks, while the integral term incorporates the cumulative, decaying influence of past shocks on subsequent behavior.
Nevertheless, the ODE model presumes instantaneous or rapidly decaying memory effects, even though the influence of memory on behavior can persist over long time horizons (for more details, see [11,12]). This is common in neuroscience, where experimental observations suggest more complex memory dynamics [13], with past experiences influencing future behavior through non-local temporal interactions. Fractional-order differential equations (FDEs) are better suited to such long-term or non-exponential memory processes (see [14,15,16,17]).
Motivated by the limitations of classical ODEs in modeling long-term memory, we propose the following fractional-order model
$$ {}^{C}D^{\eta} z(t) = z'(t_0) - \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau, \qquad 0 < \eta < 1, \; t \in [t_0, T], \tag{3} $$
with the initial condition
$$ z(t_0) = z_0. \tag{4} $$
In the above system (3)–(4),
  • ^C D^η denotes the Caputo fractional derivative of order η, which captures memory effects over extended periods;
  • z ( t ) represents the behavioral response (e.g., cumulative shocks or response intensity) measured during the experiment;
  • ϑ ( t ) is a time-dependent sensitivity parameter controlling the strength of past experiences;
  • f ( z ( τ ) ) describes the animal’s behavioral adaptation or reaction to prior stimuli;
  • ϕ ( t , τ ) determines how past responses at time τ influence current behavior at time t.
This study examines a nonlinear memory-dependent system described by a Caputo-type fractional differential equation with a nonlocal initial condition. The model uses a fractional derivative of order η ( 0 , 1 ) to represent long-term memory in behavioral learning under delayed feedback conditions. The existence and uniqueness of solutions are proved using Banach’s contraction principle and Krasnoselskii’s fixed-point theorem. A contractive framework is established with explicit stability conditions based on the fractional order and kernel properties. Numerical solutions are obtained using the Adams–Bashforth–Moulton method [18] and a convolution-based fractional integrator [19].
The paper is structured as follows. Section 2 provides the necessary mathematical preliminaries and notation, whereas Section 3 employs fixed-point theorems to establish the existence and uniqueness of solutions under suitable contractivity conditions. Section 4 is dedicated to the stability analysis and to deriving the conditions under which the operator remains contractive. In Section 5, we discuss the sensitivity analysis of the proposed framework. The model is then applied to behavioral learning through numerical simulations and comparisons (Section 6). Lastly, Section 7 provides the conclusion and suggests possible future directions of work.

2. Preliminaries

This section presents key ideas and foundational results employed in the following analysis.
Definition 1 
([20]). Let α > 0 and let m ∈ ℕ be such that m − 1 < α < m. Assume that g ∈ C^m[0, +∞). The Caputo fractional derivative of order α of the function g is defined by
$$ {}^{C}D^{\alpha} g(t) := \frac{1}{\Gamma(m-\alpha)} \int_{0}^{t} (t-s)^{m-\alpha-1} g^{(m)}(s)\,ds, \qquad t > 0, $$
where Γ(·) denotes the Gamma function and g^{(m)} is the m-th derivative of g. The (left-sided) Riemann–Liouville fractional integral of order α > 0 is given by
$$ I^{\alpha} g(t) := \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} g(s)\,ds, \qquad t \ge 0. $$
The Caputo derivative and Riemann–Liouville integral satisfy the following fundamental identities:
$$ {}^{C}D^{\alpha} I^{\alpha} g(t) = g(t), \qquad I^{\alpha}\, {}^{C}D^{\alpha} g(t) = g(t) - \sum_{j=0}^{m-1} \frac{g^{(j)}(0^{+})}{j!}\, t^{j}. $$
Moreover, the general solution of the homogeneous fractional differential equation
$$ {}^{C}D^{\alpha} g(t) = 0, \qquad t > 0, $$
is a polynomial of degree m − 1 of the form
$$ g(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_{m-1} t^{m-1}, $$
where a_i ∈ ℝ are arbitrary constants for i = 0, 1, …, m − 1.
In the classical differential Equations (1) and (2), the term z'(t_0) often represents the initial rate of change and can be used as an initial condition. However, for Caputo fractional derivatives of order 0 < η < 1, such classical derivatives are generally undefined at the initial time t_0. Therefore, the model (3) adopts a parameter c_0 ∈ ℝ in place of z'(t_0), which captures the system’s initial tendency or responsiveness without violating the regularity assumptions required by the Caputo derivative. This substitution keeps the formulation mathematically consistent while preserving the behavioral interpretability of the model.
We now prove a lemma necessary for the analysis carried out in later sections.
Lemma 1. 
Let 0 < η < 1 and z ∈ C^1([t_0, T], ℝ) satisfy
$$ {}^{C}D^{\eta} z(t) = c_0 - \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau, \quad t \in [t_0,T], \qquad z(t_0) = z_0, \tag{9} $$
where ^C D^η denotes the Caputo fractional derivative of order 0 < η < 1 and c_0 ∈ ℝ is a given constant. Here, ϑ : [t_0, T] → ℝ is a continuous weighting function, f : ℝ → ℝ is a continuous nonlinear response function, and ϕ : [t_0, T]² → ℝ is a continuous memory kernel describing the effect of past states. Then z satisfies (9) if and only if it fulfills the following integral equation:
$$ z(t) = z_0 + \frac{c_0}{\Gamma(\eta+1)} (t-t_0)^{\eta} - \frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds, \qquad t \in [t_0, T]. \tag{10} $$
Proof. 
Assume first that z(t) satisfies (9). Applying the fractional integral operator I^η to both sides of the differential equation gives
$$ I^{\eta}\, {}^{C}D^{\eta} z(t) = I^{\eta}\!\left[ c_0 - \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau \right]. $$
Using the standard identity for the Caputo derivative of order 0 < η < 1 for z ∈ C¹,
$$ I^{\eta}\, {}^{C}D^{\eta} z(t) = z(t) - z(t_0), $$
we obtain
$$ z(t) = z_0 + I^{\eta}[c_0] - I^{\eta}\!\left[ \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau \right]. $$
Since I^η acts linearly and I^η[1] = (t − t_0)^η/Γ(η + 1), the first integral on the right-hand side yields
$$ I^{\eta}[c_0] = \frac{c_0}{\Gamma(\eta+1)} (t-t_0)^{\eta}. $$
Hence,
$$ z(t) = z_0 + \frac{c_0}{\Gamma(\eta+1)} (t-t_0)^{\eta} - I^{\eta}\!\left[ \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau \right]. $$
Writing out the fractional integral explicitly gives
$$ I^{\eta}\!\left[ \vartheta(t) \int_{t_0}^{t} f(z(\tau))\,\phi(t,\tau)\,d\tau \right] = \frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds. $$
Substituting this expression back into the previous equation yields (10).
Conversely, assume that z(t) satisfies (10). Applying the Caputo derivative ^C D^η to both sides and using the identities
$$ {}^{C}D^{\eta} (t-t_0)^{\eta} = \Gamma(\eta+1), \qquad {}^{C}D^{\eta} I^{\eta} g(t) = g(t), $$
together with the interchange of differentiation and integration under the continuity assumptions on f, ϑ, ϕ, we recover Equation (9). This completes the proof. □
In our analysis, we will employ the following classical fixed-point results.
Theorem 1 
(Banach’s contraction principle [21]). Let (Y, ρ) be a complete metric space and consider an operator T : Y → Y satisfying the contraction condition
$$ \rho(Tu, Tv) \le \gamma\, \rho(u,v), \qquad \forall\, u, v \in Y, $$
where 0 ≤ γ < 1. Then the operator T has a unique fixed point u* ∈ Y. Moreover, for any initial guess u_0 ∈ Y, the iterative sequence T^n(u_0) converges to u*; specifically,
$$ T^{n}(u_0) \to u^{*} \quad \text{as} \quad n \to \infty. $$
Theorem 2 
(Krasnoselskii’s fixed-point theorem [22]). Let M be a nonempty, bounded, convex, and closed subset of a Banach space Y. Suppose that operators T_1, T_2 : M → Y satisfy the conditions:
 1. T_1 u + T_2 v ∈ M for all u, v ∈ M;
 2. T_1 is continuous and compact;
 3. T_2 is a contraction operator.
Then the operator T_1 + T_2 admits at least one fixed point in the set M.

3. Analytical Investigation

To establish the well-posedness of solutions for the integral Equation (10) associated with the fractional differential Equation (9), we construct a functional framework consistent with both the operator’s structure and the required regularity of solutions. We consider the Banach space
$$ C([t_0,T], \mathbb{R}) = \{\, z : [t_0,T] \to \mathbb{R} \mid z \text{ is continuous} \,\}, $$
equipped with the norm
$$ \|z\| = \sup_{t \in [t_0,T]} |z(t)|. \tag{13} $$
We impose the following assumptions to ensure the validity of our analysis:
(A1)
The sensitivity function ϑ : [t_0, T] → ℝ is continuous and bounded. Hence, it is uniformly continuous on [t_0, T] and satisfies
$$ \|\vartheta\|_{\infty} = \sup_{t \in [t_0,T]} |\vartheta(t)| < \infty. $$
(A2)
The nonlinear function f : ℝ → ℝ is globally Lipschitz-continuous with constant L_f > 0; that is,
$$ |f(u) - f(v)| \le L_f\, |u - v|, \qquad \forall\, u, v \in \mathbb{R}. $$
Consequently, f is continuous and bounded on every bounded subset of ℝ.
(A3)
The memory kernel ϕ : [t_0, T]² → ℝ is continuous on the compact set
$$ K = \{\, (t,\tau) : t_0 \le \tau \le t \le T \,\}, $$
and measurable in both variables. Therefore, it is bounded, and there exists a constant M_ϕ > 0 such that
$$ |\phi(t,\tau)| \le M_{\phi}, \qquad \forall\, (t,\tau) \in K. $$
This framework enables the application of standard fixed-point results, leading directly to our main result.
Theorem 3. 
Let Z = C([t_0, T], ℝ) be the Banach space equipped with the supremum norm defined in (13), and let T : Z → Z be the operator associated with the integral equation
$$ (Tz)(t) = z_0 + \frac{c_0}{\Gamma(\eta+1)} (t-t_0)^{\eta} - \frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds, \qquad t \in [t_0,T]. \tag{14} $$
Assume that the conditions (A1)–(A3) hold. Then there exists a constant α ∈ [0, 1), explicitly given by
$$ \alpha = \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)}, $$
such that, for all z_1, z_2 ∈ Z,
$$ \|Tz_1 - Tz_2\| \le \alpha\, \|z_1 - z_2\|. $$
Consequently:
(i)
If the parameters satisfy
$$ \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)} < 1, $$
i.e., if either the interval length (T − t_0) is sufficiently small or the product L_f ‖ϑ‖_∞ M_ϕ is suitably bounded, then T is a strict contraction on Z.
(ii)
By Banach’s fixed-point theorem, T admits a unique fixed point z ∈ Z satisfying Tz = z.
(iii)
The function z coincides with the unique solution to both the integral Equation (10) and the corresponding fractional differential Equation (9).
Proof. 
We first verify that the operator T : Z → Z, defined by (14), is well-defined. Under assumptions (A1)–(A3), the functions ϑ, f, and ϕ are continuous and bounded, ensuring that for each z ∈ Z, the integral operator in (14) is finite and yields a continuous function on the compact interval [t_0, T]. Therefore, T maps continuous functions into continuous functions, making T : Z → Z well-defined.
Next, we establish the contraction property. Let z_1, z_2 ∈ Z. Then, for each t ∈ [t_0, T], we have
$$ |(Tz_1)(t) - (Tz_2)(t)| = \frac{1}{\Gamma(\eta)} \left| \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} \big[ f(z_1(\tau)) - f(z_2(\tau)) \big]\, \phi(s,\tau)\,d\tau\,ds \right|. $$
Using the Lipschitz continuity assumption (A2) for f and the boundedness assumptions (A1) and (A3) for ϑ and ϕ, respectively, we obtain
$$ |(Tz_1)(t) - (Tz_2)(t)| \le \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}}{\Gamma(\eta)}\, \|z_1 - z_2\| \int_{t_0}^{t} (t-s)^{\eta-1} (s-t_0)\,ds. $$
Evaluating the integral explicitly, we set s = t_0 + (t − t_0)u, where u ∈ [0, 1], so that ds = (t − t_0) du. Substituting and simplifying, we obtain
$$ \int_{t_0}^{t} (t-s)^{\eta-1}(s-t_0)\,ds = (t-t_0)^{\eta+1} \int_{0}^{1} (1-u)^{\eta-1} u\,du. $$
The remaining integral equals the Beta function B(η, 2), i.e.,
$$ \int_{0}^{1} (1-u)^{\eta-1} u\,du = \frac{\Gamma(\eta)\,\Gamma(2)}{\Gamma(\eta+2)} = \frac{1}{\eta(\eta+1)}. $$
Hence,
$$ \int_{t_0}^{t} (t-s)^{\eta-1}(s-t_0)\,ds = \frac{(t-t_0)^{\eta+1}}{\eta(\eta+1)}. $$
Thus, since Γ(η) ≥ 1 for 0 < η ≤ 1,
$$ |(Tz_1)(t) - (Tz_2)(t)| \le \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\Gamma(\eta)\,\eta(\eta+1)}\, \|z_1 - z_2\| \le \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)}\, \|z_1 - z_2\|. $$
Taking the supremum over t ∈ [t_0, T], we obtain
$$ \|Tz_1 - Tz_2\| \le \alpha\, \|z_1 - z_2\|, $$
with the contraction constant given by
$$ \alpha = \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)}. $$
By choosing a sufficiently small interval length T − t_0 or suitably restricting the parameters L_f, ‖ϑ‖_∞, and M_ϕ, we ensure that α ∈ [0, 1). Since Z is complete with respect to the supremum norm defined in (13), Banach’s fixed-point theorem (Theorem 1) guarantees that the operator T possesses a unique fixed point z ∈ Z, satisfying Tz = z.
Finally, the equivalence between solutions of the integral Equation (10) and the fractional differential Equation (9) has already been established (Lemma 1). Consequently, the unique fixed point z found here is precisely the unique solution to both equations. Therefore, the proof is complete. □
Remark 1. 
It is important to note that the contraction constant α = L_f ‖ϑ‖_∞ M_ϕ (T − t_0)^{η+1}/(η(η+1)) depends on the interval length (T − t_0). Since α grows as (T − t_0)^{η+1}, we have α → ∞ as T → ∞. Therefore, Banach’s fixed-point theorem (Theorem 1) guarantees a unique solution only on finite intervals for which α < 1. For unbounded time domains, the analysis must be carried out locally on successive bounded subintervals of [t_0, ∞).
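For quick parameter screening, the contraction constant can be evaluated directly from its closed form. The following minimal Python helper (the function and argument names are ours, not taken from the paper) reproduces, for instance, the value α = 1/6 obtained in Example 1 below and the value α ≈ 0.84 used in Section 6:

```python
def contraction_constant(L_f, theta_sup, M_phi, interval, eta):
    """alpha = L_f * ||theta||_inf * M_phi * (T - t0)^(eta + 1) / (eta * (eta + 1))."""
    return L_f * theta_sup * M_phi * interval ** (eta + 1) / (eta * (eta + 1))

# Example 1 parameters: L_f = ||theta||_inf = M_phi = 1, T - t0 = 1/4, eta = 1/2
print(contraction_constant(1.0, 1.0, 1.0, 0.25, 0.5))   # 0.1666... = 1/6 < 1
# Section 6 application: unit bounds, T - t0 = 1, eta = 0.7
print(contraction_constant(1.0, 1.0, 1.0, 1.0, 0.7))    # ~0.8403 < 1
```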
Corollary 1. 
Under assumptions (A1)–(A3) and the conditions of Theorem 3, the unique solution z(t) of the fractional differential Equation (9) depends continuously on the initial condition. Precisely, if z(t; z_0) and z(t; z̃_0) are solutions corresponding to initial conditions z_0 and z̃_0, respectively, then there exists a constant C > 0 such that
$$ \|z(\cdot\,; z_0) - z(\cdot\,; \tilde{z}_0)\| \le C\, |z_0 - \tilde{z}_0|. $$
Proof. 
Let z(t; z_0) and z(t; z̃_0) denote the solutions of the integral Equation (14) corresponding to initial values z_0 and z̃_0, respectively. Then,
$$ z(t; z_0) - z(t; \tilde{z}_0) = (z_0 - \tilde{z}_0) + \big[ T z(t; z_0) - T z(t; \tilde{z}_0) \big], $$
where T is the operator defined in (14). Taking the supremum norm on both sides and applying the contraction property of T, we obtain
$$ \|z(\cdot\,; z_0) - z(\cdot\,; \tilde{z}_0)\| \le |z_0 - \tilde{z}_0| + \alpha\, \|z(\cdot\,; z_0) - z(\cdot\,; \tilde{z}_0)\|. $$
Rearranging terms gives
$$ \|z(\cdot\,; z_0) - z(\cdot\,; \tilde{z}_0)\| \le \frac{1}{1-\alpha}\, |z_0 - \tilde{z}_0|. $$
The result follows by setting C = 1/(1 − α), where α ∈ [0, 1). □
Theorem 4. 
Let Z = C([t_0, T], ℝ) be the Banach space equipped with the supremum norm defined in (13). Define operators T_1, T_2 : Z → Z by
$$ T_1(z)(t) = z_0 + \frac{c_0}{\Gamma(\eta+1)} (t-t_0)^{\eta}, \tag{16} $$
and
$$ T_2(z)(t) = -\frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds. \tag{17} $$
Suppose the following conditions hold:
(H1)
The operator T_1 is continuous and compact on Z.
(H2)
The operator T_2 is a contraction on Z, i.e., there exists a constant α ∈ [0, 1) such that
$$ \|T_2 z_1 - T_2 z_2\| \le \alpha\, \|z_1 - z_2\|, \qquad \forall\, z_1, z_2 \in Z. $$
(H3)
There exists r > 0 such that T_1 z_1 + T_2 z_2 ∈ B_r(0) for all z_1, z_2 ∈ B_r(0).
Then, the integral Equation (10), and equivalently the fractional differential Equation (9), admits at least one solution z ∈ Z.
Proof. 
Define the operator T : Z → Z by
$$ T(z) = T_1(z) + T_2(z), \qquad z \in Z. $$
The operator T_1 defined by (16) maps each z ∈ Z to the fixed function z_0 + c_0 (t − t_0)^η/Γ(η + 1), independent of z. Hence, the image of T_1 is a singleton subset of Z, which is trivially compact, and T_1 is continuous.
The operator T_2 defined in (17) satisfies the contraction condition. As shown in Theorem 3, using assumptions (A1)–(A3), we obtain
$$ \|T_2 z_1 - T_2 z_2\| \le \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)}\, \|z_1 - z_2\|, \qquad \forall\, z_1, z_2 \in Z. $$
Therefore, T_2 is a contraction with constant
$$ \alpha = \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)} < 1, $$
provided T − t_0 is sufficiently small or the parameters are appropriately bounded.
Next, we verify the invariance condition. Let z_1, z_2 ∈ B_r(0) ⊂ Z. Then,
$$ \|T_1 z_1 + T_2 z_2\| \le \|T_1 z_1\| + \|T_2 z_2\| \le |z_0| + \frac{|c_0|}{\Gamma(\eta+1)} (T-t_0)^{\eta} + \alpha r. $$
Set
$$ \tilde{C} = |z_0| + \frac{|c_0|}{\Gamma(\eta+1)} (T-t_0)^{\eta}. $$
To ensure T_1 z_1 + T_2 z_2 ∈ B_r(0), we choose
$$ r \ge \frac{\tilde{C}}{1-\alpha}. $$
Thus, the closed ball B_r(0) is invariant under T = T_1 + T_2. By Krasnoselskii’s fixed-point theorem (Theorem 2), T admits at least one fixed point z ∈ B_r(0) ⊂ Z, satisfying Tz = z.
Hence, z is a solution to the integral Equation (10), and equivalently, the fractional differential Equation (9). The proof is complete. □
Example 1. 
Consider the following fractional differential equation (FDE) on the interval [0, 1/4] with fractional order η = 1/2:
$$ {}^{C}D^{1/2} z(t) = c_0 - e^{-t} \int_{0}^{t} \sin(z(\tau))\, e^{-(t-\tau)}\,d\tau, \qquad t \in [0, 1/4], \tag{18} $$
subject to the initial condition z(0) = 0, where c_0 = 1 is treated as a model parameter representing the initial response tendency. The model components are specified as follows:
$$ \vartheta(t) = e^{-t}, \qquad f(z) = \sin(z), \qquad \phi(t,\tau) = e^{-(t-\tau)}. $$
These functions satisfy assumptions (A1)–(A3): the functions ϑ(t) and ϕ(t, τ) are continuous and bounded by 1 on their respective domains, and f(z) is Lipschitz-continuous with Lipschitz constant L_f = 1.
The associated integral equation, derived from the Caputo formulation with z(0) = 0 and Γ(3/2) = √π/2, is given by
$$ T(z)(t) = \frac{2}{\sqrt{\pi}}\, t^{1/2} - \frac{1}{\sqrt{\pi}} \int_{0}^{t} \frac{e^{-s}}{(t-s)^{1/2}} \int_{0}^{s} \sin(z(\tau))\, e^{-(s-\tau)}\,d\tau\,ds. \tag{19} $$
To verify the contraction condition required by Theorem 3, we compute the constant α using
$$ L_f = 1, \qquad \|\vartheta\|_{\infty} = 1, \qquad M_{\phi} = 1, \qquad T - t_0 = \tfrac{1}{4}, \qquad \eta = \tfrac{1}{2}. $$
This yields
$$ \alpha = \frac{(T-t_0)^{\eta+1}}{\eta(\eta+1)} = \frac{1}{6}. $$
Since α = 1/6 < 1, Theorem 3 guarantees the existence and uniqueness of a solution to the integral Equation (19) and the associated FDE (18).
We now verify the conditions of Theorem 4. Define the operators
$$ T_1(z)(t) = z_0 + \frac{c_0}{\Gamma(3/2)}\, t^{1/2}, \qquad T_2(z)(t) = -\frac{1}{\Gamma(1/2)} \int_{0}^{t} (t-s)^{-1/2}\, \vartheta(s) \int_{0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds. $$
With z_0 = 0, c_0 = 1, and Γ(3/2) = √π/2, we compute
$$ \|T_1 z\| \le \frac{2}{\sqrt{\pi}}\, (T-t_0)^{1/2} = \frac{1}{\sqrt{\pi}}. $$
Since ‖T_2 z‖ ≤ α‖z‖ with α = 1/6, the invariance condition
$$ \|T_1 z_1 + T_2 z_2\| \le \|T_1 z_1\| + \|T_2 z_2\| \le \frac{1}{\sqrt{\pi}} + \alpha r \le r $$
is satisfied by choosing r ≥ 6/(5√π).
Hence, the closed ball B_r(0) of radius r = 6/(5√π) is invariant under the operator T = T_1 + T_2, and all the conditions of Theorem 4 are satisfied. Therefore, the existence (via Theorem 4) and the uniqueness (via Theorem 3) of the solution to the FDE (18) and the associated integral Equation (19) are guaranteed.
The convergence behavior of the Picard iteration scheme applied to the integral Formulation (19) is illustrated in Figure 3. The first and second iterates, denoted z 1 ( t ) and z 2 ( t ) , are plotted over the interval [ 0 , 1 4 ] . The visibly close agreement between the two iterates indicates that the sequence { z n ( t ) } generated by successive applications of the integral operator is rapidly stabilizing. This observation is further corroborated by the error profile in Figure 4, which plots the absolute difference | z 2 ( t ) z 1 ( t ) | . The error remains small throughout the domain and decays near t = 0 , demonstrating uniform convergence near the initial condition and verifying that the contraction condition derived analytically is reflected numerically. These figures support the theoretical guarantees of existence, uniqueness, and convergence established via Theorems 3 and 4.
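The iterates shown in Figures 3 and 4 can be generated with a straightforward discretization of the operator (19). The sketch below is our own minimal implementation, not the authors’ code: the inner memory integral is handled with the trapezoid rule on the solution grid, the weakly singular outer integral with a midpoint rule that avoids evaluating (t − s)^{−1/2} at s = t, and the grid sizes are arbitrary choices.

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)

def inner(z_vals, t_grid, s):
    """G(s) = int_0^s sin(z(tau)) e^{-(s - tau)} dtau via the trapezoid rule,
    with z interpolated linearly from the current iterate."""
    taus = np.append(t_grid[t_grid < s], s)
    if taus.size < 2:
        return 0.0
    vals = np.sin(np.interp(taus, t_grid, z_vals)) * np.exp(-(s - taus))
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(taus))

def picard_step(z_vals, t_grid, m=200):
    """One application of the integral operator T in (19) (eta = 1/2, c0 = 1, z0 = 0)."""
    out = np.zeros_like(t_grid)
    for i, t in enumerate(t_grid[1:], start=1):
        s_edges = np.linspace(0.0, t, m + 1)
        s_mid = 0.5 * (s_edges[:-1] + s_edges[1:])
        ds = s_edges[1] - s_edges[0]
        g = np.array([inner(z_vals, t_grid, s) for s in s_mid])
        outer = np.sum(np.exp(-s_mid) * (t - s_mid) ** (-0.5) * g) * ds
        out[i] = 2.0 / SQRT_PI * np.sqrt(t) - outer / SQRT_PI
    return out

t = np.linspace(0.0, 0.25, 26)
z1 = picard_step(np.zeros_like(t), t)   # first iterate: (2/sqrt(pi)) t^{1/2}
z2 = picard_step(z1, t)                 # second iterate
print("max |z2 - z1| =", np.max(np.abs(z2 - z1)))
```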

4. Stability Analysis

In this section, we discuss the stability of the proposed fractional differential equation (FDE) model (3)–(4). For further details on stability, see [23,24,25] and the references cited therein. We begin with the following result.
Theorem 5. 
Let μ : [t_0, T] → ℝ⁺ be a continuous and integrable function. Suppose y ∈ C([t_0, T], ℝ) is continuously differentiable and satisfies the inequality
$$ \left| {}^{C}D^{\eta} y(t) - c_0 + \vartheta(t) \int_{t_0}^{t} f(y(\tau))\,\phi(t,\tau)\,d\tau \right| \le \mu(t), \qquad t \in [t_0,T], $$
where ^C D^η denotes the Caputo derivative of order 0 < η < 1. Then there exists a unique solution x ∈ C([t_0, T], ℝ) to the integral Equation (10) (and hence to the FDE (9)) such that
$$ \|y - x\| \le \varrho\, \|\mu\|, $$
where
$$ \varrho = \frac{(T-t_0)^{\eta}}{(1-\alpha)\,\Gamma(\eta+1)}, \tag{20} $$
and α ∈ [0, 1) is the contraction constant defined in Theorem 3.
Proof. 
Let y ∈ C([t_0, T], ℝ) be an approximate solution satisfying the perturbed inequality. Applying the fractional integral operator I^η to both sides of
$$ {}^{C}D^{\eta} y(t) = c_0 - \vartheta(t) \int_{t_0}^{t} f(y(\tau))\,\phi(t,\tau)\,d\tau + \epsilon(t), $$
with |ϵ(t)| ≤ μ(t), and using the identity I^η[^C D^η y](t) = y(t) − y(t_0), we obtain
$$ y(t) = z_0 + \frac{c_0}{\Gamma(\eta+1)}(t-t_0)^{\eta} - \frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(y(\tau))\,\phi(s,\tau)\,d\tau\,ds + I^{\eta}[\epsilon](t). $$
Let T be the integral operator defined by
$$ T(y)(t) := z_0 + \frac{c_0}{\Gamma(\eta+1)}(t-t_0)^{\eta} - \frac{1}{\Gamma(\eta)} \int_{t_0}^{t} (t-s)^{\eta-1} \vartheta(s) \int_{t_0}^{s} f(y(\tau))\,\phi(s,\tau)\,d\tau\,ds. $$
Then
$$ y(t) = T(y)(t) + I^{\eta}[\epsilon](t). $$
Let x ∈ Z be the unique fixed point of T, guaranteed by Theorem 3, satisfying T(x) = x. So
$$ |y(t) - x(t)| \le |T(y)(t) - T(x)(t)| + |I^{\eta}[\epsilon](t)|. $$
Taking the supremum norm on both sides gives
$$ \|y - x\| \le \|T(y) - T(x)\| + \|I^{\eta}[\epsilon]\| \le \alpha\, \|y - x\| + \|I^{\eta}[\mu]\|. $$
Since T is a strict contraction with 0 ≤ α < 1 (as established in Theorem 3), rearranging yields
$$ \|y - x\| \le \frac{1}{1-\alpha}\, \|I^{\eta}[\mu]\|, $$
and since
$$ \|I^{\eta}[\mu]\| \le \frac{(T-t_0)^{\eta}}{\Gamma(\eta+1)}\, \|\mu\|, $$
we conclude
$$ \|y - x\| \le \varrho\, \|\mu\|, $$
where ϱ is defined in (20). In particular, when μ ≡ 0 (equivalently, ϵ ≡ 0), the bound gives ‖y − x‖ = 0, so y = x and the unperturbed approximate solution coincides with the exact solution of the proposed fractional model. This establishes the desired Ulam–Hyers–Rassias stability result. □
Corollary 2. 
Let ζ > 0 be given. Then there exists δ > 0 such that for every continuously differentiable function y ∈ C([t_0, T], ℝ) satisfying
$$ \left| {}^{C}D^{\eta} y(t) - c_0 + \vartheta(t) \int_{t_0}^{t} f(y(\tau))\,\phi(t,\tau)\,d\tau \right| \le \delta, \qquad t \in [t_0,T], $$
there exists a unique solution x ∈ C([t_0, T], ℝ) to the FDE (3)–(4) such that
$$ \|y - x\| \le \zeta. $$
Proof. 
The result follows directly from Theorem 5 by setting μ(t) ≡ δ for all t ∈ [t_0, T] and choosing δ = ζ/ϱ, with ϱ given in (20). □

5. Parameter Sensitivity Analysis

This section analyzes the influence of key parameters on the dynamics of the fractional-order system (3)–(4). Numerical experiments were conducted to study the sensitivity of the solution z ( t ) with respect to the fractional order η , the response parameter c 0 , the sensitivity function ϑ ( t ) , the initial state z 0 , and the stability index α .
Figure 5 shows the effect of the memory index η ∈ (0, 1) on z(t) over the interval [0, T] with T = 0.25. The simulations were performed with z_0 = 0 using Picard iteration for η = 0.1, 0.2, …, 0.9. The results indicate that smaller values of η produce sharper trajectories, reflecting stronger memory effects, while larger η values yield smoother and slower responses. These findings demonstrate the central role of η in controlling memory depth and temporal adaptation in the model.
Figure 6 depicts the sensitivity of the model to changes in the parameter c_0, which appears linearly in the Caputo integral formulation via the term c_0 (t − t_0)^η/Γ(η + 1). As expected, increasing c_0 results in uniformly elevated trajectories, reflecting a stronger immediate response to stimuli. The divergence between curves becomes more pronounced over time due to memory retention effects. Thus, c_0 serves as a tunable gain factor influencing the early-phase learning or adaptation intensity.
On the other hand, Figure 7 and Figure 8 provide a comparative view of the influence of the time-dependent sensitivity function ϑ(t) and the initial state z_0. In Figure 7, we explore five oscillatory forms of ϑ(t) (sin(t), sin(2t), …, sin(5t)), each of which satisfies the boundedness condition ‖ϑ‖_∞ < ∞. Despite the increasing frequency of temporal modulation, the solution curves remain virtually identical over the interval [0, 0.25], suggesting that short-term dynamics are predominantly governed by η and c_0, rather than by the oscillatory structure of ϑ(t). This robustness under temporal perturbation reinforces the model’s stability in encoding early-time behavioral responses.
Figure 8 investigates sensitivity to the initial condition z 0 , revealing a consistent upward shift in the solution trajectories with increasing z 0 , while preserving the overall shape. The nearly parallel evolution of curves indicates that the system responds linearly to perturbations in initial state, confirming well-posedness and stability under initial data variability conditions, in agreement with Corollary 1.
To identify the admissible parameter regimes ensuring well-posedness via Theorem 3, we analyze the stability index α defined as
$$ \alpha = \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)}, $$
where L_f is the Lipschitz constant of the nonlinearity f, ‖ϑ‖_∞ is the supremum norm of the sensitivity function, and M_ϕ bounds the memory kernel ϕ(t, τ). Figure 9 displays a contour plot of α over a grid of (η, T − t_0) values. The red dashed curve indicates the critical threshold α = 1, demarcating the domain where the integral operator is contractive. The subregion α < 1 thus characterizes the guaranteed stability zone in which the existence and uniqueness of solutions are ensured. The visualization confirms that model stability is favored by larger values of η and shorter observation intervals T − t_0, consistent with theoretical predictions.
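The boundary of the stability zone in Figure 9 can be computed in closed form when the product L_f ‖ϑ‖_∞ M_ϕ equals one: setting α = 1 gives the critical interval length (T − t_0)_crit = (η(η + 1))^{1/(η+1)}. The short script below (our own sketch; the variable names and the η grid are arbitrary choices) tabulates this threshold:

```python
import numpy as np

# Critical interval length where alpha = 1, assuming L_f = ||theta||_inf = M_phi = 1;
# alpha < 1 (guaranteed well-posedness) for (T - t0) below this curve.
etas = np.linspace(0.1, 0.9, 9)
crit = (etas * (etas + 1.0)) ** (1.0 / (etas + 1.0))
for e, c in zip(etas, crit):
    print(f"eta = {e:.1f}  ->  (T - t0)_crit = {c:.3f}")
```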

6. Application to Behavioral Despair and Learned Helplessness

We demonstrate the applicability of the proposed fractional-order model (3)–(4) within the learned helplessness paradigm, a classical framework for modeling depressive-like behavior in response to uncontrollable stress. Repeated exposure to aversive stimuli, such as electric shocks, leads to suppression of escape behavior, even when escape becomes feasible.
We consider the following model
$$ {}^{C}D^{0.7} z(t) = -1 - e^{-2t} \int_{t_0}^{t} \frac{\tanh(z(\tau))}{(1+t-\tau)^{1.2}}\,d\tau, \qquad t \in [t_0, T], \tag{21} $$
subject to the nonlocal initial condition
$$ z(t_0) = \int_{t_0}^{T} e^{-\beta s}\, z(s)\,ds, \qquad \beta > 0. \tag{22} $$
The interval [t_0, T] corresponds to the duration of the behavioral trial, with t_0 denoting the onset of stimulus exposure and T the termination of observation. The function z(t) represents the temporal evolution of behavioral passivity. Memory dependence is introduced via the Caputo fractional derivative of order η = 0.7, modeling nonlocal effects of prior responses. The constant c_0 = −1 reflects a suppressed initial reactivity. The sensitivity function ϑ(t) = e^{−2t} models temporal attenuation of stimulus influence, whereas the nonlinearity is incorporated through f(z) = tanh(z), imposing bounded growth of passivity. The memory kernel ϕ(t, τ) = (1 + t − τ)^{−1.2} allows sustained influence of past events through power-law decay. The integral-type initial condition (22) encodes dependence on anticipated future behavior via exponential discounting. Figure 10 depicts the experimental setting and the feedback mechanisms linking aversive input, accumulated memory, and behavioral output.
We now verify that model (21)–(22) satisfies all analytical conditions for well-posedness and stability on the interval [0, 1]. The sensitivity function ϑ(t) is continuous and satisfies ‖ϑ‖_∞ = 1. The nonlinearity f(z) = tanh(z) is globally Lipschitz with constant L_f = 1, and the kernel ϕ(t, τ) is continuous and bounded by M_ϕ = 1. Thus, assumptions (A1)–(A3) hold.
To evaluate the nonlocal initial condition z(t_0) = ∫_0^1 e^{−βs} z(s) ds, we adopt the approximation z(s) ≈ 1 − e^{−3s}, which reflects the empirical pattern of behavioral saturation. With β = 1, this yields z(t_0) ≈ 0.39, used for computing the subsequent constants.
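Since the adopted approximation is elementary, the nonlocal initial value has the closed form ∫_0^1 e^{−s}(1 − e^{−3s}) ds = (1 − e^{−1}) − (1 − e^{−4})/4, which the one-line check below evaluates (a verification sketch of ours, not part of the original computation):

```python
from math import exp

# z(t0) = int_0^1 e^{-beta*s} (1 - e^{-3s}) ds with beta = 1, in closed form
z_t0 = (1.0 - exp(-1.0)) - (1.0 - exp(-4.0)) / 4.0
print(z_t0)   # ~0.3867, rounded to 0.39 in the text
```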
The contraction constant is given by
$$ \alpha = \frac{L_f\, \|\vartheta\|_{\infty}\, M_{\phi}\, (T-t_0)^{\eta+1}}{\eta(\eta+1)} \approx 0.84 < 1, $$
which satisfies the condition of Theorem 3. Hence, the model admits a unique solution.
To verify the existence via Theorem 4, we decompose the operator as
$$ T_1(z)(t) = z_0 + \frac{c_0}{\Gamma(1.7)}\, t^{0.7}, \qquad T_2(z)(t) = -\frac{1}{\Gamma(0.7)} \int_{0}^{t} (t-s)^{-0.3}\, \vartheta(s) \int_{0}^{s} f(z(\tau))\,\phi(s,\tau)\,d\tau\,ds. $$
Here, T_1 is compact and independent of z, while T_2 satisfies the contraction condition with the same constant α ≈ 0.84 < 1. The ball B_r(0) ⊂ C([0, 1], ℝ) with radius r ≈ 9.31 is invariant under T_1 + T_2, ensuring the existence of a solution.
For Ulam–Hyers–Rassias stability (Theorem 5), all hypotheses are satisfied. The resulting stability bound is
$$ \varrho = \frac{(T-t_0)^{0.7}}{(1-\alpha)\,\Gamma(1.7)} \approx 6.89. $$
Thus, the proposed model is analytically well-posed, admits a unique solution, and is stable under perturbation conditions in the sense of the Ulam–Hyers–Rassias criterion (see Theorem 5).
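The quoted constants follow directly from the formulas of Theorems 3–5 once z(t_0) ≈ 0.3867 is fixed. The short script below (our own recomputation; all names are illustrative) reproduces α ≈ 0.84, r ≈ 9.31, and ϱ ≈ 6.89 for η = 0.7, and also the constants used for η = 0.65 and η = 0.95 later in this section:

```python
from math import gamma

z_t0 = 0.3867                                          # approximated nonlocal initial value
c0_abs = L_f = theta_sup = M_phi = interval = 1.0      # |c0|, bound constants, T - t0

def wellposedness_constants(eta):
    alpha = L_f * theta_sup * M_phi * interval ** (eta + 1) / (eta * (eta + 1))
    c_tilde = z_t0 + c0_abs * interval ** eta / gamma(eta + 1)   # bound on ||T1 z||
    r = c_tilde / (1.0 - alpha)                                  # invariant-ball radius
    rho = interval ** eta / ((1.0 - alpha) * gamma(eta + 1))     # Ulam-Hyers-Rassias bound
    return alpha, c_tilde, r, rho

for eta in (0.65, 0.70, 0.95):
    a, c, r, rho = wellposedness_constants(eta)
    print(f"eta={eta:.2f}  alpha={a:.4f}  C~={c:.4f}  r={r:.2f}  rho={rho:.2f}")
```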
The analytical properties of the proposed model (21)–(22) were first examined through the contraction constant α, which determines the applicability of Theorem 3. Figure 11 and Figure 12 provide a parametric view of α as a function of the interval length T − t_0 and the fractional order η, respectively. In Figure 11, it is observed that for each fixed η, α increases monotonically with the interval length. Beyond a critical length, contractivity fails (α ≥ 1), thereby violating the uniqueness condition. This threshold is more restrictive for smaller η, indicating that systems with stronger memory effects impose tighter limitations on admissible domains. Figure 12 complements this by fixing several interval lengths and plotting α against η. Here, a monotonic decrease in α is evident, and for each fixed interval, there exists a threshold order above which α < 1. These two perspectives jointly delineate a well-defined stability region in the (η, T − t_0) parameter space and offer explicit criteria for selecting model parameters that guarantee analytical well-posedness.
The dynamical implications of varying the fractional order η are illustrated in Figure 13. As η decreases, solutions exhibit stronger memory retention, with a sharper initial decline in z(t), followed by a delayed transition to steady behavior. For higher η, this behavior progressively shifts toward classical ODE-like decay. This transition is most evident as η → 1, where memory effects diminish and the system responds more immediately to the external stimulus. These trends confirm the capacity of the fractional framework to interpolate between highly nonlocal and memoryless dynamics.
To evaluate numerical behavior, the Adams–Bashforth–Moulton method (ABMM) and a fractional variational integrator based on convolution quadrature (FVI–CQ) were applied to the model for η = 0.7. As shown in Figure 14, both schemes produced nearly identical trajectories. The corresponding error curve in Figure 15 shows that the absolute discrepancy remained below 1.2 × 10⁻² over the entire interval, with no signs of numerical instability. These results demonstrate that both methods offer robust numerical approximations despite their different algorithmic constructions, one based on predictor–corrector discretization and the other on convolution-weighted integration.
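For readers who wish to reproduce trajectories such as the one tabulated in Table 1, the sketch below implements the classical fractional Adams–Bashforth–Moulton predictor–corrector (Diethelm–Ford–Freed weights) for the reconstructed model (21), with the nonlocal initial value approximated as z(0) ≈ 0.3867 and the inner memory integral evaluated by the trapezoid rule. It is our own illustrative implementation under these stated assumptions, not the authors’ code, and it makes no attempt to reproduce the FVI–CQ scheme of [19].

```python
import numpy as np
from math import gamma

# Ingredients of the reconstructed model (21); z_init approximates the nonlocal condition (22)
eta, c0, z_init = 0.7, -1.0, 0.3867
theta = lambda t: np.exp(-2.0 * t)                 # sensitivity function
f = np.tanh                                        # bounded nonlinearity
phi = lambda t, tau: (1.0 + t - tau) ** (-1.2)     # power-law memory kernel

def rhs(t_hist, z_hist):
    """g(t_n) = c0 - theta(t_n) * int_{t0}^{t_n} f(z(tau)) phi(t_n, tau) dtau,
    with the memory integral approximated by the trapezoid rule over the history."""
    t_n = t_hist[-1]
    if len(t_hist) < 2:
        return c0
    vals = f(z_hist) * phi(t_n, t_hist)
    mem = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t_hist))
    return c0 - theta(t_n) * mem

def abm_caputo(T=1.0, N=100):
    """Fractional Adams-Bashforth-Moulton predictor-corrector (a sketch,
    not necessarily the authors' exact implementation)."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    z = np.empty(N + 1); z[0] = z_init
    g = np.empty(N + 1); g[0] = rhs(t[:1], z[:1])
    for n in range(N):
        j = np.arange(n + 1)
        b = (n + 1 - j) ** eta - (n - j) ** eta                  # predictor weights
        zp = z_init + h ** eta / gamma(eta + 1) * np.dot(b, g[: n + 1])
        gp = rhs(t[: n + 2], np.append(z[: n + 1], zp))          # RHS at t_{n+1}
        a = np.empty(n + 1)                                      # corrector weights
        a[0] = n ** (eta + 1) - (n - eta) * (n + 1) ** eta
        if n >= 1:
            k = j[1:]
            a[1:] = ((n - k + 2.0) ** (eta + 1) + (n - k) ** (eta + 1)
                     - 2.0 * (n - k + 1.0) ** (eta + 1))
        z[n + 1] = z_init + h ** eta / gamma(eta + 2) * (np.dot(a, g[: n + 1]) + gp)
        g[n + 1] = rhs(t[: n + 2], z[: n + 2])
    return t, z

t, z = abm_caputo()
print(np.round(z[::10], 4))   # coarse trace of z(t) on [0, 1]
```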
Further comparison of numerical accuracy is provided in Figure 16, where both schemes were evaluated with progressively refined step sizes h ∈ {0.1, 0.05, 0.025, 0.0125}. The maximum error relative to a reference solution was computed for each case. The near-parallel slopes of both error curves in the log–log scale confirm that the methods exhibit similar convergence behavior, with consistent order across refinements. No irregular error growth was observed, reinforcing the numerical stability of both approaches. These results confirm that the FVI–CQ method, despite its heavier formulation, can match the convergence accuracy of the ABMM.
Computational efficiency was then assessed via performance benchmarks with varying step sizes. Figure 17 shows that both methods incur increased runtime as h → 0, but the growth is more pronounced for FVI–CQ due to its reliance on convolution weights. For coarse discretizations, FVI–CQ performs marginally faster, but below h ≈ 0.02, its cost increases steeply. In contrast, the ABMM displays a smoother runtime curve with predictable scaling. This suggests that while both methods are viable, the ABMM is computationally more efficient in the fine-mesh regime and preferable when high-resolution accuracy is required.
Table 1 delineates the numerical reliability and computational complexity of the two methodologies at various time points. The results confirm that the maximum absolute error remains consistently low, with a root mean square error of approximately 7.79 × 10⁻³. Runtime differences are minimal, indicating near-equivalence in computational load per step.
The numerical analysis performed using two distinct approximations for the nonlocal term z(s) offers insight into the adaptability and robustness of the proposed fractional-order model (21)–(22). Table 2(a) reports the results for the power-law approximation z(s) ≈ s^{−0.9}, which enhances long-term memory retention, consistent with fractional dynamics exhibiting strong hereditary effects. The solution trajectories remain closely aligned across both numerical schemes, with the maximum absolute error confined below 4 × 10⁻². The discrepancy peaks near t = 0.4, corresponding to the region of maximal memory influence, after which the solutions converge as t → 1, indicating numerical stabilization in the asymptotic regime.
In contrast, Table 2(b) presents the corresponding values for the exponential-log approximation z(s) = exp(−a e^{bs}), with a = 1.0 and b = 1.0. This form induces a rapid saturation of memory, capturing early-time responsiveness followed by a stabilization phase. The numerical outputs remain consistent across solvers, with maximum deviations below 1.5 × 10⁻², the largest occurring near t = 0.7. This behavior reflects a sharper initial transition governed by the exponential decay, followed by smooth asymptotic alignment.
The convergence patterns illustrated in Figure 18 and Figure 19 reinforce the numerical consistency of both solvers for the two memory profiles. For both approximations, the log–log error curves exhibit a monotonic decline with mesh refinement and no evidence of instability or degradation in accuracy. The close alignment of the curves highlights the inherent numerical stability of the methods. Importantly, the biologically inspired approximation z(s) = exp(−a e^{bs}) retains convergence behavior comparable to the more singular power-law form. This suggests that the numerical solvers are sufficiently robust to accommodate both strong memory persistence and rapid adaptation, supporting their use in modeling a wide spectrum of memory-dependent cognitive processes.
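As a consistency check on the two memory profiles, the nonlocal initial values driving Table 2 can be recomputed directly. The snippet below (our own sketch, using scipy’s adaptive quadrature and a substitution s = u^{10} to tame the s^{−0.9} endpoint singularity) returns approximately 9.284 and 0.143, matching the t = 0 entries of Table 2(a) and 2(b):

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0
# (a) power-law profile z(s) ~ s^{-0.9}: substitute s = u^10, so that
#     e^{-beta s} s^{-0.9} ds = 10 e^{-beta u^10} du on [0, 1]
z_a, _ = quad(lambda u: 10.0 * np.exp(-beta * u**10), 0.0, 1.0)
# (b) saturating profile z(s) = exp(-a e^{b s}) with a = b = 1
z_b, _ = quad(lambda s: np.exp(-beta * s) * np.exp(-np.exp(s)), 0.0, 1.0)
print(z_a, z_b)   # ~9.2840 and ~0.1430
```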
To examine the influence of fractional memory strength on system dynamics, we compare numerical solutions of the proposed Caputo-type model (21)–(22) for two fractional orders, η = 0.65 and η = 0.95, both selected from within the analytically admissible region determined by the contraction-based stability analysis. The approximation z(s) ≈ 1 − e^{−3s} is used throughout, representing a biologically plausible learning profile that reflects rapid stimulus-driven adaptation followed by asymptotic saturation. Model parameters are fixed at β = 1.0, T = 1, and t_0 = 0.
For η = 0.65, the computed well-posedness constants are z_0 ≈ 0.3867, contraction constant α ≈ 0.9324, bounding constant C̃ ≈ 1.4977, with radius r ≈ 22.16 and bound ϱ ≈ 16.43. These values indicate that the operator remains weakly contractive, placing the model near the edge of the stable regime. In contrast, at η = 0.95, the values shift to z_0 ≈ 0.3867, α ≈ 0.5398, C̃ ≈ 1.4072, r ≈ 3.06, and ϱ ≈ 2.22, suggesting a significantly stronger contraction and tighter control over solution norms.
The numerical solutions for both regimes are presented in Figure 20 and Figure 21. For η = 0.65, moderate divergence is observed between the two methods in the mid-to-late interval, whereas the curves under η = 0.95 are almost indistinguishable. This behavior is quantified in the error profiles (Figure 22 and Figure 23), where the maximum error at η = 0.65 reaches 1.4 × 10⁻², concentrated around t ≈ 0.8, while at η = 0.95, the peak error remains below 1.6 × 10⁻³, an order of magnitude smaller. The smoother and more uniform error decay at higher fractional order reflects the diminished contribution of historical states, consistent with theoretical expectations of reduced memory depth.
Convergence comparisons (Figure 24 and Figure 25) further corroborate these observations. Both methods exhibit stable and nearly identical convergence behavior across discretizations, but the magnitude of the error decreases significantly for η = 0.95. This improved numerical performance aligns with the analytical contraction results; as η → 1, the operator becomes more contractive, and the dynamics progressively resemble classical differential systems.

7. Conclusions and Open Problems

In this study, we presented the nonlinear Caputo-type fractional differential equation (3)–(4) to describe memory-dependent behavioral adjustment. The existence and uniqueness of solutions were established using Banach’s contraction principle and Krasnoselskii’s fixed-point theorem. The stability analysis determined the permissible parameter range that guarantees the operator’s contractivity. The model was applied to behavioral despair and learned helplessness, demonstrating how feedback delay and memory accumulation shape passivity. Numerical simulations using the Adams–Bashforth–Moulton method and a convolution-based fractional integrator (FVI–CQ) confirmed accuracy and stability across varying fractional orders and memory kernels. Overall, the proposed framework offers a mathematically rigorous and computationally tractable approach for modeling systems with persistent memory and delayed feedback. It provides theoretical and numerical tools for understanding cognitive dynamics in biologically inspired models and can be extended to a broader class of fractional systems exhibiting nonlocality and adaptation.
Several questions remain open for future investigation. An important extension is to examine how stochastic perturbations and random memory kernels influence the long-term stability and convergence of the proposed system. Another promising direction is the development of fractional Lyapunov-based stability criteria and invariant manifold formulations for the nonlinear operator governing the model. A further challenge involves constructing data-driven or neural fractional frameworks capable of estimating both the memory kernel and the fractional order directly from experimental behavioral data.

Author Contributions

Conceptualization, A.T., J.-A.N.-S. and J.-J.T.; Methodology, A.T., W.A., A.M. and J.-J.T.; Validation, A.T., J.-A.N.-S. and A.M.; Formal analysis, J.-A.N.-S. and A.M.; Investigation, J.-A.N.-S., A.M. and J.-J.T.; Resources, W.A.; Data curation, W.A. and J.-J.T.; Writing—original draft and revision, A.T.; Visualization, A.T. and W.A.; Supervision, A.M. and J.-J.T.; Writing—review and editing, A.T. and J.-J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the University of Alicante, Spain; the Spanish Ministry of Science and Innovation; the Generalitat Valenciana, Spain; and the European Regional Development Fund (ERDF) through the following funding sources: At the national level, this work was funded by the following projects: TRIVIAL (PID2021-122263OB-C22) and CORTEX (PID2021-123956OB-I00), granted by MCIN/AEI/10.13039/501100011033 and, as appropriate, co-financed by “ERDF A way of making Europe”, the “European Union”, or the “European Union Next Generation EU/PRTR”. At the regional level, the Generalitat Valenciana (Conselleria d’Educació, Investigació, Cultura i Esport), Spain, provided funding for NL4DISMIS (CIPROM/2021/21).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors state that they do not have any conflicts of interest.

References

  1. Linot, A.J.; Burby, J.W.; Tang, Q.; Balaprakash, P.; Graham, M.D.; Maulik, R. Stabilized neural ordinary differential equations for long-time forecasting of dynamical systems. J. Comput. Phys. 2023, 474, 111838. [Google Scholar] [CrossRef]
  2. Turab, A.; Montoyo, A.; Nescolarde-Selva, J.A. Stability and numerical solutions for second-order ordinary differential equations with application in mechanical systems. J. Appl. Math. Comput. 2024, 70, 5103–5128. [Google Scholar] [CrossRef]
  3. Whitby, M.; Cardelli, L.; Kwiatkowska, M.; Laurenti, L.; Tribastone, M.; Tschaikowski, M. PID control of biochemical reaction networks. IEEE Trans. Autom. Control 2021, 67, 1023–1030. [Google Scholar] [CrossRef]
  4. Fröhlich, F.; Sorger, P.K. Fides: Reliable trust-region optimization for parameter estimation of ordinary differential equation models. PLoS Comput. Biol. 2022, 18, e1010322. [Google Scholar] [CrossRef]
  5. Turab, A.; Sintunavarat, W. A unique solution of the iterative boundary value problem for a second-order differential equation approached by fixed point results. Alex. Eng. J. 2021, 60, 5797–5802. [Google Scholar] [CrossRef]
  6. Brady, J.P.; Marmasse, C. Analysis of a simple avoidance situation: I. Experimental paradigm. Psychol. Rec. 1962, 12, 361. [Google Scholar] [CrossRef]
  7. Turab, A.; Montoyo, A.; Nescolarde-Selva, J.-A. Computational and analytical analysis of integral-differential equations for modeling avoidance learning behavior. J. Appl. Math. Comput. 2024, 70, 4423–4439. [Google Scholar] [CrossRef]
  8. Turab, A.; Nescolarde-Selva, J.A.; Ali, W.; Montoyo, A.; Tiang, J.J. Computational and Parameter-Sensitivity Analysis of Dual-Order Memory-Driven Fractional Differential Equations with an Application to Animal Learning. Fractal Fract. 2025, 9, 664. [Google Scholar] [CrossRef]
  9. Marmasse, C.; Brady, J.P. Analysis of a simple avoidance situation. II. A model. Bull. Math. Biophys. 1964, 26, 77–81. [Google Scholar] [CrossRef]
  10. Radvansky, G.A.; Doolen, A.C.; Pettijohn, K.A.; Ritchey, M. A new look at memory retention and forgetting. J. Exp. Psychol. Learn. Mem. Cogn. 2022, 48, 1698. [Google Scholar] [CrossRef]
  11. Arena, G.; Mulder, J.; Leenders, R.T.A. How fast do we forget our past social interactions? Understanding memory retention with parametric decays in relational event models. Netw. Sci. 2023, 11, 267–294. [Google Scholar] [CrossRef]
  12. Ginn, T.; Schreyer, L. Compartment models with memory. SIAM Rev. 2023, 65, 774–805. [Google Scholar] [CrossRef]
  13. Guo, J.; Albeshri, A.A.; Sanchez, Y.G. Psychological Memory Forgetting Model Using Linear Differential Equation Analysis. Fractals 2022, 30, 2240080. [Google Scholar] [CrossRef]
  14. Boško, D.; Pradip, D. Trends in Fixed Point Theory and Fractional Calculus. Axioms 2025, 14, 660. [Google Scholar] [CrossRef]
  15. Shah, K.; Arfan, M.; Ullah, A.; Al-Mdallal, Q.; Ansari, K.J.; Abdeljawad, T. Computational study on the dynamics of fractional order differential equations with applications. Chaos Solitons Fractals 2022, 157, 111955. [Google Scholar] [CrossRef]
  16. Hai, X.; Yu, Y.; Xu, C.; Ren, G. Stability analysis of fractional differential equations with the short-term memory property. Fract. Calc. Appl. Anal. 2022, 25, 962–994. [Google Scholar] [CrossRef]
  17. Ghosh, U.; Pal, S.; Banerjee, M. Memory effect on Bazykin’s prey-predator model: Stability and bifurcation analysis. Chaos Solitons Fractals 2021, 143, 110531. [Google Scholar] [CrossRef]
  18. Agarwal, P.; Singh, R.; ul Rehman, A. Numerical solution of hybrid mathematical model of dengue transmission with relapse and memory via Adam–Bashforth–Moulton predictor-corrector scheme. Chaos Solitons Fractals 2021, 143, 110564. [Google Scholar] [CrossRef]
  19. Hariz Belgacem, K.; Jiménez, F.; Ober-Blöbaum, S. Fractional Variational Integrators Based on Convolution Quadrature. J. Nonlinear Sci. 2025, 35, 38. [Google Scholar] [CrossRef]
  20. Almeida, R. A Caputo fractional derivative of a function with respect to another function. Commun. Nonlinear Sci. Numer. Simul. 2017, 44, 460–481. [Google Scholar] [CrossRef]
  21. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181. [Google Scholar] [CrossRef]
  22. Burton, T.A. A fixed-point theorem of Krasnoselskii. Appl. Math. Lett. 1998, 11, 85–88. [Google Scholar] [CrossRef]
  23. Vu, H.; Rassias, J.M.; Hoa, N.V. Hyers–Ulam stability for boundary value problem of fractional differential equations with κ-Caputo fractional derivative. Math. Methods Appl. Sci. 2023, 46, 438–460. [Google Scholar] [CrossRef]
  24. Wang, X.; Luo, D.; Zhu, Q. Ulam-Hyers stability of caputo type fuzzy fractional differential equations with time-delays. Chaos Solitons Fractals 2022, 156, 111822. [Google Scholar] [CrossRef]
  25. Matar, M.M.; Samei, M.E.; Etemad, S.; Amara, A.; Rezapour, S.; Alzabut, J. Stability analysis and existence criteria with numerical illustrations to fractional jerk differential system involving generalized Caputo derivative. Qual. Theory Dyn. Syst. 2024, 23, 111. [Google Scholar] [CrossRef]
Figure 1. Cumulative number of shocks in experimental group.
Figure 2. Cumulative number of shocks in control group.
Figure 3. Convergence of Picard iterations.
Figure 4. Error between 1st and 2nd iterations.
Figure 5. Sensitivity of the FDE to the fractional order η.
Figure 6. Sensitivity of the FDE to initial response tendency c_0.
Figure 7. Sensitivity of the FDE to ϑ(t).
Figure 8. Sensitivity of the FDE to initial state z_0.
Figure 9. Stability region for α < 1 in the (η, T − t_0) plane.
Figure 10. Schematic representation of the learned helplessness experiment and memory-dependent behavioral adaptation.
Figure 11. Contraction constant α vs. interval length.
Figure 12. Contraction constant α vs. fractional order.
Figure 13. Effect of fractional order η on system trajectory z(t).
Figure 14. Solution trajectories: ABMM vs. FVI–CQ.
Figure 15. Absolute error between ABMM and FVI–CQ methods.
Figure 16. Convergence of ABMM and FVI–CQ.
Figure 17. Runtime comparison across step sizes.
Figure 18. Convergence under z(s) ≈ s^{−0.9}.
Figure 19. Convergence under z(s) = exp(−a e^{bs}).
Figure 20. Solution under η = 0.65.
Figure 21. Solution under η = 0.95.
Figure 22. Error under η = 0.65.
Figure 23. Error under η = 0.95.
Figure 24. Convergence under η = 0.65.
Figure 25. Convergence under η = 0.95.
Table 1. Comparison of ABM and FVI–CQ solutions with maximum absolute error.

Time (t) | ABM Solution | FVI–CQ Solution | Max Absolute Error
0.0 | 0.386699469 | 0.386699469 | 0.000000000
0.1 | 0.162753641 | 0.158829175 | 0.003924466
0.2 | 0.022637161 | 0.019306585 | 0.003330576
0.3 | −0.093524523 | −0.093623229 | 0.000098706
0.4 | −0.195242941 | −0.191589931 | 0.003653010
0.5 | −0.287522273 | −0.280631648 | 0.006890625
0.6 | −0.373512444 | −0.364260504 | 0.009251941
0.7 | −0.455269703 | −0.444560628 | 0.010709075
0.8 | −0.534129972 | −0.522753222 | 0.011376751
0.9 | −0.610942172 | −0.599526058 | 0.011416114
1.0 | −0.686226707 | −0.675236516 | 0.010990191
Table 2. Comparison of ABM and FVI–CQ methods for two approximations of z(s).

(a) z(s) ≈ s^{−0.9}
t | ABM | FVI–CQ | Error
0.0 | 9.283972 | 9.283972 | 0.000000
0.1 | 9.047508 | 9.030485 | 0.017023
0.2 | 8.884673 | 8.854796 | 0.029877
0.3 | 8.744061 | 8.708307 | 0.035754
0.4 | 8.620177 | 8.583413 | 0.036764
0.5 | 8.509921 | 8.475100 | 0.034821
0.6 | 8.410788 | 8.379490 | 0.031298
0.7 | 8.320638 | 8.293536 | 0.027102
0.8 | 8.237656 | 8.214855 | 0.022801
0.9 | 8.160334 | 8.141610 | 0.018724
1.0 | 8.087441 | 8.072395 | 0.015046

(b) z(s) = exp(−a e^{bs}), a = 1.0, b = 1.0
t | ABM | FVI–CQ | Error
0.0 | 0.142952 | 0.142952 | 0.000000
0.1 | −0.077021 | −0.076981 | 0.000040
0.2 | −0.211087 | −0.207693 | 0.003393
0.3 | −0.321961 | −0.314472 | 0.007489
0.4 | −0.419953 | −0.409032 | 0.010921
0.5 | −0.510110 | −0.496858 | 0.013252
0.6 | −0.595338 | −0.580850 | 0.014487
0.7 | −0.677373 | −0.662569 | 0.014805
0.8 | −0.757246 | −0.742823 | 0.014423
0.9 | −0.835544 | −0.821994 | 0.013550
1.0 | −0.912580 | −0.900216 | 0.012364
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
