Entropy
  • Article
  • Open Access

11 December 2025

A Lyapunov-Based Analysis on the Almost Periodicity of Impulsive Conformable Reaction–Diffusion Neural Networks with Distributed Delays

1 Department of Mathematics, University of Texas at San Antonio, San Antonio, TX 78249, USA
2 Department of Mathematics, Technical University of Sofia, 8800 Sliven, Bulgaria
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.

Abstract

The focus of this research is the qualitative behavior of a reaction–diffusion neural network with distributed delays and conformable derivatives under impulsive perturbations. In particular, the almost periodic behavior of the proposed model is studied using a Lyapunov-based approach. By constructing an appropriate Lyapunov-type function, criteria that guarantee the existence and uniqueness of an almost periodic state are provided. The established criteria extend a few existing results on the almost periodicity of conformable models and contribute to the development of the field. In addition, the notion of global conformable exponential stability is introduced and analyzed for the developed model. A suitable example is discussed.

1. Introduction

The qualitative behavior of neural networks with reaction–diffusion terms has attracted the attention of researchers in many fields of science and engineering. The main reason for the intense interest in their study lies in their potential applications in the biosciences [1,2], secure communication [3], pattern formation [4], and many other areas [5,6,7]. In fact, the structure of a reaction–diffusion neural network model includes both reaction and diffusion terms, which arise naturally when dealing with systems whose behavior depends on both time and space. At the same time, the reaction–diffusion terms can affect the dynamic evolution of the process, which is a further reason for the growing interest in their study.
Moreover, delay effects are also present in real-world reaction–diffusion systems, which has greatly enhanced the study of delayed reaction–diffusion neural network models. Numerous researchers have investigated how constant, time-varying, distributed, and infinite delays can affect the dynamics of such models [8,9,10,11].
A further development of reaction–diffusion models is the consideration of impulsive effects. In fact, short-term perturbations are common in the evolution of reaction–diffusion models, which stimulates great interest in the study of impulsive reaction–diffusion neural networks [12,13,14,15]. On the one hand, impulsive effects can disrupt the performance of a neural network model; on the other hand, such effects can be used to efficiently control the desired qualitative behavior. Impulsive reaction–diffusion models are described primarily using impulsive differential equations as a modeling approach [16,17]. In addition, the theory of impulsive control [18,19] is of crucial importance in the study of their performance. Note that there are also several interesting results on the dynamics of impulsive reaction–diffusion neural networks with distributed delays [20,21]. In fact, distributed delays account for variable time delays that occur in real-world processes such as transcription and translation and are more realistic than fixed or time-varying delays.
Another innovative approach to neural network modeling involves the use of ideas from fractional calculus. Since the dynamics of reaction–diffusion neural networks are complex, the use of fractional-order derivatives leads to more flexible models. The ability of fractional-order neural network models to improve and generalize existing integer-order models and to increase their reliability and accuracy [22,23] has provoked widespread interest in the study of their characteristics. Numerous results on the qualitative theory and applications of fractional-order reaction–diffusion neural network models have recently been established [24,25,26,27,28].
Part of the development of the theory of fractional calculus is related to the introduction of new derivatives of fractional order [29,30]. This is motivated by the fact that the application of classical fractional derivatives to applied models leads to some complexities in their analysis. For example, the application of the fractional Lyapunov approach requires the use of a complicated chain rule. Furthermore, some of the classical fractional derivatives do not obey Leibniz's rule, and the corresponding Rolle's theorem and mean value theorem are not available. The conformable derivative introduced in [31,32] aims to avoid these complexities in the application of fractional-order derivatives. It is very promising from a computational point of view, as the corresponding definitions are limit-based. That is why research on conformable calculus is growing [33,34,35,36,37]. Furthermore, the advantages of conformable derivatives expand their use in mathematical modeling [38,39,40,41]. The extensive work conducted on the analysis of applied conformable mathematical models, including conformable neural network models [42,43], shows their importance for theory and applications.
Impulsive extensions of conformable systems have also recently been investigated [44,45,46]. The hybrid impulsive conformable calculus approach is also applied to some impulsive neural network models [47,48,49,50,51]. Note that research on impulsive conformable neural networks of reaction–diffusion type is still very rare. For example, in [47], the existence and stability of integral manifolds of impulsive HCV conformable neural network models with reaction–diffusion terms are investigated. The paper [48] studies the existence and uniqueness of almost periodic solutions of conformable impulsive reaction–diffusion neural network models by constructing suitable Lyapunov-like functions. However, the authors in [47,48] did not take into account delay effects. Therefore, research on impulsive conformable reaction–diffusion neural network models needs further development.
A review of the existing literature showed that most qualitative research on conformable neural network models is concerned with stability and synchronization properties [42,43,45,47,49,50]. Progress in research on the periodicity of such models is not satisfactory. The paper [44] considers a periodic boundary value problem for a class of impulsive conformable fractional integro-differential equations. However, in real-world problems, purely periodic behavior is rarely encountered. In addition, it has been shown in [52] that pure periodic solutions do not exist for fractional-order systems. Hence, almost periodicity is a natural alternative, which motivates extensive research on almost periodic solutions of fractional-order models [24,53,54,55,56]. This qualitative behavior is not fully studied for conformable neural network models. The almost periodic behavior of the solutions of impulsive conformable reaction–diffusion neural network models has been studied only in [48], without considering delay factors. Very recently, the quasi-periodic behavior and chaotic nature of a conformable fractional paraxial wave equation were observed in [57]. The goal of this research is to contribute to the development of the almost periodicity theory for impulsive conformable neural network models considering both reaction–diffusion terms and distributed delays.
In this paper, we will define an impulsive conformable reaction–diffusion neural network model with distributed delays. The almost periodic behavior of the solutions will be analyzed using the conformable Lyapunov approach.
The main contributions of our paper can be summarized as follows:
(i)
Using the hybrid impulsive conformable setting, an impulsive conformable reaction-diffusion model is constructed. Unlike [48], distributed delays are also considered. The explicit structure of the model is very general and includes some conformable models developed in the existing literature [42,43,47,49,50]. It is also very useful for overcoming a fundamental difficulty in estimating network performance from experiments, the fact that only some representative subsets of neurons can be measured simultaneously. In addition, impulsive (short-term) effects allow the application of impulsive control strategies. Entropy can also be applied to measure complexity in a neural network architecture [58];
(ii)
The concept of almost periodicity is introduced to the discontinuous impulsive conformable delayed reaction–diffusion model. The notion of almost periodicity extends the concepts of periodicity applied in [44] and is well suited for conformable models;
(iii)
By the construction of a suitable Lyapunov function, new criteria are provided for the existence, uniqueness, and global conformable exponential stability of almost periodic solutions.
Furthermore, a comprehensive reference is provided for researchers interested in impulsive and conformable neural network models, describing the existing literature and demonstrating the latest progress in their qualitative research.
The plan of the rest of the paper is as follows. In Section 2, some notes on conformable calculus are given. Then, the impulsive conformable reaction–diffusion neural network model with distributed delays is formulated. Definitions and lemmas related to the almost periodic notion and the Lyapunov approach are also presented. The main results on almost periodicity are established in Section 3. The global conformable exponential stability results are also presented. An example is presented in Section 4. Section 5 is the conclusion section, where a brief review of the main results of our research and their significance is presented, and extensions of our work are suggested.

2. Basic Theory—Model Formulation—Preliminary Notes

We consider the Euclidean n-dimensional space with elements $y = (y_1, y_2, \ldots, y_n)^T \in \mathbb{R}^n$, and denote $\mathbb{R}_+ = [0, \infty)$.

2.1. Conformable Calculus Notes

We will begin with some notations and properties related to conformable derivatives. Let $\bar{r} \geq 0$ and $q(r) : [\bar{r}, \infty) \to \mathbb{R}$.
Definition 1
([31,32]). Let $0 < \sigma \leq 1$. The conformable derivative of order σ with the lower limit $\bar{r}$ for the function $q(r)$ is defined by
$$T^{\sigma}_{\bar{r}} q(r) = \lim_{\varepsilon \to 0} \frac{q\big(r + \varepsilon (r - \bar{r})^{1-\sigma}\big) - q(r)}{\varepsilon}, \quad r > \bar{r}.$$
Note that a function can be σ-conformable differentiable at a point without being differentiable in the ordinary sense at that point [32].
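For smooth functions the limit in Definition 1 reduces to $(r - \bar{r})^{1-\sigma} q'(r)$, which gives a quick way to sanity-check the definition numerically. The following minimal Python sketch (not part of the paper; the test function and all parameter values are chosen only for illustration) compares the difference quotient with this closed form.

```python
import numpy as np

# Numerical sanity check of Definition 1: for a differentiable q,
#   T^sigma_{rbar} q(r) = (r - rbar)**(1 - sigma) * q'(r).
# The test function q and all parameter values below are illustrative only.

sigma, rbar, r, eps = 0.7, 1.0, 2.5, 1e-7
q  = lambda r: np.sin(r) + r**2          # a smooth test function
dq = lambda r: np.cos(r) + 2.0 * r       # its ordinary derivative

difference_quotient = (q(r + eps * (r - rbar)**(1.0 - sigma)) - q(r)) / eps
closed_form = (r - rbar)**(1.0 - sigma) * dq(r)

print(difference_quotient, closed_form)  # the two values agree up to O(eps)
```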
Denote by
$$B = \Big\{ \{r_k\} : r_k \in \mathbb{R},\ r_k < r_{k+1},\ r_k \neq 0,\ k \in \mathbb{Z},\ \lim_{k \to \pm\infty} r_k = \pm\infty \Big\}$$
the set of all unbounded and strictly increasing sequences, on which a distance $\rho\big(\{r_k^{(1)}\}, \{r_k^{(2)}\}\big)$ is defined, $\{r_k^{(j)}\} \in B$, $j = 1, 2$.
Next, following [45], for a function $q : [r_{k-1}, r_k) \to \mathbb{R}$, the conformable derivative of order σ starting from $r_{k-1}$ is defined as
$$T^{\sigma}_{r_{k-1}} q(r) = \lim_{\varepsilon \to 0} \frac{q\big(r + \varepsilon (r - r_{k-1})^{1-\sigma}\big) - q(r)}{\varepsilon},$$
and at $r_{k-1}$, we define
$$T^{\sigma}_{r_{k-1}} q(r_{k-1}) = \lim_{r \to r_{k-1}^+} T^{\sigma}_{r_{k-1}} q(r).$$
By $C^{\sigma}[[\bar{r}, \infty), \mathbb{R}]$ we denote the class of all functions $q(r)$ that have σ-conformable derivatives for any $r \in [\bar{r}, \infty)$.
Definition 2
([47,48,49]). The conformable integral of order $0 < \sigma \leq 1$ with a lower limit $\bar{r}$, of the function $q(r)$, is
$$I^{\sigma}_{\bar{r}} q(r) = \int_{\bar{r}}^{r} (s - \bar{r})^{\sigma - 1} q(s)\, ds.$$
The following properties of the conformable derivatives will be used in our analysis.
Lemma 1
([47,48,49]). For a function $q : [\bar{r}, \infty) \to \mathbb{R}$, $q \in C^{\sigma}[[\bar{r}, \infty), \mathbb{R}]$, $0 < \sigma \leq 1$, we have
(i) $I^{\sigma}_{\bar{r}}\big(T^{\sigma}_{\bar{r}} q(r)\big) = q(r) - q(\bar{r})$, $r > \bar{r}$;
(ii) If $y(q(r)) : [\bar{r}, \infty) \to \mathbb{R}$ is differentiable with respect to $q(r)$, then for any $r \in [\bar{r}, \infty)$ and $q(r) \neq 0$, we have
$$T^{\sigma}_{\bar{r}}\, y(q(r)) = y'(q(r))\, T^{\sigma}_{\bar{r}} q(r),$$
where $y'$ is the derivative of $y(\cdot)$.
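For a continuously differentiable $q$, property (i) can be checked directly (a short verification under this extra smoothness assumption, not taken from the paper): the kernel of the conformable integral cancels the factor $(s - \bar{r})^{1-\sigma}$ produced by the conformable derivative,
$$I^{\sigma}_{\bar{r}}\big(T^{\sigma}_{\bar{r}} q\big)(r) = \int_{\bar{r}}^{r} (s - \bar{r})^{\sigma-1}\,(s - \bar{r})^{1-\sigma} q'(s)\, ds = \int_{\bar{r}}^{r} q'(s)\, ds = q(r) - q(\bar{r}).$$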
Let Θ be a bounded domain in $\mathbb{R}^n$ with a smooth boundary $\partial\Theta$ and a positive measure $\mathrm{mes}\,\Theta > 0$ containing the origin. The time conformable derivative is defined as follows.
Definition 3
([47,48,49]). For a function $z = z(r, y)$, $z : [\bar{r}, \infty) \times \Theta \to \mathbb{R}$, the limit
$$T^{\sigma}_{\bar{r}} z(r, y) = \frac{\partial^{\sigma} z(r, y)}{\partial r^{\sigma}}\Big|_{\bar{r}} = \lim_{\varepsilon \to 0} \frac{z\big(r + \varepsilon (r - \bar{r})^{1-\sigma}, y\big) - z(r, y)}{\varepsilon}, \quad r > \bar{r},\ y \in \Theta,$$
is the conformable derivative along r of order σ, $0 < \sigma \leq 1$.
If $\bar{r} = r_{k-1}$, $k \in \mathbb{Z}$, then
$$T^{\sigma}_{r_{k-1}} z(r_{k-1}, y) = \lim_{r \to r_{k-1}^+} T^{\sigma}_{r_{k-1}} z(r, y).$$
Remark 1.
Functions that are σ-conformable differentiable with respect to time have the same properties as those of the σ-conformable differentiable functions listed in Lemma 1 [45,48].
Definition 4
([31,32]). The conformable exponential function $E_{\sigma}(\nu, \tau)$ for $0 < \sigma \leq 1$ is defined by
$$E_{\sigma}(\nu, \tau) = \exp\Big(\nu \frac{\tau^{\sigma}}{\sigma}\Big), \quad \nu \in \mathbb{R},\ \tau \in \mathbb{R}_+.$$
Remark 2.
More details on the conformable calculus can be found in [31,32,33,34,35,36,37], and for results related to impulsive conformable calculus, we refer to [44,45,46,47,48,49,50,51]. For the physical and geometrical interpretations of conformable derivatives, we refer to [36].
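The conformable exponential is elementary to evaluate, which is one of the computational attractions mentioned above. A minimal Python sketch (illustrative values only, not taken from the paper) shows that $E_\sigma(\nu,\tau)$ decays in τ when $\nu < 0$, the behavior on which the stability estimates of Section 3 rely.

```python
import numpy as np

# Conformable exponential of Definition 4: E_sigma(nu, tau) = exp(nu * tau**sigma / sigma).
def conformable_exp(sigma, nu, tau):
    tau = np.asarray(tau, dtype=float)
    return np.exp(nu * tau**sigma / sigma)

# For nu < 0 the values decrease monotonically in tau (illustrative parameters).
print(conformable_exp(0.9, -2.0, [0.0, 0.5, 1.0, 2.0, 4.0]))
```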

2.2. Model Formulation

For vector functions of the type
$$z(r, y) = \big(z_1(r, y), z_2(r, y), \ldots, z_m(r, y)\big)^T \in \mathbb{R}^m,$$
throughout this paper, we will apply the norm
$$\|z(r, \cdot)\| = \Big( \int_{\Theta} \sum_{i=1}^{m} z_i^2(r, y)\, dy \Big)^{1/2}.$$
Let the functions $a_{ij}$, $b_{ij}$, and $\Omega_i$ be continuous in $\mathbb{R}$, $D_{il} : \mathbb{R} \times \Theta \to \mathbb{R}_+$, and let the constants $c_i > 0$, $i, j = 1, 2, \ldots, m$, $l = 1, 2, \ldots, n$. For $\{r_k\} \in B$, we develop an impulsive conformable reaction–diffusion neural network model with distributed delays of the type
$$\begin{cases}
T^{\sigma}_{r_{k-1}} z_i(r, y) = \displaystyle\sum_{l=1}^{n} \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial z_i(r, y)}{\partial y_l} \Big) - c_i z_i(r, y) + \sum_{j=1}^{m} a_{ij}(r) f_j\big(z_j(r, y)\big) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{m} b_{ij}(r) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big(z_j(s, y)\big)\, ds + \Omega_i(r), \quad r \in [r_{k-1}, r_k), \\
z_i(r_k^+, y) = \gamma_{ik}\big(z_i(r_k, y)\big), \quad k \in \mathbb{Z},\ y \in \Theta,
\end{cases} \tag{1}$$
where $i = 1, 2, \ldots, m$, $m \geq 2$ corresponds to the number of neurons in the neural network model (1), $z = (z_1, z_2, \ldots, z_m)^T \in \mathbb{R}^m$, $z_i = z_i(r, y)$ denotes the state of the ith neuronal unit at time r and space point $y \in \Theta$, and $f_j$ represents the activation function of the jth unit. The functions $a_{ij}$ and $b_{ij}$ are the connection weights, the positive constant $c_i$ represents the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and external input, and $\Omega_i$ denotes the external input for the ith unit. The smooth functions $D_{il} = D_{il}(r, y) \geq 0$ correspond to the diffusion operators of transmission along the ith unit, and $K_{ij}$ is the delay kernel, $i, j = 1, 2, \ldots, m$, $l = 1, 2, \ldots, n$. The elements of the sequence $\{r_k\} \in B$ are the impulsive moments at which abrupt changes of the nodes $z_i(r, y)$ from the positions $z_i(r_k, y)$ to the positions $z_i(r_k^+, y)$ are observed, $\gamma_{ik}$ are the impulsive functions that measure the impulsive control effects on the nodes $z_i(r, y)$ at the instants $r_k$, and we have $z_i(r_k^-, y) = z_i(r_k, y)$ and $\Delta z_i(r_k, y) = z_i(r_k^+, y) - z_i(r_k, y) = \gamma_{ik}\big(z_i(r_k, y)\big) - z_i(r_k, y)$, $i = 1, 2, \ldots, m$, $k \in \mathbb{Z}$.
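As a complement to the formal definition, the following Python sketch simulates a deliberately simplified instance of a system of the form (1): one neuron, one spatial dimension, a constant diffusion coefficient, the exponential kernel $K(s) = e^{-s}$, zero Dirichlet boundary conditions, and linear impulses. This is not the authors' code, and every parameter value is an assumption made only for illustration; the conformable derivative is advanced with a simple conformable Euler step based on Definition 2 and Lemma 1(i), and the distributed-delay term is propagated through the auxiliary state $u(r, y) = \int_{-\infty}^{r} e^{-(r-s)} f(z(s, y))\, ds$, which satisfies $u' = f(z) - u$.

```python
import numpy as np

# Minimal simulation sketch of a simplified scalar (m = 1), one-dimensional version
# of model (1).  All parameter values are illustrative only.
sigma = 0.9        # conformable order
D     = 1.0        # constant diffusion coefficient
c     = 0.5        # self-inhibition rate
a, b  = 0.3, 0.2   # connection weights (instantaneous / distributed-delay terms)
Omega = 0.1        # external input
H     = 0.7        # linear impulsive factor: z(r_k^+) = H * z(r_k)
f     = np.tanh    # Lipschitz activation with f(0) = 0

N  = 41
y  = np.linspace(-1.0, 1.0, N)             # spatial grid on Theta = (-1, 1)
dy = y[1] - y[0]
dr = 1e-3
impulse_times = np.arange(0.5, 5.0, 0.5)   # impulsive instants r_k

z = 0.2 * np.cos(np.pi * y / 2.0)          # initial state, zero on the boundary
u = f(z).copy()                            # delay state for a constant history z(s,y) = z(0,y), s <= 0
r, r_last, k = 0.0, 0.0, 0                 # r_last is the lower limit of the current conformable derivative

while r < 5.0:
    lap = np.zeros_like(z)
    lap[1:-1] = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dy**2     # Dirichlet: z = 0 at both ends
    rhs = D * lap - c * z + a * f(z) + b * u + Omega         # right-hand side of the simplified model
    # conformable Euler step: z(r + dr) = z(r) + [((r + dr - r_last)^sigma - (r - r_last)^sigma)/sigma] * rhs
    w = ((r + dr - r_last)**sigma - (r - r_last)**sigma) / sigma
    z = z + w * rhs
    u = u + dr * (f(z) - u)                # ordinary Euler step for the delay-kernel state
    z[0] = z[-1] = 0.0
    r += dr
    if k < len(impulse_times) and r >= impulse_times[k]:
        z = H * z                          # impulsive jump; the lower limit of T^sigma restarts at r_k
        r_last = impulse_times[k]
        k += 1

print("L2 spatial norm of z at r = 5:", np.sqrt(np.trapz(z**2, y)))
```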
Remark 3.
The formulated model (1) extends many existing reaction–diffusion neural network models [8,9,10,11,20,21,24,25,26,27,28] to the impulsive conformable setting, considering general distributed delays. The conformable approach used here allows for more flexibility than the integer-order framework and for simplifications in the analysis compared with fractional-order models. In addition, the designed model (1) allows control signals to be applied only at the fixed instants $r_k$, $k \in \mathbb{Z}$. These signals, with magnitudes determined by the functions $\gamma_{ik}$, $i = 1, 2, \ldots, m$, $k \in \mathbb{Z}$, can be used to impulsively control the qualitative behavior of the model.
Remark 4.
The impulsive control approach has been applied to numerous integer-order and fractional-order reaction–diffusion neural network models. For conformable models, this approach has been applied to reaction–diffusion neural networks just in [47,48] without analyzing the delay effects. Thus, our research generalizes and complements numerous important results in the existing literature. It also contributes to the development of the theory and applications of conformable neural network models.
We denote by $PC_{\Theta}$ the class of all real-valued functions $\varphi = \varphi(\chi, y)$, $\varphi : (-\infty, 0] \times \Theta \to \mathbb{R}$, that are piecewise continuous with respect to χ and bounded on $(-\infty, 0] \times \Theta$. We also assume that the one-sided limits at the points of discontinuity exist and that all functions $\varphi_{0i}(\chi, y)$ are continuous from the right at these points. We denote the norm in $PC_{\Theta}$ by
$$\|\varphi\| = \sup_{-\infty < \chi \leq 0} \|\varphi(\chi, \cdot)\|.$$
The impulsive control conformable model (1) will be studied under boundary and initial conditions of the following type:
$$z_i(r, y) = 0, \quad r \in \mathbb{R},\ y \in \partial\Theta, \tag{2}$$
$$z_i(\chi, y) = \varphi_{0i}(\chi, y), \quad \chi \in (-\infty, 0],\ y \in \Theta, \tag{3}$$
where $\varphi_0 = (\varphi_{01}, \varphi_{02}, \ldots, \varphi_{0m})^T$ and $\varphi_{0i} \in PC_{\Theta}$ for any $i = 1, 2, \ldots, m$.
The solution of the initial boundary value problem (IBVP) (1)–(3) will be denoted by
$$z(r, y) = z(r, y; \varphi_0), \quad r \in \mathbb{R},\ y \in \Theta.$$
The function $z(r, y)$ is piecewise continuous with respect to its first variable, with points of discontinuity of the first kind at the moments $r_k$, at which $z_i(r_k^-, y) = z_i(r_k, y)$ and $z_i(r_k^+, y) = \gamma_{ik}\big(z_i(r_k, y)\big)$, $y \in \Theta$, $k \in \mathbb{Z}$.

2.3. Lyapunov Approach Definitions and Lemmas

The application of the Lyapunov analysis approach to impulsive systems requires the use of piecewise continuous Lyapunov-type functions $\Lambda : \mathbb{R} \times \mathbb{R}^{2m} \to \mathbb{R}_+$, $\Lambda = \Lambda(r, z, v)$, such that for $\{r_k\} \in B$:
(i). $\Lambda(r, z, v)$ is continuous in $(r_{k-1}, r_k) \times \mathbb{R}^{2m}$, $k \in \mathbb{Z}$, and locally Lipschitz continuous with respect to the arguments $(z, v)$;
(ii). For each $k \in \mathbb{Z}$ and $(z, v)^T \in \mathbb{R}^{2m}$, the finite limits
$$\Lambda(r_{k-1}^-, z, v) = \lim_{\substack{r \to r_{k-1} \\ r < r_{k-1}}} \Lambda(r, z, v), \qquad \Lambda(r_{k-1}^+, z, v) = \lim_{\substack{r \to r_{k-1} \\ r > r_{k-1}}} \Lambda(r, z, v)$$
exist, and $\Lambda(r_{k-1}^+, z, v) = \Lambda(r_{k-1}, z, v)$.
The class of all functions $\Lambda = \Lambda(r, z, v)$ of the above type will be denoted by $\Lambda^{\sigma}_{r_{k-1}}$.
The proof of the next Lemma is similar to the proof of Theorem 3.1 in [45], and we will omit it here.
Lemma 2.
Assume that, for the function $w(r)$, piecewise continuous in $\mathbb{R}$, there exist positive constants $\kappa_1$, $\kappa_2$, $I$, $\kappa_1 > \kappa_2$, such that
$$T^{\sigma}_{r_{k-1}} w(r) \leq -\kappa_1 w(r) + \kappa_2 \sup_{-\infty < s \leq r} w(s) + I,$$
for $r \in [r_{k-1}, r_k)$, $k \in \mathbb{Z}$.
Then,
$$w(r) \leq \sup_{-\infty < s \leq r_{k-1}} w(s)\, E_{\sigma}(-\eta_k, r - r_{k-1}) + \frac{I}{\kappa_1 - \kappa_2},$$
where $\eta_k > 0$ is determined by
$$\kappa_2 e^{\eta_k \theta_k} - \kappa_1 + \eta_k < 0, \quad \theta_k = \frac{(r_k - r_{k-1})^{\sigma}}{\sigma},$$
for $k \in \mathbb{Z}$.
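Since the condition on $\eta_k$ is a transcendental inequality, in concrete examples it is convenient to locate the admissible range numerically. The sketch below (not from the paper) bisects $g(\eta) = \kappa_2 e^{\eta \theta_k} - \kappa_1 + \eta$, which is increasing and negative at $\eta = 0$ whenever $\kappa_1 > \kappa_2$; any $\eta_k$ below the returned root satisfies the strict inequality. The impulsive gap length used for $\theta_k$ is an assumed illustrative value.

```python
import numpy as np

# Locate the admissible decay rates eta_k of Lemma 2 by bisection on
#   g(eta) = kappa2 * exp(eta * theta_k) - kappa1 + eta.
def eta_upper_bound(kappa1, kappa2, theta_k, tol=1e-10):
    assert kappa1 > kappa2 > 0
    g = lambda eta: kappa2 * np.exp(eta * theta_k) - kappa1 + eta
    lo, hi = 0.0, 1.0
    while g(hi) < 0.0:                 # g(0) = kappa2 - kappa1 < 0 and g is increasing
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return lo                          # any eta_k slightly below this value satisfies g(eta_k) < 0

# Illustration with the constants obtained in Section 4 and an assumed impulsive gap of 0.5:
kappa1, kappa2, sigma = 2.31, 2.04, 0.9
theta_k = 0.5**sigma / sigma           # theta_k = (r_k - r_{k-1})^sigma / sigma
print(eta_upper_bound(kappa1, kappa2, theta_k))
```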
Lemma 3.
Assume that for the function $\Lambda(r, z, v) \in \Lambda^{\sigma}_{r_{k-1}}$, there exist positive constants $\kappa_1$, $\kappa_2$, $I$, $\mu_k$, $\kappa_1 > \kappa_2$, such that:
1. For $r \in [r_{k-1}, r_k)$, $k \in \mathbb{Z}$, $(z, v)^T \in \mathbb{R}^{2m}$,
$$T^{\sigma}_{r_{k-1}} \Lambda(r, z(r, \cdot), v(r, \cdot)) \leq -\kappa_1 \Lambda(r, z(r, \cdot), v(r, \cdot)) + \kappa_2 \sup_{-\infty < s \leq r} \Lambda(s, z(s, \cdot), v(s, \cdot)) + I;$$
2. $\Lambda(r_k^+, z(r_k^+, \cdot), v(r_k^+, \cdot)) \leq \mu_k \Lambda(r_k, z(r_k, \cdot), v(r_k, \cdot))$, $k \in \mathbb{Z}$.
Then,
$$\Lambda(r, z(r, \cdot), v(r, \cdot)) \leq \prod_{j=1}^{k-1} \mu_j \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\hat{\eta}, r) + \frac{I}{\kappa_1 - \kappa_2}\Big( 1 + \sum_{o=0}^{k-2} \prod_{j=o+1}^{k-1} \mu_j \Big),$$
where $\hat{\eta} = \sup_{k \in \mathbb{Z}} \eta_k$, $r \in [0, \infty)$.
Proof. 
Let $r_0$ be the first member of the sequence $\{r_k\}$ such that $r_0 \in [0, \infty)$. Consider the interval $[0, r_0)$; from Lemma 2, we get
$$\Lambda(r, z(r, \cdot), v(r, \cdot)) \leq \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_0, r) + \frac{I}{\kappa_1 - \kappa_2},$$
for any $r \in [0, r_0)$.
Then, from condition 2 of Lemma 3, we have
$$\Lambda(r_0^+, z(r_0^+, \cdot), v(r_0^+, \cdot)) \leq \mu_0 \Lambda(r_0, z(r_0, \cdot), v(r_0, \cdot))$$
$$\leq \mu_0 \Big[ \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_0, r_0) + \frac{I}{\kappa_1 - \kappa_2} \Big]$$
$$= \mu_0 \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_0, r_0) + \frac{I \mu_0}{\kappa_1 - \kappa_2}.$$
Now, if $r \in [r_0, r_1)$, applying the well-known inequality [45] $\zeta^q + \xi^q \geq (\zeta + \xi)^q$, $\zeta, \xi \geq 0$, $0 < q \leq 1$, we derive the estimate
$$\Lambda(r, z(r, \cdot), v(r, \cdot)) \leq \sup_{-\infty < s \leq r_0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_1, r - r_0) + \frac{I}{\kappa_1 - \kappa_2}$$
$$\leq \Big[ \mu_0 \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_0, r_0) + \frac{I \mu_0}{\kappa_1 - \kappa_2} \Big] E_{\sigma}(-\eta_1, r - r_0) + \frac{I}{\kappa_1 - \kappa_2}$$
$$\leq \mu_0 \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\hat{\eta}, r) + \frac{I \mu_0}{\kappa_1 - \kappa_2} + \frac{I}{\kappa_1 - \kappa_2}.$$
If we continue the process, then for $r \in [r_{k-1}, r_k)$, we have
$$\Lambda(r, z(r, \cdot), v(r, \cdot)) \leq \sup_{-\infty < s \leq r_{k-1}} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\eta_k, r - r_{k-1}) + \frac{I}{\kappa_1 - \kappa_2}$$
$$\leq \prod_{j=1}^{k-1} \mu_j \sup_{-\infty < s \leq 0} \Lambda(s, z(s, \cdot), v(s, \cdot))\, E_{\sigma}(-\hat{\eta}, r) + \frac{I}{\kappa_1 - \kappa_2}\Big( 1 + \sum_{o=0}^{k-2} \prod_{j=o+1}^{k-1} \mu_j \Big). \qquad \square$$

2.4. Almost Periodicity Notes

The concept of almost periodicity for impulsive conformable systems will be defined in the next definitions [48].
Let $\Psi = (\psi(r, y), T) \in PC[\mathbb{R} \times \Theta, \mathbb{R}^m] \times B$. For any infinite sequence $\{s_p\}_{p=1}^{\infty}$, $s_p \in \mathbb{R}$, we denote by $\theta_{s_p}\Psi$ the sets $\{\psi(r + s_p, y), T - s_p\} \subset PC[\mathbb{R} \times \Theta, \mathbb{R}^m] \times B$, where $T - s_p = \{r_k - s_p\}$, $k \in \mathbb{Z}$, $p = 1, 2, \ldots$.
Definition 5.
A function $\psi \in PC[\mathbb{R} \times \Theta, \mathbb{R}^m]$, $\psi = \psi(r, y)$, is almost periodic piecewise continuous with respect to its first argument with jump discontinuities at the points $r_k$, $\{r_k\} \in B$, if any sequence of real numbers $\{s_m\}$ has a subsequence $\{s_p\}$, $s_p = s_{m_p}$, such that $\theta_{s_p}\Psi$ is compact in $PC[\mathbb{R} \times \Theta, \mathbb{R}^m] \times B$.
Definition 6.
A sequence $\{\Psi_p\}$, $\Psi_p = (\psi_p(r, y), T_p) \in PC[\mathbb{R} \times \Theta, \mathbb{R}^m] \times B$, converges to Ψ uniformly with respect to r if, for every $\varepsilon > 0$, there exists a $p_0 > 0$ such that both inequalities
$$\rho(T, T_p) < \varepsilon, \qquad \|\psi_p(r, y) - \psi(r, y)\| < \varepsilon$$
hold uniformly for $p \geq p_0$, $r \in \mathbb{R} \setminus \theta_{\varepsilon}(s(T_p \cup T))$, $y \in \Theta$, where $s(T_p \cup T) : B \to B$, $s(T_p \cup T)$ forms a strictly increasing sequence, and $\theta_{\varepsilon}(s(T_p \cup T)) = \{r + \varepsilon,\ r \in s(T_p \cup T)\}$.
Definition 7.
The set of all sequences of the type $\{r_k^h\}$, $r_k^h = r_{k+h} - r_k$, $k, h \in \mathbb{Z}$, is uniformly almost periodic if, from each infinite sequence of shifts $\{r_k - s_p\}$, $k \in \mathbb{Z}$, $p = 1, 2, \ldots$, $s_p \in \mathbb{R}$, it is possible to extract a convergent subsequence in $B$.
Remark 5.
The above definitions extend the definitions on almost periodicity adopted in [48] to the reaction–diffusion case. They also generalize the almost periodic concept considered in [24,56] and allow the almost periodic behavior to be studied for solutions of conformable systems.
The next assumptions will ensure the existence, uniqueness, and almost periodicity of the nodes [24,48]:
Assumption 1.
The functions $a_{ij}(r)$, $b_{ij}(r)$ and $\Omega_i(r)$ are almost periodic in r, and
$$\sup_{r \in \mathbb{R}} |a_{ij}(r)| = a_{ij}^M, \qquad \sup_{r \in \mathbb{R}} |b_{ij}(r)| = b_{ij}^M,$$
$i, j = 1, 2, \ldots, m$.
Assumption 2.
The continuous activation functions are such that
$$|f_i(\chi_1) - f_i(\chi_2)| \leq L_i |\chi_1 - \chi_2|, \qquad |f_i(\chi)| \leq H_i,$$
and $f_i(0) = 0$ for $L_i > 0$, $H_i > 0$ and all $\chi_1, \chi_2, \chi \in \mathbb{R}$, $\chi_1 \neq \chi_2$, $i = 1, 2, \ldots, m$.
Assumption 3.
The delay kernel $K_{ij} : \mathbb{R}_+ \to \mathbb{R}_+$, $i, j = 1, 2, \ldots, m$, is a continuous function that satisfies the following conditions:
(i) $\int_0^{\infty} K_{ij}(s)\, ds = 1$;
(ii) $\int_0^{\infty} K_{ij}(r - s)\, ds < \infty$;
(iii) $\int_{-\infty}^{r} K_{ij}(r - s)\, ds \leq \Gamma_{ij}^* < \infty$.
Assumption 4.
The functions $D_{il} = D_{il}(r, y)$ are almost periodic in r for any $y \in \Theta$, and
$$D_{il}(r, y) \geq \underline{D}_{il} \geq 0, \quad r \in \mathbb{R},\ y \in \Theta,$$
$i = 1, 2, \ldots, m$, $l = 1, 2, \ldots, n$.
Assumption 5.
For any $i = 1, 2, \ldots, m$ and $k \in \mathbb{Z}$, the impulsive functions $\gamma_{ik}$ are of the form $\gamma_{ik}(z_i(r_k, y)) = H_{ik} z_i(r_k, y)$, $H_{ik} \in \mathbb{R}$, $y \in \Theta$, and
$$H_{ik}^2 \leq \mu_k,$$
where $\mu_k > 0$, $i = 1, 2, \ldots, m$, $k \in \mathbb{Z}$.
Assumption 6.
The set of all sequences $\{r_k^h\}$, $r_k^h = r_{k+h} - r_k$, $k, h \in \mathbb{Z}$, is uniformly almost periodic, and $\inf_k r_k^1 = \tau > 0$.
Assumption 7.
The initial function $\varphi_0 = (\varphi_{01}, \varphi_{02}, \ldots, \varphi_{0m})^T$, $\varphi_{0i} = \varphi_{0i}(\chi, y)$, $\varphi_{0i} \in PC_{\Theta}$, $i = 1, 2, \ldots, m$, is almost periodic with respect to its first argument.
It follows [24,48] that, under Assumptions 1–7, for an arbitrary infinite sequence of real numbers $\{s_q\}$, there exists a subsequence $\{s_p\}$, $s_p = s_{q_p}$, that shifts system (1) to the system
$$\begin{cases}
T^{\sigma}_{r_{k-1}} z_i(r, y) = \displaystyle\sum_{l=1}^{n} \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial z_i(r, y)}{\partial y_l} \Big) - c_i z_i(r, y) + \sum_{j=1}^{m} a_{ij}^s(r) f_j\big(z_j(r, y)\big) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{m} b_{ij}^s(r) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big(z_j(s, y)\big)\, ds + \Omega_i^s(r), \quad r \neq r_k^s, \\
z_i(r_k^{s+}, y) = \gamma_{ik}\big(z_i(r_k^s, y)\big), \quad k \in \mathbb{Z},\ y \in \Theta.
\end{cases} \tag{4}$$
The set of all displaced systems of type (4) will be denoted by $H(z, r_k)$.

3. Main Results

3.1. A Lyapunov-Based Analysis on the Almost Periodicity

We will consider the impulsive conformable model (1) on $\mathbb{R} \times \Theta$ with
$$\Theta = \{y = (y_1, y_2, \ldots, y_n)^T \in \mathbb{R}^n : |y_l| < \nu_l\},$$
where $\nu_l$ ($l = 1, 2, \ldots, n$) are positive constants.
The following lemma will be useful in the proof of the main result.
Lemma 4
([9]). For a real-valued function $u(y)$, $u \in C^1(\Theta)$, such that $u(y)|_{\partial\Theta} = 0$, we have
$$\int_{\Theta} u^2(y)\, dy \leq \nu_l^2 \int_{\Theta} \Big| \frac{\partial u(y)}{\partial y_l} \Big|^2 dy.$$
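A quick numerical illustration of Lemma 4 (not from the paper): for the sample function $u(y) = \cos(\pi y / 2)$ on $\Theta = (-1, 1)$, which vanishes on the boundary, the left-hand side equals 1 while the right-hand side is $\pi^2/4 \approx 2.47$, so the inequality holds with room to spare.

```python
import numpy as np

# Check of the Poincare-type inequality of Lemma 4 for u(y) = cos(pi*y/2) on (-1, 1),
# with nu_l = 1; the test function is illustrative only.
y  = np.linspace(-1.0, 1.0, 20001)
u  = np.cos(np.pi * y / 2.0)
du = -np.pi / 2.0 * np.sin(np.pi * y / 2.0)

lhs = np.trapz(u**2, y)             # ~ 1.0
rhs = 1.0**2 * np.trapz(du**2, y)   # ~ pi^2 / 4 ~ 2.467
print(lhs, rhs, lhs <= rhs)
```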
Theorem 1.
Assume that Assumptions 1–7 are satisfied, and that:
1. There exist positive numbers $\kappa_1$, $\kappa_2$, $d_i$, such that
$$\kappa_1 = \min_{1 \leq i \leq m} \Big[ 2(c_i + d_i) - \sum_{j=1}^{m} \big( a_{ij}^M + b_{ij}^M \Gamma_{ij}^* \big) L_j \Big] > 0,$$
$$\kappa_2 = \max_{1 \leq i \leq m} \sum_{j=1}^{m} b_{ij}^M L_j \Gamma_{ij}^* > 0,$$
$$d_i = \sum_{l=1}^{n} \frac{\underline{D}_{il}}{\nu_l^2}, \quad i = 1, 2, \ldots, m,$$
and $\kappa_1 > \kappa_2$.
2. There is a solution $z(r, y)$ of (1) such that
$$\|z(r, y)\| < C, \quad C > 0,$$
for $r \geq 0$ and $y \in \Theta$.
Then, there exists a unique solution $\beta(r, y)$ of system (1) that is almost periodic along r, uniformly on $y \in \Theta$, such that:
(a) $\|\beta(r, y)\| \leq C_1$, $C_1 < C$;
(b) $H(\beta, r_k) \subseteq H(z, r_k)$.
Proof. 
Consider an infinite sequence of real numbers $\{s_p\}$ that moves system (1) to a system from the set $H(z, r_k)$, such that $s_p \to \infty$ as $p \to \infty$.
Let $z(r, y)$ and $v(r, y)$ be two solutions of system (1), $(r, y) \in \mathbb{R} \times \Theta$. We define a Lyapunov function
$$\Lambda(z(r, \cdot), v(r, \cdot)) = \int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \big( z_i(r, y) - v_i(r, y) \big)^2 dy.$$
For any real number b, let $p_0 = p_0(b)$ be the smallest value of p such that $s_{p_0} + b \geq 0$. Condition 2 of Theorem 1 implies the existence of a $C_1 < C$ such that $\|z(r, y)\| \leq 2C_1$ for all $r \geq 0$, and hence $\|z(r + s_p, y)\| \leq 2C_1$ for $r \geq b$, $p \geq p_0$, $y \in \Theta$.
Suppose that Π, $\Pi \subset (b, \infty)$, is compact. Hence, for every $\varepsilon > 0$, we can pick an integer $n_0(\varepsilon, b) \geq p_0(b)$ so large that, for $h \geq p \geq n_0(\varepsilon, b)$ and $r \in \mathbb{R}$, $r \neq r_k$, $k \in \mathbb{Z}$, we have
$$|\Omega_i(r + s_p) - \Omega_i(r + s_h)| < \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{24\, m\, C\, (\nu_l)^n} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1}, \tag{5}$$
$$|a_{ij}(r + s_p) - a_{ij}(r + s_h)| \leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{24\, m\, C^2 n\, (\nu_l)^n} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} \Big( \sum_{j=1}^{m} H_j \Big)^{-1}, \tag{6}$$
$$|b_{ij}(r + s_p) - b_{ij}(r + s_h)| \leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{24\, m\, C^2 n\, (\nu_l)^n} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} \Big( \sum_{j=1}^{m} H_j \Gamma^* \Big)^{-1}, \tag{7}$$
and
$$\prod_{i=1}^{k-1} \mu_i \sup_{-\infty < s \leq 0} \int_{\Theta} \sum_{i=1}^{m} \big( \varphi_{0i}(s + s_p, y) - \varphi_{0i}(s + s_h, y) \big)^2 dy\; E_{\sigma}(-\hat{\eta}, r + s_p - s_h) < \frac{\varepsilon^2}{2}. \tag{8}$$
Then, by Assumption 5, for $r = r_k$, we obtain
$$\Lambda(z(r_k^+ + s_p, \cdot), z(r_k^+ + s_h, \cdot)) = \int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \Big( H_{ik} \big( z_i(r_k + s_p, y) - z_i(r_k + s_h, y) \big) \Big)^2 dy \leq \mu_k \int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \big( z_i(r_k + s_p, y) - z_i(r_k + s_h, y) \big)^2 dy = \mu_k \Lambda(z(r_k + s_p, \cdot), z(r_k + s_h, \cdot)). \tag{9}$$
Now, we will consider the derivative $T^{\sigma}_{r_{k-1}} \Lambda(r, z(r, \cdot), v(r, \cdot))$ of the function Λ for $r \in [r_{k-1}, r_k)$, $k \in \mathbb{Z}$, $(z, v)^T \in \mathbb{R}^{2m}$. We have that [48]
$$T^{\sigma}_{r_{k-1}} \Lambda(z(r, \cdot), v(r, \cdot)) = \frac{1}{2} T^{\sigma}_{r_{k-1}} \int_{\Theta} \sum_{i=1}^{m} \big( z_i(r, y) - v_i(r, y) \big)^2 dy = \frac{1}{2} \sum_{i=1}^{m} \int_{\Theta} T^{\sigma}_{r_{k-1}} \big( z_i(r, y) - v_i(r, y) \big)^2 dy = \sum_{i=1}^{m} \int_{\Theta} \big( z_i(r, y) - v_i(r, y) \big)\, T^{\sigma}_{r_{k-1}} \big( z_i(r, y) - v_i(r, y) \big)\, dy.$$
Hence, along the solutions of system (1), we derive
$$T^{\sigma}_{r_{k-1}} \Lambda(z(r + s_p, \cdot), z(r + s_h, \cdot))$$
$$= \sum_{i=1}^{m} \int_{\Theta} \big( z_i(r + s_p, y) - z_i(r + s_h, y) \big) \Bigg( \sum_{l=1}^{n} \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial}{\partial y_l}\big( z_i(r + s_p, y) - z_i(r + s_h, y) \big) \Big)$$
$$- c_i \big( z_i(r + s_p, y) - z_i(r + s_h, y) \big) + \sum_{j=1}^{m} \Big[ \big( a_{ij}(r + s_p) - a_{ij}(r + s_h) \big) f_j\big( z_j(r + s_p, y) \big)$$
$$+ a_{ij}(r + s_h) \big( f_j( z_j(r + s_p, y)) - f_j( z_j(r + s_h, y)) \big) \Big]$$
$$+ \sum_{j=1}^{m} \big( b_{ij}(r + s_p) - b_{ij}(r + s_h) \big) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big( z_j(s + s_p, y) \big)\, ds$$
$$+ \sum_{j=1}^{m} b_{ij}(r + s_h) \int_{-\infty}^{r} K_{ij}(r - s) \big( f_j( z_j(s + s_p, y)) - f_j( z_j(s + s_h, y)) \big)\, ds$$
$$+ \Omega_i(r + s_p) - \Omega_i(r + s_h) \Bigg)\, dy.$$
We introduce the notation $\varpi_i(r, y) = z_i(r + s_p, y) - z_i(r + s_h, y)$. Then, by the boundary conditions and Green's identity, we have
$$\sum_{l=1}^{n} \int_{\Theta} \varpi_i(r, y) \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial \varpi_i(r, y)}{\partial y_l} \Big) dy = -\sum_{l=1}^{n} \int_{\Theta} D_{il} \Big( \frac{\partial \varpi_i(r, y)}{\partial y_l} \Big)^2 dy. \tag{10}$$
Applying Assumption 4, condition 1 of Theorem 1, and Lemma 4 to the above equality, we obtain
$$\sum_{l=1}^{n} \int_{\Theta} \varpi_i(r, y) \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial \varpi_i(r, y)}{\partial y_l} \Big) dy \leq -\sum_{l=1}^{n} \int_{\Theta} \underline{D}_{il} \Big( \frac{\partial \varpi_i(r, y)}{\partial y_l} \Big)^2 dy \leq -\sum_{l=1}^{n} \frac{\underline{D}_{il}}{\nu_l^2} \int_{\Theta} \varpi_i^2(r, y)\, dy = -d_i \int_{\Theta} \varpi_i^2(r, y)\, dy. \tag{11}$$
Next, by Assumption 1, Assumption 2, and (6), we get
$$\sum_{j=1}^{m} \Big[ \big( a_{ij}(r + s_p) - a_{ij}(r + s_h) \big) \int_{\Theta} \varpi_i(r, y) f_j\big( z_j(r + s_p, y) \big)\, dy + a_{ij}(r + s_h) \int_{\Theta} \varpi_i(r, y) \big( f_j( z_j(r + s_p, y)) - f_j( z_j(r + s_h, y)) \big)\, dy \Big]$$
$$\leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{12 m} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} + \sum_{j=1}^{m} a_{ij}^M L_j \int_{\Theta} |\varpi_i(r, y)|\, |\varpi_j(r, y)|\, dy$$
$$\leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{12 m} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} + \frac{1}{2} \sum_{j=1}^{m} a_{ij}^M L_j \int_{\Theta} \big( \varpi_i^2(r, y) + \varpi_j^2(r, y) \big)\, dy. \tag{12}$$
Analogously, from Assumptions 1–3 and (7), we obtain
$$\sum_{j=1}^{m} \big( b_{ij}(r + s_p) - b_{ij}(r + s_h) \big) \int_{\Theta} \varpi_i(r, y) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big( z_j(s + s_p, y) \big)\, ds\, dy$$
$$+ \sum_{j=1}^{m} b_{ij}(r + s_h) \int_{\Theta} \varpi_i(r, y) \int_{-\infty}^{r} K_{ij}(r - s) \big( f_j( z_j(s + s_p, y)) - f_j( z_j(s + s_h, y)) \big)\, ds\, dy$$
$$\leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{12 m} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} + \sum_{j=1}^{m} b_{ij}^M L_j \int_{-\infty}^{r} K_{ij}(r - s) \int_{\Theta} |\varpi_i(r, y)|\, |\varpi_j(s, y)|\, dy\, ds$$
$$\leq \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{12 m} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1} + \sum_{j=1}^{m} b_{ij}^M L_j \Gamma_{ij}^* \int_{\Theta} \varpi_i^2(r, y)\, dy + \sum_{j=1}^{m} b_{ij}^M L_j \Gamma_{ij}^* \sup_{-\infty < s \leq r} \int_{\Theta} \varpi_j^2(s, y)\, dy. \tag{13}$$
Then, by (10)–(13) and condition 1 of Theorem 1, we get
$$T^{\sigma}_{r_{k-1}} \Lambda(z(r + s_p, \cdot), z(r + s_h, \cdot)) \leq -\kappa_1 \Lambda(z(r + s_p, \cdot), z(r + s_h, \cdot)) + \kappa_2 \sup_{-\infty < s \leq r} \Lambda(z(s + s_p, \cdot), z(s + s_h, \cdot)) + \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{4} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1}.$$
Now, from the last estimate, (9), and Lemma 3, we have
$$\Lambda(z(r + s_p, \cdot), z(r + s_h, \cdot)) \leq \prod_{i=1}^{k-1} \mu_i \sup_{-\infty < s \leq 0} \Lambda(z(s + s_p, \cdot), z(s + s_h, \cdot))\, E_{\sigma}(-\hat{\eta}, r + s_p - s_h) + \frac{\varepsilon^2}{4},$$
or, from the definition of the function Λ, it follows that
$$\int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \big( z_i(r + s_p, y) - z_i(r + s_h, y) \big)^2 dy \leq \prod_{i=1}^{k-1} \mu_i \sup_{-\infty < s \leq 0} \int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \big( \varphi_{0i}(s + s_p, y) - \varphi_{0i}(s + s_h, y) \big)^2 dy\; E_{\sigma}(-\hat{\eta}, r + s_p - s_h) + \frac{\varepsilon^2}{4}.$$
Then,
$$\|z(r + s_p, y) - z(r + s_h, y)\|^2 \leq \prod_{i=1}^{k-1} \mu_i \Big[ \sup_{-\infty < s \leq 0} \int_{\Theta} \sum_{i=1}^{m} \big( \varphi_{0i}(s + s_p, y) - \varphi_{0i}(s + s_h, y) \big)^2 dy \Big] E_{\sigma}(-\hat{\eta}, r + s_p - s_h) + \frac{\varepsilon^2}{2}. \tag{14}$$
Next, from (14) and (8), we obtain
$$\|z(r + s_p, y) - z(r + s_h, y)\| < \Big( \frac{\varepsilon^2}{2} + \frac{\varepsilon^2}{2} \Big)^{1/2} = \varepsilon.$$
Therefore, there exists a function $\beta(r, y) = (\beta_1(r, y), \beta_2(r, y), \ldots, \beta_m(r, y))^T$ such that $z(r + s_p, y) \to \beta(r, y)$ as $p \to \infty$, uniformly on $y \in \Theta$. The arbitrariness of the real number b implies that $\beta(r, y)$ is defined uniformly on $r \in \mathbb{R}$.
In a similar way, for any ε, we can prove that the inequality
$$|T^{\sigma}_{r_{k-1}} z_i(r + s_p, y) - T^{\sigma}_{r_{k-1}} z_i(r + s_h, y)| < \varepsilon$$
holds, so that $\lim_{p \to \infty} T^{\sigma}_{r_{k-1}} z_i(r + s_p, y)$ exists uniformly on all compact subsets of $\mathbb{R}$, $r \neq r_k$, $k \in \mathbb{Z}$, $y \in \Theta$.
Then,
$$T^{\sigma}_{r_{k-1}} \beta_i(r, y) = F_i^s(r, \beta_i(r, y)), \quad r \neq r_k^s, \tag{17}$$
where $r_k^s = \lim_{p \to \infty} (r_k + s_p)$, $y \in \Theta$, and
$$F_i^s(r, \beta_i(r, y)) = \sum_{l=1}^{n} \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial \beta_i(r, y)}{\partial y_l} \Big) - c_i \beta_i(r, y) + \sum_{j=1}^{m} a_{ij}^s(r) f_j\big( \beta_j(r, y) \big) + \sum_{j=1}^{m} b_{ij}^s(r) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big( \beta_j(s, y) \big)\, ds + \Omega_i^s(r), \quad i = 1, 2, \ldots, m.$$
In addition, for $r = r_k^s$, we get
$$\beta_i(r_k^{s+}, y) = \lim_{p \to \infty} z_i\big( (r_k^s + s_p)^+, y \big) = \lim_{p \to \infty} \gamma_{ik}\big( z_i(r_k^s + s_p, y) \big) = \gamma_{ik}\big( \beta_i(r_k^s, y) \big), \quad i = 1, 2, \ldots, m. \tag{18}$$
Hence, (17) and (18) imply that β ( r , y ) is a solution of (4).
To prove the almost periodicity of the solution β ( r , y ) , we will use the sequence { s p } that moves the system (1) to H ( z , r k ) .
If we consider the function
$$\Lambda_1(\beta(r, \cdot), \beta(r + s_p - s_h, \cdot)) = \int_{\Theta} \frac{1}{2} \sum_{i=1}^{m} \big( \beta_i(r, y) - \beta_i(r + s_p - s_h, y) \big)^2 dy,$$
then, using the same arguments as for the function Λ, for $r \geq r_{k-1}$ we have
$$T^{\sigma}_{r_{k-1}} \Lambda_1(\beta(r, \cdot), \beta(r + s_p - s_h, \cdot)) \leq -\kappa_1 \Lambda_1(\beta(r, \cdot), \beta(r + s_p - s_h, \cdot)) + \kappa_2 \sup_{-\infty < s \leq r} \Lambda_1(\beta(s, \cdot), \beta(s + s_p - s_h, \cdot)) + \frac{\varepsilon^2 (\kappa_1 - \kappa_2)}{4} \Big( 1 + \sum_{j=0}^{k-2} \prod_{i=j+1}^{k-1} \mu_i \Big)^{-1}, \tag{19}$$
and
$$\Lambda_1(\beta(r_k^{s+}, \cdot), \beta(r_k^{s+} + s_p - s_h, \cdot)) \leq \mu_k \Lambda_1(\beta(r_k^s, \cdot), \beta(r_k^s + s_p - s_h, \cdot)), \quad k \in \mathbb{Z}. \tag{20}$$
Hence, from (19), (20), and Lemma 3, we have
$$\|\beta(r + s_p, y) - \beta(r + s_h, y)\| < \varepsilon, \quad h \geq p \geq p_0(\varepsilon).$$
Finally, since the sequence $\{s_p\}$ is such that $\rho(\{r_k + s_p\}, \{r_k + s_h\}) < \varepsilon$ for $h \geq p \geq p_0(\varepsilon)$, $\beta(r + s_p, y)$ converges uniformly to $\beta(r, y)$.
Hence, we immediately obtain the results in statements (a) and (b) of Theorem 1, and the proof is complete. □
Remark 6.
The almost periodic behavior of fractional-order neural networks has been investigated by numerous researchers, and interesting results have been published in [53,54,55,56]. Reaction–diffusion terms have been considered in [24]. Theorem 1 extends all these results to the conformable setting. Thus, our results complement several results in the existing literature and contribute to the development of the theory. In addition, since the application of conformable calculus avoids the complexities of analyzing fractional-order systems, the proposed results are well suited for applied models.
Remark 7.
An analysis of the almost periodicity of impulsive conformable reaction–diffusion neural network models has been carried out only in [48], ignoring delay factors. Thus, compared with [48], distributed delays are taken into account in the impulsive conformable neural network (1); that is, the model studied in [48] is a special case of (1). The consideration of distributed delays makes the developed model more complex and challenging, and the proposed qualitative results are more general.
Remark 8.
The Lipschitz conditions for the activation functions in Assumption 2 are essential in the proof of our main almost periodicity results. In scenarios with strongly nonlinear activation functions, modified criteria that allow the introduced framework to deal with such nonlinearities without losing rigor may be required. This is an interesting open problem for future investigations.

3.2. Stability Analysis

Since stability is the most investigated qualitative behavior for numerous applied models, we will also study the global conformable exponential stability of the almost periodic solution of (1). To this end, we adopt the following definition [59].
Definition 8.
A solution $z(r, y) = z(r, y; \varphi_0)$, $r \in \mathbb{R}$, $y \in \Theta$, of model (1), corresponding to an initial function $\varphi_0 = (\varphi_{01}, \varphi_{02}, \ldots, \varphi_{0m})^T$, $\varphi_{0i} \in PC_{\Theta}$, $i = 1, 2, \ldots, m$, is globally conformably exponentially stable if there exist positive constants M and κ such that, for any other solution $v(r, y) = v(r, y; \tilde{\varphi}_0)$, $r \in \mathbb{R}$, $y \in \Theta$, of model (1), corresponding to an initial function $\tilde{\varphi}_0 = (\tilde{\varphi}_{01}, \tilde{\varphi}_{02}, \ldots, \tilde{\varphi}_{0m})^T$, $\tilde{\varphi}_{0i} \in PC_{\Theta}$, $i = 1, 2, \ldots, m$, we have
$$\|z(r, y) - v(r, y)\| \leq M \|\varphi_0 - \tilde{\varphi}_0\|\, E_{\sigma}(-\kappa, r), \quad r \geq 0.$$
Remark 9.
It is well known that, for integer-order neural network models, the global exponential stability is a very important behavior as it guarantees fast convergence [9,14]. The fractional-order analogue is the Mittag–Leffler stability [27]. With Definition 8, we extend the global exponential stability notion to the conformable case. Similar definitions are given in [31,45].
Theorem 2.
Under the conditions of Theorem 1, the unique almost periodic solution β ( r , y ) of system (1) is globally conformably exponentially stable.
Proof. 
We denote
$$\hat{\beta}(r, y) = \bar{\beta}(r, y) - \beta(r, y),$$
$$F^s(r, \hat{\beta}(r, y)) = F^s(r, \hat{\beta}(r, y) + \beta(r, y)) - F^s(r, \beta(r, y)),$$
where $\bar{\beta}(r, y)$ is any solution of (4) and $F^s = (F_1^s, F_2^s, \ldots, F_m^s)^T$.
Now, we consider the system
$$\begin{cases}
T^{\sigma}_{r_{k-1}} \hat{\beta}(r, y) = F^s(r, \hat{\beta}(r, y)), & r \in [r_{k-1}, r_k), \\
\hat{\beta}(r_k^+, y) = \mu_k \hat{\beta}(r_k, y), & k \in \mathbb{Z},\ y \in \Theta,
\end{cases} \tag{22}$$
and the Lyapunov function $\Lambda(\beta(r, \cdot), \beta(r, \cdot) + \hat{\beta}(r, \cdot))$.
Applying Lemma 3, we conclude that the zero solution β ^ ( r , y ) = 0 , ( r , y ) R × Θ of (22) is globally conformably exponentially stable, which implies the global conformable exponential stability of the almost periodic solution β ( r , y ) of (1). □
Remark 10.
Although the stability of impulsive conformable neural networks is studied in the articles [47,49], the results obtained there are not applicable to model (1), since the impulsive control functions $\gamma_{ik}$ are restricted there to satisfy $|H_{ik}| < 1$, $i = 1, 2, \ldots, m$, $k \in \mathbb{Z}$. The application of the new Lemma 3 allows for the use of more general impulsive functions that satisfy Assumption 5, making the proposed results less restrictive.
Remark 11.
In most of the impulsive control approaches applied to conformable systems, the authors impose bounds on the lengths of the impulsive intervals, that is, on the distances between the impulsive moments [59,60]. One advantage of the proposed stability criteria is that they do not include restrictions on the distances between the impulsive control instants. Hence, the proposed technique improves some impulsive control strategies recently proposed for conformable models in [59,60].
Remark 12.
The established stability results can be successfully applied in synchronization and control problems. Additionally, they can be extended to other stability concepts such as practical exponential stability as in [61], perfect exponential stability as in [24], or predefined time stability as in [62,63]. Thus, our results open the door to future contributions on the topic.

4. An Example

In this example, we consider the impulsive conformable delayed reaction–diffusion neural network model (1) for $n = m = 2$ on the set $\Theta \subset \mathbb{R}^2$, $\Theta = \{y = (y_1, y_2)^T \in \mathbb{R}^2 : |y_l| < 1,\ l = 1, 2\}$, given by
$$\begin{cases}
T^{\sigma}_{r_{k-1}} z_i(r, y) = \displaystyle\sum_{l=1}^{n} \frac{\partial}{\partial y_l}\Big( D_{il} \frac{\partial z_i(r, y)}{\partial y_l} \Big) - c_i z_i(r, y) + \sum_{j=1}^{m} a_{ij}(r) f_j\big(z_j(r, y)\big) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{m} b_{ij}(r) \int_{-\infty}^{r} K_{ij}(r - s) f_j\big(z_j(s, y)\big)\, ds + \Omega_i(r), \quad r \in [r_{k-1}, r_k), \\
z_i(r_k^+, y) = \gamma_{ik}\big(z_i(r_k, y)\big) = \begin{pmatrix} 0.04635 & 0 \\ 0 & 0.00168 \end{pmatrix} + \begin{pmatrix} 0.7 & 0 \\ 0 & 0.9 \end{pmatrix} z_i(r_k, y), \quad k \in \mathbb{Z},\ y \in \Theta,
\end{cases} \tag{23}$$
where $r \in \mathbb{R}$, $\sigma = 0.9$, $i = 1, 2$, $\{r_k\} \in B$, the moments $r_k$ satisfy Assumption 6, $\Omega_1 = 0.040002$, $\Omega_2 = 0.058395$, $f_i(z_i) = \frac{1}{2}(|z_i + 1| - |z_i - 1|)$, $K_{ij}(s) = e^{-s}$, $c_1 = 0.2$, $c_2 = 0.3$,
$$\big( a_{ij}(r) \big)_{2 \times 2} = \begin{pmatrix} a_{11}(r) & a_{12}(r) \\ a_{21}(r) & a_{22}(r) \end{pmatrix} = \begin{pmatrix} 0.03 - \sin(\sqrt{2}\, r) & 0.02 - \cos(\sqrt{3}\, r) \\ 0.04 + \cos(\sqrt{2}\, r) & 0.01 - \sin(\sqrt{3}\, r) \end{pmatrix},$$
$$\big( b_{ij}(r) \big)_{2 \times 2} = \begin{pmatrix} b_{11}(r) & b_{12}(r) \\ b_{21}(r) & b_{22}(r) \end{pmatrix} = \begin{pmatrix} 0.03 + \sin(\sqrt{2}\, r) & 0.01 + \cos(\sqrt{3}\, r) \\ 0.01 - \cos(\sqrt{2}\, r) & 0.01 + \sin(\sqrt{3}\, r) \end{pmatrix},$$
$$\big( D_{il} \big)_{2 \times 2} = \begin{pmatrix} 4 + \sin(\sqrt{2}\, r) & 0 \\ 0 & 5 + \cos(\sqrt{3}\, r) \end{pmatrix}.$$
Obviously, for model (23), all Assumptions 1–5 are satisfied with
$$\big( a_{ij}^M \big)_{2 \times 2} = \begin{pmatrix} 1.03 & 1.02 \\ 1.04 & 1.01 \end{pmatrix}, \qquad \big( b_{ij}^M \big)_{2 \times 2} = \begin{pmatrix} 1.03 & 1.01 \\ 1.01 & 1.01 \end{pmatrix},$$
$L_j = 1$, $\Gamma_{ij}^* = 1$, $i, j = 1, 2$, $\mu_k = 1$, $k \in \mathbb{Z}$.
Condition 1 of Theorem 1 holds for $d_1 = 3$, $d_2 = 4$,
$$\kappa_1 = \min_{1 \leq i \leq m} \Big[ 2(c_i + d_i) - \sum_{j=1}^{m} \big( a_{ij}^M + b_{ij}^M \Gamma_{ij}^* \big) L_j \Big] = 2.31 > 0,$$
$$\kappa_2 = \max_{1 \leq i \leq m} \sum_{j=1}^{m} b_{ij}^M L_j \Gamma_{ij}^* = 2.04 > 0,$$
and $\kappa_1 > \kappa_2$.
In addition, since system (23) has an equilibrium
$$e(r, y) = (0.1545, 0.0168)^T$$
such that $\|e(r, y)\| < 0.15542$, condition 2 of Theorem 1 holds.
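The arithmetic behind conditions 1 and 2 can be reproduced directly; the following Python sketch (not part of the paper) evaluates $d_i$, $\kappa_1$, and $\kappa_2$ from the bounds above and checks the norm of the equilibrium.

```python
import numpy as np

# Verification sketch of conditions 1 and 2 of Theorem 1 for example (23); the
# matrices are copied from the example, everything else follows the formulas
# for kappa_1, kappa_2 and d_i.
aM = np.array([[1.03, 1.02],
               [1.04, 1.01]])
bM = np.array([[1.03, 1.01],
               [1.01, 1.01]])
L  = np.array([1.0, 1.0])
Gamma = np.ones((2, 2))
c  = np.array([0.2, 0.3])
D_lower = np.array([[3.0, 0.0],     # lower bounds of 4 + sin(sqrt(2) r) and 5 + cos(sqrt(3) r)
                    [0.0, 4.0]])
nu = np.array([1.0, 1.0])

d = (D_lower / nu**2).sum(axis=1)                       # d_i = sum_l D_lower[i, l] / nu_l^2 -> [3, 4]
kappa1 = np.min(2 * (c + d) - ((aM + bM * Gamma) @ L))  # = 2.31
kappa2 = np.max((bM * Gamma) @ L)                       # = 2.04
print(d, kappa1, kappa2, kappa1 > kappa2)

# condition 2: the equilibrium e = (0.1545, 0.0168)^T has Euclidean norm < 0.15542
e = np.array([0.1545, 0.0168])
print(np.linalg.norm(e))
```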
Therefore, according to Theorem 1, there exists a unique solution $\beta(r, y)$ of system (23) that is almost periodic along r, uniformly on $y \in \Theta$, such that:
(a) $\|\beta(r, y)\| \leq C_1$, $C_1 < 0.15542$;
(b) $H(\beta, r_k) \subseteq H(z, r_k)$.
In addition, according to Theorem 2, the unique almost periodic solution $\beta(r, y)$ of system (23) is globally conformably exponentially stable.

5. Conclusions

In this paper, the impulsive conformable modeling approach is applied, and an impulsive conformable reaction–diffusion neural network model with distributed delays is constructed. The almost periodic concept is applied to the proposed model, and the corresponding analysis is performed. The analysis is based on the impulsive conformable Lyapunov technique, by means of which efficient sufficient conditions are established for the existence and uniqueness of an almost periodic state of the model. The obtained criteria extend the very few results on the almost periodicity of impulsive conformable neural network systems by considering reaction–diffusion terms and distributed delays. The global conformable exponential stability of the almost periodic solution is also studied, and corresponding criteria are proposed. The computational simplicity of working with conformable derivatives, compared with fractional-order derivatives, makes our results relevant to the applied sciences. The presented stability results can be extended to the integral manifold case, considering additional aspects of conformable neural network systems that require further investigation. Future directions also include developing practical stability or predefined time stability criteria for cases in which the Lyapunov-like stability concept cannot be applied. In addition, the effect of uncertain parameters on the qualitative behavior of the proposed model is planned to be studied in future research.

Author Contributions

Conceptualization, G.S., I.S. and C.S.; methodology, G.S., I.S. and C.S.; formal analysis, G.S., I.S. and C.S.; investigation, G.S., I.S. and C.S.; writing—original draft preparation, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the support of the project CoE UNITe BG16RFPR002-1.014-0004.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cao, Z.; Chen, R.; Xu, L.; Zhou, X.; Fu, X.; Zhong, W.; Grima, R. Efficient and scalable prediction of stochastic reaction–diffusion processes using graph neural networks. Math. Biosci. 2024, 375, 109248. [Google Scholar] [CrossRef]
  2. Hattaf, K.; Yousfi, N. Global stability for reaction-diffusion equations in biology. Comput. Math. Appl. 2013, 66, 1488–1497. [Google Scholar] [CrossRef]
  3. Shanmugam, L.; Mani, P.; Rajan, R.; Joo, Y.H. Adaptive synchronization of reaction–diffusion neural networks and its application to secure communication. IEEE Trans. Cybern. 2020, 50, 911–922. [Google Scholar] [CrossRef] [PubMed]
  4. Halatek, J.; Frey, E. Rethinking pattern formation in reaction–diffusion systems. Nat. Phys. 2018, 14, 507–514. [Google Scholar] [CrossRef]
  5. Arena, P.; Fortuna, L.; Branciforte, M.M. Reaction-diffusion CNN algorithms to generate and control artificial locomotion. IEEE Trans. Circuits Syst. 1999, 46, 253–260. [Google Scholar] [CrossRef]
  6. Huan, M.; Li, C. Synchronization of reaction–diffusion neural networks with sampled-data control via a new two-sided looped-functional. Chaos Solitons Fractals 2023, 167, 113059. [Google Scholar] [CrossRef]
  7. Zhang, R.; Wang, H.; Park, J.H.; Lam, H.K.; He, P. Quasisynchronization of reaction–diffusion neural networks under deception attacks. IEEE Trans. Syst. Man. Cybern. 2022, 52, 7833–7844. [Google Scholar] [CrossRef]
  8. Chen, W.H.; Liu, L.; Lu, X. Intermittent synchronization of reaction-diffusion neural networks with mixed delays via Razumikhin technique. Nonlinear Dyn. 2017, 87, 535–551. [Google Scholar] [CrossRef]
  9. Lu, J.G. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Chaos Solitons Fractals 2008, 35, 116–125. [Google Scholar] [CrossRef]
  10. Wang, Z.; Zhang, H. Global asymptotic stability of reaction-diffusion Cohen–Grossberg neural networks with continuously distributed delays. IEEE Trans. Neral Netw. 2010, 21, 39–49. [Google Scholar] [CrossRef]
  11. Zhang, H.; Zeng, Z. Stability and synchronization of nonautonomous reaction-diffusion neural networks with general time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 5804–5817. [Google Scholar] [CrossRef] [PubMed]
  12. Chen, W.H.; Luo, S.; Zheng, W.X. Impulsive synchronization of reaction–diffusion neural networks with mixed delays and its application to image encryption. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2696–2710. [Google Scholar] [CrossRef] [PubMed]
  13. Hui, M.; Liu, X.; Zhu, S.; Cao, J. Event-triggered impulsive cluster synchronization of coupled reaction–diffusion neural networks and its application to image encryption. Neural Netw. 2024, 170, 46–54. [Google Scholar] [CrossRef] [PubMed]
  14. Li, K.; Song, Q. Exponential stability of impulsive Cohen–Grossberg neural networks with time-varying delay and reaction-diffusion terms. Neurocomputing 2008, 72, 231–240. [Google Scholar] [CrossRef]
  15. Wei, P.C.; Wang, J.L.; Huang, Y.L.; Xu, B.B.; Ren, S.Y. Impulsive control for the synchronization of coupled neural networks with reaction–diffusion terms. Neurocomputing 2016, 207, 539–547. [Google Scholar] [CrossRef]
  16. Benchohra, M.; Henderson, J.; Ntouyas, J. Impulsive Differential Equations and Inclusions, 1st ed.; Hindawi Publishing Corporation: New York, NY, USA, 2006; ISBN 977594550X/978-9775945501. [Google Scholar]
  17. Li, X.; Song, S. Impulsive Systems with Delays: Stability and Control, 1st ed.; Science Press & Springer: Singapore, 2022; ISBN 978-981-16-4686-7. [Google Scholar]
  18. Yang, T. Impulsive Control Theory, 1st ed.; Springer: Berlin, Germany, 2001; ISBN 978-3540422969. [Google Scholar]
  19. Yang, X.; Peng, D.; Lv, X.; Li, X. Recent progress in impulsive control systems. Math. Comput. Simul. 2019, 155, 244–268. [Google Scholar] [CrossRef]
  20. Li, Z.; Li, K. Stability analysis of impulsive Cohen–Grossberg neural networks with distributed delays and reaction-diffusion terms. Appl. Math. Model. 2009, 33, 1337–1348. [Google Scholar] [CrossRef]
  21. Wei, T.; Li, X.; Stojanovic, V. Input-to-state stability of impulsive reaction–diffusion neural networks with infinite distributed delays. Nonlinear Dynam. 2021, 103, 1733–1755. [Google Scholar] [CrossRef]
  22. Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods, 1st ed.; World Scientific: Singapore, 2012; ISBN 978-981-4355-20-9. [Google Scholar]
  23. Magin, R. Fractional Calculus in Bioengineering, 1st ed.; Begell House: Redding, CA, USA, 2006; ISBN 978-1567002157. [Google Scholar]
  24. Cao, J.; Stamov, G.; Stamova, I.; Simeonov, S. Almost periodicity in reaction-diffusion impulsive fractional neural networks. IEEE Trans. Cybern. 2021, 51, 151–161. [Google Scholar] [CrossRef]
  25. Li, W.H.; Gao, X.B.; Li, R.X. Dissipativity and synchronization control of fractional-order memristive neural networks with reaction–diffusion terms. Math. Methods Appl. Sci. 2019, 42, 7494–7505. [Google Scholar] [CrossRef]
  26. Liu, F.; Yang, Y.; Chang, Q. Synchronization of fractional-order delayed neural networks with reaction-diffusion terms: Distributed delayed impulsive control. Commun. Nonlinear Sci. Numer. Simul. 2023, 124, 107303. [Google Scholar] [CrossRef]
  27. Stamova, I.; Stamov, G. Mittag–Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers. Neural Netw. 2017, 96, 22–32. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, H.; Gu, Y.; Zhang, X.; Yu, Y. Stability and synchronization of fractional-order reaction–diffusion inertial time-delayed neural networks with parameters perturbation. Neural Netw. 2024, 179, 106564. [Google Scholar] [CrossRef] [PubMed]
  29. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 2015, 1, 73–85. [Google Scholar]
  30. Sales, T.G.; Tenreiro Machado, J.A.; de Oliveira, E.C. A review of definitions of fractional derivatives and other operators. J. Comput. Phys. 2019, 388, 195–208. [Google Scholar] [CrossRef]
  31. Abdeljawad, T. On conformable fractional calculus. J. Comput. Appl. Math. 2015, 279, 57–66. [Google Scholar] [CrossRef]
  32. Khalil, R.; Al Horani, M.; Yousef, A.; Sababheh, M. A new definition of fractional derivative. J. Comput. Appl. Math. 2014, 264, 65–70. [Google Scholar] [CrossRef]
  33. Anderson, D.R.; Ulness, D.J. Newly defined conformable derivatives. Adv. Dyn. Syst. Appl. 2015, 10, 109–137. [Google Scholar]
  34. Batarfi, H.; Losada, J.; Nieto, J.J.; Shammakh, W. Three-point boundary value problems for conformable fractional differential equations. J. Funct. Spaces 2015, 2015, 706383. [Google Scholar] [CrossRef]
  35. Kiskinov, H.; Petkova, M.; Zahariev, A.; Veselinova, M. Some results about conformable derivatives in Banach spaces and an application to the partial differential equations. AIP Conf. Proc. 2021, 2333, 120002. [Google Scholar]
  36. Martynyuk, A.A.; Stamov, G.; Stamova, I. Integral estimates of the solutions of fractional-like equations of perturbed motion. Nonlinear Anal. Model. Control 2019, 24, 138–149. [Google Scholar] [CrossRef]
  37. Mesmouli, M.B.; Hassan, T.S. On the positive solutions for IBVP of conformable differential equations. AIMS Math. 2023, 8, 24740–24750. [Google Scholar] [CrossRef]
  38. Bohner, M.; Hatipoğlu, V.F. Cobweb model with conformable fractional derivatives. Math. Methods Appl. Sci. 2018, 41, 9010–9017. [Google Scholar] [CrossRef]
  39. Morales-Bañuelos, P.; Rodríguez Bojalil, S.E.; Quezada-Téllez, L.A.; Fernández-Anaya, G. A General conformable Black–Scholes equation for option pricing. Mathematics 2025, 13, 1576. [Google Scholar] [CrossRef]
  40. Thabet, H.; Kendre, S. Conformable mathematical modeling of the COVID-19 transmission dynamics: A more general study. Math. Methods Appl. Sci. 2023, 46, 18126–18149. [Google Scholar] [CrossRef]
  41. Soni, K.; Sinha, A.K. Dynamics of epidemic model with conformable fractional derivative. Nonlinear Sci. 2025, 4, 100040. [Google Scholar] [CrossRef]
  42. Kütahyalıoglu, A.; Karakoç, F. Exponential stability of Hopfield neural networks with conformable fractional derivative. Neurocomputing 2021, 456, 263–267. [Google Scholar] [CrossRef]
  43. Xiong, X.; Zhang, Z. Asymptotic synchronization of conformable fractional-order neural networks by L’ Hopital’s rule. Chaos Solitons Fractals 2023, 173, 113665. [Google Scholar] [CrossRef]
  44. Asawasamrit, S.; Ntouyas, S.K.; Thiramanus, P.; Tariboon, J. Periodic boundary value problems for impulsive conformable fractional integrodifferential equations. Bound. Value Probl. 2016, 2016, 122. [Google Scholar] [CrossRef]
  45. He, D.; Xu, L. Stability of conformable fractional delay differential systems with impulses. Appl. Math. Lett. 2024, 149, 108927. [Google Scholar] [CrossRef]
  46. Qiu, W.; Feckan, M.; O’Regan, D.; Wang, J. Convergence analysis for iterative learning control of conformable impulsive differential equations. Bull. Iran. Math. Soc. 2022, 48, 193–212. [Google Scholar] [CrossRef]
  47. Bohner, M.; Stamova, I.; Stamov, G.; Spirova, C. Integral manifolds for impulsive HCV conformable neural network models. Appl. Math. Sci. Eng. 2024, 32, 2345896. [Google Scholar] [CrossRef]
  48. Stamov, G.; Stamova, I.; Martynyuk, A.; Stamov, T. Almost periodic dynamics in a new class of impulsive reaction–diffusion neural networks with fractional-like derivatives. Chaos Solitons Fractals 2021, 143, 110647. [Google Scholar] [CrossRef]
  49. Stamov, G.; Stamova, I.; Spirova, C. On an impulsive conformable M1 oncolytic virotherapy neural network model: Stability of sets analysis. Mathematics 2025, 13, 141. [Google Scholar] [CrossRef]
  50. Xiong, Y.; Li, Y.; Lv, H.; Wu, W.; Xie, S.; Chen, M.; Hu, C.; Li, M. Fixed-time synchronization of Caputo/conformable fractional-order inertial Cohen–Grossberg neural networks via event-triggered one/two-phase hybrid impulsive control. Neural Process. Lett. 2025, 57, 7. [Google Scholar] [CrossRef]
  51. Yang, J.; Feckan, M.; Wang, J.R. Consensus of linear conformable fractional order multi-agent systems with impulsive control protocols. Asian J. Control 2023, 25, 314–324. [Google Scholar] [CrossRef]
  52. Kaslik, E.; Sivasundaram, S. Non-existence of periodic solutions in fractional-order dynamical systems and a remarkable difference between integer and fractional-order derivatives of periodic functions. Nonlinear Anal. Real World Appl. 2012, 12, 1489–1497. [Google Scholar] [CrossRef]
  53. Cao, J.; Samet, B.; Zhou, Y. Asymptotically almost periodic mild solutions to a class of Weyl-like fractional difference equations. Adv. Differ. Equ. 2019, 2019, 371. [Google Scholar] [CrossRef]
  54. Huo, N.; Li, Y. Finite-time Sp-almost periodic synchronization of fractional-order octonion-valued Hopfield neural networks. Chaos Solitons Fractals 2023, 173, 113721. [Google Scholar] [CrossRef]
  55. Li, Y.; Huang, M.; Li, B. Besicovitch almost periodic solutions for fractional-order quaternion-valued neural networks with discrete and distributed delays. Math. Methods Appl. Sci. 2022, 45, 4791–4808. [Google Scholar] [CrossRef]
  56. Stamov, G.; Stamova, I. Second method of Lyapunov and almost periodic solutions for impulsive differential systems of fractional order. IMA J. Appl. Math. 2015, 80, 1619–1633. [Google Scholar] [CrossRef]
  57. Khan, S.; Gepreel, K.A.; Khatun, M.M.; Akbar, A. Bifurcation, quasi-periodic, and chaotic analysis of the conformable fractional paraxial wave equation using the rational expansion method. AIP Adv. 2025, 15, 055323. [Google Scholar] [CrossRef]
  58. Ban, J.-C.; Chang, C.-H.; Huang, N.-Z. Entropy bifurcation of neural networks on Cayley trees. arXiv 2018, arXiv:1706.09283. [Google Scholar] [CrossRef]
  59. Xiao, S.; Li, J. Exponential stability of impulsive conformable fractional-order nonlinear differential system with time-varying delay and its applications. Neurocomputing 2023, 560, 126845. [Google Scholar] [CrossRef]
  60. Luo, L.; Li, L.; Cao, J.; Abdel-Aty, M. Fractional exponential stability of nonlinear conformable fractional-order delayed systems with delayed impulses and its application. J. Frankl. Inst. 2025, 362, 107353. [Google Scholar] [CrossRef]
  61. Stamova, I.; Stamov, G. Integral Manifolds for Impulsive Differential Problems with Applications, 1st ed.; Academic Press: Cambridge, MA, USA; Elsevier: London, UK, 2025; ISBN 978-0-443-30134-6. [Google Scholar]
  62. Xu, H.; Yu, D.; Wang, Z.; Cheong, K.H.; Chen, C.L.P. Nonsingular predefined time adaptive dynamic surface control for quantized nonlinear systems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 5567–5579. [Google Scholar] [CrossRef]
  63. Sui, S.; Chen, C.L.P.; Tong, S. Command filter-based predefined time adaptive control for nonlinear systems. IEEE Trans. Autom. Control 2024, 69, 7863–7870. [Google Scholar] [CrossRef]
