Article

Fixed/Preassigned Time Synchronization of Impulsive Fractional-Order Reaction–Diffusion Bidirectional Associative Memory (BAM) Neural Networks

by Rouzimaimaiti Mahemuti 1,*, Abdujelil Abdurahman 2 and Ahmadjan Muhammadhaji 2
1 School of Information Technology and Engineering, Guangzhou College of Commerce, Guangzhou 511363, China
2 College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(2), 88; https://doi.org/10.3390/fractalfract9020088
Submission received: 26 December 2024 / Revised: 26 January 2025 / Accepted: 26 January 2025 / Published: 28 January 2025

Abstract

This study delves into the synchronization of impulsive fractional-order bidirectional associative memory (BAM) neural networks with a diffusion term, where the fractional derivative is the Caputo derivative of order between 0 and 1, at a fixed time (FXT) and a predefined time (PDT). Initially, this study presents certain characteristics of fractional-order calculus and several lemmas pertaining to the stability of general impulsive nonlinear systems, specifically focusing on FXT and PDT stability. Subsequently, we utilize a novel controller and Lyapunov functions to establish new sufficient criteria for achieving FXT and PDT synchronization. Finally, a numerical simulation is presented to verify the theoretical results.

1. Introduction

Neural networks are a cornerstone of artificial intelligence, mimicking the human brain to address technological challenges and pave the way for new discoveries. These networks are instrumental in information processing, pattern classification, and cognitive control. Through mathematical modeling, we can examine and replicate brain functions in artificial systems, such as robotics. Both theoretical and practical research are essential to advance the field of dynamic neural networks.
Recently, fractional-order neural networks have become popular among scholars because fractional calculus can explain phenomena that classical calculus cannot, such as random errors and unusual diffusion processes. Their memory and hereditary properties make them valuable in various fields, including blood flow, electrolysis, and viscosity [1]. These fractional calculus properties, which allow neural networks to exhibit characteristics such as memory and heredity, demonstrate advantages over traditional integer-order derivatives when applied to various processes. These networks possess infinite memory, and the adjustment of the fractional parameters enhances the device output by increasing the degrees of freedom. They excel in information processing, parameter estimation, and diverse artificial intelligence applications and are comparable to biological networks [2,3] and geometric models [4]. Additionally, fractional-order derivatives offer more accurate depictions of chaotic behavior and a variety of nonlinear phenomena, because fractional-order derivatives exhibit global correlation and can better describe processes with strong historical dependence. Consequently, in recent years, a significant number of studies have been presented focusing on fractional-order chaotic systems [5,6]. With the increasing complexity of economic issues in the financial field, which are influenced by nonlinear factors, financial systems exhibit highly intricate phenomena. Therefore, it is essential to investigate the dynamic characteristics and effects of chaos in intricate financial systems [7]. Fractional-order models can better capture the complex behaviors and interactions within financial markets or provide more accurate risk assessments and predictions. In [8], variable-order time-fractional generalized Navier–Stokes equations were introduced to describe the anomalous dynamics in porous flows to reveal the impact of the fractional order on fluid flow in porous media. 
Other works detailed the use of fractional-order systems in the encryption of images [9,10,11].
On the other hand, reaction–diffusion terms are indispensable in various scientific domains, including image encryption [12,13], pattern formation, biology [14], and chemistry. In particular, incorporating reaction–diffusion terms into bidirectional associative memory (BAM) neural networks enhances their ability to store and recall associations between input and output patterns, making them effective tools for pattern recognition [15] and associative memory tasks [16]. Ali et al. [17] examined the synchronization of fractional-order BAM neural networks with fuzzy terms and time-varying delays by presenting sufficient conditions. Lin et al. [18] investigated the synchronization issue of BAM neural networks with diffusion and time-varying delays and achieved spatio-temporal synchronization by proposing an impulsive pinning controller. Chen et al. [19] investigated the global exponential synchronization of memristive BAM neural networks with mixed delays and reaction–diffusion terms based on a new integral inequality with an infinite distributed delay. Furthermore, in certain networks, especially biological neural networks, there are instances where we must precisely capture the propagation of electrical signals and the diffusion of neurotransmitters within a neuron network. To achieve this, we must consider the impact of previous states and the spatial arrangement of neurons. Therefore, we will focus on fractional-order BAM neural networks subject to reaction–diffusion terms.
However, the aforementioned studies neglected abrupt changes in the system. Many motion processes in nature undergo sudden changes whose duration is very short compared with the entire motion process; we refer to this phenomenon as impulse effects. Impulse effects are significant in many engineering systems, such as instantaneous transitions or resets in system states, video encoding, image encryption [20,21], and natural language processing [22]. Several studies have been conducted on impulse phenomena [16,22,23]. In [23], the synchronization of linearly coupled memristor-based recurrent neural networks with impulses was investigated. In [22], the authors provided sufficient conditions for the asymptotic stability of impulsive stochastic systems. In [16], new stability criteria for reaction–diffusion impulsive neural networks were developed.
Synchronization is also a critical element in numerous natural processes, and it holds paramount importance within neural network systems. The analysis of the synchronization between two systems can be encapsulated in the stability of an error system. Particularly in the realm of fractional-order neural networks, synchronization is vital for preserving the stability, performance, and communication among linked networks. It improves the precision and efficiency of computations, enhancing performance in tasks such as signal processing, pattern recognition, and control systems. Many studies have explored this field; for instance, Pratap et al. [24] investigated the asymptotic synchronization of Cohen–Grossberg fractional-order neural networks and confirmed the global existence of asymptotically stable solutions. Velmurugan et al. [25] analyzed finite-time synchronization conditions for fractional-order memristor-based neural networks using Laplace transforms and Mittag–Leffler functions. Subsequently, Du and Lu [26] introduced a fractional-order Gronwall inequality for the finite-time synchronization of fractional-order memristor-based neural networks with a time delay. However, finite-time synchronization has a limitation: the settling time depends on the initial conditions, which are often unknown in practical applications, complicating the determination of precise convergence times. To address this limitation, the concept of fixed-time (FXT) stability was introduced by Polyakov [27] and has been applied to various neural network systems, including those of fractional order with reaction–diffusion and impulsive terms [5,6,7,20]. Nevertheless, there is a paucity of studies on the synchronization of fractional-order impulsive neural networks incorporating reaction–diffusion terms. Building upon preceding work, this study explores FXT and predefined-time (PDT) synchronization in impulsive fractional-order systems with reaction–diffusion components.
The main contributions of this study are as follows:
  • We present a novel controller to establish a sufficient condition for reaching FXT and PDT synchronizations in fractional-order impulsive neural networks with diffusion terms.
  • We establish the robustness of the FXT and PDT synchronization approaches against fluctuations in parameter configurations.
  • We demonstrate the influence of the fractional-order parameter on the synchronization of the given system.
The organization of the paper is laid out as follows: Section 2 provides the necessary definitions, lemmas, and details of the systems under study, which are vital for the proof of the main results in the following section. Section 3 describes the design of a new controller intended for FXT and PDT synchronizations in impulsive reaction–diffusion fractional-order neural networks. Section 4 presents an evaluation of the effectiveness of the theoretical findings introduced in this study. Finally, Section 5 concludes the paper with a summary of the main points.

2. Preliminaries

2.1. Theoretical Background

The study of fractional-order derivatives has gained prominence with the development of mathematical analysis. There are various definitions of fractional-order derivatives, including the Caputo derivative and the Riemann–Liouville derivative. The specific definitions are as follows [28]:
  • Riemann–Liouville integral for $\alpha \in (0,1)$:
    $$I^{\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_0^t \frac{x(\theta)}{(t-\theta)^{1-\alpha}}\, d\theta.$$
  • Riemann–Liouville derivative for $\alpha \in (0,1)$:
    $${}^{RL}D^{\alpha} x(t) = \frac{d}{dt} I^{1-\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t \frac{x(\theta) - x(0)}{(t-\theta)^{\alpha}}\, d\theta.$$
  • Caputo derivative:
    $$D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{x'(\theta)}{(t-\theta)^{\alpha}}\, d\theta, \quad 0 < \alpha < 1; \qquad D^{\alpha} x(t) = \frac{1}{\Gamma(m-\alpha)} \int_0^t \frac{x^{(m)}(\theta)}{(t-\theta)^{1-m+\alpha}}\, d\theta, \quad m-1 < \alpha < m.$$
Our main focus is on the Caputo derivative of order $\alpha \in (0,1)$. We now present lemmas and properties that facilitate the derivation of the main results.
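Since the numerical section of this paper approximates Caputo derivatives via the Grünwald–Letnikov (GL) scheme, a minimal self-contained sketch may help; this is our own illustration (not the authors' code), applying the GL weights to $x(t) - x(0)$, which for $\alpha \in (0,1)$ yields the Caputo derivative:

```python
import math

def gl_weights(alpha, n):
    # Grünwald–Letnikov binomial weights w_j = (-1)^j * C(alpha, j),
    # generated by the recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j).
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def caputo_gl(x, alpha, h):
    # Caputo derivative of order alpha in (0, 1) at the last grid point,
    # approximated as the GL derivative of x(t) - x(0) on a uniform grid of step h.
    n = len(x) - 1
    w = gl_weights(alpha, n)
    x0 = x[0]
    return sum(w[j] * (x[n - j] - x0) for j in range(n + 1)) / h ** alpha

# Sanity check against the exact Caputo derivative of x(t) = t,
# which equals t^(1-alpha) / Gamma(2 - alpha).
alpha, h = 0.5, 1e-3
x = [i * h for i in range(1001)]          # t in [0, 1]
approx = caputo_gl(x, alpha, h)
exact = 1.0 / math.gamma(2.0 - alpha)     # exact value at t = 1
```

The GL scheme is first-order accurate, so with step $h = 10^{-3}$ the approximation agrees with the closed-form value to roughly three decimal places.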
Property 1 
([29]). The following property holds for $\alpha > 0$, $\beta > 0$:
$$D^{\alpha} I^{\beta} x(t) = D^{\alpha-\beta} x(t).$$
Property 2 
([30]). The Caputo fractional derivative of $|x(t)|$ satisfies the following inequality:
$$D^{\alpha} |x(t)| \le \operatorname{sign}(x(t))\, D^{\alpha} x(t),$$
where $0 < \alpha < 1$ and $x(t) \in C^1[0, \infty)$.
Lemma 1 
([31]). The equality
$${}^{RL}D_t^{\alpha} x(t) = D_t^{\alpha} x(t) + \frac{x(0)}{\Gamma(1-\alpha)}\, t^{-\alpha}$$
holds true if $x(t) \in C[0, \infty)$, where $0 < \alpha < 1$.
Lemma 2 
([32]). Given that $0 < a \le 1$, $b > 1$, and $z_i > 0$ for $i = 1, 2, \ldots, k$, the subsequent inequalities are valid:
$$\sum_{i=1}^{k} z_i^{a} \ge \Big(\sum_{i=1}^{k} z_i\Big)^{a}, \qquad \sum_{i=1}^{k} z_i^{b} \ge k^{1-b} \Big(\sum_{i=1}^{k} z_i\Big)^{b}.$$
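The two inequalities of Lemma 2 are easy to spot-check numerically. The sketch below (our own illustration, with an arbitrary sample vector $z$ and exponents $a = 0.5$, $b = 2.5$) evaluates both sides:

```python
# Numerical check of Lemma 2 for a sample vector z and exponents a, b.
z = [0.7, 1.3, 2.1, 0.4]
a, b = 0.5, 2.5
k = len(z)
s = sum(z)
lhs_a = sum(zi ** a for zi in z)            # sum of z_i^a
lhs_b = sum(zi ** b for zi in z)            # sum of z_i^b
print(lhs_a >= s ** a)                      # first inequality  -> True
print(lhs_b >= k ** (1 - b) * s ** b)       # second inequality -> True
```

Both inequalities hold for any positive sample, which is exactly what the proofs of Theorems 1 and 2 rely on when combining $V_1$ and $V_2$.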

2.2. System Description

Throughout this study, we consider the following class of nonlinear impulsive neural networks characterized by fractional-order dynamics and the inclusion of diffusion terms:
$$\begin{cases} D_t^{\alpha} v_\iota(t,x) = d_\iota \Delta v_\iota(t,x) - a_\iota v_\iota(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} h_\kappa(w_\kappa(t,x)) + I_\iota, & t \ne t_\tau, \\ D_t^{\alpha} w_\kappa(t,x) = \hat{d}_\kappa \Delta w_\kappa(t,x) - \hat{a}_\kappa w_\kappa(t,x) + \sum_{\iota=1}^{n} \hat{b}_{\kappa\iota} \hat{h}_\iota(v_\iota(t,x)) + \hat{I}_\kappa, & t \ne t_\tau, \\ v_\iota(t_\tau^+,x) = \xi_\iota \Gamma(2-\alpha)\, v_\iota(t_\tau,x), & t = t_\tau, \\ w_\kappa(t_\tau^+,x) = \hat{\xi}_\kappa \Gamma(2-\alpha)\, w_\kappa(t_\tau,x), & t = t_\tau, \end{cases}$$
for $\iota \in I = \{1, 2, \ldots, n\}$, $\kappa \in J = \{1, 2, \ldots, m\}$, where the numbers of neurons are represented by $n$ and $m$, $\Delta \mu = \sum_{h=1}^{l} \frac{\partial^2 \mu}{\partial x_h^2}$, and $l$ is the space dimension. The state variables for the $\iota$th neuron and the $\kappa$th neuron at time $t$ and spatial location $x$ are represented by $v_\iota(t,x) \in \mathbb{R}$ and $w_\kappa(t,x) \in \mathbb{R}$, respectively. The self-inhibition rates of the neurons are denoted by the positive constants $a_\iota$ and $\hat{a}_\kappa$. The synaptic connection weights are represented by the constants $b_{\iota\kappa}$ and $\hat{b}_{\kappa\iota}$. The activation functions for the neurons are $h_\iota(\cdot)$ and $\hat{h}_\kappa(\cdot)$. The biases of the neurons are given by $I_\iota$ and $\hat{I}_\kappa$. Additionally, $\xi_\iota$ and $\hat{\xi}_\kappa$ are positive constants. The functions $v_\iota(t_\tau^+,x)$ and $w_\kappa(t_\tau^+,x)$ describe the impulse jumps that take place at the impulse moments $t_\tau$ $(\tau = 1, 2, 3, \ldots)$. The sequence $\{t_\tau\}$ is strictly increasing and satisfies $\lim_{\tau \to +\infty} t_\tau = +\infty$.
The initial and boundary conditions for the system (1) are detailed as
$$v_\iota(0,x) = v_\iota^{0}(x), \quad w_\kappa(0,x) = w_\kappa^{0}(x), \quad \text{for } x \in \Omega,$$
$$\frac{\partial v_\iota(t,x)}{\partial \nu} = 0, \quad \frac{\partial w_\kappa(t,x)}{\partial \nu} = 0, \quad \text{for } (t,x) \in (0,+\infty) \times \partial\Omega,$$
for $\iota \in I$, $\kappa \in J$, where $v_\iota^0$ and $w_\kappa^0$ are bounded continuous functions.
The response system for the drive system can be introduced as
$$\begin{cases} D_t^{\alpha} v_\iota^*(t,x) = d_\iota \Delta v_\iota^*(t,x) - a_\iota v_\iota^*(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} h_\kappa(w_\kappa^*(t,x)) + I_\iota + u_\iota(t,x), & t \ne t_\tau, \\ D_t^{\alpha} w_\kappa^*(t,x) = \hat{d}_\kappa \Delta w_\kappa^*(t,x) - \hat{a}_\kappa w_\kappa^*(t,x) + \sum_{\iota=1}^{n} \hat{b}_{\kappa\iota} \hat{h}_\iota(v_\iota^*(t,x)) + \hat{I}_\kappa + \hat{u}_\kappa(t,x), & t \ne t_\tau, \\ v_\iota^*(t_\tau^+,x) = \xi_\iota \Gamma(2-\alpha)\, v_\iota^*(t_\tau,x), & t = t_\tau, \\ w_\kappa^*(t_\tau^+,x) = \hat{\xi}_\kappa \Gamma(2-\alpha)\, w_\kappa^*(t_\tau,x), & t = t_\tau, \end{cases}$$
for $\iota \in I$, $\kappa \in J$, where $v_\iota^*(t,x) \in \mathbb{R}$ and $w_\kappa^*(t,x) \in \mathbb{R}$ represent the state variables of the system (2). The controllers, denoted as $u_\iota(t,x)$ and $\hat{u}_\kappa(t,x)$, will be designed in the subsequent section.
The initial and boundary conditions for the system (2) are detailed as
$$v_\iota^*(0,x) = v_\iota^{*0}(x), \quad w_\kappa^*(0,x) = w_\kappa^{*0}(x), \quad \text{for } x \in \Omega,$$
$$\frac{\partial v_\iota^*(t,x)}{\partial \nu} = 0, \quad \frac{\partial w_\kappa^*(t,x)}{\partial \nu} = 0, \quad \text{for } (t,x) \in (0,+\infty) \times \partial\Omega,$$
where the functions $v_\iota^{*0}(\cdot)$ and $w_\kappa^{*0}(\cdot)$ are both continuous and bounded. Let $\iota \in I$ and $\kappa \in J$; the paper maintains the following assumptions throughout.
Assumption 1. 
There exist constants $L_i > 0$ $(i = 1, 2, \ldots, n)$ and $\hat{L}_j > 0$ $(j = 1, 2, \ldots, m)$ such that the activation functions $h_i(\cdot)$ and $\hat{h}_j(\cdot)$ satisfy
$$|h_i(a_1) - h_i(a_2)| \le L_i |a_1 - a_2|, \qquad |\hat{h}_j(a_1) - \hat{h}_j(a_2)| \le \hat{L}_j |a_1 - a_2|,$$
where $a_1$ and $a_2$ are arbitrary real constants.
The error system of (1) and (2) is
$$\begin{cases} D_t^{\alpha} \varepsilon_\iota(t,x) = d_\iota \Delta \varepsilon_\iota(t,x) - a_\iota \varepsilon_\iota(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x)) + u_\iota, & t \ne t_\tau, \\ D_t^{\alpha} \eta_\kappa(t,x) = \hat{d}_\kappa \Delta \eta_\kappa(t,x) - \hat{a}_\kappa \eta_\kappa(t,x) + \sum_{\iota=1}^{n} \hat{b}_{\kappa\iota} \hat{H}_\iota(\varepsilon_\iota(t,x)) + \hat{u}_\kappa, & t \ne t_\tau, \\ \varepsilon_\iota(t_\tau^+,x) = \xi_\iota \Gamma(2-\alpha)\, \varepsilon_\iota(t_\tau,x), & t = t_\tau, \\ \eta_\kappa(t_\tau^+,x) = \hat{\xi}_\kappa \Gamma(2-\alpha)\, \eta_\kappa(t_\tau,x), & t = t_\tau, \end{cases}$$
where
$$\varepsilon_\iota(t,x) = v_\iota^*(t,x) - v_\iota(t,x), \qquad \eta_\kappa(t,x) = w_\kappa^*(t,x) - w_\kappa(t,x),$$
$$H_\kappa(\eta_\kappa(t,x)) = h_\kappa(w_\kappa^*(t,x)) - h_\kappa(w_\kappa(t,x)), \qquad \hat{H}_\iota(\varepsilon_\iota(t,x)) = \hat{h}_\iota(v_\iota^*(t,x)) - \hat{h}_\iota(v_\iota(t,x)),$$
for ι I , κ J .
Here, we delineate the essential definitions and supporting lemmas that form the basis of our main results. Consider the system described below:
$$\begin{cases} \frac{d}{dt}\varphi(t) = F(t, \varphi(t)), & t \ne t_\tau, \\ \varphi(0) = \varphi_0, & \\ \Delta\varphi|_{t=t_\tau} = \Lambda(t_\tau, \varphi(t_\tau)), & \tau \in \mathbb{N}, \end{cases}$$
where $\varphi(t) \in \mathbb{R}^n$ denotes the state vector of the system, $F : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$ is a given continuous function with $F(0, \varphi(0)) = 0$, and $\Lambda : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable, locally Lipschitz function satisfying $\Lambda(t, 0) = 0$.
Definition 1 
([32,33]). The zero solution of the system (4) is called FXT-stable if the solution $\varphi(t, \varphi_0)$ starting from the initial condition $\varphi_0 \in \mathbb{R}^n$ meets the following criteria:
(i) 
Lyapunov stable. For any $\varepsilon > 0$, there is a $\delta = \delta(\varepsilon) > 0$ such that $\|\varphi(t, \varphi_0)\| < \varepsilon$ for any $\|\varphi_0\| \le \delta$ and $t \ge 0$;
(ii) 
Finite-time convergence. There exists a function $T : \mathbb{R}^n \setminus \{0\} \to (0, +\infty)$, called the settling-time (ST) function, such that $\lim_{t \to T(\varphi_0)} \|\varphi(t, \varphi_0)\| = 0$ and $\varphi(t, \varphi_0) = 0$ for all $t \ge T(\varphi_0)$;
(iii) 
$T(\varphi_0)$ is bounded. There exists $T_{\max} > 0$ such that $T(\varphi_0) \le T_{\max}$ for all $\varphi_0 \in \mathbb{R}^n$.
Definition 2 
([34]). The zero solution of system (4) is PDT-stable if it exhibits FXT stability and, for a given preassigned time $T_c > 0$, $T(\varphi_0) \le T_c$ holds true for any initial condition $\varphi_0 \in \mathbb{R}^n$.
Lemma 3 
([33]). If a Lyapunov function $V(t, \varphi(t))$ exists and satisfies
(i) 
$\varepsilon_1 \|\varphi(t)\|^2 \le V(t, \varphi(t)) \le \varepsilon_2 \|\varphi(t)\|^2$, $t \in \mathbb{R}_+$, $\varphi \in \mathbb{R}^n$;
(ii) 
$\dot{V}(t, \varphi(t)) \le -\mu V^p(t, \varphi(t)) - \lambda V^q(t, \varphi(t))$, $t \ne t_\tau$; $\quad V(t_\tau^+, \varphi(t_\tau^+)) \le \Lambda V(t_\tau, \varphi(t_\tau))$, $t = t_\tau$,
where $\varepsilon_1, \varepsilon_2, K, \mu, \lambda$ are positive scalars, $0 < p < 1$, $q > 1$, and $0 < \Lambda \le 1$, then the system (4) is FXT-stable with the ST
$$T_1 = 2\nu_\tau N_0 + \frac{1}{(1-p)\eta} \ln\frac{\mu}{\mu - \eta} + \frac{1}{(1-q)\eta} \ln\Big( 1 - \frac{\eta\, \Lambda^{N_0(1-q)}}{\lambda} \Big),$$
where $\eta = \frac{\ln \Lambda}{\nu_\tau}$.
In Lemma 3, $\nu_\tau$ denotes the average impulse interval, and $N_0$ is a positive constant. For the detailed definitions of $\nu_\tau$ and $N_0$, refer to Definition 2 in [32].
Lemma 4 
([34]). If a Lyapunov function $V(t, \varphi(t))$ exists and satisfies
(i) 
$\varepsilon_1 \|\varphi(t)\|^2 \le V(t, \varphi(t)) \le \varepsilon_2 \|\varphi(t)\|^2$, $t \in \mathbb{R}_+$, $\varphi \in \mathbb{R}^n$;
(ii) 
$\dot{V}(t, \varphi(t)) \le -\frac{T_0}{T_c}\big( \mu V^p(t, \varphi(t)) + \lambda V^q(t, \varphi(t)) \big)$, $t \ne t_\tau$; $\quad V(t_\tau^+, \varphi(t_\tau^+)) \le \Lambda V(t_\tau, \varphi(t_\tau))$, $t = t_\tau$,
then the system (4) will be PDT-stable with a preassigned time $T_c$, where $T_0$ is given as $T_0 = \frac{1}{\mu(1-p)\pi^2} + \frac{1}{\lambda(q-1)\varpi}$, $\varpi = \Lambda^{\tau_0(1-q)}$, $\pi = \Lambda^{\tau_0(1-p)}$, and $\varepsilon_1, \varepsilon_2, K, \mu, \lambda, p, q$ are described in Lemma 3.

2.3. Fractional-Order Lyapunov Exponent

Lyapunov exponents are invaluable tools for characterizing chaos, analyzing stability, predicting critical transitions, and efficiently computing the properties of chaotic attractors. Their applications extend from theoretical studies to practical data-driven approaches in various scientific fields. For example, the largest Lyapunov exponent determines the dominant rate of divergence or convergence of trajectories, while the full spectrum of Lyapunov exponents can reveal the overall stability characteristics of the system. In chaotic systems, the presence of at least one positive Lyapunov exponent confirms the existence of chaotic dynamics [35,36]. Now, we introduce the Lyapunov exponents for a fractional-order system $D_t^{\alpha} \Phi = f(\Phi(t))$, where $\Phi(t) = (\phi_1, \phi_2, \ldots, \phi_n)$ and $f(\Phi(t)) = (f(\phi_1), f(\phi_2), \ldots, f(\phi_n))$ are real-valued $n$-dimensional vectors. Then, the Lyapunov exponents can be introduced as [35]
$$a_{ij}(k) = \Delta t^{\alpha}\, \frac{\partial f_i}{\partial \phi_j}\, a_{ij}(k-1) = \frac{1}{k+1}\, \omega_l\, a_{ij}(k), \qquad \lambda_j = \lim_{k \to \infty} \frac{1}{k h} \ln a_j(k),$$
with $\omega = \big(1 - \frac{1}{1+\alpha}\big)\omega_1$, and the initial value $(a_1(0), a_2(0), \ldots, a_j(0), \ldots, a_n(0)) = I$ ($I$ is the $n \times n$ identity matrix). For more specific information, please refer to [35].

3. Main Results

We present a pair of pivotal theorems that are instrumental in achieving FXT and PDT synchronizations within the system under consideration.

3.1. FXT Synchronization

The controllers $u_\iota(t,x)$ and $\hat{u}_\kappa(t,x)$ are designed as
$$\begin{cases} u_\iota(t,x) = -\delta_\iota \varepsilon_\iota(t,x) - \Big[ \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| + \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \Big] \operatorname{sign}(\varepsilon_\iota(t,x)), \\ \hat{u}_\kappa(t,x) = -\hat{\delta}_\kappa \eta_\kappa(t,x) - \Big[ \hat{\theta}_\kappa\, \tilde{\eta}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\eta_\kappa(t,x)| + \hat{\vartheta}_\kappa\, \tilde{\eta}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\eta_\kappa(t,x)| \Big] \operatorname{sign}(\eta_\kappa(t,x)), \end{cases}$$
where $\iota \in I$, $\kappa \in J$; $\delta_\iota$, $\hat{\delta}_\kappa$, $\theta_\iota$, $\hat{\theta}_\kappa$, $\vartheta_\iota$, and $\hat{\vartheta}_\kappa$ are positive constants; $p \in (0,1)$, $q \in (1, +\infty)$; and $\tilde{\varepsilon}$ and $\tilde{\eta}$ are given as follows:
$$\tilde{\varepsilon}(t) = \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx, \qquad \tilde{\eta}(t) = \int_\Omega \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\eta_\kappa(t,x)|\, dx.$$
Theorem 1. 
Suppose Assumption 1 is met. If the inequalities $-a_\iota - \delta_\iota + \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0$ and $-\hat{a}_\kappa - \hat{\delta}_\kappa + L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0$ are satisfied for $\iota \in I$, $\kappa \in J$, then the drive–response systems (1) and (2) attain FXT synchronization with the settling time
$$T_2 = 2\nu_\tau N_0 + \frac{2}{(1-p)\eta} \ln\frac{\Theta}{\Theta - \eta} + \frac{2}{(1-q)\eta} \ln\Big( 1 - \frac{\eta\, \zeta^{\frac{1}{2}N_0(1-q)}}{\Upsilon} \Big),$$
where $\eta = \frac{\ln \zeta}{\nu_\tau}$, $\Theta = \min\{\theta, \hat{\theta}\}$, $\Upsilon = \min\{\vartheta, \hat{\vartheta}\}$, and $\zeta = \max\{\max_{\iota \in I}\{\xi_\iota\}, \max_{\kappa \in J}\{\hat{\xi}_\kappa\}\}\, \Gamma(2-\alpha)$.
Proof. 
We construct the following Lyapunov function:
$$V(t) = V_1(t) + V_2(t) = \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx + \int_\Omega \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\eta_\kappa(t,x)|\, dx.$$
By the definition of $V_1(t)$, and by applying Properties 1 and 2, the derivative of the Lyapunov function $V_1(t)$ can be estimated as
$$\dot{V}_1(t) = \frac{d}{dt} \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx = \int_\Omega \sum_{\iota=1}^{n} \frac{d}{dt} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx = \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha}|\varepsilon_\iota(t,x)|\, dx \le \int_\Omega \sum_{\iota=1}^{n} \operatorname{sign}(\varepsilon_\iota(t,x))\, D_t^{\alpha} \varepsilon_\iota(t,x)\, dx.$$
Then, substituting (7) into the above inequality, it follows that
$$\begin{aligned} \dot{V}_1(t) \le{} & \int_\Omega \sum_{\iota=1}^{n} \operatorname{sign}(\varepsilon_\iota(t,x)) \Big[ d_\iota \Delta\varepsilon_\iota(t,x) - a_\iota \varepsilon_\iota(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x)) - \delta_\iota \varepsilon_\iota(t,x) \\ & - \operatorname{sign}(\varepsilon_\iota(t,x))\, \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| - \operatorname{sign}(\varepsilon_\iota(t,x))\, \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \Big]\, dx \\ \le{} & \sum_{\iota=1}^{n} \int_\Omega \Big[ d_\iota \operatorname{sign}(\varepsilon_\iota(t,x))\, \Delta\varepsilon_\iota(t,x) + (-a_\iota - \delta_\iota)|\varepsilon_\iota(t,x)| + \operatorname{sign}(\varepsilon_\iota(t,x)) \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x)) \\ & - \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| - \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \Big]\, dx. \end{aligned}$$
First, by utilizing Green’s identities along with the model’s boundary condition for the diffusion term, we have
$$\int_\Omega d_\iota \operatorname{sign}(\varepsilon_\iota(t,x))\, \Delta\varepsilon_\iota(t,x)\, dx \le d_\iota \Big| \int_\Omega \Delta\varepsilon_\iota(t,x)\, dx \Big| = d_\iota \Big| \int_{\partial\Omega} \nabla\varepsilon_\iota(t,x) \cdot \nu\, dS \Big| = 0.$$
Then, in view of Cauchy’s inequality and Assumption 1, we have
$$\int_\Omega \sum_{\iota=1}^{n} \operatorname{sign}(\varepsilon_\iota(t,x)) \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x))\, dx \le \int_\Omega \sum_{\iota=1}^{n} \sum_{\kappa=1}^{m} |b_{\iota\kappa}|\, |H_\kappa(\eta_\kappa(t,x))|\, dx \le \int_\Omega \sum_{\iota=1}^{n} \sum_{\kappa=1}^{m} |b_{\iota\kappa}|\, L_\kappa |\eta_\kappa(t,x)|\, dx.$$
Furthermore, for the controller term, we have
$$\int_\Omega \sum_{\iota=1}^{n} (\operatorname{sign}(\varepsilon_\iota(t,x)))^2\, \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx \ge \min_{\iota \in I}\{\theta_\iota\}\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t) \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx = \min_{\iota \in I}\{\theta_\iota\}\, V_1^{\frac{p+1}{2}}(t).$$
Similarly, one can obtain
$$\int_\Omega \sum_{\iota=1}^{n} (\operatorname{sign}(\varepsilon_\iota(t,x)))^2\, \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx \ge \min_{\iota \in I}\{\vartheta_\iota\}\, V_1^{\frac{q+1}{2}}(t).$$
Then, by substituting (9), (11), and (12) into (10), we can obtain the following inequality for $V_1(t)$:
$$\dot{V}_1(t) \le \int_\Omega \sum_{\iota=1}^{n} \Big[ (-a_\iota - \delta_\iota)|\varepsilon_\iota(t,x)| + \sum_{\kappa=1}^{m} |b_{\iota\kappa}| L_\kappa |\eta_\kappa(t,x)| \Big]\, dx - \theta V_1^{\frac{p+1}{2}}(t) - \vartheta V_1^{\frac{q+1}{2}}(t),$$
where $\theta = \min_{\iota \in I}\{\theta_\iota\}$ and $\vartheta = \min_{\iota \in I}\{\vartheta_\iota\}$. Similarly, we have the subsequent inequality for $V_2$:
$$\dot{V}_2(t) \le \int_\Omega \sum_{\kappa=1}^{m} \Big[ (-\hat{a}_\kappa - \hat{\delta}_\kappa)|\eta_\kappa(t,x)| + \sum_{\iota=1}^{n} |\hat{b}_{\kappa\iota}| \hat{L}_\iota |\varepsilon_\iota(t,x)| \Big]\, dx - \hat{\theta} V_2^{\frac{p+1}{2}}(t) - \hat{\vartheta} V_2^{\frac{q+1}{2}}(t),$$
where $\hat{\theta} = \min_{\kappa \in J}\{\hat{\theta}_\kappa\}$ and $\hat{\vartheta} = \min_{\kappa \in J}\{\hat{\vartheta}_\kappa\}$.
Finally, by substituting (13) and (14) into (8), we have the following inequality for $V(t)$:
$$\begin{aligned} \dot{V}(t) \le{} & \int_\Omega \Big[ \sum_{\iota=1}^{n} \Big( -a_\iota - \delta_\iota + \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \Big) |\varepsilon_\iota(t,x)| + \sum_{\kappa=1}^{m} \Big( -\hat{a}_\kappa - \hat{\delta}_\kappa + L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \Big) |\eta_\kappa(t,x)| \Big]\, dx \\ & - \theta V_1^{\frac{p+1}{2}}(t) - \hat{\theta} V_2^{\frac{p+1}{2}}(t) - \vartheta V_1^{\frac{q+1}{2}}(t) - \hat{\vartheta} V_2^{\frac{q+1}{2}}(t) \\ \le{} & \int_\Omega \Big[ \sum_{\iota=1}^{n} \Big( -a_\iota - \delta_\iota + \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \Big) |\varepsilon_\iota(t,x)| + \sum_{\kappa=1}^{m} \Big( -\hat{a}_\kappa - \hat{\delta}_\kappa + L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \Big) |\eta_\kappa(t,x)| \Big]\, dx \\ & - \min\{\theta, \hat{\theta}\} \big( V_1^{\frac{p+1}{2}}(t) + V_2^{\frac{p+1}{2}}(t) \big) - \min\{\vartheta, \hat{\vartheta}\} \big( V_1^{\frac{q+1}{2}}(t) + V_2^{\frac{q+1}{2}}(t) \big). \end{aligned}$$
If the following inequalities hold true,
$$-a_\iota - \delta_\iota + \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0, \qquad -\hat{a}_\kappa - \hat{\delta}_\kappa + L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0,$$
then, by applying Lemma 2, we can derive the following inequality:
$$\dot{V}(t) \le -\Theta V^{\frac{p+1}{2}}(t) - \Upsilon V^{\frac{q+1}{2}}(t),$$
where Θ = min { θ , θ ^ } and Υ = min { ϑ , ϑ ^ } .
When $t = t_\tau$,
$$\begin{aligned} V(t_\tau^+) &= V_1(t_\tau^+) + V_2(t_\tau^+) = \int_\Omega \Big[ \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t_\tau^+, x)| + \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\eta_\kappa(t_\tau^+, x)| \Big]\, dx \\ &= \int_\Omega \Big[ \sum_{\iota=1}^{n} D_t^{\alpha-1}|\xi_\iota \Gamma(2-\alpha)\, \varepsilon_\iota(t_\tau, x)| + \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\hat{\xi}_\kappa \Gamma(2-\alpha)\, \eta_\kappa(t_\tau, x)| \Big]\, dx \\ &\le \max\big\{ \max_{\iota \in I}\{\xi_\iota\}, \max_{\kappa \in J}\{\hat{\xi}_\kappa\} \big\}\, \Gamma(2-\alpha) \int_\Omega \Big[ \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t_\tau, x)| + \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\eta_\kappa(t_\tau, x)| \Big]\, dx; \end{aligned}$$
thus, we have,
$$V(t_\tau^+) \le \zeta V(t_\tau),$$
where $\zeta = \max\{\max_{\iota \in I}\{\xi_\iota\}, \max_{\kappa \in J}\{\hat{\xi}_\kappa\}\}\, \Gamma(2-\alpha)$. Utilizing inequalities (15) and (16), and in accordance with Lemma 3, the drive–response systems detailed in (1) and (2) attain FXT synchronization under the controller (7). The proof is completed. □
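The gain conditions of Theorem 1 are straightforward to check for a concrete parameter set. The sketch below is our own illustration: the gains $\delta_\iota$ and the bound $\hat{L}_\iota = 1.1$ follow Example 1 in Section 4, while the self-inhibition rates `a` and the weight matrix `b_hat` are hypothetical placeholders (Table 1 is not reproduced here):

```python
# Check the Theorem 1 conditions  -a_i - delta_i + L_hat_i * sum_k |b_hat[k][i]| <= 0.
n = m = 3
L_hat = [1.1, 1.1, 1.1]                 # Lipschitz bounds, as in Example 1
a = [1.0, 1.0, 1.0]                     # hypothetical self-inhibition rates a_i
delta = [6.6, 6.6, 6.4]                 # controller gains delta_i from Example 1
b_hat = [[ 2.0, -1.2, 0.5],             # hypothetical weights b_hat[kappa][iota]
         [-1.5,  1.7, 0.8],
         [ 0.4, -0.6, 1.1]]
conds = [
    -a[i] - delta[i] + L_hat[i] * sum(abs(b_hat[k][i]) for k in range(m))
    for i in range(n)
]
satisfied = all(c <= 0 for c in conds)
```

Because the gains $\delta_\iota$ enter with a negative sign, increasing them can always restore the condition when the column sums of $|\hat{b}_{\kappa\iota}|$ grow.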
Remark 1. 
In prior studies, particularly those involving neural networks incorporating reaction–diffusion terms [37,38], the scholars applied the inequality $(\int_\Omega u\, dx)^p \le \int_\Omega u^p\, dx$. According to Hölder's inequality [39], the inequality $(\int_\Omega u\, dx)^p \le |\Omega|^{p-1} \int_\Omega u^p\, dx$ holds for $u \in L^p(\Omega)$, $p \ge 1$, where $|\Omega|$ stands for the volume of the bounded compact set $\Omega$ with the smooth boundary $\partial\Omega$. However, in [37,38], the authors did not show the necessary condition for values of $p$ in $(0,1]$. In this paper, we present a novel controller and Lyapunov function that enable us to bypass this inequality during the proof.
In Theorem 1, we discussed the FXT synchronization of impulsive fractional-order neural networks incorporating reaction–diffusion components. By eliminating the impulse component from the drive–response systems (1) and (2), these systems can be rewritten as follows:
$$\begin{cases} D_t^{\alpha} v_\iota(t,x) = d_\iota \Delta v_\iota(t,x) - a_\iota v_\iota(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} h_\kappa(w_\kappa(t,x)) + I_\iota, \\ D_t^{\alpha} w_\kappa(t,x) = \hat{d}_\kappa \Delta w_\kappa(t,x) - \hat{a}_\kappa w_\kappa(t,x) + \sum_{\iota=1}^{n} \hat{b}_{\kappa\iota} \hat{h}_\iota(v_\iota(t,x)) + \hat{I}_\kappa, \end{cases}$$
$$\begin{cases} D_t^{\alpha} v_\iota^*(t,x) = d_\iota \Delta v_\iota^*(t,x) - a_\iota v_\iota^*(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} h_\kappa(w_\kappa^*(t,x)) + I_\iota + u_\iota(t,x), \\ D_t^{\alpha} w_\kappa^*(t,x) = \hat{d}_\kappa \Delta w_\kappa^*(t,x) - \hat{a}_\kappa w_\kappa^*(t,x) + \sum_{\iota=1}^{n} \hat{b}_{\kappa\iota} \hat{h}_\iota(v_\iota^*(t,x)) + \hat{I}_\kappa + \hat{u}_\kappa(t,x). \end{cases}$$
Corollary 1. 
Assume that Assumption 1 holds. If the inequalities $-a_\iota - \delta_\iota + \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0$ and $-\hat{a}_\kappa - \hat{\delta}_\kappa + L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0$ are met for $\iota \in I$, $\kappa \in J$, then the drive–response systems (17) and (18) reach FXT synchronization under the controller (7) with the settling time $T_3 = \frac{2}{\Theta(1-p)} + \frac{2}{\Upsilon(q-1)}$, where $\Theta = \min\{\theta, \hat{\theta}\}$ and $\Upsilon = \min\{\vartheta, \hat{\vartheta}\}$.
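As a quick sanity check, the impulse-free settling-time bound of Corollary 1 can be evaluated numerically. The sketch below (our own illustration) uses the controller gains and exponents later specified in Example 1 of Section 4:

```python
# Settling-time bound T3 = 2/(Theta*(1-p)) + 2/(Upsilon*(q-1)) from Corollary 1,
# evaluated with the gains of Example 1 (minima taken over all theta and vartheta gains).
theta_gains = [3.2, 3.2, 4.2, 4.2, 3.2, 3.2]     # theta_1..theta_3, theta_hat_1..theta_hat_3
vartheta_gains = [3.2, 3.2, 3.2, 3.2, 3.2, 4.2]  # vartheta_1..3, vartheta_hat_1..3
p, q = 0.4, 1.6
Theta = min(theta_gains)        # Theta = 3.2
Upsilon = min(vartheta_gains)   # Upsilon = 3.2
T3 = 2 / (Theta * (1 - p)) + 2 / (Upsilon * (q - 1))
```

With these values, $T_3 \approx 2.08$, a bound that holds regardless of the initial conditions, which is precisely the advantage of FXT over finite-time synchronization.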

3.2. PDT Synchronization

As mentioned in the introduction, PDT synchronization is necessary in some situations. Here, we consider the PDT synchronization of the drive–response systems (1) and (2). To achieve PDT synchronization, we redesign the controllers $u_\iota(t,x)$ and $\hat{u}_\kappa(t,x)$ as
$$\begin{cases} u_\iota(t,x) = -\frac{T_0}{T_c} \Big[ \delta_\iota \varepsilon_\iota(t,x) + \big( \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| + \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \big) \operatorname{sign}(\varepsilon_\iota(t,x)) \Big], \\ \hat{u}_\kappa(t,x) = -\frac{T_0}{T_c} \Big[ \hat{\delta}_\kappa \eta_\kappa(t,x) + \big( \hat{\theta}_\kappa\, \tilde{\eta}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\eta_\kappa(t,x)| + \hat{\vartheta}_\kappa\, \tilde{\eta}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\eta_\kappa(t,x)| \big) \operatorname{sign}(\eta_\kappa(t,x)) \Big]. \end{cases}$$
Theorem 2. 
Suppose that Assumption 1 holds. If the inequalities $-\frac{T_c}{T_0} a_\iota - \delta_\iota + \frac{T_c}{T_0} \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0$ and $-\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa + \frac{T_c}{T_0} L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0$ are satisfied for $\iota \in I$, $\kappa \in J$, then the drive–response systems (1) and (2) reach PDT synchronization within the preassigned time $T_c$, where $T_0 = \frac{2}{\Theta(1-p)\pi^2} + \frac{2}{\Upsilon(q-1)\varpi}$, $\varpi = \zeta^{\frac{1}{2}N_0(1-q)}$, $\pi = \zeta^{\frac{1}{2}N_0(1-p)}$, $\eta = \frac{\ln \zeta}{\nu_\tau}$, $\Theta = \min\{\theta, \hat{\theta}\}$, and $\Upsilon = \min\{\vartheta, \hat{\vartheta}\}$.
Proof. 
We construct the same Lyapunov function as in Theorem 1:
$$V(t) = V_1(t) + V_2(t) = \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx + \int_\Omega \sum_{\kappa=1}^{m} D_t^{\alpha-1}|\eta_\kappa(t,x)|\, dx.$$
By the definition of $V_1(t)$, and by applying Properties 1 and 2, the derivative of the Lyapunov function $V_1(t)$ can be estimated as
$$\dot{V}_1(t) = \frac{d}{dt} \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha-1}|\varepsilon_\iota(t,x)|\, dx = \int_\Omega \sum_{\iota=1}^{n} D_t^{\alpha}|\varepsilon_\iota(t,x)|\, dx \le \int_\Omega \sum_{\iota=1}^{n} \operatorname{sign}(\varepsilon_\iota(t,x))\, D_t^{\alpha} \varepsilon_\iota(t,x)\, dx.$$
Then, substituting the controller (19) into the above inequality, it follows that
$$\begin{aligned} \dot{V}_1(t) \le{} & \int_\Omega \sum_{\iota=1}^{n} \operatorname{sign}(\varepsilon_\iota(t,x)) \Big[ d_\iota \Delta\varepsilon_\iota(t,x) - a_\iota \varepsilon_\iota(t,x) + \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x)) - \frac{T_0}{T_c} \delta_\iota \varepsilon_\iota(t,x) \\ & - \frac{T_0}{T_c} \operatorname{sign}(\varepsilon_\iota(t,x))\, \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| - \frac{T_0}{T_c} \operatorname{sign}(\varepsilon_\iota(t,x))\, \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \Big]\, dx \\ \le{} & \sum_{\iota=1}^{n} \int_\Omega \Big[ d_\iota \operatorname{sign}(\varepsilon_\iota(t,x))\, \Delta\varepsilon_\iota(t,x) + \Big( -a_\iota - \frac{T_0}{T_c} \delta_\iota \Big) |\varepsilon_\iota(t,x)| + \operatorname{sign}(\varepsilon_\iota(t,x)) \sum_{\kappa=1}^{m} b_{\iota\kappa} H_\kappa(\eta_\kappa(t,x)) \\ & - \frac{T_0}{T_c} \theta_\iota\, \tilde{\varepsilon}^{\frac{p-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| - \frac{T_0}{T_c} \vartheta_\iota\, \tilde{\varepsilon}^{\frac{q-1}{2}}(t)\, D_t^{\alpha-1}|\varepsilon_\iota(t,x)| \Big]\, dx. \end{aligned}$$
Similar to Theorem 1, by using inequalities (9)–(12), we can obtain that
$$\dot{V}_1(t) \le \frac{T_0}{T_c} \int_\Omega \sum_{\iota=1}^{n} \Big[ \Big( -\frac{T_c}{T_0} a_\iota - \delta_\iota \Big) |\varepsilon_\iota(t,x)| + \frac{T_c}{T_0} \sum_{\kappa=1}^{m} |b_{\iota\kappa}| L_\kappa |\eta_\kappa(t,x)| \Big]\, dx - \frac{T_0}{T_c} \big( \theta V_1^{\frac{p+1}{2}}(t) + \vartheta V_1^{\frac{q+1}{2}}(t) \big),$$
where $\theta = \min_{\iota \in I}\{\theta_\iota\}$ and $\vartheta = \min_{\iota \in I}\{\vartheta_\iota\}$. Similarly, we have the subsequent inequality for $V_2$:
$$\dot{V}_2(t) \le \frac{T_0}{T_c} \int_\Omega \sum_{\kappa=1}^{m} \Big[ \Big( -\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa \Big) |\eta_\kappa(t,x)| + \frac{T_c}{T_0} \sum_{\iota=1}^{n} |\hat{b}_{\kappa\iota}| \hat{L}_\iota |\varepsilon_\iota(t,x)| \Big]\, dx - \frac{T_0}{T_c} \big( \hat{\theta} V_2^{\frac{p+1}{2}}(t) + \hat{\vartheta} V_2^{\frac{q+1}{2}}(t) \big),$$
where $\hat{\theta} = \min_{\kappa \in J}\{\hat{\theta}_\kappa\}$ and $\hat{\vartheta} = \min_{\kappa \in J}\{\hat{\vartheta}_\kappa\}$.
Finally, by substituting (21) and (22) into (20), we obtain the following inequality for $V(t)$:
$$\begin{aligned} \dot{V}(t) \le{} & \frac{T_0}{T_c} \int_\Omega \Big[ \sum_{\iota=1}^{n} \Big( -\frac{T_c}{T_0} a_\iota - \delta_\iota + \frac{T_c}{T_0} \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \Big) |\varepsilon_\iota(t,x)| + \sum_{\kappa=1}^{m} \Big( -\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa + \frac{T_c}{T_0} L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \Big) |\eta_\kappa(t,x)| \Big]\, dx \\ & - \frac{T_0}{T_c} \big( \theta V_1^{\frac{p+1}{2}}(t) + \hat{\theta} V_2^{\frac{p+1}{2}}(t) + \vartheta V_1^{\frac{q+1}{2}}(t) + \hat{\vartheta} V_2^{\frac{q+1}{2}}(t) \big) \\ \le{} & \frac{T_0}{T_c} \int_\Omega \Big[ \sum_{\iota=1}^{n} \Big( -\frac{T_c}{T_0} a_\iota - \delta_\iota + \frac{T_c}{T_0} \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \Big) |\varepsilon_\iota(t,x)| + \sum_{\kappa=1}^{m} \Big( -\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa + \frac{T_c}{T_0} L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \Big) |\eta_\kappa(t,x)| \Big]\, dx \\ & - \frac{T_0}{T_c} \min\{\theta, \hat{\theta}\} \big( V_1^{\frac{p+1}{2}}(t) + V_2^{\frac{p+1}{2}}(t) \big) - \frac{T_0}{T_c} \min\{\vartheta, \hat{\vartheta}\} \big( V_1^{\frac{q+1}{2}}(t) + V_2^{\frac{q+1}{2}}(t) \big). \end{aligned}$$
Suppose the following inequalities hold:
$$-\frac{T_c}{T_0} a_\iota - \delta_\iota + \frac{T_c}{T_0} \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0, \qquad -\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa + \frac{T_c}{T_0} L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0,$$
then, by applying Lemma 2, we can derive the following inequality:
$$\dot{V}(t) \le -\frac{T_0}{T_c} \big( \Theta V^{\frac{p+1}{2}}(t) + \Upsilon V^{\frac{q+1}{2}}(t) \big),$$
where $\Theta = \min\{\theta, \hat{\theta}\}$ and $\Upsilon = \min\{\vartheta, \hat{\vartheta}\}$. Based on inequalities (16) and (23), and in accordance with Lemma 4, the drive–response systems detailed in (1) and (2) attain PDT synchronization under the controller (19). The proof is complete. □
Remark 2. 
Most synchronization results for fractional-order neural networks, such as those reported in [2,26], concern infinite-time synchronization, also known as asymptotic synchronization. However, infinite-time synchronization may not align with specific practical requirements, particularly in terms of speed and precision. Consequently, the studies in [5,20,26,29] focused on finite-time and FXT synchronization in fractional-order neural networks. Building on this foundation, we investigate the FXT and PDT synchronization of impulsive fractional-order BAM neural networks incorporating reaction–diffusion terms. Definitions 1 and 2 outline FXT and PDT stability, respectively, revealing that PDT synchronization is an extension of FXT synchronization. This approach allows us to tailor the control strategy to meet actual engineering requirements. Therefore, the synchronization criteria presented in this paper offer greater flexibility and broader applicability than those in [5,20,29].
Corollary 2. 
Suppose Assumption 1 is met. If the inequalities $-\frac{T_c}{T_0} a_\iota - \delta_\iota + \frac{T_c}{T_0} \hat{L}_\iota \sum_{\kappa=1}^{m} |\hat{b}_{\kappa\iota}| \le 0$ and $-\frac{T_c}{T_0} \hat{a}_\kappa - \hat{\delta}_\kappa + \frac{T_c}{T_0} L_\kappa \sum_{\iota=1}^{n} |b_{\iota\kappa}| \le 0$ hold for $\iota \in I$, $\kappa \in J$, then the drive–response systems (17) and (18) reach PDT synchronization within the preassigned time $T_c$ under the controller (19).

4. Numerical Examples

The simulations are conducted using Python. The $(\alpha-1)$-th-order integrals are computed through the Grünwald–Letnikov (GL) method since, for $\alpha \in (0,1)$, the Grünwald–Letnikov and Caputo operators are practically indistinguishable and can be treated as equivalent in numerical applications [28,40]. To discretize the fractional-order diffusion equation, the L1 approximation method was employed [41]. This section includes two numerical examples to confirm the correctness of the analytical findings presented in the previous section.
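For readers unfamiliar with the L1 scheme mentioned above, the standard form approximates the Caputo derivative at a grid point $t_n$ by weighted first differences with coefficients $b_j = (j+1)^{1-\alpha} - j^{1-\alpha}$. The following sketch is our own minimal illustration (not the authors' simulation code):

```python
import math

def l1_caputo(u, alpha, h):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the last grid point, for samples u[0..n] on a uniform grid of step h."""
    n = len(u) - 1
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]
    c = h ** (-alpha) / math.gamma(2 - alpha)
    return c * sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(n))

# Sanity check: for u(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha),
# and the L1 scheme reproduces it exactly, since the scheme is exact on linear data
# (the weights b_j telescope to n^(1-alpha)).
alpha, h = 0.9, 0.01
u = [i * h for i in range(101)]         # t in [0, 1]
approx = l1_caputo(u, alpha, h)
exact = 1.0 / math.gamma(2 - alpha)     # exact value at t = 1
```

In a reaction–diffusion simulation, this temporal operator is combined with a standard central-difference discretization of the Laplacian in space.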
Example 1.
Consider the following fractional-order impulsive neural network with reaction–diffusion terms:
D_t^α v_ι(t, x) = d_ι Δv_ι(t, x) − a_ι v_ι(t, x) + Σ_{κ=1}^{3} b_{ικ} h_κ(w_κ(t, x)) + I_ι,  t ≠ t_τ,
D_t^α w_κ(t, x) = d̂_κ Δw_κ(t, x) − â_κ w_κ(t, x) + Σ_{ι=1}^{3} b̂_{κι} ĥ_ι(v_ι(t, x)) + Î_κ,  t ≠ t_τ,
v_ι(t_τ^+, x) = ξ_ι Γ(2 − α) v_ι(t_τ, x),  t = t_τ,
w_κ(t_τ^+, x) = ξ̂_κ Γ(2 − α) w_κ(t_τ, x),  t = t_τ,
(24)
for ι, κ ∈ {1, 2, 3}, x ∈ [−5, 5], and t ∈ [0, 25]; the spatial and temporal step sizes are taken as Δx = 0.33 and Δt = 0.01, respectively. We take the parameters as α = 0.9, I_ι = Î_κ = 0, ξ_ι = 0.5, and ξ̂_κ = 0.65. The other parameters are shown in Table 1. The activation functions are given as h_κ(·) = tanh(·) and ĥ_ι(·) = tanh(·).
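The L1 approximation of the Caputo derivative [41], used above to discretize the fractional-order diffusion equation, can likewise be sketched in a few lines. The weights b_j = (j + 1)^{1−α} − j^{1−α} are the standard L1 coefficients; the helper below is our own illustrative sketch, not the authors' solver, and evaluates the derivative at the last grid point only.

```python
import math
import numpy as np

def l1_caputo(u: np.ndarray, alpha: float, dt: float) -> float:
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the last grid point of the samples u (uniform time step dt)."""
    n = len(u) - 1
    j = np.arange(n)
    w = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)  # L1 weights b_j
    diffs = u[n - j] - u[n - j - 1]                    # u_{n-j} - u_{n-j-1}
    return w @ diffs * dt ** (-alpha) / math.gamma(2.0 - alpha)
```

For u(t) = t the scheme reproduces t^{1−α}/Γ(2 − α) essentially exactly, since the L1 construction is exact for piecewise-linear functions; in a full simulation this building block would be combined with a finite-difference Laplacian for the diffusion term.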
The initial conditions are v_1^0 = 0.0517 sin(3x/5), v_2^0 = 0.08146 sin(3x/5), v_3^0 = 0.04863 sin(3x/5), w_1^0 = 0.10239 sin(3x/5), w_2^0 = 0.11400 sin(3x/5), and w_3^0 = 0.127070 sin(3x/5). The resulting trajectories are presented in Figure 1 and Figure 2, and the corresponding Lyapunov exponents, calculated using the method outlined in Section 2.3, are λ_v = (0.1216, 0.1379, 0.1404) and λ_w = (0.1541, 0.1374, 1.376). Therefore, the system (24) exhibits a chaotic attractor.
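The Lyapunov exponents above are computed with the fractional-order memory-principle method of Section 2.3 [35]. As a self-contained reminder of what a Lyapunov exponent measures, namely the time-averaged local expansion rate, the following sketch estimates the largest exponent of the classical logistic map. It is a generic illustration only, not the fractional-order algorithm used in the paper.

```python
import math

def largest_lyapunov_logistic(r: float, x0: float = 0.123,
                              transient: int = 1000, n: int = 200_000) -> float:
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x) by
    averaging log|f'(x)| = log|r*(1 - 2x)| along a long trajectory."""
    x = x0
    for _ in range(transient):           # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return s / n
```

For r = 4 the exact value is ln 2 ≈ 0.693; a positive exponent is the same chaos diagnostic applied to λ_v and λ_w above.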
The response system corresponding to the drive system (24) is given as
D_t^α v_ι^*(t, x) = d_ι Δv_ι^*(t, x) − a_ι v_ι^*(t, x) + Σ_{κ=1}^{m} b_{ικ} h_κ(w_κ^*(t, x)) + I_ι + u_ι(t, x),  t ≠ t_τ,
D_t^α w_κ^*(t, x) = d̂_κ Δw_κ^*(t, x) − â_κ w_κ^*(t, x) + Σ_{ι=1}^{n} b̂_{κι} ĥ_ι(v_ι^*(t, x)) + Î_κ + û_κ(t, x),  t ≠ t_τ,
v_ι^*(t_τ^+, x) = ξ_ι Γ(2 − α) v_ι^*(t_τ, x),  t = t_τ,
w_κ^*(t_τ^+, x) = ξ̂_κ Γ(2 − α) w_κ^*(t_τ, x),  t = t_τ,
(25)
where the parameters d_ι, d̂_κ, a_ι, â_κ, b_ικ, b̂_κι, I_ι, and Î_κ, along with the activation functions h_κ(·) and ĥ_ι(·), are the same as those in the drive system (24). The jump coefficients are ξ_ι = 0.5 and ξ̂_κ = 0.65. The controller parameters are chosen as δ_1 = δ_2 = 6.6, δ_3 = 6.4, δ̂_1 = 6.5, δ̂_2 = 6.8, δ̂_3 = 6.6, θ_1 = θ_2 = 3.2, θ_3 = 4.2, ϑ_1 = ϑ_2 = ϑ_3 = 3.2, θ̂_1 = 4.2, θ̂_2 = θ̂_3 = 3.2, ϑ̂_1 = ϑ̂_2 = 3.2, ϑ̂_3 = 4.2, p = 0.4, q = 1.6, L̂_ι = L_κ = 1.1, N_0 = 0.1, and ν_τ = 0.5. Consequently, we can easily calculate that
k = max{ max_{ι∈{1,2,3}} { −a_ι − δ_ι + L̂_ι Σ_{κ=1}^{3} |b̂_κι| }, max_{κ∈{1,2,3}} { −â_κ − δ̂_κ + L_κ Σ_{ι=1}^{3} |b_ικ| } } = −3.359.
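Criteria of this kind can be verified mechanically. The sketch below evaluates the maximum over both layers; the parameter arrays mirror Table 1 and the controller gains, but the row/column orientation of the weight matrices is our reading of the flattened table, so treat the snippet as illustrative rather than as a reproduction of the reported value.

```python
import numpy as np

def sync_criterion(a, a_hat, delta, delta_hat, L, L_hat, b, b_hat):
    """k = max over both layers of -a - delta + Lipschitz * summed |weights|;
    the Theorem 1 check is k <= 0 (alongside the impulse-related conditions)."""
    side_v = -a - delta + L_hat * np.abs(b_hat).sum(axis=0)   # sum_k |b_hat[k, i]|
    side_w = -a_hat - delta_hat + L * np.abs(b).sum(axis=0)   # sum_i |b[i, k]|
    return max(side_v.max(), side_w.max())

# Values patterned on Table 1 and the controller gains (our reading of the layout):
a = np.array([0.0073, 2.6592, 3.0266]);  a_hat = np.array([1.2633, 0.2561, 1.7172])
delta = np.array([6.6, 6.6, 6.4]);       delta_hat = np.array([6.5, 6.8, 6.6])
b = np.array([[-2.8999,  1.4399, -1.1330],
              [ 1.3996, -2.2743, -0.5121],
              [ 1.4111, -1.7352, -3.9230]])
b_hat = np.array([[-1.9869,  2.9770,  3.0862],
                  [-2.0131, -3.4819,  2.2811],
                  [-1.2189, -0.1094, -2.8969]])
k = sync_criterion(a, a_hat, delta, delta_hat, L=1.1, L_hat=1.1, b=b, b_hat=b_hat)
```

The PDT check of Theorem 2 follows the same pattern with the a and Lipschitz terms scaled by T_0/T_c.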
This indicates that the inequalities outlined in Theorem 1 are met. Therefore, as shown in Figure 3 and Figure 4, the response system (25) achieves synchronization with the drive system (24) within the FXT T_2 = 1.9785 under the controller (7), which numerically confirms the controller's efficiency in reaching the desired FXT synchronization.
Remark 3. 
Figure 1 and Figure 2 clearly show that the system (24) exhibits chaotic behavior. Fractional-order derivatives take the entire history of the system into account, and this memory property can lead to more complex nonlinear phenomena [42]. In impulsive neural networks, the state can change abruptly at certain moments; coupling the memory of fractional-order derivatives with such abrupt changes therefore yields a rich variety of dynamical behaviors.
Remark 4. 
Figure 3 and Figure 4 depict the convergence of the error system to zero within the FXT T_1, as indicated by Theorem 1. Compared with the results in [29,31], those presented in this paper are more practical and applicable.
As mentioned in the Introduction, there are cases where the synchronization time must be set in advance. Theorem 2 provides sufficient conditions for PDT synchronization. We now validate these conditions numerically by considering the PDT synchronization of the drive–response systems (24) and (25). All parameters are as previously delineated in the FXT synchronization context. Set the preassigned time as T_c = 1.9; then,
k̃ = max{ max_{ι∈{1,2,3}} { −(T_0/T_c) a_ι − δ_ι + (T_0/T_c) L̂_ι Σ_{κ=1}^{3} |b̂_κι| }, max_{κ∈{1,2,3}} { −(T_0/T_c) â_κ − δ̂_κ + (T_0/T_c) L_κ Σ_{ι=1}^{3} |b_ικ| } } = −1.1623,
where T 0 = 2.1194 . This ensures that the conditions specified in Theorem 2 are satisfied. Consequently, as demonstrated in Figure 5 and Figure 6, in accordance with Theorem 2, the drive–response systems (24) and (25) achieve synchronization within PDT T c = 1.9 .
Finally, we examine the impact of the fractional-order parameter α on synchronization by analyzing the outcomes for various values of α , such as α = 0.8 , 0.9 , 0.95 , 0.98 .
As observed in Figure 7, the convergence of ε_ι^2 and η_κ^2 (ι = 1, 2, 3; κ = 1, 2, 3) is more rapid for smaller values of α. Nonetheless, when α = 0.8, there is oscillation around t = 0.15, indicating a transient instability.
To further investigate this behavior, we set α = 0.6 , and Figure 8 reveals more pronounced chattering effects compared to those observed in Figure 7.

5. Conclusions

In this study, the FXT and PDT synchronizations were investigated for impulsive fractional-order BAM neural networks with diffusion terms. Initially, we presented fundamental knowledge of fractional-order calculus and of the FXT and PDT synchronization of neural networks. Expanding on prior studies, we integrated the effects of impulses and fractional-order dynamics into reaction–diffusion BAM neural networks. Fractional-order calculus and reaction–diffusion processes associated with impulsive BAM neural networks provide a more comprehensive representation of real-world systems, which exhibit not only temporal changes but also spatial factors and diffusion effects. Consequently, the scope of applications for BAM neural networks has been broadened to fields such as environmental science and secure communication [43]. We developed an innovative controller for the system and formulated sufficient conditions for the FXT and PDT synchronization of the drive–response systems by employing the Lyapunov function approach. To substantiate the theoretical results of the proposed model, numerical examples were provided. Nevertheless, stochastic perturbations are frequently inevitable in many dynamical systems; such perturbations can enhance a network's ability to generalize and to forecast outcomes accurately in uncertain environments, thus bolstering its robustness and adaptability [34,44,45,46]. In future studies, we will focus on the synchronization of fractional-order BAM neural networks with stochastic perturbations, thereby enhancing the applicability of our dynamic system.

Author Contributions

Conceptualization, R.M.; Methodology, A.M.; Software, R.M.; Supervision, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (Grant no. 62266042) and the Outstanding Youth Program of Xinjiang, China (Grant no. 2022D01E10).

Data Availability Statement

No data are associated with this paper.

Conflicts of Interest

The authors declare that they have no competing interests regarding the publication of this article.

References

  1. Song, C.; Cao, J. Dynamics in fractional-order neural networks. Neurocomputing 2014, 142, 494–948. [Google Scholar] [CrossRef]
  2. Chen, J.; Chen, B.; Zeng, Z. Global asymptotic stability and adaptive ultimate Mittag-Leffler synchronization for a fractional-order complex-valued memristive neural networks with delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 2519–2535. [Google Scholar] [CrossRef]
  3. Zhang, L.; Yang, Y. Bipartite synchronization analysis of fractional order coupled neural networks with hybrid control. Neural Process. Lett. 2020, 52, 1969–1981. [Google Scholar] [CrossRef]
  4. Podlubny, I. Geometrical and physical interpretation of fractional integration and fractional differentiation. Fract. Calc. Appl. Anal. 2002, 5, 367–386. [Google Scholar]
  5. Luo, R.; Liu, S.; Song, Z.; Zhang, F. Fixed-time control of a class of fractional-order chaotic systems via backstepping method. Chaos Soliton. Fract. 2023, 167, 113076. [Google Scholar] [CrossRef]
  6. Balamash, A.S.; Bettayeb, M.; Djennoune, S.; Al-Saggaf, U.M.; Moinuddin, M. Fixed-time terminal synergetic observer for synchronization of fractional-order chaotic systems. Chaos 2020, 30, 073124. [Google Scholar] [CrossRef]
  7. He, Y.; Peng, J.; Zheng, S. Fractional-Order Financial System and Fixed-Time Synchronization. Fractal Fract. 2022, 6, 507. [Google Scholar] [CrossRef]
  8. Ren, J.; Lei, H.; Song, J. An improved lattice Boltzmann model for variable-order time-fractional generalized Navier–Stokes equations with applications to permeability prediction. Chaos Soliton Fract. 2024, 189, 115616. [Google Scholar]
  9. Sun, J.; Zhao, Y.; Li, X.; Wang, S.; Wei, J.; Lu, Y. Fractional Order Spectrum of Cumulant in Edge Detection. In Proceedings of the 2024 IEEE 2nd International Conference on Image Processing and Computer Applications (ICIPCA), Shenyang, China, 28–30 June 2024; pp. 408–412. [Google Scholar]
  10. Ding, D.; Jin, F.; Zhang, H.; Yang, Z.; Chen, S.; Zhu, H.; Xu, X.; Liu, X. Fractional-order heterogeneous neuron network based on coupled locally-active memristors and its application in image encryption and hiding. Chaos Soliton Fract. 2024, 187, 115397. [Google Scholar]
  11. Chen, L.; Yin, H.; Huang, T.; Yuan, L.; Zheng, S.; Yin, L. Chaos in fractional-order discrete neural networks with application to image encryption. Neural Netw. 2020, 125, 174–184. [Google Scholar] [CrossRef]
  12. Wei, T.; Lin, P.; Wang, Y.; Wang, L. Stability of stochastic impulsive reaction–diffusion neural networks with S-type distributed delays and its application to image encryption. Neural Netw. 2019, 116, 35–45. [Google Scholar] [CrossRef] [PubMed]
  13. Kowsalya, P.; Kathiresan, S.; Kashkynbayev, A.; Rakkiyappan, R. Fixed-time synchronization of delayed multiple inertial neural network with reaction–diffusion terms under cyber–physical attacks using distributed control and its application to multi-image encryption. Neural Netw. 2024, 180, 106743. [Google Scholar] [CrossRef] [PubMed]
  14. Vilas, C.; García, M.R.; Banga, J.R.; Alonso, A.A. Robust feed-back control of travelling waves in a class of reaction–diffusion distributed biological systems. Phys. D Nonlinear Phenom. 2008, 237, 2353–2364. [Google Scholar] [CrossRef]
  15. Song, Q.; Cao, J. Global exponential robust stability of cohen-grossberg neural network with time-varying delays and reaction–diffusion terms. J. Frankl. Inst. 2006, 343, 705–719. [Google Scholar] [CrossRef]
  16. Cao, J.; Stamov, G.; Stamova, I.; Simeonov, S. Almost periodicity in impulsive fractional-order reaction–diffusion neural networks with time-varying delay. IEEE Trans. Cybern. 2021, 51, 151–161. [Google Scholar] [CrossRef]
  17. Ali, M.S.; Hymavathi, M.; Rajchakit, G.; Saroha, S.; Palanisamy, L.; Hammachukiattikul, P. Synchronization of Fractional Order Fuzzy BAM Neural Networks with Time Varying Delays and Reaction Diffusion Terms. IEEE Access 2020, 8, 186551–186571. [Google Scholar] [CrossRef]
  18. Lin, J.; Xu, R.; Li, L. Spatio-temporal synchronization of reaction–diffusion BAM neural networks via impulsive pinning control. Neurocomputing 2020, 418, 300–313. [Google Scholar] [CrossRef]
  19. Chen, H.; Jiang, M.; Hu, J. Global exponential synchronization of BAM memristive neural networks with mixed delays and reaction–diffusion terms. Commun. Nonlinear Sci. Numer. Simulat. 2024, 137, 108137. [Google Scholar] [CrossRef]
  20. Kowsalya, P.; Mohanrasu, S.S.; Kashkynbayev, A.; Gokul, P.; Rakkiyappan, R. Fixed-time synchronization of Inertial Cohen–Grossberg Neural Networks with state dependent delayed impulse control and its application to multi-image encryption. Chaos Soliton. Fract. 2024, 181, 114693. [Google Scholar] [CrossRef]
  21. Li, H.; Li, C.; Ouyang, D.; Nguang, S.K. Impulsive synchronization of unbounded delayed inertial neural networks with actuator saturation and sampled-data control and its application to image encryption. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1460–1473. [Google Scholar] [CrossRef]
  22. Bishop, S.A.; Eke, K.S.; Okagbue, H.I. Advances on asymptotic stability of impulsive stochastic evolution equations. Comput. Sci. 2021, 16, 99–109. [Google Scholar]
  23. Zhang, W.; Li, C.; Huang, T.; He, X. Synchronization of Memristor-Based Coupling Recurrent Neural Networks with Time-Varying Delays and Impulses. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3308–3313. [Google Scholar] [CrossRef] [PubMed]
  24. Pratap, A.; Cao, J.; Lim, C.P.; Bagdasar, O. Stability and pinning synchronization analysis of fractional order delayed Cohen–Grossberg neural networks with discontinuous activations. Appl. Math. Comput. 2019, 359, 241–260. [Google Scholar] [CrossRef]
  25. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Finite-time synchronization of fractional-order memristor-based neural networks with time delays. Neural Netw. 2016, 73, 36–46. [Google Scholar] [CrossRef] [PubMed]
  26. Du, F.; Lu, J. New criterion for finite-time synchronization of fractional order memristor-based neural networks with time delay. Appl. Math. Comput. 2021, 389, 125616. [Google Scholar] [CrossRef]
  27. Polyakov, A. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Automat. Control 2012, 57, 2106–2110. [Google Scholar] [CrossRef]
  28. Podlubny, I. Fractional Differential Equations, Mathematics in Science and Engineering; Academic Press: Cambridge, MA, USA, 1999. [Google Scholar]
  29. Huang, S.; Xiong, L.; Wang, J.; Li, P.; Wang, Z.; Ma, M. Fixed-time fractional-order sliding mode controller for multimachine power systems. IEEE Trans. Power Syst. 2021, 36, 2866–2876. [Google Scholar] [CrossRef]
  30. Xiao, J.; Wen, S.; Yang, X.; Zhong, S. New approach to global Mittag–Leffler synchronization problem of fractional-order quaternion-valued BAM neural networks based on a new inequality. Neural Netw. 2020, 122, 320–337. [Google Scholar] [CrossRef]
  31. Khanzadeh, A.; Mohammadzaman, I. Comment on ‘Fractional-order fixed-time nonsingular terminal sliding mode synchronization and control of fractional-order chaotic systems’. Nonlinear Dynam. 2018, 94, 3145–3153. [Google Scholar] [CrossRef]
  32. Lee, L.; Liu, Y.; Liang, J.; Cai, X. Finite time stability of nonlinear impulsive systems and its applications in sampled-data systems. ISA Trans. 2015, 57, 172–178. [Google Scholar] [CrossRef]
  33. Li, H.; Li, C.; Huang, T.; Ouyang, D. Fixed-time stability and stabilization of impulsive dynamical systems. J. Frankl. Inst. 2017, 354, 8626–8644. [Google Scholar] [CrossRef]
  34. Abudusaimaiti, M.; Abdurahman, A.; Jiang, H. Fixed/predefined-time synchronization of fuzzy neural networks with stochastic perturbations. Chaos Solitons Fractals 2022, 154, 111596. [Google Scholar] [CrossRef]
  35. Li, H.; Shen, Y.; Han, Y.; Dong, J.; Li, J. Determining Lyapunov exponents of fractional-order systems: A General method based on memory principle. Chaos Solitons Fractals 2023, 168, 113167. [Google Scholar] [CrossRef]
  36. Echenausía-Monroy, J.L.; Quezada-Tellez, L.A.; Gilardi-Velázquez, H.E.; Ruíz-Martínez, O.F.; Heras-Sánchez, M.D.C.; Lozano-Rizk, J.E.; Álvarez, J. Beyond chaos in fractional-order systems: Keen insight in the dynamic effects. Fractal Fract. 2024, 9, 22. [Google Scholar] [CrossRef]
  37. Sadik, H.; Abdurahman, A.; Tohti, R. Fixed-Time synchronization of reaction–diffusion fuzzy neural networks with stochastic perturbations. Mathematics 2023, 11, 1493. [Google Scholar] [CrossRef]
  38. Song, X.; Man, J.; Song, S.; Zhang, Y.; Ning, Z. Finite/fixed-time synchronization for Markovian complex-valued memristive neural networks with reaction–diffusion terms and its application. Neurocomputing 2020, 414, 131–142. [Google Scholar] [CrossRef]
  39. Evans, L.C. Partial Differential Equation, 2nd ed.; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  40. Lubich, C. Discretized fractional calculus. SIAM J. Math. Anal. 1986, 17, 704–719. [Google Scholar] [CrossRef]
  41. Lin, Y.; Xu, C. Finite difference/spectral approximations for the time-fractional diffusion equation. J. Comput. Phys. 2007, 225, 1533–1552. [Google Scholar] [CrossRef]
  42. Huang, C.; Wang, H.; Liu, H.; Cao, J. Bifurcations of a delayed fractional-order BAM neural network via new parameter perturbations. Neural Netw. 2023, 168, 123–142. [Google Scholar] [CrossRef]
  43. Wang, Z.; Zhang, W.; Zhang, H.; Chen, H.; Cao, J.; Abdel-Aty, M. Finite-time quasi-projective synchronization of fractional-order reaction–diffusion delayed neural networks. Inform. Sci. 2025, 686, 121365. [Google Scholar] [CrossRef]
  44. Abdurahman, A.; Abudusaimaiti, M.; Jiang, H. Fixed/predefined-time lag synchronization of complex-valued BAM neural networks with stochastic perturbations. Appl. Math. Comput. 2023, 444, 127811. [Google Scholar] [CrossRef]
  45. Yin, J.; Khoo, S.; Man, Z. Finite-time stability and instability of stochastic nonlinear systems. Automatica 2011, 47, 2671–2677. [Google Scholar] [CrossRef]
  46. Torres, J.J.; Muñoz, M.A.; Cortés, J.M.; Mejías, J.F. Special issue on emergent effects in stochastic neural networks with application to learning and information processing. Neurocomputing 2021, 461, 632–634. [Google Scholar] [CrossRef]
Figure 1. The chaotic attractor of v ( t , x ) (left) and w ( t , x ) (right) in system (24), where x = 1 is fixed.
Figure 2. The chaotic attractor of the system (24), where x = 1 is fixed.
Figure 3. The evolution diagram of ε 1 ( t , x ) (left), ε 2 ( t , x ) (middle), and ε 3 ( t , x ) (right).
Figure 4. The evolution diagram of η 1 ( t , x ) (left), η 2 ( t , x ) (middle), and η 3 ( t , x ) (right).
Figure 5. The time evolution diagram of ε 1 ( t , x ) (left), ε 2 ( t , x ) (middle), and ε 3 ( t , x ) (right).
Figure 6. The time evolution diagram of η 1 ( t , x ) (left), η 2 ( t , x ) (middle), and η 3 ( t , x ) (right).
Figure 7. The time evolution diagram of ε_ι^2 (ι ∈ {1, 2, 3}) and η_κ^2 (κ ∈ {1, 2, 3}) for α = 0.8, 0.9, 0.95, 0.98, respectively.
Figure 8. The time evolution diagram of ε_ι^2 (ι ∈ {1, 2, 3}) (left) and η_κ^2 (κ ∈ {1, 2, 3}) (right) for α = 0.6.
Table 1. The main parameters for Example 1.
| d_ι | d̂_κ | a_ι | â_κ | b_ικ (row ι) | b̂_ικ (row ι) |
|---|---|---|---|---|---|
| 0.9631 | 0.3146 | 0.0073 | 1.2633 | −2.8999, 1.4399, −1.133 | −1.9869, 2.977, 3.0862 |
| 0.9631 | 0.3146 | 2.6592 | 0.2561 | 1.3996, −2.2743, −0.5121 | −2.0131, −3.4819, 2.2811 |
| 0.9631 | 0.3146 | 3.0266 | 1.7172 | 1.4111, −1.7352, −3.923 | −1.2189, −0.1094, −2.8969 |