Article

Synchronization of Fractional-Order Reaction–Diffusion Neural Networks via ETILC

Xisheng Dai, Yehui Liu, Yanxue Wang, Jianxiang Zhang and Senping Tian
1 School of Automation, Guangxi University of Science and Technology, Liuzhou 545000, China
2 Guangxi Low-Altitude Unmanned Aircraft Key Technologies Engineering Research Center, Liuzhou 545616, China
3 School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(12), 764; https://doi.org/10.3390/fractalfract9120764
Submission received: 28 October 2025 / Revised: 15 November 2025 / Accepted: 19 November 2025 / Published: 24 November 2025

Abstract

This paper focuses on the synchronization of fractional-order reaction–diffusion neural networks (FORDNN) under sampled-data event-triggered iterative learning control. A $D^{\alpha}$-type iterative learning protocol for complete synchronization is proposed, which combines the advantages of event-triggered control and sampled-data iterative learning control. Sufficient conditions for synchronization are derived using fractional-order analysis tools. A numerical simulation example is provided to confirm the effectiveness of the analysis.

1. Introduction

Neural networks (NNs) were first proposed in the 1940s, and an improved model was proposed by Hopfield in 1982 [1]. Numerous studies have demonstrated that NNs possess strong capabilities for nonlinear mapping, nonlinear learning, and parallel computation. Consequently, NNs have found applications in various fields. In recent years, the combination of fractional calculus and NNs has received widespread attention.
Fractional derivatives have been applied in many fields, such as fluid mechanics [2], biology [3], and others [4]. Owing to the characteristics of fractional derivatives, fractional-order systems have certain advantages, particularly their infinite memory. To study neuron dynamics, the concept of fractional order is introduced into neural networks, forming fractional-order neural networks (FONNs). The importance of fractional-order calculus lies in two aspects. First, as [5,6] note, its integrals and derivatives can accurately describe neurons' nonlinear responses and non-integer-order dynamics that integer-order calculus cannot fully capture. Second, as [7,8] confirm, adjusting the fractional order enables fine-grained characterization of the dynamic behaviors of neural networks (e.g., synchronization speed, stability).
As is well known, the diffusion of electrons in a nonuniform magnetic field cannot be neglected. Fractional differential operators possess memory and hereditary properties, enabling them to better describe non-local characteristics. As a result, fractional-order reaction–diffusion neural networks exhibit unique advantages. Since fractional differential operators can more accurately capture the temporal dynamics between neurons, FORDNNs can simulate the complex behaviors of biological neural systems. Several researchers have studied RDNNs; for example, Ref. [9] studied the exponential synchronization of fractional-order reaction–diffusion coupled neural networks. There are many different notions of synchronization, such as Mittag–Leffler synchronization [10,11], pinning synchronization [12], quasi-synchronization [9], and others [13,14,15].
To address synchronization issues, and considering the high-precision tracking characteristic of iterative learning, the iterative learning control method is adopted to achieve tracking synchronization. In Ref. [16], a D-type learning law was used to achieve tracking synchronization of RDNNs. Ref. [17] estimated system parameters using neural networks and studied iterative learning control. Ref. [18] considered an unknown nonlinear model, used neural networks as estimators of the system's key parameters, and then applied an iterative learning control algorithm to achieve tracking objectives.
Event-triggered control (ETC), whose execution timing is entirely determined by the designed event-triggering conditions, can effectively reduce control resource consumption. It should be noted that although event-triggered control has broad application prospects, a major challenge in its design is avoiding the Zeno phenomenon, i.e., the occurrence of an infinite number of events within an extremely short time, which prevents the system from effectively conserving network resources and may even lead to system collapse. Avoiding the Zeno phenomenon has therefore become a primary principle in designing event-triggering conditions. Because event-triggered control significantly reduces network resource consumption, its integration into neural network research has attracted widespread attention. In Ref. [19], to address the bipartite synchronization problem of multi-order fractional neural networks with time-varying delays, the authors proposed an event-triggered controller and analyzed its Zeno behavior. Ref. [20] studied the synchronization problem of neural networks with communication delays and introduced an event-triggering condition based on measurement errors and current sampled states. Ref. [21] explored the synchronization of coupled neural networks affected by cyber attacks, proposing an adaptive distributed dynamic event-triggered strategy with local random state information sampling; by introducing a different dynamic variable for each node, this strategy dynamically adjusts the event-triggering threshold and effectively reduces the data transmission load. Since iterative learning control (ILC) requires data computed in previous iteration batches, event-triggered iterative learning control (ETILC) has also been extensively studied [22,23,24,25]. Fractional-order differential operators have memory: they capture the continuous influence of past states on current dynamics rather than only local instantaneous changes, enabling accurate modeling of non-local characteristics in real systems (e.g., neurotransmitter diffusion in brain neural circuits, chemical concentration propagation in industrial sensors). Adjusting the fractional order allows for a finer description of complex dynamics (e.g., smaller orders for slow-response systems such as furnace temperature control, larger orders for fast neuromorphic chip signal processing), and fractional calculus reduces to integer-order calculus in special cases, which expands its application flexibility. These advantages make FORDNNs more suitable for simulating real neuronal temporal dynamics and have driven research on their dynamics [26]. In this paper, a data sampling mechanism is employed to ensure a positive lower bound on the interval between two consecutive event-triggering instants, thereby avoiding the Zeno phenomenon.
The main novelties of this paper are highlighted as follows:
• Unlike the conventional ILC for RDNNs in Ref. [16], which updates the controller at all sampling points, our ETILC adds a sampling-based event-triggering condition that updates the input only when the error energy fails to decay, reducing controller updates while ensuring synchronization. ETILC precisely controls the update frequency of the ILC inputs through the triggering condition, eliminating redundant update operations. In essence, it shares the same design goal as ETC: both achieve efficient saving of system resources and reduction in energy consumption by decreasing the amount of data transmission and the number of controller updates during the iteration process.
• This paper uses a fractional-order ILC, namely a $D^{\alpha}$-type iterative learning law, to achieve tracking synchronization for FONNs with a diffusion term, which differs from the results in [16,17].
• Different from [23], which develops ETILC for integer-order systems, our ETILC combines a Lyapunov-like function with the actuator–sensor network and integrates the fractional diffusion terms of the FORDNN into the event-triggering condition, adapting it to non-local spatial–temporal dynamics.
The remainder of this paper is organized as follows. Section 2 introduces the model description of the FORDNN. Section 3 gives the event-triggering condition and analyzes the convergence of the tracking synchronization errors. Section 4 provides a numerical simulation example demonstrating the effectiveness of the analysis.
Notation 1. 
Let $\mathbb{N}$ denote the set of natural numbers. $\mathbb{R}^n$ and $\mathbb{C}^n$ represent the $n$-dimensional real and complex vector spaces, respectively. Let $a = a_R + \iota a_I$, where $\iota$ denotes the imaginary unit, i.e., $\iota = \sqrt{-1}$. $\otimes$ denotes the Kronecker product of two matrices. The superscript $T$ denotes matrix transposition. If $G$ is a real matrix, $\|G\| = \sqrt{\lambda_{\max}(G^T G)}$, where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue. $I_N$ represents the $N \times N$ identity matrix. $\Omega = \{\zeta = (\zeta_1, \zeta_2, \ldots, \zeta_n) \in \mathbb{R}^n : |\zeta_k| < H,\ k = 1, 2, \ldots, n\}$ is a bounded domain with smooth boundary $\partial\Omega$. Suppose $G(\zeta, t) = [G_1(\zeta, t), G_2(\zeta, t), \ldots, G_n(\zeta, t)]^T \in \mathbb{R}^n$; then the $L^2$ norm is defined as $\|G(\cdot, t)\|_{L^2}^2 = \int_{\Omega} G(\zeta, t)^T G(\zeta, t)\, d\zeta$, and the $(L^2, \lambda)$ norm is defined as $\|G\|_{(L^2, \lambda)}^2 = \sup_{0 \le t \le T} \{\|G(\cdot, t)\|_{L^2}^2\, e^{-\lambda t}\}$ $(\lambda > 0)$. For discrete time $t$, $\|G\|_{(L^2, \lambda)}^2 = \sup_{0 \le t \le N} \{\|G(\cdot, t)\|_{L^2}^2\, e^{-\lambda t}\}$ $(\lambda > 0)$.
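As a numerical illustration of Notation 1, the following sketch evaluates the two norms used throughout the convergence analysis. It assumes a uniform spatial grid of spacing `dzeta`; the function and variable names are placeholders, not objects from the paper.

```python
import numpy as np

def l2_norm_sq(G_t, dzeta):
    """||G(., t)||_{L2}^2 as a Riemann sum; G_t has shape (n_space, n_state)."""
    return float(np.sum(G_t * G_t) * dzeta)

def l2_lambda_norm_sq(G, t_grid, dzeta, lam):
    """(L2, lambda)-norm: sup over the time grid of ||G(., t)||_{L2}^2 * exp(-lam * t)."""
    return max(l2_norm_sq(G_t, dzeta) * np.exp(-lam * t)
               for G_t, t in zip(G, t_grid))
```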

2. Preliminaries

2.1. Fractional-Order Calculus

First, some useful definitions and lemmas are introduced as follows.
Definition 1 
([27]). The fractional integral of order $\alpha$ is defined as
$$ {}_{t_0}I_t^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} f(\tau)\, d\tau, \quad \alpha > 0, $$
where the gamma function is $\Gamma(\alpha) = \int_0^{+\infty} s^{\alpha-1} e^{-s}\, ds$.
Definition 2 
([27]). The Caputo fractional derivative with fractional order $\alpha$ is defined as
$$ {}_{t_0}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(k-\alpha)} \int_{t_0}^{t} (t-s)^{k-\alpha-1} f^{(k)}(s)\, ds, $$
where $t \ge t_0$, $\alpha \in (k-1, k)$, and $k$ is a positive integer. In particular, if $\alpha \in (0, 1)$,
$$ {}_{t_0}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{t_0}^{t} (t-s)^{-\alpha} f'(s)\, ds. $$
Definition 3 
([27]). The two-parameter Mittag–Leffler function is defined as
$$ E_{i,j}(t) = \sum_{k=0}^{\infty} \frac{t^k}{\Gamma(ik+j)}, \quad i > 0,\ j > 0, $$
where $t$ is a complex number; the one-parameter Mittag–Leffler function is defined as
$$ E_{i,1}(t) = E_i(t) = \sum_{k=0}^{\infty} \frac{t^k}{\Gamma(ik+1)}, \quad i > 0. $$
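For intuition, a minimal numerical sketch of Definition 3 follows. Truncating the series at a fixed number of terms (here 60) is an illustrative assumption; nothing in the paper prescribes this.

```python
from math import gamma

def mittag_leffler(t, i, j=1.0, n_terms=60):
    """Truncated series E_{i,j}(t) = sum_{k>=0} t^k / Gamma(i*k + j)."""
    return sum(t ** k / gamma(i * k + j) for k in range(n_terms))

# Sanity check: E_{1,1}(t) reduces to exp(t), so the first value is close to 2.71828.
print(mittag_leffler(1.0, 1.0))
# The one-parameter function E_alpha(.) appears later in the proofs, e.g. kappa = E_alpha(nu * t^alpha).
print(mittag_leffler(0.5, 0.5))
```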
Remark 1. 
Our study exclusively uses the Caputo fractional derivative. This choice is suitable for two reasons. First, the Caputo derivative only requires integer-order initial conditions, which have a clear physical meaning in biological and engineering settings. Second, its time-decaying weight aligns with real neuronal memory: recent inputs exert a stronger influence on the current state than distant ones, matching the dynamic behavior of biological neurons.
Lemma 1 
([27]). According to the definition of the Caputo fractional derivative, when $0 < \alpha \le 1$,
$$ {}_{t_0}I_t^{\alpha}\, {}_{t_0}^{C}D_t^{\alpha} \big(A(t)\big) = A(t) - A(t_0). $$
Lemma 2 
([28]). For a differentiable vector $A(t) \in \mathbb{R}^n$ and a symmetric positive definite matrix $P \in \mathbb{R}^{n \times n}$, the following relation holds for $t \ge t_0$:
$$ {}_{t_0}^{C}D_t^{\alpha} \big(A(t)^T P A(t)\big) \le 2 A(t)^T P\, {}_{t_0}^{C}D_t^{\alpha} A(t). $$
Lemma 3 
([29]). For $0 \le t_0 \le t \le T$, let $a(t)$ be a non-negative, non-decreasing function, $g(t)$ a non-negative, non-decreasing continuous function, and $u(t)$ a non-negative integrable function satisfying
$$ u(t) \le a(t) + g(t) \int_{t_0}^{t} (t-s)^{\alpha-1} u(s)\, ds. $$
Then $u(t) \le a(t)\, E_{\alpha}\big(g(t)\Gamma(\alpha) t^{\alpha}\big)$.

2.2. Graph Theory

Let $\mathcal{G} = \{V, \mathcal{A}, G\}$ be a directed graph, where $V = \{1, 2, \ldots, N\}$ is the set of nodes, $\mathcal{A} = \{(i,j): i, j \in V\} \subseteq V \times V$ is the edge set, and $G = [G_{ij}]_{N \times N}$ is the adjacency matrix, i.e., $G_{ij} = 1$ if $(i,j) \in \mathcal{A}$ and $G_{ij} = 0$ otherwise; note that $G_{ii}$ is defined as 0. The Laplacian matrix $L = [l_{ij}]_{N \times N}$ is defined by $l_{ij} = -G_{ij}$ for $i \neq j$ and $l_{ii} = -\sum_{j=1, j \neq i}^{N} l_{ij}$. A graph contains a spanning tree if there exists a root node from which every other node can be reached along a directed path.
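A small sketch of this Laplacian construction is given below; the adjacency matrix used here is an assumption, reconstructed from the Laplacian that appears later in the simulations of Section 4 (cf. Figure 1).

```python
import numpy as np

def laplacian(G):
    """Laplacian L = [l_ij] with l_ij = -G_ij (i != j) and l_ii = sum_{j != i} G_ij."""
    G = np.asarray(G, dtype=float)
    np.fill_diagonal(G, 0.0)            # G_ii is defined as 0
    L = -G
    np.fill_diagonal(L, G.sum(axis=1))  # row sums on the diagonal
    return L

G = np.array([[0, 1, 1, 0],             # assumed 4-node topology
              [1, 0, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(laplacian(G))
```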

2.3. FORDNN Model

Consider a coupled network composed of N identical FORDNNs operating repeatedly, with its communication topology described by the Laplacian matrix L. After incorporating the control input and system output, the mathematical model of the system is expressed as
$$ \begin{cases} \dfrac{\partial^{\alpha}}{\partial t^{\alpha}} z_{i,k}(\zeta,t) = D \dfrac{\partial^{2}}{\partial \zeta^{2}} z_{i,k}(\zeta,t) - A z_{i,k}(\zeta,t) + B f\big(z_{i,k}(\zeta,t)\big) + c \displaystyle\sum_{j=1}^{N} L_{ij} \Phi z_{j,k}(\zeta,t) + o(\zeta;\zeta_a) u_{i,k}(t) + J, \\[2mm] y_{i,k}(t) = \displaystyle\int_{\Omega} g(\zeta;\zeta_s) z_{i,k}(\zeta,t)\, d\zeta, \end{cases} $$
where $i = 1, 2, \ldots, N$ indexes the nodes of the network and $k \in \mathbb{N}$ represents the iteration number. $\zeta = (\zeta_1, \zeta_2, \ldots, \zeta_p)^T \in \Omega$ denotes the spatial variable, and $\Omega$ is a bounded, smooth, open set with boundary $\partial\Omega$. $z_{i,k}(\zeta, t) = [z_1(\zeta, t), z_2(\zeta, t), \ldots, z_n(\zeta, t)]^T \in \mathbb{R}^n$ represents the state of the neurons on the $i$-th node, and $t \in [0, T]$ is a fixed time interval. $D, A \in \mathbb{R}^{n \times n}$ are positive definite matrices representing the transmission diffusion matrix along the $i$-th node and the self-feedback matrix, respectively. $B = (b_{ij})_{n \times n} \in \mathbb{R}^{n \times n}$ is the connection weight matrix of the $j$-th neuron on the $i$-th node. $L = (L_{ij})_{N \times N}$ denotes the Laplacian matrix. $\Phi \in \mathbb{R}^{n \times n}$ is a positive definite inner coupling matrix, $c > 0$ is the overall coupling strength, $J \in \mathbb{R}^n$ is an external input, $u_{i,k}(t) \in \mathbb{R}^n$ is the control input, $o(\zeta; \zeta_a)$ is a spatial distribution function, and $f(\cdot)$ denotes the activation function.
The boundary condition and initial condition of (1) are given as follows:
$$ z_{i,k}(\zeta, t) = 0, \quad (\zeta, t) \in \partial\Omega \times [0, T], \qquad z_{i,k}(\zeta, 0) = z_0(\zeta), $$
where $z_0(\zeta)$ denotes the initial activity state of the neurons at each spatial point.
Using the Kronecker product to extend the dimension of (1), one obtains
$$ \begin{cases} \dfrac{\partial^{\alpha}}{\partial t^{\alpha}} z_k(\zeta,t) = (I_N \otimes D) \dfrac{\partial^{2}}{\partial \zeta^{2}} z_k(\zeta,t) - (I_N \otimes A) z_k(\zeta,t) + (I_N \otimes B) f\big(z_k(\zeta,t)\big) + c (L \otimes \Phi) z_k(\zeta,t) + \big(I_N \otimes o(\zeta;\zeta_a) I_n\big) u_k(t) + \mathbf{1}_N \otimes J, \\[2mm] y_k(t) = \displaystyle\int_{\Omega} \big[I_N \otimes g(\zeta;\zeta_s) I_n\big] z_k(\zeta,t)\, d\zeta, \end{cases} $$
where $z_k(\cdot,\cdot) = [z_{1,k}^T, z_{2,k}^T, \ldots, z_{N,k}^T]^T$, $u_k(\cdot) = [u_{1,k}^T, u_{2,k}^T, \ldots, u_{N,k}^T]^T$, and $y_k(\cdot) = [y_{1,k}^T, y_{2,k}^T, \ldots, y_{N,k}^T]^T$.
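To make the Kronecker-product extension concrete, the following sketch assembles the block matrices of (2) for scalar neuron states ($n = 1$), borrowing the node-level values used later in the simulations of Section 4. This is an illustrative assembly under those assumptions, not code from the paper.

```python
import numpy as np

N, n = 4, 1                              # nodes, neurons per node (n = 1 for illustration)
D, A, B, Phi, c = 0.1 * np.eye(n), 0.2 * np.eye(n), 0.4 * np.eye(n), 0.2 * np.eye(n), 0.1
L = np.array([[ 2, -1, -1,  0],
              [-1,  1,  0,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

I_N = np.eye(N)
D_bar = np.kron(I_N, D)                  # I_N (x) D acting on the stacked state z_k
A_bar = np.kron(I_N, A)                  # I_N (x) A
B_bar = np.kron(I_N, B)                  # I_N (x) B
coupling = c * np.kron(L, Phi)           # c (L (x) Phi)
print(D_bar.shape, coupling.shape)       # (N*n, N*n) block matrices of the extended dynamics
```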
Assumption 1. 
$f(\cdot)$ satisfies the Lipschitz condition: there exists a constant $l_f > 0$ such that $\|f(x_1) - f(x_2)\| \le l_f \|x_1 - x_2\|$ for any $x_1, x_2 \in \mathbb{R}^n$.
Assumption 2. 
The FORDNN (1) is an ideal model without time delays.

2.4. Synchronization of FORDNN with Iteration

Assumption 3. 
The output vector $y_{i,k}(t)$ of the NN model (1) is defined as (see [30])
$$ y_{i,k}(t) = \int_{\Omega} g(\zeta; \zeta_s) z_{i,k}(\zeta, t)\, d\zeta, $$
where $g(\zeta; \zeta_s)$ is a spatial distribution function. The spatial distributions $o(\zeta; \zeta_a)$ and $g(\zeta; \zeta_s)$ satisfy
$$ o(\zeta, \zeta_a) = \gamma, \quad \zeta \in [0, H], \qquad g(\zeta, \zeta_s) = \beta, \quad \zeta \in [0, H], $$
where $\gamma, \beta > 0$ and $H$ denotes the upper limit of the operating range of both the actuator and the sensor.
Assumption 4. 
There exist a unique control input $u_d(t)$ and a desired state $z_d(\zeta, t)$ for the desired output $y_d(t)$ that satisfy
$$ \begin{cases} \dfrac{\partial^{\alpha}}{\partial t^{\alpha}} z_d(\zeta,t) = D \dfrac{\partial^{2}}{\partial \zeta^{2}} z_d(\zeta,t) - A z_d(\zeta,t) + B f\big(z_d(\zeta,t)\big) + c \displaystyle\sum_{j=1}^{N} L_{ij} \Phi z_d(\zeta,t) + J + o(\zeta,\zeta_a) u_d(t), \\[2mm] y_d(t) = \displaystyle\int_{\Omega} g(\zeta,\zeta_s) z_d(\zeta,t)\, d\zeta. \end{cases} $$
The control objective is to achieve tracking synchronization of the fractional-order reaction–diffusion neural networks using an event-triggered sampled-data iterative learning control approach. First, Equation (4) is discretized in time into $W$ sampling intervals, where the sampling period is $h$ and $t = nh$ denotes the $n$-th sampling instant. To achieve state synchronization at the sampling points, the synchronization error is required to converge to zero as the iteration number $k$ approaches infinity, i.e.,
$$ \lim_{k \to \infty} \|z_{i,k}(\cdot, t) - z_{j,k}(\cdot, t)\|_{L^2}^2 = 0, \quad t = nh, \ n = 0, 1, 2, \ldots, W. $$

3. Main Result

3.1. Event-Triggering Condition

For the event-triggered sampling iterative learning control problem of the neural network system (1), this section designs the event-triggered strategy using a Lyapunov-like energy function approach. The proposed strategy ensures the decay property of the energy function along the iteration axis. At the sampling points, the following Lyapunov energy function is constructed:
$$ V_{i,k}(nh) = \|e_{i,k}(nh)\|^2, $$
where $e_{i,k}(\cdot) = y_{i,k}(\cdot) - y_d(\cdot)$. Due to the decay property of the Lyapunov energy function, a relaxation factor can be introduced to make the event-triggering frequency controllable. Based on the above description, the following event-triggering function is proposed:
$$ E_{i,k}(nh) = V_{i,k}(nh) - \epsilon V_{i,k-1}(nh), \quad \epsilon \in [0, 1]. $$
It should be noted that when $\epsilon = 0$, the non-negativity of the norm implies $E_{i,k}(nh) \ge 0$, so the controller updates at every sampling point; this corresponds to the traditional iterative learning control method.
According to the event-triggering function (7), when $E_{i,k}(nh) \ge 0$ the energy decay property is not satisfied; consequently, at the $(k+1)$-th iteration the control input at this sampling point is updated. Conversely, when $E_{i,k}(nh) < 0$, the output error at this sampling point continues to decay even without updating the control input, making an update unnecessary. Based on the above analysis, the event-triggering sequence is defined as
$$ k_{r+1}^{v} = \inf\{\, k+1 \mid k+1 > k_r^{v},\ E_{i,k}(vh) \ge 0 \,\}. $$
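A minimal sketch of the trigger test in (7) and (8) at one sampling instant is given below; the threshold value eps = 0.8 and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def should_trigger(e_ik, e_ik_prev, eps=0.8):
    """E_{i,k}(nh) = ||e_{i,k}(nh)||^2 - eps * ||e_{i,k-1}(nh)||^2 >= 0 ?"""
    V_k = float(np.dot(e_ik, e_ik))              # V_{i,k}(nh)
    V_km1 = float(np.dot(e_ik_prev, e_ik_prev))  # V_{i,k-1}(nh)
    return V_k - eps * V_km1 >= 0.0

# eps = 0 recovers classical ILC: the trigger fires at every sampling point.
print(should_trigger(np.array([0.2]), np.array([0.3])))  # False: energy decayed enough, no update
print(should_trigger(np.array([0.3]), np.array([0.3])))  # True: no decay, update the input
```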

3.2. Controller Design

Combining the definition of the Caputo fractional derivative, this paper employs a $D^{\alpha}$-type iterative learning control algorithm, with the controller structure designed as follows:
$$ u_{i,k+1}(nh) = u_{i,k}(nh) + \Lambda \Big[ \sum_{j=1}^{N} L_{ij} D_t^{\alpha}\big(y_{j,k}(nh) - y_{i,k}(nh)\big) + p_i D_t^{\alpha}\big(y_{i,k}(nh) - y_d(nh)\big) \Big]. $$
Conventional ILC for neural networks mostly uses P-type (proportional to error) or D-type (proportional to error derivative) laws, which are designed for full-space state feedback. However, our discrete finite-point actuator–sensor network cannot provide full-space state information—thus, we developed the iterative learning protocol (9) tailored to it. This protocol only relies on the discrete state signals collected by the sensor network and the control output of the discrete actuator network, realizing effective synchronization control while adapting to the discrete characteristics of the actuator–sensor network.
Due to the characteristics of sampling iterative learning control, the control input is held constant between sampling instants using a zero-order hold mechanism.
$$ u_{i,k}(t) = u_{i,k}(nh), \quad t \in [nh, (n+1)h). $$
Considering the characteristics of event-triggered sampled-data iterative learning control, the control input updates only when the event-triggering conditions are satisfied; otherwise, it maintains the control input from the last triggering instant. Based on this analysis and incorporating Equations (9) and (10), the controller can be designed as follows:
$$ \begin{cases} u_{i,k+1}(nh) = u_{i,k}(nh) + \Lambda\Big[\displaystyle\sum_{j=1}^{N} L_{ij} D_t^{\alpha}\big(y_{j,k}(nh) - y_{i,k}(nh)\big) + p_i D_t^{\alpha}\big(y_{i,k}(nh) - y_d(nh)\big)\Big], & k = k_r^{n}, \\[2mm] u_{i,k+1}(nh) = u_{i,k}(nh), & k \in (k_r^{n}, k_{r+1}^{n}), \\[2mm] u_{i,k}(t) = u_{i,k}(nh), & t \in [nh, (n+1)h), \end{cases} $$
where $\Lambda$ is the gain matrix. The notation $k_{r+1}^{n}$ represents the $(r+1)$-th triggered iteration batch at the $n$-th sampling point, while $k \in (k_r^{n}, k_{r+1}^{n})$ indicates that the current iteration batch is not triggered.
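The sketch below illustrates one update of law (11) along the iteration axis. Since the sampled outputs are only available at discrete instants, the Caputo derivative $D_t^{\alpha}$ of the output error is replaced here by a Grünwald–Letnikov approximation, an implementation assumption made purely for illustration (the two operators coincide under zero initial conditions); all function and variable names are placeholders.

```python
import numpy as np

def gl_fractional_derivative(samples, alpha, h):
    """Gruenwald-Letnikov estimate of D^alpha at the latest entry of `samples` (step h)."""
    w, acc = 1.0, 0.0
    for j, s in enumerate(reversed(samples)):   # s runs over f(nh), f((n-1)h), ..., f(0)
        acc += w * s
        w *= (j - alpha) / (j + 1)              # recursion for (-1)^j * binomial(alpha, j)
    return acc / h ** alpha

def ilc_update(u_ik, y_hist, y_d_hist, L, p, i, Lam, alpha, h, triggered):
    """One update of u_{i,k}(nh); y_hist[j] lists node j's sampled outputs up to nh."""
    if not triggered:
        return u_ik                             # k in (k_r^n, k_{r+1}^n): hold the input
    consensus = sum(
        L[i, j] * gl_fractional_derivative(
            [yj - yi for yj, yi in zip(y_hist[j], y_hist[i])], alpha, h)
        for j in range(len(y_hist)) if j != i)  # the j = i term vanishes anyway
    tracking = p[i] * gl_fractional_derivative(
        [yi - yd for yi, yd in zip(y_hist[i], y_d_hist)], alpha, h)
    return u_ik + Lam * (consensus + tracking)
```

With `triggered` supplied by the event check of Section 3.1, this reproduces the three cases of (11), including the zero-order hold between sampling instants.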

3.3. Convergence Analysis

3.3.1. Initial State Fixed FORDNN Model

Theorem 1. 
For the FORDNN (1), suppose Assumptions 1–4 are satisfied and the iterative learning control algorithm (11) is applied. If the learning gain satisfies
$$ \Big(1 + \frac{1}{2\xi}\Big)\,\big\| I_{Nn} + (L+K)\otimes\Lambda\,\gamma\beta H \big\|^2 < 1, $$
where $\xi > 0$ is a constant, then the synchronization error at the sampling points converges to zero in the sense of the $L^2$ norm as the iteration number $k$ tends to infinity.
Proof. 
Define
$$ \delta z_k(\zeta,t) \triangleq z_k(\zeta,t) - z_d(\zeta,t), \quad \delta u_k(t) \triangleq u_k(t) - u_d(t), \quad e_k(t) \triangleq y_k(t) - y_d(t), \quad f(\delta z_k(\zeta,t)) \triangleq f(z_k(\zeta,t)) - f(z_d(\zeta,t)). $$
For an event-triggered batch $\{k_r\}$, consider an arbitrary sampling interval $t \in [(n-1)h, nh)$. According to (2) and (13), one has
$$ D_t^{\alpha} \delta z_k(\zeta,t) = (I_N \otimes D)\frac{\partial^2}{\partial \zeta^2}\delta z_k(\zeta,t) - (I_N \otimes A)\delta z_k(\zeta,t) + (I_N \otimes B) f(\delta z_k(\zeta,t)) + c(L \otimes \Phi)\delta z_k(\zeta,t) + (I_N \otimes o(\zeta;\zeta_a) I_n)\delta u_k(t). $$
According to Lemma 2, one obtains
$$ {}_{t_0}^{C}D_t^{\alpha}\big(\delta z_k^T(\zeta,t)\,\delta z_k(\zeta,t)\big) \le 2\,\delta z_k^T(\zeta,t)\Big[(I_N\otimes D)\frac{\partial^2}{\partial\zeta^2}\delta z_k(\zeta,t) - (I_N\otimes A)\delta z_k(\zeta,t) + (I_N\otimes B)f(\delta z_k(\zeta,t)) + c(L\otimes\Phi)\delta z_k(\zeta,t) + (I_N\otimes o(\zeta;\zeta_a)I_n)\delta u_k(t)\Big]. $$
Based on the definition of the $L^2$ norm, integrating both sides of (15) over $\Omega$ with respect to $\zeta$ gives
$$ {}_{t_0}^{C}D_t^{\alpha}\|\delta z_k(\cdot,t)\|_{L^2}^2 \le 2\int_{\Omega}\delta z_k^T(\zeta,t)\Big[(I_N\otimes D)\frac{\partial^2}{\partial\zeta^2}\delta z_k(\zeta,t) - (I_N\otimes A)\delta z_k(\zeta,t) + (I_N\otimes B)f(\delta z_k(\zeta,t)) + c(L\otimes\Phi)\delta z_k(\zeta,t) + (I_N\otimes o(\zeta;\zeta_a)I_n)\delta u_k(t)\Big]d\zeta \triangleq Q_1 + Q_2 + Q_3 + Q_4 + Q_5. $$
For $Q_1$, using Green's formula and the boundary condition, one gets
$$ Q_1 = \int_{\Omega}\delta z_k^T(\zeta,t)(I_N\otimes D)\frac{\partial^2}{\partial\zeta^2}\delta z_k(\zeta,t)\,d\zeta = \int_{\partial\Omega}\delta z_k^T(\zeta,t)(I_N\otimes D)\frac{\partial}{\partial v}\delta z_k(\zeta,t)\,dS - \int_{\Omega}\frac{\partial}{\partial\zeta}\delta z_k^T(\zeta,t)(I_N\otimes D)\frac{\partial}{\partial\zeta}\delta z_k(\zeta,t)\,d\zeta \le 0, $$
where $v$ represents the outward normal vector on the boundary $\partial\Omega$, and $dS$ denotes the infinitesimal surface element for integration over $\partial\Omega$. For $Q_5$, by applying the Cauchy–Schwarz inequality and fundamental inequalities, one obtains
$$ Q_5 = \int_{\Omega}\delta z_k^T(\zeta,t)\big(I_N\otimes o(\zeta,\zeta_a)I_n\big)\delta u_k(t)\,d\zeta \le \Big(\int_{\Omega}\delta z_k^T(\zeta,t)\delta z_k(\zeta,t)\,d\zeta\Big)^{\frac12}\Big(\int_{\Omega}\big[(I_N\otimes o(\zeta,\zeta_a)I_n)\delta u_k(t)\big]^T\big(I_N\otimes o(\zeta,\zeta_a)I_n\big)\delta u_k(t)\,d\zeta\Big)^{\frac12} \le \frac12\int_{\Omega}\delta z_k^T(\zeta,t)\delta z_k(\zeta,t)\,d\zeta + \frac12\beta^2 H\,\delta u_k^T(t)\delta u_k(t) = \frac12\|\delta z_k(\cdot,t)\|_{L^2}^2 + \frac12\beta^2 H\|\delta u_k(t)\|^2. $$
Using the same method for $Q_2$, $Q_3$, and $Q_4$, from (16) we have
$$ {}_{t_0}^{C}D_t^{\alpha}\|\delta z_k(\cdot,t)\|_{L^2}^2 \le \big(\|I_N\otimes A\|_2 + \|I_N\otimes B\|_2 + l_f + 2\|c(L\otimes\Phi)\|_2 + 2\big)\|\delta z_k(\cdot,t)\|_{L^2}^2 + H\beta^2\|\delta u_k(t)\|^2 \triangleq \nu\,\|\delta z_k(\cdot,t)\|_{L^2}^2 + \psi\,\|\delta u_k(t)\|^2, $$
where $\nu = \|I_N\otimes A\|_2 + \|I_N\otimes B\|_2 + l_f + 2\|c(L\otimes\Phi)\|_2 + 2$ and $\psi = H\beta^2$. Applying the fractional-order integral of order $\alpha$ over $[(n-1)h, nh]$ to both sides, one gets
$$ {}_{(n-1)h}I_{nh}^{\alpha}\Big(D_t^{\alpha}\|\delta z_k(\cdot,t)\|_{L^2}^2\Big) \le \frac{\nu}{\Gamma(\alpha)}\int_{(n-1)h}^{nh}(nh-\tau)^{\alpha-1}\|\delta z_k(\cdot,\tau)\|_{L^2}^2\,d\tau + \frac{\psi}{\Gamma(\alpha)}\int_{(n-1)h}^{nh}(nh-\tau)^{\alpha-1}\|\delta u_k(\tau)\|^2\,d\tau. $$
According to Lemma 3, one obtains
$$ \|\delta z_k(\cdot,nh)\|_{L^2}^2 \le \|\delta z_k(\cdot,(n-1)h)\|_{L^2}^2 + \frac{\psi h^{\alpha}}{\Gamma(\alpha)\,\alpha}\|\delta u_k((n-1)h)\|^2 + \frac{\nu}{\Gamma(\alpha)}\int_{(n-1)h}^{nh}(nh-\tau)^{\alpha-1}\|\delta z_k(\cdot,\tau)\|_{L^2}^2\,d\tau \le \Big[\|\delta z_k(\cdot,(n-1)h)\|_{L^2}^2 + \frac{\psi h^{\alpha}}{\Gamma(\alpha)\,\alpha}\|\delta u_k((n-1)h)\|^2\Big] E_{\alpha}\Big(\frac{\nu}{\Gamma(\alpha)}\,\Gamma(\alpha)\,t^{\alpha}\Big) \triangleq \kappa\,\|\delta z_k(\cdot,(n-1)h)\|_{L^2}^2 + \omega\,\|\delta u_k((n-1)h)\|^2, $$
where $\kappa = E_{\alpha}(\nu t^{\alpha})$ and $\omega = E_{\alpha}(\nu t^{\alpha})\,\frac{\psi h^{\alpha}}{\Gamma(\alpha)\,\alpha}$. Iterating this relation over the sampling intervals, one has
$$ \|\delta z_k(\cdot,nh)\|_{L^2}^2 \le \kappa^{n}\|\delta z_k(\cdot,0)\|_{L^2}^2 + \omega\sum_{l=0}^{n-1}\kappa^{\,n-1-l}\|\delta u_k(lh)\|^2. $$
Noting the definition of $\delta u_{k+1}$ and using the Kronecker product at a triggering instant,
$$ \delta u_{k+1}(nh) = \delta u_k(nh) + \big((L+K)\otimes\Lambda\big) D_t^{\alpha} e_k(nh). $$
To estimate $D_t^{\alpha} e_k(nh)$, note that
$$ D_t^{\alpha} e_k(nh) = \int_{\Omega} g(\zeta;\zeta_s)\, D_t^{\alpha}\delta z_k(\zeta,nh)\,d\zeta = \int_{\Omega} g(\zeta;\zeta_s)\Big[(I_N\otimes D)\frac{\partial^2}{\partial\zeta^2}\delta z_k(\zeta,nh) - (I_N\otimes A)\delta z_k(\zeta,nh) + (I_N\otimes B)f(\delta z_k(\zeta,nh)) + c(L\otimes\Phi)\delta z_k(\zeta,nh) + (I_N\otimes o(\zeta;\zeta_a)I_n)\delta u_k(nh)\Big]d\zeta. $$
According to the boundary condition, one has
$$ \int_{\Omega} g(\zeta;\zeta_s)(I_N\otimes D)\frac{\partial^2}{\partial\zeta^2}\delta z_k\,d\zeta = 0. $$
Noting the definitions of $g(\zeta;\zeta_s)$ and $o(\zeta;\zeta_a)$, we can obtain
$$ \int_{\Omega}\big(I_N\otimes g(\zeta;\zeta_s)\,o(\zeta;\zeta_a)\,I_n\big)\delta u_k(nh)\,d\zeta = \gamma\beta H\,\delta u_k(nh). $$
Let $\int_{\Omega} g(\zeta;\zeta_s)\big[-(I_N\otimes A)\delta z_k(\zeta,nh) + (I_N\otimes B)f(\delta z_k(\zeta,nh)) + c(L\otimes\Phi)\delta z_k(\zeta,nh)\big]d\zeta \triangleq \tilde e_k(nh)$; one gets
$$ \delta u_{k+1}(nh) = \delta u_k(nh) + \big((L+K)\otimes\Lambda\big)\gamma\beta H\,\delta u_k(nh) + \big((L+K)\otimes\Lambda\big)\tilde e_k(nh). $$
Let $\big((L+K)\otimes\Lambda\big)\tilde e_k(nh) \triangleq \hat z_k$; then, by Young's inequality,
$$ \|\delta u_{k+1}(nh)\|^2 \le \Big(1+\frac{1}{2\xi}\Big)\Big\|\big(I_{Nn} + (L+K)\otimes\Lambda\,\gamma\beta H\big)\delta u_k(nh)\Big\|^2 + \Big(1+\frac{\xi}{2}\Big)\hat z_k^T\hat z_k. $$
Estimating $\hat z_k^T\hat z_k$ using Jensen's inequality,
$$ \hat z_k^T\hat z_k \le 3\lambda_{\Lambda}\Big\{\Big[\int_{\Omega}g(\zeta;\zeta_s)(I_N\otimes A)\delta z_k(\zeta,nh)\,d\zeta\Big]^T\Big[\int_{\Omega}g(\zeta;\zeta_s)(I_N\otimes A)\delta z_k(\zeta,nh)\,d\zeta\Big] + \Big[\int_{\Omega}g(\zeta;\zeta_s)(I_N\otimes B)f(\delta z_k(\zeta,nh))\,d\zeta\Big]^T\Big[\int_{\Omega}g(\zeta;\zeta_s)(I_N\otimes B)f(\delta z_k(\zeta,nh))\,d\zeta\Big] + \Big[\int_{\Omega}g(\zeta;\zeta_s)\,c(L\otimes\Phi)\delta z_k(\zeta,nh)\,d\zeta\Big]^T\Big[\int_{\Omega}g(\zeta;\zeta_s)\,c(L\otimes\Phi)\delta z_k(\zeta,nh)\,d\zeta\Big]\Big\} \triangleq X_1 + X_2 + X_3, $$
where $\lambda_{\Lambda} = \|(L+K)\otimes\Lambda\|^2$. Estimating $X_1$ by the Cauchy–Schwarz inequality,
$$ \Big\|\int_{\Omega} g(\zeta;\zeta_s)(I_N\otimes A)\delta z_k(\zeta,nh)\,d\zeta\Big\| \le \Big(\int_{\Omega}\big[g(\zeta;\zeta_s)(I_N\otimes A)\big]^T\big[g(\zeta;\zeta_s)(I_N\otimes A)\big]d\zeta\Big)^{\frac12}\Big(\int_{\Omega}\delta z_k^T(\zeta,nh)\delta z_k(\zeta,nh)\,d\zeta\Big)^{\frac12} \le H^{\frac12}\,\|I_N\otimes\gamma A\|\,\|\delta z_k(\cdot,nh)\|_{L^2}. $$
Using the same method for $X_2$ and $X_3$, one obtains
$$ \hat z_k^T\hat z_k \le 3\lambda_{\Lambda}\big(H\|I_N\otimes\gamma A\|^2 + H l_f\|I_N\otimes\gamma B\|^2 + H\|L\otimes c\,\Phi\|^2\big)\|\delta z_k(\cdot,nh)\|_{L^2}^2 \triangleq \eta\,\|\delta z_k(\cdot,nh)\|_{L^2}^2. $$
Multiplying both sides of (19) by $\kappa^{-\lambda n}$ and using the initial value condition together with the definition of the $\lambda$-norm, one obtains
$$ \|\delta z_k\|_{(L^2,\lambda)}^2 \le \omega\,\frac{\kappa^{-(\lambda-1)n}-1}{\kappa-\kappa^{\lambda}}\,\|\delta u_k\|_{\lambda}^2. $$
Substituting (24) into (23) and multiplying both sides of (22) by $\kappa^{-\lambda n}$, one can obtain
$$ \|\delta u_{k+1}\|_{\lambda}^2 \le \Big(\lambda_{\Lambda} + \eta\,\omega\,\frac{\kappa^{-(\lambda-1)n}-1}{\kappa-\kappa^{\lambda}}\Big)\|\delta u_k\|_{\lambda}^2, $$
where $\lambda_{\Lambda} = \big(1+\frac{1}{2\xi}\big)\|I_{Nn}+(L+K)\otimes\Lambda\,\gamma\beta H\|^2$. According to the properties of the Mittag–Leffler function, $\kappa > 1$. Choosing $\lambda$ sufficiently large and combining the condition of Theorem 1 with the contraction-mapping principle, one has
$$ \lim_{k\to\infty}\|\delta u_k(nh)\|^2 = 0. $$
According to the relationship between $\delta u_k$ and $\delta z_k$ in (19),
$$ \lim_{k\to\infty}\|\delta z_k(\cdot,nh)\|_{L^2}^2 = 0. $$
Noting the definition of synchronization error,
$$ \|z_{i,k}(\cdot,nh) - z_{j,k}(\cdot,nh)\|_{L^2}^2 = \|z_{i,k}(\cdot,nh) - z_d(\cdot,nh) - z_{j,k}(\cdot,nh) + z_d(\cdot,nh)\|_{L^2}^2 \le 2\|\delta z_{i,k}(\cdot,nh)\|_{L^2}^2 + 2\|\delta z_{j,k}(\cdot,nh)\|_{L^2}^2 \longrightarrow 0 \quad \text{as } k\to\infty. $$
The proof of Theorem 1 is completed. □

3.3.2. Initial State Offset FORDNN Model

Theorem 1 proves the convergence of the synchronization error in FORDNN under fixed initial-state values. However, in many practical scenarios, the initial values of the system are not fixed. Therefore, this subsection considers the following iterative learning control algorithm with initial-state learning.
$$ \begin{cases} u_{i,k+1}(nh) = u_{i,k}(nh) + \Lambda\Big[\displaystyle\sum_{j\in V} L_{ij} D_t^{\alpha}\big(y_{j,k}(nh) - y_{i,k}(nh)\big) + p_i D_t^{\alpha}\big(y_{i,k}(nh) - y_d(nh)\big)\Big], & k = k_r^{n}, \\[2mm] u_{i,k+1}(nh) = u_{i,k}(nh), & k \in (k_r^{n}, k_{r+1}^{n}), \\[2mm] u_{i,k}(t) = u_{i,k}(nh), & t \in [nh, (n+1)h), \\[2mm] z_{i,k+1}(\zeta,0) = z_{i,k}(\zeta,0) + \Lambda_1 e_{i,k}(0), \end{cases} $$
where $\Lambda_1$ is the initial-state learning gain. Different from (11), this law additionally learns the initial state.
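A one-line sketch of the additional initial-state learning step in (27) is given below; the argument names are placeholders (z_k0 is the sampled spatial profile $z_{i,k}(\cdot,0)$ over a grid, e_k0 the output error $e_{i,k}(0)$, and Lam1 the gain $\Lambda_1$).

```python
def initial_state_update(z_k0, e_k0, Lam1):
    """z_{i,k+1}(zeta, 0) = z_{i,k}(zeta, 0) + Lambda_1 * e_{i,k}(0) at every grid point."""
    return z_k0 + Lam1 * e_k0
```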
Theorem 2. 
For the FORDNN (1), suppose Assumptions 1–4 are satisfied and the iterative learning control algorithm (27) is applied. If the learning gains satisfy
$$ \Big(1+\frac{1}{2\xi}\Big)\big\|I_{Nn}+(L+K)\otimes\Lambda\,\gamma\beta H\big\|^2 < 1, \qquad \|I+\gamma H\Lambda_1\|^2 < 1, $$
where $\xi > 0$ is a constant and $\Lambda_1$ is the initial-state learning gain, then the synchronization error at the sampling points converges to zero in the sense of the $L^2$ norm as the iteration number $k$ tends to infinity.
Proof. 
When t = 0 , one has
$$ y_{k+1}(0) - y_k(0) = \int_{\Omega} g(\zeta;\zeta_s)\big[z_{k+1}(\zeta,0) - z_k(\zeta,0)\big]d\zeta. $$
Noting that $y_{k+1}(0) - y_k(0) = e_{k+1}(0) - e_k(0)$, one has
$$ \int_{\Omega} g(\zeta;\zeta_s)\big[z_{k+1}(\zeta,0) - z_k(\zeta,0)\big]d\zeta = e_{k+1}(0) - e_k(0). $$
Substituting (26) into (28), one obtains
$$ e_{k+1}(0) - e_k(0) = \int_{\Omega} g(\zeta;\zeta_s)\Lambda_1 e_k(0)\,d\zeta = \gamma H\Lambda_1 e_k(0), \qquad e_{k+1}(0) = (I+\gamma H\Lambda_1)\,e_k(0). $$
Taking the norm on both sides of (29),
$$ \|e_{k+1}(0)\|^2 = \|(I+\gamma H\Lambda_1)e_k(0)\|^2 \le \|I+\gamma H\Lambda_1\|^2\,\|e_k(0)\|^2. $$
According to (27), one obtains $\lim_{k\to\infty}\|\delta z_k(\cdot,0)\|_{L^2}^2 = 0$. Similarly to Theorem 1, one has
$$ \|\delta u_{k+1}\|_{\lambda}^2 \le \Big(\lambda_{\Lambda}+\eta\,\omega\,\frac{\kappa^{-(\lambda-1)n}-1}{\kappa-\kappa^{\lambda}}\Big)\|\delta u_k\|_{\lambda}^2 + \kappa^{-\lambda n}\kappa^{n}\|\delta z_k(\cdot,0)\|_{L^2}^2, $$
where the last term tends to zero as $k\to\infty$.
Again, according to (27), we can obtain that
$$ \lim_{k\to\infty}\|\delta u_k(nh)\|^2 = 0. $$
Correspondingly,
$$ \lim_{k\to\infty}\|\delta z_k(\cdot,nh)\|_{L^2}^2 = 0. $$
The proof of Theorem 2 is completed. □

4. Simulations

Let $\zeta \in [0,1]$, $t \in [0,1]$, $c = 0.1$, $J = 0$, $\alpha = 0.5$, $\gamma = 0.4$, $\beta = 0.5$, and $H = 1$. The remaining parameters are chosen as
$$ D = 0.1, \quad A = 0.2, \quad B = 0.4, \quad \Phi = 0.2, $$
and the learning gain matrix is chosen as $\Lambda = -0.7$. Substituting these parameters into Theorem 1 yields $0.8824 < 1$. The activation function is chosen as $f(x) = 0.25(|x+1| - |x-1|)$, which satisfies the Lipschitz condition, and the desired output trajectory is selected as
$$ y_d(t) = t^2. $$
The initial values of the neuron states $z_k(\zeta,0)$ and the control input $u_k(0)$ are set to zero. The Laplacian matrix $L$ and the pinning gain matrix $K$ are
$$ L = \begin{bmatrix} 2 & -1 & -1 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}, \qquad K = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. $$
The corresponding communication topology is shown in Figure 1.
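As a cross-check of the gain condition of Theorem 1 for these parameters (with $n = 1$, and noting that the prefactor $1 + \frac{1}{2\xi}$ can be pushed arbitrarily close to 1 by taking $\xi$ large), the following sketch evaluates the squared spectral norm numerically. It is an illustrative computation under these assumptions, not code from the paper.

```python
import numpy as np

gamma_a, beta_s, H = 0.4, 0.5, 1.0           # actuator gain, sensor gain, operating range
Lam = -0.7                                    # learning gain Lambda
L = np.array([[ 2, -1, -1,  0],
              [-1,  1,  0,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
K = np.diag([0.0, 1.0, 0.0, 1.0])             # pinning gains p_i on the diagonal

M = np.eye(4) + gamma_a * beta_s * H * np.kron(L + K, np.atleast_2d(Lam))
print(np.linalg.norm(M, 2) ** 2)              # squared spectral norm; cf. the 0.8824 reported above
```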
Case 1: Initial state fixed
Numerical simulation is conducted, yielding the following results.
Figure 2 and Figure 3 show the norm curves of the maximum output error and the maximum synchronization error, respectively. Both the output error and the synchronization error exhibit a clear convergence trend, reaching the order of $10^{-4}$, thus accomplishing the tracking synchronization control objective.
Figure 4 illustrates the state surfaces of individual neurons at the 100th iteration, demonstrating that the neural states have achieved synchronization.
Figure 5 depicts the event-triggering intervals at the 40th iteration. If no triggering occurs at a given sampling point, the interval is 0; otherwise, it is determined by the time elapsed since the previous trigger.
Figure 6 displays the event-triggering intervals at the 300th sampling point. If the sampling point does not satisfy the triggering condition in the current iteration, the interval is recorded as 0. If triggered, the interval is calculated based on the number of iterations since the last triggering event (e.g., if the last trigger occurred two iterations ago, the interval is 2).
Figure 7 presents the number of event triggers during the iterative learning process. From the perspective of triggering frequency, it can be concluded that the controller’s update frequency is reduced. Combined with the results in Figure 2 and Figure 3, the proposed event-triggered sampled-data iterative learning control algorithm not only ensures the convergence of synchronization errors but also effectively reduces the computational burden on the controller. In Figure 7, the higher number of triggers in the initial iterations is due to the complexity of the fractional-order reaction–diffusion neural network system. At this stage, the controller’s effectiveness is relatively weak, causing most sampling points to meet the event-triggering condition and thus increasing the triggering frequency. As the iteration progresses, the controller’s performance gradually improves, leading to a reduction in triggering events.
By analyzing Figure 5, Figure 6 and Figure 7 together, it is evident that the proposed event-triggering scheme effectively reduces the number of triggering events, confirming its efficiency.
Case 2: Initial state offset
The system parameters and iterative learning gains from the fixed initial condition (Case 1) are retained. With the initial-state learning gain set as $\Lambda_1 = -0.5$, the following simulation results are obtained.
Figure 8 and Figure 9 depict the maximum output error curve and the $L^2$-norm curve of the maximum synchronization error, respectively. It can be observed that during initial-state learning, the error magnitude remains on the order of $10^{-3}$, which indicates lower control accuracy compared with the case of a fixed initial state.
Figure 10 illustrates the state surfaces of individual neurons at the 100th iteration, demonstrating that the neural states have achieved synchronization.
Figure 11, Figure 12 and Figure 13 show the event-triggering intervals at the 40th iteration, the event-triggering intervals along the iteration axis at the 300th sampling point, and the number of event-triggered actions per iteration, respectively.
Assuming 500 sampling points per iteration and 100 iterations, each of the four neurons has 50,000 potential triggering opportunities in the absence of event triggering. Figure 13 shows that the actual triggering rates of the four neurons are approximately 45%, 35%, 50%, and 45%, respectively. This means that each neuron avoided about 55%, 65%, 50%, and 55% of unnecessary controller updates, demonstrating the resource savings of the event-triggering mechanism.

5. Conclusions

In this work, an event-triggered sampled-data iterative learning control algorithm is proposed for FORDNNs. The algorithm provides the event-triggering condition as well as sufficient conditions for synchronization. The scenario with an initial-state offset is also considered, in which case the Neumann boundary condition is applied. Numerical simulations demonstrate that after a certain number of iterations the synchronization error converges and network resources are effectively saved. Future work will extend the proposed ETILC algorithm to FORDNNs with time-varying delays and external disturbances for more practical engineering scenarios, explore the synchronization of heterogeneous FORDNNs (with non-identical node dynamics) under the event-triggered iterative learning framework, and investigate the optimal design of event-triggering thresholds and learning gains to enhance convergence speed and resource-saving efficiency.

Author Contributions

X.D.: Supervision, conceptualization, methodology, writing and editing, funding acquisition; Y.L.: Writing—original draft; Y.W.: Editing; J.Z.: Formal analysis; S.T.: Supervision, review. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the NSFC under Grants 62363002, 61863004 and 62173151.

Data Availability Statement

No data was used for the research described in the paper.

Acknowledgments

The authors sincerely thank the editors and reviewers for their valuable comments and suggestions, which contributed to the improvement of this paper.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Hopfield, J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
2. Yadav, P.; Jahan, S.; Nisar, K. Solving fractional Bagley–Torvik equation by fractional order Fibonacci wavelet arising in fluid mechanics. Ain Shams Eng. J. 2024, 15, 102299.
3. Alqhtani, M.; Owolabi, K.; Saad, K. Efficient numerical techniques for computing the Riesz fractional-order reaction-diffusion models arising in biology. Chaos Solitons Fractals 2022, 161, 112394.
4. Xu, C.; Aouiti, C.; Liu, Z. Bifurcation caused by delay in a fractional-order coupled Oregonator model in chemistry. MATCH Commun. Math. Comput. Chem. 2022, 88, 371–396.
5. Mondal, A.; Upadhyay, R.K. Diverse neuronal responses of a fractional-order Izhikevich model: Journey from chattering to fast spiking. Nonlinear Dyn. 2018, 91, 1275–1288.
6. Shrama, T.R.; Gade, P.M. Fractional-order neuronal maps: Dynamics, control and stability analysis. Pramana 2024, 98, 53.
7. Santamaria, F.; Wils, S.; De Schutter, E.; Augustine, G.J. Anomalous diffusion in Purkinje cell dendrites caused by spines. Neuron 2006, 52, 635–648.
8. Yuan, M.; Li, Y.; Zheng, M. Novel intermittent pinning control beyond the semigroup framework for fractional-order delayed inertial memristive neural networks. Chaos Solitons Fractals 2026, 202, 117568.
9. Yang, S.; Jiang, H.; Hu, C. Exponential synchronization of fractional-order reaction-diffusion coupled neural networks with hybrid delay-dependent impulses. J. Frankl. Inst. 2021, 358, 3167–3192.
10. Narayanan, G.; Ali, M.; Karthikeyan, R. Adaptive strategies and its application in the Mittag–Leffler synchronization of delayed fractional-order complex-valued reaction-diffusion neural networks. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 3294–3307.
11. Stamova, I.; Stamov, G. Mittag–Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers. Neural Netw. 2017, 96, 22–32.
12. Wang, S.; Huang, Y.; Xu, B. Pinning synchronization of spatial diffusion coupled reaction-diffusion neural networks with and without multiple time-varying delays. Neurocomputing 2017, 227, 92–100.
13. Feng, J.; Sun, W.; Tang, Z. Quasi-synchronization of heterogeneous neural networks with hybrid time delays via sampled-data saturating impulsive control. Chaos Solitons Fractals 2024, 182, 114788.
14. Zhang, W.; Xing, K.; Li, J. Adaptive synchronization of delayed reaction-diffusion FCNNs via learning control approach. J. Intell. Fuzzy Syst. 2015, 28, 141–150.
15. Xu, Y.; Sun, F.; Li, W. Exponential synchronization of fractional-order multilayer coupled neural networks with reaction-diffusion terms via intermittent control. Neural Netw. 2021, 33, 16019–16032.
16. Zhou, X.; Wang, H.; Tian, Y. Iterative learning control-based tracking synchronization for linearly coupled reaction-diffusion neural networks with time delay and iteration-varying switching topology. J. Frankl. Inst. 2021, 358, 3822–3846.
17. Zhang, D.; Wang, Z.; Masayoshi, T. Neural-network-based iterative learning control for multiple tasks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4178–4190.
18. Shi, Q.; Huang, X.; Meng, B. Neural network-based iterative learning control for trajectory tracking of unknown SISO nonlinear systems. Expert Syst. Appl. 2023, 232, 120863.
19. Liu, P. Event-triggered bipartite synchronization of coupled multi-order fractional neural networks. Knowl. Based Syst. 2022, 255, 109733.
20. Wen, S.; Zeng, Z.; Chen, M.; Huang, T. Synchronization of switched neural networks with communication delays via the event-triggered control. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2334–2343.
21. Li, H.; Cao, J.; Kashkynbaye, A.; Cai, S. Adaptive dynamic event-triggered cluster synchronization in an array of coupled neural networks subject to cyber-attacks. Neurocomputing 2022, 511, 380–598.
22. Chi, R.; Lin, N. Event-triggered learning consensus of networked heterogeneous nonlinear agents with switching topologies. J. Frankl. Inst. 2021, 385, 3803–3821.
23. Lin, N.; Chi, R.; Huang, B.; Hou, Z. Event-triggered nonlinear iterative learning control. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 5118–5128.
24. Lin, N.; Chi, R.; Huang, B. Event-triggered ILC for optimal consensus at specified data points of heterogeneous networked agents with switching topologies. IEEE Trans. Cybern. 2022, 52, 8951–8961.
25. Zhu, S.; Dai, X.; Zhou, R. Sampling-based event-triggered iterative learning control in nonlinear hyperbolic distributed parameter systems. J. Frankl. Inst. 2024, 361, 106676.
26. Song, X.; Sun, X.; Man, J. Synchronization of fractional-order spatiotemporal complex-valued neural networks in finite-time interval and its application. J. Frankl. Inst. 2021, 358, 8207–8225.
27. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999.
28. Duarte-Mermoud, M.; Aguila-Camacho, N.; Gallegos, J. Using general quadratic Lyapunov functions to prove Lyapunov uniform stability for fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 650–659.
29. Ye, H.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328, 1075–1081.
30. Jiang, Z.; Cui, B.; Wu, W. Event-driven observer-based control for distributed parameter systems using mobile sensor and actuator. Comput. Math. Appl. 2016, 72, 2854–2864.
Figure 1. Network topology.
Figure 2. Maximum output error curve.
Figure 3. Maximum synchronization error $L^2$-norm curve.
Figure 4. State surface of neurons.
Figure 5. Triggered interval along the time axis at the 40th iteration batch.
Figure 6. Triggered interval along the iteration axis at the 300th sampling point.
Figure 7. Number of triggers at the k-th iteration.
Figure 8. Maximum output error curve.
Figure 9. Maximum synchronization error $L^2$-norm curve.
Figure 10. State surface of neurons.
Figure 11. Triggered interval along the time axis at the 40th iteration batch.
Figure 12. Triggered interval along the iteration axis at the 300th sampling point.
Figure 13. Number of triggers at the k-th iteration.