Article

Synchronization in Fractional-Order Delayed Non-Autonomous Neural Networks

by Dingping Wu 1,2, Changyou Wang 2,* and Tao Jiang 3

1 Division of Mathematics, Sichuan University Jingjiang College, Meishan 620860, China
2 College of Applied Mathematics, Chengdu University of Information Technology, Chengdu 610225, China
3 School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu 611137, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1048; https://doi.org/10.3390/math13071048
Submission received: 25 January 2025 / Revised: 9 March 2025 / Accepted: 21 March 2025 / Published: 24 March 2025
(This article belongs to the Special Issue Artificial Neural Networks and Dynamic Control Systems)

Abstract:
Neural networks, mimicking the structural and functional aspects of the human brain, have found widespread applications in diverse fields such as pattern recognition, control systems, and information processing. A critical phenomenon in these systems is synchronization, where multiple neurons or neural networks harmonize their dynamic behaviors to a common rhythm, contributing significantly to their efficient operation. However, the inherent complexity and nonlinearity of neural networks pose significant challenges in understanding and controlling this synchronization process. In this paper, we focus on the synchronization of a class of fractional-order, delayed, and non-autonomous neural networks. Fractional-order dynamics, characterized by their ability to capture memory effects and non-local interactions, introduce additional layers of complexity to the synchronization problem. Time delays, which are ubiquitous in real-world systems, further complicate the analysis by introducing temporal asynchrony among the neurons. To address these challenges, we propose a straightforward yet powerful global synchronization framework. Our approach leverages novel state feedback control to derive an analytical formula for the synchronization controller. This controller is designed to adjust the states of the neural networks in such a way that they converge to a common trajectory, achieving synchronization. To establish the asymptotic stability of the error system, which measures the deviation between the states of the neural networks, we construct a Lyapunov function. This function provides a scalar measure of the system’s energy, and by showing that this measure decreases over time, we demonstrate the stability of the synchronized state. Our analysis yields sufficient conditions that guarantee global synchronization in fractional-order neural networks with time delays and Caputo derivatives. 
These conditions provide a clear roadmap for designing neural networks that exhibit robust and stable synchronization properties. To validate our theoretical findings, we present numerical simulations that demonstrate the effectiveness of our proposed approach. The simulations show that, under the derived conditions, the neural networks successfully synchronize, confirming the practical applicability of our framework.
MSC:
34A08; 34K24; 68T07; 93D20

1. Introduction

In practical neural network models, external influences and internal system dynamics can degrade system stability, potentially leading to instability. When selecting a fractional-order neural network as a research subject, it is crucial to account for these effects and focus on analyzing its stability, as well as devising a simple and effective synchronous controller. Only by addressing these aspects can we effectively utilize fractional-order neural networks and enhance their practical applications. Synchronization, a prevalent dynamic behavior in real-world systems, underscores the importance of studying synchronization in fractional-order neural network models. Consequently, numerous scholars have dedicated their research to this topic, as evidenced in [1,2,3,4,5,6,7] and related references.
In 2014, Yu et al. [8] conducted research on the global projective synchronization of a specific fractional-order neural network model:
$D^{\alpha} x_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) + I_i, \quad x_i(t_0) = x_{0i}, \quad i = 1, 2, \ldots, n,$ (1)
where 0 < α < 1 . Utilizing certain analytical techniques and integrating open-loop and adaptive control methods, Yu et al. derived novel criteria that ensure the projection synchronization of fractional-order neural networks. In 2018, Hu et al. [9] delved into the global asymptotic synchronization issue of the aforementioned fractional-order neural network model (1). Initially, they explored fresh properties of fractional calculus and formulated the asymptotic stability theorem for fractional-order systems represented by (1). Furthermore, they introduced a novel feedback controller. Ultimately, by leveraging the proposed asymptotic stability theorem and matrix inequality techniques, they established sufficient conditions for achieving global asymptotic synchronization in fractional-order neural networks. However, it is evident that Model (1) lacks a delay term. In reality, time delays are ubiquitous in numerous network domains. Considering fluctuations in network parameters due to hardware operations and the limited switching speeds of signal transmitters and amplifiers within the network, incorporating delays into neural network systems is imperative. Consequently, in recent times, the investigation of dynamic behaviors in delayed fractional-order neural networks (DFNNs) has garnered increasing attention from scholars. For instance, in 2015, Wang et al. [10] conducted a study on the fractional-order Hopfield neural network incorporating time delays:
$D_t^{q} x_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} c_{ij} g_j(x_j(t-\tau_{ij})), \quad t > 0, \quad i = 1, 2, \ldots, n,$ (2)
where $q \in (0, 1)$ and $a_i, b_{ij}, c_{ij}$ are constants. Utilizing the linearization method and Laplace transform, they derived stability conditions for two-dimensional fractional-order neural networks with time delays. Additionally, they proposed three-dimensional fractional-order neural networks featuring diverse ring structures and time delays, along with the corresponding stability conditions for these networks. In 2017, Peng et al. [11] investigated fractional-order neural networks that incorporate discontinuous activations and time delays:
${}^{c}D_{0,t}^{\alpha} x(t) = -C x(t) + A f(x(t)) + B f(x(t-\tau)) + I,$ (3)
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in R^n$ is the vector of neuron states at time $t$. By applying the fractional differential inclusion theory, inequality analysis techniques, and linear matrix inequalities, Peng et al. provided sufficient conditions to ensure global Mittag–Leffler synchronization and finite-time synchronization. In 2018, Zhang et al. [12] examined the drive–response synchronization problem for a specific class of fractional-order delay neural networks. The model in question is as follows:
$D^{\alpha} x_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau)) + I_i,$ (4)
where $0 < \alpha < 1$, $i \in N = \{1, 2, \ldots, n\}$. Both a state feedback controller and an adaptive controller were designed, and several novel synchronization conditions were derived utilizing relevant theoretical knowledge to guarantee the global synchronization of System (4).
On the other hand, it is generally recognized that obtaining exact values for model parameters is often impractical. This is primarily due to inherent perturbations in the model and environmental disturbances, which result in parameter uncertainty. Consequently, when analyzing the dynamic behavior of nonlinear systems, the impact of these parameter uncertainties cannot be overlooked, as they may compromise the stability, synchronization, or other properties of the system. In recent years, numerous studies have centered on the synchronization of neural network models with parametric uncertainty. In 2018, researchers [13] explored the synchronization problem in delayed fractional-order complex-valued neural networks with memristors and uncertain parameters. They derived sufficient conditions to ensure global asymptotic synchronization for the drive–response models in question, utilizing the differential inclusion theory, Lyapunov direct method, and comparison theorem. It is particularly noteworthy that Wang et al. [14] conducted a study on the following non-autonomous fractional-order delayed neural network:
$D^{\alpha} x_i(t) = e^{pt}\Big(-a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t-\tau)) + I_i\Big), \quad i = 1, 2, \ldots, n.$ (5)
Utilizing the Mittag–Leffler function and linear feedback control, Wang et al. derived the analytic expression for the synchronization controller. Furthermore, they obtained a sufficient condition to guarantee the synchronization of this delayed fractional-order neural network with Caputo derivatives. This was achieved by developing innovative analysis methods, employing the theory of delayed differential inequalities, and constructing an appropriate Lyapunov function. In 2024, Wang et al. [15] continued their research on the following non-autonomous neural networks, incorporating time delays and Caputo derivatives:
$D^{\alpha} u_i(t) = e^{pt}\Big(-a_i u_i(t) + \sum_{j=1}^{n} b_{ij} g_j(u_j(t)) + \sum_{j=1}^{n} c_{ij} f_j(u_j(t-\tau)) + I_i\Big), \quad i = 1, 2, \ldots, n.$ (6)
By constructing two innovative Lyapunov functions and leveraging the properties of fractional-order delay differential inequalities, Wang et al. derived criteria for achieving global projective synchronization in delayed non-autonomous neural networks with Caputo derivatives. These criteria were obtained under the conditions of two newly proposed synchronous controllers.
However, to our knowledge, previous research has seldom explored non-autonomous delayed fractional-order neural networks (DFNNs). These networks can more accurately simulate the interactions between neurons, as the connection weights and self-inhibition rates within neural networks are typically not constant but rather functions that vary over time. Motivated by prior studies, the aim of this paper is to investigate the global synchronization of a broader class of non-autonomous DFNNs incorporating the Caputo derivative. By constructing a Lyapunov function, we verify the asymptotic stability of the error system and derive sufficient conditions for ensuring global synchronization in these new neural networks. The structure of this paper is as follows: Section 2 introduces necessary definitions and lemmas, along with a description of the model. In Section 3, we obtain global synchronization schemes and provide sufficient conditions for global synchronization in the new neural networks. Section 4 presents numerical simulations to validate our findings. Finally, Section 5 concludes this paper with some remarks.

2. Preliminaries and System Description

In this paper, $R$ represents the real number field, while $R^m$ denotes the $m$-dimensional Euclidean space. $N^+$ is the set of positive integers. The gamma function $\Gamma(\cdot)$ and the Mittag–Leffler functions $E_\alpha(\cdot)$ and $E_{\alpha,\beta}(\cdot)$ are defined as follows:
Definition 1 ([16,17]).
The fractional integral of the integrable function  x ( t )  is defined as:
$D_{t_0}^{-\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-s)^{\alpha-1} x(s)\, ds,$ (7)
where $t \ge t_0$ and the order $\alpha > 0$. $\Gamma(\cdot)$ is the gamma function, defined as
$\Gamma(\alpha) = \int_{0}^{\infty} e^{-t} t^{\alpha-1}\, dt,$ (8)
satisfying the recursive relationship $\Gamma(\alpha+1) = \alpha\, \Gamma(\alpha)$.
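As a quick numerical sanity check (an illustrative Python snippet, not part of the original article), the recursion can be verified with the standard library's gamma function at a sample fractional order:

```python
import math

# Verify Gamma(alpha + 1) = alpha * Gamma(alpha) for a fractional order.
alpha = 0.87
lhs = math.gamma(alpha + 1.0)
rhs = alpha * math.gamma(alpha)
assert abs(lhs - rhs) < 1e-12
```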
Definition 2.
The single-parameter Mittag–Leffler function is defined as follows:
Given  α > 0 , z C , we denote 
$E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha + 1)}$ (9)
as the single-parameter Mittag–Leffler function.
Definition 3.
The two-parameter Mittag–Leffler function is defined as follows:
Given  α > 0 , β > 0  and  z C , we denote 
$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha + \beta)}$ (10)
as the two-parameter Mittag–Leffler function.
Based on the above definitions, it holds that $E_{\alpha}(z) = E_{\alpha,1}(z)$ and $E_{1,1}(z) = e^{z}$.
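These series can be evaluated by simple truncation. The following Python sketch (ours, for illustration; truncation length chosen ad hoc) checks the identity $E_{1,1}(z) = e^z$ above, together with the classical special case $E_2(z) = \cosh(\sqrt{z})$ for $z \ge 0$:

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=80):
    # Truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(k*alpha + beta).
    return sum(z ** k / math.gamma(k * alpha + beta) for k in range(terms))

# E_{1,1}(z) = e^z, and E_alpha(z) = E_{alpha,1}(z)
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
# E_2(z) = cosh(sqrt(z)) for z >= 0, since Gamma(2k + 1) = (2k)!
assert abs(mittag_leffler(1.0, 2.0) - math.cosh(1.0)) < 1e-12
```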
As mentioned above, in physical systems, the Caputo fractional derivative carries more practical physical meaning and is more suitable for describing actual systems with initial values compared to the Riemann–Liouville (RL) fractional derivative and the Grünwald–Letnikov (GL) fractional derivative. Consequently, in practical applications, the Caputo differential operation is more frequently utilized than the RL and GL differential operations. Therefore, the definition of the Caputo fractional derivative will also be employed in this article.
Definition 4 ([16,17]).
The Caputo fractional derivative of function  x ( t )  is defined as: 
$D_{t_0}^{\alpha} x(t) = \frac{1}{\Gamma(n-\alpha)} \int_{t_0}^{t} (t-s)^{n-\alpha-1} x^{(n)}(s)\, ds,$ (11)
where $\alpha$ represents the order of the derivative, $n-1 < \alpha \le n$.
In particular, when $0 < \alpha < 1$, the following applies:
$D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_{t_0}^{t} (t-s)^{-\alpha} x'(s)\, ds.$ (12)
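The definition can be checked numerically against a known closed form: for $x(t) = t^{\mu}$ one has $D^{\alpha} t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\alpha)} t^{\mu-\alpha}$. Below is an illustrative Python sketch (ours, not from the paper) that discretizes the integral with a product rule, integrating the singular kernel exactly on each cell and sampling $x'(s)$ at midpoints:

```python
import math

def caputo_derivative(x_prime, t, alpha, n=20_000):
    # Product-integration rule for the Caputo derivative (0 < alpha < 1):
    # x'(s) is sampled at cell midpoints, while the singular kernel
    # (t - s)^(-alpha) is integrated exactly on each cell.
    h = t / n
    total = 0.0
    for i in range(n):
        s_mid = (i + 0.5) * h
        w = ((t - i * h) ** (1 - alpha) - (t - (i + 1) * h) ** (1 - alpha)) / (1 - alpha)
        total += x_prime(s_mid) * w
    return total / math.gamma(1.0 - alpha)

# Closed form: D^alpha t^2 = 2 t^(2 - alpha) / Gamma(3 - alpha).
alpha, t = 0.87, 1.0
approx = caputo_derivative(lambda s: 2.0 * s, t, alpha)
exact = 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
assert abs(approx - exact) < 1e-3
```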
Lemma 1 ([17]).
For any constants $c_1, c_2$, if $w(t), u(t)$ are continuous on $[t_0, b]$ and $\alpha > 0$, $\beta > 0$, then
(1) $D^{-\alpha} D^{-\beta} w(t) = D^{-\alpha-\beta} w(t)$;
(2) $D^{\alpha} D^{-\alpha} w(t) = w(t)$;
(3) $D^{-\alpha} (c_1 w(t) \pm c_2 u(t)) = c_1 D^{-\alpha} w(t) \pm c_2 D^{-\alpha} u(t)$.
Lemma 2 ([18]).
Assume that  f ( t ) R n  is a differentiable vector value function and  P R n × n  is a positive definite, symmetric matrix. Then, it holds that 
$D^{\alpha}\big(f^{T}(s) P f(s)\big) \le 2 f^{T}(s) P\, D^{\alpha} f(s),$ (13)
where $\alpha \in (0, 1]$, $s \ge s_0$.
Lemma 3 ([19]).
Consider a Caputo fractional differential system $D^{\alpha} x(t) = f(t, x(t), x(t-\tau))$, where $x \in R^n$, $0 < \alpha < 1$. Suppose that $w_1(s), w_2(s)$ are continuous non-decreasing functions, $w_1(s)$ and $w_2(s)$ are positive for $s > 0$, $w_1(0) = w_2(0) = 0$, and $w_2(s)$ is strictly increasing. If there exists a continuously differentiable function $V: R \times R^n \to R$ such that $w_1(\|x\|) \le V(t, x) \le w_2(\|x\|)$ for $t \in R$, $x \in R^n$, and additionally, if there are two positive constants $p$ and $q$ such that $p < q$ and
$D^{\alpha} V(t, x(t)) \le -q V(t, x(t)) + p \sup_{-\tau \le \theta \le 0} V(t+\theta, x(t+\theta)), \quad t \ge t_0,$ (14)
then the Caputo fractional system is globally uniformly asymptotically stable.
In this article, we investigate non-autonomous neural networks with time delays and the Caputo derivative serving as the leader system. This system is characterized by the following:
$D^{\alpha} x_i(t) = -c_i(t) x_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(t) g_j(x_j(t-\tau)) + I_i.$ (15)
Alternatively, they can be expressed in vector form:
$D^{\alpha} x(t) = -C(t) x(t) + A(t) f(x(t)) + B(t) g(x(t-\tau)) + I,$ (16)
where each element of $C(t)$ is required to be a positive bounded function, and each element of $A(t)$ and $B(t)$ is a bounded function; $0 < \alpha < 1$, $i \in N = \{1, 2, \ldots, n\}$, $t \ge 0$, and $n$ represents the number of neurons. Here, $x(t) = (x_1(t), \ldots, x_n(t))^T \in R^n$ is the state variable at time $t$, $\tau$ is the propagation delay of the neurons, and $x(t-\tau) = (x_1(t-\tau), \ldots, x_n(t-\tau))^T \in R^n$ is the state variable at time $t-\tau$. The vectors $f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$ and $g(x(t)) = (g_1(x_1(t)), \ldots, g_n(x_n(t)))^T$ $(n \in N^+)$ denote the excitation functions of the neurons at times $t$ and $t-\tau$, respectively. $C(t)$ denotes the self-connection weight matrix of the neurons, $A(t)$ denotes the internal connection weight matrix without time delay, $B(t)$ denotes the internal connection weight matrix with time delay, and $I = (I_1, I_2, \ldots, I_n)^T$ denotes the external input vector. Neural network models such as (15) can be employed to simulate the analysis and processing of diverse types of information performed by the human brain, and they constitute a crucial element of artificial intelligence algorithms.

3. Main Conclusions and Their Proof

Consider the follower system of leader system (15), which is characterized as follows:
$D^{\alpha} y_i(t) = -c_i(t) y_i(t) + \sum_{j=1}^{n} a_{ij}(t) f_j(y_j(t)) + \sum_{j=1}^{n} b_{ij}(t) g_j(y_j(t-\tau)) + I_i + u_i(t).$ (17)
Alternatively, they can be expressed in vector form:
$D^{\alpha} y(t) = -C(t) y(t) + A(t) f(y(t)) + B(t) g(y(t-\tau)) + I + u(t),$ (18)
where $y(t) = (y_1(t), \ldots, y_n(t))^T \in R^n$ represents the state variable of the follower system (17), $u(t) = (u_1(t), \ldots, u_n(t))^T$ represents the synchronous controller of the follower system (17), and the meanings of $A(t)$, $B(t)$, $C(t)$, $f(\cdot)$, and $g(\cdot)$ are consistent with those in the leader system (15).
We define the synchronization error vector as $e_i(t) = y_i(t) - x_i(t)$ $(i = 1, 2, \ldots, n)$. Then, using the leader system (15) and the follower system (17), we can derive the expression for the error system, which is given as follows:
$D^{\alpha} e_i(t) = -c_i(t) e_i(t) + \sum_{j=1}^{n} a_{ij}(t) \big[f_j(y_j(t)) - f_j(x_j(t))\big] + \sum_{j=1}^{n} b_{ij}(t) \big[g_j(y_j(t-\tau)) - g_j(x_j(t-\tau))\big] + u_i(t).$ (19)
Alternatively, they can be expressed in vector form:
$D^{\alpha} e(t) = -C(t) e(t) + A(t)\big[f(y(t)) - f(x(t))\big] + B(t)\big[g(y(t-\tau)) - g(x(t-\tau))\big] + u(t),$ (20)
where e ( t ) = ( e 1 ( t ) , , e n ( t ) ) T R n .
Through analysis, it becomes evident that proving the asymptotic stability of the zero solution of the error system (19) is equivalent to demonstrating the synchronization between the leader system (15) and the follower system (17).
Assumption 1.
The neuron excitation functions $f_j(\cdot)$ and $g_j(\cdot)$ satisfy the Lipschitz condition on the real field. That is, there exist constants $l_j > 0$ and $h_j > 0$ $(j = 1, 2, \ldots, n)$ such that the inequalities $|f_j(y_i(t)) - f_j(x_i(t))| \le l_j |y_i(t) - x_i(t)|$ and $|g_j(y_i(t)) - g_j(x_i(t))| \le h_j |y_i(t) - x_i(t)|$ hold for all functions $x_i(t)$ and $y_i(t)$ $(i = 1, 2, \ldots, n)$.
Let
$p = \max_{1 \le i \le n} \Big\{ \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)| \Big\},$ (21)
and
$q = \min_{1 \le i \le n} \Big\{ 2 c_i(t) + 2 R_i - 2 \sum_{j=1}^{n} l_j \sup_{t_0 \le t < +\infty} |a_{ij}(t)| - \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)| \Big\},$ (22)
where  R i  can be appropriately chosen such that  q > p > 0 .
Control Scheme 1.
The controller input function  u i ( t ) , i = 1 , 2 , , n ,  in the follower system (17) is designed as follows:
$u(t) = -R\big(y(t) - x(t)\big),$ (23)
where $R = \mathrm{diag}(R_1, R_2, \ldots, R_n)$.
The process framework for synchronizing non-autonomous fractional-order neural networks studied in this article is described as follows (Figure 1):
Theorem 1.
Under Assumption 1, and employing Control Scheme 1, we can deduce that the fractional-order, non-autonomous delayed neural network systems (15) and (17) achieve global synchronization.
Proof. 
Based on the error system (20), we devise an appropriate Lyapunov function, denoted as follows:
$V(t) = \frac{1}{2} e^{T}(t) e(t).$ (24)
Based on Assumption 1, it follows that $|f_j(y_i(t)) - f_j(x_i(t))| \le l_j |e_i(t)|$ and $|g_j(y_i(t)) - g_j(x_i(t))| \le h_j |e_i(t)|$.
Since each element in A ( t ) and B ( t ) must be a bounded function in both the leader system (15) and the follower system (17), there exists an upper bound for each element of A ( t ) and B ( t ) over the interval [ t 0 , ) .
According to Lemmas 1 and 2, we compute the fractional derivative of V ( t ) and substitute the error system (20) into it. Consequently,
$$\begin{aligned}
D^{\alpha} V(t) &= D^{\alpha}\Big(\tfrac{1}{2} e^{T}(t) e(t)\Big) \le e^{T}(t)\, D^{\alpha} e(t) \\
&= \sum_{i=1}^{n} e_i(t) \Big\{ -c_i(t) e_i(t) + \sum_{j=1}^{n} a_{ij}(t) \big[f_j(y_j(t)) - f_j(x_j(t))\big] + \sum_{j=1}^{n} b_{ij}(t) \big[g_j(y_j(t-\tau)) - g_j(x_j(t-\tau))\big] + u_i(t) \Big\} \\
&\le \sum_{i=1}^{n} |e_i(t)| \Big\{ -c_i(t) |e_i(t)| + \sum_{j=1}^{n} \sup_{t_0 \le t < +\infty} |a_{ij}(t)|\, l_j |e_i(t)| + \sum_{j=1}^{n} \sup_{t_0 \le t < +\infty} |b_{ij}(t)|\, h_j |e_i(t-\tau)| \Big\} - \sum_{i=1}^{n} R_i e_i^{2}(t) \\
&\le -\sum_{i=1}^{n} \Big( c_i(t) + R_i - \sum_{j=1}^{n} l_j \sup_{t_0 \le t < +\infty} |a_{ij}(t)| \Big) e_i^{2}(t) + \sum_{i=1}^{n} \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)|\, |e_i(t)|\, |e_i(t-\tau)|.
\end{aligned}$$ (25)
According to the elementary inequality $ab \le \frac{1}{2}(a^2 + b^2)$, the following can be concluded:
$|e_i(t)|\, |e_i(t-\tau)| \le \frac{1}{2} |e_i(t)|^{2} + \frac{1}{2} |e_i(t-\tau)|^{2}.$ (26)
Substituting (26) into (25) yields the following:
$$\begin{aligned}
D^{\alpha} V(t) &\le -\sum_{i=1}^{n} \Big( c_i(t) + R_i - \sum_{j=1}^{n} l_j \sup_{t_0 \le t < +\infty} |a_{ij}(t)| \Big) e_i^{2}(t) + \sum_{i=1}^{n} \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)|\, |e_i(t)|\, |e_i(t-\tau)| \\
&\le -\frac{1}{2} \sum_{i=1}^{n} \Big( 2 c_i(t) + 2 R_i - 2 \sum_{j=1}^{n} l_j \sup_{t_0 \le t < +\infty} |a_{ij}(t)| - \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)| \Big) e_i^{2}(t) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} h_j \sup_{t_0 \le t < +\infty} |b_{ij}(t)|\, e_i^{2}(t-\tau) \\
&\le -q V(t) + p V(t-\tau).
\end{aligned}$$ (27)
From (27), we obtain the following:
$D^{\alpha} V(t, x(t)) \le -q V(t, x(t)) + p \sup_{-\tau \le \theta \le 0} V(t+\theta, x(t+\theta)).$ (28)
Based on Lemma 3, the fractional-order neural networks with time delays, i.e., the leader system (15) and the follower system (17), achieve global synchronization under the state feedback control (23). □

4. Numerical Simulation

In this section, we present an example of numerical simulation for a fractional-order neural network model incorporating time delays. The instantiated system’s numerical simulation is conducted using Matlab R2024b software, and the resulting simulations are analyzed to validate the accuracy of the aforementioned theoretical analysis and derivations.
Example 1.
Consider the following delayed Caputo fractional-order neural networks as the leader system:
$$\begin{aligned}
D^{0.87} x_1(t) &= -(\cos(t) + 2)\, x_1(t) + \cos(t) \tanh(x_1(t)) - \cos(t) \tanh(x_2(t)) - \sin(t) \tanh(x_1(t-0.01)) - \cos(t) \tanh(x_2(t-0.01)), \\
D^{0.87} x_2(t) &= -(\cos(t) + 2)\, x_2(t) + \sin(t) \tanh(x_1(t)) + 2\cos(t) \tanh(x_2(t)) - 2\sin(t) \tanh(x_1(t-0.01)) - \sin(t) \tanh(x_2(t-0.01)).
\end{aligned}$$ (29)
The follower system is described as follows:
$$\begin{aligned}
D^{0.87} y_1(t) &= -(\cos(t) + 2)\, y_1(t) + \cos(t) \tanh(y_1(t)) - \cos(t) \tanh(y_2(t)) - \sin(t) \tanh(y_1(t-0.01)) - \cos(t) \tanh(y_2(t-0.01)) - 7\,(y_1(t) - x_1(t)), \\
D^{0.87} y_2(t) &= -(\cos(t) + 2)\, y_2(t) + \sin(t) \tanh(y_1(t)) + 2\cos(t) \tanh(y_2(t)) - 2\sin(t) \tanh(y_1(t-0.01)) - \sin(t) \tanh(y_2(t-0.01)) - 7\,(y_2(t) - x_2(t)).
\end{aligned}$$ (30)
Comparing systems (29) and (30) with systems (15) and (17), respectively, we can conclude that the correlation weight matrices  A ( t )  and  B ( t )  between neurons at time  t  and  t τ , the self-correlation weight matrix  C ( t )  of neurons, as well as the excitation functions  f ( x ( t ) )  and  g ( x ( t ) )  of a neuron at time  t  and  t τ  are as follows, respectively:
$$A(t) = \begin{pmatrix} \cos(t) & -\cos(t) \\ \sin(t) & 2\cos(t) \end{pmatrix}, \quad B(t) = \begin{pmatrix} -\sin(t) & -\cos(t) \\ -2\sin(t) & -\sin(t) \end{pmatrix}, \quad C(t) = \begin{pmatrix} \cos(t) + 2 & 0 \\ 0 & \cos(t) + 2 \end{pmatrix}, \quad g(x(t)) = f(x(t)) = (\tanh(x_1(t)), \tanh(x_2(t)))^T.$$
And the selection of state variables, time delays, fractional derivatives, and external inputs is as follows:
$x(t) = (x_1(t), x_2(t))^T, \quad \tau = 0.01, \quad \alpha = 0.87, \quad I = (I_1, I_2)^T = (0, 0)^T.$
Through a straightforward calculation, by setting $R_1 = R_2 = 7$ and $l_1 = l_2 = h_1 = h_2 = 1$, it becomes evident that systems (29) and (30) fulfill the criteria outlined in Theorem 1. The initial conditions (IC) of the leader system (29) and the follower system (30) are taken as follows:
$x_1(t_0) = 1, \quad x_2(t_0) = 3, \quad y_1(t_0) = 2, \quad y_2(t_0) = 1.$ (31)
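This "straightforward calculation" can be sketched in a few lines of Python (ours, for illustration): the quantities $p$ and $q$ of (21) and (22) depend only on the suprema of the entry magnitudes of $A(t)$ and $B(t)$, and we conservatively replace the time-varying $c_i(t) = \cos(t) + 2$ by its lower bound 1.

```python
import numpy as np

# sup over t of |entries| of A(t) and B(t) for Example 1
# (|cos t| <= 1, |sin t| <= 1, |2 cos t| <= 2, |2 sin t| <= 2).
sup_A = np.array([[1.0, 1.0],
                  [1.0, 2.0]])
sup_B = np.array([[1.0, 1.0],
                  [2.0, 1.0]])
l = np.ones(2)            # Lipschitz constants l_j = 1 (tanh)
h = np.ones(2)            # Lipschitz constants h_j = 1
R = np.array([7.0, 7.0])  # feedback gains R_1 = R_2 = 7
c_low = 1.0               # cos(t) + 2 >= 1 for all t (conservative bound)

p = max(sup_B @ h)        # row-wise sums, then the maximum over i
q = min(2.0 * c_low + 2.0 * R - 2.0 * (sup_A @ l) - sup_B @ h)
assert float(p) == 3.0 and float(q) == 7.0 and q > p > 0
```

Since $q = 7 > p = 3 > 0$, the hypothesis of Theorem 1 holds for this example even under the conservative bound on $c_i(t)$.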
Using MATLAB, we can obtain some numerical solutions of Systems (29) and (30) with the initial conditions (31), which are shown in Figure 2 and Figure 3.
As can be observed from Figure 2 and Figure 3, the numerical solution ( x 1 , x 2 ) of the leader system (29) and the numerical solution ( y 1 , y 2 ) of the corresponding follower system (30) gradually converge and tend towards consistency. The synchronization errors of the leader–follower systems (29) and (30) progressively stabilize at zero. Consequently, it is confirmed that the leader system (29) achieves global synchronization with the follower system (30).
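For readers without MATLAB, the qualitative behavior can be reproduced with a short Python sketch (ours, not the authors' code). It uses an explicit Grünwald–Letnikov discretization of the Caputo derivative, with the signs of system (29) as reconstructed above; step size and horizon are chosen for illustration only.

```python
import numpy as np

def gl_weights(alpha, n):
    # w_j = (-1)^j * binom(alpha, j), via the standard recurrence.
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def rhs(t, v, v_delay):
    # Right-hand side of the leader system (29); tanh is the activation.
    c, s, f = np.cos(t), np.sin(t), np.tanh
    return np.array([
        -(c + 2) * v[0] + c * f(v[0]) - c * f(v[1])
        - s * f(v_delay[0]) - c * f(v_delay[1]),
        -(c + 2) * v[1] + s * f(v[0]) + 2 * c * f(v[1])
        - 2 * s * f(v_delay[0]) - s * f(v_delay[1]),
    ])

alpha, tau, h, T, R = 0.87, 0.01, 0.01, 5.0, 7.0
n = int(T / h)
m = int(round(tau / h))                  # delay measured in steps
w = gl_weights(alpha, n)
x = np.zeros((n + 1, 2)); x[0] = [1.0, 3.0]   # leader, IC (31)
y = np.zeros((n + 1, 2)); y[0] = [2.0, 1.0]   # follower, IC (31)
ha = h ** alpha

for k in range(1, n + 1):
    t = (k - 1) * h
    xd = x[max(k - 1 - m, 0)]            # constant history before t0
    yd = y[max(k - 1 - m, 0)]
    # Grunwald-Letnikov memory applied to the increment (Caputo correction).
    mem_x = (w[1:k + 1, None] * (x[k - 1::-1] - x[0])).sum(axis=0)
    mem_y = (w[1:k + 1, None] * (y[k - 1::-1] - y[0])).sum(axis=0)
    u = -R * (y[k - 1] - x[k - 1])       # state feedback control (23)
    x[k] = x[0] + ha * rhs(t, x[k - 1], xd) - mem_x
    y[k] = y[0] + ha * (rhs(t, y[k - 1], yd) + u) - mem_y

err0 = np.abs(y[0] - x[0]).max()
errT = np.abs(y[-1] - x[-1]).max()
assert errT < 0.1 * err0                 # synchronization error has decayed
```

In this sketch the synchronization error shrinks well below its initial value over the simulated horizon, consistent with the behavior reported in Figure 2 and Figure 3; the algebraic (Mittag–Leffler) tail typical of fractional dynamics means the decay is fast at first and slower thereafter.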
To further validate the global synchronization between the leader system (29) and the follower system (30), we conducted numerical simulations with varying initial conditions, fractional orders, and time delays. The results of these simulations are presented in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9. Upon observing Figure 4 and Figure 5, it becomes evident that, regardless of the initial conditions chosen, the trajectories of the leader–follower system tend to converge, with the error curve approaching zero. This confirms that the leader system (29) achieves global synchronization with the follower system (30). Similarly, Figure 6 and Figure 7 demonstrate that, as the fractional orders vary, the trajectories of the leader–follower system remain consistent, and the error curve trends towards zero. This further verifies the global synchronization between the leader system (29) and the follower system (30). Furthermore, as shown in Figure 8 and Figure 9, even under different time delays, the trajectories of the leader–follower system continue to align, and the error curve diminishes towards zero. This indicates that the leader system (29) maintains global synchronization with the follower system (30). In summary, regardless of the initial conditions, fractional orders, or time delays selected, the error curve ultimately tends to zero. This conclusively verifies that the leader system (29) is globally synchronized with the follower system (30), and these variations do not impact the synchronization of the system.

5. Summary and Future Work

In this paper, we have delved into the synchronization dynamics of a specific class of fractional-order, delayed, and non-autonomous neural networks. Our primary contribution lies in proposing a streamlined framework for achieving global synchronization within this complex system. By employing a novel state feedback control strategy, we have derived an analytical formula for the synchronization controller. This formula serves as a cornerstone for ensuring the synchronization of the neural networks under consideration. Furthermore, we have leveraged the construction of a Lyapunov function to establish the asymptotic stability of the error system. This stability analysis is crucial for understanding the long-term behavior of the synchronization process. Through rigorous mathematical derivations, we have obtained sufficient conditions that guarantee global synchronization in fractional-order neural networks with time delays and Caputo derivatives. These conditions provide a clear pathway for designing neural networks that exhibit the desired synchronization properties. To validate our theoretical findings, we have conducted numerical simulations. These simulations have demonstrated the effectiveness of our proposed approach in achieving global synchronization. The results obtained from these simulations further reinforce the robustness and practical applicability of our synchronization framework. In addition, the method of constructing Lyapunov functions in references [20,21] is also worthy of our reference and learning in future research work.
Looking ahead, there are several avenues for future research. One potential direction is to extend our synchronization framework to more complex neural network architectures, such as those with spatial dependencies or multiple layers. Additionally, it would be interesting to explore the synchronization properties of neural networks with different types of fractional derivatives, such as the Riemann–Liouville derivative. Furthermore, the impact of various types of delays, such as distributed delays or time-varying delays, on the synchronization of fractional-order neural networks warrants further investigation. Finally, the application of our synchronization framework to real-world problems, such as secure communication or pattern recognition, could provide valuable insights and practical benefits.

Author Contributions

Conceptualization, D.W. and C.W.; methodology, C.W.; software, C.W. and T.J.; validation, D.W., C.W. and T.J.; formal analysis, D.W.; investigation, D.W. and C.W.; writing—original draft preparation, D.W. and C.W.; writing—review and editing, D.W., C.W. and T.J.; funding acquisition, D.W. and T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Sichuan Higher Education Institutions “Double First Class” Construction Gongga Program, the Sichuan Science and Technology Planning Project under (Grant No. 2024YFHZ0320), and the Special Project for Traditional Chinese Medicine Research of Sichuan Administration of Traditional Chinese Medicine (Grant No. 2024zd030) of China.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Song, Q. Synchronization analysis of coupled connected neural networks with mixed time delays. Neurocomputing 2009, 72, 3907–3914. [Google Scholar] [CrossRef]
  2. Bao, H.; Park, J.H.; Cao, J. Adaptive synchronization of fractional-order memristor-based neural networks with time delay. Nonlinear Dyn. 2015, 82, 1343–1354. [Google Scholar] [CrossRef]
  3. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Finite-time synchronization of fractional-order memristor-based neural networks with time delays. Neural Netw. 2016, 73, 36–46. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The flowchart of the model and algorithm in this article.
Figure 2. Numerical solution of leader–follower systems (29) and (30) with IC (31).
Figure 3. Synchronization errors of leader–follower systems (29) and (30) with IC (31).
Figure 4. Numerical solution of leader–follower systems (29) and (30) under varying IC.
Figure 5. Synchronization errors of leader–follower systems (29) and (30) under varying IC.
Figure 6. Numerical solution of leader–follower systems (29) and (30) under varying fractional orders.
Figure 7. Synchronization errors of leader–follower systems (29) and (30) under varying fractional orders.
Figure 8. Numerical solution of leader–follower systems (29) and (30) under varying delays.
Figure 9. Synchronization errors of leader–follower systems (29) and (30) under varying delays.
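The figures above show numerical solutions and synchronization errors of a fractional-order delayed leader–follower pair under state feedback. As a rough illustration of how such plots are typically produced, the sketch below integrates a hypothetical two-neuron Caputo-type delayed Hopfield pair with a Grünwald–Letnikov discretization and a feedback controller u = −Ke. The paper's systems (29) and (30) are not reproduced here: all matrices, gains, delays, and step sizes are illustrative assumptions, and the plain GL recursion (pre-history treated as constant) is a standard approximation adequate for orders q close to 1.

```python
import numpy as np

def gl_weights(q, n):
    # Binomial coefficients c_j = (-1)^j * C(q, j) of the Grünwald-Letnikov scheme,
    # computed by the stable recursion c_j = c_{j-1} * (1 - (1 + q)/j).
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + q) / j)
    return c

def simulate(q=0.95, tau=0.1, h=0.002, T=2.0, K=10.0):
    # Hypothetical 2-neuron leader/follower pair; every parameter below is an
    # illustrative assumption, NOT the systems (29)-(30) of the paper.
    C = np.diag([1.0, 1.0])                       # self-decay rates
    A = np.array([[2.0, -0.1], [-1.5, 2.0]])      # instantaneous connection weights
    B = np.array([[-1.5, -0.1], [-0.2, -1.0]])    # delayed connection weights
    n, k = int(T / h), int(tau / h)               # time steps, delay in steps
    c = gl_weights(q, n)
    x = np.zeros((n + 1, 2)); x[0] = [0.4, 0.6]   # leader; constant history x[0]
    y = np.zeros((n + 1, 2)); y[0] = [-0.2, 0.1]  # follower; different IC
    f = np.tanh                                   # neuron activation
    for i in range(1, n + 1):
        xd, yd = x[max(i - 1 - k, 0)], y[max(i - 1 - k, 0)]   # delayed states
        fx = -C @ x[i - 1] + A @ f(x[i - 1]) + B @ f(xd)
        u = -K * (y[i - 1] - x[i - 1])                        # state feedback control
        fy = -C @ y[i - 1] + A @ f(y[i - 1]) + B @ f(yd) + u
        # GL step: x_i = h^q * f(x_{i-1}) - sum_{j=1}^{i} c_j x_{i-j}
        w = c[1:i + 1, None]
        x[i] = h**q * fx - (w * x[i - 1::-1]).sum(axis=0)
        y[i] = h**q * fy - (w * y[i - 1::-1]).sum(axis=0)
    return x, y
```

The GL recursion carries the fractional derivative's full memory through the weighted sum over all past states; under a sufficiently large gain K, the synchronization error ‖y − x‖ decays toward zero, mirroring the behavior shown in the error plots of Figures 3, 5, 7 and 9.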
Share and Cite

Wu, D.; Wang, C.; Jiang, T. Synchronization in Fractional-Order Delayed Non-Autonomous Neural Networks. Mathematics 2025, 13, 1048. https://doi.org/10.3390/math13071048
