Article

Synchronization of Discrete-Time Fractional-Order Complex-Valued Neural Networks with Distributed Delays

1 Department of Mathematics, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, Tamil Nadu, India
2 Complex Systems and Networked Science Research Laboratory, Department of Mathematics, Thiruvalluvar University, Vellore 632115, Tamil Nadu, India
3 Department of Mathematics, Faculty of Sciences and Arts in Sarat Abeda, King Khalid University, Abha 62521, Saudi Arabia
4 Department of Mathematics, Faculty of Science and Arts, King Khalid University, Abha 62529, Saudi Arabia
5 Department of Mathematics, Faculty of Sciences and Arts (Mahayel), King Khalid University, Abha 62529, Saudi Arabia
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(6), 452; https://doi.org/10.3390/fractalfract7060452
Submission received: 22 March 2023 / Revised: 8 May 2023 / Accepted: 24 May 2023 / Published: 1 June 2023

Abstract: This research investigates the synchronization of discrete-time fractional-order complex-valued neural networks with distributed delays. Necessary conditions for the stability of the proposed networks are established using the theory of discrete fractional calculus, the discrete Laplace transform, and the theory of discrete fractional-order Mittag–Leffler functions. Adequate criteria guaranteeing global asymptotic stability are determined using Lyapunov's direct technique together with some novel analysis techniques of fractional calculus, yielding sufficient conditions for global stability. The validity of the theoretical results is finally demonstrated through numerical examples.

1. Introduction

Fractional calculus was first introduced more than 300 years ago, in Leibniz's letter to L'Hospital of 1695, and the majority of its theory was developed during the 19th century. The distinct advantage of fractional-order systems over traditional integer-order systems is that they provide an ideal instrument for describing the memory and hereditary features of diverse materials and processes. For a long time, however, fractional calculus received little attention because of its complexity and the lack of an applied background. Fractional-order differential equations have recently been demonstrated to be useful modelling tools in a variety of scientific and engineering domains, as shown in [1,2,3]. The dynamical characteristics of neural networks have also drawn significant attention during the last few decades, owing to their effective application in optimization, signal processing, associative memory, parallel computing, pattern recognition, artificial intelligence, and related areas. As fractional calculus advanced quickly, several researchers found that it could be incorporated into neural networks [4,5,6,7,8,9,10].
Despite the significant progress made in continuous fractional calculus, research on discrete fractional calculus is still in its early stages. Diaz and Osler introduced an infinite-series definition in 1974 in order to explore discrete fractional calculus. Continuous-time and discrete-time systems are two complementary descriptions of real-world applications, so the question of whether discrete-time systems exhibit dynamical behaviors similar to those of their continuous-time counterparts has naturally emerged. It is crucial to study the dynamic behavior of discrete fractional operators, since not all discrete operators share the properties of their continuous counterparts. Researchers often consider continuous-time systems when simulating and analyzing dynamic behavior on computers; however, in a digital network, signal reception and operation are based on discrete time rather than continuous time [11,12,13,14,15,16,17,18].
Networks that process complex-valued input by employing complex-valued parameters and variables are known as complex-valued neural networks (CVNNs) [19,20,21,22,23,24,25,26,27]. In comparison with real-valued neural networks, CVNNs offer a simpler network layout, quicker training, and increased power in complex signal processing. However, according to Liouville's theorem, every bounded entire (analytic) activation function in CVNNs reduces to a constant, which makes it both more difficult and more important to understand the dynamical behaviors of CVNNs. Additionally, CVNNs handle several challenging real-life problems, such as the XOR problem, better than real-valued neural networks. A variety of techniques have been used to evaluate the stability of CVNNs, and in recent years several scholars have provided significant results guaranteeing the dynamics of complex-valued neural networks with temporal delays [28,29,30,31,32,33,34,35,36,37,38].
Synchronization of time-delayed neural networks has received particular attention due to its numerous potential applications in image processing, signal processing, associative memory, and secure communication, and it has grown in popularity as a neural network research topic during the past decade. Several different types of fractional neural network synchronization problems are studied today [39,40], and these synchronization analyses are typically carried out using a singular Gronwall inequality and the Filippov solution theorem [41,42,43].
Stability theory, a broad field of science and engineering, examines the dynamical behavior of both linear and nonlinear systems. Most stability studies conducted in recent decades have focused on stability in the Lyapunov sense, including asymptotic, exponential, and uniform stability. The well-known time-domain methods for integer-order systems, such as the Lyapunov functional method and its combinations with Razumikhin-type techniques, cannot be easily generalized to fractional-order (FO) systems with time delay, because it is challenging to calculate the FO derivatives of Lyapunov functions. In this paper, the Caputo definition is adopted, and numerical examples are used to demonstrate the accuracy of the suggested procedure. The novelties of the study are given below:
(1) We study the global synchronization of discrete-time fractional-order complex-valued neural networks with distributed delays.
(2) Unlike the previous literature, this paper examines the stability of discrete fractional-order complex-valued neural networks directly with stability theory in the complex field, as opposed to breaking complex-valued systems down into real-valued systems.
(3) Using the Lyapunov direct technique, a synchronization condition for FOCVNNs with temporal delays is determined. In light of the definition of the Caputo fractional difference, it is simple to calculate the first-order backward difference of the Lyapunov function that we design, which includes discrete fractional sum terms.
(4) Conditions for the global Mittag–Leffler stability of fractional-order CVNNs are established using fractional difference inequalities and appropriate fractional-order Lyapunov functions.
(5) The essential properties of the discrete Mittag–Leffler function and the nabla discrete Laplace transform are investigated.
(6) Finally, numerical illustrations are provided.

2. Preliminaries

Let $\nabla\varrho(\ell) := \varrho(\ell) - \varrho(\ell-1)$ be the backward difference operator and $\nabla^{m}\varrho(\ell) := \nabla\big(\nabla^{m-1}\varrho(\ell)\big)$ its $m$-th iterate, where $m \in \mathbb{N}_{+}$.
Definition 1
([44]). The Nabla discrete fractional sum of order  β > 0  is defined as:
$$\nabla_{a}^{-\beta}\varrho(\ell) = \frac{1}{\Gamma(\beta)}\sum_{s=a+1}^{\ell}\big(\ell-\rho(s)\big)^{\overline{\beta-1}}\,\varrho(s),$$
where $a \in \mathbb{R}$, $\rho(s) = s-1$, and $\ell \in \mathbb{N}_{a} = \{a,\ a+1,\ a+2,\ \ldots\}$.
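For readers who wish to experiment numerically, the fractional sum of Definition 1 can be evaluated directly from the rising-factorial identity $(\ell-\rho(s))^{\overline{\beta-1}} = \Gamma(\ell-s+\beta)/\Gamma(\ell-s+1)$. The following Python sketch is our own illustration (the function name and the sanity check are not from the paper) and assumes an integer starting point $a$:

```python
import math

def nabla_fractional_sum(a, beta, f, ell):
    """Nabla fractional sum of order beta > 0 of f at ell (cf. Definition 1).

    Uses (ell - rho(s))^{overline{beta-1}} = Gamma(ell - s + beta) / Gamma(ell - s + 1),
    with rho(s) = s - 1, summing over s = a+1, ..., ell.
    """
    total = 0.0
    for s in range(a + 1, ell + 1):
        weight = math.gamma(ell - s + beta) / math.gamma(ell - s + 1)
        total += weight * f(s)
    return total / math.gamma(beta)

# Sanity check: for beta = 1 the fractional sum reduces to the ordinary sum.
f = lambda s: s ** 2
print(nabla_fractional_sum(0, 1.0, f, 5))   # 55.0 = 1 + 4 + 9 + 16 + 25
print(nabla_fractional_sum(0, 0.5, f, 5))   # half-order sum
```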
Definition 2
([45]). The Riemann–Liouville fractional difference of order  β > 0  is defined as
$$\nabla_{a}^{\beta}\varrho(\ell) = \nabla^{m}\big(\nabla_{a}^{-(m-\beta)}\varrho(\ell)\big),$$
where $m-1 < \beta \le m$, $m \in \mathbb{N}_{+}$, and $\ell \in \mathbb{N}_{a+m}$.
Definition 3
([46]). (Global Mittag–Leffler stability) The origin of System (1) is Mittag–Leffler stable if
$$\|x(\ell)\| \le \Big[R\big(x(\ell_{0})\big)\,E_{q}\big(-\delta(\ell-\ell_{0})^{q}\big)\Big]^{\sigma},$$
where $\ell_{0}$ denotes the initial instant, $q \in (0,\ 1)$, $\delta > 0$, $\sigma > 0$, $R(0) = 0$, $R(x) \ge 0$, and $R(x)$ is locally Lipschitz on $x \in \mathbb{R}^{n}$ with respect to a Lipschitz constant $R_{0}$.
Lemma 1
([47]). Let $q(\ell) = (q_{1}(\ell),\ \ldots,\ q_{m}(\ell))^{T} \in \mathbb{R}^{m}$ be a vector-valued function and let $H \in \mathbb{R}^{m\times m}$ be a positive definite matrix. Then
$${}^{C}\nabla_{0}^{\beta}\big(q^{T}(\ell)\,H\,q(\ell)\big) \le 2\,q^{T}(\ell)\,H\,{}^{C}\nabla_{0}^{\beta}q(\ell), \quad \beta \in (0,\ 1).$$
Lemma 2
([48]). For $0 < \beta \le 1$ and $\ell = a + n$,
$$\nabla_{a}^{\beta}\varrho^{2}(\ell) \le 2\,\varrho(\ell)\,\nabla_{a}^{\beta}\varrho(\ell).$$
Lemma 3
([49]). Suppose that $V(\ell) \in \mathbb{R}$ is a continuous, differentiable, and non-negative function satisfying
$$D^{\beta}V(\ell) \le -b\,V(\ell) + c\,V(\ell-\omega), \quad 0 < \beta < 1, \qquad V(\ell) = \varphi(\ell) \ge 0, \quad \ell \in [-\omega,\ 0].$$
If $b > 2c$ and $c > 0$, then for all $\varphi(\ell) \ge 0$ and $\omega > 0$, $\lim_{\ell\to+\infty} V(\ell) = 0$.
Lemma 4
([50]). Let $V(\ell)$ be a continuous function on $[0,\ +\infty)$ satisfying
$$D^{\beta}V(\ell) \le -\delta\,V(\ell), \quad \beta \in (0,\ 1),$$
where $\delta$ is a constant. Then
$$V(\ell) \le V(0)\,E_{\beta}\big(-\delta\,\ell^{\beta}\big).$$
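Since the Mittag–Leffler function $E_{\beta}(\cdot)$ governs the decay estimates in Definition 3 and Lemma 4, one simple way to visualize such bounds is to evaluate its truncated power series. The sketch below is our own illustration (the parameter values are arbitrary, not taken from the paper) and is only adequate for arguments of moderate size:

```python
import math

def mittag_leffler(beta, z, terms=100):
    """Truncated series E_beta(z) = sum_{k>=0} z^k / Gamma(beta*k + 1).
    Terms are computed in log space (lgamma) to avoid overflow;
    suitable only for moderate |z|."""
    if z == 0.0:
        return 1.0
    total = 0.0
    for k in range(terms):
        magnitude = math.exp(k * math.log(abs(z)) - math.lgamma(beta * k + 1))
        sign = -1.0 if (z < 0 and k % 2 == 1) else 1.0
        total += sign * magnitude
    return total

# Decay envelope of Lemma 4: V(ell) <= V(0) * E_beta(-delta * ell**beta)
beta, delta, V0 = 0.95, 0.8, 1.0
for ell in [0, 2, 4, 6, 8, 10]:
    print(ell, V0 * mittag_leffler(beta, -delta * ell ** beta))
```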

3. Main Results

We consider the following discrete-time fractional-order complex-valued neural network with discrete and distributed delays:
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds + I_{\psi}, \quad \ell\in\mathbb{N}_{1},$$
with initial condition $\gamma_{\psi}(\ell) = \Phi_{\psi}(\ell)$ for $\ell \le 0$, $\psi = 1, 2, \ldots, m$,
where ${}^{C}\nabla_{0}^{\beta}$ denotes the Caputo fractional difference operator of order $\beta$ $(0 < \beta < 1)$; $\gamma(\ell) = [\gamma_{1}(\ell),\ \gamma_{2}(\ell),\ \ldots,\ \gamma_{m}(\ell)]^{T} \in \mathbb{C}^{m}$ denotes the state vector; $f(\gamma(\ell)) = [f_{1}(\gamma_{1}(\ell)),\ f_{2}(\gamma_{2}(\ell)),\ \ldots,\ f_{m}(\gamma_{m}(\ell))]^{T}: \mathbb{C}^{m}\to\mathbb{C}^{m}$ is the vector of activation functions; $c_{\psi}$, $a_{\psi\phi}$, $b_{\psi\phi}$, $d_{\psi\phi}$ are the connection weights; $K_{\psi\phi}(\cdot)$ are the delay kernels; $I_{\psi}$ are the external inputs; and $\omega$ is the transmission delay.
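To make the model concrete, the sketch below simulates a scalar toy version of a nabla Caputo fractional difference equation via the sum representation $x(\ell) = x(0) + \frac{1}{\Gamma(\beta)}\sum_{s=1}^{\ell}(\ell-\rho(s))^{\overline{\beta-1}}\,g(s, x(s))$, which is implicit in the current state and is resolved here by a few fixed-point sweeps. This is an illustrative scheme under our own assumptions (no delays, no inputs, arbitrary parameters), not the numerical method used by the authors:

```python
import math

def rising_weight(ell, s, beta):
    # (ell - rho(s))^{overline{beta-1}} = Gamma(ell - s + beta) / Gamma(ell - s + 1)
    return math.gamma(ell - s + beta) / math.gamma(ell - s + 1)

def simulate_caputo_nabla(beta, g, x0, steps, picard_iters=5):
    """Iterate x(ell) = x(0) + (1/Gamma(beta)) * sum_{s=1}^{ell} w(ell,s) * g(s, x(s)),
    which is implicit in x(ell); a few fixed-point (Picard) sweeps resolve the
    current term. Illustrative only, assuming a contraction in the current term."""
    x = [x0]
    for ell in range(1, steps + 1):
        x_ell = x[ell - 1]                                   # predictor: previous state
        for _ in range(picard_iters):
            acc = sum(rising_weight(ell, s, beta) * g(s, x[s]) for s in range(1, ell))
            acc += rising_weight(ell, ell, beta) * g(ell, x_ell)   # current (implicit) term
            x_ell = x0 + acc / math.gamma(beta)
        x.append(x_ell)
    return x

# Toy scalar example resembling one node of (1) without delays or inputs:
#   C-nabla^beta x(ell) = -c * x(ell) + a * tanh(x(ell))
beta, c, a = 0.9, 0.5, 0.3
g = lambda s, x: -c * x + a * math.tanh(x)
traj = simulate_caputo_nabla(beta, g, x0=1.0, steps=20)
print([round(v, 4) for v in traj])
```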
The response system is designed as
$${}^{C}\nabla_{0}^{\beta}\delta_{\psi}(\ell) = -c_{\psi}\delta_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\delta_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\delta_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\delta_{\phi}(\ell-s)\big)\,ds + I_{\psi} + u_{\psi}(\ell), \quad \ell\in\mathbb{N}_{1},$$
with initial condition $\delta_{\psi}(\ell) = \tilde{\Phi}_{\psi}(\ell)$ for $\ell \le 0$,
where $\delta(\ell) = (\delta_{1}(\ell),\ \delta_{2}(\ell),\ \ldots,\ \delta_{m}(\ell))^{T} \in \mathbb{C}^{m}$ is the state of the response system and $u(\ell) = [u_{1}(\ell),\ u_{2}(\ell),\ \ldots,\ u_{m}(\ell)]^{T}$ is the control input.
Here, we study their solutions using Filippov regularization. Then, System (1) can be expressed as
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) \in -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}F\big[f_{\phi}(\gamma_{\phi}(\ell))\big] + \sum_{\phi=1}^{m}b_{\psi\phi}F\big[f_{\phi}(\gamma_{\phi}(\ell-\omega))\big] + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,F\big[f_{\phi}(\gamma_{\phi}(\ell-s))\big]\,ds + I_{\psi}, \quad \ell\in\mathbb{N}_{1},$$
with $\gamma_{\psi}(\ell) = \Phi_{\psi}(\ell)$ for $\ell \le 0$.
If there exist selections $p_{\phi}(\cdot) \in F[f_{\phi}(\cdot)]$, then
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}\,p_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}\,p_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds + I_{\psi}, \quad \ell\in\mathbb{N}_{1}.$$
Similarly, from System (2), we have
$${}^{C}\nabla_{0}^{\beta}\delta_{\psi}(\ell) = -c_{\psi}\delta_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}\,p_{\phi}\big(\delta_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}\,p_{\phi}\big(\delta_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}\big(\delta_{\phi}(\ell-s)\big)\,ds + I_{\psi} + u_{\psi}(\ell), \quad \ell\in\mathbb{N}_{1}.$$
Assumptions:
$(H_1)$: Let $\gamma_{\phi}(\ell) = \xi_{\phi}(\ell) + i\chi_{\phi}(\ell)$ and $\delta_{\phi}(\ell) = \acute{\xi}_{\phi}(\ell) + i\acute{\chi}_{\phi}(\ell)$; then the selections can be separated into real and imaginary parts as
$$p_{\phi}\big(\gamma_{\phi}\big) = p_{\phi}^{R}\big(\xi_{\phi},\ \chi_{\phi}\big) + i\,p_{\phi}^{I}\big(\xi_{\phi},\ \chi_{\phi}\big).$$
$(H_2)$: There exist positive constants $\lambda_{\phi}^{RR}$, $\lambda_{\phi}^{RI}$, $\lambda_{\phi}^{IR}$, $\lambda_{\phi}^{II}$ such that, for all $\xi_{1}, \xi_{2}, \chi_{1}, \chi_{2} \in \mathbb{R}$,
$$\big|p_{\phi}^{R}(\xi_{1},\chi_{1}) - p_{\phi}^{R}(\xi_{2},\chi_{2})\big| \le \lambda_{\phi}^{RR}\,|\xi_{1}-\xi_{2}| + \lambda_{\phi}^{RI}\,|\chi_{1}-\chi_{2}|,$$
$$\big|p_{\phi}^{I}(\xi_{1},\chi_{1}) - p_{\phi}^{I}(\xi_{2},\chi_{2})\big| \le \lambda_{\phi}^{IR}\,|\xi_{1}-\xi_{2}| + \lambda_{\phi}^{II}\,|\chi_{1}-\chi_{2}|.$$
From (H1) and (H2), Systems (4) and (5) can be expressed as
$${}^{C}\nabla_{0}^{\beta}\xi_{\psi}(\ell) = -c_{\psi}\xi_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}^{R}\,p_{\phi}^{R}\big(\xi_{\phi}(\ell),\chi_{\phi}(\ell)\big) - \sum_{\phi=1}^{m}a_{\psi\phi}^{I}\,p_{\phi}^{I}\big(\xi_{\phi}(\ell),\chi_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}^{R}\,p_{\phi}^{R}\big(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)\big) - \sum_{\phi=1}^{m}b_{\psi\phi}^{I}\,p_{\phi}^{I}\big(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}^{R}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}^{R}\big(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)\big)\,ds - \sum_{\phi=1}^{m}d_{\psi\phi}^{I}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}^{I}\big(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)\big)\,ds,$$
$${}^{C}\nabla_{0}^{\beta}\chi_{\psi}(\ell) = -c_{\psi}\chi_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}^{R}\,p_{\phi}^{I}\big(\xi_{\phi}(\ell),\chi_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}a_{\psi\phi}^{I}\,p_{\phi}^{R}\big(\xi_{\phi}(\ell),\chi_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}^{R}\,p_{\phi}^{I}\big(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}^{I}\,p_{\phi}^{R}\big(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}^{R}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}^{I}\big(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)\big)\,ds + \sum_{\phi=1}^{m}d_{\psi\phi}^{I}\int_{0}^{+\infty}K_{\psi\phi}(s)\,p_{\phi}^{R}\big(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)\big)\,ds,$$
and, analogously, the response system (5) decomposes into two equations of the same form in the variables $\big(\acute{\xi}_{\psi},\ \acute{\chi}_{\psi}\big)$.
We define $e_{\psi}^{R}(\ell) = \xi_{\psi}(\ell) - \acute{\xi}_{\psi}(\ell)$ and $e_{\psi}^{I}(\ell) = \chi_{\psi}(\ell) - \acute{\chi}_{\psi}(\ell)$ as the synchronization errors and let $u_{\psi}(\ell) = 0$. Then, the error system is given by
$${}^{C}\nabla_{0}^{\beta}e_{\psi}^{R}(\ell) = -c_{\psi}e_{\psi}^{R}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}^{R}\big[p_{\phi}^{R}(\xi_{\phi}(\ell),\chi_{\phi}(\ell)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell),\acute{\chi}_{\phi}(\ell))\big] - \sum_{\phi=1}^{m}a_{\psi\phi}^{I}\big[p_{\phi}^{I}(\xi_{\phi}(\ell),\chi_{\phi}(\ell)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell),\acute{\chi}_{\phi}(\ell))\big] + \sum_{\phi=1}^{m}b_{\psi\phi}^{R}\big[p_{\phi}^{R}(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell-\omega),\acute{\chi}_{\phi}(\ell-\omega))\big] - \sum_{\phi=1}^{m}b_{\psi\phi}^{I}\big[p_{\phi}^{I}(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell-\omega),\acute{\chi}_{\phi}(\ell-\omega))\big] + \sum_{\phi=1}^{m}d_{\psi\phi}^{R}\int_{0}^{+\infty}K_{\psi\phi}(s)\big[p_{\phi}^{R}(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell-s),\acute{\chi}_{\phi}(\ell-s))\big]\,ds - \sum_{\phi=1}^{m}d_{\psi\phi}^{I}\int_{0}^{+\infty}K_{\psi\phi}(s)\big[p_{\phi}^{I}(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell-s),\acute{\chi}_{\phi}(\ell-s))\big]\,ds,$$
$${}^{C}\nabla_{0}^{\beta}e_{\psi}^{I}(\ell) = -c_{\psi}e_{\psi}^{I}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}^{R}\big[p_{\phi}^{I}(\xi_{\phi}(\ell),\chi_{\phi}(\ell)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell),\acute{\chi}_{\phi}(\ell))\big] + \sum_{\phi=1}^{m}a_{\psi\phi}^{I}\big[p_{\phi}^{R}(\xi_{\phi}(\ell),\chi_{\phi}(\ell)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell),\acute{\chi}_{\phi}(\ell))\big] + \sum_{\phi=1}^{m}b_{\psi\phi}^{R}\big[p_{\phi}^{I}(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell-\omega),\acute{\chi}_{\phi}(\ell-\omega))\big] + \sum_{\phi=1}^{m}b_{\psi\phi}^{I}\big[p_{\phi}^{R}(\xi_{\phi}(\ell-\omega),\chi_{\phi}(\ell-\omega)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell-\omega),\acute{\chi}_{\phi}(\ell-\omega))\big] + \sum_{\phi=1}^{m}d_{\psi\phi}^{R}\int_{0}^{+\infty}K_{\psi\phi}(s)\big[p_{\phi}^{I}(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)) - p_{\phi}^{I}(\acute{\xi}_{\phi}(\ell-s),\acute{\chi}_{\phi}(\ell-s))\big]\,ds + \sum_{\phi=1}^{m}d_{\psi\phi}^{I}\int_{0}^{+\infty}K_{\psi\phi}(s)\big[p_{\phi}^{R}(\xi_{\phi}(\ell-s),\chi_{\phi}(\ell-s)) - p_{\phi}^{R}(\acute{\xi}_{\phi}(\ell-s),\acute{\chi}_{\phi}(\ell-s))\big]\,ds.$$
Theorem 1.
Under Assumptions (H1) and (H2), if $\theta_{1} > 2\theta_{2}$ and $\theta_{2} > 0$, then Systems (10) and (11) are globally asymptotically stable.
Proof. 
We construct a Lyapunov functional
$$V(\ell) = \sum_{\psi=1}^{m}\big[|e_{\psi}^{R}(\ell)| + |e_{\psi}^{I}(\ell)|\big].$$
In light of Lemma 1, we can calculate the fractional difference of $V(\ell)$:
$${}^{C}\nabla_{0}^{\beta}V(\ell) \le \sum_{\psi=1}^{m}{}^{C}\nabla_{0}^{\beta}\big[|e_{\psi}^{R}(\ell)| + |e_{\psi}^{I}(\ell)|\big] \le \sum_{\psi=1}^{m}\operatorname{sign}\big(e_{\psi}^{R}(\ell)\big)\big[\text{right-hand side of (10)}\big] + \sum_{\psi=1}^{m}\operatorname{sign}\big(e_{\psi}^{I}(\ell)\big)\big[\text{right-hand side of (11)}\big].$$
Applying Assumptions (H1) and (H2) to the activation differences in (10) and (11), we obtain
$${}^{C}\nabla_{0}^{\beta}V(\ell) \le \sum_{\psi=1}^{m}\Big\{-c_{\psi}|e_{\psi}^{R}(\ell)| + \sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell)|\big] + \sum_{\phi=1}^{m}|a_{\psi\phi}^{I}|\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell)|\big] + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell-\omega)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell-\omega)|\big] + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell-\omega)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell-\omega)|\big] + \sum_{\phi=1}^{m}|d_{\psi\phi}^{R}|\int_{0}^{+\infty}K_{\psi\phi}(s)\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell-s)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell-s)|\big]ds + \sum_{\phi=1}^{m}|d_{\psi\phi}^{I}|\int_{0}^{+\infty}K_{\psi\phi}(s)\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell-s)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell-s)|\big]ds\Big\}$$
$$+ \sum_{\psi=1}^{m}\Big\{-c_{\psi}|e_{\psi}^{I}(\ell)| + \sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell)|\big] + \sum_{\phi=1}^{m}|a_{\psi\phi}^{I}|\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell)|\big] + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell-\omega)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell-\omega)|\big] + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell-\omega)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell-\omega)|\big] + \sum_{\phi=1}^{m}|d_{\psi\phi}^{R}|\int_{0}^{+\infty}K_{\psi\phi}(s)\big[\lambda_{\phi}^{IR}|e_{\phi}^{R}(\ell-s)| + \lambda_{\phi}^{II}|e_{\phi}^{I}(\ell-s)|\big]ds + \sum_{\phi=1}^{m}|d_{\psi\phi}^{I}|\int_{0}^{+\infty}K_{\psi\phi}(s)\big[\lambda_{\phi}^{RR}|e_{\phi}^{R}(\ell-s)| + \lambda_{\phi}^{RI}|e_{\phi}^{I}(\ell-s)|\big]ds\Big\}.$$
Expanding the bracketed terms, bounding the distributed-delay integrals by means of the delay kernels $K_{\psi\phi}$, and regrouping the coefficients of $|e_{\phi}^{R}(\ell)|$, $|e_{\phi}^{I}(\ell)|$, $|e_{\phi}^{R}(\ell-\omega)|$, and $|e_{\phi}^{I}(\ell-\omega)|$ yields
$${}^{C}\nabla_{0}^{\beta}V(\ell) \le -\theta_{\min}^{(1)}\sum_{\psi=1}^{m}|e_{\psi}^{R}(\ell)| - \theta_{\min}^{(2)}\sum_{\psi=1}^{m}|e_{\psi}^{I}(\ell)| + \theta_{\max}^{(1)}\sum_{\psi=1}^{m}|e_{\psi}^{R}(\ell-\omega)| + \theta_{\max}^{(2)}\sum_{\psi=1}^{m}|e_{\psi}^{I}(\ell-\omega)| = -\theta_{1}V(\ell) + \theta_{2}V(\ell-\omega),$$
where
$$\theta_{\min}^{(1)} = \min_{1\le\psi\le m}\Big\{c_{\psi} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{RR} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{IR} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{RR} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{IR} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{IR} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{RR} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{IR} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{RR}\Big\},$$
$$\theta_{\min}^{(2)} = \min_{1\le\psi\le m}\Big\{c_{\psi} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{RI} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{II} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{RI} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{II} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{II} - \sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{RI} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{II} - \sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{RI}\Big\},$$
$$\theta_{\max}^{(1)} = \sum_{\psi=1}^{m}\Big[\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR}\Big],$$
$$\theta_{\max}^{(2)} = \sum_{\psi=1}^{m}\Big[\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{II} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{II} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RI}\Big].$$
Since $\theta_{1} > 2\theta_{2}$ and $\theta_{2} > 0$, Lemma 3 yields $\lim_{\ell\to+\infty}V(\ell) = 0$.
Consequently, Systems (10) and (11) are globally asymptotically stable. □
Theorem 2.
Under Assumptions (H1) and (H2), if $\Theta_{1} > 2\Theta_{2}$ and $\Theta_{2} > 0$, then Systems (10) and (11) are globally asymptotically stable.
Proof. 
Consider the auxiliary function
$$V(\ell) = \sum_{\psi=1}^{m}\frac{1}{2}\big(e_{\psi}^{R}(\ell)\big)^{2} + \sum_{\psi=1}^{m}\frac{1}{2}\big(e_{\psi}^{I}(\ell)\big)^{2}.$$
In light of Lemma 2, calculating the fractional difference of $V(\ell)$ along (10) and (11), we have
$${}^{C}\nabla_{0}^{\beta}V(\ell) \le \sum_{\psi=1}^{m}e_{\psi}^{R}(\ell)\,{}^{C}\nabla_{0}^{\beta}e_{\psi}^{R}(\ell) + \sum_{\psi=1}^{m}e_{\psi}^{I}(\ell)\,{}^{C}\nabla_{0}^{\beta}e_{\psi}^{I}(\ell).$$
Substituting (10) and (11), applying Assumptions (H1) and (H2), and using the elementary inequality $xy \le \tfrac{1}{2}(x^{2}+y^{2})$ on each cross term, every product such as $|a_{\psi\phi}^{R}|\,\lambda_{\phi}^{RR}\,|e_{\psi}^{R}(\ell)|\,|e_{\phi}^{R}(\ell)|$ is bounded by $\tfrac{1}{2}|a_{\psi\phi}^{R}|\,\lambda_{\phi}^{RR}\big[(e_{\psi}^{R}(\ell))^{2} + (e_{\phi}^{R}(\ell))^{2}\big]$, and similarly for the delayed and distributed-delay terms. Regrouping the coefficients of $(e_{\psi}^{R}(\ell))^{2}$, $(e_{\psi}^{I}(\ell))^{2}$, $(e_{\psi}^{R}(\ell-\omega))^{2}$, and $(e_{\psi}^{I}(\ell-\omega))^{2}$ then gives
$${}^{C}\nabla_{0}^{\beta}V(\ell) \le -\Theta_{\min}^{(1)}\,\frac{1}{2}\sum_{\psi=1}^{m}\big(e_{\psi}^{R}(\ell)\big)^{2} - \Theta_{\min}^{(2)}\,\frac{1}{2}\sum_{\psi=1}^{m}\big(e_{\psi}^{I}(\ell)\big)^{2} + \Theta_{\max}^{(1)}\,\frac{1}{2}\sum_{\psi=1}^{m}\big(e_{\psi}^{R}(\ell-\omega)\big)^{2} + \Theta_{\max}^{(2)}\,\frac{1}{2}\sum_{\psi=1}^{m}\big(e_{\psi}^{I}(\ell-\omega)\big)^{2} = -\Theta_{1}V(\ell) + \Theta_{2}V(\ell-\omega),$$
where
$$\Theta_{\min}^{(1)} = \min_{1\le\psi\le m}\Big\{c_{\psi} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{I}|\lambda_{\phi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{R}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{R}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\phi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{I}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\lambda_{\phi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{RR}\Big\},$$
$$\Theta_{\min}^{(2)} = \min_{1\le\psi\le m}\Big\{c_{\psi} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\lambda_{\phi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{R}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\psi\phi}^{I}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\psi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{IR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{R}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{R}|\lambda_{\psi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{I}|\lambda_{\phi}^{RR} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\psi\phi}^{I}|\lambda_{\phi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{R}|\lambda_{\psi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|a_{\phi\psi}^{I}|\lambda_{\phi}^{II} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{RI} - \tfrac{1}{2}\sum_{\phi=1}^{m}|d_{\phi\psi}^{I}|\lambda_{\psi}^{II}\Big\},$$
$$\Theta_{\max}^{(1)} = \sum_{\psi=1}^{m}\Big[\sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RR}\Big],$$
$$\Theta_{\max}^{(2)} = \sum_{\psi=1}^{m}\Big[\sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{IR} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{II} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{R}|\lambda_{\phi}^{II} + \sum_{\phi=1}^{m}|b_{\psi\phi}^{I}|\lambda_{\phi}^{RI}\Big].$$
Since $\Theta_{1} > 2\Theta_{2}$ and $\Theta_{2} > 0$, Lemma 3 yields $\lim_{\ell\to+\infty}V(\ell) = 0$.
Consequently, Systems (10) and (11) are globally asymptotically stable. □
Remark 1.
Consider the master system
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds + I_{\psi}, \quad \ell\in\mathbb{N}_{1}.$$
Consider the slave system
$${}^{C}\nabla_{0}^{\beta}\tilde{\gamma}_{\psi}(\ell) = -c_{\psi}\tilde{\gamma}_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\tilde{\gamma}_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\tilde{\gamma}_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\tilde{\gamma}_{\phi}(\ell-s)\big)\,ds + I_{\psi} + u_{\psi}(\ell), \quad \ell\in\mathbb{N}_{1}.$$
The error system is defined as
$${}^{C}\nabla_{0}^{\beta}e_{\psi}(\ell) = -c_{\psi}e_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(e_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(e_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(e_{\phi}(\ell-s)\big)\,ds + u_{\psi}(\ell), \quad \ell\in\mathbb{N}_{1},$$
where $e_{\psi}(\ell) = \tilde{\gamma}_{\psi}(\ell) - \gamma_{\psi}(\ell)$ and $f_{\phi}\big(e_{\phi}(\cdot)\big) := f_{\phi}\big(\tilde{\gamma}_{\phi}(\cdot)\big) - f_{\phi}\big(\gamma_{\phi}(\cdot)\big)$.
Theorem 3.
Suppose that Assumptions (H1) and (H2) hold, that the activation functions are bounded, and that $u_{\psi}(\ell) = 0$. If
$$0 < \rho = \rho_{1} - \rho_{2} < 1,$$
where
$$\rho_{1} = \min_{1\le\psi\le m}\Big(c_{\psi} - \sum_{\phi=1}^{m}|a_{\phi\psi}|\,L_{\psi} - \sum_{\phi=1}^{m}|d_{\phi\psi}|\,L_{\psi}\Big), \qquad \rho_{2} = \max_{1\le\psi\le m}\sum_{\phi=1}^{m}|b_{\psi\phi}|\,L_{\phi} > 0,$$
then the error system (22) is globally Mittag–Leffler stable.
Proof. 
Let us consider the Lyapunov functional
$$V\big(\ell,\ e(\ell)\big) = \sum_{\psi=1}^{m}|e_{\psi}(\ell)|.$$
By calculating the nabla Caputo left fractional difference of $V(\ell, e(\ell))$ along the trajectories of the error system (22), we obtain
$${}^{C}\nabla_{0}^{\beta}V\big(\ell, e(\ell)\big) \le \sum_{\psi=1}^{m}\operatorname{sign}\big(e_{\psi}(\ell)\big)\Big\{-c_{\psi}e_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(e_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(e_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(e_{\phi}(\ell-s)\big)\,ds\Big\}$$
$$\le \sum_{\psi=1}^{m}\Big\{-c_{\psi}|e_{\psi}(\ell)| + \sum_{\phi=1}^{m}|a_{\psi\phi}|\,\big|f_{\phi}(e_{\phi}(\ell))\big| + \sum_{\phi=1}^{m}|b_{\psi\phi}|\,\big|f_{\phi}(e_{\phi}(\ell-\omega))\big| + \sum_{\phi=1}^{m}|d_{\psi\phi}|\int_{0}^{+\infty}K_{\psi\phi}(s)\,\big|f_{\phi}(e_{\phi}(\ell-s))\big|\,ds\Big\}$$
$$\le \sum_{\psi=1}^{m}\Big\{-c_{\psi}|e_{\psi}(\ell)| + \sum_{\phi=1}^{m}|a_{\psi\phi}|\,L_{\phi}\,|e_{\phi}(\ell)| + \sum_{\phi=1}^{m}|d_{\psi\phi}|\,L_{\phi}\,|e_{\phi}(\ell)|\Big\} + \sum_{\psi=1}^{m}\sum_{\phi=1}^{m}|b_{\psi\phi}|\,L_{\phi}\,|e_{\phi}(\ell-\omega)|$$
$$\le -\rho_{1}V\big(\ell, e(\ell)\big) + \rho_{2}\sup_{\ell-\omega\le s\le\ell}V\big(s, e(s)\big)$$
for any solution $e(\ell)$ of the error system (22) that satisfies the Razumikhin condition. Hence, on the basis of the Razumikhin technique, one has
$$\sup_{\ell-\omega\le s\le\ell}V\big(s, e(s)\big) \le V\big(\ell, e(\ell)\big).$$
Next, based on Systems (22) and (23), assume that there exists a constant  Δ > 0 . One can then obtain
$$D^{\beta}V\big(\ell, e(\ell)\big) \le -(\rho_{1} - \rho_{2})\,V\big(\ell, e(\ell)\big), \qquad \rho_{1} - \rho_{2} \ge \Delta,$$
and from (24), one observes that
$$D^{\beta}V\big(\ell, e(\ell)\big) \le -\Delta\,V\big(\ell, e(\ell)\big),$$
Then, from (25) and Lemma 4, one has
$$V\big(\ell, e(\ell)\big) \le V(0)\,E_{\beta}\big(-\Delta\,\ell^{\beta}\big), \qquad \ell\in[0,\ +\infty).$$
Therefore, one concludes that
$$\|e(\ell)\| = \|\tilde{\gamma}(\ell) - \gamma(\ell)\| = \sum_{\psi=1}^{m}\big|\tilde{\gamma}_{\psi}(\ell) - \gamma_{\psi}(\ell)\big| \le \big\|\tilde{\gamma}(0) - \gamma(0)\big\|\,E_{\beta}\big(-\Delta\,\ell^{\beta}\big).$$
According to Definition 3, the error system (22) is globally Mittag–Leffler stable; that is, the drive system (20) achieves global Mittag–Leffler synchronization with the response system (21). This completes the proof of Theorem 3. □

4. Numerical Examples

Numerical examples are provided to demonstrate the validity of the results in this section.
Example 1.
Consider the following discrete-time fractional-order complex-valued neural networks:
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds.$$
Suppose  β = 0.95  and  ω = 0.2  and suppose the parameters and the function are defined by
$$C = \begin{pmatrix} 16 & 8 \\ 4 & 16 \end{pmatrix}, \quad A = \begin{pmatrix} 0.4-0.2i & 0.2+0i \\ 0.2-0.5i & 0.4+0.2i \end{pmatrix}, \quad B = \begin{pmatrix} 0.7-0.7i & 0.4+0.4i \\ 0.3+0.9i & 0.4+0.1i \end{pmatrix}, \quad D = \begin{pmatrix} 0.6-0.8i & 0.3+0.4i \\ 0.2+0.3i & 0.7+0.7i \end{pmatrix}.$$
$$U^{R} = \begin{pmatrix} 0.1\tan(\ell) \\ 0.4\cot(\ell) \end{pmatrix}, \qquad U^{I} = \begin{pmatrix} 0.3\tan(\ell) \\ 0.7\cot(\ell) \end{pmatrix}.$$
Assumptions (H1) and (H2) are satisfied with $\lambda_{\phi}^{RR} = \lambda_{\phi}^{RI} = \lambda_{\phi}^{IR} = \lambda_{\phi}^{II} = 1$, $c_{1} = 28.95$, $|a_{11}^{R}| = 0.75$, $|a_{11}^{I}| = 0.97$, $|b_{11}^{R}| = 0.95$, $|b_{11}^{I}| = 0.87$, $|d_{11}^{I}| = 0.28$, and $|d_{11}^{R}| = 0.29$.
From Theorem 1, $\theta_{\min}^{(1)} = 24.3700$, $\theta_{\min}^{(2)} = 24.3700$, $\theta_{\max}^{(1)} = 3.56$, $\theta_{\max}^{(2)} = 3.64$, and $\theta_{1} = 48.7400$, $\theta_{2} = 7.2000$.
For these values, $\theta_{1} = 48.7400 > 2\theta_{2} = 14.4000$ and $\theta_{2} > 0$. Hence, it follows from Theorem 1 that the system can achieve global asymptotic synchronization.
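A two-line sanity check of the criterion, using the numbers reported above (the reported θ1 and θ2 coincide with the sums of the corresponding θ_min and θ_max values; treating them that way is our observation, not a formula from the paper):

```python
# Quick check of the Lemma 3 condition behind Theorem 1, with the values of Example 1.
theta_min = (24.37, 24.37)   # theta_min^(1), theta_min^(2)
theta_max = (3.56, 3.64)     # theta_max^(1), theta_max^(2)
theta1, theta2 = sum(theta_min), sum(theta_max)    # 48.74 and 7.20, as reported
print(theta1 > 2 * theta2 and theta2 > 0)          # True -> synchronization criterion holds
```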
Example 2.
Consider the following discrete-time fractional-order complex-valued neural networks:
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds,$$
where $\beta = 0.56$, $\gamma_{\psi}(\ell) = \gamma_{\psi}^{R}(\ell) + \gamma_{\psi}^{I}(\ell)i$ with $\gamma_{\psi}^{R}(\ell), \gamma_{\psi}^{I}(\ell) \in \mathbb{R}$, $f_{\psi}(\gamma_{\psi}) = 0.3\tanh(\gamma_{\psi}^{R}) + 0.3\tanh(\gamma_{\psi}^{I})i$, $g_{\psi}(\gamma_{\psi}) = 0.65\tanh(\gamma_{\psi}^{R}) + 0.65\tanh(\gamma_{\psi}^{I})i$, $\omega = 3$, and
$$C = \begin{pmatrix} 0.9 & 0 & 0 \\ 0 & 0.9 & 0 \\ 0 & 0 & 0.9 \end{pmatrix}, \quad A = \begin{pmatrix} 0.1+0.97i & 0.6+0.3i & 0.5+0.2i \\ 0.9+0.2i & 0.2+0.7i & 0.6+0.5i \\ 0.5+0.1i & 0.6+0.5i & 0.6+0.7i \end{pmatrix}, \quad B = \begin{pmatrix} 0.2+0.87i & 5.56+3.45i & 3.56+5.45i \\ 0.56+2.45i & 4.56+2.45i & 0.56+6.45i \\ 1.56+0.35i & 2.56+4.45i & 0.56+8.45i \end{pmatrix}, \quad D = \begin{pmatrix} 0.3+0.29i & 3.56+4.45i & 4.56+2.45i \\ 0.56+2.55i & 2.56+5.45i & 1.56+3.45i \\ 3.56+0.45i & 1.56+4.35i & 4.56+3.45i \end{pmatrix}.$$
Assumptions (H1) and (H2) are satisfied with $\lambda_{\phi}^{RR} = \lambda_{\phi}^{RI} = \lambda_{\phi}^{IR} = \lambda_{\phi}^{II} = 1$, $c_{1} = 29.1456$, $|a_{11}^{R}| = 0.74$, $|a_{11}^{I}| = 0.96$, $|b_{11}^{R}| = 0.94$, $|b_{11}^{I}| = 0.86$, $|d_{11}^{I}| = 0.27$, and $|d_{11}^{R}| = 0.27$.
From Theorem 2,  Θ min ( 1 ) = 16.44 ,   Θ min ( 2 ) = 15.2956 ,   Θ max ( 1 ) = 3.52 ,   Θ max ( 2 ) = 3.20 ,    and  Θ 1 = 31.7356 Θ 2 = 6.72 .
For these values, $\Theta_{1} > 2\Theta_{2}$ and $\Theta_{2} > 0$. Hence, it follows from Theorem 2 that this system can achieve global asymptotic synchronization. Refer to Figure 1 and Figure 2 for a graphical representation of the obtained result.
Example 3.
Consider the following discrete-time fractional order complex-valued neural networks:
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds.$$
Suppose $\beta = 0.98$ and $\omega = 0.2$, and suppose the parameters and functions are defined by $\lambda_{\phi}^{RR} = \lambda_{\phi}^{RI} = \lambda_{\phi}^{IR} = \lambda_{\phi}^{II} = 1$, $c_{1} = 19.95$, $|a_{11}^{R}| = 0.72$, $|a_{11}^{I}| = 0.77$, $|b_{11}^{R}| = 0.94$, $|b_{11}^{I}| = 0.27$, $|d_{11}^{I}| = 0.23$, $|d_{11}^{R}| = 0.69$, and
$$C = \begin{pmatrix} 16 & 8 \\ 4 & 16 \end{pmatrix}, \quad A = \begin{pmatrix} 0.4-0.2i & 0.2+0i \\ 0.2+0.5i & 0.4+0.2i \end{pmatrix}, \quad B = \begin{pmatrix} 0.7-0.7i & 0.4+0.4i \\ 0.3+0.9i & 0.4+0.1i \end{pmatrix}, \quad D = \begin{pmatrix} 0.6-0.8i & 0.3+0.4i \\ 0.2+0.3i & 0.7+0.7i \end{pmatrix},$$
$$U^{R} = \begin{pmatrix} 0.2\tan(\ell) \\ 0.5\cot(\ell) \end{pmatrix}, \qquad U^{I} = \begin{pmatrix} 0.3\tan(\ell) \\ 0.7\cot(\ell) \end{pmatrix}.$$
Then $\Theta_{\min}^{(1)} = 7.8300$, $\Theta_{\min}^{(2)} = 8.35$, $\Theta_{\max}^{(1)} = 2.75$, $\Theta_{\max}^{(2)} = 2.48$, and $\Theta_{1} = 16.1800$, $\Theta_{2} = 5.23$.
For these values, $\Theta_{1} = 16.1800 > 2\Theta_{2} = 10.4600$ and $\Theta_{2} > 0$. Hence, it follows from Theorem 2 that this system can achieve global asymptotic synchronization.
Example 4.
Consider the discrete-time fractional-order complex-valued neural network
$${}^{C}\nabla_{0}^{\beta}\gamma_{\psi}(\ell) = -c_{\psi}\gamma_{\psi}(\ell) + \sum_{\phi=1}^{m}a_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell)\big) + \sum_{\phi=1}^{m}b_{\psi\phi}f_{\phi}\big(\gamma_{\phi}(\ell-\omega)\big) + \sum_{\phi=1}^{m}d_{\psi\phi}\int_{0}^{+\infty}K_{\psi\phi}(s)\,f_{\phi}\big(\gamma_{\phi}(\ell-s)\big)\,ds.$$
Suppose  β = 0.98  and  ω = 0.2  and suppose the parameters and the function are defined by
$$C = \begin{pmatrix} 0.9 & 0 & 0 \\ 0 & 0.9 & 0 \\ 0 & 0 & 0.9 \end{pmatrix}, \quad A = \begin{pmatrix} 0.1+0.97i & 0.6+0.3i & 0.5+0.2i \\ 0.9+0.2i & 0.2+0.7i & 0.6+0.5i \\ 0.5+0.1i & 0.6+0.5i & 0.6+0.7i \end{pmatrix}, \quad B = \begin{pmatrix} 0.2+0.87i & 5.56+3.45i & 3.56+5.45i \\ 0.56+2.45i & 4.56+2.45i & 0.56+6.45i \\ 1.56+0.35i & 2.56+4.45i & 0.56+8.45i \end{pmatrix}, \quad D = \begin{pmatrix} 0.3+0.29i & 3.56+4.45i & 4.56+2.45i \\ 0.56+2.55i & 2.56+5.45i & 1.56+3.45i \\ 3.56+0.45i & 1.56+4.35i & 4.56+3.45i \end{pmatrix}.$$
Assumptions (H1) and (H2) are satisfied with $c_{1} = 2.14$, $a_{11} = 0.94$, $d_{11} = 0.92$, $b_{11} = 0.87$, and $L_{1} = 1$.
By Theorem 3, we find that  ρ 1 = 1.28 > 0 ,   ρ 2 = 0.87 > 0 .
Hence,  0 < ρ 1 ρ 2 = 0.41 < 1 ,    and Theorem 3 holds. Therefore, (31) is globally Mittag–Leffler stable.
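As a quick cross-check of conditions of this type, $\rho_{1}$ and $\rho_{2}$ can also be evaluated directly from the network data using the expressions in Theorem 3. The helper below is our own sketch and uses hypothetical 2-node parameters, not the values of this example:

```python
import numpy as np

def theorem3_rhos(c, A, B, D, L):
    """Evaluate rho_1 and rho_2 of Theorem 3 from the network data.
    c : vector of self-feedback coefficients c_psi
    A, B, D : (complex) connection weight matrices a_{psi,phi}, b_{psi,phi}, d_{psi,phi}
    L : vector of Lipschitz constants L_phi of the activation functions
    (This helper is our own; the paper only states the formulas.)"""
    c, L = np.asarray(c, dtype=float), np.asarray(L, dtype=float)
    absA, absB, absD = np.abs(A), np.abs(B), np.abs(D)
    # rho_1 = min_psi ( c_psi - sum_phi |a_{phi,psi}| L_psi - sum_phi |d_{phi,psi}| L_psi )
    rho1 = np.min(c - absA.sum(axis=0) * L - absD.sum(axis=0) * L)
    # rho_2 = max_psi sum_phi |b_{psi,phi}| L_phi
    rho2 = np.max(absB @ L)
    return rho1, rho2

# Hypothetical 2-node data (not the paper's example values):
c = [2.5, 2.5]
A = np.array([[0.4 - 0.2j, 0.2], [0.2 - 0.5j, 0.4 + 0.2j]])
B = np.array([[0.3, 0.1], [0.1, 0.2]])
D = np.array([[0.2, 0.1], [0.1, 0.2]])
L = [1.0, 1.0]
rho1, rho2 = theorem3_rhos(c, A, B, D, L)
print(rho1, rho2, 0 < rho1 - rho2 < 1)
```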
Remark 2.
Many scholars have discussed the uniform stability, global asymptotic stability, and finite-time stability of fractional-order CVNNs with time delays, e.g., Zhang et al. [30], Rakkiyappan et al. [21], Wang et al. [19], and Song et al. [22]. Most of these works assume that the activation functions of complex-valued systems can be separated into real and imaginary parts and thus transform the CVNNs into equivalent RVNNs in order to analyze their dynamic behavior. However, this method increases the dimension of the systems and complicates the analysis. Compared with the existing literature, the existence and finite-time stability criteria for discrete fractional-order CVNNs provided in this paper are valid and feasible regardless of whether the activation functions are separable.
Remark 3.
Many authors have studied the dynamical properties of discrete fractional difference equations in the real field. However, there are very few results on discrete fractional-order systems in the complex field. In contrast to the existing literature, we first investigated discrete fractional-order CVNNs and analyzed their dynamic behavior.
Remark 4.
In the aforementioned works, it is noted that only discrete constant delays are involved in the network models. Discrete delays alone, however, cannot fully characterize neural networks, since signal propagation is not instantaneous and is typically distributed over a period of time. Consequently, distributed delays should also be taken into account in the description of neural network models. In recent decades, many researchers have devoted great efforts to the dynamics of neural networks with both discrete and distributed delays, and some excellent results have been obtained. Notice that these works were mainly concerned with integer-order neural networks; research on fractional-order neural networks with discrete and distributed time delays has received little attention.

5. Conclusions

The synchronization of discrete-time fractional-order complex-valued neural networks with distributed delays is examined in this research. By constructing suitable Lyapunov functions, sufficient conditions are obtained. The resulting criteria are new and add to the existing global Mittag–Leffler synchronization results for fractional-order networks. Sufficient conditions guaranteeing the global stability and Mittag–Leffler synchronization of the proposed networks are derived from the theory of discrete fractional calculus, the discrete Laplace transform, the theory of complex functions, and discrete Mittag–Leffler functions. In future research, we will further study dynamical behaviors, such as projective synchronization and finite-time stability, of more sophisticated neural networks, including fractional-order coupled discontinuous neural networks with time-varying delays.

Author Contributions

Methodology, R.P. and M.H.; Software, T.F.I.; Validation, M.S.A.; Formal analysis, B.A.A.M.; Investigation, W.M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at King Khalid University under grant number RGP.2/141/44.

Data Availability Statement

There is no data associated with this study.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large groups (project under grant number RGP.2/141/44).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmeda, E.; Elgazzar, A. On fractional order differential equations model for nonlocal epidemics. Physica A 2007, 379, 607–614. [Google Scholar] [CrossRef] [PubMed]
  2. Moaddy, K.; Radwan, A.; Salama, K.; Momani, S.; Hashim, I. The fractional-order modeling and synchronization of electrically coupled neuron systems. Comput. Math. Appl. 2012, 64, 3329–3339. [Google Scholar] [CrossRef]
  3. Bhalekar, S.; Daftardar-Gejji, V. Synchronization of differential fractional-order chaotic systems using active control. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 3536–3546. [Google Scholar] [CrossRef]
  4. Narayanan, G.; Ali, M.S.; Karthikeyan, R.; Rajchakit, G.; Jirawattanapanit, A. Impulsive control strategies of mRNA and protein dynamics on fractional-order genetic regulatory networks with actuator saturation and its oscillations in repressilator model. Biomed. Process. Control 2023, 82, 104576. [Google Scholar] [CrossRef]
  5. Jmal, A.; Makhlouf, A.B.; Nagy, A.M. Finite-Time Stability for Caputo Katugampola Fractional-Order Time-Delayed Neural Networks. Neural Process Lett. 2019, 50, 607–621. [Google Scholar] [CrossRef]
  6. Li, H.L.; Jiang, H.; Cao, J. Global synchronization of fractional-order quaternion-valued neural networks with leakage and discrete delays. Neurocomputing 2020, 385, 211–219. [Google Scholar] [CrossRef]
  7. Chen, S.; An, Q.; Ye, Y.; Su, H. Positive consensus of fractional-order multi-agent systems. Neural Comput. Appl. 2021, 33, 16139–16148. [Google Scholar] [CrossRef]
  8. Chen, S.; An, Q.; Zhou, H.; Su, H. Observer-based consensus for fractional-order multi-agent systems with positive constraint Author links open overlay panel. Neurocomputing 2022, 501, 489–498. [Google Scholar] [CrossRef]
  9. Li, Z.Y.; Wei, Y.H.; Wang, J.; Li, A.; Wang, J.; Wang, Y. Fractional-order ADRC framework for fractional-order parallel systems. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 1813–1818. [Google Scholar]
  10. Wang, L. Symmetry and conserved quantities of Hamilton system with comfortable fractional derivatives. In Proceedings of the 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 3430–3436. [Google Scholar]
  11. Castañeda, C.E.; López-Mancilla, D.; Chiu, R.; Villafana-Rauda, E.; Orozco-López, O.; Casillas-Rodríguez, F.; Sevilla-Escoboza, R. Discrete-time neural synchronization between an arduino microcontroller and a compact development system using multiscroll chaotic signals. Chaos Solitons Fractals 2019, 119, 269–275. [Google Scholar] [CrossRef]
  12. Atici, F.M.; Eloe, P.W. Gronwalls inequality on discrete fractional calculus. Comput. Math. Appl. 2012, 64, 3193–3200. [Google Scholar] [CrossRef]
  13. Ostalczyk, P. Discrete Fractional Calculus: Applications in Control and Image Processing; World Scientific: Singapore, 2015. [Google Scholar]
  14. Ganji, M.; Gharari, F. The discrete delta and nabla Mittag-Leffler distributions. Commun. Stat. Theory Methods 2018, 47, 4568–4589. [Google Scholar] [CrossRef]
  15. Wyrwas, M.; Mozyrska, D.; Girejko, E. Stability of discrete fractional-order nonlinear systems with the nabla Caputo difference. IFAC Proc. Vol. 2013, 46, 167–171. [Google Scholar] [CrossRef]
  16. Gray, H.L.; Zhang, N.F. On a new definition of the fractional difference. Math. Comput. 1988, 50, 513–529. [Google Scholar] [CrossRef]
  17. Wu, G.C.; Baleanu, D.; Luo, W.H. Lyapunov functions for Riemann-Liouville-like fractional difference equations. Appl. Math. Comput. 2017, 314, 228–236. [Google Scholar] [CrossRef]
  18. Baleanu, D.; Wu, G.C.; Bai, Y.R.; Chen, F.L. Stability analysis of Caputolike discrete fractional systems. Commun. Nonlinear Sci. Numer. Simul. 2017, 48, 520–530. [Google Scholar] [CrossRef]
  19. Hu, J.; Wang, J. Global stability of complex-valued recurrent neural networks with time-delays. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 853–865. [Google Scholar] [CrossRef] [PubMed]
  20. Ozdemir, N.; Iskender, B.B.; Ozgur, N.Y. Complex valued neural network with Mobius activation function. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 4698–4703. [Google Scholar] [CrossRef]
  21. Rakkiyappan, R.; Cao, J.; Velmurugan, G. Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 84–97. [Google Scholar] [CrossRef]
  22. Song, Q.; Zhao, Z.; Liu, Y. Stability analysis of complex-valued neural networks with probabilistic time-varying delays. Neurocomputing 2015, 159, 96–104. [Google Scholar] [CrossRef]
  23. Pan, J.; Liu, X.; Xie, W. Exponential stability of a class of complex-valued neural networks with time-varying delays. Neurocomputing 2015, 164, 293–299. [Google Scholar] [CrossRef]
  24. Li, X.; Rakkiyappan, R.; Velmurugan, G. Dissipativity analysis of memristor-based complex-valued neural networks with time-varying delays. Inf. Sci. 2015, 294, 645–665. [Google Scholar] [CrossRef]
  25. Rakkiyappan, R.; Sivaranjani, K.; Velmurugan, G. Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays. Neurocomputing 2014, 144, 391–407. [Google Scholar] [CrossRef]
  26. Chen, L.P.; Chai, Y.; Wu, R.C.; Ma, T.D.; Zhai, H.Z. Dynamic analysis of a class of fractional-order neural networks with delay. Neurocomputing 2013, 111, 190–194. [Google Scholar] [CrossRef]
  27. Song, Q.; Yan, H.; Zhao, Z.; Liu, Y. Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Netw. 2016, 79, 108–116. [Google Scholar] [CrossRef]
  28. Syed Ali, M.; Yogambigai, J.; Kwon, O.M. Finite-time robust passive control for a class of switched reaction-diffusion stochastic complex dynamical networks with coupling delays and impulsive control. Int. J. Syst. Sci. 2018, 49, 718–735. [Google Scholar] [CrossRef]
  29. Zhou, B.; Song, Q. Boundedness and complete stability of complex valued neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1227–1238. [Google Scholar] [CrossRef]
  30. Zhang, Z.; Lin, C.; Chen, B. Global stability criterion for delayed complex-valued recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1704–1708. [Google Scholar] [CrossRef]
  31. Song, Q.; Zhao, Z. Stability criterion of complex-valued neural networks with both leakage delay and time-varying delays on time scales. Neurocomputing 2016, 171, 179–184. [Google Scholar] [CrossRef]
  32. Sakthivel, R.; Sakthivel, R.; Kwon, O.M.; Selvaraj, P.; Anthoni, S.M. Observer-based robust synchronization of fractional-order multi-weighted complex dynamical networks. Nonlinear Dyn. 2019, 98, 1231–1246. [Google Scholar] [CrossRef]
  33. Chen, X.; Song, Q. Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales. Neurocomputing 2013, 121, 254–264. [Google Scholar] [CrossRef]
  34. Zhang, Z.; Yu, S. Global asymptotic stability for a class of complex valued Cohen-Grossberg neural networks with time delays. Neurocomputing 2016, 171, 1158–1166. [Google Scholar] [CrossRef]
  35. Gong, W.; Liang, J.; Cao, J. Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays. Neural Netw. 2015, 70, 81–89. [Google Scholar] [CrossRef]
  36. Syed Ali, M.; Yogambigai, J.; Cao, J. Synchronization of master-slave Markovian switching complex dynamical networks with time-varying delays in nonlinear function via sliding mode control. Acta Math. Sci. 2017, 37, 368–384. [Google Scholar]
  37. Zhou, J.; Liu, Y.; Xia, J.; Wang, Z.; Arik, S. Resilient fault-tolerant antisynchronization for stochastic delayed reaction-diffusion neural networks with semi-Markov jump parameters. Neural Netw. 2020, 125, 194–204. [Google Scholar] [CrossRef] [PubMed]
  38. Syed Ali, M.; Yogambigai, J. Finite-time robust stochastic synchronization of uncertain Markovian complex dynamical networks with mixed time-varying delays and reaction-diffusion terms via impulsive control. J. Frankl. Inst. 2017, 354, 2415–2436. [Google Scholar]
  39. Narayanan, G.; Syed Ali, M.; Karthikeyan, R.; Rajchakit, G.; Jirawattanapanit, A. Novel adaptive strategies for synchronization mechanism in nonlinear dynamic fuzzy modeling of fractional-order genetic regulatory networks. Chaos Solitons Fractals 2022, 165, 112748. [Google Scholar] [CrossRef]
  40. Yogambigai, J.; Syed Ali, M. Exponential Synchronization of switched complex dynamical networks with time varying delay via periodically intermittent control. Int. J. Differ. Equ. 2017, 12, 41–53. [Google Scholar]
  41. Yogambigai, J.; Syed Ali, M. Finite-time and Sampled-data Synchronization of Delayed Markovian Jump Complex Dynamical Networks Based on Passive Theory. In Proceedings of the Third International Conference on Science Technology Engineering and Management (ICONSTEM), Chennai, India, 23–24 March 2017. [Google Scholar]
  42. Yang, L.X.; Jiang, J. Adaptive synchronization of driveresponse fractional-order complex dynamical networks with uncertain parameters. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 1496–1506. [Google Scholar] [CrossRef]
  43. Wong, W.K.; Li, H.; Leung, S.Y.S. Robust synchronization of fractional-order complex dynamical networks with parametric uncertainties. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4877–4890. [Google Scholar] [CrossRef]
  44. Bao, H.; Park, J.H.; Cao, J. Synchronization of fractional order complex-valued neural networks with time delay. Neural Netw. 2016, 81, 16–28. [Google Scholar] [CrossRef]
  45. Qi, D.L.; Liu, M.Q.; Qiu, M.K.; Zhang, S.L. Exponential H synchronization of general discrete-time chaotic neural networks with or without time delays. IEEE Trans. Neural Netw. Learn. Syst. 2010, 21, 1358–1365. [Google Scholar]
  46. Li, Z.Y.; Liu, H.; Lu, J.A.; Zeng, Z.G.; Lü, J. Synchronization regions of discrete-time dynamical networks with impulsive couplings. Inf. Sci. 2018, 459, 265–277. [Google Scholar] [CrossRef]
  47. You, X.; Song, Q.; Zhao, Z. Global Mittag-Leffler stability and synchronization of discrete-time fractional-order complex-valued neural networks with time delay. Neural Netw. 2020, 122, 382–394. [Google Scholar] [CrossRef]
  48. Atici, F.M.; Eloe, P.W. Discrete fractional calculus with the nabla operator. Electron. J. Qual. Theory Differ. Equ. 2009, 3, 1–12. [Google Scholar] [CrossRef]
  49. Mu, X.X.; Chen, Y.G. Synchronization of delayed discrete-time neural networks subject to saturated time-delay feedback. Neurocomputing 2016, 175, 293–299. [Google Scholar] [CrossRef]
  50. Liang, S.; Wu, R.C.; Chen, L.P. Comparison principles and stability of nonlinear fractional-order cellular neural networks with multiple time delays. Neurocomputing 2015, 168, 618–625. [Google Scholar] [CrossRef]
Figure 1. State trajectories of the FOCVNNs (29) with fractional order α = 0.45 in the real axis.
Figure 2. State trajectories of the FOCVNNs (29) with fractional order α = 0.45 in the imaginary axis.

