Article

Synchronization of Fractional-Order Uncertain Delayed Neural Networks with an Event-Triggered Communication Scheme

1 Department of Mathematics, Thiruvalluvar University, Vellore 632115, Tamilnadu, India
2 Department of Mathematics, Faculty of Sciences and Arts (Mahayel), King Khalid University, Abha 62529, Saudi Arabia
3 Mathematics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
4 Department of Mathematics, Faculty of Science and Arts in Zahran Alganoob, King Khalid University, Abha 62529, Saudi Arabia
5 Department of Mathematics, Faculty of Sciences and Arts in Sarat Abeda, King Khalid University, Abha 62529, Saudi Arabia
6 Department of Mathematics, Faculty of Sciences, Khon Kaen University, Khon Kaen 40002, Thailand
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(11), 641; https://doi.org/10.3390/fractalfract6110641
Submission received: 12 August 2022 / Revised: 26 October 2022 / Accepted: 27 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue Fractional-Order System: Control Theory and Applications)

Abstract: In this paper, the synchronization of fractional-order uncertain delayed neural networks with an event-triggered communication scheme is investigated. By establishing a suitable Lyapunov–Krasovskii functional (LKF) and inequality techniques, sufficient conditions are obtained under which the delayed neural networks are stable. The criteria are given in terms of linear matrix inequalities (LMIs). Based on the drive–response concept, the LMI approach, and the Lyapunov stability theorem, a controller is derived to achieve the synchronization. Finally, numerical examples are presented to confirm the effectiveness of the main results.

1. Introduction

Fractional calculus is a mathematical theory that has been studied and applied in different fields for the past 300 years. Compared with traditional integer-order systems, fractional-order (FO) derivatives provide an excellent tool for describing the memory and hereditary properties of various materials and processes, with applications in many areas, such as heat conduction, electronics, and anomalous diffusion [1,2]. As a result, fractional calculus has attracted increasing attention from physicists and engineers [3,4,5,6,7]. Moreover, fractional calculus has been applied to numerous neural network models [8,9]. Hence, research on fractional neural networks (NNs) is important for practical applications, and many important results on chaotic dynamics, stability analysis, stabilization, synchronization, dissipativity, and passivity have been reported [10,11,12,13,14,15,16]. This popularity stems from the fact that fractional calculus can incorporate memory when describing complex systems and gives a more precise characterization than the standard integer-order approach. A key distinction is that an FO derivative involves an infinite number of terms, whereas an integer-order derivative involves only a finite number. Consequently, integer-order derivatives are local operators, whereas the FO derivative carries the memory of all past events.
In the real world, there are different types of uncertainty that can degrade the performance of a system and affect its stability. These uncertainties may result from parameter variations and external disturbances. If a structural process is observed experimentally, it is not possible to assign precise values to the observed events; data uncertainty therefore arises, which may stem from scale-dependent effects that are not accounted for, creating inaccuracies in the estimations and incomplete sets of observations. The estimated results are thus affected, to a greater or lesser extent, by data uncertainty originating from imprecision. In addition, parameter uncertainties are unavoidable when modeling a neural network and can destabilize the results. It is known that a precise physical model of an engineering plant is difficult to build because of uncertainties and noise. In actual operation, due to external or internal uncertain disturbances, the system states are sometimes not fully accessible [17,18,19,20,21,22,23,24,25,26].
Generally speaking, an event-triggered control strategy is more appealing than the traditional time-triggered one from an economic perspective, since the control input is updated only when a predetermined triggering condition is met. Because the event-triggered approach reduces the information exchanged within a system, event-triggered synchronization and consensus for fractional-order systems have received increasing attention in recent years, and there has been significant research on event-triggered control (ETC) strategies [27,28,29]. Compared with time-driven consensus, event-triggered consensus is more realistic, and the event-triggered controller introduced in the field of networked control systems has the advantage of using limited communication resources efficiently. An event-triggered scheme (ETS) provides an effective way of determining when a sampling action should be carried out and when a packet should be transmitted; to deal with network congestion, the ETS has been proposed to improve data transmission efficiency. Over the past few years, event-triggered control has proved to be an efficient way to reduce the data transmitted over networks, relieving the burden on network bandwidth, and event-triggered strategies have therefore been employed to study networked systems [30,31,32].
In addition, in many practical applications, a system is expected to reach synchronization as quickly as possible. Synchronization is an important phenomenon in the real world, existing widely in practical systems as well as in nature, and achieving synchronization in a neural network is another research hotspot. Different kinds of synchronization, such as pinning synchronization [33], local synchronization [34,35], lag synchronization [36], and impulsive synchronization [37], have been considered in the literature. Recently, synchronization has also attracted attention in the field of complex network systems [38,39]. Synchronization techniques require communication among nodes, which creates network congestion and wastes network resources. Moreover, the treatment of the synchronization problem of fractional-order systems with input quantization is quite limited in the literature. Numerous results have been reported for the event-triggered synchronization problem [40,41,42]. As collective behaviors, consensus and synchronization are important in nature.
There is no doubt that the Lyapunov functional method provides an effective approach for analyzing the stability of integer-order nonlinear systems. The synchronization and stabilization of fractional-order Caputo neural networks (FCNNs) have been proved by constructing a simple quadratic Lyapunov function and calculating its fractional derivative. The contributions of this article are listed below:
1. The synchronization of fractional-order uncertain delayed neural networks with an event-triggered communication scheme is investigated.
2. A fractional integral, which is suitable for the considered fractional-order error system, is proposed.
3. A Lyapunov–Krasovskii (L–K) functional is established, and the conditions corresponding to asymptotic stability are derived for the design of an event-triggered controller based on linear matrix inequalities (LMIs).
4. The derived conditions are expressed in terms of linear matrix inequalities (LMIs), which can be checked numerically via the LMI toolbox very efficiently.
5. Numerical examples are provided to demonstrate the effectiveness and applicability of the proposed stability results.
The following notation is used throughout this paper. $\mathbb{R}$ and $\mathbb{R}^{n}$ denote the set of real numbers and the $n$-dimensional real Euclidean space, respectively; $\mathbb{R}^{n \times n}$ denotes the set of $n \times n$ real matrices. $I$ denotes the identity matrix of appropriate dimension, and $0$ denotes the zero matrix of compatible dimensions. The superscript "$T$" denotes matrix transposition, and the superscript "$-1$" denotes the matrix inverse. $X > 0$ ($X < 0$) means that $X$ is positive (negative) definite. In symmetric block matrices or long matrix expressions, an asterisk ($*$) represents a term induced by symmetry. $L_2[0, \infty)$ denotes the space of square-integrable vector functions on $[0, \infty)$.

2. Preliminaries

In this section, we recall the basic definition and some properties of fractional-order calculus. In addition, a definition, a remark, an assumption, and some lemmas are presented.
Definition 1
([43]). The Caputo fractional derivative of order $\beta$ for a function $f(t)$ is defined as
$$D^{\beta} f(t) = \frac{1}{\Gamma(m-\beta)} \int_{0}^{t} \frac{f^{(m)}(\gamma)}{(t-\gamma)^{\beta-m+1}}\, d\gamma,$$
where $t \ge 0$ and $m-1 < \beta < m \in \mathbb{Z}^{+}$. In particular, when $\beta \in (0,1)$,
$$D^{\beta} f(t) = \frac{1}{\Gamma(1-\beta)} \int_{0}^{t} \frac{f'(\gamma)}{(t-\gamma)^{\beta}}\, d\gamma.$$
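For intuition, the Caputo derivative in Definition 1 can be approximated numerically. The following Python sketch implements the standard L1 finite-difference discretization for $\beta \in (0,1)$; the function `caputo_l1`, the test function, and the grid size are illustrative choices of ours, not part of the paper:

```python
import math

def caputo_l1(f, t, beta, n=1000):
    # L1 scheme: approximate D^beta f(t), 0 < beta < 1, as a weighted sum of
    # first differences of f on a uniform grid of n steps over [0, t].
    h = t / n
    coef = 1.0 / (math.gamma(2.0 - beta) * h**beta)
    total = 0.0
    for k in range(n):
        b_k = (k + 1)**(1.0 - beta) - k**(1.0 - beta)
        total += b_k * (f(t - k * h) - f(t - (k + 1) * h))
    return coef * total

# Known closed form: the Caputo derivative of f(t) = t of order 0.5
# equals t^{1/2} / Gamma(1.5); compare at t = 1.
approx = caputo_l1(lambda s: s, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

For linear $f$ the L1 scheme telescopes exactly, so `approx` agrees with `exact` up to rounding error.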
Lemma 1
([44]). Let $\varrho(t) \in \mathbb{R}^{n}$ be a differentiable vector-valued function. Then, for any $t > 0$, one has
$$D^{\alpha}\!\left(\varrho^{T}(t)\, S\, \varrho(t)\right) \le 2\, \varrho^{T}(t)\, S\, D^{\alpha} \varrho(t), \quad 0 < \alpha < 1.$$
Lemma 2
([45]). For a given positive scalar $\lambda > 0$, vectors $l, r \in \mathbb{R}^{m}$, and a matrix $D$,
$$l^{T} D r \le \frac{\lambda^{-1}}{2}\, l^{T} D D^{T} l + \frac{\lambda}{2}\, r^{T} r.$$
Lemma 3
([46]). If $N > 0$, and $S$ and $Q$ are given matrices, then
$$\begin{pmatrix} Q & S^{T} \\ S & -N \end{pmatrix} < 0,$$
if and only if
$$Q + S^{T} N^{-1} S < 0.$$
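Lemma 3 (the Schur complement) can be sanity-checked numerically. The sketch below, with illustrative data of our own choosing, verifies that the block-matrix test and the Schur-complement test agree on negative definiteness:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2.0 * np.eye(3)                      # N > 0
S = 0.1 * rng.standard_normal((3, 3))    # small coupling block
Q = -np.eye(3)                           # symmetric Q

# Left-hand side of Lemma 3: the full symmetric block matrix
block = np.block([[Q, S.T], [S, -N]])
block_nd = bool(np.all(np.linalg.eigvalsh(block) < 0))

# Right-hand side: the Schur complement condition
schur_nd = bool(np.all(np.linalg.eigvalsh(Q + S.T @ np.linalg.inv(N) @ S) < 0))
# Lemma 3 says the two tests must give the same answer
```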
Lemma 4
([47]). For a vector function $\Xi : [t_1, t_2] \to \mathbb{R}^{n}$ and any positive definite matrix $P$, we have
$$\left(\int_{t_1}^{t_2} \Xi(s)\, ds\right)^{T} P \left(\int_{t_1}^{t_2} \Xi(s)\, ds\right) \le (t_2 - t_1) \int_{t_1}^{t_2} \Xi^{T}(s)\, P\, \Xi(s)\, ds.$$
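Lemma 4 is the familiar Jensen integral inequality; a discretized check in Python, with an arbitrary positive definite $P$ and random samples of $\Xi$ (both our own choices), illustrates it:

```python
import numpy as np

rng = np.random.default_rng(1)
t1, t2, m = 0.0, 2.0, 400
ds = (t2 - t1) / m
P = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive definite
Xi = rng.standard_normal((m, 2))          # samples of Xi(s) on [t1, t2]

integral = Xi.sum(axis=0) * ds            # approximates the integral of Xi
lhs = integral @ P @ integral
# (t2 - t1) * integral of Xi^T P Xi, summed sample by sample
rhs = (t2 - t1) * ds * np.einsum('ij,jk,ik->', Xi, P, Xi)
```

The discrete analogue of the inequality holds exactly, so `lhs <= rhs` for any sample set.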
Assumption A1.
Let $g_i(\cdot)$ be continuous and bounded, and let $X_s^{-}$ and $X_s^{+}$ be constants such that
$$X_s^{-} \le \frac{g_s(r_1) - g_s(r_2)}{r_1 - r_2} \le X_s^{+}, \quad s = 1, 2, \ldots, n,$$
where $r_1, r_2 \in \mathbb{R}$ and $r_1 \ne r_2$.
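For example, the activation $g_s(r) = \tanh(r)$ used later in Section 5 satisfies Assumption A1 with $X_s^{-} = 0$ and $X_s^{+} = 1$; the following quick Monte Carlo check illustrates this (the sampling scheme is our own choice):

```python
import math
import random

random.seed(0)
quotients = []
for _ in range(1000):
    r1, r2 = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    if r1 != r2:
        # difference quotient of tanh, the quantity bounded in Assumption A1
        quotients.append((math.tanh(r1) - math.tanh(r2)) / (r1 - r2))
```

Since $\tanh$ is monotonically increasing with derivative $\mathrm{sech}^2(r) \le 1$, every sampled quotient lies in $(0, 1]$.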
Remark 1.
From the literature survey, it is clear that most results on fractional-order neural networks (FONNs) are derived with fractional-order Lyapunov stability criteria having quadratic terms. However, in this paper, we introduce the integral term $D^{-(\alpha+1)} \int_{t-\eta}^{t} e^{T}(s) R_2 e(s)\, ds$ in the Lyapunov functional candidate, which is handled by utilizing the properties of Caputo fractional-order derivatives and integrals. The Lyapunov functional is novel in that it combines this integral term with the quadratic term. By applying fractional-order derivatives to the error system of the FCNNs under suitable adaptive update laws, a new sufficient condition can be derived in terms of solvable LMIs.

3. Main Results

Consider the following uncertain delayed neural network described by
$$D^{\alpha} w_i(t) = -(r_i + \Delta r_i(t))\, w_i(t) + \sum_{j=1}^{n} (c_{ij} + \Delta c_{ij}(t))\, h_j(w_j(t)) + \sum_{j=1}^{n} (b_{ij} + \Delta b_{ij}(t))\, h_j(w_j(t - \sigma_j(t))) + \sum_{j=1}^{n} (a_{ij} + \Delta a_{ij}(t)) \int_{t-\eta}^{t} w_j(s)\, ds + p_i(t).$$
Conveniently, we write the master system as
$$D^{\alpha} w(t) = -(R + \Delta R(t))\, w(t) + (C + \Delta C(t))\, h(w(t)) + (B + \Delta B(t))\, h(w(t - \sigma(t))) + (A + \Delta A(t)) \int_{t-\eta}^{t} w(s)\, ds + P(t), \quad (1)$$
in which $w(t) = (w_1(t), w_2(t), \ldots, w_n(t))^{T} \in \mathbb{R}^{n}$ is the state vector associated with the $n$ neurons, $R = \mathrm{diag}\{r_1, r_2, \ldots, r_n\}$ is a diagonal matrix, and $C$, $B$, and $A$ are known constant matrices of appropriate dimensions; the symbol $\Delta$ denotes the uncertain term, and $\Delta R(t)$, $\Delta C(t)$, $\Delta B(t)$, and $\Delta A(t)$ are matrices representing the time-varying parameter uncertainties. $h(w(t))$ is the neuron activation function.
Next, we consider the corresponding slave system as follows:
$$D^{\alpha} v_i(t) = -(r_i + \Delta r_i(t))\, v_i(t) + \sum_{j=1}^{n} (c_{ij} + \Delta c_{ij}(t))\, h_j(v_j(t)) + \sum_{j=1}^{n} (b_{ij} + \Delta b_{ij}(t))\, h_j(v_j(t - \sigma_j(t))) + \sum_{j=1}^{n} (a_{ij} + \Delta a_{ij}(t)) \int_{t-\eta}^{t} v_j(s)\, ds + p_i(t) + h\, q_i(t). \quad (3)$$
The compact form of (3) is
$$D^{\alpha} v(t) = -(R + \Delta R(t))\, v(t) + (C + \Delta C(t))\, h(v(t)) + (B + \Delta B(t))\, h(v(t - \sigma(t))) + (A + \Delta A(t)) \int_{t-\eta}^{t} v(s)\, ds + P(t) + H Q(t).$$
Now, we introduce the synchronization error $e(t) = v(t) - w(t)$, whose dynamics are given by
$$D^{\alpha} e(t) = -(R + \Delta R(t))\, e(t) + (C + \Delta C(t))\, h(e(t)) + (B + \Delta B(t))\, h(e(t - \sigma(t))) + (A + \Delta A(t)) \int_{t-\eta}^{t} e(s)\, ds + H Q(t). \quad (5)$$
The purpose of this paper is to design a controller Q ( t ) = K e ( t ) , such that the slave system (3) synchronizes with the master system (1), and K is the controller gain to be determined.
Without distributed delays in the system (1), it is easy to obtain the error system
$$D^{\alpha} e(t) = -(R + \Delta R(t))\, e(t) + (C + \Delta C(t))\, h(e(t)) + (B + \Delta B(t))\, h(e(t - \sigma(t))) + H K e(t). \quad (6)$$
Theorem 1.
The FNNs (1) and (3) are globally asymptotically synchronized under the event-triggered control scheme for the given scalars $\delta_1, \delta_2, \delta_3, \delta_4, \delta_5$ and $\mu < 1$, if there exist symmetric positive definite matrices $R_1 > 0$, $R_2 > 0$ such that the following LMI is feasible:
$$\Omega = \begin{pmatrix} \Omega_{11} & R_1 J_r & R_1 J_c & R_1 J_b & R_1 C & R_1 B & 0 \\ * & -\delta_1 I & 0 & 0 & 0 & 0 & 0 \\ * & * & -\delta_2 I & 0 & 0 & 0 & 0 \\ * & * & * & -\delta_3 I & 0 & 0 & 0 \\ * & * & * & * & -\delta_4 I & 0 & 0 \\ * & * & * & * & * & -\delta_5 I & 0 \\ * & * & * & * & * & * & \Omega_{66} \end{pmatrix} < 0, \quad (7)$$
where
$$\Omega_{11} = -2 R_1 R + \delta_1 L_r^{T} L_r + \delta_2 \phi^{T} L_c^{T} L_c \phi + \delta_4 \phi^{T} \phi + R_2 + R_1 H K, \qquad \Omega_{66} = \delta_3 \phi^{T} L_b^{T} L_b \phi + \delta_5 \phi^{T} \phi - R_2 (1 - \mu).$$
Proof. 
Now, let us define the Lyapunov–Krasovskii functional as follows:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
$$V_1(t) = e^{T}(t) R_1 e(t), \qquad V_2(t) = D^{-(\alpha+1)} \int_{t-\sigma(t)}^{t} e^{T}(s) R_2 e(s)\, ds.$$
By using Lemma 2, we have,
$$2 e^{T}(t) R_1 \Delta R(t) e(t) = 2 e^{T}(t) R_1 J_r K(t) L_r e(t) \le \delta_1^{-1} e^{T}(t) R_1 J_r J_r^{T} R_1^{T} e(t) + \delta_1 e^{T}(t) L_r^{T} L_r e(t), \quad (9)$$
$$2 e^{T}(t) R_1 \Delta C(t) h(e(t)) = 2 e^{T}(t) R_1 J_c K(t) L_c h(e(t)) \le \delta_2^{-1} e^{T}(t) R_1 J_c J_c^{T} R_1^{T} e(t) + \delta_2 e^{T}(t) \phi^{T} L_c^{T} L_c \phi\, e(t), \quad (10)$$
$$2 e^{T}(t) R_1 \Delta B(t) h(e(t-\sigma(t))) = 2 e^{T}(t) R_1 J_b K(t) L_b h(e(t-\sigma(t))) \le \delta_3^{-1} e^{T}(t) R_1 J_b J_b^{T} R_1^{T} e(t) + \delta_3 e^{T}(t-\sigma(t)) \phi^{T} L_b^{T} L_b \phi\, e(t-\sigma(t)), \quad (11)$$
$$2 e^{T}(t) R_1 C h(e(t)) \le \delta_4^{-1} e^{T}(t) R_1 C C^{T} R_1^{T} e(t) + \delta_4 e^{T}(t) \phi^{T} \phi\, e(t), \quad (12)$$
$$2 e^{T}(t) R_1 B h(e(t-\sigma(t))) \le \delta_5^{-1} e^{T}(t) R_1 B B^{T} R_1^{T} e(t) + \delta_5 e^{T}(t-\sigma(t)) \phi^{T} \phi\, e(t-\sigma(t)). \quad (13)$$
Then, with the support of Lemma 1 and the linearity nature of the Caputo fractional-order derivative, the fractional derivative along the trajectories of the system state is acquired as follows
$$\begin{aligned} D^{\alpha} V(t) \le{}& 2 e^{T}(t) R_1 D^{\alpha} e(t) \\ \le{}& 2 e^{T}(t) R_1 \big[ -(R + \Delta R(t)) e(t) + (C + \Delta C(t)) h(e(t)) + (B + \Delta B(t)) h(e(t-\sigma(t))) + H K e(t) \big] \\ \le{}& -2 e^{T}(t) R_1 R e(t) + 2 e^{T}(t) R_1 H K e(t) + \delta_1^{-1} e^{T}(t) R_1 J_r J_r^{T} R_1^{T} e(t) + \delta_1 e^{T}(t) L_r^{T} L_r e(t) \\ & + 2 e^{T}(t) R_1 C h(e(t)) + \delta_2^{-1} e^{T}(t) R_1 J_c J_c^{T} R_1^{T} e(t) + \delta_2 e^{T}(t) \phi^{T} L_c^{T} L_c \phi\, e(t) \\ & + 2 e^{T}(t) R_1 B h(e(t-\sigma(t))) + \delta_3^{-1} e^{T}(t) R_1 J_b J_b^{T} R_1^{T} e(t) + \delta_3 e^{T}(t-\sigma(t)) \phi^{T} L_b^{T} L_b \phi\, e(t-\sigma(t)) \\ & + \delta_4^{-1} e^{T}(t) R_1 C C^{T} R_1^{T} e(t) + \delta_4 e^{T}(t) \phi^{T} \phi\, e(t) + \delta_5^{-1} e^{T}(t) R_1 B B^{T} R_1^{T} e(t) \\ & + \delta_5 e^{T}(t-\sigma(t)) \phi^{T} \phi\, e(t-\sigma(t)) + e^{T}(t) R_2 e(t) - (1-\mu)\, e^{T}(t-\sigma(t)) R_2 e(t-\sigma(t)). \end{aligned}$$
From (9)–(13), the following can be obtained.
$$D^{\alpha} V(t) \le \zeta^{T}(t)\, \Omega\, \zeta(t),$$
where
$$\zeta(t) = \mathrm{col}\big[ e(t),\; e(t - \sigma(t)) \big].$$
From the aforementioned part, we know that matrix inequality (7) guarantees Ω < 0 .
Thereby, the master system (1) is synchronized with the slave system (3). The proof of Theorem 1 is complete. □
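To see how Theorem 1 is used in practice, a scalar ($n = 1$) instantiation of LMI (7) can be assembled and tested for negative definiteness directly; all numerical values below are illustrative choices of ours, not data from the paper:

```python
import numpy as np

# illustrative scalar data (our own choice): n = 1, so every block is a scalar
r, r1, r2 = 1.0, 1.0, 1.0          # R, R_1, R_2
d1 = d2 = d3 = d4 = d5 = 1.0       # delta_1 .. delta_5
Lr = Lc = Lb = phi = 0.1           # L_r, L_c, L_b, phi
mu, c, b = 0.5, 0.1, 0.1
jr = jc = jb = 0.1                 # J_r, J_c, J_b
h_mat, k = 1.0, -5.0               # H and a stabilizing gain K

om11 = -2*r1*r + d1*Lr**2 + d2*(phi*Lc)**2 + d4*phi**2 + r2 + r1*h_mat*k
om66 = d3*(phi*Lb)**2 + d5*phi**2 - r2*(1.0 - mu)
first_row = [om11, r1*jr, r1*jc, r1*jb, r1*c, r1*b, 0.0]

# assemble the symmetric 7x7 matrix Omega of LMI (7)
Omega = np.diag([om11, -d1, -d2, -d3, -d4, -d5, om66])
Omega[0, 1:] = first_row[1:]
Omega[1:, 0] = first_row[1:]
feasible = bool(np.all(np.linalg.eigvalsh(Omega) < 0))
```

With a sufficiently strong gain ($K = -5$ here) the matrix is negative definite, i.e., the synchronization condition of Theorem 1 holds for these numbers.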
Theorem 2.
The FNNs (1) and (3) are globally asymptotically synchronized for the given scalars $\delta_1, \delta_2, \delta_3, \delta_4, \delta_5$ and $\sigma$, if there exist symmetric positive definite matrices $R_1 > 0$, $R_2 > 0$ such that the following LMI holds:
$$\pi = \begin{pmatrix} \pi_{11} & J_r & J_c & J_b & C & B & 0 \\ * & -\delta_1 I & 0 & 0 & 0 & 0 & 0 \\ * & * & -\delta_2 I & 0 & 0 & 0 & 0 \\ * & * & * & -\delta_3 I & 0 & 0 & 0 \\ * & * & * & * & -\delta_4 I & 0 & 0 \\ * & * & * & * & * & -\delta_5 I & 0 \\ * & * & * & * & * & * & \pi_{66} \end{pmatrix} < 0, \quad (15)$$
where
$$\pi_{11} = -2 R X_1 + X_1 \delta_1 L_r^{T} L_r X_1 + X_1 \delta_2 \phi^{T} L_c^{T} L_c \phi X_1 + X_1 \delta_4 \phi^{T} \phi X_1 + X_1 R_2 X_1 + H Y_1, \qquad \pi_{66} = \delta_3 \phi^{T} L_b^{T} L_b \phi + \delta_5 \phi^{T} \phi - R_2 (1 - \mu),$$
and the other parameters are the same as in Theorem 1; the gain matrix is recovered as $K = Y_1 X_1^{-1}$, where $X_1 = R_1^{-1}$.
Proof. 
We pre- and post-multiply $\Omega$ by $\mathrm{diag}\{R_1^{-1}, I, I, I, I, I, I\}$ and substitute $X_1 = R_1^{-1}$ to obtain
$$\Phi = \begin{pmatrix} \Phi_{11} & J_r & J_c & J_b & C & B & 0 \\ * & -\delta_1 I & 0 & 0 & 0 & 0 & 0 \\ * & * & -\delta_2 I & 0 & 0 & 0 & 0 \\ * & * & * & -\delta_3 I & 0 & 0 & 0 \\ * & * & * & * & -\delta_4 I & 0 & 0 \\ * & * & * & * & * & -\delta_5 I & 0 \\ * & * & * & * & * & * & \Phi_{66} \end{pmatrix} < 0,$$
where
$$\Phi_{11} = -2 R X_1 + X_1 \delta_1 L_r^{T} L_r X_1 + X_1 \delta_2 \phi^{T} L_c^{T} L_c \phi X_1 + X_1 \delta_4 \phi^{T} \phi X_1 + X_1 R_2 X_1 + H K X_1, \qquad \Phi_{66} = \delta_3 \phi^{T} L_b^{T} L_b \phi + \delta_5 \phi^{T} \phi - R_2 (1 - \mu).$$
At the same time, introducing the change of variables $Y_1 = K X_1$, from which the controller gain matrix is $K = Y_1 X_1^{-1}$, yields
$$\pi = \begin{pmatrix} \pi_{11} & J_r & J_c & J_b & C & B & 0 \\ * & -\delta_1 I & 0 & 0 & 0 & 0 & 0 \\ * & * & -\delta_2 I & 0 & 0 & 0 & 0 \\ * & * & * & -\delta_3 I & 0 & 0 & 0 \\ * & * & * & * & -\delta_4 I & 0 & 0 \\ * & * & * & * & * & -\delta_5 I & 0 \\ * & * & * & * & * & * & \pi_{66} \end{pmatrix} < 0.$$
Hence, (15) guarantees that $\pi < 0$.
Thereby, the master system (1) is synchronized with the slave system (3). The proof of Theorem 2 is complete. □
Remark 2.
Specifically, when there are no uncertainties in the given system, the neural network (6) reduces to
$$D^{\alpha} e(t) = -R e(t) + C h(e(t)) + B h(e(t - \sigma(t))) + A \int_{t-\eta}^{t} e(s)\, ds + H K e(t). \quad (20)$$
Corollary 1.
For the given scalars $\delta_4, \delta_5, \eta, \epsilon$, and $\sigma$, if there exist symmetric positive definite matrices $R_1 > 0$, $R_2 > 0$ such that the following LMI is feasible,
$$\varsigma < 0,$$
with $\varsigma$ given below, then system (20) is globally asymptotically synchronized.
Proof. 
Now, let us define the Lyapunov–Krasovskii functional as follows:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
$$V_1(t) = e^{T}(t) R_1 e(t), \qquad V_2(t) = D^{-(\alpha+1)} \int_{t-\sigma(t)}^{t} e^{T}(s) R_2 e(s)\, ds.$$
By using Lemma 2, we have
$$2 e^{T}(t) R_1 C h(e(t)) \le \delta_4^{-1} e^{T}(t) R_1 C C^{T} R_1^{T} e(t) + \delta_4 e^{T}(t) \phi^{T} \phi\, e(t),$$
$$2 e^{T}(t) R_1 B h(e(t-\sigma(t))) \le \delta_5^{-1} e^{T}(t) R_1 B B^{T} R_1^{T} e(t) + \delta_5 e^{T}(t-\sigma(t)) \phi^{T} \phi\, e(t-\sigma(t)).$$
Further, the above term is computed in view of the procedure in [47], and by employing Lemma 2.1 in [47] and the Cauchy matrix inequality, we have
$$\begin{aligned} 2 e^{T}(t) R_1 A \int_{t-\eta}^{t} e(s)\, ds &\le \eta\, e^{T}(t) R_1 A R_1^{-1} A^{T} R_1 e(t) + \frac{1}{\eta} \left( \int_{t-\eta}^{t} e(s)\, ds \right)^{T} R_1 \left( \int_{t-\eta}^{t} e(s)\, ds \right) \\ &\le \eta\, e^{T}(t) R_1 A R_1^{-1} A^{T} R_1 e(t) + \int_{t-\eta}^{t} e^{T}(s) R_1 e(s)\, ds \\ &= \eta\, e^{T}(t) R_1 A R_1^{-1} A^{T} R_1 e(t) + \int_{-\eta}^{0} e^{T}(t+s) R_1 e(t+s)\, ds, \end{aligned}$$
and, since $V(t+s, x(t+s)) \le \epsilon\, V(t, x(t))$, it follows that
$$2 e^{T}(t) R_1 A \int_{t-\eta}^{t} e(s)\, ds \le \eta\, e^{T}(t) R_1 A R_1^{-1} A^{T} R_1 e(t) + \eta \epsilon\, e^{T}(t) R_1 e(t).$$
Then, with the support of Lemma 1 and the linearity nature of the Caputo fractional-order derivative, the fractional derivative along the trajectories of the system state is acquired as follows
$$\begin{aligned} D^{\alpha} V(t) \le{}& 2 e^{T}(t) R_1 D^{\alpha} e(t) \\ \le{}& 2 e^{T}(t) R_1 \Big[ -R e(t) + C h(e(t)) + B h(e(t-\sigma(t))) + A \int_{t-\eta}^{t} e(s)\, ds + H K e(t) \Big] \\ \le{}& -2 e^{T}(t) R_1 R e(t) + 2 e^{T}(t) R_1 H K e(t) + \delta_4^{-1} e^{T}(t) R_1 C C^{T} R_1^{T} e(t) + \delta_4 e^{T}(t) \phi^{T} \phi\, e(t) \\ & + \delta_5^{-1} e^{T}(t) R_1 B B^{T} R_1^{T} e(t) + \delta_5 e^{T}(t-\sigma(t)) \phi^{T} \phi\, e(t-\sigma(t)) \\ & + \eta\, e^{T}(t) R_1 A R_1^{-1} A^{T} R_1 e(t) + \eta \epsilon\, e^{T}(t) R_1 e(t) + e^{T}(t) R_2 e(t) \\ & - (1-\mu)\, e^{T}(t-\sigma(t)) R_2 e(t-\sigma(t)). \end{aligned}$$
From (23)–(27) and applying Lemma 4, we obtain
$$\Theta = \begin{pmatrix} \Theta_{11} & R_1 C & R_1 B & \eta R_1 A & 0 \\ * & -\delta_4 I & 0 & 0 & 0 \\ * & * & -\delta_5 I & 0 & 0 \\ * & * & * & -\eta R_1 & 0 \\ * & * & * & * & \delta_5 \phi^{T} \phi - R_2(1-\mu) \end{pmatrix} < 0,$$
where
$$\Theta_{11} = -2 R_1 R + \delta_4 \phi^{T} \phi + \eta \epsilon R_1 + R_2 + R_1 H K.$$
We pre- and post-multiply $\Theta$ by $\mathrm{diag}\{R_1^{-1}, I, I, R_1^{-1}, I\}$ and substitute $X_1 = R_1^{-1}$ to obtain
$$\Xi = \begin{pmatrix} \Xi_{11} & C & B & \eta A X_1 & 0 \\ * & -\delta_4 I & 0 & 0 & 0 \\ * & * & -\delta_5 I & 0 & 0 \\ * & * & * & -\eta X_1 & 0 \\ * & * & * & * & \delta_5 \phi^{T} \phi - R_2(1-\mu) \end{pmatrix} < 0,$$
where $\Xi_{11} = -2 R X_1 + X_1 \delta_4 \phi^{T} \phi X_1 + \eta \epsilon X_1 + X_1 R_2 X_1 + H K X_1$.
Finally, setting $Y = K X_1$ gives
$$\varsigma = \begin{pmatrix} \varsigma_{11} & C & B & \eta A X_1 & 0 \\ * & -\delta_4 I & 0 & 0 & 0 \\ * & * & -\delta_5 I & 0 & 0 \\ * & * & * & -\eta X_1 & 0 \\ * & * & * & * & \delta_5 \phi^{T} \phi - R_2(1-\mu) \end{pmatrix} < 0,$$
where $\varsigma_{11} = -2 R X_1 + X_1 \delta_4 \phi^{T} \phi X_1 + \eta \epsilon X_1 + X_1 R_2 X_1 + H Y$.
Thereby, the master system (1) is synchronized with the slave system (3). □

4. Event-Triggered Control Scheme

In this section, we introduce an event generator at the controller node using the following judgment algorithm:
$$\big[ e((k+j)h) - e(kh) \big]^{T} \Phi \big[ e((k+j)h) - e(kh) \big] \le \Sigma\, e^{T}((k+j)h)\, \Phi\, e((k+j)h), \quad (31)$$
where $\Phi$ is a positive definite matrix to be determined, $k, j \in \mathbb{Z}^{+}$, $kh$ denotes the release instant, $e((k+j)h) = v((k+j)h) - w((k+j)h)$ is the error information at the instant $(k+j)h$, and $\Sigma \in [0, 1)$ is a given constant. Cases A and B below relate to the following delayed differential equation:
$$D^{\alpha} e(t) = -(R + \Delta R(t))\, e(t) + (C + \Delta C(t))\, h(e(t)) + (B + \Delta B(t))\, h(e(t-\sigma(t))) + (A + \Delta A(t)) \int_{t-\eta}^{t} h(e(s))\, ds + H K e(t_k h), \quad t \in [t_k h + \tau_k,\; t_{k+1} h + \tau_{k+1}).$$
Case A: if $t_k h + h + \bar{\tau} \ge t_{k+1} h + \tau_{k+1}$, we can define $\tau(t)$ as
$$\tau(t) = t - t_k h, \quad t \in [t_k h + \tau_k,\; t_{k+1} h + \tau_{k+1}).$$
It can be seen that
$$\tau_k \le \tau(t) \le (t_{k+1} - t_k) h + \tau_{k+1} \le h + \bar{\tau}.$$
Case B: if $t_k h + h + \bar{\tau} < t_{k+1} h + \tau_{k+1}$, since $\tau_k \le \bar{\tau}$, we can easily demonstrate that a positive constant $m$ exists such that $t_k h + m h + \bar{\tau} < t_{k+1} h + \tau_{k+1} \le t_k h + (m+1) h + \bar{\tau}$. We divide the time interval $[t_k h + \tau_k,\; t_{k+1} h + \tau_{k+1})$ into the subintervals $F_0 = [t_k h + \tau_k,\; t_k h + h + \bar{\tau})$, $F_i = [t_k h + i h + \bar{\tau},\; t_k h + i h + h + \bar{\tau})$, and $F_m = [t_k h + m h + \bar{\tau},\; t_{k+1} h + \tau_{k+1})$, and we define $\tau(t)$ as
$$\tau(t) = t - t_k h - i h, \quad t \in F_i, \quad i = 0, 1, \ldots, m.$$
It is easy to prove that $0 \le \tau_k \le \tau(t) \le h + \bar{\tau} = \tau_M$, $t \in [t_k h + \tau_k,\; t_{k+1} h + \tau_{k+1})$. Finally, we define
$$e_k(t) = e(t_k h) - e(t_k h + i h), \quad t \in F_i, \quad i = 0, 1, \ldots, m. \quad (33)$$
For case A, $m = 0$, and we have $e_k(t) = 0$ from (33). Based on the analysis above, the event generator (31) can be rewritten as
$$e_k^{T}(t)\, \Phi\, e_k(t) \le \Sigma\, e^{T}(t - \tau(t))\, \Phi\, e(t - \tau(t)), \quad t \in [t_k h + \tau_k,\; t_{k+1} h + \tau_{k+1}).$$
Then, the system is reduced to
$$D^{\alpha} e(t) = -R e(t) + C h(e(t)) + B h(e(t-\sigma(t))) + A \int_{t-\eta}^{t} e(s)\, ds + H K e(t) + H K e(t - \tau(t)). \quad (34)$$
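A minimal simulation sketch of the event-triggered mechanism, using a scalar, integer-order ($\alpha = 1$) simplification of the error dynamics purely for illustration (all parameter values are our own choices, not the paper's):

```python
# scalar event-triggered control loop (forward Euler, alpha = 1 for simplicity)
k_gain, trig_thr, h_step, steps = 3.0, 0.1, 0.01, 2000
e, e_held, events = 1.0, 1.0, 0

for _ in range(steps):
    # event generator: release a new sample only when the holding error
    # exceeds the threshold (a scalar analogue of condition (31))
    if (e - e_held) ** 2 > trig_thr * e ** 2:
        e_held = e
        events += 1
    # closed-loop error dynamics e' = -e - k_gain * e_held,
    # where the controller acts on the last released sample e_held
    e += h_step * (-e - k_gain * e_held)
```

The error decays to zero while the controller updates only at event instants, so `events` stays far below the number of simulation steps, which is exactly the communication saving the scheme targets.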
Theorem 3.
For the given scalars $\delta_1, \delta_2, \delta_3, \delta_4, \delta_5$, $\mu < 1$, and $\sigma$ and the diagonal matrices $L_1$, $L_2$, and $L_3$, the FNNs (1) and (3) are globally asymptotically synchronized if there exist symmetric positive definite matrices $R_1 > 0$, $R_2 > 0$ such that the following LMI is feasible:
$$\xi < 0. \quad (35)$$
Proof. 
Now, let us define the Lyapunov–Krasovskii functional as follows:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
$$V_1(t) = e^{T}(t) R_1 e(t), \qquad V_2(t) = D^{-(\alpha+1)} \int_{t-\sigma(t)}^{t} e^{T}(s) R_2 e(s)\, ds.$$
Using Lemma 2, we have
$$2 e^{T}(t) R_1 \Delta R(t) e(t) = 2 e^{T}(t) R_1 J_r K(t) L_r e(t) \le \delta_1^{-1} e^{T}(t) R_1 J_r J_r^{T} R_1^{T} e(t) + \delta_1 e^{T}(t) L_r^{T} L_r e(t),$$
$$2 e^{T}(t) R_1 \Delta C(t) h(e(t)) = 2 e^{T}(t) R_1 J_c K(t) L_c h(e(t)) \le \delta_2^{-1} e^{T}(t) R_1 J_c J_c^{T} R_1^{T} e(t) + \delta_2 e^{T}(t) \phi^{T} L_c^{T} L_c \phi\, e(t),$$
$$2 e^{T}(t) R_1 \Delta B(t) h(e(t-\sigma(t))) = 2 e^{T}(t) R_1 J_b K(t) L_b h(e(t-\sigma(t))) \le \delta_3^{-1} e^{T}(t) R_1 J_b J_b^{T} R_1^{T} e(t) + \delta_3 e^{T}(t-\sigma(t)) \phi^{T} L_b^{T} L_b \phi\, e(t-\sigma(t)).$$
Then, with the support of Lemma 1 and the linearity nature of the Caputo fractional-order derivative, the fractional derivative along the trajectories of the system state is acquired as follows
$$\begin{aligned} D^{\alpha} V(t) \le{}& 2 e^{T}(t) R_1 D^{\alpha} e(t) \\ \le{}& 2 e^{T}(t) R_1 \big[ -(R + \Delta R(t)) e(t) + (C + \Delta C(t)) h(e(t)) + (B + \Delta B(t)) h(e(t-\sigma(t))) \\ & \quad + H K e(t) + H K e(t-\tau(t)) \big] \\ \le{}& -2 e^{T}(t) R_1 R e(t) + 2 e^{T}(t) R_1 H K e(t) + 2 e^{T}(t) R_1 H K e(t-\tau(t)) \\ & + \delta_1^{-1} e^{T}(t) R_1 J_r J_r^{T} R_1^{T} e(t) + \delta_1 e^{T}(t) L_r^{T} L_r e(t) + 2 e^{T}(t) R_1 C h(e(t)) \\ & + \delta_2^{-1} e^{T}(t) R_1 J_c J_c^{T} R_1^{T} e(t) + \delta_2 e^{T}(t) \phi^{T} L_c^{T} L_c \phi\, e(t) + 2 e^{T}(t) R_1 B h(e(t-\sigma(t))) \\ & + \delta_3^{-1} e^{T}(t) R_1 J_b J_b^{T} R_1^{T} e(t) + \delta_3 e^{T}(t-\sigma(t)) \phi^{T} L_b^{T} L_b \phi\, e(t-\sigma(t)) \\ & + e^{T}(t) R_2 e(t) - (1-\mu)\, e^{T}(t-\sigma(t)) R_2 e(t-\sigma(t)). \end{aligned}$$
From Assumption 1, we have
$$\begin{pmatrix} e(t) \\ h(e(t)) \end{pmatrix}^{T} \begin{pmatrix} -L_1 \Gamma_2 & L_1 \Gamma_1 \\ * & -L_1 \end{pmatrix} \begin{pmatrix} e(t) \\ h(e(t)) \end{pmatrix} \ge 0,$$
$$\begin{pmatrix} e(t-\sigma(t)) \\ h(e(t-\sigma(t))) \end{pmatrix}^{T} \begin{pmatrix} -L_2 \Gamma_2 & L_2 \Gamma_1 \\ * & -L_2 \end{pmatrix} \begin{pmatrix} e(t-\sigma(t)) \\ h(e(t-\sigma(t))) \end{pmatrix} \ge 0,$$
$$\begin{pmatrix} e(t-\tau(t)) \\ h(e(t-\tau(t))) \end{pmatrix}^{T} \begin{pmatrix} -L_3 \Gamma_2 & L_3 \Gamma_1 \\ * & -L_3 \end{pmatrix} \begin{pmatrix} e(t-\tau(t)) \\ h(e(t-\tau(t))) \end{pmatrix} \ge 0.$$
From (37)–(43), we obtain
$$\begin{aligned} D^{\alpha} V(t) \le{}& -2 e^{T}(t) R_1 R e(t) + 2 e^{T}(t) R_1 H K e(t) + 2 e^{T}(t) R_1 H K e(t-\tau(t)) \\ & + \delta_1^{-1} e^{T}(t) R_1 J_r J_r^{T} R_1^{T} e(t) + \delta_1 e^{T}(t) L_r^{T} L_r e(t) + 2 e^{T}(t) R_1 C h(e(t)) \\ & + \delta_2^{-1} e^{T}(t) R_1 J_c J_c^{T} R_1^{T} e(t) + \delta_2 e^{T}(t) \phi^{T} L_c^{T} L_c \phi\, e(t) + 2 e^{T}(t) R_1 B h(e(t-\sigma(t))) \\ & + \delta_3^{-1} e^{T}(t) R_1 J_b J_b^{T} R_1^{T} e(t) + \delta_3 e^{T}(t-\sigma(t)) \phi^{T} L_b^{T} L_b \phi\, e(t-\sigma(t)) \\ & + \delta_4^{-1} e^{T}(t) R_1 C C^{T} R_1^{T} e(t) + \delta_4 e^{T}(t) \phi^{T} \phi\, e(t) + \delta_5^{-1} e^{T}(t) R_1 B B^{T} R_1^{T} e(t) \\ & + \delta_5 e^{T}(t-\sigma(t)) \phi^{T} \phi\, e(t-\sigma(t)) + e^{T}(t) R_2 e(t) - (1-\mu)\, e^{T}(t-\sigma(t)) R_2 e(t-\sigma(t)) \\ & + \begin{pmatrix} e(t) \\ h(e(t)) \end{pmatrix}^{T} \begin{pmatrix} -L_1 \Gamma_2 & L_1 \Gamma_1 \\ * & -L_1 \end{pmatrix} \begin{pmatrix} e(t) \\ h(e(t)) \end{pmatrix} + \begin{pmatrix} e(t-\sigma(t)) \\ h(e(t-\sigma(t))) \end{pmatrix}^{T} \begin{pmatrix} -L_2 \Gamma_2 & L_2 \Gamma_1 \\ * & -L_2 \end{pmatrix} \begin{pmatrix} e(t-\sigma(t)) \\ h(e(t-\sigma(t))) \end{pmatrix} \\ & + \begin{pmatrix} e(t-\tau(t)) \\ h(e(t-\tau(t))) \end{pmatrix}^{T} \begin{pmatrix} -L_3 \Gamma_2 & L_3 \Gamma_1 \\ * & -L_3 \end{pmatrix} \begin{pmatrix} e(t-\tau(t)) \\ h(e(t-\tau(t))) \end{pmatrix}. \end{aligned}$$
Then,
$$\Lambda = \begin{pmatrix} \Lambda_{11} & \Lambda_{12} & 0 & \Lambda_{14} & \Lambda_{15} & 0 & \Lambda_{17} & \Lambda_{18} & \Lambda_{19} \\ * & \Lambda_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & \Lambda_{33} & \Lambda_{34} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \Lambda_{44} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & \Lambda_{55} & \Lambda_{56} & 0 & 0 & 0 \\ * & * & * & * & * & \Lambda_{66} & 0 & 0 & 0 \\ * & * & * & * & * & * & \Lambda_{77} & 0 & 0 \\ * & * & * & * & * & * & * & \Lambda_{88} & 0 \\ * & * & * & * & * & * & * & * & \Lambda_{99} \end{pmatrix} < 0,$$
where
$$\begin{aligned} & \Lambda_{11} = -2 R_1 R + \delta_1 L_r^{T} L_r + \delta_2 \phi^{T} L_c^{T} L_c \phi + R_2 + 2 R_1 H K - L_1 \Gamma_2 - \Phi, \quad \Lambda_{12} = R_1 C + L_1 \Gamma_1, \quad \Lambda_{14} = R_1 B, \\ & \Lambda_{15} = R_1 H K, \quad \Lambda_{17} = R_1 J_r, \quad \Lambda_{18} = R_1 J_c, \quad \Lambda_{19} = R_1 J_b, \quad \Lambda_{22} = -L_1, \\ & \Lambda_{33} = \delta_3 \phi^{T} L_b^{T} L_b \phi - R_2 (1-\mu) - L_2 \Gamma_2, \quad \Lambda_{34} = L_2 \Gamma_1, \quad \Lambda_{44} = -L_2, \\ & \Lambda_{55} = \Sigma \Phi - L_3 \Gamma_2, \quad \Lambda_{56} = L_3 \Gamma_1, \quad \Lambda_{66} = -L_3, \quad \Lambda_{77} = -\delta_1 I, \quad \Lambda_{88} = -\delta_2 I, \quad \Lambda_{99} = -\delta_3 I. \end{aligned}$$
We pre- and post-multiply $\Lambda$ by $\mathrm{diag}\{R_1^{-1}, I, I, I, R_1^{-1}, I, I, I, I\}$ and substitute $X_1 = R_1^{-1}$ to obtain
$$\Upsilon = \begin{pmatrix} \Upsilon_{11} & \Upsilon_{12} & 0 & \Upsilon_{14} & \Upsilon_{15} & 0 & \Upsilon_{17} & \Upsilon_{18} & \Upsilon_{19} \\ * & \Upsilon_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & \Upsilon_{33} & \Upsilon_{34} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \Upsilon_{44} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & \Upsilon_{55} & \Upsilon_{56} & 0 & 0 & 0 \\ * & * & * & * & * & \Upsilon_{66} & 0 & 0 & 0 \\ * & * & * & * & * & * & \Upsilon_{77} & 0 & 0 \\ * & * & * & * & * & * & * & \Upsilon_{88} & 0 \\ * & * & * & * & * & * & * & * & \Upsilon_{99} \end{pmatrix} < 0,$$
where
$$\begin{aligned} & \Upsilon_{11} = -2 R X_1 + X_1 \delta_1 L_r^{T} L_r X_1 + X_1 \delta_2 \phi^{T} L_c^{T} L_c \phi X_1 + X_1 R_2 X_1 + 2 H K X_1 - X_1 L_1 \Gamma_2 X_1 - X_1 \Phi X_1, \\ & \Upsilon_{12} = C + X_1 L_1 \Gamma_1, \quad \Upsilon_{14} = B, \quad \Upsilon_{15} = H K X_1, \quad \Upsilon_{17} = J_r, \quad \Upsilon_{18} = J_c, \quad \Upsilon_{19} = J_b, \quad \Upsilon_{22} = -L_1, \\ & \Upsilon_{33} = \delta_3 \phi^{T} L_b^{T} L_b \phi - R_2 (1-\mu) - L_2 \Gamma_2, \quad \Upsilon_{34} = L_2 \Gamma_1, \quad \Upsilon_{44} = -L_2, \\ & \Upsilon_{55} = X_1 \Sigma \Phi X_1 - X_1 L_3 \Gamma_2 X_1, \quad \Upsilon_{56} = L_3 \Gamma_1, \quad \Upsilon_{66} = -L_3, \quad \Upsilon_{77} = -\delta_1 I, \quad \Upsilon_{88} = -\delta_2 I, \quad \Upsilon_{99} = -\delta_3 I. \end{aligned}$$
At the same time, introducing $Y_1 = K X_1$, so that the controller gain matrix is $K = Y_1 X_1^{-1}$, yields
$$\xi = \begin{pmatrix} \xi_{11} & \xi_{12} & 0 & \xi_{14} & \xi_{15} & 0 & \xi_{17} & \xi_{18} & \xi_{19} \\ * & \xi_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & \xi_{33} & \xi_{34} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \xi_{44} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & \xi_{55} & \xi_{56} & 0 & 0 & 0 \\ * & * & * & * & * & \xi_{66} & 0 & 0 & 0 \\ * & * & * & * & * & * & \xi_{77} & 0 & 0 \\ * & * & * & * & * & * & * & \xi_{88} & 0 \\ * & * & * & * & * & * & * & * & \xi_{99} \end{pmatrix} < 0,$$
where
$$\begin{aligned} & \xi_{11} = -2 R X_1 + X_1 \delta_1 L_r^{T} L_r X_1 + X_1 \delta_2 \phi^{T} L_c^{T} L_c \phi X_1 + X_1 R_2 X_1 + 2 H Y_1 - X_1 L_1 \Gamma_2 X_1 - X_1 \Phi X_1, \\ & \xi_{12} = C + X_1 L_1 \Gamma_1, \quad \xi_{14} = B, \quad \xi_{15} = H Y_1, \quad \xi_{17} = J_r, \quad \xi_{18} = J_c, \quad \xi_{19} = J_b, \quad \xi_{22} = -L_1, \\ & \xi_{33} = \delta_3 \phi^{T} L_b^{T} L_b \phi - R_2 (1-\mu) - L_2 \Gamma_2, \quad \xi_{34} = L_2 \Gamma_1, \quad \xi_{44} = -L_2, \\ & \xi_{55} = X_1 \Sigma \Phi X_1 - X_1 L_3 \Gamma_2 X_1, \quad \xi_{56} = L_3 \Gamma_1, \quad \xi_{66} = -L_3, \quad \xi_{77} = -\delta_1 I, \quad \xi_{88} = -\delta_2 I, \quad \xi_{99} = -\delta_3 I. \end{aligned}$$
Consequently,
$$D^{\alpha} V(t) \le \varphi^{T}(t)\, \xi\, \varphi(t),$$
where
$$\varphi(t) = \mathrm{col}\big[ e(t),\; h(e(t)),\; e(t-\sigma(t)),\; h(e(t-\sigma(t))),\; e(t-\tau(t)),\; h(e(t-\tau(t))) \big].$$
By Lyapunov stability theory, the event-triggered synchronization error system (34) of the fractional-order uncertain neural networks is globally asymptotically stable if LMI (35) holds. This completes the proof. □

5. Numerical Example

Example 1.
Consider the following uncertain neural network (5) with time-varying delays, described by
$$D^{\alpha} e(t) = -(R + \Delta R(t))\, e(t) + (C + \Delta C(t))\, h(e(t)) + (B + \Delta B(t))\, h(e(t-\sigma(t))) + H Q(t), \quad (48)$$
with the following parameters
$$C = \begin{pmatrix} 1.5241 & 1.2489 & 1.6844 & 1.2946 & 1.8722 \\ 1.2567 & 1.1247 & 1.4211 & 1.6522 & 1.2807 \\ 1.5427 & 1.1227 & 1.4567 & 1.0425 & 1.1727 \\ 1.2514 & 1.1077 & 1.2404 & 1.6507 & 1.2701 \\ 1.9472 & 1.1174 & 1.2567 & 1.9989 & 1.2486 \end{pmatrix}, \quad B = \begin{pmatrix} 1.4932 & 1.5968 & 1.2567 & 1.0567 & 1.2674 \\ 1.2942 & 1.9942 & 1.6911 & 1.2849 & 1.5677 \\ 1.0977 & 1.4217 & 1.2415 & 1.5661 & 1.5717 \\ 1.2567 & 1.0741 & 1.2961 & 1.2247 & 1.2702 \\ 1.0047 & 1.2742 & 1.4274 & 1.6611 & 1.4428 \end{pmatrix},$$
$$H = \begin{pmatrix} 1.5432 & 1.0968 & 1.2987 & 1.0097 & 1.9974 \\ 1.6542 & 1.5642 & 1.3411 & 1.7649 & 1.5767 \\ 1.2377 & 1.3417 & 1.9815 & 1.3461 & 1.5887 \\ 1.8767 & 1.8741 & 1.6561 & 1.9847 & 1.2092 \\ 1.3247 & 1.2652 & 1.4094 & 1.6871 & 1.4488 \end{pmatrix},$$
together with the diagonal matrices $R = 0.7289\, I$, $J_r = 0.4428\, I$, $L_d = 1.7782\, I$, $J_c = 0.5242\, I$, $L_c = 2.8976\, I$, $L_b = 1.8974\, I$, $J_b = 0.2995\, I$, and $\phi = 0.2494\, I$, where $I$ is the $5 \times 5$ identity matrix.
Moreover, the activation functions are $f(e(t)) = \tanh(e(t))$ and $f(e(t-\sigma(t))) = \sinh(e(t))$.
Solving the LMI conditions provided in (7) based on the MATLAB toolbox returns the following feasible solutions:
$$R_1 = \begin{pmatrix} 0.0284 & 0.0154 & 0.0180 & 0.0127 & 0.0074 \\ 0.0154 & 0.0244 & 0.0120 & 0.0070 & 0.0260 \\ 0.0180 & 0.0120 & 0.0209 & 0.0054 & 0.0102 \\ 0.0127 & 0.0070 & 0.0054 & 0.0904 & 0.0873 \\ 0.0074 & 0.0260 & 0.0102 & 0.0873 & 0.1118 \end{pmatrix}, \qquad R_2 = 36.6572\, I.$$
The gain matrix of the designed controller can be obtained as:
$$K = 9.2914\, I.$$
In addition, $\delta_1 = 20.2099$, $\delta_2 = 20.2097$, $\delta_3 = 20.2099$, $\delta_4 = 20.2099$, and $\delta_5 = 20.2099$, which ensures that system (48) remains synchronized.
Example 2.
Consider the following uncertain neural network with time-varying delays, described by
$$D^{\alpha} e(t) = -(R + \Delta R(t))\, e(t) + (C + \Delta C(t))\, h(e(t)) + (B + \Delta B(t))\, h(e(t-\sigma(t))) + H Q(t), \quad (49)$$
with the following parameters:
C = 1.7841 1.2499 1.6876 1.9046 1.8092 1.3367 1.3447 1.4541 1.6982 1.7807 1.2327 1.1447 1.4897 1.0895 1.5627 1.8714 1.7677 1.2094 1.9807 1.7801 1.3472 1.8974 1.6667 1.5689 1.2986 , B = 1.9832 1.5878 1.6767 1.0567 1.2674 1.3442 1.9482 1.9811 1.2899 1.5097 1.9877 1.4977 1.6615 1.5687 1.5787 1.6767 1.6741 1.2977 1.2277 1.2982 1.9847 1.2892 1.8774 1.6666 1.4499 , H = 1.7632 1.0878 1.2897 1.7897 1.9674 1.9942 1.3342 1.8711 1.7999 1.6767 1.9877 1.3817 1.5615 1.7861 1.4587 1.6567 1.6741 1.9561 1.8747 1.2702 1.6647 1.2652 1.4564 1.6771 1.6788 , R = 0.2389 0 0 0 0 0 0.2389 0 0 0 0 0 0.2389 0 0 0 0 0 0.2389 0 0 0 0 0 0.2389 , J r = 0.7628 0 0 0 0 0 0.7628 0 0 0 0 0 0.7628 0 0 0 0 0 0.7628 0 0 0 0 0 0.7628 , L d = 1.9882 0 0 0 0 0 1.9882 0 0 0 0 0 1.9882 0 0 0 0 0 1.9882 0 0 0 0 0 1.9882 , J c = 0.9087 0 0 0 0 0 0.9087 0 0 0 0 0 0.9087 0 0 0 0 0 0.9087 0 0 0 0 0 0.9087 , L c = 2.5676 0 0 0 0 0 2.5676 0 0 0 0 0 2.5676 0 0 0 0 0 2.5676 0 0 0 0 0 2.5676 ,
L_b = 1.0987 I_5, J_b = 0.8765 I_5, ϕ = 0.2476 I_5, and I = I_5, the 5 × 5 identity matrix.
Moreover, the activation functions are f(e(t)) = tanh(e(t)) and f(e(t − σ(t))) = sinh(e(t − σ(t))). Solving the LMI conditions in (15) using the MATLAB LMI toolbox yields the following feasible solutions:
X_1 = [ 0.0346  0.0132  0.0158  0.0124  0.0074 ;
        0.0132  0.0310  0.0070  0.0006  0.0156 ;
        0.0158  0.0070  0.0301  0.0123  0.0014 ;
        0.0124  0.0006  0.0123  0.1428  0.1226 ;
        0.0074  0.0156  0.0014  0.1226  0.1419 ],
R_2 = 34.3231 I_5, Y = 10.4053 I_5.
The gain matrix of the designed controller can be obtained as:
K = [ 5.0940  1.0518  1.7014  1.6104  1.5249 ;
      1.0518  4.4548  0.0090  1.0158  1.3140 ;
      1.7014  0.0090  4.7390  0.8662  0.7067 ;
      1.6104  1.0158  0.8662  4.3173  3.9362 ;
      1.5249  1.3140  0.7067  3.9362  4.3671 ].
Moreover, δ_1 = 21.1589, δ_2 = 21.1589, δ_3 = 21.1567, δ_4 = 21.1590, and δ_5 = 21.1583, which guarantees that system (49) remains synchronized.
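Feasibility of an LMI solution ultimately means that the returned matrices are positive definite, which can be sanity-checked numerically. The sketch below uses NumPy's Cholesky factorization, which succeeds exactly when a symmetric matrix is positive definite; it checks the diagonal solution R_2 = 34.3231 I_5 from above and, for contrast, a small hypothetical indefinite matrix (not taken from the paper). This is a generic check, not the MATLAB LMI toolbox workflow used to produce the solutions:

```python
import numpy as np

def is_positive_definite(M):
    """Cholesky factorization succeeds iff the symmetric matrix M is positive definite."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

# Diagonal LMI solution from Example 2: R_2 = 34.3231 * I_5.
R2 = 34.3231 * np.eye(5)
print(is_positive_definite(R2))  # True

# A hypothetical indefinite matrix for contrast (eigenvalues 3 and -1).
M = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(is_positive_definite(M))  # False
```

Cholesky is preferred over computing all eigenvalues here because it fails fast on the first non-positive pivot.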
Example 3.
Consider the neural network (20) with the following parameters:
C = [ 1.5041  1.0489  1.0844  1.9946  1.8762 ;
      1.7567  1.5247  1.7211  1.4522  1.2877 ;
      1.0427  1.8227  1.5567  1.9425  1.1877 ;
      1.6514  1.0077  1.2904  1.6507  1.7601 ;
      1.9872  1.6174  1.6567  1.9989  1.0986 ],
A = [ 1.0941  1.9889  1.6544  1.2096  1.1722 ;
      1.1567  1.6547  1.4871  1.6672  1.7807 ;
      1.5727  1.1347  1.4987  1.0765  1.6727 ;
      1.2514  1.8777  1.2094  1.6597  1.9701 ;
      1.9272  1.8874  1.6767  1.8089  1.9486 ],
B = [ 1.4872  1.5878  1.8767  1.6667  1.9074 ;
      1.8742  1.9452  1.9911  1.9049  1.8877 ;
      1.0877  1.4987  1.2315  1.7761  1.0917 ;
      1.0567  1.3441  1.9861  1.0947  1.8902 ;
      1.6047  1.2872  1.4874  1.0911  1.0928 ],
H = [ 1.5782  1.0068  1.7687  1.0097  1.0974 ;
      1.6942  1.8742  1.0911  1.6749  1.8767 ;
      1.6377  1.9817  1.4515  1.9861  1.7687 ;
      1.8097  1.8651  1.0961  1.0947  1.0992 ;
      1.3677  1.2698  1.4874  1.8671  1.9888 ],
R = 0.1459 I_5, ϕ = 0.0987 I_5, and I = I_5, the 5 × 5 identity matrix.
Moreover, the activation functions are f(e(t)) = tanh(e(t)) and f(e(t − σ(t))) = sinh(e(t − σ(t))). Solving the LMI conditions in (21) using the MATLAB LMI toolbox yields the following feasible solutions:
R_1 = [ 0.6253  0.2376  0.3124  0.1046  0.2094 ;
        0.2376  0.4979  0.1106  0.1841  0.0731 ;
        0.3124  0.1106  0.4894  0.1397  0.3460 ;
        0.1046  0.1841  0.1397  1.4658  1.2618 ;
        0.2094  0.0731  0.3460  1.2618  1.5839 ],
R_2 = [ 29.0877   0.0002   0.0001   0.0005   0.0009 ;
         0.0002  29.0933   0.0006   0.0017   0.0018 ;
         0.0001   0.0006  29.0873   0.0002   0.0008 ;
         0.0005   0.0017   0.0002  29.0905   0.0026 ;
         0.0009   0.0018   0.0008   0.0026  29.0847 ].
The gain matrix of the designed controller and trigger parameters can be obtained as follows:
K = [ 6.6909  0.0173  0.0082  0.0374  0.0673 ;
      0.0173  6.4671  0.0464  0.1301  0.1326 ;
      0.0082  0.0464  6.7093  0.0127  0.0577 ;
      0.0374  0.1301  0.0127  6.5897  0.1818 ;
      0.0673  0.1326  0.0577  0.1818  6.8241 ].
Moreover, δ_4 = 4.3607 and δ_5 = 4.5189. Therefore, system (20) remains synchronized.
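An event-triggered communication scheme transmits the error signal to the controller only when a triggering condition is violated, rather than at every sampling instant. The paper's exact condition involves the parameters δ_i; the sketch below stands in a common relative-error rule |e(t) − e(t_k)| > δ |e(t)|, and the scalar decay dynamics and threshold are illustrative assumptions, not the scheme derived in the paper:

```python
# Sketch of an event-triggered update loop on a scalar error signal.
# Trigger rule |e - e_last_sent| > delta * |e| and the simple decaying
# dynamics are illustrative assumptions, not the paper's exact scheme.

def run(delta=0.2, steps=200, e0=1.0):
    e = e0
    e_last_sent = e0      # last transmitted (sampled) error value
    transmissions = 0
    for _ in range(steps):
        # Transmit only when the last sent value has become too stale.
        if abs(e - e_last_sent) > delta * abs(e):
            e_last_sent = e
            transmissions += 1
        # The controller acts on the last transmitted value, not on e itself.
        e = e - 0.05 * e_last_sent
    return transmissions

n = run()
print(n)  # typically far fewer events than the 200 sampling instants
```

Raising δ widens the tolerance band and further reduces communication, at the cost of a larger sampling-induced error between events.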

6. Conclusions

In this paper, the synchronization problem was investigated for fractional-order uncertain delayed neural networks. The Lyapunov direct method is among the most effective tools for analyzing the stability of such networks, and an important inequality on the Caputo derivative of quadratic functions plays a key role in the stability analysis of fractional-order systems. By constructing suitable Lyapunov functionals and applying analytical techniques, sufficient conditions were obtained, and an event-triggered scheme was derived to guarantee the synchronization of the delayed neural networks. We applied the Lyapunov functional method and the LMI approach to establish synchronization criteria for fractional-order neural networks, and a linear matrix inequality approach was developed to solve the problem. Numerical examples were given to demonstrate the effectiveness of the proposed schemes. Future work will focus on event-triggered control for fractional-order systems with time delays and measurement noise. In addition, more effective event-triggered schemes, such as adaptive, dynamic, and hybrid ones, will also be considered for the stability analysis of fractional-order systems.
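For reference, the inequality on the Caputo derivative of quadratic functions mentioned above (due to Aguila-Camacho, Duarte-Mermoud and Gallegos) can be stated as follows; it is what allows quadratic Lyapunov–Krasovskii functionals to be differentiated in the fractional-order setting:

```latex
% Caputo derivative of a quadratic form, for 0 < \alpha \le 1,
% x(t) \in \mathbb{R}^n differentiable, and P symmetric positive definite:
\frac{1}{2}\,{}^{C}D^{\alpha}\!\left( x^{T}(t)\, P\, x(t) \right)
\;\le\; x^{T}(t)\, P \,{}^{C}D^{\alpha} x(t),
\qquad \forall\, t \ge t_0 .
```

Taking V(t) = e^T(t) P e(t) and applying this bound reduces the fractional stability argument to the same LMI manipulations used in the integer-order case.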

Author Contributions

Conceptualization, M.H., M.S.A., T.F.I. and B.A.Y.; methodology, K.I.O., M.H., M.S.A., T.F.I. and B.A.Y.; software, M.H., M.S.A. and K.I.O.; validation, M.H., M.S.A., T.F.I. and K.M.; formal analysis, K.I.O., M.H., M.S.A., T.F.I. and K.M.; investigation, M.H., T.F.I., B.A.Y. and K.I.O.; resources, M.H., M.S.A., T.F.I. and K.M.; writing—review and editing, M.H., M.S.A., T.F.I., B.A.Y. and K.M.; visualization, M.H., M.S.A., T.F.I. and B.A.Y.; supervision, K.I.O., M.H., M.S.A., T.F.I. and B.A.Y.; project administration, M.H., M.S.A., T.F.I. and K.M.; funding acquisition, M.S.A. and K.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project (grant number RGP.2/47/43/1443). Moreover, this research received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number B05F650018).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.
