Article

Synchronization of Fractional Order Uncertain BAM Competitive Neural Networks

by M. Syed Ali 1, M. Hymavathi 1, Syeda Asma Kauser 2, Grienggrai Rajchakit 3, Porpattama Hammachukiattikul 4 and Nattakan Boonsatit 5,*
1 Department of Mathematics, Thiruvalluvar University, Vellore 632115, Tamil Nadu, India
2 Department of Mathematics, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
3 Department of Mathematics, Faculty of Science, Maejo University, Chiang Mai 50290, Thailand
4 Department of Mathematics, Faculty of Science, Phuket Rajabhat University (PKRU), 6 Thepkasattree Road, Raddasa, Phuket 83000, Thailand
5 Department of Mathematics, Faculty of Science and Technology, Rajamangala University of Technology Suvarnabhumi, Nonthaburi 11000, Thailand
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(1), 14; https://doi.org/10.3390/fractalfract6010014
Submission received: 16 October 2021 / Revised: 8 December 2021 / Accepted: 23 December 2021 / Published: 29 December 2021
(This article belongs to the Special Issue Frontiers in Fractional-Order Neural Networks)

Abstract: This article examines the drive-response synchronization of a class of fractional order uncertain BAM (Bidirectional Associative Memory) competitive neural networks. By using differential inclusion theory and constructing a proper Lyapunov–Krasovskii functional, novel sufficient conditions are obtained to achieve global asymptotic stability of fractional order uncertain BAM competitive neural networks. This approach is based on the linear matrix inequality (LMI) technique, and the derived conditions are easy to verify via the LMI toolbox. Moreover, numerical examples are presented to show the feasibility and effectiveness of the theoretical results.

1. Introduction

In recent years, fractional calculus has attracted the attention of researchers because it can describe real phenomena more accurately. Therefore, both in theory and in application, fractional-order calculus is more applicable than traditional integer-order calculus in many branches of science and engineering, such as artificial intelligence, optimal combination, material science, electronic information, and cognitive science. When the model of a system includes at least one fractional derivative or integral term, we call it a fractional order system. The main advantage of fractional-order models over their integer-order counterparts is that fractional derivatives provide an excellent instrument for describing the memory and hereditary properties of various materials and processes [1,2,3]. On the other hand, interest in the stability analysis of various fractional differential systems has been growing rapidly due to their successful applications in widespread fields of science and engineering. Different types of stability criteria, such as global stability [4], global asymptotic stability [5], quasi-uniform stability [6], and global uniform asymptotic fixed deviation stability [7], have been studied.
Meanwhile, real systems are regularly subject to external disturbances. However, such disturbances are often not directly measurable and are hard to quantify. External disturbances and uncertainties exist in almost all industrial systems and degrade the performance, and even the stability, of control frameworks. One intuitive way to manage this issue is to estimate the disturbance, or the effect of the disturbances, from measurable variables, and then take a control action, based on the disturbance estimate, to compensate for the effect of the disturbances. Parameter uncertainties negatively affect the dynamical behaviors of networks, including stability and synchronization, and hence warrant further investigation. To the best of our knowledge, the impact of parameter uncertainties has not been given enough consideration and has only been studied by a few authors [8,9,10,11,12,13,14].
The synchronization problem has attracted continually growing attention from researchers due to its expected applications. Pecora and Carroll introduced a method to synchronize two identical chaotic systems with different initial conditions. The study of synchronization provides research ideas for the analysis of dynamic behavior, which further extends the study of dynamic behavior. However, few practical network systems can be synchronized directly, and neural networks can exhibit chaotic behavior under unpredictable disturbances. In the drive-response (or master-slave) configuration, the response (or slave) system is influenced by the behavior of the drive (or master) system, but the drive (or master) system is independent of the response (or slave) one. Recently, the study of fractional-order synchronization has attracted a host of attention due to emerging applications in biological technology, chemical systems, ecological systems, cryptography, etc. Various types of synchronization of neural networks have been studied extensively [15,16,17,18,19].
Recently, Meyer-Base et al. [20] proposed the so-called competitive neural networks with different time scales. Generally speaking, a competitive neural network (CNN) contains two types of state variables: short-term memory (STM) and long-term memory (LTM). STM describes the rapidly changing behavior of neuronal dynamics, whereas LTM describes the slow behavior of unsupervised neuronal synapses [21,22]. On the other hand, much attention has been devoted to analyzing the stability and synchronization of competitive neural networks; see, for example, [23,24]. Since fractional calculus provides a better way to capture the hereditary and memory nature of dynamical processes, the fractional order competitive neural network (FCNN) is more appropriate than the integer-order CNN. The multi-stability, global stability, and complete synchronization of fractional-order competitive neural networks have been investigated [25,26].
Bidirectional associative memory neural networks (BAMNNs) are a class of two-layer neural systems first introduced by Kosko in 1987 [27]. Many types of neural networks are modeled on biological neural networks and are used to approximate functions that are generally unknown. Generally speaking, the neurons in one layer are fully interconnected with the neurons in the other layer, while there may be no interconnection among neurons within the same layer. Owing to these features, BAM neural networks have been widely discussed both in theory and in applications, in fields such as signal processing [28], image processing [29], and pattern recognition [30].
Inspired by the above discussions, we propose stability and synchronization criteria for fractional order uncertain BAM competitive neural networks. The main contributions of this work are as follows:
(1)
We extend competitive neural networks with different time scales to bidirectional associative memory neural networks with different time scales and propose a double-layer uncertain BAM competitive neural network.
(2)
Uncertain BAM competitive neural networks are introduced for the first time.
(3)
We construct novel Lyapunov–Krasovskii functionals and apply inequality techniques to achieve synchronization of the considered systems.
(4)
The derived conditions are expressed in terms of linear matrix inequalities (LMIs), which can be checked numerically very efficiently via the LMI toolbox.
(5)
Lastly, numerical results are given to show the effectiveness of the proposed results.

2. Preliminaries

Definition 1 
([31]). The fractional-order integral of order $\alpha \in (0,1)$ for an integrable function $f(t) \in C^{m}([0,+\infty),\mathbb{R})$ is defined as
$$I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\xi)^{\alpha-1} f(\xi)\, d\xi,$$
where $\Gamma(\cdot)$ is Euler's gamma function, given by
$$\Gamma(i) = \int_{0}^{\infty} e^{-\gamma}\, \gamma^{i-1}\, d\gamma, \qquad \mathrm{Re}(i) > 0,$$
where $\mathrm{Re}(i)$ is the real part of $i$.
Definition 2 
([31]). The Caputo fractional derivative of order $\beta$ for a function $f(t)$ is defined as
$$D^{\beta} f(t) = \frac{1}{\Gamma(m-\beta)} \int_{0}^{t} \frac{f^{(m)}(\gamma)}{(t-\gamma)^{\beta-m+1}}\, d\gamma,$$
where $t \ge 0$ and $m-1 < \beta < m \in \mathbb{Z}^{+}$. In particular, when $\beta \in (0,1)$,
$$D^{\beta} f(t) = \frac{1}{\Gamma(1-\beta)} \int_{0}^{t} \frac{f'(\gamma)}{(t-\gamma)^{\beta}}\, d\gamma.$$
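Definition 2 can be sanity-checked numerically: for a sufficiently smooth function with $f(0) = 0$, the Caputo derivative agrees with the Grünwald–Letnikov limit, which admits a simple finite-difference approximation. The following sketch is an illustration, not part of the paper's method; the step count n = 2000 is an arbitrary choice. It approximates $D^{0.5}$ of $f(t) = t^{2}$ and compares it with the closed form $2 t^{2-\alpha}/\Gamma(3-\alpha)$:

```python
import math

def gl_caputo(f, t, alpha, n=2000):
    """Grunwald-Letnikov approximation of the order-alpha Caputo derivative
    of f at time t, valid here because f(0) = 0 (so the Caputo and
    Riemann-Liouville definitions coincide)."""
    h = t / n
    c, acc = 1.0, f(t)                    # c_0 = 1, k = 0 term
    for k in range(1, n + 1):
        c *= 1.0 - (alpha + 1.0) / k      # recursive binomial coefficients
        acc += c * f(t - k * h)
    return acc / h**alpha

# Caputo derivative of t^2 is 2 t^(2-alpha) / Gamma(3-alpha)
alpha, t = 0.5, 1.0
approx = gl_caputo(lambda s: s * s, t, alpha)
exact = 2.0 * t**(2.0 - alpha) / math.gamma(3.0 - alpha)
```

The Grünwald–Letnikov scheme is first-order in the step size, so the two values agree to roughly three decimal places here.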
Definition 3 
([31]). The one-parameter Mittag–Leffler function is defined as
$$E_{\phi}(p) = \sum_{l=0}^{\infty} \frac{p^{l}}{\Gamma(l\phi+1)}.$$
Definition 4 
([31]). The two-parameter Mittag–Leffler function is defined as
$$E_{\phi,\psi}(p) = \sum_{l=0}^{\infty} \frac{p^{l}}{\Gamma(l\phi+\psi)}.$$
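The series in Definitions 3 and 4 converge for all $p$, so both functions can be evaluated by direct truncation. A minimal sketch (the truncation length of 80 terms is an arbitrary choice) that checks two classical special cases, $E_{1}(p) = e^{p}$ and $E_{2}(p) = \cosh(\sqrt{p})$:

```python
import math

def mittag_leffler(phi, psi, p, terms=80):
    """Two-parameter Mittag-Leffler function E_{phi,psi}(p) via its power
    series; the one-parameter E_phi(p) is recovered with psi = 1."""
    return sum(p**l / math.gamma(l * phi + psi) for l in range(terms))

# Sanity checks against known special cases:
# E_1(p) = exp(p) and E_2(p) = cosh(sqrt(p)) for p >= 0.
print(abs(mittag_leffler(1.0, 1.0, 1.3) - math.exp(1.3)) < 1e-10)
print(abs(mittag_leffler(2.0, 1.0, 2.0) - math.cosh(math.sqrt(2.0))) < 1e-10)
```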
Lemma 1 
([32]). Let $f : J \times \mathbb{R} \to \mathbb{R}$ be a continuous function. A function $\varphi \in C(J,\mathbb{R})$ is a solution of the fractional integral equation
$$\varphi(t) = \varphi_{0} - \frac{1}{\Gamma(\alpha)} \int_{0}^{b} (b-i)^{\alpha-1} f(i)\, di + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-i)^{\alpha-1} f(i)\, di$$
if and only if $\varphi(t)$ is a solution of the following fractional Cauchy problem:
$$D^{\alpha} \varphi(t) = f(t,\varphi(t)), \quad t \in J, \qquad \varphi(b) = \varphi_{0}, \quad b \in (0,T).$$
Lemma 2 
([32]). If $p(t) \in C^{m}[0,\infty)$, then
$$I^{\alpha} D^{\alpha} p(t) = p(t) - \sum_{\delta=0}^{m-1} \frac{t^{\delta}}{\delta!}\, p^{(\delta)}(0), \qquad \alpha \ge 0,$$
where $m \in \mathbb{Z}^{+}$. In particular, for $0 < \alpha < 1$ and $p(t) \in C^{1}[0,\infty)$,
$$D^{\alpha} I^{\alpha} p(t) = p(t)$$
and
$$I^{\alpha} D^{\alpha} p(t) = p(t) - p(0).$$
Lemma 3 
([33]). For $\alpha > 0$, assume $h(t)$ is a non-negative, non-decreasing function locally integrable on $0 \le t < R$ ($R \le +\infty$), and $d(t) \le N$ is a non-negative, non-decreasing continuous function defined on $0 \le t < R$, where $N$ is a constant. If $v(t)$ is non-negative, locally integrable on $0 \le t < R$, and satisfies
$$v(t) \le h(t) + d(t) \int_{0}^{t} (t-s)^{\alpha-1} v(s)\, ds,$$
then
$$v(t) \le h(t)\, E_{\alpha}\big(d(t)\,\Gamma(\alpha)\, t^{\alpha}\big).$$
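Lemma 3 can be illustrated numerically by discretizing the equality case of the integral inequality with a product rectangle rule and checking the Mittag–Leffler bound pointwise. The constants $\alpha = 0.6$, $h(t) \equiv 1$, $d(t) \equiv 0.5$ below are illustrative assumptions, not values from the paper:

```python
import math

def ml(alpha, x, terms=100):
    """One-parameter Mittag-Leffler function E_alpha(x) via its series."""
    return sum(x**l / math.gamma(l * alpha + 1) for l in range(terms))

# Discretize  v(t) = h0 + d * int_0^t (t-s)^(alpha-1) v(s) ds  (the equality
# case of Lemma 3 with constant h and d) by a product rectangle rule.
alpha, h0, d, T, n = 0.6, 1.0, 0.5, 2.0, 400
dt = T / n
v = [h0]
for m in range(1, n + 1):
    tm = m * dt
    # exact weight of [t_j, t_{j+1}] against the kernel (t_m - s)^(alpha-1)
    acc = sum(((tm - j * dt)**alpha - (tm - (j + 1) * dt)**alpha) / alpha * v[j]
              for j in range(m))
    v.append(h0 + d * acc)

# Gronwall bound of Lemma 3: v(t) <= h0 * E_alpha(d * Gamma(alpha) * t^alpha)
bound_ok = all(
    v[m] <= h0 * ml(alpha, d * math.gamma(alpha) * (m * dt)**alpha) + 1e-9
    for m in range(n + 1)
)
print(bound_ok)
```

The left-endpoint rule underestimates the (non-decreasing) solution, so the discrete trajectory stays below the Mittag–Leffler bound, as the lemma predicts.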
Lemma 4 
([33]). Assume that $\xi(t) \in C^{1}[p,q]$ satisfies
$$D^{\alpha} \xi(t) = f(t,\xi(t)) \ge 0, \qquad 0 < \alpha < 1,$$
for all $t \in [p,q]$; then $\xi(t)$ is monotonically non-decreasing for $0 < \alpha < 1$. If
$$D^{\alpha} \xi(t) = f(t,\xi(t)) \le 0, \qquad 0 < \alpha < 1,$$
then $\xi(t)$ is monotonically non-increasing for $0 < \alpha < 1$.
Lemma 5 
([34]). Let $u(t) \in \mathbb{R}^{n}$ be a differentiable vector-valued function. Then, for any $t > 0$,
$$D^{\alpha}\big(u^{T}(t) S u(t)\big) \le 2 u^{T}(t) S\, D^{\alpha} u(t), \qquad 0 < \alpha < 1.$$
Lemma 6 
([34]). For a given positive scalar $\lambda > 0$, vectors $w, z \in \mathbb{R}^{m}$, and a matrix $D$,
$$w^{T} D z \le \frac{\lambda^{-1}}{2}\, w^{T} D D^{T} w + \frac{\lambda}{2}\, z^{T} z.$$
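Lemma 6 is a matrix Young-type inequality. A quick randomized check of the gap between its two sides (a sketch; the dimensions, seed, and sampled $\lambda$ values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def lemma6_gap(w, z, D, lam):
    """RHS - LHS of  w^T D z <= (1/(2 lam)) w^T D D^T w + (lam/2) z^T z;
    nonnegative whenever the bound of Lemma 6 holds."""
    lhs = w @ D @ z
    rhs = (w @ D @ D.T @ w) / (2 * lam) + lam * (z @ z) / 2
    return rhs - lhs

gaps = [lemma6_gap(rng.normal(size=3), rng.normal(size=3),
                   rng.normal(size=(3, 3)), lam)
        for lam in (0.5, 1.0, 3.0) for _ in range(100)]
print(min(gaps) >= 0)  # the Young-type bound never fails
```

The bound follows from $0 \le \|\lambda^{1/2} z - \lambda^{-1/2} D^{T} w\|^{2}$, which is why the gap is nonnegative for every sample.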
Lemma 7 
([35]). For given matrices $S$, $Q$, $N$ with $N > 0$,
$$\begin{bmatrix} Q & S^{T} \\ S & -N \end{bmatrix} < 0$$
if and only if
$$Q + S^{T} N^{-1} S < 0.$$
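Lemma 7 is the familiar Schur complement, which is what lets the nonlinear conditions of the main theorems be rewritten as LMIs. The equivalence can be verified numerically by comparing eigenvalue tests of the two forms on random data (a sketch; the dimensions and the 0.1 I regularization are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def block_neg_def(Q, S, N):
    """Eigenvalue test of  [[Q, S^T], [S, -N]] < 0."""
    top = np.hstack([Q, S.T])
    bot = np.hstack([S, -N])
    return np.linalg.eigvalsh(np.vstack([top, bot])).max() < 0

def schur_neg_def(Q, S, N):
    """Eigenvalue test of  Q + S^T N^{-1} S < 0."""
    return np.linalg.eigvalsh(Q + S.T @ np.linalg.inv(N) @ S).max() < 0

# Random symmetric Q < 0 and N > 0: the two tests must always agree.
agree = True
for _ in range(200):
    A = rng.normal(size=(3, 3)); Q = -(A @ A.T) - 0.1 * np.eye(3)
    B = rng.normal(size=(3, 3)); N = B @ B.T + 0.1 * np.eye(3)
    S = rng.normal(size=(3, 3))
    agree &= block_neg_def(Q, S, N) == schur_neg_def(Q, S, N)
print(agree)
```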
Lemma 8 
([36]). Let $\pi(p) : \bar{\lambda} \to \mathbb{R}^{n}$ be a continuous mapping. If $\deg(\pi(p), \lambda, y) \ne 0$, then there exists at least one solution of $\pi(p) = y$ in $\lambda$.
For the neuron activation functions, the following assumption is given.
Assumption 1. 
For $j = 1, 2, \ldots, m$, the nonlinear activation function $h_{j}$ with $h_{j}(0) = 0$ is Lipschitz continuous; namely, there exists a Lipschitz constant $l_{j} > 0$ such that
$$|h_{j}(p_{j}) - h_{j}(q_{j})| \le l_{j}\, |p_{j} - q_{j}| \quad \text{for all } p_{j}, q_{j} \in \mathbb{R}.$$
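A standard activation such as tanh satisfies Assumption 1 with $l_{j} = 1$, since $\tanh(0) = 0$ and $|\tanh'| \le 1$. A quick randomized check (the sampling range and seed are arbitrary choices):

```python
import math
import random

random.seed(2)

# tanh(0) = 0 and |tanh(p) - tanh(q)| <= 1 * |p - q|, so tanh satisfies
# Assumption 1 with Lipschitz constant l_j = 1.
lipschitz_ok = all(
    abs(math.tanh(p) - math.tanh(q)) <= abs(p - q) + 1e-12
    for p, q in ((random.uniform(-5, 5), random.uniform(-5, 5))
                 for _ in range(1000))
)
print(lipschitz_ok)
```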

3. Main Results

In this section, we present the synchronization of the following fractional order uncertain BAM competitive neural networks with delays:
$$\begin{aligned}
\epsilon D^{\alpha} x_{i}(t) &= -(a_{i} + \Delta a_{i}(t)) x_{i}(t) + \sum_{j=1}^{m} (c_{ji} + \Delta c_{ji}(t)) f_{j}(y_{j}(t)) + \sum_{j=1}^{m} (p_{ji} + \Delta p_{ji}(t)) v_{j}(y_{j}(t-\sigma)) \\
&\quad + (b_{i} + \Delta b_{i}(t)) \sum_{l=1}^{\gamma} r_{li}(t) \theta_{l} + I_{i}, \\
D^{\alpha} r_{li}(t) &= -(d_{i} + \Delta d_{i}(t)) r_{li}(t) + \theta_{l} (h_{i} + \Delta h_{i}(t)) f_{i}(y_{i}(t)), \quad i = 1, 2, \ldots, n, \; l = 1, 2, \ldots, \gamma, \\
\epsilon D^{\alpha} y_{j}(t) &= -(l_{j} + \Delta l_{j}(t)) y_{j}(t) + \sum_{i=1}^{n} (e_{ij} + \Delta e_{ij}(t)) g_{i}(x_{i}(t)) + \sum_{i=1}^{n} (q_{ij} + \Delta q_{ij}(t)) u_{i}(x_{i}(t-\sigma)) \\
&\quad + (m_{j} + \Delta m_{j}(t)) \sum_{o=1}^{\rho} s_{oj}(t) \delta_{o} + J_{j}, \\
D^{\alpha} s_{oj}(t) &= -(t_{j} + \Delta t_{j}(t)) s_{oj}(t) + \delta_{o} (\bar{h}_{j} + \Delta \bar{h}_{j}(t)) g_{j}(x_{j}(t)), \quad j = 1, 2, \ldots, m, \; o = 1, 2, \ldots, \rho,
\end{aligned} \tag{1}$$
where $x(t) = (x_{1}(t), x_{2}(t), \ldots, x_{n}(t))^{T}$ and $y(t) = (y_{1}(t), y_{2}(t), \ldots, y_{m}(t))^{T}$; $x_{i}(t)$ and $y_{j}(t)$ are the states of the $i$th and $j$th neurons in the U-layer and V-layer, respectively; $f_{j}(\cdot)$, $g_{i}(\cdot)$ are the neuron activation functions; $D^{\alpha}$, $\alpha \in (0,1)$, is the Caputo fractional-order derivative operator defined in the preceding section; $a_{i} > 0$ and $l_{j} > 0$ are the time constants; $d_{i}$, $h_{i}$, $t_{j}$, and $\bar{h}_{j}$ represent the disposable scaling positive scalars; $c_{ji}$ and $e_{ij}$ represent the connection weights of the $i$th and $j$th neurons; $p_{ji}$ and $q_{ij}$ represent the synaptic interconnection weights describing the dynamical efficiency of the synaptic strength between the U-layer and the V-layer, respectively; $\sigma$ denotes the constant time delay; $r_{li}$ and $s_{oj}$ represent the synaptic efficiency; $b_{i}$ and $m_{j}$ represent the strength of the external stimulus terms; $\epsilon = (\epsilon_{1}, \epsilon_{2}, \ldots, \epsilon_{m})^{T}$ represents the constant external stimulus; and $I_{i}$ and $J_{j}$ are constant external inputs. Next, we denote $u_{i}(t) = \sum_{l=1}^{\gamma} r_{li}(t) \theta_{l} = \theta\, r_{i}^{T}(t)$, $i = 1, 2, \ldots, n$, and $v_{j}(t) = \sum_{o=1}^{\rho} s_{oj}(t) \delta_{o} = \delta\, s_{j}^{T}(t)$, $j = 1, 2, \ldots, m$. $\Delta a_{i}$, $\Delta c_{ji}$, $\Delta p_{ji}$, $\Delta b_{i}$, $\Delta d_{i}$, $\Delta h_{i}$, $\Delta l_{j}$, $\Delta e_{ij}$, $\Delta q_{ij}$, $\Delta m_{j}$, $\Delta t_{j}$, and $\Delta \bar{h}_{j}$ are uncertain parameters, which will be defined later.
The state-space form of Equation (1) can be rearranged by the following form:
$$\begin{aligned}
\epsilon D^{\alpha} x_{i}(t) &= -(a_{i} + \Delta a_{i}(t)) x_{i}(t) + \sum_{j=1}^{m} (c_{ji} + \Delta c_{ji}(t)) f_{j}(y_{j}(t)) + \sum_{j=1}^{m} (p_{ji} + \Delta p_{ji}(t)) v_{j}(y_{j}(t-\sigma)) \\
&\quad + (b_{i} + \Delta b_{i}(t)) u_{i}(t) + I_{i}, \\
D^{\alpha} u_{i}(t) &= -(d_{i} + \Delta d_{i}(t)) u_{i}(t) + |\theta_{l}|^{2} (h_{i} + \Delta h_{i}(t)) f_{i}(u_{i}(t)), \quad i = 1, 2, \ldots, n, \\
\epsilon D^{\alpha} y_{j}(t) &= -(l_{j} + \Delta l_{j}(t)) y_{j}(t) + \sum_{i=1}^{n} (e_{ij} + \Delta e_{ij}(t)) g_{i}(x_{i}(t)) + \sum_{i=1}^{n} (q_{ij} + \Delta q_{ij}(t)) u_{i}(x_{i}(t-\sigma)) \\
&\quad + (m_{j} + \Delta m_{j}(t)) v_{j}(t) + J_{j}, \\
D^{\alpha} v_{j}(t) &= -(t_{j} + \Delta t_{j}(t)) v_{j}(t) + |\delta_{o}|^{2} (\bar{h}_{j} + \Delta \bar{h}_{j}(t)) g_{j}(v_{j}(t)), \quad j = 1, 2, \ldots, m,
\end{aligned} \tag{2}$$
where $|\theta_{l}|^{2} = \theta_{1}^{2} + \cdots + \theta_{m}^{2}$ and $|\delta_{o}|^{2} = \delta_{1}^{2} + \cdots + \delta_{n}^{2}$ are scalars. Without loss of generality, the input stimulus vectors $\theta$ and $\delta$ are assumed to be normalized with unit magnitudes $|\theta_{l}|^{2} = 1$ and $|\delta_{o}|^{2} = 1$ for $l = 1, 2, \ldots, n$ and $o = 1, 2, \ldots, m$, so that the system reduces to
$$\begin{aligned}
\epsilon D^{\alpha} x_{i}(t) &= -(a_{i} + \Delta a_{i}(t)) x_{i}(t) + \sum_{j=1}^{m} (c_{ji} + \Delta c_{ji}(t)) f_{j}(y_{j}(t)) + \sum_{j=1}^{m} (p_{ji} + \Delta p_{ji}(t)) v_{j}(y_{j}(t-\sigma)) \\
&\quad + (b_{i} + \Delta b_{i}(t)) u_{i}(t) + I_{i}, \\
D^{\alpha} u_{i}(t) &= -(d_{i} + \Delta d_{i}(t)) u_{i}(t) + (h_{i} + \Delta h_{i}(t)) f_{i}(u_{i}(t)), \quad i = 1, 2, \ldots, n, \\
\epsilon D^{\alpha} y_{j}(t) &= -(l_{j} + \Delta l_{j}(t)) y_{j}(t) + \sum_{i=1}^{n} (e_{ij} + \Delta e_{ij}(t)) g_{i}(x_{i}(t)) + \sum_{i=1}^{n} (q_{ij} + \Delta q_{ij}(t)) u_{i}(x_{i}(t-\sigma)) \\
&\quad + (m_{j} + \Delta m_{j}(t)) v_{j}(t) + J_{j}, \\
D^{\alpha} v_{j}(t) &= -(t_{j} + \Delta t_{j}(t)) v_{j}(t) + (\bar{h}_{j} + \Delta \bar{h}_{j}(t)) g_{j}(v_{j}(t)), \quad j = 1, 2, \ldots, m.
\end{aligned} \tag{3}$$
The compact form of (3) is given by
$$\begin{aligned}
D^{\alpha} x(t) &= -\frac{1}{\epsilon}(A + \Delta A(t)) x(t) + \frac{1}{\epsilon}(C + \Delta C(t)) f(y(t)) + \frac{1}{\epsilon}(P + \Delta P(t)) v(y(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(B + \Delta B(t)) u(t) + \frac{I}{\epsilon}, \\
D^{\alpha} u(t) &= -(D + \Delta D(t)) u(t) + (H + \Delta H(t)) f(u(t)), \\
D^{\alpha} y(t) &= -\frac{1}{\epsilon}(L + \Delta L(t)) y(t) + \frac{1}{\epsilon}(E + \Delta E(t)) g(x(t)) + \frac{1}{\epsilon}(Q + \Delta Q(t)) u(x(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(M + \Delta M(t)) v(t) + \frac{J}{\epsilon}, \\
D^{\alpha} v(t) &= -(T + \Delta T(t)) v(t) + (\bar{H} + \Delta \bar{H}(t)) g(v(t)),
\end{aligned} \tag{4}$$
where A = d i a g { a 1 ( t ) , . . . , a n ( t ) } , P = ( p j i ) n × n , C = ( c j i ) n × n , B = d i a g { b 1 ( t ) , . . . , b n ( t ) } ,
D = d i a g { d 1 ( t ) , . . . , d n ( t ) } , H = d i a g { h 1 ( t ) , . . . , h n ( t ) } , Δ A ( t ) = d i a g { Δ a 1 ( t ) , . . . , Δ a n ( t ) } ,
Δ C ( t ) = ( Δ c ji ) n × n ( t ) , Δ P ( t ) = ( Δ p ji ) n × n ( t ) , Δ B ( t ) = d i a g { Δ b 1 ( t ) , . . . , Δ b n ( t ) } ,
Δ D ( t ) = d i a g { Δ d 1 ( t ) , . . . , Δ d n ( t ) } , Δ H ( t ) = d i a g { Δ h 1 ( t ) , . . . , Δ h n ( t ) } ,
L = d i a g { l 1 ( t ) , . . . , l m ( t ) } , E = ( e i j ) m × m , Q = ( q ij ) m × m , M = d i a g { m 1 ( t ) , . . . , m m ( t ) } , T = d i a g { t 1 ( t ) , . . . , t m ( t ) } , H ¯ = d i a g { h ¯ 1 ( t ) , . . . , h ¯ m ( t ) } , Δ L ( t ) = d i a g { Δ l 1 ( t ) , . . . , Δ l m ( t ) } ,
Δ E ( t ) = ( Δ e ij ) m × m ( t ) , Δ Q ( t ) = ( Δ q ij ) m × m ( t ) , Δ M ( t ) = d i a g { Δ m 1 ( t ) , . . . , Δ m m ( t ) } ,
Δ T ( t ) = d i a g { Δ t 1 ( t ) , . . . , Δ t m ( t ) } , Δ H ¯ ( t ) = d i a g { Δ h ¯ 1 ( t ) , . . . , Δ h ¯ m ( t ) } .
The structure of time-varying parameter uncertain matrices Δ A ( t ) , Δ C ( t ) , Δ P ( t ) , Δ B ( t ) , Δ D ( t ) , Δ H ( t )   Δ L ( t ) , Δ E ( t ) , Δ Q ( t ) , Δ M ( t ) , Δ T ( t ) , and Δ H ¯ ( t ) satisfy the following conditions:
$$\begin{aligned}
&\Delta A(t) = J_{a} K(t) L_{a}, \quad \Delta C(t) = J_{c} K(t) L_{c}, \quad \Delta P(t) = J_{p} K(t) L_{p}, \quad \Delta B(t) = J_{b} K(t) L_{b}, \\
&\Delta D(t) = J_{d} K(t) L_{d}, \quad \Delta H(t) = J_{h} K(t) L_{h}, \quad \Delta L(t) = J_{l} K(t) L_{l}, \quad \Delta E(t) = J_{e} K(t) L_{e}, \\
&\Delta Q(t) = J_{q} K(t) L_{q}, \quad \Delta M(t) = J_{m} K(t) L_{m}, \quad \Delta T(t) = J_{t} K(t) L_{t}, \quad \text{and} \quad \Delta \bar{H}(t) = J_{\bar{h}} K(t) L_{\bar{h}},
\end{aligned}$$
where $J_{a}$, $J_{c}$, $J_{p}$, $J_{b}$, $J_{d}$, $J_{h}$, $J_{l}$, $J_{e}$, $J_{q}$, $J_{m}$, $J_{t}$, $J_{\bar{h}}$, $L_{a}$, $L_{c}$, $L_{p}$, $L_{b}$, $L_{d}$, $L_{h}$, $L_{l}$, $L_{e}$, $L_{q}$, $L_{m}$, $L_{t}$, and $L_{\bar{h}}$ are known constant matrices, and $K(t)$ is an unknown time-varying matrix satisfying $K^{T}(t) K(t) \le I$. The initial values of system (4) are associated with $x(t) = \rho(t) \in C([-\sigma,0],\mathbb{R}^{n})$, $u(t) = \tilde{\rho}(t) \in C([-\sigma,0],\mathbb{R}^{n})$, $y(t) = \Psi(t) \in C([-\sigma,0],\mathbb{R}^{m})$, and $v(t) = \tilde{\Psi}(t) \in C([-\sigma,0],\mathbb{R}^{m})$.
The corresponding response (slave) system of master system (4) is
$$\begin{aligned}
D^{\alpha} \check{x}(t) &= -\frac{1}{\epsilon}(A + \Delta A(t)) \check{x}(t) + \frac{1}{\epsilon}(C + \Delta C(t)) f(\check{y}(t)) + \frac{1}{\epsilon}(P + \Delta P(t)) v(\check{y}(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(B + \Delta B(t)) \check{u}(t) + \frac{I}{\epsilon} + \alpha(t), \\
D^{\alpha} \check{u}(t) &= -(D + \Delta D(t)) \check{u}(t) + (H + \Delta H(t)) f(\check{u}(t)) + \beta(t), \\
D^{\alpha} \check{y}(t) &= -\frac{1}{\epsilon}(L + \Delta L(t)) \check{y}(t) + \frac{1}{\epsilon}(E + \Delta E(t)) g(\check{x}(t)) + \frac{1}{\epsilon}(Q + \Delta Q(t)) u(\check{x}(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(M + \Delta M(t)) \check{v}(t) + \frac{J}{\epsilon} + \gamma(t), \\
D^{\alpha} \check{v}(t) &= -(T + \Delta T(t)) \check{v}(t) + (\bar{H} + \Delta \bar{H}(t)) g(\check{v}(t)) + \delta(t).
\end{aligned} \tag{5}$$
Defining the errors $w(t) = \check{x}(t) - x(t)$, $\theta(t) = \check{u}(t) - u(t)$, $z(t) = \check{y}(t) - y(t)$, and $\psi(t) = \check{v}(t) - v(t)$, the error system can be described as follows:
$$\begin{aligned}
D^{\alpha} w(t) &= -\frac{1}{\epsilon}(A + \Delta A(t)) w(t) + \frac{1}{\epsilon}(C + \Delta C(t)) f(z(t)) + \frac{1}{\epsilon}(P + \Delta P(t)) v(z(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(B + \Delta B(t)) \theta(t) + \alpha(t), \\
D^{\alpha} \theta(t) &= -(D + \Delta D(t)) \theta(t) + (H + \Delta H(t)) f(\theta(t)) + \beta(t), \\
D^{\alpha} z(t) &= -\frac{1}{\epsilon}(L + \Delta L(t)) z(t) + \frac{1}{\epsilon}(E + \Delta E(t)) g(w(t)) + \frac{1}{\epsilon}(Q + \Delta Q(t)) u(w(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(M + \Delta M(t)) \psi(t) + \gamma(t), \\
D^{\alpha} \psi(t) &= -(T + \Delta T(t)) \psi(t) + (\bar{H} + \Delta \bar{H}(t)) g(\psi(t)) + \delta(t),
\end{aligned} \tag{6}$$
where $\alpha(t)$, $\beta(t)$, $\gamma(t)$, and $\delta(t)$ denote the state-feedback controllers with gain matrices $K_{1}$, $K_{2}$, $K_{3}$, and $K_{4}$:
$$\alpha(t) = K_{1} w(t), \quad \beta(t) = K_{2} \theta(t), \quad \gamma(t) = K_{3} z(t), \quad \delta(t) = K_{4} \psi(t).$$
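To see how such a linear feedback term drives an error component to zero, consider one decoupled scalar version of the first error equation, $D^{\alpha} w(t) = -(a/\epsilon) w(t) - k\, w(t)$, stepped with an implicit Grünwald–Letnikov scheme. The values $a = 1.324$ and $\epsilon = 1.589$ are borrowed from Example 1, while the gain $k = 5$ is a hypothetical choice for illustration, not a gain computed in the paper:

```python
def simulate_error(alpha, lam, w0, T=10.0, n=1000):
    """Implicit Grunwald-Letnikov stepping of the scalar error dynamic
    D^alpha w(t) = -lam * w(t), where lam collects the leakage a/eps
    plus an illustrative scalar feedback gain k."""
    h = T / n
    c = [1.0]                                  # GL binomial coefficients
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    w = [w0]
    for m in range(1, n + 1):
        hist = sum(c[j] * w[m - j] for j in range(1, m + 1))
        w.append(-hist / (1.0 + lam * h**alpha))
    return w

# a = 1.324 and eps = 1.589 as in Example 1; k = 5.0 is a hypothetical gain.
traj = simulate_error(alpha=0.91, lam=1.324 / 1.589 + 5.0, w0=1.0)
decayed = abs(traj[-1]) < 1e-2 < abs(traj[0])
print(decayed)
```

The trajectory follows the expected Mittag–Leffler decay $E_{\alpha}(-\lambda t^{\alpha})$, which is algebraic rather than exponential in its tail.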
Theorem 1. 
For given scalars $\lambda_{1}, \ldots, \lambda_{6}$, $\mu_{1}, \ldots, \mu_{6}$, $\xi_{1}, \ldots, \xi_{6}$, and $\eta_{1}, \ldots, \eta_{6}$, if there exist positive definite matrices $R_{1}$, $M_{1}$, $U_{1}$, $R_{2}$, $M_{2}$, $U_{2}$ such that the following LMIs hold:
$$\Upsilon_{1} = \begin{bmatrix}
\Psi_{1} & R_{1}C & R_{1}P & R_{1}B & R_{1} & R_{1}J_{a} & R_{1}J_{c} & R_{1}J_{p} & R_{1}J_{b} \\
* & -\epsilon\lambda_{1} I & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & -\epsilon\lambda_{2} I & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\epsilon\lambda_{3} I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -\lambda_{4} I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -\epsilon\mu_{1} I & 0 & 0 & 0 \\
* & * & * & * & * & * & -\epsilon\mu_{2} I & 0 & 0 \\
* & * & * & * & * & * & * & -\epsilon\mu_{3} I & 0 \\
* & * & * & * & * & * & * & * & -\epsilon\mu_{4} I
\end{bmatrix} < 0,$$
$$\Upsilon_{2} = \begin{bmatrix}
\Psi_{2} & R_{2}E & R_{2}Q & R_{2}M & R_{2} & R_{2}J_{l} & R_{2}J_{e} & R_{2}J_{q} & R_{2}J_{m} \\
* & -\epsilon\xi_{1} I & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & -\epsilon\xi_{2} I & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\epsilon\xi_{3} I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -\xi_{4} I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -\epsilon\eta_{1} I & 0 & 0 & 0 \\
* & * & * & * & * & * & -\epsilon\eta_{2} I & 0 & 0 \\
* & * & * & * & * & * & * & -\epsilon\eta_{3} I & 0 \\
* & * & * & * & * & * & * & * & -\epsilon\eta_{4} I
\end{bmatrix} < 0,$$
$$\Upsilon_{3} = \begin{bmatrix}
\Psi_{3} & U_{1}H & U_{1} & U_{1}J_{d} & U_{1}J_{h} \\
* & -\lambda_{5} I & 0 & 0 & 0 \\
* & * & -\lambda_{6} I & 0 & 0 \\
* & * & * & -\mu_{5} I & 0 \\
* & * & * & * & -\mu_{6} I
\end{bmatrix} < 0,$$
$$\Upsilon_{4} = \begin{bmatrix}
\Psi_{4} & U_{2}\bar{H} & U_{2} & U_{2}J_{t} & U_{2}J_{\bar{h}} \\
* & -\xi_{5} I & 0 & 0 & 0 \\
* & * & -\xi_{6} I & 0 & 0 \\
* & * & * & -\eta_{5} I & 0 \\
* & * & * & * & -\eta_{6} I
\end{bmatrix} < 0,$$
$$\begin{aligned}
\Psi_{1} &= -\frac{2}{\epsilon} R_{1} A + M_{1} + \frac{\mu_{1}}{\epsilon} L_{a}^{T} L_{a} + \frac{\xi_{1}}{\epsilon} \phi^{T}\phi + \frac{\eta_{2}}{\epsilon} \phi^{T} L_{e}^{T} L_{e} \phi + 2 R_{1} K_{1}, \\
\Psi_{2} &= -\frac{2}{\epsilon} R_{2} L + M_{2} + \frac{\eta_{1}}{\epsilon} L_{l}^{T} L_{l} + \frac{\lambda_{1}}{\epsilon} \phi^{T}\phi + \frac{\mu_{2}}{\epsilon} \phi^{T} L_{c}^{T} L_{c} \phi + 2 R_{2} K_{3}, \\
\Psi_{3} &= \frac{\lambda_{3}}{\epsilon} I + \frac{\mu_{4}}{\epsilon} L_{b}^{T} L_{b} - 2 U_{1} D + \lambda_{5} \phi^{T}\phi + \mu_{5} L_{d}^{T} L_{d} + \mu_{6} \phi^{T} L_{h}^{T} L_{h} \phi + 2 U_{1} K_{2}, \\
\Psi_{4} &= \frac{\xi_{3}}{\epsilon} I + \frac{\eta_{4}}{\epsilon} L_{m}^{T} L_{m} - 2 U_{2} T + \xi_{5} \phi^{T}\phi + \eta_{5} L_{t}^{T} L_{t} + \eta_{6} \phi^{T} L_{\bar{h}}^{T} L_{\bar{h}} \phi + 2 U_{2} K_{4},
\end{aligned}$$
then the drive system (4) is synchronized with the slave system (5). The detailed proof of Theorem 1 is given in Appendix A.
Theorem 2. 
For given scalars $\lambda_{1}, \lambda_{2}, \lambda_{3}, \lambda_{5}$ and $\xi_{1}, \xi_{2}, \xi_{3}, \xi_{5}$, if there exist positive definite matrices $R_{1}$, $R_{2}$, $U_{1}$, $U_{2}$, $M_{1}$, $M_{2}$ such that the following LMIs hold:
$$\begin{bmatrix}
\pi_{1} & R_{1}C & R_{1}P & R_{1}B \\
* & -\epsilon\lambda_{1} I & 0 & 0 \\
* & * & -\epsilon\lambda_{2} I & 0 \\
* & * & * & -\epsilon\lambda_{3} I
\end{bmatrix} < 0,
\qquad
\begin{bmatrix}
\pi_{2} & R_{2}E & R_{2}Q & R_{2}M \\
* & -\epsilon\xi_{1} I & 0 & 0 \\
* & * & -\epsilon\xi_{2} I & 0 \\
* & * & * & -\epsilon\xi_{3} I
\end{bmatrix} < 0,$$
$$\begin{bmatrix}
\pi_{3} & U_{1}H \\
* & -\lambda_{5} I
\end{bmatrix} < 0,
\qquad
\begin{bmatrix}
\pi_{4} & U_{2}\bar{H} \\
* & -\xi_{5} I
\end{bmatrix} < 0,$$
$$\begin{aligned}
\pi_{1} &= -\frac{2}{\epsilon} R_{1} A + M_{1} + \frac{\xi_{1}}{\epsilon} \phi^{T}\phi, &
\pi_{2} &= -\frac{2}{\epsilon} R_{2} L + M_{2} + \frac{\lambda_{1}}{\epsilon} \phi^{T}\phi, \\
\pi_{3} &= \frac{\lambda_{3}}{\epsilon} I - 2 U_{1} D + \lambda_{5} \phi^{T}\phi, &
\pi_{4} &= \frac{\xi_{3}}{\epsilon} I - 2 U_{2} T + \xi_{5} \phi^{T}\phi,
\end{aligned}$$
then the drive system (4) is synchronized with the slave system (5).
Proof. 
Neglecting the uncertainty and control terms in system (6) gives
$$\begin{aligned}
D^{\alpha} w(t) &= -\frac{A}{\epsilon} w(t) + \frac{C}{\epsilon} f(z(t)) + \frac{P}{\epsilon} v(z(t-\sigma)) + \frac{B}{\epsilon} \theta(t), \\
D^{\alpha} \theta(t) &= -D \theta(t) + H f(\theta(t)), \\
D^{\alpha} z(t) &= -\frac{L}{\epsilon} z(t) + \frac{E}{\epsilon} g(w(t)) + \frac{Q}{\epsilon} u(w(t-\sigma)) + \frac{M}{\epsilon} \psi(t), \\
D^{\alpha} \psi(t) &= -T \psi(t) + \bar{H} g(\psi(t)).
\end{aligned}$$
We select the following Lyapunov–Krasovskii functional:
$$\begin{aligned}
V(t) &= w^{T}(t) R_{1} w(t) + \theta^{T}(t) U_{1} \theta(t) + \frac{1}{\Gamma(\beta)} \int_{0}^{t} (t-\gamma)\, w^{T}(\gamma) M_{1} w(\gamma)\, d\gamma \\
&\quad - \frac{1}{\Gamma(\beta)} \int_{0}^{t-\sigma} (t-\sigma-\gamma)\, w^{T}(\gamma) M_{1} w(\gamma)\, d\gamma + z^{T}(t) R_{2} z(t) + \psi^{T}(t) U_{2} \psi(t) \\
&\quad + \frac{1}{\Gamma(\beta)} \int_{0}^{t} (t-\gamma)\, z^{T}(\gamma) M_{2} z(\gamma)\, d\gamma - \frac{1}{\Gamma(\beta)} \int_{0}^{t-\sigma} (t-\sigma-\gamma)\, z^{T}(\gamma) M_{2} z(\gamma)\, d\gamma.
\end{aligned}$$
By Lemmas 2 and 5, the following estimate can be derived:
$$\begin{aligned}
D^{\alpha} V(t) &\le 2 w^{T}(t) R_{1} D^{\alpha} w(t) + 2 \theta^{T}(t) U_{1} D^{\alpha} \theta(t) + w^{T}(t) M_{1} w(t) - w^{T}(t-\sigma) M_{1} w(t-\sigma) \\
&\quad + 2 z^{T}(t) R_{2} D^{\alpha} z(t) + 2 \psi^{T}(t) U_{2} D^{\alpha} \psi(t) + z^{T}(t) M_{2} z(t) - z^{T}(t-\sigma) M_{2} z(t-\sigma).
\end{aligned}$$
According to Lemma 6, we obtain
$$\frac{2}{\epsilon} w^{T}(t) R_{1} C f(z(t)) \le \frac{\lambda_{1}^{-1}}{\epsilon} w^{T}(t) R_{1} C C^{T} R_{1}^{T} w(t) + \frac{\lambda_{1}}{\epsilon} z^{T}(t) \phi^{T}\phi\, z(t),$$
$$\frac{2}{\epsilon} w^{T}(t) R_{1} P v(z(t-\sigma)) \le \frac{\lambda_{2}^{-1}}{\epsilon} w^{T}(t) R_{1} P P^{T} R_{1}^{T} w(t) + \frac{\lambda_{2}}{\epsilon} z^{T}(t-\sigma) \phi^{T}\phi\, z(t-\sigma),$$
$$\frac{2}{\epsilon} w^{T}(t) R_{1} B \theta(t) \le \frac{\lambda_{3}^{-1}}{\epsilon} w^{T}(t) R_{1} B B^{T} R_{1}^{T} w(t) + \frac{\lambda_{3}}{\epsilon} \theta^{T}(t) \theta(t),$$
$$2 \theta^{T}(t) U_{1} H f(\theta(t)) \le \lambda_{5}^{-1} \theta^{T}(t) U_{1} H H^{T} U_{1}^{T} \theta(t) + \lambda_{5} \theta^{T}(t) \phi^{T}\phi\, \theta(t),$$
$$\frac{2}{\epsilon} z^{T}(t) R_{2} E g(w(t)) \le \frac{\xi_{1}^{-1}}{\epsilon} z^{T}(t) R_{2} E E^{T} R_{2}^{T} z(t) + \frac{\xi_{1}}{\epsilon} w^{T}(t) \phi^{T}\phi\, w(t),$$
$$\frac{2}{\epsilon} z^{T}(t) R_{2} Q u(w(t-\sigma)) \le \frac{\xi_{2}^{-1}}{\epsilon} z^{T}(t) R_{2} Q Q^{T} R_{2}^{T} z(t) + \frac{\xi_{2}}{\epsilon} w^{T}(t-\sigma) \phi^{T}\phi\, w(t-\sigma),$$
$$\frac{2}{\epsilon} z^{T}(t) R_{2} M \psi(t) \le \frac{\xi_{3}^{-1}}{\epsilon} z^{T}(t) R_{2} M M^{T} R_{2}^{T} z(t) + \frac{\xi_{3}}{\epsilon} \psi^{T}(t) \psi(t),$$
$$2 \psi^{T}(t) U_{2} \bar{H} g(\psi(t)) \le \xi_{5}^{-1} \psi^{T}(t) U_{2} \bar{H} \bar{H}^{T} U_{2}^{T} \psi(t) + \xi_{5} \psi^{T}(t) \phi^{T}\phi\, \psi(t).$$
Combining the above estimates, we get
$$\begin{aligned}
D^{\alpha} V(t) &\le w^{T}(t) \Big[ -\frac{2}{\epsilon} R_{1} A + \frac{\lambda_{1}^{-1}}{\epsilon} R_{1} C C^{T} R_{1}^{T} + \frac{\lambda_{2}^{-1}}{\epsilon} R_{1} P P^{T} R_{1}^{T} + \frac{\lambda_{3}^{-1}}{\epsilon} R_{1} B B^{T} R_{1}^{T} + M_{1} + \frac{\xi_{1}}{\epsilon} \phi^{T}\phi \Big] w(t) \\
&\quad + z^{T}(t) \Big[ -\frac{2}{\epsilon} R_{2} L + \frac{\xi_{1}^{-1}}{\epsilon} R_{2} E E^{T} R_{2}^{T} + \frac{\xi_{2}^{-1}}{\epsilon} R_{2} Q Q^{T} R_{2}^{T} + \frac{\xi_{3}^{-1}}{\epsilon} R_{2} M M^{T} R_{2}^{T} + M_{2} + \frac{\lambda_{1}}{\epsilon} \phi^{T}\phi \Big] z(t) \\
&\quad + w^{T}(t-\sigma) \Big[ \frac{\xi_{2}}{\epsilon} \phi^{T}\phi - M_{1} \Big] w(t-\sigma) + z^{T}(t-\sigma) \Big[ \frac{\lambda_{2}}{\epsilon} \phi^{T}\phi - M_{2} \Big] z(t-\sigma) \\
&\quad + \theta^{T}(t) \Big[ \frac{\lambda_{3}}{\epsilon} I - 2 U_{1} D + \lambda_{5}^{-1} U_{1} H H^{T} U_{1}^{T} + \lambda_{5} \phi^{T}\phi \Big] \theta(t) \\
&\quad + \psi^{T}(t) \Big[ \frac{\xi_{3}}{\epsilon} I - 2 U_{2} T + \xi_{5}^{-1} U_{2} \bar{H} \bar{H}^{T} U_{2}^{T} + \xi_{5} \phi^{T}\phi \Big] \psi(t).
\end{aligned}$$
Defining
$$\begin{aligned}
\Theta_{1} &= \pi_{1} + \frac{\lambda_{1}^{-1}}{\epsilon} R_{1} C C^{T} R_{1}^{T} + \frac{\lambda_{2}^{-1}}{\epsilon} R_{1} P P^{T} R_{1}^{T} + \frac{\lambda_{3}^{-1}}{\epsilon} R_{1} B B^{T} R_{1}^{T}, \\
\Theta_{2} &= \pi_{2} + \frac{\xi_{1}^{-1}}{\epsilon} R_{2} E E^{T} R_{2}^{T} + \frac{\xi_{2}^{-1}}{\epsilon} R_{2} Q Q^{T} R_{2}^{T} + \frac{\xi_{3}^{-1}}{\epsilon} R_{2} M M^{T} R_{2}^{T}, \\
\Theta_{3} &= \pi_{3} + \lambda_{5}^{-1} U_{1} H H^{T} U_{1}^{T}, \qquad
\Theta_{4} = \pi_{4} + \xi_{5}^{-1} U_{2} \bar{H} \bar{H}^{T} U_{2}^{T},
\end{aligned}$$
and applying the Schur complement (Lemma 7), the LMIs of Theorem 2 imply $\Theta_{i} < 0$, $i = 1, \ldots, 4$, so that $D^{\alpha} V(t) < 0$ and system (6) is globally asymptotically stable. As a result, master system (4) is globally synchronized with slave system (5). This completes the proof of the theorem.

4. Numerical Examples

Example 1. 
Consider the following fractional order uncertain BAM competitive neural network:
$$\begin{aligned}
D^{\alpha} w(t) &= -\frac{1}{\epsilon}(A + \Delta A(t)) w(t) + \frac{1}{\epsilon}(C + \Delta C(t)) f(z(t)) + \frac{1}{\epsilon}(P + \Delta P(t)) v(z(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(B + \Delta B(t)) \theta(t) + \alpha(t), \\
D^{\alpha} \theta(t) &= -(D + \Delta D(t)) \theta(t) + (H + \Delta H(t)) f(\theta(t)) + \beta(t), \\
D^{\alpha} z(t) &= -\frac{1}{\epsilon}(L + \Delta L(t)) z(t) + \frac{1}{\epsilon}(E + \Delta E(t)) g(w(t)) + \frac{1}{\epsilon}(Q + \Delta Q(t)) u(w(t-\sigma)) \\
&\quad + \frac{1}{\epsilon}(M + \Delta M(t)) \psi(t) + \gamma(t), \\
D^{\alpha} \psi(t) &= -(T + \Delta T(t)) \psi(t) + (\bar{H} + \Delta \bar{H}(t)) g(\psi(t)) + \delta(t),
\end{aligned}$$
where $\epsilon = 1.589$, $\alpha = 0.91$, and
$$A = 1.324\, I_{4}, \quad D = 1.567\, I_{4}, \quad H = 1.876\, I_{4}, \quad L = 1.456\, I_{4}, \quad T = 2.654\, I_{4}, \quad \bar{H} = 1.787\, I_{4},$$
where $I_{4}$ denotes the $4 \times 4$ identity matrix, and
$$C = \begin{bmatrix} 1.567 & 1.677 & 1.765 & 1.654 \\ 1.987 & 1.567 & 1.765 & 1.345 \\ 1.234 & 1.987 & 1.567 & 1.977 \\ 1.677 & 1.979 & 1.098 & 1.567 \end{bmatrix}, \quad
P = \begin{bmatrix} 1.876 & 1.0872 & 1.876 & 1.567 \\ 1.678 & 1.876 & 1.678 & 1.098 \\ 1.876 & 1.557 & 1.876 & 1.654 \\ 1.987 & 1.987 & 1.456 & 1.876 \end{bmatrix}, \quad
B = 1.456\, I_{4},$$
$$E = \begin{bmatrix} 2.654 & 1.987 & 1.678 & 1.987 \\ 1.987 & 2.654 & 1.678 & 1.765 \\ 1.876 & 1.987 & 2.654 & 1.876 \\ 1.098 & 1.567 & 1.765 & 2.654 \end{bmatrix}, \quad
Q = \begin{bmatrix} 1.567 & 1.876 & 1.876 & 1.872 \\ 1.569 & 1.567 & 1.788 & 1.555 \\ 1.987 & 1.678 & 1.567 & 1.765 \\ 1.098 & 1.876 & 1.778 & 1.567 \end{bmatrix}, \quad
M = 1.876\, I_{4}.$$
$L_{a} = 1.324\, I_{4}$, $L_{e} = 1.567\, I_{4}$, $L_{l} = 1.876\, I_{4}$, $L_{c} = 1.456\, I_{4}$, $L_{b} = 2.654\, I_{4}$, $L_{d} = 1.787\, I_{4}$, $L_{m} = 1.806\, I_{4}$, $L_{h} = 1.876\, I_{4}$, $L_{t} = 2.654\, I_{4}$, $J_{c} = 2.456\, I_{4}$, $J_{p} = 0.654\, I_{4}$, $J_{b} = 1.087\, I_{4}$, $J_{l} = 1.006\, I_{4}$, $J_{q} = 1.806\, I_{4}$, $J_{m} = 1.654\, I_{4}$, $J_{d} = 1.054\, I_{4}$, $J_{\bar{h}} = 1.767\, I_{4}$, $I = I_{4}$, $\phi = 1.786\, I_{4}$, $L_{\bar{h}} = 1.787\, I_{4}$, $J_{a} = 1.654\, I_{4}$, $J_{h} = 0.597\, I_{4}$, $J_{e} = 0.517\, I_{4}$, $J_{t} = 1.876\, I_{4}$.
Using the LMI solver on the conditions of Theorem 1, we obtain the following feasible solutions:
T 1 = 9.0768 0.0002 0.0002 0.0002 0.0002 9.0766 0.0001 0.0002 0.0002 0.0001 9.0766 0.0002 0.0002 0.0002 0.0002 9.0768 , T 2 = 7.9371 1.0713 1.0679 1.0340 1.0713 7.9555 0.9906 1.0916 1.0679 0.9906 7.8274 1.0083 1.0340 1.0916 1.0083 7.8004 , T 3 = 9.0580 0.0057 0.0057 0.0055 0.0057 9.0581 0.0053 0.0058 0.0057 0.0053 9.0574 0.0054 0.0055 0.0058 0.0054 9.0573 , T 4 = 3.1877 1.4322 1.4272 1.5078 1.4322 3.5551 1.5254 1.2934 1.4272 1.5254 3.5077 1.5386 1.5078 1.2934 1.5386 3.1431 ,
R 1 = 0.0166 0.0052 0.0052 0.0050 0.0052 0.0165 0.0048 0.0053 0.0052 0.0048 0.0171 0.0049 0.0050 0.0053 0.0049 0.0172 , U 1 = 10 e 4 0.8253 0.0557 0.0555 0.0538 0.0557 0.8272 0.0515 0.0568 0.0555 0.0515 0.8139 0.0524 0.0538 0.0568 0.0524 0.8111 , M 1 = 4.5384 0.0001 0.0001 0.0001 0.0001 4.5383 0.0001 0.0001 0.0001 0.0001 4.5383 0.0001 0.0001 0.0001 0.0001 4.5384 ,
R 2 = 10 e 4 0.4032 0.1389 0.1194 0.0996 0.1389 0.4255 0.1363 0.1391 0.1194 0.1363 0.4203 0.1301 0.0996 0.1391 0.1301 0.4073 , U 2 = 0.2943 1.6937 1.1419 1.8154 1.6937 0.3751 1.4464 1.7804 1.1419 1.4464 1.1290 1.1535 1.8154 1.7804 1.1535 0.1751 , M 2 = 4.5290 0.0029 0.0028 0.0028 0.0029 4.5290 0.0026 0.0029 0.0028 0.0026 4.5287 0.0027 0.0028 0.0029 0.0027 4.5286 .
$$\mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = 14.2799, \quad \eta_{1} = \eta_{2} = \eta_{3} = \eta_{4} = 14.2799, \quad \xi_{1} = \xi_{2} = \xi_{3} = 14.2799, \quad \xi_{4} = 22.6901,$$
$$\lambda_{1} = \lambda_{2} = \lambda_{3} = 14.2799, \quad \lambda_{4} = 22.6901.$$
The desired controller gain matrices can be obtained as follows:
1 = 10 e 3 1.7840 1.3669 1.3025 1.3118 1.3670 1.7859 1.2955 1.3172 1.3025 1.2955 1.6490 1.2479 1.3118 1.3172 1.2479 1.6703 , 2 = 10 e 4 9.5017 0.5810 0.5826 0.5566 0.5797 9.5032 0.5284 0.5959 0.5908 0.5370 9.5087 0.5511 0.5663 0.6077 0.5530 9.5033 , 3 = 10 e 5 8.0427 6.7558 6.4348 6.3297 6.7568 8.7789 6.8866 6.8507 6.4360 6.8868 8.2460 6.5603 6.3306 6.8505 6.5600 8.2057 , 4 = 15.2067 6.4515 34.2196 12.0679 4.9513 6.0147 15.9783 5.2256 31.9806 19.2653 84.7198 31.9213 11.2761 6.2499 32.0500 14.0781 .
Hence, the master system (4) is synchronized with the slave system (5).
Example 2. 
Consider the following fractional order uncertain BAM competitive neural network:
$$\begin{aligned}
D^{\alpha} w(t) &= -\frac{A}{\epsilon} w(t) + \frac{C}{\epsilon} f(z(t)) + \frac{P}{\epsilon} v(z(t-\sigma)) + \frac{B}{\epsilon} \theta(t), \\
D^{\alpha} \theta(t) &= -D \theta(t) + H f(\theta(t)), \\
D^{\alpha} z(t) &= -\frac{L}{\epsilon} z(t) + \frac{E}{\epsilon} g(w(t)) + \frac{Q}{\epsilon} u(w(t-\sigma)) + \frac{M}{\epsilon} \psi(t), \\
D^{\alpha} \psi(t) &= -T \psi(t) + \bar{H} g(\psi(t)),
\end{aligned}$$
where α = 0.98 , ϵ = 0.87 .
$$I = I_{4}, \quad A = 2.304\, I_{4}, \quad D = 2.567\, I_{4}, \quad H = 1.826\, I_{4}, \quad T = 0.654\, I_{4}, \quad L = 1.453\, I_{4}, \quad \bar{H} = 3.787\, I_{4},$$
$$C = \begin{bmatrix} 1.067 & 1.177 & 2.765 & 1.654 \\ 0.987 & 1.561 & 1.765 & 1.345 \\ 1.234 & 1.982 & 1.564 & 1.977 \\ 1.677 & 1.979 & 1.098 & 1.567 \end{bmatrix}, \quad
P = \begin{bmatrix} 2.876 & 1.0872 & 1.876 & 1.567 \\ 1.678 & 4.876 & 0.678 & 1.098 \\ 1.876 & 1.557 & 1.176 & 1.654 \\ 1.987 & 1.987 & 1.456 & 1.876 \end{bmatrix}, \quad
B = \mathrm{diag}\{1.436, 1.426, 1.436, 1.436\},$$
where $I_{4}$ denotes the $4 \times 4$ identity matrix, and
$$E = \begin{bmatrix} 2.154 & 1.987 & 1.678 & 1.987 \\ 1.917 & 2.654 & 1.678 & 1.765 \\ 1.872 & 1.987 & 2.654 & 1.876 \\ 1.098 & 1.567 & 1.765 & 2.654 \end{bmatrix}, \quad
Q = \begin{bmatrix} 1.562 & 1.176 & 1.876 & 1.872 \\ 1.569 & 1.557 & 1.788 & 1.555 \\ 1.987 & 1.678 & 1.567 & 1.765 \\ 1.098 & 1.876 & 1.778 & 1.567 \end{bmatrix}, \quad
M = \mathrm{diag}\{1.816, 1.816, 1.816, 1.846\}, \quad \phi = 1.486\, I_{4}.$$
Using the MATLAB LMI control toolbox, we solve the LMIs in Theorem 2 and obtain the following feasible solutions:
R 1 = 0.7438 0.0443 0.0358 0.0387 0.0443 0.7197 0.0368 0.0419 0.0358 0.0368 0.7475 0.0370 0.0387 0.0419 0.0370 0.7485 , M 1 = 24.8735 0.0714 0.0577 0.0623 0.0714 24.8346 0.0593 0.0676 0.0577 0.0593 24.8794 0.0597 0.0623 0.0676 0.0597 24.8811 , R 2 = 0.0018 0.0009 0.0008 0.0000 0.0009 0.0017 0.0003 0.0006 0.0008 0.0003 0.0017 0.0007 0.0000 0.0006 0.0007 0.0013 ,
M 2 = 23.3004 0.0232 0.0189 0.0193 0.0232 23.3126 0.0188 0.0217 0.0189 0.0188 23.2987 0.0194 0.0193 0.0217 0.0194 23.2986 , U 1 = 0.8519 0.1685 0.1363 0.1470 0.1685 0.9438 0.1399 0.1595 0.1363 0.1399 0.8382 0.1409 0.1470 0.1595 0.1409 0.8341 , U 2 = 0.0260 0.0591 0.0478 0.0516 0.0591 0.0583 0.0491 0.0560 0.0478 0.0491 0.0212 0.0495 0.0516 0.0560 0.0495 0.0198 .
ξ 1 = 14.9022 , ξ 2 = 14.9022 , ξ 3 = 14.9022 , ξ 5 = 14.9022 , λ 1 = 14.9022 , λ 2 = 14.9022 , λ 3 = 14.9022 , λ 5 = 14.9022 .
Hence, the master system (4) is synchronized with the slave system (5).

5. Conclusions

This article investigated the synchronization of fractional order uncertain BAM competitive neural networks. By using a Lyapunov–Krasovskii functional, we explored the synchronization problem for this class of neural networks with time delays, and novel conditions ensuring synchronization of the drive system and the corresponding response system were derived. The criteria are expressed in terms of LMIs, and we checked the feasibility of the obtained results using the MATLAB LMI control toolbox. Moreover, the controller gain matrices can be obtained by solving the LMIs. Finally, two numerical examples were provided to demonstrate the effectiveness of our theoretical results. In the future, we will furthermore discuss stochastic competitive BAM neural networks with additive time-varying delays and complex-valued competitive BAM neural networks.

Author Contributions

All authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The financial support from Rajamangala University of Technology Suvarnabhumi, Thailand towards this research is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof. 
We construct the following Lyapunov–Krasovskii functional:
$$\begin{aligned}
V(t)={}&w^T(t)R_1w(t)+\theta^T(t)U_1\theta(t)+\frac{1}{\Gamma(\beta)}\int_0^t (t-\gamma)^{\beta-1}w^T(\gamma)M_1w(\gamma)\,d\gamma\\
&-\frac{1}{\Gamma(\beta)}\int_0^{t-\sigma}(t-\sigma-\gamma)^{\beta-1}w^T(\gamma)M_1w(\gamma)\,d\gamma
+z^T(t)R_2z(t)+\psi^T(t)U_2\psi(t)\\
&+\frac{1}{\Gamma(\beta)}\int_0^t (t-\gamma)^{\beta-1}z^T(\gamma)M_2z(\gamma)\,d\gamma
-\frac{1}{\Gamma(\beta)}\int_0^{t-\sigma}(t-\sigma-\gamma)^{\beta-1}z^T(\gamma)M_2z(\gamma)\,d\gamma.
\end{aligned}$$
By Lemmas 2 and 5, the following estimate can be derived:
$$\begin{aligned}
D^\alpha V(t)\le{}&2w^T(t)R_1D^\alpha w(t)+2\theta^T(t)U_1D^\alpha\theta(t)+w^T(t)M_1w(t)-w^T(t-\sigma)M_1w(t-\sigma)\\
&+2z^T(t)R_2D^\alpha z(t)+2\psi^T(t)U_2D^\alpha\psi(t)+z^T(t)M_2z(t)-z^T(t-\sigma)M_2z(t-\sigma).
\end{aligned}$$
According to Lemma 6, we obtain
$$\begin{aligned}
2\epsilon w^T(t)R_1Cf(z(t)) &\le \lambda_1^{-1}\epsilon\, w^T(t)R_1CC^TR_1^Tw(t)+\lambda_1\epsilon\, z^T(t)\phi^T\phi z(t),\\
2\epsilon w^T(t)R_1Pv(z(t-\sigma)) &\le \lambda_2^{-1}\epsilon\, w^T(t)R_1PP^TR_1^Tw(t)+\lambda_2\epsilon\, z^T(t-\sigma)\phi^T\phi z(t-\sigma),\\
2\epsilon w^T(t)R_1B\theta(t) &\le \lambda_3^{-1}\epsilon\, w^T(t)R_1BB^TR_1^Tw(t)+\lambda_3\epsilon\,\theta^T(t)\theta(t),\\
2w^T(t)R_1\alpha(t) &\le \lambda_4^{-1}w^T(t)R_1R_1^Tw(t)+\lambda_4\alpha^T(t)\alpha(t),\\
2\epsilon w^T(t)R_1\Delta A(t)w(t)=2\epsilon w^T(t)R_1J_aK(t)L_aw(t) &\le \mu_1^{-1}\epsilon\, w^T(t)R_1J_aJ_a^TR_1^Tw(t)+\mu_1\epsilon\, w^T(t)L_a^TL_aw(t),\\
2\epsilon w^T(t)R_1\Delta C(t)f(z(t))=2\epsilon w^T(t)R_1J_cK(t)L_cf(z(t)) &\le \mu_2^{-1}\epsilon\, w^T(t)R_1J_cJ_c^TR_1^Tw(t)+\mu_2\epsilon\, z^T(t)\phi^TL_c^TL_c\phi z(t),\\
2\epsilon w^T(t)R_1\Delta P(t)v(z(t-\sigma))=2\epsilon w^T(t)R_1J_pK(t)L_pv(z(t-\sigma)) &\le \mu_3^{-1}\epsilon\, w^T(t)R_1J_pJ_p^TR_1^Tw(t)+\mu_3\epsilon\, z^T(t-\sigma)\phi^TL_p^TL_p\phi z(t-\sigma),\\
2\epsilon w^T(t)R_1\Delta B(t)\theta(t)=2\epsilon w^T(t)R_1J_bK(t)L_b\theta(t) &\le \mu_4^{-1}\epsilon\, w^T(t)R_1J_bJ_b^TR_1^Tw(t)+\mu_4\epsilon\,\theta^T(t)L_b^TL_b\theta(t),\\
2\theta^T(t)U_1Hf(\theta(t)) &\le \lambda_5^{-1}\theta^T(t)U_1HH^TU_1^T\theta(t)+\lambda_5\theta^T(t)\phi^T\phi\theta(t),\\
2\theta^T(t)U_1\beta(t) &\le \lambda_6^{-1}\theta^T(t)U_1U_1^T\theta(t)+\lambda_6\beta^T(t)\beta(t),\\
2\theta^T(t)U_1\Delta D(t)\theta(t)=2\theta^T(t)U_1J_dK(t)L_d\theta(t) &\le \mu_5^{-1}\theta^T(t)U_1J_dJ_d^TU_1^T\theta(t)+\mu_5\theta^T(t)L_d^TL_d\theta(t),\\
2\theta^T(t)U_1\Delta H(t)f(\theta(t))=2\theta^T(t)U_1J_hK(t)L_hf(\theta(t)) &\le \mu_6^{-1}\theta^T(t)U_1J_hJ_h^TU_1^T\theta(t)+\mu_6\theta^T(t)\phi^TL_h^TL_h\phi\theta(t),\\
2\epsilon z^T(t)R_2Eg(w(t)) &\le \xi_1^{-1}\epsilon\, z^T(t)R_2EE^TR_2^Tz(t)+\xi_1\epsilon\, w^T(t)\phi^T\phi w(t),\\
2\epsilon z^T(t)R_2Qu(w(t-\sigma)) &\le \xi_2^{-1}\epsilon\, z^T(t)R_2QQ^TR_2^Tz(t)+\xi_2\epsilon\, w^T(t-\sigma)\phi^T\phi w(t-\sigma),\\
2\epsilon z^T(t)R_2M\psi(t) &\le \xi_3^{-1}\epsilon\, z^T(t)R_2MM^TR_2^Tz(t)+\xi_3\epsilon\,\psi^T(t)\psi(t),\\
2z^T(t)R_2\gamma(t) &\le \xi_4^{-1}z^T(t)R_2R_2^Tz(t)+\xi_4\gamma^T(t)\gamma(t),\\
2\epsilon z^T(t)R_2\Delta L(t)z(t)=2\epsilon z^T(t)R_2J_lK(t)L_lz(t) &\le \eta_1^{-1}\epsilon\, z^T(t)R_2J_lJ_l^TR_2^Tz(t)+\eta_1\epsilon\, z^T(t)L_l^TL_lz(t),\\
2\epsilon z^T(t)R_2\Delta E(t)g(w(t))=2\epsilon z^T(t)R_2J_eK(t)L_eg(w(t)) &\le \eta_2^{-1}\epsilon\, z^T(t)R_2J_eJ_e^TR_2^Tz(t)+\eta_2\epsilon\, w^T(t)\phi^TL_e^TL_e\phi w(t),\\
2\epsilon z^T(t)R_2\Delta Q(t)u(w(t-\sigma))=2\epsilon z^T(t)R_2J_qK(t)L_qu(w(t-\sigma)) &\le \eta_3^{-1}\epsilon\, z^T(t)R_2J_qJ_q^TR_2^Tz(t)+\eta_3\epsilon\, w^T(t-\sigma)\phi^TL_q^TL_q\phi w(t-\sigma),\\
2\epsilon z^T(t)R_2\Delta M(t)\psi(t)=2\epsilon z^T(t)R_2J_mK(t)L_m\psi(t) &\le \eta_4^{-1}\epsilon\, z^T(t)R_2J_mJ_m^TR_2^Tz(t)+\eta_4\epsilon\,\psi^T(t)L_m^TL_m\psi(t),\\
2\psi^T(t)U_2\bar{H}g(\psi(t)) &\le \xi_5^{-1}\psi^T(t)U_2\bar{H}\bar{H}^TU_2^T\psi(t)+\xi_5\psi^T(t)\phi^T\phi\psi(t),\\
2\psi^T(t)U_2\delta(t) &\le \xi_6^{-1}\psi^T(t)U_2U_2^T\psi(t)+\xi_6\delta^T(t)\delta(t),\\
2\psi^T(t)U_2\Delta T(t)\psi(t)=2\psi^T(t)U_2J_tK(t)L_t\psi(t) &\le \eta_5^{-1}\psi^T(t)U_2J_tJ_t^TU_2^T\psi(t)+\eta_5\psi^T(t)L_t^TL_t\psi(t),\\
2\psi^T(t)U_2\Delta\bar{H}(t)g(\psi(t))=2\psi^T(t)U_2J_{\bar{h}}K(t)L_{\bar{h}}g(\psi(t)) &\le \eta_6^{-1}\psi^T(t)U_2J_{\bar{h}}J_{\bar{h}}^TU_2^T\psi(t)+\eta_6\psi^T(t)\phi^TL_{\bar{h}}^TL_{\bar{h}}\phi\psi(t).
\end{aligned}$$
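Each of the cross-term bounds above is an instance of the same matrix Young inequality $2x^TAy \le \lambda^{-1}x^TAA^Tx + \lambda y^Ty$ for $\lambda>0$. A quick numerical spot-check of this inequality (illustrative only; random data, not quantities from the model):

```python
import random

# Spot-check of the matrix Young inequality used for every cross term above:
#   2 x^T A y <= (1/lam) x^T A A^T x + lam * y^T y,   lam > 0.
# Illustrative only: random data, not quantities from the model.
random.seed(1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

n, lam = 4, 0.7
for _ in range(1000):
    A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    x = [random.uniform(-1, 1) for _ in range(n)]
    y = [random.uniform(-1, 1) for _ in range(n)]
    ATx = matvec(list(zip(*A)), x)          # A^T x
    lhs = 2 * dot(ATx, y)                   # 2 x^T A y
    rhs = dot(ATx, ATx) / lam + lam * dot(y, y)
    assert lhs <= rhs + 1e-12               # since (a/sqrt(lam) - sqrt(lam) b)^2 >= 0
print("Young bound verified on 1000 random samples")
```

The bound follows from expanding $\lVert\lambda^{-1/2}A^Tx-\lambda^{1/2}y\rVert^2\ge 0$, which is why the assertion never fails.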
From (30)–(54), we have
$$\begin{aligned}
D^\alpha V(t)\le{}& w^T(t)\big[-2\epsilon R_1A+\lambda_1^{-1}\epsilon R_1CC^TR_1^T+\lambda_2^{-1}\epsilon R_1PP^TR_1^T+\lambda_3^{-1}\epsilon R_1BB^TR_1^T+\lambda_4^{-1}R_1R_1^T\\
&\quad+\mu_1^{-1}\epsilon R_1J_aJ_a^TR_1^T+\mu_1\epsilon L_a^TL_a+\mu_2^{-1}\epsilon R_1J_cJ_c^TR_1^T+\mu_3^{-1}\epsilon R_1J_pJ_p^TR_1^T+\mu_4^{-1}\epsilon R_1J_bJ_b^TR_1^T\\
&\quad+M_1+\xi_1\epsilon\phi^T\phi+\eta_2\epsilon\phi^TL_e^TL_e\phi-2R_1\mathcal{K}_1\big]w(t)\\
&+z^T(t)\big[-2\epsilon R_2L+\xi_1^{-1}\epsilon R_2EE^TR_2^T+\xi_2^{-1}\epsilon R_2QQ^TR_2^T+\xi_3^{-1}\epsilon R_2MM^TR_2^T+\xi_4^{-1}R_2R_2^T\\
&\quad+\eta_1^{-1}\epsilon R_2J_lJ_l^TR_2^T+\eta_1\epsilon L_l^TL_l+\eta_2^{-1}\epsilon R_2J_eJ_e^TR_2^T+\eta_3^{-1}\epsilon R_2J_qJ_q^TR_2^T+\eta_4^{-1}\epsilon R_2J_mJ_m^TR_2^T\\
&\quad+M_2+\lambda_1\epsilon\phi^T\phi+\mu_2\epsilon\phi^TL_c^TL_c\phi-2R_2\mathcal{K}_3\big]z(t)\\
&+\theta^T(t)\big[\lambda_3\epsilon I+\mu_4\epsilon L_b^TL_b-2\epsilon U_1D+\lambda_5^{-1}U_1HH^TU_1^T+\lambda_5\phi^T\phi+\lambda_6^{-1}U_1U_1^T\\
&\quad+\mu_5^{-1}U_1J_dJ_d^TU_1^T+\mu_5L_d^TL_d+\mu_6^{-1}U_1J_hJ_h^TU_1^T+\mu_6\phi^TL_h^TL_h\phi-2U_1\mathcal{K}_2\big]\theta(t)\\
&+\psi^T(t)\big[\xi_3\epsilon I+\eta_4\epsilon L_m^TL_m-2\epsilon U_2T+\xi_5^{-1}U_2\bar{H}\bar{H}^TU_2^T+\xi_5\phi^T\phi+\xi_6^{-1}U_2U_2^T\\
&\quad+\eta_5^{-1}U_2J_tJ_t^TU_2^T+\eta_5L_t^TL_t+\eta_6^{-1}U_2J_{\bar{h}}J_{\bar{h}}^TU_2^T+\eta_6\phi^TL_{\bar{h}}^TL_{\bar{h}}\phi-2U_2\mathcal{K}_4\big]\psi(t)\\
&+w^T(t-\sigma)\big[\xi_2\epsilon\phi^T\phi+\eta_3\epsilon\phi^TL_q^TL_q\phi-M_1\big]w(t-\sigma)\\
&+z^T(t-\sigma)\big[\lambda_2\epsilon\phi^T\phi+\mu_3\epsilon\phi^TL_p^TL_p\phi-M_2\big]z(t-\sigma),\\
D^\alpha V(t)\le{}& w^T(t)\Upsilon_1w(t)+z^T(t)\Upsilon_2z(t)+\theta^T(t)\Upsilon_3\theta(t)+\psi^T(t)\Upsilon_4\psi(t).
\end{aligned}$$
where
$$\begin{aligned}
\Upsilon_1&=\Psi_1+\lambda_1^{-1}\epsilon R_1CC^TR_1^T+\lambda_2^{-1}\epsilon R_1PP^TR_1^T+\lambda_3^{-1}\epsilon R_1BB^TR_1^T+\lambda_4^{-1}R_1R_1^T\\
&\quad+\mu_1^{-1}\epsilon R_1J_aJ_a^TR_1^T+\mu_2^{-1}\epsilon R_1J_cJ_c^TR_1^T+\mu_3^{-1}\epsilon R_1J_pJ_p^TR_1^T+\mu_4^{-1}\epsilon R_1J_bJ_b^TR_1^T,\\
\Upsilon_2&=\Psi_2+\xi_1^{-1}\epsilon R_2EE^TR_2^T+\xi_2^{-1}\epsilon R_2QQ^TR_2^T+\xi_3^{-1}\epsilon R_2MM^TR_2^T+\xi_4^{-1}R_2R_2^T\\
&\quad+\eta_1^{-1}\epsilon R_2J_lJ_l^TR_2^T+\eta_2^{-1}\epsilon R_2J_eJ_e^TR_2^T+\eta_3^{-1}\epsilon R_2J_qJ_q^TR_2^T+\eta_4^{-1}\epsilon R_2J_mJ_m^TR_2^T,\\
\Upsilon_3&=\Psi_3+\lambda_5^{-1}U_1HH^TU_1^T+\lambda_6^{-1}U_1U_1^T+\mu_5^{-1}U_1J_dJ_d^TU_1^T+\mu_6^{-1}U_1J_hJ_h^TU_1^T,\\
\Upsilon_4&=\Psi_4+\xi_5^{-1}U_2\bar{H}\bar{H}^TU_2^T+\xi_6^{-1}U_2U_2^T+\eta_5^{-1}U_2J_tJ_t^TU_2^T+\eta_6^{-1}U_2J_{\bar{h}}J_{\bar{h}}^TU_2^T.
\end{aligned}$$
Hence, when $\Upsilon_i<0$ $(i=1,\dots,4)$, we have $D^\alpha V(t)<0$, and the error system (6) is globally asymptotically stable. As a result, the master system (4) is globally synchronized with the slave system (5). This completes the proof. □
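The asymptotic decay established by the theorem can be illustrated on a toy scalar stand-in for the error dynamics. The sketch below simulates $D^\alpha e(t)=-k\,e(t)$ with an explicit Grünwald–Letnikov discretization; $\alpha$, $k$, and the step size $h$ are arbitrary illustrative choices, not parameters from the paper's examples:

```python
# Grünwald-Letnikov sketch of a scalar stand-in for the synchronization error:
#   D^alpha e(t) = -k * e(t),  e(0) = 1.
# alpha, k, h, N are illustrative choices, not parameters from the paper.
alpha, k, h, N = 0.9, 2.0, 0.01, 2000

# GL binomial weights: c_j = (-1)^j * C(alpha, j), via the standard recurrence.
c = [1.0]
for j in range(1, N + 1):
    c.append(c[-1] * (1.0 - (alpha + 1.0) / j))

# Explicit GL scheme: e_n = -sum_{j>=1} c_j e_{n-j} + h^alpha * f(e_{n-1}).
e = [1.0]
for n in range(1, N + 1):
    memory = sum(c[j] * e[n - j] for j in range(1, n + 1))
    e.append(-memory + (h ** alpha) * (-k * e[n - 1]))

print(abs(e[-1]) < 0.1)   # True: the error decays toward zero
```

The fractional relaxation decays algebraically (Mittag–Leffler-like) rather than exponentially, which is visible in the slow tail of the computed trajectory.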

References

  1. Yildiz, T.; Jajarmi, A.; Yildiz, B.; Baleanu, D. New aspects of time fractional optimal control problems within operators with nonsingular kernel. Discret. Contin. Dyn. Syst. 2020, 13, 407–428.
  2. Li, H.; Muhammadhaji, A.; Zhang, L.; Teng, Z. Stability analysis of a fractional-order predator prey model incorporating a constant prey refuge and feedback control. Adv. Differ. Equ. 2018, 2018, 325.
  3. Zhao, K.; Deng, S. Existence and Ulam–Hyers stability of a kind of fractional-order multiple point BVP involving non-instantaneous impulses and abstract bounded operator. Adv. Differ. Equ. 2021, 2021, 44.
  4. Ali, M.S.; Narayanan, G.; Sevgen, S.; Shekher, V.; Arik, S. Global stability analysis of fractional-order fuzzy BAM neural networks with time delay and impulsive effects. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104853.
  5. Wang, F.; Yang, Y.; Xu, X.; Li, L. Global asymptotic stability of impulsive fractional-order BAM neural networks with time delay. Neural Comput. Appl. 2017, 28, 345–352.
  6. Wu, H.; Zhang, X.; Xue, S.; Niu, P. Quasi-uniform stability of Caputo-type fractional-order neural networks with mixed delay. Int. J. Mach. Learn. Cybern. 2017, 8, 1501–1511.
  7. Chen, J.; Chen, B.; Zeng, Z. Global uniform asymptotic fixed deviation stability and stability for delayed fractional-order memristive neural networks with generic memductance. Neural Netw. 2018, 98, 65–75.
  8. Baskar, P.; Padmanabhan, S.; Ali, M.S. LMI based stability criterion for uncertain neutral-type neural networks with discrete and distributed delays. Control Cybern. 2020, 49, 77–97.
  9. Balasubramaniam, P.; Ali, M.S. Robust exponential stability of uncertain fuzzy Cohen-Grossberg neural networks with time-varying delays. Fuzzy Sets Syst. 2010, 161, 608–618.
  10. Ali, M.S.; Balasubramaniam, P. Stability analysis of uncertain fuzzy Hopfield neural networks with time delays. Commun. Nonlinear Sci. Numer. Simul. 2009, 14, 2776–2783.
  11. Ali, M.S.; Saravanakumar, R. Novel delay-dependent robust H∞ control of uncertain systems with distributed time-varying delays. Appl. Math. Comput. 2014, 249, 510–520.
  12. Zhao, K. Global robust exponential synchronization of BAM recurrent FNNs with infinite distributed delays and diffusion terms on time scales. Adv. Differ. Equ. 2014, 2014, 317.
  13. Saravanakumar, R.; Ali, M.S.; Huang, H.; Cao, J.; Joo, Y.H. Robust H∞ state-feedback control for nonlinear uncertain systems with mixed time-varying delays. Int. J. Control Autom. Syst. 2018, 16, 225–233.
  14. Saravanakumar, R.; Rajchakit, G.; Ali, M.S.; Xiang, Z.; Joo, Y.H. Robust extended dissipativity criteria for discrete-time uncertain neural networks with time-varying delays. Neural Comput. Appl. 2018, 30, 3893–3904.
  15. Ali, M.S.; Hymavathi, M.; Senan, S.; Shekher, V.; Arik, S. Global asymptotic synchronization of impulsive fractional-order complex-valued memristor-based neural networks with time varying delays. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104869.
  16. Ali, M.S.; Hymavathi, M. Synchronization of fractional order neutral type fuzzy cellular neural networks with discrete and distributed delays via state feedback control. Neural Process. Lett. 2021, 53, 929–957.
  17. Abdurahman, A.; Jiang, H.; Teng, Z. Lag synchronization for Cohen-Grossberg neural networks with mixed time delays via periodically intermittent control. Int. J. Comput. Math. 2013, 94, 275–295.
  18. Zhang, X.; Lu, X.; Li, X. Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control. Nonlinear Dyn. 2017, 90, 2199–2207.
  19. Ali, M.S.; Hymavathi, M.; Rajchakit, G.; Saroha, S.; Palanisamy, L. Synchronization of fractional order fuzzy BAM neural networks with time varying delays and reaction diffusion terms. IEEE Access 2020, 8, 186551–186571.
  20. Meyer-Base, A.; Pilyugin, S.S.; Wismler, A.; Foo, S. Local exponential stability of competitive neural networks with different time scales. Eng. Appl. Artif. Intell. 2004, 17, 227–232.
  21. Meyer-Base, A.; Thummler, V. Local and global stability of an unsupervised competitive neural network. IEEE Trans. Neural Netw. 2008, 18, 346–351.
  22. Ali, M.S.; Hymavathi, M.; Priya, B.; Kauser, S.A.; Thakur, G.K. Stability analysis of stochastic fractional-order competitive neural networks with leakage delay. AIMS Math. 2021, 6, 3205–3241.
  23. Tan, Y.; Jing, K. Existence and global exponential stability of almost periodic solution for delayed competitive neural networks with discontinuous activations. Math. Methods Appl. Sci. 2016, 39, 2821–2839.
  24. Yang, X.; Cao, J.; Long, Y.; Rui, W. Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans. Neural Netw. 2010, 21, 1656–1667.
  25. Liu, P.P.; Nie, X.B.; Liang, J.L.; Cao, J.D. Multiple Mittag-Leffler stability of fractional-order competitive neural networks with Gaussian activation functions. Neural Netw. 2018, 108, 452–465.
  26. Zhang, H.; Ye, M.L.; Cao, J.D.; Alsaedi, A. Synchronization control of Riemann-Liouville fractional competitive network systems with time-varying delay and different time scales. Int. J. Control Autom. Syst. 2018, 16, 1404–1414.
  27. Kosko, B. Adaptive bidirectional associative memories. Appl. Opt. 1987, 26, 4947–4960.
  28. Tavan, P.; Grubmuller, H.; Kuhnel, H. Self-organization of associative memory and pattern classification: Recurrent signal processing on topological feature maps. Biol. Cybern. 1990, 64, 95–105.
  29. Sharma, N.; Ray, A.K.; Sharma, S.; Shukla, K.K.; Pradhan, S.; Aggarwal, L.M. Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network. J. Med. Phys. 2008, 33, 119–126.
  30. Carpenter, G.A.; Grossberg, S. Pattern Recognition by Self-Organizing Neural Networks; MIT Press: Cambridge, MA, USA, 1991.
  31. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999.
  32. Feckan, M.; Zhou, Y.; Wang, J. On the concept and existence of solution for impulsive fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 3050–3060.
  33. Ye, H.P.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328, 1075–1081.
  34. Zhang, S.; Yu, Y.; Yu, J. LMI conditions for global stability of fractional-order neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2423–2433.
  35. Boyd, S.; Ghaoui, L.E.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994.
  36. Zhao, K.; Ma, Y. Study on the existence of solutions for a class of nonlinear neutral Hadamard-type fractional integro-differential equation with infinite delay. Fractal Fract. 2021, 5, 52.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
