Article

Improved Results on Finite-Time Passivity and Synchronization Problem for Fractional-Order Memristor-Based Competitive Neural Networks: Interval Matrix Approach

1 Department of Mathematics, Near East University TRNC, Mersin 99138, Turkey
2 Department of Mathematics, Alagappa University, Karaikudi 630 003, India
3 Ramanujan Centre for Higher Mathematics, Alagappa University, Karaikudi 630 003, India
4 Department of Mathematics and General Sciences, Prince Sultan University, Riyadh 12435, Saudi Arabia
5 Department of Industrial Engineering, OSTİM Technical University, Ankara 06374, Turkey
6 Faculty of Automatic Control, Electronics and Computer Science, Department of Automatic Control and Robotics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(1), 36; https://doi.org/10.3390/fractalfract6010036
Submission received: 11 October 2021 / Revised: 26 December 2021 / Accepted: 4 January 2022 / Published: 11 January 2022

Abstract: This research paper deals with the passivity and synchronization problem of fractional-order memristor-based competitive neural networks (FOMBCNNs) for the first time. Since the parameters of FOMBCNNs are state-dependent, FOMBCNNs may exhibit unexpected parameter mismatch when different initial conditions are chosen; therefore, a conventional robust control scheme cannot guarantee the synchronization of FOMBCNNs. Under the framework of the Filippov solution, the drive and response FOMBCNNs are first transformed into systems with interval parameters. New sufficient criteria are then obtained in terms of linear matrix inequalities (LMIs) to ensure the finite-time passivity of FOMBCNNs with mismatched switching jumps. Further, a feedback control law is designed to ensure the finite-time synchronization of FOMBCNNs. Finally, three numerical examples are given to illustrate the usefulness of our passivity and synchronization results.

1. Introduction

In recent years, research on competitive-type neural networks (CNNs) has attracted growing attention from mathematicians, engineers and physicists. CNNs involve two types of state variables, describing short-term memory and long-term memory, and they provide a natural modeling framework for convex optimization, cybernetics, image recognition and associative memory. Recently, many significant results on the dynamics of different kinds of neural networks, especially recurrent neural networks [1], cellular neural networks [2], T-S fuzzy neural networks [3], BAM neural networks [4] and competitive neural networks [5,6,7,8], have been obtained, but these results mainly concern the integer-order case. In contrast, only a few results deal with CNNs of fractional order, see [9,10,11,12,13].
As the fourth fundamental circuit element alongside the resistor, capacitor and inductor, the memristor was first postulated by Professor Chua in 1971 [14]. A resistor describes the voltage–current relationship, a capacitor the charge–voltage relationship and an inductor the flux–current relationship. Chua identified the missing flux–charge relationship, whose constitutive parameter, the value of a memristor, he named memristance. The Hewlett-Packard laboratory fabricated a practically working memristor device in 2008 [15]. Attractive features of memristors, such as their nanometer size, nonvolatility and nonlinearity, make them more suitable than resistors for emulating synapses in network models. If the resistors implementing the self-feedback connection weights in a conventional neural network model are replaced by memristors, a memristor-based neural network (MNN) is obtained. Many interesting results on the dynamics of memristor-based neural networks have recently been proposed and studied [16,17,18,19]. Moreover, many researchers argue that MNNs provide more memory storage than conventional neural networks [20]. Undoubtedly, memristor-based competitive neural networks will enhance the associative memory capability of neural networks, and it is therefore of great significance to analyze their dynamical behaviors.
As far back as 1695, the idea of fractional calculus [21] was discussed by Gottfried Wilhelm Leibniz. Compared with traditional integer-order calculus, fractional calculus possesses an unlimited (hereditary) memory property. Recently, fractional calculus has played a vital role in science and engineering [22,23], and many scientific results have been reported on this topic, see [24,25,26,27,28]. At present, there is a trend to utilize fractional differential techniques to study the dynamics of networks, especially neural networks [29,30,31,32]. It should also be mentioned that the memristance of a memristor exhibits fractional-order characteristics; it is therefore natural and more accurate to use fractional differential techniques when studying the dynamics of memristor-based neural network systems. Recently, an ever-increasing number of researchers have studied memristor-based neural networks with fractional order (MNNWFO), and significant results have been reported on stability [33,34], stabilization [35,36] and state estimation [37,38].
Passivity keeps a system internally stable and helps in understanding the stability properties of different dynamical systems. Recently, passivity analysis has become a hot research topic and has been applied effectively in various fields such as control systems, power systems and robotic systems. From the energy viewpoint, passivity relates the internal (Lyapunov-type) energy of a system to its input/output behavior. Based on Lyapunov theory, numerous results have appeared in the literature, see [39,40,41,42,43,44,45,46]. On the other hand, synchronization has also emerged as a hot research theme, and some meaningful results on MNNWFO have been obtained, see [47,48].
Since the life spans of machines and human beings are finite, asymptotic synchronization is often inapplicable in practice. In this regard, the finite-time synchronization of nonlinear systems has been comprehensively investigated in the literature [49,50,51,52]. For example, the authors in [53] analyzed the finite-time Mittag–Leffler synchronization of Caputo fractional-order memristor-based BAM neural networks with fractional orders $0<\xi<1$ and $1<\xi<2$ by means of a linear feedback control law and the generalized Gronwall inequality. On the other hand, finite-time passivity theory provides a powerful tool to analyze the dynamics of fractional-order neural networks; to our knowledge, however, few published works address this problem. To mention one, the authors in [54] established a robust passivity criterion for interval-parameter neural networks with a Caputo fractional-order derivative via passivity theory, Lyapunov theory and LMI techniques. Unfortunately, the passivity and synchronization of FOMBCNNs have not been investigated yet, and this gap motivates the present study.
Motivated by the aforementioned issues, this paper analyzes finite-time passivity and finite-time synchronization criteria for FOMBCNNs. The novelty of this manuscript is summarized as follows. (1) To the best of our knowledge, this paper is the first attempt to address finite-time passivity and finite-time synchronization for fractional-order competitive neural networks. (2) The problem addressed in this paper is treated by a class of robust analytical techniques. (3) To obtain the main results, definitions of finite-time boundedness, finite-time passivity and finite-time synchronization are presented. (4) In light of these definitions, several results are established theoretically. (5) These theoretical results and techniques improve upon the existing passivity and synchronization results for fractional-order neural networks. The remaining structure of this paper is as follows: basic results on fractional calculus and a description of the FOMBCNN system are introduced in the next section. Section 3 and Section 4 present the main results. Section 5 provides numerical examples and their simulations. Finally, Section 6 concludes the paper.

2. System Description and Preliminaries

For a real matrix $U$, $\Lambda_{\min}(U)$ and $\Lambda_{\max}(U)$ denote the minimal and the maximal eigenvalue of $U$, respectively. The superscript $T$ indicates matrix transposition. A matrix $H$ is positive definite ($H>0$) if $u^{T}Hu>0$ for all $u\neq 0$; for matrices of the same dimension, $H>\bar H$ means $H-\bar H>0$. $I$ is the identity matrix. The symmetric term in a matrix is denoted by $\star$. $K(U)$ stands for the closure of the convex hull of $U$. For $\beta(t)=(\beta_1(t),\ldots,\beta_n(t))^{T}\in\mathbb{R}^{n}$, we denote
$$\|\beta(t)\|=\Big(\sum_{j=1}^{n}\beta_j^{2}(t)\Big)^{1/2}.$$

2.1. Preliminaries

This section comprises the rudimentary definitions and lemmas, which are further employed in the subsequent section.
Definition 1.
The fractional-order integral of a function $\beta(t)$ is [21]:
$${}_{t_0}D_t^{-\xi}\beta(t)=\frac{1}{\Gamma(\xi)}\int_{t_0}^{t}(t-s)^{\xi-1}\beta(s)\,ds,$$
where $\Gamma(\cdot)$ is the gamma function.
Definition 2.
The Caputo fractional-order derivative of a function $\beta(t)$ is [21]:
$${}_{t_0}^{C}D_t^{\xi}\beta(t)=\frac{1}{\Gamma(n-\xi)}\int_{t_0}^{t}\frac{\beta^{(n)}(s)}{(t-s)^{\xi-n+1}}\,ds,$$
where $t\geq t_0$ and $n-1<\xi<n\in\mathbb{Z}^{+}$.
Definition 3.
The Mittag–Leffler function with two parameters is defined as [21]:
$$E_{\xi_1,\xi_2}(z)=\sum_{i=0}^{+\infty}\frac{z^{i}}{\Gamma(\xi_1 i+\xi_2)},$$
where $\xi_1,\xi_2\in\mathbb{R}^{+}$ and $z\in\mathbb{C}$. For $\xi_2=1$, the Mittag–Leffler function with one parameter is obtained:
$$E_{\xi_1,1}(z)=\sum_{i=0}^{+\infty}\frac{z^{i}}{\Gamma(\xi_1 i+1)}=E_{\xi_1}(z).$$
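For numerical work with Mittag–Leffler decay estimates such as the bound in Lemma 2 below, the two-parameter function can be evaluated by truncating its defining series. The following Python sketch illustrates this; the function name and truncation length are our own choices, and plain summation is reliable only for moderate $|z|$ (dedicated algorithms should be used for large arguments).

```python
from math import gamma, exp

def mittag_leffler(z, xi1, xi2=1.0, n_terms=100):
    """Two-parameter Mittag-Leffler function by truncating its series,
    E_{xi1,xi2}(z) = sum_{i>=0} z**i / Gamma(xi1*i + xi2).
    Plain summation; adequate for moderate |z| only."""
    total, power = 0.0, 1.0            # power holds z**i
    for i in range(n_terms):
        term = power / gamma(xi1 * i + xi2)
        total += term
        if abs(term) < 1e-16 * max(1.0, abs(total)):
            break                      # remaining terms are negligible
        power *= z
    return total

# Sanity checks: E_1(z) = exp(z) and E_{1,2}(z) = (exp(z) - 1)/z.
assert abs(mittag_leffler(1.0, 1.0) - exp(1.0)) < 1e-10
assert abs(mittag_leffler(1.0, 1.0, 2.0) - (exp(1.0) - 1.0)) < 1e-10
```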
Lemma 1.
If $\beta(t)\in C^{n}[0,+\infty)$ and $n-1<\xi<n$ $(n\in\mathbb{Z}^{+},\,n\geq 1)$, then [21]
$${}_{t_0}D_t^{-\xi}\,{}_{t_0}^{C}D_t^{\xi}\beta(t)=\beta(t)-\sum_{u=0}^{n-1}\frac{(t-t_0)^{u}}{u!}\beta^{(u)}(t_0).$$
In particular, if $0<\xi<1$, then ${}_{t_0}D_t^{-\xi}\,{}_{t_0}^{C}D_t^{\xi}\beta(t)=\beta(t)-\beta(t_0)$.
Lemma 2.
Let $0<\xi<1$ and let $p(t)$ be continuous on $[0,+\infty)$. If there exist constants $\varrho_1>0$ and $\varrho_2\geq 0$ such that [31]
$${}_{0}^{C}D_t^{\xi}p(t)\leq-\varrho_1 p(t)+\varrho_2,\qquad t\geq 0,$$
then
$$p(t)\leq p(0)E_{\xi}\big(-\varrho_1 t^{\xi}\big)+\varrho_2\,t^{\xi}E_{\xi,\xi+1}\big(-\varrho_1 t^{\xi}\big),\qquad t\geq 0.$$
Lemma 3.
Let $p(t)\in\mathbb{R}^{n}$ be a continuously differentiable vector-valued function. Then, for any $t\geq t_0$ [32],
$${}_{t_0}^{C}D_t^{\xi}\big\{p^{T}(t)Hp(t)\big\}\leq 2p^{T}(t)H\,\big\{{}_{t_0}^{C}D_t^{\xi}p(t)\big\},\qquad \xi\in(0,1),$$
where $H\in\mathbb{R}^{n\times n}$ is a positive definite symmetric matrix.

2.2. Model Description

We consider a fractional-order memristor-based competitive neural network (FOMBCNN) in this manuscript:
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\displaystyle\sum_{k=1}^{n}q_{jk}\big(u_j(t)\big)g_k\big(u_k(t)\big)+s_j\displaystyle\sum_{i=1}^{m}\delta_{ji}(t)\zeta_i+\pi_f x_j(t),\\[2mm]{}_{0}^{C}D_t^{\xi}\delta_{ji}(t)=-a_j\delta_{ji}(t)+\zeta_i b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d,\end{cases}$$
where ${}_{0}^{C}D_t^{\xi}$ denotes the Caputo derivative of order $0<\xi<1$; $u_j(t)$ represents the state variables; $p_j$ represents the current activity level; $a_j$ and $b_j$ are disposable positive scaling scalars; $g_k(\cdot)$ denotes the neuron activation functions; $\delta_{ji}$ denotes the synaptic efficiency; $s_j$ denotes the strength of the external stimulus; $\zeta=(\zeta_1,\ldots,\zeta_m)^{T}$ is the constant external stimulus; $x_j(t)$ and $y_j(t)$ are disturbance input vectors; $\pi_f$ and $\theta_f$ are known scalars; and the memristor-based synaptic connection weights satisfy
$$q_{jk}\big(u_j(t)\big)=\begin{cases}\hat q_{jk}, & |u_j(t)|\leq I_j,\\ \breve q_{jk}, & |u_j(t)|> I_j,\end{cases}$$
in which $I_j>0$ are the switching jumps and $\hat q_{jk}>0$, $\breve q_{jk}>0$ are constants.
Define $w_j(t)=\sum_{i=1}^{m}\delta_{ji}(t)\zeta_i=\zeta^{T}\delta_j(t)$, $j=1,2,\ldots,n$, where $\delta_j(\cdot)=[\delta_{j1}(\cdot),\ldots,\delta_{jm}(\cdot)]^{T}$. Then, system (1) can be rewritten as
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}q_{jk}\big(u_j(t)\big)g_k\big(u_k(t)\big)+s_j w_j(t)+\pi_f x_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+|\zeta|^{2}b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d,\end{cases}$$
where $|\zeta|^{2}=\zeta_1^{2}+\cdots+\zeta_m^{2}$ is a scalar. Without loss of generality, the input stimulus vector is assumed to be normalized with unit magnitude, $|\zeta|^{2}=1$. Then, the above system can be written as
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}q_{jk}\big(u_j(t)\big)g_k\big(u_k(t)\big)+s_j w_j(t)+\pi_f x_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d.\end{cases}$$
Remark 1.
The memristor-based connection weights of the system are changed based on the system state. Therefore, FOMBCNN (1) can be regarded as a state-dependent switching system, which is a special case of the dynamics of competitive neural networks.
Define $q_{jk}^{+}=\max\{\hat q_{jk},\breve q_{jk}\}$, $q_{jk}^{-}=\min\{\hat q_{jk},\breve q_{jk}\}$, $\grave q_{jk}=\frac12\big(q_{jk}^{+}+q_{jk}^{-}\big)$ and $\acute q_{jk}=\frac12\big(q_{jk}^{+}-q_{jk}^{-}\big)$. Then, FOMBCNN (1) can be written in the following form:
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\Delta_{jk}\big(u_j(t)\big)\big]g_k\big(u_k(t)\big)+s_j w_j(t)+\pi_f x_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d,\end{cases}$$
where
$$\Delta_{jk}\big(u_j(t)\big)=\begin{cases}\acute q_{jk}, & q_{jk}\big(u_j(t)\big)=q_{jk}^{+},\\ -\acute q_{jk}, & q_{jk}\big(u_j(t)\big)=q_{jk}^{-}.\end{cases}$$
Based on Filippov’s theory [55] and some transformation techniques, it can be obtained from (4) that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)\in-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+K\big(-\acute q_{jk},\acute q_{jk}\big)\big]g_k\big(u_k(t)\big)+s_j w_j(t)+\pi_f x_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d.\end{cases}$$
According to the measurable selection theorem [56], there exists a measurable function $\chi_{jk}(t)\in K[-1,1]$ such that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\chi_{jk}(t)\big]g_k\big(u_k(t)\big)+s_j w_j(t)+\pi_f x_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big)+\theta_f y_j(t),\qquad f=1,2,\ldots,d.\end{cases}$$
Throughout this manuscript, we make the following assumptions.
$(A_1)$. The nonlinear feedback function $g_k$ is bounded and satisfies
$$0\leq\frac{g_k(\gamma_1)-g_k(\gamma_2)}{\gamma_1-\gamma_2}\leq l_k,\qquad k=1,2,\ldots,n,\quad \gamma_1\neq\gamma_2\in\mathbb{R},$$
where $l_k>0$ $(k=1,2,\ldots,n)$ are scalars.
$(A_2)$. The disturbance inputs $x(t)=[x_1(t),\ldots,x_n(t)]^{T}\in\mathbb{R}^{n}$ and $y(t)=[y_1(t),\ldots,y_n(t)]^{T}\in\mathbb{R}^{n}$ are time varying, and there exist constants $\lambda_1>0$ and $\lambda_2>0$ such that
$$x^{T}(t)x(t)\leq\lambda_1,\qquad y^{T}(t)y(t)\leq\lambda_2.$$
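Model (1)–(3) can be explored numerically with any standard discretization of the Caputo derivative. The Python sketch below uses the explicit product-rectangle (fractional forward Euler) rule, $u(t_{k+1})=u(0)+\frac{h^{\xi}}{\Gamma(\xi+1)}\sum_{j\le k}\big[(k+1-j)^{\xi}-(k-j)^{\xi}\big]f\big(u(t_j)\big)$. It is only an illustration under assumed inputs: the function name, the $\tanh$ activation and the omission of the disturbances $x(t)$, $y(t)$ are our own choices, not the authors' scheme.

```python
import numpy as np
from math import gamma

def simulate_fombcnn(u0, w0, p, a, b, s, q_hat, q_brev, I_sw,
                     xi=0.98, h=0.01, T=10.0):
    """Explicit product-rectangle scheme for the Caputo system (3):
        D^xi u_j = -p_j u_j + sum_k q_jk(u_j) g(u_k) + s_j w_j,
        D^xi w_j = -a_j w_j + b_j g(u_j),
    with the state-dependent weights (2); disturbances are omitted here."""
    g = np.tanh                                   # a bounded activation satisfying (A1)
    n, N = len(u0), int(T / h)
    U, W = np.zeros((N + 1, n)), np.zeros((N + 1, n))
    U[0], W[0] = u0, w0
    Fu, Fw = np.zeros((N, n)), np.zeros((N, n))   # stored right-hand sides

    def rhs(u, w):
        # q_jk(u_j): row j of the weight matrix switches on |u_j| vs the jump I_j
        Q = np.where(np.abs(u)[:, None] <= I_sw[:, None], q_hat, q_brev)
        return -p * u + Q @ g(u) + s * w, -a * w + b * g(u)

    c = h**xi / gamma(xi + 1.0)
    for k in range(N):
        Fu[k], Fw[k] = rhs(U[k], W[k])
        j = np.arange(k + 1)
        wgt = (k + 1 - j)**xi - (k - j)**xi       # rectangle-rule weights
        U[k + 1] = U[0] + c * wgt @ Fu[:k + 1]
        W[k + 1] = W[0] + c * wgt @ Fw[:k + 1]
    return U, W
```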
In the development of the main results, the following lemmas are significant.
Lemma 4.
[57] Let $\zeta_1>0$, $\zeta_2>0$, $\zeta_3>1$, $\zeta_4>1$ with $\zeta_3^{-1}+\zeta_4^{-1}=1$. Then, the following inequality holds:
$$\zeta_1\zeta_2\leq\frac{(\eta\zeta_1)^{\zeta_3}}{\zeta_3}+\frac{(\eta^{-1}\zeta_2)^{\zeta_4}}{\zeta_4},$$
where $\eta>0$ is a constant.
Lemma 5.
[58] Given matrices $H=H^{T}$, $E$, $G$ and $M=M^{T}>0$ of appropriate dimensions,
$$H+E\Phi G+G^{T}\Phi^{T}E^{T}<0$$
holds for all $\Phi$ satisfying $\Phi^{T}\Phi\leq M$ if and only if there exists some $\beta>0$ such that
$$H+\beta EE^{T}+\beta^{-1}G^{T}MG<0.$$
Lemma 6.
[59] Let $\alpha>0$. For any $\wp_1,\wp_2\in\mathbb{R}^{n}$ and any $n\times n$ matrix $H$,
$$2\wp_1^{T}H\wp_2\leq\alpha^{-1}\wp_1^{T}HH^{T}\wp_1+\alpha\,\wp_2^{T}\wp_2.$$
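Lemma 6, as reconstructed here, is a completion-of-squares bound; a quick numerical sanity check (with arbitrary random data) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)
wp1, wp2 = rng.normal(size=4), rng.normal(size=4)
H = rng.normal(size=(4, 4))
for alpha in (0.1, 1.0, 10.0):
    lhs = 2 * wp1 @ H @ wp2
    rhs = (1 / alpha) * wp1 @ H @ H.T @ wp1 + alpha * wp2 @ wp2
    assert lhs <= rhs + 1e-12    # the bound holds for every alpha > 0
```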

3. Finite-Time Passivity

In this section, we demonstrate the finite-time passivity criterion of FOMBCNN (1), which is equivalent to the FOMBCNN system (6).
First, we define the following notations:
$$E=\big[\acute q_{11}\eta_1,\ldots,\acute q_{1n}\eta_1,\ldots,\acute q_{n1}\eta_n,\ldots,\acute q_{nn}\eta_n\big]\in\mathbb{R}^{n\times n^{2}},\qquad G=\big[\acute q_{11}\eta_1,\ldots,\acute q_{1n}\eta_n,\ldots,\acute q_{n1}\eta_1,\ldots,\acute q_{nn}\eta_n\big]^{T}\in\mathbb{R}^{n^{2}\times n},$$
where $\eta_j\in\mathbb{R}^{n}$ is the column vector whose $j$-th element is one and all other elements are zero. From (7), one has
$$Q_1=EE^{T}=\mathrm{diag}\Big\{\sum_{k=1}^{n}\acute q_{1k}^{2},\ldots,\sum_{k=1}^{n}\acute q_{nk}^{2}\Big\},\qquad Q_2=G^{T}G=\mathrm{diag}\Big\{\sum_{k=1}^{n}\acute q_{k1}^{2},\ldots,\sum_{k=1}^{n}\acute q_{kn}^{2}\Big\}.$$
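The interval quantities $\grave q_{jk}$, $\acute q_{jk}$ and the matrices $E$, $G$, $Q_1$, $Q_2$ are straightforward to assemble from the two memristive levels. A minimal Python sketch is given below (the helper name is ours; it follows the definition of $E$ and $G$ above and computes $Q_1$, $Q_2$ numerically from them); for the data of Example 2 in Section 5 it reproduces the $E$ and $G$ matrices reported there.

```python
import numpy as np

def interval_matrices(q_hat, q_brev):
    """Midpoint/half-width matrices of the memristive weights and the
    matrices E (n x n^2), G (n^2 x n) built from the unit vectors eta_j,
    together with Q1 = E E^T and Q2 = G^T G."""
    q_plus, q_minus = np.maximum(q_hat, q_brev), np.minimum(q_hat, q_brev)
    q_grave, q_acute = 0.5 * (q_plus + q_minus), 0.5 * (q_plus - q_minus)
    n = q_acute.shape[0]
    E, G = np.zeros((n, n * n)), np.zeros((n * n, n))
    for j in range(n):
        for k in range(n):
            E[j, j * n + k] = q_acute[j, k]   # column (j,k) of E is q_acute[j,k] * eta_j
            G[j * n + k, k] = q_acute[j, k]   # row (j,k) of G is q_acute[j,k] * eta_k^T
    return q_grave, q_acute, E, G, E @ E.T, G.T @ G
```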
Let us also denote
$$N(t)=\mathrm{diag}\big\{\chi_{11}(t),\ldots,\chi_{1n}(t),\ldots,\chi_{n1}(t),\ldots,\chi_{nn}(t)\big\}\in\mathbb{R}^{n^{2}\times n^{2}},\qquad |\chi_{jk}(t)|\leq 1,\quad 1\leq j,k\leq n.$$
Obviously, $N^{T}(t)N(t)\leq I$. The vector form of FOMBCNN (6) can be written as:
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u(t)=-Pu(t)+\big[Q+EN(t)G\big]g\big(u(t)\big)+Sw(t)+Fx(t),\\ {}_{0}^{C}D_t^{\xi}w(t)=-Aw(t)+Bg\big(u(t)\big)+Hy(t),\end{cases}$$
where $P=\mathrm{diag}\{p_1,\ldots,p_n\}$, $Q=(\grave q_{jk})\in\mathbb{R}^{n\times n}$, $S=\mathrm{diag}\{s_1,\ldots,s_n\}$, $A=\mathrm{diag}\{a_1,\ldots,a_n\}$, $B=\mathrm{diag}\{b_1,\ldots,b_n\}$, $F=\mathrm{diag}\{\pi_1,\ldots,\pi_d\}$ and $H=\mathrm{diag}\{\theta_1,\ldots,\theta_d\}$. The measured output vector of model (9) is assumed to be:
$$q_u(t)=R_u u(t),\qquad q_w(t)=R_w w(t),$$
where $q_u(t)\in\mathbb{R}^{d}$, $q_w(t)\in\mathbb{R}^{d}$ and $R_u,R_w\in\mathbb{R}^{d\times n}$.
The following definitions are much needed to establish the finite-time passivity criteria of system (9).
Definition 4.
FOMBCNN (9) with measured outputs $q_u(t)=q_w(t)=0$ is said to be finite-time bounded with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$, where $\bar H_1,\bar H_2$ are symmetric positive definite matrices, if
$$u^{T}(t_0)\bar H_1 u(t_0)+w^{T}(t_0)\bar H_2 w(t_0)\leq\sigma_1$$
implies that
$$u^{T}(t)\bar H_1 u(t)+w^{T}(t)\bar H_2 w(t)\leq\sigma_2,\qquad t\in[t_0,T_\sigma],$$
for all $x(t),y(t)\in\mathbb{R}^{n}$ satisfying Assumption $(A_2)$.
Definition 5.
FOMBCNN (9) is said to be finite-time passive with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$, where $\bar H_1,\bar H_2$ are symmetric positive definite matrices, if the following two conditions hold:
1. When the measured outputs satisfy $q_u(t)=q_w(t)=0$, FOMBCNN (9) is finite-time bounded with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$.
2. Under zero initial values, there exists a constant $\upsilon>0$ such that
$$2\,{}_{t_0}D_t^{-\xi}\big[q_u^{T}(t)x(t)+q_w^{T}(t)y(t)\big]\geq-\upsilon\,{}_{t_0}D_t^{-\xi}\big[x^{T}(t)x(t)+y^{T}(t)y(t)\big],\qquad t\in[t_0,T_\sigma].$$
Theorem 1.
Suppose that Assumptions $(A_1)$ and $(A_2)$ hold. FOMBCNN (1) with measured outputs $q_u(t)=q_w(t)=0$ is finite-time bounded with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$ if there exist a symmetric positive definite matrix $H_1$, a positive diagonal matrix $H_2$ and positive constants $\alpha_i$, $i=1,2,3,4,5$, and $\beta>0$ such that
$$\Phi=\begin{bmatrix}\Phi_1 & H_1Q & H_1S & H_1F & H_1Q_1^{1/2}\\ \star & -\alpha_1 I+\beta Q_2 & 0 & 0 & 0\\ \star & \star & -\alpha_2 I & 0 & 0\\ \star & \star & \star & -\alpha_3 I & 0\\ \star & \star & \star & \star & -\beta I\end{bmatrix}<0,$$
$$\Psi=\begin{bmatrix}-H_2A-A^{T}H_2+\alpha_2 I & H_2B & H_2H\\ \star & -\alpha_4 I & 0\\ \star & \star & -\alpha_5 I\end{bmatrix}<0,$$
$$\underline{\vartheta}\,\sigma_2>\bar\vartheta\,\sigma_1+\frac{\alpha(\lambda_1+\lambda_2)}{\Gamma(\xi+1)}T_\sigma^{\xi},$$
where $\Phi_1=-H_1P-P^{T}H_1+[\alpha_1+\alpha_4]L^{T}L$, $\tilde H_1=\bar H_1^{-1/2}H_1\bar H_1^{-1/2}$, $\tilde H_2=\bar H_2^{-1/2}H_2\bar H_2^{-1/2}$, $\underline{\vartheta}=\min\{\Lambda_{\min}(\tilde H_1),\Lambda_{\min}(\tilde H_2)\}$, $\bar\vartheta=\max\{\Lambda_{\max}(\tilde H_1),\Lambda_{\max}(\tilde H_2)\}$ and $\alpha=\max\{\alpha_3,\alpha_5\}$.
Proof. 
Define the following functional for FOMBCNN (9) as
$$W\big(u(t),w(t)\big)=u^{T}(t)H_1u(t)+w^{T}(t)H_2w(t).$$
According to Lemma 3, we have
0 C D t ξ W ( u ( t ) , w ( t ) ) 2 u T ( t ) H 1 { 0 C D t ξ u ( t ) } + 2 w T ( t ) H 2 { 0 C D t ξ w ( t ) } = 2 u T ( t ) H 1 P u ( t ) + Q g u ( t ) + S w ( t ) + F x ( t ) + 2 w T ( t ) H 2 A w ( t ) + B g u ( t ) + H y ( t ) = u T ( t ) H 1 P P T H 1 u ( t ) + 2 u T ( t ) H 1 Q g u ( t ) + 2 u T ( t ) H 1 S w ( t ) + 2 u T ( t ) H 1 F x ( t ) + w T ( t ) H 2 A A T H 2 w ( t ) + 2 w T ( t ) H 2 B g u ( t ) + 2 w T ( t ) H 2 H y ( t ) ,
where Q = Q + E N ( t ) G . Based on Lemma 6, we obtain
0 C D t ξ W ( u ( t ) , w ( t ) ) u T ( t ) H 1 P P T H 1 u ( t ) + α 1 1 u T ( t ) ( H 1 Q ) ( H 1 Q ) T u ( t ) + α 1 u T ( t ) L T L u ( t ) + α 2 1 u T ( t ) ( H 1 S ) ( H 1 S ) T u ( t ) + α 2 w T ( t ) w ( t ) + α 3 1 u T ( t ) ( H 1 F ) ( H 1 F ) T u ( t ) + α 3 x T ( t ) x ( t ) + w T ( t ) H 2 A A T H 2 w ( t ) + α 4 1 w T ( t ) ( H 2 B ) ( H 2 B ) T w ( t ) + α 4 u T ( t ) L T L u ( t ) + α 5 1 w T ( t ) ( H 2 H ) ( H 2 H ) T w ( t ) + α 5 y T ( t ) y ( t ) u T ( t ) Φ u ( t ) + w T ( t ) Ψ w ( t ) + α 3 x T ( t ) x ( t ) + α 5 y T ( t ) y ( t ) ,
where L = d i a g { l 1 , . . . , l n } and
Φ = H 1 P P T H 1 + α 1 1 ( H 1 Q ) ( H 1 Q ) T + [ α 1 + α 4 ] L T L + α 2 1 ( H 1 S ) ( H 1 S ) T + α 3 1 ( H 1 F ) ( H 1 F ) T < 0
Ψ = H 2 A A T H 2 + α 4 1 ( H 2 B ) ( H 2 B ) T + α 5 1 ( H 2 H ) ( H 2 H ) T + α 2 I < 0 .
According to the Schur complement Lemma, (18) is equivalent to LMIs (12) and (17) is equivalent to the following LMIs (19)
Φ = Φ 1 H 1 Q H 1 S H 1 F α 1 I 0 0 α 2 I 0 α 3 I
with Φ 1 = H 1 P P T H 1 + [ α 1 + α 4 ] L T L .
Replacing Q in (19) by Q + E N ( t ) G . From (19), Φ is equivalent to
Φ = Φ 1 H 1 Q H 1 S H 1 F α 1 I 0 0 α 2 I 0 α 3 I + H 1 E 0 0 0 N ( t ) 0 G 0 0 + 0 G T 0 0 N T ( t ) E T H 1 0 0 0 < 0 .
Based on Lemma 5 in (20), there exists β > 0 such that Φ < 0 is equivalent to
Π = Φ 1 H 1 Q H 1 S H 1 F α 1 I 0 0 α 2 I 0 α 3 I + β 1 H 1 E 0 0 0 E T H 1 0 0 0 + β 0 G T 0 0 0 G 0 0 < 0
which can be rewritten as
Φ = Φ 1 + β 1 H 1 E E T H 1 H 1 Q H 1 S H 1 F α 1 I + β G G T 0 0 α 2 I 0 α 3 I < 0 .
Based on the Schur Complement Lemma, expression (22) can be rearranged as condition (11). Therefore, from conditions (11) and (12), we have
$${}_{0}^{C}D_t^{\xi}W\big(u(t),w(t)\big)\leq\alpha\big[x^{T}(t)x(t)+y^{T}(t)y(t)\big].$$
Taking the fractional integral of (23) from $t_0$ to $t$ $(t_0\leq t\leq T_\sigma)$ and using Lemma 1, one can obtain
$$u^{T}(t)H_1u(t)+w^{T}(t)H_2w(t)\leq u^{T}(t_0)H_1u(t_0)+w^{T}(t_0)H_2w(t_0)+\frac{\alpha}{\Gamma(\xi)}\int_{t_0}^{t}(t-s)^{\xi-1}\big[x^{T}(s)x(s)+y^{T}(s)y(s)\big]ds\leq u^{T}(t_0)H_1u(t_0)+w^{T}(t_0)H_2w(t_0)+\frac{\alpha(\lambda_1+\lambda_2)}{\Gamma(\xi+1)}T_\sigma^{\xi}.$$
On the other hand,
u T ( t ) H 1 u ( t ) + w T ( t ) H 2 w ( t ) = u T ( t ) H 1 1 2 H 1 H 1 1 2 u ( t ) + w T ( t ) H 2 1 2 H 2 H 2 1 2 w ( t ) Λ min ( H 1 ) u T ( t ) H 1 u ( t ) + Λ min ( H 2 ) w T ( t ) H 2 w ( t ) = ϑ ̲ u T ( t ) H 1 u ( t ) + w T ( t ) H 2 w ( t ) ,
and
u T ( t 0 ) H 1 u ( t 0 ) + w T ( t 0 ) H 2 w ( t 0 ) = u T ( t 0 ) H 1 1 2 H 1 H 1 1 2 u ( t 0 ) + w T ( t 0 ) H 2 1 2 H 2 H 2 1 2 w ( t 0 ) Λ max ( H 1 ) u T ( t 0 ) H 1 u ( t 0 ) + Λ max ( H 2 ) w T ( t 0 ) H 2 w ( t 0 ) = ϑ ¯ u T ( t 0 ) H 1 u ( t 0 ) + w T ( t 0 ) H 2 w ( t 0 ) , ϑ ¯ σ 1 .
Combining (24)–(26), one obtains
$$\underline{\vartheta}\big[u^{T}(t)\bar H_1u(t)+w^{T}(t)\bar H_2w(t)\big]\leq W\big(u(t),w(t)\big)=u^{T}(t)H_1u(t)+w^{T}(t)H_2w(t)\leq\bar\vartheta\,\sigma_1+\frac{\alpha(\lambda_1+\lambda_2)}{\Gamma(\xi+1)}T_\sigma^{\xi}.$$
Condition (13) then implies that $u^{T}(t)\bar H_1u(t)+w^{T}(t)\bar H_2w(t)<\sigma_2$. Thus, FOMBCNN (1) with measured outputs $q_u(t)=q_w(t)=0$ is finite-time bounded with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$. □
Theorem 2.
FOMBCNN (1) is finite-time passive with respect to $\big(\sigma_1,\sigma_2,T_\sigma,\bar H_1,\bar H_2,\lambda_1,\lambda_2\big)$ if Assumptions $(A_1)$ and $(A_2)$ are satisfied, condition (13) of Theorem 1 holds, and there exist a symmetric positive definite matrix $H_1$, a positive diagonal matrix $H_2$ and scalars $\alpha_i>0$, $i=1,2,3,4,5$, and $\beta,\rho,\kappa>0$ such that
$$\Pi=\begin{bmatrix}\Pi_1 & H_1Q & H_1S & H_1F & H_1Q_1^{1/2} & R_u^{T}\\ \star & -\alpha_1 I+\beta Q_2 & 0 & 0 & 0 & 0\\ \star & \star & -\alpha_2 I & 0 & 0 & 0\\ \star & \star & \star & -\alpha_3 I & 0 & 0\\ \star & \star & \star & \star & -\beta I & 0\\ \star & \star & \star & \star & \star & -\rho I\end{bmatrix}<0,$$
$$\Sigma=\begin{bmatrix}-H_2A-A^{T}H_2+\alpha_2 I & H_2B & H_2H & R_w^{T}\\ \star & -\alpha_4 I & 0 & 0\\ \star & \star & -\alpha_5 I & 0\\ \star & \star & \star & -\kappa I\end{bmatrix}<0,$$
and
$$\alpha_3+\rho-\upsilon<0,\qquad \alpha_5+\kappa-\upsilon<0,$$
where $\Pi_1=-H_1P-P^{T}H_1+[\alpha_1+\alpha_4]L^{T}L$.
Proof. 
Based on the proof of Theorem 1, one can easily obtain
0 C D t ξ W ( u ( t ) , w ( t ) ) 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) υ x T ( t ) x ( t ) υ y T ( t ) y ( t )
u 1 T ( t ) Φ u 1 ( t ) + w 1 T ( t ) Ψ w 2 ( t ) + α 3 υ x T ( t ) x ( t ) + α 5 υ y T ( t ) y ( t ) 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) .
In view of Lemma 6, there exists ρ > 0 such that
2 q u ( t ) x ( t ) ρ 1 q u T ( t ) q u ( t ) + ρ x T ( t ) x ( t )
= ρ 1 u T ( t ) R u T R u u ( t ) + ρ x T ( t ) x ( t ) 2 q w ( t ) y ( t ) κ 1 q w T ( t ) q w ( t ) + κ y T ( t ) y ( t )
= κ 1 w T ( t ) R w T R w w ( t ) + κ y T ( t ) y ( t )
0 C D t ξ W ( u ( t ) , w ( t ) ) 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) υ x T ( t ) x ( t ) υ y T ( t ) y ( t )
u T ( t ) Π u ( t ) + w T ( t ) Σ w ( t ) + [ α 3 + ρ υ ] x T ( t ) x ( t ) + α 5 + κ υ β T ( t ) β ( t ) .
Then, from (29)–(32), we have
$${}_{0}^{C}D_t^{\xi}W\big(u(t),w(t)\big)-2q_u^{T}(t)x(t)-2q_w^{T}(t)y(t)-\upsilon x^{T}(t)x(t)-\upsilon y^{T}(t)y(t)\leq 0.$$
Now, we set
Λ = 0 C D t ξ 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) υ x T ( t ) x ( t ) υ y T ( t ) y ( t ) = 1 Γ ( ξ ) t 0 t ( t ) ξ 1 [ 2 q u ( ) x ( ) 2 q w ( ) y ( ) υ x T ( ) x ( ) υ y T ( ) y ( ) ] d , t [ t 0 , T h ] .
In view of Lemma 1, we have
Λ = 0 C D t ξ 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) υ x T ( t ) x ( t ) υ y T ( t ) y ( t ) = W ( u ( t ) , w ( t ) ) + 0 C D t ξ [ 0 C D t ξ W ( u ( t ) , w ( t ) ) 2 q u ( t ) x ( t ) 2 q w ( t ) y ( t ) υ x T ( t ) x ( t ) υ y T ( t ) y ( t ) ]
since $W(u(t),w(t))\geq 0$. Combining (34) and (35), we obtain $\Lambda\leq 0$. Hence,
$$2\,{}_{t_0}D_t^{-\xi}\big[q_u^{T}(t)x(t)+q_w^{T}(t)y(t)\big]\geq-\upsilon\,{}_{t_0}D_t^{-\xi}\big[x^{T}(t)x(t)+y^{T}(t)y(t)\big],\qquad t\in[t_0,T_\sigma],$$
which means that the main FOMBCNN system (1) is passive in finite time with respect to σ 1 , σ 2 , T σ , H 1 , H 2 , λ 1 , λ 2 . □
Remark 2.
When $x_j(t)=y_j(t)=q_u(t)=q_w(t)=0$, the asymptotic stability of the FOMBCNN model (1) can be obtained directly from Theorem 1. Passivity can thus be viewed as a stronger property than stability for FOMBCNNs.
Remark 3.
We now discuss a special case of system (1). In particular, when $y(t)=0$, $q_w(t)=0$ and $F=I$, model (9) reduces to the FOMBCNN discussed in [60]:
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u(t)=-Pu(t)+\big[Q+EN(t)G\big]g\big(u(t)\big)+Sw(t)+x(t),\\ {}_{0}^{C}D_t^{\xi}w(t)=-Aw(t)+Bg\big(u(t)\big),\\ q_u(t)=R_u u(t).\end{cases}$$
Based on Theorem 1, the following corollary can be obtained for the system model (35).
Corollary 1.
Suppose that Assumption $(A_1)$ holds. If there exist a symmetric positive definite matrix $H_1$, a positive diagonal matrix $H_2$ and scalars $\alpha_i>0$, $i=1,2,3,4$, and $\beta,\rho>0$ such that
$$\Pi=\begin{bmatrix}\Pi_1 & H_1Q & H_1S & H_1 & H_1Q_1^{1/2} & R_u^{T}\\ \star & -\alpha_1 I+\beta Q_2 & 0 & 0 & 0 & 0\\ \star & \star & -\alpha_2 I & 0 & 0 & 0\\ \star & \star & \star & -\alpha_3 I & 0 & 0\\ \star & \star & \star & \star & -\beta I & 0\\ \star & \star & \star & \star & \star & -\rho I\end{bmatrix}<0,$$
$$\Sigma=\begin{bmatrix}-H_2A-A^{T}H_2+\alpha_2 I & H_2B\\ \star & -\alpha_4 I\end{bmatrix}<0,$$
and
$$\alpha_3+\rho-\upsilon<0,$$
where $\Pi_1=-H_1P-P^{T}H_1+[\alpha_1+\alpha_4]L^{T}L$, then FOMBCNN (35) is passive.
Remark 4.
Generally, the maximum absolute value method is an effective tool to study the dynamics of FOMBCNNs. However, since the resulting sufficient conditions are all expressed in terms of the maximum absolute values of the memristor-based synaptic connection strengths, i.e., $\max\{|\hat q_{jk}|,|\breve q_{jk}|\}$, half of the information carried by the switching weights is lost. To overcome this issue, in the present work the memristor-based connection weights are transformed into interval parameters, which reduces the information loss. Therefore, the interval matrix approach is more effective than the maximum absolute value method.

4. Finite-Time Synchronization

Let $x_j(t)=y_j(t)=0$ in FOMBCNN (1); then we have
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}q_{jk}\big(u_j(t)\big)g_k\big(u_k(t)\big)+s_j w_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big).\end{cases}$$
Here, all the parameters are the same as in the FOMBCNN system (1). Similarly to FOMBCNN (4), the above system can be written as
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\Delta_{jk}\big(u_j(t)\big)\big]g_k\big(u_k(t)\big)+s_j w_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big),\end{cases}$$
where $\grave q_{jk}$ and $\Delta_{jk}(u_j(t))$ are defined in (4). Based on Filippov’s theory [55] and some transformation techniques, it can be obtained from (39) that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)\in-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+K\big(-\acute q_{jk},\acute q_{jk}\big)\big]g_k\big(u_k(t)\big)+s_j w_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big).\end{cases}$$
Based on the measurable selection theorem [56], there exists a measurable function $\chi_{jk}(t)\in K[-1,1]$ such that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}u_j(t)=-p_j u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\chi_{jk}(t)\big]g_k\big(u_k(t)\big)+s_j w_j(t),\\ {}_{0}^{C}D_t^{\xi}w_j(t)=-a_j w_j(t)+b_j g_j\big(u_j(t)\big).\end{cases}$$
The response FOMBCNN with control inputs is given by
$$\begin{cases}{}_{0}^{C}D_t^{\xi}\tilde u_j(t)=-p_j\tilde u_j(t)+\sum_{k=1}^{n}q_{jk}\big(\tilde u_j(t)\big)g_k\big(\tilde u_k(t)\big)+s_j\tilde w_j(t)+h_{1j}(t),\\ {}_{0}^{C}D_t^{\xi}\tilde w_j(t)=-a_j\tilde w_j(t)+b_j g_j\big(\tilde u_j(t)\big)+h_{2j}(t),\end{cases}$$
where the memristor-based connection weights $q_{jk}(\tilde u_j(t))$ are defined as in Section 2 and $h_{1j}(t)$, $h_{2j}(t)$ are suitable controllers to be designed. Similarly to FOMBCNN (38), the response system (41) can be written as
$$\begin{cases}{}_{0}^{C}D_t^{\xi}\tilde u_j(t)=-p_j\tilde u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\Delta_{jk}\big(\tilde u_j(t)\big)\big]g_k\big(\tilde u_k(t)\big)+s_j\tilde w_j(t)+h_{1j}(t),\\ {}_{0}^{C}D_t^{\xi}\tilde w_j(t)=-a_j\tilde w_j(t)+b_j g_j\big(\tilde u_j(t)\big)+h_{2j}(t),\end{cases}$$
where
$$\Delta_{jk}\big(\tilde u_j(t)\big)=\begin{cases}\acute q_{jk}, & q_{jk}\big(\tilde u_j(t)\big)=q_{jk}^{+},\\ -\acute q_{jk}, & q_{jk}\big(\tilde u_j(t)\big)=q_{jk}^{-}.\end{cases}$$
Based on Filippov’s theory [55] and some transformation techniques, it can be obtained from (42) that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}\tilde u_j(t)\in-p_j\tilde u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+K\big(-\acute q_{jk},\acute q_{jk}\big)\big]g_k\big(\tilde u_k(t)\big)+s_j\tilde w_j(t)+h_{1j}(t),\\ {}_{0}^{C}D_t^{\xi}\tilde w_j(t)=-a_j\tilde w_j(t)+b_j g_j\big(\tilde u_j(t)\big)+h_{2j}(t).\end{cases}$$
Based on the measurable selection theorem [56], there exists a measurable function $\tilde\chi_{jk}(t)\in K[-1,1]$ such that
$$\begin{cases}{}_{0}^{C}D_t^{\xi}\tilde u_j(t)=-p_j\tilde u_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\tilde\chi_{jk}(t)\big]g_k\big(\tilde u_k(t)\big)+s_j\tilde w_j(t)+h_{1j}(t),\\ {}_{0}^{C}D_t^{\xi}\tilde w_j(t)=-a_j\tilde w_j(t)+b_j g_j\big(\tilde u_j(t)\big)+h_{2j}(t).\end{cases}$$
The following definition plays an important role in establishing the finite-time synchronization criteria.
Definition 6.
FOMBCNN (38) is said to be synchronized with system (41) in finite time under suitable control inputs if there exists a settling time $T>0$ (a real number) such that
$$\lim_{t\to T}\Big(\big\|\tilde u_j(t)-u_j(t)\big\|+\big\|\tilde w_j(t)-w_j(t)\big\|\Big)=0$$
and
$$\big\|\tilde u(t)-u(t)\big\|+\big\|\tilde w(t)-w(t)\big\|\equiv 0\quad\text{for }t>T,$$
where $j=1,2,\ldots,n$.
Denote $e_j(t)=\tilde u_j(t)-u_j(t)$ and $z_j(t)=\tilde w_j(t)-w_j(t)$. From (40) and (43), one has
$$\begin{cases}{}_{0}^{C}D_t^{\xi}e_j(t)=-p_j e_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\tilde\chi_{jk}(t)\big]\zeta_k\big(e_k(t)\big)+\sum_{k=1}^{n}\acute q_{jk}\big[\tilde\chi_{jk}(t)-\chi_{jk}(t)\big]g_k\big(u_k(t)\big)+s_j z_j(t)+h_{1j}(t),\\ {}_{0}^{C}D_t^{\xi}z_j(t)=-a_j z_j(t)+b_j\zeta_j\big(e_j(t)\big)+h_{2j}(t),\qquad j=1,2,\ldots,n,\end{cases}$$
where $\zeta_k\big(e_k(t)\big)=g_k\big(\tilde u_k(t)\big)-g_k\big(u_k(t)\big)$.
Inspired by [61,62], we define the following feedback controller:
$$h_{1j}(t)=\begin{cases}-\omega_j e_j(t)-\vartheta_j\,\mathrm{sign}\big(e_j(t)\big)-\eta_j\dfrac{\mathrm{sign}\big(e_j(t)\big)}{|e_j(t)|}, & |e_j(t)|\neq 0,\\[1mm] 0, & |e_j(t)|=0,\end{cases}\qquad h_{2j}(t)=\begin{cases}-\varpi_j z_j(t)-\theta_j\dfrac{\mathrm{sign}\big(z_j(t)\big)}{|z_j(t)|}, & |z_j(t)|\neq 0,\\[1mm] 0, & |z_j(t)|=0,\end{cases}$$
where $j=1,2,\ldots,n$. Based on controller (45), we can generate
$${}_{0}^{C}D_t^{\xi}e_j(t)=\begin{cases}-p_j e_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\tilde\chi_{jk}(t)\big]\zeta_k\big(e_k(t)\big)+\sum_{k=1}^{n}\acute q_{jk}\big[\tilde\chi_{jk}(t)-\chi_{jk}(t)\big]g_k\big(u_k(t)\big)+s_j z_j(t)-\omega_j e_j(t)-\vartheta_j\,\mathrm{sign}\big(e_j(t)\big)-\eta_j\dfrac{\mathrm{sign}(e_j(t))}{|e_j(t)|}, & |e_j(t)|\neq 0,\\[2mm] -p_j e_j(t)+\sum_{k=1}^{n}\big[\grave q_{jk}+\acute q_{jk}\tilde\chi_{jk}(t)\big]\zeta_k\big(e_k(t)\big)+\sum_{k=1}^{n}\acute q_{jk}\big[\tilde\chi_{jk}(t)-\chi_{jk}(t)\big]g_k\big(u_k(t)\big)+s_j z_j(t), & |e_j(t)|=0,\end{cases}$$
$${}_{0}^{C}D_t^{\xi}z_j(t)=\begin{cases}-a_j z_j(t)+b_j\zeta_j\big(e_j(t)\big)-\varpi_j z_j(t)-\theta_j\dfrac{\mathrm{sign}(z_j(t))}{|z_j(t)|}, & |z_j(t)|\neq 0,\\[2mm] -a_j z_j(t)+b_j\zeta_j\big(e_j(t)\big), & |z_j(t)|=0.\end{cases}$$
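For completeness, a minimal vectorized implementation of the discontinuous law (45), as reconstructed above (with the $\mathrm{sign}(\cdot)/|\cdot|$ terms switched off at zero error), is sketched below in Python; the function name and the use of common scalar gains are our own choices.

```python
import numpy as np

def finite_time_controller(e, z, omega, vartheta, eta, varpi, theta, tol=1e-12):
    """Discontinuous feedback law (45) as reconstructed above, with common
    scalar gains; the sign(.)/|.| terms are disabled when an error component
    is (numerically) zero, where (45) prescribes a zero control input."""
    e, z = np.asarray(e, dtype=float), np.asarray(z, dtype=float)
    h1, h2 = np.zeros_like(e), np.zeros_like(z)
    nz_e, nz_z = np.abs(e) > tol, np.abs(z) > tol
    h1[nz_e] = (-omega * e[nz_e] - vartheta * np.sign(e[nz_e])
                - eta * np.sign(e[nz_e]) / np.abs(e[nz_e]))
    h2[nz_z] = -varpi * z[nz_z] - theta * np.sign(z[nz_z]) / np.abs(z[nz_z])
    return h1, h2
```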
Next, with the help of controller (45) and error system (44), we derive the finite-time synchronization criteria.
Theorem 3.
Suppose $(A_1)$ holds. FOMBCNN (38) and FOMBCNN (41) achieve synchronization in finite time under controller (45) if there exist positive constants $\lambda_j,\mu_j,\omega_j,\vartheta_j,\eta_j,\varpi_j$ and $\theta_j$, $j=1,2,\ldots,n$, such that
$$0<2\lambda_j\Big[\omega_j+p_j-\tfrac12 s_j\Big]-\mu_j b_j l_j-\sum_{k=1}^{n}\big(\lambda_j|q|_{jk}l_j+\lambda_k|q|_{kj}l_k\big),$$
$$0<2\mu_j\Big[\varpi_j+a_j-\tfrac12 b_j l_j\Big]-\lambda_j s_j,$$
$$0\leq\sum_{k=1}^{n}\big|q_{jk}^{+}-q_{jk}^{-}\big|\,\tilde\delta_k\leq\vartheta_j,$$
where $|q|_{jk}=\max\{|\hat q_{jk}|,|\breve q_{jk}|\}$. Furthermore, the settling time is estimated by
$$t\leq\left[\frac{\Gamma(\xi+1)\sum_{j=1}^{n}\big(\lambda_j e_j^{2}(t_0)+\mu_j z_j^{2}(t_0)\big)}{\sum_{j=1}^{n}2\big(\lambda_j\eta_j+\mu_j\theta_j\big)}\right]^{\frac{1}{\xi}}.$$
Proof. 
Consider the following Lyapunov function:
$$W\big(e(t),z(t)\big)=\sum_{j=1}^{n}\lambda_j e_j^{2}(t)+\sum_{j=1}^{n}\mu_j z_j^{2}(t).$$
When $|e_j(t)|\neq 0$ and $|z_j(t)|\neq 0$, it follows from (46) and (51) that
0 C D t ξ W ( e ( t ) , z ( t ) ) j = 1 n 2 λ j e j ( t ) { 0 C D t ξ e j ( t ) } + j = 1 n 2 μ j z j ( t ) { 0 C D t ξ z j ( t ) } = j = 1 n 2 λ j e j ( t ) [ p j e j ( t ) + k = 1 n q ˜ j k ζ k e k ( t ) + k = 1 n q ´ j k χ ˜ j k ( t ) χ j k ( t ) × g k u k ( t ) + s j z j ( t ) ω j [ e j ( t ) ] ϑ j s i g n ( e j ( t ) ) η j s i g n ( e j ( t ) ) | e j ( t ) | ] + j = 1 n 2 μ j z j ( t ) a j z j ( t ) + b j ζ j e j ( t ) ϖ j [ z j ( t ) ] θ j s i g n ( z j ( t ) ) | z j ( t ) | ,
where q ˜ j k = q ` j k + q ´ j k χ ˜ j k ( t ) and using Lemma 3. Based on Assumption ( A 1 ) , we have
0 C D t ξ W ( e ( t ) , z ( t ) ) j = 1 n 2 λ j ω j + p j e j 2 ( t ) + j = 1 n k = 1 n 2 λ j | q ˜ j k | | e j ( t ) | | ζ k e k ( t ) | + j = 1 n k = 1 n 2 λ j | e j ( t ) | | q ´ j k χ ˜ j k ( t ) q ´ j k χ j k ( t ) | | g k ( u k ( t ) ) | + j = 1 n k = 1 n 2 λ j s j | e j ( t ) | | z j ( t ) | j = 1 n 2 λ j | e j ( t ) | ϑ j j = 1 n 2 λ j η j j = 1 n 2 μ j ϖ j + a j z j 2 ( t ) + j = 1 n 2 μ j b j | z j ( t ) | | ζ j e j ( t ) | j = 1 n 2 μ j θ j .
According to Assumption ( A 1 ) , the neuron activations are bounded, i.e., there exist scalars δ ˜ k > 0 , ( k = 1 , 2 , . . . , n ) such that | g k ( u k ( t ) ) | δ ˜ k , then
0 C D t ξ W ( e ( t ) , z ( t ) ) j = 1 n 2 λ j ω j + p j e j 2 ( t ) + j = 1 n k = 1 n 2 λ j | q | j k l k | e j ( t ) | | e k ( t ) | + j = 1 n k = 1 n 2 λ j | e j ( t ) | | q j k + q j k | δ ˜ k + j = 1 n k = 1 n 2 λ j s j | e j ( t ) | | z j ( t ) | j = 1 n 2 λ j | e j ( t ) | ϑ j j = 1 n 2 μ j ϖ j + a j z j 2 ( t ) + j = 1 n 2 μ j b j l j | z j ( t ) | | e j ( t ) | j = 1 n 2 λ j η j + μ j θ j .
In view of Lemma 4, we have
j = 1 n k = 1 n 2 λ j | q | j k l k | e j ( t ) | | e k ( t ) | j = 1 n k = 1 n λ j | q | j k l j e j 2 ( t ) + e k 2 ( t ) = j = 1 n k = 1 n λ j | q | j k l j + λ k | q | k j l k e j 2 ( t )
j = 1 n 2 λ j s j | e j ( t ) | | z j ( t ) | j = 1 n k = 1 n λ j s j e j 2 ( t ) + z j 2 ( t ) = j = 1 n λ j s j e j 2 ( t ) + j = 1 n λ j s j z j 2 ( t )
and
j = 1 n 2 μ j b j l j | z j ( t ) | | e j ( t ) | j = 1 n μ j b j l j z j 2 ( t ) + e j 2 ( t ) = j = 1 n μ j b j l j z j 2 ( t ) + j = 1 n μ j b j l j e j 2 ( t )
Substituting Equations (55)–(57) into Equation (54) yields:
0 C D t ξ W ( e ( t ) , z ( t ) ) j = 1 n 2 λ j ω j + p j 1 2 s j μ j b j l j k = 1 n λ j | q | j k l j + λ k | q | k j l k e j 2 ( t ) j = 1 n 2 λ j η j + μ j θ j + j = 1 n 2 μ j ϖ j + a j 1 2 b j l j λ j s j z j 2 ( t ) + j = 1 n k = 1 n | q j k + q j k | δ ˜ k ϑ j | e j ( t ) | .
From (49) and (58), one has
0 C D t ξ W ( e ( t ) , z ( t ) ) j = 1 n 2 λ j ω j + p j 1 2 s j μ j b j l j k = 1 n λ j | q | j k l j + λ k | q | k j l k e j 2 ( t ) + j = 1 n 2 μ j ϖ j + a j 1 2 b j l j λ j s j z j 2 ( t ) j = 1 n 2 λ j η j + μ j θ j
j = 1 n 2 λ j ω j + p j 1 2 s j μ j b j l j k = 1 n λ j | q | j k l j + λ k | q | k j l k e j 2 ( t ) + j = 1 n 2 μ j ϖ j + a j 1 2 b j l j λ j s j z j 2 ( t ) .
According to Equations (47) and (48), we can choose an appropriate scalar γ > 0 that fulfills
min { min 1 j n 2 λ j ω j + p j 1 2 s j μ j b j l j k = 1 n λ j | q | j k l j + λ k | q | k j l k , min 1 j n 2 μ j ϖ j + a j 1 2 b j l j λ j s j } γ > 0 .
From (59) and (60), one obtains
$${}_{0}^{C}D_t^{\xi}W\big(e(t),z(t)\big)\leq-\gamma\sum_{j=1}^{n}\big(e_j^{2}(t)+z_j^{2}(t)\big)\leq-\frac{\gamma}{\gamma_1^{\max}}\sum_{j=1}^{n}\lambda_j e_j^{2}(t)-\frac{\gamma}{\gamma_2^{\max}}\sum_{j=1}^{n}\mu_j z_j^{2}(t)\leq-\min\Big\{\frac{\gamma}{\gamma_1^{\max}},\frac{\gamma}{\gamma_2^{\max}}\Big\}W\big(e(t),z(t)\big),$$
where $\gamma_1^{\max}=\max_{1\leq j\leq n}\{\lambda_j\}$ and $\gamma_2^{\max}=\max_{1\leq j\leq n}\{\mu_j\}$. Let $\varrho=\min\big\{\frac{\gamma}{\gamma_1^{\max}},\frac{\gamma}{\gamma_2^{\max}}\big\}$; then $\varrho>0$ and
$${}_{0}^{C}D_t^{\xi}W\big(e(t),z(t)\big)\leq-\varrho\,W\big(e(t),z(t)\big).$$
From Lemma 2, one can obtain
$$W\big(e(t),z(t)\big)\leq W\big(e(t_0),z(t_0)\big)E_{\xi}\big(-\varrho(t-t_0)^{\xi}\big).$$
From (63), we can simply get
$$\lim_{t\to+\infty}W\big(e(t),z(t)\big)=0,$$
which implies that
$$\lim_{t\to+\infty}\Big(\sum_{j=1}^{n}\lambda_j e_j^{2}(t)+\sum_{j=1}^{n}\mu_j z_j^{2}(t)\Big)=0.$$
Thus, the FOMBCNN system (38) is asymptotically synchronized with the FOMBCNN system (41). Next, we demonstrate that the FOMBCNN system (38) is finite-time synchronized with the FOMBCNN system (41).
Let $v=\sum_{j=1}^{n}2\big(\lambda_j\eta_j+\mu_j\theta_j\big)$. From (47)–(49), it follows that
$${}_{0}^{C}D_t^{\xi}W\big(e(t),z(t)\big)\leq-\sum_{j=1}^{n}2\big(\lambda_j\eta_j+\mu_j\theta_j\big)=-v.$$
Hence, there exists a non-negative function $R(t)$ such that
$${}_{0}^{C}D_t^{\xi}W\big(e(t),z(t)\big)+R(t)=-v.$$
Using Lemma 1 and taking the fractional integral of both sides of (65) from $0$ to $t$, one can get
$$W\big(e(t),z(t)\big)-W\big(e(t_0),z(t_0)\big)+{}_{0}D_t^{-\xi}R(t)={}_{0}D_t^{-\xi}(-v).$$
From Definition 1, we have
$${}_{0}D_t^{-\xi}(-v)=\frac{1}{\Gamma(\xi)}\int_{0}^{t}(t-s)^{\xi-1}(-v)\,ds=-\frac{v}{\Gamma(\xi)}\int_{0}^{t}(t-s)^{\xi-1}ds=-\frac{v\,t^{\xi}}{\Gamma(\xi+1)}.$$
From (66) and (67), we have
$$-W\big(e(t_0),z(t_0)\big)\leq W\big(e(t),z(t)\big)-W\big(e(t_0),z(t_0)\big)+{}_{0}D_t^{-\xi}R(t)=-\frac{v\,t^{\xi}}{\Gamma(\xi+1)}.$$
From (68), we have
$$t\leq\left[\frac{\Gamma(\xi+1)\,W\big(e(t_0),z(t_0)\big)}{v}\right]^{\frac{1}{\xi}}=\left[\frac{\Gamma(\xi+1)\sum_{j=1}^{n}\big(\lambda_j e_j^{2}(t_0)+\mu_j z_j^{2}(t_0)\big)}{\sum_{j=1}^{n}2\big(\lambda_j\eta_j+\mu_j\theta_j\big)}\right]^{\frac{1}{\xi}}.$$
Thus, the FOMBCNN system (38) is finite-time synchronized with the FOMBCNN system (41) under the controller designed in (45). □
Remark 5.
When the memristor-based connection weights satisfy $\hat q_{kl}=\breve q_{kl}$, which means that the connection weights are implemented by ordinary resistors, the presented results remain valid for the robust passivity and finite-time synchronization of fractional-order competitive neural networks; such results have not yet been reported in the literature.
Remark 6.
When $\xi=1$, the results for the FOMBCNN model (1) degenerate into finite-time passivity and finite-time synchronization criteria for traditional integer-order competitive neural networks.
Remark 7.
Suppose that $\delta_{ji}(t)=0$ in (1); then the proposed results remain valid for the finite-time passivity and finite-time synchronization of fractional-order memristor-based neural networks, and such results have not yet been studied in existing research works.
Remark 8.
In the existing literature, there are several results on the synchronization analysis of memristive neural networks with mismatched switching jump parameters. Specifically, Yang et al. [63] studied the asymptotic and finite-time synchronization problem of integer-order memristive neural networks, while the authors in [64] investigated the exponential synchronization problem of integer-order time-delayed memristive neural networks. Compared with the above-mentioned results, our criteria guarantee the finite-time passivity and finite-time synchronization of fractional-order memristive competitive neural networks. Moreover, the systems discussed in [63,64] are special cases of our results when $\delta_{ji}(t)=x_j(t)=y_j(t)=\tau(t)=0$ and $\xi=1$.

5. Numerical Results

Here, three numerical examples are given to validate the advantages of the obtained results.
Example 1.
Consider the following three-dimensional FOMBCNN:
$$\begin{cases}{}_{0}^{C}D_t^{0.98}u_1(t)=-4u_1(t)+q_{11}\big(u_1(t)\big)\tan\big(u_1(t)\big)+q_{12}\big(u_1(t)\big)\tan\big(u_2(t)\big)+q_{13}\big(u_1(t)\big)\tan\big(u_3(t)\big)+0.25\,w_1(t)+0.6(1+\sin t),\\ {}_{0}^{C}D_t^{0.98}u_2(t)=-4u_2(t)+q_{21}\big(u_2(t)\big)\tan\big(u_1(t)\big)+q_{22}\big(u_2(t)\big)\tan\big(u_2(t)\big)+q_{23}\big(u_2(t)\big)\tan\big(u_3(t)\big)+0.3\,w_2(t)+0.6(1+\cos t),\\ {}_{0}^{C}D_t^{0.98}u_3(t)=-4u_3(t)+q_{31}\big(u_3(t)\big)\tan\big(u_1(t)\big)+q_{32}\big(u_3(t)\big)\tan\big(u_2(t)\big)+q_{33}\big(u_3(t)\big)\tan\big(u_3(t)\big)+0.75\,w_3(t)+0.6(1+\sin t),\\ {}_{0}^{C}D_t^{0.98}w_1(t)=-3w_1(t)+0.5\tan\big(u_1(t)\big)+0.3(1+\cos t),\\ {}_{0}^{C}D_t^{0.98}w_2(t)=-3w_2(t)+0.5\tan\big(u_2(t)\big)+0.3(1+\sin t),\\ {}_{0}^{C}D_t^{0.98}w_3(t)=-3w_3(t)+0.5\tan\big(u_3(t)\big)+0.3(1+\cos t),\end{cases}\qquad t\geq 0,$$
where
$$q_{11}\big(u_1(t)\big)=\begin{cases}0.1, & |u_1(t)|\leq 1,\\ 0.35, & |u_1(t)|>1,\end{cases}\quad q_{12}\big(u_2(t)\big)=\begin{cases}0.2, & |u_2(t)|\leq 1,\\ 0.45, & |u_2(t)|>1,\end{cases}\quad q_{13}\big(u_3(t)\big)=\begin{cases}0.7, & |u_3(t)|\leq 1,\\ 0.45, & |u_3(t)|>1,\end{cases}$$
$$q_{21}\big(u_1(t)\big)=\begin{cases}0.4, & |u_1(t)|\leq 1,\\ 0.65, & |u_1(t)|>1,\end{cases}\quad q_{22}\big(u_2(t)\big)=\begin{cases}0.35, & |u_2(t)|\leq 1,\\ 0.6, & |u_2(t)|>1,\end{cases}\quad q_{23}\big(u_3(t)\big)=\begin{cases}0.3, & |u_3(t)|\leq 1,\\ 0.55, & |u_3(t)|>1,\end{cases}$$
$$q_{31}\big(u_1(t)\big)=\begin{cases}0.20, & |u_1(t)|\leq 1,\\ 0.45, & |u_1(t)|>1,\end{cases}\quad q_{32}\big(u_2(t)\big)=\begin{cases}0.45, & |u_2(t)|\leq 1,\\ 0.2, & |u_2(t)|>1,\end{cases}\quad q_{33}\big(u_3(t)\big)=\begin{cases}0.50, & |u_3(t)|\leq 1,\\ 0.75, & |u_3(t)|>1.\end{cases}$$
The measured output vector of model (71) is assumed to be:
$$q_u(t)=R_u u(t),\qquad q_w(t)=R_w w(t),$$
where
$$R_u=\mathrm{diag}\{0.5,\,0.5,\,0.5\},\qquad R_w=\mathrm{diag}\{0.4,\,0.4,\,0.4\}.$$
We note that Assumption $(A_1)$ is satisfied with $L=\mathrm{diag}\{1.5,1.5,1.5\}$. By standard computation, we get
E = 0.125 0.125 0.125 0 0 0 0 0 0 0 0 0 0.125 0.125 0.125 0 0 0 0 0 0 0 0 0 0.125 0.125 0.125 ,
G = 0.125 0 0 0.125 0 0 0.125 0 0 0 0.125 0 0 0.125 0 0 0.125 0 0 0 0.125 0 0 0.125 0 0 0.125 T .
Let $\sigma_1=2$, $\sigma_2=10$, $T_\sigma=3$ and $\bar H_1=\bar H_2=I$. Then, the LMIs of Theorem 2 are feasible, and the solutions are given as follows:
$$H_1=\begin{bmatrix}8.4304 & 0.0155 & 0.0217\\ 0.0155 & 8.4287 & 0.1362\\ 0.0217 & 0.1362 & 8.2164\end{bmatrix},\qquad H_2=\begin{bmatrix}7.7009 & 0 & 0\\ 0 & 7.7009 & 0\\ 0 & 0 & 7.7009\end{bmatrix},$$
υ = 50.25 , α 1 = 29.9694 , α 2 = 23.1027 , α 3 = 23.9755 , α 4 = 23.9755 , α 5 = 23.9755 , β = 23.9755 , κ = 23.9755 , ρ = 19.8736 . Moreover,
$$10=\underline{\vartheta}\,\sigma_2>\bar\vartheta\,\sigma_1+\frac{\alpha(\lambda_1+\lambda_2)}{\Gamma(\xi+1)}T_\sigma^{\xi}=8.43016.$$
Therefore, FOMBCNN (1) is passive in finite time with respect to 2 , 10 , 3 , I , I , 0.02 , 0.03 . The initial values are selected to be u ( 0 ) = ( 2 , 0.2 , 1 ) T and w ( 0 ) = ( 1 , 0.5 , 2.6 ) T . Figure 1 and Figure 2 illustrate the time responses of the states of model (70) with x ( t ) = [ 1 + sin t , 1 + cos t , 1 + sin t ] T and y ( t ) = [ 1 + cos t , 1 + sin t , 1 + cos t ] T . Figure 3 and Figure 4 show the phase trajectories of system (70) with inputs x ( t ) = [ 1 + sin t , 1 + cos t , 1 + sin t ] T and y ( t ) = [ 1 + cos t , 1 + sin t , 1 + cos t ] T . The state curves of system (70) with inputs x ( t ) = [ 0 , 0 , 0 ] T and y ( t ) = [ 0 , 0 , 0 ] T are shown in Figure 5 and Figure 6.
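Condition (13) of Theorem 1 is a scalar inequality and can be checked directly once the LMI solutions are available. The Python helper below (name and interface are our own) evaluates both sides; $\underline{\vartheta},\bar\vartheta$ are the eigenvalue bounds computed from the weighted LMI solutions and $\alpha$ is the constant defined after (13). It is only a sketch and is not claimed to reproduce the value 8.43016 reported above.

```python
from math import gamma

def finite_time_bound_holds(sigma1, sigma2, T_sigma, xi, lam1, lam2,
                            theta_lower, theta_upper, alpha):
    """Scalar condition (13) of Theorem 1:
        theta_lower*sigma2 > theta_upper*sigma1
                             + alpha*(lam1 + lam2)*T_sigma**xi / Gamma(xi + 1).
    theta_lower/theta_upper are the eigenvalue bounds obtained from the
    (weighted) LMI solutions H1, H2; alpha is the constant defined after (13)."""
    rhs = theta_upper * sigma1 + alpha * (lam1 + lam2) * T_sigma**xi / gamma(xi + 1.0)
    return theta_lower * sigma2 > rhs, rhs
```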
Example 2.
Consider the following two-dimensional FOMBCNN:
$$\begin{cases}{}_{0}^{C}D_t^{0.98}u_1(t)=-2u_1(t)+0.5\,q_{11}\big(u_1(t)\big)\tan\big(u_1(t)\big)+q_{12}\big(u_1(t)\big)\tan\big(u_2(t)\big)+0.15\,w_1(t)+0.6(1+\sin t),\\ {}_{0}^{C}D_t^{0.98}u_2(t)=-2u_2(t)+0.5\,q_{21}\big(u_2(t)\big)\tan\big(u_1(t)\big)+q_{22}\big(u_2(t)\big)\tan\big(u_2(t)\big)+0.18\,w_2(t)+0.6(1+\cos t),\\ {}_{0}^{C}D_t^{0.98}w_1(t)=-1.5\,w_1(t)+1.05\tan\big(u_1(t)\big),\\ {}_{0}^{C}D_t^{0.98}w_2(t)=-1.5\,w_2(t)+1.15\tan\big(u_2(t)\big),\end{cases}\qquad t\geq 0,$$
where
q 11 u 1 ( t ) = 1 6 , | u 1 ( t ) | 1 1 6 , | u 1 ( t ) | > 1 , q 12 u 2 ( t ) = 1 5 , | u 2 ( t ) | 1 1 5 , | u 2 ( t ) | > 1 , q 21 u 1 ( t ) = 1 5 , | u 2 ( t ) | 1 1 5 , | u 2 ( t ) | > 1 , , q 22 u 2 ( t ) = 1 8 , | u 2 ( t ) | 1 1 8 , | u 2 ( t ) | > 1 .
The measured output vector of model (71) is assumed to be:
$$q_u(t)=\begin{bmatrix}0.8 & 0\\ 0 & 1\end{bmatrix}u(t).$$
With a simple calculation, we get
$$E=\begin{bmatrix}0.166 & 0.2 & 0 & 0\\ 0 & 0 & 0.2 & 0.125\end{bmatrix},\qquad G=\begin{bmatrix}0.166 & 0 & 0.2 & 0\\ 0 & 0.2 & 0 & 0.125\end{bmatrix}^{T}.$$
Then, by solving (36) and (37) of Corollary 1 with the LMI solver in MATLAB, we can get the following feasible solutions:
$$H_1=\begin{bmatrix}7.7468 & 0.0414\\ 0.0414 & 7.5224\end{bmatrix},\qquad H_2=\begin{bmatrix}9.1107 & 0\\ 0 & 8.7665\end{bmatrix},$$
υ = 56.5105 , α 1 = 27.7489 , α 2 = 13.4079 , α 3 = 20.6206 , α 4 = 20.6206 , β = 20.6117 , and ρ = 15.2692 . Therefore, FOMBCNN (71) is passive under the initial values u ( 0 ) = ( 1 , 1.2 ) T and w ( 0 ) = ( 0.4 , 0.5 ) T . Figure 7 and Figure 8 illustrate the time responses of the states of model (71) with external inputs [ 1 + cos ( 2 t ) , 1 sin ( 2 t ) ] T .
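As a quick cross-check, the sketch below substitutes the extracted values of $H_2$, $\alpha_2$ and $\alpha_4$ into the $2\times 2$ block condition $\Sigma<0$ of Corollary 1 (as reconstructed above) and verifies negative definiteness numerically; the matrices $A$ and $B$ are read off the $w$-subsystem of Example 2, and the sign conventions are assumptions inherited from the extraction. A full feasibility search could equally be carried out with any SDP solver in place of the MATLAB LMI toolbox.

```python
import numpy as np

# Reported feasible solution of Corollary 1 for Example 2 (values as extracted).
H2, alpha2, alpha4 = np.diag([9.1107, 8.7665]), 13.4079, 20.6206
# w-subsystem data of Example 2: A = diag(a_j), B = diag(b_j).
A, B = np.diag([1.5, 1.5]), np.diag([1.05, 1.15])

# Block condition Sigma < 0 as reconstructed above.
Sigma = np.block([[-H2 @ A - A.T @ H2 + alpha2 * np.eye(2), H2 @ B],
                  [(H2 @ B).T,                              -alpha4 * np.eye(2)]])
eigs = np.linalg.eigvalsh(Sigma)
print("eigenvalues of Sigma:", eigs, "negative definite:", bool(np.all(eigs < 0)))
```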
When solving the sufficient conditions in Corollary 1, the minimum passivity index obtained is listed in Table 1.
Compared with the minimum passivity index $\upsilon$ reported in [60], the value obtained in this manuscript is improved by 5.8804%. Table 1 demonstrates that our method provides a better minimum passivity index than the existing work [60]; hence, the proposed method gives less conservative results.
Example 3.
Consider the following three-dimensional FOMBCNN:
$$\begin{cases}{}_{0}^{C}D_t^{0.98}u_1(t)=-0.8u_1(t)+q_{11}\big(u_1(t)\big)\tan\big(u_1(t)\big)+q_{12}\big(u_1(t)\big)\tan\big(u_2(t)\big)+q_{13}\big(u_1(t)\big)\tan\big(u_3(t)\big)+1.2\,w_1(t),\\ {}_{0}^{C}D_t^{0.98}u_2(t)=-0.8u_2(t)+q_{21}\big(u_2(t)\big)\tan\big(u_1(t)\big)+q_{22}\big(u_2(t)\big)\tan\big(u_2(t)\big)+q_{23}\big(u_2(t)\big)\tan\big(u_3(t)\big)+1.2\,w_2(t),\\ {}_{0}^{C}D_t^{0.98}u_3(t)=-0.8u_3(t)+q_{31}\big(u_3(t)\big)\tan\big(u_1(t)\big)+q_{32}\big(u_3(t)\big)\tan\big(u_2(t)\big)+q_{33}\big(u_3(t)\big)\tan\big(u_3(t)\big)+1.2\,w_3(t),\\ {}_{0}^{C}D_t^{0.98}w_1(t)=-w_1(t)+2\tan\big(u_1(t)\big),\\ {}_{0}^{C}D_t^{0.98}w_2(t)=-w_2(t)+0.5\tan\big(u_2(t)\big),\\ {}_{0}^{C}D_t^{0.98}w_3(t)=-w_3(t)+\tan\big(u_3(t)\big),\end{cases}$$
where
$$q_{11}\big(u_1(t)\big)=\begin{cases}0.3, & |u_1(t)|\leq 1,\\ 0.6, & |u_1(t)|>1,\end{cases}\quad q_{12}\big(u_2(t)\big)=\begin{cases}0.25, & |u_2(t)|\leq 1,\\ 0.5, & |u_2(t)|>1,\end{cases}\quad q_{13}\big(u_3(t)\big)=\begin{cases}3.35, & |u_3(t)|\leq 1,\\ 3.65, & |u_3(t)|>1,\end{cases}$$
$$q_{21}\big(u_1(t)\big)=\begin{cases}0.1, & |u_1(t)|\leq 1,\\ 0.4, & |u_1(t)|>1,\end{cases}\quad q_{22}\big(u_2(t)\big)=\begin{cases}1.1, & |u_2(t)|\leq 1,\\ 1.4, & |u_2(t)|>1,\end{cases}\quad q_{23}\big(u_3(t)\big)=\begin{cases}0.4, & |u_3(t)|\leq 1,\\ 0.7, & |u_3(t)|>1,\end{cases}$$
$$q_{31}\big(u_1(t)\big)=\begin{cases}1, & |u_1(t)|\leq 1,\\ 1.3, & |u_1(t)|>1,\end{cases}\quad q_{32}\big(u_2(t)\big)=\begin{cases}0.7, & |u_2(t)|\leq 1,\\ 6.7, & |u_2(t)|>1,\end{cases}\quad q_{33}\big(u_3(t)\big)=\begin{cases}0.2, & |u_3(t)|\leq 1,\\ 0.5, & |u_3(t)|>1.\end{cases}$$
The response FOMBCNN with control inputs is denoted by:
$$\begin{cases}{}_{0}^{C}D_t^{0.98}\tilde u_1(t)=-0.8\tilde u_1(t)+q_{11}\big(\tilde u_1(t)\big)\tan\big(\tilde u_1(t)\big)+q_{12}\big(\tilde u_1(t)\big)\tan\big(\tilde u_2(t)\big)+q_{13}\big(\tilde u_1(t)\big)\tan\big(\tilde u_3(t)\big)+1.2\,\tilde w_1(t)+h_{11}(t),\\ {}_{0}^{C}D_t^{0.98}\tilde u_2(t)=-0.8\tilde u_2(t)+q_{21}\big(\tilde u_2(t)\big)\tan\big(\tilde u_1(t)\big)+q_{22}\big(\tilde u_2(t)\big)\tan\big(\tilde u_2(t)\big)+q_{23}\big(\tilde u_2(t)\big)\tan\big(\tilde u_3(t)\big)+1.2\,\tilde w_2(t)+h_{12}(t),\\ {}_{0}^{C}D_t^{0.98}\tilde u_3(t)=-0.8\tilde u_3(t)+q_{31}\big(\tilde u_3(t)\big)\tan\big(\tilde u_1(t)\big)+q_{32}\big(\tilde u_3(t)\big)\tan\big(\tilde u_2(t)\big)+q_{33}\big(\tilde u_3(t)\big)\tan\big(\tilde u_3(t)\big)+1.2\,\tilde w_3(t)+h_{13}(t),\\ {}_{0}^{C}D_t^{0.98}\tilde w_1(t)=-\tilde w_1(t)+2\tan\big(\tilde u_1(t)\big)+h_{21}(t),\\ {}_{0}^{C}D_t^{0.98}\tilde w_2(t)=-\tilde w_2(t)+0.5\tan\big(\tilde u_2(t)\big)+h_{22}(t),\\ {}_{0}^{C}D_t^{0.98}\tilde w_3(t)=-\tilde w_3(t)+\tan\big(\tilde u_3(t)\big)+h_{23}(t).\end{cases}$$
Next, we demonstrate that the drive system (72) and the controlled response system (73) achieve synchronization in finite time under the discontinuous controller (45). Let $u_1(0)=0.5$, $u_2(0)=0.5$, $u_3(0)=1$, $w_1(0)=2$, $w_2(0)=1$, $w_3(0)=3$, $\tilde u_1(0)=2$, $\tilde u_2(0)=1.2$, $\tilde u_3(0)=3$, $\tilde w_1(0)=2.5$, $\tilde w_2(0)=1.5$, $\tilde w_3(0)=0.8$, $\lambda_j=1.5$ and $\mu_j=1.2$ for $j=1,2,3$. From Assumption $(A_1)$, we have $l_1=l_2=l_3=1$ and $\tilde\delta_1=\tilde\delta_2=\tilde\delta_3=0.5$. In controller (45), we choose $\omega_1=\omega_2=\omega_3=8$, $\varpi_1=\varpi_2=\varpi_3=5$, $\vartheta_1=\vartheta_2=\vartheta_3=0.5$, $\eta_1=\eta_2=\eta_3=1$ and $\theta_1=\theta_2=\theta_3=0.8$. By a simple calculation, we obtain $2\lambda_j\big[\omega_j+p_j-\tfrac12 s_j\big]-\mu_j b_j l_j-\sum_{k=1}^{n}\big(\lambda_j|q|_{jk}l_j+\lambda_k|q|_{kj}l_k\big)>0$, $2\mu_j\big[\varpi_j+a_j-\tfrac12 b_j l_j\big]-\lambda_j s_j>0$ and $\vartheta_j-\sum_{k=1}^{n}\big|q_{jk}^{+}-q_{jk}^{-}\big|\tilde\delta_k>0$. Therefore, by Theorem 3, FOMBCNNs (72) and (73) are synchronized in finite time, with an upper bound on the settling time of $t\leq 2.2871$; the results are displayed in Figure 9, Figure 10 and Figure 11. The synchronization error curves of the drive–response systems without and with control inputs are displayed in Figure 9 and Figure 10, respectively, and the chaotic behavior of the synchronization error curves of the controlled drive–response systems is shown in Figure 11.
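The settling-time estimate of Theorem 3 is a simple closed-form expression. The Python sketch below (helper name is ours) evaluates it from the gains and initial synchronization errors; since the signs of some initial values may have been lost in extraction, a call with the data above need not reproduce the bound $t\leq 2.2871$ exactly.

```python
from math import gamma

def settling_time_bound(e0, z0, lam, mu, eta, theta, xi):
    """Settling-time estimate of Theorem 3:
        T <= ( Gamma(xi+1) * W(e(0), z(0)) / v )**(1/xi),
    with W(0) = sum_j lam_j*e_j(0)**2 + mu_j*z_j(0)**2 and
         v    = sum_j 2*(lam_j*eta_j + mu_j*theta_j)."""
    W0 = sum(l * e * e for l, e in zip(lam, e0)) + sum(m * z * z for m, z in zip(mu, z0))
    v = sum(2.0 * (l * et + m * th) for l, m, et, th in zip(lam, mu, eta, theta))
    return (gamma(xi + 1.0) * W0 / v) ** (1.0 / xi)
```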

6. Conclusions

Based on a robust control method, both passivity and synchronization criteria for FOMBCNNs were investigated in this manuscript. By exploiting key properties of fractional-order calculus, finite-time stability theory and a fractional-order Lyapunov functional, some novel sufficient criteria were derived to ensure that the designed FOMBCNN is finite-time passive. Furthermore, a discontinuous feedback control law was designed to achieve synchronization in finite time for FOMBCNNs, and an upper bound on the settling time was evaluated. The feasibility and advantages of the obtained finite-time passivity and finite-time synchronization results were illustrated by three numerical examples with computer simulations. In the future, the global Mittag–Leffler synchronization, projective synchronization and quasi-synchronization problems will be considered for FOMBCNNs via nonfragile control [63,65], delayed impulsive control [66], quantized control [67], and quantized intermittent control [68].

Author Contributions

Conceptualization, P.A., R.R. and J.A.; methodology, P.A., R.R. and E.H.; validation, R.R., J.A. and M.N.; formal analysis, J.A. and E.H.; funding acquisition, J.A. All authors have read and agreed to the published version of the manuscript.

Funding

Not applicable.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

J. Alzabut would like to thank Prince Sultan University and OSTIM Technical University for supporting this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, Q.; Yu, Q.; Zhao, Z.; Liu, Y.; Alsaadi, F.E. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties. Neural Netw. 2018, 103, 55–62. [Google Scholar] [CrossRef] [PubMed]
  2. Song, Q.; Yan, H.; Zhao, Z.; Liu, Y. Global exponential stability of impulsive complex-valued neural networks with both asynchronous time-varying and continuously distributed delays. Neural Netw. 2016, 81, 1–10. [Google Scholar] [CrossRef] [PubMed]
  3. Song, Q.; Wang, Z. Dynamical behaviors of fuzzy reaction-diffusion periodic cellular neural networks with variable coefficients and delays. Appl. Math. Model. 2009, 33, 3533–3545. [Google Scholar] [CrossRef]
  4. Song, Q.; Zhao, Z.; Li, Y. Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms. Phys. Lett. A 2005, 335, 213–225. [Google Scholar] [CrossRef]
  5. Arbi, A.; Cao, J. Pseudo-almost periodic solution on time-space scales for a novel class of competitive neutral-type neural networks with mixed time-varying delays and leakage delays. Neural Process. Lett. 2017, 46, 719–745. [Google Scholar] [CrossRef]
  6. Duan, L.; Huang, L. Global dynamics of equilibrium point for delayed competitive neural networks with different time scales and discontinuous activations. Neurocomputing 2016, 123, 318–327. [Google Scholar] [CrossRef]
  7. Yang, X.; Cao, J.; Long, Y.; Rui, W. Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans. Neural Netw. 2010, 21, 1656–1667. [Google Scholar] [CrossRef] [PubMed]
  8. Yingchun, L.; Yang, X.; Shi, L. Finite-time synchronization for competitive neural networks with mixed delays and non-identical perturbations. Neurocomputing 2016, 185, 242–253. [Google Scholar]
  9. Liu, P.; Nie, X.; Liang, J.; Cao, J. Multiple Mittag–Leffler stability of fractional-order competitive neural networks with Gaussian activation functions. Neural Netw. 2018, 108, 452–465. [Google Scholar] [CrossRef]
  10. Pratap, A.; Raja, R.; Cao, J.; Rajchakit, G.; Fardoun, H.M. Stability and synchronization criteria for fractional order competitive neural networks with time delays: An asymptotic expansion of Mittag Leffler function. J. Frankl. Inst. 2019, 356, 2212–2239. [Google Scholar] [CrossRef]
  11. Pratap, A.; Raja, R.; Cao, J.; Rajchakit, G.; Alsaadi, F.E. Further synchronization in finite time analysis for time-varying delayed fractional order memristive competitive neural networks with leakage delay. Neurocomputing 2018, 317, 110–126. [Google Scholar] [CrossRef]
  12. Pratap, A.; Raja, R.; Agarwal, R.P.; Cao, J. Stability analysis and robust synchronization of fractional-order competitive neural networks with different time scales and impulsive perturbations. Int. J. Adapt. Control Signal Process. 2019, 33, 1635–1660. [Google Scholar] [CrossRef]
  13. Zhang, H.; Ye, M.; Cao, J.; Alsaedi, A. Synchronization control of Riemann–Liouville fractional competitive network systems with time-varying delay and different time scales. Int. J. Control Autom. Syst. 2018, 16, 1404–1414. [Google Scholar] [CrossRef]
  14. Chua, L.Q. Memristor-the missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519. [Google Scholar] [CrossRef]
  15. Strukov, D.B.; Snider, G.S.; Stewart, D.R.; Williams, R.S. The missing memristor found. Nature 2008, 453, 80–83. [Google Scholar] [CrossRef]
  16. Mathiyalagan, K.; Anbuvithya, R.; Sakthivel, R.; Park, J.H.; Prakash, P. Non-fragile H synchronization of memristor-based neural networks using passivity theory. Neural Netw. 2016, 74, 85–100. [Google Scholar] [CrossRef]
  17. Qin, X.; Wang, C.; Li, L.; Peng, H.; Yang, Y.; Ye, L. Finite-time modified projective synchronization of memristor-based neural network with multi-links and leakage delay. Chaos Solitons Fractals 2018, 116, 302–315. [Google Scholar] [CrossRef]
  18. Yang, Z.; Biao, L.; Derong, L.; Yueheng, L. Pinning synchronization of memristor-based neural networks with time-varying delays. Neural Netw. 2017, 93, 143–151. [Google Scholar] [CrossRef]
  19. Zhang, Z.; Liu, X.; Zhou, D.; Lin, C.; Chen, J.; Wang, H. Finite-time stabilizability and instabilizability for complex-valued memristive neural networks with time delays. IEEE Trans. Syst. Man Cybern Syst. 2018, 48, 2371–2382. [Google Scholar] [CrossRef]
  20. Guo, Z.; Wang, J.; Yan, Z. Attractivity analysis of memristor-based cellular neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 704–717. [Google Scholar] [CrossRef]
  21. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  22. Kim, T.; Pinkham, J. A theoretical basis for the application of fractional calculus to viscoelasticity. J. Rheol. 2000, 27, 115–198. [Google Scholar]
  23. Lundstrom, B.; Higgs, M.; Spain, W. Fractional differentiation by neocortical pyramidal neurons. Nat. Neurosci. 2008, 11, 1335–1342. [Google Scholar] [CrossRef]
  24. Alzabut, J.; Mohammadaliee, B.; Samei, M.E. Solutions of two fractional q–integro–differential equations under sum and integral boundary value conditions on a time scale. Adv. Differ. Equ. 2020, 2020, 304. [Google Scholar] [CrossRef]
  25. Alzabut, J.; Viji, J.; Muthulakshmi, V.; Sudsutad, W. Oscillatory behavior of a type of generalized proportional fractional differential equations with forcing and damping terms. Mathematics 2020, 8, 1037. [Google Scholar] [CrossRef]
  26. Alzabut, J.; Selvam, A.; Dhineshbabu, R.; Kaabar, M.K.A. The Existence, Uniqueness, and stability analysis of the discrete fractional three-point boundary value problem for the elastic beam equation. Symmetry 2021, 13, 789. [Google Scholar] [CrossRef]
  27. Aadhithiyan, S.; Raja, R.; Zhu, Q.; Alzabut, J.; Niezabitowski, M.; Lim, C.P. Modified projective synchronization of distributive fractional order complex dynamic networks with model uncertainty via adaptive control. Chaos Solitons Fractals 2021, 147, 110853. [Google Scholar] [CrossRef]
  28. Stephen, A.; Raja, R.; Alzabut, J.; Zhu, Q.; Niezabitowski, M.; Lim, C.P. A Lyapunov–Krasovskii functional approach to stability and linear feedback synchronization control for nonlinear multi-agent systems with mixed time delays. Math. Probl. Eng. 2021, 1–20. [Google Scholar] [CrossRef]
  29. Wang, F.; Yang, T.Q.; Hu, M.F. Asymptotic stability of delayed fractional-order neural networks with impulsive effects. Neurocomputing 2015, 154, 239–244. [Google Scholar] [CrossRef]
  30. Wang, F.; Yang, Y.; Xu, X.; Li, L. Global asymptotic stability of impulsive fractional-order BAM neural networks with time delay. Neural Comput. Appl. 2017, 28, 345–352. [Google Scholar] [CrossRef]
  31. Wu, H.; Zhang, X.; Xue, S.; Wang, L.; Wang, Y. LMI conditions to global Mittag–Leffler stability of fractional-order neural networks with impulses. Neurocomputing 2016, 193, 148–154. [Google Scholar] [CrossRef]
  32. Zhang, S.; Yu, Y.; Yu, J. LMI conditions for global stability of fractional-order neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2423–2433. [Google Scholar] [CrossRef] [PubMed]
  33. Liu, W.; Jiang, M.; Yan, M. Stability analysis of memristor-based time-delay fractional-order neural networks. Neurocomputing 2019, 323, 117–127. [Google Scholar] [CrossRef]
  34. Zheng, M.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Zhang, Y.; Zhao, H. Finite-time stability and synchronization of memristor-based fractional-order fuzzy cellular neural networks. Commun. Nonlinear Sci. Numer. Simul. 2018, 59, 272–291. [Google Scholar] [CrossRef]
  35. Ailong, A.; Zhigang, Z. Global Mittag–Leffler stabilization of fractional-order memristive neural networks. IEEE Tranctions Neural Netw. Learn. Syst. 2016, 28, 2016–2027. [Google Scholar]
  36. Chang, W.; Zhu, S.; Li, J.; Sun, K. Global Mittag–Leffler stabilization of fractional-order complex-valued memristive neural networks. Appl. Math. Comput. 2018, 338, 346–362. [Google Scholar]
  37. Bao, H.; Cao, J.; Kurths, J. State estimation of fractional-order delayed memristive neural networks. Nonlinear Dyn. 2018, 94, 1215–1225. [Google Scholar] [CrossRef]
  38. Bao, H.; Park, J.H.; Cao, J. Non-fragile state estimation for fractional-order delayed memristive BAM neural networks. Neural Netw. 2019, 119, 190–199. [Google Scholar] [CrossRef]
  39. Mathiyalagan, K.; Park, J.H.; Sakthivel, R. Novel results on robust finite-time passivity for discrete-time delayed neural networks. Neurocomputing 2016, 177, 585–593. [Google Scholar] [CrossRef]
  40. Qi, W.; Gao, X.; Wang, J. Finite-time passivity and passification for stochastic time-delayed Markovian switching systems with partly known transition rates. Circuits Syst. Signal Process. 2016, 35, 3913–3934. [Google Scholar] [CrossRef]
  41. Rajavel, S.; Samidurai, R.; Cao, J.; Alsaedi, A.; Ahmad, B. Finite-time non-fragile passivity control for neural networks with time-varying delay. Appl. Math. Comput. 2017, 297, 145–158. [Google Scholar] [CrossRef]
  42. Song, Q.; Liang, J.; Wang, Z. Passivity analysis of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 2009, 72, 1782–1788. [Google Scholar] [CrossRef]
  43. Song, Q.; Zhao, Z.; Yang, J. Passivity and passification for stochastic Takagi-Sugeno fuzzy systems with mixed time-varying delays. Neurocomputing 2013, 122, 330–337. [Google Scholar] [CrossRef]
  44. Song, Q.; Zhao, Z. Global dissipativity of neural networks with both variable and unbounded delays. Chaos Solitons Fractals 2005, 25, 393–401. [Google Scholar] [CrossRef]
  45. Wen, S.; Zeng, Z.; Huang, T.; Chen, Y. Passivity analysis of memristor-based recurrent neural networks with time-varying delays. J. Frankl. Inst. 2013, 350, 2354–2370. [Google Scholar] [CrossRef]
  46. Ding, Z.; Zeng, Z.; Zhang, H.; Wang, L.; Wang, L. New results on passivity of fractional-order uncertain neural networks. Neurocomputing 2019, 351, 51–59. [Google Scholar]
  47. Yang, X.; Li, C.; Huang, T.; Song, Q.; Chen, X. Quasi-uniform synchronization of fractional-order memristor-based neural networks with delay. Neurocomputing 2017, 234, 205–215. [Google Scholar] [CrossRef]
  48. Zhang, L.; Yang, Y. Different impulsive effects on synchronization of fractional-order memristive BAM neural networks. Nonlinear Dyn. 2018, 93, 233–250. [Google Scholar] [CrossRef]
  49. Liu, X.; Ho, D.W.C.; Cao, J.; Xu, W. Discontinuous observers design for finite-time consensus of multiagent systems with external disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2826–2830. [Google Scholar] [CrossRef]
  50. Liu, X.; Ho, D.W.C.; Song, Q.; Xu, W. Finite/Fixed-time pinning synchronization of complex networks with stochastic disturbances. IEEE Trans. Cybern. 2019, 49, 2398–2403. [Google Scholar] [CrossRef]
  51. Liu, X.; Ho, D.W.C.; Xie, C. Prespecified-time cluster synchronization of complex networks via a smooth control approach. IEEE Trans. Cybern. 2020, 50, 1771–1775. [Google Scholar] [CrossRef]
  52. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Finite-time synchronization of fractional-order memristor-based neural networks with time delays. Neural Netw. 2016, 73, 36–46. [Google Scholar] [CrossRef]
  53. Xiao, J.; Zhong, S.; Li, Y.; Xu, F. Finite-time Mittag–Leffler synchronization of fractional-order memristive BAM neural networks with time delays. Neurocomputing 2017, 219, 431–439. [Google Scholar] [CrossRef]
  54. Thuan, M.V.; Huong, D.C.; Hong, D.T. New results on robust finite-time passivity for fractional-order neural networks with uncertainties. Neural Process. Lett. 2019, 50, 1065–1078. [Google Scholar] [CrossRef]
  55. Filippov, A.F. Differential Equations with Discontinuous Right-Hand Sides; Kluwer: Dordrecht, The Netherlands, 1988. [Google Scholar]
  56. Aubin, J.P.; Cellina, A. Differential Inclusions; Springer: Berlin, Germany, 1984. [Google Scholar]
  57. Kuang, J. Applied Inequalities; Shandong Science and Technology Press: Jinan, China, 2004. [Google Scholar]
  58. Singh, V. New global robust stability results for delayed cellular neural networks based on norm-bounded uncertainties. Chaos Solitons Fractals 2006, 30, 1165–1171. [Google Scholar] [CrossRef]
  59. Zeng, H.B.; He, Y.; Wu, M.; Xiao, H.Q. Improved conditions for passivity of neural networks with a time-varying delay. IEEE Trans. Cybern. 2014, 44, 785–792. [Google Scholar] [CrossRef]
  60. Rajchakit, G.; Chanthorn, P.; Niezabitowski, M.; Raja, R.; Baleanu, D.; Pratap, A. Impulsive effects on stability and passivity analysis of memristor-based fractional-order competitive neural networks. Neurocomputing 2020, 417, 290–301. [Google Scholar] [CrossRef]
  61. Zhang, W.; Yang, X.; Yang, S.; Alsaedi, A. Finite-time and fixed-time bipartite synchronization of complex networks with signed graphs. Math. Comput. Simul. 2021, 188, 319–329. [Google Scholar] [CrossRef]
  62. Zhou, Y.; Wan, X.; Huang, C.; Yang, X. Finite-time stochastic synchronization of dynamic networks with nonlinear coupling strength via quantized intermittent control. Appl. Math. Comput. 2020, 376, 125157. [Google Scholar] [CrossRef]
  63. Yang, X.; Ho, D.W.C. Synchronization of delayed memristive neural networks: Robust analysis approach. IEEE Trans. Cybern. 2016, 46, 3377–3387. [Google Scholar] [CrossRef]
  64. Yang, X.; Cao, J.; Liang, J. Exponential synchronization of memristive neural networks with delays: Interval matrix method. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1878–1888. [Google Scholar] [CrossRef]
  65. Stephen, A.; Raja, R.; Alzabut, J.; Zhu, Q.; Niezabitowski, M.; Bagdasar, O. Mixed time-delayed nonlinear multi-agent dynamic systems for asymptotic stability and non-fragile synchronization criteria. Neural Process. Lett. 2021, 1–32. [Google Scholar] [CrossRef]
  66. Yang, X.; Cao, J.; Qiu, J. pth moment exponential stochastic synchronization of coupled memristor-based neural networks with mixed delays via delayed impulsive control. Neural Netw. 2016, 65, 80–91. [Google Scholar] [CrossRef] [PubMed]
  67. Zou, Y.; Su, H.; Tang, R.; Yang, X. Finite-time bipartite synchronization of switched competitive neural networks with time delay via quantized control. ISA Trans. 2021. [Google Scholar] [CrossRef] [PubMed]
  68. Feng, Y.; Yang, X.; Song, Q.; Cao, J. Synchronization of memristive neural networks with mixed delays via quantized intermittent control. Appl. Math. Comput. 2018, 339, 874–887. [Google Scholar] [CrossRef]
Figure 1. The time responses of the states u1(t), u2(t) and u3(t) of system (70) for the external inputs x(t) = [1 + sin t, 1 + cos t, 1 + sin t]^T in Example 1.
Figure 2. The time responses of the states w1(t), w2(t) and w3(t) of system (70) for the external inputs y(t) = [1 + cos t, 1 + sin t, 1 + cos t]^T in Example 1.
Figure 3. The phase trajectories of u1(t), u2(t) and u3(t) of system (70) for the external inputs x(t) = [1 + sin t, 1 + cos t, 1 + sin t]^T in Example 1.
Figure 4. The phase trajectories of w1(t), w2(t) and w3(t) of system (70) for the external inputs y(t) = [1 + cos t, 1 + sin t, 1 + cos t]^T in Example 1.
Figure 5. The time responses of the states u1(t), u2(t) and u3(t) of system (70) for the external inputs x(t) = [0, 0, 0]^T in Example 1.
Figure 6. The time responses of the states w1(t), w2(t) and w3(t) of system (70) for the external inputs y(t) = [0, 0, 0]^T in Example 1.
Figure 7. The time responses of the states u1(t) and u2(t) of system (71) for the external inputs x(t) = [1 + cos(2t), 1 − sin(2t)]^T in Example 2.
Figure 8. The time responses of the states w1(t) and w2(t) of system (71) for the external inputs x(t) = [1 + cos(2t), 1 − sin(2t)]^T in Example 2.
Figure 9. Time behaviors of the error signals e1(t), e2(t), e3(t), z1(t), z2(t) and z3(t) without control input in Example 2.
Figure 10. Time behaviors of the error signals e1(t), e2(t), e3(t), z1(t), z2(t) and z3(t) with control input in Example 2.
Figure 11. Chaotic behaviors of e1(t), e2(t), e3(t) and z1(t), z2(t), z3(t) of systems (72) and (73) with control inputs in Example 2.
Table 1. Minimum passive index υ.

Methods | [60] | Corollary 1 | Improvement
υ | 60.0412 | 56.5105 | 5.8804%
Number of decision variables | n² + 3n + 16 | n² + 3n + 12
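The minimum passive index υ reported in Table 1 is typically obtained by repeatedly testing the feasibility of the passivity LMIs while shrinking υ. The Python sketch below is only a hypothetical illustration of that search: the oracle lmi_feasible and the toy threshold 56.5105 are placeholders standing in for an actual LMI solver call and are not part of the paper's computation.

```python
# Hypothetical sketch: bisection search for the minimum passive index upsilon.
# lmi_feasible(upsilon) is a placeholder for a solver-backed feasibility check
# of the passivity LMIs at a given upsilon; it is not taken from the paper.

def minimum_passive_index(lmi_feasible, lo=0.0, hi=100.0, tol=1e-4):
    """Return the smallest upsilon in [lo, hi] for which lmi_feasible(upsilon) holds."""
    if not lmi_feasible(hi):
        raise ValueError("LMIs infeasible even at the upper bound; enlarge hi.")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            hi = mid   # feasible: the minimum lies at or below mid
        else:
            lo = mid   # infeasible: the minimum lies above mid
    return hi

# Toy oracle that is feasible whenever upsilon >= 56.5105,
# mimicking the Corollary 1 entry of Table 1.
if __name__ == "__main__":
    upsilon_min = minimum_passive_index(lambda u: u >= 56.5105)
    print(f"estimated minimum passive index: {upsilon_min:.4f}")
```

With a real solver-backed oracle in place of the toy one, the same bisection would reproduce the υ column of Table 1 for each set of LMI conditions.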
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
