Article

Synchronization in Fractional-Order Complex-Valued Delayed Neural Networks

1 School of Mathematics and Computational Science, Anqing Normal University, Anqing 246011, China
2 School of Mathematics, Southeast University, Nanjing 210096, China
3 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Entropy 2018, 20(1), 54; https://doi.org/10.3390/e20010054
Submission received: 13 December 2017 / Revised: 7 January 2018 / Accepted: 8 January 2018 / Published: 12 January 2018
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)

Abstract

This paper discusses the synchronization of fractional-order complex-valued neural networks (FOCVNN) in the presence of time delay. Synchronization criteria are obtained by employing a linear feedback control together with a comparison theorem for fractional-order linear systems with delay. The feasibility and effectiveness of the proposed scheme are validated through numerical simulations.

1. Introduction

Recently, complex-valued neural networks (CVNN) have attracted increasing attention in many research fields, such as signal processing, quantum waves, speech synthesis, and so on [1,2,3,4,5]. Unlike real-valued neural networks (RVNN), the state vectors, connection weights and activation functions of CVNN take values in the complex field. CVNN can solve some real-world problems that RVNN cannot. For example, the Exclusive-OR (XOR) problem and the symmetry detection problem can be solved by a single complex-valued neuron with orthogonal decision boundaries, whereas neither can be solved by an RVNN with such a simple network structure [6]. Generally speaking, CVNN exhibit more complicated properties and dynamical behaviors [7,8,9]. In RVNN, the activation functions are usually chosen to be bounded and smooth. However, by Liouville's theorem [10], a function that is both bounded and analytic on the whole complex plane must be constant. Therefore, the careful selection of activation functions for CVNN is a challenging task [11]. Hence, studying the dynamical behaviors of CVNN is important and necessary; existing results have addressed stability and synchronization [12,13,14].
Fractional calculus, which deals with derivatives and integrals of arbitrary order, was first proposed by Leibniz in 1695 [15]. Compared with integer-order models, fractional-order models offer a more accurate instrument for describing the memory and hereditary properties of many processes. Some researchers have introduced fractional-order derivatives into neural networks, and fractional-order neural networks have been designed for more precise modeling of real-world systems [16,17,18,19].
It is worth pointing out that results for integer-order CVNN cannot be directly extended to fractional-order complex-valued neural networks (FOCVNN). The stability and synchronization analysis of fractional-order systems, including FOCVNN, is difficult: since calculating the fractional-order derivative of a Lyapunov function is complicated, stability analysis methods for integer-order systems, such as the Lyapunov functional method, cannot be easily generalized to fractional-order systems. Taking these factors into consideration, many researchers have studied the dynamic behaviors of FOCVNN [20,21,22,23,24,25].
Due to the finite switching speed of amplifiers, time delays are difficult to avoid in neural networks, and they may induce oscillation and instability [26,27,28]. Some interesting results have been reported on the stability of FOCVNN with time delay. For instance, in [21], the Gronwall inequality, the Cauchy-Schwarz inequality and other inequality techniques were used to study the stability of FOCVNN in the presence of time delay. Existence and uniform stability of FOCVNN with time delays were studied in [22]. Stability analyses of fractional-order complex-valued neural networks and memristive neural networks with time delays were carried out in [23,24]. To the best of our knowledge, only a few works have considered the synchronization of FOCVNN with time delay. For example, in [25], synchronization of FOCVNN with time delay was achieved by employing linear delayed feedback and a fractional-order inequality.
Because fractional-order systems cannot have exact non-constant periodic solutions [29,30], in this paper we regard a periodic solution, from a numerical point of view, as a trajectory that is extremely close to periodic. The main goal of this paper is to study the synchronization of FOCVNN with time delay by adopting a new strategy, and some interesting results are obtained. Sufficient conditions ensuring synchronization are established by constructing a Lyapunov function and employing a fractional-order inequality together with a comparison theorem for fractional-order linear systems with delay.

2. Preliminaries and Model Description

The literature provides several definitions of fractional-order derivatives, including the Riemann-Liouville definition and the Caputo definition. The Caputo derivative only requires initial conditions given in terms of integer-order derivatives, and is therefore more applicable to real-world problems. Accordingly, this paper adopts the Caputo derivative.
Definition 1
([31]). The Caputo derivative of fractional order α of a function φ(t) is defined by
$$D^{\alpha}\varphi(t)=\frac{1}{\Gamma(m-\alpha)}\int_{t_0}^{t}(t-\tau)^{m-\alpha-1}\varphi^{(m)}(\tau)\,d\tau,$$
where $t\ge t_0$, $m-1<\alpha<m\in\mathbb{Z}^{+}$, and $\Gamma(\cdot)$ is the Gamma function, $\Gamma(s)=\int_{0}^{\infty}t^{s-1}e^{-t}\,dt$.
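Although the analysis below is purely theoretical, readers who wish to experiment numerically can approximate the Caputo derivative with the standard L1 discretization. The following Python sketch is an illustration added here, not part of the original paper; the function name caputo_l1 and the test with φ(t) = t are arbitrary choices.

import numpy as np
from math import gamma

def caputo_l1(phi_vals, alpha, dt):
    """L1 approximation of the Caputo derivative D^alpha phi (0 < alpha < 1)
    on a uniform grid; phi_vals[k] = phi(t_0 + k*dt). Returns d with
    d[n] ~ D^alpha phi(t_n) for n >= 1."""
    n_pts = len(phi_vals)
    d = np.zeros(n_pts)
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    k = np.arange(n_pts, dtype=float)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights
    dphi = np.diff(phi_vals)                              # phi_{j+1} - phi_j
    for n in range(1, n_pts):
        # sum_{j=0}^{n-1} b_{n-1-j} (phi_{j+1} - phi_j)
        d[n] = coef * np.dot(b[:n][::-1], dphi[:n])
    return d

# Sanity check: for phi(t) = t one has D^alpha t = t^(1-alpha)/Gamma(2-alpha).
alpha, dt = 0.98, 1e-3
t = np.arange(0.0, 2.0 + dt, dt)
approx = caputo_l1(t, alpha, dt)
exact = t ** (1.0 - alpha) / gamma(2.0 - alpha)
print(np.max(np.abs(approx[1:] - exact[1:])))             # small discretization error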
This paper considers a class of FOCVNN in the presence of time delay as the master system, expressed as
$$D^{\alpha}z_j(t) = -c_j z_j(t) + \sum_{k=1}^{n} a_{jk} f_k(z_k(t)) + \sum_{k=1}^{n} b_{jk} g_k(z_k(t-\tau)) + I_j, \qquad (1)$$
or equivalently
$$D^{\alpha}z(t) = -Cz(t) + Af(z(t)) + Bg(z(t-\tau)) + I(t), \qquad (2)$$
where $0<\alpha<1$, $j=1,2,\ldots,n$, $n$ is the number of units in the network, $z_j(t)$ is the state of the $j$-th unit at time $t$, and $z(t)=(z_1(t),\ldots,z_n(t))^T\in\mathbb{C}^n$. $C=\mathrm{diag}(c_1,\ldots,c_n)\in\mathbb{R}^{n\times n}$ with $c_j>0$ contains the self-regulating parameters of the neurons, $I(t)=(I_1(t),I_2(t),\ldots,I_n(t))^T\in\mathbb{C}^n$ is the external input, and $A=(a_{jk})_{n\times n}$ and $B=(b_{jk})_{n\times n}$ are the connection weight matrices in the absence and presence of delay, respectively. The functions $f_k:\mathbb{C}\to\mathbb{C}$ and $g_k:\mathbb{C}\to\mathbb{C}$ are the complex-valued activation functions of the $k$-th unit at times $t$ and $t-\tau$, respectively, $\tau>0$ is the transmission delay, and $f(z(t))=(f_1(z_1(t)),\ldots,f_n(z_n(t)))^T$, $g(z(t-\tau))=(g_1(z_1(t-\tau)),\ldots,g_n(z_n(t-\tau)))^T$.
The slave system is given by
$$D^{\alpha}\tilde{z}_j(t) = -c_j \tilde{z}_j(t) + \sum_{k=1}^{n} a_{jk} f_k(\tilde{z}_k(t)) + \sum_{k=1}^{n} b_{jk} g_k(\tilde{z}_k(t-\tau)) + I_j + U_j(t), \qquad (3)$$
or equivalently
$$D^{\alpha}\tilde{z}(t) = -C\tilde{z}(t) + Af(\tilde{z}(t)) + Bg(\tilde{z}(t-\tau)) + I(t) + U(t), \qquad (4)$$
where $\tilde{z}(t)=(\tilde{z}_1(t),\ldots,\tilde{z}_n(t))^T\in\mathbb{C}^n$ is the state vector of the response system and $U(t)=(U_1(t),\ldots,U_n(t))^T$ is a suitable controller.
In the following, some assumptions and useful lemmas are presented in order to prove the main results.
Assumption 1.
Let $z(t)=x(t)+iy(t)$; $f(z(t))$ and $g(z(t-\tau))$ are analytic and can be expressed, by separating the real and imaginary parts, as
$$f(z(t)) = f^R(x(t),y(t)) + i f^I(x(t),y(t)),$$
$$g(z(t-\tau)) = g^R(x(t-\tau),y(t-\tau)) + i g^I(x(t-\tau),y(t-\tau)),$$
where $f^R(\cdot,\cdot)=\mathrm{Re}(f(\cdot,\cdot))=(f_1^R(x_1,y_1),\ldots,f_n^R(x_n,y_n))^T$, $f^I(\cdot,\cdot)=\mathrm{Im}(f(\cdot,\cdot))=(f_1^I(x_1,y_1),\ldots,f_n^I(x_n,y_n))^T$, $g^R(\cdot,\cdot)=\mathrm{Re}(g(\cdot,\cdot))=(g_1^R(x_1,y_1),\ldots,g_n^R(x_n,y_n))^T$, and $g^I(\cdot,\cdot)=\mathrm{Im}(g(\cdot,\cdot))=(g_1^I(x_1,y_1),\ldots,g_n^I(x_n,y_n))^T$.
Assumption 2.
The functions $f_j^R(\cdot,\cdot)$, $f_j^I(\cdot,\cdot)$, $g_j^R(\cdot,\cdot)$, $g_j^I(\cdot,\cdot)$ satisfy the following conditions: there exist positive constants $F_j^{RR}$, $F_j^{RI}$, $F_j^{IR}$, $F_j^{II}$, $G_j^{RR}$, $G_j^{RI}$, $G_j^{IR}$, $G_j^{II}$ such that
$$|f_j^R(u,v)-f_j^R(\bar{u},\bar{v})| \le F_j^{RR}|u-\bar{u}| + F_j^{RI}|v-\bar{v}|,$$
$$|f_j^I(u,v)-f_j^I(\bar{u},\bar{v})| \le F_j^{IR}|u-\bar{u}| + F_j^{II}|v-\bar{v}|,$$
$$|g_j^R(u,v)-g_j^R(\bar{u},\bar{v})| \le G_j^{RR}|u-\bar{u}| + G_j^{RI}|v-\bar{v}|,$$
$$|g_j^I(u,v)-g_j^I(\bar{u},\bar{v})| \le G_j^{IR}|u-\bar{u}| + G_j^{II}|v-\bar{v}|,$$
for all $(u,v),(\bar{u},\bar{v})\in\mathbb{R}^2$.
Note that Assumption 2 is important. Compared with the standard Lipschitz condition $|f_j(u)-f_j(\bar{u})|\le F_j|u-\bar{u}|$, Assumption 2 is a generalized Lipschitz condition. In CVNN, the activation functions cannot be both bounded and analytic, so the careful selection of activation functions is a challenging task. The results of this paper are therefore obtained under Assumption 2.
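As a concrete illustration of Assumptions 1 and 2 (the activation below is a hypothetical example, not one used in this paper), take $f_j(z_j)=\tanh(x_j)+\frac{i}{1+e^{-y_j}}$ with $z_j=x_j+iy_j$. Then $f_j^R(x_j,y_j)=\tanh(x_j)$, $f_j^I(x_j,y_j)=\frac{1}{1+e^{-y_j}}$, and the mean value theorem gives
$$|f_j^R(u,v)-f_j^R(\bar{u},\bar{v})| \le |u-\bar{u}|, \qquad |f_j^I(u,v)-f_j^I(\bar{u},\bar{v})| \le \tfrac{1}{4}|v-\bar{v}|,$$
so Assumption 2 is satisfied with $F_j^{RR}=1$, $F_j^{II}=\tfrac{1}{4}$ and $F_j^{RI}$, $F_j^{IR}$ taken as arbitrarily small positive constants (the cross terms vanish for this choice).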
Lemma 1
([32]). Suppose $x(t)\in\mathbb{R}^n$ is a continuous and differentiable vector-valued function. Then, for any time instant $t\ge t_0$,
$$D^{\alpha}\big(x^T(t)x(t)\big) \le 2x^T(t)D^{\alpha}x(t),$$
where $0<\alpha<1$.
Lemma 2
([33]). Suppose $W(t)\in\mathbb{R}$ is a continuously differentiable and nonnegative function satisfying
$$\begin{cases} D^{\alpha}W(t) \le -aW(t) + bW(t-\tau), & 0<\alpha<1,\ t\in[0,+\infty),\\ W(t)=\phi(t)\ge 0, & t\in[-\tau,0].\end{cases}$$
If $a>b>0$, then for all $\phi(t)\ge 0$ and $\tau>0$, $\lim_{t\to+\infty}W(t)=0$.
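The behavior described by Lemma 2 can be explored numerically. The Python sketch below is an illustration added here (not part of the original paper): it integrates the equality case $D^{\alpha}W(t)=-aW(t)+bW(t-\tau)$ with a Grünwald-Letnikov-type scheme adapted to the Caputo derivative; the function name simulate_lemma2 and all parameter values are arbitrary choices.

import numpy as np

def simulate_lemma2(a=2.0, b=1.0, alpha=0.9, tau=1.0, T=40.0, h=0.01, W0=1.0):
    """Simulate D^alpha W(t) = -a*W(t) + b*W(t - tau) with constant history
    W(t) = W0 >= 0 on [-tau, 0]. For a > b > 0 Lemma 2 predicts decay to 0."""
    n_steps = int(T / h)
    m = int(round(tau / h))                        # delay in grid steps
    # Grunwald-Letnikov binomial weights c_j = (-1)^j * C(alpha, j)
    c = np.empty(n_steps + 1)
    c[0] = 1.0
    for j in range(1, n_steps + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)
    W = np.empty(n_steps + 1)
    W[0] = W0
    ha = h ** (-alpha)
    for n in range(1, n_steps + 1):
        W_delay = W0 if n - m < 0 else W[n - m]
        # memory term of the Caputo-corrected GL sum (j >= 1)
        mem = np.dot(c[1:n + 1], W[n - 1::-1] - W0)
        W[n] = (ha * W0 - ha * mem + b * W_delay) / (ha + a)
    return W

W = simulate_lemma2()
print(W[-1])   # close to 0, consistent with Lemma 2 when a > b > 0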

3. Main Results

This section derives synchronization conditions for FOCVNN with time delay by designing a suitable controller.
Let $e(t)=\tilde{z}(t)-z(t)$ denote the synchronization error. The error system can then be computed as
$$D^{\alpha}e_j(t) = -c_j e_j(t) + \sum_{k=1}^{n} a_{jk}\big[f_k(\tilde{z}_k(t)) - f_k(z_k(t))\big] + \sum_{k=1}^{n} b_{jk}\big[g_k(\tilde{z}_k(t-\tau)) - g_k(z_k(t-\tau))\big] + U_j(t),$$
or, in vector form,
$$D^{\alpha}e(t) = -Ce(t) + A\big[f(\tilde{z}(t)) - f(z(t))\big] + B\big[g(\tilde{z}(t-\tau)) - g(z(t-\tau))\big] + U(t).$$
In the following, the notation
$$\tilde{z}(t) = \tilde{x}(t) + i\tilde{y}(t), \qquad z(t) = x(t) + iy(t),$$
$$e^R(t) = \tilde{x}(t) - x(t), \qquad e^I(t) = \tilde{y}(t) - y(t)$$
is used. The control input $U(t)=u(t)+iv(t)$ is chosen in the form
$$u(t) = -\eta\big(\tilde{x}(t) - x(t)\big), \qquad v(t) = -\tilde{\eta}\big(\tilde{y}(t) - y(t)\big),$$
where $\eta=\mathrm{diag}(\eta_1,\ldots,\eta_n)$ and $\tilde{\eta}=\mathrm{diag}(\tilde{\eta}_1,\ldots,\tilde{\eta}_n)$ with $\eta_j>0$, $\tilde{\eta}_j>0$ $(j=1,\ldots,n)$ denote the control gains.
Separating real and imaginary parts, the error system becomes
$$D^{\alpha}e^R(t) = -\Omega e^R(t) + A^R\big[f^R(\tilde{x}(t),\tilde{y}(t)) - f^R(x(t),y(t))\big] - A^I\big[f^I(\tilde{x}(t),\tilde{y}(t)) - f^I(x(t),y(t))\big] + B^R\big[g^R(\tilde{x}(t-\tau),\tilde{y}(t-\tau)) - g^R(x(t-\tau),y(t-\tau))\big] - B^I\big[g^I(\tilde{x}(t-\tau),\tilde{y}(t-\tau)) - g^I(x(t-\tau),y(t-\tau))\big],$$
$$D^{\alpha}e^I(t) = -\tilde{\Omega} e^I(t) + A^I\big[f^R(\tilde{x}(t),\tilde{y}(t)) - f^R(x(t),y(t))\big] + A^R\big[f^I(\tilde{x}(t),\tilde{y}(t)) - f^I(x(t),y(t))\big] + B^I\big[g^R(\tilde{x}(t-\tau),\tilde{y}(t-\tau)) - g^R(x(t-\tau),y(t-\tau))\big] + B^R\big[g^I(\tilde{x}(t-\tau),\tilde{y}(t-\tau)) - g^I(x(t-\tau),y(t-\tau))\big],$$
where $A^R$, $B^R$ are the real parts of the matrices $A$, $B$, respectively, $A^I$, $B^I$ are their imaginary parts, $\Omega=\mathrm{diag}(c_1+\eta_1,\ldots,c_n+\eta_n)$, and $\tilde{\Omega}=\mathrm{diag}(c_1+\tilde{\eta}_1,\ldots,c_n+\tilde{\eta}_n)$.
Componentwise, this reads
$$D^{\alpha}e_j^R(t) = -(c_j+\eta_j)e_j^R(t) + \sum_{k=1}^{n} a_{jk}^R\big[f_k^R(\tilde{x}_k(t),\tilde{y}_k(t)) - f_k^R(x_k(t),y_k(t))\big] - \sum_{k=1}^{n} a_{jk}^I\big[f_k^I(\tilde{x}_k(t),\tilde{y}_k(t)) - f_k^I(x_k(t),y_k(t))\big] + \sum_{k=1}^{n} b_{jk}^R\big[g_k^R(\tilde{x}_k(t-\tau),\tilde{y}_k(t-\tau)) - g_k^R(x_k(t-\tau),y_k(t-\tau))\big] - \sum_{k=1}^{n} b_{jk}^I\big[g_k^I(\tilde{x}_k(t-\tau),\tilde{y}_k(t-\tau)) - g_k^I(x_k(t-\tau),y_k(t-\tau))\big],$$
$$D^{\alpha}e_j^I(t) = -(c_j+\tilde{\eta}_j)e_j^I(t) + \sum_{k=1}^{n} a_{jk}^I\big[f_k^R(\tilde{x}_k(t),\tilde{y}_k(t)) - f_k^R(x_k(t),y_k(t))\big] + \sum_{k=1}^{n} a_{jk}^R\big[f_k^I(\tilde{x}_k(t),\tilde{y}_k(t)) - f_k^I(x_k(t),y_k(t))\big] + \sum_{k=1}^{n} b_{jk}^I\big[g_k^R(\tilde{x}_k(t-\tau),\tilde{y}_k(t-\tau)) - g_k^R(x_k(t-\tau),y_k(t-\tau))\big] + \sum_{k=1}^{n} b_{jk}^R\big[g_k^I(\tilde{x}_k(t-\tau),\tilde{y}_k(t-\tau)) - g_k^I(x_k(t-\tau),y_k(t-\tau))\big].$$
Denote
$$\lambda_1 = \min_{1\le j\le n}\Big[(c_j+\eta_j) - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^R|F_k^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^R|F_j^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^R|F_k^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^I|F_k^{IR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^I|F_j^{IR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^I|F_k^{II} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^R|G_k^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^R|G_k^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^I|G_k^{IR} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^I|G_k^{II} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^I|F_j^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^R|F_j^{IR}\Big],$$
$$\lambda_2 = \min_{1\le j\le n}\Big[(c_j+\tilde{\eta}_j) - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^R|F_j^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^I|F_j^{II} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^I|F_k^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^I|F_k^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^I|F_j^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^R|F_k^{IR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^R|F_k^{II} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^R|F_j^{II} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^I|G_k^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^I|G_k^{RI} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^R|G_k^{IR} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^R|G_k^{II}\Big],$$
$$\mu_1 = \max_{1\le j\le n}\Big[\sum_{k=1}^{n}\big(\tfrac{1}{2}|b_{kj}^R|G_j^{RR} + \tfrac{1}{2}|b_{kj}^I|G_j^{IR}\big) + \sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^R|G_j^{IR} + \sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^I|G_j^{RR}\Big],$$
$$\mu_2 = \max_{1\le j\le n}\Big[\sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^R|G_j^{RI} + \sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^I|G_j^{II} + \sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^I|G_j^{RI} + \sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^R|G_j^{II}\Big].$$
Theorem 1.
Suppose that Assumptions 1 and 2 hold and that the control gains $\eta$, $\tilde{\eta}$ satisfy $\lambda>\mu>0$, where $\lambda=\min\{\lambda_1,\lambda_2\}$ and $\mu=\max\{\mu_1,\mu_2\}$. Then the master system (1) and the slave system (3) are globally asymptotically synchronized.
Proof. 
See Appendix A.
If the parameters, states and activation functions in systems (1) and (3) are all taken from the real field then, based on Theorem 1, we get
$$\lambda_1 = \min_{1\le j\le n}\Big[(c_j+\eta_j) - \sum_{k=1}^{n}\tfrac{1}{2}|a_{jk}^R|F_k^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|a_{kj}^R|F_j^{RR} - \sum_{k=1}^{n}\tfrac{1}{2}|b_{jk}^R|G_k^{RR}\Big], \quad \lambda_2 = \min_{1\le j\le n} c_j, \quad \mu_1 = \max_{1\le j\le n}\sum_{k=1}^{n}\tfrac{1}{2}|b_{kj}^R|G_j^{RR}, \quad \mu_2 = 0.$$
Thus, one obtains the following corollary. ☐
Corollary 1.
Suppose that Assumptions 1 and 2 hold and that the control gain $\eta$ satisfies $\lambda_1>\mu_1>0$. Then the master system (1) and the slave system (3) are globally asymptotically synchronized.
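The criterion of Theorem 1 is straightforward to verify numerically. The Python sketch below is added here for illustration only; the helper name check_theorem1, the data layout (dictionaries of Lipschitz constants), and any example values are assumptions, not part of the original paper.

import numpy as np

def check_theorem1(A, B, c, eta, eta_t, F, G):
    """Evaluate the synchronization criterion of Theorem 1.

    A, B          : complex n x n connection weight matrices
    c, eta, eta_t : length-n arrays (self-regulation and the two control gains)
    F, G          : dicts with keys 'RR', 'RI', 'IR', 'II', each a length-n
                    array of the Lipschitz constants of Assumption 2
    Returns (lambda_, mu, satisfied)."""
    aR, aI = np.abs(A.real), np.abs(A.imag)
    bR, bI = np.abs(B.real), np.abs(B.imag)
    n = len(c)
    lam1, lam2, mu1, mu2 = (np.empty(n) for _ in range(4))
    for j in range(n):
        lam1[j] = (c[j] + eta[j]
                   - 0.5 * (aR[j, :] @ F['RR'] + aR[:, j].sum() * F['RR'][j]
                            + aR[j, :] @ F['RI'] + aI[j, :] @ F['IR']
                            + aI[:, j].sum() * F['IR'][j] + aI[j, :] @ F['II']
                            + bR[j, :] @ G['RR'] + bR[j, :] @ G['RI']
                            + bI[j, :] @ G['IR'] + bI[j, :] @ G['II']
                            + aI[:, j].sum() * F['RR'][j] + aR[:, j].sum() * F['IR'][j]))
        lam2[j] = (c[j] + eta_t[j]
                   - 0.5 * (aR[:, j].sum() * F['RI'][j] + aI[:, j].sum() * F['II'][j]
                            + aI[j, :] @ F['RR'] + aI[j, :] @ F['RI']
                            + aI[:, j].sum() * F['RI'][j] + aR[j, :] @ F['IR']
                            + aR[j, :] @ F['II'] + aR[:, j].sum() * F['II'][j]
                            + bI[j, :] @ G['RR'] + bI[j, :] @ G['RI']
                            + bR[j, :] @ G['IR'] + bR[j, :] @ G['II']))
        mu1[j] = 0.5 * (bR[:, j].sum() * G['RR'][j] + bI[:, j].sum() * G['IR'][j]
                        + bR[:, j].sum() * G['IR'][j] + bI[:, j].sum() * G['RR'][j])
        mu2[j] = 0.5 * (bR[:, j].sum() * G['RI'][j] + bI[:, j].sum() * G['II'][j]
                        + bI[:, j].sum() * G['RI'][j] + bR[:, j].sum() * G['II'][j])
    lam, mu = min(lam1.min(), lam2.min()), max(mu1.max(), mu2.max())
    return lam, mu, lam > mu > 0

For instance, feeding in the matrices of Section 4 together with Lipschitz constants estimated from the chosen activations (e.g., all set to 0.5 as a rough bound, an assumption made only for this illustration) immediately reports whether a candidate pair of gains satisfies $\lambda>\mu>0$.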
Remark 1.
Compared with [21], this paper adopts the comparison theorem for linear fractional-order systems with delay to achieve the synchronization of FOCVNN with time delay, and the corresponding results are presented. The method is new and effective for designing synchronization schemes for complex-valued neural networks.
Remark 2.
Lemmas 1 and 2 play important and useful roles in studying the synchronization of FOCVNN. The proposed method can be extended to the synchronization of fractional-order complex-valued memristive neural networks with delays, as well as fractional-order chaotic and hyperchaotic systems.

4. Numerical Simulations

The following fractional-order complex-valued delayed neural network is considered as the master system:
$$D^{\alpha}z(t) = -Cz(t) + Af(z(t)) + Bg(z(t-\tau)) + I(t),$$
where $z(t)=(z_1(t),z_2(t))^T$, $z_j(t)=x_j(t)+iy_j(t)$, $j=1,2$, $\alpha=0.98$, $\tau=1$, and
$$C=\begin{pmatrix} 2.5 & 0 \\ 0 & 2 \end{pmatrix}, \qquad A=\begin{pmatrix} 3+i & 2-5i \\ 1+1.5i & 0.5+i \end{pmatrix}, \qquad B=\begin{pmatrix} 1+2i & 1+i \\ 1.5-1.5i & 1.5+5i \end{pmatrix},$$
$$I(t)=\big(\sin t - 2i\cos t,\ 3\cos(t+1) + i\sin(t-1)\big)^T,$$
$f(z(t))=(f_1(z_1(t)),f_2(z_2(t)))^T$, $g(z(t))=(g_1(z_1(t)),g_2(z_2(t)))^T$, with
$$f_j(z_j) = \frac{1-e^{-x_j}}{1+e^{-x_j}} + \frac{1}{1+e^{-y_j}}, \qquad g_j(z_j) = \frac{1-e^{-y_j}}{1+e^{-y_j}} + \frac{1}{1+e^{-x_j}}, \qquad j=1,2.$$
The slave system is given as:
$$D^{\alpha}\tilde{z}(t) = -C\tilde{z}(t) + Af(\tilde{z}(t)) + Bg(\tilde{z}(t-\tau)) + I(t) + U(t),$$
where $\tilde{z}(t)=(\tilde{z}_1(t),\tilde{z}_2(t))^T$, $\tilde{z}_j(t)=\tilde{x}_j(t)+i\tilde{y}_j(t)$ $(j=1,2)$, and $U(t)=(U_1(t),U_2(t))^T$ with $U_j(t)=u_j(t)+iv_j(t)$ $(j=1,2)$ is the control function to be designed.
The initial values are chosen as $z_1(s)=1-2i$, $z_2(s)=2-4i$, $\tilde{z}_1(s)=1+2i$, $\tilde{z}_2(s)=3+3i$ for $s\in[-1,0]$. The curves of $z_1(t)$, $z_2(t)$ and $\tilde{z}_1(t)$, $\tilde{z}_2(t)$ without control are shown in Figure 1 and Figure 2. Figure 3, Figure 4, Figure 5 and Figure 6 depict the time evolution of the real and imaginary parts of $z_1(t)$, $z_2(t)$ and $\tilde{z}_1(t)$, $\tilde{z}_2(t)$ with control gains $\eta=\tilde{\eta}=0$. The simulation results show that the slave system does not synchronize with the master system in the absence of control.
If we select the control gains $\eta_1=\eta_2=1$, $\tilde{\eta}_1=\tilde{\eta}_2=2$, a simple computation shows that the condition of Theorem 1 is satisfied. With the same initial values as above, the curves of $z_1(t)$, $z_2(t)$ and $\tilde{z}_1(t)$, $\tilde{z}_2(t)$ under control are shown in Figure 7 and Figure 8. The synchronization trajectories of the real and imaginary parts of $z_1(t)$, $z_2(t)$, $\tilde{z}_1(t)$, $\tilde{z}_2(t)$ are shown in Figure 9, Figure 10, Figure 11 and Figure 12, and the corresponding synchronization errors are shown in Figure 13, Figure 14, Figure 15 and Figure 16, which indicates that the slave system achieves synchronization with the master system.
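For readers who want to reproduce the spirit of this experiment, the following Python sketch simulates the controlled master-slave pair. It is an illustration added here, not the authors' code: the time step, horizon and the explicit Grünwald-Letnikov-type scheme are assumptions, and the parameter signs follow the reconstruction above.

import numpy as np

alpha, tau, h, T = 0.98, 1.0, 0.005, 30.0
n_steps, m = int(T / h), int(round(tau / h))

C = np.diag([2.5, 2.0])
A = np.array([[3 + 1j, 2 - 5j], [1 + 1.5j, 0.5 + 1j]])
B = np.array([[1 + 2j, 1 + 1j], [1.5 - 1.5j, 1.5 + 5j]])
eta = np.array([1.0, 1.0])          # gains for the real part
eta_t = np.array([2.0, 2.0])        # gains for the imaginary part

def f(z):   # activation of the example
    x, y = z.real, z.imag
    return (1 - np.exp(-x)) / (1 + np.exp(-x)) + 1.0 / (1 + np.exp(-y))

def g(z):
    x, y = z.real, z.imag
    return (1 - np.exp(-y)) / (1 + np.exp(-y)) + 1.0 / (1 + np.exp(-x))

def I(t):
    return np.array([np.sin(t) - 2j * np.cos(t),
                     3 * np.cos(t + 1) + 1j * np.sin(t - 1)])

# Grunwald-Letnikov weights c_j = (-1)^j * C(alpha, j)
c = np.empty(n_steps + 1)
c[0] = 1.0
for j in range(1, n_steps + 1):
    c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)

z = np.empty((n_steps + 1, 2), dtype=complex)      # master state
zt = np.empty((n_steps + 1, 2), dtype=complex)     # slave state
z[0], zt[0] = [1 - 2j, 2 - 4j], [1 + 2j, 3 + 3j]   # constant history on [-tau, 0]

def rhs(master, slave, master_d, slave_d, t):
    e = slave - master
    u = -eta * e.real - 1j * eta_t * e.imag        # linear feedback controller
    dz = -C @ master + A @ f(master) + B @ g(master_d) + I(t)
    dzt = -C @ slave + A @ f(slave) + B @ g(slave_d) + I(t) + u
    return dz, dzt

for n in range(1, n_steps + 1):
    zd = z[0] if n - 1 - m < 0 else z[n - 1 - m]    # delayed states
    ztd = zt[0] if n - 1 - m < 0 else zt[n - 1 - m]
    dz, dzt = rhs(z[n - 1], zt[n - 1], zd, ztd, (n - 1) * h)
    mem_z = c[1:n + 1] @ (z[n - 1::-1] - z[0])      # GL memory terms
    mem_zt = c[1:n + 1] @ (zt[n - 1::-1] - zt[0])
    z[n] = z[0] - mem_z + h**alpha * dz
    zt[n] = zt[0] - mem_zt + h**alpha * dzt

print(np.abs(zt[-1] - z[-1]))   # error norm, expected to shrink under control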

5. Conclusions

Compared with real-valued neural networks, FOCVNN have more complicated properties and dynamical behaviors. In this paper, the synchronization of FOCVNN with time delay is considered. An error feedback controller is designed by using the comparison theorem for linear fractional-order systems with delay and a fractional-order inequality. An example is presented to demonstrate the correctness and effectiveness of the obtained results. The method is not only easy to apply for achieving the synchronization of FOCVNN with delay, but also improves on previous results. The results obtained also apply to the synchronization of fractional-order real-valued neural networks with delay. The stability and synchronization of FOCVNN remain open topics to be pursued in the future.

Acknowledgments

The authors thank the referees and the editor for their valuable comments. This work was supported by the National Natural Science Foundation of China (No. 61573096), the Natural Science Foundation of Anhui Province (No. 1608085MA14) and the Natural Science Foundation of the Higher Education Institutions of Anhui Province (No. KJ2015A152).

Author Contributions

Weiwei Zhang was in charge of the fractional calculus theory and the paper writing; Jinde Cao was in charge of the synchronization controller design; Dingyuan Chen mainly contributed to the stability analysis; and Fuad E. Alsaadi was in charge of the discussion and the simulation. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CVNN    complex-valued neural networks
RVNN    real-valued neural networks
FOCVNN  fractional-order complex-valued neural networks

Appendix A

Proof of Theorem 1:
Construct an auxiliary function:
$$V(e(t)) = \frac{1}{2}\sum_{j=1}^{n}\big(e_j^R(t)\big)^2 + \frac{1}{2}\sum_{j=1}^{n}\big(e_j^I(t)\big)^2.$$
Using Lemma 1, we obtain
$$D^{\alpha}V(e(t)) \le \sum_{j=1}^{n} e_j^R(t)\,D^{\alpha}e_j^R(t) + \sum_{j=1}^{n} e_j^I(t)\,D^{\alpha}e_j^I(t).$$
Substituting the componentwise error system and applying Assumption 2 yields
$$D^{\alpha}V(e(t)) \le \sum_{j=1}^{n}\Big\{-(c_j+\eta_j)\big(e_j^R(t)\big)^2 + \sum_{k=1}^{n}|e_j^R(t)||a_{jk}^R|\big[F_k^{RR}|e_k^R(t)|+F_k^{RI}|e_k^I(t)|\big] + \sum_{k=1}^{n}|e_j^R(t)||a_{jk}^I|\big[F_k^{IR}|e_k^R(t)|+F_k^{II}|e_k^I(t)|\big] + \sum_{k=1}^{n}|e_j^R(t)||b_{jk}^R|\big[G_k^{RR}|e_k^R(t-\tau)|+G_k^{RI}|e_k^I(t-\tau)|\big] + \sum_{k=1}^{n}|e_j^R(t)||b_{jk}^I|\big[G_k^{IR}|e_k^R(t-\tau)|+G_k^{II}|e_k^I(t-\tau)|\big]\Big\} + \sum_{j=1}^{n}\Big\{-(c_j+\tilde{\eta}_j)\big(e_j^I(t)\big)^2 + \sum_{k=1}^{n}|e_j^I(t)||a_{jk}^I|\big[F_k^{RR}|e_k^R(t)|+F_k^{RI}|e_k^I(t)|\big] + \sum_{k=1}^{n}|e_j^I(t)||a_{jk}^R|\big[F_k^{IR}|e_k^R(t)|+F_k^{II}|e_k^I(t)|\big] + \sum_{k=1}^{n}|e_j^I(t)||b_{jk}^I|\big[G_k^{RR}|e_k^R(t-\tau)|+G_k^{RI}|e_k^I(t-\tau)|\big] + \sum_{k=1}^{n}|e_j^I(t)||b_{jk}^R|\big[G_k^{IR}|e_k^R(t-\tau)|+G_k^{II}|e_k^I(t-\tau)|\big]\Big\}.$$
Applying the elementary inequality $2|p||q|\le p^2+q^2$ to each product and collecting, for every $j$, the coefficients of $\big(e_j^R(t)\big)^2$, $\big(e_j^I(t)\big)^2$, $\big(e_j^R(t-\tau)\big)^2$ and $\big(e_j^I(t-\tau)\big)^2$, these coefficients are exactly the quantities appearing in the definitions of $\lambda_1$, $\lambda_2$, $\mu_1$ and $\mu_2$ given before Theorem 1.
Then, one gets
$$D^{\alpha}V(e(t)) \le -\lambda_1\sum_{j=1}^{n}\big(e_j^R(t)\big)^2 - \lambda_2\sum_{j=1}^{n}\big(e_j^I(t)\big)^2 + \mu_1\sum_{j=1}^{n}\big(e_j^R(t-\tau)\big)^2 + \mu_2\sum_{j=1}^{n}\big(e_j^I(t-\tau)\big)^2 \le -2\lambda V(e(t)) + 2\mu V(e(t-\tau)).$$
According to Lemma 2, when $\lambda>\mu>0$, $\lim_{t\to+\infty}V(e(t))=0$; hence the slave system (3) synchronizes with the master system (1). This completes the proof. ☐

References

  1. Aizenberg, I.; Aizenberg, N.N.; Vandewalle, J.P.L. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications; Springer: New York, NY, USA, 2000; pp. 88–106.
  2. Aizenberg, I. Complex-Valued Neural Networks with Multi-Valued Neurons; Springer: New York, NY, USA, 2011; pp. 39–62.
  3. Hirose, A. Complex-Valued Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 53–71.
  4. Tanaka, G.; Aihara, K. Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction. IEEE Trans. Neural Netw. 2009, 20, 1463–1473.
  5. Hirose, A. Complex-Valued Neural Networks: Advances and Applications; Wiley: Hoboken, NJ, USA, 2013; pp. 32–58.
  6. Hirose, A. Dynamics of fully complex-valued neural networks. Electron. Lett. 1992, 28, 1492–1494.
  7. Mathews, J.H.; Howell, R.W. Complex Analysis for Mathematics and Engineering; Jones and Bartlett Learning: Burlington, MA, USA, 2012; pp. 63–98.
  8. Hu, J.; Wang, J. Global stability of complex-valued recurrent neural networks with time-delays. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 853–865.
  9. Xu, X.H.; Zhang, J.Y.; Shi, J.Z. Exponential stability of complex-valued neural networks with mixed delays. Neurocomputing 2014, 128, 483–490.
  10. Song, Q.K.; Zhao, Z.J.; Liu, Y.R. Impulsive effects on stability of discrete-time complex-valued neural networks with both discrete and distributed time-varying delays. Neurocomputing 2015, 168, 1044–1050.
  11. Gong, W.Q.; Liang, J.L.; Zhang, C.J. Multistability of complex-valued neural networks with distributed delays. Neural Comput. Appl. 2017, 28, 1–14.
  12. Zhou, B.; Song, Q.K. Boundedness and complete stability of complex-valued neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1227–1238.
  13. Wu, E.L.; Yang, X.S. Adaptive synchronization of coupled nonidentical chaotic systems with complex variables and stochastic perturbations. Nonlinear Dyn. 2016, 84, 261–269.
  14. Velmurugan, G.; Rakkiyappan, R.; Cao, J.D. Further analysis of global μ-stability of complex-valued neural networks with unbounded time-varying delays. Neural Netw. 2015, 67, 14–27.
  15. Leibniz, G.W. Mathematische Schriften; Georg Olms Verlagsbuchhandlung: Hildesheim, Germany, 1962; pp. 36–63.
  16. Zhang, W.W.; Cao, J.D.; Alsaedi, A.; Alsaadi, F.E. New methods of finite-time synchronization for a class of fractional-order delayed neural networks. Math. Probl. Eng. 2017.
  17. Zhang, H.; Ye, R.Y.; Cao, J.D.; Alsaedi, A. Existence and globally asymptotic stability of equilibrium solution for fractional-order hybrid BAM neural networks with distributed delays and impulses. Complexity 2017.
  18. Zhang, W.W.; Wu, R.C.; Cao, J.D.; Alsaedi, A.; Hayat, T. Synchronization of a class of fractional-order neural networks with multiple time delays by comparison principles. Nonlinear Anal. Model. Control 2017, 22, 636–645.
  19. Zhang, H.; Ye, R.Y.; Cao, J.D.; Alsaedi, A. Lyapunov functional approach to stability analysis of Riemann-Liouville fractional neural networks with time-varying delays. Asian J. Control 2017.
  20. Rakkiyappan, R.; Velmurugan, G.; Cao, J.D. Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with time delays. Nonlinear Dyn. 2014, 78, 2823–2836.
  21. Ding, X.S.; Cao, J.D.; Zhao, X.; Alsaadi, F.E. Finite-time stability of fractional-order complex-valued neural networks with time delays. Neural Process. Lett. 2017, 46, 561–580.
  22. Rakkiyappan, R.; Cao, J.D.; Velmurugan, G. Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 84–97.
  23. Rakkiyappan, R.; Velmurugan, G.; Cao, J.D. Stability analysis of fractional-order complex-valued neural networks with time delays. Chaos Solitons Fractals 2015, 78, 297–316.
  24. Wei, H.Z.; Li, R.X.; Chen, C.R.; Tu, Z.W. Stability analysis of fractional order complex-valued memristive neural networks with time delays. Neural Process. Lett. 2016, 45, 379–399.
  25. Bao, H.B.; Park, J.H.; Cao, J.D. Synchronization of fractional-order complex-valued neural networks with time delay. Neural Netw. 2016, 81, 16–28.
  26. Li, X.; Wu, J. Stability of nonlinear differential systems with state-dependent delayed impulses. Automatica 2016, 64, 63–69.
  27. Li, X.; Song, S. Stabilization of delay systems: Delay-dependent impulsive control. IEEE Trans. Autom. Control 2017, 62, 406–411.
  28. Li, X.; Cao, J.D. An impulsive delay inequality involving unbounded time-varying delay and applications. IEEE Trans. Autom. Control 2017, 62, 3618–3625.
  29. Tavazoei, M.S.; Haeri, M. A proof for non existence of periodic solutions in time invariant fractional order systems. Automatica 2009, 45, 1886–1890.
  30. Shen, J.; Lam, J. Non-existence of finite-time stable equilibria in fractional-order nonlinear systems. Automatica 2014, 50, 547–551.
  31. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000; pp. 45–74.
  32. Aguila-Camacho, N.; Duarte-Mermoud, M.A.; Gallegos, J.A. Lyapunov functions for fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 2951–2957.
  33. Liang, S.; Wu, R.C.; Chen, L.P. Adaptive pinning synchronization in fractional-order uncertain complex dynamical networks with delay. Physica A 2016, 444, 49–62.
Figure 1. Curves of $z_1$, $z_2$, $\tilde{z}_1$, $\tilde{z}_2$ in 3-dimensional space without control.
Figure 2. Curves of $z_1$, $z_2$, $\tilde{z}_1$, $\tilde{z}_2$ in 2-dimensional space without control.
Figure 3. The trajectories of $x_1$, $\tilde{x}_1$ without control.
Figure 4. The trajectories of $x_2$, $\tilde{x}_2$ without control.
Figure 5. The trajectories of $y_1$, $\tilde{y}_1$ without control.
Figure 6. The trajectories of $y_2$, $\tilde{y}_2$ without control.
Figure 7. Curves of $z_1$, $z_2$, $\tilde{z}_1$, $\tilde{z}_2$ in 3-dimensional space with the controller.
Figure 8. Curves of $z_1$, $z_2$, $\tilde{z}_1$, $\tilde{z}_2$ in 2-dimensional space with the controller.
Figure 9. The synchronization trajectories of $x_1$, $\tilde{x}_1$ with the controller.
Figure 10. The synchronization trajectories of $x_2$, $\tilde{x}_2$ with the controller.
Figure 11. The synchronization trajectories of $y_1$, $\tilde{y}_1$ with the controller.
Figure 12. The synchronization trajectories of $y_2$, $\tilde{y}_2$ with the controller.
Figure 13. The synchronization error $e_1^R$.
Figure 14. The synchronization error $e_2^R$.
Figure 15. The synchronization error $e_1^I$.
Figure 16. The synchronization error $e_2^I$.
