Synchronization of a Class of Fractional-Order Chaotic Neural Networks

Liping Chen <sup>1</sup>, Jianfeng Qu <sup>1</sup>, \*, Yi Chai <sup>1</sup>, Ranchao Wu <sup>2</sup> and Guoyuan Qi <sup>3</sup>


*Received: 5 June 2013; in revised form: 3 August 2013 / Accepted: 5 August 2013 / Published: 14 August 2013*

Abstract: The synchronization problem is studied in this paper for a class of fractional-order chaotic neural networks. By using the Mittag-Leffler function, the M-matrix and linear feedback control, a sufficient condition is developed ensuring the synchronization of such neural models with the Caputo fractional derivatives. The synchronization condition is easy to verify and implement and relies only on the system structure. Furthermore, the theoretical results are applied to a typical fractional-order chaotic Hopfield neural network, and numerical simulation demonstrates the effectiveness and feasibility of the proposed method.

Keywords: synchronization; fractional-order; chaotic neural networks; linear feedback control

#### 1. Introduction

Fractional calculus is a topic with a history of more than 300 years. Although it has a long mathematical history, its applications to physics and engineering have become a focus of interest only recently. Recent monographs and symposia proceedings have highlighted the application of fractional calculus in physics, continuum mechanics, signal processing, bioengineering, diffusion waves and electromagnetics [1–4]. The major advantage of fractional-order derivatives is that they provide an excellent instrument for describing the memory and hereditary properties of various materials and processes. As such, some researchers have introduced fractional calculus into neural networks to form fractional-order neural networks, which can better describe the dynamical behavior of neurons, such as "memory". It has been pointed out that fractional derivatives provide neurons with a fundamental and general computational ability that can contribute to efficient information processing, stimulus anticipation and frequency-independent phase shifts of oscillatory neuronal firing [5]. It has been suggested that the oculomotor integrator, which converts eye velocity into eye position commands, may be of fractional order [6]. It has also been demonstrated that neural network approximation taken at the fractional level results in higher rates of approximation [7]. Furthermore, note that fractional-order recurrent neural networks might be expected to play an important role in parameter estimation. Therefore, the incorporation of memory terms (a fractional derivative or integral operator) into neural network models is an important improvement [8], and the study of fractional-order neural networks is of significant interest.

Chaos has been a focus of intensive study in numerous fields during the last four decades. Moreover, it has been verified that some neural networks can exhibit chaotic dynamics. For example, experimental and theoretical studies have revealed that a mammalian brain not only can display strange attractors and other transient characteristics in its dynamical behavior for its associative memories, but also can modulate oscillatory neuronal synchronization by selective visual attention [9,10]. In recent years, the study of synchronization of chaotic neural networks has attracted considerable attention, due to potential applications in many fields, including secure communication, parallel image processing, biological systems, information science, *etc*. As is well known, there are many synchronization results for integer-order neural networks; see [11–13] and the references therein. On the other hand, since bifurcations and chaos of fractional-order neural networks were first investigated in [14,15], some important and interesting results on fractional-order neural networks have been obtained. For instance, in [16], a fractional-order Hopfield neural model was proposed, and its stability was investigated via an energy-like function. Chaos and hyperchaos in fractional-order cellular neural networks were discussed in [17]. Yu *et al*. [18] investigated α-stability and α-synchronization for fractional-order neural networks. Several recent results concerning chaotic synchronization in fractional-order neural networks have been reported in [19–22].

Due to the complexity of fractional-order systems, to the best of our knowledge, there are few theoretical results on the synchronization of fractional-order neural networks; most of the existing results are purely numerical [19–22]. Although many synchronization results for integer-order neural networks have appeared in the past few decades, those results and methods cannot be extended and applied easily to the fractional-order case. Therefore, establishing theoretical sufficient criteria for the synchronization of fractional-order neural networks is both necessary and challenging. Motivated by the above discussions, by using the Mittag-Leffler function, some properties of fractional calculus and linear feedback control, a simple and efficient criterion in terms of the M-matrix is derived for the synchronization of such neural networks. Numerical simulations also demonstrate the effectiveness and feasibility of the proposed technique.

The rest of the paper is organized as follows. Some necessary definitions and lemmas are given, and the fractional-order network model is introduced, in Section 2. A sufficient criterion ensuring the synchronization of such neural networks is presented in Section 3. An example and simulations are given in Section 4. Finally, the paper is concluded in Section 5.

#### 2. Preliminaries and System Description

In this section, some definitions of fractional calculation are recalled and some useful lemmas are introduced.

Definition 1[1]. The fractional integral (Riemann-Liouville integral), $D\_{t\_0,t}^{-\alpha}$, with fractional order, $\alpha \in R^{+}$, of function $x(t)$ is defined as:

$$D\_{t\_0,t}^{-\alpha}x(t) = \frac{1}{\Gamma(\alpha)} \int\_{t\_0}^{t} (t-\tau)^{\alpha-1} x(\tau) d\tau \tag{1}$$

where $\Gamma(\cdot)$ is the gamma function, $\Gamma(\tau) = \int\_0^{\infty} t^{\tau-1}e^{-t}dt$.

Definition 2[1]. The Riemann-Liouville derivative of fractional order α of function x(t) is given as:

$$\,\_{RL}D\_{t\_0,t}^{\alpha}x(t) \;=\,\frac{d^n}{dt^n}D\_{t\_0,t}^{-(n-\alpha)}x(t) = \frac{d^n}{dt^n}\frac{1}{\Gamma(n-\alpha)}\int\_{t\_0}^t (t-\tau)^{n-\alpha-1}x(\tau)d\tau\tag{2}$$

where $n - 1 < \alpha < n \in Z^{+}$.

Definition 3[1]. The Caputo derivative of fractional order α of function x(t) is defined as follows:

$$\,\_{C}D\_{t\_0,t}^{\alpha}x(t) \;=\; D\_{t\_0,t}^{-(n-\alpha)}\frac{d^{n}}{dt^{n}}x(t) = \frac{1}{\Gamma(n-\alpha)}\int\_{t\_0}^{t}(t-\tau)^{n-\alpha-1}x^{(n)}(\tau)d\tau\tag{3}$$

where $n - 1 < \alpha < n \in Z^{+}$.

Note from Equations (2) and (3) that the fractional derivative is related to the whole history of a function, while the integer-order one depends only on its nearby points. That is, the next state of a system depends not only upon its current state, but also upon its historical states starting from the initial time. As a result, a model described by fractional-order derivatives possesses memory and inheritance and describes the states of neurons more precisely. In the following, the notation $D^{\alpha}$ is chosen as the Caputo derivative $D\_{0,t}^{\alpha}$. For $x \in R^{n}$, the norm is defined by $\|x\| = \sum\_{i=1}^{n}|x\_i|$.
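The history dependence described above can be made concrete with a small numerical sketch, which is illustrative and not part of the paper. It uses the standard first-order Grünwald-Letnikov discretization, which coincides with the Caputo derivative for functions vanishing at the initial time, to approximate $D^{\alpha}x(t)$ for $x(t) = t$; the exact Caputo derivative here is $t^{1-\alpha}/\Gamma(2-\alpha)$. The step size `h` is an illustrative choice.

```python
import math

def gl_caputo(x, t, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of x at
    time t, using samples on the grid 0, h, ..., t; for x(0) = 0 this
    coincides with the Caputo derivative."""
    n = round(t / h)
    total, c = 0.0, 1.0   # c carries the weight (-1)^j * binom(alpha, j)
    for j in range(n + 1):
        total += c * x(t - j * h)           # weighted history of the function
        c *= 1.0 - (1.0 + alpha) / (j + 1)  # recurrence for the binomial weights
    return total / h**alpha

# For x(t) = t, the Caputo derivative is t^(1-alpha) / Gamma(2 - alpha).
alpha = 0.5
approx = gl_caputo(lambda s: s, 1.0, alpha, h=1e-3)
exact = 1.0 / math.gamma(2.0 - alpha)
print(approx, exact)  # both close to 1.1284
```

Note that the whole sampled history of $x$ enters the sum, in contrast to a finite-difference approximation of an integer-order derivative, which uses only a few nearby points.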

Definition 4[1]. The Mittag-Leffler function with two parameters is defined as:

$$E\_{\alpha,\beta}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha + \beta)}\tag{4}$$

where $\alpha > 0$, $\beta > 0$ and $z \in C$. When $\beta = 1$, one has $E\_{\alpha}(z) = E\_{\alpha,1}(z)$; further, $E\_{1,1}(z) = e^{z}$.
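For moderate $|z|$, the series in Equation (4) can be evaluated by direct truncation, as in the sketch below (an illustrative helper, not from the paper). For large negative real arguments the alternating series suffers catastrophic cancellation and dedicated algorithms should be used instead; the truncation length is capped so that `math.gamma` does not overflow for the orders $\alpha \le 1$ used here.

```python
import math

def mittag_leffler(alpha, beta, z, terms=100):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z), evaluated by
    truncating the defining power series; adequate for moderate |z| and
    alpha <= 1 (larger arguments need specialized algorithms)."""
    return sum(z**k / math.gamma(k * alpha + beta) for k in range(terms))

# Sanity checks against the definition: E_{1,1}(z) = e^z and E_{alpha,1}(0) = 1.
print(mittag_leffler(1.0, 1.0, 1.0))   # ~ e = 2.71828...
print(mittag_leffler(0.95, 1.0, 0.0))  # 1.0
```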

Lemma 1. Let $V(t)$ be a continuous function on $[0, +\infty)$ satisfying:

$$D^{\alpha}V(t) \le -\lambda V(t) \tag{5}$$

Then:

$$V(t) \le V(t\_0) E\_\alpha(-\lambda(t - t\_0)^\alpha) \tag{6}$$

where $\alpha \in (0, 1)$ and $\lambda$ is a positive constant.

Proof. It follows from Equation (5) that there exists a nonnegative function, M(t), such that:

$$D^{\alpha}V(t) + \lambda V(t) + M(t) = 0\tag{7}$$

Taking the Laplace transform on Equation (7), then one has:

$$s^{\alpha}V(s) - s^{\alpha - 1}V(t\_0) + \lambda V(s) + M(s) = 0\tag{8}$$

where $V(s) = \mathcal{L}\{V(t)\}$ and $M(s) = \mathcal{L}\{M(t)\}$. It then follows that:

$$V(s) = \frac{s^{\alpha - 1}V(t\_0) - M(s)}{s^{\alpha} + \lambda} \tag{9}$$

Taking the inverse Laplace transform in Equation (9), one obtains:

$$V(t) = V(t\_0)E\_\alpha(-\lambda(t-t\_0)^\alpha) - M(t) \* \left[ (t-t\_0)^{\alpha-1} E\_{\alpha,\alpha}(-\lambda(t-t\_0)^\alpha) \right] \tag{10}$$

Note that both $(t - t\_0)^{\alpha-1}$ and $E\_{\alpha,\alpha}(-\lambda(t - t\_0)^{\alpha})$ are nonnegative functions for $0 < \alpha < 1$; it follows that:

$$V(t) \le V(t\_0) E\_\alpha(-\lambda(t - t\_0)^\alpha) \tag{11}$$

Lemma 2[1]. If $0 < \alpha < 2$, $\beta$ is an arbitrary real number, $\mu$ satisfies $\pi\alpha/2 < \mu < \min\{\pi, \pi\alpha\}$ and $C$ is a positive real constant, then:

$$|E\_{\alpha,\beta}(z)| \le \frac{C}{1+|z|}, (\mu \le |\arg(z)| \le \pi), |z| > 0 \tag{12}$$

Definition 5[23]. A real $n \times n$ matrix, $A = (a\_{ij})$, is said to be an M-matrix if $a\_{ij} \le 0$, $i, j = 1, 2, \cdots, n$, $i \ne j$, and all successive principal minors of $A$ are positive.

Lemma 3[23]. Let $A = (a\_{ij})$ be an $n \times n$ matrix with non-positive off-diagonal elements. Then, the following statements are equivalent:

(1) $A$ is a nonsingular M-matrix;

(2) there exists a vector $\xi = (\xi\_1, \cdots, \xi\_n)^{T}$ with $\xi\_i > 0$, $i = 1, \cdots, n$, such that $A^{T}\xi > 0$ holds componentwise.
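The M-matrix property is straightforward to check computationally. The sketch below (hypothetical helper names, not from the paper) tests the sign pattern and the leading principal minors via Gaussian elimination and applies the check to the matrix $(C-K)-|A|L$ that appears in the example of Section 4.

```python
def leading_minors(M):
    """Determinants of the leading principal minors of a square matrix,
    computed by Gaussian elimination with partial pivoting."""
    n = len(M)
    minors = []
    for k in range(1, n + 1):
        A = [row[:k] for row in M[:k]]   # k-th leading principal submatrix
        det = 1.0
        for i in range(k):
            p = max(range(i, k), key=lambda r: abs(A[r][i]))  # pivot row
            if abs(A[p][i]) < 1e-12:
                det = 0.0
                break
            if p != i:
                A[i], A[p] = A[p], A[i]
                det = -det               # a row swap flips the sign
            det *= A[i][i]
            for r in range(i + 1, k):
                factor = A[r][i] / A[i][i]
                for col in range(i, k):
                    A[r][col] -= factor * A[i][col]
        minors.append(det)
    return minors

def is_nonsingular_m_matrix(M):
    """Non-positive off-diagonal entries plus positive leading minors."""
    n = len(M)
    off_diag_ok = all(M[i][j] <= 0 for i in range(n) for j in range(n) if i != j)
    return off_diag_ok and all(d > 0 for d in leading_minors(M))

# The matrix (C - K) - |A|L from the example in Section 4:
M = [[5.0, -1.2, 0.0], [-2.0, 4.29, -1.15], [-4.75, 0.0, 1.9]]
print(is_nonsingular_m_matrix(M))  # True
```

For a nonsingular M-matrix $M$, a positive vector $\xi$ of the kind used in the proof of Theorem 1 can then be obtained, for example, by solving $M^{T}\xi = (1, \cdots, 1)^{T}$, since the inverse of a nonsingular M-matrix is entrywise nonnegative.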


The dynamic behavior of a continuous fractional-order cellular neural network can be described by the following system:

$$D^\alpha x\_i(t) = -c\_i x\_i(t) + \sum\_{j=1}^n a\_{ij} f\_j(x\_j(t)) + I\_i \tag{13}$$

which can also be written in the following compact form:

$$D^{\alpha}x(t) \;=\;-Cx(t) + Af(x(t)) + I\tag{14}$$

where $i \in N = \{1, 2, \cdots, n\}$, $t \ge 0$, $0 < \alpha < 1$, $n$ is the number of units in the neural network, $x(t) = (x\_1(t), \cdots, x\_n(t))^{T} \in R^{n}$ corresponds to the state vector at time $t$, $f(x(t)) = (f\_1(x\_1(t)), \cdots, f\_n(x\_n(t)))^{T}$ denotes the activation functions of the neurons and $C = \mathrm{diag}(c\_1, \cdots, c\_n)$, where $c\_i$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs. The entry $a\_{ij}$ of the weight matrix, $A = (a\_{ij})\_{n \times n}$, is referred to as the connection strength of the $j$th neuron to the $i$th neuron; $I = (I\_1, I\_2, \cdots, I\_n)^{T}$ is an external bias vector.

Here, in order to obtain the main results, the following assumption is presented firstly.

A1. The neuron activation functions, $f\_j$, are Lipschitz continuous; that is, there exist positive constants, $L\_j$ $(j = 1, 2, \cdots, n)$, such that:

$$|f\_j(u\_j) - f\_j(v\_j)| \le L\_j |u\_j - v\_j|, \quad \forall u\_j, v\_j \in R \tag{15}$$

#### 3. Main Results

In this section, a sufficient condition for synchronization of fractional-order neural networks is derived.

Based on the drive-response concept, we refer to system Equation (13) as the drive cellular neural network and consider a response network characterized as follows:

$$D^\alpha y\_i(t) = -c\_i y\_i(t) + \sum\_{j=1}^n a\_{ij} f\_j(y\_j(t)) + I\_i + u\_i(t) \tag{16}$$

or, equivalently:

$$D^{\alpha}y(t) = -Cy(t) + Af(y(t)) + I + u(t) \tag{17}$$

where $y(t) = (y\_1(t), \cdots, y\_n(t))^{T} \in R^{n}$ is the state vector of the slave system, $C$, $A$ and $f(\cdot)$ are the same as in Equation (13) and $u(t) = (u\_1(t), \cdots, u\_n(t))^{T}$ is the external control input to be designed later.

Defining the synchronization error signal as $e\_i(t) = y\_i(t) - x\_i(t)$, the error dynamics between the master system Equation (14) and the slave system Equation (17) can be expressed by:

$$D^{\alpha}e(t) = -Ce(t) + A[f(y(t)) - f(x(t))] + u(t) \tag{18}$$

where $e(t) = (e\_1(t), \cdots, e\_n(t))^{T}$; therefore, synchronization between master system Equation (13) and slave system Equation (16) is equivalent to the asymptotic stability of error system Equation (18) under a suitable control law, $u(t)$. To this end, the external control input, $u(t)$, is defined as $u(t) = Ke(t)$, where $K = \mathrm{diag}(k\_1, \cdots, k\_n)$ is the controller gain matrix. Then, error system Equation (18) can be rewritten as:

$$D^{\alpha}e\_i(t) = -(c\_i - k\_i)e\_i(t) + \sum\_{j=1}^{n} a\_{ij}(f\_j(y\_j(t)) - f\_j(x\_j(t)))\tag{19}$$

or can be described by the following compact form:

$$\begin{array}{rcl}D^\alpha e(t) &=& -(C-K)e(t) + A(f(y(t)) - f(x(t)))\end{array} \tag{20}$$

Theorem 1. For the master-slave fractional-order chaotic neural networks Equations (14) and (17) satisfying Assumption A1, if the controller gain matrix, $K$, is such that $(C - K) - |A|L$ is a nonsingular M-matrix, where $L = \mathrm{diag}(L\_1, \cdots, L\_n)$, then synchronization between systems Equations (14) and (17) is achieved.

Proof. If $e\_i(t) = 0$, then $D^{\alpha}|e\_i(t)| = 0$. If $e\_i(t) > 0$, then:

$$D^{\alpha}|e\_{i}(t)| = \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{\left|e\_{i}(s)\right|^{\prime}}{(t-s)^{\alpha}}ds = \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{e\_{i}^{\prime}(s)}{(t-s)^{\alpha}}ds = D^{\alpha}e\_{i}(t) \tag{21}$$

Similarly, if ei(t) < 0, then:

$$D^{\alpha}|e\_{i}(t)| = \frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{\left|e\_{i}(s)\right|^{\prime}}{(t-s)^{\alpha}}ds = -\frac{1}{\Gamma(1-\alpha)} \int\_{0}^{t} \frac{e\_{i}^{\prime}(s)}{(t-s)^{\alpha}}ds = -D^{\alpha}e\_{i}(t) \tag{22}$$

Therefore, it follows that:

$$D^{\alpha}|e\_i(t)| = \mathrm{sgn}(e\_i(t))D^{\alpha}e\_i(t) \tag{23}$$

Since $(C - K) - |A|L$ is a nonsingular M-matrix, it follows from Lemma 3 that there exists a set of positive constants, $\xi\_i$, such that:

$$-(c\_i - k\_i)\xi\_i + \sum\_{j=1}^n \xi\_j |a\_{ji}| L\_i < 0, i \in N \tag{24}$$

Define functions:

$$F\_i(\theta) = -(c\_i - k\_i - \theta)\xi\_i + \sum\_{j=1}^n \xi\_j |a\_{ji}| L\_i, i \in N \tag{25}$$

Obviously:

$$F\_i(0) = -(c\_i - k\_i)\xi\_i + \sum\_{j=1}^n \xi\_j |a\_{ji}| L\_i < 0, i \in N \tag{26}$$

Therefore, there exists a constant, λ > 0, such that:

$$-(c\_i - k\_i - \lambda)\xi\_i + \sum\_{j=1}^n \xi\_j |a\_{ji}| L\_i \le 0, i \in N \tag{27}$$

Consider an auxiliary function defined by $V(t) = \sum\_{i=1}^{n} \xi\_i|e\_i(t)|$, where the $\xi\_i$ $(i \in N)$ are chosen as in Equation (27). The Caputo derivative of $V(t)$ along the solutions of system Equation (19) is:

$$\begin{split} D^{\alpha}V(t) &= \sum\_{i=1}^{n} \xi\_{i} D^{\alpha} |e\_{i}(t)| \\ &= \sum\_{i=1}^{n} \xi\_{i} \mathrm{sgn}(e\_{i}(t)) \{- (c\_{i} - k\_{i}) e\_{i}(t) + \sum\_{j=1}^{n} a\_{ij} (f\_{j}(y\_{j}(t)) - f\_{j}(x\_{j}(t))) \} \\ &\leq \sum\_{i=1}^{n} \xi\_{i} \{- (c\_{i} - k\_{i}) |e\_{i}(t)| + \sum\_{j=1}^{n} |a\_{ij}| L\_{j} |e\_{j}(t)| \} \\ &= \sum\_{i=1}^{n} \{- \xi\_{i} (c\_{i} - k\_{i}) + \sum\_{j=1}^{n} \xi\_{j} |a\_{ji}| L\_{i} \} |e\_{i}(t)| \\ &\leq -\lambda V(t) \end{split} \tag{28}$$

One can see that:

$$V(t\_0) = \sum\_{i=1}^{n} \xi\_i |e\_i(t\_0)| \le \max\_{1 \le i \le n} \{\xi\_i\} ||e(t\_0)||\tag{29}$$

$$V(t) = \sum\_{i=1}^{n} \xi\_i |e\_i(t)| \ge \min\_{1 \le i \le n} \{\xi\_i\} ||e(t)||\tag{30}$$

Based on Lemma 1, one obtains:

$$\min\_{1 \le i \le n} \{ \xi\_i \} ||e(t)|| \le \max\_{1 \le i \le n} \{ \xi\_i \} ||e(t\_0)|| E\_\alpha(-\lambda(t - t\_0)^\alpha) \tag{31}$$

That is:

$$||e(t)|| \le \frac{\max\_{1 \le i \le n} \{\xi\_i\}}{\min\_{1 \le i \le n} \{\xi\_i\}} ||e(t\_0)|| E\_\alpha(-\lambda(t - t\_0)^\alpha) \tag{32}$$

Let $z = -\lambda(t - t\_0)^{\alpha}$ in Lemma 2; then, $|\arg(z)| = \pi$, and it follows from Lemma 2 that there exists a real constant, $C$, such that:

$$||e(t)|| \le \frac{\max\_{1 \le i \le n} \{\xi\_i\}}{\min\_{1 \le i \le n} \{\xi\_i\}} ||e(t\_0)|| \frac{C}{1 + |\lambda(t - t\_0)^\alpha|} \tag{33}$$

which implies that ||e(t)|| converges asymptotically to zero as t tends to infinity, namely, the fractional-order chaotic neural network Equation (14) is globally synchronized with Equation (17). 

Remark 1. To date, with the help of the traditional Lyapunov direct method, many results on the synchronization of integer-order chaotic neural networks have been obtained, but that method and those results are not applicable to fractional-order chaotic neural networks.

Remark 2. Chaos and synchronization of fractional-order neural networks were discussed in [19–22], but only through numerical simulations. Here, a theoretical proof is provided.

Remark 3. α-synchronization for fractional-order neural networks was considered in [18]; unfortunately, the results obtained there are incorrect [24].

#### 4. Numerical Example

An illustrative example is given to demonstrate the validity of the proposed controller. Consider a fractional-order chaotic Hopfield neural network with three neurons as follows [25]:

$$D^{\alpha}x(t) = -Cx(t) + Af(x(t))\tag{34}$$

where $x(t) = (x\_1(t), x\_2(t), x\_3(t))^{T}$, $C = \mathrm{diag}(1, 1, 1)$, $f(x(t)) = (\tanh(x\_1(t)), \tanh(x\_2(t)), \tanh(x\_3(t)))^{T}$ and

$$A = \begin{bmatrix} 2 & -1.2 & 0 \\ 2 & 1.71 & 1.15 \\ -4.75 & 0 & 1.1 \end{bmatrix}$$

The system satisfies Assumption A1 with $L\_1 = L\_2 = L\_3 = 1$. As shown in Figure 1, the fractional-order Hopfield neural network exhibits chaotic behavior when $\alpha = 0.95$.

Figure 1. Chaotic behaviors of fractional-order Hopfield neural network Equation (34) with fractional-order, α = 0.95.

The controlled response fractional-order Hopfield neural network is designed as follows:

$$D^{\alpha}y(t) = -Cy(t) + Af(y(t)) + u(t) \tag{35}$$

The controller gain matrix is chosen as $K = \mathrm{diag}(-6, -5, -2)$, and it can be easily verified that

$$(C - K) - |A|L = \begin{bmatrix} 5 & -1.2 & 0 \\ -2 & 4.29 & -1.15 \\ -4.75 & 0 & 1.9 \end{bmatrix}$$

is a nonsingular M-matrix. According to Theorem 1, synchronization between Equations (34) and (35) is achieved. In the numerical simulations, the initial states of the drive and response systems are taken as $x(0) = (0.1, 0.4, 0.2)^{T}$ and $y(0) = (0.8, 0.1, 0.7)^{T}$, respectively. Figure 2 shows the state synchronization trajectories of the drive and response systems; the synchronization error response is depicted in Figure 3.

Figure 2. State synchronization trajectories of drive system Equation (34) and response system Equation (35).

Figure 3. Synchronization error time response of drive system Equation (34) and response system Equation (35).
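The simulation can be reproduced qualitatively with a short explicit Grünwald-Letnikov scheme. The sketch below is not the authors' code: the step size `h = 0.005` and horizon `T = 2` are illustrative choices, and the initial condition is handled in the common GL convention, which is adequate for a qualitative illustration. It integrates the drive system Equation (34) and the controlled response system Equation (35) with $u(t) = Ke(t)$ and reports the 1-norm synchronization error.

```python
import math

def simulate_sync(alpha=0.95, h=0.005, T=2.0):
    """Explicit Grunwald-Letnikov simulation of the drive network (34) and
    the controlled response network (35) with u(t) = K e(t).
    Returns the 1-norm synchronization error at t = 0 and t = T."""
    A = [[2.0, -1.2, 0.0], [2.0, 1.71, 1.15], [-4.75, 0.0, 1.1]]
    Kgain = [-6.0, -5.0, -2.0]   # K = diag(-6, -5, -2)
    n = 3

    def f_net(v):
        # right-hand side -Cv + Af(v) with C = I and tanh activations
        return [-v[i] + sum(A[i][j] * math.tanh(v[j]) for j in range(n))
                for i in range(n)]

    steps = round(T / h)
    # GL weights c_j = (-1)^j * binom(alpha, j), via the standard recurrence
    c = [1.0]
    for j in range(1, steps + 1):
        c.append(c[-1] * (1.0 - (1.0 + alpha) / j))

    xs = [[0.1, 0.4, 0.2]]   # drive initial state
    ys = [[0.8, 0.1, 0.7]]   # response initial state
    for k in range(1, steps + 1):
        x, y = xs[-1], ys[-1]
        fx = f_net(x)
        fy = f_net(y)
        fy = [fy[i] + Kgain[i] * (y[i] - x[i]) for i in range(n)]  # add u = Ke
        # explicit GL step: v_k = h^alpha f(v_{k-1}) - sum_{j=1..k} c_j v_{k-j}
        xs.append([h**alpha * fx[i]
                   - sum(c[j] * xs[k - j][i] for j in range(1, k + 1))
                   for i in range(n)])
        ys.append([h**alpha * fy[i]
                   - sum(c[j] * ys[k - j][i] for j in range(1, k + 1))
                   for i in range(n)])

    def err(a, b):
        return sum(abs(a[i] - b[i]) for i in range(n))

    return err(xs[0], ys[0]), err(xs[-1], ys[-1])

e0, eT = simulate_sync()
print(e0, eT)  # the synchronization error shrinks markedly over [0, T]
```

Consistent with Theorem 1 and Figure 3, the error decays toward zero as the response network is driven to follow the chaotic drive trajectory.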


#### 5. Conclusions

In this paper, the synchronization problem has been studied theoretically for a class of fractional-order chaotic neural networks, which is more difficult and challenging than for integer-order chaotic neural networks. Based on the Mittag-Leffler function and linear feedback control, a sufficient condition in the form of an M-matrix has been derived. Finally, a simulation example has been given to illustrate the effectiveness of the developed approach.

#### Acknowledgements

The authors thank the referees and the editor for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (No.60974090), the Fundamental Research Funds for the Central Universities (No. CDJXS12170001), the Natural Science Foundation of Anhui Province (No. 11040606M12), the Ph.D. Candidate Academic Foundation of Ministry of Education of China, the Natural Science Foundation of Anhui Education Bureau (KJ2013B015) and the 211 project of Anhui University (No. KJJQ1102).

#### Conflict of Interest

The authors declare no conflict of interest.

#### References


