Article

A Delay-Dividing Approach to Robust Stability of Uncertain Stochastic Complex-Valued Hopfield Delayed Neural Networks

by Pharunyou Chanthorn 1, Grienggrai Rajchakit 2,*, Usa Humphries 3, Pramet Kaewmesri 3, Ramalingam Sriraman 4 and Chee Peng Lim 5

1 Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Department of Mathematics, Faculty of Science, Maejo University, Chiang Mai 50290, Thailand
3 Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang mod, Thung Khru 10140, Thailand
4 Department of Science and Humanities, Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Avadi, Tamil Nadu 600062, India
5 Institute for Intelligent Systems Research and Innovation, Deakin University, Waurn Ponds, VIC 3216, Australia
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(5), 683; https://doi.org/10.3390/sym12050683
Submission received: 18 February 2020 / Revised: 12 March 2020 / Accepted: 8 April 2020 / Published: 25 April 2020
(This article belongs to the Special Issue Symmetry in Nonlinear Studies)

Abstract

In science and engineering applications, most systems involve uncertainties, because uncertain parameters are unavoidable when modeling physical systems. In view of this, it is important to investigate dynamical systems with uncertain parameters. In the present study, a delay-dividing approach is devised to study the robust stability of uncertain neural networks. Specifically, the uncertain stochastic complex-valued Hopfield neural network (USCVHNN) with time delay is investigated, in which the uncertainties of the system parameters are norm-bounded. Based on the Lyapunov approach and the homeomorphism principle, sufficient conditions for the global asymptotic stability of the USCVHNN are derived. To perform this derivation, we divide the complex-valued neural network (CVNN) into two parts, namely real and imaginary, using the delay-dividing approach. All the criteria are expressed in terms of linear matrix inequalities (LMIs). Two numerical examples ascertain the usefulness of the proposed delay-dividing approach for the USCVHNN model.

1. Introduction

Over the past two decades, the dynamic analysis of different types of neural networks (NNs), including cellular NNs, recurrent NNs, static NNs, generalized NNs, bi-directional associative memory NNs, memristor NNs, Cohen-Grossberg NNs, and fractional-order NNs, has received remarkable attention due to their successful applications [1,2,3,4,5,6,7,8]. In this domain, the Hopfield neural network (HNN) has been considered an attractive model due to its robust mathematical capability [9]. Indeed, HNN-related models have received significant research attention in both mathematical and practical analyses [10,11,12,13].
Time delays naturally occur in practical systems. They are normally viewed as a main source of chaos, leading to poor performance as well as system instability. As a result, the study of NNs with time delays is important [11,12,13,14]. On the other hand, as mentioned in References [15,16,17], stochastic inputs affect the stability of NNs. As such, stochastic effects must be taken into consideration in the stability analysis of NNs. Accordingly, several stability methods for HNNs with stochastic effects have been published recently [10,11,12,13,18,19,20,21,22]. As an example, Wan and Sun in Reference [19] discussed the mean square exponential stability pertaining to an NN, as in (1)
$$d\alpha_i(t)=\Big[-c_i\alpha_i(t)+\sum_{j=1}^{n}a_{ij}\theta_j(\alpha_j(t))+\sum_{j=1}^{n}b_{ij}\theta_j(\alpha_j(t-\nu_j))\Big]dt+\sum_{j=1}^{n}\sigma_{ij}(\alpha_j(t))\,d\omega_j(t),\quad i=\overline{1,n}.$$
Later, in Reference [20], the authors generalized the model proposed in Reference [19] and analysed the exponential stability of NNs, as in (2)
$$d\alpha_i(t)=\Big[-c_i\alpha_i(t)+\sum_{j=1}^{n}a_{ij}\theta_j(\alpha_j(t))+\sum_{j=1}^{n}b_{ij}\theta_j(\alpha_j(t-\nu_j(t)))\Big]dt+\sum_{j=1}^{n}\sigma_{ij}\big(t,\alpha_j(t),\alpha_j(t-\nu_j(t))\big)\,d\omega_j(t),\quad i=\overline{1,n},$$
where $\alpha(t)=[\alpha_1(t),\ldots,\alpha_n(t)]^T\in\mathbb{R}^n$ is the state vector and $\omega(t)=[\omega_1(t),\ldots,\omega_n(t)]^T$ is the Brownian motion. Note that $c_i>0$ denotes the self-feedback weight, while $a_{ij}$ and $b_{ij}$ denote the interconnection matrices that contain the neuron weight coefficients. In addition, $\nu_j>0$ denotes the transmission delay; $\theta_j(\cdot)$ denotes the corresponding output of the $j$th unit; and $\sigma_{ij}$ denotes the density of stochastic effects.
On the other hand, CVNNs have been commonly used in various areas, such as image processing, computer vision, electromagnetic waves, speech synthesis, sonic waves, quantum devices, and so on [23,24,25]. Recently, due to their good applicability, CVNNs have attracted tremendous interest from the research community [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. In general, a CVNN model comprises complex-valued variables, as compared with those of real-valued NNs; these include the inputs, states, connection weights, as well as activation functions. Therefore, many investigations on CVNNs have been conducted. Recent studies mainly focus on their complex behaviors, including global asymptotic stability [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], global exponential stability [28,29], and state estimation of CVNNs [30,31]. As an example, in Reference [13], several sufficient conditions are derived by separating the real and imaginary parts of a CVNN; the results confirm the global asymptotic stability of the considered CVNN. Similarly, some other stability conditions have been defined for CVNNs [32,33,34,35,36,37,38,39,40,41]. However, it should be noted that the robust stability issue with respect to uncertain stochastic complex-valued Hopfield neural networks (USCVHNNs) with time delays is yet to be fully investigated. This forms the motivation of the present research.
This article focuses on the robust stability issue of the USCVHNN model. Firstly, we formulate a general form of the system model by considering both aspects of parameter uncertainties and stochastic effects. Based on the studies in References [42,43], we divide the delay interval into N equivalent sub-intervals. Then, we exploit the homeomorphism principle along with the Lyapunov function and other analytical methods in our analysis. Specifically, we derive new sufficient conditions with respect to the robust stability issue of a USCVHNN model in terms of linear matrix inequalities (LMIs). We obtain the feasible solutions by using the MATLAB software package. Finally, the obtained theoretical results are validated by two examples. The remaining part of this paper is organized as follows. In Section 2, we formally define the proposed model. In Section 3, we explain the new stability criterion. In Section 4, we present the numerical examples and the associated simulation results. Concluding remarks are given in Section 5.
Notations: in this article, the Euclidean $n$-space and the set of $n\times n$ real matrices are denoted by $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$, respectively. Note that $i$ represents the imaginary unit, that is, $i=\sqrt{-1}$. In addition, $\mathbb{C}^n$ and $\mathbb{C}^{n\times n}$ denote the $n$-dimensional complex vectors and the $n\times n$ complex matrices, respectively. The complex conjugate transpose and the matrix transpose are denoted by the superscripts $*$ and $T$, respectively. Besides that, a matrix $P>0$ ($P<0$) indicates that $P$ is a positive (negative) definite matrix. On the other hand, $\mathrm{diag}\{\cdot\}$ denotes a block diagonal matrix, $I$ denotes an identity matrix, and $\star$ indicates a symmetric term in a matrix. Finally, $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation, $\overline{1,n}=1,2,\ldots,n$, and $(\Omega,\mathcal{F},\mathcal{P})$ indicates a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge0}$.

2. Problem Statement and Mathematical Preliminaries

Motivated by Wang et al. [13], we consider the following CVHNN model with a time-varying delay:
$$\dot\sigma_j(t)=-d_j\sigma_j(t)+\sum_{k=1}^{n}a_{jk}\theta_k(\sigma_k(t-\nu(t)))+\kappa_j,\quad j=\overline{1,n},\qquad \sigma_j(t)=\phi_j(t),\quad t\in[-\nu,0],$$
where $\sigma_j(t)$ denotes the state; $d_j>0$ and $a_{jk}$ indicate the self-feedback connection weight and the interconnection weights representing the neuron weight coefficients, respectively. In addition, $\theta_k(\cdot):\mathbb{C}\to\mathbb{C}$ is the neuron activation function; $\kappa_j$ indicates the external input, while $\phi_j(t)$ is the initial condition. Note that $\nu(t)$ is the time-varying delay, which satisfies the following conditions
$$0\le\nu(t)\le\nu,\qquad \dot\nu(t)\le\mu,$$
where $\nu$ and $\mu$ are known real constants.
The model in (3) can be written in the vector form as in (5)
$$\dot\sigma(t)=-M\sigma(t)+N\theta(\sigma(t-\nu(t)))+\kappa,\qquad \sigma(t)=\phi(t),\quad t\in[-\nu,0],$$
where
$$\sigma(t)=[\sigma_1(t),\sigma_2(t),\ldots,\sigma_n(t)]^T\in\mathbb{C}^n,\qquad \theta(\sigma(\cdot))=[\theta_1(\sigma_1(\cdot)),\theta_2(\sigma_2(\cdot)),\ldots,\theta_n(\sigma_n(\cdot))]^T\in\mathbb{C}^n,$$
$$\kappa=[\kappa_1,\kappa_2,\ldots,\kappa_n]^T\in\mathbb{C}^n,\qquad M=\mathrm{diag}\{d_1,\ldots,d_n\}\in\mathbb{R}^{n\times n},\qquad N=(a_{jk})_{n\times n}\in\mathbb{C}^{n\times n}.$$
To derive a new stability criterion for the CVNN model in (5), we use the delay-dividing method: the delay $\nu(t)$ is divided into several intervals using an integer $N$, with the length of each interval denoted by $\tau(t)$, as follows
$$\tau(t)=\frac{\nu(t)}{N},$$
where $\tau(t)$ denotes a portion of the time-varying delay $\nu(t)$, which satisfies
$$0\le\tau(t)\le\tau,\qquad \dot\tau(t)\le\eta,$$
where $\tau=\frac{\nu}{N}$ and $\eta=\frac{\mu}{N}$.
Assumption 1.
The activation function $\theta_j(\cdot)$, $j=\overline{1,n}$, satisfies the Lipschitz condition for all $\sigma_1,\sigma_2\in\mathbb{C}$, that is,
$$|\theta_j(\sigma_1)-\theta_j(\sigma_2)|\le b_j|\sigma_1-\sigma_2|,\quad j=\overline{1,n},$$
where $b_j$ is a constant.
From Assumption 1, one can obtain
$$(\theta(\sigma_1)-\theta(\sigma_2))^*(\theta(\sigma_1)-\theta(\sigma_2))\le(\sigma_1-\sigma_2)^*B^TB(\sigma_1-\sigma_2),$$
where $B=\mathrm{diag}\{b_1,\ldots,b_n\}$.
Remark 1.
Assumption 1 generalizes the Lipschitz condition for real-valued functions to $\mathbb{C}$. In addition, it remains valid when the activation function is divided into its real and imaginary parts.
Note that, in a realistic analysis, the uncertainties associated with the neuron weight coefficients are unavoidable. Besides that, the network model is susceptible to stochastic effects. As a result, it is important to take parameter uncertainties and stochastic disturbances into account when analysing the stability of NN models. Therefore, the model in (5) can be stated as
$$d\sigma(t)=\big[-(M+\Delta M(t))\sigma(t)+(N+\Delta N(t))\theta(\sigma(t-\nu(t)))+\kappa\big]dt+\big[C\sigma(t)+S\sigma(t-\nu(t))\big]d\omega(t),\qquad \sigma(t)=\phi(t),\quad t\in[-\nu,0],$$
where $C,S\in\mathbb{R}^{n\times n}$ are known constant matrices, and $\omega(t)$ is the $n$-dimensional Brownian motion defined on $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathcal{P})$.
It is assumed that the parameter uncertainties $\Delta M(t)$ and $\Delta N(t)=\Delta N_R(t)+i\,\Delta N_I(t)$ in (10) satisfy
$$\big[\Delta M(t)\ \ \Delta N_R(t)\ \ \Delta N_I(t)\big]=U\,W(t)\,\big[V_1\ \ V_2\ \ V_3\big],$$
where $U$ and $V_1,V_2,V_3$ are known real constant matrices, while $W(t)$ is the time-varying uncertain matrix that satisfies
$$W^T(t)W(t)\le I.$$
We define $\sigma(t)=\alpha(t)+i\beta(t)$, $N=N_R+iN_I$, $\theta(\sigma(t-\nu(t)))=\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))+i\,\theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))$, and $\kappa=\kappa_R+i\kappa_I$, where $i$ is the imaginary unit.
Now, the CVNN model in (10) can be separated into both the real and imaginary parts, that is,
$$\begin{aligned}
d\alpha(t)&=\big[-(M+\Delta M(t))\alpha(t)+(N_R+\Delta N_R(t))\,\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))\\
&\qquad-(N_I+\Delta N_I(t))\,\theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))+\kappa_R\big]dt+\big[C\alpha(t)+S\alpha(t-\nu(t))\big]d\omega(t),\\
d\beta(t)&=\big[-(M+\Delta M(t))\beta(t)+(N_I+\Delta N_I(t))\,\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))\\
&\qquad+(N_R+\Delta N_R(t))\,\theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))+\kappa_I\big]dt+\big[C\beta(t)+S\beta(t-\nu(t))\big]d\omega(t).
\end{aligned}$$
From (13), the system can be written in an equivalent form of
$$\begin{aligned}
\begin{bmatrix}d\alpha(t)\\ d\beta(t)\end{bmatrix}
&=\Bigg[-\begin{bmatrix}M+\Delta M(t)&0\\0&M+\Delta M(t)\end{bmatrix}\begin{bmatrix}\alpha(t)\\ \beta(t)\end{bmatrix}\\
&\qquad+\begin{bmatrix}N_R+\Delta N_R(t)&-(N_I+\Delta N_I(t))\\ N_I+\Delta N_I(t)&N_R+\Delta N_R(t)\end{bmatrix}\begin{bmatrix}\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))\\ \theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))\end{bmatrix}+\begin{bmatrix}\kappa_R\\ \kappa_I\end{bmatrix}\Bigg]dt\\
&\qquad+\Bigg(\begin{bmatrix}C&0\\0&C\end{bmatrix}\begin{bmatrix}\alpha(t)\\ \beta(t)\end{bmatrix}+\begin{bmatrix}S&0\\0&S\end{bmatrix}\begin{bmatrix}\alpha(t-\nu(t))\\ \beta(t-\nu(t))\end{bmatrix}\Bigg)d\omega(t),
\end{aligned}$$
which is equivalent to
$$\begin{aligned}
\begin{bmatrix}d\alpha(t)\\ d\beta(t)\end{bmatrix}
&=\Bigg[-\Bigg(\begin{bmatrix}M&0\\0&M\end{bmatrix}+\begin{bmatrix}\Delta M(t)&0\\0&\Delta M(t)\end{bmatrix}\Bigg)\begin{bmatrix}\alpha(t)\\ \beta(t)\end{bmatrix}\\
&\qquad+\Bigg(\begin{bmatrix}N_R&-N_I\\ N_I&N_R\end{bmatrix}+\begin{bmatrix}\Delta N_R(t)&-\Delta N_I(t)\\ \Delta N_I(t)&\Delta N_R(t)\end{bmatrix}\Bigg)\begin{bmatrix}\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))\\ \theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))\end{bmatrix}+\begin{bmatrix}\kappa_R\\ \kappa_I\end{bmatrix}\Bigg]dt\\
&\qquad+\Bigg(\begin{bmatrix}C&0\\0&C\end{bmatrix}\begin{bmatrix}\alpha(t)\\ \beta(t)\end{bmatrix}+\begin{bmatrix}S&0\\0&S\end{bmatrix}\begin{bmatrix}\alpha(t-\nu(t))\\ \beta(t-\nu(t))\end{bmatrix}\Bigg)d\omega(t).
\end{aligned}$$
Let (writing $w(t)$ for the stacked real state, to distinguish it from the Brownian motion $\omega(t)$)
$$w(t)=\begin{bmatrix}\alpha(t)\\ \beta(t)\end{bmatrix},\qquad w(t-\nu(t))=\begin{bmatrix}\alpha(t-\nu(t))\\ \beta(t-\nu(t))\end{bmatrix},\qquad \bar\theta(w(t-\nu(t)))=\begin{bmatrix}\theta_R(\alpha(t-\nu(t)),\beta(t-\nu(t)))\\ \theta_I(\alpha(t-\nu(t)),\beta(t-\nu(t)))\end{bmatrix},$$
$$\bar M=\begin{bmatrix}M&0\\0&M\end{bmatrix},\quad \bar N=\begin{bmatrix}N_R&-N_I\\ N_I&N_R\end{bmatrix},\quad \bar\kappa=\begin{bmatrix}\kappa_R\\ \kappa_I\end{bmatrix},\quad \bar C=\begin{bmatrix}C&0\\0&C\end{bmatrix},\quad \bar S=\begin{bmatrix}S&0\\0&S\end{bmatrix},$$
$$\Delta\bar M=\begin{bmatrix}\Delta M(t)&0\\0&\Delta M(t)\end{bmatrix},\qquad \Delta\bar N=\begin{bmatrix}\Delta N_R(t)&-\Delta N_I(t)\\ \Delta N_I(t)&\Delta N_R(t)\end{bmatrix}.$$
Then, the model in (15) can be equivalently re-written as
$$dw(t)=\big[-(\bar M+\Delta\bar M)w(t)+(\bar N+\Delta\bar N)\bar\theta(w(t-\nu(t)))+\bar\kappa\big]dt+\big[\bar Cw(t)+\bar Sw(t-\nu(t))\big]d\omega(t).$$
From (11) and (12), the parameter uncertainties Δ M ¯ , Δ N ¯ , satisfy the following:
$$\big[\Delta\bar M\ \ \Delta\bar N\big]=\bar U\,\bar W(t)\,\big[\bar V_1\ \ \bar V_2\big],$$
where
$$\bar U=\begin{bmatrix}U&0\\0&U\end{bmatrix},\qquad \bar W(t)=\begin{bmatrix}W(t)&0\\0&W(t)\end{bmatrix},\qquad \bar V_1=\begin{bmatrix}V_1&0\\0&V_1\end{bmatrix},\qquad \bar V_2=\begin{bmatrix}V_2&-V_3\\ V_3&V_2\end{bmatrix}.$$
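The augmentation above is purely mechanical, so it is easy to check numerically. The following is a minimal Python/NumPy sketch (our own illustration, not the authors' code; the helper names are ours) that builds the realified matrices and verifies that $\bar N$ reproduces complex multiplication on the stacked state $[\alpha;\beta]$:

```python
import numpy as np

def realify_diag(M):
    """Block-diagonal augmentation diag(M, M), as used for M, C and S."""
    Z = np.zeros_like(M)
    return np.block([[M, Z], [Z, M]])

def realify_complex(N):
    """[[N_R, -N_I], [N_I, N_R]] augmentation for a complex matrix N."""
    return np.block([[N.real, -N.imag], [N.imag, N.real]])

rng = np.random.default_rng(0)
N = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)

Nbar = realify_complex(N)
w = np.concatenate([z.real, z.imag])   # stacked state [alpha; beta]
v = N @ z                              # complex product
# Nbar acting on [alpha; beta] must equal [Re(Nz); Im(Nz)]
assert np.allclose(Nbar @ w, np.concatenate([v.real, v.imag]))
print("realification is consistent")
```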
From (9) one can obtain
$$(\bar\theta(w_1)-\bar\theta(w_2))^T(\bar\theta(w_1)-\bar\theta(w_2))\le(w_1-w_2)^T\bar B(w_1-w_2),$$
where $\bar B=\begin{bmatrix}B^TB&0\\0&B^TB\end{bmatrix}$.
The model in (16) has the following initial condition:
$$w(t)=\bar\phi(t),\quad t\in[-\nu,0],$$
where $\bar\phi(t)=[\phi_R^T(t),\ \phi_I^T(t)]^T$.
Remark 2.
It should be noted that, if we let $\Delta\bar M=\Delta\bar N=\bar C=\bar S\equiv0$ and $\nu(t)\equiv\nu$, where $\nu$ is a constant, the model in (16) reduces to the following CVNN:
$$dw(t)=\big[-\bar Mw(t)+\bar N\bar\theta(w(t-\nu))+\bar\kappa\big]dt.$$
Lemma 1.
[41] Let $H(\sigma):\mathbb{R}^n\to\mathbb{R}^n$ be a continuous map that satisfies the following conditions:
(i)
$H(\sigma)$ is injective on $\mathbb{R}^n$;
(ii)
$\|H(\sigma)\|\to\infty$ as $\|\sigma\|\to\infty$.
Then $H(\sigma)$ is a homeomorphism on $\mathbb{R}^n$.
Lemma 2.
[41] Given any vectors $M,N\in\mathbb{R}^n$, a positive-definite matrix $P\in\mathbb{R}^{n\times n}$, and a scalar $\epsilon>0$, the following inequality always holds:
$$M^TPN+N^TPM\le\epsilon M^TPM+\frac{1}{\epsilon}N^TPN.$$
Lemma 3.
[44] Let $\Xi=\Xi^T$, and let $J_1$ and $J_2$ be real matrices, with $W(t)$ satisfying $W^T(t)W(t)\le I$. Then $\Xi+(J_1W(t)J_2)+(J_1W(t)J_2)^T<0$ if and only if there exists a scalar $\epsilon>0$ such that $\Xi+\epsilon^{-1}J_1J_1^T+\epsilon J_2^TJ_2<0$, or equivalently
$$\begin{bmatrix}\Xi&J_1&\epsilon J_2^T\\ \star&-\epsilon I&0\\ \star&\star&-\epsilon I\end{bmatrix}<0.$$
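Lemma 3 is the standard bound for norm-bounded uncertainty, and its conclusion can be sanity-checked numerically: if the premise $\Xi+\epsilon^{-1}J_1J_1^T+\epsilon J_2^TJ_2<0$ holds, then $\Xi+J_1W(t)J_2+(J_1W(t)J_2)^T<0$ for every admissible $W(t)$. A short sketch (our illustration only; the matrices and sizes are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 1.0
Xi = -np.eye(n)                        # a negative-definite nominal part
J1 = 0.1 * rng.standard_normal((n, n))
J2 = 0.1 * rng.standard_normal((n, n))

premise = Xi + J1 @ J1.T / eps + eps * J2.T @ J2
assert np.linalg.eigvalsh(premise).max() < 0   # Lemma 3 premise holds

for _ in range(1000):
    W = rng.standard_normal((n, n))
    W /= max(1.0, np.linalg.norm(W, 2))        # enforce W^T W <= I
    pert = J1 @ W @ J2
    assert np.linalg.eigvalsh(Xi + pert + pert.T).max() < 0
print("robust negativity holds for all sampled W")
```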

3. An Analysis of Uniqueness and Stability

In this section, we present the sufficient conditions for the existence, uniqueness, and global asymptotic stability of the considered CVNN model.

3.1. Delay-Independent Stability Criteria

Theorem 1.
Suppose that Assumption 1 holds, so that the activation function can be divided into its real and imaginary parts. Then the model in (20) has a unique equilibrium point, which is globally asymptotically stable, if there exist a matrix $P>0$ and a scalar $\epsilon>0$ satisfying the following LMI:
$$\begin{bmatrix}-P\bar M-\bar M^TP+\epsilon\bar B&P\bar N\\ \star&-\epsilon I\end{bmatrix}<0.$$
Proof. 
A map associated with (20) is defined, that is,
$$H(w)=-\bar Mw+\bar N\bar\theta(w)+\bar\kappa.$$
We prove that the map $H(w)$ is injective by contradiction. Suppose that there exist $w_1\ne w_2$ such that $H(w_1)=H(w_2)$; it then follows that
$$-\bar M(w_1-w_2)+\bar N(\bar\theta(w_1)-\bar\theta(w_2))=0.$$
Multiplying both sides of (23) by $(w_1-w_2)^TP$ yields
$$-(w_1-w_2)^T(P\bar M)(w_1-w_2)+(w_1-w_2)^T(P\bar N)(\bar\theta(w_1)-\bar\theta(w_2))=0.$$
Taking the transpose on (24), we have
$$-(w_1-w_2)^T(\bar M^TP)(w_1-w_2)+(\bar\theta(w_1)-\bar\theta(w_2))^T(\bar N^TP)(w_1-w_2)=0.$$
Adding (24) and (25) results in
$$(w_1-w_2)^T(-P\bar M-\bar M^TP)(w_1-w_2)+(w_1-w_2)^T(P\bar N)(\bar\theta(w_1)-\bar\theta(w_2))+(\bar\theta(w_1)-\bar\theta(w_2))^T(\bar N^TP)(w_1-w_2)=0.$$
From Lemma 2 and (26), we have
$$0\le(w_1-w_2)^T(-P\bar M-\bar M^TP)(w_1-w_2)+\epsilon(\bar\theta(w_1)-\bar\theta(w_2))^T(\bar\theta(w_1)-\bar\theta(w_2))+\frac{1}{\epsilon}(w_1-w_2)^T(P\bar N\bar N^TP)(w_1-w_2).$$
Since ϵ is a positive constant, we can obtain from (18) that
$$\epsilon(\bar\theta(w_1)-\bar\theta(w_2))^T(\bar\theta(w_1)-\bar\theta(w_2))\le\epsilon(w_1-w_2)^T\bar B(w_1-w_2).$$
From (27) and (28), we have
$$0\le(w_1-w_2)^T\Big(-P\bar M-\bar M^TP+\epsilon\bar B+\frac{1}{\epsilon}P\bar N\bar N^TP\Big)(w_1-w_2).$$
If (21) holds, then by the Schur complement we have
$$-P\bar M-\bar M^TP+\epsilon\bar B+\frac{1}{\epsilon}P\bar N\bar N^TP<0.$$
However, (29) and (30) together force $w_1=w_2$, which is a contradiction. As a result, the map $H(w)$ is injective.
In addition, we prove that $\|H(w)\|\to\infty$ as $\|w\|\to\infty$. Based on (30), we have
$$-P\bar M-\bar M^TP+\epsilon\bar B+\frac{1}{\epsilon}P\bar N\bar N^TP<-\lambda I,$$
for a sufficiently small $\lambda>0$. By using the same derivation method as for (29), we have
$$2w^TP(H(w)-H(0))\le w^T\Big(-P\bar M-\bar M^TP+\epsilon\bar B+\frac{1}{\epsilon}P\bar N\bar N^TP\Big)w<-\lambda\|w\|^2.$$
One can obtain from (32) that
$$\lambda\|w\|^2\le2\|w\|\,\|P\|\,\|H(w)-H(0)\|.$$
As a result, we have $\|H(w)\|\to\infty$ as $\|w\|\to\infty$.
We can see from Lemma 1 that the map $H(w)$ is a homeomorphism on $\mathbb{R}^{2n}$. Hence, there exists a unique point $\hat w$ such that $H(\hat w)=0$, and the model in (20) has a unique equilibrium point $\hat w$.
Now, we prove the global asymptotic stability of the equilibrium point of the model in (20). Based on the transformation $\bar w=w-\hat w$, the equilibrium point of the model in (20) is shifted to the origin. As such, we have
$$\dot{\bar w}(t)=-\bar M\bar w(t)+\bar N\bar\theta(\bar w(t-\nu)),$$
where $\bar\theta(\bar w(t-\nu))=\bar\theta(\bar w(t-\nu)+\hat w)-\bar\theta(\hat w)$.
To analyse the global asymptotic stability of the equilibrium point of the model in (34), the following Lyapunov functional is utilized:
$$V(t,\bar w_t)=\bar w^T(t)P\bar w(t)+\int_{t-\nu}^{t}\bar w^T(s)Q\bar w(s)\,ds,$$
where $P>0$ and $Q>0$. Taking the time derivative of $V(t,\bar w_t)$ along the solution of the model in (34) yields
$$\dot V(t,\bar w_t)=2\bar w^T(t)P\big[-\bar M\bar w(t)+\bar N\bar\theta(\bar w(t-\nu))\big]+\bar w^T(t)Q\bar w(t)-\bar w^T(t-\nu)Q\bar w(t-\nu).$$
From (18), we have
$$0\le\epsilon\big[\bar w^T(t-\nu)\bar B\bar w(t-\nu)-\bar\theta^T(\bar w(t-\nu))\bar\theta(\bar w(t-\nu))\big],$$
for $\epsilon>0$. Then, together with (36) and (37), we have
$$\dot V(t,\bar w_t)\le\zeta^T(t)\Omega\zeta(t),$$
where
$$\zeta(t)=\big[\bar w^T(t)\ \ \bar w^T(t-\nu)\ \ \bar\theta^T(\bar w(t-\nu))\big]^T,\qquad \Omega=\begin{bmatrix}-P\bar M-\bar M^TP+Q&0&P\bar N\\ \star&-Q+\epsilon\bar B&0\\ \star&\star&-\epsilon I\end{bmatrix}.$$
It is easy to observe that $\Omega<0$ if and only if $Q>\epsilon\bar B$ and
$$\begin{bmatrix}-P\bar M-\bar M^TP+Q&P\bar N\\ \star&-\epsilon I\end{bmatrix}<0.$$
If we choose $Q>\epsilon\bar B$ sufficiently close to $\epsilon\bar B$, then $\Omega<0$ whenever condition (21) holds. This indicates that $\dot V(t,\bar w_t)<0$; therefore, the origin of the model in (34) is globally asymptotically stable, which is equivalent to the global asymptotic stability of the equilibrium point of the model in (20). □
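The paper verifies its LMIs with the MATLAB software package; purely as an illustration, condition (21) can equally be checked with any semidefinite-programming solver. Below is a hedged sketch in Python with CVXPY (the function name and the placeholder data are ours, not the paper's):

```python
import cvxpy as cp
import numpy as np

def lmi_21_feasible(Mbar, Nbar, Bbar, margin=1e-6):
    """Search for P > 0 and eps > 0 satisfying the LMI (21):
       [[-P Mbar - Mbar^T P + eps*Bbar,  P Nbar],
        [Nbar^T P,                      -eps*I ]] < 0."""
    n = Mbar.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    eps = cp.Variable(nonneg=True)
    top = -P @ Mbar - Mbar.T @ P + eps * Bbar
    lmi = cp.bmat([[top, P @ Nbar],
                   [Nbar.T @ P, -eps * np.eye(n)]])
    lmi = (lmi + lmi.T) / 2            # enforce exact symmetry
    prob = cp.Problem(cp.Minimize(0),
                      [P >> margin * np.eye(n), eps >= margin,
                       lmi << -margin * np.eye(2 * n)])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Placeholder data: strong self-feedback, weak interconnection.
print(lmi_21_feasible(4 * np.eye(4), 0.5 * np.eye(4), 0.25 * np.eye(4)))
```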

3.2. Delay-Dependent Stability Criteria

In this section, we address the delay-dependent stability criteria for the model in (16) using the delay-dividing method.
Theorem 2.
Suppose that Assumption 1 holds, so that the activation function can be divided into its real and imaginary parts. For given scalars $\nu>0$ and $\mu>0$, the model in (16) is robustly globally asymptotically stable in the mean square if there exist matrices $P>0$, $\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}>0$, $R_j>0\ (j=1,\ldots,N)$ and positive scalars $\epsilon_1,\epsilon_2,\gamma$ such that the following LMI is satisfied:
$$\Theta=\begin{bmatrix}
\Xi_1&0&0&0&\cdots&0&Q_2&P\bar N&P\bar U&\bar C^TP\\
\star&\Xi_2&0&0&\cdots&0&0&0&0&0\\
\star&\star&\Xi_3&0&\cdots&0&0&0&0&0\\
\star&\star&\star&\Xi_4&\cdots&0&0&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots\\
\star&\star&\star&\star&\cdots&\Xi_6&0&\Xi_7&0&\bar S^TP\\
\star&\star&\star&\star&\cdots&\star&\Xi_8&0&0&0\\
\star&\star&\star&\star&\cdots&\star&\star&\Xi_9&P\bar U&0\\
\star&\star&\star&\star&\cdots&\star&\star&\star&-\gamma I&0\\
\star&\star&\star&\star&\cdots&\star&\star&\star&\star&-P
\end{bmatrix}<0,$$
where $\Xi_1=-P\bar M-\bar M^TP+Q_1+R_1+\epsilon_1\bar B+\gamma\bar V_1^T\bar V_1$, $\Xi_2=-(1-\eta)R_1+(1+\eta)R_2$, $\Xi_3=-(1-2\eta)R_2+(1+2\eta)R_3$, $\Xi_4=-(1-3\eta)R_3$, $\Xi_6=-(1-\mu)Q_1-(1-\mu)R_N+\epsilon_2\bar B$, $\Xi_7=-(1-\mu)Q_2$, $\Xi_8=Q_3-\epsilon_1I$, $\Xi_9=-(1-\mu)Q_3-\epsilon_2I+\gamma\bar V_2^T\bar V_2$.
Proof. 
The Lyapunov function candidate pertaining to the model in (16) is formed as follows:
$$V(t,w_t)=\bar w^T(t)P\bar w(t)+\int_{t-\nu(t)}^{t}\begin{bmatrix}\bar w(s)\\ \bar\theta(\bar w(s))\end{bmatrix}^T\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}\begin{bmatrix}\bar w(s)\\ \bar\theta(\bar w(s))\end{bmatrix}ds+\sum_{j=1}^{N}\int_{t-j\tau(t)}^{t-(j-1)\tau(t)}\bar w^T(s)R_j\bar w(s)\,ds.$$
Based on Itô's differential rule, we can obtain the stochastic differential of $V(t,w_t)$ along the trajectories of the model in (16) as follows:
$$\begin{aligned}
dV(t,w_t)=\Big\{&2\bar w^T(t)P\big[-(\bar M+\Delta\bar M)\bar w(t)+(\bar N+\Delta\bar N)\bar\theta(\bar w(t-\nu(t)))\big]\\
&+\begin{bmatrix}\bar w(t)\\ \bar\theta(\bar w(t))\end{bmatrix}^T\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}\begin{bmatrix}\bar w(t)\\ \bar\theta(\bar w(t))\end{bmatrix}-(1-\dot\nu(t))\begin{bmatrix}\bar w(t-\nu(t))\\ \bar\theta(\bar w(t-\nu(t)))\end{bmatrix}^T\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}\begin{bmatrix}\bar w(t-\nu(t))\\ \bar\theta(\bar w(t-\nu(t)))\end{bmatrix}\\
&+\sum_{j=1}^{N}\big[(1-(j-1)\dot\tau(t))\bar w^T(t-(j-1)\tau(t))R_j\bar w(t-(j-1)\tau(t))-(1-j\dot\tau(t))\bar w^T(t-j\tau(t))R_j\bar w(t-j\tau(t))\big]\\
&+\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]^TP\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]\Big\}dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t),
\end{aligned}$$
$$\begin{aligned}
dV(t,w_t)\le\Big\{&-2\bar w^T(t)(P\bar M)\bar w(t)-2\bar w^T(t)(P\bar U\bar W(t)\bar V_1)\bar w(t)+2\bar w^T(t)(P\bar N)\bar\theta(\bar w(t-\nu(t)))\\
&+2\bar w^T(t)(P\bar U\bar W(t)\bar V_2)\bar\theta(\bar w(t-\nu(t)))+\bar w^T(t)Q_1\bar w(t)+\bar w^T(t)Q_2\bar\theta(\bar w(t))+\bar\theta^T(\bar w(t))Q_3\bar\theta(\bar w(t))\\
&-(1-\mu)\bar w^T(t-\nu(t))Q_1\bar w(t-\nu(t))-(1-\mu)\bar w^T(t-\nu(t))Q_2\bar\theta(\bar w(t-\nu(t)))\\
&-(1-\mu)\bar\theta^T(\bar w(t-\nu(t)))Q_3\bar\theta(\bar w(t-\nu(t)))\\
&+\sum_{j=1}^{N}\big[(1-(j-1)\eta)\bar w^T(t-(j-1)\tau(t))R_j\bar w(t-(j-1)\tau(t))-(1-j\eta)\bar w^T(t-j\tau(t))R_j\bar w(t-j\tau(t))\big]\\
&+\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]^TP\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]\Big\}dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t).
\end{aligned}$$
Based on Lemma 3,
$$\begin{aligned}
dV(t,w_t)\le\Big\{&-2\bar w^T(t)(P\bar M)\bar w(t)+\frac{1}{\gamma}\bar w^T(t)(P\bar U\bar U^TP^T)\bar w(t)+\gamma\bar w^T(t)(\bar V_1^T\bar V_1)\bar w(t)\\
&+2\bar w^T(t)(P\bar N)\bar\theta(\bar w(t-\nu(t)))+\frac{1}{\gamma}\bar w^T(t)(P\bar U\bar U^TP^T)\bar w(t)+\gamma\bar\theta^T(\bar w(t-\nu(t)))(\bar V_2^T\bar V_2)\bar\theta(\bar w(t-\nu(t)))\\
&+\bar w^T(t)Q_1\bar w(t)+\bar w^T(t)Q_2\bar\theta(\bar w(t))+\bar\theta^T(\bar w(t))Q_3\bar\theta(\bar w(t))\\
&-(1-\mu)\bar w^T(t-\nu(t))Q_1\bar w(t-\nu(t))-(1-\mu)\bar w^T(t-\nu(t))Q_2\bar\theta(\bar w(t-\nu(t)))\\
&-(1-\mu)\bar\theta^T(\bar w(t-\nu(t)))Q_3\bar\theta(\bar w(t-\nu(t)))\\
&+\sum_{j=1}^{N}\big[(1-(j-1)\eta)\bar w^T(t-(j-1)\tau(t))R_j\bar w(t-(j-1)\tau(t))-(1-j\eta)\bar w^T(t-j\tau(t))R_j\bar w(t-j\tau(t))\big]\\
&+\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]^TP\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]\Big\}dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t).
\end{aligned}$$
Moreover, from (18) it follows that
$$0\le\epsilon_1\big[\bar w^T(t)\bar B\bar w(t)-\bar\theta^T(\bar w(t))\bar\theta(\bar w(t))\big],$$
$$0\le\epsilon_2\big[\bar w^T(t-\nu(t))\bar B\bar w(t-\nu(t))-\bar\theta^T(\bar w(t-\nu(t)))\bar\theta(\bar w(t-\nu(t)))\big],$$
for $\epsilon_1,\epsilon_2>0$.
Then, combining (42)–(44), we have
$$\begin{aligned}
dV(t,w_t)\le\Big\{&-2\bar w^T(t)(P\bar M)\bar w(t)+\frac{1}{\gamma}\bar w^T(t)(P\bar U\bar U^TP^T)\bar w(t)+\gamma\bar w^T(t)(\bar V_1^T\bar V_1)\bar w(t)\\
&+2\bar w^T(t)(P\bar N)\bar\theta(\bar w(t-\nu(t)))+\frac{1}{\gamma}\bar w^T(t)(P\bar U\bar U^TP^T)\bar w(t)+\gamma\bar\theta^T(\bar w(t-\nu(t)))(\bar V_2^T\bar V_2)\bar\theta(\bar w(t-\nu(t)))\\
&+\bar w^T(t)Q_1\bar w(t)+\bar w^T(t)Q_2\bar\theta(\bar w(t))+\bar\theta^T(\bar w(t))Q_3\bar\theta(\bar w(t))\\
&-(1-\mu)\bar w^T(t-\nu(t))Q_1\bar w(t-\nu(t))-(1-\mu)\bar w^T(t-\nu(t))Q_2\bar\theta(\bar w(t-\nu(t)))\\
&-(1-\mu)\bar\theta^T(\bar w(t-\nu(t)))Q_3\bar\theta(\bar w(t-\nu(t)))\\
&+\sum_{j=1}^{N}\big[(1-(j-1)\eta)\bar w^T(t-(j-1)\tau(t))R_j\bar w(t-(j-1)\tau(t))-(1-j\eta)\bar w^T(t-j\tau(t))R_j\bar w(t-j\tau(t))\big]\\
&+\epsilon_1\bar w^T(t)\bar B\bar w(t)-\epsilon_1\bar\theta^T(\bar w(t))\bar\theta(\bar w(t))+\epsilon_2\bar w^T(t-\nu(t))\bar B\bar w(t-\nu(t))-\epsilon_2\bar\theta^T(\bar w(t-\nu(t)))\bar\theta(\bar w(t-\nu(t)))\\
&+\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]^TP\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]\Big\}dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t).
\end{aligned}$$
As a result, based on (45), we can have
$$dV(t,w_t)\le\xi^T(t)\Big[\Phi+\frac{1}{\gamma}\Pi\Pi^T+\Lambda P\Lambda^T\Big]\xi(t)\,dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t),$$
where
$$\begin{aligned}
\xi(t)&=\big[\bar w^T(t)\ \ \bar w^T(t-\tau(t))\ \ \bar w^T(t-2\tau(t))\ \ \bar w^T(t-3\tau(t))\ \ \cdots\ \ \bar w^T(t-(N-1)\tau(t))\ \ \bar w^T(t-\nu(t))\ \ \bar\theta^T(\bar w(t))\ \ \bar\theta^T(\bar w(t-\nu(t)))\big]^T,\\
\Phi&=\begin{bmatrix}
\Xi_1&0&0&0&\cdots&0&Q_2&P\bar N\\
\star&\Xi_2&0&0&\cdots&0&0&0\\
\star&\star&\Xi_3&0&\cdots&0&0&0\\
\star&\star&\star&\Xi_4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
\star&\star&\star&\star&\cdots&\Xi_6&0&\Xi_7\\
\star&\star&\star&\star&\cdots&\star&\Xi_8&0\\
\star&\star&\star&\star&\cdots&\star&\star&\Xi_9
\end{bmatrix},\\
\Pi&=\big[(P\bar U)^T\ \ 0\ \ 0\ \ 0\ \ \cdots\ \ 0\ \ 0\ \ (P\bar U)^T\big]^T,\qquad
\Lambda=\big[\bar C^T\ \ 0\ \ 0\ \ 0\ \ \cdots\ \ \bar S^T\ \ 0\ \ 0\big]^T,
\end{aligned}$$
and $\Xi_1,\Xi_2,\Xi_3,\Xi_4,\Xi_6,\Xi_7,\Xi_8$, and $\Xi_9$ are as defined in the statement of Theorem 2.
By using the Schur complement lemma, $\Phi+\frac{1}{\gamma}\Pi\Pi^T+\Lambda P\Lambda^T<0$ can be written as $\Theta<0$.
As a result,
$$dV(t,w_t)\le\xi^T(t)\Theta\xi(t)\,dt+2\bar w^T(t)P\big[\bar C\bar w(t)+\bar S\bar w(t-\nu(t))\big]d\omega(t).$$
It can be easily deduced that, if $\Theta<0$, there must exist a scalar $\delta>0$ such that
$$\Theta+\mathrm{diag}\{\delta I,\ 0,\ \cdots,\ 0\}\le0.$$
Taking the mathematical expectation on both sides of (47) yields
$$\frac{d\,\mathbb{E}\{V(t,w_t)\}}{dt}\le\mathbb{E}\{\xi^T(t)\Theta\xi(t)\}\le-\delta\,\mathbb{E}\{\|\bar w(t)\|^2\}.$$
The above result indicates that the model in (16) is robustly globally asymptotically stable in the mean square. This completes the proof. □
Remark 3.
The delay-dividing number $N$ is a positive integer and should not be lower than 2. The conservatism of the criterion is reduced as $N$ increases, and the admissible time delay approaches a certain upper bound. When $N$ equals 2 or 3, the conservatism is already small enough.
Remark 4.
In the situation where no uncertainties exist, the model in (10) becomes
$$d\sigma(t)=\big[-M\sigma(t)+N\theta(\sigma(t-\nu(t)))+\kappa\big]dt+\big[C\sigma(t)+S\sigma(t-\nu(t))\big]d\omega(t).$$
Simultaneously, the model in (16) becomes
$$dw(t)=\big[-\bar Mw(t)+\bar N\bar\theta(w(t-\nu(t)))+\bar\kappa\big]dt+\big[\bar Cw(t)+\bar Sw(t-\nu(t))\big]d\omega(t).$$
Corollary 1 is obtained by setting $\Delta\bar M=\Delta\bar N=0$ in the proof of Theorem 2.
Corollary 1.
Suppose that Assumption 1 holds, so that the activation function can be divided into its real and imaginary parts. For given scalars $\nu>0$ and $\mu>0$, the model in (51) is globally asymptotically stable in the mean square if there exist matrices $P>0$, $\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}>0$, $R_j>0\ (j=1,\ldots,N)$ and positive scalars $\epsilon_1,\epsilon_2$ such that the following LMI is satisfied:
$$\bar\Theta=\begin{bmatrix}
\bar\Xi_1&0&0&0&\cdots&0&Q_2&P\bar N&\bar C^TP\\
\star&\bar\Xi_2&0&0&\cdots&0&0&0&0\\
\star&\star&\bar\Xi_3&0&\cdots&0&0&0&0\\
\star&\star&\star&\bar\Xi_4&\cdots&0&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\
\star&\star&\star&\star&\cdots&\bar\Xi_6&0&\bar\Xi_7&\bar S^TP\\
\star&\star&\star&\star&\cdots&\star&\bar\Xi_8&0&0\\
\star&\star&\star&\star&\cdots&\star&\star&\bar\Xi_9&0\\
\star&\star&\star&\star&\cdots&\star&\star&\star&-P
\end{bmatrix}<0,$$
where $\bar\Xi_1=-P\bar M-\bar M^TP+Q_1+R_1+\epsilon_1\bar B$, $\bar\Xi_2=-(1-\eta)R_1+(1+\eta)R_2$, $\bar\Xi_3=-(1-2\eta)R_2+(1+2\eta)R_3$, $\bar\Xi_4=-(1-3\eta)R_3$, $\bar\Xi_6=-(1-\mu)Q_1-(1-\mu)R_N+\epsilon_2\bar B$, $\bar\Xi_7=-(1-\mu)Q_2$, $\bar\Xi_8=Q_3-\epsilon_1I$, $\bar\Xi_9=-(1-\mu)Q_3-\epsilon_2I$.
Remark 5.
In the situation where no uncertainties and stochastic effects exist, the model in (10) becomes
$$d\sigma(t)=\big[-M\sigma(t)+N\theta(\sigma(t-\nu(t)))+\kappa\big]dt.$$
Simultaneously, the model in (16) becomes
$$dw(t)=\big[-\bar Mw(t)+\bar N\bar\theta(w(t-\nu(t)))+\bar\kappa\big]dt.$$
Corollary 2 is obtained by setting $\Delta\bar M=\Delta\bar N=\bar C=\bar S=0$ in the proof of Theorem 2.
Corollary 2.
Suppose that Assumption 1 holds, so that the activation function can be divided into its real and imaginary parts. For given scalars $\nu>0$ and $\mu>0$, the model in (54) is globally asymptotically stable if there exist matrices $P>0$, $\begin{bmatrix}Q_1&Q_2\\ \star&Q_3\end{bmatrix}>0$, $R_j>0\ (j=1,\ldots,N)$ and positive scalars $\epsilon_1,\epsilon_2$ such that the following LMI is satisfied:
$$\tilde\Theta=\begin{bmatrix}
\tilde\Xi_1&0&0&0&\cdots&0&Q_2&P\bar N\\
\star&\tilde\Xi_2&0&0&\cdots&0&0&0\\
\star&\star&\tilde\Xi_3&0&\cdots&0&0&0\\
\star&\star&\star&\tilde\Xi_4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
\star&\star&\star&\star&\cdots&\tilde\Xi_6&0&\tilde\Xi_7\\
\star&\star&\star&\star&\cdots&\star&\tilde\Xi_8&0\\
\star&\star&\star&\star&\cdots&\star&\star&\tilde\Xi_9
\end{bmatrix}<0,$$
where $\tilde\Xi_1=-P\bar M-\bar M^TP+Q_1+R_1+\epsilon_1\bar B$, $\tilde\Xi_2=-(1-\eta)R_1+(1+\eta)R_2$, $\tilde\Xi_3=-(1-2\eta)R_2+(1+2\eta)R_3$, $\tilde\Xi_4=-(1-3\eta)R_3$, $\tilde\Xi_6=-(1-\mu)Q_1-(1-\mu)R_N+\epsilon_2\bar B$, $\tilde\Xi_7=-(1-\mu)Q_2$, $\tilde\Xi_8=Q_3-\epsilon_1I$, $\tilde\Xi_9=-(1-\mu)Q_3-\epsilon_2I$.
Remark 6.
It should be noticed that, due to the existence of modeling errors and measurement errors, some problems cannot be solved by using an exact mathematical model in realistic scenarios. The model described in this paper contains the parameter uncertainties $\Delta M(t)$ and $\Delta N(t)$. As a result, the effects of parameter uncertainties in our study are assumed to be time-varying and norm-bounded.
Remark 7.
It is worth noting that several stability results have been published recently without taking into account the parameter uncertainties and stochastic effects on CVNNs [26,27]. In this paper, we take into account both the parameter uncertainties and stochastic effects. Therefore, this paper includes more general stability criteria than those in References [26,27].

4. Illustrative Examples

Here, two numerical examples are provided to ascertain the usefulness of the derived outcomes presented in the previous section.
Example 1.
Consider the CVNN model defined in (10) with the following parameters:
$$M=\begin{bmatrix}4&0\\0&4\end{bmatrix},\qquad N=\begin{bmatrix}1+i&2+i\\1-i&-1\end{bmatrix},\qquad \kappa=\begin{bmatrix}1+0.5i\\0.5+i\end{bmatrix},\qquad C=\begin{bmatrix}0.2&0.1\\0.6&0.3\end{bmatrix},$$
$$S=\begin{bmatrix}0.1&0.2\\0.4&0.5\end{bmatrix},\qquad U=\begin{bmatrix}0.1&0\\0&0.1\end{bmatrix},\qquad V_1=\begin{bmatrix}0.2&0\\0&0.2\end{bmatrix},\qquad V_2=\begin{bmatrix}0.2&0.01\\0.01&0.2\end{bmatrix},$$
$$V_3=\begin{bmatrix}0.3&0.01\\0.01&0.3\end{bmatrix},\qquad \theta_j(\sigma_j)=\frac{1-e^{-2\alpha_j-\beta_j}}{1+e^{-2\alpha_j-\beta_j}}+\frac{i}{1+e^{-\alpha_j+2\beta_j}},\quad j=1,2.$$
Assume that $\theta_j(\sigma)=\theta_{jR}(\alpha,\beta)+i\,\theta_{jI}(\alpha,\beta)$, $j=1,2$. We can obtain the following based on a simple calculation:
$$\bar M=\begin{bmatrix}4&0&0&0\\0&4&0&0\\0&0&4&0\\0&0&0&4\end{bmatrix},\qquad \bar N=\begin{bmatrix}1&2&-1&-1\\1&-1&1&0\\1&1&1&2\\-1&0&1&-1\end{bmatrix},\qquad \bar U=\begin{bmatrix}0.1&0&0&0\\0&0.1&0&0\\0&0&0.1&0\\0&0&0&0.1\end{bmatrix},$$
$$\bar V_1=\begin{bmatrix}0.2&0&0&0\\0&0.2&0&0\\0&0&0.2&0\\0&0&0&0.2\end{bmatrix},\qquad \bar V_2=\begin{bmatrix}0.2&0.01&-0.3&-0.01\\0.01&0.2&-0.01&-0.3\\0.3&0.01&0.2&0.01\\0.01&0.3&0.01&0.2\end{bmatrix},\qquad \bar B=\begin{bmatrix}\frac14&0&0&0\\0&\frac14&0&0\\0&0&\frac14&0\\0&0&0&\frac14\end{bmatrix}.$$
Let $N=3$ and $\nu(t)=0.2\sin t+0.3$, which satisfies $\nu=0.5$ and $\mu=0.3$. By using the MATLAB software package, the LMI condition (40) in Theorem 2 is found to be feasible.
Based on the initial values $\sigma_1(t)=0.5-0.3i$ and $\sigma_2(t)=0.6+0.4i$, Figure 1 and Figure 2 depict the state trajectories pertaining to the real and imaginary parts of the model in (10), while Figure 3 illustrates the phase trajectories pertaining to both parts. It is obvious from Figure 1, Figure 2 and Figure 3 that the model converges to an equilibrium point, which means that the model in (10), or equivalently the model in (16), is robustly globally asymptotically stable.
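Trajectories such as those in Figures 1–3 can be reproduced qualitatively with a fixed-step Euler–Maruyama scheme and a history buffer for the delay. The sketch below is our own construction, not the authors' code; the step size, horizon, the scalar Brownian motion, and the choice $\Delta M(t)=\Delta N(t)=0$ are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

M = np.diag([4.0, 4.0])
N = np.array([[1 + 1j, 2 + 1j],
              [1 - 1j, -1 + 0j]])
kappa = np.array([1 + 0.5j, 0.5 + 1j])
C = np.array([[0.2, 0.1], [0.6, 0.3]])
S = np.array([[0.1, 0.2], [0.4, 0.5]])

def theta(sig):
    a, b = sig.real, sig.imag
    return (1 - np.exp(-2*a - b)) / (1 + np.exp(-2*a - b)) \
        + 1j / (1 + np.exp(-a + 2*b))

dt, T, nu_max = 1e-3, 10.0, 0.5
steps, lag = int(T / dt), int(nu_max / dt)

sigma = np.empty((steps + lag, 2), dtype=complex)
sigma[:lag + 1] = np.array([0.5 - 0.3j, 0.6 + 0.4j])  # constant history

for k in range(lag, steps + lag - 1):
    t = (k - lag) * dt
    d = int((0.2 * np.sin(t) + 0.3) / dt)     # delay nu(t) in steps
    x, xd = sigma[k], sigma[k - d]
    drift = -M @ x + N @ theta(xd) + kappa
    dw = np.sqrt(dt) * rng.standard_normal()  # scalar Brownian increment
    sigma[k + 1] = x + drift * dt + (C @ x + S @ xd) * dw

print("final state:", sigma[-1])  # settles near the unique equilibrium
```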
Example 2.
Consider the CVNN model defined in (53) with the following parameters:
$$d\sigma(t)=\Bigg[-\begin{bmatrix}8&0\\0&8\end{bmatrix}\sigma(t)+\begin{bmatrix}1+i&2+i\\1-i&-1\end{bmatrix}\theta(\sigma(t-\nu(t)))+\begin{bmatrix}1+0.2i\\0.2+i\end{bmatrix}\Bigg]dt,\qquad \theta_j(\sigma_j)=\tanh(\sigma_j),\quad j=1,2.$$
By assuming that $\theta_j(\sigma)=\theta_{jR}(\alpha,\beta)+i\,\theta_{jI}(\alpha,\beta)$, $j=1,2$, we can obtain the following by a simple calculation:
$$\bar M=\begin{bmatrix}8&0&0&0\\0&8&0&0\\0&0&8&0\\0&0&0&8\end{bmatrix},\qquad \bar N=\begin{bmatrix}1&2&-1&-1\\1&-1&1&0\\1&1&1&2\\-1&0&1&-1\end{bmatrix},\qquad \bar B=\begin{bmatrix}0.5&0&0&0\\0&0.5&0&0\\0&0&0.5&0\\0&0&0&0.5\end{bmatrix}.$$
Let $N=3$ and $\nu(t)=0.5\sin t+0.8$, which satisfies $\nu=1.3$ and $\mu=0.3$. By using the MATLAB software package, it is found that the LMI (55) is feasible. Table 1 shows the maximum allowable upper bound of $\nu$ under various $\mu$ settings. Figure 4 and Figure 5 depict the state trajectories with respect to the real and imaginary parts of the model in (53), in which 20 initial values randomly selected within a bounded interval are considered. On the other hand, Figure 6 and Figure 7 depict the phase trajectories with respect to the real and imaginary parts of the model in (53). From Corollary 2, we can confirm that the equilibrium point of the model in (53), or equivalently the model in (54), is globally asymptotically stable.
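For readers who wish to reproduce a check like Table 1 outside MATLAB, the block LMI of Corollary 2 can be assembled with CVXPY. The sketch below is our reading of $\tilde\Theta$ for $N=3$ (in which the chain of sub-delay blocks closes at $w(t-\nu(t))$, so the $\tilde\Xi_4$ entry is absorbed into $\tilde\Xi_6$, since $3\tau(t)=\nu(t)$); whether it reproduces Table 1 exactly depends on details of the original matrix that we could not recover from the extraction, so treat it as a template rather than a reference implementation:

```python
import cvxpy as cp
import numpy as np

# Example 2 data (realified), N = 3 delay divisions, mu = 0.3.
Mbar = 8 * np.eye(4)
Nbar = np.array([[ 1.,  2., -1., -1.],
                 [ 1., -1.,  1.,  0.],
                 [ 1.,  1.,  1.,  2.],
                 [-1.,  0.,  1., -1.]])
Bbar = 0.5 * np.eye(4)
mu, eta = 0.3, 0.3 / 3

n, m = 4, 1e-6
Z = np.zeros((n, n))
P, Q1, Q3 = (cp.Variable((n, n), symmetric=True) for _ in range(3))
R1, R2, R3 = (cp.Variable((n, n), symmetric=True) for _ in range(3))
Q2 = cp.Variable((n, n))
e1, e2 = cp.Variable(nonneg=True), cp.Variable(nonneg=True)

X1 = -P @ Mbar - Mbar.T @ P + Q1 + R1 + e1 * Bbar
X2 = -(1 - eta) * R1 + (1 + eta) * R2
X3 = -(1 - 2 * eta) * R2 + (1 + 2 * eta) * R3
X6 = -(1 - mu) * (Q1 + R3) + e2 * Bbar  # Xi6 with R_N = R3; Xi4 absorbed
X7 = -(1 - mu) * Q2
X8 = Q3 - e1 * np.eye(n)
X9 = -(1 - mu) * Q3 - e2 * np.eye(n)

Theta = cp.bmat([
    [X1,         Z,  Z,  Z,    Q2, P @ Nbar],
    [Z,          X2, Z,  Z,    Z,  Z],
    [Z,          Z,  X3, Z,    Z,  Z],
    [Z,          Z,  Z,  X6,   Z,  X7],
    [Q2.T,       Z,  Z,  Z,    X8, Z],
    [Nbar.T @ P, Z,  Z,  X7.T, Z,  X9]])
Theta = (Theta + Theta.T) / 2           # enforce exact symmetry

Q = cp.bmat([[Q1, Q2], [Q2.T, Q3]])
cons = [P >> m * np.eye(n), Q >> m * np.eye(2 * n),
        R1 >> m * np.eye(n), R2 >> m * np.eye(n), R3 >> m * np.eye(n),
        e1 >= m, e2 >= m, Theta << -m * np.eye(6 * n)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)   # "optimal" here means the LMI is feasible
```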

5. Conclusions

In this paper, we have investigated the robust stability of USCVHNN models, taking their time-varying delays and stochastic effects into consideration. We have formulated a general form of the model under scrutiny by incorporating the parameter uncertainties and stochastic effects. Following the studies in References [42,43], we have divided the delay interval into N equivalent sub-intervals. In addition, we have exploited the homeomorphism principle along with the Lyapunov function and other analytical methods in our analysis. As a result, we have derived new sufficient conditions in terms of LMIs, which allow us to analyse the robust stability of USCVHNN models. The feasible solutions have been obtained by using the MATLAB software package. Finally, the obtained theoretical results are ascertained through two numerical examples.
For further research, we will extend our proposed approach to analysing other relevant types of CVNN models. In this regard, we intend to undertake an investigation of fuzzy stochastic CVNN models.

Author Contributions

Funding acquisition, P.C.; Conceptualization, G.R.; Software, P.C., P.K., U.H., R.R. and R.S.; Formal analysis, G.R.; Methodology, G.R.; Supervision, C.P.L.; Writing—original draft, G.R.; Validation, G.R.; Writing—review and editing, G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research is made possible through financial support of Chiang Mai University.

Acknowledgments

The authors are grateful to the support provided by Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Arik, S. An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Trans. Neural Netw. 2002, 13, 1239–1242.
2. Liang, S.; Cao, J. A based-on LMI stability criterion for delayed recurrent neural networks. Chaos Soliton. Fract. 2006, 28, 154–160.
3. Cao, J. Global asymptotic stability of neural networks with transmission delays. Int. J. Syst. Sci. 2000, 31, 1313–1316.
4. Wu, Z.G.; Lam, J.; Su, H.; Chu, J. Stability and dissipativity analysis of static neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 199–210.
5. Chen, G.; Xia, J.; Zhuang, G. Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J. Frankl. Inst. 2016, 353, 2137–2158.
6. Zhu, Q.; Cao, J. Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 2010, 21, 1314–1325.
7. Li, R.; Cao, J.; Tu, Z. Passivity analysis of memristive neural networks with probabilistic time-varying delays. Neurocomputing 2016, 191, 249–262.
8. Bao, H.; Park, J.H.; Cao, J. Synchronization of fractional-order complex-valued neural networks with time delay. Neural Netw. 2016, 81, 16–28.
9. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
10. Li, X.; Ding, D. Mean square exponential stability of stochastic Hopfield neural networks with mixed delays. Stat. Probabil. Lett. 2017, 126, 88–96.
11. Wang, T.; Zhao, S.; Zhou, W.; Yu, W. Finite-time state estimation for delayed Hopfield neural networks with Markovian jump. Neurocomputing 2015, 156, 193–198.
12. Sriraman, R.; Samidurai, R. Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions. Math. Comput. Simulat. 2019, 155, 201–216.
13. Wang, Z.; Guo, Z.; Huang, L.; Liu, X. Dynamical behavior of complex-valued Hopfield neural networks with discontinuous activation functions. Neural Process. Lett. 2017, 45, 1039–1061.
14. Kwon, O.M.; Park, J.H. New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays. Appl. Math. Comput. 2008, 205, 417–427.
15. Blythe, S.; Mao, X.; Liao, X. Stability of stochastic delay neural networks. J. Franklin Inst. 2001, 338, 481–495.
16. Chen, Y.; Zheng, W. Stability analysis of time-delay neural networks subject to stochastic perturbations. IEEE Trans. Cyber. 2013, 43, 2122–2134.
17. Tan, H.; Hua, M.; Chen, J.; Fei, J. Stability analysis of stochastic Markovian switching static neural networks with asynchronous mode-dependent delays. Neurocomputing 2015, 151, 864–872.
18. Cao, Y.; Samidurai, R.; Sriraman, R. Stability and dissipativity analysis for neutral type stochastic Markovian jump static neural networks with time delays. J. Artif. Int. Soft Comput. Res. 2019, 9, 189–204.
19. Wan, L.; Sun, J. Mean square exponential stability of stochastic delayed Hopfield neural networks. Phys. Lett. A 2005, 343, 306–318.
20. Sun, Y.; Cao, J. pth moment exponential stability of stochastic recurrent neural networks with time-varying delays. Nonlinear Anal. Real World Appl. 2007, 8, 1171–1185.
21. Zhu, S.; Shen, Y. Passivity analysis of stochastic delayed neural networks with Markovian switching. Neurocomputing 2011, 74, 1754–1761.
22. Guo, J.; Meng, Z.; Xiang, Z. Passivity analysis of stochastic memristor-based complex-valued recurrent neural networks with mixed time-varying delays. Neural Process. Lett. 2018, 47, 1097–1113.
23. Mathews, J.H.; Howell, R.W. Complex Analysis for Mathematics and Engineering, 3rd ed.; Jones & Bartlett: Boston, MA, USA, 1997.
24. Jankowski, S.; Lozowski, A.; Zurada, J.M. Complex-valued multistate neural associative memory. IEEE Trans. Neural Netw. 1996, 7, 1491–1496.
25. Nishikawa, T.; Iritani, T.; Sakakibara, K.; Kuroe, Y. Phase dynamics of complex-valued neural networks and its application to traffic signal control. Int. J. Neural Syst. 2005, 15, 111–120.
26. Samidurai, R.; Sriraman, R.; Cao, J.; Tu, Z. Effects of leakage delay on global asymptotic stability of complex-valued neural networks with interval time-varying delays via new complex-valued Jensen’s inequality. Int. J. Adapt. Control Signal Process. 2018, 32, 1294–1312.
27. Wang, Z.; Huang, L. Global stability analysis for delayed complex-valued BAM neural networks. Neurocomputing 2016, 173, 2083–2089.
28. Liu, D.; Zhu, S.; Chang, W. Mean square exponential input-to-state stability of stochastic memristive complex-valued neural networks with time varying delay. Int. J. Syst. Sci. 2017, 48, 1966–1977.
29. Wang, Z.; Liu, X. Exponential stability of impulsive complex-valued neural networks with time delay. Math. Comput. Simulat. 2019, 156, 143–157.
30. Liang, J.; Li, K.; Song, Q.; Zhao, Z.; Liu, Y.; Alsaadi, F.E. State estimation of complex-valued neural networks with two additive time-varying delays. Neurocomputing 2018, 309, 54–61.
31. Gong, W.; Liang, J.; Kan, X.; Nie, X. Robust state estimation for delayed complex-valued neural networks. Neural Process. Lett. 2017, 46, 1009–1029.
32. Ramasamy, S.; Nagamani, G. Dissipativity and passivity analysis for discrete-time complex-valued neural networks with leakage delay and probabilistic time-varying delays. Int. J. Adapt. Control Signal Process. 2017, 31, 876–902.
33. Sriraman, R.; Cao, Y.; Samidurai, R. Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays. Math. Comput. Simulat. 2020, 171, 103–118.
34. Sriraman, R.; Samidurai, R. Global asymptotic stability analysis for neutral-type complex-valued neural networks with random time-varying delays. Int. J. Syst. Sci. 2019, 50, 1742–1756.
35. Pratap, A.; Raja, R.; Cao, J.; Rajchakit, G.; Lim, C.P. Global robust synchronization of fractional order complex-valued neural networks with mixed time-varying delays and impulses. Int. J. Control Automat. Syst. 2019, 17, 509–520.
36. Samidurai, R.; Sriraman, R.; Zhu, S. Leakage delay-dependent stability analysis for complex-valued neural networks with discrete and distributed time-varying delays. Neurocomputing 2019, 338, 262–273.
37. Song, Q.; Zhao, Z.; Liu, Y. Stability analysis of complex-valued neural networks with probabilistic time-varying delays. Neurocomputing 2015, 159, 96–104.
38. Chen, X.; Zhao, Z.; Song, Q.; Hu, J. Multistability of complex-valued neural networks with time-varying delays. Appl. Math. Comput. 2017, 294, 18–35.
39. Tu, Z.; Cao, J.; Alsaedi, A.; Alsaadi, F.E.; Hayat, T. Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays. Complexity 2016, 21, 438–450.
40. Zhang, Z.; Liu, X.; Guo, R.; Lin, C. Finite-time stability for delayed complex-valued BAM neural networks. Neural Process. Lett. 2018, 48, 179–193.
41. Zhang, Z.; Liu, X.; Chen, J.; Guo, R.; Zhou, S. Further stability analysis for delayed complex-valued recurrent neural networks. Neurocomputing 2017, 251, 81–89.
42. Qiu, F.; Cui, B.; Ji, Y. A delay-dividing approach to stability of neutral system with mixed delays and nonlinear perturbations. Appl. Math. Model. 2008, 34, 3701–3707.
43. Hui, J.J.; Kong, X.Y.; Zhang, H.X.; Zhou, X. Delay-partitioning approach for systems with interval time-varying delay and nonlinear perturbations. J. Comput. Appl. Math. 2015, 281, 74–81.
44. Liu, G.; Yang, S.X.; Chai, Y.; Feng, W.; Fu, W. Robust stability criteria for uncertain stochastic neural networks of neutral-type with interval time-varying delays. Neural Comput. Appl. 2013, 22, 349–359.
Figure 1. An illustration of the state trajectories of the real part for the model in (10) with respect to Example 1.
Figure 2. An illustration of the state trajectories of the imaginary part for the model in (10) with respect to Example 1.
Figure 3. An illustration of the phase trajectories between the real and imaginary subspace of the model in (10).
Figure 4. An illustration of the state trajectories of the real part for the model in (53) with respect to Example 2.
Figure 5. An illustration of the state trajectories of the imaginary part for the model in (53) with respect to Example 2.
Figure 6. An illustration of the phase trajectories in the real subspace $[\mathrm{Re}(\sigma_1),\mathrm{Re}(\sigma_2)]$ for the model in (53).
Figure 7. An illustration of the phase trajectories in the imaginary subspace $[\mathrm{Im}(\sigma_1),\mathrm{Im}(\sigma_2)]$ for the model in (53).
Table 1. The maximum allowable upper bounds of ν for different μ settings in Example 2.

| Method | μ = 0.1 | μ = 0.3 | μ = 0.5 | μ = 0.7 | Unknown μ |
|---|---|---|---|---|---|
| Corollary 2 | 1.3904 | 1.3000 | 1.1085 | 1.1002 | 0.9830 |
