Article

Stability Analysis of Quaternion-Valued Neutral-Type Neural Networks with Time-Varying Delay

1 School of Mathematics and Computer Science, Yunnan Minzu University, Kunming 650500, China
2 Department of Mathematics and Statistics, Guizhou University of Finance and Economics, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 101; https://doi.org/10.3390/math7010101
Submission received: 28 November 2018 / Revised: 29 December 2018 / Accepted: 15 January 2019 / Published: 18 January 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract: This paper addresses the problem of global μ-stability for quaternion-valued neutral-type neural networks (QVNTNNs) with time-varying delays. First, the QVNTNNs are transformed into two complex-valued systems by using a transformation that reduces the computational complexity generated by the non-commutativity of quaternion multiplication, and a new convex inequality in the complex field is introduced. Next, a condition for the existence and uniqueness of the equilibrium point is obtained by means of homeomorphism theory. The global stability conditions of the complex-valued systems are then provided by constructing a novel Lyapunov–Krasovskii functional and by using an integral inequality technique together with the reciprocally convex combination approach. The obtained global μ-stability conditions specialize to three different kinds of stability by varying the positive continuous function μ(t). Finally, three examples and a simulation are given to display the effectiveness of the proposed methods.

1. Introduction

As is well known, with the rapid development of electronic information science, complex-valued signals appear frequently in engineering practice. The application fields of complex-valued neural networks (CVNNs) are also becoming increasingly extensive: for instance, automatic control, eddy current defect detection, image processing, object recognition, frequency-domain blind source separation, and signal processing (see, e.g., [1,2,3,4,5,6]). Hence, many scholars are directing much attention to studying the dynamic behavior of CVNNs, and many important results have been reported in the literature. The exponential stability of complex-valued BAM neural networks was studied based on differential inclusion theory and the properties of homeomorphisms [7]. The synchronization problem for CVNNs with time delays was discussed in [8,9]. Following these results, in [10,11], the problem of extended dissipative synchronization of CVNNs was also discussed. In [12], the Lagrange stability of CVNNs was studied by using a transformation in which the CVNN is rewritten as a first-order differential system. In [13,14,15,16], the authors studied the impact of impulses on the stability of CVNNs with time-varying delays and obtained sufficient conditions ensuring exponential convergence of the CVNNs. Moreover, fractional complex-valued neural networks (FCVNNs) have certain advantages in describing dynamical properties. In [17], Huang studied local asymptotic stability and Hopf bifurcation, and the condition for the emergence of bifurcation was obtained.
In fact, the quaternion, as an extension of the complex numbers, can also be applied in engineering practice, and this has aroused the interest of scholars. After active exploration, scholars found that the quaternion can play a very important role in engineering, mainly on the basis of its advantages in modeling rotation and orientation. For example, a data covariance model in quaternion form was proposed to estimate wavenumber and polarization parameters in the manner of the MUSIC algorithm [18]. In addition, quaternions have been used to define Fourier transforms suitable for color images; it was also shown that this transform can be calculated using two standard complex fast Fourier transforms [19].
In recent years, it has become increasingly common to study the quaternion-valued neural network (QVNN) as an extension of the CVNN, for the following reasons. By Liouville’s theorem [20], every bounded entire function must be constant; i.e., the activation function of a CVNN cannot be both bounded and analytic unless it is a constant. The activation function of a QVNN, in contrast, can be bounded and analytic at the same time, as applied in [21], but choosing the activation function of a QVNN is a difficult problem, since the analyticity of general quaternion-valued functions has not been rigorously settled in the quaternion field. The strict Cauchy–Riemann–Fueter (CRF) condition and the generalized Cauchy–Riemann (GCR) condition guarantee global analyticity of quaternion-valued functions only for linear functions and constants, respectively. To overcome this difficulty, References [22,23] gave important conditions that relax the Cauchy–Riemann–Fueter condition to a local one—namely, the local analyticity condition (LAC)—to ensure that quaternion-valued functions can be bounded and analytic at the same time. This technique, which provides more flexibility in choosing the activation function of QVNNs, represents significant progress. Until now, quaternion algebra has been successfully applied to communications and signal processing problems, such as color image processing [24] and wind forecasting [25]. Since then, numerous scholars have produced many excellent results in the field of QVNNs (see, e.g., [26,27,28,29] and the literature referenced therein). In [26], the QVNN was transformed into two complex-valued systems by a simple transformation, which reduced the complexity of the computation generated by the non-commutativity of quaternion multiplication.
With homeomorphism theory, Reference [27] proved the existence of the equilibrium point of QVNNs and provided sufficient conditions for global robust stability. In [28], the pseudo almost periodic synchronization problem of quaternion-valued cellular neural networks (QVCNNs) was studied: the existence of pseudo almost periodic solutions was proved, and the global exponential synchronization of QVCNNs was obtained by designing a controller and combining Lyapunov functions.
On the other hand, neutral-type systems not only consider the past state but also specifically involve the influence of changes in past states on the current state. Due to this feature, neutral-type systems have become the subject of extensive research (see [30,31,32,33,34,35,36,37,38]). Furthermore, neutral systems have many applications in practical engineering, including heat exchangers and population ecology (see [39,40]). Many neural networks can be regarded as special cases of neutral neural networks, and most neural networks can be transformed into neutral neural networks for research (see [41,42,43]). It can be seen that the neutral neural network has great research value and potential significance. Nevertheless, to the best of the authors’ knowledge, the global μ-stability of quaternion-valued neutral-type neural networks (QVNTNNs) with time-varying delays has not yet been studied in the literature.
All of the above factors motivate our research. This article discusses the μ-stability of QVNTNNs. The remainder of the paper is organized as follows. In the second part, the fundamental definition of the quaternion is given. In the third part, we first introduce the QVNTNN model; then, some important definitions and lemmas are provided, and a new extended convex inequality is obtained for the first time in this paper. In the fourth part, using homeomorphism theory, we first obtain a new condition for the existence and uniqueness of the equilibrium point, and the global μ-stability criterion for QVNTNNs is provided using Lyapunov functional theory combined with some inequality techniques; based on the obtained stability results, power-stability, log-stability, and exponential stability are given as corollaries. In the fifth part, the effectiveness and feasibility of the method are illustrated by three examples. In the sixth part, we draw conclusions.
Notations: The symbols used throughout this paper are fairly standard. R^(n×m), C^(n×m), and Q^(n×m) denote the collections of all n × m real-valued, complex-valued, and quaternion-valued matrices, respectively. diag(·) denotes a block-diagonal matrix. ∥·∥ denotes the Euclidean vector norm. SC_n(Q) denotes the collection of all quaternion positive definite and quaternion self-conjugate matrices. p denotes a quaternion-valued function, and p̄ denotes the conjugate of p. The superscript * denotes the conjugate transpose of a matrix or a vector. For any matrix G, λmax(G) (λmin(G)) denotes the largest (smallest) eigenvalue of G.

2. Definition of Quaternion

The quaternion consists of four parts, one real and three imaginary, with imaginary units i, j, and k. A quaternion q belongs to a four-dimensional vector space and is represented in the form
q = q0 + q1 i + q2 j + q3 k,
where qv (v = 0, 1, 2, 3) are real numbers and i, j, k satisfy the multiplication rules
i² = j² = k² = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.
These identities are known as the Hamilton rules; in particular, quaternion multiplication is not commutative.
Similar to the complex case, the conjugate of a quaternion q ∈ Q is defined as
q̄ = q0 − q1 i − q2 j − q3 k.
For any q ∈ Q, |q| = √(q q̄) = √(q0² + q1² + q2² + q3²). Moreover, each q ∈ Qⁿ can be expressed as q = c1 + c2 j, where c1, c2 ∈ Cⁿ.
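As a quick numerical illustration (a sketch in plain Python, with quaternions represented as hypothetical 4-tuples (q0, q1, q2, q3)), the Hamilton rules and the identity q q̄ = |q|² can be verified directly:

```python
import math

# Hamilton product of two quaternions (q0, q1, q2, q3), following the
# multiplication table i^2 = j^2 = k^2 = -1, ij = -ji = k, jk = -kj = i,
# ki = -ik = j.
def qmul(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(q):
    a0, a1, a2, a3 = q
    return (a0, -a1, -a2, -a3)

def qmod(q):
    return math.sqrt(sum(x*x for x in q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)  # ij = k, ji = -k
assert qmul(j, k) == i and qmul(k, i) == j              # jk = i, ki = j
q = (1.0, 2.0, 3.0, 4.0)
# q q-bar is real and equals |q|^2
assert abs(qmul(q, qconj(q))[0] - qmod(q)**2) < 1e-12
```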

3. Problem Statement and Preliminaries

Firstly, the delayed QVNTNN is introduced as follows:
ẏ(t) − C ẏ(t − ν(t)) = −D y(t) + A p(y(t)) + B p(y(t − ν(t))) + κ,
where y(t) = (y1(t), y2(t), …, yn(t))* ∈ Qⁿ is the state vector, p(·) = (p1(·), …, pn(·))* ∈ Qⁿ is the neuron activation (feedback) function, and κ = (κ1, κ2, …, κn)* ∈ Qⁿ is the external input. D = diag(d1, d2, …, dn) ∈ R^(n×n) is a diagonal matrix with di > 0 (i = 1, 2, …, n). C = (cij)_(n×n) ∈ Q^(n×n) is the quaternion coefficient matrix of the neutral term. A = (aij)_(n×n) ∈ Q^(n×n) and B = (bij)_(n×n) ∈ Q^(n×n) stand for the connection weight matrix and the delayed connection weight matrix, respectively. ν(t) represents the time-varying delay and satisfies 0 ≤ ν(t) ≤ ν, ν̇(t) ≤ ς. The initial condition of the QVNTNN (Equation (1)) is y(t) = ψ(t), t ∈ [−ν, 0], where ψ(t) ∈ Qⁿ.
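To make the neutral structure concrete, the following sketch integrates a scalar real analogue of the model above by the explicit Euler method. All parameter values are hypothetical; p = tanh is a common Lipschitz activation, the delay is taken constant for simplicity, and |c| < 1 mirrors the usual requirement on the neutral coefficient. The derivative history is stored explicitly because the neutral term needs ẏ(t − ν).

```python
import math

# Scalar real analogue of the QVNTNN:
#   y'(t) - c*y'(t - nu) = -d*y(t) + a*p(y(t)) + b*p(y(t - nu)) + kappa.
d, a, b, c, kappa, nu = 2.0, 0.5, 0.3, 0.2, 0.1, 1.0
p = math.tanh
dt = 0.001
lag = int(nu / dt)
y = [0.5] * (lag + 1)    # constant initial history psi(t) = 0.5 on [-nu, 0]
dy = [0.0] * (lag + 1)   # hence y'(t) = 0 on the history interval

for _ in range(int(20 / dt)):
    y_del, dy_del = y[-lag - 1], dy[-lag - 1]   # y(t - nu), y'(t - nu)
    rhs = c * dy_del - d * y[-1] + a * p(y[-1]) + b * p(y_del) + kappa
    y.append(y[-1] + dt * rhs)
    dy.append(rhs)

# The state settles near the equilibrium solving d*y = (a + b)*p(y) + kappa.
assert abs(dy[-1]) < 1e-3
```

The design point is that a neutral-type integrator must carry a buffer of past derivatives as well as past states, which is exactly the feature that distinguishes Equation (1) from a purely retarded delay system.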
Assumption 1.
For any y ∈ Q, y can be expressed as
y = y11 + i y12 + j y21 + k y22 = y1 + y2 j,
where y1 = y11 + i y12, y2 = y21 + i y22.
Similarly,
A = A1 + A2 j, B = B1 + B2 j, C = C1 + C2 j,
p(y(t)) = p1(y1(t)) + p2(y2(t)) j,
p(y(t − ν(t))) = p1(y1(t − ν(t))) + p2(y2(t − ν(t))) j,
p̄(y(t)) = p̄1(y1(t)) + p̄2(y2(t)) j,
p̄(y(t − ν(t))) = p̄1(y1(t − ν(t))) + p̄2(y2(t − ν(t))) j,
where A1 = A1^R + i A1^I, B1 = B1^R + i B1^I, C1 = C1^R + i C1^I, with (·)^R standing for Re(·) and (·)^I standing for Im(·), and pv(·) ∈ Cⁿ, p̄v(·) ∈ Cⁿ (v = 1, 2). In particular, j T = T̄ j, and hence j T j = −T̄, for any complex matrix T ∈ C^(n×n).
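The identity j T = T̄ j is what makes the split into two complex-valued systems work: it yields the Cayley–Dickson product rule (c1 + c2 j)(d1 + d2 j) = (c1 d1 − c2 d̄2) + (c1 d2 + c2 d̄1) j. A sketch with hypothetical scalar values, checking this rule against the raw Hamilton product:

```python
# Map a complex pair (c1, c2) to the quaternion c1 + c2*j = (q0, q1, q2, q3).
def to_quat(c1, c2):
    return (c1.real, c1.imag, c2.real, c2.imag)

# Hamilton product, as in the previous sketch.
def qmul(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

c1, c2 = 1 + 2j, 3 - 1j
d1, d2 = -2 + 1j, 0.5 + 4j
lhs = qmul(to_quat(c1, c2), to_quat(d1, d2))
# Cayley-Dickson rule: (c1 + c2 j)(d1 + d2 j)
#   = (c1 d1 - c2 conj(d2)) + (c1 d2 + c2 conj(d1)) j
e1 = c1*d1 - c2*d2.conjugate()
e2 = c1*d2 + c2*d1.conjugate()
rhs = to_quat(e1, e2)
assert max(abs(x - y) for x, y in zip(lhs, rhs)) < 1e-12
```

This is precisely the structure that produces the −A2 p̄2 and +A2 p̄1 cross terms in the complex-valued systems derived later.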
Assumption 2.
The neuron activation functions pv(·) and p̄v(·) (v = 1, 2) satisfy the Lipschitz condition: for any y, y′ ∈ Cⁿ with y ≠ y′, there exist constants Lγ (γ = 1, 2, …, n) such that
∥pv(y) − pv(y′)∥ ≤ Lγ ∥y − y′∥,  ∥p̄v(y) − p̄v(y′)∥ ≤ Lγ ∥y − y′∥.
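As an illustration of Assumption 2, the sketch below checks numerically that tanh, a common (here hypothetical) choice for the activation pv, satisfies the Lipschitz condition with constant Lγ = 1:

```python
import math
import random

# |tanh(x) - tanh(y)| <= |x - y| because |tanh'(u)| = 1 - tanh(u)^2 <= 1.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(math.tanh(x) - math.tanh(y)) <= abs(x - y) + 1e-12
```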
Assumption 3.
According to the stability theorem in [44] for neutral systems, we assume that the spectral radius of C is smaller than 1.
Definition 1
([45]). The QVNTNN (Equation (1)) is called μ-stable if, for a positive continuous function μ(t) with μ(t) → +∞ as t → +∞, there exists a positive constant φ such that the following inequality holds:
∥y(t)∥ ≤ φ / μ(t),
for all t > 0.
Remark 1.
The obtained μ-stability conditions can be specialized to power-stability, log-stability, and exponential stability by varying the positive continuous function μ(t).
Definition 2
([45]). Take μ(t) = e^(ϖt) with ϖ > 0, which is positive and continuous, and e^(ϖt) → +∞ as t → +∞. If there exists a positive constant φ such that the following inequality holds for all t > 0:
∥y(t)∥ ≤ φ e^(−ϖt),
then the QVNTNN (Equation (1)) is called exponentially stable.
Definition 3
([45]). Take μ(t) = t^ϖ with ϖ > 0, which is positive and continuous, and t^ϖ → +∞ as t → +∞. If there exists a constant φ > 0 such that the following inequality holds:
∥y(t)∥ ≤ φ t^(−ϖ), (t > 0)
then the QVNTNN (Equation (1)) is called power-stable.
Definition 4
([45]). Take μ(t) = ln(ϖt + 1) with ϖ > 0, which is positive and continuous, and ln(ϖt + 1) → +∞ as t → +∞. If there exists a positive constant φ such that the following inequality holds:
∥y(t)∥ ≤ φ / ln(ϖt + 1), (t > 0)
then the QVNTNN (Equation (1)) is called log-stable.
Lemma 1
([46]). For a given Hermitian matrix W > 0, the following inequality holds for all continuously differentiable functions ϕ : [f, g] → Cⁿ:
∫_(f)^(g) ϕ̇*(u) W ϕ̇(u) du ≥ (1/(g − f)) (ϕ(g) − ϕ(f))* W (ϕ(g) − ϕ(f)) + (3/(g − f)) Ξ* W Ξ,
where
Ξ = ϕ(g) + ϕ(f) − (2/(g − f)) ∫_(f)^(g) ϕ(u) du.
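Lemma 1 can be sanity-checked numerically in the simplest setting. The sketch below takes the scalar real case W = 1 with the hypothetical test function ϕ(u) = u³ on [0, 1]; the left-hand side evaluates to 1.8 and the right-hand side to 1.75, so the inequality holds with a visible gap (for polynomials of degree ≤ 2 it is tight):

```python
# Midpoint-rule check of the Wirtinger-based integral inequality (Lemma 1)
# for W = 1, phi(u) = u**3 on [0, 1].
f, g = 0.0, 1.0
phi = lambda u: u**3
dphi = lambda u: 3*u**2

n = 100000
h = (g - f) / n
lhs = sum(dphi(f + (m + 0.5)*h)**2 for m in range(n)) * h   # int phi'^2
iphi = sum(phi(f + (m + 0.5)*h) for m in range(n)) * h      # int phi
xi = phi(g) + phi(f) - 2/(g - f)*iphi
rhs = (phi(g) - phi(f))**2/(g - f) + 3*xi**2/(g - f)
assert lhs >= rhs - 1e-9   # 1.8 >= 1.75
```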
Lemma 2
([26]). For any given matrix G ∈ SC_n(Q), every eigenvalue of G is real.
Lemma 3
([47]). If a continuous mapping M(y) : Cⁿ → Cⁿ satisfies the following conditions:
(1) 
M(y) : Cⁿ → Cⁿ is an injective mapping;
(2) 
∥M(y)∥ → ∞ as ∥y∥ → ∞;
then M(y) is a homeomorphism of Cⁿ.
Lemma 4.
For ρi(t) ∈ [0, 1] with Σ_(i=1)^(m) ρi(t) = 1, vectors ξi which satisfy ξi = 0 whenever ρi(t) = 0, and matrices Mi > 0, Mi ∈ C^(n×n), if there exist Hermitian matrices Sij ∈ C^(n×n) (i = 1, 2, …, m − 1, j = i + 1, …, m) satisfying
[Mi, Sij; Sij*, Mj] ≥ 0,
then the following inequality holds:
Σ_(i=1)^(m) (1/ρi(t)) ξi* Mi ξi ≥ ξ* M ξ,
where ξ = [ξ1* ξ2* ⋯ ξm*]* and M is the Hermitian block matrix with diagonal blocks M1, …, Mm and off-diagonal blocks Sij (i < j).
Proof. 
For m = 2, it is easy to see that the inequality
[√(ρ2(t)/ρ1(t)) ξ1; −√(ρ1(t)/ρ2(t)) ξ2]* [M1, S12; S12*, M2] [√(ρ2(t)/ρ1(t)) ξ1; −√(ρ1(t)/ρ2(t)) ξ2] ≥ 0
always holds. Then, one has
(1/ρ1(t)) ξ1* M1 ξ1 + (1/ρ2(t)) ξ2* M2 ξ2
= (1/ρ1(t)) ξ1* (ρ1(t) + ρ2(t)) M1 ξ1 + (1/ρ2(t)) ξ2* (ρ1(t) + ρ2(t)) M2 ξ2
= ξ1* M1 ξ1 + ξ2* M2 ξ2 + (ρ2(t)/ρ1(t)) ξ1* M1 ξ1 + (ρ1(t)/ρ2(t)) ξ2* M2 ξ2
≥ ξ1* M1 ξ1 + ξ2* M2 ξ2 + ξ1* S12 ξ2 + ξ2* S12* ξ1
= [ξ1; ξ2]* [M1, S12; S12*, M2] [ξ1; ξ2].
The general case can be established with a similar method; the details are omitted.  □
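The two-term case of Lemma 4 can be illustrated numerically. The sketch below takes the scalar real case M1 = M2 = 1 with a hypothetical coupling S12 = s; the block-matrix condition reduces to |s| ≤ 1, and the reciprocally convex bound is checked over several convex weights ρ1(t) = r:

```python
# Scalar check of the reciprocally convex bound:
#   (1/r) x^2 + (1/(1-r)) y^2 >= x^2 + y^2 + 2*s*x*y
# whenever [[1, s], [s, 1]] >= 0, i.e. |s| <= 1.
s = 0.8
for r in (0.1, 0.3, 0.5, 0.7, 0.9):
    for x, y in ((1.0, 2.0), (-1.5, 0.5), (2.0, -3.0)):
        lhs = x*x/r + y*y/(1 - r)
        rhs = x*x + y*y + 2*s*x*y
        assert lhs >= rhs - 1e-12
```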
Remark 2.
Clearly, Lemma 4 is an extension of Lemma 2 in [48], which considers only the real number field; Lemma 4 applies to the complex field as well, so its range of application is wider and more practical.

4. Main Results

In the following content, we first present the condition for the existence and uniqueness of the equilibrium point for the system in Equation (1).
Theorem 1.
On the basis of Assumptions 1 and 2, the system in Equation (1) has a unique equilibrium point if there exist positive diagonal matrices Vi (i = 1, 2, …, 6) such that the following LMI is satisfied:
Ξ = (Ξi,j)8×8 < 0,
where
Ξ1,1 = D*D − 2DV1 + L1*V3L1 + L3*V5L3, Ξ1,3 = 2V1(A1 + B1) − D*(A1 + B1),
Ξ1,6 = D*(A2 + B2) − 2V2(A2 + B2), Ξ1,7 = 2CV1 − D*C,
Ξ2,2 = D*D − 2DV2 + L2*V4L2 + L4*V6L4, Ξ2,4 = 2V2(A1 + B1) − D*(A1 + B1),
Ξ2,5 = 2V2(A2 + B2) − D*(A2 + B2), Ξ2,8 = 2CV2 − D*C,
Ξ3,3 = (A1* + B1*)(A1 + B1) − V3, Ξ3,6 = −(A1* + B1*)(A2 + B2), Ξ3,7 = (A1* + B1*)C,
Ξ4,4 = (A1* + B1*)(A1 + B1) − V4, Ξ4,5 = (A1* + B1*)(A2 + B2), Ξ4,8 = (A1* + B1*)C,
Ξ5,5 = (A2* + B2*)(A2 + B2) − V5, Ξ5,8 = (A2* + B2*)C,
Ξ6,6 = (A2* + B2*)(A2 + B2) − V6, Ξ6,7 = −(A2* + B2*)C,
Ξ7,7 = C*C − I, Ξ8,8 = C*C − I.
Proof. 
According to Assumption 1, Equation (1) can be rewritten in the following form
ẏ1(t) = −D y1(t) + C ẏ1(t − ν(t)) + A1 p1(y1(t)) − A2 p̄2(y2(t)) + B1 p1(y1(t − ν(t))) − B2 p̄2(y2(t − ν(t))) + κ1,
ẏ2(t) = −D y2(t) + C ẏ2(t − ν(t)) + A1 p2(y2(t)) + A2 p̄1(y1(t)) + B1 p2(y2(t − ν(t))) + B2 p̄1(y1(t − ν(t))) + κ2.
To prove the existence and uniqueness of the solution, we need to construct a mapping which combines the information of the system in Equation (3), and it can be written as follows:
M(y1, y2) = [−D y1 + C M1ν(y1, y2) + A1 p1(y1) − A2 p̄2(y2) + B1 p1(y1) − B2 p̄2(y2) + κ1; −D y2 + C M2ν(y1, y2) + A1 p2(y2) + A2 p̄1(y1) + B1 p2(y2) + B2 p̄1(y1) + κ2],
where
M(y1, y2) = (M1(y1, y2), M2(y1, y2))*,
M1(y1, y2) = −D y1 + C M1ν(y1, y2) + A1 p1(y1) − A2 p̄2(y2) + B1 p1(y1) − B2 p̄2(y2) + κ1,
M2(y1, y2) = −D y2 + C M2ν(y1, y2) + A1 p2(y2) + A2 p̄1(y1) + B1 p2(y2) + B2 p̄1(y1) + κ2.
If y ˘ is an equilibrium point of the system in Equation (1), in light of Assumptions 1 and 3, let y ˘ = y ˘ 1 + y ˘ 2 j ; then, y ˘ satisfies the following equation
[0; 0] = [−D y̆1 + A1 p1(y̆1) − A2 p̄2(y̆2) + B1 p1(y̆1) − B2 p̄2(y̆2) + κ1; −D y̆2 + A1 p2(y̆2) + A2 p̄1(y̆1) + B1 p2(y̆2) + B2 p̄1(y̆1) + κ2].
In light of Lemma 3, if M(y) is a homeomorphism on Cⁿ, then there exists a unique equilibrium point for the system in Equation (1).
Next, the proof is divided into two sections.
In the first place, we need to prove that M(y1, y2) is injective. Choose two points (y1, y2)*, (y1′, y2′)* ∈ Cⁿ with (y1, y2) ≠ (y1′, y2′); in light of the definition of the activation functions given by Assumption 2, we have p(y1, y2) ≠ p(y1′, y2′).
From Equation (4), we have
M(y1, y2) − M(y1′, y2′) = [−D(y1 − y1′) + C(M1(y1, y2) − M1(y1′, y2′)) + A1(p1(y1) − p1(y1′)) − A2(p̄2(y2) − p̄2(y2′)) + B1(p1(y1) − p1(y1′)) − B2(p̄2(y2) − p̄2(y2′)); −D(y2 − y2′) + C(M2(y1, y2) − M2(y1′, y2′)) + A1(p2(y2) − p2(y2′)) + A2(p̄1(y1) − p̄1(y1′)) + B1(p2(y2) − p2(y2′)) + B2(p̄1(y1) − p̄1(y1′))].
Let us multiply both sides of Equation (6) by
2 [(y1 − y1′)*  (y2 − y2′)*] [V1, 0; 0, V2] + [(M1(y1, y2) − M1(y1′, y2′))*  (M2(y1, y2) − M2(y1′, y2′))*].
We can get
{2 [(y1 − y1′)*  (y2 − y2′)*] [V1, 0; 0, V2] + [(M1(y1, y2) − M1(y1′, y2′))*  (M2(y1, y2) − M2(y1′, y2′))*]} × (M(y1, y2) − M(y1′, y2′))
= 2 [(y1 − y1′)*  (y2 − y2′)*] [V1, 0; 0, V2] (M(y1, y2) − M(y1′, y2′))
+ [(M1(y1, y2) − M1(y1′, y2′))*  (M2(y1, y2) − M2(y1′, y2′))*] (M(y1, y2) − M(y1′, y2′)).
For the sake of providing a clean and succinct representation, some symbols are defined as follows:
ϝ1 = [e1* e2* e3* e4* e5* e6* e7* e8*]*,
e1 = y1 − y1′, e2 = y2 − y2′,
e3 = p1(y1) − p1(y1′), e4 = p2(y2) − p2(y2′),
e5 = p̄1(y1) − p̄1(y1′), e6 = p̄2(y2) − p̄2(y2′),
e7 = M1(y1, y2) − M1(y1′, y2′), e8 = M2(y1, y2) − M2(y1′, y2′).
To make a transformation of Equation (8), we have
2 [e1* e2*] [V1, 0; 0, V2] (M(y1, y2) − M(y1′, y2′))
= 2e1*V1e7 + 2e2*V2e8 + [e7* e8*](M(y1, y2) − M(y1′, y2′)) − [e7* e8*](M(y1, y2) − M(y1′, y2′))
= −e7*e7 − e8*e8 + 2e1*V1[−De1 + Ce7 + A1e3 − A2e6 + B1e3 − B2e6] + 2e2*V2[−De2 + Ce8 + A1e4 + A2e5 + B1e4 + B2e5]
+ [−De1 + Ce7 + A1e3 − A2e6 + B1e3 − B2e6]*[−De1 + Ce7 + A1e3 − A2e6 + B1e3 − B2e6]
+ [−De2 + Ce8 + A1e4 + A2e5 + B1e4 + B2e5]*[−De2 + Ce8 + A1e4 + A2e5 + B1e4 + B2e5]
= −e7*e7 − e8*e8 − 2e1*V1De1 + 2e1*V1Ce7 + 2e1*V1A1e3 − 2e1*V1A2e6 + 2e1*V1B1e3 − 2e1*V1B2e6
− 2e2*V2De2 + 2e2*V2Ce8 + 2e2*V2A1e4 + 2e2*V2A2e5 + 2e2*V2B1e4 + 2e2*V2B2e5
+ e1*D*De1 − e1*D*(A1 + B1)e3 + e1*D*(A2 + B2)e6 − e1*D*Ce7
− e3*(A1* + B1*)De1 + e3*(A1* + B1*)(A1 + B1)e3 − e3*(A1* + B1*)(A2 + B2)e6 + e3*(A1* + B1*)Ce7
+ e6*(A2* + B2*)De1 − e6*(A2* + B2*)(A1 + B1)e3 + e6*(A2* + B2*)(A2 + B2)e6 − e6*(A2* + B2*)Ce7
− e7*C*De1 + e7*C*(A1 + B1)e3 − e7*C*(A2 + B2)e6 + e7*C*Ce7
+ e2*D*De2 − e2*D*(A1 + B1)e4 − e2*D*(A2 + B2)e5 − e2*D*Ce8
− e4*(A1* + B1*)De2 + e4*(A1* + B1*)(A1 + B1)e4 + e4*(A1* + B1*)(A2 + B2)e5 + e4*(A1* + B1*)Ce8
− e5*(A2* + B2*)De2 + e5*(A2* + B2*)(A1 + B1)e4 + e5*(A2* + B2*)(A2 + B2)e5 + e5*(A2* + B2*)Ce8
− e8*C*De2 + e8*C*(A1 + B1)e4 + e8*C*(A2 + B2)e5 + e8*C*Ce8.
On the basis of Assumption 2, for diagonal matrices Vi > 0 (i = 3, 4, 5, 6), we can obtain
0 ≤ e1*L1*V3L1e1 − e3*V3e3,  0 ≤ e2*L2*V4L2e2 − e4*V4e4,
0 ≤ e1*L3*V5L3e1 − e5*V5e5,  0 ≤ e2*L4*V6L4e2 − e6*V6e6.
Combining Equation (10) with Equation (12), one can obtain
2 [e1* e2*] [V1, 0; 0, V2] (M(y1, y2) − M(y1′, y2′))
≤ e1*(D*D − 2DV1 + L1*V3L1 + L3*V5L3)e1 + e1*[2V1(A1 + B1) − D*(A1 + B1)]e3
+ e1*[D*(A2 + B2) − 2V2(A2 + B2)]e6 + e1*(2CV1 − D*C)e7
+ e2*(D*D − 2DV2 + L2*V4L2 + L4*V6L4)e2 + e2*[2V2(A1 + B1) − D*(A1 + B1)]e4
+ e2*[2V2(A2 + B2) − D*(A2 + B2)]e5 + e2*(2CV2 − D*C)e8
+ e3*[(A1* + B1*)(A1 + B1) − V3]e3 − e3*[(A1* + B1*)(A2 + B2)]e6 + e3*[(A1* + B1*)C]e7
+ e4*[(A1* + B1*)(A1 + B1) − V4]e4 + e4*[(A1* + B1*)(A2 + B2)]e5 + e4*[(A1* + B1*)C]e8
+ e5*[(A2* + B2*)(A2 + B2) − V5]e5 + e5*[(A2* + B2*)C]e8
+ e6*[(A2* + B2*)(A2 + B2) − V6]e6 − e6*[(A2* + B2*)C]e7
+ e7*(C*C − I)e7 + e8*(C*C − I)e8
= ϝ1* Ξ ϝ1.
Since Ξ < 0 and (y1, y2) ≠ (y1′, y2′), the following inequality is established:
2 [e1* e2*] [V1, 0; 0, V2] (M(y1, y2) − M(y1′, y2′)) < 0.
One can draw the conclusion that M(y1, y2) ≠ M(y1′, y2′) for all (y1, y2) ≠ (y1′, y2′). Accordingly, M(y1, y2) is injective.
In the second place, we need to prove that ∥M(y1, y2)∥ → ∞ as ∥(y1, y2)∥ → ∞. Let (y1′, y2′) = (0, 0); then, we have
2 [y1* y2*] [V1, 0; 0, V2] (M(y1, y2) − M(0, 0)) ≤ −λmin(−Ξ) ∥(y1, y2)∥².
From the Cauchy–Schwarz inequality, we have
2 ∥(y1, y2)∥ ∥[V1, 0; 0, V2]∥ (∥M(y1, y2)∥ + ∥M(0, 0)∥) ≥ λmin(−Ξ) ∥(y1, y2)∥²
when ∥(y1, y2)∥ ≠ 0. So, we have
∥M(y1, y2)∥ ≥ (λmin(−Ξ) ∥(y1, y2)∥) / (2 ∥[V1, 0; 0, V2]∥) − ∥M(0, 0)∥.
Therefore, ∥M(y1, y2)∥ → ∞ as ∥(y1, y2)∥ → ∞. Thus, the conditions of Lemma 3 are satisfied, and M(y1, y2) is a homeomorphism. Hence, from Corollary 1 in [49], the existence of a unique equilibrium point of the system in Equation (1) is proved. □
In the following content, we present the conditions for the global μ-stability criteria for the system in Equation (1). Firstly, suppose that y̆ is the unique equilibrium point of the QVNTNN (Equation (1)), where y̆ = (y̆1, y̆2, …, y̆n)*. According to Assumption 1 and the transformation ỹ = y − y̆, the system in Equation (1) can be rewritten as follows:
ỹ̇1(t) = −D ỹ1(t) + C ỹ̇1(t − ν(t)) + A1(p1(ỹ1(t) + y̆1) − p1(y̆1)) − A2(p̄2(ỹ2(t) + y̆2) − p̄2(y̆2)) + B1(p1(ỹ1(t − ν(t)) + y̆1) − p1(y̆1)) − B2(p̄2(ỹ2(t − ν(t)) + y̆2) − p̄2(y̆2)),
ỹ̇2(t) = −D ỹ2(t) + C ỹ̇2(t − ν(t)) + A1(p2(ỹ2(t) + y̆2) − p2(y̆2)) + A2(p̄1(ỹ1(t) + y̆1) − p̄1(y̆1)) + B1(p2(ỹ2(t − ν(t)) + y̆2) − p2(y̆2)) + B2(p̄1(ỹ1(t − ν(t)) + y̆1) − p̄1(y̆1)).
For the sake of convenience, in this paper, some symbols are defined as follows:
1 = [I1* I2* I3* I4* I5* I6* I7* I8* I9* I10* I11* I12*]*,
2 = [h1* h2* h3* h4* h5* h6* h7* h8* h9* h10* h11* h12*]*,
I1 = ỹ1(t), I2 = ỹ1(t − ν(t)),
I3 = p1(ỹ1(t) + y̆1) − p1(y̆1), I4 = p̄2(ỹ2(t) + y̆2) − p̄2(y̆2),
I5 = p1(ỹ1(t − ν(t)) + y̆1) − p1(y̆1), I6 = p̄2(ỹ2(t − ν(t)) + y̆2) − p̄2(y̆2),
I7 = ỹ̇1(t − ν(t)), I8 = ỹ̇1(t), I9 = ỹ1(t − ν),
I10 = (2/ν(t)) ∫_(t−ν(t))^(t) ỹ1(u) du, I11 = (2/(ν − ν(t))) ∫_(t−ν)^(t−ν(t)) ỹ1(u) du,
I12 = p1(ỹ1(t − ν) + y̆1) − p1(y̆1),
h1 = ỹ2(t), h2 = ỹ2(t − ν(t)),
h3 = p2(ỹ2(t) + y̆2) − p2(y̆2), h4 = p̄1(ỹ1(t) + y̆1) − p̄1(y̆1),
h5 = p2(ỹ2(t − ν(t)) + y̆2) − p2(y̆2), h6 = p̄1(ỹ1(t − ν(t)) + y̆1) − p̄1(y̆1),
h7 = ỹ̇2(t − ν(t)), h8 = ỹ̇2(t), h9 = ỹ2(t − ν),
h10 = (2/ν(t)) ∫_(t−ν(t))^(t) ỹ2(u) du, h11 = (2/(ν − ν(t))) ∫_(t−ν)^(t−ν(t)) ỹ2(u) du,
h12 = p2(ỹ2(t − ν) + y̆2) − p2(y̆2).
Now, we present our main results in the following theorem.
Theorem 2.
Assume that Assumptions 1 and 2 hold. For a given positive constant ν, the equilibrium point of the QVNTNN (Equation (1)) is μ-stable if there exist positive definite Hermitian matrices P, Q, R, S, W, X, Y ∈ C^(n×n), constants β1 ≥ 0, β2 ≥ 0, positive definite real diagonal matrices Ni (i = 1, 2, …, 6), and a nonnegative function μ(t) belonging to L²[0, ∞) satisfying, for all t > 0,
μ̇(t)/μ(t) ≤ β1,  min{ μ(t − ν(t))/μ(t), μ(t − ν)/μ(t) } ≥ β2,
such that the following LMIs hold:
S̄i > 0 (i = 1, 2),  Φ = (Φi,j)12×12 < 0,  Ω = (Ωi,j)12×12 < 0,
where
Φ 1 , 1 = Q + X + β 1 P 4 β 2 S D N 6 * N 6 D * + Λ 1 N 1 + Λ 5 N 3 ,
Φ 1 , 2 = β 2 U 1 β 2 U 2 β 2 U 4 β 2 U 2 * 2 β 2 S , Φ 1 , 3 = A 1 N 6 * , Φ 1 , 4 = A 2 N 6 * ,
Φ 1 , 5 = B 1 N 6 * , Φ 1 , 6 = B 2 N 6 * , Φ 1 , 7 = C N 6 * , Φ 1 , 8 = P N 6 * N 5 D * ,
Φ 1 , 9 = β 2 U 1 β 2 U 2 β 2 U 4 + β 2 U 2 * , Φ 1 , 10 = 3 β 2 S , Φ 1 , 11 = β 2 U 2 + 2 β 2 U 4
Φ 2 , 2 = β 2 U 1 β 2 U 4 + β 2 U 1 * β 2 U 4 * 8 β 2 S β 2 X + ( ς 1 ) β 2 Q + Λ 3 β 2 N 2 + Λ 7 β 2 N 4 ,
Φ 2 , 9 = β 2 U 2 β 2 U 1 β 2 U 4 + β 2 U 2 * 2 β 2 S , Φ 2 , 10 = β 2 U 2 + β 2 U 4 * + 3 β 2 S ,
Φ 2 , 11 = β 2 U 4 β 2 U 2 + 3 β 2 S , Φ 3 , 3 = R + Y N 3 , Φ 3 , 8 = N 5 A 1 * , Φ 4 , 4 = N 1 ,
Φ 4 , 8 = N 5 A 2 * , Φ 5 , 5 = ( ς 1 ) β 2 R N 4 , Φ 5 , 8 = N 5 B 1 * , Φ 6 , 6 = β 2 N 2 , Φ 6 , 8 = N 5 B 2 * ,
Φ 7 , 7 = β 2 W , Φ 7 , 8 = N 5 C * , Φ 8 , 8 = W N 5 N 5 * + ν 2 S , Φ 9 , 9 = 4 β 2 S , Φ 9 , 10 = β 2 U 4 * 2 β 2 U 2 ,
Φ 9 , 11 = 3 β 2 S , Φ 10 , 10 = 3 β 2 S , Φ 10 , 11 = 4 β 2 U 4 , Φ 11 , 11 = 3 β 2 S , Φ 12 , 12 = β 2 Y ,
Ω 1 , 1 = Q + X + β 1 P 4 β 2 S D N 6 * N 6 D * + Λ 2 N 1 + Λ 6 N 3 ,
Ω 1 , 2 = β 2 U 5 β 2 U 6 β 2 U 8 β 2 U 6 * 2 β 2 S , Ω 1 , 3 = A 1 N 6 * , Ω 1 , 4 = A 2 N 6 * ,
Ω 1 , 5 = B 1 N 6 * , Ω 1 , 6 = B 2 N 6 * , Ω 1 , 7 = C N 6 * , Ω 1 , 8 = P N 6 * N 5 D * ,
Ω 1 , 9 = β 2 U 5 β 2 U 6 β 2 U 8 + β 2 U 6 * , Ω 1 , 10 = 3 β 2 S , Ω 1 , 11 = β 2 U 6 + β 2 U 8 ,
Ω 2 , 2 = β 2 U 5 β 2 U 8 + β 2 U 5 * β 2 U 8 * 8 β 2 S β 2 X + β 2 ( ς 1 ) Q + β 2 Λ 4 N 2 + β 2 Λ 8 N 4 ,
Ω 2 , 9 = β 2 U 6 β 2 U 5 β 2 U 8 + β 2 U 6 * 2 β 2 S , Ω 2 , 10 = β 2 U 6 + β 2 U 8 * + 3 β 2 S ,
Ω 2 , 11 = β 2 U 8 β 2 U 6 + 3 β 2 S , Ω 3 , 3 = R + Y N 3 , Ω 3 , 8 = N 5 A 1 * , Ω 4 , 4 = N 1 ,
Ω 4 , 8 = N 5 A 2 * , Ω 5 , 5 = β 2 ( ς 1 ) R N 4 , Ω 5 , 8 = N 5 B 1 * , Ω 6 , 6 = β 2 N 2 , Ω 6 , 8 = N 5 B 2 * ,
Ω 7 , 7 = β 2 W , Ω 7 , 8 = N 5 C * , Ω 8 , 8 = W N 5 N 5 * + ν 2 S , Ω 9 , 9 = 4 β 2 S , Ω 9 , 10 = β 2 U 8 * β 2 U 6 ,
Ω 9 , 11 = 3 S β 2 , Ω 10 , 10 = 3 β 2 S , Ω 10 , 11 = β 2 U 8 , Ω 11 , 11 = 3 β 2 S , Ω 12 , 12 = β 2 Y ,
S̄1 = [diag{S, 3S}, Ū1; Ū1*, diag{S, 3S}], Ū1 = [U1, U2; U2*, U4],
S̄2 = [diag{S, 3S}, Ū2; Ū2*, diag{S, 3S}], Ū2 = [U5, U6; U6*, U8].
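Before proceeding to the proof, the two scalar conditions on μ(t) in the theorem can be sanity-checked numerically. The sketch below (with hypothetical values ε = 0.2 and ν = 1.0) verifies them for the exponential choice μ(t) = e^(εt), where β1 = ε and β2 = e^(−εν) work:

```python
import math

eps, nu = 0.2, 1.0               # decay rate and delay bound (assumed values)
mu = lambda t: math.exp(eps * t)
beta1 = eps                       # mu'(t)/mu(t) = eps <= beta1
beta2 = math.exp(-eps * nu)       # mu(t - nu)/mu(t) = exp(-eps*nu) >= beta2

for t in (1.0, 5.0, 25.0):
    dmu = eps * mu(t)                         # closed-form derivative
    assert dmu / mu(t) <= beta1 + 1e-12
    nut = 0.5 * nu * (1 + math.sin(t))        # a delay with 0 <= nu(t) <= nu
    assert min(mu(t - nut) / mu(t), mu(t - nu) / mu(t)) >= beta2 - 1e-12
```

Analogous checks for μ(t) = t^ϖ and μ(t) = ln(ϖt + 1) recover the power-stability and log-stability specializations mentioned in Remark 1.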
Proof. 
Let us choose a new Lyapunov–Krasovskii functional for the system in Equation (13) as follows:
V(ỹ(t)) = Σ_(i=1)^(4) Vi(ỹ(t)),
where
V1(ỹ(t)) = μ(t) ỹ1*(t) P ỹ1(t) + μ(t) ỹ2*(t) P ỹ2(t),
V2(ỹ(t)) = ∫_(t−ν(t))^(t) μ(u) ỹ1*(u) Q ỹ1(u) du + ∫_(t−ν(t))^(t) μ(u) ỹ2*(u) Q ỹ2(u) du
+ ∫_(t−ν(t))^(t) μ(u) (p1(ỹ1(u) + y̆1) − p1(y̆1))* R (p1(ỹ1(u) + y̆1) − p1(y̆1)) du
+ ∫_(t−ν(t))^(t) μ(u) (p2(ỹ2(u) + y̆2) − p2(y̆2))* R (p2(ỹ2(u) + y̆2) − p2(y̆2)) du
+ ∫_(t−ν)^(t) μ(u) ỹ1*(u) X ỹ1(u) du + ∫_(t−ν)^(t) μ(u) ỹ2*(u) X ỹ2(u) du
+ ∫_(t−ν)^(t) μ(u) (p1(ỹ1(u) + y̆1) − p1(y̆1))* Y (p1(ỹ1(u) + y̆1) − p1(y̆1)) du
+ ∫_(t−ν)^(t) μ(u) (p2(ỹ2(u) + y̆2) − p2(y̆2))* Y (p2(ỹ2(u) + y̆2) − p2(y̆2)) du,
V3(ỹ(t)) = ∫_(t−ν(t))^(t) μ(u) ỹ̇1*(u) W ỹ̇1(u) du + ∫_(t−ν(t))^(t) μ(u) ỹ̇2*(u) W ỹ̇2(u) du,
V4(ỹ(t)) = ν ∫_(−ν)^(0) ∫_(t+θ)^(t) μ(u) ỹ̇1*(u) S ỹ̇1(u) du dθ + ν ∫_(−ν)^(0) ∫_(t+θ)^(t) μ(u) ỹ̇2*(u) S ỹ̇2(u) du dθ.
Differentiating V(ỹ(t)) along the trajectories of the system in Equation (13) leads to
V̇1(ỹ(t)) = μ̇(t) I1* P I1 + μ̇(t) h1* P h1 + μ(t) I8* P I1 + μ(t) h8* P h1 + μ(t) I1* P I8 + μ(t) h1* P h8,
V̇2(ỹ(t)) ≤ μ(t) I1* Q I1 − (1 − ς) μ(t − ν(t)) I2* Q I2 + μ(t) h1* Q h1 − (1 − ς) μ(t − ν(t)) h2* Q h2
+ μ(t) I3* R I3 − (1 − ς) μ(t − ν(t)) h6* R h6 + μ(t) h3* R h3 − (1 − ς) μ(t − ν(t)) I6* R I6
+ μ(t) I1* X I1 − μ(t − ν) I9* X I9 + μ(t) h1* X h1 − μ(t − ν) h9* X h9 + μ(t) I3* Y I3
− μ(t − ν) I12* Y I12 + μ(t) h3* Y h3 − μ(t − ν) h12* Y h12,
V̇3(ỹ(t)) = μ(t) I8* W I8 − μ(t − ν(t)) I7* W I7 + μ(t) h8* W h8 − μ(t − ν(t)) h7* W h7,
V̇4(ỹ(t)) = ν² μ(t) I8* S I8 + ν² μ(t) h8* S h8 − ν ∫_(t−ν)^(t) μ(u) ỹ̇1*(u) S ỹ̇1(u) du − ν ∫_(t−ν)^(t) μ(u) ỹ̇2*(u) S ỹ̇2(u) du
≤ ν² μ(t) I8* S I8 + ν² μ(t) h8* S h8 − ν μ(t − ν) ∫_(t−ν)^(t) ỹ̇1*(u) S ỹ̇1(u) du − ν μ(t − ν) ∫_(t−ν)^(t) ỹ̇2*(u) S ỹ̇2(u) du
= ν² μ(t) I8* S I8 + ν² μ(t) h8* S h8 + μ(t − ν) [ −ν ∫_(t−ν)^(t) ỹ̇1*(u) S ỹ̇1(u) du − ν ∫_(t−ν)^(t) ỹ̇2*(u) S ỹ̇2(u) du ].
Applying Lemma 1 to the integral term in Equation (16) yields
ν t ν t y ˜ ˙ 1 * ( u ) S y ˜ ˙ 1 ( u ) d u
= ν t ν ( t ) t y ˜ ˙ 1 * ( u ) S y ˜ ˙ 1 ( u ) d u ν t ν t ν ( t ) y ˜ ˙ 1 * ( u ) S y ˜ ˙ 1 ( u ) d u
ν ν ( t ) [ E 1 * S E 1 + 3 E 2 * S E 2 ] ν ν ν ( t ) [ E 3 * S E 3 + 3 E 4 * S E 4 ]
= ν ν ( t ) E 1 E 2 * S 0 0 3 S E 1 E 2 ν ν ν ( t ) E 3 E 4 * S 0 0 3 S E 3 E 4 ,
ν t ν t y ˜ ˙ 2 * ( u ) S y ˜ ˙ 2 ( u ) d u
= ν t ν ( t ) t y ˜ ˙ 2 * ( u ) S y ˜ ˙ 2 ( u ) d u ν t ν t ν ( t ) y ˜ ˙ 2 * ( u ) S y ˜ ˙ 2 ( u ) d u
ν ν ( t ) [ E 5 * S E 5 + 3 E 6 * S E 6 ] ν ν ν ( t ) [ E 7 * S E 7 + 3 E 8 * S E 8 ]
= ν ν ( t ) E 5 E 6 * S 0 0 3 S E 5 E 6 ν ν ν ( t ) E 7 E 8 * S 0 0 3 S E 7 E 8 ,
where
E 1 = I 1 − I 2 , E 2 = I 1 + I 2 − I 10 , E 3 = I 2 − I 9 , E 4 = I 2 + I 9 − I 11 ,
E 5 = h 1 − h 2 , E 6 = h 1 + h 2 − h 10 , E 7 = h 2 − h 9 , E 8 = h 2 + h 9 − h 11 .
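Lemma 1 here plays the role of a Wirtinger-based integral inequality. In scalar form it states that $(b-a)\int_a^b \dot y(u)^2\,du \ge e_1^2 + 3e_2^2$, with $e_1 = y(b)-y(a)$ and $e_2 = y(b)+y(a) - \tfrac{2}{b-a}\int_a^b y(u)\,du$, which is exactly the diag(S, 3S) pattern above. A numerical spot check (the quadrature helper and the test function are ours, not from the paper):

```python
import numpy as np

def trapezoid(vals, u):
    # simple trapezoidal rule, avoids NumPy version differences
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(u)) / 2.0)

def wirtinger_gap(y, dy, a, b, n=20001):
    """LHS - RHS of the scalar Wirtinger-based integral inequality
    on [a, b]; the gap must be non-negative for any smooth y."""
    u = np.linspace(a, b, n)
    lhs = (b - a) * trapezoid(dy(u) ** 2, u)
    e1 = y(b) - y(a)
    e2 = y(b) + y(a) - 2.0 / (b - a) * trapezoid(y(u), u)
    return lhs - (e1 ** 2 + 3.0 * e2 ** 2)

gap = wirtinger_gap(lambda u: np.sin(u) + 0.3 * u ** 2,
                    lambda u: np.cos(u) + 0.6 * u, 0.0, 2.0)
print(gap >= -1e-6)  # True: the inequality holds
```

Equality is attained for affine functions, which is why the bound is tighter than the classical Jensen inequality.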
Furthermore, since S ¯ i ≥ 0 ( i = 1 , 2 ) , according to Lemma 4, we can easily get
$$-\nu\int_{t-\nu}^{t}\dot{\tilde y}_1^*(u)S\dot{\tilde y}_1(u)\,du \le -\tilde E_1^*\,\bar S_1\,\tilde E_1, \qquad -\nu\int_{t-\nu}^{t}\dot{\tilde y}_2^*(u)S\dot{\tilde y}_2(u)\,du \le -\tilde E_2^*\,\bar S_2\,\tilde E_2,$$
with
$$\tilde E_1 = \begin{bmatrix} E_1^* & E_2^* & E_3^* & E_4^* \end{bmatrix}^{*}, \qquad \tilde E_2 = \begin{bmatrix} E_5^* & E_6^* & E_7^* & E_8^* \end{bmatrix}^{*}.$$
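Lemma 4 is the reciprocal convex combination bound: whenever the coupled matrix [[S₁, M], [M*, S₂]] is positive semidefinite, $\tfrac{1}{\alpha}x^*S_1x + \tfrac{1}{1-\alpha}y^*S_2y \ge x^*S_1x + 2\,\mathrm{Re}(x^*My) + y^*S_2y$ for every $\alpha\in(0,1)$; this is how the $\nu/\nu(t)$ and $\nu/(\nu-\nu(t))$ weights are merged into $\bar S_1$ and $\bar S_2$. A scalar spot check (the sampling scheme is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = 2.0, 3.0
m = 0.9 * np.sqrt(s1 * s2)           # guarantees [[s1, m], [m, s2]] >= 0

ok = True
for _ in range(10_000):
    x, y = rng.normal(size=2)
    a = rng.uniform(0.01, 0.99)      # alpha plays the role of nu(t)/nu
    lhs = x * x * s1 / a + y * y * s2 / (1.0 - a)
    rhs = x * x * s1 + 2.0 * x * y * m + y * y * s2
    ok = ok and (lhs >= rhs - 1e-9)
print(ok)  # True: the reciprocal convex bound holds
```

The bound removes the need to enlarge $1/\alpha$ and $1/(1-\alpha)$ to their worst-case values separately, which is the source of the reduced conservatism.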
On the other hand, for any diagonal matrices N i ≥ 0 ( i = 1 , 2 , 3 , 4 ) , it follows from Assumption 2 that
μ ( t ) I 1 * Λ 1 N 1 I 1 μ ( t ) h 4 * N 1 h 4 0 , μ ( t ) h 1 * Λ 2 N 1 h 1 μ ( t ) I 4 * N 1 I 4 0 , μ ( t ν ( t ) ) I 2 * Λ 3 N 2 I 2 μ ( t ν ( t ) ) h 6 * N 2 h 6 0 , μ ( t ν ( t ) ) h 2 * Λ 4 N 2 h 2 μ ( t ν ( t ) ) I 6 * N 2 I 6 0 , μ ( t ) I 1 * Λ 5 N 3 I 1 μ ( t ) I 3 * N 3 I 3 0 , μ ( t ) h 1 * Λ 6 N 3 h 1 μ ( t ) h 3 * N 3 h 3 0 , μ ( t ν ( t ) ) I 2 * Λ 7 N 4 I 2 μ ( t ν ( t ) ) I 5 * N 4 I 5 0 , μ ( t ν ( t ) ) h 2 * Λ 8 N 4 h 2 μ ( t ν ( t ) ) h 5 * N 4 h 5 0 .
with Λ i = d i a g ( Λ 1 i , Λ 2 i , … , Λ n i ) ( i = 1 , 2 , … , 8 ) . The following zero equalities are introduced with appropriately dimensioned complex-valued matrices N 5 and N 6 :
0 = μ ( t ) [ I 8 * N 5 + I 1 * N 6 ] [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ] + μ ( t ) [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ] * [ I 8 N 5 + I 1 N 6 ] , 0 = μ ( t ) [ h 8 * N 5 + h 1 * N 6 ] [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ] + μ ( t ) [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ] * [ h 8 N 5 + h 1 N 6 ] .
Combining i = 1 4 V ˙ i with Equations (17) and (18), we can easily get that
V ˙ ( y ˜ ( t ) ) ≤ μ ˙ ( t ) I 1 * P I 1 + μ ˙ ( t ) h 1 * P h 1 + μ ( t ) I 8 * P I 1 + μ ( t ) h 8 * P h 1 + μ ( t ) I 1 * P I 8 + μ ( t ) h 1 * P h 8 + μ ( t ) I 1 * Q I 1
μ ( t ν ( t ) ) ( 1 ς ) I 2 * Q I 2 + μ ( t ) h 1 * Q h 1 μ ( t ν ( t ) ) ( 1 ς ) h 2 * Q h 2 + μ ( t ) I 3 * R I 3
μ ( t ν ( t ) ) ( 1 ς ) h 6 * R h 6 + μ ( t ) h 3 * R h 3 μ ( t ν ( t ) ) ( 1 ς ) I 6 * R I 6 + μ ( t ) I 1 * X I 1 μ ( t ν ) I 9 * X I 9
+ μ ( t ) h 1 * X h 1 μ ( t ν ) h 9 * X h 9 + μ ( t ) I 3 * Y I 3 μ ( t ν ) I 12 * Y I 12 + μ ( t ) h 3 * Y h 3 μ ( t ν ) h 12 * Y h 12
+ μ ( t ) I 8 * W I 8 μ ( t ν ( t ) ) I 7 * W I 7 + μ ( t ) h 8 * W h 8 μ ( t ν ( t ) ) h 7 * W h 7 + ν 2 μ ( t ) I 8 * S I 8
+ ν 2 μ ( t ) h 8 * S h 8 + μ ( t ν ) ( ν ν ( t ) E 1 E 2 * S 0 0 3 S E 1 E 2 ν ν ν ( t ) E 3 E 4 * S 0 0 3 S E 3 E 4
ν ν ( t ) E 5 E 6 * S 0 0 3 S E 5 E 6 ν ν ν ( t ) E 7 E 8 * S 0 0 3 S E 7 E 8 ) + μ ( t ) I 1 * Λ 1 N 1 I 1 μ ( t ) h 4 * N 1 h 4
+ μ ( t ) h 1 * Λ 2 N 1 h 1 μ ( t ) I 4 * N 1 I 4 + μ ( t ν ( t ) ) I 2 * Λ 3 N 2 I 2 μ ( t ν ( t ) ) h 6 * N 2 h 6 + μ ( t ν ( t ) ) h 2 * Λ 4 N 2 h 2
μ ( t ν ( t ) ) I 6 * N 2 I 6 + μ ( t ) I 1 * Λ 5 N 3 I 1 μ ( t ) I 3 * N 3 I 3 + μ ( t ) h 1 * Λ 6 N 3 h 1 μ ( t ) h 3 * N 3 h 3
+ μ ( t ν ( t ) ) I 2 * Λ 7 N 4 I 2 μ ( t ν ( t ) ) I 5 * N 4 I 5 + μ ( t ν ( t ) ) h 2 * Λ 8 N 4 h 2 μ ( t ν ( t ) ) h 5 * N 4 h 5
+ μ ( t ) [ I 8 * N 5 + I 1 * N 6 ] [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ]
+ μ ( t ) [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ] * [ I 8 N 5 + I 1 N 6 ]
+ μ ( t ) [ h 8 * N 5 + h 1 * N 6 ] [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ]
+ μ ( t ) [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ] * [ h 8 N 5 + h 1 N 6 ]
≤ μ ( t ) { ( μ ˙ ( t ) / μ ( t ) ) I 1 * P I 1 + ( μ ˙ ( t ) / μ ( t ) ) h 1 * P h 1 + I 8 * P I 1 + h 8 * P h 1 + I 1 * P I 8 + h 1 * P h 8 + I 1 * Q I 1
μ ( t ν ( t ) ) ( 1 ς ) μ ( t ) I 2 * Q I 2 + h 1 * Q h 1 μ ( t ν ( t ) ) ( 1 ς ) μ ( t ) h 2 * Q h 2 + I 3 * R I 3
μ ( t ν ( t ) ) ( 1 ς ) μ ( t ) h 6 * R h 6 + h 3 * R h 3 μ ( t ν ( t ) ) ( 1 ς ) μ ( t ) I 6 * R I 6 + I 1 * X I 1 μ ( t ν ) μ ( t ) I 9 * X I 9
+ h 1 * X h 1 μ ( t ν ) μ ( t ) h 9 * X h 9 + I 3 * Y I 3 μ ( t ν ) μ ( t ) I 12 * Y I 12 + h 3 * Y h 3 μ ( t ν ) μ ( t ) h 12 * Y h 12
+ I 8 * W I 8 μ ( t ν ( t ) ) μ ( t ) I 7 * W I 7 + h 8 * W h 8 μ ( t ν ( t ) ) μ ( t ) h 7 * W h 7 + ν 2 I 8 * S I 8
+ ν 2 h 8 * S h 8 + μ ( t ν ) μ ( t ) ( ν ν ( t ) E 1 E 2 * S 0 0 3 S E 1 E 2 ν ν ν ( t ) E 3 E 4 * S 0 0 3 S E 3 E 4
ν ν ( t ) E 5 E 6 * S 0 0 3 S E 5 E 6 ν ν ν ( t ) E 7 E 8 * S 0 0 3 S E 7 E 8 ) + I 1 * Λ 1 N 1 I 1 h 4 * N 1 h 4
+ h 1 * Λ 2 N 1 h 1 I 4 * N 1 I 4 + μ ( t ν ( t ) ) μ ( t ) I 2 * Λ 3 N 2 I 2 μ ( t ν ( t ) ) μ ( t ) h 6 * N 2 h 6 + μ ( t ν ( t ) ) μ ( t ) h 2 * Λ 4 N 2 h 2
μ ( t ν ( t ) ) μ ( t ) I 6 * N 2 I 6 + I 1 * Λ 5 N 3 I 1 I 3 * N 3 I 3 + h 1 * Λ 6 N 3 h 1 h 3 * N 3 h 3
+ μ ( t ν ( t ) ) μ ( t ) I 2 * Λ 7 N 4 I 2 μ ( t ν ( t ) ) μ ( t ) I 5 * N 4 I 5 + μ ( t ν ( t ) ) μ ( t ) h 2 * Λ 8 N 4 h 2 μ ( t ν ( t ) ) μ ( t ) h 5 * N 4 h 5
+ [ I 8 * N 5 + I 1 * N 6 ] [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ]
+ [ I 8 D I 1 + C I 7 + A 1 I 3 A 2 I 4 + B 1 I 5 B 2 I 6 ] * [ I 8 N 5 + I 1 N 6 ]
+ [ h 8 * N 5 + h 1 * N 6 ] [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ]
+ [ h 8 D h 1 + C h 7 + A 1 h 3 + A 2 h 4 + B 1 h 5 + B 2 h 6 ] * [ h 8 N 5 + h 1 N 6 ] }
≤ μ ( t ) ζ 1 * Φ ζ 1 + μ ( t ) ζ 2 * Ω ζ 2 ,
where ζ 1 and ζ 2 denote the corresponding augmented state vectors, and Φ , Ω , S ¯ 1 , and S ¯ 2 are defined in Theorem 2.
Consequently, according to Equation (14), we have
V ˙ ( y ˜ ( t ) ) ≤ 0 .
Combining this with Lemma 2, and noting that λ min ( P ) is a positive constant, one can get from Equation (15) that
V ( 0 ) ≥ V ( y ˜ ( t ) ) ≥ μ ( t ) λ min ( P ) ∥ y ˜ ( t ) ∥ 2 ,
for t ≥ t 0 ≥ 0 , and we have
∥ y ˜ ( t ) ∥ 2 ≤ M / μ ( t ) ,
where M = V ( 0 ) / λ min ( P ) . By the above derivation, it is obvious that Definition 1 is satisfied, and the origin point of the QVNTNNs (Equation (1)) is μ -stable.  □
Corollary 1.
Assume that Assumptions 1 and 2 hold. Given a positive constant ν, the equilibrium point of QVNTNNs (Equation (1)) is globally exponentially stable if there exist positive definite Hermitian matrices P ∈ C n × n , Q ∈ C n × n , R ∈ C n × n , S ∈ C n × n , W ∈ C n × n , X ∈ C n × n , Y ∈ C n × n , constants β 1 > 0 , β 2 > 0 , positive definite real diagonal matrices N i ( i = 1 , 2 , … , 6 ) , and a nonnegative function μ ( t ) belonging to L 2 [ 0 , ∞ ) such that Φ, Ω, and S ¯ i ( i = 1 , 2 ) in Theorem 2 hold, where β 1 = ϖ and β 2 = e^{ − ϖ ν } .
Proof. 
Taking μ ( t ) = e^{ ϖ t } , we can obtain
$$\frac{\dot\mu(t)}{\mu(t)} = \varpi = \beta_1, \qquad \min\left\{\frac{\mu(t-\nu(t))}{\mu(t)},\,\frac{\mu(t-\nu)}{\mu(t)}\right\} = e^{-\varpi\nu} = \beta_2.$$
On the basis of the above discussion, it is clear that the results follow directly from Theorem 2. The proof is completed.  □
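The two bounds used above can be spot-checked numerically on a grid, using the data of Example 1 below ($\varpi = 0.1$, $\nu = 12.1566$); the delay profile and variable names are ours, and only $0 \le \nu(t) \le \nu$ is assumed:

```python
import numpy as np

varpi, nu = 0.1, 12.1566
t = np.linspace(nu, 200.0, 2000)
nu_t = nu * np.abs(np.sin(t))        # any delay with 0 <= nu(t) <= nu
mu = lambda s: np.exp(varpi * s)

beta1 = varpi                        # mu'(t)/mu(t) == varpi exactly
beta2 = np.exp(-varpi * nu)
ratio = np.minimum(mu(t - nu_t), mu(t - nu)) / mu(t)
print(np.allclose(varpi * mu(t) / mu(t), beta1))  # True
print(np.all(ratio >= beta2 - 1e-12))             # True
```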
Corollary 2.
Assume that Assumptions 1 and 2 hold. Given a positive constant ν, the equilibrium point of QVNTNNs (Equation (1)) is globally power-stable if there exist positive definite Hermitian matrices P ∈ C n × n , Q ∈ C n × n , R ∈ C n × n , S ∈ C n × n , W ∈ C n × n , X ∈ C n × n , Y ∈ C n × n , two constants β 1 > 0 , β 2 > 0 , positive definite real diagonal matrices N i ( i = 1 , 2 , … , 6 ) , and a nonnegative function μ ( t ) belonging to L 2 [ 0 , ∞ ) such that Φ, Ω, and S ¯ i ( i = 1 , 2 ) in Theorem 2 hold, where β 1 = ϖ / ( 2 ν ) and β 2 = ( 1 / 2 )^{ ϖ } .
Proof. 
Taking μ ( t ) = t^{ ϖ } , for any t ≥ 2 max { 1 , ν } , we can obtain
$$\frac{\dot\mu(t)}{\mu(t)} \le \frac{\varpi}{2\nu} = \beta_1, \qquad \min\left\{\frac{\mu(t-\nu(t))}{\mu(t)},\,\frac{\mu(t-\nu)}{\mu(t)}\right\} \ge \Big(\frac{1}{2}\Big)^{\varpi} = \beta_2.$$
By the above computation, it is concluded that the conditions in Theorem 2 are still satisfied. The proof is completed.  □
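The same kind of grid check works for the power-stability weight $\mu(t)=t^{\varpi}$ (a sketch with the data of Example 2 below; variable names are ours):

```python
import numpy as np

varpi, nu = 0.1, 15.7508
t0 = 2.0 * max(1.0, nu)                 # lower end of the admissible range
t = np.linspace(t0, 10.0 * t0, 2000)

dmu_over_mu = varpi / t                 # (t^w)' / t^w = w / t
beta1 = varpi / (2.0 * nu)
beta2 = 0.5 ** varpi
ratio = ((t - nu) / t) ** varpi         # worst case of mu(t - nu(t)) / mu(t)
print(np.all(dmu_over_mu <= beta1 + 1e-12))  # True
print(np.all(ratio >= beta2 - 1e-12))        # True
```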
Corollary 3.
Assume that Assumptions 1 and 2 hold. Given a positive constant ν, the equilibrium point of QVNTNNs (Equation (1)) is globally log-stable if there exist positive definite Hermitian matrices P ∈ C n × n , Q ∈ C n × n , R ∈ C n × n , S ∈ C n × n , W ∈ C n × n , X ∈ C n × n , Y ∈ C n × n , constants β 1 > 0 , β 2 > 0 , positive definite real diagonal matrices N i ( i = 1 , 2 , … , 6 ) , and a nonnegative function μ ( t ) belonging to L 2 [ 0 , ∞ ) such that Φ, Ω, and S ¯ i ( i = 1 , 2 ) in Theorem 2 hold, where β 1 = ϖ / e and β 2 = 1 / ln ( e + ϖ ν ) .
Proof. 
Taking μ ( t ) = ln ( ϖ t + 1 ) , for any t ≥ ( e − 1 ) / ϖ + ν , we have
$$\frac{\dot\mu(t)}{\mu(t)} \le \frac{\varpi}{e} = \beta_1, \qquad \min\left\{\frac{\mu(t-\nu(t))}{\mu(t)},\,\frac{\mu(t-\nu)}{\mu(t)}\right\} \ge \frac{1}{\ln(e+\varpi\nu)} = \beta_2.$$
From the above expressions, and based on Theorem 2, one can conclude that the conditions in Corollary 3 are satisfied. This completes the proof.  □
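For the logarithmic weight $\mu(t)=\ln(\varpi t+1)$ the two bounds can likewise be verified numerically on the admissible range (a sketch with the data of Example 3 below; variable names are ours):

```python
import numpy as np

varpi, nu = 0.1, 10.3423
t0 = (np.e - 1.0) / varpi + nu          # lower end of the admissible range
t = np.linspace(t0, 20.0 * t0, 2000)

mu = np.log(varpi * t + 1.0)
dmu_over_mu = varpi / ((varpi * t + 1.0) * mu)
beta1 = varpi / np.e
beta2 = 1.0 / np.log(np.e + varpi * nu)
ratio = np.log(varpi * (t - nu) + 1.0) / mu   # mu(t - nu) / mu(t)
print(np.all(dmu_over_mu <= beta1 + 1e-12))   # True
print(np.all(ratio >= beta2 - 1e-12))         # True
```

The minimum of the ratio is attained at $t = (e-1)/\varpi + \nu$, where it equals $1/\ln(e+\varpi\nu)$ exactly.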
Remark 3.
Compared with the existing literature (see, e.g., [27,28,30]), we use the reciprocal convex combination approach combined with the free-weighting matrix method to obtain Theorem 2. In this way, the time-delay information is fully exploited, which yields a less conservative result.
Remark 4.
By Theorem 2, we obtain the stability criterion of global μ-stability, and then we can generalize the results to the global exponential stability, global power-stability, and global log-stability.
Remark 5.
Since delay-dependent stability conditions are generally less conservative than delay-independent ones, this paper mainly considered the delay-dependent stability of systems with bounded time-varying delays. In fact, stability conditions for QVNTNNs with unbounded time-varying delays can also be established with a similar method. Moreover, whether the stability conditions in this paper remain suitable for unbounded time-varying delays depends on the properties of the QVNTNN itself.

5. Numerical Examples

In order to show the effectiveness and advantages of the proposed method, three interesting numerical examples are given as follows.
Example 1.
The delayed QVNTNN (Equation (1)) is rewritten as follows:
y ˙ ( t ) − C y ˙ ( t − ν ( t ) ) = − D y ( t ) + A p ( y ( t ) ) + B p ( y ( t − ν ( t ) ) ) + κ ,
where y = y 11 + i y 12 + j y 21 + k y 22 Q 2 × 1 , and
A = 0.2 + 0.5 i 0.5 j + 0.1 k 0.4 + 0.4 i 0.6 j + 0.1 k 0.5 + 0.2 i 0.1 j + 0.5 k 0.3 + 0.4 i + 0.1 j + 0.6 k
= 0.2 + 0.5 i 0.4 + 0.4 i 0.5 + 0.2 i 0.3 + 0.4 i + 0.5 + 0.1 i 0.6 + 0.1 i 0.1 + 0.5 i 0.1 + 0.6 i j
= A 1 + A 2 j ,
B = 0.3 + 0.4 i 0.5 j + 0.2 k 0.6 0.2 i 0.3 j 0.5 k 0.2 + 0.8 i + 0.2 j + 0.5 k 0.4 0.3 i 0.5 j 0.3 k
= 0.3 + 0.4 i 0.6 0.2 i 0.2 + 0.8 i 0.4 0.3 i + 0.5 + 0.2 i 0.3 0.5 i 0.2 + 0.5 i 0.5 0.3 i j
= B 1 + B 2 j ,
C = 0.1 + 0.05 i + 0.2 j + 0.05 k 0.2 + 0.04 i + 0.4 j + 0.04 k 0.1 + 0.04 i 0.5 j + 0.02 k 0.2 + 0.04 i + 0.03 j + 0.04 k
= 0.1 + 0.05 i 0.2 + 0.04 i 0.1 + 0.04 i 0.2 + 0.04 i + 0.2 + 0.05 i 0.4 + 0.04 i 0.5 + 0.02 i 0.03 + 0.04 i j
= C 1 + C 2 j ,
D = d i a g ( 5 , 5 ) , κ = ( 0 , 0 ) * .
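The splitting above is the Cayley–Dickson construction $q = (a+bi) + (c+di)\,j$, under which quaternion products can be carried out entirely with complex pairs via the rule $jz = \bar z j$; this is how the paper sidesteps the non-commutativity of quaternion multiplication. A minimal sketch (function names are ours) checking the pair formula against the Hamilton product:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as 4-tuples (a, b, c, d),
    i.e., a + b i + c j + d k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def cd_mul(x1, x2, y1, y2):
    """Product of x = x1 + x2 j and y = y1 + y2 j on complex pairs,
    using j z = conj(z) j, so x y = (x1 y1 - x2 conj(y2))
    + (x1 y2 + x2 conj(y1)) j."""
    return x1*y1 - x2*np.conj(y2), x1*y2 + x2*np.conj(y1)

rng = np.random.default_rng(1)
p, q = rng.normal(size=4), rng.normal(size=4)
r = quat_mul(p, q)
r1, r2 = cd_mul(p[0] + 1j*p[1], p[2] + 1j*p[3],
                q[0] + 1j*q[1], q[2] + 1j*q[3])
print(np.allclose([r[0], r[2]], [r1.real, r2.real]) and
      np.allclose([r[1], r[3]], [r1.imag, r2.imag]))  # True
```

The same pair formula extends entrywise to the coefficient matrices, giving the two complex-valued systems analyzed in the paper.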
In this example, we take the activation function as p ( u ) = 0.5 ( | u + 1 | − | u − 1 | ) + 0.5 ( | u + 1 | − | u − 1 | ) j . Clearly, Assumption 2 is satisfied with Λ i = d i a g ( 0.01 , 0.01 ) ( i = 1 , 2 , … , 8 ) . Assume that the time-varying delay satisfies ν ( t ) = 12.1566 | sin ( t ) | ; then it can be computed that ς = 0.5 and ν = 12.1566 . In addition, let ϖ = 0.1 ; it is easy to calculate that β 1 = 0.1 and β 2 = e^{ − 0.05 } . By using the Yalmip toolbox to solve the conditions in Corollary 1, we obtain the following feasible solutions.
P = 7.1417 + 0.0000 i 1.0375 + 0.1874 i 1.0375 0.1874 i 6.9721 + 0.0000 i × 10 2 , Q = 2.9401 + 0.0000 i 1.1097 + 0.2555 i 1.1097 0.2555 i 2.4352 + 0.0000 i × 10 3 ,
R = 83.3051 + 0.0000 i 18.2915 + 11.0733 i 18.2915 11.0733 i 82.7744 + 0.0000 i , S = 0.6318 + 0.0000 i 0.0028 + 0.0653 i 0.0028 0.0653 i 0.6254 + 0.0000 i ,
W = 51.2573 + 0.0000 i 15.4123 1.1918 i 15.4123 + 1.1918 i 53.7044 + 0.0000 i , N 1 = 1.2019 0 0 1.2376 × 10 3 ,
N 2 = 923.8130 0 0 866.7363 , N 3 = 462.6568 0 0 479.7606 , N 4 = 1.1404 0 0 1.0782 × 10 3 ,
N 5 = 1.1850 + 0.2537 i 0.2186 + 0.3304 i 0.1780 + 0.2630 i 1.2111 + 0.2945 i × 10 2 , N 6 = 3.7716 + 1.1880 i 0.7546 + 1.5896 i 1.0880 + 1.2376 i 3.0510 + 1.4867 i × 10 2 ,
U 1 = 0.0273 + 0.0000 i 0.0052 + 0.0019 i 0.0052 0.0019 i 0.0200 + 0.0000 i , U 2 = 0.7157 + 0.0000 i 0.1415 + 0.1018 i 0.1415 0.1018 i 0.6476 + 0.0000 i × 10 3 ,
U 4 = 0.0131 + 0.0000 i 0.0035 0.0007 i 0.0035 + 0.0007 i 0.0124 + 0.0000 i , U 5 = 0.0264 + 0.0000 i 0.0050 + 0.0019 i 0.0050 0.0019 i 0.0192 + 0.0000 i ,
U 6 = 0.0012 + 0.0000 i 0.0003 + 0.0002 i 0.0003 0.0002 i 0.0009 + 0.0000 i , U 8 = 0.0024 + 0.0000 i 0.0210 0.0019 i 0.0210 + 0.0019 i 0.0049 + 0.0000 i ,
X = 6.0218 + 0.0000 i 4.6899 1.0333 i 4.6899 + 1.0333 i 3.9657 + 0.0000 i × 10 2 , Y = 1.4287 + 0.0000 i 0.2653 + 0.1715 i 0.2653 0.1715 i 1.4202 + 0.0000 i × 10 2 .
Thus, the equilibrium point of QVNTNNs (Equation (1)) is globally exponentially stable. Figure 1 shows four parts of the state responses of the QVNTNNs (Equation (1)).
Example 2.
The delayed QVNTNN (Equation (1)) is rewritten as follows:
y ˙ ( t ) − C y ˙ ( t − ν ( t ) ) = − D y ( t ) + A p ( y ( t ) ) + B p ( y ( t − ν ( t ) ) ) + κ ,
where y = y 11 + i y 12 + j y 21 + k y 22 Q 2 × 1 , and
A = 0.4 + 0.3 i 0.5 j + 0.4 k 0.7 + 0.7 i 0.6 j + 0.14 k 0.6 0.4 i + 0.14 j + 0.53 k 0.7 + 0.3 i + 0.1 j + 0.6 k
= 0.4 + 0.3 i 0.7 + 0.7 i 0.6 0.4 i 0.7 + 0.3 i + 0.5 + 0.4 i 0.6 + 0.14 i 0.14 + 0.53 i 0.1 + 0.6 i j
= A 1 + A 2 j ,
B = 0.7 j + 0.1 k 0.5 0.3 i + 0.6 j 0.1 k 0.7 i + 0.2 j + 0.6 k 0.1 + 0.6 i 0.3 j 0.3 k
= 0 0.5 0.3 i 0.7 i 0.1 + 0.6 i + 0.7 + 0.1 i 0.6 0.1 i 0.2 + 0.6 i 0.3 0.3 i j
= B 1 + B 2 j ,
C = 0.07 + 0.05 i 0.06 j + 0.03 k 0.06 + 0.02 i 0.04 j 0.02 k 0.04 + 0.04 i + 0.01 j + 0.04 k 0.06 + 0.04 i 0.02 j 0.03 k
= 0.07 + 0.05 i 0.06 + 0.02 i 0.04 + 0.04 i 0.06 + 0.04 i + 0.06 + 0.03 i 0.04 0.02 i 0.01 + 0.04 i 0.02 0.03 i j
= C 1 + C 2 j ,
D = d i a g ( 3 , 3 ) , κ = ( 0 , 0 ) * .
Here, we use p ( u ) = ( 1 / 25 ) ( | u + 1 | − | u − 1 | ) + ( 1 / 10 ) ( | u + 1 | − | u − 1 | ) j as the activation function. Clearly, Assumption 2 is satisfied with Λ 1 = Λ 3 = Λ 5 = Λ 7 = d i a g ( 0.08 , 0.08 ) and Λ 2 = Λ 4 = Λ 6 = Λ 8 = d i a g ( 0.2 , 0.2 ) . Assuming that the time-varying delay satisfies ν ( t ) = 1 + 15.7508 sin ( t ) , it can be computed that ς = 0.2 and ν = 15.7508 . In addition, let ϖ = 0.1 ; then it is easy to calculate that β 1 = 1 / ( 20 ν ) and β 2 = ( 1 / 2 )^{ 0.1 } . Using the Yalmip toolbox to solve the conditions in Corollary 2, a feasible solution is obtained after calculation.
P = 8.1305 + 0.0000 i 1.6705 1.3097 i 1.6705 + 1.3097 i 7.0427 + 0.0000 i × 10 2 , Q = 1.7466 + 0.0000 i 0.6000 0.3775 i 0.6000 + 0.3775 i 1.6290 + 0.0000 i × 10 3 ,
R = 1.3109 + 0.0000 i 0.2807 0.2632 i 0.2807 + 0.2632 i 1.3281 + 0.0000 i × 10 2 , S = 0.7719 + 0.0000 i 0.1719 0.1559 i 0.1719 + 0.1559 i 0.6759 + 0.0000 i ,
W = 74.2120 + 0.0000 i 23.1226 14.6231 i 23.1226 + 14.6231 i 63.5486 + 0.0000 i , N 1 = 1.5729 0 0 1.4706 × 10 3 ,
N 2 = 1.3224 0 0 1.2860 × 10 3 , N 3 = 1.0368 0 0 1.0848 × 10 3 , N 4 = 1.2418 0 0 1.1836 × 10 3 ,
N 5 = 2.4691 + 0.6137 i 0.3724 + 0.2442 i 0.2087 + 0.6044 i 2.1374 + 0.3713 i × 10 2 , N 6 = 5.1458 + 1.1580 i 0.5341 + 0.9446 i 0.5025 + 1.4435 i 4.6514 + 1.0152 i × 10 2 ,
U 1 = 0.0232 + 0.0000 i 0.0684 + 0.0823 i 0.0684 0.0823 i 0.0371 + 0.0000 i , U 2 = 0.0050 + 0.0000 i 0.0029 0.0023 i 0.0029 + 0.0023 i 0.0037 + 0.0000 i ,
U 4 = 0.0518 + 0.0000 i 0.0300 + 0.0223 i 0.0300 0.0223 i 0.0392 + 0.0000 i , U 5 = 0.0228 + 0.0000 i 0.0500 + 0.0575 i 0.0500 0.0575 i 0.0186 + 0.0000 i ,
U 6 = 0.6729 + 0.0000 i 0.0180 0.0231 i 0.0180 + 0.0231 i 0.6275 + 0.0000 i × 10 3 , U 8 = 0.0658 + 0.0000 i 0.0529 + 0.0383 i 0.0529 0.0383 i 0.0454 + 0.0000 i ,
X = 2.1273 + 0.0000 i 2.7733 + 1.6689 i 2.7733 1.6689 i 1.9530 + 0.0000 i × 10 2 , Y = 3.2197 + 0.0000 i 0.4433 0.6310 i 0.4433 + 0.6310 i 3.2625 + 0.0000 i × 10 2 .
Thus, the equilibrium point of QVNTNNs (Equation (1)) is globally power-stable. Figure 2 shows four parts of the state responses of the QVNTNNs (Equation (1)).
We have listed the maximal allowable bounds of ν for QVNNs and QVNTNNs in Table 1. From the comparison, we can see that the maximal delay bounds of the QVNNs are larger than those of the QVNTNNs.
Example 3.
The delayed QVNTNN (Equation (1)) is rewritten as follows:
y ˙ ( t ) − C y ˙ ( t − ν ( t ) ) = − D y ( t ) + A p ( y ( t ) ) + B p ( y ( t − ν ( t ) ) ) + κ ,
where y = y 11 + i y 12 + j y 21 + k y 22 Q 2 × 1 , and
A = 0.7 + 1 i 0.2 j + 0.4 k 0.3 + 1.2 i 0.4 j + 0.3 k 0.3 0.2 i + 0.2 j + 0.1 k 1 + i 0.2 j + 0.4 k
= 0.7 + 1 i 0.3 + 1.2 i 0.3 0.2 i 1 + 1 i + 0.2 + 0.4 i 0.4 + 0.3 i 0.2 + 0.1 i 0.2 + 0.4 i j
= A 1 + A 2 j ,
B = 0.4 + 0.7 i + 0.2 j + 0.5 k 1 + 0.5 i + 0.3 j 0.5 k 0.3 + 0.2 i 0.2 j + 0.1 k 0.5 + 0.5 i + 0.2 j + 0.4 k
= 0.4 + 0.7 i 1 + 0.5 i 0.3 + 0.2 i 0.5 + 0.5 i + 0.2 + 0.5 i 0.3 0.5 i 0.2 + 0.1 i 0.2 + 0.4 i j
= B 1 + B 2 j ,
C = 0.2 + 0.08 i + 0.3 j + 0.05 k 0.5 + 0.08 i + 0.8 j + 0.01 k 0.3 0.02 i 0.5 j + 0.02 k 0.2 + 0.04 i + 1 j + 0.02 k
= 0.2 + 0.08 i 0.5 + 0.08 i 0.3 0.02 i 0.2 + 0.04 i + 0.3 + 0.05 i 0.8 + 0.01 i 0.5 + 0.02 i 1 + 0.02 i j
= C 1 + C 2 j ,
D = d i a g ( 1.8 , 2.8 ) , κ = ( 0 , 0 ) * .
For this example, the activation function is chosen as p ( u ) = 0.5 tanh ( u ) + 0.5 tanh ( u ) j . Clearly, it can be verified that Assumption 2 is satisfied with Λ 1 = Λ 3 = Λ 5 = Λ 7 = d i a g ( 0.07 , 0.07 ) and Λ 2 = Λ 4 = Λ 6 = Λ 8 = d i a g ( 0.3 , 0.3 ) . Assuming that the time-varying delay satisfies ν ( t ) = 10.3423 | sin ( t ) | , it can be computed that ς = 0.5 and ν = 10.3423 . In addition, let ϖ = 0.1 ; then it is easy to calculate that β 1 = 1 / ( 10 e ) and β 2 = 1 / ln ( e + 0.1 ν ) . Using the Yalmip toolbox to solve the conditions in Corollary 3, we obtained the following feasible solutions:
P = 3.6113 + 0.0000 i 0.1457 0.3895 i 0.1457 + 0.3895 i 2.2912 + 0.0000 i × 10 2 , Q = 1.0170 + 0.0000 i 0.1735 0.1745 i 0.1735 + 0.1745 i 1.3937 + 0.0000 i × 10 2 ,
R = 1.5034 + 0.0000 i 0.6070 0.4689 i 0.6070 + 0.4689 i 1.0799 + 0.0000 i × 10 2 , S = 1.3172 + 0.0000 i 0.0604 0.1931 i 0.0604 + 0.1931 i 0.5848 + 0.0000 i ,
W = 41.0682 + 0.0000 i 1.8619 5.9645 i 1.8619 + 5.9645 i 18.7619 + 0.0000 i , N 1 = 780.2274 0 0 619.7779 ,
N 2 = 663.6841 0 0 381.4323 , N 3 = 1.0702 0 0 0.6026 × 10 3 , N 4 = 629.9553 0 0 528.2100 ,
N 5 = 1.8379 + 0.2680 i 0.0879 + 0.1924 i 0.1497 + 0.1378 i 0.7496 + 0.1278 i × 10 2 , N 6 = 3.6887 + 0.4126 i 0.1864 + 0.5835 i 0.2439 + 0.1794 i 1.4943 + 0.2445 i × 10 2 ,
U 1 = 0.1033 + 0.0000 i 0.0089 0.0339 i 0.0089 + 0.0339 i 0.2079 + 0.0000 i , U 2 = 0.0075 + 0.0000 i 0.0005 0.0002 i 0.0005 + 0.0002 i 0.0018 + 0.0000 i ,
U 4 = 0.0712 + 0.0000 i 0.0021 + 0.0060 i 0.0021 0.0060 i 0.0327 + 0.0000 i , U 5 = 0.0460 + 0.0000 i 0.0147 + 0.0212 i 0.0147 0.0212 i 0.0285 + 0.0000 i ,
U 6 = 0.0175 + 0.0000 i 0.0005 0.0017 i 0.0005 + 0.0017 i 0.0026 + 0.0000 i , U 8 = 0.0543 + 0.0000 i 0.0027 0.0120 i 0.0027 + 0.0120 i 0.0099 + 0.0000 i ,
X = 3.7674 + 0.0000 i 0.0512 0.0312 i 0.0512 + 0.0312 i 2.8247 + 0.0000 i × 10 2 , Y = 1.7454 + 0.0000 i 0.6159 0.3628 i 0.6159 + 0.3628 i 1.3755 + 0.0000 i × 10 2 .
Thus, the equilibrium point of QVNTNNs (Equation (1)) is globally log-stable. Figure 3 shows four parts of the state responses of the QVNTNNs (Equation (1)).

6. Conclusions

In this paper, the global μ -stability problem of QVNTNNs with time-varying delays is discussed. Firstly, the QVNTNNs are transformed into two complex-valued systems by using a transformation that reduces the complexity of the computation generated by the non-commutativity of quaternion multiplication, and a new convex inequality in the complex field is introduced. Secondly, the conditions for the existence and uniqueness of the equilibrium point are obtained primarily by applying the homeomorphism theory. Thirdly, the global stability conditions of the complex-valued systems are provided by constructing a novel Lyapunov–Krasovskii functional, using an integral inequality technique, and the reciprocal convex combination approach. The obtained global μ -stability conditions are divided into three different kinds of stability forms by varying the positive continuous function μ ( t ) . In the end, three reliable examples and a simulation are provided to demonstrate the validity of the obtained LMI conditions. In the future, the stability, stochasticity, and synchronization of QVNTNNs with time delays, as well as QVNTNNs with Markovian switching, will be considered based on the results in this article.

Author Contributions

J.S. put forward the innovation of the article and completed the writing of the whole article. L.X. guided the construction of the Lyapunov–Krasovskii functional. Z.L. gave some suggestions on LMI programming. T.W. gave advice in the process of proving the existence and uniqueness of the solution.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 11461082, 11601474, and 61472093, the Key Laboratory of Numerical Simulation of Sichuan Province under Grant 2017KF002, and the Postgraduate Innovation Foundation of Yunnan Minzu University under Grant 2018YJCXS225.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Birx, D.L.; Pipenberg, S.J. A complex mapping network for phase sensitive classification. IEEE Trans. Neural Netw. 1993, 4, 127–135. [Google Scholar] [CrossRef] [PubMed]
  2. Forster, B. Five Good Reasons for Complex-Valued Transforms in Image Processing; Springer International Publishing: Cham, Switzerland, 2014. [Google Scholar]
  3. Arima, Y. Envelope-detection Millimeter-wave Active Imaging System using Complex-Valued Self-Organizing Map Image Processing. Water Resour. Res. 2015, 50, 1846. [Google Scholar]
  4. Filliat, D.; Battesti, E.; Bazeille, S. RGBD object recognition and visual texture classification for indoor semantic mapping. In Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, USA, 23–24 April 2012. [Google Scholar]
  5. Hirose, A.; Eckmiller, R. Behavior control of coherent-type neural networks by carrier-frequency modulation. IEEE Trans. Neural Netw. 1996, 7, 1032–1034. [Google Scholar] [CrossRef] [PubMed]
  6. Xu, D.S.; Tan, M.C. Delay-independent stability criteria for complex-valued BAM neutral-type neural networks with time delays. Nonlinear Dyn. 2017, 89, 819–832. [Google Scholar] [CrossRef]
  7. Guo, R.; Zhang, Z.; Liu, X.; Lin, C. Existence, uniqueness, and exponential stability analysis for complex-valued memristor-based BAM neural networks with time delays. Appl. Math. Comput. 2017, 311, 100–117. [Google Scholar] [CrossRef]
  8. Liu, D.; Zhu, S.; Ye, E. Synchronization stability of memristor-based complex-valued neural networks with time delays. Neural Netw. 2017, 96, 115–127. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, H.; Wang, X.Y. Complex projective synchronization of complex-valued neural network with structure identification. J. Frankl. Inst. 2017, 354, 5011–5025. [Google Scholar] [CrossRef]
  10. Ali, M.S.; Yogambigai, J. Extended dissipative synchronization of complex dynamical networks with additive time-varying delay and discrete-time information. J. Comput. Appl. Math. 2019, 348, 28–341. [Google Scholar]
  11. Zhang, L.; Yang, X.; Xu, C. Exponential synchronization of complex-valued complex networks with time-varying delays and stochastic perturbations via time-delayed impulsive control. Appl. Math. Comput. 2017, 306, 22–30. [Google Scholar] [CrossRef]
  12. Wang, J.F.; Tian, L.X. Global Lagrange stability for inertial neural networks with mixed time-varying delays. Neurocomputing 2017, 235, 140–146. [Google Scholar] [CrossRef]
  13. Tang, Q.; Jian, J.G. Matrix measure based exponential stabilization for complex-valued inertial neural networks with time-varying delays using impulsive control. Neurocomputing 2018, 273, 251–259. [Google Scholar] [CrossRef]
  14. Li, X.F.; Fang, J.A.; Li, H.Y. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control. Neural Netw. 2017, 93, 165–175. [Google Scholar] [CrossRef] [PubMed]
  15. Song, Q.K.; Zhao, Z.J.; Liu, Y.R. Impulsive effects on stability of discrete-time complex-valued neural networks with both discrete and distributed time-varying delays. Neurocomputing 2015, 168, 1044–1050. [Google Scholar] [CrossRef]
  16. Jian, J.G.; Wan, P. Global exponential convergence of fuzzy complex-valued neural networks with time-varying delays and impulsive effects. Fuzzy Sets Syst. 2018, 338, 23–39. [Google Scholar] [CrossRef]
  17. Huang, C.D.; Cao, J.D.; Xiao, M.; Alsaedi, A.; Haya, T. Bifurcations in a delayed fractional complex-valued neural network. Appl. Math. Comput. 2017, 292, 210–227. [Google Scholar] [CrossRef]
  18. Miron, S.; Le Bihan, N.; Mars, J.I. Quaternion-music for vector-sensor array processing. IEEE Trans. Signal Process. 2006, 54, 1218–1229. [Google Scholar] [CrossRef]
  19. Ell, T.; Sangwine, S.J. Hypercomplex fourier transforms of color images. IEEE Trans. Image Process. 2007, 16, 22–35. [Google Scholar] [CrossRef]
  20. Took, C.C.; Strbac, G.; Aihara, K.; Mandic, D.P. Quaternion-valued short-term joint forecasting of three-dimensional wind and atmospheric parameters. Renew. Energy 2011, 36, 1754–1760. [Google Scholar] [CrossRef]
  21. Isokawa, T.; Kusakabe, T.; Matsui, N.; Peper, F. Quaternion Neural Network and Its Application. In Knowledge-Based Intelligent Information and Engineering Systems; Springer: Berlin/Heidelberg, Germany, 2003; pp. 318–324. [Google Scholar]
  22. Mandic, D.P.; Jahanchahi, C.; Took, C.C. A quaternion gradient operator and its applications. IEEE Trans. Signal Process. Lett. 2011, 18, 47–50. [Google Scholar] [CrossRef]
  23. Sudbery, A. Quaternionic analysis. Math. Proc. Camb. Philos. Soc. 1979, 85, 199–225. [Google Scholar] [CrossRef]
  24. Bihan, N.L.; Sangwine, S.J. Quaternion principal component analysis of color images. In Proceedings of the IEEE International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003. [Google Scholar]
  25. Mandic, D.P.; Goh, V.S.L. Complex Valued Nonlinear Adaptive Filters: Noncircularity; Widely Linear and Neural Models; Wiley Publishing: Hoboken, NJ, USA, 2009. [Google Scholar]
  26. Liu, Y.; Zhang, D.D.; Lu, J.Q. Global μ-stability criteria for quaternion-valued neural networks with unbounded time-varying delays. Inf. Sci. 2016, 360, 273–288. [Google Scholar] [CrossRef]
  27. Chen, X.; Li, Z.; Song, Q. Robust stability analysis of quaternion-valued neural networks with time delays and parameter uncertainties. Neural Netw. 2017, 91, 55–65. [Google Scholar] [CrossRef] [PubMed]
  28. Li, Y.; Li, B.; Yao, S. The global exponential pseudo almost periodic synchronization of quaternion-valued cellular neural networks with time-varying delays. Neurocomputing 2018, 303, 75–87. [Google Scholar] [CrossRef]
  29. Liu, Y.; Zhang, D.D.; Lou, J.Q.; Cao, J.D. Stability analysis of quaternion-valued neural networks: decomposition and direct approaches. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4201–4211. [Google Scholar] [PubMed]
  30. Ali, M.S.; Saravanakumar, R.; Cao, J. New passivity criteria for memristor-based neutral-type stochastic BAM neural networks with mixed time-varying delays. Neurocomputing 2016, 171, 1533–1547. [Google Scholar]
  31. Zhou, H.; Zhou, Z.; Jiang, W. Almost periodic solutions for neutral type BAM neural networks with distributed leakage delays on time scales. Neurocomputing 2015, 157, 223–230. [Google Scholar] [CrossRef]
  32. Lakshmanan, S.; Lim, C.P.; Prakash, M.; Nahavandi, S.; Balasubramaniam, P. Neutral-type of delayed inertial neural networks and their stability analysis using the LMI Approach. Neurocomputing 2016, 230, 243–250. [Google Scholar] [CrossRef]
  33. Wang, Y.; Zhang, X.; Zhang, X. Neutral-delay-range-dependent absolute stability criteria for neutral-type Lur’e systems with time-varying delays. J. Frankl. Inst. 2016, 353, 5025–5039. [Google Scholar] [CrossRef]
  34. Wu, T.; Xiong, L.L.; Cao, J.D.; Liu, Z.X.; Zhang, H.Y. New stability and stabilization conditions for stochastic neural networks of neutral type with Markovian jumping parameters. J. Frankl. Inst. 2018, 355, 8462–8483. [Google Scholar] [CrossRef]
  35. Xiong, L.L.; Zhang, H.Y.; Li, Y.K.; Liu, Z.X. Improved stability and H∞ performance for neutral systems with uncertain Markovian jump. Nonlinear Anal. Hybrid Syst. 2016, 19, 13–25. [Google Scholar] [CrossRef]
  36. Xiong, L.L.; Tian, J.K.; Liu, X.Z. Stability analysis for neutral Markovian jump systems with partially unknown transition probabilities. J. Frankl. Inst. 2012, 349, 2193–2214. [Google Scholar] [CrossRef]
  37. Wu, T.; Cao, J.D.; Xiong, L.L.; Ahmad, B. Exponential passivity conditions on neutral stochastic neural networks with leakage delay and partially unknown transition probabilities in Markovian jump. Adv. Differ. Eq. 2018, 2018, 317. [Google Scholar] [CrossRef]
  38. Xiong, L.L.; Cheng, J.; Liu, X.Z.; Wu, T. Improved conditions for neutral delay systems with novel inequalities. J. Nonlinear Sci. Appl. 2017, 10, 2309–2317. [Google Scholar] [CrossRef] [Green Version]
  39. Ghadiri, H.; Motlagh, M.R.J.; Yazdi, M.B. Robust stabilization for uncertain switched neutral systems with interval time-varying mixed delays. Nonlinear Anal. Hybrid Syst. 2014, 13, 2–21. [Google Scholar] [CrossRef]
  40. Salamon, D. Control and Observation of Neutral Systems; Pitman Advanced Publishing Program: Boston, MA, USA, 1984. [Google Scholar]
  41. Xiong, W.J.; Liang, J.L. Novel stability criteria for neutral systems with multiple time-delays. Chaos Solitons Fractals 2007, 32, 1735–1741. [Google Scholar] [CrossRef]
  42. Xu, S.; Lam, J.; Ho, D.W.C.; Zou, Y. Delay-dependent exponential stability for a class of neural networks with time delays. J. Comput. Appl. Math. 2005, 183, 16–28. [Google Scholar] [CrossRef] [Green Version]
  43. Zuo, Z.Q.; Wang, Y. Novel Delay-Dependent Exponential Stability Analysis for a Class of Delayed Neural Networks. In Proceedings of the International Conference on Intelligent Computing, Kunming, China, 16–19 August 2006; pp. 216–226. [Google Scholar]
  44. Hale, J.K. The Theory of Functional Differential Equations; Applied Mathematical Sciences; Springer-Verlag: New York, NY, USA, 1977. [Google Scholar]
  45. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Further analysis of global μ-stability of complex-valued neural networks with unbounded time-varying delays. Neural Netw. 2015, 67, 14–27. [Google Scholar] [CrossRef] [PubMed]
  46. Chen, X.; Song, Q. Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales. Neurocomputing 2013, 121, 254–264. [Google Scholar] [CrossRef]
  47. Li, J.D.; Huang, N.J. Asymptotical Stability for a Class of Complex-Valued Projective Neural Network. J. Optim. Theory Appl. 2018, 177, 261–270. [Google Scholar] [CrossRef]
  48. Xiao, N.; Jia, Y. New approaches on stability criteria for neural networks with two additive time-varying delay components. Neurocomputing 2013, 118, 150–156. [Google Scholar] [CrossRef]
  49. Forti, M. New condition for global stability of neural networks with application to linear and quadratic programming problems. IEEE Trans. Circuits Syst. Fundam. Theory Appl. 1995, 42, 354–366. [Google Scholar] [CrossRef]
Figure 1. The four parts of the state trajectories for the quaternion-valued neutral-type neural networks (QVNTNNs) (Equation (1)) in Example 1.
Figure 2. The four parts of the state trajectories for the QVNTNNs (Equation (1)) in Example 2.
Figure 3. The four parts of the state trajectories for the QVNTNNs (Equation (1)) in Example 3.
Table 1. The maximal allowable bounds of ν .
Condition QVNN QVNTNN
global exponential stability 13.9447 12.1566
global power-stability 15.7909 15.7508
global log-stability 15.0446 10.3423