Article

Finite-Time Passivity Analysis of Neutral-Type Neural Networks with Mixed Time-Varying Delays

by Issaraporn Khonchaiyaphum 1, Nayika Samorn 2, Thongchai Botmart 1 and Kanit Mukdasai 1,*
1 Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
2 Faculty of Agriculture and Technology, Nakhon Phanom University, Nakhon Phanom 48000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3321; https://doi.org/10.3390/math9243321
Submission received: 2 November 2021 / Revised: 2 December 2021 / Accepted: 15 December 2021 / Published: 20 December 2021

Abstract:
This study investigates the finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays. The time-varying delays are of distributed, discrete and neutral type, and upper bounds for the delays are assumed to be available. We derive sufficient conditions for finite-time boundedness, finite-time stability and finite-time passivity, which have not been established before for this class of systems. First, we construct a new Lyapunov–Krasovskii functional and combine it with Peng–Park’s integral inequality, the descriptor model transformation and the zero equation, and we then apply Wirtinger’s integral inequality technique. New sufficient conditions guaranteeing finite-time stability of the system are obtained in terms of linear matrix inequalities. Finally, numerical examples are presented to demonstrate the effectiveness of the results. Moreover, the proposed criteria are less conservative than those of prior studies in the sense that they admit larger time-delay bounds.

1. Introduction

Neural networks have been intensively explored in recent decades due to their vast range of applications in a variety of fields, including signal processing, associative memories, learning ability and so on [1,2,3,4,5,6,7,8,9,10]. In the study of real systems, time-delay phenomena are unavoidable. Many interesting neural networks, such as Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks and bidirectional associative memory neural networks frequently exhibit time delays. In addition, time delays are well recognized as a source of instability and poor performance [11]. Accordingly, stability analysis of delayed neural networks has become a topic of significant theoretical and practical relevance (see [12,13,14,15]), and many important discoveries have been reported on this subject. In recent years, T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control have been presented by Syed Ali et al. [16]. The global stability analysis of fractional-order fuzzy BAM neural networks with time delay and impulsive effects was considered in [17].
Furthermore, conventional neural network models are often unable to accurately represent the qualities of a neural reaction process due to the complex dynamic features of neural cells in the real world. It is natural for such systems to store information about the derivative of a past state in order to better characterize and analyze the dynamics of these complicated neural responses; this new class of networks is referred to as neutral, or neutral-type, neural networks. Several researchers [18,19,20,21,22,23] have studied neutral-type neural networks with time-varying delays in recent years. In 2018, the authors of [24] presented improved results on passivity analysis of neutral-type neural networks with mixed time-varying delays. In particular, a type of time-varying delay known as distributed delay occurs in networked-based systems and has received much academic interest because of its significance in digital control systems [25]. The system considered here therefore involves all three types of delays: discrete, neutral and distributed. As a result, neutral delays in neural networks have recently been reported on, together with some stability analysis results for neutral-type neural networks with mixed time-varying delays.
Passivity theory [26] is a useful tool for analyzing system stability, and it can deal with systems based solely on the general features of their input–output dynamics. It has been used in engineering applications such as high-integrity and safety-critical systems. Krasovskii and Lidskii proposed this family of linear systems in 1961 [27], and researchers have been studying the passivity of neural networks with delays since then. Many studies have addressed stability in recent years, including Lyapunov stability, asymptotic stability, uniform stability, eventually uniformly bounded stability and exponential stability, all of which concern the behavior of systems over an infinite time span. Most actual neural systems, on the other hand, operate only over finite-time intervals. Finite-time passivity is therefore a vital tool for investigating the finite-time stabilization of neural networks.
This topic has piqued the curiosity of researchers [28,29,30,31,32,33,34,35]. The problem of finite-time stability of continuous time-delay systems was addressed via Jensen’s and Coppel’s inequalities in [28]. The authors of [29] used a novel control protocol based on Lyapunov theory and inequality techniques to examine the finite-time stabilization of delayed neural networks. Rajavel et al. [30] solved the problem of finite-time non-fragile passivity control for neural networks with time-varying delay using the Lyapunov–Krasovskii functional technique. A new Lyapunov–Krasovskii functional with triple and quadruple integral terms was used to examine finite-time passive filtering for a class of neutral time-delayed systems in [31]. The free-weighting matrix approach and Wirtinger’s double integral inequality were used to establish the finite-time stability of neutral-type neural networks with random time-varying delays in [32]. Syed Ali et al. [33] studied finite-time passivity for neutral-type neural networks with time-varying delays using an auxiliary integral inequality. Ali et al. [34] explored the finite-time $H_\infty$ boundedness of discrete-time neural networks with time-varying delay and norm-bounded disturbances. In 2021, Phanlert et al. [35] investigated the finite-time behavior of a non-neutral system. As this survey shows, many different methods exist for stability analysis, and our aim is to obtain stronger stability results. However, to the best of the authors’ knowledge, no results on the finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays have been reported. This is the driving force behind our current investigation.
In light of the foregoing, we investigate three types of finite-time passivity in neural networks and provide matching criteria for judging network properties using Lyapunov functional theory and inequality techniques. The primary contributions of this paper are the following:
(i)
We examine a system with mixed time-varying delays. Because the time-varying delays are of distributed, discrete and neutral type, upper bounds for the delays are assumed to be known.
(ii)
We then use the corresponding theorems to derive finite-time boundedness, finite-time stability and finite-time passivity criteria.
(iii)
By using Peng–Park’s integral inequality, a model transformation, the zero equation and, subsequently, the Wirtinger-based integral inequality approach, some of the simplest LMI-based criteria have been developed.
(iv)
Several cases have been examined to ensure that the primary theorem and its corollaries are accurate.
The paper is structured as follows. Section 2 introduces the network under consideration and offers some definitions, propositions and lemmas. In Section 3, three types of finite-time passivity of the neural network are introduced, finite-time stability is established, and several useful corollaries are derived. In Section 4, five numerical examples are presented to demonstrate the usefulness of our proposed results. Finally, Section 5 concludes the study.

2. Preliminaries

We begin by explaining the notation and lemmas used throughout the study. $\mathbb{R}$ denotes the set of all real numbers; $\mathbb{R}^n$ denotes $n$-dimensional Euclidean space; $\mathbb{R}^{m\times n}$ denotes the set of all $m\times n$ real matrices; $A^T$ denotes the transpose of a matrix $A$, and $A$ is symmetric if $A=A^T$; $\lambda(A)$ denotes the set of all eigenvalues of $A$, while $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively; $*$ represents the elements below the main diagonal of a symmetric matrix; $\operatorname{diag}\{\cdot\}$ stands for a diagonal matrix.
Consider the study of finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays of the following form:
$$\begin{cases}\dot\xi(t)-G_c\dot\xi(t-\tau(t))=-A\xi(t)+G_bf(\xi(t))+G_df(\xi(t-\mu(t)))+H\kappa(t)+G_e\displaystyle\int_{t-\rho(t)}^{t}f(\xi(s))\,ds,\\ z(t)=G_1f(\xi(t))+G_2\kappa(t),\quad t\in\mathbb{R}^+,\\ \xi(t)=\phi(t),\quad t\in[-\bar h,0],\end{cases}$$
where $\xi(t)=[\xi_1(t),\xi_2(t),\ldots,\xi_n(t)]^T\in\mathbb{R}^n$ is the neural state vector, $z(t)$ is the output vector of the neural network, and $\kappa(t)$ is the exogenous disturbance input vector belonging to $L_2[0,\infty)$. $A=\operatorname{diag}\{a_1,a_2,\ldots,a_n\}>0$ is a diagonal matrix with $a_i>0$, $i=1,2,\ldots,n$. The matrices $G_b$, $G_d$ and $G_e$ are the interconnection matrices representing the weight coefficients of the neurons, and $G_1$, $G_2$, $H$ and $G_c$ are known real constant matrices with appropriate dimensions. $f(\xi(t))=[f_1(\xi_1(t)),f_2(\xi_2(t)),\ldots,f_n(\xi_n(t))]^T\in\mathbb{R}^n$ is the neuron activation function, and $\phi(t)\in C[[-\bar h,0],\mathbb{R}^n]$ denotes the initial function. $\mu(t)$ is the discrete time-varying delay, $\rho(t)$ is the distributed time-varying delay, $\tau(t)$ is the neutral delay, and $\bar h=\max\{\mu_M,\rho_M,\tau_M\}$.
The variables μ ( t ) , ρ ( t ) and τ ( t ) represent the mixed delays of the model in (1) and satisfy the following:
$$0\le\mu(t)\le\mu_M,\quad 0\le\dot\mu(t)\le\mu_d,\qquad 0\le\rho(t)\le\rho_M,\quad 0\le\dot\rho(t)\le\rho_d,\qquad 0\le\tau(t)\le\tau_M,\quad 0\le\dot\tau(t)\le\tau_d,$$
where μ M , μ d , ρ M , ρ d , τ M and τ d are positive real constants.
Assumption 1.
The activation function $f$ is continuous and there exist real constants $F_i^-$ and $F_i^+$ such that the following is the case:
$$F_i^-\ \le\ \frac{f_i(c_1)-f_i(c_2)}{c_1-c_2}\ \le\ F_i^+,$$
for all $c_1\ne c_2$, where $f=[f_1,f_2,\ldots,f_n]^T$ satisfies $f_i(0)=0$ for every $i\in\{1,2,\ldots,n\}$. For the sake of presentation convenience, in the following we denote $F_1=\operatorname{diag}\big(F_1^-F_1^+,F_2^-F_2^+,\ldots,F_n^-F_n^+\big)$ and $F_2=\operatorname{diag}\big(\frac{F_1^-+F_1^+}{2},\frac{F_2^-+F_2^+}{2},\ldots,\frac{F_n^-+F_n^+}{2}\big)$.
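As a concrete illustration (ours, not from the paper): the common activation $f_i(\cdot)=\tanh(\cdot)$ satisfies Assumption 1 with $F_i^-=0$ and $F_i^+=1$, since $\tanh(0)=0$ and, by the mean value theorem,
$$0\ \le\ \frac{\tanh(c_1)-\tanh(c_2)}{c_1-c_2}\ =\ \operatorname{sech}^2(c)\ \le\ 1$$
for some $c$ between $c_1$ and $c_2$.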
Assumption 2.
For a positive parameter $\delta$, $\kappa(t)$ is a time-varying external disturbance that satisfies the following:
$$\int_{0}^{T_f}\kappa^T(t)\kappa(t)\,dt\ \le\ \delta,\qquad\delta>0.$$
Definition 1
((Finite-time boundedness) [36,37]). For a given positive constant $T_f$, system (1) is said to be finite-time bounded with respect to $(g_1,g_2,T_f,P_1,\delta)$ if there exist constants $g_2>g_1>0$ such that the following is the case:
$$\sup_{-\mu_M\le t_0\le 0}\big\{\xi^T(t_0)P_1\xi(t_0),\ \dot\xi^T(t_0)P_1\dot\xi(t_0)\big\}\le g_1\ \Longrightarrow\ \xi^T(t)P_1\xi(t)<g_2,\qquad\text{for }t\in[0,T_f],$$
where $P_1$ is a positive definite matrix.
Definition 2
((Finite-time stability) [36,37]). System (1) with $\kappa(t)=0$ is said to be finite-time stable with respect to $(g_1,g_2,T_f,P_1)$ if there exist constants $g_2>g_1>0$ such that the following is the case:
$$\sup_{-\mu_M\le t_0\le 0}\big\{\xi^T(t_0)P_1\xi(t_0),\ \dot\xi^T(t_0)P_1\dot\xi(t_0)\big\}\le g_1\ \Longrightarrow\ \xi^T(t)P_1\xi(t)<g_2,\qquad\text{for }t\in[0,T_f],$$
where $T_f$ is a given positive constant and $P_1$ is a positive definite matrix.
Definition 3
((Finite-time passivity) [37]). System (1) is said to be finite-time passive with a prescribed dissipation performance level $\gamma>0$ if the following relations hold:
(a) 
For any external disturbances κ ( t ) , system (1) is finite-time bounded;
(b) 
For a given positive scalar γ > 0 , the following relationship holds under a zero initial condition.
$$\int_{0}^{T_f}\kappa^T(t)z(t)\,dt\ \ge\ -\gamma\int_{0}^{T_f}\kappa^T(t)\kappa(t)\,dt.$$
Lemma 1
((Jensen’s Inequality) [38]). For any positive definite symmetric matrix $P_7$, positive real constant $\mu_M$ and vector function $\dot\xi:[-\mu_M,0]\to\mathbb{R}^n$ such that the following integrals are well defined, the following is obtained:
$$\mu_M\int_{-\mu_M}^{0}\dot\xi^T(s+t)P_7\dot\xi(s+t)\,ds\ \ge\ \Big(\int_{-\mu_M}^{0}\dot\xi(s+t)\,ds\Big)^T P_7\Big(\int_{-\mu_M}^{0}\dot\xi(s+t)\,ds\Big).$$
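As a quick numerical spot-check of Lemma 1 (ours, not part of the paper), the scalar case $n=1$ with assumed data $P_7=2$, $\mu_M=1$ and $\dot\xi(s+t)=e^s$ can be verified directly by midpoint quadrature:

```python
import math

# Scalar spot-check of Jensen's inequality (Lemma 1): n = 1, P7 = 2,
# mu_M = 1, xdot(s) = exp(s) on [-mu_M, 0]. Assumed illustrative data only.
mu_M, P7, N = 1.0, 2.0, 100_000
h = mu_M / N
grid = [-mu_M + (i + 0.5) * h for i in range(N)]  # midpoint-rule nodes

# Left side: mu_M * integral of xdot(s) P7 xdot(s) ds over [-mu_M, 0]
lhs = mu_M * sum(math.exp(s) * P7 * math.exp(s) for s in grid) * h

# Right side: (integral of xdot)^T P7 (integral of xdot)
v = sum(math.exp(s) for s in grid) * h
rhs = v * P7 * v

assert lhs >= rhs  # Jensen's inequality holds
print(round(lhs, 4), round(rhs, 4))  # prints: 0.8647 0.7992
```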
Lemma 2
((Wirtinger-based integral inequality) [39]). For any matrix $P_{12}>0$, the following inequality holds for all continuously differentiable functions $\xi:[\alpha,\beta]\to\mathbb{R}^n$:
$$(\beta-\alpha)\int_{\alpha}^{\beta}\dot\xi^T(s)P_{12}\dot\xi(s)\,ds\ \ge\ \kappa^T\begin{bmatrix}4P_{12}&2P_{12}&-6P_{12}\\ *&4P_{12}&-6P_{12}\\ *&*&12P_{12}\end{bmatrix}\kappa,$$
where $\kappa=\big[\xi^T(\beta),\ \xi^T(\alpha),\ \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta}\xi^T(s)\,ds\big]^T$.
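A similar scalar spot-check (ours, with assumed data $P_{12}=1$, $[\alpha,\beta]=[0,1]$, $\xi(s)=\sin s$) confirms the Wirtinger-based bound; the quadratic form is expanded entrywise for the scalar case:

```python
import math

# Scalar spot-check of the Wirtinger-based integral inequality (Lemma 2).
alpha, beta, P12, N = 0.0, 1.0, 1.0, 100_000
h = (beta - alpha) / N
grid = [alpha + (i + 0.5) * h for i in range(N)]  # midpoint-rule nodes

# Left side: (beta - alpha) * integral of xidot(s) P12 xidot(s) ds
lhs = (beta - alpha) * sum(math.cos(s) ** 2 * P12 for s in grid) * h

xb, xa = math.sin(beta), math.sin(alpha)
v = sum(math.sin(s) for s in grid) * h / (beta - alpha)  # average of xi

# kappa^T [[4P, 2P, -6P], [*, 4P, -6P], [*, *, 12P]] kappa, scalar expansion
rhs = (4 * xb * xb + 4 * xa * xa + 12 * v * v
       + 4 * xa * xb - 12 * xb * v - 12 * xa * v) * P12

assert lhs >= rhs  # Wirtinger-based inequality holds
```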
Lemma 3
((Peng–Park’s integral inequality) [40,41]). For any matrices $P_{13}$ and $S$ satisfying $\begin{bmatrix}P_{13}&S\\ *&P_{13}\end{bmatrix}\ge0$, a positive constant $\mu_M$ and a delay $\mu(t)$ with $0<\mu(t)<\mu_M$, and a vector function $\dot\xi:[-\mu_M,0]\to\mathbb{R}^n$ such that the integrations in question are well defined, we have the following:
$$-\mu_M\int_{t-\mu_M}^{t}\dot\xi^T(s)P_{13}\dot\xi(s)\,ds\ \le\ \Psi^T\Theta\,\Psi,$$
where $\Psi=[\xi^T(t),\ \xi^T(t-\mu(t)),\ \xi^T(t-\mu_M)]^T$ and
$$\Theta=\begin{bmatrix}-P_{13}&P_{13}-S&S\\ *&-2P_{13}+S+S^T&P_{13}-S\\ *&*&-P_{13}\end{bmatrix}.$$
Lemma 4
The following inequality applies to a positive definite matrix $P_{10}$:
$$\frac{(\alpha-\beta)^2}{2}\int_{\beta}^{\alpha}\int_{s}^{\alpha}\xi^T(u)P_{10}\xi(u)\,du\,ds\ \ge\ \Big(\int_{\beta}^{\alpha}\int_{s}^{\alpha}\xi(u)\,du\,ds\Big)^T P_{10}\Big(\int_{\beta}^{\alpha}\int_{s}^{\alpha}\xi(u)\,du\,ds\Big).$$
Lemma 5
For any constant symmetric positive definite matrix $P_6\in\mathbb{R}^{n\times n}$, discrete time-varying delay $\mu(t)$ satisfying (2), and vector function $\xi:[-\mu_M,0]\to\mathbb{R}^n$ such that the integrations concerned are well defined, the following is the case:
$$\mu_M\int_{-\mu_M}^{0}\xi^T(s)P_6\xi(s)\,ds\ \ge\ \Big(\int_{-\mu(t)}^{0}\xi^T(s)\,ds\Big)P_6\Big(\int_{-\mu(t)}^{0}\xi(s)\,ds\Big)+\Big(\int_{-\mu_M}^{-\mu(t)}\xi^T(s)\,ds\Big)P_6\Big(\int_{-\mu_M}^{-\mu(t)}\xi(s)\,ds\Big).$$
Lemma 6
For any constant matrices $R_7,R_8,R_9\in\mathbb{R}^{n\times n}$ with $R_7\ge0$, $R_9>0$ and $\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\ge0$, a discrete time-varying delay $\mu(t)$ satisfying (2), and a vector function $\dot\xi:[-\mu_M,0]\to\mathbb{R}^n$ such that the following integration is well defined:
$$-\mu_M\int_{t-\mu_M}^{t}\begin{bmatrix}\xi(s)\\ \dot\xi(s)\end{bmatrix}^T\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\begin{bmatrix}\xi(s)\\ \dot\xi(s)\end{bmatrix}ds\ \le\ \Upsilon^T\Pi\,\Upsilon,$$
where $\Upsilon^T=\big[\xi^T(t),\ \xi^T(t-\mu(t)),\ \xi^T(t-\mu_M),\ \int_{t-\mu(t)}^{t}\xi^T(s)\,ds,\ \int_{t-\mu_M}^{t-\mu(t)}\xi^T(s)\,ds\big]$
and the following is the case:
$$\Pi=\begin{bmatrix}-R_9&R_9&0&-R_8^T&0\\ *&-R_9-R_9^T&R_9&R_8^T&-R_8^T\\ *&*&-R_9&0&R_8^T\\ *&*&*&-R_7&0\\ *&*&*&*&-R_7\end{bmatrix}.$$
Lemma 7
Let $\xi(t)\in\mathbb{R}^n$ be a vector-valued function with first-order continuous-derivative entries. For any constant matrices $P_5,\hat M_i\in\mathbb{R}^{n\times n}$, $i=1,2,\ldots,5$, satisfying $\begin{bmatrix}P_5&\hat M_1&\hat M_2\\ *&\hat M_3&\hat M_4\\ *&*&\hat M_5\end{bmatrix}\ge0$, and a discrete time-varying delay $\mu(t)$ satisfying (2), the following integral inequality holds:
$$-\int_{t-\mu_M}^{t}\dot\xi^T(s)P_5\dot\xi(s)\,ds\ \le\ \Gamma^T\begin{bmatrix}\hat M_1+\hat M_1^T&-\hat M_1^T+\hat M_2&0\\ *&\hat M_1+\hat M_1^T-\hat M_2-\hat M_2^T&-\hat M_1^T+\hat M_2\\ *&*&-\hat M_2-\hat M_2^T\end{bmatrix}\Gamma+\mu_M\Gamma^T\begin{bmatrix}\hat M_3&\hat M_4&0\\ *&\hat M_3+\hat M_5&\hat M_4\\ *&*&\hat M_5\end{bmatrix}\Gamma,$$
where $\Gamma=[\xi^T(t),\ \xi^T(t-\mu(t)),\ \xi^T(t-\mu_M)]^T$.
Lemma 8
For positive definite matrices $P_8,P_9>0$ and any continuously differentiable function $\xi:[a,b]\to\mathbb{R}^n$, the following inequalities hold:
$$\int_{a}^{b}\dot\xi^T(s)P_8\dot\xi(s)\,ds\ \ge\ \frac{1}{b-a}\Theta_1^TP_8\Theta_1+\frac{3}{b-a}\Theta_2^TP_8\Theta_2+\frac{5}{b-a}\Theta_3^TP_8\Theta_3,$$
$$\int_{a}^{b}\int_{u}^{b}\dot\xi^T(s)P_9\dot\xi(s)\,ds\,du\ \ge\ 2\Theta_4^TP_9\Theta_4+4\Theta_5^TP_9\Theta_5+6\Theta_6^TP_9\Theta_6,$$
where the following is the case:
$$\begin{aligned}&\Theta_1=\xi(a)-\xi(b),\qquad\Theta_2=\xi(a)+\xi(b)-\frac{2}{b-a}\int_{a}^{b}\xi(s)\,ds,\\&\Theta_3=\xi(b)-\xi(a)+\frac{6}{b-a}\int_{a}^{b}\xi(s)\,ds-\frac{12}{(b-a)^2}\int_{a}^{b}\int_{u}^{b}\xi(s)\,ds\,du,\\&\Theta_4=\xi(b)-\frac{1}{b-a}\int_{a}^{b}\xi(s)\,ds,\qquad\Theta_5=\xi(b)+\frac{2}{b-a}\int_{a}^{b}\xi(s)\,ds-\frac{6}{(b-a)^2}\int_{a}^{b}\int_{u}^{b}\xi(s)\,ds\,du,\\&\Theta_6=\xi(b)-\frac{3}{b-a}\int_{a}^{b}\xi(s)\,ds+\frac{24}{(b-a)^2}\int_{a}^{b}\int_{u}^{b}\xi(s)\,ds\,du-\frac{60}{(b-a)^3}\int_{a}^{b}\int_{u}^{b}\int_{s}^{b}\xi(r)\,dr\,ds\,du.\end{aligned}$$
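The first inequality of Lemma 8 can also be spot-checked numerically in the scalar case (our example, assuming $P_8=1$, $[a,b]=[0,1]$, $\xi(s)=\sin 3s$); note that $\int_a^b\int_u^b\xi(s)\,ds\,du=\int_a^b(s-a)\xi(s)\,ds$ after swapping the order of integration:

```python
import math

# Scalar spot-check of the first inequality in Lemma 8 (assumed data).
a, b, P8, N = 0.0, 1.0, 1.0, 100_000
h = (b - a) / N
grid = [a + (i + 0.5) * h for i in range(N)]  # midpoint-rule nodes

xi = lambda s: math.sin(3 * s)                # xi(s), so xidot(s) = 3 cos(3s)
lhs = sum((3 * math.cos(3 * s)) ** 2 * P8 for s in grid) * h

I1 = sum(xi(s) for s in grid) * h             # single integral of xi
I2 = sum((s - a) * xi(s) for s in grid) * h   # double integral, order swapped

theta1 = xi(a) - xi(b)
theta2 = xi(a) + xi(b) - 2 / (b - a) * I1
theta3 = xi(b) - xi(a) + 6 / (b - a) * I1 - 12 / (b - a) ** 2 * I2
rhs = (theta1 ** 2 + 3 * theta2 ** 2 + 5 * theta3 ** 2) * P8 / (b - a)

assert lhs >= rhs  # the Bessel-Legendre-type bound holds
```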

3. Main Results

3.1. Finite-Time Boundedness Analysis

In this subsection, we discuss the finite-time boundedness analysis of the following neutral-type neural network with mixed time-varying delays:
$$\begin{cases}\dot\xi(t)-G_c\dot\xi(t-\tau(t))=-A\xi(t)+G_bf(\xi(t))+G_df(\xi(t-\mu(t)))+H\kappa(t)+G_e\displaystyle\int_{t-\rho(t)}^{t}f(\xi(s))\,ds,\\ \xi(t)=\phi(t),\quad t\in[-\bar h,0].\end{cases}$$
In this subsection, we study system (5) subject to (2) and establish new finite-time boundedness criteria via the LMI approach.
Denote $\Sigma=\big[\Pi_{(i,j)}\big]_{23\times 23}$.
For future reference, we introduce the required notations in Appendix A.
Theorem 1.
For $C<1$, system (5) is finite-time bounded if there exist positive definite matrices $P_i$, $R_j$, $i=1,2,3,\ldots,16$, $j=1,2,3,\ldots,9$, any appropriately dimensioned matrices $S$, $Q_k$ $(k=1,2,\ldots,14)$, $Z_l$ $(l=1,2)$ and $O_e$ $(e=1,2,3,\ldots,8)$ with $R_7\ge0$,
$$\begin{bmatrix}R_{1+3n}&R_{2+3n}\\ *&R_{3+3n}\end{bmatrix}\ge0\quad(n=0,1,2),\qquad\begin{bmatrix}P_{13}&S\\ *&P_{13}\end{bmatrix}\ge0,$$
positive diagonal matrices $H_p$, $W_p$ $(p=1,2)$ and positive real constants $\mu_M,\rho_M,\mu_d,\tau_M,\tau_d,\delta,\alpha,g_1,g_2,T_f$ such that the following conditions hold:
$$\begin{bmatrix}P_5&\hat M_1&\hat M_2\\ *&\hat M_3&\hat M_4\\ *&*&\hat M_5\end{bmatrix}\ge0,$$
$$\big[\Pi_{(i,j)}\big]_{23\times 23}<0,$$
$$\lambda_1g_2e^{-\alpha T_f}\ >\ \Lambda g_1+\delta\big(1-e^{-\alpha T_f}\big).$$
The scalars $\lambda_i$, $i=1,2,\ldots,31$, appearing in condition (9) are defined in Remark 1.
Proof. 
First, we show that system (5) is finite-time bounded. To this end, we consider system (5) written as follows.
$$\dot\xi(t)=-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+H\kappa(t)+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds.$$
We can rewrite system (10) as the following descriptor system:
$$\dot\xi(t)=y(t),$$
$$0=-y(t)-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+H\kappa(t)+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds,$$
by using the model transformation approach. We then construct a Lyapunov–Krasovskii functional candidate for system (10)–(12) of the following form:
$$V(t)=\sum_{i=1}^{11}V_i(t),$$
where the following is the case:
$$\begin{aligned}V_1(t)={}&\xi^T(t)P_1\xi(t)+2\sum_{i=1}^{N}w_{i1}\int_{0}^{\xi_i(t)}\big(f_i(s)-F_i^-s\big)\,ds,\\ V_2(t)={}&\zeta^T(t)GP_2\zeta(t)+2\sum_{i=1}^{N}w_{i2}\int_{0}^{\xi_i(t)}\big(F_i^+s-f_i(s)\big)\,ds,\\ V_3(t)={}&\int_{t-\mu_M}^{t}\xi^T(s)P_3\xi(s)\,ds+\int_{t-\mu(t)}^{t}\begin{bmatrix}\xi(s)\\ f(\xi(s))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ *&R_3\end{bmatrix}\begin{bmatrix}\xi(s)\\ f(\xi(s))\end{bmatrix}ds+\int_{t-\mu_M}^{t}\begin{bmatrix}\xi(s)\\ f(\xi(s))\end{bmatrix}^T\begin{bmatrix}R_4&R_5\\ *&R_6\end{bmatrix}\begin{bmatrix}\xi(s)\\ f(\xi(s))\end{bmatrix}ds,\\ V_4(t)={}&\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}\begin{bmatrix}\xi(\theta)\\ y(\theta)\end{bmatrix}^T\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\begin{bmatrix}\xi(\theta)\\ y(\theta)\end{bmatrix}d\theta\,ds,\\ V_5(t)={}&\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}\xi^T(\theta)P_4\xi(\theta)\,d\theta\,ds+\int_{-\mu_M}^{0}\int_{t+s}^{t}y^T(\theta)P_5y(\theta)\,d\theta\,ds,\\ V_6(t)={}&\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}y^T(\theta)P_6y(\theta)\,d\theta\,ds+\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}\dot\xi^T(\theta)P_7\dot\xi(\theta)\,d\theta\,ds,\\ V_7(t)={}&\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}y^T(\theta)P_8y(\theta)\,d\theta\,ds+\int_{-\mu_M}^{0}\int_{\lambda}^{0}\int_{t+s}^{t}y^T(\theta)P_9y(\theta)\,d\theta\,ds\,d\lambda,\\ V_8(t)={}&\frac{\mu_M^2}{2}\int_{-\mu_M}^{0}\int_{\lambda}^{0}\int_{t+s}^{t}\xi^T(\theta)P_{10}\xi(\theta)\,d\theta\,ds\,d\lambda+\mu_M^2\int_{-\mu_M}^{0}\int_{\lambda}^{0}\int_{t+s}^{t}y^T(\theta)P_{11}y(\theta)\,d\theta\,ds\,d\lambda,\\ V_9(t)={}&\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}y^T(\theta)P_{12}y(\theta)\,d\theta\,ds+\mu_M\int_{-\mu_M}^{0}\int_{t+s}^{t}y^T(\theta)P_{13}y(\theta)\,d\theta\,ds,\\ V_{10}(t)={}&\int_{t-\tau(t)}^{t}\dot\xi^T(s)P_{14}\dot\xi(s)\,ds+\tau_M\int_{t-\tau_M}^{t}\dot\xi^T(s)P_{15}\dot\xi(s)\,ds,\\ V_{11}(t)={}&\rho_M\int_{-\rho_M}^{0}\int_{t+s}^{t}f^T(\xi(\theta))P_{16}f(\xi(\theta))\,d\theta\,ds,\end{aligned}$$
where $G=\operatorname{diag}\{I,0,0,0\}$ and $\zeta^T(t)=\big[\xi^T(t),\ \int_{t-\mu(t)}^{t}y^T(s)\,ds,\ \int_{t-\mu_M}^{t-\mu(t)}y^T(s)\,ds,\ y^T(t)\big]$.
Along the trajectories of system (10)–(12), the time derivative of $V(t)$ is
$$\dot V(t)=\sum_{i=1}^{11}\dot V_i(t).$$
The time derivative of V 1 ( t ) is then computed as the following.
$$\begin{aligned}\dot V_1(t)={}&2\xi^T(t)P_1\Big[-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]\\&+2f^T(\xi(t))W_1\dot\xi(t)-\xi^T(t)W_1F_1\dot\xi(t).\end{aligned}$$
Taking the derivative of V 2 ( t ) along any system solution trajectory, we have the following.
$$\begin{aligned}\dot V_2(t)={}&2\xi^T(t)P_2\Big[-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]\\&+2\dot\xi^T(t)Q_{13}^T\big[\dot\xi(t)-y(t)\big]+2y^T(t)Q_{14}^T\big[\dot\xi(t)-y(t)\big]+\xi^T(t)W_2F_2\dot\xi(t)-2f^T(\xi(t))W_2\dot\xi(t)\\ ={}&2\xi^T(t)P_2\Big[-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]\\&+2\dot\xi^T(t)Q_{13}^T\big[\dot\xi(t)-y(t)\big]+2y^T(t)Q_{14}^T\big[\dot\xi(t)-y(t)\big]\\&+2\Big[\xi^T(t)Q_1^T+\int_{t-\mu(t)}^{t}y^T(s)\,ds\,Q_4^T+\int_{t-\mu_M}^{t-\mu(t)}y^T(s)\,ds\,Q_7^T+y^T(t)Q_{10}^T\Big]\\&\qquad\times\Big[-y(t)-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]\\&+2\Big[\xi^T(t)Q_2^T+\int_{t-\mu(t)}^{t}y^T(s)\,ds\,Q_5^T+\int_{t-\mu_M}^{t-\mu(t)}y^T(s)\,ds\,Q_8^T+y^T(t)Q_{11}^T\Big]\times\Big[\xi(t)-\xi(t-\mu(t))-\int_{t-\mu(t)}^{t}y(s)\,ds\Big]\\&+2\Big[\xi^T(t)Q_3^T+\int_{t-\mu(t)}^{t}y^T(s)\,ds\,Q_6^T+\int_{t-\mu_M}^{t-\mu(t)}y^T(s)\,ds\,Q_9^T+y^T(t)Q_{12}^T\Big]\times\Big[\xi(t-\mu(t))-\xi(t-\mu_M)-\int_{t-\mu_M}^{t-\mu(t)}y(s)\,ds\Big]\\&+\xi^T(t)W_2F_2\dot\xi(t)-2f^T(\xi(t))W_2\dot\xi(t).\end{aligned}$$
For $V_3(t)$, using $\dot\mu(t)\le\mu_d$, we now have the following.
$$\begin{aligned}\dot V_3(t)={}&\xi^T(t)P_3\xi(t)-\xi^T(t-\mu_M)P_3\xi(t-\mu_M)+\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ R_2^T&R_3\end{bmatrix}\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}\\&-(1-\dot\mu(t))\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ R_2^T&R_3\end{bmatrix}\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}+\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}^T\begin{bmatrix}R_4&R_5\\ R_5^T&R_6\end{bmatrix}\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}\\&-\begin{bmatrix}\xi(t-\mu_M)\\ f(\xi(t-\mu_M))\end{bmatrix}^T\begin{bmatrix}R_4&R_5\\ R_5^T&R_6\end{bmatrix}\begin{bmatrix}\xi(t-\mu_M)\\ f(\xi(t-\mu_M))\end{bmatrix}\\ \le{}&\xi^T(t)P_3\xi(t)-\xi^T(t-\mu_M)P_3\xi(t-\mu_M)+\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ R_2^T&R_3\end{bmatrix}\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}\\&-\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ R_2^T&R_3\end{bmatrix}\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}+\mu_d\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}^T\begin{bmatrix}R_1&R_2\\ R_2^T&R_3\end{bmatrix}\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}\\&+\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}^T\begin{bmatrix}R_4&R_5\\ R_5^T&R_6\end{bmatrix}\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}-\begin{bmatrix}\xi(t-\mu_M)\\ f(\xi(t-\mu_M))\end{bmatrix}^T\begin{bmatrix}R_4&R_5\\ R_5^T&R_6\end{bmatrix}\begin{bmatrix}\xi(t-\mu_M)\\ f(\xi(t-\mu_M))\end{bmatrix}.\end{aligned}$$
From Lemma 6, we have the following.
$$\begin{aligned}\dot V_4(t)={}&\mu_M^2\begin{bmatrix}\xi(t)\\ y(t)\end{bmatrix}^T\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\begin{bmatrix}\xi(t)\\ y(t)\end{bmatrix}-\mu_M\int_{t-\mu_M}^{t}\begin{bmatrix}\xi(s)\\ y(s)\end{bmatrix}^T\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\begin{bmatrix}\xi(s)\\ y(s)\end{bmatrix}ds\\ \le{}&\mu_M^2\begin{bmatrix}\xi(t)\\ y(t)\end{bmatrix}^T\begin{bmatrix}R_7&R_8\\ *&R_9\end{bmatrix}\begin{bmatrix}\xi(t)\\ y(t)\end{bmatrix}+\Upsilon^T\Pi\,\Upsilon,\end{aligned}$$
where $\Upsilon^T=\big[\xi^T(t),\ \xi^T(t-\mu(t)),\ \xi^T(t-\mu_M),\ \int_{t-\mu(t)}^{t}\xi^T(s)\,ds,\ \int_{t-\mu_M}^{t-\mu(t)}\xi^T(s)\,ds\big]$ and
$$\Pi=\begin{bmatrix}-R_9&R_9&0&-R_8^T&0\\ *&-R_9-R_9^T&R_9&R_8^T&-R_8^T\\ *&*&-R_9&0&R_8^T\\ *&*&*&-R_7&0\\ *&*&*&*&-R_7\end{bmatrix}.$$
Using Lemmas 5 and 7, $\dot V_5(t)$ is computed as follows:
$$\begin{aligned}\dot V_5(t)={}&\mu_M^2\xi^T(t)P_4\xi(t)-\mu_M\int_{t-\mu_M}^{t}\xi^T(s)P_4\xi(s)\,ds+\mu_My^T(t)P_5y(t)-\int_{t-\mu_M}^{t}\dot\xi^T(s)P_5\dot\xi(s)\,ds\\ \le{}&\mu_M^2\xi^T(t)P_4\xi(t)+\mu_My^T(t)P_5y(t)-\Big(\int_{t-\mu(t)}^{t}\xi^T(s)\,ds\Big)P_4\Big(\int_{t-\mu(t)}^{t}\xi(s)\,ds\Big)-\Big(\int_{t-\mu_M}^{t-\mu(t)}\xi^T(s)\,ds\Big)P_4\Big(\int_{t-\mu_M}^{t-\mu(t)}\xi(s)\,ds\Big)\\&+\Gamma^T\Theta\,\Gamma+\mu_M\Gamma^T\begin{bmatrix}\hat M_3&\hat M_4&0\\ *&\hat M_3+\hat M_5&\hat M_4\\ *&*&\hat M_5\end{bmatrix}\Gamma,\end{aligned}$$
where $\Gamma=[\xi^T(t),\ \xi^T(t-\mu(t)),\ \xi^T(t-\mu_M)]^T$ and the following is the case:
$$\Theta=\begin{bmatrix}\hat M_1+\hat M_1^T&-\hat M_1^T+\hat M_2&0\\ *&\hat M_1+\hat M_1^T-\hat M_2-\hat M_2^T&-\hat M_1^T+\hat M_2\\ *&*&-\hat M_2-\hat M_2^T\end{bmatrix}.$$
Using Lemma 1 (Jensen’s Inequality), we have the following.
$$\begin{aligned}\dot V_6(t)\le{}&\mu_M^2y^T(t)P_6y(t)-\Big(\int_{t-\mu_M}^{t}y^T(s)\,ds\Big)P_6\Big(\int_{t-\mu_M}^{t}y(s)\,ds\Big)+\mu_M^2\dot\xi^T(t)P_7\dot\xi(t)-\Big(\int_{t-\mu_M}^{t}\dot\xi^T(s)\,ds\Big)P_7\Big(\int_{t-\mu_M}^{t}\dot\xi(s)\,ds\Big)\\ ={}&\mu_M^2y^T(t)P_6y(t)+\mu_M^2\dot\xi^T(t)P_7\dot\xi(t)\\&-\Big(\int_{t-\mu(t)}^{t}y^T(s)\,ds+\int_{t-\mu_M}^{t-\mu(t)}y^T(s)\,ds\Big)P_6\Big(\int_{t-\mu(t)}^{t}y(s)\,ds+\int_{t-\mu_M}^{t-\mu(t)}y(s)\,ds\Big)\\&-\Big(\int_{t-\mu(t)}^{t}\dot\xi^T(s)\,ds+\int_{t-\mu_M}^{t-\mu(t)}\dot\xi^T(s)\,ds\Big)P_7\Big(\int_{t-\mu(t)}^{t}\dot\xi(s)\,ds+\int_{t-\mu_M}^{t-\mu(t)}\dot\xi(s)\,ds\Big).\end{aligned}$$
Applying Lemma 8 to $\dot V_7(t)$, we can obtain the following:
$$\begin{aligned}\dot V_7(t)={}&\mu_M^2y^T(t)P_8y(t)-\mu_M\int_{t-\mu_M}^{t}\dot\xi^T(s)P_8\dot\xi(s)\,ds+\frac{\mu_M^2}{2}y^T(t)P_9y(t)-\int_{t-\mu_M}^{t}\int_{u}^{t}\dot\xi^T(\lambda)P_9\dot\xi(\lambda)\,d\lambda\,du\\ \le{}&\mu_M^2y^T(t)P_8y(t)+\frac{\mu_M^2}{2}y^T(t)P_9y(t)-\big[\Theta_1^TP_8\Theta_1+3\Theta_2^TP_8\Theta_2+5\Theta_3^TP_8\Theta_3\big]-\big[2\Theta_4^TP_9\Theta_4+4\Theta_5^TP_9\Theta_5+6\Theta_6^TP_9\Theta_6\big],\end{aligned}$$
where the following is the case:
$$\begin{aligned}&\Theta_1=\xi(t-\mu_M)-\xi(t),\qquad\Theta_2=\xi(t-\mu_M)+\xi(t)-\frac{2}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds,\\&\Theta_3=\xi(t)-\xi(t-\mu_M)+\frac{6}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds-\frac{12}{\mu_M^2}\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(s)\,ds\,du,\qquad\Theta_4=\xi(t)-\frac{1}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds,\\&\Theta_5=\xi(t)+\frac{2}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds-\frac{6}{\mu_M^2}\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(s)\,ds\,du,\\&\Theta_6=\xi(t)-\frac{3}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds+\frac{24}{\mu_M^2}\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(s)\,ds\,du-\frac{60}{\mu_M^3}\int_{t-\mu_M}^{t}\int_{u}^{t}\int_{s}^{t}\xi(r)\,dr\,ds\,du.\end{aligned}$$
According to Lemma 4, we can obtain V ˙ 8 ( t ) by performing the following.
$$\dot V_8(t)\le\frac{\mu_M^4}{4}\xi^T(t)P_{10}\xi(t)-\frac{\mu_M^2}{2}\int_{t-\mu_M}^{t}\int_{u}^{t}\xi^T(\lambda)P_{10}\xi(\lambda)\,d\lambda\,du+\frac{\mu_M^4}{2}y^T(t)P_{11}y(t)-\mu_M^2\int_{t-\mu_M}^{t}\int_{u}^{t}\dot\xi^T(\lambda)P_{11}\dot\xi(\lambda)\,d\lambda\,du.$$
Applying Lemma 4 to both double-integral terms and expanding $\int_{t-\mu_M}^{t}\int_{u}^{t}\dot\xi(\lambda)\,d\lambda\,du=\mu_M\xi(t)-\int_{t-\mu_M}^{t}\xi(u)\,du$, we obtain
$$\begin{aligned}\dot V_8(t)\le{}&\frac{\mu_M^4}{4}\xi^T(t)P_{10}\xi(t)-\Big(\int_{t-\mu_M}^{t}\int_{u}^{t}\xi^T(\lambda)\,d\lambda\,du\Big)P_{10}\Big(\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(\lambda)\,d\lambda\,du\Big)+\frac{\mu_M^4}{2}y^T(t)P_{11}y(t)\\&-2\mu_M^2\xi^T(t)P_{11}\xi(t)+2\mu_M\xi^T(t)P_{11}\int_{t-\mu_M}^{t}\xi(u)\,du+2\mu_M\Big(\int_{t-\mu_M}^{t}\xi^T(u)\,du\Big)P_{11}\xi(t)\\&-2\Big(\int_{t-\mu_M}^{t}\xi^T(u)\,du\Big)P_{11}\Big(\int_{t-\mu_M}^{t}\xi(u)\,du\Big).\end{aligned}$$
Using Lemmas 2 and 3, an upper bound on $\dot V_9(t)$ can be obtained as follows.
$$\begin{aligned}\dot V_9(t)\le{}&\mu_M^2y^T(t)P_{12}y(t)+\mu_M^2y^T(t)P_{13}y(t)\\&+\begin{bmatrix}\xi(t)\\ \xi(t-\mu_M)\\ \frac{1}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds\end{bmatrix}^T\begin{bmatrix}-4P_{12}&-2P_{12}&6P_{12}\\ *&-4P_{12}&6P_{12}\\ *&*&-12P_{12}\end{bmatrix}\begin{bmatrix}\xi(t)\\ \xi(t-\mu_M)\\ \frac{1}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds\end{bmatrix}\\&+\begin{bmatrix}\xi(t)\\ \xi(t-\mu(t))\\ \xi(t-\mu_M)\end{bmatrix}^T\begin{bmatrix}-P_{13}&P_{13}-S&S\\ *&-2P_{13}+S+S^T&P_{13}-S\\ *&*&-P_{13}\end{bmatrix}\begin{bmatrix}\xi(t)\\ \xi(t-\mu(t))\\ \xi(t-\mu_M)\end{bmatrix}.\end{aligned}$$
Taking the time derivative of V 10 ( t ) , we have the following.
$$\begin{aligned}\dot V_{10}(t)={}&\dot\xi^T(t)P_{14}\dot\xi(t)-(1-\dot\tau(t))\dot\xi^T(t-\tau(t))P_{14}\dot\xi(t-\tau(t))+\tau_M\dot\xi^T(t)P_{15}\dot\xi(t)-\tau_M\dot\xi^T(t-\tau_M)P_{15}\dot\xi(t-\tau_M)\\ \le{}&\dot\xi^T(t)P_{14}\dot\xi(t)-(1-\tau_d)\dot\xi^T(t-\tau(t))P_{14}\dot\xi(t-\tau(t))+\tau_M\dot\xi^T(t)P_{15}\dot\xi(t)-\tau_M\dot\xi^T(t-\tau_M)P_{15}\dot\xi(t-\tau_M).\end{aligned}$$
Calculating V ˙ 11 ( t ) yields the following.
$$\begin{aligned}\dot V_{11}(t)={}&\rho_M^2f^T(\xi(t))P_{16}f(\xi(t))-\rho_M\int_{t-\rho_M}^{t}f^T(\xi(s))P_{16}f(\xi(s))\,ds\\ \le{}&\rho_M^2f^T(\xi(t))P_{16}f(\xi(t))-\Big(\int_{t-\rho(t)}^{t}f^T(\xi(s))\,ds\Big)P_{16}\Big(\int_{t-\rho(t)}^{t}f(\xi(s))\,ds\Big).\end{aligned}$$
From (3), for any positive diagonal matrices H 1 > 0 , H 2 > 0 , the following is obtained.
$$\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}^T\begin{bmatrix}-F_1H_1&F_2H_1\\ *&-H_1\end{bmatrix}\begin{bmatrix}\xi(t)\\ f(\xi(t))\end{bmatrix}\ \ge\ 0,$$
$$\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}^T\begin{bmatrix}-F_1H_2&F_2H_2\\ *&-H_2\end{bmatrix}\begin{bmatrix}\xi(t-\mu(t))\\ f(\xi(t-\mu(t)))\end{bmatrix}\ \ge\ 0.$$
Furthermore, for any real matrices Z i , i = 1 , 2 and O j , j = 1 , 2 , 3 , , 8 of compatible dimensions, we obtain
$$2\Big(\int_{t-\mu(t)}^{t}\dot\xi(s)\,ds\Big)^TZ_1^T\Big[\xi(t)-\xi(t-\mu(t))-\int_{t-\mu(t)}^{t}\dot\xi(s)\,ds\Big]=0,$$
$$2\Big(\int_{t-\mu_M}^{t-\mu(t)}\dot\xi(s)\,ds\Big)^TZ_2^T\Big[\xi(t-\mu(t))-\xi(t-\mu_M)-\int_{t-\mu_M}^{t-\mu(t)}\dot\xi(s)\,ds\Big]=0,$$
$$\begin{aligned}&2\big[\dot\xi^T(t)O_1^T+\xi^T(t)O_2^T+f^T(\xi(t))O_3^T+f^T(\xi(t-\mu(t)))O_4^T\big]\\&\quad\times\Big[-\dot\xi(t)-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]\\&+2\big[y^T(t)O_5^T+\xi^T(t)O_6^T+f^T(\xi(t))O_7^T+f^T(\xi(t-\mu(t)))O_8^T\big]\\&\quad\times\Big[-y(t)-A\xi(t)+G_bf(\xi(t))+G_c\dot\xi(t-\tau(t))+G_df(\xi(t-\mu(t)))+G_e\int_{t-\rho(t)}^{t}f(\xi(s))\,ds+H\kappa(t)\Big]=0.\end{aligned}$$
Based on (14)–(19), it is clear that the following is observed:
$$\eta^T(t)\,\big[\Pi_{(i,j)}\big]_{23\times 23}\,\eta(t)<0,$$
where the following is the case.
$$\begin{aligned}\eta(t)=\operatorname{col}\Big\{&\xi(t),\,y(t),\,f(\xi(t)),\,f(\xi(t-\mu(t))),\,\xi(t-\mu(t)),\,\xi(t-\mu_M),\,\int_{t-\mu(t)}^{t}y(s)\,ds,\,\int_{t-\mu_M}^{t-\mu(t)}y(s)\,ds,\\&f(\xi(t-\mu_M)),\,\int_{t-\mu(t)}^{t}\xi(s)\,ds,\,\int_{t-\mu_M}^{t-\mu(t)}\xi(s)\,ds,\,\dot\xi(t),\,\frac{1}{\mu_M}\int_{t-\mu_M}^{t}\xi(s)\,ds,\,\frac{1}{\mu_M^2}\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(s)\,ds\,du,\\&\frac{1}{\mu_M^3}\int_{t-\mu_M}^{t}\int_{u}^{t}\int_{r}^{t}\xi(s)\,ds\,dr\,du,\,\int_{t-\mu_M}^{t}\xi(u)\,du,\,\int_{t-\mu_M}^{t}\int_{u}^{t}\xi(\lambda)\,d\lambda\,du,\,\int_{t-\mu(t)}^{t}\dot\xi(s)\,ds,\\&\int_{t-\mu_M}^{t-\mu(t)}\dot\xi(s)\,ds,\,\dot\xi(t-\tau_M),\,\dot\xi(t-\tau(t)),\,\int_{t-\rho(t)}^{t}f(\xi(s))\,ds,\,\kappa(t)\Big\}.\end{aligned}$$
Then, for $\alpha>0$, we are able to obtain the following:
$$\dot V(t)-\alpha V(t)-\alpha\kappa^T(t)\kappa(t)\ \le\ \eta^T(t)\,\big[\Pi_{(i,j)}\big]_{23\times 23}\,\eta(t)\ <\ 0,$$
so that $\dot V(t)\le\alpha V(t)+\alpha\kappa^T(t)\kappa(t)$.
By multiplying the above inequality by $e^{-\alpha t}$, we can obtain the following:
$$\frac{d}{dt}\big[e^{-\alpha t}V(t)\big]\ \le\ \alpha e^{-\alpha t}\kappa^T(t)\kappa(t).$$
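For completeness, this differential form follows from the product rule, assuming the bound $\dot V(t)\le\alpha V(t)+\alpha\kappa^T(t)\kappa(t)$ obtained in the previous step:
$$\frac{d}{dt}\big[e^{-\alpha t}V(t)\big]=e^{-\alpha t}\dot V(t)-\alpha e^{-\alpha t}V(t)\le e^{-\alpha t}\big(\alpha V(t)+\alpha\kappa^T(t)\kappa(t)\big)-\alpha e^{-\alpha t}V(t)=\alpha e^{-\alpha t}\kappa^T(t)\kappa(t).$$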
Integrating both sides of this inequality from 0 to $t$, with $t\in[0,T_f]$, we obtain the following:
$$V(t)\ \le\ e^{\alpha t}V(0)+\alpha e^{\alpha t}\int_{0}^{t}e^{-\alpha s}\kappa^T(s)\kappa(s)\,ds.$$
Here, $V(0)$ is given by the following.
V ( 0 ) = ξ T ( 0 ) P 1 ξ ( 0 ) + 2 i = 1 N k i 0 ξ i ( 0 ) ( f i ( s ) F s ) d s + ζ T ( 0 ) G P 2 ζ ( 0 ) + 2 i = 1 N w i 0 ξ i ( 0 ) ( F + s f i ( s ) ) d s + 0 μ M 0 ξ T ( s ) P 3 ξ ( s ) d s + μ ( t ) 0 ξ ( s ) f ( ξ ( s ) ) T R 1 R 2 R 3 ξ ( s ) f ( ξ ( s ) ) d s + μ M 0 ξ ( s ) f ( ξ ( s ) ) T R 4 R 5 R 6 ξ ( s ) f ( ξ ( s ) ) d s + μ M μ M 0 s 0 ξ ( θ ) y ( θ ) T R 7 R 8 R 9 ξ ( θ ) y ( θ ) d θ d s + μ M μ M 0 s 0 ξ T ( θ ) P 4 ξ ( θ ) d θ d s + μ M 0 s 0 y T ( θ ) P 5 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) P 6 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) P 7 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) P 8 y ( θ ) d θ d s + μ M μ M 0 λ 0 s 0 ξ T ( θ ) P 9 ξ ( θ ) d θ d s d λ + ( μ M ) 2 2 μ M 0 λ 0 s 0 ξ T ( θ ) P 10 ξ ( θ ) d θ d s d λ + ( μ M ) 2 2 μ M 0 λ 0 s 0 y T ( θ ) P 11 y ( θ ) d θ d s d λ + μ M μ M 0 s 0 y T ( θ ) P 12 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) P 13 y ( θ ) d θ d s + τ ( t ) 0 ξ T ( s ) P 14 ξ ( s ) d s + τ M τ M 0 ξ T ( s ) P 15 ξ ( s ) d s + ρ M ρ M 0 s 0 f ( ξ ( θ ) ) T P 16 f ( ξ ( θ ) ) d θ d s .
Note that $\tilde P_i=L^{-\frac{1}{2}}P_iL^{-\frac{1}{2}}$, $i=1,2,3,\ldots,16$, and $\tilde R_j=L^{-\frac{1}{2}}R_jL^{-\frac{1}{2}}$, $j=1,2,3,\ldots,9$; then the following relationship can be found.
V ( 0 ) = ξ T ( 0 ) L 1 2 P ˜ 1 L 1 2 ξ ( 0 ) + 2 W 1 ˜ f ( ξ T ( 0 ) ) + ζ T ( 0 ) L 1 2 G P ˜ 2 L 1 2 ζ ( 0 ) 2 W 2 ˜ f ( ξ T ( 0 ) ) + 0 μ M 0 ξ T ( s ) L 1 2 P ˜ 3 L 1 2 ξ ( s ) d s + μ ( t ) 0 ξ ( s ) f ( ξ ( s ) ) T L 1 2 R ˜ 1 L 1 2 L 1 2 R ˜ 2 L 1 2 L 1 2 R ˜ 2 T L 1 2 L 1 2 R ˜ 3 L 1 2 ξ ( s ) f ( ξ ( s ) ) d s + μ M 0 ξ ( s ) f ( ξ ( s ) ) T L 1 2 R ˜ 4 L 1 2 L 1 2 R ˜ 5 L 1 2 L 1 2 R ˜ 5 T L 1 2 L 1 2 R ˜ 6 L 1 2 ξ ( s ) f ( ξ ( s ) ) d s + μ M μ M 0 s 0 ξ ( θ ) y ( θ ) T L 1 2 R ˜ 7 L 1 2 L 1 2 R ˜ 8 L 1 2 L 1 2 R ˜ 8 T L 1 2 L 1 2 R ˜ 9 L 1 2 ξ ( θ ) y ( θ ) d θ d s + μ M μ M 0 s 0 ξ T ( θ ) L 1 2 P ˜ 4 L 1 2 ξ ( θ ) d θ d s + μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 5 L 1 2 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 6 L 1 2 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 7 L 1 2 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 8 L 1 2 y ( θ ) d θ d s + μ M μ M 0 λ 0 s 0 ξ T ( θ ) L 1 2 P ˜ 9 L 1 2 ξ ( θ ) d θ d s d λ + ( μ M ) 2 2 μ M 0 λ 0 s 0 ξ T ( θ ) L 1 2 P ˜ 10 L 1 2 ξ ( θ ) d θ d s d λ + ( μ M ) 2 2 μ M 0 λ 0 s 0 y T ( θ ) L 1 2 P ˜ 11 L 1 2 y ( θ ) d θ d s d λ + μ M μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 12 L 1 2 y ( θ ) d θ d s + μ M μ M 0 s 0 y T ( θ ) L 1 2 P ˜ 13 L 1 2 y ( θ ) d θ d s + τ ( t ) 0 ξ T ( s ) L 1 2 P ˜ 14 L 1 2 ξ ( s ) d s + τ M τ M 0 ξ T ( s ) L 1 2 P ˜ 15 L 1 2 ξ ( s ) d s + ρ M ρ M 0 s 0 f ( ξ ( θ ) ) T L 1 2 P ˜ 16 L 1 2 f ( ξ ( θ ) ) d θ d s , [ λ max ( P ˜ 1 + P ˜ 2 ) + 2 λ max ( K + W ) + μ M λ max ( P ˜ 3 + R ˜ 1 + R ˜ 2 + R ˜ 2 T + R ˜ 3 + R ˜ 4 + R ˜ 5 + R ˜ 5 T + R ˜ 6 ) + μ M 3 2 λ m a x ( P ˜ 4 + P ˜ 5 + P ˜ 6 + P ˜ 7 + R ˜ 7 + R ˜ 8 + R ˜ 8 T + R ˜ 9 + P ˜ 8 + P ˜ 12 + P ˜ 13 ) + μ M 5 12 λ m a x ( P ˜ 9 + P ˜ 10 ) + τ M λ m a x ( P ˜ 14 ) + τ M 2 λ m a x ( P ˜ 15 ) + ρ M 3 2 λ m a x ( P ˜ 16 ) ] × sup μ M t 0 0 { ξ T ( t 0 ) L ξ ( t 0 ) , ξ ˙ T ( t 0 ) L ξ ˙ ( t 0 ) } , Λ g 1 .
We have the following:
V(t) ≤ e^{αt} V(0) + α e^{αt} ∫_0^t e^{-αs} κ^T(s) κ(s) ds ≤ e^{αt} Λ g_1 + α e^{αt} ∫_0^t e^{-αs} κ^T(s) κ(s) ds ≤ e^{αT} Λ g_1 + e^{αT} δ (1 - e^{-αT}) = e^{αT} [ Λ g_1 + δ (1 - e^{-αT}) ],
where the following is the case.
Λ = λ 2 + λ 4 + 2 ( λ 3 + λ 5 ) + μ M λ max ( λ 6 + λ 7 + λ 8 + λ 9 + λ 10 + λ 11 + λ 12 + λ 13 + λ 14 ) + μ M 2 2 λ 20 + μ M 3 2 ( λ 15 + λ 16 + λ 17 + λ 18 + λ 19 + λ 21 + λ 22 + λ 23 + λ 27 + λ 28 ) + μ M 4 6 λ 24 + μ M 5 12 ( λ 25 + λ 26 ) + τ M ( λ 29 ) + τ M 2 ( λ 30 ) + ρ M 3 2 ( λ 31 ) .
On the other hand, the following condition holds.
V(t) ≥ ξ^T(t) P_1 ξ(t) ≥ λ_min(P̃_1) ξ^T(t) L ξ(t) = λ_1 ξ^T(t) L ξ(t).
From Equations (24) and (27), we obtain the following.
ξ^T(t) L ξ(t) ≤ e^{αT} [ Λ g_1 + δ (1 - e^{-αT}) ] / λ_1.
The condition λ_1 g_2 e^{-αT} > Λ g_1 + δ (1 - e^{-αT}) then implies that ξ^T(t) L ξ(t) < g_2 for all t ∈ [0, T]. By Definition 2, system (5) is finite-time bounded with respect to (g_1, g_2, T, L, δ). The proof is complete. □
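The scalar condition above is straightforward to check numerically once the eigenvalue bounds are available. The sketch below evaluates λ_1 g_2 e^{-αT} > Λ g_1 + δ(1 - e^{-αT}); the values of λ_1 and Λ are hypothetical placeholders, not outputs of the paper's LMIs, while g_1, g_2, δ, α, T follow Example 1.

```python
import math

def finite_time_bounded(lambda1, Lam, g1, g2, delta, alpha, T):
    """Check lambda1*g2*exp(-alpha*T) > Lam*g1 + delta*(1 - exp(-alpha*T)),
    the scalar part of the finite-time boundedness test."""
    lhs = lambda1 * g2 * math.exp(-alpha * T)
    rhs = Lam * g1 + delta * (1.0 - math.exp(-alpha * T))
    return lhs > rhs

# Hypothetical lambda1 and Lam; the remaining constants follow Example 1.
print(finite_time_bounded(lambda1=1.0, Lam=2.0, g1=0.4, g2=7.8794,
                          delta=0.005, alpha=0.10, T=6))  # True for these values
```

In practice, λ_1 and Λ come from the feasible LMI solution, so this check is the last step after the matrix inequalities have been solved.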
Remark 1.
Condition (9) is not a standard form of LMIs. In order to verify that this condition is equivalent to the relation of LMIs, let λ i , i = 1 , 2 , 3 , , 31 be some positive scalars with the following.
λ_1 I ≤ P̃_1 ≤ λ_2 I, 0 ≤ W̃_1 ≤ λ_3 I, 0 ≤ P̃_2 ≤ λ_4 I, 0 ≤ W̃_2 ≤ λ_5 I, 0 ≤ P̃_3 ≤ λ_6 I, 0 ≤ R̃_1 ≤ λ_7 I, 0 ≤ R̃_2 ≤ λ_8 I, 0 ≤ R̃_2^T ≤ λ_9 I, 0 ≤ R̃_3 ≤ λ_10 I, 0 ≤ R̃_4 ≤ λ_11 I, 0 ≤ R̃_5 ≤ λ_12 I, 0 ≤ R̃_5^T ≤ λ_13 I, 0 ≤ R̃_6 ≤ λ_14 I, 0 ≤ R̃_7 ≤ λ_15 I, 0 ≤ R̃_8 ≤ λ_16 I, 0 ≤ R̃_8^T ≤ λ_17 I, 0 ≤ R̃_9 ≤ λ_18 I, 0 ≤ P̃_4 ≤ λ_19 I, 0 ≤ P̃_5 ≤ λ_20 I, 0 ≤ P̃_6 ≤ λ_21 I, 0 ≤ P̃_7 ≤ λ_22 I, 0 ≤ P̃_8 ≤ λ_23 I, 0 ≤ P̃_9 ≤ λ_24 I, 0 ≤ P̃_10 ≤ λ_25 I, 0 ≤ P̃_11 ≤ λ_26 I, 0 ≤ P̃_12 ≤ λ_27 I, 0 ≤ P̃_13 ≤ λ_28 I, 0 ≤ P̃_14 ≤ λ_29 I, 0 ≤ P̃_15 ≤ λ_30 I, 0 ≤ P̃_16 ≤ λ_31 I.
Consider the following.
λ_1 = λ_min(P̃_1), λ_2 = λ_max(P̃_1), λ_3 = λ_max(W̃_1), λ_4 = λ_max(P̃_2), λ_5 = λ_max(W̃_2), λ_6 = λ_max(P̃_3), λ_7 = λ_max(R̃_1), λ_8 = λ_max(R̃_2), λ_9 = λ_max(R̃_2^T), λ_10 = λ_max(R̃_3), λ_11 = λ_max(R̃_4), λ_12 = λ_max(R̃_5), λ_13 = λ_max(R̃_5^T), λ_14 = λ_max(R̃_6), λ_15 = λ_max(R̃_7), λ_16 = λ_max(R̃_8), λ_17 = λ_max(R̃_8^T), λ_18 = λ_max(R̃_9), λ_19 = λ_max(P̃_4), λ_20 = λ_max(P̃_5), λ_21 = λ_max(P̃_6), λ_22 = λ_max(P̃_7), λ_23 = λ_max(P̃_8), λ_24 = λ_max(P̃_9), λ_25 = λ_max(P̃_10), λ_26 = λ_max(P̃_11), λ_27 = λ_max(P̃_12), λ_28 = λ_max(P̃_13), λ_29 = λ_max(P̃_14), λ_30 = λ_max(P̃_15), λ_31 = λ_max(P̃_16).
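Once P_1 and L are fixed, the scalars above are simply extreme eigenvalues of the scaled matrices P̃_i = L^{-1/2} P_i L^{-1/2}. A minimal numerical sketch (the matrices below are illustrative, not taken from the paper):

```python
import numpy as np

def scaled(P, L):
    """P_tilde = L^{-1/2} P L^{-1/2} for symmetric positive definite L."""
    w, V = np.linalg.eigh(L)
    L_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return L_inv_half @ P @ L_inv_half

def eig_bounds(P, L):
    """Tightest scalars (lam_min, lam_max) with lam_min*I <= P_tilde <= lam_max*I."""
    w = np.linalg.eigvalsh(scaled(P, L))
    return w[0], w[-1]

P1 = np.array([[2.0, 0.5], [0.5, 1.0]])  # illustrative positive definite matrix
L = np.diag([4.0, 1.0])
lam1, lam2 = eig_bounds(P1, L)  # lambda_1 = lam_min(P1_tilde), lambda_2 = lam_max(P1_tilde)
```

The same routine gives each λ_i from the corresponding P_i or R_i, after which Λ and condition (24) are evaluated directly.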

3.2. Finite-Time Stability Analysis

Remark 2.
In the absence of an external disturbance, i.e., κ ( t ) = 0 , system (5) reduces to the following.
ξ ˙ ( t ) G c ξ ˙ ( t τ ( t ) ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) + G e t ρ ( t ) t f ( ξ ( s ) ) d s , ξ ( t ) = ϕ ( t ) , t [ h ¯ , 0 ] .
Based on (8), we introduce additional notation for the finite-time stability analysis of (28).
˜ = Π ( i , j ) 22 × 22 .
The entries Π ( 1 , 1 ) through Π ( 22 , 22 ) are the same as in Theorem 1. Then, we define the following:
η ˜ ( t ) = [ ξ ( t ) , y ( t ) , f ( ξ ( t ) ) , f ( ξ ( t μ ( t ) ) ) , ξ ( t μ ( t ) ) , ξ ( t μ M ) , t μ ( t ) t y ( s ) d s , t μ M t μ ( t ) y ( s ) d s , f ( ξ ( t μ M ) ) , t μ ( t ) t ξ ( s ) d s , t μ M t μ ( t ) ξ ( s ) d s , ξ ˙ ( t ) , 1 μ M t μ M t ξ ( s ) d s , 1 μ M 2 t μ M t t μ M t ξ ( s ) d s , 1 μ M 3 t μ M t t μ M t t μ M t ξ ( s ) d s , t μ M t ξ ( u ) d u , μ M t u t ξ ( λ ) d λ d u , t μ ( t ) t ξ ˙ ( s ) d s , t μ M t μ ( t ) ξ ˙ ( s ) d s , ξ ˙ ( t τ ( t ) ) , t ρ ( t ) t f ( ξ ( s ) ) d s ] ,
and obtain the following corollary.
Corollary 1.
For C < 1, system (28) with κ ( t ) = 0 is finite-time stable if there exist symmetric positive definite matrices P_i, R_j, i = 1, 2, 3, …, 16, j = 1, 2, 3, …, 9, appropriate matrices S, P_13, R_8, Q_k, Z_l, l = 1, 2, and O_e, e = 1, 2, 3, …, 8, with R_7 ≥ 0, [ R_{1+3n}, R_{2+3n}; R_{2+3n}^T, R_{3+3n} ] ≥ 0, [ P_13, S; *, P_13 ] ≥ 0, where n = 0, 1, 2, k = 1, 2, …, 14, positive diagonal matrices H_p, W_p, p = 1, 2, and positive real constants μ_M, ρ_M, μ_d, τ_M, τ_d, α, g_1, g_2, T, such that the following linear matrix inequalities hold:
P 5 M 1 M 2 M 3 M 4 M 5 0 ,
˜ < 0 ,
λ_1 g_2 e^{-αT} > Λ g_1 ,
where the parameters are as defined in Theorem 1 with κ ( t ) = 0 .
Proof. 
Since the proof is identical to that of Theorem 1, it is omitted. □

3.3. Finite-Time Passivity Analysis

This section investigates finite-time passivity of the following system.
ξ ˙ ( t ) G c ξ ˙ ( t τ ( t ) ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) + H κ ( t ) + G e t ρ ( t ) t f ( ξ ( s ) ) d s , z ( t ) = G 1 f ( ξ ( t ) ) + G 2 κ ( t ) , t R + ξ ( t ) = ϕ ( t ) , t [ h ¯ , 0 ] .
Theorem 2.
For C < 1, system (43) is finite-time passive if there exist symmetric positive definite matrices P_i, R_j, G_t, i = 1, 2, 3, …, 16, j = 1, 2, 3, …, 9, t = 1, 2, appropriate matrices S, P_13, R_8, Q_k, Z_l, l = 1, 2, and O_e, e = 1, 2, 3, …, 8, with R_7 ≥ 0, [ R_{1+3n}, R_{2+3n}; R_{2+3n}^T, R_{3+3n} ] ≥ 0, [ P_13, S; *, P_13 ] ≥ 0, where n = 0, 1, 2, k = 1, 2, …, 14, positive diagonal matrices H_p, W_p, p = 1, 2, and positive real constants μ_M, ρ_M, μ_d, τ_M, τ_d, α, δ, β, g_1, g_2, T, such that the following linear matrix inequalities hold:
P 5 M 1 M 2 M 3 M 4 M 5 0 ,
^ = Π ^ ( i , j ) 23 × 23 < 0 ,
λ_1 g_2 e^{-αT} > Λ g_1 + δ (1 - e^{-αT}) ,
where Π ^ ( i , j ) = Π ( i , j ) , i , j = 1 , 2 , , 23 except Π ^ 4 , 19 = Π 3 , 23 G 1 T , Π ^ 23 , 3 = Π 19 , 4 G 1 , Π ^ 23 , 23 = β I G 2 T G 2 .
Proof. 
Using the same Lyapunov–Krasovskii functional as in Theorem 1, the following is obtained.
V ˙ ( t ) [ α V ( t ) + 2 κ T ( t ) z ( t ) β κ T ( t ) κ ( t ) ] η T ( t ) ^ η ( t ) .
Since the matrix inequality (35) holds, the following is the case.
V ˙ ( t ) α V ( t ) 2 κ T ( t ) z ( t ) β κ T ( t ) κ ( t ) .
Then, multiplying (38) by e^{-αt} and integrating from 0 to T, we can obtain the following:
V ( t ) e α T 2 0 T e α t κ T ( t ) z ( t ) d t β 0 T e α t κ T ( t ) κ ( t ) d t , 2 0 T κ T ( t ) z ( t ) d t β e α T 0 T κ T ( t ) κ ( t ) d t ,
which implies the following.
V ( t ) 2 e α T 0 T κ T ( t ) z ( t ) d t β 0 T κ T ( t ) κ ( t ) d t .
Since V ( t ) ≥ 0 , (39) implies the following:
∫_0^T κ^T(t) z(t) dt ≥ - γ ∫_0^T κ^T(t) κ(t) dt ,
where γ = β e^{αT} / 2 . As a result, we may infer that system (43) is finite-time passive. This completes the proof. □
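Note that the passivity level depends on β, α and T only through γ = β e^{αT}/2. A one-line numeric sketch; the value of β below is a placeholder for the scalar returned by the LMI solver, not a result from the paper:

```python
import math

def passivity_level(beta, alpha, T):
    """gamma = beta * exp(alpha * T) / 2, the finite-time passivity level."""
    return 0.5 * beta * math.exp(alpha * T)

gamma = passivity_level(beta=2.0, alpha=0.0, T=5.0)  # -> 1.0 when alpha = 0
```

Since γ grows exponentially in αT, a smaller exponent α (when the LMIs remain feasible) yields a tighter passivity level over the horizon [0, T].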
Remark 3.
When E = 0 , C = 0 and H = 0 , system (5) reduces to the following delayed neural network.
ξ ˙ ( t ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) .
By (8) and the same line of proof as Theorem 1, without the finite-time stability conditions, system (41) is asymptotically stable if:
¯ = Π ( i , j ) 19 × 19 ,
where Π ¯ 12 , 12 = Π 12 , 12 P 14 τ M P 15 , Π ¯ 4 , 4 = Π 3 , 3 ρ M 2 P 16 , and the parameters are as defined in Theorem 1. Then, we define the following.
η ¯ ( t ) = [ ξ ( t ) , y ( t ) , f ( ξ ( t ) ) , f ( ξ ( t μ ( t ) ) ) , ξ ( t μ ( t ) ) , ξ ( t μ M ) , t μ ( t ) t y ( s ) d s , t μ M t μ ( t ) y ( s ) d s , f ( ξ ( t μ M ) ) , t μ ( t ) t ξ ( s ) d s , t μ M t μ ( t ) ξ ( s ) d s , ξ ˙ ( t ) , 1 μ M t μ M t ξ ( s ) d s , 1 μ M 2 t μ M t t μ M t ξ ( s ) d s , 1 μ M 3 t μ M t t μ M t t μ M t ξ ( s ) d s , t μ M t ξ ( u ) d u , μ M t u t ξ ( λ ) d λ d u , t μ ( t ) t ξ ˙ ( s ) d s , t μ M t μ ( t ) ξ ˙ ( s ) d s ] .

4. Numerical Examples

Five simulation examples are provided in this section to demonstrate the feasibility and efficiency of the theoretical results.
Example 1.
Consider the following matrix parameters for the neutral-type neural networks:
ξ ˙ ( t ) G c ξ ˙ ( t τ ( t ) ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) + H κ ( t ) + G e t ρ ( t ) t f ( ξ ( s ) ) d s ,
with the following.
A = 3.6 0 0 3.6 , G b = 0.34 0 0.1 0.1 , G d = 0.1 0.2 0.15 0.18 , G c = 0.5 0 0.2 0.5 , H = 0.41 0.5 0.69 0.31 .
Let the following be the case:
τ M = 0.2 , g 1 = 0.4 , T = 6 , ρ M = 0.1 , α = 0.10 , δ = 0.005 , μ d = 0.5 , τ d = 0.2 ,
and μ M = 1.3 , F 1 = diag { 0 , 0 } , F 2 = diag { 1 , 1 } . Using MATLAB tools to solve LMIs (8) and (9), we obtain g 2 = 7.8794 , indicating that the neutral system under consideration is finite-time bounded. The activation function is described by f ( ξ ( t ) ) = 2 | c o s ( t ) | , and the time-varying delays are taken as μ ( t ) = 0.8 + 0.5 | s i n ( t ) | , ρ ( t ) = 0.1 | s i n ( t ) | and τ ( t ) = 0.2 | c o s ( t ) | .
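The full 23×23 LMIs above are solved with MATLAB in the paper. As a much smaller sanity check, one can verify with NumPy that the delay-free linear part of Example 1 is Hurwitz by solving a Lyapunov equation. The sketch assumes the usual neural-network sign convention that the linear self-feedback term is -Aξ(t); it is not a substitute for the finite-time LMI test.

```python
import numpy as np

def lyapunov_solution(A_cl, Q=None):
    """Solve A_cl^T P + P A_cl = -Q via Kronecker products (row-major vec).
    P is symmetric positive definite iff A_cl is a stability matrix."""
    n = A_cl.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    M = np.kron(A_cl.T, I) + np.kron(I, A_cl.T)
    P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
    return 0.5 * (P + P.T)  # symmetrize against round-off

A = np.array([[3.6, 0.0], [0.0, 3.6]])    # self-feedback matrix of Example 1
P = lyapunov_solution(-A)                 # assumed convention: linear term -A*xi
print(np.all(np.linalg.eigvalsh(P) > 0))  # True: -A is Hurwitz
```

A positive definite P here only confirms stability of the undelayed linear dynamics; the delay-dependent and finite-time conclusions require the full LMIs of Theorem 1.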
Example 2.
Consider the neutral-type neural network with the following matrix parameters:
ξ ˙ ( t ) G c ξ ˙ ( t τ ( t ) ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) + H κ ( t ) + G e t ρ ( t ) t f ( ξ ( s ) ) d s , z ( t ) = G 1 f ( ξ ( t ) ) + G 2 κ ( t ) ,
with the following.
A = 1.5 0 0 1.5 , G b = 1.1 0.2 0.1 1.1 , G d = 0.2 0 0.2 0.2 , G c = 0.5 0.3 0.2 0.1 , H = 0.4 0.2 0.3 0.14 , G 1 = 0.1 0.2 0.01 0.4 , G 2 = 0.2 0.6 0.3 0.2 .
Let the following be the case:
μ M = 2.4 , τ M = 1.2 , g 1 = 0.5 , T = 5 , g 2 = 6 , α = 0.10 , δ = 1 , μ d = 0.9 , τ d = 0.2 , ρ M = 0.1 ,
then F 1 = diag { 0 , 0 } , F 2 = diag { 0.5 , 0.9 } . Using MATLAB tools to solve LMIs (35) and (36), we obtain γ = 17.4493 , indicating that the neutral system under consideration is finite-time passive. The activation function is described by f ( ξ ( t ) ) = [ 0.5 | s i n ( t ) | , 0.9 | c o s ( t ) | ] , and the time-varying delays are taken as μ ( t ) = 0.1 + 0.1 | s i n ( t ) | , ρ ( t ) = 1.1 | s i n ( t ) | and τ ( t ) = 1 + 0.2 | c o s ( t ) | .
Example 3.
Consider the following matrix parameters for the neutral-type neural networks:
ξ ˙ ( t ) G c ξ ˙ ( t τ ( t ) ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) + G e t ρ ( t ) t f ( ξ ( s ) ) d s ,
with the following.
A = 4 0 0 4 , G b = 1.3 0.4 0.9 0.2 , G d = 0.6 0.2 0.3 0.3 , G c = 0.5 0 0.2 0.5 , H = 0.41 0.5 0.69 0.31 , E = 0.4 0.2 0.3 0.3 .
Let the following be the case:
τ M = 0.2 , g 1 = 3 , T = 5 , ρ M = 1.1 , α = 0.001 , δ = 0.005 , μ d = 0.1 , τ d = 0.1 ,
and μ M = 0.1 , F 1 = diag { 0 , 0 } , F 2 = diag { 2 , 2 } . Using MATLAB tools to solve LMIs (8) and (9), we obtain g 2 = 0.5996 , indicating that the neutral system under consideration is finite-time stable. The activation function is described by f ( ξ ( t ) ) = 4 | c o s ( t ) | , and the time-varying delays are taken as μ ( t ) = 0.8 + 0.5 | s i n ( t ) | , ρ ( t ) = 0.1 | s i n ( t ) | and τ ( t ) = 0.1 + 0.1 | c o s ( t ) | .
Example 4.
Consider the neural network with the following matrix parameters:
ξ ˙ ( t ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) ,
with the following:
A = 2 0 0 2 , G b = 1 1 1 1 , G d = 0.88 1 1 1 .
then F 1 = diag { 0 , 0 } , F 2 = diag { 0.4 , 0.8 } . By solving Example 4 with the LMI in Remark 3 using MATLAB tools, we obtain the maximum permissible upper bound μ M for different values of μ d , as shown in Table 1; the delayed neural network under consideration is asymptotically stable. The acquired results are compared with previously published studies, and the findings show that the stability conditions presented in this paper are more effective than those found in previous research.
Figure 1 provides the state response of system (4) under zero input and the initial condition [ 3.5 , 3.5 ] . The interval time-varying delays are chosen as μ ( t ) = [ 3.6 + 0.9 | s i n ( t ) | ] , and the activation function is set as f ( ξ ( t ) ) = [ 0.4 t a n h ( x 1 ( t ) ) , 0.8 t a n h ( x 2 ( t ) ) ] T .
The permissible upper bounds μ M for various μ d are listed in Table 1. Table 1 shows that the conclusions of Remark 3 in this study are less conservative than those in [45,46,47,48], demonstrating the effectiveness of our efforts; Figure 1 shows the temporal responses of the state variables.
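Entries such as those in Table 1 are typically obtained by a bisection search over μ_M, with one LMI feasibility solve at each trial value. A generic sketch of that search follows; the oracle below is a stand-in with a known threshold, whereas a real oracle would solve the LMIs of Remark 3 at each candidate μ_M.

```python
def max_feasible_delay(is_feasible, lo=0.0, hi=20.0, tol=1e-4):
    """Largest delay bound accepted by a feasibility oracle, assuming
    monotonicity: feasible below some threshold, infeasible above it."""
    if not is_feasible(lo):
        raise ValueError("infeasible even at the smallest delay")
    while is_feasible(hi):          # grow the bracket until an infeasible point is found
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:            # plain bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Stand-in oracle whose threshold matches the Remark 3 entry for mu_d = 0.8:
mu_max = max_feasible_delay(lambda mu: mu <= 6.5411)
```

The monotonicity assumption (feasible for all delays below the threshold) is standard for delay-dependent LMI criteria of this kind; without it, a grid search over μ_M would be needed instead.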
Example 5.
Consider the neural network with the following matrix parameters:
ξ ˙ ( t ) = A ξ ( t ) + G b f ( ξ ( t ) ) + G d f ( ξ ( t μ ( t ) ) ) ,
with the following:
A = 1.5 0 0 1.7 , G b = 0.0503 0.0454 0.0987 0.2075 , G d = 0.2381 0.9320 0.0388 0.5062 ,
then F 1 = diag { 0 , 0 } , F 2 = diag { 0.3 , 0.8 } . The maximum delay bounds μ M calculated by Remark 3 and by previously recommended criteria are presented in Table 2.
Figure 2 provides the state response of system (4) under zero input and the initial condition [ 3.5 , 3.5 ] . The interval time-varying delays are chosen as μ ( t ) = [ 6.3190 + 0.55 | s i n ( t ) | ] , and the activation function is set as f ( ξ ( t ) ) = [ 0.3 t a n h ( x 1 ( t ) ) , 0.8 t a n h ( x 2 ( t ) ) ] T .
From Table 2, it follows that Remark 3 provides significantly better results than [49,50,51,52] in the cases μ d = 0.4 and μ d = 0.45, although for μ d = 0.5 and μ d = 0.55 the results are slightly worse than those in [21]. These comparisons with previously published studies show that the stability conditions presented in this paper are more effective than those found in previous research.

5. Conclusions

In this study, a novel result was presented: a finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays. The time-varying delays are distributed, discrete and neutral, and upper bounds for the delays are available. We derived sufficient conditions for finite-time boundedness, finite-time stability and finite-time passivity, which had not been done before. First, we constructed a new Lyapunov–Krasovskii functional and applied Peng–Park’s integral inequality, the descriptor model transformation, the zero equation and Wirtinger’s integral inequality technique. New sufficient conditions were then derived in terms of linear matrix inequalities to guarantee finite-time stability of the system. Finally, numerical examples were presented to demonstrate the effectiveness of the results; the proposed criteria are less conservative than prior studies in terms of larger time-delay bounds. By combining numerous integral components of the Lyapunov–Krasovskii functional with integral inequalities, our results offer wider delay bounds than the previous literature (see Table 1 and Table 2), and constructing the LMI conditions from integral inequalities yields less conservative stability criteria for interval time-delay systems. We expect this work to support improvements in existing research and its extension to other areas of application.

Author Contributions

Conceptualization, I.K., N.S. and K.M.; methodology, I.K. and N.S.; software, N.S. and I.K.; validation, N.S., T.B. and K.M.; formal analysis, K.M.; investigation, N.S., I.K., T.B. and K.M.; writing—original draft preparation, N.S., I.K. and K.M.; writing—review and editing, N.S., I.K., T.B. and K.M.; visualization, I.K. and K.M.; supervision, N.S. and K.M.; project administration, I.K. and K.M.; funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Faculty of Engineering, Rajamangala University of Technology Isan Khon Kaen Campus and Research and Graduate Studies, Khon Kaen University (Research Grant RP64-8-002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the reviewers for their valuable comments and suggestions, which resulted in the improvement of the content of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

For Π ( i , j ) = Π ( j , i ) , i , j = 1 , 2 , 3 , , 23 where the following is the case:
Π ( 1 , 1 ) = P 1 A A T P 1 Q 1 T A A T Q 1 + Q 2 T + Q 2 + P 3 + R 1 + R 4 F 1 H 1 + μ M 2 P 4 + M 1 T + M 1 T + μ M M 3 9 P 8 12 P 9 + μ M 4 4 P 10 2 μ M 2 P 11 P 2 A A P 2 + μ M 2 R 7 R 9 O 2 T A A T O 2 O 6 T A A T O 6 4 P 12 P 13 , Π ( 1 , 2 ) = Q 1 T A T Q 10 + Q 11 + μ M 2 R 8 A T O 5 O 6 T , Π ( 1 , 3 ) = P 1 G b + Q 1 T G b + R 2 + R 5 + F 2 H 1 + P 2 G b + O 2 T G b A T O 3 + O 6 T G b A T O 7 , Π ( 1 , 4 ) = P 1 G c + Q 1 T G d + P 2 G d + O 2 T G d A T O 4 + O 6 T G d A T O 8 , Π ( 1 , 5 ) = Q 2 T Q 3 T M 1 T + M 2 + μ M M 4 + R 9 + P 13 S , Π ( 1 , 6 ) = Q 3 T + 3 P 8 2 P 12 + S , Π ( 1 , 7 ) = A T Q 4 Q 2 T Q 5 , Π ( 1 , 8 ) = A T Q 7 + Q 8 Q 3 T , Π ( 1 , 10 ) = R 8 T , Π ( 1 , 12 ) = W 1 F 1 + W 2 F 2 A T N 2 T O 1 O 2 T , Π ( 1 , 13 ) = 36 P 8 + 12 P 9 + 6 P 12 , Π ( 1 , 14 ) = 60 P 8 120 P 9 , Π ( 1 , 15 ) = 360 P 9 , Π ( 1 , 16 ) = 2 μ M P 11 , Π ( 1 , 18 ) = Z 1 , Π ( 1 , 21 ) = O 6 T G c + O 2 T G c + P 1 G c + P 2 G c , Π ( 1 , 22 ) = O 6 T G e + O 2 T G e + P 1 G e + P 2 G e , Π ( 1 , 23 ) = O 6 T H + O 2 T H + P 1 H + P 2 H , Π ( 2 , 2 ) = Q 10 T Q 1 0 + μ M P 5 + μ M 2 P 6 + μ M 2 2 P 9 + μ M 4 2 P 11 + Q 14 T + Q 14 + μ M 2 R 9 O 5 T O 5 + μ M 2 P 8 , Π ( 2 , 3 ) = Q 10 T G b + O 5 T G b O 7 , Π ( 2 , 4 ) = Q 10 T G d + O 5 T G d O 8 , Π ( 2 , 5 ) = Q 11 T Q 12 T , Π ( 2 , 6 ) = Q 12 T , Π ( 2 , 7 ) = Q 4 Q 11 T , Π ( 2 , 8 ) = Q 7 Q 12 T , Π ( 2 , 12 ) = Q 13 T Q 14 , Π ( 2 , 21 ) = O 5 T G c , Π ( 2 , 22 ) = O 5 T G e , Π ( 2 , 23 ) = O 5 T H , Π ( 3 , 3 ) = R 3 + R 6 H 1 + O 3 T G b + G b T O 3 + O 7 T G b + G b T O 7 + μ M 2 P 12 + μ M 2 M P 13 + ρ 2 P 16 , Π ( 3 , 4 ) = O 3 T G d + G d T O 4 + O 7 T G d + G b T O 8 , Π ( 3 , 7 ) = G b T Q 4 , Π ( 3 , 8 ) = G b T Q 7 , Π ( 3 , 12 ) = W 1 W 2 + G b T N 2 + G b T O 1 O 3 T , Π ( 3 , 21 ) = O 7 T G c + O 3 T G c , Π ( 3 , 22 ) = O 7 T G e + O 3 T G e , Π ( 3 , 23 ) = O 7 T H + O 3 T H , Π ( 4 , 4 ) = μ d G d R 3 R 3 H 2 + O 4 T G d + G d T O 4 + O 8 T G d + G d T O 8 , Π ( 4 , 5 ) = μ d G 
d R 2 T R 2 T + H 2 T F 2 T , Π ( 4 , 7 ) = G d T Q 4 , Π ( 4 , 8 ) = G d T Q 7 , Π ( 4 , 12 ) = G d T N 2 + G d T O 1 O 4 T , Π ( 4 , 21 ) = O 8 T G c + O 4 T G c , Π ( 4 , 22 ) = O 8 T G e + O 4 T G e , Π ( 4 , 23 ) = O 8 T H + O 4 T H , Π ( 5 , 5 ) = μ d G d R 1 R 1 + M 1 + M 1 T M 2 M 2 T + μ M M 3 + μ M M 5 F 1 H 2 R 9 R 9 T 2 P 13 + S + S T , Π ( 5 , 6 ) = M 2 M 1 T + μ M M 4 + R 9 + P 13 S , Π ( 5 , 7 ) = Q 5 Q 6 , Π ( 5 , 8 ) = Q 8 Q 9 , Π ( 5 , 10 ) = R 8 T , Π ( 5 , 11 ) = R 8 T , Π ( 5 , 18 ) = Z 1 , Π ( 5 , 19 ) = Z 2 , Π ( 6 , 6 ) = P 3 R 4 M 2 M 2 T + μ M M 5 9 P 8 R 9 4 P 12 P 13 , Π ( 6 , 7 ) = Q 6 , Π ( 6 , 8 ) = Q 9 , Π ( 6 , 9 ) = R 5 , Π ( 6 , 11 ) = R 8 T , Π ( 6 , 13 ) = 24 P 8 + 6 P 12 , Π ( 6 , 14 ) = 60 P 8 , Π ( 6 , 19 ) = Z 2 , Π ( 7 , 7 ) = Q 5 T Q 5 P 6 , Π ( 7 , 8 ) = Q 8 Q 6 T P 6 , Π ( 8 , 8 ) = Q 9 T Q 9 P 6 , Π ( 9 , 9 ) = R 6 , Π ( 10 , 10 ) = P 4 R 7 , Π ( 11 , 11 ) = P 4 R 7 , Π ( 12 , 12 ) = N 2 T N 2 + μ M 2 P 7 Q 13 T Q 13 O 1 T A A T O 1 + P 14 + τ M P 15 , Π ( 12 , 21 ) = O 1 T G c , Π ( 12 , 22 ) = O 1 T G e , Π ( 12 , 23 ) = O 1 T H , Π ( 13 , 13 ) = 192 P 8 72 P 9 12 P 12 , Π ( 13 , 14 ) = 360 P 8 + 480 P 9 , Π ( 13 , 15 ) = 1080 P 9 , Π ( 14 , 14 ) = 720 P 8 3600 P 9 , Π ( 14 , 15 ) = 8640 P 9 , Π ( 15 , 15 ) = 21600 P 9 , Π ( 16 , 16 ) = 2 P 11 , Π ( 17 , 17 ) = P 10 , Π ( 18 , 18 ) = Z 1 T Z 1 P 7 , Π ( 18 , 19 ) = P 7 , Π ( 19 , 19 ) = Z 2 T Z 2 P 7 , Π ( 20 , 20 ) = τ M P 15 , Π ( 21 , 21 ) = P 14 + τ M P 14 , Π ( 22 , 22 ) = P 16 , Π ( 23 , 23 ) = α I , and the other are equal zero .

References

1. Wu, A.; Zeng, Z. Exponential passivity of memristive neural networks with time delays. Neural Netw. 2014, 49, 11–18.
2. Gunasekaran, N.; Zhai, G.; Yu, Q. Sampled-data synchronization of delayed multi-agent networks and its application to coupled circuit. Neurocomputing 2020, 413, 499–511.
3. Gunasekaran, N.; Thoiyab, N.M.; Muruganantham, P.; Rajchakit, G.; Unyong, B. Novel results on global robust stability analysis for dynamical delayed neural networks under parameter uncertainties. IEEE Access 2020, 8, 178108–178116.
4. Wang, H.T.; Liu, Z.T.; He, Y. Exponential stability criterion of the switched neural networks with time-varying delay. Neurocomputing 2019, 331, 1–9.
5. Syed Ali, M.; Saravanan, S. Finite-time stability for memristor based switched neural networks with time-varying delays via average dwell time approach. Neurocomputing 2018, 275, 1637–1649.
6. Pratap, A.; Raja, R.; Cao, J.; Alzabut, J.; Huang, C. Finite-time synchronization criterion of graph theory perspective fractional-order coupled discontinuous neural networks. Adv. Differ. Equ. 2020, 2020, 97.
7. Pratap, A.; Raja, R.; Alzabut, J.; Dianavinnarasi, J.; Cao, J.; Rajchakit, G. Finite-time Mittag-Leffler stability of fractional-order quaternion-valued memristive neural networks with impulses. Neural Process. Lett. 2020, 51, 1485–1526.
8. Li, J.; Jiang, H.; Hu, C.; Yu, J. Analysis and discontinuous control for finite-time synchronization of delayed complex dynamical networks. Chaos Solitons Fractals 2018, 114, 219–305.
9. Wang, H.; Liu, P.X.; Bao, J.; Xie, X.J.; Li, S. Adaptive neural output-feedback decentralized control for large-scale nonlinear systems with stochastic disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 972–983.
10. Zheng, M.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Zhang, Y.; Zhao, H. Finite-time stability and synchronization of memristor-based fractional-order fuzzy cellular neural networks. Commun. Nonlinear Sci. Numer. Simul. 2018, 59, 272–297.
11. Wu, A.; Zeng, Z. Lagrange stability of memristive neural networks with discrete and distributed delays. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 690–703.
12. Bai, C.Z. Global stability of almost periodic solutions of Hopfield neural networks with neutral time-varying delays. Appl. Math. Comput. 2008, 203, 72–79.
13. Syed Ali, M.; Zhu, Q.; Pavithra, S.; Gunasekaran, N. A study on <(Q,S,R),γ>-dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays. Int. J. Syst. Sci. 2017, 49, 755–765.
14. Chen, Z.; Ruan, J. Global stability analysis of impulsive Cohen-Grossberg neural networks with delay. Phys. Lett. A 2005, 345, 101–111.
15. Li, W.J.; Lee, T. Hopfield neural networks for affine invariant matching. IEEE Trans. Neural Netw. Learn. Syst. 2001, 12, 1400–1410.
16. Syed Ali, M.; Gunasekaran, N.; Zhu, Q. State estimation of T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst. 2017, 306, 87–104.
17. Syed Ali, M.; Narayanan, G.; Sevgen, S.; Shekher, V.; Arik, S. Global stability analysis of fractional-order fuzzy BAM neural networks with time delay and impulsive effects. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104853.
18. Samorn, N.; Yotha, N.; Srisilp, P.; Mukdasai, K. LMI-based results on robust exponential passivity of uncertain neutral-type neural networks with mixed interval time-varying delays via the reciprocally convex combination technique. Computation 2021, 9, 70.
19. Meesuptong, B.; Mukdasai, K.; Khonchaiyaphum, I. New exponential stability criterion for neutral system with interval time-varying mixed delays and nonlinear uncertainties. Thai J. Math. 2020, 18, 333–349.
20. Jian, J.; Wang, J. Stability analysis in Lagrange sense for a class of BAM neural networks of neutral type with multiple time-varying delays. Neurocomputing 2015, 149, 930–939.
21. Lakshmanan, S.; Park, J.H.; Jung, H.Y.; Kwon, O.M.; Rakkiyappan, R. A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays. Neurocomputing 2013, 111, 81–89.
22. Peng, W.; Wu, Q.; Zhang, Z. LMI-based global exponential stability of equilibrium point for neutral delayed BAM neural networks with delays in leakage terms via new inequality technique. Neurocomputing 2016, 199, 103–113.
23. Syed Ali, M.; Saravanakumar, R.; Cao, J. New passivity criteria for memristor-based neutral-type stochastic BAM neural networks with mixed time-varying delays. Neurocomputing 2016, 171, 1533–1547.
24. Klamnoi, A.; Yotha, N.; Weera, W.; Botmart, T. Improved results on passivity analysis of neutral-type neural networks with mixed time-varying delays. J. Res. Appl. Mech. Eng. 2018, 6, 71–81.
25. Ding, D.; Wang, Z.; Dong, H.; Shu, H. Distributed H∞ state estimation with stochastic parameters and nonlinearities through sensor networks: The finite-horizon case. Automatica 2012, 48, 1575–1585.
26. Carrasco, J.; Baños, A.; Schaft, A.V. A passivity-based approach to reset control systems stability. Syst. Control Lett. 2010, 59, 18–24.
27. Krasovskii, N.N.; Lidskii, E.A. Analytical design of controllers in systems with random attributes. Autom. Remote Control 1961, 22, 1021–1025.
28. Debeljković, D.; Stojanović, S.; Jovanović, A. Finite-time stability of continuous time delay systems: Lyapunov-like approach with Jensen’s and Coppel’s inequality. Acta Polytech. Hung. 2013, 10, 135–150.
29. Wang, L.; Shen, Y.; Ding, Z. Finite time stabilization of delayed neural networks. Neural Netw. 2015, 70, 74–80.
30. Rajavel, S.; Samidurai, R.; Cao, J.; Alsaedi, A.; Ahmad, B. Finite-time non-fragile passivity control for neural networks with time-varying delay. Appl. Math. Comput. 2017, 297, 145–158.
31. Chen, X.; He, S. Finite-time passive filtering for a class of neutral time-delayed systems. Trans. Inst. Meas. Control 2016, 1139–1145.
32. Syed Ali, M.; Saravanan, S.; Zhu, Q. Finite-time stability of neutral-type neural networks with random time-varying delays. Int. J. Syst. Sci. 2017, 48, 3279–3295.
33. Saravanan, S.; Syed Ali, M.; Alsaedi, A.; Ahmad, B. Finite-time passivity for neutral-type neural networks with time-varying delays. Nonlinear Anal. Model. Control 2020, 25, 206–224.
34. Syed Ali, M.; Meenakshi, K.; Gunasekaran, N. Finite-time H∞ boundedness of discrete-time neural networks with norm-bounded disturbances and time-varying delay. Int. J. Control Autom. Syst. 2017, 15, 2681–2689.
35. Phanlert, C.; Botmart, T.; Weera, W.; Junsawang, P. Finite-time mixed H∞/passivity for neural networks with mixed interval time-varying delays using the multiple integral Lyapunov-Krasovskii functional. IEEE Access 2021, 9, 89461–89475.
36. Amato, F.; Ariola, M.; Dorato, P. Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 2001, 37, 1459–1463.
37. Song, J.; He, S. Finite-time robust passive control for a class of uncertain Lipschitz nonlinear systems with time-delays. Neurocomputing 2015, 159, 275–281.
38. Kwon, O.M.; Park, M.J.; Park, J.H.; Lee, S.M.; Cha, E.J. New delay-partitioning approaches to stability criteria for uncertain neutral systems with time-varying delays. J. Frankl. Inst. 2012, 349, 2799–2823.
39. Seuret, A.; Gouaisbaut, F. Wirtinger-based integral inequality: Application to time-delay systems. Automatica 2013, 49, 2860–2866.
40. Peng, C.; Fei, M.R. An improved result on the stability of uncertain T-S fuzzy systems with interval time-varying delay. Fuzzy Sets Syst. 2013, 212, 97–109.
41. Park, P.G.; Ko, J.W.; Jeong, C.K. Reciprocally convex approach to stability of systems with time-varying delays. Automatica 2011, 47, 235–238.
42. Kwon, O.M.; Park, M.J.; Park, J.H.; Lee, S.M.; Cha, E.J. Analysis on robust H∞ performance and stability for linear systems with interval time-varying state delays via some new augmented Lyapunov-Krasovskii functional. Appl. Math. Comput. 2013, 224, 108–122.
43. Singkibud, P.; Niamsup, P.; Mukdasai, K. Improved results on delay-range-dependent robust stability criteria of uncertain neutral systems with mixed interval time-varying delays. IAENG Int. J. Appl. Math. 2017, 47, 209–222.
44. Tian, J.; Ren, Z.; Zhong, S. A new integral inequality and application to stability of time-delay systems. Appl. Math. Lett. 2020, 101, 106058.
45. Kwon, O.M.; Park, J.H.; Lee, S.M.; Cha, E.J. New augmented Lyapunov-Krasovskii functional approach to stability analysis of neural networks with time-varying delays. Nonlinear Dyn. 2014, 76, 221–236.
46. Zeng, H.B.; He, Y.; Wu, M.; Xiao, S.P. Stability analysis of generalized neural networks with time-varying delays via a new integral inequality. Neurocomputing 2015, 161, 148–154.
47. Yang, B.; Wang, J. Stability analysis of delayed neural networks via a new integral inequality. Neural Netw. 2017, 88, 49–57.
48. Hua, C.; Wang, Y.; Wu, S. Stability analysis of neural networks with time-varying delay using a new augmented Lyapunov-Krasovskii functional. Neurocomputing 2019, 332, 1–9.
49. Zhu, X.L.; Yue, D.; Wang, Y. Delay-dependent stability analysis for neural networks with additive time-varying delay components. IET Control Theory Appl. 2013, 7, 354–362.
50. Li, T.; Wang, T.; Song, A.; Fei, S. Combined convex technique on delay-dependent stability for delayed neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1459–1466.
51. Zhang, C.K.; He, Y.; Jiang, L.; Wu, M. Stability analysis for delayed neural networks considering both conservativeness and complexity. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 1486–1501.
52. Zhang, C.K.; He, Y.; Jiang, L.; Lin, W.J.; Wu, M. Delay-dependent stability analysis of neural networks with time-varying delay: A generalized free-weighting-matrix approach. Appl. Math. Comput. 2017, 294, 102–120.
Figure 1. State response of system (4) under zero input and the initial condition [ 3.5 , 3.5 ] . The interval time-varying delay is chosen as μ ( t ) = 3.6 + 0.9 | s i n ( t ) | , and the activation function is set as f ( ξ ( t ) ) = [ 0.4 t a n h ( x 1 ( t ) ) , 0.8 t a n h ( x 2 ( t ) ) ] T .
Figure 2. State response of system (4) under zero input and the initial condition [ 3.5 , 3.5 ] . The interval time-varying delay is chosen as μ ( t ) = 6.3190 + 0.55 | s i n ( t ) | , and the activation function is set as f ( ξ ( t ) ) = [ 0.3 t a n h ( x 1 ( t ) ) , 0.8 t a n h ( x 2 ( t ) ) ] T .
Figure 2. It provides the state response of system (4) under zero input and the initial condition [ 3.5 , 3.5 ] . The interval time-varying delays are chosen as μ ( t ) = [ 6.3190 + 0.55 | s i n ( t ) | ] , and the activation function is set as f ( ξ ( t ) ) = [ 0.3 t a n h ( x 1 ( t ) ) , 0.8 t a n h ( x 2 ( t ) ) ] T .
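The state responses shown in Figures 1 and 2 can be reproduced in outline with a forward-Euler scheme for a delayed system of the form x′(t) = Ax(t) + Wf(x(t − μ(t))) under zero input. The following sketch uses the delay and activation from Figure 1; the matrices A and W are hypothetical placeholders (the paper's example matrices are not restated here), so this illustrates only the simulation mechanics, not the paper's exact trajectories.

```python
import numpy as np

# Hypothetical system matrices, chosen only for illustration;
# they are NOT the matrices of Examples 4-5 in the paper.
A = np.diag([-2.0, -2.0])           # state matrix (assumed stable)
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])         # delayed-connection weights (assumed)

def f(x):
    # Activation from the Figure 1 caption: f = [0.4 tanh(x1), 0.8 tanh(x2)]^T
    return np.array([0.4 * np.tanh(x[0]), 0.8 * np.tanh(x[1])])

def mu(t):
    # Interval time-varying delay from the Figure 1 caption
    return 3.6 + 0.9 * abs(np.sin(t))

def simulate(T=20.0, h=0.01, x0=(3.5, 3.5)):
    """Forward-Euler integration of x'(t) = A x(t) + W f(x(t - mu(t))), u = 0."""
    n = int(T / h)
    hist = np.zeros((n + 1, 2))
    hist[0] = x0                    # constant initial function on [-mu_M, 0]
    for k in range(n):
        t = k * h
        kd = max(0, k - int(mu(t) / h))   # index of the delayed state sample
        hist[k + 1] = hist[k] + h * (A @ hist[k] + W @ f(hist[kd]))
    return hist

traj = simulate()
```

With the bounded tanh activations and a stable A, the trajectory decays toward the origin, matching the qualitative behavior depicted in the figures.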
Table 1. Allowable upper bound μ M for various μ d of Example 4.
Method      μ_d = 0.8   μ_d = 0.9   Number of Variables
[45]        4.5940      3.4671      7.5n² + 8.5n
[46]        4.8167      3.4245      13.5n² + 13.5n
[47]        5.4428      3.6482      -
[48]        5.6384      3.7718      22n² + 14n
Remark 3    6.5411      4.5074      23n² + 23n
Table 2. Allowable upper bound μ M for various μ d of Example 5.
Method      μ_d = 0.4   μ_d = 0.45   μ_d = 0.5   μ_d = 0.55   Number of Variables
[49]        4.6569      3.7268       3.4076      3.2841       8n² + 12n
[50]        4.5543      3.8364       3.5583      3.4110       13.5n² + 21.5n
[51]        7.6697      6.7287       6.4126      3.2569       13.5n² + 13.5n
[52]        8.3498      7.3817       7.0219      6.8156       7n² + 11n
Remark 3    9.7901      7.6470       6.7875      6.3190       23n² + 23n
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
