Finite-Time Passivity Analysis of Neutral-Type Neural Networks with Mixed Time-Varying Delays

This study investigates the problem of finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays. The time-varying delays are distributed, discrete and neutral, and upper bounds for the delays are assumed to be available. We derive sufficient conditions for finite-time boundedness, finite-time stability and finite-time passivity, which has not been done before. First, we construct a new Lyapunov–Krasovskii functional and apply Peng–Park's integral inequality, the descriptor model transformation and the zero equation, together with Wirtinger's integral inequality technique. New sufficient conditions for finite-time stability are formulated in terms of linear matrix inequalities in order to guarantee finite-time stability of the system. Finally, numerical examples are presented to demonstrate the effectiveness of the results. Moreover, the proposed criteria are less conservative than prior studies in terms of larger time-delay bounds.


Introduction
Neural networks have been intensively explored in recent decades due to their vast range of applications in a variety of fields, including signal processing, associative memories, learning ability and so on [1][2][3][4][5][6][7][8][9][10]. In the study of real systems, time-delay phenomena are unavoidable. Many interesting neural networks, such as Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks and bidirectional associative memory neural networks frequently exhibit time delays. In addition, time delays are well recognized as a source of instability and poor performance [11]. Accordingly, stability analysis of delayed neural networks has become a topic of significant theoretical and practical relevance (see [12][13][14][15]), and many important discoveries have been reported on this subject. In recent years, T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control have been presented by Syed Ali et al. [16]. The global stability analysis of fractional-order fuzzy BAM neural networks with time delay and impulsive effects was considered in [17].
Furthermore, conventional neural network models are often unable to accurately represent the qualities of a neural reaction process due to the complex dynamic features of neural cells in the real world. It is natural for systems to store information about the derivative of a previous state in order to better characterize and analyze the dynamics of such complicated neural responses. Networks of this new type are called neutral, or neutral-type, neural networks. Several researchers [18][19][20][21][22][23] have studied neutral-type neural networks with time-varying delays in recent years. In 2018 [24], the authors reported improved results on passivity analysis of neutral-type neural networks with mixed time-varying delays. In particular, the type of time-varying delay known as distributed delay occurs in networked-based systems and has received considerable attention. In Section 4, five numerical examples are presented to demonstrate the usefulness of our proposed results. Finally, Section 5 concludes the study.

Preliminaries
We begin by explaining various notations and lemmas that will be used throughout the study. R denotes the set of all real numbers; R n denotes the n-dimensional Euclidean space; R m×n denotes the set of all m × n real matrices; A T denotes the transpose of the matrix A; A is symmetric if A = A T; λ(A) denotes the set of all eigenvalues of A; and λ max (A) and λ min (A) represent the maximum and minimum eigenvalues of the matrix A, respectively. The symbol * represents the elements below the main diagonal of a symmetric matrix; diag{·} stands for a diagonal matrix.
Matrices G b , G d and G e are the interconnection matrices representing the weight coefficients of the neurons. Matrices G 1 , G 2 , H and G c are known real constant matrices. The variables µ(t), ρ(t) and τ(t) represent the mixed delays of the model in (1) and satisfy the following, where µ M , µ d , ρ M , ρ d , τ M and τ d are positive real constants.

Assumption 1.
The activation function f is continuous, and there exist real constants F_i^− and F_i^+ such that the following holds for all c 1 ≠ c 2 , where f = [ f 1 , f 2 , . . ., f n ] T and f i (0) = 0 for every i ∈ {1, 2, . . ., n}. For the sake of presentation convenience, in the following we denote the corresponding diagonal matrices of these constants.
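The sector inequality itself is not reproduced above; a standard form consistent with the constants F_i^− and F_i^+ (a hedged reconstruction, not necessarily the paper's exact statement) is:

```latex
F_i^{-} \;\le\; \frac{f_i(c_1) - f_i(c_2)}{c_1 - c_2} \;\le\; F_i^{+},
\qquad \forall\, c_1, c_2 \in \mathbb{R},\ c_1 \neq c_2,\ i = 1, 2, \dots, n .
```

This sector-bounded condition is what allows activation terms to be absorbed into the LMI conditions via diagonal multiplier matrices.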

Assumption 2.
For a given positive parameter δ, the time-varying external disturbance κ(t) satisfies the following.
Definition 1 ((Finite-time boundedness) [36,37]). For a given positive constant T f , system (1) is said to be finite-time bounded with respect to (g 1 , g 2 , T f , P 1 , δ), where g 2 > g 1 > 0 and P 1 is a positive definite matrix, if the following is the case.
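The defining implication is omitted in the text above; the conventional statement of finite-time boundedness, consistent with the tuple (g 1 , g 2 , T f , P 1 , δ) (a reconstruction in the spirit of [36,37], not the paper's verbatim inequality), reads:

```latex
\sup_{-\bar{\tau} \le s \le 0} \xi^{T}(s)\, P_1\, \xi(s) \le g_1
\;\Longrightarrow\;
\xi^{T}(t)\, P_1\, \xi(t) < g_2, \qquad \forall\, t \in [0, T_f],
```

for every disturbance κ(t) satisfying Assumption 2, where τ̄ denotes the largest of the delay upper bounds.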
Definition 2 ((Finite-time stability) [36,37]). For a given positive constant T f , system (1) with κ(t) = 0 is said to be finite-time stable with respect to (g 1 , g 2 , T f , P 1 ), where g 2 > g 1 > 0 and P 1 is a positive definite matrix, if the following is the case.
Definition 3 ((Finite-time passivity) [37]). System (1) is said to be finite-time passive with a prescribed dissipation performance level γ > 0 if the following relations hold: (a) for any external disturbance κ(t), system (1) is finite-time bounded; (b) for a given positive scalar γ > 0, the following relationship holds under a zero initial condition.
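Condition (b) of Definition 3 is typically the dissipation inequality below, stated under a zero initial condition (a hedged reconstruction using the output y(t) and disturbance κ(t); the paper's exact weighting of γ may differ):

```latex
2 \int_{0}^{T_f} y^{T}(s)\, \kappa(s)\, ds \;\ge\; -\,\gamma \int_{0}^{T_f} \kappa^{T}(s)\, \kappa(s)\, ds .
```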
Lemma 1 ((Jensen's Inequality) [38]). For each positive definite symmetric matrix P 7 , positive real constant µ M and vector function ξ : [−µ M , 0] → R n such that the following integral is well defined, the following is obtained.
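In the form most often applied to the state derivative, Jensen's inequality of Lemma 1 reads (a reconstruction consistent with the symbols P 7 and µ M above; the paper may state it for ξ rather than its derivative):

```latex
-\,\mu_M \int_{t-\mu_M}^{t} \dot{\xi}^{T}(s)\, P_7\, \dot{\xi}(s)\, ds
\;\le\;
-\left( \int_{t-\mu_M}^{t} \dot{\xi}(s)\, ds \right)^{\!T} P_7
\left( \int_{t-\mu_M}^{t} \dot{\xi}(s)\, ds \right).
```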
Lemma 2 ((Wirtinger-based integral inequality) [39]). For any matrix P 12 > 0, the following inequality holds for all continuously differentiable functions ξ.

Lemma 3 ((Peng–Park's integral inequality) [40,41]). For any matrices P 13 and S such that [P 13 , S; *, P 13 ] ≥ 0, any positive constant µ M and delay µ(t) satisfying 0 < µ(t) < µ M , and any vector function ξ : [−µ M , 0] → R n such that the integrations concerned are well defined, we have the following.

Lemma 5 ([43]). For any constant symmetric positive definite matrix P 6 ∈ R n×n , a discrete time-varying delay µ(t) satisfying (2) and a vector function ξ : [−µ M , 0] → R n such that the integrations concerned are well defined, the following is the case.
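The inequality of Lemma 2 is stated in [39] over a generic interval [a, b] (a reconstruction; the paper instantiates it with its own delay bounds):

```latex
\int_{a}^{b} \dot{\xi}^{T}(s)\, P_{12}\, \dot{\xi}(s)\, ds
\;\ge\;
\frac{1}{b-a}\, \Omega_{1}^{T} P_{12}\, \Omega_{1}
+ \frac{3}{b-a}\, \Omega_{2}^{T} P_{12}\, \Omega_{2},
\quad
\Omega_{1} = \xi(b) - \xi(a),
\quad
\Omega_{2} = \xi(b) + \xi(a) - \frac{2}{b-a} \int_{a}^{b} \xi(s)\, ds .
```

The second term involving Ω₂ is what makes this bound strictly tighter than Jensen's inequality.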
Lemma 6 ([43]). For any constant matrices R 7 and R 8 , a discrete time-varying delay µ(t) satisfying (2) and a vector function ξ : [−µ M , 0] → R n such that the following integration is well defined, the following is the case, with Π defined below.

Finite-Time Boundedness Analysis
This section discusses the finite-time boundedness analysis of neutral-type neural networks with mixed time-varying delays.
In the first subsection, we consider system (5) with the delay conditions (2) and introduce new criteria for such systems via the LMI approach.
Proof. First, we show that system (5) is finite-time bounded. To this end, we consider system (5) satisfying the following.
By using the model transformation approach, we can rewrite system (10) as the following system. Construct a Lyapunov–Krasovskii functional candidate for system (10)–(12) of the following form, where the following is the case. Along the trajectory of system (10)–(12), the time derivative of V(t) is as follows. The time derivative of V 1 (t) is then computed as follows.
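The individual terms V_i(t) are not recoverable from the text above; functionals of this kind are typically assembled from terms of the following representative types (an illustrative sketch with generic weights P_i > 0, not the paper's exact functional):

```latex
V_1(t) = \xi^{T}(t)\, P_1\, \xi(t), \qquad
V_2(t) = \int_{t-\mu(t)}^{t} \xi^{T}(s)\, P_2\, \xi(s)\, ds, \qquad
V_9(t) = \mu_M \int_{-\mu_M}^{0} \int_{t+\theta}^{t} \dot{\xi}^{T}(s)\, P_9\, \dot{\xi}(s)\, ds\, d\theta .
```

Quadratic terms such as V_1 control the state energy, single-integral terms such as V_2 handle the discrete delay, and double-integral terms such as V_9 are the ones to which Jensen- and Wirtinger-type inequalities are applied.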
Taking the derivative of V 2 (t) along any system solution trajectory, we have the following.
For V 3 (t), using µ̇(t) ≤ µ d , we now have the following.
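The bound used at this step is the standard Leibniz-rule estimate (shown with a generic weight P 3 > 0; the paper's matrix label may differ):

```latex
\frac{d}{dt} \int_{t-\mu(t)}^{t} \xi^{T}(s)\, P_3\, \xi(s)\, ds
= \xi^{T}(t)\, P_3\, \xi(t) - \bigl(1 - \dot{\mu}(t)\bigr)\, \xi^{T}(t-\mu(t))\, P_3\, \xi(t-\mu(t))
\le \xi^{T}(t)\, P_3\, \xi(t) - (1 - \mu_d)\, \xi^{T}(t-\mu(t))\, P_3\, \xi(t-\mu(t)) .
```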
It follows from Lemma 6 that we have the following.
Using Lemmas 5 and 7, the time derivative of V 5 (t) is computed as follows, where the following is the case.
Using Lemma 1 (Jensen's Inequality), we have the following.
Using Lemma 8 to bound V̇ 7 (t), we can obtain the following, where the following is the case.
According to Lemma 4, V̇ 8 (t) can be bounded as follows.
Using Lemmas 2 and 3, an upper bound of V̇ 9 (t) can be obtained as follows.
Taking the time derivative of V 10 (t), we have the following.
Based on (14)–(19), it is clear that the following is observed, where the following is the case.
Then, for α > 0, we are able to obtain the following.
By multiplying the above inequality by e^{αt}, we can obtain the following.
Integrating both sides of inequality (22) from 0 to t, with t ∈ [0, T], we obtain the following.
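The two steps above amount to an integrating-factor (Gronwall-type) argument; a sketch, under the assumption that the differential inequality has the form V̇(t) ≤ αV(t) + κ^T(t)κ(t) (the paper's exact right-hand side may differ up to weighting matrices):

```latex
\frac{d}{dt}\!\left( e^{-\alpha t} V(t) \right) \le e^{-\alpha t}\, \kappa^{T}(t)\kappa(t)
\;\Longrightarrow\;
V(t) \le e^{\alpha t}\!\left( V(0) + \int_{0}^{t} e^{-\alpha s}\, \kappa^{T}(s)\kappa(s)\, ds \right)
\le e^{\alpha T}\bigl( V(0) + \delta \bigr),
```

where the last inequality uses e^{−αs} ≤ 1 for s ≥ 0 together with the disturbance bound of Assumption 2.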
We then have the following, where the following is the case.
On the other hand, the following condition holds.
indicates that ξ T (t)Lξ(t) < g 2 for all t ∈ [0, T]. From Definition 1, system (5) is finite-time bounded with respect to (g 1 , g 2 , T, L, δ). The proof is now finished.

Remark 1. Condition (9) is not a standard form of LMIs. In order to verify that this condition is equivalent to a relation of LMIs, let λ i , i = 1, 2, 3, . . ., 31, be positive scalars satisfying the following.

Consider the following.

Proof.
Since the proof is identical to that of Theorem 1, it is excluded from this section.

Finite-Time Passivity Analysis
This section discusses the topic of finite-time passivity analysis investigated for the following system.
Then, multiplying (38) by e^{−αT} and integrating it between 0 and T, we can obtain the following, which implies the following.

Since V(t) ≥ 0, it follows from (39) that the following holds, where γ = βe^{−αT/2}. As a result, we may infer that system (33) is finite-time passive. This completes the proof.

Remark 3. When E = 0, C = 0 and H = 0, system (5) reduces to the following delayed neural network. By (8), we consider system (41) without the finite-time stability condition and follow the same line of proof as Theorem 1. Moreover, the system is said to be asymptotically stable, where Π̃ 12,12 = Π 12,12 − P 14 − τ M P 15 , Π̃ 4,4 = Π 3,3 − ρ 2 M P 16 , and the remaining parameters are as defined in Theorem 1. Then, we define the following.
Example 3. Consider the neutral-type neural networks with the following matrix parameters and distributed-delay term ∫ f (ξ(s))ds, with the following.

Example 4.
Consider the neural networks with the following matrix parameters, with the following. Using MATLAB to solve LMIs (35) and (36), we conclude that the neutral system under consideration is finite-time passive. In addition, the obtained results are compared with previously published studies; the findings show that the stability conditions presented in this paper are more effective than those found in previous research. By solving Example 4 with the LMI in Remark 3, we can obtain the maximum permissible upper bound µ M for different µ d , as shown in Table 1. Figure 1 provides the state response of system (4) under zero input and the initial condition [−3.5, 3.5]. The interval time-varying delays are chosen as µ(t) = 3.6 + 0.9|sin(t)|, and the activation function is set as f (ξ(t)) = [0.4tanh(x 1 (t)), 0.8tanh(x 2 (t))] T.
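The finite-time bound in this kind of example can be sanity-checked by direct simulation. The sketch below is illustrative only: the matrices A and B, the delay µ = 0.9 and the bound g 1 are assumptions for the sketch, not the example's actual LMI data; only the tanh-type activation gains and the initial condition [−3.5, 3.5] are taken from the text.

```python
import math

# Illustrative (not the paper's) 2-neuron delayed network:
#   x'(t) = -A x(t) + B f(x(t - mu))  with a scaled-tanh activation.
A = [[2.0, 0.0], [0.0, 2.0]]   # self-feedback matrix (assumed)
B = [[0.4, -0.3], [0.2, 0.5]]  # delayed connection weights (assumed)

def f(x):
    # Activation gains 0.4 and 0.8 taken from the example's f(xi(t)).
    return [0.4 * math.tanh(x[0]), 0.8 * math.tanh(x[1])]

def simulate(x0, T=10.0, h=0.001, mu=0.9):
    """Forward-Euler simulation; returns final state and peak of x^T x."""
    n_delay = int(mu / h)
    hist = [list(x0) for _ in range(n_delay + 1)]  # constant initial function
    x = list(x0)
    peak = x[0] ** 2 + x[1] ** 2
    for _ in range(int(T / h)):
        fd = f(hist[0])  # activation evaluated at x(t - mu)
        dx = [-A[i][0] * x[0] - A[i][1] * x[1]
              + B[i][0] * fd[0] + B[i][1] * fd[1] for i in range(2)]
        x = [x[i] + h * dx[i] for i in range(2)]
        hist.pop(0)
        hist.append(list(x))
        peak = max(peak, x[0] ** 2 + x[1] ** 2)
    return x, peak

x_final, peak = simulate([-3.5, 3.5])
g1 = 3.5 ** 2 + 3.5 ** 2  # initial "energy" level (assumed bound)
print("peak energy along trajectory:", peak)
print("final state:", x_final)
```

For these assumed parameters the trajectory's energy never exceeds its initial level and the state decays toward the origin, which is the behavior a finite-time boundedness certificate would guarantee on [0, T].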
The permissible upper bound µ M for various µ d is shown in Table 1, which demonstrates that the conclusions of Remark 3 in this study are less conservative than those in [45][46][47][48], confirming the effectiveness of our efforts.

Example 5. Consider the neural networks with the following matrix parameters, with the following. The maximum delay bounds µ M calculated by Remark 3 and the recommended criteria are presented in Table 2. Figure 2 provides the state response of system (4) under zero input and the initial condition [−3.5, 3.5]. The interval time-varying delays are chosen as µ(t) = 6.3190 + 0.55|sin(t)|, and the activation function is set as f (ξ(t)) = [0.3tanh(x 1 (t)), 0.8tanh(x 2 (t))] T.
From Table 2, it follows that Remark 3 provides significantly better results than [49][50][51][52] in the cases µ d = 0.4 and µ d = 0.45. However, for µ d = 0.5 and µ d = 0.55, the results are slightly worse than those in [21]. Overall, the comparisons with previously published studies show that the stability conditions presented in this paper are more effective than those found in most previous research.

Conclusions
In this study, a novel result was presented: a finite-time passivity analysis of neutral-type neural networks with mixed time-varying delays. The time-varying delays are distributed, discrete and neutral, and upper bounds for the delays are available. We derived sufficient conditions for finite-time boundedness, finite-time stability and finite-time passivity, which had not been done before. First, we constructed a new Lyapunov–Krasovskii functional and applied Peng–Park's integral inequality, the descriptor model transformation and the zero equation, and then we used Wirtinger's integral inequality technique. New sufficient finite-time stability conditions were constructed in terms of linear matrix inequalities to guarantee finite-time stability of the system. Finally, numerical examples were presented to demonstrate the effectiveness of the results, and the proposed criteria are less conservative than prior studies in terms of larger time-delay bounds. By combining numerous integral components of the Lyapunov–Krasovskii functional with these inequalities, our results offer wider time-delay bounds than the previous literature (see Tables 1 and 2). Constructing the LMI variables on the basis of integral inequalities yields less conservative stability criteria for interval time-delay systems. We expect this work to improve existing research and to lead to research in other areas of application.