Article

Stability Analysis of Bidirectional Associative Memory Neural Networks with Time-Varying Delays via Second-Order Reciprocally Convex Approach

by
Kalaivani Chandran
1,*,
Renuga Kuppusamy
2 and
Vembarasan Vaitheeswaran
3
1
Department of Mathematics, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam 603110, Tamil Nadu, India
2
Department of Mathematics, St. Joseph’s Institute of Technology, OMR, Chennai 600119, Tamil Nadu, India
3
School of Sciences and Humanities, Shiv Nadar University Chennai, Kalavakkam 603110, Tamil Nadu, India
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 1852; https://doi.org/10.3390/sym17111852
Submission received: 3 September 2025 / Revised: 7 October 2025 / Accepted: 16 October 2025 / Published: 3 November 2025
(This article belongs to the Section Mathematics)

Abstract

This research examines a Lyapunov-based criterion for the global asymptotic stability of Bidirectional Associative Memory (BAM) neural networks with mixed interval time-varying delays. Using a second-order reciprocally convex approach, this paper introduces a novel stability criterion for BAM neural networks with time delays. The recent literature has incorporated a few triple integral expressions into the Lyapunov–Krasovskii functional to lessen conservatism when analyzing the stability of systems with interval time-varying delays via a second-order reciprocally convex combination strategy. The proposed criterion establishes the negative definiteness of the derivative of the Lyapunov–Krasovskii functional and is formulated in terms of Linear Matrix Inequalities (LMIs). The effectiveness of the proposed result is demonstrated through numerical examples.

1. Introduction

The field of neural networks (NNs) has vast applications in domains such as pattern recognition, signal processing, associative memory, and optimization. Several NN architectures, including Hopfield networks [1], fuzzy neural networks with Markovian jumps [2,3], and Bidirectional Associative Memory (BAM) models [4,5,6], have undergone extensive analysis. By extending the single-layer auto-associative Hebbian network into a two-layer, pattern-based hetero-associative network, the BAM framework, first presented by Kosko [7,8], allows complete patterns to be reconstructed from incomplete or noisy inputs [9]. In this design, every neuron in each layer is fully connected to all neurons in the opposing layer. A substantial body of work has explored the stability of BAM networks, deriving sufficient conditions for both exponential and asymptotic stability [10,11,12]. Consequently, BAM architectures demonstrate superior performance in memory storage and pattern association.
Time delays play a crucial role in synchronization control, sliding mode control, industrial systems, ecological modeling, and networked environments such as the Internet [13,14,15]. These delays often degrade system performance and can even lead to instability. A key objective in this area is to identify the maximum allowable delay for which the given system remains asymptotically stable [13].
Time-varying delays are inevitably introduced when artificial NNs are implemented, owing to the finite communication time between neurons and the limited slew rate of amplifiers [16,17,18]. Such signal transmission delays can trigger unwanted behaviors, including oscillations and instability. Consequently, the stability of neural networks under time-varying delays has become a central research focus, and many significant contributions have been reported in the literature [19]. It is therefore essential to establish delay-dependent stability criteria that accurately capture how delays affect NN stability. As a result, the delay-dependent stability of delayed neural networks (DNNs) [20] has been a major research focus in recent decades, leading to significant progress in the field.
The main objective in the stability analysis of time-delay systems is to develop less conservative stability conditions. One key measure is the admissible upper bound of the delay, which quantifies the conservativeness of a stability condition [21]. For delay-dependent systems, the Lyapunov–Krasovskii functional (LKF) method [22,23,24] is widely regarded as one of the most effective tools for analyzing stability. The primary challenge in this approach is estimating the integral terms accurately so that the final stability condition can be written in terms of LMIs [25,26], which are easier to handle computationally. To address this, a new and improved LKF that includes a triple integral term is introduced here for better accuracy.
Before 2013, Jensen’s inequality was one of the most widely used tools for analyzing the stability of systems with time delays [27]. New techniques, such as auxiliary-function-based inequalities [28,29] and integral bounds derived using slack matrices [20], have since been introduced to improve results and reduce conservatism. Later, a more general form, the Bessel–Legendre inequality, was proposed [30,31], which further reduced the error in estimating single integral terms. However, it has become increasingly difficult to achieve further reductions in conservatism by refining these single-integral-based methods alone.
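In its basic single-integral form, Jensen’s inequality states that for a matrix $R > 0$ and a scalar $\tau > 0$, $\tau \int_{t-\tau}^{t} \dot{x}^{T}(s) R \dot{x}(s)\, ds \ge \big(\int_{t-\tau}^{t} \dot{x}(s)\, ds\big)^{T} R \big(\int_{t-\tau}^{t} \dot{x}(s)\, ds\big)$. The short numerical sketch below illustrates this bound; the trajectory, weight matrix, and quadrature grid are assumptions chosen purely for illustration, not data from the cited works.

```python
import numpy as np

# Numerical check of Jensen's single-integral inequality (illustrative data only):
#   tau * int_{t-tau}^{t} xdot(s)' R xdot(s) ds  >=  (int xdot ds)' R (int xdot ds)
rng = np.random.default_rng(0)
n, tau = 2, 1.5
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)                      # an arbitrary positive definite weight

def xdot(s):
    # an arbitrary smooth derivative trajectory used only for this check
    return np.array([np.sin(3.0 * s), np.exp(-s) * np.cos(s)])

s_grid = np.linspace(-tau, 0.0, 4001)            # the interval [t - tau, t] with t = 0
ds = s_grid[1] - s_grid[0]
vals = np.array([xdot(s) for s in s_grid])       # shape (4001, n)

quad_forms = np.einsum('ij,jk,ik->i', vals, R, vals)
lhs = tau * np.sum(quad_forms) * ds              # tau * int xdot' R xdot ds
vec = np.sum(vals, axis=0) * ds                  # componentwise integral of xdot
rhs = vec @ R @ vec

print(f"lhs = {lhs:.4f}  >=  rhs = {rhs:.4f} : {lhs >= rhs}")
```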
Another way to improve the stability conditions is to design more advanced Lyapunov–Krasovskii functionals (LKFs) [23] than the basic ones used in earlier studies. Improved constructions include delay-partitioning LKFs [32], multiple-integral LKFs [33,34,35], and augmented LKFs. These methods often use free-weighting matrices [36] to incorporate more detailed information about the system state and the time delay. Such techniques have reduced conservatism to some extent, but as the complexity of the LKF increases, the improvements become smaller while the computational cost grows significantly. Even so, Jensen’s inequality remains very effective and widely used.
Furthermore, the stability of systems with time-varying delays has been studied using the reciprocally convex approach [37]. Combining this technique with Jensen’s inequality [27] reduces conservatism even further. More recently, the second-order reciprocally convex combination method [38,39] was developed to overcome the limitations of Jensen’s method [40,41]. This approach provides a more accurate and less conservative estimate of the upper bounds of certain quadratic terms, which is crucial for establishing stability.
Motivated by the above discussion, this paper proposes a novel stability criterion for BAM NNs with time delays by combining the reciprocally convex combination approach [37,38,42] and the second-order reciprocally convex combination technique [39] for a newly constructed LKF. Notably, our method does not require monotonicity, differentiability, or boundedness of the activation functions. Our findings show that this technique yields less conservative stability conditions. Several examples are provided to demonstrate how well the method works in practice.
The remainder of this paper is organized as follows. Section 2 outlines the problem statement and presents the necessary background. Section 3 presents LMI-based conditions that ensure the global asymptotic stability of the delayed BAM NN (Theorems 1 and 2). Section 4 provides numerical examples that support the theoretical results, and Section 5 concludes the paper.
Notations: Throughout this paper, X > 0 denotes that the matrix X is positive definite; M^T is the transpose of M; I is the identity matrix of appropriate dimension; R^n denotes the n-dimensional Euclidean space; R^{n×n} denotes the set of all n × n real matrices; and ∗ denotes the symmetric block in a symmetric matrix.

2. Problem Overview and Basic Notions

Let us examine the following BAM NNs for stability, incorporating the time-varying delays:
$$
\begin{aligned}
\dot{X}_r(t) &= -a_{1r} X_r(t) + \sum_{s=1}^{n} b_{1sr}\, f_s(У_s(t)) + \sum_{s=1}^{n} c_{1sr}\, f_s\big(У_s(t-\varsigma_{sr}(t))\big) + \sum_{s=1}^{n} d_{1sr} \int_{t-\eta_{sr2}(t)}^{t-\eta_{sr1}(t)} f_s(У_s(\varkappa))\, d\varkappa,\\
\dot{У}_s(t) &= -a_{2s} У_s(t) + \sum_{r=1}^{n} b_{2rs}\, g_r(X_r(t)) + \sum_{r=1}^{n} c_{2rs}\, g_r\big(X_r(t-\varrho_{rs}(t))\big) + \sum_{r=1}^{n} d_{2rs} \int_{t-\gamma_{rs2}(t)}^{t-\gamma_{rs1}(t)} g_r(X_r(\varkappa))\, d\varkappa,
\end{aligned}
$$
for r, s = 1, 2, …, n, where X_r(t) represents the state of the r-th neuron at time t, and У_s(t) represents the state of the s-th neuron at time t; a_{1r}, a_{2s} describe the stability of the internal neuron processes; b_{1sr}, b_{2rs} are connection weights, and c_{1sr}, c_{2rs}, d_{1sr}, d_{2rs} are delayed connection weights; g_r and f_s denote the activation functions of the r-th and s-th neurons, respectively.
The vector form of the above system is
$$
\begin{aligned}
\dot{X}(t) &= -A_1 X(t) + B_1 f(У(t)) + C_1 f\big(У(t-\varsigma(t))\big) + D_1 \int_{t-\eta_2(t)}^{t-\eta_1(t)} f(У(\varkappa))\, d\varkappa,\\
\dot{У}(t) &= -A_2 У(t) + B_2 g(X(t)) + C_2 g\big(X(t-\varrho(t))\big) + D_2 \int_{t-\gamma_2(t)}^{t-\gamma_1(t)} g(X(\varkappa))\, d\varkappa.
\end{aligned}
$$
The delays ς(t), ϱ(t), η(t), γ(t) satisfy ς(t) ∈ [ς_1, ς_2], 0 ≤ ς_1 ≤ ς_2, ς̇(t) ≤ υ_1; ϱ(t) ∈ [ϱ_1, ϱ_2], 0 ≤ ϱ_1 ≤ ϱ_2, ϱ̇(t) ≤ υ_2; 0 ≤ η_1 ≤ η_1(t) ≤ η_2(t) ≤ η_2; and 0 ≤ γ_1 ≤ γ_1(t) ≤ γ_2(t) ≤ γ_2, with ϱ_12 = ϱ_2 − ϱ_1, ϱ_12² = (ϱ_2 − ϱ_1)², ς_12 = ς_2 − ς_1, ς_12² = (ς_2 − ς_1)², where ς_1, ς_2, η_1, η_2, ϱ_1, ϱ_2, γ_1, γ_2 are non-negative real constants. The initial conditions are X(s) = ϖ(s), s ∈ [−K_1, 0], and У(s) = φ(s), s ∈ [−K_2, 0], where K_1 = max{ϱ_2, γ_2} and K_2 = max{ς_2, η_2}. Here A_1 = [a_{1i}] and A_2 = [a_{2j}] are diagonal matrices with a_{1i} > 0 and a_{2j} > 0 for i, j = 1 to n; B_1, B_2 are the connection weight matrices; and C_1, C_2, D_1, D_2 are the delayed connection weight matrices.
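Before stating the assumption, the following is a minimal forward-Euler simulation sketch of the vector-form model above under delay bounds of this type. Everything in it is an illustrative assumption rather than data from the paper: the weight matrices are small random matrices, the activations are tanh (which satisfies Assumption A1 below with L = I), the self-feedback terms carry the usual negative sign, and the discrete delays follow the same 1.5 + 0.5 sin t pattern used later in Example 1.

```python
import numpy as np

# Illustrative forward-Euler simulation of the vector-form BAM model above.
# All parameters are assumptions of this sketch, not the paper's example data.
rng = np.random.default_rng(1)
n, dt, T = 2, 1e-3, 20.0
A1, A2 = 4.0 * np.eye(n), 3.0 * np.eye(n)                 # positive diagonal self-feedback
B1, C1, D1 = (0.3 * rng.standard_normal((n, n)) for _ in range(3))
B2, C2, D2 = (0.3 * rng.standard_normal((n, n)) for _ in range(3))
f = g = np.tanh                                           # satisfies Assumption A1 with L = I

def varsigma(t): return 1.5 + 0.5 * np.sin(t)             # discrete delay in the X-equation
def varrho(t):   return 1.5 + 0.5 * np.sin(t)             # discrete delay in the Y-equation
eta1, eta2, gam1, gam2 = 0.25, 0.5, 0.25, 0.5             # distributed-delay bounds

steps = int(T / dt)
hist = int(2.5 / dt)                                      # history buffer covering the max delay
X = np.zeros((steps + hist, n)); Y = np.zeros((steps + hist, n))
X[:hist] = 0.5; Y[:hist] = -0.5                           # constant initial functions

def dist(buf, k, h, lo, hi):
    # Riemann-sum approximation of int_{t-hi}^{t-lo} h(buf(s)) ds at step k
    idx = np.arange(k - int(hi / dt), k - int(lo / dt))
    return h(buf[idx]).sum(axis=0) * dt

for k in range(hist, hist + steps - 1):
    t = (k - hist) * dt
    Y_del = f(Y[k - int(varsigma(t) / dt)])
    X_del = g(X[k - int(varrho(t) / dt)])
    X[k + 1] = X[k] + dt * (-A1 @ X[k] + B1 @ f(Y[k]) + C1 @ Y_del
                            + D1 @ dist(Y, k, f, eta1, eta2))
    Y[k + 1] = Y[k] + dt * (-A2 @ Y[k] + B2 @ g(X[k]) + C2 @ X_del
                            + D2 @ dist(X, k, g, gam1, gam2))

print("final states:", X[-1], Y[-1])   # with small weights and strong self-feedback, near zero
```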
Under the following assumption, we establish the stability conditions for the above system.
Assumption A1.
The activation function g ( X ( t ) ) = { g 1 ( X 1 ( t ) ) , g 2 ( X 2 ( t ) ) , , g n ( X n ( t ) ) } T R n is assumed to satisfy the following condition:
$$
0 \le \frac{g_i(\zeta_1)-g_i(\zeta_2)}{\zeta_1-\zeta_2} \le l_i, \qquad g(0)=0, \qquad \forall\, \zeta_1,\zeta_2 \in \mathbb{R},\ \zeta_1 \ne \zeta_2,\ i=1,2,\dots,n,
$$
where l i , i = 1 , 2 , , n are positive real constants. Denote L = [ l i ] , i = 1 , 2 , , n as a diagonal matrix. The following Lemmas are extensively used in the analysis of our time delay systems.
Lemma 1
(Lower bound theorem [42]). Let $f_1, f_2, \dots, f_N : \mathbb{R}^m \to \mathbb{R}$ have positive values in an open subset $D$ of $\mathbb{R}^m$. Then the reciprocally convex combination of the $f_r$ over $D$ satisfies
$$
\min_{\{\alpha_r \,\mid\, \alpha_r>0,\ \sum_r \alpha_r=1\}} \sum_r \frac{1}{\alpha_r} f_r(t) = \sum_r f_r(t) + \max_{g_{r,s}(t)} \sum_{r\neq s} g_{r,s}(t)
$$
subject to
$$
\Big\{\, g_{r,s} : \mathbb{R}^m \to \mathbb{R},\ \ g_{s,r}(t) \triangleq g_{r,s}(t),\ \ \begin{bmatrix} f_r(t) & g_{r,s}(t) \\ g_{s,r}(t) & f_s(t) \end{bmatrix} \ge 0 \,\Big\}.
$$
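To make Lemma 1 concrete, consider the scalar case with $N = 2$: for $f_1, f_2 > 0$, the minimum of $f_1/\alpha + f_2/(1-\alpha)$ over $\alpha \in (0,1)$ equals $f_1 + f_2 + 2g$ for the largest $g$ satisfying the matrix constraint above, namely $g = \sqrt{f_1 f_2}$. The sketch below verifies this numerically; the values of $f_1$ and $f_2$ are arbitrary illustrative choices.

```python
import numpy as np

# Scalar illustration of the reciprocally convex lower bound (Lemma 1), N = 2.
# f1, f2 are arbitrary positive values chosen only for this check.
f1, f2 = 3.7, 1.2

alphas = np.linspace(1e-4, 1 - 1e-4, 200001)
lhs_min = np.min(f1 / alphas + f2 / (1 - alphas))      # min over the convex weights

g_star = np.sqrt(f1 * f2)                              # largest g with [[f1, g], [g, f2]] >= 0
rhs = f1 + f2 + 2 * g_star

print(f"min over alpha     : {lhs_min:.6f}")
print(f"f1+f2+2*sqrt(f1*f2): {rhs:.6f}")               # the two values agree (up to grid error)
assert abs(lhs_min - rhs) < 1e-3
```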
Lemma 2
([25] ). For the symmetric matrices R > 0 , Ω and matrix Γ , the following are equivalent.
1.
$$
\Omega - \Gamma^{T} R\, \Gamma < 0.
$$
2.
There exists an appropriate dimensional matrix Π such that
$$
\begin{bmatrix} \Omega + \Gamma^{T}\Pi + \Pi^{T}\Gamma & \Pi^{T} \\ * & -R \end{bmatrix} < 0.
$$
Lemma 3
(Schur Complement [25]). For given constant matrices $\Psi_1$, $\Psi_2$, $\Psi_3$ with appropriate dimensions, where $\Psi_1^{T} = \Psi_1$ and $\Psi_2^{T} = \Psi_2 > 0$, we have
$$
\Psi_1 + \Psi_3^{T}\,\Psi_2^{-1}\,\Psi_3 < 0,
$$
if and only if
$$
\begin{bmatrix} \Psi_1 & \Psi_3^{T} \\ * & -\Psi_2 \end{bmatrix} < 0, \qquad \text{or} \qquad \begin{bmatrix} -\Psi_2 & \Psi_3 \\ * & \Psi_1 \end{bmatrix} < 0.
$$
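As a quick numerical sanity check of Lemma 3 (random data, for illustration only), the two equivalent conditions can be tested side by side by computing eigenvalues:

```python
import numpy as np

# Numerical illustration of the Schur complement (Lemma 3) on randomly generated data.
rng = np.random.default_rng(2)
n = 3
M = rng.standard_normal((n, n))
Psi2 = M @ M.T + n * np.eye(n)                              # Psi2 = Psi2^T > 0
Psi3 = rng.standard_normal((n, n))
Psi1 = -(Psi3.T @ np.linalg.solve(Psi2, Psi3)) - np.eye(n)  # chosen so condition (1) holds

cond1 = np.all(np.linalg.eigvalsh(Psi1 + Psi3.T @ np.linalg.solve(Psi2, Psi3)) < 0)
block = np.block([[Psi1, Psi3.T], [Psi3, -Psi2]])
cond2 = np.all(np.linalg.eigvalsh(block) < 0)

print("Psi1 + Psi3' Psi2^{-1} Psi3 < 0  :", bool(cond1))
print("[[Psi1, Psi3'], [Psi3, -Psi2]] < 0:", bool(cond2))   # the two tests agree
```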

3. Main Results

This section establishes a novel delay-dependent stability condition for system (1), as stated below:
Theorem 1.
For given scalars 0 ≤ ς_1 ≤ ς_2 and 0 ≤ ϱ_1 ≤ ϱ_2, the BAM NN (1) is globally asymptotically stable provided that there exist symmetric matrices P = [P_{ij}]_{4×4} > 0, Q = [Q_{ij}]_{4×4} > 0, R_k > 0, S_k > 0, T_k > 0, k = 1, 2, 3, 4, N_1, N_2 > 0, U_j > 0, j = 1, 2, …, 8, [R_{11} R_{12}; ∗ R_{22}] > 0, [R_{31} R_{32}; ∗ R_{33}] > 0, J_1, J_2 > 0, matrices M_1, M_2, …, M_{10} and Π_1, Π_2, Π_3, Π_4 with appropriate dimensions, and scalars a_m > 0, b_m > 0, m = 1, 2, such that the following constraints hold for l = 1 to 4:
ϕ l Π 1 T Π 2 T Π 3 T Π 4 T U 3 ¯ 0 0 0 U 4 ¯ 0 0 U 7 ¯ 0 U 8 ¯ < 0 ,
2 U 3 0 M 1 0 U 3 0 M 3 2 U 3 0 U 3 > 0 ,   2 U 4 0 M 5 0 U 4 0 M 6 2 U 4 0 U 4 > 0 ,
2 U 7 0 M 7 0 U 7 0 M 8 2 U 7 0 U 7 > 0 ,   2 U 8 0 M 9 0 U 8 0 M 10 2 U 8 0 U 8 > 0 ,
$$
\begin{bmatrix} S_2 + \frac{\varrho_{12}^2}{2}U_3 & M_2 \\ * & S_2 + \frac{\varrho_{12}^2}{2}U_4 \end{bmatrix} \ge 0, \qquad \begin{bmatrix} S_4 + \frac{\varsigma_{12}^2}{2}U_7 & M_4 \\ * & S_4 + \frac{\varsigma_{12}^2}{2}U_8 \end{bmatrix} \ge 0,
$$
$$
\begin{bmatrix} T_2 & N_1 \\ * & T_2 \end{bmatrix} \ge 0, \qquad \begin{bmatrix} T_4 & N_2 \\ * & T_4 \end{bmatrix} \ge 0,
$$
where
ϕ l = Ω 2 Γ 1 l T U 3 ^ 2 Γ 2 l T U 4 ^ 2 Γ 3 l T U 7 ^ 2 Γ 4 l T U 8 ^ 2 U 3 ^ 0 0 0 2 U 4 ^ 0 0 2 U 7 ^ 0 2 U 8 ^ + Γ 1 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 1 + Π 1 T Γ 1 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 2 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 2 + Π 2 T Γ 2 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 3 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 3 + Π 3 T Γ 3 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 4 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 4 + Π 4 T Γ 4 l T 0 2 n 0 2 n 0 2 n 0 2 n T ,
U 3 ¯ = 3 U 3 M 1 + M 3 3 U 3 , U 4 ¯ = 3 U 4 M 5 + M 6 3 U 4 , U 7 ¯ = 3 U 7 M 7 + M 8 3 U 7 , U 8 ¯ = 3 U 8 M 9 + M 10 3 U 8 ,
where Ω = [ Ω r , s ] 20 × 20 , Ω r , s = Ω s , r T , r , s = 1 , 2 , , 20 , Γ 1 l , Γ 2 l , Γ 3 l , Γ 4 l , U 3 ^ , U 4 ^ , U 7 ^ , and U 8 ^ are defined as
Ω 1 , 1 = P 11 A 1 A 1 T P 11 T + P 12 + P 12 T + R 1 + R 11 + A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) A 1 S 1 + ϱ 1 2 T 1 + ϱ 12 2 T 2 + A 1 T U α A 1 ϱ 1 2 U 1 + a 1 W 1 T W 1 , Ω 1 , 2 = ( 1 υ 2 ) P 13 ( 1 υ 2 ) P 14 , Ω 1 , 3 = P 12 + P 14 + S 1 , Ω 1 , 4 = P 13 , Ω 1 , 5 = P 22 T + ϱ 1 U 1 T , Ω 1 , 6 = R 12 , Ω 1 , 18 = P 11 B 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) B 1 A 1 T U α B 1 , Ω 1 , 19 = P 11 C 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 A 1 T U α C 1 , Ω 1 , 20 = P 11 D 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 A 1 T U α D 1 , Ω 2 , 2 = ( 1 υ 2 ) R 11 2 S 2 + M 2 + M 2 T + a 2 W 1 T W 1 , Ω 2 , 3 = M T + S 2 , Ω 2 , 4 = M 2 + S 2 , Ω 2 , 5 = ( 1 υ 2 ) P 23 T ( 1 υ 2 ) P 24 T , Ω 2 , 6 = ( 1 υ 2 ) P 44 T , Ω 2 , 7 = ( 1 υ 2 ) P 33 T ( 1 υ 2 ) P 34 T , Ω 2 , 9 = ( 1 υ 2 ) R 12 , Ω 3 , 3 = R 2 R 1 S 1 S 2 ϱ 1 2 U 2 , Ω 3 , 5 = P 22 T + P 24 T + ϱ 1 U 2 T , Ω 3 , 6 = P 44 T , Ω 3 , 7 = P 34 T , Ω 4 , 4 = R 2 , Ω 4 , 5 = P 23 T , Ω 4 , 7 = P 33 T , Ω 5 , 5 = T 1 U 1 U 2 , Ω 6 , 6 = T 2 , Ω 6 , 7 = N 1 , Ω 7 , 7 = T 2 , Ω 8 , 8 = R 22 + B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) B 2 + B 2 T U β B 2 + γ 12 2 J 1 a 1 I , Ω 8 , 9 = B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) C 2 + B 2 T U β C 2 , Ω 8 , 10 = B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + B 2 T U β D 2 , Ω 8 , 11 = B 2 T Q 11 T B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 B 2 T U β A 2 , Ω 9 , 9 = ( 1 υ 2 ) R 22 + C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) C 2 + C 2 T U β C 2 a 2 I , Ω 9 , 10 = C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + C 2 T U β D 2 , Ω 9 , 11 = C 2 T Q 11 T C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 C 2 T U β A 2 , Ω 10 , 10 = D 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + D 2 T U β D 2 J 1 , Ω 10 , 11 = D 2 T Q 11 T D 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 D 2 T U β A 2 , Ω 11 , 11 = Q 11 A 2 A 2 T Q 11 T + Q 12 + Q 12 T + R 3 + R 31 + A 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 S 3 + ς 1 2 T 3 + ς 12 2 T 4 + A 2 T U β A 2 ς 1 2 U 5 + b 1 W 2 T W 2 Ω 11 , 12 = ( 1 υ 1 ) Q 13 ( 1 υ 1 ) Q 14 , Ω 11 , 13 = Q 12 + Q 14 + S 3 , Ω 11 , 14 = Q 13 , Ω 11 , 15 = Q 22 T + ς 1 U 5 T , Ω 11 , 18 = R 32 , Ω 12 , 12 = ( 1 υ 1 ) R 31 2 S 4 + M 4 + M 4 T + b 2 W 2 T W 2 , Ω 12 , 13 = M 4 T + S 4 , Ω 12 , 14 = M 4 + S 4 , Ω 12 , 15 = ( 1 υ 1 ) Q 23 T ( 1 υ 1 ) Q 24 T , Ω 12 , 16 = ( 1 υ 1 ) Q 44 T , Ω 12 , 17 = ( 1 υ 1 ) Q 33 T ( 1 υ 1 ) Q 34 T , Ω 12 , 19 = ( 1 υ 1 ) R 32 Ω 13 , 13 = R 4 R 3 S 3 S 4 ς 1 2 U 6 , Ω 13 , 14 = M 4 , Ω 13 , 15 = Q 22 T + Q 24 T + ς 1 U 6 T , Ω 13 , 16 = Q 44 T , Ω 13 , 17 = Q 34 T , Ω 14 , 14 = R 4 S 4 , Ω 14 , 15 = Q 23 T , Ω 14 , 17 = Q 33 T , Ω 15 , 15 = T 3 U 5 U 6 , Ω 16 , 16 = T 4 , Ω 16 , 17 = N 2 , Ω 17 , 17 = T 4 , Ω 18 , 18 = R 33 + B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) B 1 + B 1 T U α B 1 + η 12 2 J 2 b 1 I , Ω 18 , 19 = B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 + B 1 T U α C 1 , Ω 18 , 20 = B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + B 1 T U α D 1 , Ω 19 , 19 = ( 1 υ 1 ) R 33 + C 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 + C 1 T U α C 1 b 1 I , Ω 19 , 20 = C 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + C 1 T U α D 1 , Ω 20 , 20 = D 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + D 1 T U α D 1 J 2 , U α = ϱ 1 4 U 1 + ϱ 1 4 U 2 + ϱ 12 4 U 3 + ϱ 12 4 U 4 4 , U β = ς 1 4 U 5 + ς 1 4 U 6 + ς 12 4 U 7 + ς 12 4 U 8 4 ;
the remaining terms are all zeros and
Γ 11 = 0 n 0 n 3 t e r m s ( 0 n ) I n 0 n 13 t e r m s ( 0 n ) 0 n ϱ 12 I n 3 t e r m s ( 0 n ) 0 n I n 13 t e r m s ( 0 n ) , Γ 12 = 2 t e r m s ( 0 n ) ϱ 12 I n 2 t e r m s ( 0 n ) I n 0 n 13 t e r m s ( 0 n ) 2 t e r m s ( 0 n ) 0 n 2 t e r m s ( 0 n ) 0 n I n 13 t e r m s ( 0 n ) , Γ 21 = 3 t e r m s ( 0 n ) 0 n 0 n I n 0 n 13 t e r m s ( 0 n ) 3 t e r m s ( 0 n ) ϱ 12 I n 0 n 0 n I n 13 t e r m s ( 0 n ) , Γ 22 = 0 n ϱ 12 I n 3 t e r m s ( 0 n ) I n 0 n 13 t e r m s ( 0 n ) 0 n 0 n 3 t e r m s ( 0 n ) 0 n I n 13 t e r m s ( 0 n ) , Γ 31 = 11 t e r m s ( 0 n ) 0 n 3 t e r m s ( 0 n ) I n 0 n 3 t e r m s ( 0 n ) 11 t e r m s ( 0 n ) ς 12 I n 3 t e r m s ( 0 n ) 0 n I n 3 t e r m s ( 0 n ) , Γ 32 = 12 t e r m s ( 0 n ) ς 12 I n 2 t e r m s ( 0 n ) I n 0 n 3 t e r m s ( 0 n ) 12 t e r m s ( 0 n ) 0 n 2 t e r m s ( 0 n ) 0 n I n 3 t e r m s ( 0 n ) , Γ 41 = 13 t e r m s ( 0 n ) 0 n 0 n I n 0 n 3 t e r m s ( 0 n ) 13 t e r m s ( 0 n ) ς 12 I n 0 n 0 n I n 3 t e r m s ( 0 n ) , Γ 42 = 11 t e r m s ( 0 n ) ς 12 I n 3 t e r m s ( 0 n ) I n 0 n 3 t e r m s ( 0 n ) 11 t e r m s ( 0 n ) 0 n 3 t e r m s ( 0 n ) 0 n I n 3 t e r m s ( 0 n ) , U 3 ^ = U 3 0 0 U 3 , U 4 ^ = U 4 0 0 U 4 , U 7 ^ = U 7 0 0 U 7 , U 8 ^ = U 8 0 0 U 8 .
Proof. 
Consider the newly constructed LKF
$$
V(t) = \sum_{r=1}^{6} V_r(t),
$$
where
V 1 ( t ) = ζ 1 T ( t ) P ζ 1 ( t ) + ζ 2 T ( t ) Q ζ 2 ( t ) , in which ζ 1 ( t ) = c o l { X ( t ) , t ϱ 1 t X ( ϰ ) d ϰ , t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ , t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ } , ζ 2 ( t ) = c o l { У ( t ) , t ς 1 t У ( ϰ ) d ϰ , t ς 2 t ς ( t ) У ( ϰ ) d ϰ , t ς ( t ) t ς 1 У ( ϰ ) d ϰ } , P = P 11 P 12 P 13 P 14 P 22 P 23 P 24 P 33 P 34 P 44 > 0 , Q = Q 11 Q 12 Q 13 Q 14 Q 22 Q 23 Q 24 Q 33 Q 34 Q 44 > 0 ,
V 2 ( t ) = t ϱ 1 t X T ( ϰ ) R 1 X ( ϰ ) d ϰ + t ϱ 2 t ϱ 1 X T ( ϰ ) R 2 X ( ϰ ) d ϰ + t ϱ ( t ) t X T ( ϰ ) g T ( X ( ϰ ) ) R 11 R 12 R 22 X ( ϰ ) g ( X ( ϰ ) ) d ϰ + t ς 1 t У T ( ϰ ) R 3 У ( ϰ ) d ϰ + t ς 2 t ς 1 У T ( ϰ ) R 4 У ( ϰ ) d ϰ + t ς ( t ) t У T ( ϰ ) f T ( У ( ϰ ) ) R 31 R 32 R 33 У ( ϰ ) f ( У ( ϰ ) ) d ϰ , V 3 ( t ) = ϱ 1 ϱ 1 0 t + λ t X ˙ T ( ϰ ) S 1 X ˙ ( ϰ ) d ϰ d λ + ϱ 12 ϱ 2 ϱ 1 t + λ t X ˙ T ( ϰ ) S 2 X ˙ ( ϰ ) d ϰ d λ + ς 1 ς 1 0 t + λ t У ˙ T ( ϰ ) S 3 У ˙ ( ϰ ) d ϰ d λ + ς 12 ς 2 ς 1 t + λ t У ˙ T ( ϰ ) S 4 У ˙ ( ϰ ) d ϰ d λ , V 4 ( t ) = ϱ 1 ϱ 1 0 t + λ t X T ( ϰ ) T 1 X ( ϰ ) d ϰ d λ + ϱ 12 ϱ 2 ϱ 1 t + λ t X T ( ϰ ) T 2 X ( ϰ ) d ϰ d λ λ + ς 1 ς 1 0 t + λ t У T ( ϰ ) T 3 У ( ϰ ) d ϰ d λ + ς 12 ς 2 ς 1 t + λ t У T ( ϰ ) T 4 У ( ϰ ) d ϰ d λ ,
V 5 ( t ) = ϱ 1 2 2 ϱ 1 0 η 0 t + θ t X ˙ T ( ϰ ) U 1 X ˙ ( ϰ ) d ϰ d θ d η + ϱ 1 2 2 ϱ 1 0 ϱ 1 η t + θ t X ˙ T ( ϰ ) U 2 X ˙ ( ϰ ) d ϰ d θ d η + ϱ 12 2 2 ϱ 2 ϱ 1 η ϱ 1 t + θ t X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d θ d η + ϱ 12 2 2 ϱ 2 ϱ 1 ϱ 2 η t + θ t X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d θ d η + ς 1 2 2 ς 1 0 η 0 t + θ t У ˙ T ( ϰ ) U 5 У ˙ ( ϰ ) d ϰ d θ d η + ς 1 2 2 ς 1 0 ς 1 η t + θ t У ˙ T ( ϰ ) U 6 У ˙ ( ϰ ) d ϰ d θ d η + ς 12 2 2 ς 2 ς 1 η ς 1 t + θ t Y ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d θ d η + ς 12 2 2 ς 2 ς 1 ς 2 η t + θ t У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d θ d η , V 6 ( t ) = γ 12 γ 2 γ 1 t + θ t g T ( X ( ϰ ) ) J 1 g ( X ( ϰ ) ) d ϰ d θ + η 12 η 2 η 1 t + θ t f T ( У ( ϰ ) ) J 2 f ( У ( ϰ ) ) d ϰ d θ ,
where
$$
\varrho_{12}=\varrho_2-\varrho_1,\quad \varsigma_{12}=\varsigma_2-\varsigma_1,\quad \gamma_{12}=\gamma_2-\gamma_1,\quad \eta_{12}=\eta_2-\eta_1,\quad \alpha_1=\frac{\varrho(t)-\varrho_1}{\varrho_{12}},\quad \beta_1=\frac{\varrho_2-\varrho(t)}{\varrho_{12}},\quad \alpha_2=\frac{\varsigma(t)-\varsigma_1}{\varsigma_{12}},\quad \beta_2=\frac{\varsigma_2-\varsigma(t)}{\varsigma_{12}}.
$$
Calculating the time derivatives of Equation (5), we get
V ˙ 1 ( t ) = 2 ζ 1 T ( t ) P ζ 1 ˙ ( t ) + 2 ζ 2 T ( t ) Q ζ 2 ˙ ( t ) .
Calculating V ˙ 2 ( t ) and substituting ς ˙ ( t ) υ 1 and ϱ ˙ ( t ) υ 2 leads to
V ˙ 2 ( t ) X T ( t ) ( R 1 ) X ( t ) + X T ( t ϱ 1 ) ( R 2 R 1 ) X ( t ϱ 1 ) X T ( t ϱ 2 ) ( R 2 ) X ( t ϱ 2 ) + X T ( t ) ( R 11 ) X ( t ) + X T ( t ) ( R 12 ) g ( X ( t ) ) + g T ( X ( t ) ) ( R 12 T ) X ( t ) + g T ( X ( t ) ) ( R 22 ) g ( X ( t ) ) ( 1 υ 2 ) X T ( t ϱ ( t ) ) ( R 11 ) X ( t ϱ ( t ) ) ( 1 υ 2 ) X T ( t ϱ ( t ) ) ( R 12 ) g ( X ( t ϱ ( t ) ) ) ( 1 υ 2 ) g T ( X ( t ϱ ( t ) ) ) ( R 22 ) g ( X ( t ϱ ( t ) ) ) ( 1 υ 2 ) g T ( X ( t ϱ ( t ) ) ) ( R 12 T ) X ( t ϱ ( t ) ) + У T ( t ) ( R 3 ) У ( t ) + У T ( t ς 1 ) ( R 4 R 3 ) У ( t ς 1 ) У T ( t ς 2 ) ( R 4 ) У ( t ς 2 ) + У T ( t ) ( R 31 ) У ( t ) + У T ( t ) ( R 32 ) f ( У ( t ) ) + f T ( У ( t ) ) ( R 32 T ) У ( t ) + f T ( У ( t ) ) ( R 33 ) f ( У ( t ) ) ( 1 υ 1 ) У T ( t ς ( t ) ) ( R 31 ) У ( t ς ( t ) ) ( 1 υ 1 ) У T ( t ς ( t ) ) ( R 32 ) f ( У ( t ς ( t ) ) ) ( 1 υ 1 ) f T ( У ( t ς ( t ) ) ) ( R 33 ) f ( У ( t ς ( t ) ) ) ( 1 υ 1 ) f T ( У ( t ς ( t ) ) ) ( R 32 T ) У ( t ς ( t ) ) .
Calculating the time derivative of V 3 ( t ) and applying Jensen’s inequality [27], we obtain
l V ˙ 3 ( t ) ϱ 1 2 X ˙ T ( t ) S 1 X ˙ ( t ) t ϱ 1 t X ˙ T ( ϰ ) d ϰ S 1 t ϱ 1 t X ˙ ( ϰ ) d ϰ + ς 1 2 У ˙ T ( t ) S 3 У ˙ ( t ) t ς 1 t У ˙ T ( ϰ ) d ϰ S 3 t ς 1 t У ˙ ( ϰ ) d ϰ + ϱ 12 2 X ˙ T ( t ) S 2 X ˙ ( t ) 1 α 1 t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) d ϰ S 2 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ 1 β 1 t ϱ 2 t ϱ ( t ) X ˙ T ( ϰ ) d ϰ S 2 t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ + ς 12 2 У ˙ T ( t ) S 4 У ˙ ( t ) 1 α 2 t ς ( t ) t ς 1 У ˙ T ( ϰ ) d ϰ S 4 t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ 1 β 2 t ς 2 t ς ( t ) У ˙ T ( ϰ ) d ϰ S 4 t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ ,
By applying Lemma 1 to V ˙ 3 ( t ) , we have
V ˙ 3 ( t ) ϱ 1 2 X ˙ T ( t ) S 1 X ˙ ( t ) + ϱ 12 2 X ˙ T ( t ) S 2 X ˙ ( t ) t ϱ 1 t X ˙ T ( ϰ ) d ϰ S 1 t ϱ 1 t X ˙ ( ϰ ) d ϰ + ς 1 2 У ˙ T ( t ) S 3 У ˙ ( t ) + ς 12 2 У ˙ T ( t ) S 4 У ˙ ( t ) t ς 1 t У ˙ T ( ϰ ) d ϰ S 3 t ς 1 t У ˙ ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ˙ T ( ϰ ) d ϰ S 2 M 2 S 2 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ t ς ( t ) t ς 1 У ˙ T ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) У ˙ T ( ϰ ) d ϰ S 4 M 4 S 4 t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) У ˙ ( ϰ ) d ϰ ,
V ˙ 4 ( t ) = ϱ 1 2 X T ( t ) T 1 X ( t ) ϱ 1 t ϱ 1 t X T ( ϰ ) T 1 X ( ϰ ) d ϰ + ς 1 2 У T ( t ) T 3 У ( t ) ς 1 t ς 1 t У T ( ϰ ) T 3 У ( ϰ ) d ϰ + ϱ 12 2 X T ( t ) T 2 X ( t ) ϱ 12 t ϱ 2 t ϱ 1 X T ( ϰ ) T 2 X ( ϰ ) d ϰ + ς 12 2 У T ( t ) T 4 У ( t ) ς 12 t ς 2 t ς 1 У T ( ϰ ) T 4 У ( ϰ ) d ϰ
ϱ 1 2 X T ( t ) T 1 X ( t ) + ς 1 2 У T ( t ) T 3 У ( t ) + ϱ 12 2 X T ( t ) T 2 X ( t ) + ς 12 2 У T ( t ) T 4 У ( t ) t ϱ 1 t X T ( ϰ ) d ϰ T 1 t ϱ 1 t X ( ϰ ) d ϰ t ς 1 t У T ( ϰ ) d ϰ T 3 t ς 1 t У ( ϰ ) d ϰ 1 α 1 t ϱ ( t ) t ϱ 1 X T ( ϰ ) d ϰ T 2 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ 1 β 1 t ϱ 2 t ϱ ( t ) X T ( ϰ ) d ϰ T 2 t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ 1 α 2 t ς ( t ) t ς 1 У T ( ϰ ) d ϰ T 4 t ς ( t ) t ς 1 У ( ϰ ) d ϰ 1 β 2 t ς 2 t ς ( t ) У T ( ϰ ) d ϰ T 4 t ς 2 t ς ( t ) У ( ϰ ) d ϰ ,
By applying Jensen’s inequality [27] and Lemma 1 to V ˙ 4 ( t ) , we get
V ˙ 4 ( t ) X T ( t ) { ϱ 1 2 T 1 + ϱ 12 2 T 2 } X ( t ) + У T ( t ) { ς 1 2 T 3 + ς 12 2 T 4 } У ( t ) t ϱ 1 t X T ( ϰ ) d ϰ T 1 t ϱ 1 t X ( ϰ ) d ϰ t ς 1 t У T ( ϰ ) d ϰ T 3 t ς 1 t У ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X T ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X T ( ϰ ) d ϰ T 2 N 1 T 2 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ς ( t ) t ς 1 У T ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) У T ( ϰ ) d ϰ T 4 N 2 T 4 t ς ( t ) t ς 1 У ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) У ( ϰ ) d ϰ .
When applying Jensen’s inequality [27] and second-order reciprocal convex combination Lemma 1 to V ˙ 5 ( t ) ,
V ˙ 5 ( t ) = ϱ 1 2 2 ϱ 1 0 η 0 [ X ˙ T ( t ) U 1 X ˙ ( t ) X ˙ T ( t + ϰ ) U 1 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 1 2 2 ϱ 1 0 ϱ 1 η [ X ˙ T ( t ) U 2 X ˙ ( t ) X ˙ T ( t + ϰ ) U 2 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 12 2 2 ϱ 2 ϱ 1 η ϱ 1 [ X ˙ T ( t ) U 3 X ˙ ( t ) X ˙ T ( t + ϰ ) U 3 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 12 2 2 ϱ 2 ϱ 1 ϱ 2 η [ X ˙ T ( t ) U 4 X ˙ ( t ) X ˙ T ( t + ϰ ) U 4 X ˙ ( t + ϰ ) ] d ϰ d η + ς 1 2 2 ς 1 0 η 0 [ У ˙ T ( t ) U 5 У ˙ ( t ) У ˙ T ( t + ϰ ) U 5 У ˙ ( t + ϰ ) ] d ϰ d η + ς 1 2 2 ς 1 0 ς 1 η [ У ˙ T ( t ) U 6 У ˙ ( t ) У ˙ T ( t + ϰ ) U 6 У ˙ ( t + ϰ ) ] d ϰ d η + ς 12 2 2 ς 2 ς 1 η ς 1 [ У ˙ T ( t ) U 7 У ˙ ( t ) У ˙ T ( t + ϰ ) U 7 У ˙ ( t + ϰ ) ] d ϰ d η + ς 12 2 2 ς 2 ς 1 ς 2 η [ У ˙ T ( t ) U 8 У ˙ ( t ) У ˙ T ( t + ϰ ) U 8 У ˙ ( t + ϰ ) ] d ϰ d η
= ϱ 1 4 4 X ˙ T ( t ) U 1 X ˙ ( t ) + ϱ 1 4 4 X ˙ T ( t ) U 2 X ˙ ( t ) ϱ 1 2 2 ϱ 1 0 t + η t X ˙ T ( ϰ ) U 1 X ˙ ( ϰ ) d ϰ d η ϱ 1 2 2 ϱ 1 0 t ϱ 1 t + η X ˙ T ( ϰ ) U 2 X ˙ ( ϰ ) d ϰ d η + ϱ 12 4 4 X ˙ T ( t ) U 3 X ˙ ( t ) ϱ 12 2 2 ϱ 2 ϱ 1 t + η t ϱ 1 X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η + ϱ 12 4 4 X ˙ T ( t ) U 4 X ˙ ( t ) ϱ 12 2 2 ϱ 2 ϱ 1 t ϱ 2 t + η X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η + ς 1 4 4 У ˙ T ( t ) U 5 У ˙ ( t ) ς 1 2 2 ς 1 0 t + η t У ˙ T ( ϰ ) U 5 У ˙ ( ϰ ) d ϰ d η + ς 1 4 4 У ˙ T ( t ) U 6 У ˙ ( t ) ς 1 2 2 ς 1 0 t ς 1 t + η У ˙ T ( ϰ ) U 6 У ˙ ( ϰ ) d ϰ d η + ς 12 4 4 У ˙ T ( t ) U 7 У ˙ ( t ) ς 12 2 2 ς 2 ς 1 t + η t ς 1 У ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η + ς 12 4 4 У ˙ T ( t ) U 8 У ˙ ( t ) ς 12 2 2 ς 2 ς 1 t ς 2 t + η У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η
X ˙ T ( t ) U α X ˙ ( t ) + У ˙ T ( t ) U β У ˙ ( t ) ϱ 1 X T ( t ) t ϱ 1 t X T ( ϰ ) d ϰ U 1 ϱ 1 X ( t ) t ϱ 1 t X ( ϰ ) d ϰ t ϱ 1 t X T ( ϰ ) d ϰ ϱ 1 X T ( t ϱ 1 ) U 2 t ϱ 1 t X ( ϰ ) d ϰ ϱ 1 X ( t ϱ 1 ) ϱ 12 2 2 . β 1 α 1 ( ϱ ( t ) ϱ 1 ) t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ 1 α 1 2 . ( ϱ ( t ) ϱ 1 ) 2 2 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η 1 β 1 2 . ( ϱ 2 ϱ ( t ) ) 2 2 ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η ϱ 12 2 2 . α 1 β 1 ( ϱ 2 ϱ ( t ) ) t ϱ 2 t ϱ ( t ) X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ 1 α 1 2 . ( ϱ ( t ) ϱ 1 ) 2 2 ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η 1 β 1 2 . ( ϱ 2 ϱ ( t ) ) 2 2 ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η ς 1 У T ( t ) t ς 1 t У T ( ϰ ) d ϰ U 5 ς 1 У ( t ) t ς 1 t У ( ϰ ) d ϰ t ς 1 t У T ( ϰ ) d ϰ ς 1 У T ( t ς 1 ) U 6 t ς 1 t У ( ϰ ) d ϰ ς 1 Y ( t ς 1 ) ς 12 2 2 . β 2 α 2 ( ς ( t ) ς 1 ) t ς ( t ) t ς 1 У ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ 1 α 2 2 . ( ς ( t ) ς 1 ) 2 2 ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η 1 β 2 2 . ( ς 2 ς ( t ) ) 2 2 ς 2 ς ( t ) t + η t ς 1 У ˙ ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η ς 12 2 2 . α 2 β 2 ( ς 2 ς ( t ) ) t ς 2 t ς ( t ) У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ 1 α 2 2 . ( ς ( t ) ς 1 ) 2 2 ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η 1 β 2 2 . ( ς 2 ς ( t ) ) 2 2 ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η ,
where
$$
U_\alpha = \frac{\varrho_1^4 U_1 + \varrho_1^4 U_2 + \varrho_{12}^4 U_3 + \varrho_{12}^4 U_4}{4}, \qquad U_\beta = \frac{\varsigma_1^4 U_5 + \varsigma_1^4 U_6 + \varsigma_{12}^4 U_7 + \varsigma_{12}^4 U_8}{4};
$$
from the lower bound Lemma 1; moreover, when $\varrho(t)=\varrho_1$, $\varrho(t)=\varrho_2$, $\varsigma(t)=\varsigma_1$, or $\varsigma(t)=\varsigma_2$, respectively, we have
t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ = 0 , t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ = 0 , t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ = 0 , t ς ( t ) t ς 1 У ( ϰ ) d ϰ = 0 , t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ = 0 , t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ = 0 , t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ = 0 , t ς 2 t ς ( t ) У ( ϰ ) d ϰ = 0 .
The second-order convex combination’s upper bounds can be obtained in V ˙ 5 ( t ) from the lower bound Lemma 1 if there exist matrices M 1 , M 2 , , M 10 such that (3) and (4) hold
1 α 1 2 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ T ( ϰ ) d ϰ d η U 3 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η 1 β 1 2 ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ T ( ϰ ) d ϰ d η U 3 ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ ( ϰ ) d ϰ d η 1 α 1 2 ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ T ( ϰ ) d ϰ d η U 4 ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) d ϰ d η 1 β 1 2 ϱ 2 ϱ ( t ) t ϱ 2 t + η ) X ˙ T ( ϰ ) d ϰ d η U 4 ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η 1 α 2 2 ς ( t ) ς 1 t + η t ς 1 У ˙ T ( ϰ ) d ϰ d η U 7 ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η 1 β 2 2 ς 2 ς ( t ) t + η t ς ( t ) У ˙ T ( ϰ ) d ϰ d η U 7 ς 2 ς ( t ) t + η t ς ( t ) У ˙ ( ϰ ) d ϰ d η 1 α 2 2 ς ( t ) ς 1 t ς ( t ) t + η У ˙ T ( ϰ ) d ϰ d η U 8 ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) d ϰ d η 1 β 2 2 ς 2 ς ( t ) t ς 2 t + η ) У ˙ T ( ϰ ) d ϰ d η U 8 ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η
ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ ( ϰ ) d ϰ d η T U 3 M 1 + M 3 U 3 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η T U 4 M 5 + M 6 U 4 ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς ( t ) У ˙ ( ϰ ) d ϰ d η T U 7 M 7 + M 8 U 7 ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς ( t ) У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η T U 8 M 9 + M 10 U 8 ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ,
1 α 1 t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) d ϰ S 2 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ 1 β 1 t ϱ 2 t ϱ ( t ) X ˙ T ( ϰ ) d ϰ S 2 t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ ϱ 12 2 2 . β 1 α 1 t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) d ϰ U 3 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ ϱ 12 2 2 . α 1 β 1 t ϱ 2 t ϱ ( t ) X ˙ T ( ϰ ) d ϰ U 4 t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ T S 2 + ϱ 12 2 2 U 3 M 2 S 2 + ϱ 12 2 2 U 4 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ˙ ( ϰ ) d ϰ ,
1 α 2 t ς ( t ) t ς 1 У ˙ T ( ϰ ) d ϰ S 4 t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ 1 β 2 t ς 2 t ς ( t ) У ˙ T ( ϰ ) d ϰ S 4 t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ ς 12 2 2 . β 2 α 2 t ς ( t ) t ς 1 У ˙ T ( ϰ ) d ϰ U 7 t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ ς 12 2 2 . α 2 β 2 t ς 2 t ς ( t ) У ˙ T ( ϰ ) d ϰ U 8 t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ T S 4 + ς 12 2 2 U 7 M 4 S 4 + ς 12 2 2 U 8 t ς ( t ) t ς 1 У ˙ ( ϰ ) d ϰ t ς 2 t ς ( t ) У ˙ ( ϰ ) d ϰ ,
1 α 1 t ϱ ( t ) t ϱ 1 X T ( ϰ ) d ϰ T 2 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ 1 β 1 t ϱ 2 t ϱ ( t ) X T ( ϰ ) d ϰ T 2 t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ T T 2 N 1 T 2 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ ,
1 α 2 t ς ( t ) t ς 1 У T ( ϰ ) d ϰ T 4 t ς ( t ) t ς 1 У ( ϰ ) d ϰ 1 β 2 t ς 2 t ς ( t ) У T ( ϰ ) d ϰ T 4 t ς 2 t ς ( t ) У ( ϰ ) d ϰ t ς ( t ) t ς 1 У ( ϰ ) d ϰ t ς 2 t ς ( t ) У ( ϰ ) d ϰ T T 4 N 2 T 4 t ς ( t ) t ς 1 У ( ϰ ) d ϰ t ς 2 t ς ( t ) У ( ϰ ) d ϰ ,
Substituting (11)–(15) in (10), we have
V ˙ 5 ( t ) X ˙ T ( t ) U α X ˙ ( t ) + У ˙ T ( t ) U β У ˙ ( t ) ϱ 1 X T ( t ) t ϱ 1 t X T ( ϰ ) d ϰ U 1 ϱ 1 X ( t ) t ϱ 1 t X ( ϰ ) d ϰ t ϱ 1 t X T ( ϰ ) d ϰ ϱ 1 X T ( t ϱ 1 ) U 2 t ϱ 1 t X ( ϰ ) d ϰ ϱ 1 X ( t ϱ 1 ) ς 1 У T ( t ) t ς 1 t У T ( ϰ ) d ϰ U 5 ς 1 У ( t ) t ς 1 t У ( ϰ ) d ϰ t ς 1 t У T ( ϰ ) d ϰ ς 1 У T ( t ς 1 ) U 6 t ς 1 t У ( ϰ ) d ϰ ς 1 У ( t ς 1 ) ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ ( ϰ ) d ϰ d η T U 3 M 1 + M 3 U 3 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t + η t ϱ ( t ) X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η T U 4 M 5 + M 6 U 4 ϱ ( t ) ϱ 1 t ϱ ( t ) t + η X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς ( t ) У ˙ ( ϰ ) d ϰ d η T U 7 M 7 + M 8 U 7 ς ( t ) ς t t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς ( t ) У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η T U 8 M 9 + M 10 U 8 ς ( t ) ς 1 t ς ( t ) t + η У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η
X ˙ T ( t ) U α X ˙ ( t ) + У ˙ T ( t ) U β У ˙ ( t ) ϱ 1 X T ( t ) t ϱ 1 t X T ( ϰ ) d ϰ U 1 ϱ 1 X ( t ) t ϱ 1 t X ( ϰ ) d ϰ t ϱ 1 t X T ( ϰ ) d ϰ ϱ 1 X T ( t ϱ 1 ) U 2 t ϱ 1 t X ( ϰ ) d ϰ ϱ 1 X ( t ϱ 1 ) ς 1 У T ( t ) t ς 1 t У T ( ϰ ) d ϰ U 5 ς 1 У ( t ) t ς 1 t У ( ϰ ) d ϰ t ς 1 t У T ( ϰ ) d ϰ ς 1 У T ( t ς 1 ) U 6 t ς 1 t У ( ϰ ) d ϰ ς 1 У ( t ς 1 ) Ξ 1 T ( t ) { Γ 1 T ( t ) U 3 M 1 + M 3 U 3 Γ 1 ( t ) Γ 2 T ( t ) U 4 M 5 + M 6 U 4 Γ 2 ( t ) Γ 3 T ( t ) U 7 M 7 + M 8 U 7 Γ 3 ( t ) Γ 4 T ( t ) U 8 M 9 + M 10 U 8 Γ 4 ( t ) } Ξ 1 ( t ) ,
where
Γ 1 ( t ) = 0 n 0 n ( ϱ ( t ) ϱ 1 ) I n 2 t e r m s ( 0 n ) ( I n ) 0 n 13 t e r m s ( 0 n ) 0 n ( ϱ 2 ϱ ( t ) ) I n 0 n 2 t e r m s ( 0 n ) 0 n ( I n ) 13 t e r m s ( 0 n ) ,
Γ 2 ( t ) = 0 n ( ϱ ( t ) ϱ 1 ) ( I n ) 0 n 0 n 0 n I n 0 n 13 t e r m s ( 0 n ) 0 n 0 n 0 n ( ϱ 2 ϱ ( t ) ) ( I n ) 0 n 0 n I n 13 t e r m s ( 0 n ) ,
Γ 3 ( t ) = 11 t e r m s ( 0 n ) 0 n ( ς ( t ) ς 1 ) I n 2 t e r m s ( 0 n ) ( I n ) 0 n 3 t e r m s ( 0 n ) 11 t e r m s ( 0 n ) ( ς 2 ς ( t ) ) I n 0 n 2 t e r m s ( 0 n ) 0 n ( I n ) 3 t e r m s ( 0 n ) ,
Γ 4 ( t ) = 12 t e r m s ( 0 n ) ( ς ( t ) ς 1 ) ( I n ) 0 n 0 n I n 0 n 3 t e r m s ( 0 n ) 12 t e r m s ( 0 n ) 0 n ( ς 2 ς ( t ) ) ( I n ) 0 n 0 n I n 3 t e r m s ( 0 n ) ,
Ξ 1 ( t ) = c o l { X ( t ) , X ( t ϱ ( t ) , X ( t ϱ 1 ) , X ( t ϱ 2 ) , t ϱ 1 t X ( ϰ ) d ϰ , t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ , t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ , g ( X ( t ) ) , g ( X ( t ϱ ( t ) , t γ 2 ( t ) t γ 1 ( t ) g ( X ( ϰ ) ) d ϰ , У ( t ) , У ( t ς ( t ) , У ( t ς 1 ) , У ( t ς 2 ) , t ς 1 t У ( ϰ ) d ϰ , t ς ( t ) t ς 1 У ( ϰ ) d ϰ , t ς 2 t ς ( t ) У ( ϰ ) d ϰ , f ( У ( t ) ) , f ( У ( t ς ( t ) , t η 2 ( t ) t η 1 ( t ) f ( У ( ϰ ) ) d ϰ } .
Calculating the time derivative of V 6 ( t ) , we obtain
V ˙ 6 ( t ) = γ 12 2 g T ( X ( t ) ) J 1 g ( X ( t ) ) γ 12 t γ 2 t γ 1 g T ( X ( ϰ ) ) J 1 g ( X ( ϰ ) ) d ϰ + η 12 2 f T ( У ( t ) ) J 2 f ( У ( t ) ) η 12 t η 2 t η 1 f T ( У ( ϰ ) ) J 2 f ( У ( ϰ ) ) d ϰ , γ 12 2 g T ( X ( t ) ) J 1 g ( X ( t ) ) t γ 2 ( t ) t γ 1 ( t ) g T ( X ( ϰ ) ) d ϰ J 1 t γ 2 ( t ) t γ 1 ( t ) g ( X ( ϰ ) ) d ϰ + η 12 2 f T ( У ( t ) ) J 2 f ( У ( t ) ) t η 2 ( t ) t η 1 ( t ) f T ( У ( ϰ ) ) d ϰ J 2 t η 2 ( t ) t η 1 ( t ) f ( У ( ϰ ) ) d ϰ .
In addition, from Assumption 1, we have
g T ( X ( t ϱ ( t ) ) ) g ( X ( t ϱ ( t ) ) ) X T ( t ϱ ( t ) ) W 1 T W 1 X ( t ϱ ( t ) ) ,
f T ( У ( t ς ( t ) ) ) f ( У ( t ς ( t ) ) ) У T ( t ς ( t ) ) W 2 T W 2 У ( t ς ( t ) ) ,
and from (18) and (19), it is easily verified that
a 1 X T ( t ) W 1 T W 1 X ( t ) a 1 g T ( X ( t ) ) g ( X ( t ) ) 0 ,
a 2 X T ( t ϱ ( t ) ) W 1 T W 1 X ( t ϱ ( t ) ) a 2 g T ( X ( t ϱ ( t ) ) ) g ( X ( t ϱ ( t ) ) ) 0 ,
b 1 У T ( t ) W 2 T W 2 Y ( t ) b 1 f T ( У ( t ) ) f ( У ( t ) ) 0 ,
b 2 У T ( t ς ( t ) ) W 2 T W 2 У ( t ς ( t ) ) b 2 f T ( У ( t ς ( t ) ) ) f ( У ( t ς ( t ) ) ) 0 .
From the estimation of the time derivatives (6)–(9), (16) and (17) of V ( t ) along with the activation functions (20)–(23), it is evident that
V ˙ ( t ) Ξ 1 T ( t ) { Ω Γ 1 T ( t ) U 3 M 1 + M 3 U 3 Γ 1 ( t ) Γ 2 T ( t ) U 4 M 5 + M 6 U 4 Γ 2 ( t ) Γ 3 T ( t ) U 7 M 7 + M 8 U 7 Γ 3 ( t ) Γ 4 T ( t ) U 8 M 9 + M 10 U 8 Γ 4 ( t ) } Ξ 1 ( t ) .
Furthermore,
Ω Γ 1 T ( t ) U 3 M 1 + M 3 U 3 Γ 1 ( t ) Γ 2 T ( t ) U 4 M 5 + M 6 U 4 Γ 2 ( t ) Γ 3 T ( t ) U 7 M 7 + M 8 U 7 Γ 3 ( t ) Γ 4 T ( t ) U 8 M 9 + M 10 U 8 Γ 4 ( t ) < 0 ,
is linear in $\varrho(t)$ and $\varsigma(t)$; hence, from Lemma 2 and the Schur complement (Lemma 3), we can see that
ϕ l Π 1 T Π 2 T Π 3 T Π 4 T U 3 ¯ 0 0 0 U 4 ¯ 0 0 U 7 ¯ 0 U 8 ¯ < 0 ,
where
ϕ l = Ω 2 Γ 1 l T U 3 ^ 2 Γ 2 l T U 4 ^ 2 Γ 3 l T U 7 ^ 2 Γ 4 l T U 8 ^ 2 U 3 ^ 0 0 0 2 U 4 ^ 0 0 2 U 7 ^ 0 2 U 8 ^ + Γ 1 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 1 + Π 1 T Γ 1 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 2 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 2
+ Π 2 T Γ 2 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 3 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 3 + Π 3 T Γ 3 l T 0 2 n 0 2 n 0 2 n 0 2 n T + Γ 4 l T 0 2 n 0 2 n 0 2 n 0 2 n Π 4 + Π 4 T Γ 4 l T 0 2 n 0 2 n 0 2 n 0 2 n T ,
Therefore, the four corresponding boundary LMIs in (26) can be treated non-conservatively: they correspond to the vertex values $\varrho(t) \in \{\varrho_1, \varrho_2\}$ and $\varsigma(t) \in \{\varsigma_1, \varsigma_2\}$, and their feasibility implies $\dot{V}(t) < 0$. This completes the proof. □
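The vertex argument above relies on the observation that, for each fixed vector $x$, the scalar $x^{T} M(\varrho(t), \varsigma(t)) x$ of a matrix family that is affine in $(\varrho(t), \varsigma(t))$ attains its maximum over the delay box at one of the four corners, so negative definiteness at the corners carries over to the whole box. The sketch below illustrates this property with randomly generated symmetric matrices, which are stand-ins for, not reconstructions of, the actual LMI blocks of Theorem 1.

```python
import numpy as np

# Vertex argument for a matrix that is affine in two parameters (p, q):
#   M(p, q) = M0 + p*Mp + q*Mq is negative definite on the box [p1,p2] x [q1,q2]
#   whenever it is negative definite at the four corners.  All data is illustrative.
rng = np.random.default_rng(3)
n = 4

def sym(A):
    return (A + A.T) / 2

M0 = sym(rng.standard_normal((n, n))) - 12.0 * np.eye(n)   # strongly negative definite part
Mp = 0.4 * sym(rng.standard_normal((n, n)))
Mq = 0.4 * sym(rng.standard_normal((n, n)))
p1, p2, q1, q2 = 1.0, 2.0, 1.0, 2.0                         # stand-ins for the delay bounds

def M(p, q):
    return M0 + p * Mp + q * Mq

def neg_def(A):
    return np.max(np.linalg.eigvalsh(A)) < 0

corners_ok = all(neg_def(M(p, q)) for p in (p1, p2) for q in (q1, q2))
box_ok = all(neg_def(M(p, q))
             for p in np.linspace(p1, p2, 21) for q in np.linspace(q1, q2, 21))

print("negative definite at the 4 corners:", corners_ok)
print("negative definite on the whole box:", box_ok)        # matches the corner test
```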
Theorem 2.
For given scalars 0 ≤ ς_1 ≤ ς_2 and 0 ≤ ϱ_1 ≤ ϱ_2, system (1) is globally asymptotically stable if there exist symmetric matrices P = [P_{ij}]_{4×4} > 0, Q = [Q_{ij}]_{4×4} > 0, R_k > 0, S_k > 0, T_k > 0, k = 1, 2, 3, 4, U_j > 0, j = 1, 2, …, 8, [R_{11} R_{12}; ∗ R_{22}] > 0, [R_{31} R_{32}; ∗ R_{33}] > 0, J_1, J_2 > 0, and scalars a_k > 0, b_k > 0, k = 1, 2, such that the LMI condition Λ < 0 holds, where Λ = [Λ_{ij}]_{32×32}, Λ_{r,s} = Λ_{s,r}^T, r, s = 1, 2, …, 32.
Λ 1 , 1 = P 11 A 1 A 1 T P 11 T + P 12 + P 12 T + R 1 + R 11 + A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) A 1 S 1 + ϱ 1 2 T 1 + ϱ 12 2 T 2 + ϱ 12 2 T 2 + A 1 T U α A 1 + a 1 W 1 T W 1 , Λ 1 , 2 = ( 1 υ 2 ) ( P 13 P 14 ) , Λ 1 , 3 = P 12 + P 14 + S 1 , Λ 1 , 4 = P 13 , Λ 1 , 5 = P 22 T , Λ 1 , 14 = R 12 , Λ 1 , 30 = P 11 B 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) B 1 A 1 T U α B 1 , Λ 1 , 31 = P 11 C 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 A 1 T U α C 1 , Λ 1 , 32 = P 11 D 1 A 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 A 1 T U α D 1 , Λ 2 , 2 = ( 1 υ 2 ) R 11 + a 2 W 1 T W 1 , Λ 2 , 5 = ( 1 υ 2 ) P 23 T ( 1 υ 2 ) P 24 T , Λ 2 , 6 = ( 1 υ 2 ) P 44 T , Λ 2 , 7 = ( 1 υ 2 ) P 33 T ( 1 υ 2 ) P 34 T , Λ 2 , 15 = ( 1 υ 2 ) R 12 , Λ 3 , 3 = R 2 R 1 S 1 S 2 , Λ 3 , 4 = S 2 , Λ 3 , 5 = P 22 T + P 24 T , Λ 3 , 6 = P 44 T , Λ 3 , 7 = P 34 T , Λ 4 , 4 = R 2 S 2 , Λ 4 , 5 = P 23 T , Λ 4 , 7 = P 33 T , Λ 5 , 5 = T 1 , Λ 6 , 6 = T 2 , Λ 6 , 7 = T 2 ¯ T , Λ 7 , 7 = T 2 , Λ 8 , 8 = U 1 , Λ 9 , 9 = U 3 , Λ 9 , 10 = U 3 ¯ T , Λ 10 , 10 = U 3 , Λ 11 , 11 = U 2 , Λ 12 , 12 = U 4 , Λ 12 , 13 = U 4 ¯ T , Λ 13 , 13 = U 4 Λ 14 , 14 = R 22 + B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) B 2 + γ 12 2 J 1 a 1 I + B 2 T U β B 2 , Λ 14 , 15 = B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) C 2 + B 2 T U β C 2 , Λ 14 , 16 = B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + B 2 T U β D 2 , Λ 14 , 17 = B 2 T Q 11 T B 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 B 2 T U β A 2 , Λ 15 , 15 = ( 1 υ 2 ) R 22 + C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) C 2 + C 2 T U β C 2 a 2 I , Λ 15 , 16 = C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + C 2 T U β D 2 , Λ 15 , 17 = C 2 T Q 11 T C 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 C 2 T U β A 2 , Λ 16 , 16 = D 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) D 2 + D 2 T U β D 2 J 1 , Λ 16 , 17 = D 2 T Q 11 T D 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 D 2 T U β A 2 , Λ 17 , 17 = Q 11 A 2 A 2 T Q 11 T + Q 12 + Q 12 T + R 3 + R 31 + A 2 T ( ς 1 2 S 3 + ς 12 2 S 4 ) A 2 S 3 + ς 1 2 T 3 + ς 12 2 T 4 + A 2 T U β A 2 + b 1 W 2 T W 2 , Λ 17 , 18 = ( 1 υ 1 ) ( Q 13 Q 14 ) , Λ 17 , 19 = Q 12 + Q 14 + S 3 , Λ 17 , 20 = Q 13 , Λ 17 , 21 = Q 22 T , Λ 17 , 30 = R 32 , Λ 18 , 18 = ( 1 υ 1 ) R 31 + b 2 W 2 T W 2 , Λ 18 , 21 = ( 1 υ 1 ) ( Q 23 T Q 24 T ) , Λ 18 , 22 = ( 1 υ 1 ) Q 44 T , Λ 18 , 23 = ( 1 υ 1 ) ( Q 33 T Q 34 T ) , Λ 18 , 31 = ( 1 υ 1 ) R 32 , Λ 19 , 19 = R 3 + R 4 S 3 S 4 , Λ 19 , 20 = S 4 , Λ 19 , 21 = Q 22 T + Q 24 T , Λ 19 , 22 = Q 44 T , Λ 19 , 23 = Q 34 T , Λ 20 , 20 = R 4 S 4 , Λ 20 , 21 = Q 23 T , Λ 20 , 23 = Q 33 T , Λ 21 , 21 = T 3 , Λ 22 , 22 = T 4 , Λ 22 , 23 = T 4 ¯ T , Λ 23 , 23 = T 4 , Λ 24 , 24 = U 5 , Λ 25 , 25 = U 7 , Λ 25 , 26 = U 7 ¯ T , Λ 26 , 26 = U 7 , Λ 27 , 27 = U 6 , Λ 28 , 28 = U 8 , Λ 28 , 29 = U 8 ¯ T , Λ 29 , 29 = U 8 , Λ 30 , 30 = R 33 + B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) B 1 + B 1 T U α B 1 + η 12 2 J 2 b 1 I , Λ 30 , 31 = B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 + B 1 T U α C 1 , Λ 30 , 32 = B 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + B 1 T U α D 1 , Λ 31 , 31 = ( 1 u p s i l o n 1 ) R 33 + C 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) C 1 + C 1 T U α C 1 b 2 I , Λ 31 , 32 = C 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + C 1 T U α D 1 , Λ 32 , 32 = D 1 T ( ϱ 1 2 S 1 + ϱ 12 2 S 2 ) D 1 + D 1 T U α D 1 J 2 , w h e r e U α = ϱ 1 4 U 1 + ϱ 1 4 U 2 + ϱ 12 4 U 3 + ϱ 12 4 U 4 4 , U β = ς 1 4 U 5 + ς 1 4 U 6 + ς 12 4 U 7 + ς 12 4 U 8 4 ;
Proof. 
Consider the LKF candidates as in Equation (5)
V ( t ) = r = 1 6 V r ( t ) .
Calculating the time derivatives, we have
V ˙ 1 ( t ) = 2 ζ 1 T ( t ) P ζ 1 ˙ ( t ) + 2 ζ 2 T ( t ) Q ζ 2 ˙ ( t ) ,
V ˙ 2 ( t ) X T ( t ) ( R 1 ) X ( t ) + X T ( t ϱ 1 ) ( R 2 R 1 ) X ( t ϱ 1 ) X T ( t ϱ 2 ) ( R 2 ) X ( t ϱ 2 ) + X T ( t ) ( R 11 ) X ( t ) + X T ( t ) ( R 12 ) g ( X ( t ) ) + g T ( X ( t ) ) ( R 12 T ) X ( t ) + g T ( X ( t ) ) ( R 22 ) g ( X ( t ) ) ( 1 υ 2 ) X T ( t ϱ ( t ) ) ( R 11 ) X ( t ϱ ( t ) ) ( 1 υ 2 ) X T ( t ϱ ( t ) ) ( R 12 ) g ( X ( t ϱ ( t ) ) ) ( 1 υ 2 ) g T ( X ( t ϱ ( t ) ) ) ( R 22 ) g ( X ( t ϱ ( t ) ) ) ( 1 υ 2 ) g T ( X ( t ϱ ( t ) ) ) ( R 12 T ) X ( t ϱ ( t ) ) + У T ( t ) ( R 3 ) У ( t ) + У T ( t ς 1 ) ( R 4 R 3 ) У ( t ς 1 ) У T ( t ς 2 ) ( R 4 ) У ( t ς 2 ) + У T ( t ) ( R 31 ) У ( t ) + У T ( t ) ( R 32 ) f ( У ( t ) ) + f T ( У ( t ) ) ( R 32 T ) У ( t ) + f T ( У ( t ) ) ( R 33 ) f ( У ( t ) ) ( 1 υ 1 ) У T ( t ς ( t ) ) ( R 31 ) У ( t ς ( t ) ) ( 1 υ 1 ) У T ( t ς ( t ) ) ( R 32 ) f ( У ( t ς ( t ) ) ) ( 1 υ 1 ) f T ( У ( t ς ( t ) ) ) ( R 33 ) f ( У ( t ς ( t ) ) ) ( 1 υ 1 ) f T ( У ( t ς ( t ) ) ) ( R 32 T ) У ( t ς ( t ) ) ,
By applying Jensen’s inequality [27], we have
V ˙ 3 ( t ) ϱ 1 2 X ˙ T ( t ) S 1 X ˙ ( t ) t ϱ 1 t X ˙ T ( ϰ ) d ϰ S 1 t ϱ 1 t X ˙ ( ϰ ) d ϰ + ς 1 2 У ˙ T ( t ) S 3 У ˙ ( t ) t ς 1 t У ˙ T ( ϰ ) d ϰ S 3 t ς 1 t У ˙ ( ϰ ) d ϰ + ϱ 12 2 X ˙ T ( t ) S 2 X ˙ ( t ) t ϱ ( t ) t ϱ 1 X ˙ T ( ϰ ) d ϰ S 2 t ϱ ( t ) t ϱ 1 X ˙ ( ϰ ) d ϰ t ϱ 2 t ϱ 1 X ˙ T ( ϰ ) d ϰ S 2 t ϱ 2 t ϱ 1 X ˙ ( ϰ ) d ϰ + ς 12 2 У ˙ T ( t ) S 4 У ˙ ( t ) t ς 2 t ς 1 У ˙ T ( ϰ ) d ϰ S 4 t ς 2 t ς 1 У ˙ ( ϰ ) d ϰ ,
V ˙ 4 ( t ) = ϱ 1 2 X T ( t ) T 1 X ( t ) ϱ 1 t ϱ 1 t X T ( ϰ ) T 1 X ( ϰ ) d ϰ + ς 1 2 У T ( t ) T 3 У ( t ) ς 1 t ς 1 t У T ( ϰ ) T 3 У ( ϰ ) d ϰ + ϱ 12 2 X T ( t ) T 2 X ( t ) ϱ 12 t ϱ 2 t ϱ 1 X T ( ϰ ) T 2 X ( ϰ ) d ϰ + ς 12 2 У T ( t ) T 4 У ( t ) ς 12 t ς 2 t ς 1 У T ( ϰ ) T 4 У ( ϰ ) d ϰ ,
By applying Jensen’s inequality [27] and the lower bound Lemma 1 to V ˙ 4 ( t ) , we obtain
ϱ 1 t ϱ 1 t X T ( ϰ ) T 1 X ( ϰ ) d ϰ t ϱ 1 t X ( ϰ ) d ϰ T T 1 t ϱ 1 t X ( ϰ ) d ϰ ,
ς 1 t ς 1 t У T ( ϰ ) T 3 У ( ϰ ) d ϰ t ς 1 t У ( ϰ ) d ϰ T T 3 t ς 1 t У ( ϰ ) d ϰ ,
ϱ 12 t ϱ 2 t ϱ 1 X T ( ϰ ) T 2 X ( ϰ ) d ϰ = ϱ 12 t ϱ 2 t ϱ ( t ) X T ( ϰ ) T 2 X ( ϰ ) d ϰ ϱ 12 t ϱ ( t ) t ϱ 1 X T ( ϰ ) T 2 X ( ϰ ) d ϰ ϱ 12 ϱ 2 ϱ ( t ) t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ T T 2 t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ ϱ 12 ϱ ( t ) ϱ 1 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ T T 2 t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ T T 2 T 2 ¯ T 2 t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ ,
Similarly,
ς 12 t ς 2 t ς 1 У T ( ϰ ) T 4 У ( ϰ ) d ϰ t ς 2 t ς ( t ) У ( ϰ ) d ϰ t ς ( t ) t ς 1 У ( ϰ ) d ϰ T T 4 T 4 ¯ T 4 t ς 2 t ς ( t ) У ( ϰ ) d ϰ t ς ( t ) t ς 1 У ( ϰ ) d ϰ ,
By substituting (31)–(34) in (30), we get
V ˙ 4 ( t ) ϱ 1 2 X T ( t ) T 1 X ( t ) + ς 1 2 У T ( t ) T 3 У ( t ) + ϱ 12 2 X T ( t ) T 2 X ( t ) + ς 12 2 У T ( t ) T 4 У ( t ) t ϱ 1 t X ( ϰ ) d ϰ T T 1 t ϱ 1 t X ( ϰ ) d ϰ t ς 1 t У ( ϰ ) d ϰ T T 3 t ς 1 t У ( ϰ ) d ϰ t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ T T 2 T 2 ¯ T 2 t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ t ς 2 t ς ( t ) У ( ϰ ) d ϰ t ς ( t ) t ς 1 У ( ϰ ) d ϰ T T 4 T 4 ¯ T 4 t ς 2 t ς ( t ) У ( ϰ ) d ϰ t ς ( t ) t ς 1 У ( ϰ ) d ϰ ,
V ˙ 5 ( t ) = ϱ 1 2 2 ϱ 1 0 η 0 [ X ˙ T ( t ) U 1 X ˙ ( t ) X ˙ T ( t + ϰ ) U 1 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 1 2 2 ϱ 1 0 ϱ 1 η [ X ˙ T ( t ) U 2 X ˙ ( t ) X ˙ T ( t + ϰ ) U 2 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 12 2 2 ϱ 2 ϱ 1 η ϱ 1 [ X ˙ T ( t ) U 3 X ˙ ( t ) X ˙ T ( t + ϰ ) U 3 X ˙ ( t + ϰ ) ] d ϰ d η + ϱ 12 2 2 ϱ 2 ϱ 1 ϱ 2 η [ X ˙ T ( t ) U 4 X ˙ ( t ) X ˙ T ( t + ϰ ) U 4 X ˙ ( t + ϰ ) ] d ϰ d η + ς 1 2 2 ς 1 0 η 0 [ У ˙ T ( t ) U 5 У ˙ ( t ) У ˙ T ( t + ϰ ) U 5 У ˙ ( t + ϰ ) ] d ϰ d η + ς 1 2 2 ς 1 0 ς 1 η [ У ˙ T ( t ) U 6 У ˙ ( t ) У ˙ T ( t + ϰ ) U 6 У ˙ ( t + ϰ ) ] d ϰ d η + ς 12 2 2 ς 2 ς 1 η ς 1 [ У ˙ T ( t ) U 7 У ˙ ( t ) У ˙ T ( t + ϰ ) U 7 У ˙ ( t + ϰ ) ] d ϰ d η + ς 12 2 2 ς 2 ς 1 ς 2 η [ У ˙ T ( t ) U 8 У ˙ ( t ) У ˙ T ( t + ϰ ) U 8 У ˙ ( t + ϰ ) ] d ϰ d η ,
= ϱ 1 4 4 X ˙ T ( t ) U 1 X ˙ ( t ) + ϱ 1 4 4 X ˙ T ( t ) U 2 X ˙ ( t ) ϱ 1 2 2 ϱ 1 0 t + η t X ˙ T ( ϰ ) U 1 X ˙ ( ϰ ) d ϰ d η ϱ 1 2 2 ϱ 1 0 t ϱ 1 t + η X ˙ T ( ϰ ) U 2 X ˙ ( ϰ ) d ϰ d η + ϱ 12 4 4 X ˙ T ( t ) U 3 X ˙ ( t ) ϱ 12 2 2 ϱ 2 ϱ 1 t + η t ϱ 1 X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η + ϱ 12 4 4 X ˙ T ( t ) U 4 X ˙ ( t ) ϱ 12 2 2 ϱ 2 ϱ 1 t ϱ 2 t + η X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η + ς 1 4 4 У ˙ T ( t ) U 5 У ˙ ( t ) ς 1 2 2 ς 1 0 t + η t У ˙ T ( ϰ ) U 5 У ˙ ( ϰ ) d ϰ d η + ς 1 4 4 У ˙ T ( t ) U 6 У ˙ ( t ) ς 1 2 2 ς 1 0 t ς 1 t + η У ˙ T ( ϰ ) U 6 У ˙ ( ϰ ) d ϰ d η + ς 12 4 4 У ˙ T ( t ) U 7 У ˙ ( t ) ς 12 2 2 ς 2 ς 1 t + η t ς 1 У ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η + ς 12 4 4 У ˙ T ( t ) U 8 У ˙ ( t ) ς 12 2 2 ς 2 ς 1 t ς 2 t + η У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η ,
Further,
ϱ 1 2 2 ϱ 1 0 t + η t X ˙ T ( ϰ ) U 1 X ˙ ( ϰ ) d ϰ d η ϱ 1 2 2 ϱ 1 0 t ϱ 1 t + η X ˙ T ( ϰ ) U 2 X ˙ ( ϰ ) d ϰ d η ς 1 2 2 ς 1 0 t + η t У ˙ T ( ϰ ) U 5 У ˙ ( ϰ ) d ϰ d η ς 1 2 2 ς 1 0 t ς 1 t + η У ˙ T ( ϰ ) U 6 У ˙ ( ϰ ) d ϰ d η ϱ 1 0 t + η t X ˙ ( ϰ ) d ϰ d η T U 1 ϱ 1 0 t + η t X ˙ ( ϰ ) d ϰ d η ϱ 1 0 t ϱ 1 t + η X ˙ ( ϰ ) d ϰ d η T U 2 ϱ 1 0 t ϱ 1 t + η X ˙ ( ϰ ) d ϰ d η ς 1 0 t + η t У ˙ ( ϰ ) d ϰ d η T U 5 ς 1 0 t + η t У ˙ ( ϰ ) d ϰ d η ς 1 0 t ς 1 t + η У ˙ ( ϰ ) d ϰ d η T U 6 ς 1 0 t ς 1 t + η У ˙ ( ϰ ) d ϰ d η ,
and
ϱ 12 2 2 ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η ϱ 12 2 2 ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ T ( ϰ ) U 3 X ˙ ( ϰ ) d ϰ d η ϱ 12 2 2 ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η ϱ 12 2 2 ϱ ( t ) ϱ 1 t ϱ 2 t + η X ˙ T ( ϰ ) U 4 X ˙ ( ϰ ) d ϰ d η ς 12 2 2 ς 2 ς ( t ) t + η t ς 1 У ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η ς 12 2 2 ς ( t ) ς 1 t + η t ς 1 У ˙ T ( ϰ ) U 7 У ˙ ( ϰ ) d ϰ d η ς 12 2 2 ς 2 ς ( t ) t ς 2 t + η У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η ς 12 2 2 ς ( t ) ς 1 t ς 2 t + η У ˙ T ( ϰ ) U 8 У ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η T U 3 U 3 ¯ U 3 ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η T U 4 U 4 ¯ U 4 ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η T U 7 U 7 ¯ U 7 ς 2 ς ( t ) t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς 2 t + η У ˙ ( ϰ ) d ϰ d η T U 8 U 8 ¯ U 8 ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ,
By substituting (37) and (38) in (36), we get
V 5 ˙ ( t ) X ˙ T ( t ) U α X ˙ ( t ) + У ˙ T ( t ) U β У ˙ ( t ) ϱ 1 0 t + η t X ˙ ( ϰ ) T U 1 ϱ 1 0 t + η t X ˙ ( ϰ ) ϱ 1 0 t ϱ 1 t + η X ˙ ( ϰ ) T U 2 ϱ 1 0 t ϱ 1 t + η X ˙ ( ϰ ) ς 1 0 t + η t У ˙ ( ϰ ) T U 5 ς 1 0 t + η t У ˙ ( ϰ ) ς 1 0 t ς 1 t + η У ˙ ( ϰ ) T U 6 ς 1 0 t ς 1 t + η У ˙ ( ϰ ) ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η T U 3 U 3 ¯ U 3 ϱ 2 ϱ ( t ) t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t + η t ϱ 1 X ˙ ( ϰ ) d ϰ d η ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η T U 4 U 4 ¯ U 4 ϱ 2 ϱ ( t ) t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ϱ ( t ) ϱ 1 t ϱ 2 t + η X ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η T U 7 U 7 ¯ U 7 ς 2 ς ( t ) t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t + η t ς 1 У ˙ ( ϰ ) d ϰ d η ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς 2 t + η У ˙ ( ϰ ) d ϰ d η T U 8 U 8 ¯ U 8 ς 2 ς ( t ) t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ς ( t ) ς 1 t ς 2 t + η У ˙ ( ϰ ) d ϰ d η ,
where $U_\alpha = \frac{\varrho_1^4 U_1 + \varrho_1^4 U_2 + \varrho_{12}^4 U_3 + \varrho_{12}^4 U_4}{4}$ and $U_\beta = \frac{\varsigma_1^4 U_5 + \varsigma_1^4 U_6 + \varsigma_{12}^4 U_7 + \varsigma_{12}^4 U_8}{4}$.
V ˙ 6 ( t ) γ 12 2 g T ( X ( t ) ) J 1 g ( X ( t ) ) t γ 2 ( t ) t γ 1 ( t ) g T ( X ( ϰ ) ) d ϰ J 1 t γ 2 ( t ) t γ 1 ( t ) g ( X ( ϰ ) ) d ϰ + η 12 2 f T ( У ( t ) ) J 2 f ( У ( t ) ) t η 2 ( t ) t η 1 ( t ) f T ( У ( ϰ ) ) d ϰ J 2 t η 2 ( t ) t η 1 ( t ) f ( У ( ϰ ) ) d ϰ .
Adding the activation functions (20) to (23) along with the calculation of the time derivatives (27)–(29), (35), (39) and (40) of V ( t ) , we can see that
V ˙ ( t ) Ξ 2 T ( t ) Λ Ξ 2 ( t ) ,
where
Ξ 2 ( t ) = c o l { X ( t ) , X ( t ϱ ( t ) ) , X ( t ϱ 1 ) , X ( t ϱ 2 ) , t ϱ 1 t X ( ϰ ) d ϰ , t ϱ ( t ) t ϱ 1 X ( ϰ ) d ϰ , t ϱ 2 t ϱ ( t ) X ( ϰ ) d ϰ , ϱ 1 0 t + η t X ( ϰ ) d ϰ d η , ϱ ( t ) ϱ 1 t + η t ϱ 1 X ( ϰ ) d ϰ d η , ϱ 2 ϱ ( t ) t + η t ϱ 1 X ( ϰ ) d ϰ d η , ϱ 1 0 t ϱ 1 t + η X ( ϰ ) d ϰ d η , ϱ ( t ) ϱ 1 t ϱ 2 t + η X ( ϰ ) d ϰ d η , ϱ 2 ϱ ( t ) t ϱ 2 t + η X ( ϰ ) d ϰ d η , g ( X ( t ) ) , g ( X ( t ϱ ( t ) ) ) , t γ 2 ( t ) t γ 1 ( t ) g ( X ( ϰ ) ) d ϰ , У ( t ) , У ( t ς ( t ) ) , У ( t ς 1 ) , У ( t ς 2 ) , t ς 1 t У ( ϰ ) d ϰ , t ς ( t ) t ς 1 У ( ϰ ) d ϰ , t ς 2 t ς ( t ) У ( ϰ ) d ϰ , ς 1 0 t + η t У ( ϰ ) d ϰ d η , ς ( t ) ς 1 t + η t ς 1 У ( ϰ ) d ϰ d η , ς 2 ς ( t ) t + η t ς 1 У ( ϰ ) d ϰ d η , ς 1 0 t ς 1 t + η У ( ϰ ) d ϰ d η , ς ( t ) ς 1 t ς 2 t + η У ( ϰ ) d ϰ d η , ς 2 ς ( t ) t ς 2 t + η У ( ϰ ) d ϰ d η , f ( У ( t ) ) , f ( У ( t ς ( t ) ) ) , t η 2 ( t ) t η 1 ( t ) f ( У ( ϰ ) ) d ϰ } ,
Furthermore,
Λ < 0
is linear in $\varrho(t)$ and $\varsigma(t)$, which implies $\dot{V}(t) < 0$. This completes the proof. □

4. Numerical Examples

Example 1.
Consider the two-neuron BAM NN model with mixed time-varying delays (1) with the following parameters:
A 1 = 4 0 0 4 , B 1 = 0.55 0.32 0.42 0.76 , C 1 = 0.9 0.6 0.3 0.1 , D 1 = 0.4 0.1 0.1 0.2 , A 2 = 3 0 0 3 , B 2 = 0.3 0.6 0.5 0.4 , C 2 = 0.7 0.5 0.6 0.2 , D 2 = 0.2 0.3 0.2 0.1 , W 1 = 0.1 0 0 0.1 , W 2 = 0.1 0 0 0.1 , I = 1 0 0 1 , υ 1 = 0.5 , υ 2 = 0.5 , η 1 = 0.25 , η 2 = 0.4756 , γ 1 = 0.25 , γ 2 = 0.4756 , ς ( t ) = 1.5 + 0.5 s i n t , ϱ ( t ) = 1.5 + 0.5 s i n t . C l e a r l y ς 1 = 1 , ς 2 = 2 , ϱ 1 = 1 , ϱ 2 = 2 .
By Theorem 1, using the MATLAB R2025a LMI solver toolbox, we can find that the system (1) is globally asymptotically stable and a part of the feasible solution is as follows:
P 11 = 49.1139 5.4309 5.4309 55.3414 , P 12 = 13.1157 1.1130 1.1130 14.3562 , P 13 = 7.4812 0.0846 0.0846 7.5586 , P 14 = 9.7913 0.0041 0.0041 9.8517 , P 22 = 13.2004 0.4010 0.4010 13.6412 , P 23 = 5.8365 0.0464 0.0464 5.8844 , P 24 = 6.7410 0.0640 0.0640 6.6803 , P 33 = 14.7311 0.0933 0.0933 14.8188 , P 34 = 7.7115 0.0106 0.0106 7.7015 , P 44 = 12.7271 0.0027 0.0027 12.7502 , Q 11 = 53.0313 6.4562 6.4562 56.5498 , Q 12 = 11.9544 1.8408 1.8408 12.9746 , Q 13 = 7.1708 0.2469 0.2469 7.2948 , Q 14 = 9.8058 0.0548 0.0548 9.8537 , Q 22 = 12.7087 0.7036 0.7036 13.0921 , Q 23 = 5.5854 0.1589 0.1589 5.6705 , Q 24 = 6.5708 0.0641 0.0641 6.6120 , Q 33 = 13.9142 0.1976 0.1976 14.0179 , Q 34 = 6.9318 0.0065 0.0065 6.9338 , Q 44 = 14.7622 0.2441 0.2441 14.8952 , R 1 = 45.1445 2.4128 2.4128 42.4111 , R 2 = 24.1601 0.3687 0.3687 24.5842 , R 3 = 27.8471 2.9449 2.9449 26.2354 , R 4 = 19.6855 0.0687 0.0687 19.6998 , e t c . ,
The following Table 1 shows the Maximum Allowable Upper Bounds (MAUBs) obtained for different values of υ by Theorem 1.
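The feasibility problems above were solved with the MATLAB LMI toolbox. As a hedged, language-neutral illustration of the same workflow, the sketch below checks a much smaller stability LMI with CVXPY: for ẋ(t) = A x(t) + A_d x(t − τ(t)) with τ̇(t) ≤ μ < 1, the classical Lyapunov–Krasovskii functional V = xᵀPx + ∫_{t−τ(t)}^{t} xᵀQx ds is decreasing whenever P > 0, Q > 0 and [[AᵀP + PA + Q, PA_d], [A_dᵀP, −(1−μ)Q]] < 0. The system matrices and this reduced LMI are assumptions of the sketch; the actual LMIs of Theorem 1 contain far more decision variables but are tested in exactly the same way.

```python
import numpy as np
import cvxpy as cp

# Hedged illustration (NOT the LMIs of Theorem 1): feasibility of a classical
# Lyapunov-Krasovskii condition for  xdot(t) = A x(t) + Ad x(t - tau(t)), taudot <= mu < 1.
# The data below is made up for illustration.
A  = np.array([[-4.0,  0.3], [ 0.2, -3.5]])
Ad = np.array([[ 0.6,  0.4], [-0.5,  0.3]])
mu = 0.5
n  = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -(1.0 - mu) * Q]])

constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)          # 'optimal' here means the LMIs are feasible
if prob.status == cp.OPTIMAL:
    print("P =\n", P.value)
    print("Q =\n", Q.value)
```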
Example 2.
Consider the two-neuron BAM NN model (1) with mixed time-varying delays and the same parameters as in (42).
By Theorem 2, using the MATLAB LMI solver toolbox, we can find that the system (1) is globally asymptotically stable and a part of the feasible solution is as follows:
P 11 = 91.8077 12.8486 12.8486 106.4923 , P 12 = 23.1396 0.0511 0.0511 23.1529 , P 13 = 16.0093 0.2931 0.2931 15.7299 , P 14 = 22.6700 0.1689 0.1689 22.6655 , P 22 = 25.7340 0.6961 0.6961 24.8843 , P 23 = 11.5107 0.2773 0.2773 11.2136 , P 24 = 15.2282 0.3415 0.3415 14.7799 , P 33 = 2.6639 0.0157 0.0157 2.6828 , P 34 = 12.9641 0.0595 0.0595 12.9106 , P 44 = 25.2266 0.3279 0.3279 24.8217 , Q 11 = 100.1672 14.9924 14.9924 108.2465 , Q 12 = 20.9750 0.8581 0.8581 21.5058 , Q 13 = 15.1543 0.2330 0.2330 15.2414 , Q 14 = 20.9160 0.0844 0.0844 21.0900 , Q 22 = 23.7231 0.1029 0.1029 23.7113 , Q 23 = 10.8709 0.0140 0.0140 10.8634 , Q 24 = 14.1974 0.0961 0.0961 14.1894 , Q 33 = 24.3363 0.4064 0.4064 24.5526 , Q 34 = 12.7398 0.1551 0.1551 12.8310 , Q 44 = 24.2265 0.1200 0.1200 24.2192 , R 1 = 76.1489 5.3839 5.3839 70.0132 , R 2 = 41.1025 1.0392 1.0392 39.9201 , R 3 = 64.4080 6.2536 6.2536 61.1712 , R 4 = 38.4461 0.9853 0.9853 37.8757 , e t c . ,
Table 2 shows the MAUBs obtained for different values of υ by Theorem 2.
Remark 1.
From Examples 1 and 2, it is observed that Theorem 1 yields less conservative results than Theorem 2. Although Theorem 1 involves more decision variables than Theorem 2, it provides larger maximum allowable upper bounds, so the second-order reciprocally convex combination used in Theorem 1 reduces the conservatism for system (1). Table 3 compares the results of Example 1 and Example 2.
Remark 2.
In [42], a stability criterion for linear time-delay systems was obtained using the reciprocally convex approach, and in [38] a stability criterion for delayed neural networks was derived with the same approach. In [39], the second-order reciprocally convex combination technique was applied to genetic regulatory networks, whereas here BAM NNs are considered to show the effectiveness of the technique. In this paper, the second-order reciprocally convex combination is applied to reduce the conservatism of the stability conditions for BAM NNs with mixed time-varying delays.
Remark 3.
In this study, we investigated the stability of delayed BAM NNs using Jensen’s inequality and the second-order reciprocally convex combination approach and derived a less conservative criterion. These methods can also be used to investigate the stability of a variety of systems, including linear time-delay systems, neutral-type delay systems, and other NNs [41].

5. Conclusions

This paper examined BAM NNs with both discrete and distributed delays, as well as mixed-interval time-varying delays, focusing on interval time-varying delays whose lower bounds may be either zero or positive. Based on a reciprocally convex approach, this work proposes a new way of handling linear combinations of positive functions scaled by the reciprocals of squared convex weight parameters. By introducing modified lower bound lemmas to handle the different function combinations arising from the triple integral terms in the formulation of the LMI requirements, a novel delay-dependent stability criterion is constructed. Furthermore, this second-order convex approach can be extended to Cohen–Grossberg neural networks, and a third-order convex combination approach could also be applied to the present BAM NN system to obtain further delay-dependent stability criteria. The proposed criterion enhances the robustness of the system by allowing greater flexibility in managing delays and, consequently, opens new avenues for improving performance in various applications.

Author Contributions

Conceptualization, R.K. and V.V.; methodology, R.K.; software, R.K. and V.V.; validation, K.C., R.K. and V.V.; formal analysis, K.C.; investigation, R.K.; resources, V.V.; data curation, V.V.; writing—original draft preparation, R.K.; writing—review and editing, K.C. and V.V.; visualization, K.C.; supervision, K.C.; project administration, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Kalaivani Chandran thanks the management of Sri Sivasubramaniya Nadar College of Engineering for supporting her research work. Renuga Kuppusamy is thankful to the management of Sri Sivasubramaniya Nadar College of Engineering and St. Joseph’s Institute of Technology, OMR, Chennai, Tamil Nadu 600119, for supporting her research work. Vembarasan Vaitheeswaran is thankful to the management of Shiv Nadar University Chennai for supporting his research work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, B.; Yang, H.; Yao, Q.; Yu, A.; Hong, T.; Zhang, J.; Kadoch, M.; Cheriet, M. Hopfield Neural Network-based Fault Location in Wireless and Optical Networks for Smart City IoT. In Proceedings of the 15th International Wireless Communications & Mobile Computing Conference, Tangier, Morocco, 24–28 June 2019. [Google Scholar]
  2. Balasubramaniam, P.; Vembarasan, V.; Rakkiyappan, R. Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays. Commun. Nonlinear Sci. 2011, 16, 2109–2129. [Google Scholar] [CrossRef]
  3. Nguang, S.K.; Assawinchaichote, W.; Shi, P.; Shi, Y. Robust H∞ control design for uncertain fuzzy systems with Markovian jumps: An LMI approach. In Proceedings of the American Control Conference, Portland, OR, USA, 8–10 June 2005. [Google Scholar]
  4. Zhu, Q.; Cao, J. Stability Analysis of Markovian Jump Stochastic BAM Neural Networks With Impulse Control and Mixed Time Delays. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 467–479. [Google Scholar] [PubMed]
  5. Cong, E.Y.; Han, X.; Zhang, X. Global exponential stability analysis of discrete-time BAM neural networks with delays. Neurocomputing 2020, 379, 227–235. [Google Scholar] [CrossRef]
  6. Mathiyalagan, K.; Sakthivel, R.; Marshal Anthoni, S. New robust passivity criteria for stochastic fuzzy BAM neural networks with time-varying delays. Commun. Nonlinear Sci. 2012, 17, 1392–1407. [Google Scholar] [CrossRef]
  7. Kosko, B. Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. Syst. 1988, 18, 49–60. [Google Scholar] [CrossRef]
  8. Kosko, B. Adaptive bidirectional associative memories. Appl. Opt. 1987, 26, 4947–4960. [Google Scholar] [CrossRef]
  9. Ritter, G.X.; Urcid, G.; Iancu, L. Reconstruction of Patterns from Noisy Inputs Using Morphological Associative Memories. J. Math. Imaging Vis. 2003, 19, 95–111. [Google Scholar] [CrossRef]
  10. Lan, J.; Wang, X.; Zhang, X. Global Robust Exponential Synchronization of Interval BAM Neural Networks with Multiple Time-Varying Delays. Circuits Syst. Signal Process. 2024, 43, 2147–2170. [Google Scholar] [CrossRef]
  11. Sakthivel, R.; Samidurai, R.; Marshal Anthoni, S. Global asymptotic stability of BAM neural networks with mixed delays and impulses. Appl. Math. Comput. 2009, 212, 113–119. [Google Scholar] [CrossRef]
  12. Ali, M.S.; Balasubramaniam, P. Global exponential stability of uncertain fuzzy BAM neural networks with time-varying delays. Chaos Solitons Fractals 2009, 42, 2191–2199. [Google Scholar]
  13. Zhang, X.M.; Han, Q.L.; Seuret, A.; Gouaisbaut, F.; He, Y. Overview of recent advances in stability of linear systems with time-varying delays. IET Control Theory Appl. 2019, 13, 1–16. [Google Scholar] [CrossRef]
  14. Li, Z.; Bai, Y.; Huang, C.; Mu, S. Improved Stability Analysis for Delayed Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4535–4541. [Google Scholar] [CrossRef]
  15. Zhang, C.K.; Jiang, L.; Wu, Q.H.; He, Y.; Wu, M. Delay-dependent robust load frequency control for time delay power systems. IEEE Trans. Power Syst. 2013, 28, 2192–2201. [Google Scholar] [CrossRef]
  16. Seuret, A.; Gouaisbaut, F. Wirtinger-based integral inequality: Application to time-delay systems. Automatica 2013, 49, 2860–2866. [Google Scholar] [CrossRef]
  17. Al-Wais, S.; Mohajerpoor, R.; Shanmugam, L.; Abdi, H.; Nahavandi, S. Improved delay-dependent stability criteria for telerobotic systems with time-varying delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 2470–2484. [Google Scholar] [CrossRef]
  18. Richard, J. Time-delay systems: An overview of some recent advances and open problems. Automatica 2003, 39, 1667–1694. [Google Scholar] [CrossRef]
  19. Marcus, C.M.; Westervelt, R.M. Stability of analog neural networks with delay. Phys. Rev. A 1989, 39, 347–359. [Google Scholar] [CrossRef]
  20. Zeng, H.B.; Zu, Z.J.; Wang, W.; Zhang, X.M. Relaxed stability criteria of delayed neural networks using delay-parameters-dependent slack matrices. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1486–1501. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, Z.; Wang, Z.; Liu, D. A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1229–1261. [Google Scholar] [CrossRef]
  22. Li, Y.; Gu, K.; Zhou, J.; Xu, S. Estimating stable delay intervals with a discretized Lyapunov–Krasovskii functional formulation. Automatica 2014, 50, 1691–1697. [Google Scholar] [CrossRef]
  23. Chen, J.; Park, J.H.; Xu, S. Stability analysis of continuous time systems with time-varying delay using new Lyapunov–Krasovskii functionals. J. Frankl. Inst. 2018, 355, 5957–5967. [Google Scholar] [CrossRef]
  24. Lee, D.H.; Kim, Y.J.; Lee, S.H.; Kwon, O.M. Enhancing Stability Criteria for Linear Systems with Interval Time-Varying Delays via an Augmented Lyapunov–Krasovskii Functional. Mathematics 2024, 12, 2241. [Google Scholar] [CrossRef]
  25. Boyd, S.; Ghaoui, L.E.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in Systems and Control Theory, 1st ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1994; pp. 7–35. [Google Scholar]
  26. Zhai, G.; Koyama, N.; Bruzelius, F.; Yoshida, M. Strict LMI conditions for stability and stabilization of discrete-time descriptor systems. In Proceedings of the International Symposium on Intelligent Control, Taipei, Taiwan, 2–4 September 2004. [Google Scholar]
  27. Briat, C. Convergence and equivalence results for the Jensen’s inequality—Application to time-delay and sampled-data systems. IEEE Trans. Automat. Control 2011, 56, 1660–1665. [Google Scholar] [CrossRef]
  28. Azizi, T.; Kerr, G. Application of Stability Theory in Study of Local Dynamics of Nonlinear Systems. J. Appl. Phys. 2020, 8, 1180–1192. [Google Scholar] [CrossRef]
  29. Chen, J.; Park, J.H.; Xu, S. Stability analysis for neural networks with time-varying delay via improved techniques. IEEE Trans. Cybern. 2019, 49, 4495–4500. [Google Scholar] [CrossRef]
  30. Chen, J.; Park, J.H. New versions of Bessel–Legendre inequality and their applications to systems with time-varying delay. Appl. Math. Comput. 2020, 375, 125060. [Google Scholar] [CrossRef]
  31. Lee, W.; Lee, S.Y.; Park, P. Affine Bessel–Legendre inequality: Application to stability analysis for systems with time-varying delays. Automatica 2018, 93, 535–539. [Google Scholar] [CrossRef]
  32. Chen, Y.; Zeng, H.B.; Li, Y. Stability analysis of linear delayed systems based on an allowable delay set partitioning approach. Automatica 2024, 163, 111603. [Google Scholar] [CrossRef]
  33. Tian, Y.; Wang, Z. A new multiple integral inequality and its application to stability analysis of time-delay systems. Appl. Math. Lett. 2020, 105, 106325. [Google Scholar] [CrossRef]
  34. Chen, J.; Xu, S.; Zhang, B.; Liu, G. A note on relationship between two classes of integral inequalities. IEEE Trans. Autom. Control 2017, 62, 4044–4049. [Google Scholar] [CrossRef]
  35. Zhang, C.K.; He, Y.; Jiang, Y.; Wu, M.; Zeng, H.B. Stability analysis of systems with time-varying delay via relaxed integral inequalities. Syst. Control Lett. 2016, 92, 52–61. [Google Scholar] [CrossRef]
  36. Chandra, S.R.; Padmanabhan, S.; Umesha, V.; Ali, M.S.; Rajchakit, G.; Jirawattanpanit, A. New Insights on Bidirectional Associative Memory Neural Networks with Leakage Delay Components and Time-Varying Delays Using Sampled-Data Control. Neural Process. Lett. 2024, 56, 94. [Google Scholar] [CrossRef]
  37. Zhang, C.K.; He, Y.; Jiang, L.; Wu, M. An extended reciprocally convex matrix inequality for stability analysis of systems with time-varying delay. Automatica 2017, 85, 481–485. [Google Scholar] [CrossRef]
  38. Arunagirinathan, S.; Lee, T.H. Generalized delay-dependent reciprocally convex inequality on stability for neural networks with time-varying delay. Math. Comput. Simulat. 2024, 217, 109–120. [Google Scholar] [CrossRef]
  39. Chandrasekar, A.; Radhika, T.; Zhu, Q. State Estimation for Genetic Regulatory Networks with Two Delay Components by Using Second-Order Reciprocally Convex Approach. Neural Process. Lett. 2021, 54, 327–345. [Google Scholar] [CrossRef]
  40. Lee, W.; Park, W. Second-order reciprocally convex approach to stability of systems with interval time-varying delays. Appl. Math. Comput. 2014, 229, 245–253. [Google Scholar] [CrossRef]
  41. Liu, K.; Fridman, E.; Johansson, K.H.; Xia, Y. Generalized Jensen inequalities with application to stability analysis of systems with distributed delays over infinite time-horizons. Automatica 2016, 69, 222–231. [Google Scholar] [CrossRef]
  42. Park, P.; Ko, J.W.; Jeong, C. Reciprocally convex approach to stability of systems with time-varying delays. Automatica 2011, 47, 235–238. [Google Scholar] [CrossRef]
Table 1. The MAUB for delays ς₂ = ϱ₂ = r̄ = γ̄ with different υ (υ₁ = υ₂), obtained by Theorem 1.

| ς₁ = ϱ₁ | υ = 0.5 | υ = 1.0 | υ = 1.5 | υ = 2.0 | υ = 2.5 |
|---------|----------|----------|----------|----------|----------|
| 0.0 | 145.1342 | 145.3240 | 145.4740 | 145.4943 | 145.5407 |
| 0.5 | 146.2320 | 146.4451 | 146.5670 | 146.6116 | 146.6683 |
| 1.0 | 147.1399 | 147.3313 | 147.4210 | 147.5522 | 147.5958 |
| 1.5 | 148.2490 | 148.4354 | 148.5136 | 148.5840 | 148.6148 |
| 2.0 | 149.1221 | 149.2633 | 149.4471 | 149.5325 | 149.6182 |
Table 2. The MAUB for delays ς₂ = ϱ₂ = r̄ = γ̄ with different υ (υ₁ = υ₂), obtained by Theorem 2.

| ς₁ = ϱ₁ | υ = 0.25 | υ = 0.5 | υ = 0.75 | υ = 0.9 | υ = 0.99 |
|---------|----------|----------|----------|----------|----------|
| 0.0 | 120.1468 | 120.2357 | 120.4543 | 120.5462 | 120.6128 |
| 0.5 | 121.2548 | 121.3829 | 121.4517 | 121.5015 | 121.6882 |
| 1.0 | 122.2789 | 122.3982 | 122.4028 | 122.5234 | 122.6789 |
| 1.5 | 123.2167 | 123.3454 | 123.4675 | 123.5678 | 123.6248 |
| 2.0 | 124.3925 | 124.5689 | 124.6895 | 124.7052 | 124.8962 |
Table 3. Comparison of theorems.

| Comparison of Theorems | Theorem 1 | Theorem 2 |
|------------------------|-----------|-----------|
| MAUB obtained | nearly 150 | nearly 125 |
| Admissible range of υ | υ ≥ 0 (any value) | 0 ≤ υ < 1 |
| Symmetric matrices | 50 | 54 |
| Arbitrary matrices | 10 | Nil |
| Unknown constants | 22 | 84 |
| No. of decision variables | 35n² + 25n + 228 | 27n² + 27n + 4 |
| Decision variables for n = 2 | 418 | 166 |
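As a quick arithmetic check of the last row of Table 3, the decision-variable formulas listed in the table can be evaluated directly (a reproducibility snippet only; the formulas themselves are taken from the table):

```python
# Decision-variable counts from Table 3, evaluated at n = 2.
def dv_theorem1(n: int) -> int:
    return 35 * n ** 2 + 25 * n + 228   # Theorem 1

def dv_theorem2(n: int) -> int:
    return 27 * n ** 2 + 27 * n + 4     # Theorem 2

print(dv_theorem1(2), dv_theorem2(2))   # -> 418 166
```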
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
