Article

On Asymptotic Properties of Stochastic Neutral-Type Inertial Neural Networks with Mixed Delays

School of Mathematics and Statistics, Huaiyin Normal University, Huaian 223300, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(9), 1746; https://doi.org/10.3390/sym15091746
Submission received: 24 August 2023 / Revised: 8 September 2023 / Accepted: 11 September 2023 / Published: 12 September 2023
(This article belongs to the Special Issue Symmetry in Mathematical Analysis and Functional Analysis II)

Abstract

This article studies the stability of a class of stochastic neutral-type inertial delay neural networks. By introducing appropriate variable transformations, the second-order differential system is transformed into a first-order differential system. Using homeomorphism mapping, standard stochastic analysis techniques, the Lyapunov functional method and the properties of a neutral operator, we establish new sufficient criteria for the existence, uniqueness and stochastically globally asymptotic stability of equilibrium points. An example is also provided to show the validity of the established results. Our results indicate that, under appropriate conditions, random disturbances have no significant impact on the existence, stability or symmetry of the network system.

1. Introduction

Inertial neural network systems (INNs) can be understood as damped neural networks. When the damping exceeds a certain critical value, the dynamic properties of each neuron state change. Studying inertial neural networks therefore helps us to understand the complex behavior of neural network systems. INNs are second-order systems, first introduced by Wheeler and Schieve [1]. Subsequently, INNs have received increasing attention, and their dynamic behavior has been studied extensively. In [2,3,4], the authors studied bifurcation, chaos and the stability of periodic solutions in a single INN. The dynamics of an inertial two-neuron system with delay were studied in [5]. Draye, Winters and Cheron [6] considered self-selected modular recurrent neural networks with postural and inertial subnetworks. For the global exponential stability in the Lagrange sense for INNs, see [7,8]; for the global convergence problem of impulsive inertial neural networks with variable delay and complex-valued neural networks, see [9,10]; for the anti-periodic problem of INNs, see [11]; for inertial Cohen–Grossberg neural networks, see [12,13,14]; for inertial BAM neural networks, see [15,16].
In practice, neural network systems are affected not only by damping (inertia) factors but also by random factors. Research on stochastic neural network systems is relatively mature and has produced many results. Zhang and Kong [17] considered photovoltaic power prediction based on the hybrid modeling of neural networks and stochastic differential equations. Shu et al. [18] studied the stochastic stabilization of Markov jump quaternion-valued neural networks using a sampled-data control strategy. Guo [19] investigated globally robust stability for stochastic Cohen–Grossberg neural networks with impulse control and time-varying delays. For the stability of stochastic cellular neural networks, see [20,21]; for mean square exponential stability and periodic solutions of stochastic Hopfield neural networks, see [22,23]; for the stability problem of recurrent neural networks with random delays, see [24].
Neutral-type neural networks are nonlinear systems whose neutral character comes from the presence of delayed derivatives of the state. Owing to their extensive applications in ecology, control theory, biology and physics, many results on neutral-type neural networks have been obtained. Zhang et al. [25] addressed the problem of synchronization control of neutral-type neural networks with sampled data; using an adaptive event-triggered communication scheme, they obtained some weak conditions for synchronization control. In [26], a new control scheme was studied for the exponential synchronization of coupled neutral-type neural networks with mixed delays. Si and Xie [27] investigated the global exponential stability of recurrent neural networks with piecewise constant arguments and neutral terms subject to uncertain connection weights. For more results on neutral-type neural networks, see, e.g., [28,29,30,31,32,33].
Stochastic neutral-type INNs provide more useful models in practical applications. In this paper, we study the stability problem for a class of stochastic neutral-type inertial neural networks with mixed delays, via stochastic analysis techniques and by constructing a suitable Lyapunov–Krasovskii functional. A simulation example demonstrates the usefulness of our theoretical results. The contributions of this paper are threefold:
(1)
In this paper, we introduce a new class of stochastic neutral-type INNs with D-operators, which is different from the existing models (see, e.g., [25,26,27]).
(2)
In constructing a suitable Lyapunov–Krasovskii functional, both the mixed delays and the neutral terms are taken into consideration.
(3)
Unlike previous papers, we introduce a new unified framework to deal with mixed delays, inertia terms and D-operators. We note that our main results remain valid for non-neutral systems.
The rest of this paper is organized as follows: Section 2 gives preliminaries and the model formulation. In Section 3, sufficient conditions are established for the existence and uniqueness of the equilibrium point of system (3). The stochastically globally asymptotic stability of the equilibrium point is proved in Section 4. In Section 5, a numerical example is given to show the feasibility of our results. Finally, some conclusions are drawn.

2. Preliminaries and Problem Formulation

Consider a class of neutral-type INNs with mixed delays as follows:
$$\frac{d^2[(A_ix_i)(t)]}{dt^2}=-a_i\frac{d[(A_ix_i)(t)]}{dt}-b_ix_i(t)+\sum_{j=1}^{n}c_{ij}f_j(x_j(t))+\sum_{j=1}^{n}d_{ij}f_j(x_j(t-\tau_j(t)))+\sum_{j=1}^{n}e_{ij}\int_{t-\gamma_1}^{t}f_j(x_j(s))\,ds+I_i, \tag{1}$$
where $t\ge 0$, $i=1,2,\dots,n$, and $A_i$ is a difference operator defined by
$$(A_ix_i)(t)=x_i(t)-c_i(t)x_i(t-\gamma_0). \tag{2}$$
If a random disturbance term is added to system (1), we obtain the following stochastic INNs:
$$d\big[(A_ix_i)'(t)\big]=\Big[-a_i(A_ix_i)'(t)-b_ix_i(t)+\sum_{j=1}^{n}c_{ij}f_j(x_j(t))+\sum_{j=1}^{n}d_{ij}f_j(x_j(t-\tau_j(t)))+\sum_{j=1}^{n}e_{ij}\int_{t-\gamma_1}^{t}f_j(x_j(s))\,ds+I_i\Big]dt+\sum_{j=1}^{n}h_{ij}g_j(x_j(t))\,dB_i(t), \tag{3}$$
where $x_i(t)$ denotes the neuron state, $c_i(t)\in C(\mathbb{R},\mathbb{R})$ is the neutral parameter, $a_i>0$ is the damping (inertia) coefficient, $b_i>0$ is a constant, $c_{ij}$, $d_{ij}$ and $e_{ij}$ represent the output feedback weights, $I_i$ represents the threshold or bias of the system, $\gamma_0,\gamma_1,\tau_j(t)>0$ are delays with $\tau_j'(t)<1$, the term $\sum_{j=1}^{n}h_{ij}g_j(x_j(t))\,dB_i(t)$ represents the random perturbation, and $B(t)=(B_1(t),B_2(t),\dots,B_n(t))^T$ is an $n$-dimensional Brownian motion with natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ on a complete probability space $(\Omega,\mathcal{F},P)$. The initial conditions of system (1) are given by
$$x_i(s)=\phi_i(s),\qquad x_i'(s)=\psi_i(s),\qquad s\in(-\mu,0],\quad i=1,2,\dots,n,$$
where $\mu=\max\{\gamma_0,\gamma_1,\hat{\tau}\}$ and $\hat{\tau}=\max_{1\le j\le n}\max_{t\in\mathbb{R}}\tau_j(t)$. Let
$$y_i(t)=\frac{d[(A_ix_i)(t)]}{dt}+\xi_i(A_ix_i)(t),\quad i=1,2,\dots,n,$$
where ξ i > 0 is a constant. Then, system (3) is changed into the following form:
$$\begin{cases}
d(A_ix_i)(t)=\big[-\xi_i(A_ix_i)(t)+y_i(t)\big]dt,\\[4pt]
dy_i(t)=\big[-(a_i-\xi_i)y_i(t)+(a_i-\xi_i)\xi_i(A_ix_i)(t)-b_ix_i(t)+\sum_{j=1}^{n}c_{ij}f_j(x_j(t))+\sum_{j=1}^{n}d_{ij}f_j(x_j(t-\tau_j(t)))\\[2pt]
\qquad\quad+\sum_{j=1}^{n}e_{ij}\int_{t-\gamma_1}^{t}f_j(x_j(s))\,ds+I_i\big]dt+\sum_{j=1}^{n}h_{ij}g_j(x_j(t))\,dB_i(t).
\end{cases} \tag{4}$$
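The reduction above is a routine algebraic substitution, and it can be checked symbolically. The following sketch (assuming SymPy is available; the symbols u and y stand for $(A_ix_i)(t)$ and $y_i(t)$, and F lumps together the feedback, delay and input terms) verifies that the two first-order equations of (4) reproduce the second-order drift of (3):

```python
import sympy as sp

a, xi, u, y, F = sp.symbols('a xi u y F')

# substitution y = u' + xi*u  =>  u' = -xi*u + y  (first equation of (4))
u_prime = -xi*u + y
# second equation of (4); F stands for -b*x + sum c*f + ... + I as one symbol
y_prime = -(a - xi)*y + (a - xi)*xi*u + F

# reconstruct u'' = y' - xi*u' and compare with the original drift u'' = -a*u' + F
u_double_prime = y_prime - xi*u_prime
residual = sp.expand(u_double_prime - (-a*u_prime + F))
print(residual)  # 0
```

The residual vanishing identically confirms that (4) is an exact reformulation of (3), for any choice of the constants $\xi_i>0$.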
Remark 1. 
The operator $A_i$ in (2) is called a D-operator (see [34]). The D-operator has many useful properties: on the one hand, it helps us better understand the characteristics of neutral equations; on the other hand, these properties can be used to study the dynamic behavior of the solutions of the system. Recently, the authors of [33] studied a class of neutral inertial neural networks. The differences between this article and [33] are mainly reflected in two aspects: firstly, the neutral-type system in this article involves a D-operator, while the system in [33] does not; secondly, the system in this article is stochastic, while the system in [33] is deterministic.
Lemma 1 
([35]). If $\hat{c}_i<1$, then the operator $A_i$ has a bounded inverse $A_i^{-1}$ satisfying
$$\|A_i^{-1}\|\le\frac{1}{1-\hat{c}_i},$$
where $i=1,2,\dots,n$, $\hat{c}_i=\max_{t\in\mathbb{R}}|c_i(t)|$, and $A_i$ is defined by (2).
Definition 1. 
If $X^*=(x_1^*,x_2^*,\dots,x_n^*)^T\in\mathbb{R}^n$ satisfies
$$-b_ix_i^*+\sum_{j=1}^{n}\big(c_{ij}+d_{ij}+\gamma_1e_{ij}\big)f_j(x_j^*)+I_i=0,\quad i=1,2,\dots,n,$$
then $X^*$ is called an equilibrium point of system (1). If, in addition, $g_j(x_j^*)=0$, then the noise term vanishes at $X^*$ and system (3) remains in the equilibrium state, so the equilibrium point $X^*$ of system (1) is also an equilibrium point of system (3).
Let $C^{1,2}(\mathbb{R}_+\times S_h,\mathbb{R}_+)$ denote the family of non-negative functions $V(t,X)$ on $\mathbb{R}_+\times S_h$, where $S_h=\{X:\|X\|<h\}\subseteq\mathbb{R}^n$, that are continuously differentiable once in $t$ and twice in $X$.
Definition 2 
([36]). Consider the system
$$dX(t)=f(t,X(t))\,dt+g(t,X(t))\,dB(t),\quad t\ge t_0,\qquad X(t_0)=X_0. \tag{5}$$
Define the operator
$$\mathcal{L}V(t,X)=V_t(t,X)+V_X(t,X)f(t,X)+\frac{1}{2}\operatorname{trace}\big[g^T(t,X)V_{XX}(t,X)g(t,X)\big],$$
where $V(t,X)\in C^{1,2}(\mathbb{R}_+\times S_h,\mathbb{R}_+)$, $V_t(t,X)=\frac{\partial V(t,X)}{\partial t}$,
$$V_X(t,X)=\Big(\frac{\partial V(t,X)}{\partial x_1},\frac{\partial V(t,X)}{\partial x_2},\dots,\frac{\partial V(t,X)}{\partial x_n}\Big),\qquad V_{XX}(t,X)=\Big(\frac{\partial^2V(t,X)}{\partial x_i\,\partial x_j}\Big)_{n\times n}.$$
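For a scalar illustration of Definition 2 (our own example, not taken from the paper), the operator $\mathcal{L}$ can be evaluated symbolically: take $V(t,x)=x^2$ for the linear SDE $dX=-aX\,dt+\sigma X\,dB$.

```python
import sympy as sp

x, a, sigma = sp.symbols('x a sigma')

V = x**2
f = -a * x                 # drift coefficient
g = sigma * x              # diffusion coefficient

# scalar case of Definition 2: LV = V_x * f + (1/2) * g**2 * V_xx  (V_t = 0 here)
LV = sp.diff(V, x) * f + sp.Rational(1, 2) * g**2 * sp.diff(V, x, 2)
print(sp.expand(LV))       # (sigma**2 - 2*a) * x**2
```

Here $\mathcal{L}V=(\sigma^2-2a)x^2$ is negative definite precisely when $\sigma^2<2a$, which is the one-dimensional prototype of the conditions derived in Section 4.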
Definition 3 
([37]). Let $\Omega\subseteq\mathbb{R}^n$ be an $n$-dimensional open set containing the origin. The function $V(t,X)$ has an infinitesimal upper bound if there exists a positive definite function $W(X)$ such that $|V(t,X)|\le W(X)$.
Definition 4 
([37]). The function $W(X)\in C(\mathbb{R}^n,\mathbb{R})$ is an infinite positive definite function if $W(X)$ is positive definite and $W(X)\to+\infty$ as $\|X\|\to\infty$.
Lemma 2 
([37]). If $H(u)\in C^0$ satisfies
(i) $H(u)$ is injective on $\mathbb{R}^n$;
(ii) $\|H(u)\|\to\infty$ as $\|u\|\to\infty$,
then $H(u)$ is a homeomorphism of $\mathbb{R}^n$.
Lemma 3 
([37]). If there is a radially unbounded positive definite function $V(t,X)\in C^{1,2}([t_0,+\infty)\times S_h,\mathbb{R}_+)$ with an infinitesimal upper bound such that $\mathcal{L}V(t,X)$ is negative definite, then the zero solution of system (5) is stochastically globally asymptotically stable.
Throughout the paper, the following assumptions hold:
(H1)
There exist constants $l_j\ge 0$ such that
$$|f_j(x)-f_j(y)|\le l_j|x-y|,\quad j=1,2,\dots,n,\ \forall x,y\in\mathbb{R}.$$
(H2)
There exist constants $\tilde{l}_j\ge 0$ such that
$$|g_j(x)-g_j(y)|\le\tilde{l}_j|x-y|,\quad j=1,2,\dots,n,\ \forall x,y\in\mathbb{R}.$$
(H3)
There exist constants $N_j\ge 0$ such that
$$|f_j(x)|\le\frac{1}{2}N_j,\quad j=1,2,\dots,n,\ \forall x\in\mathbb{R}.$$

3. Existence of Equilibrium Points

Theorem 1. 
Suppose that assumption (H1) holds. Then, system (1) has a unique equilibrium point, provided that
$$-b_i+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i<0,\quad i=1,2,\dots,n. \tag{6}$$
Proof. 
For $u=(u_1,u_2,\dots,u_n)^T$, consider the mapping $H(u)=(H_1(u),H_2(u),\dots,H_n(u))^T$, where
$$H_i(u)=-b_iu_i+\sum_{j=1}^{n}\big(c_{ij}+d_{ij}+\gamma_1e_{ij}\big)f_j(u_j)+I_i.$$
If $H(u)$ is a homeomorphism of $\mathbb{R}^n$, then there is a unique point $u^*$ such that $H(u^*)=0$, i.e., a unique equilibrium point. We now show that $H(u)$ is a homeomorphism. First, we show that $H(u)$ is injective on $\mathbb{R}^n$. Assume that there exist $u,v\in\mathbb{R}^n$ with $u\ne v$ such that $H(u)=H(v)$. Then,
$$-b_i(u_i-v_i)+\sum_{j=1}^{n}\big(c_{ij}+d_{ij}+\gamma_1e_{ij}\big)[f_j(u_j)-f_j(v_j)]=0,\quad i=1,2,\dots,n. \tag{7}$$
Multiplying both sides of (7) by $u_i-v_i$ yields
$$-b_i(u_i-v_i)^2+(u_i-v_i)\sum_{j=1}^{n}\big(c_{ij}+d_{ij}+\gamma_1e_{ij}\big)[f_j(u_j)-f_j(v_j)]=0. \tag{8}$$
Using $a^2+b^2\ge 2ab$, (H1) and (8), we have
$$-b_i(u_i-v_i)^2+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j\big[(u_i-v_i)^2+(u_j-v_j)^2\big]\ge 0$$
and hence
$$\sum_{i=1}^{n}\Big[-b_i+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i\Big](u_i-v_i)^2\ge 0. \tag{9}$$
From (6) and (9), we obtain $u_i=v_i$ for all $i$, which contradicts the assumption $u\ne v$. Therefore, $H(u)$ is injective on $\mathbb{R}^n$.
Furthermore, let $\tilde{H}(u)=H(u)-H(0)$. We have
$$\begin{aligned}
u^T\tilde{H}(u)&=\sum_{i=1}^{n}u_i\tilde{H}_i(u)=\sum_{i=1}^{n}\Big[-b_iu_i^2+u_i\sum_{j=1}^{n}\big(c_{ij}+d_{ij}+\gamma_1e_{ij}\big)\big(f_j(u_j)-f_j(0)\big)\Big]\\
&\le\sum_{i=1}^{n}\Big[-b_i+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j+\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i\Big]u_i^2\\
&\le-\min_{1\le i\le n}\Big\{b_i-\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j-\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i\Big\}\|u\|^2.
\end{aligned}$$
Using the inequality $u^T\tilde{H}(u)\le|u^T\tilde{H}(u)|\le\|u\|\,\|\tilde{H}(u)\|$, we obtain
$$\|\tilde{H}(u)\|\ge\min_{1\le i\le n}\Big\{b_i-\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j-\frac{1}{2}\sum_{j=1}^{n}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i\Big\}\|u\|,$$
so $\|\tilde{H}(u)\|\to\infty$ as $\|u\|\to\infty$, i.e., $\|H(u)\|\to\infty$ as $\|u\|\to\infty$. Hence, by Lemma 2, $H(u)$ is a homeomorphism of $\mathbb{R}^n$, and system (1) has a unique equilibrium point. □
Remark 2. 
The proof of Theorem 1 is similar to that in [38]. For the convenience of readers, we have provided a detailed proof.
Remark 3. 
Since system (1) contains neutral-type terms and inertial terms, some common techniques, such as topological degree methods and fixed point theorems, are not easily applied to the existence of its equilibrium point. However, under the assumptions of this article, the existence of the equilibrium point of system (1) follows readily from the homeomorphism mapping approach.
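Numerically, the equilibrium point characterized by Definition 1 can be located by the fixed-point iteration $x_i\leftarrow b_i^{-1}\big[\sum_j(c_{ij}+d_{ij}+\gamma_1e_{ij})f_j(x_j)+I_i\big]$, which converges when condition (6) makes the right-hand side a contraction. The sketch below uses small illustrative parameters of our own choosing (not those of Section 5):

```python
import numpy as np

# illustrative (hypothetical) parameters for a 2-neuron network
b = np.array([1.0, 1.0])
C = np.full((2, 2), 0.05)          # c_ij
D = np.full((2, 2), 0.05)          # d_ij
E = np.full((2, 2), 0.05)          # e_ij
gamma1 = 0.1
I = np.array([0.2, -0.3])
f = lambda x: 0.1 * np.tanh(x)     # Lipschitz constant l_j = 0.1

W = C + D + gamma1 * E             # combined weight matrix c + d + gamma1*e
x = np.zeros(2)
for _ in range(200):               # contraction mapping iteration
    x = (W @ f(x) + I) / b

H = -b * x + W @ f(x) + I          # residual of the equilibrium equation
print(np.max(np.abs(H)))           # ~0: the unique equilibrium has been found
```

With these numbers the contraction factor is roughly $0.21\times 0.1=0.021$, so condition (6) holds with room to spare and the iteration converges to machine precision almost immediately.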

4. Stochastically Globally Asymptotic Stability

Theorem 2. 
Assume that all conditions of Theorem 1 hold and that assumptions (H2) and (H3) hold. Then, the equilibrium point of system (3) is stochastically globally asymptotically stable, provided that
$$-2\xi_i+1+|(a_i-\xi_i)\xi_i|+\frac{b_i}{(1-\hat{c}_i)^2}+\sum_{j=1}^{n}\frac{|d_{ij}|\hat{\omega}_jl_j^2}{(1-\hat{c}_i)^2}+\sum_{j=1}^{n}\frac{|e_{ij}|\gamma_1l_j^2}{(1-\hat{c}_i)^2}+\sum_{k=1}^{n}\sum_{j=1}^{n}\frac{h_{kj}^2\tilde{l}_i^2}{(1-\hat{c}_i)^2}<0 \tag{10}$$
and
$$-2(a_i-\xi_i)+1+|(a_i-\xi_i)\xi_i|+b_i+\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+|e_{ij}|\big)<0, \tag{11}$$
where $\hat{c}_i=\max_{t\in\mathbb{R}}|c_i(t)|$, $\hat{\omega}_j=\max_{t\in\mathbb{R}}\omega_j(t)$, $\omega_j(t)=\frac{1}{1-\tau_j'(\mu_j(t))}$, and $\mu_j(t)$ is the inverse function of $t-\tau_j(t)$.
Proof. 
By Theorem 1, system (1) has a unique equilibrium point. When $g_j(x_j^*)=0$, systems (1) and (3) have the same equilibrium point. Let $X^*=(x_1^*,x_2^*,\dots,x_n^*)^T\in\mathbb{R}^n$ be the unique equilibrium point of system (3). Let $(A_i\tilde{x}_i)(t)=(A_ix_i)(t)-A_ix_i^*$ and $\tilde{y}_i(t)=y_i(t)-y_i^*$, where $y_i^*=\xi_iA_ix_i^*$. By (4), we have
$$\begin{cases}
d(A_i\tilde{x}_i)(t)=\big[-\xi_i(A_i\tilde{x}_i)(t)+\tilde{y}_i(t)\big]dt,\\[4pt]
d\tilde{y}_i(t)=\big[-(a_i-\xi_i)\tilde{y}_i(t)+(a_i-\xi_i)\xi_i(A_i\tilde{x}_i)(t)-b_i\tilde{x}_i(t)+\sum_{j=1}^{n}c_{ij}\tilde{f}_j(\tilde{x}_j(t))+\sum_{j=1}^{n}d_{ij}\tilde{f}_j(\tilde{x}_j(t-\tau_j(t)))\\[2pt]
\qquad\quad+\sum_{j=1}^{n}e_{ij}\int_{t-\gamma_1}^{t}\tilde{f}_j(\tilde{x}_j(s))\,ds\big]dt+\sum_{j=1}^{n}h_{ij}\tilde{g}_j(\tilde{x}_j(t))\,dB_i(t),
\end{cases} \tag{12}$$
where
$$\tilde{f}_j(\tilde{x}_j(t))=f_j(\tilde{x}_j(t)+x_j^*)-f_j(x_j^*),\qquad \tilde{g}_j(\tilde{x}_j(t))=g_j(\tilde{x}_j(t)+x_j^*)-g_j(x_j^*).$$
Let
$$\tilde{Z}=(A\tilde{X},\tilde{Y})^T=(A_1\tilde{x}_1,A_2\tilde{x}_2,\dots,A_n\tilde{x}_n,\tilde{y}_1,\tilde{y}_2,\dots,\tilde{y}_n)^T$$
and
$$V(t,\tilde{Z})=\sum_{i=1}^{n}\big[(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\int_{t-\tau_j(t)}^{t}\omega_j(s)\tilde{f}_j^2(\tilde{x}_j(s))\,ds+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\int_{0}^{\gamma_1}\!\!\int_{t-s}^{t}\tilde{f}_j^2(\tilde{x}_j(\theta))\,d\theta\,ds. \tag{13}$$
By (13), we obtain
$$V_t(t,\tilde{Z})=\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\big[\omega_j(t)\tilde{f}_j^2(\tilde{x}_j(t))-\tilde{f}_j^2(\tilde{x}_j(t-\tau_j(t)))\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\Big[\gamma_1\tilde{f}_j^2(\tilde{x}_j(t))-\int_{0}^{\gamma_1}\tilde{f}_j^2(\tilde{x}_j(t-s))\,ds\Big], \tag{14}$$
$$V_{\tilde{Z}}(t,\tilde{Z})=2(A_1\tilde{x}_1,A_2\tilde{x}_2,\dots,A_n\tilde{x}_n,\tilde{y}_1,\tilde{y}_2,\dots,\tilde{y}_n),\qquad V_{\tilde{Z}\tilde{Z}}(t,\tilde{Z})=2E_{2n\times 2n}, \tag{15}$$
where $E_{2n\times 2n}$ is the $2n\times 2n$ identity matrix. From Definition 2, (12), (14) and (15), we obtain
$$\begin{aligned}
\mathcal{L}V(t,\tilde{Z})&=\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\big[\omega_j(t)\tilde{f}_j^2(\tilde{x}_j(t))-\tilde{f}_j^2(\tilde{x}_j(t-\tau_j(t)))\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\Big[\gamma_1\tilde{f}_j^2(\tilde{x}_j(t))-\int_{0}^{\gamma_1}\tilde{f}_j^2(\tilde{x}_j(t-s))\,ds\Big]\\
&\quad+2\sum_{i=1}^{n}(A_i\tilde{x}_i)(t)\big[-\xi_i(A_i\tilde{x}_i)(t)+\tilde{y}_i(t)\big]\\
&\quad+2\sum_{i=1}^{n}\tilde{y}_i(t)\Big[-(a_i-\xi_i)\tilde{y}_i(t)+(a_i-\xi_i)\xi_i(A_i\tilde{x}_i)(t)-b_i\tilde{x}_i(t)+\sum_{j=1}^{n}c_{ij}\tilde{f}_j(\tilde{x}_j(t))\\
&\qquad+\sum_{j=1}^{n}d_{ij}\tilde{f}_j(\tilde{x}_j(t-\tau_j(t)))+\sum_{j=1}^{n}e_{ij}\int_{t-\gamma_1}^{t}\tilde{f}_j(\tilde{x}_j(s))\,ds\Big]+\sum_{i=1}^{n}\Big(\sum_{j=1}^{n}h_{ij}\tilde{g}_j(\tilde{x}_j(t))\Big)^2. \tag{16}
\end{aligned}$$
Using the inequalities $a^2+b^2\ge 2ab$ and $\big(\sum_{i=1}^{n}a_ib_i\big)^2\le\sum_{i=1}^{n}a_i^2\sum_{i=1}^{n}b_i^2$, together with (16), (H1), (H2) and Lemma 1, we obtain
$$\begin{aligned}
\mathcal{L}V(t,\tilde{Z})&\le\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\big[\omega_j(t)\tilde{f}_j^2(\tilde{x}_j(t))-\tilde{f}_j^2(\tilde{x}_j(t-\tau_j(t)))\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\Big[\gamma_1\tilde{f}_j^2(\tilde{x}_j(t))-\int_{0}^{\gamma_1}\tilde{f}_j^2(\tilde{x}_j(t-s))\,ds\Big]\\
&\quad+2\sum_{i=1}^{n}\Big[-\xi_i(A_i\tilde{x}_i)^2(t)+\frac{(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)}{2}\Big]\\
&\quad+2\sum_{i=1}^{n}\Big[-(a_i-\xi_i)\tilde{y}_i^2(t)+|(a_i-\xi_i)\xi_i|\frac{(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)}{2}+b_i\frac{\tilde{y}_i^2(t)+(1-\hat{c}_i)^{-2}(A_i\tilde{x}_i)^2(t)}{2}\\
&\qquad+\sum_{j=1}^{n}|c_{ij}|\frac{\tilde{y}_i^2(t)+\tilde{f}_j^2(\tilde{x}_j(t))}{2}+\sum_{j=1}^{n}|d_{ij}|\frac{\tilde{y}_i^2(t)+\tilde{f}_j^2(\tilde{x}_j(t-\tau_j(t)))}{2}+\sum_{j=1}^{n}|e_{ij}|\frac{\tilde{y}_i^2(t)+\gamma_1\int_{t-\gamma_1}^{t}\tilde{f}_j^2(\tilde{x}_j(s))\,ds}{2}\Big]\\
&\quad+\sum_{i=1}^{n}\sum_{k=1}^{n}\sum_{j=1}^{n}h_{kj}^2\tilde{l}_i^2\tilde{x}_i^2(t)\\
&\le\sum_{i=1}^{n}\Big[-2\xi_i+1+|(a_i-\xi_i)\xi_i|+\frac{b_i}{(1-\hat{c}_i)^2}+\sum_{j=1}^{n}\frac{|d_{ij}|\omega_j(t)l_j^2}{(1-\hat{c}_i)^2}+\sum_{j=1}^{n}\frac{|e_{ij}|\gamma_1l_j^2}{(1-\hat{c}_i)^2}+\sum_{k=1}^{n}\sum_{j=1}^{n}\frac{h_{kj}^2\tilde{l}_i^2}{(1-\hat{c}_i)^2}\Big](A_i\tilde{x}_i)^2(t)\\
&\quad+\sum_{i=1}^{n}\Big[-2(a_i-\xi_i)+1+|(a_i-\xi_i)\xi_i|+b_i+\sum_{j=1}^{n}\big(|c_{ij}|+|d_{ij}|+|e_{ij}|\big)\Big]\tilde{y}_i^2(t). \tag{17}
\end{aligned}$$
In view of (10), (11) and (17), we have $\mathcal{L}V(t,\tilde{Z})<0$, i.e., $\mathcal{L}V(t,\tilde{Z})$ is negative definite. Evidently, $V(t,\tilde{Z})$ is positive definite. Now, we show that $V(t,\tilde{Z})$ has an infinitesimal upper bound. By assumption (H3), we obtain $|\tilde{f}_j(x)|\le N_j$ for all $x\in\mathbb{R}$, $j=1,2,\dots,n$. Thus,
$$\begin{aligned}
V(t,\tilde{Z})&=\sum_{i=1}^{n}\big[(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\int_{t-\tau_j(t)}^{t}\omega_j(s)\tilde{f}_j^2(\tilde{x}_j(s))\,ds+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\int_{0}^{\gamma_1}\!\!\int_{t-s}^{t}\tilde{f}_j^2(\tilde{x}_j(\theta))\,d\theta\,ds\\
&\le\sum_{i=1}^{n}\big[(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)\big]+\sum_{i=1}^{n}\sum_{j=1}^{n}|d_{ij}|\hat{\tau}_j\hat{\omega}_jN_j^2+\sum_{i=1}^{n}\sum_{j=1}^{n}|e_{ij}|\frac{\gamma_1^2}{2}N_j^2.
\end{aligned}$$
In view of Definition 3, $V(t,\tilde{Z})$ has an infinitesimal upper bound. Furthermore, by (13), we obtain
$$V(t,\tilde{Z})\ge\sum_{i=1}^{n}\big[(A_i\tilde{x}_i)^2(t)+\tilde{y}_i^2(t)\big].$$
Evidently, $V(t,\tilde{Z})\to+\infty$ as $\|\tilde{Z}\|\to\infty$. Hence, by Definition 4, $V(t,\tilde{Z})$ is an infinite positive definite function of $\tilde{Z}$. In view of Lemma 3, the equilibrium point of system (4) is stochastically globally asymptotically stable, and therefore the equilibrium point of system (3) is also stochastically globally asymptotically stable. □
Remark 4. 
Constructing an appropriate Lyapunov functional is the key to proving Theorem 2. We construct a novel Lyapunov functional that fully accounts for the neutral operators, which differs from the corresponding constructions in [9,10,11,12,13].
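The quantity $\hat{\omega}_j$ in Theorem 2 can be computed without constructing the inverse function $\mu_j$: since $\mu_j$ maps $\mathbb{R}$ onto $\mathbb{R}$, $\sup_t\omega_j(t)=\sup_s 1/(1-\tau_j'(s))$ whenever $\tau_j'(s)<1$. A small numerical sketch with a sample delay of our own choosing, $\tau_j(t)=0.1+0.05\sin t$:

```python
import numpy as np

# sample (hypothetical) delay: tau(t) = 0.1 + 0.05*sin(t), so tau'(t) = 0.05*cos(t) < 1
tau_prime = lambda t: 0.05 * np.cos(t)

t = np.linspace(0.0, 2.0 * np.pi, 100001)    # one period suffices for a periodic delay
omega_hat = np.max(1.0 / (1.0 - tau_prime(t)))
print(omega_hat)  # 1/(1 - 0.05) ~ 1.0526
```

The maximum is attained where $\tau_j'$ is largest, giving $\hat{\omega}_j=1/(1-0.05)\approx 1.0526$ for this delay; this is the value that would be inserted into condition (10).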

5. Examples

Consider the following example:
$$\begin{cases}
d\big[(A_1x_1)'(t)\big]=\big[-a_1(A_1x_1)'(t)-b_1x_1(t)+\sum_{j=1}^{2}c_{1j}f_j(x_j(t))+\sum_{j=1}^{2}d_{1j}f_j(x_j(t-\tau_j(t)))\\[2pt]
\qquad\quad+\sum_{j=1}^{2}e_{1j}\int_{t-\gamma_1}^{t}f_j(x_j(s))\,ds+I_1\big]dt+\sum_{j=1}^{2}h_{1j}g_j(x_j(t))\,dB_1(t),\\[4pt]
d\big[(A_2x_2)'(t)\big]=\big[-a_2(A_2x_2)'(t)-b_2x_2(t)+\sum_{j=1}^{2}c_{2j}f_j(x_j(t))+\sum_{j=1}^{2}d_{2j}f_j(x_j(t-\tau_j(t)))\\[2pt]
\qquad\quad+\sum_{j=1}^{2}e_{2j}\int_{t-\gamma_1}^{t}f_j(x_j(s))\,ds+I_2\big]dt+\sum_{j=1}^{2}h_{2j}g_j(x_j(t))\,dB_2(t),
\end{cases} \tag{18}$$
where, for $i,j=1,2$,
$$c_1(t)=0.1\sin\frac{t}{10},\quad c_2(t)=0.1\cos\frac{t}{10},\quad a_i=1.425,\quad b_1=b_2=0.3,$$
$$c_{ij}=d_{ij}=0.15,\quad e_{ij}=h_{ij}=0.25,\quad \tau_j(t)=\frac{1}{10}\cos 10t,\quad f_j(u)=\frac{0.01\sin^2u}{u^2+1},$$
$$g_j(u)=\frac{2.5}{2\pi}\times 10^{-2}\sin\frac{\pi u}{2.5},\quad \gamma_0=\gamma_1=0.02,\quad I_1=0.25,\quad I_2=0.35.$$
Evidently, we obtain
$$l_j=N_j=0.01,\qquad \tilde{l}_j=0.005,\qquad \hat{c}_i=0.1,\qquad \hat{\omega}_i=1.$$
Choosing $\xi_i=0.745$, we obtain
$$-b_i+\frac{1}{2}\sum_{j=1}^{2}\big(|c_{ij}|+|d_{ij}|+\gamma_1|e_{ij}|\big)l_j+\frac{1}{2}\sum_{j=1}^{2}\big(|c_{ji}|+|d_{ji}|+\gamma_1|e_{ji}|\big)l_i\approx-0.2952<0,$$
$$-2\xi_i+1+|(a_i-\xi_i)\xi_i|+\frac{b_i}{(1-\hat{c}_i)^2}+\sum_{j=1}^{2}\frac{|d_{ij}|\hat{\omega}_jl_j^2}{(1-\hat{c}_i)^2}+\sum_{j=1}^{2}\frac{|e_{ij}|\gamma_1l_j^2}{(1-\hat{c}_i)^2}+\sum_{k=1}^{2}\sum_{j=1}^{2}\frac{h_{kj}^2\tilde{l}_i^2}{(1-\hat{c}_i)^2}\approx-0.1032<0$$
and
$$-2(a_i-\xi_i)+1+|(a_i-\xi_i)\xi_i|+b_i+\sum_{j=1}^{2}\big(|c_{ij}|+|d_{ij}|+|e_{ij}|\big)\approx-0.0175<0.$$
Then, all conditions of Theorems 1 and 2 hold, so system (18) has a unique equilibrium point, which is stochastically globally asymptotically stable. By a simple calculation, the equilibrium point of system (18) is $(2.5,2.5)$. Take any three sets of initial values:
$$(x_i(0),x_i'(0))=(3.45,2.91);\ (2.46,1.25);\ (1.67,3.53),\qquad i=1,2.$$
Figure 1 and Figure 2 show that system (18) has a stochastically globally asymptotically stable equilibrium point $(2.5,2.5)$.
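The qualitative behavior behind Figures 1 and 2 can be approximated with a simple Euler–Maruyama scheme. The sketch below simulates a simplified variant of (18) (neutral term and distributed delay dropped, and a constant discrete delay $\tau=0.1$ standing in for $\tau_j(t)$), so it illustrates the integration scheme rather than reproducing the exact trajectories of the figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# parameters taken from system (18); neutral term and distributed delay omitted
a, b = 1.425, 0.3
C = np.full((2, 2), 0.15)                  # c_ij
D = np.full((2, 2), 0.15)                  # d_ij
Hm = np.full((2, 2), 0.25)                 # h_ij
I = np.array([0.25, 0.35])
f = lambda x: 0.01 * np.sin(x) ** 2 / (x ** 2 + 1)
g = lambda x: 2.5 / (2 * np.pi) * 1e-2 * np.sin(np.pi * x / 2.5)

dt, tau, T = 1e-3, 0.1, 50.0               # tau: constant stand-in for tau_j(t)
lag, n = int(tau / dt), int(T / dt)
x = np.zeros((n + 1, 2))
x[0] = [3.45, 2.91]                        # one of the initial values above
v = np.zeros(2)                            # v approximates x'(t)

for k in range(n):
    x_del = x[max(k - lag, 0)]             # constant history on [-tau, 0]
    drift = -a * v - b * x[k] + C @ f(x[k]) + D @ f(x_del) + I
    dW = rng.normal(0.0, np.sqrt(dt), size=2)
    v = v + drift * dt + (Hm @ g(x[k])) * dW   # Euler-Maruyama step for the velocity
    x[k + 1] = x[k] + v * dt               # semi-implicit update for the state

print(x[-1])  # trajectories settle and remain bounded
```

Because the noise intensity $h_{ij}g_j$ is tiny, the simulated paths are dominated by the damped deterministic dynamics; with the neutral and distributed-delay terms restored, one would expect convergence to the equilibrium $(2.5,2.5)$ reported above.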

6. Conclusions

In this article, we studied the stochastically globally asymptotic stability of a class of stochastic neutral-type inertial delay neural networks and provided sufficient conditions for the existence, uniqueness and stability of their equilibrium points. This has a certain significance for practical applications and theoretical exploration. The methods of this article are based on homeomorphism mapping, standard stochastic analysis techniques and the Lyapunov functional method. Using the ideas and methods of this article, the stability problems of other types of stochastic inertial neural networks can be studied further; examples include stochastic inertial neural networks with impulses and stochastic inertial neural networks with Markovian switching.

Author Contributions

Writing—review and editing, B.D. and B.W.; methodology, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and the anonymous referees for their helpful comments and valuable suggestions regarding this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wheeler, D.; Schieve, W. Stability and chaos in an inertial two-neuron system. Phys. D Nonlinear Phenom. 1997, 105, 267–284. [Google Scholar] [CrossRef]
  2. Liu, Q.; Liao, X.; Yang, D.; Guo, S. The research for Hopf bifurcation in a single inertial neuron model with external forcing. IEEE Int. Conf. Granul. Comput. 2007, 85, 528–833. [Google Scholar]
  3. Liu, Q.; Liao, X.; Guo, S.; Wu, Y. Stability of bifurcating periodic solutions for a single delayed inertial neuron model under periodic excitation. Nonlinear Anal. Real World Appl. 2009, 10, 2384–2395. [Google Scholar] [CrossRef]
  4. Li, C.; Chen, G.; Liao, X.; Yu, J. Hopf bifurcation and chaos in a single inertial neuron model with time delay. Eur. Phys. J. B 2004, 41, 337–343. [Google Scholar] [CrossRef]
  5. Liu, Q.; Liao, X.; Liu, Y.; Zhou, S.; Guo, S. Dynamics of an inertial two-neuron system with time delay. Nonlinear Dyn. 2009, 58, 574–609. [Google Scholar] [CrossRef]
  6. Arik, S. Global robust stability analysis of neural networks with discrete time delays. Chaos Solitons Fractals 2005, 26, 1407–1414. [Google Scholar] [CrossRef]
  7. Tu, Z.; Cao, J.; Hayat, T. Global exponential stability in Lagrange sense for inertial neural networks with time-varying delays. Neurocomputing 2016, 171, 524–531. [Google Scholar] [CrossRef]
  8. Wang, J.; Tian, L. Global Lagrange stability for inertial neural networks with mixed time-varying delays. Neurocomputing 2017, 235, 140–146. [Google Scholar] [CrossRef]
  9. Wan, P.; Jian, J. Global convergence analysis of impulsive inertial neural networks with time-varying delays. Neurocomputing 2017, 245, 68–76. [Google Scholar] [CrossRef]
  10. Tang, Q.; Jian, J. Global exponential convergence for impulsive inertial complex-valued neural networks with time-varying delays. Math. Comput. Simul. 2019, 159, 39–56. [Google Scholar] [CrossRef]
  11. Ke, Y.; Miao, C. Anti-periodic solutions of inertial neural networks with time delays. Neural Process. Lett. 2017, 45, 523–538. [Google Scholar] [CrossRef]
  12. Ke, Y.; Miao, C. Stability analysis of inertial Cohen-Grossberg-type neural networks with time delays. Neurocomputing 2013, 117, 196–205. [Google Scholar] [CrossRef]
  13. Ke, Y.; Miao, C. Exponential stability of periodic solutions for inertial Cohen-Grossberg-type neural networks. Neural Netw. World 2014, 4, 377–394. [Google Scholar] [CrossRef]
  14. Huang, Q.; Cao, J. Stability analysis of inertial Cohen-Grossberg neural networks with Markovian jumping parameters. Neurocomputing 2018, 282, 89–97. [Google Scholar] [CrossRef]
  15. Ke, Y.; Miao, C. Stability analysis of BAM neural networks with inertial term and time delay. WSEAS Trans. Syst. 2011, 10, 425–438. [Google Scholar]
  16. Ke, Y.; Miao, C. Stability and existence of periodic solutions in inertial BAM neural networks with time delay. Neural Comput. Appl. 2013, 23, 1089–1099. [Google Scholar]
  17. Zhang, Y.; Kong, L. Photovoltaic power prediction based on hybrid modeling of neural network and stochastic differential equation. ISA Trans. 2022, 128, 181–206. [Google Scholar] [CrossRef]
  18. Shu, J.; Wu, B.; Xiong, L.; Zhang, H. Stochastic stabilization of Markov jump quaternion-valued neural network using sampled-data control. Appl. Math. Comput. 2021, 400, 126041. [Google Scholar] [CrossRef]
  19. Guo, Y. Globally robust stability analysis for stochastic Cohen-Grossberg neural networks with impulse control and time-varying delays. Ukr. Math. J. 2018, 69, 1220–1233. [Google Scholar] [CrossRef]
  20. Hu, J.; Zhong, S.; Liang, L. Exponential stability analysis of stochastic delayed cellular neutral networks. Chaos Solitons Fractals 2006, 27, 1006–1010. [Google Scholar] [CrossRef]
  21. Xu, J.; Chen, L.; Li, P. On p-th moment exponential stability for stochastic cellular neural networks with distributed delays. Int. J. Control. Autom. Syst. 2018, 16, 1217–1225. [Google Scholar] [CrossRef]
  22. Liu, L.; Deng, F.; Zhu, Q. Mean square stability of two classes of theta methods for numerical computation and simulation of delayed stochastic Hopfield neural networks. J. Comput. Appl. Math. 2018, 343, 428–447. [Google Scholar] [CrossRef]
  23. Yang, L.; Fei, Y.; Wu, W. Periodic solution for ∇-stochastic high-order Hopfield neural networks with time delays on time scales. Neural Process. Lett. 2019, 49, 1681–1696. [Google Scholar] [CrossRef]
  24. Chen, G.; Li, D.; Shi, L.; van Gaans, O.; Lunel, S.V. Stability results for stochastic delayed recurrent neural networks with discrete and distributed delays. J. Differ. Equ. 2018, 264, 3864–3898. [Google Scholar] [CrossRef]
  25. Zhang, H.; Ma, Q.; Lu, J.; Chu, Y.; Li, Y. Synchronization control of neutral-type neural networks with sampled-data via adaptive event-triggered communication scheme. J. Frankl. Inst. 2021, 358, 1999–2014. [Google Scholar] [CrossRef]
  26. Yang, X.; Cheng, Z.; Li, X.; Ma, T. Exponential synchronization of coupled neutral-type neural networks with mixed delays via quantized output control. J. Frankl. Inst. 2019, 356, 8138–8153. [Google Scholar] [CrossRef]
  27. Si, W.; Xie, T. Further Results on Exponentially Robust Stability of Uncertain Connection Weights of Neutral-Type Recurrent Neural Networks. Complexity 2021, 2021, 6941701. [Google Scholar] [CrossRef]
  28. Du, B. Anti-periodic solutions problem for inertial competitive neutral-type neural networks via Wirtinger inequality. J. Inequalities Appl. 2019, 2019, 187. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Zhou, D. Existence and global exponential stability of a periodic solution for a discrete-time interval general BAM neural networks. J. Frankl. Inst. 2010, 347, 763–780. [Google Scholar] [CrossRef]
  30. Park, J.; Kwon, O.; Lee, S. LMI optimization approach on stability for delayed neural networks of neutral-type. Appl. Math. Comput. 2008, 196, 236–244. [Google Scholar] [CrossRef]
  31. Yu, K.; Lien, C. Stability criteria for uncertain neutral systems with interval time-varying delays. Chaos Solitons Fractals 2008, 38, 650–657. [Google Scholar] [CrossRef]
  32. Zhang, J.; Chang, A.; Yang, G. Periodicity on Neutral-Type Inertial Neural Networks Incorporating Multiple Delays. Symmetry 2021, 13, 2231. [Google Scholar] [CrossRef]
  33. Wang, C.; Song, Y.; Zhang, F.; Zhao, Y. Exponential Stability of a Class of Neutral Inertial Neural Networks with Multi-Proportional Delays and Leakage Delays. Mathematics 2023, 11, 2596. [Google Scholar] [CrossRef]
  34. Hale, J. The Theory of Functional Differential Equations; Springer: New York, NY, USA, 1977. [Google Scholar]
  35. Wang, K.; Zhu, Y. Stability of almost periodic solution for a generalized neutral-type neural networks with delays. Neurocomputing 2010, 73, 3300–3307. [Google Scholar] [CrossRef]
  36. Mao, X. Stochastic Differential Equations and Applications; Horwood: Chichester, UK, 1997. [Google Scholar]
  37. Forti, M.; Tesi, A. New conditions for global stability of neural networks with application to linear and quadratic programming problems. IEEE Trans. Circuits Syst. 1995, 42, 354–366. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Liu, W.; Jiang, W. Stability of stochastic and inertial neural networks with time delays. Appl. Math. J. Chin. Univ. 2020, 35, 83–98. [Google Scholar]
Figure 1. Instantaneous response of state variable $x_1(t)$ in system (18) with different initial conditions.
Figure 2. Instantaneous response of state variable $x_2(t)$ in system (18) with different initial conditions.

