Article

Stability of Stochastic Delayed Recurrent Neural Networks

1 School of Mathematics and Physics, Yibin University, Yibin 644000, China
2 Department of Mathematics, China Three Gorges University, Yichang 443002, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(14), 2310; https://doi.org/10.3390/math13142310
Submission received: 22 May 2025 / Revised: 8 July 2025 / Accepted: 17 July 2025 / Published: 19 July 2025

Abstract

This paper addresses the stability of stochastic delayed recurrent neural networks (SDRNNs), identifying challenges in existing scalar methods, which suffer from strong assumptions and limited applicability. Three key innovations are introduced: (1) weakening noise perturbation conditions by extending diagonal matrix assumptions to non-negative definite matrices; (2) establishing criteria for both mean-square exponential stability and almost sure exponential stability in the absence of input; (3) directly handling complex structures like time-varying delays through matrix analysis. Compared with prior studies, this approach yields broader stability conclusions under weaker conditions, with numerical simulations validating the theoretical effectiveness.

1. Introduction

The concept of stochastic delayed recurrent neural networks (SDRNNs) emerged as a natural extension of traditional recurrent neural networks (RNNs), incorporating stochastic processes and time delays to model more complex and realistic dynamical systems [1,2,3,4,5]. The utility of SDRNNs extends across a broad spectrum of disciplines. For instance, SDRNNs have achieved outstanding results in pattern recognition domains, including speech recognition, handwriting analysis and image classification. Moreover, their inherent stochastic characteristics enable them to adapt and generalize effectively under noisy and uncertain conditions. In addition, SDRNNs are well suited to handling the intricacies of time series data with complex temporal relationships. As a result, they have become indispensable for critical applications such as stock market trend prediction, weather forecasting and advanced financial market analysis [6,7,8].
The stability of a dynamical system refers to the network’s ability to reach a steady state or to converge to a bounded trajectory over time, despite disturbances or varying initial conditions. In the context of SDRNNs, which include both stochastic processes and time delays, stability is particularly important. For instance, stable SDRNNs exhibit robustness to noise, enabling them to handle the inherent randomness of real-world data and maintain consistent performance.
Conventional stability involves the network’s states progressively nearing an equilibrium point as time approaches infinity, encompassing types such as asymptotic stability [9,10,11,12], exponential stability [13,14,15,16,17,18,19,20,21,22,23] and the probabilistic variant of almost sure exponential stability [24,25]. On the other hand, there also exist alternative stability definitions, such as input-to-state stability. Input-to-state stability focuses on the network’s ability to respond to inputs while preserving boundedness in the face of external disturbances, as detailed in [26,27,28,29,30,31,32,33,34,35].
However, a critical analysis of the literature reveals significant limitations in conventional stability studies. Most existing works rely on scalar approaches, which impose restrictive assumptions (e.g., positive coefficients and diagonal noise structures) that fail to capture the complexity of coupled dynamics in SDRNNs. For instance, the scalar-based mean-square exponential input-to-state stability analysis in [36] assumes Lipschitz noise perturbations with positive diagonal matrices, severely limiting its applicability to systems with non-diagonal or time-varying noise. Additionally, few studies address the coexistence of multiple stabilities (e.g., mean-square and almost sure exponential stability) in input-free systems, leaving a crucial gap in understanding robust dynamics in complex stochastic environments.
To bridge the gaps in the existing literature (e.g., scalar approaches in [36] requiring diagonal noise matrices and positive coefficients), this study introduces a matrix-theory framework. The main contributions, distinguished from prior works, are as follows:
  • Assumption relaxation: The authors of [36] assume Lipschitz noise with positive diagonal matrices, while we generalize to non-negative definite matrices (Assumptions 1 and 2), expanding the stability analysis to broader noise structures;
  • Coexistence of multiple stabilities: Unlike single-stability analyses, we concurrently prove mean-square and almost sure exponential stability for input-free systems (Theorems 6–8); as far as we are aware, this has not been reported in other studies;
  • Complex dynamics handling: Scalar methods struggle with time-varying delays, but our matrix approach directly addresses such structures without requiring an equivalent scalar representation.
The structure of this paper is outlined as follows: Section 2 begins with the introduction of essential definitions and preliminaries concerning stochastic delayed recurrent neural networks, placing particular emphasis on the significance of stability analysis. It also addresses the constraints of scalar methods and underscores the requirement for a matrix-based strategy. In Section 3, the primary theorems of the paper are presented. Section 4 offers two numerical examples, thereby illustrating the effectiveness of the obtained results. Section 5 provides a detailed exposition of the proof procedures for the aforementioned theorems. Ultimately, Section 6 serves as the conclusion of the paper, where we summarize our contributions.

2. Model Description and Problem Formulation

Zhu et al. considered the mean-square exponential input-to-state stability of SDRNNs in [36]:
$$
\begin{cases}
dX_i(t) = \Big[ -d_i X_i(t) + \sum_{m=1}^{n} a_{im} f_m\big(X_m(t)\big) + \sum_{m=1}^{n} b_{im} g_m\big(X_m(t-\bar{\tau})\big) + \sum_{m=1}^{n} c_{im} h_m\big(X_m(t-\tau(t))\big) + u_i(t) \Big]\, dt \\
\qquad\qquad + \sum_{m=1}^{n} \sigma_{im}\big(X_m(t),\, X_m(t-\bar{\tau}),\, X_m(t-\tau(t))\big)\, dW_m(t), \quad t \ge 0, \\
X_i(t) = \Psi_i(t), \quad -\bar{\tau} \le t \le 0,
\end{cases}
\tag{1}
$$
for $i = 1, 2, \ldots, n$.
Firstly, an explanation of the functions appearing in the Itô-type Equation (1) is provided. $X_i(t)$ denotes the state variable of the $i$th neuron at time $t$. Moreover, $f_m(\cdot)$, $g_m(\cdot)$ and $h_m(\cdot)$ represent the activation functions of the $m$th neuron. Additionally, the function $u_i(\cdot)$ signifies the regulatory input for the $i$th neuron, and $u = (u_1(t), u_2(t), \ldots, u_n(t))^T$ belongs to $L^{\infty}([0,\infty); \mathbb{R}^n)$, where $L^{\infty}$ denotes the space of essentially bounded measurable functions satisfying $\|u\|_{\infty} = \operatorname*{ess\,sup}_{t \ge 0} |u(t)| < \infty$.
Furthermore, each stochastic term $\sigma_{im}$, a function from $\mathbb{R}^3$ to $\mathbb{R}$, is Borel-measurable. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ be a complete probability space whose filtration $\{\mathcal{F}_t\}_{t \ge 0}$ is non-decreasing, contains all $\mathbb{P}$-null sets, and is naturally generated by the underlying Wiener processes $W_m = (W_m(t))_{t \ge 0}$; see [37]. Note that both constant and time-varying delays are considered simultaneously. The constant delay $\bar{\tau}$ typically corresponds to the fixed physical transmission delay in the system, while the time-varying delay $\tau(t)$ describes delay factors that fluctuate over time. We assume that $\tau(t)$ is differentiable and fulfills $0 \le \tau(t) \le \bar{\tau}$ and $\dot{\tau}(t) \le \rho < 1$, with $\bar{\tau}$ and $\rho$ being positive constants.
Next, we provide an explanation for the parameters within Equation (1). The coefficient $d_i$ signifies the intensity of the self-feedback link for the $i$th unit. The coefficients $a_{im}$, $b_{im}$ and $c_{im}$ denote the influence strengths of the $m$th unit on the $i$th unit.
Finally, we introduce some notation used throughout this paper. $L^{\infty}([0,\infty); \mathbb{R}^n)$ is defined as the collection of essentially bounded functions from the interval $[0,\infty)$ into $\mathbb{R}^n$, satisfying $\|u\|_{\infty} = \operatorname*{ess\,sup}_{t \ge 0} |u(t)| < \infty$. We use $C([-\bar{\tau}, 0]; \mathbb{R}^n)$ to denote the set of continuous functions $\psi$ from $[-\bar{\tau}, 0]$ into $\mathbb{R}^n$, equipped with $\|\psi\| = \sup_{-\bar{\tau} \le s \le 0} |\psi(s)|$. The notation $L^2_{\mathcal{F}_0}([-\bar{\tau}, 0]; \mathbb{R}^n)$ refers to the collection of all $\mathcal{F}_0$-measurable random variables $\psi$ that take values in $C([-\bar{\tau}, 0]; \mathbb{R}^n)$ and satisfy $\int_{-\bar{\tau}}^{0} \mathbb{E}|\psi(y)|^2\, dy < \infty$, with $\mathbb{E}[\cdot]$ denoting the expectation operator with respect to the probability measure $\mathbb{P}$.
This paper begins by expressing Equation (1) in a vector format as
$$
\begin{cases}
dX(t) = \big[ -DX(t) + Af(X(t)) + Bg(X(t-\bar{\tau})) + Ch(X(t-\tau(t))) + u(t) \big]\, dt + \delta(X(t))\, dW(t), \\
X(t) = \Psi(t), \quad -\bar{\tau} \le t \le 0,
\end{cases}
\tag{2}
$$
with
$$
\begin{aligned}
& A = (a_{im})_{n \times n}, \quad B = (b_{im})_{n \times n}, \quad C = (c_{im})_{n \times n}, \quad D = \mathrm{diag}(d_1, \ldots, d_n), \\
& X(t) = (X_1(t), \ldots, X_n(t))^T, \quad f(X(t)) = (f_1(X_1(t)), \ldots, f_n(X_n(t)))^T, \\
& g(X(t)) = (g_1(X_1(t)), \ldots, g_n(X_n(t)))^T, \quad h(X(t)) = (h_1(X_1(t)), \ldots, h_n(X_n(t)))^T, \\
& \delta(X(t)) = \big( \sigma_{im}(X_m(t),\, X_m(t-\bar{\tau}),\, X_m(t-\tau(t))) \big)_{n \times n}, \quad \Psi(t) = (\Psi_1(t), \ldots, \Psi_n(t))^T.
\end{aligned}
$$
Remark 1.
It is important to observe that recasting Equation (1) into the vector format (2) constitutes more than an equivalent formulation. The vector notation facilitates a more relaxed discussion of the general forms of functions f , g , h and δ . For instance, we can broaden the condition of f ( X ( t ) ) = ( f 1 ( X 1 ( t ) ) , , f n ( X n ( t ) ) ) T to
$$ f(X(t)) = \big( f_1(X(t)), \ldots, f_n(X(t)) \big)^T, $$
and the integrity of all subsequent discussions remains unaffected. In such cases, discussing the problem using the component form becomes inconvenient. This relaxation suggests that the scope of our work extends to applications beyond the realm of neural networks.
We will also consider system (2) with u ( t ) = 0 . In this case, it simplifies to the following system
$$
\begin{cases}
dX(t) = \big[ -DX(t) + Af(X(t)) + Bg(X(t-\bar{\tau})) + Ch(X(t-\tau(t))) \big]\, dt + \delta(X(t))\, dW(t), \\
X(t) = \Psi(t), \quad -\bar{\tau} \le t \le 0.
\end{cases}
\tag{3}
$$
It is crucial to note that Equations (2) and (3) are Itô-type stochastic differential equations, consistent with Equation (1). Below, we present several definitions of stability for the trivial solution of Equations (2) and (3).
Definition 1.
The trivial solution of system (2) is mean-square exponentially input-to-state-stable if, for any Ψ L F 0 2 [ τ ¯ , 0 ] ; R n and u L ( [ 0 , ) ; R n ) , there exist positive scalars α 1 , α 2 , α 3 such that
$$ \mathbb{E}\,|X(t;\Psi)|^2 \le \alpha_1 e^{-\alpha_2 t}\, \mathbb{E}\|\Psi\|^2 + \alpha_3 \|u\|_{\infty}^2. $$
Definition 2.
The trivial solution of system (3) is mean-square exponentially stable if, for all Ψ L F 0 2 [ τ ¯ , 0 ] ; R n , there exist positive scalars α 1 , α 2 such that
$$ \mathbb{E}\,|X(t;\Psi)|^2 \le \alpha_1 e^{-\alpha_2 t}\, \mathbb{E}\|\Psi\|^2. $$
Definition 3.
The trivial solution of system (3) is almost surely exponentially stable if, for each Ψ L F 0 2 [ τ ¯ , 0 ] ; R n , there exists a positive scalar γ such that
$$ \limsup_{t \to \infty} \frac{1}{t} \log |X(t;\Psi)| \le -\frac{\gamma}{2} \quad a.s. $$
Throughout the paper, we adhere to foundational assumptions as follows.
Assumption 1.
Assume that $f(0) = 0$, $g(0) = 0$, $h(0) = 0$ and there exist positive scalars $L_1, L_2, L_3$ such that for any $z_1, z_2 \in \mathbb{R}^n$,
$$ |f(z_1) - f(z_2)| \le L_1 |z_1 - z_2|, \quad |g(z_1) - g(z_2)| \le L_2 |z_1 - z_2|, \quad |h(z_1) - h(z_2)| \le L_3 |z_1 - z_2|. \tag{4} $$
Furthermore, we assume that there exist symmetric nonnegative-definite matrices C 1 , C 2 , C 3 such that
$$ \mathrm{tr}\big[ \delta^T(X(t))\, \delta(X(t)) \big] \le X^T(t) C_1 X(t) + X^T(t-\bar{\tau}) C_2 X(t-\bar{\tau}) + X^T(t-\tau(t)) C_3 X(t-\tau(t)). \tag{5} $$
Assumption 2.
Assume that there exist symmetric matrices $F_1^{(j)}, F_2^{(j)}$, $j = 1, 2, 3$, and a symmetric positive definite matrix $P$ such that $H_1, H_2, H_3$ are non-positive definite, where
$$ H_1 = \begin{pmatrix} F_1^{(1)} & PA \\ A^T P^T & F_2^{(1)} \end{pmatrix}, \quad H_2 = \begin{pmatrix} F_1^{(2)} & PB \\ B^T P^T & F_2^{(2)} \end{pmatrix}, \quad H_3 = \begin{pmatrix} F_1^{(3)} & PC \\ C^T P^T & F_2^{(3)} \end{pmatrix}. \tag{6} $$
Remark 2.
It follows from Assumption 1 that for any $z \in \mathbb{R}^n$,
$$ |f(z)| \le L_1 |z|, \quad |g(z)| \le L_2 |z|, \quad |h(z)| \le L_3 |z|. \tag{7} $$
Remark 3.
In this study, we assume that the matrices $C_1, C_2, C_3$ in Equation (5) are non-negative definite. In comparison, the assumption made in [36] only considers the case where these matrices are diagonal. The reader is referred to [36] for the requirement
$$ \big| \sigma_{ij}(x_1, y_1, z_1) - \sigma_{ij}(x_2, y_2, z_2) \big|^2 \le \mu_{ij}(x_1 - x_2)^2 + \nu_{ij}(y_1 - y_2)^2 + \delta_{ij}(z_1 - z_2)^2 $$
for any $x_1, x_2, y_1, y_2, z_1, z_2 \in \mathbb{R}$, $i, j = 1, 2, \ldots, n$, where $\mu_{ij}, \nu_{ij}, \delta_{ij}$ are nonnegative constants.
Remark 4.
Similarly, we can also generalize the assumption in Equation (4). Assume that for any z 1 , z 2 R n ,
$$ |f(z_1) - f(z_2)|^2 \le (z_1 - z_2)^T L_1 (z_1 - z_2), \quad |g(z_1) - g(z_2)|^2 \le (z_1 - z_2)^T L_2 (z_1 - z_2), \quad |h(z_1) - h(z_2)|^2 \le (z_1 - z_2)^T L_3 (z_1 - z_2), $$
where $L_1, L_2, L_3$ are non-negative definite matrices. This paper retains assumption (4); interested readers can follow our approach to derive conclusions under the broader assumption.
Remark 5.
Pertaining to Assumption 2, the following question naturally emerges: given an n × n matrix A, what criteria should be applied to select symmetric matrices F 1 and F 2 such that the block matrix
$$ \begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix} $$
is non-positive definite? Utilizing Lemma A1 from Appendix A, we know that for the matrices $H_1, H_2, H_3$ in Assumption 2 to be non-positive definite, the corresponding matrices $F_i^{(j)}$ must be non-positive definite, where $i = 1, 2$, $j = 1, 2, 3$.
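The block structure of Assumption 2 can be explored numerically. The sketch below (with illustrative matrices of our own choosing, not the paper's data) assembles a block matrix of the form in (6) and tests non-positive definiteness via eigenvalues; the helper names `is_npd` and `block_H`, and the choice $F_2 = -A^T A - \varepsilon I$ with $F_1 = -I$ (one simple way, via the Schur complement, to obtain a non-positive definite block), are our assumptions.

```python
import numpy as np

def is_npd(M, tol=1e-9):
    """Non-positive definite <=> all eigenvalues of the symmetric part are <= 0."""
    S = (M + M.T) / 2.0
    return np.all(np.linalg.eigvalsh(S) <= tol)

def block_H(F1, A, P, F2):
    """Assemble H = [[F1, P@A], [(P@A)^T, F2]] as in Assumption 2."""
    PA = P @ A
    return np.block([[F1, PA], [PA.T, F2]])

# Illustrative 2x2 data (not from the paper's examples)
A = np.array([[0.01, 0.10], [0.04, 0.03]])
P = np.eye(2)
F1 = -np.eye(2)                      # must be non-positive definite (Remark 5)
F2 = -(A.T @ A) - 1e-3 * np.eye(2)   # F2 <= -A^T A keeps the Schur complement <= 0

H = block_H(F1, A, P, F2)
print(is_npd(H))  # True
```

With $F_1 = -I$, the block is non-positive definite exactly when $F_2 + A^T A \preceq 0$, which is why the small shift $-\varepsilon I$ is included.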
Remark 6.
Under Assumption 2, let λ ( 1 ) , λ ( 2 ) , λ ( 3 ) denote the smallest eigenvalues of F 2 ( 1 ) , F 2 ( 2 ) , F 2 ( 3 ) , respectively. Then we derive from Remark 5
$$ \lambda^{(1)} \le 0, \quad \lambda^{(2)} \le 0, \quad \lambda^{(3)} \le 0. $$

3. Main Results

We begin by presenting the theorem on mean-square exponential input-to-state stability for Equation (1), as detailed in [36].
Assumption 3
([36]). Assume that
$$ |f_k(z_1) - f_k(z_2)| \le l_k |z_1 - z_2|, \quad |g_k(z_1) - g_k(z_2)| \le m_k |z_1 - z_2|, \quad |h_k(z_1) - h_k(z_2)| \le n_k |z_1 - z_2|, $$
for any $z_1, z_2 \in \mathbb{R}$, $k = 1, 2, \ldots, n$, where $l_k, m_k, n_k$ are positive constants.
Assumption 4
([36]). Assume that
$$ \big| \sigma_{ij}(x_1, y_1, z_1) - \sigma_{ij}(x_2, y_2, z_2) \big|^2 \le \mu_{ij}(x_1 - x_2)^2 + \nu_{ij}(y_1 - y_2)^2 + \delta_{ij}(z_1 - z_2)^2, $$
for any $x_1, x_2, y_1, y_2, z_1, z_2 \in \mathbb{R}$, $i, j = 1, 2, \ldots, n$, where $\mu_{ij}, \nu_{ij}, \delta_{ij}$ are nonnegative constants.
Assumption 5
([36]). $f_k(0) = g_k(0) = h_k(0) = u_k(0) = 0$, $\sigma_{km}(0,0,0) = 0$, $k, m = 1, 2, \ldots, n$.
Theorem 4
([36]). If Assumptions 3–5 are satisfied, the trivial solution of system (1) is mean-square exponentially input-to-state-stable, provided that there exist positive scalars λ, p i , q i and r i for i = 1 , 2 , , n such that
$$
\begin{aligned}
2 d_i p_i &\ge (\lambda + 1) p_i + q_i + r_i + \sum_{j=1}^{n} p_j \mu_{ji} + p_i \sum_{j=1}^{n} a_{ij} l_j + \sum_{j=1}^{n} p_j a_{ji} l_i + p_i \sum_{j=1}^{n} b_{ij} m_j + p_i \sum_{j=1}^{n} c_{ij} n_j, \\
q_i &\ge \sum_{j=1}^{n} e^{\lambda \bar{\tau}} p_j b_{ji} m_i + \sum_{j=1}^{n} e^{\lambda \bar{\tau}} p_j \nu_{ji}, \\
(1-\rho)\, r_i &\ge \sum_{j=1}^{n} e^{\lambda \bar{\tau}} p_j c_{ji} n_i + \sum_{j=1}^{n} e^{\lambda \bar{\tau}} p_j \delta_{ji}.
\end{aligned}
$$
Subsequently, we introduce the stability criteria of this paper and compare them with the results of Theorem 4. To establish the next theorems, we define the following matrices
$$
\begin{aligned}
H_4 &= P P^T + \lambda P + Q + R + \lambda_{\max}(P)\, C_1 - 2PD - F_1^{(1)} - F_1^{(2)} - F_1^{(3)} - \lambda^{(1)} L_1^2 I, \\
H_5 &= \lambda_{\max}(P)\, C_2 - e^{-\lambda\bar{\tau}} Q - \lambda^{(2)} L_2^2 I, \\
H_6 &= \lambda_{\max}(P)\, C_3 - e^{-\lambda\bar{\tau}} (1-\rho) R - \lambda^{(3)} L_3^2 I,
\end{aligned}
\tag{8}
$$
where λ ( 1 ) , λ ( 2 ) , λ ( 3 ) are presented in Remark 6 and I represents the n × n identity matrix.
Theorem 5.
If Assumptions 1 and 2 are fulfilled, the trivial solution of system (2) is mean-square exponentially input-to-state-stable if there exist a positive scalar $\lambda$ and positive definite matrices $P, Q, R$ such that $H_4, H_5, H_6$ are all non-positive definite.
Remark 7.
Let the matrices C 1 , C 2 , C 3 in Assumption 1 and P , Q , R in Theorem 5 be diagonal matrices, and it can be readily observed that Theorem 5 reduces to Theorem 4. This implies that our theorem encompasses the results of [36].
On the other hand, if u ( t ) = 0 , Theorem 5 reduces to the mean-square exponential stability of system (3). This reduction is stated as follows.
Theorem 6.
If Assumptions 1 and 2 are satisfied, the trivial solution of system (3) is mean-square exponentially stable if there exist a positive scalar $\lambda$ and positive definite matrices $P, Q, R$ such that $H_4, H_5, H_6$ are all non-positive definite.
Theorem 7.
If Assumptions 1 and 2 are fulfilled, the trivial solution of system (3) is almost surely exponentially stable if there exist a positive scalar $\lambda$ and positive definite matrices $P, Q, R$ such that $H_4 + e^{\lambda\bar{\tau}}(H_5 + H_6)$ is non-positive definite. To be exact, we have
$$ \limsup_{s \to \infty} \frac{1}{s} \log |X(s;\Psi)| \le -\frac{\lambda}{2} \quad a.s. \tag{9} $$
Remark 8.
Theorems 6 and 7, respectively, provide conditions for the mean-square exponential stability and the almost sure exponential stability of the trivial solution of Equation (3). It is well known that mean-square exponential stability implies almost sure exponential stability under certain conditions. Comparing these two theorems further corroborates this fact.
Below, we present the conditions under which the trivial solution of Equation (3) satisfies both types of stability.
Theorem 8.
If Assumptions 1 and 2 are fulfilled, the trivial solution of system (3) is both mean-square exponentially stable and almost surely exponentially stable if there exist a positive scalar $\lambda$ and positive definite matrices $P, Q, R$ such that $H_4, H_5, H_6$ are non-positive definite.
Remark 9.
In Theorems 6–8, we obtain several stability criteria for system (3). By choosing an appropriate Lyapunov–Krasovskii functional and through a refined application of stochastic analysis theory, we achieve both mean-square exponential stability and almost sure exponential stability for SDRNNs with no input. As far as we are aware, this has not been reported in other studies.

4. Illustrative Examples

In this section, we will present simulation experiments in two-dimensional and three-dimensional spaces. Numerical solutions were obtained via the Euler–Maruyama scheme with fixed step-size Δ t = 0.001 . The strong convergence order of 0.5 under global Lipschitz conditions was first proven by Gikhman and Skorochod [38]. We note that more advanced methods exist, such as the stochastic balanced drift-implicit Theta methods [39], which may offer improved stability for stiff systems. Reference [40] provides additional implementation insights for stochastic simulation.
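As an illustration of the integration scheme just described, the following sketch applies the Euler–Maruyama method to a scalar delayed SDE of the same Itô type as (3). The coefficients, initial segment, horizon, and function names here are illustrative choices of ours, not the parameters of the examples below; the point is how the constant-delay term is handled with a history buffer.

```python
import numpy as np

# Euler–Maruyama for a scalar delayed SDE of the form
#   dX = [-d*X + a*f(X) + b*g(X(t - tau_bar))] dt + sigma(X) dW,
# deliberately simplified (one constant delay, scalar state).
def euler_maruyama_delay(T=10.0, dt=1e-3, tau_bar=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_hist = int(round(tau_bar / dt))   # grid points covering [-tau_bar, 0]
    n = int(round(T / dt))
    x = np.empty(n_hist + n + 1)
    x[: n_hist + 1] = 0.5               # constant initial segment Psi
    for k in range(n_hist, n_hist + n):
        xt, xlag = x[k], x[k - n_hist]  # X(t) and X(t - tau_bar)
        drift = -2.0 * xt + 0.1 * np.sin(xt) + 0.05 * np.sin(xlag)
        diff = 0.03 * xt                # linear (globally Lipschitz) noise
        x[k + 1] = xt + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return x[n_hist:]                   # the path on [0, T]

path = euler_maruyama_delay()
print(abs(path[-1]) < abs(path[0]))  # True: the trajectory has contracted
```

The dominant negative self-feedback makes this toy system stable, so the simulated path decays toward the origin, mirroring the qualitative behavior reported in the figures below.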
Example 1.
(2-D case) Consider the 2-D SDRNNs (2) with
$$ A = \begin{pmatrix} 0.0100 & 0.1001 \\ 0.0400 & 0.0300 \end{pmatrix}, \quad B = \begin{pmatrix} 0.0020 & 0.0030 \\ 0.0001 & 0.0020 \end{pmatrix}, \quad C = \begin{pmatrix} 0.0400 & 0.0060 \\ 0.0030 & 0.2210 \end{pmatrix}, \quad D = \begin{pmatrix} 21 & 0 \\ 0 & 17.2 \end{pmatrix}. $$
(1) 
Select the following functions and parameters:
$$
\begin{aligned}
& \lambda = 0.4, \quad \bar{\tau} = 2, \quad \rho = 0.4, \quad \tau(t) = 0.4\sin(t) + 0.8, \quad L_1 = 2.1, \quad L_2 = 1, \quad L_3 = 3.2, \\
& f(x_1, x_2) = \big( 2\sin(x_1),\, 2.1\sin(x_2) \big)^T, \quad g(x_1, x_2) = \big( \cos(x_1),\, 0.5 x_2 \big)^T, \quad h(x_1, x_2) = \big( 3.2 x_1,\, \sin(x_2) \big)^T, \\
& u(t) = \big( 0.02\cos(t),\, 0.01\sin(t) \big)^T, \\
& \delta(x(t)) = \begin{pmatrix} 0.03 x_1(t) + 0.025 x_1(t-\bar{\tau}) & 0.02 x_2(t) + 0.023 x_2(t-\bar{\tau}) \\ 0.026 x_1(t) + 0.033 x_1(t-\tau(t)) & 0.02 x_2(t) + 0.1 x_2(t-\tau(t)) \end{pmatrix}, \\
& C_1 = \begin{pmatrix} 0.1 & 0.1 \\ 0.1 & 0.2 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 0.12 & 0.24 \\ 0.24 & 0.1 \end{pmatrix}, \quad C_3 = \begin{pmatrix} 0.3 & 0.15 \\ 0.15 & 0.3 \end{pmatrix}.
\end{aligned}
$$
With these choices, Assumption 1 is satisfied.
(2) 
Define matrices as follows
$$
\begin{aligned}
& P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad Q = \begin{pmatrix} 1.3799 & 0.5341 \\ 0.5341 & 3.3384 \end{pmatrix}, \quad R = \begin{pmatrix} 12.6198 & 0.5564 \\ 0.5564 & 12.6198 \end{pmatrix}, \\
& F_1^{(1)} = -\begin{pmatrix} 1.0000 & 0 \\ 0 & 1.0000 \end{pmatrix}, \quad F_1^{(2)} = -\begin{pmatrix} 1.0100 & 0 \\ 0 & 1.0100 \end{pmatrix}, \quad F_1^{(3)} = -\begin{pmatrix} 1.0100 & 0 \\ 0 & 1.0100 \end{pmatrix}, \\
& F_2^{(1)} = -\begin{pmatrix} 0.0017 & 0.0002 \\ 0.0002 & 0.0110 \end{pmatrix}, \quad F_2^{(2)} = -\begin{pmatrix} 0.0005 & 0.0008 \\ 0.0008 & 0.0001 \end{pmatrix}, \quad F_2^{(3)} = -\begin{pmatrix} 0.0068 & 0.0018 \\ 0.0018 & 0.2053 \end{pmatrix}.
\end{aligned}
$$
With $\lambda^{(1)} = -0.0110$, $\lambda^{(2)} = 0$, $\lambda^{(3)} = -0.2053$, it can be obtained from definition (6) that
$$
\begin{aligned}
H_1 &= \begin{pmatrix} -1 & 0 & 0.0100 & 0.1000 \\ 0 & -1 & 0.0400 & 0.0300 \\ 0.0100 & 0.0400 & -0.0017 & -0.0002 \\ 0.1000 & 0.0300 & -0.0002 & -0.0110 \end{pmatrix}, \quad
H_2 = \begin{pmatrix} -1.0100 & 0 & 0.0020 & 0.0030 \\ 0 & -1.0100 & 0.0001 & 0.0020 \\ 0.0020 & 0.0001 & -0.0005 & -0.0008 \\ 0.0030 & 0.0020 & -0.0008 & -0.0001 \end{pmatrix}, \\
H_3 &= \begin{pmatrix} -1.0100 & 0 & 0.0400 & 0.0060 \\ 0 & -1.0100 & 0.0030 & 0.2210 \\ 0.0400 & 0.0030 & -0.0068 & -0.0018 \\ 0.0060 & 0.2210 & -0.0018 & -0.2053 \end{pmatrix}.
\end{aligned}
$$
It is straightforward to verify that H 1 , H 2 , H 3 are non-positive definite, which means that Assumption 2 is satisfied.
(3) 
It follows from definition (8) that
$$ H_4 = \begin{pmatrix} -23.4318 & 1.1905 \\ 1.1905 & -13.7733 \end{pmatrix}, \quad H_5 = \begin{pmatrix} -0.5 & 0 \\ 0 & -0.5 \end{pmatrix}, \quad H_6 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}. $$
It is straightforward to demonstrate that H 4 , H 5 , H 6 are non-positive definite.
In summary, for the parameters considered here, the requirements of Theorem 5 are met, thereby ensuring the mean-square exponential input-to-state stability of the solution to Equation (2). The numerical solutions are presented in Figure 1, offering a graphical demonstration that supports the stability of the solution. As observed, the trajectories remain within certain bounds, suggesting that the solution is stable; nevertheless, convergence to the equilibrium state is not guaranteed.
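The definiteness claims in this example are easy to confirm mechanically. As a sketch, the check below uses $H_4$ of this example under the sign convention that its diagonal entries are negative (as non-positive definiteness requires); the variable names are ours.

```python
import numpy as np

# H4 as reported in Example 1, with negative diagonal entries
# (required for H4 to be non-positive definite).
H4 = np.array([[-23.4318, 1.1905],
               [1.1905, -13.7733]])

eigs = np.linalg.eigvalsh(H4)  # exact eigenvalues of the symmetric matrix
print(eigs.max() <= 0.0)       # True: H4 is non-positive definite
```

The same three-line check applies verbatim to $H_1, H_2, H_3$ and to the matrices of Example 2.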
Example 2.
(3-D case) Consider the 3-D SDRNNs (3) with
$$ A = \begin{pmatrix} 0.234 & 0.543 & 0.111 \\ 0.087 & 0.003 & 0.764 \\ 0.123 & 0.456 & 0.789 \end{pmatrix}, \quad B = \begin{pmatrix} 0.12 & 0.43 & 0.75 \\ 0.56 & 0.08 & 0.21 \\ 0.33 & 0.66 & 0.10 \end{pmatrix}, \quad C = \begin{pmatrix} 0.528 & 0.123 & 0.876 \\ 0.002 & 0.439 & 0.762 \\ 0.111 & 0.345 & 0.231 \end{pmatrix}, \quad D = \begin{pmatrix} 21 & 0 & 0 \\ 0 & 17.2 & 0 \\ 0 & 0 & 33.2 \end{pmatrix}. $$
(1) 
Assume that
$$
\begin{aligned}
& \lambda = 0.1, \quad \bar{\tau} = 0.9, \quad \rho = 1.2, \quad \tau(t) = 0.6\cos(2t) + 0.3, \quad L_1 = 1.32, \quad L_2 = 0.8, \quad L_3 = 1.4, \\
& f(x_1, x_2, x_3) = \big( 1.32\sin(x_1),\, \cos(x_2),\, x_3 \big)^T, \quad g(x_1, x_2, x_3) = \big( \cos(x_1),\, 0.61 x_2,\, 0.8 x_3 \big)^T, \\
& h(x_1, x_2, x_3) = \big( x_1,\, 2\sin(x_2),\, 1.4 x_3 \big)^T, \\
& \delta(x(t)) = \begin{pmatrix} 0.2 x_1(t) + 0.14 x_1(t-\bar{\tau}) & 0.02 x_2(t) + 0.14 x_2(t-\bar{\tau}) & 0.18 x_3(t) \\ 0.18 x_1(t) + 0.01 x_1(t-\tau(t)) & 0.2 x_2(t) + 0.11 x_2(t-\tau(t)) & x_3(t-\bar{\tau}) \\ 0.17 x_1(t-\bar{\tau}) + 0.12 x_1(t-\tau(t)) & 0.1 x_2(t-\bar{\tau}) + 0.023 x_2(t-\tau(t)) & x_3(t-\tau(t)) \end{pmatrix}, \\
& C_1 = \begin{pmatrix} 1.2299 & 1.2148 & 0.7938 \\ 1.2148 & 2.2408 & 0.7683 \\ 0.7938 & 0.7683 & 0.9853 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 3.5854 & 0.1381 & 0.7480 \\ 0.1381 & 0.4837 & 0.1760 \\ 0.7480 & 0.1760 & 1.2145 \end{pmatrix}, \quad C_3 = \begin{pmatrix} 1.0982 & 0.9260 & 0.4042 \\ 0.9260 & 3.0415 & 0.7613 \\ 0.4042 & 0.7613 & 0.8503 \end{pmatrix}.
\end{aligned}
$$
With these selections, Assumption 1 is fulfilled.
(2) 
Define matrices as the following
$$
\begin{aligned}
& P = \begin{pmatrix} 1.1559 & 0.0913 & 0.0952 \\ 0.0913 & 1.3201 & 1.5632 \\ 0.0952 & 1.5632 & 3.1346 \end{pmatrix}, \quad Q = \begin{pmatrix} 31.6731 & 0.6097 & 3.3026 \\ 0.6097 & 17.9784 & 0.7771 \\ 3.3026 & 0.7771 & 21.2050 \end{pmatrix}, \\
& R = \begin{pmatrix} 364.1912 & 20.4426 & 8.9232 \\ 20.4426 & 407.0918 & 16.8066 \\ 8.9232 & 16.8066 & 358.7185 \end{pmatrix}, \\
& F_1^{(1)} = -1.2\, I, \quad F_1^{(2)} = -1.1\, I, \quad F_1^{(3)} = -1.3\, I, \\
& F_2^{(1)} = -\begin{pmatrix} 0.1865 & 0.3530 & 1.5343 \\ 0.3530 & 4.6539 & 10.5651 \\ 1.5343 & 10.5651 & 27.7902 \end{pmatrix}, \quad F_2^{(2)} = -\begin{pmatrix} 10.9870 & 10.2946 & 3.5284 \\ 10.2946 & 9.9435 & 2.8212 \\ 3.5284 & 2.8212 & 2.8912 \end{pmatrix}, \\
& F_2^{(3)} = -\begin{pmatrix} 1.6881 & 2.8188 & 0.7869 \\ 2.8188 & 13.2336 & 14.1577 \\ 0.7869 & 14.1577 & 19.7202 \end{pmatrix},
\end{aligned}
$$
with $\lambda^{(1)} = -31.97$, $\lambda^{(2)} = -21.84$, $\lambda^{(3)} = -31.19$. It follows from definition (6) that
$$
H_1 = \begin{pmatrix} -1.2000 & 0 & 0 & 0.2508 & 0.6708 & 0.1337 \\ 0 & -1.2000 & 0 & 0.0988 & 0.6672 & 2.2318 \\ 0 & 0 & -1.2000 & 0.2273 & 1.4858 & 3.6781 \\ 0.2508 & 0.0988 & 0.2273 & -0.1865 & -0.3530 & -1.5343 \\ 0.6708 & 0.6672 & 1.4858 & -0.3530 & -4.6539 & -10.5651 \\ 0.1337 & 2.2318 & 3.6781 & -1.5343 & -10.5651 & -27.7902 \end{pmatrix},
$$
$$
H_2 = \begin{pmatrix} -1.1000 & 0 & 0 & 0.1584 & 0.4269 & 0.8573 \\ 0 & -1.1000 & 0 & 1.2661 & 0.9654 & 0.3651 \\ 0 & 0 & -1.1000 & 1.8984 & 1.9028 & 0.7131 \\ 0.1584 & 1.2661 & 1.8984 & -10.9870 & -10.2946 & -3.5284 \\ 0.4269 & 0.9654 & 1.9028 & -10.2946 & -9.9435 & -2.8212 \\ 0.8573 & 0.3651 & 0.7131 & -3.5284 & -2.8212 & -2.8912 \end{pmatrix},
$$
$$
H_3 = \begin{pmatrix} -1.3000 & 0 & 0 & 0.6207 & 0.1349 & 1.0601 \\ 0 & -1.3000 & 0 & 0.1279 & 1.1076 & 1.4470 \\ 0 & 0 & -1.3000 & 0.4013 & 1.7794 & 1.8319 \\ 0.6207 & 0.1279 & 0.4013 & -1.6881 & -2.8188 & -0.7869 \\ 0.1349 & 1.1076 & 1.7794 & -2.8188 & -13.2336 & -14.1577 \\ 1.0601 & 1.4470 & 1.8319 & -0.7869 & -14.1577 & -19.7202 \end{pmatrix}.
$$
It is readily apparent that H 1 , H 2 , H 3 are non-positive definite, thereby confirming that Assumption 2 holds.
(3) 
It can be deduced from (8) that
$$ H_4 = \begin{pmatrix} -315.3378 & 29.0086 & 2.9766 \\ 29.7025 & -361.8602 & 115.8151 \\ 5.2995 & 65.7927 & -469.7871 \end{pmatrix}, \quad H_5 = \begin{pmatrix} -0.5 & 0 & 0 \\ 0 & -0.5 & 0 \\ 0 & 0 & -0.5 \end{pmatrix}, \quad H_6 = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. $$
It can be easily established that $H_4, H_5, H_6$ are non-positive definite.
In conclusion, given the parameters under consideration, the conditions of Theorem 8 are satisfied, which guarantees both mean-square exponential stability and almost sure exponential stability for the solution to Equation (3). The numerical solutions illustrated in Figure 2 provide a visual confirmation of the solution’s stability. They reveal that the solution rapidly converges to the equilibrium state, indicating a swift stabilization of the system.

5. Proof of Main Theorems

5.1. Preliminary Lemmas

Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$ represent all nonnegative functions $V(X, t)$ defined on $\mathbb{R}^n \times \mathbb{R}_+$ that are twice continuously differentiable with respect to $X$ and once continuously differentiable with respect to $t$. For every $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$, we introduce an operator $dV$, which is related to the stochastic delay Equation (2) as follows:
$$ dV(X(t), t) = LV(X(t), t)\, dt + V_x(X(t), t)\, \delta(X(t))\, dW(t), \tag{10} $$
with
$$ LV(X(t), t) = V_t(X(t), t) + V_x(X(t), t) \big[ -DX(t) + Af(X(t)) + Bg(X(t-\bar{\tau})) + Ch(X(t-\tau(t))) + u(t) \big] + \tfrac{1}{2}\, \mathrm{tr}\big[ \delta^T(X(t))\, V_{xx}(X(t), t)\, \delta(X(t)) \big]. \tag{11} $$
Subsequently, we will proceed to estimate $LV(X(t), t)$ for specific Lyapunov functions associated with the stochastic delay Equation (2).
Lemma 1.
If Assumptions 1 and 2 are satisfied, let λ ( 1 ) , λ ( 2 ) , λ ( 3 ) be given in Remark 6. Define
$$ V(X(t), t) = e^{\lambda t} X^T(t) P X(t) + \int_{t-\bar{\tau}}^{t} e^{\lambda s} X^T(s) Q X(s)\, ds + \int_{t-\tau(t)}^{t} e^{\lambda s} X^T(s) R X(s)\, ds, \tag{12} $$
where $\lambda > 0$, and $P, Q, R$ are $n \times n$ positive definite matrices. Then, for the solution $X(t)$ of Equation (2), the following holds:
$$ e^{-\lambda t} LV(X(t), t) \le X^T(t) H_4 X(t) + X^T(t-\bar{\tau}) H_5 X(t-\bar{\tau}) + X^T(t-\tau(t)) H_6 X(t-\tau(t)) + |u(t)|^2, \tag{13} $$
where H 4 , H 5 , H 6 are defined in Equation (8).
Proof. 
Under the assumptions of this lemma, it can be readily inferred from (2), (5) and (12) that
$$ V_t(X(t), t) \le e^{\lambda t} \Big[ X^T(t) (\lambda P + Q + R) X(t) - e^{-\lambda\bar{\tau}} X^T(t-\bar{\tau}) Q X(t-\bar{\tau}) - e^{-\lambda\bar{\tau}} (1-\rho)\, X^T(t-\tau(t)) R X(t-\tau(t)) \Big], $$
$$ V_x(X(t), t) = 2 e^{\lambda t} X^T(t) P, $$
$$
\begin{aligned}
2 X^T(t) P \big[ Af(X(t)) &+ Bg(X(t-\bar{\tau})) + Ch(X(t-\tau(t))) \big] \\
={}& \big( X(t), f(X(t)) \big)^T H_1 \big( X(t), f(X(t)) \big) - X^T(t) F_1^{(1)} X(t) - f^T(X(t)) F_2^{(1)} f(X(t)) \\
&+ \big( X(t), g(X(t-\bar{\tau})) \big)^T H_2 \big( X(t), g(X(t-\bar{\tau})) \big) - X^T(t) F_1^{(2)} X(t) - g^T(X(t-\bar{\tau})) F_2^{(2)} g(X(t-\bar{\tau})) \\
&+ \big( X(t), h(X(t-\tau(t))) \big)^T H_3 \big( X(t), h(X(t-\tau(t))) \big) - X^T(t) F_1^{(3)} X(t) - h^T(X(t-\tau(t))) F_2^{(3)} h(X(t-\tau(t))) \\
\le{}& -X^T(t) \big( F_1^{(1)} + F_1^{(2)} + F_1^{(3)} \big) X(t) - f^T(X(t)) F_2^{(1)} f(X(t)) \\
&- g^T(X(t-\bar{\tau})) F_2^{(2)} g(X(t-\bar{\tau})) - h^T(X(t-\tau(t))) F_2^{(3)} h(X(t-\tau(t))),
\end{aligned}
$$
$$ \tfrac{1}{2}\, \mathrm{tr}\big[ \delta^T(X(t))\, V_{xx}(X(t), t)\, \delta(X(t)) \big] \le e^{\lambda t} \lambda_{\max}(P) \big[ X^T(t) C_1 X(t) + X^T(t-\bar{\tau}) C_2 X(t-\bar{\tau}) + X^T(t-\tau(t)) C_3 X(t-\tau(t)) \big]. $$
Combining these results with the fact that $H_1, H_2, H_3$ are symmetric non-positive definite, we get
$$
\begin{aligned}
LV(X(t), t) \le{}& e^{\lambda t} X^T(t) \big[ P P^T + \lambda P + Q + R + \lambda_{\max}(P)\, C_1 - 2PD - F_1^{(1)} - F_1^{(2)} - F_1^{(3)} \big] X(t) \\
&+ e^{\lambda t} X^T(t-\bar{\tau}) \big[ \lambda_{\max}(P)\, C_2 - e^{-\lambda\bar{\tau}} Q \big] X(t-\bar{\tau}) \\
&+ e^{\lambda t} X^T(t-\tau(t)) \big[ \lambda_{\max}(P)\, C_3 - e^{-\lambda\bar{\tau}} (1-\rho) R \big] X(t-\tau(t)) \\
&+ e^{\lambda t} |u(t)|^2 - e^{\lambda t} f^T(X(t)) F_2^{(1)} f(X(t)) \\
&- e^{\lambda t} g^T(X(t-\bar{\tau})) F_2^{(2)} g(X(t-\bar{\tau})) - e^{\lambda t} h^T(X(t-\tau(t))) F_2^{(3)} h(X(t-\tau(t))).
\end{aligned}
$$
Combining this with the relation (7), we arrive at the conclusion stated in (13). □
To establish almost sure exponential stability for system (3), we require the following results. The subsequent discussion is inspired by the work presented in [41].
Lemma 2
(Semimartingale convergence theorem [41]). Consider A 1 ( t ) and A 2 ( t ) as two continuous, adapted and increasing processes on t 0 with initial conditions A 1 ( 0 ) = A 2 ( 0 ) = 0 almost surely (a.s.). Let A 3 ( t ) be a real-valued continuous local martingale, also starting at A 3 ( 0 ) = 0 a.s. Let ζ be a nonnegative random variable that is F 0 -measurable with a finite expected value, E [ ζ ] < . Define the process
$$ Y(t) = \zeta + A_1(t) - A_2(t) + A_3(t), \quad t \ge 0. $$
If Y ( t ) remains nonnegative, then the following implication holds:
$$ \Big\{ \lim_{t \to \infty} A_1(t) < \infty \Big\} \subset \Big\{ \lim_{t \to \infty} Y(t) < \infty \Big\} \cap \Big\{ \lim_{t \to \infty} A_2(t) < \infty \Big\} \quad a.s. $$
Here, $B_1 \subset B_2$ a.s. indicates that the probability of the intersection of $B_1$ and the complement of $B_2$ is zero, $\mathbb{P}(B_1 \cap B_2^c) = 0$. Specifically, if $\lim_{t \to \infty} A_1(t) < \infty$ a.s., then for almost all outcomes $\omega \in \Omega$,
$$ \lim_{t \to \infty} Y(t, \omega) < \infty \quad \text{and} \quad \lim_{t \to \infty} A_2(t, \omega) < \infty. $$
This suggests that both the process Y and A 2 tend to approach finite random values as t .
Lemma 3.
Let $\gamma > 0$, and let $\psi: \mathbb{R} \to \mathbb{R}_+$ be any continuous function. Then
$$
\begin{aligned}
\int_0^t e^{\gamma s} \psi(s - \tau(t))\, ds &\le -e^{\gamma\bar{\tau}} \int_{t-\tau(t)}^{t} e^{\gamma s} \psi(s)\, ds + e^{\gamma\bar{\tau}} \int_{-\bar{\tau}}^{0} e^{\gamma s} \psi(s)\, ds + e^{\gamma\bar{\tau}} \int_{0}^{t} e^{\gamma s} \psi(s)\, ds, \\
\int_0^t e^{\gamma s} \psi(s - \bar{\tau})\, ds &\le -e^{\gamma\bar{\tau}} \int_{t-\bar{\tau}}^{t} e^{\gamma s} \psi(s)\, ds + e^{\gamma\bar{\tau}} \int_{-\bar{\tau}}^{0} e^{\gamma s} \psi(s)\, ds + e^{\gamma\bar{\tau}} \int_{0}^{t} e^{\gamma s} \psi(s)\, ds.
\end{aligned}
$$
Proof. 
It is sufficient to demonstrate the validity of the first relation, as the second relation is merely a specific instance of the first one. Through direct computation, we arrive at the following result:
$$
\begin{aligned}
\int_{t-\tau(t)}^{t} e^{\gamma s} \psi(s)\, ds &= \int_{t-\tau(t)}^{-\tau(t)} e^{\gamma s} \psi(s)\, ds + \int_{-\tau(t)}^{t} e^{\gamma s} \psi(s)\, ds \\
&= -\int_{0}^{t} e^{\gamma(s - \tau(t))} \psi(s - \tau(t))\, ds + \int_{-\tau(t)}^{t} e^{\gamma s} \psi(s)\, ds \\
&\le -e^{-\gamma\bar{\tau}} \int_{0}^{t} e^{\gamma s} \psi(s - \tau(t))\, ds + \int_{-\bar{\tau}}^{t} e^{\gamma s} \psi(s)\, ds;
\end{aligned}
$$
multiplying through by $e^{\gamma\bar{\tau}}$ and rearranging yields the first relation.
The final inequality is valid because $\tau(t) \le \bar{\tau}$ and $\psi \ge 0$. □

5.2. Theorem Demonstration: A Systematic Exposition of the Proofs

Proof. 
Theorem 5’s proof: Let V ( X , t ) be defined in accordance with (12), and consider X ( t ) : = X ( t ; Ψ ) to be the solution of Equation (2). When we take into account that H 4 , H 5 , H 6 are all non-positive definite matrices and couple this observation with the conclusions of Lemma 1, we derive the following conclusion:
$$ LV(X(t), t) \le e^{\lambda t} |u(t)|^2. \tag{14} $$
Next, we introduce a stopping time
$$ \theta_k := \inf\{ t \ge 0 : |X(t)| \ge k \}. $$
It can be inferred from Dynkin’s formula that
$$ \mathbb{E}\, V\big( X(t \wedge \theta_k),\, t \wedge \theta_k \big) = \mathbb{E}\, V(X(0), 0) + \mathbb{E} \int_0^{t \wedge \theta_k} LV(X(y), y)\, dy. \tag{15} $$
By letting $k \to \infty$ on both sides of Equation (15) and applying the monotone convergence theorem along with Equation (14), we easily derive
$$ \mathbb{E}\, V(X(t), t) \le \mathbb{E}\, V(X(0), 0) + \|u\|_\infty^2 \int_0^t e^{\lambda y}\, dy \le \mathbb{E}\|\Psi\|^2 + \frac{1}{\lambda} \|u\|_\infty^2 \big( e^{\lambda t} - 1 \big). $$
Combining this with the definitions provided in (12), we obtain the following result
$$ e^{\lambda t} \lambda_{\min}(P)\, \mathbb{E}|X(t)|^2 \le e^{\lambda t}\, \mathbb{E}\big[ X^T(t) P X(t) \big] \le \mathbb{E}\, V(X(t), t) \le \mathbb{E}\|\Psi\|^2 + \frac{1}{\lambda} \|u\|_\infty^2 \big( e^{\lambda t} - 1 \big). $$
That is,
$$ \mathbb{E}|X(t)|^2 \le e^{-\lambda t}\, \frac{\mathbb{E}\|\Psi\|^2}{\lambda_{\min}(P)} + \frac{1}{\lambda}\, \frac{\|u\|_\infty^2}{\lambda_{\min}(P)}. $$
This in conjunction with Definition 1 confirms that the trivial solution of system (2) is mean-square exponentially input-to-state-stable. □
Proof. 
Theorem 7’s proof: Let $V(X, t)$ be defined in (12), and consider $X(t) := X(t; \Psi)$ as the solution to Equation (3). It follows from Lemma 1 with $u(t) = 0$ that
$$ e^{-\lambda t} LV(X(t), t) \le X^T(t) H_4 X(t) + X^T(t-\bar{\tau}) H_5 X(t-\bar{\tau}) + X^T(t-\tau(t)) H_6 X(t-\tau(t)). $$
Thus
$$
\begin{aligned}
V(X(t), t) \le{}& V(X(0), 0) + \int_0^t V_x(X(y), y)\, \delta(X(y))\, dW(y) + \int_0^t e^{\lambda y} X^T(y) H_4 X(y)\, dy \\
&+ \int_0^t e^{\lambda y} X^T(y-\bar{\tau}) H_5 X(y-\bar{\tau})\, dy + \int_0^t e^{\lambda y} X^T(y-\tau(y)) H_6 X(y-\tau(y))\, dy.
\end{aligned}
\tag{16}
$$
This together with Lemma 3 implies that
$$
\begin{aligned}
\int_0^t & e^{\lambda y} X^T(y-\bar{\tau}) H_5 X(y-\bar{\tau})\, dy + \int_0^t e^{\lambda y} X^T(y-\tau(y)) H_6 X(y-\tau(y))\, dy \\
\le{}& -e^{\lambda\bar{\tau}} \int_{t-\bar{\tau}}^{t} e^{\lambda y} X^T(y) H_5 X(y)\, dy - e^{\lambda\bar{\tau}} \int_{t-\tau(t)}^{t} e^{\lambda y} X^T(y) H_6 X(y)\, dy \\
&+ e^{\lambda\bar{\tau}} \int_{-\bar{\tau}}^{0} e^{\lambda y} X^T(y) H_5 X(y)\, dy + e^{\lambda\bar{\tau}} \int_{-\bar{\tau}}^{0} e^{\lambda y} X^T(y) H_6 X(y)\, dy \\
&+ e^{\lambda\bar{\tau}} \int_{0}^{t} e^{\lambda y} X^T(y) H_5 X(y)\, dy + e^{\lambda\bar{\tau}} \int_{0}^{t} e^{\lambda y} X^T(y) H_6 X(y)\, dy.
\end{aligned}
\tag{17}
$$
Combining relations (16) and (17), we have
$V(X(t),t) \le V(X(0),0) + \int_0^t V_x(X(y),y)\,\delta(X(y))\,dW(y) + \int_0^t e^{\lambda y} X^T(y) H_4 X(y)\,dy - e^{\lambda\bar\tau}\int_{t-\bar\tau}^t e^{\lambda y} X^T(y) H_5 X(y)\,dy - e^{\lambda\bar\tau}\int_{t-\tau(t)}^t e^{\lambda y} X^T(y) H_6 X(y)\,dy + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_6 X(y)\,dy + e^{\lambda\bar\tau}\int_0^t e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_0^t e^{\lambda y} X^T(y) H_6 X(y)\,dy,$ (18)
thus
$V(X(t),t) + e^{\lambda\bar\tau}\int_{t-\bar\tau}^t e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_{t-\tau(t)}^t e^{\lambda y} X^T(y) H_6 X(y)\,dy \le V(X(0),0) + \int_0^t V_x(X(y),y)\,\delta(X(y))\,dW(y) + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_6 X(y)\,dy + \int_0^t e^{\lambda y} X^T(y)\big[H_4 + e^{\lambda\bar\tau}(H_5 + H_6)\big] X(y)\,dy.$ (19)
Making use of (19) and the non-positive definiteness of $H_4 + e^{\lambda\bar\tau}(H_5 + H_6)$ yields
$V(X(t),t) + e^{\lambda\bar\tau}\int_{t-\bar\tau}^t e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_{t-\tau(t)}^t e^{\lambda y} X^T(y) H_6 X(y)\,dy \le Y(t),$ (20)
where
$Y(t) = V(X(0),0) + \int_0^t V_x(X(y),y)\,\delta(X(y))\,dW(y) + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_5 X(y)\,dy + e^{\lambda\bar\tau}\int_{-\bar\tau}^0 e^{\lambda y} X^T(y) H_6 X(y)\,dy,$
which constitutes a nonnegative martingale, and Lemma 2 demonstrates that
$\lim_{s\to\infty} Y(s) < \infty \quad a.s.$
Consequently, Equation (20) leads to
$\limsup_{s\to\infty} V(X(s),s) < \infty \quad a.s.$
This together with the fact that
$e^{\lambda t}\lambda_{\min}(P)\,|X(t)|^2 \le e^{\lambda t}\, X^T(t) P X(t) \le V(X(t),t)$
implies that
$\limsup_{s\to\infty} \frac{1}{s}\log |X(s)|^2 \le \limsup_{s\to\infty} \frac{1}{s}\log V(X(s),s) - \lambda \le -\lambda \quad a.s.,$
which is equivalent to (9). □
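The almost sure decay rate established here can be observed on a single simulated path. The following Euler–Maruyama sketch uses a scalar toy delay equation with zero input (the coefficients $-2$, $0.5$, $0.3$ and the delay $\tau = 0.5$ are illustrative choices, not the paper's network) and reports the pathwise estimate of $\frac{1}{t}\log|X(t)|^2$:

```python
import numpy as np

# Single-path Euler-Maruyama for the toy SDDE with zero input:
#   dX = (-2 X(t) + 0.5 X(t - tau)) dt + 0.3 X(t) dW(t),
# constant initial history Psi = 1 on [-tau, 0].
rng = np.random.default_rng(7)
tau, dt, T = 0.5, 0.001, 20.0
lag, steps = int(tau / dt), int(T / dt)
x = np.ones(lag + steps + 1)
for k in range(steps):
    dW = rng.standard_normal() * np.sqrt(dt)
    x[lag + k + 1] = (x[lag + k]
                      + (-2.0 * x[lag + k] + 0.5 * x[k]) * dt   # x[k] = X(t - tau)
                      + 0.3 * x[lag + k] * dW)

# Pathwise estimate of (1/t) log |X(t)|^2 at t = T
exponent = np.log(x[-1] ** 2) / T
print(float(exponent))
```

For this contractive choice of coefficients the estimate is markedly negative, consistent with the almost sure exponential stability asserted by the theorem.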

6. Conclusions

Employing a matrix-based methodology, this work has established the stability of solutions of SDRNNs. Our findings rest on less restrictive assumptions than scalar methods and yield a broader range of stability conclusions. Beyond improving the mean-square exponential input-to-state stability results introduced in [36], we have also established mean-square exponential stability and almost sure exponential stability for the input-free case. The effectiveness of the theoretical results has been confirmed through numerical simulations, highlighting their practical applicability.
While our work focuses on theoretical stability analysis, future research could explore several promising directions. First, extending our analysis to more general system configurations, such as those with different types of time-varying delays or noise structures, would further broaden the applicability of the results. Second, developing efficient numerical methods to compute the controller parameters based on the derived stability conditions could bridge the gap between theory and practical implementation. Third, investigating the resilience of SDRNNs to parameter uncertainties and external disturbances would enhance the robustness of the stability results. These future directions align with the growing demand for more sophisticated and reliable neural network models in real-world applications.

Author Contributions

All four authors have contributed equally to this paper. H.X. conducted theoretical analysis and wrote the paper. S.W. proposed the research problem and verified the correctness of the theoretical analysis. M.X. provided the data for the study. Y.Z. performed the numerical simulations. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the specialized research fund of Yibin University (Grant No. 412-2021QH027).

Data Availability Statement

No new data were created.

Acknowledgments

All procedures and data handling in this study were in accordance with relevant ethical standards and were approved by the Ethics Committee of Yibin University prior to the commencement of the research. The data collection and processing in this study were conducted in accordance with the principle of informed consent, ensuring the privacy and rights of the participants. We sincerely thank the anonymous reviewers for their expert guidance and constructive engagement, which profoundly improved the scholarly rigor of this work. Their insightful critique enhanced foundational mathematical frameworks, refined probabilistic conventions, and enriched the specialist literature. We are particularly grateful for their valuable recommendations on key references, which strengthened the theoretical foundations and contextual depth of this study.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

In this section, we address the question raised in Assumption 2: given an $n \times n$ matrix $A$, how can we choose symmetric matrices $F_1, F_2$ such that $\begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix}$ is non-positive definite?
Lemma A1.
Let $A$ be an $n \times n$ matrix. If there exist symmetric matrices $F_1, F_2$ such that $\begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix}$ is non-positive definite, then $F_1$ and $F_2$ must be non-positive definite.
Proof. 
Let $\mathbf{0}$ denote the $n$-dimensional column vector with all entries equal to zero. For all $x_1, x_2 \in \mathbb{R}^n$, the non-positive definiteness of the matrix $\begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix}$ implies that
$\begin{pmatrix} x_1^T & \mathbf{0}^T \end{pmatrix} \begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix} \begin{pmatrix} x_1 \\ \mathbf{0} \end{pmatrix} \le 0, \qquad \begin{pmatrix} \mathbf{0}^T & x_2^T \end{pmatrix} \begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix} \begin{pmatrix} \mathbf{0} \\ x_2 \end{pmatrix} \le 0,$
or equivalently, $x_1^T F_1 x_1 \le 0$ and $x_2^T F_2 x_2 \le 0$. This ensures that both $F_1$ and $F_2$ are non-positive definite. □
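Lemma A1 can be spot-checked numerically. The small NumPy sketch below (the dimension and the random construction $M = -BB^T$ of a non-positive definite block matrix are illustrative choices) confirms that the diagonal blocks inherit non-positive definiteness:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Build a symmetric non-positive definite 2n x 2n matrix M = -B B^T,
# then read off its diagonal blocks from M = [[F1, A], [A^T, F2]].
B = rng.standard_normal((2 * n, 2 * n))
M = -B @ B.T
F1, F2 = M[:n, :n], M[n:, n:]

# Lemma A1: both diagonal blocks are non-positive definite,
# i.e., their largest eigenvalues are <= 0 (up to round-off).
print(float(np.linalg.eigvalsh(F1).max()), float(np.linalg.eigvalsh(F2).max()))
```

This is simply the familiar fact that principal submatrices of a non-positive definite matrix are themselves non-positive definite, which the lemma recovers via coordinate vectors.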
Lemma A2.
Let $A$ be an $n \times n$ matrix. If there exist symmetric matrices $F_1, F_2$ and a scalar $\varepsilon > 0$ such that $\varepsilon I + F_1$ and $\varepsilon F_2 + A^T A$ are non-positive definite, then $\begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix}$ is non-positive definite.
Proof. 
For all x 1 , x 2 R n , we can compute as follows:
$\begin{pmatrix} x_1^T & x_2^T \end{pmatrix} \begin{pmatrix} F_1 & A \\ A^T & F_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_1^T F_1 x_1 + x_2^T F_2 x_2 + 2 x_1^T A x_2 \le x_1^T F_1 x_1 + x_2^T F_2 x_2 + \varepsilon\, x_1^T x_1 + \frac{1}{\varepsilon} (A x_2)^T (A x_2) = x_1^T (F_1 + \varepsilon I) x_1 + x_2^T \Big(F_2 + \frac{A^T A}{\varepsilon}\Big) x_2 \le 0.$
It should be noted that the last inequality holds due to the assumption that ε I + F 1 and ε F 2 + A T A are non-positive definite. □
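Lemma A2 gives a constructive recipe for building such a block matrix, which can also be verified numerically. In the sketch below, the random $A$, the choice $\varepsilon = 1$, and the particular matrices $F_1 = -2\varepsilon I$ and $F_2 = -A^T A/\varepsilon - I$ are illustrative assumptions picked to satisfy the lemma's hypotheses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # arbitrary n x n matrix
eps = 1.0                          # any eps > 0

# Choose symmetric F1, F2 satisfying Lemma A2's hypotheses:
#   eps*I + F1 = -eps*I      <= 0  (non-positive definite)
#   eps*F2 + A^T A = -eps*I  <= 0
F1 = -2.0 * eps * np.eye(n)
F2 = -(A.T @ A) / eps - np.eye(n)

# Assemble [[F1, A], [A^T, F2]] and check all eigenvalues are <= 0.
M = np.block([[F1, A], [A.T, F2]])
eigs = np.linalg.eigvalsh(M)
print(float(eigs.max()))
```

With these choices the quadratic form in the proof is bounded by $-\varepsilon|x_1|^2 - |x_2|^2$, so the block matrix is in fact negative definite, and the computed eigenvalues confirm it.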

References

  1. Duan, L.; Ren, Y.; Duan, F. Adaptive stochastic resonance based convolutional neural network for image classification. Chaos Solitons Fractals 2022, 162, 112429. [Google Scholar] [CrossRef]
  2. Morán, A.; Parrilla, L.; Roca, M.; Font-Rossello, J.; Isern, E.; Canals, V. Digital implementation of radial basis function neural networks based on stochastic computing. IEEE J. Emerg. Sel. Top. Circuits Syst. 2022, 13, 257–269. [Google Scholar] [CrossRef]
  3. Hofmann, D.; Fabiani, G.; Mentink, J.H.; Carleo, G.; Sentef, M.A. Role of stochastic noise and generalization error in the time propagation of neural-network quantum states. Scipost Phys. 2022, 12, 165. [Google Scholar] [CrossRef]
  4. Yamakou, M.E.; Zhu, J.; Martens, E.A. Inverse stochastic resonance in adaptive small-world neural networks. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 113119. [Google Scholar] [CrossRef] [PubMed]
  5. Sun, Y.; Liang, F. A kernel-expanded stochastic neural network. J. R. Stat. Soc. Ser. B Stat. Methodol. 2022, 84, 547–578. [Google Scholar] [CrossRef]
  6. Joya, G.; Atencia, M.A.; Sandoval, F. Hopfield neural networks for optimization: Study of the different dynamics. Neurocomputing 2022, 43, 219–237. [Google Scholar] [CrossRef]
  7. Yang, J.; Ma, L.; Chen, L. L2–L∞ state estimation for continuous stochastic delayed neural networks via memory event-triggering strategy. Int. J. Syst. Sci. 2022, 53, 2742–2757. [Google Scholar] [CrossRef]
  8. Yang, L.; Gao, T.; Lu, Y.; Duan, J.; Liu, T. Neural network stochastic differential equation models with applications to financial data forecasting. Appl. Math. Model. 2023, 115, 279–299. [Google Scholar] [CrossRef]
  9. Yang, X.; Deng, W.; Yao, J. Neural network based output feedback control for DC motors with asymptotic stability. Mech. Syst. Signal Process. 2022, 164, 108288. [Google Scholar] [CrossRef]
  10. Yuan, H.; Zhu, Q. The well-posedness and stabilities of mean-field stochastic differential equations driven by G-Brownian motion. SIAM J. Control Optim. 2025, 63, 596–624. [Google Scholar] [CrossRef]
  11. McAllister, D.R.; Rawlings, J.B. Nonlinear stochastic model predictive control: Existence, measurability, and stochastic asymptotic stability. IEEE Trans. Autom. Control 2022, 68, 1524–1536. [Google Scholar] [CrossRef]
  12. Xu, W.; Chen, R.; Li, X. Infinitely deep bayesian neural networks with stochastic differential equations. Int. Conf. Artif. Intell. Stat. 2022, 721–738. [Google Scholar]
  13. Zhao, Y.; Wang, L. Practical exponential stability of impulsive stochastic food chain system with time-varying delays. Mathematics 2022, 11, 147. [Google Scholar] [CrossRef]
  14. Zhu, Q. Event-triggered sampling problem for exponential stability of stochastic nonlinear delay systems driven by Levy processes. IEEE Trans. Autom. Control 2025, 70, 1176–1183. [Google Scholar] [CrossRef]
  15. Zhu, Q.; Cao, J. Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. Learn. Syst. 2010, 21, 1314–1325. [Google Scholar]
  16. Tran, K.Q.; Nguyen, D.H. Exponential stability of impulsive stochastic differential equations with Markovian switching. Syst. Control Lett. 2022, 162, 105178. [Google Scholar] [CrossRef]
  17. Xu, H.; Zhu, Q. New criteria on pth moment exponential stability of stochastic delayed differential systems subject to average-delay impulses. Syst. Control Lett. 2022, 164, 105234. [Google Scholar] [CrossRef]
  18. Li, Z.; Xu, L.; Ma, W. Global attracting sets and exponential stability of stochastic functional differential equations driven by the time-changed Brownian motion. Syst. Control Lett. 2022, 160, 105103. [Google Scholar] [CrossRef]
  19. Zhu, Q.; Cao, J. Exponential stability analysis of stochastic reaction-diffusion Cohen-Grossberg neural networks with mixed delays. Neurocomputing 2011, 74, 3084–3091. [Google Scholar] [CrossRef]
  20. Zhu, Q.; Li, X.; Yang, X. Exponential stability for stochastic reaction-diffusion BAM neural networks with time-varying and distributed delays. Appl. Math. Comput. 2011, 217, 6078–6091. [Google Scholar] [CrossRef]
  21. Liu, Y.; Xu, J.; Lu, J.; Gui, W. Stability of stochastic time-delay systems involving delayed impulses. Automatica 2023, 152, 110955. [Google Scholar] [CrossRef]
  22. Kechiche, D.; Khemmoudj, A.; Medjden, M. Exponential stability result for the wave equation with Kelvin-Voigt damping and past history subject to Wentzell boundary condition and delay term. Math. Methods Appl. Sci. 2023, 47, 1546–1576. [Google Scholar] [CrossRef]
  23. Zhang, J.; Nie, X. Coexistence and locally exponential stability of multiple equilibrium points for fractional-order impulsive control Cohen-Grossberg neural networks. Neurocomputing 2024, 589, 127705. [Google Scholar] [CrossRef]
  24. Zhu, D.; Zhu, Q. Almost sure exponential stability of stochastic nonlinear semi-Markov jump T-S fuzzy systems under intermittent EDF scheduling controller. J. Frankl. Inst. 2024, 361, 107188. [Google Scholar] [CrossRef]
  25. Liu, X.; Cheng, P. Almost sure exponential stability and stabilization of hybrid stochastic functional differential equations with Lévy noise. J. Appl. Math. Comput. 2023, 69, 3433–3458. [Google Scholar] [CrossRef]
  26. Pan, L.; Cao, J. Input-to-state Stability of Impulsive Stochastic Nonlinear Systems Driven by G-Brownian Motion. Int. J. Control Autom. Syst. 2021, 19, 1–10. [Google Scholar] [CrossRef]
  27. Li, X.; Zhang, T.; Wu, J. Input-to-State Stability of Impulsive Systems via Event-Triggered Impulsive Control. IEEE Trans. Cybern. 2022, 52, 7187–7195. [Google Scholar] [CrossRef] [PubMed]
  28. Liu, S.; Russo, A. Further characterizations of integral input-to-state stability for hybrid systems. Automatica 2024, 163, 111484. [Google Scholar] [CrossRef]
  29. Yang, X.; Zhao, T.; Zhu, Q. Aperiodic event-triggered controls for stochastic functional differential systems with sampled-data delayed output. IEEE Trans. Autom. Control 2025, 70, 2090–2097. [Google Scholar] [CrossRef]
  30. Huang, F.; Gao, S. Stochastic integral input-to-state stability for stochastic delayed networked control systems and its applications. Commun. Nonlinear Sci. Numer. Simul. 2024, 138, 108177. [Google Scholar] [CrossRef]
  31. Xu, G.; Bao, H. Further results on mean-square exponential input-to-state stability of time-varying delayed BAM neural networks with Markovian switching. Neurocomputing 2020, 376, 191–201. [Google Scholar] [CrossRef]
  32. Song, Q.; Zhao, Z.; Liu, Y.; Alsaadi, F. Mean-square input-to-state stability for stochastic complex-valued neural networks with neutral delay. Neurocomputing 2021, 470, 269–277. [Google Scholar] [CrossRef]
  33. Wu, K.; Ren, M.; Liu, X. Exponential input-to-state stability of stochastic delay reaction-diffusion neural networks. Neurocomputing 2020, 412, 399–405. [Google Scholar] [CrossRef]
  34. Wang, W. Mean-square exponential input-to-state stability of stochastic delayed recurrent neural networks with local Lipschitz condition. Math. Methods Appl. Sci. 2023, 46, 17788–17797. [Google Scholar] [CrossRef]
  35. Radhika, T.; Chandrasekar, A.; Vijayakumar, V.; Zhu, Q. Analysis of Markovian Jump Stochastic Cohen–Grossberg BAM Neural Networks with Time Delays for Exponential Input-to-State Stability. Neural Process. Lett. 2023, 55, 11055–11072. [Google Scholar] [CrossRef]
  36. Zhu, Q.; Cao, J. Mean-square exponential input-to-state stability of stochastic delayed neural networks. Neurocomputing 2014, 131, 157–163. [Google Scholar] [CrossRef]
  37. Mao, X. Stochastic Differential Equations and Applications, 2nd ed.; Woodhead Publishing: Cambridge, UK, 2007. [Google Scholar]
  38. Gikhman, I.; Skorochod, A.V. Stochastic Differential Equations (SDEs); Naukova Dumka, Kiev, 1968; English and German Translation; Springer: Berlin/Heidelberg, Germany, 1971. (in Russian) [Google Scholar]
  39. Schurz, H. Basic concepts of numerical analysis of stochastic differential equations explained by balanced implicit theta methods. In Stochastic Differential Equations and Processes; Mounir, Z., Filatova, D.V., Eds.; Springer Proceedings in Mathematics 7; Springer: New York, NY, USA, 2012; pp. 1–139. [Google Scholar]
  40. Graham, C.; Talay, D. Discretization of Stochastic Differential Equations. Stochastic Simulation and Monte Carlo Methods. In Mathematical Foundations of Stochastic Simulation; Springer: New York, NY, USA, 2013; pp. 155–195. [Google Scholar]
  41. Liptser, R.S.; Shiryaev, A.N. Statistics of Random Processes I: General Theory, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
Figure 1. The state response of system (2) in Example 1.
Figure 2. The state response of network (3) in Example 2.

