Article

Investigating Uniform Stability of Fractional-Order Complex-Valued Stochastic Neural Networks with Impulses via a Direct Method

1
Department of Mathematics and Artificial Intelligence, Chongqing University of Arts and Sciences, Chongqing 402100, China
2
Department of Mathematics, Minnan Normal University, Zhangzhou 363000, China
*
Author to whom correspondence should be addressed.
Axioms 2026, 15(1), 17; https://doi.org/10.3390/axioms15010017
Submission received: 17 November 2025 / Revised: 12 December 2025 / Accepted: 22 December 2025 / Published: 26 December 2025
(This article belongs to the Special Issue Advances in Nonlinear Dynamics: Theory and Application)

Abstract

This paper focuses on the existence and uniqueness of solutions for a class of impulsive fractional-order complex-valued stochastic neural networks in the complex domain, a topic hitherto undocumented. The combination of fractional order, stochastic terms, complex values, and impulses allows the model to capture memory-dependent, noise-resilient, phase-sensitive, and discontinuous dynamics. These dynamics are crucial for applications in neuroscience, signal processing, engineering control, and time-series prediction. In contrast to simpler models, this framework provides greater fidelity when simulating real-world systems and wider applicability without redundant component splitting, thus justifying the need for such a comprehensive model. Leveraging the contraction mapping principle and a contradiction argument, sufficient conditions are deduced to guarantee the existence and uniform stability (in the distribution sense) of solutions for the impulsive fractional-order complex-valued stochastic neural networks under study. Finally, a numerical example is presented to illustrate the feasibility and precision of our findings.

1. Introduction

An artificial neural network, often simply called a “neural network” in engineering and academic circles, is a mathematical model. It emulates the synaptic connection architectures of the human brain to carry out information processing tasks. As a prominent research focus globally, neural networks have achieved remarkable success across diverse fields—including image processing, combinatorial optimization, associative memory, pattern recognition, complex-frequency signal analysis (e.g., radar, sonar, quantum signals), and noise-robust feature extraction, as well as time-series prediction. These applications fully demonstrate the model’s inherent capabilities of association, generalization, and analogy [1,2,3,4,5,6].
Fractional calculus, serving as an effective mathematical tool to characterize the intrinsic non-local and hereditary traits of processes, enables a precise depiction of neural networks’ memory features and enhances model authenticity. Therefore, fractional-order neural networks are more in line with the reality of neural networks, and their qualitative analysis and control have attracted many scholars’ attention [7,8,9,10,11,12,13,14,15,16].
In [10], the authors investigated the synchronization of fractional-order complex-valued systems through the Lyapunov function method. Zhang et al. [11] examined the stabilization of fractional-order neural networks using the LMI technique. Recently, Xu et al. [12] achieved adaptive finite-time synchronization control for fractional-order complex-valued dynamical networks with multiple weights. Additionally, several dynamical behaviors merit exploration, including Mittag–Leffler stability [13], synchronization [14,15], finite-time stability [16], and more. As artificial neural networks evolve rapidly, a substantial gap remains between them and the biological brain. Thus, some scholars have put forward a third generation of neural networks, known as impulsive neural networks.
Impulsive neural networks strive to bridge the divide between neuroscience and machine learning by employing models that closely match biological neural mechanisms of computation. Essentially a distinctive class of impulsive differential equations, they characterize abrupt alterations taking place at specific instants. In control systems, such equations can depict a system’s pulse and step responses, facilitating the analysis of its stability and performance [17,18,19,20,21]. For instance, in [17], the Lyapunov method is utilized to establish conditions for the stability and synchronization of impulsive neural networks. Li et al. introduced input-to-state stability via a Lyapunov method involving an indefinite derivative. Reference [18] examines memristor-based BAM neural networks, which exhibit lag synchronization under event-triggered hybrid impulsive control. In [19], relying on the stability theory of fractional-order linear differential equations, together with the Mittag–Leffler function, the Laplace transform, and the Gronwall inequality, a sufficient condition is deduced for the uniform stability of a class of complex-valued fractional-order neural networks with linear impulses.
Notably, based on our current understanding, there remains a lack of scholarly work that explores the existence and stability of the almost automorphic solution (in the distribution sense) for time-delayed impulsive fractional-order complex-valued stochastic neural networks. The subsequent sections will outline the novel insights and contributions of this study.
(1)
Primarily, based on our current understanding, this study constitutes the first endeavor to scrutinize the existence and uniform stability (from a distribution-sense perspective) of solutions for impulsive fractional-order complex-valued stochastic neural networks with time delays.
(2)
As a common trend, most researchers concentrate on real-valued neural networks. In contrast, this study centers on impulsive fractional-order stochastic neural networks featuring delays, explored within the complex domain.
(3)
Ultimately, the analytical tools developed in this paper can be leveraged to examine distribution-sense solutions for alternative types of impulsive fractional-order stochastic neural networks with delays. To date, no existing research has utilized this methodology to investigate such neural network architectures.
Within the scope of this paper, our focus lies in examining the existence and stability, in the distribution sense, of solutions for an impulsive fractional-order complex-valued stochastic neural network model incorporating delays, specified as follows:
$$
\begin{cases}
D_t^{\alpha} x_p(t) = -c_p(t) x_p(t) + \displaystyle\sum_{q=1}^{n} a_{pq}(t) f_q\big(x_q(t)\big) + \sum_{q=1}^{n} b_{pq}(t) g_q\big(x_q(t-\tau_{pq}(t))\big) + u_p(t) \\
\qquad\qquad\quad + \displaystyle\sum_{q=1}^{n} d_{pq}(t)\, \delta_{pq}\big(x_q(t)\big)\, \dfrac{d\omega_q(t)}{dt}, \quad t \in J = [0, T],\ T > 0,\ t \neq t_k, \\
\Delta x_p(t_k) = I_{pk}\big(x_p(t_k)\big), \quad k = 1, 2, \dots, m, \\
x_p(t) = \varphi_p(t), \quad t \in [-\tau^+, 0],\ \varphi_p \in L_{\mathcal{F}_0}^{2}\big([-\tau^+, 0], \mathbb{C}^n\big).
\end{cases}
$$
In the above formulation, $D_t^{\alpha}$ denotes the Caputo fractional derivative of order $\alpha$ ($0 < \alpha < 1$); let $p \in \{1, 2, \dots, n\} =: I$, where $n$ stands for the number of neurons in the network layers. $x_p(\cdot) \in \mathbb{C}$ denotes the state variable of the $p$-th neuron unit; $c_p(\cdot) \in \mathbb{R}^+$ indicates the rate at which the $p$-th unit restores its membrane potential to the resting state when isolated from the network and free of external input signals. $a_{pq}(\cdot), b_{pq}(\cdot) \in \mathbb{C}$ represent the connection weights for non-delayed and delayed signal transmission, respectively, and $d_{pq}(\cdot) \in \mathbb{C}$ denotes the noise intensity coefficients; $f_q(\cdot), g_q(\cdot) : \mathbb{C} \to \mathbb{C}$ are the activation functions governing signal propagation; $u_p(\cdot) \in \mathbb{R}$ is the external input acting on the $p$-th unit; $\tau_{pq}(\cdot) \in \mathbb{R}^+$ refers to the transmission delay at time $t$, with $\tau^+ = \sup_{t \in \mathbb{R}} \{\tau_{pq}(t)\}$; $\omega(t) = (\omega_1(t), \omega_2(t), \dots, \omega_n(t))^T$ denotes an $n$-dimensional Brownian motion defined on a complete probability space; $\delta_{pq}(\cdot) : \mathbb{C} \to \mathbb{C}$ is a Borel measurable function. $C = \mathrm{diag}\{c_1, c_2, \dots, c_n\}$, and $C : D(C) \subset \mathbb{C}^n \to \mathbb{C}^n$ is the infinitesimal generator of an $\alpha$-resolvent family $(S_{\alpha}(t))_{t \ge 0}$. $I_{pk} : \mathbb{C} \to \mathbb{C}$, $k = 1, 2, \dots, m$, are appropriate impulse functions. The impulse instants satisfy $0 = t_0 < t_1 < t_2 < \dots < t_m < t_{m+1} = T$. $\Delta x_p(t_k) = x_p(t_k^+) - x_p(t_k^-)$, where $x_p(t_k^+) = \lim_{t \to 0^+} x_p(t_k + t)$ and $x_p(t_k^-) = \lim_{t \to 0^+} x_p(t_k - t)$ denote the right- and left-hand limits of $x_p(t)$ at time $t = t_k$, respectively.
Consider a complete probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ equipped with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions. Let $\varphi_p(\cdot)$ denote the initial function, where $\varphi(\cdot) \in L_{\mathcal{F}_0}^2([-\tau^+, 0], \mathbb{C}^n)$. Here, $L_{\mathcal{F}_0}^2([-\tau^+, 0], \mathbb{C}^n)$ represents the collection of all $\mathcal{F}_0$-measurable, $\mathbb{C}^n$-valued random variables that are independent of the Brownian motion $\omega$ and possess finite second moments.
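For intuition about the interplay of memory, noise, and jumps in system (1), a scalar caricature of the model can be simulated numerically. The sketch below is purely illustrative and is not a scheme from this paper: it uses Grünwald–Letnikov weights (applied to $x - x(0)$) as a stand-in for the Caputo derivative, a crude Euler–Maruyama-style noise increment, a tanh activation, and multiplicative impulsive jumps; every parameter value is an assumption of ours.

```python
import math
import random

def simulate_scalar(alpha=0.6, c=4.8, a=0.5, sigma=0.05, eta=0.1,
                    T=2.0, h=1e-2, impulse_times=(0.7, 1.4), x0=0.3, seed=1):
    """Crude scheme for a scalar caricature of system (1):
        D^alpha x = -c x + a tanh(x) + sigma * dW/dt,  with jumps x -> x + eta x.
    Grunwald-Letnikov weights approximate the Caputo derivative of x - x0.
    Illustrative only; not a convergence-analyzed discretization."""
    random.seed(seed)
    n_steps = int(round(T / h))
    # GL weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j)
    w = [1.0]
    for j in range(1, n_steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    xs = [x0]
    jumps = sorted(impulse_times)
    next_jump = 0
    for n in range(1, n_steps + 1):
        # memory (history) term of the fractional derivative, acting on x - x0
        hist = sum(w[j] * (xs[n - j] - x0) for j in range(1, n + 1))
        drift = -c * xs[-1] + a * math.tanh(xs[-1])
        noise = sigma * random.gauss(0.0, 1.0)
        x_new = x0 - hist + (h ** alpha) * drift + (h ** (alpha - 0.5)) * noise
        t = n * h
        if next_jump < len(jumps) and t >= jumps[next_jump]:
            x_new += eta * x_new  # impulsive jump, Delta x = eta * x
            next_jump += 1
        xs.append(x_new)
    return xs
```

Because the decay rate $c$ dominates the activation gain, the simulated path stays bounded; the jumps appear as small kinks at the impulse instants.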
The structure of this paper is as follows. Section 2 revisits fundamental definitions and lemmas. In Section 3, leveraging the contraction mapping principle, we delve into the existence of distribution-sense solutions for delayed impulsive fractional-order complex-valued stochastic neural networks. Section 4, grounded in the Lyapunov function method, examines the uniform stability of such distribution-sense solutions. Finally, Section 5 presents a numerical example to validate the theoretical findings.

2. Materials and Methods

Definition 1 
([22]). The Caputo derivative of order α for a function f : [ 0 , ) C , which is at least n-times differentiable, can be defined as
$$
D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-s)^{n-\alpha-1} f^{(n)}(s)\, ds = I^{n-\alpha} f^{(n)}(t),
$$
for $n - 1 < \alpha < n$, $n \in \mathbb{N}$. If $0 < \alpha < 1$, then
$$
D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} f^{(1)}(s)\, ds.
$$
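Numerically, the $0 < \alpha < 1$ Caputo derivative above is commonly approximated by the standard L1 scheme, which replaces $f^{(1)}$ by first differences on a uniform grid. The sketch below is a textbook discretization, not a construction from this paper; the grid step `h` and sample array are our notation.

```python
import math

def caputo_l1(f_vals, h, alpha):
    """L1 approximation of the Caputo derivative D^alpha f at the last grid
    point, for 0 < alpha < 1, given samples f_vals = [f(0), f(h), ..., f(nh)].
    The scheme is exact for piecewise-linear f."""
    n = len(f_vals) - 1
    c = h ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        # quadrature weights b_j = (j+1)^{1-alpha} - j^{1-alpha}
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)
        total += b * (f_vals[n - j] - f_vals[n - j - 1])
    return c * total
```

As a sanity check, for $f(t) = t$ the exact Caputo derivative is $t^{1-\alpha}/\Gamma(2-\alpha)$, which the L1 scheme reproduces to rounding error.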
Remark 1. 
It is a well-known property that the Caputo derivative of a constant function equals zero. For a Caputo derivative of order α > 0 , its Laplace transform can be expressed as follows:
$$
\mathcal{L}\{D_t^{\alpha} f(t); \lambda\} = \lambda^{\alpha} \hat{f}(\lambda) - \sum_{k=0}^{n-1} \lambda^{\alpha-k-1} f^{(k)}(0), \quad n - 1 < \alpha \le n.
$$
$\Gamma(\cdot)$ represents the Gamma function, a fundamental special function widely used in fractional calculus and the analysis of fractional-order systems.
Definition 2 
([22]). The fractional integral of order $\alpha$ of a function $f$ is $I_t^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-s)^{\alpha-1} f(s)\, ds$, $\alpha > 0$.
Definition 3 
([23]). The Mittag–Leffler function with two parameters is defined by the series expansion
$$
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)} = \frac{1}{2\pi i} \int_{\mathcal{C}} \frac{\mu^{\alpha-\beta} e^{\mu}}{\mu^{\alpha} - z}\, d\mu, \quad \alpha, \beta \in \mathbb{C},\ \operatorname{Re}(\alpha) > 0,
$$
where $\mathcal{C}$ is a contour which starts and ends at $-\infty$ and encircles the disc $|\mu| \le |z|^{1/\alpha}$ counter-clockwise. For short, $E_{\alpha}(z) = E_{\alpha,1}(z)$.
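The series in Definition 3 translates directly into a truncated numerical approximation. The sketch below is a naive partial sum, reliable only for moderate $|z|$; it is an illustration of the definition, not a production-grade Mittag–Leffler evaluator (those use contour or Padé-based algorithms).

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=80):
    """Truncated series approximation of E_{alpha,beta}(z).

    Works for real or complex z of moderate modulus; for large |z| the
    partial sum suffers catastrophic cancellation and should not be trusted.
    """
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))
```

Two classical special cases provide quick checks: $E_{1,1}(z) = e^{z}$ and $E_{2,1}(z) = \cosh(\sqrt{z})$.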
Remark 2. 
A particularly notable property of Mittag–Leffler functions is closely linked to their Laplace integral representation, which is given by
$$
\int_0^{\infty} e^{-\lambda t}\, t^{\beta-1} E_{\alpha,\beta}(w t^{\alpha})\, dt = \frac{\lambda^{\alpha-\beta}}{\lambda^{\alpha} - w}, \quad \operatorname{Re}\lambda > w^{1/\alpha},\ w > 0.
$$
For additional details regarding this property, refer to reference [23].
Definition 4 
([24]). A closed linear operator $A$ induced by $C(t)$ is said to be sectorial if there exist constants $w \in \mathbb{R}$, $\theta \in [\frac{\pi}{2}, \pi]$, and $M > 0$ satisfying the two conditions outlined below:
(i) 
$\rho(A) \supset \Sigma_{\theta, w} = \{\lambda \in \mathbb{C} : \lambda \neq w,\ |\arg(\lambda - w)| < \theta\}$;
(ii) 
$\|R(\lambda, A)\| \le \dfrac{M}{|\lambda - w|}$, $\lambda \in \Sigma_{\theta, w}$.
Definition 5 
([25]). Let A be a closed linear operator induced by C ( t ) , with its domain D ( A ) specified in a Banach space X. Denote the resolvent set of A as ρ ( A ) . We define A as the generator of an α-resolvent family if there exist w 0 and a strongly continuous function S α : R + L ( X ) , where L ( X ) denotes the Banach space consisting of all bounded linear operators mapping X to X, with the associated norm denoted by · such that { λ α : R e λ > w } ρ ( A ) , and the following holds:
$$
(\lambda^{\alpha} I - A)^{-1} x = \int_0^{\infty} e^{-\lambda t} S_{\alpha}(t)\, x\, dt, \quad \operatorname{Re}\lambda > w,\ x \in X.
$$
Here, the function S α ( t ) is referred to as the α-resolvent family generated by A.
Definition 6 
([26]). Let A be a closed linear operator induced by C ( t ) , with its domain D ( A ) defined in a Banach space X and α > 0 . We define A as the generator of a solution operator if there exist ω > 0 and a strongly continuous function S α : R + L ( X ) satisfying two conditions: first, { λ α : R e λ > ω } ρ ( A ) ; second, the integral equality below holds:
$$
\lambda^{\alpha-1} (\lambda^{\alpha} I - A)^{-1} x = \int_0^{\infty} e^{-\lambda t} S_{\alpha}(t)\, x\, dt, \quad \operatorname{Re}\lambda > \omega,\ x \in X.
$$
Here, the function S α ( t ) is termed the solution operator generated by A.
Definition 7. 
Suppose the function f satisfies a uniform Hölder condition with exponent $\beta \in (0, 1]$, and let A be a sectorial operator. Under these premises, the unique solution of the Cauchy problem
$$
D_t^{\alpha} x(t) = A x(t) + f(t, x_t, B x(t)), \quad t \ge t_0,\ t_0 \in \mathbb{R},\ 0 < \alpha < 1; \qquad x(t) = \varphi(t), \quad t \le t_0,
$$
is given by
$$
x(t) = T_{\alpha}(t - t_0)\, x(t_0^+) + \int_{t_0}^{t} S_{\alpha}(t - s)\, f(s, x_s, B x(s))\, ds,
$$
where $T_{\alpha}(t) = E_{\alpha,1}(A t^{\alpha}) = \frac{1}{2\pi i} \int_{\hat{B}_r} e^{\lambda t} \frac{\lambda^{\alpha-1}}{\lambda^{\alpha} - A}\, d\lambda$ and $S_{\alpha}(t) = t^{\alpha-1} E_{\alpha,\alpha}(A t^{\alpha}) = \frac{1}{2\pi i} \int_{\hat{B}_r} e^{\lambda t} \frac{1}{\lambda^{\alpha} - A}\, d\lambda$, with $\hat{B}_r$ denoting the Bromwich path. $S_{\alpha}(t)$ is called the α-resolvent family and $T_{\alpha}(t)$ the solution operator generated by A. An operator A is said to belong to the class $C^{\alpha}(X; M, \omega)$, abbreviated $C^{\alpha}(M, \omega)$, if problem (2) admits a solution operator $T_{\alpha}$ satisfying $\|T_{\alpha}(t)\| \le M e^{\omega t}$ for all $t \ge 0$. We denote by $A^{\alpha}(\theta_0, \omega_0)$ the set of all operators $A \in C^{\alpha}$ that generate analytic solution operators $T_{\alpha}$ of type $(\theta_0, \omega_0)$.
Lemma 1 
([25] (Krasnoselskii’s fixed point theorem)). Let B be a nonempty closed convex subset of a Banach space $(X, \|\cdot\|)$. Suppose that P and Q map B into X such that
(i) 
P x + Q y B whenever x , y B ;
(ii) 
P is compact and continuous;
(iii) 
Q is a contraction mapping.
Then there exists z B such that z = P z + Q z .
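The existence argument below ultimately rests on the contraction part of Lemma 1, i.e., on the Banach fixed-point theorem. As a toy illustration of why a contraction forces a unique fixed point reachable by iteration, one might sketch the Picard iteration; the specific map used here is our own example, unrelated to the operator Φ of the paper.

```python
def fixed_point(Phi, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = Phi(x_k) until successive iterates agree to tol.

    Converges (geometrically) whenever Phi is a contraction on a complete
    space, by the Banach fixed-point theorem (Lemma 1 with P = 0)."""
    x = x0
    for _ in range(max_iter):
        x_new = Phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge; Phi may not be a contraction")
```

For instance, the contraction $\Phi(x) = \tfrac{1}{2}x + 1$ (Lipschitz constant $\tfrac{1}{2} < 1$) has the unique fixed point $x = 2$, found from any starting point.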
Let H be a real separable Hilbert space. We denote as L F 0 2 ( Ω , H ) the space consisting of all F 0 -measurable, H-valued random variables X that satisfy the finiteness condition:
$$
E(\|X\|^2) = \int_{\Omega} \|X\|^2\, dP < \infty.
$$
For any random variable X L F 0 2 ( Ω , C n ) , we define two quantities as follows:
$$
\|X\|_2 := \Big( \int_{\Omega} \|X\|_0^2\, dP \Big)^{1/2}, \qquad E\|X\|_0^2 := \int_{\Omega} \|X\|_0^2\, dP,
$$
where $\|X\|_0$ denotes the maximum norm given by $\|X\|_0 = \max_{p \in I} \{|X_p|_{\mathbb{C}}\}$.
For the sake of convenience, we introduce the following notation:
$$
c_p^{+} = \sup_{t \in \mathbb{R}} |c_p(t)|, \quad c_p^{-} = \inf_{t \in \mathbb{R}} |c_p(t)|, \quad \tau^{+} = \max_{p, q \in I} \{\tau_{pq}\},
$$
$$
a_{pq}^{+} = \sup_{t \in \mathbb{R}} |a_{pq}(t)|, \quad b_{pq}^{+} = \sup_{t \in \mathbb{R}} |b_{pq}(t)|, \quad d_{pq}^{+} = \sup_{t \in \mathbb{R}} |d_{pq}(t)|, \quad u_p^{+} = \sup_{t \in \mathbb{R}} |u_p(t)|.
$$
For the purpose of obtaining our conclusions, we present the following assumptions:
( H 1 )
If $\alpha \in (0, 1)$ and $A \in A^{\alpha}(\theta_0, \omega_0)$, then for any $x \in X$ and $t > 0$ we have $\|T_{\alpha}(t)\| \le M e^{\omega t}$ and $\|S_{\alpha}(t)\| \le \bar{C} e^{\omega t}(1 + t^{\alpha-1})$, $\omega > \omega_0$. Thus $\|T_{\alpha}(t)\| \le \tilde{M}_T$ and $\|S_{\alpha}(t)\| \le t^{\alpha-1} \tilde{M}_S$, where $\tilde{M}_T = \sup_{0 \le t \le T} \|T_{\alpha}(t)\|$ and $\tilde{M}_S = \sup_{0 \le t \le T} \bar{C} e^{\omega t}(1 + t^{1-\alpha})$; for more details see [23].
( H 2 )
For each $q \in I$, there exist positive constants $L_q^{f}, L_q^{g}, L_{pq}^{\delta}$ such that the following Lipschitz conditions hold for all $x, y \in \mathbb{C}$:
$$
|f_q(x) - f_q(y)|_{\mathbb{C}} \le L_q^{f}\, |x - y|_{\mathbb{C}},
$$
$$
|g_q(x) - g_q(y)|_{\mathbb{C}} \le L_q^{g}\, |x - y|_{\mathbb{C}},
$$
$$
|\delta_{pq}(x) - \delta_{pq}(y)|_{\mathbb{C}} \le L_{pq}^{\delta}\, |x - y|_{\mathbb{C}}.
$$
( H 3 )
For every $p \in I$ and each integer $k = 1, 2, \dots, m$, there exists a positive constant $\eta_{pk}$ such that the inequality
$$
|I_{pk}(x) - I_{pk}(y)|_{\mathbb{C}} \le \eta_{pk}\, |x - y|_{\mathbb{C}}
$$
holds for all $x, y \in \mathbb{C}$.
( H 4 )
The following inequality holds:
$$
\max_{1 \le k \le m} \Big\{ 4\tilde{M}_T^2 (1 + \eta_{pk}) + \frac{4\tilde{M}_S^2 T^{2\alpha}}{\alpha^2} \sum_{q=1}^{n} \big( a_{qp}^{+} L_p^{f} + b_{qp}^{+} L_p^{g} \big) + \frac{4\tilde{M}_S^2 T^{2\alpha-1}}{2\alpha - 1} \sum_{q=1}^{n} d_{qp}^{+} L_p^{\delta} \Big\} < 1.
$$

3. Results

To establish the existence of solutions in the distribution sense for Equation (1), we resort to Lemma 1 as the core theoretical tool in this section. Next, we introduce the function space
$B_b = \big\{ x : [-\tau^+, T] \to \mathbb{C} \ \big|\ x|_{J_k} \in C(J_k, \mathbb{C}) \text{ for each } k = 0, 1, \dots, m, \text{ and the one-sided limits } x(t_k^+), x(t_k^-) \text{ exist with } x(t_k^-) = x(t_k) \big\},$
where the subintervals are defined as J k = ( t k , t k + 1 ] for k = 0 , 1 , 2 , , m . Equip this space with the norm
$$
\|x\|_{B_b} = \Big( \sup_{t \in [-\tau^+, T]} E\|x(t)\|_0^2 \Big)^{1/2},
$$
in which x ( t ) 0 = max p I x p ( t ) C . Then the pair ( B b , · B b ) forms a Banach space. It should be noted that the interval J 0 is given by J 0 = J { t 1 , t 2 , , t m } .
Definition 8. 
Let $F_p : J_0 \to \mathbb{C}$ be a continuous function. An $\mathcal{F}_t$-adapted stochastic process $x_p : [-\tau^+, T] \to \mathbb{C}$ is called a mild solution of system (1) if it satisfies the following conditions:
(I) 
$x_p(t)$ is $B_b$-valued, and the restriction of $x_p(\cdot)$ to each interval $(t_k, t_{k+1}]$, $k = 1, 2, \dots, m$, is continuous.
(II) 
For each t J , x p ( t ) satisfies the following integral equation:
x p ( t ) = φ p ( t ) , t [ τ + , 0 ] 0 t S α ( t s ) F q ( s ) d s + 0 t S α ( t s ) δ q ( s ) d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ x p ( t 1 + I p 1 ( x p ( t 1 ) ) ) ] + t 1 t S α ( t s ) F q ( s ) d s + t 1 t S α ( t s ) δ q ( s ) d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ x p ( t m + I p m ( x p ( t m ) ) ) ] + t m t S α ( t s ) F q ( s ) d s + t m t S α ( t s ) δ q ( s ) d ω ( s ) , t ( t m , T ] ,
where
F q ( t , x q ( t ) ) = q = 1 n a p q ( t ) f q ( x q ( t ) ) + q = 1 n b p q ( t ) g q ( x q ( t τ p q ( t ) ) ) + u p ( t ) δ q ( t , x q ( t ) ) = q = 1 n d p q ( t ) δ p q ( x q ( t ) ) .
( I I I )
Δ x p | t = t k = I p k ( x p ( t k ) ) , and the restriction of x p ( · ) to the interval [ 0 , T ] { t 1 , t 2 , , t m } is continuous, where k = 1 , 2 , , m .
Theorem 1. 
Suppose that Assumptions $(H_1)$–$(H_4)$ are satisfied. If the operator A belongs to the class $A^{\alpha}(\theta_0, \omega_0)$, then system (1) admits a unique mild solution, in the distribution sense, in the space $B_b$.
Proof. 
Define the operator Π : B b B b as
( Π x p ) ( t ) = φ p ( t ) , t [ τ + , 0 ] 0 t S α ( t s ) F q ( s , x q ( s ) ) d s + 0 t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ x p ( t 1 + I p 1 ( x p ( t 1 ) ) ) ] + t 1 t S α ( t s ) F q ( s , x q ( s ) ) d s + t 1 t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ x p ( t m + I p m ( x p ( t m ) ) ) ] + t m t S α ( t s ) F q ( s , x q ( s ) ) d s + t m t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t ( t m , T ] .
For φ p B b , define
G p ( t ) = φ p ( t ) , t [ τ + , 0 ] , 0 , t J ;
then G p ( t ) = φ p ( t ) . Next we define the function
z ¯ p ( t ) = 0 , t [ τ + , 0 ] , z p ( t ) , t J ,
for each z p ( t ) C ( J , C ) with z p ( 0 ) = 0 . If x p ( t ) satisfies (3), then x p ( t ) = G p ( t ) + z ¯ p ( t ) for t J , which implies that x p ( t τ ( t ) ) = G p ( t τ ( t ) ) + z ¯ p ( t τ ( t ) ) for t J , and the function z p ( · ) satisfies
z p ( t ) = 0 t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + 0 t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ G p ( t 1 ) + z ¯ p ( t 1 ) + I p 1 ( G p ( t 1 ) + z ¯ p ( t 1 ) ) ] + t 1 t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + t 1 t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ G p ( t m ) + z ¯ p ( t m ) + I p m ( G p ( t m ) + z ¯ p ( t m ) ) ] + t m t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + t m t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t ( t m , T ] .
Set B 0 * = { z p B b , such that z p ( 0 ) = 0 } , and for any z p B 0 * , we have
z p B 0 * = z p ( 0 ) B b + max p I sup t J ( E z p ( t ) C 2 ) 1 2 = max p I sup t J ( E z p ( t ) C 2 ) 1 2 ;
thus, ( B 0 * , · B 0 * ) is a Banach space.
Define the operator Φ : B 0 * B 0 * as
( Φ z p ) ( t ) = 0 t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + 0 t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ G p ( t 1 ) + z ¯ p ( t 1 ) + I p 1 ( G p ( t 1 ) + z ¯ p ( t 1 ) ) ] + t 1 t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + t 1 t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ G p ( t m ) + z ¯ p ( t m ) + I p m ( G p ( t m ) + z ¯ p ( t m ) ) ] + t m t S α ( t s ) F q ( s , G p ( s ) + z ¯ p ( s ) ) d s + t m t S α ( t s ) δ q ( s , G p ( s ) + z ¯ p ( s ) ) d ω ( s ) , t ( t m , T ] .
To establish the existence of the desired solution, it suffices to verify that the operator Φ admits a unique fixed point in B 0 * . Let z p , z p * B 0 * be arbitrary elements; for any t [ 0 , t 1 ] , the following estimation holds:
Φ z Φ z * B 0 * 2 = max p I sup t [ 0 , t 1 ] E ( Φ z p ) ( t ) ( Φ z p * ) ( t ) C 2 max p I sup t [ 0 , t 1 ] 2 E 0 t S α ( t s ) [ F q ( s , G p ( s ) + z ¯ p ( s ) ) F q ( s , G p ( s ) + z ¯ p * ( s ) ) ] d s C 2 + max p I sup t [ 0 , t 1 ] 2 E 0 t S α ( t s ) [ δ q ( s , G p ( s ) + z ¯ p ( s ) ) δ q ( s , G p ( s ) + z ¯ p * ( s ) ) ] d ω ( s ) C 2 max p I sup t [ 0 , t 1 ] 2 0 t S α ( t s ) d s 0 t S α ( t s ) × E F q ( s , G p ( s ) + z ¯ p ( s ) ) F q ( s , G p ( s ) + z ¯ p * ( s ) ) C 2 d s + max p I sup t [ 0 , t 1 ] 2 0 t S α ( t s ) d s 0 t S α ( t s ) × E δ q ( s , G p ( s ) + z ¯ p ( s ) ) δ q ( s , G p ( s ) + z ¯ p * ( s ) ) C 2 d s max p I sup t [ 0 , t 1 ] 2 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n a q p + L p f × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s + max p I sup t [ 0 , t 1 ] 2 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n b q p + L p g × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s + max p I sup t [ 0 , t 1 ] 2 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n d q p + L p δ × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s 2 M ˜ S 2 T 2 α α 2 q = 1 n ( a q p + L p f + b q p + L p g ) z ¯ z ¯ * B 0 * 2 + 2 M ˜ S 2 T 2 α 1 2 α 1 q = 1 n d q p + L p δ z ¯ z ¯ * B 0 * 2 .
For all t ( t 1 , t 2 ] , we have
Φ z Φ z * B 0 * 2 = max p I sup t ( t 1 , t 2 ] E ( Φ z p ) ( t ) ( Φ z p * ) ( t ) C 2 max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E z p ( t 1 ) z p * ( t 1 ) C 2 + max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E I p 1 ( z p ( t 1 ) ) I p 1 ( z p * ( t 1 ) ) C 2 + max p I sup t ( t 1 , t 2 ] 4 E t 1 t S α ( t s ) [ F q ( s , G p ( s ) + z ¯ p ( s ) ) F q ( s , G p ( s ) + z ¯ p * ( s ) ) ] d s C 2 + max p I sup t ( t 1 , t 2 ] 4 E t 1 t S α ( t s ) [ δ q ( s , G p ( s ) + z ¯ p ( s ) ) δ q ( s , G p ( s ) + z ¯ p * ( s ) ) ] d ω ( s ) C 2 max p I sup t ( t 1 , t 2 ] 4 M ˜ T 2 E z p ( t 1 ) z p * ( t 1 ) C 2 + E I p 1 ( z p ( t 1 ) ) I p 1 ( z p * ( t 1 ) ) C 2 + max p I sup t [ 0 , t 1 ] 4 t 1 t S α ( t s ) d s t 1 t S α ( t s ) × E F q ( s , G p ( s ) + z ¯ p ( s ) ) F q ( s , G p ( s ) + z ¯ p * ( s ) ) C 2 d s + max p I sup t ( t 1 , t 2 ] 4 t 1 t S α ( t s ) d s t 1 t S α ( t s ) × E δ q ( s , G p ( s ) + z ¯ p ( s ) ) δ q ( s , G p ( s ) + z ¯ p * ( s ) ) C 2 d s max p I sup t ( t 1 , t 2 ] 4 M ˜ T 2 E z p ( t 1 ) z p * ( t 1 ) C 2 + E I p 1 ( z p ( t 1 ) ) I p 1 ( z p * ( t 1 ) ) C 2 + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n a q p + L p f × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n b q p + L p g × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 q = 1 n d q p + L p δ × E z ¯ p ( s ) z ¯ p * ( s ) C 2 d s 4 M ˜ T 2 ( 1 + η p 1 ) z ¯ z ¯ * B 0 * 2 + 4 M ˜ S 2 T 2 α α 2 q = 1 n ( a q p + L p f + b q p + L p g ) z ¯ z ¯ * B 0 * 2 + 4 M ˜ S 2 T 2 α 1 2 α 1 q = 1 n d q p + L p δ z ¯ z ¯ * B 0 * 2 .
Similarly, when t ( t k , t k + 1 ] , k = 2 , 3 , , m , we get
Φ z Φ z * B 0 * 2 4 M ˜ T 2 ( 1 + η p k ) z ¯ z ¯ * B 0 * 2 + 4 M ˜ S 2 T 2 α α 2 q = 1 n ( a q p + L p f + b q p + L p g ) z ¯ z ¯ * B 0 * 2 + 4 M ˜ S 2 T 2 α 1 2 α 1 q = 1 n d q p + L p δ z ¯ z ¯ * B 0 * 2 .
Thus, for all t [ 0 , T ] , we can obtain
Φ z Φ z * B 0 * 2 max 1 k m { 4 M ˜ T 2 ( 1 + η p k ) + 4 M ˜ S 2 T 2 α α 2 q = 1 n ( a q p + L p f + b q p + L p g ) + 4 M ˜ S 2 T 2 α 1 2 α 1 q = 1 n d q p + L p δ } z ¯ z ¯ * B 0 * 2 .
Consequently, by virtue of Assumption $(H_4)$, the operator Φ is a contraction mapping. Hence, by the contraction mapping principle (the special case of Lemma 1 with $P = 0$), Φ admits a unique fixed point $z \in B_0^*$, which serves as the mild solution in the distribution sense for the impulsive fractional-order complex-valued stochastic neural network (1) over the interval $[-\tau^+, T]$. □
Remark 3. 
Theorem 1 leverages the fixed point theorem to establish a sufficient condition guaranteeing the existence and uniqueness of solutions for the considered class of impulsive fractional-order complex-valued stochastic neural networks. Notably, this analytical approach is equally applicable to the study of real-valued impulsive fractional-order stochastic neural networks, extending the scope of our proposed methodology.
Definition 9. 
Let x ( t ) = ( x 1 ( t ) , x 2 ( t ) , , x n ( t ) ) T denote a solution in the distribution sense for system (1) with initial value φ ( t ) = ( φ 1 ( t ) , φ 2 ( t ) , , φ n ( t ) ) T C B F 0 ( [ τ + , 0 ] , C n ) , and let y ( t ) = ( y 1 ( t ) , y 2 ( t ) , , y n ( t ) ) T be any other solution to system (1) corresponding to the initial value ψ ( t ) = ( ψ 1 ( t ) , ψ 2 ( t ) , , ψ n ( t ) ) T C B F 0 ( [ τ + , 0 ] , C n ) . The solution to system (1) is defined to be stable if for every given ϵ > 0 , there exists a positive constant δ ( t 0 , ϵ ) such that whenever t t 0 0 and E ψ ( t ) φ ( t ) 0 2 < δ , it holds that E y ( t , t 0 , ψ ) x ( t , t 0 , φ ) B b 2 < ϵ . Herein, the relevant norms are defined as follows:
$$
\|x(t) - y(t)\|_{B_b}^2 = \sup_{t \in [-\tau^+, T]} \|x(t) - y(t)\|_0^2,
$$
$$
\|x(t) - y(t)\|_0^2 = \max_{p \in I} |x_p(t) - y_p(t)|_{\mathbb{C}}^2,
$$
$$
\|\varphi - \psi\|_{\tau} = \max_{p \in I} \sup_{t \in [-\tau^+, 0]} |\varphi_p(t) - \psi_p(t)|_{\mathbb{C}}.
$$
Moreover, the solution is said to be uniformly stable in the mean square sense if the aforementioned constant δ is independent of t 0 .
Theorem 2. 
When 0 < α < 1 , suppose that Assumptions ( H 1 ) to ( H 4 ) are all satisfied. If the constant M defined as
$$
M = \max_{p \in I} \Big\{ 4\tilde{M}_T^2 (1 + \eta_{pk}) + \frac{4\tilde{M}_S^2 T^{2\alpha}}{\alpha^2} \sum_{q=1}^{n} \big( a_{qp}^{+} L_p^{f} + b_{qp}^{+} L_p^{g} \big) + \frac{4\tilde{M}_S^2 T^{2\alpha-1}}{2\alpha - 1} \sum_{q=1}^{n} d_{qp}^{+} L_p^{\delta} \Big\}
$$
satisfies M < 1 , then the impulsive fractional-order complex-valued stochastic neural network (1) admits a unique solution in the mean square sense, which is uniformly stable.
Proof. 
According to Definition 8, any solution to system (1) can be represented as
x p ( t ) = φ p ( t ) , t [ τ + , 0 ] 0 t S α ( t s ) F q ( s , x q ( s ) ) d s + 0 t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ x p ( t 1 + I p 1 ( x p ( t 1 ) ) ) ] + t 1 t S α ( t s ) F q ( s , x q ( s ) ) d s + t 1 t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ x p ( t m + I p m ( x p ( t m ) ) ) ] + t m t S α ( t s ) F q ( s , x q ( s ) ) d s + t m t S α ( t s ) δ q ( s , x q ( s ) ) d ω ( s ) , t ( t m , T ] .
From the above representation, we can get
y p ( t ) x p ( t ) = ψ p ( t ) φ p ( t ) , t [ τ + , 0 ] 0 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s + 0 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) , t [ 0 , t 1 ] T α ( t t 1 ) [ x p ( t 1 + I p 1 ( x p ( t 1 ) ) ) ] + t 1 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s + t 1 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) , t ( t 1 , t 2 ] T α ( t t m ) [ x p ( t m + I p m ( x p ( t m ) ) ) ] + t m t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s + t m t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) , t ( t m , T ] .
For all t [ τ + , 0 ] , we have
E y ( t ) x ( t ) 0 2 = max p I sup t [ τ + , 0 ] ψ p ( t ) φ p ( t ) C 2 ψ φ τ 2 .
For all t [ 0 , t 1 ] , and for the Itô isometry property of the stochastic integral, we have
E y ( t ) x ( t ) 0 2 = max p I sup t [ 0 , t 1 ] E 0 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s + 0 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) C 2 max p I sup t [ 0 , t 1 ] 2 E 0 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s C 2 + max p I sup t [ 0 , t 1 ] 2 E 0 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) C 2 max p I sup t [ 0 , t 1 ] 2 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 E F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) C 2 d s + max p I sup t [ 0 , t 1 ] 2 M ˜ S 2 0 t ( t s ) 2 ( α 1 ) E δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) C 2 d s max p I 2 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ sup t [ 0 , t 1 ] E { y p ( t ) x p ( t ) C 2 } .
For all t ( t 1 , t 2 ] , we have
E y ( t ) x ( t ) 0 2 max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E z p ( t 1 ) z p * ( t 1 ) C 2 + max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E I p 1 ( z p ( t 1 ) ) I p 1 ( z p * ( t 1 ) ) C 2 + max p I sup t ( t 1 , t 2 ] 4 E 0 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s C 2 + max p I sup t ( t 1 , t 2 ] 4 E 0 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) C 2 max p I 4 M ˜ T 2 ( 1 + η p 1 ) sup t ( t 1 , t 2 ] E { y p ( t ) x p ( t ) C 2 } + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 E F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) C 2 d s + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) 2 ( α 1 ) E δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) C 2 d s max p I 4 M ˜ T 2 ( 1 + η p 1 ) + 4 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ × sup t [ 0 , t 1 ] E { y p ( t ) x p ( t ) C 2 } .
Similarly, when t ( t k , t k + 1 ] , k = 2 , 3 , , m , we get
E y ( t ) x ( t ) 0 2 max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E z p ( t 1 ) z p * ( t 1 ) C 2 + max p I sup t ( t 1 , t 2 ] 4 T α ( t t 1 ) 2 × E I p 1 ( z p ( t 1 ) ) I p 1 ( z p * ( t 1 ) ) C 2 + max p I sup t ( t 1 , t 2 ] 4 E 0 t S α ( t s ) [ F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) ] d s C 2 + max p I sup t ( t 1 , t 2 ] 4 E 0 t S α ( t s ) [ δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) ] d ω ( s ) C 2 max p I 4 M ˜ T 2 ( 1 + η p k ) sup t ( t 1 , t 2 ] E { y p ( t ) x p ( t ) C 2 } + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) α 1 d s 0 t ( t s ) α 1 E F q ( s , y q ( s ) ) F q ( s , x q ( s ) ) C 2 d s + max p I sup t ( t 1 , t 2 ] 4 M ˜ S 2 0 t ( t s ) 2 ( α 1 ) E δ q ( s , y q ( s ) ) δ q ( s , x q ( s ) ) C 2 d s max p I 4 M ˜ T 2 ( 1 + η p k ) + 4 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ × sup t [ 0 , t 1 ] E { y p ( t ) x p ( t ) C 2 } .
Thus, for all t [ τ + , T ] , we can obtain
E y ( t ) x ( t ) B b 2 = sup t [ τ + , T ] E y ( t ) x ( t ) 0 2 max p I 4 M ˜ T 2 ( 1 + η p k ) + 4 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ × sup t [ 0 , T ] E { y p ( t ) x p ( t ) C 2 } + ψ φ τ 2 max p I 4 M ˜ T 2 ( 1 + η p k ) + 4 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ × E y ( t ) x ( t ) B b 2 + E ψ φ τ 2 .
Taking
M = max p I 4 M ˜ T 2 ( 1 + η p k ) + 4 M ˜ S 2 T 2 q = 1 n 1 α 2 ( a q p + L p f + b q p + L p g ) + 1 T ( 2 α 1 ) d q p + L p δ ,
we have
E y ( t ) x ( t ) B b 2 M E y ( t ) x ( t ) B b 2 + E ψ φ τ 2 .
That is,
E x ( t ) y ( t ) B b 2 1 1 M E φ ψ τ 2 .
If we take $E\|\varphi - \psi\|_{\tau}^2 < \delta$, where $\delta = (1 - M)\epsilon$, then we can derive that
$$
E\|x(t) - y(t)\|_{B_b}^2 < \epsilon.
$$
Based on the inequalities derived above, we can draw the following conclusion: for an arbitrary constant $\epsilon > 0$, there exists a positive constant $\delta(\epsilon) > 0$ such that the inequality $E\|y(t, t_0, \psi) - x(t, t_0, \varphi)\|_{B_b}^2 < \epsilon$ holds whenever $E\|\psi(t) - \varphi(t)\|_0^2 < \delta$. It follows that the solution $x(t)$ is uniformly stable. This completes the proof. □

4. Example

Example 1. 
Let p = 1 , 2 , k = 1 , 2 . We investigate the fractional-order stochastic neural network given below:
$$
\begin{cases}
D_t^{\alpha} x_p(t) = -c_p(t)x_p(t) + \displaystyle\sum_{q=1}^{n} a_{pq}(t) f_q(x_q(t)) + \sum_{q=1}^{n} b_{pq}(t) g_q\bigl(x_q(t-\tau_{pq}(t))\bigr) + u_p(t) \\
\qquad\qquad\quad + \displaystyle\sum_{q=1}^{n} d_{pq}(t)\,\delta_{pq}(x_q(t))\,\dfrac{d\omega_q(t)}{dt}, \quad t \in J = [0,T],\ T > 0,\ t \neq t_k, \\
\Delta x_p(t_k) = I_p^k(x_p(t_k)) = \eta_{pk}\, x_p(t_k^-), \quad k = 1,2,\ldots,m, \\
x_p(t) = \varphi_p(t), \quad \varphi_p(t) \in L^2_{\mathcal{F}_0}\bigl([-\tau^+,0],\mathbb{C}^n\bigr),
\end{cases}
$$
where $\alpha = 0.6$, $T = 10$, $x_p = x_p^R + i x_p^I$, $c_p(t) = 4.8 + \cos t$, $u_p(t) = 1.5 + \cos t + \frac{1}{1+36e^{t^2}}$, and $\eta_{pk} = \frac{1}{10(k+1)^2} \le 0.1$,
$$
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
= \begin{pmatrix} 1.2 + 0.1\sin t - 3i & 2 + i \\ 0.8\cos t - 3 + \frac{1}{4+t^2} + 2i & 2 + \frac{1}{10+t^2} - 0.8i \end{pmatrix},
$$
$$
\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}
= \begin{pmatrix} 2 + 0.2\sin t - 2i & 1 + i \\ 0.8\cos t - 2 + 3i + \frac{1}{10+t^2} & 1 + \frac{1}{20+2t^2} + 0.5i \end{pmatrix},
$$
$$
\begin{pmatrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{pmatrix}
= \begin{pmatrix} 1 + 0.2\sin t - 2i & 1 + i \\ 0.8\cos t - 2 + 3i + \frac{1}{10+t^2} & 1 + \frac{1}{20+2t^2} + 0.5i \end{pmatrix},
$$
$$
\begin{pmatrix} \tau_{11} & \tau_{12} \\ \tau_{21} & \tau_{22} \end{pmatrix}
= \begin{pmatrix} 0.5 + 0.2\sin t & 1 + 0.8\cos t \\ 0.5 + 0.1\sin t & 0.3 + 0.1\cos t \end{pmatrix},
$$
$f_q(x_q) = \frac{1}{4}\bigl(|x_q(t)+1|+|x_q(t)-1|\bigr)$, $g_q(x_q) = \frac{1}{10}\bigl(|x_q(t)+1|+|x_q(t)-1|\bigr)$, and $\delta_{pq}(x_q) = \frac{3e^{2x_q}}{10+e^{2x_q}}$. It is easily verified that Assumptions $(H_1)$–$(H_4)$ hold with $L_q^f = \frac{1}{2}$, $L_q^g = \frac{1}{5}$, $L_{pq}^{\delta} = \frac{3}{10}$, and $\tau^+ = 1.8$, and that there exist positive constants $\widetilde{M}_T = 0.2$ and $\widetilde{M}_S = 0.02$ such that
$$
\max_{1\le k\le m}\left\{4\widetilde{M}_T^2(1+\eta_{pk})+\frac{4\widetilde{M}_S^2 T^{2\alpha}}{\alpha^2}\sum_{q=1}^{n}\bigl(a_{qp}^{+}L_p^{f}+b_{qp}^{+}L_p^{g}\bigr)+\frac{4\widetilde{M}_S^2 T^{2\alpha-1}}{2\alpha-1}\sum_{q=1}^{n}d_{qp}^{+}L_p^{\delta}\right\} \le 0.581 < 1.
$$
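The criterion above is straightforward to check numerically. The sketch below evaluates its left-hand side using coarse, sign-independent triangle-inequality bounds for the suprema $a_{qp}^{+}$, $b_{qp}^{+}$, $d_{qp}^{+}$ read off the matrices in Example 1; these are illustrative stand-ins rather than the paper's exact values, so the resulting constant is somewhat larger than the reported $0.581$ but still below $1$:

```python
import numpy as np

alpha, T = 0.6, 10.0
MT, MS = 0.2, 0.02          # M~_T, M~_S from Remark 4
Lf, Lg, Ld = 0.5, 0.2, 0.3  # Lipschitz constants L^f, L^g, L^delta
eta_max = 0.1               # sup over k of eta_pk

# Coarse triangle-inequality bounds a_qp^+ = sup_t |a_qp(t)|, etc.
# (illustrative stand-ins, not the paper's exact suprema).
a_plus = np.array([[4.3, 2.3],
                   [6.1, 2.9]])
b_plus = np.array([[4.2, 1.5],
                   [5.9, 1.6]])
d_plus = np.array([[3.2, 1.5],
                   [5.9, 1.6]])

def contraction_constant(p):
    """Left-hand side of the stability criterion for neuron column p (0-based)."""
    col_fg = np.sum(a_plus[:, p] * Lf + b_plus[:, p] * Lg)
    col_d = np.sum(d_plus[:, p] * Ld)
    return (4 * MT**2 * (1 + eta_max)
            + 4 * MS**2 * T**(2 * alpha) / alpha**2 * col_fg
            + 4 * MS**2 * T**(2 * alpha - 1) / (2 * alpha - 1) * col_d)

M = max(contraction_constant(p) for p in range(2))
print(f"M = {M:.3f}, contraction condition holds: {M < 1}")
```

Because the bounds are taken via the triangle inequality on the moduli, the check does not depend on the signs of the individual matrix entries.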
Here, two cases with different initial conditions are given.
Case 1: The initial states $\varphi_1(t) = (0.2 - 0.2i,\ 0.1 - 0.1i)^T$ and $\varphi_2(t) = (0.3 + 0.3i,\ 0.2 + 0.2i)^T$ for $t \in [-1.8, 0]$.
Case 2: The initial states $\varphi^R(t) = (0.4, 0.2, 0.5)^T$ and $\varphi^I(t) = (0.6, 0.3, 0.2)^T$ for $t \in [-1.8, 0]$.
Therefore, all conditions of Theorems 1 and 2 are satisfied, which means that system (2) is uniformly stable in the mean-square sense.
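A system of this form can be explored numerically with a Grünwald–Letnikov-type discretization of the Caputo derivative. The sketch below is a heuristic simulation under several loudly stated simplifying assumptions: the coefficient matrices are frozen at representative constant values, the time-varying delays are replaced by a single constant lag, the noise intensity $\delta_{pq}$ is replaced by a bounded stand-in, and the stochastic term is added in an Euler–Maruyama fashion. It illustrates the qualitative behavior of the model; it is not the scheme used to produce Figures 1–3:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j * binom(alpha, j),
    via the recursion c_0 = 1, c_j = c_{j-1} * (1 - (1 + alpha) / j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)
    return c

def simulate(alpha=0.6, h=0.02, t_end=20.0, seed=1, memory=200):
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_end / h))
    c = gl_weights(alpha, n_steps)
    x = np.zeros((n_steps + 1, 2), dtype=complex)
    x[0] = np.array([0.2 - 0.2j, 0.1 - 0.1j])   # Case-1-style initial state

    # Frozen (time-averaged) coefficient matrices -- illustrative stand-ins.
    A = np.array([[1.2 - 3.0j, 2.0 + 1.0j], [-2.2 + 2.0j, 2.0 - 0.8j]])
    B = np.array([[2.0 - 2.0j, 1.0 + 1.0j], [-1.2 + 3.0j, 1.0 + 0.5j]])
    D = np.array([[1.0 - 2.0j, 1.0 + 1.0j], [-1.2 + 3.0j, 1.0 + 0.5j]])

    f = lambda z: 0.25 * (np.abs(z + 1) + np.abs(z - 1))   # Lipschitz constant 1/2
    g = lambda z: 0.10 * (np.abs(z + 1) + np.abs(z - 1))   # Lipschitz constant 1/5
    sig = lambda z: 0.3 * np.exp(-np.abs(z))               # bounded noise stand-in

    lag = int(round(0.5 / h))   # single constant delay ~0.5 (illustrative)
    imp = int(round(1.8 / h))   # impulse every 1.8 time units, as in the figures
    for n in range(1, n_steps + 1):
        prev = x[n - 1]
        delayed = x[max(0, n - 1 - lag)]
        drift = -4.8 * prev + A @ f(prev) + B @ g(delayed) + 1.5
        noise = D @ (sig(prev) * rng.normal(0.0, np.sqrt(h), 2))
        # Caputo derivative replaced by a short-memory GL sum.
        hist = x[max(0, n - memory):n][::-1] - x[0]
        x[n] = (x[0] - c[1:1 + len(hist)] @ hist
                + h**alpha * drift + h**(alpha - 0.5) * noise)
        if n % imp == 0:        # linear impulse Delta x = eta_k * x
            x[n] *= 1.0 + 1.0 / (10.0 * (n // imp + 1) ** 2)
    return x

traj = simulate()
print("max |x(t)| over the run:", np.max(np.abs(traj)))
```

The short-memory truncation (`memory=200`) is the usual trade-off for long fractional simulations; enlarging it improves accuracy at the cost of per-step work.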
Remark 4. 
In Example 1, let the Bromwich path $(\widehat{Br})$ lie in the range $(-0.5, 0.1)$ and take $A = 1$; then $|T_\alpha(t)| \le 0.173$ and $|S_\alpha(t)| \le 0.016$. Hence there exist positive constants $\widetilde{M}_T = 0.2$ and $\widetilde{M}_S = 0.02$ such that Assumption $(H_1)$ holds.
Remark 5. 
Figure 1 illustrates the evolution of the real parts $x_1^R(t)$ and $x_2^R(t)$ of the two neuron states: at the initial moment ($t = 0$), $x_1^R(0)$ starts at $0.2$ and $x_2^R(0)$ starts at $0.3$. After a slight fluctuation near the impulse instant ($t = 1.8$), they converge rapidly, and for $t > 1.8$ the real parts stabilize within the interval $[-0.1, 0.1]$. Figure 1 also presents the variation of the imaginary parts $x_1^I(t)$ and $x_2^I(t)$: their initial values are $0.1$ and $0.2$, respectively, and they exhibit an overall steady decay without obvious oscillation, finally tending toward $0$. These results verify the dynamic stability of the system on a short time scale.
Remark 6. 
In Figure 2, after a short adjustment over the first five impulse cycles ($t < 9$), the real parts $x_1^R(t)$ and $x_2^R(t)$ fully converge to a small fluctuation interval $[-0.05, 0.05]$ for $t > 10$ without any divergence trend. Specifically, $x_1^R(t)$ stabilizes around $0$, while $x_2^R(t)$ stabilizes at approximately $0.02$, demonstrating the long-term stability of the system. The imaginary parts $x_1^I(t)$ and $x_2^I(t)$ exhibit similar convergence: they gradually decay for $t < 12$ and stabilize within $[-0.03, 0.03]$ for $t > 12$, with a fluctuation amplitude below $0.05$. These results verify the conclusion of uniform stability in mean square in Theorem 2 and indicate that the system is robust against impulse disturbances and stochastic noise during long-term operation.
Remark 7. 
In Figure 3, one horizontal axis represents time $t \in [0, 20]$, the other horizontal axis denotes the imaginary part $x_1^I(t)$, and the vertical axis denotes the real part $x_1^R(t)$. From this three-dimensional perspective, the complex-plane trajectory of $x_1(t)$ exhibits a spiral-convergence characteristic: in the initial phase ($t < 5$), the trajectory is spread over a wide range (real part in $[-0.2, 0.3]$, imaginary part in $[-0.1, 0.2]$); as time progresses, the trajectory gradually contracts toward the origin ($x_1^R = 0$, $x_1^I = 0$); and for $t > 15$, it concentrates within a small ball centered at the origin with radius less than $0.05$, without any divergence or oscillation. This three-dimensional trajectory intuitively demonstrates the complete evolution of the complex-valued state and further verifies the existence, uniqueness, and uniform stability of the system's solution, in full agreement with the theoretical results of Theorems 1 and 2.

5. Discussion

This study establishes the existence, uniqueness, and uniform stability (in the distribution sense) of solutions for impulsive fractional-order complex-valued stochastic neural networks. Through the application of the Banach fixed point theorem, we have obtained several verifiable criteria that guarantee these properties via a direct analytical method. It is noteworthy that the complex-valued neural network model is a non-trivial extension of its real-valued counterpart, so the methodologies and results presented in this work also apply to the analysis of real-valued stochastic neural networks subject to linear impulses.

6. Conclusions

This paper establishes a rigorous analytical framework for investigating the dynamics of impulsive complex-valued stochastic neural networks. The criteria introduced herein guarantee both the uniform stability of the system and the existence of a unique solution. Moreover, the methodologies and insights presented serve as effective instruments that can be directly applied to tackle challenges in linear impulsive real-valued stochastic neural networks.

Author Contributions

J.X.: Conceptualization, Methodology, Formal Analysis, Data Curation, Software, Simulation, Writing—Original Draft, Funding Acquisition. T.T.: Conceptualization, Software, Simulation, Editing. X.H.: Conceptualization, Methodology, Supervision, Writing—Review and Editing, Funding Acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by grants from the Chongqing Municipal Education Commission (Nos. KJQN202301318 and KJQN202401336), the Natural Science Foundation of Fujian Province (No. 2024J08207), and the Fujian Provincial Department of Education (No. JAT220213).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Dynamic trajectories of the real and imaginary parts of states x 1 ( t ) and x 2 ( t ) with an impulse interval of t k t k 1 = 1.8 and a simulation duration of t = 2 .
Figure 2. Long-term evolution trajectories of the real and imaginary parts of states x 1 ( t ) and x 2 ( t ) with an impulse interval of t k t k 1 = 1.8 and a simulation duration of t = 20 .
Figure 3. Three-dimensional spatial trajectory diagram of the real part x 1 R ( t ) and imaginary part x 1 I ( t ) of neuron x 1 ( t ) evolving over time t.