1. Introduction
The research on dynamical behavior analysis of neural network (NN) models has attracted increasing attention in recent years, and its results have been widely used in a variety of science and engineering disciplines [1–9]. The stability analysis of NN models is fundamental and important in NN applications and has received significant attention recently [3–50].
Indeed, most NN analyses have been carried out in the continuous-time setting. Nevertheless, in today's digital world, nearly all signals are digitized for computer processing before and after transmission. In this regard, instead of continuous-time analysis, it is important to study discrete-time signals when implementing NN models. As a result, several researchers have studied various dynamical behaviors of discrete-time NN models. For example, a number of scientific results on various dynamical behaviors in the discrete-time case, for both real-valued neural network (RVNN) and complex-valued neural network (CVNN) models, have been published recently [6–13,21–25]. However, the corresponding research on the quaternion field is still in its infancy.
The characteristics of NN models, including RVNNs and CVNNs, can be analyzed based on their functional and/or structural properties. Recently, RVNNs have been commonly used in a number of engineering domains, such as optimization, associative memory, and image and signal processing [2–12]. However, with respect to affine transformations and XOR problems, RVNNs perform poorly. In view of this, complex properties are incorporated into RVNNs, leading to CVNNs [51–53] that can effectively address the affine transformation challenge and XOR problems. As a result, various CVNN-related models have received substantial research attention in both mathematical and practical analyses [25–52]. For instance, in [14–17], the problems of global stability, finite-time stability, and global Lagrange stability for continuous-time CVNNs were investigated using Lyapunov stability theory. In [20–25], discrete-time CVNNs and their corresponding sufficient conditions were discussed. Nevertheless, CVNN models are inefficient in handling higher-dimensional transformations, such as those arising in color night vision, color image compression, and related problems [23–25].
On the other hand, quaternion-valued signals and quaternion functions are very useful in many engineering domains, such as wind forecasting, polarized signal classification, and color night vision [54–60]. Undoubtedly, quaternion-based networks serve as good mathematical models for these applications, owing to the features of quaternions. In view of this, quaternion-valued neural networks (QVNNs) have been developed by implementing quaternion algebra into CVNNs, in order to generalize RVNN and CVNN models with quaternion-valued activation functions, connection weights, and signal states [27–55]. The main advantage of a QVNN model is its capability of reducing the computational complexity in higher-dimensional problems. Therefore, the investigation of QVNN model dynamics is essential and important. Recently, many computational approaches for various continuous-time QVNN models and their learning algorithms have been studied; e.g., exponential input-to-state stability, global Mittag–Leffler stability, synchronization analysis, global stability, global synchronization, and global asymptotic stability [26–32]. Very recently, the issue of mean-square exponential input-to-state stability for continuous-time stochastic memristive QVNNs with time-varying delays has been studied in [50]. Similarly, some other stability conditions have been defined for QVNN models [29,30,33–36]. In the earlier studies [37–39], the problems of global asymptotic stability, exponential stability, and exponential periodicity, respectively, for discrete-time QVNNs with linear threshold activation functions have been investigated.
In addition, stochastic effects are unavoidable in most practical NN models. As a result, it is important to investigate stochastic NN models comprehensively, since their behaviors are susceptible to certain stochastic inputs. In practice, a stochastic neural network (SNN) is useful for the modeling of real-world systems, especially in the presence of external disturbances [41–60]. As a result, several aspects of SNN models have been analyzed extensively in both the continuous- and discrete-time cases; e.g., the problems of passivity [40], robust stability [41], exponential stability [42], robust dissipativity [44], mean-square exponential input-to-state stability [49], and mean-square exponential input-to-state stability for QVNNs [50]. Other SNN-related dynamics have also been investigated in [43,45–48]. Nonetheless, studies on the dynamics of discrete-time stochastic QVNN (DSQVNN) models are limited. Indeed, the investigation of DSQVNN models with time delays and their mean-square asymptotic stability analysis is novel, which constitutes the main contribution of our paper.
Inspired by the above discussion, our main aim is to derive sufficient conditions for the mean-square asymptotic stability of DSQVNN models. The designed DSQVNN model encompasses discrete-time stochastic CVNNs and discrete-time stochastic RVNNs as special cases. Firstly, we equivalently represent a QVNN as four RVNNs via a real-imaginary separate-type activation function. Secondly, we establish new linear matrix inequality (LMI)-based sufficient conditions for the mean-square asymptotic stability of DSQVNNs via a suitable Lyapunov functional and stochastic concepts. Note that several known results can be viewed as special cases of the results of our work. Finally, we provide numerical examples to illustrate the usefulness of the proposed results.
This study presents four key contributions. (1) This is the first analysis of the mean-square asymptotic stability of the considered DSQVNN models. (2) Unlike traditional stability analyses, we establish new mean-square asymptotic stability criteria for the considered DSQVNN models, which is achieved through a Lyapunov functional and real-imaginary separate-type activation functions. (3) The developed sufficient conditions can be directly solved by the standard MATLAB LMI toolbox. (4) The results of this study are more general and powerful than the existing results on discrete-time QVNN models in the literature.
In Section 2, we formally define the proposed problem and model. We explain the new stability criteria in
Section 3. The numerical examples are given in
Section 4. Concluding remarks are given in the last section.
2. Mathematical Fundamentals and Definition of the Problem
2.1. Notations
We use , and to indicate the real field, the complex field, and the quaternion skew field, respectively. The matrices with entries from , and are denoted as , and , while the m-dimensional vectors are denoted as , and , respectively. For any matrix, its transpose and conjugate transpose are denoted as and , respectively. In addition, a block diagonal matrix is denoted as , while the smallest and largest eigenvalues of are denoted as and , respectively. The Euclidean norm of a vector x and the mathematical expectation of a stochastic variable x are represented by and , respectively. Meanwhile, given integers a, b with , the discrete interval given by is denoted by , while the set of all functions is denoted by . Moreover, we assume that is a complete probability space with a filtration satisfying the usual conditions. In a given matrix, a term induced by symmetry is denoted by □.
2.2. Quaternion Algebra
Firstly, we address the quaternion and its operating rules. The quaternion is expressed in the form:
where the real constants are denoted by , while the fundamental quaternion units are denoted by i, j, and k. The following Hamilton rules are satisfied:
which implies that quaternion multiplication has the non-commutativity property.
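The paper's own symbols for the quaternion components are elided in the extraction above, so the following restatement uses illustrative notation; the Hamilton rules themselves are standard:
\[
x = x^{(R)} + x^{(I)} i + x^{(J)} j + x^{(K)} k, \qquad x^{(R)}, x^{(I)}, x^{(J)}, x^{(K)} \in \mathbb{R},
\]
\[
i^{2} = j^{2} = k^{2} = ijk = -1, \quad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j.
\]
For example, \(ij = k\) while \(ji = -k\), which is the non-commutativity noted above.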
The following expressions define the operations between quaternions and . Note that the definitions of addition and subtraction of complex numbers are applicable to those of the quaternions as well.
The multiplication of x and y, which is in line with the Hamilton multiplication rules (1), is defined as follows:
The following expression of represents the module of a quaternion : where the conjugate of x is denoted by . The following expression represents the norm of x: .
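In the same illustrative notation (with \(x = x_{0} + x_{1} i + x_{2} j + x_{3} k\) and \(y = y_{0} + y_{1} i + y_{2} j + y_{3} k\)), the Hamilton product, conjugate, modulus, and vector norm take the standard forms; the paper's exact symbols may differ:
\[
xy = (x_{0}y_{0} - x_{1}y_{1} - x_{2}y_{2} - x_{3}y_{3}) + (x_{0}y_{1} + x_{1}y_{0} + x_{2}y_{3} - x_{3}y_{2})\, i + (x_{0}y_{2} - x_{1}y_{3} + x_{2}y_{0} + x_{3}y_{1})\, j + (x_{0}y_{3} + x_{1}y_{2} - x_{2}y_{1} + x_{3}y_{0})\, k,
\]
\[
\bar{x} = x_{0} - x_{1} i - x_{2} j - x_{3} k, \qquad |x| = \sqrt{x\bar{x}} = \sqrt{x_{0}^{2} + x_{1}^{2} + x_{2}^{2} + x_{3}^{2}}, \qquad \|x\| = \Big(\sum_{l=1}^{m} |x_{l}|^{2}\Big)^{1/2} \ \text{for } x \in \mathbb{Q}^{m}.
\]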
2.3. Problem Definition
The following discrete-time QVNN model with time delays is considered; i.e.,
where
,
.
The model in (
2) can be expressed in an equivalent vector form of
where the state variable and the quaternion-valued neuron activation function are denoted by and , respectively. In addition, denotes the input vector. A self-feedback connection weight matrix with is denoted by . Besides that, a connection weight matrix is denoted by , while the transmission delay is denoted by a positive scalar .
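Since the displayed equations are elided above, the following is only a hedged sketch, in placeholder notation, of the generic vector form such a delayed discrete-time QVNN usually takes; the actual model (3) may contain additional terms and uses its own symbols:
\[
\hat{x}(n+1) = D\,\hat{x}(n) + A\, f\big(\hat{x}(n-\tau)\big) + u, \qquad n = 0, 1, 2, \ldots,
\]
where, in this sketch, D stands for the self-feedback connection weight matrix, A for the connection weight matrix, f for the quaternion-valued activation function, u for the input vector, and \(\tau\) for the transmission delay.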
Given the model in (
3), its initial condition is
where
.
Definition 1. A vector is said to be an equilibrium point of the NN model in (3) if it satisfies
Now, consider , which is similar to [20,21,22]. We can define the following, given , ,
As such,
is a Banach space having uniform convergence in its topology. Suppose the solutions of the model in (
3) are
and
starting from
and
, respectively, in which any
. Then, following the model in (
3), we have
Let ,
As a result, we can express (
6) as
A1: For
,
, we can divide
into two parts, real and imaginary, as follows:
where
,
,
,
. There exist constants
,
,
,
,
,
,
,
, such that for any
and
,
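The displayed conditions of assumption A1 are elided above; as a hedged, illustrative reading (the symbols below are placeholders, and the eight constants listed in A1 suggest two-sided bounds rather than the single constants shown), the separation and Lipschitz-type condition typically look like:
\[
f(x) = f^{R}(\cdot) + f^{I}(\cdot)\, i + f^{J}(\cdot)\, j + f^{K}(\cdot)\, k,
\]
\[
\big| f^{\ell}(u) - f^{\ell}(v) \big| \le \lambda^{\ell}\, |u - v|, \qquad \ell \in \{R, I, J, K\},
\]
for some positive constants \(\lambda^{R}, \lambda^{I}, \lambda^{J}, \lambda^{K}\) and all admissible arguments u, v.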
In practical applications of NN models, stochastic disturbances usually affect their performance. As such, stochastic disturbances must be included when studying the issue of network stability, which yields more realistic dynamic behaviors. Therefore, the following DSQVNN model is formulated:
Note that is a noise intensity function, while is a scalar Wiener process (Brownian motion), which is defined on with , , .
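The three elided conditions on the scalar Wiener process are presumably the standard moment conditions; a hedged reading, in illustrative notation with \(\omega(n)\) denoting the process, is:
\[
\mathbb{E}[\omega(n)] = 0, \qquad \mathbb{E}[\omega^{2}(n)] = 1, \qquad \mathbb{E}[\omega(i)\,\omega(j)] = 0 \quad (i \ne j).
\]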
For further analysis, we divide the NN model in (
8) into both real and imaginary parts through the use of the quaternion multiplication. As such, we have
where
The following expression denotes the initial condition of the model in (
9):
for
, where
,
,
,
.
We denote , , , , , , , , .
Therefore, the following expression can be used to represent the model in (
9)
Note that the following expression constitutes the initial condition of the model in (
11)
where
, with
,
.
A2: The noise intensity function with satisfies the following conditions:
where the positive constants are denoted by
,
,
and
.
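The displayed bound of assumption A2 is elided above; conditions of this type usually bound the noise intensity linearly by the current and delayed states. A hedged, illustrative instance (the paper lists four constants, so its actual condition is likely split over more state blocks than shown here) is:
\[
\operatorname{trace}\big[\sigma^{T}(n, x, y)\, \sigma(n, x, y)\big] \le \rho_{1}\, x^{T} x + \rho_{2}\, y^{T} y,
\]
where x and y stand for the current and delayed state vectors and \(\rho_{1}, \rho_{2} > 0\) are constants.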
Definition 2. A solution of the NN model in (8) is asymptotically stable in the mean square sense if the following expression is true:
Lemma 1. [43] Given a matrix , integers and satisfying , and a vector function , such that the sums concerned are well-defined, we have
3. Main Results
Given the NN model in (
11), we derive new sufficient conditions to ensure its mean-square asymptotic stability.
Theorem 1. The activation function can be separated into both real and imaginary parts based on assumption . Given the existence of matrices , , , diagonal matrices , and scalars , the NN model in (11) is asymptotically stable in the mean square sense, subject to satisfying the following LMI: where , , , , , , , , , , , , , , , , , , , .
The detailed proof of Theorem 1 is given in Appendix A.
Remark 1. When stochastic disturbances are excluded, we can reduce the NN model in (11) to become
The proof of Theorem 1 can be applied to yield Corollary 1.
Corollary 1. The activation function can be separated into both real and imaginary parts based on assumption . Given the existence of matrices , , and diagonal matrices , the NN model in (22) is globally asymptotically stable, subject to satisfying the following LMI:
where , , , , , , , , , , , , , , , , , , , .
Remark 2. The QVNN models are generalizations of CVNN models. Based on Theorem 1, we can analyze the mean-square asymptotic stability criterion with respect to the CVNN model in (27).
By complex number properties
, the NN model in (
8) becomes
where
Consider ; then,
the model in (
27) becomes
The following expression constitutes the initial condition of the model in (
28)
where
, with
,
A3: For
,
, we can divide
into two parts, real and imaginary, as follows:
where
,
. There exist constants
,
,
,
, such that for any
and
,
A4: The noise intensity function
is (i) Borel measurable and (ii) locally Lipschitz continuous, and it satisfies the following expressions:
where
,
are known positive constants.
The proof of Theorem 1 can be applied to yield Corollary 2.
Corollary 2. The activation function can be separated into both real and imaginary parts based on assumption . Given the existence of matrices , , , diagonal matrices , , and scalars , , the NN model in (28) is asymptotically stable in the mean square sense, subject to satisfying the following LMI:
where
,
.
When stochastic disturbances are excluded, we can reduce the NN model in (
28) to become
The proof of Theorem 1 can be applied to yield Corollary 3.
Corollary 3. The activation function can be separated into both real and imaginary parts based on assumption . Given the existence of matrices , , and diagonal matrices , , the NN model in (34) is globally asymptotically stable, subject to satisfying the following LMI:
where
,
.
Remark 3. In the literature on QVNN models, the way to choose a suitable quaternion-valued activation function is still an open question. Several activation functions have recently been used to study QVNN models; e.g., non-monotonic piecewise nonlinear activation functions [30], linear threshold activation functions [37,38,39], and real-imaginary separate-type activation functions [28,32,34]. Under the assumption that the activation functions can be divided into real and imaginary parts, our current results provide some criteria to ascertain the asymptotic stability in the mean-square sense pertaining to the considered DSQVNN models with time delays.
Remark 4. In [37], the authors used the semi-discretization technique to obtain discrete-time analogues of continuous-time QVNNs with linear threshold neurons and studied their global asymptotic stability without considering time delays. Compared with the previous work [37], by separating the real and imaginary parts of the DSQVNNs with time delays and constructing suitable Lyapunov–Krasovskii functional candidates, we obtain sufficient conditions for the mean-square asymptotic stability of the DSQVNNs in the form of LMIs. The LMI conditions in this paper are more concise than those obtained in [37,38,39] and much easier to check.
Remark 5. Different dynamics of DCVNN models without stochastic disturbances have been examined in previous studies [20,21,22]. In this study, we not only focus on the mean-square asymptotic stability criteria for a class of discrete-time SNN models by using the same method proposed in [20,21,22], but also extend our results to the quaternion domain. As such, the approach proposed in this paper is more general and powerful.
4. Illustrative Examples
This section presents two numerical examples to show the usefulness of the proposed method.
Example 1. The following parameters pertaining to the NN model in (8) are considered:
By separating the activation function into real and imaginary parts, we can find
,
,
, and
. Choose the noise intensity functions as
,
,
,
; it can be verified that
A2 is satisfied with
. Given a time delay of
, the activation functions can be taken as:
with
It can be verified that
A1 is satisfied with
,
,
,
,
By utilizing the MATLAB LMI Control toolbox, we can find feasible solutions for the LMIs in (
13)–(
21), along with
,
and
,
,
,
. It can be easily verified that conditions (13)–(16) are satisfied with
,
,
,
.
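The feasibility check above relies on the MATLAB LMI Control toolbox. As a rough illustration of the same kind of check in open-source tooling, the following is a minimal sketch in Python using CVXPY; the matrices, their sizes, and the simplified two-block LMI are hypothetical placeholders and are not conditions (13)–(21) of this paper:

import numpy as np
import cvxpy as cp

# Hypothetical 2x2 data standing in for the DSQVNN parameters (placeholders only).
D = np.diag([0.3, 0.4])                   # self-feedback weights (placeholder)
B = np.array([[0.10, -0.20],
              [0.05,  0.10]])             # delayed connection weights (placeholder)
n = D.shape[0]

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
Q = cp.Variable((n, n), symmetric=True)   # Lyapunov-Krasovskii delay term

# Block LMI for x(n+1) = D x(n) + B x(n - tau): a structural analogue of a
# Lyapunov-Krasovskii condition, NOT the paper's actual conditions (13)-(21).
M = cp.bmat([[D.T @ P @ D - P + Q, D.T @ P @ B],
             [B.T @ P @ D,         B.T @ P @ B - Q]])
S = cp.Variable((2 * n, 2 * n), symmetric=True)  # symmetric copy of M for the PSD constraint

eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               S == M,
               S << -eps * np.eye(2 * n)]        # negative definiteness of the block LMI

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE))

If the solver reports an optimal status, the placeholder LMI is feasible; with the paper's actual matrices and conditions (13)–(21), the same pattern applies, with one symmetric decision variable per matrix in the theorem.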
In view of Theorem 1, it is easy to conclude that the NN model in (8) with the given parameters is mean-square asymptotically stable based on Lyapunov stability theory. The state trajectories
,
of the NN model in (
8) with stochastic disturbances are depicted in
Figure 1 and
Figure 2, respectively.
Figure 3 and
Figure 4 show the state trajectories
,
of the NN model in (
8) without stochastic disturbances.
Example 2. The following parameters pertaining to the NN model in (27) are considered:
By separating the activation function into both real and imaginary parts, we obtain
and
. The noise intensity functions are considered as
,
; we can verify that
A4 is satisfied with
. Take the time delay , subject to the following activation functions
with
It can be verified that
A3 is satisfied with
,
,
,
,
We can find that the conditions (
30)–(
33) hold by using the MATLAB LMI Control toolbox. According to Corollary 2, we can conclude that the NN model in (
27) with the aforementioned parameters is asymptotically stable in the mean square sense based on the Lyapunov stability theory. The state trajectories
,
of the NN model in (
27) with stochastic disturbances are depicted in
Figure 5 and
Figure 6, respectively.
Figure 7 and
Figure 8 show the state trajectories
,
of the NN model in (
27) without stochastic disturbances.