Next Article in Journal
Semi-Active Suspension Control Strategy Based on Negative Stiffness Characteristics
Previous Article in Journal
The Category G-GrR-Mod and Group Factorization
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Global Exponential Synchronization of Delayed Quaternion-Valued Neural Networks via Decomposition and Non-Decomposition Methods and Its Application to Image Encryption

1
Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, Tamil Nadu, India
2
School of Electrical Engineering, Chungbuk National University, Cheongju 361763, Republic of Korea
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(21), 3345; https://doi.org/10.3390/math12213345
Submission received: 4 September 2024 / Revised: 11 October 2024 / Accepted: 23 October 2024 / Published: 25 October 2024

Abstract

:
With the rapid advancement of information technology, digital images such as medical images, grayscale images, and color images are widely used, stored, and transmitted. Therefore, protecting this type of information is a critical challenge. Meanwhile, quaternions enable image encryption algorithm (IEA) to be more secure by providing a higher-dimensional mathematical system. Therefore, considering the importance of IEA and quaternions, this paper explores the global exponential synchronization (GES) problem for a class of quaternion-valued neural networks (QVNNs) with discrete time-varying delays. By using Hamilton’s multiplication rules, we first decompose the original QVNNs into equivalent four real-valued neural networks (RVNNs), which avoids non-commutativity difficulties of quaternions. This decomposition method allows the original QVNNs to be studied using their equivalent RVNNs. Then, by utilizing Lyapunov functions and the matrix measure method (MMM), some new sufficient conditions for GES of QVNNs under designed control are derived. In addition, the original QVNNs are examined using the non-decomposition method, and corresponding GES criteria are derived. Furthermore, this paper presents novel results and new insights into GES of QVNNs. Finally, two numerical verifications with simulation results are given to verify the feasibility of the obtained criteria. Based on the considered master–slave QVNNs, a new IEA for color images Mandrill (256 × 256), Lion (512 × 512), Peppers (1024 × 1024) is proposed. In addition, the effectiveness of the proposed IEA is verified by various experimental analysis. The experiment results show that the algorithm has good correlation coefficients (CCs), information entropy (IE) with an average of 7.9988, number of pixels change rate (NPCR) with average of 99.6080%, and unified averaged changed intensity (UACI) with average of 33.4589%; this indicates the efficacy of the proposed IEAs.

1. Introduction

Recently, artificial intelligence has gained significant attention among researchers because of its broad uses across scientific and engineering disciplines. Due to these applications, many nonlinear properties of artificial intelligence have been extensively studied, including neural networks (NNs) [1,2,3]. Recently, various NNs have received significant interest primarily due to their versatile applications in diverse areas, including optimization, signal and image problems, solving nonlinear dynamical problems, and pattern recognition [4,5,6]. Consequently, many interesting and insightful results concerning NN dynamics, including stability and synchronization analysis, have been published by employing Lyapunov functions and linear matrix inequalities [7,8,9].
As we all know, RVNNs as well as complex-valued neural networks (CVNNs) have been studied extensively due to their vast application in automatic control, parallel computing problems, signal- and image-related problems, speech recognition problems, computer vision, and other fields [3,4,5,10,11,12]. However, RVNNs and CVNNs have certain limitations, especially when dealing with high-dimensional data like color images, 3D and 4D signals, medical imaging, and so on [13,14,15]. Meanwhile, researchers have found that QVNNs can overcome a wide range of issues that RVNNs and CVNNs are not able to handle because they are generalizations of RVNNs as well as CVNNs [16,17,18]. In modern engineering, QVNNs have more advantages than RVNNs and CVNNs, allowing them to be widely applicable in aerospace, satellite tracking, and magnetic resonance imaging. More specifically, quaternions allow for efficient and compact representation of three-dimensional geometric affine transformations. Due to these various applications, researchers have developed QVNNs by utilizing the properties of quaternions. As a result, QVNNs have become a powerful modeling tool and have attracted growing attention from researchers. Recently, numerous researchers have explored QVNNs in greater depth, uncovering many significant findings [19,20,21]. Additionally, most recent research has concentrated on studying dynamical properties of NNs, including stability [3], periodic solutions [6], and bifurcation analysis [9]. While considering the dynamics of NNs, synchronization is essential, ensuring that neurons exhibit the same dynamical behavior simultaneously, enabling the NNs to respond quickly and exactly. In absence of synchronization, neurons may lead to undesirable results [22]. Thus, considering synchronization in NNs is essential for theoretical as well as practical perspectives. Recently, a huge number of synchronization issues have been studied vastly, especially due to their broad uses, such as secure communications, image- and signal-related problems, and biological systems [23,24,25].
Moreover, due to limitations in transmission speed and network bandwidth, time delays are inevitably introduced, which can significantly reduce network performance and compromise stability. This adds complexity to both the theoretical analysis and practical application of NNs. Hence, it is crucial to study time delays’ influence on network dynamics [26,27,28]. Recently, various methods to address time delays have been extensively studied to enhance our understanding of NN dynamics [3,6,8,24,26,27]. On the other hand, multiple approaches, including the MMM, have been applied to analyze NN dynamics. MMM has recently gained popularity for its precision in handling connection weights that include both positive and negative values. Additionally, MMM is highly sensitive and eliminates the need for complex Lyapunov function construction, ensuring results obtained from MMM are more precise and less conservative when compared to those derived from matrix norms. Consequently, MMM has been widely used to examine the dynamics of RVNNs and CVNNs [29,30,31,32,33]. On the other hand, image encryption technology is applicable across numerous domains for protecting sensitive visual data. In healthcare, encrypted medical images ensure patient anonymity during data exchange between hospitals. Moreover, in military operations, encrypted satellite imagery might obstruct adversaries from obtaining vital intelligence. In recent years, IEAs employing chaotic systems have attracted considerable attention from researchers. However, existing IEAs fail to achieve an optimal equilibrium among security and efficiency. To address the limitations of existing algorithms, an IEA employing chaotic systems has attracted considerable attention from scholars. Several different algorithms for encrypting images have been proposed [34,35,36,37,38,39,40]. According to the author’s knowledge, MMM has not been used to examine the GES of QVNNs with non-differentiable discrete time-varying delays through decomposition and non-decomposition methods. Furthermore, the synchronization criteria derived have not been applied to color IEAs. This makes the current work both necessary and valuable for further investigation.
Motivated based on the foregoing discussion, we shall investigate the GES for the class of QVNNs incorporating non-differentiable time-varying delays. Inspired based on the studies in [30,33], by utilizing suitable Lyapunov functions as well as the Halanay inequality, new matrix-measure-based GES criteria for considered QVNNs under designed control are obtained. Also, this paper presents novel results as well as new insights to achieve GES of QVNNs. We conclude by illustrating the feasibility of the obtained synchronization criteria through two numerical examples. As compared to some previous results, this paper has the following significant advantages: (i) For the first time, we employ MMM to investigate the GES for QVNNs with non-differentiable time-varying delays via decomposition and non-decomposition methods; (ii) By utilizing MMM, we obtain several sufficient criteria for the GES of QVNNs, which provides results that are more precise than those obtained using matrix norms; (iii) By using master–slave QVNNs, a new IEA for color images is proposed including different security measures to analyze the performance of the cipher image; (iv) This study presents more general results than previous studies, since they remain valid even when the considered QVNNs are reduced to RVNNs or CVNNs.
The paper follows the following structure: Section 2 presents preliminary information on quaternion algebra, the problem model, activation functions, and time delays. Section 3 provides useful definitions, lemmas, and the main results of the paper: Theorem (1) and Corollary (1) offer GES criteria based on the decomposition method, while Theorem (2) presents GES criteria using the non-decomposition method. Section 4 includes two numerical examples to demonstrate the feasibility of the main results. Section 5 applies the theoretical findings to color image encryption. Finally, Section 6 concludes the paper.

2. Problem Formulation and Preliminaries

2.1. Notations

R , C , and Q represent the sets of all real numbers, complex numbers, and quaternions. R n represents the n-dimensional real space, C n represents the n-dimensional complex space and Q n represents the n-dimensional quaternion space. R n × n , C n × n , and Q n × n represent the sets of all n × n real matrices, complex matrices, and quaternion matrices, respectively. Additionally, P T and P represent the transpose and conjugate transpose of the matrix P , respectively.

2.2. Quaternion Algebra

The quaternion Q is a four-dimensional vector space associated with R and ordered basis consisting of 1, i, j, and k. A quaternion number can be expressed as follows: ϑ = ϑ { 0 } + i ϑ { 1 } + j ϑ { 2 } + k ϑ { 3 } Q , where ϑ { 0 } , ϑ { 1 } , ϑ { 2 } , ϑ { 3 } R . The imaginary terms i, j, and k follow Hamilton’s multiplication rules, which are defined as follows: i 2 = j 2 = k 2 = 1 , j k = k j = i , k i = i k = j , i j = j i = k . These rules illustrate that the quaternion multiplication does not satisfy commutative property. When considering two quaternions ϑ = ϑ { 0 } + i ϑ { 1 } + j ϑ { 2 } + k ϑ { 3 } Q and ϑ ^ = ϑ ^ { 0 } + i ϑ ^ { 1 } + j ϑ ^ { 2 } + k ϑ ^ { 3 } Q , their addition can be described as follows: ϑ + ϑ ^ = ( ϑ { 0 } + ϑ ^ { 0 } ) + i ( ϑ { 1 } + ϑ ^ { 1 } ) + j ( ϑ { 2 } + ϑ ^ { 2 } ) + k ( ϑ { 3 } + ϑ ^ { 3 } ) . The product between them is defined by Hamilton’s multiplication rules as follows: ϑ ϑ ^ = ϑ { 0 } ϑ ^ { 0 } ϑ { 1 } ϑ ^ { 1 } ϑ { 2 } ϑ ^ { 2 } ϑ { 3 } ϑ ^ { 3 } + i ϑ { 0 } ϑ ^ { 1 } + ϑ { 1 } ϑ ^ { 0 } + ϑ { 2 } ϑ ^ { 3 } ϑ { 3 } ϑ ^ { 2 } + j ϑ { 0 } ϑ ^ { 2 } + ϑ { 2 } ϑ ^ { 0 } ϑ { 1 } ϑ ^ { 3 } + ϑ { 3 } ϑ ^ { 1 } + k ϑ { 0 } ϑ ^ { 3 } + ϑ { 3 } ϑ ^ { 0 } + ϑ { 1 } ϑ ^ { 2 } ϑ { 2 } ϑ ^ { 1 } . For any quaternion ϑ = ϑ { 0 } + i ϑ { 1 } + j ϑ { 2 } + k ϑ { 3 } Q , the conjugate of ϑ is symbolized by ϑ , which is defined as ϑ = ϑ { 0 } i ϑ { 1 } j ϑ { 2 } k ϑ { 3 } Q and the norm of ϑ is defined as ϑ = ϑ ϑ = ( ϑ { 0 } ) 2 + ( ϑ { 1 } ) 2 + ( ϑ { 2 } ) 2 + ( ϑ { 3 } ) 2 . More information about quaternions can be found in [13,14,15].

2.3. Problem Formulation

We examine the QVNNs with time-varying delays that are given below:
ϑ ˙ ( ι ) = D ϑ ( ι ) + A ( ϑ ( ι ) ) + B ( ϑ ( ι ( ι ) ) ) + I ,
in which time ι 0 ; ϑ ( ι ) = [ ϑ 1 ( ι ) , ϑ 2 ( ι ) , , ϑ n ( ι ) ] T Q n denotes the neural state vector; D = d i a g [ d 1 , d 2 , , d n ] R n × n , where d u > 0 ( u = 1 , 2 , , n ) denotes the weight matrix for self-feedback neurons; A = ( a u v ) n × n Q n × n and B = ( b u v ) n × n Q n × n denote the weight matrices for interconnection neurons; ( ϑ ( · ) ) = [ 1 ( ϑ 1 ( · ) ) , 2 ( ϑ 2 ( · ) ) , , n ( ϑ n ( · ) ) ] T : Q n Q n denotes the neuron activation; I = [ I 1 , I 2 , , I n ] T Q n denotes the external input vector.
Assumption A1.
Assume that the discrete time-varying delay ( ι ) holds the following condition:
0 ( ι ) ,
where represents a non-negative constant.
Assumption A2.
The function ( · ) satisfies the Lipschitz continuity property, i.e., there is L > 0 such that for any u ˇ , a ˇ Q n , we have
( u ˇ ) ( a ˇ ) ϖ L u ˇ a ˇ ϖ .
The initial state vector of (1) is given as follows:
ϑ ( ξ ) = ψ ( ξ ) , ξ [ ι 0 , ι 0 ] ,
in which ψ ( ξ ) = [ ψ 1 ( ξ ) , ψ 2 ( ξ ) , , ψ n ( ξ ) ] T Q n is continuous on [ ι 0 , ι 0 ] .
To investigate the synchronization behavior of QVNNs [22,33], we define the master QVNNs given in (1), while the corresponding slave QVNNs can be described as follows:
ϑ ^ ˙ ( ι ) = D ϑ ^ ( ι ) + A ( ϑ ^ ( ι ) ) + B ( ϑ ^ ( ι ( ι ) ) ) + I + u ˇ ( ι ) ,
in which ϑ ^ ( ι ) = [ ϑ ^ 1 ( ι ) , ϑ ^ 2 ( ι ) , , ϑ ^ n ( ι ) ] T Q n and u ˇ ( ι ) = [ u ˇ 1 ( ι ) , u ˇ 2 ( ι ) , , u ˇ n ( ι ) ] T Q n represent the neural state and control input vector, separately; D , A , B , I and ( ι ) are similar to those mentioned in the QVNNs (1).
The initial state vector of (3) is given as follows:
ϑ ^ ( ξ ) = ψ ^ ( ξ ) , ξ [ ι 0 , ι 0 ] ,
in which ψ ^ ( ξ ) = [ ψ ^ 1 ( ξ ) , ψ ^ 2 ( ξ ) , , ψ ^ n ( ξ ) ] T Q n is continuous on [ ι 0 , ι 0 ] .

3. Main Results

Here, we derive matrix-measure-based GES criteria for the master–slave QVNNs (1) and (3) based on decomposition and non-decomposition methods.

3.1. Decomposition-Method-Based Synchronization Analysis

Initially, we express the QVNNs in (1) into the following RVNNs by dividing them into their real and imaginary components:  
ϑ ˙ { 0 } ( ι ) = D ϑ { 0 } ( ι ) + A { 0 } { 0 } ( ϑ ( ι ) ) A { 1 } { 1 } ( ϑ ( ι ) ) A { 2 } { 2 } ( ϑ ( ι ) ) A { 3 } { 3 } ( ϑ ( ι ) ) + B { 0 } { 0 } ( ϑ ( ι ( ι ) ) ) B { 1 } { 1 } ( ϑ ( ι ( ι ) ) ) B { 2 } { 2 } ( ϑ ( ι ( ι ) ) ) B { 3 } { 3 } ( ϑ ( ι ( ι ) ) ) + I { 0 } ϑ ˙ { 1 } ( ι ) = D ϑ { 1 } ( ι ) + A { 0 } { 1 } ( ϑ ( ι ) ) + A { 1 } { 0 } ( ϑ ( ι ) ) + A { 2 } { 3 } ( ϑ ( ι ) ) A { 3 } { 2 } ( ϑ ( ι ) ) + B { 0 } { 1 } ( ϑ ( ι ( ι ) ) ) + B { 1 } { 0 } ( ϑ ( ι ( ι ) ) ) + B { 2 } { 3 } ( ϑ ( ι ( ι ) ) ) B { 3 } { 2 } ( ϑ ( ι ( ι ) ) ) + I { 1 } ϑ ˙ { 2 } ( ι ) = D ϑ { 2 } ( ι ) + A { 0 } { 2 } ( ϑ ( ι ) ) + A { 2 } { 0 } ( ϑ ( ι ) ) A { 1 } { 3 } ( ϑ ( ι ) ) + A { 3 } { 1 } ( ϑ ( ι ) ) + B { 0 } { 2 } ( ϑ ( ι ( ι ) ) ) + B { 2 } { 0 } ( ϑ ( ι ( ι ) ) ) B { 1 } { 3 } ( ϑ ( ι ( ι ) ) ) + B { 3 } { 1 } ( ϑ ( ι ( ι ) ) ) + I { 2 } ϑ ˙ { 3 } ( ι ) = D ϑ { 3 } ( ι ) + A { 0 } { 3 } ( ϑ ( ι ) ) + A { 3 } { 0 } ( ϑ ( ι ) ) + A { 1 } { 2 } ( ϑ ( ι ) ) A { 2 } { 1 } ( ϑ ( ι ) ) + B { 0 } { 3 } ( ϑ ( ι ( ι ) ) ) + B { 3 } { 0 } ( ϑ ( ι ( ι ) ) ) + B { 1 } { 2 } ( ϑ ( ι ( ι ) ) ) B { 2 } { 1 } ( ϑ ( ι ( ι ) ) ) + I { 3 } ,
in which ϑ { 0 } ( ι ) = R e ( ϑ ( ι ) ) , ϑ { 1 } ( ι ) = I m ( ϑ ( ι ) ) , ϑ { 2 } ( ι ) = I m ( ϑ ( ι ) ) , ϑ { 3 } ( ι ) = I m ( ϑ ( ι ) ) . We denote the real and imaginary components using the superscripts { 0 } , { 1 } , { 2 } , and { 3 } , where A { 0 } = R e ( A ) , A { 1 } = I m ( A ) , A { 2 } = I m ( A ) , A { 3 } = I m ( A ) .
The initial state of (5) is given as follows:
ϑ { 0 } ( ξ ) = ψ { 0 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ { 1 } ( ξ ) = ψ { 1 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ { 2 } ( ξ ) = ψ { 2 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ { 3 } ( ξ ) = ψ { 3 } ( ξ ) , ξ [ ι 0 , ι 0 ] ,
in which ψ { 0 } ( ξ ) = R e ( ψ ( ξ ) ) , ψ { 1 } ( ξ ) = I m ( ψ ( ξ ) ) , ψ { 2 } ( ξ ) = I m ( ψ ( ξ ) ) , ψ { 3 } ( ξ ) = I m ( ψ ( ξ ) ) , and ψ η ϖ = sup ξ [ ι 0 , ι 0 ] ψ η ( ξ ) ϖ is the norm function of ψ η C ( [ ι 0 , ι 0 ] , R n ) , where ϖ = 1 , 2 , and η = { 0 } , { 1 } , { 2 } , { 3 } .
Assumption A3.
The functions { 0 } ( · ) , { 1 } ( · ) , { 2 } ( · ) and { 3 } ( · ) hold the Lipschitz continuity property, i.e., there exist L { 0 } > 0 , L { 1 } > 0 , L { 2 } > 0 and L { 3 } > 0 such that for any u ˇ , a ˇ Q n , we have
{ 0 } ( u ˇ ) { 0 } ( a ˇ ) ϖ L { 0 } u ˇ a ˇ ϖ { 1 } ( u ˇ ) { 1 } ( a ˇ ) ϖ L { 1 } u ˇ a ˇ ϖ { 2 } ( u ˇ ) { 2 } ( a ˇ ) ϖ L { 2 } u ˇ a ˇ ϖ { 3 } ( u ˇ ) { 3 } ( a ˇ ) ϖ L { 3 } u ˇ a ˇ ϖ .
Correspondingly, we divide the QVNNs in (3) into their real and imaginary components as follows:  
ϑ ^ ˙ { 0 } ( ι ) = D ϑ ^ { 0 } ( ι ) + A { 0 } { 0 } ( ϑ ^ ( ι ) ) A { 1 } { 1 } ( ϑ ^ ( ι ) ) A { 2 } { 2 } ( ϑ ^ ( ι ) ) A { 3 } { 3 } ( ϑ ^ ( ι ) ) + B { 0 } { 0 } ( ϑ ^ ( ι ( ι ) ) ) B { 1 } { 1 } ( ϑ ^ ( ι ( ι ) ) ) B { 2 } { 2 } ( ϑ ^ ( ι ( ι ) ) ) B { 3 } { 3 } ( ϑ ^ ( ι ( ι ) ) ) + I { 0 } + u ˇ { 0 } ( ι ) ϑ ^ ˙ { 1 } ( ι ) = D ϑ ^ { 1 } ( ι ) + A { 0 } { 1 } ( ϑ ^ ( ι ) ) + A { 1 } { 0 } ( ϑ ^ ( ι ) ) + A { 2 } { 3 } ( ϑ ^ ( ι ) ) A { 3 } { 2 } ( ϑ ^ ( ι ) ) + B { 0 } { 1 } ( ϑ ^ ( ι ( ι ) ) ) + B { 1 } { 0 } ( ϑ ^ ( ι ( ι ) ) ) + B { 2 } { 3 } ( ϑ ^ ( ι ( ι ) ) ) B { 3 } { 2 } ( ϑ ^ ( ι ( ι ) ) ) + I { 1 } + u ˇ { 1 } ( ι ) ϑ ^ ˙ { 2 } ( ι ) = D ϑ ^ { 2 } ( ι ) + A { 0 } { 2 } ( ϑ ^ ( ι ) ) + A { 2 } { 0 } ( ϑ ^ ( ι ) ) A { 1 } { 3 } ( ϑ ^ ( ι ) ) + A { 3 } { 1 } ( ϑ ^ ( ι ) ) + B { 0 } { 2 } ( ϑ ^ ( ι ( ι ) ) ) + B { 2 } { 0 } ( ϑ ^ ( ι ( ι ) ) ) B { 1 } { 3 } ( ϑ ^ ( ι ( ι ) ) ) + B { 3 } { 1 } ( ϑ ^ ( ι ( ι ) ) ) + I { 2 } + u ˇ { 2 } ( ι ) ϑ ^ ˙ { 3 } ( ι ) = D ϑ ^ { 3 } ( ι ) + A { 0 } { 3 } ( ϑ ^ ( ι ) ) + A { 3 } { 0 } ( ϑ ^ ( ι ) ) + A { 1 } { 2 } ( ϑ ^ ( ι ) ) A { 2 } { 1 } ( ϑ ^ ( ι ) ) + B { 0 } { 3 } ( ϑ ^ ( ι ( ι ) ) ) + B { 3 } { 0 } ( ϑ ^ ( ι ( ι ) ) ) + B { 1 } { 2 } ( ϑ ^ ( ι ( ι ) ) ) B { 2 } { 1 } ( ϑ ^ ( ι ( ι ) ) ) + I { 3 } + u ˇ { 3 } ( ι ) ,
where ϑ ^ { 0 } ( ι ) = R e ( ϑ ^ ( ι ) ) , ϑ ^ { 1 } ( ι ) = I m ( ϑ ^ ( ι ) ) , ϑ ^ { 2 } ( ι ) = I m ( ϑ ^ ( ι ) ) , ϑ ^ { 3 } ( ι ) = I m ( ϑ ^ ( ι ) ) and u ˇ { 0 } ( ι ) , u ˇ { 1 } ( ι ) , u ˇ { 2 } ( ι ) , u ˇ { 3 } ( ι ) represent the control input vectors.
The initial state of (7) is given as follows:
ϑ ^ { 0 } ( ξ ) = ψ ^ { 0 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ ^ { 1 } ( ξ ) = ψ ^ { 1 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ ^ { 2 } ( ξ ) = ψ ^ { 2 } ( ξ ) , ξ [ ι 0 , ι 0 ] ϑ ^ { 3 } ( ξ ) = ψ ^ { 3 } ( ξ ) , ξ [ ι 0 , ι 0 ] ,
in which ψ ^ { 0 } ( ξ ) = R e ( ψ ^ ( ξ ) ) , ψ ^ { 1 } ( ξ ) = I m ( ψ ^ ( ξ ) ) , ψ ^ { 2 } ( ξ ) = I m ( ψ ^ ( ξ ) ) , ψ ^ { 3 } ( ξ ) = I m ( ψ ^ ( ξ ) ) .
For designing control inputs, this paper adopts the linear combination of the state differences between the master–slave QVNNs in (5) and (7), expressed as follows:
u ˇ 1 { 0 } ( ι ) . . . u ˇ n { 0 } ( ι ) = v = 1 n θ 1 v ( ϑ ^ v { 0 } ( ι ) ϑ v { 0 } ( ι ) ) . . . v = 1 n θ n v ( ϑ ^ v { 0 } ( ι ) ϑ v { 0 } ( ι ) ) = θ 11 . . . θ 1 n . . . . . . . . . θ n 1 . . . θ n n ϑ ^ 1 { 0 } ( ι ) ϑ 1 { 0 } ( ι ) . . . ϑ ^ n { 0 } ( ι ) ϑ n { 0 } ( ι ) u ˇ { 0 } ( ι ) = Θ ( ϑ ^ { 0 } ( ι ) ϑ { 0 } ( ι ) ) ,
u ˇ 1 { 1 } ( ι ) . . . u ˇ n { 1 } ( ι ) = v = 1 n θ 1 v ( ϑ ^ v { 1 } ( ι ) ϑ v { 1 } ( ι ) ) . . . v = 1 n θ n v ( ϑ ^ v { 1 } ( ι ) ϑ v { 1 } ( ι ) ) = θ 11 . . . θ 1 n . . . . . . . . . θ n 1 . . . θ n n ϑ ^ 1 { 1 } ( ι ) ϑ 1 { 1 } ( ι ) . . . ϑ ^ n { 1 } ( ι ) ϑ n { 1 } ( ι ) u ˇ { 1 } ( ι ) = Θ ( ϑ ^ { 1 } ( ι ) ϑ { 1 } ( ι ) ) ,
u ˇ 1 { 2 } ( ι ) . . . u ˇ n { 2 } ( ι ) = v = 1 n θ 1 v ( ϑ ^ v { 2 } ( ι ) ϑ v { 2 } ( ι ) ) . . . v = 1 n θ n v ( ϑ ^ v { 2 } ( ι ) ϑ v { 2 } ( ι ) ) = θ 11 . . . θ 1 n . . . . . . . . . θ n 1 . . . θ n n ϑ ^ 1 { 2 } ( ι ) ϑ 1 { 2 } ( ι ) . . . ϑ ^ n { 2 } ( ι ) ϑ n { 2 } ( ι ) u ˇ { 2 } ( ι ) = Θ ( ϑ ^ { 2 } ( ι ) ϑ { 2 } ( ι ) ) ,
u ˇ 1 { 3 } ( ι ) . . . u ˇ n { 3 } ( ι ) = v = 1 n θ 1 v ( ϑ ^ v { 3 } ( ι ) ϑ v { 3 } ( ι ) ) . . . v = 1 n θ n v ( ϑ ^ v { 3 } ( ι ) ϑ v { 3 } ( ι ) ) = θ 11 . . . θ 1 n . . . . . . . . . θ n 1 . . . θ n n ϑ ^ n { 3 } ( ι ) ϑ n { 3 } ( ι ) . . . ϑ ^ n { 3 } ( ι ) ϑ n { 3 } ( ι ) u ˇ { 3 } ( ι ) = Θ ( ϑ ^ { 3 } ( ι ) ϑ { 3 } ( ι ) ) ,
in which Θ R n × n represents the controller gain matrix.
Here, we introduce the synchronization error systems for the master–slave QVNNs (5) and (7) with the control law (9)–(12) as follows:
ϱ ˙ { 0 } ( ι ) = D ϱ { 0 } ( ι ) + A { 0 } { 0 } ( ϱ ( ι ) ) A { 1 } { 1 } ( ϱ ( ι ) ) A { 2 } { 2 } ( ϱ ( ι ) ) A { 3 } { 3 } ( ϱ ( ι ) ) + B { 0 } { 0 } ( ϱ ( ι ( ι ) ) ) B { 1 } { 1 } ( ϱ ( ι ( ι ) ) ) B { 2 } { 2 } ( ϱ ( ι ( ι ) ) ) B { 3 } { 3 } ( ϱ ( ι ( ι ) ) ) + Θ ϱ { 0 } ( ι ) ϱ ˙ { 1 } ( ι ) = D ϱ { 1 } ( ι ) + A { 0 } { 1 } ( ϱ ( ι ) ) + A { 1 } { 0 } ( ϱ ( ι ) ) + A { 2 } { 3 } ( ϱ ( ι ) ) A { 3 } { 2 } ( ϱ ( ι ) ) + B { 0 } { 1 } ( ϱ ( ι ( ι ) ) ) + B { 1 } { 0 } ( ϱ ( ι ( ι ) ) ) + B { 2 } { 3 } ( ϱ ( ι ( ι ) ) ) B { 3 } { 2 } ( ϱ ( ι ( ι ) ) ) + Θ ϱ { 1 } ( ι ) ϱ ˙ { 2 } ( ι ) = D ϱ { 2 } ( ι ) + A { 0 } { 2 } ( ϱ ( ι ) ) + A { 2 } { 0 } ( ϱ ( ι ) ) A { 1 } { 3 } ( ϱ ( ι ) ) + A { 3 } { 1 } ( ϱ ( ι ) ) + B { 0 } { 2 } ( ϱ ( ι ( ι ) ) ) + B { 2 } { 0 } ( ϱ ( ι ( ι ) ) ) B { 1 } { 3 } ( ϱ ( ι ( ι ) ) ) + B { 3 } { 1 } ( ϱ ( ι ( ι ) ) ) + Θ ϱ { 2 } ( ι ) ϱ ˙ { 3 } ( ι ) = D ϱ { 3 } ( ι ) + A { 0 } { 3 } ( ϱ ( ι ) ) + A { 3 } { 0 } ( ϱ ( ι ) ) + A { 1 } { 2 } ( ϱ ( ι ) ) A { 2 } { 1 } ( ϱ ( ι ) ) + B { 0 } { 3 } ( ϱ ( ι ( ι ) ) ) + B { 3 } { 0 } ( ϱ ( ι ( ι ) ) ) + B { 1 } { 2 } ( ϱ ( ι ( ι ) ) ) B { 2 } { 1 } ( ϱ ( ι ( ι ) ) ) + Θ ϱ { 3 } ( ι ) ,
in which ϱ { 0 } ( ι ) = ϑ ^ { 0 } ( ι ) ϑ { 0 } ( ι ) , ϱ { 1 } ( ι ) = ϑ ^ { 1 } ( ι ) ϑ { 1 } ( ι ) , ϱ { 2 } ( ι ) = ϑ ^ { 2 } ( ι ) ϑ { 2 } ( ι ) , ϱ { 3 } ( ι ) = ϑ ^ { 3 } ( ι ) ϑ { 3 } ( ι ) , ϱ { 0 } ( ι ) = [ ϱ 1 { 0 } ( ι ) , , ϱ n { 0 } ( ι ) ] T R n , ϱ { 1 } ( ι ) = [ ϱ 1 { 1 } ( ι ) , , ϱ n { 1 } ( ι ) ] T R n , ϱ { 2 } ( ι ) = [ ϱ 1 { 2 } ( ι ) , , ϱ n { 2 } ( ι ) ] T R n , ϱ { 3 } ( ι ) = [ ϱ 1 { 3 } ( ι ) , , ϱ n { 3 } ( ι ) ] T R n , { 0 } ( ϱ ( ι ) ) = { 0 } ( ϑ ^ ( ι ) ) { 0 } ( ϑ ( ι ) ) , { 1 } ( ϱ ( ι ) ) = { 1 } ( ϑ ^ ( ι ) ) { 1 } ( ϑ ( ι ) ) , { 2 } ( ϱ ( ι ) ) = { 2 } ( ϑ ^ ( ι ) ) { 2 } ( ϑ ( ι ) ) , { 3 } ( ϱ ( ι ) ) = { 3 } ( ϑ ^ ( ι ) ) { 3 } ( ϑ ( ι ) ) , { 0 } ( ϱ ( ι ( ι ) ) ) = { 0 } ( ϑ ^ ( ι ( ι ) ) ) { 0 } ( ϑ ( ι ( ι ) ) ) , { 1 } ( ϱ ( ι ( ι ) ) ) = { 1 } ( ϑ ^ ( ι ( ι ) ) ) { 1 } ( ϑ ( ι ( ι ) ) ) , { 2 } ( ϱ ( ι ( ι ) ) ) = { 2 } ( ϑ ^ ( ι ( ι ) ) ) { 2 } ( ϑ ( ι ( ι ) ) ) , { 3 } ( ϱ ( ι ( ι ) ) ) = { 3 } ( ϑ ^ ( ι ( ι ) ) ) { 3 } ( ϑ ( ι ( ι ) ) ) .

3.2. Preliminaries

Definition 1
([41,42]). For any ϑ = [ ϑ 1 , ϑ 2 , , ϑ n ] T R n , the real vector norm is given as follows:  
ϑ 1 = v = 1 n | ϑ v | , ϑ 2 = v = 1 n ϑ v 2 , ϑ = max 1 v n | ϑ v | .
Likewise, for any ϑ = [ ϑ 1 , ϑ 2 , , ϑ n ] T Q n , where ϑ = ϑ { 0 } + i ϑ { 1 } + j ϑ { 2 } + k ϑ { 3 } , the quaternion vector norm is given as follows:
ϑ 1 = v = 1 n | ϑ v | , ϑ 2 = ϑ ϑ = v = 1 n ϑ v { 0 } 2 + ϑ v { 1 } 2 + ϑ v { 2 } 2 + ϑ v { 3 } 2 , ϑ = max 1 v n | ϑ v | .
For constant matrix S = ( s u v ) n × n R n × n , the matrix norm is defined as follows:
S 1 = max v u = 1 n | s u v | , S 2 = λ m a x ( S T S ) , S = max u v = 1 n | s u v | .
Likewise, for constant matrix S = ( s u v ) n × n Q n × n , the matrix norm is defined as follows:
S 1 = max v u = 1 n | s u v | , S 2 = λ m a x ( S S ) , S = max u v = 1 n | s u v | .
Definition 2
([41,42]). The matrix measure of a matrix S = ( s u v ) n × n R n × n is as follows:
μ ϖ ( S ) = lim 0 + I + S ϖ 1 ,
in which · ϖ ( ϖ = 1 , 2 , ) denotes the matrix norm on R n × n , and I is an n × n identity matrix.
We can determine the matrix measure through simple mathematics as follows:  
μ 1 ( S ) = max v s v v + u = 1 , u v n | s u v | , μ 2 ( S ) = 1 2 λ m a x ( S + S T ) , μ ( S ) = max u s u u + v = 1 , v u n | s u v | .
Definition 3
([41,42]). The matrix measure of a matrix S = ( s u v ) n × n Q n × n is as follows:
μ ϖ ( S ) = lim 0 + I + S ϖ 1 ,
in which · ϖ ( ϖ = 1 , 2 , ) denotes the matrix norm on Q n × n , and I is an n × n identity matrix.
We can obtain the matrix measure based on simple mathematics as follows:
μ 1 ( S ) = max v R e ( s v v ) + u = 1 , u v n | s u v | , μ 2 ( S ) = 1 2 λ m a x ( S + S ) , μ ( S ) = max u R e ( s u u ) + v = 1 , v u n | s u v | .
Definition 4
([33,43]). The master–slave QVNNs (1) and (3) are GES if scalars h > 1 and λ > 0 exist; then,
ϑ ^ ( ι ) ϑ ( ι ) ϖ h ψ ^ ( ι ) ψ ( ι ) ϖ exp λ ( ι ι 0 ) , ι ι 0 .
Definition 5
([33,43]). The master–slave QVNNs (5) and (7) are GES if scalars h > 1 and λ > 0 exist; then,
ϑ ^ { 0 } ( ι ) ϑ { 0 } ( ι ) ϖ + ϑ ^ { 1 } ( ι ) ϑ { 1 } ( ι ) ϖ + ϑ ^ { 2 } ( ι ) ϑ { 2 } ( ι ) ϖ + ϑ ^ { 3 } ( ι ) ϑ { 3 } ( ι ) ϖ h sup [ ι 0 , ι 0 ] ( ψ ^ { 0 } ( ι ) ψ { 0 } ( ι ) ϖ + ψ ^ { 1 } ( ι ) ψ { 1 } ( ι ) ϖ + ψ ^ { 2 } ( ι ) ψ { 2 } ( ι ) ϖ + ψ ^ { 3 } ( ι ) ψ { 3 } ( ι ) ϖ ) exp λ ( ι ι 0 ) , ι ι 0 .
Lemma 1
([33,41]). As defined in Definition (2), the matrix measure μ ϖ ( · ) includes the following:
(1) 
S ϖ μ ϖ ( S ) S ϖ , S R n × n ,
(2) 
μ ϖ ( α S ) = α μ ϖ ( S ) , S R n × n , α > 0 ,
(3) 
μ ϖ ( S 1 + S 2 ) μ ϖ ( S 1 ) + μ ϖ ( S 2 ) , S 1 , S 2 R n × n .
Lemma 2
([33,41]). Suppose c 1 and c 2 are constants with c 1 > c 2 > 0 and a non-negative continuous function ϑ ( ι ) defined on the interval [ ι 0 , + ] which satisfies, for all ι ι 0 ,
D + ϑ ( ι ) c 1 ϑ ( ι ) + c 2 ϑ ¯ ( ι ) ,
where ϑ ¯ ( ι ) = sup ι ξ ι ϑ ( ξ ) . Then,
ϑ ( ι ) ϑ ¯ ( ι ) exp λ ( ι ι 0 ) ,
in which λ > 0 denotes the unique solution of
λ = c 1 c 2 exp λ ,
and D + ϑ ( ι ) is defined as
D + ϑ ( ι ) = lim 0 + ϑ ( ι + ) ϑ ( ι ) .
Remark 1.
As is well known, there are notable distinctions among matrix norms and matrix measures. But matrix norms are limited because of its non-negative values, while matrix measures can provide positive as well as negative values. Additionally, MMM is sensitive to sign; specifically, μ ϖ ( S ) μ ϖ ( S ) in general, whereas S ϖ S ϖ . Due to its special properties, MMM provides many advantages over matrix norms in terms of obtaining precise results.
We have adopted the following notations in order to simplify the resulting parts: ϱ = ϱ ( ι ) , ϱ = ϱ ( ι ( ι ) ) , ϱ { 0 } = ϱ { 0 } ( ι ) , ϱ { 0 } = ϱ { 0 } ( ι ( ι ) ) , { 0 } ( ϱ ) = { 0 } ( ϱ ( ι ) ) , { 0 } ( ϱ ) = { 0 } ( ϱ ( ι ( ι ) ) ) , ϱ { 1 } = ϱ { 1 } ( ι ) , ϱ { 1 } = ϱ { 1 } ( ι ( ι ) ) , { 1 } ( ϱ ) = { 1 } ( ϱ ( ι ) ) , { 1 } ( ϱ ) = { 1 } ( ϱ ( ι ( ι ) ) ) , ϱ { 2 } = ϱ { 2 } ( ι ) , ϱ { 2 } = ϱ { 2 } ( ι ( ι ) ) , { 2 } ( ϱ ) = { 2 } ( ϱ ( ι ) ) , { 2 } ( ϱ ) = { 2 } ( ϱ ( ι ( ι ) ) ) , ϱ { 3 } = ϱ { 3 } ( ι ) , ϱ { 3 } = ϱ { 3 } ( ι ( ι ) ) , { 3 } ( ϱ ) = { 3 } ( ϱ ( ι ) ) , { 3 } ( ϱ ) = { 3 } ( ϱ ( ι ( ι ) ) ) .
The subsequent Theorem (1) derives the GES criteria between the master–slave QVNNs (5) and (7) by choosing appropriate controller gain matrix Θ via decomposition method.
Theorem 1.
Under Assumptions 1 and 3, QVNNs (5) are GES with QVNNs (7), if the control gain matrix Θ satisfies
μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ > L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ > 0 ,
where ϖ = 1 , 2 , .
Proof. 
The Lyapunov functions are defined as follows:
V ( ϱ ) = V 1 ( ϱ ) + V 2 ( ϱ ) + V 3 ( ϱ ) + V 4 ( ϱ ) ,
where
V 1 ( ϱ ) = ϱ { 0 } ϖ , V 2 ( ϱ ) = ϱ { 1 } ϖ , V 3 ( ϱ ) = ϱ { 2 } ϖ , V 4 ( ϱ ) = ϱ { 3 } ϖ .
Utilizing the Dini derivation and Taylor’s theorem based on Peano’s remainder form, we can estimate the Dini derivative of V 1 ( ϱ ) along the QVNNs described in (13) as follows:  
D + V 1 ( ϱ ) = lim 0 + ϱ { 0 } ( ι + ) ϖ ϱ { 0 } ϖ = lim 0 + ϱ { 0 } + ϱ ˙ { 0 } + o ( ) ϖ ϱ { 0 } ϖ = lim 0 + ( ϱ { 0 } + ( D ϱ { 0 } + A { 0 } { 0 } ( ϱ ) A { 1 } { 1 } ( ϱ ) A { 2 } { 2 } ( ϱ ) A { 3 } { 3 } ( ϱ ) + B { 0 } { 0 } ( ϱ ) B { 1 } { 1 } ( ϱ ) B { 2 } { 2 } ( ϱ ) B { 3 } { 3 } ( ϱ ) + Θ ϱ { 0 } ) + o ( ) ϖ ϱ { 0 } ϖ ) / lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 0 } ϖ + A { 0 } ϖ { 0 } ( ϱ ) ϖ + A { 1 } ϖ { 1 } ( ϱ ) ϖ + A { 2 } ϖ { 2 } ( ϱ ) ϖ + A { 3 } ϖ { 3 } ( ϱ ) ϖ + B { 0 } ϖ { 0 } ( ϱ ) ϖ + B { 1 } ϖ { 1 } ( ϱ ) ϖ + B { 2 } ϖ { 2 } ( ϱ ) ϖ + B { 3 } ϖ { 3 } ( ϱ ) ϖ .
By the similar techniques, the Dini derivative of V 2 ( ϱ ) , V 3 ( ϱ ) , V 4 ( ϱ ) along the QVNNs (13) can be estimated by
D + V 2 ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 1 } ϖ + A { 0 } ϖ { 1 } ( ϱ ) ϖ + A { 1 } ϖ { 0 } ( ϱ ) ϖ + A { 2 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 2 } ( ϱ ) ϖ + B { 0 } ϖ { 1 } ( ϱ ) ϖ + B { 1 } ϖ { 0 } ( ϱ ) ϖ + B { 2 } ϖ { 3 } ( ϱ ) ϖ + B { 3 } ϖ { 2 } ( ϱ ) ϖ
D + V 3 ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 2 } ϖ + A { 0 } ϖ { 2 } ( ϱ ) ϖ + A { 2 } ϖ { 0 } ( ϱ ) ϖ + A { 1 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 1 } ( ϱ ) ϖ + B { 0 } ϖ { 2 } ( ϱ ) ϖ + B { 2 } ϖ { 0 } ( ϱ ) ϖ + B { 1 } ϖ { 3 } ( ϱ ) ϖ + B { 3 } ϖ { 1 } ( ϱ ) ϖ
D + V 4 ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 3 } ϖ + A { 0 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 0 } ( ϱ ) ϖ + A { 1 } ϖ { 2 } ( ϱ ) ϖ + A { 2 } ϖ { 1 } ( ϱ ) ϖ + B { 0 } ϖ { 3 } ( ϱ ) ϖ + B { 3 } ϖ { 0 } ( ϱ ) ϖ + B { 1 } ϖ { 2 } ( ϱ ) ϖ + B { 2 } ϖ { 1 } ( ϱ ) ϖ
The combinations of (16)–(19) deduce to  
D + V ( ϱ ) = D + V 1 ( ϱ ) + D + V 2 ( ϱ ) + D + V 3 ( ϱ ) + D + V 4 ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 0 } ϖ + A { 0 } ϖ { 0 } ( ϱ ) ϖ + A { 1 } ϖ { 1 } ( ϱ ) ϖ + A { 2 } ϖ { 2 } ( ϱ ) ϖ + A { 3 } ϖ { 3 } ( ϱ ) ϖ + B { 0 } ϖ { 0 } ( ϱ ) ϖ + B { 1 } ϖ { 1 } ( ϱ ) ϖ + B { 2 } ϖ { 2 } ( ϱ ) ϖ + B { 3 } ϖ { 3 } ( ϱ ) ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 1 } ϖ + A { 0 } ϖ { 1 } ( ϱ ) ϖ + A { 1 } ϖ { 0 } ( ϱ ) ϖ + A { 2 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 2 } ( ϱ ) ϖ + B { 0 } ϖ { 1 } ( ϱ ) ϖ + B { 1 } ϖ { 0 } ( ϱ ) ϖ + B { 2 } ϖ { 3 } ( ϱ ) ϖ + B { 3 } ϖ { 2 } ( ϱ ) ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 2 } ϖ + A { 0 } ϖ { 2 } ( ϱ ) ϖ + A { 2 } ϖ { 0 } ( ϱ ϖ + A { 1 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 1 } ( ϱ ) ϖ + B { 0 } ϖ { 2 } ( ϱ ) ϖ + B { 2 } ϖ { 0 } ( ϱ ) ϖ + B { 1 } ϖ { 3 } ( ϱ ) ϖ + B { 3 } ϖ { 1 } ( ϱ ) ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 3 } ϖ + A { 0 } ϖ { 3 } ( ϱ ) ϖ + A { 3 } ϖ { 0 } ( ϱ ) ϖ + B { 3 } ϖ { 2 } ( ϱ ) ϖ + B { 1 } ϖ { 1 } ( ϱ ) ϖ + B { 2 } ϖ { 3 } ( ϱ ) ϖ
By using Assumption 3, we have
{ 0 } ( ϱ ) ϖ L { 0 } ϱ ϖ = L { 0 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 1 } ( ϱ ) ϖ L { 1 } ϱ ϖ = L { 1 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 2 } ( ϱ ) ϖ L { 2 } ϱ ϖ = L { 2 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 3 } ( ϱ ) ϖ L { 3 } ϱ ϖ = L { 3 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 0 } ( ϱ ) ϖ L { 0 } ϱ ϖ = L { 0 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 1 } ( ϱ ) ϖ L { 1 } ϱ ϖ = L { 1 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 2 } ( ϱ ) ϖ L { 2 } ϱ ϖ = L { 2 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ,
{ 3 } ( ϱ ) ϖ L { 3 } ϱ ϖ = L { 3 } ϱ { 0 } + i ϱ { 1 } + j ϱ { 2 } + k ϱ { 3 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ .
By substituting (21)–(28) into (20), we obtain  
D + V ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 0 } ϖ + A { 0 } ϖ L { 0 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 1 } ϖ L { 1 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 2 } ϖ L { 2 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 3 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 1 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 2 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 3 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 1 } ϖ + A { 0 } ϖ L { 1 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 1 } ϖ L { 0 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 2 } ϖ L { 3 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 3 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 1 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 2 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 3 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 2 } ϖ + A { 0 } ϖ L { 2 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 2 } ϖ L { 0 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 1 } ϖ L { 3 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 3 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 2 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 1 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 3 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 3 } ϖ + A { 0 } ϖ L { 3 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 3 } ϖ L { 0 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 1 } ϖ L { 2 } ( ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ) + A { 2 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 3 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 3 } ϖ L { 0 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 1 } ϖ L { 2 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 2 } ϖ L { 1 } ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ = lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 0 } ϖ + ( A { 0 } ϖ L { 0 } + A { 1 } ϖ L { 1 } + A { 2 } ϖ L { 2 } + A { 3 } ϖ L { 3 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 0 } + B { 1 } ϖ L { 1 } + B { 2 } ϖ L { 2 } + B { 3 } ϖ L { 3 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 1 } ϖ + ( A { 0 } ϖ L { 1 } + A { 1 } ϖ L { 0 } + A { 2 } ϖ L { 3 } + A { 3 } ϖ L { 2 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 1 } + B { 1 } ϖ L { 0 } + B { 2 } ϖ L { 3 } + B { 3 } ϖ L { 2 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 2 } ϖ + ( A { 0 } ϖ L { 2 } + A { 2 } ϖ L { 0 } + A { 1 } ϖ L { 3 } + A { 3 } ϖ L { 1 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 2 } + B { 2 } ϖ L { 0 } + B { 1 } ϖ L { 3 } + B { 3 } ϖ L { 1 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + lim 0 + I + ( D + Θ ) ϖ 1 ϱ { 3 } ϖ + ( A { 0 } ϖ L { 3 } + A { 3 } ϖ L { 0 } + A { 1 } ϖ L { 2 } + A { 2 } ϖ L { 1 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + B { 0 } ϖ L { 3 } + B { 3 } ϖ L { 0 } + B { 1 } ϖ L { 2 } + B { 2 } ϖ L { 1 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ .
Using Definition (2) and (29), we obtain
D + V ( ϱ ) μ ϖ ( D + Θ ) ϱ { 0 } ϖ + A { 0 } ϖ L { 0 } + A { 1 } ϖ L { 1 } + A { 2 } ϖ L { 2 } + A { 3 } ϖ L { 3 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + ( B { 0 } ϖ L { 0 } + B { 1 } ϖ L { 1 } + B { 2 } ϖ L { 2 } + B { 3 } ϖ L { 3 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + μ ϖ ( D + Θ ) ϱ { 1 } ϖ + A { 0 } ϖ L { 1 } + A { 1 } ϖ L { 0 } + A { 2 } ϖ L { 3 } + A { 3 } ϖ L { 2 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + ( B { 0 } ϖ L { 1 } + B { 1 } ϖ L { 0 } + B { 2 } ϖ L { 3 } + B { 3 } ϖ L { 2 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + μ ϖ ( D + Θ ) ϱ { 2 } ϖ + A { 0 } ϖ L { 2 } + A { 2 } ϖ L { 0 } + A { 1 } ϖ L { 3 } + A { 3 } ϖ L { 1 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + ( B { 0 } ϖ L { 2 } + B { 2 } ϖ L { 0 } + B { 1 } ϖ L { 3 } + B { 3 } ϖ L { 1 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + μ ϖ ( D + Θ ) ϱ { 3 } ϖ + A { 0 } ϖ L { 3 } + A { 3 } ϖ L { 0 } + A { 1 } ϖ L { 2 } + A { 2 } ϖ L { 1 } × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + ( B { 0 } ϖ L { 3 } + B { 3 } ϖ L { 0 } + B { 1 } ϖ L { 2 } + B { 2 } ϖ L { 1 } ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ( μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } ( A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ ) ) ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ + ( L { 0 } + L { 1 } + L { 2 } + L { 3 } ) B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ × ϱ { 0 } ϖ + ϱ { 1 } ϖ + ϱ { 2 } ϖ + ϱ { 3 } ϖ ( μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } ( A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ ) ) V ( ϱ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } ( B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ ) V ( ϱ ) ( μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } ( A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ ) ) V ( ϱ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } ( B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ ) sup ι ξ ι V ( ϱ ( ξ ) ) .
Let
c 1 = μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ , c 2 = L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ .
On the basis of (14), we have c 1 > c 2 > 0 . It follows from Lemma (2) that
V ( ϱ ) sup ι 0 ξ ι 0 V ( ϱ ( ξ ) ) exp λ ( ι ι 0 ) ,
where λ represents the unique positive solution of
λ = c 1 c 2 exp λ = μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ exp λ .
As stated in Definition (5), V ( ϱ ) decays exponentially to zero with a rate of λ . This implies that ϱ 0 , ϱ 1 , ϱ 2 , and ϱ 3 converge globally and exponentially to zero, leading to the conclusion that the master QVNNs in (5) are GES with respect to the slave QVNNs in (7). The proof is complete.    □
Corollary 1.
Under Assumptions 1 and 3, QVNNs (5) are GES with QVNNs (7) provided that the controller gain matrix Θ satisfies the following condition:
μ ϖ ( D ) + μ ϖ ( Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ > L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ > 0 .
where ϖ = 1 , 2 , .
Proof. 
Based on Lemma (1), we have
μ ϖ ( D + Θ ) μ ϖ ( D ) + μ ϖ ( Θ ) ,
and then, we can obtain
μ ϖ ( D ) + μ ϖ ( Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ .
Based on (33), we obtain
μ ϖ ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } ϖ + A { 1 } ϖ + A { 2 } ϖ + A { 3 } ϖ > L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } ϖ + B { 1 } ϖ + B { 2 } ϖ + B { 3 } ϖ > 0 .
According to Theorem (1), we can conclude that the master QVNNs in (5) is GES with the slave QVNNs in (7). The proof is complete.    □
Remark 2.
There are many studies on QVNN dynamics based on the decomposition method available in the literature [16,17,18]. Ultimately, the primary reason for using the decomposition method is to avoid the non-commutativity difficulties associated with quaternions, which makes it much easier to understand the analysis process and the resulting conditions. By taking these merits into account, the calculation process of Theorem (1) and Corollary (1) have been derived by employing the decomposition method, which increases the theoretical analysis complexity but greatly reduces the difficulties of analysis.

3.3. Non-Decomposition-Method-Based Synchronization Analysis

In this subsection, we will derive matrix-measure-based GES criteria for the master–slave QVNNs (1) and (3) based on the non-decomposition method.
To investigate the synchronization dynamical behavior of QVNNs, we utilize the master QVNNs described in (1) and the slave QVNNs presented in (3). Also, we consider the control input by using the linear combinations of differences among the state of master–slave QVNNs (1) and (3) as follows:
u ˇ 1 ( ι ) . . . u ˇ n ( ι ) = v = 1 n θ 1 v ( ϑ ^ v ( ι ) ϑ v ( ι ) ) . . . v = 1 n θ n v ( ϑ ^ v ( ι ) ϑ v ( ι ) ) = θ 11 . . . θ 1 n . . . . . . . . . θ n 1 . . . θ n n ϑ ^ 1 ( ι ) ϑ 1 ( ι ) . . . ϑ ^ n ( ι ) ϑ n ( ι ) u ˇ ( ι ) = Θ ( ϑ ^ ( ι ) ϑ ( ι ) ) ,
in which Θ R n × n represents the controller gain matrix.
Here, we introduce the synchronization errors between the master–slave QVNNs (1) and (3) with the control law (37) as follows:
ϱ ˙ ( ι ) = D ϱ ( ι ) + A ( ϱ ( ι ) ) + B ( ϱ ( ι ( ι ) ) ) + Θ ϱ ( ι ) ,
in which ϱ ( ι ) = ϑ ^ ( ι ) ϑ ( ι ) , ( ϱ ( ι ) ) = ( ϑ ^ ( ι ) ) ( ϑ ( ι ) ) , ϱ ( ι ) = [ ϱ 1 ( ι ) , ϱ 2 ( ι ) , , ϱ n ( ι ) ] T Q n , 1 ( ϱ 1 ( · ) ) = [ ( ϱ ( · ) ) , 2 ( ϱ 2 ( · ) ) , , n ( ϱ n ( · ) ) ] T Q n .
The subsequent Theorem (2) provides the GES criteria for the master–slave QVNNs (1) and (3) by selecting on appropriate controller gain matrix Θ via the non-decomposing method.
Theorem 2.
Under Assumptions 1 and 2, QVNNs (1) are GES with QVNNs (3), if the control gain matrix Θ satisfies
μ ϖ ( D + Θ ) + L A ϖ > L B ϖ > 0 ,
in which ϖ = 1 , 2 , .
Proof. 
The Lyapunov functions are defined as follows:
V ( ϱ ) = ϱ ϖ .
Utilizing the Dini derivation and Taylor’s theorem based on Peano’s remainder form, we can estimate the Dini derivative of V ( ϱ ) along the QVNNs described in (38) as follows:
D + V ( ϱ ) = lim 0 + ϱ ( ι + ) ϖ ϱ ϖ = lim 0 + ϱ + ϱ ˙ + o ( ) ϖ ϱ ϖ = lim 0 + ϱ + ( D ϱ + A ( ϱ ) + B ( ϱ ) + Θ ϱ ) + o ( ) ϖ ϱ ϖ / lim 0 + I + ( D + Θ ) ϖ 1 ϱ ϖ + A ϖ ( ϱ ) ϖ + B ϖ ( ϱ ) ϖ .
By using Assumption 2, we have
( ϱ ) ϖ L ϱ ϖ ,
( ϱ ) ϖ L ϱ ϖ ,
By substituting (42) and (43) into (41), we obtain
D + V ( ϱ ) lim 0 + I + ( D + Θ ) ϖ 1 ϱ ϖ + A ϖ L ϱ ϖ + B ϖ L ϱ ϖ .
Using Definition (3) and (44), we obtain
D + V ( ϱ ) μ ϖ ( D + Θ ) ϱ ϖ + L A ϖ ϱ ϖ + L B ϖ ϱ ϖ μ ϖ ( D + Θ ) + L A ϖ ϱ ϖ + L B ϖ ϱ ϖ μ ϖ ( D + Θ ) + L A ϖ V ( ϱ ) + L B ϖ V ( ϱ ) μ ϖ ( D + Θ ) + L A ϖ V ( ϱ ) + L B ϖ sup ι ξ ι V ( ϱ ( ξ ) ) .
Let c 1 = μ ϖ ( D + Θ ) + L A ϖ , c 2 = L B ϖ .
On the basis of (39), we have c 1 > c 2 > 0 . It follows from Lemma (2)
V ( ϱ ) sup ι 0 ξ ι 0 V ( ϱ ( ξ ) ) exp λ ( ι ι 0 ) ,
in which λ represents the unique positive solution of
λ = c 1 c 2 exp λ = μ ϖ ( D + Θ ) + L A ϖ L B ϖ exp λ .
As stated in Definition (4), V ( ϱ ) decays exponentially to zero with a rate of λ . This implies that ϱ converge globally and exponentially to zero, leading to the conclusion that the master QVNNs in (1) are GES with respect to the slave QVNNs in (3). The proof is complete.    □
Remark 3.
As we all know, the decomposition methods lead to larger system sizes and present mathematical challenges. Moreover, the decomposition methods add complexity to theoretical analysis. Hence, the calculation process of Theorem (2) has been derived by using the non-decomposition method for the first time, which greatly reduces the mathematical challenges and complexity of theoretical analysis.
Remark 4.
There have been many studies on NN dynamics based on the Lyapunov functions available in the literature [19,21,23,24,25]. In reality, selecting the right Lyapunov functions as well as proving the stability or synchronization of NNs is a very complex process. However, by using MMM, the Lyapunov function can itself be expressed based on the matrix measures of variables or error vectors. Hence, the results obtained in this paper are more concise.
Remark 5.
Recently, several authors have examined various stability or synchronization analyses of RVNNs and CVNNs based on Lyapunov functions by applying the Halanay inequality and MMM [29,30,32,33]. To the best of our knowledge, no papers have been published pertaining to the stability or synchronization analysis of QVNNs using MMM. To fill this gap, we have examined the GES problem of QVNNs for the first time by utilizing Lyapunov functions, the Halanay inequality as well as MMM via decomposition and non-decomposition methods. Moreover, the results obtained in this paper are new as well as more general than those presented in previous studies [29,33].
Remark 6.
It is essential to note that the sufficient conditions derived from Theorem (1), (2) and Corollary (1) are independent of the time-delays but depend on the inequality of the system variables and controller gain matrix.

4. Numerical Examples

Here, we present two numerical case studies to describe the advantages and validity of our theoretical findings.
Example 1.
In master QVNNs (1), assuming n = 2 , we consider
ϑ ˙ 1 ( ι ) ϑ ˙ 2 ( ι ) = 4 0 0 4 ϑ 1 ( ι ) ϑ 2 ( ι ) + 0.6 + 0.2 i 0.2 j + 0.4 k 0.2 + 1.3 i 0.4 j + 0.3 k 0.1 0.1 i + 0.2 j + 0.2 k 0.7 + 0.8 i 0.2 j + 0.4 k 1 ( ϑ 1 ( ι ) ) 2 ( ϑ 2 ( ι ) ) + 0.9 + 0.7 i 0.5 j + 0.6 k 0.2 + 1.2 i 1.4 j + 0.8 k 1.2 0.5 i + 0.3 j + 0.2 k 0.1 + 0.4 i + 0.5 j + 0.9 k 1 ( ϑ 1 ( ι ( ι ) ) ) 2 ( ϑ 2 ( ι ( ι ) ) ) + 0.1 + 0.2 i + 0.1 j 0.2 k 0.2 + 0.2 i 0.3 j + 0.1 k ,
and let the two-neuron slave QVNNs be defined as follows:
ϑ ^ ˙ 1 ( ι ) ϑ ^ ˙ 2 ( ι ) = 4 0 0 4 ϑ ^ 1 ( ι ) ϑ ^ 2 ( ι ) + 0.6 + 0.2 i 0.2 j + 0.4 k 0.2 + 1.3 i 0.4 j + 0.3 k 0.1 0.1 i + 0.2 j + 0.2 k 0.7 + 0.8 i 0.2 j + 0.4 k 1 ( ϑ ^ 1 ( ι ) ) 2 ( ϑ ^ 2 ( ι ) ) + 0.9 + 0.7 i 0.5 j + 0.6 k 0.2 + 1.2 i 1.4 j + 0.8 k 1.2 0.5 i + 0.3 j + 0.2 k 0.1 + 0.4 i + 0.5 j + 0.9 k 1 ( ϑ ^ 1 ( ι ( ι ) ) ) 2 ( ϑ ^ 2 ( ι ( ι ) ) ) + 0.1 + 0.2 i + 0.1 j 0.2 k 0.2 + 0.2 i 0.3 j + 0.1 k + 45 2 1 48 ϑ ^ 1 ( ι ) ϑ 1 ( ι ) ϑ ^ 2 ( ι ) ϑ 2 ( ι ) .
Take ( ι ) = 0.5 c o s ( ι ) as a discrete time-varying delay, which shows that its supremum is = 0.5 . Further, the activation functions are considered as v ( ϑ v ( · ) ) = 0.5 t a n h ( ϑ v { 0 } ( · ) ) + 0.5 t a n h ( ϑ v { 1 } ( · ) ) i + 0.5 t a n h ( ϑ v { 2 } ( · ) ) j + 0.5 t a n h ( ϑ v { 3 } ( · ) ) k , v = 1 , 2 . One can verify that Assumption 3 holds with L { 0 } = 0.5 , L { 1 } = 0.5 , L { 2 } = 0.5 , L { 3 } = 0.5 . Using a simple calculation, we can obtain A { 0 } , A { 1 } , A { 2 } , A { 3 } , B { 0 } , B { 1 } , B { 2 } , B { 3 } . By choosing ϖ = 2 , we can verify the condition (14) of Theorem (1) is satisfied based on the above parameters as μ 2 ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } 2 + A { 1 } 2 + A { 2 } 2 + A { 3 } 2 = 36.7450 > 6.7650 = L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } 2 + B { 1 } 2 + B { 2 } 2 + B { 3 } 2 > 0 . On the basis of Theorem (1), we can state that the master–slave QVNNs (48) and (49) achieve GES.
Based on randomly selected six initial conditions, the transient behaviors and phase plots of the states of master–slave QVNNs (48) and (49) are presented in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 depict the transient behaviors of the states corresponding to the real and imaginary parts. Figure 9, Figure 10, Figure 11 and Figure 12 show the phase diagrams of the states corresponding to the real and imaginary components. Figure 13, Figure 14, Figure 15 and Figure 16 depict the synchronization errors ϱ v { 0 } ( ι ) , ϱ v { 1 } ( ι ) , ϱ v { 2 } ( ι ) , ϱ v { 3 } ( ι ) , v = 1 , 2 between the master–slave QVNNs (48) and (49) with control gain matrix Θ . From these simulation results in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16, it is clear that the master–slave QVNNs (48) and (49) have achieved GES.
Remark 7.
In [41], the authors used state coupling control to achieve GES for a class of NNs. However, the control gain matrix needs to be symmetric as well as a positive–definite matrix, which is restrictive. In contrast, the control gain matrix based on this paper has no such special requirements. So, our results are new and more general. To prove this, we utilize the same system parameters in Example 1 and select the control gain matrix as Θ = 44 0 1 47 . Letting ϖ = 2 , we can verify that condition (14) of Theorem (1) is satisfied as μ 2 ( D + Θ ) + L { 0 } + L { 1 } + L { 2 } + L { 3 } A { 0 } 2 + A { 1 } 2 + A { 2 } 2 + A { 3 } 2 = 36.2852 > 6.7650 = L { 0 } + L { 1 } + L { 2 } + L { 3 } B { 0 } 2 + B { 1 } 2 + B { 2 } 2 + B { 3 } 2 > 0 . In addition, the control gain matrix only needs to be adjusted to satisfy the sufficient conditions presented in this paper and to achieve GES.
Example 2.
In master QVNNs (1), assuming n = 2 , we consider
ϑ ˙ 1 ( ι ) ϑ ˙ 2 ( ι ) = 1 0 0 1 ϑ 1 ( ι ) ϑ 2 ( ι ) + 0.8 0.2 i 0.3 j 0.5 k 0.4 1.3 i 0.5 j 0.4 k 0.4 0.3 i + 1.5 j 0.7 k 0.9 + i 0.3 j + 0.5 k 1 ( ϑ 1 ( ι ) ) 2 ( ϑ 2 ( ι ) ) + 1 + 0.8 i 0.6 j + 0.7 k 0.3 + 1.3 i 1.5 j + 0.9 k 1.3 0.8 i + 0.9 j + 0.4 k 0.2 + 0.5 i 0.7 j + k 1 ( ϑ 1 ( ι ( ι ) ) ) 2 ( ϑ 2 ( ι ( ι ) ) ) + 0.2 0.3 i + 0.2 j 0.3 k 0.2 + 0.3 i 0.3 j + 0.2 k ,
and let the two-neuron slave QVNNs be defined as follows:  
ϑ ^ ˙ 1 ( ι ) ϑ ^ ˙ 2 ( ι ) = 1 0 0 1 ϑ ^ 1 ( ι ) ϑ ^ 2 ( ι ) + 0.8 0.2 i 0.3 j 0.5 k 0.4 1.3 i 0.5 j 0.4 k 0.4 0.3 i + 1.5 j 0.7 k 0.9 + i 0.3 j + 0.5 k 1 ( ϑ ^ 1 ( ι ) ) 2 ( ϑ ^ 2 ( ι ) ) + 1 + 0.8 i 0.6 j + 0.7 k 0.3 + 1.3 i 1.5 j + 0.9 k 1.3 0.8 i + 0.9 j + 0.4 k 0.2 + 0.5 i 0.7 j + k 1 ( ϑ ^ 1 ( ι ( ι ) ) ) 2 ( ϑ ^ 2 ( ι ( ι ) ) ) + 0.2 0.3 i + 0.2 j 0.3 k 0.2 + 0.3 i 0.3 j + 0.2 k + 7 2 1 6 ϑ ^ 1 ( ι ) ϑ 1 ( ι ) ϑ ^ 2 ( ι ) ϑ 2 ( ι ) . .
The activation functions are considered as v ( ϑ v ( · ) ) = 0.5 t a n h ( ϑ v ( · ) ) , v = 1 , 2 . It is obvious that Assumption 2 holds with L = 0.5 . Further, the discrete time-varying delay is defined as ( ι ) = 0.5 c o s ( ι ) , which shows that its supremum is = 0.5 . By choosing ϖ = 1 , and using the parameters above, we can easily verify that the condition (39) of Theorem (2) is satisfied with the above parameters as μ 1 ( D + Θ ) + ( L A 1 ) = 3.5152 > 1.7671 = L B 1 > 0 . By applying Theorem (2), it is clear that the master–slave QVNNs (50) and (51) are GES.

5. Application to Image Encryption

Here, we address the application of master–slave QVNNs (48) and (49) in IEA. First, a color image cryptosystem is proposed, and performance is assessed applying certain security criteria.

5.1. Background on the Color Image

Color spaces serve as a standardized method used for encoding and representing colors in a digital format, allowing images to be recorded, displayed, and manipulated accurately. This paper uses the R G B -based color space among different color space models to analyze the color image. In the R G B -based color space, red ( R ) , green ( G ) , and blue ( B ) are the basic colors that is depicted by gray values with integers between 0 to 255. According to previous works [44,45], there is a one–one function from R G B components ( r , g , b ) to quaternions with imaginary components only, i.e., r i + g j + b k . Therefore, we utilize M × N pure quaternion matrix F to represents a color image with M × N pixels, where F = ( F i ¯ j ¯ ) M × N Q M × N . The pure quaternion F i ¯ j ¯ corresponds to the color at pixel ( i ¯ , j ¯ ) . Specifically, F i ¯ j ¯ { 1 } , F i ¯ j ¯ { 2 } , F i ¯ j ¯ { 3 } { 0 , 1 , 2 , , 255 } , where i ¯ = { 1 , 2 , , M } and j ¯ = { 1 , 2 , , N } .

5.2. Permutation Procedure

Let the dimension of the original R G B color space color image F be M × N × 3 . Then, the original color image F can be represented by a M × N quaternion matrix F = ( F i ¯ j ¯ ) Q M × N , where F i ¯ j ¯ is a pure quaternion indicating the color at the ( i ¯ , j ¯ ) pixel of the image. Let F i ¯ j ¯ = R i ¯ j ¯ i + G i ¯ j ¯ j + B i ¯ j ¯ k , where R i ¯ j ¯ , G i ¯ j ¯ , B i ¯ j ¯ { 0 , 1 , 2 , , 255 } , i ¯ { 1 , 2 , , M } , j ¯ { 1 , 2 , , N } . So, M × N is the dimension of R G B component matrix. Before encrypting, we first shuffle the original image matrix F , that is, change the row and column position of matrix F in a certain way. To shuffle the matrix F , we need a chaotic sequence to generate the permutation square matrix T of which the elements are only 0 and 1, and each row or column has one and only one 1. This permutation square matrix has the following properties:
  • We can rearrange the rows of F by multiplying a permutation square matrix T to the left of F (i.e., F r = T M × M × F M × N ). Similarly, we can rearrange the columns of F by multiplying a permutation square matrix T N × N to the right of F (i.e., F c = F M × N × T N × N ).
  • The inverse of a permutation matrix is simply its transpose (i.e., T T = T 1 ).
  • One can obtain T M × M T × F r = T M × M T × T M × M × F M × N = F M × N .
To obtain a valid permutation procedure, we utilize the following logistic–logistic map (LLM), to generate the chaotic sequence.
z n + 1 = w 0 × z n × ( 1 z n ) × 2 14 floor ( w 0 × z n × ( 1 z n ) × 2 14 ) ,
where z 0 is the initial value of the sequence and the parameter w 0 ( 0 , 10 ] . As mentioned in early works [20], the LLM’s chaotic range and performance are much better than the previous logistic map’s. The generation process of the row permutation matrix T M × M is summarized as follows:
  • For given initial values z 0 ( 0 , 1 ) and w 0 ( 0 , 10 ] , iterate the LLM (52) to obtain z n .
  • Consider w n = floor ( z n × M ) + 1 . Obviously, we have w n { 1 , 2 , , M } .
  • Perform the iteration LLM (52) until there are M unique and different values w n located from 1 to M . Then, arrange these M unique and different values in an orderly manner to be stored in w i ¯ , i ¯ = 1 , 2 , , M .
  • For each w i ¯ , we have w i ¯ = j ¯ , i ¯ = 1 , 2 , , M , j ¯ { 1 , 2 , , M } . After replaying the i ¯ th row of the identity matrix I M × M on the j ¯ th row of the row permutation matrix T M × M , we obtain the row permutation matrix T M × M . Similarly, the column permutation matrix T N × N is obtained.
For example, we choose M = 4 for the row permutation matrix T 4 × 4 and { w 1 = 4 , w 2 = 3 , w 3 = 1 , w 4 = 2 } . Thus, we have
T 4 × 4 = 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0
In this way, the shuffled image can be obtained from the matrix F as follows:
I M × N = T M × M × F M × N × T N × N ,
where I M × N is the shuffled image, and T M × M denotes the row permutation matrix. Also, T N × N denotes the column permutation matrix. Then, the IEA based on QVNNs with time-varying delay is proposed in the next subsection.

5.3. QVNN-Based Encryption Algorithm

The proposed synchronization method will be applied to IEA for an R G B color image F based on the size of M × N × 3 based on QVNNs (48) and (49) as shown below:
S1:
Separate the R G B channels of shuffled image into three gray ones with red, green, and blue. Hence, three new pixel matrices are obtained as I i ¯ j ¯ r , I i ¯ j ¯ g , and I i ¯ j ¯ b , in which i ¯ { 1 , 2 , M } and j ¯ { 1 , 2 , , N } .
S2:
Arrange the pixels of each I i ¯ j ¯ r , I i ¯ j ¯ g , and I i ¯ j ¯ b in the order from left to right and then top to down to obtain I ω = { I ω ( 1 ) , I ω ( 2 ) , , I ω ( M × N ) } , for ω { r , g , b } .
S3:
To obtain the chaotic sequences, the master QVNN (48) is iterated continuously M × N times with a step size of 0.001. Then, after a certain transformation, the chaotic signals can be obtained as shown in Algorithm 1. In Algorithm 1, the symbol · denotes the flooring operation, whereas mod ( · ) denotes the modulo operation.
S4:
The ciphertext can be obtained by the following operation, i.e., C ω ( ¯ ) C ω ( ¯ ) I ω ( ¯ ) , where ω = { r , g , b } and ⊕ corresponds to the XOR operator. Then, the decryption scheme is identical to the discussed encryption scheme in reverse order, which is neglected.
Algorithm 1: Reorganizing Chaotic Sequences.
Require:
1. ¯ 1 ;
Ensure:
  2. for  ¯  do 1 to M × N
    3. C r ( ¯ ) mod ( 10 4 ϑ 1 { 1 } ( ¯ ) ϑ 2 { 1 } ( ¯ ) , 256 ) ;
    4. C g ( ¯ ) mod ( 10 4 ϑ 1 { 2 } ( ¯ ) ϑ 2 { 2 } ( ¯ ) , 256 ) ;
    5. C b ( ¯ ) mod ( 10 4 ϑ 1 { 3 } ( ¯ ) ϑ 2 { 3 } ( ¯ ) , 256 ) ;
6. end

5.4. Simulation Results

The proposed IEA is tested using MATLAB R2024a on a computer with an Intel(R) Core(TM) i7-1355U CPU @1.2 GHz 3.7 GHz, Microsoft Windows 11 Pro operating system with 32 GB memory. We choose the parameters z 0 = 1 , w 0 = 0.26 for the LLM (52) and the QVNN is the same as (48) in Example 2. Figure 17 shows the original, encrypted, and decrypted color images: Mandrill ( 256 × 256 ) , Lion ( 512 × 512 ) , and Peppers ( 1024 × 1024 ) . The encryption and decryption have been implemented through the operation shown in Figure 18.

5.5. Performance Analysis

In this subsection, we use various security measures to analyze the performance of our color cipher image.

5.6. Key Space Analysis

The key space represents the complete set of all permissible keys available in an IEA. The encryption keys are obtained from the QVNNs in Example 1 using the parameters D , A , B , and initial functions. In addition, the parameters w 0 and z 0 from LLM can also be in the key space. That means the key space from this encryption scheme is huge to withhold brute-force attacks.

5.7. Key Sensitivity Analysis

Key sensitivity refers to the extent to which slight changes in the key affect the resulting decrypted image. A high level of key sensitivity is crucial for a secure IEA, since it ensures that even a small variation in the keys fails to decrypt the image. To analyze the key sensitivity of the proposed encryption scheme, we decrypted the encrypted images using keys that differ only slightly from the correct ones; the results are shown in Figure 19. As Figure 19 indicates, even if attackers obtain the encrypted image, recovering the original image accurately remains impossible without the correct keys. This demonstrates the strong key sensitivity of the proposed IEA.

5.7.1. Histogram Analysis

The effectiveness of the IEA is assessed by comparing the histograms of the original and encrypted images. To protect against statistical attacks, there should be no statistical similarity between the plaintext image and the encrypted image. The histogram analysis of the original, shuffled, and encrypted Mandrill image across the R , G , and B channels is presented in Figure 20, Figure 21 and Figure 22, respectively. As illustrated in Figure 22, the histograms of the R G B components of the encrypted image exhibit a flat distribution. Thus, in terms of histogram analysis, the algorithm is sufficiently robust to resist statistical attacks.

5.7.2. Correlation Coefficient Analysis

To analyze the performance of an IEA, correlation analysis is also an important metric. It helps determine the level of similarity between adjacent pixels in an image. The CC takes values in [ −1 , 1 ]: a correlation of 1 or −1 indicates that the two pixel sequences are fully (positively or negatively) correlated, whereas a correlation of 0 indicates that they are completely unrelated. The CC is determined as follows:
r = \frac{\sum_{\ell=1}^{O} (\flat_\ell - \bar{\flat})(\natural_\ell - \bar{\natural})}{\sqrt{\sum_{\ell=1}^{O} (\flat_\ell - \bar{\flat})^2} \, \sqrt{\sum_{\ell=1}^{O} (\natural_\ell - \bar{\natural})^2}},
where \bar{\flat} = \frac{1}{O} \sum_{\ell=1}^{O} \flat_\ell and \bar{\natural} = \frac{1}{O} \sum_{\ell=1}^{O} \natural_\ell; O represents the total number of adjacent pixel pairs considered; and \flat_\ell, \natural_\ell are the color values of two adjacent pixels in the image. Table 1 shows the correlation values for the plaintext and ciphertext of the images Mandrill, Lion, and Peppers. In addition, Figure 23, Figure 24 and Figure 25 illustrate the pixel distributions of the plaintext and ciphertext Mandrill images on the three color channels in the horizontal (H), vertical (V), and diagonal (D) directions. Figure 23, Figure 24 and Figure 25 and Table 1 demonstrate the effectiveness of our algorithm and the security of the IEA.
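The CC above can be estimated by randomly sampling adjacent pixel pairs in a channel and applying the formula. The Python sketch below assumes 8-bit channels and a sample size of 5000 pairs, which is an illustrative choice rather than the setting used for Table 1.

```python
import numpy as np

def adjacent_correlation(channel, direction="H", samples=5000, seed=0):
    """Estimate the CC of adjacent pixel pairs in one color channel.
    direction: "H" (horizontal), "V" (vertical), or "D" (diagonal)."""
    rng = np.random.default_rng(seed)
    rows, cols = channel.shape
    dr, dc = {"H": (0, 1), "V": (1, 0), "D": (1, 1)}[direction]
    r = rng.integers(0, rows - dr, size=samples)
    c = rng.integers(0, cols - dc, size=samples)
    x = channel[r, c].astype(float)             # sampled pixel values
    y = channel[r + dr, c + dc].astype(float)   # their adjacent neighbors
    return float(np.corrcoef(x, y)[0, 1])       # Pearson correlation r above
```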

5.7.3. Information Entropy Analysis

The IE quantifies the level of uncertainty associated with an image. For an IEA, the IE of the ciphered image should be as close to 8 as possible; a higher entropy value signifies a stronger level of security. Thus, IE serves as an essential measure for evaluating the performance of an encryption scheme. Generally, the entropy is obtained by
H(s) = \sum_{l=0}^{255} P(s_l) \log_2 \frac{1}{P(s_l)},
in which P(s_l) represents the probability associated with the grayscale value s_l. The IE results for the ciphered images derived from the plaintext images are presented in Table 2. Based on Table 2, the entropy values of the encrypted images are nearly 8, demonstrating the effectiveness of the scheme. Owing to this high entropy, the encrypted images possess a high level of randomness and are therefore more secure against attacks. Furthermore, we compared the IE produced by our algorithm with that of other existing quaternion-based encryption algorithms, as shown in Table 3. From Table 3, it can be observed that the image produced by our algorithm has the highest IE, demonstrating that our IEA offers superior performance.
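For reference, the entropy defined above can be computed per channel with a few lines of NumPy; this is a generic sketch, not the authors' code.

```python
import numpy as np

def information_entropy(channel):
    """Shannon entropy H(s) = sum_l P(s_l) * log2(1 / P(s_l)) over 256 gray levels."""
    hist = np.bincount(channel.ravel().astype(np.uint8), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so log2 is well defined
    return float(-(p * np.log2(p)).sum())
```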

5.8. Differential Attack Analysis

Recently, many IEAs have been cracked by attackers using differential attacks. Therefore, an efficient IEA must be highly sensitive to slight changes in the plain image. For an IEA, even a minor modification in the plain image should result in a significantly different encrypted image. The NPCR and UACI are metrics used to assess the resistance of an IEA to differential attacks. Also, NPCR and UACI can be obtained mathematically as follows:
\mathrm{NPCR}(I_1, I_2) = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} D(x, y)}{M \times N} \times 100\%, \qquad \mathrm{UACI}(I_1, I_2) = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} |I_1(x, y) - I_2(x, y)|}{255 \times M \times N} \times 100\%,
in which I_1 and I_2 denote the two images whose difference is to be measured, both of size M × N, and D(x, y) denotes the difference between I_1 and I_2 at pixel (x, y): D(x, y) = 1 when I_1(x, y) ≠ I_2(x, y), and D(x, y) = 0 otherwise.
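Both metrics can be computed directly from their definitions; the following sketch assumes two equally sized 8-bit images (or single channels) given as NumPy arrays and is provided only for illustration.

```python
import numpy as np

def npcr_uaci(img1, img2):
    """NPCR and UACI (both in %) between two equally sized images or channels."""
    a = img1.astype(float)
    b = img2.astype(float)
    npcr = 100.0 * np.mean(a != b)                  # fraction of differing pixels
    uaci = 100.0 * np.mean(np.abs(a - b) / 255.0)   # mean normalized intensity change
    return npcr, uaci
```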
The results are presented in Table 4. The ideal values of NPCR and UACI are 99.6094 % and 33.4635 % , respectively. Table 4 demonstrates that changing a single pixel value in each channel yields NPCR and UACI values near these ideal levels. Thus, the proposed IEA is highly sensitive to minor changes in the plain image, even when only a single bit is altered, which demonstrates that it is effective in resisting differential attacks.

5.9. Efficiency Analysis

In addition to security, efficiency is another key motivation for researchers designing a new IEA. A well-designed IEA should therefore not only be highly secure but also efficient in terms of encryption time. This section evaluates the efficiency of the proposed IEA by testing it on three images of different sizes: the 256 × 256 Mandrill image, the 512 × 512 Lion image, and the 1024 × 1024 Peppers image; larger images entail a higher computational load. The encryption times of the proposed IEA are given in Table 5, which shows that it is efficient in terms of encryption time.
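The timings in Table 5 can in principle be reproduced by wrapping the encryption pipeline in a simple wall-clock measurement; the sketch below only illustrates the measurement itself, with encrypt_image standing in as a hypothetical placeholder for the proposed IEA.

```python
import time

start = time.perf_counter()
# cipher = encrypt_image(plain_image)   # hypothetical call to the full IEA pipeline
elapsed = time.perf_counter() - start
print(f"Encryption time: {elapsed:.6f} s")
```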

6. Conclusions

This paper explored the GES problem for a class of QVNNs with discrete time-varying delays by using the MMM. The results were derived through two methods. The first method addresses the non-commutativity of quaternion multiplication by decomposing the original QVNNs into four equivalent RVNNs; by employing Lyapunov functions, Halanay's inequality, and the MMM, new sufficient criteria ensuring the GES of QVNNs under the designed control were established. The second method treats the non-commutativity of quaternion multiplication directly: the original QVNNs were analyzed without decomposition, and corresponding GES criteria were derived. Furthermore, this paper presents novel results and new insights into the GES of QVNNs. Two numerical verifications with simulation results were given to verify the feasibility of the obtained criteria. Based on the considered master–slave QVNNs, a new IEA for the color images Mandrill ( 256 × 256 ) , Lion ( 512 × 512 ) , and Peppers ( 1024 × 1024 ) was proposed, and its effectiveness was verified through various experimental analyses. The experimental results show that the algorithm achieves good CCs, IE with an average of 7.9988, NPCR with an average of 99.6080%, and UACI with an average of 33.4589%, indicating that the IEA is robust against a range of security attacks. The results presented in this paper may also be applied to more detailed investigations of various QVNN-related problems. Therefore, our future research will focus on impulsive synchronization of QVNNs via an event-triggered impulsive controller and its application to secure communication.

Author Contributions

Conceptualization, R.S. and O.K.; methodology, R.S. and O.K.; software, R.S.; validation, O.K.; formal analysis, R.S. and O.K.; investigation, R.S. and O.K.; resources, R.S.; data curation, R.S.; writing—original draft preparation, R.S. and O.K.; writing—review and editing, R.S. and O.K.; visualization, R.S.; supervision, O.K.; project administration, R.S. and O.K.; funding acquisition, R.S. and O.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was fully supported by the Empowerment and Equity Opportunities for Excellence in Science program funded by the Science and Engineering Research Board (SERB), Government of India under grant EEQ/2023/000513. In addition, the work of the second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant NRF-2020R1A6A1A12047945 and in part by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-2020-0-01462, 30%).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The first author is grateful to the Science and Engineering Research Board (SERB), Government of India, for financial support. The second author further expresses gratitude to the National Research Foundation (NRF) for funding provided by the Basic Science Research Program. The authors also express their gratitude to the handling editor and reviewers for their valuable comments and suggestions regarding this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jayawardana, R.; Bandaranayake, T.S. Analysis of optimizing neural networks and artificial intelligent models for guidance, control, and navigation systems. Int. Res. J. Mod. Eng. Technol. Sci. 2021, 3, 743–759. [Google Scholar]
  2. Yu, Z.; Abdulghani, A.M.; Zahid, A.; Heidari, H.; Imran, M.A.; Abbasi, Q.H. An overview of neuromorphic computing for artificial intelligence enabled hardware-based hopfield neural network. IEEE Access 2020, 8, 67085–67099. [Google Scholar] [CrossRef]
  3. Liao, T.L.; Wang, F.C. Global stability for cellular neural networks with time delay. IEEE Trans. Neural Netw. 2000, 11, 1481–1484. [Google Scholar] [PubMed]
  4. Zhang, Z.; Quan, Z. Global exponential stability via inequality technique for inertial BAM neural networks with time delays. Neurocomputing 2015, 151, 1316–1326. [Google Scholar] [CrossRef]
  5. Huang, H.; Cao, J.; Wang, J. Global exponential stability and periodic solutions of recurrent neural networks with delays. Phys. Lett. A 2002, 298, 393–404. [Google Scholar] [CrossRef]
  6. Cai, Z.; Huang, L. Existence and global asymptotic stability of periodic solution for discrete and distributed time-varying delayed neural networks with discontinuous activations. Neurocomputing 2011, 74, 3170–3179. [Google Scholar] [CrossRef]
  7. Guo, R.; Zhang, Z.; Liu, X.; Lin, C. Existence, uniqueness, and exponential stability analysis for complex-valued memristor-based BAM neural networks with time delays. Appl. Math. Comput. 2017, 311, 100–117. [Google Scholar] [CrossRef]
  8. Yuan, Y.; Song, Q.; Liu, Y.; Alsaadi, F.E. Synchronization of complex-valued neural networks with mixed two additive time-varying delays. Neurocomputing 2019, 332, 149–158. [Google Scholar] [CrossRef]
  9. Zhou, B.; Song, Q. Stability and Hopf bifurcation analysis of a tri-neuron BAM neural network with distributed delay. Neurocomputing 2012, 82, 69–83. [Google Scholar] [CrossRef]
  10. Hirose, A. Nature of complex number and complex-valued neural networks. Front. Inf. Technol. Electron. Eng. 2011, 6, 171–180. [Google Scholar] [CrossRef]
  11. Nitta, T. Orthogonality of decision boundaries in complex-valued neural networks. Neural Comput. 2004, 16, 73–97. [Google Scholar] [CrossRef] [PubMed]
  12. Lee, D.L. Relaxation of the stability condition of the complex-valued neural networks. IEEE Trans. Neural Netw. 2001, 12, 1260–1262. [Google Scholar] [PubMed]
  13. Zhang, F. Quaternions and matrices of quaternions. Linear Algebra Appl. 1997, 251, 21–57. [Google Scholar] [CrossRef]
  14. Liu, Y.; Zheng, Y.; Lu, J.; Cao, J.; Rutkowski, L. Constrained quaternion-variable convex optimization: A quaternion-valued recurrent neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1022–1035. [Google Scholar] [CrossRef]
  15. Liu, L.; Lei, M.; Bao, H. Event-triggered quantized quasisynchronization of uncertain quaternion-valued chaotic neural networks with time-varying delay for image encryption. IEEE Trans. Cybern. 2022, 53, 3325–3336. [Google Scholar] [CrossRef]
  16. Song, Q.; Long, L.; Zhao, Z.; Liu, Y.; Alsaadi, F.E. Stability criteria of quaternion-valued neutral-type delayed neural networks. Neurocomputing 2020, 412, 287–294. [Google Scholar] [CrossRef]
  17. Xu, X.; Xu, Q.; Yang, J.; Xue, H.; Xu, Y. Further research on exponential stability for quaternion-valued neural networks with mixed delays. Neurocomputing 2020, 400, 186–205. [Google Scholar] [CrossRef]
  18. Duan, H.; Peng, T.; Tu, Z.; Qiu, J.; Lu, J. Globally exponential stability and globally power stability of quaternion-valued neural networks with discrete and distributed delays. IEEE Access 2020, 8, 46837–46850. [Google Scholar] [CrossRef]
  19. Meng, X.; Li, Y. Pseudo almost periodic solutions for quaternion-valued cellular neural networks with discrete and distributed delays. J. Inequal. Appl. 2018, 2018, 245. [Google Scholar] [CrossRef]
  20. Lin, D.; Zhang, Q.; Chen, X.; Li, Z.; Wang, S. A color image encryption using one quaternion-valued neural network. SSRN Electron J. 2022, 4, 1–29. [Google Scholar] [CrossRef]
  21. Tu, Z.; Zhao, Y.; Ding, N.; Feng, Y.; Zhang, W. Stability analysis of quaternion-valued neural networks with both discrete and distributed delays. Appl. Math. Comput. 2019, 343, 342–353. [Google Scholar] [CrossRef]
  22. Pecora, L.M.; Carroll, T.L. Synchronization in chaotic systems. Phys. Rev. Lett. 1990, 64, 821. [Google Scholar] [CrossRef]
  23. Alsaedi, A.; Cao, J.; Ahmad, B.; Alshehri, A.; Tan, X. Synchronization of master-slave memristive neural networks via fuzzy output-based adaptive strategy. Chaos Solitons Fractals 2022, 158, 112095. [Google Scholar] [CrossRef]
  24. Xu, D.; Wang, T.; Liu, M. Finite-time synchronization of fuzzy cellular neural networks with stochastic perturbations and mixed delays. Circuits Syst. Signal Process. 2021, 40, 3244–3265. [Google Scholar] [CrossRef]
  25. Peng, T.; Zhong, J.; Tu, Z.; Lu, J.; Lou, J. Finite-time synchronization of quaternion-valued neural networks with delays: A switching control method without decomposition. Neural Netw. 2022, 148, 37–47. [Google Scholar] [CrossRef] [PubMed]
  26. Samidurai, R.; Sriraman, R. Non-fragile sampled-data stabilization analysis for linear systems with probabilistic time-varying delays. J. Frank. Inst. 2019, 356, 4335–4357. [Google Scholar] [CrossRef]
  27. Samidurai, R.; Sriraman, R.; Zhu, S. Stability and dissipativity analysis for uncertain Markovian jump systems with random delays via new approach. Int. J. Syst. Sci. 2019, 50, 1609–1625. [Google Scholar] [CrossRef]
  28. Sriraman, R.; Cao, Y.; Samidurai, R. Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays. Math. Comput. Simul. 2020, 171, 103–118. [Google Scholar] [CrossRef]
  29. He, W.; Cao, J. Exponential synchronization of chaotic neural networks: A matrix measure approach. Nonlinear Dyn. 2009, 55, 55–65. [Google Scholar] [CrossRef]
  30. Li, Y.; Li, C. Matrix measure strategies for stabilization and synchronization of delayed BAM neural networks. Nonlinear Dyn. 2016, 84, 1759–1770. [Google Scholar] [CrossRef]
  31. Gong, W.; Liang, J.; Cao, J. Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays. Neural Netw. 2015, 70, 81–89. [Google Scholar] [CrossRef]
  32. Tang, Q.; Jian, J. Matrix measure based exponential stabilization for complex-valued inertial neural networks with time-varying delays using impulsive control. Neurocomputing 2018, 273, 251–259. [Google Scholar] [CrossRef]
  33. Xie, D.; Jiang, Y.; Han, M. Global exponential synchronization of complex-valued neural networks with time delays via matrix measure method. Neural Process. Lett. 2019, 49, 187–201. [Google Scholar] [CrossRef]
  34. Kocak, O.; Erkan, U.; Toktas, A.; Gao, S. PSO-based image encryption scheme using modular integrated logistic exponential map. Expert Syst. Appl. 2024, 237, 121452. [Google Scholar] [CrossRef]
  35. Toktas, F.; Erkan, U.; Yetgin, Z. Cross-channel color image encryption through 2D hyperchaotic hybrid map of optimization test functions. Expert Syst. Appl. 2024, 249, 123583. [Google Scholar] [CrossRef]
  36. Feng, W.; Wang, Q.; Liu, H.; Ren, Y.; Zhang, J.; Zhang, S.; Wen, H. Exploiting newly designed fractional-order 3D Lorenz chaotic system and 2D discrete polynomial hyper-chaotic map for high-performance multi-image encryption. Fractal Fract. 2023, 7, 887. [Google Scholar] [CrossRef]
  37. Feng, W.; Zhao, X.; Zhang, J.; Qin, Z.; Zhang, J.; He, Y. Image encryption algorithm based on plane-level image filtering and discrete logarithmic transform. Mathematics 2022, 10, 2751. [Google Scholar] [CrossRef]
  38. Wen, H.; Lin, Y. Cryptanalysis of an image encryption algorithm using quantum chaotic map and DNA coding. Expert Syst. Appl. 2024, 237, 121514. [Google Scholar] [CrossRef]
  39. Wen, H.; Lin, Y. Cryptanalyzing an image cipher using multiple chaos and DNA operations. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101612. [Google Scholar] [CrossRef]
  40. Feng, W.; Zhang, J. Cryptanalzing a novel hyper-chaotic image encryption scheme based on pixel-level filtering and DNA-level diffusion. IEEE Access 2020, 8, 209471–209482. [Google Scholar] [CrossRef]
  41. Vidyasagar, M. Nonlinear Systems Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993. [Google Scholar]
  42. Chen, M. Chaos synchronization in complex networks. IEEE Trans. Circuits Syst. I Regul. Pap. 2008, 55, 1335–1346. [Google Scholar] [CrossRef]
  43. Cheng, C.J.; Liao, T.L.; Hwang, C.C. Exponential synchronization of a class of chaotic neural networks. Chaos Solitons Fractals 2005, 24, 197–206. [Google Scholar] [CrossRef]
  44. Chen, X.; Song, Q.; Li, Z. Design and analysis of quaternion-valued neural networks for associative memories. IEEE Trans. Syst. Man Cybern. 2017, 48, 2305–2314. [Google Scholar] [CrossRef]
  45. Zhang, Y.; Yang, L.; Kou, K.I.; Liu, Y. Synchronization of fractional-order quaternion-valued neural networks with image encryption via event-triggered impulsive control. Knowl.-Based Syst. 2024, 296, 111953. [Google Scholar] [CrossRef]
Figure 1. Transient behaviors of the states ϑ 1 { 0 } ( ι ) and ϑ ^ 1 { 0 } ( ι ) over time ι in Example 1.
Figure 2. Transient behaviors of the states ϑ 2 { 0 } ( ι ) and ϑ ^ 2 { 0 } ( ι ) over time ι in Example 1.
Figure 3. Transient behaviors of the states ϑ 1 { 1 } ( ι ) and ϑ ^ 1 { 1 } ( ι ) over time ι in Example 1.
Figure 4. Transient behaviors of the states ϑ 2 { 1 } ( ι ) and ϑ ^ 2 { 1 } ( ι ) over time ι in Example 1.
Figure 5. Transient behaviors of the states ϑ 1 { 2 } ( ι ) and ϑ ^ 1 { 2 } ( ι ) over time ι in Example 1.
Figure 6. Transient behaviors of the states ϑ 2 { 2 } ( ι ) and ϑ ^ 2 { 2 } ( ι ) over time ι in Example 1.
Figure 7. Transient behaviors of the states ϑ 1 { 3 } ( ι ) and ϑ ^ 1 { 3 } ( ι ) over time ι in Example 1.
Figure 8. Transient behaviors of the states ϑ 2 { 3 } ( ι ) and ϑ ^ 2 { 3 } ( ι ) over time ι in Example 1.
Figure 9. Phase diagram of the states ϑ v { 0 } ( ι ) and ϑ ^ v { 0 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 10. Phase diagram of the states ϑ v { 1 } ( ι ) and ϑ ^ v { 1 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 11. Phase diagram of the states ϑ v { 2 } ( ι ) and ϑ ^ v { 2 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 12. Phase diagram of the states ϑ v { 3 } ( ι ) and ϑ ^ v { 3 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 13. Plots of the synchronization errors ϱ v { 0 } ( ι ) = ϑ ^ v { 0 } ( ι ) − ϑ v { 0 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 14. Plots of the synchronization errors ϱ v { 1 } ( ι ) = ϑ ^ v { 1 } ( ι ) − ϑ v { 1 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 15. Plots of the synchronization errors ϱ v { 2 } ( ι ) = ϑ ^ v { 2 } ( ι ) − ϑ v { 2 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 16. Plots of the synchronization errors ϱ v { 3 } ( ι ) = ϑ ^ v { 3 } ( ι ) − ϑ v { 3 } ( ι ) , v = 1 , 2 over time ι in Example 1.
Figure 17. (a–c) correspond to the original, encrypted, and decrypted (using the correct key) 256 × 256 images of “Mandrill”. (d–f) correspond to the original, encrypted, and decrypted (using the correct key) 512 × 512 images of “Lion”. (g–i) correspond to the original, encrypted, and decrypted (using the correct key) 1024 × 1024 images of “Peppers”.
Figure 18. The flow chart of the encryption process.
Figure 19. (a–c) correspond to the original, encrypted, and decrypted (using a wrong key) 256 × 256 image of “Mandrill”. (d–f) correspond to the original, encrypted, and decrypted (using a wrong key) 512 × 512 image of “Lion”. (g–i) correspond to the original, encrypted, and decrypted (using a wrong key) 1024 × 1024 image of “Peppers”.
Figure 20. The R G B histogram of the original image “Mandrill”.
Figure 21. The R G B histogram of the shuffled image “Mandrill”.
Figure 22. The R G B histogram of the encrypted image “Mandrill”.
Figure 23. The first and second rows demonstrate the correlation analysis for the original and encrypted image of “Mandrill” in the red channel.
Figure 24. The first and second rows demonstrate the correlation analysis for the original and encrypted image of “Mandrill” in the green channel.
Figure 25. The first and second rows demonstrate the correlation analysis for the original and encrypted image of “Mandrill” in the blue channel.
Table 1. The CCs of Mandrill, Lion, and Peppers.

                                        Plain Image                    Cipher Image
Images                  Direction    R        G        B          R         G         B
Mandrill (256 × 256)        H      0.9474   0.8727   0.9215     0.0130    0.0075   −0.0114
                            V      0.9207   0.9824   0.9138     0.0032   −0.0165    0.0014
                            D      0.9033   0.7924   0.8762    −0.0077   −0.0015    0.0040
Lion (512 × 512)            H      0.9473   0.8693   0.9408     0.0028   −0.0008    0.0042
                            V      0.9837   0.9598   0.9815     0.0060    0.0103    0.0129
                            D      0.9399   0.8522   0.9339    −0.0012    0.0032   −0.0009
Peppers (1024 × 1024)       H      0.9940   0.9887   0.9729     0.0071    0.0060    0.0012
                            V      0.9893   0.9796   0.9532     0.0018   −0.0003    0.0068
                            D      0.9871   0.9737   0.9382    −0.0024   −0.0006    0.0005
Table 2. The IE of Mandrill, Lion, and Peppers.

                                Plain Image                  Cipher Image
Images                    R        G        B          R         G         B
Mandrill (256 × 256)    7.6057   7.3580   7.6664     7.9978    7.9979    7.9973
Lion (512 × 512)        7.7322   7.2043   7.1717     7.9993    7.9992    7.9991
Peppers (1024 × 1024)   7.9273   7.1329   5.9750     7.9998    7.9997    7.9997
Table 3. The comparison of IE values of the Mandrill cipher image.

Encryption Algorithm    Images                    R         G         B
[20]                    Mandrill (256 × 256)    7.9976    7.9972    7.9971
This article            Mandrill (256 × 256)    7.9978    7.9979    7.9973
Table 4. The NPCR and UACI scores for different images.

                                   NPCR (%)                         UACI (%)
Images                     R         G         B           R         G         B
Mandrill (256 × 256)    99.5986   99.5925   99.6047     33.4527   33.4615   33.4539
Lion (512 × 512)        99.6177   99.6105   99.5944     33.4623   33.4449   33.4699
Peppers (1024 × 1024)   99.6138   99.6121   99.6283     33.4611   33.4551   33.4688
Table 5. The encryption time (in seconds) for different images using the proposed IEA.

Images                   Encryption Time (s)
Mandrill (256 × 256)     0.072019
Lion (512 × 512)         0.236879
Peppers (1024 × 1024)    0.971330