Article

Finite-Time Cluster Synchronization of Fractional-Order Complex-Valued Neural Networks Based on Memristor with Optimized Control Parameters

1 School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
2 School of Science, Wuxi Engineering Research Center for Biocomputing, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(1), 39; https://doi.org/10.3390/fractalfract9010039
Submission received: 28 November 2024 / Revised: 5 January 2025 / Accepted: 11 January 2025 / Published: 14 January 2025

Abstract:
The finite-time cluster synchronization (FTCS) of fractional-order complex-valued (FOCV) neural networks has attracted wide attention, but decomposing such complex-valued networks into real and imaginary parts is often inconvenient and difficult. This paper addresses the FTCS of coupled memristive neural networks (CMNNs), which are FOCV systems with a time delay. A controller is designed with a complex-valued sign function to achieve FTCS through a non-decomposition approach, which eliminates the need to separate the complex-valued system into its real and imaginary components. By applying fractional-order stability theory, conditions for FTCS are derived based on the proposed controller. The settling time, which depends on the system's initial values, can be computed using the Mittag–Leffler function. We further optimize the control parameters by formulating an optimization model, which is solved using particle swarm optimization (PSO). Finally, a numerical example and a comparative experiment are provided to verify the theoretical results and the optimization method.

1. Introduction

Neural networks (NNs) have significant applications across various emerging fields, including multilayer perceptron [1], food quality assessment [2], image recognition [3] and so on [4]. An artificial neural network is a mathematical model that mimics the structure of synaptic connections in the brain to process information. Coupled neural networks (CNNs) enable the interconnection of information between different neurons. With advancements in computers, algorithms, and software, the memristive neural network (MNN) model has gained widespread adoption across diverse domains [5,6,7]. The existence of memristors was confirmed by HP Labs in 2008, and since then, MNNs have been a focal point of research [8,9,10]. MNNs offer memory, adaptability, and high parallel processing capabilities. The features of memristors are close to the characteristics of neuronal synapses [11]. Compared to traditional cellular neural networks, MNNs can construct highly integrated nervous systems that more closely approximate the size and structure of the human brain. Fractional calculus extends integer-order calculus, and models described by fractional differential equations can more accurately simulate real-world systems, particularly viscoelastic models [12]. Complex-valued systems, as an extension of real-valued systems, address certain problems that real-valued systems cannot solve [13,14]. For instance, real-valued systems struggle with symmetry problems and the XOR problem, while a single complex-valued system can handle these issues effectively. In light of these considerations, this paper investigates FOCV-coupled memristive neural networks (FCCMNNs).
Finite-time synchronization (FTS) [15] is notable for its ability to achieve synchronization within a settling time, which has garnered significant attention. Different from asymptotic synchronization, which requires an infinite amount of time, FTS offers practical advantages by reaching synchronization in a predetermined time frame. For effective synchronization within a network, neurons must communicate with one another. In leader-following synchronization scenarios, neurons also need to receive information from a designated leader. Specifically, in the context of cluster synchronization (CS) [16], there may be multiple leaders within each cluster. Consequently, neurons in different clusters must follow the leaders assigned to their respective clusters. Due to its practical implications, CS has attracted considerable interest from researchers [17,18]. This paper addresses the FTCS [19] of FCCMNNs. Neurons within the same cluster can synchronize with their leader’s state, achieving synchronization within a settling time that depends on their initial values.
There are various studies that employ the decomposition method, which converts the complex-valued system into two real-valued parts [13,20,21]. However, in practical applications, decomposing a complex-valued system can be challenging. To address this, the complex-valued sign function (CVSF) [14] has been introduced, allowing for the direct study of complex-valued systems. For example, if y = c + d i is a complex number, the CVSF of y is [y] = [c] + [d] i. Additionally, a complex 1-norm [14] is proposed, defined as |y|_1 = |c| + |d|. Using the CVSF, a controller is designed to achieve the FTCS of the FCCMNNs without employing the decomposition method. To handle discontinuous terms involving memristors and sign functions, the set-valued map theory [22] and the complex-valued Filippov solution [23] are essential. In the study of FTS, it is necessary to calculate the settling time. The Mittag–Leffler function, implemented via the 'mlf' function in MATLAB, provides a means to compute it.
The proposed controller in this paper is a typical negative feedback controller utilizing the CVSF. In the study of network synchronization, many papers design controllers and provide conditions that control parameters must satisfy, often offering a range for these parameters. However, selecting the optimal control parameters within this broad range is a significant challenge that is frequently overlooked. This paper proposes an optimization model to facilitate the selection of control parameters. The fitness function of the model is constructed according to actual control requirements. The constraints of the model are considered by the theoretical results of FTCS. Due to the complexity of the optimization model, an algorithm based on PSO [24] is proposed to find the optimal solution. This method provides guidance for selecting control parameters, leading to an optimal controller that conserves resources. Typically, research on the synchronization of neural networks (NNs) focuses on synchronization criteria, aiming to reduce control energy through various control methods, such as event-triggered control [25], intermittent control [26], impulsive control [27], adaptive control [28], and sampled control [29]. Among these, feedback control [8] is the most commonly used in practical applications. The proposed optimization method offers guidance for parameter selection in feedback control, contributing to more efficient controller design.
It is natural to incorporate time delay into research, given its significant impact on real systems [30]. The study in [31] addressed the FTCS of FOCV NNs without considering time delay and memristors. Both asymptotic and FTS of coupled NNs, excluding clusters and memristors, were explored in [32], where the focus was on real-valued systems. Additionally, the controller designed in [32] included a time delay, which is not required in this paper. The FTS of FCCMNNs using the decomposition method was investigated in [33]. After decomposition, the research process closely resembles that of real-valued systems. However, there are no prior works that have addressed the FTCS of FCCMNNs using a non-decomposition method. Furthermore, none of the aforementioned studies considered the optimization of control parameters. Therefore, this paper focuses on the FTCS of FCCMNNs, employing a non-decomposition method with optimized control parameters. The major contributions are highlighted below. The difference between this paper and other references is outlined in Table 1.
This paper studies the fractional-order CNNs with a time delay, where the network includes memristive elements and complex-valued states, resulting in a model that is not only more complex than those studied in [31,32] but also better suited for simulating real-world scenarios. Unlike the conventional decomposition approach [20,21,34], this study employs a non-decomposition method. Although the decomposition method simplifies theoretical analysis, it often falls short in practical applications. The non-decomposition approach, on the other hand, provides a more realistic representation of complex systems, aligning more closely with real-world dynamics.
This paper focuses on FTCS, which extends conventional CS [35,36] and FTS [37]: it offers faster convergence, and the synchronization time can be computed in advance. To achieve FTCS, we design a simple controller based on the CVSF and derive synchronization criteria that provide a range of admissible control parameters. The settling time for FTCS can be readily determined through the Mittag–Leffler function. While FTCS has been explored in previous studies [19,31,32], the networks investigated in those works differ significantly from the one considered here. Notably, this is the first study to address FTCS for FOCV-coupled neural networks with memristors using a non-decomposition method.
An optimization model solved by PSO is proposed to select the most cost-effective control parameters. This optimization approach guides the selection of control parameters within the broad range that satisfies the synchronization conditions. Although previous studies [31,32] designed similar controllers, they only provided a range for the parameters without specifically addressing actual control requirements.
Table 1. The difference between this paper and others in the literature.
Item | [19] | [32] | [20] | [31] | [34] | [21] | [37] | This Paper
Fractional-order
Complex-valued
Memristor
FTS
CS
Non-decomposition
Parameter optimization
The remainder of this paper is structured as follows: Section 2 outlines the essential definitions and lemmas, as well as the model of FCCMNNs and the foundational assumptions. Section 3 introduces the designed controller and provides the theorem for achieving FTCS. Section 4 is dedicated to the optimization of control parameters. Section 5 illustrates the effectiveness of the proposed methods through two simulations. Finally, Section 6 provides the conclusions.
Notations. 
R, R^n, R^{n×n} represent the set of real numbers, n-dimensional real vectors and n × n real matrices, respectively. N^+ denotes the set of positive integers. C, C^n, C^{n×n} represent the set of complex numbers, n-dimensional complex vectors and n × n complex matrices. If y ∈ C, then y = y^R + y^I i, where y^R and y^I denote the real and imaginary parts of y and i is the imaginary unit. |y|_1 = |y^R| + |y^I|. [y] = [y^R] + [y^I] i, where [·] represents the sign function. If Y ∈ C^n, Y = (y_1, y_2, …, y_n)^T. Y^H is the conjugate transpose of Y and Y^T is the transpose of Y. |Y|_1 = Σ_{i=1}^n (|y_i^R| + |y_i^I|). [Y] = ([y_1], [y_2], …, [y_n])^T. Θ ∈ R^{n×n} > 0 means that Θ is a positive definite matrix. λ_min(Θ) is the minimum eigenvalue of Θ. diag(·) denotes a diagonal matrix. The notation ⊗ is the Kronecker product operator. I_N is the N-dimensional identity matrix. 1_N is the N-dimensional vector of ones.

2. Preliminaries and Model Description

2.1. Preliminaries

Definition 1
([14]). The sign function of the complex vector x ( t ) = x 1 ( t ) , x 2 ( t ) , , x n ( t ) T is defined by [ x ( t ) ] = [ x 1 R ( t ) ] + [ x 1 I ( t ) ] i , [ x 2 R ( t ) ] + [ x 2 I ( t ) ] i , , [ x n R ( t ) ] + [ x n I ( t ) ] i T , where [ · ] = s i g n ( · ) and s i g n ( · ) is the sign function. Obviously, [ x H ( t ) ] = [ x ( t ) ] H .
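As a concrete illustration of Definition 1, the complex-valued sign function and the complex 1-norm can be sketched in a few lines of NumPy (the function names `csign` and `cnorm1` are ours, not from the paper):

```python
import numpy as np

def csign(x):
    """Complex-valued sign function: [x] = sign(x^R) + sign(x^I) i,
    applied elementwise to a complex scalar or vector."""
    x = np.asarray(x, dtype=complex)
    return np.sign(x.real) + 1j * np.sign(x.imag)

def cnorm1(x):
    """Complex 1-norm: |x|_1 = sum_p (|x_p^R| + |x_p^I|)."""
    x = np.asarray(x, dtype=complex)
    return float(np.sum(np.abs(x.real) + np.abs(x.imag)))
```

For instance, csign applied to 3 − 2i gives 1 − i, and the complex 1-norm of (3 − 4i, i)^T is 3 + 4 + 0 + 1 = 8.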
Definition 2
([38]). For fractional order α ∈ (0, 1), the Caputo derivative of a function ω(t) is defined as
D_{t_0,t}^α ω(t) = (1/Γ(1 − α)) ∫_{t_0}^{t} (t − τ)^{−α} ω′(τ) dτ,
where t ≥ t_0, and Γ(·) is the Gamma function, Γ(α) = ∫_0^{+∞} t^{α−1} e^{−t} dt.
Definition 3
([38]). The two-parameter Mittag–Leffler function is defined by the series expansion
E_{p,q}(x) = Σ_{j=0}^{+∞} x^j / Γ(pj + q),
where p, q > 0, x ∈ C. When q = 1, the one-parameter Mittag–Leffler function is
E_p(x) = E_{p,1}(x) = Σ_{j=0}^{+∞} x^j / Γ(pj + 1),
where Γ(·) is the same Gamma function as in Definition 2.
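Since E_{p,q} has no closed form in general, it is evaluated numerically; below is a minimal truncated-series sketch using only the standard library. It is adequate for moderate |x|; dedicated routines (e.g. MATLAB's 'mlf') are preferable for large arguments.

```python
from math import gamma

def mittag_leffler(p, x, q=1.0, terms=120):
    """Two-parameter Mittag-Leffler function via its series expansion
    E_{p,q}(x) = sum_{j>=0} x^j / Gamma(p*j + q), truncated at `terms`.
    The series converges for all x, but floating-point summation is
    only reliable for moderate |x|."""
    total = 0.0
    for j in range(terms):
        total += x**j / gamma(p * j + q)
    return total
```

A quick sanity check: E_1(x) = e^x, so mittag_leffler(1.0, 1.0) should return e, and E_α(0) = 1 for any α.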
For any x ( t ) C n , the convex hull of [ x j ( t ) ] ( j = 1 , 2 , , n ) is defined as [14]:
co¯([x_j(t)]) =
{1 + i},  x_j^R(t) > 0, x_j^I(t) > 0,
1 + co¯{−1, 1} i,  x_j^R(t) > 0, x_j^I(t) = 0,
{1 − i},  x_j^R(t) > 0, x_j^I(t) < 0,
co¯{−1, 1} + i,  x_j^R(t) = 0, x_j^I(t) > 0,
co¯{−1, 1} + co¯{−1, 1} i,  x_j^R(t) = 0, x_j^I(t) = 0,
co¯{−1, 1} − i,  x_j^R(t) = 0, x_j^I(t) < 0,
{−1 + i},  x_j^R(t) < 0, x_j^I(t) > 0,
−1 + co¯{−1, 1} i,  x_j^R(t) < 0, x_j^I(t) = 0,
{−1 − i},  x_j^R(t) < 0, x_j^I(t) < 0.
Lemma 1.
For column vector χ ( t ) : [ t 0 , + ) C n , there are some properties.
[31] 2 [ χ H ( t ) ] [ χ ( t ) ] = [ χ H ( t ) ] ω ( t ) + ω H ( t ) [ χ ( t ) ] , where ω ( t ) = ( ω 1 ( t ) , ω 2 ( t ) , , ω n ( t ) ) T c o ¯ ( [ χ ( t ) ] ) .
[14] χ_p^H(t) + χ_p(t) = 2 χ_p^R(t) ≤ 2 |χ_p(t)|_1, p = 1, 2, …, n.
2 | χ ( t ) | 1 = [ χ H ( t ) ] χ ( t ) + χ H ( t ) [ χ ( t ) ] .
| [ χ ( t ) ] | 1 = [ χ H ( t ) ] [ χ ( t ) ] .
Proof. 
Based on the definition of | χ ( t ) | 1 , [ χ ( t ) ] and the Lemma 3 in [14], one has
2 | χ ( t ) | 1 = 2 p = 1 n | χ p ( t ) | 1 = p = 1 n [ χ p ( t ) ¯ ] χ p ( t ) + χ p ( t ) ¯ [ χ p ( t ) ] = [ χ H ( t ) ] χ ( t ) + χ H ( t ) [ χ ( t ) ] .
| [ χ ( t ) ] | 1 = p = 1 n | [ χ p ( t ) ] | 1 = p = 1 n [ χ p ( t ) ¯ ] [ χ p ( t ) ] = [ χ H ( t ) ] [ χ ( t ) ] .
Lemma 2
([37]). If there is a continuous and analytic function χ ( t ) = χ 1 ( t ) , χ 2 ( t ) , , χ n ( t ) T C n , then for t [ t 0 , + ) , α ( 0 , 1 ) , one has:
D_{t_0,t}^α ([χ^H(t)] χ(t) + χ^H(t) [χ(t)]) ≤ [χ^H(t)] D_{t_0,t}^α χ(t) + (D_{t_0,t}^α χ^H(t)) [χ(t)].
Lemma 3.
If there is a continuous and differentiable function v(t) ≥ 0, t ∈ [t_0, +∞), with v(t_0) > 0, κ_1, κ_2 > 0 and α ∈ (0, 1], then D_{t_0,t}^α v(t) ≤ −κ_1 v(t) − κ_2 implies:
[32] v(t) ≤ (v(t_0) + κ_2/κ_1) E_α(−κ_1 (t − t_0)^α) − κ_2/κ_1.
There exists t* > t_0 such that lim_{t→t*} v(t) = 0 and v(t) ≡ 0 for t ≥ t*, where t* = t_0 + (−ϖ/κ_1)^{1/α} and ϖ = max{ y | E_α(y) = κ_2/(κ_1 v(t_0) + κ_2) }.
Proof. 
As demonstrated in [31], let Σ(t) = (v(t_0) + κ_2/κ_1) E_α(−κ_1 (t − t_0)^α) − κ_2/κ_1. Since κ_1 > 0 and α ∈ (0, 1], it is clear that E_α(−κ_1 (t − t_0)^α) is non-increasing. Firstly, the existence of t* needs to be verified. Obviously, E_α(0) = 1 and lim_{t→+∞} E_α(−κ_1 (t − t_0)^α) = 0, so Σ(t_0) = v(t_0) > 0 and lim_{t→+∞} Σ(t) = −κ_2/κ_1 < 0. Consequently, there is t* > t_0 such that Σ(t*) = 0, that is to say E_α(−κ_1 (t* − t_0)^α) = κ_2/(κ_1 v(t_0) + κ_2). In addition, t* = t_0 + (−ϖ/κ_1)^{1/α}, where ϖ = max{ y | E_α(y) = κ_2/(κ_1 v(t_0) + κ_2) } < 0. Secondly, it is proved by contradiction that v(t) ≡ 0 for t ≥ t*. If this is not true, then there exists t′ > t* such that v(t′) > 0. However, v(t′) ≤ Σ(t′) = (v(t_0) + κ_2/κ_1) E_α(−κ_1 (t′ − t_0)^α) − κ_2/κ_1 ≤ (v(t_0) + κ_2/κ_1) E_α(−κ_1 (t* − t_0)^α) − κ_2/κ_1 = 0, which is a contradiction with v(t′) > 0. Consequently, there exists t* > t_0 such that lim_{t→t*} v(t) = 0 and, when t ≥ t*, v(t) ≡ 0. □

2.2. Model Description

An FCCMNN with a time delay is considered as follows:
D_{t_0,t}^α z_i(t) = −C z_i(t) + A(z_i(t)) f(z_i(t)) + B(z_i(t)) f(z_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ z_j(t) + I(t) + u_i(t),
where α ∈ (0, 1), i, j = 1, 2, …, N. z_i(t − τ) = (z_{i1}(t − τ), z_{i2}(t − τ), …, z_{in}(t − τ))^T ∈ C^n and z_i(t) = (z_{i1}(t), z_{i2}(t), …, z_{in}(t))^T ∈ C^n are the states of the ith neuron with and without the time delay, 0 < τ ≤ τ_max. C = diag(c_1, …, c_n) ∈ C^{n×n}. f(·): C^n → C^n represents the activation function. σ > 0 is the coupling strength. G = (g_ij)_{N×N} with g_ij ∈ R; g_ij ≠ 0 means that neuron i can receive information from neuron j, and otherwise there is no link. Define g_ii = −Σ_{j=1, j≠i}^N g_ij, which is equivalent to Σ_{j=1}^N g_ij = 0. Γ = diag{γ_1, γ_2, …, γ_n} ∈ R^{n×n} is an inner matrix and Γ > 0. I(t) ∈ C^n is an external input. u_i(t) is the controller that needs to be designed. A(z_i(t)) = (a_pq(z_ip(t)))_{n×n}, B(z_i(t)) = (b_pq(z_ip(t)))_{n×n}, and A(z_i(t)), B(z_i(t)) ∈ C^{n×n}. Based on the characteristics of memristors, a_pq(z_ip(t)) and b_pq(z_ip(t)) are given as follows:
a_pq(z_ip(t)) = á_pq when |z_ip(t)|_1 ≤ I, and à_pq when |z_ip(t)|_1 > I; b_pq(z_ip(t)) = b́_pq when |z_ip(t)|_1 ≤ I, and b̀_pq when |z_ip(t)|_1 > I,
where I > 0 represents the switching jump, and á_pq, à_pq, b́_pq and b̀_pq are all known complex numbers, p, q = 1, 2, …, n. The initial value of network (6) is z_i(w) = ψ_i(w), w ∈ [t_0 − τ, t_0], i = 1, 2, …, N.
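The state-dependent switching above can be mirrored directly in code. A hedged sketch with our own (hypothetical) names, where `a_low` and `a_high` stand for á_pq and à_pq and `switch_jump` for I:

```python
def memristive_coeff(z_ip, switch_jump, a_low, a_high):
    """Memristive connection weight a_pq(z_ip): takes the value a_low
    when |z_ip|_1 <= switch_jump and a_high otherwise, where
    |z|_1 = |Re z| + |Im z| is the complex 1-norm of a scalar."""
    norm1 = abs(z_ip.real) + abs(z_ip.imag)
    return a_low if norm1 <= switch_jump else a_high
```

Note that the comparison uses the complex 1-norm of the scalar state, not its modulus, matching the definition in the Notations section.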
The leaders of the network (6) are stated as follows:
D_{t_0,t}^α s_l(t) = −C s_l(t) + A(s_l(t)) f(s_l(t)) + B(s_l(t)) f(s_l(t − τ)) + I(t),
where s_l(t) = (s_{l1}(t), s_{l2}(t), …, s_{ln}(t))^T ∈ C^n is the leader of the lth cluster. The initial value of network (8) is s_l(w) = ϕ_l(w), w ∈ [t_0 − τ, t_0], l = 1, 2, …, m, m ∈ N^+. The definitions of C, A(·), B(·), f(·), I(t) are the same as in Equation (6).
Suppose that all neurons in network (6) are divided into m clusters, with neurons within the same cluster following the same leader. For example,
{z_{n_0+1}(t), …, z_{n_0+n_1}(t)} → s_1(t), {z_{n_0+n_1+1}(t), …, z_{n_0+n_1+n_2}(t)} → s_2(t), …, {z_{n_0+n_1+⋯+n_{m−1}+1}(t), …, z_{n_0+n_1+n_2+⋯+n_m}(t)} → s_m(t),
where n l is the number of neurons in the lth cluster, l = 1 , 2 , , m , n 0 = 0 , n 0 + n 1 + n 2 + + n m = N . Define Λ l = { n 0 + n 1 + + n l 1 + 1 , , n 0 + n 1 + + n l } .
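The index sets Λ_l can be generated mechanically from the cluster sizes; a small helper (our own naming), returning 1-based neuron indices as in the paper:

```python
def cluster_index_sets(cluster_sizes):
    """Build Lambda_l = {n_0+...+n_{l-1}+1, ..., n_0+...+n_l} for each
    cluster l, given the cluster sizes [n_1, ..., n_m] (with n_0 = 0)."""
    sets, start = [], 1
    for n in cluster_sizes:
        sets.append(list(range(start, start + n)))
        start += n
    return sets
```

For example, cluster sizes [2, 3] yield Λ_1 = {1, 2} and Λ_2 = {3, 4, 5}.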
According to the set-valued map theory and complex-valued Filippov solution [34], the networks (6) and (8) can be rewritten to Equations (10) and (11).
D_{t_0,t}^α z_i(t) = −C z_i(t) + Ã(z_i(t)) f(z_i(t)) + B̃(z_i(t)) f(z_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ z_j(t) + I(t) + u_i(t),
D_{t_0,t}^α s_l(t) = −C s_l(t) + Ã(s_l(t)) f(s_l(t)) + B̃(s_l(t)) f(s_l(t − τ)) + I(t),
where Ã(z_i(t)) = (ã_pq(z_ip(t)))_{n×n}, B̃(z_i(t)) = (b̃_pq(z_ip(t)))_{n×n}, ã_pq(z_ip(t)) ∈ co¯[a_pq(z_ip(t))], b̃_pq(z_ip(t)) ∈ co¯[b_pq(z_ip(t))], and co¯[·] is a convex hull. Similarly, Ã(s_l(t)) = (ã_pq(s_lp(t)))_{n×n}, B̃(s_l(t)) = (b̃_pq(s_lp(t)))_{n×n}. It is worth noting that ã_pq(s_lp(t)) may differ from ã_pq(z_ip(t)), and b̃_pq(s_lp(t)) may differ from b̃_pq(z_ip(t)), p, q = 1, 2, …, n.
For convenience, there are some assumptions and propositions.
Assumption 1.
G = ( g i j ) N × N is the adjacent matrix of network (6),
G = | G_11  G_12  ⋯  G_1m |
    | G_21  G_22  ⋯  G_2m |
    |  ⋮     ⋮    ⋱   ⋮   |
    | G_m1  G_m2  ⋯  G_mm |,
where G_ll ∈ R^{n_l × n_l} and G_kl ∈ R^{n_k × n_l} (k ≠ l). Suppose that G_ll and G_kl are both zero row-sum matrices, k, l = 1, 2, …, m. Obviously, G is a zero row-sum matrix.
Assumption 2.
For any a, b ∈ C, there exists L > 0 such that the activation function f(·) satisfies |f(a) − f(b)|_1 ≤ L |a − b|_1. There exists F > 0 such that |f(c)|_1 ≤ F for all c ∈ C.
Proposition 1.
Given Assumptions 1 and 2, the following inequalities can be derived:
|ã_pq(z_ip(t)) f_q(z_iq(t)) − ã_pq(s_lp(t)) f_q(s_lq(t))|_1 ≤ ā_pq L |e_iq(t)|_1 + ǎ_pq F;
|b̃_pq(z_ip(t)) f_q(z_iq(t − τ)) − b̃_pq(s_lp(t)) f_q(s_lq(t − τ))|_1 ≤ 2 b̄_pq F;
where ā_pq = max{|á_pq|_1, |à_pq|_1}, b̄_pq = max{|b́_pq|_1, |b̀_pq|_1}, ǎ_pq = |á_pq − à_pq|_1, i = 1, 2, …, N, l = 1, 2, …, m, p, q = 1, 2, …, n.
Proof. 
|ã_pq(z_ip(t)) f_q(z_iq(t)) − ã_pq(s_lp(t)) f_q(s_lq(t))|_1 = |ã_pq(z_ip(t)) f_q(z_iq(t)) − ã_pq(z_ip(t)) f_q(s_lq(t)) + ã_pq(z_ip(t)) f_q(s_lq(t)) − ã_pq(s_lp(t)) f_q(s_lq(t))|_1 = |ã_pq(z_ip(t)) (f_q(z_iq(t)) − f_q(s_lq(t))) + (ã_pq(z_ip(t)) − ã_pq(s_lp(t))) f_q(s_lq(t))|_1 ≤ ā_pq L |e_iq(t)|_1 + ǎ_pq F;
|b̃_pq(z_ip(t)) f_q(z_iq(t − τ)) − b̃_pq(s_lp(t)) f_q(s_lq(t − τ))|_1 ≤ |b̃_pq(z_ip(t))|_1 |f_q(z_iq(t − τ))|_1 + |b̃_pq(s_lp(t))|_1 |f_q(s_lq(t − τ))|_1 ≤ 2 b̄_pq F. □
Define the synchronization error e_i(t) = z_i(t) − s_l(t), i ∈ Λ_l, l = 1, 2, …, m. Based on Assumption 1 that G is a zero row-sum matrix, the controlled error network can be described as follows:
D_{t_0,t}^α e_i(t) = −C e_i(t) + Ã(e_i(t)) f̃(e_i(t)) + B̃(e_i(t)) f̃(e_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ e_j(t) + u_i(t),
where Ã(e_i(t)) f̃(e_i(t)) = Ã(z_i(t)) f(z_i(t)) − Ã(s_l(t)) f(s_l(t)), B̃(e_i(t)) f̃(e_i(t − τ)) = B̃(z_i(t)) f(z_i(t − τ)) − B̃(s_l(t)) f(s_l(t − τ)).
Definition 4
([31]). The networks (6) and (8) achieve FTCS if there exists a settling time t* > t_0 such that:
lim sup_{t→+∞} |s_k(t) − s_l(t)|_1 > 0, k ≠ l, k, l = 1, 2, …, m;
lim_{t→t*} |z_i(t) − s_l(t)|_1 = 0, i ∈ Λ_l, l = 1, 2, …, m;
|z_i(t) − s_l(t)|_1 ≡ 0, t ≥ t*, i ∈ Λ_l, l = 1, 2, …, m.

3. Main Results

To achieve the FTCS of the networks (6) and (8), a novel controller with CVSF is designed.
u_i(t) = −k_i^1 e_i(t) − k^2 [e_i(t)],
where k_i^1 > 0 and k^2 = diag(k_1^2, k_2^2, …, k_n^2) ∈ R^{n×n} is a positive definite diagonal matrix. Based on Equation (14), the error network (13) can be rewritten as
D_{t_0,t}^α e_i(t) = −C e_i(t) + Ã(e_i(t)) f̃(e_i(t)) + B̃(e_i(t)) f̃(e_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ e_j(t) − k_i^1 e_i(t) − k^2 μ_i(t),
where μ i ( t ) c o ¯ ( [ e i ( t ) ] ) .
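The controller (14) is straightforward to implement once the complex-valued sign function is available. A minimal sketch (names are ours), where `k1_i` is the scalar gain k_i^1 and `k2_diag` holds the diagonal entries of k^2:

```python
import numpy as np

def cluster_controller(e_i, k1_i, k2_diag):
    """u_i(t) = -k_i^1 e_i(t) - k^2 [e_i(t)], with the complex-valued
    sign function applied elementwise to the synchronization error."""
    e_i = np.asarray(e_i, dtype=complex)
    csign_e = np.sign(e_i.real) + 1j * np.sign(e_i.imag)  # [e_i(t)]
    return -k1_i * e_i - np.asarray(k2_diag) * csign_e
```

For example, with e_i = 2 − i, k_i^1 = 0.5 and k^2 = diag(0.2), the control input is −0.5(2 − i) − 0.2(1 − i) = −1.2 + 0.7i.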
Theorem 1.
Suppose that Assumptions 1 and 2 hold. Under the designed controller (14), the networks (6) and (8) achieve FTCS if
λ_1 = λ_min( I_N ⊗ (Ĉ − L Ā) − σ (Ḡ ⊗ Γ) + K^1 ⊗ I_n ) > 0,
λ_2 = λ_min( k^2 − F (Ǎ + 2 B̄) ) > 0.
The settling time is t* = t_0 + (−ϖ/λ_1)^{1/α} with ϖ = max{ y | E_α(y) = λ_2/(λ_1 |e(t_0)|_1 + λ_2) }, where Ĉ = diag(c_1^R − |c_1^I|, …, c_n^R − |c_n^I|), Ā = (ā_pq)_{n×n}, B̄ = (b̄_pq)_{n×n}, Ǎ = (ǎ_pq)_{n×n}, Ḡ = (ḡ_ij)_{N×N} with ḡ_ii = g_ii and ḡ_ij = |g_ij| for i ≠ j, and K^1 = diag(k_1^1, k_2^1, …, k_N^1).
Proof. 
Let the Lyapunov function be V(t) = 2 Σ_{l=1}^m Σ_{i∈Λ_l} |e_i(t)|_1 = Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] e_i(t) + e_i^H(t) [e_i(t)]). Along the error network (15) and by Lemma 2, it can be concluded that
D_{t_0,t}^α V(t) ≤ Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] D_{t_0,t}^α e_i(t) + D_{t_0,t}^α e_i^H(t) [e_i(t)]) = Σ_{l=1}^m Σ_{i∈Λ_l} { −[e_i^H(t)] C e_i(t) − e_i^H(t) C^H [e_i(t)] + [e_i^H(t)] Ã(e_i(t)) f̃(e_i(t)) + (Ã(e_i(t)) f̃(e_i(t)))^H [e_i(t)] + [e_i^H(t)] B̃(e_i(t)) f̃(e_i(t − τ)) + (B̃(e_i(t)) f̃(e_i(t − τ)))^H [e_i(t)] + σ Σ_{j=1}^N g_ij ([e_i^H(t)] Γ e_j(t) + e_j^H(t) Γ [e_i(t)]) − k_i^1 ([e_i^H(t)] e_i(t) + e_i^H(t) [e_i(t)]) − ([e_i^H(t)] k^2 μ_i(t) + μ_i^H(t) k^2 [e_i(t)]) }.
Calculate each term separately,
−Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] C e_i(t) + e_i^H(t) C^H [e_i(t)]) = −Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n (c_q [e_iq^H(t)] e_iq(t) + c_q^H e_iq^H(t) [e_iq(t)]) = −2 Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n ( c_q^R |e_iq(t)|_1 − c_q^I ([e_iq^R(t)] e_iq^I(t) − [e_iq^I(t)] e_iq^R(t)) ) ≤ −2 Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n ( c_q^R − |c_q^I| ) |e_iq(t)|_1 = −2 · 1_{Nn}^T (I_N ⊗ Ĉ) ê(t),
where e ^ ( t ) = | e 11 | 1 , , | e 1 n | 1 , , | e n 1 1 | 1 , , | e n 1 n | 1 , , | e N 1 | 1 , , | e N n | 1 T R N n .
Based on Lemma 1 and Proposition 1, it is straightforward to derive
Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] Ã(e_i(t)) f̃(e_i(t)) + (Ã(e_i(t)) f̃(e_i(t)))^H [e_i(t)]) = Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n { [e_iq^H(t)] (ã_pq(z_ip(t)) f_q(z_iq(t)) − ã_pq(s_lp(t)) f_q(s_lq(t))) + (ã_pq(z_ip(t)) f_q(z_iq(t)) − ã_pq(s_lp(t)) f_q(s_lq(t)))^H [e_iq(t)] } ≤ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n ( ā_pq L |e_iq(t)|_1 + ǎ_pq F ) ([e_iq^H(t)] + [e_iq(t)]) ≤ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n ( 2 [e_iq^R(t)] ā_pq L |e_iq(t)|_1 + 2 |[e_iq(t)]|_1 ǎ_pq F ) ≤ 2 L · 1_{Nn}^T (I_N ⊗ Ā) ê(t) + 2 F [e^H(t)] (I_N ⊗ Ǎ) [e(t)],
where e ( t ) = ( e 1 T ( t ) , e 2 T ( t ) , , e N T ( t ) ) T .
Similarly, it can be shown that
Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] B̃(e_i(t)) f̃(e_i(t − τ)) + (B̃(e_i(t)) f̃(e_i(t − τ)))^H [e_i(t)]) = Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n { [e_iq^H(t)] (b̃_pq(z_ip(t)) f_q(z_iq(t − τ)) − b̃_pq(s_lp(t)) f_q(s_lq(t − τ))) + (b̃_pq(z_ip(t)) f_q(z_iq(t − τ)) − b̃_pq(s_lp(t)) f_q(s_lq(t − τ)))^H [e_iq(t)] } ≤ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n ([e_iq^H(t)] + [e_iq(t)]) 2 b̄_pq F ≤ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{p=1}^n Σ_{q=1}^n 4 b̄_pq F |[e_iq(t)]|_1 = 4 F [e^H(t)] (I_N ⊗ B̄) [e(t)].
According to Lemma 1, it follows that
σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{j=1}^N g_ij ([e_i^H(t)] Γ e_j(t) + e_j^H(t) Γ [e_i(t)]) = σ Σ_{l=1}^m Σ_{i∈Λ_l} g_ii ([e_i^H(t)] Γ e_i(t) + e_i^H(t) Γ [e_i(t)]) + σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{j∈Λ_l, j≠i} g_ij ([e_i^H(t)] Γ e_j(t) + e_j^H(t) Γ [e_i(t)]) ≤ 2 σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n g_ii γ_q |e_iq(t)|_1 + 2 σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{j∈Λ_l, j≠i} Σ_{q=1}^n |g_ij| γ_q |e_jq(t)|_1 = 2 σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n g_ii γ_q |e_iq(t)|_1 + 2 σ Σ_{l=1}^m Σ_{j∈Λ_l} Σ_{i∈Λ_l, i≠j} Σ_{q=1}^n |g_ji| γ_q |e_iq(t)|_1 = 2 σ Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{j∈Λ_l} Σ_{q=1}^n ḡ_ji γ_q |e_iq(t)|_1 = 2 σ · 1_{Nn}^T (Ḡ ⊗ Γ) ê(t).
Based on Lemma 1, it can also be derived that
−Σ_{l=1}^m Σ_{i∈Λ_l} k_i^1 ([e_i^H(t)] e_i(t) + e_i^H(t) [e_i(t)]) = −Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n k_i^1 ([e_iq^H(t)] e_iq(t) + e_iq^H(t) [e_iq(t)]) = −2 Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n k_i^1 |e_iq(t)|_1 = −2 · 1_{Nn}^T (K^1 ⊗ I_n) ê(t).
It is easy to obtain
−Σ_{l=1}^m Σ_{i∈Λ_l} ([e_i^H(t)] k^2 μ_i(t) + μ_i^H(t) k^2 [e_i(t)]) = −Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n k_q^2 ([e_iq^H(t)] μ_iq(t) + μ_iq^H(t) [e_iq(t)]) = −2 Σ_{l=1}^m Σ_{i∈Λ_l} Σ_{q=1}^n k_q^2 |[e_iq(t)]|_1 = −2 [e^H(t)] (I_N ⊗ k^2) [e(t)].
Then, it is straightforward to derive
D_{t_0,t}^α V(t) ≤ −2 · 1_{Nn}^T ( I_N ⊗ (Ĉ − L Ā) − σ (Ḡ ⊗ Γ) + K^1 ⊗ I_n ) ê(t) − 2 [e^H(t)] ( I_N ⊗ (k^2 − F (Ǎ + 2 B̄)) ) [e(t)] ≤ −λ_1 V(t) − λ_2,
where e(t) ∈ C^{Nn} ∖ {0}. Based on Lemma 3, one can obtain
|e(t)|_1 ≤ (|e(t_0)|_1 + λ_2/λ_1) E_α(−λ_1 (t − t_0)^α) − λ_2/λ_1.
Therefore, the networks (6) and (8) realize FTCS with settling time t* = t_0 + (−ϖ/λ_1)^{1/α}, where ϖ = max{ y | E_α(y) = λ_2/(λ_1 |e(t_0)|_1 + λ_2) }. □
Corollary 1.
When α = 1, suppose that Assumptions 1 and 2 hold. Under the designed controller (14), the networks (6) and (8) achieve FTCS if
λ_1 = λ_min( I_N ⊗ (Ĉ − L Ā) − σ (Ḡ ⊗ Γ) + K^1 ⊗ I_n ) > 0,
λ_2 = λ_min( k^2 − F (Ǎ + 2 B̄) ) > 0.
The settling time is t* = t_0 + (1/λ_1) ln( (λ_1 |e(t_0)|_1 + λ_2) / λ_2 ).
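For α = 1 the settling time has a closed form and needs no Mittag–Leffler evaluation; a one-line sketch (function and argument names are ours):

```python
import math

def settling_time_alpha1(t0, e0_norm1, lam1, lam2):
    """Settling time from Corollary 1 (alpha = 1):
    t* = t0 + (1/lam1) * ln((lam1 * |e(t0)|_1 + lam2) / lam2)."""
    return t0 + math.log((lam1 * e0_norm1 + lam2) / lam2) / lam1
```

For instance, with t_0 = 0, |e(t_0)|_1 = 1 and λ_1 = λ_2 = 1, this yields t* = ln 2 ≈ 0.693; a zero initial error gives t* = t_0, as expected.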
Remark 1.
When α = 1 , the FCCMNN (6) degenerates into a first-order complex-valued neural network model. Thus, the FTCS of an integer-order neural network can be regarded as a special case of Theorem 1.
If networks (6) and (8) exclude the time delay τ, they should be modified to
D_{t_0,t}^α z_i(t) = −C z_i(t) + A(z_i(t)) f(z_i(t)) + B(z_i(t)) f(z_i(t)) + σ Σ_{j=1}^N g_ij Γ z_j(t) + I(t) + u_i(t),
D_{t_0,t}^α s_l(t) = −C s_l(t) + A(s_l(t)) f(s_l(t)) + B(s_l(t)) f(s_l(t)) + I(t).
Then, the error network becomes
D_{t_0,t}^α e_i(t) = −C e_i(t) + Ã(e_i(t)) f̃(e_i(t)) + B̃(e_i(t)) f̃(e_i(t)) + σ Σ_{j=1}^N g_ij Γ e_j(t) + u_i(t),
where B̃(e_i(t)) f̃(e_i(t)) = B̃(z_i(t)) f(z_i(t)) − B̃(s_l(t)) f(s_l(t)). The other parameters are the same as in Equations (6), (8), and (13).
Corollary 2.
Suppose that Assumptions 1 and 2 hold. Under the designed controller (14), the networks (29) and (30) achieve FTCS if
λ_1 = λ_min( I_N ⊗ (Ĉ − L Ā − L B̄) − σ (Ḡ ⊗ Γ) + K^1 ⊗ I_n ) > 0,
λ_2 = λ_min( k^2 − F (Ǎ + B̌) ) > 0.
The settling time is t* = t_0 + (−ϖ/λ_1)^{1/α} with ϖ = max{ x | E_α(x) = λ_2/(λ_1 |e(t_0)|_1 + λ_2) }, where B̌ = (b̌_pq)_{n×n} and b̌_pq = |b́_pq − b̀_pq|_1.
Remark 2.
Corollary 2 encompasses Theorem 1 in [31] as a particular case since the memristive items are not considered in [31].
If there is no memristor in the coupled networks (6) and (8), then the networks should be changed to
D_{t_0,t}^α z_i(t) = −C z_i(t) + A f(z_i(t)) + B f(z_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ z_j(t) + I(t) + u_i(t),
D_{t_0,t}^α s_l(t) = −C s_l(t) + A f(s_l(t)) + B f(s_l(t − τ)) + I(t).
Then, the error network is
D_{t_0,t}^α e_i(t) = −C e_i(t) + A f̃(e_i(t)) + B f̃(e_i(t − τ)) + σ Σ_{j=1}^N g_ij Γ e_j(t) + u_i(t),
where A, B ∈ C^{n×n}, f̃(e_i(t)) = f(z_i(t)) − f(s_l(t)), f̃(e_i(t − τ)) = f(z_i(t − τ)) − f(s_l(t − τ)). The other parameters are the same as in Equations (6), (8), and (13).
Corollary 3.
Suppose that Assumptions 1 and 2 hold. Under the designed controller (14), the networks (34) and (35) achieve FTCS if
λ_1 = λ_min( I_N ⊗ (Ĉ − L A) − σ (Ḡ ⊗ Γ) + K^1 ⊗ I_n ) > 0,
λ_2 = λ_min( k^2 − 2 F B ) > 0.
The settling time is t* = t_0 + (−ϖ/λ_1)^{1/α}, where ϖ = max{ x | E_α(x) = λ_2/(λ_1 |e(t_0)|_1 + λ_2) }.
Remark 3.
Corollary 3 is similar to Theorem 2 in [32], with the key difference being that the controller in [32] included a time delay, which can be challenging to implement in practice. The three corollaries demonstrate that Theorem 1 proposed in this paper is more comprehensive and capable of addressing more complex problems.

4. Optimization of Control Parameters

In the previous section, the FTCS conditions for networks (6) and (8) were established. Then, one can choose control parameters based on Equations (16) and (17) in Theorem 1. However, these equations only specify the range of the control parameters, which could lead to inefficient use of control resources if the parameters are chosen arbitrarily. To address this issue, an optimization model is developed and solved by PSO.

4.1. The Optimization Model

The optimization model of control parameters is structured as follows:
min J = 1 2 t 0 t ( e H ( t ) e ( t ) + u H ( t ) u ( t ) ) d t ,
s . t . λ 1 > 0 ,
λ 2 > 0 ,
k i 1 ≥ 0 , i = 1 , 2 , … , N ,
k j 2 ≥ 0 , j = 1 , 2 , … , n ,
t = t 0 + ( ϖ / λ 1 ) 1 / α ,
ϖ = max { y | E α ( − y ) = λ 2 / ( λ 1 | e ( t 0 ) | 1 + λ 2 ) } ,
where u ( t ) = ( u 1 T ( t ) , u 2 T ( t ) , … , u N T ( t ) ) T , and e ( t ) , λ 1 , λ 2 , k i 1 , k j 2 are the same as in Theorem 1. Equation (39) is the objective function [15], in which 1 2 ∫ t 0 t ( e H ( t ) e ( t ) ) d t is the integral square error (ISE) index and 1 2 ∫ t 0 t ( u H ( t ) u ( t ) ) d t represents the control energy. The goal is to minimize the sum of these two terms. Equations (40)–(45) serve as the constraints, with Equations (40) and (41) corresponding to the FTCS conditions in Theorem 1. An algorithm based on PSO is designed to solve this optimization model.
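In practice, the objective J must be approximated from sampled trajectories. The following sketch (the function name and sampling layout are assumptions, not from the paper) applies the trapezoidal rule to discretised error and control signals:

```python
import numpy as np

def objective_J(t, e, u):
    """Approximate J = 1/2 * integral of (e^H e + u^H u) dt by the trapezoidal rule.

    t : (T,) sample times; e, u : (T, d) complex-valued sampled trajectories,
    with d = N * n stacked components.
    """
    # e^H(t) e(t) is the squared 2-norm of the stacked complex error vector.
    integrand = 0.5 * (np.sum(np.abs(e) ** 2, axis=1) + np.sum(np.abs(u) ** 2, axis=1))
    # Trapezoidal rule, written out explicitly for NumPy-version independence.
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
```

For a constant unit error and zero control over a unit interval, the approximation is exact and returns 1/2, matching the integrand value.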

4.2. An Algorithm with PSO

The PSO algorithm is a classical intelligent optimization technique that leverages the collective intelligence of a group to find the optimal solution. In this algorithm, a large number of particles are iteratively updated. Each particle has its own velocity and position, where the position represents the value of a potential solution, and the velocity updates the position in each iteration. The velocity update is guided by the particle’s historical best position P b e s t and the global best position G b e s t . The following outlines the rules for updating position and velocity:
X i ( t + 1 ) = X i ( t ) + V i ( t + 1 ) , V i ( t + 1 ) = w ( t ) V i ( t ) + c 1 r a n d 1 ( P b e s t i ( t ) − X i ( t ) ) + c 2 r a n d 2 ( G b e s t j ( t ) − X i ( t ) ) ,
where V i ( t ) = ( V i 1 ( t ) , V i 2 ( t ) , … , V i ( N + n ) ( t ) ) T ∈ R N + n is the velocity vector and X i ( t ) = ( X i 1 ( t ) , X i 2 ( t ) , … , X i ( N + n ) ( t ) ) T ∈ R N + n is the position vector of the ith particle at the tth iteration, with N + n the dimension of the solution space, i = 1 , 2 , … , M , and M the number of particles. w ( t ) is the inertia weight, randomly chosen from ( 0.4 , 0.9 ) . c 1 and c 2 are acceleration constants, and r a n d 1 and r a n d 2 are random numbers in [ 0 , 1 ] . P b e s t i ( t ) = ( p b e s t i 1 , p b e s t i 2 , … , p b e s t i ( N + n ) ) T is the historical optimal position vector of the ith particle. G b e s t j ( t ) = ( g b e s t j 1 , g b e s t j 2 , … , g b e s t j ( N + n ) ) T is the global optimal position vector at the jth iteration, j = 1 , 2 , … , N m , where N m is the maximum number of iterations.
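The update rules above translate directly into code. The following minimal PSO sketch mirrors the stated rules, with w(t) drawn from (0.4, 0.9) at each iteration; the search bounds, test function, and seed are illustrative assumptions rather than the paper's configuration:

```python
import random

def pso(f, dim, n_particles=30, n_iter=200, lo=-5.0, hi=5.0, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO minimising f over [lo, hi]^dim, following the update rules above."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # each particle's best position
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(n_iter):
        w = rng.uniform(0.4, 0.9)                   # inertia weight w(t)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would be the (constrained) objective J of Section 4.1 evaluated by simulating the closed-loop network for a candidate parameter vector; here a simple sphere function suffices to exercise the sketch.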
To address the optimization model presented in Section 4.1, a novel algorithm is developed based on PSO. The flow chart of the algorithm is outlined in Figure 1.
Remark 4.
Negative feedback controllers with the CVSF are used in [31,32]. However, those papers only provide a range for the control parameters, such as k i 1 , k i 2 ≥ X , where X is a non-negative constant, leaving the selection of specific values within this range to the user. The method outlined in this paper calculates control parameters that minimize the combined cost of control energy and dynamic errors, providing a more economical strategy for selecting control parameters in FCCMNN. The proposed algorithm is based on PSO combined with the conditions of Theorem 1. It does not guarantee a globally optimal solution and still has some limitations, but it provides a systematic way to select optimized control parameters.

5. Simulation

5.1. Example 1

An example is provided to demonstrate the feasibility of the main result and the optimization method. The leader's dynamical network (8) is:
D t 0 , t α s l ( t ) = C s l ( t ) + A ( s l ( t ) ) f ( s l ( t ) ) + B ( s l ( t ) ) f ( s l ( t − τ ) ) + I ( t ) .
Let the fractional order be α = 0.97 , the dimension of neuronal states n = 2 , and the number of clusters m = 2 . The initial values are s 1 = ( 1.05 i , 0.8 − 2 i ) T , s 2 = ( 2.74 + 1.05 i , 1.75 + 1.31 i ) T . C = d i a g ( 0.4 − 3.07 i , 0.2 − 1.43 i ) , f ( s ( t ) ) = tanh ( s R ( t ) ) + tanh ( s I ( t ) ) i , then L = 1 and F = 2 . Let τ = 0.1 and I ( t ) = 0.1 sin ( t ) + 0.3 cos ( t ) i .
a 11 ( y 1 ) = 1.24 − 1.75 i , | y 1 | 1 ≤ 3 ; 1.3 − 1.13 i , | y 1 | 1 > 3 . a 12 ( y 1 ) = 2.17 − 1.89 i , | y 1 | 1 ≤ 3 ; 2.22 − 1.99 i , | y 1 | 1 > 3 .
a 21 ( y 2 ) = 1.56 + 1.69 i , | y 2 | 1 ≤ 3 ; 1.62 + 1.87 i , | y 2 | 1 > 3 . a 22 ( y 2 ) = 1.43 − 0.44 i , | y 2 | 1 ≤ 3 ; 1.3 − 0.06 i , | y 2 | 1 > 3 .
b 11 ( y 1 ) = 0.06 + 0.31 i , | y 1 | 1 ≤ 3 ; 0.03 + 0.25 i , | y 1 | 1 > 3 . b 12 ( y 1 ) = 0.31 + 0.09 i , | y 1 | 1 ≤ 3 ; 0.22 − 0.07 i , | y 1 | 1 > 3 .
b 21 ( y 2 ) = 0.03 + 0.44 i , | y 2 | 1 ≤ 3 ; 0.09 + 0.07 i , | y 2 | 1 > 3 . b 22 ( y 2 ) = 0.1 + 0.08 i , | y 2 | 1 ≤ 3 ; 2.21 − 0.07 i , | y 2 | 1 > 3 .
The two-dimensional (2D) state curves of s ( t ) are shown in Figure 2. The network (6) with the controller (14) is:
D t 0 , t α z i ( t ) = C z i ( t ) + A ( z i ( t ) ) f ( z i ( t ) ) + B ( z i ( t ) ) f ( z i ( t − τ ) ) + σ ∑ j = 1 N g i j Γ z j ( t ) + I ( t ) − k i 1 e i ( t ) − k i 2 [ e i ( t ) ] ,
where the coupling strength σ = 1.5 and Γ = d i a g ( 0.2 , 0.2 ) . The topology of the network (6) is shown in Figure 3. The solid lines represent a connection weight of 1, while the dashed lines denote − 1 . The nine neurons are divided into two clusters: Λ 1 = { 1 , 2 , 3 , 4 } , Λ 2 = { 5 , 6 , 7 , 8 , 9 } . By calculation, one obtains C ^ = d i a g ( 2.67 , 1.23 ) . Select K 1 = d i a g ( 9.87 , 10.17 , 10.47 , 9.87 , 9.87 , 9.87 , 10.47 , 9.87 , 10.17 ) and K 2 = d i a g ( 4.84 , 4.6 ) . Then λ 1 = 1.4807 > 0 and λ 2 = 0.1 > 0 . Let t 0 = 0 ; the initial value of z i p ( i = 1 , 2 , … , 9 , p = 1 , 2 ) is selected randomly from [ − 2 , 2 ] + [ − 2 , 2 ] i . Based on Theorem 1, one can calculate that | e ( t 0 ) | 1 = 67.5107 , ϖ = 32.47 , and t = 24.1267 .
A ¯ = [ 2.99 , 4.21 ; 3.49 , 1.87 ] , B ¯ = [ 0.37 , 0.4 ; 0.47 , 0.28 ] , A ˜ = [ 0.68 , 0.15 ; 0.24 , 0.51 ] .
The states of s l ( t ) ( l = 1 , 2 ) and z i ( t ) ( i = 1 , 2 , … , 9 ) under the controller (14) are shown in Figure 4 and Figure 5. The red circles represent s 1 ( t ) , and the blue squares represent s 2 ( t ) . The magenta double lines indicate neurons 1–4, belonging to the first cluster, and the cyan dash-dotted lines indicate neurons 5–9, belonging to the second cluster. Clearly, FTCS is achieved, and the settling time in practice is less than t . The error systems are shown in Figure 6, where the magenta double lines represent the neurons of the first cluster and the blue dotted lines those of the second cluster. As time passes, the synchronization errors tend to 0.
The following concerns the optimization of the control parameters. The PSO algorithm parameters are set as M = 50 , N m = 200 , c 1 = c 2 = 2 . Figure 7 shows the value of the fitness function (objective function) J corresponding to the global optimal solution G b e s t at each iteration. As the number of iterations increases, J decreases. The optimized control parameters are obtained, and the corresponding J is 573.7228 in this simulation.
A comparison experiment is conducted. The control parameters with and without the provided optimization algorithm are listed in Table 2. Most papers only give a range for the control parameters; that is, the parameters only need to satisfy the conditions of Theorem 1. This approach is denoted the normal method. In this method, the control gains are selected as k 1 1 = 9.87 , k 2 1 = 10.17 , k 3 1 = 10.47 , k 4 1 = 9.87 , k 5 1 = 9.87 , k 6 1 = 9.87 , k 7 1 = 10.47 , k 8 1 = 9.87 , k 9 1 = 10.17 , k 1 2 = 4.84 , k 2 2 = 4.6 , which satisfy Theorem 1, and the sum of the ISE index and the control energy is 691.2507. This paper proposes an optimization method for the control parameters, denoted the optimization method. The control parameters obtained by the proposed algorithm are k 1 1 = 10.6886 , k 2 1 = 1.3105 × 10 − 4 , k 3 1 = 8.6932 , k 4 1 = 8.2978 , k 5 1 = 9.6064 , k 6 1 = 8.4519 , k 7 1 = 0.0408 , k 8 1 = 10.643 , k 9 1 = 10.355 , k 1 2 = 5.6879 , k 2 2 = 3.7145 , and the optimized objective is 573.7228. The provided optimization algorithm thus achieves a lower sum of the control energy and the ISE index, so this optimization method can guide the selection of the control parameters.
Remark 5.
The control matrix K 1 makes the matrix Ξ 1 = I N ⊗ ( C ^ − L A ¯ ) − σ G ¯ ⊗ Γ + K 1 ⊗ I n in Equation (16) diagonally dominant. At the same time, K 1 has the minimum norm among such matrices in the normal method of the comparison experiment. The selection of control matrices in [32] is similar to the normal method in this paper. Since the optimization of the controller is not considered in [32], the control matrix there is not the minimum-norm matrix that makes the synchronization-condition matrix diagonally dominant in the simulation. The proposed optimization method, however, gives better control parameters than the normal method; that is, it outperforms an arbitrary selection within the admissible range.
Remark 6.
In this example, the “tanh” function is used as the activation function. In fact, any function satisfying Assumption 2 can serve as the activation function.

5.2. Example 2

A comparison experiment with [31] further illustrates the advantage of the optimization method. The studied model is the same as Example 1 in [31]. The leader's state of each cluster is
D 0 , t α s l ( t ) = C s l ( t ) + B f ( s l ( t ) ) ,
where l = 1 , 2 .
The CNN without memristor is described as follows:
D 0 , t α z i ( t ) = C z i ( t ) + B f ( z i ( t ) ) + ∑ j = 1 N g i j h ( z j ( t ) ) − d l e i ( t ) − θ l [ e i ( t ) ] ,
where N = 14 , Λ 1 = { 1 , 2 , … , 7 } , Λ 2 = { 8 , 9 , … , 14 } . When i ∈ Λ 1 , l = 1 ; otherwise, l = 2 . f ( s ( t ) ) = tanh ( s R ( t ) ) + tanh ( s I ( t ) ) i , h ( s ( t ) ) = 0.1 × ( 1 − exp ( − s R ( t ) ) ) / ( 1 + exp ( − s R ( t ) ) ) + 0.1 / ( 1 + exp ( − s I ( t ) ) ) i .
The topology of the network (50) is shown in Figure 8. If neurons i and j are connected, g i j = 1 ; otherwise, g i j = 0 . In addition, g i i = − ∑ j = 1 , j ≠ i N g i j . The parameters are identical to Case 2 of Example 1 in [31], namely α = 0.7 , d 1 = 9 , d 2 = 8 , θ 1 = θ 2 = 5 . The matrices C and B and the initial values of s l ( t ) are omitted here and can be found in [31]. In [31], the settling time is 7.6398, and by calculation, the value of J in Equation (39) is 3.7505 × 10 3 . We replaced the synchronization conditions of the optimization model in Section 4.1 with those of Theorem 2 in [31]. The parameters of the optimization algorithm are M = 50 , N m = 100 , c 1 = c 2 = 1 , and the optimization model is solved with the steps in Section 4.2. The optimal control parameters are found to be d 1 = 13.8477 , d 2 = 13.8515 , θ 1 = 2.5530 , θ 2 = 2.5525 . Accordingly, the settling time is 7.5698, and the value of the fitness function J is 2.8144 × 10 3 . The results of the comparison experiment are shown in detail in Table 3. The settling times are close, but 2.8144 × 10 3 < 3.7505 × 10 3 , which substantiates that the proposed optimization method helps select better control parameters and can greatly save control resources.
Figure 8. The topology of 14 neurons of the network (50) in Example 2.
Remark 7.
Many papers [13,14,18,20,21,23,31,32,33,37] pay attention only to the synchronization of the networks and give only a range for the control parameters, whereas our optimization method selects the optimal control parameters and yields an energy-efficient controller.

6. Conclusions

This paper investigates a coupled memristive neural network, specifically a FOCV system with a time delay. The FTCS of these networks is analyzed, with neurons divided into different clusters, each following a distinct leader. A controller using the CVSF is designed to achieve FTCS without decomposition. Additionally, an optimization model solved by PSO is developed to guide the selection of control parameters. The simulation results verify the proposed theorem and optimization method. The advantage of the proposed theorem is that the FTCS of FOCV neural networks is achieved without dividing the complex values into real and imaginary parts. Moreover, the model studied here is more general, and the conclusions of related works appear as special cases of the theorem. The advantage of the proposed algorithm is that it provides a concrete method for selecting control parameters on the basis of the theorem, which fills a gap in existing research. However, the settling time of FTCS depends on the systems' initial values, which are sometimes difficult to obtain. Therefore, achieving fixed-time synchronization, where the settling time is independent of the initial values, remains an open direction, particularly for FOCV systems, and will be our future study.

Author Contributions

Writing—original draft preparation and funding, Q.C.; writing—review and editing, Q.C. and R.W.; supervision, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62303342).

Data Availability Statement

All data included in this study are available.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Alghamdi, F.A.; Almanaseer, H.; Jaradat, G.; Jaradat, A.; Alsmadi, M.K.; Jawarneh, S.; Almurayh, A.S.; Alqurni, J.; Alfagham, H. Multilayer perceptron neural network with arithmetic optimization algorithm-based feature selection for cardiovascular disease prediction. Mach. Learn. Knowl. Extr. 2024, 6, 987–1008. [Google Scholar] [CrossRef]
  2. Luo, N.; Xu, D.; Xing, B.; Yang, X.; Sun, C. Principles and applications of convolutional neural network for spectral analysis in food quality evaluation: A review. J. Food Compos. Anal. 2024, 128, 105996. [Google Scholar] [CrossRef]
  3. Zhao, B.; Cao, X.; Zhang, W.; Liu, X.; Miao, Q.; Li, Y. CompNET: Boosting image recognition and writer identification via complementary neural network post-processing. Pattern Recognit. 2025, 157, 110880. [Google Scholar] [CrossRef]
  4. Park, J.H. Recent Advances in Control Problems of Dynamical Systems and Networks; Springer-Nature: Cham, Switzerland, 2021. [Google Scholar]
  5. Lai, Q.; Yang, L.; Hu, G.; Guan, Z.-H.; Iu, H.H.-C. Constructing multiscroll memristive neural network with local activity memristor and application in image encryption. IEEE Trans. Cybern. 2024, 54, 4039–4048. [Google Scholar] [CrossRef]
  6. Liu, H.; Cheng, J.; Cao, J.; Katib, I. Preassigned-time synchronization for complex-valued memristive neural networks with reaction–diffusion terms and Markov parameters. Neural Netw. 2024, 169, 520–531. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Zhou, L. State estimation for proportional delayed complex-valued memristive neural networks. Inf. Sci. 2024, 680, 121150. [Google Scholar] [CrossRef]
  8. Chang, Q.; Park, J.H.; Yang, Y. The optimization of control parameters: Finite-time bipartite synchronization of memristive neural networks with multiple time delays via saturation function. IEEE Trans. Neural Networks Learn. Syst. 2022, 34, 7861–7872. [Google Scholar] [CrossRef]
  9. Aguirre, F.; Sebastian, A.; Le Gallo, M.; Song, W.; Wang, T.; Yang, J.J.; Lu, W.; Chang, M.-F.; Ielmini, D.; Yang, Y.; et al. Hardware implementation of memristor-based artificial neural networks. Nat. Commun. 2024, 15, 1974. [Google Scholar] [CrossRef]
  10. Zhang, G.; Wen, S. New approximate results of fixed-time stabilization for delayed inertial memristive neural networks. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 3428–3432. [Google Scholar] [CrossRef]
  11. Deng, Q.; Wang, C.; Lin, H. Memristive Hopfield neural network dynamics with heterogeneous activation functions and its application. Chaos Solitons Fractals 2024, 178, 114387. [Google Scholar] [CrossRef]
  12. Li, Y.; Chen, Y.; Podlubny, I. Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 2009, 45, 1965–1969. [Google Scholar] [CrossRef]
  13. Panda, S.K.; Nagy, A.; Vijayakumar, V.; Hazarika, B. Stability analysis for complex-valued neural networks with fractional order. Chaos Solitons Fractals 2023, 175, 114045. [Google Scholar] [CrossRef]
  14. Feng, L.; Yu, J.; Hu, C.; Yang, C.; Jiang, H. Nonseparation method-based finite/fixed-time synchronization of fully complex-valued discontinuous neural networks. IEEE Trans. Cybern. 2020, 51, 3212–3223. [Google Scholar] [CrossRef]
  15. Chang, Q.; Park, J.H.; Yang, Y.; Wang, F. Finite-Time multiparty synchronization of T–S fuzzy coupled memristive neural networks with optimal event-triggered control. IEEE Trans. Fuzzy Syst. 2022, 31, 2545–2555. [Google Scholar] [CrossRef]
  16. Sorrentino, F.; Pecora, L.M.; Hagerstrom, A.M.; Murphy, T.E.; Roy, R. Complete characterization of the stability of cluster synchronization in complex dynamical networks. Sci. Adv. 2016, 2, e1501737. [Google Scholar] [CrossRef] [PubMed]
  17. Rossa, F.D.; Pecora, L.; Blaha, K.; Shirin, A.; Klickstein, I.; Sorrentino, F. Symmetries and cluster synchronization in multilayer networks. Nat. Commun. 2020, 11, 3179. [Google Scholar] [CrossRef]
  18. Zhou, L.; Tan, F.; Yu, F.; Liu, W. Cluster synchronization of two-layer nonlinearly coupled multiplex networks with multi-links and time-delays. Neurocomputing 2019, 359, 264–275. [Google Scholar] [CrossRef]
  19. Tang, Z.; Park, J.H.; Shen, H. Finite-time cluster synchronization of lur’e networks: A nonsmooth approach. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1213–1224. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Liu, X.; Zhou, D.; Lin, C.; Chen, J.; Wang, H. Finite-time stabilizability and instabilizability for complex-valued memristive neural networks with time delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 2371–2382. [Google Scholar] [CrossRef]
  21. Chen, J.; Chen, B.; Zeng, Z. Global asymptotic stability and adaptive ultimate mittag-leffler synchronization for a fractional-order complex-valued memristive neural networks with delays. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2519–2535. [Google Scholar] [CrossRef]
  22. Li, X.; Wu, H.; Cao, J. Synchronization in finite time for variable-order fractional complex dynamic networks with multi-weights and discontinuous nodes based on sliding mode control strategy. Neural Netw. 2021, 139, 335–347. [Google Scholar] [CrossRef] [PubMed]
  23. Duan, L.; Shi, M.; Huang, C.; Fang, X. Synchronization in finite-/fixed-time of delayed diffusive complex-valued neural networks with discontinuous activations. Chaos Solitons Fractals 2021, 142, 110386. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  25. Tan, Y.; Yuan, Y.; Xie, X.; Tian, E.; Liu, J. Observer-based event-triggered control for interval type-2 fuzzy networked system with network attacks. IEEE Trans. Fuzzy Syst. 2023, 31, 2788–2798. [Google Scholar] [CrossRef]
  26. Liu, B.; Liu, T.; Xiao, P. Dynamic event-triggered intermittent control for stabilization of delayed dynamical systems. Automatica 2023, 149, 110847. [Google Scholar] [CrossRef]
  27. Li, X.; Liu, W.; Gorbachev, S.; Cao, J. Event-triggered impulsive control for input-to-state stabilization of nonlinear time-delay systems. IEEE Trans. Cybern. 2023, 54, 2536–2544. [Google Scholar] [CrossRef]
  28. Liu, Z.; Gao, H.; Yu, X.; Lin, W.; Qiu, J.; Rodríguez-Andina, J.J.; Qu, D. B-spline wavelet neural-network-based adaptive control for linear-motor-driven systems via a novel gradient descent algorithm. IEEE Trans. Ind. Electron. 2023, 71, 1896–1905. [Google Scholar] [CrossRef]
  29. Wang, J.; Ru, T.; Shen, H.; Cao, J.; Park, J.H. Finite-time L2L synchronization for semi-markov jump inertial neural networks using sampled data. IEEE Trans. Network Sci. Eng. 2021, 8, 163–173. [Google Scholar] [CrossRef]
  30. Park, J.H.; Lee, T.H.; Liu, Y.; Chen, J. Dynamic Systems with Time Delays: Stability and Control; Springer-Nature: Singapore, 2019. [Google Scholar]
  31. Yang, S.; Hu, C.; Yu, J.; Jiang, H. Finite-time cluster synchronization in complex-variable networks with fractional-order and nonlinear coupling. Neural Netw. 2021, 135, 212–224. [Google Scholar] [CrossRef] [PubMed]
  32. Liu, P.; Zeng, Z.; Wang, J. Asymptotic and finite-time cluster synchronization of coupled fractional-order neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4956–4967. [Google Scholar] [CrossRef]
  33. Wang, L.; Song, Q.; Liu, Y.; Zhao, Z.; Alsaadi, F.E. Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with both leakage and time-varying delays. Neurocomputing 2017, 245, 86–101. [Google Scholar] [CrossRef]
  34. Yu, T.; Cao, J.; Rutkowski, L.; Luo, Y.-P. Finite-time synchronization of complex-valued memristive-based neural networks via hybrid control. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3938–3947. [Google Scholar] [CrossRef] [PubMed]
  35. Zhu, X.; Tang, Z.; Feng, J.; Park, J.H. Aperiodically intermittent event-triggered pinning control on cluster synchronization of directed complex networks. ISA Trans. 2023, 138, 281–290. [Google Scholar] [CrossRef]
  36. Zhang, J.; Ma, Z.; Li, X.; Qiu, J. Cluster Synchronization in Delayed Networks With Adaptive Coupling Strength via Pinning Control. Front. Phys. 2020, 8, 235. [Google Scholar] [CrossRef]
  37. Hou, T.; Yu, J.; Hu, C.; Jiang, H. Finite-time synchronization of fractional-order complex-variable dynamic networks. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 4297–4307. [Google Scholar] [CrossRef]
  38. Podlubny, I. Fractional Differential Equations; Academic Press: London, UK, 1999. [Google Scholar]
Figure 1. The flow chart of control parameters selection algorithm based on PSO.
Figure 2. The 2D state curves of the leaders s 1 ( t ) and s 2 ( t ) of 2 clusters in Example 1.
Figure 3. The topology of 9 neurons of the network (6) in Example 1.
Figure 4. The stable evolution of the first dimensional state of the leaders s l 1 ( t ) and follower-neurons z i 1 ( t ) in Example 1, l = 1 , 2 , i = 1 , 2 , , 9 .
Figure 5. The stable evolution of the second dimensional state of the leaders s l 2 ( t ) and follower-neurons z i 2 ( t ) in Example 1, l = 1 , 2 , i = 1 , 2 , , 9 .
Figure 6. The stable evolutions of 9 neurons’ errors e i ( t ) in Example 1, i = 1 , 2 , , 9 .
Figure 7. The evolution of optimization target function J in Example 1.
Table 2. Results with different control parameters in Example 1.
Normal method: k i 1 ( i = 1 , … , 9 ) = 9.87, 10.17, 10.47, 9.87, 9.87, 9.87, 10.47, 9.87, 10.17; k i 2 ( i = 1 , 2 ) = 4.84, 4.6; J = 691.2507.
Optimization method: k i 1 ( i = 1 , … , 9 ) = 10.6886, 1.3105 × 10 − 4 , 8.6932, 8.2978, 9.6064, 8.4519, 0.0408, 10.643, 10.355; k i 2 ( i = 1 , 2 ) = 5.6879, 3.7145; J = 573.7228.
Table 3. Results of comparison experiment in Example 2.
[31]: d 1 = 9 , d 2 = 8 , θ 1 = 5 , θ 2 = 5 , t = 7.6398, J = 3.7505 × 10 3 .
Optimization: d 1 = 13.8477 , d 2 = 13.8515 , θ 1 = 2.5530 , θ 2 = 2.5525 , t = 7.5698, J = 2.8144 × 10 3 .

Share and Cite

MDPI and ACS Style

Chang, Q.; Wang, R.; Yang, Y. Finite-Time Cluster Synchronization of Fractional-Order Complex-Valued Neural Networks Based on Memristor with Optimized Control Parameters. Fractal Fract. 2025, 9, 39. https://doi.org/10.3390/fractalfract9010039
