Article

New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays

by Nattakan Boonsatit 1, Santhakumari Rajendran 2,*, Chee Peng Lim 3, Anuwat Jirawattanapanit 4 and Praneesh Mohandas 2

1 Faculty of Science and Technology, Rajamangala University of Technology Suvarnabhumi 13000, Thailand
2 Sri Ramakrishna College of Arts and Science, Coimbatore 641006, India
3 Institute for Intelligent Systems Research and Innovation, Deakin University, Waurn Ponds, VIC 3216, Australia
4 Department of Mathematics, Faculty of Science, Phuket Rajabhat University (PKRU), Phuket 83000, Thailand
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(9), 515; https://doi.org/10.3390/fractalfract6090515
Submission received: 3 August 2022 / Revised: 2 September 2022 / Accepted: 6 September 2022 / Published: 13 September 2022

Abstract

The issue of adaptive finite-time cluster synchronization of neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the required conditions on the control parameters. The effectiveness of the proposed method is illustrated through simulation studies.

1. Introduction

Due to sub-network interaction and cooperation, coupled neural networks (CNNs) are more likely than conventional neural networks (NNs) [1] to exhibit complicated dynamical features, as explained in [2,3,4,5]. In view of the potential of CNNs in various fields, such as electrical grids, image processing, compression coding, and medical science, there has been increasing interest in research relating to CNNs over the years [6,7,8,9]. Although complex-valued signals are common in real-world applications, coupled real-valued neural networks (CRVNNs) [2,3] are unable to handle these signals. In order to deal with complex-valued inputs, coupled complex-valued neural networks (CCVNNs) are introduced, leading to a more efficient model by incorporating complex variables as network elements [10,11,12,13]. Compared with CRVNNs, CCVNNs can address important practical problems. As an example, an orthogonal decision boundary and a complex-valued signal in a neuron offer effective solutions to the XOR and symmetry detection problems, as presented in [14,15]. As presented in [16], CCVNNs can accurately represent the optical wave fields of phase-conjugate resonators, since their complex-valued signals can be interpreted conveniently with respect to the underlying phase and amplitude characteristics. Additionally, CCVNNs provide a variety of benefits, such as increased learning speed and dependability, as well as efficient computational capabilities. Therefore, CCVNNs have gained importance in real-world applications.
In the control field, synchronization has recently attracted a lot of attention, and its application areas, which include biology, medicine, chemistry, electronics, secure communication and information science, have grown rapidly. Until now, synchronization of dynamical networks has remained a significant concern, with numerous valuable results reported. For instance, the authors of [5] investigated the global exponential synchronization problem of quaternion-valued coupled neural networks with impulses. An array of memristive neural networks with an inertial term, linear coupling and time-varying delay is considered for the synchronization problem in [9]. The finite-time synchronization problem for delayed neutral-type and uncertain neural networks was investigated in [17] and [18], respectively. A review of the literature reveals a variety of synchronization studies, including lag synchronization, complete synchronization, anti-synchronization, and more; see [17,18,19]. Due to its many uses in image encryption, image protection and secure communication, the synchronization issue in CNNs has drawn the most attention. The phenomenon of cluster synchronization (CS) describes how all elements in a network are separated into various clusters: while the elements in the same cluster are completely synchronized, those from separate clusters are desynchronized. Because CS is a widespread phenomenon that can be found in a wide range of natural and man-made systems, with applications in different complex networks, including cellular and metabolic networks, social networks, electrical power grid networks, food webs, biological NNs, telephone call graphs and the World Wide Web, many related research studies have been conducted [20,21,22]. Liu et al. [23] considered a fractional-order linearly coupled system consisting of N NNs and derived several sufficient conditions for the synchronization of the addressed model. Yang et al. [24] discussed the CS issue of fractional-order networks with complex variables and nonlinear coupling in finite time based on the decomposition method; they computed the settling time efficiently using certain important properties of the Mittag-Leffler functions and fractional Caputo derivatives. Zhang et al. [20] explored the CS issue of delayed CNNs with fixed and switching coupling topologies by employing Lyapunov theory and the differential inequality method. While CS of complex networks has been studied widely, the investigation of CS in complex-valued complex networks has yet to attract attention, despite its potential use. In [24], CS of complex-variable dynamical networks was examined, but only for networks without time delays. Therefore, CS of complex-valued complex networks in finite time requires further investigation.
Recently, the dynamical study of neural networks has attracted a lot of attention; see [25,26,27,28,29,30,31]. The work mentioned above [5,9] focuses on the infinite-time synchronization of the drive-response system. Finite-time stability has attracted the attention of researchers [17,18,23] as a means of making the error between systems tend to zero quickly. From an engineering and application perspective, the convergence rate is critical in determining how well the suggested control algorithm performs and how successful it is. Therefore, the finite-time control approach has received a lot of attention [23,24,32,33]. Practically speaking, the finite-time CS of CCVNNs is tractable. With the aid of suitable controllers, finite-time CS refers to the capability of the controlled systems to establish synchronization within a predetermined amount of time. Compared with asymptotic synchronization, which is only achieved as time approaches infinity, finite-time synchronization in some cases not only accelerates synchronization but also offers the benefits of low interruption and tenacity in the presence of uncertainty. Therefore, it is beneficial to look into CS of CCVNNs in finite time. To this end, Yu et al. [32] studied the finite-time CS problem for a coupled dynamical system without delays. He et al. [34] investigated adaptive CS in finite time for neutral-type CNNs with mixed delays. The finite-time CS problem for coupled fuzzy cellular NNs was investigated in [35]. To the authors’ knowledge, not many results on CS of CCVNNs in finite time exist, and the existing conclusions are based on the assumption that the parameters of complex networks are available in the actual world. Furthermore, it is well understood that time delays are unavoidable in NNs, which can cause oscillation or asynchronization. As a result, it is essential to study how time delays affect CNNs. Motivated by the aforementioned aspects, the key objective of this article is to study improved finite-time CS of CCVNNs with mixed time delays. To address adaptive CS of CCVNNs, some useful criteria are derived. Specifically, the important contributions of this article can be summarized as follows: (1) sufficient conditions are obtained for adaptive finite-time CS of CCVNNs; a major benefit of these conditions, expressed as linear matrix inequalities (LMIs), is that they are non-singular. (2) Unlike recent finite-time CS results for CVNNs with nonlinear coupling and no time delay [24], the techniques presented here are applicable to CVNNs with both mixed and time-varying delays.

2. Model Description and Preliminaries

2.1. Preliminaries

Graph Theory: Let $\mathcal{G} = (\mathcal{V}, \mathcal{E}, J)$ be a graph with a set of nodes $\mathcal{V} = \{1, \ldots, V\}$ ($V > 2$), an edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, and a coupling matrix $J = (\jmath_{ij})_{V \times V} \in \mathbb{R}^{V \times V}$ with $\jmath_{ii} = -\sum_{j=1, j \neq i}^{V} \jmath_{ij}$ for $i = 1, \ldots, V$, where $\jmath_{ij} > 0$ if there is an interaction between nodes $i$ and $j$, and $\jmath_{ij} = 0$ otherwise. Denote $C_1, C_2, \ldots, C_M$ with $C_k = \{l_{k-1}+1, l_{k-1}+2, \ldots, l_k\}$ as a partition of the node set $\mathcal{V}$ into $M$ non-empty subsets, with $2 \le M < V$. In addition, several notations of graph partition are introduced. $F = \{F_1, \ldots, F_m\}$ is a partition of the given vertex set $\mathcal{V}$ if the following conditions are satisfied for $p \neq n$, $p, n = 1, 2, \ldots, m$:
(i)
$\bigcup_{n=1}^{m} F_n = F$.
(ii)
$F_n \neq \emptyset$.
(iii)
$F_p \cap F_n = \emptyset$.
Then, $F_1 = \{1, 2, \ldots, q_1\}$, $F_2 = \{q_1 + 1, \ldots, q_1 + q_2\}$, $\ldots$, $F_n = \{1 + \sum_{i=1}^{n-1} q_i, \ldots, \sum_{i=1}^{n} q_i\}$, $\ldots$, $F_m = \{q_1 + \cdots + q_{m-1} + 1, \ldots, q_1 + \cdots + q_m\}$, with $q_1 + q_2 + \cdots + q_m = N$ and $1 \le m \le N$.
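To make this partition bookkeeping concrete, the following Python sketch (an illustration added here, not part of the original analysis; the cluster sizes and adjacency pattern are hypothetical, and 0-based indices are used) builds the index sets from given sizes $q_n$ and a coupling matrix whose diagonal entries equal the negated off-diagonal row sums, as required above.
```python
import numpy as np

def build_partition(sizes):
    """Return index sets F_1, ..., F_m for given cluster sizes q_1, ..., q_m."""
    clusters, start = [], 0
    for q in sizes:
        clusters.append(list(range(start, start + q)))
        start += q
    return clusters

def diffusive_coupling(adjacency):
    """Coupling matrix with j_ij = a_ij for i != j and j_ii = -sum_{j != i} a_ij."""
    J = np.array(adjacency, dtype=float)
    np.fill_diagonal(J, 0.0)
    np.fill_diagonal(J, -J.sum(axis=1))
    return J

# Hypothetical example: V = 4 nodes split into M = 2 clusters of size 2 each.
print(build_partition([2, 2]))        # [[0, 1], [2, 3]]
adj = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0]]
J = diffusive_coupling(adj)
print(J.sum(axis=1))                  # all zeros: the diagonal cancels each row sum
```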
The following are some useful lemmas:
Lemma 1.
For any two n-dimensional vectors $\Lambda_1, \Lambda_2$, any matrix $H > 0 \in \mathbb{R}^{n \times n}$, and any scalar $\theta > 0$, the following inequality always holds:
$$2 \Lambda_1^T \Lambda_2 \le \theta \Lambda_1^T H \Lambda_1 + \theta^{-1} \Lambda_2^T H^{-1} \Lambda_2.$$
Lemma 2.
Suppose that a continuous, positive-definite function $V(t)$ satisfies
$$\dot{V}(t) \le -\alpha V^{\beta}(t), \quad t \ge t_0, \quad V(t_0) \ge 0,$$
where $\alpha \in \mathbb{R}^{+}$ and $0 < \beta < 1$. Then, the following inequality is satisfied by $V(t)$:
$$V^{1-\beta}(t) \le V^{1-\beta}(t_0) - \alpha (1-\beta)(t - t_0), \quad t_0 \le t \le T,$$
where $T$ denotes the settling time. It follows that $V(t) \equiv 0$ for $t \ge T$, with $T = t_0 + \dfrac{V^{1-\beta}(t_0)}{\alpha (1-\beta)}$.
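As a quick sanity check of this settling-time bound, the following Python sketch (illustrative only; the values of α, β, V(t₀) and the step size are arbitrary choices, not taken from the paper) integrates $\dot{V} = -\alpha V^{\beta}$ with a forward-Euler scheme and compares the time at which $V$ reaches zero with the bound of Lemma 2.
```python
import numpy as np

alpha, beta = 2.0, 0.5            # hypothetical constants: alpha > 0, 0 < beta < 1
V0, t0, dt = 4.0, 0.0, 1e-5       # hypothetical initial value, start time, step size

T_bound = t0 + V0 ** (1 - beta) / (alpha * (1 - beta))   # settling time from Lemma 2

V, t = V0, t0
while V > 0.0:
    V = max(V - dt * alpha * V ** beta, 0.0)   # Euler step of dV/dt = -alpha * V**beta
    t += dt

print(f"numerical settling time ~ {t:.3f}, Lemma 2 bound = {T_bound:.3f}")
```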
Lemma 3.
For any positive-definite matrix $D \in \mathbb{R}^{n \times n}$, scalar $\delta \in \mathbb{R}^{+}$, and any function $\varphi : [0, \delta] \to \mathbb{R}^{n}$, the following inequality holds:
$$\left( \int_{0}^{\delta} \varphi(s)\, ds \right)^{T} D \left( \int_{0}^{\delta} \varphi(s)\, ds \right) \le \delta \int_{0}^{\delta} \varphi^{T}(s) D \varphi(s)\, ds.$$
Lemma 4.
For any N-dimensional vectors $c_1, c_2, \ldots, c_N$ and positive real numbers $x, y$ such that $x > y > 0$, the following condition is true:
$$\sum_{i=1}^{N} \|c_i\|^{y} \ge \left( \sum_{i=1}^{N} \|c_i\|^{x} \right)^{y/x}.$$
Remark 1.
Recently, there has been an increase in the number of studies concerning the NN synchronization problem. The analyzed NN models are classified into two types: with time delays [36,37] and without time delays [38,39,40]. Note that the evolution of model states in delayed NNs depends on both the current and prior states, unlike in NNs without time delays. This makes the investigation of delayed NNs more practical for real-world applications, but with more complex theoretical analyses. One of the most important and complex areas is the analysis of NNs with mixed delays, which include mixtures of state delays, coupling delays and distributed delays [41,42,43]. Furthermore, as a subset of delayed NNs, neutral-type CNNs have been employed in mechatronics and communication areas.

2.2. Model Formulation

Consider the CCVNNs with mixed delays consisting of N nodes, as follows:
$$\begin{cases} \dfrac{d\phi_i(t)}{dt} = E \dot{\phi}_i(t - \sigma(t)) - D \phi_i(t) + A \hat{f}(\phi_i(t)) + B \hat{k}(\phi_i(t - \tau(t))) + C \displaystyle\int_{t-\tau}^{t} \hat{h}(\phi_i(\zeta))\, d\zeta + \displaystyle\sum_{j=1}^{N} l_{ij} \Gamma \phi_j(t - \tau(t)) + u_i(t), & t > 0, \\ \phi_i(s) = \vartheta_i(s), & s \in [-\tau, 0], \quad i = 1, 2, \ldots, N, \end{cases}$$
where $\phi_i(t) = (\phi_{i1}(t), \phi_{i2}(t), \ldots, \phi_{in}(t))^T$ and $u_i(t)$ are the state and the controller input of the $i$th subnetwork with $n$ neurons, respectively; $\hat{f}(\phi_i(t)) = (\hat{f}_1(\phi_{i1}(t)), \hat{f}_2(\phi_{i2}(t)), \ldots, \hat{f}_n(\phi_{in}(t)))^T$, $\hat{k}(\phi_i(t - \tau(t))) = (\hat{k}_1(\phi_{i1}(t - \tau(t))), \hat{k}_2(\phi_{i2}(t - \tau(t))), \ldots, \hat{k}_n(\phi_{in}(t - \tau(t))))^T$ and $\hat{h}(\phi_i(\zeta)) = (\hat{h}_1(\phi_{i1}(\zeta)), \hat{h}_2(\phi_{i2}(\zeta)), \ldots, \hat{h}_n(\phi_{in}(\zeta)))^T$ are the complex-valued neuron activation functions; $D = \mathrm{diag}(d_1, d_2, \ldots, d_n) \in \mathbb{R}^{n \times n}$ with $d_k > 0$ ($k = 1, 2, \ldots, n$) is the self-feedback connection matrix of the $i$th subnetwork; $A = (a_{pq}) \in \mathbb{C}^{n \times n}$, $B = (b_{pq}) \in \mathbb{C}^{n \times n}$ and $C = (c_{pq}) \in \mathbb{C}^{n \times n}$ denote the connection, delayed connection and distributed delayed connection weight matrices, respectively, $p, q = 1, 2, \ldots, n$; $\sigma(t)$, $\tau(t)$, $\tau$ are the neutral-type time delay, coupling delay and distributed delay, respectively, and they satisfy $\tau = \sup_{t \ge t_0}\{\sigma(t), \tau(t)\}$, $t \ge t_0$, $\tau \in \mathbb{R}^{+}$; $L = (l_{ij})_{N \times N}$ denotes the outer coupling configuration matrix, which satisfies $l_{ij} = q_{ij}$ ($i \neq j$) and $l_{ii} = -\sum_{j=1, j \neq i}^{N} q_{ij}$; $\Gamma = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_n)$ is the positive diagonal inner coupling matrix; $u_i(t) = (u_{i1}, u_{i2}, \ldots, u_{in})^T$ denotes the control input to be designed later, for all $t \ge 0$, $i, j = 1, 2, \ldots, N$.
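For readers who wish to experiment with model (2), the following Python sketch gives a minimal forward-Euler simulation of a single node, under simplifying assumptions that are not part of the paper: the neutral term is dropped (E = 0), the delays are constant, a single tanh-type split activation stands in for f̂, k̂ and ĥ, and the weight matrices are randomly generated placeholders.
```python
import numpy as np

# Minimal forward-Euler sketch of one node of model (2); simplifications are
# noted above and in the comments, so this is an illustration, not the paper's setup.
rng = np.random.default_rng(0)
n, dt, tau, T = 2, 1e-3, 0.5, 10.0
steps, d = int(T / dt), int(tau / dt)

D = np.diag([0.1, 0.1])                                   # placeholder self-feedback
A = 0.5 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
B = 0.5 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
C = 0.5 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

act = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)    # split-type activation

phi = np.zeros((steps + 1, n), dtype=complex)
phi[: d + 1] = rng.normal(size=n) + 1j * rng.normal(size=n)   # constant history

for k in range(d, steps):
    distributed = dt * act(phi[k - d : k + 1]).sum(axis=0)    # crude quadrature of the integral term
    dphi = -D @ phi[k] + A @ act(phi[k]) + B @ act(phi[k - d]) + C @ distributed
    phi[k + 1] = phi[k] + dt * dphi

print(np.abs(phi[-1]))   # magnitude of the final complex-valued state
```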
The following assumption is necessary throughout this study:
Assumption 1.
The nonlinear continuous activation functions $\hat{f}(\phi)$, $\hat{k}(\phi(t - \tau(t)))$ and $\hat{h}(\phi)$ can be decomposed into real and imaginary parts, namely:
$$\hat{f}(\phi) = \hat{f}^{R}(\phi^{R}, \phi^{I}) + i \hat{f}^{I}(\phi^{R}, \phi^{I}), \quad \hat{k}(\phi) = \hat{k}^{R}(\phi^{R}, \phi^{I}) + i \hat{k}^{I}(\phi^{R}, \phi^{I}), \quad \hat{h}(\phi) = \hat{h}^{R}(\phi^{R}, \phi^{I}) + i \hat{h}^{I}(\phi^{R}, \phi^{I}),$$
where $\hat{f}^{R}, \hat{f}^{I}$, $\hat{k}^{R}, \hat{k}^{I}$ and $\hat{h}^{R}, \hat{h}^{I}$ are the real and imaginary parts of $\hat{f}(\phi)$, $\hat{k}(\phi(t - \tau(t)))$ and $\hat{h}(\phi)$, and all are real-valued continuous functions.
Assumption 2.
For any vectors ϕ 1 ( t ) , ϕ 2 ( t ) R , it is assumed that the real and imaginary parts of the complex-valued activation function f ^ q ( · ) are able to satisfy
$$|\hat{f}_q^{R}(\phi_1(t))| \le L_q^{R}, \quad |\hat{f}_q^{I}(\phi_1(t))| \le L_q^{I},$$
$$|\hat{f}_q^{R}(\phi_1(t)) - \hat{f}_q^{R}(\phi_2(t))| \le K_q^{R} |\phi_1(t) - \phi_2(t)|, \quad |\hat{f}_q^{I}(\phi_1(t)) - \hat{f}_q^{I}(\phi_2(t))| \le K_q^{I} |\phi_1(t) - \phi_2(t)|,$$
where f ^ q R ( · ) , f ^ q I ( · ) are the real and imaginary parts of f ^ q ( · ) , and L q R , L q I , K q R , K q I are positive constants.
Remark 2.
The real and imaginary components of a complex number often exhibit statistical correlation. It makes more sense to use a complex-valued model rather than a real-valued model when we are aware of how crucial phase and magnitude are to our learning purpose. Complex numbers are used in neural networks for two fundamental reasons:
(a)
In many applications, such as wireless communications or audio processing, where complex numbers occur naturally or intentionally, there is a correlation between the real and imaginary parts of the complex signal. For instance, the Fourier transform is a linear transformation: scaling a signal's magnitude in the time domain scales its magnitude in the frequency domain by the same factor, while a circular rotation (shift) of a signal in the time domain corresponds to a phase change in the frequency domain. This means that a complex number's real and imaginary parts become statistically correlated through the phase change.
(b)
Suppose the relevance of the magnitude and phase to the learning objective is known a priori. In that case, it makes more sense to use a complex-valued model because it imposes more constraints on the complex-valued model than a real-valued model would.
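Point (a) above can be verified directly with the discrete Fourier transform: circularly shifting a signal in the time domain leaves the spectral magnitudes unchanged and rotates the phases, which ties the real and imaginary parts of the spectrum together. A small NumPy check follows (the signal and shift are arbitrary choices for illustration):
```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)          # arbitrary real-valued test signal
shift = 5                        # arbitrary circular shift

X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, shift))

k = np.arange(x.size)
expected = X * np.exp(-2j * np.pi * k * shift / x.size)   # DFT shift theorem

print(np.allclose(np.abs(X), np.abs(X_shifted)))   # True: magnitudes unchanged
print(np.allclose(X_shifted, expected))            # True: only the phases rotate
```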
Assumption 3.
There exists a positive constant δ that satisfies the following condition:
$$\dot{\tau}(t) \le \delta < 1, \quad t \ge 0.$$
Assumption 4.
Assume that the elements of the outer coupling matrix $L = (l_{ij})_{N \times N}$ satisfy $\sum_{j = l_{mi}}^{l_{mx}} l_{ij} = 0$, where $l_{mi} = 1 + \sum_{i=1}^{h-1} r_i$, $l_{mx} = \sum_{i=1}^{h} r_i$, $i, j = 1, 2, \ldots, N$.
By assuming that system (2) is the response system, we can provide the accompanying driving system as follows:
$$\begin{cases} \dfrac{d\psi_h(t)}{dt} = E \dot{\psi}_h(t - \sigma(t)) - D \psi_h(t) + A \hat{f}(\psi_h(t)) + B \hat{k}(\psi_h(t - \tau(t))) + C \displaystyle\int_{t-\tau}^{t} \hat{h}(\psi_h(\zeta))\, d\zeta, & t > 0, \\ \psi_h(s) = \varphi_h(s), & s \in [-\tau, 0], \quad h = 1, 2, \ldots, g, \end{cases}$$
where $\psi_h(t) = (\psi_{h1}(t), \psi_{h2}(t), \ldots, \psi_{hn}(t))^T$ denotes the state vector of the drive system (3), and $D$, $A$, $B$, $C$ and $E$ are the self-feedback, connection, delayed connection, distributed delayed connection and neutral delayed connection matrices, respectively. In addition, $\sigma(t)$ and $\tau(t)$, respectively, denote the neutral time delay and the time-varying delay. There is no consensus between two clusters of distinct dimensions; therefore, the delays considered in (2) and (3) are identical.
Definition 1.
The neutral-type CVCNNs with cluster partition Ω can realize finite-time CS, if there exists a settling time t s > 0 , such that
$$\lim_{t \to t_s} \|\phi_i(t) - \psi_h(t)\| = 0, \quad \|\phi_i(t) - \psi_h(t)\| = 0 \ \text{for} \ t \ge t_s, \quad i \in \Omega_h, \quad h = 1, 2, \ldots, g.$$
Remark 3.
Finite-time control has received a lot of attention from an engineering application perspective. The convergence rate is a crucial variable to consider when evaluating the effectiveness and performance of the proposed control method [33,35]. For instance, the finite-time CS problem in complex dynamical networks has been addressed in [33]. However, not many studies focus on adaptive finite-time control methods for tackling the CS problem of neutral-type CVCNNs with mixed delays. In the following section, a new finite-time adaptive control scheme is presented to address this issue.

3. Main Results

The main outcome of this article is discussed in this section. To begin with, the coupling term $\sum_{j=1}^{N} l_{ij} \Gamma x_j(t - \tau_{ij}(t))$ meets Assumption 4. Moreover, the set $\Omega = \{\Omega_1, \Omega_2, \ldots, \Omega_g\}$ is a partition of the index set $\Delta = \{1, 2, \ldots, N\}$ if $\bigcup_{h=1}^{g} \Omega_h = \Delta$, $\Omega_h \neq \emptyset$, and $\Omega_u \cap \Omega_v = \emptyset$ for $u \neq v$, $u, v = 1, 2, \ldots, g$. Then, $\Omega_1 = \{1, 2, \ldots, \theta_1\}$, $\Omega_2 = \{\theta_1 + 1, \ldots, \theta_1 + \theta_2\}$, $\ldots$, $\Omega_h = \{1 + \sum_{i=1}^{h-1} \theta_i, \ldots, \sum_{i=1}^{h} \theta_i\}$, $\ldots$, $\Omega_g = \{\theta_1 + \cdots + \theta_{g-1} + 1, \ldots, \theta_1 + \cdots + \theta_g\}$, with $\theta_1 + \theta_2 + \cdots + \theta_g = N$ and $1 \le g \le N$.
$$\sum_{j=1}^{N} l_{ij} \Gamma \phi_j(t - \tau(t)) = \sum_{h=1}^{g} \sum_{j \in \Omega_h} l_{ij} \Gamma \phi_j(t - \tau(t)) = \sum_{h=1}^{g} \sum_{j \in \Omega_h} l_{ij} \Gamma \xi_j(t - \tau(t)) + \sum_{h=1}^{g} \sum_{j \in \Omega_h} l_{ij} \Gamma \psi_h(t - \tau(t)) = \sum_{h=1}^{g} \sum_{j \in \Omega_h} l_{ij} \Gamma \xi_j(t - \tau(t)).$$
Thus, we consider the following error dynamics for systems (2) and (3)
$$\begin{cases} \dot{\xi}_i(t) = E \dot{\xi}_i(t - \tau(t)) - D \xi_i(t) + A \hat{f}(\xi_i(t)) + B \hat{k}(\xi_i(t - \tau(t))) + C \displaystyle\int_{t-\tau}^{t} \hat{h}(\xi_i(\zeta))\, d\zeta + \displaystyle\sum_{j=1}^{N} l_{ij} \Gamma \xi_j(t - \tau(t)) + u_i(t), \\ \xi_i(s) = \vartheta_i(s) - \varphi_h(s), \quad s \in [-\tau, 0], \quad i = 1, 2, \ldots, N, \end{cases}$$
where $\hat{f}(\xi_i(t)) = \hat{f}(\phi_i(t)) - \hat{f}(\psi_h(t))$, $\hat{k}(\xi_i(t - \tau(t))) = \hat{k}(\phi_i(t - \tau(t))) - \hat{k}(\psi_h(t - \tau(t)))$, and $\int_{t-\tau}^{t} \hat{h}(\xi_i(\zeta))\, d\zeta = \int_{t-\tau}^{t} [\hat{h}(\phi_i(\zeta)) - \hat{h}(\psi_h(\zeta))]\, d\zeta$, for $t \ge 0$, $i \in \Delta$. The complex-valued system (4) can be divided into its real and imaginary parts by utilizing Assumption 1, as follows:
ξ ˙ i R ( t ) = E R ξ ˙ i R ( t τ ( t ) ) + E I ξ ˙ i I ( t τ ( t ) ) D R ξ i R ( t ) + A R f ^ R ( ξ i ( t ) ) A I f ^ I ( ξ i ( t ) ) + B R K ^ R ( ξ i ( t τ ( t ) ) ) B I K ^ I ( ξ i ( t τ ( t ) ) ) + C R t τ t h ^ R ξ i ( ζ ) d ζ C I t τ t h ^ I ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ R ( t τ ( t ) + u R ( t ) , ξ ˙ i I ( t ) = E I ξ ˙ i R ( t τ ( t ) ) + E R ξ ˙ i I ( t τ ( t ) ) D R ξ i I ( t ) + A R f ^ I ( ξ i ( t ) ) + A I f ^ R ( ξ i ( t ) ) + B R K ^ I ( ξ ( t τ ( t ) ) ) + B I K ^ R ( ξ ( t τ ( t ) ) ) + C R t τ t h ^ I ξ i ( ζ ) d ζ + C I t τ t h ^ R ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ I ( t τ ( t ) + u I ( t ) , ξ i R ( s ) = ϑ i R ( s ) φ h R ( s ) , ξ i I ( s ) = ϑ i I ( s ) φ h I ( s ) , s [ τ , 0 ] , i = 1 , 2 , , N ,
where $\xi_i^R(t) = (\xi_{i1}^R(t), \xi_{i2}^R(t), \ldots, \xi_{in}^R(t))^T$, $\xi_i^I(t) = (\xi_{i1}^I(t), \xi_{i2}^I(t), \ldots, \xi_{in}^I(t))^T$, $E = E^R + i E^I = (e_{pq}^R)_{n \times n} + i (e_{pq}^I)_{n \times n}$, $A = A^R + i A^I = (a_{pq}^R)_{n \times n} + i (a_{pq}^I)_{n \times n}$, $B = B^R + i B^I = (b_{pq}^R)_{n \times n} + i (b_{pq}^I)_{n \times n}$, $C = C^R + i C^I = (c_{pq}^R)_{n \times n} + i (c_{pq}^I)_{n \times n}$, and $u(t) = u^R(t) + i u^I(t)$. The finite-time CS problem of (2) and (3) is solved when the state of system (5) is finite-time stable.
Based on the error system (5), an improved finite-time adaptive controller for CS is designed as follows:
u i R ( t ) = π i R ( t ) ξ i R ( t ) , u i I ( t ) = π i I ( t ) ξ i I ( t ) , π ˙ i R ( t ) = ϑ T ( ς , π i R ( t ) ) ξ i R T E R ξ ˙ i ( t τ ( t ) ) + π i R ( t ) ξ R T ( t ) ξ R ( t ) + ξ R T ( t ) ϖ R ξ i R ( t ) + j = 1 N ξ R T ( t ) l i j Δ ξ j R ( t τ ( t ) ) ϑ ˜ R ( ς ˜ , ϑ R ( ς , π i R ( t ) ) ) | | ξ i R ( t ) | | ς ˜ ς s i g n ( π i R ( t ) ) + t τ ( t ) t ξ R ( θ ) P ξ R ( θ ) d θ 0.5 + ξ h R τ 0 t + s t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ d s 0.5 , π ˙ i I ( t ) = ϑ I ( ς , π i I ( t ) ) ξ i I T E I ξ ˙ i I ( t τ ( t ) ) + π i I ( t ) ξ I T ( t ) ξ I ( t ) + ξ I T ( t ) ϖ I ξ i I ( t ) + j = 1 N ξ I T ( t ) l i j Δ ξ j I ( t τ ( t ) ) ϑ ˜ I ( ς ˜ , ϑ I ( ς , π i I ( t ) ) ) | | ξ i I ( t ) | | ς ˜ ς s i g n ( π i I ( t ) ) + t τ ( t ) t ξ I ( θ ) P 1 ξ I ( θ ) d θ 0.5 + ξ h I τ 0 t + s t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ d s 0.5 ,
where ϕ , P and P 1 are positive–definite matrices, ς and ς ˜ are positive constants. ϑ R ( ς ˜ , π i R ( t ) ) = ς ˜ π i R ( t ) , π i R ( t ) < 0 ϑ ˜ ( ς , ϑ ( ς , π i R ( t ) ) ) = ς ϑ R ( ς ˜ , π i R ( t ) ) , i , j V ϑ I ( ς ˜ , π i I ( t ) ) = ς ˜ π i I ( t ) , π i I ( t ) < 0 ϑ ˜ ( ς , ϑ ( ς , π i I ( t ) ) ) = ς ϑ I ( ς ˜ , π i I ( t ) ) , i , j V .
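The adaptive law (6) contains several correction terms; the Python fragment below is only a schematic sketch of its core structure, namely the state feedback $u_i = -\pi_i \xi_i$ applied separately to the real and imaginary parts, with gains driven by the squared error norm. It omits the neutral, coupling and integral correction terms of (6), and the step size and gain ω are placeholders, so this is an illustration rather than the controller analyzed in the paper.
```python
import numpy as np

def adaptive_feedback(xi, gains, omega=1.0, dt=1e-3):
    """One step of a simplified adaptive law for node i.

    xi    : complex synchronization error xi_i = phi_i - psi_h
    gains : (pi_R, pi_I), current adaptive gains for the real/imaginary parts
    Only the skeleton of (6) is kept: u_i = -pi_i * xi_i with gains driven by
    the squared error norm; the remaining correction terms of (6) are omitted.
    """
    pi_R, pi_I = gains
    u = -(pi_R * xi.real + 1j * pi_I * xi.imag)      # state-feedback control input
    pi_R += dt * omega * float(xi.real @ xi.real)    # gain adaptation (real part)
    pi_I += dt * omega * float(xi.imag @ xi.imag)    # gain adaptation (imaginary part)
    return u, (pi_R, pi_I)

# Hypothetical usage on a 2-D complex error vector
u, gains = adaptive_feedback(np.array([0.3 + 0.1j, -0.2 + 0.4j]), (1.0, 1.0))
```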
Remark 4.
Complex-valued neural networks have proven useful in domains where the representation of data is complex by nature or design. Most CVNN research has focused on shallow designs and specific signal processing tasks, such as channel equalization. One reason for this is the difficulties associated with training. This is due to the limitation that the complex-valued activation is not complex differentiable and bounded at the same time [44,45]. Several studies have suggested that the condition that a complex-valued activation must be simultaneously bounded and complex differentiable need not be satisfied, and propose activations that are differentiable independently of the real and imaginary components. This remains an open area of research.
Theorem 1.
Suppose that Assumptions 1–4 hold and that the adaptive control scheme (6) is applied to system (2). If there exist a positive diagonal matrix $\phi$ and symmetric positive-definite matrices $P$, $P_1$ such that
$$-\lambda_{\min}(D^R) + \frac{1}{2}\lambda_{\max}(A^R A^{R\,T}) - \frac{1}{2}\lambda_{\min}(A^I A^{I\,T}) + \frac{1}{2}\lambda_{\max}(B^R B^{R\,T}) - \frac{1}{2}\lambda_{\min}(B^I B^{I\,T}) + \frac{1}{2}\lambda_{\max}(C^R C^{R\,T}) - \frac{1}{2}\lambda_{\min}(C^I C^{I\,T}) + \frac{1}{2}\lambda_{\max}(P) + \frac{\tau\, \xi_h^{I\,2}}{2} < \lambda_{\min}(\phi),$$
$$-\lambda_{\min}(D^I) + \frac{1}{2}\lambda_{\max}(A^R A^{R\,T}) + \frac{1}{2}\lambda_{\max}(A^I A^{I\,T}) + \frac{1}{2}\lambda_{\max}(B^R B^{R\,T}) + \frac{1}{2}\lambda_{\max}(B^I B^{I\,T}) + \frac{1}{2}\lambda_{\max}(C^R C^{R\,T}) + \frac{1}{2}\lambda_{\max}(C^I C^{I\,T}) + \frac{1}{2}\lambda_{\max}(P_1) + \frac{\tau\, \xi_h^{I\,2}}{2} < \lambda_{\min}(\phi),$$
$$\frac{1}{2}\xi_k^{R\,2} - \frac{1}{2}(1 - \bar{\tau})P < 0, \qquad \frac{1}{2}\xi_k^{I\,2} - \frac{1}{2}(1 - \bar{\tau})P_1 < 0,$$
then CS can be obtained in finite time.
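Conditions (7)–(9) reduce to scalar inequalities in extreme eigenvalues, so they can be checked numerically for candidate matrices. The NumPy helper below (an illustration; the matrices, scalars and the candidate value of λ_min(φ) are placeholders supplied by the user) evaluates the left-hand side of condition (7) and compares it with a candidate gain bound.
```python
import numpy as np

def lam_max(M):
    """Largest eigenvalue of M M^T."""
    return float(np.linalg.eigvalsh(M @ M.T).max())

def lam_min(M):
    """Smallest eigenvalue of M M^T."""
    return float(np.linalg.eigvalsh(M @ M.T).min())

def condition_7_lhs(DR, AR, AI, BR, BI, CR, CI, P, tau, xi_hI):
    """Left-hand side of condition (7); CS requires lhs < lambda_min(phi)."""
    return (-float(np.linalg.eigvalsh(DR).min())
            + 0.5 * (lam_max(AR) - lam_min(AI) + lam_max(BR) - lam_min(BI)
                     + lam_max(CR) - lam_min(CI))
            + 0.5 * float(np.linalg.eigvalsh(P).max())
            + 0.5 * tau * xi_hI ** 2)

# Placeholder check with identity matrices and arbitrary scalars.
I2 = np.eye(2)
lhs = condition_7_lhs(I2, I2, I2, I2, I2, I2, I2, I2, tau=1.0, xi_hI=1.0)
print(lhs, lhs < 10.0)   # compare against a candidate lambda_min(phi)
```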
Proof. 
A Lyapunov functional candidate is considered, as follows:
V ( t ) = V 1 ( t ) + V 2 ( t ) + V 3 ( t ) + V 4 ( t )
where,
$$\begin{aligned} V_1(t) &= \frac{1}{2}\sum_{i=1}^{N} \xi_i^{R\,T}(t)\, \xi_i^{R}(t) + \frac{1}{2}\sum_{i=1}^{N} \xi_i^{I\,T}(t)\, \xi_i^{I}(t), \\ V_2(t) &= \frac{1}{2}\sum_{i=1}^{N} \int_{t-\hat{\tau}(t)}^{t} \xi_i^{R\,T}(\theta)\, P\, \xi_i^{R}(\theta)\, d\theta + \frac{1}{2}\sum_{i=1}^{N} \int_{t-\hat{\tau}(t)}^{t} \xi_i^{I\,T}(\theta)\, P_1\, \xi_i^{I}(\theta)\, d\theta, \\ V_3(t) &= \frac{1}{2}\sum_{i=1}^{N} \int_{-\tau(t)}^{0} \int_{t+s}^{t} \xi_i^{R\,T}(\zeta)\, \xi_h^{T} \xi_h\, \xi_i^{R}(\zeta)\, d\zeta\, ds + \frac{1}{2}\sum_{i=1}^{N} \int_{-\tau(t)}^{0} \int_{t+s}^{t} \xi_i^{I\,T}(\zeta)\, \xi_h^{T} \xi_h\, \xi_i^{I}(\zeta)\, d\zeta\, ds, \\ V_4(t) &= \sum_{i=1}^{N} \frac{1}{2\omega}\, \pi_i^{R\,2}(t) + \sum_{i=1}^{N} \frac{1}{2\omega}\, \pi_i^{I\,2}(t). \end{aligned}$$
Differentiating V 1 ( t ) along the state trajectories of model (5) yields:
V ˙ 1 ( t ) = i = 1 N ξ i R T ( t ) ξ ˙ i R ( t ) + i = 1 N ξ i I T ( t ) ξ ˙ i I ( t ) , = i = 1 N ξ i R T ( t ) [ E R ξ ˙ i R ( t τ ( t ) ) E I ξ ˙ i I ( t τ ( t ) ) D R ξ i R ( t ) + A R f ^ R ξ i ( t ) A I f ^ I ξ i ( t ) + B R K ^ R ( ξ i ( t τ ( t ) ) ) B I K ^ I ( ξ i ( t τ ( t ) ) ) + C R t τ t h ^ R ξ i ( ζ ) d ζ C I t τ t h ^ I ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ i R ( t τ ( t ) ) + u R ( t ) ] + i = 1 N ξ i I T ( t ) [ E I ξ i R ( t τ ( t ) ) + E R ξ i I ( t τ ( t ) ) D R ξ i I ( t ) + A R r f ^ I ξ i ( t ) + A I f ^ R ξ i ( t ) + B R k ^ I ( ξ i ( t τ ( t ) ) ) + B I K ^ R ξ i ( t τ ( t ) ) + C R t τ t h ^ I ξ i ( ζ ) d ζ + C I t τ t h R ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ i I ( t τ ( t ) ) + u I ( t ) ]
The following inequalities can be deduced from Lemma 1 and Assumption 2:
i = 1 N ξ i R T ( t ) A R f ^ R ξ i R ( t ) 1 2 i = 1 N ξ i R T ( t ) A R A R T ξ i R ( t ) + i = 1 N ξ i R T ( t ) ξ f R 2 ξ i R ( t ) , i = 1 N ξ i R T ( t ) A I f ^ I ξ i I ( t ) 1 2 i = 1 N ξ i R T ( t ) A I A I T ξ i R ( t ) + i = 1 N ξ i I T ξ f I 2 ξ i 2 ( t ) ,
i = 1 N ξ i I T ( t ) A R f ^ I ξ i ( t ) 1 2 i = 1 N ξ i I T ( t ) A R A 2 R T ξ i I ( t ) + i = 1 N ξ i I T ( t ) ξ f I 2 ξ i I ( t ) , i = 1 N ξ i I T ( t ) A I f ^ R ξ i ( t ) 1 2 i = 1 N ξ i I T ( t ) A I A I T ξ i I ( t ) + i = 1 N ξ i R T ( t ) ξ f R 2 ξ i R ( t ) ,
i = 1 N ξ i R T ( t ) B R K ^ R ( ξ i ( t τ ( t ) ) ) 1 2 i = 1 N ξ i R T ( t ) B R B R T ξ i R ( t ) + i = 1 N ξ i R T ( t τ ( t ) ) ξ k R 2 ξ i R ( t τ ( t ) ) , i = 1 N ξ i R ( t ) B I K ^ I ( ξ ( t τ ( t ) ) ) 1 2 i = 1 N ξ i R T ( t ) B I B I T ξ i R ( t ) + i = 1 N ξ i I T ( t τ ( t ) ) ξ K I 2 ξ i I ( t τ ( t ) ) , i = 1 N ξ i I T ( t ) B R K ^ I ξ i ( t τ ( t ) ) 1 2 i = 1 N ξ i T ( t ) B R B R T ξ i I ( t ) + i = 1 N ξ i I T ( t τ ( t ) ) ξ k I 2 ξ i I ( t τ ( t ) ) , i = 1 N ξ i I T ( t ) B ¯ I k ^ R ξ i ( t τ ( t ) ) 1 2 i = 1 N ξ i I T ( t ) B T B I T ξ i I ( t ) + i = 1 N ξ i R T ( t τ ( t ) ) ξ k R 2 ξ i R ( t τ ( t ) ) ,
From Lemma 3, it follows that
i = 1 N ξ i R T ( t ) C R t τ t h ^ R ξ i ( ζ ) d ζ = 1 2 i = 1 N ξ i R T ( t ) C R t τ t h ^ R ξ i R T ( ζ ) d ζ + t τ t h ^ R T ξ i ( ζ ) d ζ C R T ξ i R T ( t ) 1 2 i = 1 N t τ t h ^ R T ξ i ( ζ ) d ζ t τ t h ^ R ξ i ( ζ ) d ζ + ξ i R T ( t ) C R C R T ξ i R ( t ) 1 2 i = 1 N t τ t h ^ R T ξ i ( ζ ) h ^ R ξ i ( ζ ) d ζ + ξ i R T ( t ) C R C R T ξ i R ( t ) 1 2 i = 1 N ξ h R 2 t τ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ + ξ i R T ( t ) C R C R T ξ i R ( t ) .
i = 1 N ξ i R T ( t ) C I t τ t h ^ I ξ i ( ζ ) d ζ 1 2 i = 1 N ξ h I 2 t τ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ + ξ i R T ( t ) C I C I T ξ i R ( t ) , i = 1 N ξ i I T ( t ) C R t τ t h ^ I ξ i ( ζ ) d ζ 1 2 i = 1 N ξ h I 2 t τ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ + ξ i I T ( t ) C R C R T ξ i I ( t ) , i = 1 N ξ i I T ( t ) C I t τ t h ^ R ξ i ( ζ ) d ζ 1 2 i = 1 N ξ h R 2 t τ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ + ξ i I T ( t ) C I C I T ξ i I ( t ) ,
Therefore,
V ˙ 1 ( t ) i = 1 N ξ i R T E R ( t ) ξ ˙ i R ( t τ ( t ) ) ξ i R T E I ( t ) ξ ˙ i I ( t τ ( t ) ) + ξ i I T E I ( t ) ξ ˙ i R ( t τ ( t ) ) + ξ i I T E R ( t ) ξ ˙ i I ( t τ ( t ) ) + i = 1 N ξ i R T ( t ) D R + 1 2 A R A R T + ξ f R 2 A I A I T + B R B R T B I B I T + C R C R T C I C I T + π i R ( t ) ξ i R ( t ) + ξ k R 2 i = 1 N ξ i R T ( t τ ( t ) ) ξ i R ( t τ ( t ) ) + ξ k I 2 i = 1 N ξ i I T ( t τ ( t ) ) ξ i I ( t τ ( t ) )
Then, the derivative of the Lyapunov term V 2 ( t ) and V 3 ( t ) is described as follows:
V ˙ 2 ( t ) = 1 2 i = 1 N ξ i R T ( t ) P ξ i R ( t ) 1 2 i = 1 N 1 τ ^ ˙ ( t ) ξ i R T t τ ^ ( t ) P ξ i R t τ ^ ( t ) + 1 2 i = 1 N ξ i I T ( t ) P 1 ξ i I ( t ) 1 2 i = 1 N 1 τ ^ ˙ ( t ) ξ i I T t τ ^ ( t ) P 1 ξ i I t τ ^ ( t ) , 1 2 i = 1 N ξ i R T ( t ) P ξ i R ( t ) 1 2 i = 1 N 1 τ ˜ ξ i R T t τ ^ ( t ) P ξ i R t τ ^ ( t ) + 1 2 i = 1 N ξ i I T ( t ) P 1 ξ i I ( t ) 1 2 i = 1 N 1 τ ˜ ξ i I T t τ ^ ( t ) P 1 ξ i I t τ ^ ( t ) ,
V ˙ 3 ( t ) 1 2 ξ h R 2 i = 1 N τ ξ i R T ( t ) ξ i R ( t ) ξ h R 2 2 i = 1 N t τ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ + 1 2 ξ h I 2 i = 1 N τ ξ i I T ( t ) ξ i I ( t ) ξ h I 2 2 i = 1 N t τ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ
and
V ˙ 4 ( t ) = i = 1 N 1 ω π i R ( t ) π ˙ i R ( t ) + i = 1 N 1 ω π i I ( t ) π ˙ i I ( t ) , i = 1 N ξ i R T ( t ) E R ξ ˙ i R t τ ( t ) π i R ( t ) ξ i R T ( t ) ξ i R ( t ) ξ i I T ( t ) E I ξ ˙ i I t τ ( t ) π i I ( t ) ξ i I T ( t ) ξ i I ( t ) ξ i R T ( t ) ϕ ξ i R ( t ) ξ i I T ( t ) ϕ ξ i I ( t ) j = 1 N ξ i R T ( t ) l i j Γ ξ j R t τ ( t ) ε ξ i R ( t ) ε ε π i R ( t ) j = 1 N ξ i I T ( t ) l i j Γ ξ j I t τ ( t ) ε ξ i I ( t ) ε ε π i I ( t ) ε t τ ( t ) t ξ i R T ( s ) P ξ i R ( s ) d s 1 2 ε t τ ( t ) t ξ i I T ( s ) P ξ i I ( s ) d s 1 2 ε ξ h R 2 τ 0 t + θ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ d θ 1 2 ε ξ h I 2 τ 0 t + θ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ d θ 1 2
According to Assumption 3, we have
V ˙ ( t ) ξ i R T ( t ) [ D R + 1 2 A R A R T 1 2 A I A I T + 1 2 B R B R T 1 2 B I B I T + 1 2 C R C R T 1 2 C I C I T + 1 2 P + τ ξ h I 2 2 ϕ ] ξ R ( t ) + ξ i I T ( t ) [ D I + 1 2 A R A R T + 1 2 A I A I T + 1 2 B R B R T + 1 2 B I B I T + 1 2 C R C R T + 1 2 C I C I T + 1 2 P 1 + τ ξ h I 2 2 ϕ ] ξ I ( t ) + i = 1 N ξ i R T ( t τ ( t ) ) 1 2 ξ k R 2 1 2 ( 1 τ ¯ P ) ξ i R ( t τ ( t ) ) + i = 1 N ξ i I T ( t τ ( t ) ) 1 2 ξ k I 2 1 2 ( 1 τ ¯ P 1 ) ξ i I ( t τ ( t ) ) i = 1 N ε t τ ( t ) t ξ i R T ( s ) P ξ i R ( s ) d s 1 2 i = 1 N ε t τ ( t ) t ξ i I T ( s ) P ξ i I ( s ) d s 1 2 i = 1 N ε ξ h R 2 τ 0 t + θ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ d θ 1 2 i = 1 N ε ξ h I 2 τ 0 t + θ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ d θ 1 2 i = 1 N ε ξ i I ( t ) i = 1 N ε ε π i I ( t ) i = 1 N ε ξ i I ( t ) i = 1 N ε ε π i I ( t ) .
Then,
V ˙ ( t ) ξ i R T ( t ) λ min ( D R ) + 1 2 λ max ( A R A R T ) 1 2 λ min ( A I A I T ) + 1 2 λ max ( B R B R T ) 1 2 λ min ( B I B I T ) + 1 2 λ max ( C R C R T ) 1 2 λ min ( C I C I T ) + 1 2 λ max ( P ) + τ ξ h I 2 2 λ min ( ϕ ) ξ R ( t ) + ξ i I T ( t ) λ min ( D I ) + 1 2 λ max ( A R A R T ) + 1 2 λ max ( A I A I T ) + 1 2 λ max ( B R B R T ) + 1 2 λ max ( B I B I T ) + 1 2 λ max ( C R C R T ) + 1 2 λ max ( C I C I T ) + 1 2 λ max ( P 1 ) + τ ξ h I 2 2 λ min ( ϕ ) ξ I ( t ) + i = 1 N ξ i R T ( t τ ( t ) ) 1 2 ξ k R 2 1 2 ( 1 τ ¯ P ) ξ i R ( t τ ( t ) ) + i = 1 N ξ i I T ( t τ ( t ) ) 1 2 ξ k I 2 1 2 ( 1 τ ¯ P 1 ) ξ i I ( t τ ( t ) ) i = 1 N ε t τ ( t ) t ξ i R T ( s ) P ξ i R ( s ) d s 1 2 i = 1 N ε t τ ( t ) t ξ i I T ( s ) P ξ i I ( s ) d s 1 2 i = 1 N ε ξ h R 2 τ 0 t + θ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ d θ 1 2 i = 1 N ε ξ h I 2 τ 0 t + θ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ d θ 1 2 i = 1 N ε ξ i I ( t ) i = 1 N ε ε π i I ( t ) i = 1 N ε ξ i I ( t ) i = 1 N ε ε π i I ( t ) .
Based on inequality conditions (7)–(9), we have
V ˙ ( t ) i = 1 N i = 1 N ε t τ ( t ) t ξ i R T ( s ) P ξ i R ( s ) d s 1 2 i = 1 N ε t τ ( t ) t ξ i I T ( s ) P ξ i I ( s ) d s 1 2 i = 1 N ε ξ h R τ 0 t + θ t ξ i R T ( ζ ) ξ i R ( ζ ) d ζ d θ 1 2 i = 1 N ε ξ h I τ 0 t + θ t ξ i I T ( ζ ) ξ i I ( ζ ) d ζ d θ 1 2 i = 1 N ε ( ξ i I ( t ) 2 ) 1 2 i = 1 N ε ε ( π i I ( t ) 2 ) 1 2 i = 1 N ε ( ξ i I ( t ) 2 ) 1 2 i = 1 N ε ε ( π i I ( t ) 2 ) 1 2 , 2 ε 1 2 i = 1 N ξ i R T ( t ) ξ i R ( t ) + 1 2 i = 1 N ξ i I T ( t ) ξ i I ( t ) + 1 2 i = 1 N t τ ^ ( t ) t ξ i R T ( θ ) P ξ i R ( θ ) d θ + 1 2 i = 1 N t τ ^ ( t ) t ξ i I T ( t ) P 1 ξ i I ( t ) d θ + 1 2 i = 1 N τ ( t ) 0 t + s t ξ i R T ( ζ ) ξ h T ξ h ξ i ( ζ ) d ζ d s + 1 2 i = 1 N τ ( t ) 0 t + s t ξ i I T ( ζ ) ξ h T ξ h ξ i ( ζ ) d ζ d s + i = 1 N 1 2 ω π i R 2 ( t ) + i = 1 N 1 2 ω π i I 2 ( t ) 1 2 , = 2 ε V ( t ) 1 2 .
From Lemma 2 and (22), the error converges to zero in a finite amount of time bounded by $T = t_0 + \frac{(2 V(t_0))^{1/2}}{\varepsilon}$. As such, using the formulated adaptive control scheme, the finite-time CS problem of neutral-type CVCNNs with mixed delays is solved. This concludes the proof.

4. Numerical Evaluation

To demonstrate the effectiveness of the aforementioned results, we consider the following numerical examples.
Case (i): A neutral-type CVCNN model with distributed delays is formed, where the i t h dynamics of the model is described as follows:
$$\begin{cases} \dfrac{d\phi_i(t)}{dt} = E \dot{\phi}_i(t - \sigma(t)) - D \phi_i(t) + A \hat{f}(\phi_i(t)) + B \hat{k}(\phi_i(t - \tau(t))) + C \displaystyle\int_{t-\tau}^{t} \hat{h}(\phi_i(\zeta))\, d\zeta + \displaystyle\sum_{j=1}^{N} l_{ij} \Gamma \phi_j(t - \tau(t)) + u_i(t), & t > 0, \\ \phi_i(s) = \vartheta_i(s), & s \in [-\tau, 0], \quad i = 1, 2, \ldots, N, \end{cases}$$
where $\phi_i(t) = [\phi_{1i}(t), \phi_{2i}(t)]^T \in \mathbb{C}^2$ is the state vector of the model, and its parameters are defined as
$$E = \begin{pmatrix} 0.5 + 0.5i & 0 \\ 0 & 0.5 + 0.5i \end{pmatrix}, \quad D = \begin{pmatrix} 0.1 + 0.6i & 0 \\ 0 & 0.1 + 0.6i \end{pmatrix}, \quad A = \begin{pmatrix} 1.2 + 1.0i & 0.1 + 3.2i \\ 0.5 + 0.1i & 0.2 + 0.1i \end{pmatrix},$$
$$B = \begin{pmatrix} 1 + 0.2i & 0.5 + 0.2i \\ 1.5 + 1.1i & 0.3 + 0.1i \end{pmatrix}, \quad C = \begin{pmatrix} 2.2 + 1.4i & 0.7 + 3.2i \\ 0.6 + 0.1i & 0.5 + 0.1i \end{pmatrix}.$$
The time-varying delay is chosen as $\tau(t) = e^t/(1 + e^t)$ with $\bar{\tau} = 1$, while the activation function is taken as $\hat{f}(\phi_i(t)) = \tanh(\phi_i(t))$, i.e., according to Assumptions 1–4, $\hat{f}(\phi_i(t)) = \tanh(\phi_i^R(t)) + i \tanh(\phi_i^I(t))$. Let $\xi_f^R = \xi_f^I = \xi_k^R = \xi_k^I = \xi_h^R = \xi_h^I = 1$, $\varsigma = 20$, $\tilde{\varsigma} = 5$, $\lambda_{\min}(D^R) = 1$, $\lambda_{\min}(D^I) = 1$, $\lambda_{\max}(A^R A^{R\,T}) = 1.7190$, $\lambda_{\max}(A^I A^{I\,T}) = 1.0181$, $\lambda_{\max}(B^R B^{R\,T}) = 3.5327$, $\lambda_{\max}(B^I B^{I\,T}) = 1.2685$, $\lambda_{\max}(C^R C^{R\,T}) = 5.8611$, $\lambda_{\max}(C^I C^{I\,T}) = 12.2173$, $\lambda_{\max}(E^R E^{R\,T}) = 0.0100$, $\lambda_{\max}(E^I E^{I\,T}) = 0.3600$, $\lambda_{\min}(A^R A^{R\,T}) = 0.0210$, $\lambda_{\min}(A^I A^{I\,T}) = 0.0119$, $\lambda_{\min}(B^R B^{R\,T}) = 0.0573$, $\lambda_{\min}(B^I B^{I\,T}) = 0.0315$, $\lambda_{\min}(C^R C^{R\,T}) = 0.0789$, $\lambda_{\min}(C^I C^{I\,T}) = 0.0027$, $\lambda_{\min}(E^R E^{R\,T}) = 0.0100$, $\lambda_{\min}(E^I E^{I\,T}) = 0.3600$. According to Theorem 1, the conditions are satisfied with the feasible solutions $741.3839 < \lambda_{\min}(\phi)$, $755.5942 < \lambda_{\min}(\phi_1)$, $\lambda_{\max}(P P^T) = 1.4872 \times 10^3$, and $\lambda_{\max}(P_1 P_1^T) = 1.4866 \times 10^3$. The outer coupling matrix of the given system is given as
$$(l_{ij})_{4 \times 4} = \begin{pmatrix} 2 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Furthermore, the drive system is described as
$$\begin{cases} \dfrac{d\psi_h(t)}{dt} = E \dot{\psi}_h(t - \sigma(t)) - D \psi_h(t) + A \hat{f}(\psi_h(t)) + B \hat{k}(\psi_h(t - \tau(t))) + C \displaystyle\int_{t-\tau}^{t} \hat{h}(\psi_h(\zeta))\, d\zeta, & t > 0, \\ \psi_h(s) = \varphi_h(s), & s \in [-\tau, 0], \quad h = 1, 2, \ldots, g, \end{cases}$$
where $\psi_h(t) = (\psi_{h1}(t), \psi_{h2}(t))^T$ is the state vector of the drive model, while the parameters of the drive system are considered the same as those of the response system in this example.
Then, by employing Assumption 1, the error model can be obtained as follows:
ξ ˙ i R ( t ) = E R ξ ˙ i R ( t τ ( t ) ) + E I ξ ˙ i I ( t τ ( t ) ) D R ξ i R ( t ) + A R f ^ R ( ξ i ( t ) ) A I f ^ I ( ξ i ( t ) ) + B R K ^ R ( ξ i ( t τ ( t ) ) ) B I K ^ I ( ξ i ( t τ ( t ) ) ) + C R t τ t h ^ R ξ i ( ζ ) d ζ C I t τ t h ^ I ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ R ( t τ ( t ) + u R ( t ) , ξ ˙ i I ( t ) = E I ξ ˙ i R ( t τ ( t ) ) + E R ξ ˙ i I ( t τ ( t ) ) D R ξ i I ( t ) + A R f ^ I ( ξ i ( t ) ) + A I f ^ R ( ξ i ( t ) ) + B R K ^ I ( ξ ( t τ ( t ) ) ) + B I K ^ R ( ξ ( t τ ( t ) ) ) + C R t τ t h ^ I ξ i ( ζ ) d ζ + C I t τ t h ^ R ξ i ( ζ ) d ζ + j = 1 N l i j Γ ξ I ( t τ ( t ) + u I ( t ) , ξ i R ( s ) = ϑ i R ( s ) φ h R ( s ) , ξ i I ( s ) = ϑ i I ( s ) φ h I ( s ) , s [ τ , 0 ] , i = 1 , 2 , , N ,
where $\xi_i^R(t) = (\xi_{i1}^R(t), \xi_{i2}^R(t), \ldots, \xi_{in}^R(t))^T$, $\xi_i^I(t) = (\xi_{i1}^I(t), \xi_{i2}^I(t), \ldots, \xi_{in}^I(t))^T$, $E = E^R + i E^I = (e_{pq}^R)_{n \times n} + i (e_{pq}^I)_{n \times n}$, $A = A^R + i A^I = (a_{pq}^R)_{n \times n} + i (a_{pq}^I)_{n \times n}$, $B = B^R + i B^I = (b_{pq}^R)_{n \times n} + i (b_{pq}^I)_{n \times n}$, $C = C^R + i C^I = (c_{pq}^R)_{n \times n} + i (c_{pq}^I)_{n \times n}$, and $u(t) = u^R(t) + i u^I(t)$.
If the state of the above-mentioned system is finite-time stable, then finite-time CS of (23) and (24) is solved.
$$E^R = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}, \quad D^R = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.1 \end{pmatrix}, \quad A^R = \begin{pmatrix} 1.2 & 0.1 \\ 0.5 & 0.2 \end{pmatrix}, \quad B^R = \begin{pmatrix} 1 & 0.5 \\ 1.5 & 0.3 \end{pmatrix},$$
$$E^I = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}, \quad D^I = \begin{pmatrix} 0.6 & 0 \\ 0 & 0.6 \end{pmatrix}, \quad A^I = \begin{pmatrix} 1.0 & 3.2 \\ 0.1 & 0.1 \end{pmatrix}, \quad B^I = \begin{pmatrix} 0.2 & 0.2 \\ 1.1 & 0.1 \end{pmatrix},$$
$$C^R = \begin{pmatrix} 2.2 & 0.7 \\ 0.6 & 0.5 \end{pmatrix}, \quad C^I = \begin{pmatrix} 1.4 & 3.2 \\ 0.1 & 0.1 \end{pmatrix}.$$
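The eigenvalue quantities quoted above can be recomputed directly from these real and imaginary parts. The short NumPy check below (an illustration added here) prints $\lambda_{\max}(MM^T)$ and $\lambda_{\min}(MM^T)$ for each weight matrix, so that the output can be compared with the values listed in the text.
```python
import numpy as np

A_R = np.array([[1.2, 0.1], [0.5, 0.2]]);  A_I = np.array([[1.0, 3.2], [0.1, 0.1]])
B_R = np.array([[1.0, 0.5], [1.5, 0.3]]);  B_I = np.array([[0.2, 0.2], [1.1, 0.1]])
C_R = np.array([[2.2, 0.7], [0.6, 0.5]]);  C_I = np.array([[1.4, 3.2], [0.1, 0.1]])

for name, M in [("A^R", A_R), ("A^I", A_I), ("B^R", B_R),
                ("B^I", B_I), ("C^R", C_R), ("C^I", C_I)]:
    eig = np.linalg.eigvalsh(M @ M.T)      # eigenvalues of the symmetric matrix M M^T
    print(f"lambda_max({name} {name}T) = {eig.max():.4f}, "
          f"lambda_min({name} {name}T) = {eig.min():.4f}")
```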
Figure 1, Figure 2, Figure 3 and Figure 4 illustrate that, under the adaptive controller, neurons in each cluster synchronize with their target neurons in a finite amount of time, while synchronization between different clusters does not occur. Figure 5 displays the trajectories of the CS errors.
Case (ii): We consider the state of model (23) in the three-dimensional complex domain, $\phi_i(t) \in \mathbb{C}^3$, that is, $\phi_i(t) = (\phi_{1i}(t), \phi_{2i}(t), \phi_{3i}(t))^T$. The system parameters must then be three-dimensional and are taken as
$$E = \begin{pmatrix} 0.5 + 0.5i & 0 & 0 \\ 0 & 0.5 + 0.5i & 0 \\ 0 & 0 & 0.5 + 0.5i \end{pmatrix}, \quad D = \begin{pmatrix} 0.1 + 0.6i & 0 & 0 \\ 0 & 0.1 + 0.6i & 0 \\ 0 & 0 & 0.1 + 0.6i \end{pmatrix},$$
$$A = \begin{pmatrix} 1.78 + 0.1i & 21 + 0.02i & 0.1 + 0.3i \\ 0.1 + 0.0i & 1.78 + 0.01i & 0.2 + 0.1i \\ 0.2 + 0.0i & 0 + 0.01i & 0.1 + 0.1i \end{pmatrix}, \quad B = \begin{pmatrix} 1.44 - 0.1i & 0.2 + 0.0i & 0.1 + 0.2i \\ 0.1 + 0.01i & 1.44 + 0.0i & 0.1 + 0.02i \\ 0.2 + 0.07i & 0.1 - 0.08i & 0.1 + 0.01i \end{pmatrix},$$
$$C = \begin{pmatrix} 2.2 + 1.4i & 0.7 + 3.2i & 0.1 + 0.02i \\ 0.6 + 0.1i & 0.5 + 0.1i & 0 + 0.01i \\ 0.01 + 0i & 0.02 + 0.1i & 0 + 0.02i \end{pmatrix}.$$
Therefore, according to Assumptions 1–4 and Theorem 1, the conditions are satisfied with the feasible solutions $770.7230 < \lambda_{\min}(\phi)$, $782.8778 < \lambda_{\min}(\phi_1)$, $\lambda_{\max}(P P^T) = 1.0990 \times 10^3$, and $\lambda_{\max}(P_1 P_1^T) = 1.0985 \times 10^3$. Figure 6 and Figure 7 illustrate that neurons in each cluster can synchronize with their target neurons in a finite amount of time under the adaptive controller, while synchronization between different clusters does not occur. Figure 8 and Figure 9 display the trajectories of the CS errors.
Remark 5.
Many researchers have made significant efforts to study delayed neural network systems, and many excellent publications have resulted from these efforts [46,47,48,49]. For instance, the authors of [46] investigated the stability of neutral-type Cohen–Grossberg neural networks with multiple time delays; a novel sufficient stability criterion was derived by utilizing a modified and enhanced version of a previously introduced Lyapunov functional. New stability problems for more general models of neutral-type neural network systems were investigated in [47,48]. In this study, new finite-time CS results for CCVNN models with a single neutral delay were obtained under adaptive control. The obtained results can further extend those in the existing literature [46,47,48,49]. In future research, the dynamics of coupled delayed CVNN models with multiple neutral delays and impulsive effects will be investigated.

5. Conclusions

In this study, we examined the issue of adaptive finite-time CS pertaining to neutral-type CVCNNs with mixed time delays. The relevant stability analysis is very challenging, since it takes into account a more general dynamic model of neutral-type CVCNNs with mixed time delays. A useful adaptive control scheme has been developed to address this challenging issue. Using the Lyapunov functional approach and linear matrix inequalities, the corresponding sufficient conditions have been obtained. The simulation results positively indicate the viability and validity of the proposed method. In future work, finite-time CS of neutral-type delayed CVCNNs with stochastic inputs and disturbances will be studied in detail.

Author Contributions

Funding acquisition, N.B. and A.J.; Conceptualization, N.B., S.R., P.M. and A.J.; Software, N.B., S.R., P.M. and A.J.; Formal analysis, N.B., S.R., P.M. and A.J.; Methodology, N.B., S.R., P.M. and A.J.; Supervision, C.P.L.; Writing—original draft, N.B., S.R., P.M. and A.J.; Validation, N.B., S.R., P.M. and A.J.; Writing—review and editing, N.B., S.R., P.M. and A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Rajamangala University of Technology Suvarnabhumi.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Manivannan, R.; Panda, S.; Chong, K.T.; Cao, J. An Arcak-type state estimation design for time-delayed static neural networks with leakage term based on unified criteria. Neural Netw. 2018, 106, 110–126. [Google Scholar] [CrossRef] [PubMed]
  2. Zhou, W.; Sun, Y.; Zhang, X.; Shi, P. Cluster synchronization of coupled neural networks with Lévy noise via event-triggered pinning control. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef]
  3. Ouyang, D.; Shao, J.; Jiang, H.; Wen, S.; Nguang, S.K. Finite-time stability of coupled impulsive neural networks with time-varying delays and saturating actuators. Neurocomputing 2021, 453, 590–598. [Google Scholar] [CrossRef]
  4. Zhang, K.; Zhang, H.G.; Cai, Y.; Su, R. Parallel optimal tracking control schemes for mode-dependent control of coupled Markov jump systems via integral RL method. IEEE Trans. Autom. Sci. Eng. 2019, 17, 1332–1342. [Google Scholar] [CrossRef]
  5. Qi, X.; Bao, H.; Cao, J. Synchronization criteria for quaternion-valued coupled neural networks with impulses. Neural Netw. 2020, 128, 150–157. [Google Scholar] [CrossRef]
  6. Jin, X.; Jiang, J.; Qin, J.; Zheng, W.X. Robust pinning constrained control and adaptive regulation of coupled Chua's circuit networks. IEEE Trans. Circuits Syst. I Regul. Pap. 2019, 66, 3928–3940. [Google Scholar] [CrossRef]
  7. Ahmad, B.; Alghanmi, M.; Alsaedi, A.; Nieto, J.J. Existence and uniqueness results for a nonlinear coupled system involving Caputo fractional derivatives with a new kind of coupled boundary conditions. Appl. Math. Lett. 2021, 116, 107018. [Google Scholar] [CrossRef]
  8. Bannenberg, M.W.; Ciccazzo, A.; Günther, M. Coupling of model order reduction and multirate techniques for coupled dynamical systems. Appl. Math. Lett. 2021, 112, 106780. [Google Scholar] [CrossRef]
  9. Li, N.; Zheng, W.X. Synchronization criteria for inertial memristor-based neural networks with linear coupling. Neural Netw. 2018, 106, 260–270. [Google Scholar] [CrossRef]
  10. Li, L.; Shi, X.; Liang, J. Synchronization of impulsive coupled complex-valued neural networks with delay: The matrix measure method. Neural Netw. 2019, 117, 285–294. [Google Scholar] [CrossRef]
  11. Tan, M.; Pan, Q. Global stability analysis of delayed complex-valued fractional-order coupled neural networks with nodes of different dimensions. Int. J. Mach. Learn. Cybern. 2019, 10, 897–912. [Google Scholar] [CrossRef]
  12. Huang, Y.; Hou, J.; Yang, E. Passivity and synchronization of coupled reaction-diffusion complex-valued memristive neural networks. Appl. Math. Comput. 2020, 379, 125271. [Google Scholar] [CrossRef]
  13. Feng, L.; Hu, C.; Yu, J.; Jiang, H.; Wen, S. Fixed-time Synchronization of Coupled Memristive Complex-valued Neural Networks. Chaos Solitons Fractals 2021, 148, 110993. [Google Scholar] [CrossRef]
  14. Benvenuto, N.; Piazza, F. On the complex backpropagation algorithm. IEEE Trans. Signal Process. 1992, 40, 967–969. [Google Scholar] [CrossRef]
  15. Nitta, T. Solving the XOR problem and the detection of symmetry using a single complex-valued neuron. Neural Netw. 2003, 16, 1101–1105. [Google Scholar] [CrossRef]
  16. Takeda, M.; Kishigami, T. Complex neural fields with a Hopfield-like energy function and an analogy to optical fields generated in phase-conjugate resonators. J. Opt. Soc. Am. 1992, 9, 2182–2191. [Google Scholar] [CrossRef]
  17. Jayanthi, N.; Santhakumari, R. Synchronization of time-varying time delayed neutral-type neural networks for finite-time in complex field. Math. Model. Comput. 2021, 8, 486–498. [Google Scholar] [CrossRef]
  18. Jayanthi, N.; Santhakumari, R. Synchronization of time invariant uncertain delayed neural networks in finite time via improved sliding mode control. Math. Model. Comput. 2021, 8, 228–240. [Google Scholar] [CrossRef]
  19. Zhang, W.W.; Zhang, H.; Cao, J.D.; Zhang, H.M.; Chen, D.Y. Synchronization of delayed fractional-order complex-valued neural networks with leakage delay. Phys. A 2020, 556, 124710. [Google Scholar] [CrossRef]
  20. Zhang, X.; Li, C.; He, Z. Cluster synchronization of delayed coupled neural networks: Delay-dependent distributed impulsive control. Neural Netw. 2021, 142, 34–43. [Google Scholar] [CrossRef]
  21. Gambuzza, L.V.; Frasca, M. A criterion for stability of cluster synchronization in networks with external equitable partitions. Automatica 2019, 100, 212–218. [Google Scholar] [CrossRef]
  22. Qin, J.; Fu, W.; Shi, Y.; Gao, H.; Kang, Y. Leader-following practical cluster synchronization for networks of generic linear systems: An event-based approach. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 215–224. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, P.; Zeng, Z.; Wang, J. Asymptotic and finite-time cluster synchronization of coupled fractional-order neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4956–4967. [Google Scholar] [CrossRef]
  24. Yang, S.; Hu, C.; Yu, J.; Jiang, H. Finite-time cluster synchronization in complex-variable networks with fractional-order and nonlinear coupling. Neural Netw. 2021, 135, 212–224. [Google Scholar] [CrossRef]
  25. Niamsup, P.; Rajchakit, M.; Rajchakit, G. Guaranteed cost control for switched recurrent neural networks with interval time-varying delay. J. Inequalities Appl. 2013, 2013, 292. [Google Scholar] [CrossRef]
  26. Rajchakit, G.; Sriraman, R.; Lim, C.P.; Unyong, B. Existence, uniqueness and global stability of Clifford-valued neutral-type neural networks with time delays. Math. Comput. Simul. 2022, 201, 508–527. [Google Scholar] [CrossRef]
  27. Rajchakit, M.; Niamsup, P.; Rajchakit, G. A switching rule for exponential stability of switched recurrent neural networks with interval time-varying delay. Adv. Differ. Equ. 2013, 2013, 44. [Google Scholar] [CrossRef]
  28. Sriraman, R.; Rajchakit, G.; Lim, C.P.; Chanthorn, P.; Samidurai, R. Discrete-time stochastic quaternion-valued neural networks with time delays: An asymptotic stability analysis. Symmetry 2020, 12, 936. [Google Scholar] [CrossRef]
  29. Ratchagit, K. Asymptotic stability of delay-difference system of Hopfield neural networks via matrix inequalities and application. Int. J. Neural Syst. 2007, 17, 425–430. [Google Scholar] [CrossRef]
  30. Rajchakit, G.; Sriraman, R.; Boonsatit, N.; Hammachukiattikul, P.; Lim, C.P.; Agarwal, P. Global exponential stability of Clifford-valued neural networks with time-varying delays and impulsive effects. Adv. Differ. Equ. 2021, 2021, 208. [Google Scholar] [CrossRef]
  31. Rajchakit, G.; Chanthorn, P.; Niezabitowski, M.; Raja, R.; Baleanu, D.; Pratap, A. Impulsive effects on stability and passivity analysis of memristor-based fractional-order competitive neural networks. Neurocomputing 2020, 417, 290–301. [Google Scholar] [CrossRef]
  32. Yu, T.; Cao, J.; Huang, C. Finite-time cluster synchronization of coupled dynamical systems with impulsive effects. Discret. Contin. Dyn. Syst. Ser. B 2021, 26, 3595. [Google Scholar] [CrossRef]
  33. Xiao, F.; Gan, Q.; Yuan, Q. Finite-time cluster synchronization for time-varying delayed complex dynamical networks via hybrid control. Adv. Differ. Equ. 2019, 2019, 93. [Google Scholar] [CrossRef]
  34. He, J.J.; Lin, Y.Q.; Ge, M.F.; Liang, C.D.; Ding, T.F.; Wang, L. Adaptive finite-time cluster synchronization of neutral-type coupled neural networks with mixed delays. Neurocomputing 2020, 384, 11–20. [Google Scholar] [CrossRef]
  35. Tang, R.; Yang, X.; Wan, X. Finite-time cluster synchronization for a class of fuzzy cellular neural networks via non-chattering quantized controllers. Neural Netw. 2019, 113, 79–90. [Google Scholar] [CrossRef] [PubMed]
  36. Ding, D.; Wang, Z.; Han, Q.L. Neural-network-based output-feedback control with stochastic communication protocols. Automatica 2019, 106, 221–229. [Google Scholar] [CrossRef]
  37. He, W.; Du, W.; Qian, F.; Cao, J. Synchronization analysis of heterogeneous dynamical networks. Neurocomputing 2013, 104, 146–154. [Google Scholar] [CrossRef]
  38. Wang, L.; Zeng, Z.; Ge, M.F. A disturbance rejection framework for finite time and fixed-time stabilization of delayed memristive neural networks. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 905–915. [Google Scholar] [CrossRef]
  39. Wu, H.; Li, R.; Zhang, X.; Yao, R. Adaptive finite-time complete periodic synchronization of memristive neural networks with time delays. Neural Process. Lett. 2015, 42, 563–583. [Google Scholar] [CrossRef]
  40. Yang, C.; Huang, L.; Cai, Z. Fixed-time synchronization of coupled memristor-based neural networks with time-varying delays. Neural Netw. 2019, 116, 101–109. [Google Scholar] [CrossRef]
  41. Wang, Z.; Liu, H.; Shen, B.; Alsaadi, F.E.; Dobaie, A.M. H state estimation for discrete-time stochastic memristive BAM neural networks with mixed time-delays. Int. J. Mach. Learn. Cybern. 2019, 10, 771–785. [Google Scholar] [CrossRef]
  42. Wang, K.; Teng, Z.; Jiang, H. Adaptive synchronization of neural networks with time-varying delay and distributed delay. Phys. A 2008, 387, 631–642. [Google Scholar] [CrossRef]
  43. Wu, H.; Zhang, X.; Li, R.; Yao, R. Finite-time synchronization of chaotic neural networks with mixed time-varying delays and stochastic disturbance. Memetic Comput. 2015, 7, 231–240. [Google Scholar] [CrossRef]
  44. Chanthorn, P.; Rajchakit, G.; Ramalingam, S.; Lim, C.P.; Ramachandran, R. Robust dissipativity analysis of Hopfield-type complex-valued neural networks with time-varying delays and linear fractional uncertainties. Mathematics 2020, 8, 595. [Google Scholar] [CrossRef]
  45. Chanthorn, P.; Rajchakit, G.; Humphries, U.; Kaewmesri, P.; Sriraman, R.; Lim, C.P. A delay-dividing approach to robust stability of uncertain stochastic complex-valued Hopfield delayed neural networks. Symmetry 2020, 12, 683. [Google Scholar] [CrossRef]
  46. Faydasicok, O. An improved Lyapunov functional with application to stability of Cohen–Grossberg neural networks of neutral-type with multiple delays. Neural Netw. 2020, 132, 532–539. [Google Scholar] [CrossRef]
  47. Arik, S. New criteria for stability of neutral-type neural networks with multiple time delays. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1504–1513. [Google Scholar] [CrossRef]
  48. Akca, H.; Covachev, V.; Covacheva, Z. Global asymptotic stability of Cohen-Grossberg neural networks of neutral type. J. Math. Sci. 2015, 205, 719–732. [Google Scholar] [CrossRef]
  49. Arik, S. An analysis of stability of neutral-type neural systems with constant time delays. J. Frankl. Inst. 2014, 351, 4949–4959. [Google Scholar] [CrossRef]
Figure 1. (a) The real parts of state trajectories of CVNNs (23) and (24) for h = 1 , i = 1 . (b) The real parts of state trajectories of CVNNs (23) and (24) for h = 1 , i = 2 .
Figure 2. (a) The real parts of state trajectories of CVNNs (23) and (24) for h = 2 , i = 3 . (b) The real parts of state trajectories of CVNNs (23) and (24) for h = 2 , i = 4 .
Figure 3. (a) The imaginary parts of state trajectories of CVNNs (23) and (24) for h = 1 , i = 1 . (b) The imaginary parts of state trajectories of CVNNs (23) and (24) for h = 1 , i = 2 .
Figure 4. (a) The imaginary parts of state trajectories of CVNNs (23) and (24) for h = 2 , i = 3 . (b) The imaginary parts of state trajectories of CVNNs (23) and (24) for h = 2 , i = 4 .
Figure 5. (a) The error state trajectories between CVNNs (23) and (24) of cluster one, i.e., ( h = 1 , i = 1 , 2 ) . (b) The error state trajectories between CVNNs (23) and (24) of cluster two, i.e., ( h = 2 , i = 3 , 4 ) .
Figure 6. The real and imaginary parts of state trajectories of cluster one, i.e., ( h = 1 , i = 1 , 2 ) of CVNNs (23) and (24).
Figure 7. The real and imaginary parts of state trajectories of cluster two, i.e., ( h = 2 , i = 3 , 4 ) of CVNNs (23) and (24).
Figure 8. The error state trajectories of cluster one, i.e., ( h = 1 , i = 1 , 2 ) between CVNNs (23) and (24).
Figure 9. The error state trajectories of cluster two, i.e., ( h = 2 , i = 3 , 4 ) between CVNNs (23) and (24).