Extended Stability and Control Strategies for Impulsive and Fractional Neural Networks: A Review of the Recent Results

Abstract: In recent years, cellular neural networks (CNNs) have become a popular apparatus for simulations in neuroscience, biology, medicine, computer science and engineering. In order to create more adequate models, researchers have considered memory effects, reaction–diffusion structures, impulsive perturbations, uncertain terms and fractional-order dynamics. The design, cellular aspects, functioning and behavioral aspects of such CNN models depend on efficient stability and control strategies. In many practical cases, the classical stability approaches are useless. Recently, in a series of papers, we have proposed several extended stability and control concepts that are more appropriate from the applied point of view. This paper is an overview of our main results and focuses on extended stability and control notions, including practical stability, stability with respect to sets and manifolds, and Lipschitz stability. We outline the recent progress in stability and control methods and provide diverse mechanisms that can be used by researchers in the field. The proposed stability techniques are presented through several types of impulsive and fractional-order CNN models. Examples are elaborated to demonstrate the feasibility of the different techniques.


Introduction
In the last 80 years, there has been tremendous activity and development in the formulation of artificial neural network (ANN) models as a framework in the study of human brain function, mental or behavioral phenomena and brain structural plasticity. Researchers have proposed numerous ANN models to imitate human brain structures and study crucial aspects of information processing that meet new requirements and challenges. For example, the authors in [1] used ANNs to model brain responses and investigate human brain function. In [2], an ANN approach has been applied to study the achievement of stable dynamics in neural circuits. The paper [3] is devoted to continual lifelong learning with neural networks. Well-established and emerging research motivated by lifelong learning factors such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration has been discussed. The authors in [4] investigated molecular transport in the human brain by physics-informed neural networks. All results cited above are very recent contributions in the area of modeling of the brain's neural networks and the study of their dynamics via ANNs. The book [5] provides a broad collection of articles that offer a comprehensive introduction to the world of the brain and different modeling methods, including neural networks.
Recently, ANNs have become advantageous tools applied in pattern recognition, decision making, classification, optimization and linear and nonlinear programming, and they have attracted the attention of researchers in biology, medicine, computer sciences, engineering sciences and business sciences. Hence, the research on artificial intelligence approaches is becoming increasingly important in numerous emerging areas of science, medicine and engineering [6,7].
One of the most popular and appropriate neural network approaches is that of cellular neural networks (CNNs) in the form of differential equations. In fact, mathematical models are essential in such applications. The explicit structure imposed by these models helps practitioners to overcome a major difficulty in the estimation of network connectivity from experiments: the fact that only some representative subsets of neurons can be measured simultaneously. The mathematical models lead to predictions of the effects of connections from all (even unmeasured) neurons.
CNNs are an important class of ANNs whose design was inspired by the design and functioning of the human brain and its components. The main feature of such a neural network model is that it is composed of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through their nearest neighbors. Introduced in 1988 by Chua and Yang [8,9], this class of information-processing systems processes signals in real time and has been widely used to study numerous phenomena in the learning and modeling of nonlinear and complex relationships.
In their attempts to create more realistic neural network models, researchers have also considered memory effects. Taking delay effects into account is essential to study how current patterns of neural activity impact future patterns of activity, which is a key point in the study of brain plasticity [3]. Delayed CNNs (DCNNs) have had remarkable success in modeling neuron and brain maturation processes. The properties of their units have been found to mimic the properties of real neurons in their functions, necessary for working memory and response inhibition [10]. In fact, time delay plays an important role in many applied dynamical systems, including the dynamical processes of neuronal maturation in the infant and adult brain, processes in which time delays critically affect the stable or unstable outcomes of the cell dynamics and neural circuit efficacy. Therefore, there has been enormous interest in the area of DCNNs among many researchers in the neurosciences, chemical technologies, population dynamics, biotechnologies, molecular sciences and robotics [11][12][13][14]. Delay effects are also important in numerous other applications of CNNs [15][16][17].
Another fruitful line of research has been considering reaction-diffusion terms in CNNs. Indeed, such terms can appropriately represent spatial growth and time. Moreover, diffusion phenomena arise naturally in various fields when working with neural network models. This is why various reaction-diffusion CNNs have been proposed as modeling approaches in neuroscience: to study the molecular transport in the human brain from magnetic resonance images [5], to study the human brain's development [18], to simulate the formation processes of dendritic spines, which show high plasticity and are related to learning [19], to understand the early developmental changes at the whole brain and regional levels [20], to model the relationship between structural and functional brain connectivity networks [21], to study synaptic plasticity [22] and much more [23][24][25].
The Cohen-Grossberg neural networks (CGNNs) introduced in 1983 [26] have also attracted considerable research interest, since they are advantageous in global pattern formation and partial memory storage [27][28][29][30]. This class of neural network models generalizes various CNNs, including Hopfield-type neural network models [31], and can develop composite dynamical behavior [32,33].
The Bidirectional Associative Memory (BAM) type of neural network is another group of CNNs whose design is inspired by the associative phenomena existing in the human brain [34][35][36]. It extends the single-layer auto-associative correlation to two-layer hetero-associative circuits [37], which is essential in numerous applied problems [38][39][40]. Because of this extension, it has been intensively studied.
The fractional-order modeling approach has attracted the attention of numerous investigators because of the universality it provides and the large scope of its application areas [41][42][43][44]. The fact that memory is integrated in fractional derivatives is the fundamental advantage of using fractional models [45]. Such memory is known as 'intrinsic memory'. It has been proven that fractional-order networks can provide a flexible framework in the study of deep brain stimulation processes [46]. Neural network models of fractional order are advantageous for understanding the rich dynamics of neuronal activity [47], which justifies research activities in the fractional-order formulation of ANN models, including results concerning biological neurons [48]. Hence, a comprehensive scientific field is focused on the study of ANNs with fractional-order dynamics and their applications [33,49–53].
Furthermore, the tool of impulsive neural networks has been extensively used in the accurate description of neuronal processes with short-term perturbations during their evolution [54][55][56][57][58][59]. The design and properties of such networks are based on the theory of impulsive differential equations [60][61][62][63]. Such systems are also adapted to impulsive control problems [64,65]. In fact, impulsive control arises naturally in a wide variety of applications [66][67][68][69][70][71][72]. The main advantage of impulsive control is the fact that it is applied only at some discrete times and can drastically reduce the amount of transmitted information and, hence, the control cost. Neuroscientists are also aware of the impulse control disorder phenomena [73]. Impulse control mechanisms are considered in models derived from brain activity [74]. Creating impulsive control architectures for neural network models opens up the possibility of successfully pursuing long-term goals despite short-term attacks and shocks.
The stability problem is one of the most important in the understanding of the internal mechanisms that stabilize and modulate neural activity [2,3]. Indeed, the study of stability has also inspired a broad range of neural network approaches. Stability and control methods provide diverse strategies to investigate the qualitative behavior of the neuronal states [75][76][77][78].
However, in many practical situations, classical stability concepts are useless. This motivates researchers to further expand the stability notions and introduce new stability concepts, such as practical stability, stability of sets, stability with respect to manifolds and Lipschitz stability. See, for example, [79] and the references therein. Such extended stability notions allow us to determine which movement mechanisms can support stability strategies that are acceptable from the practical point of view, where the classical strategies do not allow mathematically ideal stable behavior. As such, they have powerful practical applications in emerging areas such as biology, medicine, optimal control, mechanics, biotechnologies, economics, electronics, etc. [62].
This article reviews some of the authors' results and recent progress in the application of extended stability concepts to different classes of impulsive and fractional neural network models. This survey also provides a reference for further research on stability and control strategies for such models.
The remaining part of this paper is organized as follows. In Section 2, materials and methods are presented. Various classes of impulsive neural network models, including fractional-order cases, are given. Extended stability notions are justified. Section 3 is devoted to a review of the results on extended stability and control strategies for impulsive and fractional DCNNs. The significance of the extended criteria is discussed and demonstrated via examples. The effect of uncertain terms is also considered. Section 4 discusses open problems and future research directions on these topics. Finally, some conclusions are included in Section 5.
The following notations will be used: R_+ = [0, ∞); R denotes the set of all real numbers; R^n is the n-dimensional real space with the Euclidean norm ||x|| of x ∈ R^n.

Materials and Methods
The McCulloch-Pitts neuron, introduced by Warren McCulloch and Walter Pitts in 1943 [80], is considered the first mathematical model of a neural network. Since then, the attention paid to neural network models and their applications has greatly increased [6,7,81–83].

CNNs
CNNs form an important class of NNs that model human cognition using local real-time signal information processing [8,9]. Such NNs can be represented as

ẋ_i(t) = −c_i x_i(t) + Σ_{j=1}^{n} w_ij(t) f_j(x_j(t)) + I_i, i = 1, 2, . . ., n, (1)

where n represents the number of neurons, n ≥ 2; t ∈ R_+; x_i(t) represents the state of the i-th cell (neuron) at time t; ẋ_i(t) is the first-order derivative of x_i with respect to t, which represents the rate of change in the 'activation' x_i of neuron i with respect to time; f_j is the activation function of the j-th neuron at time t; w_ij are the weight coefficients, which are continuous functions; in general, c_i is a constant that represents the rate with which the i-th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; and I_i is an external input.
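As a minimal numerical illustration of dynamics of the form (1), the state of a small network with constant coefficients can be integrated by the forward Euler method. The weights, inputs and the tanh activation below are arbitrary illustrative choices, not parameters taken from the reviewed papers:

```python
import numpy as np

def simulate_cnn(c, W, I, f, x0, dt=0.01, steps=2000):
    """Forward-Euler integration of a CNN of the form (1):
    x_i' = -c_i x_i + sum_j w_ij f_j(x_j) + I_i."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-c * x + W @ f(x) + I)
    return x

# Two-cell example with a sigmoidal (tanh) activation
c = np.array([1.0, 1.0])
W = np.array([[0.2, 0.1], [0.1, 0.2]])
I = np.array([0.5, 0.5])
x_final = simulate_cnn(c, W, I, np.tanh, [0.0, 0.0])
```

Since the weight matrix is small in norm here, the trajectory settles to an equilibrium where the right-hand side of (1) vanishes.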

Hopfield NNs
The classical Hopfield-type neural networks originating from the model introduced in [31] can be considered as a specific case of CNNs for a particular choice of the constants c_i. Such NNs are given by the following system:

C_i ẋ_i(t) = −x_i(t)/R_i + Σ_{j=1}^{n} w_ij(t) f_j(x_j(t)) + I_i, (2)

with i = 1, 2, . . ., n, where n ≥ 2 denotes the number of nodes in the network; x(t) = (x_1(t), . . ., x_n(t))^T is the state vector at time t; the constants C_i > 0, R_i > 0 denote, respectively, the capacitance and the resistance for the node i; and the rest of the parameters are as in (1).
For the description of the next CNN models, in order to avoid repetition, we will use unified notation for the models' parameters.

CNNs with Delays
In numerous applications of neural networks, the state vectors of the designed models depend on stored information. In biological CNNs, delay describes the maturation process of neuronal cells. Hence, the structure of the current network cells depends on their maturity and on the natural evolution rate of the preceding generations. In fact, cell-intrinsic transcription factors are required to generate and promote the survival of newborn neurons [84]. Moreover, delay effects may be accompanied by oscillation, divergence or instability, which may be damaging to the neuronal system. In order to study how time delays affect the dynamics of a CNN model, DCNNs of various types have been proposed. One of the most investigated types is that of DCNNs with time-varying transmission delays, given by [85]

ẋ_i(t) = −c_i x_i(t) + Σ_{j=1}^{n} w_ij(t) f_j(x_j(t)) + Σ_{j=1}^{n} h_ij(t) g_j(x_j(t − τ_j(t))) + I_i, i = 1, 2, . . ., n, (3)

where t ∈ R_+; x_i(t) denotes the state of the i-th cell at time t; h_ij denotes the strength of the j-th unit on the i-th unit at time t − τ_j(t); g_j are activation functions that determine the output of the j-th node at time t − τ_j(t); and τ_j(t) are the transmission delays, 0 ≤ τ_j(t) ≤ τ.
In fact, due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, DCNNs have a spatial extent. This is why it is common to consider time-varying delays [86][87][88]. The particular case of constant delays τ_j is also studied [89]. Recently, there have been many findings on DCNNs in which the effects of constant delays, time-varying delays, distributed delays and bounded and unbounded delays have been investigated. Since distributed and unbounded delays are more realistic, there is extended interest in such DCNNs, which can be given as [90]

ẋ(t) = −a x(t) + b f(x(t)) + c ∫_{−∞}^{t} K(t − s) f(x(s)) ds, (4)

where x : R_+ → R; K : R_+ → R_+ is the delay kernel function; a, b, c are constants; and the activation function f : R → R.
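For a constant delay, a system of the form (3) can be simulated with a simple history buffer that stores past states on a uniform time grid. The parameters below are arbitrary illustrative values, not taken from the cited works:

```python
import numpy as np

def simulate_dcnn(c, W, H, I, f, g, tau, phi, dt=0.01, steps=3000):
    """Forward-Euler integration of a DCNN of the form (3) with a
    constant delay tau; phi(s) supplies the initial history on [-tau, 0]."""
    d = int(round(tau / dt))  # delay expressed in grid points
    hist = [np.array(phi(-tau + k * dt), dtype=float) for k in range(d + 1)]
    for _ in range(steps):
        x, x_del = hist[-1], hist[-1 - d]
        x_new = x + dt * (-c * x + W @ f(x) + H @ g(x_del) + I)
        hist.append(x_new)
    return hist[-1]

c = np.array([1.0, 1.0])
W = np.array([[0.1, 0.05], [0.05, 0.1]])
H = np.array([[0.1, 0.0], [0.0, 0.1]])
I = np.array([0.3, 0.3])
x_final = simulate_dcnn(c, W, H, I, np.tanh, np.tanh, tau=0.5,
                        phi=lambda s: [0.1, -0.1])
```

Because the instantaneous and delayed gains are small relative to the decay rates, the trajectory converges to an equilibrium at which the delayed and current states coincide.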

CNNs with Reaction-Diffusion Terms
In numerous applications of CNNs and DCNNs, the design and effective performance of the neural network model not only rely on the progression in time of the states but also depend essentially on the spatial location (area) [91]. In such applications, formulating models of the reaction-diffusion type and evaluating the effects of the reaction-diffusion parameters on the neural network dynamics is crucial.
A DCNN model with reaction-diffusion terms is given by [92,93]

∂u_i(t, x)/∂t = Σ_{q=1}^{n} ∂/∂x_q ( D_iq ∂u_i(t, x)/∂x_q ) − c_i u_i(t, x) + Σ_{j=1}^{n} w_ij f_j(u_j(t, x)) + Σ_{j=1}^{n} h_ij g_j(u_j(t − τ_j(t), x)) + I_i, (5)

where u_i(t, x) represents the state of the i-th neuron (cell) at time t ∈ (0, ∞) and space point x ∈ Ω; Ω ⊂ R^n is a bounded open set containing the origin with smooth boundary ∂Ω; ∂u_i(t, x)/∂t is the partial derivative of u_i(t, x) with respect to time t, which represents the rate of change in cell density with respect to time; and the continuous functions D_iq = D_iq(t, x) ≥ 0 correspond to the transmission diffusion coefficients along the i-th neuron, q = 1, 2, . . ., n.

Cohen-Grossberg DCNNs
The effects of time delays are also considered for the specific class of Cohen-Grossberg CNNs, represented by [26]

ẋ_i(t) = −a_i(x_i(t)) [ b_i(x_i(t)) − Σ_{j=1}^{n} w_ij f_j(x_j(t)) − Σ_{j=1}^{n} h_ij g_j(x_j(t − τ_j(t))) − I_i ], (6)

where t ≥ 0; a_i denote the amplification functions; and b_i correspond to appropriately behaved functions, i = 1, 2, . . ., n.
It is readily seen that some widely applicable NN models, such as CNNs and Hopfield neural networks, can be examined as particular cases of NNs of the Cohen-Grossberg type [31].

DCNNs with Reaction-Diffusion Terms of Cohen-Grossberg Type
The hybrid class of DCNNs with reaction-diffusion terms of Cohen-Grossberg type is also a major topic of interest because of the great opportunities for their applications in science, medicine and engineering. Such NN models can be given as

∂u_i(t, x)/∂t = Σ_{q=1}^{n} ∂/∂x_q ( D_iq ∂u_i(t, x)/∂x_q ) − a_i(u_i(t, x)) [ b_i(u_i(t, x)) − Σ_{j=1}^{m} w_ij f_j(u_j(t, x)) − Σ_{j=1}^{m} h_ij g_j(u_j(t − τ_j(t), x)) − I_i ], (7)

where i = 1, 2, . . ., m, t ≥ 0. For more detailed results on DCNNs with reaction-diffusion terms of the Cohen-Grossberg type and their possible applications, we refer to [97][98][99].

Bidirectional Associative Memory (BAM) Neural Networks
Numerous classes of BAM neural network models are also studied in the existing literature. BAM NNs are generalizations of single-layer CNNs that can store bipolar vector pairs. Such NNs are composed of neurons arranged in two layers, the X-layer and the Y-layer. A two-way associative search for stored bipolar vector pairs is performed by applying an iterative approach to the forward and backward information flows between the two layers [34][35][36][37][38]40,100].
As a generalization of the most applied BAM neural network models, we will present the Cohen-Grossberg-type BAM neural networks [101][102][103] given by

ẋ_i(t) = −a_i(x_i(t)) [ b_i(x_i(t)) − Σ_{j=1}^{m} w_ji f_j(y_j(t)) − Σ_{j=1}^{m} h_ji g_j(y_j(t − τ_j(t))) − I_i ],
ẏ_j(t) = −â_j(y_j(t)) [ b̂_j(y_j(t)) − Σ_{i=1}^{n} ŵ_ij f̂_i(x_i(t)) − Σ_{i=1}^{n} ĥ_ij ĝ_i(x_i(t − τ̂_i(t))) − J_j ], (8)

where t ≥ 0; x_i(t) and y_j(t) correspond to the states of the i-th neuron in the X-layer and the j-th neuron in the Y-layer, respectively, at time t; f_j, f̂_i, g_j, ĝ_i are activation functions; τ_j(t), τ̂_i(t) are interneuronal transmission delays, 0 < τ_j(t) < τ, 0 < τ̂_i(t) < τ; a_i, â_j denote amplification functions; b_i, b̂_j denote well-behaved functions; w_ji, ŵ_ij, h_ji, ĥ_ij are the connection weights; and I_i, J_j are external inputs, i = 1, 2, . . ., n, j = 1, 2, . . ., m.

Impulsive DCNNs
The impulsive control approach in neural network modeling addresses the case wherein short-term perturbations at some moments of time affect the dynamical behavior of the neuronal models. This approach is closely related to the brain plasticity-stability dilemma, which is also essential in other biological aspects [3,66,72–74]. By means of impulsive control neural networks, it is possible to analyze how impulses can be used to preserve the stability properties of the model or to design efficient impulsive controllers. Abrupt changes are very often caused by changes in the environment or external stimuli, or may be inherent to the system due to the cell-intrinsic potential for structural change [75]. Indeed, adding a mass of new cells to an already trained network can degrade the memory that has already been trained. In such a case, using external perturbations, the network can be returned to the trained state. Hence, with the development of impulsive control theory [64,65], increasing attention has been paid to the study of impulsive CNN models.
Let t_k, k = 1, 2, . . ., be the impulsive moments and satisfy 0 < t_1 < t_2 < . . ., lim_{k→∞} t_k = ∞. An impulsive control DCNN model can be represented as [62]

ẋ_i(t) = −c_i x_i(t) + Σ_{j=1}^{n} w_ij(t) f_j(x_j(t)) + Σ_{j=1}^{n} h_ij(t) g_j(x_j(t − τ_j(t))) + I_i, t ≠ t_k,
x_i(t_k^+) = x_i(t_k) + P_ik(x_i(t_k)), k = 1, 2, . . ., (9)

where i = 1, 2, . . ., n; t_k (k = 1, 2, . . .) are the instants of impulsive perturbation, at which the state x_i(t) of a neuronal cell shifts from the value x_i(t_k) = x_i(t_k^−) to the value x_i(t_k^+); and P_ik are functions that characterize the size of the impulsive control effects on the states x_i(t) at the moments t_k. A graph of the trajectory of a state x_i(t) is shown in Figure 1.
Figure 1. The trajectory of a state x_i(t) of the impulsive control model (9).
The control model (9) generalizes many existing DCNN models of type (3) to the impulsive case. It can be applied to impulsively control the behavior of the neurons in neuronal networks of type (3) using appropriate impulsive functions P_ik. Moreover, by adding impulsive controllers to the nodes in model (3), we can synchronize the trajectories of all nodes.
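Numerically, an impulsive model of type (9) is integrated piecewise: between the impulse instants the continuous dynamics run as usual, and at each t_k the jump x(t_k^+) = x(t_k) + P_k(x(t_k)) is applied. The following sketch uses a delay-free single cell with an arbitrary, hypothetical halving impulse, purely for illustration:

```python
import numpy as np

def simulate_impulsive(c, W, I, f, x0, t_k, p, dt=0.001, T=10.0):
    """Forward-Euler integration of an impulsive CNN: between the
    impulse instants t_k the state follows the continuous part of (9)
    (without delay terms here); at each t_k the jump
    x(t_k+) = x(t_k) + p(x(t_k)) is applied."""
    x = np.array(x0, dtype=float)
    impulses = sorted(t_k)
    t, j = 0.0, 0
    while t < T:
        x = x + dt * (-c * x + W @ f(x) + I)
        t += dt
        if j < len(impulses) and t >= impulses[j]:
            x = x + p(x)  # impulsive control jump at t_k
            j += 1
    return x

c = np.array([1.0]); W = np.array([[0.2]]); I = np.array([0.0])
p = lambda x: -0.5 * x  # hypothetical stabilizing impulse: halves the state
x_final = simulate_impulsive(c, W, I, np.tanh, [2.0], [1.0, 2.0, 3.0], p)
```

Here both the continuous dynamics and the impulses push the state toward the origin, so the trajectory decays rapidly.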
As a generalization of the above models, we will consider impulsive reaction-diffusion DCNNs of Cohen-Grossberg type, given as [104]

∂u_i(t, x)/∂t = Σ_{q=1}^{n} ∂/∂x_q ( D_iq ∂u_i(t, x)/∂x_q ) − a_i(u_i(t, x)) [ b_i(u_i(t, x)) − Σ_{j=1}^{m} w_ij f_j(u_j(t, x)) − Σ_{j=1}^{m} h_ij g_j(u_j(t − τ_j(t), x)) − I_i ], t ≠ t_k,
u_i(t_k^+, x) = u_i(t_k, x) + P_ik(u_i(t_k, x)), k = 1, 2, . . ., (10)

in which the points t_k, k = 1, 2, . . ., again represent the (impulsive) instants at which short-term perturbations shift the node u_i(t, x) from the level u_i(t_k^−, x) = u_i(t_k, x) toward the level u_i(t_k^+, x), and the functions P_ik(u_i(t, x)) determine the controlled outputs u_i(t_k^+, x), which measure the effects of the impulsive controllers on the states u_i(t, x) at the moments t_k.
The impulsive model (10) admits the use of a suitable impulsive control strategy for a class of reaction-diffusion delayed CNNs that appear naturally in a broad range of applications. For example, several therapeutic impulsive control strategies have been recommended for some recently developed epidemic models of great interest for the contemporary world [105][106][107].
In the absence of reaction-diffusion terms, the model (10) is reduced to an impulsive Cohen-Grossberg-type model [108].
For the DCNN model (10), a more general type of impulsive control, using impulsive jumps that are not performed at fixed instants, can also be studied [109]. In this case, the impulsive moments t_{l_k} occur when the integral surface of the solution u(t, x) meets hypersurfaces defined as

t = σ_k(u(t, x)), k = 1, 2, . . ., (11)

where σ_k are continuous functions and, in general, k = l_k does not hold [62,109]. Moreover, as a result of the features of short-term/impulsive perturbations at non-fixed moments of time, distinct neuronal states corresponding to distinct initial data may have distinct impulsive moments.
This makes the analysis of models with non-fixed impulsive perturbations more complicated due to the possibility of the loss of the autonomy property, bifurcation, "merging" of solutions, meeting one and the same hypersurface several or infinitely many times, etc. [110]. A suitable choice of the impulsive forces for such models is essential.

Fractional-Order Impulsive CNNs
The contemporary approaches in network theory have led to the introduction of more multiplex models. The fractional-order scheme, which is more relevant in the description of accurate CNNs, has received increased attention in recent years [41,44,47,48,50,52,53,63,72]. A variety of investigations have studied the long-term dependence of the current activities in neuronal states [45,46]. To better model long-term memory processes, numerous existing neural network models have been generalized to the fractional-order case.
Moreover, the influence of the fractional-order derivatives on the stability and synchronization performance of the neurons is recognized [52].
Let us consider a fractional-order delayed impulsive CNN of the type [63]

C_0 D_t^α x_i(t) = −c_i x_i(t) + Σ_{j=1}^{n} w_ij f_j(x_j(t)) + Σ_{j=1}^{n} h_ij g_j(x_j(t − τ_j(t))) + I_i, t ≠ t_k,
x_i(t_k^+) = x_i(t_k) + P_ik(x_i(t_k)), k = 1, 2, . . ., (12)

where C_0 D_t^α l(t) = (1/Γ(1 − α)) ∫_0^t (t − s)^{−α} l′(s) ds is the Caputo fractional derivative of order α, 0 < α < 1, with the lower limit 0 for a continuously differentiable function l, and Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt is the standard Gamma function [114].
Indeed, fractional-order derivatives of Caputo type are the most applied in modeling applications, since they have the advantage of handling initial conditions that are defined in a form similar to the one used in the case of integer-order derivatives. This situation is observed in most natural phenomena [115,116].
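To make the memory effect of the Caputo derivative concrete, a scalar equation D^α x = F(x) can be discretized with the explicit Grünwald–Letnikov scheme, a standard numerical technique (not the specific method of the cited works): every new state depends on the entire history through binomial weights. The test problem below is the hypothetical relaxation equation D^0.8 x = −x, x(0) = 1:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j),
    computed by the standard recurrence."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def fractional_euler(F, x0, alpha, h, steps):
    """Explicit Grunwald-Letnikov scheme for the scalar Caputo equation
    D^alpha x = F(x), x(0) = x0, applied to y = x - x0 so that the
    Grunwald-Letnikov and Caputo derivatives coincide (y(0) = 0)."""
    w = gl_weights(alpha, steps)
    y = [0.0]  # y_0 = x(0) - x0 = 0
    for n in range(1, steps + 1):
        memory = sum(w[j] * y[n - j] for j in range(1, n + 1))
        y.append(h**alpha * F(y[n - 1] + x0) - memory)
    return [v + x0 for v in y]

# D^0.8 x = -x, x(0) = 1: the solution x(t) = E_0.8(-t^0.8) decays
# more slowly than the integer-order exponential e^{-t}
traj = fractional_euler(lambda x: -x, 1.0, 0.8, h=0.01, steps=500)
```

The slow, heavy-tailed decay of the computed trajectory illustrates the 'intrinsic memory' discussed above.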
Another direction for generalizations of the model (12) is considering finite and infinite delays [117]. In this case, the corresponding impulsive control model will be represented as

C_0 D_t^α x_i(t) = −c_i x_i(t) + Σ_{j=1}^{n} w_ij f_j(x_j(t)) + Σ_{j=1}^{n} h_ij g_j(x_j(t − τ_j(t))) + Σ_{j=1}^{n} q_ij ∫_{−∞}^{t} K_j(t − s) g_j(x_j(s)) ds + I_i, t ≠ t_k,
x_i(t_k^+) = x_i(t_k) + P_ik(x_i(t_k)), k = 1, 2, . . ., (13)

where 0 ≤ τ_j(t) ≤ τ, j = 1, 2, . . ., n, are the finite transmission delays and the delay kernels K_j characterize the infinite (distributed) delays.

To reflect the fact that, in numerous neural network models, the activations depend both on time and on space, the model (12) can be generalized to a fractional-order model with reaction-diffusion terms, given as [118][119][120]

C_0 ∂_t^α u_i(t, x) = Σ_{q=1}^{n} ∂/∂x_q ( D_iq ∂u_i(t, x)/∂x_q ) − c_i u_i(t, x) + Σ_{j=1}^{m} w_ij f_j(u_j(t, x)) + Σ_{j=1}^{m} h_ij g_j(u_j(t − τ_j(t), x)) + I_i, t ≠ t_k,
u_i(t_k^+, x) = u_i(t_k, x) + P_ik(u_i(t_k, x)), k = 1, 2, . . ., (14)

where C_0 ∂_t^α denotes the Caputo fractional partial derivative of order α with respect to time [41,114].

Impulsive control fractional-order models of Cohen-Grossberg type [121,122] and impulsive BAM neural network models of fractional order, with and without a reaction-diffusion setting [123,124], can also be applied as extended fractional CNN models of natural processes influenced or controlled by short-lived forces at some points in time. Indeed, the great attention to and the huge number of existing publications on neural network models of the impulsive type with fractional dynamics are an indication of their considerable significance. Impulsive control fractional-order models of Cohen-Grossberg type are described by [121]

C_0 D_t^α x_i(t) = −a_i(x_i(t)) [ b_i(x_i(t)) − Σ_{j=1}^{n} w_ij f_j(x_j(t)) − Σ_{j=1}^{n} h_ij g_j(x_j(t − τ_j(t))) − I_i ], t ≠ t_k,
x_i(t_k^+) = x_i(t_k) + P_ik(x_i(t_k)), k = 1, 2, . . ., (15)

and fractional BAM reaction-diffusion neural network models are given as [123]

C_0 ∂_t^α u_i(t, x) = Σ_{q=1}^{n} ∂/∂x_q ( D_iq ∂u_i(t, x)/∂x_q ) − c_i u_i(t, x) + Σ_{j=1}^{l} w_ji f_j(v_j(t, x)) + Σ_{j=1}^{l} h_ji ∫_{−∞}^{t} K_ij(t − s) g_j(v_j(s, x)) ds + I_i, t ≠ t_k,
C_0 ∂_t^α v_j(t, x) = Σ_{q=1}^{n} ∂/∂x_q ( D̂_jq ∂v_j(t, x)/∂x_q ) − ĉ_j v_j(t, x) + Σ_{i=1}^{m} ŵ_ij f̂_i(u_i(t, x)) + Σ_{i=1}^{m} ĥ_ij ∫_{−∞}^{t} N_ji(t − s) ĝ_i(u_i(s, x)) ds + J_j, t ≠ t_k,
u_i(t_k^+, x) = u_i(t_k, x) + P_ik(u_i(t_k, x)), v_j(t_k^+, x) = v_j(t_k, x) + Q_jk(v_j(t_k, x)), k = 1, 2, . . ., (16)

where (u, v) ∈ R^m × R^l is the state vector; K_ij, N_ji are the delay kernels; P_ik, Q_jk determine the level of the states' changes at the impulsive moments t_k; the values u_i(t_k, x) = u_i(t_k^−, x) and u_i(t_k^+, x) correspond, respectively, to the states of the i-th neuron of the first layer before and after an impulsive jump at the moment t_k; and the values v_j(t_k, x) = v_j(t_k^−, x) and v_j(t_k^+, x) are, respectively, the states of the j-th neuron of the second layer before and after an impulsive perturbation at the moment t_k.
Fractional-order BAM neural network models can be efficiently used in many applied problems where the associative study of pairs of states designed in two layers, through iterating information back and forth between the layers, is fundamental. This class of CNNs is also a compelling tool for modeling in neuroscience. For example, the gene regulatory networks (GRNs) that model the regulation of gene expression in the process of managing molecular-level organisms are one particular class of BAM neural networks [72,125–127]. A fractional-order delayed impulsive GRN model studied in [128] has been given as

C_0 D_t^α m_i(t) = −a_i m_i(t) + Σ_{j=1}^{n} w_ij(t) f_j(p_j(t − τ̃_j(t))) + B_i(t), t ≠ t_k,
C_0 D_t^α p_i(t) = −c_i p_i(t) + d_i m_i(t − τ_i(t)), t ≠ t_k,
m_i(t_k^+) = m_i(t_k) + P_ik(m_i(t_k)), p_i(t_k^+) = p_i(t_k) + Q_ik(p_i(t_k)), k = 1, 2, . . ., (17)

with i = 1, 2, . . ., n, t ≥ 0, where the i-th mRNA state at time t is represented by m_i(t); the i-th protein's concentration at time t is represented by p_i(t); the degradation rates of the i-th mRNA and the i-th protein's molecule are denoted by the constants a_i, c_i ∈ R, respectively; d_i denote the translation rates; the regulatory function f_j, j = 1, 2, . . ., n, is of the particular Hill form, in which β_j denote positive constants and the Hill coefficients H_j are real constants; the function B_i(t) = Σ_{j∈I_i} b_ij(t) denotes the basal rate of the repressors of gene i over the set I_i of all its repressors j; the weight coefficients w_ij(t) are determined by b_ij(t) or by −b_ij(t), depending on whether j is an activator or a repressor of gene i; w_ij(t) = 0 only when there is no connection between the node j and the gene i; and the distinct functions τ_i(t) and τ̃_j(t) denote time-varying bounded delay functions for mRNA i and protein concentration j, respectively, i, j = 1, 2, . . ., n. The i-th mRNA state and the i-th protein's concentration at time t_k are given by the values m_i(t_k) = m_i(t_k^−) and p_i(t_k) = p_i(t_k^−); m_i(t_k^+) and p_i(t_k^+) correspond to the i-th mRNA state and the i-th protein concentration, respectively, at t_k^+, i.e., after an impulsive short-term effect on them at t_k; and the impulsive functions P_ik and Q_ik denote the amounts of the abrupt deviations from m_i(t) and p_i(t), respectively, at the point t_k, for k = 1, 2, . . . and i = 1, 2, . . ., n.
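The Hill form mentioned above can be made concrete as follows. This particular parameterization, f(x) = x^H / (β^H + x^H), is a common activator-type choice and is only illustrative; the exact form used in [128] may differ:

```python
def hill(x, beta, H):
    """Hill-type regulatory function f(x) = x^H / (beta^H + x^H):
    increases from 0 toward 1, with half-saturation at x = beta and
    steepness controlled by the Hill coefficient H."""
    return x**H / (beta**H + x**H)

# half-saturation at x = beta, near-saturation well above it
mid = hill(2.0, 2.0, 4)   # 0.5
high = hill(10.0, 2.0, 4)
```

A repressor-type nonlinearity is obtained as 1 − f(x); larger H makes the regulation more switch-like.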

Extended Stability Concepts
Stability is among the most essential questions in the investigation of neural networks. Hence, research on stability strategies that regulate the neuronal stability properties has attracted tremendous attention. Moreover, stability characteristics are closely related to synchronization and control issues [116,120,126].
For CNNs, the most applied stability concept in the existing literature is that of global asymptotic stability [7,29,59,79,85,89,92,98,127]. Despite the great possibilities of application, this concept is not comprehensive. In numerous particular problems, even if a CNN is globally asymptotically stable, it may be impractical in implementations because of some inappropriate features. Moreover, there are CNNs that are not asymptotically stable in the classical sense and whose behavior is nevertheless acceptable. For such cases, some extended stability concepts, which we will review below, are more appropriate.

Practical Stability
The specific conception of practical stability is distinct from classical asymptotic stability and, as a result of its benefits, it has been studied for various applied systems [62,79,129–133], including NNs [57,134–136]. This extended stability concept is useful in many applied models when the states' trajectories are contained within specific constraints during a fixed time interval. In such cases, the global asymptotic notion is not applicable. The practical stability strategy is also very efficient when a neuronal state is unstable in the classical sense and yet the state trajectories oscillate sufficiently near the desired state, so that its behavior is admissible, which does not imply stability or the convergence of trajectories. In addition, there are many applied systems that are stable or asymptotically stable in the classical sense, but are in fact meaningless in practice due to a small or inappropriate stability or attraction domain.
Let x(t) = x(t; 0, φ) be a solution of the impulsive DCNN model (9) corresponding to an initial function φ that is bounded and piecewise continuous on [−τ, 0], with points of jump discontinuity at which the one-sided limits exist and at which φ is continuous from the left. We will denote the norm of the function φ by ||φ||_τ = sup_{s∈[−τ,0]} ||φ(s)||. Let (λ, A) with 0 < λ < A be given.

Definition 1. The impulsive DCNN model (9) is said to be practically stable with respect to (λ, A) if ||φ||_τ < λ implies ||x(t; 0, φ)|| < A for t ≥ 0.

Definition 1 can also be adapted to other types of impulsive DCNNs. It again restates the noted circumstance that the concept of practical stability is fairly autonomous from the basic notion of asymptotic stability, and the two notions are neither presumptive of one another nor mutually exclusive [62,79,129–133]. In reality, practical stability can be achieved in a set time, and this is why it seems more appropriate for neural network models from an applied point of view [134][135][136]. This is due to the fact that practical stability requires the trajectories of the investigated model to be examined when the bounds on the initial conditions, and the region where the trajectories must remain as the independent variable evolves over a fixed interval, are set in advance.
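Practical stability with respect to a pair (λ, A) can be probed numerically by sampling initial data with norm below λ and checking that the resulting trajectories stay below A in norm on a fixed interval. The sketch below is only a heuristic illustration with made-up dynamics and impulses, not a proof technique:

```python
import numpy as np

def practically_stable(field, jump, t_k, lam, A, trials=20, dt=0.001, T=5.0):
    """Sample initial states with norm below lam and check that every
    trajectory of the impulsive system x' = field(x), x(t_k+) = jump(x(t_k))
    stays strictly below A in norm on [0, T]."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0, size=2)
        x *= rng.uniform(0.0, lam) / (np.linalg.norm(x) + 1e-12)
        t, j, imps = 0.0, 0, sorted(t_k)
        while t < T:
            x = x + dt * field(x)
            t += dt
            if j < len(imps) and t >= imps[j]:
                x, j = jump(x), j + 1
            if np.linalg.norm(x) >= A:
                return False  # trajectory left the admissible region
    return True

field = lambda x: -x + 0.1 * np.tanh(x)  # hypothetical contracting dynamics
jump = lambda x: 0.9 * x                 # hypothetical contracting impulses
ok = practically_stable(field, jump, [1.0, 2.0], lam=0.5, A=1.0)
```

Since both the continuous dynamics and the impulses shrink the norm here, every sampled trajectory stays inside the ball of radius A, and the check succeeds.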

Stability of Sets
One of the most important aspects of the stability theory of differential equations is the so-called stability of sets. It is an extension of the notion of stability of single trajectories, and is related to the study of the stability properties of a region of solutions. Such a concept is very appropriate in population dynamics and biology, including the stability of cell populations across different brain regions. Such an extended stability notion is also of ample interest for networks that can approach more than one state of interest.
The introduction of the stability of sets notion is related to the following question: how far can initial data be allowed to vary without destroying the stability properties established in the immediate proximity of the specific states? To address this question, scientists have proposed studying stable sets [137]. Some demonstrated applications of the stability of sets are delay systems [137], the biological control of invasive species [138], Rosenblatt processes [139], planar homeomorphisms [140], maneuver systems [141] and Kolmogorov-type systems that generalize various models studied in population dynamics [142].
Because of the considerable opportunities for the application of the extended stability of sets approach, it has been developed for Cohen-Grossberg impulsive DCNNs with reaction-diffusion terms of the type (10) [104].
Let M be a set, and introduce the set M(t, x) = {u ∈ R^m : (t, x, u) ∈ M}, (t, x) ∈ R_+ × Ω, and the set M_0(t, x) = {z ∈ R^m : (t, x, z) ∈ M}, (t, x) ∈ [−τ, 0] × Ω. The distance between a point u ∈ R^m and the set M(t, x) is given by d(u, M(t, x)) = inf_{v∈M(t,x)} ||u − v||.

Definition 2. The set M is said to be globally asymptotically stable with respect to the model (10) if, for any solution u(t, x) of (10), d(u(t, x), M(t, x)) → 0 as t → ∞, uniformly with respect to x ∈ Ω.
Other stability of sets notions for the system (10) are also introduced in [104]. These extended stability concepts can be applied to numerous types of impulsive DCNNs, such as BAM DCNNs and fractional-order DCNNs. As is seen from Definition 2, the stability of sets concept is more general than the stability of a single neuronal trajectory and includes, as special cases, the stability of steady states, the stability of periodic trajectories and the stability of integral manifolds or other manifolds that can represent regions of neuronal states.

Stability with Respect to Manifolds
Stability with respect to manifolds notions are specific cases of stability of sets concepts for particular sets. While, in stability of sets definitions, sets of a sufficiently general type contained in some domain are considered, stability of manifolds notions deal with sets defined by specific conditions, which is common in most biological DCNNs. Such stability notions are also known as conditional stability [79]. One of the most applied stability with respect to manifolds concepts treats manifolds defined by a function that describes specific constraints on the determination of the manifold [113,119]. The introduction of this notion is motivated by the evidence that the stability behavior of neurons depends on limitations or restrictions that can be represented by a function. Another stability with respect to manifolds notion is related to integral manifolds defined by the system trajectories [58,109].
For quantifying the stability of manifolds determined by constraints, a particular function h is defined on the extended phase space of the CNN with values in R^l, l ≤ n. For the stability of integral manifolds, the set (18) is defined as a set M in the extended phase space. For example, in order to study the stability with respect to a manifold of the impulsive fractional-order model with reaction-diffusion terms (14), a function h with values in R^l, l ≤ m, is defined in [119], together with the manifolds determined by it. The following definition is introduced in [119] for the stability of the trajectories of the CNN (14) with respect to the manifold M_t(m − l) determined by the function h.

Definition 3. The fractional-order DCNN model (14) is globally Mittag-Leffler stable with respect to the function h if, for ϕ_0 ∈ PC, there exists a constant η > 0 such that the values of h along the solutions admit a Mittag-Leffler-type estimate, where E_α, 0 < α < 1, is the corresponding Mittag-Leffler function, defined as E_α(z) = Σ_{k≥0} z^k / Γ(kα + 1), the function M(ϕ) is Lipschitz continuous with respect to ϕ ∈ PC, and M(ϕ) ≥ 0.
Note that the Mittag-Leffler stability concepts are extensions of the exponential stability notions to the fractional-order case [41,44,63,72,116,117,123,124].
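For readers implementing Mittag-Leffler-type estimates, the one-parameter function E_α(z) = Σ_{k≥0} z^k / Γ(kα + 1) can be evaluated by truncating its series. A minimal sketch follows; the truncation length is an illustrative choice, adequate only for moderate |z|.

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """One-parameter Mittag-Leffler function
    E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1),
    via series truncation (adequate for moderate |z| only; note that
    math.gamma overflows once alpha*k + 1 exceeds roughly 171)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

# sanity checks: E_1(z) = exp(z), and E_alpha(0) = 1
print(mittag_leffler(1.0, -2.0))  # close to exp(-2), about 0.1353
print(mittag_leffler(0.5, 0.0))   # 1.0
```

The first check reflects exactly the remark above: for α = 1, the Mittag-Leffler bound reduces to an exponential bound.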
The numerous application possibilities of the stability with respect to manifolds concepts foster their intensive development [123,143–147]. These extended stability notions have been recently applied in the study of the brain's stability behavior [148–150]. Indeed, fractional-order systems are intensively used in the study of biological phenomena [151]. For fractional-order DCNN models, the concept is combined with the Mittag-Leffler stability definitions, which are as important as the exponential stability definitions for the integer-order cases.

Practical Stability with Respect to Manifolds
The extended concept of practical stability with respect to manifolds has also received considerable attention recently and has been applied to different classes of impulsive control DCNNs; see, for example, [108,152,153] and the references therein. Such a stability concept is more appropriate for applied neuronal models, since it combines the generality provided by the manifolds stability concept with the flexibility of practical stability in cases when the exact asymptotic behavior of some characteristics is not essential.
If, in Definition 4, A = 0, then it is reduced to the global exponential stability with respect to the ĥ-manifold case.

Lipschitz Stability
The extended Lipschitz stability concept introduced in [154] is also appropriate for different applied models [155–158]. For linear systems, the notion of Lipschitz stability is equivalent to that of uniform stability, which is not the case for nonlinear systems [154,159]. The Lipschitz stability concept is adapted for impulsive control models in [62,79,159] and for fractional-order models in [160]. The problem of Lipschitz stability is particularly relevant in the neural network setting, because Lipschitz nonlinearity and Lipschitz continuity are typical for most neural network models [161].
The notion is applied to impulsive reaction-diffusion fractional Cohen-Grossberg-type neural network models for the first time in [162], where the investigated model, defined for m ≥ 2 neural units, is a generalization of the model (14).
For the above impulsive control DCNN model, the Lipschitz stability definition is given as follows.
Definition 5. The fractional impulsive reaction-diffusion CNN model (20) of Cohen-Grossberg type is globally uniformly Lipschitz stable if there exists a constant M > 0 such that, for any initial function ϕ_0 ∈ PC, the solutions satisfy ||u(t, x; 0, ϕ_0)||_2 ≤ M ||ϕ_0||_τ, where ||ϕ_0||_τ is the norm of the initial function ϕ_0 ∈ PC corresponding to the norm ||u(t, x; 0, ϕ_0)||_2.
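The defining bound, a single constant M relating every solution's norm to the norm of its initial data, can be probed numerically on simple test systems. The following is a hedged sketch on a scalar contractive system (not the model (20) itself), where M = 1 suffices.

```python
def simulate(x0, lam=1.0, dt=1e-3, T=5.0):
    """Forward-Euler trajectory of the scalar test system x' = -lam * x."""
    x, traj = x0, [x0]
    for _ in range(int(T / dt)):
        x += dt * (-lam * x)
        traj.append(x)
    return traj

# global uniform Lipschitz stability asks for ONE constant M with
# |x(t)| <= M * |x0| for all t and all initial data; here M = 1 works
for x0 in (0.1, -2.0, 5.0):
    traj = simulate(x0)
    assert max(abs(v) for v in traj) <= 1.0 * abs(x0) + 1e-12
print("Lipschitz bound M = 1 verified on sample trajectories")
```

The key point, visible in the loop over initial data, is uniformity: the same M must serve all initial functions, which for linear systems is what makes Lipschitz stability coincide with uniform stability.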

Lyapunov Approach
To establish the extended stability results, we adapt the Lyapunov approach and use appropriate Lyapunov functions [62,63,79,87]. In the case of DCNNs and impulsive CNNs, some modifications are proposed. More precisely, in the case of DCNNs, the Razumikhin technique is applied, which requires us to estimate the derivatives of the Lyapunov candidate functions only on specific sets of trajectories [62,137], and in the impulsive control case, piecewise continuous Lyapunov functions are considered [62]. For fractional-order systems, the fractional Lyapunov method is applied [63].
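As an illustration of the piecewise continuous Lyapunov technique, the following sketch checks numerically that a candidate V(x) = x^2 is non-increasing both along the flow and across impulse moments, for a toy scalar impulsive system; the system, jump factor and step sizes are illustrative choices, not taken from the cited works.

```python
def impulsive_trajectory(x0, lam=1.0, jump=0.5, dt=1e-3, T=3.0):
    """Euler simulation of x' = -lam*x with impulsive control
    x(t_k^+) = jump * x(t_k) at the integer moments t_k = 1, 2, ..."""
    x, samples = x0, [(0.0, x0)]
    for n in range(1, int(T / dt) + 1):
        x += dt * (-lam * x)
        t = n * dt
        # apply the impulse when t crosses an integer moment
        if abs(t - round(t)) < dt / 2 and round(t) >= 1:
            x *= jump
        samples.append((t, x))
    return samples

# a piecewise continuous Lyapunov candidate V(x) = x**2 should be
# non-increasing both along the flow and across the jumps
samples = impulsive_trajectory(2.0)
V = [x * x for _, x in samples]
assert all(V[i + 1] <= V[i] + 1e-12 for i in range(len(V) - 1))
print("V(x) = x^2 is non-increasing along the impulsive trajectory")
```

In the impulsive Lyapunov framework, it is exactly this pair of requirements (decay along the continuous flow, no increase at jump moments) that the criteria on the system's parameters and impulsive functions are designed to guarantee.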

Results
Using a Lyapunov-based analysis, numerous extended stability criteria have been proposed for different types of impulsive and fractional-order DCNNs. Most of the authors' earlier results are collected in [62,63,79]. After the publication of these books, many new impulsive neural network models and corresponding stability strategies were developed by the authors [57,58,72,104,108,109,113,119,123,128,162]. The primary aim of this review article is to present most of these recent results. The proposed results are in the form of bounds on the system's parameters, including synaptic weights and impulsive control functions. The practical meaning of the introduced criteria is that, if the system's parameters and impulsive controls are driven to these bounds, the applied models will exhibit the extended stability behavior.

Stability of Sets
To present some recent results related to the stability of sets concept, we consider the impulsive control reaction-diffusion DCNN of Cohen-Grossberg type (10). Since the model (10) is more general and includes several special cases of impulsive DCNNs, the established results are more comprehensive.
To guarantee the existence and uniqueness of solutions, the activation functions are considered to be bounded, the impulsive functions are continuous, and the neural network's parameters are assumed to satisfy the following assumptions [104].
A1. There exist constants a̲_i and ā_i such that, for the continuous amplification functions a_i, i = 1, 2, …, m, we have 1 < a̲_i ≤ a_i(ι) ≤ ā_i for ι ∈ R.
To define a set M of trajectories related to the model (10), we consider two constant solutions of the impulsive control model (10), denoted u_* ∈ R^m_+ and u^* ∈ R^m_+. Then, the following criteria for the stability with respect to the set M determined by these constant solutions are established in [104].
Under the assumptions A1–A4, if the system's parameters satisfy the corresponding bounds and the impulsive control functions P_ik are such that there exist constants γ_ik, 0 < γ_ik < 2, satisfying the impulsive conditions, then the set M is uniformly globally exponentially stable with respect to the reaction-diffusion impulsive Cohen-Grossberg DCNN (10).
The set M considered in the above result consists of all trajectories lying between two constant solutions and generalizes the single-state stability concept. A set M of a more general nature can also be considered. The proposed result and the extended stability concept are useful in cases wherein the consideration of attractors other than single steady states is essential.
The impulsive reaction-diffusion Cohen-Grossberg DCNN (10) is the relevant closed-loop system to the model (7), and can also be represented as a control system, where i = 1, 2, …, m, t ≥ 0, the added terms represent the control contribution, and δ(t) is the impulsive Dirac function (Figure 2). The addition of the controller v(t, x) = (v_1(t, x), …, v_n(t, x))^T leads to sudden changes in the neuronal states of (7) at the time moments t_k, due to which the states u_i(t, x) of the neuronal units momentarily shift from u_i(t_k, x) into the state u_i(t_k^+, x), where P_ik are the impulsive functions. Thus, the above result establishes a generic design method of the impulsive control strategy (24) for the impulse-free DCNN model (7). The constants γ_ik define the control sizes of the synchronizing impulses. Therefore, the proposed criteria can be used to design impulsive control techniques with which the trajectories of the impulsive neural networks (10) (including those from the set M) can be uniformly globally exponentially synchronized onto those of system (7).
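The role of the condition 0 < γ_ik < 2 can be illustrated on a toy scalar synchronization error: an impulse e(t_k^+) = (1 − γ)e(t_k) contracts the error exactly when |1 − γ| < 1. The following is a hedged sketch; the dynamics and parameter values are illustrative assumptions, not those of the model (10).

```python
def sync_error(gamma, lam=0.2, dt=1e-3, T=6.0):
    """Error e = u_response - u_master for a pair of scalar systems:
    e' = lam * e (unstable without control), with impulsive control
    e(t_k^+) = (1 - gamma) * e(t_k) at t_k = 1, 2, ...
    The condition 0 < gamma < 2 gives |1 - gamma| < 1, so each
    impulse shrinks the error."""
    e = 1.0
    for n in range(1, int(T / dt) + 1):
        e += dt * (lam * e)
        t = n * dt
        # impulse when t crosses an integer moment t_k >= 1
        if abs(t - round(t)) < dt / 2 and round(t) >= 1:
            e *= (1.0 - gamma)
    return abs(e)

print(sync_error(gamma=1.5))  # well below 1: impulses dominate the growth
print(sync_error(gamma=0.0))  # no control: the error grows
```

The design trade-off is visible here: the impulses must contract the error faster than the continuous dynamics can amplify it between impulse moments, which is what the parameter bounds in the cited criteria quantify.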

Stability with Respect to Manifolds
To demonstrate a stability with respect to h-manifolds result, we consider the DCNN of fractional order (14) with reaction-diffusion terms. In [119], an h-manifold stability result has been established as follows.
If, in addition to (21) and (22), the function h(t, u) satisfies a suitable growth condition for which a constant Λ(L) ≥ 1 exists for any 0 < H ≤ ∞, then the fractional-order DCNN model (14) is globally Mittag-Leffler stable with respect to the function h. When investigating the stability of a neural network, it is important to characterize the effects of uncertain terms. In fact, a real-world neural network system always involves uncertainties, which may give rise to the instability of the network trajectories [119,163–165].
To study how some uncertain parameters can affect the stability behavior of the model (14), we consider the next uncertain reaction-diffusion impulsive delayed neural network related to the system (14). In (26), the constants c̃_i ∈ R_+ and the real-valued functions w̃_ji, h̃_ij, f̃_j, g̃_j, P̃_ik, i, j = 1, 2, …, m, k = 1, 2, …, denote the uncertainties, and we set the corresponding suprema of the uncertain terms. To conduct a robust stability analysis of the model (26) with respect to the function h, we assume that there exist constants L̃_i > 0, M̃_i > 0, H̃1_i > 0, H̃2_i > 0 bounding the uncertain terms. The model (14) is called globally robustly Mittag-Leffler stable with respect to the function h if, for any ϕ_0 ∈ PC, any w̃_ji, h̃_ij, f̃_j, g̃_j, P̃_ik, i, j = 1, …, m, and any c̃_i ∈ R_+, i = 1, 2, …, m, the model (26) is globally Mittag-Leffler stable with respect to the function h.
The above result can be useful as a tool to measure the effects of dynamic changes in an uncertain environment on the stability behavior of neural networks.It can be also applied to predict instabilities in neural network systems with multiple sources of structural uncertain perturbations.
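One simple way to probe such robustness numerically is to sample bounded uncertain parameters and check decay for every realization. The following is a hedged Monte Carlo sketch on a scalar toy model; the bound 0.5, the dynamics and the names are illustrative assumptions, not the hypotheses of the cited result.

```python
import random

def decays(w_tilde, a=1.0, dt=1e-3, T=10.0):
    """Forward-Euler check of whether x' = -(a + w_tilde) * x contracts;
    w_tilde plays the role of a bounded uncertain perturbation of the
    self-feedback weight (names and bounds are illustrative)."""
    x = 1.0
    for _ in range(int(T / dt)):
        x += dt * (-(a + w_tilde) * x)
    return abs(x) < 1e-2

random.seed(0)
# sampled uncertainties with |w_tilde| <= 0.5 keep a + w_tilde >= 0.5 > 0,
# so every realization stays stable: a crude robust stability probe
samples = [random.uniform(-0.5, 0.5) for _ in range(200)]
print(all(decays(w) for w in samples))  # True
```

A perturbation that violates the bound, e.g. w_tilde = -1.5, makes the effective self-feedback positive and the decay check fails, mirroring how unbounded uncertainties can destabilize the network trajectories.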
Example 2 ([109]). Consider again the impulsive reaction-diffusion DCNN model (10) for n = m = 2, Ω = [0, 1] × [0, 2], with activation functions and delays as in Example 1, amplification functions a_i(u_i) and an impulsive control whose parameters are specified in [109]. If the remaining parameters satisfy the stability conditions and the impulsive controllers satisfy (22), then the manifold determined by a constant solution u^C = (u^C_1, u^C_2)^T of the considered model is globally exponentially stable. This guarantees the global exponential synchronization of the master system (without impulses) and the impulsive response system with respect to the manifold M. Hence, the demonstrated extended stability criteria could be used as impulsive synchronization criteria, which are useful for various applied phenomena when some classical stability concepts cannot be used. Therefore, the illustrated criteria clarify the understanding of impulses as suitable stability strategies on the dynamics of a neural network system.

Practical Stability with Respect to Manifolds
The concept of practical stability with respect to a manifold determined by a specific function has been applied in [108] to the BAM model (11) and to the particular case considering a single-layer correlation. For the particular case, the paper [108] studies the global practical exponential stability of the impulsive control DCNN model of Cohen-Grossberg type (30), where t > 0, i = 1, 2, …, n, σ_k : R^n → R, k = 1, 2, …, with respect to a function h : [−τ, ∞) × R^n → R, which defines a manifold of the type (19).
The global practical exponential stability criteria for the model (30) with respect to the function h consist of assumptions on the system's parameters, where a̲ = min_{1≤i≤n} a_i and ā = max_{1≤i≤n} a_i, together with the existence of a real positive constant Q such that ∑_{i=1}^n |I_i| < AQ for a given A > 0.
Example 3. As an example, let us consider the model (30) with n = 2 and I_1 = I_2 = 0.03. To give a better interpretation of the practical stability with respect to manifolds notion, the graph of the function h = |x_1| + |x_2| is shown in Figure 3. Instead of considering a single trajectory, the function h determines a manifold of trajectories whose stability behavior is of interest. Hence, this extended notion is applicable when we have to study the stability properties of a region of trajectories under some constraints defined by the function h. The practical stability of the neural network model (30) with respect to the manifold defined by h means that any neuronal state that starts close to the h-manifold oscillates around it and is bounded by a particular bound defined by the constant A, so that its behavior is admissible from the practical point of view.
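The distinction from asymptotic stability can be seen numerically. In the following sketch, a toy two-state system with constant inputs I_i = 0.03 (loosely echoing Example 3 but not reproducing its exact model), the manifold function h = |x_1| + |x_2| does not decay to zero but settles below a practical bound A.

```python
def h_along_trajectory(x0, I=(0.03, 0.03), a=1.0, dt=1e-3, T=20.0):
    """Euler simulation of x_i' = -a*x_i + I_i, recording the values of
    the manifold function h(x) = |x_1| + |x_2| along the trajectory."""
    x = list(x0)
    h_values = [abs(x[0]) + abs(x[1])]
    for _ in range(int(T / dt)):
        x = [xi + dt * (-a * xi + Ii) for xi, Ii in zip(x, I)]
        h_values.append(abs(x[0]) + abs(x[1]))
    return h_values

# the states settle near x_i = I_i / a = 0.03, so h approaches 0.06
# rather than 0: the stability is practical, quantified by a bound A
h_vals = h_along_trajectory((0.5, -0.4))
A = 0.1
print(h_vals[-1] <= A)  # True: h eventually stays below the bound A
```

Here, the constant inputs prevent convergence to the manifold h = 0 itself, yet the behavior is admissible in practice because h remains below A, which is exactly the content of the practical stability notion.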
To further demonstrate the feasibility of the extended practical stability with respect to manifolds impulsive control strategy, we apply it to the Cohen-Grossberg BAM delay CNN model (11), considering a function ĥ = ĥ(t, z) that defines a ĥ-manifold of the type (19). It is proven in [108] that, for a given A > 0, if there exists a positive constant Q such that the corresponding parameter bounds hold, where L_j, L̃_i, M_j, M̃_i are the Lipschitz constants of the activation functions, and the impulsive functions P_ik, Q_jk satisfy suitable conditions for i = 1, 2, …, n, j = 1, 2, …, m, k = 1, 2, …, then the system (11) is practically globally exponentially stable with respect to the function ĥ.
For A = 0, the global exponential stability of the impulsive BAM DCNN (11) has been studied in [113].
In Example 4, however, the condition (32) is not satisfied. Hence, drawing a conclusion about the practical stability behavior of the neuronal states of (11) is impossible using the results presented here. For example, for the particular choice of the model's parameters and impulsive conditions, and the manifold-defining function ĥ = x_1^2 + x_2^2 + y_1^2 + y_2^2, the unstable trajectory of x_2(t) with respect to ĥ is as illustrated in Figure 4. Examples 3 and 4 again illustrate the strength of the proposed criteria. These examples also demonstrate how the extended stability dynamics of the states of the considered Cohen-Grossberg BAM delayed CNNs may be controlled via suitable impulsive controllers. In addition, the opportunity for the extension of the stability concepts to fractional-order models is also demonstrated.

Lipschitz Stability
The extended notion of Lipschitz stability is gradually gaining popularity, especially in the fractional-order modeling approach [157,158,160]. Very recently, it has also been applied to some impulsive control fractional neural network models [162].
For the model (20), some global uniform Lipschitz stability criteria are proposed in [162] using appropriate Lyapunov-type functions. It is proven that the fractional reaction-diffusion neural network model (20) of Cohen-Grossberg type is globally uniformly Lipschitz stable if the impulsive control is designed as in (22) and (24), and at least one of the following criteria is satisfied for the model's parameters.

• There exists a continuous function β(t) for t ∈ R_+ satisfying the bound (35). As is seen, for β(t) = 0, t ∈ R_+, the inequality (35) reduces to (27). Hence, in this case, the Lipschitz stability can be considered as a generalization of the uniform stability of an integral manifold relevant to the model (20). For more detailed results, see [162].
Moreover, for α = 1, the established result may be applied to the impulsive control neural network model (10).
CNNs under impulsive disturbances are represented by two-component systems consisting of continuous and discrete components; they are actually hybrid models under different hypotheses on the dynamical characteristics of the two components. Thus, the impulsive control modeling paradigm bridges the gap between continuous and discrete CNN models and offers opportunities for refinements of applied neuronal systems.
The stability of neural network states is a key property necessary for the efficient application of a CNN model. In the presence of impulsive effects, it is crucial to analyze the effect of the impulsive perturbations on the stability behavior of the neural network model, and to determine how impulses can be used to manage the stability properties of a neural network system. External and internal short-term (impulsive) perturbations can also be appropriately used to create impulsive control strategies for the stability and synchronization of CNN models.
Parallel to the development of the impulsive control modeling approach, the variety of CNNs to which it is applied has also expanded. Recently, the classes of CNN models to which this approach has been applied include DCNNs, DCNNs with reaction-diffusion terms, DCNNs of Cohen-Grossberg type, BAM DCNNs, fractional-order DCNNs and some others. See, for example, [57,58,72,104,108,109,113,116–120,123,162] and the references therein.
Although a number of classical global asymptotic stability [29,59,79,85,89,92,98,127] results exist for some major classes of CNNs, this stability notion is not applicable in the cases where several steady states exist, or an entire region with neuronal states is considered, or the classical stability criteria are not met, but the stability performance of a CNN model is acceptable from an applied point of view.In such cases, some extended stability notions, such as practical stability, stability with respect to a set or Lipschitz stability, are of particular interest.In their recent research, the authors contributed to the development of these stability notions and enhanced the rigorous understanding of extended impulsive stability and control strategies for DCNN models, including models with fractional dynamics.
This paper has overviewed the research area of the application of extended stability criteria to some main classes of impulsive and fractional DCNNs.The main advantages of using extended stability strategies are mainly demonstrated via examples in terms of their implementation and usage in neural network models.
A chart of the existing results on extended stability strategies applied to impulsive and fractional DCNNs is provided in Table 1.
It is seen from Table 1 that some stability concepts have still not been applied to important classes of neural network models. Hence, the framework of extended stability strategies for impulsive control and fractional-order neural network models is far from complete and needs further development. The presented chart also emphasizes future research directions in this important topic. In conclusion, the presented impulsive control and fractional DCNN models provide excellent perspectives and tools that would be extremely useful, as they would provide a framework enabling control bioscientists to better design neural network models that are resilient in the face of impulsive shocks, fractional dynamics and uncertainty. In addition, the developed extended stability and control criteria for such DCNNs can be used by neuroscientists to determine how neurons respond to impulsive stimulation, and how such stimulation can be used to stabilize the model. Such results are also important in the process of the verification of DCNNs designed to demonstrate an optimal solution regardless of the initial data.

Conclusions
Neural network modeling is a vital research area with broad applications. This research is an overview of the authors' main results on impulsive control neural network modeling applied to different classes of DCNNs, including fractional-order models. It also presents the recent progress in the extended stability and control strategies applied to such neural network models. The proposed studies provide a better theoretical understanding of various types of impulsive control neural network systems, such as impulsive CNNs, impulsive DCNNs, impulsive BAM DCNNs, impulsive Hopfield neural networks, impulsive Cohen-Grossberg DCNNs, impulsive reaction-diffusion DCNNs and their fractional-order generalizations. The presented extended stability strategies can be used to understand the extent to which DCNNs change under the influence of internal or external factors (for instance, external forces). The developed analysis may lead to a real understanding of the modeling approach by identifying mechanisms responsible for stable model behavior.

Figure 4. The unstable trajectory of the state variable x_2(t) of the CNN in Example 4.

Table 1. Recent results on extended stability concepts applied to impulsive control DCNNs.