Open Access
*Mathematics* **2018**, *6*(9), 144; https://doi.org/10.3390/math6090144

Article

Stability Analysis of Cohen–Grossberg Neural Networks with Random Impulses

^{1} Department of Mathematics, Texas A&M University-Kingsville, Kingsville, TX 78363, USA

^{2} Florida Institute of Technology, Melbourne, FL 32901, USA

^{3} Faculty of Mathematics, Plovdiv University, Tzar Asen 24, 4000 Plovdiv, Bulgaria

^{4} School of Mathematics, Statistics and Applied Mathematics, National University of Ireland, H91 CF50 Galway, Ireland

^{*} Author to whom correspondence should be addressed.

Received: 27 July 2018 / Accepted: 17 August 2018 / Published: 21 August 2018

## Abstract

The Cohen–Grossberg neural network model is studied in the case when the neurons are subject to impulsive state displacements at random, exponentially distributed moments. Impulses of this type change the behavior of the solutions significantly: they become stochastic processes rather than deterministic functions. We examine the stability of the equilibrium of the model. Some sufficient conditions for mean-square exponential stability and mean exponential stability of the equilibrium of general neural networks are obtained for time-varying potentials (or voltages) of the cells, time-dependent amplification and behaved functions, time-varying strengths of connectivity between cells, and variable external bias or input from outside the network to the units. These sufficient conditions are expressed explicitly in terms of the parameters of the system and hence are easily verifiable. The theory relies on a modification of the direct Lyapunov method. We illustrate the theory on a particular nonlinear neural network.

Keywords: Cohen and Grossberg neural networks; random impulses; mean square stability

## 1. Introduction

Artificial neural networks are important technical tools for solving a variety of problems in various scientific disciplines. In 1983, Cohen and Grossberg [1] introduced and studied a new model of neural networks. This model has been studied extensively and applied in many different fields, such as associative memory, signal processing and optimization problems. Several authors generalized this model [2] by including delays [3,4], impulses at fixed points [5,6] and discontinuous activation functions [7]. Furthermore, a stochastic generalization of this model was studied in [8]. The included impulses model the presence of noise in artificial neural networks. Note that in some artificial neural networks, chaos improves the noise (see, for example, [9]).

To the best of our knowledge, only one published paper studies neural networks with impulses at random times [10]. However, in [10], random variables are incorrectly mixed with deterministic variables. For example, ${I}_{[{\xi}_{k},{\xi}_{k+1})}\left(t\right)$ for the random variables ${\xi}_{k},{\xi}_{k+1}$ is not a deterministic index function, but a stochastic process, and its expected value, labeled E, has to be taken into account on page 13 of [10]. In addition, one has to be careful in [10], since the expected value of a product of random variables equals the product of the expected values only for independent random variables. We define the generalization of the Cohen–Grossberg neural network with impulses at random times, briefly explain why the solutions are stochastic processes, and study stability properties. A brief overview of randomness in neural networks and some methods for their investigation are given in [11], where the models are stochastic ones. Impulsive perturbations are a common phenomenon in real-world systems, so it is also important to consider impulsive systems. Note that the stability of deterministic models with impulses for neural networks was studied in [12,13,14,15,16,17,18]. However, the occurrence of impulses at random times also needs to be considered in real-world systems. The stability problem for differential equations with impulses at random times was studied in [19,20,21]. In this paper, we study the general case of the time-varying potential (or voltage) of the cells, with time-dependent amplification and behaved functions, as well as time-varying strengths of connectivity between cells and variable external bias or input from outside the network to the units. The study is based on an application of the Lyapunov method. Using Lyapunov functions, some sufficient stability criteria are provided and illustrated with examples.

## 2. System Description

We consider the model proposed by Cohen and Grossberg [1] in the case when the neurons are subject to a certain impulsive state displacement at random moments.

Let ${T}_{0}\ge 0$ be a fixed point and the probability space ($\mathsf{\Omega},\mathcal{F},P$) be given. Let a sequence of independent exponentially-distributed random variables ${\left\{{\tau}_{k}\right\}}_{k=1}^{\infty}$ with the same parameter $\lambda >0$ defined on the sample space $\mathsf{\Omega}$ be given. Define the sequence of random variables ${\left\{{\xi}_{k}\right\}}_{k=0}^{\infty}$ by:

$${\xi}_{k}={T}_{0}+\sum _{i=1}^{k}{\tau}_{i},\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}k=0,1,2,\dots .$$

The random variable ${\tau}_{k}$ measures the waiting time of the k-th impulse after the $(k-1)$-th impulse occurs, and the random variable ${\xi}_{k}$ denotes the length of time until k impulses occur for $t\ge {T}_{0}$.

**Remark 1.** The random variable $\mathsf{\Xi}={\sum}_{i=1}^{k}{\tau}_{i}$ is Erlang distributed, and it has a pdf ${f}_{\mathsf{\Xi}}\left(t\right)=\lambda {e}^{-\lambda t}\frac{{\left(\lambda t\right)}^{k-1}}{(k-1)!}$ and a cdf $F\left(t\right)=P(\mathsf{\Xi}<t)=1-{e}^{-\lambda t}{\sum}_{j=0}^{k-1}\frac{{\left(\lambda t\right)}^{j}}{j!}$.
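The Erlang law in Remark 1 can be checked by direct simulation. The sketch below compares the empirical mean of ${\xi}_{k}$ with the Erlang mean ${T}_{0}+k/\lambda $; the values of ${T}_{0}$, $\lambda $, k and the sample count are our illustrative choices, not taken from the paper.

```python
import random

# Illustrative check of Remark 1: xi_k = T0 + tau_1 + ... + tau_k is Erlang
# distributed, so its mean is T0 + k/lambda.  T0, lam, k and the sample
# count N are our own illustrative choices.
random.seed(42)
T0, lam, k, N = 0.0, 1.0, 3, 20000

samples = [T0 + sum(random.expovariate(lam) for _ in range(k)) for _ in range(N)]
mean = sum(samples) / N
print(abs(mean - (T0 + k / lam)) < 0.1)   # empirical mean is close to 3
```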

Consider the general model of the Cohen–Grossberg neural networks with impulses occurring at random times (RINN):

$$\begin{array}{c}{x}_{i}^{\prime}\left(t\right)=-{a}_{i}\left({x}_{i}\left(t\right)\right)\left({b}_{i}\left({x}_{i}\left(t\right)\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){f}_{j}\left({x}_{j}\left(t\right)\right)+{I}_{i}\left(t\right)\right)\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}t\ge {T}_{0},\phantom{\rule{4pt}{0ex}}{\xi}_{k}<t<{\xi}_{k+1},\phantom{\rule{4pt}{0ex}}k=0,1,\dots ,\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,\hfill \\ {x}_{i}({\xi}_{k}+0)={\mathsf{\Phi}}_{k,i}\left({x}_{i}({\xi}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ {x}_{i}\left({T}_{0}\right)={x}_{i}^{0},\hfill \end{array}$$

where n is the number of units in the neural network; ${x}_{i}\left(t\right)$ denotes the potential (or voltage) of cell i at time t, with $x\left(t\right)=({x}_{1}\left(t\right),{x}_{2}\left(t\right),\dots ,{x}_{n}\left(t\right))\in {\mathbb{R}}^{n}$; ${f}_{j}\left({x}_{j}\left(t\right)\right)$ denotes the activation function of the j-th neuron at time t, i.e., its response to its membrane potential, with $f\left(x\right)=({f}_{1}\left({x}_{1}\right),{f}_{2}\left({x}_{2}\right),\dots ,{f}_{n}\left({x}_{n}\right))$; ${a}_{i}(\cdot )>0$ represents an amplification function; ${b}_{i}(\cdot )$ represents an appropriately behaved function; the $n\times n$ connection matrix $C\left(t\right)=\left({c}_{ij}\left(t\right)\right)$ denotes the strengths of connectivity between cells at time t, with ${c}_{ij}\left(t\right)\ge 0$ if the output from neuron j excites neuron i and ${c}_{ij}\left(t\right)\le 0$ if it inhibits it; and the functions ${I}_{i}\left(t\right)$, with $I\left(t\right)=({I}_{1}\left(t\right),{I}_{2}\left(t\right),\dots ,{I}_{n}\left(t\right))\in {\mathbb{R}}^{n}$, correspond to the external bias or input from outside the network to unit i at time t.
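A minimal sketch of a single sample path may help fix ideas. We take the scalar special case ${a}_{1}\equiv 1$, ${b}_{1}\left(u\right)=u$, $C\equiv 0$, $I\equiv 0$, so that ${x}^{\prime}=-x$ between impulses, together with an illustrative impulse map ${\mathsf{\Phi}}_{k}\left(u\right)=0.5u$; all concrete choices here are ours, not from the paper.

```python
import math
import random

# Minimal illustrative sketch of one sample path of (2): scalar case with
# a_1 = 1, b_1(u) = u, C = 0, I = 0 (so x' = -x between impulses) and the
# illustrative impulse map Phi_k(u) = 0.5*u at the random moments xi_k.
random.seed(1)

def sample_path(x0, T0=0.0, T=10.0, lam=1.0):
    t, x = T0, x0
    while True:
        tau = random.expovariate(lam)        # waiting time tau_k
        if t + tau >= T:
            return x * math.exp(-(T - t))    # decay over the last interval
        x = 0.5 * (x * math.exp(-tau))       # decay, then impulse at xi_k
        t += tau

x_T = sample_path(4.0)
print(0.0 < x_T <= 4.0 * math.exp(-10.0))    # halved at each impulse, so at
                                             # least e^{-T} decay overall
```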

We list some assumptions, which will be used in the main results:

(H1) For all $i=1,2,\dots ,n$, the functions ${a}_{i}\in C(\mathbb{R},(0,\infty ))$, and there exist constants ${A}_{i},{B}_{i}>0$ such that $0<{A}_{i}\le {a}_{i}\left(u\right)\le {B}_{i}$ for $u\in \mathbb{R}$.

(H2) There exist positive numbers ${M}_{i,j},\phantom{\rule{4pt}{0ex}}i,j=1,2,\dots ,n$ such that $|{c}_{i,j}\left(t\right)|\le {M}_{i,j}$ for $t\ge 0$.

**Remark 2.** If the strengths of connectivity between cells are constants, then Assumption (H2) is satisfied.

For the activation functions, we assume:

(H3) The neuron activation functions are Lipschitz, i.e., there exist positive numbers ${L}_{i},\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,$ such that $|{f}_{i}\left(u\right)-{f}_{i}\left(v\right)|\le {L}_{i}|u-v|$ for $u,v\in \mathbb{R}$.

**Remark 3.** Note that activation functions satisfying Condition (H3) are more general than the usual sigmoid activation functions.

### 2.1. Description of the Solutions of Model (2)

Consider the sequence of points ${\left\{{t}_{k}\right\}}_{k=1}^{\infty}$ where the point ${t}_{k}$ is an arbitrary value of the corresponding random variable ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $. Define the increasing sequence of points ${\left\{{T}_{k}\right\}}_{k=1}^{\infty}$ by:

$${T}_{k}={T}_{0}+\sum _{i=1}^{k}{t}_{i}.$$

Note that ${T}_{k}$ are values of the random variables ${\xi}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $.

Consider the initial value problem for the system of differential equations with fixed moments of impulses ${\left\{{T}_{k}\right\}}_{k=1}^{\infty}$ (INN), corresponding to RINN (2):

$$\begin{array}{c}{x}_{i}^{\prime}\left(t\right)=-{a}_{i}\left({x}_{i}\left(t\right)\right)\left({b}_{i}\left({x}_{i}\left(t\right)\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){f}_{j}\left({x}_{j}\left(t\right)\right)+{I}_{i}\left(t\right)\right)\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}t\ge {T}_{0},\phantom{\rule{4pt}{0ex}}t\ne {T}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,\hfill \\ {x}_{i}({T}_{k}+0)={\mathsf{\Phi}}_{k,i}\left({x}_{i}({T}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ {x}_{i}\left({T}_{0}\right)={x}_{i}^{0}.\hfill \end{array}$$

The solution of the differential equation with fixed moments of impulses (4) depends not only on the initial point $({T}_{0},{x}^{0})$, but also on the moments of impulses ${T}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $; i.e., the solution depends on the chosen arbitrary values ${t}_{k}$ of the random variables ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $. We denote the solution of the initial value problem (4) by $x(t;{T}_{0},{x}^{0},\left\{{T}_{k}\right\})$. We will assume that:

$$x({T}_{k};{T}_{0},{x}^{0},\left\{{T}_{k}\right\})=\underset{t\to {T}_{k}-0}{\mathrm{lim}}x(t;{T}_{0},{x}^{0},\left\{{T}_{k}\right\})\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{for}\mathrm{any}\phantom{\rule{4pt}{0ex}}k=1,2,\dots .$$

**Remark 4.** The set of all solutions $x(t;{T}_{0},{x}^{0},\left\{{T}_{k}\right\})$ of the initial value problem for the impulsive differential Equation (4), for any values ${t}_{k}$ of the random variables ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, generates a stochastic process with state space ${\mathbb{R}}^{n}$. We denote it by $x(t;{T}_{0},{x}^{0},\left\{{\tau}_{k}\right\})$ and say that it is a solution of RINN (2).

**Remark 5.** Note that $x(t;{T}_{0},{x}^{0},\left\{{T}_{k}\right\})$ is a deterministic function, but $x(t;{T}_{0},{x}^{0},\left\{{\tau}_{k}\right\})$ is a stochastic process.

**Definition 1.**

**Definition 2.** A stochastic process $x(t;{T}_{0},{x}^{0},\left\{{\tau}_{k}\right\})$ with an uncountable state space ${\mathbb{R}}^{n}$ is said to be a solution of the IVP for the system of RINN (2) if, for any values ${t}_{k}$ of the random variables ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, the corresponding function $x(t;{T}_{0},{x}^{0},\left\{{T}_{k}\right\})$ is a sample path solution of the IVP for RINN (2).

### 2.2. Equilibrium of Model (2)

We define an equilibrium of the model (2) assuming Condition (H1) is satisfied:

**Definition 3.** A vector ${x}^{\ast}=({x}_{1}^{\ast},{x}_{2}^{\ast},\dots ,{x}_{n}^{\ast})\in {\mathbb{R}}^{n}$ is an equilibrium point of RINN (2) if the equalities:

$$0={b}_{i}\left({x}_{i}^{\ast}\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){f}_{j}\left({x}_{j}^{\ast}\right)+{I}_{i}\left(t\right)\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}t\ge 0,\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n$$

and:

$${x}_{i}^{\ast}={\mathsf{\Phi}}_{k,i}\left({x}_{i}^{\ast}\right)\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n$$

hold.

We assume the following:

(H4) Let RINN (2) have an equilibrium vector ${x}^{\ast}\in {\mathbb{R}}^{n}$.

If Assumption (H4) is satisfied, then we can shift the equilibrium point ${x}^{\ast}$ of System (2) to the origin. The transformation $y\left(t\right)=x\left(t\right)-{x}^{\ast}$ puts System (2) in the form:

$$\begin{array}{c}{y}_{i}^{\prime}\left(t\right)=-{p}_{i}\left({y}_{i}\left(t\right)\right)\left({q}_{i}\left({y}_{i}\left(t\right)\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){F}_{j}\left({y}_{j}\left(t\right)\right)\right)\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}t\ge {T}_{0},\phantom{\rule{4pt}{0ex}}{\xi}_{k}<t<{\xi}_{k+1},\phantom{\rule{4pt}{0ex}}k=0,1,\dots ,\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,\hfill \\ {y}_{i}({\xi}_{k}+0)={\varphi}_{k,i}\left({y}_{i}({\xi}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ {y}_{i}\left({T}_{0}\right)={y}_{i}^{0},\hfill \end{array}$$

where ${p}_{i}\left(u\right)={a}_{i}(u+{x}_{i}^{\ast})$, ${q}_{i}\left(u\right)={b}_{i}(u+{x}_{i}^{\ast})-{b}_{i}\left({x}_{i}^{\ast}\right)$, ${F}_{j}\left(u\right)={f}_{j}(u+{x}_{j}^{\ast})-{f}_{j}({x}_{j}^{\ast}),\phantom{\rule{4pt}{0ex}}j=1,2,\dots ,n$, ${\varphi}_{k,i}\left(u\right)={\mathsf{\Phi}}_{k,i}(u+{x}_{i}^{\ast})-{\mathsf{\Phi}}_{k,i}\left({x}_{i}^{\ast}\right),\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, and ${y}_{i}^{0}={x}_{i}^{0}-{x}_{i}^{\ast}$.

**Remark**

**6.**

If Assumption (H3) is fulfilled, then the function F in RINN (8) satisfies $|{F}_{j}\left(u\right)|\le {L}_{j}\left|u\right|,\phantom{\rule{4pt}{0ex}}j=1,2,\dots ,n,$ for $u\in \mathbb{R}$.

## 3. Some Stability Results for Differential Equations with Impulses at Random Times

Consider the general type of initial value problem (IVP) for a system of nonlinear random impulsive differential equations (RIDE):

$$\begin{array}{c}{x}^{\prime}\left(t\right)=g(t,x\left(t\right))\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}t\ge {T}_{0},\phantom{\rule{4pt}{0ex}}{\xi}_{k}<t<{\xi}_{k+1},\hfill \\ x({\xi}_{k}+0)={\mathsf{\Psi}}_{k}\left(x({\xi}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ x\left({T}_{0}\right)={x}^{0},\hfill \end{array}$$

where ${x}^{0}\in {\mathbb{R}}^{n}$, the random variables ${\xi}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, are defined by (1), $g\in C([{T}_{0},\infty )\times {\mathbb{R}}^{n},{\mathbb{R}}^{n})$, and ${\mathsf{\Psi}}_{k}:{\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$.

**Definition 4.** Let $p>0$. The trivial solution (${x}^{0}=0$) of RIDE (9) is said to be p-moment exponentially stable if, for any initial point $({T}_{0},{y}^{0})\in {\mathbb{R}}_{+}\times {\mathbb{R}}^{n}$, there exist constants $\alpha ,\mu >0$ such that $E[\parallel y(t;{T}_{0},{y}^{0},\left\{{\tau}_{k}\right\}){\parallel}^{p}]<\alpha \parallel {y}^{0}{\parallel}^{p}{e}^{-\mu (t-{T}_{0})}$ for all $t>{T}_{0}$, where $y(t;{T}_{0},{y}^{0},\left\{{\tau}_{k}\right\})$ is the solution of the IVP for RIDE (9).

**Definition 5.** Let $p>0$. The equilibrium ${x}^{\ast}$ of RINN (2) is said to be p-moment exponentially stable if, for any initial point $({T}_{0},{x}^{0})\in {\mathbb{R}}_{+}\times {\mathbb{R}}^{n}$, there exist constants $\alpha ,\mu >0$ such that $E[\parallel x(t;{T}_{0},{x}^{0},\left\{{\tau}_{k}\right\})-{x}^{\ast}{\parallel}^{p}]<\alpha \parallel {x}^{0}-{x}^{\ast}{\parallel}^{p}{e}^{-\mu (t-{T}_{0})}$ for all $t>{T}_{0}$, where $x(t;{T}_{0},{x}^{0},\left\{{\tau}_{k}\right\})$ is the solution of the IVP for RINN (2).

**Remark 7.** The 2-moment exponential stability of stochastic equations is known as mean square exponential stability, and in the case of $p=1$, it is called mean exponential stability.

Note that the p-moment exponential stability of RIDE (9) was studied in [20] by an application of Lyapunov functions from the class $\mathsf{\Lambda}(J,\Delta )$, $J\subset {\mathbb{R}}_{+}$, $\Delta \subset {\mathbb{R}}^{n},\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}0\in \Delta $ with:

$$\mathsf{\Lambda}(J,\Delta )=\{V(t,x)\in C(J\times \Delta ,{\mathbb{R}}_{+}):\phantom{\rule{4pt}{0ex}}V(t,0)\equiv 0,\phantom{\rule{4pt}{0ex}}V(t,x)\phantom{\rule{4pt}{0ex}}\text{is locally Lipschitzian with respect to}\phantom{\rule{4pt}{0ex}}x\}.$$

We will use the Dini derivative of the Lyapunov function $V(t,x)\in \mathsf{\Lambda}(J,\Delta )$ given by:

$$\begin{array}{c}{}_{\left(9\right)}{D}_{+}V(t,x)=\underset{h\to {0}^{+}}{\mathrm{lim}\mathrm{sup}}\frac{1}{h}\left\{V(t,x)-V(t-h,x-hg(t,x\left)\right)\right\}\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{for}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}t\in \phantom{\rule{4pt}{0ex}}J,\phantom{\rule{4pt}{0ex}}x\in \Delta .\hfill \end{array}$$

Now, we will give a sufficient condition result:

**Theorem 1** ([20]). Let the following conditions be satisfied:

1. For $t\ge 0$: $g(t,0)\equiv 0$ and ${\mathsf{\Psi}}_{k}\left(0\right)=0,\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, and for any initial values $({T}_{0},{x}^{0})$, the corresponding IVP for the ordinary differential equation ${x}^{\prime}\left(t\right)=g(t,x\left(t\right))$ has a unique solution.
2. The function $V\in \mathsf{\Lambda}([{T}_{0},\infty ),{\mathbb{R}}^{n})$, and there exist positive constants $a,b$ such that:
    - (i) $a{\parallel x\parallel}^{p}\le V(t,x)\le b{\parallel x\parallel}^{p}$ for $t\ge {T}_{0},\phantom{\rule{4pt}{0ex}}x\in {\mathbb{R}}^{n}$;
    - (ii) there exists a function $m\in C({\mathbb{R}}_{+},{\mathbb{R}}_{+})$ with ${\mathrm{inf}}_{t\ge 0}m\left(t\right)=L\ge 0$ such that ${}_{\left(9\right)}{D}_{+}V(t,x)\le -m\left(t\right)V(t,x)$ for $t\ge 0,\phantom{\rule{4pt}{0ex}}x\in {\mathbb{R}}^{n}$;
    - (iii) for any $k=1,2,\dots $, there exists a constant ${w}_{k}$ with $0\le {w}_{k}<1+\frac{L}{\lambda}$ such that $V(t,{\mathsf{\Psi}}_{k}\left(x\right))\le {w}_{k}V(t,x)$ for $t\ge 0,\phantom{\rule{4pt}{0ex}}x\in {\mathbb{R}}^{n}$.

Then, the trivial solution of RIDE (9) is p-moment exponentially stable.
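Condition 2(iii) couples the impulse gains ${w}_{k}$ with the exponential parameter $\lambda $ and the bound L. A small helper can screen a list of gains against the bound $1+L/\lambda $; the function name and the sample values below are our own illustrative choices.

```python
# Illustrative screen of Condition 2(iii) of Theorem 1: every impulse gain
# w_k must satisfy 0 <= w_k < 1 + L/lambda.  The function name and the
# sample gains below are ours, not from the paper.
def impulse_condition_holds(w, lam, L):
    bound = 1.0 + L / lam
    return all(0.0 <= wk < bound for wk in w)

print(impulse_condition_holds([0.5, 1.0, 1.2], lam=1.0, L=0.5))  # below 1.5
print(impulse_condition_holds([1.6], lam=1.0, L=0.5))            # 1.6 >= 1.5
```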

## 4. Stability Analysis of Neural Networks with Random Impulses

We will introduce the following assumptions:

(H5) For $i=1,2,\dots ,n$, the functions ${b}_{i}\in C(\mathbb{R},\mathbb{R})$, and there exist constants ${\beta}_{i}>0$ such that $u\left({b}_{i}(u+{x}_{i}^{\ast})-{b}_{i}\left({x}_{i}^{\ast}\right)\right)\ge {\beta}_{i}{u}^{2}$ for any $u\in \mathbb{R}$ where ${x}^{\ast}\in {\mathbb{R}}^{n},\phantom{\rule{4pt}{0ex}}{x}^{\ast}=({x}_{1}^{\ast},{x}_{2}^{\ast},\dots ,{x}_{n}^{\ast}),$ is the equilibrium from Condition (H4).

**Remark 8.** If Condition (H5) is satisfied, then the inequality $u{q}_{i}\left(u\right)\ge {\beta}_{i}{u}^{2},\phantom{\rule{4pt}{0ex}}u\in \mathbb{R},$ holds for RINN (8).

(H6) The inequality:

$$\nu =2\underset{i=\overline{1,n}}{\mathrm{min}}{A}_{i}{\beta}_{i}-\underset{i=\overline{1,n}}{\mathrm{max}}{B}_{i}\left(\underset{i=\overline{1,n}}{\mathrm{max}}\sum _{j=1}^{n}{M}_{ij}{L}_{j}+\sum _{i=1}^{n}\underset{j=\overline{1,n}}{\mathrm{max}}{M}_{ij}{L}_{j}\right)>0$$

holds.
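Since $\nu $ in (12) is explicit in the network parameters, Condition (H6) can be verified mechanically. The sketch below (the function name is ours) evaluates $\nu $ on the data of Example 1 in Section 4, for which $\nu =2.558$:

```python
# Illustrative evaluation of nu from (12):
#   nu = 2*min_i(A_i*beta_i)
#        - max_i(B_i) * ( max_i sum_j M_ij*L_j + sum_i max_j M_ij*L_j ).
# The function name is ours; the data are those of Example 1 below.
def nu_of_H6(A, B, beta, M, L):
    n = len(A)
    first = 2 * min(A[i] * beta[i] for i in range(n))
    row_max = max(sum(M[i][j] * L[j] for j in range(n)) for i in range(n))
    col_sum = sum(max(M[i][j] * L[j] for j in range(n)) for i in range(n))
    return first - max(B) * (row_max + col_sum)

M = [[0.1, 0.4, 0.3], [0.2, 0.3, 0.2], [0.1, 0.2, 0.1]]
nu = nu_of_H6([2, 2, 2], [3, 3, 3], [2, 2, 2], M, [0.1, 0.01, 2])
print(round(nu, 3))   # 2.558 > 0, so (H6) holds for these data
```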

(H7) For any $k=1,2,\dots $, there exists a positive number ${K}_{k}<1+\frac{\nu}{\lambda}$ such that the inequalities:

$$\sum _{i=1}^{n}{\left({\mathsf{\Phi}}_{k,i}\left({x}_{i}\right)-{\mathsf{\Phi}}_{k,i}\left({x}_{i}^{\ast}\right)\right)}^{2}\le {K}_{k}\sum _{i=1}^{n}{({x}_{i}-{x}_{i}^{\ast})}^{2},\phantom{\rule{4pt}{0ex}}{x}_{i}\in \mathbb{R},\phantom{\rule{4pt}{0ex}}i=1,2,\dots ,n,$$

hold, where ${x}^{\ast}=({x}_{1}^{\ast},{x}_{2}^{\ast},\dots ,{x}_{n}^{\ast})\in {\mathbb{R}}^{n}$ is the equilibrium from Condition (H4).

**Remark 9.** If Assumption (H7) is fulfilled, then the impulsive functions ${\varphi}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, in RINN (8) satisfy the inequalities ${\sum}_{i=1}^{n}{\varphi}_{k,i}^{2}\left({u}_{i}\right)\le {K}_{k}{\sum}_{i=1}^{n}{u}_{i}^{2}$.

**Theorem 2.** Let Assumptions (H1)–(H7) be satisfied. Then, the equilibrium point ${x}^{\ast}$ of RINN (2) is mean square exponentially stable.

**Proof.** Consider the quadratic Lyapunov function $V(t,x)={x}^{T}x$, $x\in {\mathbb{R}}^{n}$, which satisfies Condition 2(i) of Theorem 1 with $p=2$ and $a=b=1$. From Remarks 6 and 8 and the inequality $2\left|uv\right|\le {u}^{2}+{v}^{2}$, we get:

$$\begin{array}{c}{}_{\left(8\right)}{D}_{+}V(t,y)\le 2\sum _{i=1}^{n}{y}_{i}\left(-{p}_{i}\left({y}_{i}\right)({q}_{i}\left({y}_{i}\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){F}_{j}\left({y}_{j}\right))\right)\hfill \\ =-2\sum _{i=1}^{n}{y}_{i}{p}_{i}\left({y}_{i}\right){q}_{i}\left({y}_{i}\right)+2\sum _{i=1}^{n}{y}_{i}{p}_{i}\left({y}_{i}\right)\sum _{j=1}^{n}{c}_{ij}\left(t\right){F}_{j}\left({y}_{j}\right)\hfill \\ \le -2\sum _{i=1}^{n}{A}_{i}{\beta}_{i}{y}_{i}^{2}+2\sum _{i=1}^{n}|{y}_{i}|{B}_{i}\sum _{j=1}^{n}{M}_{ij}{L}_{j}\left|{y}_{j}\right|\hfill \\ \le -2\sum _{i=1}^{n}{A}_{i}{\beta}_{i}{y}_{i}^{2}+\sum _{i=1}^{n}{B}_{i}\sum _{j=1}^{n}{M}_{ij}{L}_{j}({y}_{i}^{2}+{y}_{j}^{2})\hfill \\ \le -2\sum _{i=1}^{n}{A}_{i}{\beta}_{i}{y}_{i}^{2}+\sum _{i=1}^{n}{B}_{i}{y}_{i}^{2}\sum _{j=1}^{n}{M}_{ij}{L}_{j}+\sum _{i=1}^{n}{B}_{i}\sum _{j=1}^{n}{M}_{ij}{L}_{j}{y}_{j}^{2}\hfill \\ \le -2\underset{i=\overline{1,n}}{\mathrm{min}}{A}_{i}{\beta}_{i}\sum _{i=1}^{n}{y}_{i}^{2}+\underset{i=\overline{1,n}}{\mathrm{max}}{B}_{i}\left(\underset{i=\overline{1,n}}{\mathrm{max}}\sum _{j=1}^{n}{M}_{ij}{L}_{j}+\sum _{i=1}^{n}\underset{j=\overline{1,n}}{\mathrm{max}}{M}_{ij}{L}_{j}\right)\sum _{i=1}^{n}{y}_{i}^{2}\hfill \\ =-\nu \sum _{i=1}^{n}{y}_{i}^{2},\hfill \end{array}$$

where the positive constant $\nu $ is defined by (12). Therefore, Condition 2(ii) of Theorem 1 is satisfied with $m\left(t\right)\equiv \nu $. Furthermore, from (H7) and Remark 9, it follows that Condition 2(iii) of Theorem 1 is satisfied. Hence, by Theorem 1, the equilibrium ${x}^{\ast}$ of RINN (2) is mean square exponentially stable. ☐

**Example 1.** Let $n=3$, ${T}_{0}=0.1$, and let the random variables ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, be exponentially distributed with $\lambda =1$. Consider the following special case of RINN (2):

$$\begin{array}{c}{x}_{i}^{\prime}\left(t\right)=-{a}_{i}\left({x}_{i}\left(t\right)\right)\left(2{x}_{i}\left(t\right)+\sum _{j=1}^{3}{c}_{ij}\left(t\right){f}_{j}\left({x}_{j}\left(t\right)\right)-\pi \right)\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}t\ge 0,\phantom{\rule{4pt}{0ex}}{\xi}_{k}<t<{\xi}_{k+1},\phantom{\rule{4pt}{0ex}}i=1,2,3,\hfill \\ {x}_{i}({\xi}_{k}+0)={\mathsf{\Phi}}_{k,i}\left({x}_{i}({\xi}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ {x}_{i}\left(0.1\right)={x}_{i}^{0},\hfill \end{array}$$

with ${a}_{i}\left(u\right)=2+\frac{\left|u\right|}{1+\left|u\right|}\in [2,3),\phantom{\rule{4pt}{0ex}}i=1,2,3$, ${f}_{i}\left(u\right)={\alpha}_{i}\mathrm{cos}\left(u\right)$, where ${\alpha}_{1}=0.1,\phantom{\rule{4pt}{0ex}}{\alpha}_{2}=0.01,\phantom{\rule{4pt}{0ex}}{\alpha}_{3}=2$, ${\mathsf{\Phi}}_{k,i}\left(u\right)=u\mathrm{sin}k+(1-\mathrm{sin}k)0.5\pi $, and $C\left(t\right)=\left({c}_{ij}\left(t\right)\right)$ given by:

$$C\left(t\right)=\left(\begin{array}{ccc}-0.1\mathrm{sin}t& 0.4& 0.3\\ -\frac{{t}^{2}}{5{t}^{2}+1}& 0.3& \frac{t}{5t+1}\\ \frac{t}{10t+1}& -0.2\mathrm{cos}t& -0.1\mathrm{sin}t\end{array}\right).$$

The point ${x}^{\ast}=(0.5\pi ,0.5\pi ,0.5\pi )$ is the equilibrium point of RINN (14), i.e., Condition (H4) is satisfied. Now, Assumption (H1) is satisfied with ${A}_{i}=2,{B}_{i}=3,\phantom{\rule{4pt}{0ex}}i=1,2,3$. In addition, Assumption (H5) is satisfied with ${\beta}_{i}=2,\phantom{\rule{4pt}{0ex}}i=1,2,3$.

Furthermore, $|{c}_{ij}|\le {M}_{ij},\phantom{\rule{4pt}{0ex}}i,j=1,2,3,\phantom{\rule{4pt}{0ex}}t\ge 0$ where $M=\left\{{M}_{ij}\right\},$ is given by:

$$M=\left(\begin{array}{ccc}0.1& 0.4& 0.3\\ 0.2& 0.3& 0.2\\ 0.1& 0.2& 0.1\end{array}\right).$$

Therefore, Assumption (H2) is satisfied. Note that Assumption (H3) is satisfied with Lipschitz constants ${L}_{1}=0.1,{L}_{2}=0.01,{L}_{3}=2$.

Then, the constant $\nu $ defined by (12) is $\nu =8-3\left(1.814\right)=2.558>0$. Next, Assumption (H7) is fulfilled with ${K}_{k}=1$ because:

$$\begin{array}{c}\sum _{i=1}^{3}{\left({\mathsf{\Phi}}_{k,i}\left({x}_{i}\right)-{\mathsf{\Phi}}_{k,i}\left({x}_{i}^{\ast}\right)\right)}^{2}=\sum _{i=1}^{3}{\left({x}_{i}\mathrm{sin}k+(1-\mathrm{sin}k)0.5\pi -0.5\pi \right)}^{2}\hfill \\ =\sum _{i=1}^{3}{\left(({x}_{i}-0.5\pi )\mathrm{sin}k\right)}^{2}\le \sum _{i=1}^{3}{\left({x}_{i}-0.5\pi \right)}^{2},\phantom{\rule{4pt}{0ex}}k=1,2,\dots .\hfill \end{array}$$

Therefore, according to Theorem 2, the equilibrium of RINN (14) is mean square exponentially stable.

Consider now System (14) without any kind of impulses. The equilibrium ${x}^{\ast}=(0.5\pi ,0.5\pi ,0.5\pi )$ is asymptotically stable (see Figure 1 and Figure 2). Therefore, an appropriate perturbation of the neural network by impulses at random times preserves the stability properties of the equilibrium.
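The mean-square decay asserted for Example 1 can also be probed by Monte-Carlo simulation: Euler steps between impulses, with the impulse map ${\mathsf{\Phi}}_{k,i}$ applied at exponentially distributed moments. The step size, horizon, initial point and number of paths below are our illustrative choices, not from the paper.

```python
import math
import random

# Illustrative Monte-Carlo probe of Example 1: Euler integration of (14)
# between impulses whose waiting times are exponential with lambda = 1.
# The step size, horizon, initial point and path count are our choices.
random.seed(0)
ALPHA = (0.1, 0.01, 2.0)
XSTAR = math.pi / 2                      # equilibrium component 0.5*pi

def a(u):
    return 2 + abs(u) / (1 + abs(u))

def C(t):
    return ((-0.1 * math.sin(t), 0.4, 0.3),
            (-t * t / (5 * t * t + 1), 0.3, t / (5 * t + 1)),
            (t / (10 * t + 1), -0.2 * math.cos(t), -0.1 * math.sin(t)))

def sample_path(x0, T=5.0, h=0.005, lam=1.0):
    x, t, k = list(x0), 0.1, 0
    next_imp = t + random.expovariate(lam)
    while t < T:
        c = C(t)
        x = [xi - h * a(xi) * (2 * xi
                               + sum(c[i][j] * ALPHA[j] * math.cos(x[j])
                                     for j in range(3))
                               - math.pi)
             for i, xi in enumerate(x)]
        t += h
        if t >= next_imp:                # impulse Phi_{k,i} at xi_k
            k += 1
            s = math.sin(k)
            x = [s * xi + (1 - s) * XSTAR for xi in x]
            next_imp += random.expovariate(lam)
    return x

def mean_square(x0, paths=30):
    return sum(sum((xi - XSTAR) ** 2 for xi in sample_path(x0))
               for _ in range(paths)) / paths

val = mean_square((2.0, 2.0, 2.0))       # initial deviation is about 0.55
print(val < 0.05)                        # mean-square deviation has decayed
```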

**Remark 10.** Note that Condition (H7) is weaker than Condition (3.6) in Theorem 3.2 of [16], so as a special case of Theorem 2, we obtain weaker conditions for the exponential stability of the Cohen–Grossberg model without any type of impulses. For example, for (14), the inequality $\delta =2{\parallel M\parallel}_{2}\frac{3}{4}=1.0374<1$ required by Condition (3.6) of [16] is not satisfied, so Theorem 3.2 of [16] gives no conclusion about stability (compare with Example 1).

Now, consider the following assumption:

(H8) The inequality:

$$\nu =\underset{i=\overline{1,n}}{\mathrm{min}}{\beta}_{i}-\sum _{i=1}^{n}\underset{j=\overline{1,n}}{\mathrm{max}}{M}_{ij}>0$$

holds, where ${\beta}_{i}$ are the constants from Condition (H5).

**Theorem 3.** Let Assumptions (H1)–(H5), (H7) and (H8) be satisfied. Then, the equilibrium point ${x}^{\ast}$ of RINN (2) is mean exponentially stable.

**Proof.** For any $u\in {\mathbb{R}}^{n}$, define $V\left(u\right)={\sum}_{i=1}^{n}{\int}_{0}^{{u}_{i}}\frac{sign\left(s\right)}{{a}_{i}\left(s\right)}ds$. Then:

$$V\left(u\right)\le \sum _{i=1}^{n}{\int}_{0}^{{u}_{i}}\frac{sign\left(s\right)}{{A}_{i}}ds=\sum _{i=1}^{n}\frac{1}{{A}_{i}}|{u}_{i}|\le A\parallel u\parallel $$

and:

$$V\left(u\right)\ge \sum _{i=1}^{n}{\int}_{0}^{{u}_{i}}\frac{sign\left(s\right)}{{B}_{i}}ds=\sum _{i=1}^{n}\frac{1}{{B}_{i}}|{u}_{i}|\ge B\parallel u\parallel $$

where $A={\mathrm{max}}_{i=\overline{1,n}}\frac{1}{{A}_{i}}$, $B={\mathrm{min}}_{i=\overline{1,n}}\frac{1}{{B}_{i}}$, and $\parallel u\parallel ={\sum}_{i=1}^{n}|{u}_{i}|$ denotes the 1-norm.

Then, for $t\ge 0$ and $y\in {\mathbb{R}}^{n}$, according to Remarks 6 and 8, we obtain:

$$\begin{array}{c}{}_{\left(8\right)}{D}_{+}V\left(y\right)\le \sum _{i=1}^{n}-sgn\left({y}_{i}\right)\left({q}_{i}\left({y}_{i}\right)-\sum _{j=1}^{n}{c}_{ij}\left(t\right){F}_{j}\left({y}_{j}\right)\right)\hfill \\ \le \sum _{i=1}^{n}\left(-{\beta}_{i}\phantom{\rule{4pt}{0ex}}|{y}_{i}|+\sum _{j=1}^{n}{M}_{ij}|{F}_{j}\left({y}_{j}\right)|\right)\phantom{\rule{4pt}{0ex}}\le \sum _{i=1}^{n}\left(-{\beta}_{i}|{y}_{i}|+\sum _{j=1}^{n}{M}_{ij}\phantom{\rule{4pt}{0ex}}\left|{y}_{j}\right|\right)\hfill \\ =-\sum _{i=1}^{n}{\beta}_{i}|{y}_{i}|+\sum _{i=1}^{n}\sum _{j=1}^{n}{M}_{ij}|{y}_{j}|\le -\underset{i=\overline{1,n}}{\mathrm{min}}{\beta}_{i}\sum _{i=1}^{n}|{y}_{i}|+\left(\sum _{i=1}^{n}\underset{j=\overline{1,n}}{\mathrm{max}}{M}_{ij}\right)\sum _{j=1}^{n}\left|{y}_{j}\right|\hfill \\ \le -\nu \sum _{i=1}^{n}\left|{y}_{i}\right|\le -\frac{\nu}{A}V\left(y\right).\hfill \end{array}$$

The two bounds on V above show that Condition 2(i) of Theorem 1 holds with $p=1$. Furthermore, from (H7) and Remark 9, it follows that Condition 2(iii) of Theorem 1 is satisfied. Hence, by Theorem 1, the equilibrium ${x}^{\ast}$ of RINN (2) is mean exponentially stable. ☐

**Example 2.** Let $n=3$, ${T}_{0}=0.1$, and let the random variables ${\tau}_{k},\phantom{\rule{4pt}{0ex}}k=1,2,\dots $, be exponentially distributed with $\lambda =1$. Consider the following special case of RINN (2):

$$\begin{array}{c}{x}_{i}^{\prime}\left(t\right)=-{a}_{i}\left({x}_{i}\left(t\right)\right)\left(2{x}_{i}\left(t\right)+\sum _{j=1}^{3}{c}_{ij}\left(t\right){f}_{j}\left({x}_{j}\left(t\right)\right)-1\right)\hfill \\ \phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}t\ge 0,\phantom{\rule{4pt}{0ex}}{\xi}_{k}<t<{\xi}_{k+1},\phantom{\rule{4pt}{0ex}}i=1,2,3,\hfill \\ {x}_{i}({\xi}_{k}+0)={\mathsf{\Phi}}_{k,i}\left({x}_{i}({\xi}_{k}-0)\right)\phantom{\rule{4pt}{0ex}}for\phantom{\rule{4pt}{0ex}}k=1,2,\dots ,\hfill \\ {x}_{i}\left(0.1\right)={x}_{i}^{0},\hfill \end{array}$$

with ${a}_{i}\left(u\right)=2+\frac{\left|u\right|}{1+\left|u\right|}\in [2,3),\phantom{\rule{4pt}{0ex}}i=1,2,3$, ${f}_{i}\left(u\right)=\mathrm{log}\left(\frac{u}{1-u}\right)$, ${\mathsf{\Phi}}_{k,i}\left(u\right)=u\mathrm{sin}k+(1-\mathrm{sin}k)0.5$, and $C\left(t\right)=\left({c}_{ij}\left(t\right)\right)$ given by (15).

The point ${x}^{\ast}=(0.5,0.5,0.5)$ is the equilibrium point of RINN (20), i.e., Condition (H4) is satisfied. Now, Assumption (H5) is satisfied with ${\beta}_{i}=2,\phantom{\rule{4pt}{0ex}}i=1,2,3$.

Furthermore, $|{c}_{ij}(t)|\le {M}_{ij}$, $i,j=1,2,3$, $t\ge 0$, where $M=\left\{{M}_{ij}\right\}$ is given by (16). Therefore, Assumption (H2) is satisfied. Then, the inequality ${\mathrm{min}}_{i=\overline{1,3}}{\beta}_{i}=2>{\sum}_{i=1}^{3}{\mathrm{max}}_{j=\overline{1,3}}{M}_{ij}=0.4+0.3+0.2=0.9$ holds.
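The verification above reduces to comparing $\min_i \beta_i$ with the sum of the row maxima of $M$. A minimal sketch of this check follows; the matrix $M$ of (16) is not reproduced in this section, so a hypothetical matrix with the row maxima $0.4$, $0.3$, $0.2$ stated above stands in for it.

```python
import numpy as np

def mean_stability_condition(beta, M):
    """Sufficient condition of Theorem 3: min_i beta_i > sum_i max_j M_ij."""
    beta, M = np.asarray(beta, float), np.asarray(M, float)
    return bool(beta.min() > M.max(axis=1).sum())

# Hypothetical bound matrix: the paper's M is given by (16); only its
# row maxima 0.4, 0.3, 0.2 enter the check.
beta = [2.0, 2.0, 2.0]
M = [[0.4, 0.1, 0.2],
     [0.1, 0.3, 0.1],
     [0.2, 0.1, 0.2]]
print(mean_stability_condition(beta, M))  # True: 2 > 0.4 + 0.3 + 0.2 = 0.9
```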

According to Theorem 3, the equilibrium of (20) is mean exponentially stable.

Consider the system (20) without any impulses. The equilibrium ${x}^{\ast}=(0.5,\phantom{\rule{3.33333pt}{0ex}}0.5,\phantom{\rule{3.33333pt}{0ex}}0.5)$ is asymptotically stable (see Figure 3 and Figure 4). Therefore, an appropriate perturbation of the neural network by impulses at random times preserves the stability properties of the equilibrium.
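The behavior of (20) under random impulses can be sketched with a simple Euler scheme. This is a minimal illustration only: the time-varying matrix $C(t)$ of (15) is not reproduced here, so a hypothetical constant matrix respecting the bounds $|c_{ij}(t)|\le M_{ij}$ stands in for it, and the impulse moments are generated from exponentially distributed waiting times with $\lambda = 1$.

```python
import numpy as np

def a(u):       # amplification function a_i(u) = 2 + |u|/(1+|u|)
    return 2.0 + np.abs(u) / (1.0 + np.abs(u))

def f(u):       # activation f_i(u) = log(u/(1-u)), defined for u in (0,1)
    return np.log(u / (1.0 - u))

def Phi(u, k):  # impulsive map Phi_{k,i}(u) = u sin k + (1 - sin k) 0.5
    return u * np.sin(k) + (1.0 - np.sin(k)) * 0.5

# Hypothetical constant connection matrix standing in for C(t) of (15).
C = np.array([[0.4, 0.1, 0.2],
              [0.1, 0.3, 0.1],
              [0.2, 0.1, 0.2]])

def simulate(x0, t0=0.1, T=10.0, dt=1e-3, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t, k = np.array(x0, dtype=float), t0, 0
    next_impulse = t0 + rng.exponential(1.0 / lam)      # xi_1
    while t < T:
        # Euler step of x_i' = -a_i(x_i)(2 x_i + sum_j c_ij f_j(x_j) - 1)
        x = x + dt * (-a(x) * (2.0 * x + C @ f(x) - 1.0))
        t += dt
        if t >= next_impulse:                           # impulse at xi_k
            k += 1
            x = Phi(x, k)
            next_impulse += rng.exponential(1.0 / lam)  # xi_{k+1}
    return x

x_final = simulate([0.55, 0.45, 0.52])
print(np.max(np.abs(x_final - 0.5)))  # deviation from x* = (0.5, 0.5, 0.5)
```

With these (hypothetical) parameters, the trajectory settles near the equilibrium $x^{\ast}$, and the impulses, which satisfy $|\Phi_{k,i}(u)-0.5|\le |u-0.5|$, do not destroy the convergence.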

## 5. Conclusions

In this paper, we study stability properties of the equilibrium point of a generalization of the Cohen–Grossberg model of neural networks in the case when:

- the potential (or voltage) of any cell is perturbed instantaneously at random moments, i.e., the neural network is modeled by a deterministic differential equation with impulses at random times; this presence of randomness totally changes the behavior of the solutions (they are not deterministic functions, but stochastic processes);
- the random moments of the impulsive state displacements of neurons are exponentially distributed;
- the connection matrix $C=\left({c}_{ij}\right)$ is not a constant matrix, as is usually the case in the literature, but depends on time, since the strengths of connectivity between cells can change in time;
- the external bias or input from outside the network to any unit is not constant but varies in time;
- sufficient conditions for mean-square exponential stability and for mean exponential stability of the equilibrium are obtained.
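The exponentially distributed impulse moments used throughout the paper can be sketched as follows (a minimal illustration, assuming i.i.d. waiting times ${\tau}_{k}\sim \mathrm{Exp}(\lambda)$ as in Example 2):

```python
import numpy as np

def impulse_moments(t0, lam, n, rng=None):
    """First n random impulse moments xi_1 < ... < xi_n after t0, where the
    waiting times tau_k = xi_k - xi_{k-1} are i.i.d. exponential with rate lam."""
    if rng is None:
        rng = np.random.default_rng()
    taus = rng.exponential(1.0 / lam, size=n)  # E[tau_k] = 1/lam
    return t0 + np.cumsum(taus)

xi = impulse_moments(t0=0.1, lam=1.0, n=5, rng=np.random.default_rng(42))
print(xi)  # five strictly increasing random moments after t0 = 0.1
```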

## Author Contributions

All authors contributed equally to the writing of this paper. All four authors read and approved the final manuscript.

## Funding

The research was partially supported by Fund MU17-FMI-007, University of Plovdiv “Paisii Hilendarski”.

## Conflicts of Interest

The authors declare that they have no competing interests.

## References

1. Cohen, M.; Grossberg, S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. **1983**, 13, 815–826.
2. Guo, S.; Huang, L. Stability analysis of Cohen–Grossberg neural networks. IEEE Trans. Neural Netw. **2006**, 17, 106–117.
3. Bai, C. Stability analysis of Cohen–Grossberg BAM neural networks with delays and impulses. Chaos Solitons Fractals **2008**, 35, 263–267.
4. Cao, J.; Liang, J. Boundedness and stability for Cohen–Grossberg neural network with time-varying delays. J. Math. Anal. Appl. **2004**, 296, 665–685.
5. Aouiti, C.; Dridi, F. New results on impulsive Cohen–Grossberg neural networks. Neural Process. Lett. **2018**, 48, 1–25.
6. Liu, M.; Jiang, H.; Hu, C. Exponential stability of Cohen–Grossberg neural networks with impulse time window. Discret. Dyn. Nat. Soc. **2016**, 2016, 2762960.
7. Meng, Y.; Huang, L.; Guo, Z.; Hu, Q. Stability analysis of Cohen–Grossberg neural networks with discontinuous neuron activations. Appl. Math. Model. **2010**, 34, 358–365.
8. Huang, C.; Huang, L.; He, Y. Mean square exponential stability of stochastic Cohen–Grossberg neural networks with unbounded distributed delays. Discret. Dyn. Nat. Soc. **2010**, 2010, 513218.
9. Bucolo, M.; Caponetto, R.; Fortuna, L.; Frasca, M.; Rizzo, A. Does chaos work better than noise? IEEE Circuits Syst. Mag. **2002**, 2, 4–19.
10. Vinodkumar, A.; Rakkiyappan, R. Exponential stability results for fixed and random type impulsive Hopfield neural networks. Int. J. Comput. Sci. Math. **2016**, 7, 1–19.
11. Scardapane, S.; Wang, D. Randomness in neural networks: An overview. WIREs Data Min. Knowl. Discov. **2017**, 7, 1–18.
12. Gopalsamy, K. Stability of artificial neural networks with impulses. Appl. Math. Comput. **2004**, 154, 783–813.
13. Rakkiyappan, R.; Balasubramaniam, P.; Cao, J. Global exponential stability of neutral-type impulsive neural networks. Nonlinear Anal. Real World Appl. **2010**, 11, 122–130.
14. Song, X.; Zhao, P.; Xing, Z.; Peng, J. Global asymptotic stability of CNNs with impulses and multi-proportional delays. Math. Methods Appl. Sci. **2016**, 39, 722–733.
15. Wu, Z.; Li, C. Exponential stability analysis of delayed neural networks with impulsive time window. In Proceedings of the 2017 Ninth International Conference on Advanced Computational Intelligence (ICACI), Doha, Qatar, 4–6 February 2017; pp. 37–42.
16. Wang, L.; Zou, X. Exponential stability of Cohen–Grossberg neural networks. Neural Netw. **2002**, 15, 415–422.
17. Yang, Z.; Xu, D. Stability analysis of delay neural networks with impulsive effects. IEEE Trans. Circuits Syst. II Express Briefs **2005**, 52, 517–521.
18. Zhou, Q. Global exponential stability of BAM neural networks with distributed delays and impulses. Nonlinear Anal. Real World Appl. **2009**, 10, 144–153.
19. Agarwal, R.; Hristova, S.; O’Regan, D.; Kopanov, P. p-Moment exponential stability of differential equations with random impulses and the Erlang distribution. Mem. Differ. Equ. Math. Phys. **2017**, 70, 99–106.
20. Agarwal, R.; Hristova, S.; O’Regan, D. Exponential stability for differential equations with random impulses at random times. Adv. Differ. Equ. **2013**, 2013, 372.
21. Agarwal, R.; Hristova, S.; O’Regan, D.; Kopanov, P. Impulsive differential equations with Gamma distributed moments of impulses and p-moment exponential stability. Acta Math. Sci. **2017**, 37, 985–997.

**Figure 1.** Example 1. Graph of the solution of the ODE system corresponding to (14) with ${x}_{1}^{0}=1,{x}_{2}^{0}=2,{x}_{3}^{0}=1.4$.

**Figure 2.** Example 1. Graph of the solution of the ODE system corresponding to (14) with ${x}_{1}^{0}=-0.1,{x}_{2}^{0}=0.2,{x}_{3}^{0}=-0.4$.

**Figure 3.** Example 2. Graph of the solution of the ODE system corresponding to (20) with ${x}_{1}^{0}=0.55,{x}_{2}^{0}=0.8,{x}_{3}^{0}=0.1$.

**Figure 4.** Example 2. Graph of the solution of the ODE system corresponding to (20) with ${x}_{1}^{0}=0.4,{x}_{2}^{0}=0.3,{x}_{3}^{0}=0.1$.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).