Article

Stabilization of Periodical Discrete Feedback Control for Markov Jumping Stochastic Systems

1 Key Lab of Power Electronics for Energy Conservation and Motor Drive of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
2 College of Science, North China University of Science and Technology, Tangshan 063000, China
3 Department of Electromechanical Engineering, Tangshan University, Tangshan 063000, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2447; https://doi.org/10.3390/sym13122447
Submission received: 16 November 2021 / Revised: 13 December 2021 / Accepted: 14 December 2021 / Published: 19 December 2021

Abstract: Motivated by the two strategies of intermittent control and discrete feedback control, this paper introduces a periodically intermittent discrete feedback control in the drift part to stabilize an unstable Markov jumping stochastic differential system. Using a comparison-principle approach, we show that stabilization is achieved in the sense of almost sure exponential stability. The stabilization theory is then applied to Markov jumping stochastic recurrent neural networks.

1. Introduction

Stochastic systems play an important role in many fields. Markov jumping stochastic differential systems (MJSDSs, for short) offer a powerful mathematical model for option pricing, manufacturing engineering, and electric power engineering [1,2,3]; their subsystems switch according to a given Markovian chain. In the study of MJSDSs, stability, as a basic and significant property, has received considerable attention, e.g., [4,5,6,7,8,9,10,11]. For an unstable system, various strategies have been introduced to stabilize it, such as sampled-data control, impulse control, pinning control, feedback control, and intermittent control. Generally speaking, for stochastic differential systems, feedback control can be placed either in the drift part or in the diffusion part, e.g., [12,13,14,15,16]. This paper focuses on feedback control in the drift part.
With respect to an unstable MJSDS in the form
du(t) = F(u(t), t, r(t))dt + G(u(t), t, r(t))dB(t),  t ≥ 0,  (1)
it is traditional to introduce a feedback control H(u(t), t, r(t)) in the drift part, based on continuous state observations, to stabilize it, where u(t) ∈ Rⁿ stands for the system state, r(t) ∈ S stands for the switching rule (a Markovian chain), B(t) stands for an m-dimensional Brownian motion, and F, H: Rⁿ × R₊ × S → Rⁿ, G: Rⁿ × R₊ × S → R^{n×m}; more concrete definitions are given in the last paragraph of this section. However, designing a continuous feedback control is costly and often impossible to implement in practice. To reduce the stabilization cost, Mao [17] introduced a feedback control in the drift part based on discrete state observations to stabilize the unstable system (1). The controlled system then becomes
du(t) = [F(u(t), t, r(t)) + H(u([t/h]h), t, r(t))]dt + G(u(t), t, r(t))dB(t),  (2)
where h > 0 is the observation interval between two consecutive observations, and [·] is the integer-part operator, i.e., [t/h] = i for t ∈ [ih, (i+1)h), i = 0, 1, 2, …. Subsequent results on discrete feedback control were then reported, e.g., [18,19,20,21].
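To make the discrete-observation mechanism in (2) concrete, the following sketch simulates a hypothetical scalar system du(t) = [a·u(t) + H(u([t/h]h))]dt + b·u(t)dB(t) with linear feedback H(v) = −k·v by the Euler–Maruyama method; the controller holds the last observed state constant between observation instants. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_discrete_feedback(a=1.0, b=0.5, k=3.0, h=0.1, dt=0.001, T=10.0,
                               u0=1.0, seed=0):
    """Euler-Maruyama path of du = [a*u - k*u([t/h]h)]dt + b*u dB, where the
    feedback uses the state observed at the last multiple of h (illustrative)."""
    rng = np.random.default_rng(seed)
    n, m = int(T / dt), round(h / dt)   # m integration steps per observation
    u = np.empty(n + 1)
    u[0] = u0
    obs = u0                            # latest discrete observation u([t/h]h)
    for i in range(n):
        if i % m == 0:
            obs = u[i]                  # refresh observation at t = 0, h, 2h, ...
        dB = rng.normal(0.0, np.sqrt(dt))
        u[i + 1] = u[i] + (a * u[i] - k * obs) * dt + b * u[i] * dB
    return u
```

With k sufficiently larger than a + b²/2, sample paths typically decay even though the feedback only sees the state every h time units; setting b = 0 recovers the deterministic sampled-feedback system.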
Additionally, compared with continuous control, the intermittent control strategy is more effective at reducing control cost. Intermittent control was initiated by Zochowski [22]; it alternates working time and non-working time. When the working time is periodic, the scheme is called periodically intermittent control and has received much attention, e.g., [23,24,25,26,27,28,29].
Therefore, motivated by the discrete feedback control and periodically intermittent control, one may ask the following question.
Question: With respect to the unstable MJSDS (1), could one introduce a periodically intermittent discrete feedback control in the drift part to make the controlled MJSDS
du(t) = [F(u(t), t, r(t)) + H(u([t/h]h), t, r(t))]dt + G(u(t), t, r(t))dB(t),  t ∈ [lσ, (l+θ)σ),
du(t) = F(u(t), t, r(t))dt + G(u(t), t, r(t))dB(t),  t ∈ [(l+θ)σ, (l+1)σ)  (3)
stable, where l = 0, 1, 2, …, θ ∈ (0, 1] stands for the intermittent control rate, and σ stands for the control period?
Up to now, few answers to this question have been given. This paper offers a positive answer and fills the research gap. The contributions of this paper are the following:
(1) The issue and criteria of periodically intermittent discrete feedback control in the drift part are proposed for MJSDSs;
(2) The above stabilization theory is successfully applied to Markov jumping stochastic recurrent neural networks, which shows its effectiveness.
Notation 1.
Assume (Ω, F, {F_t}_{t≥0}, P) is a complete probability space satisfying the usual conditions. B(t) stands for an m-dimensional Brownian motion, and r(t) stands for a right-continuous Markovian chain with finite state space S = {1, …, N} and generator Γ = (λ_{k1k2})_{N×N}, which satisfies λ_{k1k2} > 0 for k1 ≠ k2 and Σ_{k2∈S} λ_{k1k2} = 0. The operator E stands for the expectation with respect to P. For a matrix M, |M| = √(trace(MᵀM)) and ‖M‖ = max{|Mx| : |x| = 1}, where ᵀ stands for the transpose operator and trace for the trace operator. A matrix M is symmetric when Mᵀ = M.

2. Problem Description and Main Results

This paper is mainly concerned with the unstable MJSDS (1) with initial data u(0) ∈ Rⁿ, r(0) ∈ S. Furthermore, global Lipschitz conditions are assumed for the functions F and G.
Assumption 1.
There exist two numbers c1, c2 > 0 such that, for u, v ∈ Rⁿ, s ∈ R₊, j ∈ S,
|F(u, s, j) − F(v, s, j)| ≤ c1|u − v|,  |G(u, s, j) − G(v, s, j)| ≤ c2|u − v|.
Furthermore, F(0, s, j) = 0, G(0, s, j) = 0 for s ∈ R₊, j ∈ S.
From Assumption 1, MJSDS (1) has the zero solution. However, Assumption 1 cannot ensure its stability. Hence, as discussed above, a periodically intermittent discrete feedback control in the drift part is introduced for MJSDS (1). For simplicity, the controlled MJSDS (3) can be rewritten as
du(t) = [F(u(t), t, r(t)) + I(t)H(u([t/h]h), t, r(t))]dt + G(u(t), t, r(t))dB(t),  (4)
where I(t) = 1 for t ∈ [lσ, (l+θ)σ) and I(t) = 0 for t ∈ [(l+θ)σ, (l+1)σ), and θσ stands for the control width.
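The indicator I(t) is straightforward to implement; a minimal sketch (the function name is ours):

```python
def intermittent_indicator(t, sigma, theta):
    """I(t) = 1 on [l*sigma, (l+theta)*sigma), 0 on [(l+theta)*sigma, (l+1)*sigma)."""
    phase = (t % sigma) / sigma   # position inside the current period, in [0, 1)
    return 1 if phase < theta else 0
```

For θ = 1 the indicator is identically 1 and the control acts on the whole time axis, recovering the purely discrete feedback scheme.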
It is further assumed that θ ∈ [h/σ, 1], i.e., θσ ≥ h, since the control period should be no less than the control width, and the control width should be no less than the observation interval between two consecutive observations. In fact, the constraint θσ ≥ h can be removed.
Remark 1.
Compared with the results on traditional continuous feedback control in the drift part, this paper emphasizes feedback control based on discrete observations. Compared with the results on discrete feedback control in the drift part [17,18,19,20,21], this paper emphasizes a periodically intermittent control strategy. In sum, the periodically intermittent discrete feedback control introduced in this paper can reduce the control cost effectively.
Remark 2.
The controlled MJSDS (4) is complicated by its two switching rules. One is the original Markovian switching rule; the other is the newly added periodically intermittent switching rule. From another angle, system (4) is a periodically switched system: one of its subsystems is the original unstable MJSDS; the other is the discrete-feedback-controlled MJSDS.
Before the main contribution of the stabilization result is presented, the following assumptions are given for function H.
Assumption 2.
There exists a number c3 > 0 such that, for u, v ∈ Rⁿ, s ∈ R₊, j ∈ S,
|H(u, s, j) − H(v, s, j)| ≤ c3|u − v|.
Furthermore, H(0, s, j) = 0 for s ∈ R₊, j ∈ S.
Remark 3.
Assumption 2 implies the linear growth condition for the function H, i.e., |H(u, s, j)| ≤ c3|u| for u ∈ Rⁿ, s ∈ R₊, j ∈ S.
Assumption 3.
There exists a number c4 > c1 + c2²/2 such that, for v ∈ Rⁿ, s ∈ R₊, j ∈ S,
vᵀH(v, s, j) ≤ −c4|v|².
Please refer to Assumption 1 for c1, c2.
Remark 4.
Given the parameters c1 and c2 in Assumption 1, many choices of H satisfy Assumptions 2 and 3. For instance, H(v, s, j) can take the linear form −c4v with c4 > c1 + c2²/2, which satisfies Assumptions 2 and 3 with c3 = c4.
As a matter of fact, the controlled MJSDS (4) is a Markov jumping stochastic delay differential system
du(t) = [F(u(t), t, r(t)) + I(t)H(u(t − δ(t)), t, r(t))]dt + G(u(t), t, r(t))dB(t)
with bounded time-varying delay δ(t) = t − [t/h]h. From [6], Assumptions 1 and 2 ensure the existence and uniqueness of the global solution u(t; u(0), r(0), 0) of the controlled MJSDS (4), although u(t; u(0), r(0), 0) does not satisfy the Markovian property. However, owing to the special structure of system (4), the Markovian property holds for (u(t; u(0), r(0), 0), r(t; r(0), 0)) at the times lσ, l = 0, 1, 2, …, i.e., for t ≥ lσ,
(u(t; u(0), r(0), 0), r(t; r(0), 0)) = (u(t; u(lσ), r(lσ), lσ), r(t; r(lσ), lσ)).
What is more, the definition of almost sure exponential stability is recalled for the controlled MJSDS (4).
Definition 1.
The controlled MJSDS (4) is said to be almost surely exponentially stable if lim sup_{t→∞} log|u(t; u(0), r(0), 0)|/t < 0 a.s. for all u(0) ∈ Rⁿ, r(0) ∈ S.
In the following, we give the main stabilization result.
Theorem 1.
Under Assumptions 1–3, suppose there exist numbers σ > 0, θ ∈ (0, 1], and h > 0 such that the conditions
h ≤ θσ,  (5)
L1(σ, θ, h)e^{(2c1+c2²+λ)σ} + e^{−εσ} < 0.5  (6)
are satisfied for a parameter λ > 0; then, for u(0) ∈ Rⁿ, r(0) ∈ S, the global solution u(t; u(0), r(0), 0) of the controlled MJSDS (4) satisfies
lim sup_{t→∞} log|u(t; u(0), r(0), 0)|/t ≤ −γ/4  a.s.,
where ε = 2c4 − 2c1 − c2², γ = −log[2L1(σ, θ, h)e^{(2c1+c2²+λ)σ} + 2e^{−εσ}]/σ, and
L1(σ, θ, h) = (c3²/λ)e^{−εθσ}(1 − θ)σ + (8c3⁴/λ²)(2c1²h + c2² + 2c3²h)hθ²σ²e^{(4c1+2c2²+2λ+3c3²/λ)θσ} + (4c3²/λ)(2c1²h + c2² + 2c3²h)hθσe^{(2c1+c2²+λ+c3²/λ)θσ}.
Remark 5.
As discussed in Remark 1, compared with the results in [17,18,19,20,21], in addition to the discrete feedback control strategy, Theorem 1 introduces the parameters σ and θ to implement the periodically intermittent control strategy. Compared with the results in [23,24,25,26,27,28,29], in addition to the periodically intermittent control strategy, Theorem 1 introduces the parameter h to implement the discrete feedback control strategy.
Theorem 1 demonstrates the almost sure exponential stability of controlled MJSDS (4). Now, we recall the original question in Section 1 and can directly obtain the following theorem from Theorem 1.
Theorem 2.
With respect to the unstable MJSDS (1) satisfying Assumption 1, one can introduce a periodically intermittently discrete feedback control
I ( t ) H ( u ( [ t / h ] h ) , t , r ( t ) ) , I ( t ) = 1 , t [ l σ , ( l + θ ) σ ) 0 , t [ ( l + θ ) σ , ( l + 1 ) σ ) , l = 0 , 1 ,
satisfying Assumptions 2, 3 and conditions (5), (6) to exponentially stabilize it almost surely, where the related parameters have been proposed above.
Remark 6.
Compared with the control criteria in [17,18,19,20,21,23,24,25,26,27,28,29], our criteria (i.e., conditions (5) and (6)) are more complicated owing to the combination of two control strategies.
In practical implementation, one can follow the procedure below to guarantee stabilization:
(1) Design a function H such that Assumptions 2 and 3 hold. Furthermore, compute ε = 2c4 − 2c1 − c2²;
(2) Choose proper parameters σ, θ, and λ such that condition (6) with h = 0 holds;
(3) Choose a proper parameter h such that conditions (5) and (6) hold;
(4) Design the periodically intermittent discrete feedback control I(t)H(u([t/h]h), t, r(t)).
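As a sanity check, the procedure above can be scripted. The function below evaluates ε = 2c4 − 2c1 − c2², L1(σ, θ, h), and the left-hand side of condition (6), and returns the decay rate γ when the condition holds. The parameter values in the usage example are illustrative only; note how small the observation interval h must be, which reflects the conservativeness of the estimates.

```python
import math

def check_condition6(c1, c2, c3, c4, sigma, theta, lam, h):
    """Evaluate epsilon, L1(sigma, theta, h), and condition (6) of Theorem 1."""
    eps = 2*c4 - 2*c1 - c2**2                    # requires c4 > c1 + c2**2/2
    q = 2*c1**2*h + c2**2 + 2*c3**2*h            # recurring factor in L1
    L1 = ((c3**2/lam) * math.exp(-eps*theta*sigma) * (1 - theta)*sigma
          + (4*c3**2/lam) * q * h*theta*sigma
            * math.exp((2*c1 + c2**2 + lam + c3**2/lam)*theta*sigma)
          + (8*c3**4/lam**2) * q * h*theta**2*sigma**2
            * math.exp((4*c1 + 2*c2**2 + 2*lam + 3*c3**2/lam)*theta*sigma))
    lhs = L1 * math.exp((2*c1 + c2**2 + lam)*sigma) + math.exp(-eps*sigma)
    gamma = -math.log(2*lhs)/sigma if lhs < 0.5 else None
    return eps, lhs, gamma

# Illustrative check: linear feedback H = -c4*v, so c3 = c4, with theta = 1.
eps, lhs, gamma = check_condition6(c1=0.05, c2=0.1, c3=1.0, c4=1.0,
                                   sigma=2.0, theta=1.0, lam=1.0, h=1e-6)
```

With h = 0 the last two terms of L1 vanish and only the first term survives, matching step (2) of the procedure.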
Remark 7.
In this procedure, step (2) is the key and the most complicated one. A common method is to fix some of the parameters and then choose the others.

3. Proof of Main Results

To give the detailed proof of Theorem 1, an auxiliary MJSDS
dv(t) = [F(v(t), t, r(t)) + H(v(t), t, r(t))]dt + G(v(t), t, r(t))dB(t),  t ≥ 0  (8)
with initial data v(0) = u(0) ∈ Rⁿ, r(0) ∈ S, and some necessary results are given below. Similarly, from [6], Assumptions 1–3 ensure the existence and uniqueness of the global solution v(t; v(0), r(0), 0) of the auxiliary MJSDS (8).
Lemma 1.
Under Assumptions 1–3, MJSDS (8) satisfies
E|v(t; v(0), r(0), 0)|² ≤ E|u(0)|²e^{−εt},  t ≥ 0,  (9)
where ε has been defined in Theorem 1.
Proof. 
Write v(t; v(0), r(0), 0) as v(t) for short. From Assumptions 1–3, applying the generalized Itô operator L [6] to |v(t)|² gives
L|v(t)|² = 2v(t)ᵀ[F(v(t), t, r(t)) + H(v(t), t, r(t))] + |G(v(t), t, r(t))|² ≤ (2c1 + c2² − 2c4)|v(t)|² ≤ −ε|v(t)|².
Furthermore, we obtain, for t ≥ 0,
E[e^{εt}|v(t)|²] = E|v(0)|² + E∫_0^t e^{εs}(ε|v(s)|² + L|v(s)|²)ds ≤ E|u(0)|²,
implying the required assertion. □
Remark 8.
Here, the Lyapunov function | v ( t ) | 2 is adopted for the auxiliary MJSDS (8). In fact, we can adopt a more general Lyapunov function v T ( t ) P v ( t ) , where P is a symmetric positive definite matrix.
Lemma 2.
Under Assumptions 1 and 2, MJSDS (4) satisfies
sup_{0≤s≤t} E|u(s; u(0), r(0), 0)|² ≤ E|u(0)|²e^{(2c1+c2²+λ+c3²/λ)t},  t ≥ 0,  (12)
E|u(t; u(0), r(0), 0) − u(t − δ(t); u(0), r(0), 0)|² ≤ (4c1²h + 2c2² + 4c3²h)hE|u(0)|²e^{(2c1+c2²+λ+c3²/λ)t},  t ≥ 0,  (13)
where λ > 0 has been defined in Theorem 1.
Proof. 
Write u(t; u(0), r(0), 0) as u(t) for short. By Assumptions 1 and 2, applying the generalized Itô operator L [6] to |u(t)|² gives
L|u(t)|² = 2u(t)ᵀ[F(u(t), t, r(t)) + I(t)H(u(t − δ(t)), t, r(t))] + |G(u(t), t, r(t))|² ≤ (2c1 + c2²)|u(t)|² + 2u(t)ᵀI(t)H(u(t − δ(t)), t, r(t)).  (14)
From Remark 3, we have, for a number λ > 0 ,
2u(t)ᵀI(t)H(u(t − δ(t)), t, r(t)) ≤ λ|u(t)|² + (c3²/λ)|u(t − δ(t))|².
Substituting this into (14), it follows that, for t ≥ 0,
E|u(t)|² = E|u(0)|² + E∫_0^t L|u(s)|²ds ≤ E|u(0)|² + (2c1 + c2² + λ)∫_0^t E|u(s)|²ds + (c3²/λ)∫_0^t E|u(s − δ(s))|²ds.  (15)
From the inequality E|u(s − δ(s))|² ≤ sup_{0≤v≤s} E|u(v)|² and the Gronwall inequality, (15) implies that
sup_{0≤v≤t} E|u(v)|² ≤ E|u(0)|² + (2c1 + c2² + λ + c3²/λ)∫_0^t sup_{0≤v≤s} E|u(v)|²ds ≤ E|u(0)|²e^{(2c1+c2²+λ+c3²/λ)t},
which is the assertion (12).
Furthermore, for t ≥ 0,
E|u(t) − u(t − δ(t))|² = E|∫_{t−δ(t)}^t [F(u(s), s, r(s)) + I(s)H(u(s − δ(s)), s, r(s))]ds + ∫_{t−δ(t)}^t G(u(s), s, r(s))dB(s)|²
≤ 4δ(t)∫_{t−δ(t)}^t [E|F(u(s), s, r(s))|² + E|H(u(s − δ(s)), s, r(s))|²]ds + 2∫_{t−δ(t)}^t E|G(u(s), s, r(s))|²ds
≤ (4c1²h + 2c2²)∫_{t−δ(t)}^t E|u(s)|²ds + 4c3²h∫_{t−δ(t)}^t E|u(s − δ(s))|²ds
≤ (4c1²h + 2c2² + 4c3²h)hE|u(0)|²e^{(2c1+c2²+λ+c3²/λ)t},
which is the assertion (13). □
Next, we give the detailed proof of Theorem 1.
Proof of Theorem 1.
By the controlled MJSDS (4) and the auxiliary MJSDS (8), one easily obtains, for t ≥ 0,
|u(t) − v(t)|² = ∫_0^t {2(u(s) − v(s))ᵀ[(F(u(s), s, r(s)) − F(v(s), s, r(s))) + (I(s)H(u(s − δ(s)), s, r(s)) − H(v(s), s, r(s)))] + |G(u(s), s, r(s)) − G(v(s), s, r(s))|²}ds + ∫_0^t 2(u(s) − v(s))ᵀ(G(u(s), s, r(s)) − G(v(s), s, r(s)))dB(s).
Taking expectations, we obtain
E|u(t) − v(t)|² = E∫_0^t {2(u(s) − v(s))ᵀ[(F(u(s), s, r(s)) − F(v(s), s, r(s))) + (I(s)H(u(s − δ(s)), s, r(s)) − H(v(s), s, r(s)))] + |G(u(s), s, r(s)) − G(v(s), s, r(s))|²}ds.
According to the different expressions of I(t) on [lσ, (l+θ)σ) and [(l+θ)σ, (l+1)σ), we estimate E|u(t) − v(t)|² as follows.
For t ∈ [0, θσ), by Assumptions 1 and 2 and Lemma 2,
E|u(t) − v(t)|² = E∫_0^t {2(u(s) − v(s))ᵀ[(F(u(s), s, r(s)) − F(v(s), s, r(s))) + (H(u(s − δ(s)), s, r(s)) − H(v(s), s, r(s)))] + |G(u(s), s, r(s)) − G(v(s), s, r(s))|²}ds
≤ E∫_0^t [(2c1 + c2² + λ)|u(s) − v(s)|² + (c3²/λ)|u(s − δ(s)) − v(s)|²]ds
≤ (2c1 + c2² + λ + 2c3²/λ)∫_0^t E|u(s) − v(s)|²ds + (2c3²/λ)∫_0^{θσ} E|u(s − δ(s)) − u(s)|²ds
≤ (2c1 + c2² + λ + 2c3²/λ)∫_0^t E|u(s) − v(s)|²ds + (2c3²/λ)(4c1²h + 2c2² + 4c3²h)hθσE|u(0)|²e^{(2c1+c2²+λ+c3²/λ)θσ}
≤ 4(c3²/λ)(2c1²h + c2² + 2c3²h)hθσe^{(2c1+c2²+λ+c3²/λ)θσ}E|u(0)|²e^{(2c1+c2²+λ+2c3²/λ)t}.  (18)
Moreover, we obtain
∫_0^{θσ} E|u(s) − v(s)|²ds ≤ 4(c3²/λ)(2c1²h + c2² + 2c3²h)hθ²σ²e^{(4c1+2c2²+2λ+3c3²/λ)θσ}E|u(0)|².  (19)
For t ∈ [θσ, σ), by Assumptions 1 and 2,
E|u(t) − v(t)|² = E∫_0^t {2(u(s) − v(s))ᵀ(F(u(s), s, r(s)) − F(v(s), s, r(s))) + |G(u(s), s, r(s)) − G(v(s), s, r(s))|²}ds + E∫_0^{θσ} 2(u(s) − v(s))ᵀ(H(u(s − δ(s)), s, r(s)) − H(v(s), s, r(s)))ds + E∫_{θσ}^t 2(u(s) − v(s))ᵀ(−H(v(s), s, r(s)))ds
≤ E∫_0^t (2c1 + c2²)|u(s) − v(s)|²ds + E∫_0^{θσ} [λ|u(s) − v(s)|² + (c3²/λ)|u(s − δ(s)) − v(s)|²]ds + E∫_{θσ}^t [λ|u(s) − v(s)|² + (c3²/λ)|v(s)|²]ds
≤ E∫_0^t (2c1 + c2² + λ)|u(s) − v(s)|²ds + (c3²/λ)E∫_0^{θσ} |u(s − δ(s)) − v(s)|²ds + (c3²/λ)E∫_{θσ}^t |v(s)|²ds.  (20)
Combining the above two cases, for t ∈ [0, σ), by Lemmas 1 and 2 and the Gronwall inequality, it follows that
E|u(t) − v(t)|² ≤ (2c1 + c2² + λ)∫_0^t E|u(s) − v(s)|²ds + (2c3²/λ)∫_0^{θσ} (E|u(s) − v(s)|² + E|u(s) − u(s − δ(s))|²)ds + (c3²/λ)∫_{θσ}^t E|v(s)|²ds
≤ (2c1 + c2² + λ)∫_0^t E|u(s) − v(s)|²ds + (2c3²/λ)∫_0^{θσ} E|u(s) − v(s)|²ds + (2c3²/λ)(4c1²h + 2c2² + 4c3²h)hθσe^{(2c1+c2²+λ+c3²/λ)θσ}E|u(0)|² + (c3²/λ)(1 − θ)σe^{−εθσ}E|u(0)|²
≤ [L0(σ, θ, h)E|u(0)|² + (2c3²/λ)∫_0^{θσ} E|u(s) − v(s)|²ds]e^{(2c1+c2²+λ)t},  (21)
where L0(σ, θ, h) = 4(c3²/λ)(2c1²h + c2² + 2c3²h)hθσe^{(2c1+c2²+λ+c3²/λ)θσ} + (c3²/λ)(1 − θ)σe^{−εθσ}.
Substituting (19) into (21), we obtain, for t ∈ [0, σ),
E|u(t) − v(t)|² ≤ L1(σ, θ, h)E|u(0)|²e^{(2c1+c2²+λ)t}.  (22)
Then, we obtain
E|u(σ)|² ≤ 2E|u(σ) − v(σ)|² + 2E|v(σ)|² ≤ 2[L1(σ, θ, h)e^{(2c1+c2²+λ)σ} + e^{−εσ}]E|u(0)|².  (23)
Condition (6) means that there exists a number γ > 0 satisfying 2[L1(σ, θ, h)e^{(2c1+c2²+λ)σ} + e^{−εσ}] = e^{−γσ}. So,
E|u(σ)|² ≤ e^{−γσ}E|u(0)|².  (24)
Since the solution u(t; u(0), r(0), 0) can be regarded as u(t; u(σ), r(σ), σ) for t ≥ σ, we have
E[|u(2σ)|² | F_σ] ≤ e^{−γσ}|u(σ)|².  (25)
Taking the expectation of (25), we obtain
E|u(2σ)|² = E[E(|u(2σ)|² | F_σ)] ≤ e^{−γσ}E|u(σ)|² ≤ e^{−2γσ}E|u(0)|².  (26)
Furthermore, it follows that
E|u(lσ)|² ≤ e^{−lγσ}E|u(0)|²,  l = 1, 2, ….  (27)
The Burkholder–Davis–Gundy (B-D-G) inequality, the Hölder inequality, and the Gronwall inequality show that
E(sup_{0≤v≤σ}|u(v)|²) ≤ 4E|u(0)|² + 4E(sup_{0≤v≤σ}|∫_0^v F(u(w), w, r(w))dw|²) + 4E(sup_{0≤v≤σ}|∫_0^v I(w)H(u(w − δ(w)), w, r(w))dw|²) + 4E(sup_{0≤v≤σ}|∫_0^v G(u(w), w, r(w))dB(w)|²)
≤ 4E|u(0)|² + 4c3²σ∫_0^σ E|u(w − δ(w))|²dw + (4c1²σ + 16c2²)∫_0^σ E|u(w)|²dw
≤ 4E|u(0)|² + (4c1²σ + 16c2² + 4c3²σ)∫_0^σ E(sup_{0≤v≤w}|u(v)|²)dw ≤ c5E|u(0)|²,  (28)
where c5 = 4e^{(4c1²σ+16c2²+4c3²σ)σ}.
Similarly, we can derive
E(sup_{lσ≤v≤(l+1)σ}|u(v)|²) ≤ c5E|u(lσ)|² ≤ c5e^{−lγσ}E|u(0)|²,  l = 1, 2, ….  (29)
Chebyshev's inequality shows that, for l = 1, 2, …,
P{sup_{lσ≤v≤(l+1)σ}|u(v)|² > e^{−0.5lγσ}} ≤ E(sup_{lσ≤v≤(l+1)σ}|u(v)|²)/e^{−0.5lγσ} ≤ c5e^{−0.5lγσ}E|u(0)|².  (30)
Since Σ_{l=1}^∞ (e^{−0.5γσ})^l < ∞, the Borel–Cantelli lemma shows that there is Ω0 ⊆ Ω with P(Ω0) = 1 such that, for any ω ∈ Ω0, there is an integer O(ω) with the property that, for l ≥ O(ω),
sup_{lσ≤v≤(l+1)σ}|u(v)|² ≤ e^{−0.5lγσ}.  (31)
Noting that l → ∞ as v → ∞, (31) implies that
lim sup_{v→∞} log|u(v)|/v ≤ −γ/4  a.s.
The proof is complete. □
Remark 9.
Similar to Remark 8, the Lyapunov function (u(t) − v(t))ᵀP(u(t) − v(t)) can be adopted to obtain a more general criterion, where the symmetric matrix P is the same as in Remark 8. This is left to the reader.

4. Illustrated Application of Stochastic Neural Networks

Recurrent neural networks, as an important type of dynamic system, have attracted much attention. Owing to inevitable random factors, stochastic neural networks (SNNs, for short) and Markov jumping stochastic neural networks (MJSNNs, for short) have been proposed to model such systems, e.g., [30,31,32,33]. The stability of SNNs and MJSNNs, as a key property, has been studied by many researchers, e.g., [34,35,36,37,38,39]. Hence, guaranteeing the stability of SNNs or MJSNNs is indispensable.
Here, we consider an n-dimensional unstable MJSNN in the form
du(t) = [−A(r(t))u(t) + D(r(t))F̄(u(t))]dt + Ḡ(u(t), r(t))dB(t),  t ≥ 0,  (33)
with initial data u(0) ∈ Rⁿ and r(0) ∈ S, where u(t) ∈ Rⁿ stands for the neuron states, A(r(t)) = diag{a1(r(t)), a2(r(t)), …, an(r(t))} > 0 stands for the self-feedback connection weight matrix, D(r(t)) ∈ R^{n×n} stands for the connection weight matrix, F̄(u(t)) ∈ Rⁿ stands for the bounded activation function, and Ḡ(u(t), r(t)) ∈ R^{n×m}. Furthermore, global Lipschitz conditions are assumed for F̄ and Ḡ.
Assumption 4.
There exist a number c̄2 > 0 and a matrix Σ such that, for u, v ∈ Rⁿ, j ∈ S,
|F̄(u) − F̄(v)| ≤ |Σ(u − v)|,  |Ḡ(u, j) − Ḡ(v, j)| ≤ c̄2|u − v|.
Furthermore, F̄(0) = 0, Ḡ(0, j) = 0 for j ∈ S.
Remark 10.
The condition |F̄(u) − F̄(v)| ≤ |Σ(u − v)| implies that F̄ satisfies the global Lipschitz condition.
Remark 11.
Assumption 4 means that, for u, v ∈ Rⁿ,
|(−A(r(t))u + D(r(t))F̄(u)) − (−A(r(t))v + D(r(t))F̄(v))| ≤ ‖A(r(t))‖|u − v| + ‖D(r(t))‖|F̄(u) − F̄(v)| ≤ max_{j∈S}(‖A(j)‖ + ‖D(j)‖‖Σ‖)|u − v|,
i.e., Assumption 1 holds for MJSNN (33) with c1 = max_{j∈S}(‖A(j)‖ + ‖D(j)‖‖Σ‖).
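Given the matrix norms defined in Notation 1, this Lipschitz constant can be computed directly; a small sketch using the spectral norm (all matrix values are hypothetical):

```python
import numpy as np

def lipschitz_c1(A_list, D_list, Sigma):
    """c1 = max_j (||A(j)|| + ||D(j)|| * ||Sigma||), with ||.|| the spectral norm."""
    nrm = lambda M: np.linalg.norm(M, 2)   # largest singular value
    return max(nrm(A) + nrm(D) * nrm(Sigma) for A, D in zip(A_list, D_list))
```

For instance, with A(1) = diag(1, 2), A(2) = diag(3, 1), D(1) = 0.5I, D(2) = I, and Σ = 2I, this gives c1 = max(2 + 1, 3 + 2) = 5.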
For the unstable MJSNN (33), a periodically intermittent discrete feedback control I(t)H̄(u([t/h]h), r(t)) in the drift part is introduced, where the meaning of I(t) is given in Section 2, and H̄ also satisfies a global Lipschitz condition.
Assumption 5.
There exist numbers c̄3 > 0 and c̄4 > max_{j∈S}(‖A(j)‖ + ‖D(j)‖‖Σ‖) + c̄2²/2 such that, for u, v ∈ Rⁿ, j ∈ S,
|H̄(u, j) − H̄(v, j)| ≤ c̄3|u − v|,  vᵀH̄(v, j) ≤ −c̄4|v|².
Furthermore, H̄(0, j) = 0 for j ∈ S.
Then, the controlled MJSNN becomes
du(t) = [−A(r(t))u(t) + D(r(t))F̄(u(t)) + I(t)H̄(u([t/h]h), r(t))]dt + Ḡ(u(t), r(t))dB(t).  (34)
In what follows, we illustrate the practical application of the obtained stabilization theory to stochastic neural networks.
Corollary 1.
Under Assumptions 4 and 5, suppose there exist numbers σ > 0, θ ∈ (0, 1], and h > 0 such that condition (5) and
L̄1(σ, θ, h)e^{(2max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖)+c̄2²+λ)σ} + e^{−ε̄σ} < 0.5  (35)
are satisfied for a parameter λ > 0; then, for u(0) ∈ Rⁿ, r(0) ∈ S, the global solution u(t; u(0), r(0), 0) of the controlled MJSNN (34) satisfies
lim sup_{t→∞} log|u(t; u(0), r(0), 0)|/t ≤ −γ̄/4  a.s.,
where ε̄ = 2c̄4 − 2max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖) − c̄2²,
γ̄ = −log[2L̄1(σ, θ, h)e^{(2max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖)+c̄2²+λ)σ} + 2e^{−ε̄σ}]/σ, and
L̄1(σ, θ, h) = (c̄3²/λ)e^{−ε̄θσ}(1 − θ)σ + (4c̄3²/λ)(2(max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖))²h + c̄2² + 2c̄3²h)hθσe^{(2max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖)+c̄2²+λ+c̄3²/λ)θσ} + (8c̄3⁴/λ²)(2(max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖))²h + c̄2² + 2c̄3²h)hθ²σ²e^{(4max_{j∈S}(‖A(j)‖+‖D(j)‖‖Σ‖)+2c̄2²+2λ+3c̄3²/λ)θσ}.
Corollary 2.
With respect to the unstable MJSNN (33) satisfying Assumption 4, one can introduce a periodically intermittent discrete feedback control
I(t)H̄(u([t/h]h), r(t)),  I(t) = 1 for t ∈ [lσ, (l+θ)σ), I(t) = 0 for t ∈ [(l+θ)σ, (l+1)σ), l = 0, 1, …,
satisfying Assumption 5 and conditions (5) and (35) to exponentially stabilize it almost surely.
Remark 12.
Similar to Remark 5, compared with the results on the periodically intermittent control strategy for SNNs, our Corollaries 1 and 2 emphasize the discrete feedback control strategy. Compared with the results on the discrete feedback control strategy for SNNs, our Corollaries 1 and 2 emphasize the periodically intermittent control strategy.
Remark 13.
Here, we have stated the application of the stabilization theory to SNNs. In fact, the theory can also be applied to other fields, such as stochastic complex networks and stochastic multi-agent systems.

5. Conclusions

Using a comparison-principle approach, this paper has studied periodically intermittent discrete feedback control for unstable MJSDSs. Furthermore, the stabilization theory has been applied to SNNs.

6. Further Work

The key technique in deriving the stabilization theory lies in the flow property of the controlled MJSDS. However, for Markov jumping stochastic delay differential systems (MJSDDSs, for short), this Markovian property is lost. Hence, how to introduce intermittent discrete feedback control to stabilize an unstable MJSDDS is an open problem and will be our future work.

Author Contributions

Conceptualization, Z.L. (Zhiyou Liu), L.F., X.L. and Z.L. (Zhigang Lu); Formal analysis, Z.L. (Zhiyou Liu) and L.F.; Methodology, X.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Nos. 61873224 and 61873225), the China Postdoctoral Science Foundation (No. 2017M621588), the Natural Science Foundation of Hebei Province of China (No. A2019209005), the Science and Technology Research Foundation of Higher Education Institutions of Hebei Province of China (No. QN2017116), and the Tangshan Science and Technology Bureau Program of Hebei Province of China (No. 19130222g).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Q. Stock trading: An optimal selling rule. SIAM J. Control Optim. 2001, 40, 64–87.
  2. Mariton, M. Jump Linear Systems in Automatic Control; Marcel Dekker: New York, NY, USA, 1990.
  3. Skorokhod, A.V. Asymptotic Methods in the Theory of Stochastic Differential Equations; American Mathematical Society: Providence, RI, USA, 1989.
  4. Shaikhet, L. Stability of stochastic hereditary systems with Markov switching. Theory Stoch. Process. 1996, 2, 180–184.
  5. Mao, X. Stability of stochastic differential equations with Markovian switching. Stoch. Process. Their Appl. 1999, 79, 45–67.
  6. Mao, X.; Yuan, C. Stochastic Differential Equations with Markovian Switching; Imperial College Press: London, UK, 2006.
  7. Teel, A.R.; Subbaraman, A.; Sferlazza, A. Stability analysis for stochastic hybrid systems: A survey. Automatica 2014, 50, 2435–2456.
  8. Li, B. A note on stability of hybrid stochastic differential equations. Appl. Math. Comput. 2017, 299, 45–57.
  9. Feng, L.; Cao, J.; Liu, L.; Alsaedi, A. Asymptotic stability of nonlinear hybrid stochastic systems driven by linear discrete time noises. Nonlinear Anal. Hybrid Syst. 2019, 33, 336–352.
  10. Feng, L.; Wu, Z.; Cao, J.; Zheng, S.; Alsaadi, F.E. Exponential stability for nonlinear hybrid stochastic systems with time varying delays of neutral type. Appl. Math. Lett. 2020, 107, 106468.
  11. Luo, S.; Deng, F.; Zhang, B.; Hu, Z. Almost sure stability of hybrid stochastic systems under asynchronous Markovian switching. Syst. Control Lett. 2019, 133, 104556.
  12. Mao, X.; Yin, G.; Yuan, C. Stabilization and destabilization of hybrid systems of stochastic differential equations. Automatica 2007, 43, 264–273.
  13. Liu, L.; Shen, Y. Noise suppresses explosive solutions of differential systems with coefficients satisfying the polynomial growth condition. Automatica 2012, 48, 619–624.
  14. Deng, F.; Luo, Q.; Mao, X. Stochastic stabilization of hybrid differential equations. Automatica 2012, 48, 2321–2328.
  15. Mao, X. Almost sure exponential stabilization by discrete-time stochastic feedback control. IEEE Trans. Autom. Control 2016, 61, 1619–1624.
  16. Feng, L.; Li, S.; Song, R.; Li, Y. Suppression of explosion by polynomial noise for nonlinear differential systems. Sci. China Inf. Sci. 2018, 61, 136–146.
  17. Mao, X. Stabilization of continuous-time hybrid stochastic differential equations by discrete-time feedback control. Automatica 2013, 49, 3677–3681.
  18. Mao, X.; Liu, W.; Hu, L.; Luo, Q.; Lu, J. Stabilization of hybrid stochastic differential equations by feedback control based on discrete time state observations. Syst. Control Lett. 2014, 73, 88–95.
  19. You, S.; Liu, W.; Lu, J. Stabilization of hybrid systems by feedback control based on discrete-time state observations. SIAM J. Control Optim. 2015, 53, 905–925.
  20. Fei, C.; Fei, W.; Mao, X.; Xia, D.; Yan, L. Stabilization of highly nonlinear hybrid systems by feedback control based on discrete-time state observations. IEEE Trans. Autom. Control 2020, 65, 2899–2912.
  21. Feng, L.; Liu, Q.; Cao, J.; Zhang, C.; Alsaadi, F. Stabilization in general decay rate of discrete feedback control for non-autonomous Markov jump stochastic systems. Appl. Math. Comput. 2022, 417, 126771.
  22. Zochowski, M. Intermittent dynamical control. Phys. D 2000, 145, 181–190.
  23. Li, C.; Gang, F.; Liao, X. Stabilization of nonlinear systems via periodically intermittent control. IEEE Trans. Circuits Syst. II Express Briefs 2007, 54, 1019–1023.
  24. Xia, W.; Cao, J. Pinning synchronization of delayed dynamical networks via periodically intermittent control. Chaos 2009, 19, 013120.
  25. Yang, X.; Cao, J. Stochastic synchronization of coupled neural networks with intermittent control. Phys. Lett. A 2009, 373, 3259–3272.
  26. Yang, S.; Li, C.; Huang, T. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control. Neural Netw. 2016, 75, 162–172.
  27. Chen, W.; Zhong, J.; Zheng, W. Delay-independent stabilization of a class of time-delay systems via periodically intermittent control. Automatica 2016, 71, 89–97.
  28. Wang, P.; Hong, Y.; Su, H. Stabilization of stochastic complex-valued coupled delayed systems with Markovian switching via periodically intermittent control. Nonlinear Anal. Hybrid Syst. 2018, 29, 395–413.
  29. Guo, B.; Wu, Y.; Xiao, Y.; Zhang, C. Graph-theoretic approach to synchronizing stochastic coupled systems with time-varying delays on networks via periodically intermittent control. Appl. Math. Comput. 2018, 331, 341–357.
  30. Cao, J.; Rakkiyappan, R.; Maheswari, K. Exponential H filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci. China Technol. Sci. 2016, 59, 387–402. [Google Scholar] [CrossRef]
  31. Liu, L.; Cao, J.; Qian, C. pth moment exponential input-to-state stability of delayed recurrent neural networks with Markovian switching via vector Lyapunov function. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3152–3163. [Google Scholar] [CrossRef]
  32. Shen, H.; Zhu, Y.; Zhang, L.; Park, J.H. Extended dissipative state estimation for Markov jump neural networks with unreliable links. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 346–358. [Google Scholar] [CrossRef]
  33. Cheng, J.; Park, J.H.; Karimi, H.R.; Shen, H. A flexible terminal approach to sampled-data exponentially synchronization of Markovian neural networks with time-varying delayed signals. IEEE Trans. Cybern. 2018, 48, 2232–2244. [Google Scholar]
  34. Hu, S.; Liao, X.; Mao, X. Stochastic Hopfield neural networks. J. Phys. A Math. Gen. 2003, 36, 2235–2249. [Google Scholar] [CrossRef]
  35. Zhou, Q.; Wan, L. Exponential stability of stochastic delayed Hopfield neural networks. Appl. Math. Comput. 2008, 199, 84–89. [Google Scholar] [CrossRef]
  36. Zhu, Q.; Cao, J. Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays. IEEE Trans. Syst. Man Cybern. Part B 2011, 41, 341–353. [Google Scholar]
  37. Wan, L.; Zhou, Q.; Wang, P. Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays. Nonlinear Anal. Real World Appl. 2012, 13, 953–958. [Google Scholar] [CrossRef]
  38. Zhu, S.; Shen, Y.; Chen, G. Noise suppress exponential growth for hybrid Hopfield neural networks. Math. Comput. Simul. 2012, 86, 10–20. [Google Scholar] [CrossRef]
  39. Feng, L.; Cao, J.; Liu, L. Stability analysis in a class of Markov switched stochastic Hopfield neural networks. Neural Process. Lett. 2019, 50, 413–430. [Google Scholar] [CrossRef]

Liu, Z.; Feng, L.; Li, X.; Lu, Z.; Meng, X. Stabilization of Periodical Discrete Feedback Control for Markov Jumping Stochastic Systems. Symmetry 2021, 13, 2447. https://doi.org/10.3390/sym13122447



