Stabilization of Markov Jumping Stochastic Systems by Periodically Intermittent Discrete Feedback Control

Motivated by the two strategies of intermittent control and discrete feedback control, this paper introduces a periodically intermittent discrete feedback control in the drift part to stabilize an unstable Markov jumping stochastic differential system. By the comparison principle approach, stabilization is achieved in the sense of almost sure exponential stability. Furthermore, the stabilization theory is applied to Markov jumping stochastic recurrent neural networks.


Introduction
Stochastic systems play an important role in a wide range of fields. Markov jumping stochastic differential systems (MJSDSs, for short) offer a powerful mathematical model for option pricing, manufacturing engineering, and electric power engineering [1][2][3]; their subsystems switch according to a given Markovian chain. In the study of MJSDSs, stability, as a basic and significant property, has received considerable emphasis, e.g., [4][5][6][7][8][9][10][11]. For an unstable system, various strategies have been introduced to stabilize it, such as sampled control, impulse control, pinning control, feedback control, and intermittent control. Generally speaking, for stochastic differential systems, feedback control can be divided into control in the drift part and control in the diffusion part, e.g., [12][13][14][15][16].
This paper focuses in particular on feedback control in the drift part.
With respect to an unstable MJSDS of the form

du(t) = F(u(t), t, r(t))dt + G(u(t), t, r(t))dB(t), t ≥ 0, (1)

it is traditional to introduce a feedback control H(u(t), t, r(t)) in the drift part, based on continuous state observations, to stabilize it, where u(t) ∈ R^n stands for the system state, r(t) ∈ S stands for the switching rule (a Markovian chain), B(t) stands for an m-dimensional Brownian motion, and F, H : R^n × R+ × S → R^n, G : R^n × R+ × S → R^{n×m}. More concrete definitions are given in the last paragraph of this section. However, designing a continuous feedback control is costly and often impossible to implement in reality. To reduce the stabilization cost, Mao [17] introduced a feedback control in the drift part, based on discrete state observations, to stabilize the unstable system (1), so that the controlled system becomes

du(t) = [F(u(t), t, r(t)) + H(u([t/h]h), t, r(t))]dt + G(u(t), t, r(t))dB(t), (2)

where h > 0 is the observation interval between two consecutive observations, and [·] is the integer-part operator, i.e., [t/h] = i for t ∈ [ih, (i + 1)h), i = 0, 1, 2, · · · . Subsequent results on discrete feedback control were then reported, e.g., [18][19][20][21]. Additionally, compared with the continuous control strategy, the intermittent control strategy is more effective in reducing the control cost. The intermittent control strategy was initiated by Zochowski [22]; it divides each control cycle into a working time and a non-working time. The case with periodic working time is called periodically intermittent control and has received much attention, e.g., [23][24][25][26][27][28][29].
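As a concrete illustration of the discrete-observation mechanism, the following Python sketch simulates a version of the controlled system (2) by the Euler–Maruyama method. The functions F, G, H, the parameter values, and the omission of the Markovian switching r(t) are all illustrative assumptions made here for brevity, not specifications from the paper.

```python
import numpy as np

def em_discrete_feedback(F, G, H, u0, h, dt, T, rng):
    """Euler-Maruyama simulation of
        du = [F(u) + H(u([t/h] h))] dt + G(u) dB,
    where the control H is evaluated at the most recent observation
    instant [t/h] h rather than continuously. Markovian switching is
    omitted for brevity; dt is assumed to divide h."""
    steps_per_obs = round(h / dt)
    n_steps = round(T / dt)
    u = np.array(u0, dtype=float)
    u_obs = u.copy()                 # state at the last observation instant
    path = [u.copy()]
    for i in range(n_steps):
        if i % steps_per_obs == 0:   # new observation at t = [t/h] h
            u_obs = u.copy()
        dB = rng.normal(0.0, np.sqrt(dt), size=u.shape)
        u = u + (F(u) + H(u_obs)) * dt + G(u) * dB
        path.append(u.copy())
    return np.array(path)
```

With an unstable linear drift F(u) = 0.5u and a linear control H(v) = −2v, the sample-and-hold control still drives the state toward zero when h is small relative to the system's time scale.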
Therefore, motivated by the discrete feedback control and periodically intermittent control, one may ask the following question.
Question: With respect to the unstable MJSDS (1), could one introduce a periodically intermittent discrete feedback control in the drift part, applied only on the working intervals [lσ, (l + θ)σ), to make the controlled MJSDS stable, where l = 0, 1, 2, · · · , θ ∈ (0, 1] stands for the intermittent control rate, and σ stands for the control period?
Up to now, there have been few answers to this question. This paper offers a positive answer and fills the research gap. The contributions of this paper consist of the following points: (1) the issue and criteria of periodically intermittent discrete feedback control in the drift part are proposed for MJSDSs; (2) the above stabilization theory is successfully applied to Markov jumping stochastic recurrent neural networks, which shows the effectiveness of our theory.

Notation 1. Assume (Ω, F, {F_t}_{t≥0}, P) is a complete probability space with a filtration satisfying the usual conditions. B(t) stands for an m-dimensional Brownian motion, and r(t) stands for a right-continuous Markovian chain with the finite state space S = {1, · · · , N} and the generator Γ = (λ_{k1k2})_{N×N}, which satisfies λ_{k1k2} > 0 if k1 ≠ k2 and ∑_{k2∈S} λ_{k1k2} = 0. The operator E stands for the expectation with respect to P.
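Since the chain r(t) drives the switching, it may help to see how such a chain is sampled from its generator. The following Python sketch (function name hypothetical; state space relabeled {0, · · · , N−1} for zero-based indexing) draws exponential holding times from the diagonal rates and jump targets from the off-diagonal rows:

```python
import numpy as np

def simulate_markov_chain(Gamma, r0, T, rng):
    """Sample a right-continuous Markovian chain r(t) on {0, ..., N-1}
    with generator Gamma: the holding time in state k is
    Exp(-Gamma[k, k]), and the chain then jumps to j != k with
    probability Gamma[k, j] / (-Gamma[k, k]). The returned jump times
    cover [0, T] (the last recorded jump may exceed T)."""
    times, states = [0.0], [r0]
    t, k = 0.0, r0
    while t < T:
        rate = -Gamma[k, k]
        if rate <= 0:                     # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        probs = Gamma[k].copy()
        probs[k] = 0.0
        probs /= rate                     # row sums to zero, so this normalizes
        k = rng.choice(len(Gamma), p=probs)
        times.append(t)
        states.append(k)
    return np.array(times), np.array(states)
```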

Problem Description and Main Results
This paper is mainly concerned with the unstable MJSDS (1) with initial data u(0) ∈ R^n, r(0) ∈ S. Furthermore, global Lipschitz conditions are assumed for the functions F and G.

Assumption 1. There exist two numbers c1, c2 > 0 such that |F(u, s, j) − F(v, s, j)| ≤ c1|u − v| and |G(u, s, j) − G(v, s, j)| ≤ c2|u − v| for ∀u, v ∈ R^n, s ∈ R+, j ∈ S. Furthermore, F(0, s, j) = 0, G(0, s, j) = 0 for ∀s ∈ R+, j ∈ S.
From Assumption 1, MJSDS (1) admits the zero solution. However, Assumption 1 cannot ensure its stability. Hence, as discussed above, a periodically intermittent discrete feedback control in the drift part is introduced for MJSDS (1). For simplicity, the controlled MJSDS (3) can be rewritten as

du(t) = [F(u(t), t, r(t)) + I(t)H(u([t/h]h), t, r(t))]dt + G(u(t), t, r(t))dB(t), (4)

where I(t) equals 1 for t ∈ [lσ, (l + θ)σ) and 0 for t ∈ [(l + θ)σ, (l + 1)σ), and θσ stands for the control width. It is further assumed that θ ∈ [h/σ, 1], i.e., θσ ≥ h, since the control period should not be less than the control width, and the control width should not be less than the observation interval between two consecutive observations. In fact, the constraint θσ ≥ h can be removed.
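The indicator I(t) can be written down directly from its definition. The following Python helper (name hypothetical) returns 1 on the working intervals [lσ, (l + θ)σ) and 0 on the rest intervals:

```python
def intermittent_indicator(t, sigma, theta):
    """I(t) = 1 on the working intervals [l*sigma, (l+theta)*sigma)
    and 0 on the rest intervals [(l+theta)*sigma, (l+1)*sigma)."""
    phase = t % sigma                    # position within the current period
    return 1 if phase < theta * sigma else 0
```

For example, with σ = 1 and θ = 0.4, the control works on [0, 0.4), rests on [0.4, 1), and then repeats.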

Remark 1.
Compared with the assertions on traditional continuous feedback control in the drift part, this paper emphasizes feedback control based on discrete observations. Compared with the results on discrete feedback control in the drift part [17][18][19][20][21], this paper emphasizes a periodically intermittent control strategy. In sum, the periodically intermittent discrete feedback control introduced in this paper can reduce the control cost effectively.

Remark 2.
The controlled MJSDS (4) is complicated because it involves two switching rules: one is the original Markovian switching rule, and the other is the newly added periodically intermittent switching rule. From another angle, system (4) is a periodically switched system: one of its subsystems is the original unstable MJSDS, and the other is the discrete feedback controlled MJSDS.
Before the main stabilization result is presented, the following assumptions are given for the function H.

Assumption 3. There exists a number c3 > c1 + c2²/2 such that v^T H(v, s, j) ≤ −c3|v|² for ∀v ∈ R^n, s ∈ R+, j ∈ S, where c1, c2 are given in Assumption 1.

Remark 4.
Given the parameters c1 and c2 in Assumption 1, many choices exist for the function H satisfying Assumptions 2 and 3. For instance, H(v, s, j) can take the linear form −c4 v with c4 > c1 + c2²/2, which satisfies Assumptions 2 and 3 with c3 = c4.
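As a quick numerical sanity check of this remark, the following Python snippet verifies that the linear control H(v) = −c4 v satisfies v^T H(v) = −c4|v|², i.e., the coercivity part of Assumption 3 with c3 = c4. The values of c1 and c2 here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def H_linear(v, c4):
    """Linear control from Remark 4: H(v) = -c4 * v."""
    return -c4 * v

# Check v^T H(v) = -c4 |v|^2 on random vectors, i.e. Assumption 3
# holds with c3 = c4 (c1, c2 chosen purely for illustration).
rng = np.random.default_rng(1)
c1, c2 = 0.5, 1.0
c4 = c1 + c2**2 / 2 + 0.1          # any c4 > c1 + c2^2/2
for _ in range(100):
    v = rng.normal(size=3)
    assert np.isclose(v @ H_linear(v, c4), -c4 * np.dot(v, v))
```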

Definition 1. The controlled MJSDS (4) is said to be almost surely exponentially stable if lim sup_{t→∞} (1/t) log |u(t)| < 0 a.s. for any initial data u(0) ∈ R^n, r(0) ∈ S.
In the following, we give the main stabilization result.
Theorem 1 establishes the almost sure exponential stability of the controlled MJSDS (4). Recalling the original question in Section 1, the following theorem can be obtained directly from Theorem 1.

Theorem 2.
With respect to the unstable MJSDS (1) satisfying Assumption 1, one can introduce a periodically intermittent discrete feedback control satisfying Assumptions 2 and 3 and conditions (5) and (6) to exponentially stabilize it almost surely, where the related parameters are as proposed above.

Remark 7.
In this procedure, (2) is the key and most complicated step. A common method is to choose some of the parameters given the others.
where ε has been defined in Theorem 1.
Furthermore, we obtain for ∀t ≥ 0, implying the required assertion.

Remark 8.
Here, the Lyapunov function |v(t)|² is adopted for the auxiliary MJSDS (8). In fact, we can adopt a more general Lyapunov function v^T(t)Pv(t), where P is a symmetric positive definite matrix.
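The reason v^T P v can replace |v|² in the analysis is the standard bound λ_min(P)|v|² ≤ v^T P v ≤ λ_max(P)|v|², so exponential decay of one function implies exponential decay of the other. The following Python snippet (with a randomly generated positive definite P, purely for illustration) checks these bounds numerically:

```python
import numpy as np

# For a symmetric positive definite P, the Lyapunov function
# V(v) = v^T P v satisfies lmin(P)|v|^2 <= V(v) <= lmax(P)|v|^2,
# which is what lets it replace |v|^2 in the stability analysis.
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
P = M @ M.T + np.eye(3)                 # symmetric positive definite
lmin, lmax = np.linalg.eigvalsh(P)[[0, -1]]
for _ in range(100):
    v = rng.normal(size=3)
    V = v @ P @ v
    assert lmin * v @ v - 1e-9 <= V <= lmax * v @ v + 1e-9
```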
where λ > 0 has been defined in Theorem 1.
Next, we give the detailed proof of Theorem 1.
Remark 9. Similar to Remark 8, the Lyapunov function (u(t) − v(t))^T P(u(t) − v(t)) can be adopted to obtain a more general criterion, where the symmetric matrix P is the same as that in Remark 8. The details are left to readers.

Illustrated Application of Stochastic Neural Networks
Recurrent neural networks, as an important type of dynamic system, have attracted much attention. Owing to inevitable random factors, stochastic neural networks (SNNs, for short) and Markov jumping stochastic neural networks (MJSNNs, for short) have been formulated to model such systems, e.g., [30][31][32][33]. The stability of SNNs and MJSNNs, as a key property, has been studied by many researchers, e.g., [34][35][36][37][38][39]. Hence, guaranteeing the stability of SNNs or MJSNNs is indispensable.
Here, we consider an n-dimensional unstable MJSNN of the form

du(t) = [−A(r(t))u(t) + D(r(t))F̄(u(t))]dt + Ḡ(u(t), r(t))dB(t), t ≥ 0, (33)

with initial data u(0) ∈ R^n and r(0) ∈ S, where u(t) ∈ R^n stands for the neuron states, A(r(t)) = diag{a1(r(t)), a2(r(t)), · · · , an(r(t))} > 0 stands for the self-feedback connection weight matrix, D(r(t)) ∈ R^{n×n} stands for the connection weight matrix, F̄(u(t)) ∈ R^n stands for the bounded activation function, and Ḡ(u(t), r(t)) ∈ R^{n×m}. Furthermore, global Lipschitz conditions are assumed for F̄ and Ḡ.
For the unstable MJSNN (33), a periodically intermittent discrete feedback control I(t)H̄(u([t/h]h), r(t)) in the drift part is introduced, where the meaning of I(t) is given in Section 2, and H̄ also satisfies a global Lipschitz condition.
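As an illustration of how the control acts on the network, the following Python sketch simulates a two-neuron version of the controlled MJSNN by the Euler–Maruyama method. The tanh activation, the linear control H̄(v) = −c4 v, the linear diffusion g·u, the single-mode simplification (Markovian switching omitted), and all parameter values are illustrative assumptions made here, not choices from the paper.

```python
import numpy as np

def simulate_controlled_snn(A, D, c4, sigma, theta, h, dt, T, u0, g, rng):
    """Euler-Maruyama sketch of the controlled network
        du = [-A u + D f(u) + I(t) H(u([t/h] h))] dt + g u dB,
    with f = tanh (a bounded activation, chosen for illustration),
    linear control H(v) = -c4 v, and a single mode (Markovian
    switching omitted for brevity). Returns the terminal state."""
    steps = round(T / dt)
    obs_stride = round(h / dt)
    u = np.array(u0, dtype=float)
    u_obs = u.copy()
    for i in range(steps):
        t = i * dt
        if i % obs_stride == 0:              # discrete observation of the state
            u_obs = u.copy()
        I = 1.0 if (t % sigma) < theta * sigma else 0.0
        drift = -A @ u + D @ np.tanh(u) - I * c4 * u_obs
        dB = rng.normal(0.0, np.sqrt(dt), size=u.shape)
        u = u + drift * dt + g * u * dB
    return u
```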
Remark 12. Similar to Remark 5, compared with the results on the periodically intermittent control strategy for SNNs, our Corollaries 1 and 2 emphasize the discrete feedback control strategy. Compared with the results on the discrete feedback control strategy for SNNs, our Corollaries 1 and 2 emphasize the periodically intermittent control strategy.

Remark 13.
Here, we have stated the application of the stabilization theory to SNNs. In fact, the theory can also be applied to other fields, such as stochastic complex networks and stochastic multi-agent systems.

Conclusions
By the comparison principle approach, the problem of periodically intermittent discrete feedback control for unstable MJSDSs has been studied in this paper. Furthermore, the stabilization theory has been applied to SNNs.

Further Work
The key technique in deriving the stabilization theory lies in the flow property of the controlled MJSDS. However, for Markov jumping stochastic delay differential systems (MJSDDSs, for short), the Markovian property is lost. Hence, how to introduce an intermittent discrete feedback control to stabilize an unstable MJSDDS remains an open problem. This will be our future work.