Information 2017, 8(3), 90; doi:10.3390/info8030090

Article
Stabilization of Discrete-Time Markovian Jump Systems by a Partially Mode-Unmatched Fault-Tolerant Controller
Mo Liu and Guoliang Wang *
School of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China
* Correspondence:
Received: 29 June 2017 / Accepted: 22 July 2017 / Published: 25 July 2017

Abstract: In this paper, a fault-tolerant controller is proposed to study the stabilization problem of discrete-time Markovian jump systems whose operation modes are not only partially available but also unmatched. These general properties of the controller are modeled as a controller having polytopic forms and uncertainties simultaneously. Based on the proposed model, concise conditions for the existence of such a controller are established in terms of linear matrix inequalities (LMIs), and are then extended to the observer design problem. Compared with traditional methods, not only is the designed controller more general, but the established conditions are also fault-free and can be solved directly. Finally, numerical examples are used to demonstrate the effectiveness of the proposed methods.
Keywords:
Markovian jump systems; stabilization; fault-tolerant controller; partially available mode; unmatched mode; linear matrix inequality (LMI)

1. Introduction

From the viewpoint of practical systems, many dynamical systems have unstable structures due to random and abrupt variations, such as random failures of components, sudden changes of the environment, and so on. As is well known, Markovian jump systems (MJSs), which are modeled by a set of subsystems with transitions among the modes governed by a Markov chain taking values in a finite set, are often employed to describe such dynamical systems. Because many practical systems can be described in the form of an MJS, this class of systems has attracted great attention from both domestic and foreign scholars, and has been widely studied in the fields of industrial process control, space flight, medical treatment, electric power, and economics. During the past few decades, many important results on such systems have emerged, covering stability analysis [1,2,3,4,5,6], stabilization [7,8,9,10], the delay case [11,12,13], output control [14,15], $H_\infty$ control [16,17,18,19] and filtering [20,21,22], robust control [23,24,25], sliding mode control [26], state estimation [27], fault detection [28], synchronization [29,30], and so on.
By examining the results on system synthesis in the literature, it is seen that many references consider the fault-tolerant control problem. In actual systems, it is very possible and even unavoidable that faults occur during system operation. Fault-tolerant control [31,32] provides a way to improve the reliability of complex systems: its main aim is to guarantee that the system remains capable of performing its basic functions when faults exist. That is to say, the closed-loop system affected by faults of actuators, sensors, or internal components should still be stable and maintain acceptable performance. From the results published during the past decade, fault-tolerant control can be divided into two kinds: active fault-tolerant control [33,34] and passive fault-tolerant control. On the one hand, robust control techniques [24,25,35,36] are usually adopted by the passive methods in order to make the closed-loop system insensitive to certain faults; without changing the structure or parameters of the controller, the required performance indicators can be achieved. On the other hand, active fault-tolerant control needs to readjust both the parameters and the structure of the controller; most active fault-tolerant control schemes need fault detection and diagnosis (FDD), and all kinds of fault information are also required. Recently, some results concerning the fault-tolerant control of Markovian jump systems [31,37] were proposed. By referring to the existing literature on Markovian jump systems, the main methods can be divided into two kinds: mode-dependent and mode-independent ones. As for the mode-dependent ones, all the operation modes should be synchronously available online. On the contrary, mode-independent results have nothing to do with the operation mode; in other words, the information of the operation mode is totally ignored.
Obviously, these two kinds of methods represent two extremes. Based on these facts, the traditional methods built on them have limited applicability and should be revisited carefully. One typical example is the networked control system. Because all information is transmitted through unreliable communication networks, the transmitted data inevitably experience induced delays, packet dropouts, and disordering. When the operation mode is transmitted through such a network, it may be lost, asynchronous, or disordered. As a result, neither mode-dependent nor mode-independent methods are suitable for this complicated case, and it is meaningful and necessary to consider the related problems of Markovian jump systems whose operation mode satisfies such complicated conditions. Several challenges will be encountered here. First, because an MJS has many operation modes, the situation becomes very complicated when these modes are partially available and unmatched, and the corresponding problems are hard to analyze since so many possible events must be taken into account; how to describe this general phenomenon suitably is thus the first problem to be solved. Unfortunately, very few references report on this problem. Second, the controller fault in this paper is described by a binary structured uncertainty. Though this gives a better description and a wider application scope, it includes many possible fault combinations and makes the computational complexity very large, especially for an MJS with partially available and unmatched modes; how to reduce the complexity and obtain concise results are also necessary and meaningful problems. Third, but not least, if such a model can be established, how to make its analysis and synthesis easy should be considered too, since these complexities lead to many difficulties.
Based on the above discussion, it is meaningful and necessary to consider the related problems of MJSs whose operation mode satisfies such complicated conditions. Up to now, to the best of our knowledge, very few results are available. All these observations motivate the current research.
In this paper, the stabilization problem of discrete-time Markovian jump systems is solved by a partially mode-unmatched fault-tolerant controller. The main contributions of this paper are summarized as follows: (1) A kind of partially mode-unmatched fault-tolerant controller is proposed, whose faults are described by a binary structured uncertainty. In particular, it contains the traditional controllers, such as mode-dependent controllers, mode-independent controllers, and mode-disordered or unmatched controllers, as special cases; (2) Sufficient existence conditions for the designed controller are presented in LMI form and are fault-free. Compared with similar results, the proposed conditions have concise forms and can be solved easily; (3) Because the results are LMIs, they are extended to design a fault-tolerant observer, whose form is more general and contains the traditional observers.

2. Problem Formulation

Consider a kind of discrete-time Markovian jump system described as

$$x(k+1) = A(r_k)x(k) + B(r_k)u(k) \quad (1)$$

where $x(k) \in \mathbb{R}^n$ is the state vector and $u(k) \in \mathbb{R}^m$ is the control input vector. $A(r_k)$ and $B(r_k)$ are known matrices of compatible dimensions. $\{r_k, k \in \mathbb{Z}^{+}\}$ is a discrete-time Markov process taking values in a finite space $S = \{1, 2, \ldots, N\}$ with transition probability matrix (TPM) $\Pi = (\pi_{ij}) \in \mathbb{R}^{N \times N}$ given by

$$\Pr(r_{k+1} = j \mid r_k = i) = \pi_{ij} \quad (2)$$

where $\pi_{ij} \geq 0$ and $\sum_{j=1}^{N} \pi_{ij} = 1$ for all $i, j \in S$. When $r_k = i \in S$, the system matrices of the $i$th mode are denoted by $A_i$ and $B_i$.
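As a quick illustration of how a mode sequence governed by (2) can be generated, the following sketch samples a trajectory of $r_k$ (modes are indexed from 0 here for convenience; the TPM is the two-mode one used later in Example 1):

```python
import numpy as np

def simulate_markov_chain(Pi, r0, steps, rng):
    """Sample a discrete-time Markov chain r_k whose TPM Pi has rows summing to 1."""
    Pi = np.asarray(Pi, dtype=float)
    assert np.allclose(Pi.sum(axis=1), 1.0), "each row of a TPM must sum to 1"
    modes = [r0]
    for _ in range(steps):
        # Pr(r_{k+1} = j | r_k = i) = pi_ij: draw the next mode from row i
        modes.append(int(rng.choice(len(Pi), p=Pi[modes[-1]])))
    return modes

rng = np.random.default_rng(0)
Pi = np.array([[0.2, 0.8],
               [0.3, 0.7]])
path = simulate_markov_chain(Pi, r0=0, steps=100, rng=rng)
```
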
In this paper, the designed state feedback controller may have faults, and its operation mode is partially available and unmatched. For any mode $r_k = i \in S$, it is described by

$$u(k) = \sum_{l \in S_{F_i}} \lambda_l(k) \Delta_l K_l x(k) \quad (3)$$

where

$$\sum_{l \in S_{F_i}} \lambda_l(k) = 1, \quad \lambda_l(k) \in \{0, 1\}, \quad S_{F_i} = \{f_{i1}, f_{i2}, \ldots, f_{i\kappa}\} \subseteq S \quad (4)$$

and $K_l$ is the control gain to be determined. Here, the subset $S_{F_i} = \{f_{i1}, f_{i2}, \ldots, f_{i\kappa}\}$ acts as the switching signal and specifies which controller gain is activated at the switching instant. Though the definition of subset $S_{F_i}$ is based on mode $i$, it is assumed that no switching happens among these subsets; the main reason is to keep the problem considered here concise and definite. The parameter $\Delta_l$ is a diagonal matrix used to describe whether a controller fault happens or not. Its form is defined as

$$\Delta_l \in \Lambda = \{\Delta = \mathrm{diag}(\delta_1, \ldots, \delta_q) \mid \delta_i \in \{0, 1\}\}$$
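The fault set $\Lambda$ can be enumerated directly; a minimal sketch (the dimension $q = 3$ below is only an example):

```python
from itertools import product
import numpy as np

def fault_set(q):
    """All 2**q diagonal fault matrices diag(delta_1, ..., delta_q), delta_i in {0, 1}."""
    return [np.diag(bits).astype(float) for bits in product((0, 1), repeat=q)]

Lambda = fault_set(3)   # 2**3 = 8 possible fault combinations
```
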
Clearly, there will be no faults if $\Delta_l = I_q$. It is also seen that there are $2^q$ possible combinations representing the controller faults; equivalently, $\Lambda$ has $2^q$ elements. Then, the resulting closed-loop system is written as

$$x(k+1) = A(r_k)x(k) + B(r_k)\sum_{l \in S_{F_{r_k}}} \lambda_l(k)\Delta_l K_l x(k) \quad (5)$$
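To make the closed-loop mechanism of (5) concrete, the sketch below simulates one run in which the controller, at each step, activates exactly one gain $K_l$ with $l \in S_{F_{r_k}}$, so the applied controller mode may differ from the true system mode. All numerical data here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical two-mode data for illustration only
A = [np.array([[0.0, 1.0], [-1.0, 1.7]]),
     np.array([[0.0, 1.0], [-0.8, 0.7]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
K = [np.array([[1.0, -1.7]]), np.array([[0.8, -0.7]])]   # hypothetical gains
S_F = {0: [0, 1], 1: [1]}        # available controller modes per system mode
Pi = np.array([[0.2, 0.8], [0.3, 0.7]])

rng = np.random.default_rng(1)
x, r = np.array([1.0, 3.0]), 0
traj = [x.copy()]
for _ in range(30):
    l = int(rng.choice(S_F[r]))          # exactly one lambda_l(k) equals 1
    Delta = np.eye(1)                    # Delta_l = I_q: the fault-free case
    u = Delta @ (K[l] @ x)               # controller (3) with the activated gain K_l
    x = A[r] @ x + (B[r] @ u)            # closed-loop update (5)
    r = int(rng.choice(2, p=Pi[r]))      # Markov jump per the TPM
    traj.append(x.copy())
```
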
Remark 1.
Different from references [38,39,40,41,42], the fault of controller (3) is described by a binary structured uncertainty. This formulation gives a better description and a wider application scope. However, $2^q$ possible fault combinations are included, which makes the computational complexity very large, especially when the underlying system is a switching system with $N$ operation modes. How to reduce the complexity and obtain results in concise and easily solvable forms are thus necessary and meaningful problems. Moreover, two kinds of uncertainties are contained in controller (3) simultaneously. For one thing, a polytopic uncertainty is included and used to handle the partially available and unmatched modes; for another, the binary structured uncertainty is used to deal with the controller faults. In this sense, controller (3) is actually a robust controller and can bear uncertainties applied to the desired controller.
Remark 2.
Even when there is no fault, controller (3) designed for Markovian jump systems is still more general and has superiority in representing more complicated cases. For example, if $S_{F_i} = \{i\}$, $i \in S$, $\lambda_l(k) \equiv 1$ and $\Delta_l \equiv I_q$, $l \in S_{F_i}$, it reduces to traditional mode-dependent controllers such as those of [43,44,45,46]. On the contrary, a mode-independent controller [8,47], where $K_\theta \equiv K_i$, is obtained by letting $S_{F_1} = S_{F_2} = \cdots = S_{F_N} = \{\theta\}$ with $\theta \in S$, $\lambda_l(k) \equiv 1$ and $\Delta_l \equiv I_q$, $l \in S_{F_i}$. Moreover, the case that the operation mode is disordered or unmatched [48,49] is also included, where $S_{F_i} = S$ but the $l$ satisfying $\lambda_l(k) = 1$ is not equal to the mode $i$ satisfying $r_k = i$. Finally, but not least, when $S_{F_i} \subset S$, controller (3) with $\Delta_l \equiv I_q$ is named a partially mode-available and unmatched controller.
Remark 3.
On the other hand, some references [5,6,17,26] have considered other general cases, in which the transition rate matrix of the continuous-time case is partially unknown, uncertain, or depends on the random sojourn time. Even in reference [20], the partially accessible mode information also refers to a partially unknown transition rate matrix. As for discrete-time Markovian jump systems, reference [28] considered the fault detection problem with a partially unknown transition probability matrix. In other words, the partially known property in those references is closely related to the transition rate or probability matrix. These general cases are very different from the one considered in this paper. First, instead of a partially unknown transition probability matrix, the partially available property in this paper refers to the operation modes of the controller. Second, the controller designed here is in contrast to the similar ones in the above references; though some general cases were considered there, based on the explanations in the former remark, the desired controller or filter is included as a special case of controller (3). Third, because the partially unknown properties are different, the underlying systems, in addition to the studied methods, are quite different. Fourth, but not least, the general cases of the transition rate or probability matrix considered in those references could be treated in this paper similarly.

3. Main Results

Theorem 1.
Given system (1), there exists a controller (3) such that the closed-loop system (5) is stochastically stable, if there exist matrices $\bar{P}_i > 0$, $G$, $Y_l$ and $S_i$, $i \in S$, $l \in S_{F_i}$, such that

$$\begin{bmatrix} -\bar{P}_i & G^T A_i^T & Y_l^T \\ * & \Omega_i & B_i S_i^T \\ * & * & -S_i - S_i^T \end{bmatrix} < 0 \quad (6)$$

where

$$\Omega_i = -(G + G^T) + \tilde{P}_i, \quad \tilde{P}_i = \sum_{j=1}^{N} \pi_{ij} \bar{P}_j \quad (7)$$

Then, the gain of controller (3) is computed as

$$K_l = Y_l G^{-1} \quad (8)$$
Proof. 
For system (5), choose a stochastic Lyapunov function as

$$V(x_k, r_k) = x^T(k) P(r_k) x(k) \quad (9)$$
For stochastic stability, it is required that

$$\Delta V(x_k, r_k) = E\{V(k+1)\} - V(k) = \Big(A_i x(k) + B_i \sum_{l \in S_{F_i}} \lambda_l(k) \Delta_l K_l x(k)\Big)^T \sum_{j=1}^{N} \pi_{ij} P_j \Big(A_i x(k) + B_i \sum_{l \in S_{F_i}} \lambda_l(k) \Delta_l K_l x(k)\Big) - x^T(k) P_i x(k) < 0$$

From condition (6), it is known that matrix $G$ is nonsingular. Then, the above condition can be guaranteed by

$$G^T \Big(\sum_{l \in S_{F_i}} \lambda_l(k) A_i + B_i \sum_{l \in S_{F_i}} \lambda_l(k) \Delta_l K_l\Big)^T \sum_{j=1}^{N} \pi_{ij} P_j \Big(\sum_{l \in S_{F_i}} \lambda_l(k) A_i + B_i \sum_{l \in S_{F_i}} \lambda_l(k) \Delta_l K_l\Big) G - \sum_{l \in S_{F_i}} \lambda_l(k) G^T P_i G < 0 \quad (10)$$
Taking into account the property of $\lambda_l(k)$ defined in (3), it is known that condition (10) is implied by

$$\sum_{l \in S_{F_i}} \lambda_l(k) \Big[ G^T (A_i + B_i \Delta_l K_l)^T \sum_{j=1}^{N} \pi_{ij} P_j (A_i + B_i \Delta_l K_l) G - G^T P_i G \Big] < 0 \quad (11)$$
Moreover, it is guaranteed by
$$G^T (A_i + B_i \Delta_l K_l)^T \sum_{j=1}^{N} \pi_{ij} P_j (A_i + B_i \Delta_l K_l) G - G^T P_i G < 0 \quad (12)$$
which is equivalent to
$$\begin{bmatrix} -\bar{P}_i & G^T (A_i + B_i \Delta_l K_l)^T \\ * & -\big(\sum_{j=1}^{N} \pi_{ij} P_j\big)^{-1} \end{bmatrix} < 0 \quad (13)$$

where $\bar{P}_i = G^T P_i G$. As for $-\big(\sum_{j=1}^{N} \pi_{ij} P_j\big)^{-1}$, it holds that

$$-\Big(\sum_{j=1}^{N} \pi_{ij} P_j\Big)^{-1} = -\Big(G^{-T} \sum_{j=1}^{N} \pi_{ij} G^T P_j G\, G^{-1}\Big)^{-1} \leq -(G + G^T) + \sum_{j=1}^{N} \pi_{ij} \bar{P}_j \quad (14)$$
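The inequality step in (14), namely $-G X^{-1} G^T \leq -(G + G^T) + X$ for any $X > 0$, follows from $(G - X) X^{-1} (G - X)^T \geq 0$. A quick numerical sanity check of this bound with random data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)        # X > 0 plays the role of sum_j pi_ij * Pbar_j
G = rng.standard_normal((n, n))    # an arbitrary square slack matrix

lhs = -G @ np.linalg.inv(X) @ G.T
rhs = -(G + G.T) + X
gap = rhs - lhs                    # equals (G - X) X^{-1} (G - X)^T, hence PSD
gap_eigs = np.linalg.eigvalsh(0.5 * (gap + gap.T))
```
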
Then, condition (13) is implied by

$$\begin{bmatrix} -\bar{P}_i & G^T (A_i + B_i \Delta_l K_l)^T \\ * & -(G + G^T) + \tilde{P}_i \end{bmatrix} < 0 \quad (15)$$
It is known that condition (15) is equivalent to

$$\begin{bmatrix} -\bar{P}_i & G^T A_i^T \\ * & -(G + G^T) + \tilde{P}_i \end{bmatrix} + \mathrm{He}\Big( \begin{bmatrix} 0 \\ B_i \end{bmatrix} \Delta_l \begin{bmatrix} K_l G & 0 \end{bmatrix} \Big) < 0 \quad (16)$$

where $\mathrm{He}(M) = M + M^T$.
By using Lemma 4 in [50], it is further obtained that condition (16) could be guaranteed by
$$\begin{bmatrix} -\bar{P}_i & G^T A_i^T & G^T K_l^T \\ * & -(G + G^T) + \tilde{P}_i & B_i S_i^T \\ * & * & -S_i - S_i^T \end{bmatrix} < 0 \quad (17)$$
Based on representation (7), it is obvious that condition (17) is equivalent to (6). This completes the proof. ☐
Remark 4.
In this theorem, the matrix $G$ introduced to deal with the nonlinear terms is a common matrix. It could be replaced by another one; for example, it could be substituted by a matrix $G_l$ depending on the partially available and unmatched modes. Compared with the common matrix $G$, the conservatism of the results obtained with $G_l$ would be further reduced. However, the computational complexity would be larger, due to more matrices to be solved. Based on these facts, whether to choose matrix $G_l$ should depend on the concrete situation.
Based on the main idea of controller (3), other issues could be considered similarly. Without loss of generality, we only consider the observer design problem, where the corresponding system is described as
$$x(k+1) = A(r_k) x(k) + B(r_k) u(k), \quad y(k) = \Delta(k) C(r_k) x(k) \quad (18)$$

Here, the designed state observer is described by

$$\hat{x}(k+1) = A(r_k) \hat{x}(k) + B(r_k) u(k) - \sum_{l \in S_{F_{r_k}}} \lambda_l(k) H_l (y(k) - \hat{y}(k)), \quad \hat{y}(k) = \Delta(k) C(r_k) \hat{x}(k) \quad (19)$$

where $\hat{x}(k)$ is the state estimate vector and $H_l$ is the gain of the desired observer. Let $e(k) = x(k) - \hat{x}(k)$ be the state error vector. Then, the resulting error system is written as

$$e(k+1) = \Big(A(r_k) + \sum_{l \in S_{F_{r_k}}} \lambda_l(k) H_l \Delta(k) C(r_k)\Big) e(k) \quad (20)$$
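A small simulation sketch of the error dynamics (20); the matrices below are hypothetical (not from the paper) and are chosen so that every $A_i + H_l \Delta C_i$ is a contraction, hence $e(k) \to 0$ along any mode/fault pattern:

```python
import numpy as np

# Hypothetical data for illustration only (not from the paper)
A = [np.array([[0.3, 0.1], [0.0, 0.2]]),
     np.array([[0.2, 0.0], [0.1, 0.3]])]
C = [np.eye(2), np.eye(2)]
H = [-0.1 * np.eye(2), -0.1 * np.eye(2)]   # hypothetical observer gains
S_F = {0: [0, 1], 1: [1]}
Pi = np.array([[0.2, 0.8], [0.3, 0.7]])

rng = np.random.default_rng(3)
e, r = np.array([2.0, -2.0]), 0
for _ in range(50):
    l = int(rng.choice(S_F[r]))
    # random sensor fault Delta(k) = diag(delta_1, delta_2), delta_i in {0, 1}
    Delta = np.diag(rng.integers(0, 2, size=2).astype(float))
    e = (A[r] + H[l] @ Delta @ C[r]) @ e   # error system (20)
    r = int(rng.choice(2, p=Pi[r]))
```
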
Theorem 2.
Given system (18), there exists an observer (19) such that the error system (20) is asymptotically stable, if there exist matrices $P_i > 0$, $G$, $Y_l$ and $S_i$, $i \in S$, $l \in S_{F_i}$, such that

$$\begin{bmatrix} -P_i & A_i^T G^T & C_i^T S_i^T \\ * & \hat{\Omega}_i & Y_l \\ * & * & -S_i - S_i^T \end{bmatrix} < 0 \quad (21)$$

where

$$\hat{\Omega}_i = -(G + G^T) + \hat{P}_i, \quad \hat{P}_i = \sum_{j=1}^{N} \pi_{ij} P_j \quad (22)$$

Then, the observer gain is computed as

$$H_l = G^{-1} Y_l \quad (23)$$
Proof. 
Choose a stochastic Lyapunov function for the error system (20) as

$$V(e_k, r_k) = e^T(k) P(r_k) e(k) \quad (24)$$
Then, it is required that

$$\Delta V(e_k, r_k) = E\{V(k+1)\} - V(k) = \Big(A_i e(k) + \sum_{l \in S_{F_i}} \lambda_l(k) H_l \Delta(k) C_i e(k)\Big)^T \sum_{j=1}^{N} \pi_{ij} P_j \Big(A_i e(k) + \sum_{l \in S_{F_i}} \lambda_l(k) H_l \Delta(k) C_i e(k)\Big) - e^T(k) P_i e(k) < 0$$
which is implied by
$$\Big(A_i + \sum_{l \in S_{F_i}} \lambda_l(k) H_l \Delta(k) C_i\Big)^T \sum_{j=1}^{N} \pi_{ij} P_j \Big(A_i + \sum_{l \in S_{F_i}} \lambda_l(k) H_l \Delta(k) C_i\Big) - P_i < 0 \quad (25)$$
Similar to (11), it is known that condition (25) is guaranteed by
$$\sum_{l \in S_{F_i}} \lambda_l(k) \Big[ (A_i + H_l \Delta(k) C_i)^T \sum_{j=1}^{N} \pi_{ij} P_j (A_i + H_l \Delta(k) C_i) - P_i \Big] < 0 \quad (26)$$
which could be implied by
$$\begin{bmatrix} -P_i & (A_i + H_l \Delta(k) C_i)^T \\ * & -\big(\sum_{j=1}^{N} \pi_{ij} P_j\big)^{-1} \end{bmatrix} < 0 \quad (27)$$
Because $G$ is nonsingular, the condition

$$\begin{bmatrix} -P_i & (A_i + H_l \Delta(k) C_i)^T G^T \\ * & -G \big(\sum_{j=1}^{N} \pi_{ij} P_j\big)^{-1} G^T \end{bmatrix} < 0 \quad (28)$$

is equivalent to (27), obtained by pre- and post-multiplying both sides of (27) with the matrix

$$\begin{bmatrix} I & 0 \\ 0 & G \end{bmatrix} \quad (29)$$
and its transpose, respectively. As for $-G \big(\sum_{j=1}^{N} \pi_{ij} P_j\big)^{-1} G^T$, it holds that

$$-G \Big(\sum_{j=1}^{N} \pi_{ij} P_j\Big)^{-1} G^T \leq -(G + G^T) + \hat{P}_i \quad (30)$$
Based on these conditions, it is known that condition (28) is guaranteed by
$$\begin{bmatrix} -P_i & A_i^T G^T \\ * & -(G + G^T) + \hat{P}_i \end{bmatrix} + \mathrm{He}\Big( \begin{bmatrix} C_i^T \\ 0 \end{bmatrix} \Delta(k) \begin{bmatrix} 0 & (G H_l)^T \end{bmatrix} \Big) < 0 \quad (31)$$

where $\mathrm{He}(M) = M + M^T$.
Similar to the proof of (16), it is further obtained that condition (31) could be guaranteed by
$$\begin{bmatrix} -P_i & A_i^T G^T & C_i^T S_i^T \\ * & -(G + G^T) + \hat{P}_i & G H_l \\ * & * & -S_i - S_i^T \end{bmatrix} < 0 \quad (32)$$
Considering representation (22), it is known that conditions (21) and (32) are equivalent. This completes the proof. ☐
Remark 5.
Though the above results are presented with an exact transition probability matrix, they could be extended to other general cases. When the TPM $\Pi$ is uncertain or partially unknown, similar problems could be studied by combining the methods given in this paper with those of some existing references.

4. Numerical Examples

Example 1: Consider a simplified economic system based on the multiplier-accelerator model [51], which is described as
$$C(k+1) = c Y(k) \quad (33)$$
$$I(k+1) = \omega (Y(k) - Y(k-1)) \quad (34)$$
$$Y(k+1) = C(k+1) + I(k+1) + G(k) \quad (35)$$
where $C(k)$ is the consumption expenditure, $Y(k)$ is the national income, $I(k)$ is the induced private investment, $G(k)$ is the government expenditure, $c = 1 - \delta$ is the marginal propensity to consume (the slope of the consumption versus income curve), $\delta$ is the marginal propensity to save, $1/\delta$ is the multiplier, and $\omega$ is the accelerator coefficient. Accordingly, conditions (33)–(35) can be expressed as

$$x(k+1) = A x(k) + B u(k) \quad (36)$$

with output

$$y(k) = C x(k) \quad (37)$$

where

$$A = \begin{bmatrix} 0 & 1 \\ -\omega & 1 - \delta + \omega \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 \end{bmatrix} \quad (38)$$
Here, $x_2(k)$ stands for the national income, $x_1(k)$ differs from $x_2(k)$ only by a one-step lag, and $u(k)$ is the government expenditure. Coefficients $\delta$ and $\omega$ were computed for the U.S. economy for the years 1929 to 1971, based on data from the U.S. Department of Commerce (1971). From [52], the ranges of $\delta$ and $\omega$ are selected to be $0 < \delta < 1$ and $0 < \omega < 3$, so that $c = 1 - \delta$ is maintained. Similar to [53], the parameters $\omega$ and $\delta$ are grouped into two natural classes or modes and listed in Table 1.
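For concreteness, the matrices in (38) can be assembled from a given $(\delta, \omega)$ pair. The sketch below assumes the sign convention $A = \begin{bmatrix} 0 & 1 \\ -\omega & 1-\delta+\omega \end{bmatrix}$; the pair $(\delta, \omega) = (0.3, 1)$ is an inferred example reproducing the "Norm"-mode matrix $A_1$ of this example (Table 1 itself is not reproduced here):

```python
import numpy as np

def economy_matrices(delta, omega):
    """System matrices of the multiplier-accelerator model (38) for a given
    marginal propensity to save delta and accelerator coefficient omega."""
    A = np.array([[0.0, 1.0],
                  [-omega, 1.0 - delta + omega]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[0.0, 1.0]])
    return A, B, C

# Inferred "Norm"-mode pair: delta = 0.3, omega = 1 gives A = [[0, 1], [-1, 1.7]]
A1, B1, C1 = economy_matrices(delta=0.3, omega=1.0)
```
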
Then, matrix $A_i$ is

$$A_1 = \begin{bmatrix} 0 & 1 \\ -1 & 1.7 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 1 \\ -0.8 & 0.7 \end{bmatrix}$$

which correspond to the "Norm" and "Slump" cases, respectively. The transition probability matrix is given as

$$\Pi = \begin{bmatrix} 0.2 & 0.8 \\ 0.3 & 0.7 \end{bmatrix}$$
Under the initial condition $x_0 = [1 \;\; 3]^T$, the state response of the open-loop system is given in Figure 1, which shows that it is unstable. Based on traditional stabilization methods such as those of [7,16,21,23], one could design a state feedback controller. In order to make some comparisons, it is assumed that the desired controller has no fault. For this practical example, the gains of the corresponding controller are computed as
$$K_1 = [\,1.0000 \;\; -1.7000\,], \quad K_2 = [\,0.8000 \;\; -0.7000\,]$$
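A widely used second-moment (mean-square) stability test for $x(k+1) = A(r_k)x(k)$ checks whether $\rho\big((\Pi^T \otimes I_{n^2})\,\mathrm{diag}(A_i \otimes A_i)\big) < 1$. The sketch below applies it to the data of this example, assuming the sign convention under which each mode-matched closed-loop matrix $A_i + BK_i = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is nilpotent; the open-loop value can be compared against 1 to check the behavior reported in Figure 1:

```python
import numpy as np

def ms_spectral_radius(A_list, Pi):
    """Spectral radius of the second-moment operator of x(k+1) = A(r_k) x(k);
    a value < 1 corresponds to mean-square stability."""
    N, n = len(A_list), A_list[0].shape[0]
    D = np.zeros((N * n * n, N * n * n))
    for i, Ai in enumerate(A_list):
        D[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)
    M = np.kron(Pi.T, np.eye(n * n)) @ D
    return max(abs(np.linalg.eigvals(M)))

Pi = np.array([[0.2, 0.8], [0.3, 0.7]])
A1 = np.array([[0.0, 1.0], [-1.0, 1.7]])
A2 = np.array([[0.0, 1.0], [-0.8, 0.7]])
B = np.array([[0.0], [1.0]])
K1 = np.array([[1.0, -1.7]])   # mode-matched gains (sign convention assumed)
K2 = np.array([[0.8, -0.7]])

rho_open = ms_spectral_radius([A1, A2], Pi)                      # open loop
rho_closed = ms_spectral_radius([A1 + B @ K1, A2 + B @ K2], Pi)  # nilpotent closed loop
```
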
When the above designed controller experiences the general case that the operation mode is partially available and unmatched, without loss of generality, the corresponding set $S_{F_i}$ of mode $i$ is assumed to be $S_{F_1} = \{1, 2\}$ and $S_{F_2} = \{2\}$, respectively. Figure 2 presents the simulations of the system and controller modes. Under this general case, the state simulation of the resulting closed-loop system is given in Figure 3, which is obviously unstable. In other words, the controller designed by the above traditional methods is disabled if its operation mode is partially available and unmatched. In order to stabilize the system in this general case, a partially available and unmatched controller (3) with $\Delta = I$ could be designed. By Theorem 1, we obtain
$$P_1 = \begin{bmatrix} 0.2179 & 0.2937 \\ 0.2937 & 0.3974 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 0.0419 & 0.0180 \\ 0.0180 & 0.0229 \end{bmatrix}, \quad G = \begin{bmatrix} 891.9827 & 665.5581 \\ 684.4487 & 520.1799 \end{bmatrix}$$
$$Y_1 = [\,240.5871 \;\; 170.4980\,], \quad Y_2 = [\,288.3256 \;\; 225.1331\,], \quad S_1 = 0.5, \quad S_2 = 0.6$$
Then, the corresponding gains are computed as
$$K_1 = [\,1.0000 \;\; 0.9518\,], \quad K_2 = [\,0.4864 \;\; 1.0552\,]$$
After applying the above designed controller, the simulation of the resulting closed-loop system is given in Figure 4, which shows that it is stable.
Example 2: Consider a discrete-time Markovian jump system of the form (18) with $r_k \in S = \{1, 2, 3\}$, whose parameters are as follows:
$$A_1 = \begin{bmatrix} 0.9 & 0.1 \\ 0.1 & 1.6 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 1 & 0.4 \\ 0.5 & 1 \end{bmatrix}, \quad C_1 = \begin{bmatrix} 1.2 & 0.3 \\ 0.5 & 0.8 \end{bmatrix}$$
$$A_2 = \begin{bmatrix} 1 & 0.2 \\ 0.3 & 0.6 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.1 & 0.3 \\ 0.5 & 2 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0.3 & 0.4 \\ 0.6 & 2.2 \end{bmatrix}$$
$$A_3 = \begin{bmatrix} 1.2 & 0.7 \\ 0.2 & 0.1 \end{bmatrix}, \quad B_3 = \begin{bmatrix} 0.2 & 0.6 \\ 0.2 & 2.8 \end{bmatrix}, \quad C_3 = \begin{bmatrix} 0.6 & 0.8 \\ 0.9 & 1.5 \end{bmatrix}$$
The transition probability matrix is given as
$$\Pi = \begin{bmatrix} 0.1 & 0.7 & 0.2 \\ 0.6 & 0.3 & 0.1 \\ 0.6 & 0.2 & 0.2 \end{bmatrix}$$
Based on the proposed results, we can design an observer of the form (19). In this section, we compare two types of observer: the partially mode-unmatched standard observer (subscript $PMS$, superscript $l$) and the partially mode-unmatched fault-tolerant observer satisfying the conditions of Theorem 2 (subscript $PMF$, superscript $l$). Without loss of generality, the corresponding set $S_{F_i}$ of mode $i$ is assumed to be $S_{F_1} = \{1, 2\}$, $S_{F_2} = \{2\}$ and $S_{F_3} = \{1\}$. By Theorem 2, one gets
$$P_1 = \begin{bmatrix} 0.1063 & 0.2565 \\ 0.2565 & 2.8646 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 0.1034 & 0.0772 \\ 0.0772 & 0.9614 \end{bmatrix}, \quad P_3 = \begin{bmatrix} 0.1034 & 0.0772 \\ 0.0772 & 0.9614 \end{bmatrix}$$
$$G = \begin{bmatrix} 28.6087 & 0 \\ 0 & 15.7868 \end{bmatrix}, \quad Y_1 = \begin{bmatrix} 1.9080 & 1.5428 \\ 1.5428 & 0.1772 \end{bmatrix}, \quad Y_2 = \begin{bmatrix} 1.8345 & 1.4468 \\ 1.4468 & 0.6048 \end{bmatrix}$$
$$S_1 = \begin{bmatrix} 5.3412 & 6.5964 \\ 6.5964 & 11.5449 \end{bmatrix}, \quad S_2 = \begin{bmatrix} 34.1069 & 10.9356 \\ 10.9356 & 4.0599 \end{bmatrix}, \quad S_3 = \begin{bmatrix} 66.8323 & 43.1158 \\ 43.1158 & 30.5776 \end{bmatrix}$$
Then, the gains of fault-tolerant observer (19) could be computed as
$$H_{PMF}^1 = \begin{bmatrix} 0.0285 & 0.0093 \\ 0.0206 & 0.0042 \end{bmatrix}, \quad H_{PMF}^2 = \begin{bmatrix} 0.0274 & 0.0088 \\ 0.0175 & 0.0015 \end{bmatrix}$$
Similarly, the corresponding gains of PMS observer are given as
$$H_{PMS}^1 = \begin{bmatrix} 0.4037 & 0.2500 \\ 0.2932 & 0.0721 \end{bmatrix}, \quad H_{PMS}^2 = \begin{bmatrix} 0.5342 & 0.2545 \\ 0.3473 & 0.1607 \end{bmatrix}$$
After applying the above observers, one obtains the stability results of the resulting error systems given in Table 2. Here, four types of fault combination are contained in $\Delta$. In this table, "s" denotes that the error system is stable, while "u" denotes that it is unstable. From this table, it is obvious that the error system closed by the PMS observer is unstable for the fault type $\Delta = \mathrm{diag}\{1, 0\}$, while it is always stable under the designed PMF observer. In other words, the observer fault has a negative effect in the sense of making the error system unstable. Under the initial condition $e_0 = [2 \;\; 2]^T$, the state simulations of the resulting error systems are shown in Figure 5 and Figure 6, respectively, which also demonstrate the utility of the proposed observer. Moreover, the simulation of the operation modes of the original system and the designed observer is given in Figure 7. From these simulations, it is seen that the proposed methods are useful.

5. Conclusions

In this paper, the stabilization problem of discrete-time Markovian jump systems has been solved by a fault-tolerant controller whose operation mode is partially available and unmatched. A controller having polytopic forms and uncertainties simultaneously has been proposed to describe such general properties. Based on the developed model, sufficient conditions have been presented in terms of LMIs that are fault-free and can be solved easily and directly. Moreover, the key idea of the controller has been further applied to design an observer with similar properties. Finally, the utility and advantages of the proposed methods have been illustrated by numerical examples.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61374043 and 61473140, the Program for Liaoning Excellent Talents in University under Grant LJQ2013040, and the Natural Science Foundation of Liaoning Province under Grant 2014020106.

Author Contributions

Guoliang Wang proposed the idea; Mo Liu wrote this paper; Guoliang Wang carried out the experiments and analyzed the data; both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xiong, J.L.; Lam, J.; Shu, Z.; Mao, X.R. Stability analysis of continuous-time switched systems with a random switching signal. IEEE Trans. Autom. Control 2014, 59, 180–186. [Google Scholar] [CrossRef]
  2. Lakshmanan, S.; Rihan, F.A.; Rakkiyappan, R. Stability analysis of the differential genetic regulatory networks model with time-varying delays and Markovian jumping parameters. Nonlinear Anal. 2014, 14, 1–15. [Google Scholar] [CrossRef]
  3. Rakkiyappan, R.; Lakshmanan, S.; Sivasamy, R. Leakage-delay-dependent stability analysis of Markovian jumping linear systems with time-varying delays and nonlinear perturbations. Appl. Math. Model. 2016, 40, 5026–5043. [Google Scholar] [CrossRef]
  4. Ding, Y.; Liu, H. Stability analysis of continuous-time Markovian jump time-delay systems with time-varying transition rates. J. Frankl. Inst. 2016, 353, 2418–2430. [Google Scholar] [CrossRef]
  5. Li, Z.C.; Fei, Z.Y.; Jung, H.Y. New results on stability analysis and stabilization of time-delay continuous Markovian jump systems with partially known rates matrix. Int. J. Robust Nonlinear Control 2016, 26, 1873–1887. [Google Scholar] [CrossRef]
  6. Wei, Y.L.; Park, J.H.; Karimi, H.R.; Tian, Y.C.; Jung, H.Y. Improved stability and stabilization results for stochastic synchronization of continuous-time semi-Markovian jump neural networks with time-varying delay. IEEE Trans. Neural Netw. Learn. Syst. 2017, PP. [Google Scholar] [CrossRef] [PubMed]
  7. Ma, S.P.; Boukas, E.K.; Chinniah, Y. Stability and stabilization of discrete-time singular Markov jump systems with time-varying delay. Int. J. Robust Nonlinear Control 2010, 20, 531–543. [Google Scholar] [CrossRef]
  8. Wang, G.L. Robust stabilization of singular Markovian jump systems with uncertain switching. Int. J. Control Autom. Syst. 2013, 11, 188–193. [Google Scholar] [CrossRef]
  9. Bo, H.Y.; Wang, G.L. General observer-based controller design for singular markovian jump systems. Int. J. Innov. Comput. Inf. Control 2014, 10, 1897–1913. [Google Scholar]
  10. Qiu, Q.; Liu, W.; Hu, L. Stabilization of stochastic differential equations with Markovian switching by feedback control based on discrete-time state observation with a time delay. Stat. Probab. Lett. 2016, 115, 16–26. [Google Scholar] [CrossRef]
  11. Liu, X.H.; Xi, H.S. On exponential stability of neutral delay Markovian jump systems with nonlinear perturbations and partially unknown transition rates. Int. J. Control Autom. Syst. 2014, 12, 1–11. [Google Scholar] [CrossRef]
  12. Chen, W.M.; Xu, S.Y.; Zhang, B.Y.; Qi, Z.D. Stability and stabilisation of neutral stochastic delay Markovian jump systems. IET Control Theory Appl. 2016, 10, 1798–1807. [Google Scholar] [CrossRef]
  13. Feng, Z.G.; Zheng, W.X. On reachable set estimation of delay Markovian jump systems with partially known transition probabilities. J. Frankl. Inst. 2016, 353, 3835–3856. [Google Scholar] [CrossRef]
  14. Shen, M.Q.; Yan, S.; Zhang, G.M.; Park, J.H. Finite-time H static output control of Markov jump systems with an auxiliary approach. Appl. Math. Comput. 2016, 273, 553–561. [Google Scholar]
  15. Chen, J.; Lin, C.; Chen, B.; Wang, Q.G. Output feedback control for singular Markovian jump systems with uncertain transition rates. IET Control Theory Appl. 2016, 10, 2142–2147. [Google Scholar] [CrossRef]
  16. Chen, W.H.; Guan, Z.H.; Yu, P. Delay-dependent stability and H control of uncertain discrete-time Markovian jump systems with mode-dependent time delays. Syst. Control Lett. 2004, 52, 361–376. [Google Scholar] [CrossRef]
  17. Qiu, J.; Wei, Y.; Karimi, H.R. New approach to delay-dependent H control for continuous-time Markovian jump systems with time-varying delay and deficient transition descriptions. J. Frankl. Inst. 2015, 352, 189–215. [Google Scholar] [CrossRef]
  18. Kwon, N.K.; Park, L.S.; Park, P.G. H control for singular Markovian jump systems with incomplete knowledge of transition probabilities. Appl. Math. Comput. 2017, 295, 126–135. [Google Scholar]
  19. Zhai, D.; Lu, A.Y.; Liu, M.; Zhang, Q.L. H control for Markovian jump systems with partially unknown transition rates via an adaptive method. J. Math. Anal. Appl. 2017, 446, 886–907. [Google Scholar] [CrossRef]
  20. Wei, Y.; Qiu, J.; Karimi, H.R. A new design of H filtering for continuous-time Markovian jump systems with time-varying delay and partially accessible mode information. Signal Process. 2013, 93, 2392–2407. [Google Scholar] [CrossRef]
  21. Luan, X.L.; Zhao, S.Y.; Shi, P.; Liu, F. H filtering for discrete-time Markov jump systems with unknown transition probabilities. Int. J. Adapt. Control Signal Process. 2014, 28, 138–148. [Google Scholar] [CrossRef]
  22. Sakthivel, R.; Sathishkumar, M.; Mathiyalagan, K.; Anthoni, S.M. Robust reliable dissipative filtering for Markovian jump nonlinear systems with uncertainties. Int. J. Adapt. Control Signal Process. 2016, 31, 39–53. [Google Scholar] [CrossRef]
  23. Zhou, W.N.; Lu, H.Q.; Duan, C.M.; Li, M.H. Delay-dependent robust control for singular discrete-time Markovian jump systems with time-varying delay. Int. J. Robust Nonlinear Control 2010, 20, 1112–1128. [Google Scholar] [CrossRef]
  24. Yang, H.J.; Li, H.B.; Sun, F.C.; Yuan, Y. Robust control for Markovian jump delta operator systems with actuator saturation. Eur. J. Control 2014, 20, 207–215. [Google Scholar] [CrossRef]
  25. Benbrahim, M.; Kabbaj, M.N.; Benjelloun, K. Robust control under constraints of linear systems with Markovian jumps. Int. J. Control Autom. Syst. 2016, 14, 1447–1454. [Google Scholar] [CrossRef]
  26. Kao, Y.G.; Xie, J.; Zhang, L.; Jung, H.Y. A sliding mode approach to robust stabilization of Markovian jump linear time-delay systems with generally incomplete transition rates. Nonlinear Anal. Hybrid Syst. 2015, 17, 70–80. [Google Scholar] [CrossRef]
  27. Hou, N.; Dong, H.L.; Wang, Z.D.; Ren, W.J.; Alsaadi, F.E. Non-fragile state estimation for discrete Markovian jumping neural networks. Neurocomputing 2016, 179, 238–245. [Google Scholar] [CrossRef]
  28. Zhang, L.; Boukas, E.K.; Baron, L.; Karimi, H.R. Fault detection for discrete-time Markov jump linear systems with partially known transition probabilities. Int. J. Control 2010, 83, 1564–1572. [Google Scholar] [CrossRef]
  29. Wu, Z.G.; Shi, P.; Su, H.; Chu, J. Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. IEEE Trans. Cybern. 2013, 43, 1796–1806. [Google Scholar] [CrossRef] [PubMed]
  30. Dai, A.N.; Zhou, W.N.; Xu, Y.H.; Xiao, C.E. Adaptive exponential synchronization in mean square for Markovian jumping neutral-type coupled neural networks with time-varying delays by pinning control. Neurocomputing 2016, 173, 809–818. [Google Scholar] [CrossRef]
  31. Chen, L.H.; Huang, X.L.; Fu, S.S. Fault-tolerant control for Markovian jump delay systems with an adaptive observer approach. Circuits Syst. Signal Process. 2016, 35, 4290–4306. [Google Scholar] [CrossRef]
  32. Wang, W.; Zhang, J.Z.; Cheng, M.; Li, S.H. Fault-tolerant control of dual three-phase permanent-magnet synchronous machine drives under open-phase faults. IEEE Trans. Power Electron. 2017, 32, 2052–2063. [Google Scholar] [CrossRef]
  33. Zhao, J.; Jiang, B.; Chowdhury, F.N.; Shi, S. Active fault-tolerant control for near space vehicles based on reference model adaptive sliding mode scheme. Int. J. Adapt. Control Signal Process. 2014, 27, 765–777. [Google Scholar] [CrossRef]
  34. Huang, J.; Shi, Y.; Zhang, X. Active fault tolerant control systems by the semi-Markov model approach. Int. J. Adapt. Control Signal Process. 2014, 29, 833–847. [Google Scholar] [CrossRef]
  35. Zhou, Q.; Yao, D.Y.; Yao, J.H.; Wu, C.G. Robust control of uncertain semi-Markovian jump systems using sliding mode control method. Appl. Math. Comput. 2016, 286, 72–87. [Google Scholar] [CrossRef]
  36. Asl, R.M.; Hagh, Y.S.; Palm, R. Robust control by adaptive non-singular terminal sliding mode. Eng. Appl. Artif. Intell. 2017, 59, 205–217. [Google Scholar]
37. Rathinasamy, S.; Karimi, H.R.; Joby, M.; Santra, S. Resilient sampled-data control for Markovian jump systems with adaptive fault-tolerant mechanism. IEEE Trans. Circuits Syst. II Express Briefs 2017, PP. [Google Scholar] [CrossRef]
  38. Zhang, X.D.; Parisini, T.; Polycarpou, M.M. Adaptive fault-tolerant control of nonlinear uncertain systems: An information-based diagnostic approach. IEEE Trans. Autom. Control 2004, 49, 1259–1274. [Google Scholar] [CrossRef]
  39. Shi, F.M.; Patton, R.J. Fault estimation and active fault tolerant control for linear parameter varying descriptor systems. Int. J. Robust Nonlinear Control 2015, 25, 689–706. [Google Scholar] [CrossRef]
  40. Liu, M.; Cao, X.; Shi, P. Fault estimation and tolerant control for fuzzy stochastic systems. IEEE Trans. Fuzzy Syst. 2013, 21, 221–229. [Google Scholar] [CrossRef]
  41. Li, H.Y.; Gao, H.J.; Shi, P.; Zhao, X.D. Fault-tolerant control of Markovian jump stochastic systems via the augmented sliding mode observer approach. Automatica 2014, 50, 1825–1834. [Google Scholar] [CrossRef]
  42. Chen, L.H.; Huang, X.L.; Fu, S.S. Observer-based sensor fault-tolerant control for semi-Markovian jump systems. Nonlinear Anal. Hybrid Syst. 2016, 22, 161–177. [Google Scholar] [CrossRef]
  43. Zhang, L.X.; Boukas, E.K. Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities. Automatica 2009, 45, 463–468. [Google Scholar] [CrossRef]
  44. Xia, Y.Q.; Boukas, E.K.; Shi, P.; Zhang, J.H. Stability and stabilization of continuous-time singular hybrid systems. Automatica 2009, 45, 1504–1509. [Google Scholar] [CrossRef]
45. Zhang, Y.; He, Y.; Wu, M.; Zhang, J. Stabilization for Markovian jump systems with partial information on transition probability based on free-connection weighting matrices. Automatica 2011, 47, 79–84. [Google Scholar] [CrossRef]
  46. Wang, G.L.; Zhang, Q.L.; Yang, C.Y.; Su, C.L. Stability and stabilization of continuous-time stochastic Markovian jump systems with random switching signals. J. Frankl. Inst. 2016, 353, 1339–1357. [Google Scholar] [CrossRef]
  47. Wang, G.L. Mode-independent control of singular Markovian jump systems: A stochastic optimization viewpoint. Appl. Math. Comput. 2016, 286, 527–538. [Google Scholar] [CrossRef]
  48. Wang, G.L.; Xu, S.Y.; Zou, Y. Stabilisation of hybrid stochastic systems by disordered controllers. IET Control Theory Appl. 2014, 8, 1154–1162. [Google Scholar] [CrossRef]
49. Wang, G.L. H∞ control of singular Markovian jump systems with operation modes disordering in controllers. Neurocomputing 2014, 142, 275–281. [Google Scholar] [CrossRef]
  50. Sevilla, F.R.S.; Jaimoukha, I.M.; Chaudhuri, B.; Korba, P. A semidefinite relaxation procedure for fault-tolerant observer design. IEEE Trans. Autom. Control 2015, 60, 3332–3337. [Google Scholar] [CrossRef]
51. Samuelson, D.D. The Review of Economic Statistics; Harvard University Press: Cambridge, MA, USA, 1939. [Google Scholar]
  52. Ackley, G. Macroeconomic Theory; Macmillan: New York, NY, USA, 1969. [Google Scholar]
53. Blair, W.P., Jr.; Sworder, D.D. Feedback control of a class of linear discrete systems with jump parameters and quadratic cost criteria. Int. J. Control 1975, 21, 833–841. [Google Scholar] [CrossRef]
Figure 1. Simulation of the open-loop system.
Figure 2. Simulation of system and controller modes.
Figure 3. State response of system closed by controller (38).
Figure 4. State response of the closed-loop system.
Figure 5. State response of error system by PMS observer.
Figure 6. State response of error system by PMF observer.
Figure 7. Simulation of system and observer modes.
Table 1. Description of the two modes.

Mode i | Terminology | Description                    | ω_i  | δ_i
1      | Norm        | δ (or ω) in mid-range          | 1    | 0.3
2      | Slump       | δ in high range (or ω in low)  | −0.8 | 0.9
Table 2. Stability analysis of the error system by two types of observers.

Δ          | Fault 1 | Fault 2 | Fault 3 | Fault 4
(δ_1, δ_2) | (0, 0)  | (0, 1)  | (1, 0)  | (1, 1)
H_S^i      | s       | s       | u       | s
H_FTS^i    | s       | s       | s       | s