Article

Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks

Shouwan Gao, Pengpeng Chen, Dan Huang and Qiang Niu
1 Key Laboratory of Gas and Fire Control for Coal Mines, China University of Mining and Technology, Xuzhou 221116, China
2 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(4), 566; https://doi.org/10.3390/s16040566
Submission received: 18 March 2016 / Revised: 13 April 2016 / Accepted: 17 April 2016 / Published: 20 April 2016
(This article belongs to the Section Sensor Networks)

Abstract

This paper studies the remote Kalman filtering problem for a distributed system in which multiple sensors are located at different physical positions. Each sensor encapsulates its own measurement data into a single packet and transmits it to the remote filter over its own lossy channel. For each communication channel, a time-homogeneous Markov chain is used to model the packet delivery and loss process. Based on the Markov model, a necessary and sufficient condition is obtained that guarantees the stability of the mean estimation error covariance. In particular, the stability condition is expressed explicitly as a simple inequality whose parameters are the spectral radius of the system state matrix and the transition probabilities of the Markov chains. In contrast to existing related results, our method imposes less restrictive conditions on the system. Finally, the results are illustrated by simulation examples.

1. Introduction

The Kalman filter and its variations have great potential in many applications involving detection, tracking and control [1,2,3]. Recently, the problem of filtering for distributed systems has attracted increasing attention due to advantages such as low cost, reduced weight and inherent robustness. As shown in Figure 1, in such distributed systems, sensor measurements and the final signal processing usually take place at different physical locations and, thus, require a wireless or wireline communication network to exchange information. In contrast to traditional filtering problems, one main issue in these systems is that packet losses are unavoidable because of congestion and transmission errors in the communication channels. How missing data affect the performance of filtering schemes is therefore of significant interest.
Similar to filtering with network packet losses, early work studied the problem of estimation with data missing at certain time points, where the observation may contain no signal at all, i.e., it may consist of noise alone. By modeling the uncertainty in the observation process as an independent and identically distributed (i.i.d.) binary random variable sequence, the author in [4] proposes minimum mean-square error (MMSE) estimators, which have a recursive form similar to the Kalman filter. In [5], the authors generalize the work of [4] by investigating the filtering problem under the assumption that the uncertainty is not necessarily i.i.d. Moreover, when the data missing process can be modeled as an i.i.d. sequence with a known probability of signal loss, the work in [6] derives sufficient conditions for the uniform asymptotic stability of the MMSE filter. It is noted that in the scenario of [6], since the error covariance is governed by a deterministic equation, the stability results can be obtained by defining an equivalent system in which the observations contain the signal with probability one.
In recent research on network models, a large number of works have been reported on the stability analysis of Kalman filtering (see, e.g., [7,8,9,10,11,12,13,14,15,16] and the references therein), and most of them account for possible observation losses. In this literature, there are basically two different methods to model the packet loss process in networked systems. A popular method is to describe the packet loss process as an i.i.d. Bernoulli sequence [17]; several recent results have been published on such a model, see, e.g., [9,10,11]. The second method is to employ a Markov process to model the packet loss phenomenon [18]; such a model has been adopted in [12,13,14,15,16] to deal with the filtering problem under packet losses. Although filtering stability under the i.i.d. model can usually be analyzed effectively by solving a modified Riccati recursion, an i.i.d. process is inadequate to describe network channels whose states do not vary independently over time. Hence, compared to the i.i.d. model, a remarkable advantage of a Markovian packet loss model is that it can capture the possible temporal correlation of network channels.
It should be pointed out that almost all of the aforementioned results are obtained under a single-sensor assumption. Note that, from the viewpoint of state estimation, the single-sensor case also covers the multi-sensor setting in which all of the measurements from different sensors can be encapsulated together and sent to the remote filter over a common channel. Under the single-sensor assumption, the filter either receives the observations in full or loses them completely. Obviously, such an assumption may not hold in many practical distributed filtering systems: the sensors are usually placed over a wide area, the measurements coming from different sensors cannot be encoded together, and they must be sent over multiple different channels. In contrast to the single-sensor case, the main difficulty induced by multiple sensors is that observations may be partially lost. For the partial packet loss case, the explicit characterization of stability conditions for the Kalman filter is extremely challenging. Fortunately, by modeling the packet loss processes as i.i.d. Bernoulli sequences, the authors in [19] obtain a sharp transition curve, which is a function of the loss rates and separates the stable and unstable regions of the error covariance matrix, for a couple of special systems, including the cases in which the state matrix has a single unstable mode and in which all observation matrices are invertible. However, they cannot explicitly find the sharp transition curve for other systems, beyond providing a lower bound and an upper bound for it. Moreover, when the data dropout process is Markovian, the work in [20] offers a sufficient condition for covariance stability; nevertheless, it does not provide a necessary and sufficient condition, and it imposes the very strong restriction that the observation matrix is invertible. More recently, for the multi-sensor scenario, the authors in [21] derive necessary and sufficient stability conditions for the estimation error covariance; however, they are unable to express the stability conditions as simple inequalities, except for second-order systems under the i.i.d. model. So far, few results in the existing literature are concerned with explicit filtering stability conditions for multi-sensor systems with partial Markovian packet losses. This motivates the present paper.
This work studies the stability of the Kalman filter in the multi-sensor case. We extend the results in [12] by allowing the measurements coming from different sensors to be sent over multiple channels. In our scenario, the complete observation consists of multiple packets coming from the corresponding sensors, and part or all of the observation may be lost. For the sake of simplicity, we first assume that the system is equipped with two sensors and give necessary and sufficient stability conditions for the error covariance matrix. Then, we extend our results to the general multi-sensor case, in which different sensors may be subject to different packet loss processes. It is worth pointing out that, similar to [12], this work gives the stability criterion in the form of a simple inequality whose parameters are the spectral radius of the system state matrix and the transition probabilities of the Markovian packet loss processes. Thus, based on our stability criterion, one can see directly how packet losses affect stability.
The remainder of the paper is organized as follows. Section 2 formulates the problem under consideration. In Section 3, a necessary condition for stability is derived under standard assumptions, and the necessary condition is shown to be also sufficient for certain classes of systems. Differences between our results and previous ones are discussed in Section 4. Section 5 presents simulation examples, and some concluding remarks are drawn in Section 6.
Notation: throughout this paper, for a symmetric matrix P, $P > 0$ ($P \geq 0$) denotes that P is positive definite (positive semi-definite), and $A > B$ ($A \geq B$) means $A - B > 0$ ($A - B \geq 0$). The matrix I represents the identity matrix of compatible dimension. $\mathbb{N}$, $\mathbb{R}$ and $\mathbb{C}$ denote the sets of nonnegative integers, real numbers and complex numbers, respectively. $\mathrm{Tr}(\cdot)$ denotes the matrix trace, and $\rho(\cdot)$ the spectral radius of a matrix.

2. Problem Formulation

Consider the following linear discrete-time stochastic system:
$$x_{k+1} = A x_k + w_k \tag{1}$$
$$y_k = C x_k + v_k \tag{2}$$
where $x_k \in \mathbb{R}^n$ is the system state and $y_k \in \mathbb{R}^m$ is the measured output. A and C are constant matrices of compatible dimensions, and C has full row rank. Both $w_k \in \mathbb{R}^n$ and $v_k \in \mathbb{R}^m$ are white Gaussian noises with zero means and covariance matrices $Q > 0$ and $R > 0$, respectively. Assume that the initial state $x_0$ is also a Gaussian random vector with mean $\bar{x}_0$ and covariance $P_0 > 0$. Moreover, $w_k$, $v_k$ and $x_0$ are mutually independent.
The estimation problem under consideration is illustrated in Figure 2. For simplicity, we first assume that the networked system is equipped with two sensors; all of our results extend easily to the case of more sensors. Under the two-sensor assumption, the observation $y_k$ consists of two parts, $y_{1,k}$ and $y_{2,k}$, which are transmitted through different channels. Then, Equation (2) can be rewritten as follows:
$$\begin{bmatrix} y_{1,k} \\ y_{2,k} \end{bmatrix} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} x_k + \begin{bmatrix} v_{1,k} \\ v_{2,k} \end{bmatrix} \tag{3}$$
where $y_{1,k}, v_{1,k} \in \mathbb{R}^{m_1}$ and $y_{2,k}, v_{2,k} \in \mathbb{R}^{m_2}$. The covariance matrices of $v_{1,k}$ and $v_{2,k}$ are $R_{11} > 0$ and $R_{22} > 0$, respectively. Compared to Equation (2), it is obvious that $y_k = [y_{1,k}; y_{2,k}]$, $C = [C_1; C_2]$ and $R = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix}$. In our scenario, the sensor measurements $y_{1,k}$ and $y_{2,k}$ are transmitted to the remote filter via different unreliable communication channels. Due to fading and/or congestion, the communication channels may be subject to random packet losses. Two time-homogeneous binary Markov chains $\{\gamma_{1,k}\}$ and $\{\gamma_{2,k}\}$ are adopted to describe the packet loss processes of the two channels. Note that such a Markov model is more general and realistic than the i.i.d. case studied in [19], since a Markov process can capture the temporal correlation of the channel variation. We assume that $y_{i,k}$, $i \in \{1,2\}$, is received correctly at time step k if $\gamma_{i,k} = 1$, while the packet is lost if $\gamma_{i,k} = 0$. In addition, we denote the transition probability matrix of $\gamma_{i,k}$ as follows:
$$\Theta_i^+ = \big[ P(\gamma_{i,k+1} = j \mid \gamma_{i,k} = l) \big]_{j,l \in \{0,1\}} = \begin{bmatrix} 1-q_i & q_i \\ p_i & 1-p_i \end{bmatrix}, \quad i \in \{1,2\} \tag{4}$$
where $q_i$ and $p_i$ are, respectively, the recovery rate and the failure rate of the i-th channel. It is further assumed that $0 < p_i, q_i < 1$, so that the Markov chain $\{\gamma_{i,k}\}_{k\in\mathbb{N}}$ is ergodic. The distribution of the process $\{\gamma_{i,k}\}$ is determined by the corresponding channel gains, bit rates, power levels and temporal correlation. It is obvious that a smaller value of $p_i$ and a larger value of $q_i$ indicate that the i-th channel is more reliable. To avoid any trivial case, we assume that $\gamma_{1,k}$ and $\gamma_{2,k'}$ are independent for every k and $k'$.
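For intuition, a minimal Python sketch (ours, not part of the original paper) that simulates one such binary Markovian packet-loss channel is given below; all parameter values are illustrative placeholders.

```python
import numpy as np

def simulate_channel(p, q, T, gamma0=1, rng=None):
    """Simulate a binary Markov packet-loss channel as in Equation (4).

    gamma[k] = 1 means the packet at step k arrives, gamma[k] = 0 means it is lost.
    p = P(gamma[k+1] = 0 | gamma[k] = 1) is the failure rate;
    q = P(gamma[k+1] = 1 | gamma[k] = 0) is the recovery rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    gamma = np.empty(T, dtype=int)
    gamma[0] = gamma0
    for k in range(T - 1):
        if gamma[k] == 1:
            gamma[k + 1] = 0 if rng.random() < p else 1
        else:
            gamma[k + 1] = 1 if rng.random() < q else 0
    return gamma

# Two independent channels, one per sensor (illustrative parameters)
rng = np.random.default_rng(0)
gamma1 = simulate_channel(p=0.1, q=0.8, T=10_000, rng=rng)
gamma2 = simulate_channel(p=0.2, q=0.9, T=10_000, rng=rng)
# The empirical arrival rates approach the stationary probabilities q / (p + q)
print(gamma1.mean(), 0.8 / 0.9)
print(gamma2.mean(), 0.9 / 1.1)
```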
We introduce $S_k$ to indicate the packet loss status of the whole network at time step k. In view of the above analysis and assumptions, the process $\{S_k\}_{k\in\mathbb{N}}$ is a four-state Markov chain. More specifically, the following indicator is defined:
$$S_k = \begin{cases} \varsigma_{00}, & \gamma_{1,k} = 0 \text{ and } \gamma_{2,k} = 0 \\ \varsigma_{01}, & \gamma_{1,k} = 0 \text{ and } \gamma_{2,k} = 1 \\ \varsigma_{10}, & \gamma_{1,k} = 1 \text{ and } \gamma_{2,k} = 0 \\ \varsigma_{11}, & \gamma_{1,k} = 1 \text{ and } \gamma_{2,k} = 1 \end{cases} \tag{5}$$
Then, it follows from Equations (4) and (5) that the Markov process $\{S_k\}_{k\in\mathbb{N}}$ has the transition probability matrix:
$$\Psi^+ = \begin{bmatrix}
(1-q_1)(1-q_2) & (1-q_1)q_2 & q_1(1-q_2) & q_1 q_2 \\
(1-q_1)p_2 & (1-q_1)(1-p_2) & q_1 p_2 & q_1(1-p_2) \\
p_1(1-q_2) & p_1 q_2 & (1-p_1)(1-q_2) & (1-p_1)q_2 \\
p_1 p_2 & p_1(1-p_2) & (1-p_1)p_2 & (1-p_1)(1-p_2)
\end{bmatrix} \tag{6}$$
In order to simplify the analysis, we classify the packet loss status of networks into two categories by the same method as that in [20]:
$$S_k = \begin{cases} \varsigma_L, & \gamma_{1,k} = 0 \text{ and } \gamma_{2,k} = 0 \\ \varsigma_R, & \gamma_{1,k} = 1 \text{ and/or } \gamma_{2,k} = 1 \end{cases} \tag{7}$$
Associated with Equation (7), we, thus, model the packet loss status S k as a two-state Markov process with transition probability matrix:
$$\Pi^+ = \big[ P(S_{k+1} = \varsigma_j \mid S_k = \varsigma_l) \big]_{\varsigma_j, \varsigma_l \in \Omega} = \begin{bmatrix} 1-\eta & \eta \\ \mu & 1-\mu \end{bmatrix} \tag{8}$$
where $\Omega = \{\varsigma_L, \varsigma_R\}$ is the state space of the Markov chain. Note that, with Equation (6), we further obtain $0 < \eta, \mu < 1$, which ensures the ergodicity of the above two-state Markov process. Specifically, one can easily derive the following equation:
$$1 - \eta = (1-q_1)(1-q_2) \tag{9}$$
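Because the two chains are independent, the four-state matrix in Equation (6) is the Kronecker product of the two per-channel matrices in Equation (4), and Equation (9) can be read off its first entry. A short sketch of this observation (ours; the rates are arbitrary examples):

```python
import numpy as np

def channel_tpm(p, q):
    # Per-channel transition matrix over states (0 = loss, 1 = arrival), Equation (4)
    return np.array([[1.0 - q, q],
                     [p, 1.0 - p]])

def network_tpm(p1, q1, p2, q2):
    # Joint chain over (gamma1, gamma2) in the order (0,0), (0,1), (1,0), (1,1);
    # channel independence turns Psi+ of Equation (6) into a Kronecker product
    return np.kron(channel_tpm(p1, q1), channel_tpm(p2, q2))

p1, q1, p2, q2 = 0.1, 0.8, 0.2, 0.9           # illustrative values only
Psi = network_tpm(p1, q1, p2, q2)
assert np.allclose(Psi.sum(axis=1), 1.0)       # every row is a probability distribution
# Equation (9): 1 - eta is the probability of staying in the "all packets lost" state
print(Psi[0, 0], (1 - q1) * (1 - q2))          # both equal 0.02
```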
Remark 1. 
The problem of mean square stability for Kalman filtering with Markovian packet losses has been studied in [12], where the single sensor case is considered. Our paper generalizes [12] by allowing partial observation losses. In our scenario, the measurements coming from different sensors can be transmitted via different communication channels. Obviously, our results are more meaningful and general in practical applications, since the adopted model can be easily adjusted to describe the single sensor situation.
Based on the histories $Z_k = \{y_0, \ldots, y_k\}$ and $Y_k = \{\gamma_0, \ldots, \gamma_k\}$, where $\gamma_k = [\gamma_{1,k}; \gamma_{2,k}]$ and $A^H$ denotes the conjugate transpose of A, one can define the filtering and one-step prediction corresponding to the optimal estimator as follows:
$$\hat{x}_{k|k} = E[x_k \mid Z_k, Y_k] \tag{10}$$
$$\hat{x}_{k+1|k} = E[x_{k+1} \mid Z_k, Y_k] \tag{11}$$
The associated estimation and prediction error covariance matrices can be written by:
$$P_{k|k} = E\big[(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^H \mid Z_k, Y_k\big] \tag{12}$$
$$P_{k+1|k} = E\big[(x_{k+1} - \hat{x}_{k+1|k})(x_{k+1} - \hat{x}_{k+1|k})^H \mid Z_k, Y_k\big] \tag{13}$$
Our approach is to analyze the estimation error of $\hat{x}_{k+1|k}$; hence, the details of the recursions for $\hat{x}_{k|k}$ and $\hat{x}_{k+1|k}$ are omitted here. Following [19], we have the following recursions for the estimation error covariance matrices:
$$P_{k|k} = P_{k|k-1} - \gamma_{1,k}\gamma_{2,k}\, P_{k|k-1} C^H \big(C P_{k|k-1} C^H + R\big)^{-1} C P_{k|k-1} - \gamma_{1,k}(1-\gamma_{2,k})\, P_{k|k-1} C_1^H \big(C_1 P_{k|k-1} C_1^H + R_{11}\big)^{-1} C_1 P_{k|k-1} - \gamma_{2,k}(1-\gamma_{1,k})\, P_{k|k-1} C_2^H \big(C_2 P_{k|k-1} C_2^H + R_{22}\big)^{-1} C_2 P_{k|k-1} \tag{14}$$
$$P_{k+1|k} = A P_{k|k} A^H + Q \tag{15}$$
In addition, the initial value is $P_{0|-1} = P_0$. For simplicity of exposition, we slightly abuse notation by writing $P_{k+1}$ for $P_{k+1|k}$. To analyze the statistical properties of the estimation error covariance matrix $P_k$, we recall the following definition from [12].
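The recursions (14) and (15) can be iterated directly once the loss sequences are given. The sketch below is our own illustration of one step of this random Riccati recursion for the two-sensor case; it assumes real-valued system matrices and zero cross-covariance $R_{12} = R_{21} = 0$, and all names are ours rather than the authors'.

```python
import numpy as np

def riccati_step(P, gamma1, gamma2, A, C1, C2, R11, R22, Q):
    """One step of Equations (14)-(15) for the two-sensor, lossy-channel case.

    P is the prediction error covariance P_{k|k-1}; the function returns P_{k+1|k}.
    """
    if gamma1 == 1 and gamma2 == 1:          # both packets received: full update
        C = np.vstack([C1, C2])
        R = np.block([[R11, np.zeros((R11.shape[0], R22.shape[1]))],
                      [np.zeros((R22.shape[0], R11.shape[1])), R22]])
        Pf = P - P @ C.T @ np.linalg.inv(C @ P @ C.T + R) @ C @ P
    elif gamma1 == 1:                        # only the packet of sensor 1 received
        Pf = P - P @ C1.T @ np.linalg.inv(C1 @ P @ C1.T + R11) @ C1 @ P
    elif gamma2 == 1:                        # only the packet of sensor 2 received
        Pf = P - P @ C2.T @ np.linalg.inv(C2 @ P @ C2.T + R22) @ C2 @ P
    else:                                    # both packets lost: no measurement update
        Pf = P
    return A @ Pf @ A.T + Q                  # prediction step, Equation (15)
```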
Definition 1. 
The sequence $\{P_k\}_{k\in\mathbb{N}}$ is said to be stable if $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ for any $P_0 > 0$, where the expectation is taken over $\{\gamma_k\}_{k\in\mathbb{N}}$ with the initial value $\gamma_0$ being an arbitrary Bernoulli random variable.
Here, $E[P_k]$ denotes the mean of the prediction error covariance at time step k. Our objective is to derive necessary and sufficient conditions for the stability of $\{P_k\}_{k\in\mathbb{N}}$ in the multi-sensor environment with partial observation losses. It should be pointed out that, for the Kalman filter with partial Markovian packet losses, stability has been studied in [20]; however, that approach imposes more restrictive conditions on the system, so the results are conservative. In this paper, we propose a completely different method to obtain the main results. To make the model nontrivial, we make the following assumptions throughout the paper.
Assumption 1. 
All of the eigenvalues of A lie outside the unit circle.
Assumption 2. 
The system $(C, A)$ is observable.
Assumption 3. 
$P_0$, Q and R are all identity matrices with appropriate dimensions.

3. Stability Conditions for Error Covariance

In the following, we will derive our stability conditions for the estimation error covariance matrix. We first give the necessary condition for stability under Assumptions 1–3. Then, we prove that the necessary condition is also sufficient for a certain class of systems.

3.1. Necessary Condition for Covariance Stability

In order to facilitate our presentation, we denote by $E_{\varsigma_i}[\cdot]$ the mathematical expectation conditioned on the initial state $S_0 = \varsigma_i$, where $\varsigma_i \in \Omega$. The following lemma will be used in the proof of the necessary condition for stability.
Lemma 1. 
The condition $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ holds if and only if $\sup_{k\in\mathbb{N}} E_{\varsigma_L}[P_k] < \infty$ and $\sup_{k\in\mathbb{N}} E_{\varsigma_R}[P_k] < \infty$.
Proof. 
By adapting Lemma 2 in [12], we can easily obtain the proof; hence, the details are omitted. ☐
Theorem 2. 
If the system (1) and (3) satisfies Assumptions 1–3 and the packet loss process is described by a time-homogeneous Markov chain with transition probability matrix (8), then a necessary condition for $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ is $\rho(A)^2(1-q_1)(1-q_2) < 1$.
Proof. 
To simplify the notation, we define $\pi_j(\varsigma_i) = P(S_j = \varsigma_i)$, $\varsigma_i \in \Omega$, and $\pi_j = [\pi_j(\varsigma_L), \pi_j(\varsigma_R)]$. It immediately follows from Equation (8) that $\pi_{j+1} = \pi_j \Pi^+$. Jointly with $0 < \eta, \mu < 1$, one has $\pi_j(\varsigma_i) > 0$ for any $j \geq 1$. In addition, the two-state Markov chain $\{S_k\}_{k\in\mathbb{N}}$ has a unique stationary distribution $\pi = [\pi(\varsigma_L), \pi(\varsigma_R)]$; specifically, $\pi(\varsigma_L) = \mu/(\eta+\mu)$ and $\pi(\varsigma_R) = \eta/(\eta+\mu)$. By letting $\underline{\pi}(\varsigma_L) = \inf_{j\geq 1} \pi_j(\varsigma_L)$, the following inequality holds for any $j \geq 2$:
$$P\Big(\bigcap_{i=j}^{k} \{S_i = \varsigma_L\} \,\Big|\, S_{j-1} = \varsigma_L\Big)\, P(S_{j-1} = \varsigma_L) \geq \underline{\pi}(\varsigma_L)\,(1-\eta)^{k-j+1} \tag{16}$$
In view of Equation (7), it is clear that the above inequality implies that:
$$E\left[\prod_{i=j}^{k} \big(1 + \gamma_{1,i}\gamma_{2,i} - \gamma_{1,i} - \gamma_{2,i}\big)\right] \geq E\left[\prod_{i=j}^{k} \big(1 + \gamma_{1,i}\gamma_{2,i} - \gamma_{1,i} - \gamma_{2,i}\big) \,\Big|\, \gamma_{1,j-1} = \gamma_{2,j-1} = 0\right] P(\gamma_{1,j-1} = 0, \gamma_{2,j-1} = 0) \geq \underline{\pi}(\varsigma_L)\,(1-\eta)^{k-j+1} \tag{17}$$
On the other hand, considering Assumption 3 and a similar argument to that used in Theorem 4 of [12], we obtain for any $k \geq 2$:
$$P_{k+1} \geq \big(1 + \gamma_{1,k}\gamma_{2,k} - \gamma_{1,k} - \gamma_{2,k}\big) A P_k A^H + Q \geq \sum_{j=1}^{k} \prod_{i=j}^{k} \big(1 + \gamma_{1,i}\gamma_{2,i} - \gamma_{1,i} - \gamma_{2,i}\big) A^{k-j+1} \big(A^{k-j+1}\big)^H \tag{18}$$
Taking expectations with respect to $\gamma_{1,i}$ and $\gamma_{2,i}$ on both sides of Inequality (18) and using Inequality (17), one can prove:
$$E[P_{k+1}] \geq \underline{\pi}(\varsigma_L) \sum_{j=1}^{k} (1-\eta)^j A^j \big(A^j\big)^H \tag{19}$$
In the following, we establish boundedness conditions for the right-hand side of Inequality (19) as $k \to \infty$. To this end, we introduce a nonsingular matrix U such that $AU = UJ$, where J is the Jordan canonical form of A, $J = \mathrm{diag}(J_1, \ldots, J_d) \in \mathbb{C}^{n\times n}$, and $J_i$ corresponds to the eigenvalue $\lambda_i$. Letting $\lambda_{\min}$ be the smallest eigenvalue of $U^{-1}U^{-H}$, one obtains $\lambda_{\min} > 0$. Together with Inequality (19), it is easy to verify:
$$\sum_{j=1}^{k} (1-\eta)^j A^j \big(A^j\big)^H \geq \lambda_{\min}\, U \left( \sum_{j=1}^{k} (1-\eta)^j J^j \big(J^j\big)^H \right) U^H \tag{20}$$
Then, by use of Lemma 1, it can be checked that $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ implies:
$$\sup_{k\in\mathbb{N}} E_{\varsigma_L}\left[ \sum_{j=1}^{k} (1-\eta)^j J^j \big(J^j\big)^H \right] < \infty \tag{21}$$
Along the same lines as the proof of Theorem 3 in [12], we conclude that a necessary condition for (21) is $|\lambda_i|^2 (1-\eta) < 1$. Since $\lambda_i$ is an arbitrary eigenvalue of A and $1 - \eta = (1-q_1)(1-q_2)$ by Equation (9), it is clear that $\rho(A)^2(1-q_1)(1-q_2) < 1$. This completes the proof. ☐
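The condition of Theorem 2 is straightforward to evaluate numerically. The following check is our own illustration; the state matrix and recovery rates are arbitrary examples.

```python
import numpy as np

def necessary_condition_holds(A, q1, q2):
    # Theorem 2: rho(A)^2 (1 - q1)(1 - q2) < 1 is necessary for a bounded
    # mean prediction error covariance
    rho = max(abs(np.linalg.eigvals(A)))
    return rho ** 2 * (1 - q1) * (1 - q2) < 1

A = np.diag([2.5, 1.5])                                # example state matrix
print(necessary_condition_holds(A, q1=0.9, q2=0.7))    # True:  6.25 * 0.1 * 0.3 = 0.1875
print(necessary_condition_holds(A, q1=0.5, q2=0.5))    # False: 6.25 * 0.25 = 1.5625
```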

3.2. Stability for Non-Degenerate Systems

In this subsection, we show that the condition given by Theorem 2 is also sufficient under the assumption that $(C, A)$ is a non-degenerate pair. To prove this, we adopt the following definitions introduced in [9].
Definition 2. 
Assume the system $(C, A)$ has the form $A = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ and $C = [C_1, \ldots, C_n]$. A block of the system is defined as a subsystem corresponding to $A_L = \mathrm{diag}(\lambda_{i_1}, \ldots, \lambda_{i_l})$ and $C_L = [C_{i_1}, \ldots, C_{i_l}]$, where $L = \{i_1, \ldots, i_l\} \subseteq \{1, \ldots, n\}$. As a special case, a block satisfying $\lambda_{i_1} = \cdots = \lambda_{i_l}$ is called an equi-block.
Definition 3. 
If C is of full column rank, we say the system $(C, A)$ is one-step observable.
Definition 4. 
If an equi-block is one-step observable, it is non-degenerate. Moreover, if all equi-blocks of the system are non-degenerate, the system is non-degenerate. Otherwise, it is degenerate.
It is obvious that the non-degeneracy assumption is stronger than observability, but weaker than one-step observability. Before we continue, let us define:
$$\Lambda_k = \sum_{i=1}^{k+1} \big(A^{-i}\big)^H C^H M_{k+1-i}\, C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)} \tag{22}$$
$$\Xi_k = \sum_{i=1}^{k+1} \big(A^{-i}\big)^H C^H M_i\, C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)} \tag{23}$$
$$\Gamma = \sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i\, C A^{-i} \tag{24}$$
where $M_i = \mathrm{diag}(\gamma_{1,i} I_{m_1}, \gamma_{2,i} I_{m_2})$. Then, to obtain the main results, we give the following lemma by recalling [9].
Lemma 3. 
Under Assumptions 1–3, the prediction error covariance matrix $\{P_k\}_{k\in\mathbb{N}}$ is bounded by:
$$\alpha_1 \Lambda_k^{-1} \leq P_{k+1} \leq \beta_1 \Lambda_k^{-1} \tag{25}$$
where $\alpha_1$ and $\beta_1$ are strictly positive constants.
It follows from Equation (6) that Γ is invertible almost everywhere. Thus, there is no loss of generality in assuming that the inverse of Γ is well defined in the rest of the paper. Similar to [9], the matrix Γ plays an important role in proving the stability of $\{P_k\}_{k\in\mathbb{N}}$, since it bounds $E[P_k]$ as in the following theorem.
Theorem 4. 
Assume that the system (1) and (3) satisfies Assumptions 1–3 and that the packet loss process is described by a time-homogeneous Markov chain with transition probability matrix (8). Then, a necessary and sufficient condition for $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ is $E[\Gamma^{-1}] < \infty$. Moreover, there exist strictly positive constants $\alpha_2$ and $\beta_2$ such that:
$$\alpha_2 E[\Gamma^{-1}] \leq \sup_{k\in\mathbb{N}} E[P_k] \leq \beta_2 E[\Gamma^{-1}] \tag{26}$$
Proof. 
Proof of the right-hand side: It follows readily from Equation (4) that the Markov chain $\{\gamma_{i,k}\}_{k\in\mathbb{N}}$, $i \in \{1,2\}$, has a unique stationary distribution. Assume that $\{\gamma_{i,k}\}_{k\in\mathbb{N}}$ starts at its stationary distribution; then $\gamma_{i,k}$ has the same distribution for any $k \in \mathbb{N}$. Moreover, under this assumption, it is easy to check:
$$\Theta_i = \big[ P(\gamma_{i,k} = j \mid \gamma_{i,k+1} = l) \big]_{j,l \in \{0,1\}} = \begin{bmatrix} 1-q_i & q_i \\ p_i & 1-p_i \end{bmatrix}, \quad i \in \{1,2\} \tag{27}$$
Following a similar idea as in [12], we introduce a measurable function $f: (\mathbb{R}^{m\times m})^{k+1} \to \mathbb{R}^{n\times n}$. Thus, one has:
$$\begin{aligned}
E[f(M_k, \ldots, M_0)] &\overset{(a)}{=} \sum_{i_{1,j},\, i_{2,j} \in \{0,1\},\; 0 \leq j \leq k} f\big(\mathrm{diag}(i_{1,k} I_{m_1}, i_{2,k} I_{m_2}), \ldots, \mathrm{diag}(i_{1,0} I_{m_1}, i_{2,0} I_{m_2})\big)\, P(\gamma_{1,k} = i_{1,k}, \gamma_{2,k} = i_{2,k}, \ldots, \gamma_{1,0} = i_{1,0}, \gamma_{2,0} = i_{2,0}) \\
&\overset{(b)}{=} \sum_{i_{1,j},\, i_{2,j} \in \{0,1\},\; 0 \leq j \leq k} f\big(\mathrm{diag}(i_{1,k} I_{m_1}, i_{2,k} I_{m_2}), \ldots, \mathrm{diag}(i_{1,0} I_{m_1}, i_{2,0} I_{m_2})\big)\, P(\gamma_{1,k} = i_{1,k}, \ldots, \gamma_{1,0} = i_{1,0})\, P(\gamma_{2,k} = i_{2,k}, \ldots, \gamma_{2,0} = i_{2,0}) \\
&\overset{(c)}{=} \sum_{i_{1,j},\, i_{2,j} \in \{0,1\},\; 0 \leq j \leq k} f\big(\mathrm{diag}(i_{1,k} I_{m_1}, i_{2,k} I_{m_2}), \ldots, \mathrm{diag}(i_{1,0} I_{m_1}, i_{2,0} I_{m_2})\big)\, P(\gamma_{1,0} = i_{1,k}, \ldots, \gamma_{1,k} = i_{1,0})\, P(\gamma_{2,0} = i_{2,k}, \ldots, \gamma_{2,k} = i_{2,0}) \\
&= E[f(M_0, \ldots, M_k)] \overset{(d)}{=} E[f(M_1, \ldots, M_{k+1})]
\end{aligned} \tag{28}$$
Here, (a) is due to the definition of $M_i$, and (b) follows from the fact that $\{\gamma_{1,k}\}$ and $\{\gamma_{2,k}\}$ are independent. Combining Equations (4) and (27), we can verify for $l \in \{1,2\}$:
$$P(\gamma_{l,k} = i_{l,k}, \ldots, \gamma_{l,0} = i_{l,0}) = \prod_{j=0}^{k-1} P(\gamma_{l,j+1} = i_{l,j+1} \mid \gamma_{l,j} = i_{l,j})\, P(\gamma_{l,0} = i_{l,0}) = \prod_{j=0}^{k-1} P(\gamma_{l,j} = i_{l,j+1} \mid \gamma_{l,j+1} = i_{l,j})\, P(\gamma_{l,k} = i_{l,0}) = P(\gamma_{l,0} = i_{l,k}, \ldots, \gamma_{l,k} = i_{l,0}) \tag{29}$$
which implies (c). The last equality (d) follows from the assumption that $\{\gamma_{i,k}\}_{k\in\mathbb{N}}$ starts at its stationary distribution. From Equation (28), we conclude that:
$$E[\Lambda_k] = E[\Xi_k] \tag{30}$$
By Assumption 1, there exists a positive number $\tilde{\beta}_1$ such that:
$$\sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H C A^{-i} \leq \tilde{\beta}_1 I_n \tag{31}$$
Thus, it follows from the above analysis:
$$\Xi_k \geq \sum_{i=1}^{k+1} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \tilde{\beta}_1^{-1} \big(A^{-(k+1)}\big)^H \left( \sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H C A^{-i} \right) A^{-(k+1)} = \sum_{i=1}^{k+1} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \tilde{\beta}_1^{-1} \sum_{i=k+2}^{\infty} \big(A^{-i}\big)^H C^H C A^{-i} \geq \sum_{i=1}^{k+1} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \tilde{\beta}_1^{-1} \sum_{i=k+2}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} \geq \min\big(1, \tilde{\beta}_1^{-1}\big)\, \Gamma \tag{32}$$
Together with Equation (30) (more precisely, since Equation (28) holds for any measurable f, $\Lambda_k$ and $\Xi_k$ have the same distribution), we can prove:
$$E[\Lambda_k^{-1}] \leq \max\big(1, \tilde{\beta}_1\big)\, E[\Gamma^{-1}] \tag{33}$$
By letting $\beta_2 = \beta_1 \max(1, \tilde{\beta}_1)$ and considering Lemma 3, we complete the proof of the right-hand side.
Proof of the left-hand side: In view of Equation (23), it is obvious that:
$$\Xi_k \leq \sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)} \tag{34}$$
Thus, one has:
$$\sup_{k\in\mathbb{N}} E[\Lambda_k^{-1}] = \sup_{k\in\mathbb{N}} E[\Xi_k^{-1}] \geq \sup_{k\in\mathbb{N}} E\left[\Big(\sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)}\Big)^{-1}\right] \overset{(e)}{=} \lim_{k\to\infty} E\left[\Big(\sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)}\Big)^{-1}\right] \tag{35}$$
It is a simple matter to show that the expression inside the supremum on the right-hand side of Equation (35) is monotonically increasing with respect to k, which proves (e). Now, by the monotone convergence theorem, one obtains:
$$\lim_{k\to\infty} E\left[\Big(\sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)}\Big)^{-1}\right] = E\left[\lim_{k\to\infty}\Big(\sum_{i=1}^{\infty} \big(A^{-i}\big)^H C^H M_i C A^{-i} + \big(A^{-(k+1)}\big)^H A^{-(k+1)}\Big)^{-1}\right] = E[\Gamma^{-1}] \tag{36}$$
Combining Equations (35) and (36), one can easily show:
$$\sup_{k\in\mathbb{N}} E[\Lambda_k^{-1}] \geq E[\Gamma^{-1}] \tag{37}$$
Then, it follows from Lemma 3 that:
$$\sup_{k\in\mathbb{N}} E[P_k] \geq \alpha_1 \sup_{k\in\mathbb{N}} E[\Lambda_k^{-1}] \geq \alpha_1 E[\Gamma^{-1}] \tag{38}$$
Let α 2 = α 1 , which completes the proof. ☐
We are now in a position to establish the relationship between the stability of the error covariance matrix $P_k$ and the transition probabilities of the Markovian packet loss processes. Compared to the above theorem, the following result is restricted to non-degenerate systems. Hence, in the remaining part of the paper, we make the following assumption.
Assumption 4. 
The system $(C, A)$ is non-degenerate.
Lemma 5. 
Assume the system $(C, A)$ satisfies Assumptions 1, 2 and 4. Then, there exists a strictly positive constant $\beta_3$ such that:
$$\lim_{\Delta_1, \ldots, \Delta_n \to \infty} \Big( \sum_{j=1}^{n} \big(A^{-k_j}\big)^H C^H C A^{-k_j} \Big)^{-1} \prod_{j=1}^{n} |\lambda_j|^{-2\Delta_j} \leq \beta_3 I_n \tag{39}$$
where $\lambda_j$, $1 \leq j \leq n$, are the eigenvalues of A, ordered so that $|\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_n|$, $k_1 < k_2 < \cdots < k_n \in \mathbb{N}$, $\Delta_1 = k_1$ and $\Delta_j = k_j - k_{j-1}$ for all $j \in \{2, \ldots, n\}$.
Theorem 6. 
Assume that the system (1) and (3) is non-degenerate and satisfies Assumptions 1–3, and that the packet loss process is described by a time-homogeneous Markov chain with transition probability matrix (8). Then, a necessary and sufficient condition for $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ is $\rho(A)^2(1-q_1)(1-q_2) < 1$.
Proof. 
Sufficiency: Based on Lemma 5, it suffices to show that there exists a sufficiently large positive constant Δ such that, for all $\Delta_j > \Delta$, the following inequality holds:
$$\Big( \sum_{j=1}^{n} \big(A^{-k_j}\big)^H C^H C A^{-k_j} \Big)^{-1} \leq \beta_3 \prod_{j=1}^{n} |\lambda_j|^{2\Delta_j}\, I_n \tag{40}$$
We let $\tilde{C} = \min\{C_1^H C_1, C_2^H C_2\}$ and select $k_j$ as follows:
$$k_1 = \inf\{i > \Delta : S_i = \varsigma_R\}, \qquad k_j = \inf\{i > \Delta + k_{j-1} : S_i = \varsigma_R\}, \quad j \in \{2, \ldots, n\} \tag{41}$$
Obviously, it follows from Equations (24) and (40):
$$\Gamma^{-1} \leq \Big( \sum_{j=1}^{n} \big(A^{-k_j}\big)^H \tilde{C}^H \tilde{C} A^{-k_j} \Big)^{-1} \leq \beta_3 \prod_{j=1}^{n} |\lambda_j|^{2\Delta_j}\, I_n \tag{42}$$
Thus, one can conclude that $E\big[\prod_{j=1}^{n} |\lambda_j|^{2\Delta_j}\big] < \infty$ implies $E[\Gamma^{-1}] < \infty$.
Following a similar idea as in [9], we introduce stopping times as follows:
$$d_0 = 0, \qquad d_1 = \inf\{i \geq 1 : S_i = \varsigma_R\}, \qquad d_j = \inf\{i > d_{j-1} : S_i = \varsigma_R\}, \quad j \in \{2, \ldots, n\} \tag{43}$$
Moreover, the sojourn time $\tau_j$, which represents the time duration between two successive packet reception times, is defined as:
$$\tau_j = d_j - d_{j-1} \tag{44}$$
With regard to the definitions of $\Delta_j$ and $\tau_j$, one can find a sufficiently large positive constant ε such that:
$$E_{\varsigma_R}\left[\prod_{j=1}^{n} |\lambda_j|^{2\Delta_j}\right] \leq E_{\varsigma_R}\left[\prod_{j=1}^{n} |\lambda_j|^{2(\tau_j + \varepsilon)}\right] \overset{(f)}{=} E_{\varsigma_R}\left[\prod_{j=1}^{n} |\lambda_j|^{2(\tau_1 + \varepsilon)}\right] < \infty \tag{45}$$
where (f) is due to Lemma 1 in [12], and the last inequality follows from the assumption $\rho(A)^2(1-q_1)(1-q_2) < 1$.
Together with Equation (42), we now have $\sup_{k\in\mathbb{N}} E_{\varsigma_R}[P_k] < \infty$. The other case, with $S_0 = \varsigma_L$, can be treated in the same manner. Finally, considering Lemma 1, we complete the proof.
Necessity: It has been verified in Theorem 2. ☐
Corollary 7. 
Consider the same situation as in Theorem 6, except for the number of sensors. We assume that the system is equipped with a distributed array of sensors (one or more), and each sensor delivers its partial measurements through its own channel. Then, a necessary and sufficient condition for $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ is:
$$\rho(A)^2 (1-q_1)(1-q_2)\cdots(1-q_l) < 1 \tag{46}$$
where l is the number of sensors and satisfies $1 \leq l \leq m$.
Proof. 
This corollary can be proven along the same lines as the proof of Theorem 6. ☐
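As a quick numerical aid, this multi-channel criterion can be checked with a few lines (our sketch; the matrix and recovery rates are arbitrary examples).

```python
import numpy as np

def multi_sensor_condition(A, q):
    # Corollary 7: rho(A)^2 * (1 - q_1) * ... * (1 - q_l) < 1,
    # with one recovery rate q_i per sensor channel
    rho = max(abs(np.linalg.eigvals(A)))
    return rho ** 2 * np.prod(1.0 - np.asarray(q)) < 1

# Three channels with recovery rates 0.6, 0.5 and 0.4 (illustrative values)
print(multi_sensor_condition(np.diag([1.5, 1.2]), [0.6, 0.5, 0.4]))  # 2.25 * 0.12 = 0.27 -> True
```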
So far, we have obtained the necessary and sufficient stability condition for non-degenerate systems whose eigenvalues are all unstable. In the following, we generalize the result to systems with stable or marginally unstable eigenvalues by recalling [9].
Lemma 8. 
Consider that the system (1) and (3) satisfies Assumptions 2 and 3, and assume that A is a diagonal matrix of the form $A = \mathrm{diag}(A_1, A_2, A_3)$, where $A_1$, $A_2$ and $A_3$ are the unstable, marginally unstable and stable parts, respectively. If the matrix $C = [C_{r1}, C_{r2}, C_{r3}]$, then the critical value of the system can be bounded as:
$$\eta_c(A, C) \leq \lim_{\alpha \to 1^+} \eta_c\big(\mathrm{diag}(\alpha A_1, \alpha A_2),\, [C_{r1}, C_{r2}]\big) \tag{47}$$
where $C_{ri}$ is of appropriate dimensions and the critical value $\eta_c$ is interpreted as follows: if the recovery rate $\eta > \eta_c$, then $\sup_{k\in\mathbb{N}} E[P_k] < \infty$ holds for all initial conditions; otherwise, $E[P_k]$ is unbounded for some initial conditions.

4. Discussion on Different Stability Criteria

As is well known, packet losses are inevitable over the unreliable communication channels of a networked environment. Thus, compared to traditional filtering methods, estimation schemes for networked systems must cope with missing data. In most existing stability results for filtering over networks, the packet loss process is modeled as an i.i.d. Bernoulli process or as a Markov chain. In this section, we compare the proposed results with existing related results obtained under the i.i.d. Bernoulli or Markov assumptions.

4.1. Comparison with Stability Results under i.i.d. Packet Losses

Under the i.i.d. Bernoulli packet loss assumption, the problem of filtering stability has attracted increasing interest. Although technically challenging, it is possible to give an explicit characterization of the stability criteria for systems satisfying certain restrictive conditions, e.g., invertibility of the observation matrix [17,19] or on the observable subspace [22], or non-degeneracy of the pair $(C, A)$ [9].
Note that an i.i.d. process can be treated as a special case of a Markov chain. By letting the failure rate μ and recovery rate η in the transition probability matrix (8) satisfy $\mu = 1 - \eta$, the Markov chain reduces to an i.i.d. Bernoulli process with arrival probability $\theta = \eta$. Thus, the results given by Theorem 7 in [9] are recovered. In particular, for the case in which part or all of the observation measurements are lost in an i.i.d. Bernoulli fashion, [19] proposes the following stability result:
$$\rho(A)^2 (1-\theta_1)(1-\theta_2) < 1 \tag{48}$$
where $\theta_1$ and $\theta_2$ represent the packet arrival probabilities of the two channels. We would like to point out that the above result can also be interpreted as a special case of our stability criterion in Theorem 6: specifically, $\theta_1$ and $\theta_2$ correspond to $q_1$ and $q_2$, respectively, and the failure rates become $p_i = 1 - q_i$, $i \in \{1,2\}$.
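In code, this reduction amounts to setting $p_i = 1 - q_i$, which makes the two rows of each per-channel transition matrix identical, i.e., arrivals become Bernoulli with $\theta_i = q_i$. A brief sketch of the correspondence (ours; values are illustrative):

```python
import numpy as np

q1, q2 = 0.7, 0.6
# Setting p_i = 1 - q_i makes both rows of Equation (4) identical,
# so the channel forgets its previous state: arrivals are Bernoulli(theta_i = q_i)
Theta1 = np.array([[1 - q1, q1],
                   [1 - q1, q1]])
assert np.allclose(Theta1[0], Theta1[1])

# The Markov criterion of Theorem 6 then coincides with the i.i.d. criterion of [19]
A = np.diag([1.5, 1.2])
rho = max(abs(np.linalg.eigvals(A)))
print(rho ** 2 * (1 - q1) * (1 - q2) < 1)   # 2.25 * 0.3 * 0.4 = 0.27 -> True
```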

4.2. Comparison with Existing Stability Results under Markovian Packet Losses

To capture the temporal correlation of communication channels, we specialize the channel model to a Markov chain in this subsection. In contrast to the i.i.d. Bernoulli model, the approach based on the modified Riccati recursion is no longer feasible for Markovian packet losses, which makes the stability analysis of filtering more challenging.
In most existing results concerning the stability analysis of Kalman filtering, a single sensor is assumed, while in practical network-based systems, the various sensors and the filter are usually at different physical locations, so the single-sensor assumption may not hold. Based on the above analysis, [20] studies the stability of the Kalman filter with partial packet losses by using a Riccati recursion. However, the results in [20] are conservative, since the more restrictive condition that C is invertible is imposed on the system $(C, A)$. Obviously, as a special case of one-step observability, this invertibility condition requires that the matrix C has n rows, which implies that $y_k$ is at least an n-dimensional vector. In contrast, under our non-degeneracy assumption, C need only have d rows, where d is the dimension of the largest equi-block of the system; d is usually smaller than n in practical applications. In addition, we give a necessary and sufficient condition for the stability of the mean estimation error covariance matrix, rather than only a sufficient condition as in [20].

5. Numerical Examples

In this section, for the purpose of illustrating the results from the previous sections, we present some simple examples.
Example 1. 
Consider a second order diagonal system with parameters expressed by:
$$A = \mathrm{diag}(2.5, 1.5), \qquad C = I_2 \tag{49}$$
Our aim is to compare the stable and unstable regions determined by our stability criterion with those given by existing results. For this system, [19] shows that $q_1 > 1 - 2.5^{-2} = 0.84$ and $q_2 > 1 - 1.5^{-2} \approx 0.56$ (upper bound) are needed to guarantee the stability of the error covariance matrix; furthermore, the Kalman filter is unstable if the recovery rate pair $(q_1, q_2)$ satisfies $(1-q_1)(1-q_2) > 2.5^{-2} = 0.16$ (lower bound). As shown in Figure 3, the region above the upper bound (red line) is stable, while the region below the lower bound (blue dotted line) is unstable. However, it can be seen from Figure 3 that the stability of the region between the upper and lower bounds cannot be determined with the results of [19]. Recalling the stability result in [20], the filter is guaranteed to be stable whenever $(q_1, q_2)$ falls above the lower bound, while the stability of the region below the lower bound cannot be determined, since [20] only gives a sufficient condition. In this paper, we give a necessary and sufficient condition for stability: for this model, by Theorem 6, the region above the lower bound is stable; otherwise, it is unstable.
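For reference, the numerical bounds quoted above can be reproduced with a few lines (our sketch):

```python
import numpy as np

A = np.diag([2.5, 1.5])
lam = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]    # [2.5, 1.5]

# Per-channel sufficient bounds reported by [19]
print(1 - lam[0] ** -2, 1 - lam[1] ** -2)            # 0.84 and about 0.556

# Sharp threshold from Theorem 6 (and lower bound of [19]): (1-q1)(1-q2) < rho(A)**-2
print(lam[0] ** -2)                                   # 0.16
```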
Example 2. 
Let a higher-order system be expressed by:
$$A = \mathrm{diag}(1.5, 1.3, 1.3), \qquad C_1 = [1\;\; 0\;\; 1], \qquad C_2 = [1\;\; 1\;\; 0], \qquad C = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}, \qquad Q = 0.2 I_3, \qquad R = 0.2 I_2 \tag{50}$$
In order to guarantee stability, the recovery rate pair $(q_1, q_2)$ should satisfy $(1-q_1)(1-q_2) < 1.5^{-2} \approx 0.44$ by Theorem 6. It is easy to check that the parameter pair $(q_1, q_2) = (0.8, 0.9)$ ensures the stability of the error covariance matrix of the filter. Figure 4a shows the evolution of the error covariance along one sample path, and Figure 4b displays the associated state of the two channels jumping among $\varsigma_{00}$, $\varsigma_{01}$, $\varsigma_{10}$ and $\varsigma_{11}$, where for simplicity these states are denoted by the numbers 1, 2, 3 and 4, respectively. For comparison, we display a sample path with $(q_1, q_2) = (0.2, 0.2)$ in Figure 5a, with the associated channel state shown in Figure 5b. These two figures illustrate that, with a lower recovery rate pair, the error covariance is more likely to diverge.
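A sample path like the one in Figure 4 can be reproduced, up to randomness, with the helpers simulate_channel and riccati_step from the sketches in Section 2. The script below is ours; since the example does not specify the failure rates, it assumes $p_i = 1 - q_i$ for illustration.

```python
import numpy as np

A = np.diag([1.5, 1.3, 1.3])
C1 = np.array([[1.0, 0.0, 1.0]])
C2 = np.array([[1.0, 1.0, 0.0]])
Q = 0.2 * np.eye(3)
R11 = R22 = 0.2 * np.eye(1)

def sample_path(A, q1, q2, T=200, seed=0):
    """Trace of Tr(P_k) along one realization of the two lossy channels."""
    rng = np.random.default_rng(seed)
    p1, p2 = 1 - q1, 1 - q2               # assumed failure rates (not given in Example 2)
    g1 = simulate_channel(p1, q1, T, rng=rng)
    g2 = simulate_channel(p2, q2, T, rng=rng)
    P, trace = np.eye(A.shape[0]), []
    for k in range(T):
        P = riccati_step(P, g1[k], g2[k], A, C1, C2, R11, R22, Q)
        trace.append(np.trace(P))
    return np.array(trace)

print(sample_path(A, 0.8, 0.9)[-1])       # stays moderate for (q1, q2) = (0.8, 0.9)
print(sample_path(A, 0.2, 0.2)[-1])       # typically grows very large for (q1, q2) = (0.2, 0.2)
```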
Example 3. 
The results in Theorem 6 can be extended to systems with marginally unstable and/or stable eigenvalues by Lemma 8. Consider an example system specified by $A = \mathrm{diag}(1.5, 1, 0.5)$. The other parameters are chosen the same as in Example 2. It is easy to check that $(1-q_1)(1-q_2) < 0.44$ is sufficient to guarantee the stability of the error covariance matrix. Figure 6a shows a typical sample path with the recovery rate pair $(q_1, q_2) = (0.6, 0.8)$, while Figure 6b displays the associated channel state.
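The same sketch applies here by swapping in the new state matrix (again our illustration, reusing sample_path from the Example 2 sketch and assuming $p_i = 1 - q_i$):

```python
import numpy as np

A3 = np.diag([1.5, 1.0, 0.5])          # unstable, marginally unstable and stable modes
# By Lemma 8 and the discussion above, (1 - q1)(1 - q2) < 1.5**-2 is sufficient for stability
print(1.5 ** -2)                        # about 0.444
print(sample_path(A3, 0.6, 0.8)[-1])    # a bounded-looking path, as in Figure 6a
```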

6. Conclusions

In this paper, we have considered the problem of stability analysis for Kalman filtering in a multi-sensor environment. A necessary and sufficient condition is derived to ensure the stability of the filter with partial Markovian packet losses. This condition is more general than existing ones, since it only requires the system to be non-degenerate rather than one-step observable. Our results recover related results in the literature, such as the case in which the channel loss and non-loss states are described by an i.i.d. process and the scenario in which only one sensor is used. For future work, it is of interest to study stability conditions for distributed filtering systems with multiple mutually dependent communication channels.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities under Grant 2014QNB21 and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

Author Contributions

Shouwan Gao and Pengpeng Chen proved the stability criteria for Kalman filtering under multi-sensor scenario with partial packet losses. Shouwan Gao and Dan Huang wrote and revised the paper. Qiang Niu illustrated the theory results by simulation examples.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aubert, B.; Regnier, J.; Caux, S.; Alejo, D. Kalman-filter-based indicator for online interturn short circuits detection in permanent-magnet synchronous generators. IEEE Trans. Ind. Electron. 2015, 62, 1921–1930. [Google Scholar] [CrossRef]
  2. Cseko, L.H.; Kvasnica, M.; Lantos, B. Explicit MPC-based RBF neural network controller design with discrete-time actual Kalman filter for semiactive suspension. IEEE Trans. Control Syst. Technol. 2015, 23, 1736–1753. [Google Scholar] [CrossRef]
  3. Chen, P.P.; Ma, H.L.; Gao, S.W.; Huang, Y. Modified extended Kalman filtering for tracking with insufficient and intermittent observations. Math. Probl. Eng. 2015, 2015. [Google Scholar] [CrossRef]
  4. Nahi, N.E. Optimal recursive estimation with uncertain observation. IEEE Trans. Inf. Theory 1969, 15, 457–462. [Google Scholar] [CrossRef]
  5. Hadidi, M.T.; Schwartz, S.C. Linear recursive state estimators under uncertain observations. IEEE Trans. Autom. Control 1979, 24, 944–948. [Google Scholar] [CrossRef]
  6. Tugnait, J.K. Asymptotic stability of the MMSE linear filter for systems with uncertain observations. IEEE Trans. Autom. Control 1981, 27, 247–250. [Google Scholar] [CrossRef]
  7. Rezaei, H.; Esfanjani, R.M.; Sedaaghi, M.H. Improved robust finite-horizon Kalman filtering for uncertain networked time-varying systems. Inf. Sci. 2015, 293, 263–274. [Google Scholar] [CrossRef]
  8. Rezaei, H.; Esfanjani, R.M.; Farsi, M. Robust filtering for uncertain networked systems with randomly delayed and lost measurements. IET Signal Process. 2015, 9, 320–327. [Google Scholar] [CrossRef]
  9. Mo, Y.; Sinopoli, B. Towards Finding the Critical Value for Kalman Filtering with Intermittent Observations. Available online: http://arxiv.org/abs/1005.2442 (accessed on 19 February 2016).
  10. Zhang, H.S.; Song, X.M.; Shi, L. Convergence and mean square stability of suboptimal estimator for systems with measurement packet dropping. IEEE Trans. Autom. Control 2012, 57, 1248–1253. [Google Scholar] [CrossRef]
  11. Sui, T.; You, K.; Fu, M.; Marelli, D. Stability of MMSE state estimators over lossy networks using linear coding. Automatica 2015, 51, 167–174. [Google Scholar] [CrossRef]
  12. You, K.; Fu, M.; Xie, L. Mean square stability for Kalman filtering with Markovian packet losses. Automatica 2011, 47, 2647–2657. [Google Scholar] [CrossRef]
  13. You, K.; Xie, L. Minimum data rate for mean square stabilizability of linear systems with Markovian packet losses. IEEE Trans. Autom. Control 2011, 56, 772–785. [Google Scholar] [CrossRef]
  14. Mo, Y.; Sinopoli, B. Kalman filtering with intermittent observations: Tail distribution and critical value. IEEE Trans. Autom. Control 2012, 57, 677–689. [Google Scholar]
  15. Rohr, E.R.; Marelli, D.; Fu, M. Kalman filtering with intermittent observations: On the boundedness of the expected error covariance. IEEE Trans. Autom. Control 2014, 59, 2724–2738. [Google Scholar] [CrossRef]
  16. Wu, J.; Shi, L.; Xie, L.; Johansson, K.H. An improved stability condition for Kalman filtering with bounded Markovian packet losses. Automatica 2015, 62, 32–38. [Google Scholar] [CrossRef]
  17. Sinopoli, B.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464. [Google Scholar] [CrossRef]
  18. Huang, M.; Dey, S. Stability of Kalman filtering with Markovian packet losses. Automatica 2007, 43, 598–607. [Google Scholar] [CrossRef]
  19. Liu, X.; Goldsmith, A. Kalman filtering with partial observation losses. In Proceedings of the IEEE Conference on Decision and Control, Nassau, Bahamas, 14–17 December 2004; pp. 4180–4186.
  20. Wang, B.F.; Guo, G. Kalman filtering with partial Markovian packet losses. Int. J. Autom. Comput. 2009, 6, 395–400. [Google Scholar] [CrossRef]
  21. Sui, T.; You, K.; Fu, M. Stability conditions for multi-sensor state estimation over a lossy network. Automatica 2015, 53, 1–9. [Google Scholar] [CrossRef]
  22. Plarre, K.; Bullo, F. On Kalman filtering for detectable systems with intermittent observations. IEEE Trans. Autom. Control 2009, 54, 1–9. [Google Scholar] [CrossRef]
Figure 1. Distributed systems.
Figure 2. Diagram of a networked filtering system under distributed sensing.
Figure 3. Stable and unstable regions.
Figure 4. The error covariance matrix $P_k$ and channel state $S_k$ with $(q_1, q_2) = (0.8, 0.9)$: (a) the error covariance; (b) the associated channel state.
Figure 5. The error covariance matrix $P_k$ and channel state $S_k$ with $(q_1, q_2) = (0.2, 0.2)$: (a) the error covariance; (b) the associated channel state.
Figure 6. The error covariance matrix $P_k$ and channel state $S_k$ with $(q_1, q_2) = (0.6, 0.8)$: (a) the error covariance; (b) the associated channel state.
